I have published a Comet Daily article that describes how I benchmarked Jetty’s Cometd implementation of the dojox Bayeux protocol to 20,000 simultaneous clients.  This blog looks at the same results from a more Jetty-centric viewpoint.

The diagram above shows results of the benchmarking. These results are described in more detail in the article, but in summary they show that sub-second latency can be achieved even for 20,000 simultaneous clients – albeit with some reduced message throughput.

Things to note with regards to Jetty are:

  • 20,000 simultaneous Bayeux connections means 20,000 simultaneous TCP/IP connections, 20,000 outstanding HTTP requests and 20,000 Continuations.  These are good numbers for web-1.0 applications, let alone web-2.0 Ajax push applications.
  • To test this, I used the Jetty asynchronous HTTP client, which was also able to scale to 20,000 connections – a pretty amazing number!  Over 10,000 connections the client was a little brittle, but with a bit more work chasing lock starvation this should be resolvable.
  • It was not all roses.  Jetty itself proved a little brittle over 10,000 connections until I replaced the ThreadPool.  The problem was lock starvation: the thread pool had only a single lock, and it was simply in too much demand.  I have written a new QueuedThreadPool that can be dropped into existing Jetty deployments; it has 4 locks and a streamlined implementation, and it greatly improved scalability.  I will soon blog in detail about how it can be applied.
  • 20,000 – I just like saying that number.
  • The default jetty.xml has a lowResources limit set at 5000 connections.  That is 15,000 too low when you can handle 20,000!
  • There are some more performance/scalability improvements in the pipeline for Jetty 6.2 (trunk).
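The lock-starvation fix described in the QueuedThreadPool bullet above can be sketched roughly as follows. This is a hypothetical, self-contained illustration of the general technique – splitting one heavily contended dispatch lock across several internal queues – and is not Jetty’s actual QueuedThreadPool code; every class and method name here is invented for the example.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: instead of all threads contending for one lock,
// dispatch is striped across several queues, each with its own internal
// lock, so no single lock is touched by every thread.
class StripedPool {
    private final BlockingQueue<Runnable>[] queues;
    private final AtomicInteger next = new AtomicInteger();

    @SuppressWarnings("unchecked")
    StripedPool(int stripes, int workersPerStripe) {
        queues = new BlockingQueue[stripes];
        for (int i = 0; i < stripes; i++)
            queues[i] = new LinkedBlockingQueue<>();

        // Each worker thread drains exactly one stripe.
        for (int i = 0; i < stripes * workersPerStripe; i++) {
            final BlockingQueue<Runnable> q = queues[i % stripes];
            Thread worker = new Thread(() -> {
                try {
                    while (true)
                        q.take().run();
                } catch (InterruptedException e) {
                    // treated as shutdown in this sketch
                }
            });
            worker.setDaemon(true);
            worker.start();
        }
    }

    // Round-robin dispatch: each submit contends on only one stripe's lock.
    void execute(Runnable task) {
        queues[Math.floorMod(next.getAndIncrement(), queues.length)].offer(task);
    }
}
```

With 4 stripes, roughly a quarter of the submitting and worker threads meet on any given lock, which is the same contention-reduction idea behind the new thread pool described above.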

5 thoughts on “20,000 reasons why Jetty scales”

  • April 21, 2008 at 9:59 pm

    Hi,
         Would it be possible to take the Comet jars out of Jetty and make Comet work on any other web container?  That is, take the Comet support that works in Jetty and make it work in Tomcat or any other web container.

    Thx,
    Laks.

  • April 22, 2008 at 11:59 pm

    The cometd jars are portable and will run on other containers… they just run better on Jetty and scale better on Jetty.  That’s the whole point of the blog – Jetty has asynchronous features that scale really well.

    Note that Tomcat is working on its own Bayeux implementation.  Also, when Servlet 3.0 arrives, the Jetty features will be standardized and portability will improve.

  • February 26, 2009 at 4:10 am

    hi greg,
    i’m not a java expert, but i finally went through the cometd code and have a question about being able to handle 20,000 users.
    when you say “albeit with some reduced message throughput”, are you saying that you changed some settings to make this happen? if so, what are they? i need to be able to handle many users, but don’t care about the throughput.
    thanks,
    ted

  • February 26, 2009 at 11:29 am

    Ted,
    the main settings that need to be changed are giving lots of file descriptors to your operating system process (normally limited to 1024 per user) and allocating a good slab of heap to your JVM.
    Other tuning will depend on your individual application.
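    For example, on Linux the two knobs mentioned above might be set like this before starting Jetty – the exact descriptor limit, heap size and start command are placeholders, not recommendations, so adjust them for your own machine and deployment:

    ```shell
    # Raise the per-process file descriptor limit for this shell
    # (each simultaneous connection consumes a descriptor).
    ulimit -n 65536

    # Give the JVM a generous heap before launching Jetty.
    # The -Xmx value and jar path here are illustrative only.
    java -Xmx2g -jar start.jar
    ```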

  • February 26, 2009 at 7:05 pm

    hi greg,
    thanks for the reply. i actually found the stress test page that explains what you said in more detail.
    ted

Comments are closed.