I have published a Comet Daily article that describes how I benchmarked Jetty's Cometd implementation of the dojox Bayeux protocol to 20,000 simultaneous clients. This blog looks at the same results from a more Jetty-centric viewpoint.
The diagram above shows results of the benchmarking. These results are described in more detail in the article, but in summary they show that sub-second latency can be achieved even for 20,000 simultaneous clients – albeit with some reduced message throughput.
Things to note with regards to Jetty are:
- 20,000 simultaneous Bayeux connections means 20,000 simultaneous TCP/IP connections, 20,000 outstanding HTTP requests and 20,000 Continuations. These are good numbers for web-1.0 applications, let alone web-2.0 Ajax push applications.
- To test this, I used the Jetty asynchronous HTTP client, which was also able to scale to 20,000 connections – a pretty amazing number! Over 10,000 connections the client was a little brittle, but with a bit more work chasing lock starvation this should be resolvable.
- It was not all roses. Jetty itself proved a little brittle over 10,000 connections until I replaced the ThreadPool. The problem was lock starvation: the thread pool had only a single lock, and it was simply in too much demand. I have written a new QueuedThreadPool, which can be dropped into existing Jetty deployments, that uses 4 locks and a streamlined implementation. It greatly improved scalability. I will soon blog in detail about how this can be applied.
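The idea behind splitting one hot lock into several is plain lock striping. Here is a toy sketch of the technique (the class and method names are illustrative only, not Jetty's actual QueuedThreadPool internals): work is spread across N independently locked queues, so concurrent threads usually contend on different monitors instead of all queueing on one.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative lock-striping sketch, not Jetty code.
public class StripedQueue<T> {
    private final Queue<T>[] stripes;

    @SuppressWarnings("unchecked")
    public StripedQueue(int n) {
        stripes = new Queue[n];
        for (int i = 0; i < n; i++) {
            stripes[i] = new ArrayDeque<T>();
        }
    }

    // Pick a stripe from the calling thread's id, so different
    // threads usually lock different queues.
    private Queue<T> stripe() {
        return stripes[(int) (Thread.currentThread().getId() % stripes.length)];
    }

    public void offer(T item) {
        Queue<T> q = stripe();
        synchronized (q) {
            q.add(item);
        }
    }

    // Consumers scan all stripes, holding each lock only briefly.
    public T poll() {
        for (Queue<T> q : stripes) {
            synchronized (q) {
                T t = q.poll();
                if (t != null) return t;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        StripedQueue<Integer> sq = new StripedQueue<Integer>(4);
        for (int i = 0; i < 10; i++) sq.offer(i);
        int sum = 0;
        Integer v;
        while ((v = sq.poll()) != null) sum += v;
        System.out.println(sum); // 0+1+...+9 = 45
    }
}
```

The trade-off is that consumers may scan several stripes, but each critical section is tiny, so under heavy contention total throughput improves even though single-threaded cost rises slightly.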
- 20,000 – I just like saying that number.
- The default jetty.xml has a lowResources limit set at 5000 connections. That is 15,000 too low when you can handle 20,000!
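For reference, a sketch of the relevant connector settings in jetty.xml – the exact property names and values here are from my memory of the Jetty 6 defaults, so check them against the jetty.xml shipped with your release:

```xml
<New class="org.mortbay.jetty.nio.SelectChannelConnector">
  <Set name="port">8080</Set>
  <!-- default is 5000; raise it if you expect far more connections -->
  <Set name="lowResourcesConnections">20000</Set>
  <Set name="lowResourcesMaxIdleTime">1500</Set>
</New>
```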
- There are some more performance/scalability improvements in the pipeline for Jetty 6.2 (trunk).