With the imminent release of cometd-2.0.0, it’s time to publish some of our own lies, damned lies and benchmarks. It has been over 2 years since we published the 20,000 reasons that cometd scales, and in that time we have completely reworked both the client side and the server side of cometd, plus we have moved to Jetty 7.1.4 from Eclipse as the main web server for cometd.

Cometd is a publish/subscribe framework that delivers events from an HTTP server to the browser via comet server push techniques. Cometd-1 was developed in parallel with many of the ideas and techniques of comet itself, so the code base reflected some since-changed ideas and old thinking, and was in need of a cleanup. Cometd-2 is a total redevelopment of all parts of the Java and JavaScript codebase and provides:

  • Improved Java API for both client and server side interaction (see the sketch after this list).
  • Improved concurrency in the server and client code base.
  • Fully pluggable transports.
  • Support for a websocket transport (that works with the latest Chromium browsers).
  • Improved extensions.
  • More comprehensive testing and examples.
  • More graceful degradation under extreme load.
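
To give a flavour of the improved Java API, here is a minimal sketch of a CometD 2 client that handshakes over HTTP long polling, subscribes to a chat channel and publishes a message. The URL and channel name are placeholders, and the code is illustrative rather than taken from the benchmark:

import java.util.Collections;

import org.cometd.bayeux.Message;
import org.cometd.bayeux.client.ClientSessionChannel;
import org.cometd.client.BayeuxClient;
import org.cometd.client.transport.LongPollingTransport;
import org.eclipse.jetty.client.HttpClient;

public class ChatClient
{
    public static void main(String[] args) throws Exception
    {
        // Jetty HTTP client used by the long polling transport
        HttpClient httpClient = new HttpClient();
        httpClient.start();

        // "http://localhost:8080/cometd" is a placeholder for your cometd servlet URL
        BayeuxClient client = new BayeuxClient("http://localhost:8080/cometd",
                LongPollingTransport.create(null, httpClient));

        client.handshake();
        client.waitFor(5000, BayeuxClient.State.CONNECTED);

        // Subscribe to a chat room channel and print incoming messages
        client.getChannel("/chat/room1").subscribe(new ClientSessionChannel.MessageListener()
        {
            public void onMessage(ClientSessionChannel channel, Message message)
            {
                System.err.println("Received: " + message.getData());
            }
        });

        // Publish a small message to the same channel
        client.getChannel("/chat/room1").publish(Collections.singletonMap("chat", "hello"));

        Thread.sleep(1000);
        client.disconnect();
        httpClient.stop();
    }
}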

The result has been a dramatic increase in throughput while maintaining sub-second latencies and great scalability.

The chart above shows the preliminary results of recent benchmarking carried out by Simone Bordet on a 100-room chat server. The test was done on Amazon EC2 nodes with 2 x amd64 CPUs and 8GB of memory, running Ubuntu Linux (kernel 2.6.32) with Sun’s 1.6.0_20-b02 JVM. Simone did some tuning of the Java heap and garbage collector, but the operating system was not customized other than to increase the file descriptor limits. The test used the HTTP long polling transport. A single server machine was used, and 4 identical machines generated the load using the cometd Java client that is bundled with the cometd release.

It is worth remembering that the latencies and throughput measured include the time spent in the client load generators, each of which runs the full HTTP/cometd stack for many thousands of clients, whereas in a real deployment each client would have its own computer and browser. It is also noteworthy that the server is not just a dedicated comet server, but the fully featured Jetty Java Servlet container, and the cometd messages are handled within the rich application context it provides.
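
As a rough illustration of what handling cometd messages within that application context can look like, below is a sketch of a server-side chat service in the style of the CometD 2 inherited services API. This is not the code used in the benchmark; the channel names and message fields are assumptions made for the example:

import java.util.Map;

import org.cometd.bayeux.server.BayeuxServer;
import org.cometd.bayeux.server.ServerSession;
import org.cometd.server.AbstractService;

// A simple chat service that runs inside the Jetty servlet container,
// alongside any other servlets and application code.
public class ChatService extends AbstractService
{
    public ChatService(BayeuxServer bayeux)
    {
        super(bayeux, "chat");
        // Route messages published to /service/chat to the processChat method
        addService("/service/chat", "processChat");
    }

    public void processChat(ServerSession remote, Map<String, Object> data)
    {
        // "room" is an illustrative field name, not the benchmark's message format
        String room = "/chat/" + data.get("room");
        getBayeux().createIfAbsent(room);
        // Broadcast the message to every client subscribed to the room
        getBayeux().getChannel(room).publish(getServerSession(), data, null);
    }
}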

It can be seen from the chart above that the message rate has been significantly improved from the 3,800/s achieved in 2008. All scenarios tested were able to achieve 10,000 messages per second with excellent latency. Only with 20,000 clients did the average latency start to climb rapidly once the message rate exceeded 8,000/s. The top average server CPU usage was 140% (of the 200% available from the two CPUs), and for the most part latencies were under 100ms over the Amazon network, which indicates that this server has some additional capacity available. Our experience of cometd in the wild indicates that you can expect another 50 to 200ms of network latency when crossing the public internet, but due to the asynchronous design of cometd, the extra latency does not reduce throughput.

Below is an example of the raw output of one of the 4 load generators. It shows some of the capabilities of the Java cometd client, which can be used to develop load generators specific to your own application:

Statistics Started at Mon Jun 21 15:50:58 UTC 2010
Operative System: Linux 2.6.32-305-ec2 amd64
JVM : Sun Microsystems Inc. Java HotSpot(TM) 64-Bit Server VM runtime 16.3-b01 1.6.0_20-b02
Processors: 2
System Memory: 93.82409% used of 7.5002174 GiB
Used Heap Size: 2453.7236 MiB
Max Heap Size: 5895.0 MiB
Young Generation Heap Size: 2823.0 MiB
- - - - - - - - - - - - - - - - - - - -
Testing 2500 clients in 100 rooms
Sending 3000 batches of 1x50B messages every 8000
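
For comparison, a basic load generator along the same lines can be written against the public BayeuxClient API. The sketch below is only an outline of the idea, not Simone's actual tool: it creates a number of clients, subscribes each one to a room and publishes a small batch of messages, with all counts, sizes, URLs and channel names being placeholder values:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.cometd.bayeux.Message;
import org.cometd.bayeux.client.ClientSessionChannel;
import org.cometd.client.BayeuxClient;
import org.cometd.client.transport.LongPollingTransport;
import org.eclipse.jetty.client.HttpClient;

public class SimpleLoadGenerator
{
    public static void main(String[] args) throws Exception
    {
        int clients = 100;                            // placeholder: the real test used thousands
        int rooms = 10;                               // placeholder: the real test used 100 rooms
        String url = "http://localhost:8080/cometd";  // placeholder URL

        HttpClient httpClient = new HttpClient();
        httpClient.setMaxConnectionsPerAddress(clients);
        httpClient.start();

        List<BayeuxClient> bayeuxClients = new ArrayList<BayeuxClient>();
        for (int i = 0; i < clients; ++i)
        {
            BayeuxClient client = new BayeuxClient(url, LongPollingTransport.create(null, httpClient));
            client.handshake();
            client.waitFor(5000, BayeuxClient.State.CONNECTED);
            // Each client joins one room; arrival timestamps could be compared
            // against a timestamp in the message data to measure latency.
            client.getChannel("/chat/room" + (i % rooms)).subscribe(new ClientSessionChannel.MessageListener()
            {
                public void onMessage(ClientSessionChannel channel, Message message)
                {
                }
            });
            bayeuxClients.add(client);
        }

        // Publish a batch of small messages from the first client;
        // batching queues the messages and sends them together.
        final BayeuxClient publisher = bayeuxClients.get(0);
        publisher.batch(new Runnable()
        {
            public void run()
            {
                for (int i = 0; i < 10; ++i)
                    publisher.getChannel("/chat/room0").publish(Collections.singletonMap("chat", "x"));
            }
        });

        Thread.sleep(2000);
        for (BayeuxClient client : bayeuxClients)
            client.disconnect();
        httpClient.stop();
    }
}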
 

2 Comments

Makoto · 07/07/2010 at 04:32

Hi, this is very impressive.
Do you have any machine resource usage stats (e.g. top, vmstat, netstat)? You mention that CPU usage is not high, but I am wondering how it affects other figures such as memory consumption and load average.
I am also looking forward to the websocket benchmark.

Raman Gupta · 07/07/2010 at 16:50

Using Jetty 7.1.4 and CometD 1.1.1 on one of the Amazon EC2 c1.xlarge boxes (8 CPUs, 7GB memory), I was able to achieve a latency of less than 190ms (99th percentile of 454ms) with a throughput of 133,000 msgs/s (with SSL turned on).
The clients were split across 4 other c1.xlarge boxes. I am looking forward to the CometD-2 release!
