In response to the recent discussion of push vs. pull Ajax performance, I decided to do some performance testing of the Jetty implementation of Bayeux for the cometd project. This was also a great way to test the asynchronous HTTP client that is now included with Jetty.
The test scenario was simple: 1000, 2000, 5000 and 10,000 users simultaneously connect to the cometd server and subscribe to a chat room. We publish a fixed number of messages to the rooms and vary the number of users per room while measuring message throughput and latency. The software was written using dojo-0.9 for the cometd client and Jetty 6.1.5 for the server. A long-polling transport was used, which means that each client has an outstanding request parked in the server waiting for an event, so that a response may be sent to the client as soon as there is a message to deliver.
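To illustrate the long-polling idea described above, here is a toy sketch (the class and method names are invented for illustration; this is not the cometd API): the server effectively parks each poll until an event is published or a timeout expires, then answers immediately.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Toy model of a long poll: the "server" holds the client's request
// (a blocking poll with timeout) until a message is published.
public class LongPollSketch {
    private final BlockingQueue<String> events = new LinkedBlockingQueue<String>();

    // Server side: a publish wakes up the parked poll immediately.
    public void publish(String msg) {
        events.offer(msg);
    }

    // Client side: the outstanding request blocks here; it returns as
    // soon as an event arrives, or null when the poll times out.
    public String poll(long timeoutMillis) {
        try {
            return events.poll(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    public static void main(String[] args) {
        LongPollSketch room = new LongPollSketch();
        room.publish("hello");
        System.out.println(room.poll(100)); // event already queued: returns at once
        System.out.println(room.poll(100)); // no event: waits ~100ms, prints "null"
    }
}
```

The key point the sketch captures is that an idle client costs a parked request, not a busy thread polling the server.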
The server machine was an Intel 1.83GHz Core Duo Mac mini with 1GB of RAM running OS X. The client machine was an Intel 2GHz Centrino Duo ThinkPad with 512MB of RAM running Ubuntu Linux. Sun's Java 1.5.11 was used on both. These are pretty small machines to be testing 10,000 simultaneous users, but they managed well enough to draw some conclusions. The crucial configuration for the server was:

<Set name="ThreadPool">
  <New class="org.mortbay.thread.BoundedThreadPool">
    <Set name="minThreads">10</Set>
    <Set name="maxThreads">250</Set>
    <Set name="lowThreads">25</Set>
  </New>
</Set>
<Call name="addConnector">
  <Arg>
    <New class="org.mortbay.jetty.nio.SelectChannelConnector">
      <Set name="port">8080</Set>
      <Set name="maxIdleTime">240000</Set>
      <Set name="Acceptors">2</Set>
      <Set name="acceptQueueSize">1000</Set>
      <Set name="lowResourcesConnections">11000</Set>
      <Set name="lowResourcesMaxIdleTime">1000</Set>
    </New>
  </Arg>
</Call>

The Jetty asynchronous HTTP client is based on the same NIO technology as the server, but instead of parsing requests and generating responses, it generates requests and parses responses. Thus the latency due to NIO scheduling is measured twice in this test: once for the server and once for the client. In reality, the 10,000 clients would be running on 10,000 machines, each using blocking IO for the 1 or 2 TCP/IP connections it maintains to the server. So the actual results for Bayeux can be expected to be better (perhaps significantly) than the numbers here:
Results
[Chart: message latency vs. messages delivered per second, for 1000 to 10,000 simultaneous users]
The results show that large numbers of simultaneous users can indeed be handled with low latency. It must be remembered that each connected user has 1 or 2 TCP/IP connections and at least 1 outstanding HTTP request. With a non-NIO, non-Continuation based server, this would require around 11,000 threads to handle 10,000 simultaneous users. Jetty handles this number of connections with only 250 threads.
Below 1000 message deliveries per second, the average latency is small and almost constant for 1000, 2000 and 5000 users, but for 10,000 users the latency starts creeping up to a few hundred milliseconds, which is still highly interactive and sufficient for chat, collaborative editing and many games.
Above 1000 messages per second, the latency starts to suffer, and at 3000 messages per second it reaches 1.5 seconds for 1000 users and 7 seconds for 5000 users. For fewer than 10,000 users, this degradation can be described as graceful, and is a reflection of the Bayeux protocol's ability to batch more messages when under duress in a classic latency vs. throughput tradeoff.
Above all, it must be recognized that all results in this test have superior latency to the pull solutions referred to in the link above. To achieve a 1 second average latency, a pull solution would need to poll every 2 seconds, generating 5000 requests per second for a server with 10,000 idle users!
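The arithmetic behind that comparison can be sketched as follows (a back-of-the-envelope calculation, not part of the test code): with a fixed poll interval, a message waits on average half the interval before being picked up, and every connected user generates one request per interval whether or not there is anything to deliver.

```java
// Back-of-the-envelope cost of a pull (polling) solution.
public class PollCost {
    // Average latency: a message arrives at a random point in the
    // poll interval, so it waits interval/2 on average.
    static double avgLatencySeconds(double pollIntervalSeconds) {
        return pollIntervalSeconds / 2.0;
    }

    // Request rate: every user polls once per interval, idle or not.
    static double requestsPerSecond(int users, double pollIntervalSeconds) {
        return users / pollIntervalSeconds;
    }

    public static void main(String[] args) {
        double interval = 2.0; // poll every 2 seconds
        System.out.println(avgLatencySeconds(interval));        // 1.0 second average latency
        System.out.println(requestsPerSecond(10000, interval)); // 5000.0 requests per second
    }
}
```

This is the tradeoff in a nutshell: the pull request rate scales with the user count even when nothing is happening, while the push solution above only does work when there are messages to deliver.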
The conclusion from these tests is that Bayeux + Jetty + Continuations does indeed provide scalable, low-latency communications for Ajax applications.



12 Comments

Anonymous · 23/07/2007 at 19:51

Awesome!

David Jerry · 24/07/2007 at 13:10

Interested to know if there is a comparison with WebLogic?

Anonymous · 25/07/2007 at 15:58

This is great news.
Greg, what tools are you using to simulate so many clients and to get results? Right now I'm limited to Java and HtmlUnit to check the status of an updated page, but it would be nice to have a better testing approach. Please let me know.
Cheers,

Dominique

Anonymous · 06/08/2007 at 00:04

Have you seen the yield stuff on http://chaoticjava.com ?

It is yield for iterators implemented using bytecode manipulation. It seems very clever, and possibly useful for asynchronous HTTP stuff.

Anonymous · 15/10/2007 at 22:33

Very curious … can you clarify how you simulated clients in this test?  The posting says, "The software was written using dojo-0.9 for the cometd client…", then later it says, "…asynchronous http client is based on the same NIO technology as the server…".   

Also, what is meant by message throughput? For example, if you have 3000 connected users and send one chat message per second, this will result in 3000 messages per second (one to each connected user). In your results, would you call this 3000 messages per second, or one message per second?

Greg Wilkins · 17/10/2007 at 04:29

The normal Bayeux client is written in JavaScript as part of dojo. For testing purposes, we wrote a Bayeux client in Java so that we could test many, many clients without running 10k browsers.

The messages per second measured are how many messages are delivered to the clients.

So if we have 20 users per room and we publish 4 messages per second, that will result in 80 messages per second in this chart.
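[Editor's note: that counting rule can be made concrete with a one-line sketch (illustrative only, not the benchmark code): delivered messages per second is the publish rate multiplied by the number of subscribers in the room.]

```java
// How "messages per second" is counted in the charts: each publish
// fans out to one delivery per subscriber in the room.
public class FanOut {
    static int deliveriesPerSecond(int usersPerRoom, int publishesPerSecond) {
        return usersPerRoom * publishesPerSecond;
    }

    public static void main(String[] args) {
        // 20 users per room, 4 publishes per second -> 80 deliveries per second
        System.out.println(deliveriesPerSecond(20, 4)); // prints 80
    }
}
```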

Chandra · 17/11/2007 at 16:12

thank you for great stats.

I am pretty new to comet-based technologies. Is it possible to deploy a meebo-like chat server using Jetty and Comet? Can we run multiple Jetty instances on different machines to handle heavy loads? Can Jetty scale linearly by adding more Jetty instance servers?

thank you.

Graham Barker · 05/12/2007 at 10:18

Hi Greg, this looks like pretty good stuff. I’m running some performance tests on comet/jetty myself. Would it be possible to get the source code you used for the above test? It would be greatly appreciated. I am using a higher-spec machine than that mentioned above, but will be looking to have 15k simultaneous connections (with a desired goal of 45k, although admittedly this is a while away). Any help or suggested tests would be very good. Great blog by the way. Cheers, Graham

Greg Wilkins · 06/12/2007 at 04:17

Graham,

all the code is part of jetty and is under contrib/cometd/demo/src/test/java/…

Anonymous · 02/04/2008 at 19:54

Good work. What was the message size used during these benchmarks?

Bogdan Maxim · 10/05/2010 at 07:25

How did you set up the test?
I’ve tried to set one up myself (for an open source project, aspComet), but couldn’t find a suitable Bayeux client.

Greg Wilkins · 10/05/2010 at 07:37

Bogdan, look at http://cometd.org
That project has both client and server implementations of Bayeux in several languages.
