One of my pet peeves is misleading benchmarks, as discussed in my Lies, Damned Lies and Benchmarks blog.  Recently there has been a bit of interest in Vert.x, some of it resulting from apparently good benchmark results against node.js. The author gave a disclaimer that the tests were non-rigorous and just for fun, but they have already led some people to ask if Jetty can scale like Vert.x.

I know absolutely nothing about Vert.x, but I do know that their benchmark is next to useless for demonstrating any kind of server scalability.  So I’d like to analyse their benchmarks and compare them with how we benchmark jetty/cometd, to give some understanding of how benchmarks should be designed and interpreted.

The benchmark

The vert.x benchmark uses 6 clients, each with 10 connections, each sending up to 2000 pipelined HTTP requests for a trivial 200 OK or tiny static file. The tests were run for a minute and the average request rate was taken. So let’s break this down:

6 Clients of 10 connections!

However you look at this (6 users each with a browser with 10 connections, or 60 individual users), neither 6 nor 60 users represents any significant scale.  We benchmark jetty/cometd with 10,000 to 200,000 connections and have production sites that run with similar numbers.

Testing 60 connections does not tell you anything about scalability. So why do so many benchmarks get performed on low numbers of connections?  It’s because it is really really hard to generate realistic load for hundreds of thousands of connections.  To do so, we use the jetty asynchronous HTTP client, which has been designed specifically for this purpose, and we still need to use multiple load generating machines to achieve high numbers of connections.
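To make that concrete, here is a minimal sketch of the shape of load we aim for: many connections, each sending only a handful of requests with pauses in between. It uses the JDK’s java.net.http.HttpClient purely for illustration (not the jetty asynchronous client we actually use), and the target URL, user count and pacing are made-up values:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

public class ManyConnectionsLoad
{
    public static void main(String[] args) throws Exception
    {
        URI uri = URI.create("http://localhost:8080/");   // hypothetical target
        int users = 20_000;                                // many connections ...
        int requestsPerUser = 6;                           // ... each sending only a few requests

        // One client per simulated user, so each keeps its own connection
        // rather than funnelling everything down a handful of sockets.
        List<HttpClient> clients = new ArrayList<>();
        for (int i = 0; i < users; i++)
            clients.add(HttpClient.newHttpClient());

        HttpRequest request = HttpRequest.newBuilder(uri).GET().build();
        for (int round = 0; round < requestsPerUser; round++)
        {
            for (HttpClient client : clients)
                client.sendAsync(request, HttpResponse.BodyHandlers.discarding());

            // Pause between rounds: the connections sit idle, which is exactly
            // the condition that makes scaling hard for the server.
            TimeUnit.SECONDS.sleep(1);
        }
    }
}

Even a naive sketch like this shows why multiple load-generating machines are needed: one JVM cannot realistically hold tens of thousands of client connections without itself becoming the bottleneck.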

2000 pipelined requests!

Really? HTTP pipelining is not turned on by default in most web browsers, and even if it were, I cannot think of any realistic application that would generate 2000 requests in a pipeline. Why is this important?  Because with pipelined requests a server that does:

byte[] buffer = new byte[8192];
int filled = socket.getInputStream().read(buffer); // a single read may return many pipelined requests

will read many requests into that buffer in a single read.  A trivial HTTP request is a few tens of bytes (and I’m guessing they didn’t send any of the verbose complex headers that real browsers do), so the vert.x benchmark would be reading 30 or more requests on each read.  Thus this benchmark is not really testing any IO performance, but simply how fast they can iterate over a buffer and parse simple requests. At best it is telling you about the latency of their parsing and request handling.
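For illustration, here is a minimal sketch of the client side of such a pipelined test (the host, port and request are made up): many small requests are written back to back on one socket, so the server’s very first read() finds a buffer packed with complete requests and no network waits in between.

import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class PipelinedClient
{
    public static void main(String[] args) throws Exception
    {
        // A trivial request of a few tens of bytes, with none of the
        // verbose headers a real browser would send.
        byte[] request = "GET / HTTP/1.1\r\nHost: localhost\r\n\r\n"
                .getBytes(StandardCharsets.US_ASCII);

        try (Socket socket = new Socket("localhost", 8080))   // hypothetical server
        {
            OutputStream out = socket.getOutputStream();
            // Write 2000 requests before reading anything: every 8KB buffer
            // the server reads will contain dozens of complete requests.
            for (int i = 0; i < 2000; i++)
                out.write(request);
            out.flush();
            // (a real test would also read the responses back here)
        }
    }
}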

Handling reads is not the hard part of scaling IO; handling the idle pauses between the reads is.  Almost all real load profiles have these idle periods, and they require the server to allocate resources carefully so that idle connections do not consume resources that could be better used by non-idle connections.  2000 connections each with 6 pipelined requests would be more realistic, or better yet 20000 connections each sending 6 requests with 10ms delays between them.

Trivial 200 OK or Tiny static resource

Creating a scalable server for non-trivial applications is all about ensuring that maximal resources are applied to performing real business logic in preparing dynamic responses.   If all the responses are trivial or static, then the server is free to be more wasteful.  Worse still for realistic benchmarks, trivial response generation can probably be in-lined by the hotspot compiler in a way that no real application ever could be.

Run for a minute

A minute is insufficient time for a JVM to reach steady state.  For the first few minutes of a run the Hotspot JIT compiler will be using CPU to analyse and compile code. A trivial application might be fully hotspot compiled within a minute, but any reasonably complex server/application is going to take much longer.  Try watching your application with jvisualvm and watch the perm generation continue to grow for many minutes while more and more classes are loaded and compiled. Only after the JVM has warmed up your application and CPU is no longer being used to compile can any meaningful results be obtained.

The other big performance killer is full garbage collections, which can stop the entire VM for many seconds.  Running fast for 60 seconds does not do you much good if a second later you pause for 10s while collecting the garbage from those fast 60 seconds.

Benchmark results need to be reported for steady state over longer periods of time, and you need to consider GC performance.  The jetty/cometd benchmark tool specifically measures and reports both JIT and GC activity during the benchmark runs, and we can perform many benchmark runs in the same JVM.  Below is example output showing that for a 30s run some JIT was still performed, so the VM was not yet fully warmed up:

Statistics Started at Mon Jun 21 15:50:58 UTC 2010
Operative System: Linux 2.6.32-305-ec2 amd64
JVM : Sun Microsystems Inc. Java HotSpot(TM) 64-Bit Server
VM runtime 16.3-b01 1.6.0_20-b02
Processors: 2
System Memory: 93.82409% used of 7.5002174 GiB
Used Heap Size: 2453.7236 MiB
Max Heap Size: 5895.0 MiB
Young Generation Heap Size: 2823.0 MiB
- - - - - - - - - - - - - - - - - - - -
Testing 2500 clients in 100 rooms
Sending 3000 batches of 1x50B messages every 8000µs
- - - - - - - - - - - - - - - - - - - -
Statistics Ended at Mon Jun 21 15:51:29 UTC 2010
Elapsed time: 30164 ms
        Time in JIT compilation: 12 ms
        Time in Young Generation GC: 0 ms (0 collections)
        Time in Old Generation GC: 0 ms (0 collections)
Garbage Generated in Young Generation: 1848.7974 MiB
Garbage Generated in Survivor Generation: 0.0 MiB
Garbage Generated in Old Generation: 0.0 MiB
Average CPU Load: 109.96191/200
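
For reference, figures like the JIT and GC times above can be read from the JVM’s standard management beans. The following is only a rough sketch of the idea, not the actual jetty/cometd benchmark tool, and runBenchmark() is a placeholder for the real load run:

import java.lang.management.CompilationMXBean;
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class JvmRunStats
{
    public static void main(String[] args)
    {
        CompilationMXBean jit = ManagementFactory.getCompilationMXBean();
        long jitBefore = jit.getTotalCompilationTime();
        long gcTimeBefore = gcTime();
        long gcCountBefore = gcCount();

        runBenchmark();   // placeholder: drive the actual load here

        System.out.printf("Time in JIT compilation: %d ms%n",
                jit.getTotalCompilationTime() - jitBefore);
        System.out.printf("Time in GC: %d ms (%d collections)%n",
                gcTime() - gcTimeBefore, gcCount() - gcCountBefore);
        // Non-zero JIT time means the VM was still warming up during the run,
        // so these are not yet steady-state numbers.
    }

    private static long gcTime()
    {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans())
            total += gc.getCollectionTime();
        return total;
    }

    private static long gcCount()
    {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans())
            total += gc.getCollectionCount();
        return total;
    }

    private static void runBenchmark()
    {
        // hypothetical benchmark run
    }
}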

Conclusion

I’m sure the vert.x guys had every good intention when doing their micro-benchmark, and it may well be that vert.x scales really well.  However, I wish that when developers consider benchmarking servers, instead of thinking “let’s send a lot of requests at it”, their first thought was “let’s open a lot of connections to it”.  Better yet, a benchmark (micro or otherwise) should be modelled on some real application and the load that it might generate.

The jetty/cometd benchmark is of a real chat application that really works and has real features like member lists, private messages, etc.  Thus the results we achieve in benchmarks can be reproduced by real applications in production.
