The Jetty Project has just released the Jetty Load Generator, a Java 11+ library to load-test any HTTP server, with support for both HTTP/1.1 and HTTP/2.
The project was born in 2016, with specific requirements. At the time, very few load-test tools supported HTTP/2, but Jetty’s HttpClient did. Furthermore, few tools supported web page-like resources, which were important to model in order to compare the multiplexed HTTP/2 behavior (up to ~100 concurrent HTTP/2 streams on a single connection) against the HTTP/1.1 behavior (6-8 connections per domain). Lastly, we were more interested in measuring quality of service than throughput.
The Jetty Load Generator generates requests asynchronously, at a specified rate, independently of the responses. This is the Jetty Load Generator’s core design principle: we wanted the request generation to be constant, and the response times to be measured independently of the request generation. In this way, the Jetty Load Generator can impose a specific load on the server, independently of network round-trip times and of server-side processing times. Adding more load generators (on the same machine if it has spare capacity, or on additional machines) allows the load on the server to increase linearly.
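To illustrate this open-model principle (this is a sketch, not the Jetty Load Generator’s actual implementation), the following Java 11+ snippet fires requests at a fixed rate of 20 requests/s using the JDK’s java.net.http client against a placeholder your_server host, and records response times whenever responses arrive, without ever blocking the request generation:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class OpenModelSketch {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://your_server/")).build();
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // 20 requests/s: send a request every 50 ms, never waiting for responses.
        scheduler.scheduleAtFixedRate(() -> {
            long start = System.nanoTime();
            // The send is asynchronous; the response time is recorded
            // when the response arrives, on a different thread.
            client.sendAsync(request, HttpResponse.BodyHandlers.discarding())
                    .thenRun(() -> System.out.printf("response time: %d ms%n",
                            TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start)));
        }, 0, 50, TimeUnit.MILLISECONDS);
    }
}

The key point is that the request rate is driven by the scheduler, not by the responses: a slow server cannot slow down the request generation.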
Using this core principle, you can set up a load test with N load generator loaders that impose the load on the server, and 1 load generator probe that imposes a very light load and measures response times.
For example, you can have 4 loaders that impose 20 requests/s each, for a total of 80 requests/s seen by the server. With this load on the server, what would be the experience, in terms of response times, of additional users that make requests to the server? This is exactly what the probe measures.
If the load on the server is increased to 160 requests/s, what would the probe experience? The same response times? Worse? And what are the probe response times if the load on the server is increased to 240 requests/s?
Rather than trying to measure some form of throughput (“what is the max number of requests/s the server can sustain?”), the Jetty Load Generator measures the quality of service seen by the probe as the load on the server increases. This is, in practice, what matters most for HTTP servers: knowing that, when your server sustains a load of 1024 requests/s, an additional user still sees acceptable response times, and knowing how the quality of service changes as the load increases.
The Jetty Load Generator builds on top of Jetty’s HttpClient features, and offers:
- A builder-style Java API, to embed the load generator into your own code and to have full access to all events emitted by the load generator (see the sketch after this list).
- A command-line tool, similar to Apache’s ab or wrk2, with histogram reporting, for ease of use, scripting, and integration with CI servers.
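For embedded use, the Java API looks roughly like the following minimal sketch. The package name and builder methods are taken from the project README at the time of writing, so double-check them against the version you use; host and rate values are placeholders:

import java.util.concurrent.CompletableFuture;
import org.mortbay.jetty.load.generator.LoadGenerator;
import org.mortbay.jetty.load.generator.Resource;

public class EmbeddedLoadTest {
    public static void main(String[] args) {
        LoadGenerator generator = LoadGenerator.builder()
                .scheme("https")
                .host("your_server")
                .port(443)
                .resource(new Resource("/"))   // the resource (tree) to request
                .resourceRate(20)              // request 20 resource trees per second
                .iterationsPerThread(1200)     // then stop
                .build();

        // Requests are generated asynchronously; the returned CompletableFuture
        // completes when the load generation is finished.
        CompletableFuture<Void> complete = generator.begin();
        complete.join();
    }
}

The builder also accepts listeners, so your code can record every request/response event emitted by the load generator, for example to feed your own histograms.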
Download the latest command-line tool uber-jar from: https://repo1.maven.org/maven2/org/mortbay/jetty/loadgenerator/jetty-load-generator-starter/
$ cd /tmp
$ curl -O https://repo1.maven.org/maven2/org/mortbay/jetty/loadgenerator/jetty-load-generator-starter/1.0.2/jetty-load-generator-starter-1.0.2-uber.jar
Use the --help option to display the available command-line options:
$ java -jar jetty-load-generator-starter-1.0.2-uber.jar --help
Then run it, for example:
$ java -jar jetty-load-generator-starter-1.0.2-uber.jar --scheme https --host your_server --port 443 --resource-rate 1 --iterations 60 --display-stats
You will obtain an output similar to the following:
----------------------------------------------------
------------- Load Generator Report --------------
----------------------------------------------------
https://your_server:443 over http/1.1
resource tree     : 1 resource(s)
begin date time   : 2021-02-02 15:38:39 CET
complete date time: 2021-02-02 15:39:39 CET
recording time    : 59.657 s
average cpu load  : 3.034/1200
histogram:
@ _  37 ms (0, 0.00%)
@ _  75 ms (0, 0.00%)
@ _ 113 ms (0, 0.00%)
@ _ 150 ms (0, 0.00%)
@ _ 188 ms (0, 0.00%)
@ _ 226 ms (0, 0.00%)
@ _ 263 ms (0, 0.00%)
@ _ 301 ms (0, 0.00%)
@ _ 339 ms (46, 76.67%) ^50%
@ _ 376 ms (7, 11.67%) ^85%
@ _ 414 ms (5, 8.33%) ^95%
@ _ 452 ms (1, 1.67%)
@ _ 489 ms (0, 0.00%)
@ _ 527 ms (0, 0.00%)
@ _ 565 ms (0, 0.00%)
@ _ 602 ms (0, 0.00%)
@ _ 640 ms (0, 0.00%)
@ _ 678 ms (0, 0.00%)
@ _ 715 ms (0, 0.00%)
@ _ 753 ms (1, 1.67%) ^99% ^99.9%
response times: 60 samples | min/avg/50th%/99th%/max = 303/335/318/753/753 ms
request rate (requests/s)  : 1.011
send rate (bytes/s)        : 189.916
response rate (responses/s): 1.006
receive rate (bytes/s)     : 41245.797
failures          : 0
response 1xx group: 0
response 2xx group: 60
response 3xx group: 0
response 4xx group: 0
response 5xx group: 0
----------------------------------------------------
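To reproduce the loaders + probe setup described above with the command-line tool, one possible sketch (your_server is a placeholder) is to run a loader like the following on each of 4 machines, for a total of 80 requests/s seen by the server; at 20 requests/s, 1200 iterations last about 60 s:
$ java -jar jetty-load-generator-starter-1.0.2-uber.jar --scheme https --host your_server --port 443 --resource-rate 20 --iterations 1200
and, on a separate machine, a probe that imposes a light load and reports the response time statistics:
$ java -jar jetty-load-generator-starter-1.0.2-uber.jar --scheme https --host your_server --port 443 --resource-rate 1 --iterations 60 --display-stats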
Use the Jetty Load Generator for your load testing, and report comments and issues at https://github.com/jetty-project/jetty-load-generator. Enjoy!