There are several open source HTTP servers and servlet containers available, including Eclipse Jetty, Apache Tomcat, GlassFish, and Resin, so a frequently asked question is why you should use Jetty rather than one of these alternatives. This is a short overview of the technical and non-technical drivers of a decision to use Jetty.
Performance is very important for every web business, no matter how many requests per second they need to serve. However, when measuring performance it is important to know which performance metrics are valid for your site and to measure them correctly.
Page Load Time
If a site is handling fewer than many thousands of requests per second, then throughput (requests per second) is unlikely to be an important metric. Instead, studies have shown that slow pages can reduce revenue and conversions from a web site by up to 15%, so minimising page load time is the performance metric that matters for many (most) sites.
Jetty’s focus on multi-connection HTTP and features such as SPDY (and soon HTTP/2.0) can significantly reduce page load latencies without the need to re-engineer your web application.
Small Memory Footprint
Jetty has been designed to have a small memory footprint, which is an excellent basis for good all round performance and scalability. If the server uses less memory, then there is more memory available for the application, threads, and caches, and having more free memory can greatly improve garbage collection and thus overall application performance. A small memory footprint also allows you to run more instances of the server on virtual hardware, which is often memory constrained, making Jetty very cloud friendly.
If your website does need to handle many thousands of requests per second, then it is very important to understand that there is a big difference between serving 10,000 requests per second over 1 TCP/IP connection and serving the same request rate over 10,000 connections. Many published or self-performed benchmarks consist of opening a few connections and sending as many requests as possible over them. This is a poor measure of throughput because it is based on a load profile unlike the vast majority of loads real web servers experience. Such tests simulate a few extraordinarily busy users, whereas most typical web sites see many simultaneous users who send requests in short bursts separated by idle periods. We discuss the impact that such different load profiles can have on performance results in Lies, Damned Lies and Benchmarks.
We have designed Jetty for scalable performance under realistic loads of many simultaneous connections and we can achieve excellent results with many tens of thousands of HTTP connections and hundreds of thousands of simultaneous WebSocket connections. Most importantly, because our benchmarks are based on real applications under realistic loads, we have real users who have achieved the same results in production.
The web has proved time and time again that users are fickle and that one can lose or gain massive market share very quickly depending on usability and availability of the latest web features. To develop a successful web application and to keep an already successful web application current and relevant, it is vital that your server allows you to keep up with the latest techniques and protocols.
Jetty was the first HTTP server developed in Java (1994), long before the servlet specification, and has been at the forefront of web development ever since. Jetty has either led or been among the first movers on many significant innovations: HTTP/1.1, asynchronous servlets, comet, WebSocket, SPDY, and soon HTTP/2.0. If you want to keep your market share from migrating away to follow the latest web developments, then using Jetty gives you a platform on which you can stay current.
Typically we make new, advanced features available in the current, stable release of Jetty as optional extras before we make them core features in the next major release. For example, when browser support was rolled out for WebSockets and SPDY through 2011 and 2012, Jetty 7 and Jetty 8 included support for these as optional extras while we simultaneously re-architected Jetty 9 to build these important technologies into the core server, not just as adjuncts. This approach generally allows your development team to experiment and innovate with new features without subjecting your application to a major version upgrade.
The Jetty project is receptive to new ideas and lets you bring your own ideas to fruition. For example, we first developed asynchronous servlets as a result of suggestions from the ActiveMQ project, who had been told by other open source server projects that the suggested use-case was a protocol abuse and should not be done in a Java application server. Asynchronous servlets are now part of the servlet specification, and by using Jetty, ActiveMQ has enjoyed the scalability benefits longer than most.
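As a sketch of what this now-standard feature looks like, a Servlet 3.0 asynchronous servlet detaches a request from the container thread so that the thread is free while waiting for an external event. The servlet name, URL pattern, and simulated event below are hypothetical, and the servlet-api jar must be on the classpath:

```java
import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical example: an event-style servlet using the standard async API.
@WebServlet(urlPatterns = "/events", asyncSupported = true)
public class EventServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Detach this request from the container thread; it can now wait
        // for an external event (e.g. a JMS message) without blocking a thread.
        AsyncContext async = req.startAsync();
        async.setTimeout(30000);
        // When the event arrives (simulated here as an immediate task),
        // write the response and complete the async cycle.
        async.start(new Runnable() {
            @Override
            public void run() {
                try {
                    async.getResponse().getWriter().println("event received");
                } catch (IOException ignored) {
                } finally {
                    async.complete();
                }
            }
        });
    }
}
```

This is exactly the style of use-case ActiveMQ needed: many mostly-idle connections, each consuming no thread while waiting.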
The flip side of innovation can be vendor lock-in if a feature is available only on a single server. At the Jetty project we are keenly aware that we want developers to use Jetty because they want to, not because they have to, thus we make every effort to adopt standards and avoid proprietary APIs and extensions. Jetty developers are active in the JCP and IETF, where we participate in the development of Internet standards. This allows us to be early implementers of new standards (for example, Servlet 3.0, WebSocket), or to work towards standardization of our own innovations, for example, asynchronous servlets.
Currently we have deployed WebSocket support using both a native Jetty API and the javax.websocket API, to which we contributed heavily.
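For illustration, a minimal echo endpoint against the standard javax.websocket annotated API might look like the following (the /echo path and class name are hypothetical; the same endpoint could also be written against Jetty's native WebSocket API):

```java
import javax.websocket.OnMessage;
import javax.websocket.server.ServerEndpoint;

// Hypothetical echo endpoint using the standard (JSR 356) javax.websocket API.
@ServerEndpoint("/echo")
public class EchoEndpoint {
    @OnMessage
    public String onMessage(String message) {
        // A String return value is sent back to the client as a text message.
        return message;
    }
}
```

Because this uses only the standard API, the same class deploys unchanged on any JSR 356-compliant container.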
The view from 20,000 feet is that Jetty and the other containers are rather similar: they are all Java application servers offering implementations of the 2.5/3.0 servlet specification, with optional extras giving many JEE features. You can drop a standard WAR file into all of these servers and expect it to run, so in many ways all the servers are commodity products, and for many webapps it is not important which you use.
On closer inspection, the architectures of the servers differ greatly, mostly because each project historically has had a different focus. Unlike the other containers, we did not develop Jetty to be first and foremost an application server. Application servers have the benefit of controlling the majority of their environment and enforcing that deployed applications adhere to their conventions and standards. This is great if you want a commodity server, but can lack flexibility if you need to operate outside of the commodity box.
Jetty is first and foremost a set of software components built to offer HTTP and servlet services. You can assemble these components as needed to form a purpose-built server, including as an application server. One of the Jetty design mottoes is: “Don’t put your application into Jetty, put Jetty into your application.” The benefit of this approach is simply that one size does not fit all. While Jetty can be (and has been) used as the web tier of full and partial JEE stacks (Geronimo, JBoss, Sybase EAServer, JOnAS, Glassfish and Hightide), such stacks are not the only “solution” application servers require. Thus Jetty has also become the basis for other application frameworks, including SIP telephony (www.cipango.org), Ajax JMS (www.activemq.org), and asynchronous SOA services (Apache Camel).
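As a sketch of the “put Jetty into your application” approach, the following assembles a working embedded HTTP server from Jetty 9 components. The port and handler behaviour are illustrative, and the jetty-server jar (plus its servlet-api dependency) is assumed to be on the classpath:

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.AbstractHandler;

public class EmbeddedServer {
    public static void main(String[] args) throws Exception {
        // Server plus a default connector on port 8080, assembled in one step.
        Server server = new Server(8080);

        // Plug in a custom Handler component instead of a full webapp.
        server.setHandler(new AbstractHandler() {
            @Override
            public void handle(String target, Request baseRequest,
                               HttpServletRequest request, HttpServletResponse response)
                    throws IOException, ServletException {
                response.setContentType("text/plain");
                response.getWriter().println("Hello from embedded Jetty");
                baseRequest.setHandled(true);
            }
        });

        server.start();
        server.join(); // block until the server is stopped
    }
}
```

The same Server object could instead be given a WebAppContext, a HandlerCollection, or a custom connector, which is the flexibility the component design is meant to provide.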
Jetty follows a principle of “no taxation without representation”: because we have implemented features like JMX, JNDI, annotations and JEE integration as pluggable and/or extended components, if you do not need a particular feature then it does not need to be assembled, and the server does not have to pay a memory/CPU cost for an unused feature. While other servers can also be stripped back to lighter weight cores, it is often harder to strip back complexity than it is to simply add what you need.
As a collection of assembled software components, it is very simple to extend and/or replace components with customized behavior. For example, when Google chose Jetty as the servlet container for AppEngine, they were able to easily plug in their own HTTP connector and session management extensions. Similarly, the integration of Jetty into development tools like Maven can be very flexible, since the components that control the layout of a webapp can be updated to run an unassembled application from source rather than an assembled WAR. For example, you can use the mvn jetty:run plugin to run a Maven webapp project from source, without assembly.
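As an illustration of how little configuration this takes, a project's pom.xml only needs the plugin declared before mvn jetty:run can serve the webapp from source (the version number below is a placeholder; use the Jetty release you are targeting):

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.eclipse.jetty</groupId>
      <artifactId>jetty-maven-plugin</artifactId>
      <!-- placeholder version: match your Jetty release -->
      <version>9.x.y</version>
    </plugin>
  </plugins>
</build>
```

Running mvn jetty:run in the project directory then starts Jetty against src/main/webapp and the compiled classes, with no WAR assembly step.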
All Jetty APIs are public. You can use every module within Jetty as part of an extended/pluggable solution, so there is no scope at all for Jetty developers to be lazy with their designs or implementations; under the covers, Jetty code is easy to understand and maintain.
Development tools and frameworks often take advantage of the embeddable nature of Jetty: Google Web Toolkit, Grails, Eclipse, OSGi Equinox, OSGi Felix, Maven, Continuum, Fisheye, JRuby, Xbean, Tapestry, Cocoon, Plexus, etc. all either use Jetty by default or have Jetty bindings. Thus when it comes to production, it makes sense to run your application on the same server that was used to develop it.
Support and Community
Sometimes using the best technology can still be tough if you are the only one doing so and are unsupported. Jetty is an open source project with the normal community support lists, but it is also well represented in collaborative support systems like Stack Overflow.
For commercial support, Webtide provides developer advice, which is focused on answering your developers’ questions during development to avoid production problems, and production support that helps diagnose and fix any issues on your live servers. We can also assist you with custom Jetty developments and extensions.
Interested? Contact us, and let’s talk.