With most web applications today, the number of simultaneous users can greatly exceed the number of connections to the server.
This is because connections can be closed during the frequent pauses in the conversation while the user reads the
content or completes a form. Thousands of users can be served with hundreds of connections.
But AJAX-based web applications have very different traffic profiles from traditional webapps.
While a user is filling out a form, AJAX requests to the server will be asking for
entry validation and completion support. While a user is reading content, AJAX requests may
be issued to asynchronously obtain new or updated content. Thus
an AJAX application needs a connection to the server almost continuously and it is no
longer the case that the number of simultaneous users can greatly exceed the number of
simultaneous TCP/IP connections.
If you want thousands of users you need thousands of connections and if you want tens of thousands
of users, then you need tens of thousands of simultaneous connections. It is a challenge for Java
web containers to deal with significant numbers of connections, and you must look at your entire system,
from your operating system to your JVM to your container implementation.
Operating Systems & Connections
A few years ago, many operating systems could not cope with more than a few hundred TCP/IP connections.
JVMs could not handle the thread requirements of blocking models and the poll system call used for asynchronous
handling could not efficiently work with more than a few hundred connections.
Solaris 7 introduced the /dev/poll
mechanism for efficiently handling thousands of connections and Sun have
continued their development so that now Solaris 10 has a
new optimized TCP/IP stack that is reported
to support over 100 thousand simultaneous TCP/IP connections. Linux has also made great advances in this area and
comes close to Solaris 10's performance. If you want a scalable AJAX application server, you must start with such an
operating system configured correctly and with a JVM that uses these facilities.
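In java.nio terms, these facilities surface through the Selector API, which the JVM implements on top of mechanisms like /dev/poll and epoll. Below is a minimal sketch of a selector loop that lets one thread watch thousands of connections; the port and the dispatch step are placeholders:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open(); // backed by /dev/poll, epoll, etc.
        ServerSocketChannel server = ServerSocketChannel.open();
        server.socket().bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select(); // one thread waits on all connections at once
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel channel = server.accept();
                    channel.configureBlocking(false);
                    channel.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    // dispatch the connection for parsing; no thread is held while idle
                }
            }
        }
    }
}
```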
Connection Buffers
In my previous blog entry I described how
Jetty 6 uses Continuations and javax.nio to limit the number of threads required to service AJAX traffic. But threads are
not the only resources that scale with connections and you must also consider buffers. Significant memory can be consumed if
a buffer is allocated per connection. Memory cannot simply be saved by shrinking the buffer size, as there are
good reasons to have significantly large buffers:
- Below 8KB, TCP/IP can have efficiency problems with its sliding window protocol.
- When a buffer overflows, the application needs to be blocked. This holds a thread and associated resources
and increases switching overheads.
- If the servlet can complete without needing to flush the response, then the container can flush the buffer
outside of the blocking application context of a servlet, potentially using non-blocking IO.
- If the entire response is held in the buffer, then the container can set the content length header and can avoid
chunking and its extra complexity (see the sketch after this list).
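These tradeoffs can be seen from the servlet side through the standard Servlet API. A minimal sketch, assuming the whole response fits in the requested buffer so the container can set Content-Length itself:

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class BufferedServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        // Ask for a buffer big enough to hold the whole response.
        response.setBufferSize(64 * 1024);
        response.setContentType("text/xml");
        response.getWriter().write("<entries>...</entries>");
        // No explicit flush: the container can flush the buffer outside the
        // servlet's blocking context, set Content-Length and avoid chunking.
    }
}
```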
Jetty 6 contains a number of features designed to allow larger buffers to be used
in a scalable AJAX server.
Jetty 6 Split Buffers
Jetty 6 uses a split buffer architecture and dynamic buffer allocation. An idle connection will have no buffer allocated to it,
but once a request arrives a small header buffer is allocated. Most requests have no content, so often this is the only
buffer required for the request. If the request has a little content, then the header buffer is used for that content as
well. Only if the received header indicates that the request
content is too large for the header buffer is an additional, larger receive buffer allocated.
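A sketch of this dynamic allocation strategy is below. The class and the buffer sizes are illustrative assumptions, not Jetty 6's actual internals:

```java
import java.nio.ByteBuffer;

class Connection {
    private static final int HEADER_SIZE = 4 * 1024;    // assumed header buffer size
    private static final int CONTENT_SIZE = 64 * 1024;  // assumed content buffer size

    private ByteBuffer header;  // null while the connection is idle
    private ByteBuffer content; // allocated only for large request bodies

    ByteBuffer headerBuffer() {
        if (header == null)
            header = ByteBuffer.allocateDirect(HEADER_SIZE); // first request bytes
        return header;
    }

    ByteBuffer contentBuffer(long contentLength) {
        // Small bodies reuse the header buffer; only large ones get their own.
        if (contentLength <= headerBuffer().remaining())
            return header;
        if (content == null)
            content = ByteBuffer.allocateDirect(CONTENT_SIZE);
        return content;
    }

    void recycle() {
        header = null;  // an idle connection holds no buffers
        content = null;
    }
}
```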
For responses, a similar approach is used with a large content buffer being allocated once response data starts to be generated.
If the content might need to be chunked, space is reserved at the start and the end of the content buffer to allow the data to
be wrapped as a chunk without additional data copying.
Only when the response is committed is a smaller header buffer allocated.
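The reserved-space trick can be sketched with plain java.nio calls; the reserved sizes and the class are assumptions for illustration, not Jetty 6 code:

```java
import java.nio.ByteBuffer;

class ChunkBuffer {
    static final int HEADER_SPACE = 12; // room for a hex size line plus CRLF

    final ByteBuffer buffer;

    ChunkBuffer(int capacity) {
        buffer = ByteBuffer.allocateDirect(capacity);
        buffer.position(HEADER_SPACE); // reserve space for the chunk header
    }

    /** Frame the filled region as one HTTP/1.1 chunk without copying it
        (assumes at least two bytes of tail space remain unused). */
    ByteBuffer asChunk() {
        int end = buffer.position();
        int length = end - HEADER_SPACE;
        byte[] head = (Integer.toHexString(length) + "\r\n").getBytes();
        int start = HEADER_SPACE - head.length;
        buffer.position(start);
        buffer.put(head);                         // size line in the reserved head
        buffer.position(end);
        buffer.put((byte) '\r').put((byte) '\n'); // trailing CRLF in the tail
        buffer.flip();
        buffer.position(start);
        return buffer; // ready for writing from start to limit
    }
}
```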
These strategies mean that Jetty 6 allocates buffers only when they are required and that these buffers are of
a size suitable for the specific usage. Response content buffers of 64KB or more can easily be used without
blowing out total memory usage.
Gather writes
Because the response header and response content are held in different buffers, gather writes
are used to combine them into a single write to the operating system. As efficient direct buffers are used, no
additional data copying is needed to combine the header and content into a single packet.
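In java.nio this is just the gathering form of SocketChannel.write. A minimal sketch, assuming a blocking channel:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

class GatherWriteSketch {
    // Hand the header and content buffers to the OS in one call, so both can
    // share a single packet without first being copied into one buffer.
    static void writeResponse(SocketChannel channel, ByteBuffer header, ByteBuffer content)
            throws IOException {
        ByteBuffer[] buffers = {header, content};
        while (header.hasRemaining() || content.hasRemaining())
            channel.write(buffers); // writes both buffers in order
    }
}
```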
Direct File Buffers
Of course there will always be content larger than the buffers allocated, but if the content is large then it
is highly desirable to completely avoid copying the data to a buffer. For very large static content,
Jetty 6 supports the use of mapped file buffers,
which can be directly passed to the gather write with the header buffer for the ultimate in Java IO speed.
For intermediate sized static content, the Jetty 6 resource cache stores direct byte buffers which also can be written
directly to the channel without additional buffering.
For small static content, the Jetty 6 resource cache stores byte buffers which are copied into the
header buffer to be written in a single normal write.
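A size-tiered strategy along these lines can be sketched with standard java.nio calls; the thresholds below are illustrative assumptions, not Jetty 6's configuration:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

class StaticContentSketch {
    static final int SMALL = 8 * 1024;           // copy into the header buffer
    static final int INTERMEDIATE = 1024 * 1024; // cache as a direct buffer

    static ByteBuffer bufferFor(File file) throws IOException {
        FileChannel channel = new FileInputStream(file).getChannel();
        try {
            long length = channel.size();
            if (length > INTERMEDIATE) {
                // Very large: map the file so no copy into the JVM heap is needed.
                return channel.map(FileChannel.MapMode.READ_ONLY, 0, length);
            }
            ByteBuffer buffer = length > SMALL
                    ? ByteBuffer.allocateDirect((int) length) // written directly to the channel
                    : ByteBuffer.allocate((int) length);      // small: later copied into the header buffer
            while (buffer.hasRemaining() && channel.read(buffer) >= 0)
                ; // fill the buffer from the file
            buffer.flip();
            return buffer;
        } finally {
            channel.close();
        }
    }
}
```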
Conclusion
Jetty 6 employs a number of innovative strategies to ensure that only the resources that are actually
required are assigned to a connection, and only for as long as they are needed. This careful
resource management gives Jetty an architecture designed to scale to meet the needs of AJAX
applications.