Now that the 2.5 servlet specification is final, we must start thinking
about the next revision and what is needed. I believe that the most
important change needed is that the Servlet API must be evolved to
support an asynchronous model.
I see 5 main use-cases for asynchronous servlets:

  1. Non-blocking input – The ability to receive data from a
    client without blocking if the data is slow arriving. This
    is actually not a significant driver for an asynchronous API, as
    most requests arrive in a single packet, or handling can be delayed
    until the arrival of the first content packet. Moreover, I would
    like to see the servlet API evolve so that applications do
    not have to do any IO.
  2. Non-blocking output – The ability to send data to a
    client without blocking if the client or network is slow.
    While the need for asynchronous output is much greater than
    for asynchronous input, I also believe this is not a
    significant driver. Large buffers can allow the
    container to flush most responses asynchronously,
    and for larger responses it would still be better to
    avoid having application code handle the IO.
  3. Delay request handling – The comet style of
    Ajax web application can require that a request handling
    is delayed until either a timeout or an event has occurred.
    Delaying request handling is also useful if a remote/slow
    resource must be obtained before servicing the request
    or if access to a specific resource needs to be throttled to prevent
    too many simultaneous accesses. Currently the only
    compliant option to support this is to wait within
    the servlet, consuming a thread and other resources.
  4. Delay response close – The comet style of
    Ajax web application can require that a response is held
    open to allow additional data to be sent when asynchronous
    events occur. Currently the only compliant option to support
    this is to wait within the servlet, consuming a thread and
    other resources.
  5. 100 Continue Handling – A client may request
    a handshake from the server before sending a request body.
    If this is sent automatically by the container, it prevents
    the mechanism from being meaningfully used. If the application
    is able to decide whether a 100-Continue is to be sent, then
    an asynchronous API would prevent a thread being consumed
    during the round trip to the client.
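The last three use cases all reduce to the same compliant-but-costly pattern today: parking a thread inside the service method until an event or timeout. A minimal sketch of that pattern, with the servlet plumbing stripped away and plain Java threads standing in for container threads (the class and method names here are mine, purely for illustration):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// The only spec-compliant way to delay a response today: block the
// request thread until an event fires or a timeout expires.
public class BlockingCometHandler {
    private final CountDownLatch event = new CountDownLatch(1);

    // Called from the (simulated) servlet service method; holds the
    // thread, its stack, and any allocated buffers for the whole wait.
    public boolean awaitEvent(long timeoutMillis) {
        try {
            return event.await(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    // Called by some asynchronous event source to release the waiter.
    public void fireEvent() {
        event.countDown();
    }

    public static void main(String[] args) {
        BlockingCometHandler handler = new BlockingCometHandler();
        // One thread consumed per pending user for the whole wait.
        Thread requestThread = new Thread(() ->
            System.out.println("event arrived: " + handler.awaitEvent(5000)));
        requestThread.start();
        handler.fireEvent(); // asynchronous event releases the request
        try { requestThread.join(); } catch (InterruptedException ignored) {}
        // prints "event arrived: true"
    }
}
```

A comet application needs one such parked thread per connected user, which is exactly the cost the use cases above are trying to avoid.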

All these use cases can be summarized as “sometimes you just
have to wait for something”, with the perspective that the
Servlet.service method is an expensive place to park a request
while doing this waiting, as:

  • A Thread must be allocated.
  • If IO has begun, then buffers must be allocated.
  • If Readers/Writers are obtained, then character converters are allocated.
  • The session cannot be passivated.
  • Anything else allocated by the filter chain is held.

These are all resources that are frequently pooled or passivated
when a request is idle. Because comet style Ajax applications require
a waiting request for every user, this invalidates the use of
pools for these resources and requires maximal resource usage.
To avoid this resource crisis, the servlet spec needs to provide some
low-cost, short-term parking for requests.
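A back-of-envelope calculation shows why this matters. The per-thread figures below are assumptions for illustration (stack and buffer sizes vary by JVM and container), not measurements:

```java
// Rough cost of one parked request thread per comet user.
// All figures are assumed, not measured.
public class ParkedRequestCost {
    public static void main(String[] args) {
        long threadStack = 256 * 1024;   // assumed JVM thread stack
        long ioBuffers   = 2 * 8 * 1024; // assumed input + output buffers
        long users       = 10_000;       // one idle request per user
        long totalMb = users * (threadStack + ioBuffers) / (1024 * 1024);
        System.out.println(totalMb + " MB held by idle requests");
        // prints "2656 MB held by idle requests"
    }
}
```

Even with these modest assumptions, ten thousand idle comet users pin gigabytes of memory that pooling would otherwise reclaim.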

The Current Solutions

Given the need for a solution, the servlet container implementations have
started providing this with an assortment of non-compliant extensions:

  • Jetty has Continuations,
    which are targeted at comet applications
  • BEA has a future-response mechanism, also targeted at comet applications
  • Glassfish has an extensible NIO layer for async IO below the servlet model
  • The Tomcat developers have just started developing comet support in Tomcat 6
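Each of these extensions has its own programming model. Jetty's, for example, looks roughly like this inside a servlet's service method (sketched from memory of the Jetty 6 Continuation API; treat it as an illustration of non-portability, not a reference):

```
// Jetty-specific: park the request instead of blocking the thread.
Continuation continuation =
    ContinuationSupport.getContinuation(request, null);

// suspend() releases the thread; on a resume() from an event thread
// or on timeout, Jetty redispatches the request to this servlet.
continuation.suspend(30000);
```

Code written against this will not run on BEA, Glassfish, or Tomcat, and vice versa, which is precisely the portability problem.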

It is ironic that just as the 2.5 specification resolves most of the
outstanding portability issues, new portability issues are being created.
A standard solution is needed if web applications are to remain portable and
if Ajax framework developers are not going to be forced to support
multiple servers as well as multiple browsers.

A Proposed Standard Solution?

I am still not exactly sure how a standard solution should look, but I’m already
pretty sure how it should NOT look:

  • It should not be an API on a specific servlet. By the time a container has
    identified a specific servlet, much of the work has been done. Moreover, as filters
    and dispatchers give the ability to redirect a request, any asynchronous API on
    a servlet would have to follow the same path.
  • It probably will not be based on Continuations. While Continuations are
    a useful abstraction (and will continue to be so), a lower level solution can
    offer greater efficiencies and solve additional use-cases.
  • It should not expose Channels or other NIO mechanisms to the servlet programmer.
    These are details that the container should implement and hide, and NIO may not be
    the actual mechanism used.

An approach that I’m currently considering is based around a Coordinator
entity that can be defined and mapped to URL patterns just like filters
and servlets. A Coordinator would be called by the container in response
to asynchronous events and would coordinate the calling of the synchronous
service method.
The default coordinator would provide the normal servlet style of
scheduling and could look like:

class DefaultCoordinator implements ServletCoordinator
{
    void doRequest(ServletRequest request)
    {
        request.continue();  // trigger any required 100-Continue response
        request.service();   // dispatch a thread to the filter chain and servlet
    }

    void doResponse(Response response)
    {
        response.complete(); // close the response streams and recycle the connection
    }
}

The ServletRequest.continue() call would trigger any required 100-Continue response.
An alternative Coordinator could skip this call if a request body is not required or
should not be sent.

The ServletRequest.service() call would trigger the dispatch of a thread to the normal
filter chain and servlet service methods. An alternative Coordinator may choose not to
call service() during the call to doRequest(). Instead it may register with asynchronous
event sources and call service() when an event occurs or after a timeout. This can delay
handling until the required resources are available for that request.

The ServletResponse.complete() call would clean up a response and close the response
streams (if not already closed). An alternative Coordinator may choose not to call
complete() during the call to doResponse(), thus leaving the response open for
asynchronous events to write more content. A subsequent event or timeout may call
complete() to close the response and return its connection to the scheduler for new
requests.
The Coordinator lifecycle would probably be such that an instance would be allocated
per request, so that fields in a derived Coordinator can be used to communicate between
the doRequest and doResponse methods.
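Putting these pieces together, a comet-style Coordinator might look like the following sketch. Only continue(), service(), and complete() come from the proposal above; the event-source registration API is invented here for illustration:

```
class CometCoordinator implements ServletCoordinator
{
    void doRequest(ServletRequest request)
    {
        // No request body is expected, so continue() is not called
        // and no 100-Continue is sent.
        //
        // Rather than calling service() now, register with a
        // hypothetical event source; a thread is only dispatched
        // when the event arrives or the timeout expires.
        eventSource.onEventOrTimeout(timeout, new Runnable() {
            public void run() { request.service(); }
        });
    }

    void doResponse(Response response)
    {
        // complete() is not called, leaving the response open so a
        // later asynchronous event can write more content and then
        // call response.complete() to close it.
    }
}
```

No thread, buffers, or converters are held between doRequest() returning and the event firing, which is exactly the low-cost parking the use cases require.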
It would also be possible to extend the Coordinator approach to make events available,
such as the arrival of request content or the possibility of writing more response
content. However, I believe that asynchronous IO is of secondary importance and the
approach should be validated against the other use-cases first.
If feedback on this approach is good, I will probably implement a prototype in Jetty 6 soon.


1 Comment

Aditya Pandit · 06/03/2009 at 15:30

I see this posting is from 3 years ago but I am curious to find out the current state of handling asynchronous behavior. I hear 3.0, released a few months back (Jan 2009), has async and Comet support, but I am looking for some workarounds to add similar behavior in a 2.4 servlet engine (am using WebLogic). Any pointers will be much appreciated. Anyways I will go through your other posts to see if any subsequent post throws more light on the path you followed. 🙂
BTW very interesting post. I would not have understood any of it but currently I am facing an asynchronous problem. Some things are appreciated only when they hit you right on the head.
Thanks.
