In the CometD introduction post, I explained how the CometD project provides a solution for writing low-latency server-side event-driven web applications.

Examples of this kind of application are financial applications that provide stock quote price updates, online games, or position tracking systems for fast-moving objects (think of a motorbike on a circuit).
These applications have in common the fact that they generate a high rate of server-side events, say on the order of 10 events per second.

With such an event rate, you soon start wondering whether it is really appropriate to send every event to clients (and therefore 10 events/s), or whether it is better to save bandwidth and computing resources and send events to clients at a lower rate.
For example, even if a stock quote price changes 10 times a second, it will probably be enough to deliver changes once a second to a web application that is meant to be used by humans: I would be surprised if a person could make any use of (or even see and remember) a stock price that was updated two tenths of a second ago, and that in the meantime has already changed two or three times. (Disclaimer: I am not involved in financial applications; I am just making a hypothesis here for the sake of explaining the concept.)

The CometD project provides lazy channels to implement this kind of message flow control (it also provides other message flow control means, of which I’ll speak in a future entry).

A channel can be marked as lazy during its initialization on the server side:
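The original snippet is missing here; a minimal sketch using the CometD 2 server API could look like the following (the channel name `/stock/GOOG` is just an example):

```java
import org.cometd.bayeux.server.BayeuxServer;
import org.cometd.bayeux.server.ConfigurableServerChannel;

public class LazyChannelInitializer {
    // Sketch: mark a channel as lazy at creation time (CometD 2 API).
    public static void init(BayeuxServer bayeux) {
        bayeux.createIfAbsent("/stock/GOOG", new ConfigurableServerChannel.Initializer() {
            public void configureChannel(ConfigurableServerChannel channel) {
                channel.setLazy(true);
            }
        });
    }
}
```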

Any message sent to that channel will be marked as a lazy message and will be delivered lazily: either when a timeout (the max lazy timeout) expires, or when the long poll returns, whichever comes first.

It is possible to configure the duration of the max lazy timeout, for example to be 1 second, in web.xml:
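The original configuration snippet is missing here; a sketch of the relevant web.xml fragment, assuming the standard CometdServlet and its `maxLazyTimeout` init parameter, could look like:

```xml
<!-- Sketch: configure the max lazy timeout (in milliseconds) for lazy channels -->
<servlet>
    <servlet-name>cometd</servlet-name>
    <servlet-class>org.cometd.server.CometdServlet</servlet-class>
    <init-param>
        <param-name>maxLazyTimeout</param-name>
        <param-value>1000</param-value>
    </init-param>
</servlet>
```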

With this configuration, lazy channels will have a max lazy timeout of 1000 ms and messages published to a lazy channel will be delivered in a batch once a second.
Assuming, for example, a steady rate of 8 messages per second arriving at the server side to update the GOOG stock quote, you will deliver to clients a batch of 8 messages once a second, instead of 1 message every 125 ms.

Lazy channels do not by themselves reduce bandwidth consumption (since no messages are discarded), but combined with a GZip filter that compresses the output they allow bandwidth savings, because more messages are compressed per delivery (in general it is better to compress one larger text than many small ones).

You can browse the CometD documentation for more information, look at the online Javadocs, post to the mailing list, or pop into the IRC channel #cometd on irc.freenode.org.

CometD Message Flow Control with Lazy Channels

6 thoughts on "CometD Message Flow Control with Lazy Channels"

  • April 14, 2011 at 9:04 am

    Hi,
    what would be really good for financial applications is the ability to merge (aka "coalesce") the updates that arrive in the time window you describe above.
    In most cases clients will only be interested in the most recent “tick” (or value update) of a stock.
    So you could discard all value updates received in that time window apart from the last one.
    In general the coalescing policies could be arbitrarily complex.
    For example some data is field-based: update 1 might give you a new value for field F1, update 2 might give you a new value for field F5 etc.
    What you’d need to do in this case is to produce only one update with the latest value of F5 and of F1.

    Could we think of a way to let people inject their message coalescing logic in a lazy channel?

    thanks,
    Michele

  • April 19, 2011 at 4:17 pm

    Michele, see if this entry answers your question.

  • April 20, 2011 at 10:51 am

    Hi,
    yes I think the “deQueue” method offers a good way to deal with this problem.
    This solution raises another interesting question though.

    In my application code I deal with higher-level “StreamingData” objects which I convert to JSON messages at the last possible moment, before publishing them to channels.

    If I wanted to apply my coalescing logic using the de-queue mechanism I’d be forced to re-interpret the messages and turn them back into something like my old StreamingData objects.

    It would be very cool to have the possibility of pushing to a channel a standard java bean and then converting it to JSON perhaps in the deQueue method.

    This way you’d move the conversion to the “wire” format further down the line offering interesting possibilities in terms of coalescing / throttling etc.

    I can certainly apply coalescing at the application level but that would force me to create another queue.
    It’d be better to do it in the queue that already exists in my opinion.

    I understand that something like that might be hard to achieve, as the Message abstraction is used all over the place in the server code.

    Michele

  • April 20, 2011 at 12:29 pm

    Hi,
    I think I can answer part of my own question: a Bayeux message is a Map and as such I should be able to put whatever I want in it.
    Which means I can store my “StreamingData” object as a value associated with the “data” key.

    In the “deQueue” method I can then apply my coalescing logic and finally convert the remaining StreamingData objects to something that can be transformed into JSON by the lower layers.

    Thanks!
    Michele

  • April 20, 2011 at 12:51 pm

    Michele,

    The object-to-JSON conversion is an important step that has performance and semantic implications.

    If you publish a message to 100 clients, you do not want the 12th client to modify the message instance, because otherwise you would have 11 clients receiving the original message and 89 others receiving a modified one (things are even more complicated since messages are delivered concurrently).

    CometD cannot deep copy the message for each client (so that modifications would be local to that client only) for performance reasons.

    In CometD 2, we took the API very seriously: wherever a ServerMessage.Mutable is passed as a parameter to a method, the message may be modified by adding/removing/modifying fields; wherever a ServerMessage is passed as a parameter, it must not be modified anymore.
    CometD cannot enforce total immutability (it would be impossible to prevent something like message.getExt().get("foo").get("bar").get("baz").put("new_field", "new_value")), but it makes the message immutable for the fields it can control (and throws an exception if a modification is attempted).

    DeQueueListeners take a queue of ServerMessages as a parameter, so you may modify the queue but not the message instances. To coalesce messages you need to identify the candidates, create a new message instance, merge the candidates into the new instance, remove the candidates from the queue, and add the new instance to the queue.
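    The queue manipulation described above can be sketched with plain Java collections; here plain Maps stand in for ServerMessage instances (the actual ServerMessage API is not used, so this only illustrates the "last tick wins" merge logic, with `symbol` and `price` as hypothetical message fields):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class TickCoalescer {
    // Sketch of "last tick wins" coalescing: messages for the same stock
    // symbol are candidates; all candidates are removed from the queue,
    // and a fresh message carrying the latest values is appended.
    public static void coalesce(Deque<Map<String, Object>> queue, String symbol) {
        Map<String, Object> latest = null;
        for (Iterator<Map<String, Object>> it = queue.iterator(); it.hasNext(); ) {
            Map<String, Object> message = it.next();
            if (symbol.equals(message.get("symbol"))) {
                latest = message;   // remember the most recent candidate
                it.remove();        // remove every candidate from the queue
            }
        }
        if (latest != null) {
            // Add a new instance rather than re-adding one already delivered.
            queue.addLast(new HashMap<String, Object>(latest));
        }
    }
}
```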

    Cheers

