Typically the I/O performance of pure Java has been close enough to native code for all the use cases of an HTTP server, with the one key exception of SSL/TLS. I’m not exactly sure why the JVM has never provided a decent implementation of TLS – I’m guessing the reasons are not technical. Historically this has never been a huge issue, as most large scalable deployments have offloaded SSL/TLS to the load balancer, and a pure-Java server has been more than sufficient to receive the unencrypted traffic from the load balancer.
However, there is now a move to push SSL/TLS deeper into the data centre, and some very large Jetty users are looking to encrypt all internal traffic to improve internal security and integrity guarantees. In such deployments it is not possible to offload TLS to the load balancer, and encryption needs to be applied locally on the server. Jetty of course fully supports TLS, but currently that means using the slow Java TLS implementation.
Thus we are looking at alternative solutions, and it may be possible to plug in a native JSSE implementation backed by OpenSSL. While conceptually attractive, the JSSE API is actually a very complex one: highly stateful and somewhat fragile to behaviour changes between implementations. While still a possibility, I would prefer to avoid supporting such complex semantics over a native interface (perhaps I just answered my own question about why there is not a performant JSSE provider?).
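To give a flavour of that statefulness: any JSSE provider must faithfully reproduce the SSLEngine handshake state machine, where after every wrap/unwrap the engine tells the caller what it needs next. A minimal sketch with the stock JDK provider (all names here are standard JSSE APIs; the class name is mine):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLEngineResult.HandshakeStatus;

public class EngineStateDemo {
    public static HandshakeStatus initialStatus() throws Exception {
        SSLContext ctx = SSLContext.getDefault();
        SSLEngine engine = ctx.createSSLEngine();
        engine.setUseClientMode(true);
        engine.beginHandshake();
        // The engine now dictates the caller's next action:
        // NEED_WRAP (produce handshake bytes), NEED_UNWRAP (consume peer
        // bytes) or NEED_TASK (run a delegated task). A native provider
        // has to reproduce this state machine exactly, in every corner case.
        return engine.getHandshakeStatus();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(initialStatus()); // a client engine starts with NEED_WRAP
    }
}
```

Every transport that uses JSSE (including Jetty's own SslConnection) is written against these state transitions, which is why even small behavioural differences in a swapped-in provider can break things in subtle ways.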
The other key option is to offload TLS to a local native instance of something like haproxy or nginx, and then make a local connection to a pure-Java Jetty. This is a viable solution, as the local connector is typically highly performant and low latency. Yet this architecture also opens up the option of using Unix domain sockets to further optimize that local connection – reducing data copies and avoiding dispatch delays. Thus I have used the JNR unix socket implementation to add unix-socket support to jetty-9.4 (currently in a branch, but soon to be merged to master).
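The semantic underneath is nothing more than the familiar POSIX AF_UNIX socket that JNR wraps. As a self-contained illustration of the round trip (shown here with the JDK's own Unix-domain channel support from newer JDKs rather than JNR, purely so the sketch runs without extra jars; the socket path is made up):

```java
import java.io.IOException;
import java.net.StandardProtocolFamily;
import java.net.UnixDomainSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class UnixSocketEcho {
    // Round-trip one message over a Unix domain socket and return the echo.
    public static String roundTrip() throws Exception {
        Path path = Path.of("/tmp/jetty-demo.sock"); // illustrative path
        Files.deleteIfExists(path);
        UnixDomainSocketAddress addr = UnixDomainSocketAddress.of(path);
        try (ServerSocketChannel server = ServerSocketChannel.open(StandardProtocolFamily.UNIX)) {
            server.bind(addr);
            Thread echo = new Thread(() -> {
                try (SocketChannel peer = server.accept()) {
                    ByteBuffer buf = ByteBuffer.allocate(64);
                    peer.read(buf);
                    buf.flip();
                    peer.write(buf); // echo the bytes straight back
                } catch (IOException e) {
                    e.printStackTrace();
                }
            });
            echo.start();
            try (SocketChannel client = SocketChannel.open(StandardProtocolFamily.UNIX)) {
                client.connect(addr); // no TCP handshake, no loopback stack
                client.write(ByteBuffer.wrap("ping".getBytes(StandardCharsets.UTF_8)));
                ByteBuffer reply = ByteBuffer.allocate(64);
                client.read(reply);
                reply.flip();
                return StandardCharsets.UTF_8.decode(reply).toString();
            } finally {
                echo.join();
                Files.deleteIfExists(path);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip());
    }
}
```

The same bind/accept/connect shape is what the JNR-backed connector drives, just via the POSIX calls directly rather than through the kernel's TCP loopback path.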
My current target for a frontend with this is haproxy , primarily because it can work a the TCP level rather than at the HTTP level and we have already used it in offload situations with both HTTP/1 and HTTP/2. We need only the TCP level proxy since in this scenario any parsing of HTTP done in the offloader can be considered wasted effort… unless it is being used for something like load balancing… which in this scenario is not appropriate as you will rarely load balance to a local connection (NB there has been some deployment styles that did do load balancing to multiple server instances on the same physical server, but I believe that was to get around JVM limitations on large servers and I’m not sure they still apply).
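The haproxy side of such a setup could be sketched roughly as below – a hedged illustration only, since we have not settled on a configuration: the cert and socket paths are invented, though `mode tcp`, `unix@` server addresses and PROXY-protocol forwarding are all standard haproxy features.

```
# Sketch: terminate TLS in haproxy, forward plaintext over a
# Unix domain socket to the local Jetty (paths are illustrative).
frontend tls_in
    mode tcp
    bind :443 ssl crt /etc/haproxy/server.pem
    default_backend jetty_local

backend jetty_local
    mode tcp
    # send-proxy-v2 preserves the real client address for Jetty,
    # assuming the Jetty connector is configured to parse PROXY protocol.
    server jetty unix@/var/run/jetty.sock send-proxy-v2
```

Because the proxy stays in `mode tcp`, haproxy never parses the HTTP stream at all; it just decrypts and shovels bytes into the local socket.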
So the primary target for this effort is terminating SSL/TLS on the application server rather than on the load balancer, in an architecture like ( [x] is a physical machine, [[x]] is multiple physical machines ):
[[Client]] ==> [Balancer] ==> [[haproxy-->Jetty]]
These are very early days for this work, so our most important goal ahead is to find some test scenarios in which we can check the robustness and the performance of the solution. Ideally we are looking for a loaded deployment that we could test like:
[[Client]] ==> [Balancer] ---> [haproxy--lo0-->Jetty]
Also, from a Webtide perspective, we have to consider how something like this could be commercially supported, as we can’t directly support the JNR native code. Luckily the developers of JNR are confident that development of JNR will continue and be supported in the (j)Ruby community. Also, as JNR is just a very thin veneer over the standard POSIX APIs, there is limited scope for complex problems within the JNR software, and the semantics that need to be supported are simple and very well known. Another key benefit of the unix-socket approach is that it is an optimization of an already efficient local connection model, which would always be available as a fallback if there were some strange issue in the native code that we could not immediately support.
So, early days for this approach, but initial efforts look promising. As always, we are keen to work with real users to better direct the development of new features like this in Jetty.