David M. Lloyd

Software Engineer at Red Hat, Inc.

30 June 2008

A modified approach to asynchronous reads

by David M. Lloyd

In a previous post, I talked about the impracticality of asynchronous reads in a scalable server. The problem I cited was that if there is a large number of pending reads, each with its own preallocated buffer, a great deal of memory can be tied up in buffers that are not yet being used, which impedes scalability.

I’ve been thinking about the problem some more, and I’ve come up with an alternate approach. Basically, the solution is simply to not allocate a buffer until the read actually takes place. This is made pretty simple by use of the BufferAllocator interface in XNIO.
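
For illustration, here’s a minimal sketch of what such an allocator interface could look like. The allocate/free pair is an assumption on my part; the exact signatures of XNIO’s BufferAllocator may differ:

import java.nio.Buffer;

// Illustrative sketch only; the real org.jboss.xnio.BufferAllocator
// may not match these signatures exactly.
public interface BufferAllocator<B extends Buffer> {
    // Called only when a buffer is actually needed, for example
    // when the channel becomes readable.
    B allocate();

    // Hands a buffer back (useful for pooling implementations)
    // once the consumer is finished with it.
    void free(B buffer);
}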

Using this interface, the signature of the asynchronous read method would look like this:

IoFuture asyncRead(BufferAllocator allocator) throws IOException;
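
In use, it might look something like this. The channel variable, the PooledByteBufferAllocator class, and the generic parameters are all stand-ins of my own, and I’m assuming IoFuture’s blocking get() here just for brevity:

// Hypothetical usage; none of these names are the real XNIO types.
BufferAllocator<ByteBuffer> allocator = new PooledByteBufferAllocator(8192);
IoFuture<ByteBuffer> future = channel.asyncRead(allocator);
ByteBuffer buffer = future.get(); // blocks until the read completes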

The buffer is allocated only when the channel is readable. And if an NIO.2-style (or similar) async read is used “under the covers” for whatever reason, then the allocation can simply happen right up front, since that style of API requires a buffer at the time the read is submitted.
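
To make the deferral concrete, here is a rough sketch of how a readiness-based (selector) implementation could put off allocation until the channel is actually readable. It reuses the BufferAllocator interface sketched above, and all the other names are illustrative; this is not the actual XNIO implementation:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;
import java.nio.channels.SelectionKey;

// Sketch: while the read is pending, no buffer exists at all. Only
// when the event loop sees the key become readable does the
// allocator get asked for one.
final class DeferredReadHandler {
    private final BufferAllocator<ByteBuffer> allocator;

    DeferredReadHandler(BufferAllocator<ByteBuffer> allocator) {
        this.allocator = allocator;
    }

    // Invoked by the event loop when the key is readable.
    void handleReadable(SelectionKey key) throws IOException {
        ByteBuffer buffer = allocator.allocate(); // allocate at the last moment
        ReadableByteChannel channel = (ReadableByteChannel) key.channel();
        int count = channel.read(buffer);
        if (count == -1) {
            allocator.free(buffer); // nothing was read; recycle immediately
            // ... complete the pending IoFuture with end-of-stream ...
        } else {
            buffer.flip();
            // ... complete the pending IoFuture with the filled buffer ...
        }
    }
}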

Look for this feature in XNIO 1.1!
