Beware, there are multiple buffers!

Some people note that in my bufferbloat testing I set the transmit queue length (txqueuelen) to zero on Linux.

Note that this is *at best* a short-term hack to reduce pain; it is the wrong answer in general, and on some hardware it will cause your system to go completely catatonic.  Please, please, don’t just blindly go twisting knobs without understanding what you are doing…

There are many potential places where buffers may hide today. These include (at least):

  1. the Linux transmit queue (which txqueuelen controls)
  2. device drivers themselves may hide one or more packets internally (e.g. the Libertas driver does this, which simplified its implementation)
  3. most current hardware has very large DMA ring buffers, often allowing for up to 4096 packets per queue in the hardware itself; in the drivers we’ve examined, the default size seems to be in the 200-300 packet range (also true of some Mac and Windows ethernet drivers we’ve played with); see the sketch after this list for one way to inspect these
  4. sometimes the hardware itself has packet buffers buried inside it; again from OLPC experience, the wireless module there has 4 packets of buffering hidden out in the device
  5. (?) encryption buffers.
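
Two of these layers are easy to peek at from userspace on Linux: the transmit queue length is exposed in sysfs, and many drivers will report their DMA ring sizes via `ethtool -g`. Here is a minimal sketch; the interface name `eth0` is just an example, so substitute your own:

```python
#!/usr/bin/env python3
"""Peek at two of the buffering layers listed above (Linux only).

A rough sketch: it assumes the interface exists, that sysfs is mounted
in the usual place, and that the driver supports `ethtool -g`.
"""
import subprocess
from pathlib import Path

IFACE = "eth0"  # example; use your own interface (wlan0, em1, ...)

# Layer 1: the Linux transmit queue (what txqueuelen controls).
txqueuelen = Path(f"/sys/class/net/{IFACE}/tx_queue_len").read_text().strip()
print(f"{IFACE} txqueuelen: {txqueuelen} packets")

# Layer 3: the driver/hardware DMA ring buffers, if the driver reports them.
try:
    rings = subprocess.run(["ethtool", "-g", IFACE],
                           capture_output=True, text=True, check=True)
    print(rings.stdout)
except (FileNotFoundError, subprocess.CalledProcessError):
    print(f"ethtool -g could not read ring sizes for {IFACE}")
```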

Old hardware often has very limited buffering in the drivers and the hardware itself; this is part of the history of how we got to where we are.

Some buffering is necessary for your network stack to work properly.  The only reason I set txqueuelen to zero (from its default of 1000) was that I had figured out there were an additional 256 packets of buffering in the Intel wireless and ethernet drivers I was using.  Normally, for packet classification to work, we’d like the Linux transmit queue set to some reasonable (small) value, so that we can play nice traffic games of various sorts.  So in my experiments I knew there were already 256 buffers available below the transmit queue, and my system would continue to function.
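
For the record, the knob itself is trivial to twist, which is exactly the danger. Here is a sketch of what I mean, assuming iproute2’s `ip` tool, root privileges, and an example interface name of `wlan0`; please don’t run this blindly:

```python
#!/usr/bin/env python3
"""Illustration only: adjust txqueuelen on a Linux interface.

As the text above warns, a tiny (or zero) transmit queue is only safe
if you know the driver and hardware below have buffering of their own.
Assumes iproute2's `ip` command and root privileges.
"""
import subprocess

IFACE = "wlan0"        # example interface name
NEW_TXQUEUELEN = "32"  # an arbitrary small value, for illustration only

# Equivalent to: ip link set dev wlan0 txqueuelen 32
subprocess.run(["ip", "link", "set", "dev", IFACE,
                "txqueuelen", NEW_TXQUEUELEN], check=True)

# Read the value back from sysfs to confirm it took effect.
with open(f"/sys/class/net/{IFACE}/tx_queue_len") as f:
    print(f"{IFACE} txqueuelen is now {f.read().strip()} packets")
```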

Now the question is: how much buffering is “enough”?

And the answer is, unfortunately, not simple.  The buffering that should be present depends upon the bandwidth (which may vary by orders of magnitude) and the delay (which is anywhere from 10ms to a couple hundred milliseconds if you are going around the world).  The rule of thumb has been the bandwidth-delay product, with the delay presumed to be around 100ms.  It also depends on workload.  The rub is that ethernet spans 3 orders of magnitude in bandwidth, and wireless is even worse, where moving your laptop a few inches can change your performance by orders of magnitude.
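
To make that rule of thumb concrete, here is a toy calculation; the full-size 1500-byte packets and the particular link rates are just assumptions for illustration:

```python
#!/usr/bin/env python3
"""Toy bandwidth-delay-product calculation for the classic rule of thumb."""

PACKET_BYTES = 1500  # assume full-size Ethernet frames

def bdp_packets(bandwidth_bps: float, rtt_seconds: float) -> float:
    """Bytes in flight at the given rate and delay, expressed in packets."""
    bytes_in_flight = bandwidth_bps * rtt_seconds / 8
    return bytes_in_flight / PACKET_BYTES

# The classic assumption: roughly 100 ms of delay.
for label, rate in [("10 Mbps Ethernet", 10e6),
                    ("~20 Mbps 802.11g (best case)", 20e6),
                    ("1 Gbps Ethernet", 1e9),
                    ("10 Gbps Ethernet", 10e9)]:
    print(f"{label:32s} ~{bdp_packets(rate, 0.100):8.0f} packets")
```

The answers span three orders of magnitude, which is exactly why a single static default can never be right for everyone.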

What a server needs in buffering on 1G or 10G networks is very different from what you will need on an 802.11g network (which at best runs at about 20Mbps, and often much more slowly).  But right now the knobs have, for historical reasons, often been set to maximize bandwidth performance for such server systems, without regard to latency under load on the computers in most people’s homes.

So whatever we set these knobs to, the setting is guaranteed to be wrong much of the time on many systems.  At best, until we have better tools at hand, we can mitigate the pain a bit by twisting the knobs toward values that make more sense for the environment you run in most of the time, and some of the default values in our operating systems and device drivers may need short-term tuning to the bandwidth actually in use.  So some short-term mitigation is possible by being slightly more clever.

The real long-term solution, however, is AQM (active queue management) in the most general sense: the buffering at all layers of the system, not just router queues, needs proper integration and management, and it needs to be very dynamic in nature: ergo our interest in algorithms such as eBDP and SFB, and, we hope, RED Light soon.  We need to signal the endpoints to slow down appropriately.  And getting operating systems to manage their own buffering in concert with the underlying device drivers and hardware is why this is going to be an interesting problem (as in the Chinese curse).
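
To give a flavor of what “very dynamic” means, here is a deliberately over-simplified sketch, loosely in the spirit of eBDP as I understand it: measure how long the link currently takes to drain a packet, and cap the queue so the worst-case queuing delay stays near a target.  The target delay, smoothing constant, and bounds below are made-up values for illustration, not anything from a published algorithm.

```python
#!/usr/bin/env python3
"""Very rough sketch of delay-based dynamic buffer sizing.

Loosely inspired by the eBDP idea mentioned above: keep an estimate of
how long the link takes to drain one packet, and cap the queue so that
the worst-case queuing delay stays near a target.  All constants here
are illustrative assumptions.
"""

TARGET_DELAY_S = 0.020   # aim for ~20 ms of queuing delay (assumed target)
SMOOTHING = 0.1          # weight for the moving average (assumed)
MIN_LIMIT, MAX_LIMIT = 2, 1000   # sanity bounds on the limit, in packets

class DynamicQueueLimit:
    """Track per-packet service time and derive a queue limit from it."""

    def __init__(self):
        self.avg_service_time = None   # seconds per packet, measured
        self.limit = MAX_LIMIT         # current queue limit, in packets

    def on_packet_transmitted(self, service_time_s: float) -> int:
        """Feed in the measured time the link took to send one packet."""
        service_time_s = max(service_time_s, 1e-6)   # avoid division by zero
        if self.avg_service_time is None:
            self.avg_service_time = service_time_s
        else:
            self.avg_service_time = ((1 - SMOOTHING) * self.avg_service_time
                                     + SMOOTHING * service_time_s)
        # Admit only as many packets as fit within the delay target.
        self.limit = int(TARGET_DELAY_S / self.avg_service_time)
        self.limit = max(MIN_LIMIT, min(MAX_LIMIT, self.limit))
        return self.limit
```

On a fast, clean link the measured service time is tiny and the limit grows; move your laptop behind a wall and the service time balloons, so the limit shrinks, which is exactly the adaptive behavior a static knob can never give you.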

Loaded guns can hurt if you aim them at your foot and pull the trigger. So please do be careful, and think…
