The First Bufferbloat Battle Won

August 6, 2012

Bufferbloat was covered in a number of sessions at the Vancouver IETF last week.

The most important of these sessions was a great explanation of Kathie Nichols and Van Jacobson’s CoDel (“coddle”) algorithm, given by Van during Tuesday’s transport area meeting. It is not to be missed by serious network engineers. It also touches on why we like fq_codel so much, though I plan to write much more extensively on that topic very soon. CoDel by itself is great, but in combination with SFQ-like algorithms that segregate flows, the results are stunning; CoDel is the first AQM algorithm that can work across an arbitrary number of queues/flows.
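For readers who want the flavor of the algorithm in code, here is a minimal, heavily simplified sketch in Python, purely for illustration (the authoritative description is Kathie and Van’s article and the Linux implementation): CoDel watches how long packets sit in the queue rather than how full the queue is, and once the delay has stayed above a small target for a full interval it starts dropping, then drops again at shortening intervals until the delay comes back down.

```python
# A minimal, simplified sketch of the CoDel idea (not the reference code):
# drop based on how long packets *sit* in the queue, not on queue length.
import math
import time
from collections import deque

TARGET = 0.005     # 5 ms: acceptable standing queue delay
INTERVAL = 0.100   # 100 ms: window over which delay must stay above target

class CoDelQueue:
    def __init__(self):
        self.q = deque()
        self.first_above_time = 0.0   # when sojourn time first exceeded TARGET
        self.drop_next = 0.0          # when the next drop is scheduled
        self.count = 0                # drops in the current dropping state
        self.dropping = False

    def enqueue(self, packet):
        self.q.append((time.monotonic(), packet))

    def dequeue(self):
        while self.q:
            enq_time, packet = self.q.popleft()
            now = time.monotonic()
            sojourn = now - enq_time
            if sojourn < TARGET or not self.q:
                # Delay is acceptable (or queue nearly empty): leave drop state.
                self.first_above_time = 0.0
                self.dropping = False
                return packet
            if self.first_above_time == 0.0:
                # Delay just crossed TARGET; give it one INTERVAL to drain.
                self.first_above_time = now + INTERVAL
                return packet
            if not self.dropping and now >= self.first_above_time:
                # Delay stayed above TARGET for a full INTERVAL: start dropping.
                self.dropping = True
                self.count = 1
                self.drop_next = now + INTERVAL
                continue                      # drop this packet, try the next
            if self.dropping and now >= self.drop_next:
                # Still too much delay: drop again, sooner each time
                # (the inverse-square-root control law).
                self.count += 1
                self.drop_next = now + INTERVAL / math.sqrt(self.count)
                continue
            return packet
        return None
```

The fq_codel qdisc then runs this kind of logic independently on each flow’s queue, which is why the combination works across an arbitrary number of flows.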

The Saturday before the IETF, the IAB/IRTF Workshop on Congestion Control for Interactive Real-Time Communication took place. My position paper was my blog entry of several weeks back. In short, there is no single silver bullet, though with CoDel we finally have the last missing technical bullet for a complete solution. The other, equally important but non-technical bullets involve market pressure to fix broken software/firmware/hardware all over the Internet: exposing the bloat problem is therefore vital. You cannot successfully engineer around bufferbloat, but you can detect it and let users know when they are suffering, so that they can vote with their pocketbooks. In one of the later working groups, someone coined the term “net-sux” index, though I hope we can find something more marketable.

In the ICCRG (Internet Congestion Control Research Group) meeting, I covered research-related topics, including global questions, algorithmic questions, data acquisition and analysis needs, and the tools needed for diagnosis.

Thursday included the RMCAT BOF. With the ongoing deployment of large-scale real-time teleconferencing systems, congestion avoidance algorithms are becoming a pressing concern. TCP has integrated congestion avoidance algorithms, but RTP currently has no equivalent mechanism. So long as RTP’s usage in the Internet is low, this is not a major issue; but classic 1980s congestion collapse could occur should such traffic rise to dominate Internet traffic. I was asked to cover AQM and bufferbloat to help set context for the ensuing discussion. I covered the current status in brief and then added a bit of heresy: with a little forethought, we could arrange for real-time media and AQM algorithms to someday interact in novel ways. Detection (and preferably correct assignment of blame) is key to getting bufferbloat cleaned up.

In short, we’ve won the first battle for the hearts and minds of the engineers who build the Internet, and the tools are now present to build the weapons to solve bufferbloat; but the campaign to fix the Internet will be long and difficult.

The Internet is Broken, and How to Fix It

June 26, 2012


Many real-time applications, such as VoIP, gaming, teleconferencing, and performing music together, require low latency. These are increasingly unusable in today’s Internet, not because there is insufficient bandwidth, but because we’ve failed to look at the Internet as an end-to-end system. The edge of the Internet now often runs congested, and when it does, bufferbloat causes performance to fall off a cliff.

Where once a home user’s Internet connection consisted of a single computer, it now consists of a dozen or more devices – smart phones, TVs, Apple TV/Roku devices, tablets, home security equipment, and one or more computers per household member. More Internet-connected devices arrive every year, and they often perform background activities without the user’s intervention, inducing transients on the network. These devices need to share the edge connection effectively in order to keep each user happy. All of them can induce congestion and bufferbloat that baffle most Internet users.

The CoDel (“coddle”) AQM algorithm provides the “missing link” necessary for good TCP behavior and for solving bufferbloat. But CoDel by itself is insufficient to provide reliable, predictable low-latency performance in today’s Internet.

Bottlenecks are most common at the “edge” of the Internet, and there you must be very careful to avoid queuing delays of all sorts. Your share of a busy 802.11 conference network (or a marginal WiFi connection, or one in a congested location) might be 1 Mb/second, at which speed a single packet represents about 13 milliseconds. Your share of a DSL connection in the developing world may be similarly limited. Small businesses often support many people on limited bandwidth. Budget motels commonly share a single broadband connection among all guests.

Only a few packets can ruin your whole day!  A single IW10 TCP open immediately blows any telephony jitter budget at 1 Mbps (which is about 16x the bandwidth of conventional POTS telephony).
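Back-of-the-envelope, assuming full 1500-byte segments and ignoring link-layer overhead (which pushes the per-packet figure toward the ~13 ms quoted above):

```latex
% One full-size packet serialized at 1 Mb/s:
t_{\mathrm{pkt}} = \frac{1500 \times 8\ \text{bits}}{10^{6}\ \text{bits/s}} = 12\ \text{ms}

% An IW10 burst of ten such segments queued at the same bottleneck:
t_{\mathrm{IW10}} = 10 \times 12\ \text{ms} = 120\ \text{ms}
```

That is 120 ms of queue from a single connection open, several times a typical interactive jitter budget of a few tens of milliseconds.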

Ongoing technology changes make the problem more challenging. These include:

  • Changes to TCP, including the IW10 initial window changes and window scaling.
  • NIC offload engines generate bursts of line-rate packets at multi-gigabit speeds. These features are now “on” by default even in cheap consumer hardware, including home routers, and certainly in data centers. Whether this is advisable (it is not…) is orthogonal to the reality of deployed hardware, current device drivers, and default settings.
  • Deployment of “abusive” applications (e.g. HTTP/1.1 browsers opening many more than the specified two TCP connections, sharded web sites, BitTorrent). As systems designers, we need to remove the incentives for such abusive application behavior while protecting the user’s experience. Network engineers must presume software engineers will optimize their application’s performance, even to the detriment of other uses of the Internet, as the abuse of HTTP by web browsers and servers demonstrates.
  • The rapidly increasing number of devices sharing home and small office links.
All of these factors contribute to large line-rate bursts of packets crossing the Internet to arrive at a user’s edge network, whether in the broadband connection or, more commonly, in the home router.

The Bufferbloat Bandwidth Death March

May 23, 2012

Latency, much more than bandwidth, governs actual Internet “speed”, as best expressed in written form by Stuart Cheshire’s It’s the Latency, Stupid rant and more formally in Latency and the Quest for Interactivity.

Speed != bandwidth, despite what an ISP’s marketing department will tell you. This misconception reaches all the way up to FCC Commissioner Julius Genachowski, is common even among technologists who should know better, and is believed by the general public. You pick an airplane to fly across the ocean rather than a ship, even though the capacity of the ship may be far higher.


A Milestone Reached: CoDel is in Linux!

May 22, 2012

The CoDel AQM algorithm by Kathie Nichols and Van Jacobson provides us with an essential missing tool to control queues properly. This work is the culmination of their three major attempts to solve the problems with AQM algorithms over the last 14 years.


Eric Dumazet wrote the codel queuing discipline (based on a quick prototype by Dave Täht, who has spent the last year working 60-hour weeks on bufferbloat), which landed in net-next a week or two ago; yesterday, net-next was merged into the Linux mainline for inclusion in the next Linux release. Eric also implemented the fq_codel queuing discipline, combining fair queuing and CoDel (pronounced “coddle”), and it works very well. The CoDel implementation was dual-licensed BSD/GPL to help the *BSD community. Eric and others have tested CoDel on 10G Ethernet interfaces; as expected, CoDel performance is good in what’s been tested to date.
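To make the “fair queuing plus CoDel” structure concrete, here is a conceptual Python sketch, not the kernel code: packets are hashed by flow into many small queues, each with its own CoDel state, and the queues are served round-robin so a bulk transfer cannot hold everyone else’s packets behind it. The real qdisc uses byte-based deficit round robin and gives brief priority to new flows; those details are omitted here, and any CoDel-managed queue with enqueue()/dequeue() methods (for instance, a simplified sketch like the one earlier on this page) will do for codel_factory.

```python
# Conceptual sketch of the fq_codel idea: per-flow queues, each with its own
# CoDel state, served round-robin.  Not the Linux implementation.
from collections import deque

NUM_BUCKETS = 1024  # Linux's default flow count; a placeholder here

class FqCodelSketch:
    def __init__(self, codel_factory):
        # One independent CoDel-managed queue per hash bucket.
        self.buckets = [codel_factory() for _ in range(NUM_BUCKETS)]
        self.active = deque()   # round-robin list of buckets holding packets

    def flow_hash(self, src, dst, sport, dport, proto):
        # Stand-in for the kernel's hash of the flow 5-tuple.
        return hash((src, dst, sport, dport, proto)) % NUM_BUCKETS

    def enqueue(self, packet, flow_tuple):
        idx = self.flow_hash(*flow_tuple)
        if idx not in self.active:
            self.active.append(idx)
        self.buckets[idx].enqueue(packet)

    def dequeue(self):
        # Visit flows round-robin; CoDel decides, per flow, whether to drop.
        while self.active:
            idx = self.active.popleft()
            packet = self.buckets[idx].dequeue()
            if packet is not None:
                self.active.append(idx)   # flow keeps its place in the rotation
                return packet
        return None
```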

Linux 3.5 will likely release in August. So it was less than a month from first access to the algorithm (which was formally published in ACM Queue on May 6) to the Linux mainline; it should be about four months in total from availability of the algorithm to a Linux release.  Not bad at all :-).

Felix Fietkau merged both the codel and fq_codel queuing disciplines into the OpenWrt mainline last week for release this summer. 37 architectures, 150 separate routing platforms, no waiting…

The final step should be to worry about all the details, and finally kill pfifo_fast once we’ve tested CoDel enough.

While I don’t think this is the end of the story, fair queuing in combination with CoDel, together with Tom Herbert’s great BQL work, goes a very long way toward dealing with bufferbloat on Ethernet devices in Linux. I think there is more to be done, but we’re most of the way to what is possible.

Some Ethernet hardware (both NICs and many Ethernet switches) has embedded bufferbloat (in this case, large FIFO buffers) that software may not be able to easily avoid; as always, you must test before you can be sure you are bloat-free! Unfortunately, we’ll see a lot of this: a very senior technologist at a major router vendor started muttering “line cards” in a troubled voice at an IETF meeting when he really grokked bufferbloat. Adding AQM, simple as CoDel is, as an afterthought may be very hard: do not infer from the speed of the Linux Ethernet implementation that it will be so easy, or even possible, to retrofit all operating systems, drivers, and hardware; for example, if your OS has no equivalent of BQL, you’ll have that work to do (and easily other work too).

Wireless is much more of a challenge than Ethernet for Linux, particularly 802.11n; the buffering and queuing internal to these devices and drivers is much more complex, and the designers of those software stacks are still working out the implications of AQM, particularly since the driver boundary has partitioned the buffering in unfortunate ways. I think it will be months, or even a year or two, before those have good implementations that get us down to anything close to the theoretical minimum latency. But running CoDel is likely a lot better than nothing if you can: that’s certainly our CeroWrt/OpenWrt experience, and the tuning headaches Dave Täht was having trying to use RED go away. So give it a try…

Let me know of CoDel progress in other systems please!

The Next Nightmare is Coming

May 14, 2012

BitTorrent was NEVER the Performance Nightmare

BitTorrent is a lightning rod on two fronts: it is used to download large files, which the MPAA sees as a nightmare for its business model, and it has been a performance nightmare for ISPs and some users. Bram Cohen has taken infinite grief for BitTorrent over the years, when the end-user performance problems are not his fault.

Nor is TCP the performance problem, despite Bram Cohen’s recent flame about TCP on his blog.

I blogged about this before, but several key points seem to have been missed by most: BitTorrent was never the root cause of most of the network speed problems it triggered when it deployed. The broadband edge of the Internet was already broken when BitTorrent arrived, with vastly too much uncontrolled buffering, which we now call bufferbloat. As my demonstration video shows, even a single simple TCP file copy can cause horrifying speed loss in an overbuffered network. Speed != bandwidth, despite what the ISPs’ marketing departments tell you.

But almost anything can induce bufferbloat suffering (filling bloated buffers) too: I can just as easily fill the buffers with UDP or other protocols as with TCP. So long as uncontrolled, single queue devices pervade the broadband edge, we will continue to have problems.
But new nightmares will come….

Fundamental Progress Solving Bufferbloat

May 8, 2012

Kathie Nichols and Van Jacobson today published an article entitled “Controlling Queue Delay” in ACM Queue, which describes a new adaptive active queue management (AQM) algorithm called CoDel (pronounced “coddle”). This is their third attempt over a 14-year period to solve the adaptive AQM problem, and it is finally a successful solution. The article will appear sometime this summer in the Communications of the ACM. Additionally, another independent adaptive AQM algorithm by other authors is working its way through the academic publication cycle.

A working adaptive AQM algorithm is essential to any full solution  to bufferbloat. Existing AQM algorithms are inadequate, particularly in wireless with its very rapid changes in bandwidth.

Everyone working in networking, not just those interested in AQM systems, should read the article, as it dispels common misunderstandings about how TCP interacts with queuing.


Bufferbloat goings on…

May 1, 2012

The bufferbloat front has appeared quiet for several months since two publications hit CACM (1), (2) and several videos hit YouTube, though I have one more article to write for IEEE Spectrum (sigh…).

There has been a lot going on behind the lines, however, and some major announcements are imminent on ways to really fix bufferbloat. But I wanted to take a moment to acknowledge other important work in the meanwhile, so it does not get lost in the noise, and to get your juices flowing.

  1. First off, Linux 3.3 shipped with BQL (Byte Queue Limits), done by Tom Herbert of Google.  This is good stuff: finally, the transmit rings in Linux network device drivers won’t cause hundreds of packets of buffering (a conceptual sketch of the idea follows this list).
  2. Dave Taht has had good success prototyping a combination of Linux’s SFQ and RED in CeroWrt: SFQ ensures decent sharing, giving short-lived interactive flows preference over long-lived elephant TCP sessions. As transient bufferbloat and TSO/GSO/GRO/LRO smart NICs make clear, no comprehensive solution for achieving good latency is possible without some sort of “fair” queuing and/or classification. As with all RED-based AQM algorithms, tuning SFQRED is a bitch, and a better AQM is badly needed; news at 11 on that front. CeroWrt is approaching its first release with all sorts of nice features, and I’ll blog about it when it’s soup. In the meanwhile, adventurers can find all they want to know about CeroWrt at the links here.
  3. The DOCSIS changes to mitigate bufferbloat in cable modems continue on their way.  While I haven’t checked when deployment really starts (driven by modifications to cable carriers’ deployment systems), we should see this major improvement later this year.
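Since BQL comes up in item 1, here is a conceptual sketch of the idea (in Python, for illustration only; the kernel’s dynamic queue limits code is more careful, and the constants here are placeholders): cap the bytes, not packets, outstanding in a NIC transmit ring, and adapt that cap so the ring stays just full enough to avoid starving the hardware. The effect is that queuing moves up into the qdisc layer, where AQM and fair queuing can manage it.

```python
# A conceptual sketch of the Byte Queue Limits (BQL) idea, not the kernel's
# dql implementation: cap the *bytes* outstanding in a NIC transmit ring and
# adapt that cap based on observed completions.
class ByteQueueLimitSketch:
    def __init__(self, limit=10_000):
        self.limit = limit        # current byte limit (placeholder starting value)
        self.inflight = 0         # bytes handed to the NIC, not yet completed
        self.stopped = False      # whether the stack's queue is throttled

    def on_transmit(self, nbytes):
        """Driver hands a packet to the hardware ring."""
        self.inflight += nbytes
        if self.inflight >= self.limit:
            self.stopped = True   # tell the stack to stop feeding this ring

    def on_completion(self, nbytes, ring_ran_empty):
        """Hardware reports it finished sending nbytes."""
        self.inflight -= nbytes
        # Crude adaptation: grow the limit if the ring starved (hurting
        # throughput), shrink it slowly otherwise so latency stays low.
        if ring_ran_empty:
            self.limit = int(self.limit * 1.25)
        else:
            self.limit = max(3_000, int(self.limit * 0.98))
        if self.stopped and self.inflight < self.limit:
            self.stopped = False  # let the stack (and its qdisc) resume
```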

And, as outlined in other writings on this blog, and demonstrated in this video, you can do things about bufferbloat in your home today.

So there is hope.  Really…  Stay tuned…

I’ll be attending Penguicon on April 28, 29.

April 19, 2012

I’ve never been to Penguicon before; but they invited me and John Scalzi (one of my favorite recent SF authors) to be guests of honor, so how could I possibly say no? I haven’t been to an SF con for decades, much less one crossed with a Linux conference.  I think it should be fun; certainly there are a lot of interesting topics. There are quite a few other fun people attending, including Bruce Schneier, and I’ll get to embarrass myself about Heinlein with Eric Raymond, as well as doing a somewhat different take on bufferbloat than my usual talk, more about what people can do about it themselves.

A Minor Diversion into DNSSEC….

March 21, 2012

I realized recently that an interesting milestone has been reached, one that will thrill the people who have slaved over DNSSEC for more than a decade: DNSSEC running end-to-end, into the house, the way “it should really work”, without requiring any configuration or intervention. After all the SOPA/PIPA anguish, seeing DNSSEC come to life is really, really nice. This post may also interest router hackers who’d like an affordable real home router, particularly one that will do mesh routing.


Diagnosing Bufferbloat

February 20, 2012

People (including members of my family) ask how to diagnose bufferbloat.

Bufferbloat’s existence is pretty easy to figure out; identifying which hop is the current culprit is harder.  For the moment, let’s concentrate on the edge of the network.

The ICSI Netalyzr project is the easiest way for most people to identify problems: you should run it routinely on any network you visit, as it will tell you about lots of problems, not just bufferbloat.  For example, I often take the Amtrak Acela Express, which has WiFi service (of sorts).  Its DNS server did not randomize its ports properly, leaving you vulnerable to man-in-the-middle attacks (so it would be unwise to do anything that requires security); this has since been fixed, as today’s report shows (look at the “network buffer measurements”).  This same report shows very bad buffering in both directions: about 6 seconds up and 1.5 seconds down.  Other runs today show much worse performance, including an inability to determine the buffering at all (Netalyzr cannot always determine the buffering in the face of cross traffic or other problems; it conservatively reports buffering only when the measurement makes sense).

[Figure: Netalyzr uplink buffer test results]

As you’d expect, performance is terrible (you can see what even “moderate” bufferbloat does in my demo video on a fast cable connection).  The train’s buffering is similar to what my brother has on his DSL connection at home; but as the link is busy with other users, the performance is continually terrible, rather than intermittently terrible.  Six seconds is commonplace; but the lower right-hand Netalyzr data is cut off, since ICSI does not want their test to run for too long.

In this particular case, with only a bit more investigation, we can guess that most of the problems are in the train<->ISP hop: my machine reports high bandwidth on its WiFi interface (130 Mbps 802.11n), while the uplink speeds are a small fraction of that, so the bottleneck to the public Internet is usually in that link rather than in the WiFi hop (remember, it’s just *before* the lowest-bandwidth hop that the buffers fill in either direction).  In your home (or elsewhere on this train), you’d have to worry about the WiFi hop as well, unless you are plugged directly into the router. But further investigation shows additional problems.

If Netalyzr isn’t your cup of tea, you may be able to observe what is happening with “ping” while you (or others) load your network.

By “ping”ing both the local router on the train and somewhere else, you can glean additional information. As usual, a dead giveaway for bufferbloat is high and variable RTTs with little packet loss (though sometimes packets are terribly delayed and arrive out of order; packets stuck in buffers for even tens of seconds are not unusual).  Local pings vary much more than you might like, sometimes by as much as several hundred milliseconds, and occasionally by multiple seconds.  Here, I hypothesize bloat in the router on the train, just as I saw inside my house when I first understood that bufferbloat was a generic problem with many causes. Performance is terrible at times due to the train’s connection, but also, a fraction of the time, due to serving local content from a bloated router.
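Here is a rough sketch of that kind of measurement (Python, for illustration only; the target address, sample count, and thresholds are placeholders): run it once on an idle link to get a baseline, then again while something saturates the link (a big upload, say), and compare.

```python
# Rough sketch of the "ping while loading the link" diagnostic: sample RTTs
# to a nearby hop and report how much they swell under load.
import re
import statistics
import subprocess

def ping_rtts(host, count=20):
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    # Reply lines look like: "64 bytes from ...: icmp_seq=1 ttl=64 time=3.21 ms"
    return [float(m) for m in re.findall(r"time=([\d.]+)", out)]

if __name__ == "__main__":
    rtts = ping_rtts("192.168.1.1")   # placeholder: your local router
    if rtts:
        print(f"min {min(rtts):.1f} ms  median {statistics.median(rtts):.1f} ms  "
              f"max {max(rtts):.1f} ms  ({len(rtts)} replies)")
    # A median/max hundreds of milliseconds (or whole seconds) above the idle
    # baseline, with few lost packets, is the classic bufferbloat signature.
```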

Home router bloat

Specifically, if the router has a lot of buffering (as most modern routers do: often 256-1250 packets) and is using a default FIFO queuing discipline, it is easy for the router to fill those buffers with packets all destined for a single machine that is operating at a fraction of the speed WiFi might reach.  Ironically, modern home routers tend to have much larger buffers than old routers, due to changes in upstream operating systems that were optimized for bandwidth and never tested for latency.
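To put rough numbers on that, assuming full 1500-byte packets and the 1 Mb/s floor that WiFi can drop to (discussed below), draining a full FIFO takes:

```latex
t_{256}  = \frac{256  \times 1500 \times 8\ \text{bits}}{10^{6}\ \text{bits/s}} \approx 3\ \text{s}
\qquad
t_{1250} = \frac{1250 \times 1500 \times 8\ \text{bits}}{10^{6}\ \text{bits/s}} = 15\ \text{s}
```

Every packet behind that backlog, including your VoIP or game traffic, waits that long.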

Even if “correct” buffering were present (actually an oxymoron), the bandwidth can drop from the 130 Mbps I see to the local router all the way down to 1 Mbps, the minimum speed at which WiFi will operate, so your buffering can be very much too high even at the best of times.  Moving your laptop/pad/device a few centimeters can make a big difference in bandwidth. But since we have no AQM algorithm controlling the amount of buffering, recent routers have been tuned (to the extent they’ve been tuned at all) to operate at maximum bandwidth, even though this means the buffering can easily be 100 times too much when running slowly (all of which turns into delay).  One might also hope that a router would prevent starvation of other connections in such circumstances, but as these routers typically run a FIFO queuing discipline, they won’t: a local (low-RTT) flow can get a much higher fraction of the bandwidth than a long-distance flow.

To do justice to the situation, it is also possible that the local latency variation is partially caused by device driver problems in the router: Dave Taht’s experience has been that 802.11n WiFi device drivers often buffer many more packets than they should (beyond what is required for good performance when aggregating packets for 802.11n), and he, Andrew McGregor, and Felix Fietkau spent a lot of time last fall reworking one of those Linux device drivers. Since the wireless on the train supports 802.11n, we know these device drivers are in play; fixing these problems for the CeroWrt project was a prerequisite for later work on queuing and AQM algorithms.