Archive for the ‘Puzzle’ Category

Traditional AQM is not enough!

July 10, 2013

Note: Updated October 24, 2013, to fix some editorial nits, and to clarify the intended point that it is the combination of a working mark/drop algorithm with flow scheduling that is the “killer” innovation, rather than the specifics of today’s fq_codel algorithm.

Latency (called “lag” by gamers), once incurred, cannot be undone, as first explained by Stuart Cheshire in his rant “It’s the Latency, Stupid” and more formally in “Latency and the Quest for Interactivity.” It was noted recently by Stuart’s 12-year-old daughter, who sent him a link to one of the myriad “Lag Kills” tee shirts, coffee mugs, and other items popular among gamers.

Out of the mouth of babes…

Any unnecessary latency is too much latency.

Many networking engineers and researchers express the opinion that 100 milliseconds of latency is “good enough”. If the Internet’s worst latency (under load) were 100 ms, indeed, we’d be much better off than we are today (and would have space warp technology as well!). But the speed of light and human factors research easily demonstrate that this opinion is badly flawed.

Many have understood bufferbloat to be a problem that occurs primarily when a saturating “elephant flow” is present on a link. Testing for bufferbloat using elephants is very easy, and even a single elephant TCP flow from any modern operating system can fill an uncontrolled buffer of any size, given time. But this is not the only problem we face: the dominant application, the World Wide Web, is anti-social to any other application on the Internet, and its collateral damage is severe.

Solving the latency problem requires a two-pronged attack.

(more…)

TCP Small Queues

October 1, 2012

Linux 3.6 just shipped.  As I’ve noted before, bloat occurs in multiple places in an OS stack (and in applications!). If your OS TCP implementation fills transmit queues more than needed, the full queues increase the RTT, causing TCP to misbehave. Net result: additional latency, with no increase in bandwidth performance. TCP small queues (TSQ) reduces this buffering, and hence the latency, without sacrificing performance.

To quote the Kernel Newbies page:

TCP small queues is another mechanism designed to fight bufferbloat. TCP Small Queues goal is to reduce number of TCP packets in xmit queues (qdisc & device queues), to reduce RTT and cwnd bias, part of the bufferbloat problem. Without reduction of nominal bandwidth, we have reduction of buffering per bulk sender : < 1ms on Gbit (instead of 50ms with TSO) and < 8ms on 100Mbit (instead of 132 ms).

Eric Dumazet (now at Google) is the author of TSQ. It is covered in more detail at LWN.  Thanks to Eric for his great work!

The combination of TSQ, fq_codel and BQL (Byte Queue Limits) gets us much of the way to solving bufferbloat on Ethernet in Linux. Unfortunately, wireless remains a challenge (the drivers need to have a bunch of packets for 802.11n aggregation, and this occurs below the level that fq_codel can work on), as do other device types.  For example, a particular DSL device we looked at last week has a minimum ring buffer size of 16, again, occurring beneath Linux’s queue discipline layer.  “Smart” hardware has become a major headache. So there is much to be done yet in Linux, much less other operating systems.
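If you want to poke at TSQ on your own machine, the limit is exposed as a sysctl. A Linux 3.6 or later kernel is assumed, and the tuning value shown is purely illustrative, not a recommendation:

```shell
# Bytes a single TCP socket may have queued in the qdisc + driver
# layers (the TSQ limit; 128KB was the default on early TSQ kernels)
cat /proc/sys/net/ipv4/tcp_limit_output_bytes

# The limit is tunable; on a slow uplink a smaller value may trim
# latency further (65536 here is an illustrative guess)
sysctl -w net.ipv4.tcp_limit_output_bytes=65536
```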

I’m attending the International Summit for Community Wireless Networks

September 24, 2012

I will be giving an updated version of my bufferbloat talk there on Saturday, October 6.  The meeting, in Barcelona, Spain, October 4–7, is about community wireless networks (many of which are mesh wireless networks), on which bufferbloat is a particular issue.

We tried (and failed) to make ad-hoc mesh networking work when I was at OLPC, and I now know that one of the reasons we failed was bufferbloat.

I’ll also be giving a talk at the UKNOF (UK Network Operators’ Forum) in London on October 9, but that is now full and there is no space for new registrants.

The Internet is Broken, and How to Fix It

June 26, 2012


Many real time applications, such as VOIP, gaming, teleconferencing, and performing music together, require low latency. These are increasingly unusable in today’s Internet, not because there is insufficient bandwidth, but because we’ve failed to look at the Internet as an end-to-end system. The edge of the Internet now often runs congested, and when it does, bufferbloat causes performance to fall off a cliff.

Where once a home user’s Internet connection served a single computer, it now serves a dozen or more devices – smart phones, TVs, Apple TV/Roku devices, tablets, home security equipment, and one or more computers per household member. More Internet-connected devices arrive every year, often performing background activities without the user’s intervention and inducing transients on the network. These devices need to share the edge connection effectively in order to keep each user happy. All can induce congestion and bufferbloat that baffle most Internet users.

The CoDel (“coddle”) AQM algorithm provides the “missing link” necessary for good TCP behavior and solving bufferbloat. But CoDel by itself is insufficient to provide reliable, predictable low-latency performance in today’s Internet.

Bottlenecks are most common at the “edge” of the Internet, and there you must be very careful to avoid queuing delays of all sorts. Your share of a busy 802.11 conference network (or of a marginal or congested WiFi connection) might be 1 Mb/second, at which speed a single packet represents 13 milliseconds. Your share of a DSL connection in the developing world may be similarly limited. Small businesses often support many people on limited bandwidth, and budget motels commonly share a single broadband connection among all guests.

Only a few packets can ruin your whole day!  At 1 Mbps (about 16x the bandwidth of conventional POTS telephony), a single IW10 TCP open has immediately blown any telephony jitter budget.
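The arithmetic behind that claim is easy to check (assuming 1500-byte packets and ignoring link-layer overhead):

```shell
# One full-size packet at 1 Mbit/s:
echo "$((1500 * 8)) bits, i.e. $((1500 * 8 / 1000)) ms on the wire"

# An IW10 TCP open can burst ten such packets back to back:
echo "$((10 * 1500 * 8 / 1000)) ms of queue: several times a typical telephony jitter budget"
```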

Ongoing technology changes make the problem more challenging. These include:

  • Changes to TCP, including the IW10 initial window changes and window scaling.
  • NIC offload engines generate bursts of line-rate packet streams at multi-gigabit rates. These features are now “on” by default even in cheap consumer hardware, including home routers, and certainly in data centers. Whether this is advisable (it is not…) is orthogonal to the reality of deployed hardware, current device drivers, and default settings.
  • Deployment of “abusive” applications (e.g. HTTP/1.1 using many (>2) TCP connections, sharded web sites, BitTorrent). As systems designers, we need to remove the incentives for such abusive application behavior while protecting the user’s experience. Network engineers must presume software engineers will optimize their application’s performance, even to the detriment of other uses of the Internet, as the abuse of HTTP by web browsers and servers demonstrates.
  • The rapidly increasing number of devices sharing home and small office links.
All of these factors contribute to large line-rate bursts of packets crossing the Internet to arrive at a user’s edge network, whether in the broadband connection or, more commonly, in the home router.
(more…)

The Bufferbloat Bandwidth Death March

May 23, 2012

Latency, much more than bandwidth, governs actual Internet “speed”, as best expressed in written form by Stuart Cheshire’s rant It’s the Latency, Stupid and more formally in Latency and the Quest for Interactivity.

Speed != bandwidth, despite what an ISP’s marketing department will tell you. This misconception extends up to and including FCC Commissioner Julius Genachowski, is common even among technologists who should know better, and is believed by the general public. You pick an airplane to fly across the ocean rather than a ship, even though the capacity of the ship may be far higher.

(more…)

A Milestone Reached: CoDel is in Linux!

May 22, 2012

The CoDel AQM algorithm, by Kathie Nichols and Van Jacobson, provides us with an essential missing tool to control queues properly. This work is the culmination of at least three major attempts by them to solve the problems with AQM algorithms over the last 14 years.


Eric Dumazet wrote the codel queuing discipline (based on a quick prototype by Dave Täht, who has spent the last year working 60-hour weeks on bufferbloat), which landed in net-next a week or two ago; yesterday, net-next was merged into the Linux mainline for inclusion in the next Linux release.  Eric also implemented fq_codel, a queuing discipline combining fair queuing and CoDel (pronounced “coddle”), and it works very well.  The CoDel implementation was dual-licensed BSD/GPL to help the *BSD community. Eric and others have tested CoDel on 10G Ethernet interfaces; as expected, CoDel performance is good in what’s been tested to date.

Linux 3.5 will likely release in August. So it was less than a month from first access to the algorithm (formally published in ACM Queue on May 6) to Linux mainline, and it should be about four months total from availability of the algorithm to a Linux release.  Not bad at all :-).

Felix Fietkau merged both the codel and fq_codel into the OpenWrt mainline last week for release this summer. 37 architectures, 150 separate routing platforms, no waiting…

The final step will be to worry about all the details and, once we’ve tested CoDel enough, finally kill off pfifo_fast.
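For the impatient, trying the new qdiscs needs only the stock tc tool (eth0 here is a placeholder for your interface name):

```shell
# Swap the default pfifo_fast root qdisc for fq_codel
tc qdisc replace dev eth0 root fq_codel

# ...then watch its drop/ECN-mark statistics while you load the link
tc -s qdisc show dev eth0

# To back out, delete the root qdisc and the default returns
tc qdisc del dev eth0 root
```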

While I don’t think that this is the end of the story, fair queuing in combination with CoDel, together with Tom Herbert’s great BQL work, goes a very long way toward dealing with bufferbloat on Ethernet devices in Linux.  I think there is more to be done, but we’re most of the way to what is possible.

Some Ethernet hardware (both NICs and many Ethernet switches) has embedded bufferbloat (in this case, large FIFO buffers) that software may not be able to avoid easily; as always, you must test before you can be sure you are bloat free! Unfortunately, we’ll see a lot of this: a very senior technologist at a major router vendor started muttering “line cards” in a troubled voice at an IETF meeting when he really grokked bufferbloat. Adding AQM as an afterthought, simple as CoDel is, may be very hard. Do not infer from the speed of the Linux Ethernet implementation that retrofitting other operating systems, drivers, and hardware will be so easy; for example, if your OS has no equivalent of BQL, you’ll have that work to do (and easily other work too).

Wireless is much more of a challenge than Ethernet for Linux, particularly 802.11n wireless; the buffering and queuing internal to these devices and drivers is much more complex, and the designers of those software stacks are still working out the implications of AQM, particularly since the driver boundary partitions the buffering in unfortunate ways. I think it will be months, or even a year or two, before those have good implementations that get us down to anything close to the theoretical minimum latency. But running CoDel is likely a lot better than nothing if you can: that’s certainly our CeroWrt/OpenWrt experience, and the tuning headaches Dave Täht was having trying to use RED go away. So give it a try….

Let me know of CoDel progress in other systems please!

The Next Nightmare is Coming

May 14, 2012

BitTorrent was NEVER the Performance Nightmare

BitTorrent is a lightning rod on two fronts: it is used to download large files, which the MPAA sees as a nightmare for its business model, and it has been a performance nightmare for ISPs and some users. Bram Cohen has taken infinite grief for BitTorrent over the years, but the end-user performance problems are not his fault.

Nor is TCP the performance problem, despite Bram Cohen’s recent flame about TCP on his blog.

I blogged about this before, but several key points seem to have been missed by most: BitTorrent was never the root cause of most of the network speed problems it triggered when it deployed. The broadband edge of the Internet was already broken, with vastly too much uncontrolled buffering (which we now call bufferbloat), when BitTorrent arrived. As my demonstration video shows, even a single simple TCP file copy can cause horrifying speed loss in an overbuffered network.  Speed != bandwidth, despite what the ISPs’ marketing departments tell you.

But almost anything can induce bufferbloat suffering (filling bloated buffers): I can fill the buffers just as easily with UDP or other protocols as with TCP. As long as uncontrolled, single-queue devices pervade the broadband edge, we will continue to have problems.
But new nightmares will come….
(more…)

Bufferbloat goings on…

May 1, 2012

The bufferbloat front has appeared quiet for several months, since two publications hit CACM (1), (2) and several videos hit YouTube, though I have one more article to write, for IEEE Spectrum (sigh…).

There has been a lot going on behind the lines, however, and some major announcements are imminent on ways to really fix bufferbloat. But I wanted to take a moment to acknowledge other important work in the meanwhile, so that it does not get lost in the noise, and to get your juices flowing.

  1. First off, Linux 3.3 shipped with BQL (byte queue limits) done by Tom Herbert of Google.  This is good stuff: finally, the transmit rings in Linux network device drivers won’t cause hundreds of packets of buffering.
  2. Dave Täht has had good success in CeroWrt prototyping a combination of Linux’s SFQ and RED to good effect: SFQ ensures decent sharing, with short-lived interactive flows receiving preference over long-lived elephant TCP sessions. As transient bufferbloat and TSO/GSO/GRO/LRO smart NICs make clear, no comprehensive solution for achieving good latency is possible without some sort of “fair” queuing and/or classification. As in all RED-based AQM algorithms, tuning SFQRED is a bitch, and a better AQM is badly needed; news at 11 on that front. CeroWrt is approaching its first release with all sorts of nice features, and I’ll blog about it when it’s soup. In the meanwhile, adventurers can find all they want to know about CeroWrt at the links here.
  3. The DOCSIS changes to mitigate bufferbloat in cable modems continue on their way.  While I haven’t checked to see when deployment really starts (driven by modifications to cable carriers’ deployment systems), we should see this major improvement later this year.
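On a BQL-capable driver you can watch the limits adapt in sysfs; the interface and queue names here are assumptions for illustration:

```shell
# The byte limit BQL has currently settled on for transmit queue 0
cat /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit

# Bytes currently outstanding on the ring; BQL tries to keep this
# just large enough to avoid starving the NIC
cat /sys/class/net/eth0/queues/tx-0/byte_queue_limits/inflight
```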

And, as outlined in other writings on this blog, and demonstrated in this video, you can do things about bufferbloat in your home today.

So there is hope.  Really…  Stay tuned…

Bufferbloat demonstration videos

February 1, 2012

If people have heard of bufferbloat at all, it is usually just an abstraction, despite their personal experience with it. Bufferbloat can occur in your operating system, your home router, your broadband gear, wireless, and almost anywhere else in the Internet.  People still think that poor Internet speed means they must need more bandwidth, and take vast speed variation for granted. Sometimes adding bandwidth can actually hurt rather than help. Most people have no idea what they can do about bufferbloat.

So I’ve been working to put together several demos to help make bufferbloat concrete, and to demonstrate at least partial mitigation. The mitigation shown may or may not work with your home router: you need to be able to set both upload and download bandwidth.

Two of four cases we all commonly suffer from at home are:

  1. Broadband bufferbloat (upstream)
  2. Home router bufferbloat (downstream)
Rather than attempt to show worst-case bufferbloat, which can easily induce complete failure, I decided to demonstrate these two cases of “typical” bufferbloat as shown by the ICSI data. As the amount of bufferbloat varies widely in the ICSI data, your mileage will also vary widely.

There are two versions of the video:

  1. A short bufferbloat video, of slightly over 8 minutes, which includes both demonstrations but elides most of the explanation. Its intent is to get people “hooked” so they will want to know more.
  2. The longer version of the video clocks in at 21 minutes; it includes both demonstrations, but also gives a simplified explanation of bufferbloat’s cause, to encourage people to dig yet further.
Since bufferbloat only affects the bottleneck link(s), and broadband and WiFi bandwidth are often similar and variable, it’s very hard to predict where you will have trouble. If you understand that the bloat grows just before the slowest link in a path (including in your operating system!), you may be able to improve the situation. You have to take action where the queues grow. You may be able to artificially move the bottleneck from a link that is bloated to one that is not; the first demo moves the bottleneck from the broadband equipment to the home router, for example.
To reduce bufferbloat in the home (until operating systems and home routers are fixed), your best bet is to ensure your actual wireless bandwidth is always greater than your broadband bandwidth (e.g., by using 802.11n and possibly multiple access points), and to use bandwidth shaping in the router to “hide” the broadband bufferbloat.  You’ll still see problems inside your house, but at least, if you also use the mitigation demonstrated in the demo, you can avoid problems accessing external web sites.
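As a rough sketch of the shaping mitigation (interface name and rate are placeholders; use your own measured uplink bandwidth minus roughly 10–15%):

```shell
# Shape upstream traffic to just below the modem's uplink rate, so the
# queue builds here in the router, where we control it, rather than in
# the modem's bloated buffer (assumes a ~1Mbit uplink; adjust to taste)
tc qdisc replace dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 850kbit
tc qdisc add dev eth0 parent 1:10 handle 10: sfq
```

The same idea applies downstream, though policing or shaping inbound traffic is cruder and less effective.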
The most adventurous of you may come help out on the CeroWrt project, an experimental OpenWrt router where we are working on both mitigating and eventually fixing bufferbloat in home routers. Networking knowledge and the ability to reflash routers required!


CACM: BufferBloat: What’s Wrong with the Internet?

December 8, 2011

Communications of the ACM: Bufferbloat: What’s Wrong with the Internet?

February issue of the Communications of the ACM.


A discussion with Vint Cerf, Van Jacobson, Nick Weaver, and Jim Gettys

This is part of an ACM Queue case study accompanying the article by Kathie Nichols and me that appeared in the January 2012 CACM (Communications of the ACM).

