The Bufferbloat Bandwidth Death March

Latency, much more than bandwidth, governs actual Internet “speed”, as best expressed in written form by Stuart Cheshire’s It’s the Latency, Stupid rant and more formally in Latency and the Quest for Interactivity.

Speed != bandwidth, despite what an ISP’s marketing department will tell you. This misconception reaches all the way up to FCC Commissioner Julius Genachowski, is common even among technologists who should know better, and is believed by the general public. You pick an airplane to fly across the ocean, rather than a ship, even though the capacity of the ship may be far higher.

The Internet could and should degrade gracefully in the face of load; but today’s Internet does not, due to bufferbloat. Instead, performance falls off a cliff. We are lemmings on migration.

The Internet is designed to run as fast as it can, and so will fill any network link to capacity as soon as any application asks it to. We have more and more such applications, and the buffers get bigger with each hardware generation, though the links are usually operated at a small fraction of their possible bandwidth. As soon as a network link reaches 100% capacity, the usually grossly oversized buffers fill, and large delays (latency) occur, often best measured in seconds. Performance falls off a cliff, as demonstrated in my bufferbloat video, in which routine web surfing becomes 15 times slower with a single competing TCP connection (in the opposite direction!). As explained elsewhere in this blog, today’s Internet edge is filled with devices that often have ten times (or more) too much buffering, in dumb devices that do no active queue management. Performance is terrible, and everyone thinks you are out of bandwidth: but you usually are not!
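The scale of these delays is easy to estimate: a full buffer drains at the link rate, so its worst-case contribution to latency is simply its size divided by that rate. A minimal sketch, with illustrative numbers (not figures from this post):

```python
def buffer_delay_seconds(buffer_bytes: int, link_bits_per_sec: float) -> float:
    """Worst-case queueing delay added by a full FIFO buffer:
    the time the link needs to drain it completely."""
    return (buffer_bytes * 8) / link_bits_per_sec

# A 256 KB buffer ahead of a 1 Mbps uplink adds roughly two seconds
# of latency once a single bulk transfer fills it.
delay = buffer_delay_seconds(256 * 1024, 1_000_000)
print(f"{delay:.2f} s")  # 2.10 s
```

Note the delay scales inversely with link speed, which is exactly why bufferbloat bites hardest at the slow edge links of the network.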

Low latency applications, including reliable high quality teleconferencing, have now become effectively impossible in today’s Internet.

Bufferbloat kills performance wherever bandwidth drops from high to low, as it does at the edge of today’s Internet. So long as bufferbloat pervades the Internet, everyone, ISPs and customers alike, is on a joint death march to build out a network of infinite bandwidth all the way to people’s devices, as that is the only way to avoid a high-to-low bandwidth transition. This is neither a sane technological direction nor an economically viable course to hold.

But I almost despair at the current situation. Completely flawed misunderstandings of the history and root cause of today’s Internet performance problems are still driving technical and policy debates and decisions, and we do not have processes in place to avoid future problems, which may again be entirely different.

We must attack the real root cause of most of today’s Internet performance problems, and the most common cause of terrible performance is bufferbloat. Thankfully, we finally have all the tools to attack it.

4 Responses to “The Bufferbloat Bandwidth Death March”

  1. Paul Richards Says:

Totally agree. The fact is most Internet users are brainwashed into thinking that faster speeds make applications perform faster and that their Internet experience will dramatically improve. Sadly, nothing could be further from the truth.

    Here are some Popular Myths about Internet Speed:

    1. I have a 10Mbps connection so I can browse at that ‘speed’

    2. If I increase my Internet connection speed from 5Mbps to 50Mbps my Internet browsing experience will be ten times faster

    3. My 10Mbps Internet speed will be consistent no matter where I browse

    4. Speed is more important than quality because data quality does not affect my connection performance

5. Internet speed testers measure connection speed

    6. Some packet loss is acceptable if it is just a few packets

    7. My Internet connection is 100Mbps so there is plenty of Internet capacity for my business.

    8. If I increase my TCP window size I will get a faster speed

    Read this article in detail:
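Myths 2 and 8 above both come down to the bandwidth-delay product: a single TCP connection can never move more than one window of data per round trip, so beyond a certain point extra link bandwidth buys nothing. A rough sketch with illustrative numbers (not taken from the comment):

```python
def tcp_throughput_bps(window_bytes: int, rtt_sec: float, link_bps: float) -> float:
    """A single TCP flow is limited both by the link rate and by
    delivering at most one window of data per round-trip time."""
    return min(link_bps, (window_bytes * 8) / rtt_sec)

# With a 64 KB window and a 100 ms RTT, upgrading the link from
# 5 Mbps to 50 Mbps barely helps: the flow is window-limited.
print(tcp_throughput_bps(64 * 1024, 0.100, 5e6))   # 5000000.0 (link-limited)
print(tcp_throughput_bps(64 * 1024, 0.100, 50e6))  # 5242880.0 (window-limited)
```

This also shows why a larger window only helps when the round trip is long or the link is fast, not as a universal speed knob.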

  2. anonymouse Says:

    I thought one of the lessons of bufferbloat is that packet loss is in fact okay: TCP is designed to recover from packet loss and, in the absence of ECN, even relies on it to detect congestion. So sometimes it’s better to just drop a packet and wait for a retransmit. If the alternative is delaying the packet in a buffer for a few seconds, it’s probably better not to buffer that much and just drop the packet, since it ends up delaying things less overall. Of course, ECN is a better option than either of those, but it’s not one that’s always available.
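The point that TCP relies on loss as its congestion signal can be seen in its additive-increase/multiplicative-decrease behavior. A toy sketch of the congestion window (a deliberate simplification, not real TCP):

```python
def aimd(events, cwnd=10.0):
    """Toy AIMD congestion control: grow the window by one segment per
    round trip on success, halve it on a loss. The loss is the feedback
    telling the sender the path is saturated."""
    trace = []
    for ev in events:
        if ev == "ack":
            cwnd += 1.0                 # additive increase
        elif ev == "loss":
            cwnd = max(1.0, cwnd / 2)   # multiplicative decrease
        trace.append(cwnd)
    return trace

# Without a timely loss the window (and the queue behind it) just grows.
print(aimd(["ack", "ack", "loss", "ack"]))  # [11.0, 12.0, 6.0, 7.0]
```

A bloated buffer delays that loss signal by seconds, so the sender keeps growing its window long after the link is saturated.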

    • gettys Says:

      *Timely* packet loss is required for proper behavior when a link is saturated. Seconds of delay is not timely… Such buffering does not ever help performance, but only begets delay.

      The myth is that all packet loss is evil: when it is in fact *necessary* for proper functioning of the network.

  3. gettys Says:

    And this article is flawed: in fact, packet loss (or ECN, not typically enabled) is *necessary* for correct operation of the network at saturation, which happens almost all the time.

    It’s a good example of myths, begetting myths.
