Mitigations versus Solutions of Bufferbloat in Broadband

I have distinguished in my writing between what I call “mitigations” and “solutions”.

  • mitigations are actions we can take, often immediately, which improve (possibly greatly) the current grim situation.  Since they may only work some of the time, may require conscious tuning and action by network operators and users, or have other limitations, they won’t work in some circumstances and won’t necessarily be implemented everywhere. Often these mitigations come at some cost, as in the case of today’s posting below.
  • solutions are full solutions to a problem, bringing behavior to something approximating optimal.  Sometimes a mitigation that can be applied widely within an ISP amounts to a solution, even though it requires thought there. Solutions “just work” for everyone.

But observed facts (e.g. RED or other AQM is far from universally used; more about this in a future post) show that anything that does not “just work” is often distrusted and under-used (and seldom enabled by default). Such a solution is seldom the optimal one we should be looking for: really “solving” the problem once and for all.  As good engineers and scientists, we should always be striving for “just works” quality solutions, which we don’t have for bufferbloat in all its forms.

The full “solution” for the entire Internet is going to be hard; we need to solve too many different problems (as you will see) at too many points in all the paths your data may traverse to wave a wand and eliminate bufferbloat overnight.  Some of the point solutions will actually require replacement of hardware, and the research, engineering and economics of such hardware will take time.  Does that mean we should do nothing?  Of course not: we can immediately make the situation much better than it is, particularly for consumer home Internet service. And remember, your competitor will eventually beat you if you sit on your hands.

Gamers and others have been mitigating bufferbloat in broadband for years. Read on. You’ll suffer much less. Mitigation of home router bufferbloat itself will be tomorrow’s installment.

Mitigating Broadband Bufferbloat in the Home Router

The best solution will be to remove the grossly bloated buffers properly, and not to have to hack around the problem in our home routers.  ISP’s and their vendors may be able to partially mitigate existing equipment (by cutting buffers to something closer to sane sizes in the CPE); these mitigations will take time, and are not something you can go do today, yourself.  They are also technology dependent, and what can be done there is probably best taken up by the equipment vendors and standards bodies.  As with many mitigations, they may come with costs. In the downstream direction, ISP’s not running RED on their head ends may turn it on. So when your network changes, you will need to repeat this process.

I remind you that bufferbloat can occur elsewhere in your path as you make these tests: some ISP’s and content providers do not run with any queue management enabled, so you may have bottlenecks beyond the last mile broadband connection to confound you (your wireless hop to your home network, and anywhere beyond your broadband headend). My observations of Comcast’s network beyond the CMTS have been very, very clean, confirming what I was told to expect when I had lunch with Comcast.  You may not be so lucky with your ISP. That my test site is on a well run network at MIT, peering directly with Comcast, has certainly made my life easier. More on the core Internet topic in the future.

Our quest here is to try to overcome what has already happened: we usually have gross bufferbloat in the broadband provider’s equipment, which may or may not be customer replaceable. If it is customer replaceable (e.g. cable modems), we can hope that in a year or three the market may start to provide routers and broadband gear that implement some rational queue management and behave better.

Here’s what you can do today, if your router supports it.  If not, you can go buy a new home router (or install open source router software) today that can mitigate the problem for little cost ($100 or less). You’ll see why this is a mitigation at best, rather than a solution: it isn’t something you’re going to ask your aged parents to try.

Many mid-range or high end home routers have traffic shaping features. They may be called “traffic shaping”, or “QOS” (Quality of Service).  Some routers I’ve seen (and I’ve seen quite a few over the last years) have only a single knob to set bandwidth in both directions; those aren’t particularly useful.  You want one which lets you adjust bandwidth in each direction independently. I’ve experimented with several routers: your mileage will vary. Some commercial routers work really well, some less so.   Sometimes these routers are marketed as “gamer routers.”  There is probably some gamer’s web site someplace that goes into this in gory detail, with reviews of different routers.  If so, please let me know. Facilities also exist in the open source router projects of various sorts, e.g. OpenWrt, DD-WRT, Tomato, and Gargoyle. More on this topic below.

The WISEciti research project is also studying the behavior of home routers: if you have an old router, they may be interested in giving it a home.

Our goal is to keep the buffers in your upstream broadband link from filling, turning them back into “dark buffers”. We can avoid bufferbloat in the broadband device by transmitting data to it slightly less fast than the broadband device will accept, and ensuring the router forwards data slightly less fast than the broadband device will transmit it. Formally, this is called “traffic shaping”.  Gamers have been doing this for years, as they are very latency sensitive, and they empirically discovered that limiting your bandwidth in this way has good effects on observed latency. Note you should do traffic shaping before you worry about classifying data (e.g. ensuring your VOIP gets priority over TCP flows), as the goal here is to mitigate the upstream broadband device’s faults as much as possible.
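On a Linux-based home router, the upstream half of this shaping can be sketched with a couple of tc commands. This is a minimal sketch, not a tuned configuration: the interface name eth0 and the 800kbit rate are assumptions you must replace with your own interface and a value just under your measured upstream rate.

```shell
#!/bin/sh
# Sketch: shape upstream traffic to just below the broadband uplink rate,
# so queues build in the router (where we can manage them) rather than in
# the broadband device's bloated buffer.  DEV and UPLINK are illustrative.
DEV=eth0        # interface facing the broadband device
UPLINK=800      # kbit/s; set this just below your measured upstream rate

/sbin/tc qdisc del dev ${DEV} root 2>/dev/null || true
/sbin/tc qdisc add dev ${DEV} root handle 1: htb default 10
/sbin/tc class add dev ${DEV} parent 1: classid 1:10 htb rate ${UPLINK}kbit
# SFQ underneath keeps any single bulk flow from starving the rest.
/sbin/tc qdisc add dev ${DEV} parent 1:10 handle 10: sfq perturb 10
```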

Some ISP’s provide a home router as part of their service that their wires plug into directly. I have no idea whether these routers are usable for the following process; I presume not in the discussion below. In either case, you need the ability to perform traffic shaping.

Plug your router into the ethernet on broadband gear, or at worst, into the ethernet jack of your home router if that is included in your broadband service.  We’re trying to mitigate the broadband link problem here, not fix the router’s bufferbloat, which is a later topic.

I recommend monitoring your home connection via smokeping while you try this process.  It isn’t clear to me that the bandwidth you get from a broadband carrier is a constant over time, as load occurs. I haven’t explored carefully what happens when my ISP’s network gets busy.

Start “pinging” some nearby site (best is not an ISP router, if only because they may be loaded at times and process ping on the slow path).  Note the latency. Saturate the link in the upstream direction (e.g. by copying a file someplace, or uploading a video somewhere; you will probably be able to figure out some way to do so). Note its behavior: you’ll very likely find that the latency grows to some value of hundreds of milliseconds or even seconds. You’ll see the latency climb gradually, and then start varying (that’s the behavior you see in my TCP traces).
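If you don’t have smokeping handy, two shells are enough for a crude version of this measurement (the hostname is a placeholder for whatever nearby site you chose):

```shell
# Shell 1: watch latency continuously against your chosen nearby site.
ping netmon.example.com

# Shell 2: saturate the uplink while the ping runs, e.g. with an upload:
scp big-file.bin user@netmon.example.com:/tmp/
# Watch the ping times in shell 1 climb from their idle value into
# hundreds of milliseconds (or seconds) as the broadband buffer fills.
```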

By using traceroute and ping on the path traceroute exposes, you can figure out which hop is the bottleneck.  If it is not the broadband hop, then you need to find some other site to work against.

Next, find out what your provisioned bandwidth is for both directions, nominally.  This is what you pay for.

Enter half these values into your home router in the bandwidth shaping or QOS form, as per your router, having enabled this feature.  You may or may not have to reboot your router whenever you adjust the values.  Some routers attempt to determine the available bandwidth automatically in some fashion; I have no idea how successful they may be, and expect that features like Comcast’s PowerBoost will confuse them, so manual configuration is recommended unless you find the router “does the right thing” automatically. I also expect that some routers work better than others in this area.

Again load your link.

Your latency should be only slightly higher than when your line is idle.  Exactly how much seems to depend on the router.

You can try approaching or even exceeding your provisioned bandwidth by binary search; when you exceed the available bandwidth, you’ll see the latencies start to rise (slowly).  Since the rate at which the buffers fill is determined by the difference between the broadband bandwidth and the rate at which your router forwards data to the broadband device, patience is in order to tune the value. Complicating this testing is that some ISP’s dynamically change the available bandwidth (e.g. Comcast’s PowerBoost).  You actually do often have more bandwidth temporarily available (if it is available) early in a connection, requiring yet further patience. Did I tell you that you need patience?
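The binary search above is just repeated halving of the interval between the last rate that kept latency low and the last rate that let it climb. A trivial helper (purely illustrative; the kbit/s figures are made up) makes the bookkeeping explicit:

```shell
# next_rate GOOD BAD: print the next shaping rate (kbit/s) to test,
# halfway between the last rate that kept latency low (GOOD) and the
# last rate at which latency climbed (BAD).
next_rate() {
    echo $(( ($1 + $2) / 2 ))
}

next_rate 4000 8000    # prints 6000: the next rate to try
# If latency stays low at 6000, try next_rate 6000 8000 (7000);
# if it climbs, try next_rate 4000 6000 (5000).  Repeat, patiently.
```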

Do the same process for downstream bandwidth.  There may or may not be similar buffers in the downstream direction in the broadband plant (head end and CPE), and ISP’s may or may not be running RED to control queues in the broadband “head end” equipment itself.  Your mileage may vary.
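Downstream is trickier: your router sits after the bottleneck, so it cannot shape, only police, that is, drop packets that arrive faster than a configured rate so TCP backs off before the head end’s buffer grows. A minimal sketch (again, eth0 and the 6000kbit figure are assumptions, not recommendations):

```shell
#!/bin/sh
# Sketch: police inbound traffic to just below the downstream rate.
# DEV and DOWNLINK are illustrative; use your own interface and a rate
# just below what you measured downstream.
DEV=eth0
DOWNLINK=6000   # kbit/s

/sbin/tc qdisc add dev ${DEV} handle ffff: ingress
# Match all IP packets; drop anything beyond DOWNLINK so senders slow
# down before the head end's buffer fills.
/sbin/tc filter add dev ${DEV} parent ffff: protocol ip u32 \
    match u32 0 0 police rate ${DOWNLINK}kbit burst 50k drop flowid :1
```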

This process works better on some routers than on others, and what value you should try is not clear.  With one router I tried, the behavior on Comcast was exactly what I would want (low latency) when the router was set to the provisioned bandwidth (Comcast claims they slightly over-provision their customers’ accounts); on another router, I had to reduce the values by more than 30% from my provisioned bandwidth (which may or may not reflect reality).  Even so, on the router I am using today I end up with 20ms latency (I get less than 10ms when idle). Contrast this smokeping with the one in the previous posting: during this one today, I was performing the same kind of rdist to MIT that I performed when I found the smoking gun. Not perfect, but way more than an order of magnitude improvement, and note that the packet loss has stopped.

Smokeping of my house after broadband bufferbloat mitigation

Mitigated broadband smokeping

Different routers may not shape the bandwidth to the values you nominally set; before complaining to an ISP that you are not getting what you pay for, please do your homework and verify the actual bandwidth you get out of your router (easier said than done, though Dualcomm Technologies makes a cheap port mirroring switch you can afford). The router may not have computed the transformation from the UI to the operating system correctly, may have forgotten to compensate for packet overhead, or its bandwidth shaping may just be broken. And remember, your ISP’s bandwidth includes your packet overhead: your “goodput” should be slightly lower than the marketing BPS of the ISP.
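To see how large framing overhead can be, consider an ATM-based DSL link, where every packet is carried in 53-byte cells holding 48 bytes of payload each. A quick back-of-the-envelope sketch (the 18-byte bridging encapsulation is one common case, not universal):

```shell
# Wire cost of a full-size packet on an ATM-based link: the packet plus
# (assumed) 18 bytes of encapsulation is rounded up to whole 48-byte
# cell payloads, each costing 53 bytes on the wire.
pkt=1500
encap=18
cells=$(( (pkt + encap + 47) / 48 ))    # round up to whole cells
wire=$(( cells * 53 ))
echo "${cells} cells, ${wire} wire bytes for ${pkt} payload bytes"
```

That is roughly 13% overhead before TCP/IP headers are even counted, which is one reason shaping to the nominal sync rate can still leave the broadband device’s queue growing.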

Educating all vendors and network operators about bufferbloat is in order, and exercising your pocket book when selecting hardware and services is essential to recovery from bufferbloat. But let’s complain only about the right problem, in the right directions, and politely please; the mistake is so widespread we are all Bozos on this bus. Report problems to the router vendor if they are at fault, and only bug the ISP if you determine they aren’t actually providing what you pay for.  No one appreciates angry support calls, and calls about problems that aren’t people’s fault and over which they have no control are very frustrating. I am hoping and presuming my audience is primarily technical, and will be a part of bufferbloat mitigation and solution, rather than creating a support nightmare for all involved.

Note that this mitigation may also be partial; congestion on the network interconnecting the broadband head-ends might be reflected into the broadband hop itself at times of congestion.

This mitigation has come at a cost: you have defeated any PowerBoost style bandwidth boost your ISP has been kind enough to give you, and possibly given up a fraction of your rated bandwidth. This hurts, as the Internet tradition is to share when resources are available, and to be fair to everyone when there are not enough resources available.  Short of some attempts (which I haven’t had time to try), such as Paul Bixel’s active QOS control implementation found in the Gargoyle open source router, you are out of luck. I’ll report back on my experiences with Gargoyle when I have time. Alternative mitigations, such as the Remote Active Queue Management mentioned in Nick Weaver‘s comments to a previous entry here, may become feasible with time.

For me, the mitigation is a no-brainer: the network at home actually *works* even when others in my house are using it.  With no mitigation, we would periodically be stepping on each other. Additional bandwidth at the cost of tolerating a broken network that I can’t use for some of my essential services is a very poor trade.  And if you are a gamer, it may save your life ;-).

QOS and Telephony

If you succeed at mitigating bufferbloat in your broadband connection, you have further challenges. You may have bufferbloat in the home router itself, particularly over wireless hops (as I have observed and noted earlier).  Running an open source router may allow further mitigation of problems in your home router; but this post is long enough as it is and dinner time is fast approaching, so I’ll leave discussing mitigation of bufferbloat in home routers for another day.

Let’s first talk about QOS for telephony for a moment. Note that all this is essentially what Ooma does: they put their box ahead of your home network, reserve bandwidth for VOIP, and classify VOIP traffic ahead of other traffic. I used one of these before it was repeatedly damaged by lightning.

Before you have mitigated broadband bufferbloat, any QOS policy you may set in the router may very well be (almost certainly is) ineffective when your broadband connection is saturated. And the router itself may also suffer from bufferbloat (which is why this all can be so confusing; this bear of little brain has often been very confuzed in this quest). But once you have successfully mitigated broadband bufferbloat by bandwidth shaping the broadband hop, you can hope that enabling QOS for your non-carrier-provided VOIP and Skype might work OK (when the home router itself is not feeling bloated). I expect it is wise to do so even though it should not be necessary, for reasons alluded to in a previous entry that I will elaborate on in a future blog entry. Browsers can cause serious jitter, much more than in past years, and are so worrying they are part of what I lose sleep over.  I’ll circle back to the browser problem in a week or so.
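Once the broadband hop is shaped, classifying VOIP ahead of bulk traffic becomes meaningful. As a hedged sketch of what that looks like in tc terms: voice gear commonly marks its packets with DSCP EF, and a filter can steer those into a priority class. The sketch assumes an htb shaper is already installed as qdisc 1: with a high-priority class 1:10; those handles, and eth0, are illustrative.

```shell
# Steer DSCP EF (46, i.e. ToS byte 0xb8) into high-priority class 1:10.
# Assumes an existing htb qdisc 1: with class 1:10; handles are examples.
/sbin/tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip tos 0xb8 0xfc flowid 1:10
```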

Some of the open source routers (and Linux itself) have very fancy traffic classification, queue management and allocation facilities; these may not be enabled, or properly set up, even in the open source routers (depending on the distro). Go wild. Have fun.  Find and fix bufferbloat bugs with and in the open source routers (I’ve found that they have the same problems I found on my laptop, as covered in fun with your switch and fun with wireless, particularly since they seem to have only worried about the broadband link).  Show everyone what can be done, so the industry catches up faster (and more become free software converts!).


32 Responses to “Mitigations versus Solutions of Bufferbloat in Broadband”

  1. Stefano Rivera Says:

    > By using traceroute and ping on the path traceroute exposes, you can figure out which hop is the bottleneck.

    There’s a nice tool, mtr (mtr-tiny in Debian/Ubuntu) which gives you a live traceroute with ping statistics for each hop.

    Thanks for an interesting and entertaining series. I used to do QoS at home for this problem (thanks LARTC) when I was on the wrong end of a 64k ISDN link, and it made a world of difference (concurrent bittorrent and interactive use). These days I’m just lazy and rate-limit any long-running rsync job.

    • gettys Says:

      Cool. In all my wanderings, I hadn’t stumbled across mtr. Thanks!

    • gettys Says:

      Rate limiting yourself is not feasible for most people; they would have no clue.

      And rate limiting your kids, while possible in some open source routers, is a major admin headache and not possible for most.

      And things you don’t expect catch you, as a google chrome crash upload did to me one evening.

      We gotta get this all fixed.

  2. walken Says:

    One problem when setting up traffic shaping is to accurately model the rate that can be achieved on the uplink.

    In my case (DSL connection, good noise margin on my line) the raw rate on the DSL link is constant, but the modem has to encapsulate ethernet packets into ATM cells that are sent on the DSL link. This causes a variable overhead (function of the packet size) which iproute was not readily able to model (last time I looked, which was years ago).

    The following change created an accurate model for my link (but I did not bother to make it generic):

    --- iproute/tc/tc_core.c.orig 2004-07-30 13:26:15.000000000 -0700
    +++ iproute/tc/tc_core.c 2005-11-27 18:15:45.000000000 -0800
    @@ -62,11 +62,18 @@
     	for (i=0; i<256; i++) {
    +#if 0
     		unsigned sz = (i<<cell_log);
     		if (overhead)
     			sz += overhead;
     		if (sz < mpu)
     			sz = mpu;
    +#endif
    +		unsigned sz = ((i+1)<<cell_log);
    +		if (sz > 1514)
    +			sz = 1514; // clamp MTU size packets
    +		sz += 18; // DSL bridge encapsulation.
    +		sz += 47; sz -= sz % 48; // round to next ATM cell.
     		rtab[i] = tc_core_usec2tick(1000000*((double)sz/bps));
     	}
     	return cell_log;

    • gettys Says:

      Yes, and if you think this was bad, think about wireless and losses where the goodput is constantly varying.

      We have some real challenges ahead.

  3. Simon Says:

    Thankfully, walken’s hack isn’t needed on modern Linux (I’m using Debian). I use the following shaping script on my DSL line; note the “atm overhead” bit:

    set -e
    # Change to 10 when doing PPPoA, VC Mux
    # See
    # Note that uplink rate is in sync bits per second
    case ${DEV} in
    dsl0)
      UPLINK=$(cat /sys/class/atm/solos-pci0/parameters/TxBitRate)
      ;;
    dsl1)
      UPLINK=$(cat /sys/class/atm/solos-pci1/parameters/TxBitRate)
      ;;
    *)
      echo "Error - I can only shape the solos lines"
      exit 1
      ;;
    esac
    /sbin/tc qdisc del dev ${DEV} root &> /dev/null || true
    /sbin/tc qdisc add dev ${DEV} root handle 1: htb default 30
    # Priority classes:
    # 1:10 is urgent stuff, such as VoIP, TCP acks, and pings - these get the full link if they need it
    # 1:20 is normal stuff - this gets half the uplink
    # 1:30 is stuff generated by the router. 1/10th the uplink, low priority
    /sbin/tc class add dev ${DEV} parent 1: classid 1:1 htb rate ${UPLINK}bit burst 10k linklayer atm overhead ${OVERHEAD}
    /sbin/tc class add dev ${DEV} parent 1:1 classid 1:10 htb prio 10 rate ${UPLINK}bit ceil ${UPLINK}bit burst 10k linklayer atm overhead ${OVERHEAD}
    /sbin/tc class add dev ${DEV} parent 1:1 classid 1:20 htb prio 20 rate $(( ${UPLINK} / 2))bit ceil ${UPLINK}bit burst 10k linklayer atm overhead ${OVERHEAD}
    /sbin/tc class add dev ${DEV} parent 1:1 classid 1:30 htb prio 30 rate $(( ${UPLINK} / 10))bit ceil ${UPLINK}bit burst 10k linklayer atm overhead ${OVERHEAD}
    # They're all SFQ classes
    /sbin/tc qdisc add dev ${DEV} parent 1:10 handle 10: sfq perturb 10
    /sbin/tc qdisc add dev ${DEV} parent 1:20 handle 20: sfq perturb 10
    /sbin/tc qdisc add dev ${DEV} parent 1:30 handle 30: sfq perturb 10
    • Jesper Dangaard Brouer Says:

      Hurray — someone is actually using my TC options “linklayer” and “overhead” :-)

      Notice this is standard: included in kernels since 2.6.24, and in the TC utility since 2.6.25.

      Choosing the right overhead can be difficult, because the overhead is dependent on the ADSL encapsulation used on your line. There is a table with an overview in table 5.3 on page 53 of my thesis.

      –Jesper Dangaard Brouer

  4. Jim Gettys on bufferbloat - Untangle Forums Says:

    [...] For me the entry on self-initiated mitigation steps was particularly interesting:…t-mitigations/ I wonder whether there is anything an Untangle box could do to help with the iterative process of [...]

  5. Kurt Garloff Says:

    The problem with queues building up at your DSL router for upstream transmission has been identified as killing any attempt to do QoS, and it creates the latency that you want to avoid.
    Bert Hubert gave a great presentation on this at Linux-Kongress 2001.

    I remember that the precondition to do QoS was clearly spelled out: you need to build the queues on your router and manage them there, and keep the DSL modem from doing it. Thus the first step is to limit the outgoing bandwidth on the router to your upstream bitrate minus a small delta.

    Looking at the slides, I actually see the focus much more on investigating queuing disciplines — but the three points I took home are that *I* need to throttle upstream data (as opposed to others doing it), that I should not even bother considering anything other than HTB, and that I can’t do much about downstream (except prioritizing the ACKs).

    Throttling upstream data myself and managing the queue that builds up with HTB produced such convincing results (like being able to work normally in an ssh session when uploading stuff concurrently which was completely unusable before), that one of our engineers (Uwe Gansert here at SUSE) added a module to SuSEfirewall2 to make it easy for users to use this by just filling in their upstream bandwidth. Works great!

    I think the approach is slightly different from yours though:
    - You think we should avoid significant queues and drop packets and have TCP congestion algos do its job reacting to dropped packets (or ECN) — resulting in less congestion and thus better latency for everyone
    - We actually do build a queue but manage it — resulting in best latency for those we prioritize

    One final remark:
    On a lot of DSL modems/routers, you can install the alternative RouterTech firmware — that would certainly be a good place to fight excessive buffers (and maybe do some QoS on the buffers that we allow).

    • gettys Says:

      Remember, if queues are short, you are getting good quality all the time. So we really don’t want much of a queue. Queue management is vital to both have good latency, and to avoid overall congestion in the network.

      What QOS helps for is making sure that the right apps/people get the relative service they need. Having VOIP traffic compete against a burst of web TCP traffic is not what we want, particularly now the web browser/server folks have gone off the deep end.

  6. ghira Says:

    Look at the various queueing options available on Cisco routers. In particular “LLQ”. This does work on ADSL interfaces, though you need
    to tell your Cisco router that the ADSL interface is running on a non-default kind of ATM PVC. “vbr-nrt” is the option everyone seems to go for.

    You can then do all sorts of things. e.g. tell the router that certain
    kinds of packets get transmitted first any time a choice needs to be made (up to a certain bandwidth at least) which is good for voice stuff.

    You can also “guarantee” certain amounts of bandwidth to various
    other classes of traffic and even attempt to be “fair” to individual
    flows within those classes.

    It’s obviously possible to go way too far with this sort of thing, and I’m not saying that other manufacturers don’t do it. It’s just that I have done
    it successfully with Cisco routers such as 877s.

    I thought it was fairly standard knowledge that if you handle everything with a single FIFO queue with tail-drop, things will not be great.

    I’ve heard that you can get pretty good results just by giving priority
    to packets below a certain size, so that ntp, dns queries, attempts
    to start new connections, acks, voice packets etc. all get transmitted
    ahead of the large packets in outgoing email, file transfers etc.

    • gettys Says:

      I also need to repeat:

      Traffic classification is orthogonal to AQM.

      If you don’t signal congestion at a congested hop in the network (by either packet drop or ECN), there is no way the end points will slow down and keep the queue size sane and transport protocols will fill the buffers at the bottleneck. For whatever class of service that traffic is classified in, the latencies will go to hell.

      So traffic classification is a useful tool, but fundamentally not a solution in any sense for bufferbloat. For that, you need to enable some AQM (e.g. RED). All buffers need to be managed.

      • Alex Yuriev Says:

        “If you don’t signal congestion at a congested hop in the network (by either packet drop or ECN), there is no way the end points will slow down and keep the queue size sane and transport protocols will fill the buffers at the bottleneck. ”

        Let me guess, you think backbone routers and core switches look inside the packets?

        • gettys Says:

          No. AQM algorithms such as RED, Blue, etc do not require inspecting the contents of packets, and are implemented in high speed routers. In fact, they work probabilistically.

        • Alex Yuriev Says:

          Try again.

          At wire speeds we had better be looking at only the relevant portion of the IP header — in most cases just the portion with the destination address. The moment you start looking at anything other than the destination address you are adding complexity, bugginess and slowness.

  7. Andrew Says:

    I worked at Ruckus Wireless for a few years. Much of their wireless home router product ‘special sauce’ was in QOS. Their target market was voice and video over Wifi which meant a whole lot of shaping, early drop (from the head of the queue), etc. Sold a lot of units to service providers for roll out all over Europe and Asia. The US is still a ways back when it comes to converged networks.

    So, yes, some of the equipment providers are aware of the issues on the edge. But as has already been pointed out, unless you see ‘voice/video/gaming’ in the marketing literature for a product you are likely to have large buffers and long latencies.

    Today, packet drop is viewed as worse than long latency by all the standards, testing suites, etc. Large ISPs will almost immediately throw out any solution that drops a packet in a test…even when it means TCP performance is increased. So, we equipment providers end up adding buffers just to make the cut to get ISP buy-in. That means it is up to the end user to fiddle with the default configuration (the one that passed the ISP’s test suite).

    So, today, the motivation for TCP performance isn’t there at the ISP level. Get them to stop testing L2 packet drop and start testing TCP performance and the equipment provider market will quickly respond by making the changes noted here into default behavior. The code isn’t hard…as has been pointed out. But if I enable it, I can no longer sell my product to or through an ISP.

    I no longer work for Ruckus Wireless and of course I can’t make claims on their behalf…but I still use their product at home…as do all of my relatives ;-)

    • gettys Says:

      As I keep having to point out (and have now generated an FAQ page), QOS does not solve the bufferbloat problem, though it can be helpful for specific applications. We can mitigate the problem somewhat by fixing some of the unfortunate tuning in Linux, but robust solutions are considerably harder.

      And you have no QOS knobs available in broadband as I’ve also pointed out.

      Fundamentally, you have to manage the buffers.

      Worse yet, it’s pretty hard on 802.11 or other wireless technologies: Van Jacobson says classic RED won’t hack it, though he thinks there are things that will.

      Most of the ISP’s, believe it or not, have been blissfully unaware of bufferbloat, not to mention the home router market.

      And yes, until we expose the problem (which is what this blog is attempting to do), and get people testing for it, we won’t get the market place to move. Dave Clark, myself, and others are trying to help on that end too.

      We must move from strictly marketing on “bits per second” to also “operations per second”. By this metric, my home network is often about 100 times better than it was when I started on this last summer.

  8. ghira Says:

    “Traffic classification is orthogonal to AQM.”

    Oh, I agree.

    I’m not just talking about classifying the traffic and then
    not doing anything different to the different classes. If you have a
    “priority” class, traffic in _that_ will experience reduced latency
    even if buffers in general are very full. (Assuming the priority traffic
    isn’t so voluminous that you might as well not bother, obviously).

    I use RED, but not in all traffic classes.

    For the sake of argument, let’s say I have a 400k ADSL line.

    I could imagine doing something like:

    100k voice, priority (fifo)
    200k citrix (fair-queue)
    “everything else” (fair-queue, RED)

    in a case where I have a small number of IP phones, some citrix
    terminals which I care about, and I’m not that bothered about what
    happens to anything else.

    I don’t want to discard voice or citrix traffic unless I have no
    choice, but I’ll happily mistreat the “everything else” class.
    since citrix isn’t a priority class, it can use more than 200k if there’s
    spare bandwidth available.

    At _any_ time there’s a packet in the voice queue when I need
    to decide what to send next, voice wins, unless it’s been over 100k
    lately, in which case I drop it. (Let’s hope I chose 100k to be more
    than the voice traffic I ever expect to see.)

    Sure, I’m using RED, but not everywhere. And even if I turned
    off RED in the “everything else” class I’d expect the damage to
    be mostly packet loss in all classes due to buffers filling up.
    I should try this and see. Obviously I’m assuming the 200k
    assignment to citrix is generous: I’m trying to put suffering
    into the “everything else” class as far as possible.

    FWIW there’s a hardware transmit queue on Cisco _after_ the
    mechanism that orders packets based on stuff like the above, and that
    is FIFO. As you would expect, Cisco’s advice is to make that as small
    as possible. On a slow enough ADSL line, enough to take one max
    size packet. If you’re only transmitting at 400k, having to interrupt
    for every packet is not the end of the world.

    Sorry about my examples being very cisco-centric, but it’s the only
    context I do this sort of thing in.

    • Alex Yuriev Says:

      For the sake of argument, let’s look at a little router sitting in the core of a network carrying a usenet-over-web service, a couple of VOD customers, and a little VOIP carrier:

      4x 10G/sec to core1, peak utilization 31Gbit/sec, average 17Gbit/sec
      2x 10G/sec to switch-vcore, peak utilization 18Gbit/sec, average 12Gbit/sec
      8x 1G/sec to customer aggregation switches, between 20Mbit/sec and 800Mbit/sec per port

      Propose how to classify traffic, or how to signal congestion, when one of the transit providers’ 10Gs is congested.

  9. ghira Says:

    Sorry, didn’t really read carefully enough.

    “For whatever class of service that traffic is classified in, the latencies will go to hell.”

    the fair-queue thing is sort of a cheat, in that it turns a single class
    into a whole bunch of sub-classes.
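
    The “cheat” amounts to hashing each flow’s 5-tuple into its own sub-queue,
    so within one class a bulk transfer queues behind itself rather than behind
    everyone else. A minimal sketch of the flow-to-sub-queue mapping (the queue
    count and hash choice are arbitrary assumptions, not what any router uses):

```python
import hashlib

N_SUBQUEUES = 256  # arbitrary; real fair-queue implementations vary

def subqueue_for(src, dst, proto, sport, dport):
    # Hash the flow 5-tuple so all packets of one flow land in the same
    # sub-queue, turning a single class into many per-flow sub-classes.
    key = f"{src}|{dst}|{proto}|{sport}|{dport}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % N_SUBQUEUES
```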

    • gettys Says:

      And I can self-congest. Go look at the smokeping plot I published the day I knew I had bufferbloat nailed at home. The gaps where latencies are good are where I was sooo sick of the problem, I stopped my test program just to read some mail and/or surf the web for a few minutes.

      Classification is no substitute for queue management. In fact, there are good arguments that doing queue management over all classes simultaneously may be the easiest/best solution.

  10. Joe Says:

    “Plug your router into the ethernet on broadband gear directly, or at worst, into the ethernet jack of your home router.”

    What are you trying to say here?

    • gettys Says:

      You are right: that’s badly confusing.

      Hopefully this clarifies my intent:

      Plug your router into the ethernet on broadband gear, or at worst, into the ethernet jack of your home router if that is included in your broadband service.

    • sellers Says:

      gettys Says:
      January 7, 2011 at 8:17 pm
      You are right: that’s badly confusing.

      Hopefully this clarifies my intent:

      Plug your router into the ethernet on broadband gear, or at
      worst, into the ethernet jack of your home router if that is
      included in your broadband service.

      Forgive me, but that is even more confusing!

      That sentence tells us that in the worst case we should plug our router into our router. Was that really your intent?

      Perhaps you meant:

      Plug your PC into the ethernet on broadband gear, or at worst, into the ethernet jack of your home router if that is included in your broadband service.

      • gettys Says:

        Sometimes you may not be able to mess with a router provided by a broadband provider (as I found at my in-laws’, who didn’t know the password to the router Verizon provided). So you might actually need another router capable of bandwidth shaping on top of what the broadband carrier provided…

        Someplace you have to be able to limit the bandwidth to keep the buffers from filling.

  11. Sherwood Says:

    Oh to have broadband!

    My link to the world is through a satellite link. My MINIMUM latency is 600 ms just from speed-of-light delays (two round trips to geosynchronous orbit). 700-800 ms is typical good performance, 2000 ms is common, and on occasion I see 5000 ms.

    On top of that: traffic shaping by my ISP works per TCP connection. It actually makes sense: the first few hundred KB of a connection are pretty fast, then it’s throttled back. My ISP claims this makes browsing reasonable: click on a link and it opens soon, at the expense of downloads, whose speed most people are unlikely to care about.

    On top of that: they have a policy of throttling if you are a Packet Pig. While my nominal bandwidth is 2 Mb/s down, 0.5 Mb/s up, if I use more than 40 MB in an hour, I am throttled back to roughly modem speed for the remainder of that hour and all of the next. Throttling policies vary depending on demand on my particular beam. I’ve found that a major download started after 11 p.m. on a weeknight is less likely to be throttled.

    2 Mb/s down works out to about 0.2 MB/s of net throughput after subtracting overhead, or about 12 MB/min, which is 0.7 GB/hour. Would be nice.

    The net result is that our bulk download bandwidth is roughly 0.5 GB/day. Max.

  12. Karl Says:

    Thank you!

    I’ve been a system administrator for ~12 years. But until I read your analysis, I thought that slow congested links were just the way of the world on home broadband. Inspired by your posts, I finally got around to installing OpenWRT and configuring QoS.

    The difference is astounding. It’s like night and day.

    I’ve got a ~400kbit up / ~2000kbit down link (as measured by M-Lab) from Sonic DSL. Without QoS, saturating the link with an SCP upload causes ping times (to the other end of the link) to rise from 15 msec to 1-2 sec. With QoS set appropriately, saturating the link causes ping time to rise from 15 msec to only 25-30 msec.

    This is huge. Thanks again for your research!

  13. ac Says:

    Forgive my ignorance, but to me this all sounds very complicated. Can’t the OS network stack on each device 1) sample the latency to the next hop and 2) get the current theoretical link speed from the wireless adapter, and then use those two pieces of information to actively limit the upload rate to that NIC?

    Ok maybe it’s more complicated than that but if I were tasked to fix this problem that might be my first approach.

    • gettys Says:

      You have to signal the sending end (e.g. the TCP sending the data) to slow down, by packet drop or ECN, else it will continue to go faster and faster.

      So just rate limiting doesn’t fix the servo system, which the buffer is a part of.
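
      What the drop (or ECN mark) buys you can be sketched with the classic RED curve: the fuller the queue is on average, the more often the AQM signals senders to back off, and TCP’s congestion control reacts to that signal. A minimal sketch; the thresholds here are arbitrary example values, not tuned settings:

```python
# Classic RED: drop (or ECN-mark) probability grows with the average
# queue depth, signalling senders to slow down before the buffer fills.
# Threshold and max_p values are illustrative only.

def red_drop_prob(avg_queue, min_th=5.0, max_th=15.0, max_p=0.1):
    if avg_queue < min_th:
        return 0.0   # queue short: no congestion signal needed
    if avg_queue >= max_th:
        return 1.0   # queue persistently long: drop everything
    # linear ramp between the two thresholds
    return max_p * (avg_queue - min_th) / (max_th - min_th)
```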

    • Sherwood Botsford Says:

      I’m on satellite internet. At times the ping time to the service provider’s DNS server (4 hops: Mac > router > satellite modem (bridge mode) > gateway > DNS server) can get as high as 15 seconds. At 2 seconds, most computers’ DNS lookups return “no server responded”.

      Not clear to me how I can manage this at my end.

      • gettys Says:

        I’d run a local DNS caching server, to begin with. Dunno what’s involved in doing that on a Mac, however. This is part of why we run BIND in CeroWrt; it’s configured as a local caching DNS server, one piece of what CeroWrt gets us.

        Whether you can play the same mitigation game we do on broadband isn’t clear to me: I suspect satellite links have highly variable bandwidth, much more so than PowerBoost on cable. But you can disabuse me of that presumption easily enough… You can investigate using traceroute and ping to see where your delays predominate.

        • Sherwood Botsford Says:

          Running a local cache is easy: MacPorts + maradns. Takes about an hour to catch up.

          However almost every DNS record specifies a cache time of 24 hours or less. So if the last time you logged into gmail was yesterday, the cache has likely expired.

          A useful adjunct to a local cache would be a script that analyzed the DNS server’s log, and anything appearing in the last N days’ logs more than N times would be looked up once a day. (You have to account for the updater script itself adding entries to the log file.) The effect would be that sites you visit regularly would be kept in cache, even if “regularly” means once a month.
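
          The counting half of that script might look like this. A minimal sketch only: the log format (queried name as the last field of each line) and the threshold are assumptions, and a real script would also have to skip its own queries as noted above:

```python
from collections import Counter

# Count queried names in the resolver's log and report the popular ones,
# which a daily cron job would then re-resolve to keep them cached.
# Log format (name as last whitespace-separated field) is an assumption.

def names_to_refresh(log_lines, threshold=3):
    counts = Counter()
    for line in log_lines:
        fields = line.split()
        if fields:
            counts[fields[-1]] += 1
    return sorted(name for name, seen in counts.items() if seen > threshold)

# The cron job could then warm the cache with, e.g.,
# socket.getaddrinfo(name, None) for each returned name.
```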
