Any network system with buffering shared among many users behaves much like a congested highway. We’ll call such systems “big fat networks”. Two network technologies that exhibit this problem are 802.11 (a/b/g/n) Wi-Fi and 3G wireless. In 802.11, the buffers are distributed among the clients (and may also sit in the access points and routers); in 3G, the buffers may be in the clients and in the radio controllers they talk to, but also possibly in the backhaul networks.
If you have suffered unusable networks at conferences, you need wonder why no longer. You can make your life less painful by mitigating your operating system’s and access point’s buffering.
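As a rough sketch of what such mitigation can look like on a Linux machine (the interface name wlan0 and the queue length chosen here are illustrative assumptions, not values this article prescribes):

```shell
# Shrink the transmit queue on a Wi-Fi interface. The common Linux default of
# 1000 packets can represent many seconds of queue at low wireless rates.
# "wlan0" and "32" are example values; tune for your own hardware and rates.
ip link set dev wlan0 txqueuelen 32

# Inspect which queueing discipline is currently attached to the interface.
tc qdisc show dev wlan0
```

Note that this only addresses the buffering the operating system controls; buffers in the wireless driver, firmware, and access point are separate and often harder to reach.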
Moral of the Story
Whether you call what we see on 802.11 and 3G networks “congestion collapse”, as the 1980s NSFnet event was called (with high packet loss rates), or something different such as bufferbloat (which exhibits much lower, but still significant, packet loss), the effect is the same: horrifyingly bad latency and the resulting application failures. Personally, I’m just as happy with “congestion collapse” as with bufferbloat.
The moral of the story is clear: when the network is running slowly, we must minimize the amount of buffering to achieve anything like decent latencies on shared media. Yet when the network is unloaded, we want to fill a network pipe that may be a hundred megabits or more. On such a shared, variable-performance network, there is no single right answer for buffering. You cannot just “set it and forget it”. Read on…
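One way to see why no single buffer size can be right: the classic rule of thumb sizes a buffer near the bandwidth-delay product, which swings by orders of magnitude as the link rate changes. A small sketch of the arithmetic (the 100 ms round-trip time and the two link rates are illustrative assumptions):

```python
def bdp_bytes(rate_bits_per_s: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes in flight needed to keep the pipe full."""
    return rate_bits_per_s * rtt_s / 8


def buffer_delay_s(buffer_bytes: float, rate_bits_per_s: float) -> float:
    """Time to drain a full buffer of this size at the given link rate."""
    return buffer_bytes * 8 / rate_bits_per_s


rtt = 0.1  # assume a 100 ms round-trip time

fast = bdp_bytes(100e6, rtt)  # 100 Mbit/s link: ~1.25 MB keeps the pipe full
slow = bdp_bytes(1e6, rtt)    # 1 Mbit/s link: only ~12.5 kB is needed

# A buffer sized for the fast case becomes ten seconds of queueing delay
# once the shared medium slows to 1 Mbit/s.
print(round(buffer_delay_s(fast, 1e6), 3))
```

The same buffer that is necessary at the high rate is a hundred times too large at the low rate, which is exactly the “no single right answer” problem on a shared, variable-performance medium.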