MIT Researchers Say All Network Congestion Algorithms Are Unfair

We’re all using more data than ever before, and the bandwidth caps ISPs force on us do little to slow people down. Legitimate network management has to go beyond penalizing people for using more data, but researchers from MIT say the algorithms that are supposed to do that don’t work as well as we thought. A newly published study suggests that it’s impossible for these algorithms to distribute bandwidth fairly.

We’ve all been there, struggling to get enough bandwidth during peak usage to stream a video or upload large files. Your devices don’t know how fast to send packets because they lack information on upstream network conditions.

If they send packets too slowly, available bandwidth goes to waste. If they send too fast, packets can be lost, and retransmitting them causes delays. Senders have to infer network conditions and adjust on the fly, which is why academics and businesses have spent years developing algorithms that are supposed to reduce the impact of network saturation. These systems, like the BBR algorithm devised by Google, aim to control delays from packets waiting in queues on the network to make sure everyone gets some bandwidth.
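To see that tradeoff in miniature, here is a toy sketch of additive-increase/multiplicative-decrease (AIMD), the textbook rate-adjustment loop behind classic TCP congestion control. It is an illustration only: the function name and constants are our own, and this is not BBR or any of the specific algorithms the MIT team analyzed.

```python
# Toy sketch of additive-increase/multiplicative-decrease (AIMD), the
# textbook rate-adjustment loop behind classic TCP congestion control.
# Illustration only: the interface and constants are hypothetical, not
# BBR or any of the algorithms analyzed in the MIT study.

def aimd_step(cwnd: float, packet_lost: bool) -> float:
    """Return the new congestion window after one round trip."""
    if packet_lost:
        # Sending too fast: halve the window so queues can drain.
        return max(1.0, cwnd / 2)
    # No loss seen: probe gently for spare bandwidth.
    return cwnd + 1.0
```

Each sender runs a loop like this independently; because the only feedback is loss or delay, no sender ever knows the true state of the bottleneck it shares with everyone else.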

But can this type of system ever be equitable? The MIT study contends that there will always be at least one sender who gets screwed in the deal. This hapless connection will get no data while others take a share of what’s available, a problem known as “starvation.” The team developed a mathematical model of network congestion and fed it the algorithms currently used to control congestion. No matter how they set up the model, at least one connection always ended up starved.

The problem appears to be the sheer complexity of the internet. Algorithms use signals like packet loss to estimate congestion, but packets can also be delayed or lost for reasons unrelated to congestion. This unpredictable variation in delay, known as “jitter,” can drive an algorithm toward starvation. The researchers classify the affected systems as “delay-convergent algorithms,” a category that covers current approaches and in which starvation is unavoidable.
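The sketch below is a toy model, not the MIT team’s, of why jitter misleads a delay-based sender. The sender backs off whenever measured delay exceeds a target, but random jitter is indistinguishable from queuing delay, so it can back off even when the link is idle. All names and constants here are hypothetical.

```python
import random

# Toy model (not the MIT team's) of a delay-based sender misled by jitter.
# The sender slows down whenever measured delay exceeds a target, but the
# random jitter term looks just like queuing delay, so the sender can back
# off on an idle link. All constants are hypothetical.

BASE_RTT = 0.050   # true propagation delay, in seconds
TARGET   = 0.010   # extra delay the sender will tolerate before backing off

def delay_based_step(rate: float, queuing_delay: float,
                     jitter: float = 0.015) -> float:
    """Return the new send rate after one delay measurement."""
    measured = BASE_RTT + queuing_delay + random.uniform(0.0, jitter)
    if measured - BASE_RTT > TARGET:
        return rate * 0.9   # looks congested, so back off (perhaps wrongly)
    return rate + 0.1       # looks clear, so probe upward
```

Because the jitter term alone can exceed the target, an unlucky flow can keep cutting its rate while competitors with luckier measurements keep theirs; that asymmetry is the seed of starvation.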

Study author and MIT grad student Venkat Arun explains that the failure modes the team identified have been present on the internet for years. The fact that no one spotted them before speaks to the difficulty of the problem. Existing algorithms may fail to avoid starvation, but the researchers believe a solution is possible. They are continuing to explore other classes of algorithms that could do a better job, perhaps by accepting wider variation in delay across a network. The same modeling tools could also help us understand other unsolved problems in networked systems.
