Tuesday, 18 December 2007

Exaflood?

What I hate about being in IT: new-fangled terms every week. Exaflood is a new term describing the worry that the Internet will collapse under the amount of data pushed across it.

It's a good article; it puts the Internet capacity debate (and its cousin, Net Neutrality) in perspective. The columnist wryly observes that these warnings are nothing new, and that in the years to come the term will keep changing (zettaflood, anyone?). While there is an element of hysteria about Internet data capacity, there is also an element of truth.

Until some new transmission technology arrives, long-distance data communication speeds will always lag local network speeds. (Have I posted about this topic before?)

For example, chances are your computer right now has a 10 Megabit or 100 Megabit link to your Internet connection. Google's servers (which host this site) each have a 1 Gigabit or 10 Gigabit link. The actual Internet link between you and Google? 1.5-3 Megabits on average. So both you and Google have more bandwidth at your end, and therefore the ability to push more data than the link between you can handle. It is very easy to max out, or saturate, your link, leaving no room for anything else.
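
To put rough numbers on that bottleneck, here's a back-of-the-envelope sketch in Python. The link speeds are the approximate figures mentioned above, and the 700 MB file size is just an assumption for illustration:

```python
# Back-of-the-envelope: the slowest link in the path sets your real throughput.
# Figures are the rough 2007-era numbers from the text, not measurements.

LAN_TO_MODEM_MBPS = 100      # your computer's local link
GOOGLE_SERVER_MBPS = 1000    # Google's server-side link
ISP_LINK_MBPS = 3            # typical "high-speed" Internet connection

effective_mbps = min(LAN_TO_MODEM_MBPS, GOOGLE_SERVER_MBPS, ISP_LINK_MBPS)
print(f"Effective throughput: {effective_mbps} Mbit/s")

# How long would an assumed 700 MB download take at each rate?
file_megabits = 700 * 8
for label, mbps in [("LAN speed", LAN_TO_MODEM_MBPS), ("ISP link", ISP_LINK_MBPS)]:
    print(f"{label}: {file_megabits / mbps / 60:.1f} minutes")
```

The slow hop in the middle is what you actually get, no matter how fast the endpoints are.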

Some colleagues of mine have signed up with Vonage to use their VoIP service to replace their Bell landlines. They are also active downloaders. When they use BitTorrent to download seasons 1 & 2 of Dexter, they saturate their high-speed Internet connection with so much data that they can no longer place calls on their Vonage service. "Sorry honey, you can't use the phone to talk to your mother right now, I'm downloading your favorite show for you!"
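
A crude model of why that happens: a single VoIP call only needs a small, steady slice of upstream bandwidth, but if BitTorrent is already using all of it, there's nothing left. The figures below (roughly 90 kbit/s for a G.711-style call, ~800 kbit/s of upstream on a typical 2007 DSL line) are assumptions for illustration, not measurements from my colleagues' setup:

```python
# Why a saturated upload link kills a VoIP call.
# All figures are assumed for illustration.

UPLOAD_KBPS = 800          # assumed upstream capacity of the DSL line
VOIP_CALL_KBPS = 90        # assumed per-call requirement (codec + overhead)

def call_quality(bittorrent_upload_kbps: float) -> str:
    """Crude model: the call only works if enough upstream headroom is left."""
    headroom = UPLOAD_KBPS - bittorrent_upload_kbps
    return "OK" if headroom >= VOIP_CALL_KBPS else "call drops / unusable"

for bt in (0, 400, 750, 800):
    print(f"BitTorrent uploading at {bt} kbit/s -> VoIP {call_quality(bt)}")
```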

This is our reality. Without the ISPs doing anything, it is very possible, and very easy, for an end user to disrupt their own service and cause havoc with the very technologies they are trying to use. Aggregate all the users together and they can cause havoc on the Internet backbone, disrupting service to other users just trying to get their e-mail. The network the Internet is built on is, and always will be, more akin to a drinking straw than a water main. (In relative terms, "high-speed Internet" is an oxymoron.)

This is where various aspects of the Net Neutrality debate come in. As Nate Anderson points out in his article, upgrading "the Internet" (and especially the "last mile") is an expensive proposition. With computers getting faster and local networks heading to 100 Gigabit and beyond, future-proofing the Internet to handle an unlimited amount of traffic is a technical impossibility.

So if you can't provide unlimited data rates, what's an ISP to do? Manage the traffic they carry to ensure a basic minimum of service. That's why ISPs are starting to deploy deep packet inspection technologies. This is an exploding field (and one I am researching to handle our corporate data needs).
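
To give a flavour of what "deep packet inspection" means, here is a toy classifier that peeks at packet payloads instead of trusting port numbers. It's a hypothetical sketch to illustrate the idea; real DPI appliances use far more sophisticated signature and flow analysis:

```python
# Toy illustration of the idea behind deep packet inspection: look past
# the port numbers and into the payload for protocol signatures.
# Hypothetical sketch only; not how any commercial DPI product works.

def classify_packet(dst_port: int, payload: bytes) -> str:
    # BitTorrent's handshake starts with a length byte (19) followed by
    # the literal string "BitTorrent protocol".
    if payload.startswith(b"\x13BitTorrent protocol"):
        return "bittorrent"
    # Plain HTTP is easy to spot regardless of port.
    if payload[:4] in (b"GET ", b"POST", b"HEAD"):
        return "http"
    # Fall back to the classic port-based guess.
    if dst_port in (80, 8080):
        return "http (by port)"
    return "unknown"

print(classify_packet(6881, b"\x13BitTorrent protocol" + b"\x00" * 8))  # bittorrent
print(classify_packet(80, b"GET / HTTP/1.1\r\n"))                       # http
```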

ISPs are caught between the rock of consumer demand and the hard place of technical limitations. They have 3 options to choose from:
  1. Let the consumer do whatever they want, and risk congestion failures or outright outages on their networks.
  2. Throttle high-bandwidth apps (P2P, video streaming) by leveraging protocol analyzers and deep packet inspection tools (see the throttling sketch after this list).
  3. Charge much higher service fees to customers to finance network upgrades (which will still lag consumer demand and still face congestion, just not as frequently).
All of those options are unpalatable to some extent. Which is why we'll bitch and moan about the unfairness of it all. But it is what it is. There's no getting away from it.
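
For what option 2 looks like under the hood, here's a minimal token-bucket rate limiter, the classic mechanism behind "cap this class of traffic at N kilobits per second". The rate and burst size are made-up numbers for illustration:

```python
import time

# Minimal token-bucket rate limiter: the basic mechanism behind throttling
# a class of traffic to a fixed rate. Numbers below are illustrative only.

class TokenBucket:
    def __init__(self, rate_bytes_per_sec: float, burst_bytes: float):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed, up to the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False  # over the limit: drop or queue the packet

# Cap traffic classified as P2P at ~64 kB/s with an 8 kB burst allowance.
p2p_limiter = TokenBucket(rate_bytes_per_sec=64_000, burst_bytes=8_000)
print(p2p_limiter.allow(1500))  # a full-size Ethernet frame -> allowed
```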
