Thursday, 3 April 2008

First Amendment of The Internet?

And the Net Neutrality debate comes to Canada, big time. Thank you, Ma Bell. "Hands off our information" is the cry.

Up front, I think what Bell did in this case was wrong. But not completely wrong. I posted on this topic not too long ago. Internet bandwidth is a finite resource.

"Without the ISPs doing anything, it is very possible, and very easy for an end user to disrupt their own service and cause havoc with the very technologies they are trying to use. When you aggregate all the users together, they can cause havoc on the Internet backbone disrupting services to other users just trying to get their e-mail. The network that the Internet is built on, is and always will be, more akin to a drinking straw than a main water line. (In relative terms, high-speed Internet is an oxymoron)."


"ISPs are caught between the rock of consumer demand and the hard place of technical limitations. They have 3 options to chose from:

1. Let the consumer do whatever they want, and risk congestion failures or outright outages on their networks.
2. Throttle high-bandwidth apps (P2P, video streaming) by leveraging protocol analyzers and deep packet inspection tools.
3. Charge much higher service fees to customers to finance network upgrades (that will still lag consumer demand, and still face congestion, just not as frequently)"
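To be clear about what option 2 actually means: deep packet inspection looks inside the payload for application signatures instead of trusting port numbers. Here is a deliberately crude Python sketch of the idea; the signature list and categories are illustrative assumptions only, and real DPI gear is far more sophisticated than this.

```python
# A crude illustration of deep packet inspection: classify traffic by payload
# signature instead of trusting the port number. The signatures and categories
# here are illustrative assumptions, not any real product's rule set.

BITTORRENT_HANDSHAKE = b"\x13BitTorrent protocol"  # how a BitTorrent session opens

def classify(payload: bytes) -> str:
    """Guess the application from the first bytes of a TCP payload."""
    if payload.startswith(BITTORRENT_HANDSHAKE):
        return "p2p"        # a candidate for throttling under option 2
    if payload[:4] in (b"GET ", b"POST") or payload.startswith(b"HTTP/"):
        return "web"        # plain HTTP traffic, left alone
    return "unknown"

print(classify(b"\x13BitTorrent protocol" + b"\x00" * 8))  # -> p2p
print(classify(b"GET /index.html HTTP/1.1\r\n"))           # -> web
```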


Compounding the problem, Internet bandwidth is oversubscribed. This is not unique to the Internet; every access resource is oversubscribed. Your neighborhood may be easy to drive through, but if every single vehicle in your neighborhood tried to leave at the same time, you'd have gridlock. Your neighborhood roads are oversubscribed. Your toilet flushes without issue, but what if everyone flushed at the same time?

Roads, sewers, treated water, telephone lines, airplane seats, and so on are all examples of things that are oversubscribed. Why? Well, it's too damned expensive to build anything so that all potential users could use it at once. When building a network, you sample average usage, factor in expected growth of the user base, and try to account for peak periods. Play with the numbers any way you want, but in the end what you have is a service that can only accommodate some of the users some of the time. It can never handle all of the users all of the time.
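To put rough numbers on that, here is a back-of-the-envelope sketch. Every figure in it is hypothetical (a 155 Mbps uplink shared by a thousand 5 Mbps subscribers), but the shape of the math is the same for any shared resource.

```python
# Back-of-the-envelope oversubscription math. Every figure is hypothetical;
# the calculation itself is generic.

subscribers = 1000        # households hanging off one aggregation link
plan_mbps = 5.0           # advertised speed sold to each subscriber
uplink_mbps = 155.0       # capacity actually provisioned upstream

sold_capacity = subscribers * plan_mbps               # 5,000 Mbps "promised"
oversubscription_ratio = sold_capacity / uplink_mbps  # roughly 32:1
full_speed_at_once = uplink_mbps / plan_mbps          # only ~31 users can max out together

print(f"Oversubscription ratio: {oversubscription_ratio:.0f}:1")
print(f"Subscribers who can run flat out at the same time: {full_speed_at_once:.0f} of {subscribers}")
```

A 32:1 ratio sounds reckless until you remember that most subscribers are idle most of the time. It only falls apart when everyone flushes at once.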

The Internet (and computing in general) is a tough nut to crack. No matter how fast the computer, how big the hard drive, or how quick the network, it is never enough. An old IT rule of thumb for planning storage comes to mind: "Hard drive space is like closet space; you never have enough."

Data usage on networks and on hard drives is going through exponential growth. I'll give you a real-world example. In 10 years I have gone from managing 16 GB of network storage to 4.6 TB of network storage. We now have roughly 288 times the data we did 10 years ago. If you assumed that growth was spread evenly over the decade, you'd be wrong. Just over a year ago I was managing 1 TB of data; now it's almost 5 TB. I'm on a growth curve, and it's exponential. I ask you, how do you forecast capacity based on 4-5 digit percentage growth rates? This is the challenge ISPs are facing. Consumer demand for bandwidth is outstripping their ability to supply it.
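If you want to see why forecasting is so hard, run the numbers yourself. This little sketch just restates the storage figures above as growth rates; the inputs are mine, the math is generic compound growth.

```python
# The storage figures above, restated as growth rates.

start_gb = 16.0      # network storage managed 10 years ago
end_gb = 4600.0      # roughly 4.6 TB today
years = 10

overall_factor = end_gb / start_gb               # about 288x in total
yearly_rate = overall_factor ** (1 / years) - 1  # compound annual growth rate

print(f"Overall growth over {years} years: {overall_factor:.0f}x")
print(f"Average compound growth: {yearly_rate:.0%} per year")

# And the curve is steepening: 1 TB to 4.6 TB in roughly the last year alone.
print(f"Growth in the last year alone: {end_gb / 1000.0:.1f}x")
```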

I grant that my numbers relate to storage, but they highlight a growing trend in the Internet user community: Digital Gluttony. P2P file sharing involves thousands upon thousands of individuals sharing gigabytes upon gigabytes of data with each other. (In many cases, it's data they will never personally use; they consume data the way Western society consumes water.) This type of usage was never forecast by the ISPs sizing their networks. The explosion in popularity of social networking sites and of multimedia-rich sites like YouTube does not help either.

In short, there is wayyyy more data being pushed across the Internet than some of its links can handle. Like it or not, about the only solution for ISPs with saturated network links is bandwidth throttling.

And this is where I partially defend what Bell did to those ISPs. If Bell is facing saturation on some of its links, it does need to throttle the traffic crossing them to ensure a minimum level of service. But it shouldn't reach into individual data streams on downstream ISPs and throttle individual applications. It should throttle the entire link to that ISP. It is then up to that ISP to decide how to manage its link and which services to deliver to which customers.
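For what it's worth, the mechanics of link-level throttling are simple. Most implementations boil down to some variant of a token bucket: the link earns credit at its capped rate, and packets only pass when there is credit to spend. Here is a minimal Python sketch of the idea; the class name, rates, and queue-or-drop behaviour are all illustrative assumptions, not a description of Bell's actual gear.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: the basic mechanism behind capping
    an entire link at a fixed rate. Illustrative only; real carrier equipment
    does this in hardware with queueing, not in Python."""

    def __init__(self, rate_bytes_per_sec: float, burst_bytes: float):
        self.rate = rate_bytes_per_sec    # sustained rate allowed on the link
        self.capacity = burst_bytes       # how much burst we will tolerate
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        """Return True if a packet of this size fits under the link's cap."""
        now = time.monotonic()
        # Earn credit for the time that has passed, up to the burst ceiling.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False    # over the cap: the packet would be queued or dropped

# Cap a hypothetical wholesale link at 10 MB/s, regardless of which protocols
# the downstream ISP's customers happen to be running.
link = TokenBucket(rate_bytes_per_sec=10_000_000, burst_bytes=1_500_000)
print(link.allow(1500))   # a typical packet slips through -> True
```

The point is that a cap like this works on the aggregate link without ever looking inside anyone's packets, which is exactly the distinction I'm drawing above.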

Again, this is our new reality. Barring a technical revolution in networking, bandwidth throttling is here to stay. Better get used to it.
