Monday, 28 July 2008

Is the Internet threatened?

Ken asks my thoughts here.

His concern stems from corporate desires to control the information flow, and find some way to ensure revenue flow.

The first article linked states:

What will the Internet look like in Canada in 2010? I suspect that the ISP's will provide a "package" program as companies like Cogeco currently do. Customers will pay for a series of websites as they do now for their television stations.

Sorry, no. Not gonna happen. There are billions of websites on the Internet, with thousands being torn down and recreated every day. The Internet is in constant flux. It's too vast to really control properly.

But what about website monitoring products, like the proverbial Net Nanny? Yes, they do work, to a point. I manage one of these products every day to ensure corporate compliance on our network (i.e., no surfing porn at work). It's only about 75% effective. For our needs that's good enough; I only have a thousand users accessing several thousand websites. The product keeps a lid on things and prevents the maintenance guy from checking out the latest centerfold at Playboy. But for tens of thousands of customers accessing millions of websites? 75% ain't gonna cut it.

The reason the product is inaccurate is that the company that makes it employs people who constantly scour the Internet and classify sites based on their content. Right there you have a problem: human error. Then you have issues with sites carrying multiple content types. A simple example is CBC. You might want to allow people to read news online, but not watch streaming video sportscasts. Well, CBC.ca hosts both. How do you classify it?
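To make the classification problem concrete, here's a minimal sketch of how category-based filtering works in principle. The category database, site entries and blocked categories are all hypothetical, not the actual data of any real product:

```python
# A minimal sketch of category-based URL filtering. The category database
# and blocked-category list below are made up for illustration.

# Human reviewers assign each site a set of categories -- and a single
# host can carry more than one kind of content.
CATEGORY_DB = {
    "cbc.ca": {"news", "streaming-video"},   # one host, two content types
    "playboy.com": {"adult"},
}

BLOCKED_CATEGORIES = {"adult", "streaming-video"}

def is_allowed(host: str) -> bool:
    """Block a request if ANY of the host's categories is blocked.

    Unclassified hosts fall through as allowed -- that gap, plus human
    misclassification, is why real-world accuracy tops out well below 100%.
    """
    categories = CATEGORY_DB.get(host, set())
    return not (categories & BLOCKED_CATEGORIES)

print(is_allowed("cbc.ca"))        # False: blocking video blocks the news too
print(is_allowed("example.org"))   # True: unknown site slips through
```

Note the CBC problem in miniature: because the whole host shares one classification, blocking the video category takes the news down with it.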

Lastly there are the infrastructure websites, like Akamai. Most web surfers are unaware of these sites. They exist because high-bandwidth content (like video) is expensive to serve up to clients. So you create your webpage wherever, pay a fee to Akamai, and upload your expensive-to-host content to them. Then on your website, you just redirect your customers to Akamai's servers.

Akamai alone accounts for 50% of the traffic on our company Internet connection. It isn't because our employees go there directly; they are redirected from the sites they actually wanted. Since Akamai hosts content for practically anything on the Internet, it is impossible to classify the content that is downloaded from it.
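The CDN problem can be shown in a few lines. All the hostnames below are invented; the point is only that many unrelated origins funnel through one delivery host:

```python
# A sketch of why host-based classification breaks down for CDNs: the same
# delivery hostname serves bytes for many unrelated customers, so the
# destination the filter sees tells it nothing. All hostnames are made up.

ORIGIN_OF = {
    # What the user actually asked for -> where the bytes really come from
    "news-site.example/video.mp4":  "a1.cdn.example.net",
    "adult-site.example/clip.mpg":  "a1.cdn.example.net",
    "software.example/patch.exe":   "a1.cdn.example.net",
}

# From the filter's point of view, every one of these flows looks identical:
hosts_seen = set(ORIGIN_OF.values())
print(hosts_seen)   # {'a1.cdn.example.net'} -- one host, three content types
```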

That's the nature of websites: they are created and destroyed, they interlink and interweave with each other. They are elastic creatures. Truly restricting where customers can go requires either a static Internet (which doesn't exist) or a vast army of dedicated classifier monkeys (a few thousand, say) constantly reviewing what is changing out there.

The next link is SaveTheNet. This is about the Net Neutrality debate. I wrote extensively on this back in my Closet Liberal guise. If there's one post I should have kept, that was it. (Serves me right for deleting my old blog in a fit of pique. Oh well, live and learn.)

This is a case of cutting off your nose to spite your face.

"William L. Smith, chief technology officer for Atlanta-based BellSouth Corp., told reporters and analysts that an Internet service provider such as his firm should be able, for example, to charge Yahoo Inc. for the opportunity to have its search site load faster than that of Google Inc."

I say go nuts. You'll kill the Internet so fast, you won't know what hit you.

Again, I speaketh from experience (I manage the very same bandwidth management products major ISPs use on their networks). In Canada, Rogers is associated with Yahoo, and Bell Canada with Microsoft. It would be very easy for Rogers to prioritize Yahoo, and Bell to prioritize Microsoft. Heck, they might do it already; we would never know. Why? Because priorities only work during network congestion, i.e., when there is more traffic on a connection than that connection can handle. In normal times, it's always first come, first served.

Think of the HOV lanes on the 403. During rush hour, they are faster because all the other lanes are full. At night, when traffic is light, you can be in any lane you wish, and you'll move along at the same clip. The latter circumstance is the more prevalent on the Internet. When there's congestion, everything slows down. If it's bad enough, connections start timing out. At that point, the fact that a few sites or services have priority makes sense. That way some traffic gets through, as you can't prioritize everything (if everything is a priority, nothing is).

The thing you have to remember is that bits of data on the Internet travel at or near the speed of light (depending on whether it's a photon or an electron). Only in the lab can you slow or accelerate photons or electrons. It's not something you can do with a switch or a router. One bit of data (the most basic unit of data) will always travel at the same speed, regardless of the value of that bit.
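The "priority only matters under congestion" point can be sketched as a toy scheduler. Capacity, packet names and priority values are all illustrative, not any vendor's actual implementation:

```python
# A toy link scheduler: send up to `capacity` packets per tick, highest
# priority first. Everything here is illustrative.

def transmit(packets, capacity):
    """Return (sent, queued) for one tick.

    `packets` is a list of (name, priority) tuples; lower number = higher
    priority. If arrivals fit within capacity, priority changes nothing.
    """
    ordered = sorted(packets, key=lambda p: p[1])
    return ordered[:capacity], ordered[capacity:]

# Light load: everything goes through, priority is irrelevant.
sent, queued = transmit([("yahoo", 0), ("google", 1)], capacity=10)
print(len(queued))   # 0

# Heavy load: only now does priority decide who waits.
sent, queued = transmit([("yahoo", 0)] * 5 + [("google", 1)] * 10, capacity=8)
print(len(queued))   # 7 -- all of them low-priority packets
```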

When we refer to "slowness" we are actually talking about volume of data. If you are on a dial-up connection accessing a basic text-only website, it will download just as fast on that modem as it would on the Ultra-Fast package from your ISP. What constrains you is how much data you are downloading. A lower-bandwidth connection (i.e., a lower bps rating) will move fewer bits per second than a higher one.
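The arithmetic behind that point is simple enough to show directly. The file sizes and link rates below are just round illustrative numbers:

```python
# Back-of-the-envelope arithmetic: the same content takes longer on a
# lower-bandwidth link only because fewer bits move per second.

def download_seconds(size_bytes: int, link_bps: int) -> float:
    """Time to move `size_bytes` over a link rated at `link_bps` bits/s."""
    return size_bytes * 8 / link_bps

text_page = 10_000          # ~10 KB of plain text
video_clip = 50_000_000     # ~50 MB of video

print(download_seconds(text_page, 56_000))       # dial-up: ~1.4 s
print(download_seconds(text_page, 5_000_000))    # broadband: ~0.016 s
print(download_seconds(video_clip, 56_000))      # dial-up: ~2 hours
```

A small text page is barely distinguishable across link speeds; it's the big payloads that expose the bps rating.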

Again, let's use the highway analogy. If there is only one car on the road, it doesn't matter whether there are 6 lanes or 1; that car can still travel at whatever speed it chooses. For the purposes of our discussion, imagine that cars have only one speed: 100 km/h. Now start adding more cars. A one-lane highway will still allow those cars to travel at 100 km/h, up to the point that the road is full. The cars that try to get on after that have to wait for space to open up; then they can get on the road, and they will do it at 100 km/h. But they have to queue up. Add more lanes and you can have more cars simultaneously on the highway, and hopefully fewer cars queuing up on the ramps.

This is how bandwidth management works. You control the queue. You organize the pending requests based on their source, destination or data type (in the highway analogy: where you are going, where you are coming from, and whether you are driving a car, a van or a semi). Depending on the queue, you might let more packets out of one queue than another. But once out of the queue, the packet travels at full speed.

This is a critical point. You can't speed up data. You can only control how much data goes on. Going back to the highway: say I like red cars more than green cars. I can let 5 red cars onto the highway for every green car. But what if there are no red cars? I can hold back the green cars in the queue, even though the highway is empty. Conversely, if there are no green cars, I cannot make the red cars go faster. And if the highway is full of blue cars, well, my red cars are still going to queue up.
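The red-car/green-car policy above, including its failure mode, can be sketched as a strict-ratio scheduler. This is my own illustrative toy, not any real product's algorithm:

```python
# A sketch of the strict 5-red-per-green policy described above, including
# the failure mode where green cars wait even though the highway is empty.
from collections import deque

def release(red: deque, green: deque, slots: int, ratio: int = 5):
    """Fill up to `slots` highway slots, honoring a strict red:green ratio.

    A green car may only take every (ratio+1)-th slot. If no red car shows
    up for a "red" slot, that slot goes unused -- the scheduler is
    non-work-conserving, which is exactly how this scheme wastes capacity.
    """
    out = []
    for slot in range(slots):
        if slot % (ratio + 1) < ratio:      # a "red" slot
            if red:
                out.append(red.popleft())
            # red slot unused: green is NOT allowed to take it
        elif green:
            out.append(green.popleft())
    return out

# No red cars waiting at all -- yet only 2 of 3 green cars get on in 12 slots.
print(release(deque(), deque(["g1", "g2", "g3"]), slots=12))
```

A work-conserving scheduler would hand idle red slots to the green queue; the strict version shown here matches the "hold back the green cars even though the highway is empty" behavior in the paragraph above.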

If Rogers charged Yahoo to guarantee it was faster (and Bell did the same with Microsoft), then to act on those guarantees you have to restrict the competitors' traffic. This is where things get fun. If I'm a Bell customer accessing Yahoo, I'll be restricted by Bell's policy. Even though Yahoo is paying Rogers for priority on its network, I still get slow service. The reverse would be true for Rogers customers accessing Microsoft's MSN. Extend this to any competitive field. As a Bell customer I can only get the Goodyear website, but not Bridgestone. Or I can only surf Home Depot, but not Rona. It would be chaos. Utter chaos.

Back to the green car/red car thing. If I in Toronto only allow 5 green cars an hour to enter or exit the highway, and the guy in Montreal does the reverse to the red cars, which car would travel from Montreal to Toronto the fastest?

Trick question: neither. They will both take the same (artificially) slowed time. Even though you did 100 km/h on the highway, it's either the on-ramp or the exit ramp that will slow you down.
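The trick question reduces to one rule: end-to-end throughput is set by the slowest hop on the path, not by the fast highway in between. The hop rates below are illustrative:

```python
# Effective rate of a path is the minimum of its per-hop rates -- the
# bottleneck wins, wherever it sits. Numbers are made up for illustration.

def end_to_end_rate(hop_rates):
    """Throughput of a path is limited by its slowest hop."""
    return min(hop_rates)

# Green cars choked at the Toronto end, red cars at the Montreal end;
# the middle "highway" hop is fast in both cases.
path_green = [5, 100_000, 100_000]   # slow on-ramp, fast everywhere else
path_red   = [100_000, 100_000, 5]   # fast until the exit ramp

print(end_to_end_rate(path_green))   # 5
print(end_to_end_rate(path_red))     # 5 -- same artificially slowed result
```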

And that is why I am OK with ISPs trying to sell "priority". They're gonna shoot themselves in the foot and reduce the Internet to an unusable crawl. The rapid backpedaling by the ISPs will be a grand display of awkward fumbling and blustering. CEOs that don't understand the technology they sell are amusing creatures to watch.

Tomorrow, (or later, free time depending) I will write on what I think is the real threat to the Internet.


Ken Breadner said...

Thanks for that encyclopedia. *smile*
No, seriously, thank you. For putting my mind at ease, and for explaining everything so clearly. It's like you do tech support or something.

Catelli said...

No problem!