Want the quick answer? You know, the one that troubles you every time you upgrade your broadband and you /really/ believe the streams will be reliable this time? It's because adding bandwidth to a connectionless system does not eliminate congestion; the two issues are orthogonal. What's more, most people already know this from personal experience, at least those who drive a car in a congested urban area.
Road management agencies have long tried to address congestion by adding lanes to motorways, but the result is rarely what was hoped for: all the extra capacity does is release previously suppressed demand. Furthermore, adding lanes does little or nothing to alleviate problems at junctions and other pinch-points; if anything, it makes them worse. Other techniques have been tried, again to little positive effect; London's partial bus-lanes are well known to its regular commuters. What use is weighted queueing up to the traffic lights when the bus is stuck 20 car-lengths back, behind everybody else?
Well, none at all, really.
The whole queueing argument is one of the 21st Century's best "snake-oil" sales pitches, but the flaws are easy to spot. Consider the bus-lane example above and think about how you /could/ make it work. To be 100% certain the bus can get onto the bus-lane, the only way is to guarantee it access, which is essentially arguing that the bus-lane must be extended right back to the previous flexibility point, be it a junction, traffic-light or roundabout. At that point, though, the arrangement is no longer connectionless; we have reserved bandwidth between two flexibility points which is available to the bus and nothing else. To put it another way, we've made a *connection*. In router terms, we are effectively admitting that a queue inside the router cannot guarantee that a particular packet will even get onto that queue if there is congestion on the incoming link.
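If you like to see this sort of thing in code, here's a toy Python simulation (all the numbers are made up purely for illustration) of a strict-priority output queue sitting behind a shared, tail-drop ingress buffer. Once the shared buffer fills, the "bus" is dropped before it ever reaches the bus-lane, priority scheduling or no:

    import random
    from collections import deque

    BUFFER_SLOTS = 10        # shared ingress buffer
    SERVICE_PER_TICK = 1     # packets the output link can send per tick
    ARRIVALS_PER_TICK = 3    # offered load: three times the output capacity

    buffer = deque()
    sent_priority = 0
    dropped_priority = 0

    random.seed(1)
    for tick in range(1000):
        # Arrivals: roughly 1 in 10 packets is "priority" (the bus).
        for _ in range(ARRIVALS_PER_TICK):
            pkt = "priority" if random.random() < 0.1 else "best-effort"
            if len(buffer) < BUFFER_SLOTS:
                buffer.append(pkt)
            elif pkt == "priority":
                dropped_priority += 1   # tail drop: the bus never reaches the bus-lane
        # Service: strict priority at the output.
        for _ in range(SERVICE_PER_TICK):
            if "priority" in buffer:
                buffer.remove("priority")
                sent_priority += 1
            elif buffer:
                buffer.popleft()

    print(f"priority sent: {sent_priority}, dropped at the ingress: {dropped_priority}")

Run it and you'll find a respectable number of priority packets delivered, and a similarly respectable number thrown away at the door, despite the scheduler doing exactly what it was sold to do.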
Of course, you can now reach for the "M25 needs another lane" argument and make the incoming link larger. That appears to address the problem, but in reality it merely pushes the congestion somewhere else in the network. So, just like the road system so many of us know too well, there is no way to eliminate congestion using connectionless technologies; the service is always going to be "best effort", and we need to look for another approach.
...Or do we?
One of the great strengths of the internet is that the vast majority of traffic which uses it rides on highly resilient transport protocols. In most cases it doesn't matter a great deal if packets are lost, misrouted, delayed or damaged, as the various layers of the IP stack will recognise these different kinds of defect and handle them appropriately. Lost packets can be requested again, or perhaps reconstructed using forward error correction; misrouted packets can be discarded, since they do not belong to us; delayed packets can be re-inserted into the flow at the cost of a short pause in responsiveness for the user; and damaged packets can be detected by their checksums and re-requested or repaired.
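For the file-and-message world, that repair loop is simple enough to sketch in a few lines of Python; the names and packet fields below are purely illustrative, not any real library's API:

    def receive(packets, expected_stream_id, highest_seq):
        """Return (in-order payloads, sequence numbers to re-request)."""
        got = {}
        for pkt in packets:
            if pkt["stream"] != expected_stream_id:
                continue                     # misrouted: simply discard
            if pkt["checksum"] != sum(pkt["payload"]) % 256:
                continue                     # damaged: drop it, we'll ask again
            got[pkt["seq"]] = pkt["payload"]
        missing = [s for s in range(highest_seq + 1) if s not in got]
        ordered = [got[s] for s in sorted(got)]
        return ordered, missing

    # Example: packet 1 never arrived, packet 2 arrived corrupted.
    packets = [
        {"stream": 7, "seq": 0, "payload": [10, 20], "checksum": 30},
        {"stream": 7, "seq": 2, "payload": [5, 5],  "checksum": 99},   # bad checksum
        {"stream": 9, "seq": 0, "payload": [1],     "checksum": 1},    # not our stream
    ]
    data, resend = receive(packets, expected_stream_id=7, highest_seq=2)
    print(resend)   # [1, 2] -- ask the sender for these again; no hurry for a file

The receiver simply keeps asking until everything has turned up, however long that takes.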
All of this admirable behaviour is quite fine for two classes of traffic: the file and the message. Files are typically fairly large, but can be transferred using whatever bandwidth is available at the time; there is typically little requirement for a rapid file-transfer, merely a requirement that it be completed in the end with no errors. Messages are similar, but typically much shorter, and have some requirements regarding reasonable latency.
However, there is a third class of traffic, one becoming increasingly important in the Web2.0 world: the stream. Streams have one unique property, shared by neither files nor messages: a requirement for temporal consistency between information units. That requirement is non-negotiable, and fixed within some very tight limits indeed, because for streams the human brain-ear and/or brain-eye system is part of the link, and the brain cannot tolerate any significant amount of jitter, wander, echo, packet-loss, errors, noise, distortion or any number of other impairments.
Sadly, none of the fixes noted above for transmission defects can be used for streams, because, quite simply, there is no time to request additional packets, nor time to re-queue misordered ones. Streams are unique. This is why, exactly why, no matter how much bandwidth is made available, voice and video streams on the internet are nev nev nev never going to be perfect using the current de de designs.
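To make the point concrete, here's a minimal Python sketch of a playout loop with an assumed 20ms frame interval and a 40ms jitter buffer (the timings are illustrative only). Anything arriving after its deadline might as well not have arrived at all, and a round trip to re-request it would blow the deadline anyway:

    FRAME_MS = 20          # one voice frame every 20 ms
    JITTER_BUFFER_MS = 40  # we are willing to wait two frames, no more

    def playout(arrivals_ms):
        """arrivals_ms[i] = arrival time of frame i; None = lost in transit."""
        played, gaps = 0, 0
        for i, arrival in enumerate(arrivals_ms):
            deadline = i * FRAME_MS + JITTER_BUFFER_MS
            if arrival is not None and arrival <= deadline:
                played += 1
            else:
                gaps += 1   # too late or lost: insert silence, and the listener hears it
        return played, gaps

    # Frame 2 is delayed by congestion, frame 4 is lost outright.
    print(playout([5, 25, 95, 65, None, 105]))   # (4, 2): two audible glitches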
Well, now we should be asking ourselves a new version of our question: can this be handled using the new protocols we have available?
Sometimes history is kind to us, and on this occasion it is extremely kind and generous, because all we need for inspiration about what has to be done is to look at existing telco networks. Traditional telco networks are the apex of many years of development, first of plesiochronous and then of synchronous digital networks; they are designed for optimal latency performance and tailored very tightly to the needs of voice telephony. All the issues which were the bane of early internet adopters, analogue modems on dial-up lines, low speeds and so on, were a direct result of using the switched voice network, optimised for streams, to carry the new and upcoming internet, which at that time was dominated by file and message transfers.
The bandwidths now used for the internet, here in the noughties, are colossal when compared with those required for voice networks, although it's possible that video streaming networks, if commercially successful, could result in networks of much greater bandwidth still.
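A quick back-of-envelope comparison shows the scale gap. The classic PCM voice channel really is 64 kbit/s; the broadband and video figures below are simply assumed for illustration:

    VOICE_KBPS = 64            # one G.711 PCM telephony channel
    BROADBAND_KBPS = 10_000    # an assumed 10 Mbit/s consumer line
    HD_VIDEO_KBPS = 5_000      # an assumed high-quality video stream

    print(BROADBAND_KBPS // VOICE_KBPS)     # ~156 simultaneous voice calls
    print(BROADBAND_KBPS // HD_VIDEO_KBPS)  # ...but only ~2 such video streams

One modest broadband line could carry the best part of two hundred phone calls, yet only a couple of decent video streams, which is exactly why video could push bandwidths up again.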
Well, what exactly did those telco networks do well? Connections. They set up connections, on the fly, very rapidly. They created the equivalent of a dedicated railway line, or a personal motorway lane, or a bus-lane, running from the very start to the very end of the connection. The bandwidth was /never/ shared with anything else; it was dedicated to the voice stream in question, and it had the other fascinating property of being fully duplex, with guarantees that there would be no significant latency between the two directions.
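In miniature, the telco approach looks something like the following Python sketch of admission control; the link names and capacities are invented, but the principle is exactly this: reserve capacity on every hop or refuse the call outright, and never let anything else touch a reservation once it is made:

    link_free_kbps = {"exchange-A": 2048, "core-1": 2048, "exchange-B": 128}

    def admit(path, kbps):
        """Reserve kbps on every link of the path, or admit nothing at all."""
        if any(link_free_kbps[link] < kbps for link in path):
            return False                   # busy tone: no degraded best-effort fallback
        for link in path:
            link_free_kbps[link] -= kbps   # this capacity now belongs to the call alone
        return True

    print(admit(["exchange-A", "core-1", "exchange-B"], 64))   # True: call connected
    print(admit(["exchange-A", "core-1", "exchange-B"], 128))  # False: exchange-B is nearly full

A refused call is annoying, but an accepted one is perfect for its whole duration, which is precisely the property streams need.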
Which brings us right back a decade, to some of the erroneous assumptions being made during the 1990s about the then-future network designs, i.e. those of today. A great many pundits and equipment manufacturers argued strongly that all traffic types should be merged at layer 3, on IP, instead of at layer 2, on SDH or WDM; but many of us were quite convinced even then that this could never work, because of the peculiar, but unavoidable, needs of streams.
The right answer was right where we already were: mix the traffic at layer two, where connection-oriented links can be kept separate from connectionless ones, so that a best-effort internet can co-exist with connection-oriented streams. So does that mean we're stuck with the highly expensive TDM interfaces? Well, not necessarily; some colleagues of mine have been working on Carrier Grade Ethernet, now known as PBB-TE, aimed directly at resolving this set of pressures. It keeps the gains of ethernet in terms of interface costs, but can be managed using existing telco management infrastructure, keeping introduction costs to a reasonable minimum. PBB-TE offers connection-oriented packet switching, rather as ATM did, but without the small-cell limitations of ATM. You can read more about it here: http://en.wikipedia.org/wiki/Carrier_ethernet_transport
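For the curious, here's roughly what that means at the forwarding level, heavily simplified and with made-up addresses: MAC learning and flooding are switched off for the engineered traffic, and the management plane installs a static entry per provisioned path, keyed on the backbone destination MAC plus backbone VLAN ID. A frame either matches a provisioned connection and follows it, or it is dropped:

    # (B-DA, B-VID) -> egress port, provisioned end to end by the management plane.
    forwarding_table = {
        ("00:16:b9:00:00:01", 101): "port-3",   # engineered path for a voice trunk
        ("00:16:b9:00:00:02", 101): "port-7",
    }

    def forward(b_da, b_vid):
        port = forwarding_table.get((b_da, b_vid))
        if port is None:
            return "drop"    # unknown destination: no flooding, unlike ordinary ethernet
        return port

    print(forward("00:16:b9:00:00:01", 101))   # "port-3"
    print(forward("de:ad:be:ef:00:00", 101))   # "drop"

In other words, it behaves like a connection, even though the frames themselves are perfectly ordinary ethernet.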
This issue also has strong implications for the net neutrality debate, in that it essentially eliminates it! Telcos will be able to charge for bits moved regardless of whether those bits travel on the connectionless or the connection-oriented side, so there is little disadvantage to a telco if people choose a best-effort voice provider, since the bits can be charged for in the same way anyway.
Looked at the other way around, judicious use of PBB-TE could make Skype or Gizmo as reliable as the PSTN *all* of the time, not just some of it. Similarly, streaming from network-based servers could be made reliable by extending PBB-TE connections from the net-side server right out to the customer premises equipment.
This is a longer-term view, of course; in the first instance the core of carrier networks will be addressed, as that is the most sensible first step. The question I'm now pondering is how we get PBB-TE capability embedded into standard linux applications, and perhaps into the kernel too. Because until we fix that, internet streaming is n n n ne neve ...