The great science-fiction author Arthur C. Clarke once wrote that any sufficiently advanced technology is indistinguishable from magic. This remark can be considered from many angles, but with respect to open-source versus proprietary machines, it has a somewhat surprising application to current technology, even amongst current technologists.
When faced with a proprietary, sealed black box of some kind, engineers respond in different ways, but one reasonably constant reaction is a concern, even a fear, born of not understanding the operation of the internals. This reaction is the root of "FUD": it is the seed of Doubt which can, with care and attention, be grown into full-scale Fear, the fear of magic, the fear of the unknown. The Uncertainty resulting from not knowing exactly what is in the black box can be enough to encourage wildly inaccurate risk assessments by typically even-tempered, cautious people.
Why does this reaction occur? Simply, most engineers are obsessed with control and function: they do not like to estimate how likely something is to happen; rather, they want to *know*. They want to know exactly how everything works, what makes it tick, what goes on inside the box. This is precisely how engineers should behave; indeed, it is this set of characteristics which has enabled engineers to build ships, trains, cars, bridges, rockets, the telecommunications network, satellites, televisions, radios and so on. One cannot combine components unless one has a reasonable understanding of how they function, and of how they will behave as one changes their environment.
So: enter, stage right, the black box, the proprietary object, the locked-down machine. If this machine is going to stand alone, or has been deliberately designed to operate fully independently in its environment, then there is no problem. A microcontrolled toaster, for example, is unlikely to cause trouble: so long as the power supply is compatible and the slots are the right size for bread, all is well. How about a television or radio? Again, so long as the TV can handle PAL and/or DVB, or the radio can manage AM, FM or Digital Radio, it will work just fine if the power plug is the right kind. These boxes are not typically in the middle of complex, larger machines; they are usually the last in the chain, the customer premises equipment. But what if the box is designed for the middle of a network, say? Perhaps it's a multiplexing box, or a router, and perhaps it has some really useful feature, but the system designer doesn't quite, 100%, understand how it works?
Again, initial deployment will be fine: so long as the proprietary function, well, functions, then all will be well. The problems occur later, when one of two key events occurs: either there is a change requirement, some adjustment to the overall system, or an alternative supplier is required. At this point, the proprietariness suddenly becomes an enormous issue. Changing out a toaster or television built to accepted standards is no great issue; beyond weighing the features the customer requires against a price, all will be well. When a box in the middle of a network or system is considered, though, the problem becomes many times more complex, as the interactions between this box and many others need to be considered. This is the point at which the good engineers are separated from the also-rans, because they are the ones who can envisage what the outcome of different changes will be. Unfortunately, even for the very best engineers, predicting the behaviour of a proprietary, locked-down box is next to impossible, particularly if an alternative is to be sought.
Anyone who has looked at the challenges of automatically testing complex machines will know that such tests are always limited by the number of conditions which can be measured: the problem grows exponentially, because each new condition multiplies the number of combinations to be tested by all of the others. This is enough to give the greatest minds pause for thought, and probably a need for a beer.
Why? Well, inside that proprietary box is something which is, to a point, magic. Prodding it from the outside is most unlikely to ever reveal all its secrets, so a really good engineer will always be concerned that they've forgotten something, or missed something, or not considered a condition.
The result? The consulting engineer might well be very unwilling to consider the change-out of a proprietary device without truly exhaustive investigation. That investigation will add cost to the change-out, and will serve to make alternatives seem less viable, more expensive, and less suitable to a business.
The solution is really very simple, but like many simple solutions, not necessarily trivial! Engineers like simple things, for the reasons described above: they make their brains hurt less when performing design work, analysis, maintenance, repair, and so on. So, our non-trivial but simple solution goes like this: start replacing the proprietary magic boxes with open boxes, running on standard hardware, using open-source software, adhering to all possible standards. The accountants will surely point to interesting facts, like "this open box costs more than the proprietary one". Well, it (usually) doesn't, in fact, but one has to consider exit cost as part of the overall cost, which most companies do not, and thus they arrive at the wrong answer.
There are other answers, of course, which straddle the ends of the spectrum, and handle such significant problems as "there is no open solution to this problem available". Naturally, you could write your own, or pay someone to write it for you, but sometimes even this is not practical for some reason. In that case, the key is to have several different types of black box in use for the same function; that way, at least a company or organisation is not dependent on just one or perhaps two suppliers. This is a hugely underestimated risk: once a supplier realises that a large customer has really got nowhere else to go, it's amazing how much the price can rise...
That's the price of magic!
Want the quick answer, you know, the one which troubles you every time you upgrade your broadband and you /really/ believe that the streams will be reliable this time? It's because adding bandwidth in connectionless systems does not eliminate congestion; the two issues are orthogonal. What's more, most people actually know this from personal experience, at least, those people who drive a car in a congested urban area, anyway.
Road management agencies have long tried to address the congestion problem by adding lanes to motorways, but the result is not typically what was hoped for, as all it does is release further, previously suppressed demand. Furthermore, adding lanes to roads does little or nothing to alleviate problems at junctions and other pinch-points; in fact, it can make them seem worse. Other techniques have been tried, again to little positive effect; London's partial bus-lanes are well known to its regular commuters. What use is weighted queueing up to the traffic lights when you're stuck 20 car-lengths back in the general queue?
Well, none at all, really.
The whole queueing argument is one of the 21st Century's best "snake-oil" sales pitches, but the flaws are easy to spot. Consider the bus-lane example above and think about how you /could/ make it work. In order to be 100% certain that the bus can get onto the bus-lane, the only way is to guarantee its access, which is essentially arguing that the bus-lane must be extended right back to the previous flexibility point, be it a junction, traffic-light or roundabout. At this point, though, the link is no longer connectionless; rather, we've reserved bandwidth which is available to the bus and nothing else between two flexibility points, or to put it another way, we've made a *connection*. Consider this in terms of a router: we are effectively saying that a queue within the router cannot guarantee that a particular packet will get onto that queue if there is congestion on the incoming link.
Of course, you can now try the "M25 needs another lane" argument and make the incoming link larger, which seems to address this problem, but in reality it merely pushes the congestion somewhere else in the network. So, just like the road system so many of us are so familiar with, there is no way to solve the congestion problem using connectionless technologies; it is always going to be "best effort", and we need to look for another approach.
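The "the congestion just moves" point can be shown with a toy model (emphatically not a real traffic model): packets flow through two links in series, and widening the first link simply relocates the queue to the next pinch-point downstream.

```python
# Toy sketch: two links in series. Widening the first link does not
# remove congestion, it just moves the queue to the next pinch-point.

def drain(arrivals_per_tick, capacity_a, capacity_b, ticks):
    """Return the final queue lengths at link A and link B."""
    queue_a = queue_b = 0
    for _ in range(ticks):
        queue_a += arrivals_per_tick
        moved = min(queue_a, capacity_a)     # link A forwards what it can
        queue_a -= moved
        queue_b += moved
        queue_b -= min(queue_b, capacity_b)  # link B drains at its own rate
    return queue_a, queue_b

# 10 packets/tick offered; both links carry 8/tick: the backlog sits at A.
print(drain(10, 8, 8, 100))   # -> (200, 0)
# "Add a lane" to link A (capacity 16): the same backlog now sits at B.
print(drain(10, 16, 8, 100))  # -> (0, 200)
```

Either way, 200 packets are queued somewhere after 100 ticks; the extra capacity on link A changed where the queue forms, not whether it forms.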
...Or do we?
One of the great strengths of the internet is that the vast majority of traffic which uses it rides on highly resilient transport protocols. It doesn't matter a great deal in most cases if packets are lost, misrouted, delayed or damaged, as the various layers of the IP stack will recognise these different kinds of defect and handle them appropriately. Lost packets can be requested again, or perhaps reconstructed using error-correcting codes; misrouted packets can be discarded, as they do not belong to our link; delayed packets can be re-inserted into the stream after a short delay in responsiveness to the user; and damaged packets, detected via their checksums, can be re-requested or perhaps repaired using error-correcting codes.
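A minimal sketch of the recovery principle for loss, leaving aside everything real TCP does (windows, timers, acks); the point is only that the receiver can notice gaps in the sequence numbers and ask again, so the reassembled data is perfect no matter what happened on the wire:

```python
# Sketch: why loss is harmless for file-style traffic. The receiver
# simply re-requests any sequence numbers that never arrived.
# (Real TCP is far more involved; this only illustrates the principle.)

def deliver(packets, lost):
    """Simulate a transfer where some sequence numbers are lost first time."""
    received = {}
    # First pass: everything except the lost packets arrives.
    for seq, data in packets:
        if seq not in lost:
            received[seq] = data
    # The receiver notices the gaps and re-requests them; the resends arrive.
    missing = {seq for seq, _ in packets if seq not in received}
    for seq, data in packets:
        if seq in missing:
            received[seq] = data
    # Reassemble in sequence order, regardless of arrival order or loss.
    return b"".join(received[seq] for seq in sorted(received))

packets = list(enumerate([b"he", b"llo", b" wor", b"ld"]))
print(deliver(packets, lost={1, 3}))  # -> b'hello world'
```

The file arrives intact even though half the packets were lost on the first attempt; the only cost is time, which files and messages can afford.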
All of this admirable behaviour is quite fine for two classes of traffic: the file, and the message. Files are typically fairly large, but can be transferred using whatever bandwidth is available at the time. There is typically little requirement for a rapid file transfer, merely a requirement that it be completed in the end with no errors. Messages are similar, but typically much shorter, and have some requirements regarding reasonable latency.
However, there is a third class of traffic, one becoming increasingly important in the Web 2.0 world: the stream. Streams have one unique property, shared by neither files nor messages: a requirement for temporal consistency between information units. This requirement is non-negotiable, and fixed within some very tight limits indeed, because for streams the human brain-ear and/or brain-eye is part of the link, and the brain cannot tolerate any significant amount of jitter, wander, echo, packet loss, errors, noise, distortion or any number of other impairments.
Sadly, none of the fixes noted above for transmission defects can be used for streams, because, quite simply, there is no time to request additional packets, nor time to re-queue misaligned packets. Streams are unique. This is why, exactly why, no matter how much bandwidth is made available, voice or streams on the internet are nev nev nev never going to be perfect using the current de de designs.
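The "no time" argument can be made concrete. A sketch, using a rough 150 ms one-way budget for conversational voice (the commonly quoted figure; the exact number is not the point): every packet has a play-out deadline, and a packet that arrives, or is re-sent, after its deadline is useless no matter how much bandwidth exists.

```python
# Sketch: why retransmission cannot save a stream. Each packet has a
# play-out deadline; anything arriving later is discarded, not replayed.

PLAYOUT_BUDGET_MS = 150  # rough one-way delay budget for conversational voice

def playable(arrival_delays_ms, budget_ms=PLAYOUT_BUDGET_MS):
    """Split packets into those played out and those discarded as too late."""
    played = [d for d in arrival_delays_ms if d <= budget_ms]
    dropped = [d for d in arrival_delays_ms if d > budget_ms]
    return played, dropped

# A retransmitted packet typically arrives a round-trip later; here the
# fourth packet was lost and re-sent, arriving after 380 ms.
delays = [40, 60, 55, 380, 70]
played, dropped = playable(delays)
print(len(played), len(dropped))  # -> 4 1
```

For a file, that late packet would simply complete the transfer a moment later; for a stream, it is a gap the listener has already heard.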
Well, now we should be asking ourselves a new version of our question, which is: can this be handled using the new protocols we have available?
Sometimes, history is kind to us, and on this occasion it is extremely kind and generous, because all we need to look at for inspiration are existing telco networks. Traditional telco networks are the apex of many years of development of plesiochronous and then synchronous digital networks; these networks are designed for optimal latency performance, and tailored very tightly to the needs of voice telephony. All the issues which were the bane of early internet adopters, analogue modems on dial-up lines, low speeds and so on, were a direct result of using the switched voice network, optimised for streams, for the new and upcoming internet, which at that time was dominated by file and message transfers.
The bandwidths now used for the internet, in the noughties, are colossal when compared with those required for voice networks, although it's possible that video streaming networks, if commercially successful, could result in networks of much greater bandwidth still.
Well, what exactly did those telco networks do well? Connections. They set up connections, on the fly, very rapidly. They created the equivalent of a dedicated railway line, or a personal motorway lane, or a bus-lane running from the very start to the very end of the connection. The bandwidth was /never/ shared with anything else; rather, it was dedicated to the voice stream in question, and had the other fascinating property of being fully duplex, with guarantees that there would be no significant latency difference between the two directions.
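The essence of that connection-oriented behaviour is admission at set-up time: either the full bandwidth for the stream is reserved end-to-end, or the call is refused outright. A minimal sketch of that idea (the class and figures are illustrative, not any real telco's admission logic):

```python
# Sketch of call admission on a connection-oriented link: reserve the
# stream's full bandwidth, or refuse the call. An admitted stream never
# competes with anything else for capacity.

class Link:
    def __init__(self, capacity_kbps):
        self.capacity = capacity_kbps
        self.reserved = 0

    def admit(self, kbps):
        """Reserve bandwidth for a new connection, or refuse it outright."""
        if self.reserved + kbps > self.capacity:
            return False          # busy tone: no degraded "best effort" call
        self.reserved += kbps
        return True

link = Link(capacity_kbps=256)
calls = [link.admit(64) for _ in range(5)]  # five 64 kbps voice circuits
print(calls)  # -> [True, True, True, True, False]
```

The fifth caller gets a busy tone rather than everyone's call degrading, which is exactly the trade the connectionless internet refuses to make.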
Which brings us right back a decade, to some of the erroneous assumptions being made during the 1990s about the then-future network designs, i.e., those designs of today. A great many pundits and equipment manufacturers were arguing strongly that all traffic types should be merged at layer 3, on IP, instead of at layer 2, on SDH or WDM, but many of us were then quite convinced that this could never work, because of the peculiar, but unavoidable, needs of streams.
The right answer was right where we were: we should mix our traffic at layer 2, where it is possible to keep connection-oriented links separated from connectionless ones, so that a best-effort internet can co-exist with connection-oriented streams. So does that mean we're stuck with the highly expensive TDM interfaces? Well, not necessarily; some colleagues of mine have been working on Carrier Grade Ethernet, now known as PBB-TE, directly aimed at resolving this set of pressures. It gets the gains of Ethernet in terms of interface costs, but can be managed using existing telco management infrastructure, keeping introduction costs to a reasonable minimum. PBB-TE offers connection-oriented packet switching, rather like ATM did, but without the small-packet limitations of ATM. You can read more about it here: http://en.wikipedia.org/wiki/Carrier_ethernet_transport
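A hedged sketch of the PBB-TE forwarding idea, as I understand it: frames are switched on a (backbone destination MAC, backbone VLAN ID) pair, the forwarding table is provisioned by the management plane rather than filled by flooding and learning, and frames with no provisioned entry are simply dropped. The class and method names below are illustrative only, not any real implementation's API.

```python
# Sketch of connection-oriented Ethernet forwarding in the PBB-TE style:
# provisioned (B-DA, B-VID) entries only; unknown traffic is dropped,
# never flooded, which is what makes the trunk behave like a connection.

class PbbTeSwitch:
    def __init__(self):
        self.table = {}  # (b_da, b_vid) -> output port, set by management

    def provision(self, b_da, b_vid, port):
        """Management plane installs a forwarding entry for one trunk."""
        self.table[(b_da, b_vid)] = port

    def forward(self, b_da, b_vid):
        """Return the output port, or None: drop rather than flood."""
        return self.table.get((b_da, b_vid))

sw = PbbTeSwitch()
sw.provision("00:aa:bb:cc:dd:ee", 42, port=3)
print(sw.forward("00:aa:bb:cc:dd:ee", 42))  # -> 3
print(sw.forward("00:aa:bb:cc:dd:ee", 7))   # -> None
```

Dropping unknown frames instead of flooding them is the key difference from ordinary bridged Ethernet: the path exists only because management put it there, which is what the telco operational model needs.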
This issue also has strong implications for the net neutrality debate, in that it essentially eliminates it! Telcos will be able to charge for bits moved whether those bits travel on the connectionless or the connection-oriented side, so there will be little disadvantage to telcos if people choose to use a best-effort voice provider, since the bits can be charged for in the same way anyway.
Looked at the other way around, it could be possible, by judicious use of PBB-TE, to make Skype or Gizmo as reliable as the PSTN *all* the time, not just some of it; similarly, streaming from network-based servers could be made reliable by extending PBB-TE connections from the net-side server back out to the customer premises equipment.
This is a longer-term view, of course, and in the first instance, the core of carrier networks will be addressed, as this is the most sensible first step. The question I'm now pondering is how we get PBB-TE capability embedded into standard Linux applications and perhaps into the kernel, too. Because, until we fix that, internet streaming is n n n ne neve ...
Some will surely know that I edit a daily news digest, culled from the postings of the Linux advocacy group popularly known as cola. I've been an inhabitant of cola for many years, have contributed to the FAQ, have contributed to the weekly stats package, and have made thousands of posts of my own.
I originally went to the cola group because it was debating many of the issues fundamental to my day job: the pros and cons of free software, Linux, the GPL, new business paradigms around support, and new development models built on the internet.
I've been all of positive, helpful, irritated, angry and even unreasonable at times, and have regularly taken part in political debate much wider than the politics of cola. Many of the people there I have the most respect for are the ones I most often disagree with. The exception is one group: the anti-charter, off-topic trolls; people who, in my view, should not post in this group at all.
In the main, I have them killfiled, a few at the leafnode level so they never enter my spool, but most in my slrn score file. I do it that way because I'm not the only user of my news server.
I watched again, today, as one of the regular trolls (usenet-speak for people who knowingly present a false and provocative argument, frequently inappropriate for the group, in order to gain an audience) went to work. This time, I responded to the chap who had been trolled, with one of the most poetic pieces of prose I've ever written. Even I became sad on writing it; yet I feel it is probably a reasonable summary of the character in question, so I've reproduced it here. Perhaps you'll resonate with it, perhaps not.
<--------- quote --------->
> Crowing would be me PERSONALLY bragging about it at every opportunity,
> manually typing "Look tosser, I've got a degree in this, I know what I'm
> talking about".
One can only imagine that he's either jealous, or merely trying anything he can to troll a response from you. Personally, I suspect the latter, he's just trolling, because he knows that you'll respond to this, even if just to tell him he's wrong.
I have a mental image here of a sad and lonely man, in a bedsit or cramped flat, dishes stacking out of the sink, and the remains of the last take-away meal waiting to be put into the rubbish bin. The room is fairly dark, perhaps a single light bulb, but most illumination comes from a PC monitor, sat pride of place on an untidy but obviously well-used desk. There are piles of old PCI cards nearby, and lots of "Windows for Dummies" books, and perhaps an MCSE certificate in a frame on the wall, in view of the computer user.
There is a phone, but it rarely rings, and a mobile he uses to take pictures of what he likes outside the room. He transfers them to the PC in his room, of course, and would like to share them with someone, but is lacking in sufficient social connections to do so. His email account is full of messages from contacts in Microsoft, people he once knew well, people he envies because they were employed when he was not. These, of course, are the ones with the Comp Sci degrees...
He is at once fascinated with and terrified of Linux and free software. His terror stems from his lack of relevant academic qualification, but his fascination, in equal measure, from awe of those who write the code, who can manage the machines, and who are not scared. Deep down, he knows that free software is on an unstoppable march to an inevitable victory. He knows that his flimsy certifications will become worthless, and that he'll have to change or find another career, but he hopes that if he rants for long enough, like those foolish courtiers of King Cnut, he can wish the tide back out.
So, he fills his empty hours with busy minutes, convincing himself that there is some value to his interaction with the usenet community he came to harass. The busy minutes often feel like exciting seconds, as he trolls someone into a response, perhaps justifying a point, or disputing, yet again, the same argument he made last year and the year before, in those endless, lonely days of empty hours, stretching back as far as he chooses to recall. When he was younger, he used to look down on the old men sat on their park benches, but now he avoids gazing at them, because he's terrified of seeing himself, in 20 years, looking...
Don't feed the trolls - pity them.
<--------- /quote --------->
Yesterday, I finally got around to loading the open router firmware onto my Linksys WRT54GL. I've imaginatively named it wrt54gl1.marknet, installed iproute2 on it, and recreated the policy-based routing from my current main server.
Next steps are to install bind on it, transfer a copy of my domain information, and then set up the DHCP server to replace the one on the main server. Basically, I'll move as much as I can onto it, as I'm splitting the main server, giskard, into multiple low-power machines, including a mythtv box from efficientpc.co.uk and a file server from somewhere as yet unknown, but it'll be a low-power one.
There is something very satisfying about secshelling to my Linksys router from my Nokia 770; how far Linux has come!
On the strange-event-of-the-day front, I found out why some of my emails from work to my personal account were disappearing: they were being grabbed into the spam-catcher bucket. I've requested that my email address be removed from this bucket, although I wonder how they will cope with spoofed From addresses? Time will tell.