A Practical Navigator for the Internet Economy

Multiplexing for QoE rather than QoS

Focus on User Experience


Explaining the otherwise unexplainable has always been a personal challenge.  There is something very important in what Neil Davies and crew are up to here.  From “foundations to penthouse” they are recommending that we recast the way we both think and act around statistically multiplexed packet networks.  This is not a small task, and, to increase the magnitude of the challenge, their recasting says to the world: the apparent success of how we built and scaled the first 20 years of the commercial Internet was misdirected. That trial-and-error effort was based on a series of appealing (but ultimately ill-founded) “hunches.”  If you will just throw out decades of accepted practice, and follow us instead, we have something better for you. You may indeed ask: why should I do that? Let us explain.

Neil Davies has come up with a new mathematical approach to networking, which in formal terms he defines as the “translocation” of data between computation processes.  If you adopt this approach, which entails a complete break with much accepted past knowledge, your severely stressed broadband business can become economically sustainable. Your customers will also be happier, because the experience that they receive will greatly improve.

Yet there are some problems: among them the fact that this is NOT a message that can readily be understood by, let alone sold to, most decision makers. Nevertheless, having formed a consultancy of like-minded people, Neil Davies is busy showing a few others how to apply these techniques to real networks.  What Davies says to me is that they have met with such success that folk running the net on a day-to-day basis flee in terror from the idea of trying to sell their bosses on adoption. Such an admission by the bosses would mean that for some number of years they had backed the wrong course of action, while accepting undeserved accolades and pay raises.

So what to do?  Use the Internet to tell your story!  Like Mao Tse Tung, gather your revolutionaries in the surrounding villages, convert the countryside, and eventually storm the Verizons and AT&Ts of the world.  Thus Martin Geddes, one of the best explainers out there, signed on with Neil about three years ago, and has become an excellent and tireless advocate.  Martin and Neil are trying to promote understanding with intensive seminars - a business model that places these time-strapped practitioners under stress.  In January 2013 I asked: would they explain it all to me?  Sure.  But the explaining has not been easy and, in this case, we have run out of time to meet this publication deadline.

Consequently, I have found that my normal approach doesn’t work when addressing such a paradigm shift.  I am much more comfortable in undertaking an exposition of something only after I have grasped everything.  As of today – April 1, 2013 – I am not quite there.  This is the reason that I have used “a leap of faith” as my closing sub-heading.

Nevertheless, despite a large number of frustrations, what follows is a way of explaining the ideas and work of Predictable Network Solutions that has not - I believe - ever been offered before.

Multiplexing is what all of this is about

Bigger pipes don’t work because, especially in access networks, we are asking statistical multiplexing to do the impossible.  As currently configured, broadband networks cannot continue to deliver every possible offered payload with acceptably low packet loss and delay, whilst also maintaining a reasonable cost structure.  The resource is finite, and trade-offs are inevitable. Consequently, we must work with these trade-offs, and not against them. Rather than over-provisioning ever-larger pipes to blast the data through using brute force, networks must take the pipes they have and schedule the traffic appropriately. Doing this both makes more cost-effective use of the resource and delivers a better end-user experience.

However, achieving this means that network operators must multiplex their traffic streams together to meet two competing objectives: keep the network busy (to get more revenue) while delivering user satisfaction by bounding the resulting loss and delay for each flow. In other words, every broadband application must appear to have its own dedicated circuit, even though not a single such circuit is actually being supplied. The job of the network is to manage the trade-off between these two objectives, which we respectively call “resource efficiency” and “flow efficiency”.

This in turn requires a new core concept in networking, one that Davies and his colleagues call “quality attenuation”. This is a statistical idea about loss and delay, and their relationship to the applied load.  By allocating and trading quality attenuation between flows, the network can keep users from noticing any degradation in service caused by the statistical multiplexing, while network operators get the maximum possible statistical gain in resource use.
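To make “quality attenuation” a little more concrete, here is a back-of-the-envelope sketch of my own, written in Python; it is emphatically not Neil’s mathematics, and every parameter in it (service time, buffer size, packet counts) is invented.  It simulates a single FIFO queue with a finite buffer, fed at increasing offered loads, and shows delay and loss - the two components of quality attenuation - climbing together as the applied load approaches saturation.

    import random
    from collections import deque

    def simulate(offered_load, service_time=1.0, buffer_limit=20,
                 n_packets=100_000, seed=1):
        # One FIFO queue with a fixed service time and a finite buffer.
        # Returns (mean sojourn time of delivered packets, loss fraction).
        rng = random.Random(seed)
        in_system = deque()   # departure times of packets already accepted
        next_free = 0.0       # when the server can start serving the next packet
        t = 0.0
        delays, lost = [], 0
        for _ in range(n_packets):
            t += rng.expovariate(offered_load / service_time)  # Poisson arrivals
            while in_system and in_system[0] <= t:   # forget packets that have left
                in_system.popleft()
            if len(in_system) >= buffer_limit:       # buffer full: impairment as loss
                lost += 1
                continue
            start = max(t, next_free)
            next_free = start + service_time
            in_system.append(next_free)
            delays.append(next_free - t)             # impairment as delay
        return sum(delays) / len(delays), lost / n_packets

    for load in (0.5, 0.8, 0.9, 0.95, 0.99):
        mean_delay, loss = simulate(load)
        print(f"load {load:.2f}: mean delay {mean_delay:5.2f}, loss {loss:.3%}")

The particular numbers do not matter; the shape of the relationship does.  Past a certain load, small additions of demand buy large additions of impairment, and that impairment has to land on somebody’s flow.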

Here are some highlights of what they have done:

They have developed a new mathematics of traffic multiplexing that enables measurement, management, and prediction.

This prediction is based on a compositional property, which is the critical missing step in taking networking from a craft to a science.
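What “compositional” means, as best I can illustrate it - and this is my own sketch in Python, not the firm’s actual calculus, with the access and core figures invented - is that each hop’s quality attenuation can be described as a loss probability plus a distribution of delays, and the attenuation of a whole path then follows mechanically: losses combine, delays convolve.  That is what lets measurements of the parts predict the behaviour of the whole.

    from collections import defaultdict

    def compose(hop_a, hop_b):
        # Each hop is described as (loss_probability, {delay_ms: probability}).
        loss_a, delays_a = hop_a
        loss_b, delays_b = hop_b
        # A packet survives the path only if it survives every hop.
        loss_ab = 1 - (1 - loss_a) * (1 - loss_b)
        # Delays add, so the end-to-end delay distribution is the convolution.
        delays_ab = defaultdict(float)
        for da, pa in delays_a.items():
            for db, pb in delays_b.items():
                delays_ab[da + db] += pa * pb
        return loss_ab, dict(delays_ab)

    access = (0.001, {5: 0.7, 15: 0.3})    # hypothetical access-link attenuation
    core = (0.0001, {2: 0.9, 10: 0.1})     # hypothetical core attenuation
    print(compose(access, core))           # attenuation of the two hops in series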

Their “translocation multiplexing process” optimally interweaves multiple diverse sources of demand with a fixed supply of transmission resources.

Telcos and users are respectively happy when resource efficiency and flow efficiency are high.  Getting to this state requires two things: that there is (just) enough volumetric capacity for the user demand; and that demand is scheduled in such a way as to give each flow the appropriate treatment it needs.

The only choices the network has are over which flows to impair, and by how much to impair them. This is a fundamental re-framing of networking away from “moving packets” to “allocating impairment”.

We inherently operate from a position of allocating impairment and limiting failure.  This is an often unspoken function of our use of multiplexing. What you must do is manage the way in which those inevitable failures occur, so as to mitigate their effect.

The best performing network – both economically and technically – is the network engineered so that its operators can take direct control over the underlying mathematical properties of statistical multiplexing.

Efficient use of transmission requires at least three classes of service.  In other words, we must build a poly service network, as opposed to a mono service network.

In offering such a poly service network you impair each application flow, while taking care never to go beyond its established tolerance for loss and delay. As a result you avoid a breakdown of the application, and keep the system as a whole stable.

Network flows can be multiplexed in such a way that every flow gets its own class of service, in accordance with its needs and with the application’s tolerance for loss versus delay.
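To give a feel for what such class-of-service multiplexing might look like in its simplest form, here is a toy strict-priority scheduler over three classes, again a sketch of my own in Python rather than the scheduling discipline Neil’s group actually uses; the class names and queue sizes are invented.  Loss and delay are the only two currencies it has to spend, and it spends them differently on each class.

    from collections import deque

    class PolyServiceLink:
        # Three classes of service with strict priority and per-class buffers.
        def __init__(self, queue_limits=(8, 32, 128)):
            self.queues = [deque() for _ in queue_limits]
            self.limits = queue_limits

        def enqueue(self, service_class, packet):
            # Accept a packet, or drop it if its class's buffer is full.
            q = self.queues[service_class]
            if len(q) >= self.limits[service_class]:
                return False              # impairment taken as loss
            q.append(packet)
            return True

        def dequeue(self):
            # Transmit from the highest-priority non-empty queue.
            for q in self.queues:
                if q:
                    return q.popleft()    # lower classes wait: impairment as delay
            return None

    link = PolyServiceLink()
    link.enqueue(0, "voice frame")        # class 0: delay-sensitive
    link.enqueue(2, "bulk data segment")  # class 2: delay-tolerant
    print(link.dequeue())                 # the voice frame goes first

A real poly service network would be far more subtle about how much impairment each class may absorb, but the principle is the same: the scheduler decides which flows wait and which flows lose, rather than leaving that outcome to chance.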

Finally, as a part of network engineering, you can move to a system of billing (Quality Transport Agreements) that puts these new understandings into effect.  This is another intriguing area, and in May we will spend four days of face-to-face time in London to explore it.  Isaac Wilder and I hope to come away with a good enough grasp of what Neil and colleagues are doing to teach others how to apply these ideas in the United States.  We shall begin with small-scale greenfield projects, and then see what happens.

 

Contents


Executive Summary    p. 3
Introduction    p. 6

Creating a Fit-for-Purpose Metric for IP Network Performance
Via Application of Verifiable Engineering Standards

In the Beginning there was Multiplexing    p. 9
Moving from packet voice to packet data    p. 10
Networks translocate information    p. 10
Networking is therefore a multiplexed game of chance    p. 11
The game’s winning and losing boundaries    p. 11
Summary: flow and resource efficiency    p. 12
The curse of the gambler: runs of bad luck    p. 14
Double or quits: sometimes everyone loses    p. 15
Stacking the odds in the game of chance    p. 15
Enter the third epoch of networking    p. 16
Under these conditions the odds start to turn against us    p. 16
Bandwidth can’t solve scheduling [i.e. multiplexing] problems    p. 17
It’s all about quality attenuation    p. 17
Re-writing the rules of the game of chance    p. 18
Networks are trading spaces    p. 18
The Goal: improve networking quality by exploiting multiplexing    p. 19
Networks don’t do ‘work’    p. 19
Basic laws of network operation    p. 20
The end game: poly service networks    p. 21
Matching supply and demand    p. 21
Contracts for delivery    p. 21
The holy grail of distributed computing    p. 22

Conclusion
Having presented the theory - establish the engineering    p. 23
Editor’s Conclusion - Time for a Leap of Faith    p. 25