A Practical Navigator for the Internet Economy

Christian Huitema on Quality of Service pp. 1 - 6

We interview Christian Huitema, Chief Scientist at Bellcore's Internet Research Laboratory, in a wide-ranging discussion of Quality of Service issues. Huitema points out that QoS in the Internet is deteriorating: average packet loss figures have risen from the 2 to 3 percent range two years ago, to 5 percent a year ago, to an average of 10 percent now.

He points out that RSVP is not likely to be the cure-all that people assumed a year ago, for several reasons. In addition to the settlements problem, RSVP has another weakness that will make service providers shun it: every active RSVP session opens another route in the routing tables of the default-free core backbones of the global Internet. Furthermore, RSVP posits relatively long, high-bandwidth connections, while most of the net's traffic consists of slow-speed, brief-burst web transactions.
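
To make the scaling concern concrete, here is a toy Python sketch, entirely our own illustration rather than anything from the interview, contrasting the state a core router must hold under per-flow RSVP reservations with the fixed, class-based state of an aggregated scheme. The session counts are invented.

    # Toy illustration: per-flow RSVP state in a core router grows with
    # the number of active sessions, while a class-based scheme holds
    # state only per class.
    def rsvp_state(active_sessions):
        """Per-flow model: one reservation entry per active RSVP session."""
        return active_sessions

    def class_state(active_sessions, classes=3):
        """Aggregated model: router state is bounded by the class count."""
        return classes

    if __name__ == "__main__":
        for sessions in (1_000, 100_000, 10_000_000):
            print(f"{sessions:>10,} sessions: RSVP holds "
                  f"{rsvp_state(sessions):>10,} entries; "
                  f"class-based holds {class_state(sessions)}")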

What a year ago was spoken of as Integrated Services has now turned into Differentiated Services, where precedence bits are used to create classes of traffic. The traffic model described by Van Jacobson in his November 22nd "Two Bit" Internet draft emulates the airline industry: the vast majority of traffic flies economy class, while business and first class make up a disproportionate share of the airlines' revenue. Current QoS efforts are likely to embrace this business model.
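
As a hedged illustration of class-of-service marking with precedence bits, the sketch below sets the IP precedence field (the top three bits of the ToS byte) on a TCP socket. The economy/business/first mapping is our own airline-flavored assumption, not values taken from the Jacobson draft, and IP_TOS is only exposed on some platforms.

    # Minimal sketch: marking a connection's traffic class via the IP
    # precedence bits. Class names and values are illustrative
    # assumptions, not taken from the draft.
    import socket

    # IP precedence occupies the top three bits of the ToS byte, so a
    # precedence value is shifted left by five to land in bits 5-7.
    PRECEDENCE = {
        "economy": 0,   # routine best-effort traffic
        "business": 2,  # elevated precedence (illustrative value)
        "first": 4,     # highest of our three invented classes
    }

    def open_marked_connection(host, port, travel_class="economy"):
        """Open a TCP connection whose packets carry the class's bits."""
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS,
                     PRECEDENCE[travel_class] << 5)
        s.connect((host, port))
        return s

    if __name__ == "__main__":
        conn = open_marked_connection("example.com", 80, "business")
        conn.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(conn.recv(200).decode(errors="replace"))
        conn.close()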

Huitema urges looking at Quality of Service in a broader scheme than technology alone. In such a scheme, technology is the first consideration. The second is customer expectation. The third is how you provision your network. And the fourth is economics. "In the end QoS boils down to your need to get enough income to beef up your network. If you cannot pay for your network, you won't get any quality whatsoever. Now we have looked briefly at the technology that can be used to do differentiation of services. And this is one way to get some people to pay more."

Customer expectations entail the development of tools that measure network performance in ways that can be replicated and that customers will understand. With this goal in mind, Huitema's lab is working on proprietary tools for some of its customers. Some of these tools will also be designed to help customers plan and size their network growth more accurately.
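
Huitema's tools themselves are proprietary and unpublished, but as a rough sketch of what a replicable, customer-facing measurement could look like, the following probe samples TCP connect times to a host and reports loss and median latency. The target host, sample count, and timeout are arbitrary assumptions.

    # Illustrative sketch only: the actual Bellcore tools are proprietary.
    # This probe uses TCP connect times as a crude, repeatable stand-in
    # for latency and loss measurement.
    import socket
    import statistics
    import time

    def probe(host, port=80, samples=20, timeout=2.0):
        """Repeated TCP connects; returns (loss_fraction, list_of_rtts)."""
        rtts, lost = [], 0
        for _ in range(samples):
            start = time.monotonic()
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    rtts.append(time.monotonic() - start)
            except OSError:
                lost += 1  # timeouts and refusals both count as loss here
            time.sleep(0.5)  # pace the probes so they stay independent-ish
        return lost / samples, rtts

    if __name__ == "__main__":
        loss, rtts = probe("example.com")
        print(f"loss: {loss:.0%}")
        if rtts:
            print(f"median connect time: "
                  f"{statistics.median(rtts) * 1000:.1f} ms")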

Finally, Huitema is examining tools that will give users a broad range of options for understanding the costs of the network services offered and their relationship to the value both parties derive from those services. He says: "We don't have them yet but we are working on them as we realize that the economics of the network are as important as the technology."

Overbey on IBM Global Net pp. 7 - 10

We interview Sid Overbey, Vice President of IBM Switched Access and Internet Service, IBM Global Services. Overbey explains how, when the NSFNET cooperative agreement ended in April of 1995, the IBM router engineers went off and helped to build, on the foundation of IBM's V-Net, the geographically largest $19.95-a-month unlimited-usage dial-up ISP in the world. Global Net reaches into more than 50 foreign countries with 500 dial-up points of presence (POPs) abroad and another 600 in the US.

Having been launched as an Internet service for users of IBM's OS/2 Warp operating system, Global Net has become a general dial-up ISP that, unlike AT&T's WorldNet, is the system of choice for Internet users who travel internationally: no matter where they go, they are almost always within reach of a Global Net POP. The equipment at each POP is primarily an IBM RS/6000 server and 3Com (US Robotics) PRI technology.

At the edges, the net uses Ascend/Cascade frame relay switches to multiplex traffic onto an ATM-based OC-3 core. Globally, the network currently has some 200 different peers. While Global Net is fully peered with the big five and has one private interconnect open now, Overbey expresses extreme doubt that the big five will have raised the peering bar to OC-12 by the end of the summer of 1998.
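
As a back-of-envelope illustration of the provisioning arithmetic behind such a network, the sketch below estimates how many T1 PRIs a POP of a given size needs and how many full-rate modem sessions one OC-3 could carry. The subscriber count, modem speed, and peak-concurrency ratio are our own assumptions, not IBM's figures.

    # Provisioning arithmetic sketch. Line rates are standard; everything
    # marked "assumption" is invented for illustration.
    T1_PRI_CHANNELS = 23     # a T1 PRI carries 23 bearer channels
    OC3_MBPS = 155.52        # OC-3 line rate
    MODEM_KBPS = 33.6        # typical late-1997 modem speed (assumption)
    PEAK_CONCURRENCY = 0.10  # subscribers online at peak (assumption)

    def pris_per_pop(subscribers):
        """PRIs a POP needs so every peak-hour caller finds a channel."""
        peak_sessions = int(subscribers * PEAK_CONCURRENCY)
        return -(-peak_sessions // T1_PRI_CHANNELS)  # ceiling division

    def full_rate_sessions_per_oc3():
        """Concurrent full-rate modem streams one OC-3 could carry."""
        return int(OC3_MBPS * 1000 / MODEM_KBPS)

    if __name__ == "__main__":
        print(f"PRIs for a 5,000-subscriber POP: {pris_per_pop(5000)}")
        print(f"Full-rate 33.6k streams per OC-3: "
              f"{full_rate_sessions_per_oc3()}")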

Boardwatch and Keynote Encore pp. 11 - 14

In late November an interesting discussion of the Boardwatch - Keynote tests of web server responsiveness as a measure of backbone responsiveness took place on Inet-access. Whatever one thinks of Rickard's methodology, it has certainly attracted the attention of the industry. The tests generalize backbone performance from a small set of servers subjected to a large set of measurements. Unfortunately, there are so many variables involved in the complex and ever-changing way the Internet is put together that such generalization is fraught with risk.
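
For readers unfamiliar with the mechanics under debate, here is a minimal sketch of a Keynote-style probe: repeatedly fetch a page and time the full download. The URLs are stand-ins; the real tests used many distributed agents against a designated server per backbone. Note that a single vantage point conflates server load with backbone performance, which is precisely the critique.

    # Hedged sketch of a Keynote-style responsiveness probe. The URL
    # list is invented; it stands in for the per-backbone test servers.
    import time
    import urllib.request

    URLS = [
        "http://example.com/",
        "http://example.org/",
    ]

    def fetch_time(url, timeout=10.0):
        """Seconds to download the full page body, or None on failure."""
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                resp.read()
            return time.monotonic() - start
        except OSError:
            return None

    if __name__ == "__main__":
        for url in URLS:
            times = []
            for _ in range(5):
                t = fetch_time(url)
                if t is not None:
                    times.append(t)
            if times:
                print(f"{url}: mean {sum(times) / len(times):.2f}s "
                      f"over {len(times)} successful fetches")
            else:
                print(f"{url}: all fetches failed")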

Sean Doran characterized the test as an obvious outgrowth of what he sees as "a 'human happiness' test suite. Whether or not it is a good indicator of 'network OKness' is largely unrelated to the test itself, and rather more in the conclusion space of the particular testers."

"Unfortunately the argument is insoluble at this point because we simply do not have the means to prove what an average Internet user really does online, much less when her or his patience threshold is exceeded."

Sean goes on to provide very interesting data about the variability he has observed in the performance of links at varied levels of bandwidth. While the Internet certainly does not have a 40 kilo-character speed limit, the bandwidth obtainable by a single user with a large pipe is likely to be only a fraction of what the pipe is rated for. Finally, on November 28, Rahul Desai published his own well-reasoned critique of what would have to be done to reframe the methodology before further testing would be worthwhile.
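
Doran's point about rated versus obtainable bandwidth can be illustrated with a single-transfer throughput measurement like the sketch below; the URL and the 45 Mb/s rated figure are invented for the example.

    # Sketch: measure achieved throughput on one transfer and compare it
    # to the pipe's rated capacity. URL and rated speed are assumptions.
    import time
    import urllib.request

    RATED_MBPS = 45.0            # e.g. a T3; purely an assumed figure
    URL = "http://example.com/"  # stand-in for a large test object

    def achieved_mbps(url, timeout=30.0):
        """Megabits per second achieved on one full transfer."""
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            nbytes = len(resp.read())
        return nbytes * 8 / 1e6 / (time.monotonic() - start)

    if __name__ == "__main__":
        mbps = achieved_mbps(URL)
        print(f"achieved {mbps:.2f} Mb/s: {mbps / RATED_MBPS:.0%} "
              f"of the rated {RATED_MBPS} Mb/s pipe")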

Brian Kahin Submits to our FOIA, p. 15

After a second brush-off from the Office of Science and Technology Policy, we sent the letter reproduced in this article to OSTP Director Jack Gibbons. We maintain that Brian Kahin has been stonewalling us since our October 7th request, in which we sought documents referring to Kahin's behind-the-scenes meetings with AT&T, IBM, and Oracle to discuss the creation of a database for CORE. OSTP informed us that it shipped a box of 252 responsive documents on December 2 and would ship another box by December 5, and we were told that the paper trail on these otherwise unpublished discussions was a foot high. Unfortunately, because of the agency's stalling, we have nothing to include in this issue.

Network Address Translation Devices Improve, pp. 16 - 19

On NANOG in early November, during the same period as the IPv6 discussion, Sean Doran argued that "NAT" boxes could become the fundamental scaling technology of the Internet by making the actual IP addresses in use behind a network's own border routers irrelevant to the rest of the Internet.

As is so often the case, reality turned out to be a little different: others were quick to point out that some protocols would break if forced to travel through a Network Address Translation device. How protocols like DNSSEC would be handled was less clear, though the consensus seemed to be that a workaround was quite doable. Still less clear was what would happen if NAT were used to try to segment backbone routing. The one place where NAT is currently most firmly established is in firewalls for major corporate intranets.
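
To make concrete what a NAT device actually does, and why address-embedding protocols break, here is a minimal sketch of an outbound translation table. All addresses and ports are invented for illustration.

    # Minimal sketch of NAT's core mechanism: rewrite private source
    # addresses to one public address, remembering the mapping so
    # replies can be translated back. Addresses/ports are invented.
    import itertools

    class NatTable:
        """Map inside (ip, port) pairs to ports on one public address."""

        def __init__(self, public_ip, first_port=40000):
            self.public_ip = public_ip
            self._ports = itertools.count(first_port)
            self.out = {}   # (private_ip, private_port) -> public_port
            self.back = {}  # public_port -> (private_ip, private_port)

        def outbound(self, src_ip, src_port):
            """Rewrite an inside source address, allocating a port if new."""
            key = (src_ip, src_port)
            if key not in self.out:
                port = next(self._ports)
                self.out[key] = port
                self.back[port] = key
            return self.public_ip, self.out[key]

        def inbound(self, dst_port):
            """Map a reply back to the inside host, if the mapping exists."""
            return self.back.get(dst_port)

    if __name__ == "__main__":
        nat = NatTable("192.0.2.1")
        print(nat.outbound("10.0.0.5", 1025))  # ('192.0.2.1', 40000)
        print(nat.inbound(40000))              # ('10.0.0.5', 1025)
        # A protocol that carries "10.0.0.5" inside its payload (FTP's
        # PORT command, for instance) leaks the untranslated address,
        # which is why it breaks without protocol-specific fixups.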

Management of .us Namespace, pp. 20 - 22

We conclude the article on the .us namespace that we began in November's issue. The contrast between NSI's management of .com and IANA's management of .us could hardly be more stark.

Two Bit Differentiated Services Architecture, p. 24

Van Jacobson announces an important draft at ftp://ftp.ee.lbl.gov/papers/draft-nichols-diff-svc-arch-00.txt