A Practical Navigator for the Internet Economy



pp. 1-9

We interview Bill St. Arnaud, Director of Network Projects for CA*net 3, the world's first optical national R&D network. St. Arnaud describes the beginnings of a trend that may enable enough ISPs to peer directly with other ISPs that, instead of having to buy transit from core backbones for 80% of their traffic, they may need to buy transit for only 20%.

The Optical Border Gateway Protocol (OBGP) is an experimental concept which, at this point, is unproven. The first enabling step is dark fiber and the availability of many dozens of wavelengths, such that an ISP can purchase its own wavelength and use it to connect to an exchange point several thousand miles away if need be. At such an exchange point you will be able to do standard peering. The advantage OBGP gives you is that you can get to that exchange point without having to pay transit costs to an upstream provider.

As the number of available wavelengths multiplies and the prices of those wavelengths come down, ISPs will find that they are able to buy wavelengths from their networks to dozens and perhaps even hundreds of exchange points, some of which will be in other countries and on other continents halfway around the world. This is the first step. It is doable today and is beginning to happen, as Williams and others are now selling wavelengths to ISPs.

With OBGP any optical switch in an optical network can be treated as an Internet exchange point, such that autonomous ISPs can interconnect and peer with each other anywhere along the network. This has some profound consequences. If users at the edge control the routing and topology of the network through control of the ports on the switch, the carrier in the middle will be very limited in how it can optimize and manage the wavelength routing in the network.

BGP has an options field, and a number of proprietary products have been using that field for their own special purposes. St. Arnaud's group proposes to use the options field in BGP to turn networking upside down. Today, when you connect an ISP to an upstream ISP, the first thing done is to install the physical fiber. Then comes the link layer, which can be ATM or SONET. Next, IP connectivity is established, and finally BGP connectivity. They are saying: let us reverse that whole process.

So first they would establish BGP peering. They would say: I want to peer with Gordon Cook. They have your AS number and can start a BGP session in which they instruct their router to connect us. It is the router, then, that establishes the physical connection between us. Right here in Ottawa most of their institutions are going to put in either four or eight wavelengths. They can set those wavelengths up to go to whichever universities they choose. Then, if someone says let's set up a BGP session with Gordon Cook, a router will take one of those wavelengths and steer it toward a connection that will peer with your network.
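The reversed provisioning order described above can be caricatured in a few lines of code. This is only an illustrative sketch, not the OBGP protocol itself: the class and method names are invented for the example, and a real switch would of course signal through BGP's options field rather than a Python call.

```python
class OpticalSwitch:
    """Toy model of an OBGP-style switch: the BGP peering intent comes
    first, and a physical wavelength is steered to carry it afterwards,
    reversing the traditional fiber -> link layer -> IP -> BGP order."""

    def __init__(self, wavelengths):
        self.free = list(wavelengths)   # unassigned lambdas on the fiber
        self.lightpaths = {}            # peer AS number -> wavelength

    def request_peering(self, as_number):
        """Announce the intent to peer with a given AS; the switch
        dedicates a free wavelength to that session and returns it."""
        if as_number in self.lightpaths:          # session already lit
            return self.lightpaths[as_number]
        if not self.free:
            raise RuntimeError("no free wavelengths on this fiber")
        wavelength = self.free.pop(0)
        self.lightpaths[as_number] = wavelength
        return wavelength
```

An institution with, say, eight wavelengths would call `request_peering` once per university or ISP it chooses to peer with, and the pool of unassigned wavelengths shrinks accordingly.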

In their Quebec university-owned dark fiber network they will do a proof of concept demonstrating these capabilities over the course of the next couple of months. They will put together and test a very simple version of the protocol. They already have a number of industry partners involved in the project who have indicated that they are ready to take the concept commercial and go with them into the IETF.

For a one-time investment of as little as $10,000 it is possible to get wired with dark fiber that will last for 20 years. The biggest single expense for institutions controlling their own dark fiber is Internet transit cost. If OBGP becomes usable, the amount of transit they need to purchase should decrease to between a fourth and a fifth of what is necessary today. What works to reduce transit expense for universities will also reduce it for small ISPs, which could use wavelengths and OBGP to peer with each other.

St. Arnaud expects that in the future there will be three parallel networks in existence. In fact he would suggest, as a matter of policy, not trying to converge all telecommunications onto IP, because doing so will increase the cost of IP. The residential telephone market will continue to provide delivery of telephone by twisted pair. There is nothing wrong with voice over copper, which in fact works very well. A second network may well be broadcast video over coax, which again works very well. There is no really compelling reason to deliver broadcast video over IP. He believes there will be a third network dedicated only to IP, and this network can under some circumstances carry voice or video or both.

When asked what must be done to continue the scaling of backbones, St. Arnaud comments: "There are probably three possibilities. I am not a fan of the MPLS approach. MPLS was a technique designed to cope with a shortage of bandwidth. Now, with the ability to buy wavelengths of light on fiber, bandwidth is not an issue. Consequently micro-engineering the network is probably not necessary today. Also, it does not look like we can build routers big enough to aggregate all the traffic. We will require optical 'cut through' or 'bypass' circuits. The challenge that faces us is who controls the cut-through circuits - the carrier in the middle or the customer at the edge."

"Now, doing it this way is not building a traditional circuit-switched model. Circuit-switched models imply that for every flow you must start a new circuit. The service will set up a switched circuit and send a web page to you, and then switch to another circuit and send a web page to someone else. This will not scale. Having ISPs peer directly with each other over wavelengths, bypassing everyone in between, may achieve the same ends. In the standards bodies, you have three approaches. One is called the overlay, and this is basically the circuit-switched model. It is being promoted under the label of ODSI. Another is called peer networking. This is where all the wavelengths are treated like MPLS tunnels. The third approach is ours. In this we say: let BGP be the controlling mechanism, and let decisions be made more along the lines of traditional Internet direct peering relationships."




pp. 10 - 14, 26

We interview Bernard Daines, the CEO of World Wide Packets. Daines explains that the explosion of dark fiber over the past five years has been fueled by fiber networks laid by cable TV companies and by utilities, in addition to the more familiar carrier networks. In many places in the US, utilities are bringing fiber to existing homes and especially to developments of new homes. The dynamics of DWDM and Gigabit Ethernet mean that the cost of bringing fiber to new homes - and of bringing telephone, video services, and Internet along with it - has fallen low enough that a homeowner can now pay for services that only a few years ago only a wealthy corporation could afford.

World Wide Packets is re-engineering Gigabit Ethernet switching equipment for use in fiber-to-the-home environments. It is driving down the cost of such equipment to the point where a hub-and-spoke distribution system for 100 homes - a central Community Distribution Unit plus a Subscriber Distribution Unit for each home - would cost a total of about $1200 per home. Their equipment could be used to provide telecommunications services to customers of companies like VDN in Montreal, about which we wrote in our last issue.

In addition to the Subscriber and Community Distribution Units, World Wide Packets provides its customers with a Network Management Unit (NMU) that is basically a network-management software application. Fiber owners or network administrators would use the NMU to provision services to their subscribers, manage network operations, and define service levels. It provides a GUI, which can be Java-based or Web-based, to control the CDU and the SDU and to give the cable or utility customers of World Wide Packets complete end-to-end management of their entire network. In short, World Wide Packets is enabling the creation of micro-telecommunications companies that, by finding community market niches, can use the cost efficiencies of fiber and Gigabit Ethernet to compete against the older, far larger, wealthier, but technologically backward ILECs.



pp. 15 - 18

We summarize arguments about IPv6 on the IETF and NANOG lists from late August and early September. It is pointed out that IPv6 advocates have alienated the ISPs whose help they need.


pp. 19-24

We publish documents relevant to the campaigns of Karl Auerbach and Larry Lessig, as well as evidence of Ted Byfield's efforts to find out how the code for the membership campaign and election servers was acquired.



ENUM Protocol Seen As Directory Lookup Tying Internet Telephony Services to PSTN

All Numbers In PSTN Can Be ENUM Provisioned To Identify What Services Their Owner Uses

ENUM Enables VoIP Services To Find Ordinary PSTN End Points - Is Seen As Driver of Convergence

pp. 1- 10, 16

ENUM is the new IETF protocol designed to function like a directory service linking PSTN phone numbers to Internet-telephony-oriented services. We interview Rich Shockey, co-chair of the ENUM working group.

E.164 refers to the international telephone numbering plan established by the International Telecommunication Union. E.164 resolution means the use of the ENUM protocol (RFC 2916) to connect any number in the globally switched telephone network to whatever Internet services have been provisioned for it. Such services may range from a personal web page lookup to the ability to retrieve voice mail from anywhere in the world with a local phone call. ENUM makes it possible for the first time to connect a voice-over-IP service to any POTS phone whose number has been ENUM provisioned.

Currently one SIP-provisioned phone can find another only if the owners of each number are aware of the other's existence. Before the Web, to FTP a file from a directory you had to know it existed, or you had to browse directories and stumble by chance on anything interesting. The Web became a means of finding and indexing such files, and eventually of stitching them together so they could be located intelligently with great ease. ENUM will offer a way for a VoIP service like SIP to transparently find every PSTN phone on the globe whose owner has ENUM provisioned it through national registries to be set up late next year. When a PSTN phone is ENUM provisioned, much of its use will begin to flow over the Internet. If ENUM services become as popular as expected, they will be a means by which huge amounts of traffic are sucked out of the PSTN and onto the Internet. ENUM has sometimes been referred to as the service control point for the deconstruction of the PSTN by the Internet.

ENUM in a single sentence has been defined as "telephone number in, URL out, using NAPTR." An ENUM-specific domain (in other words, the ENUM expression of a telephone number under a single unique administrative DNS domain) must list any and all services available for that domain. The new ENUM domain is e164.arpa. It was added to the root late last month.
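The mapping from a telephone number to its ENUM domain is purely mechanical, and is worth seeing concretely. Following RFC 2916, the digits of the E.164 number are reversed, separated by dots, and placed under e164.arpa. A minimal sketch:

```python
def e164_to_enum_domain(number: str) -> str:
    """Convert an E.164 telephone number to its ENUM domain per RFC 2916:
    strip everything but digits, reverse them, dot-separate them, and
    append the e164.arpa suffix."""
    digits = [c for c in number if c.isdigit()]
    return ".".join(reversed(digits)) + ".e164.arpa"

# The RFC's own example number:
#   +46-8-9761234  ->  4.3.2.1.6.7.9.8.6.4.e164.arpa
```

A resolver then queries that domain for NAPTR records, which enumerate the services provisioned for the number.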

ENUM itself is a simple protocol, taking only five pages to describe. Shockey reports that the central development issue revolved around a debate over whether or not to use NAPTR records for service discovery. "NAPTR stands for the Naming Authority Pointer resource record. It is RFC 2915, written by Michael Mealling of Network Solutions and Ron Daniel of DATAFUSION. Debate about what resource records had to be returned for ENUM service resolution was extremely contentious."

Shockey describes RFC 2915 as "a profoundly elegant and powerful document for service resolution within a domain. For example, it has the ability to list 'n' services for a domain through the use of regular expressions and a variety of other features and functions. The importance of the use of NAPTR records in this environment cannot be stressed highly enough." Shockey states that the advantage of using NAPTR was having a single resolution methodology for resources associated with a telephone number. "If we did not use NAPTR for resolution, we would have issued a sort of directive to the Internet community saying that it was OK to resolve a telephone number to any resource record." (Tony Rutkowski, in a short article of his own in Communications Week International, also lauds the importance of NAPTR as the glue holding ENUM together.)
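To make the discussion above concrete, here is what a pair of NAPTR records for an ENUM domain looks like, adapted from the examples in RFC 2916. The domain, service names, and addresses are placeholders, not records from any live registry; each record's regular expression rewrites the dialed number into a URI for one provisioned service.

```
$ORIGIN 4.3.2.1.6.7.9.8.6.4.e164.arpa.
; order pref flags service       regexp                           replacement
IN NAPTR 100  10   "u"  "sip+E2U"    "!^.*$!sip:info@example.com!"    .
IN NAPTR 102  10   "u"  "mailto+E2U" "!^.*$!mailto:info@example.com!" .
```

A resolver sorts the records by order and preference, so the subscriber controls which service a caller's software tries first.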

It seems to the Editor that the intent of using NAPTR for resource records may be to ensure that the customer has, in effect, a single key chain for tying together all the advanced services to which he subscribes, in what is regarded as the most important enabling technology of convergence between telephony and computers as represented by the Internet.

Shockey acknowledges that since ENUM becomes a single point of control and also a single point of failure, the way in which services are provisioned will be absolutely critical. The consumer must be given absolute and total control over his ENUM services which may become the single tool set by which he controls his business and personal communications.

Under this ENUM business model there will be only a single ENUM provisioning authority for each nation state. The IETF and ITU have agreed not to break the E.164 mould, which means that each national telephone numbering authority will be asked to decide who will provision ENUM services within its borders.

In the US it is likely that an early decision of the new administration will be whether the Department of Commerce or the FCC should issue a solicitation for a national ENUM administrator. Some think that giving the task to the FCC would be both Bellhead-friendly and ensure a slower rollout than an assignment to the Nethead-friendly Department of Commerce. In any case it is assumed that a successful bidder will have to provide assurances that customer control over the selection of ENUM services, and over the privacy issues involved in having what may become a single identifier for all one's telecommunications activity, will be very carefully respected. It will also be critical to guarantee that when a business or individual changes phone numbers, all ENUM services attached to the old number are severed from it immediately on customer disconnect and attached to the new phone number as soon as it becomes live. The critical issues with ENUM are thought to be far more political than technical in nature.

Instant Messaging Coordination of People and Devices Becomes Standards Track High Priority

Serves As An Enabler For Many New Applications, New Uses Of Bandwidth And Intelligent User Agents

pp. 11-16

We interview Henning Schulzrinne, member of the Internet Architecture Board and Director of the Columbia University Internet Real Time Laboratory. Our subject is a tutorial on Instant Messaging and its applications, and on recent high priority IETF instant messaging protocol standardization efforts.

While the ability to send short messages that appear in real time on the screen of the recipient was a feature of most computer bulletin board software, it was AOL's adaptation of this capability that put it on the map of the Internet. From a consumer-services perspective it likely ranks behind the Web and email as the third major justification for people's Internet usage. Standardization matters because there are now isolated communities: people who use the Yahoo instant messaging client cannot send messages to people who use either the Microsoft or the AOL client. AOL appears to have 90 percent of instant messaging users, while the remaining 10% are split among Microsoft, Yahoo, ICQ, and Tribal Voice.

While this lack of standardization is not especially daunting in the U.S. right now, it will become much more important once more advanced wireless services come into play. The popularity of the Short Message Service (SMS) on GSM mobile phones in Europe is huge. Moreover, this popularity exists despite the fact that you have to type in your message with the number keys. On completion these messages are sent instantaneously to another mobile phone, with bridges to email also available. We can expect that this popularity will migrate with GSM devices to the United States.

Schulzrinne explains that "Instant Messaging is primarily a first-order mechanism. In other words you use it to set off other events. This explains where the interest of those of us on the multimedia side came in. What you can do is set up a number of simultaneous presence or messaging sessions - AOL or whatever type. When your group is present, you can start up a completely unrelated application such as a voice-over-IP conference call."

He also explains how standardization efforts were ramped up earlier this year in the IETF with the decision, made in the spring in Adelaide, to "set up a design competition where the first working group and others were challenged essentially to put up or shut up." The result was seven or eight submissions that were winnowed into three groups: the XPP or blocks-based proposal of Marshall Rose and Dave Crocker, and then "the SIP-based proposal which a number of us including Christian Huitema worked on. Within this one, the primary work had started much earlier with a proposal within the SIP working group which Jonathan Rosenberg and I had been working on for some time. Finally there was a whole set of proposals which was called 'group two,' and which was characterized by a more limited single set of functions for text-based presence indication and instant messaging."

The Schulzrinne-Huitema SIP-based approach came from a notion of seeing messaging in the more generic sense of event notification. "We see a signaling system consisting of a push part and a pull part, if you like. The push part comes in where you call me on the phone to find out if I'm available. The phone rings and either I pick up or I don't. The pull part would be when I tell you, just in case you wanted to know, whether I'm available to talk or not. These two are mirror images of each other, and thus it makes sense for them to be provided within a similar overall signaling framework that includes the ability to reach end systems."

He explains that the power of using SIP not just for an "I call you up" mode but also for an "I notify you when I'm available" mode and an "I subscribe to find out when you are available" mode is that all of these messages are handled by the same set of software, the so-called proxy servers. "You can think of those as being cousins of your SMTP server. If, as we expect, the next-generation wireless systems all use SIP internally, we will have an infrastructure with a billion users on it. We might as well leverage this infrastructure, since the proxies do not all have to be upgraded in order to have this functionality."
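The "I subscribe to find out when you are available" mode Schulzrinne describes is carried by an ordinary SIP request that any proxy can route. The sketch below, modeled on the SIP events work that grew out of these proposals, builds such a request as text; the header set is deliberately abbreviated and the addresses are placeholders invented for the example, not addresses from the interview.

```python
def build_subscribe(watcher: str, presentity: str, expires: int = 3600) -> str:
    """Build a minimal SIP SUBSCRIBE request for the 'presence' event
    package: the watcher asks to be notified whenever the presentity's
    availability changes. A real SIP stack would also add Via, CSeq,
    Call-ID, and transport headers."""
    return "\r\n".join([
        f"SUBSCRIBE sip:{presentity} SIP/2.0",
        f"From: <sip:{watcher}>",
        f"To: <sip:{presentity}>",
        "Event: presence",            # the event package being subscribed to
        f"Expires: {expires}",        # subscription lifetime in seconds
        "Content-Length: 0",
        "",                           # blank line ends the header section
        "",
    ])
```

Because this is just another SIP request, the proxies that route phone calls can route subscriptions and the resulting NOTIFY messages without knowing anything about presence, which is exactly the point Schulzrinne makes next.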

"Consequently a proxy does not even have to be aware that instant messaging or presence is going on. Therefore a proxy built today will be a perfectly capable router of subscriptions and notifications regardless of what happens to its details in the future. To us, this capability opens up an avenue for integration which hopefully will lead to lots of interesting new services." Among these will be SIP-based notification or execution of events. An instant message may turn on a remote whiteboard or webcam, or it may notify someone of a remote event such as a change in the condition of an ongoing process.

Commerce Department Formation Of ICANN Seen As Illegal End Run Around The Administrative Procedure Act And The United States Constitution

Michael Froomkin's Findings To Be Published In Duke Law Journal

Lawrence Lessig Lauds Froomkin's Creation Of Framework That Could Force Reform

pp. 17 - 21

We summarize and comment on Michael Froomkin's 166-page, 711-footnote landmark paper, "Wrong Turn in Cyberspace: Using ICANN to Route Around the APA and the Constitution," to be published in the Duke Law Journal, October 2000, Volume 50, No. 1. Froomkin's indictment in his opening paragraph is succinct: "The United States government is managing a critical portion of the Internet's infrastructure in violation of the Administrative Procedure Act (APA) and the Constitution. For almost two years the Internet Corporation for Assigned Names and Numbers (ICANN) has been making domain name policy under contract with the Department of Commerce (DoC). ICANN is formally a private non-profit California corporation created, in response to a summoning by U.S. government officials, to take regulatory actions that the Department of Commerce was unable or unwilling to take directly. If the U.S. government is laundering its policy making through ICANN, it violates the APA; if ICANN is in fact independent, then the federal government's decision to have ICANN manage a public resource of such importance, and to allow - indeed, require - it to enforce regulatory conditions on users of that resource, violates the non-delegation doctrine of the U.S. Constitution. In either case, the relationship violates basic norms of due process and public policy designed to ensure that federal power is exercised responsibly."

We believe it is very important to use Froomkin's compelling insights to educate both citizens and the executive and legislative branches of the US government. We need to understand quickly what has happened and why we should "be afraid." Out of such education, it is to be hoped, legal or legislative redress may be found. In a brief interview with us, Larry Lessig explains that Froomkin has provided a road map for legal action that, under the right circumstances, could be used to compel ICANN to change its ways. The almost-final draft of the Froomkin paper is a 1.1 MB PDF file at http://www.law.miami.edu/~froomkin/articles/icann1.pdf. We also offer comments on the recently concluded ICANN at-large membership elections.

Some Insights On Network Service Level Agreements

pp. 22-23

We publish a NANOG discussion of issues to consider in evaluating quality-of-service agreements with upstream providers.