Exponential Growth Strains Backbone Capacity And Fuels Market For Cost Effective Access Technology, pp. 1-18

With new users continuing to clamber aboard the Internet in record numbers, questions arise about the capacity of the infrastructure to sustain the growth. New switching technologies have been touted by some as the solution. We find that while they do permit some providers to make more cost-effective use of network resources, they fail to answer other critical problems.

We examine complaints about declines in service quality and find anxiety about some providers' ability to build infrastructure rapidly enough to maintain market share. Some users assert that expanding into new markets and capturing new customers is taking precedence over providing reasonable standards of service to existing customers. But others are beginning to point out that the service standards one provider may offer are not much use in the Internet at large, because as soon as traffic leaves that provider's network, its service standards can no longer have any effect. The bottom line -- any attempt to impose standards industry-wide would drive out smaller providers that lacked the capital to do the upgrades necessary to meet them. In this fast-moving context, the backbones of the major providers need more bandwidth, more network interchange points to hand off traffic more efficiently, and some means of accommodating new users without overwhelming the capacity of routing tables.

While Cisco is introducing a new 75XX series of routers to handle the load, some feel that the capacity of these workhorses will quickly be stressed. Some also feel that, unlike bandwidth crunches of the past, the current problems are more numerous and difficult to contend with. At the same time, while the disciples of ATM promise reliable data transport in the many hundreds of megabits per second, so far, for IP transport, they have been unable to deliver. While Paul Vixie used the image of running headlong into a brick wall at 60 miles an hour in a very telling description of the multiple problems faced, the consensus seems to be that the majors would be able to impose rationing at some level before the unthinkable happened. [In Paul Vixie's words: "There's one nice thing about the brick wall we're headed for, and that's that it'll nail every part of our system simultaneously. We're running out of bus bandwidth, link capacity, route memory, and human brain power for keeping it all straight -- and we'll probably hit all the walls in the same week." - NANOG list.]

Meanwhile there has been some recent press discussion about "new" Internet backbones established by PSI, UUNET and Netcom. PSI actually acquired a Cascade 9000 based layer 2 switched backbone almost two years ago. UUNET made its move six months ago and Netcom completed its transition this fall. To its advocates this architecture overcomes some of the problems of router-based backbones. Switches, we are told: (1) have sub-millisecond latency; (2) can support multiple circuits at T-3; (3) are not burdened with any layer 3 IP processing or routing overhead; and (4) because of (3), have a per-port cost one-third to one-quarter that of a Cisco 7000.

Switches don't have to "think" as hard as routers; they do more in silicon. A switch takes a frame in, looks up the virtual circuit it is destined for, and blasts the frame down that virtual circuit. When the frame gets to the end of the virtual circuit, it pops out at a router. The provider thus avoids having routers at the top of its hierarchy. Each router is rather like an airline hub: if the traffic is not destined to be rerouted at that level, it is popped into the backbone switches, which send it to the proper router hub off-load point, where it is shunted the rest of the way to its destination at layer three. Switched IP ultimately must be routed, and when it is, many of the efficiencies of the switched transport are lost.
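The forwarding contrast just described can be sketched in a few lines of Python. This is an illustrative toy, not any vendor's implementation; the ports, virtual circuit identifiers, and prefixes are invented:

```python
import ipaddress

# Switch: a single exact-match lookup per frame.
# (incoming port, VC identifier) -> (outgoing port, VC identifier)
vc_table = {
    (1, 101): (4, 207),
    (2, 316): (4, 207),
}

def switch_frame(in_port, vc_id):
    """One table lookup; no IP header processing at all."""
    return vc_table[(in_port, vc_id)]

# Router: must find the most specific matching prefix for every packet.
routes = {
    "192.168.0.0/16": "if0",
    "192.168.4.0/24": "if1",
}

def route_packet(dst):
    """Longest-prefix match on the IP destination address."""
    addr = ipaddress.ip_address(dst)
    best = max(
        (ipaddress.ip_network(p) for p in routes
         if addr in ipaddress.ip_network(p)),
        key=lambda n: n.prefixlen,
    )
    return routes[str(best)]
```

The switch's work is a single exact-match lookup, which is why it is cheap to do in silicon; the router's longest-prefix match must consider every potentially matching prefix, which is the "thinking" the text refers to.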

Depending on the provider, such an architecture may yield considerable cost efficiencies. Unfortunately it offers no magic bullet for the problem of overflowing routing tables. The physical circuits underneath the PVC mesh are still of the same capacity, and an extra layer of equipment is added to manage the layer two PVC mesh. No routing has been saved at all, because before a packet can go out addressed to a PVC, the router must have a routing entry to direct it to the proper destination. To reach their destinations the IP packets still have to pop up to layer three, where they again have to be routed. Therefore routers around the periphery of the layer two frame relay backbone will still have to carry the full routing tables. For the same reason, the ability of a layer two PVC mesh to offer up to four different priorities of service is undercut.
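To see why the routing burden is unchanged, consider a toy model of the edge router's table; the prefixes and PVC names below are hypothetical. The router must hold one entry per route just to decide which PVC a packet enters, so its table is exactly as large as it would be in a pure router backbone:

```python
import ipaddress

# One entry per route -- the PVC mesh does not reduce this count,
# it only changes what the lookup result points at.
prefix_to_pvc = {
    "10.1.0.0/16": "pvc-to-east-hub",
    "10.2.0.0/16": "pvc-to-west-hub",
}

def choose_pvc(dst):
    """The edge router must consult a full table before a packet
    can even enter the layer two mesh."""
    addr = ipaddress.ip_address(dst)
    for prefix, pvc in prefix_to_pvc.items():
        if addr in ipaddress.ip_network(prefix):
            return pvc
    raise LookupError("no route: full routing tables are still required")
```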

A second use of switching fabrics exists. Many networks with layer 2 frame relay switching fabrics use them to aggregate customers in order to achieve a better cost per port. If they need to put a POP in place in a particular area, it makes sense to use the equipment that gives them the highest-performance, lowest-cost, best-value way of connecting their customers into their infrastructure. Once they do this, they bring the traffic back to some other point on their network where it can be routed.
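A back-of-the-envelope sketch of that per-port cost equation, with entirely invented prices, shows why aggregation pays: customers terminate on cheap switch ports, and only the backhaul into the routed core consumes expensive router ports.

```python
# All prices hypothetical -- this illustrates the shape of the
# argument, not actual mid-1990s equipment pricing.
ROUTER_PORT_COST = 8000.0   # assumed cost of one router port
SWITCH_PORT_COST = 2000.0   # assumed cost of one switch port

def cost_per_customer(customers, backhaul_ports=1):
    """Compare per-customer port cost with and without a switch
    aggregating the POP's customers before backhaul."""
    aggregated = (customers * SWITCH_PORT_COST
                  + backhaul_ports * ROUTER_PORT_COST)
    direct = customers * ROUTER_PORT_COST
    return aggregated / customers, direct / customers
```

With 20 customers at a POP under these assumed prices, aggregation brings the per-customer cost down to 2400 versus 8000 for direct router ports, and the advantage grows as more customers share the backhaul.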

For some providers of medium size, frame relay and frame relay backhaul are cost effective, and they can do this with small switches as well as big ones. On the other hand, the major IXC carriers won't be building small private frame relay networks for their internets, because they already have very large and very mature networks for the frame relay services they have rolled out. We find that because of the differences among service providers' markets, sizes, and priorities, there is no one right network architecture. Any attempt to evaluate the architectures of the ten largest national service providers assumes you can compare all the drivers that went into the designs.

Incorporating a layer 2 switched fabric at the periphery or in the backbone of a network lets the smaller nationals ramp up in a hurry to compete with the IXCs by getting the most bang for their investors' dollars. We find that network architecture answers need to be seen not as "right" or "wrong" but as a function of a provider's market position and strategy. We also find that understanding the strengths and weaknesses of the competing network architectures, and knowing as much as possible about the architecture of each major provider, is mandatory in assessing the market positioning of the major service providers.

We go on to consider the complaint that the Internet exhibits exponential growth in usage and ubiquity without the resources to upgrade quickly enough to satisfy all the demands. We ask whether it is embedded in the very nature of the Internet that the "product" will be spoiled if providers don't spend as much time in cooperation across networks as they do in competition. Some are concerned that, with the closing of the NSFnet seven months ago, the cross-provider communication necessary to make the Internet work with acceptable service has become severely weakened.

Our reporting is based on discussion carried out on the NANOG mail list and on additional interviews with BBN Planet, Cascade, Cisco, NYSERNet, AT&T, and the San Diego Supercomputer Center.

Routing Announcements By MAE-East Connected Networks, p.18

We offer a comparative snapshot of route announcements at MAE-East taken in the last week of August and the last week of November. Total numbers have increased by more than 10%. Some providers, through aggregation, have decreased their announcements, but others, like EUnet, have arrived with large sets of their own announcements. [A great deal should probably not be read into the increased route announcements. EUnet, for example, has been at MAE-East for some time; our source simply picked up its routes for the first time in November. We believe, however, that these figures will be useful to follow over time, and we hope to be able to do so.]
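The kind of comparison involved can be sketched as follows. The provider names and announcement counts here are invented for illustration, not the figures from our snapshot:

```python
# Two hypothetical snapshots: route announcements per provider.
aug = {"ProviderA": 1200, "ProviderB": 800}
nov = {"ProviderA": 1100, "ProviderB": 950, "EUnet": 400}

def compare_snapshots(old, new):
    """Return overall fractional growth in total announcements and
    the per-provider change, treating absent providers as zero."""
    growth = (sum(new.values()) - sum(old.values())) / sum(old.values())
    changes = {p: new.get(p, 0) - old.get(p, 0)
               for p in set(old) | set(new)}
    return growth, changes
```

In this toy data, ProviderA's aggregation shrinks its announcements while a newly visible EUnet adds a large block, and the total still grows; the caution in the bracketed note above applies equally here, since a newly observed provider inflates apparent growth.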

Access Indiana Emerges As Governor's Industrial Policy For Internet Service, pp. 19-21

The Intelinet Commission Director states that awardees of Access Indiana funds must buy Internet connectivity from the state's preferred backbone providers (Ameritech or Sprint) even if other ISPs could provide cheaper service. He describes the state's policy as one meant to ensure the development of a strong statewide backbone provider with high standards of service quality.

The Access Indiana committee imposed this rule change very quietly in early November. The Michiana Freenet had been awarded a $65,000 grant by the committee and, unaware of the latest rule change, submitted a plan to buy its Internet connectivity from CICnet. The AI Committee canceled its grant to the Michiana Freenet on November 19. The Director of the county library copied us on an extremely angry exchange of mail with the AI committee coordinator, which we fed back to Indiana citizens via our own mail list, compiled after the committee began to censor its formerly free list at the end of August. The Library Director also directly questioned the propriety of the state librarian's turning $300,000 of federal LSCA funds over to the committee for grants to be used in helping build Ameritech and Sprint infrastructure.

Vint Cerf And Steve VonRump Clarify MCI's Business Model Intentions, pp. 21-22

A month ago Internet Week reported that MCI was on the verge of imposing settlements. This did not square with what Vint Cerf had told us at the end of August. We set out to find out what had happened. Two highlights from the discussion follow. From Vint Cerf on November 20: "So far as I am aware, in all our agreements to interconnect, we include an opportunity to work with the peer to review our mutual experience with traffic levels and distribution so as to determine whether each party is able to account for costs and, we hope, matching revenues. We are interested in assuring that all parties can recover their costs for offering Internet services to customers and want to tune our business models toward that end." On November 27 Steve VonRump added: "I stand firmly by my comments on network quality, and the need to establish some kind of performance accountability between networks." MCI does not appear to be advancing the one year time frame cited by Vint Cerf in his interview with us published in September. MCI, however, is clearly the most eager of the major providers to impose some kind of charges in addition to those for pure connectivity.