A Practical Navigator for the Internet Economy

Provider Based CIDR Likely To Impede Progress Of Smaller Players, pp. 1-8

CIDR was developed several years ago, when it looked like the supply of Class B addresses would soon be exhausted, to permit fine-grained allocation of network addresses so that more networks could be supported globally. Although advocates point out that it was always intended to slow the growth in size of the Internet's routing tables, it is really only in 1995 that CIDR has become the principal means of slowing down, and indeed nearly halting, the runaway growth in the number of routes advertised to the backbone routers of the global Internet. This had become necessary because the total number of routes advertised was doubling in less than a year, while the hardware and software capability needed to handle that growth doubles only every two to three years.
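[To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python. The doubling times and the 60,000 route capability figure are taken from the text; the starting count of 32,000 advertised routes is our own assumption within the range cited below.]

    import math

    # Routes were doubling roughly yearly; the hardware and software
    # capability to carry them doubles only every ~2.5 years.
    routes, capability = 32_000, 60_000   # advertised routes vs. router limit
    t_routes, t_capability = 1.0, 2.5     # doubling times in years

    # Solve routes * 2**(t/t_routes) = capability * 2**(t/t_capability):
    t = math.log2(capability / routes) / (1 / t_routes - 1 / t_capability)
    print(f"Unconstrained growth closes the gap in ~{t:.1f} years")  # ~1.5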

While current routers can handle upwards of 60,000 routes, and the current number of routes advertised is between 30,000 and 35,000, the consensus of the technical community is that further growth in the number of advertised routes should be severely constrained, because the current headroom between routes advertised and equipment capability is necessary to dampen route flaps. Flaps occur when network routing changes. If a major backbone router goes off line, or especially if a peering session is suddenly dropped, the other routers have to reconfigure alternative paths for traffic directed at the router in question and propagate the reconfiguration to the farthest corners of the network. Think of the ripples caused by a stone thrown into a pond. Should that router come back on line, or should other changes in the dynamic configuration of the network occur before the first changes have propagated to the corners of the network, we have a new set of ripples colliding with the previous set and routers "flapping" instead of routing. Increased computing power in routers is seen as the best way to dampen the impact of such flaps. New router code from Cisco apparently is also helping.
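[The router code just mentioned is generally understood to work by penalizing unstable routes. Here is a minimal sketch of that penalty-and-decay idea in Python; the specific thresholds and half-life below are our own illustrative choices, not any vendor's actual defaults.]

    # Each flap adds a penalty that decays exponentially. A route whose
    # accumulated penalty exceeds the suppress limit stops being advertised
    # until decay brings the penalty back under the reuse limit.
    PENALTY_PER_FLAP = 1000
    SUPPRESS_LIMIT = 2000
    REUSE_LIMIT = 750      # re-advertise once penalty decays below this
    HALF_LIFE_MIN = 15.0   # penalty halves every 15 minutes

    def penalty(flap_times_min, now_min):
        """Total decayed penalty at time now_min for a list of flap times."""
        return sum(PENALTY_PER_FLAP * 0.5 ** ((now_min - t) / HALF_LIFE_MIN)
                   for t in flap_times_min)

    # A route that flapped at t = 0, 5 and 10 minutes gets suppressed:
    p = penalty([0, 5, 10], now_min=10)
    print(f"penalty {p:.0f}:", "suppress" if p > SUPPRESS_LIMIT else "advertise")
    # -> penalty 2424: suppress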

Provider based CIDR seeks to prevent routing table growth by insisting that users and downstream ISPs get network numbers from their larger upstream suppliers, who aggregate those routes so that a smaller set of routes needs to be announced to the entire Internet for their hosts to be globally reachable. If this technique is to be maximally effective, suppliers are finding that they must insist that anyone who changes providers accept IP numbers from the CIDR blocks of the new supplier rather than continue to use the numbers initially assigned. If the numbers from the previous provider were kept, the new provider would have to announce them globally as a series of new routes, adding to the burden on already overburdened backbones. Consequently, the customer is forced to renumber his hosts -- an act that is time consuming and costly, and in extreme cases so burdensome as to be prohibitive.
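[The aggregation at stake can be shown in a few lines of Python; the prefixes here are invented for illustration. Four contiguous customer networks carved from one provider block collapse into a single global announcement -- and a customer who departs still holding one of them forces the new provider to announce that prefix separately, which is exactly the extra global route the policy tries to avoid.]

    import ipaddress

    # Four contiguous customer /24s carved from one provider block...
    customer_routes = [ipaddress.ip_network(f"10.8.{i}.0/24") for i in range(4)]

    # ...aggregate into a single backbone announcement.
    print(list(ipaddress.collapse_addresses(customer_routes)))
    # -> [IPv4Network('10.8.0.0/22')]  one route instead of four

    # If the 10.8.2.0/24 customer moves away without renumbering, the new
    # provider must announce that /24 on its own: a fresh global route.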

Such a policy tends to favor the larger providers, who can give customers the best reassurance that they will never have to move to another supplier. The larger providers would also be best able to survive fees based on the number of routes they ask their interconnection partners to carry, should such fees begin to be levied as an additional means of holding down the growth of routing tables.

Dave Crocker has been a severe critic of provider based CIDR -- on the CIDRD mail list and elsewhere -- suggesting that the policies being followed would put the squeeze on small ISPs and advocating investigation of geography based rather than provider based CIDR. In an interview with us he describes the problem at length and advocates the testing of some geography based alternatives. While the chances of developing the geography based alternatives do not look good, he also points out how the current practice of multi-homing, which is an essential part of the growth of small ISPs, causes new routes to be propagated, thereby limiting the benefits of provider based CIDR. As a result, multi-homing could become more difficult and/or more costly in the future. Noel Chiappa, Dave's most consistent critic, has provided us with a critique of Dave's position. The article concludes with Noel's response to Dave, and Dave's comments on Noel's response.

We publish this debate less to ascertain who is correct than to highlight the warning signs of the impact of network growth on smaller providers. Those who survive will be those who most promptly and correctly factor the economic and operational implications of these developments into their business plans.

Asked To Comment By Maine Public Advocate We Find NYNEX Plan Not In Public Interest, pp. 9-14

This summer NYNEX presented the Maine PUC with a plan to attach all Maine public schools and libraries to the Internet during the next five years -- spending $20 million to do so instead of returning the $20 million in overcharges to ratepayers.

The NYNEX plan proposes a centralized, one-size-fits-all 56 kbps solution for the entire state. NYNEX would install and control all parts of a frame relay cloud and the Internet links thereto. Schools and libraries would be hung as "dumb" leaves on the branches of the network. The result is a poor one for the citizens of Maine. But, unlike Indiana, where state government has assisted Ameritech in implementing a solution with similar intent, citizen groups in Maine are petitioning the PUC to set up an independent oversight board to assist independent ISPs in working with NYNEX and local communities to establish a decentralized network owned and controlled by local communities. If Maine successfully pursues its current direction, we believe it should become a model for the rest of the nation.

Discussion Of Transient InternetMCI Routing Problem pp. 15-18

When an SSE synchronization bug caused, from the perspective of Westnet, a several-hour partition between MCI and Sprint on October 4th, we posted notice of the difficulty to the North American Network Operators list and asked what had happened. We received a torrent of discussion that showed concern about the way the majors are interconnecting at the NAPs. It also showed, however, that the Internet ethic of cooperative problem solving among service providers is alive and well.

NTIA Funds MercerNet, pp. 19-20

In the spring of 1995 MercerNet started out as the continuation of an attempt to get a full motion multipoint video network into Mercer County schools and libraries so that Princeton and other wealthy bedroom communities in the county could continue to offer advanced placement courses to their students in the face of cutbacks in state aid. The plans did not include Internet and seemed to have little to do with education or community development except in the most elitist way.

At the beginning of the summer Comcast became involved with the program and decided to invest in it as a testbed for a product that it would seek to market to its other franchises. As a result the project was broadened, and Comcast made a substantial investment in bringing Internet to all 21 participating locations. The grant obligates Comcast to build a dedicated fiber based video network and a coax based Internet network running at Ethernet speeds to 12 schools and 9 branches of the county library. Comcast will pay all installation costs, the Feds will pick up the cost of the video classrooms, and Comcast will pay the monthly connect fees for the first year. The project can now do some good things at the county level, if the planners open their activities via the Internet to county residents. In the three weeks since the award of the grant there has been, unfortunately, no public communication from the awardees.

Internet Society - Tony Rutkowski, Mike Roberts And Rick Adams Discuss The Role Of The Charter Members, pp. 20-21

The Charter Members seem to have been an afterthought as Harms, Kahn and King acted as individuals to incorporate the society. Privileges granted CNRI, Educom and RARE in June 1992 are a source of discord.

MCI's Internet Business Model As Depicted In Internet Week Interview Differs From Vint Cerf's Position In Sept. Cook Report, pp. 21-22

According to a summary posted from Japan's Glocom Institute on Dave Farber's IP list on October 14th and 16th, Stephen VonRump, MCI's VP of Data Marketing, in an interview with Internet Week, expressed the view that MCI was working to introduce telco-style settlements with the networks with which it peers, and that MCI was "convinced that such agreements and settlements systems are necessary for the continued growth of the Internet" because "for the Internet to sustain its growth, commerce and business really need to trust the Internet to carry traffic." According to Stephen Anderson of Glocom, the article further stated: "MCI is likely to go to the FCC for regulation to seek improved Internet quality and standards. MCI will use standards issues to wage competitive war against smaller ISP companies that may not be able to invest in the large capital commitments needed to meet high standards." VonRump has been in touch with us, and we are asking him to explain the differences between these positions and the more reasoned ones taken by Vint Cerf in his late August interview with us. Expect a follow-up next month.

Exponential Growth Strains Backbone Capacity And Fuels Market For Cost Effective Access Technology, pp. 1-18

With new users continuing to clamber aboard the Internet in record numbers, questions arise about the capacity of the infrastructure to sustain the growth. New switching technologies have been touted by some as the solution. We find that while they do permit some providers to achieve a more cost effective use of network resources, they fail to answer other critical problems.

We examine complaints about declines in service quality and find anxiety about some providers' ability to build infrastructure rapidly enough to maintain market share. Some users assert that expanding into new markets and capturing new customers is taking precedence over providing reasonable standards of service to existing customers. But others are beginning to point out that the service standards one provider may offer are not much use in the Internet at large, because as soon as traffic leaves that provider's network, its service standards no longer have any effect. The bottom line -- any attempt to impose standards industry-wide would drive out smaller providers that lacked the capital to do the necessary upgrades to meet them. In this fast moving context, the backbones of the major providers need more bandwidth, more network interchange points to hand off traffic more efficiently, and some means of accommodating new users without overwhelming the capacity of routing tables.

While Cisco is introducing a new 75XX series of routers to handle the load, some feel that the capacity of these workhorses will be quickly stressed. Some also feel that, unlike bandwidth crunches of the past, the current problems are more numerous and difficult to contend with. At the same time, while the disciples of ATM promise reliable data transport in the many hundreds of megabits per second range, so far, for IP transport, they have been unable to deliver. While the image of running headlong into a brick wall at 60 miles an hour was used by Paul Vixie in a very telling description of the multiple problems faced, the consensus seems to be that the majors would be able to impose rationing at some level before the unthinkable happened. [In Paul Vixie's words: "There's one nice thing about the brick wall we're headed for, and that's that it'll nail every part of our system simultaneously. We're running out of bus bandwidth, link capacity, route memory, and human brain power for keeping it all straight -- and we'll probably hit all the walls in the same week." - NANOG list.]

Meanwhile there has been some recent press discussion about "new" Internet backbones established by PSI, UUNET and Netcom. PSI actually acquired a Cascade 9000 based layer 2 switched backbone almost two years ago. UUNET made its move six months ago, and Netcom completed its transition this fall. To its advocates this architecture overcomes some of the problems of a router based backbone. Switches, we are told: (1) have sub-millisecond latency; (2) can support multiple circuits at T-3; (3) are not burdened with processing any of the layer 3 IP or routing overhead; (4) because of (3), have a cost per port roughly one-third to one-quarter that of a Cisco 7000.

Switches don't have to "think" as hard as routers; they do more in silicon. A switch takes a frame in, looks up the virtual circuit it is destined for, and blasts the frame down that circuit. When the frame gets to the end of the virtual circuit, it pops out at a router. The provider thereby avoids having routers at the top of its hierarchy. Each router is rather like an airline hub: if traffic is not destined to be rerouted at that level, it is popped into the backbone switches, which send it to the proper router hub off-load point, from which it is shunted the rest of the way to its destination at layer three. Switched IP ultimately must be routed, and when it is, many of the efficiencies of the switched transport are lost.
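[The difference in "thinking" can be made concrete with a toy sketch in Python -- our own illustration, not any vendor's code, with invented circuit numbers and prefixes. A switch does a single exact-match lookup in a small virtual circuit table, while a router must find the longest matching prefix in a full routing table.]

    import ipaddress

    vc_table = {17: "port-3", 42: "port-1"}      # virtual circuit -> output port

    def switch_forward(dlci):
        return vc_table[dlci]                    # one exact-match lookup

    routing_table = {                            # prefix -> next hop
        ipaddress.ip_network("10.8.0.0/22"): "hub-A",
        ipaddress.ip_network("10.8.2.0/24"): "hub-B",
    }

    def route_forward(dst):
        addr = ipaddress.ip_address(dst)
        matches = [net for net in routing_table if addr in net]
        return routing_table[max(matches, key=lambda net: net.prefixlen)]

    print(switch_forward(42))          # -> port-1
    print(route_forward("10.8.2.9"))   # -> hub-B (the longest prefix wins)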

Depending on the provider, such an architecture may offer considerable cost efficiencies. Unfortunately it offers no magic bullet for the problem of overflowing routing tables. The actual physical circuits underneath the PVC mesh are still of the same capacity, and an extra layer of equipment is added to manage the layer two PVC mesh. No routing work has been saved, because before a packet can be sent out addressed to a PVC, the router has to have a routing entry directing it to the proper destination. To reach their destinations the IP packets still have to pop up to layer three, where they again have to be routed. Therefore the routers around the periphery of the layer two frame relay backbone will still have to carry the full routing tables. For the same reason, the ability of a layer two PVC mesh to offer up to four different priorities of service is negatively impacted.
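[One scaling property worth keeping in mind -- our own observation, not drawn from the discussion -- is that a full PVC mesh among N edge routers requires N(N-1)/2 virtual circuits, so the layer two configuration burden grows quadratically even while the routing tables at the edges stay just as large.]

    # Full-mesh PVC count grows quadratically with the number of edge routers.
    for n in (5, 10, 20, 40):
        print(f"{n} edge routers -> {n * (n - 1) // 2} PVCs")
    # -> 10, 45, 190 and 780 PVCs respectively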

A second use of switching fabrics exists. Many networks with layer 2 frame relay switching fabrics are using them to aggregate customers in order to achieve a better cost per customer port. If such a network needs to put a POP in place in a particular area, it makes sense to use the equipment that offers the highest performance, lowest cost, best value way of connecting customers into its infrastructure. Having done so, it then brings this traffic back to some other point on its network where it can be routed.

For some providers of medium size, frame relay and frame relay backhaul are cost effective, and they can be done with small switches as well as big ones. On the other hand, the major IXC carriers won't be building small private frame relay networks for their Internet backbones because they already have very large and very mature networks for the frame relay services they have rolled out. We find that, because of the differences in service providers' markets, size, and priorities, there is no one right network architecture. Any attempt to rank the architectures of the ten largest national service providers assumes you can compare all the drivers that went into the designs.

Incorporating a layer 2 switched fabric at the periphery or in the backbone of a network lets the smaller nationals ramp up in a hurry to compete with the IXCs by getting the most bang for their investors' dollars. We find that network architecture answers need to be seen not as "right" or "wrong" but as a function of a provider's market position and strategy. We also find that understanding the strengths and weaknesses of the competing network architectures, and knowing as much as possible about the architecture of each major provider, is mandatory in assessing the market positioning of the major service providers.

We go on to consider the complaint that the Internet exhibits exponential growth in usage and ubiquity without the resources to upgrade quickly enough to satisfy all the demands. We ask whether it is embedded in the very nature of the Internet that the "product" will be spoiled if providers don't spend as much time in cooperation across networks as they do in competition. Some are concerned that, with the closing of the NSFnet seven months ago, the cross-provider communication necessary to make the Internet work with acceptable service has become severely weakened.

Our reporting is based on discussion carried out on the NANOG mail list and on additional interviews with BBN Planet, Cascade, Cisco, NYSERNet, AT&T, and the San Diego Supercomputer Center.

Routing Announcements By MAE-East Connected Networks, p. 18

We offer a comparative snapshot of route announcements at MAE-East taken in the last week of August and the last week of November. Total numbers have increased by more than 10%. Some providers have decreased their announcements through aggregation, but others, like EUnet, have arrived with large sets of their own announcements. [A great deal should probably not be read into the increased route announcements. EUnet, for example, has been at MAE-East for some time; our source simply picked up its routes for the first time in November. We believe, however, that these figures will be useful to follow over time, and we hope to be able to do so.]

Access Indiana Emerges As Governor's Industrial Policy For Internet Service, pp. 19-21

The Intelinet Commission Director states that awardees of Access Indiana funds must buy Internet connectivity from the state's preferred backbone providers (Ameritech or Sprint) even if other ISPs could provide cheaper service. He describes the state's policy as one designed to ensure the development of a strong statewide backbone provider with high quality of service standards.

The Access Indiana committee imposed this rule change very quietly in early November. The Michiana Freenet had been awarded a $65,000 grant by the committee and, unaware of the latest rule change, submitted a plan to buy its Internet connectivity from CICnet. The AI Committee canceled its grant to the Michiana Freenet on November 19. The Director of the county library copied us on an extremely angry exchange of mail with the AI committee coordinator, which we fed back to Indiana citizens via our own mail list, compiled after the committee began to censor its formerly free list at the end of August. The Library Director also directly questioned the propriety of the state librarian's turning $300,000 of federal LSCA funds over to the committee for grants to be used in helping build Ameritech and Sprint infrastructure.

Vint Cerf And Steve VonRump Clarify MCI's Business Model Intentions, pp. 21-22

A month ago Internet Week reported that MCI was on the verge of imposing settlements. This did not square with what Vint Cerf had told us at the end of August, so we set out to find out what had happened. Two highlights from the discussion follow. From Vint Cerf on November 20: "So far as I am aware, in all our agreements to interconnect, we include an opportunity to work with the peer to review our mutual experience with traffic levels and distribution so as to determine whether each party is able to account for costs and, we hope, matching revenues. We are interested in assuring that all parties can recover their costs for offering Internet services to customers and want to tune our business models toward that end." On November 27 Steve VonRump added: "I stand firmly by my comments on network quality, and the need to establish some kind of performance accountability between networks." MCI does not appear to be advancing the one-year time frame cited by Vint Cerf in his interview with us published in September. MCI, however, is clearly the most eager of the major providers to impose some kind of charges in addition to those for pure connectivity.