A Practical Navigator for the Internet Economy

SONET Has a Future: New, More Flexible, Less Costly Breeds Mesh with High Speed Ethernet

Cost of 10 Gig E Expected to Be Only 15% Less than OC192 SONET

New OIF UNI Offers Signaling and Routing Protocols and Moves Rapid End-to-End Provisioning Closer

pp. 1-14


We interview Joe Berthold, Vice President for Network Architecture at Ciena. Netheads have flogged SONET without mercy almost since its introduction. SONET, they claim, is a very expensive artifact of the ILECs' need to maintain their voice-centric, TDM, circuit-switched networks. They see gigabit Ethernet as a brutally cost-effective SONET killer.

However, Berthold points out that gigabit Ethernet has been so cost effective because it is based on the FDDI standard: it derives its cost advantage by leveraging FDDI hardware development. Ten gigabit Ethernet cannot do the same. Consequently, 10 Gig E may be only about 15% cheaper than OC192, its SONET equivalent.

To understand why SONET has been so expensive, Berthold explains that when SONET came to market in the early 1990s, its major customers in North America were the seven ILECs, which shared common Operations Support Software maintained by Bellcore (now Telcordia).

Since the ILEC networks were centralized and "intelligent," as opposed to "stupid" and edge-controlled, the hardware that they used had to be engineered to be compatible with the ILECs' OSS systems. This was, and still is, done through a process known as OSMINE (Operations Systems Modification of Intelligent Network Elements). OSMINE was not and is not a standard to which a company could code its software in advance. Instead it is a proprietary process by which Telcordia, on behalf of its ILEC clients, engineers the ILECs' OSS software systems to communicate with and otherwise incorporate the new hardware systems that the ILECs purchase. For SONET vendors, becoming OSMINE compliant could be a one- to two-year-long process. The early SONET vendors included companies like Lucent and Nortel, which were used to marching to the ILECs' tune. They merely added the millions of dollars of cost necessary to complete OSMINE to the basic purchase price of the equipment.

When the 1996 Telco Reform Act enabled the CLECs, it also enabled a market for non-OSMINE-compliant SONET products. Cerent and Cyras are examples of companies that were formed in 1997 and that in 1999 brought to market much cheaper SONET equipment. As Berthold puts it: "Cerent did a wonderful thing for the industry. It showed that you could make SONET equipment and sell it at half the price of what the ILECs were paying and still make a profit. There was no reason that the SONET equipment had to remain as expensive as it did. But with a small number of suppliers and a small number of customers it was easy to let business go on as usual until the impact of the 1997 - 1999 time frame was felt."

These companies are also doing things like adding Ethernet capabilities to SONET rings. For example, by means of a specially designed Ethernet card, the Ciena MetroDirector K2 will support the resilient packet ring Ethernet protocol (802.17). According to Berthold: "Here you have a piece of equipment that, looked at in one way, is SONET and, looked at in another way, is rather like a distributed Ethernet switch. Cisco has offered a pre-standard version of 802.17 which they call Spatial Reuse Protocol. The main reason for going in this direction with SONET equipment is to be able to offer data services to customers and be able to do it in a way that uses a shared infrastructure, enabling bandwidth to be delivered over a very wide area of your network."

Rather than having to put in a separate overlay gigabit Ethernet network, users of this equipment can provision Ethernet service from their SONET equipment in 50 megabit increments, rather than having to pay for a roughly 400% increase in bandwidth when, for example, they fill an OC-3.
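The arithmetic behind that claim can be sketched as follows. This is an illustrative calculation, not from the interview; the 50 Mbit/s step comes from the text, while the OC-3 and OC-12 line rates are the standard approximate figures, and the function names are ours.

```python
OC3_MBPS = 155   # approximate OC-3 line rate
OC12_MBPS = 622  # approximate OC-12 line rate (the next coarse step up)
STEP_MBPS = 50   # fine-grained provisioning increment cited in the text

def incremental_capacity(demand_mbps: int) -> int:
    """Smallest multiple of the 50 Mbit/s step that covers demand."""
    steps = -(-demand_mbps // STEP_MBPS)  # ceiling division
    return steps * STEP_MBPS

def coarse_capacity(demand_mbps: int) -> int:
    """Capacity bought in whole SONET jumps: OC-3, then OC-12."""
    return OC3_MBPS if demand_mbps <= OC3_MBPS else OC12_MBPS

demand = 160  # just past a full OC-3
print(incremental_capacity(demand))  # 200 Mbit/s: pay for one extra step
print(coarse_capacity(demand))       # 622 Mbit/s: pay for a whole OC-12
```

A customer who outgrows an OC-3 by a few megabits buys one more 50 Mbit/s slice instead of quadrupling to an OC-12, which is the "400% increase" the text refers to.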

In the long haul portions of carrier networks, whether one chooses SONET or Ethernet technology will be driven in part by the anticipated bandwidth desired, the number of fibers available, and the distance between the two points where you want that bandwidth. Only after you have answers to these questions does it make sense to look at a choice between SONET and Ethernet technology.

Some of the signaling and routing protocols of the Internet are now being incorporated into new SONET equipment. The most recent Optical Internetworking Forum demonstration at Supercomm featured an interoperability event where 25 different vendors demonstrated interoperability of a signaling protocol used to set up and tear down optical channels (wavelengths) and SONET circuits. This capability was made possible by the User Network Interface (UNI) protocol.

The interview concludes with a detailed discussion of GMPLS and optical control planes - critical technology needed to enable dynamic bandwidth allocation.

France to Build National Broadband Infrastructure,

p. 14

A brief announcement on the part of the French Government of the intent to build out a national broadband network by 2005.

Dale Hatfield Describes Conservative "Market Oriented" FCC Policy Perspective

Explains Software Defined Radio and Politics of Spectrum Licensing,

pp. 15 - 21

We interview Dale Hatfield, who from December 1997 to December 2000 was Chief of the Office of Engineering and Technology at the Federal Communications Commission. Looking back at the carnage wreaked upon the industry since last winter, Dale concludes: "What we've seen is that Congress' striking down legal barriers to entry was a necessary but not a sufficient condition to ensure competition. The marketplace here may be telling us that there are economies of scale in the last mile, because the widespread duplication of that facility is rather difficult."

Some of his conclusions seem to us to be inspired by a belief in the virtues of a free market and a level playing field. Whatever the market conditions, the record of the past five years shows that the ILECs have been able to adroitly manipulate the politics of the marketplace and of technology on their own behalf. A fresh example is given by Robert V. Green, an analyst for Briefing.com, who wrote a piece on July 30, 2001 called The Unintended Outcome of the Telecom Act of 1996. There he states that "while many of the CLECs never made it to profitability, they did get some customers. For example, New Jersey CLEC KMC Telecom never made it to the public markets, but raised more than $2 billion in debt. It has yet to report even an EBITDA profitable quarter. But what KMC Telecom has accomplished is a 'competitive lines' number. At 570,000 lines, KMC Telecom is going a long way towards establishing that Verizon has opened up the NJ market to local competition." Green goes on to point out other examples of CLECs that are now either in Chapter 11 or on the verge thereof. Yet in virtually every case, the ILEC is successfully using the existence of the CLECs to show compliance with having opened the local loop to competition, in order to be given authorization to sell long distance. Citing the FCC's inclination to grant Verizon long distance approval in Pennsylvania, Green correctly concludes: "The absurdity of this situation seems to be lost on the FCC."

We also discuss with Hatfield the politics of mobile spectrum licensing and the School and Library Corporation. Finally, Hatfield discusses the concept of a software defined radio. "What you want to do, I think, is decentralize things as much as you can, both economically and in the marketplace. In order to do this you want to encourage cognizant radios, that is to say, radios that understand the environment they are in. Using GPS, such radios can be aware of where they are geographically. A radio could say, okay, today I am in Muleshoe, Texas. There is no UHF television station out here. Therefore, I can operate wide open and not bother anyone. But the next day I am in New York City and now I am in an entirely different environment where I have to tighten up on my operations. I have to narrow my bandwidth and tighten filters.

"You want to encourage people who hold spectrum to negotiate with people who have these types of devices, to say, I will pay you X amount of dollars to use your spectrum on a non-interfering basis. Let me say that there's a lot of interest in this in the Defense Department, because when you are suddenly airlifted into a different country where you don't know where the transmitters are, you really would like to have smart radios."
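The location-aware behavior Hatfield describes can be sketched in a few lines. This is purely illustrative: the lookup table, parameter values, and function name are our inventions, standing in for a real spectrum database and real radio parameters.

```python
# Hypothetical map: location -> True if an incumbent (e.g. a UHF TV
# station) operates nearby. A real radio would consult a spectrum
# database or sense the band itself.
PROTECTED_AREAS = {
    "Muleshoe, TX": False,   # rural, no UHF station out here
    "New York, NY": True,    # crowded spectrum environment
}

def radio_settings(location: str) -> dict:
    """Pick transmit parameters based on whether incumbents are nearby."""
    # Unknown locations are treated conservatively: assume congestion.
    congested = PROTECTED_AREAS.get(location, True)
    if congested:
        return {"bandwidth_mhz": 1, "filter": "tight"}
    return {"bandwidth_mhz": 20, "filter": "loose"}

print(radio_settings("Muleshoe, TX"))  # operate wide open
print(radio_settings("New York, NY")) # narrow the bandwidth, tighten filters
```

The design point is the one Hatfield makes: the policy decision moves into the radio itself, keyed to where the radio knows it is.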

Mikki Barry on the Real Nature of ICANN's Authority

US Gov't Rubber Stamps ICANN Requests as Karen Rose Is Left Free To Make Her Own Policy,

pp. 21 - 24

Mikki Barry and Timothy Denton debate ICANN's trustworthiness. Barry: "What is wrong with ICANN is almost the same as what is wrong with the ITU, the WTO, etc.: the lack of non-financial interests being given power in the structure and decision making. I fear the same outcome with any of these organizations, except that one could argue that there is even less recourse with a private body like an ICANN than with a quasi-public one. Given that DOC has essentially washed its hands of any type of oversight (which is also not surprising from the history of all of this), there is essentially NO reason for ICANN to do anything whatsoever to gain legitimacy. It has all the legitimacy it needs to continue to act in an arbitrary and financially driven fashion. ICANN is not even accountable to its own board; as such, can it possibly be accountable to anyone else? I truly fear, after attempting to participate in this mess since prior to its inception, that the only real way of participation and legitimacy is to route around ICANN completely."

Karl Auerbach takes exception to the behavior of Joe Sims on Dave Farber's IP list. Michael Froomkin writes on DoC's rubber stamping of ICANN's grant of .biz to NeuLevel: "While the choice of TLDs might be defended as a 'technical' matter, the close control over the business of being a registry cannot seriously be called anything other than a whole series of policy choices. It's understandable why DoC might enjoy being out of the line of fire, but much less clear that the law allows it this passive role."

Debate on Dynamic Bandwidth Allocation Ignites on EFM Standards Mail List

Carriers Want Traffic Engineering Capabilities

Amount of Administration and Provisioning in the EPON PHY an Issue

Addition of DOCSIS Like Features Seen as Enabler of Third Party Access Discrimination,

pp. 25 -35

The debates that flourished on the EFM working group mail list in July, though arcane to a non-Ethernet specialist, are quite significant in terms of the capability of the standard that will emerge. Basically two points of view are emerging: one camp wants only the minimal additional features needed to make it possible for Ethernet to be used in a public network (link failure detection, metering of signal strength, etc.), while another camp wants to bring DOCSIS-like functionality into EFM.

The interest that has been articulated in making Dynamic Bandwidth Allocation (DBA) a part of the standard seems to be centered in the DOCSIS-oriented group. It is likely that DBA capabilities would allow this group to traffic-engineer the line to their advantage and to the disadvantage of third parties who are buying access to the lines.

DBA is the ability of one user to get all of the bandwidth at one small point in time, i.e., to burst beyond his nominal channel. This requires the master device to grant timeslots in which slave devices may talk. This apparently yields 20% better link utilization than operation without DBA. Some suggest that this also makes true open access more complicated, because the single master device has to be within the control of one party, which forces slave devices that don't listen to the master to be disconnected. This would not be a problem on point-to-point fiber, but it is on a shared passive optical network (PON), or on HFC for that matter.
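The master-grants-timeslots idea can be illustrated with a minimal sketch. This is our simplification, not EFM or DOCSIS scheduling: the cycle size, device names, and proportional-sharing rule are all illustrative assumptions.

```python
CYCLE_SLOTS = 100  # total transmit timeslots in one grant cycle (hypothetical)

def grant_slots(requests: dict) -> dict:
    """Divide one grant cycle among slave devices in proportion to demand.

    A device with no competing traffic can burst into (nearly) the
    whole cycle, which is how DBA lets one user exceed a fixed
    per-device share for a short interval."""
    total = sum(requests.values())
    if total == 0:
        return {dev: 0 for dev in requests}
    return {dev: (req * CYCLE_SLOTS) // total for dev, req in requests.items()}

# One busy device among idle neighbors bursts to the full cycle.
print(grant_slots({"onu1": 80, "onu2": 0, "onu3": 0}))  # onu1 gets all 100
# Under contention, the master arbitrates between requesters.
print(grant_slots({"onu1": 50, "onu2": 50, "onu3": 0}))
```

The sketch also shows why the master matters politically: whoever controls the grant function controls how the shared medium is divided, which is exactly the open-access concern raised above.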

The debate appears to be: will the standard take on the characteristics of full-blown DOCSIS, or will it contain only the features needed to bring real high speed Ethernet from an outside service provider to the home or business? The flavor of Operation, Management, Administration and Provisioning (OAM&P) is a contentious issue. If traditional A&P is used along with O&M, it is very likely that the resulting mechanisms will not support open access in the standard.

One could expect a standard coming from a closed group of companies, such as the members of CableLabs, to ensure that DOCSIS doesn't support open access in a friendly way. But for a standard coming out of the IEEE, which is aware of DN00-185 and other regulatory efforts to enable open access, it seems inappropriate to ignore such concerns.

Basically, there is a list of significant challenges facing the working group. (1) How to make Ethernet friendly for use in the public network, and friendly to carriers. (2) How to ensure that DBA is implemented for reasonable purposes and does not result in unnecessarily expensive head-end equipment. (3) How DBA will interoperate with non-DBA equipment. (4) How the thinnest set of A&P is added to O&M so that all this management functionality traverses open access points of interconnection (POI). (5) How QoS will traverse open access POI. These are problems that have never been tackled in the industry. The outcome is likely to affect the nature of future FTTH efforts (www.ftthcouncil.com) and other deployments of EFM technology. To complete the standard with these results will be a significant challenge for the working group. We present two long threads that show how these issues have so far emerged.

Peering Revisited

Tier Ones Move Peering Interconnects into Exchange Points,

pp. 35 - 39

On NANOG an anonymous poster asserts that the largest Tier One backbones (WorldCom, Sprint, CW, Genuity, and others) have issued RFPs that will coincidentally result in their using the same co-lo carrier hotels in eight US cities for their peering interconnections.

The discussion that resulted was useful primarily as a snapshot of how ISPs currently view the politics of interconnection and the business models of exchange points.

If there was any consensus expressed, Dan Golding came closest with the following comment: "Most of the really big ISPs peer at the OC12 or OC48 level with each other. The provisioning interval on these large circuits is insane. There is typically little or no cost - it's not really about expense, as the big ISPs are also big carriers, and they do meet in certain places, circuit-wise. The problem is that the carrier arms of these companies move slowly, especially for such large circuits. They may also throw up political roadblocks. And, in those occasional cases where a large ISP needs to get a circuit to reach another carrier, that OC-48 really hurts financially.

"These ISPs want to meet in facilities that have room and power for all their transport gear and routers. They want stuff like multiple entrance facilities, high security, and other features.

"They want to be able to just run x-connections to each other when they need a new circuit. Of course, it's not quite so simple, because if someone is running an OC-48 x-connect between two really big providers there's lots of other work to do, but it's still a real time-saver."