A Practical Navigator for the Internet Economy


Lightera Units Replace SONET Add-Drop Muxes Via Smart Signaling & Routing Bandwidth Management - Omnia Provides Local Optical Rings to Enable Optical Transport Through the Local Loop and Across the WAN,

pp. 1 - 10

We interview Joe Berthold, Vice President of Network Architecture and Standards. Starting in 1996 Ciena introduced wave division multiplexing and then dense wave division multiplexing (DWDM). The early part of the interview covers technical issues such as how Bragg filters are made and used in DWDM as well as the characteristics of 1300 versus 1500 nanometer fiber.

In 1996-97 Ciena worked with Sprint to help alleviate that carrier's fiber shortage. It developed its optical amplifiers and Bragg grating filters in house. The carrying capacity of fiber is being refined every year to better match the changing characteristics of the available amplifiers and switching devices. This is why Qwest, Level 3, and Williams fill only one of the ten or more conduits in their rights of way at a time. The premise is that by the time the first conduit is used up, there will be much better fiber to lay in the second.

For integrated network management they have used a device called the Sentry 4000, introduced in January '98. In this three-shelf equipment rack the first shelf holds the network management hardware and an optical amplifier. The second shelf supports eight different transmitters. The final shelf contains receivers, each with a Bragg grating matched to a different color. To add channels, they add cards: depending on which wavelength they want to turn up, they put in a transmitter at the desired frequency and a receiver with the appropriate Bragg filter. While they offer both OC-48 and OC-192 interfaces now, OC-768 does not appear commercially practical because, with current fiber, the signal would have to be reamplified so often that it would not be economic. Moving now from OC-48 to OC-192 does look practical because the electronics on the ends of the fiber do not cost four times as much.

One of Ciena's developments has been an all-optical network that removes the need for electrical regeneration of the laser pulses and therefore the need for SONET add-drop muxes that cost anywhere from several tens of thousands to several hundreds of thousands of dollars apiece.

Lightera, which Ciena purchased early this year, will enable Ciena to bring to market by year's end a device called the Core Director. In one equipment bay it will be possible to terminate 256 different OC-48 (2.5 gigabit) channels using OC-48 plug-ins, each the size of two business cards. If a Core Director were configured to behave like SONET add-drop multiplexers, one unit could replace 48 bays of SONET equipment. While the Core Director can emulate many add-drop multiplexers, it can also, at the same time, operate as an optical channel switch or optical cross-connect.
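As a sanity check on these density figures, the aggregate capacity of a fully populated bay can be worked out directly. The 2.488 Gbps line rate is the standard SONET OC-48 figure; the channel count and bay-replacement ratio come from the text above.

```python
# Back-of-the-envelope check of the Core Director density claims.
OC48_GBPS = 2.488          # standard SONET OC-48 line rate
CHANNELS_PER_BAY = 256     # OC-48 terminations per equipment bay (from the text)
SONET_BAYS_REPLACED = 48   # bays of SONET ADM equipment replaced (from the text)

aggregate_gbps = OC48_GBPS * CHANNELS_PER_BAY
print(f"Aggregate capacity per bay: {aggregate_gbps:.0f} Gbps")
print(f"Floor-space reduction vs. SONET ADMs: {SONET_BAYS_REPLACED}:1")
```

With these numbers a single bay carries well over half a terabit, which is what makes the 48:1 floor-space claim plausible.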

The Core Director is configured on a port-by-port basis. Is this port part of a SONET ring or not? Or is it just a port on a switch that is part of a mesh network? Do you want to use restoration? Ciena has a set of restoration protocols built in, called optical signaling and routing protocols, that allow the Core Director to behave in a similar way to a network of ATM switches, automatically reconfiguring in the event of a loss of resources somewhere.

They have developed protocols that will allow the Core Director to perform the functions of both layer 2 switching and layer 3 routing. Their OSRP (Optical Signaling and Routing Protocol) is designed to make bandwidth provisioning easier. A circuit can be activated by installing a small plug-in plus a point and click in a GUI using their network management software.

CoreStream, introduced in June '99, replaces the Sentry 4000 and comes in an initial configuration of 96 channels at 2.5 gigabits or 48 channels at 10 gigabits, or a mixture of both. The architecture is extensible to a speed of two terabits per second. A further stage of evolution, called "Multiwave Lightworks," will integrate the CoreStream technology with the Core Director, so that the WDM interfaces are inside the same platform, saving the additional interfaces between the two as well as space, power, and cost.

Ciena is also moving into provisioning equipment to bring high speed optical streams through the local loop to customer premises equipment. Smaller versions of its wide area network products will allow carriers to terminate new broadband circuits without having to set up new SONET rings or buy $100,000 SONET add-drop multiplexers for individual buildings.

Omnia is another company acquired by Ciena earlier this year. The Omnia AXR 500 is a service delivery and multiplexing box that makes it very simple for a carrier to offer a range of services, from voice to ATM to IP to private Ethernet networks. The AXR 500 provides the service interfaces, multiplexes the services, and brings them back to the carrier's office.

Ciena may be the only player which currently has a combination of long-haul transport systems, point-to-point systems in Metropolitan Areas, and finally ring access systems. Its strategy is to be able to originate a wavelength at a customer location, route it to an office, hop onto a long-haul system, go across the country, come back out to its local system, and go back out to the customer. It wants that entire transport from customer to customer to stay as an optical channel.

Sean Donelan on Breaking into National Cost Free Peered Status,

p. 10, 20

Sean Donelan points out that it has been several years since a large national or international ISP has been able to become a top level national peering player without buying another ISP that was already fully peered.

Role of Satellite Bandwidth in Delivery of Content Across the Global Internet

PanAmSat Discussion Explores Economics of Content Delivery, Decreases in Cost of Receiving Equipment Make Delivery of Same Data to Many Different Places Competitive with Fiber

pp. 11 - 18

We interview Rob Bednarek, CTO; Gail Fell, New Media Director; and Aaron Falk, Protocol Specialist. The interview starts with an overview of the satellite industry, focusing on the declining cost of receiving equipment and the different market positions of GEOs and LEOs.

Satellite's major niche in the Internet business model is the delivery of content to multiple sites. Such content may be continually updated web caches at major ISP backbone sites or delivery of specialized content to a business with many receiving locations scattered over an entire continent. While fiber can reach up to about ten different sites economically, once the number of sites that must get the same material grows beyond ten and into the hundreds, the advantage swings in satellite's favor.
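The crossover argument can be sketched with a toy cost model: point-to-point fiber cost grows with every site served, while satellite pays one fixed uplink cost plus a cheap receiver per site. The dollar figures below are purely illustrative assumptions, not numbers from the interview.

```python
# Toy break-even model for multi-site content delivery.
# All cost figures are illustrative assumptions.
def fiber_cost(sites, per_site_circuit=10_000):
    # Each site needs its own terrestrial circuit.
    return sites * per_site_circuit

def satellite_cost(sites, uplink=80_000, per_site_receiver=1_000):
    # One uplink serves everyone; each site only needs a receiver.
    return uplink + sites * per_site_receiver

breakeven = next(n for n in range(1, 1000)
                 if satellite_cost(n) < fiber_cost(n))
print(f"Satellite becomes cheaper at {breakeven} sites")
```

Whatever the real numbers, the shape of the two curves is what matters: one is linear in the number of sites, the other nearly flat, so a crossover point always exists.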

Note also that satellites are quite good for connecting ISPs in remote areas to a global backbone in a single hop. Any content, whether an advertising campaign or something like a Victoria's Secret fashion show, that has to be delivered in mix-and-match proportions to servers all over the world is a good candidate for satellite delivery.

In the area of protocol development there are several proprietary solutions to reliable multicast currently available. They are developed by independent companies offering some software-only and some combined software-hardware solutions for reliable multicast transmission, targeted mostly at satellite. You have a single sender, a bunch of receivers, and one hop. And a variety of forward error correction techniques are being used to allow scalability to fairly large numbers of receivers.
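A minimal illustration of the forward-error-correction idea: one XOR parity packet per group lets a receiver rebuild any single lost packet without sending an acknowledgement. This is a toy sketch; real systems use much stronger codes such as Reed-Solomon.

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Sender: emit the data packets plus one parity packet (XOR of them all).
packets = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
parity = reduce(xor_bytes, packets)

# Receiver: suppose packet 2 was lost in transit. XOR of everything that
# did arrive (including the parity packet) reconstructs the missing one.
received = packets[:2] + packets[3:]          # pkt2 missing
recovered = reduce(xor_bytes, received + [parity])
assert recovered == b"pkt2"
```

The receiver never talks back to the sender, which is exactly why FEC scales to large satellite audiences where acknowledgements would swamp the uplink.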

According to Fell, reliable multicast is a way of taking information and sending it one-to-many, to multiple destinations, in a way that ensures error-free delivery. There is a variety of ways to ensure error-free delivery. In unicast, TCP is used: TCP acknowledges packets actually received. The problem is that if one has a lot of receivers all sending acknowledgements, the sender can easily be overwhelmed, so the acknowledgement-based approach does not scale well to large numbers of receivers. But the problem may be solved in several ways. For example, one may suppress acknowledgement transmission and instead send the file many times, which increases the certainty of reception by all sites. One of the strengths of the satellite industry is that, because of its topology, it can get to everyone. A user doesn't need to worry about different tariffs, about signals using another provider's resources, or about crossing provider boundaries for its customers.
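Fell's send-the-file-many-times approach can be quantified with a short probability sketch. The 1% per-pass loss rate is an assumption chosen purely for illustration.

```python
# Probability that every one of N receivers gets the file after k broadcast
# passes, assuming each receiver independently misses a pass with prob. p_miss.
def all_received(n_receivers, passes, p_miss=0.01):
    per_receiver = 1 - p_miss ** passes   # chance one receiver got >= 1 copy
    return per_receiver ** n_receivers    # chance they all did

for passes in (1, 2, 3):
    print(f"{passes} pass(es), 10,000 receivers: "
          f"{all_received(10_000, passes):.4f}")
# With a 1% miss rate, one pass almost surely leaves someone short,
# while three passes make complete delivery overwhelmingly likely.
```

This is why blind repetition, crude as it sounds, works without any return channel at all: reliability compounds exponentially with each extra pass.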

The Internet protocol has helped the standardization of satellite data protocols more than anything else. At the physical layer it has long been possible to adapt the necessary pieces on satellite receivers, but the choke point has been what happens after that; traditionally, physical satellite networks were used in very closed-architecture data networks. Such networks tended to be private; with the exception of voice, they were not part of big public networks. IP changed all that. As a result, the request that now comes from a foreign country is: I need a pipe for IP interchange, and I'll take care of everything downstream from the physical layer in my country; you just get that connectivity to me.

For third-world ISPs, satellite delivery cost is fairly constant, and that, actually, is one of its appeals: it is entirely within the control of the ISP. An ISP can contract for as much or as little connectivity to the U.S. as is required to satisfy its customers. Different ISPs have different opinions about how long customers should wait for response times, and this is something that lets them differentiate themselves from one another.

If you can go from having ten people receive your bits to having a thousand or a million people receive them, your actual price per bit is going to change considerably. The key becomes defining a set of services that can take advantage of the natural multicasting which satellites provide.
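The price-per-bit point reduces to simple division: one broadcast pass costs the same regardless of audience size, so the per-receiver cost falls linearly with the audience. The transponder figure below is an illustrative assumption, not a quoted rate.

```python
# Cost per receiver of a single satellite broadcast, spread over the audience.
# The $5,000/hour transponder figure is an illustrative assumption.
def cost_per_receiver(audience, transponder_cost_per_hour=5_000.0):
    return transponder_cost_per_hour / audience

for audience in (10, 1_000, 1_000_000):
    print(f"{audience:>9} receivers: "
          f"${cost_per_receiver(audience):,.4f}/hour each")
```

Going from ten receivers to a million divides the effective price per bit by 100,000, which is the economic core of the natural-multicast argument.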




ICANN is moving forward inexorably. Whether it is moving toward triumph or is ready to fall off the cliff remains to be seen. It has great trouble getting its sense of mission accurately adjusted. On June 13th IBM Vice President and GIP officer John Patrick (with the assistance of Esther Dyson, Vint Cerf, and Mike Roberts) wrote privately to Silicon Valley venture capitalists soliciting funds for ICANN. Patrick: "ICANN is trying to get the policy, technical and financial aspects of the Internet moved successfully from U.S. government to the international private sector. Everyone thinks this is a good idea. In fact, I would say that the future of the Internet is dependent on the execution of the plan."

Consider carefully his words. Remember that Esther on August 29 chidingly asked Dave Farber not to call ICANN the Internet's "Oversight Board" since ICANN's purpose was nothing more than dealing with a subset of technical coordination issues.

Since its establishment last October ICANN has waged a calculated campaign of deception. It has waged a stealth campaign designed to focus Internet users' hatred of Network Solutions on, and gather support for, its announced purpose of ending the NSI monopoly over .com. With less arrogance on the part of Dyson and Roberts it might have succeeded. But in the two weeks since ICANN's regimented Santiago performance, public perception has been shifting. As ICANN has said one thing and done another, people are beginning to catch on that its goal is to establish its own monopoly in place of NSI's.

In the September 6th issue of Business Week, Mike France wrote: "if Esther Dyson & Co. prove that they're able to successfully manage domain names, then they would be in a strong position to handle more urgent policy problems such as protecting intellectual property. While no one is asking ICANN to take on more responsibilities yet, the group could tackle problems more swiftly than the alternative: new and untested Internet regulatory agencies."

"The second reason ICANN's influence could grow is that domain names are starting to be viewed as a potentially powerful method of getting Netizens to obey the law. When people buy names for their Web sites, they could be required to sign a detailed contract obligating them to comply with a certain set of rules governing the sale of products, the use of someone else's intellectual property, the display of sexual content-you name it. If they violated the terms of the contract, they would forfeit the domain name. That may not sound like a particularly serious penalty, but on the Internet it's a death sentence."

"While this may sound far-fetched, it appears to be the most efficient way of enforcing the law on the Net. Already, ICANN is contemplating forcing applicants for new domain names to agree to a set of rules blocking so-called cybersquatting-the practice of registering well-known corporate brand names as domain names before the actual owners have a chance to do so. [Editor: Blocking much more than just this. According to its March 99 Registrar Accreditation Criteria, ICANN could revoke a registrant's domain name for a wide variety of infractions.]"

"'After all the talk over the past few years about how difficult it will be to regulate conduct on the Internet,' says David Post, a cyberlaw specialist at Temple University School of Law, 'the domain name system looks like the Holy Grail, the one place where enforceable Internet policy can be promulgated without any of the messy enforcement' problems," France concluded.

The battle is not just about NSI anymore. Awareness of the profound reach of the ambitions of Cerf, Dyson, Roberts and Patrick for ICANN is growing. As their private June '99 fund-raising correspondence shows, this group is holding ICANN out as the only hope for the continued commercial success of the Internet while, at the same time, warning that the stability of the net and the fate of electronic commerce hang in the balance.

ICANN is taking such care not to be legally accountable to anyone that people are beginning to wonder why. Under California law it looked as though ICANN members would have had some real authority by state statute to examine corporate books and bring derivative actions against the corporation. ICANN had always answered a doubting public's questions about accountability by saying that its members would elect half the board. In Santiago, however, members were deprived of even this right by the establishment of a membership council that they would elect; the council would then select the board members. Never mind that ICANN's shadowy controllers have now decided that the membership will not be activated until an unreasonably large total of 5,000 members can be chosen by means yet to be determined.

The reasons for ICANN's intransigence have finally become clear, as we recount in a 4,000-word introduction to this 27,000-word part 2 of the November COOK Report. If one goes back over the details of events since Landweber sent his October first master plan to the ISOC Board in 1995, ISOC has been single-mindedly pursuing a campaign to make itself (now via ICANN) legally responsible for Internet technical administration. ISOC's first solo move in 1995 angered the ITU, INTA and WIPO. The latter two had decided that DNS service presented unique problems and unique possibilities: if they could link trademarks to domain names, it would become much easier and more cost-effective to enforce presence in cyberspace on terms favorable to the wealthiest corporations and unfavorable to individuals and small businessmen. As a result ISOC invited its early critics, Walter Tramposch for WIPO, David Maher for INTA and Robert Shaw for the ITU, to join it in another attempt to garner legal control over DNS policy and the root servers. This resulted in the IAHC and gTLD-MoU, which was itself attacked in 1997 by others upon whose interests ISOC's emerging coalition trod.

When, with the issuance of the Green Paper in 1998, it became obvious that the IAHC coalition had failed, ISOC in early 1998 invited its critics into the fold once again. In a complete abrogation of their responsibility to ensure that an open process would follow, the Democrats released the White Paper in June 1998. This document was built on a clever ruse. It stated that the Clinton-Gore administration would keep its hands off the Internet by approving an industry-led effort to develop "newco," which in October of last year was re-labeled ICANN. The reins of authority, however, were turned over to the lawyer Joe Sims as a new member of the Cerf, Patrick, Heath partnership. One of Sims' duties was to keep Jon Postel legally defended. Sims accomplished this by writing a set of bylaws that created an unaccountable California public benefit corporation to reimpose the defunct IAHC gTLD-MoU solution, this time with the European Commission added to the ISOC coalition in a partnership with the ITU to enlarge ICANN's powers over Internet protocols and the assignment of IP numbers for routing in the Internet. Mike Roberts had been part of the ISOC inner circle for some time. Esther Dyson was now added for her corporate networking abilities, and a board representing the ISOC, old-line telco and intellectual property interests behind the latest ISOC coalition was announced.

ICANN, set up under the guise of forming the permanent corporation as quickly as possible, then began instead to make policy. Its agenda was to cement its control of policy decisions about DNS, IP numbers and protocols as quickly as possible. It hoped to be recognized by enough players to begin making decisions that, if obeyed, could acquire the force of law through practice. Its agenda was not to provide open and transparent coordination of Internet plumbing but rather to deliver a set of rules that would put its trademark and old-line telco supporters in the driver's seat, where they could use the new body to place themselves firmly in regulatory charge of the Internet's key infrastructure. The supporters were so transfixed by the enormity of the possibilities in front of them that the small clique which guided ICANN was moved by the same arrogance that caused the IAHC to crash and burn. For nearly a year it has gone full speed ahead and steamrollered over all opposition. What it has finally begun to do is alienate many of its early supporters, who now quite correctly are wondering why it should be needed in the first place.

ICANN has high hopes of succeeding because this is the consummate insider's battle, convoluted and hideously complex. We offer this long article as documentary evidence of ICANN's deceptive practices as it dodges and feints and uses continued secrecy to push its agenda on behalf of its ISOC, trademark, telco and EC masters. This is a battle not only for the control of telecommunications worldwide but also for the rules by which electronic commerce will be played. If Mike, Esther, Vint and John have their way, it will be decided out of sight, away from any citizen oversight, by a small elite group of corporate and government bureaucrats. The battle is likely nearing a turning point, and one must hope that the rest of us can work through Congressional action to obtain justice and accountability, since the Clinton-Gore administration has abdicated any leadership.

Editor's Note: I will put it up on my web site in a couple of days. Someone on the ICANN side asked me why I am doing this, which certainly will win me no friends among powerful people. My answer is that it is a matter of principle: the end does not justify any means. For the full text of part 2 of the November '99 COOK Report, click on What's Behind ICANN? - September 1999




Some Notes from your Editor: In preparing this issue I have been doing much thinking about how to view Internet technology from both a business and a regulatory model. Four articles have forced this focus. Three, one on Equinix Internet Business Centers, one on venture capitalists and one on ICANN, are published herein. The fourth, on how Internet access to cable networks in Canada could become the focus of a new regulatory paradigm, is by Francois Menard. It couldn't be finished in time. I anticipate being able to publish it as part one of the January 2000 COOK Report within a few days after I return from Nepal on November 6th.

This article grew out of conversations with Francois Menard, who has been looking very closely at the dispute between Videotron and Canadian ISPs over open access to Videotron's cable network in Canada. Working from the framework outlined in "Netheads versus Bellheads" (http://www.tmdenton.com/netheads3.htm), Menard has applied his understanding of the technology to make some persuasive arguments about the choices facing Canadian regulators. As I read his drafts I pressed him to elaborate on and generalize the assumptions that he had applied. The result has become a rough draft of a document that I believe carries forward the ideas expressed in Netheads versus Bellheads. The conclusions are so well founded that they may become an extremely compelling basis from which to rethink current regulatory approaches to telecommunications. I want to use this preface to summarize my own understanding of the issues involved and to alert my readers to some things to think about between now and my publication of his completed piece in mid November.

The Impact of the Internet on Technology Development and Regulation

We are looking squarely at a situation where the pace of technology development has outstripped the ability of politicians and regulators to deal with it. We need to remind ourselves that, from the introduction of the telegraph 150 years ago through radio, telephony, and television, telecommunications technologies have grown and prospered as vertically integrated businesses with a heavy emphasis on infrastructures which needed to be regulated as natural monopolies in order to grow and prosper and thereby serve the public's needs. Such technologies often included proprietary twists designed to give one large corporation's vertical monopoly a competitive edge over another's infrastructure. Significant economic inefficiency was produced by each company having to build its own distribution infrastructure. This in turn led to a situation where companies could turn to regulators and plead for protection, arguing that they could only afford the expense of an upgrade to their infrastructure if they were promised that they would not have to share it with competitors.

During the past 25 years Moore's law and the TCP/IP protocol have built a very different foundation for telecommunications. Astonishing advances in integrated circuitry have created a situation where the equipment needed to add requisite intelligence to a telecommunications network is no longer so terribly expensive that only a vertically integrated monopoly can afford it. For example, a PC with the power of a minicomputer of a decade ago now costs as much as a television set. With the release of the Apple G4, the computational power of a 1990 supercomputer is yours for $2,500.

The second part of the revolutionary wave facing us is the impact of the TCP/IP protocol. IP gave us generic "envelopes" into which binary data could be dumped and sent via basic transport protocols across a network for processing at ever more intelligent endpoints on the desktops of users. This is the essence of the "stupid network" as argued by David Isenberg in 1997. Just deliver the bits. For the first time a horizontally oriented telecommunications infrastructure could be built via inexpensive technologies with companies invited to plug into each other tinker toy fashion.

Those operating under this world view merely offer others TCP/IP bandwidth into which they may plug their networks. Bandwidth providers can interconnect and, so long as they use the same public domain interoperable protocols, focus on interconnecting separate and individually owned infrastructures. Because this infrastructure does not have to be vertically integrated, it can be plugged together in chunks like building blocks of Lego or the Tinker Toys of an earlier generation.

This infrastructure does not need a single central point of authority, since the reliability of the network is governed by the TCP/IP stack inside each user's machine talking to the TCP/IP stack in whatever machine elsewhere in the network the user communicates with. The value of the resulting internetwork was enhanced not by any one company's vertical market share but by the size of the network, measured in numbers of computers connected. The costs of interconnection and transport continued to plummet as expensive telephony-oriented transport equipment such as SONET and ATM could be replaced by generic and inexpensive Ethernet. Ethernet is now reaching speeds of ten gigabits per second and threatening to become a universal transport protocol from desktop to desktop across the wide area network of networks known as the Internet.

Suddenly a structure based on new technology, only partially overlaid on the public switched telephone network, had developed. This technology operated on a fundamentally different business model and philosophy from those of the public telephony network. By late 1999 everyone who looked at the new century ahead realized that, in less than a decade, TCP/IP-based Internet technology had created an infrastructure which, unless it is choked off by regulation, will soon overtake the PSTN in traffic and equal it in size.

Growing a Stupid Network to Match the Intelligent One in Size

The Internet operated from a suite of shared open protocols designed to make it easy for any interested company to internetwork and any applicable technology to interoperate. This new telecommunications model was designed to break down vertical barriers. By using a common protocol that could encapsulate vastly different communications media, the Internet enabled a seamless interweaving of data, voice and video on its physical transmission media. A continuous flow of statistically multiplexed connectionless traffic filled the wireline infrastructure with an efficiency not attainable by bursty, connection-oriented traffic that was exclusively voice, video or data. The cost of delivering the bits to intelligent user endpoints at the edge of the network was headed downward in a way that gave the vertically integrated dinosaurs much heartburn. Worse yet, from the big corporate point of view, the economies of scale behind this new telecom model encouraged horizontal cooperation among companies that could serve as specialists for outsourcing services desired by end users. For example, EarthLink could employ one company to maintain its customers' email services and another to do its web hosting. The efficiencies of horizontal cooperation also made it possible for end users to compete successfully with vertically integrated empires in delivering their own content.

The vertically integrated telcos were supposed to be natural monopolies because of the vast size of their wireline networks, built with billions of dollars of investment over several decades. It was assumed that such fully connected infrastructures could only be built over the course of decades. And yet with the flowering of the Internet paradigm at least four new national fiber networks (Qwest, Level 3, Williams and IXC) have blossomed in the past five years. A concentration on horizontal, interoperable, internetworked services has enabled a previously unimaginable increase in the total size of the telecommunications pie. Qwest has grown from no revenues to $2.2 billion in five years and is now acquiring US West, with revenues of $12 billion per year. Internet bandwidth consumption continues to double every six months and is moving into entirely new areas of human commerce and information sharing.
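Doubling every six months compounds faster than intuition suggests; a two-line calculation shows the implied multiplier over longer horizons.

```python
# Traffic that doubles every six months undergoes two doublings per year.
def growth_multiplier(years, doubling_period_years=0.5):
    return 2 ** (years / doubling_period_years)

print(f"After 1 year:  {growth_multiplier(1):.0f}x")
print(f"After 5 years: {growth_multiplier(5):.0f}x")
```

At that pace traffic grows fourfold each year and more than a thousandfold over five, which is why the new fiber builders leave spare conduits in the ground.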

This growth of the unregulated, horizontally interlinked Internet has enabled an entirely new blossoming of telecommunications technology in which venture capitalists fund entrepreneurs to develop new technology that cannibalizes other new technology at maximum speed. Given the business model and operational organization of the Internet and Internet-savvy companies, the continual innovation of the Internet is desirable in a way that is incomprehensible to those in the older, more conservative, vertically integrated telco structures. The interview with Flatiron Partners in this issue is designed to give readers a taste of how venture capital operates to fuel the growing Internet. Given an adequate understanding of the operational paradigm described herein, VCs will easily come to understand that, despite the pleas of Vint Cerf and John Patrick, giving money to ICANN is contrary to their own interests.

The Equinix interview that is this issue's lead article will help readers understand how the success of the Internet's horizontal value proposition created an environment in which the first vertically integrated ISPs found they could prosper only by outsourcing functionality to specialists. Such functionality runs the gamut from web hosting to email hosting to modem pool provision to Internet telephony settlement services. Equinix is bringing service providers of all kinds together in neutral business centers where customers can locate and sell their services to ISPs without supervision or regulatory interference of any kind. The purpose of the Equinix Exchange is to facilitate all levels of interconnection and to make changing one's Internet connections as easy and inexpensive as possible. Equinix Exchanges serve as collection points for the same unbundled layer 3 services that Menard posits as the foundation of a new regulatory paradigm.

The Equinix business model could not operate within a vertically oriented regulatory market. Furthermore, regulators attuned to that tradition have a vertical frame of reference. To them, everything in the horizontal plane that intersects their vertical world view looks like a point; they simply cannot see the horizontal possibilities, and the intersections in the plane appear as a bunch of unrelated dots. To those operating from the horizontal Internet perspective, on the other hand, voice and data packets look absolutely the same and should be treated the same.

Equally, if one looks at the horizontal plane on which the ISP lives, each intersecting intelligent-network service is likewise seen as an unrelated, unconnected dot. Each such service is merely something to which the ISP's new stupid-network technology can add value at the third, or IP, layer of its network more cost-effectively than a telco "intelligent network" overlay can. From the ISP point of view, an entire vertical telco infrastructure may now be contained in a single Dense Wave Division Multiplexing lambda (color) set up by the fiber provider with a few clicks on a web site. It is also a bit ironic that, in terms of this paradigm, national fiber-based infrastructure providers are thriving by providing dark fiber services, while ILECs feel that their local copper plants are their worst liability.

A Single Point of Failure

Unfortunately, in the process of scaling the distributed technology of the Internet, the Domain Name System developed by Paul Mockapetris in 1983 was designed in such a way that it was subject to central control. With the rise of the web, people accepted as fact that DNS was the means to a user-friendly form of address and did not debate whether the existence of a single root for the hierarchical DNS to point to was risky as a single point of control in an otherwise utterly decentralized system.

Those who saw the enormous economy of such a system also saw it as a threat to their vertically integrated empires. To defend themselves, they fashioned a strategy of using the ostensibly neutral Internet Society as a magnet around which to organize participants from the vertically integrated telecom and content-trademark industries. Those tied most strongly to the old vertically integrated view of technology (IBM, MCI, AT&T) came together in ICANN. In doing so they turned ICANN from an allegedly neutral coordination body for internet plumbing into a body that, in October 1999, is poised on the edge of monopoly control of the internet's DNS. The existence of a single root system for resolving internet addresses made it possible for those with the most to lose from the internet's continued growth to fashion ICANN as a regulatory system that would impose vertically integrated monopoly control on a system beginning to threaten an installed base worth hundreds of billions, if not trillions, of dollars. ICANN will use network technology to fashion itself into an archaic central authority. This is the lesson of the ICANN article found on page 13 of this issue.

We must understand first that economic gains in telecommunications have always come from new entry into the marketplace and not from improved regulation. As is pointed out below, in the Internet we have a set of technologies that, if they are not disabled by the politicians and regulators, can enable affordable broadband to the home immediately.

One must recognize that two different platforms, based on different business and financial models, are set to compete for telecommunications at the local loop. The first business model offers a choice of services from a single vertically integrated giant telco: cable, telephony, wireless (PCS), internet, and content on demand, all brought to us on a DSL-enhanced copper or coax local loop.

A competing business model may be found in Canada. There, as he deploys the IP-over-fiber CANet III, Bill St. Arnaud has concluded that, if aggregated horizontal services brought into homes in urban areas netted as much as $100 per month per household, a fiber-based local loop company working as a neutral carrier for various services could bring fiber to the home. No one is asking for unbundled layer 3 services (horizontal plug-ins like IP telephony, web hosting and email) to be given away. Regulation should encourage the owners of the copper plant to stick to a schedule for replacing the copper with fiber to which any and all service providers would have equal access. In the US, perhaps the universal service fund could be used as a means of weaning the operators of the intelligent network away from their bellheaded tools. In other words, the owner of the fiber local loop should be treated as a common carrier.
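The economics behind St. Arnaud's conclusion can be sketched with back-of-the-envelope arithmetic. The $100 per month per household figure comes from the text above; the per-home build cost and operating cost below are purely illustrative assumptions, not figures from his paper.

```python
# Simple payback sketch for a neutral fiber local-loop carrier.
# Only monthly_revenue is from the article; the other figures are
# assumed for illustration.
monthly_revenue = 100.0   # aggregated horizontal services, $/household/month
build_cost = 1500.0       # ASSUMED fiber build cost per home passed, $
annual_opex = 240.0       # ASSUMED operating cost per home per year, $

annual_net = 12 * monthly_revenue - annual_opex   # $960/year under these assumptions
payback_years = build_cost / annual_net
print(f"Simple payback: {payback_years:.1f} years")
```

Under these assumed costs the build pays back in under two years, which is the shape of the argument: aggregate enough horizontal services over one neutral pipe and fiber to the home finances itself.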

Menard writes in a draft of the paper he is preparing: "The layered model of the Internet, which has been previously explained by the authors in Bellheads vs Netheads (www.tmtendon.com), makes it possible for common carrier obligations to be enforced at a higher layer than the physical one. Doing so can finally ensure a viable regulatory framework that allows facilities-less INTERNET competition with horizontally specialized services to compete with facilities-based competition with a monopoly on vertically integrated services."

How to sum up? Regulators, given their vertical mindset, seem to have let the Internet progress as far as it has almost by inadvertence. Given the state of technology, Menard believes that they will now have to do one of two things. Either tell Internet users they made a big mistake and that, since regulators will maintain their vertical mindset, users had better prepare for a much more centralized marketplace with several orders of magnitude fewer providers; or embrace the concept of layer 3 unbundling. The latter would force facilities-based players like Videotron to let ISPs pay appropriately for the privilege of using their infrastructure. The ISPs would be free to compete with Videotron by using their knowledge of the capabilities of internet technology in more creative ways than Videotron has.

As Menard is finding (we intend to present his findings in our next issue), the onward rush of horizontally based internet technologies is finally running headlong into the entrenched interests of the vertically integrated telcos, which are propped up by many decades of entrenched regulatory mindset and political payoffs. The battle for the future of the Internet may well turn on the industry's ability to understand its assets and to educate both regulators and politicians about the consequences of moving forward without a grasp of the fundamental incompatibilities of these different world views.

CONTENTS of December 1999 COOK Report

Equinix Creating Neutral Exchange Points -- Adelson Traces Trend Toward Horizontal Outsourcing -- Equinix Business Model Offers Incubator for Coordination and Interconnection Between Providers of ISP Services pp. 1- 8, 24


Venture Capital and the Internet -- Jerry Colonna of Flat Iron Partners Describes the Process -- Predicts a Stable Growth Path & Spread of VCs to Europe pp. 9 - 12


ICANN - NSI Domain Name Accord Creates Joint Monopoly: Registrants Deprived of Most Rights -- Symbiotic Relationship Expected to Free NSI of Liability for Acting as Agent of ICANN Which Can't Be Sued pp. 13 - 21, 24


Executive Summary -- Regulator's Dilemma: Cable versus Internet -- Some Vertical versus Horizontal Organizing Paradigms for Equinix, VCs and ICANN pp. 21-24