A Practical Navigator for the Internet Economy

Driven by Need for Risk Management, Bandwidth Commodity Market Coming

Efforts Underway to Create Tools That Can Quickly Match Buyers and Sellers Amid Rapid Market Changes in Demand and Supply

Stan Hanks Explains How These Developments Will Reshape the Internet Industry

pp. 1-8, 12, 15

Over the next 12 to 24 months, experts predict, bandwidth will become a commodity tradable in real time on commodities exchanges around the world. We interview Stan Hanks, formerly VP of Research and Technology for Enron Communications and currently very much involved in making this commoditization happen. (We have also interviewed Lin Franks of Andersen Consulting and intend to publish that interview in our June issue.)

As Hanks points out: If you get to the point where you have an oligopoly of suppliers - which we pretty much do - and an increase in availability combined with a historic decline in price, as well as a fair amount of price elasticity associated with the thing in question, you start seeing the development of commoditization.

Hanks outlines in detail how a cost of about $2.75 per channel per mile for OC-192-capable lit fiber across a wide area network is derived. He then points out that, because national networks average 25,000 route miles of 144-fiber cables, the initial cost of such a network runs to multiple billions of dollars. This is a very hefty investment for something whose wholesale price has been declining at a rate of about 40% a year for the past five years or so. The problem is that when planning an investment of this size it is not possible to derive reasonably accurate figures for the income that might be expected from it. Financial exposure is now vast, with no adequate way within the industry to manage risk. Commoditization of bandwidth will provide the tools by which that risk can be managed.
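The arithmetic behind a per-channel-mile figure can be sketched as an amortization of the build cost over deliverable capacity. The build cost and channel count below are invented assumptions chosen only to land near the figures cited above; they are not Hanks's actual model.

```python
# Illustrative sketch of how a per-channel-mile cost is derived.
# build_cost and channels are hypothetical, not figures from the interview.

route_miles = 25_000   # national backbone route miles (from the article)
build_cost = 2.5e9     # assumed total cost to build and light the network
channels = 36_000      # assumed deliverable channel count across the DWDM plant

cost_per_channel_mile = build_cost / (route_miles * channels)
print(f"${cost_per_channel_mile:.2f} per channel-mile")

# The risk side: a wholesale price declining roughly 40% a year
price = cost_per_channel_mile
for year in range(1, 6):
    price *= 0.60
    print(f"year {year}: ${price:.2f} per channel-mile")
```

Run forward five years, the same capacity wholesales for less than a tenth of what it cost to build, which is the unmanaged exposure the article describes.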

The first step the industry can take in this direction is to establish a benchmark price and uniform contracts. Efforts to do this are already well underway, and success is anticipated well before the end of the year. Such a benchmark might be the price of a DS3 from New York to LA, or it might even be the cost of a wavelength of light on a DWDM system over a distance of 500 miles.

A real commodities market will assure users that they will always be able to get a supply of bandwidth, even at very short notice. One may then expect the Internet business model to shift from the question of whether there will be adequate supply to the question of what to do with the bandwidth. Having an assured supply at a predictable price will make it possible to do many things with bandwidth that currently are not economic.

Currently ISPs tend not to give the capacity planning problem adequate attention. Their ability to turn up new bandwidth is hampered by the fact that they don't have the financial management and projection tools that would enable them to go to their finance people and say: if you give me "x" dollars for new capacity, I can give you "y" income within "z" amount of time. Before long, financial analysts are going to be asking senior carrier management what it is doing about the huge amounts of unmanaged risk it carries on its books. Suggestions are being made that the way to manage this risk responsibly is to join an industry effort to commoditize bandwidth and eventually automate trading.
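A projection of the "x dollars in, y income out, within z time" kind might be as simple as a payback calculation. The numbers below are invented for illustration; the point is only that with predictable bandwidth prices such a projection becomes possible at all.

```python
# Hypothetical capacity-investment projection of the kind the article
# says ISPs currently cannot produce. All figures are invented.

def payback_months(capex, monthly_revenue, monthly_opex):
    """Months until cumulative margin covers the capital outlay."""
    margin = monthly_revenue - monthly_opex
    if margin <= 0:
        return None  # the upgrade never pays for itself
    cumulative, months = 0.0, 0
    while cumulative < capex:
        cumulative += margin
        months += 1
    return months

# x = $1.2M for new capacity, y = $90k/month new revenue, $30k/month opex
print(payback_months(1_200_000, 90_000, 30_000))  # -> 20 (months)
```

With commoditized bandwidth, the revenue and cost inputs stop being guesses and become hedgeable market prices.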

The terms for the purchase of fiber today tend to be negotiated from scratch with each contract and built around very long durations - 10 to 30 years. Part of what is needed is a re-education of the industry to the point where it can grasp why purchasing bandwidth for a few hours to a few weeks will serve everyone's interests better than purchases of ten to twenty-five years' duration. Sycamore is getting a leg up on the rest of the industry by focusing not just on optical transmission services but on building software that can be useful in the provisioning of new bandwidth services.

Ultimately, we may expect to see the vertical hierarchy of the big carrier backbones devolve into a mesh. Currently these big networks don't just connect to each other at a handful of places; they interconnect in all kinds of interesting ways. But they connect only to each other and then to their customers. This interconnection topology is going to start to evolve in very interesting ways. Customers of one vertical network, given the opportunity to do so, would like to be able to buy bandwidth to connect themselves with customers attached to a different vertical backbone. Horizontal linkages would then be overprinted onto the vertical ones. You could move through the matrix either vertically or horizontally, in accordance with what your real-time switching and bandwidth equipment would allow.
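The topology shift can be pictured with a toy graph: in the vertical world, two customers on different backbones reach each other only up through their providers and across a peering point; a horizontal link bought on demand collapses that path. Node names and hop counts here are illustrative, not a model of any real backbone.

```python
from collections import deque

def hops(graph, src, dst):
    """Breadth-first search: hop count of the shortest path, or None."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nbr in graph.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return None

# Purely vertical topology: customers hang off backbones, which peer.
vertical = {
    "custA": ["backboneA"], "backboneA": ["custA", "peering"],
    "peering": ["backboneA", "backboneB"],
    "backboneB": ["peering", "custB"], "custB": ["backboneB"],
}
print(hops(vertical, "custA", "custB"))  # -> 4: up, across, and down

# Overlay one horizontal, as-needed link bought on a bandwidth market.
mesh = {k: list(v) for k, v in vertical.items()}
mesh["custA"].append("custB")
mesh["custB"].append("custA")
print(hops(mesh, "custA", "custB"))  # -> 1: direct
```

The "overprinting" in the text is exactly this: horizontal edges added to a vertical tree, turning it into a mesh.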

According to Hanks, the reason this hasn't happened to date is twofold. First, there isn't enough money in it in terms of applications. Second, there is no way to manage the risk associated with doing it. This horizontalization comes when A and B wind up being able to connect directly to each other on an "as needed" basis. Akamai and the other CDNs (CDN = Content Distribution Network) are doing things to facilitate this; Enron is as well. Akamai's content distribution model sets up horizontal routing for web sites in such a way that, should traditional routes become congested, Akamai's routing can switch from a vertical organization to horizontal paths across provider boundaries. Inktomi, and Digital Island after its recent merger with Sandpiper, may also be regarded as Content Distribution Networks determined to build their own models of horizontal connectivity across provider backbones. There are more of these out there. "Coming soon," as the saying goes. Hanks was at a venture capital conference recently and found "CDN" to be one of the new hot buzzwords.

Swedish Ruling Party Endorses Building National Broadband Infrastructure: Goal Vital to Sweden's Security

Interview With Swedish Commission Member Explores Development of Infrastructure Policy Goals of Five Megabits per Second IP to Every Swedish Home and Apartment

pp. 9-12

We interview Anne-Marie Eklund Löwinder, a senior project leader of the Swedish Government's Commission on Information Technology and Communications (CITC). Eklund Löwinder explains the rationale behind the national fiber strategy presented to the Swedish parliament this week. The government is proposing a fiber build-out that will connect all municipalities in Sweden. The fiber is to be owned by the municipalities and sold on equal-access terms to ISPs that meet the program's criteria. A second and equally important part of the program is designed to lead to a local build-out that will result in an Ethernet jack delivering TCP/IP at five megabits per second to every home and apartment in Sweden. The interview also discusses Stockholm's experience with Stokab, which has fibered almost the entire metropolitan region over the past five years.

Napster - MP3 File Sharing Application - A Hugely Popular Bandwidth Sink Defies Control Efforts of Network Administrators

pp. 13-15

Napster is an application written by a 19-year-old computer science student last summer. Downloadable from the web, it lets users temporarily turn their computers into servers for the purpose of swapping MP3 files. Having grown hugely popular in the last several months, it now accounts for a significant percentage of Internet traffic; according to university network administrators, it is clogging campus connections to the Internet. We publish an edited discussion of what can be done about the problem from the CAIDA and NANOG mail lists. Port blocking has been tried without great success, as students in many cases find other ports to use. A new program called Gnutella, far more powerful than Napster, is under development as well. Some people are saying that Napster's impact on Internet traffic may approach that of the web.
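The port-based approach the lists discuss can be sketched as a simple tally of flow records against Napster's commonly reported default ports. The port set is an assumption based on contemporary reports, and the flow records are fabricated sample data; the last entry shows why the approach fails once users move ports.

```python
# Tally bytes on ports commonly reported as Napster defaults (assumption).
# Flow records are fabricated; a user on a non-default port is missed.
NAPSTER_PORTS = {6699, 8875, 8888}

flows = [  # (src, dst, dst_port, bytes) - invented sample data
    ("dorm-12", "off-campus", 6699, 42_000_000),
    ("lab-3",   "off-campus",   80,  1_500_000),
    ("dorm-7",  "off-campus", 6699, 18_000_000),
    ("dorm-9",  "off-campus", 4444, 21_000_000),  # moved port: not counted
]

suspect = sum(b for _, _, port, b in flows if port in NAPSTER_PORTS)
total = sum(b for *_, b in flows)
print(f"{suspect / total:.0%} of bytes on known Napster ports")
```

Blocking or rate-limiting those ports at the campus border is the obvious next step, and exactly the one students defeat by choosing other ports.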

Cracking the Code: an Analysis of US Internet Governance, E-Commerce and DNS Policy

Why US Dominance of E-commerce Is Indeed Dead if ICANN Fails & Why the US Has the Most to Lose from Continuing a Policy Founded on Indefinite Control of the Root

pp. 16-23

Various court decisions are making ever more clear the advantage that possession of the root gives to the US in maintaining its commanding lead in global e-commerce. This is leading to resentment abroad. Given the course on which we are all headed, ICANN is likely to be at best a temporary band-aid on a festering sore until decisions of foreign courts or governments fracture the US-controlled, authoritative root. We discuss both some of the ways in which this fracture might take place and what impact it would likely have on the Internet's operation.

While a fractured root would certainly not destroy e-commerce, the very fact that it happened would be likely to pop the speculative bubble supporting the stratospheric prices of Internet stocks. It would demonstrate that a globally-unified forward march of the global economy running on internet "rails" is only a pipe dream. Many investors and VCs would be forced to rethink the price-value equations on which their actions have been based. Should contention over the root get serious enough to throw prices of Internet stocks into a nose dive, the United States would lose far more than any other nation in the world. This is very likely what John Patrick and Vint Cerf and Esther Dyson had in mind when they asked the venture capital community to contribute to ICANN last summer, cryptically warning that if ICANN failed e-commerce would also fail. Certainly the ongoing uncertainty over how much of a global market for business-to-business e-commerce would remain easily reachable in the event of trouble for the authoritative root would take the buzz off most e-commerce business plans.

We arrived at the above conclusions after pondering Ed Gerck's essay "Thinking" (April COOK Report, pp. 23-25). We find Gerck's article to be a useful point of view for analyzing some unresolved issues relating to ICANN and the Department of Commerce on the one hand and the DNS and the alleged need for a single authoritative root on the other. Gerck sees DNS as the major centralized component of an otherwise decentralized Internet. In his essay he says that some of the choices made long ago in the design of the DNS not only make it depend on a single root but also mean that "without the DNS there is no email service, search engines do not work, and webpage links fail." DNS is "the single handle of information control in the Internet. And, in the reverse argument," it is "its single point of failure."
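Gerck's single-point-of-failure argument can be illustrated with a toy delegation tree: every lookup starts at the one root zone, so removing the root breaks every name. The zone contents below are invented (RFC 2606-style example names), and the walker is a cartoon of real resolution, not an implementation of it.

```python
# Toy model of DNS delegation. Zone data is invented for illustration.
ZONES = {
    ".": {"com.": "com-servers"},               # root delegates .com
    "com.": {"example.com.": "example-servers"},  # .com delegates the domain
    "example.com.": {"www.example.com.": "192.0.2.1"},  # leaf address record
}

def resolve(name, zones):
    """Walk delegations from the root; fail if the chain is broken."""
    zone = "."
    while True:
        table = zones.get(zone)
        if table is None:
            return None  # no root (or broken delegation): every lookup fails
        if name in table:
            return table[name]  # reached the leaf record
        # follow the delegation whose zone name is a suffix of the query
        nxt = next((z for z in table if name.endswith(z)), None)
        if nxt is None:
            return None
        zone = nxt

print(resolve("www.example.com.", ZONES))  # -> 192.0.2.1

# Remove the single root and the identical query fails outright.
broken = {z: t for z, t in ZONES.items() if z != "."}
print(resolve("www.example.com.", broken))  # -> None
```

Everything below the root can be fully distributed; the walk still begins at one place, which is the "single handle of control" Gerck describes.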

With something as powerful as the Internet, everyone wants more and more to seize control, if only to keep others from controlling it. It certainly can be argued that the struggle for control of DNS has become, over the last four years, the focal point of a diverse coalition of actors (trademark interests, IBM, AT&T and others) that have gathered together to form ICANN. Now it is generally assumed under US law that the organization which controls an entity bears legal responsibility (liability) for the use of its power. Gerck suggests that under the conditions of a single handle of control over the Internet, the controlling organization's liability is potentially total. Thus, given the nature of ICANN's use of DNS as a means of grabbing control over the Internet, the liability facing ICANN and anyone else who would emulate it is essentially unlimited. As a result, in structuring ICANN it has been necessary to insulate all players from the consequences of their otherwise unlimited liability.

We have taken Gerck's essay and used it as a template on which we have applied our own knowledge of ICANN. This process has helped to bring a number of issues into focus for the first time. In their eagerness for control, those who have promoted ICANN have taken all the critical administrative infrastructure of the Internet - DNS, IP numbers, and protocols - and dumped it into the single ICANN basket.

But having all our eggs in one basket, and having in the DNS a single point of failure, creates the kind of prize that, as long as we still have national economies competing against each other, the US government and its major corporate allies will do whatever is necessary to protect from foreign capture, or even from foreign influence. Since ICANN is the basket holding all the eggs, it must in the meantime be protected from its unlimited liability by being made virtually unsueable.

In order to make ICANN unsueable, its backers have had to create for it an arbitrary structure that renders it immune from the input of the communities it is supposed to serve. This arbitrary structure has in turn prevented ICANN from inheriting the political legitimacy within the Internet community that Jon Postel's exercise of these functions once enjoyed. ICANN follows a carefully scripted routine that supports its role as guardian of all the Internet's administrative eggs in its single "basket." This scripting greatly angers those who, having mistaken the ICANN process for one of actual openness, have invested their time in hopes of influencing the outcome. However, the play-acting also serves ICANN's interests, in that it can be spun by ICANN's public relations firm in such a way that the casual press, lacking the time and ability to do its own research, may be fooled. ICANN has thereby bought the administration some short-term time to regroup and maneuver.

What we have done in this article is demonstrate (1) why ICANN can be nothing more than a temporary fix, (2) how ICANN is likely to fail, (3) why the consequences of this failure will hurt the United States more than they will hurt other nations, and (4) why there needs to be a switch away from ICANN's efforts, designed at all costs to shore up an untenable attempt at long-term central control over Internet addressing, toward efforts aimed at placing in the hands of each user the means to address and find Internet objects.

ICANN was created as a diversion on the part of Ira Magaziner, who conveniently left the administration and returned to private consulting as soon as it was established. It is a smokescreen cleverly designed to give the rest of the world the illusion that the US is transferring control of administrative functions over the net to a world body in which the Europeans and Asians would be led to think they could play a significant role in policy making.

And indeed, just so long as they don't try to grab the root, American policy is to play along with the Europeans and Asians and, acting through ICANN, do such things as grant them direct control of their own country codes and give their corporations preferential treatment over domain names on the excuse that such names can be treated as trademarks. Many other powerful groups have been given an opportunity to play in the great ICANN charade.

As long as ICANN is there, it gives the impression that others besides the US government will be allowed a role in root server policy making and control. In reality, the continued heavy-handed behavior of Roberts and Dyson has made it possible to drag out the ICANN foundation process for another year, getting it conveniently past the touchy upcoming US presidential elections. As a result, the Clinton Administration has been able to extend the dual relationship of the ICANN-DoC cooperative agreement.

The extension makes it possible to preserve ICANN as a maneuver designed to deflect attention from the stark fact that, without ICANN, the US administration would seize the root servers by force rather than lose control. This is the secret of why ICANN cannot be allowed to fail. ICANN's central purpose is to divert attention from the Clinton administration's decision to treat the root servers as a strategic telecommunications resource that it is perhaps even prepared to use the police power of the state to keep from falling into the wrong hands.

It would be encouraging to see some interest in Washington in incubating the understanding necessary for the Internet and e-commerce to work their way out of the win-lose control situation in which they find themselves. The route of control has been tried. As we have shown in this discussion, not only has it not worked, it also looks untenable on a long-term global basis. It is to be hoped that if our policy makers understand that we are likely to lose more than anyone else in a struggle to maintain our control, they may also come to understand that they have the most to gain by removing all possible levers of control from everyone's grasp. If it becomes clear that no single entity can hope to control the Internet, many strains in the present system could quickly dissipate. We are a "can do" nation. If the administration were to understand that everyone would have more to gain from such an outcome, we believe there is adequate talent available to ensure success.