A Practical Navigator for the Internet Economy

Defining New Internet Business Models

Internet Performance Measurement Applied by Matrix NetSystems to the Solution of Enterprise and ISP Problems

John Quarterman and Peter Salus Explain Measurement Techniques Enabling Independent Evaluation of SLAs and Bandwidth Pricing,

pp. 1 - 23


We interview John Quarterman and Peter Salus on the subject of Internet performance measurement and analysis as performed by Matrix NetSystems, of which they are, respectively, Chief Technical Officer and Chief Knowledge Officer.

In a 15,000-word interview, Quarterman and Salus explain the technical and business aspects of their work. Matrix measures by using more than 80 "beacons" scattered at strategic locations around the world to ping and traceroute to 120,000 "destinations" in more than 100 nations, collecting hundreds of millions of data points per day.
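Editor: To give readers a concrete picture of what one beacon measurement pass involves, here is a minimal Python sketch. It assumes a Unix-style ping command; the destination addresses are placeholders, and Matrix's production system is, of course, far larger and more sophisticated than this.

```python
import subprocess
import time
from statistics import mean

# Hypothetical destination list (documentation addresses); Matrix's real
# list covered some 120,000 hosts in more than 100 nations.
DESTINATIONS = ["192.0.2.1", "198.51.100.7", "203.0.113.42"]

def probe(host, count=4, timeout_s=5):
    """Ping a host the way a beacon might, returning
    (reachable, packet_loss_fraction, avg_latency_ms_or_None)."""
    received, rtts = 0, []
    for _ in range(count):
        start = time.monotonic()
        # '-c 1' sends one echo request; '-W' is the reply timeout on
        # Linux ping (flags differ on other platforms).
        result = subprocess.run(
            ["ping", "-c", "1", "-W", str(timeout_s), host],
            capture_output=True,
        )
        if result.returncode == 0:
            received += 1
            # Wall-clock process time is a crude stand-in for parsing
            # the RTT out of ping's output.
            rtts.append((time.monotonic() - start) * 1000.0)
    loss = 1.0 - received / count
    latency = mean(rtts) if rtts else None
    return received > 0, loss, latency

if __name__ == "__main__":
    for host in DESTINATIONS:
        reachable, loss, latency = probe(host)
        shown = f"{latency:.1f} ms" if latency is not None else "n/a"
        print(f"{host}: reachable={reachable} loss={loss:.0%} latency={shown}")
```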

Matrix has been doing this on a smaller scale since 1993 and has traffic data from then on, measured by latency, packet loss, and reachability to a core set of Internet destinations. As a result, Quarterman and Salus can show how the reachability of this set of destinations has been affected over time by earthquakes, hurricanes, and the terrorist attacks of September 11.

On an ongoing basis, the products and services of Matrix require the distillation of masses of automatically collected data and analysis of the results. Matrix can then advise global corporations on how to tune their network connections to ISPs to better serve their customers, or show ISPs similar means of improving their own networks' performance.

Quarterman and Salus explain how a New York City-based global financial services organization gave Matrix the IP numbers of 33,000 of its own customers around the world. This is a group of people for whom fast and reliable connectivity is imperative, because the lack of it could have major negative impacts on their financial transactions. Plotting network performance over time for these customers makes it possible to rate the performance of the ISPs that serve them. By comparing the performance of various ISPs in locations such as Hong Kong, Singapore, Europe, and the US, Matrix is able to advise the financial services organization how different placements of its servers can be expected to improve performance.
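Editor: A toy Python illustration of that rating step follows. The regions, ISP names, and latencies are all invented; the point is only the shape of the computation, grouping measurements by region and carrier and ranking them.

```python
from collections import defaultdict
from statistics import mean

# Invented measurement records: (region, ISP, observed latency in ms).
records = [
    ("Hong Kong", "ISP-1", 210.0), ("Hong Kong", "ISP-2", 145.0),
    ("Singapore", "ISP-1", 180.0), ("Singapore", "ISP-2", 190.0),
    ("Europe",    "ISP-1",  95.0), ("Europe",    "ISP-2", 120.0),
]

# Group latencies by (region, ISP) and rank ISPs within each region.
by_key = defaultdict(list)
for region, isp, latency in records:
    by_key[(region, isp)].append(latency)

for region in sorted({r for r, _, _ in records}):
    ranked = sorted((mean(lats), isp)
                    for (r, isp), lats in by_key.items() if r == region)
    print(region, "->", ", ".join(f"{isp} {ms:.0f} ms" for ms, isp in ranked))
```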

The performance monitoring that Matrix does can also be used as a means of independently validating ISP service level agreements. Matrix posts its monitoring of latency, packet loss, and reachability for about two dozen large ISPs for public review on its web site. By monitoring performance from nodes on the network of a given ISP to a wide range of Internet destinations, Matrix can get not only information about the performance of the ISP's own network but also data on how well, through its peering, the ISP delivers transit to the entire Internet on behalf of its customers.
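Editor: In code terms, independent SLA validation reduces to comparing measured aggregates against contracted thresholds. The thresholds and sample data in this sketch are hypothetical and are not Matrix's actual methodology.

```python
from dataclasses import dataclass

@dataclass
class SLA:
    """Hypothetical contracted thresholds; real SLAs vary per contract."""
    max_avg_latency_ms: float
    max_packet_loss: float
    min_reachability: float

def validate(samples, sla):
    """Compare independently measured samples against the SLA.
    Each sample is (latency_ms or None, loss_fraction, reachable_bool)."""
    latencies = [s[0] for s in samples if s[0] is not None]
    avg_latency = sum(latencies) / len(latencies)
    avg_loss = sum(s[1] for s in samples) / len(samples)
    reachability = sum(1 for s in samples if s[2]) / len(samples)
    return {
        "latency_ok": avg_latency <= sla.max_avg_latency_ms,
        "loss_ok": avg_loss <= sla.max_packet_loss,
        "reachability_ok": reachability >= sla.min_reachability,
    }

# One invented day: 23 good hourly samples plus one outage hour.
samples = [(78.0, 0.004, True)] * 23 + [(None, 1.0, False)]
print(validate(samples, SLA(85.0, 0.01, 0.95)))
```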

We said to Quarterman: "Two years ago, in attempting to commoditize bandwidth so that it could be traded, the critical missing ingredient was the establishment of a benchmark price, that is to say, a unit of measure by which other units of bandwidth could be valued. Thinking at the time was that the benchmark would likely be settled on as something like a DS3 between New York and San Francisco. The breakthrough in the commoditization of natural gas was to separate it from the influence and cost of the network that transported it."

"However what is becoming clear from talking to you is that while fine for natural gas, this tactic is simply unworkable for IP bandwidth. There can be no meaningful way to benchmark a DS3 from point a to point b in isolation from the network on which it rides. Bandwidth as a stream of photons traveling along a fiber is very different from natural gas which can be easily stored at local collection points before use. The availability and performance of the stream of photons is what makes it useful. Therefore a lightwave sent from one city to another can not be realistically valued for trading purposes without its basic price being tied to an index of the performance of the network that transports it.."

"The index is a function of your measures of latency, packet loss and reachability, of the transport network taken on an hour-by-hour, day-by-day, week-by-week basis over a period of time. All of which means a DS3 from New York to San Francisco on UUNET might be worth X dollars while a DS 3 on ATT's backbone New York to San Francisco might be worth Y number of dollars where the difference between X and Y is a function of the difference in the performance index of the two transport networks. Furthermore that the value of the bandwidth would fluctuate over time as network performance fluctuated. Your Foresight performance index sounds like has a reasonable chance of acceptance. If it does become accepted, you have solved an important problem. Bandwidth trading has not been viable because there has been inadequate information on which to make it happen. You may have the information needed to make it possible."

Quarterman responded: "Yes, network performance provides price differentiation for links in addition to bandwidth and geography. Indeed this idea would be applicable to bandwidth trading. From the financial side of the coin, there are also major benefits to using this approach to bandwidth pricing. As price becomes more commonly associated with a QoS level, a carrier can use index data to enable itself and others to set prices on the services it performs. A trader would call this concept 'price transparency', and even though trading has 'fallen into disfavor,' the concept is still very useful to the marketplace. With price transparency, if a carrier knows its network inventory position and the size of the outstanding service level agreements for which it has contracted, the carrier can then perform serious quantified risk management and asset/liability management functions. This capability becomes a very powerful decision-making tool for the CFO."

"If providers can model their current inventory, we can help them value it. If we can map the carrier's network to begin with, we know the carrier's inventory, and that's a CFO's asset base. The carrier's outstanding SLAs are its outstanding liabilities. If we can associate a value with those inventory positions, we have the core of every risk management or asset liability management analysis. These two kinds of analysis are very powerful decision support tools for the CFO."

We asked: "Another way of saying all this is that you appear to have the means by which ISPs can, for the first time, define a viable business model. That is a rather important accomplishment." Quarterman replied: "Yes, a big problem with the current ISP business model is that they're all basically selling connectivity with the only differentiator being price, so they end up competing for the cheapest rates. The only companies making money on the Internet have been outfits like Cisco that sell equipment to ISPs. That's like selling shovels and bacon to gold miners during the gold rush. Most of the miners didn't make money, but some of the vendors did. But during the current market slump, even the shovel vendors aren't making much money."

Software-Defined Radios Enter the Limelight,

p. 23

We republish a pointer to an excellent essay by Mark Long on the state of the art of software-defined radios. On January 8, 2002, Long wrote: "According to the SDR Forum, the term software defined radio (SDR) is used to describe radios that provide software control of a variety of modulation techniques, wide-band or narrow-band operation, communications security functions such as hopping, and the waveform requirements of current and evolving standards over a broad frequency range. The end result is a completely new technology for enabling a wide range of applications within the wireless and broadband industries that potentially can provide efficient and comparatively inexpensive methods for overcoming many of the technical constraints that currently limit existing telecommunication systems."

Optical Signaling Systems, by Scott Clavenna (from Light Reading)

pp. 23 - 24

Optical networking may have gone from boom to bust in the past year, but the need for a totally new way of controlling carrier networks has never been stronger.

In order to survive and prosper, service providers need to make more money from their existing infrastructures, and they also need to slash costs. Well, guess what? That's exactly what new signaling technologies like GMPLS (generalized multiprotocol label switching) and standardized interfaces like the Optical UNI (user network interface) aim to make possible.

In essence, these protocols promise to automate the operation of telecom networks so that capacity can be used more efficiently and services can be provisioned much more rapidly, from remote consoles or via requests from client gear. That promises to radically reduce the need to send engineers out into the field to manually reconfigure equipment.

Editor: The above are the first three paragraphs of a long (22,000-word) and very good tutorial.

ICANN and Antitrust,

p. 24

by Michael Froomkin & Mark A. Lemley

Editor's Note: This is another excellent footnoted article in which Michael Froomkin, joined by a new colleague, Mark A. Lemley, argues that ICANN's behavior is likely in violation of antitrust statutes. They find it likely that ICANN is not immune from antitrust liability. The paper is 55 pages long and is available as a 179-kilobyte PDF from the URL below.

http://personal.law.miami.edu/~froomkin/articles/icann-antitrust.pdf

Empowering the Customer or Empowering the Telco?

Betting Your Company's Future on Your Understanding of the Right Mix of Technology, Economics and Policy,

pp. 25 - 29

The Internet has not as yet solved the problem of how to deliver huge quantities of bandwidth at a price that enables providers to pay off debt and make a profit. In this sense it has not yet developed a viable business model. The irony is that its new technology has been successful at undercutting the prices enjoyed by incumbent carriers and needed by them to maintain the financial viability of their services. Indeed it has become so successful in this respect that some analysts are beginning to fear for the stability of the global telecommunications system.

As recently as 1999, analysts were predicting that the IP technology of the packet-switched Internet would sweep away the old circuit-switched telecom technology. They were wrong. The old network did not collapse under the onslaught of a triumphant new global packet network bringing vast amounts of inexpensive bandwidth to every home and business.

One reason it did not was that the technologists were so certain of the superiority of their product, and so good at driving the hype that got them their early-stage capital investment, that they were able to sail forward without a long-term viable business model for what they were doing. Build it and you will be saved - somehow. The provisioning of vast amounts of cheap bandwidth was seen as a sustainable business model for the Internet.

The problem is that, ten years on, the bandwidth business model has not proven to be a viable one. We contend that the question is whether bandwidth is something on which a business model can be built. Or is bandwidth, like a highway, just an enabler? We started out a decade ago talking about the information superhighway and then proceeded to try to build multiple global privatized versions. Imagine if Ford had spent tens of billions building a global interstate for its cars, while GM, DaimlerChrysler, Honda, and Toyota had each done the same thing. What has been built are highways with largely identical performance, capable of huge indiscriminate throughput of "vehicles" or packets. They have led to an unsustainable business model. "Become a customer of my commodity system." "No. Not his. Mine. I just doubled the speed and I will sell you access for 20% less. I only had to borrow another billion dollars against my nonexistent profits." Yes, we have a train wreck. Any wonder?

In the context of this economic upheaval, the most significant technology trend that we see is one that will present managers, investors, and policy makers with the choice pointed out by the title of this report. Empower the user. Or empower the telco. Choices are being made. The technologists are driving control of lambdas into the hands of end users. Peer-to-peer, as software and infrastructure, is enabling the formation of communities of users at the network's edges. Here the goal is generally to make the center, and anything associated with it, disappear. The impact of technology on network architecture will be the most important trend to watch in 2002.

But, as many have found out to their dismay, we can no longer make intelligent decisions in telecommunications absent a thorough understanding of the industry's economic picture. Indeed, analyses of technology trends done without an understanding of their economic impact are, in this climate, of limited use.

We are marching toward a denouement designed to allow the ILECs to try to be the last ones standing by letting them use their control over the "last mile" to re-monopolize service. This, after all, is what the "free market" has given us. We owe the great boom of the last 20 years to our faith in the free market, so if we just hang on a while longer, someone or something will save us. Indications that the new technology may also bankrupt the ILECs are not yet on the radar screens of most analysts.

This leaves us in a strange situation: one where we are so smart that we can shove billions more photons down the same thread of glass this year than we could last year, but also one where we are so ideologically blinded that we remain wedded to building and maintaining multiple privately owned systems, when experience now shows us that there is no business model that can pay for multiple competing privately owned commodity systems.

Roxane Googin, Editor of the High Tech Observer, in a short essay in David Isenberg's Smart Letter 64, has captured the essential problem: "But even though the attackers are starving, they are forcing marginal bandwidth prices below the ILEC's cost of provisioning -- not only replacement but also provisioning. So the ILECs are going to get squeezed because they have this complex, labor-intensive infrastructure that is no longer supported by a viable economic base."

"In this kind of nightmare scenario, nobody wins. It is just a big mess because the attackers are [also] going under. Meanwhile, they have crippled the incumbents. We are witnessing the perfectly predictable outcome of this process: no equipment sales, and no more progress." [Snip] "So we have to fix the problem. This means restructuring the debt and owning up to what the real issues are. This owning-up hasn't been done yet."

This is a dilemma that we shall explore further in our April 2002 issue, in a long interview with Roxane Googin.

Anatomy of a Small Revolution (Part 2)

by Dave Hughes, pp. 30 - 43

Dave Hughes brings the story of Old Colorado City up to date. He writes: "I have watched the area grow and change. Change in ways that are not necessarily good. It's been a long time - 25 years. And nothing is forever. Even though Old Colorado City and the Westside remains so much better off than it has ever been before, and by every objective measure is an unprecedented (in Colorado Springs) success for which I am given ample credit, still I have watched human, business, residential, and government 'nature' at work over time, and see lots of reasons why it may not sustain itself forever."

Article and Discussion Highlights,

pp. 43 - 48

Executive Summary,

pp. 48 - 52