Wireless Reaches Internet Critical Mass

Internet Use Jumps to Mobile Platforms as Spread of Digital Infrastructure Enhances Wireless Capabilities

We Survey Issues and Players in Internet Wireless Services

pp. 1-6, 10

We interview Ira Brodsky, CEO of Datacomm Research and author of books on wireless communications. We survey the 1999 explosion in digital wireless technologies, summarizing mobile, fixed, and wireless LAN technologies with particular attention to their impact on the Internet. Brodsky points out that, because of advances in digital technology, wireless broadband access to the Internet has become a reality. This means that virtually everything we now do over wireline Internet connections we will also do wirelessly.

The interview explains, from a technical point of view, how high bandwidth is achieved using TDMA and CDMA. It covers Triton Network Systems and LMDS. Through a discussion of Phone.com and the WAP protocol, it shows how cell phones are becoming web browsers. It describes PCS, as well as Sprint's leadership in that cellular technology, and explains how Metricom intends to compete with Sprint PCS.

Turning to Europe, the interview describes TDMA, CDMA, and GSM deployments there. GSM is an enhanced form of TDMA that is popular primarily in Europe; however, many people doubt that it would stand up well against a well-aimed rollout of CDMA.

Wireless LANs are critical to the hopes for home networks of IP-aware appliances. Costs are approaching $100 a node, and speeds are approaching those of Ethernet. The viability of this market, however, likely depends on the outcome of the IPv6 deployment debate covered in the IETF article included in this issue. It may also be affected by how the network continues to scale as broadband moves into the edges of the network.

With broadband wireless as an end-user option, access to the Internet becomes available anywhere, at any time, under almost any conditions. Given the cost and time necessary to install fiber-based local loop infrastructure, wireless is becoming a more and more viable local-loop alternative. On January 5, 2000, Advanced Radio Telecom announced the deployment of a 100 megabit per second IP network to connect high-speed business LANs to backbones across the US. The ART broadband MANs will be deployed by year's end in ten cities across the US. They will use Cisco-supplied Ethernet routing and switching products and will be configured in a self-healing ring architecture "capable of providing 200 Mbps of total bandwidth on its bi-directional paths."

According to George Gilder, the growth of these systems will be sustained by the introduction, 12 to 24 months from now, of chipsets based on Qualcomm's 2.4 megabit per second HDR data transmission technology, which Gilder characterizes as a flavor of TDMA running in unused CDMA channels. Brodsky disagrees: "HDR is CDMA; there may be some time-sharing going on, but it would be misleading to call it 'TDMA.' It would be more accurate to say HDR runs on separate channels that can be either in the cellular/PCS spectrum or outside that spectrum. Saying 'unused channels' suggests it borrows channels from the voice system." Cellular coverage in the US has evolved to the point where basically any customer can roam nationwide by using an analog phone that is also either CDMA- or TDMA-compatible. Finally, Brodsky is optimistic about the ability of a company like Cisco to sell wireless spread-spectrum equipment to ISPs, who could use it to bypass the LECs' local-loop stranglehold.

The Disruptive Internet: Triumph or Chaos?

Accurate Assessment of State of Internet 2000 Depends on Mix of Technology, Governance Efforts, & Network Engineering Issues

pp. 7-10

In our annual State of the Internet essay we examine the continued triumph of the Internet in the areas of wireless and broadband technology, in electronic commerce, and in newly evolving data storage technologies. However, we caution analysts not to lull themselves into a false sense of security by restricting their analyses to just these areas, for two areas not involving technology also affect the Internet's future. The first is the drive to regulate and control by means of ICANN, as evidenced especially by the support for ICANN from those companies still dependent on the success of their legacy-based technology. The second is an argument over architecture, which takes the form of a commitment to make widespread deployment of IPv6 a reality.

The IPv6 commitment is part of a technical debate over what some perceive as the lost "potential" of end-to-end IP connectivity, as NATs and firewalls have come to shield or otherwise protect hosts on corporate intranets and prevent several important protocols, such as IPsec, from penetrating the NAT and firewall barriers. How these concerns are handled will affect the structure of the Internet. It will be extremely difficult to make progress on the implementation of IPv6 without a centralized, top-down drive designed to get as many parties as possible to change. But the attention and effort given over to this drive will affect the final area: a slowly growing concern over the continued scalability of Internet architecture and routing as broadband technology is deployed at the edges of the network.

Some find it worrisome that scalability issues such as the competing backbone architectures examined in the January 2000 COOK Report are generally not discussed openly. These observers believe that such issues may matter more to the smooth future functioning of the Internet than the outcome of the IPv6 deployment debate. If too much emphasis is placed on new technologies regardless of their impact on network architecture, network performance is likely to degrade seriously. If too much emphasis is placed on the struggle to control through code or architecture -- in addition to law -- in the way that Lessig points out in his Code and Other Laws of Cyberspace, the ability of engineers to handle the challenge of architectural design will be severely impaired.

Consequently, the overall success or failure of Internet architecture will be determined by how these three factors interact. We contend that most analysts are aware of the technology issues but make the mistake of focusing on them to the exclusion of the regulatory control and the network architecture and protocol design issues. The result hinders the Internet's ability to respond to the demands placed upon it by runaway growth. Successful analysis now demands an ability to synthesize technology, legal, and network design issues.

IETF Debates IPv6 Implementation and End-to-End Architectural Transparency

NAT Boxes and Firewalls Seen by Some as Kludges to Be Eliminated and by Others as Symbols of Healthy Diversity

pp. 11-24

During the first half of December, on the general IETF list, there was an outstanding discussion of some critical problems of Internet architecture. Most participants were among the most distinguished engineers in the IETF. The discussion focused on the dilemmas posed by their desire to gain a set of perceived benefits from the deployment of IPv6. Brian Carpenter's December 1999 Internet draft on Internet transparency (http://www.ietf.org/internet-drafts/draft-carpenter-transparency-05.txt) provided the foundation for the discussion.

The crux of the perceived problem is that, in order to make IPv4 addresses scale during the Internet's takeoff in the mid-1990s, architectural "kludges" were instituted: private IP addresses for intranets hidden behind Network Address Translation (NAT) boxes and firewalls, and Classless Inter-Domain Routing (CIDR). The result is that huge investments have been made in equipment and architecture that will not be easily changed. Moreover, protocols designed to work in an Internet with end-to-end transparency will not work in a world where, to get from the backbone to a receiving device on the edge of the network, they must travel through NAT boxes and/or firewalls.
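The mechanics are easy to sketch. The Python fragment below is our illustration, not part of the IETF discussion; the Nat class and all addresses in it are hypothetical. It shows the two halves of the kludge: the RFC 1918 private address ranges that intranets use, and a NAT box rewriting each (private address, port) pair onto a single shared public address. A host outside the NAT never sees the original address, which is why protocols that carry or authenticate endpoint addresses end to end break down.

    import ipaddress

    # RFC 1918 private ranges commonly hidden behind a NAT box
    PRIVATE_NETS = [ipaddress.ip_network(n) for n in
                    ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

    def is_private(addr: str) -> bool:
        """True if addr falls in one of the RFC 1918 private ranges."""
        ip = ipaddress.ip_address(addr)
        return any(ip in net for net in PRIVATE_NETS)

    class Nat:
        """Toy port-translating NAT (hypothetical, for illustration only)."""
        def __init__(self, public_ip: str):
            self.public_ip = public_ip
            self.table = {}        # (private ip, port) -> public port
            self.next_port = 40000

        def outbound(self, private_ip: str, port: int):
            """Rewrite an outgoing flow onto the shared public address."""
            key = (private_ip, port)
            if key not in self.table:
                self.table[key] = self.next_port
                self.next_port += 1
            return self.public_ip, self.table[key]

    nat = Nat("203.0.113.1")                   # documentation-range public address
    print(is_private("192.168.1.20"))          # True
    print(nat.outbound("192.168.1.20", 5060))  # ('203.0.113.1', 40000)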

The perception is that the kludges are now very cumbersome and costly for corporations to manage, and that IPv6, which has many orders of magnitude more addresses than IPv4, would give the Internet enough flexibility that the irritating kludges standing in the way of end-to-end transparency could be removed. Alas, this is true only if IPv6 can be massively deployed throughout the Internet -- deployment at such a level that IPv4 virtually disappears. The problem facing the Internet is that, short of an unprecedented regulatory decree commanding massive global adoption of IPv6, enough deployment of IPv6 to ever make a difference is unlikely to happen.
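For a sense of scale, here is a quick back-of-the-envelope calculation (ours, not the article's) of the two address spaces:

    # IPv4 addresses are 32 bits; IPv6 addresses are 128 bits.
    ipv4_space = 2 ** 32     # 4,294,967,296 addresses
    ipv6_space = 2 ** 128    # roughly 3.4e38 addresses

    print(f"IPv4: {ipv4_space:,}")
    print(f"IPv6: {ipv6_space:.2e}")
    print(f"ratio: 2**96 = {ipv6_space // ipv4_space:.2e}")

The gap is a factor of 2**96, roughly 7.9 x 10**28 -- enough, advocates argue, to give every device a globally routable address and retire the NAT workarounds.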

Strong philosophical issues of design and management are at work here. On the one hand, the IPv6 advocates have a top-down vision of a uniformly designed and managed Internet. Opposed to their view is a belief that certainly reflects the operational reality of the net: namely, that the marketplace is working, developing diverse solutions that perform quite satisfactorily.

When Ian King wrote: "NAT IS A HACK. Why is there so much effort going into somehow either 'legitimizing' it, or demonizing it?" Perry Metzger replied that a fight is brewing about IPv6 and whether NAT is a sufficient alternative to it. Ed Gerck summarized an opposing point of view: "Further, it seems to me that if NATs are to be blamed for the demise of IPv6, or its ad eternum delay, then maybe this is what the market wants - a multiple-protocol Internet, where tools for IPv4/IPv6 interoperation will be needed and valued. A commercial opportunity, clearly."

Part of the fight is over control: who gets to set the rules by which Internet architecture will run? It could turn out to be unfortunate, when others believe there are serious unresolved problems with routing architectures, that the time and talent of the IETF are focused on the IPv6 control issues. We may be certain, however, that the IPv6 controversy is extremely important and will not quickly disappear.

Farber Moves to FCC as Chief Technologist

p. 24

On January 3, 2000, Dave Farber was appointed Chief Technologist at the FCC. Reaction to the Agency's gaining an Internet expert in the position was generally favorable. We wish that we felt as comfortable as the other experts about Farber's mission.