Recursive Internet Architecture

An Introduction to RINA, a Possible New Foundation
for Global Telecommunication

Executive Summary


John Day is a maverick who deserves to be much better known by those who use the Internet and care about its fate. John’s involvement with the Internet goes back to the ARPANET, when he was at the Illinois node. He was an active observer and participant in the development of early standards in the 1970s and 1980s. In the early 90s, in his words, “I wanted to understand what was going on, independent of politics, hardware constraints and egos. If I was going to be working in this field, I needed to know what it was we knew and, for the things we didn't know, to figure them out if I could. I was thinking about how networks work; the Internet was only one data point in that. This was not engineering, this was science.”

John is a maverick in many ways. He began thinking about issues of architecture as early as the 1970s. While he views the 1992 decision to pursue what became IPv6 rather than CLNP as disastrous, he states that the outlines of his basic theory were already in place when that decision was made. In his basic theory of networks, published in late 2007 as Patterns in Network Architecture, he labels those who have been credited with the astonishing global success of the Internet as “craftsmen.” He chides them for not having shown the rigor of a scientifically defensible approach to computer science in general and to the principles of network operation in particular.

Rather than accepting what the first generation of Internet “craftsmen” built, John has developed what he considers a scientifically verifiable and defensible set of principles on which to build a new network architecture. The purpose of this issue of the COOK Report is to describe what he is doing. I do so by means of a mash-up of two of the many extensive talks he has given describing his new architecture. I also outline the international development effort that is underway to build interoperable prototypes of what he calls the Recursive Internet Architecture, or RINA for short.

I contend that what John is doing is of huge possible importance. I say possible only because most people would think it highly presumptuous for any one man or woman to design something intended to replace what in 20 years has become the dominant and pervasive global architecture for telecommunications. Yet this is precisely what John is doing, because he contends that what we have now cannot possibly continue to scale and operate satisfactorily over the next 20 years. We are probably 12 to 18 months away from the first usable RINA prototype, so the jury is out and will remain so for a while. Nevertheless, I contend that anyone with an ongoing professional interest in telecommunications should be paying attention to these events.

As a generalist I find John’s ideas hard to follow. Let it be said that there is the book and, between about 2008 and the beginning of the IRATI project in January 2013, perhaps two or three dozen talks and papers fleshing out the ideas in the book with fresh insights and leading to draft specifications for the modules that are currently being coded and tested. [Citations and links to most of these are found on pages 86-88 below.] In examining this literature I have had none of the experience or training that John assumes as the necessary foundation for his students and for the people who will write the code that will instantiate the completed RINA specifications.

Still, I think it important, even in this Executive Summary, to recount at least in outline form John’s explanation of what he is doing and why our most important general-purpose technology of the 21st century has been put together by what Lee Smolin calls craftsmen, on the basis of what they can get to just “work.” To its critics, the result is a structure held together by Band-Aids, patches and an assortment of kludges. It has never been designed on the basis of anything that could be called “scientific principles.” John describes in the final chapter of his book the rigorous way in which he has approached his task.
 
There he asks: “But what are the fundamental principles that can be taken from theory in computer science? Fundamental principles are relations that are invariant across important problem domains. The more fundamental, the greater the scope of their use. To have principles, we need good theory. But developing theory in computer science is much more difficult than in any other field because as I have said many times before, ‘we build what we measure.’” Patterns, p. 373

I would add that building something “measurable” is far easier than building a well-structured body of abstract thought.

Where to start? As Alan Turing did: with mathematics. But, John writes, “Mathematics is not a science. In mathematics, the only requirement is that a theory be logically consistent. In science, a theory must be logically consistent and fit the data.” Patterns, p. 374.

“Because mathematics is independent of the data, this can provide us with a means to develop theory in the systems disciplines of computer science. In a real sense, there are principles in these fields that are independent of technology and independent of the data. Principles follow from the logical constraints (axioms) that form the basis of the class of systems. This is the architecture.”

“The first two are pure theory. Here is where the pressure to emulate Euclid is most felt. One wants to find the minimal set of principles needed for a model that yields the greatest results with the simplest concepts. The general principles derived from the empirical will often take the form of a trade-off or principles that operate within certain bounds or relations between certain measures.” Patterns, p. 374.

Beginning on page 375, in what he calls “High Points,” John lays out in bullet form a series of conclusions about “what we had learned along the way. We considered the patterns that fell out when we extracted the invariances from protocols.”

The third from the last of the “high points,” on page 380, is one of the most significant: “‘Private’ addresses turn out to be the general case, and ‘public’ addresses are a specific kind of private address. This was totally unexpected, and I am sure is not going to be popular among those whose utopian desires outstrip their scientific discipline. But this is consistent with the world we now find ourselves in, where potentially everyone owns their own network, not just large corporations or governments. It appears now that the idea of the global public address that everyone has to have was an artifact of artificial constraints of early technology that are no longer necessary. One can break out of the tyranny of being accessible within a single global address space: one more blow for freedom, privacy, and security, another blow against the Orwellian tyranny of having to be always connected to the public network.”
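For readers who think in code, the point can be made concrete with a toy model. What follows is my own sketch in Python, not code from the book or from any RINA prototype, and every name in it is invented. The one idea it encodes: an address identifies a member only within the layer that assigned it, so a “public” address is simply an address handed out by one particular layer that nearly everyone happens to have joined.

# A toy model only: class and variable names are invented for this
# summary. An address is meaningful only inside the layer (DIF) that
# assigned it; "public" just means the layer has a huge membership.

class AddressSpace:
    """The address space internal to a single layer (DIF)."""

    def __init__(self, dif_name: str):
        self.dif_name = dif_name
        self._next_address = 0
        self.table: dict[str, int] = {}

    def assign(self, member: str) -> int:
        """Hand out an address that is valid only inside this DIF."""
        self.table[member] = self._next_address
        self._next_address += 1
        return self.table[member]


# Every layer hands out "private" addresses...
home_lan = AddressSpace("home-lan")
# ...and a "public" address is just an address in the one layer that
# nearly everyone happens to have joined.
global_dif = AddressSpace("global-dif")

home_lan.assign("printer")     # valid on the home LAN, and nowhere else
global_dif.assign("printer")   # same machine, different layer, new address

On this view, the question is never whether an address is public or private in kind, but only which layer’s membership can see it.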

And on the last page, the second half of one of the final paragraphs rings loud and clear with its focus on the same effective use of economic resources that we found to be a powerful motivator in the approach of Neil Davies and Martin Geddes in the preceding two issues. “Clearly an approach to networking like this benefits the owners of networks more than the vendors. Major vendors will hate it because it represents further commoditization and simplification. Engineers will hate it, because it requires many fewer engineers to develop and configure policies than developing whole new protocols and will come up with all sorts of excuses to continue their expensive proliferation of special cases and unnecessary optimizations. The greatest expense in networking today is staff, both operations and engineering. This direction requires much less of both.”

In my opinion, anything that helps swing the pendulum of power back from the multibillion-dollar global corporations that, willingly or not, became the handmaidens of the NSA in enabling the panopticon revealed in the summer of 2013, toward a situation where small players can offer competitive services and prices, will be a huge blessing. Our holy grail should be to develop local community providers that can work for the benefit of their communities and the customers therein.

The RINA prototypes will be open source, and they will offer their users a genuine opportunity to change the way the net is run, because the recursive units out of which they are built, their DIFs [Distributed Interprocess Communication Facilities], will be compatible with the structure and protocols of the current network. RINA is being offered as a potential problem solver. As John says: if it does things for you, use it; if not, stick with the commercial Internet. In offering what looks to be an alternative to the present global monoculture, RINA can present a more hopeful future.
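For readers who want the recursion made concrete, here is a minimal sketch in Python. It is my own illustration, assuming only what is described above; the class and method names are invented, not taken from the RINA specifications or the IRATI code. The single idea it encodes: every layer is the same kind of object, a DIF providing IPC to whatever sits above it, and one layer’s traffic is simply application traffic to the layer beneath.

from dataclasses import dataclass, field


@dataclass
class DIF:
    """One layer of the recursive structure: a Distributed IPC Facility."""
    name: str
    policies: dict                  # e.g. routing and flow-control choices
    lower: "DIF | None" = None      # the DIF this layer recurses over
    members: set = field(default_factory=set)

    def enroll(self, process: str) -> None:
        """Admit a named process to this layer."""
        self.members.add(process)

    def allocate_flow(self, src: str, dst: str) -> str:
        """Provide IPC between two processes enrolled in this DIF."""
        if src not in self.members or dst not in self.members:
            raise LookupError(f"{src} or {dst} not enrolled in {self.name}")
        if self.lower is None:
            return f"flow({src} -> {dst}) over the physical medium"
        # The recursion: this layer's traffic is just application traffic
        # to the layer beneath, which sees this DIF as two more members.
        lsrc, ldst = f"{self.name}/{src}", f"{self.name}/{dst}"
        self.lower.enroll(lsrc)
        self.lower.enroll(ldst)
        return self.lower.allocate_flow(lsrc, ldst)


# Three identical layers, differing only in scope and policy.
wire = DIF("ethernet-segment", policies={"error_control": "none"})
isp = DIF("isp-backbone", policies={"routing": "link-state"}, lower=wire)
vpn = DIF("company-vpn", policies={"auth": "required"}, lower=isp)

vpn.enroll("alice")
vpn.enroll("bob")
print(vpn.allocate_flow("alice", "bob"))

Note that there is no special “network layer” or “transport layer” anywhere in the sketch: the same class appears at every level, configured with different policies, which is exactly the commoditization and simplification the quotation above anticipates.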

 

Contents

Executive Summary                              p.  4

Why Do We Need RINA?
Background and Introduction                             p.   7
The Prototype Development Efforts                            p.   8
Why Must We Care?                                        p.   8
Designing and then Building the New Architecture                p.  11
Further Details about the Prototype Projects and IRATI            p.  13
How Patterns in Network Architecture Became RINA and then
Transitioned to Prototype Development                         p.  14
Some Personal Conclusions on the Part of the Editor                p.  16

The Barcelona 2011 “Clean Slate” Talk
Introduction                                            p. 18
The 2011 Barcelona “Clean Slate” and 2010 Korean “How to Clean a Slate” Talks        p. 19
A Path to a New Architecture Begins by Asking What             
Don’t We Understand?                                     p. 21
Meanwhile the Internet Never Got Past This                        p. 23
Let’s Go Back to Basics                                    p. 24
Inter Process Communication Access Protocol - IAP                 p. 26
Two Applications in Two Systems Require Three New Concepts         p. 28
A Little Re-organizing                                            p. 32
Communications on the Cheap                                p. 34
OSI Should Have Seen the Pattern                            p. 39
Summary and Implications                            p. 40
A Layer is a Distributed IPC Facility --  That is to Say a Layer is a DIF    p. 40

From This We Can Construct What We Need.  But First We Need a
Few Tools                                                p. 41
So What Was the Nature of the Application?                    p. 41
Delta-t 1980                                            p. 42
The Structure of Protocols                                  p. 44
Addressing                                            p. 47
What a Layer Looks Like                                    p. 53
How Does it Work?  Joining a Layer                            p. 56
Implications of the Naming Model                            p. 59
Choosing a Layer                                        p. 64
An Inter-Net Directory                                    p. 64
How Does it Work?  The Internet and ISPs                        p. 68
Security                                                p. 71
So How Do We Get There?                            p. 74

Conclusion                                        p. 76
Perils of an Iconoclast                                    p. 77
John Day is not the Only One to Question the Soundness of the Net    p. 79
Beware of the Professionalization of the Sciences                p. 82
What Happened and Why?                                    p. 85
The Pouzin Society “Bibliography”                            p. 88
A Final Observation on Architecture                            p. 90
Afterword -- “Moving Beyond TCP/IP”                    p. 92