A Practical Navigator for the Internet Economy

RFID Middleware, RFID Architecture, Supply Chains, Offshoring, and the Real Time Global Corporation

Explorations in the Globalization of Everything - a Three Part Issue

How to purchase this issue. $600 or $2400 group.

Global Technology Change Speeds Up

This three-month issue started out to investigate the ideas and technology behind what Roxane Googin, in our February 2004 issue, called the emergence of the real time global corporation. Here we have examined the economics and technology dynamics of offshoring. We have looked at some not-yet-widely-observed events in China and, in a long interview with Art Kleiner, have examined the application to technology of Art's newest research on Core Group Theory. Toffler's "Third Wave" has arrived with crushing impact.

We also have taken an exhaustive plunge into supply chains and RF-ID technology as enablers of real-time, tightly knit corporations. With the recent maturation of both telecom and computers into commodities, people in the field have been shaking their heads in dismay, saying that the great burst of technology innovation that marked the last 30 years is dying out.

One area that remains genuinely innovative is the one made possible by the advances in miniaturization enabled by the continuation of Moore's Law, combined with RF technology. In ways never before imaginable, both telecommunications and computing are becoming ubiquitous. This is leading us to an environment of omnipresent computing grids of various types. It is enabling the application of RF-ID techniques across supply chains.

RF-ID Perception Vs Reality

But as we have found out, the gap between popular "knowledge" and reality in the recent explosion of interest in RF-ID is huge. The popular perception is that intelligent tags will soon be everywhere and that they will continually broadcast the whereabouts of every item we purchase to some omnipresent infrastructure, leaving us instantiated as entries in thousands of databases with no surviving privacy worthy of the name.

While the technology that could enable this is gradually coming into existence, it is highly unlikely that we shall ever see anything like such an all-encompassing environment. The expense of the omnipresent data-collection mechanisms would be huge; first-generation tags are designed to be killed at the point of sale; and strategic and psychological contradictions among supply chain partners, which right now have much to gain but still cannot cooperate, ensure that the use of data across organizational boundaries grows very, very slowly.


We are unlikely to see widespread use at the item level for five years or more, because the tags still cost too much and because current enterprise resource planning software would have to be massively redesigned to benefit from item-level tracking. The Wal*Mart and DoD actions, for the time being, mean quicker inventory taking mostly in warehouses. Period.


As our report will show, however, ultimately successful implementation across whole supply chains, from point of manufacture to point of sale, will mean profound changes in how business is done globally. At this point the path from here to there is completely murky.


Families of tags and the processes for using them are not mature. The transition from UCC bar code corporate software linkages to EPC Global with VeriSign-inspired DNS lookups is a gleam in the eye of the trade group that wishes to preserve the sway of bar codes over commerce. While companies know that much more can be done with RF-ID tags, they are unsure of what and how, and they are burdened with complicated and expensive EDI systems that, for the most part, work fine without the tags.

The defining issues are architecture and psychology. If the industry goes forward in a transition from optically read to radio-read codes, changes will be significant but likely not disruptive. As we shall see, however, there is a technology design that is likely to be profoundly disruptive. This is a service grid architecture based on mobile RF-ID agents. It will take something like this before the Yankee Group's extravagant 600-billion-dollar-a-year savings for North America can bear any resemblance to reality.

In the meantime the real time global corporation is being driven forward by the technology-based economic motivation of squeezing more effective use out of one's resources and capital. Tightening inventory turnover is on corporate agendas here and now in a way that RF-ID will not be for some time.

However, in a three-month immersion in this topic we have come upon an extremely intriguing, comprehensive and inherently disruptive proposed application of this technology. A company called Mission Assurance is seeking funding for an extremely innovative service grid environment that will enable users to share supply chain information across corporate boundaries in a highly tailored way, involving the application of locally controllable trust and policy mechanisms. In this report we go into immense detail on what these people are proposing because, whether or not it ever becomes a real product, at the conceptual level it forces one to think about and understand what is likely to become possible.

Should service-grid-based supply chain management be successfully deployed, the resulting environment would not be one in which tagged items in our possession are continually read whenever they come within range of omnipresent sensors attached to an omnipresent series of networks; still, the domain encompassed by such an environment would be vast. It bears thinking about because, although not currently feasible, it will become feasible relatively soon. Under such conditions the best protection against unacceptable losses of privacy will be to make certain that tags cannot remain active beyond the point of sale.

Consequently what we see now is that RF-ID works or can work at three levels.

1. a quicker and hence cheaper method of taking inventory at the location level
2. input of RF-ID-derived results into corporate ERP and SCM (supply chain management) software so that, as with bar codes, central management knows the sales figures.
3. ultimately, the creation of virtualized information down to the item and product level via an intelligent mobile agent system operating in a service grid such that product data travels through the supply chain with the product. Boom. There you have it! That’s the elephant.

Let's look at each in turn.

1. Just saving a few seconds in inventory scans, over and over and over again in a big operation, adds up.

2. But here is where it gets interesting. Thirty years after the introduction of bar codes, we have arrived at a state where this scanned data is normally fed into software developed by the UCC that is interfaced to deliver data on the transaction (inventory check-in or sales check-out) to corporate EDI, or other accounting and inventory management software.

Transitioning this system of legacy, vertically structured information silos from scanned bar code data to wirelessly transmitted data of the same kind is what the UCC, EAN, Auto-ID, VeriSign, EPC Global system is all about. The RF-ID tag can hold more information than a bar code. But the tags being pushed by Auto-ID and EPC Global, in order to be dirt cheap, will hold little more.
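As a rough illustration of the kind of lookup the UCC, EAN, Auto-ID, VeriSign, EPC Global arrangement implies, here is a minimal sketch, entirely our own and not code from any of those parties, that converts an EPC identifier in URN form into the sort of DNS domain name an Object Naming Service query would use. The URN layout, the field order, and the onsepc.com root are assumptions modeled on the draft ONS approach.

// Illustrative sketch only. The EPC URN format, the field reversal, and the
// onsepc.com root are assumptions based on the draft ONS approach; this is not
// code from EPC Global, VeriSign, or any vendor mentioned in this report.
public class OnsLookupSketch {

    // e.g. "urn:epc:id:sgtin:0614141.000024.400" -> "000024.0614141.sgtin.id.onsepc.com"
    static String toOnsDomain(String epcUrn) {
        String[] parts = epcUrn.split(":");            // urn, epc, id, scheme, fields
        if (parts.length != 5 || !parts[0].equals("urn") || !parts[1].equals("epc")) {
            throw new IllegalArgumentException("not an EPC urn: " + epcUrn);
        }
        String scheme = parts[3];                      // e.g. sgtin
        String[] fields = parts[4].split("\\.");       // company prefix, item reference, serial
        StringBuilder name = new StringBuilder();
        // The serial number is dropped; the remaining fields are reversed, DNS-style.
        for (int i = fields.length - 2; i >= 0; i--) {
            name.append(fields[i]).append('.');
        }
        return name + scheme + ".id.onsepc.com";
    }

    public static void main(String[] args) {
        System.out.println(toOnsDomain("urn:epc:id:sgtin:0614141.000024.400"));
        // A real resolver would now issue a DNS query against this name to find
        // the network service that holds data about the tagged item.
    }
}

The point of the sketch is simply that the EPC Global design rides on ordinary DNS delegation, which is exactly why control of the lookup root matters so much in the VeriSign discussion below.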

The effect of this transition is to speed up what is becoming a truly legacy business process. Some saving is likely, but only on the order of one or two percent.

Because the ERP and SCM software of all the players is different, information flows most easily within a single company or even within a single division. To the extent that information crosses company boundaries (and you don't get real supply chain benefits unless it does), players using the EPC Global approach must rely on RosettaNet formats to allow their vertical silos to communicate with the vertical silos of their suppliers. Again, over a number of years: nothing really revolutionary, and savings of only a few percent.

3. A service grid based on a service-oriented architecture. This is revolutionary. The savings here could run 10 to 30%. Here you set up a service grid infrastructure similar to what IBM calls on-demand computing. You are effectively creating an information envelope that, using micro-services and mobile agents, can collect data about an object as it passes through the supply chain during its life cycle. The envelope collects a product history over the lifetime of the product. Because the envelope is independent of the companies who contract for its services, such companies can tell the keepers of the envelope what policy they want applied to their objects at which points of their life cycles.
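To make the envelope idea concrete, here is a minimal sketch, again entirely our own and not drawn from Mission Assurance's actual design, of an envelope that accumulates life-cycle events for a tagged object and applies a per-partner visibility policy set by the object's owner.

import java.util.*;

// Minimal illustrative sketch of an "information envelope": it logically travels
// with a tagged object, accumulates life-cycle events, and filters what each
// supply chain partner may see according to a policy set by the object's owner.
// All class and method names here are our own invention, not anyone's API.
public class EnvelopeSketch {

    record Event(String stage, String detail) {}

    static class Envelope {
        private final String epc;
        private final List<Event> history = new ArrayList<>();
        // policy: partner name -> life-cycle stages that partner may read
        private final Map<String, Set<String>> policy = new HashMap<>();

        Envelope(String epc) { this.epc = epc; }

        void record(Event e) { history.add(e); }

        void allow(String partner, String... stages) {
            policy.computeIfAbsent(partner, p -> new HashSet<>()).addAll(List.of(stages));
        }

        // Returns only the events this partner is entitled to see.
        List<Event> view(String partner) {
            Set<String> visible = policy.getOrDefault(partner, Set.of());
            return history.stream().filter(e -> visible.contains(e.stage)).toList();
        }
    }

    public static void main(String[] args) {
        Envelope env = new Envelope("urn:epc:id:sgtin:0614141.000024.400");
        System.out.println("envelope for " + env.epc);
        env.record(new Event("manufacture", "plant A, lot 42"));
        env.record(new Event("shipping", "container 7, cold chain intact"));
        env.record(new Event("retail", "received at store 113"));

        env.allow("carrier", "shipping");
        env.allow("retailer", "shipping", "retail");

        System.out.println("carrier sees:  " + env.view("carrier"));
        System.out.println("retailer sees: " + env.view("retailer"));
    }
}

The design point is that the policy lives with the envelope rather than inside any one company's ERP system, which is what allows visibility to be tailored partner by partner across corporate boundaries.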

The difficulties here are several. The mobile product virtualization that the service grid creates for its customers must still be interfaced with legacy corporate ERP systems. This interfacing problem is presumably solvable, but only by progressing through one company's ERP and product accounting system at a time. In some industries, such as pharmaceutical products, perishables, and certain kinds of consumer electronics, the savings are great enough to mandate near-term adoption.

Although the economic payoffs are likely huge, the difficulties standing in the way are formidable.
To become viable, the provider of the envelope will need software that can talk to just about every other piece of software known. An immense problem of IT integration is out there, because until the mobile agent envelope approach can communicate with enough pipes to enough corporate silos it isn't worth much. However, the provider of the envelope will have a disruptive technology promising significant economic return to every company willing to take the risk of using it.

There will be a struggle between the desire for cost savings and such privacy-related issues as a fear of business processes becoming too transparent to the users of such software. They may ask, "What are we exposing that could embarrass us?" This indeed may be a frequent question and worry. Many companies would choose to lose two or three percent out of fear of having to "undress" in front of those with whom they do business.

But eventually the large corporations with divisions will adopt it across divisions within the corporation and buy from suppliers who are willing to do the same. Once the first do, their competitors will have to follow.

Mission Assurance of course offers the edge-based envelope approach. They tell me that they have taken that idea farther than anyone else. Connecterra and OAT Systems, post-Savant, have some of the same general ideas. IBM is developing RF-ID middleware that at first will integrate its customers' tags with their legacy ERP systems but soon will allow tags to communicate with all parts of IBM's "on demand" computing environment.

Here is the intriguing conclusion. Imagine a fully mature Mission Assurance-like ecosystem. Could this function as a virtual business process glue? If so, it just MIGHT enable some emerging competitor of Amazon.com or Wal*Mart, say a company in India without the huge sums invested in legacy IT systems, to attempt to put together a truly virtual business operation, composed of outsourced pieces, held together by central financial and strategy staff but otherwise without all the other strategic infrastructure that the earlier players needed. Might this be the real end point of the RF-ID supply chain revolution five to ten years from now? We had the railroads, the oil companies, and then the telecom companies. Are the new 21st-century empires to be made up of the purveyors of the new business process glue?

How this Study Evolved

When we began this exploration last December, our only understanding of RF-ID was that these tags would be extraordinarily privacy invasive and that they would somehow have extraordinary impacts on supply chains by enabling the rooting out of all manner of inefficiencies. In beginning our conversations with Mission Assurance we had no idea how unusual and atypical its approach was. We had simply asked a VC subscriber for assistance in finding out more about RF-ID. His response was to introduce us to Mission Assurance. The interview that follows on pages 113 through 126 was completed in December and is an introduction not only to Mission Assurance, but to the entire field.

By January we were discovering that the field was in far more flux and far more complex than we had suspected. Our earliest attempt at a summary is the article expressing the metaphor of the three rivers on pages 65-67 above. Chapter 4 on supply chains, beginning on page 59 above, examines the players and uses in more depth. It examines the gamble taken by EPC Global and its alliance with VeriSign, which looks to us like a misguided step. What follows is our analysis as of early March 2004.

EPC Global in the Midst of "Mothra versus Godzilla"

Under Stratton Sclavos, VeriSign's goal has been to link up with Microsoft and cash in on every means of central control that it can grab from its NTIA-granted monopoly over .com and .net. ZD Net on February 25, 2004 captured very nicely one of the jewels handed VeriSign as a result of EPC Global's falling for the sales pitch described in detail in Chapter 4 above.

Meanwhile ZD Net's Face to Face column gives a snapshot of the immense stakes.
http://techupdate.zdnet.com/special_report/Stratton_Sclavos.html?tag=tu.arch.link
"Imagine a directory service and data bank that manages and routes domain name traffic, RFID tag information, voice calls, and digital authentication services. VeriSign is on a mission to become the Internet infrastructure utility that everyone else plugs into. By 2010, VeriSign Chairman and CEO Stratton Sclavos hopes to have at least half of all voice and data network interactions passing through his company's services.

The company recently announced a pact with Microsoft to integrate its security services into Windows, and is making deals with major telecom carriers for its communications services. VeriSign also was hired by EPCglobal to run a new RFID-based directory, and the company just introduced the Open Authentication Reference Architecture (OATH), a proposed open framework for authenticating users and controlling access to corporate networks. (See David Berlind's commentary on OATH.)

But, VeriSign's path to become a dominant Internet utility is not without significant roadblocks. Sclavos is at odds with many of the Internet pioneers and governing bodies in pushing for a more commercialized Internet infrastructure. "A transition needs to occur, going from purely academic and public sector funded to something that is now a mix in between," Sclavos said in our interview. "The statistics are saying it's going commercial in a big way. It is the foundation for our economy in the next couple of decades. We are at that incredible pivotal point where we have to make a transition to commercial entities running this and having the ability to create innovative new products and get paid for them."

If the Site Finder (a service that redirected mistyped URLs to VeriSign’s Web site) controversy with the Internet Corporation for Assigned Names and Numbers (ICANN) is a hint of what's to come, commercializing the Internet's infrastructure will be long, drawn out ideological battle.”
COOK Report: One of the disasters waiting to happen from Sclavos' actions is that a company that purports to be all about security and trust is aligning with the least secure operating system in existence under the guise of keeping the user happy: users want convenience above security, so that is what we will give them. Amazingly, this came in the context of Sclavos warning that the bad guys are out there and going to harm the commercial Internet! Surely you jest, you are thinking.

Well, consider the following from a Feb. 25 ZD Net interview (http://zdnet.com.com/2100-1105_2-5165305.html): One of the problems, Sclavos said, is that technology companies haven't made security simple enough, forcing customers to choose between security and simplicity.
"End users will always choose ease of use over security," he said. "We better just comply." Sclavos cautioned that the cost of security issues involves not just damage from attacks but also the fear that is engendered. "It is delaying and slowing the adoption of new technologies," Sclavos said.

What folk are not likely to learn until way too late is that VeriSign has possibly the "slickest" legal and PR team on the net. Sclavos is indulging in gross oversimplification and using the same language that he used five months ago with Site Finder. The only thing being accomplished here is to put money into VeriSign's pocket and to take advantage of the ignorance of non-technical users.


The Bush administration's ignorance of the net has bitten us with broadband, and it will bite us again by allowing this entity that calls itself VeriSign to install itself in as much of a monopoly over net infrastructure as it can get away with. Bigger is better to the Bush White House, and if VeriSign wants to become the Ma Bell of the Internet, it will be allowed to have at it. Lauren Weinstein put it well: Now we're faced with a "Godzilla vs. Mothra" battle, where the Internet -- and its users -- will likely take the brunt of the collateral damage. See http://www.interesting-people.org/archives/interesting-people/200402/msg00279.html

Bringing things back to the subject at hand, what is sad is that EPC Global fell for VeriSign's salesmanship. On pages 69 through 77 above we point out in detail some of the alternatives that were not pursued and some of the possible consequences of the path chosen. While we refuse to predict with confidence, there is certainly the possibility that the VeriSign embrace into which EPC Global wandered could become a major setback to the adoption of RF-ID technology.

How Does One Understand the Market for RF-ID in Supply Chains and the Architectural Implications of Solutions Being Offered?

While, as readers will see, we have talked in great detail with Mission Assurance, we tested their view of the world in interviews with Keith Dierkx on pages 78-86 and Brough Turner on pages 87-91 above. Keith explains the use of RF-ID in transportation management. Dierkx points out some significant findings. He asks: is RF-ID about the ability to acquire, secure, distribute and share information more effectively? But even before you make your judgment about architectural approaches to RF-ID, you have to ask whether the corporate systems exist to use RF-ID information intelligently. And then management must ask: when I have all this data, how much of it is useful information that will actually allow me to improve my business processes?

Brough Turner explains the difficulty of integrating RF-ID into existing ERP and web-services-based supply chain management systems: for many companies these systems are built on bar code technology and function well enough that adding radio tags produces insufficient return on the investment to justify taking the step. Finally in Chapter Seven, after having considered Mission Assurance in great detail in Chapter Six, we interview Terry Retter of PricewaterhouseCoopers. We ask for Terry's overall assessment of the field and solicit his comments in particular on what Mission Assurance is proposing to do. Can it be done? What is the market for it? We conclude by feeding Retter's assessment back to Mission Assurance for comment and offer a detailed tour of its architecture.

Why All the Emphasis on Mission Assurance?

In 12 years of publication we have never come close to going into as much detail on a single company or technology as we do here with Mission Assurance. We have burrowed into them for several reasons. What they are offering is very, very different from everyone else. We hate to trot out the tired old cliché of paradigm change but don't know how else to describe it. The majority of the text in this report is not the interviews but rather is taken from eleven weeks of discussion on a private mail list of nearly 60 people. More than 30 actually wrote comments to the list. (See pages 6 and 7 above.) We used this forum to probe Mission Assurance's ideas, which we put out for scrutiny to a panel of bright people. When we probed deeper, we got more detail and solid answers, not brush-offs. We have been probing for 8 weeks and have not run out of substantive answers from them. At the very end we put our conclusions in front of Terry Retter and asked him to judge. Terry did so in a very useful way on pp. 138-146 below.

What Wedge Greene and his comrades are proposing is an amazing and very complex distributed service grid based on the underlying technologies of Java and Jini. It is a technology that not only passes data between corporations but also enables a highly tailorable form of distributed policy. It is very complex; hence the extreme detail we have gone into to show what it does and how. It encompasses flows of information that, as far as we can ascertain, no one else is attempting to deal with. A major reason for this is that standard industry observers correctly say that corporations won't play with processes that force them to reveal, and make really transparent to everyone, their heretofore private business processes. This position has been predominant in the face of technology that, if fully implemented, could have major economic consequences. Mission Assurance is suggesting that its service grid is so highly tailorable that users can control who sees what information. The suggestion is that, thus reassured, users will adopt the system.

Sensing that there is a huge economic payoff lurking here, should the claims of Mission Assurance hold up, we have homed in on the subject more than ever before. We end this summary with the following assertions. Mission Assurance currently is mainly a set of well-thought-out ideas arranged into an elaborate design for which funding is being sought. Currently we have no financial relationship whatsoever with Mission Assurance. They are not even subscribers. If in the future they offered us money in return for a license to use this March-May issue in their marketing and education efforts, we would accept. However, we have not been for sale before and are not for sale now. And so far they have not offered anything for licensing rights.

They may fail to get funded. They may fail to build a working system. They may fail to come up with a viable business model. Because they have to be interested in their own success, we have made a point of seeking independent opinions. In addition to subjecting their claims to the scrutiny of our mail list, we have talked individually and at length with several of the people on the list. Can this be done? Are these folk full of hot air?

These people tell us that they think what Mission Assurance is proposing to do can indeed be done. Therefore we conclude that what they are proposing to do is, as far as we have been able to ascertain, feasible. We have concluded that it has been worthwhile to discuss their concepts at such length because, no matter what the eventual outcome, they offer a very intriguing glimpse into what Moore's Law and advances in radio technology make possible. If Mission Assurance doesn't succeed, it probably won't be too long before someone else succeeds with something similar. Finally, these concepts appear to be highly useful in understanding how the use of RF-ID in supply chains could have consequences far beyond the more mundane issues of inventory taking and shipment tracking that mark the more simplistic discussion in the trade and national press.

Keith Dierkx expressed the issues very well on page 78 above: Is this monumental stampede about RF-ID really about RF-ID? Or is it about the ability to acquire, secure, distribute and share information more effectively? But even before you make your judgment about architectural approaches to RF-ID you have to ask whether the corporate systems exist to use RF-ID information intelligently. And then you must ask, when you have all this data, how much of it is useful information that will actually allow you to improve your business processes? I would suggest that with RF-ID we may get re-leveraging of supply chain software solutions.
Mission Assurance shows one interesting way. We are sure it is not the only way.

Control Versus Innovation in Off Shoring

In the remainder of this summary we introduce two other ideas that we consider worth thinking about. One is the issue of what is offshoreable and what is not. The other is trust in computer networks.

From what Arcady Khotin tells me about his offshore programming business in St. Petersburg, Russia, the work that he gets is generally development of modules that are shipped back to the company that pays for them. The company may be American, Scandinavian, or from anywhere else in the world.

No matter where it is, the entity that pays maintains control by segmenting and then integrating the modules that result. The possibilities for leverage are quite interesting, however.

A few years back there was a small new company that gained a significant share of a major global market. As they grew they hired 3 people full time at Arcady's shop, then 8, 12 and 15, and ultimately I think they were employing about 30 of Arcady's 100 employees. When they were employing 30 folk in Russia they had no more than about 15 or 20 of their own people doing all the integration, final development, testing and marketing. Indeed, if I remember correctly, they had maybe another two dozen people writing other parts of their code in Moscow. The point is that the majority of the (contract) employees of this well-known company were in Russia. The company always had more contract employees than direct employees.

There are all sorts of possibilities in software offshoring that never get any play even in the trade press. Sure, you had better know your code and be meticulous, but I think people skills are critical. In the early 80s Arcady wrote code for the Russian space shuttle. Given the state of his hardware he had to write in assembler and even machine code. But he wrote, and he chafed under the Soviet bureaucracy, where there was precious little incentive to accomplish anything.

Motivating and delegating and organizing, and planning and selling and risk taking and enthusiasm, are powerful skills and a lot harder to measure than just writing code. Arcady saw an article in Boardwatch about Dave Hughes, and when he heard Dave was in Moscow in February 1993 he took the overnight train from St. Petersburg in order to have a two-hour meeting with this weird cursor cowboy. He was ready to go anywhere and do anything to learn.

What I conclude from thinking about this is that integration and organization and packaging are not going to be as outsourceable as writing code.

Trust in Computer Networks

I have come to the conclusion that it is necessary to end this Executive Summary with a discussion of trust and trust issues in electronic or Internet-based voting. When I began this exploration in early December, I did not plan to include any discussion with Ed Gerck on trust. He was on my private mail list only because, for the first time, I offered access to the list to all our subscribers as a benefit of subscribing. About one third of our subscribers took me up on my offer.

A long discussion on trust ensued. I have published the results on pages 92 through 103 above. It ended with the following commentary on the use of a paper ballot in voting machines. Although it goes a bit outside the scope of this issue on the real time corporation, supply chains and RF-ID, I include it here, and have gone so far as to editorialize on it, because it is quite possible that the failure in public policy we are experiencing will push us into future electoral disasters, and because it goes to the heart of the value, or lack of value, of computer technology in other fundamental activities in our society. Read on.

Trust Requires Corroboration by Independent Channels

Gerck: To solve this question in electronic voting, some advocate printing a paper copy of the ballot, which the voter can see and verify is identical to the ballot she intended to cast, and then sending the paper copy to ballot box A while an electronic copy of that same ballot is sent to ballot box B. The idea is that ballot box B could be tallied quickly while ballot box A would be used as a physical proof for a manual recount. Such a suggestion is oftentimes advanced as the sine qua non solution to voting reliability in electronic voting.

But what makes the introduction of a paper ballot special is not the fact that it is paper instead of bits. It is the fact that the voter is actually casting his vote twice. We now have two independent channels of information for the ballot, one from the terminal as source B, the other one from the printer as source A. So we have N = 2.

In other words, this design provides for two outputs: ballot A and ballot B. However, in the event of a discrepancy between the two, no resolution is possible from within the system. Technology provides no answer in this model. Thus, between two conflicting outputs, each of which is the result of a trusted system, the decision of which “is correct” must be made independently of the system, by policy.

This means that such a paper/electronic system does not work when it is needed most, i.e., when the system reveals that it is not a "perfect clerk."

The situation can be corrected with a better model, as I have discussed elsewhere. The solution considers not only machine-machine communication channels but also human-machine communication channels because the voter can act as a source and as verifier in more than one part of the system. Further, human-human communication channels must also be considered because we do not want machines to have the potential to run amok, unchecked.

Information Theory can be used to describe such communication channels and, as previously noted, the concept of qualified reliance on information can be introduced as a formal definition of trust in order to rate such channels in terms of providing effective proofs when all operational factors (collusion, hackers, buffer overflow, bugs, etc.) are taken into account.

As a result, the only provable solution to increase reliability in communications (e.g., the communication between the voter as a sender and the ballot box as a receiver) turns out to be to increase the number/capacity of independent channels until the probability of error is as close to zero as desired (direct application of Shannon’s Tenth Theorem in Information Theory). To be complete, the solution considers not only machine-machine communication channels but also human-machine and human-human. Thus, if an electronic system is able to provide N proofs (human and machine based), these N proofs for some value of N larger than two will become more reliable than one so-called “physical proof” even if this one proof is engraved in gold or printed on paper.

I note that the same applies to other communication protocols, not just voting. Public voting is, however, one of the hardest, because it needs to be anonymous, secure and independently verifiable. Editor: Gerck's quote above comes largely from Ed Gerck, C. Andrew Neff, Ronald L. Rivest, Aviel D. Rubin, Moti Yung: The Business of Electronic Voting. Financial Cryptography 2001: 243-268. Springer Verlag. Note that this is a compilation of five separately authored short papers presented as part of a panel discussion.
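Editor's illustration (our own simplification of the argument above, not Gerck's formal treatment): if each of $N$ independent channels misreports a given ballot with probability at most $p < 1$, then

\[
P(\text{all } N \text{ channels wrong}) \;=\; \prod_{i=1}^{N} p_i \;\le\; p^{N} \;\longrightarrow\; 0 \quad (N \to \infty).
\]

With $p = 0.01$ per channel, two channels already push the joint failure probability below $10^{-4}$ and three push it below $10^{-6}$; this is the sense in which the error can be made as close to zero as desired by adding independent channels.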

Part Two Voting Train Wreck: an Editorial

As I compiled and edited the long trust discussion, I read Ed’s short summary and suddenly the light went on. I understood what he was talking about. It seemed rather obvious.

This is my description of a handful of well-meaning people, people who nonetheless are actors in what is shaping up as the greatest technology policy failure of my lifetime. There is also another side to the story, a side that is being systematically underreported by the technology press. The problem: expensive and faulty voting systems, with no solution in sight. The solution, according to Dr. Peter Neumann, Dr. Rebecca Mercuri and others: either vote using paper ballots or do not vote at all. I have a problem with this solution.

It goes to the heart of the value, or lack of value, of computer technology in fundamental activities in our society. Even though I see no basic reason why computers cannot be used to tally votes, my problem with Neumann, Mercuri and others is not one of technology or mathematics or algorithms. They are the computer scientists after all. My problem is one of method and procedure and the reasoned applications of scholarly research when presented with new ideas. Their minds are not open to a solution that would not use paper in computerized voting (aka DRE) or, heaven forbid, use the Internet to vote. When someone like Dr. Ed Gerck challenges their world view, with qualified arguments summarized in this Executive Summary, we hear not much more than “it’s impossible on its face.”

I decided to enter into discussion with most of the principal actors in order to lay out their basic positions by showing what the beginning of a dialogue might look like. I have no intention of shifting the focus of the COOK Report to electronic or Internet voting. However in view of my own personal knowledge of what was happening, and the compelling human story that is developing, I have found it impossible not to take a public stand on the basic issues of scholarly methodology. I will watch and report on the extent to which they and others as well will make the effort to look more closely at these complex issues.

This voting train wreck comes in five parts:

Domination of the Electronic Voting Machine Debate By Dogmatism

A Brief Flashback – The Expertise of Eva Waskell

A Proposal to Maintain the Responsibility of the Scholar and Scientist

Ed Gerck’s Responses to his Critics, and

Ted Selker: An Academic Who Has Done his Job

Given both my observation of the train wreck that we have had with elections during the past five years and my knowledge of Ed Gerck, Safevote and Eva Waskell’s four years of work for Safevote, I have a problem with the way in which Peter Neumann, Lauren Weinstein, Rebecca Mercuri and Avi Rubin have managed to use their positions as academics and respected scholars to dominate the public discussion. They are, of course, entitled to use their scholarship to advocate any positions they choose and can defend with due intellectual rigor. But they also have responsibilities. They are generally expected to read a wide range of primary source material and to discover new sources if possible. They are expected to study, to debate, and to think. They are expected to probe and keep an open mind until independently verifiable facts move in to drive their analysis to a defensible conclusion.

Domination of the Electronic Voting Machine Debate By Dogmatism

These four individuals have done us all a great service in bringing their intellect to bear on exposing the risks of the pell-mell rush to DRE voting machines. They have correctly attacked the problems behind Sequoia and Diebold and ES&S. But they have become dogmatic in their approaches to the issue and have taken public positions maintaining that it is impossible, for anything but the most trivial computational module, to prove to all voters that the code that is supposed to be running in the voting machine IS actually running there, and that nothing else is. Further, they maintain that anyone who claims to have solved the Internet voting integrity problem is seriously misguided, in the face of untrustworthiness everywhere throughout the Internet.

Thus, if a person steps forward who, through means that have not occurred to these four, says "I have a solution to these problems," they will not hear what is suggested. They have said as much repeatedly. They are so publicly committed to their positions that they will consider no other views or solutions. They maintain that they are computer scientists, but if someone floats a paradigm-shifting idea, they are not willing to hear or analyze it. And they will use the prestige of their positions and credentials to cast aspersions on ideas that they find uncomfortable.

My problem with Neumann, Mercuri, Weinstein and Rubin is not one of technology or mathematics or algorithms. They are the computer scientists after all. I am not. My problem is one of method and procedure and the reasoned applications of scholarly research when presented with new ideas. Their minds are not open to someone like an Ed Gerck who challenges their world view. If they can dominate public discussion, his ideas will never be heard above the cacophony. For the past five years I have watched this occur with great dismay.

The first three drove their stakes in the ground early. (Avi Rubin became prominent on the voting issue only in 2001.) In December 2000, in Risks 21.14
http://catless.ncl.ac.uk/Risks/21.14.html they concluded:

"In the wake of the recent Presidential election problems, the knee-jerk reaction of "gee, can't we modernize and solve all this with electronic and/or Internet voting?" is predictable, but still wrongheaded. The shining lure of these "hype-tech" voting schemes is only a technological fool's gold that will create new problems far more intractable than those they claim to solve."
"Peter Neumann, Rebecca Mercuri, and Lauren Weinstein" (Risks 21.14)
See also
http://www.mail-archive.com/…/msg00049.html

And by January 19, 2001 Safevote is safely tarred on Privacy Watch as just another “election company.” http://www.pcworld.com/resource/article/0,aid,38262,pg,3,00.asp
The battle over Internet voting is far from being won by the election companies: "There will be people who will plow forward in defiance of common sense and be the poster children for what doesn't work," said Lauren Weinstein, a co-chair of People for Internet Responsibility who has coauthored anti-Web-voting articles with Neumann.

Safevote's Gerck says that he believes key requirements, including voter privacy and election integrity, "can be guaranteed in a system using a protocol that requires both technology and human intervention," but Weinstein vehemently disagrees. "Internet voting, it's not even a close call: It's a garbage concept," Weinstein says. "From a technical standpoint it's easy to say it's impossible to fulfill the requirements you need for an election (using the Internet)."

Editor: The position these highly respected people took was that Gerck was just another DRE vendor pushing a flawed technology. Now, my doctoral training is in Russian history, not mathematics, nor physics, nor computer science. I will not be so foolish as to debate these people on technical grounds. I will debate them on the grounds of scholarly process.

Furthermore, although I have known Dr. Ed Gerck personally for five years, I would not make a public protest based just on his word versus theirs. Ed and Eva Waskell tell me that these folk have never seriously examined Ed's technology design, dismissing it out of hand in the press and in face-to-face forums at conferences and hearings. Eva, who has given the last 20 years of her life to these issues, told me after working for Ed for six months: 'What he is doing is real. It works. It is fundamentally different from any other approach. I have read all Ed's work and the documents that support it, and I fully understand at a technical level what is going on.'

These words from a person so dedicated were difficult for me to ignore. Approximately six months before the Florida recount debacle, I contacted Peter Neumann via email, for I knew that he had known Eva for many years, and I believed that if this person, whom he knew well and presumably trusted, was convinced that this was a new and different solution, Peter would undertake a serious study of Ed's position.

A Brief Flashback

On December 1, 1999 I had made an introduction of Eva Waskell to Ed Gerck. Up to that point both were in a private email discussion with Larry Lessig about his newly published Code and Other Laws of Cyberspace. Before Eva responded to a comment about voting, neither person knew that the other was deeply interested in the application of computers to voting. Ed hired Eva almost immediately.

But why, you may wonder, do I care about Eva? I care because we have here a question of broad expertise lacked by the academics who are monopolizing the discussion. I first learned, in 1994, of her interest in the risks of allowing proprietary black boxes to count votes in elections. There is a serious problem. In the 20 years since Eva became involved, so many layers of new technology have been pushed onto the public scene that it becomes very difficult to get some sense of the interlinking, and therefore the reliability, of the whole. Never mind the issue of electronic voting at a walk-in polling place versus casting one's vote remotely using the Internet, or using the distributed nature of the Internet to run interlinked processes serving as checks and balances on the elections process. These are issues that, as readers will see, are not clearly demarcated in the comments of Neumann and Mercuri.

Policy experts must ask themselves who is an expert on the way elections have been held in the past. What about the entire system of the election process as it has been done in this country for the past fifty years? Who is saying that elections in the past have been perfect? They have not, of course. Eva knows that because she has spent 20 years in the trenches observing the totality of all procedures at the local, state and national levels. She has experience as a poll worker and through research, writing, investigative reporting, public speaking, grassroots organizing and election consulting. She is one of a relatively small handful of people who have credentials in the totality of the electoral administrative process, a process that Avi Rubin glimpsed for the first time in his service as an elections judge in Baltimore on March 2, 2004. It is not clear that the opponents have this experience. The breadth of her expertise should not be ignored, because it gives her unique qualifications to make judgments about what Ed Gerck is doing. Computers? She wrote code in assembler in the 1980s.

A quick review of some of that experience is in order. In May 1985, while working as a stringer for The Economist, she accidentally stumbled onto a lead regarding a 1982 court case in Elkhart, Indiana in which the plaintiff alleged that the counting and the certification of the votes were "false and fraudulent." The case caught her interest because the court ordered the vendor to turn over the vote counting software to the plaintiff for inspection.

Her research on the lawsuits filed against the vendor Computer Election Systems (one of the "ancestors" of ES&S) served as the basis for an article by David Burnham that appeared in the New York Times on July 29, 1985. She was quoted in that article: "Even when local officials learned of the problems [with computerized voting systems], little apparent effort was made to correct them." Over the next eight years, she read thousands of pages of transcripts from this lawsuit and others, and talked at length with the attorneys and computer experts involved. This education, along with hundreds of hours of interviews with election officials, formed the core of her knowledge base.

During the remainder of the 1980s, she went on to field study as a consultant to the Texas State Attorney General's office in their investigation of a Dallas city election. This election was audited by Terry Elkins, the first person ever to perform a citizen's audit of election results. (See David Burnham's September 23, 1986 article on page A26 of the New York Times, "Texas Looks Into Reports of Vote Fraud.") Eva worked closely with Elkins for many years.
In 1993 she helped compile, edit and distribute a Source Book on Computerized Voting, a joint project of Computer Professionals for Social Responsibility and ELECTION WATCH, an election watchdog group based in southern California and founded by Mae Churchill.
She helped organize the first conference on computer security in elections, which was held at Boston University in 1987. As a result of this involvement she met Peter Neumann, who published some of her work in the ACM Software Engineering Notes. From 1989 to 1999 she served as the volunteer director of the Elections Project for the Washington, DC chapter of CPSR. Mae Churchill and Marc Rotenberg led Rebecca Mercuri to Eva in the 1989-90 time frame. Rebecca and Eva kept in touch and shared information with each other for a decade, until some months after Eva went to work for Ed Gerck in December 1999.

By 1994 Eva had also gained much experience in local Florida elections. In May of 1994 Eva left four years of full-time employment at TeleStrategies, and from that point until going to work for Ed in December 1999 she devoted herself full time to working with a nationwide network of election activists.

During this time, I was a friend who kept in close touch and was continually apprised of her efforts. I think it is fair to say that she was well known to the vendors and generally despised by them. At both the state and federal level she knew the entire national election infrastructure very well, including the widely respected Roy Saltman at NIST. I have never known an individual who dedicated herself with such total devotion and single-minded integrity to the public good, in this case to preserving the integrity of elections.

There is no way that Eva would ever have sold her soul to just "another DRE vendor." After all, at this point she had dedicated 16 years of her life to warning of the risks of black boxes being allowed to count votes using proprietary software. She would never have gone to work for a computer vendor whose approach was one and the same as that of the companies she had fought for so many years. She interviewed Gerck as much as, if not more than, he interviewed her. They found each other acceptable.

At about this time I contacted Rebecca Mercuri by email and told her that in my opinion Eva would not have gone to work for Ed if he were just “another DRE vendor.” I pleaded with Rebecca to study Gerck’s ideas with care. Then in the late spring of 2000 I emailed Peter Neumann and asked him out of his long knowledge of Eva and her dedication to the risks of DREs to sit down with both people and try to grasp how Gerck differed from the other DRE vendors. He politely declined. In December 2001 I met Avi Rubin in person at the UseNix LISA Conference in San Diego. I told him what I have outlined here. He declined as well.

My issue is one of what, if any, effort these people have made to understand the approach of Gerck and Safevote that they were lumping together as “just another DRE vendor.” In email correspondence they have told me that they believe they did understand what Gerck was saying and that they believed his work was simply not credible on its face.

I have a problem, not with the technical and mathematical substance of the issues, which I do not claim to be qualified to decide, but with the methodology being used by the four: lump Gerck together with the other DRE vendors and dismiss him. In particular I have a problem with Peter Neumann and Rebecca Mercuri, who have known Eva for many years. If they had kept the open and cautious minds of true scientists, they would have stopped and asked themselves what it meant when a person who was one of the very first explorers of these issues, with a 16-year record of single-minded dedication to the dangers involved, appeared to endorse just "another" DRE system. Any scientist with an open mind would have said, whoa! How can this be? They hopefully would have called Eva and asked her what happened.

There appear to be two possibilities here: either Eva had sold out or she had not. If she had sold out, then what on earth would that mean about the dedication of her preceding 16 years? A scholar trained to weigh evidence carefully and make a decision on the basis of information that he or she had personally verified should have confronted Eva and interrogated her until it became clear whether or not Eva had 'gone over to the enemy.'

Of course there are two possible outcomes here. The easy one would have been verification that Eva had betrayed what she had dedicated herself to so selflessly for so long. Neumann's and Mercuri's suspicions would have been confirmed, with a triumphant case closed.

The other possibility was far more troubling. What if Eva had not sold out? What if Gerck's technology were real and it worked? That would mean that a lot of lines that Mercuri and Neumann were drawing in the public sand would have to be redrawn. Public assertions made as to the impossibility of doing what Gerck claimed would have to be retracted. A very unpleasant prospect. Furthermore, confronting Eva and Ed in extended discussion and debate would itself be unpleasant and fraught with unforeseeable outcomes.

I have not looked forward to the prospect of having to throw down the gauntlet in front of four very prominent and highly respected people. But I have done that now. I have corresponded with all four. It has not been pleasant nor did I expect it to be. But in doing so, I have learned a great deal. I suggest that it is time for Peter and Rebecca to do the same with Ed and Eva.

A Proposal to Maintain the Responsibility of the Scholar and Scientist

I began by sending Peter a draft of my proposed editorial on Wednesday night and on Thursday morning I sent him a second message explaining that my grounds for challenging him were based on the methodology of his scholarship.

I said: “I well understand your reluctance to trust computers in voting of any kind and absolutely agree that considering where we are now, [with both sides sharply polarized and with insecure DREs scheduled to record the majority of the votes cast in November] we'd be better off with the old analog paper based machines. Therefore I am pleased that you and Lauren and Rebecca and Avi are making life miserable for the Diebolds of the world. You should.”

“But I have a problem nevertheless with the objectivity of your scholarship. In Risks you publish one of the most famous mail lists on the globe. And where do most of the risks come from? From the computer doing its own thing under conditions that were not anticipated and without a human in the loop to intervene when trouble occurs. Computer scientists have not designed their systems to take human behavior into account. They are maintaining, in effect, that they cannot be so designed.”

“And what does Gerck say on my list? "The solution considers not only machine-machine communication channels but also human-machine communication channels because the voter can act as a source and as verifier in more than one part of the system. Further, human-human communication channels must also be considered because we do not want machines to have the potential to run amok, unchecked." “

Gerck is not a dumb man, nor is he a charlatan, Peter. Have you ever spent more than a few minutes in serious conversation with him? I have. . . . Almost four years ago I reminded you that Eva was working for Ed and pleaded with you, out of respect for your many years of knowing her, to take a closer look. You responded with a politely worded refusal.

I am reminding you again today that one of the earliest and most dedicated activists in this field still says that there is a computer/human-based answer and that the technology that you have never seriously examined works. I believe that Eva would never go over to the "enemy," Peter. Therefore, until you can say that you have sat down with both Eva and Ed for as long as it takes to get to the bottom of this, concluded either that it does work or that it fails, and done so in a documented scholarly essay, I am prepared to say in public that I believe you have failed in your most fundamental responsibilities of scholarship.

To his credit Peter responded on March 4: “Come on. I have read Ed's stuff. It proves nothing. I have spent various times at various meetings with Ed. We seem to agree that we disagree on some things and agree on others. . . . . I will be very happy to take another look at what Ed and Eva are doing.”

I find Peter's response commendable. But in view of the assertions he has made in his follow-up conversations with me, views which I present below with Ed Gerck's comments, there needs to be not just another look but a real, in-person dialogue if there is to be any reasonable chance of an outcome that makes progress toward a resolution of the issues.

As a suggestion, if Ed and Peter agree to meet for a face-to-face discussion, I would gladly publish a recording of the discussion. They could sit down in private and engage in a dialogue that would give Peter a chance to probe and test Ed's ideas. Each individual could make his or her own recording of the dialogue. Each could write up his or her own summary of what was learned, with the understanding that I would publish the two resulting summaries and that, should they be unable to do this because they remained in serious disagreement, I could publish from the tape a transcript of what was said. If Peter wished to ask Rebecca Mercuri to participate, he would be welcome to do so. I am sure that Peter and Ed would agree that the future of free and honest elections in the United States deserves no less than an extensive effort by all parties to reach agreement.

Ed Gerck's Response to His Critics

I asked Ed to respond to comments sent to me by Rebecca and Peter on March 5.
Rebecca Mercuri: I have read Ed Gerck's work and have iterated with him over email on numerous occasions. His theories are flawed, on various counts. Peter has attempted to explain to you the problem with Gerck's work, but I will reiterate that Gerck has never supplied a strong mathematical proof (this is quite different from theoretical musings, it is a FORMAL proof) for his crypto-based voting system.

Gerck: My papers describe an information-theory-based voting system, both for electronic voting and for Internet voting. In both cases, formal proof was already provided by Shannon 50 years ago. The fact that Mercuri mentions a "crypto-based system" just underlines a basic difficulty she has in reading the material.

Mercuri: As well, for a voting system to be acceptable for use in a democratic governmental election, it must be transparent and hence completely understandable by all citizens (let's assume a U.S. high school education).

Gerck: When systems are implemented using what I propose, anyone can verify that the system is working as specified with a confidence level (statistically defined) as close to 100% as desired. If the system is not working as specified for its voter-verified records, its own checks and balances shall indicate a malfunction. This is much better than what current paper-based voting systems provide, where no one can have any confidence level that the system is working as desired.
That said, the principles behind what I propose are very intuitive and the Moguls in India already used them successfully 500 years ago. They used at least three parallel reporting channels to survey their provinces with some degree of reliability. Their additional efforts were paid for by the improved results.
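Editor: as a rough worked example of the parallel-channel arithmetic (ours, not Gerck's), suppose each of three independent reports is correct with probability $p = 0.9$. The chance that a majority of them is correct is

\[
3p^{2}(1-p) + p^{3} \;=\; 3(0.81)(0.1) + 0.729 \;=\; 0.972,
\]

versus 0.9 for a single report, so the extra channels buy reliability even though every individual channel remains imperfect.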

Mercuri: So this mathematical proof must not only be correct and complete, but there must be an independent way for all voters to assure that the code that is supposed to be running in the voting machine IS actually running there, and that nothing else is. Since this is theoretically impossible for anything but the most trivial computational module.

Gerck: "All voters" can be assured that this is true in the systems that I propose, to the confidence level required, by principles already well-known to the election community. Roy Saltman published the mathematical analysis while he was at NIST, in 1975. The mathematical basis was supplied by Shannon 50 years ago.

Mercuri: It will probably take Ed Gerck a LONG time to come up with this proof, but Peter and I (and others) have encouraged him to do so, and we will look forward to seeing his proof of correctness and transparency when he is able to accomplish this task.

Gerck: The proof was given by Shannon 50 years ago and exemplified in a series of papers by Roy Saltman almost 30 years ago. My work was, actually, much simplified and made shorter in time by following what these pioneers have done.

Neumann: Anyone who claims to have solved the Internet Voting Integrity problem is seriously misguided, in the face of untrustworthiness everywhere throughout the Internet.

Editor: Peter is missing the issue here: there are two levels, electronic voting at walk-in local polling places and the use of the Internet in polling and in administrative collection of data. He has just dismissed it with the assertion that on its face it cannot be done. He needs to give his own proofs rather than flat assertions. The quality of these exchanges goes to the heart of my concern about the methodology in play.

Gerck: Neumann, Mercuri, Rubin, Weinstein and others say that one needs to assure reliability for all parts of a system, including hardware, OS, and voting software in all hosts, for all possible failure modes, in order to assure reliability for the system. Which, of course, is impossible. Hence, a solution is impossible, no matter how it is proposed.

This is the blindfold that prevents everyone who puts it on from seeing a solution. Reliable systems can be made of unreliable parts. This principle was known to the Moguls in India 500 years ago and was mathematically modeled by Claude Shannon some 50 years ago. And this is crucially important because there is no other way to build a reliable system.
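
Editor: The "reliable systems from unreliable parts" principle can also be shown with a few lines of arithmetic. The sketch below is again only my own illustration with assumed numbers, not Gerck's design: if each of n independent reporting channels records an event correctly with probability 0.9, the chance that a majority of the channels is wrong falls off rapidly as channels are added.

from math import comb

def prob_majority_wrong(p_correct: float, n_channels: int) -> float:
    """Probability that more than half of n independent channels err."""
    p_err = 1.0 - p_correct
    return sum(comb(n_channels, k) * p_err**k * p_correct**(n_channels - k)
               for k in range(n_channels // 2 + 1, n_channels + 1))

for n in (1, 3, 5, 7):
    print(n, round(prob_majority_wrong(0.9, n), 4))
# 1 -> 0.1, 3 -> 0.028, 5 -> 0.0086, 7 -> 0.0027

This is the sense in which adding unreliable but independent parts can yield an arbitrarily reliable whole.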

Anyone with the required technical background, who would seriously consider reading what I have ALREADY published (there is no need for new material) should be able to understand that there are indeed: (1) paperless solutions for electronic voting; and (2) solutions for Internet voting. A search on Google should also provide replies that I have ALREADY given in public lists to ALL the questions posed so far by Neumann, Mercuri, Rubin, Weinstein and others.

For those interested, here are the main references:

1. DRE - ELECTRONIC VOTING: In August 2001, I presented an invited paper at the WOTE'01 conference in Tomales Bay, California. The conference was about trustworthiness in voting systems. My paper was on the Witness Voting System, a provable, reliable solution for voter-verified electronic voting (DRE), providing integrity and anonymity proofs, that does not use paper. Peter Neumann was present and had no questions. The conference proceedings are available electronically at http://www.vote.caltech.edu/wote01/pdfs/ ; look for gerck-witness.pdf
2. INTERNET VOTING: Published in 2002. Please see Ed Gerck: Private, Secure and Auditable Internet Voting, chapter in "Secure Electronic Voting: Trends and Perspectives, Capabilities and Limitations", Edited by Prof. Dr. Dimitrios Gritzalis, Kluwer Academic Publishers, 2002, ISBN 1-4020-7301-1. See also http://www.wkap.nl/prod/b/1-4020-7301-1?a=1

Now, critics have suggested that all aspects of such systems be understandable by voters with a high-school education, so that voters could know why they are relying on such systems. The answer has already been given above. When such systems are implemented ANYONE can verify that the system is working as specified with a confidence level (statistically defined) as close to 100% as desired. This is much better than what current paper-based voting systems provide, where NO ONE can have any confidence level that the system is working as desired. Election systems have been the subject of fraud in this country for 200 years.

When I asked Ed to summarize his view of the current state of the dispute, he wrote:
"The whole issue of electronic-voting and Internet -voting has been addressed by Peter Neumann, Rebecca Mercuri, Avi Rubin and other critics with a complete determination to arrive at a negative answer. That any attempt undertaken will fail, since by their own definition all needed infrastructure is insecure. And that anyone who claims to the contrary is incorrect by principle. Consequently it should be no surprise, that my technology fails to meet their requirements, if by their definition, it is impossible to do what I am in fact doing.

I welcome, however, their exposé of faults in current electronic voting systems and their stabs at Internet voting. Trying to cure all the faults (which would take them a LONG time to find) will nevertheless fail to lead to a secure electronic voting or Internet voting solution.

The answer is not to demand perfection for all parts. Reliable systems can be made of unreliable parts. And this is crucially important here because there is NO OTHER WAY to build a reliable system, as Peter, Rebecca, Avi and other critics will eventually demonstrate by the exhaustion of all other possibilities.”

Editor: Having to use expensive and faulty voting systems without any solution in sight, and hearing these arguments against Ed, I would demand a much more rigorous explanation than just “it’s impossible on its face.”

Ted Selker: An Academic Who Has Done His Job

I asked Ed Gerck for an example of someone in the academic community, who he felt had done his homework on the questions of election technology and who understood where he, Ed, was coming from. He named Ted Selker of the Media Lab at MIT who has researched both electronic voting (DRE) as well as Internet voting.

The best documented evidence of Selker's ideas that I have found comes from a September 8, 2003, "interesting-people" message from Stephen D. Poe, CEO of Nautilus Solutions, to Dave Farber. Poe attached an interview that Simson Garfinkel had done with Ted Selker. Poe also wrote: "I spend too much time in my consulting practice trying to convince clients to pay less attention to the hype about all the issues that could arise in the electronic world and pay more attention to the simple questions of 'Is this proposed method more or less secure than what you have now? Does this proposed method provide more or less benefit for your buck?'"
Editor: Poe's statement above deserves serious thought. Given the disaster of the last election, with emotions understandably running high and with enough found out about the flawed Diebold machines to indicate strong evidence of a serious threat in the application of the technology, one still has to wonder how careful the efforts of these leaders of the anti-DRE movement have been. But given the heat of the battle, and the press's inability to run with more than simplistic sound bites, I can understand why, undoubtedly with the best of intentions, they have been using baseball bats in a slugfest.

My intention in this editorial is to appeal to a stronger approach that will better serve the public interest of maintaining the integrity of our electoral process.

Poe to Dave Farber: The following article starts to address those questions. Editor: In his article "Campaigning for Computerized Voting," September 3, 2003, Simson Garfinkel decided to go beyond the usual critics of electronic voting. In his own words, Simson met one "who isn't opposed to DREs. In fact, he's positively enthusiastic about them. And this man isn't just anybody; he's Ted Selker, an award-winning inventor with many patents, formerly with IBM Research, currently a professor at the MIT Media Lab, and a member of several panels and commissions that looked at the issue of voting following the debacle of the 2000 presidential election."

Here is what Simson Garfinkel had to say: "I met Selker a few days after he had attended a meeting of computer scientists and election officials in Colorado. He was livid. He had just spent two days listening to the experts of the field talk about all of the failings with DREs and how these systems could be used to steal an election.

"'What these people don't realize,' he told me, 'is that automated tabulating machines were invented for a reason: because paper is a fundamentally bad way of making and keeping accurate records. Paper is bulky and heavy. It can be hard to read something recorded on paper, no matter whether the marks were made by hand with pen-and-ink or by a computerized printer. Paper rips and gets jammed in machines. Paper dust gets everywhere.' Eliminating paper, Selker explained to me, has the potential for dramatically improving elections."

“But what about all of the ways that you can hack the voting machines?” I asked him. Selker laughed. Politicians, he told me, have been hacking elections in America for more than 200 years. The geeks are focusing on the abilities of hackers to steal elections by reprogramming DREs because electronic attacks are what these folks understand. But if your goal is truly better elections, he says, the DREs can do more good than harm.

Editor: Selker is suggesting that, by careful design, the equipment can become more usable and offer an increased possibility of reducing fraud.

Garfinkel: Before talking with Selker, I was squarely in the anti-DRE camp. After listening to him, I realize that there is another side to the story that is being systematically underreported by the technology press. Did he convince me? Well, let’s say that I’m no longer convinced of the inherent correctness of the anti-DRE position.[The entire interview is found at: http://www.interesting-people.org/archives/interesting-people/200309/msg00054.html ]

Editor: As with most technology and policy issues, this one is horribly complicated. I want to close here with a reminder that my complaints are based not on the technology substance but rather on the methodology employed. I have indicated what my problems with the methodology are. There is always time to temper what is shaping up as the greatest technology policy failure of my lifetime.
I have ventured into this minefield because, in the heat of ongoing battle, I believe that some academics who hold positions of public trust have nevertheless suffered an inadvertent loss of perspective and are not keeping their minds open to the ideas of another man with equally outstanding academic credentials: Dr. Ed Gerck. Yes, Ed Gerck represents with Safevote a commercial business, a vendor. However, I disagree with the tacit premise that individuals who sell products are basically dishonest about their products' limitations. I think that being accurate is good business.

These days those who stretch the truth often get caught and lose their reputations and their business. That now seems to be a potential problem for Gerck's critics, who have been saying that it is impossible to do what Gerck claims to be doing. They have dismissed his work out of hand. I have talked both to them and to Gerck and have found it clear that what review of Gerck's work they have done has been marked by the unceasing assertion that what he was proposing was not possible. However, many experts, including Waskell, agree with Gerck. Also, more than 300,000 voters in 35 elections have reportedly used Gerck's system successfully. Arguments that it is impossible to do what Gerck claims to be doing remind me that, 101 years ago, everyone knew that machine-powered conveyances could not possibly enable us to fly.

I have no intention of shifting the focus of the COOK Report to electronic or Internet voting. However, in view of my own personal knowledge of what was happening, I have found it impossible not to take a public stand on these issues of methodology. I want to close with another thank you to everyone I dialogued with on this issue. I will also watch, and report on, the extent to which they and others make the effort to look more closely at these complex issues.
Disclaimer: Ed Gerck has been a $300 a year subscriber to this newsletter for four years. I have never received any income from any other financial transaction or compensation for anything from Eva Waskell or Ed Gerck. Nor have I any ownership stake in any enterprise owned by these two people.

Contents

Introduction -
Supply Chain RF-ID Middleware, Offshoring,
Real Time Global Corporation, Explorations
in the Globalization of Everything p. 1

Chapter 1

Management & Strategy
Art Kleiner on Core Groups in Technology
Companies and the Internet p. 8

Finding New Tarnhelms p. 8
Applying the Tarnhelms to Telecom,
IT and the Real Time Corporation p. 10
What the Core Group Wants p. 13
Hernando DeSoto, p. 14
How Phillips Lost to Sony p. 16
The Digital Revolution’s Impact on the Core Group p. 21

Conversations with Art Kleiner on Who Wins &
Who Loses in the Age of the Real Time Corporation p. 23

Number of Organizations Doubles Every 25 Years, p. 23
Looking for an Integrated Learning Base p. 24
Engineers Who Understand What the Core
Group Does Not p. 25
Evidence for Organizational Growth p. 26
Lavina Weismann Describes a Model of IT
Leadership Based on Core Group Theory p. 27

Chapter 2

Offshoring. Developments in India and China
An Entire Industry Grows Up Designed to Facilitate
Export of Jobs -- Telecom Liberalization in India, China and Elsewhere Creates Infrastructure that Enables
Foreigners to Become US Telecommuters p. 30

Technology Changes Enable Migration
of Business Back Offices p. 30
Commoditized Processes Lead the Way p. 31
Comparing Countries p. 31
Strategy for Making Off Shoring Decisions p. 32
Understanding the Sources and the Risks p. 32
To Do It Yourself or Not? p. 33
But Where Does It Stop? p. 34

Offshoring an Inevitable Part of a Globalized
Economy Driven by Rapid Technology Change
Ultimate Limits and Economic Impact Uncertain p. 35

An Offshoring Operation in Tbilisi p. 37
Entertaining Ourselves to Death p. 39

China’s Economic and Technology Strategy
Leveraging its Economy in an Attempt to Impose
its Own Standards p. 43

A New Chinese Strategy p. 43
Could China Become Global Top Technology
Producer by 2008? p. 44
The Currency Issue and The Doormat Question p. 47
The New Chinese Standards p. 48

Chapter 3

Can Microsoft Be Open Sourced
and Off Shored?
We Examine Microsoft’s Continuing Woes p. 51

Sun's Move into China a Big Factor p. 51
The IT Industry Is Shifting away from Microsoft p. 52
In Scotland, Open Office; and on the Internet,
Windows Source Code p. 57

Chapter 4

Supply Chains –
Enterprises Investing at the Edges -
Trend Necessitated by Increasing Importance
of non Vertically Oriented Supply Chains p. 59

Demands of Supplier and Customer Networks
Create Shifts in IT Spending, Employment
and Demographics, p. 60
RF-ID as a Huge Data Source, p. 60
Further Comments on Wal*mart - Could it
Lease its Stores Too? p. 61
An Industry Overview: We Find Three “Rivers”
of the Industry Rushing Toward a Hoped for
Confluence p. 65
Auto-ID p. 65
VeriSign and EPC Global p. 66
The Three Rivers p. 67

What Can and Can’t the Tags Do? Why
was DNS Chosen? How did VeriSign Win
EPC Global? p. 68

EPC Global and VeriSign p. 69
Why Use a DNS Registry? Single point of
Failure and Critical Infrastructure Risk p. 72
How VeriSign Won EPC Global p. 75

Keith Dierkx on RF-ID issues in Transportation Management --Knowing Where Assets Are and Keeping Them in Motion Lead to Early Understanding of
Supply Chain Systems Interoperability Issues p. 78

Two Different Paths Toward Standards p. 78
Asset Identification in a Closed Loop System p. 79
Supply Chain Flexibility p. 80
Scalability of Solutions in Search of Better
Information about Use of Resources p. 81
Need for Prompt Acquisition of Information
into an Integrated Network p. 81
RF-ID as a Tool that Must be Integrated
into Other Processes p. 82
Why Implement? Understanding Systemic
Complexity p. 83
Using the EPC Global Framework as a
Crutch to Side Step Issues of Integration p. 83
Complexity of Integration Will Slow Down
Adoption p. 85
Vertical Solutions? p. 85

Economics and Technology of Supply Chain
Management in $100 Million a Year Telecom
Hardware Company --RF-ID Seen as Unnecessary p. 87

Integration of Information Systems
Capability via XML Using Internet Overlays p. 87
RF-ID is But a Single Part of the
Business Process Whole p. 88
Economics of Telecom Supply Chains, p. 89
Who Needs RF-ID? p. 90
RosettaNet, RF-ID and Serial Numbers p. 91

Chapter Five

Data Shared Across Corporate Boundaries
Raises Issues of Trust -- How to Think
Rigorously about What to Expect from
Computer Based Information Systems p. 92

Trust in Computer Networks, Supply
Chains and Voting, p. 92
Can Trust Be Algorithmically Defined? p. 93
Trust Is Used Without Understanding p. 95
Tag Data Must be Homogeneous Across
Environments, While Enterprise Data Cannot Be p. 96
Policy Model for Trust Enforced via Service Utilities p. 98
Trust -- qualified reliance on information,
based on factors independent of that information, p. 99
Trust Requires Corroboration by
Independent Channels, p. 102

But Where’s the Edge? Delineating Any
Agreement Proves Very Difficult Problem Because
New Supply Chain Business Models Blur Edges p. 104

Defining the Edge, p. 104
Is the Edge a Boundary of a System? p. 105
Has the Edge versus Core Distinction Outlived
its Usefulness? p. 106
Edge as an Indefinable Term? p. 107
The Edge as a Boundary Point for Regulation? p. 108

An Introduction to the Technology &
Economics of RF-ID Middleware in Supply Chains - as covered in the remainder of this report - p. 110

Chapter Six

RF-ID Middleware – Some Architectural and Economic Issues
An Introduction to the Architectural Concepts, Software Supply Chain Virtualization, and Organizational and Infrastructural Players p. 113


Part One -- The Architectural Enablers p. 113
Space-Based Computing and Service Oriented
Architecture, p. 113
Microservices and Software System Flexibility p. 114
Service Grids, p. 115
A Middleware Software System for RF-ID p. 117
Part Two: Adopting RF-ID Technology p. 117
Technical Issues, Standards and Adoption
Moving in Parallel p. 117
Relationship of EPC to UCC – Moving
from EDI to Something Better p. 118
Moving Data Across Corporate Boundaries:
EDI–INT to Web Services p. 119
RosettaNet and Others Experiment with
Intra-Corporate Communication p. 119
Where is EPC Global Going? p. 121
Will Things Go Forward Even Though
Trust Issues Exist? p. 121
RF-ID Objectives p. 121
Creation of a Software Based Virtualization
of a Physical Item and Trust p. 122
The EPC Product Registry p. 124
Perishable Product History Supply Chain
Scenario, p. 125

RF-ID Middleware Architectural Issues
A Service Grid Architecture Enables Product
Information to Travel with Tagged Goods
Throughout a Supply Chain p. 127

The Architecture of RF-ID Middleware and
Delineating Market Boundaries p. 127
The Intelligent Network p. 129
Greene on Middleware Enabling Technologies p. 130
Scaling of Services and Ambiguity of Language p. 130
Auto-ID Proposed Standards p. 131
Conclusion: Ellipsis Value Add p. 134

Two Different RF-ID Business Models: Faster
Inventory Management versus Tailorable Supply
Chain Tracking With Edge Based “Action” p. 136

Enabling Edge Based Decision Making p. 136
The RF-ID Agent in Vertical Space p. 137
Business Process Changes Are Coming -
But How Fast Is the Question p. 137
Management May Ignore Engineering’s Answers p. 138

Chapter Seven

Whither the Future? Rapid Emergence of Single
RF-ID Solution for Supply Chains Unlikely -- Systems
Integration Joined by Issues of Psychology and
Customer Education -- Will Technology Push
Outrun User Pull? Interview with Terry Retter p. 139

When Do You Tag a Product? p. 139
Where to Store the Data? p. 140
Issues of Message Exchange between Supplier
and Customer p. 141
Future Direction Will Involve New Architectures p. 142
Parallel Architectures Enable Shortening of Cycles p. 142
Even Cash Cycle Motivation Was Unable to
Create Consistent Taxonomy in the Auto Industry p. 143
Technology Clashes with Behavioral Psychology p. 144
Product Push or User Pull? p. 145

Implementing Policy to Solve Issues of Trust - How
an RF-ID Mobile Agent System Could Use a Service
Grid to Create a Policy Overlay that Would Create “Rules of the Road” for its Mobile Agents p. 148

Information Flow p. 149
Some Further Detailed Illustrations and Explanations
of Service Grid Mobile RF-ID Architecture p. 150

Interview, Discussion, and Article Highlights p. 157

Executive Summary, Part 1 p. 189

Executive Summary, Part 2: Voting Train Wreck Editorial p. 194

A Recommended Web Based RFID resource

A very good resource on RFID with a discussion forum and basic information on over 100 companies involved is found at

http://www.rfidexchange.com

Contributors to This Issue

This three month issue was put together using a private mail list. I edited the results into the articles that, together with the six interviews, make up the content. The following list members made comments that I used in my edits of the list discussion.

John Berryhill, IP Attorney, Dann, Dorfman, Herrell, and Skillman
Ren Chin, VC at ParTech International
Frank Coluccio, Owner DTI Consulting
Reed Cundiff, Senior Vice President, Yankee Group
Keith Dierkx, SVP of Operations and CIO Embarcadero Systems Corp
Bob Frankston, Founder, Software Arts and VisiCalc
Ed Gerck, Founder Network Manifold Associates
Steve Goldstein retired – formerly International Communications NSF
Wedge Greene, CTO Mission Assurance
Brian Hanley, President of a software development company based in the former USSR
Sebastian Hassinger, Senior Strategist Pervasive Computing, IBM.
Dave Hughes, Owner Old Colorado City Communications (a WISP)
David Hughes, designer of wireless data collection grid
David Isenberg, the Brains Behind the Stupid network
Suzanne Johnson, formerly senior technology manager with Intel
Art Kleiner, author of Who Really Matters? and several other books
Pete Kruckenberg, Senior Network Engineer at Utah Education Network
Andrew Odlyzko, Director Digital Technology Center University of Minnesota
Greg Pelton, Senior Director Cisco Technology Center
Terry Retter, Director Global Technology Center, PricewaterhouseCoopers
Reuben Steiger, Chief Development Officer, Mission Assurance
Charles Sands, co founder Broadband Access Coalition Bangkok
Chris Savage, attorney, Cole, Raywid & Braverman, LLP Washington DC
Einar Stefferud, Retired Internet Pioneer
Ted Stout, Professional Generalist, Founder & COO, tedCities
Jonathan Thatcher chaired 10 gig Ethernet standards & Ethernet in First Mile Alliance
Michael Tiffany Director Strategy, Mission Assurance
Brough Turner, Co Founder NMS Communications
Ryan Weidenmiller, Telecom Analyst, Sequoyah Investment Group
Ronald Yokubaitis, Owner Texas.Net