[BITS] Standards readings, wk 8: Policy

Rohit Khare (rohit@godzilla.ICS.uci.edu)
Tue, 24 Feb 1998 02:53:00 -0800


---
[As usual, the indented bits are Phil Agre's words, not mine. --RK]

Week 8 / Policy

For people from many backgrounds, standards tend to imply de jure standards set by the government, or through formal negotiations between governments. This was historically the case in telecommunications, but as the world changes, the question arises of government's proper role in the standards process. Ideological approaches to the question tend toward the predictable extremes, but in the middle lies a very complicated range of alternatives whose advisability depends on the interactions among numerous aspects of the standardization process.

In my idle imaginings, technological design space is like the rubber-sheet model of space-time. The virgin design space is flat, formless. Products fall like buckshot of varying weights, as producers 'explore' the space. Customer cognition is revealed in their paths wandering through design space, eventually spiralling into adopting one product or another (or none), with the expanding influence of the gravity well reflecting the network effect. IPR fences set up around the product delineate how near or far another producer may come before trespassing. Standards, finally, are a surveyor's effort to map out regions of the design space and decide what's in or out. And Policy, well that's politicians gerrymandering about where the standards *ought* to be.

Excessive diffusion, as Saloner introduces it, is a black hole: a well so deep that no lighter-weight product can establish another stable well within its sphere of influence. Let's see if this metaphor helps navigate this week's readings.

Or maybe the metaphor's a pachinko machine, randomly allocating users to slots. I forget :-)

---

Lewis M. Branscomb and Brian Kahin, Standards processes and objectives for the National Information Infrastructure, in Brian Kahin and Janet Abbate, eds, Standards Policy for Information Infrastructure, Cambridge: MIT Press, 1995.

Branscomb and Kahin are close to the ground of government policy processes, and their introductory chapter to the very useful "Standards Policy for Information Infrastructure" volume is written in the language of those processes. It quickly surveys issues such as competing models for the standards process, the increasing sophistication of users, the emergence of standards consortia, intellectual property, and the several potential roles for government in the new standards environment.

Unfortunately, my reactions to this article are going to be colored by my distaste for their actual conclusion. I'm not even sure the authors would stand as firmly behind these early ideas favoring government involvement. They stated: "Quite likely, the procedures of the IETF will not work as well with a multitude of stakeholders in a commercialized environment where large investments are at stake."

They see three arcs for NII development: winner-take-all competition a la desktop applications; cooperative Internet-style standards development; and the traditional telecom process.

The description of standards consortia is the most useful bit of the paper, and must have been especially so in 95. "[Consortia] may be horizontal (among competitors), vertical (between integrators and suppliers), or comprised of firms providing complementary products and services. They may develop specifications, patentable technology, or tools and platforms. They may be structured as stock companies, exclusive non-profit orgs, open trade assocs., or ad hoc interest groups. ... may seek to accelerate the process of formal standards... or bypass it altogether." I'm still looking forward to a good survey paper on IT standards consortia, though. There's a good book on R&D consortia in particular, already out from HBS Press: Technology Fountainheads, but I'd like something to compare HL7, IrDA, and Unicode, for example.

To better understand their alternative arcs, I think it would be useful to ask "how should accessibility features be accommodated?" Are the interests of the disabled protected equally well in each case? Does this agenda have equal priority in each case? Do we have related examples to help separate the consequences of each arc?

---

Paul A. David, Standardization policies for network technologies: The flux between freedom and order revisited, in Richard Hawkins, Robin Mansell, and Jim Skea, eds, Standards, Innovation and Competitiveness: The Politics and Economics of Standards in Natural and Technical Environments, Edward Elgar, 1995.

The choice between standardization and non-standardization is often framed as a choice between the respective virtues of order and freedom, and standards policy is consequently framed in similar terms. David, though, argues that this analysis is ill-posed. Instead, he urges us to view the institutional processes of standards-setting as a dynamic response to a dynamic environment. The consequences for policy are complicated and largely pessimistic.

I had a hard time pinning down David's take-home lesson. Perhaps this is too blunt, but I didn't find this article useful.

I do want to highlight these nifty bits: "Some years ago, Tibor Scitovsky (1976) drew an arresting portrait of modern-day, affluent consumers as engaged forever in mixing novelty with repetition in largely vain efforts to maintain a physiologically comfortable equipoise between states of excess stimulation and inadequate arousal. ... the order achieved by standardization and homogenization can bring efficiency gains only at the cost of suppressing some idiosyncratic sources of consumer satisfaction."

---

Samuel Krislov, How Nations Choose Product Standards and Standards Change Nations, Pittsburgh: University of Pittsburgh Press, 1997. Chapter 9: The Evolution of Standards and the Processes of Formalization, and Chapter 6: Standards in the European Community.

Krislov's book is a comparative study of standardization policies as part of broader national industrial and economic strategies. This concluding chapter draws on his case studies of the United States, European Union, Japan, and the erstwhile Soviet Europe by placing the evolution of standards and standardization processes in that broader institutional perspective.

Hmm. I wish I had read more about his national cases. I read 6 and 9, and skimmed 10, but I can see that there may be more meat on the bone elsewhere in the book. The EC chapter was a useful linking of the stories of technocratic unification intertwingled with the political project of European identity.

The technocratic search process says the placement of product-balls on the rubber sheet is inherent in the sheet itself: there is a weakest point, and products will gravitate towards their 'natural' optimum. The standard is a point.

The diffusion model accords a lot of power to the first mover. It's a vast, dangerous rubber sheet out there, so after there's one shot in the bowl, the others huddle near it as branches ("mankind is predisposed to imitation"). The standard is an inclusionary boundary drawn around the instances.

The restrictive model also accords a lot of power to the first mover: the power to monopolize. The standard is an exclusionary boundary drawn around a single instance.

He is correct to draw a ladder from market quality/quantity standards to group-enforced to gov't-approved and gov't-enforced standards, but doesn't seem to acknowledge the degree to which it is no longer an escalator. In the past, standards seemed to progress up the ladder as a matter of course, but now, especially for IT, forum-shopping can reflect maturity-shopping. Many private agreements may never aspire further...

---

Sanford V. Berg, Public policy and corporate strategies in the AM stereo market, in H. Landis Gabel, ed, Product Standardization and Competitive Strategy, Amsterdam: North-Holland, 1987.

AM stereo in the United States is an example of a laissez-faire policy that led to rapid, sharp competition between competing standards, selecting a clear winner without stranding too many consumers. Berg briefly recounts the particulars and tries to derive lessons for the broader question of markets for interdependent components.

In the old regulated days, the FCC would choose, so winner-take-all markets led to entrenched positions ("The systems had strong similarities, despite their being mutually incompatible"). In practice, the AM stereo battle was well and truly joined only in the early '80s, when 4-5 viable competitors had to duke it out after the FCC's no-decision decision. The case specifically identifies the issues facing broadcasters, transmitter mfrs, receiver mfrs, and the overall context of the broadcasting business (that AM was becoming ghettoized as talk). The success of vertical integration also seems to be well-explained in this analysis.

The difficulty of the decision facing the FCC does come across clearly, though. I look forward to replaying it in class.

There were lots of fascinating historical bits. First stereo: 1925, using two AM bands and two sets in New Haven.

---

Joseph Farrell, Standardization and intellectual property, Jurimetrics Journal 30(1), 1989, pages 35-50.

Farrell argues that the market dynamics of standards recommend a limited role for intellectual property protection in general and copyright in particular. If the economy benefits from compatibility, then the difficult question is whether and when intellectual property rights encourage markets to evolve toward compatibility.

I think I was prejudiced against Farrell's overly strong point-of-discussion, "copyright may be an inefficient way to protect software," before recognizing, at the end, that he may be more focused on look-and-feel debates than core functionality. It's true that setting up fences on the rubber-sheet is a matter of judgment, and we can only wish there were more concrete guidance. But you can't control for invention...

Standards do need to be shared, which is why most formal SDO processes require waivers of IPR claims, and why de facto industry evolution favors more open standards. (I'd love to learn more about Dolby here!) On the other hand, strong IPR can 'scare away' nearby development and thus limit consumers to the nearest product-well: excessive adoption -- which *accelerates* the process of lock-in because of the network effect.