Not Quite Dead Yet

From: Gregory Alan Bolcer (gbolcer@endtech.com)
Date: Sun Feb 11 2001 - 10:13:25 PST


Despite Adam's prognostications, this tech trend's
(in my best Monty Python voice) "not quite dead yet."

Greg

http://www.commvergemag.com/commverge/issues/2001/200102/02cs.asp

Cover Story February 2001

  Strength in numbers
  The powerful potential of peer-to-peer.
  Brian Dipert, Contributing Editor

  If you haven't yet heard of Napster, then with all due respect, you've
  apparently spent the past year cooped up in a cabin in the backwoods of
  Montana. Napster's simple, straightforward system for swapping MP3 files has
  music listeners jumping for joy, music producers hopping mad, and musicians
  firmly ensconced on both sides of the love/hate fence (with some tenuously
  straddling it).

  But this isn't another article about Napster. This is an article about a much
  larger trend, of which Napster is but a small example. It's about a
  potentially enormous change in the fabric of our digital universe. About
  fundamental shifts in the way we compute and communicate. And it starts
  with an unassuming acronym, P2P, which stands for "peer-to-peer."

  Napster has brought the term P2P into the public consciousness, but has also
  tainted it with an overly limited and negative connotation. Peer-to-peer is about
  file swapping, yes, but more generally it's about sharing. Not just files, but also
  computing power, storage resources, ideas, and other things we probably
  haven't even thought of yet. When Pat Gelsinger, vice president and chief
  technical officer of Intel, commented at last fall's Intel Developer Forum
  that "peer-to-peer computing could usher in the next generation of the Internet,
  as Mosaic sparked the last," I don't think he was just getting excited about
  being able to snag free Metallica tracks.

  Groundwork

  Peer-to-peer interchange doesn't necessarily have to be between two PCs,
  either. Anything Net-connected, from a thermostat to a supercomputer, is
  potentially a participant in the P2P party. Let's begin our exploration of
  peer-to-peer and its counterpart, client-server, with a few definitions, so that
  we're sure we're speaking the same language.

  Client-server, in its most extreme form,
  describes a networking arrangement in which
  one member, the client, is a "dumb" device,
  with only enough processing and memory
  capability to boot itself, load a minimal
  operating system, make requests to other computers, and communicate the
  results of those requests to the user. The server on the other end of the
  communication link does all the heavy lifting. It passively waits for incoming
  requests, receives them, translates them, does whatever processing is required
  based on them, and sends the results back in a format the client can accept. The
  server communicates with numerous clients and must both queue and prioritize
  their inbound requests. If it assumes that all clients are equally "stupid" and that
  all communications links between server and clients are bandwidth-deficient, it
  by necessity communicates with the clients in a consistent low-resolution,
  text-only manner.
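
  To make that division of labor concrete, here's a minimal sketch of the
  extreme split (illustrative Python, not any real terminal protocol): the
  client does nothing but forward a request and echo the reply, while the
  server parses, processes, and formats.

      # A minimal sketch of the "dumb client / smart server" split described above.
      # Commands and names are invented for illustration; no real protocol is implied.

      def server_handle(request: str) -> str:
          """The server does all the work: parse the request, process it, format text."""
          command, _, argument = request.partition(" ")
          if command == "UPPER":
              return argument.upper()
          if command == "LEN":
              return str(len(argument))
          return "ERROR: unknown command"

      def dumb_client(user_input: str) -> None:
          """The client only forwards the request and echoes the reply to the user."""
          reply = server_handle(user_input)   # stands in for a network round trip
          print(reply)

      dumb_client("UPPER hello world")   # -> HELLO WORLD
      dumb_client("LEN hello world")     # -> 11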

  The client-server model is the domain of, for example, the vintage VT100
  terminal (and now the numerous VT100 software emulators running on
  otherwise-intelligent computers), and, in slightly more advanced forms, the X
  terminal and Microsoft's Windows Terminal Services. It's a model that
  companies such as Netscape (now part of AOL), Oracle, and Sun Microsystems
  enthusiastically embrace, because it puts the bulk of the hardware and
  software innovation (translation: initial sales and upgrades) on the server.
  Intel and Microsoft both sell into the server space, too, but the majority of
  their money is made on the client side, so their lukewarm embrace of
  client-server (and Microsoft's well-documented pursuit of Netscape) is
  understandable.

  Microsoft's much-publicized .NET initiative might at first glance appear to be a
  capitulation to client-server trends. But close inspection of the company's
  distributed-computing plans reveals a blend of client-server, peer-to-peer, and
  standalone computing techniques, as well as no relief on client processing and
  memory requirements. Cynically, one could make a credible argument that—in
  spite of Microsoft's claims of .NET's potential for "increased collaboration" and
  "lower maintenance costs"—the overwhelming intent of the initiative is simply
  to boost the company's revenues via pay-per-use tolls, and the elimination of
  illegal software duplication and single-license but multi-system installations.

  Peer-to-peer, in its purest form, is at the complete opposite end of the usage
  spectrum from client-server. In P2P, both interacting users have robust
  processing and storage resources at their disposal, and the link between them is
  fat, fast to respond, and persistent. The increasing reality of this presumption is
  behind the groundswell of interest in P2P. Even entry-level PCs sport 500-MHz
  CPUs and high-speed, 2-Gbyte hard disks. University students, perhaps the
  most enthusiastic Napster groupies (along with subversive office cubicle
  dwellers) share T1 connections. Home adoption of cable and DSL connections
  is exploding. And Metricom's Ricochet service and DirecPC give a glimpse into
  how the high-bandwidth wireless future might take shape.

  Same as the old stuff

  P2P may be the latest trendy buzzword, but examples of the general concept
  have existed for years. Consider, for example, the simple networking that's been
  in Windows since the version 3.11 days. Windows for Workgroups enabled
  users to access each other's hard drives and peripherals such as printers, first
  under the proprietary NetBEUI protocol and later under industry-standard
  alternatives such as TCP/IP. Now, in fact, even the ability for an entire LAN to
  share one client's Internet connection is built into the operating system. In the
  old days, peer-to-peer networks were discouraged for anything more than the
  smallest workgroup configurations, because of the performance burdens they
  placed both on clients and on the LAN. But in the modern era of switched
  100-Mbit/sec Ethernet connections and 1.5-GHz Pentium 4 processors, these
  apprehensions are becoming increasingly dated.

  How about sharing processing power? SETI@home (setiathome.berkeley.edu)
  has been in existence for almost two years now, and I've had it on my
  computers for several months (check out my stats on the SETI site via user
  name bdipert@pacbell.net). Even though SETI@home is set to always run in
  the background, I rarely notice even a hint of a drag on even my
  lowest-performance PC (though I always turn it off whenever I'm doing
  hardware or software benchmarking). Similar concepts find use in programs
  like FightAIDS@home (www.fightaidsathome.com), based on the
  general-purpose Entropia resource-coalescing algorithm, and in Sun's
  Java-enabled Grid Engine.
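
  The pattern behind these cycle-donation programs is straightforward: fetch a
  work unit from a coordinating server, crunch it at the lowest priority the
  operating system offers so foreground work always wins, and report the
  result. Here's a toy sketch of that loop (the work unit and its "analysis"
  are invented stand-ins; the real clients are far more sophisticated):

      # Toy cycle-donation loop in the spirit of SETI@home-style clients.
      import os

      def fetch_work_unit():
          # Stand-in for downloading a work unit from the coordinating server.
          return list(range(1_000_000))

      def crunch(unit):
          # Stand-in for the real number crunching; here, a trivial checksum.
          return sum(unit) % 97

      def report(result):
          print("result uploaded:", result)

      if hasattr(os, "nice"):
          os.nice(19)            # politely take only otherwise-idle CPU cycles
      for _ in range(3):         # a real client would loop indefinitely
          report(crunch(fetch_work_unit()))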

  Lotus Notes, a groupware collaborative communications pioneer, harnessed a
  mostly client-server approach (along with a local client storage cache) for
  interactions between a PC and the server hosting various Notes databases. Each
  night when those databases scattered across the globe replicated with each
  other, though, they communicated as peers. Ironically, Ray Ozzie, the creator of
  Notes, is now the founder, chairman and CEO of peer-to-peer platform
  developer Groove Networks. Ozzie has mused that if he had to do Notes all
  over again, "I wouldn't make it server-based." And with Groove, he's executing
  on his vision.

  Peer-to-peer server interaction even precedes Notes. Consider, for example, the
  auto-updating done by DNS and Usenet servers, both of which in contrast
  interact with individual computer users in a client-server fashion. The
  collaborative gaming built into popular PC titles such as Unreal Tournament
  and Quake III: Team Arena operates in a mostly peer-to-peer fashion, and
  CenterSpan started out targeting online gaming with a completely peer-to-peer
  approach. Although most of today's chat software (AOL's ICQ and Instant
  Messenger, Microsoft's MSN Messenger, Yahoo Messenger, and so on) is
  server-enabled, it doesn't have to be. Jabber, for example, is an open-source,
  decentralized messaging alternative.

  The concept of file sharing across the Internet will be old news to anyone who's
  ever done an FTP upload or download. Server-initiated file transfer had its
  origins in the frequently-cursed "push" technology of a few years ago, which
  like the old man in the movie Monty Python and the Holy Grail, isn't dead
  yet. Though Napster is the P2P poster child, Napster's MP3-swapping equation
  actually depends on a network of servers. You log into a server, and from that
  point on you view the hard drive contents only of others who are also logged
  into that same server. Those pesky servers are why Metallica was able to force
  Napster to track down and expel users who were offering to trade copies of the
  band's songs. And the lack of server middlemen is why content developers and
  marketers fear purer P2P alternatives like Aimster (which, as the name implies,
  runs 'on top' of AOL Instant Messenger), CuteMX, Freenet, Gnutella, Hotline,
  iMesh, Ohaha, and Publius.
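
  The architectural difference is easy to picture. In a Napster-style design, a
  central index knows which logged-in users share which files, which is also
  what makes user expulsion possible; in the purer alternatives there is no
  such chokepoint. Here's a rough sketch of the index side only (invented
  names, not the actual Napster protocol):

      # Illustrative central-index model: one server knows who shares what.
      from collections import defaultdict

      index = defaultdict(set)          # filename -> set of logged-in user names
      banned = {"metallica_fan_2"}      # the server can expel users on demand

      def login(user, shared_files):
          if user in banned:
              return False
          for name in shared_files:
              index[name].add(user)
          return True

      def search(filename):
          # Searchers only see peers logged into this same server.
          return sorted(index.get(filename, set()))

      login("alice", ["one.mp3", "two.mp3"])
      login("bob", ["two.mp3"])
      print(search("two.mp3"))          # -> ['alice', 'bob']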

  Napster is today's leading music-sharing application primarily by virtue of its
  being first. Yet, in addition to the presence of servers, it's got other non-ideal
  P2P characteristics, too. It relies on a proprietary set of communication rules
  known as the Napster protocol. And it only enables swapping of MP3 files.
  Programs such as Pakster and Wrapster enable you to disguise other files as
  MP3s, but they're clumsy to use.

  Secrets of success

  Notice the P2P/client-server hybrid trend yet? In reality, very few of today's
  networked applications are purely client-server or peer-to-peer, but the clear
  migration is toward the latter. David Gelernter, professor of computer science
  at Yale University, notes in his essay "The Second Coming: A Manifesto," "If a
  million people use a Web site simultaneously, doesn't that mean we must have a
  heavy-duty remote server to keep them all happy? No. We could move the site
  onto a million desktops and use the Internet for coordination. Could
  Amazon.com be an itinerant horde instead of a fixed central command post?
  Yes."

  In his Developer Forum keynote, Intel's Gelsinger drew some intriguing
  parallels between the history of Mosaic and the potential of P2P. The
  trigger point in the Internet revolution came not just when a compelling
  application (Mosaic) emerged but also when important infrastructure
  requirements were satisfied—common protocols, ease of use, standards,
  scalability and security. Plenty of Internet-based applications existed before
  Mosaic, including FTP, NNTP, WAIS, Gopher, and even WWW. But the pre-
  and post-Mosaic statistics speak for themselves. In 1992, 50 Web servers existed
  worldwide. The University of Illinois released the first version of Mosaic in
  1993. One year later there were 10,000 Web servers.

  P2P holds promise for shifting the bulk of computing towards the edges of the
  network, if it can satisfy these same infrastructure requirements. And therein
  lies both Intel's excitement about P2P and its participation. When it comes to
  driving industry standardization of computing hardware and software, Intel has
  served in a way that's either (depending on your perspective) welcome and
  impartial or heavy-handed and self-serving.

  Now the company, most visibly through evangelist Bob Knighten (and his staff)
  and the Intel-sponsored Peer-to-Peer Working Group, hopes to similarly guide
  P2P to a successful future, which benefits both consumers and numerous
  companies (and by the way also sells lots of fast microprocessors and
  networking equipment). Intel's investment group has even gotten into the act,
  bankrolling several P2P startup companies and peering into the plans and
  potentials of dozens of others. Intel's still-attached-at-the-hip partner, Microsoft,
  has even shown signs of understanding the need for non-proprietary
  approaches in a pervasively networked world; the foundation of the .NET
  initiative will be constructed of industry-standard XML (extensible markup
  language). Industry-standard protocols that build on an already-mature Internet
  foundation, such as FTP and HTTP, WebDAV, URL, and MIME, are oft-touted
  in the documentation for Endeavors Technology's Magi P2P infrastructure
  software.

  Big bang?

  Is it possible to quantify the potential impact of P2P? Three "rules of the
  network" give a glimpse into the approach's possibilities.

  Sarnoff's Law (named after broadcast pioneer David Sarnoff) regards the
  network as a medium with few transmitters and many receivers. The value of
  services targeted from the former to the latter increases linearly with the
  number of receivers. We'll call this variable "n."

  Metcalfe's Law, coined by Ethernet inventor and 3Com founder Bob
  Metcalfe, regards the network as a medium for inter-communication, in which
  each device can converse with as many as (n-1) other devices, where n
  represents the total number of network nodes. Therefore the value of the
  network is n(n-1), or approximately n^2. For an
  analogy, think of how useless a single fax machine is. But as more fax machines
  appear, the value of the resulting network that links them increases by the
  square of their total number.

  Finally, we turn to Reed's Law, coined by computer and network consultant
  David Reed. Reed views the network as a grouping medium in which as many
  as 2^n - n - 1, or approximately 2^n, different interest groups may form. Why are
  these equations important? The Internet contains millions of Web servers. But it
  encompasses hundreds of millions of client PCs. Now consider the billions of
  PDAs, Web-enabled cell phones, refrigerators, and other devices which'll go
  online in the years ahead. Whether calculated by equations containing n, n^2, or
  2^n, that's a whole lotta incremental value.
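
  The three laws are easy to compare numerically. A quick worked example,
  treating n simply as the number of connected devices:

      # Compare the three network-value estimates for a few values of n.
      def sarnoff(n):  return n                # value grows linearly with audience size
      def metcalfe(n): return n * (n - 1)      # pairwise connections, roughly n^2
      def reed(n):     return 2**n - n - 1     # possible multi-member groups, roughly 2^n

      for n in (10, 100, 1000):
          print(f"n={n}: Sarnoff={sarnoff(n)}, Metcalfe={metcalfe(n)}, "
                f"Reed={len(str(reed(n)))} digits long")

  At a thousand nodes the Reed figure is already a 302-digit number, which is
  exactly the point.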

  As increasingly "dumb" but still Net-connected widgets enter the picture,
  though, the lines between client-server and peer-to-peer will increasingly blur. A
  workstation won't be able to talk with a supercomputer in the same way that it
  converses with a security sensor, after all. Just as fax machines and modems
  negotiate to reach a mutually agreeable communication level, back-and-forth
  interrogation of capabilities will be required of any pervasive P2P application.
  The fact that my workstation wouldn't necessarily want to share the same type
  of information with a security sensor as it shares with a supercomputer will
  naturally simplify this effort.
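
  A sketch of what that back-and-forth interrogation might amount to: each
  peer advertises what it supports, and the conversation proceeds over the
  intersection (capability names invented for illustration; no specific
  protocol implied).

      # Toy capability negotiation between unequal peers.
      def negotiate(mine: set, theirs: set) -> set:
          """Return the features both sides can handle, like modems settling on a speed."""
          return mine & theirs

      workstation     = {"file-transfer", "search", "compute-sharing", "video"}
      security_sensor = {"status-report", "file-transfer"}
      supercomputer   = {"file-transfer", "search", "compute-sharing", "batch-jobs"}

      print(negotiate(workstation, security_sensor))   # -> {'file-transfer'}
      print(negotiate(workstation, supercomputer))     # -> a much richer common set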

  Alternatively, too, my thermostat or Net-enabled refrigerator might use the
  home server in my closet as its proxy or surrogate, enabling the more robust
  device to act in its stead for interactions it wouldn't be able to handle itself. P2P
  software must also comprehend the fact that some devices will be behind
  firewalls and therefore be unable or unwilling to respond to communications
  initiated from outside. In such a case, a surrogate outside the firewall might
  queue up these requests for devices behind the firewall to periodically examine
  and reply to.
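
  That surrogate is essentially a mailbox: it queues requests addressed to the
  hidden device, and the device drains the queue over connections it opens
  itself. A minimal sketch (class and method names are invented for
  illustration):

      # Sketch of a relay/surrogate queue for peers that can't accept inbound connections.
      from collections import deque

      class Surrogate:
          """Sits outside the firewall and queues requests addressed to hidden peers."""
          def __init__(self):
              self.mailboxes = {}                      # peer id -> deque of requests

          def deposit(self, peer_id, request):
              self.mailboxes.setdefault(peer_id, deque()).append(request)

          def collect(self, peer_id):
              # The firewalled peer calls out periodically and drains its own mailbox.
              box = self.mailboxes.get(peer_id, deque())
              drained = list(box)
              box.clear()
              return drained

      relay = Surrogate()
      relay.deposit("thermostat-42", "report current temperature")
      print(relay.collect("thermostat-42"))   # polled from inside the firewall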

  If the already-mentioned P2P examples aren't enough to whet your appetite,
  here are a few more ideas for your consideration. First, how many (or should I
  say, how few) of you regularly back up your hard drives to a tape or other
  medium? Well, if a statistically determined percentage of all computers on a
  LAN contained not only their own files but also copies of other computers'
  files, you wouldn't need to back up at all. Your computer, should it encounter a
  damaged or missing file, could search for and download a copy located
  elsewhere on the network. Such an approach, currently under development in
  Microsoft's research labs, would have to comprehend the fact that not all
  computers on the network would be present at all times (some would be off,
  notebooks would be disconnected for travel). But it's just statistics, not rocket
  science. And backup wouldn't even need to take the form of a session that you
  have to remember to initiate or task-schedule.
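
  The statistics really are simple. If each peer holding a copy of your file is
  online independently with probability p, then with k copies scattered around
  the network the chance that at least one copy is reachable is 1 - (1 - p)^k.
  A quick worked example with illustrative numbers:

      # Probability that at least one replica of a file is reachable, assuming each
      # holder is independently online with probability p.
      def availability(p: float, k: int) -> float:
          return 1 - (1 - p) ** k

      for k in (1, 2, 3, 5):
          print(f"{k} cop{'y' if k == 1 else 'ies'} at 70% uptime each: "
                f"{availability(0.7, k):.3f}")
      # Five copies at 70% uptime each already give about 99.8% availability.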

  Here's another one. You're the IT director of a multinational company with a
  far-flung array of offices scattered across the globe. Most locations are
  plagued by slow, expensive and unreliable WAN access. Employees periodically
  request large multimedia files for training purposes. Do you a) force each
  employee to download a copy of each file, b) install an expensive server and
  associated IT personnel at each location, c) ship lots of environmentally
  unfriendly CD-Rs and Zip discs around, or d) rely on P2P software that could
  automatically detect that another computer on the office LAN already has the
  desired file and automatically provide it to the requesting client? I thought
  you'd pick d.
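
  Option d boils down to "ask the LAN before asking the WAN." A rough sketch of
  that decision, with hypothetical data structures standing in for whatever
  discovery mechanism the P2P software actually uses:

      # Toy LAN-first fetch: prefer a copy already held by a nearby peer.
      def find_on_lan(filename, lan_peers):
          """Return the first nearby peer that already holds the file, if any."""
          for peer in lan_peers:
              if filename in peer["files"]:
                  return peer["name"]
          return None

      def get_file(filename, lan_peers):
          source = find_on_lan(filename, lan_peers)
          if source:
              return f"copied {filename} from LAN peer {source}"
          return f"downloaded {filename} over the slow WAN link"

      office = [{"name": "pc-accounting", "files": {"training.mpg"}},
                {"name": "pc-reception",  "files": set()}]
      print(get_file("training.mpg", office))   # served locally
      print(get_file("newhire.mpg",  office))   # falls back to the WAN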

  Situation No. 3. I'm just as impressed with the Google search engine as
  everyone else. But I still get a higher percentage of bogus results than I'd prefer.
  And at certain times of the day, when traffic is high, the responses just crawl back
  to my DSL-equipped PC, indicating that the server on the other end is the
  bottleneck. Isn't there a better way? Thought you'd never ask. Why send a
  search request halfway across the country when your buddy in the cubicle next
  to you did a similar search 5 minutes ago? And, if he's already sorted through
  and found the few links that really matter, and he's part of your workgroup and
  therefore probably has the same criteria you do, why recreate his efforts? It's
  the distributed search engine. And companies like i5 Digital and Gonesilent are
  working on it now.
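
  In code, the idea is little more than a shared cache keyed by query,
  consulted before any request leaves the building (the similarity test and
  names below are invented for illustration; shipping products would be far
  cleverer):

      # Toy workgroup search cache: reuse a colleague's recent results when queries overlap.
      def similar(q1: str, q2: str) -> bool:
          """Crude word-overlap test; a real system would do much better."""
          a, b = set(q1.lower().split()), set(q2.lower().split())
          return len(a & b) >= min(len(a), len(b)) - 1

      def search(query, workgroup_cache):
          for cached_query, links in workgroup_cache.items():
              if similar(query, cached_query):
                  return links                       # served from the cubicle next door
          return ["...results from the remote search engine..."]

      cache = {"p2p file sharing protocols": ["napster.com", "gnutella.wego.com"]}
      print(search("file sharing protocols", cache))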

  Final scenario—another true story. Call it Intel@work. In the early 1990s, even
  though each processor generation was taking 10 to 40 times more processing
  power to design and simulate than its predecessor, Intel quit buying mainframe
  computers. Instead, the company began spreading the workload across all the
  workstations in its then California-wide, and now worldwide, engineering
  network—a concept it calls NetBatch. Beginning with just a few hundred
  workstations in 1990, NetBatch now coordinates the activities of more than
  10,000 systems. When Israeli design personnel are asleep, California, Oregon,
  and Texas engineers are using their computers' MIPS and memory. And vice
  versa. Intel claims it recently hit greater than 80 percent average utilization of
  the total available engineering computer resources, processing 2.7 million
  queued jobs per month. And Intel says it has saved, oh, half a billion dollars in
  the roughly 10 years NetBatch has been in place.
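
  NetBatch itself is Intel's own, but the underlying idea, queue jobs globally
  and hand them to whichever site's workstations are likely idle because it's
  night there, can be sketched in a few lines (sites, offsets, and working
  hours invented for illustration):

      # Toy idle-cycle scheduler in the spirit of (but not based on) Intel's NetBatch.
      from datetime import datetime, timezone, timedelta

      SITES = {"California": -8, "Texas": -6, "Israel": 2}   # rough UTC offsets

      def idle_sites(now_utc):
          """Treat a site as mostly idle outside local working hours (9:00-18:00)."""
          idle = []
          for site, offset in SITES.items():
              local_hour = (now_utc + timedelta(hours=offset)).hour
              if not 9 <= local_hour < 18:
                  idle.append(site)
          return idle

      def dispatch(jobs, now_utc):
          targets = idle_sites(now_utc) or list(SITES)   # fall back to any site
          return {job: targets[i % len(targets)] for i, job in enumerate(jobs)}

      now = datetime.now(timezone.utc)
      print(dispatch(["sim-run-1", "sim-run-2", "sim-run-3"], now))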

  What's being harnessed in all these cases? Cheap LAN bandwidth, compared to
  the WAN. Cheap client processors, compared to those in servers. And cheap
  client mass storage, compared to server drive arrays. Slowly but surely, the
  P2P advocates assert, the supercomputer and server are being rendered
  irrelevant.

  Power to the people

  Thanks to the Napster stigma, however, P2P still seems like a dirty word to
  some folks, particularly those who are naturally resistant to change, reinforced with a
  little corporate-culture-injected risk aversion. After all, P2P, even more than the
  Internet before it, takes power away from the network owners and puts it in the
  hands of network users.

  The revolutionary tone taken by some P2P supporters, exemplified by the
  following bulleted list (taken verbatim from the Freenet home page), probably doesn't help
  matters much in the buttoned-down worlds of Wall Street and Washington DC:

       - Freenet does not have any form of centralized control or administration.
       - It will be virtually impossible to forcibly remove a piece of
         information from Freenet.
       - Both authors and readers of information stored on this system may
         remain anonymous if they wish.
       - Information will be distributed throughout the Freenet network in such a
         way that it is difficult to determine where information is being stored.
       - Anyone can publish information; they don't need to buy a domain name,
         or even a permanent Internet connection.
       - Availability of information will increase in proportion to the demand for
         that information.
       - Information will move from parts of the Internet where it is in low
         demand to areas where demand is greater.

  Computing revolutionaries like Linux and Apache happen to be doing very
  well, though, thank you very much. And whether network managers
  proactively embrace P2P or are dragged kicking and screaming towards it, they
  will sooner or later have to face the reality that it is here to stay. "I think any IT
  manager who fails to look at peer-to-peer should be fired," says Cheryl Currid,
  president of technology research firm Currid and Company. "I can't think of an
  organization that doesn't have a crying need for more MIPS or more storage."

  It's not easy finding green

  P2P is the latest in a long line of Internet-related acronyms and buzzwords.
  Who can forget B2C, B2B, and push? Like its notorious predecessors, P2P is
  in danger of being a technology long on hype and short on financial viability.
  As Intel's self-proclaimed "peer-to-peer evangelist" Bob Knighten notes,
  "Coming up with lots of ideas for using P2P is the easy part. Figuring out
  how to make money with those ideas is the hard part."

  Knighten sees industry collaboration and interoperability standards-setting as
  prerequisites for ensuring a healthy environment for
  innovation and growth. Or, as Endeavors Technology puts it, "The peering
  infosphere of the next decade will be as rich and diverse as any natural
  ecology. However, the engineering and technical challenges are as great as the
  potential values suggested by the Sarnoff, Metcalfe, and Reed Laws."
  Knighten hopes that his negotiating skills will be aided by the fact that most
  P2P companies are small and therefore theoretically don't have much to gain
  from proprietary activities.

  Those small companies are backed by some big names, though. And that's the
  catch. Investors may still be intoxicated by the zooming valuations of past
  dot-com investments and hungry to make another quick buck, or battered by
  past dot-com investments and desperate to recoup their losses. Regardless,
  they may force P2P startups to ignore their gut feel and go proprietary,
  rolling the dice and gambling that by being first to market with the next killer
  app, they'll follow in Microsoft's footsteps and become the de facto standard
  for years to come. To use a baseball analogy, it's swinging for the fences with
  the possibility of striking out, versus the safe single.

  You've probably heard about Napster's truce with Bertelsmann AG.
  Bertelsmann loans Napster tens of millions of dollars to come up with a
  subscription-based service. Upon completion of the service, Bertelsmann will
  drop its lawsuit and open up its BMG Music catalog. Sounds good on paper.
  But who's going to pay for content they can otherwise get for free through
  original Napster or a dozen other file-swapping services? And what
  watermarking or encryption technology is strong enough to prevent any
  premium content BMG might have from falling into the hands of the
  nonpaying masses? At least CenterSpan must think there's money to be made
  here. The company paid $5 million for the assets of bankrupt file-swapping
  developer Scour Exchange, which CenterSpan plans to reintroduce under a
  subscription model sometime this year.

  Sort through the business plans of the P2P pioneers, and you won't see much
  of the notorious dot-com mantra, "We'll burn through cash building an
  audience first, then later worry about how to make money." A few ideas
  dominate. Some infrastructure providers hope to license their technology for
  use by other companies, either internally or externally. This is analogous to a
  computer software company licensing its game engine to other vendors, or a
  search-engine provider selling licenses to ISPs or Intranet developers.

  Idea No. 2 involves selling beefed-up versions of software that's free in its
  basic form, or putting it on retail shelves for those users with slow Internet
  connections or those unwilling to deal with the download hassle (something
  Netscape tried and failed to do). Idea No. 3, for those P2P providers eyeing
  the e-commerce arena, involves a cut of every transaction completed using
  their software. And in idea No. 4, companies like Entropia hope to resell
  users' otherwise-unused hard-drive space and processing power to
  MIPS-hungry companies in financial, scientific, multimedia-development (Toy
  Story III rendering, anyone?), and other industries. A few million
  SETI@home users suggest there's a viable market for processing-power sharing, at
  least for nonprofit endeavors. But the market opportunity of for-profit
  resource-sharing applications is unclear at best. That's particularly true for
  home and business users who are justifiably paranoid about unwanted
  intrusions into their computers and networks.

  The final worry, a nightmare that probably keeps every software developer
  awake at least some nights, is that Microsoft will roll P2P functions into a
  future iteration of its operating system, obliterating third-party projects in the
  process. Guess we'll just have to see what the Department of Justice does
  under a Republican administration, eh?

   Peer review

   Here are some sources of additional knowledge on the peer-to-peer trend.

   One indication that an emerging technology has hit the big time is when
   large conferences spring up focusing on the topic. Such is the case with
   peer-to-peer computing, as evidenced by O'Reilly & Associates' Peer-to-Peer
   Conference, which will take place February 14 to 16 in San Francisco
   (conferences.oreilly.com). DCI is also running one, the Summit on
   Peer-to-Peer Computing—same dates, same city, different hotel
   (www.p2psummit.com).

   Another sign of peer-to-peer's legitimacy is Intel's embrace of the technology.
   In fact, the Intel-sponsored Peer-to-Peer Working Group site
   (www.peer-to-peerwg.org) is a good jumping-off point for continued
   research. What's more, Intel says it will be spending a sizeable chunk of its
   upcoming spring Developer Forum (February 27 to March 1 in San Jose)
   on the topic.

   Endeavors Technology's white paper "Peer-to-Peer Architectures and the
   Magi Open-Source Infrastructure," an excellent read, is downloadable from
   the vendor's Web site (www.endtech.com). The sites of the other vendors
   mentioned in this article are also fine sources of information. But don't be
   surprised if at least a few of them don't exist when you check them out. P2P
   is a fast-moving technology. By the time you read this, perhaps a fourth of
   the companies will have gone out of business in this now dot-com-averse
   investment environment, another quarter will have been acquired or will
   have changed their name or focus, and new companies will have sprung up
   to take their places. C'est la vie.

   P2P is getting lots of press coverage, but aside from CommVerge (of
   course), I'd also suggest you keep an eye on Red Herring, Upside, and
   Wired. Red Herring devoted a considerable portion of its December 4,
   2000, issue to P2P. And the "Guide to Global File Sharing" in Wired's
   October 2000 issue provides a good set of links covering the file-swap
   aspects of P2P.


