Re: ***SNS*** Paying The Piper [Long]

From: Gregory Alan Bolcer (gbolcer@endtech.com)
Date: Thu Sep 21 2000 - 12:15:00 PDT


Hey Mark, some notes of interest in response
to your reader reply in the latest issue of SNS...

  First, the Intel-sponsored P2P working group meeting scheduled
for the 26th was cancelled due to overwhelming demand. Intel plans
to announce a new time, date, and location on the 22nd. [1] Second,
many P2P file storage technologies use a RAID-like algorithm on the
client side so that even if "Joe's" PC goes down, the data
is recoverable and can be reassembled from pieces spread across a
variety of machines.
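
To make the RAID analogy concrete, here is a toy sketch in Python
(my own illustration, not any particular product's algorithm) of
striping data across peers with one XOR parity block, so a single
missing peer's piece can be rebuilt from the others:

  # RAID-4 style striping with one XOR parity block; a hypothetical
  # sketch of client-side redundancy, not a real product's code.

  def stripe(data: bytes, n_peers: int):
      """Split data into n_peers blocks plus one parity block."""
      block_len = -(-len(data) // n_peers)              # ceiling division
      padded = data.ljust(block_len * n_peers, b"\0")
      blocks = [padded[i * block_len:(i + 1) * block_len]
                for i in range(n_peers)]
      parity = bytearray(block_len)
      for block in blocks:
          for i, byte in enumerate(block):
              parity[i] ^= byte
      return blocks, bytes(parity)

  def rebuild(blocks, parity, lost_index):
      """Recover the block held by the peer that went offline."""
      recovered = bytearray(parity)
      for idx, block in enumerate(blocks):
          if idx == lost_index:
              continue
          for i, byte in enumerate(block):
              recovered[i] ^= byte
      return bytes(recovered)

  blocks, parity = stripe(b"Joe's quarterly numbers", 4)
  blocks[2] = None                  # "Joe's" PC drops off the network
  print(rebuild(blocks, parity, 2)) # his piece comes back from the rest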

Some projects like Freenet actually perform
smart caching and network flow control similar to what Akamai does
on the Web side. One side effect is that
if you actively try to delete all the copies of something,
it gets more widely dispersed. Other near-term problems with
pure P2P plays include Gnutella's lowest-common-denominator "modem"
problem.
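
The dispersal effect is easy to see in a toy model. The sketch below
assumes only the commonly described Freenet behavior that nodes cache
a document as it travels back along the request path; it is not
Freenet's actual code:

  # Every node on the return path keeps a copy, so each successful
  # lookup (including one made to hunt down copies) widens the spread.

  class Node:
      def __init__(self, name, neighbor=None):
          self.name = name
          self.neighbor = neighbor   # next hop in a simple chain topology
          self.store = {}            # local cache: key -> document

      def request(self, key):
          if key in self.store:
              return self.store[key]
          if self.neighbor is None:
              return None
          doc = self.neighbor.request(key)
          if doc is not None:
              self.store[key] = doc  # cache on the way back
          return doc

  # Chain A -> B -> C; only C holds the document to start with.
  c = Node("C"); c.store["report"] = "the data"
  b = Node("B", neighbor=c)
  a = Node("A", neighbor=b)

  a.request("report")                # one lookup...
  print([n.name for n in (a, b, c) if "report" in n.store])
  # ...and now A, B, and C all hold a copy.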

P2P decentralized processing plays (UP.com, Popular Power,
Mithral.com, etc.) lack the applications and easily segmentable
problems needed to take advantage of the infrastructure; P2P
decentralized storage systems (Gnutella, Freenet, emikolo.com, etc.)
tend to have trouble with what Xerox PARC calls the "freeloader"
problem. [2] Some projects like MojoNation attempt to ameliorate this
through micro-transactions that act as credits, setting a market
price for cycles, storage, and access time.
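
A toy version of that kind of accounting (the names here are
hypothetical, not MojoNation's actual API): peers earn credits by
serving others and spend them to consume storage, cycles, or access
time, so a pure freeloader quickly runs dry.

  # Toy micro-transaction ledger for a MojoNation-style credit market.

  class CreditLedger:
      def __init__(self, starting_credit=10):
          self.balances = {}
          self.starting_credit = starting_credit

      def balance(self, peer):
          return self.balances.setdefault(peer, self.starting_credit)

      def record_service(self, provider, consumer, price):
          """Consumer pays provider for storage, cycles, or access time."""
          if self.balance(consumer) < price:
              raise ValueError(f"{consumer} is out of credits -- serve something first")
          self.balances[consumer] -= price
          self.balances[provider] = self.balance(provider) + price

  ledger = CreditLedger()
  # bob downloads a block from alice and pays 3 credits for it
  ledger.record_service(provider="alice", consumer="bob", price=3)
  print(ledger.balances)             # {'bob': 7, 'alice': 13}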

Up to 70% of all Gnutella users, despite drawing on others' drives and
connections, share no files, and 50% of all responses are returned
by the top 1% of sharing hosts. Also, the hosts most able to share
files don't always share the most desirable ones.
File sharing systems like Napster suffer less from this problem
because they centralize the file directory listings and more
desirable files are more widely distributed. Freenet goes a step
further and takes the distribution out of the hands of the user
entirely, automating it.

The big raging P2P debate is whether we need new protocols
to address P2P applications or whether HTTP can evolve and/or be
used to work in a true P2P way.
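
For what it's worth, the "HTTP can do it" side of that argument is
easy to demo, since nothing stops a desktop from acting as client and
server in the same process. A bare-bones sketch using only the Python
standard library (the port number and example hostname are made up):

  # A peer is just an HTTP server and an HTTP client in one process.
  import threading
  import urllib.request
  from http.server import HTTPServer, SimpleHTTPRequestHandler

  PORT = 8765  # arbitrary choice

  def serve_shared_files():
      """Serve the current directory to other peers over plain HTTP."""
      HTTPServer(("0.0.0.0", PORT), SimpleHTTPRequestHandler).serve_forever()

  threading.Thread(target=serve_shared_files, daemon=True).start()

  def fetch_from_peer(host, path):
      """Act as a client against another peer running the same server."""
      with urllib.request.urlopen(f"http://{host}:{PORT}/{path}") as resp:
          return resp.read()

  # e.g. fetch_from_peer("peer2.example.com", "notes.txt")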

The advantages of using a proprietary P2P protocol over HTTP,
as quoted from Kragen Sitaker (kragen@pobox.com) on FoRK, include [3]:

- adaptive caching not only prevents hot-spots from forming --- it also
  conserves network bandwidth. If you run a university network, this
  will be a relief.
- it allows large document stores to be transparently distributed
  across many machines' small hard disks.
- it is automatically redundant; it is intended to survive not only
  simple machine failures, but actual hostile attacks.
- it is better suited to anonymous publication.
- it is better suited to publication of controversial information,
  because it is intended to be impossible to remove the information
  from circulation.
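
A quick sketch of the second point above, assuming nothing about
Freenet's actual routing: if a document's home peer is derived from a
hash of its key, every peer computes the same answer with no central
directory, and the store spreads itself across many small disks.

  # Hash-based placement: the key alone determines which peer stores
  # a document. Peer names are hypothetical.
  import hashlib

  PEERS = ["peer-a", "peer-b", "peer-c", "peer-d"]

  def home_peer(document_key: str) -> str:
      digest = hashlib.sha1(document_key.encode()).digest()
      return PEERS[int.from_bytes(digest, "big") % len(PEERS)]

  print(home_peer("controversial-pamphlet.txt"))
  # every peer gets the same answer without asking a directory server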

In the same post, the drawbacks of pure, non-HTTP P2P are well summarized:
- it is not well-suited for dynamically-generated content, it appears,
  and thus for building remotely-accessible applications like Hotmail.
- it has not been tested in large deployments, and its scaling
  properties are not obvious.
- it is not well-suited for frequently-updated content, it appears.
- while it is designed to ensure that attacks from a few points cannot
  cause information to become unavailable, it does not appear to be
  designed to ensure that lack of attention does not cause information
  to become unavailable. It is theoretically possible that, on a
  Freenet, important information could be lost simply because nobody
  accesses it.
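
That last point is essentially a cache-eviction effect. A toy model
(mine, not Freenet's actual storage policy): each node has limited
space and drops whatever hasn't been requested recently, so a
document nobody asks for eventually disappears from every node that
held it.

  # Each node evicts the least recently requested document when full.
  from collections import OrderedDict

  class NodeStore:
      def __init__(self, capacity=3):
          self.capacity = capacity
          self.docs = OrderedDict()

      def insert(self, key, doc):
          self.docs[key] = doc
          self.docs.move_to_end(key)
          if len(self.docs) > self.capacity:
              self.docs.popitem(last=False)   # drop the least recently touched doc

      def request(self, key):
          if key in self.docs:
              self.docs.move_to_end(key)      # popularity keeps a document alive
              return self.docs[key]
          return None

  store = NodeStore(capacity=3)
  store.insert("important-but-ignored", "...")
  for i in range(3):
      store.insert(f"popular-{i}", "...")
      store.request(f"popular-{i}")
  print("important-but-ignored" in store.docs)  # False: unrequested, so it's gone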

Given the progress of online storage sites, the maturity of
HTTP technologies, the ease of integrating e-commerce and wireless
infrastructure, and the availability of highly scalable caching
solutions, there is no need to throw the baby out with the bathwater:
it makes more sense to develop a P2P infrastructure that complements
the scalable solutions and technologies that are currently in use
rather than reinventing the same protocol stack and the applications
that surround it.

Beyond the infrastructure plays and all the associated technical
burdens of deploying a new application-level protocol, P2P application
providers seem to take advantage of the "liveness" of data.
Infrasearch (gonesilent.com) allows live searches on data that is
not accessible to traditional search engines, Flycode and Lightshare
plan to cash in on the decentralized e-commerce model, Groove.net and
Roku appear to be working on decentralized collaboration with some
access for devices, and of course I can't get away with not mentioning
our Magi project, which looks to combine the scalability and
technology of the existing Web and deploy it out in a P2P manner to
desktops, smartphones, PDAs, and embedded devices as a way to provide
writable, two-way Web e-commerce/m-commerce and notification.

Anyways, your latest newsletter spurred me into assembling a variety
of issues across a variety of lists into a single place in the
hopes of understanding the field better, or at least being able to
explain it to my mom.

Greg

[1] http://www.peer-to-peerwg.org/
[2] http://www.parc.xerox.com/istl/groups/iea/papers/gnutella/index.html
[3] http://xent.com/april00/0003.html

-- 
Gregory Alan Bolcer        | gbolcer@endtech.com    | work: 949.833.2800
Chief Technology Officer   | http://www.endtech.com | cell: 714.928.5476
Endeavors Technology, Inc. | efax: 603.994.0516     | wap:  949.278.2805

> Mark,
>
> If the PC and disk drive and memory businesses want to rekindle demand they should really be pushing peer to peer adoption.
> This includes technology development and standards. I know Intel formed a group in August with IBM and HP to try and do
> this thought this should really be Job #1 for all these guys. I believe that Intel is holding its first user group meeting Sept 26.
>
> This would certainly revive desktop sales and require much more processing speed (as well as increase demand for broadband
> by a factor of at least 1000). I recently had a senior technology executive tell me that large corporations would never adopt
> P2P standards because if "Joe's" computer is off or wiped out you wouldn't want to lose all the data. I remember when they
> used to say the large corporations would never put there information on the Net because it was to risky. But I've thought about
> a peer 2 peer network with perhaps Caching/Storage or mutiple redundancies to solve some of these issues. Anyway they way
> things are going now the PC's golden era is over. Funny to think that Napster might not only save, but reaccelerate the growth
> rate of the entire PC industry. And AOL's IM could become the world's most valuable piece of software.
>
> Noah Blackstein
> Dynamic Mutual Funds
>
> Noah,
>
> I certainly agree with you about AOL's IM software - and, of course, so does the Federal Trade Commission, which this week
> is trying to assess what weight to give to this issue in approving or denying the AOL/TW merger - even as the Euros try to
> balance life in favor of their own conglomerates (see Quotes).
>
> I would suggest that there are various tasks that are supremely well-suited to Peer-To-Peer (P2P) computing, and others that
> are not.
>
> A long time (8 years) ago, a friend of mine, having left Microsoft, told me all about the power of this p2p stuff. I knew then
> that he was right, but it is obviously context-sensitive.
>
> It's great to use a million PCs (or Segas) to do the SETI (search for extra-terrestrial intelligence) project, because that project
> is well-suited to the demand: no particular urgency, large demand for cycles, highly centralized problem processor, no worries
> if your node shuts off or goes away --
>
> There are many projects, however, that do not fit these criteria. Indeed, there are few that are, in their own way, as casual as
> SETI.
>
> So let's bump all this up a bit.
>
> How could you create a new Distributed Network Computer System that would be more responsive, safe, and not a victim to
> having its limbs lopped off, for whatever reason?
>
> I have no problem visualizing a new world network, with various uber node manager sites, that make sure that there are
> appropriate backups, response times, and resources available, for all of this to work.
>
> In fact, I would say that it is inevitable.
>
> At a time when certain companies and pundits are arguing for a more centralized, server-centric, dumb client architecture for
> the world, it is becoming ever more clear that the opposite is happening: a Confederacy of Smart Local Machines - with lots of
> (hard disk) memory.
>
> Mark Anderson

