[Jon Udell / Derek Robinson] Distributed HTTP, Beyond Napster

Date view Thread view Subject view Author view

From: Adam Rifkin (Adam@KnowNow.Com)
Date: Thu Sep 28 2000 - 23:28:47 PDT

Wow, Derek. Cool hack.


Rohit: also check out Udell's original paper:


This is neat. I think I'm going to include these in the KnowNow whitepapers --
properly attributed, of course. :)

Revisiting DHTTP
Distributed HTTP, Beyond Napster

By Jon Udell

A few years ago, I became fascinated with the possibilities inherent in
peer-to-peer HTTP networking. I built a prototype system, and wrote about
it in several installments of my Web Project column in Byte Magazine.
Because of the magazine's demise, these never appeared online, but I did
publish a paper on the system that I called dhttp (for "distributed HTTP").
Here were the properties of dhttp that I found compelling:

Lightweight. The first prototype was small enough to fit on a
single floppy. And most of its bulk was the Perl interpreter.

Simple. To install, you just unzipped a handful of files into a
directory. To uninstall, you deleted them.

100 percent script. I wasn't the first to discover that a basic
HTTP server is a simple thing, well within the capability of modern
scripting languages. More recently, Zope has demonstrated the power of an
HTTP daemon made out of the same scripting language used to deliver Web
applications.

Symmetrical. HTTP services that are lightweight, simple, and fully
scripted can become pervasive. The distinction between "big services out
there in the cloud" and "little services here on my machine" starts to
erode. Every machine can act simultaneously as a client and a server.

It was this last point -- the symmetry inherent in peer networking -- that
thrilled me. It seemed to me that this had the power to change the world,
though I wasn't
sure exactly how that would play out. In the final chapter of my book, I
worked out more fully some of the intriguing possibilities of peer-to-peer
HTTP networking: proxying, encryption, data replication. And I concluded
the following:

Like any powerful technology, this one's a double-edged sword. Wielded
responsibly, it can enable all sorts of useful things. In the wrong hands,
it can spell disaster. As with genetic engineering, there are two ways to
respond to this dilemma:

Reject the technology You might reasonably conclude that potential
risks outweigh potential benefits. Peer-to-peer replication of code and
data is inherently uncontrollable, therefore dangerous, therefore to be
rejected.

Embrace the technology You might also reasonably conclude that if
peer-to-peer replication of code and data seems too simple and too
powerful, then the correct response is to tap into the source of that
simplicity and power, analyze the associated risks, and learn how to manage
them.

Written in mid-1998, this was (if I do say so myself) a prescient
observation. Two years later, Napster proved my point. The network really
is the computer. The client/server mode of the original Web is only a
degenerate form of the peer-to-peer mode that will characterize the
next-generation Web. And as Napster is showing us, peer networking has
disruptive effects.

A new use for dhttp?

Was dhttp ahead of its time? Perhaps. In any case, I hadn't thought much
more about it until Napster brought peer networking into the mainstream.
And then, this week, Derek Robinson dropped by my newsgroup to announce a
really interesting dhttp-based project:

I'm a Perl novice, working on an 'in-situ' WYSIWYG-style HTML editor in
JScript for IE5. No, it doesn't use the MS 'DHTML-Edit' component. Yes,
it's browser-specific, but only uses innerHTML and the Text Range object +
methods. (A version of Text Range is included in the W3C's DOM2
specification, while innerHTML has been added to the latest Mozilla
milestones, so the subset of the IE DOM it uses is as "cross browser" as
anything else out there these days -- i.e. not very!)

I've pushed JS Bookmarklets about as far as they can reasonably be taken
toward on-the-fly/as-you-surf Web-page editing; anything closer to a useful
in-situ editor entails access to the host file system, which client-side JS
doesn't allow.

It just needs to be able to write "<SCRIPT SRC='edit_page.js'>" into the
head of (a copy of) the target page. Then the rest of what's needed for
more-than-adequate HTML editing can be accomplished using client-side JS.
The copy-paste can already be done with a bookmarklet, but only on local
HTML files. I'm looking at writing a DHTTP plug-in app to accept the target
page's location.href URL from the link-bar bookmarklet, copy the page's
HTML with the <SCRIPT SRC=edit_page.js> patch, save the page to a local
directory where the external JS file lives, then re-open the doctored doc
in the same (or another) browser window.

The nicest feature is seeing your changes immediately redrawn in the
original page. Note that if the selections get too big IE will hang. There
are hints here for how to make the two apparently incompatible
content-access schemes (Text Range vs. innerHTML) work together --
especially how to get rid of the spurious HTML tags that txt_range.htmlText
inserts in selections that go across elements -- which may be useful for
anyone else wanting to try their hand at taking in-situ HTML editing beyond
bookmarklets.

Of course the limitations on JS Bookmarklets (no key-handling) don't extend
to in-page or external JavaScript. Fully WYSIWYG point'n'click HTML editing
can be implemented in an impressively tiny script, but it can't do the job
properly without a client-side server such as DHTTP. The potential for very
low-rent, ultra-flexible alternatives to Zope, WebDAV, etc., is pretty
exciting.

Bookmarklets, by the way, are small JavaScript programs packaged into URLs
using the javascript: protocol. These programs, accessed as bookmarks, are
typically used to streamline and simplify browsing, for example, by
submitting a highlighted phrase to a search engine. Derek's hack, as he
says, pushes this technique to the limit.
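To make the mechanism concrete, here is how the classic "search for the highlighted phrase" bookmarklet is put together -- the whole program is packed into a javascript: URL (the search engine URL is illustrative):

```javascript
// A bookmarklet is a javascript: URL. Build the classic
// "search for the selected text" bookmarklet as a string.
const code =
  "void(location.href='https://www.google.com/search?q='" +
  "+encodeURIComponent(String(document.getSelection())))";

const bookmarklet = 'javascript:' + code;
console.log(bookmarklet);
```

Saved as a bookmark, clicking it runs the code against whatever page is currently loaded -- which is exactly the hook Derek uses to hand the current page's URL to a local server. (In practice the code portion is often URL-encoded before being saved as a bookmark.)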




This archive was generated by hypermail 2b29 : Thu Sep 28 2000 - 23:36:52 PDT