From: Adam Rifkin -4K (adam@XeNT.ics.uci.edu)
Date: Thu Aug 17 2000 - 19:21:58 PDT
Sometimes I feel tempted to cross-post interesting stuff from other
mailing lists, even when their participants are on FoRK, too.
For example, the Two Way Web thread on email@example.com in March 2000
contains a great little discussion of "The Two Way Web" -- I go back and
reread that thread from time to time.
[Of course, when I really want to feel warm-and-fuzzy I set Don Box's
SOAP bulls-eye as the wallpaper on my desktop.]
But sometimes it's fun to cross-post things just so when I sort-of
remember seeing it I can go back through the scrapbook that is the FoRK
archive instead of trying to remember where the heck on the web I saw
it. So I'm cross-posting the list of some stuff Dan Connolly learned
about HTTP by doing it (wrong):
Says Dan: "I think we got some things right. I think GET/PUT/POST is
enough. Perhaps having a principled way of specializing POST (say... a
header field that carries a URI a la HTTP ext) should have been there
from the beginning."
Definitely got the One Way Web right. My question is, can the One Way
Web evolve into the Two Way Web?
When Rohit talks about the "Two Way Web", he discusses "the belief that
HTTP has three flaws, so fundamental they're rarely even noticed:
1. It's ONE-WAY data flow, requiring clients to initiate
2. It's ONE-TO-ONE data flow, preventing group synchronization or services
3. It's ONE-SHOT data flow, only reliable if the origin server's up"
Experimentation suggests that yes, the current (One Way) Web can evolve
into the Two Way Web. Along that line, I remember reading a 1999
whitepaper by David Raal and Scott Shattuck arguing that the web as a
medium for app development and deployment could be realized without
Java applets, despite the fact that "HTTP does not easily support
multiple connected transactions with common state, session and business
information." They determined that "it's possible to create true
client/server applications for the web that can be written almost
completely on the client side, and that the resulting performance,
scalability, and fault tolerance characteristics are compelling."
Bring it on.
Meantime, here's Dan's words of wisdom...
> From: Dan Connolly <firstname.lastname@example.org>
> To: email@example.com
> Date: Tue, 15 Aug 2000 01:15:50 -0500
> Subject: HTTP goofs and musings
> Here's some stuff I learned by doing it (wrong) in HTTP:
> * GET should have had the full request URI in there all along.
> * with TCP, the server gets to talk first. give it 128 bits
> or so to say what protocol version(s) it speaks,
> and maybe a few more for a nonce for security protocols.
> On the other hand, maybe it isn't the case in wireless
> protocol that the server gets to talk first for free.
> * two full round trips is plenty to avoid passwords-in-the-clear.
> Have the server hello include a nonce and do keyed
> digest, at least.
> * on the other hand, anonymous authentication (cookies) gets
> the job done in a lot more cases than I would have expected.
> Make sure the user agents get informed consent of the
> users, though. And let the user agent make up the cookie.
> * MIME was a mixed blessing. probably good at the time on balance,
> but maybe not something to do again. Definitely reference
> media types by URI rather than centralized two-level hierarchy,
> if you have it to do again.
> * chunked encoding was probably the last/best idea to get
> into HTTP. Good balance between human readability and byte
> efficiency. Python's pickle format is similarly good: use
> printable characters for "markup", but a fixed number
> of them, and *prefix* variable-length fields by
> bytecount, rather than scanning, byte by byte, for delimiters.
> Things like the Mac's 32 bit file types, which usually
> consist of 4 printable characters, are also good this way.
> Something tells me netstrings is almost right, but
> a little too far on the bit-tweezer end.
> * with the ratio of cpu power to bandwidth these days,
> use compression a lot more. Make zip compression the
> default for text/* types or some such.
> * allow the server to give an error message at the *end*
> of the payload as well as at the beginning. Let 'em
> say "200 here comes... [payload] OOPS! ran out
> of resources or something... this reply is incomplete."
> Let 'em put metadata (like maybe last-modified) at
> the end too, so as not to put it in the critical
> path between the request gesture and the first
> piece of data on the screen.
> I think we got some things right. I think GET/PUT/POST
> is enough. Perhaps having a principled way of specializing
> POST (say... a header field that carries a URI a la
> HTTP ext) should have been there from the beginning.
> REST is certainly a good thing (cf. Fielding).
> While I'm just musing... I think focussing on the
> agent of change -- wireless -- is an excellent idea.
> There's too much momentum in the Wintel/Mac-talks-
> to-linux/solaris/nt-server configuration to move
> it far/quickly.
> Seems like gnutella and instant messaging are significant
> agents for change too.
> Hm... so is voice-over-ip, but even I don't want
> to stretch the HTTP design space that far. ;-)
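Dan's "two full round trips is plenty" point -- server hello carries a nonce, client answers with a keyed digest -- can be sketched in a few lines of Python. The function names and the SHA-1 choice are mine for illustration (SHA-1 was current in 2000), not anything from the thread:

```python
# Challenge-response auth sketch: the password never crosses the wire.
import hashlib
import hmac
import os

def server_hello():
    """Round trip 1: server sends a fresh random nonce."""
    return os.urandom(16)

def client_response(password, nonce):
    """Round trip 2: client proves knowledge of the password
    by returning a keyed digest of the nonce."""
    return hmac.new(password, nonce, hashlib.sha1).digest()

def server_verify(password, nonce, digest):
    """Server recomputes the keyed digest; constant-time compare."""
    expected = hmac.new(password, nonce, hashlib.sha1).digest()
    return hmac.compare_digest(expected, digest)

nonce = server_hello()
proof = client_response(b"s3cret", nonce)
assert server_verify(b"s3cret", nonce, proof)       # right password passes
assert not server_verify(b"s3cret", nonce, b"x" * 20)  # forgery fails
```

An eavesdropper sees only the nonce and the digest; without the shared secret, neither is replayable against a fresh nonce.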
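The "*prefix* variable-length fields by bytecount, rather than scanning, byte by byte, for delimiters" bullet is exactly the netstrings idea Dan mentions. A minimal Python sketch, with helper names of my own invention:

```python
# Netstring framing: "<decimal length>:<payload>," -- a reader jumps
# straight to the end of each field instead of scanning for delimiters.
def encode(data: bytes) -> bytes:
    return b"%d:%s," % (len(data), data)

def decode(buf: bytes):
    """Parse one netstring; return (payload, rest-of-buffer)."""
    length, _, tail = buf.partition(b":")
    n = int(length)
    payload, comma, rest = tail[:n], tail[n:n + 1], tail[n + 1:]
    if comma != b",":
        raise ValueError("missing trailing comma")
    return payload, rest

wire = encode(b"hello") + encode(b"world")
first, rest = decode(wire)
second, rest = decode(rest)
assert first == b"hello" and second == b"world" and rest == b""
```

Note the balance Dan praises: the length is printable decimal (human-readable on the wire), but parsing is O(1) per field.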
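The compression bullet is easy to demonstrate: zlib (the deflate compression behind zip/gzip) shrinks repetitive text/* content dramatically at modest CPU cost. A quick illustration with made-up markup:

```python
# CPU is cheap relative to bandwidth: deflate a typical repetitive page.
import zlib

html = b"<html><body>" + b"<p>hello, world</p>" * 200 + b"</body></html>"
compressed = zlib.compress(html, 6)

assert len(compressed) < len(html) // 10   # repetitive markup compresses >10x
assert zlib.decompress(compressed) == html  # lossless round trip
```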
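The "error message at the *end* of the payload" idea roughly corresponds to HTTP/1.1's chunked-encoding trailers, which permit header fields after the final zero-length chunk -- though real trailers cannot retroactively change a status code the way Dan proposes. This sketch hand-builds such a body; the Status trailer is an invented illustration of his proposal, not a real HTTP mechanism:

```python
# Hand-build a chunked body whose trailer reports "this reply is incomplete".
def chunk(data: bytes) -> bytes:
    """One chunk: hex length, CRLF, payload, CRLF."""
    return b"%x\r\n%s\r\n" % (len(data), data)

def chunked_body(chunks, trailers):
    body = b"".join(chunk(c) for c in chunks)
    body += b"0\r\n"                        # zero-size chunk ends the data
    for name, value in trailers:            # trailer fields follow
        body += name + b": " + value + b"\r\n"
    return body + b"\r\n"

body = chunked_body(
    [b"partial resu"],
    [(b"Status", b"500 ran out of resources; reply incomplete")],
)
assert body.startswith(b"c\r\npartial resu\r\n")  # 12 bytes -> hex "c"
assert b"0\r\nStatus: 500" in body
```

Trailers also serve Dan's other suggestion: metadata like Last-Modified can ride at the end, off the critical path to first paint.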
mucho .siggage today boys and girls
The challenge? How to return some balance to the web by moving functionality to the client side and creating true client/server applications for the web. This was the promise of Java...with applets the web was supposed to be a balance of the best of centralized and decentralized architecture. But applets have been plagued by security problems, slow performance, bloated library requirements, inconsistent JVM support, and a tendency to crash browsers. In all our recent conversations with other developers we hear the same thing -- applets suck.
So if not applets then what?
Named in honor of the African drum of the same name, our Djembe(tm) is a distributed event/signaling infrastructure used to provide notification of events across machines, even across firewalls.
While the push for B2B continues at breakneck pace and standards like SOAP and XML are touted as the solution to the e-commerce problem, we've again taken a different tack from the mainstream. In our estimation any system that relies on an essentially synchronous model will fail to support real-world behavior. Also, while XML is valuable as a language-independent format it's not a bandwidth-efficient one. Other mechanisms are possible that offer far better performance without sacrificing readability or XML's 'self-describing-data' model -- assuming that's even a requirement for the application being developed.
Instead of trying to construct a messaging model we've focused our efforts on building a scalable event system. Much in the way that modern user interfaces work asynchronously via event notification we believe the best way to support the complex interactions demanded by e-commerce is to build on a foundation of event notification. Secure, distributed events.
It all stems from our simple definition of 'e-services'. While others spend their time talking about e-this and e-that it seems impossible to get a clear definition of exactly *what* they're talking about. We've got a simple definition. Dynamic, distributed workflow. That's right. Workflow. Because we're talking across systems it's distributed workflow. And because we're talking about an environment people envision as being flexible and adaptable we arrive at our definition of the e-*'s -- Dynamic, distributed workflow.
To support that vision we've constructed prototypes that allow companies to share data securely, across firewalls, through a simple publish/subscribe model. Imagine an Oracle database trigger -- an event handler if there ever was one -- creating a Djembe signal which could travel across firewalls to let you know your package just shipped. Nobody else can see that signal, it's secure. Your system receives the event and takes appropriate action. That's B2* in everyone's vocabulary.
Best of all, since TIBET is event enabled, it's possible to have web-based applications built with TIBET act as peers in the signaling system. This allows two users to collaborate quickly and efficiently without extensive programming complexity. Programmers for the system just keep doing the same thing they do now: write event handlers. Simple, well-understood, and universally applicable. Secure, distributed events enable dynamic, distributed workflow.
Djembe is still under development and will be fully integrated with TIBET's signaling model to allow server-to-server, server-to-browser, and browser-to-browser notification. With this system collaborative applications running in the browser will not only be possible, they'll be simple to develop and deploy.
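The excerpt never shows Djembe's actual API, but the publish/subscribe model it describes -- an event handler fires, a signal travels, subscribers react -- can be sketched as a toy in-process dispatcher. All class, method, and signal names below are invented for illustration:

```python
# Toy publish/subscribe dispatcher in the spirit of the Djembe description.
from collections import defaultdict

class SignalBus:
    """Routes named signals to registered event handlers."""

    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, signal, handler):
        self.handlers[signal].append(handler)

    def publish(self, signal, payload):
        for handler in self.handlers[signal]:
            handler(payload)

bus = SignalBus()
received = []
bus.subscribe("PackageShipped", received.append)  # an "event handler"
bus.publish("PackageShipped", {"order": 42})      # e.g. fired by a DB trigger
assert received == [{"order": 42}]
```

The distributed version adds transport, security, and firewall traversal on top of this same handler-registration shape -- which is the whitepaper's point: programmers keep writing event handlers.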
Look for Djembe in late 2000 or early 2001.
[Shows periodic table.] You know, if this were a computer language, people would say it has too many ways to do the same thing. It has too many features that work too similarly, and at the same time it's missing key features of higher abstraction that would really help an MIT grad student. Elements do multiple inheritance of properties, which is evil. Strong typing is not enforced. Nothing should be made of carbon, because organic programming gives you too many ways to get into trouble. There are too many metals, too many gasses, and not enough semiconductors like silicon. There ought to be more elements like carbon. Everything should be made of carbon atoms. Silicon is only good for sand, it should be removed. If this were really object-oriented, electrons and quarks would have the same interface as atoms and molecules. There's not enough encapsulation of electrons in the metals. There's too much encapsulation in the lanthanides and the noble gasses. And why the heck do we need so many different noble gasses anyway? They don't do anything! Throw 'em into that big hole at the top of the chart. And don't get me started on isotopes! The periodic table is a mess. It should be redesigned. -- Larry Wall, 3rd State of the Onion speech, http://www.perl.com/pub/1999/08/onion/talk1.html
This archive was generated by hypermail 2b29 : Thu Aug 17 2000 - 20:41:04 PDT