From: Jeff Bone (jbone@jump.net)
Date: Thu Mar 16 2000 - 08:56:35 PST
> [ pay attention to this one! 8-) ]
>
> You don't *NEED* to do any such thing!!! It's all freaking equivalent!!
> The only question is whether it's *efficient* for a broad problem domain.
>
Mark, I've paid close attention to everything you've said so far, and frankly,
we're just not connecting. I'm not sure what you mean by "any such thing" but let
me take a crack at this again. In my lexicon:
HTTP POST
== RPC
== remote method call
== parameterized function call
My point was, you *could* use GET for all this. I am NOT *suggesting* that we use
GET, or that using GET would be appropriate; POST is much preferred.
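To make that equivalence concrete, here's a minimal sketch of a remote method call spelled as an HTTP POST. The `/rpc/` path convention and the parameter names are hypothetical, invented for illustration; the point is only that the POST body carries exactly the same information as a parameterized function call.

```python
from urllib.parse import urlencode

def encode_rpc_as_post(method, **params):
    """Return the (path, body) pair for a POST-style remote call.

    The routing convention here is made up; any server-side mapping
    from path to procedure would do.
    """
    path = f"/rpc/{method}"      # hypothetical routing convention
    body = urlencode(params)     # e.g. "x=2&y=3", same info as a call
    return path, body

path, body = encode_rpc_as_post("add", x=2, y=3)
```

Whether you call the result an HTTP request or an RPC is purely a matter of vocabulary.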
> GET is the request. What do you need to parameterize?
Have you ever *looked* at some of these URLs? It *is* the case that parameters
get put on URLs, Mark. Every cgi-bin program in the world eats parameters. Could
you do away with the arg=blah syntax? Sure, you could flatten it all down into
the namespace and get rid of them. But why bother? Are you *really* arguing
against cgi-bin?
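For anyone who hasn't looked at one of those URLs lately, here's the arg=blah syntax in action. The URL below is made up, but it's the shape every cgi-bin program in the world eats: parameters ride along in the query string, and the server parses them back out.

```python
from urllib.parse import urlsplit, parse_qs

# A garden-variety cgi-bin-style URL (hypothetical host and script):
url = "http://example.com/cgi-bin/lookup?name=fork&count=10"

query = urlsplit(url).query   # -> "name=fork&count=10"
params = parse_qs(query)      # -> {'name': ['fork'], 'count': ['10']}
```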
Why do we need parameters? Because not all functions I might want to invoke
remotely are 0-ary. Conversely, not all functions that I want to export are
0-ary. Strictly speaking, I *could* do everything I want to do with 1-ary
functions, and in the Web case, those 1-ary functions could be written as
synthetic URLs with everything folded down into a non-parameterized URL form, but
wouldn't THAT be inefficient?!?!
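A sketch of what that flattening looks like, with illustrative names (neither spelling is any real convention): the same two-argument call written with arg=blah parameters, and again as a "synthetic" parameterless URL with the arguments folded into the path. You *can* always do the second; the first is just more convenient.

```python
# The ordinary parameterized spelling of a 2-ary call:
parameterized = "/add?x=2&y=3"

def flatten(method, *args):
    """Fold positional arguments into the path itself, query-string-free."""
    return "/" + "/".join([method] + [str(a) for a in args])

# The synthetic, non-parameterized spelling of the same call:
synthetic = flatten("add", 2, 3)
```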
But why do we need all this anyway? Because we want to build distributed systems
that mirror our understanding of how to model problems as functions / procedures /
objects / whatever. We want to have a practical, low-overhead form of remote
procedure call, or remote method invocation, or whatever. Turns out HTTP works
nicely for that. It wasn't designed for that? Well, that just underscores its
versatility, doesn't it?
> You're not caching that "document" because it doesn't make sense to, not
> because the content itself is updated very frequently. *BIG* difference.
:scratches head. Duh! I believe that was my point. But why would you *want* to
cache that document anyway? Its response was a transient thing, and the request
isn't idempotent. Aside: you are aware that HTTP accommodates resources setting
their own cache policies, right? HTTP, -wrt- caching specifically, is already
designed to accommodate the fact that people often treat requests as RPCs.
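A sketch of what "resources setting their own cache policies" looks like in practice: a server attaches Cache-Control headers per response, so cacheable content advertises a lifetime while an RPC-style, non-idempotent response opts out entirely. The helper and the 3600-second lifetime are arbitrary example choices, not anything mandated by HTTP.

```python
def cache_headers(idempotent, max_age=3600):
    """Pick cache-policy headers for a response.

    Cacheable (idempotent) responses advertise a freshness lifetime;
    transient RPC-style responses tell caches to keep their hands off.
    """
    if idempotent:
        return {"Cache-Control": f"max-age={max_age}"}
    return {"Cache-Control": "no-store"}
```

So a cache that refuses to store an RPC response isn't a failure of the model; it's the model working as designed.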
> Not at all. The *VAST* majority of web applications out there are Good.
> Off the top of my head, the only bad ones are those that tunnel other
> protocols through POST; XML-RPC/SOAP, RealAudio/Video, etc.
Oh, I get it: this is a Good vs. Evil thing? :-P
> We've been at it in a big way for 5+ years, and there's a clear winner
> right now. I'll let my bet ride.
5 whole years, Mark? Well, Mark, I too have been watching this thing develop for
some years now. I'll tell you, this sort of "conceptual dawning" I see over the
last year or so re: specifically HTTP-as-universal-RPC is a big, big deal. So
I'll see your bet and raise you. What do we want to play for?
Where's Greg Bolcer? Care to chime in here, Greg? Surely you have a point of
view on this.
>
> MB
jb
This archive was generated by hypermail 2b29 : Thu Mar 16 2000 - 09:02:44 PST