Climbing Clueful Mountain

Ron Resnick (resnick@interlog.com)
Fri, 4 Apr 1997 05:53:51 -0500 (EST)


This is an open letter to Adam Rifkin (aka I Find Karma), and to Rohit Khare,
regarding their views on the future direction of universal component technology
and the web. Basically, I'm trying to goad them into opening their kimono a bit more :-).

In a recent offline email discussion, Adam and I exchanged these views:

At 04:03 4/1/97 PST, Adam Rifkin (I Find Karma) wrote:
>
>[Ron wrote]:
>> I'm also quite intrigued with your views on components maybe not being
>> the next Big Thing, and maybe something else showing up. Presumably
>> this has something to do with your/Rohit's paper on why http
>> is going to save us all:-).
>
>Yup, those two trains of thought both travel on the same track.
>
>> I would be quite interested in doing
>> a real intensive discussion of such ideas on one or the other of
>> d-o or fork. As I've mentioned, I can live with http or any tp
>> as a transport -in my view, they all need fundamental work to
>> be right, so they're about equally broken.
>
>True, it's just that I have a hunch how to fix http, and I'm not
>so sure how to fix the others.
>
>> It's your notion
>> that html, instead of compound docs,
>> is a sound place to start building distributed apps
>> that has me shaking my head.
>
>Yes, no one is ready to believe me on that one, yet.
>
>> I like the web as a basis for building a WON or a sogs or whatever
>> you want to call it because of its critical mass, its consciousness
>> penetration, its cross platform diversity. Not for its html, http or
>> url contributions, which I don't expect to see remain intact as the
>> base for the "real thing". I think Java, and Beans, are the nucleus
>> for that.
>
>I guess I see it as: Java and Beans could not do their good work without
>using the underlying Web technologies for linking, markup, and transfer.
>And those underlying technologies could also form a layer on which other
>object models or component systems could be incorporated... that the Web
>facilities for linking, markup, and transfer form a universal piece of
>middleware for whatever d-o's happen to come along.
>
>> Can you imagine a www that's all beans and no html? why not!
>> (and my guess is you agree, at least partially).
>
>At least partially, yes. But I think that if you get linking, markup,
>and transfer implemented correctly, other things will fall in naturally.
>That's just a hypothesis, though.
>
> :) Adam
>
>

Adam also said the following on FoRK a bit earlier, regarding Apple's
decision to ditch OpenDoc:

>Remember, they [Apple] can do components without doing OpenDoc. The
>Open Object Embedding work by Bjorn E. Backlund of Xanthus (see his
>paper in the SIG CHI Bulletin, January 1997, Vol 29, No 1) on the NeXT
>platform shows great promise for that platform to deliver a compelling
>compound document framework. Heck, the work Rohit himself did with
>eText in 1994 and 1995 foreshadowed this insight - that a compound
>document framework need not be complicated to be compelling.
>
>And again, they have Beans and CORBA and LiveConnect and ActiveX to play
>with if that "paradigm shift" does become relevant.
>
>I'm personally not so convinced anymore. I think another compelling
>technology could come along and make the world completely leapfrog the
>components idea. And don't ask me what it is, because I don't know.
>Just a hunch, that component models themselves are too complicated for
>people to want to use. Great for the small cabal who reads the jillions
>of man pages and figures out what's going on, terrible for everyone
>else, means that eventually everyone else will shut the cabal down.
>
>----
>adam@cs.caltech.edu
>
>Johnnie Walker Gold Label is an inspired blend of rare, aged whiskies.
>The full aroma, smooth, rich character and uniquely long, lingering
>finish make it like no other whisky. At its heart is the rare malt,
>Clynelish, distilled using spring waters which run through hills
>containing veins of pure gold. This unique whisky will add new
>dimensions to your drinking enjoyment.
> -- 1 Litre of JWGL, a specially selected blend of rare 18-year aged
>whiskies. 43% alcohol by volume. :)

--------

Well, as threatened, I've decided to call Adam's bluff and move this discussion
out into the open on both d-o and FoRK (sorry for the duplication to
subscribers to both lists - I had to hit Rohit, who's on FoRK but isn't on d-o,
but these are the kind of issues I generally like to deal with on d-o, hence
both lists).

Actually, I'm quite intrigued by Adam's views (presumably shared
by Rohit). I've been thinking about them a fair bit in the last few days,
with the following synopsis, based on the ubiquitous 97% rule
(ie, 3% of the population "get it" at any given time, while 97% don't.
If you have to ask what "it" is, you don't get it.).

So, here is a climb up "Clueful Mountain", where each level is a 97%
plateau, and the rest of the mountain belongs to the 3% above it:

97% of the population at large don't really get any of it at all.
Network computing, distributed systems, object computing, and the
urgent, pressing impact of these things on humanity at large just don't sink
in with the vast majority of folks, even with all the Internet hype we've had
in the last few years. To the average Joe (as opposed to the 2 distinguished
Joes on our lists), the whole Internet thing is just a passing fad,
kind of like CB radio.

Of the 3% that can get beyond that, 97% of them are your typical web
jockeys - html, cgi, cookies, hotmetal, frames'n'forms, navigator-gold types.
Sure, the web is real. Sure, e-commerce is around the corner. Yup, we
need better security. But the paradigm is basically to take the web of
today, and just do more of the same - tons and tons and tons of it, ad nauseam.

Now we're up to the 3% of the 3% who can see that the web of today,
even spiffed, isn't enough. 97% of _them_ realize that what's missing is
a robust computing model for building distributed apps. And once you're
talking about *computing* on the internet, java comes out more
or less in the same breath (as well it should). These 97%'ers are the ones
who are all raring to go out building piles of applets all over the place:
write once, run everywhere - but how exactly do we glue all the applets
together? No idea from these folks - applets are cool just on their own,
thanks very much.

So now we've got 3% of 3% of 3% who can see that just computing on
the web isn't it - we need a component model, and a proper distributed
object model. 97% of them see that
we need dynamically
binding, smart components that find each other, figure out what they're
good for, use each other, plug together, display their views as appropriate.
And, we need to build in sync & async invocation models, CORBA IDL
and (perhaps) IIOP, multicast and replication transport semantics, and a
host of object services.
Here we've got the folks who've graduated from the Orfali/Harkey/Edwards school
(or the Oliver Sims school, or equivalent) and lapped up Taligent & OpenDoc
& Newi, and learned to grudgingly respect OLE (and now ActiveX) as being a
part of the same componentware vision (although hoping fervently that
it ultimately loses). With the passing of OpenDoc (and really, long before
that), this camp has for the most part been at the Java Beans party.
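
To make the sync-vs-async invocation point concrete, here is a minimal Java
sketch - every name in it is invented for illustration, and it isn't tied to
any real ORB or component framework: the same component offers a blocking call
and a callback-style call, so the caller chooses its invocation model.

// Hypothetical sketch of the sync & async invocation styles mentioned above.
// All names here are illustrative, not from any real component framework.

interface ResultListener {
    void onResult(String result);          // invoked when an async call completes
}

class QuoteComponent {
    // Synchronous invocation: the caller blocks until the answer comes back.
    String getQuoteSync(String symbol) {
        return symbol + ": 42.0";
    }

    // Asynchronous invocation: the caller registers a listener and returns at once.
    void getQuoteAsync(final String symbol, final ResultListener listener) {
        new Thread(new Runnable() {
            public void run() { listener.onResult(symbol + ": 42.0"); }
        }).start();
    }
}

public class InvocationDemo {
    public static void main(String[] args) {
        QuoteComponent quotes = new QuoteComponent();
        System.out.println("sync:  " + quotes.getQuoteSync("SUNW"));   // blocking call
        quotes.getQuoteAsync("SUNW", new ResultListener() {            // non-blocking call
            public void onResult(String result) { System.out.println("async: " + result); }
        });
    }
}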

OK, we're now at the 3% of 3% of 3% of 3%: the ones who are into Beans
and componentware, but are *still* looking for more. Btw, this is - until
thinking about Adam's ideas - where I've put myself, and where I think
the d-o list folks are generally at. Let's call ourselves the 97% of this level.
And 0.03**4 of the general adult population of the literate world is
about 8.1E-7 * a couple billion, say, or about 1620
folks who get this far - sounds about right, probably even a bit generous.
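
(A quick back-of-the-envelope check of that plateau arithmetic, assuming the
"couple billion" literate adults above - just a toy calculation:)

// Back-of-the-envelope check of the Clueful Mountain arithmetic above.
public class CluefulMountain {
    public static void main(String[] args) {
        double fraction = Math.pow(0.03, 4);        // four successive 3% cuts
        double population = 2.0e9;                  // "a couple billion", per the text
        double plateau = fraction * population;     // ~1620 people at this level
        double karmakids = 0.03 * plateau;          // the 3% above: ~49 people
        System.out.println("fraction = " + fraction);    // ~8.1E-7
        System.out.println("plateau  = " + plateau);     // ~1620
        System.out.println("next 3%  = " + karmakids);   // ~49
    }
}
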
The "more than Beans" we want is the realization of the utterly outrageous
mindboggling universality of bits, packaged into smart components (what d-o
likes to call shadows) in every conceivable aspect of
human existence (recall the diaper example used in a recent d-o post).
We recognize that Beans, for all its promise, is
hardly the be-all end-all component model we'll be building to. We understand
the need to model components higher up and be prepared to build disposable
implementations in OD yesterday, in Beans today, in a successor tomorrow.
We understand the need to build robust distribution models, async message
delivery, thread safe handlers, etc., and to make that functionality available
at the higher layers of the visual components. (ie, making them visual and intuitive
to an end user doesn't mean making them braindead and undistributable.
These are NOT VBXs, duh, and they're a lot more than OpenDoc parts too).
We also understand the need to put it *all* in there: agents/aglets, workflow
and long transactions, core reflection, serialization, ...
I think the Caltech Infospheres project speaks a very similar language, which
is presumably why we non-Caltechers have found so much in common here
with them - Joe Kiniry, in particular.
To summarize, I think this level of "getting it" is still pretty advanced, *way*
beyond where most folks are at, and I'm comfortable with it, and fairly
pleased with myself for making it this far. Cluons flow pretty freely around
here :-).

But, if we're still a 97% of a population of 1620, there's another 3%, or
about 49 more out there, right? That high elevated plateau where a sudden
superconductivity phenomenon occurs in the cluon field, and they now float
effortlessly and resistance-free :-).
It couldn't possibly be Adam & Rohit and maybe some silent partners up here,
now could it? (Perhaps Dan Connolly with his "WIDL: Someone caught the vision!"
FoRK post?)
So what is it that this devious bunch is up to?
Something so wild and so retrograde... a high end
component model built out of html!!?? This is like bellbottom pants -
so far out they're in again. Careful folks, these are *not* the naive html lovers
of the lowly 97% of the 3% left way below on the foothills of Clueful Mountain -
this bunch is far too sophisticated for that.
Rather, this is that wonderful crew, the karmakids who sit crosslegged
on top of the mountain in cluon heaven
- the 3% of 3% of 3% of 3% of 3% who know all about the Power Of The Beans (i.e. they
give you gas :-), but who claim to have
perceived something even more fundamental: as Adam puts it, _linking_,
_markup_ and _transfer_. And, perhaps more significantly, simplicity where
object based components have complexity (witness Adam's "cabal" comments).
Not to mention overwhelming
marketshare and mindspace: how much html/http is actually out there,
versus how much beans? how much corba? how many applets, for that matter?
Maybe html does rule, just because it's popular, it's fun, people like it,
and it enables a fundamentally better & different component model than
a non hyper-linked one.
Not that our karmakids are anti-java - far from it, as noted in Adam's
comments. They just believe that the
hyperlinked web fabric offers a more basic platform to build on.

[Of course, I'm making all this up. Adam&Rohit haven't actually *said* any
of these things, at least not to me. But I'm interpolating from the hints
they've dropped.]
Hmmm food for thought.
--------
thinking.... digesting... burp..
------

Ok, time's up; I thought about it.
And, I reject it. (How impetuous of me - first I attribute
ideas to Adam&Rohit based on pure supposition, then I go ahead to reject
them :-) -
like I said, I'm baiting them to come out in the open and take me on).

I gladly move back to my happy little niche one step behind the karmakids,
looking for my components to save me, and for html to go to its ultimate
demise. Why?

Let's take each of Adam's buzzwords: linking, markup & transfer. I hope
I don't misinterpret what he meant by each (always a danger in a missive
like this :-).

Linking - our famous friend the <a href=> </a>, that great invention of TB-L
that went beyond mere tagged markup languages like Script, to allow
a *web* of documents. (All right, I can quit now, you don't need the primer,
I know :-). Of course, linking existed well before html - I remember fooling
around with it in Apple's HyperCard (I think that's what it was called)
in 1991, for example.

Markup - that lovely 60s technology of logically designating pieces of
a document to be interpreted according to embedded tags. Been done
in TeX/LaTeX, Script, and various GMLs. Powerful, expressive, portable,
not to be dissed, but hardly rocket science.

Transfer - shipping stuff around, right? Pouring bits down the pipes, serving
them up from servers, shuffling them to clients..... html on the network ->
http.


So what gives? Is there really something here that has to be provided in a
substrate subtending the component model? Or are these, rather, features
that can be provided more naturally by the components themselves?

First off, I think we can all agree that just about any service can be
provided above or below just about any other one. It's that lovely loopy GEBish
self-referential nature of computing systems and von Neumann architectures that
one layer's code is the next layer's data. So, there's no objection to the notion
that you CAN put linking/markup underneath the components. The objection is
more along the lines of: why would you WANT to do this? This is a BAD idea :-).

--- Going into a weird explanation of why I think objects are the answer ---
I'm hardly an object freak - I mumble the lingo, but it's really not my scene.
But the one part of it I buy into in a big way is the notion of encapsulating
behaviour with the data it acts on, assigning clear responsibilities to the
candidate objects of the system, and designing with objects all the way down.

This comes from my quest to ultimately find the one simplifying rule
of behaviour that explains the mechanics of everything. I think it comes
from my physics background - looking for the Grand Unification of everything
into a simple concise statement of how things work, and then expressing
the vast diversity of specific instances as the myriad boundary conditions/ICs
you can apply to the core dynamic equations.

In computing systems, the closest I seem to come to this is the notion of
patterns which capture the repetitive symmetries of the same damn thing happening
over and over again all over the place - very similar to the physical notion
of looking for symmetries in phenomena, and isolating the invariants of a system.
And, the most basic pattern I seem to find is Give/Take. Sure, in its most
common form, people talk about Give/Take as applying to economic systems and
transactions. It's relevant there- but I think that economic transactions are just
like any other transaction (or *exchange*, as I prefer). (ie, there
is a symmetry between economic exchanges, physical exchanges, idea exchanges,...)
Ultimately, everything
can be reduced to "here is A" and "there is B" and "A does something or other
that affects B". I think a different way of stating Give/Take in physical
systems is Newton's 3rd Law (we had this discussion on d-o long ago, when
it was called forr).

Since Give/Take is so basic, and is my candidate Grand Unification theory,
it applies at the very base, and everything else gets layered on top. Just as
you unify forces at the level of strong, weak, EM, gravity, and the subatomic
particles they act on, and then you start building your complex nucleons, your
atoms, your molecules, your higher level orders of matter on top. So too here.
Get the GUTS
right at the bottom, and the rest is just applied GUTS - nuclear physics is
applied GUTS, atomic physics is applied nuclear physics, chemistry is applied atomic
physics, biology is applied chemistry, etc. That's the world of the atoms.

In our world, the world of the bits,
Give/Take is GUTS, and the native particles it wants to operate on
are objects, right at the bottom of the scaffolding. If you want things like
linking and markup, I can't see any reason why they need to violate this principle. Build
them as applied Give/Take. Eg, have a component model in which component "A" maintains
object references to 'linked' objects B, C, whatever. If A is selected to display
its view (it renders itself), part of that rendering is visually displaying
its possible links, and allowing an external user/component to follow that link
(ie press the mouse button).
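
Here is a minimal Java sketch of that idea - all the names are mine, purely for
illustration, not anybody's actual framework: a component keeps ordinary object
references to the things it 'links' to, renders those links as part of its own
view, and following a link is just one more Give/Take exchange between components.

// Illustrative sketch only: linking as applied Give/Take, with made-up names.
import java.util.ArrayList;
import java.util.List;

class LinkedComponent {
    private final String name;
    private final List<LinkedComponent> links = new ArrayList<>();

    LinkedComponent(String name) { this.name = name; }

    void linkTo(LinkedComponent other) { links.add(other); }   // A holds references to B, C, ...

    // Rendering the component's view includes displaying its possible links.
    void render() {
        System.out.println("[" + name + "]");
        for (int i = 0; i < links.size(); i++) {
            System.out.println("  link " + i + " -> " + links.get(i).name);
        }
    }

    // "Pressing the mouse button" on link i is just another Give/Take exchange:
    // this component gives control, the linked component takes it and renders.
    void follow(int i) { links.get(i).render(); }
}

public class GiveTakeDemo {
    public static void main(String[] args) {
        LinkedComponent a = new LinkedComponent("A");
        LinkedComponent b = new LinkedComponent("B");
        LinkedComponent c = new LinkedComponent("C");
        a.linkTo(b);
        a.linkTo(c);
        a.render();     // A displays itself and its outgoing links
        a.follow(1);    // following the second link hands control to C
    }
}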

Adam, on the other hand, seems to imply that for him the primordial soup layer
(the GUTS of his system) is the set of services the present web offers.

Recall our exchange:
me:
>> Can you imagine a www that's all beans and no html? why not!
>> (and my guess is you agree, at least partially).

[Note: that "all bean" web I refer to, with no html to muck up the works,
is exactly the GUTS I allude to above]

Adam:
>
>At least partially, yes. But I think that if you get linking, markup,
>and transfer implemented correctly, other things will fall in naturally.
>That's just a hypothesis, though.
>
> :) Adam

I dispute the hypothesis. I challenge you to offer more evidence.

Ultimately, my concern is that we've got massive amounts of bits out there now,
and we've only barely started. Wait till webcams and webmikes stop being a novelty,
and every last streetcorner and 7-11 has a real time feed into the network.
Even that's nothing - wait till each of us is wired and our pulses, body temperatures,
blood pressure, present location, is scooped up real time and fed into the bitspace.
The trouble is that if we're not careful, we'll wind up with incalculable piles
of dumb bits. Witness the Altavista problem noted in a recent FoRK post - there's
too much crap and nobody can find anything anymore.
People seem to think (eg Negroponte gives this impression in
Being Digital) that just *digitizing* the information is enough. Once it's bits,
we can manipulate them endlessly. The trouble is that dumb bits aren't much
better than analog info. There's a "quality of bits" hierarchy. Scanned text,
or a fax, is "bits", literally, but is highly degraded info, unless you painfully
start to "upgrade" the bits with OCR etc. Pure ascii text is one step up, but
they're still pretty dumb bits - you can edit them, but they have no idea what
they say about themselves. That's the problem Altavista faces - it's trying
to do an "information upgrade" transformation on ascii analogous to OCR on a fax.
There's a thermodynamic principle here: You have to invest work (OCR, natural
language searching, etc.) to move from a high entropy state to a lower entropy state.
What we're striving for is "smart bits" - bits that
know what they are about (ie they have a metadata facility), can advertise themselves,
can describe themselves to people and to other bits (ie components). We need
traders to offer a marketplace for metadata commerce (wooh, I kinda like that :-).
If the bits, as they get fed into the bitspace in real time, already have their
smarts tagged onto them, you invest the thermodynamic work to get them low-entropy
on entry. This has two advantages: One: the nearer to creation you do this, the lower
the work is likely to be (it's like a Carnot cycle :-), since the bits are closest
to their point of origin, where the context that *knows* what they're about still exists.
Two, an amortization
argument. If the bits are tightly bound with their smarts (basically - if
you practice true object encapsulation), you only invest the effort to create
the smart object once - after that they're "free" to use as smart bits (free from
a work-investment perspective, not necessarily in $, where they may well be pay-per-use).
In contrast, in a polluted sea of dumb bits (the web today, heh heh),
you have to invest the same damn Altavista searches over and over -
the source bits never get any smarter, since the search engine treats them
as read only. Sure, it can cache stuff, but it's really not the same thing.
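
As a rough sketch of what I mean - again, every name here is invented for the
example, nothing more: a 'smart bit' carries its metadata from the moment it
enters the bitspace, and a trader answers queries from that advertised metadata
rather than grepping dumb text after the fact.

// Illustrative sketch of "smart bits" and a metadata trader; all names invented.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

interface SmartBit {
    Map<String, String> describe();   // the metadata facility: the bits say what they're about
}

class SensorReading implements SmartBit {
    private final Map<String, String> metadata;

    // Metadata is attached at creation time, where the originating context
    // still knows what the bits are about (the low-entropy-on-entry argument).
    SensorReading(String location, String kind, String value) {
        this.metadata = Map.of("location", location, "kind", kind, "value", value);
    }

    public Map<String, String> describe() { return metadata; }
}

class Trader {
    private final List<SmartBit> offers = new ArrayList<>();

    void advertise(SmartBit bit) { offers.add(bit); }   // the bits advertise themselves

    // A query is answered from advertised metadata, not by re-scanning raw content.
    List<SmartBit> find(String key, String wanted) {
        List<SmartBit> matches = new ArrayList<>();
        for (SmartBit bit : offers) {
            if (wanted.equals(bit.describe().get(key))) matches.add(bit);
        }
        return matches;
    }
}

public class MetadataCommerceDemo {
    public static void main(String[] args) {
        Trader trader = new Trader();
        trader.advertise(new SensorReading("7-11 on Main St", "webcam", "frame-0042"));
        trader.advertise(new SensorReading("streetcorner 5th/Elm", "pulse", "72 bpm"));
        System.out.println(trader.find("kind", "pulse").size() + " match(es) for kind=pulse");
    }
}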

I remember Joe Kiniry's paper at last year's OOPSLA workshop being in this
direction, although also still starting with html metadata. WHAT'S WITH ALL
YOU GUYS AND YOUR PASSION FOR HTML ANYWAY?
Html just isn't going
to be smart enough, I fear, to support such intelligence. It's not cut out for it.
Not to say Beans is there either, but it's a hell of a lot closer.

-----End of warped physics/ object lesson--- :-)

So, on purist "let's get it right" grounds, I reject the notion of html as a "good"
place to start. html is no place for any self respecting coder to want to develop
expressive powerful behaviour. We haven't laboured for half a century in industry
and academia to understand
how to build reusable, flexible, robust software just to go back to a tool that
was designed for *documents*, not for *software*. (This is a big part of the
confusion here, I think - I'm quite against the whole notion of document-centric computing,
and compound documents as terminology. I think documents are *not* a good metaphor
for the daily activities in our lives that networked bits are going to permeate. When
I go shopping or drive my car or have a beer, there are no "documents" involved.
But the networked bits will be in all these places, and more.
I think the Taligent "people places things" imagery, and the Orfali-coined
"shippable places", are much more apt).

Having said all of this, I now hedge all my bets.

The one thing I've learned in the last few years is that it's absolutely impossible
to predict the future of the technology - what's going to work, what isn't. What
people will buy, what they won't. What standards count, which ones don't.

The karmakids might well be right: html might evolve into a bona fide object model just
due to the sheer weight of how much stuff is html already. Then again, we have piles
of legacy COBOL and PL/1, and I don't see any calls for turning that into
our base for "the real thing". Then again, Missy could win due to their
sheer weight and ActiveX becomes it. Or a dark horse could pull up, just
as Java did a couple of years ago, and we all lurch another way.
So the viable options, right now, on the table seem to be:
(1) Beans, (2) html-base, (3) ActiveX, (4) dark horse. Note that I'm already
turfing OpenDoc and CommonPoint (as initially incarnated) and Newi etc.
Also turfed is any sense that CORBA on its own is going to define our
component model. I think BODTF,
as Mark has noted, may influence the list of viable ones, but isn't going
to generate one. CORBA isn't "the real thing", but it is an important influence on it.
Sad but true. That's not because of their technical approach, which is fine;
it's just a cold statement of their "imagination capture" factor in the market.

My hedge, then, is to say that I would personally prefer to see Beans do it all.
It's the cleanest starting point, it has the best technical story, it's free,
it's open, it has tool support. I can happily build my Give/Take GUTS from
here. However, neither I nor you nor even Bill Gates or JavaSoft or OMG (or the W3C :-)
has the power to pick the winner. The winner could be any of the above, and
we'll all have to live with it, for better or worse. Are you prepared to
live in an ActiveX world? It's about as palatable to me as your html-world,
but I'm bracing for any of these.

So, to preserve sanity, as well as to make real progress in bracing oneself for
the onslaught of the revolution, and to capitalize and profit from that revolution,
my philosophy remains:

1. Pick a domain. Something you know something about. Something that there is
a perceivable demand for, and capital moving around in. Something that you
can grab onto, where you can understand the basic invariants of behaviour attached
to the *humans* in the domain, not the *technology* serving it.

2. Model it like crazy. Find its basic components, its patterns of behaviour.

3. Pick a tech. You could even do this randomly - draw straws from the ones
mentioned above. Probably you'd have a reason to pick one or the other. I'll
go Beans, the karmakids go html, Joe Barrera can go ActiveX :-).

4. Implement. Pay attention to robustness, to threadsafe resources, to memory
allocation and scaling issues, to exception handling (read the Waldo paper :-).
Realize all through this step that you're building disposable implementations.
Don't expect the code to last.

5. Budget for a technology exchange about every couple of years. Your component
model is your reuse, not your code (see the sketch just below).
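
A minimal sketch of that last point, with all the names made up: the component
model is a small, stable interface, and today's Beans-flavoured implementation
is just one disposable body behind it, ready to be swapped for tomorrow's
without touching the callers.

// Illustrative sketch: the component model (the interface) is the reuse;
// the implementations behind it are disposable. All names are made up.

interface OrderComponent {                       // the stable component model
    String place(String item, int quantity);
}

class BeansOrderComponent implements OrderComponent {          // today's implementation
    public String place(String item, int quantity) {
        return "beans-impl: ordered " + quantity + " x " + item;
    }
}

class NextBigThingOrderComponent implements OrderComponent {   // tomorrow's replacement
    public String place(String item, int quantity) {
        return "successor-impl: ordered " + quantity + " x " + item;
    }
}

public class DisposableImplDemo {
    public static void main(String[] args) {
        OrderComponent orders = new BeansOrderComponent();      // swap this line in a few years
        System.out.println(orders.place("diapers", 3));         // callers never change
    }
}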

With this philosophy, I believe I can hang onto my Beans vision, just as OpenDoc
folks hung onto theirs until their world collapsed :-), but I'm already
emotionally prepared for the swing to something else. Maybe your html really
takes off. I hope not. But I'm ready.

Ron