[Munchkins] WTH am I doing with my life?

Rohit Khare (rohit@uci.edu)
Mon, 21 Sep 1998 05:43:13 -0700


[Apologies in advance: this rant indulges my habits of melodrama and
wordiness. It does lay out my professional vision, though.

Midnight writing makes midnight reading,
Rohit Khare ]

This weekend, I have spent a lot of time assiduously doing... nothing.

This is not just a flip comment.

It hurts. It hurts badly.

Not just morally, but by blowing more personal (& one organizational)
deadlines.

Partially, it's because I'm still moving in -- all kinds of arrangements to
be made to get the place in order, lots of shopping, etc, etc. The Open
House next Saturday is more than just a housewarming, there's some personal
news to celebrate, too (but I'm not going to blame it on her!).

Partially, it's envy of my new housemate, a first-year in Philosophy about
my age who, at this moment, does not owe a single bit of work to anyone on
the planet, a state of grace I have not seen since July 1991, and before
that only the summer of 1987.

Not one guilt-free day since -- 2,500+ days of running away from
*something*.

Mostly, it hurts because the questions I'm putting off right now strike at
the heart of what I want to accomplish with my life. It's a slow, slow
process: getting off of the treadmill of "Things I Can Do" onto "Things I
Want To Do."

So far, I have marked my career by pitching in, helping start,
proselytizing, and developing perspective. Not known for staying close to
home, finishing, or detail work.

This hit me just now walking through the RDF spec. I'm at an IHOP at 2 in
the morning trying to compile course notes to walk through it. I'm looking
at the acknowledgements, and I'm trying to think of what it would be like to
be one of the people in there.

I *understand* this stuff. I could have been useful. It fits in with a lot
of things I have tried to do (e.g. PEP as 'protocol-metadata about
resources').

The statement above is true for a lot of values of 'it.'

I'd shamelessly claim there *is* a germ of truth (skill) to my involvement
in so many activities -- even if peripherally. Security, markup, user
interface, OO A&D, economics, protocol design, all of those myriad little
threads I'm chasing down on the conference circuit.

And yet -- WTH am I *really* going to do?

Am I ever going to claim a problem (or solution) as my own?

Until now, I've written off all the years I've spent partially spelunking
into subfields and yanking back up to the surface as an investment in my
breadth, credibility, and actual intellectual capital.

(Translation: "I actually believe there's some problem large enough (& worth
solving) I'm gonna need all these skills").

[Not that I can ever reel myself back to the surface easily. Being the kind
of loudmouth, overcommitted personality I am, I leave obligations behind in
every borehole. I still hang around -- and owe things to, or speak about --
the latest developments in each. (Translation: "I still spend lots of time
reading, attending, and speaking about each of the areas I've been
previously implicated in")]

The debate over notifications brings this into sharp relief. Here's an IETF
working group effort that seems like low-hanging fruit. Our team at UCI +
Adam has done everything it can to clear a path. The managerial and
technical scope is within my grasp. I could finally Win One.

But what would I win?

It seems unfair that right on the verge of a winning project (finally!), I
have to stop and reassess the Big Picture. I'm trained to do this kind of
technical negotiation and standards battle. I can follow in the footsteps of
my elders in the research group. It'd be so easy to Just Do It.

Why am I afraid?

Why am I running away to a *grander* vision -- which I'm also afraid of
explicitly stating?

I am so afraid of writing down the Munchkin plan because it would be a
yardstick to measure myself against -- a public yardstick -- and then where
will I be if I come up short?

Here goes.

-------------------------------------------------------------------------------------
Munchkins from First Principles (a sketch)
-------------------------------------------------------------------------------------
(0) I want to do something Big.
(1) The future of networking is a Big thing
(2) The Internet will scale extremely well in the near future
(3) Nanotechnology will happen in my lifetime

The zeroth premise is the hardest. Very, very, very hard to admit on
'paper', even if I seem like such a monster of ego.

(0) I want to do something Big.

Argument: Because I can't imagine not.

I really can't see myself doing something incremental. My spirit will
wither, like it is this morning, and I'll finally have that long-awaited
breakdown, and I'll have to recenter my life around some other kind of
Reality than the Achievement frame I've been in for 23 years. It's a sad,
sad, immature thing to believe, but I don't want to live my life if it's for
mediocrity. Fighting a jihad and losing -- fine by me. Life without drama?
I'm addicted, I can't go on without the daily fix of 'As the FoRK turns.'

Sigh. But deep down inside, I do believe it. I'm ashamed to admit *I think
I'm special*.

[There are other ways of stating (0). Sometimes I hide behind a
religio-philosophical front: it's my duty. Sometimes it's a fatalistic one:
I'm an only child, born in the States, with so many, many gifts compared to
what my life should have been in India that I feel terribly indebted.

Here's another statement of (0): "I have very high expectations for myself".
Unfortunately, I have to face up to its converse -- that in my dark heart,
"I have low expectations of everyone else". Self-effacement can still mask
contempt. I hate admitting that. But contempt is the story of the 97%
theorem, isn't it?]

(1) The future of networking is a Big thing

Argument: Distribution seems to be the latest and least understood source of
system complexity. I used to passionately believe it was languages &
methodology, as a young OO revolutionary.

Coordination is what links computing and economics to me, my twin
disciplinary hats. Designing a business process and a network protocol are
much the same thing: contracts and enforcement between autonomous
components.

Why don't I think that Scaling and Mathematics are Big things -- like I used
to when I started out doing graph theory and under Caltech's theoretical CS
tutelage? Frankly, for no better reason than I'm not good at them.

(2) The Internet will scale extremely well in the near future

Argument: It's not just that JPL's already discussing Interplanetary IP, or
that IPv6 has n quadrillion possible IP addresses per cubic angstrom. It's a
(depressing) realization that the growth of the net in users and information
is balanced closely enough against Moore's law that we *will* go on maintaining
massive router tables, DNS caches, and other centralized structures because
we can maintain a stable price point for those boxes at O(10^5).

Of course, those growth constants are not just casually linked -- they are
causally identical. Free market economics is working extremely efficiently
at closing the loop between router, fiber, cable, and host prices with the
cost of the net -- and, hence, the number of users and files.

The Internet is in an island of stability that may well reach up to a
billion users and even home devices. What could possibly cause a -- sorry
to use the phrase -- paradigm shift?

(3) Nanotechnology will happen in my lifetime

Argument: First, I believe (3) because of (1): I *want* to live in
interesting times, and I have to simply live my life assuming this
breakthrough will come sooner than planetary self-immolation. Second, I
believe (3) out of moral justice: as part of my heritage, I am painfully
aware that 90-95% of the planet does not live a life even close to our
standards. I have to believe something Big is going to make life on this
planet a little more sure. Third, I believe (3) because it's technically
feasible. Without replaying Eric Drexler's arguments of the last decade,
just remember that every cell in the human body is testament to molecular
engineering.

(3) is the logical consequence of (1) and (2): something that will upset the
applecart of the Internet as the dominant frame for computer Networking, and
Coordination broadly.

[The remainder of this rant discusses that claim, culminating in a defense
of my current career path]

-------------------------------------------------------------------------------------
What might lie beyond the Internet-paradigm's grasp?
-------------------------------------------------------------------------------------

Today, computers are born without identity and must be Administered onto the
network. Each new Internet host depends on the entire planetary governance
system of Internet numbers, names, ethernet numbers, and so on to plug into
the grid. *People* have to be aware of what's inside and outside, to orient
the flow of communications. MX records, default routes, firewalls, all
entangle Administration into the network.

Even the rosiest future for instant-on, mobile IP connectivity depends on
someone pinning down most of these aspects: in-home DHCP services (even if
IPv6 used GPS coordinates as network prefixes, would that make it any more
usable or scalable?) or wireless ad-hoc networks with a fixed net-id, or
CPU's shipped from the factory with unique id numbers.

The ulterior conceit is that there even *is* One Network. The ultimately
futile struggle to maintain a single system image: the hope that if you took
a global snapshot, there really is a single directory of names, addresses,
and net-ids; that routing protocols all converge to a consistent map. Simply
scaling the Internet numerically, by throwing more hardware at the problem,
may work, but it's not gonna be new research.

Consider instead a rice-grain sized computer ("The Basmati 3000":-). Without
a single element of nanotechnological wizardry, I'd bet that well within ten
years, an analog, asynchronous VLSI chip powered off of some fuel-air or
inductive field source could offer billions of instructions, bytes, and run
at a high enough clock rate to *directly* manipulate radio waves. Async
gives us very low power and very high clock speeds on demand (Caltech's
finishing a MIPS R3000 clone combining jaw-dropping integer performance with
ultra-low power -- and simulating at over 1GHz, cooled, on a
not-so-aggressive manufacturing process). Analog provides all the wireless
interfaces -- if not radio, then solid-state lasers.

And it'll be cheap. It'd have to be. Barely a few bucks to manufacture. And
though I'm not too sure how much of a premium the IPR will be worth,
competition should bring it in for less than the 20x factor of Intel chips.

MTV can already afford to give away pagers for advertising value alone.
Let's posit something fun:

Suppose munchkins are free.

Now, research gets interesting. I don't need to fill in all the usual
techno-futurism about "your right cufflink talking to your left cufflink via
satellite", or eggs that tell the refrigerator to order replacements when
you make an omelet. The relevant discontinuity is that there's no
Administrator.

[Note: I am *not* claiming I've "discovered" a new problem. Related
buzzwords include: amorphous computing, ubiquitous computing, and
self-organizing systems.]

Indeed, there is no visible 'network interface.' Remember, this is the
bedrock of Internetworking: IP addresses define network interfaces -- not
hosts (and certainly not users!)

At most, munchkins are 'born' with a host id: a private key. A *lot* of
classical IP solutions go haywire when you can only enumerate machines
rather than links. Interdomain routing, for one.

Purely wireless interfaces truly shatter the notion of "inside" and
"outside" a network: like the scales on a fish, they overlap everywhere.
Second, wireless surfaces the physical reality of multicasting. Just as the
planet's GPS infrastructure stands to obviate decades of theoretical
research on network clock synchronization, pervasive & guaranteed link-layer
multicast makes algorithms like at-most-one response and 'cache snooping'
more practical.
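
Here's the flavor of that link-layer trick as a toy Python sketch -- ask the
whole radio segment a question and keep only the first answer. The multicast
group and port are arbitrary stand-ins, not any real munchkin channel:

    import socket, struct

    GROUP, PORT = "224.0.0.199", 5999   # hypothetical local chatter channel

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # join the group so we overhear our neighbors' queries and replies
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 0)

    sock.sendto(b"who-has: route-to-grandma?", (GROUP, PORT))
    sock.settimeout(2.0)
    try:
        answer, peer = sock.recvfrom(1024)  # at-most-one response: first wins
        print(peer, answer)
    except socket.timeout:
        print("nobody in radio range knows")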

*** My hunch is that munchkins break enough rules to call the Internet back
into question. ***

Some Gedankenexperiments:

a. suppose you are a billion cells trying to be a liver. How do you
communicate state changes and optimize liver function (inside), as well as
regulate whole-organism behavior (outside)?

b. suppose you are a trillion cells trying to be a human. How do you map out
where other organs are? do you care? what kind of 'backbone routers' does
the CNS provide?

c. suppose you are a billion assemblers trying to assemble a rocket engine.
It's not enough to hope that in the Brownian chaos of the vat you can map
out x, y, and z coordinates to a micron and refer to a Master Plan of
what-molecule-goes-where in each assembler. What kind of adaptive network --
and subnetwork partitions -- will this task need? [Scenario from Drexler]

d. suppose you are a billion handsets trying to be a Phone Company. How do
you find your friends and neighbors, businesses, and total strangers? What
possible tariff structures could work?

-------------------------------------------------------------------------------------
So What? Why do you think *you* can help this happen?
-------------------------------------------------------------------------------------
(0) why am I describing a hardware breakthrough as a software guy?
(1) why am I depending on wireless breakthroughs?
(2) why is the ID a private key rather than a serial number?
(3) how could this infrastructure possibly be "free"?

So far, all I've proven is that I can churn out 'high-concept' sci-fi to
keynote analysts' retreats. More marketing spooge, in short.

Forget whether or not these bittyboxes could use IP as-is. (After all, the
control hypothesis is that today's "PCs" remain Internetted, and these new
devices remain satellite drones of the Master computer -- that Bluetooth is
essentially right in separating the world into home-piconets at the
periphery of the planetary-Internet.) Even if this comes to pass, I don't do
hardware, wireless physics, or corporate finance. How could I be a player,
rather than a spectator?

(0) why am I describing a hardware breakthrough as a software guy?

This entire line of investigation doesn't even need a hardware platform. The
idea of an ad-hoc network of hosts (rather than network interfaces)
originally arose from my vision of a '*TP server mesh'. Basically, I posit
that caching Web proxies or, better yet, SMTP relays already act like
munchkins -- except that their communications paths are Administratively
determined (by browser/firewall or MX configuration, respectively).

My personal 'research' (== 'high-risk') position is that there's not much
difference between all these Transfer Protocols (TPs). Files, mail, web
pages, news posts, network management state, etc. are just different kinds of
documents to be Transferred to the specified Destination set. A hybrid
between an email server (push, store-and-forward, message queues) and a web
server (pull, circuit-switched, synchronous replies) covers enough of the
space to become a 'protocol switch' or 'message depot' that can take a
single application-layer datagram and burst it out to a mix of recipients by
choosing the transfer strategy (flood-fill, poll, interrupt) independently.
[*TP is an old FoRK concept. @@url]
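
To make the 'protocol switch' concrete, here's a toy sketch in Python. The
strategy names are the ones above; the dispatch rules and the Recipient type
are invented purely for illustration:

    from dataclasses import dataclass, field

    @dataclass
    class Recipient:
        name: str
        reachable_now: bool
        queue: list = field(default_factory=list)

    def choose_strategy(recipient, urgency):
        if recipient.reachable_now and urgency == "high":
            return "interrupt"    # push it now, synchronous-reply style
        if recipient.reachable_now:
            return "poll"         # let them pull it when they next ask
        return "flood-fill"       # store-and-forward via whoever is nearby

    def depot_dispatch(datagram, recipients, urgency="normal"):
        # one application-layer datagram in, per-recipient strategy out
        for r in recipients:
            r.queue.append((choose_strategy(r, urgency), datagram))

    depot_dispatch(b"build notes", [Recipient("adam", True),
                                    Recipient("grandma", False)], "high")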

Now, while the Network layer might know the local topology intimately, the
application-layer has no idea. At that level of virtualization, SMTP
believes the Internet is a fully-connected graph and just chooses the
next-hop by MX. Thus, there's a very good reason email to my boss six feet
down the hall went by way of Reston and Colorado Springs.
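
For the record, that MX dance is a one-liner. A sketch, assuming the
dnspython package is installed:

    import dns.resolver

    def smtp_next_hop(domain):
        answers = dns.resolver.resolve(domain, "MX")
        best = min(answers, key=lambda rr: rr.preference)
        return str(best.exchange)    # topology is never consulted

    print(smtp_next_hop("uci.edu"))  # six feet down the hall, via Reston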

I would like *TP servers to discover neighbors and calculate routes and so
on -- relearning the lessons of the lower layers, to be sure, but this time
for ADUs rather than PDUs. The reason? Application Data Units are where the
value is -- and that's what we can charge for, rather than per-Protocol Data
Unit pricing. See (3) below.

(1) why am I depending on wireless breakthroughs?

At the same time, I'm skeptical that any new server software package could
take off like httpd and drive these innovations. Much as the 'real' value of
the Web was URLs, it took HTML and HTTP to catalyze them; likewise, I believe
app-layer protocol unification will not sell itself.

A new hardware platform, though, has the best odds of triggering a software
shift as well. Even virtualized hardware like Java. [There's a lot of
rhetorical similarity between Jini and Munchkins, except in the details.
Jini presumes IP and DNS].

Will it be purely wireless? I'm getting less certain. The National Academy
of Sciences' recent report, "The Evolution of Untethered Communications"
scared me straight regarding the absolute physical difficulty of this feat.
A planetary data mesh built from very tiny radios may never be stable enough
to deliver. To say nothing of the latency of thousand-hop routes...

(2) why is the ID a private key rather than a serial number?

In the game of turtles, someone has to be at the bottom. I feel comfortable
arguing that in the end, identity is the ability to keep a secret -- and
that a keypair is the ultimate identifier. I don't know how to deliver an IP
datagram if two people have the same address, but a message can be
"addressed" to only one recipient the moment it's encrypted -- no matter how
many message depots keep a copy along the way.
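
In code, the whole addressing scheme is about four lines. A sketch using the
PyNaCl library: any depot can copy the blob, but exactly one keyholder can
open it.

    from nacl.public import PrivateKey, SealedBox

    recipient_key = PrivateKey.generate()   # the munchkin's 'birth' secret
    blob = SealedBox(recipient_key.public_key).encrypt(b"meet me at layer 7")

    # ...the blob wanders through any number of message depots...

    plaintext = SealedBox(recipient_key).decrypt(blob)  # only this key works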

Morally, I'll accept the charge that I'm replacing 128-bit addresses with
keys 10-20x larger. But the larger size means that host-numbering can now be
completely decentralized. Power on, tune in (to some thermal noise), and
drop out.

The metaphorical shift is that munchkins now seem like homunculi: little
people. If all you have are node identities, then (exactly like PGP's web of
trust), you can only know the people you meet.

*** I'd like to bring the social-radius hypothesis ('six degrees of
separation') to networking ***

Naming, routing, and addressing are all trust management problems: do you
have faith in an External Table, or trust only those you've been introduced
to?

Existence proof: Humans form an ad-hoc communication network. In extremis,
any two people can establish contact (e.g. a relationship brokered by telcos
and government census) -- but communication is typically hop-by-hop in
tightly-connected social circles. Faith in global directory services is not
required.
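
A toy model of that existence proof -- reachability by introductions alone,
no directory, just breadth-first search over who-knows-whom (all names
invented):

    from collections import deque

    knows = {"rohit": ["adam", "dan"], "adam": ["rohit", "tim"],
             "dan": ["rohit"], "tim": ["adam", "grandma"], "grandma": ["tim"]}

    def route(src, dst):
        parent, frontier = {src: None}, deque([src])
        while frontier:
            hop = frontier.popleft()
            if hop == dst:
                path = []
                while hop is not None:
                    path.append(hop)
                    hop = parent[hop]
                return path[::-1]
            for friend in knows.get(hop, []):
                if friend not in parent:
                    parent[friend] = hop
                    frontier.append(friend)
        return None   # outside your social radius

    print(route("rohit", "grandma"))   # ['rohit', 'adam', 'tim', 'grandma']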

(3) how could this infrastructure possibly be "free"?

I've speculated before on FoRK about the economics of munchkins --
self-organizing auction markets with 'currencies' denominated in future
willingness to retransmit packets across whatever radius a node controls.
[there's a link off of the FoRK FAQ].

It won't be free for all users, obviously. Depends on the credits you
collect by forwarding traffic. But I'd like to think that we can finally
realize Arthur C. Clarke's dream of humanity's birthday gift to itself at
the turn of the Millennium: making telecommunications a human right.

[Footnote: his actual proposal was abolishing international calling rates on
1/1/2001]

-------------------------------------------------------------------------------------
Means and Ends
-------------------------------------------------------------------------------------

Sigh. Another five hours wasted on this memo. Let's work backwards to solve
the dilemma I posed back at the beginning. Here's a dependency chain of
goals, with the professional skills or knowledge each step draws upon:

(0) free decentralized planetary net -- leadership/management
(1) unified message routing model -- user interface/usability
(2) economic model -- game theory, macroeconomics
(3) trust model -- trust management
(4) per-device/user addressing -- cryptography
(5) unified message transfer protocol -- standardization
(6) *TP -- software architecture
(7) rHTTP -- XML, metadata, performance evaluation
(8) Notifications -- Event-based integration

(0) free decentralized planetary net

This is a social policy goal. It requires leadership and resources (business
empire/consortium-building skills) in addition to:

(1) unified message routing model

This technical goal allows the munchkin relay network to package, track, and
bill application-layer message transfers. Trying to reconstruct value flows
once the transactions have been shattered into IP packets is a difficult
proposition. Standardizing an envelope for 'documents' (probably MIME-like)
and a constraint set (a middleware grail) allows the *TP engine to calculate
the best delivery strategy. For example, caching in the face of flash crowds
as prices increase.
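
A guess at what such an envelope might hold. The field names are mine, not
any standard's:

    from dataclasses import dataclass

    @dataclass
    class Envelope:
        content_type: str    # MIME-like document typing
        destinations: list   # the Destination set (recipient public keys)
        deadline_s: float    # deliver within this window, or don't bother
        max_cost: float      # spending ceiling, in reserve credits
        payload: bytes

    trade = Envelope("text/plain", ["broker-key"], deadline_s=5.0,
                     max_cost=2.0, payload=b"SELL 100 @ market")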

The hard part of this model is the user interface to the constraint set. How
can very-high-level applications like a datebook, stock trader, and
videoconferencing rig communicate their users' respective priorities down to
this level? Could an operating environment learn how much mail to the boss,
or about a certain project, is worth? What to prioritize when logging in
from an Airfone?

This requires (2) and (5)

(2) economic model

There's a value to the transaction ("get my stock trade to market" vs. "my
video to grandma") as well as to bits in space ("how much to lob a 2K packet
to the next carphone to the left?" vs "how much for 500 ATM cells to
Paris?"). A microcurrency for the latter (every radio cell and fiber line is
its own market) and exchange rates relative to a reserve currency allow *TP
routers to calculate strategies for the former. A stock-trade might be worth
enough to transmit by several pathways (a 'bounty' model).

Microcurrencies are limited in range (a single apartment or campus -- a
small set of links) and value (the IOU to transit one of your packets in
exchange is limited to the least trustworthy currency issuer). A reserve
currency would be an exercise in branding: credits that come with better
assurance that they will be exchangeable for bandwidth anytime, anywhere.
Meme: UCIbucks vs MCIbucks.
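
As arithmetic, the whole scheme fits in a dozen lines. All the rates, prices,
and pathway names below are invented:

    # reserve value of one unit of each microcurrency (made-up numbers)
    exchange_rate = {"UCIbucks": 1.00, "Robucks": 0.50, "MCIbucks": 1.00}

    message_value = 8.0   # worth of getting this trade to market, in reserve

    quotes = [("carphone-hop", 3.0, "Robucks"),   # (pathway, price, currency)
              ("campus-mesh",  2.0, "UCIbucks"),
              ("atm-to-paris", 9.0, "MCIbucks")]

    # bounty model: send down *every* pathway whose discounted cost is covered
    for name, price, cur in quotes:
        cost = price * exchange_rate[cur]
        if cost <= message_value:
            print(f"transmit via {name}: {cost:.2f} reserve credits")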

Identifying recipients and establishing trading relationships requires:

(3) trust model

"Paranoid networking" is a term I use sometimes to describe a posture of
trusting only oneself and growing from there. Every new key, domain name, or
email sender knocking on your door must be explicitly introduced and
added to the Web of Trust. Of course, there are lots of models to
accommodate under the covers: trusting a group, an external authority, a
hierarchy of CAs, government backdoors, &c.

Trust refers back to economics. In the end, the reason you trust that
"United" means Airlines and not Van Lines is advertising ("bribery"). Or the
reason you exchange UCIbucks at par (its relays forward packets 99% of the
time) but discount Robucks (I'm not at home 50% of the time) -- and why
you'll pay a premium for 99.9% MCIbucks. Trust is retribution is economic
outcomes.

Actually settling microtrades and sealing messages requires:

(4) per-device/user addressing

At some level, there has to be an authenticated counterparty. If it's a
self-organizing network of people, there have to be stable human identities.
Among munchkins, it'll have to be a device key burned into the hardware --
and most likely, right above it will be its "owner's key", the human
identity or device that manages it in turn. These will be the only
permanent, stable identifiers to trust, since the mappings to location/IP
address/network will be ephemeral.
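
That two-rung ladder is just a signature. A sketch with Ed25519 keys via the
'cryptography' package; the endorsement format here is an ad-hoc stand-in:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey)
    from cryptography.hazmat.primitives import serialization

    owner_key = Ed25519PrivateKey.generate()
    device_key = Ed25519PrivateKey.generate()  # burned in, never leaves silicon

    device_pub = device_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    endorsement = owner_key.sign(device_pub)   # "this munchkin is mine"

    # anyone holding the owner's public key can verify the mapping later,
    # with no directory and no ephemeral IP address involved
    owner_key.public_key().verify(endorsement, device_pub)  # raises if forged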

(5) unified message transfer protocol

Munchkins are not alike. I don't mean to leave the impression there's some
single bit of wonder-hardware. Just like the Internet, compatibility is a
behavioral property, not a physical one. So the divergent pool of devices
out there will need to agree at some high level about what these encrypted
bags of bytes they're shipping around are, and how much they're worth.

Promulgating an application-layer parallel to IP packets would require
standardization and negotiation above and beyond merely prototyping a hybrid
app-layer protocol:

(6) *TP

A wire protocol which is flexible enough to span email and chat, news and
web pages, large and small, batch and streaming, and push and pull seems
implausible, but I think it's worth trying. This goal is at the outer
horizon of what I might accomplish in my doctoral studies at Irvine.

The key here is not just hacking together a hybrid command structure and
borrowing the design techniques gleaned from surveying the entire history of
IETF app-layer protocols, but to tie it back to software architecture. Roy
Fielding's work points in this direction, by linking a particular TP,
HTTP/1.1, to the style he's dubbed "representational state transfer". Drawing
a bit more widely, it might be possible to link other variants of the
"Irvine styles" of software architecture -- C2-like components connected by
messages across busses (specialized into requests and notifications
depending on the direction of flow) -- to variant modes of this protocol.
Ideally, there would even be analytical implications: what happens to an
application in such-and-such style when a message queue is inserted, or when
message delivery changes from sender-initiated to polling?
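
That last question can at least be phrased in code. A toy connector where
delivery mode is the connector's property, so flipping push to poll never
touches the components themselves (everything here is invented):

    from collections import deque

    class Bus:
        def __init__(self, mode="push"):
            self.mode, self.pending, self.subscribers = mode, deque(), []

        def notify(self, event):
            if self.mode == "push":
                for handler in self.subscribers:  # sender-initiated delivery
                    handler(event)
            else:
                self.pending.append(event)        # held until someone polls

        def poll(self):
            return self.pending.popleft() if self.pending else None

    bus = Bus(mode="poll")      # same components, different delivery semantics
    bus.notify("resource-changed")
    print(bus.poll())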

Not to overlook the actual engineering difficulty, though. A first cut would
be:

(7) rHTTP

We've been talking about a mythical "10-page micro HTTP spec" for a long
time. We think that a workalike to HTTP/1.1 could be factored into
orthogonal parts using PEP/Mandatory (caching, security, content type,
linking) and reencoded to be far more compact and parseable. Even without
migrating to a binary form, or a really flexible and 'nifty' HTTP-NG,
there's value to an experiment that cleans up what we already know is
awkward and distills the slim "HTTP model" smothered in the 120-page spec.

The (reengineered | risc | roy | rohit | our)HTTP project would exploit XML,
metadata, and performance evaluation skills. It would probably factor out a
basic type of message, the notification (8), which would be specialized into
requests and replies.
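
No such spec exists yet, so here's a strawman of what "factored into
orthogonal parts" might look like on the wire -- each group a separable,
PEP/Mandatory-style extension, with every name below invented:

    request = {
        "core":     {"method": "GET", "target": "/notes/rdf"},
        "caching":  {"max-age": "3600"},
        "security": {"bearer": "opaque-token"},
        "typing":   {"accept": "text/xml"},
    }

    def encode(req):
        # one line per orthogonal group: trivial to parse, trivial to skip
        return "\n".join(
            grp + " " + " ".join(f"{k}={v}" for k, v in hdrs.items())
            for grp, hdrs in req.items())

    print(encode(request))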

(8) Notifications (?)

After Chicago, UCI created enough confusion in the arena of
Web-server-initiated event notification that it seems like people are
waiting for us (me) to speak out. The problem is, there's a huge canvas of
"notification" problems -- as Adam catalogued this summer, over 200-odd. And
while the lead users -- presence, printing, and distributed authoring -- all
want HTTP-based messages of HTTP-related events, the area directors and
others are encouraging a broader view of the problem. For example, one
comment was "HTTP isn't reliable enough" -- implying retry queues a la SMTP.
For that matter, part of me would be satisfied just recommending email for
these uses and simply documenting a convention to (un)subscribe to a web
resource.
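
The "retry queues a la SMTP" idea is small enough to sketch. The subscriber
URL is made up, and a real depot would persist the queue and back off
exponentially:

    import time
    import urllib.request

    queue = [("http://subscriber.example/inbox",
              b"<event>resource-changed</event>", 0)]
    MAX_TRIES = 3

    while queue:
        url, event, tries = queue.pop(0)
        try:
            urllib.request.urlopen(urllib.request.Request(
                url, data=event, method="POST"), timeout=5)
        except OSError:
            if tries + 1 < MAX_TRIES:
                queue.append((url, event, tries + 1))  # requeue, a la sendmail
                time.sleep(5)   # fixed pause here; real queues back off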

This is an arena where we'd have to address the installed base. It's an
*engineering* problem, which makes it hard to claim research value. It may
be a first step on my ladder of goals, but there's a risk it eats a few
years when the bolder step may be to skip it.

It's a hard one to skip, though. Standards-project-management is what I've
been trained to do so far; it's an easy problem in many ways; and it knits
me into a social network I absolutely want to be part of (IETF leadership).
And after all, it's grad school. What's a year or two, right?

And now we're truly back where we began.