[TimBL on the Web's Future, Apr 1999] So, what comes next?


From: Adam Rifkin (adam@KnowNow.com)
Date: Thu Jan 11 2001 - 11:33:26 PST


Selected passages from

   http://www.w3.org/1999/04/13-tbl.html

with stream-of-consciousness thoughts from yours truly...

> The basic idea of the Web is that of an information space through which
> people can communicate, but communicate in a special way: communicate
> by sharing their knowledge in a pool. The idea was not just that it
> should be a big browsing medium. The idea was that everybody would be
> putting their ideas in, as well as taking them out. This is not
> supposed to be a glorified television channel.

For the first ten years, the Web was a glorified television channel.

> Also everybody should be excited about the power to actually create
> hypertext. Writing hypertext is good fun, and being with a group of
> people writing hypertext and trying to work something out, by making
> links is a different way of working. I hoped that it would be a way
> that soon, for example, the European Particle Physics Laboratory at
> Geneva, Switzerland, where I was at the time. I'd hoped it would be a
> way for us to much more efficiently use people who came and went, use
> student work, use people working remotely. And leave a trail, not a
> paper trail, but a trail in hyperspace.

FoRK is a trail in hyperspace. Weblogs are trails in hyperspace. Heck,
Usenet posts and mailing list emails are trails in hyperspace, since most
of those contain URLs now too.

> So I had hoped that the Web would be a tool for us, understanding each
> other and working together efficiently on larger scales. Getting over
> the problem which befalls the organization that was so fun when it was
> a start-up of six people (many of you will know about this
> phenomenon). When you get to 60 people it is still great fun, and
> you're still rollerblading in the parking lot.

Actually, I don't think we've EVER had a person rollerblading in the
parking lot. TimBL must be thinking of Google. :)

> And then when you get to 61 people, you worry that you don't know that
> person's name, and the difficulties of scaling the organization set in.
> There's a second half to the dream really, and I must admit that
> originally I was a little bit careful about expressing this. But the
> second half is the hope that when we've got all of our organization
> communicating together through this medium which is accessible to
> machines, to computer programs, that there will be some cool computer
> programs which we could write to analyze that stuff: to figure out how
> the organization really runs; and what is its real structure, never
> mind the structure we have given it; and all kinds of things like
> that. And to do that, of course, the information on the Web would have
> to be understandable to some extent by a machine and at the moment
> it's not.

Amen, brother!

> Now, in 1992 it was clear that it was taking off. It still wasn't
> clear that it would, for example, ever take over from the Internet
> Gopher, which was another system expanding exponentially on the
> Internet. But people were already starting to come into my
> office. Alan Kotok from Digital came with three colleagues,
> unannounced. Now, people don't generally drop in Geneva unannounced,
> particularly Americans. We found a conference room quickly and he
> explained that they were starting to investigate what Digital should
> do, how Digital should address this "Internet" and the World Wide
> Web. "We're concerned about stability and we understand that it all
> hinges on some specifications which you have stored on a disk
> somewhere..". They wondered how stable they were and how we get to
> ensure their continued stability and their evolution.

New platforms almost always start off unstable, but you don't have to
have that stability at the beginning; rather, you need the "pleasure
button" that immediately gives someone a tangible benefit for learning a
new meme and technique. All the best Web additions -- JavaScript,
Flash, RSS, HTTP/1.1 -- had this. (SOAP doesn't have it. Yet.)

Stability is something that comes through regular iterations and
improvements, from consensus and running code.

> The fundamental thing about the space -- about this Web, as I said, is
> that anything can refer to anything. Otherwise it's no fun. You've got
> to be able to make the link to anything. It's no good asking people to
> put things on the Web, saying that anything of importance should have
> this "URL",if you then request anything else. To make such an
> audacious request you have to then release anything else. So that
> requires that the Web has completely minimalist design. We don't
> impose anything else. It has to be independent of anything.

In being independent of anything, it can subsume anything. This is why
no one can build a "better Web": the Web will just swallow it, too!

> The great challenge, really the raison d'etre initially for getting
> the Web protocols out, was to be independent of hardware platform: to
> be able to see the stuff on the mainframe from your PC and to be able
> to see the stuff on the PC from the Mac. To get across those
> boundaries was at the time so huge and strange and unbelievable.

Ten years later, it really does seem unbelievable that it was so hard to
get across those boundaries. But if the Web had never left the NEXTSTEP
platform TimBL developed it on, it arguably would never have taken off.
We might still be sharing all our information via Gopher, Usenet, and email.

> And if we don't do things right it will be huge and strange and
> unbelievable again: we could go back down that route very easily.
> It was important that it should be independent of software. The
> World Wide Web originally was a client program called "World Wide
> Web". I eventually renamed the program because I didn't want the World
> Wide Web to be one program. It's very important that any program that
> can talk the World Wide Web protocols (HTTP, HTML,...) can provide
> equivalent access to the information.

We still have to be vigilant that no single software provider owns port 80.
Who'd have believed five years ago that Internet Explorer and Apache
would have such huge market shares?
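
That equivalence is easy to take for granted, so here's a rough sketch of
the whole idea in Python (my choice of language, not TimBL's): anything
that can speak HTTP gets the same bytes the browser does. The URL is just
the talk itself.

import urllib.request

def fetch(url):
    """Speak plain HTTP and return the body of the resource as text."""
    with urllib.request.urlopen(url) as response:
        charset = response.headers.get_content_charset() or "utf-8"
        return response.read().decode(charset)

# Any program that talks the Web protocols gets equivalent access.
print(fetch("http://www.w3.org/1999/04/13-tbl.html")[:200])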

> It's very important to be independent of the way you actually happen
> to access this information. We're using a rather large screen here but
> it works just as well on this small screen. It should also work if you
> need to have these read to you, because maybe you're visually impaired
> or maybe you're driving along. 20 percent of the people who have
> access to the Web have some sort of impairment; maybe they can see the
> screen fine but they can't use a mouse. So it's very important that we
> separate the content from the way we're presenting it.

Even in 2001, when we do demonstrations, it still blows people's minds
that you can have multiple views of the same data.
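
The trick, of course, is keeping the data apart from the presentation so
that new views come cheap. A toy Python sketch of what I mean; the weather
record and its field names are invented for illustration:

weather = {"city": "Boston", "high_c": 7, "low_c": -1, "forecast": "snow"}

def as_html(record):
    """One view: an HTML table for the big screen."""
    rows = "".join("<tr><th>%s</th><td>%s</td></tr>" % (k, v)
                   for k, v in record.items())
    return "<table>%s</table>" % rows

def as_spoken_text(record):
    """Another view: flat text for a screen reader or an in-car voice."""
    return ". ".join("%s: %s" % (k.replace("_", " "), v)
                     for k, v in record.items())

print(as_html(weather))
print(as_spoken_text(weather))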

> It's important that the Web should be independent of quality of
> information.

FoRK sees to that. :)

> I don't want it to be somewhere where you would publish technical
> reports only after you had finished.

Then again, without proper versioning, it's impossible to see where a
particular document *was* before the version you're currently viewing.

> If you can link to anything I want this to be part of the process. So
> the review of the technical report and the scribbling of the original
> note which led to the idea that became the project which resulted in
> the technical report should all be there and they should all be linked
> together. So it's very important that you should be able to instantly
> go in there and edit. (Now actually I'm very sorry that this is not my
> machine so I'm not using my editor. Otherwise I would be able to just
> go into this slide and put the cursor in the middle and edit the
> slide.) At the same time, when I use the word "quality," it's
> important to remember that the idea of quality is completely
> subjective. So the Web shouldn't have in it any particular built-in
> notion of what quality means at all.

It doesn't really feel like PICS took off at all. The Web as a whole
seems to route around most forms of content rating.

> There are one, two, three, four, five, six dimensions I have mentioned
> along which documents on the Web can vary. Throughout all the history
> and through the future evolution it's been very important to maintain
> this invariance with all the fancy new ideas that came in. Every now
> and again we get a new suggestion that flagrantly violates one of
> these areas, and we have to find ways to turn it around and express it
> in a way which does not.

Very noble. And at the same time, it makes change take a loooong time.
When was the last real Web evolution that affected the end-user in a
very profound way?

> The last dimension of independence is an interesting one. There's a
> difference between documents and data.

I spend my days trying to think of ways to blur this distinction.

> Because on the Web you find "documents" of the sorts of things people
> read and write, and you find "data" out there which is the sorts of
> things machines read and write.

I believe that distinction will be unnecessary in the long run; the Web
should allow the entire spectrum from data to document.

> And that distinction is interesting. And it's important that the Web
> should allow everything on that spectrum as well; that we should have
> things which are very specifically aimed at people, calligraphy and
> poetry. At the same time we should have hard data which is processable
> very efficiently, and logic which can be analyzed by a machine. And
> things in between. A lot of the Web is sort of things in between. When
> you hit a Web page which has stock prices on it, there is data on
> there. You're looking for data. When you look for the weather you're
> looking for data but it comes in this sort of dressed up fashion with
> a nice pink flashing border and a few ads at the top in a way that's
> designed to appeal to you and entice you to buy things.

> So you could think of it, if you like, as three layers: at the top,
> there is the presentation layer. For this slide it's defined by style
> sheet. And in the middle there's content, a funny word which seems to
> be popular on the Web nowadays. This, the HTML code, which says that
> this thing which in fact the style sheet had turned yellow is a first
> level heading, and this thing is an unordered list. And then
> underneath -- there isn't a lot on this page I would say would be
> data. There's a metadata at the top which gives the relationship
> between this slide and the other slides. But the data are the things
> like the stock prices and who actually wrote this and when it was
> created, and what we think the weather is going to be like tomorrow in
> Boston and things like that.

Metadata is a hard sell to the average Web dev. It has taken a long
time and a lot of work. RSS has a pleasure button -- content syndication.
It's curious that RDF has not yet found a pleasure button to bring it to
the masses.
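
Mechanically the two aren't far apart, which is what makes the slow uptake
so frustrating. Here's a toy Python sketch of an RSS 1.0-style item with
Dublin Core metadata riding along, built with the standard XML library;
the item URI is a placeholder and the values are only for illustration.

import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
RSS = "http://purl.org/rss/1.0/"
DC = "http://purl.org/dc/elements/1.1/"

# One syndicated item, with Dublin Core metadata riding along as RDF.
item = ET.Element("{%s}item" % RSS,
                  {"{%s}about" % RDF: "http://example.org/posts/1"})
ET.SubElement(item, "{%s}title" % RSS).text = "TimBL on the Web's Future"
ET.SubElement(item, "{%s}link" % RSS).text = "http://www.w3.org/1999/04/13-tbl.html"
ET.SubElement(item, "{%s}creator" % DC).text = "Adam Rifkin"
ET.SubElement(item, "{%s}date" % DC).text = "2001-01-11"

print(ET.tostring(item, encoding="unicode"))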

> How well are we doing? Are we doing human communication through shared
> knowledge? Let's look through the document side. On this side the
> languages are natural language. They're people talking to people. So
> you just can't analyze them very well. And this is the
> big problem on the net for a lot of people, is the problem for my
> mother and your mother and our kids. They go out to search engines and
> they ask a question and the search engine gives these stupid answers.

If metadata were the solution to this problem, people would make their
Web pages more descriptive. The problem is that semantics are a hard
concept for most people to grasp. See, e.g., http://www.xmlbastard.com/standards

> It's very important that we use this human intuitive ability because
> everything else we can automate, but we're not very good at
> automatically doing that. I wanted the Web to be what I call an
> interactive space where everybody can edit. And I started saying
> "interactive," and then I read in the media that the Web was great
> because it was "interactive," meaning you could click. This was not
> what I meant by interactivity, so I started calling it
> "intercreativity". (I don't generally believe in making up words to
> solve problems, so I'm sorry about this one.) What I mean is being
> creative with others. A few fundamental rules make this possible. As
> you can read, so you should be able (given the authority) to write. If
> you can see pictures on your screen, why can't you take pictures and
> very easily and intuitively put them up there?

This describes the "Read-Write Web". Weblogs represent a great first step
but they need to be two-way-ified.

> At the moment I certainly cannot put the cursor in the middle of this
> slide and correct a spelling mistake. So in fact there's a huge amount
> we have to do. One of the reasons this is difficult is that it's
> actually hard. The research community produced group editors which
> would allow you to edit documents and share a document. And while two
> people are working at the same time -- we know how to do that; we the
> academic community. But I don't have it here now. I can't edit this so
> that somebody watching this on a broadcast can see the edit at the
> same time.

This describes the "Two-Way Web". If the Read-Write Web is about
"anyone can publish", then the Two-Way Web is about "anyone can publish
and anyone can subscribe". The Web needs a subscription model.

To address the specific concern of editing a Web document online,
http://standardbrains.editthispage.com/ is a great first step but it
needs to be two-way-ified and robustified...
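
Mechanically, the plumbing for "put the cursor in the slide and fix it"
has been in HTTP all along: GET the page, change it, PUT it back. A rough
Python sketch, assuming a server that actually accepts PUT at a
placeholder URL; a real one would demand authentication before taking the
write.

import urllib.request

url = "http://example.org/slides/slide4.html"

# Read the current version of the page.
with urllib.request.urlopen(url) as response:
    body = response.read().decode("utf-8")

# Fix the spelling mistake in place.
body = body.replace("teh ", "the ")

# Write the corrected version back to the same URL with HTTP PUT.
request = urllib.request.Request(
    url,
    data=body.encode("utf-8"),
    method="PUT",
    headers={"Content-Type": "text/html; charset=utf-8"},
)
with urllib.request.urlopen(request) as response:
    print(response.status, response.reason)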

> So one of the reasons is that it's actually hard to get the software
> working seriously, as a product. It also needs a whole lot of
> infrastructure. We need a lot more stability.

Amen.

> We need digital signature so that when you share things with your
> colleagues you know that you're sharing it with your colleagues and
> you're not sharing it with just anybody, any hacker who happened to
> turn up on that strip of Ethernet.

The Two-Way Web includes, in addition to subscriptions, a presence
mechanism. So what's the fundamental abstraction that allows both?
Event notifications. The Web needs event notifications.

This is not news. People have been pounding the table for this for years.
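
For the record, here's the bare-bones version of what I mean, as a Python
sketch: a subscriber registers a callback URL for a topic, and every
published event gets POSTed to it. The topic names and URLs are invented,
and a real system would need retries, authentication, and presence
tracking on top of this.

import json
import urllib.request

subscriptions = {}  # topic -> list of callback URLs

def subscribe(topic, callback_url):
    subscriptions.setdefault(topic, []).append(callback_url)

def publish(topic, event):
    payload = json.dumps(event).encode("utf-8")
    for callback_url in subscriptions.get(topic, []):
        request = urllib.request.Request(
            callback_url,
            data=payload,  # supplying data makes this a POST
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request)

subscribe("weather/boston", "http://example.org/notify")
publish("weather/boston", {"high_c": 7, "forecast": "snow"})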

BTW, I don't think dsigs alone are the cure for what the "Web of Trust"
needs. http://www.erights.org/ is very interesting stuff.
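
(To make the "sharing with colleagues" point concrete, a toy sketch of
signing and verifying a shared document. I'm using an HMAC with a shared
secret purely to keep it short; real digital signatures use public-key
crypto, so each colleague signs with a private key.)

import hashlib
import hmac

shared_key = b"not-a-real-key"  # placeholder; real systems exchange keys properly

def sign(document):
    return hmac.new(shared_key, document, hashlib.sha256).hexdigest()

def verify(document, signature):
    return hmac.compare_digest(sign(document), signature)

doc = b"Meeting notes: ship the spec by Friday."
tag = sign(doc)
print(verify(doc, tag))         # True: same document, right key
print(verify(doc + b"!", tag))  # False: the document was altered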

> Now a look on the other side. The other side is very different. Data
> has very well-defined meaning. So typically a huge number of Web pages
> are generated from databases. The people who produce the databases
> may, when they started it with a little spreadsheet, have had a vague
> idea of what the columns meant, but by now have a very good idea of
> what the columns mean. The database expresses well-defined
> relationships between things in the columns. When you have a weather
> server to pick up the temperature in Massachusetts, in fact the person
> behind it knows that this is the temperature in degrees Centigrade
> measured at seven o'clock in the morning at Logan Airport using this
> little thermometer four feet above the ground by that little bench
> that you see on the television. So there is well-defined data and
> there are well-defined things you do with it. When you write a digital
> check a fairly well-defined thing has got to happen. And when you look
> at your bank statement after having written the check and the check
> having even been cashed, there's got to be a very simple logical
> relationship between those things. You don't generally send pieces of
> poetry, which should give the bank a feel for the amount of money to
> pay to the payee.

> At the moment there's a very strange phenomenon going on. The data is
> being exported as Web pages. There are programs which want to process
> that data, who want to, for example, analyze the stock prices, who
> want to look at all the bookstores and find out where you can get that
> book cheapest and then present you with a comparative shopping list --
> and there are lots of Web sites out there. If you're not using one,
> do: you could save yourself some money. What's happening is that they
> are often going out to a Web site which may or may not be cooperative:
> it may just be putting that information on the Web. Sometimes the Web
> sites that they are scraping for data, would not cooperate if asked
> to. But the data is out there; it's available. And so you have one
> program which is turning it from data into documents, and another
> program which is taking the document and trying to figure out where in
> that mass of glowing flashing things is the price of the book. It
> picks it out from the third row of the second column of the third
> table in the page. And then when something changes suddenly you get
> the ISBN number instead of the price of a book and you have a
> problem. This process is called "screen scraping," and is clearly
> ridiculous, and the fact that everybody is doing it shows to me that
> there is a very very clear demand for actually shipping the data as
> data. So that if somebody wants to do an SQL query, if somebody wants
> to query an object out here, they don't have to go through this whole
> simulation of a very simple query in order to actually get at the data.

So much to do in this domain, so little time.
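
The contrast is easiest to see in code. A Python sketch with an invented
page and an invented feed: the scraper pins everything on page layout,
while the structured feed names its fields.

import csv
import io
import re

html_page = """
<table><tr><td>Example Book Title</td><td>0-000-00000-0</td><td>$29.95</td></tr></table>
"""

# Screen scraping: grab "the third cell" and hope the layout never changes.
# Swap two columns on the site and you get the ISBN instead of the price.
cells = re.findall(r"<td>(.*?)</td>", html_page)
print("scraped price:", cells[2])

# Data as data: the same record shipped with named fields.
data_feed = "title,isbn,price\nExample Book Title,0-000-00000-0,29.95\n"
for row in csv.DictReader(io.StringIO(data_feed)):
    print("declared price:", row["price"])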

> A really exciting thing would be if we could scale that ability to
> make intuitive leaps. I've always wanted to be able to do this with a
> group, of very bright, very enthusiastic people really interested in
> specific overlapping areas, say LCS, or all the people who are trying
> to find a cure for AIDS, or whatever. A typical thing a researcher tries
> to do is to get as much into his or her head at once and then hope
> that the solution forms, the penny drops, that connection is made, and
> they can write it down before they go to sleep. How can you get a
> group of people to do the same thing? Maybe if we can use the Web as a
> very low bandwidth ineffective small set of neural connections which
> connect the people. Imagine that one person surfing the Web can leave
> a trail. In other words, if somebody, as they're surfing the Web and
> they notice an interesting association and connection can represent
> that with a link, then another person surfing the Web on another topic
> may find that link and use it and as a result bring a new communal
> path a little bit further on. And so the group as a whole after a
> while will be able to make that "Aha!". That's something I would find
> very exciting.

The Web still has the potential to unleash a wholly better means of
collaboration and interaction. The real questions in my mind are when
and how.

How long can the world really afford to wait?

I guess the answer is, as long as it has to.

----
Adam@KnowNow.Com

Speed is king, time is evil. -- Vivek Ranadive

