From: Eliezer S. Yudkowsky (sentience@pobox.com)
Date: Fri Aug 18 2000 - 13:39:18 PDT
Jeff Bone wrote:
>
> Geez, maybe this is all harder than you thought, huh?
I might have carried on this thread at greater length if not for that
last gratuitous dig. Jeff, I've been doing this for far, far longer
than you have. If I sound terse, it's because a lot of the issues
you're so excited about have been argued into the ground on the
Extropian newsgroup, and it does get tiring after a while. If you want
to make suggestions, that's your right, but I have no professional
obligation to keep stomping on the greasy spot where a dead horse used
to lie.
I'm glad to see that you're starting to think concretely about the
problems involved. You still have what I would describe as an overly
excited picture of my own opinions on the topic. If you assume that I
have absolutely no interest in dictating to anyone - as the ethics of a
Sysop Programmer quite obviously require - and reason from there, you
should be able to come up with a basically equivalent set of Sysop
Instructions on your own.
You've said that you have the right to do certain things; and you worry
that I, by virtue of what you think is megalomania, threaten this
right. Very well; if that right exists, it must be defended - in a way
that doesn't violate everyone else's rights. (Something that you seem
awfully blithe about when it comes to dropping tacnukes.) That right
must either define a Sysop Instruction or be a logical consequence of
one.
That's a meta-Rule, if you like.
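To make the meta-Rule concrete, here's a toy sketch in Python - an
illustration only, not anything from an actual Sysop design; every
instruction name in it is hypothetical. It encodes just the test above:
a claimed right counts only if it is itself a Sysop Instruction or a
logical consequence of one.

# Toy sketch of the meta-Rule (illustrative; all names are hypothetical).
SYSOP_INSTRUCTIONS = {
    "no_nonconsensual_interference",  # nobody may act on you without consent
    "self_determination",             # you control your own mind and resources
}

# Hypothetical derivations: rights that follow from a base instruction.
DERIVED_FROM = {
    "leave_the_solar_system": "self_determination",
    "refuse_modification": "no_nonconsensual_interference",
}

def is_defensible(right):
    """A right holds iff it is an Instruction or a consequence of one."""
    return (right in SYSOP_INSTRUCTIONS
            or DERIVED_FROM.get(right) in SYSOP_INSTRUCTIONS)

# is_defensible("leave_the_solar_system")  -> True
# is_defensible("dictate_to_others")       -> False

Note that nothing in the sketch gives anyone the power to impose new
rules; it only checks claims against the base set, which is the whole
point.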
You got off on the wrong foot when you started thinking of it in terms
of raw power, the ability to dictate. If you think about the basic
rules that *prevent* anyone from dictating to you, you'll find that you
have pretty much a complete set of Sysop Instructions. You may think
that the "rights" you have should be a result of some balance-of-power
setup, or of some social rules for an ecology of transhumans - but that
is, metaphorically, exactly and only what a Sysop is: the reified,
intelligent form of the systemic rules that ensure individual freedom.
(That these rules cannot actually be implemented as an ecology is a
scientific/engineering issue, a consequence of offensive technology
winning out over defensive technology. Also see below about
"compromise".)
All the details are just that - details. "Let the UN decide," meaning
that I don't care who decides. In fact, it seems unlikely that the UN
will decide; questions such as risk-tolerance scenarios probably can be
left to the Sysop, plain and simple, since these questions likely have
answers that are obvious to a sufficiently smart observer. In other
words, the details are important, but they are not necessarily things a
Sysop Programmer needs to know. Anything with a *forced* decision is
either a Sysop Instruction or a Sysop decision. You don't think I have
the right to dictate risk-tolerance outcomes; why do you expect me to
have an answer for the scenario?
Your suggestion of building a Philosophy Mind that builds the Sysop is
almost right, but any Mind has power simply by virtue of being much
smarter than we are. A Philosophy Mind that can invent Sysop
Instructions necessarily has both the power and the motivation to
implement them. So it's a one-step process rather than a two-step one,
but aside from that, you've described one of the basic theoretical
principles behind Sysop construction.
On a side note: Your theory that evolutionary competition is built into
reality is fashionable, but wrong. That's true only in the tiny little
corner occupied by humans. Complexity gets started - evolves - in the
balanced spaces, the moderate spaces, but complexity can exist and grow
anywhere. The rest of the Universe is not room temperature; it is the
freezing cold of space or the heat of the center of a star. We can
survive only at room temperature, but our descendants will be able to
survive anywhere. The vast majority of the Universe goes to extremes.
So does self-enhancing intelligence. We lose nothing - indeed, we gain
everything - by moving to a Universe occupying extremes of freedom and
growth and life. But the plausible-sounding compromises are invariably
untenable for the simple reason that they are compromises. Hence the
Singleton.
That's philosophy, if you like.
And now, I have to get back to work. If you want to know more, read
more. I suggest "Coding a Transhuman AI 2.0":
http://singinst.org/CaTAI.html
Toodles,
Eliezer.
-- sentience@pobox.com Eliezer S. Yudkowsky http://singinst.org/home.html