Re: [Fwd: Eliezer speaks (forwardable)] - was loserhood and analysis


From: Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Fri Aug 18 2000 - 18:56:08 PDT


Eliezer S. Yudkowsky writes:

> Which we both know.
 
Well, I have to include a brief review of what this is all about.
 
> Translation: "I don't think you can do it without evolution. I think
> my method is better."
 
No. I'm pretty certain one can't do it Yud's way, and that one can
probably do it the old-fashioned Darwinian way. There might be other
ways, but I don't see them right now.
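
For concreteness, by "the Darwinian way" I mean the generic
evolutionary-computation loop: vary, evaluate, select, repeat. A
minimal, purely illustrative sketch follows -- the bit-counting
fitness function and all parameters are made-up placeholders, not a
proposal for how to actually grow an AI:

    # Illustrative mutate/evaluate/select loop; fitness and parameters
    # are toy placeholders, not anything from this discussion.
    import random

    def fitness(genome):
        # Placeholder task: maximize the number of 1-bits.
        return sum(genome)

    def mutate(genome, rate=0.01):
        return [1 - g if random.random() < rate else g for g in genome]

    def evolve(pop_size=100, genome_len=64, generations=200):
        population = [[random.randint(0, 1) for _ in range(genome_len)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            # Keep the fitter half, refill with mutated copies of survivors.
            population.sort(key=fitness, reverse=True)
            survivors = population[:pop_size // 2]
            population = survivors + [mutate(random.choice(survivors))
                                      for _ in range(pop_size - len(survivors))]
        return max(population, key=fitness)

    if __name__ == "__main__":
        print("best fitness:", fitness(evolve()))

The point is only that the method needs nothing but variation and
selection pressure; the hard part is supplying an environment rich
enough to select for intelligence.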
 
> Well, in that case, Jeff Bone doesn't need to worry about me, right? He
> needs to worry about you.
 
Nope, because I consider a truly intelligent AI hitting the streets
right now a Seriously Bad Idea.

> Guess what: *I* don't think that *your* method is going to work.

Well, I told you the reasons why I think you won't succeed. I'd like
to hear reasons why evolutionary systems can't do it -- as you know,
they did it at least once.

> So the situation is symmetrical, except for one thing: You claim that
> your method will result in a competing ecology of superintelligences
> with survival instincts, and I claim that my method would result in an
> singleton altruistic superintelligence with no particular emotional
> attachment to itself.
 
I claim that nothing will really change; only the scope becomes vastly
larger. The spectrum of complexity is much wider, the speed of
operation is much faster, the green zone expands dramatically, etc.,
but all of this is still subject to the same laws governing an
ecosystem, with a heavy weight on what the players at the god end of
the scale might do.

> > I realize there are enough brilliantly stupid people out there who
> > will want to build the Golem
>
> So now who's brilliantly stupid?
 
I don't want to build a Golem. I'm not suicidal yet.

> > short. Let's upgrade ourselves, so that we have a smidgen of a
> > chance in the RatRace++. If we don't, welcome to yet another great
> > extinction event, by cannibal mind children.
>
> Gene, you WOULD get eaten by any upgraded human that wasn't altruistic
> enough to build a Sysop. If you can live in a world of upgraded humans
> - if upgraded humans can be Good Guys - then AIs can be designed to be
> Good Guys. "The space of all possible minds", remember?

Well, I intend to be one of the upgraded guys, and attempt to escape
extinction by keeping up with the Joneses.
 
> Did you read CaTAI 2.0 yet?
 
Not really, because it's too long. I will some day, but right now I
have bigger fish to catch.
 
> > I do not buy the "nothing unplausible" without backing up your
> > assertions with arguments. So far you're describing an arbitrarily
> > implausible, arbitrarily unstable construct. You do not show how you
> > get there (a plausible traversable development trajectory is missing),
> > and you do not show how you intend to stay there, once/if you get there.
>
> "Singularity Analysis", section on trajectories of self-enhancement as a
> function of hardware, efficiency, and intelligence.
 
OK, I'll read it. But right now I have to pack and catch a plane
tomorrow.

> See "The Plan to Singularity", "If nanotech comes first", "Brute-forcing
> a seed AI".
 
We will see whether your stuff makes sense.
 
> Put your eggs in enough baskets and I GUARANTEE that one of them will
> break.
 
So what? I'm still left with 99.9% of the eggs.
 
> > The fitness delta of the first AI will not be dramatically
> > higher than that of all the rest of us/other AIs
>
> > For the record, I consider the ALife AI development route currently
> > extremely dangerous since intrinsically noncontainable once it
> > enters the explosive autofeedback loop
>
> ??

The initial kinetics are slow. I don't think runaway happens on a
minute timescale -- on the basis of no evidence whatsoever, admittedly.
If it really does happen, I'm dead meat and don't have to worry about
it either way.
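
To make "kinetics" concrete, here is a toy model only: treat
capability C as growing in proportion to itself, dC/dt = k*C, so the
doubling time is ln(2)/k. Everything hinges on the rate constant k,
which nobody knows; the values below are invented purely for
illustration:

    # Toy feedback-loop kinetics under an assumed exponential law dC/dt = k*C.
    # Doubling time = ln(2)/k; a small k means the early phase crawls for a
    # long time before anything looks like a runaway.
    import math

    def doubling_time(k):
        return math.log(2) / k

    for k_per_day in (0.001, 0.01, 0.1, 1.0):   # invented rate constants, per day
        print(f"k = {k_per_day:>5} /day -> doubling time = "
              f"{doubling_time(k_per_day):8.1f} days")

Whether the real loop is anywhere near the fast end of that range is
exactly the thing neither of us has evidence about.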

> > Nor do you describe how you intend to make
> > the goals inviolable. There are a billion ways around the
> > neoAsimovian laws of robotics. If I can think 10^6 times faster
> > than you, and I'm even a little tiny bit smarter than you, I can
> > hack my way through any safeguards you might want to create.
>
> See "Coding a Transhuman AI 1.0", "Precautions", "Why Asimov Laws are a
> bad idea".

Nice trick, pointing people to a long document to shut them up. I
should do that some day too; it seems to work every time, regardless
of what is actually in the document.
 
> According to you, morality is arbitrary and any set of motivations is as
> good as any other - so why would it tamper with the initial

It is arbitrary, but not every set of motivations is as good as any
other, because once the system starts to evolve it automatically moves
into an EoC regime. It will mutate around the most obvious blocks first.

> suggestions? What need for elaborate Asimov Laws? Remember, we are
> talking about a NON-EVOLVED system here.

If the system is static, it can't be good for anything. If it's
dynamic, it goes somewhere. Due to the undecidability of what a given
modification does, it sooner or later goes somewhere you don't want it
to go. Lacking a Dutch kid to put a finger in the hole in the dike,
the whole thing breaks out of its initial confinement and walks all
over your tulip garden.

> --
> sentience@pobox.com Eliezer S. Yudkowsky
> http://singinst.org/home.html


