[Fwd: Eliezer speaks (forwardable)] - was loserhood and analysis

From: Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Fri Aug 18 2000 - 02:00:17 PDT


Brian Atkins writes:
> Ok a response from the author of the doc.

> From: "Eliezer S. Yudkowsky" <sentience@pobox.com>
> To: Brian Atkins <brian@posthuman.com>
> Eugene Leitl and I have been having this discussion for ages. My

I wouldn't call it a discussion. It's just that you now seem to have a
certain audience (100k hits according to your counter, and now this
being FoRKed), and I'd rather not let incorrect arguments stand
unopposed. Strangely, people seem to assume that if no apparent
response is forthcoming, the asserted position must be correct,
rather than that no one has bothered to respond. Extraordinary
claims, and such.

My time is short, as I'm going offline for a week (good-bye, Golden
State), so this will be just an outline. I'm not saying anything new
here, but some of the audience might be new, so here goes. (Sorry for
the hubris and egomania; I've been bitten by Eliezer, someone kindly
pass the garlic.)

> standing reply, to which Eugene has not yet responded, is that while
> Eugene may assume that developing AI requires evolutionary competition,
> my described method of developing AI does not. Developing a survival

I doubt I haven't responded, but here goes anyway.

Your method requires explicit coding of an AI bootstrap core by
a human team.

My method involves creation of boundary conditions for spontaneous
emergence of an AI, using an educated-guess seed population, driven by
evolutionary algorithms running on dedicated 3d integrated molecular
circuit hardware (sorry, silicon does not provide enough crunch by far).

Your method is entirely unvalidated in practice (in fact there's an
impressive track record of dramatic failures), and aims for a
high-complexity system while humans are demonstrably unable to create
working systems beyond a certain complexity threshold.

My method is low-complexity, and is validated by the existence of
biological life, particularly humans, on this planet. There's a lot
more to it, but this will do for the time being. (We don't have to
start with the primordial soup (a high-connectivity spiking finite
state automaton network would be my first guess, though I wouldn't
mind using scanned critters for seed), and we don't have to raise
each generation in slowtime, etc.)
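
To make the shape of that loop concrete, here is a minimal sketch in
Python (all names and numbers are made up for illustration; the real
thing would be massively parallel dedicated hardware, not an
interpreter): seed a population of candidate networks, score them on
a suite of survival tasks, keep the best, mutate, repeat.

  import random

  POP_SIZE   = 256     # seed population of candidate networks
  GENOME_LEN = 512     # e.g. connectivity/weight bits of a spiking net
  MUT_RATE   = 0.002   # per-bit mutation probability

  def random_genome():
      return [random.randint(0, 1) for _ in range(GENOME_LEN)]

  def fitness(genome):
      # Stand-in: score the candidate on a suite of survival tasks.
      # This is the "boundary condition" -- whatever you reward, you get.
      return sum(genome) / GENOME_LEN

  def mutate(genome):
      return [b ^ 1 if random.random() < MUT_RATE else b for b in genome]

  population = [random_genome() for _ in range(POP_SIZE)]
  for generation in range(200):
      ranked  = sorted(population, key=fitness, reverse=True)
      parents = ranked[:POP_SIZE // 4]          # truncation selection
      population = parents + [mutate(random.choice(parents))
                              for _ in range(POP_SIZE - len(parents))]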

> instinct would require evolutionary competition on survival tasks.

Of course it would require that, so what? It's a part of the boundary
condition.

> Developing humanlike observer-biased or observer-centered perceptions

"Humanlike" is stretching it. There is no point in trying to put the
ape into the virtual box. However you code your fitness (problem set
suite), you'll not be able to generate something genuinely human. Bits
have no need for hair, skin, or sexual dimorphism. Fear these bits,
for they will kick your butt.

> would require politics-associated selection pressures, which would
> require AIs in social competition - not just interactive competition,
> but competition in which survival or reproductive success depended on
> the pattern of alliances or enmities. Survival evolutionary competition

Of course, so what?

> and political evolutionary competition are the forces that are causally
> responsible for, respectively, human observer-biased goals and human
> observer-biased perceptions. As my AI development plan relies strictly

Are you trying to make a point? So far I see only descriptions.

> on self-enhancement and invokes neither form of evolutionary
> development, there is nothing implausible about a goal set that includes
> the happiness or unhappiness or freedom of humans, but does not include

You presume you can code such a goal, and that the system can indeed
use such a goal constructively. You're remarkably hazy on how the seed
AI will recognize which modifications will bring it nearer to a goal,
and which will take it farther away. Clearly, you can't "just see"
which way the code has to be written. You have to sample code space,
and benchmark the system on a metric, no?
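
To spell out what I mean by sampling and benchmarking, a toy sketch
in Python (hypothetical names, stand-in metric): without an external
metric you cannot "just see" whether a self-modification is an
improvement, you have to generate variants and measure them.

  import random

  def benchmark(system):
      # Stand-in metric: in reality, a battery of tasks the modified
      # system has to score against -- which is already an
      # evolutionary setup by another name.
      return -sum((x - 0.5) ** 2 for x in system)

  def random_modification(system):
      candidate = list(system)
      i = random.randrange(len(candidate))
      candidate[i] += random.gauss(0, 0.1)
      return candidate

  system = [random.random() for _ in range(32)]
  best   = benchmark(system)
  for step in range(10000):
      candidate = random_modification(system)
      score     = benchmark(candidate)
      if score > best:             # keep only measured improvements
          system, best = candidate, score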

> the observing AI, in utilitarian calculations of the total desirability
> of the Universe. There is nothing implausible about assuming that the
> entire Human universe involves a single, underlying, superintelligent
> AI. The first seed AI to achieve transhumanity can invent

I do not buy the "nothing implausible" without you backing up your
assertions with arguments. So far you're describing an arbitrarily
implausible, arbitrarily unstable construct. You do not show how you
get there (a plausible, traversable development trajectory is
missing), and you do not show how you intend to stay there, once/if
you have got there.

> nanotechnology, and whatever comes after nanotechnology, and thereby

We're 10-15 years away from practical molecular memory, with
computronium soon after. I'd call that nanotechnology, albeit not a
machine-phase system. Once we have that kind of hardware, finding a
good enough CA rule and the right kind of data-processing state
pattern can be brute-forced essentially overnight. De Garis is close
enough on that one.
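
As a toy illustration of what "brute-forcing a CA rule" looks like
(Python again; a 1-D elementary CA is of course a vastly smaller
search space than the real 3-D case, and the scoring function here is
a crude placeholder): enumerate candidate rules, run each on random
initial states, and keep whichever scores best.

  import random

  WIDTH, STEPS = 128, 128

  def step(row, rule):
      # Standard elementary-CA update: the 3-cell neighbourhood
      # indexes into the 8-bit rule table.
      return [(rule >> ((row[(i - 1) % WIDTH] << 2)
                        | (row[i] << 1)
                        | row[(i + 1) % WIDTH])) & 1
              for i in range(WIDTH)]

  def score(rule):
      row  = [random.randint(0, 1) for _ in range(WIDTH)]
      seen = set()
      for _ in range(STEPS):
          row = step(row, rule)
          seen.add(tuple(row))
      return len(seen)   # crude proxy: penalize fixed points / short cycles

  best_rule = max(range(256), key=score)   # exhaustive over the 256 rules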

> become the sole guardian of the Solar System, maintaining distinct and

Don't put your eggs in one basket, however large the egg, and however
large the basket. Eventually, it gets smashed, and creates a huge mess.

> inviolable memory spaces for all the uploads and superintelligences

There is that tiny little turd of undecidability Kurt Goedel left on
the living room rug, and it just doesn't want to go
away. "Inviolable", my ass. Reality is messy, has always been messy,
and will stay messy, amen, unless something entirely unexpected comes
along. I will make it stay messy by coding the thing my way if I ever
have a definite suspicion that your seed AI might be going somewhere,
even if this results in our premature demise. Better that than
subjectively eternal static slavery.

> running on its hardware ("The Sysop Scenario"). No ecology of
> superintelligence is involved. Sure, this is an infinitely small spike

The question is rather: how can you prevent an ecology of
superintelligences from arising? Remember, we're not perfectly synched
clones. There are many teams who want to create true AI (the
fools). The fitness delta of the first AI will not be dramatically
higher than that of all the rest of us/other AIs (because you
deliberately crippled it by making it brittle and explicitly
hand-coded), and you do not describe how your self-modification
scheme will prevent nonclones from arising (you are not talking about
a single instance of your godling AI, are you???), nor how you can
prevent subsystems from becoming autonomous and radiating. Nor do you
describe how you intend to make the goals inviolable. There are a
billion ways around the neo-Asimovian laws of robotics. If I can
think 10^6 times faster than you, and I'm even a little tiny bit
smarter than you, I can hack my way through any safeguards you might
want to create.

> in the possible state space. So is a skyscraper. So is any other
> designed configuration of quarks. So what?
 
If you look around, you'll notice that not everybody is living in a
skyscraper, especially not in a single particular kind of
skyscraper. Complex objects do not all come as exact copies.

For the record, I consider the ALife route to AI development
extremely dangerous at present, since it is intrinsically
uncontainable once it enters the explosive autofeedback loop, and I
would rather see its development inhibited, while boosting life
extension, cryonics, nanotechnology and uploading.

I realize there are enough brilliantly stupid people out there who
will want to build the Golem, so our window of opportunity is
short. Let's upgrade ourselves, so that we have a smidgen of a chance
in the RatRace++. If we don't, welcome to yet another great
extinction event, at the hands of our cannibal mind children.

