Risk Tolerance, HAL, The Rules vs. Democracy, and Tyranny


From: Jeff Bone (jbone@jump.net)
Date: Fri Aug 18 2000 - 12:44:06 PDT


"Eliezer S. Yudkowsky" wrote:

> As far as I can tell, 'gene wants to turn you loose in an ecology of
> competing superintelligences with no particular motivation to be nice to
> you.
>

IMO, we will *become* the superintelligences. And competition is something
we're all hardwired for; indeed, IMO it's a necessary and fine aspect of how
self-organizing complexity gets its work done, wired into reality at a
fundamental level. Ecology or economy, it's a fundamental aspect of every
self-organizing system we know about. But I forgot, you don't believe in
evolutionary processes. (I agree with Gene; I don't think there's any way to
achieve any non-trivial level of self-organizing complexity without
evolutionary processes.)

> I challenge your use of the term "subjugate". Name a specific example
> of something that you can't do, and feel you should be able to. Not a
> general instance like "do whatever I want" - a specific instance like
> "upgrade myself to superintelligence" or "build a time machine".

Okay, I'm going to punt that challenge somewhat and address a related but
different scenario. Can we agree that there is "risk" inherent in all
elective behavior and physical activity? Can we agree that, in any physical
action, it is literally impossible to eliminate all possible risk, because if
nothing else the universe is entropic and subject to quantum uncertainty?

Your stated goal is to make it "impossible for anybody to kill you without your
permission." My argument will be that you *cannot* accomplish this without
prohibiting *all* beings (including yourself) from taking *any* elective
action. I will observe that this is in direct and unresolvable conflict with
your other stated goal of maximizing freedom. I'll point out issues with the
first goal and suggest practical refinements to it. Those refinements,
however, are somewhat thorny, possibly unresolvable, and at a minimum require
the creation of a social framework for building consensus on the issue; I
will explain why.

So, let's assume that I want to do something on a massive scale of engineering,
like build a wormhole transit tube to Tau Ceti, with the local end anchored out
near Jupiter. Strike that, let's assume I want to tow an asteroid into orbit
around the home planet in order to build a resort for Beings who want to
download into the physical and just hang out. (Let's say this is a popular
proposal, as well, with broad support.) Now, let's say there's a
1/100,000,000,000 (arbitrarily small) chance that, despite the best possible
control systems, simulations, contingency mechanisms, etc., the asteroid I'm
bringing in is going to impact the Earth, significantly impairing the
survivability / prosperity / happiness of, say, the Amish. Who gets to decide
if that's an acceptable risk? The SuperCop^H^H^HSysop, I suppose. But what
if the SuperCop is *wrong* and the impact occurs; who is accountable? Who is
liable? Those are tangential issues, though... It's impossible for the
SuperCop to be infallible in this regard, because it is not omniscient. To be
infallible and omniscient, the SuperCop would have to be a kind of
intentional, actionable Maxwell's Demon distributed throughout the local
phase space but somehow separate from it. That's provably impossible for a
whole host of reasons, ranging from the Heisenberg Uncertainty Principle to
Bell's Theorem to the 2nd Law of Thermodynamics and beyond.
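
A quick back-of-the-envelope, just to show why "arbitrarily small" never
means zero: even at that 1/100,000,000,000 figure, pile up enough elective
mega-projects and the aggregate risk stops being negligible. (Python sketch;
the project count is a number I pulled out of the air purely for
illustration.)

    # Toy arithmetic: an "arbitrarily small" per-project risk never
    # sums to zero across many elective actions.
    p_catastrophe = 1e-11       # the asteroid example's per-project risk
    n_projects = 10 ** 9        # assumed number of elective mega-projects
    # Probability that at least one of them ends in catastrophe,
    # assuming the projects are independent.
    p_any = 1.0 - (1.0 - p_catastrophe) ** n_projects
    print(f"P(at least one impact) ~ {p_any:.4f}")   # ~ 0.01, not 0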

So back to the argument: if the goal of the SuperCop is to prevent *any*
non-elective risk to any individual's survivability, then by definition the
SuperCop *must* eliminate the possibility of *all* elective action by *any*
being. But then that would conflict with the goal of enabling maximum freedom.
You logically can't build a SuperCop that achieves both goals; you're
steering your poor monster straight into the jaws of an unresolvable moral /
logical dilemma. (Believe me, I know what that's like; lately I *live* there.
;-) I can imagine a scenario where, like HAL, the poor being just flips out.
I wouldn't be surprised if the thing decides that the only reasonable course
of action is to simply dose everybody with VirtualNanoHappySleeping Gas and
quietly show us the door. Sounds like a really great recipe for Berserkers.
Note that
last bit is tongue-in-cheek and humorous in intent; the serious point is that
you simply can't build an infallible protector, and there's no good way to guess
how such a Being might react to fundamental logical misformulations of "The
Rules."

OTOH, you might counter that one of The Rules is some level of acceptable risk.
Fine, but who gets to decide what that level is, if the SuperCop doesn't do it
itself? What if my --- or everybody else's --- risk tolerance is greater than
yours? You go ahead and code in your own preference, because to you that's the
"obvious" answer. You've therefore restricted my freedom in favor of your own
non-consensus paranoia level. That's coercion, that's subjugation, that's
tyranny of the minority (of 1), that's wrong, and that's evil. Or you might
argue that the right answer is to take the most conservative case, i.e. the
public risk level to tolerate is set by whoever has the lowest risk
threshold. That's still tyranny of the minority, still a superminority of 1.
You might take the high end, but then everybody lives in fear that Joe
Fearless is going to implicitly green-light something disastrous. You might
take the weighted average; that's probably one of the better options, but
then let me observe that you've just institutionalized democracy. That's
fine, I have no problem with that, but there's a big difference between "let
everybody decide collectively" and "this is one of The Rules." And it still
holds out the possibility that everybody will collectively decide to take a
risk, i.e. *create* a risk to you, without your permission and against your
wishes.
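
For concreteness, here's a toy sketch of those three policies; the beings,
tolerances, and weights below are all made up purely for illustration.

    # Made-up risk tolerances: the maximum acceptable probability of
    # catastrophe per project, for a handful of hypothetical beings.
    tolerances = {"Paranoid": 1e-15, "Jeff": 1e-9, "Joe Fearless": 1e-3}
    weights = {"Paranoid": 1.0, "Jeff": 1.0, "Joe Fearless": 1.0}  # 1 being, 1 vote

    # Most conservative case: the lowest threshold rules everyone.
    lowest = min(tolerances.values())
    # High end: Joe Fearless's tolerance rules everyone.
    highest = max(tolerances.values())
    # Weighted average: institutionalized democracy.
    averaged = sum(weights[b] * tolerances[b] for b in tolerances) / sum(weights.values())

    print(lowest, highest, averaged)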

One option might be to create some kind of game-theoretic / economic
framework in which cost / benefit is weighed along actuarial lines. Fine,
but who sets the parameters? Same problems as above. You could set things up
to evolve the parameters toward maximum net benefit at minimum net
unhappiness among the constituency. But then, you don't believe in
evolutionary paths to figuring out tough questions. Better just to sit down
and hammer it out, right?
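
Just to make that concrete, here's a minimal sketch of what "evolve the
parameters" might look like; the fitness function, constants, and tolerances
are toy stand-ins I invented, not a real actuarial model.

    import random

    def fitness(threshold, tolerances):
        # Toy benefit: looser public thresholds permit more elective projects.
        benefit = threshold ** 0.1
        # Toy unhappiness: each being objects in proportion to how far the
        # public threshold exceeds their personal tolerance.
        unhappiness = sum(max(0.0, threshold - t) for t in tolerances)
        return benefit - 1e3 * unhappiness

    def evolve(tolerances, generations=200, pop_size=20):
        # Start from a random population of candidate public thresholds.
        population = [random.uniform(1e-15, 1e-3) for _ in range(pop_size)]
        for _ in range(generations):
            # Keep the fitter half, then refill by mutating the survivors.
            population.sort(key=lambda th: fitness(th, tolerances), reverse=True)
            survivors = population[: pop_size // 2]
            population = survivors + [max(1e-18, th * random.uniform(0.5, 2.0))
                                      for th in survivors]
        return max(population, key=lambda th: fitness(th, tolerances))

    print(evolve([1e-15, 1e-9, 1e-3]))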

Geez, maybe this is all harder than you thought, huh?

jb


