Re: Risk Tolerance, HaL, The Rules vs. Democracy, and Tyranny

From: Jeff Bone (jbone@jump.net)
Date: Fri Aug 18 2000 - 14:12:15 PDT


"Eliezer S. Yudkowsky" wrote:

> Jeff Bone wrote:
> >
> > Geez, maybe this is all harder than you thought, huh?
>
> I might have carried on this thread at greater length if not for that
> last gratuitous dig.

Sorry, I'm just a tease. My bad. I have a lot of respect for you, E.,
but you're a little wacky.

> Jeff, I've been doing this for far, far longer
> than you have.

You have no way to support that assertion, assuming "this" is "thinking
about these kinds of issues." And how condescending and patronizing is
that? The only reason for believing it is overdeveloped ego, and the only
reason for stating it is contempt.

> If I sound terse, it's because a lot of the issues
> you're so excited about have been argued into the ground on the
> Extropian newsgroup, and it does get tiring after a while. If you want
> to make suggestions, that's your right, but I have no professional
> obligation to keep stomping on the greasy spot where there used to lie a
> dead horse.

I will see if I can't dig up the Extropian list. I've never subscribed to
it because most self-proclaimed Extropians I know are kinda loopy. ;-)

> I'm glad to see that you're starting to think concretely about the
> problems involved.

Well, you accuse me of digs, but then you patronize me repeatedly? Hmmm...

> You still have what I would describe as an overly
> excited picture of my own opinions on the topic.

You would be mistaken to draw any conclusions at all about my level of
"excitement."

> If you assume that -
> as, it is blatantly obvious, the ethics of a Sysop Programmer require -
> I have absolutely no interest in dictating to anyone, and reason from
> there, you should be able to come up with a basically equivalent set of
> Sysop Instructions on your own.

See previous logical argument; you can't *avoid* tyranny unless you refine
or eliminate some or all of the goals you've stated.

> If you think about the basic
> rules that *prevent* anyone from dictating to you, you'll find that you
> have pretty much a complete set of Sysop Instructions.

I suppose my intuition tells me that, logically, there cannot be a
consistent set of such rules.

> You may think
> that the "rights" you have should be a result of some balance-of-power
> setup, or of some social rules for an ecology of transhumans - but that
> is, metaphorically, exactly and only what a Sysop is; the reified,
> intelligent form of the systemic rules that ensure individual freedom.

You're basically just ignoring the argument. Refute the following
assertion: in order to ensure that no one can kill you without your
permission, the SuperCop must prevent ALL beings from taking ANY elective
action whatsoever.

> All the details are just that - details.

The devil is in the details.

> since these questions likely have
> answers that are obvious to a sufficiently smart observer.

Well, let's try some of these answers out and see how they fly. You might very
well be smarter than anybody here, or even everybody else on the planet.
But your reliance on the "obviousness" of your solutions, when they are
clearly nonobvious to other folks, implicitly shows that you believe you're
smarter than everybody else.

> In other
> words, the details are important,

Thanks for clearing that up.

> but they are not necessarily things a
> Sysop Programmer needs to know.

I disagree.

> Anything with a *forced* decision is
> either a Sysop Instruction or a Sysop decision. You don't think I have
> the right to dictate risk-tolerance outcomes; why do you expect me to
> have an answer for the scenario?

I don't expect an answer, just a strategy.

> Your suggestion of building a Philosophy Mind which builds the Sysop is
> almost right, but any Mind has power simply by virtue of being much
> smarter than we are.

My point is, the only consistent approach to your goals is to ditch the
notion that you can decide The Rules.

> On a side note: Your theory that evolutionary competition is built into
> reality is fashionable, but wrong.

Thanks for clearing that up. Can you send me some of the papers you've
written on that topic?

> That's just in the tiny little
> corner occupied by humans.

Complexity --- self-organizing complexity --- abounds in lots of things
besides humans. Look at the coarse structure of the observable universe.
Or the fractal nature of the surface of a rose petal.

> Complexity gets started - evolves - in the
> balanced spaces, the moderate spaces, but complexity can exist and grow
> anywhere.

We're singing the same song.

> The rest of the Universe is not room temperature; it is the
> freezing cold of space or the heat of the center of the star.

Oh, again, thanks, I'm so dense about science I had no idea. (Yeah, that's
sarcasm, but when somebody whines that they're being disrespected and then
proceeds to treat their peers in a condescending manner, it's kind of
aggravating. I support everybody's right to just blast away rhetorically.
We could all use a little thicker skin, me included.)

> And now, I have to get back to work. If you want to know more, read
> more. I suggest "Coding a Transhuman AI 2.0":
> http://singinst.org/CaTAI.html

If I could find a pointer to a document called "Being a Human 1.0" it might
be appropriate for me to insert it here.

jb

