From: Jeff Bone (jbone@jump.net)
Date: Thu Aug 17 2000 - 15:02:55 PDT
Strata Rose Chalup wrote:
>
> I still have strong reservations about the desirability of the
> end-goal, as well as the methods, but I no longer believe that the folks
> in charge are running a backyard nitroglycerin lab and using hammers for
> QA.
Actually, Strata, I'm not so sure. I hate being an alarmist, and I don't in
any way agree with Joy's neo-Luddite position on this whole deal, but consider
this:
Brian Atkins wrote:
> Actually that is what we hope our eventual AI will evolve into: a "sysop"
> that simply would actively not allow evil to happen. An operating system
> for the universe?
The scary thing about that is: who gets to define what constitutes "evil"?
One man's definition of "universal good" is another man's "tyranny." Pick your hot
button, say, elective euthanasia. What if the "Creators of the Universe"
a.k.a. Brian and Eliezer (a) don't come down on your side of the issue, and
(b) don't believe in unlimited free will? Let's say, for whatever reason, you
just want out. You're in a lot of pain. Now you're trapped in a world where
life is eternal and you just aren't allowed to opt out.
It would be at least a little worse than being forced, for all intents and
purposes, to use Microsoft Windows because of its market dominance.
Sorry, I just can't trust such big decisions to folks who think they have a
direct line to The Truth on matters such as "the meaning of life," "who is the
most significant human being," "what's the best operating system for
everybody," ;-) etc.
I'm not afraid of the future; my "SL," to use their terminology, is pretty much
at the top end. The things that amuse and threaten me are more like boundary
conditions for the universe, apparently "SL4." However, it worries me when we
believe our own bullshit enough to think that we know best for other people.
jb