From: Eugene Leitl (eugene.leitl@lrz.uni-muenchen.de)
Date: Wed Aug 16 2000 - 19:44:14 PDT
Brian Atkins writes:
> I don't want to shoot you, but I want you to read this:
>
> http://www.singinst.org/tmol-faq/meaningoflife.html
>
> and give a response.
I don't know; of points (1), (2), (3) in "1.2: Why should I get up in
the morning?" only (1) is even partially true (because it could just as
well mean the exact opposite: holocaust, or worse). Assuming (2), that
everything will eventually, somehow, be explained to you and it will
all make sense, is highly dubious, because I usually tend not to
explain things to ants, or rotifers. When their lives are not entirely
orthogonal to mine, they tend not to profit from that particular kind
of interaction. (Because the emergence of cooperation requires iterated
interactions between agents on roughly equal footing; see the sketch
below.) Assuming (3), that altruism is the omega point of convergence,
is rather ad hoc without further evidence, preferably a proof.
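(To illustrate the iterated-interaction point, here is a minimal
Prisoner's Dilemma sketch in Python. The payoff numbers and the
tit-for-tat strategy are the standard Axelrod-style textbook
assumptions, not anything taken from the FAQ: defection dominates a
one-shot encounter, while reciprocity between comparable agents only
pays off under repetition.)

# Minimal iterated Prisoner's Dilemma sketch (standard payoffs assumed).
PAYOFF = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_defect(history_self, history_other):
    return "D"

def tit_for_tat(history_self, history_other):
    # Cooperate first, then mirror the opponent's previous move.
    return history_other[-1] if history_other else "C"

def play(strategy_a, strategy_b, rounds):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# One-shot: defection dominates, cooperation never gets off the ground.
print(play(always_defect, tit_for_tat, rounds=1))    # (5, 0)

# Iterated between peers: mutual reciprocity outscores exploitation.
print(play(tit_for_tat, tit_for_tat, rounds=100))    # (300, 300)
print(play(always_defect, tit_for_tat, rounds=100))  # (104, 99)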
"The only reason to do a thing is because it is right." is also wrong
(or rather a meaningless tautology), because values are entirely
subjective, and only shared as artefacted by darwinian evolution and
by exposure to the same environment. Assuming that there is one
global, special, divinely chosen set of values apart from whatever
co-evolutionary driven emergence has provided us with strikes me as
naive.
The assumption that a machine superintelligence is automagically going
to be superbenign and self-restricting in its dealings with mehums, on
the basis of no evidence whatsoever and in the face of arguments to the
contrary from evolutionary biology, is extremely shaky. From what I
know, I'd rather deliberately limit the rate of progress in the
ALife-flavoured brand of AI until humanity has been upgraded enough
that it has at least nonzero odds in the next round of RatRace++.