[FoRK] Super-Intelligent Humans Are Coming

Dr. Ernie Prabhakar drernie at radicalcentrism.org
Thu Oct 23 17:16:00 PDT 2014


Hi Stephen,

> On Oct 23, 2014, at 2:13 PM, Stephen D. Williams <sdw at lig.net> wrote:
> 
> On 10/19/14, 2:28 PM, Dr. Ernie Prabhakar wrote:
>> Hi Eugen,
>> 
>>> On Oct 19, 2014, at 12:11, Eugen Leitl <eugen at leitl.org> wrote:
>>> 
>>> Of course superintelligent evil is quite scary. But it seems a
>>> degenerate case; a superintelligence can't be consistently evil
>>> in the human sense of the word unless it's playacting, or it wouldn't
>>> be a superintelligence.
>> You seem to have a very different definition of super-intelligence than I do. I am not aware of any particular correlation between intelligence and morality. To me, morality is fundamentally about the core assumptions we choose to start reasoning from.
>> 
>> Or do you believe it is possible to rationally derive Morality starting from nothing, given sufficient computational horsepower?
> 
> What do you mean by “starting from nothing”?

Starting from nothing but the assumption of super intelligence.

> Existing with others in the physical universe is a fairly rich starting point.

Are you assuming that existence is good?  

>  Existing as a human, especially with modern understanding of what that means, is a very rich starting point.

Are you assuming that human beings are valuable?  All of them?  That would be a very rich starting point.  But are you sure you could convince a super-intelligent computer? I have a hard time convincing very intelligent humans that all the others are equally valuable.

> Add to that any reasonable subset of story-based culture, and, well, you can go right or wrong there depending on which subset.

That’s the hard problem, isn’t it? Which story do you believe? To me, THAT is the problem we need to solve: coming up with the right story filter. Intelligence is simply a multiplier once we get that sign right.
 
> A fairly solid core of generalized principles, just like we already have for logic, math, science, computer science, etc. can go far.

I’m not terribly impressed with our current foundational principles of logic, math, or computer science.  If your super-intelligence is as reliable as modern software, I’d be worried...

> Why do communist and socialist systems tend to fail while republics tend to succeed?  Etc.

Ah, are you talking about empirical validation then?  THAT is an interesting approach.  But in that case, the real issue is not super-intelligence but Big Data, and the question becomes whether we can come up with an encoding algorithm sufficiently unbiased to allow the intelligence to reach a fair conclusion.

> The interesting question is in what cases does morality != efficiency?  What are valid and invalid goals?  Is there a convergence or divergence between human and non-human goals?  Would an intelligent system that we produce necessarily be human-like?  Of course various things could go bad, just as in an instant human.  Is there a fundamental difference?

No, that’s my point.  Super-intelligent humans do not seem intrinsically any more likely to be moral than anyone else.  They could just as easily become extremely efficient at screwing things up.

— Ernie P.

> 
>> 
>> E
> 
> sdw
> 



