[FoRK] Super-Intelligent Humans Are Coming

J. Andrew Rogers andrew at jarbox.org
Thu Oct 23 21:58:56 PDT 2014

> On Oct 23, 2014, at 5:57 PM, Stephen D. Williams <sdw at lig.net> wrote:
> One super-intelligent human has a <50% chance of being moral (the Superman Problem, I'll call it), while a group (SuperFriends?) is very likely to be moral due to fundamental and derived effects.  If the first generation isn't, they'll likely kill each other off.  But some generation will get it right.

Define “moral”. 

Human morality is a basket of evolved heuristics that reflect biology and other local constraints. Islands of stability and game-theoretic optimality are not even consistent across human populations. And much of popular human morality is predicated on near-parity of intelligence, an assumption being discarded here. We would lack the ability to even understand the context in which a super-intelligent person is evaluating the morality of an action.

See also: human morality as evaluated by mice in a mouse context

Compounding this, humans can rationalize almost any action as “moral” if placed in the proper context; normal humans do this regularly. A super-intelligent human could be expected to be superb at constructing the necessary context.
