Prince Charles gets nervous, neo-Luddite meme gathering steam

Jeff Bone jbone at deepfile.com
Mon Apr 28 12:39:28 PDT 2003


On Monday, Apr 28, 2003, at 10:43 US/Central, Jeff Bone wrote:

>
> Russell:
>
> Now, for completeness, argue the other side.
>
> jb

I'll even seed your counter-Devil's-advocate discussion...

>> (1) The private sector usually leads government in
>> applying innovation. This isn't true for pure research.
>> But when it comes to bending science and technology
>> to practical purpose while decreasing cost and
>> difficulty, capitalism is an accelerator that no one
>> has yet beat.

This observation seems accurate, but I'm not sure how it relates to 
the discussion.

>> (2) Wackos, fundamentalists, criminals, and terrorists
>> are quick to pick up applied technology that furthers
>> their cause. There were fundamentalist bulletin boards
>> back in the days when a 300 bps modem was fast.

The "bad guys" are going to have disproportionate / asymmetric damage 
capabilities regardless of what happens;  in a post-9/11 era, you might 
even say they already do, and this trend will only increase, no matter 
what.  All that anti-progress legislation and non-funding will 
accomplish is to slow the abusers down slightly while completely 
halting work on appropriate countermeasures.  "If you outlaw guns, 
only outlaws will have guns."  (BTW, whatever you think of the merits 
of that argument, it plays to the sensibilities of many who might 
otherwise be co-opted into the neo-Luddite camp.)

>> (3) Dangerous technology falling into the wrong hands
>> is an external cost. The customer who buys a DNA
>> fabricator has an individual interest in knowing that
>> it will function as intended, and that it has
>> safeguards against misuse. But he doesn't have any
>> individual interest that the underlying technology,
>> back at the lab, is kept out of the wrong hands. That
>> interest he shares with the rest of the world, and is
>> unlikely to surface as part of his purchase decision.

Again, this is a valid observation, but IMHO it's tangential, or at 
best an argument for gov't oversight rather than for prohibition.

>> (4) Absent regulation, businesses will exploit
>> external costs for profits. That is an economic law
>> as certain as any other. Even if one business
>> exercises ethical constraints in such matters, that
>> simply highlights the opportunity for others to do
>> differently. This happens in every field, in every
>> domain. Polluting processes are moved to regions
>> where their pollution is not regulated. Risky
>> processes are moved to regions where occupational
>> safety has a low priority. Spam thrives because it
>> converts an external cost to internal profit.

Valid observation.  It points to the futility of legislative control 
and underscores the need for defensive countermeasures.

>> (5) ...

So the argument is that the private sector can't hack the necessary 
security?  Well, maybe so --- though I imagine that start-ups, 
corporations, and so forth all have an equally vested interest in not 
turning the planet into computronium goo, at least prematurely. ;-)  
And BTW, the grey goo scenario, at least, is easily avoidable:  make 
self-replicating assemblers require some rare element in the 
replication process.  Similar built-in brakes aren't so obvious for 
AI, Eli's convoluted "Friendly" concept notwithstanding.
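
To make that brake concrete, here's a toy sketch in Python.  This is 
purely illustrative: all names and numbers are invented, and it 
assumes each assembler consumes one unit of a scarce element per copy.

    # Toy model of the "rare-element brake": replication stalls once
    # the scarce feedstock is exhausted, capping exponential growth.
    # All numbers are invented for illustration.
    def run(generations=10, population=1, feedstock=100):
        for g in range(generations):
            copies = min(population, feedstock)  # 1 unit per new copy
            population += copies
            feedstock -= copies
            print(f"gen {g}: population={population}, "
                  f"feedstock={feedstock}")

    run()
    # Population doubles only while feedstock lasts (through gen 6
    # here), then growth stops cold: the scarce input is a hard brake.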

OTOH, the general arc of your argument sounds to me like it's more 
pro-gov't funding and oversight of e.g. nanotech, rather than 
pro-legislative prohibition.

>> (6) An evolutionary arms race between abusers of
>> technology and people trying to defend against the
>> abuse will make victims of most people.

I'm not sure this makes sense, given the nature of the technology 
being discussed.  Without benevolent applications of the technologies 
in question (strong countermeasures in particular), the abusers will 
certainly victimize everyone.  Given a struggle between the two, it's 
*perhaps* a toss-up --- but a fighting chance is better than no chance 
at all.

Short version of the argument "for" follows.  One of two possibilities 
is certain to happen:

(1)  Humanity and its intellectual descendants will eventually cease to 
exist, probably sooner rather than later.
(2)  Humanity and its intellectual descendants will proliferate 
throughout the universe.

Per (1), the existential risks to planetary humanity are individually 
small in terms of short-term probability (and unavoidable long-term) 
but carry an enormous cost; risk-adjusted analysis therefore favors 
working to ameliorate these risks --- particularly by getting 
off-planet and out of the neighborhood ASAP.  The risks scale from 
planetary-scale extinction events up to universal (and hence 
*probably* unavoidable) ones.  Risks that are "external" have to be 
weighted more heavily than risks that are self-inflicted.
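
To make the risk-adjusted point concrete, here's a back-of-the-envelope 
sketch in Python.  All figures are illustrative assumptions, not 
estimates from this thread:

    # Expected-loss arithmetic: a tiny probability times an enormous
    # cost still yields a large expected loss.  Numbers are invented.
    p_per_year = 1e-4     # assumed annual extinction probability
    cost = 1e17           # assumed cost of extinction (arbitrary units)
    expected_loss = p_per_year * cost
    print(expected_loss)  # 1e13 per year, despite the tiny probability

    # A mitigation (say, off-planet redundancy) that halves the risk
    # is "worth" half the expected loss, every year:
    print(expected_loss - (p_per_year / 2) * cost)  # 5e12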

Per (2), it's not really credible to think that, Star Trek-like, we're 
going to go zipping about the universe in the same form we have today, 
with similar average lifetimes, etc.  Economics, psychology, and so 
forth all argue convincingly against that rosy picture.  So the 
net-net seems to be that if humanity is to have its diaspora and 
survive long-term in any sense, it will have to be fundamentally 
different from what we think of as "human" today.

jb
