The thing I've always liked about transhumanism is that it is focused on the future. (Regular ol' humanism seems to be largely concerned with the past, or at best, the present.) So, because this is my blog and I can do anything I want with it, I'm going to arbitrarily dedicate the month of March to blogging about the future. [Insert ooh-ing and ah-ing.]
And a good place to start is with an interesting perspective on progress vs. risk...
A common conception of the future now includes the idea of a technological singularity. Briefly, this singularity is the point at which some form of superhuman intelligence (usually conceived of as an AI, rather than augmented human intelligence) has evolved beyond our capability to understand or control it. Enter human extinction scenarios, as we start to worry that "superhuman intelligences may have goals inconsistent with human survival and prosperity."
So... 1) The Singularity will (probably) be the result of man's work in deliberately advancing computer intelligence to this point. 2) There's a very good chance things won't end well for us once this intelligence gets outside of our control. Probably a better chance that things won't end well than that they will, though I'd like to see a statistical analysis on that.
There are two approaches to avoiding our extinction at the hands of an uberintelligence... 1) Don't make an uberintelligence! 2) Convince yourself that your uberintelligence will be different and/or incapable of harming humanity, and blissfully go about creating it.
Enter the Prisoner's Dilemma... (The classic version of this dilemma is presented below.)
"Two suspects are arrested by the police. The police have insufficient evidence for a conviction, and, having separated both prisoners, visit each of them to offer the same deal. If one testifies (defects from the other) for the prosecution against the other and the other remains silent (cooperates with the other), the betrayer goes free and the silent accomplice receives the full 10-year sentence. If both remain silent, both prisoners are sentenced to only six months in jail for a minor charge. If each betrays the other, each receives a five-year sentence. Each prisoner must choose to betray the other or to remain silent. Each one is assured that the other would not know about the betrayal before the end of the investigation. How should the prisoners act?"
Now I'm going to overlay the structure of the Prisoner's Dilemma onto the issue of the Singularity.
Technology has not yet advanced far enough to support an artificial intelligence capable of reaching 'singularity'. In order to ensure that this critical state of technological development is never reached, each 'player' must forgo certain potentially beneficial advances in computer technology/algorithms. This is the only way to ensure that no participant suffers the negative consequences of a technological singularity. If one player defects from that objective and begins to experiment with AI, he may wind up with a 'better' short-term outcome for himself (in terms of job, prestige, etc.), but in a very real sense, he is gambling with the potential futures of every other player (and the rest of us), and his rewards may come at everyone else's expense. He is the betrayer in the Prisoner's Dilemma. (And don't buy that "devoted his life to improving the lot of humanity" crap.)
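Scaled up to many players, the same payoff structure holds. Here's a toy sketch; the numbers are pure invention on my part, chosen only so the incentives have the right shape (private gain for pushing ahead, catastrophe risk shared by everyone):

```python
# A toy many-player version of the dilemma, applied to AI research.
# All constants below are invented for illustration; only the shape of
# the payoffs matters.
PRIVATE_GAIN = 5.0        # career payoff to a researcher who pushes ahead
CATASTROPHE_COST = 100.0  # cost to each player if a singularity goes badly
RISK_PER_DEFECTOR = 0.01  # assumed chance each defector triggers catastrophe

def expected_payoff(i_defect: bool, other_defectors: int) -> float:
    """Expected payoff to one player, given how many *others* push ahead."""
    defectors = other_defectors + (1 if i_defect else 0)
    p_catastrophe = 1 - (1 - RISK_PER_DEFECTOR) ** defectors
    gain = PRIVATE_GAIN if i_defect else 0.0
    return gain - p_catastrophe * CATASTROPHE_COST

for others in (0, 5):
    hold = expected_payoff(False, others)
    push = expected_payoff(True, others)
    print(f"{others} other defectors: hold back = {hold:.2f}, push ahead = {push:.2f}")
```

With these (made-up) numbers, pushing ahead beats holding back no matter what the others do, yet a world where everyone holds back beats a world where everyone pushes ahead. That's the betrayer's gamble in a nutshell.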
In some sense, this argument can be made about the development of any potentially dangerous technology. And so we must weigh our belief in progress vs. the risks that progress represents. If the history of progress has shown us anything, it is that there is always going to be somebody who is willing to plunge on ahead, perhaps out of deluded self-confidence, or in search of glory/fame, or just because s/he can, risks be damned. The rest of us are just along for the ride. I'll echo Hughes at this point...
"Remaining always mindful of the myriad ways that our indifferent universe threatens our existence and how our growing powers come with unintended consequences is the best way to steer towards progress in our radically uncertain future." Unfortunately, wisdom is largely the product of hindsight.
"We need to carefully deliberate how best to reduce risks and expedite beneficial applications. We also need forums where people can constructively discuss what should be done, and a social order where responsible decisions can be implemented." Welcome to the month of March in my blog, wherein I'll try to do some of this.
But today I'm still bothered by two things...
- If much of the work on AI is driven by a desire to avoid the negative consequences of a technological singularity, why don't we simply stop working on trying to produce an artificial intelligence that's capable of reaching a singularity point? (The argument seems to be 'Well, somebody's going to do it; it might as well be me because I can do it better/safer.')
- What benefits do we expect to derive from the creation of an artificial intelligence that outweigh the potential risks? (Maybe I should read this...)
Update 03/05/10: I don't intend to do a lot of debating about the issue of AI, but I'm more than happy to give you access to both sides of the story. Heck, if I hadn't come of age in the era when computer languages were still ridiculously simplistic, I might have been intrigued enough by the idea of AI to work on it myself.