Sunday, March 21, 2010

Future Most Probable

"The empires of the future are the empires of the mind."

When I think about the future, I see a complicated web. Attempting to isolate one problem or issue to discuss is difficult, but here goes...

A couple weeks ago I had occasion to be in the pharmacy section of a large drugstore chain on a busy Saturday. And the pharmacy was handing out prescription after prescription - both at the counter and at the drive-thru. Prescriptions frequently came with verbal admonitions ('warnings') about possible side-effects. Several things occurred to me in those minutes...

1) Culturally, we have a 'there's an app for that' approach to illness, rather than a holistic approach to health.

2) Belief in the power/safety of the app is more important than understanding the actual workings of the app.

3) Our individual willingness to invoke an app is generally not matched by our individual ability/willingness to measure its effects or critically examine the outcome.

Pause for a story...

We'll call our protagonist Lady. Lady was experiencing episodes of extreme emotion (sadness) in her life. She knew that these episodes 1) were out of character for her, and 2) did not correlate to any easily-identifiable psychological triggers. She had done a fairly-thorough assessment of her life to try to determine if there was in fact something psychological going on. Was she unhappy with her job? (No.) Was she experiencing a mid-life crisis? (No. She was mostly content with what she had.) Having talked to Lady extensively during this time, I was impressed by the depth of her introspection.

Eventually Lady began to look for a chemical explanation for these episodes. Were they tied to her menstrual cycle or birth control? (No.) Perhaps something she was eating? Through a combination of internet research and an elimination diet, she was able to isolate Chemical X as the causal agent. Eliminating this chemical from her diet also eliminated the episodes of extreme emotions. Reintroducing the chemical brought them back. Lady 1) suffered unnecessarily for a period of time, but 2) was able, through introspection and rational analysis, to eliminate the source of her suffering.

This story is representative of much of what I see (and hope for) in the near future. I see a continuing growth in the realization that simply because something is available does not mean that it is safe. I see our increased reliance on pharmaceuticals and artificial chemicals bringing us to a crisis point with respect to the issue of Safety, and also with respect to the issue of Identity.

To some degree these two issues are intertwined, and there very well may be an Event in the near future that captures our collective attention and highlights this. (It's amazing to me that we still have as high a tolerance/acceptance for pharmaceutical intervention as we do, given all the stories about ineffectiveness and unintended side-effects that have surfaced.) But it is not difficult to predict that as more people gain more experience with a wider range of pharmaceuticals/chemicals and their psychological consequences, the issue of Identity will be brought to the forefront of our collective consciousness. Questions like 'What am I if a drug can make me do/feel this?' will demand answers as never before.

The nature of human-ness, consciousness, and our sense of identity will be topics of increasing popular interest. When I think about where people will turn for the answers to these questions, I see no ready area of information. The word 'spirituality' comes to mind, but I would like to see that word replaced by something that indicates an informed, supportive environment that can facilitate introspection and self-awareness, as well as provide knowledge (scientific knowledge) about the phenomenology of consciousness. This does not currently occur in our educational system, nor in most systems of religious instruction. It is something that must be sought out and/or developed by the individual; it is not currently a part of our societal awareness. I hope that this will change, and there are promising indications that this can happen.

I predict that we will also need to increase the scope and breadth of our collective dialogue as to our responsibilities to the next generation. To what degree do they deserve (and can we impose on them) modification without representation? Several months ago I sent a letter to the Center for Cognitive Liberty and Ethics, asking (among other things) about the status of their organization. It seems to me that their public activities (publishing, etc.) have fallen off quite a bit since the mid-2000s. I still have not received a reply. This disturbs me because I think that we are only beginning to see the complexity of the issues that will arise as neurological modification becomes more prevalent.

I should take a minute to point out my own biases in this area. I am generally hyper-aware of the cognitive effects of drugs in my system. I can distinguish and describe the cognitive effects of ibuprofen and acetaminophen, even though neither drug is intended to produce them. I came of age during the height of the 'war on drugs', and that may have predisposed me to have a negative or cautionary view of pharmaceutical intervention. (It's a possible bias; I acknowledge it.) I have a background in biology and an appreciation of the complex role that a single chemical can play within a living system. Perhaps this is why I have reservations about casually introducing a chemical into that system, especially if all of its potential effects are not known up front.

So perhaps I am projecting my own concerns into my vision of the future. Or perhaps there really will be an increasing collective movement towards understanding the conscious experience that we call 'human'. Perhaps we will take up the following questions together... Who are we when our Identity - our behaviors and the way we process information - has been visibly altered? When our conscious continuity with the past is significantly disrupted by artificial means - when we are no longer predictable in the same way as we were before - how responsible are we for those changes and the resulting actions? What responsibility do we bear to others who have lost much of their Identity to something like Alzheimer's? Upon whom, and why, and how, can we inflict attempts to modify Identity for the better, or to serve our own ends?

It's easy to tout individual responsibility (and I am very proud of Lady for the way she approached and took control of her own well-being), but this ignores the issues surrounding those who are dependent on us and who cannot make informed decisions for themselves. And it ignores the ethics of exercising power over others because we feel justified in doing so.

I guess it's pretty clear by now that I see this as one of the most pressing and challenging philosophical and ethical issues that we face in the near future. I am happy to see conferences attempting to address these issues. (I'll be at this one, and I plan to blog about it.) But so much of the thinking on these issues remains isolated within the academic/intellectual realm. So much of what is in the larger sphere - what the general public is exposed to - seems to be a reinforcement of the 'there's an app for that' mentality. Selling the app, and convincing us that we need it: these are the media images that surround us.

The lack of general knowledge and appreciation for biological complexity, combined with easy access to pharmaceuticals, is beginning to be socially-problematic, yet we have no targeted approach for educating children (or adults) about these issues. (Our Fair State only recently OK'd teaching birth control in sex education classes. I have never understood how perpetuating ignorance solves a problem, but that's a topic for another post...) It's easy to say that education is the answer, but I believe that the answer is going to be whatever facilitates an appreciation for the fragility and malleability of consciousness and identity. We will need to socially reinforce the idea that integrity of mind is sacrosanct. How exactly this should be accomplished, I do not know (though I have some ideas), but I do see it as the most-probable philosophical and ethical crisis point for which humanists and transhumanists should be preparing.

Sunday, March 14, 2010

The Future of Aging

Change is the constant of sentience.

I'm staring down the barrel of another birthday and, although it's not one of the 'big ones', for some reason I find that I am acutely aware of aging. Perhaps it's because my hair started to go gray en masse this past year. Perhaps it's the unsolicited invitation to a fertility clinic that I received a week ago, based (I presume) on nothing other than my age.

Yet, though I am faced with physical reminders of age, I don't feel old. I feel like there is so much that I haven't done, therefore I can't be getting old. I haven't been married, borne children, or owned property, therefore I can't be getting old, right? Right? I know; I'm confounding physical age with a more-ambiguous trait that is the product of experience. [Insert various platitudes on 'age is a state of mind'.] But physical aging creepeth up on me and perhaps this makes for good future-fodder for the blog...

A seemingly-unassailable position of many futurists/transhumanists is that aging is a bad thing, or, at least, that it is an obstacle to be overcome. Aging, you see, is the road to death. One transhumanist says this of aging and death...

"So we tell ourselves curing aging will cause too many problems and that aging has a lot of natural beauty to it and creates a lot of meaning and that all of that is good. But I think there is one other reason. Imagine we suddenly discover we can cure aging. It’s simple, cheap, universal, and we manage to quickly adapt society to deal with an undying population. All of the impacts described by bioconservatives don’t exist, anti-aging is a glorious and beautiful time and everyone lives for centuries.

The cost is the realization that every death was preventable. That billions of people have been, in effect, tortured for decades by nature and because we could not change it we described it as beautiful and honorable. The crisis in our collective psyche would be something of unparalleled magnitude. Our species is a master at making virtue of necessity, but what becomes of our virtue when that necessity ceases to be? Does it cease as well?"

When I pause to reflect on myself, I see a heavily-modified consciousness walking around in a comfortably-owned body. The heavily-modified consciousness is a topic for another day, but the comfortably-owned body is worth discussing. Certainly that body is not perfect. It's probably quite far from anyone's definition of perfect. Knowing this, I might ask myself - Why haven't I done more to change it? Why haven't I pushed harder to lose those extra pounds? Why have I accepted the damage that has accumulated over time?

Munkittrick's question was 'Why do we accept aging?', but I do not think that the answer is as simple as 'We accept it because we have no choice.' Certainly there are people who fight it every step of the way, with diet and exercise and (sigh) surgery. The primary objection to aging seems to be to the deterioration of the physical body and the reduction of its capabilities, yet there is a large portion of our society that is all-too-willing to engage in activities that prematurely or unnecessarily damage the body, or who at least seem unwilling to take proper care of their bodies. (That whole diet and exercise thing?) It's like we're inviting aging, and challenging it to ravish us. Do we do this because we're faced with inevitable death, and happiness can only be found in embracing, nay hastening, that outcome? I don't think so.

Age also marks various degrees of status, and life seems to be a race to get to that pinnacle age/status, followed by a prolonged battle to stay there. Evolutionary biologists will tell you that our genes are programmed to seek prime reproductive material, and that we respond to signs of physical age accordingly. Presumably this is also the source of all our efforts to camouflage our physical age. So what happens as we become better and better at hiding those signs of age? And what happens as physical age becomes further-dissociated from one's ability to reproduce? What status/traits will replace physical age as the primary determinant of desirability, and how will they be signaled? [Here I pause for extensive thought on what and how I am/should be signaling with respect to reproduction and the fact that, while I have relationship aspirations, I don't have an overwhelming desire to bear children and would be perfectly happy not doing so. Should I quit coloring my hair and display the markings of age with pride, or continue to engage in the youth-is-beauty driven attempts to 'stay young'? My introspection is messy; this post has been heavily-edited to remove most traces of it.]

It may take quite a bit of time before we evolve past our (genetic?) reaction to the physical signs of aging. In the meantime, we'll continue to fight the physical process of aging with science and technology. As we do so, we must not ignore the pressing social issues of aging that we are currently faced with, such as care and quality of life, and the right to die. We can't ignore the fact that a great many people live lives that they wouldn't necessarily want to prolong. We should strive to have a firm handle on these ethical issues before we are gifted with greatly-extended lifespans. (Fodder for future posts.)

Having managed to find my soapbox again, it's probably time to stop writing, but after spending several hours thinking about how I felt about aging, I find that I am not so troubled by my gray hair. I believe in what I've done with my life so far, and I do believe in the platitudes that say that age is a state of mind.

"None are so old as those who have outlived enthusiasm."

"People like you and I, though mortal of course like everyone else, do not grow old no matter how long we live...[We] never cease to stand like curious children before the great mystery into which we were born."

Tuesday, March 2, 2010

Back to the Future... The Prisoner's Dilemma

"Transhumanists have inherited the tension between Enlightenment convictions in the inevitability of progress, and Enlightenment’s scientific, rational realism that human progress or even civilization may fail." (q)

The thing I've always liked about transhumanism is that it is focused on the future. (Regular ol' humanism seems to be largely concerned with the past, or at best, the present.) So, because this is my blog and I can do anything I want with it, I'm going to arbitrarily dedicate the month of March to blogging about the future. [Insert ooh-ing and ah-ing.]

And a good place to start is with an interesting perspective on progress vs. risk...

A common conception of the future now includes the idea of a technological singularity. Briefly, this singularity is the point at which some form of superhuman intelligence (usually conceived of as an AI, rather than augmented human intelligence) has evolved beyond our capability to understand or control it. Enter human extinction scenarios, as we start to worry that "superhuman intelligences may have goals inconsistent with human survival and prosperity."

So... 1) The Singularity will (probably) be the result of man's work in deliberately advancing computer intelligence to this point. 2) There's a very good chance things won't end well for us once this intelligence gets outside of our control. Probably a better chance that they won't end well than that they will, though I'd like to see a statistical analysis on that.

There are two approaches to avoiding our extinction at the hands of an uberintelligence... 1) Don't make an uberintelligence! 2) Convince yourself that your uberintelligence will be different and/or incapable of harming humanity, and blissfully go about creating it.

Enter the Prisoner's Dilemma... (The classic version of this dilemma is presented below.)

"Two suspects are arrested by the police. The police have insufficient evidence for a conviction, and, having separated both prisoners, visit each of them to offer the same deal. If one testifies (defects from the other) for the prosecution against the other and the other remains silent (cooperates with the other), the betrayer goes free and the silent accomplice receives the full 10-year sentence. If both remain silent, both prisoners are sentenced to only six months in jail for a minor charge. If each betrays the other, each receives a five-year sentence. Each prisoner must choose to betray the other or to remain silent. Each one is assured that the other would not know about the betrayal before the end of the investigation. How should the prisoners act?"

Now I'm going to overlay the structure of the Prisoner's Dilemma onto the issue of the Singularity.

There is currently insufficient advancement in the technological realm to support an artificial intelligence that would be capable of reaching 'singularity'. In order to ensure that this critical state of technological development is not reached, each 'player' must forgo certain potentially beneficial advances in computer technology/algorithms. This is the only way to ensure that no participant suffers the negative consequences of a technological singularity. If one player defects from that objective and begins to experiment with AI, he may wind up with a 'better' short-term outcome for himself (in terms of job, prestige, etc.), but in a very real sense, he is willing to risk the potential futures of every other player (and the rest of us). He is gambling, and his rewards may come at the expense of everyone else. He is the betrayer in the Prisoner's Dilemma. (And don't buy that "devoted his life to improving the lot of humanity" crap.)
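
And here's the same sketch transposed onto the AI race. To be clear, the payoff numbers below are my own invention, purely for illustration; only their ordering matters, and it reproduces the ordering of the classic dilemma.

```python
# The same payoff structure, transposed onto the Singularity scenario. The
# numbers are hypothetical, invented here purely for illustration (higher is
# better); only their ordering matters, and it mirrors the classic dilemma.

RESTRAIN, DEVELOP = "restrain", "develop"

# payoff[(my_choice, rival_choice)] -> my expected outcome
payoff = {
    (RESTRAIN, RESTRAIN): 3,  # everyone holds back: singularity risk avoided
    (RESTRAIN, DEVELOP):  0,  # rivals race ahead: I share the risk, get none of the rewards
    (DEVELOP, RESTRAIN):  5,  # I defect: the short-term job/prestige gains are mine
    (DEVELOP, DEVELOP):   1,  # everyone races: maximum shared risk
}

for rival_choice in (RESTRAIN, DEVELOP):
    best = max((RESTRAIN, DEVELOP), key=lambda mine: payoff[(mine, rival_choice)])
    print(f"If rivals choose {rival_choice!r}, my best individual response is {best!r}")

# As in the classic dilemma, 'develop' dominates for each individual player,
# so everyone races, even though collective restraint would leave everyone
# (including the rest of us) better off.
```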

In some sense, this argument can be made about the development of any potentially dangerous technology. And so we must weigh our belief in progress vs. the risks that progress represents. If the history of progress has shown us anything, it is that there is always going to be somebody who is willing to plunge on ahead, perhaps out of deluded self-confidence, or in search of glory/fame, or just because s/he can, risks be damned. The rest of us are just along for the ride. I'll echo Hughes at this point...

"Remaining always mindful of the myriad ways that our indifferent universe threatens our existence and how our growing powers come with unintended consequences is the best way to steer towards progress in our radically uncertain future." Unfortunately, wisdom is largely the product of hindsight.

"We need to carefully deliberate how best to reduce risks and expedite beneficial applications. We also need forums where people can constructively discuss what should be done, and a social order where responsible decisions can be implemented." Welcome to the month of March in my blog, wherein I'll try to do some of this.

But today I'm still bothered by two things...

  • If much of the work on AI is driven by a desire to avoid the negative consequences of a technological singularity, why don't we simply stop working on trying to produce an artificial intelligence that's capable of reaching a singularity point? (The argument seems to be 'Well, somebody's going to do it; it might as well be me because I can do it better/safer.')
  • What benefits do we expect to derive from the creation of an artificial intelligence that outweigh the potential risks? (Maybe I should read this...)

Update 03/05/10: I don't intend to do a lot of debating about the issue of AI, but I'm more than happy to give you access to both sides of the story. Heck, if I hadn't come of age in the era when computer languages were still ridiculously simplistic, I might have been intrigued enough by the idea of AI to work on it myself.