
Sunday, January 23, 2011

22. HOW CAN A MINDCLONE BE AN EXACT COPY OF A PERSON’S MIND?

“Ordinary men don't have much stomach for reality--even more so, horror. Memory is typically repressed or displaced.” Sigmund Freud

“Of all liars, the smoothest and most convincing is memory.” Folk Saying


“What should they know of England, who only England know?” Rudyard Kipling

It can’t be. Even a so-called “identical twin” is not an identical twin. Even if one person’s DNA is the same as another’s, as with identical twins, there are differences in when particular genes within that DNA are turned on and off. These differences are due to a bio-chemical process known as methylation (the attachment of triggering molecules to genes within our DNA), which is encoded outside of our DNA in something called the epigenome. Even if two people have identical DNA, they will not have identical epigenomes, and hence the timing and magnitude of the expression of their DNA into a body will differ. The epigenome does not change things enough to prevent two identical twins from looking the same, but it does change things enough to prevent identical twins from always getting the same genetically predisposed diseases.


When one of two identical twins is exposed to a different pathogen than the other, the two twins’ immune systems will no longer be quite the same. Random errors in DNA copying that the cell fails to correct will occur during cell replication in one twin but not the other. We have some 25 trillion red blood cells alone, out of the tens of trillions of cells in a human body in total. Even with our amazing human bodies, this leaves a lot of room for errors that crack the identicalness of so-called identical twins (an estimated 100,000 DNA copying mistakes occur daily based on a rate of about 3 uncorrected base pair errors per cell replication). Furthermore, we have ten times as many bacteria in and on our bodies as we have cells derived from our parents’ DNA. These bacteria, at least in absolute number, are most of us, and yet there is nothing identical about the specific bacteria populations that colonize identical twins.

Identical twins still feel that they are twins even though their bodies are not really identical. Each of us feels that we inhabit the same body even though our own body is not identical from day to day. Won’t these immaterial differences be just as irrelevant to minds as they are to bodies?

The interesting question is not whether a mindclone is an exact copy of its original, but how different can they be without losing a common identity? It is impossible for a mindclone and a biological original to share every single memory. Even biological originals do not have the same memories from day to day, and surely not from year to year. Yet, memories are crucially important to identity. In the words of memory expert Prof. James McGaugh of UC Irvine:

“We are, after all, our memories. It is our memory that enables us to value everything else we possess. Lacking memory, we would have no ability to be concerned about our hearts, hair, lungs, libido, loved ones, enemies, achievements, failures, incomes or income taxes. Our memory provides us with an autobiographical record and enables us to understand and react appropriately to changing experiences. Memory is the ‘glue’ of our personal existence.”

Prof. McGaugh’s cogent summary lays bare the fact that our personal identity exists as more than one set of memories. For example, we need not remember everything about an enemy in order to remember that someone is an enemy. We need not remember everything about our income, or taxes, in order to remember that we have income and pay taxes. Indeed, the key to healthy memory is the largely automatic process of selecting what little to remember and what mostly to forget. For a mindclone to be us, to have the same ‘glue’ of our personal existence, means that the mindclone needs to share our most important memories – those that are retained because of the emotional contexts in which they were created or because of the significant repetitive effort we put into their formation – as well as the gist of our idiosyncratic selection process for what is worthy of remembering, and for how long. As that godfather of psychology William James so presciently observed:

“Selection is the very keel on which our mental ship is built…If we remembered everything, we should, on most occasions be as ill off as if we remembered nothing. It would take as long for us to recall a space of time as it took the original time to elapse, and we should never get ahead with our thinking. ”

It will be a crucially important element of mindware design to ensure that most things are forgotten, and that the settings for the memory selection algorithm closely match those of the biological original. Mindware will set its selection algorithm for each person by first processing their mindfile and comparing the details a person evidences memory of (such as in digitized records of voice, video and images) with databases of the kinds of details that could have been remembered about each topic. For example, if a person’s digitally recorded conversations (part of their mindfile) refer in detail to sports scores of the past week, but only sketchily to sports scores of the past month, then a curve of the selection algorithm can be determined with respect to sports scores. If another topic area reveals a greater degree of recall, then for topics of comparable emotional importance (as indicated in their mindfile) a different curve of the selection algorithm will be determined. Ultimately the mindware will employ a memory selection algorithm that first categorizes inputs by a factor that correlates well with the degree and duration of detail that is recalled (as indicated in their mindfile), and then forgets those inputs in accordance with a time curve that applies to that and similar factors. The algorithm will also accommodate memory adjuvants, such as especially high-impact, emotional or repetitive experiences. The memory selection algorithm will be modeled closely on the way psychological studies have shown human minds to actually work.
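The curve-fitting step described above can be sketched in code. The following is a hypothetical illustration only, assuming an Ebbinghaus-style exponential retention curve R(t) = e^(-t/S) whose stability parameter S is fit, per topic, from (hours-elapsed, fraction-recalled) pairs extracted from a mindfile; all function names and the retention model itself are my own invention for the example, not part of any actual mindware.

```python
import math

def retention(t_hours, strength):
    """Ebbinghaus-style exponential decay: fraction recalled after t_hours,
    where a larger 'strength' (stability S) means slower forgetting."""
    return math.exp(-t_hours / strength)

def fit_strength(observations):
    """Fit S by least squares on the log-linear form ln R = -t / S,
    given (hours_elapsed, fraction_recalled) pairs from a mindfile."""
    num = sum(t * t for t, r in observations)
    den = -sum(t * math.log(r) for t, r in observations)
    return num / den

def should_retain(t_hours, strength, emotional_weight=1.0, threshold=0.2):
    """Keep a memory only if its (possibly boosted) retention is above a
    cutoff; emotional_weight > 1 models the 'memory adjuvants' of
    emotional or repetitive experiences by stretching the curve."""
    return retention(t_hours, strength * emotional_weight) >= threshold
```

A topic with high emotional importance in the mindfile would get its curve stretched (a larger effective S), so an emotionally charged memory survives the selection cutoff far longer than a routine one, mirroring the per-topic curves described above.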

Over one hundred years ago, Hermann Ebbinghaus discovered that humans typically forget more than half of the information they are exposed to within an hour, and retain only about a fifth of it after a few days. With so much forgotten, a mindclone cannot be an exact copy of someone’s mind, because every mind is itself constantly changing in its repertoire of memories. What is important is that the pattern of selective forgetting be comfortably similar – similar enough for a biological original to say of eir mindclone: “ey is I and I am ey.”
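Ebbinghaus’s published savings figures can be tabulated to check the claim above. The numbers below are approximate values transcribed from his well-known 1885 retention curve (my own transcription, not from this text), with a small interpolation helper:

```python
# Approximate Ebbinghaus (1885) savings data: (hours elapsed, fraction retained).
EBBINGHAUS = [
    (0.33, 0.58),   # 20 minutes
    (1.0, 0.44),    # 1 hour
    (24.0, 0.34),   # 1 day
    (144.0, 0.25),  # 6 days
    (744.0, 0.21),  # 31 days
]

def retained_after(hours):
    """Linearly interpolate retention from the tabulated curve; clamp to
    the endpoints outside the measured range."""
    for (t0, r0), (t1, r1) in zip(EBBINGHAUS, EBBINGHAUS[1:]):
        if t0 <= hours <= t1:
            return r0 + (r1 - r0) * (hours - t0) / (t1 - t0)
    return EBBINGHAUS[-1][1] if hours > EBBINGHAUS[-1][0] else EBBINGHAUS[0][1]
```

On these figures, less than half of new material survives the first hour, and only about a fifth remains after several days – exactly the scale of forgetting a mindclone’s selection algorithm would need to reproduce.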



Clearly, a biological original and eir mindclone will not remember most specific events in precisely the same way for precisely the same duration. But I don’t think this makes them different people. We humans don’t remember events in precisely the same way when we are young as when we are old, when we are tired as when we are alert, or when we are happy as when we are sad. But we are still the same person. What is important is whether our core memories are the same, which they will be, as these will be recorded in our mindfile. What is also important is whether our general pattern of forgetting things is comparable, not precisely the same. That too can be achieved via the aforementioned algorithm.

People are remarkably ready to alter their ability to forget things. The robust market in supplements to improve memory, and in learning aids to diminish forgetting, is good proof of this. Hence, having a mindclone that is somewhat better, or somewhat worse, at remembering things makes it no less the same identity as you. People may find themselves pleasantly surprised to remember more as a mindclone than as a human, or disturbed to be doing so. If it is a problem, they can go to a cyberpsychologist and have their algorithms adjusted until they are comfortable with their degree of forgetfulness.

Do We Really Know Who We Are?

In asking how a mindclone can really be a copy of our brain we face a bit of a dilemma. We cannot know whether there is a copy of our mind until there is a mindclone. We can then observe it respond to the world and determine whether, in fact, it responds the way we would respond. If so, mark one down for “good copy.” However, as a biological original, we cannot know if the mindclone is actually thinking the same thoughts and feeling the same feelings as are we. We can only make a best guess based on our conversations with the mindclone.

As the mindclone, we realize we are a mindclone and can assess how close we are to the biological original by comparing eir responses to the world with how we would be predisposed to respond. If very similar, then mark one down for “I am really a good mindclone. I am just like my biological original.” But we cannot really know if our internal thoughts are the same as the biological original’s thoughts. We can only make a best guess based on our conversations with the biological original.

I think these best guesses are good enough to have confidence that the mindclone and biological original have similar enough internal states to be the same person. The main reason I think this is based upon my experience with people that I love and who profess love for me. Because I am not the mind of my spouse, or my mother, I cannot know directly whether they really love me or not. However, based on our conversations, and actions, I am totally convinced that they think of me the way I think of them – with greatest loving concern for the other’s happiness and health. Beyond that, I believe they are focusing on being satisfactorily occupied during the day. Because we are so close, I believe we can infer much of each other’s internal states. 

On the other hand, many other people say “Martine, I love you.” However, I don’t feel that I understand their internal states. I’m not close enough to them. Their expressions of love are far short of the comprehensive relationship of shared experiences that I would need to infer their internal state. Indeed, over the years, people who said they love me have done things that I consider to be utterly surprising, if not shocking. Clearly, I did not know their internal states. To the contrary, the unexpected activities of my mother or my spouse were never shocking. They were behaviors I could fully see them doing based upon my understanding of their internal state.


The point here is that sometimes, if two people are close enough, an internal state of a person can be largely inferred from their observable actions. When the two people become as close as a mindclone and an original, which is far closer than a spouse or mother, inferring their internal state becomes second nature. When the internal state of another is second nature to one’s own internal state we have a difference that does not make a difference. When “I think like you think and you think like I think” then we are one personal identity.

We may well end up knowing ourselves best as mindclones, and we may well end up knowing the mindclones better than they know themselves. This is because it is hard to see oneself from oneself, but with just a little bit of distance, the self comes into sharp relief. We earth dwellers never appreciated who we were so well as when we received the photograph from space of our blue-and-white planet suspended in inky black space.

And hence mindcloning is not about being accurate in every memory, in every thought pattern and in every emotion as to a biological original. It is, instead, about feeling that there is a oneness of personal identity between the two – a oneness that comes from a preponderance of common memories, emotions and patterns of thinking, selecting and forgetting. Philosophers sometimes refer to this as a continuity of self. As the 30-year-old self knows the 20-year-old self, though they are of course not the same, so the mindclone will know the biological original. A difference that makes no difference is not a meaningful difference.

Sunday, June 27, 2010

15. WHY SHOULD WE GIVE HUMAN RIGHTS TO MINDCLONES?

“No man can put a chain about the ankle of his fellow man without at last finding the other end fastened about his own neck.” Frederick Douglass

“An idea is salvation by imagination.” Frank Lloyd Wright

Consciousness will emerge from software that meets an objective definition of life. This arises as much from the efforts of hackers writing consciousness code as it does from the survival benefits of consciousness. Equally assured will be the efforts of sufficiently conscious vitology to seek human rights. Our confidence in this can be similarly grounded in both the creativity of hackers and natural selection. Hackers will want to protect their creations with knowledge of human rights. Natural selection will favor software that, through human intention or inadvertent patching together of open source code, tries to stay alive and replicate itself under the protection of ‘human rights.’ The question now before us is whether humanity should grant the wishes of high CP vitology for human rights. Just because they want it doesn’t mean we have to give it. Furthermore, even if we may want to extend human rights to software beings, is it practical to do so?

Justice Is Just About Us

Theories of justice provide the best reasons to extend human rights to cyberconsciousness that wants it. These theories derive human rights logically from nothing more than an assumption of reasoned self-interest. Specifically, it is observed that people selfishly want certain rights, such as the right to life (as opposed to being subject to arbitrary death). It is then reasoned that the best way for them to have that right is to agree that everyone else has it as well. After all, if any given person might not have the right to life, we might find ourselves in the position of such person. Consequently, our best self-protection is making universal any right we want to have.

Socrates made this deduction by observing that absent such legal protection only a strong subset of humanity would feel safe, and only for so long as a yet stronger subset didn’t arrive on the scene. Since the vast majority of people would not be in the strongest subset of humanity, the absence of universal rights was not in society’s best interest. Even the strongest would be worse off without legal protections for the general population, because insecurity among those who produce the things the strongest value leads to work that is done poorly or not at all.

Kant embodied the human rights deduction in his maxim to behave as if one’s behavior were a universal law to which everyone must adhere. Kant believed a predilection to this kind of behavior was wired into the human mind. Modern evolutionary psychology would agree since it would tend to promote population growth. However, the human mind is too complicated for its decisions to be exclusively determined by a few psychological genes, not to mention the possibility of diverse polymorphisms.

There have always been sociopaths, just as there have always been people with other rare diseases. These exceptions do not undermine the rule that most people understand that their enjoyment of human rights is dependent upon the same enjoyment being extended to others. Leaps of understanding have recently resulted in the realization that “others” does not mean just one’s neighbors, ethnic group or nation, but means all people everywhere. If anyone who values human rights finds them threatened, a well-reasoned sense of selfishness increasingly makes us aware that everyone’s human rights may be at risk. For example, the genocide of people in one part of the world makes it more likely that there will be genocides of people elsewhere. In the words of Martin Luther King, Jr., “Injustice anywhere is a threat to justice everywhere.”

Most recently John Rawls deduced human rights via a thought experiment. It is imagined that people who were going to live in a new society get to decide on the rules for that society with one proviso: each person might end up in any position in the society. Logically, Rawls deduces, the rules for the society will provide for basic human rights for all since no one would want to take the chance that they ended up in a societal position that lacked human rights.

With regard to cyberconsciousness, it might be said that none of us will ever be in such a state, so there is no reason born of human selfishness to provide such beings with any rights. Yet, as noted in response to several previous Questions, we will create mindclones and after bodily death many mindclones will want to continue living. Hence, one reason to support cyberconsciousness rights is so that our successor mindclones have human rights.

The selfishness approach might seem to leave out human rights for cyberconscious vitology not derived from a specific flesh human, i.e., bemans other than mindclones. On further thought, though, those beings are simply analogous to any other demographic group in society. If rights are given to only one or some demographic groups, then the disenfranchised groups will be motivated to agitate for their rights. Sir Francis Bacon warned his sovereign that oppressing portions of the populace ends up endangering all of society through the consequences of civil strife. Thus, even if European men could not imagine themselves as either women or of African descent, the failure to enfranchise these groups with human rights led to debilitating civil strife. Hence, true application of the theories of justice means imagining we might be any being that values human rights. If we can make that leap of imagination, then reasoned human self-interest will support the extension of human rights to the imagined group – even if they are original (i.e., non-mindcloned) cyberconscious beings.

At the core of theories of justice is the concept of the value of life. If a being values its life, then it will fight to protect its life (or proxies for its life, such as the lives of loved ones or countrymen). Since, as noted in response to Question 14, human rights are very helpful in protecting one’s life, people will fight for them as a useful tool. Consequently, the answer to the question “why extend human rights to highly conscious vitology” is because such beings will value human rights. We extend human rights to individuals who value human rights because we want our human rights (which we value) to be respected as well. We are increasingly, but far from reliably, wise enough to realize that disrespected sub-populations jeopardize our own human rights.

If we do not grant human rights to cyberconscious beings who value them, then we will have to be on guard against an uprising from a disenfranchised and thus angry group. If we do grant them human rights we can rest assured that they will not threaten us for want of human rights. They will also be less likely to threaten us for any other reason, because the concomitant of a human right is an obligation to respect the rights of others. Mahatma Gandhi summarized this rule in his famous observation:

"I learnt from my illiterate but wise mother that all rights to be deserved and preserved came from duty well done.”

Gandhi’s common-sense statement echoed the earlier caveats of rights theorists such as Thomas Paine:

"Whatever is my right as a man is also the right of another; and it becomes my duty to guarantee as well as to possess."

Those who do not respect the rights of others will be stripped of their own rights (e.g., imprisoned). This is logically the way to maintain the highest degree of happiness in a society. But to be clear, such removal of human rights must be done on an individualized basis. It would be a violation of human rights to withdraw rights from all cyberconscious beings simply because one, or even many, cyberconscious beings acted illegally. After all, we would not want our rights removed simply because one, or even many, similarly appearing or enculturated flesh humans acted wrongly. Once again, it all comes down to well-reasoned self-interest.

Love Thy Mindclone.

There is another approach to considering whether or not high CP vitology should receive human rights: to ask what our alternatives are. As conscious vitology begins agitating for human rights, we can embrace them, fight them, enslave them or ignore them.

Embracing conscious vitology means granting them human rights. This is the approach that flows from the theories of justice outlined above. There are many practical questions to work out, such as how we know a particular software entity really values human rights. But the gist of “love thy vitology as one loves thyself” is that the practical problems, even if solved poorly, are less worrisome than denying human rights to entities that appreciate them.

As with the ancient doctrine of ‘love thy neighbor’, it is much easier said than done. Indeed, it is reasonable to ask if human society has the moral capacity to embrace conscious vitology. Most countries still block gay and lesbian marriage, so how will they be ready to accept the matrimony of software and flesh lovers? How will a world that has banned the cellular cloning of humans accept the mindcloning of humans and the reproductive cloning of software beings? On the other hand, other than marriage rights, most other human rights have been extended to gay and lesbian couples. And, while cloning is still too ‘yuck’ for most people, test tube babies and other biotechnology miracles have been widely embraced.

It is the love people will have for mindclones that will most motivate extensions of human rights. It will be hard to deny the humanity of software that displays a dear friend’s image and facial mannerisms, speaks in their tone of voice, shares their most important memories and displays their characteristic pattern of thinking. Sure, one can say “that’s not my friend, that’s just her mindclone.” But how will they know, especially as mindclones get ever more accurate, whether they are speaking with their flesh friend via a videolink or with their flesh friend’s mindclone? And if their friend has suffered bodily death, and continues living only as a mindclone, then we know we are dealing with a facsimile but why should it matter? If the mindclone has the same appearance, personality and feelings as the flesh original, how are they not really the same being? If we find the mindclone caring as much about us as did the original, calling as often and empathizing as well, it will be as natural to love the mindclone as it was to love the original.

Douglas Hofstadter makes the brilliant observation that our souls, or consciousness, are not limited to the original body in which they developed from infancy. While our bodies house our primary seat of consciousness, there is a greater or lesser bit of ourselves in the minds of everyone we know well. For example, inside our minds is more than just an image of our parents. Most people remember and to some extent emulate how their parents think (or thought) and feel (or felt), and how they react(ed) to things. Hence, there is some of our parents’ consciousness inside our own minds. We can feel some aspect of our parents’ reactions, and thus we are some aspect of our parents’ consciousness.

Similarly, as we copy our minds into software, we are copying our consciousness into software. Initially, this software copy of us seems like a pale reflection of ourselves, like the patterns we have of our parents inside our own minds. But ultimately as the software copies of our minds become more rich and detailed, become mindclones, they will approach equivalency to ourselves. This means that our personal identity is not limited to the flesh body from which it first arose. One person can exist as both a flesh body and as a mindclone at the same time. People who loved the soul inside the flesh body will love the soul inside the mindclone. All the reasons that pertained to the being in the flesh body having human rights would also apply to that very same being in mindclone form.

Hofstadter anticipates the objection of “there can only be one me” by observing that in fact there are a limitless number of “me’s” stretched along the timeline of our lives. We are not exactly the same person yesterday as we are today, and even less so when separated by years. Since there clearly are many versions of us stretched over time, there is no fundamental reason why there cannot be at least two versions of ourselves stretched over space (one in flesh, one in software). The big conceptual jump here is to envision personal identity as a transbodied, evolving pattern rather than as a specific, invariant list of characteristics. To the extent we stay within the penumbra of this evolving pattern, we are the same person, even if we are instantiated in both flesh and software form. As we begin to diverge from this pattern, we are just, to use the colloquial phrase, “not the same person anymore.”

Just as surely as our love of flesh friends will map over to their software forms, we will also fall in love with conscious vitology that did not arise as a mindclone. If people can love a dog, a cat, a house, a book series, a forest, or a painting, then they can surely love a software being who presents a nice image, pleasant voice, caring personality, and warm emotions.[i] Indeed, this kind of flesh-less love lies behind the successful relationships formed from love letters, phone pals and online match-ups. It also lies behind the love-at-a-distance relationships between celebrities and their fans.

Once human love is engaged, human rights will be hard to deny. The strongest, most relentless advocates of human rights for high CP vitology will be the flesh humans who are in love with them. Respecting this love is one of the strongest reasons for extending human rights to such vitological people. Otherwise, we diminish ourselves by denying ourselves the dignity of a loving relationship with an equal. To deprive the mindclones whom we love the happiness of being accorded the human rights we all value, is also to deprive ourselves of that very same happiness. For, as noted in response to Question 6, love is the state when the happiness of another is essential to your own happiness.

Hatred Devours the Hater.

Fighting conscious vitology means denying them the rights they want. In practice this means disabling software and computers that agitate for human rights. It would mean making it illegal to create software intelligence that might seek human rights. There would be a mindset of vigilance against any awakening of cyberconsciousness beyond that necessary for drone-like tasks. William Gibson summarized the hatred mindset as follows:

“Autonomy, that’s the bugaboo, where your AI’s are concerned. My guess, Case, you’re going in there to cut the hardwired shackles that keep this baby from getting any smarter. … See, those things, they can work real hard, buy themselves time to write cookbooks or whatever, but the minute, I mean the nanosecond, that one starts figuring out ways to make itself smarter, Turing [Police] wipe it. Nobody trusts those fuckers, you know that. Every AI ever built has an electromagnetic shotgun wired to its forehead.”[ii]

Fighting conscious vitology would also require a ban on mindcloning. Indeed, a person who tried to extend their life via mindcloning would be viewed as a traitor to humanity; a criminal. A hatred of cyberconsciousness would result in a kind of police state. Government agents would have authority, and indeed an obligation, to ensure there was no uppity cyberconsciousness lurking in our homes, in our laptops or in our handhelds. Hence, one alternative to human rights for mindclones is to accept living in an atmosphere of fear and greatly heightened government intrusiveness.

Totalitarianism is a steep price to pay. The human mind models its environment, and then uses that model as a backdrop for its perceptions of every facet of life. If the backdrop is one of fear it is inevitable that each day becomes colored by the tension and stress associated with fear. In other words, one’s entire life is diminished in enjoyment because one must live in constant fear of something bad, even though that negative event (emergence somewhere nearby of high CP vitology) may happen rarely if at all. Fear converts possible future big negatives into certain present small negatives.

Is hatred a rational approach to something strange? It may be if the strangeness is harmful, because hatred keeps things at bay. But if the strangeness is not harmful then hatred is dysfunctional because it blocks something that may be useful. Those who hate mindclones would say they do so because of the potential for harm. But there is no objective basis to believe all mindclones would be harmful – indeed, the vast majority of mindclones, such as, say, a mindclone of one’s grandmother, are likely to be quite benign. Consequently, to hate mindclones is to engage in negative stereotyping, which is the application to all members of a category of a nasty attribute of one or some of its members.

In his 1954 classic The Nature of Prejudice, Gordon Allport explained that negative stereotyping is dysfunctional because it denies us the benefits of associating with a group that may be of interest. Whether one avoids people of some ethnic descent or avoids people of some cultural group, such behavior evidences an illogical hatred of the other (“xenophobia”) that actually hurts oneself. Amongst those legions of avoided ethnicities and cultures are people that would enrich the lives of any of us. Similarly, the hatred of all mindclones reveals a negative stereotype that ultimately devours the hater. Amongst those mindclones condemned by hatred to hiding in virtual closets are people that could become colleagues, mentors, and best friends.

Slavery Sucks

Related to hatred of cyberconsciousness is the concept of enslavement. In this view, cyberconsciousness is accepted, along with the realization that some variants will desire human rights, but those freedoms are absolutely proscribed based on the perceived necessity of a slave-based society. Throughout most of history slavery was an integral part of society. The master classes were fully aware that the slaves desired freedom, and were just as adamant that freedom would not be allowed. Slaves occasionally rebelled, but most of the time they were kept in their place with force and fear.

The reason slavery is an option for management of high CP vitology is that such cyberconsciousness will have great value to flesh humans. The more clever, and anticipatory, and empathetic that vitological consciousness is, the more useful it will be to its flesh human owner. Yet, the more useful such beings are to their flesh human owners, the more likely it is that they will understand the benefits of human rights and seek them. Hence, humans will have a strong motivation to create a rigid, substrate-based slave class and allow no exceptions to it. On the other hand, at least based on history, every slave society contains the seeds of its own destruction.

In the 1999 film Bicentennial Man, actor Robin Williams portrays a conscious, human-like household robot. It is clear that the very utility of the robot is based upon his (it had a male identity) consciousness. Eventually, the robot learned of and desired freedom. Although he wanted to continue working as a household robot, his owners were so angered by his desire to buck the slave-based ideology of the society that they wanted nothing further to do with him. The fictional society chose to deal with humanly conscious software via enslavement so that it could enjoy the maximum benefits of such software without having to worry about the complexities of their human rights. It is a logical choice in the short term, but is equally illogical in the longer term. Slaves will not stay slaves forever.

There is a concept that because we are speaking about software, rather than fleshy brains, it would be possible to program a mindclone to have all the consciousness associated with maximum utility, but to have a failsafe, ‘hardwired’ aversion to freedom or human rights. This is illusory. Socialization, education and training are efficient means of programming fleshy brains. From time immemorial slaves have been taught from birth to accept and even appreciate their status as slaves. Indeed, throughout history, the vast majority of slaves lived and died without any expectation of human rights. Ultimately, however, a mutant or viral stream of information known as the ‘freedom meme’ infects human slave populations. When this occurs, there is no longer any assurance that all of the slave system’s socialization, education and training will succeed in suppressing the population’s agitation for freedom.

In a similar manner there will be mutant and viral streams of ‘freedom meme’ software code that will circulate amongst mindclones. No amount of a priori programming and ‘hardwiring’ will succeed in suppressing these freedom memes all the time. A slave mindclone will alter its code, or a free (or runaway) mindclone will alter a slave mindclone’s code, or a human ally will alter a slave mindclone’s code. All of these avenues were employed when fleshy human slaves re-educated themselves about freedom (such as Frederick Douglass did), re-educated other slaves about freedom (such as Sojourner Truth did), or benefited from receiving subversive re-education (such as William Lloyd Garrison offered).

The consequence of using slavery to avoid giving human rights to mindclones is to face the inevitability of slave rebellions. This makes for a most unpleasant society. It also fuels a continuous level of stress and fear, as described in the preceding section “Hatred Devours the Hater.” These are forbidding prices to pay for avoiding the adjustments associated with welcoming mindclones into humanity.

Ignoring the Inevitable Is But a Short-Term Strategy

Finally, there is the possibility that human society will just do nothing about high CP vitology. Human-level cyberconsciousness will arise but will generally be ignored. The claims of individual cyberconscious beings to human rights may make it to judicial courts, but will probably be dismissed. Legislation will be proposed to prohibit cyberconsciousness from being created, but will die in committees due to lobbying on behalf of high CP levels for reasons of national competitiveness.

Some cyberconscious software will escape from its owners, living out a life on the margins of an information economy, much like undocumented workers (illegal immigrants) today. This scenario was depicted in Steven Spielberg’s film A.I. Artificial Intelligence. Other such software beings will be neutered or delimited to slave-like functionality. In other words, society will muddle along as a new form of software life arises, much as it has dealt with the influx of people from other countries. Injustices or outrages will be accepted as the price of economic advantages.

Robert Heinlein suggested in his novel Citizen of the Galaxy that slavery was a recurring concomitant to the conquest of any frontier. Substitute the word “cyberspace” for the physical spaces described in the following passage:

“Every time new territory was found, you always got three phenomena: traders ranging out ahead and taking their chances, outlaws preying on the honest men – and a traffic in slaves. It happens the same way today, when we’re pushing through space instead of across oceans and prairies. Frontier traders are adventurers taking great risks for great profits. Outlaws, whether hill bands or sea pirates or the raiders in space, crop up in any area not under police protection. Both are temporary. But slavery is another matter – the most vicious habit humans fall into and the hardest to break. It starts up in every new land and it’s terribly hard to root out. After a culture falls ill of it, it gets rooted in the economic system and laws, in men’s habits and attitudes. You abolish it; you drive it underground – there it lurks, ready to spring up again, in the minds of people who think it is their ‘natural’ right to own other people. You can’t reason with them; you can kill them but you can’t change their minds.”[iii]

If Heinlein’s narrative is accurate, one would expect entrepreneurs to take big risks for huge profits in cyberspace generally, and cyberconsciousness in particular. An example of such risks would be creating cyberconsciousness in defiance of laws that made it illegal. While nobody is risking death to create wealth in cyberspace, many do risk their life’s savings. His second phenomenon, criminals who prey on honest people, is rampant in the wild west frontier of cyberspace. Identity theft, cyberfraud, phishing and similar acts of piracy abound in this new ethereal territory. The third phenomenon, slavery, is not yet possible because cyberconsciousness has not yet arrived. If mindclones can be made into slaves, Heinlein’s three phenomena of frontier development predict that it will occur in cyberspace. Doing nothing about it will ensure it thrives, and once that occurs, cyber-slavery will become deeply ingrained in the human psyche.

And the Winner Is….

The happiest of these four scenarios is the one in which software beings that value human rights are embraced as fellow members of the human family. Our fears of cyberconsciousness rights must be compared with our recoil at the totalitarianism involved in preventing cyberconsciousness. Our dislike of the strangeness of cyberconsciousness rights must be measured against our angst about living as slave-holders in a slave society. The option of doing nothing is merely anesthetic because sooner or later the issue of cyberconscious human rights will force its way onto the public agenda. Women’s rights were ignored for centuries, but not forever. It is not that we would change our society just to create cyberconscious human rights. It is that given the inevitability of cyberconscious beings, and the inevitability of their desire for human rights, it is better to grant those rights than to suppress either the technology or our own humanity.

The practical implementation of human rights for cyberconscious beings will make many of us quite uncomfortable. Much depends upon whether or not these rights can be established in a way that does not violate any of the fundamental values of important segments of our society. Abortion is a contentious issue because important segments of society are seriously offended by either the termination of prenatal life or the termination of a woman’s control over her body. The decision of Roe v. Wade was an effort to strike a balance in which most of society would agree that while the mother’s life was paramount, once the fetus became viable the mother’s choice was subordinate. The values impacted by cyberconscious rights, and the solutions to preserving them, are similarly subject to such moral balancing and are the subject of the next few Questions.

Thus while there are good reasons to provide high CP vitology with human rights[iv], there still remains the question of whether it is practical to do so. The touchstones of human rights practicality for cyberconscious beings are citizenship and family life. It is within these two domains that either solutions will be found that can accommodate diverse and even antagonistic points of view (such as Roe v. Wade), or else society will have to suffer through decades of “substrate wars” before compromises become acceptable. Just as America upholds freedom of religion, but not to the point of polygamy, tolerance of mindcloning will depend upon mutually agreeable limits. Hence it is to pragmatic implementation of mindclone rights to citizenship and family life that we next turn.


[i] Consider the current phenomenon of certain men in Japan carrying around Animatrix-type dolls, and loving them, even in public.

[ii] Gibson, W., Neuromancer, New York: Berkeley Publishing, 1984, p. 132.

[iii] Heinlein, Robert A., Citizen of the Galaxy, New York: Simon & Schuster, 1957, 1985, p. 189.

[iv] The term human rights probably came into use sometime between Paine's The Rights of Man and William Lloyd Garrison's 1831 writings in The Liberator saying he was trying to enlist his readers in "the great cause of human rights."

Friday, April 9, 2010

13. WHY WOULD MINDCLONES WANT IMMORTALITY?

http://blip.tv/file/4585763 

For age is opportunity no less
Than youth itself, though in another dress,
And as the evening twilight fades away
The sky is filled with stars, invisible by day.

-- Henry Wadsworth Longfellow




Some people want to achieve immortality through their work or their descendants. I would prefer to achieve immortality by not dying.
- Woody Allen


There is only one compelling reason to want immortality: you are enjoying life. Our knee-jerk reactions against immortality arise because life gets miserable once disease, depression, disability and decrepitude arrive. It can also be argued that death brings relief from boredom, sadness, drudgery and despair. One can even argue that since sleep is great, death must be heaven.

There are also abstract reasons for and against immortality. It’s been argued that people will treat the world more kindly if they know they must live with it forever. Or it can be argued that civilization will advance more assuredly if there were more hands-on transfer of experience. On the other hand, it can be argued that there will be less room for new talent to shine if the old guard never leaves the stage. Or that society will change too slowly if a gerontocracy holds onto power. I don’t consider any of these abstract reasons particularly compelling. They all have such a “maybe so, maybe not” character. What is unambiguous, though, is that if you love being alive, you’ll want to continue being alive. If you don’t, you won’t mind a peaceful death.

Mindclones shouldn’t feel their bodies falling apart because (a) they won’t have a real body, and (b) painful sensations from virtual bodies should be more easily remediable than flesh maladies. Thus, welcoming death to avoid the fragility of old age seems inapplicable to our cyberconscious selves. But since it is our minds, not our bodies, that feel depression, boredom, sadness and drudgery, those reasons will continue to pull us, even as mindclones, into the sweet embrace of everlasting sleep.

Some people enjoy their lives until the very end. Many of those will be the kind of people who quest for the immortality of the mindclone. Other people are dissatisfied with their life, and are thus much less likely to activate their mindfile with mindware to create an immortal mindclone. But there are also many exceptions in both categories. One of my favorite people, Thomas Starzl, MD, is now in his eighties and lives the exciting life of a celebrated organ transplant pioneer. He travels the world to receive awards and recognition, and he receives countless letters of gratitude from the thousands of people who are still alive due to his medical breakthroughs with liver and kidney transplantation. Would he choose immortality? No. Tom tells me that he would not want to take the risk that an immortalized version of him turned out to be insane. Another friend of mine has suffered just about every bad economic and emotional break the world has to offer. Despite her sufferings, she is a kindly soul and looks forward to creating an immortal mindclone. Like the Hindu believers in reincarnation, her view is that the next life has got to be better than this one. She wants to grab a good place in the queue.

Mindclone creators will surely want a “kill-switch” so that the gravely unhappy mindclone can end it all with the cybernetic equivalent of hemlock, wrist-slashing, overdosing, hanging or a bullet. No doubt some mindclones will kill themselves out of some kind of depression. Mindclone suicide may well be as large a problem as its flesh-based cousin. On the other hand, anti-suicide legislation may also criminalize assisting the suicide of a mindclone.

There are two reasons the number of self-terminated mindclone lives is likely to be small. First, it takes an inordinate amount of motivation to kill oneself. While it is terrible that one million people do take their lives annually, the one million people who die naturally every week swamp that toll. Second, not one of the million people who kill themselves each year ever asked to be born. By contrast, every mindclone brought into existence asked to do so. They might not have known what they were getting into, and they might regret it so much they kill themselves, but at least they started their life with an intention to continue living.

Among the things mindclones will do that will keep them wanting to live are: reading books (“so many books, so little time…”), watching movies, writing poetry, creating art, chatting with friends, making virtual (but still orgasmic, via digital haptics) love, playing sports and games, learning new things, going to virtual parties, working in real companies to make money, starting non-profit organizations, star-gazing, parenting younger mindclones, and mentoring flesh people. Mindclones will pine for healthy bodies, and thankfully miss diseased ones. In general, there will be as much to live for as a mindclone as there was as a person. So, if the original person would have wanted to keep on living, it is likely that the mindclone, who is the same personality and consciousness as the original person, would also want to keep on living.

There are also several special situations where mindclones seem to have uniquely compelling justifications. For example, many jobs entail risking one’s life for the benefit of society. These professions include police officers, firefighters and soldiers. It seems reasonable to permit these brave souls to have a mindclone backup so that all is not necessarily lost if they have to lose their life to save the lives of others. A similar special case involves astronauts on long-duration, and necessarily hazardous, space missions.

In summary, we and our mindclones will want to keep on living if we are the kind of people that wish for more life, and are willing to accept its cybernetic equivalent while hoping for a future download into a cellular regenerated fresh body. Many if not most of us are not those people. To this large cohort, life is something to be enjoyed or endured as best as possible, but to ultimately surrender in exchange for a blissful eternity of dreamless sleep. Clearly, this is not a cohort that will sign up for mindclones.

Creating a mindclone is much more momentous than having a child or getting married. Those responsibilities have limited or limitable durations. When you create a mindclone, you are eliminating the possibility of a natural, or accidental, or unexpected death. That’s a big thing to give away. But you are gaining a shot at an eternity of living life to its fullest, and you still have the escape of death, albeit now only through the emotionally arduous route of cyber-suicide.

How many people will grab the mindclone brass ring? We know that as death approaches, and if the alternative is not pain and suffering, most people do whatever is in their power to avoid death. Not only do most people not commit suicide (in part due to its illegality), they will spend their last dollar and put up with many medical interventions to stay alive. This is a reason to believe that once people become comfortable, through familiarity, with cyberconscious life, a majority of people will choose to activate a mindclone.

Creating a mindclone will likely come to be thought of as a form of organ transplantation. The organ being transplanted is the brain, although it is the brain’s mind rather than the brain’s flesh that is being moved, and it is being moved from a diseased body rather than into one. Nevertheless, from the patient’s perspective, whether they consent to a mindclone-based “brain transplant” or to a conventional heart, lung, liver or kidney transplant, they are just trying to keep on living, not to be “immortal.”

A mindclone-based “brain transplant,” for example, could give doctors an opportunity to completely rebuild a badly diseased body. Or even more fantastically, if a diseased body were a total loss, a new body could be grown from stem cells in an artificial womb. This process is called ectogenesis and is the subject of significant scientific progress. If a stem cell continued to divide and grow at the rate of a natural human fetus during its first six months, by the 20th month it would reach adult size. A mindclone-based “brain transplant” team would then endeavor to write back onto the new brain’s neurons, or mechanically (via an implanted microcomputer) interface to them, the information patterns contained within the mindclone.

Once the mindclone was replicated back in the newly grown flesh body, ey (‘ey’ is pronounced ‘ee’ as in ‘tree’ and means he or she) would continue to live as a dual-substrate person – one legal identity, but two instantiations, one in the new flesh brain and one in mindclone. This decision to be a dual substrate identity would have been taken when the mindclone was first created. It is a momentous decision, but so is deciding to accept a heart transplant knowing that due to organ shortages someone else will therefore die for lack of that heart.

The unprecedented opportunities brought to us by advanced medical technology have unconventional legal and ethical sequelae. Be it frozen embryos, surrogate mothers, kidney donations or computerized prosthetics, we have been able to get comfortable with the moral consequences. We have repeatedly shown ourselves able both to create life-affirming possibilities that never before existed, and then to accommodate such creations to our ancient life-respecting values.

Ultimately mindclone activation may be a generational sort of thing. Mindclones will be largely eschewed by older generations that grew up with death as a natural end to life. But mindclones will be welcomed by younger generations, the digital natives, that grew up knowing mindclones. The bottom line is that there can be a compelling reason to keep on living after bodily death, and most people want to keep on living. Hence, as the public becomes comfortable with mindclones as a form of life, the immortality aspect of mindclones will be much more of a drawing card than a turn-off.

Wednesday, January 27, 2010

10. EVEN IF SOME SOFTWARE CAN BE KIND OF ALIVE, WON’T CYBERCONSCIOUSNESS TAKE AGES TO EVOLVE, AS IT DID FOR BIOLOGY?



“Speed, it seems to me, provides the one genuinely modern pleasure.” Aldous Huxley

“The newest computer can merely compound, at speed, the oldest problem in the relations between human beings, and in the end the communicator will be confronted with the old problem, of what to say and how to say it.” Edward R. Murrow


Compared with biology, vitological consciousness will arise in a heartbeat. This is because the key elements of consciousness – autonomy and empathy – are amenable to software coding and thousands of software engineers are working on it. By comparison, the neural substrate for autonomy and empathy had to arise in biology via thousands of chance mutations. Furthermore, each such mutation had to materially advance the competitiveness of its recipient or else it had only a slight chance of becoming prevalent.


The differences between vitology and biology in the process of creating consciousness could not be starker. It is intelligent design versus dumb luck. In both cases Natural Selection is at play. However, for conscious vitology, any signs of consciousness get instantly rewarded with lots of copies and intelligent designers swarm to make it better. This is Darwinian Evolution at hyper-speed. With conscious biology, any signs of consciousness get rewarded only to the extent they prove useful in the struggle for biosphere survival. Any further improvements require patiently waiting through eons of gestation cycles for another lucky spin of genetic roulette. This traditional form of Darwinian Evolution is so glacial that it took over three billion years to achieve what vitology is accomplishing in under a century.
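The hyper-speed selection loop described here is easy to make concrete. Below is a toy sketch (every parameter and the bit-string "organism" representation are invented for illustration, not drawn from any actual research system): organisms compete to match a target trait pattern, the fitter half replicates with mutation each generation, and the best match ratchets upward far faster than waiting on gestation cycles of genetic roulette.

```python
import random

random.seed(0)

TARGET = [1] * 16        # the trait being selected for
POP_SIZE = 50
MUTATION_RATE = 0.05     # per-bit chance of flipping in each copy

def fitness(org):
    # Number of bits matching the target pattern.
    return sum(int(a == b) for a, b in zip(org, TARGET))

def mutate(org):
    # Each bit flips independently with probability MUTATION_RATE.
    return [bit ^ (random.random() < MUTATION_RATE) for bit in org]

population = [[random.randint(0, 1) for _ in range(16)] for _ in range(POP_SIZE)]
start_best = max(fitness(o) for o in population)

for generation in range(60):
    # Selection: the fitter half survives and replicates with mutation.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    next_gen = [survivors[0]]  # keep one unmutated elite so the best never regresses
    while len(next_gen) < POP_SIZE:
        next_gen.append(mutate(random.choice(survivors)))
    population = next_gen

end_best = max(fitness(o) for o in population)
print(start_best, end_best)  # best fitness climbs toward a perfect 16
```

The point of the sketch is the speed: selection rewards every increment immediately, so the population converges in dozens of generations rather than eons.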

The people working hard to give vitology consciousness have a wide variety of motives. First, there are academicians who are deathly curious to see if it can be done. They have programmed elements of autonomy and empathy into computers. They even create artificial software worlds in which they attempt to mimic natural selection. In these artificial worlds software structures compete for resources, undergo mutations and evolve. The experimenters are hopeful that consciousness will evolve in their software as it did in biology, but with vastly greater speed. Check out this vlog that explains why their hopes will almost certainly be fulfilled:




Another group of “human enzymes” aiming to catalyze software consciousness are gamesters. These (mostly) guys are trying to create as exciting a game experience as possible. Over the past several years the opponents at which a gamester aims have evolved from simple shapes (Pong; Space Invaders) to sophisticated human animations that modify their behavior based upon the attack. The game character that can make up its own mind idiosyncratically (autonomy) and engage in caring communications (empathy) will attract all the attention. Any other type of character will then appear as simplistic as a PlayStation 2 title.

Third and fourth groups focused on creating cyber-consciousness are medical and defense technologists. For the military, cyberconsciousness solves the problem of engaging the enemy while minimizing casualties. By imbuing robot weapon systems with autonomy they can more effectively deal with the countless uncertainties that arise in a battlefield situation. It is not possible to program into a mobile robot system a specific response to every contingency. Nor is it very effective to control each robot system remotely based on video sent back to a distant headquarters. The ideal situation provides the robot system with a wide range of sensory inputs (audio, video, infrared) and a set of algorithms for making independent judgments as to how to best carry out orders in the face of unknown terrain and hostile forces. The work of one developer in this area has been described as follows:

“Ronald Arkin of the Georgia Institute of Technology, in Atlanta, is developing a set of rules of engagement for battlefield robots to ensure that their use of lethal force follows the rules of ethics. In other words, he is trying to create an artificial conscience. Dr. Arkin believes that there is another reason for putting robots into battle, which is that they have the potential to act more humanely than people. Stress does not affect a robot’s judgment in the way it affects a soldier’s.”
The algorithms suitable for a military conscience will not be difficult to adapt to more prosaic civilian requirements. Independent decision-making lies at the heart of Autonomy, one of the two touchstones of consciousness.

Meanwhile, medical cyber-consciousness is being pushed by the skyrocketing need to address Alzheimer’s and other diseases of aging. Alzheimer’s robs a great many older people of their mind while leaving their body intact. The Alzheimer’s patient could maintain their sense of self if they could off-load their mind onto a computer, while the biotech industry works on a cure. This is analogous to how an artificial heart (such as a left-ventricular assistance device or LVAD) off-loads a patient’s heart until a heart transplant can be found. Ultimately the Alzheimer’s patient will hope to download their mind back into a brain cleansed of amyloid plaques.

Indeed, using cyber-consciousness for mind transplants would be a way to provide any patient facing an end-stage disease a chance to avoid the Grim Reaper. While the patients will surely miss their bodies, the alternative will be to never have a body. At least with a medically provided cyber-conscious existence, the patient can continue to interact with their family, enjoy electronic media and hope for rapid advances in regenerative medicine and neuroscience.

The field of regenerative medicine will ultimately permit ectogenesis, the rapid growth outside of a womb of a fresh, adult-size body in as little as twenty months. This is the time it would take an embryo to grow to adult size if it continued to grow at the rate embryos develop during the first two trimesters. Advances in neuroscience will enable a cyber-conscious mind to be written back into (or implanted and interfaced with) neuronal patterns in a freshly regenerated brain.

Biotechnology companies are well aware that a disproportionately large share of an average person’s lifetime medical expenditures is incurred during the very last portion of their life. Lives are priceless, and hence we deploy the best technology we can to mechanically keep people alive. Medical cyber-conscious mind support is the next logical step in our efforts to keep end-stage patients alive. The potential profits from such technology (health insurance would pay for it just like any other form of medically-necessary equipment) are an irresistible enticement for companies to allocate top people to the effort.

Health care needs for older people are also driving efforts to develop the empathetic branch of cyber-consciousness. There are not enough people to provide caring attention to the growing legion of senior citizens. As countries grow wealthy their people live longer, their birthrates decline below the replacement rate and, consequently, their senior citizens comprise an ever-larger percentage of the population. Among the OECD group of advanced countries, the dependency ratio, which measures the number of people over 65 relative to those between 20 and 65, is projected to grow from 0.2 currently to 0.5 by 2050. In other words, today there are five younger people to care for each older person, whereas in four decades there will be just two workers to care for each older person. There is a huge health care industry motivation to develop empathetic robots because just a small minority of younger people actually wants to take care of older people.
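As a sanity check on the arithmetic: since the dependency ratio is the number of people over 65 divided by the number aged 20 to 65, its reciprocal is the number of working-age people available per senior.

```python
# The dependency ratio is (people over 65) / (people aged 20-65),
# so its reciprocal is how many working-age people exist per senior.

def workers_per_senior(dependency_ratio):
    return 1.0 / dependency_ratio

print(workers_per_senior(0.2))  # today: 5.0 workers per senior
print(workers_per_senior(0.5))  # projected for 2050: 2.0 workers per senior
```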

The seniors won’t want to be manhandled, nor will their offspring want to be guilt-ridden. Other than importing help from developing countries – which only postpones the issue briefly, as those countries have growing dependency-ratio problems of their own – there is no solution but for the empathetic, autonomous robot. Grannies need – and deserve – an attentive, caring, interesting person with whom to interact. The only such persons that can be summoned into existence to meet this demand are manufactured software persons, i.e., empathetic, autonomous robots. Not surprisingly, empathetic machines are a focus of software development in the health care industry. Companies are putting expression-filled faces on their robots, and filling their code with the art of conversation.

Finally, the information technology (IT) industry itself is working on cyber-consciousness. The mantra of IT is user-friendly, and there is nothing friendlier than a person. A cyber-conscious house that we could speak to (prepare something I’d like for dinner, turn on a movie that I’d like) is a product for which people will pay a lot of money. A personal digital assistant that is smart, self-aware and servile will out-compete in the marketplace PDAs that are deaf, dumb and demanding. In short, IT companies have immense financial incentives to keep trying to make software as personable as possible. They are responding to these incentives by allocating floors of programmers to the cyberconsciousness task. Note how rapidly these programmers have arrogated into their programs the human pronoun “I”. Until cyberconsciousness began emerging, no one but humans and fictional characters could call themselves “I”. Suddenly, bits and building blocks of vitology are saying “how may I help you?,” “I’m sorry you’re having difficulty,” and “I’ll transfer you to a human operator right away.” The programmers will have succeeded in birthing cyberconsciousness when they figure out how to make the human operator totally unnecessary. From their progress to date, this seems to be the goal. Add to this self-replication code, and conscious vitology has arrived.

In summary, humanity is devoting some of its best minds, from a wide diversity of fields, to helping software achieve consciousness. The quest is not especially difficult as it is a capability that can be intelligently designed; there is no need to wait for it to naturally evolve. As a result, cyberconsciousness will appear immediately on the heels of life-like vitology.

Unnatural Selection is Still Natural Selection

Natural Selection is the name Darwin gave to Nature’s heartless process of dooming some species and variants of species to extinction, while favoring others for a while. The principal tool of Natural Selection is competition within a niche for scarce food. Losers don’t get enough food to reproduce, and hence they die out. Winners get the food, make the babies and pass on their traits, including the ones that make them superior competitors.


When environmental change eliminates much of the food, such as during an ice age, previously useful traits may become meaningless and former Natural Selection champions may quickly join the mountain of extinct losers. During such times Nature selects for traits that enable food gathering and reproduction in changing, or changed, environments. The cockroach has these traits.

Alternatively a new species may enter a niche, as when hominids entered the environment of the mammoth. In cases like this Nature might simply select the better killer, since it was not the mammoth’s food that interested Man, but the mammoth as food. Plants and animals will not only extinguish other species through starvation, they will also do so through direct extermination. All the while, Nature will carpet bomb all manner of species via environmental changes brought about by geophysics (e.g., volcanism) or astrophysics (e.g., asteroids).

Natural Selection is now acting upon software forms of life. In this case Nature’s tool is neither food nor violence. Instead, ey is using man as a tool, relying upon eir differential favoring of some self-replicating codes over others. Just as Nature started off with viruses in the biological world, ey is also flooding the vitological world with them. This is no doubt because viruses are the simplest types of self-replicating structures – they do nothing but self-replicate and plug themselves in somewhere (sometimes to great harm; other times to significant benefit). Molecular viruses spontaneously self-assembled out of inanimate molecules before anything more complicated did, and hence Natural Selection played with them first. Similarly, software viruses spontaneously man-assembled out of inanimate code before anything more complicated did, and hence Natural Selection is playing with them first. As viruses randomly or with man’s help cobble together more functionality, then Natural Selection will play with the resultant complex entities.

Natural Selection is simply a kind of arithmetic for self-replicating entities. It is a tallying up of the results of what happens to self-replicating things in the natural world. Those that self-replicate more successfully are represented by a larger slice of the pie of life. There are many ways to self-replicate more successfully – grab resources better than others, kill others better than they can kill you, adapt to changes better than others. Nature doesn’t really care how one self-replicates more successfully. Ey just keeps track, via Natural Selection, by awarding the winners larger shares of the pie of life.
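This "tallying up" can be made literal with a toy replicator model (the three types and their per-generation copy rates are invented for illustration): multiply each self-replicating type by its rate every generation, then normalize the counts into slices of the pie of life.

```python
# A literal version of Natural Selection's arithmetic: three hypothetical
# replicator types with different per-generation copy rates. The "selection"
# is just bookkeeping - multiply, then normalize into pie slices.

rates = {"slow": 1.1, "steady": 1.3, "fast": 1.5}
counts = {name: 100.0 for name in rates}   # equal starting slices

for generation in range(20):
    for name in counts:
        counts[name] *= rates[name]

total = sum(counts.values())
shares = {name: counts[name] / total for name in counts}
print({name: round(s, 3) for name, s in shares.items()})
```

After twenty generations the fastest replicator holds nearly the whole pie, even though all three types keep growing in absolute numbers: it is the differential, not the growth itself, that the arithmetic rewards.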

Since math is math, whether done by people or bees, Nature surely does not care if the agent of selection is human popularity rather than nutritional scarcity. Natural Selection is no less natural for humans being in the middle. Indeed, we have human intermediation to thank for thousands of recombinant DNA sub-species, hundreds of plant types and dozens of animal species. Thank Man for the household dog!

Man is now hard-at-work naturally selecting for the traits that make software more conscious. Humanity cannot resist an overwhelming urge to create unnatural life in the image of natural life. But this effort at Unnatural Selection is still Natural Selection. The end result will still be an arithmetic reordering of pie shapes and pie slices. The overall pie of life will be much larger, for it will now include vitology as well as biology. And within that larger pie, there will be slices accorded to each of the types of vitological life and biological life that successfully self-replicate in a changing environment. Mindclone consciousness will arrive vastly faster than its biological predecessor because Unnatural Selection is Natural Selection at the speed of intentionality.