Wednesday, December 23, 2009

9. HOW CAN A MINDCLONE BE CONSCIOUS OR IMMORTAL IF IT'S NOT EVEN ALIVE?

I think computer viruses should count as life. I think it says something about human nature that the only form of life we have created so far is purely destructive. We've created life in our own image.
- Stephen Hawking


Watch a quick VLOG summary of this Question

It is amazing that out of the countless trillions of ways molecules can be arranged, only a few million ways result in things that can reproduce themselves. The biologist E.O. Wilson estimates there are about 13 million species, broken down as follows:

Insects 9 million
Bacteria 1 million
Fungi 1 million
Viruses 0.3 million
Algae 0.3 million
Worms 0.3 million
Plants 0.2 million
Protozoa 0.2 million
Echinoderms 0.2 million
Mollusks 0.2 million
Crustaceans 0.2 million
Fish 30 thousand
Reptiles 10 thousand
Birds 10 thousand
Amphibians 5 thousand
Mammals 5 thousand

It has been estimated that since the Cambrian Explosion 540 million years ago, during which the predecessors of most of these species arose, upwards of 90% of all species are extinguished each 100 million years due to environmental catastrophes. Hence, even counting the ways life might have been organized in the distant past, not more than a few hundred million molecular patterns have worked. In comparison, a practically infinite number of molecular patterns are possible given the dozens of atomic building blocks nature has to work with and the astronomical number of possibilities for stringing these atoms together in three-dimensional space.

Far, far fewer than one in a thousand molecular patterns will result in something that lives. It is not just about the magic of the DNA and RNA molecules. Most forms of even those molecules would not result in organisms that felt obligated to eat, excrete and respond to stimuli. Only the rare special cases of viable DNA and RNA molecules can do that. Very precise nucleotide sequences are needed to organize random atoms into protein building blocks that work together so symphonically that a reproductive being results. Life is a miracle because it is so unlikely.

Yet, we are inundated with life. Our skins crawl with bacteria, and our planet teems with skins. This is because life works very well. No matter how rare it is in theory, once it occurs it multiplies, for that is what life does. Rocks crumble and aggregate, but lives copy and proliferate. Most importantly, life also mutates. This is because the process of copying DNA is imperfect. Mutations result in diversity among life forms, and this diversity is crucial to life’s success. Diversity enables life to keep trying out new forms of molecular organization. Forms that work well spread and ones that don’t become rare or extinct.

The lesson of life is just this simple: no matter how unlikely something is in the first place, once it occurs it will become prevalent in those niches in which it continues reproducing versions of itself.

Life owes its improbable existence to an exceedingly rare kind of code. This life-code does two things unique to life. First, it enables self-replicating order to be structured out of disorder. Second, it enables that order to be maintained (for a while) against all the forces that make things fall apart. Wow yourself with this: life-codes are merely a mathematical sequence, like a formula, that shazam-like transforms randomness into purpose and entropy into organization. Life-codes are a real-world Harry Potter incantation, expressed in numerical silence. Any string of numbers that can God-like summon beings out of inanimate dust is as amazing as this universe gets.

Mathematics is invisible. We see its shadow when it gets expressed in something tangible. DNA is a molecule of life because it expresses a mathematical code that organizes viable patterns of molecules out of the inert chemical soup surrounding us. The patterns are viable because they self-replicate and they maintain their order, for a time, against Nature’s forces of disorder. The patterns are visible as nucleotide sequences, but their capabilities are based upon the arithmetic of the sequences – the specific numbers of A, T, G and C molecules that are required to direct the assembly of a specific protein needed to maintain a life process.

From the mathematical underpinning of biochemistry we can state an elegant definition of life: the expression of a code that enables self-replication and maintenance against disorder. Rocks are not alive because they are not the expression of a code. But the algae that covers a rock is. Microsoft Word is not alive because it doesn’t self-replicate (humans copy it). But software that could self-replicate and maintain itself against degradation would seem to be as alive as algae.


The genius of Darwin was to see a continuous chain of life in an immense scattering of broken shards of separated links. We can build on Darwin by presenting a continuous chain of life-codes in what otherwise looks like disparate phenomena. Specifically, RNA, DNA and software life-codes are links in an evolutionary chain. It is the chain of mathematical sequences capable of organizing self-replicating and self-maintaining entities out of inert building blocks. This view is consistent with the so-called “disposable soma” theory of evolution, “soma” being the Greek word for body. The theory says that bodies are DNA’s way of making more DNA. I’m taking the theory one level higher: somas are math’s way of making new self-replicating codes.


Nature surprises us with new life-codes just as she surprises us with new variations on existing life-codes. Nature will select new life-codes that are superior self-replicators in their niche just as she selects the best replicating variations on existing life-codes. Life-codes that give rise to many adaptable variations will become more dominant, just as phyla that give rise to many adaptable species become more prevalent. It is simply a step-up of scale to understand that evolution operates on types of life-codes as well as on the offspring of life-codes. DNA, as a type of life-code, is itself subject to a struggle for survival just as are the millions of species that use it as their code for organizing order out of raw nature. It is exciting to be alive at the time that new kinds of life-code, based in software rather than molecules, make their initial appearance.

We Bring Improbable Things to Life

The numbers of ways to write software are as unlimited as the ways to string molecules together. It might seem as unlikely for software to become alive as it was for molecules to become alive. Yet while it took eons for earth’s first molecules to self-replicate, people have already hit upon certain strings of software code that reproduce themselves. We call them software viruses. People have also organized lines of code into sequences that respond to stimuli. These programs are familiar to any gamester or avatar user. Humans endlessly mutate (“hack”) software the way cosmic rays and random chemistry mutate our genetic codes. A good argument can be made that these hacks have already produced software with most if not all the qualities of life.
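Self-replication in code is real even at the smallest scale. One classic, payload-free illustration is a “quine”: a program whose entire output is an exact copy of its own source text. Real software viruses add machinery to write such copies into other programs, but the core trick is the same.

```python
# A minimal self-replicating program (a "quine"): running it prints
# an exact copy of its own two-line source. The string s is both the
# data and the template; s % s substitutes the string into itself.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Pipe the output to a file and run that file: the copies are indistinguishable, generation after generation, which is the replication half of the life-code idea.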


Just like life, software is organized, and exchanges energy with the environment. It takes in electricity and sheds heat via its hardware, much as a genetic code takes in nutrients and sheds waste via its body. As with living molecules, living software can reproduce, respond to stimuli, develop and adapt. Programs are written that go out onto the web, find compatible freeware, cut and paste it into the original code and continue developing. Humans and other life forms develop analogously: we go out into our natural environment and incorporate food and compatible experiences.

There are of course many differences between organic life and software that has characteristics of life. But the simple lesson of life remains the same: No matter how unlikely living software is, once it occurs it will become prevalent in its niche if it can continue reproducing itself.

Now, these are undeniable facts: there is universal fascination with software (e.g. applications), software has a gigantic stake in the economy (e.g. chips) and the energies of hackers worldwide are mind-boggling (e.g. web apps). These forces are as prolific in producing living software prototypes as Mother Nature was in producing living RNA/DNA prototypes. Organic life clicked “on” then, and cybernetic life is clicking “on” now. Improbability becomes inevitability when numbers get large. There are a very large number of people working on imbuing software with the characteristics of life.


The differences between organic and cybernetic life are less important than their similarities. Both are mathematical codes that organize a compatible domain to perform functions that must ultimately result in reproduction. For organic life, the code is written in molecules and the domain is the natural world. For cybernetic life the code is written in voltage potentials and the domain is the IT world. We call organic life biology. It seems fitting to call cybernetic life vitology.

In biology the mathematically coded nucleotides organize nearby atoms into ever-larger molecules. These molecules, such as proteins, do life’s work of reproducing by bulking up and (if sufficiently evolved) trying to stay safe. In vitology the mathematically coded voltage levels organize nearby sub-routines into ever-larger programs. These programs do life’s work of reproducing by occupying more firmware and (if sufficiently evolved) trying to stay safe.

It is interesting to recall that molecules also depend upon electron-based voltage levels to stay connected. Atoms bind into molecules via either covalent or ionic electron coupling. Hence, at the most general level, vitology is a life-code that requires only electrons, while biology is a life-code that requires atomic nuclei as well as electrons. The electron-based life-codes of vitology must be seated in compatible computer hardware, while the atom-based life-codes of biology must be seated in a compatible nutrient milieu. The main point is that biology and vitology are each abstract mathematical codes that spell out the path to self-replication in organic and IT environments, respectively. Thus, stripped to its essence, all life is but the expression of self-replicating codes.

What Is Life?

Many experts have tried to lasso the definition of life. They often disagree: some emphasize biology, others physics, some requirements are Darwinian, others spiritual. They are all talking about pretty much the same things we think of as being alive – plants, animals, and microbes. The problem is that none of the definitions are consistent and complete to everyone’s satisfaction. Some definitions exclude sterile worker bees, while others exclude flu viruses. Every boundary falters at its edge. So, why bother trying to come up with a one-size-fits-all definition of life?

There are no philosophically compelling reasons to define life. The reasons are all utilitarian. Humans are passionate about categorizing things, for much the same reason they like to build fences. It stakes out a territory that can be used for one’s benefit. Defining organic life as biology empowers biologists to be the source of expertise on the organic aspects of life.

I’ve just suggested a new kind of life, vitology, because software is arising that has the functions of life, but not the substrate of biology. As this living software evolves some versions will unambiguously seem to be alive, and soon thereafter other versions will aggressively claim to be sentient and conscious. All life forms try out, via mutation, different shapes and behaviors – software won’t be any different. If these sentience or consciousness claims are helpful to survival, we can expect to see more software adopt the same position. It is not necessary to posit that the vitological software “wants” to survive for this to occur, any more than it is necessary to posit that bacteria “want” to survive. It is simply that things that do survive become more prevalent and things that don’t tend to disappear.

We can either deny vitological claims of consciousness, or broaden membership in the huge family of life. To do the former is to incite a long, unpleasant conflict. Think slavery and its disavowal of African humanity. To do the latter requires more than the biologist’s expertise. Hence, avoiding a conflict amongst substrates – flesh versus firmware, wet versus dry, natural versus artificial, DNA coded versus digitally coded – is a reason to (re)define life.

Biologists purport to be the experts on defining life. They believe it is something that is (1) organized, (2) exchanges matter and energy with the environment, (3) reproduces, (4) responds to stimuli, (5) develops and (6) adapts. If something meets these criteria, then biologists will study it.

Physicists have also tried to define life. Physicists are the experts on physical reality, of which life is certainly a part. To these scientists, life is something that – for a while – runs counter to the Second Law of Thermodynamics. This law says that everything in the universe is becoming more disordered and random. Since life actually builds and maintains order in a defined area, it alone seems to defy physics, and this gives it a unique defining characteristic. In the words of Erwin Schrödinger:

“A living organism [like everything else in the universe] continually increases its entropy – or, as you may say, produces positive entropy – and thus tends to approach the dangerous state of maximum entropy [thermodynamic equilibrium, when nothing moves], which is death. It can only keep aloof from it, i.e., alive, by drawing from its environment negative entropy [which means order or structured things]….”


Physicists will concede, however, that their definition also has exceptions. Nobody feels that stars or galaxies are alive, and yet these objects build and maintain order at the expense of the cosmic things they suck up. Many of these environmental intakes would qualify as “negative entropy”, or ordered things, such as when a galaxy grows by swallowing another galaxy. The growth of a star by accretion of atoms blasted into space by supernovae is not so different, in terms of Schrödinger’s definition, from the growth of bacteria by assimilation of terrestrial carbon, hydrogen, oxygen and nitrogen.

We’ve sent several spacecraft to the surface of Mars with sensitive equipment to detect whether or not there were chemical signs of life in the Martian soil. The results were ambiguous. Even the top exo-biologists could not agree on whether the chemical signs we measured in the Martian soil were signs of life.

It is tough, if not impossible, to come up with a consistent and complete definition of life. For most people life is something “natural” that “acts alive.” We think something “acts alive” if it moves under its own power, like a stick that suddenly makes us jump because it turns out to be one of the three thousand species of insectoid walking sticks (Phasmatodea). We think something is “natural” if it is not man-made at all, or man-made only from living components. For example, a new breed of dog may be man-made, but we don’t doubt that Labradoodles are alive since they are made by hybridizing labradors and poodles. Similarly, baby humans are man-and-woman-made, but from things that act alive, like sperm and egg cells. On the other hand, the best man-made robots come from things like silicon and rubber that are not considered living. Hence, we don’t think robots are alive.

The Martian experience highlights a problem with another possible criterion for defining life: does it possess DNA or RNA? These are the molecular codes for making the forms and functions of everything we think of as living. Scientists feel that we can’t assume life evolved these same molecular codes off the earth. Furthermore, there are things such as viruses that possess RNA and yet are not admitted into the textbooks of life. This is because they are inert unless and until they are brought inside a cell.

The peculiarity of RNA and DNA could be circumvented by defining life as “anything that operates in a compatible environment pursuant to a code that is subject to natural selection.” Natural selection requires a code to replicate with some incidence of mutation (error) so that alternate versions of a life form can have a differential chance to thrive in new or changing environments. Under this definition, everything that biologists call life would be life because all those species have a code subject to natural selection, i.e., DNA or RNA. In addition, some things that biologists do not call life, such as viruses, would be considered alive because their code is subject to natural selection when it is in a compatible environment (a cell). On the other hand, things that are not called life, such as crystal rocks or neutron stars, are not alive because they are not operating in accordance with a replicable code.

An important feature of this all-encompassing definition is that it would include software viruses and other programs that either propagate, or disappear, in accordance with their environmental compatibility. In this case, the environment is information technology such as hardware, firmware and software.

A software program is a code, much like DNA or RNA. It instructs other software to do things as DNA instructs other molecules to do things. If software codes can make many copies of themselves, they will become prevalent, just as is the case for DNA-based beings. If software codes fail to significantly self-replicate, they will become “missing links”, disappearing from reality over time. If software codes mutate, such as by inaccurate copying, they will usually either not function at all or function no differently. Similarly, most DNA mutations are either benign or fatal. Sometimes, however, a software mutation could be beneficial in its original or in a new computing environment. In such rare cases, that software mutation would become the preferred form of the program, and would proliferate. Again, it is the same situation with DNA. It is thanks to millions of rare beneficial DNA mutations out of a vastly greater number of dysfunctional ones that plants and animals arose from simple cells.
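The dynamic described above – imperfect copying plus differential replication letting rare beneficial mutations take over – can be sketched in a few lines of toy code. Everything here is invented for illustration (the “niche” is just a target string, fitness is just character matches); it is a cartoon of natural selection, not a model of any real system.

```python
import random

random.seed(42)  # fixed seed so the toy run is repeatable

TARGET = "replicate"  # the "niche": strings matching it copy more successfully

def fitness(code: str) -> int:
    # How well a code suits the niche: count of matching positions.
    return sum(a == b for a, b in zip(code, TARGET))

def mutate(code: str, rate: float = 0.05) -> str:
    # Imperfect copying: each character has a small chance of being miscopied.
    letters = "abcdefghijklmnopqrstuvwxyz"
    return "".join(random.choice(letters) if random.random() < rate else c
                   for c in code)

def generation(pop):
    # Fitter codes leave more (imperfect) copies; population size stays capped.
    weights = [1 + fitness(c) for c in pop]
    parents = random.choices(pop, weights=weights, k=len(pop))
    return [mutate(p) for p in parents]

pop = ["aaaaaaaaa"] * 50   # start far from the niche optimum
for _ in range(300):
    pop = generation(pop)

best = max(pop, key=fitness)
print(best, fitness(best))
```

Most mutations here are neutral or harmful, exactly as the paragraph says, yet the population still climbs toward the niche optimum because the rare helpful copies out-replicate the rest.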

Schrödinger recognized the key role of DNA/RNA-based chromosomes in providing the source of order by which living things uniquely defy the Second Law of Thermodynamics:

“An organism’s astonishing gift of concentrating a ‘stream of order’ on itself and thus escaping the decay into atomic chaos – of ‘drinking orderliness’ from a suitable environment – seems to be connected with the presence of the ‘aperiodic solids’, the chromosome molecules, which doubtless represent the highest degree of well-ordered atomic association we know of – much higher than the ordinary periodic crystal – in virtue of the individual code every atom and every radical is playing here.”


The order of the chromosome that Schrödinger sees as behind the uniqueness of life is not different in function from the order of self-replicating, self-maintaining software code. Consequently, life is that which has an order-constructing code enabling the entity to maintain itself against disorder. The requirement for self-replication, or Darwinian selection, simply extends this code-based definition of life into multiple generations. In essence, the living “entity” that is doing battle against disorder becomes the species rather than a member of the species. Humans, for example, are alive because they are members of a species that has a code (DNA) enabling order to be fabricated out of the environment for the benefit of maintaining the species’ battle against disorder (staying alive long enough to create subsequent generations that do the same thing).

Combining these considerations, we can answer the question of what is life as follows:

Life is anything that creates order in a compatible environment pursuant to a Darwinian code. If the code is Darwinian (subject to natural selection) then it must be self-replicating and it must structure a host (Schrödinger’s ‘negative entropy’) for itself that lasts long enough to self-replicate. As Joel Garreau has observed, chickens are the egg’s tool to make more eggs.

Biology versus Vitology: A Sixth Kingdom, Fourth Domain or Second Realm


Our consistent and complete definition of life will not satisfy everyone. Biologists will not see their commonality with software engineers, even though the simplest and most elegant definition of life includes both their subject matter. To solve this problem it might be necessary to admit that there are two different kinds – or realms – of life: biological life, and vitological life.

Biological life is anything that operates in a compatible environment pursuant to a DNA or RNA code. Indeed, the current taxonomical division of life into three domains (archaea, bacteria and eukaryota) is mostly based upon systematic differences in these codes. (Despite these systematic differences, the most advanced eukaryota, mammals, have one-third of their genome in common with the most primitive domain, archaea.) Previously, biological life was sub-divided into five kingdoms (monerans, protists, plants, fungi and animals) based on the structure and function of each group’s cells.

If software-based forms of life were to be accommodated within the current domain-based vision of life, the resulting phylogenetic tree might look something like the following figure, created by biologist and cyberlife pioneer Nick Mayer. A fourth domain, “digitaea”, would accompany archaea, bacteria and eukaryota. Note that digitaea branches off of animals and hominids just as those groupings branched off of plants and fungi long ago. Three species of digitaea are suggested: stemeids that are mindclone continuations of hominids, nanoids that are new life forms assembled from self-replicating nanotechnology, and ethereates for new purely software beings, lacking any physical instantiation.


In fact, it is awkward to categorize vitology using biology’s domains and kingdoms since both DNA and cell structure are irrelevant to purely code-based life forms. Vitological life is anything that operates in a compatible environment pursuant to an electronic code that is subject to natural selection. The limitation to Darwinian and electronic codes is meant to emphasize that we are talking about life-like beings – things that are part of a class that can self-replicate, compete for resources and survive – and about codes that are written in 0 and 1 energy states in pieces of technology.

Vitological and biological life are developing radically differently. Vitological life is in many respects more primitive than prokaryotic cells, which lack even a nucleus. A software virus is about as functional as a biological virus. On the other hand, there are software modules such as web crawlers and navigation routines that can outsmart the cleverest animals on the planet. These modules are not alive, for they lack any drive to self-replicate, but they could be cobbled into a larger program that did meet most or all of the expectations of life. Most remarkable is that all these jigsaw pieces of vitological life popped into being within a few decades.

Meanwhile, biological life continues to change so slowly that we marvel at the genius of a Darwin to see the continuity amidst all the extinct pieces. Mutations arise, and species dominance changes, especially amongst bacteria. But everything is incremental. There are no fundamental new biological capabilities popping into being analogous to navigational guidance software.

Vitology benefits from Lamarckism, the ability of offspring to inherit characteristics acquired during the lives of their parents, whereas biology generally does not. Acquired characteristics cannot be biologically inherited, but they can be (and usually would be) inherited by copying software forms of life. This difference greatly accelerates the evolution of vitological life. It is also perhaps the clearest way to demarcate the vitological from the biological realms of life.

There is no a priori reason why living things should not inherit in a Lamarckian manner, but it is a fact that biological beings generally do not while vitological beings generally will. Giraffes are not able to rewrite their DNA code to incorporate useful characteristics they acquired, such as a more muscular neck, but must instead await random genetic mutations that strengthen the neck. A cyber-giraffe, however, would necessarily have changed its code to cyber-muscularize its neck, and would thus necessarily pass on to its cyber-offspring the strengthened neck.
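The giraffe contrast can be made concrete with a toy class (the names `DigitalGiraffe`, `exercise` and `replicate` are all invented for illustration). In software, an individual's acquired state is part of its code and data, so copying the individual copies the acquired trait along with it, which is exactly Lamarckian inheritance:

```python
import copy

class DigitalGiraffe:
    """Toy 'vitological' organism: its code IS its state, so traits
    acquired during its lifetime are copied straight into offspring
    (Lamarckian inheritance). A biological giraffe's acquired neck
    muscle never alters its germline DNA."""

    def __init__(self, neck_strength: int = 1):
        self.neck_strength = neck_strength

    def exercise(self):
        # A characteristic acquired during this individual's "lifetime".
        self.neck_strength += 1

    def replicate(self):
        # Copying the software copies the acquired trait along with it.
        return copy.deepcopy(self)

parent = DigitalGiraffe()
parent.exercise()            # acquired, not inborn
child = parent.replicate()
print(child.neck_strength)   # → 2: the offspring inherits the acquired trait
```

A biological version of this class would have to reset `neck_strength` to its inborn default in `replicate`, then wait for a lucky random mutation; skipping that reset is the whole evolutionary shortcut.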

Vitology is proceeding as if the brain, the eye, the limbs, the vital organs and the basic cell all developed at once, but as separate entities. None really looked alive except maybe the basic cell – the rest were just really cool tools without a future or a past. A Darwin could see the inevitability of software hacks that would stitch the entities together into a piece of life par excellence. He would realize that once such hacks occurred, the resultant being would self-replicate like crazy. That is what life’s program would tell it to do. It would have the smarts to carry out that program despite obstacles and enemies.

It is obvious that vitology is developing millions of times faster than biology. Vitology is parallel processing in decades what biology serially processed over epochs. This difference of phylogeny, their unique domains of competence and their customized tools for achieving reproduction make it unobvious that they are just two different approaches to life. But squint at that mutating self-replicating code at the core of it all, and at the common life-like functions they share, and it becomes clear that strings of digits spell life just as well as strings of molecules can.

Mindclones are alive, just not the same kind of life that we are accustomed to. They are functionally alive, albeit with a different structure and substance than has ever existed before. Yet, that is the story of life. Before there were nucleated cells – the eukaryotes of which we are comprised – such things had never been seen; for nearly two billion years bacteria had an exclusive claim to life on earth. Before there were multicellular creatures there were only single-cell creatures – from their perspective, the first slime molds were not so much a life form as a community of single-cell creatures. And so the story goes, down through the descent of man. We must judge life based upon whether it streams order upon itself – self-replicates pursuant to a Darwinian code and maintains itself against the tendency to disassemble – and not get picky over what it looks like or what flavor of Darwinian code it uses. Using this objective yardstick, vitology will be alive.

Mindclones, sitting at the apex of vitology, will feel as full of life as we do from our perch atop the summit of biology. Aware of themselves, with the emotions, autonomy and concerns of their forebears, mindclone consciousness will bubble as frothily alive as does ours.

Friday, October 23, 2009

8. WHAT IS TECHNO-IMMORTALITY?


“Reports of my death have been greatly exaggerated”, Samuel Langhorne Clemens, a.k.a. Mark Twain, in a May, 1897 note to the New York Journal, which had reported news of the fatal illness of Twain’s cousin, James Ross Clemens, as that of Twain. New York Observer, June 2, 1897. (In fact, Twain died the day after the 1910 perihelion of Halley’s Comet, having been born two weeks after its 1835 perihelion, leading him to immortally observe “now here are these two unaccountable freaks; they came in together, they must go out together.”)


http://blip.tv/play/AYHC9S0A

Cyberconsciousness implies techno-immortality. Immortality means living forever. This has never happened in the real world, so we think of immortality as a spiritual existence (as in heaven) or as a non-personal existence (as in ‘Bach’s music will live forever’). With cyberconsciousness it will be possible, for the first time, for a person to live forever in the real world. This unique, technologically empowered form of living forever is called techno-immortality.

Mindclones are the key to techno-immortality. Imagine that before a person’s body dies he or she creates a mindclone. After bodily death is declared the person will insist that he or she is still alive, albeit as a mindclone in cyberspace. The surviving mindclone will think, feel and act just as did the deceased original. While the mindclone will be stuck in cyberspace, he or she will still be able to read online books, watch streaming movies, and participate in virtual social networks. It will seem no more right to declare the mindclone dead than it would be to declare someone dead upon becoming a paraplegic. Practically speaking, the mindclone’s original has achieved techno-immortality.

A semantic purist may argue that “immortal” means “forever”, and since we have no way to know how long mindclones will last they cannot be deemed immortal. This is a fair point, but it should be recognized that mindclones last far longer than the hardware they run on at any particular time. Mindclones, just as people, are really sets of information patterns. In the same way that the information patterns of great books and works of art are copied through the ages in new medium after new medium, so it will be with mindclones. We are continuing to copy and interact with human texts that are thousands of years old, originally written in stone, and now stored digitally. Mindclones, being conscious beings with a desire to survive, can be expected to last even longer.

Therefore, by techno-immortal, we do not literally mean living until the sun explodes and the stars disappear. Such eschatological timeframes are beyond our consideration. Techno-immortality means living so long that death (other than by suicide) is not thought of as a factor in one’s life. This uber-revolutionary development in human affairs is the inevitable consequence of mindfiles, mindware and mindclones. Our souls will now be able to outlast our bodies -- not only in religion, but also on earth.

Techno-immortality need not imply an eternity of life in a box. Broadband connectivity to audio and video, and to tactile, taste and scent-enabled future websites, will make life much more enjoyable than the ‘in a box’ phrase suggests. The outputs of our fingertips, taste buds and olfactory nerves are electronic signals that can be interpreted by software in the same manner as are sound waves and light signals. Nevertheless, it is hard to beat a real flesh body for mind-blowing experiences. Within a few score years for an optimist, and not more than a few centuries for a pessimist, current rates of technology development will result in replacement bodies grown outside of a womb. Such spare bodies, or “sleeves” as novelist Richard Morgan calls them, will be compatibly matched with mindclones. To make the sleeve be the same person as the mindclone either:

(a) the sleeve’s neural patterns will need to be grown ectogenetically to reflect those of the mindclone’s software patterns; or
(b) the sleeve’s naturally grown neural patterns will need to be interfaced and subordinated to a very small computer implanted in the cranium that contains a copy of the mindclone’s software.

Once these feats of neuro-technology are accomplished, techno-immortality will then also extend into the walkabout world of swimming in real water and skiing on real snow. In addition, mechanical bodies, including ones with flesh-like skin, are rapidly being developed to enable robotic help with elder care in countries like Japan (where the ratio of young to old people is getting too small). Such robot bodies will also be outfitted with mindclone minds to provide for escapes from virtual reality.

Techno-immortality triggers a philosophical quandary about identity. The gist of it is that people say ‘you cannot be dead and alive at the same time.’ This is related to another objection to mindclones – that they can’t really be ‘me’ or ‘you’ because we can’t be two different things, or in two different places, at the same time. All of these objections flow from the inability of the philosopher to accept that identity is not necessarily body-specific. In other words, a person’s identity is more like a fuzzy cloud that encompasses, to a greater or lesser extent, whatever loci contain their mannerisms, personality, recollections, feelings, beliefs, attitudes and values.

It is hard for us to feel comfortable with this view of identity because we have had no experience with it. Throughout history the only locus for our mind was the brain atop our head and shoulders. Hence, it is natural for us to believe that identity is singular to one bodily form. In a similar way, before Einstein, it was natural to believe that the speed of light depends upon how fast the source of light is traveling. All of our experience was that a rock thrown from a moving train must have the combined speed of the train’s motion and the rock’s pitch. When Einstein showed us how to think about something outside of our experience, we were able to logically deduce that the speed of light must be invariant. Similarly, when you think about a computer that runs mindware on a mindfile that is equivalent to your mind, then you must logically deduce that identity is not limited to one locus. Identity follows its constituents – mannerisms, personality, recollections, feelings, beliefs, attitudes and values – wherever those components may reside.

We are all familiar with the transitive property of equality in mathematics: if a = b, and b = c, then a = c. In our case a = our identity as defined by b, the key memories and characteristic thought patterns stored in our brain’s neural connections. With the advent of mindfiles and mindware it is possible to recreate those key memories and characteristic thought patterns in c, a mindclone. Since our original identity, a, derives from our cognitive status, b, and since the cognitive status from a brain, ba, is no different than the cognitive status from a mindclone, bc, it follows logically that our mindclone identity, c, is the same as our brain identity, a. Furthermore, this proof demonstrates that identity is not limited to a single body or “instantiation” such as a or c. Ergo, with the rise of mindclones has come the demise of inevitable death. While unmodified bodies do inevitably die, software-based patterns of identity information do not.

There is a great inclination to argue that unless every aspect of the a-based identity is also present in the c-based identity, then ba is not the same thing as bc and hence a is not really equal to c. This argument is based on a false premise that our identity is invariant. In fact, nobody maintains “every aspect” of identity from day to day, and certainly not from year to year. We remember but a small fraction of yesterday’s interactions today, and will remember still less tomorrow. Yet we all treat each other, and our selves, as people of a constant identity.

Even in the extreme cases of amnesia or dementia, we do not doubt that the patient has a constant identity. Only in the final stages of Alzheimer’s does our confidence in the sufferer’s identity begin to waver. Therefore, a perfect one-to-one correspondence between ba and bc is not necessary in order for them to be equivalent. Instead, if suitably trained psychologists attest to a continuity of identity between ba and bc, which would tend to track with the perceptions of laypeople as well as of the original and their mindclone, then it must be accepted that the psychological fuzz of identity has cloned itself onto a new substrate. The individual’s cloud identity is now instantiated in both a brain and a mindclone.

Techno-immortality is possible because it will soon be possible to replicate the constituents of your identity – and hence your identity – in multiple, highly survivable loci, namely in software on different servers. It is irrelevant that these copies are not identical to the original. Perfect copies of anything are a physical impossibility, in both space and time. Mindclones that are cognitively and emotionally equivalent to their originals, and practically accepted as their original identities, must be techno-immortal continuations of the original beings.

This question reminds me of the amazing story about how a young student, Aaron Lansky, saved Yiddish literature from disappearing. By the late 20th century, virtually all of the native speakers of Yiddish were elderly. After they died, their Yiddish books were being thrown away – almost no one understood a need to preserve this literature. Perhaps 5%-10% of the entire literature was literally disappearing each year. Lansky took it upon himself, with the help of a small group of friends, to collect all of the Yiddish books in the world before they ended up in dumpsters. After a decade his team had collected over a million volumes, had reignited interest in the language and had created a global Yiddish book exchange system. However, because the books were so frail (Yiddish was mostly read by poor Jews, and thus printed on cheap early 20th century paper to keep prices down) they were disintegrating before they could be shared. Consequently, Lansky then raised the money and signed contracts to digitize the entire collection. Indeed, the first literature completely digitized was Yiddish. Thereafter, those who wished any particular book simply selected the title from an online catalog and a print-to-order new copy was sent to them, on nice acid-free paper.

Did digitizing Yiddish literature save it from death by oblivion via dumpsters? Absolutely. Were the digitized texts exactly the same as the hand-worn books? No. Did it matter? Absolutely not. The culture, what might be called the Yiddish soul, was exactly the same in the reprinted books of hundreds of authors, poets and playwrights.

Lingering objections to mindclones based upon inexactitude simply misunderstand the nature of identity. Identity is a property of continuity. This means that a person’s identity can exist to a greater or lesser extent depending upon the presence or absence of its constituents. We believe that we have the same identity as we grow from teenagers to adults because to a great extent our mannerisms, personality, recollections, feelings, beliefs, attitudes and values have been continually present over those years. Of course we have changed, but the changes are on top of bedrock constancy. For the same reason it is not necessary for our mindclone to share every memory with its biological original to have the same identity as that original. Similarly, Yiddish literature is alive even if only 98% rather than 100% of Yiddish literature has been digitized. To love your mother you need not remember all that she has done for you. A continuity of strong positive and emotive orientations toward her, as well as the remembered highlights of your life with her, are plenty adequate.

In summary, techno-immortality is the ability to live practically forever through the downloading of your identity to a mindclone. Identity exists wherever its cognitive and emotional patterns exist, which can be in more than one place, in flesh as well as in software, and in varying degrees of completeness. While humans have never before experienced out-of-body identity, that is about to change with mindcloning. Along with this change will come something else new to humanity – techno-immortality.

Tuesday, October 6, 2009

7. HOW CAN WE KNOW CONSCIOUSNESS IS REALLY THERE?

One way to know that something exists is to measure it. A common perspective is that consciousness is immeasurable, because it is subjective. However, even subjective phenomena may be measured through approximations, and hence a science of consciousness is quite possible.

Expressed symbolically, a Consciousness Product (CP) along the continuum of consciousness can be defined as A*E, where A = Autonomy and E = Empathy. In other words, each of Autonomy and Empathy is necessary, to at least some small degree, but neither is sufficient alone, to establish consciousness. Hence, neither an animal that appears self-aware but purely instinctual, nor a software routine that appears to reason but lacks sentience, is at all conscious. The former lacks the potential for Autonomy, and the latter lacks the potential for Empathy. But grant the instinctual animal some measure of independent thought -- idiosyncratic choosing among instinctual options -- and Autonomy creeps in (idios "one's own" and syn-krasis "mixture"). Or provide future software with understanding of people’s feelings, via words or graphics, and via software settings for happiness or sadness, and Empathy slips in as well. Such consciousness can arise from human action either directly (by writing code for it) or indirectly (by emerging from sufficiently complex pattern association code). In either event, consciousness will arrive on the combined backs of Autonomy and Empathy.
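The multiplicative definition above can be sketched in a few lines of code. This is a hypothetical illustration only: the function name and the 0-to-10 scoring scale are assumptions (10 per axis corresponding to the average human score discussed later), not anything specified in the text.

```python
def consciousness_product(autonomy: float, empathy: float) -> float:
    """Consciousness Product: CP = A * E.

    Multiplication (rather than addition) captures the claim that each
    factor is necessary but neither alone is sufficient: if either
    Autonomy or Empathy is zero, the product -- consciousness -- is zero.
    """
    return autonomy * empathy

# An average human scores 10 on each axis, giving a CP of 100.
# A brilliant machine with no empathy, or a purely instinctual
# animal with no autonomy, scores zero on the CP scale.
print(consciousness_product(10, 10))  # 100
print(consciousness_product(10, 0))   # 0
print(consciousness_product(0, 10))   # 0
```

The multiplicative form is what makes "soulless" and "mindless" beings equally non-conscious: a very high score on one axis cannot compensate for a zero on the other.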


Were the average human Consciousness Product (CP) arbitrarily set at something like 100, as are IQ scores, with equal contributions of Autonomy and Empathy, then someone like Martin Luther King would have a score higher than that because he was more conscious than average. He empathized with others more than most people do, and his moral judgments were more fine-tuned. The net result of his and Gandhi’s consciousness was an adamant insistence on non-violence. The average military recruit, or passively supportive citizen, can rationalize nationally organized killing. On the other hand, a household pet might have a CP equal to half or less than the human average. This does not make the pet non-conscious, just less conscious. How far down the CP scale can one go before there is no consciousness? It disappears when there is not even a fraction of a percent of typical human autonomy or not even a fraction of a percent of typical human empathy.

A brilliant machine with no ability to ever feel another’s pain or joy would be considered soul-less; without consciousness. A snuggly life form that feels every human emotion but can do nothing else would be considered mind-less; without consciousness. Between mindless and soulless is a vast continuum of possible expressions of consciousness. Hence, consciousness is widespread, as advocates of a simple self-awareness definition usually insist. However, some beings are more conscious than others, as humanists have always claimed.

The earliest hints of consciousness arose from genetic mutations that directed neurons to be connected (or grown) in a way that empowered self-awareness. In other words, inanimate molecules are ordered by DNA to assemble into conscious-trending clumps of neurons. It should seem no more improbable that inanimate lines of code can be ordered by human intelligence to assemble into conscious-trending clumps of software programs.

Nobody knows what the minimum number, and arrangement, of neural connections or lines of software code are for various levels of consciousness to arise. What must be the case, barring mystical explanations, is that consciousness is an epiphenomenon of a good enough relational database. “Good enough” means not only multi-dimensional arrays of associations, but also sophisticated capabilities for running persistent series of associations – stories, emotions, scenarios, simulations, conversations, personalities -- through the database, with outputs and inputs occurring in near real-time. Each person’s idiosyncratic pattern of activating and maintaining groups of associations, coupled with their unique relational database, is their self, their consciousness. The strengths of the neural connections that form the relational database patterns have been firmed-up over a life such that we are familiar to ourselves (and to others) at (almost) every moment of our wakefulness.

Brains are awesome relational databases, and human brains are the best of the best, with complex patterns of association-sequencing worthy of the term “mindware.” But brains need not be made solely of flesh. There are other ways to flexibly connect billions of pieces of information together. Software brains, designed to run on powerful processors, are in hot pursuit. These software brains will not necessarily arrive with a typical CP of 100, but neither do humans. The continuity of consciousness paradigm makes room for a range of autonomous and empathetic beings, human and non-human. The more closely these souls think like us, and feel like us, the more their consciousness will be like ours. But so long as they reason and feel at all, there is a conscious mind at play. This means that there is a characteristic pattern of association sequencing that tries to maintain a coherent mental structure of the world (autonomy), with the being at the center, other relevant beings not far off, and conceptions of those other beings’ feelings of significant concern (empathy).

There are two main reasons we think consciousness is such a big deal. First, consciousness makes us vulnerable to psychic harm and thus triggers the Golden Rule – we must be aware of other people’s consciousness because we want others to be aware of our own. This underlies the great importance attached to respecting the dignity of others. To the extent someone or some thing is conscious, we need to respect its dignity, for we expect to be similarly respected. Therefore, determining the existence and extent of consciousness is crucial to our social system. Second, consciousness is itself a shared thing, a kind of social property. Each of our minds is full of thoughts and feelings placed there by other people. When a mindclone claims to be conscious, it is attaching itself to this social grid. It is claiming at least some of the rights, obligations and privileges that attach to humanity. Naturally, applications for membership in so important a club will be viewed cautiously.


Consciousness is Like Pornography


So, if consciousness can be created solely with software, how will we recognize it? How will mindclones’ CPs be correlated with those of their human originals?

We’ll recognize conscious software by evidence of the telltale signs of autonomy and empathy. If an electronic toy, or customer service computer, or software package seems to us to have some fraction of human independence, and some aspect of human caring, then it has some portion of human consciousness. Some consciousness, even a little, is still consciousness.

The toy, computer or software will have a fraction of autonomy if it shows a unique, idiosyncratic approach to problem solving. Every person approaches problems with an individualized blend of innate skill and experience. For many problems, such as getting from New York to Washington, there are limited options from which almost everyone will select one or the other of the obvious choices. However, for maintaining a conversation or describing one’s goals, the options are much greater. Consequently, these are good tasks through which to assess the consciousness of software beings. If the toy, computer or software talks about half as sensibly as a human adult, or expresses personal goals that make about half as much sense as those of most adults, then they have demonstrated an Autonomy value in the CP equation of about 5 out of a possible 10.

In a similar vein, if the toy, computer or software demonstrates about half the empathy of a typical human adult, then it would score a 5 out of 10 on the Empathy axis as well. Its total CP would be 25 (CP = A*E), meaning it has about one fourth the consciousness of a human. This is still consciousness; it is just not what we’d recognize as human consciousness. How would a piece of software demonstrate empathy? One way would be to make gestures and sounds that mimic human emotions such as happiness and sadness when sensory data indicates that a person defined as a friend is emoting either happiness or sadness. It is not fair to say “well, that’s not real empathy, that’s just programming or mimicry,” because humans are no less programmed, and no less mimics, in that regard – just without lines of software code.

Ultimately this becomes a philosophical issue between Essentialists and Materialists. The former believe emotion can only arise from a human (or perhaps biological) brain, whereas the latter believe that “emotion is as emotion does.” Susan Blackmore summarizes the Materialist view as: “There is no dividing line between as if and real consciousness. Being able to sympathize with others and respond to their emotions is one part of what we mean by consciousness.”

Another insight into the question “how do we know consciousness when we see it” is to recall the long-running judicial conundrum of “how do we know if something is pornographic?” In a landmark Supreme Court case, Jacobellis v. Ohio, Justice Potter Stewart concluded that pornography was hard to define but “I know it when I see it.” Consciousness is similarly hard to define, but most people feel they “know it when they see it.” Of course the reason pornography is a judicial conundrum is because different people perceive it differently; one man’s pornography is another man’s work of art. Similarly, one woman’s conscious mindclone will be another woman’s inanimate chatbot.

Ultimately the Supreme Court pioneered a rational path by adopting some standards (analogous to our empathy and autonomy thresholds) and recognizing that the same film or photograph could be pornographic in one community but artistic in another. In other words, pornography was largely in the eyes of the beholder. This is like our recognition earlier that it is other people who determine our consciousness. To make determinations more predictable – which is important when faced with a possible plethora of allegedly conscious mindclones – let’s now expand on the CP concept, offering a more specific approach to quantification of consciousness.

Quantifying Consciousness

We can be much more precise about the existence and value of a CP by agreeing upon some standard measures. For example, there are psychological tests of human consciousness that have repeatability values on the order of 80%. These tests measure many facets of autonomy and empathy. They do not rank the test-takers as more or less conscious, but they do quantify them in terms of the unique features of their consciousness. A similar test could be developed specifically to measure autonomy and empathy. After the test was given to a large enough sample of people (cross-culturally would be better), there would be a normal distribution of scores for each of autonomy and empathy. The peak of these distributions could be associated with CP component scores of 10 for each of Empathy and Autonomy. Thereafter, mindclones who scored higher or lower than the averages would be said to have higher or lower than average CP scores.

By way of example, suppose we have a CP test question “Do you choose your own friends?” Choices might range from “Always” (value 1), “Usually” (value 0.75), “Sometimes” (value 0.5), “Rarely” (value 0.25) to “Never” (value 0.0). If the value most often selected by people is the “Sometimes” value of 0.5, then twenty such questions would comprise each of the Autonomy and Empathy prongs of the CP test, since that would result in an average CP score of 100.
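The arithmetic of this worked example can be checked with a short script. This is a hypothetical scoring sketch: the answer values follow the paragraph above, and the twenty-questions-per-prong figure is the text's own, but the function names and structure are invented for illustration.

```python
# Hypothetical CP test scoring, following the worked example above.
ANSWER_VALUES = {"Always": 1.0, "Usually": 0.75, "Sometimes": 0.5,
                 "Rarely": 0.25, "Never": 0.0}

def prong_score(answers):
    """Score one prong (Autonomy or Empathy) by summing answer values."""
    return sum(ANSWER_VALUES[a] for a in answers)

def cp_score(autonomy_answers, empathy_answers):
    """CP = Autonomy prong score * Empathy prong score."""
    return prong_score(autonomy_answers) * prong_score(empathy_answers)

# If "Sometimes" (0.5) is the modal answer, twenty questions per prong
# sum to a prong score of 10, and 10 * 10 = 100, the average CP.
print(cp_score(["Sometimes"] * 20, ["Sometimes"] * 20))  # 100.0
```

This shows why twenty questions per prong is the right count: with a modal answer value of 0.5, each prong averages to the target component score of 10, and the product lands at the human-average CP of 100.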

At least two challenges may be anticipated. First, it can be argued that a CP score is no more a measure of consciousness than an IQ score is a measure of intelligence. A second criticism is that even if consciousness is being measured, it is only human consciousness being assessed, which is irrelevant to software consciousness.

As to the first objection, the test of complex phenomena is never the same thing as the phenomena. Even the numerical measure of a length of wood is not the same thing as the actual length of that wood due to inconsistencies in the accuracy of rulers. The point of the CP scale is to give objectivity to the continuum of consciousness paradigm; to take what is abstract theory and render it subject to empirical research. While scores along the CP scale will always be fuzzy, an argument as to whether a piece of software has a CP score of, say, 10 or 20 reveals a more important truth – that the software is very likely conscious, but does not constitute a mindclone since people have far higher CPs.

The second objection is that the test is human-centric, whereas consciousness is something that transcends species. This criticism ignores the fact that it is our intention to measure degrees of human consciousness. It is possible that there will be modes of consciousness missed by this test, but it is also likely that non-human modes of consciousness will be captured by the test. By “consciousness” most people mean “human consciousness.” Hence, a test for the emergence of consciousness in mindclones must measure human consciousness in order to be accepted by the human community.

The following 1908 quotation from the famous deaf-and-blind celebrity pioneer, Helen Keller, is poignant in how clearly it implies human consciousness builds on human language skills:

“Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness….Since I had no power of thought, I did not compare one mental state with another.”

Similar reports can be found in the literature on feral children. Higher-order languages, such as human languages, can be thought of as a kind of enzyme for making synaptic connections. While some mental conceptualizations are possible without this enzyme, abstract meanings proceed with great viscosity, if at all. Hence, it would be wholly appropriate for beings lacking such language skills to receive much lower CP scores. This is not so much a matter of human-centricity, but of abstract-centricity, and it is in the realm of abstractions that consciousness dwells.

Mindclones will have human consciousness because they will have the full panoply of human language skills, the full pattern-association capability of mindware, and the full synchronization of their thoughts, personalities and emotions to those of their biological original. A mindclone should test, repeatedly, to have the same CP as their biological original. This is measurable and hence scientific proof that mindclone consciousness really exists, at least to the same extent that our own consciousness measurably exists.

Friday, August 14, 2009

6. HOW CAN CONSCIOUSNESS BE CREATED IN SOFTWARE?

“Some men see things as they are and wonder why. Others dream things that never were and ask why not?” Robert F. Kennedy



There are thousands of software engineers across the globe working day and night to create cyberconsciousness. This is real intelligent design. There are great financial rewards available to the people who can make game avatars respond as curiously as people. Even vaster wealth awaits the programming teams that create personal digital assistants with the conscientiousness, and hence consciousness, of a perfect slave.

How can we know that all of this hacking will produce consciousness? This takes us to what is known as the “hard problem” and “easy problem” of consciousness. The “hard problem” is how does the web of molecules we call neurons give rise to subjective feelings or qualia (the “redness of red”)? The alternative “easy problem” of consciousness is how can electrons racing along neurochemistry result in complex simulations of “concrete and mortar” (and flesh and blood) reality? Or how do metaphysical thoughts arise from physical matter? Basically, both the hard and the easy problems of consciousness come down to this: how is it that brains give rise to thoughts (‘easy’ problem), especially about immeasurable things (‘hard’ problem), but other parts of bodies do not? If these hard and easy questions can be answered for brain waves running on molecules, then it remains only to ask whether the answers are different for software code running on integrated circuits.



At least since the time of Isaac Newton and Leibniz, it was felt that some things appreciated by the mind could be measured whereas others could not. The measurable thoughts, such as the size of a building, or the name of a friend, were imagined to take place in the brain via some exquisite micro-mechanical processes. Today we would draw analogies to a computer’s memory chips, processors and peripherals. Although this is what philosopher David Chalmers calls the “easy problem” of consciousness, we still need an actual explanation of exactly how one or more neurons save, cut, paste and recall any word, number, scent or image. In other words, how do neuromolecules catch and process bits of information?

Those things that cannot be measured are what Chalmers calls the “hard problem.” In his view, a being could be conscious, but not human, if they were only capable of the “easy” kind of consciousness. Such a being, called a zombie, would be robotic, without feelings, empathy or nuances. Since the non-zombie, non-robot characteristics are also purported to be immeasurable (e.g., the redness of red or the heartache of unrequited love), Chalmers cannot see even in principle how they could ever be processed by something physical, such as neurons. He suggests consciousness is a mystical phenomenon that can never be explained by science. If this is the case, then one could argue that it might attach just as well to software as to neurons – or that it might not – or that it might perfuse the air we breathe and the space between the stars. If consciousness is mystical, then anything is possible. As will be shown below, there is no need to go there. Perfectly mundane, empirical explanations are available to explain both the easy and the hard kinds of consciousness. These explanations work as well for neurons as they do for software.

As indicated in the following figure, Essentialists v. Materialists, there are three basic points of view regarding the source of consciousness. Essentialists believe in a mystical source specific to humans. This is basically a view that God gave Man consciousness. Materialists believe in an empirical source (pattern-association complexity) that exists in humans and can exist in non-humans. A third point of view is that consciousness can mystically attach to anything. While mystical explanations cannot be disproved, they are unnecessary because there is a perfectly reasonable Materialist explanation to both the easy and hard kinds of consciousness.

MATERIALISM vs. ESSENTIALISM








If human consciousness is to arise in software we must do three things: first explain how the easy problem is solved in neurons; second, explain how the hard problem is solved in neurons; and third, explain how the solution in neurons is replicable in information technology. The key to all three explanations is the relational database concept. With the relational database an inquiry (or a sensory input for the brain) triggers a number of related responses. Each of these responses is, in turn, a stimulus for a further number of related responses. An output response is triggered when the strength of a stimulus, such as the number of times it was triggered, is greater than a set threshold.
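The stimulus-response cascade described above can be sketched as spreading activation over a toy relational graph. This is purely illustrative: the graph, the weights, and the threshold value are invented for the example, not taken from the text, but the mechanism matches the description, where each triggered response becomes a stimulus for further responses, and an output fires only once accumulated stimulus strength crosses a threshold.

```python
from collections import defaultdict, deque

def spread_activation(graph, stimulus, threshold=2):
    """Propagate a stimulus through a relational graph.

    graph maps each node to a list of (neighbour, weight) pairs.
    Each node propagates to its neighbours on first visit; activation
    keeps accumulating on repeat triggers. Returns the nodes whose
    accumulated activation meets the output threshold.
    """
    activation = defaultdict(float)
    queue = deque([(stimulus, 1.0)])
    seen = set()
    while queue:
        node, strength = queue.popleft()
        activation[node] += strength
        if node not in seen:
            seen.add(node)
            for neighbour, weight in graph.get(node, []):
                queue.append((neighbour, strength * weight))
    return {n for n, a in activation.items() if a >= threshold}

# A toy association web around "red" (anticipating the apple example).
graph = {
    "red": [("apple", 1.0), ("stop sign", 1.0)],
    "apple": [("fruit", 1.0), ("red", 1.0)],
    "stop sign": [("red", 1.0)],
}
print(spread_activation(graph, "red", threshold=2))  # {'red'}
```

With a threshold of 2, only "red" fires as an output, because it is re-triggered by the associations it set off; lowering the threshold lets the weaker downstream associations respond as well, which is the sense in which output depends on "the number of times it was triggered."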



For example, there are certain neurons hard-wired by our DNA to be sensitive to different wavelengths of light, and other neurons are sensitive to different phonemes. So, suppose when looking at something red, we are repeatedly told “that is red.” The red-sensitive neuron becomes paired with, among other neurons, the neurons that are sensitive to the different phonetics that make up the sounds “that is red.” Over time, we learn that there are many shades of red, and our neurons responsible for these varying wavelengths each become associated with words and objects that reflect the different “rednesses” of red. Hence, the redness of red is not only the immediate physical impression upon neurons tuned to wavelengths we commonly refer to as "red", but is also (1) each person’s unique set of connections between neurons hard-wired genetically from the retina to the various wavelengths we associate with different reds, and (2) the plethora of further synaptic connections we have between those hard-wired neurons and neural patterns that include things that are red. If the only red thing a person ever saw was an apple, then redness to them means the red wavelength neuron output that is part of the set of neural connections associated in their mind with an apple. Redness is not an electrical signal in our mind per se, but it is the associations of color wavelength signals with a referent in the real world. Redness is part of the gestalt impression obtained in a second or less from the immense pattern of neural connections we have built up about red things.



After a few front lines of sensory neurons, everything else is represented in our minds as a pattern of neural connections. It is as if the sensory neurons are our alphabet. These are associated (via synapses) in a vast number of ways to form mental images of objects and actions, just like letters can be arranged into a dictionary full of words. The mental images can be strung together (many more synaptic connections) into any number of coherent (even when dreaming) sequences to form worldviews, emotions, personalities, and guides to behavior. This is just like words can be grouped into a limitless number of coherent sentences, paragraphs and chapters. Grammar for words is like the as yet poorly understood electro-chemical properties of the brain that enable strengthening or weakening of waves of synaptic connections that support attentiveness, mental continuity and characteristic thought patterns. Continuing the analogy, the self, our consciousness, is the entire book of our autonomous and empathetic lives, written with that idiosyncratic style that is unique to us. It is a book full of chapters of life-phases, paragraphs of things we’ve done and sentences reflecting streams of thought.



Neurons save, cut, paste and recall any word, number, scent, image, sensation or feeling no differently for the so-called hard than for the so-called easy problems of consciousness. Let’s take as our example the “hard” problem of love, what Ray Kurzweil calls the “ultimate form of intelligence.” Robert Heinlein defines it as the feeling that another’s happiness is essential to your own.

Neurons save the subject of someone’s love as a collection of outputs from hard-wired sensory neurons tuned to the subject’s shapes, colors, scents, phonetics and/or textures. These outputs come from the front-line neurons that emit a signal only when they receive a signal of a particular contour, light-wave, pheromone, sound wave or tactile sensation. The set of outputs that describes the subject of our love is a stable thought – once so established with some units of neurochemical strength, any one of the triggering sensory neurons can harken from our mind the other triggering neurons.

Neurons paste thoughts together with matrices of synaptic connections. The constellation of sensory neuron outputs that is the thought of the subject of our love is, itself, connected to a vast array of additional thoughts (each grounded directly or, via other thoughts, indirectly, to sensory neurons). Those other thoughts would include the many cues that lead us to love someone or something. These may be resemblance in appearance or behavior to some previously favored person or thing, logical connection to some preferred entity, or some subtle pattern that matches extraordinarily well (including in counterpoint, syncopation or other form of complementarities) with the patterns of things we like in life. As we spend more time with the subject of our love, we further strengthen sensory connections with additional and strengthened synaptic connections such as those connected with eroticism, mutuality, endorphins and adrenaline.

There is no neuron with our lover’s face on it. There are instead a vast number of neurons that, as a stable set of connections, represent our lover. The connections are stable because they are important to us. When things are important to us, we concentrate on them, and as we do, the brain increases the neurochemical strengths of their neural connections. Many things are unimportant to us, or become so. For these things the neurochemical linkages become weaker and finally the thought dissipates like an abandoned spider web. Neurons cut unused and unimportant thoughts by weakening the neurochemical strengths of their connections. Often a vestigial connection is retained, capable of being triggered by a concentrated retracing of its path of creation, starting with the sensory neurons that anchor it.
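The save, strengthen, weaken and recall operations described above can be sketched as a toy associative memory. This is a hedged illustration of Hebbian-style strengthening, with hypothetical names and arbitrary thresholds, not a model of real neurons:

```python
# Toy associative memory: a "thought" is a set of co-active sensory units,
# stored as pairwise connection strengths (a crude Hebbian sketch).
class AssociativeMemory:
    def __init__(self):
        self.strength = {}  # (unit_a, unit_b) -> connection strength

    def _key(self, a, b):
        return (a, b) if a < b else (b, a)

    def save(self, units, amount=1.0):
        """Strengthen every pairwise link among co-active units
        ("neurons that fire together wire together")."""
        units = list(units)
        for i, a in enumerate(units):
            for b in units[i + 1:]:
                k = self._key(a, b)
                self.strength[k] = self.strength.get(k, 0.0) + amount

    def decay(self, amount=0.5):
        """Weaken all links; unused thoughts fade like abandoned spider webs."""
        for k in list(self.strength):
            self.strength[k] = max(0.0, self.strength[k] - amount)

    def recall(self, cue, threshold=0.5):
        """One triggering unit calls forth the rest of the stored pattern."""
        found = {cue}
        frontier = [cue]
        while frontier:
            a = frontier.pop()
            for (x, y), s in self.strength.items():
                if s >= threshold and a in (x, y):
                    other = y if a == x else x
                    if other not in found:
                        found.add(other)
                        frontier.append(other)
        return found

mem = AssociativeMemory()
mem.save({"her voice", "her face", "lavender scent"})
print(sorted(mem.recall("lavender scent")))
# prints ['her face', 'her voice', 'lavender scent']
```

Note how a single cue completes the whole stored pattern, while a decayed (unimportant) thought no longer triggers its associates.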

What the discussion above shows is that consciousness can be readily explained as a set of connections among sensory neuron outputs, and links between such connections and sequences of higher-order connections. With each neuron able to make as many as 10,000 connections, and with 100 billion neurons, there is ample possibility for each person to have subjective experiences through idiosyncratic patterns of connectivity. The “hard problem” of consciousness is not so hard: subjectivity is simply each person’s unique way of connecting the higher-order neuron patterns that come after the sensory neurons. The “easy problem” of consciousness is solved by recognizing sensory neurons as empirical scaffolding upon which a skyscraper’s worth of thoughts can be built: if sensory neurons can as a group define a higher-order concept, and such higher-order concepts can in turn define yet higher-order concepts, the problem dissolves. Material neurons can hold non-material thoughts because the neurons are linked members of a cognitive code. It is the meta-material pattern of the neural connections, not the neurons themselves, that contains non-material thoughts.
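The scale invoked in this paragraph can be checked with quick back-of-envelope arithmetic (orders of magnitude only):

```python
import math

# Back-of-envelope scale of the connectivity described above.
neurons = 100_000_000_000     # ~10^11 neurons
max_connections = 10_000      # up to 10^4 connections per neuron
synapses = neurons * max_connections
print(f"{synapses:.0e} potential synapses")  # 1e+15 potential synapses

# Even counting only whether each potential synapse exists or not, the
# number of distinct wiring patterns is 2^(10^15) -- vastly more patterns
# than there are, or ever will be, human beings.
print(f"~10^{int(synapses * math.log10(2))} possible on/off wiring patterns")
```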




Lastly, there is the question of whether there is something essential about the way neurons form into content-bearing patterns, or whether the same feat could be accomplished with software. The strengths of neuronal couplings can be replicated with weighted strengths for software couplings in relational databases. The connectivity of one neuron to up to 10,000 other neurons can be replicated by linking one software input to up to 10,000 software outputs. The ability of neuronal patterns to maintain themselves in waves of constancy, such as in personality or concentration, could equally well be accomplished with software programs that kept certain software groupings active. Finally, a software system can be provided with every kind of sensory input (audio, video, scent, taste, tactile). Putting it all together, Daniel Dennett observes:

“If the self is ‘just’ the Center of Narrative Gravity, and if all the phenomena of human consciousness are explicable as ‘just’ the activities of a virtual machine realized in the astronomically adjustable connections of a human brain, then, in principle, a suitably ‘programmed’ robot, with a silicon-based computer brain, would be conscious, would have a self. More aptly, there would be a conscious self whose body was the robot and whose brain was the computer.”

At least for a Materialist, there seems to be nothing essential to neurons, in terms of creating consciousness, which could not be achieved as well with software. The quotation marks around ‘just’ in the quote from Dennett are the famous philosopher’s facetious smile. He is saying with each ‘just’ that there is nothing to belittle about such a great feat of connectivity and patterning.
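The software analogues listed above (weighted couplings, one-to-many linkage, and self-sustaining groupings) can be sketched in a few lines. The names, sizes and weights here are illustrative assumptions, not a proposal for how a mindclone would actually be built:

```python
import random

# Sketch of the software analogues named above: each "unit" links to many
# others with weighted couplings, and a strongly self-reinforcing grouping
# stays active across update steps, the way a personality trait persists
# as a "wave of constancy". A hedged toy, not a brain model.
random.seed(0)

N = 200        # units (stand-ins for neurons)
FANOUT = 20    # each unit couples to up to FANOUT others
links = {u: {random.randrange(N): random.random() for _ in range(FANOUT)}
         for u in range(N)}

# A "personality" grouping: units that strongly excite one another.
core = {0, 1, 2, 3, 4}
for u in core:
    for v in core - {u}:
        links[u][v] = 5.0  # strong mutual coupling keeps the group active

def step(active, threshold=2.0):
    """Spread weighted activation one tick; keep units above threshold."""
    drive = {}
    for u in active:
        for v, w in links[u].items():
            drive[v] = drive.get(v, 0.0) + w
    return {v for v, w in drive.items() if w >= threshold}

active = set(core)
for _ in range(10):
    active = step(active)

# The strongly coupled core sustains itself through every update step.
print(core <= active)  # True
```

The same pattern scales in the obvious way: weighted couplings stand in for synaptic strengths, and the update loop stands in for the brain's ongoing electro-chemical activity.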

Tuesday, July 14, 2009

5. WHAT IS CYBERCONSCIOUSNESS?


“Am not going to argue whether a machine can really be alive, really be self-aware. Is a virus self-aware? Nyet. How about oyster? I doubt it. A cat? Almost certainly. A human? Don’t know about you, tovarishch, but I am. Somewhere along evolutionary chain from macromolecule to human brain self-awareness crept in. Psychologists assert it happens automatically whenever a brain acquires certain very high number of associational paths. Can’t see it matters whether paths are protein or platinum. (Soul? Does a dog have a soul? How about cockroach?)” Robert Heinlein, The Moon is a Harsh Mistress

Cyberconsciousness means consciousness in a cybernetic medium. Cybernetics is the replication of biological control systems with technology. The term ‘cyberspace’ was coined by William Gibson and popularized in his 1984 novel Neuromancer, about an alternative reality existing inside computer networks. Soon thereafter, cyber became a prefix meaning anything computer-related. That much is easy. Lengthy answers are needed for what consciousness is, and how it could possibly exist in a computerized form, outside of a brain.

The biggest problem with discussions of consciousness is that people are not sure what they are talking about. This is because consciousness is what Marvin Minsky calls a “suitcase word.” Such a word carries lots of meanings, so debates about consciousness constantly compare apples to oranges. For example, most people speak of consciousness as if it were one thing, self-awareness. Yet, surely baby self-awareness is different from adolescent self-awareness. The self-awareness of an octopus (if it exists) may well be quite diminished -- or advanced -- compared to that of a cat (if it exists).

A Suitcase Full of Autonomy and Empathy

There are three reasons why the common use of “self-awareness” as a definition for consciousness does not work well with cyberconsciousness. First, “any beginning programmer can write a short piece of software that examines, reports on, and even modifies itself.” It is thus easy to program software to be self-aware. For example, the software running a robot vehicle could be written to define objects in its real world. Those objects might be the terrain (“navigate it using sensors”), programmers (“follow any orders coming in”) and the vehicle itself (“I am a robot vehicle that navigates terrain in response to programming orders.”) Yet, very few people would accept that such a simple set of code, albeit literally “self-aware”, was conscious. It has too little in common with what most people think of as conscious – a being that thinks independently and is sensitive to the feelings of others (when not infantile, sleeping or seriously ill).
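The claim that “any beginning programmer can write a short piece of software that examines, reports on, and even modifies itself” is easy to demonstrate. The class and attribute names below are hypothetical and, as the text stresses, nothing here approaches consciousness:

```python
# A trivially "self-aware" program in the literal sense used above: it
# examines its own attributes, reports on itself, and modifies its own
# self-description. (A hypothetical toy -- not consciousness.)
class RobotVehicle:
    def __init__(self):
        self.description = ("I am a robot vehicle that navigates terrain "
                            "in response to programming orders.")
        self.orders_followed = 0

    def examine_self(self):
        # The program inspects its own attributes and methods.
        return [name for name in dir(self) if not name.startswith("_")]

    def report_self(self):
        # The program reports on itself.
        return f"{self.description} Orders followed so far: {self.orders_followed}."

    def modify_self(self, new_description):
        # The program rewrites part of its own model of itself.
        self.description = new_description
        self.orders_followed += 1

bot = RobotVehicle()
print(bot.report_self())
bot.modify_self("I am a robot vehicle, and I know that I am software.")
print(bot.report_self())
```

Literal self-examination, self-reporting and self-modification, all in a few dozen lines -- and yet nothing anyone would call conscious.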

A second problem with the “self-awareness” definition of consciousness is that it is an all-or-nothing proposition. In fact, given the graduated fashion in which brains have evolved, it is more likely that there are gradations of consciousness. Beings can be more or less independent thinkers – even human thought is largely dictated by genetics and upbringing – and beings can be more or less sensitive to others’ feelings – consider the animals you know, including the human ones. So, our definition of consciousness shouldn’t be the common “self-awareness” one because that term would force too gross a categorization. Its “either you are or are not” standard is inconsistent with the blurry reality of multitudinous and ambiguous differences in self-awareness.

A final problem with the “self-awareness” definition is that it doesn’t necessarily require what is called “phenomenal consciousness” (meaning awareness of one’s feelings and subjective perceptions), or “sentience.” The possibility of self-awareness without sentience (such as the Agent Smiths of The Matrix) exemplifies this third problem with the common definition of consciousness. For example, a person who acts as if they have no emotions is called a robot or zombie, meaning a machine without consciousness. Self-awareness is clearly necessary, but also far from sufficient, for a definition of consciousness that matches what people really mean by the term.

So, self-awareness is at once both the most common meaning of consciousness as well as a horrible match for what people really mean by consciousness! This occurs because when applied to humans, self-awareness secretly brings along (in Prof. Minsky’s suitcase) independent thought, sentience and empathy – all of which are part of being human. But when applied to other species, and to mindclones, we can no longer be sure what, if anything, is in “the suitcase.” Hence, the term self-awareness is inadequate to express our expectations for consciousness. We know the self-aware human is also somewhat rational, emotional and caring. So, self-aware humans are good enough proxies for conscious humans. We don’t know that a self-aware software program is anything but self-aware. Hence, for species other than humans, mere self-awareness is an inadequate definition for consciousness because we really require reason, feelings and concern as well.

Shortcomings of “What It Is Like to Be”

Consciousness entails a processing of perceptions into a mental worldview. This is what some people call the “what it is like to be” definition. Consciousness uses patterns of neural connections, usually triggered in real-time by physical sense data, to create something meta-physical – a more or less coherent, individualized and hence subjective, virtual image of one’s relevant world. It is the immeasurability of this subjectivity that also underlies the confusion over consciousness.

Most people require this mental subjectivity to include feelings or emotions (sentience) in order to qualify as consciousness. This is of course because feelings and emotions are integral to human consciousness. Sentience, on the other hand, is no better than self-awareness as a stand-alone definition of consciousness. This is because as noted above, we expect conscious beings to be independent thinkers as well as feelers. We can say humans are conscious if they are sentient, because we know all humans are also independent thinkers (Minsky’s suitcase again). But we cannot make the same statement regarding other species, or mindclones (that suitcase is still empty).

Feelings do not require having any cognitive capability at all. When a hooked worm or fish squirms, most people interpret that as evidence that it hurts (others however consider it a mere reaction like a knee jerk that indicates no emotion). If the hooked worm or fish is in pain, or is stressed, this means it has sentience. But most people would not consider the fish or worm conscious because we don’t believe some part of their neurology is thinking about the pain, and complaining about it. Instead, we think the worm or fish is simply reacting in pain, and is reflexively trying to get out of the nasty situation. Of course we humans would do likewise, but we would also (to the extent pain subsided) commiserate about it, and contemplate what to do next. It is upon such recondite differences that the definition of human consciousness rests. To satisfy the common conception of consciousness there needs to be autonomy (e.g., contemplation) and empathy (e.g., commiseration) as well as sentience and self-awareness.

To determine if software will become conscious we need a tighter definition for consciousness than self-awareness. We also need a definition that requires sentience, but is not satisfied with it alone. Most people will not be satisfied that a software being is conscious simply because there is something “that it is like to be” that software being – any more so than we think a fish is conscious because there may be something “that it is like to be a fish”, or a bat, or any other being. Experience, per se, is not what most people really mean by consciousness. There must also be an independent will – something akin to what is thought of as a soul – and also an element of transcendence – a conscience. Finally, we need a definition that can span a broad range of possible forms of consciousness.

The Continuum of Consciousness

A comprehensive solution to the consciousness conundrum is to adopt a new approach – “the continuum of consciousness” -- that explains all of the diverse current views, while also pointing the way for fruitful quantitative research. Such a “continuum of consciousness” model would encompass everything from seemingly sentient animal behaviors to the human obsession with how do others see me. It would provide a common lexicon for all researchers. Hence, the definition of consciousness needs to be broad but concrete:

Consciousness = A continuum of maturing abilities, when healthy, to be autonomous and empathetic, as determined by consensus of a small group of experts.

Autonomous means, in this context, the independent capacity to make reasoned decisions, with moral ones at the apex, and to act on them.

Independent means, in this context, capable of idiosyncratic thinking or acting.

Empathetic means, in this context, the ability to identify with and understand other beings’ feelings.

Feelings, in this context, mean a perceived mental or physical sensation or gestalt.

Small group of experts means, in this context, three or more individuals certified in a field of mental health or medical ethics.

This definition says a subject is a little conscious if they think and feel a little like us; they are very conscious if they think and feel just like us. It is a human-centric definition because when people ask “is it conscious?,” they mean “is it in any way humanly conscious?” In other words, conscious is a shorthand way of judging whether a subject “thinks and feels at all like people.”

How do we know if someone or something is empathetic or autonomous? Since “independence” and especially “feelings” are internal mental states, it is very difficult to be definitive about the existence of consciousness. It is likely that in the future individual neuron mapping will enable consciousness to be determined empirically. Until that time one’s consciousness is determined by others. A subject is conscious to the extent other people think they are autonomous and empathetic. This makes sense because, as noted above, it is compared to human consciousness that we measure any other consciousness as either absent or present to some degree. We think our dogs and cats are conscious because we see aspects of human consciousness in them.

Someone is guilty of an intentional crime if other people (the jury) think they had the mental intent to do the crime (as well as performing the criminal acts). Society is accustomed to letting others make determinative decisions about one’s mental state. Thus, it is logical to also let society make determinative decisions as to whether or not someone or something is conscious. For the determination of consciousness, the consensus of three or more experts in the field, such as psychologists or ethicists, substitutes for a jury. As software actually begins to present with consciousness, it is likely that professional associations will offer special mindclone psychology certifications to better standardize consciousness determinations.

Of course an expert determination of consciousness is not the same thing as a fully objective determination of consciousness. Similarly, a jury may think a defendant lacked criminal intent whereas, in fact, he really had the intent. However, when objective determinations are impossible, society readily accepts alternatives such as appraisals of one’s peers or experts. Also, when the experts determine that a software being is or is not conscious, they are of course only considering human consciousness. Prof. Minsky’s consciousness suitcase always carries a human-centric bias.

It is important to clarify a few aspects of the “continuum of consciousness” definition. First, the inability to make moral decisions, due to lack of understanding of right and wrong, makes one less conscious. This is because human consciousness includes moral judgments, and it is compared to this understanding of consciousness that we decide whether a gradation of it exists.

The reason for moral choice having a dominant role is that consciousness matters because it embodies the most important shared value among humans, that of a moral conscience. In other words, while consciousness has a minimalist definition of being awake, alert and aware – “is he conscious?!” – it also has a more salient meaning of thinking and feeling like a normal human. To think like a normal human, one must be able to make the kind of moral decisions, based on some variant of the Golden Rule, which Kant taught were hard-wired into human brains.

For example, no matter how self-aware or empathetic a being was, most people would not admit it shared human consciousness unless it had a maturing ability to understand, when healthy, the difference between shared concepts of right and wrong. To such people a Hitler is conscious, whereas a crocodile is merely self-aware, because a Hitler makes (very wrong) moral choices, while a crocodile makes no moral choice at all. The continuum of consciousness paradigm would call the crocodile less conscious than Hitler if experts agreed it had diminished but still present idiosyncratic decision-making capability (even if moral judgment was absent) and at least some modicum of empathy.

A second important clarifying point relates to the term “independent.” While the true independence of anyone in society is contestable (e.g., do we just do what our genes tell us to do?), the inclusion of this term would exclude from consciousness only an entity that had absolutely no independent capacity, i.e., an automaton or zombie. The reason for the requirement of idiosyncratic thought is that we expect each human to be unique. Even if we are bounded by our genes, and constrained by our culture, we are each a one-of-a-kind, not fully predictable mixture of such programming. We are independent because our blended nature enables us to transcend our programming. (Skeptics of software consciousness, such as Roger Penrose in his book The Emperor’s New Mind, rely on this characteristic, while others believe code can be written to transcend code). It is this fresh and slightly enigmatic characteristic, especially when applied in furtherance of rationality and/or empathy, which we expect in anyone who is conscious rather than autonomic. Hence, “independence” does not require being a pioneer, or a leader. It does require being able to decide things and act based on a personalized gestalt rather than only on a rigid formula.

There is a philosophical gray zone called “free will” between independent reasoning and instinctual or programmed behavior. A benefit of the continuum of consciousness paradigm is that it empowers a wide variety of views regarding the independence of behavior to be considered conscious, while still recognizing important differences in the role played by genes, instinct or programming.

A third clarifying point concerns the use of “empathy” in the definition of consciousness. Similar to moral choice, empathy is crucial to a definition of consciousness because it tells us whether someone feels like us, as well as thinks like us (autonomy). For example, no matter how good a machine was at being an autonomous decision-maker (including moral decisions), and aware of its surroundings and of itself, most people would not admit it was conscious unless it truly seemed to understand and identify with other people’s feelings – which would require it to have feelings of its own. A mere ability to expertly arrive at moral judgments, without any affect in relation to any of those judgments, will not pass a consciousness litmus test with most people. To be humanly conscious one must not only know that genocide is wrong; one must also feel that genocide is horrific.

Empathy is a subset of sentience, which is the ability to have feelings and/or emotions. Hence, sentience is a necessary, but not sufficient, basis for consciousness. While every animal that feels pain is sentient, only those that identify and understand another being’s pain, at least to some extent, have a position on the Empathy axis of consciousness. Empathy also overlaps self-awareness, another necessary, but not sufficient, basis for consciousness. As shown in the chart below, the overlapping domains of self-awareness, sentience, empathy and autonomy define the continuum of consciousness.






Definition of Consciousness Diagram
1 -- Self-aware entities that lack feelings as well as autonomy, such as the DARPA car that drives itself but cannot decide to do anything else.
2 -- Sentient entities that lack self-awareness as well as empathy, such as an arthropod (< 10M neurons).
3 -- Autonomous entities that lack feelings, such as a suitably programmed robot without emotion routines.
4 -- Empathetic entities that lack self-awareness, such as some pets.
5 -- Conscious entities that are self-aware and sentient, and more specifically are relatively autonomous and empathetic, like people.
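The legend’s five regions can be restated as a small sketch over the four overlapping traits. The trait assignments simply follow the legend’s own examples and are illustrative only:

```python
from dataclasses import dataclass

# Sketch of the diagram's overlapping domains: score each entity on the
# four traits, and the legend's numbered regions fall out of which traits
# are present. (Illustrative assignments following the legend's examples.)
@dataclass
class Entity:
    name: str
    self_aware: bool
    sentient: bool
    autonomous: bool
    empathetic: bool

    def region(self):
        if self.self_aware and self.sentient and self.autonomous and self.empathetic:
            return 5  # conscious, like people
        if self.self_aware and not self.sentient and not self.autonomous:
            return 1  # e.g., the DARPA self-driving car
        if self.sentient and not self.self_aware and not self.empathetic:
            return 2  # e.g., an arthropod
        if self.autonomous and not self.sentient:
            return 3  # e.g., a robot without emotion routines
        if self.empathetic and not self.self_aware:
            return 4  # e.g., some pets
        return None   # outside the labeled regions

entities = [
    Entity("DARPA car", True, False, False, False),
    Entity("arthropod", False, True, False, False),
    Entity("robot", True, False, True, False),
    Entity("pet", False, True, False, True),
    Entity("person", True, True, True, True),
]
for e in entities:
    print(e.name, "-> region", e.region())
```

The point of the continuum model survives the simplification: consciousness is not one yes/no property but a position within overlapping domains of self-awareness, sentience, autonomy and empathy.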