Tuesday, July 14, 2009


“Am not going to argue whether a machine can really be alive, really be self-aware. Is a virus self-aware? Nyet. How about oyster? I doubt it. A cat? Almost certainly. A human? Don’t know about you, tovarishch, but I am. Somewhere along evolutionary chain from macromolecule to human brain self-awareness crept in. Psychologists assert it happens automatically whenever a brain acquires certain very high number of associational paths. Can’t see it matters whether paths are protein or platinum. (Soul? Does a dog have a soul? How about cockroach?)” Robert Heinlein, The Moon is a Harsh Mistress

Cyberconsciousness means consciousness in a cybernetic medium. Cybernetics is the replication of biological control systems with technology. In 1984 William Gibson coined the term ‘cyberspace’ in his novel Neuromancer about an alternative reality existing inside computer networks. Soon thereafter, cyber became a prefix meaning anything computer-related. That much is easy. Lengthy answers are needed for what is consciousness, and how could it possibly exist in a computerized form, outside of a brain.

The biggest problem with discussions of consciousness is that people are not sure what they are talking about. This is because consciousness is what Marvin Minsky calls a “suitcase word.” Such a word carries lots of meanings, so there are constant problems of comparing apples to oranges in debates about consciousness. For example, most people speak of consciousness as if it were one thing, self-awareness. Yet, surely baby self-awareness is different from adolescent self-awareness. The self-awareness of an octopus (if it exists) may well be quite diminished – or advanced – compared to that of a cat (if it exists).

A Suitcase Full of Autonomy and Empathy

There are three reasons why the common use of “self-awareness” as a definition for consciousness does not work well with cyberconsciousness. First, “any beginning programmer can write a short piece of software that examines, reports on, and even modifies itself.” It is thus easy to program software to be self-aware. For example, the software running a robot vehicle could be written to define objects in its real world. Those objects might be the terrain (“navigate it using sensors”), programmers (“follow any orders coming in”) and the vehicle itself (“I am a robot vehicle that navigates terrain in response to programming orders.”) Yet, very few people would accept that such a simple set of code, albeit literally “self-aware”, was conscious. It has too little in common with what most people think of as conscious – a being that thinks independently and is sensitive to the feelings of others (when not infantile, sleeping or seriously ill).
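The point above can be made concrete with a few lines of code. The sketch below is trivially “self-aware” in the literal sense described: it examines, reports on, and modifies its own state. The class name, attributes, and methods are hypothetical, invented only for illustration.

```python
# A minimal sketch of literally "self-aware" software: a program that
# examines, reports on, and modifies itself. No one would call it conscious.

class RobotVehicle:
    def __init__(self):
        # The program holds a model of itself as one of the objects
        # in its world, alongside terrain and programmers.
        self.description = "a robot vehicle that navigates terrain"
        self.max_speed = 10  # arbitrary units

    def report_self(self):
        # Examines and reports on itself.
        return f"I am {self.description}, with max speed {self.max_speed}."

    def modify_self(self, new_speed):
        # Modifies itself in response to programming orders.
        self.max_speed = new_speed


vehicle = RobotVehicle()
print(vehicle.report_self())
vehicle.modify_self(5)
print(vehicle.report_self())
```

The program satisfies a literal reading of “self-awareness” in a dozen lines, which is exactly why the term, taken alone, is too weak a standard for consciousness.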

A second problem with the “self-awareness” definition of consciousness is that it is an all-or-nothing proposition. In fact, given the graduated fashion in which brains have evolved, it is more likely that there are gradations of consciousness. Beings can be more or less independent thinkers – even human thought is largely dictated by genetics and upbringing – and beings can be more or less sensitive to others’ feelings – consider the animals you know, including the human ones. So, our definition of consciousness shouldn’t be the common “self-awareness” one because that term would force too gross a categorization. Its “either you are or are not” standard is inconsistent with the blurry reality of multitudinous and ambiguous differences in self-awareness.

A final problem with the “self-awareness” definition is that it doesn’t necessarily require what is called “phenomenal consciousness” (meaning awareness of one’s feelings and subjective perceptions), or “sentience.” The possibility of self-awareness without sentience (such as the Agent Smiths of The Matrix) exemplifies this third problem with the common definition of consciousness. For example, a person who acts as if they have no emotions is called a robot or zombie, meaning a machine without consciousness. Self-awareness is clearly necessary, but also far from sufficient, for a definition of consciousness that matches what people really mean by the term.

So, self-awareness is at once both the most common meaning of consciousness as well as a horrible match for what people really mean by consciousness! This occurs because when applied to humans, self-awareness secretly brings along (in Prof. Minsky’s suitcase) independent thought, sentience and empathy – all of which are part of being human. But when applied to other species, and to mindclones, we can no longer be sure what, if anything, is in “the suitcase.” Hence, the term self-awareness is inadequate to express our expectations for consciousness. We know the self-aware human is also somewhat rational, emotional and caring. So, self-aware humans are good enough proxies for conscious humans. We don’t know that a self-aware software program is anything but self-aware. Hence, for species other than humans, mere self-awareness is an inadequate definition for consciousness because we really require reason, feelings and concern as well.

Shortcomings of “What It Is Like to Be”

Consciousness entails a processing of perceptions into a mental worldview. This is what some people call the “what it is like to be” definition. Consciousness uses patterns of neural connections, usually triggered in real-time by physical sense data, to create something meta-physical – a more or less coherent, individualized and hence subjective, virtual image of one’s relevant world. It is the immeasurability of this subjectivity that also underlies the confusion over consciousness.

Most people require this mental subjectivity to include feelings or emotions (sentience) in order to qualify as consciousness. This is of course because feelings and emotions are integral to human consciousness. Sentience, on the other hand, is no better than self-awareness as a stand-alone definition of consciousness. This is because as noted above, we expect conscious beings to be independent thinkers as well as feelers. We can say humans are conscious if they are sentient, because we know all humans are also independent thinkers (Minsky’s suitcase again). But we cannot make the same statement regarding other species, or mindclones (that suitcase is still empty).

Feelings do not require having any cognitive capability at all. When a hooked worm or fish squirms, most people interpret that as evidence that it hurts (others however consider it a mere reaction like a knee jerk that indicates no emotion). If the hooked worm or fish is in pain, or is stressed, this means it has sentience. But most people would not consider the fish or worm conscious because we don’t believe some part of their neurology is thinking about the pain, and complaining about it. Instead, we think the worm or fish is simply reacting in pain, and is reflexively trying to get out of the nasty situation. Of course we humans would do likewise, but we would also (to the extent pain subsided) commiserate about it, and contemplate what to do next. It is upon such recondite differences that the definition of human consciousness resides. To satisfy the common conception of consciousness there needs to be autonomy (e.g., contemplation) and empathy (e.g., commiseration) as well as sentience and self-awareness.

To determine if software will become conscious we need a tighter definition for consciousness than self-awareness. We also need a definition that requires sentience, but is not satisfied with it alone. Most people will not be satisfied that a software being is conscious simply because there is something “that it is like to be” that software being – any more so than we think a fish is conscious because there may be something “that it is like to be a fish”, or a bat, or any other being. Experience, per se, is not what most people really mean by consciousness. There must also be an independent will – something akin to what is thought of as a soul – and also an element of transcendence – a conscience. Finally, we need a definition that can span a broad range of possible forms of consciousness.

The Continuum of Consciousness

A comprehensive solution to the consciousness conundrum is to adopt a new approach – “the continuum of consciousness” – that explains all of the diverse current views, while also pointing the way for fruitful quantitative research. Such a “continuum of consciousness” model would encompass everything from seemingly sentient animal behaviors to the human obsession with how others see us. It would provide a common lexicon for all researchers. Hence, the definition of consciousness needs to be broad but concrete:

Consciousness = A continuum of maturing abilities, when healthy, to be autonomous and empathetic, as determined by consensus of a small group of experts.

Autonomous means, in this context, the independent capacity to make reasoned decisions, with moral ones at the apex, and to act on them.

Independent means, in this context, capable of idiosyncratic thinking or acting.

Empathetic means, in this context, the ability to identify with and understand other beings’ feelings.

Feelings, in this context, mean a perceived mental or physical sensation or gestalt.

Small group of experts means, in this context, three or more individuals certified in a field of mental health or medical ethics.
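The definition above is operational enough to sketch in code. The fragment below is one illustrative reading of it, not the author’s specification: each certified expert rates the subject’s autonomy and empathy on a continuum (a 0-to-1 scale is an assumption made here for concreteness), and the panel must number at least three.

```python
# A sketch of the "continuum of consciousness" definition: consciousness is
# rated along two axes (autonomy, empathy) by a consensus of three or more
# experts. The 0-to-1 scale and function names are illustrative assumptions.

from statistics import mean


def consciousness_rating(expert_scores):
    """Combine a panel's ratings into a position on the continuum.

    expert_scores: list of (autonomy, empathy) pairs, each in [0, 1],
    one pair per certified expert. At least three experts are required,
    per the definition.
    """
    if len(expert_scores) < 3:
        raise ValueError("The definition requires three or more experts.")
    autonomy = mean(a for a, _ in expert_scores)
    empathy = mean(e for _, e in expert_scores)
    # Consciousness is a continuum, not all-or-nothing, so the result
    # is a pair of graded scores rather than a yes/no verdict.
    return {"autonomy": autonomy, "empathy": empathy}


panel = [(0.8, 0.7), (0.9, 0.6), (0.7, 0.8)]
print(consciousness_rating(panel))
```

Returning graded scores rather than a boolean mirrors the point made earlier: an “either you are or are not” standard is inconsistent with the blurry reality of gradations.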

This definition says a subject is a little conscious if they think and feel a little like us; they are very conscious if they think and feel just like us. It is a human-centric definition because when people ask “is it conscious?,” they mean “is it in any way humanly conscious?” In other words, conscious is a shorthand way of judging whether a subject “thinks and feels at all like people.”

How do we know if someone or something is empathetic or autonomous? Since “independence” and especially “feelings” are internal mental states, it is very difficult to be definitive about the existence of consciousness. It is likely that in the future individual neuron mapping will enable consciousness to be determined empirically. Until that time one’s consciousness is determined by others. A subject is conscious to the extent other people think they are autonomous and empathetic. This makes sense because, as noted above, it is compared to human consciousness that we measure any other consciousness as either absent or present to some degree. We think our dogs and cats are conscious because we see aspects of human consciousness in them.

Someone is guilty of an intentional crime if other people (the jury) think they had the mental intent to do the crime (as well as performing the criminal acts). Society is accustomed to letting others make determinative decisions about one’s mental state. Thus, it is logical to also let society make determinative decisions as to whether or not someone or something is conscious. For the determination of consciousness, the consensus of three or more experts in the field, such as psychologists or ethicists, substitutes for a jury. As software actually begins to present with consciousness, it is likely that professional associations will offer special mindclone psychology certifications to better standardize consciousness determinations.

Of course an expert determination of consciousness is not the same thing as a fully objective determination of consciousness. Similarly, a jury may think a defendant lacked criminal intent whereas, in fact, he really had the intent. However, when objective determinations are impossible, society readily accepts alternatives such as appraisals of one’s peers or experts. Also, when the experts determine that a software being is or is not conscious, they are of course only considering human consciousness. Prof. Minsky’s consciousness suitcase always carries a human-centric bias.

It is important to clarify a few aspects of the “continuum of consciousness” definition. First, the inability to make moral decisions, due to lack of understanding of right and wrong, makes one less conscious. This is because human consciousness includes moral judgments, and it is compared to this understanding of consciousness that we decide whether a gradation of it exists.

The reason for moral choice having a dominant role is that consciousness matters because it embodies the most important shared value among humans, that of a moral conscience. In other words, while consciousness has a minimalist definition of being awake, alert and aware – “is he conscious?!” – it also has a more salient meaning of thinking and feeling like a normal human. To think like a normal human, one must be able to make the kind of moral decisions, based on some variant of the Golden Rule, which Kant taught were hard-wired into human brains.

For example, no matter how self-aware or empathetic a being was, most people would not admit it shared human consciousness unless it had a maturing ability to understand, when healthy, the difference between shared concepts of right and wrong. To such people a Hitler is conscious, whereas a crocodile is merely self-aware, because a Hitler makes (very wrong) moral choices, while a crocodile makes no moral choice at all. The continuum of consciousness paradigm would call the crocodile less conscious than Hitler if experts agreed it had diminished but still present idiosyncratic decision-making capability (even if moral judgment was absent) and at least some modicum of empathy.

A second important clarifying point relates to the term “independent.” While the true independence of anyone in society is contestable (e.g., do we just do what our genes tell us to do?), the inclusion of this term would exclude from consciousness only an entity that had absolutely no independent capacity, i.e., an automaton or zombie. The reason for the requirement of idiosyncratic thought is that we expect each human to be unique. Even if we are bounded by our genes, and constrained by our culture, we are each a one-of-a-kind, not fully predictable mixture of such programming. We are independent because our blended nature enables us to transcend our programming. (Skeptics of software consciousness, such as Roger Penrose in his book The Emperor’s New Mind, rely on this characteristic, while others believe code can be written to transcend code). It is this fresh and slightly enigmatic characteristic, especially when applied in furtherance of rationality and/or empathy, which we expect in anyone who is conscious rather than autonomic. Hence, “independence” does not require being a pioneer, or a leader. It does require being able to decide things and act based on a personalized gestalt rather than only on a rigid formula.

There is a philosophical gray zone called “free will” between independent reasoning and instinctual or programmed behavior. A benefit of the continuum of consciousness paradigm is that it empowers a wide variety of views regarding the independence of behavior to be considered conscious, while still recognizing important differences in the role played by genes, instinct or programming.

A third clarifying point concerns the use of “empathy” in the definition of consciousness. Similar to moral choice, empathy is crucial to a definition of consciousness because it tells us whether someone feels like us, as well as thinks like us (autonomy). For example, no matter how good a machine was at being an autonomous decision-maker (including moral decisions), and aware of its surroundings and of itself, most people would not admit it was conscious unless it truly seemed to understand and identify with other people’s feelings – which would require it to have feelings of its own. A mere ability to expertly arrive at moral judgments, without any affect in relation to any of those judgments, will not pass a consciousness litmus test with most people. To be humanly conscious one must not only know that genocide is wrong; one must also feel that genocide is horrific.

Empathy is a subset of sentience, which is the ability to have feelings and/or emotions. Hence, sentience is a necessary, but not sufficient, basis for consciousness. While every animal that feels pain is sentient, only those that identify and understand another being’s pain, at least to some extent, have a position on the Empathy axis of consciousness. Empathy also overlaps self-awareness, another necessary, but not sufficient, basis for consciousness. As shown in the chart below, the overlapping domains of self-awareness, sentience, empathy and autonomy define the continuum of consciousness.

Definition of Consciousness Diagram
1 – Self-aware entities that lack feelings as well as autonomy, such as the DARPA car that drives itself but cannot decide to do anything else.
2 – Sentient entities that lack self-awareness as well as empathy, such as an arthropod (< 10M neurons).
3 – Autonomous entities that lack feelings, such as a suitably programmed robot without emotion routines.
4 – Empathetic entities that lack self-awareness, such as some pets.
5 – Conscious entities that are self-aware and sentient, and more specifically are relatively autonomous and empathetic, like people.
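The five regions of the diagram can be restated as a rough decision procedure over the four axes named in the text. The rules below are one illustrative reading of items 1 through 5, treating each axis as a simple yes/no for clarity even though the text describes them as continua.

```python
# A sketch mapping the diagram's regions onto the four axes of the
# continuum: self-awareness, sentience, autonomy, empathy. The labels
# and rules are an illustrative reading of items 1-5, not a formal model.

def classify(self_aware, sentient, autonomous, empathetic):
    if self_aware and sentient and autonomous and empathetic:
        return "conscious (5)"                    # like people
    if self_aware and not sentient and not autonomous:
        return "merely self-aware (1)"            # e.g., a self-driving car
    if sentient and not self_aware and not empathetic:
        return "merely sentient (2)"              # e.g., an arthropod
    if autonomous and not sentient:
        return "autonomous, unfeeling (3)"        # robot, no emotion routines
    if empathetic and not self_aware:
        return "empathetic, not self-aware (4)"   # some pets
    return "intermediate point on the continuum"


print(classify(True, True, True, True))  # prints "conscious (5)"
```

The final fallback line reflects the paradigm’s central claim: most entities fall at intermediate points rather than into any clean category.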