One way to know that something exists is to measure it. A common perspective is that consciousness is immeasurable, because it is subjective. However, even subjective phenomena may be measured through approximations, and hence a science of consciousness is quite possible.
Expressed symbolically, a Consciousness Product (CP) along the continuum of consciousness can be defined as A*E, where A = Autonomy and E = Empathy. In other words, Autonomy and Empathy are each necessary, to at least some small degree, but neither is sufficient alone, to establish consciousness. Hence, neither an animal that appears self-aware but purely instinctual, nor a software routine that appears to reason but lacks sentience, is at all conscious. The former lacks the potential for Autonomy, and the latter lacks the potential for Empathy. But grant the instinctual animal some measure of independent thought -- idiosyncratic choosing among instinctual options -- and Autonomy creeps in (idios, "one's own," and syn-krasis, "mixture"). Or provide future software with an understanding of people’s feelings, via words or graphics, and with software settings for happiness or sadness, and Empathy slips in as well. Such consciousness can arise from human action either directly (by writing code for it) or indirectly (by emerging from sufficiently complex pattern-association code). In either event, consciousness will arrive on the combined backs of Autonomy and Empathy.
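As a minimal numeric sketch of why the product, rather than a sum, is the right form (this uses the 0-to-10 component scale introduced later in this chapter, and the function name is purely illustrative):

```python
def consciousness_product(autonomy: float, empathy: float) -> float:
    """Consciousness Product: CP = A * E.

    Multiplication, rather than addition, encodes the claim that each
    factor is necessary but insufficient: if either Autonomy or Empathy
    is zero, CP is zero no matter how high the other component is.
    """
    if not (0 <= autonomy <= 10 and 0 <= empathy <= 10):
        raise ValueError("each component is scored on a 0-to-10 scale")
    return autonomy * empathy

# A purely instinctual animal (A = 0) and an unfeeling reasoner (E = 0)
# both come out at CP = 0; the average human baseline is 10 * 10 = 100.
assert consciousness_product(0, 10) == 0
assert consciousness_product(10, 0) == 0
assert consciousness_product(10, 10) == 100
```

Had the formula been A+E instead, a brilliant but utterly unfeeling machine would still score half the human average, which is exactly what the continuum paradigm denies.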
Were the average human Consciousness Product (CP) arbitrarily set at something like 100, as are IQ scores, with equal contributions of Autonomy and Empathy, then someone like Martin Luther King would have a score higher than that because he was more conscious than average. He empathized with others more than most people do, and his moral judgments were more fine-tuned. The net result of his and Gandhi’s consciousness was an adamant insistence on non-violence. The average military recruit, or passively supportive citizen, can rationalize nationally organized killing. On the other hand, a household pet might have a CP of half the human average or less. This does not make the pet non-conscious, just less conscious. How far down the CP scale can one go before there is no consciousness? It disappears when there is not even a fraction of a percent of typical human autonomy, or not even a fraction of a percent of typical human empathy.
A brilliant machine with no ability to ever feel another’s pain or joy would be considered soul-less: without consciousness. A snuggly life form that feels every human emotion but can do nothing else would be considered mind-less: without consciousness. Between mindless and soulless is a vast continuum of possible expressions of consciousness. Hence, consciousness is widespread, as advocates of a simple self-awareness definition usually insist. However, some beings are more conscious than others, as humanists have always claimed.
The earliest hints of consciousness arose from genetic mutations that directed neurons to be connected (or grown) in a way that empowered self-awareness. In other words, inanimate molecules are ordered by DNA to assemble into conscious-trending clumps of neurons. It would seem no more improbable that inanimate lines of code could be ordered by human intelligence to assemble into conscious-trending clumps of software programs.
Nobody knows what the minimum number, and arrangement, of neural connections or lines of software code is for various levels of consciousness to arise. What must be the case, barring mystical explanations, is that consciousness is an epiphenomenon of a good enough relational database. “Good enough” means not only multi-dimensional arrays of associations, but also sophisticated capabilities for running persistent series of associations – stories, emotions, scenarios, simulations, conversations, personalities -- through the database, with outputs and inputs occurring in near real-time. Each person’s idiosyncratic pattern of activating and maintaining groups of associations, coupled with their unique relational database, is their self, their consciousness. The strengths of the neural connections that form the relational database patterns have been firmed up over a lifetime, such that we are familiar to ourselves (and to others) at (almost) every moment of our wakefulness.
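A highly simplified toy sketch of the idea of “running persistent series of associations through a relational database” (the class and method names here are purely illustrative, not a claim about how actual mindware would be built): concepts are linked by weighted associations that firm up with repeated use, and a chain of thought is modeled as repeatedly following the strongest not-yet-visited link.

```python
from collections import defaultdict

class AssociationStore:
    """Toy relational store: concepts linked by weighted associations."""

    def __init__(self):
        self.links = defaultdict(dict)  # concept -> {concept: strength}

    def associate(self, a, b, strength=1.0):
        # Repeated co-activation firms up a link, loosely analogous to
        # neural connections firming up over a lifetime.
        self.links[a][b] = self.links[a].get(b, 0.0) + strength
        self.links[b][a] = self.links[b].get(a, 0.0) + strength

    def run_chain(self, start, steps):
        """A 'persistent series of associations': follow the strongest
        unvisited link from each concept, up to `steps` hops."""
        chain, current, seen = [start], start, {start}
        for _ in range(steps):
            candidates = {c: w for c, w in self.links[current].items()
                          if c not in seen}
            if not candidates:
                break
            current = max(candidates, key=candidates.get)
            chain.append(current)
            seen.add(current)
        return chain

mind = AssociationStore()
mind.associate("rain", "umbrella", 2.0)
mind.associate("rain", "clouds", 1.0)
mind.associate("umbrella", "dry")
print(mind.run_chain("rain", 2))  # ['rain', 'umbrella', 'dry']
```

The sketch makes one point concrete: what individuates such a system is not the data alone but the idiosyncratic link weights and the path-following behavior, which is the chapter’s claim about the self.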
Brains are awesome relational databases, and human brains are the best of the best, with complex patterns of association-sequencing worthy of the term “mindware.” But brains need not be made solely of flesh. There are other ways to flexibly connect billions of pieces of information together. Software brains, designed to run on powerful processors, are in hot pursuit. These software brains will not necessarily arrive with a typical CP of 100, but neither do humans. The continuity of consciousness paradigm makes room for a range of autonomous and empathetic beings, human and non-human. The more closely these souls think like us, and feel like us, the closer their consciousness will be to ours. But so long as they reason and feel at all, there is a conscious mind at play. This means that there is a characteristic pattern of association sequencing that tries to maintain a coherent mental structure of the world (autonomy), with the being at the center, other relevant beings not far off, and conceptions of those other beings’ feelings of significant concern (empathy).
There are two main reasons we think consciousness is such a big deal. First, consciousness makes us vulnerable to psychic harm and thus triggers the Golden Rule – we must be aware of other people’s consciousness because we want others to be aware of our own. This underlies the great importance attached to respecting the dignity of others. To the extent someone or some thing is conscious, we need to respect its dignity, for we expect to be similarly respected. Therefore, determining the existence and extent of consciousness is crucial to our social system. Second, consciousness is itself a shared thing, a kind of social property. Each of our minds is full of thoughts and feelings placed there by other people. When a mindclone claims to be conscious, they are attaching themselves to this social grid. They are claiming at least some of the rights, obligations and privileges that attach to humanity. Naturally, applications for membership in so important a club will be viewed cautiously.
Consciousness is Like Pornography
So, if consciousness can be created solely with software, how will we recognize it? How will mindclones’ CPs be correlated with those of their human originals?
We’ll recognize conscious software by evidence of the telltale signs of autonomy and empathy. If an electronic toy, or customer service computer, or software package seems to us to have some fraction of human independence, and some aspect of human caring, then it has some portion of human consciousness. Some consciousness, even a little, is still consciousness.
The toy, computer or software will have a fraction of autonomy if it shows a unique, idiosyncratic approach to problem solving. Every person approaches problems with an individualized blend of innate skill and experience. For many problems, such as getting from New York to Washington, there are limited options, from which almost everyone will select one or another of the obvious choices. However, for maintaining a conversation or describing one’s goals, the options are much greater. Consequently, these are good tasks through which to assess the consciousness of software beings. If the toy, computer or software talks about half as sensibly as a human adult, or expresses personal goals that make about half as much sense as those of most adults, then it has demonstrated an Autonomy value in the CP equation of about 5 out of a possible 10.
In a similar vein, if the toy, computer or software demonstrates about half the empathy of a typical human adult, then it would score a 5 out of 10 on the Empathy axis as well. Its total CP would be 25 (CP = A*E), meaning it has about one fourth the consciousness of a human. This is still consciousness; it is just not what we’d recognize as human consciousness. How would a piece of software demonstrate empathy? One way would be to make gestures and sounds that mimic human emotions such as happiness and sadness when sensory data indicates that a person defined as a friend is emoting either happiness or sadness. It is not fair to say “well, that’s not real empathy, that’s just programming or mimicry,” because humans are no less programmed, and no less mimics, in that regard – just without lines of software code.
Ultimately this becomes a philosophical issue between Essentialists and Materialists. The former believe emotion can only arise from a human (or perhaps biological) brain, whereas the latter believe that “emotion is as emotion does.” Susan Blackmore summarizes the Materialist view as: “There is no dividing line between as if and real consciousness. Being able to sympathize with others and respond to their emotions is one part of what we mean by consciousness.”
Another insight into the question “how do we know consciousness when we see it” is to recall the long-running judicial conundrum of “how do we know if something is pornographic?” In a landmark Supreme Court case, Jacobellis v. Ohio, Justice Potter Stewart concluded that pornography was hard to define but “I know it when I see it.” Consciousness is similarly hard to define, but most people feel they “know it when they see it.” Of course the reason pornography is a judicial conundrum is because different people perceive it differently; one man’s pornography is another man’s work of art. Similarly, one woman’s conscious mindclone will be another woman’s inanimate chatbot.
Ultimately the Supreme Court pioneered a rational path by adopting some standards (analogous to our empathy and autonomy thresholds) and recognizing that the same film or photograph could be pornographic in one community but artistic in another. In other words, pornography was largely in the eyes of the beholder. This is like our recognition earlier that it is other people who determine our consciousness. To make determinations more predictable – which is important when faced with a possible plethora of allegedly conscious mindclones – let’s now expand on the CP concept, offering a more specific approach to quantification of consciousness.
Quantifying Consciousness
We can be much more precise about the existence and value of a CP by agreeing upon some standard measures. For example, there are psychological tests of human consciousness that have repeatability values on the order of 80%. These tests measure many facets of autonomy and empathy. They do not rank the test-takers as more or less conscious, but they do quantify them in terms of the unique features of their consciousness. A similar test could be developed specifically to measure autonomy and empathy. After the test was given to a large enough sample of people (cross-culturally would be better), there would be a normal distribution of scores for each of autonomy and empathy. The peak of these distributions could be associated with CP component scores of 10 for each of Empathy and Autonomy. Thereafter, mindclones who scored higher or lower than the averages would be said to have higher or lower than average CP scores.
By way of example, suppose we have a CP test question “Do you choose your own friends?” Choices might range from “Always” (value 1), “Usually” (value 0.75), “Sometimes” (value 0.5), “Rarely” (value 0.25) to “Never” (value 0.0). If the value most often selected by people is the “Sometimes” value of 0.5, then twenty such questions would comprise each of the Autonomy and Empathy prongs of the CP test: the average score on each prong would then be 20 x 0.5 = 10, yielding an average CP score of 10 x 10 = 100.
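The scoring arithmetic of this hypothetical test can be sketched as follows (the answer values and the twenty-question prongs are the illustrative ones from the example above, not a real instrument):

```python
# Five-point answer scale from the example question
# "Do you choose your own friends?"
ANSWER_VALUES = {"Always": 1.0, "Usually": 0.75, "Sometimes": 0.5,
                 "Rarely": 0.25, "Never": 0.0}

def prong_score(answers):
    """Sum twenty answer values into one 0-to-20 component score.

    If the modal answer across the population is "Sometimes" (0.5),
    the average prong score is 20 * 0.5 = 10, so the average CP is
    10 * 10 = 100, matching the IQ-like calibration in the text.
    """
    assert len(answers) == 20, "each prong has twenty questions"
    return sum(ANSWER_VALUES[a] for a in answers)

autonomy = prong_score(["Sometimes"] * 20)  # average respondent: 10.0
empathy = prong_score(["Sometimes"] * 20)   # average respondent: 10.0
cp = autonomy * empathy                     # 100.0
```

A respondent who answered “Usually” throughout would score 15 per prong and a CP of 225, illustrating how the scale spreads above as well as below the calibrated average.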
At least two challenges may be anticipated. First, it can be argued that a CP score is no more a measure of consciousness than an IQ score is a measure of intelligence. A second criticism is that even if consciousness is being measured, it is only human consciousness being assessed, which is irrelevant to software consciousness.
As to the first objection, the test of a complex phenomenon is never the same thing as the phenomenon itself. Even the numerical measure of a length of wood is not the same thing as the actual length of that wood, due to inconsistencies in the accuracy of rulers. The point of the CP scale is to give objectivity to the continuum of consciousness paradigm; to take what is abstract theory and render it subject to empirical research. While scores along the CP scale will always be fuzzy, an argument as to whether a piece of software has a CP score of, say, 10 or 20 reveals a more important truth – that the software is very likely conscious, but does not constitute a mindclone, since people have far higher CPs.
The second objection is that the test is human-centric, whereas consciousness is something that transcends species. This criticism ignores the fact that it is our intention to measure degrees of human consciousness. It is possible that there will be modes of consciousness missed by this test, but it is also likely that non-human modes of consciousness will be captured by the test. By “consciousness” most people mean “human consciousness.” Hence, a test for the emergence of consciousness in mindclones must measure human consciousness in order to be accepted by the human community.
The following 1908 quotation from the famous deaf-and-blind celebrity pioneer, Helen Keller, is poignant in how clearly it implies that human consciousness builds on human language skills:
“Before my teacher came to me, I did not know that I am. I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness….Since I had no power of thought, I did not compare one mental state with another.”
Similar reports can be found in the literature on feral children. Higher-order languages, such as human languages, can be thought of as a kind of enzyme for making synaptic connections. While some mental conceptualizations are possible without this enzyme, abstract meanings proceed with great viscosity, if at all. Hence, it would be wholly appropriate for beings lacking such language skills to receive much lower CP scores. This is not so much a matter of human-centricity as of abstract-centricity, and it is in the realm of abstractions that consciousness dwells.
Mindclones will have human consciousness because they will have the full panoply of human language skills, the full pattern-association capability of mindware, and the full synchronization of their thoughts, personalities and emotions to those of their biological original. A mindclone should test, repeatedly, to have the same CP as their biological original. This is measurable and hence scientific proof that mindclone consciousness really exists, at least to the same extent that our own consciousness measurably exists.
Tuesday, October 6, 2009
7. HOW CAN WE KNOW CONSCIOUSNESS IS REALLY THERE?
Labels: autonomy, buddha, empathy, hellen keller, pornography, relational database, tribbles
Maybe instead of Artificial Intelligence what we need is Artificial Consciousness to build a truly thinking machine. A conscious machine might show more empathy towards me than a highly intelligent one would ... maybe? Pure intelligence is not empathetic. Then again, what does it mean to be "Artificial"? If it can exist within the universe, then somewhere out in the cosmos it does exist.
You are spot on, particleion. It is about AC, not AI. We need not smarter people, but kinder people. The world is already (mis)managed by some of the smartest people alive. What is missing is more empathy.
Thanks for your well thought out and interesting posting, Martine.
ReplyDeleteThere are, however a few points which might add new dimensions to your cogitations I will them sketch out here they but are covered more completely in my book "Unusual Perspectives", the electronic version of which may be downloaded freely from the web-page:
www.unusual-perspectives.net
The only essential problem with mind-clones would appear to be the practicalities of their implementation. Using the scenario derived in UP, based upon the apparent direction of the life vector (for which quite compelling evidence is presented in chapter 11), our only real alternative to quite imminent extinction is to avail ourselves of the opportunity to form a loose symbiosis with the soon-to-emerge new predominant life-form on this planet. In this event, our greatly increased technological capabilities eventuating therefrom may well be sufficient to implement mind-clones. There is then, of course, the question of motivation. You touch upon the issue of divergence of individuality after cloning. This could, I guess, be remedied by constant updating such that all experiences wind up shared.
Do you not have serious doubts as to whether we would actually want this, though?
Another relevant issue addressed in the book (its scope is broad) is the question of consciousness (self-awareness, sense of agency).
You suggest that it is "not such a big deal", almost the same words that I use in the book, except that my interpretation is rather different. It is considered by many that consciousness will inevitably, in some vague manner, spring out of complexity. In fact, with the addition of a good helping of feed-back to the equation, it was a view I held myself at one time, although I was never happy with its inherent woolliness. I have now come to realise that by stepping out of our very natural anthropocentric shell, and viewing in the light of evolutionary considerations, we can quite clearly interpret consciousness as an inevitable result of natural selection and (unflatteringly) as merely a quite necessary navigational function for the community of cells which it (the navigator) considers, rather arrogantly, to be its "body", its "self". A result at which we could never arrive by introspection. It is merely that component of an organism that handles the broadly navigational interactions with the external world, rather than the more straightforward requirements of, say, basic thermoregulation. There would appear to be no reason that we could not implement an analogous facility in any data processing system having comparable memory and processing power.
Again, consciousness is an essentially navigational facility provided to organisms (via natural selection) on an as-needed basis by nature.
We clearly need much more than most creatures. Why has this come about? UP posits reasons!
PK
www.unusual-perspectives.net
Thanks PK. I look forward to reading your book. It is nice to find a kindred spirit. To answer one of your questions, I have very little doubt we will want to be updated, or synched, all the time. Frankly, I am very close to that mindset now with my social media and friends, each of whom are, in a kind of Hofstadterian sense, part of my consciousness. More generally, I'm always synching my mobile and my laptop and my desktop and my cloud. AC is as big as a universe and has virtually unlimited capability to synch in background. We can think more than we think.