Tuesday, July 14, 2009

5. WHAT IS CYBERCONSCIOUSNESS?


“Am not going to argue whether a machine can really be alive, really be self-aware. Is a virus self-aware? Nyet. How about oyster? I doubt it. A cat? Almost certainly. A human? Don’t know about you, tovarishch, but I am. Somewhere along evolutionary chain from macromolecule to human brain self-awareness crept in. Psychologists assert it happens automatically whenever a brain acquires certain very high number of associational paths. Can’t see it matters whether paths are protein or platinum. (Soul? Does a dog have a soul? How about cockroach?)” (Robert Heinlein, The Moon Is a Harsh Mistress)

Cyberconsciousness means consciousness in a cybernetic medium. Cybernetics is the replication of biological control systems with technology. In his 1984 novel Neuromancer, about an alternative reality existing inside computer networks, William Gibson popularized the term ‘cyberspace’. Soon thereafter, cyber became a prefix for anything computer-related. That much is easy. Lengthier answers are needed for what consciousness is, and how it could possibly exist in computerized form, outside of a brain.

The biggest problem with discussions of consciousness is that people are not sure what they are talking about. This is because consciousness is what Marvin Minsky calls a “suitcase word.” Such a word carries many meanings, so debates about consciousness constantly compare apples to oranges. For example, most people speak of consciousness as if it were one thing: self-awareness. Yet surely a baby’s self-awareness is different from an adolescent’s. The self-awareness of an octopus (if it exists) may well be quite diminished – or advanced – compared to that of a cat (if it exists).

A Suitcase Full of Autonomy and Empathy

There are three reasons why the common use of “self-awareness” as a definition for consciousness does not work well with cyberconsciousness. First, “any beginning programmer can write a short piece of software that examines, reports on, and even modifies itself.” It is thus easy to program software to be self-aware. For example, the software running a robot vehicle could be written to define objects in its real world. Those objects might be the terrain (“navigate it using sensors”), programmers (“follow any orders coming in”) and the vehicle itself (“I am a robot vehicle that navigates terrain in response to programming orders.”) Yet very few people would accept that such a simple set of code, albeit literally “self-aware,” was conscious. It has too little in common with what most people think of as conscious – a being that thinks independently and is sensitive to the feelings of others (when not infantile, sleeping or seriously ill).
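To see how cheap literal self-awareness is, here is a minimal Python sketch, in the spirit of the robot vehicle above (the class and its self-description are invented for illustration). It examines and reports on itself, yet no one would call it conscious:

import inspect

class RobotVehicle:
    """I am a robot vehicle that navigates terrain in response to
    programming orders."""

    def report_on_self(self):
        # The program examines itself: it names its own type and
        # enumerates its own capabilities...
        print(type(self).__name__, "capabilities:",
              [m for m in dir(self) if not m.startswith("_")])
        # ...and it can even read its own source code.
        print(inspect.getsource(RobotVehicle))

RobotVehicle().report_on_self()

Nothing in this code thinks independently or feels anything, which is exactly the point of the paragraph above.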

A second problem with the “self-awareness” definition of consciousness is that it is an all-or-nothing proposition. In fact, given the graduated fashion in which brains have evolved, it is more likely that there are gradations of consciousness. Beings can be more or less independent thinkers – even human thought is largely dictated by genetics and upbringing – and beings can be more or less sensitive to others’ feelings – consider the animals you know, including the human ones. So, our definition of consciousness shouldn’t be the common “self-awareness” one, because that term would force too gross a categorization. Its “either you are or are not” standard is inconsistent with the blurry reality of multitudinous and ambiguous differences in self-awareness.

A final problem with the “self-awareness” definition is that it doesn’t necessarily require what is called “phenomenal consciousness” (meaning awareness of one’s feelings and subjective perceptions), or “sentience.” The possibility of self-awareness without sentience (such as the Agent Smiths of The Matrix) exemplifies this third problem with the common definition of consciousness. For example, a person who acts as if they have no emotions is called a robot or zombie, meaning a machine without consciousness. Self-awareness is clearly necessary, but also far from sufficient, for a definition of consciousness that matches what people really mean by the term.

So, self-awareness is at once both the most common meaning of consciousness as well as a horrible match for what people really mean by consciousness! This occurs because when applied to humans, self-awareness secretly brings along (in Prof. Minsky’s suitcase) independent thought, sentience and empathy – all of which are part of being human. But when applied to other species, and to mindclones, we can no longer be sure what if anything is in “the suitcase.” Hence, the term self-awareness is inadequate to express our expectations for consciousness. We know the self-aware human is also somewhat rational, emotional and caring. So, self-aware humans are good enough proxies for conscious humans. We don’t know that a self-aware software program is anything but self-aware. Hence, for species other than humans, mere self-awareness is an inadequate definition for consciousness because we really require reason, feelings and concern as well.

Shortcomings of “What It Is Like to Be”

Consciousness entails a processing of perceptions into a mental worldview. This is what some people call the “what it is like to be” definition. Consciousness uses patterns of neural connections, usually triggered in real-time by physical sense data, to create something meta-physical – a more or less coherent, individualized and hence subjective, virtual image of one’s relevant world. It is the immeasurability of this subjectivity that also underlies the confusion over consciousness.

Most people require this mental subjectivity to include feelings or emotions (sentience) in order to qualify as consciousness. This is of course because feelings and emotions are integral to human consciousness. Sentience, on the other hand, is no better than self-awareness as a stand-alone definition of consciousness. This is because as noted above, we expect conscious beings to be independent thinkers as well as feelers. We can say humans are conscious if they are sentient, because we know all humans are also independent thinkers (Minsky’s suitcase again). But we cannot make the same statement regarding other species, or mindclones (that suitcase is still empty).

Feelings do not require having any cognitive capability at all. When a hooked worm or fish squirms, most people interpret that as evidence that it hurts (others, however, consider it a mere reaction, like a knee jerk, that indicates no emotion). If the hooked worm or fish is in pain, or is stressed, this means it has sentience. But most people would not consider the fish or worm conscious, because we don’t believe some part of their neurology is thinking about the pain and complaining about it. Instead, we think the worm or fish is simply reacting in pain, reflexively trying to get out of the nasty situation. Of course we humans would do likewise, but we would also (to the extent pain subsided) commiserate about it, and contemplate what to do next. It is upon such recondite differences that the definition of human consciousness rests. To satisfy the common conception of consciousness there needs to be autonomy (e.g., contemplation) and empathy (e.g., commiseration) as well as sentience and self-awareness.

To determine if software will become conscious we need a tighter definition for consciousness than self-awareness. We also need a definition that requires sentience, but is not satisfied with it alone. Most people will not be satisfied that a software being is conscious simply because there is something “that it is like to be” that software being – any more so than we think a fish is conscious because there may be something “that it is like to be a fish”, or a bat, or any other being. Experience, per se, is not what most people really mean by consciousness. There must also be an independent will – something akin to what is thought of as a soul – and also an element of transcendence – a conscience. Finally, we need a definition that can span a broad range of possible forms of consciousness.

The Continuum of Consciousness

A comprehensive solution to the consciousness conundrum is to adopt a new approach – “the continuum of consciousness” – that explains all of the diverse current views, while also pointing the way for fruitful quantitative research. Such a model would encompass everything from seemingly sentient animal behaviors to the human obsession with how others see us. It would provide a common lexicon for all researchers. Hence, the definition of consciousness needs to be broad but concrete:

Consciousness = A continuum of maturing abilities, when healthy, to be autonomous and empathetic, as determined by consensus of a small group of experts.

Autonomous means, in this context, the independent capacity to make reasoned decisions, with moral ones at the apex, and to act on them.

Independent means, in this context, capable of idiosyncratic thinking or acting.

Empathetic means, in this context, the ability to identify with and understand other beings’ feelings.

Feelings, in this context, mean a perceived mental or physical sensation or gestalt.

Small group of experts means, in this context, three or more individuals certified in a field of mental health or medical ethics.

This definition says a subject is a little conscious if they think and feel a little like us; they are very conscious if they think and feel just like us. It is a human-centric definition because when people ask “is it conscious?” they mean “is it in any way humanly conscious?” In other words, conscious is a shorthand way of judging whether a subject “thinks and feels at all like people.”

How do we know if someone or something is empathetic or autonomous? Since “independence” and especially “feelings” are internal mental states, it is very difficult to be definitive about the existence of consciousness. It is likely that in the future individual neuron mapping will enable consciousness to be determined empirically. Until that time one’s consciousness is determined by others. A subject is conscious to the extent other people think they are autonomous and empathetic. This makes sense because, as noted above, it is compared to human consciousness that we measure any other consciousness as either absent or present to some degree. We think our dogs and cats are conscious because we see aspects of human consciousness in them.

Someone is guilty of an intentional crime if other people (the jury) think they had the mental intent to do the crime (as well as performing the criminal acts). Society is accustomed to letting others make determinative decisions about one’s mental state. Thus, it is logical to also let society make determinative decisions as to whether or not someone or something is conscious. For the determination of consciousness, the consensus of three or more experts in the field, such as psychologists or ethicists, substitutes for a jury. As software actually begins to present with consciousness, it is likely that professional associations will offer special mindclone psychology certifications to better standardize consciousness determinations.
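As a thought experiment, that determination procedure could be operationalized along the following lines. This is only a minimal sketch: the names, the 0-to-1 scales and the use of the median as "consensus" are all invented for illustration.

from dataclasses import dataclass
from statistics import median

@dataclass
class ExpertRating:
    expert_id: str      # someone certified in mental health or medical ethics
    autonomy: float     # 0.0 (none) to 1.0 (fully human-like)
    empathy: float      # 0.0 (none) to 1.0 (fully human-like)

def consciousness_determination(ratings):
    # Per the definition above, a determination requires a consensus
    # of three or more experts; the median stands in for consensus here.
    if len(ratings) < 3:
        raise ValueError("a determination requires three or more experts")
    return (median(r.autonomy for r in ratings),
            median(r.empathy for r in ratings))

panel = [
    ExpertRating("psychologist-1", autonomy=0.6, empathy=0.7),
    ExpertRating("ethicist-1",     autonomy=0.5, empathy=0.8),
    ExpertRating("psychologist-2", autonomy=0.7, empathy=0.6),
]
print(consciousness_determination(panel))  # (0.6, 0.7): conscious to that degree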

Of course an expert determination of consciousness is not the same thing as a fully objective determination of consciousness. Similarly, a jury may think a defendant lacked criminal intent whereas, in fact, he really had the intent. However, when objective determinations are impossible, society readily accepts alternatives such as appraisals of one’s peers or experts. Also, when the experts determine that a software being is or is not conscious, they are of course only considering human consciousness. Prof. Minsky’s consciousness suitcase always carries a human-centric bias.

It is important to clarify a few aspects of the “continuum of consciousness” definition. First, the inability to make moral decisions, due to lack of understanding of right and wrong, makes one less conscious. This is because human consciousness includes moral judgments, and it is compared to this understanding of consciousness that we decide whether a gradation of it exists.

Moral choice plays a dominant role because consciousness embodies the most important shared value among humans: a moral conscience. In other words, while consciousness has a minimalist definition of being awake, alert and aware – “is he conscious?!” – it also has a more salient meaning of thinking and feeling like a normal human. To think like a normal human, one must be able to make the kind of moral decisions, based on some variant of the Golden Rule, that Kant taught were hard-wired into human brains.

For example, no matter how self-aware or empathetic a being was, most people would not grant that it shared human consciousness unless it had a maturing ability to understand, when healthy, the difference between shared concepts of right and wrong. To such people a Hitler is conscious, whereas a crocodile is merely self-aware, because a Hitler makes (very wrong) moral choices, while a crocodile makes no moral choices at all. The continuum of consciousness paradigm would call the crocodile less conscious than Hitler if experts agreed it had diminished but still present idiosyncratic decision-making capability (even if moral judgment was absent) and at least some modicum of empathy.

A second important clarifying point relates to the term “independent.” While the true independence of anyone in society is contestable (e.g., do we just do what our genes tell us to do?), the inclusion of this term would exclude from consciousness only an entity that had absolutely no independent capacity, i.e., an automaton or zombie. The reason for the requirement of idiosyncratic thought is that we expect each human to be unique. Even if we are bounded by our genes, and constrained by our culture, we are each a one-of-a-kind, not fully predictable mixture of such programming. We are independent because our blended nature enables us to transcend our programming. (Skeptics of software consciousness, such as Roger Penrose in his book The Emperor’s New Mind, rely on this characteristic, while others believe code can be written to transcend code.) It is this fresh and slightly enigmatic characteristic, especially when applied in furtherance of rationality and/or empathy, which we expect in anyone who is conscious rather than autonomic. Hence, “independence” does not require being a pioneer, or a leader. It does require being able to decide things and act based on a personalized gestalt rather than only on a rigid formula.

There is a philosophical gray zone called “free will” between independent reasoning and instinctual or programmed behavior. A benefit of the continuum of consciousness paradigm is that it empowers a wide variety of views regarding the independence of behavior to be considered conscious, while still recognizing important differences in the role played by genes, instinct or programming.

A third clarifying point concerns the use of “empathy” in the definition of consciousness. Similar to moral choice, empathy is crucial to a definition of consciousness because it tells us whether someone feels like us, as well as thinks like us (autonomy). For example, no matter how good a machine was at being an autonomous decision-maker (including moral decisions), and aware of its surroundings and of itself, most people would not admit it was conscious unless it truly seemed to understand and identify with other people’s feelings – which would require it to have feelings of its own. A mere ability to expertly arrive at moral judgments, without any affect in relation to any of those judgments, will not pass a consciousness litmus test with most people. To be humanly conscious one must not only know that genocide is wrong; one must also feel that genocide is horrific.

Empathy is a subset of sentience, which is the ability to have feelings and/or emotions. Hence, sentience is a necessary, but not sufficient, basis for consciousness. While every animal that feels pain is sentient, only those that identify and understand another being’s pain, at least to some extent, have a position on the Empathy axis of consciousness. Empathy also overlaps self-awareness, another necessary, but not sufficient, basis for consciousness. As shown in the chart below, the overlapping domains of self-awareness, sentience, empathy and autonomy define the continuum of consciousness.

[Definition of Consciousness Diagram: the overlapping domains of self-awareness, sentience, empathy and autonomy]
1 – Self-aware entities that lack feelings as well as autonomy, such as the DARPA car that drives itself but cannot decide to do anything else.
2 – Sentient entities that lack self-awareness as well as empathy, such as an arthropod (< 10M neurons).
3 – Autonomous entities that lack feelings, such as a suitably programmed robot without emotion routines.
4 – Empathetic entities that lack self-awareness, such as some pets.
5 – Conscious entities that are self-aware and sentient, and more specifically are relatively autonomous and empathetic, like people.
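For concreteness, the five numbered regions above can be expressed as a simple classification over the four capacities. In this Python sketch the boolean encoding is my own simplification of what the chart treats as a continuum:

from typing import Optional

def diagram_region(self_aware: bool, sentient: bool,
                   autonomous: bool, empathetic: bool) -> Optional[int]:
    # Region numbers correspond to the five examples listed above.
    if self_aware and sentient and autonomous and empathetic:
        return 5   # conscious entities, like people
    if self_aware and not sentient and not autonomous:
        return 1   # e.g., the DARPA car that drives itself
    if sentient and not self_aware and not empathetic:
        return 2   # e.g., an arthropod
    if autonomous and not sentient:
        return 3   # e.g., a robot without emotion routines
    if empathetic and not self_aware:
        return 4   # e.g., some pets
    return None    # some other point on the continuum

print(diagram_region(self_aware=True, sentient=False,
                     autonomous=False, empathetic=False))  # 1: the DARPA car
print(diagram_region(True, True, True, True))              # 5: a person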

Wednesday, April 8, 2009

WHAT IS MINDWARE?

Mindware is operating system software that (a) thinks and feels the way a human mind does, and (b) sets its thinking and feeling parameters to match those discernable from a mindfile. Mindware relies upon an underlying mindfile the way Microsoft Word relies upon a textfile. When appropriate parameters are set for mindware it becomes aware of itself and a cyberconscious entity is created.

The richness of the cyberconscious entity’s thoughts and feelings is a function of its source mindfile. In the extreme case of no mindfile, the mindware thinks and feels as little as a newborn baby. If the mindware’s parameters are set haphazardly, or shallowly, a severely dysfunctional cyberconsciousness will result. In the normal case, however, of mindware having access to a real person’s mindfile, the resultant cyberconsciousness will be a mindclone of that person. It will think and feel the same, have the same memories, and be differentiated only by its knowledge that it is a mindclone and by the different abilities of its substrate.

Is mindware achievable? Yes, because our human thoughts and emotions are patterns amongst symbols. These patterns can be the same whether the symbols are encoded in our brains or in our mindfiles. The patterns are so complex that today only certain threads are available as software. For example, software that figures out how to get from our house to a new restaurant is now common, but didn’t exist just a decade ago. Every year the range of symbol association achievable by software leaps forward. It is merely a matter of decades before symbol association software achieves the complexity of human thought and emotion.



The preceding paragraph makes two claims deserving of expanded attention: that our mental states are merely a matter of patterns amongst symbols, and that such patterns could be replicated in software. Let’s turn first to how the mind works, and then to its possible replication in mindware.

Consider what our psychology might be if it were not patterns among symbols. What else is there? One idea, associated with the Australian philosopher David Chalmers, is that there is some sort of metaphysical spirit that animates our thoughts and feelings. Another idea, propounded by the British mathematician Roger Penrose, is that our consciousness arises from quantum physical transitions deep within sub-microscopic intra-neural structures. In neither case, nor in many variants of each case, is it possible to disprove the claim, because each rests upon essentially invisible, non-testable phenomena. Similarly, it cannot be disproved, at this time, that consciousness arises from sufficiently complex patterns among symbols. For now, those patterns are too complex either to be sorted out in the brain or to be replicated in software.

You, the reader, will need either to take a position on the source of consciousness or to remain agnostic. I’m sure that many people will try to create consciousness by replicating in software the mental associations that are a hallmark of our thoughts and feelings. From there philosophers will argue over whether cyberconsciousness is real or not, just as they argue over the consciousness of cats and dogs. But just as most humans believe their pets think, plan and feel, most humans will see consciousness in mindware if it resembles closely enough the same consciousness they see in themselves.

Computer scientist Marvin Minsky is convinced that emotions are just another form of thought. He believes feelings, no less than ideas, are based upon complex associations amongst mental symbols, each of which is ultimately rooted in a bunch of phonetically or visually specific neurons. It is today impossible to prove him right or wrong. The question is whether we believe that entertainment companies and customer service providers will persistently pursue the creation of emotion-capable software products – interfaces with feelings. I am convinced that the answer is yes, because there will be a large market for human-like software. Whether software-based emotions are real emotions is a question for philosophers, akin to whether emotions arise from patterns of symbols, from metaphysical spirits or from quantum physical sub-atomic particle states. Everyday people will feel the software-based emotions are real if they seem as real as those of their relatives, neighbors and friends.

The second question to be answered is if thoughts and feelings are based upon complex patterns amongst symbols, how exactly can those patterns be discerned and replicated with mindware? Every symbol in our mind – such as the phoneme-specific neurons that, strung together, we learned very young sounded like “apple” – is linked to many other symbols. For “apple” those other symbols are its various images and generalized image, its taste, and its various appearances in our life (orchards, produce departments, pictures in books). Some of those associations are positive, some negative and some neutral. Each of these associations can be replicated in software, along with positive or negative values and probability weightings. Mindware is software that creates a cyberconscious version of you with the same associations, values and weightings to every symbol within your mindfile that you evidence having based upon your saved data. Where data is missing, mindware interpolates answers, makes reasoned guesses and imports general cultural information relevant to your socio-cultural niche. When the mindware converses with someone, the symbol apple will be triggered if it would have been triggered in your own mind, and it will enter into the discussion no differently than how it would have entered into your discussion. In this regard, the mindware gives rise to your mindclone.
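A toy sketch of the associational structure just described might look like the following. The symbols, valences and weightings are invented; a real mindfile analysis would of course involve vastly more of each:

from dataclasses import dataclass, field

@dataclass
class Association:
    target: str     # the associated symbol, e.g. "orchard"
    valence: float  # -1.0 (negative) through +1.0 (positive)
    weight: float   # probability weighting that the association is triggered

@dataclass
class Symbol:
    name: str
    associations: list = field(default_factory=list)

    def likely_to_surface(self, threshold=0.5):
        # Symbols probable enough to enter a conversation once this one fires.
        return [a.target for a in self.associations if a.weight >= threshold]

apple = Symbol("apple", [
    Association("orchard",            valence=+0.8, weight=0.6),
    Association("produce department", valence= 0.0, weight=0.7),
    Association("picture in a book",  valence=+0.3, weight=0.2),
])
print(apple.likely_to_surface())   # ['orchard', 'produce department']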

Mindware is a kind of operating system that can be saved into billions of unique states, or combinations of preferences, based upon the unique ways of thinking and feeling that are discernable from your mindfile. Dozens of personality types, traits and/or factors, and gradations amongst these, yield more unique combinations than there are living people. Similarly, dozens of alphabet letters and ways to arrange them can create more unique names than there are people on the planet.

For example, people can be of several different personality orientations – introvert, extrovert, aggressive, nurturing and so on. Most psychologists say there are just five basic flavors or “factors”, but others say there are more. Nevertheless, virtually all agree on some finite, relatively small number of basic psychological forms taken to greater or lesser degrees by human minds. Multiplying out these possibilities would provide mindware with a vast number of different personality frameworks from which to choose a best fit – based upon a rigorous analysis of the person’s mindfiles – for grafting mannerisms, recollections and feelings onto.

To be a little quantitative, imagine mindware adopts the currently popular view that there are five basic personality traits, each of which remains quite stable in one’s adult life: Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism. Literally thousands of English words have been associated with each of these five traits, but suppose for the sake of example we say that each person would be scored by mindware only from -100 to +100 on each of these traits (from an analysis of their mindfile), giving 201 possible scores per trait. Since each of T traits can independently take any of S scores, the number of unique personality frameworks available to mindware is S^T. For our example, S^T = 201^5 = 328,080,401,001. These 328 billion personality frameworks are enough to ensure personality uniqueness, which also means they are likely to ensure a very good fit for each person. The more sizes a pair of jeans comes in, the more likely it is that everyone will find a pair that fits them just right!
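The arithmetic is easy to verify; this snippet computes the number of personality frameworks under the assumptions above:

S = 201        # possible scores per trait: -100 through +100, inclusive
T = 5          # Openness, Conscientiousness, Extraversion,
               # Agreeableness, Neuroticism
print(S ** T)  # 328080401001, i.e. roughly 328 billion frameworks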

The point here is not that there are precisely five personality traits, or exactly 201 discernable degrees of possessing each such trait. Instead, what is being shown is that a relatively easy problem for mindware to solve can result in a practically unlimited number of individualized personality frameworks. Specifically, mapping the words, images and mannerisms from a lifetime mindfile into a matrix of personality trait buckets, each with an associated positive or negative strength, will produce more than enough unique personality templates to assure a very good fit to the original personality.
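A bare-bones sketch of that mapping step, with a made-up five-word lexicon standing in for the thousands of trait-associated English words mentioned above:

TRAIT_LEXICON = {  # word -> (trait bucket, signed strength); illustrative only
    "curious":  ("Openness",          +3),
    "punctual": ("Conscientiousness", +2),
    "party":    ("Extraversion",      +2),
    "quarrel":  ("Agreeableness",     -3),
    "worried":  ("Neuroticism",       +2),
}

def score_mindfile(words):
    # Accumulate signed strengths into trait buckets, then clamp each
    # bucket to the -100..+100 scoring range used above.
    scores = {trait: 0 for trait, _ in TRAIT_LEXICON.values()}
    for word in words:
        if word in TRAIT_LEXICON:
            trait, strength = TRAIT_LEXICON[word]
            scores[trait] += strength
    return {t: max(-100, min(100, s)) for t, s in scores.items()}

print(score_mindfile(["curious", "party", "party", "worried"]))
# {'Openness': 3, 'Conscientiousness': 0, 'Extraversion': 4,
#  'Agreeableness': 0, 'Neuroticism': 2}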


Mindware works like a police sketch artist, who is trained to know that there are a limited number of basic forms the human face can take. Based upon inputs from eyewitnesses (analogous to processing a mindfile), the artist first chooses a best-fit basic facial form, and then proceeds to graft unique details upon it. Often there is an iterative, back-and-forth process of sketching and erasing as additional details from eyewitnesses refine the initial choice of basic facial form. In the same way, mindware will be written to iteratively reevaluate its best-fit personality structure based upon additional details from continued analyses of a mindfile.

Mindware will have settings that determine the duration of its iterative process. After much iteration the mindware will determine that an asymptotic limit has been reached. It will do this by running thousands of “mock” conversations with tentative versions of a replicated mind, and comparing these with actual conversations or conversational fragments from an original’s mindfile. The iterative process will end once the mind it has replicated from the mindfiles it has been fed has reached what is called “Turing-equivalence” with the original mind. This means that the test established by the mid-20th-century computing pioneer Alan Turing has been satisfied. That test says that if it is not possible to tell whether a conversant is a computer or a person, then the computer is psychologically equivalent to a person. It would be as if the police sketch artist produced a drawing that was as good as a photograph.
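The stopping rule described above can be sketched as a runnable toy. Here the “conversations” are reduced to numeric feature vectors and every number is invented, but the shape of the loop is the same: refine, compare, and stop when the round-to-round gain flattens out:

def similarity(candidate, target):
    # 1.0 would mean mock conversations indistinguishable from the
    # mindfile's fragments; here, simply inverse distance between vectors.
    dist = sum((c - t) ** 2 for c, t in zip(candidate, target)) ** 0.5
    return 1.0 / (1.0 + dist)

def refine_until_asymptote(candidate, target, step=0.2, epsilon=1e-4):
    prev = similarity(candidate, target)
    for _ in range(10_000):                # cap on iterations
        # Nudge each setting a fraction of the way toward the target.
        candidate = [c + step * (t - c) for c, t in zip(candidate, target)]
        curr = similarity(candidate, target)
        if curr - prev < epsilon:          # gains have flattened: stop
            break
        prev = curr
    return candidate, curr

settings, score = refine_until_asymptote([0.0, 0.0, 0.0], [0.7, -0.3, 0.5])
print(round(score, 3))   # approaches 1.0: "Turing-equivalence" in this toy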

The rapid ferreting out of mindware settings from a mindfile has recently been made more feasible thanks to pattern recognition, voice recognition and video search software. It is now possible on Google Video to search videos by typing in desired words. Mindware will build upon this capability. It will analyze mindfile video for all words, phrases and indications of feeling. These will be placed into associational database arrays, best-matched to personality traits and strengths, and then used to best-fit a personality profile to the peculiar characteristics evidenced in the analyzed mindfile. Keep in mind we humans have just a half-dozen basic expressions, only a dozen or two emotions, a facial recognition limit in the low hundreds and an inability to remember more than 10% of what we heard or saw the previous day. Furthermore, the personality template that mindware puts together for us is blanketed with all of the factual specifics from our mindfile. While this is rocket science, it is rocket science that we can, and soon will, do. Mindware is a moon landing, and we did that in the sixties.

Just because we are unique does not mean that we cannot be replicated. An original essay can still be copied. Mindware is a kind of duplicating machine for the mind. Because the mind is vastly more complex and less accessible than a document, it is not something that can simply be optically scanned to replicate. Instead, to scan a mind one must scan and analyze the digital output of that mind – its mindfile – while iteratively generating a duplicate of that mind relying on associated databases of human universals and socio-cultural contexts. It does sound like an amazing piece of software, but no more amazing to us than would be our photocopying machines to Abraham Lincoln or our jumbo jets to the Wright Brothers. And software technology is advancing much more quickly today than machine technology was back then.

Operating system software with mindware’s number of settings commonly runs on laptop computers. The challenge is to write mindware so that it makes associations and interacts as the human brain does. This is not a challenge of possibility, but a challenge of practice, design and iterative improvement of approximations. Mindware is just really good software written for the purpose of replicating human thoughts and feelings.