
Friday, August 14, 2009

6. HOW CAN CONSCIOUSNESS BE CREATED IN SOFTWARE?

“Some men see things as they are and wonder why. Others dream things that never were and ask why not?” Robert F. Kennedy



There are thousands of software engineers across the globe working day and night to create cyberconsciousness. This is real intelligent design. Great financial rewards await the people who can make game avatars respond as curiously as people do. Even vaster wealth awaits the programming teams that create personal digital assistants with the conscientiousness, and hence consciousness, of a perfect slave.

How can we know that all of this hacking will produce consciousness? This takes us to what are known as the “hard problem” and the “easy problem” of consciousness. The “hard problem” is: how does the web of molecules we call neurons give rise to subjective feelings, or qualia (the “redness of red”)? The “easy problem” is: how can electrons racing along neurochemical pathways produce complex simulations of “concrete and mortar” (and flesh and blood) reality? Or, how do metaphysical thoughts arise from physical matter? Basically, both the hard and the easy problems of consciousness come down to this: how is it that brains give rise to thoughts (the ‘easy’ problem), especially about immeasurable things (the ‘hard’ problem), but other parts of bodies do not? If these hard and easy questions can be answered for brain waves running on molecules, then it remains only to ask whether the answers are different for software code running on integrated circuits.



At least since the time of Isaac Newton and Leibniz, it was felt that some things appreciated by the mind could be measured whereas others could not. The measurable thoughts, such as the size of a building, or the name of a friend, were imagined to take place in the brain via some exquisite micro-mechanical processes. Today we would draw analogies to a computer’s memory chips, processors and peripherals. Although this is what philosopher David Chalmers calls the “easy problem” of consciousness, we still need an actual explanation of exactly how one or more neurons save, cut, paste and recall any word, number, scent or image. In other words, how do neuromolecules catch and process bits of information?

Those things that cannot be measured are what Chalmers calls the “hard problem.” In his view, a being could be conscious, but not human, if they were only capable of the “easy” kind of consciousness. Such a being, called a zombie, would be robotic, without feelings, empathy or nuances. Since the non-zombie, non-robot characteristics are also purported to be immeasurable (e.g., the redness of red or the heartache of unrequited love), Chalmers cannot see even in principle how they could ever be processed by something physical, such as neurons. He suggests consciousness is a mystical phenomenon that can never be explained by science. If this is the case, then one could argue that it might attach just as well to software as to neurons – or that it might not – or that it might perfuse the air we breathe and the space between the stars. If consciousness is mystical, then anything is possible. As will be shown below, there is no need to go there. Perfectly mundane, empirical explanations are available to explain both the easy and the hard kinds of consciousness. These explanations work as well for neurons as they do for software.

As indicated in the following figure, Essentialists v. Materialists, there are three basic points of view regarding the source of consciousness. Essentialists believe in a mystical source specific to humans. This is basically the view that God gave Man consciousness. Materialists believe in an empirical source (pattern-association complexity) that exists in humans and can exist in non-humans. A third point of view is that consciousness can mystically attach to anything. While mystical explanations cannot be disproved, they are unnecessary, because there is a perfectly reasonable Materialist explanation of both the easy and hard kinds of consciousness.

[Figure: MATERIALISM vs. ESSENTIALISM – Essentialists attribute consciousness to a mystical, human-specific source; Materialists attribute it to pattern-association complexity that can exist in non-humans; a third view holds that consciousness can mystically attach to anything.]
If human consciousness is to arise in software we must do three things: first explain how the easy problem is solved in neurons; second, explain how the hard problem is solved in neurons; and third, explain how the solution in neurons is replicable in information technology. The key to all three explanations is the relational database concept. With the relational database an inquiry (or a sensory input for the brain) triggers a number of related responses. Each of these responses is, in turn, a stimulus for a further number of related responses. An output response is triggered when the strength of a stimulus, such as the number of times it was triggered, is greater than a set threshold.
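The relational-database idea above can be sketched in a few lines of code: an input triggers its related responses, strengths accumulate, and an output fires once a threshold is crossed. The associations and weights below are hypothetical illustrations, not a model of any real brain or database.

```python
from collections import defaultdict

# Hypothetical associations: each stimulus links to related
# responses, each with a connection strength.
ASSOCIATIONS = {
    "red wavelength": [("apple", 0.6), ("stop sign", 0.5)],
    "round shape":    [("apple", 0.5), ("ball", 0.7)],
    "sweet scent":    [("apple", 0.4)],
}

def respond(stimuli, threshold=1.0):
    """Accumulate strength from every triggered association and
    return the responses whose total strength exceeds the threshold."""
    strength = defaultdict(float)
    for stimulus in stimuli:
        for response, weight in ASSOCIATIONS.get(stimulus, []):
            strength[response] += weight
    return {r for r, s in strength.items() if s > threshold}

# Three sensory cues together push "apple" over the threshold,
# while no single cue does so on its own.
print(respond(["red wavelength", "round shape", "sweet scent"]))  # {'apple'}
print(respond(["sweet scent"]))  # set()
```

The point of the sketch is that no single entry "is" the apple; the output emerges from the combined strength of several associations, which is the mechanism the rest of this section builds on.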



For example, there are certain neurons hard-wired by our DNA to be sensitive to different wavelengths of light, and other neurons are sensitive to different phonemes. So, suppose when looking at something red, we are repeatedly told “that is red.” The red-sensitive neuron becomes paired with, among other neurons, the neurons that are sensitive to the different phonemes that make up the sounds “that is red.” Over time, we learn that there are many shades of red, and our neurons responsible for these varying wavelengths each become associated with words and objects that reflect the different “rednesses” of red. Hence, the redness of red is not only the immediate physical impression upon neurons tuned to wavelengths we commonly refer to as "red", but is also (1) each person’s unique set of connections between the genetically hard-wired neurons running from the retina and the various wavelengths we associate with different reds, and (2) the plethora of further synaptic connections between those hard-wired neurons and neural patterns that include things that are red. If the only red thing a person ever saw was an apple, then redness to them means the red wavelength neuron output that is part of the set of neural connections associated in their mind with an apple. Redness is not an electrical signal in our mind per se; it is the association of color wavelength signals with a referent in the real world. Redness is part of the gestalt impression obtained in a second or less from the immense pattern of neural connections we have built up about red things.



After a few front lines of sensory neurons, everything else is represented in our minds as a pattern of neural connections. It is as if the sensory neurons are our alphabet. These are associated (via synapses) in a vast number of ways to form mental images of objects and actions, just like letters can be arranged into a dictionary full of words. The mental images can be strung together (many more synaptic connections) into any number of coherent (even when dreaming) sequences to form worldviews, emotions, personalities, and guides to behavior. This is just like words can be grouped into a limitless number of coherent sentences, paragraphs and chapters. Grammar for words is like the as yet poorly understood electro-chemical properties of the brain that enable strengthening or weakening of waves of synaptic connections that support attentiveness, mental continuity and characteristic thought patterns. Continuing the analogy, the self, our consciousness, is the entire book of our autonomous and empathetic lives, written with that idiosyncratic style that is unique to us. It is a book full of chapters of life-phases, paragraphs of things we’ve done and sentences reflecting streams of thought.



Neurons save, cut, paste and recall any word, number, scent, image, sensation or feeling no differently for the so-called hard than for the so-called easy problems of consciousness. Let’s take as our example the “hard” problem of love, what Ray Kurzweil calls the “ultimate form of intelligence.” Robert Heinlein defines it as the feeling that another’s happiness is essential to your own.

Neurons save the subject of someone’s love as a collection of outputs from hard-wired sensory neurons tuned to the subject’s shapes, colors, scents, phonetics and/or textures. These outputs come from the front-line neurons that emit a signal only when they receive a signal of a particular contour, light-wave, pheromone, sound wave or tactile sensation. The set of outputs that describes the subject of our love is a stable thought – once so established with some units of neurochemical strength, any one of the triggering sensory neurons can harken from our mind the other triggering neurons.

Neurons paste thoughts together with matrices of synaptic connections. The constellation of sensory neuron outputs that is the thought of the subject of our love is, itself, connected to a vast array of additional thoughts (each grounded directly or, via other thoughts, indirectly, to sensory neurons). Those other thoughts would include the many cues that lead us to love someone or something. These may be resemblance in appearance or behavior to some previously favored person or thing, logical connection to some preferred entity, or some subtle pattern that matches extraordinarily well (including in counterpoint, syncopation or other form of complementarities) with the patterns of things we like in life. As we spend more time with the subject of our love, we further strengthen sensory connections with additional and strengthened synaptic connections such as those connected with eroticism, mutuality, endorphins and adrenaline.

There is no neuron with our lover’s face on it. There is instead a vast number of neurons that, as a stable set of connections, represent our lover. The connections are stable because they are important to us. When things are important to us, we concentrate on them, and as we do, the brain increases the neurochemical strengths of their neural connections. Many things are unimportant to us, or become so. For these things the neurochemical linkages become weaker and finally the thought dissipates like an abandoned spider web. Neurons cut unused and unimportant thoughts by weakening the neurochemical strengths of their connections. Often a vestigial connection is retained, capable of being triggered by a concentrated retracing of its path of creation, starting with the sensory neurons that anchor it.
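The strengthen-with-attention, weaken-with-neglect dynamic described above can be caricatured in code. This is a toy illustration, not a claim about real neurochemistry: the multipliers and the vestigial floor are invented numbers.

```python
class Connection:
    """A single associative link with an adjustable strength."""

    def __init__(self, strength=1.0):
        self.strength = strength

    def attend(self):
        # Concentration strengthens the link.
        self.strength *= 1.5

    def neglect(self):
        # Disuse weakens it, but a faint vestige is retained,
        # capable of being re-triggered later.
        self.strength = max(self.strength * 0.5, 0.05)

lover = Connection()   # rehearsed constantly
trivia = Connection()  # never revisited

for _ in range(5):
    lover.attend()
    trivia.neglect()

print(round(lover.strength, 2))   # grows well above its starting value
print(round(trivia.strength, 2))  # decays to the vestigial floor
```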

What the discussion above shows is that consciousness can be readily explained as a set of connections among sensory neuron outputs, and links between such connections and sequences of higher-order connections. With each neuron able to make as many as 10,000 connections, and with 100 billion neurons, there is ample possibility for each person to have subjective experiences through idiosyncratic patterns of connectivity. The “hard problem” of consciousness is not so hard. Subjectivity is simply each person’s unique way of connecting the higher-order neuron patterns that come after the sensory neurons. The “easy problem” of consciousness is solved in the recognition of sensory neurons as empirical scaffolding upon which can be built a skyscraper’s worth of thoughts. If it can be accepted that sensory neurons can as a group define a higher-order concept, and that such higher-order concepts can as a group define yet higher-order concepts, then the “easy problem” of consciousness is solved. Material neurons can hold non-material thoughts because the neurons are linked members of a cognitive code. It is the meta-material pattern of the neural connections, not the neurons themselves, that contains non-material thoughts.




Lastly, there is the question of whether there is something essential about the way neurons form into content-bearing patterns, or whether the same feat could be accomplished with software. The strengths of neuronal couplings can be replicated with weighted strengths for software couplings in relational databases. The connectivity of one neuron to up to 10,000 other neurons can be replicated by linking one software input to up to 10,000 software outputs. The ability of neuronal patterns to maintain themselves in waves of constancy, such as in personality or concentration, could equally well be accomplished with software programs that kept certain software groupings active. Finally, a software system can be provided with every kind of sensory input (audio, video, scent, taste, tactile). Putting it all together, Daniel Dennett observes:

“If the self is ‘just’ the Center of Narrative Gravity, and if all the phenomena of human consciousness are explicable as ‘just’ the activities of a virtual machine realized in the astronomically adjustable connections of a human brain, then, in principle, a suitably ‘programmed’ robot, with a silicon-based computer brain, would be conscious, would have a self. More aptly, there would be a conscious self whose body was the robot and whose brain was the computer.”

At least for a Materialist, there seems to be nothing essential to neurons, in terms of creating consciousness, which could not be achieved as well with software. The quotation marks around ‘just’ in the quote from Dennett are the famous philosopher’s facetious smile. He is saying with each ‘just’ that there is nothing to belittle about such a great feat of connectivity and patterning.

Wednesday, April 8, 2009

WHAT IS MINDWARE?

Mindware is operating system software that (a) thinks and feels the way a human mind does, and (b) sets its thinking and feeling parameters to match those discernable from a mindfile. Mindware relies upon an underlying mindfile the way Microsoft Word relies upon a textfile. When appropriate parameters are set for mindware it becomes aware of itself and a cyberconscious entity is created.

The richness of the cyberconscious entity’s thoughts and feelings is a function of its source mindfile. In the extreme case of no mindfile, the mindware thinks and feels as little as a newborn baby. If the mindware’s parameters are set haphazardly, or shallowly, a severely dysfunctional cyberconsciousness will result. In the normal case, however, of mindware having access to a real person’s mindfile, the resultant cyberconsciousness will be a mindclone of that person. It will think and feel the same, have the same memories, and be differentiated only by its knowledge that it is a mindclone and by the different abilities of its substrate.

Is mindware achievable? Yes, because our human thoughts and emotions are patterns amongst symbols. These patterns can be the same whether the symbols are encoded in our brains or in our mindfiles. The patterns are so complex that today only certain threads are available as software. For example, software that thinks how to get from our house to a new restaurant is now common, but didn’t exist just a decade ago. Every year the range of symbol association achievable by software leaps forward. It is merely a matter of decades before symbol association software achieves the complexity of human thought and emotion.



The preceding paragraph makes two claims deserving of expanded attention: that our mental states are merely a matter of patterns amongst symbols, and that such patterns could be replicated in software. Let’s turn first to how the mind works, and then to its possible replication in mindware.

Consider what our psychology might be if it were not patterns among symbols. What else is there? One idea, associated with the Australian philosopher David Chalmers, is that there is some sort of metaphysical spirit that animates our thoughts and feelings. Another idea, propounded by the British mathematician Roger Penrose, is that our consciousness arises from quantum physical transitions deep within sub-microscopic intra-neural structures. In neither case, nor in many variants of each, is it possible to disprove the claim, because they are based upon essentially invisible, non-testable phenomena. Similarly, it cannot be disproved, at this time, that consciousness arises from sufficiently complex patterns among symbols. For now, these patterns are too complex either to be fully sorted out in the brain or to be replicated in software.

You, the reader, need to take a position or to remain agnostic on the source of consciousness. I’m sure that many people will try to create consciousness by replicating in software the mental associations that are a hallmark of our thoughts and feelings. From there philosophers will argue over whether cyberconsciousness is real or not, as they argue over the consciousness of cats and dogs. But just as most humans believe their pets think, plan and feel, most humans will see consciousness in mindware if it resembles closely enough the same consciousness they see in themselves.

Computer scientist Marvin Minsky is convinced that emotions are just another form of thought. He believes feelings, no less than ideas, are based upon complex associations amongst mental symbols, each of which is ultimately rooted in a bunch of phonetically or visually specific neurons. It is today impossible to prove him right or wrong. The question is whether entertainment companies and customer service providers will persistently pursue the creation of emotion-capable software products – interfaces with feelings. I am convinced that the answer is yes, because there will be a large market for human-like software. Whether software-based emotions are real emotions is a question for philosophers, such as whether emotions arise from patterns of symbols or from metaphysical spirits or from quantum physical sub-atomic particle states. Everyday people will feel that software-based emotions are real if they seem as real as those of their relatives, neighbors and friends.

The second question to be answered is if thoughts and feelings are based upon complex patterns amongst symbols, how exactly can those patterns be discerned and replicated with mindware? Every symbol in our mind – such as the phoneme-specific neurons that, strung together, we learned very young sounded like “apple” – is linked to many other symbols. For “apple” those other symbols are its various images and generalized image, its taste, and its various appearances in our life (orchards, produce departments, pictures in books). Some of those associations are positive, some negative and some neutral. Each of these associations can be replicated in software, along with positive or negative values and probability weightings. Mindware is software that creates a cyberconscious version of you with the same associations, values and weightings to every symbol within your mindfile that you evidence having based upon your saved data. Where data is missing, mindware interpolates answers, makes reasoned guesses and imports general cultural information relevant to your socio-cultural niche. When the mindware converses with someone, the symbol apple will be triggered if it would have been triggered in your own mind, and it will enter into the discussion no differently than how it would have entered into your discussion. In this regard, the mindware gives rise to your mindclone.
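The "apple" example above suggests a simple data structure: each symbol links to other symbols, each link carrying a signed value (positive, negative or neutral) and a probability weighting. The entries and cutoff below are invented for illustration; in the scheme described here, a real mindfile would supply them.

```python
# Hypothetical association table for the symbol "apple":
# associated symbol -> (valence, probability of being triggered)
apple = {
    "orchard":            (+0.8, 0.3),
    "taste of apple":     (+0.9, 0.7),
    "produce department": ( 0.0, 0.5),
    "worm in apple":      (-0.6, 0.1),
}

def likely_associations(symbol_links, cutoff=0.4):
    """Return the associations probable enough to surface in
    conversation, strongest first."""
    return sorted(
        (name for name, (_val, prob) in symbol_links.items() if prob >= cutoff),
        key=lambda name: -symbol_links[name][1],
    )

print(likely_associations(apple))  # ['taste of apple', 'produce department']
```

In this sketch, "apple" enters a conversation through whichever of its high-probability associations the discussion touches, which is the behavior the paragraph attributes to a mindclone.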

Mindware is a kind of operating system that can be saved into billions of unique states, or combinations of preferences, based upon the unique ways of thinking and feeling that are discernable from your mindfile. Dozens of personality types, traits and/or factors, and gradations amongst these, yield more unique combinations than there are living people. Similarly, dozens of alphabet letters and ways to arrange them can create more unique names than there are people on the planet.

For example, people can be of several different personality orientations – introvert, extrovert, aggressive, nurturing and so on. Most psychologists say there are just five basic flavors or “factors”, but others say there are more. Nevertheless, virtually all agree on some finite, relatively small number of basic psychological forms taken to greater or lesser degrees by human minds. Multiplying out these possibilities would provide mindware with a vast number of different personality frameworks from which to choose a best fit – based upon a rigorous analysis of the person’s mindfiles – for grafting mannerisms, recollections and feelings onto.

To be a little quantitative, imagine mindware adopts the currently popular view that there are five basic personality traits, each of which remains quite stable in one’s adult life: Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism. Literally thousands of English words have been associated with each of these five traits, but suppose for the sake of example we say that each person would be scored by mindware only from -100 to +100 on each of these traits (from an analysis of their mindfile). For example, an individual who was definitely prone to impulsive decisions, but no more than average among millions of analyzed mindfiles, might be assigned a Neuroticism score of +50. The formula for the number of unique personality frameworks available to mindware – one of S possible scores chosen independently for each of T traits – is S^T, where S = the number of possible personality trait scores and T = the number of possible personality traits. For our example, S^T = 201^5 = 328,080,401,001. These 328 billion personality frameworks are enough to ensure personality uniqueness, which also means they are likely to ensure a very good fit for each person. The more sizes a pair of jeans comes in, the more likely it is that everyone will find a pair that fits them just right!
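The arithmetic in the paragraph above is easy to check directly: with S possible scores per trait and T traits, there are S to the power T distinct frameworks.

```python
S = 201  # integer scores from -100 to +100, inclusive
T = 5    # Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism

frameworks = S ** T
print(frameworks)  # 328080401001, i.e. about 328 billion
```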

The point here is not that there are precisely five personality traits, or exactly 201 discernable degrees of possessing each such trait. Instead, what is being shown is that a relatively easy problem for mindware to solve can result in a practically unlimited number of individualized personality frameworks. Specifically, mapping the words, images and mannerisms from a lifetime mindfile into a matrix of personality trait buckets and associated positive or negative strengths for each such bucket will produce more than enough unique personality templates to assure a very good fit to the original personality.


Mindware works like a police sketch artist. A sketch artist is trained to know that there are a limited number of basic forms the human face can take. Based upon inputs from eyewitnesses (analogous to processing a mindfile), the artist first chooses a best-fit basic facial form, and then proceeds to graft unique details onto it. Often there is an iterative, back-and-forth process of sketching and erasing as additional details from eyewitnesses refine the initial choice of basic facial form. In the same way, mindware will be written to iteratively reevaluate its best-fit personality structure based upon additional details from continued analyses of a mindfile.

Mindware will have settings that govern the duration of its iterative process. After many iterations the mindware will determine that an asymptotic limit has been reached. It will do this by running thousands of “mock” conversations with tentative versions of a replicated mind, and comparing these with actual conversations or conversational fragments from the original’s mindfile. The iterative process will end once the mind replicated from the mindfiles has reached what might be called “Turing-equivalence” with the original mind. This means that the test proposed by the mid-20th-century computing pioneer Alan Turing has been satisfied. That test says that if it is not possible to tell whether a conversant is a computer or a person, then the computer is psychologically equivalent to a person. It would be as if the police sketch artist produced a drawing that was as good as a photograph.
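The stopping rule described above – iterate until the match with the mindfile stops improving – can be sketched schematically. Everything here is illustrative: the scoring function is a stand-in for the real comparison of mock conversations against mindfile fragments, and the numbers are arbitrary.

```python
def refine(score_fn, max_rounds=100, epsilon=1e-3):
    """Refit the replica each round, scoring its match against the
    original's conversational record, and stop once the score's
    improvement falls below epsilon (the asymptotic limit)."""
    best = 0.0
    for round_no in range(1, max_rounds + 1):
        score = score_fn(round_no)
        if score - best < epsilon:  # no meaningful improvement left
            return round_no, best
        best = score
    return max_rounds, best

# Stand-in scorer: match quality rises quickly, then plateaus.
rounds, score = refine(lambda n: 1.0 - 0.5 ** n)
print(rounds, round(score, 3))  # stops after 10 rounds at ~0.998
```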

The rapid ferreting out of mindware settings from a mindfile has recently been made more feasible thanks to pattern recognition, voice recognition and video search software. It is now possible on Google Video to search videos by typing in desired words. Mindware will build upon this capability. It will analyze mindfile video for all words, phrases and indications of feeling. These will be placed into associational database arrays, best-matched to personality traits and strengths, and then used to best-fit a personality profile to the peculiar characteristics evidenced in the analyzed mindfile. Keep in mind we humans have just a half-dozen basic expressions, only a dozen or two emotions, a facial recognition limit in the low hundreds and an inability to remember more than 10% of what we heard or saw the previous day. Furthermore, the personality template that mindware puts together for us is blanketed with all of the factual specifics from our mindfile. While this is rocket science, it is rocket science that we can, and soon will, do. Mindware is a moon landing, and we did that in the sixties.

Just because we are unique does not mean that we cannot be replicated. An original essay can still be copied. Mindware is a kind of duplicating machine for the mind. Because the mind is vastly more complex and less accessible than a document it is not something that can simply be optically scanned to replicate. Instead, to scan a mind one must scan and analyze the digital output of that mind – its mindfile – while iteratively generating a duplicate of that mind relying on associated databases of human universals and socio-cultural contexts. It does sound like an amazing piece of software, but no more amazing to us than would be our photocopying machines to Abraham Lincoln or our jumbo jets to the Wright Brothers. And software technology is advancing much more quickly today than machine technology was back then.

Operating system software with mindware’s number of settings commonly runs on laptop computers. The challenge is to write mindware so that it makes associations and interacts as the human brain does. This is not a challenge of possibility, but a challenge of practice, design and iterative improvement of approximations. Mindware is just really good software written for the purpose of replicating human thoughts and feelings.