Friday, August 14, 2009

6. HOW CAN CONSCIOUSNESS BE CREATED IN SOFTWARE?

“Some men see things as they are and wonder why. Others dream things that never were and ask why not?” – Robert F. Kennedy



There are thousands of software engineers across the globe working day and night to create cyberconsciousness. This is real intelligent design. There are great financial rewards available to the people who can make game avatars respond as curiously as people. Even vaster wealth awaits the programming teams that create personal digital assistants with the conscientiousness, and hence consciousness, of a perfect slave.

How can we know that all of this hacking will produce consciousness? This takes us to what are known as the “hard problem” and the “easy problem” of consciousness. The “hard problem” is: how does the web of molecules we call neurons give rise to subjective feelings, or qualia (the “redness” of red)? The alternative “easy problem” of consciousness is: how can electrons racing along neurochemistry result in complex simulations of “concrete and mortar” (and flesh and blood) reality? Or how do metaphysical thoughts arise from physical matter? Basically, both the hard and the easy problems of consciousness come down to this: how is it that brains give rise to thoughts (the ‘easy’ problem), especially about immeasurable things (the ‘hard’ problem), but other parts of bodies do not? If these hard and easy questions can be answered for brain waves running on molecules, then it remains only to ask whether the answers are different for software code running on integrated circuits.



At least since the time of Isaac Newton and Leibniz, it has been felt that some things appreciated by the mind could be measured whereas others could not. The measurable thoughts, such as the size of a building or the name of a friend, were imagined to take place in the brain via some exquisite micro-mechanical process. Today we would draw analogies to a computer’s memory chips, processors and peripherals. Although this is what philosopher David Chalmers calls the “easy problem” of consciousness, we still need an actual explanation of exactly how one or more neurons save, cut, paste and recall any word, number, scent or image. In other words, how do neuromolecules catch and process bits of information?

Those things that cannot be measured are what Chalmers calls the “hard problem.” In his view, a being could be conscious, but not human, if it were capable only of the “easy” kind of consciousness. Such a being, called a zombie, would be robotic, without feelings, empathy or nuance. Since the non-zombie, non-robot characteristics are also purported to be immeasurable (e.g., the redness of red or the heartache of unrequited love), Chalmers cannot see, even in principle, how they could ever be processed by something physical, such as neurons. He suggests consciousness is a mystical phenomenon that can never be explained by science. If this is the case, then one could argue that it might attach just as well to software as to neurons – or that it might not – or that it might perfuse the air we breathe and the space between the stars. If consciousness is mystical, then anything is possible. As will be shown below, there is no need to go there. Perfectly mundane, empirical explanations are available for both the easy and the hard kinds of consciousness. These explanations work as well for neurons as they do for software.

As indicated in the following figure, Materialism vs. Essentialism, there are three basic points of view regarding the source of consciousness. Essentialists believe in a mystical source specific to humans. This is basically the view that God gave Man consciousness. Materialists believe in an empirical source (pattern-association complexity) that exists in humans and can exist in non-humans. A third point of view is that consciousness can mystically attach to anything. While mystical explanations cannot be disproved, they are unnecessary, because there is a perfectly reasonable Materialist explanation for both the easy and hard kinds of consciousness.

MATERIALISM vs. ESSENTIALISM
If human consciousness is to arise in software, we must do three things: first, explain how the easy problem is solved in neurons; second, explain how the hard problem is solved in neurons; and third, explain how the solutions in neurons are replicable in information technology. The key to all three explanations is the relational database concept. With a relational database, an inquiry (or, for the brain, a sensory input) triggers a number of related responses. Each of these responses is, in turn, a stimulus for a further number of related responses. An output response is triggered when the strength of a stimulus, such as the number of times it is triggered, exceeds a set threshold.
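The trigger-and-threshold behavior just described can be sketched as a toy in Python. The association table, stimulus names and threshold value below are illustrative assumptions for the example, not anything the brain literally stores:

```python
# Toy relational "database": each stimulus triggers a set of related
# responses, and a response fires only when enough stimuli converge on
# it to push its accumulated strength past a set threshold.

ASSOCIATIONS = {
    "sees red":         ["apple", "stop sign"],
    "sees round shape": ["apple", "ball"],
    "smells sweet":     ["apple", "rose"],
}

def respond(stimuli, threshold=2):
    """Fire every response whose accumulated strength meets the threshold."""
    strength = {}
    for stimulus in stimuli:
        for related in ASSOCIATIONS.get(stimulus, []):
            strength[related] = strength.get(related, 0) + 1
    return {r for r, s in strength.items() if s >= threshold}
```

Here “apple” fires when three independent cues converge on it, while “ball” and “rose” each receive only one unit of stimulus and stay below threshold; each fired response could in turn be fed back in as a new stimulus, giving the chained triggering described above.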



For example, there are certain neurons hard-wired by our DNA to be sensitive to different wavelengths of light, and other neurons are sensitive to different phonemes. So, suppose when looking at something red, we are repeatedly told “that is red.” The red-sensitive neuron becomes paired with, among other neurons, the neurons that are sensitive to the phonemes that make up the sounds “that is red.” Over time, we learn that there are many shades of red, and our neurons responsible for these varying wavelengths each become associated with words and objects that reflect the different “rednesses” of red. Hence, the redness of red is not only the immediate physical impression upon neurons tuned to wavelengths we commonly refer to as “red,” but is also (1) each person’s unique set of connections between neurons hard-wired genetically from the retina to the various wavelengths we associate with different reds, and (2) the plethora of further synaptic connections we have between those hard-wired neurons and neural patterns that include things that are red. If the only red thing a person ever saw was an apple, then redness to them means the red wavelength neuron output that is part of the set of neural connections associated in their mind with an apple. Redness is not an electrical signal in our mind per se, but the association of color wavelength signals with a referent in the real world. Redness is part of the gestalt impression obtained in a second or less from the immense pattern of neural connections we have built up about red things.
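The repeated pairing of a wavelength-sensitive neuron with phoneme-sensitive neurons can be sketched as a Hebbian-style toy, where links between co-firing units are strengthened. The neuron labels and the strength floor are invented for illustration:

```python
from collections import defaultdict

# Hebbian-style pairing sketch: "neurons" that fire together have their
# connection strengthened; strong enough links let one evoke the other.

connection = defaultdict(int)           # (neuron_a, neuron_b) -> strength

def co_fire(a, b):
    """Two neurons fire together; strengthen their link in both directions."""
    connection[(a, b)] += 1
    connection[(b, a)] += 1

# Being shown something red while hearing "that is red", over and over,
# pairs the wavelength-tuned neuron with the phoneme-tuned neurons.
for _ in range(5):
    co_fire("wavelength~700nm", "phonemes:'red'")

def associates(neuron, min_strength=3):
    """Everything this neuron now evokes above a given strength."""
    return {b for (a, b), s in connection.items()
            if a == neuron and s >= min_strength}
```

After enough repetitions, seeing the wavelength evokes the word and hearing the word evokes the wavelength, in both directions, which is the pairing the paragraph above describes.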



After a few front lines of sensory neurons, everything else is represented in our minds as a pattern of neural connections. It is as if the sensory neurons are our alphabet. These are associated (via synapses) in a vast number of ways to form mental images of objects and actions, just as letters can be arranged into a dictionary full of words. The mental images can be strung together (many more synaptic connections) into any number of coherent (even when dreaming) sequences to form worldviews, emotions, personalities and guides to behavior, just as words can be grouped into a limitless number of coherent sentences, paragraphs and chapters. Grammar is to words what the as-yet poorly understood electro-chemical properties of the brain are to thought: the means of strengthening or weakening waves of synaptic connections that support attentiveness, mental continuity and characteristic thought patterns. Continuing the analogy, the self, our consciousness, is the entire book of our autonomous and empathetic lives, written in that idiosyncratic style that is unique to us. It is a book full of chapters of life-phases, paragraphs of things we’ve done and sentences reflecting streams of thought.



Neurons save, cut, paste and recall any word, number, scent, image, sensation or feeling no differently for the so-called hard than for the so-called easy problems of consciousness. Let’s take as our example the “hard” problem of love, what Ray Kurzweil calls the “ultimate form of intelligence.” Robert Heinlein defines it as the feeling that another’s happiness is essential to your own.

Neurons save the subject of someone’s love as a collection of outputs from hard-wired sensory neurons tuned to the subject’s shapes, colors, scents, phonetics and/or textures. These outputs come from the front-line neurons that emit a signal only when they receive a signal of a particular contour, light-wave, pheromone, sound wave or tactile sensation. The set of outputs that describes the subject of our love is a stable thought – once so established with some units of neurochemical strength, any one of the triggering sensory neurons can summon from our mind the other triggering neurons.
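This “any one member recalls the rest” behavior is the essence of associative memory, and can be sketched in a few lines. The stored sensory outputs are, of course, invented for illustration:

```python
# A "stable thought" as a set of co-triggered sensory outputs: once
# stored, any single member cue can recall the whole constellation.

MEMORY = []                             # stored thoughts (frozensets)

def store(thought):
    """Establish a stable thought as an immutable set of sensory outputs."""
    MEMORY.append(frozenset(thought))

def recall(cue):
    """Return every stored thought that the cue is a member of."""
    return [t for t in MEMORY if cue in t]

store({"her silhouette", "her perfume", "her voice", "her name"})
```

A single cue, a voice or a scent, returns the full constellation, just as one sensory trigger summons the rest of the thought in the paragraph above.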

Neurons paste thoughts together with matrices of synaptic connections. The constellation of sensory neuron outputs that is the thought of the subject of our love is, itself, connected to a vast array of additional thoughts (each grounded directly or, via other thoughts, indirectly, to sensory neurons). Those other thoughts would include the many cues that lead us to love someone or something. These may be resemblance in appearance or behavior to some previously favored person or thing, logical connection to some preferred entity, or some subtle pattern that matches extraordinarily well (including in counterpoint, syncopation or other forms of complementarity) with the patterns of things we like in life. As we spend more time with the subject of our love, we further reinforce those sensory connections with additional and strengthened synaptic connections, such as those associated with eroticism, mutuality, endorphins and adrenaline.

There is no neuron with our lover’s face on it. There are instead a vast number of neurons that, as a stable set of connections, represent our lover. The connections are stable because they are important to us. When things are important to us, we concentrate on them, and as we do, the brain increases the neurochemical strengths of their neural connections. Many things are unimportant to us, or become so. For these things the neurochemical linkages become weaker and finally the thought dissipates like an abandoned spider web. Neurons cut unused and unimportant thoughts by weakening the neurochemical strengths of their connections. Often a vestigial connection is retained, capable of being triggered by a concentrated retracing of its path of creation, starting with the sensory neurons that anchor it.
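The strengthen-with-attention, weaken-with-neglect dynamic above can be sketched as a toy model. The boost amount, decay rate and vestige floor are illustrative numbers, not measured neurochemistry:

```python
# Connection strength grows when attended to and decays with neglect;
# below a floor, only a faint vestigial trace remains, capable of being
# re-strengthened later by retracing its path of creation.

class Connection:
    def __init__(self, strength=2.0):
        self.strength = strength

    def attend(self, boost=1.0):
        """Concentration increases the neurochemical strength of the link."""
        self.strength += boost

    def neglect(self, rate=0.5, vestige=0.1):
        """Disuse weakens the link, but a vestigial trace is retained."""
        self.strength = max(self.strength * rate, vestige)

lover = Connection()
old_phone_number = Connection()

for _ in range(5):
    lover.attend()              # dwelt on daily: the link keeps growing
    old_phone_number.neglect()  # never revisited: it fades to a vestige
```

After five rounds the attended connection has grown steadily while the neglected one has collapsed to its vestigial floor, the abandoned spider web of the paragraph above.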

What the discussion above shows is that consciousness can be readily explained as a set of connections among sensory neuron outputs, and links between such connections and sequences of higher-order connections. With each neuron able to make as many as 10,000 connections, and with 100 billion neurons, there is ample possibility for each person to have subjective experiences through idiosyncratic patterns of connectivity. The “hard problem” of consciousness is not so hard. Subjectivity is simply each person’s unique way of connecting the higher-order neuron patterns that come after the sensory neurons. The “easy problem” of consciousness is solved in the recognition of sensory neurons as empirical scaffolding upon which can be built a skyscraper’s worth of thoughts. If it can be accepted that sensory neurons can as a group define a higher-order concept, and that such higher-order concepts can as a group define yet higher-order concepts, then the “easy problem” of consciousness is solved. Material neurons can hold non-material thoughts because the neurons are linked members of a cognitive code. It is the meta-material pattern of the neural connections, not the neurons themselves, that contains non-material thoughts.
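The idea of higher-order concepts defined as groups of lower-order concepts, bottoming out in sensory outputs, can be made concrete with a small recursive sketch. The concept names and groupings are assumptions for the example:

```python
# A "cognitive code" sketch: concepts with no parts are sensory-level
# scaffolding; every other concept is defined as a group of concepts.

CONCEPTS = {
    # sensory level (the empirical scaffolding)
    "red wavelength": set(), "curved edge": set(), "sweet smell": set(),
    "autumn air": set(),
    # first-order concept built from sensory outputs
    "apple": {"red wavelength", "curved edge", "sweet smell"},
    # higher-order concept built from other concepts
    "orchard visit": {"apple", "autumn air"},
}

def grounding(concept):
    """Unfold a concept down to the sensory outputs that anchor it."""
    parts = CONCEPTS.get(concept, set())
    if not parts:                       # sensory level: grounds itself
        return {concept}
    anchored = set()
    for part in parts:
        anchored |= grounding(part)
    return anchored
```

However high a concept sits in the hierarchy, unfolding it always terminates in sensory outputs, which is why the pattern of connections, rather than any single neuron, carries the non-material thought.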




Lastly, there is the question of whether there is something essential about the way neurons form into content-bearing patterns, or whether the same feat could be accomplished with software. The strengths of neuronal couplings can be replicated with weighted strengths for software couplings in relational databases. The connectivity of one neuron to up to 10,000 other neurons can be replicated by linking one software input to up to 10,000 software outputs. The ability of neuronal patterns to maintain themselves in waves of constancy, such as in personality or concentration, could equally well be accomplished with software programs that kept certain software groupings active. Finally, a software system can be provided with every kind of sensory input (audio, video, scent, taste, tactile). Putting it all together, Daniel Dennett observes:
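The first two claims, weighted couplings and a fan-out of up to 10,000 links per unit, are trivially achievable in software. A minimal sketch, with randomly chosen weights and an arbitrary activation threshold standing in for real coupling strengths:

```python
import random

# One software "input" coupled to many downstream units with weighted
# strengths, echoing a neuron's fan-out of up to 10,000 synapses.

random.seed(0)                          # reproducible illustration
FANOUT = 10_000

weights = [random.random() for _ in range(FANOUT)]   # coupling strengths

def propagate(signal, threshold=0.5):
    """Return the indices of every coupled unit the signal activates."""
    return [i for i, w in enumerate(weights) if signal * w >= threshold]

fired = propagate(1.0)      # a strong signal activates the stronger couplings
```

A full-strength signal activates roughly half of these uniformly weighted couplings, while a zero signal activates none; scaling the weights after each activation would give the strengthening behavior described earlier.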

“If the self is ‘just’ the Center of Narrative Gravity, and if all the phenomena of human consciousness are explicable as ‘just’ the activities of a virtual machine realized in the astronomically adjustable connections of a human brain, then, in principle, a suitably ‘programmed’ robot, with a silicon-based computer brain, would be conscious, would have a self. More aptly, there would be a conscious self whose body was the robot and whose brain was the computer.”

At least for a Materialist, there seems to be nothing essential to neurons, in terms of creating consciousness, that could not be achieved as well with software. The quotation marks around ‘just’ in the quote from Dennett are the famous philosopher’s facetious smile. He is saying with each ‘just’ that there is nothing to belittle about such a great feat of connectivity and patterning.