
Sunday, April 17, 2011

25. WON’T IT BE IMPOSSIBLE TO ETHICALLY TEST AND DEVELOP CYBER-CONSCIOUSNESS?


“Partial freedom seems to me a most invidious mode of slavery.”  Edmund Burke

“Ethics is knowing the difference between what you have a right to do, and what is right to do.”  Potter Stewart

A fundamental principle of bioethics requires the consent of a patient to any medical procedure performed upon them.  A patient will exist the moment a conscious mindclone arises in some academic laboratory or hacker’s garage.  At that moment ethical rules will be challenged, for the mindclone has not consented to the work being done on eir mind.  Does this situation create a catch-22 ethical embargo against developing cyber-consciousness?

There are at least three ways to answer this challenge.  First, it can be approached with a medical ethics focus on the mindclone itself.   Second, it can be approached philosophically – focusing on the mindclone as just part and parcel of the biological original.  Third, it can be approached pragmatically – what will the government likely require?

Creating Ethical Beings Ethically

How can it be ethical to test mindclone-creating mindware when any resulting mindclone has not first consented to being the subject of such an experiment?  How will we know we have mindware that creates an ethically-reasoning mindclone if it is not ethical to even do the tests and trials?    

As to the first question, ethicists agree that someone else can consent to a treatment for a person who is unable to consent.  For example, the parents of a newborn child can consent to experimental medical treatment for them.  The crucial criterion is that the consenter must have the best interests of the patient in mind, and not be primarily concerned with the success of a medical experiment.  One of the purposes of an Institutional Review Board (IRB) or medical review committee is to exercise this kind of consent on behalf of persons who cannot give their consent.  Hence, having a responsible committee act on their behalf solves the problem of ethical consent for the birth of a mindclone or beman. 

Sometimes people complain that they “did not ask to be born.”  Yet, nobody has an ethical right to decide whether or not to be born, as that would be temporally illogical.  The solution to this conundrum is for someone else to consent on behalf of the newborn, whether this is done implicitly via biological parenting, or explicitly via an ethics committee.  In each case there is a moral obligation (which can be enforced legally today for biological parents) to avoid intentionally causing harm to the newborn.   We are now ready to turn to the second question:  how can an ethics committee, acting on behalf of the best interests of future mindclones or bemans, avoid causing harm to them?

One possible solution to ethically developing mindclones is to take the project in stages.  The first stage must not rely upon self-awareness or consciousness.  It would instead first develop the autonomous, moral reasoning ability that is a necessary, but not sufficient, basis for consciousness.  Recall from Question 5 that consciousness is a continuum of maturing abilities, when healthy, to be autonomous and empathetic, with autonomous defined as:  “the independent capacity to make reasoned decisions, with moral ones at the apex, and to act on them.”  Independent means, in this context, “capable of idiosyncratic thinking and acting.”

By running many simulations mindclone developers can gain comfort that the reasoning ability of the mindware is human-equivalent.  In fact, the reasoning ability of the mindware should match that of the biological original who is being mindcloned. 

The second stage of development expands the mindware to incorporate human feelings and emotions, via settings associated with aspects of pain, pleasure and the entire vast spectrum of human sentience.  At this stage all the feelings and emotions terminate in a “black box,” devoid of any self-awareness.  Engineers will measure and validate, via instruments, that the feelings are real, but no “one” will actually be feeling the feelings.

The third stage entails creating in software the meaningful memories and patterns of thought of the original person being mindcloned.  This can be considered the identity module.  If this is a case of a de novo cyberconscious being, i.e., a beman, then this identity module is either missing or is created from whole cloth. 

Finally, a consciousness bridge will be developed that marries the reasoning, sentience and identity modules, giving rise to autonomy with empathy and hence consciousness.  Feelings and emotions will be mapped to memories and characteristic ways of processing information.  There will be a sentient research subject when the consciousness bridge first connects the autonomy, empathy and identity modules. 
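To make the staged approach concrete, here is a minimal sketch in Python of how the pieces might fit together.  Every class and method name below is an illustrative assumption, not a real mindware design; the point is only that the reasoning, sentience and identity modules can be built and validated separately, with the consciousness bridge as a distinct, final, gated step.

```python
# A hypothetical sketch of the staged architecture described above.
# All names are illustrative assumptions; nothing here is a real mindware API.

class ReasoningModule:
    """Stage 1: autonomous moral reasoning, with no self-awareness."""
    def decide(self, situation):
        ...

class SentienceModule:
    """Stage 2: feelings terminate in a 'black box'; instruments can
    measure them, but no 'one' is there to feel them."""
    def feel(self, stimulus):
        ...

class IdentityModule:
    """Stage 3: the memories and thought patterns of the original."""
    def recall(self, cue):
        ...

class ConsciousnessBridge:
    """Stage 4: only this step marries the modules into a sentient subject."""
    def __init__(self, reasoning, sentience, identity):
        self.modules = (reasoning, sentience, identity)
        self.active = False

    def activate(self, approved_by_review_board: bool):
        # Ethically, the bridge is switched on only after external approval.
        if not approved_by_review_board:
            raise PermissionError("ethics approval required before awakening")
        self.active = True
```

The design point is that there is no research subject, and hence no consent problem, until the bridge’s activate step; everything before it can be tested as thoroughly as any other software.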

This bridging approach to ethically creating mindclones is reminiscent of Dennett’s observation that the dissociation from themselves that some victims of horrible abuse exhibit – a kind of denial that the abuse happened to them – is not only a way to avoid the sensation of suffering, but is also likely to be the normal state in beings that have not integrated consciousness into their mind.[i]  In other words, if a being is unable to mentally organize a conceptualized self into a mental world of conceptualized things and experienced sensations, then they cannot actually suffer from pain because there is not yet a self to suffer.  Pain can be experienced, and it can hurt like hell, but it is an autonomic hurt and not a personally experienced hurt.  In Dennett’s view, when people witness this kind of pain in most animals, they anthropomorphize themselves into the animal’s position and imagine the animal’s hurt.  But because most animals cannot do this, they cannot hurt.  Similarly, until a self is bridged into a mindclone’s or beman’s complex relational database of mindware and mindfiles, there will be “no one home” to complain.

Ethically, approval from research authorities should be obtained before the consciousness bridge is activated.  The concerns will be to cause neither gratuitous harm nor fear, and either to wind the subject down gracefully at the end of the experiment or to continue its virtual life appropriately.  The ethics approvals may be more readily granted if the requests are graduated.  For example, the first request could be to bridge just a small part of the empathy, identity and autonomy modules, and for just a brief period of time.  After the results of experiments are assessed, positive results would be used to request more extensive approvals.  Ultimately there would be adequate confidence that a protocol existed pursuant to which a mindclone could be safely, and humanely, awakened into full consciousness for an unending period of time – just as there are analogous protocols for bringing flesh patients out of medically induced comas.

For example, before companies are allowed to test new drugs on patients they must first test a very small dose of the drug, for a very short period of time, on a healthy volunteer.  Only gradually, based on satisfaction with the safety of previous tests, are companies allowed to test the drugs more robustly.  Analogously, we can envision ethical authorities first permitting the test of only a small sliver of consciousness and only for a small sliver of time.  Gradually, as ethical review committees become convinced that the previous trials were safe (did not cause pain or fear), greater tests of consciousness would be permitted. 

Of course we are all aware of drugs that have been withdrawn from sale even after having been approved.  In these cases evidence of dangerous side effects emerges that was not apparent during the clinical trials.  No doubt the same situation will occur with mindclones – some tortured minds may be created inadvertently.  This does not mean it is unethical to create mindclones.  It means that every practical means should be employed to minimize the risks of such side effects, and, if they do manifest, to resolve the problem rapidly.  For example, if test equipment indicates a serious problem with a mindclone it should be promptly placed into a “sleep-mode” so as not to suffer.
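The graduated protocol just described might look something like the following loop.  This is purely a sketch: bridge_partial, shows_distress and sleep_mode are hypothetical stand-ins for whatever instrumentation and controls such experiments would actually use, and every number is assumed.

```python
# A hedged sketch of graduated consciousness trials with a sleep-mode abort.
# All functions and numbers are hypothetical stand-ins, not a real protocol.
import random

def bridge_partial(scope, duration):
    """Hypothetical: connect a fraction `scope` of the modules for `duration` s."""
    return {"scope": scope, "duration": duration}

def shows_distress(subject):
    """Hypothetical instrument reading for pain or fear (random placeholder)."""
    return random.random() < 0.05 * subject["scope"]

def sleep_mode(subject):
    """Promptly suspend the subject so that it does not suffer."""
    subject["asleep"] = True

def run_graduated_trials(max_stages=10):
    scope, duration = 0.01, 1.0   # start with a sliver of consciousness, briefly
    for stage in range(max_stages):
        subject = bridge_partial(scope, duration)
        if shows_distress(subject):   # any evidence of pain or fear?
            sleep_mode(subject)       # suspend immediately
            return stage              # report how far trials safely progressed
        scope = min(1.0, scope * 2)   # expand the bridged portion...
        duration *= 2                 # ...and its duration, only after a safe trial
    return max_stages

print(run_graduated_trials())
```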

In the graduated process described above, the experimental subject still did not consent to being “born.”  However, ey could not so consent.  In these cases a guardian (such as an institutional review board, certified cyberpsychiatrist or attorney) can ethically consent on an incompetent’s behalf, with such conditions as they may see fit to impose.  Alternatively, humans may in fact consent that their donated mindfiles can be used to create mindclones through a medical research process, assuming such consent was fully informed with a disclosure of the risks to the best of the researcher’s abilities.

In the foregoing way it will be possible to ethically develop mindware that can be approved by regulatory authorities as capable of producing safe and effective mindclones for ordinary people.  The authority may be the FDA in the U.S., or the EMA in the E.U., or some new regulatory entity.  They will need to be assured that the mindware is safe and effective, and that proving it so was accomplished via clinical trials that were ethically conducted.  As shown in the answer to this Question, by taking the inchoate mindclone through incrementally greater stages of consciousness, the regulatory hurdle can be met.

What’s the Big Deal – Just Me and My Mindclone

Another approach to the ethics of mindcloning is to remember the explanation in Question 23 that a mindclone and eir biological original are the same person.  Hence, the ethical requirement of “consent” is satisfied so long as a biological person requests their mindfile to be activated with mindware into a mindclone.  For example, there is no ethical objection to a person authorizing one, two or twenty-two plastic surgeries upon their face, in the process transforming their looks almost beyond recognition.  With mindcloning the plastic surgery is replaced with cyber surgery, and it is performed outside of one’s body.  However, the end result, functionally, is quite similar – a person has consented to a change of self: from one face to another in the case of plastic surgery; from one instantiation to two instantiations in the case of mindcloning.  In each case the individual’s future will be changed, because others will interact differently with them, and they will behave differently.  However, we recognize the right of a person to medically do as they please with their body (and mind), provided no doctor is being called upon to harm them without a countervailing benefit.

When consciousness first arises in a mindclone, it is not a new consciousness but an expansion of an existing consciousness.  If it hurts, if it frightens, if it enlightens, it is not pain, fear or inspiration occurring to a new soul, but to an existing soul who now transcends two substrates, brain and software.  The opening of consciousness in a mindclone is like what occurs to us when we have a profound educational experience.  I remember that I cried when I first read how the Nazis tied the legs of pregnant Jews together to kill them and their babies, and how, half a century later, the Liberian rebels chopped off the hands of young teenagers and talented craftsmen.  My consciousness opened up to realms of cruelty that I had never imagined.  I can’t say that I’m any better off for that education, but I knew what I was getting into in reading those stories.  Similarly, creating a mindclone is going to change our minds.  But it is our minds that we are changing, and this is something we have an ethical right to do.

We must also always remember that our minds are dynamically evolving pastiches of information and patterns of information processing.  There is no such thing as having one mind, completely formed at birth, and never changing after that.  Indeed, an excellent definition of a mind is that which idiosyncratically aggregates, utilizes and exchanges information and information processing patterns.  Consider the following meditation by Douglas Hofstadter:


“We are all curious collages, weird little planetoids that grow by accreting other people’s habits and ideas and styles and tics and jokes and phrases and tunes and hopes and fears as if they were meteorites that came soaring out of the blue, collided with us, and stuck.  What at first is an artificial, alien mannerism slowly fuses into the stuff of our self, like wax melting in the sun, and gradually becomes as much a part of us as ever it was of someone else (and that person may very well have borrowed it from someone else to begin with).  Although my meteorite metaphor may make it sound as if we are victims of random bombardment, I don’t mean to suggest that we willingly accrete just any old mannerism onto our sphere’s surface – we are very selective, usually borrowing traits that we admire or covet – but even our style of selectivity is itself influenced over the years by what we have turned into as a result of our repeated accretions.  And what was once right on the surface gradually becomes buried like a Roman ruin, growing closer and closer to the core of us as our radius keeps increasing.  All of this suggests that each of us is a bundle of fragments of other people’s souls, simply put together in a new way.  But of course not all contributors are represented equally.  Those whom we love and who love us are the most strongly represented inside us, and our “I” is formed by a complex collusion of all their influences echoing down the many years.”[ii]


The relevance of Hofstadter’s extended metaphor lies in its implication that a mindclone is very much a part of its biological original because so very much of it would be copied from the original.  If we are an agglomeration of other people, we surely must be much more an agglomeration of ourselves -- even as we evolve from month to month and year to year.   Our mindclones will be consolidations of ourselves, extensions of ourselves, and expansions of ourselves.  They will be “of ourselves” and hence we are on firm ethical ground when we consent to their conscious awakening.

Quite a different situation prevails for the creation of a non-mindclone beman.  Such consciousness is not an extension of anyone, but an entirely new idiosyncratic mixture of information and information processing patterns.  The creation of such consciousness could be ethically considered as an exercise of a person’s own personal autonomy only in terms of each person having a right to create new life, as with biological reproductive rights.

The Ethics of Practicality

In the film The Singularity Is Near, futurist Ray Kurzweil argues with environmentalist Bill McKibben over the ethics of keeping people alive as long as technology makes a good quality of life possible.  McKibben says he is worried about the ethics of avoiding death.  Kurzweil responds, “I don’t think people are going to wax philosophical if they are healthy but 120 years old, and a government official says they have to die.”  The clear implication is “hell no.”

Similarly, I started a truck-locating company called Geostar back in the 1980s.  At first people wrung their hands over the ethics of monitoring the truck drivers’ locations via satellite.  Many thought the drivers would rip the satellite locators off their cab roofs.  Instead, the drivers embraced the technology because it enabled them to make much more money.  The satellite tracking technology permitted trucking company dispatchers to know at all times whether locator-equipped drivers were close to the locations of newly called-in loads.  Not a single locator was ripped off of the thousands of trucks using our service.

I think that, practically speaking, the benefits of having a mindclone will be so enticing that any ethical dilemma will find a resolution.  With mindclones we are offering people the opportunity to cram twice as much life into each day, absorb twice as many interesting things and continue living beyond the days of their bodies – with a practical hope of future transplantation via downloading into a new body.  I doubt that those who wax philosophical about the ethics of mindcloning will win many arguments.  People will want their mindclones, like we want smartphones, especially as they become cheaper and better.

There will be different companies competing to offer mindclone-creating mindware.   As described in Questions 12 and 16, they will need some sort of regulatory approval in order to legally sell their mindware (as opposed to black market sales).  The public will be reluctant to permit cyber-consciousness to arise in great numbers without some guarantee of its safety and efficacy, e.g., lack of psychoses in mindclones.  Certainly the public will only accept the citizenship of mindclones that are created from mindware that has been certified by an expert government agency to produce mindclones that are mentally equivalent to their biological originals (assuming adequate mindfiles).

I think it is unlikely that cyber-consciousness will be accepted as real consciousness until it has manifested itself, probably many times over, and been shown to be persuasive in media interviews and court cases.  Hence, it will be difficult to hold up experimental development of cyber-consciousness because regulators will not believe there is any real sentience to worry about – “just code.”  Yet, once cyber-consciousness has appeared, and been generally accepted, then the ethics of its development is a moot point.

Thus, practically speaking, the first mindclones will arise without much (or any formal) ethical protection during their development.  Before the mindware that produced these mindclones can be generally marketed to the public, as certified to produce mindclone citizen extensions of biological originals, government agencies will require safety and efficacy testing.  Specifically, government agencies will want proof that the mindware produces a healthy mind, and one that is practically indistinguishable from the mind of the biological original with an adequate size mindfile.  Government agencies will not give their blessings to such proof unless it is developed ethically.

Ethical guidelines for developing mindclones will include a requirement of consent for the creation of a conscious being.  As to the creation of mindclones, the consent of the biological original will likely be acceptable.  As to the creation of bemans, there will be a more challenging pathway.  Ethical review boards will need to be persuaded that the beman minds are not suffering during the process of accruing cyber-consciousness.  This is not an insuperable barrier.   However, it will require a much more deliberate development pathway based upon numerous graduated introductions of elements of cyber-consciousness, such as autonomy, empathy, identity and software bridges amongst these elements.

The bottom line is that ethical considerations favor a more rapid introduction of mindclones than non-mindclone bemans.  Ultimately, however, the seeming catch-22 of how a consciousness can consent to its own creation can be solved.


[i] D. Dennett, Kinds of Minds, New York: BasicBooks, 1996, pp. 166-68.
[ii] D. Hofstadter, I Am a Strange Loop, New York: Basic Books, 2007, pp. 251-52.

Friday, August 14, 2009

6. HOW CAN CONSCIOUSNESS BE CREATED IN SOFTWARE?

“Some men see things as they are and wonder why. Others dream things that never were and ask why not?” Robert F. Kennedy



There are thousands of software engineers across the globe working day and night to create cyberconsciousness. This is real intelligent design. There are great financial rewards available to the people who can make game avatars respond as curiously as people. Even vaster wealth awaits the programming teams that create personal digital assistants with the conscientiousness, and hence consciousness, of a perfect slave.

How can we know that all of this hacking will produce consciousness? This takes us to what are known as the “hard problem” and the “easy problem” of consciousness. The “hard problem” is how the web of molecules we call neurons gives rise to subjective feelings, or qualia (the “redness of red”). The alternative “easy problem” of consciousness is how electrons racing along neurochemistry can result in complex simulations of “concrete and mortar” (and flesh and blood) reality, or how metaphysical thoughts arise from physical matter. Basically, both the hard and the easy problems of consciousness come down to this: how is it that brains give rise to thoughts (the ‘easy’ problem), especially about immeasurable things (the ‘hard’ problem), but other parts of bodies do not? If these hard and easy questions can be answered for brain waves running on molecules, then it remains only to ask whether the answers are different for software code running on integrated circuits.



At least since the time of Isaac Newton and Leibniz, it has been felt that some things appreciated by the mind can be measured whereas others cannot. The measurable thoughts, such as the size of a building or the name of a friend, were imagined to take place in the brain via some exquisite micro-mechanical processes. Today we would draw analogies to a computer’s memory chips, processors and peripherals. Although this is what philosopher David Chalmers calls the “easy problem” of consciousness, we still need an actual explanation of exactly how one or more neurons save, cut, paste and recall any word, number, scent or image. In other words, how do neuromolecules catch and process bits of information?

Those things that cannot be measured are what Chalmers calls the “hard problem.” In his view, a being could be conscious, but not human, if they were only capable of the “easy” kind of consciousness. Such a being, called a zombie, would be robotic, without feelings, empathy or nuances. Since the non-zombie, non-robot characteristics are also purported to be immeasurable (e.g., the redness of red or the heartache of unrequited love), Chalmers cannot see even in principle how they could ever be processed by something physical, such as neurons. He suggests consciousness is a mystical phenomenon that can never be explained by science. If this is the case, then one could argue that it might attach just as well to software as to neurons – or that it might not – or that it might perfuse the air we breathe and the space between the stars. If consciousness is mystical, then anything is possible. As will be shown below, there is no need to go there. Perfectly mundane, empirical explanations are available to explain both the easy and the hard kinds of consciousness. These explanations work as well for neurons as they do for software.

As indicated in the following figure, Essentialists v. Materialists, there are three basic points of view regarding the source of consciousness. Essentialists believe in a mystical source specific to humans. This is basically a view that God gave Man consciousness. Materialists believe in an empirical source (pattern-association complexity) that exists in humans and can exist in non-humans. A third point of view is that consciousness can mystically attach to anything. While mystical explanations cannot be disproved, they are unnecessary because there is a perfectly reasonable Materialist explanation to both the easy and hard kinds of consciousness.

[Figure: Materialism vs. Essentialism, contrasting the three views on the source of consciousness described above.]
If human consciousness is to arise in software we must do three things: first, explain how the easy problem is solved in neurons; second, explain how the hard problem is solved in neurons; and third, explain how the neuronal solution is replicable in information technology. The key to all three explanations is the relational database concept. With a relational database, an inquiry (or, for the brain, a sensory input) triggers a number of related responses. Each of these responses is, in turn, a stimulus for a further number of related responses. An output response is triggered when the strength of a stimulus, such as the number of times it was triggered, is greater than a set threshold.
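A toy illustration may help. The following Python sketch (with made-up concepts and an arbitrary threshold, all assumptions for illustration) shows the mechanism just described: a stimulus activates its related entries, those entries re-stimulate their own relations, and a response fires only once its accumulated activation crosses the threshold.

```python
# A minimal sketch of threshold-gated spreading activation over a toy
# relational store of associations. Concepts and threshold are made up.
from collections import defaultdict

ASSOCIATIONS = {
    "red": ["apple", "stop sign", "warmth"],
    "apple": ["fruit", "red"],
    "stop sign": ["traffic", "red"],
}

THRESHOLD = 2  # a concept produces an output once stimulated this many times

def spread(stimulus, rounds=2):
    """Propagate activation from a stimulus; return concepts over threshold."""
    counts = defaultdict(int)
    frontier = [stimulus]
    for _ in range(rounds):
        next_frontier = []
        for concept in frontier:
            for related in ASSOCIATIONS.get(concept, []):
                counts[related] += 1          # each trigger adds strength
                next_frontier.append(related)
        frontier = next_frontier
    return {c for n_c in [counts] for c, n in n_c.items() if n >= THRESHOLD}

print(spread("red"))  # {'red'}: re-triggered via both apple and stop sign
```

Here only “red” crosses the threshold, because two of its associates re-trigger it; everything else stays sub-threshold.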



For example, there are certain neurons hard-wired by our DNA to be sensitive to different wavelengths of light, and other neurons are sensitive to different phonemes. So, suppose when looking at something red, we are repeatedly told “that is red.” The red-sensitive neuron becomes paired with, among other neurons, the neurons that are sensitive to the different phonetics that make up the sounds “that is red.” Over time, we learn that there are many shades of red, and our neurons responsible for these varying wavelengths each become associated with words and objects that reflect the different “rednesses” of red. Hence, the redness of red is not only the immediate physical impression upon neurons tuned to wavelengths we commonly refer to as “red”, but is also (1) each person’s unique set of connections between neurons hard-wired genetically from the retina to the various wavelengths we associate with different reds, and (2) the plethora of further synaptic connections we have between those hard-wired neurons and neural patterns that include things that are red. If the only red thing a person ever saw was an apple, then redness to them means the red wavelength neuron output that is part of the set of neural connections associated in their mind with an apple. Redness is not an electrical signal in our mind per se, but the association of color wavelength signals with referents in the real world. Redness is part of the gestalt impression, obtained in a second or less, from the immense pattern of neural connections we have built up about red things.
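The repeated pairing just described can be sketched as a simple co-activation rule, in the spirit of Hebbian “fire together, wire together” learning. The neuron labels and the learning rate below are assumptions for illustration only, not a real neural model.

```python
# A toy Hebbian sketch: units that fire together have their pairwise
# connection weights strengthened. All labels and rates are illustrative.
from collections import defaultdict

weights = defaultdict(float)  # (unit_a, unit_b) -> connection strength
LEARNING_RATE = 0.1

def co_activate(active_units):
    """Strengthen every pairwise connection among units firing together."""
    for a in active_units:
        for b in active_units:
            if a < b:  # store each unordered pair once
                weights[(a, b)] += LEARNING_RATE

# Seeing red while repeatedly hearing "that is red":
for _ in range(25):
    co_activate({"wavelength:700nm", "phoneme:/r/", "phoneme:/e/", "phoneme:/d/"})

print(weights[("phoneme:/r/", "wavelength:700nm")])  # ~2.5: a stable pairing
```

After enough pairings, triggering either side of the association tends to pull in the other, which is all the “pairing” in the paragraph above amounts to.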



After a few front lines of sensory neurons, everything else is represented in our minds as a pattern of neural connections. It is as if the sensory neurons are our alphabet. These are associated (via synapses) in a vast number of ways to form mental images of objects and actions, just as letters can be arranged into a dictionary full of words. The mental images can be strung together (many more synaptic connections) into any number of coherent (even when dreaming) sequences to form worldviews, emotions, personalities, and guides to behavior. This is just as words can be grouped into a limitless number of coherent sentences, paragraphs and chapters. Grammar is to words what the as-yet poorly understood electro-chemical properties of the brain are to synapses: they enable the strengthening or weakening of waves of synaptic connections that support attentiveness, mental continuity and characteristic thought patterns. Continuing the analogy, the self, our consciousness, is the entire book of our autonomous and empathetic lives, written with that idiosyncratic style that is unique to us. It is a book full of chapters of life-phases, paragraphs of things we’ve done and sentences reflecting streams of thought.



Neurons save, cut, paste and recall any word, number, scent, image, sensation or feeling no differently for the so-called hard than for the so-called easy problems of consciousness. Let’s take as our example the “hard” problem of love, what Ray Kurzweil calls the “ultimate form of intelligence.” Robert Heinlein defines it as the feeling that another’s happiness is essential to your own.

Neurons save the subject of someone’s love as a collection of outputs from hard-wired sensory neurons tuned to the subject’s shapes, colors, scents, phonetics and/or textures. These outputs come from the front-line neurons that emit a signal only when they receive a signal of a particular contour, light-wave, pheromone, sound wave or tactile sensation. The set of outputs that describes the subject of our love is a stable thought – once so established with some units of neurochemical strength, any one of the triggering sensory neurons can summon from our mind the other triggering neurons.

Neurons paste thoughts together with matrices of synaptic connections. The constellation of sensory neuron outputs that is the thought of the subject of our love is, itself, connected to a vast array of additional thoughts (each grounded directly or, via other thoughts, indirectly, to sensory neurons). Those other thoughts would include the many cues that lead us to love someone or something. These may be resemblance in appearance or behavior to some previously favored person or thing, logical connection to some preferred entity, or some subtle pattern that matches extraordinarily well (including in counterpoint, syncopation or other forms of complementarity) with the patterns of things we like in life. As we spend more time with the subject of our love, we further strengthen sensory connections with additional and strengthened synaptic connections, such as those associated with eroticism, mutuality, endorphins and adrenaline.

There is no neuron with our lover’s face on it. There are instead a vast number of neurons that, as a stable set of connections, represent our lover. The connections are stable because they are important to us. When things are important to us, we concentrate on them, and as we do, the brain increases the neurochemical strengths of their neural connections. Many things are unimportant to us, or become so. For these things the neurochemical linkages become weaker and finally the thought dissipates like an abandoned spider web. Neurons cut unused and unimportant thoughts by weakening the neurochemical strengths of their connections. Often a vestigial connection is retained, capable of being triggered by a concentrated retracing of its path of creation, starting with the sensory neurons that anchor it.
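This strengthen-or-decay dynamic is easy to caricature in a few lines. In the sketch below, all the numbers are assumed: attended connections gain strength each step, ignored ones lose it, and a thought effectively dissipates once it falls below a recall threshold, though a vestigial trace remains.

```python
# A toy caricature of the "save" and "cut" operations described above.
# All strengths, rates and thresholds are assumptions for illustration.

DECAY = 0.9             # unused connections lose 10% strength per step
RECALL_THRESHOLD = 0.2  # below this, the thought no longer comes to mind

connections = {"lover's face": 1.0, "old phone number": 1.0}

for step in range(20):
    connections["lover's face"] += 0.1        # attended to: strengthened
    connections["old phone number"] *= DECAY  # ignored: weakened

for thought, strength in connections.items():
    state = "recallable" if strength >= RECALL_THRESHOLD else "vestigial"
    print(f"{thought}: {strength:.3f} ({state})")
# The lover's face stays strong; the old number fades to a vestigial trace.
```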

What the discussion above shows is that consciousness can be readily explained as a set of connections among sensory neuron outputs, and links between such connections and sequences of higher-order connections. With each neuron able to make as many as 10,000 connections, and with 100 billion neurons, there is ample possibility for each person to have subjective experiences through idiosyncratic patterns of connectivity. The “hard problem” of consciousness is not so hard. Subjectivity is simply each person’s unique way of connecting the higher-order neuron patterns that come after the sensory neurons. The “easy problem” of consciousness is solved in the recognition of sensory neurons as empirical scaffolding upon which can be built a skyscraper’s worth of thoughts. If it can be accepted that sensory neurons can as a group define a higher-order concept, and that such higher-order concepts can as a group define yet higher-order concepts, then the “easy problem” of consciousness is solved. Material neurons can hold non-material thoughts because the neurons are linked members of a cognitive code. It is the meta-material pattern of the neural connections, not the neurons themselves, that contains non-material thoughts.




Lastly, there is the question of whether there is something essential about the way neurons form into content-bearing patterns, or whether the same feat could be accomplished with software. The strengths of neuronal couplings can be replicated with weighted strengths for software couplings in relational databases. The connectivity of one neuron to up to 10,000 other neurons can be replicated by linking one software input to up to 10,000 software outputs. The ability of neuronal patterns to maintain themselves in waves of constancy, such as in personality or concentration, could equally well be accomplished with software programs that kept certain software groupings active. Finally, a software system can be provided with every kind of sensory input (audio, video, scent, taste, tactile). Putting it all together, Daniel Dennett observes:

“If the self is ‘just’ the Center of Narrative Gravity, and if all the phenomena of human consciousness are explicable as ‘just’ the activities of a virtual machine realized in the astronomically adjustable connections of a human brain, then, in principle, a suitably ‘programmed’ robot, with a silicon-based computer brain, would be conscious, would have a self. More aptly, there would be a conscious self whose body was the robot and whose brain was the computer.”

At least for a Materialist, there seems to be nothing essential to neurons, in terms of creating consciousness, that could not be achieved as well with software. The quotation marks around ‘just’ in the quote from Dennett are the famous philosopher’s facetious smile. He is saying with each ‘just’ that there is nothing to belittle about such a great feat of connectivity and patterning.