
Sunday, September 19, 2010

18. WON’T THE WORLD GET WEIRD WITH LEGALLY-PROTECTED IMMORTAL MINDCLONES ALL OVER THE PLACE?

“Why should Palatine Boors be suffered to swarm our settlements? They will never adopt our Language and Customs.” Benjamin Franklin

Census Fact: As of 2010, there are approximately 50 million Americans of German descent. Very few speak German or even feel any German group identity.

We adapt. Only a few generations ago capital punishment was carried out in nearly every country in the world. Many, like England, held public hangings. Today, even Russia, with a mountainous history of government-ordered executions, has a capital punishment moratorium. Since 1996, as part of its effort to show it is as modern as the rest of Europe, Russia has not executed a criminal through the judicial system. If we can learn to protect the lives of serial killers, child mutilators and terrorists, surely we can learn to protect the lives of peace-loving, model-citizen mindclones.

The world is constantly getting weird compared to how it was. When my grandmother was born, the fastest time to get a document across the ocean was a few weeks -- a ship voyage, followed by connecting rail or pony express. By the time she died a facsimile of any document could get across the ocean in a few seconds -- attached to an email. From a few weeks to a few seconds? That’s weird.

When my dad was born, the notion of thousands of undergrads across the country sitting in classrooms and lecture halls openly watching movies on their phones while the professor drones on would have been – weird. Phones were big, black and stuck to the household wall, while movies were huge, spellbinding and shown only in big theatres. Universities were hallowed halls. By the time he died, not only were iPhone movies common, but entire university educations from places like MIT were also available on the very same phone. Weird.

Which is weirder, life drastically changing or some imaginary world in which we are still, in the 21st century, completely limited to dialing Miss Sarah, the Andy of Mayberry switchboard operator, to connect us to each other? Which is weirder, that we can multitask -- simultaneously listen to the prof, text our friends and watch X-Men on our Android -- or some black-and-white surreality in which century after century we continue to learn by rote, or feel the sting of a switch, in a one-room schoolhouse, boys only, so that girls can get their 10-15 pregnancies in, starting around age 13, before they die?




My point is that weird is just a word for something very different from our comfort zone. We are comfortable with smart cars and smart phones, so life in horse-and-buggy days seems weird. We are not yet comfortable with smart software, like mindclones and bemans, so that kind of life also seems weird. Nothing is good or bad because it is weird. Things are just weird because they are very different.

The important question to ask is whether legally-protected, immortal mindclones is a good kind of weird (like contact lenses would be to Ben Franklin), or a bad kind of weird (like streaming a spycam you snuck into your girlfriend’s room). Are mindclones cool or yuck? Hot or horrid? These are the questions of weirdness we must parse.

What Innovations Have We Loved, and Which Have We Hated?

There are two ways a technology gets perceived as horrid or yucky. The first way, generally associated with horridness, is to adversely impact the quality of life. Think old-school commercial-ridden television, famously called “a vast wasteland,” or the loss of privacy that sneakily placed webcams entail. The second way, more associated with yuck, makes people feel viscerally disgusted. Think hybridizing people and farm animals the way some fruits and vegetables are genetically modified (seedless, differently colored, blended tastes).

Surveys regularly show that mobile phones, alarm clocks and televisions are among the most hated products. They achieve this status because they interfere with our normal behaviors. Instead of talking with each other, we stare at the TV. Instead of sleeping until we feel refreshed, the alarm clock blasts us from bed. Instead of paying attention to each other, we interrupt each other to answer or peck at our mobiles. Yet, at the same time, these products are ubiquitous. We feel we need them, and we surely want them. This is because they also help us in important, even crucial, ways. Mobiles save us time, alarm clocks keep us housed and clothed (by helping us avoid getting fired) and televisions relax us with escapist entertainment.

Based on this experience it may not be so easy to categorize mindclones as either hot weird or horrid weird. Our experience is to accept technologies so long as we want or need them more than we hate them. We will surely complain about having to interact with someone’s mindclone instead of the flesh original. Others will bitch about us spending all of our time with our mindclone instead of pressing the flesh. But will we really be angry that we are talking to a most helpful mindclone instead of a script-reading call center rep or voicemail box? And won’t we very quickly find our mindclones to be indispensable for handling our more than 24 hours worth of responsibilities (and opportunities) in under 24 hours? No matter how much we may hate specific information, electronics and media technologies, we also find them indispensable. Also, since these information technologies rarely entail “wet biology”, we rarely if ever feel “yuck” about them.



What would it take for a mindclone to generate a “yuck” reaction? When something seems to change normal human biology, people begin to move from “hate” to “yuck” or “disgust.” Yet, here too, it is possible to greatly value something that is otherwise “disgusting”, and thereby to incorporate it into society.

Strong feelings of “yuck” accompanied the first vaccinations, organ transplants, birth control pills, and test tube babies. Yet, over time, people appreciated the enormous benefits of these technologies, and have accepted them even if they still feel queasy about their unnaturalness. As Reason magazine recently summarized:

“in 1969, a Harris poll found that a majority of Americans believed that producing test-tube babies was "against God's will." Christiaan Barnard was condemned by many as a "butcher" when he transplanted the first heart into the chest of 55-year-old Louis Washkansky on December 3, 1967. The contraceptive pill introduced in 1960 was outlawed by many states until near the end of that decade. And much further back, Edward Jenner's 1796 discovery that inoculation with cowpox scabs would prevent people from getting smallpox was mocked by newspaper editorials and cartoons depicting men with cow's heads.
As history amply demonstrates, the public's immediate "yuck" reaction to new technologies is a very fallible and highly changeable guide to moral choices or biomedical policy. For example, by 1978, more than half of Americans said that they would use in vitro fertilization (IVF) if they were married and couldn't have babies any other way. More than 200,000 test-tube babies later, the majority of Americans now heartily approve of IVF. Globally nearly 50,000 heart transplants have been performed, and 83 percent of Americans favor organ donation. The contraceptive pill is legal in all states and millions of American families have used them to control their reproductive lives. And smallpox is the first human disease ever eradicated.”

In summary, we hate and love the very same technologies. We complete a mental balancing act, collectively throughout society, between two principal questions. Where is the technology on the scale from merely annoying to downright disgusting? How useful is the technology to us, from superfluous to life-saving? We ultimately feel that new possibilities that are above the “acceptance line” shown in the graph to the right are too badly weird for our society. However, new possibilities under the acceptance line are a “good kind of weird”, and can proceed in our time.

In forecasting where mindclones will be placed on the Social Acceptance of Weirdness chart, we can compare them with things research has shown to be universally perceived as disgusting. While there was variance among localities, Dr. Valerie Curtis, a researcher with the London School of Hygiene and Tropical Medicine, found that just these factors trigger disgust across cultures worldwide:

Bodily secretions - faeces (poo), vomit, sweat, spit, blood, pus, sexual fluids
Body parts - wounds, corpses, toenail clippings
Decaying food - especially rotting meat and fish, rubbish
Certain living creatures - flies, maggots, lice, worms, rats, dogs and cats
People who are ill, contaminated

She concluded from her research that the universal human facial expression of disgust (screwing up our noses and pulling down the corners of our mouths) is genetically wired to images associated with disease. This disgust reaction can be overcome, as when bodily secretions are dealt with hygienically, or when animals are kept as harmless pets. Absent such cultural conditioning, however, Dr. Curtis believes that people who acquired genetic mutations making them repulsed by frequently diseased things lived longer, had more children, and passed those disgust-related behavioral genes on to the rest of us.

Whether or not Dr. Curtis’ evolutionary hypothesis is correct, it is clear that mindclones do not fall within any of her categories of disgust. This is important because it means that mindclones do not necessarily have to be life-saving to clear the social acceptance of weirdness hurdle. In order to achieve good weirdness status, legally-protected immortal mindclones need to be more useful than annoying – more hot than horrid. This will almost certainly be the case as they are an extrapolation of the software we use and data files we accumulate today. We find our software and data files immensely useful, and hence we sock more and more of our memories and life functions into them. The surest way for a piece of software to gain an edge on its competitors is to make it more human – intuitive, naturally interfaced and responsive. One of the most popular Web 3.0 applications, Evernote, has the tagline “Never Forget Anything.” Our very behaviors today reveal that we believe the utility of software and data-files far outstrips their annoyances.

Furthermore, we want our software and data-files legally-protected, and as long-lasting as possible. We expect our computerized information to be protected by privacy laws. We are far more offended by the notion of employers or government agencies combing through our web browsing history than we are that our software privately recommends to us books, songs and sites we may like based on that history. We cannot get enough back-up possibilities for our data – disks, thumb drives, external hard drives and cloud storage. My photo saving site, phanfare.com, specifically promises my pictures and videos will be stored “forever.”

Yes, the world will get weird with immortal, legally-protected mindclones running around. But it will be a good kind of weird. It will be a kind of weird that at minimum makes our life much more useful, and ultimately will make our life much more enduring. The mindclones will be our alter egos, our selves as best friends, our technologically empowered, autonomous but still synchronized, conscience and cognition. Furthermore, mindclones will do this without triggering the ancient human bugaboos of disgust that underlie yuck weirdness – signs, symptoms and vectors of death, disease and destruction. Mindclones will be clean. They are the anti-death. This is weirdness we will want.

Wednesday, January 27, 2010

10. EVEN IF SOME SOFTWARE CAN BE KIND OF ALIVE, WON’T CYBERCONSCIOUSNESS TAKE AGES TO EVOLVE, AS IT DID FOR BIOLOGY?



“Speed, it seems to me, provides the one genuinely modern pleasure.” Aldous Huxley

“The newest computer can merely compound, at speed, the oldest problem in the relations between human beings, and in the end the communicator will be confronted with the old problem, of what to say and how to say it.” Edward R. Murrow


Compared with biology, vitological consciousness will arise in a heartbeat. This is because the key elements of consciousness – autonomy and empathy – are amenable to software coding and thousands of software engineers are working on it. By comparison, the neural substrate for autonomy and empathy had to arise in biology via thousands of chance mutations. Furthermore, each such mutation had to materially advance the competitiveness of its recipient or else it had only a slight chance of becoming prevalent.


The differences between vitology and biology in the process of creating consciousness could not be starker. It is intelligent design versus dumb luck. In both cases Natural Selection is at play. However, for conscious vitology, any signs of consciousness get instantly rewarded with lots of copies and intelligent designers swarm to make it better. This is Darwinian Evolution at hyper-speed. With conscious biology, any signs of consciousness get rewarded only to the extent they prove useful in the struggle for biosphere survival. Any further improvements require patiently waiting through eons of gestation cycles for another lucky spin of genetic roulette. This traditional form of Darwinian Evolution is so glacial that it took over three billion years to achieve what vitology is accomplishing in under a century.

The people working hard to give vitology consciousness have a wide variety of motives. First, there are academicians who are deathly curious to see if it can be done. They have programmed elements of autonomy and empathy into computers. They even create artificial software worlds in which they attempt to mimic natural selection. In these artificial worlds software structures compete for resources, undergo mutations and evolve. The experimenters hope that consciousness will evolve in their software as it did in biology, only with vastly greater speed. Check out this vlog that explains why their hopes will almost certainly be fulfilled:




Another group of “human enzymes” aiming to catalyze software consciousness are gamesters. These (mostly) guys are trying to create as exciting a game experience as possible. Over the past several years the opponents at which a gamester aims have evolved from short lines (Pong; Space Invaders) to sophisticated human animations that modify their behavior based upon the attack. The game character that can make up its own mind idiosyncratically (autonomy) and engage in caring communications (empathy) will attract all the attention. Any other type of character will then seem as simplistic as a PlayStation 2 title.

Third and fourth groups focused on creating cyber-consciousness are medical and defense technologists. For the military cyberconsciousness solves the problem of engaging the enemy while minimizing casualties. By imbuing robot weapon systems with autonomy they can more effectively deal with the countless uncertainties that arise in a battlefield situation. It is not possible to program into a mobile robot system a specific response to every contingency. Nor is it very effective to control each robot system remotely based on video sent back to a distant headquarters. The ideal situation provides the robot system with a wide range of sensory inputs (audio, video, infrared) and a set of algorithms for making independent judgments as to how to best carry out orders in the face of unknown terrain and hostile forces. The work of one developer in this area has been described as follows:

“Ronald Arkin of the Georgia Institute of Technology, in Atlanta, is developing a set of rules of engagement for battlefield robots to ensure that their use of lethal force follows the rules of ethics. In other words, he is trying to create an artificial conscience. Dr. Arkin believes that there is another reason for putting robots into battle, which is that they have the potential to act more humanely than people. Stress does not affect a robot’s judgment in the way it affects a soldier’s.”
The algorithms suitable for a military conscience will not be difficult to adapt to more prosaic civilian requirements. Independent decision-making lies at the heart of Autonomy, one of the two touchstones of consciousness.

Meanwhile, medical cyber-consciousness is being pushed by the skyrocketing need to address Alzheimer’s and other diseases of aging. Alzheimer’s robs a great many older people of their minds while leaving their bodies intact. An Alzheimer’s patient could maintain their sense of self if they could off-load their mind onto a computer while the biotech industry works on a cure. This is analogous to how an artificial heart (such as a left-ventricular assist device, or LVAD) off-loads a patient’s heart until a heart transplant can be found. Ultimately the Alzheimer’s patient would hope to download their mind back into a brain cleansed of amyloid plaques.

Indeed, using cyber-consciousness for mind transplants would be a way to provide any patient facing an end-stage disease a chance to avoid the Grim Reaper. While the patients will surely miss their bodies, the alternative will be to never have a body. At least with a medically provided cyber-conscious existence, the patient can continue to interact with their family, enjoy electronic media and hope for rapid advances in regenerative medicine and neuroscience.

The field of regenerative medicine will ultimately permit ectogenesis, the rapid growth outside of a womb of a fresh, adult-size body in as little as twenty months. This is the time it would take an embryo to grow to adult size if it continued to grow at the rate embryos develop during the first two trimesters. Advances in neuroscience will enable a cyber-conscious mind to be written back into (or implanted and interfaced with) neuronal patterns in a freshly regenerated brain.

Biotechnology companies are well aware that over 90% of an average person’s lifetime medical expenditures are spent during the very last portion of their life. Lives are priceless, and hence we deploy the best technology we can to mechanically keep people alive. Medical cyber-conscious mind support is the next logical step in our efforts to keep end-stage patients alive. The potential profits from such technology (health insurance would pay for it just like any other form of medically-necessary equipment) are an irresistible enticement for companies to allocate top people to the effort.

Health care needs for older people are also driving efforts to develop the empathetic branch of cyber-consciousness. There are not enough people to provide caring attention to the growing legion of senior citizens. As countries grow wealthy their people live longer, their birthrates decline below the replacement rate and, consequently, their senior citizens comprise an ever-larger percentage of the population. Among the OECD group of advanced countries, the dependency ratio, which measures the number of people over 65 to those between 20 and 65, is projected to grow from .2 currently to .5 by 2050. In other words, today there are five younger people to care for each older person, whereas in four decades there will be just two workers to care for each older person. There is a huge health care industry motivation to develop empathetic robots because just a small minority of younger people actually wants to take care of older people.
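The arithmetic behind those figures is simple: a dependency ratio divides the senior population by the working-age population, so its reciprocal gives the number of workers available per senior. A quick sketch (the function name is my own, and the 0.2 and 0.5 values are the OECD projections cited above):

```python
def workers_per_senior(dependency_ratio):
    """Dependency ratio = (people over 65) / (people aged 20-65).
    Its reciprocal is the number of working-age people per senior."""
    if dependency_ratio <= 0:
        raise ValueError("dependency ratio must be positive")
    return 1.0 / dependency_ratio

# today's OECD ratio of 0.2 -> 5 workers per senior
print(workers_per_senior(0.2))
# projected 2050 ratio of 0.5 -> 2 workers per senior
print(workers_per_senior(0.5))
```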

The seniors won’t want to be manhandled, nor will their offspring want to be guilt-ridden. Other than importing help from developing countries – which only postpones the issue briefly as those countries have gestating dependency ratio problems of their own – there is no solution but for the empathetic, autonomous robot. Grannies need – and deserve – an attentive, caring, interesting person with whom to interact. The only such persons that can be summoned into existence to meet this demand are manufactured software persons, i.e., empathetic, autonomous robots. Not surprisingly, empathetic machines are a focus of software development in the health care industry. Companies are putting expression-filled faces on their robots, and filling their code with the art of conversation.

Finally, the information technology (IT) industry itself is working on cyber-consciousness. The mantra of IT is user-friendly, and there is nothing friendlier than a person. A cyber-conscious house that we could speak to (prepare something I’d like for dinner, turn on a movie that I’d like) is a product for which people will pay a lot of money. A personal digital assistant that is smart, self-aware and servile will out-compete in the marketplace PDAs that are deaf, dumb and demanding. In short, IT companies have immense financial incentives to keep trying to make software as personable as possible. They are responding to these incentives by allocating floors of programmers to the cyberconsciousness task. Note how rapidly these programmers have arrogated the human pronoun “I” into their programs. Until cyberconsciousness began emerging, no one but humans and fictional characters could call themselves “I”. Suddenly, bits and building blocks of vitology are saying “how may I help you?,” “I’m sorry you’re having difficulty,” “I’ll transfer you to a human operator right away.” The programmers will have succeeded in birthing cyberconsciousness when they figure out how to make the human operator totally unnecessary. From their progress to date, this seems to be the goal. Add to this self-replication code, and conscious vitology has arrived.

In summary, humanity is devoting some of its best minds, from a wide diversity of fields, to helping software achieve consciousness. The quest is not especially difficult as it is a capability that can be intelligently designed; there is no need to wait for it to naturally evolve. As a result, cyberconsciousness will appear immediately on the heels of life-like vitology.

Unnatural Selection is Still Natural Selection.

Natural Selection is the name Darwin gave to Nature’s heartless process of dooming some species and variants of species to extinction, while favoring for a while others. The principal tool of Natural Selection is competition within a niche for scarce food. Losers don’t get enough food to reproduce, and hence they die out. Winners get the food, make the babies and pass on their traits, including the ones that make them superior competitors.


When environmental change eliminates much of the food, such as during an ice age, previously useful traits may become meaningless and former Natural Selection champions may quickly join the mountain of extinct losers. During such times Nature selects for traits that enable food gathering and reproduction in changing, or changed, environments. The cockroach has these traits.

Alternatively a new species may enter a niche, as when hominids entered the environment of the mammoth. In cases like this Nature might simply select the better killer, since it was not the mammoth’s food that interested Man, but the mammoth as food. Plants and animals will not only extinguish other species through starvation, they will also do so through direct extermination. All the while, Nature will carpet bomb all manner of species via environmental changes brought about by geophysics (e.g., volcanism) or astrophysics (e.g., asteroids).

Natural Selection is now acting upon software forms of life. In this case Nature’s tool is neither food nor violence. Instead, ey is using man as a tool, relying upon eir differential favoring of some self-replicating codes over others. Just as Nature started off with viruses in the biological world, ey is also flooding the vitological world with them. This is no doubt because viruses are the simplest types of self-replicating structures – they do nothing but self-replicate and plug themselves in somewhere (sometimes to great harm; other times to significant benefit). Molecular viruses spontaneously self-assembled out of inanimate molecules before anything more complicated did, and hence Natural Selection played with them first. Similarly, software viruses spontaneously man-assembled out of inanimate code before anything more complicated, and hence Natural Selection is playing with them first. As viruses randomly or with man’s help cobble together more functionality, then Natural Selection will play with the resultant complex entities.

Natural Selection is simply a kind of arithmetic for self-replicating entities. It is a tallying up of the results of what happens to self-replicating things in the natural world. Those that self-replicate more successfully are represented by a larger slice of the pie of life. There are many ways to self-replicate more successfully – grab resources better than others, kill others better than they can kill you, adapt to changes better than others. Nature doesn’t really care how one self-replicates more successfully. Ey just keeps track, via Natural Selection, by awarding the winners larger shares of the pie of life.

Since math is math, whether done by people or bees, Nature surely does not care if the agent of selection is human popularity rather than nutritional scarcity. Natural Selection is no less natural for humans being in the middle. Indeed, we have human intermediation to thank for thousands of recombinant DNA sub-species, hundreds of plant types and dozens of animal species. Thank Man for the household dog!

Man is now hard-at-work naturally selecting for the traits that make software more conscious. Humanity cannot resist an overwhelming urge to create unnatural life in the image of natural life. But this effort at Unnatural Selection is still Natural Selection. The end result will still be an arithmetic reordering of pie shapes and pie slices. The overall pie of life will be much larger, for it will now include vitology as well as biology. And within that larger pie, there will be slices accorded to each of the types of vitological life and biological life that successfully self-replicate in a changing environment. Mindclone consciousness will arrive vastly faster than its biological predecessor because Unnatural Selection is Natural Selection at the speed of intentionality.

Wednesday, April 8, 2009

WHAT IS MINDWARE?

Mindware is operating system software that (a) thinks and feels the way a human mind does, and (b) sets its thinking and feeling parameters to match those discernable from a mindfile. Mindware relies upon an underlying mindfile the way Microsoft Word relies upon a textfile. When appropriate parameters are set for mindware it becomes aware of itself and a cyberconscious entity is created.

The richness of a cyberconscious entity’s thoughts and feelings is a function of its source mindfile. In the extreme case of no mindfile, the mindware thinks and feels as little as a newborn baby. If the mindware’s parameters are set haphazardly, or shallowly, a severely dysfunctional cyberconsciousness will result. In the normal case, however, of mindware having access to a real person’s mindfile, the resultant cyberconsciousness will be a mindclone of that person. It will think and feel the same, have the same memories, and be differentiated only by its knowledge that it is a mindclone and by its different, substrate-based abilities.

Is mindware achievable? Yes, because our human thoughts and emotions are patterns amongst symbols. These patterns can be the same whether the symbols are encoded in our brains or in our mindfiles. The patterns are so complex that today only certain threads are available as software. For example, software that thinks how to get from our house to a new restaurant is now common, but didn’t exist just a decade ago. Every year the range of symbol association achievable by software leaps forward. It is merely a matter of decades before symbol association software achieves the complexity of human thought and emotion.
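Route-finding is a concrete instance of one such "thread" of symbol association already captured in software. A minimal sketch, using Dijkstra's algorithm over a toy road network (all the place names and travel times here are invented for illustration; this is not any particular navigation product's implementation):

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm. graph maps each place to a dict of
    {neighbor: travel_minutes}; returns (route, total_minutes)."""
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, minutes in graph[node].items():
            nd = d + minutes
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # walk back from goal to start to recover the route
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

roads = {
    "home":       {"main_st": 5, "highway": 2},
    "highway":    {"main_st": 1, "restaurant": 9},
    "main_st":    {"restaurant": 4},
    "restaurant": {},
}
# -> (['home', 'highway', 'main_st', 'restaurant'], 7)
print(shortest_route(roads, "home", "restaurant"))
```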



The preceding paragraph makes two claims deserving of expanded attention: that our mental states are merely a matter of patterns amongst symbols, and that such patterns could be replicated in software. Let’s turn first to how the mind works, and then to its possible replication in mindware.

Consider: what might our psychology be, if not patterns among symbols? What else is there? One idea, associated with the Australian philosopher David Chalmers, is that some sort of metaphysical spirit animates our thoughts and feelings. Another idea, propounded by the British mathematician Roger Penrose, is that our consciousness arises from quantum physical transitions deep within sub-microscopic intra-neural structures. In neither case, nor in the many variants of each, is it possible to disprove the claim, because they rest upon essentially invisible, non-testable phenomena. Similarly, it cannot be disproved, at this time, that consciousness arises from sufficiently complex patterns among symbols. Those patterns are as yet too complex to be either fully sorted out in the brain or replicated in software.

You, the reader, may take a position or remain agnostic on the source of consciousness. Either way, I’m sure that many people will try to create consciousness by replicating in software the mental associations that are a hallmark of our thoughts and feelings. From there philosophers will argue over whether cyberconsciousness is real, as they argue over the consciousness of cats and dogs. But just as most humans believe their pets think, plan and feel, most humans will see consciousness in mindware if it resembles closely enough the consciousness they see in themselves.

Computer scientist Marvin Minsky is convinced that emotions are just another form of thought. He believes feelings, no less than ideas, are based upon complex associations amongst mental symbols, each of which is ultimately rooted in a bunch of phonetically or visually specific neurons. It is today impossible to prove him right or wrong. The question is whether we believe that entertainment companies and customer service providers will persistently pursue the creation of emotion-capable software products – interfaces with feelings. I am convinced that the answer is yes, because there will be a large market for human-like software. Whether software-based emotions are real emotions is a question for philosophers, such as whether emotions arise from patterns of symbols or from metaphysical spirits or from quantum physical sub-atomic particle states. Everyday people will feel the software-based emotions are real if they seem as real as those of their relatives, neighbors and friends.

The second question to be answered is if thoughts and feelings are based upon complex patterns amongst symbols, how exactly can those patterns be discerned and replicated with mindware? Every symbol in our mind – such as the phoneme-specific neurons that, strung together, we learned very young sounded like “apple” – is linked to many other symbols. For “apple” those other symbols are its various images and generalized image, its taste, and its various appearances in our life (orchards, produce departments, pictures in books). Some of those associations are positive, some negative and some neutral. Each of these associations can be replicated in software, along with positive or negative values and probability weightings. Mindware is software that creates a cyberconscious version of you with the same associations, values and weightings to every symbol within your mindfile that you evidence having based upon your saved data. Where data is missing, mindware interpolates answers, makes reasoned guesses and imports general cultural information relevant to your socio-cultural niche. When the mindware converses with someone, the symbol apple will be triggered if it would have been triggered in your own mind, and it will enter into the discussion no differently than how it would have entered into your discussion. In this regard, the mindware gives rise to your mindclone.
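One way to picture the data structure this paragraph describes is as a graph of symbols, each carrying valenced, probability-weighted links to other symbols. Everything in this sketch -- the class names, the threshold, the particular valences and weights -- is a hypothetical illustration of the idea, not an actual mindware design:

```python
from dataclasses import dataclass, field

@dataclass
class Association:
    target: str     # the linked symbol, e.g. "orchard"
    valence: float  # -1.0 (negative) .. +1.0 (positive)
    weight: float   # 0..1: how likely the link is to fire

@dataclass
class Symbol:
    name: str
    links: list = field(default_factory=list)

    def triggered(self, threshold=0.5):
        """Return the associations weighty enough to enter a discussion."""
        return [a.target for a in self.links if a.weight >= threshold]

# "apple" as it might be reconstructed from one person's mindfile
apple = Symbol("apple", [
    Association("orchard", valence=0.8, weight=0.7),
    Association("produce department", valence=0.1, weight=0.6),
    Association("worm", valence=-0.6, weight=0.2),
])
print(apple.triggered())  # -> ['orchard', 'produce department']
```

Lowering the threshold surfaces weaker, rarely-voiced associations, which is one crude way to model how a mindclone's conversation could range from habitual remarks to obscure recollections.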

Mindware is a kind of operating system that can be saved into billions of unique states, or combinations of preferences, based upon the unique ways of thinking and feeling that are discernible from your mindfile. Dozens of personality types, traits and/or factors, and gradations amongst these, yield more unique combinations than there are living people. Similarly, dozens of alphabet letters and the many ways to arrange them can create more unique names than there are people on the planet.

For example, people can be of several different personality orientations – introvert, extrovert, aggressive, nurturing and so on. Most psychologists say there are just five basic flavors or “factors”, but others say there are more. Nevertheless, virtually all agree on some finite, relatively small number of basic psychological forms taken to greater or lesser degrees by human minds. Multiplying out these possibilities would provide mindware with a vast number of different personality frameworks from which to choose a best fit – based upon a rigorous analysis of the person’s mindfile – onto which mannerisms, recollections and feelings can then be grafted.

To be a little quantitative, imagine mindware adopts the currently popular view that there are five basic personality traits, each of which remains quite stable in one’s adult life: Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism. Literally thousands of English words have been associated with each of these five traits, but suppose for sake of example we say that each person would be scored by mindware only from -100 to +100 on each of these traits (from an analysis of their mindfile) – that is, 201 possible scores per trait, counting zero. For example, an individual who was definitely prone to impulsive decisions, but no more than average among millions of analyzed mindfiles, might be assigned a Neuroticism personality trait score of +50. Since each of the T traits independently takes one of S possible scores, the number of unique personality frameworks available to mindware is S^T, where S = the number of possible personality trait scores and T = the number of personality traits. For our example, S^T = 201^5 = 328,080,401,001. These 328 billion personality frameworks are enough to ensure personality uniqueness, which also means they are likely to ensure a very good fit for each person. The more sizes a pair of jeans comes in, the more likely it is that everyone will find a pair that fits them just right!
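The arithmetic above is easy to check: five traits, each scored on 201 discrete values from -100 to +100, give 201 to the fifth power distinct combinations.

```python
# Counting the unique personality frameworks in the example above:
# T independent traits, each taking one of S discrete scores,
# yields S**T distinct combinations.

S = 201  # scores from -100 to +100 inclusive, counting zero
T = 5    # the five basic personality traits

frameworks = S ** T
print(f"{frameworks:,}")  # -> 328,080,401,001
```

Adding a sixth trait, or finer score gradations, would multiply this number further, which is why the exact choice of five traits and 201 scores matters less than the combinatorial explosion itself.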

The point here is not that there are precisely five personality traits, or exactly 201 discernible degrees of possessing each such trait. Instead, what is being shown is that a relatively easy problem for mindware to solve can result in a practically unlimited number of individualized personality frameworks. Specifically, mapping the words, images and mannerisms from a lifetime mindfile into a matrix of personality trait buckets, with a positive or negative strength for each such bucket, will produce more than enough unique personality templates to assure a very good fit to the original personality.

Mindware works like a police sketch artist. A sketch artist is trained to know that there are a limited number of basic forms the human face can take. Based upon inputs from eyewitnesses (analogous to processing a mindfile), the artist first chooses a best-fit basic facial form, and then proceeds to graft unique details onto it. Often there is an iterative, back-and-forth process of sketching and erasing as additional details from eyewitnesses refine an initial choice of basic facial form. In the same way, mindware will be written to iteratively reevaluate its best-fit personality structure based upon additional details from continued analyses of a mindfile.

Mindware will have settings that govern the duration of its iterative process. After much iteration the mindware will determine that an asymptotic limit has been reached. It will do this by running thousands of “mock” conversations with tentative versions of a replicated mind, and comparing these with actual conversations or conversational fragments from an original’s mindfile. The iterative process will end once the mind it has replicated from the mindfiles it has been fed has reached what is called “Turing-equivalence” with the original mind. This means that the test proposed in 1950 by the computing pioneer Alan Turing has been satisfied. That test says that if it is not possible to tell whether a conversant is a computer or a person, then the computer is, for all practical purposes, thinking like a person. It would be as if the police sketch artist produced a drawing that was as good as a photograph.
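The stopping rule described above – iterate until the match between mock and actual conversations stops improving – can be sketched as a simple loop. Everything here is a hypothetical stand-in: the word-overlap similarity measure is a crude placeholder for the far richer comparisons the text envisions.

```python
# Minimal sketch of the iterative fitting loop: refine until the
# average match between mock replies and actual mindfile fragments
# stops improving by more than epsilon (the asymptotic limit).
# All function names here are illustrative assumptions.

def similarity(mock, actual):
    """Crude placeholder: fraction of the actual fragment's words
    that also appear in the mock reply."""
    mock_words, actual_words = set(mock.split()), set(actual.split())
    if not actual_words:
        return 0.0
    return len(mock_words & actual_words) / len(actual_words)

def fit_personality(generate_mock, fragments, epsilon=0.001, max_rounds=1000):
    """Iterate mock conversations against real fragments; stop once
    the score gain per round falls below epsilon."""
    best = 0.0
    for round_number in range(max_rounds):
        scores = [similarity(generate_mock(f), f) for f in fragments]
        score = sum(scores) / len(scores)
        if score - best < epsilon:  # asymptote reached
            return best, round_number
        best = score
    return best, max_rounds
```

In a real system `generate_mock` would itself improve between rounds as personality settings are adjusted; here it is only a hook to show where the comparison and stopping test sit.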

The rapid ferreting out of mindware settings from a mindfile has recently been made more feasible thanks to pattern recognition, voice recognition and video search software. It is now possible on Google Video to search videos by typing in desired words. Mindware will build upon this capability. It will analyze mindfile video for all words, phrases and indications of feeling. These will be placed into associational database arrays, matched to personality traits and strengths, and then used to fit a personality profile to the peculiar characteristics evidenced in the analyzed mindfile. Keep in mind that we humans have just a half-dozen basic expressions, only a dozen or two emotions, a facial recognition limit in the low hundreds and an inability to remember more than 10% of what we heard or saw the previous day. Furthermore, the personality template that mindware puts together for us is blanketed with all of the factual specifics from our mindfile. While this is rocket science, it is rocket science that we can, and soon will, do. Mindware is a moon landing, and we did that in the sixties.
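The step of matching words to trait buckets can also be illustrated in miniature. The six-word lexicon below is a tiny invented stand-in for the thousands of trait-associated English words the text mentions; the scoring scheme is purely hypothetical.

```python
from collections import Counter

# Toy mapping of transcript words onto the five trait buckets, using
# the -100..+100 scoring range from the earlier example. The lexicon
# and score increments are illustrative assumptions only.

TRAIT_LEXICON = {
    "curious":  ("Openness", +10),
    "punctual": ("Conscientiousness", +10),
    "outgoing": ("Extraversion", +10),
    "kind":     ("Agreeableness", +10),
    "anxious":  ("Neuroticism", +10),
    "calm":     ("Neuroticism", -10),
}

def score_transcript(text):
    """Accumulate trait scores from lexicon words found in a
    transcript, clamped to the -100..+100 range."""
    scores = Counter()
    for raw in text.lower().split():
        word = raw.strip(".,;:!?")
        if word in TRAIT_LEXICON:
            trait, delta = TRAIT_LEXICON[word]
            scores[trait] += delta
    return {t: max(-100, min(100, s)) for t, s in scores.items()}

print(score_transcript("She is kind, calm and endlessly curious."))
```

Scaled up across a lifetime of transcripts, video and writing, this kind of tallying is what would settle each trait’s score and thereby select one personality framework out of the hundreds of billions available.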

Just because we are unique does not mean that we cannot be replicated. An original essay can still be copied. Mindware is a kind of duplicating machine for the mind. Because the mind is vastly more complex and less accessible than a document, it is not something that can simply be optically scanned and replicated. Instead, to scan a mind one must scan and analyze the digital output of that mind – its mindfile – while iteratively generating a duplicate of that mind, relying on associated databases of human universals and socio-cultural contexts. It does sound like an amazing piece of software, but no more amazing to us than our photocopying machines would be to Abraham Lincoln, or our jumbo jets to the Wright Brothers. And software technology is advancing much more quickly today than machine technology was back then.

Operating system software with mindware’s number of settings commonly runs on laptop computers. The challenge is to write mindware so that it makes associations and interacts as the human brain does. This is not a challenge of possibility, but a challenge of practice, design and iterative improvement of approximations. Mindware is just really good software written for the purpose of replicating human thoughts and feelings.