Positronic Love

Pygmalion priant Vénus d'animer sa statue, Jean-Baptiste Regnault


There was a sculptor of Ancient Greece by the name of Pygmalion who created a statue of a woman so beautiful and so lifelike that he fell at once in love, and pined away for desire of that which he could not possess. Daily he kissed the statue and begged her to live, to breathe, to love, but she was but an image in ivory and answered not his plea. On the feast of Aphrodite, Pygmalion brought an offering to her temple and humbly implored the goddess of love, with tears and sonnets, to grant his heart’s deepest desire. Upon returning to his workshop he kissed his statue once more and–what do you know–she came to life.

This is the first of many tales celebrating the love of human beings for their perfect, but cold-hearted creations. Sometimes a miracle occurs, and they live happily ever after. But what if the being you fall in love with can’t love you in return?


When confronted with the possibility of creating artificial intelligence, dedicated speculators often pose two questions. 1. Is it possible to create true intelligence, rather than the seeming intelligence of advanced analytical powers (such as those demonstrated by IBM Watson)? 2. Even supposing a computerized system could somehow develop intelligence to a high enough degree to qualify as a true AI–is it possible to create a machine that can have feelings?

Scanning of a human brain by X-rays


The question of artificial intelligence is an endlessly fascinating one. The debate surrounding artificial sentience is even more complex. But creating artificial emotion is almost an entirely different focus than the preceding two–and no less important. After all, you don’t have to be human in order to feel emotions. Animals are clearly capable of feeling and demonstrating love, affection, pain, fear, and hatred, among many other feelings. Most feelings are the result of a positive or negative feedback mechanism in the body. When we do something we associate with pleasure, endorphins are released into the bloodstream, causing us to feel happy or contented. This is an automated process that has nothing to do with our intelligence level.
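The feedback mechanism described above can be caricatured in a few lines of code. This is only a toy sketch, not a real model of affect; the function name and the numbers are invented for illustration. The point is that a mood-like signal can be updated by pleasant or unpleasant stimuli with no intelligence involved at all.

```python
# Toy sketch (hypothetical): "mood" as a bounded feedback signal
# nudged up by pleasant stimuli and down by unpleasant ones,
# entirely independent of any reasoning ability.

def feedback(valence: float, stimulus: float) -> float:
    """Nudge a mood value toward a positive or negative stimulus."""
    # A positive stimulus (think: endorphin release) raises the mood;
    # a negative one lowers it. Clamp to keep the value bounded.
    return max(-1.0, min(1.0, valence + 0.1 * stimulus))

mood = 0.0
for s in [1, 1, -1, 1]:   # a day's worth of stimuli
    mood = feedback(mood, s)
print(round(mood, 2))     # 0.2 after three positive and one negative event
```

Nothing in this loop "feels" anything, which is precisely the essay's point: the bookkeeping is trivial; the experience is not.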

The science of love is mostly biological. Our subconscious survival skills automatically detect potential partners for procreation and send the brain signals that release a variety of hormones intended to further the continuation of the human race. Presumably early artificial intelligences will be unable to procreate, but there are other kinds of love–such as the love between children and their parents. This is also a product of biological self-preservation. If parents hated their children, said children would most likely not survive to adulthood and the human race would go extinct.

So could a program based on the concept of self-preservation be built into an artificial intelligence that would cause it to feel positive emotions towards people who protect it and negative ones towards those who are aggressive and wish it harm? Presumably, yes. Asimov’s Third Law of Robotics states: “A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.” But is self-preservation sufficient to induce feelings of love or happiness? The countless people who have done things they absolutely hated in order to survive suggest not. Since emotion is the result of chemicals released into the bloodstream, we must first find a way to make our robot feel physical pleasure and pain–which presumably means it must first have a physical body and not be a chat-bot locked on a hard drive somewhere.
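A self-preservation rule of the kind asked about above is easy to program; whether its output counts as an emotion is the hard part. Here is a minimal, hypothetical sketch (the action names and valence values are invented) that labels protective agents positively and hostile ones negatively:

```python
# Hypothetical sketch: a self-preservation appraisal rule.
# It assigns a crude "valence" to observed actions - positive for
# protection, negative for threat. Whether such a label is a feeling
# is exactly the question the essay raises.

def appraise(action: str) -> float:
    """Map an observed action to a valence for the robot."""
    protective = {"repair", "recharge", "shield"}
    hostile = {"strike", "unplug", "dismantle"}
    if action in protective:
        return 1.0    # "gratitude" toward the protector
    if action in hostile:
        return -1.0   # "fear" of the aggressor
    return 0.0        # indifference

print(appraise("recharge"))   # 1.0
print(appraise("unplug"))     # -1.0
```

The rule satisfies the Third Law in spirit, but it is a lookup table, not an inner life.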

Many science fiction writers have portrayed androids as having a synthetic body that operates on biological principles similar to the human form. A combination of hydraulics and electricity simulates blood and nerves. Certainly electricity is an important factor: since the android requires it to function, slowing down or shutting off the supply would be like slowing or stopping a human’s heart. However, the increased heart rate a human being feels when spending time with a loved one is not the cause of love but the effect. So increasing the voltage of your robot will not make it fall in love.

So far we’ve completely skipped over the complexities of intelligence. Suppose you created a robot that could pass the Turing test. The Turing test is one conceivable method for determining the intelligence level of a robot: a human tester sits on one side of a screen and interviews a number of subjects, one of which is a robot. If the tester cannot tell the people from the AI, then the AI has passed the test. So what if the tester starts asking the computer how it feels? The AI says it is nervous. Nervousness is an emotion. But is the AI really nervous, or does it say that because that is how a human would react in that situation? Let’s take the hypothetical further. The robot passes the test, has a synthetic body that closely mimics that of humanity, and is turned loose in the world. The robot is intelligent and physically attractive, and an unsuspecting human being becomes interested in it as a potential mate. How does the robot respond? It’s programmed to behave as a human would, and will therefore respond as a typical human in these situations. It’s also programmed to behave like a good human, so its responses are pleasant. One night the human confesses love to the robot. The robot reciprocates.
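The interview scenario above can be sketched as a few lines of code. This is a deliberately crude, hypothetical stub (the questions and answers are invented): the "AI" reports nervousness only because a canned table says a human would, which is exactly the worry the passage describes.

```python
# Minimal sketch of the imitation-game interview described above.
# The "AI" is a canned-response stub: it claims an emotion because
# that is the humanlike answer, not because it feels anything.

CANNED = {
    "how do you feel": "Honestly, a little nervous.",
    "are you human": "Of course I am.",
}

def ai_reply(question: str) -> str:
    """Return the humanlike canned answer, if one exists."""
    return CANNED.get(question.lower().rstrip("?"), "Could you rephrase?")

def interview(questions):
    """The blind tester puts the same questions to every subject."""
    return [ai_reply(q) for q in questions]

print(interview(["How do you feel?"]))
```

A tester who only reads the transcript has no way to distinguish this lookup from genuine nervousness, which is why passing the test settles nothing about feeling.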

But is it real? The robot is acting on programming rather than actual emotion. Or is it? Are our own emotions any more than biological programming? Still, the robot responds predictably. There are rules and fail-safes. It would respond the same way to any human who approached it, and it has no personal opinion. It is intelligent, but it is a computer. And since it lacks free will, it is presumably not a sentient being but a mere machine.

So the engineer of an emotional robot goes back to the drawing board. Instead of making a robot that emulates humans, he makes a robot that is the best possible robot it can be. When faced with the question “How do you feel?” the robot answers honestly: “I am a robot. I don’t have feelings.” This is a self-aware robot, a robot that can ask all the important questions, like “What am I?” and “What is the meaning of life?” But without a biological feedback system to trigger emotional responses, or a programmed routine to fake them, can this robot ever feel anything, or will it take delight only in computation and rationality?

Now we come to the final possibility, the most mysterious one: the development of spontaneous sentience–the robot with a soul. When Pygmalion’s statue moved and breathed and spoke to him, the miracle wasn’t that stone became flesh and blood. The miracle was that she was alive and she loved him and they lived happily to the end of their days. And so when science fiction writers create the perfect android, the heir of the imperfect and short-lived human race, they imply a certain miracle.

When the engineers at U.S. Robots and Mechanical Men perfected the positronic brain, they had no plans to create real living beings. They only wanted to produce intelligent but mechanical servants for the human race. But the synthetic neural network they created was so complex that it began to evolve on its own, and over time there were unforeseen consequences. The robots began to develop opinions, personalities, emotions.


When the Tyrell Corporation developed the Nexus-6 android, its only goal was to provide intelligent, human-like, but physically superior slaves to help conquer new worlds beyond Earth. But the mental system it created was so complex that after four years the androids began to develop empathic cognition–the ability to feel and think the way humans feel and think. When the Sirius Cybernetics Corporation attempted to develop androids with a “Genuine People Personality,” it abandoned the project after the prototype developed a personality a little too people-like.

When Tony Stark set out to create Ultron, he wanted a super-intelligent computer to operate a highly complex military system designed to protect the planet. But the model he copied was too complex to be controlled by the programming designed for it, and the brand-new AI “woke up.”

In every case the result was not the goal, because something else got involved. A miracle occurred. It could not be duplicated or reverse-engineered. Trying to explain it means delving into questions like what makes us human, what makes us individuals, and what defines the human experience. The best we can do when exploring the topic of artificial intelligence is to ask: “Do androids dream of electric sheep?”

