SPIEGEL: Dr. Gelernter, the American journalist Ambrose Bierce described the word we are looking for as "a temporary insanity curable by marriage." Do you know what we mean?
Gelernter: I don't.
SPIEGEL: It's love. It's a question from the TV show Jeopardy, and the IBM supercomputer Watson had no problem finding the solution. So does that mean Watson knows what love is?
Gelernter: He doesn't have the vaguest idea. The field of artificial intelligence has not even started to address how emotions are represented cognitively, what makes a belief and so forth. The problem is, I don't think only with my mind. I think with my body and my mind together. There's no such thing as love without bodily input, output, reaction and response. So love is beyond Watson.
SPIEGEL: Why, then, is Watson still doing well at Jeopardy?
Gelernter: Because the body is not involved in playing Jeopardy. You don't have to mean or to believe a single thing you say. The game is superficial enough to be winnable by an entity with no emotions, no sensations, and no self.
SPIEGEL: Still, Watson's opponents, the all-time Jeopardy champions Ken Jennings and Brad Rutter, said in interviews that they felt they were playing against a human. How is it that we can even consider Watson to be on a par with us?
Gelernter: I even consider my macaw Ike to be on a par with me (laughs and points to his macaw). But seriously, I'd rather chat with Watson than with some of the people in my department at Yale. Any baby with a teddy bear immediately anthropomorphizes the teddy bear. We want to see images of ourselves, mirrors of ourselves. Anthropomorphizing is a powerful human urge. So I have no problems calling Watson a "he." That's a normal human response.
SPIEGEL: Watson defeated Jennings and Rutter in the competition recently with staggering ease. If not human-like, can Watson at least tell us something about the human mind?
Gelernter: Watson was not built to study the human mind. And the IBM people don't claim that they've solved any cognitive problems. Watson was built to win Jeopardy. That's it. For that purpose, it draws heavily on the parallel programming strategy. This strategy explicitly says: forget about the brain. The question is, can we burn raw computing power in such a way that we can create something that's able to compete with a human? The result is an extraordinary piece of technology that -- unlike IBM's chess computer Deep Blue -- has major implications for applied artificial intelligence.
Computers Don't Know What Pain Is
SPIEGEL: But could you bring yourself to call a machine like that "intelligent"?
Gelernter: The question is how superficial you are willing to be with your definition of "intelligence." When we think of it seriously, intelligence implies a self who is intelligent, an entity that can sense its thoughts, is aware of the fact that it's thinking and that it is manifesting intelligence. None of that is part of Watson by design.
SPIEGEL: But let's assume that we start feeding Watson with poetry instead of encyclopedias. In a few years' time it might even be able to talk about emotions. Wouldn't that be a step on the way to at least showing human-like behavior?
Gelernter: Yes. However, the gulf between human-like behavior and human behavior is gigantic. Feeding poetry into Watson as opposed to encyclopedias is not going to do any good. Feed him Keats, and he will read "My heart aches, and a drowsy numbness pains my sense." What the hell is that supposed to mean? When a poet writes "my heart aches" it's an image, but it originates in an actual physical feeling. You feel something in the center of your chest. Or take "a drowsy numbness pains my sense": Watson can't know what drowsy means because he's never fallen asleep. He doesn't know what pain is. He has no purchase on poetry at all. Still, he could win at Jeopardy if the category were English Romantic poets. He would probably even do much better than most human contestants at not only saying Keats wrote this but explaining the references. There's a lot of data involved in any kind of scholarship or assertion, which a machine can do very well. But it's a fake.
SPIEGEL: What is so special about the human brain that the machine can't replicate it?
Gelernter: The brain is radically different from the machine. Physics and chemistry are fundamental to its activity. The brain moves signals from one neuron to another by using a number of different neurotransmitters. It is made out of cells with certain properties, built out of certain proteins. It is a very elaborate piece of biology. The computer, on the other hand, is a purely electronic machine made out of semiconductors and other odds and ends. I can't replicate the brain on a chip, just as I can't replicate orange juice on a chip. Orange juice is just not the same thing as a chip.
SPIEGEL: Are you serious about your statement that a machine won't truly be able to think until it can daydream and hallucinate?
Gelernter: Absolutely. We go through an oscillation between different mental states several times during the day, and you can't understand the mind without understanding this spectrum. We have wide-awakeness, high energy, high level of concentration, which is associated with analytical capacities. And at the other end of the spectrum, we are exhausted, our thoughts are drifting. In that state, our thoughts are arranged in a different way. We start to freely associate. Take Rilke: All of a sudden it occurred to him that the flight of a swallow in the twilight sky is like a crack in a teapot. It's a very strange image but a very striking image. Certainly, nobody else ever said it before. These sorts of new analogies and new images give rise to creativity, but also to scientific insights. Emotion has a lot to do with it. Why do you combine a bird flying with a piece of ceramic with a crack? Because, at least in Rilke's mind, they are tagged with a similar emotion.
SPIEGEL: But we are certainly able to think without getting so poetic.
Gelernter: True. At the upper end of the spectrum, when my thoughts are disciplined by the logical rules of induction, analogies play no part. I have hypotheses, and I work my way through to conclusions. That kind of intelligence doesn't need emotions, and it doesn't need a body. But it is also of almost no importance to human beings. We think purely logically, analytically, roughly zero percent of our time. However, when I am thinking creatively, when I am inventing new analogies, I can't do that without my emotional faculty. The body is intensely involved; it creates the fuel that drives that process by engendering emotions.
SPIEGEL: But even Watson might soon be able to come up with interesting analogies. Just give him the right books to read.
Gelernter: It's possible to build a machine that is capable of what seems like creativity -- even a machine that can hallucinate. But it wouldn't be like us at all. It would always be a fake, a facade. So, it is perfectly plausible that "Watson 2050" will win some poetry contests. It might write a magnificent sonnet that I find beautiful and moving and that becomes famous all over the world. But does that mean that Watson has a mind, a sense of self? No, of course not. There is nobody at home.
SPIEGEL: Can you be sure?
Gelernter: There is nothing inside.
SPIEGEL: How can you know, then, that somebody is at home within another human being?
Gelernter: I know what I am. I am a human being. If you are a human being too, my belief is you are intelligent. And not because you passed a test, not because you showed me you can do calculus or translate Latin. You could be fast asleep, somebody could ask, "Is he intelligent?" And I will say, "Yes, of course. He's a human being." The only intelligence everyone has ever experienced firsthand is his own. There is no objective test for intelligence in others. The observable behavior tells you nothing about what is within. The only way we can confidently ascribe intelligence is by seeing a creature like us.
"Scientists Will Never Reproduce a Human Mind"
SPIEGEL: There is a lab in Lausanne, Switzerland, where a group of scientists are trying to recreate the brain's biology in each and every detail, one neuron at a time, in a supercomputer. They hope to replicate a complete human brain within a decade. Wouldn't that be "a creature like us"?
Gelernter: They could produce a very accurate brain simulator. They may be able to predict the behavior of the brain down to the transmission of signals. But they're not going to produce a mind any more than a hurricane simulator produces a hurricane.
SPIEGEL: Other scientists are far more optimistic. Regardless of all the obstacles, they say, the exponentially growing number of transistors on a chip will provide us with virtually infinite possibilities. If we connect huge numbers of computer chips in the right way and give them the right tasks to perform, then at some point consciousness will emerge.
Gelernter: It is impossible to create mental states by writing software -- no matter how sophisticated it gets. If a simple computer can't produce orange juice, a much more complicated computer won't do any better. Computer chips are just the wrong substrate, the wrong stuff for consciousness. Now, can some kind of a miracle happen if you put a lot of them together? Maybe. But I have no reason to believe that such a miracle will happen.
SPIEGEL: Given that we can manage a really good fake, a robot that pretends to be conscious in a convincing way -- would we even notice if it wasn't the real thing?
Gelernter: It already makes no difference to us. Just take the robots in Iraq and Afghanistan that search for mines and so forth. The men on the front lines become emotionally attached to their robots; they're sad when they are destroyed. And 50 years from now, robots will be much better. There are a lot of lonely people in the world. Suppose such a person has a robot that is around all the time and chats with them. Sure they will be attached to it. The robot will know all about them. The robot will be able to say things like, "How are you feeling this morning? I realize your back was hurting yesterday." Will people have human-type feelings towards the robots? Absolutely. And then the question becomes: Does it matter that, in this sense, they are being defrauded? The answer is, given the scarcity of companionship in the world, it probably doesn't matter in practical terms. However, it certainly matters philosophically. If you care about what it is to be a human being, the robot is not going to tell you.
SPIEGEL: Things might change if you give him a near-perfect body, equipped with sensors that help him feel things and explore his environment like humans do.
Gelernter: In that case the machine would be capable of simulating humanness much more effectively. But a fake body attached to a computer is still not going to generate real sensations. If you knock your foot against something, your brain registers what we call pain. If you think of something good that is going to happen tomorrow, the body responds by feeling good, then the mind feels better and so forth. This feedback loop is very important to human behavior. A fake body, however, is still just binary switches with voltage levels going up and down.
SPIEGEL: The American computer scientist Ray Kurzweil argues that the Internet itself might be on the brink of becoming super-intelligent, just because it will have computing power beyond imagination. And his beliefs are gaining in popularity. Why are these ideas so attractive?
Gelernter: Because creating the mind is the philosopher's stone of the Digital Age. In the Middle Ages, the alchemists tried to produce gold. Now they've moved over to the mind. Don't get me wrong: They are going to produce a lot of interesting science along the way. But they are not going to get a mind.
If You're Dead, You're Dead
SPIEGEL: The so-called Singularity Movement predicts the advance of highly intelligent machines that will one day perhaps even become part of our bodies.
Gelernter: We are being offered more ways than ever to destroy humanity by negating the significance of humanness. In the science fiction community there are those who say, "I will live forever insofar as I will be able to take my entire mind state and upload it to some server, and then I can die, but it doesn't matter, because my mind lives on there." Now, any two-year-old child can see the flaw in this argument: When you die, you are dead, and it doesn't matter if there is one copy or a billion copies of what your mind was before you died. It doesn't matter to you. You're still dead. The great philosophical analogy of the second half of the 20th century was that mind is to brain the way software is to computer. But this is ridiculous. There is no analogy between mind and software.
SPIEGEL: Why not?
Gelernter: If you have some software, you can make as many copies as you like. You can put it on a million different computers, and it is always exactly the same software. Minds, however, run on exactly one platform. You can't swap the mind out to some storage medium and then run it again after keeping it offline.
SPIEGEL: Hypothetically, what would happen if you managed to transfer one person's brain into another person's body?
Gelernter: There would be no mind anymore. As you took the brain from somebody and put it into somebody else's head, the mind that you used to have is gone, because that mind was part of a body and responsive to that body. From a medical standpoint, the question is whether the brain is a flexible enough organ to re-tune itself to a different kind of input from a different body. But the original mind would definitely be lost.
SPIEGEL: Assuming those popular visions of Artificial Intelligence won't come true in the foreseeable future, where do you see AI research going in the next decade?
Gelernter: My hope is that the philosophy of mind and cognitive science will develop a very sophisticated theory of how the mind works. The philosophy of mind has been dazzled by computing, which led down the wrong path. We have to get rid of this ridiculous obsession with computing, which has done tremendous damage. People worrying about singularity should go back and read Nietzsche. They should try and understand Kafka seriously. They should read a poet like William Wordsworth. Now, in an entirely separate effort, Artificial Intelligence will produce more and more powerful machines. We'll rely on them heavily. They will fix problems and answer questions for us all the time. No one will claim that they have minds, least of all the people who built the programs.
SPIEGEL: One of those powerful assistants might well be a descendant of Watson. Let's assume it has been shrunk to the size of a pea and can be plugged into our brains. Wouldn't it be wonderful to have all that knowledge on hand within your own body?
Gelernter: I can have all of Watson's knowledge available already -- by just opening my laptop. Does it matter to me if I can get the answer not in 10 seconds but in 10 microseconds? It really depends on how I define my integrity as a human being. Suppose I implanted a Watson chip, gained more direct access to a million completely meaningless, disconnected facts, and then went on Jeopardy. I would win Jeopardy. However, it would give me no happiness, no satisfaction, no feeling of triumph, no feeling of accomplishment.
SPIEGEL: You wouldn't feel tempted to get yourself a Watson?
Gelernter: Sure I would. This is a brilliant, exciting piece of technology. I don't want to take anything away from it. It puts AI on a track that is going to produce fascinating technology. It uses exactly what we are rich in, namely pure primitive computer power, to produce sophisticated answers to complex questions. We need that ability, and Watson can do it.
SPIEGEL: Would you trade your macaw for Watson, if you had to choose?
Gelernter: No way would I trade my macaw in for any piece of software. Look at him. He's got a face. He's got a big smile on his beak. He's a creature who does have emotions, who has interests, and who is a member of the family. You'd have to offer me a lot more than Watson for my macaw.
SPIEGEL: Dr. Gelernter, thank you very much for talking to us.