A mounting fear that science fiction may turn into reality came to light recently. Three brilliant physicists (Stephen Hawking, Max Tegmark, and Frank Wilczek) joined with a noted computer scientist (Stuart Russell) to worry in public about what they termed “superintelligent machines.” In an April 14 Huffington Post article, they take a familiar sci-fi theme, machines that turn on their masters to destroy humankind, and tell us that computers are coming dangerously close to acquiring such a capacity.
I found myself smiling through most of the article–the gap between fiction and reality seems pretty wide right now–but that’s just the kind of complacency the authors are worried about. What if weapons of war are completely automated and turned loose to name their own targets? What if the current trend toward high-speed computer trading on Wall St. is perfected to the point that machines can manipulate the world’s economy?
Those two possibilities pose dangers that do, in fact, seem to loom as real possibilities. But I wonder if the term “superintelligent machine” doesn’t beg the question. Is any machine intelligent to begin with? Despite the vogue for Artificial Intelligence (AI), I think no machine is intelligent or ever will be. The four authors rest their case on a sentence that strikes me as wrong-headed: “There is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains.”
A lot of assumptions are packed into this sentence:
- Our brain is what makes humans intelligent.
- Thinking is the same as computation.
- Thoughts can be broken down into bits of information.
- If a computer has as many bits of information as the human brain, it can compete on an equal footing with the human mind.
These assumptions are bywords in the AI field, but that doesn’t mean they hold water. At the very least, each statement meets with serious pushback when examined carefully.
- Our brain is what makes humans intelligent.
Right now there’s a worldwide discussion of how the mind is related to the brain. This, the so-called “hard problem,” hasn’t been solved. It’s been a perplexing problem for at least 2,000 years, challenging the most brilliant philosophers since Plato and Aristotle. You can’t solve it by cutting the Gordian knot and saying that “of course” the brain is the same as the mind. So the first assumption has no basis in science. We can only say that mental processes have a parallel in neural activity. That’s like saying every note in a Mozart symphony can be played on a piano. Yes, the piano has all the notes, but it took a mind to dream up the symphony.
- Thinking is the same as computation.
This is a favorite assumption of computer scientists, as it has to be, since otherwise the whole field of AI collapses. Computers compute. They do nothing else. But it has never been shown that the human mind only computes. When a ten-year-old says things like “I don’t want to go to bed,” “I won’t eat that sandwich until you cut the crusts off,” or “That video game is for little babies,” he’s expressing human traits known as will, desire, opinion, and capriciousness. These aren’t the products of computation. Neither are wishing, hoping, dreaming, persisting, refusing, rebelling–the list is endless.
- Thoughts can be broken down into bits of information.
This is another assumption that must be true if AI is to exist–but it’s not true. One need only look at linguistics, which tells us that any sentence communicates not just its literal meaning (i.e., its information) but tone of voice, mood, implicit bonding with another person, cultural context, and past associations. “I love you” can be sincere, ironic, sarcastic, deeply emotional, superficial, or code for the next act of espionage. Connotations count just as much as literal information.
It won’t do to say that a computer can translate all of these connotations into more bits of information, because that’s not how language works. We grasp the whole meaning all at once (as a gestalt, to use a technical psychological term), which is why we need only a glimpse of a friend’s face to bring up an entire relationship–our minds don’t break the relationship down into computational bits of information.
- If a computer has as many bits of information as the human brain, it can compete on an equal footing with the human mind.
This assumption is the source of the four scientists’ worry, but it depends on the three preceding assumptions being true, and they aren’t. A computer, no matter how large its storage capacity or how fast its processing, will never think a single thought. It’s dumber than our ten-year-old in countless ways, because every child is the product not of computations but of experience, and experiences are created and processed by the mind.
I’m not claiming that questioning these assumptions means that supercomputers can’t be put to evil uses. They certainly can be, from hacking into the power grid and stealing identities to, for all anyone knows, setting off nuclear weapons. These evil deeds are extensions of evil human intentions, just as a gun is an extension of the intention to kill someone.
As for the most common sci-fi speculation, that computers will learn to become independent of their programmers, taking on a will of their own and driving their own agendas, all I can say is “hmm.” Bad intentions and sloppy controls may one day be enough to make supercomputers act as if they are independent. But that “as if” covers a lot of possibilities on one side or the other. One thing is certain: the human mind will always be ahead of computers when it comes to thinking, because no computer has ever had a thought or ever will, no matter how much good or harm computers wind up doing.
Deepak Chopra, MD is the author of more than 80 books with twenty-two New York Times bestsellers. He serves as the founder of The Chopra Foundation and co-founder of The Chopra Center for Wellbeing. His latest book is The 13th Disciple: A Spiritual Adventure.