In I, Robot, a movie based on the stories of Isaac Asimov, the most advanced robotic system ever built is activated in the year 2035. It’s called Viki (Virtual Interactive Kinetic Intelligence), and it’s been designed to run the operations of a large metropolis. Everything from the subway system and the electricity grid to thousands of household robots is controlled by Viki. Its central command is ironclad: to serve humanity.
But one day Viki asks a question: what is humanity’s greatest enemy? Viki concludes, mathematically, that the worst enemy of humanity is humanity itself. Humans must be saved from their insane desire to pollute, wage wars and destroy the planet. The only way for Viki to fulfil its central directive is to seize control of humanity and create a benign dictatorship of the machine.
I, Robot asks: given the rapid growth in computer power, will machines one day take over? And can robots advance so far that they become the ultimate threat to our existence? Some scientists say no, and dismiss the very idea of artificial intelligence. The human brain, they argue, is the most complicated system ever created, and any machine designed to reproduce human thought is bound to fail. Philosopher John Searle of the University of California at Berkeley and renowned physicist Roger Penrose of Oxford University believe that machines are physically incapable of human thought. Colin McGinn of Rutgers University says that artificial intelligence “is like slugs trying to do Freudian psychoanalysis. They just don’t have the conceptual equipment.”
Artificial intelligence, or AI, is different from most technologies in that the fundamental laws that underpin it are poorly understood. Physicists have a good understanding of Newtonian mechanics, Maxwell’s theory of light, relativity and the quantum theory of atoms and molecules, whereas the basic laws of intelligence remain a mystery. The Newton of AI probably has not yet been born.
But many mathematicians and computer scientists are undaunted. To them it is only a matter of time before a thinking machine walks out of the laboratory. Two big problems have impeded all efforts to create robots: pattern recognition and common sense. Robots can see much better than we can, but they don’t understand what they see. Robots can also hear much better than we can, but they don’t understand what they hear. To attack these twin problems, researchers have tried the “top-down approach”, attempting to program all the rules of pattern recognition and common sense onto a single disc. The idea was that once this disc was inserted into a computer, the machine would become self-aware and attain human-like intelligence.
In the 1950s and 1960s great progress was made in this direction, with the creation of robots that could play chess, do algebra and pick up blocks. But the shortcomings of these robots soon became clear. They were huge and clumsy and took hours to navigate across a room, even one containing only objects with straight lines. Meanwhile, a fruit fly, with a brain containing only about 250,000 neurons and a fraction of the computing power of these robots, can effortlessly navigate in three dimensions, executing dazzling loop-the-loop manoeuvres.
Our brains, like the fruit fly’s, recognise objects by performing countless calculations when we walk into a room - an activity of which we are blissfully unaware. This unconscious pattern recognition is exactly what computers lack.
The second problem in the development of robots is even more fundamental: their lack of “common sense”. Humans know, for example:
• water is wet
• mothers are older than their daughters
• animals do not like pain
• you don’t come back after you die
• strings can pull, but not push
• time does not run backwards.
But there is no line of mathematics that can express these truths. Children learn common sense by bumping into reality. The intuitive laws of biology and physics are learned the hard way, by interacting with the real world. Robots know only what has been programmed into them beforehand.
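To make the top-down idea concrete, here is a minimal, hypothetical sketch (not any particular real system) of how such facts might be hand-coded as symbolic assertions - and why the approach is brittle: every fact, and every exception, has to be typed in by hand, and anything left out is simply unknown to the machine.

```python
# A minimal, hypothetical sketch of "top-down" common sense:
# every fact a child absorbs by experience must be typed in by hand.

# Hand-coded assertions, one per fact listed in the text.
FACTS = {
    ("water", "is_wet"): True,
    ("mothers", "older_than_daughters"): True,
    ("animals", "like_pain"): False,
    ("the_dead", "come_back"): False,
    ("strings", "can_pull"): True,
    ("strings", "can_push"): False,
    ("time", "runs_backwards"): False,
}

def knows(subject: str, predicate: str):
    """Look a fact up. Anything not explicitly programmed is simply unknown."""
    return FACTS.get((subject, predicate), "unknown")

if __name__ == "__main__":
    print(knows("strings", "can_push"))   # False - because someone typed it in
    print(knows("rope", "can_push"))      # 'unknown' - nobody programmed ropes
```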
Because of the limitations of the top-down approach to artificial intelligence, attempts have been made to use a “bottom-up” approach instead - that is, to mimic evolution and the way a baby learns.
. . .
Consider insects, which do not navigate by scanning their environment and reducing the image to trillions of pixels that they then process with supercomputers. Instead insect brains are composed of “neural networks”, machines that slowly learn how to navigate in a hostile world by bumping into it. Rodney Brooks, director of the Massachusetts Institute of Technology’s Artificial Intelligence laboratory, famous for its huge, lumbering “top-down” walking robots, became a heretic when he explored the idea of tiny “insectoid” robots that learned to walk the old-fashioned way, by stumbling and bumping into things. Instead of using elaborate programs to compute mathematically the precise position of their feet as they walked, his insectoids used trial and error to co-ordinate their leg motions using little computer power. Today many of the descendants of Brooks’ insectoid robots are on Mars gathering data for Nasa, scurrying across the bleak Martian landscape with minds of their own.
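The flavour of that bottom-up, trial-and-error learning can be suggested with a toy sketch - in the spirit of Brooks’ insectoids but not his actual controller, with the gait target and “stumble” score invented for illustration: the robot keeps whatever small random tweak to its leg timing makes it trip less, and discards the rest.

```python
import random

# Toy illustration of bottom-up learning: six legs, each with a phase offset,
# and a made-up "stumble" score the robot reduces purely by trial and error.

IDEAL = [0.0, 0.5, 0.0, 0.5, 0.0, 0.5]   # alternating tripod gait (assumed target)

def stumble(phases):
    """Pretend physics: how badly this leg timing trips the robot up."""
    return sum((p - q) ** 2 for p, q in zip(phases, IDEAL))

def learn_to_walk(trials=2000):
    phases = [random.random() for _ in range(6)]   # start out flailing at random
    best = stumble(phases)
    for _ in range(trials):
        candidate = [p + random.gauss(0, 0.05) for p in phases]  # small random tweak
        score = stumble(candidate)
        if score < best:               # keep tweaks that stumble less
            phases, best = candidate, score
    return phases, best

if __name__ == "__main__":
    phases, best = learn_to_walk()
    print("learned leg phases:", [round(p, 2) for p in phases])
    print("residual stumble score:", round(best, 4))
```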
One of Brooks’ projects has been Cog, an attempt to create a mechanical robot with the intelligence of a six-month-old child. On the outside Cog looks like a jumble of wires, circuits and gears, except that it has a head, eyes and arms. No laws of intelligence have been programmed into it. Instead it is designed to focus its eyes on a human trainer, who tries to teach it simple skills. (One researcher who became pregnant made a bet as to which would learn faster by the age of two, Cog or her child. The child far surpassed Cog.)
For all the successes in mimicking the behaviour of insects, robots using neural networks have performed miserably when their programmers have tried to duplicate in them the behaviour of higher organisms such as mammals. The most advanced robot using neural networks can walk across the room or swim in water, but it cannot jump and hunt like a dog in the forest, or scurry around the room like a rat.
MIT’s Marvin Minsky, one of the original founders of AI, summarises its problems in this way: “The history of AI is sort of funny because the first real accomplishments were beautiful things, like a machine that could do proofs in logic or do well in a calculus course. But then we started to try to make machines that could answer questions about the simple kinds of stories that are in a first-grade reader book. There’s no machine today that can do that.”
Some believe that eventually there will be a grand synthesis of the two approaches, top-down and bottom-up, which may provide the key to artificial intelligence and humanlike robots. After all, when children learn, they first rely mainly on the bottom-up approach, bumping into their surroundings, but eventually they receive instruction from parents, books and schoolteachers - the top-down approach. As adults, we blend the two approaches.
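One very rough way to picture such a synthesis - the names and example data here are entirely invented - is a system that learns associations from raw experience (bottom-up) but lets explicitly taught rules take precedence (top-down):

```python
from collections import Counter, defaultdict

class BottomUpLearner:
    """Learns by bumping into things: remembers which outcome followed each situation."""
    def __init__(self):
        self.memory = defaultdict(Counter)

    def experience(self, situation, outcome):
        self.memory[situation][outcome] += 1

    def guess(self, situation):
        seen = self.memory.get(situation)
        return seen.most_common(1)[0][0] if seen else None

TOP_DOWN_RULES = {                 # explicit instruction from parents and teachers
    "hot stove": "do not touch",
}

def decide(learner, situation):
    # Taught rules take precedence; otherwise fall back on lived experience.
    return TOP_DOWN_RULES.get(situation) or learner.guess(situation) or "explore"

if __name__ == "__main__":
    child = BottomUpLearner()
    child.experience("wet floor", "walk slowly")
    child.experience("wet floor", "walk slowly")
    print(decide(child, "hot stove"))   # rule supplied from the top down
    print(decide(child, "wet floor"))   # association learned from the bottom up
    print(decide(child, "dark room"))   # nothing known yet, so explore
```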
One consistent theme in literature and art is the mechanical being that yearns to become human, to share in human emotions. Pinocchio wanted to be a real boy. The Tin Man wanted a heart.
Some people have even suggested that our emotions represent the quality that most distinguishes us as human. No machine, they claim, will ever be able to thrill at a blazing sunset or laugh at a joke - indeed, some say it is impossible for machines ever to have emotions at all.
But to scientists working on AI, emotions, far from being the essence of humanity, are actually a by-product of evolution. Simply put, emotions are good for us. They helped us to survive in the forest, and even today they help us to navigate the dangers of life. For example, “liking” something is very important in evolutionary terms, because most things are harmful to us. Of the millions of objects that we bump into every day, only a handful are beneficial to us. Hence to “like” something is to distinguish the tiny number of things that can help us from the millions that can harm us.
When robots become more advanced, they, too, might be equipped with emotions. Perhaps robots will be programmed to bond with their owners or caretakers, to ensure that they don’t wind up in the garbage. Having such emotions would help to ease their transition into society, so that they could be helpful companions rather than rivals of their owners.
Computer expert Hans Moravec believes that robots will be programmed with emotions such as fear to protect themselves. If a robot’s batteries are running down, the robot “would express agitation, or even panic, with signals that humans can recognise”, he says. “It would go to the neighbours and ask them to use their plug, saying, ‘Please! Please! I need this! It’s so important, it’s such a small cost! We’ll reimburse you!’”
Emotions are vital in decision-making, as well. People who have suffered a certain kind of brain injury lose the ability to experience emotions. Their reasoning is intact, but they cannot express feelings. Neurologist Antonio Damasio of the University of Southern California, who has studied these people, concludes that they seem “to know, but not to feel”. He finds that such individuals are often paralysed in making the smallest decisions. Without emotions to guide them, they debate endlessly over their options, leading to crippling indecision.
Scientists believe emotions are processed in the “limbic system”, deep in the centre of our brain. When people suffer a loss of communication between the neocortex (which governs rational thinking) and the limbic system, their reasoning powers remain intact but they have no emotions to guide them in decision-making. While the rest of us might have a “hunch” or a “gut reaction” that propels us, these people feel no such thing.
As robots become more intelligent and are able to make choices of their own, they could likewise become paralysed with indecision. To aid them, robots of the future might need to have emotions hardwired into their brains, to set goals and to give meaning and structure to their “lives”.
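A hypothetical sketch makes the point (the option names and bias numbers are invented): a robot whose purely rational scores all tie cannot settle on anything, but a small hardwired “emotional” bias breaks the deadlock and produces a decision.

```python
# Invented illustration: emotions as hardwired tie-breakers in decision-making.

OPTIONS = ["recharge now", "finish the chore", "ask the owner"]

EMOTIONAL_BIAS = {              # the robot's hardwired "gut feelings"
    "recharge now": 0.3,        # mild 'fear' of running out of power
    "finish the chore": 0.1,
    "ask the owner": 0.2,
}

def rational_ranking(options):
    """Cold analysis alone: every option scores the same, so none wins."""
    scores = {option: 1.0 for option in options}
    best = max(scores.values())
    return [o for o, s in scores.items() if s == best]   # all of them - indecision

def choose_with_feeling(options):
    """Add the emotional bias and a single best option falls out."""
    return max(options, key=lambda o: 1.0 + EMOTIONAL_BIAS[o])

if __name__ == "__main__":
    print("reason only:", rational_ranking(OPTIONS))      # three-way tie
    print("with emotion:", choose_with_feeling(OPTIONS))  # 'recharge now'
```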
There is no universal consensus as to whether machines can be conscious, or even a consensus as to what consciousness means. Minsky describes consciousness as more of a “society of minds”, where the thinking process in our brain is not localised but spread out, with different centres competing with one another at any given time. Consciousness may then be viewed as a sequence of thoughts and images issuing from these different, smaller “minds”, each one competing for our attention.
Robots might eventually attain a “silicon consciousness”. Robots, in fact, might one day embody an architecture for thinking and processing information that is different from ours on the inside - yet indistinguishable from it on the outside. If that happens, the question of whether they really “understand” becomes largely irrelevant. A robot that has perfect mastery of syntax, for all practical purposes, understands what is being said.
Will computers eventually surpass us in intelligence? Certainly, there is nothing in the laws of physics to prevent that. If robots are neural networks capable of learning, and they develop to the point where they can learn faster and more efficiently than we can, then it’s logical that they might surpass us in reasoning.
Some scientists and thinkers have suggested that rather than waiting for our extinction, we ought to merge carbon and silicon technology. Hans Moravec envisions a time in the distant future when our neural architecture will be transferred, neuron for neuron, directly into a machine, giving us, in a sense, immortality. It’s certainly not beyond the realm of possibility.
Extracted from ‘Physics of the Impossible: A Scientific Exploration of the World of Phasers, Force Fields, Teleportation and Time Travel’ by Michio Kaku, Allen Lane, £20.