Chapter 6
The Future of AI
AI of the future may not look like Cog or have the moves of ASIMO, but it will probably exhibit many of the same attributes that are being perfected in these humanoid robots. The science of artificial intelligence is less than sixty years old, which is young for a branch of science, yet AI has advanced phenomenally since the early days of moth-ridden vacuum tubes. The future of AI is hard to predict, but no one questions that it will become more abundant and more present in people's lives. Some experts anticipate that AI hardware will become smaller, that AI will become an essential part of caring for the elderly, and that the field will eventually blend the best of human and mechanical intelligence.
Mini AI
The direction of all electronics and technology continues to be toward ever smaller and more portable products. The first generation of computers, giants that filled entire rooms with glowing vacuum tubes and whirring fans, gave way to smaller machines built with transistors, and the technology shrank dramatically again with the invention of the silicon chip. Laptops have been miniaturized into handheld computers, and cell phones are half the size they were just five years ago. AI is following the same trend.
A cutting-edge artificial intelligence technology being perfected at the University of California at Berkeley works on an incredibly small scale. "Smart dust" is a network of wireless micro-electromechanical sensors the size of dust particles—imagine grains of sand with a brain—that could monitor everything from temperature, light, vibration, and movement to radiation and toxic chemicals. The particles of smart dust, called motes, could be as small as one cubic millimeter, which would fit on the tip of a ballpoint pen. At present, prototype smart dust motes are about the size of a pager and run on AA batteries. But these sensors have
nearly limitless possibilities. Scientists hope that these mini-sensors can be sprinkled throughout a large area like tiny dust motes floating in the air.
Networked by the hundreds, smart dust motes can pass information from one to the other almost instantaneously. They survey the world around them and "chat" wirelessly through the system until the information reaches a central computer. Smart dust is being marketed for use in factories, homes, and public places and for commercial, military, medical, security, and ecological applications. For example, sensors dispersed in an art gallery could sense movement when the gallery was closed. They could be placed anywhere inside or outside an active airport to detect chemical weapons or plastic explosives. Sensors dropped from an airplane could help predict the path of a forest fire,
and those positioned around a house could monitor the vital signs of the people living inside.
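To make the relay idea concrete, here is a minimal, hypothetical sketch in Python of how a reading might hop from mote to mote until it reaches a base station. The mote names, positions, radio range, and flooding scheme are invented for illustration and are not drawn from any actual smart dust design.

```python
import math

# Hypothetical sketch: motes relay a sensor reading hop by hop
# toward a base station, the way smart dust is described above.
RADIO_RANGE = 1.5  # assumed maximum distance (arbitrary units) a mote can transmit

motes = {            # mote name -> (x, y) position; all values are made up
    "base":  (0.0, 0.0),
    "mote1": (1.0, 0.5),
    "mote2": (2.0, 1.0),
    "mote3": (3.0, 1.5),   # this mote senses the event
}

def in_range(a, b):
    """Return True if two motes are close enough to hear each other."""
    (ax, ay), (bx, by) = motes[a], motes[b]
    return math.hypot(ax - bx, ay - by) <= RADIO_RANGE

def relay(reading, source):
    """Flood a reading through the network until it reaches 'base'."""
    visited, frontier = {source}, [source]
    while frontier:
        current = frontier.pop(0)
        if current == "base":
            return f"base station received {reading!r} (first sensed by {source})"
        for neighbor in motes:
            if neighbor not in visited and in_range(current, neighbor):
                visited.add(neighbor)
                frontier.append(neighbor)   # neighbor re-broadcasts the reading
    return "reading never reached the base station"

print(relay("smoke detected", "mote3"))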
Not quite as small as the proposed smart dust, but more mobile, is the microfly, a robot project funded by the Department of Defense. The microfly will weigh less than a paper clip and zip about on wings that are only one-twentieth the thickness of a sheet of paper. Its artificial intelligence sensors will allow it to run reconnaissance missions for the military and scout out enemy troops without being detected. According to Promode Bandyopadhyay, head of robotics at the federal Office of Naval Research, "You could have a swarm of them in a battlefield. Eventually, they can work as a group and detect the presence of hostile forces and materials."23
Writer and researcher Ray Kurzweil has predicted even smaller robotic AI systems built with nanotechnology, the engineering of devices measured in billionths of a meter. He suggests that nanobots could be injected into the human body and travel through a person's system to detect disease. One such device, developed in France, is already small enough to be swallowed. It moves along the patient's digestive tract, measuring the intestines with a tiny wheel and taking samples. It can also be programmed to stop at a certain spot to release a dose of medicine or perform a simple surgical procedure. German researchers have devised an even smaller robotic unit that can travel inside a blood vessel. As thin as a matchstick, it has three moving sections that push and pull it along like an inchworm. An even smaller robotic arm has been created that is no bigger than a hyphen. Its tiny silicon frame can bend and grab glass beads only a fraction of an inch long.
Techniques for powering such tiny AI robots have made surprising leaps in recent years. Some researchers are even using parts of bacteria to create tiny rotors, similar to a helicopter's blades or a boat's propeller, to move these micromachines along. So far these machines are not equipped with intelligence systems, but researchers believe that in the future they will be used to locate and destroy cancer cells and to monitor human health.
Seniorbots
Monitoring health is an increasing concern in the U.S. medical community. Within fifty years, more than 30 percent of the population is expected to be over the age of sixty-five, and the National Institute on Aging predicts that more than 14 million Americans will be diagnosed with Alzheimer's disease. With fewer caregivers and a growing population of potential patients, people will need other kinds of assistance to live happier, safer, and more independent lives. Companies that specialize in AI are scrambling to find ways to help. "Assisted cognition systems will enable aging adults to stay at home longer and to take care of themselves,"24 says Eric Dishman from Intel Labs. Dr. Zeungnam Bien, director of the Robot Welfare Research Center, agrees: "Various forms of welfare robotic systems will be the means of sustaining society."25
In the home, robotic aides will be able to open the refrigerator, retrieve a jar, and open it for someone with arthritis. Such devices could keep track of a person's medications and issue reminders to take pills on the correct schedule. Senior care facilities in Japan are already using robots in several ways. Robot therapy offers companionship to lonely patients: robotic dogs wander the halls in much the same way that real therapy animals visit nursing homes in the United States. The mechanical AI dogs offer the added benefit of being able to remember the names of an Alzheimer's patient's children.
Researchers at Carnegie Mellon University are testing Pearl, a nursebot. Only four feet tall, with a cute face, Pearl will eventually be able to understand spoken commands, respond to questions, and keep elderly patients on a regular schedule.
A less sophisticated piece of equipment that may come on the market sooner than Pearl is the AI walker. The walker is equipped with sonar detectors to keep the patient from bumping into objects, plus a laser range finder and mapping software that can figure out where the walker is headed even when the person using it does not know. It can also move itself out of the way when not in use and come when called.
Other nonrobotic assisted cognition systems combine AI software, global positioning technology, sensor networks, and even infrared identification badges so that patients with early-stage Alzheimer's will be able to use AI personal digital assistants. One new product for patients is a handheld activity compass that memorizes a person's daily routine. It can offer suggestions when the person becomes confused or provide directions when the person gets lost. The person needs only to tap a picture of the desired destination, and a directional arrow appears on the handheld screen and points the proper way. If the destination is not clear in the patient's mind, the compass can offer suggestions based on the time of day, the daily routine, and the person's current location. For example, the compass could direct a patient who loses his or her way in a doctor's office building toward the proper office or the exit.
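As a rough illustration of how such a compass might behave, the sketch below points an arrow toward a chosen destination and, when none is chosen, falls back on the daily routine. Every name, place, coordinate, and rule in it is an invented assumption, not a description of the real product.

```python
import math
from datetime import datetime

# Hypothetical sketch of an "activity compass": given the patient's position
# and a chosen destination, point an arrow; with no destination chosen,
# fall back to a suggestion based on the person's usual daily routine.

daily_routine = {          # hour of day -> the place the person usually goes
    8:  "kitchen",
    10: "doctor's office",
    13: "kitchen",
    19: "living room",
}

places = {                 # place -> (x, y) coordinates on a simple local map
    "kitchen":         (2.0, 1.0),
    "doctor's office": (40.0, 12.0),
    "living room":     (3.0, 4.0),
    "exit":            (0.0, 10.0),
}

def arrow_direction(current_xy, destination):
    """Return the angle (degrees, counterclockwise from east) toward the destination."""
    dx = places[destination][0] - current_xy[0]
    dy = places[destination][1] - current_xy[1]
    return math.degrees(math.atan2(dy, dx)) % 360

def suggest_destination(now=None):
    """Pick the routine entry closest to the current hour of day."""
    hour = (now or datetime.now()).hour
    nearest_hour = min(daily_routine, key=lambda h: abs(h - hour))
    return daily_routine[nearest_hour]

current_position = (5.0, 5.0)
chosen = None                       # the patient has not tapped a destination
destination = chosen or suggest_destination()
print(f"Suggested destination: {destination}")
print(f"Arrow angle: {arrow_direction(current_position, destination):.0f} degrees")
```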
An Alzheimer's patient's house could be wired with a network of sensors to become a "smart home." Heat sensors could warn the resident that a hot stove had been left unattended. Motion sensors could detect periods of inactivity and signal an outside agency or call for an ambulance if someone were injured. In twenty years, the baby boomers who now surf the Internet and use e-mail and instant messaging to keep up with friends will be ready for the high-tech elder care that is now being perfected.
But other researchers are looking at ways to assist a failing brain directly with artificial intelligence so that there would be no need for smart homes or nursebots. They see the future of artificial intelligence as a merging of the best of both biological and mechanical worlds.
Cyborg Science
In the movie Star Trek: First Contact, Captain Jean-Luc Picard is captured by a race of creatures called the Borg that are part man and part machine. Picard's human parts are slowly replaced by wire, metal, and silicon. The idea might seem unsettling for a squeamish moviegoer, but it is exciting to AI researchers and neuroscientists who hope to make bionics a reality—to create not a hostile Borg race that would try to take over the universe but artificial cognitive parts that would be available when a person's biological parts fail.
This concept is not so far-fetched. Already there are bionic parts for people who need them. Cochlear implants connect the electronics of a silicon device directly to the nervous system for those who cannot hear. Artificial arms and legs are available for those who have lost a limb, and pacemakers provide the electrical stimulus to regulate a faulty heart. With machinery and computer power, scientists can now make whole what was once broken. Where the human body grows weak, mechanical parts are sturdy and replaceable. But what about a person's brain?
A person's memory can grow dim over the years, and it holds only a finite amount of information. A person's thoughts, ideas, and knowledge are lost after death. But a computer brain can have a virtually limitless amount of memory spread across any number of disks. The knowledge it contains can be downloaded and shared among other computers so that it is never lost. Some AI researchers, like Hans Moravec at Carnegie Mellon University, predict that someday there will be ways to download the contents of the human brain into a computer so that it would "live" forever.
But for now, the future may lie in the arm of one AI researcher, Kevin Warwick of the University of Reading.
Warwick had a silicon chip implanted in his arm to record the signals that pass through his nerves as he moves. For example, the signals recorded when he wiggles his fingers are broadcast through a tiny radio antenna to a computer, which stores them on a hard drive. Although other researchers have criticized the project as a publicity stunt, Warwick hopes that eventually the computer will be able to play the signals back so that his nervous system is triggered and his fingers wiggle in response. If these kinds of nerve signals can be recorded and replayed, it very well could mean that a person could direct a machine's actions simply by thinking about them, and a machine could direct a person's actions.
Why pursue a mind meld between man and machine? Some researchers believe that a thinking robot will never be very effective because of the inherent limitations of trying to duplicate biological systems with machinery. But they hope that a mix of biological intelligence, artificial intelligence, and robotics will produce high-level thinking machines and effective replacement parts for people with mental and physical disabilities. The people who could benefit most would be those with paralyzed limbs, but the U.S. Department of Defense's DARPA is also interested in mind-controlled battlebots and airplanes that can be directed by thoughts alone. DARPA funds a Human Assisted Neural Devices Program to find ways to integrate human thought processes into computer operations.
If it seems impossible, think again. The mind meld has begun, although not with the human species. In a lab at Duke University's Center for Neuroengineering, a robotic arm swings from side to side, pivots, and straightens as if to snatch something unseen out of the air. The clamplike hand opens and closes, then shoots out again in a different direction. There is no visible sign of what is controlling the arm except for a trail of tangled cables that snake out the door and down the hall to a small, darkened room.
Inside that room is the power behind the robotic arm: a small monkey strapped in a chair. The monkey sits motionless, staring at a computer monitor and watching a dot move around the screen. As the monkey watches the dot, its thoughts direct the robotic arm: electrodes buried in its brain pick up the signals, which travel through the cables to the arm.
Another related research project has created the first robot that moves using a biological neural network that is no longer part of a living body. Called the Hybrot, it is a small circular device whose movements are controlled by neural cells taken from the brain of a rat. "We call it the 'Hybrot' because it is a hybrid of living and robotic components," says Steve Potter, a professor at Georgia Institute of Technology. "We hope to learn how
living neural networks may be applied to the artificial computing systems of tomorrow." Researchers also hope to learn about learning. "Learning is often defined as a lasting change in behavior, resulting from experience,"26 Potter says, and in order for a being to experience the world, the brain needs a body.
A droplet containing a few thousand living neurons from a rat's brain is placed on a special petri dish fitted with sixty microelectrodes; this dish functions as the robot's brain. The electrodes record the cells' activity and transmit it to the robot's body, a small circular device that can fit in the palm of a hand, which then moves in response to the electrical impulses the cells produce. The Hybrot is not cyborg material yet, but it is a start.
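The loop itself can be pictured with a small, purely illustrative sketch: recorded activity drives the robot's wheels, and the robot's sensor readings are turned into stimulation sent back to the dish. The mapping rules here are assumptions made up for the example and are not Potter's actual control scheme.

```python
import random

# Hypothetical sketch of a closed loop like the one described above:
# activity recorded from a dish of neurons drives a small robot's wheels,
# and the robot's sensor readings are turned into stimulation sent back
# to the dish. Every rule below is invented for illustration.

N_ELECTRODES = 60   # the dish described above has sixty microelectrodes

def record_spike_counts():
    """Stand-in for reading one time window of activity from each electrode."""
    return [random.randint(0, 5) for _ in range(N_ELECTRODES)]

def spikes_to_wheel_speeds(spikes):
    """Assumed rule: left half of the grid drives the left wheel, right half the right."""
    half = N_ELECTRODES // 2
    left, right = sum(spikes[:half]), sum(spikes[half:])
    total = left + right or 1
    return left / total, right / total      # normalized wheel speeds

def sensor_to_stimulation(distance_to_wall):
    """Assumed rule: the closer the obstacle, the more electrodes get stimulated."""
    n = int(max(0.0, 1.0 - distance_to_wall) * N_ELECTRODES)
    return list(range(n))                   # indices of electrodes to stimulate

for step in range(3):                       # a few cycles of the record/act/stimulate loop
    spikes = record_spike_counts()
    left_speed, right_speed = spikes_to_wheel_speeds(spikes)
    distance = random.random()              # stand-in for the robot's obstacle sensor
    stim = sensor_to_stimulation(distance)
    print(f"step {step}: wheels=({left_speed:.2f}, {right_speed:.2f}), "
          f"stimulating {len(stim)} electrodes")
```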
AI Ethics
Brain-machine interfaces and robots that take care of grandparents are unsettling ideas to some people. Much time, effort, and technology have been, and will continue to be, spent on making machines that are intelligent, but far less attention has been paid to a code of ethics to guide the use of artificial intelligence. The Internet has already raised questions about people's right to privacy and about the intellectual property rights that protect writers and other creators who publish their work online.
In recent years experts have tried to forecast future concerns regarding the ethics of artificial intelligence, particularly about people's perceptions of themselves. Would people become lazy if they had robots to do all the dull, dirty jobs for them? Would people become less caring if they dealt with robots all day long? How would attitudes change if humans were no longer the only humanlike intelligent beings on the planet?
What if scientists are successful in creating truly autonomous AI? Will laws change? This dilemma was played out in an episode of Star Trek: The Next Generation, in which the android Data had to stand trial to establish his right to decide his own fate. The prosecution contended that he was no more aware than a toaster. If a creature behaves like a human, should it also enjoy human rights?
So far, science has followed where fiction has led, designing artificial creatures and intelligent machines that used to dwell only in the minds of writers and filmmakers. But what if the ultimate fictional horror story happens? What if machines become so smart that mankind loses control over them? Here too, fiction has been first to address this possibility. In his classic story collection I, Robot, Isaac Asimov wrote the Three Laws of Robotics so that humans would always maintain control. Asimov's fictional robots had superhuman strength and superhuman intelligence, so it was vital to keep that power in check. His three laws stated:
- A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First and Second Laws.27
Someday it may be necessary to hardwire AI systems with similar safeguards if intelligent robots become commonplace or if AI systems take on a larger role in people's lives. The issue needs to be addressed so that those conducting the research remain aware of the power, and the potential hazards, involved. "In any kind of technology there are risks," says Ron Brachman of DARPA. The defense agency that funds much of the AI research in the United States does not take the ethical issues for granted. DARPA consults neurologists, psychologists, and philosophers to help sort out the future before it gets here. After all, artificial intelligence systems are used to monitor and track threats from other countries and to deploy long-range missiles and other weapons. It is comforting to know that the military takes such ideas seriously. "We're not stumbling down some blind alley," Brachman says. "We're very cognizant of these issues."28
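As a thought experiment only, a hardwired safeguard of the kind the passage imagines might check each proposed action against priority-ordered rules, as in the sketch below. The action fields and the checks themselves are invented for illustration and are not part of any real robot's software.

```python
# Thought-experiment sketch: checking a proposed robot action against
# priority-ordered safeguards modeled loosely on Asimov's Three Laws.
# The fields of an "action" and the checks themselves are invented here.

def violates_first_law(action):
    """Highest priority: the action must not harm a human or leave one in danger."""
    return action.get("harms_human", False) or action.get("ignores_human_in_danger", False)

def violates_second_law(action):
    """Next priority: the action must not disobey a human order."""
    return action.get("disobeys_order", False)

def violates_third_law(action):
    """Lowest priority: the action must not needlessly endanger the robot itself."""
    return action.get("endangers_robot", False)

def approve(action):
    """Apply the three rules in strict priority order; higher laws override lower ones."""
    if violates_first_law(action):
        return "refused by the First Law"
    if violates_second_law(action):
        return "refused by the Second Law"
    # The Third Law gives way to the Second: an ordered action proceeds
    # even if it puts the robot itself at risk.
    if violates_third_law(action) and not action.get("ordered_by_human", False):
        return "refused by the Third Law"
    return "allowed"

print(approve({"harms_human": True}))
print(approve({"disobeys_order": True}))
print(approve({"endangers_robot": True}))
print(approve({"endangers_robot": True, "ordered_by_human": True}))
```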
The question of whether intelligent machines could take over the world may never come into play. But other, more mundane questions will demand difficult answers. For example, who bears legal responsibility when artificial intelligence systems function as
decision makers? Banks frequently blame computer error for adding too much interest or withdrawing too much money from accounts, but what would happen if a medical expert system were to fail to diagnose a fatal disease or recommend a procedure that proved deadly to a patient? Who would be responsible—the company that uses the system, the programmers who set it up, or the experts who supplied the information? These and many more questions are already being discussed in boardrooms and labs around the world.
AI Power
Most people do not realize how much of their lives are affected by AI and cannot imagine how it will expand in the years to come. People surf the Internet with AI search engines and encounter intelligent agents when they buy a product from a Web site. Their bank accounts and credit cards are monitored with AI software, and transportation systems run smoothly with artificial intelligence programs. Every e-mail and every cell phone call is routed using AI networks. The appliances in the kitchen and the car in the driveway all have parts that use artificial intelligence technology. And that is only a small taste of what AI has accomplished in the last sixty years.
The future promises many more advances with robotic assistants, microscopic systems, and cyborg technology. "As this enormous computing power is combined with…advances of the physical sciences…enormous power is being unleashed," says Bill Joy, cofounder of Sun Microsystems. "These combinations open up the opportunity to completely redesign the world."29 And, for better or worse, how people choose to use that power is up to them.