Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100

by Michio Kaku


  Not only is this ideal for analyzing objects that contain ferrous metals, it can also analyze objects that are too large to fit inside a conventional MRI machine or that cannot be moved from their sites. For example, in 2006 the MRI-MOUSE successfully produced images of the interior of Ötzi the Iceman, the frozen corpse found in the Alps in 1991. By moving the U-shaped magnet over Ötzi, researchers were able to successively peel away the various layers of his frozen body.

  In the future, the MRI-MOUSE may be miniaturized even more, allowing for MRI scans of the brain using something the size of a cell phone. Then, scanning the brain to read one’s thoughts may not be such a problem. Eventually, the MRI scanner may be as thin as a dime, barely noticeable. It might even resemble the less-powerful EEG, where you put a plastic cap with many electrodes attached over your head. (If you place these portable MRI disks on your fingertips and then place them on a person’s head, this would resemble performing the Vulcan mind meld of Star Trek.)

  TELEKINESIS AND THE POWER OF THE GODS

  The endpoint of this progression is to attain telekinesis, the power of the gods of mythology to move objects by sheer thought.

  In the movie Star Wars, for example, the Force is a mysterious field that pervades the galaxy and unleashes the mental powers of the Jedi knights, allowing them to control objects with their minds. Lightsabers, ray guns, and even entire starships can be levitated using the power of the Force, which can also be used to control the actions of others.

  But we won’t have to travel to a galaxy far, far away to harness this power. By 2100, when we walk into a room, we will be able to mentally control a computer that in turn will control things around us. Moving heavy furniture, rearranging our desk, making repairs, etc., may be possible by thinking about it. This could be quite useful for workers, fire crews, astronauts, and soldiers who have to operate machinery requiring more than two hands. It could also change the way we interact with the world. We would be able to ride a bike, drive a car, play golf or baseball or elaborate games just by thinking about them.

  Moving objects by thought may become possible by exploiting something called superconductors, which we shall explain in more detail in Chapter 4. By the end of this century, physicists may be able to create superconductors that can operate at room temperature, thereby allowing us to create huge magnetic fields that require little power. In the same way that the twentieth century was the age of electricity, the future may bring us room-temperature superconductors that will give us the age of magnetism.

  Powerful magnetic fields are presently expensive to create but may become almost free in the future. This will allow us to reduce friction in our trains and trucks, revolutionizing transportation, and eliminate losses in electrical transmission. This will also allow us to move objects by sheer thought. With tiny supermagnets placed inside different objects, we will be able to move them around almost at will.

  In the near future, we will assume that everything has a tiny chip in it, making it intelligent. In the far future, we will assume that everything has a tiny superconductor inside it that can generate bursts of magnetic energy, sufficient to move it across a room. Assume, for example, that a table has a superconductor in it. Normally, this superconductor carries no current. But when a tiny electrical current is added, it creates a powerful magnetic field, capable of sending the table across the room. By thinking, we should be able to activate the supermagnet embedded within an object and thereby make it move.
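  To get a feel for the numbers involved, consider a rough estimate (the figures below are illustrative assumptions, not values from the text). A superconducting coil of N turns carrying a persistent current I around an area A acts like a magnetic dipole of moment m, and in an external field that varies across the room it feels a force proportional to the field gradient:

\[
m = N I A, \qquad F_z \approx m \, \frac{\partial B_z}{\partial z}.
\]

  With, say, N = 100 turns, I = 100 A, and a loop of radius 5 cm (A ≈ 7.9 × 10⁻³ m²), the moment is m ≈ 80 A·m²; a gradient of 10 T/m would then exert roughly 800 N, about the weight of the table itself. Producing such field gradients cheaply over room-sized distances is precisely what would require the room-temperature superconductors described above.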

  In the X-Men movies, for example, the evil mutants are led by Magneto, who can move enormous objects by manipulating their magnetic properties. In one scene, he even moves the Golden Gate Bridge via the power of his mind. But there are limits to this power. For example, it is difficult to move an object made of plastic or paper, which has no magnetic properties. (At the end of the first X-Men movie, Magneto is confined in a jail made completely of plastic.)

  In the future, room-temperature superconductors may be hidden inside common items, even nonmagnetic ones. If a current is turned on within the object, it will become magnetic and hence it can be moved by an external magnetic field that is controlled by your thoughts.

  We will also have the power to manipulate robots and avatars by thinking. This means that, as in the movies Surrogates and Avatar, we might be able to control the motions of our substitutes and even feel pain and pressure. This might prove useful if we need a superhuman body to make repairs in outer space or rescue people in emergencies. Perhaps one day, our astronauts may be safely on earth, controlling superhuman robotic bodies as they move on the moon. We will discuss this more in the next chapter.

  We should also point out that possessing this telekinetic power is not without risks. As I mentioned before, in the movie Forbidden Planet, an ancient civilization millions of years ahead of ours attains its ultimate dream, the ability to control anything with the power of the mind. As one trivial example of their technology, they created a machine that can turn your thoughts into a 3-D image. You put the device on your head, imagine something, and a 3-D image materializes inside the machine. Although it seemed impossibly advanced to movie audiences back in the 1950s, this device will be available in the coming decades. Also, in the movie, there was a device that harnessed your mental energy to lift a heavy object. But as we know, we don’t have to wait millions of years for this technology—it’s already here, in the form of a toy. You place EEG electrodes on your head, the toy detects the electrical impulses of your brain, and then it lifts a tiny object, just as in the movie. In the future, many games will be played by sheer thought. Teams may be mentally wired up so that they can move a ball by thinking about it, and the team that can best mentally move the ball wins.
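  The thought-controlled toy mentioned above already hints at how such games might be scored. A minimal sketch of the idea, in Python, is shown below: estimate how much of the scalp signal’s power falls in a frequency band loosely associated with concentration, and switch a small fan on when that power crosses a threshold. The read_eeg_sample and set_fan_speed functions, the band limits, and the threshold are hypothetical stand-ins, not any real toy’s API.

```python
# A minimal, hypothetical sketch of an EEG-driven toy: sample a scalp voltage,
# estimate the power in the beta band (a crude proxy for concentration), and
# drive a fan that lofts a ball when that power crosses a threshold.
# All numbers and hardware functions are illustrative stand-ins.

import numpy as np

SAMPLE_RATE_HZ = 256
WINDOW_SEC = 1.0
BETA_BAND = (13.0, 30.0)   # assumed "focused attention" band
POWER_THRESHOLD = 0.5      # would be tuned per user in a real device

def read_eeg_sample() -> float:
    """Placeholder: return one voltage sample from the headset."""
    return np.random.randn()          # noise stands in for a real electrode

def set_fan_speed(level: float) -> None:
    """Placeholder: command the toy's fan, 0.0 (off) to 1.0 (full)."""
    print(f"fan -> {level:.2f}")

def beta_power(window: np.ndarray) -> float:
    """Fraction of the window's signal power falling in the beta band."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / SAMPLE_RATE_HZ)
    band = (freqs >= BETA_BAND[0]) & (freqs <= BETA_BAND[1])
    return spectrum[band].sum() / spectrum.sum()

if __name__ == "__main__":
    window = np.array([read_eeg_sample()
                       for _ in range(int(SAMPLE_RATE_HZ * WINDOW_SEC))])
    set_fan_speed(1.0 if beta_power(window) > POWER_THRESHOLD else 0.0)
```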

  The climax of Forbidden Planet may give us pause. Despite the vastness of their technology, the aliens perished because they failed to notice a defect in their plans. Their powerful machines tapped not only into their conscious thoughts but also into their subconscious desires. The savage, long-suppressed thoughts of their violent, ancient evolutionary past sprang back to life, and the machines materialized every subconscious nightmare into reality. On the eve of attaining their greatest creation, this mighty civilization was destroyed by the very technology they hoped would free them from instrumentality.

  For us, however, this is still a distant danger. A device of that magnitude won’t be available until the twenty-second century. However, we face a more immediate concern. By 2100, we will also live in a world populated by robots that have humanlike characteristics. What happens if they become smarter than us?

  Will robots inherit the earth? Yes, but they will be our children.

  —MARVIN MINSKY

  The gods of mythology with their divine power could animate the inanimate. According to the Bible, in Genesis, Chapter 2, God created man out of dust, and then “breathed into his nostrils the breath of life, and man became a living soul.” According to Greek and Roman mythology, the goddess Venus could make statues spring to life. Taking pity on the artist Pygmalion when he fell hopelessly in love with his statue, Venus granted his fondest wish and turned the statue into a beautiful woman, Galatea. The god Vulcan, blacksmith to the gods, even forged an army of mechanical servants out of metal and brought them to life.

  Today, we are like Vulcan, forging in our laboratories machines that breathe life not into clay but into steel and silicon. But will these machines liberate the human race or enslave it? If one reads the headlines today, it seems as if the question is already settled: the human race is about to be rapidly overtaken by its own creation.

  THE END OF HUMANITY?

  The headline in the New York Times said it all: “Scientists Worry Machines May Outsmart Man.” The world’s top leaders in artificial intelligence (AI) had gathered at the Asilomar conference in California in 2009 to solemnly discuss what happens when the machines finally take over. As in a scene from a Hollywood movie, delegates asked probing questions, such as, What happens if a robot becomes as intelligent as your spouse?

  As compelling evidence of this robotic revolution, people pointed to the Predator drone, a pilotless robot plane that is now targeting terrorists with deadly accuracy in Afghanistan and Pakistan; cars that can drive themselves; and ASIMO, the world’s most advanced robot that can walk, run, climb stairs, dance, and even serve coffee.

  Eric Horvitz of Microsoft, an organizer of the conference, noting the excitement surging through the conference, said, “Technologists are providing almost religious visions, and their ideas are resonating in some ways with the same idea of the Rapture.” (The Rapture is when true believers ascend to heaven at the Second Coming. The critics dubbed the spirit of the Asilomar conference “the rapture of the nerds.”)

  That same summer, the movies dominating the silver screen seemed to amplify this apocalyptic picture. In Terminator Salvation, a ragtag band of humans battle huge mechanical behemoths that have taken over the earth. In Transformers: Revenge of the Fallen, futuristic robots from space use humans as pawns and the earth as a battleground for their interstellar wars. In Surrogates, people prefer to live their lives as perfect, beautiful, superhuman robots, rather than face the reality of their own aging, decaying bodies.

  Judging from the headlines and the theater marquees, it looks like the last gasp for humans is just around the corner. AI pundits are solemnly asking: Will we one day have to dance behind bars as our robot creations throw peanuts at us, as we do at bears in a zoo? Or will we become lapdogs to our creations?

  But upon closer examination, there is less than meets the eye. Certainly, tremendous breakthroughs have been made in the last decade, but things have to be put into perspective.

  The Predator, a 27-foot drone that fires deadly missiles at terrorists from the sky, is controlled by a human with a joystick. A human, most likely a young veteran of video games, sits comfortably behind a computer screen and selects the targets. The human, not the Predator, is calling the shots. And the cars that drive themselves are not making independent decisions as they scan the horizon and turn the steering wheel; they are following a GPS map stored in their memory. So the nightmare of fully autonomous, conscious, and murderous robots is still in the distant future.

  Not surprisingly, although the media hyped some of the more sensational predictions made at the Asilomar conference, most of the working scientists doing the day-to-day research in artificial intelligence were much more reserved and cautious. When asked when the machines will become as smart as us, the scientists had a surprising variety of answers, ranging from 20 to 1,000 years.

  So we have to differentiate between two types of robots. The first is remote-controlled by a human or programmed and pre-scripted like a tape recorder to follow precise instructions. These robots already exist and generate headlines. They are slowly entering our homes and also the battlefield. But without a human making the decisions, they are largely useless pieces of junk. So these robots should not be confused with the second type, which is truly autonomous, the kind that can think for itself and requires no input from humans. It is these autonomous robots that have eluded scientists for the past half century.
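  The distinction is easy to see in code. The first kind of robot simply replays a script a human wrote in advance; the second closes a sense-decide-act loop of its own. The Python sketch below is purely illustrative; the sense and act callables are hypothetical stand-ins, not any real robot’s interface.

```python
# Illustrative contrast between the two kinds of robots described above.
# "sense" and "act" are hypothetical stand-ins for real hardware interfaces.

from typing import Callable, List

# Type 1: pre-scripted. Every action was written down by a human beforehand;
# the robot senses nothing and decides nothing.
SCRIPT: List[str] = ["walk_forward", "wave", "say_hello", "walk_back"]

def run_scripted(act: Callable[[str], None]) -> None:
    for command in SCRIPT:
        act(command)

# Type 2: autonomous. The next action depends on what the robot senses,
# with no human in the loop.
def run_autonomous(sense: Callable[[], str],
                   act: Callable[[str], None],
                   steps: int = 20) -> None:
    for _ in range(steps):
        view = sense()                # e.g. "clear", "wall_ahead", "person"
        if view == "wall_ahead":
            act("turn_right")
        elif view == "person":
            act("say_hello")
        else:
            act("walk_forward")

if __name__ == "__main__":
    run_scripted(print)                              # replays the fixed script
    run_autonomous(lambda: "clear", print, steps=3)  # trivially senses "clear"
```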

  ASIMO THE ROBOT

  AI researchers often point to Honda’s robot called ASIMO (Advanced Step in Innovative Mobility) as a graphic demonstration of the revolutionary advances made in robotics. It is 4 feet 3 inches tall, weighs 119 pounds, and resembles a young boy with a black-visored helmet and a backpack. ASIMO, in fact, is remarkable: it can realistically walk, run, climb stairs, and talk. It can wander around rooms, pick up cups and trays, respond to some simple commands, and even recognize some faces. It even has a large vocabulary and can speak in different languages. ASIMO is the result of twenty years of intense work by scores of Honda scientists, who have produced a marvel of engineering.

  On two separate occasions, I have had the privilege of personally interacting with ASIMO at conferences, when hosting science specials for BBC/Discovery. When I shook its hand, it responded in an entirely humanlike way. When I waved to it, it waved right back. And when I asked it to fetch me some juice, it turned around and walked toward the refreshment table with eerily human motions. Indeed, ASIMO is so lifelike that when it talked, I half expected the robot to take off its helmet and reveal the boy who was cleverly hidden inside. It can even dance better than I can.

  At first, it seems as if ASIMO is intelligent, capable of responding to human commands, holding a conversation, and walking around a room. Actually, the reality is quite different. When I interacted with ASIMO in front of the TV camera, every motion, every nuance was carefully scripted. In fact, it took about three hours to film a simple five-minute scene with ASIMO. And even that required a team of ASIMO handlers who were furiously reprogramming the robot on their laptops after we filmed every scene. Although ASIMO talks to you in different languages, it is actually a tape recorder playing recorded messages. It simply parrots what is programmed by a human. Although ASIMO becomes more sophisticated every year, it is incapable of independent thought. Every word, every gesture, every step has to be carefully rehearsed by ASIMO’s handlers.

  Afterward, I had a candid talk with one of ASIMO’s inventors, and he admitted that ASIMO, despite its remarkably humanlike motions and actions, has the intelligence of an insect. Most of its motions have to be carefully programmed ahead of time. It can walk in a totally lifelike way, but its path has to be carefully programmed or it will stumble over the furniture, since it cannot really recognize objects around the room.

  By comparison, even a cockroach can recognize objects, scurry around obstacles, look for food and mates, evade predators, plot complex escape routes, hide among the shadows, and disappear in the cracks, all within a matter of seconds.

  AI researcher Thomas Dean of Brown University has admitted that the lumbering robots he is building are “just at the stage where they’re robust enough to walk down the hall without leaving huge gouges in the plaster.” As we shall later see, at present our most powerful computers can barely simulate the neurons of a mouse, and then only for a few seconds. It will take many decades of hard work before robots become as smart as a mouse, rabbit, dog or cat, and then a monkey.
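  The scale of that claim can be checked with back-of-the-envelope arithmetic. The figures below are commonly cited orders of magnitude, assumed here rather than taken from the text: a mouse brain has very roughly 7 × 10⁷ neurons, each with thousands of synapses, and a faithful simulation updates every synapse on millisecond timescales.

```python
# Rough, assumed orders of magnitude -- not figures from the text.
neurons = 7e7               # approximate neuron count of a mouse brain
synapses_per_neuron = 8e3   # assumed average connectivity
updates_per_second = 1e3    # millisecond-scale time steps
flops_per_update = 10       # assumed cost of one synaptic update

flops_per_simulated_second = (neurons * synapses_per_neuron
                              * updates_per_second * flops_per_update)
print(f"{flops_per_simulated_second:.1e} FLOPs per simulated second")
# ~5.6e15 FLOPs: even a petaflop-class supercomputer of that era would need
# several seconds of wall-clock time per simulated second, so runs covering
# more than a few seconds of mouse-brain activity quickly become impractical.
```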

  HISTORY OF AI

  Critics sometimes point out a pattern: every thirty years, AI practitioners claim that superintelligent robots are just around the corner. Then, when there is a reality check, a backlash sets in.

  In the 1950s, when electronic computers were first introduced after World War II, scientists dazzled the public with the notion of machines that could perform miraculous feats: picking up blocks, playing checkers, and even solving algebra problems. It seemed as if truly intelligent machines were just around the corner. The public was amazed, and soon there were magazine articles breathlessly predicting the time when a robot would be in everyone’s kitchen, cooking dinner, or cleaning the house. In 1965, AI pioneer Herbert Simon declared, “Machines will be capable, within twenty years, of doing any work a man can do.” But then reality set in. Chess-playing machines could not win against a human expert, and could play only chess, nothing more. These early robots were one-trick ponies, each able to perform just one simple task.

  In fact, in the 1950s, real breakthroughs were made in AI, but because the progress was vastly overstated and overhyped, a backlash set in. In 1974, under a chorus of rising criticism, the U.S. and British governments cut off funding. The first AI winter set in.

  Today, AI researcher Paul Abrahams shakes his head when he looks back at those heady times in the 1950s when he was a graduate student at MIT and anything seemed possible. He recalled, “It’s as though a group of people had proposed to build a tower to the moon. Each year they point with pride at how much higher the tower is than it was the previous year. The only trouble is that the moon isn’t getting much closer.”

  In the 1980s, enthusiasm for AI peaked once again. This time the Pentagon poured millions of dollars into projects like the smart truck, which was supposed to travel behind enemy lines, do reconnaissance, rescue U.S. troops, and return to headquarters, all by itself. The Japanese government even put its full weight behind the ambitious Fifth Generation Computer Systems Project, sponsored by the powerful Japanese Ministry of International Trade and Industry. Among the Fifth Generation Project’s goals was a computer system that could speak conversational language, reason fully, and even anticipate what we want, all by the 1990s.

  Unfortunately, the only thing that the smart truck did was get lost. And the Fifth Generation Project, after much fanfare, was quietly dropped without explanation. Once again, the rhetoric far outpaced the reality. In fact, there were real gains made in AI in the 1980s, but because progress was again overhyped, a second backlash set in, creating the second AI winter, in which funding again dried up and disillusioned people left the field in droves. It became painfully clear that something was missing.

  In 1992, AI researchers had mixed feelings when they held a special celebration in honor of the movie 2001, in which a computer called HAL 9000 runs amok and slaughters the crew of a spaceship. The movie, released in 1968, predicted that by 1992 there would be robots that could freely converse with any human on almost any topic and also command a spaceship. Unfortunately, it was painfully clear that the most advanced robots had a hard time keeping up with the intelligence of a bug.

 
