
Physics of the Impossible


by Michio Kaku


  Funding for the CIA studies began in 1972, and Russell Targ and Harold Puthoff of the Stanford Research Institute (SRI) in Menlo Park were in charge. Initially, they sought to train a cadre of psychics who could engage in “psychic warfare.” Over more than two decades, the United States spent $20 million on Star Gate, with over forty personnel, twenty-three remote viewers, and three psychics on the payroll.

  By 1995, with a budget of $500,000 per year, the CIA had conducted hundreds of intelligence-gathering projects involving thousands of remote viewing sessions. Specifically, the remote viewers were asked to

  • locate Colonel Gadhafi before the 1986 bombing of Libya

  • find plutonium stockpiles in North Korea in 1994

  • locate a hostage kidnapped by the Red Brigades in Italy in 1981

  • locate a Soviet Tu-95 bomber that had crashed in Africa

  In 1995, the CIA asked the American Institutes for Research (AIR) to evaluate these programs. The AIR recommended that the programs be shut down. “There’s no documented evidence it had any value to the intelligence community,” wrote David Goslin of the AIR.

  Proponents of Star Gate boasted that over the years they had scored “eight-martini” results (conclusions so spectacular that you had to go out and drink eight martinis to recover). Critics, however, maintained that the vast majority of the remote viewing produced worthless, irrelevant information, wasting taxpayer dollars, and that the few “hits” were so vague and general that they could be applied to almost any situation. The AIR report stated that the most impressive “successes” of Star Gate involved remote viewers who already had some knowledge of the operation they were studying, and hence might have made educated guesses that sounded reasonable.

  In the end the CIA concluded that Star Gate had yielded not a single instance of information that helped the agency guide intelligence operations, so it canceled the project. (Rumors persisted that the CIA used remote viewers to locate Saddam Hussein during the Gulf War, although all efforts were unsuccessful.)

  BRAIN SCANS

  At the same time, scientists were beginning to understand some of the physics behind the workings of the brain. In the nineteenth century scientists suspected that electrical signals were being transmitted inside the brain. In 1875 Richard Caton discovered that by placing electrodes on the surface of the head it was possible to detect the tiny electrical signals emitted by the brain. This eventually led to the invention of the electroencephalograph (EEG).

  In principle the brain is a transmitter over which our thoughts are broadcast in the form of tiny electrical signals and electromagnetic waves. But there are problems with using these signals to read someone’s thoughts. First, the signals are extremely weak, in the milliwatt range. Second, the signals are gibberish, largely indistinguishable from random noise. Only crude information about our thoughts can be gleaned from this garble. Third, our brain is not capable of receiving similar messages from other brains via these signals; that is, we lack an antenna. And, finally, even if we could receive these faint signals, we could not unscramble them. Using ordinary Newtonian and Maxwellian physics, telepathy via radio does not seem to be possible.

  Some believe that perhaps telepathy is mediated by a fifth force, called the “psi” force. But even advocates of parapsychology admit that they have no concrete, reproducible evidence of this psi force.

  But this leaves open the question: What about telepathy using the quantum theory?

  In the last decade, new quantum instruments have been introduced that for the first time in history enable us to look into the thinking brain. Leading this quantum revolution are the PET (positron-emission tomography) and MRI (magnetic resonance imaging) brain scans. A PET scan is created by injecting radioactive sugar into the blood. This sugar concentrates in parts of the brain that are activated by the thinking process, which requires energy. The radioactive sugar emits positrons (antielectrons) that are easily detected by instruments. Thus, by tracing out the pattern created by antimatter in the living brain, one can also trace out the patterns of thought, isolating precisely which parts of the brain are engaged in which activity.
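  To put a rough number on the decay of the “radioactive sugar,” here is a minimal Python sketch. It assumes the tracer is fluorine-18-labeled glucose, the one most commonly used in practice; the text itself does not name a tracer, and the injected dose below is simply an illustrative figure.

```python
# Rough illustration of tracer decay; the numbers are assumptions, not from the text.
# Assumes fluorine-18-labeled glucose (FDG), half-life about 110 minutes.

HALF_LIFE_MIN = 110.0  # approximate half-life of fluorine-18, in minutes

def remaining_activity(initial_activity, minutes_elapsed):
    """Activity left after a given time, from A(t) = A0 * 2**(-t / T_half)."""
    return initial_activity * 2 ** (-minutes_elapsed / HALF_LIFE_MIN)

# Example: a hypothetical 370 MBq injection, checked every half hour for two hours.
for t in range(0, 121, 30):
    print(f"{t:3d} min: {remaining_activity(370.0, t):6.1f} MBq")
```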

  The MRI machine operates in the same way, except it is more precise. A patient’s head is placed inside a huge doughnut-shaped magnetic field. The magnetic field makes the nuclei of the atoms in the brain align parallel to the field lines. A radio pulse is sent into the patient, making these nuclei wobble. When the nuclei flip orientation, they emit a tiny radio “echo” that can be detected, thereby signaling the presence of a particular substance. For example, brain activity is related to oxygen consumption, so the MRI machine can isolate the process of thinking by zeroing in on the presence of oxygenated blood. The higher the concentration of oxygenated blood, the greater the mental activity in that part of the brain. (Today “functional MRI machines” [fMRI] can zero in on tiny areas of the brain only a millimeter across in fractions of a second, making these machines ideal for tracing out the pattern of thoughts of the living brain.)
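  The “wobble” Kaku describes is nuclear precession, and the radio pulse works only if it matches the nuclei’s resonant (Larmor) frequency, which grows in proportion to the field strength. The short sketch below uses the standard value of about 42.58 MHz per tesla for hydrogen; the field strengths shown are typical clinical and research values, not figures from the text.

```python
# Back-of-envelope Larmor-frequency calculation (illustrative; standard physics
# constants, not figures from the book).

GYROMAGNETIC_RATIO_MHZ_PER_T = 42.58  # hydrogen nuclei (protons)

def larmor_frequency_mhz(field_teslas):
    """Resonant radio frequency, in MHz, for protons in the given field."""
    return GYROMAGNETIC_RATIO_MHZ_PER_T * field_teslas

for b in (1.5, 3.0, 7.0):  # typical clinical and research field strengths, in teslas
    print(f"{b:.1f} T  ->  {larmor_frequency_mhz(b):6.1f} MHz")
```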

  MRI LIE DETECTORS

  With MRI machines, there is a possibility that one day scientists may be able to decipher the broad outlines of thoughts in the living brain. The simplest test of “mind reading” would be to determine whether or not someone is lying.

  According to legend, the world’s first lie detector was created by an Indian priest centuries ago. He would put the suspect and a “magic donkey” into a sealed room, with the instruction that the suspect should pull on the magic donkey’s tail. If the donkey began to talk, it meant the suspect was a liar. If the donkey remained silent, then the suspect was telling the truth. (But secretly, the priest would put soot on the donkey’s tail.)

  After the suspect was taken out of the room, the suspect would usually proclaim his innocence because the donkey did not speak when he pulled its tail. But the priest would then examine the suspect’s hands. If the hands were clean, it meant he was lying. (Sometimes the threat of using a lie detector is more effective than the lie detector itself.)

  The first “magic donkey” in modern times was created in 1913, when psychologist William Marston wrote about analyzing a person’s blood pressure, which would be elevated when telling a lie. (This observation about blood pressure actually goes back to ancient times, when a suspect would be questioned while an investigator held on to his hands.) The idea caught on, and soon even the Department of Defense was setting up its own Polygraph Institute.

  But over the years it has become clear that lie detectors can be fooled by sociopaths who show no remorse for their actions. The most famous case was that of the CIA double agent Aldrich Ames, who pocketed huge sums of money from the former Soviet Union by sending scores of U.S. agents to their deaths and divulging secrets of the U.S. nuclear navy. For decades Ames sailed through a battery of the CIA’s lie detector tests. So, too, did serial killer Gary Ridgway, known as the notorious Green River Killer; he killed as many as fifty women.

  In 2003 the U.S. National Academy of Sciences issued a scathing report on the reliability of lie detectors, listing all the ways in which lie detectors could be fooled and innocent people branded as liars.

  But if lie detectors measure only anxiety levels, what about measuring the brain itself? The idea of looking into brain activity to ferret out lies dates back twenty years, to the work of Peter Rosenfeld of Northwestern University, who observed that EEG scans of people in the process of lying showed a different pattern in the P300 waves than when those people were telling the truth. (P300 waves are often stimulated when the brain encounters something novel or out of the ordinary.)
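  Rosenfeld’s actual protocol is not described here. The sketch below, run on synthetic data, shows only the generic way an event-related potential such as the P300 is usually pulled out of noisy EEG: average many trials time-locked to the stimulus, so the random background cancels while the stimulus-locked bump survives.

```python
# Generic sketch of averaging stimulus-locked EEG trials to reveal a P300-like
# bump. Synthetic data; not Rosenfeld's actual protocol.
import numpy as np

rng = np.random.default_rng(0)
fs = 250                       # sampling rate, Hz
t = np.arange(0, 0.8, 1 / fs)  # 0-800 ms after the stimulus

def synthetic_trial(has_p300):
    """One noisy EEG epoch, in microvolts; optionally add a bump near 300 ms."""
    noise = rng.normal(0.0, 10.0, t.size)
    p300 = 8.0 * np.exp(-((t - 0.3) ** 2) / 0.005)  # positive deflection at ~300 ms
    return noise + (p300 if has_p300 else 0.0)

# Average 100 "novel stimulus" trials: the noise cancels, the P300 remains.
average = np.mean([synthetic_trial(True) for _ in range(100)], axis=0)
print(f"peak of averaged waveform at ~{1000 * t[np.argmax(average)]:.0f} ms")
```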

  The idea of using MRI scans to detect lies was the brainchild of Daniel Langleben of the University of Pennsylvania. In 1999 he came upon a paper stating that children suffering from attention deficit disorder had difficulty lying, but he knew from experience that this was wrong; such children had no problem lying. Their real problem was that they had difficulty inhibiting the truth. “They would just blurt things out,” recalled Langleben. He conjectured that the brain, in telling a lie, first had to stop itself from telling the truth, and then create a deception. He says, “When you tell a deliberate lie, you have to be holding in mind the truth. So it stands to reason it should mean more brain activity.” In other words, lying is hard work.

  Through experimenting with college students and asking them to lie, Langleben soon found that lying creates increased brain activity in several areas, including the frontal lobe (where higher thinking is concentrated), the temporal lobe, and the limbic system (where emotions are processed). In particular, he noticed unusual activity in the anterior cingulate gyrus (which is associated with conflict resolution and response inhibition).

  He claims to have attained success rates of up to 99 percent in controlled experiments designed to determine whether or not his subjects were lying (e.g., he asked college students to lie about the identity of playing cards).
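  The text does not spell out Langleben’s analysis pipeline. As a hedged illustration of the general style of such studies, the sketch below trains an off-the-shelf classifier on region-averaged activity from labeled “truth” and “lie” trials; every number in it is synthetic and hypothetical.

```python
# Hypothetical sketch, not Langleben's method: classify synthetic "truth" vs.
# "lie" trials from made-up region-averaged fMRI activity.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_trials = 200

# Pretend features: mean activation in three regions of interest
# (frontal lobe, temporal lobe, anterior cingulate), one row per trial.
truth = rng.normal(loc=[1.0, 1.0, 1.0], scale=0.5, size=(n_trials, 3))
lie = rng.normal(loc=[1.4, 1.2, 1.6], scale=0.5, size=(n_trials, 3))  # lying = more activity

X = np.vstack([truth, lie])
y = np.array([0] * n_trials + [1] * n_trials)  # 0 = truth, 1 = lie

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```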

  The interest in this technology has been so pronounced that two commercial ventures have been started, offering this service to the public. In 2007 one company, No Lie MRI, took on its first case, a person who was suing his insurance company because it claimed that he had deliberately set his deli on fire. (The fMRI scan indicated that he was not an arsonist.)

  Proponents of Langleben’s technique claim that it is much more reliable than the old-fashioned lie detector, since brain patterns are beyond anyone’s conscious control. While people can be trained to a degree to control their pulse rate and sweating, they cannot control their brain patterns in the same way. In fact, proponents point out that in an age of heightened concern about terrorism this technology could save countless lives by detecting a planned terrorist attack on the United States.

  While conceding this technology’s apparent success rate in detecting lies, critics have pointed out that the fMRI does not actually detect lies, only increased brain activity when someone is telling a lie. The machine could create false results if, for example, a person were to tell the truth while in a state of great anxiety. The fMRI would detect only the anxiety felt by the subject and incorrectly reveal that he was telling a lie. “There is an incredible hunger to have tests to separate truth from deception, science be damned,” warns neurobiologist Steven Hyman of Harvard University.

  Some critics also claim that a true lie detector, like a true telepath, could make ordinary social interactions quite uncomfortable, since a certain amount of lying is a “social grease” that helps to keep the wheels of society moving. Our reputation might be ruined, for example, if all the compliments we paid our bosses, superiors, spouses, lovers, and colleagues were exposed as lies. A true lie detector, in fact, could also expose all our family secrets, hidden emotions, repressed desires, and secret plans. As science columnist David Jones has said, a true lie detector is “like the atom bomb, it is best reserved as a sort of ultimate weapon. If widely deployed outside the courtroom, it would make social life quite impossible.”

  UNIVERSAL TRANSLATOR

  Some have rightly criticized brain scans because, for all their spectacular photographs of the thinking brain, they are simply too crude to measure isolated, individual thoughts. Millions of neurons probably fire at once when we perform the simplest mental task, and the fMRI detects this activity only as a blob on a screen. One psychologist compared brain scans to attending a boisterous football game and trying to listen to the person sitting next to you: that person’s voice is drowned out by the noise of thousands of spectators. For example, the smallest chunk of the brain that can be reliably analyzed by an fMRI machine is called a “voxel.” But each voxel corresponds to several million neurons, so the sensitivity of an fMRI machine is not good enough to isolate individual thoughts.
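  A back-of-envelope check of the “several million neurons” figure, using assumed round numbers (a voxel about 3 mm on a side and roughly 50,000 cortical neurons per cubic millimeter, neither value taken from the text), does land above a million:

```python
# Rough arithmetic behind the neurons-per-voxel claim; both constants are
# assumed order-of-magnitude values, not figures from the text.
VOXEL_SIDE_MM = 3.0        # a typical fMRI voxel edge length
NEURONS_PER_MM3 = 50_000   # rough cortical neuron density

voxel_volume_mm3 = VOXEL_SIDE_MM ** 3
neurons_per_voxel = voxel_volume_mm3 * NEURONS_PER_MM3

print(f"voxel volume: {voxel_volume_mm3:.0f} mm^3")
print(f"neurons per voxel: ~{neurons_per_voxel / 1e6:.1f} million")
```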

  Science fiction sometimes uses a “universal translator,” a device that can read a person’s thoughts and then beam them directly into another’s mind. In some science fiction novels alien telepaths place thoughts into our mind, even though they can’t understand our language. In the 1976 science fiction movie Futureworld a woman’s dream is projected onto a TV screen in real time. In the 2004 Jim Carrey movie, Eternal Sunshine of the Spotless Mind, doctors pinpoint painful memories and erase them.

  “That’s the kind of fantasy everyone in this field has,” says neuroscientist John Haynes of the Max Planck Institute in Leipzig, Germany. “But if that’s the device you want to build, then I’m pretty sure you need to record from a single neuron.”

  Since detecting signals from a single neuron is out of the question for now, some psychologists have tried to do the next best thing: to reduce the noise and isolate the fMRI pattern created by individual objects. For example, it might be possible to identify the fMRI pattern created by individual words, and then construct a “dictionary of thought.”

  Marcel A. Just of Carnegie Mellon University, for example, has been able to identify the fMRI pattern created by a small, select group of objects (e.g., carpentry tools). “We have 12 categories and can determine which of the 12 the subjects are thinking of with 80 to 90% accuracy,” he claims.

  His colleague Tom Mitchell, a computer scientist, is using computer technology, such as neural networks, to identify the complex brain patterns detected by fMRI scans associated with performing certain experiments. “One experiment that I would love to do is to find words that produce the most distinguishable brain activity,” he notes.
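  Neither Just’s nor Mitchell’s actual code is described here, but the “dictionary of thought” idea can be illustrated with a toy nearest-centroid classifier on synthetic voxel patterns: store one average pattern per category, then label a new scan by whichever stored pattern it most resembles.

```python
# Toy "dictionary of thought": nearest-centroid matching of synthetic voxel
# patterns. Not the actual method used by Just or Mitchell.
import numpy as np

rng = np.random.default_rng(2)
n_categories, n_voxels, trials_per_category = 12, 500, 20

# Each category gets its own (made-up) characteristic voxel pattern.
prototypes = rng.normal(size=(n_categories, n_voxels))

def simulate_scan(category):
    """A noisy observation of that category's pattern."""
    return prototypes[category] + rng.normal(scale=1.5, size=n_voxels)

# "Training": average several scans per category into a dictionary entry.
dictionary = np.array([
    np.mean([simulate_scan(c) for _ in range(trials_per_category)], axis=0)
    for c in range(n_categories)
])

def classify(scan):
    """Return the category whose dictionary entry is closest to the scan."""
    return int(np.argmin(np.linalg.norm(dictionary - scan, axis=1)))

correct = sum(classify(simulate_scan(c)) == c for c in range(n_categories))
print(f"{correct} of {n_categories} test scans labeled correctly")
```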

  But even if we can create a dictionary of thought, this is a far cry from creating a “universal translator.” Unlike the universal translator, which beams thoughts directly into our mind from another mind, an fMRI mental translator would involve many tedious steps: first recognizing certain fMRI patterns, then converting them into English words, and finally uttering those words to the subject. In this sense, such a device would not correspond to the “mind meld” found on Star Trek (but it would still be very useful for stroke victims).

  HANDHELD MRI SCANNERS

  Yet another stumbling block to practical telepathy is the sheer size of the fMRI machine. It is a monstrous device, costing several million dollars, filling up an entire room, and weighing several tons. The heart of the MRI machine is a large doughnut-shaped magnet, measuring several feet in diameter, which creates a huge magnetic field of several teslas. (The magnetic field is so enormous that several workers have been seriously injured when hammers and other tools went flying through the air when the power was accidentally turned on.)

  Recently physicists Igor Savukov and Michael Romalis of Princeton University have proposed a new technology that might eventually make handheld MRI machines a reality, thus possibly slashing the price of an fMRI machine by a factor of one hundred. They claim that huge MRI magnets can be replaced by supersensitive atomic magnetometers that can detect tiny magnetic fields.

  First, Savukov and Romalis created a magnetic sensor from hot potassium vapor suspended in helium gas. Then they used laser light to align the electron spins of the potassium. Next they applied a weak magnetic field to a sample of water (to simulate a human body). Then they sent a radio pulse into the water sample, which made the water molecules wobble. The resulting “echo” from the wobbling water molecules made the potassium’s electrons wobble as well, and this wobbling could be detected by a second laser. They came up with a key result: even a weak magnetic field could produce an “echo” that could be picked up by their sensors. Not only could they replace the monstrous magnetic field of the standard MRI machine with a weak field; they could also get pictures instantaneously (whereas MRI machines can take up to twenty minutes to produce each picture).

  Eventually, they theorize, taking an MRI photo could be as easy as taking a picture with a digital camera. (There are stumbling blocks, however. One problem is that the subject and the machine have to be shielded from stray magnetic fields from the outside.)

  If handheld MRI machines become a reality, they might be coupled to a tiny computer, which in turn could be loaded with software capable of decoding certain key phrases, words, or sentences. Such a device would never be as sophisticated as the telepathic devices found in science fiction, but it could come close.

  THE BRAIN AS A NEURAL NETWORK

  But will some futuristic MRI machine one day be able to read precise thoughts, word for word, image for image, as a true telepath could? This is not so clear. Some have argued that MRI machines will be able to decipher only vague outlines of our thoughts, because the brain is not really a computer at all. In a digital computer, computation is localized and obeys a very rigid set of rules. A digital computer obeys the laws of a “Turing machine,” a machine that contains a central processing unit (CPU), inputs, and outputs. A central processor (e.g., the Pentium chip) performs a definite set of manipulations of the input and produces an output, and “thinking” is therefore localized in the CPU.
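  To make the contrast concrete, here is a minimal Turing-machine-style sketch (purely illustrative): a single rule table plays the role of the CPU, and all of the “thinking” is localized in it. This particular machine adds 1 to a binary number written on its tape.

```python
# Minimal Turing-machine-style sketch: one central rule table does all the work.
RULES = {
    # (state, symbol read): (symbol to write, head move, next state)
    ("carry", "1"): ("0", -1, "carry"),  # 1 plus carry is 0; carry moves left
    ("carry", "0"): ("1", 0, "done"),    # 0 plus carry is 1; finished
    ("carry", "_"): ("1", 0, "done"),    # ran off the left edge: new leading 1
}

def increment(binary):
    tape = ["_"] + list(binary)           # blank cell on the left for overflow
    head, state = len(tape) - 1, "carry"  # start at the rightmost bit
    while state != "done":
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).lstrip("_")

print(increment("1011"))  # -> 1100
print(increment("111"))   # -> 1000
```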

  Our brain, however, is not a digital computer. Our brain has no Pentium chip, no CPU, no Windows operating system, and no subroutines. If you remove a single transistor from the CPU of a computer, you are likely to cripple it. But there are recorded cases in which half the human brain is missing, yet the remaining half takes over.

  The human brain is actually more like a learning machine, a “neural network,” that constantly rewires itself after learning a new task. MRI studies have confirmed that thoughts in the brain are not localized in one spot, as in a Turing machine, but are spread out over much of the brain, which is a typical feature of a neural network. MRI scans show that thinking is actually like a Ping-Pong game, with different parts of the brain lighting up sequentially, with electrical activity bouncing around the brain.
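  By contrast with the rule-table sketch above, a “learning machine” can be caricatured by a single-layer perceptron that rewires itself by nudging weights spread across its connections. This is only a cartoon of the idea, not a model of the brain.

```python
# Toy "learning machine": a perceptron that adjusts distributed weights until
# it has learned a logical AND. A cartoon of rewiring, not a brain model.
import numpy as np

rng = np.random.default_rng(3)
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets = np.array([0, 0, 0, 1], dtype=float)  # logical AND

weights = rng.normal(scale=0.1, size=2)
bias, learning_rate = 0.0, 0.2

for epoch in range(50):                        # repeated exposure = learning
    for x, target in zip(inputs, targets):
        output = 1.0 if x @ weights + bias > 0 else 0.0
        error = target - output
        weights += learning_rate * error * x   # "rewiring": adjust the connections
        bias += learning_rate * error

print([int(x @ weights + bias > 0) for x in inputs])  # -> [0, 0, 0, 1]
```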

  Because thoughts are so diffuse and scattered throughout many parts of the brain, perhaps the best that scientists will be able to do is compile a dictionary of thoughts, that is, establish a one-to-one correspondence between certain thoughts and specific patterns of EEGs or MRI scans. Austrian biomedical engineer Gert Pfurtscheller, for example, has trained a computer to recognize specific brain patterns and thoughts by focusing his efforts on µ waves found in EEGs. Apparently, µ waves are associated with the intention to make certain muscle movements. He tells his patients to lift a finger, smile, or frown, and then the computer records which µ waves are activated. Each time the patient performs a mental activity, the computer carefully logs the µ wave pattern. This process is difficult and tedious, since spurious waves have to be carefully filtered out, but eventually Pfurtscheller was able to find striking correspondences between simple movements and certain brain patterns.
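  The specifics of Pfurtscheller’s setup are not given here. The sketch below shows only the generic first step of such systems, measuring how much of an EEG channel’s power falls in the µ band (roughly 8 to 13 Hz), using a synthetic signal.

```python
# Generic mu-band power measurement on a synthetic EEG trace; not
# Pfurtscheller's actual system.
import numpy as np

fs = 250                       # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)  # two seconds of signal
rng = np.random.default_rng(4)

# Made-up EEG: background noise plus a 10 Hz mu rhythm, which would weaken
# (desynchronize) when the subject intends to move.
signal = rng.normal(0.0, 1.0, t.size) + 2.0 * np.sin(2 * np.pi * 10 * t)

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
mu_band = (freqs >= 8) & (freqs <= 13)

print(f"fraction of spectral power in the mu band: {spectrum[mu_band].sum() / spectrum.sum():.2f}")
```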

 
