The Future: Six Drivers of Global Change


by Al Gore


  After observing that the woman responded to each of these questions by manifesting exactly the brain activity one would expect from someone who is conscious, the doctor then used these two questions as a way of empowering the young woman to “answer” either “yes” by thinking about playing tennis, or “no” by imagining a stroll through her house. He then patiently asked her a series of questions about her life, the answers to which were not known to anyone on the medical team. She answered virtually all of the questions correctly, leading Owen to conclude that she was in fact conscious. After continuing his experiments with many other patients, Owen speculated that as many as 20 percent of those believed to be in vegetative states may well be conscious with no way of connecting to others. Owen and his team are now using noninvasive electroencephalography (EEG) to continue this work.

  Scientists at Dartmouth College are also using an EEG headset to interpret thoughts and connect them to an iPhone, allowing the user to select pictures that are then displayed on the iPhone’s screen. Because the sensors of the EEG headset are attached to the outside of the head, the device has more difficulty interpreting the electrical signals originating inside the skull, but the researchers are making impressive progress.

  A LOW-COST HEADSET developed some years ago by an Australian game company, Emotiv, translates brain signals and uses them to empower users to control objects on a computer screen. Neuroscientists believe that these lower-cost devices are measuring “muscle rhythms rather than real neural activity.” Nevertheless, scientists and engineers at IBM’s Emerging Technologies lab in the United Kingdom have adapted the headset to allow thought control of other electronic devices, including model cars, televisions, and switches. In Switzerland, scientists at the Ecole Polytechnique Fédérale de Lausanne (EPFL) have used a similar approach to build wheelchairs and robots controlled by thoughts. Four other companies, including Toyota, have announced they are developing bicycles whose gears can be shifted by the rider’s thoughts.

  Gerwin Schalk and Anthony Ritaccio, at the Albany Medical Center, are working under a multimillion-dollar grant from the U.S. military to design and develop devices that enable soldiers to communicate telepathically. Although this seems like something out of a science fiction story, the Pentagon believes that these so-called telepathy helmets are sufficiently feasible that it is devoting more than $6 million to the project. The target date for completion of the prototype device is 2017.

  “TRANSHUMANISM” AND THE “SINGULARITY”

  If such a technology is perfected, it is difficult to imagine where more sophisticated later versions of it would lead. Some theorists have long predicted that the development of a practical way to translate human thoughts into digital patterns that can be deciphered by computers will inevitably lead to a broader convergence between machines and people that goes beyond cyborgs to open the door on a new era characterized by what they call “transhumanism.”

  According to Nick Bostrom, the leading historian of transhumanism, the term was apparently coined by Aldous Huxley’s brother, Julian, a distinguished biologist, environmentalist, and humanitarian, who wrote in 1927, “The human species can, if it wishes, transcend itself—not just sporadically, an individual here in one way, an individual there in another way—but in its entirety, as humanity. We need a name for this new belief. Perhaps transhumanism will serve: man remaining man, but transcending himself, by realizing new possibilities of and for his human nature.”

  The idea that we as human beings are not an evolutionary end point, but are destined to evolve further—with our own active participation in directing the process—is an idea whose roots are found in the intellectual ferment following the publication of Darwin’s On the Origin of Species, a ferment that continued into the twentieth century. This speculation led a few decades later to the discussion of a new proposed end point in human evolution—the “Singularity.”

  First used by Teilhard de Chardin, the term “Singularity” describes a future threshold beyond which artificial intelligence will exceed that of human beings. Vernor Vinge, a California mathematician and computer scientist, captured the idea succinctly in a paper published twenty years ago, entitled “The Coming Technological Singularity,” in which he wrote, “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”

  In the current era, the idea of the Singularity has been popularized and enthusiastically promoted by Dr. Ray Kurzweil, a polymath, author, inventor, and futurist (and cofounder with Peter Diamandis of the Singularity University at the NASA Research Park in Moffett Field, California). Kurzweil envisions, among other things, the rapid development of technologies that will facilitate the smooth and complete translation of human thoughts into a form that can be comprehended by and contained in advanced computers. Assuming that these breakthroughs ever do take place, he believes that in the next few decades it will be possible to engineer the convergence of human intelligence—and even consciousness—with artificial intelligence. He recently wrote, “There will be no distinction, post-Singularity, between human and machine or between physical and virtual reality.”

  Kurzweil is seldom reluctant to advance provocative ideas simply because many other technologists view them as outlandish. Another close friend, Mitch Kapor, also a legend in the world of computing, has challenged Kurzweil to a $20,000 bet (to be paid to a foundation chosen by the winner) involving what is perhaps the most interesting long-running debate over the future capabilities of computers, the Turing Test. Named after the legendary pioneer of computer science Alan Turing, who first proposed it in 1950, the Turing Test has long served as a proxy for determining when computers will achieve human-level intelligence. If after conversing in writing with two interlocutors, a human being and a computer, a person cannot determine which is which, then the computer passes the test. Kurzweil has asserted that a computer will pass the Turing Test by the end of 2029. Kapor, who believes that human intelligence will forever be organically distinctive from machine-based intelligence, disagrees. The potential Singularity, however, poses a different challenge.

  More recently, the silicon version of the Singularity has been met by a competitive challenge from some biologists who believe that genetic engineering of brains may well produce an “Organic Singularity” before the computer-based “Technological Singularity” is ever achieved. Personally, I don’t look forward to either one, although my uneasiness may simply be an illustration of the difficult thinking that all of us have in store as these multiple revolutions speed ahead at an ever accelerating pace.

  THE CREATION OF NEW BODY PARTS

  Even though the merger between people and machines may remain in the realm of science fiction for the foreseeable future, the introduction of mechanical parts as replacements for components of the human body is moving forward quickly. Prosthetics are now being used to replace not only hips, knees, legs, and arms, but also eyes and other body parts that have not previously been replaceable with artificial substitutes. Cochlear implants, as noted, are used to restore hearing. Several research teams have been developing mechanical exoskeletons to enable paraplegics to walk and to confer additional strength on soldiers and others who need to carry heavy loads. Most bespoke in-ear hearing aids are already made with 3D printers. The speed with which 3D printing is advancing makes it inevitable that many other prosthetics will soon be printed.

  In 2012, doctors and technologists in the Netherlands used a 3D printer (described in Chapter 1) to fabricate a lower jaw out of titanium powder for an elderly woman who was not a candidate for traditional reconstructive surgery. The jaw was designed in a computer with articulated joints that match a real jaw, grooves to accommodate the regrowth of veins and nerves, and precisely designed depressions for her muscles to be attached to it. And of course, it was sized to perfectly fit the woman’s face.

  Then, the 3D digital blueprint was fed into the 3D printer, which laid down titanium powder one ultrathin layer at a time (thirty-three layers for each millimeter), fusing each layer in place with a laser beam, in a process that took just a few hours. According to the woman’s doctor, Dr. Jules Poukens of Hasselt University, she was able to use the printed jaw normally after awakening from her surgery, and one day later was able to swallow food.

  The 3D printing of human organs is not yet feasible, but the emerging possibility has already generated tremendous excitement in the field of transplantation because of the current shortage of organs. In the meantime, scientists hope to develop the ability to generate replacement organs in the laboratory for transplantation into humans. Early versions of so-called exosomatic kidneys (and livers) are now being grown by regenerative medicine scientists at Wake Forest University. This emerging potential for people to grow their own replacement organs promises to transform the field of transplantation.

  Doctors at the Karolinska Institute in Stockholm have already created and successfully transplanted a replacement windpipe by inducing the patient’s own cells to regrow in a laboratory on a special plastic “scaffolding” that precisely copied the size and shape of the windpipe it replaced. A medical team in Pittsburgh has used a similar technique to grow a quadriceps muscle for a soldier who lost his original thigh muscle to an explosion in Afghanistan, by implanting into his leg a scaffold made from a pig’s urinary bladder (stripped of living cells), which stimulated his stem cells to rebuild the muscle tissue as they sensed the matrix of the scaffolding being broken down by the body’s immune system. Scientists at MIT are developing silicon nanowires a thousand times smaller than a human hair that can be embedded in these scaffolds and used to monitor how the regrown organs are performing.

  As one of the authors of the National Organ Transplant Act in 1984, I learned in congressional hearings about the problems of finding enough organ donors to meet the growing need for transplantation. And having sponsored the ban on buying and selling organs, I remain unconvinced by the argument that this legal prohibition (which the U.S. shares with all other countries besides Iran) should be removed. The potential for abuse is already obvious in the disturbing black market trade in organs and tissues from people in poor countries for transplantation into people living in wealthy countries.

  Pending the development of artificial and regenerated replacement organs, Internet-based tools, including social media, are helping to address the challenge of finding more organ donors and matching them with those who need transplants. In 2012, The New York Times’s Kevin Sack reported on a moving example of how sixty different people became part of “the longest chain of kidney transplants ever constructed.” Recently, Facebook announced the addition of “organ donor” as one of the items to be updated on the profiles of its users.

  Another 3D printing company, Bespoke Innovations of San Francisco, is using the process to print more advanced artificial limbs. Other firms are using it to make numerous medical implants. There is also a well-focused effort to develop the capacity to print vaccines and pharmaceuticals from basic chemicals on demand. Professor Lee Cronin of the University of Glasgow, who leads one of the teams focused on the 3D printing of pharmaceuticals, said recently that the process they are working on would place the molecules of common elements and compounds used to formulate pharmaceuticals into the equivalent of the cartridges that feed different color inks into a conventional 2D printer. With a manageably small group of such cartridges, Cronin said, “You can make any organic molecule.”

  One of the advantages, of course, is that this process would make it possible to transmit the 3D digital formula for pharmaceuticals and vaccines to widely dispersed 3D printers around the world for the manufacturing of the pharmaceuticals on site with negligible incremental costs for the tailoring of pharmaceuticals to each individual patient.

  The pharmaceutical industry relied historically on large centralized manufacturing plants because its business model was based on the idea of a mass market, within which large numbers of people were provided essentially the same product. However, the digitization of human beings and molecular-based materials is producing such an extraordinarily high volume of differentiating data about both people and things that it will soon no longer make sense to lump people together and ignore medically significant information about their differences.

  Our new prowess in manipulating the microscopic fabric of our world is also giving us the ability to engineer nanoscale machines for insertion into the human body—with some active devices the size of living cells that can coexist with human tissue. One team of nanotechnologists at MIT announced in 2012 that they had successfully built “nanofactories” that are theoretically capable of producing proteins while inside the human body when they are activated by shining a laser light on them from outside the body.

  Specialized prosthetics for the brain are also being developed. Alongside pacemakers for hearts, comparable devices can now be inserted into brains to compensate for damage and disorders. Doctors are already beginning to implant computer chips and digital devices on the surface of the brain and, in some cases, deeper within the brain. By cutting a hole in the skull and placing a chip that is wired to a computer directly on the surface of the brain, doctors have empowered paralyzed patients with the ability to activate and direct the movement of robots with their thoughts. In one widely seen demonstration, a paralyzed patient was able to direct a robot arm to pick up a cup of coffee, move it close to her lips, and insert the straw between her lips so she could take a sip.

  Experts believe that it is only a matter of time before the increased computational power and the reduction in size of the computer chips will make it possible to dispense with the wires connecting the chip to a computer. Scientists and engineers at the University of Illinois, the University of Pennsylvania, and New York University are working to develop a new form of interface with the brain that is flexible enough to stretch in order to fit the contours of the brain’s surface. According to the head of R&D at GlaxoSmithKline, Moncef Slaoui, “The sciences that underpin bioelectronics are proceeding at an amazing pace at academic centers around the world but it is all happening in separate places. The challenge is to integrate the work—in brain-computer interfaces, materials science, nanotechnology, micro-power generation—to provide therapeutic benefit.”

  Doctors at Tel Aviv University have equipped rats with an artificial cerebellum, attached to the rat’s brain stem, that interprets information from the rest of the rat’s body. Using this information, the doctors are able to stimulate motor neurons to move the rat’s limbs. Although the work is at an early stage, experts in the field believe that it is only a matter of time before artificial versions of entire brain subsystems are built. Francisco Sepulveda, at the University of Essex in the U.K., said that the complexity of the challenge is daunting but that scientists see a clear pathway to success. “It will likely take us several decades to get there, but my bet is that specific, well-organized brain parts such as the hippocampus or the visual cortex will have synthetic correlates before the end of the century.”

  Well before the development of a synthetic brain subsystem as complex as the hippocampus or visual cortex, other so-called neuroprosthetics are already being used in humans, including prosthetics for bladder control, relief of spinal pain, and the remediation of some forms of blindness and deafness. Other neuroprosthetics expected to be introduced in the near future will, according to scientists, be able to stimulate particular parts of the brain to enhance focus and concentration, and, with the flip of a switch, to stimulate the neural connections associated with “practice” in order to enhance the ability of a stroke victim to learn how to walk again.

  “MODIFY THE KID”

  As implants, prosthetics, neuroprosthetics, and other applications in cybernetics continue to improve, the discussion about their implications has broadened from their use as therapeutic, remedial, and reparative devices to include the implications of using prosthetics that enhance humans. For example, the brain implants described above that can help stroke victims learn more quickly how to walk again can also be used in healthy people to enhance concentration at times of their choosing—to help them learn a brand-new skill, or to enhance their capacity for focus when they feel it is particularly important.

  The temporary enhancement of mental performance through the use of pharmaceuticals has already begun, with an estimated 4 percent of college students routinely using attention-focusing medications like Adderall, Ritalin, and Provigil to improve their performance on exams. Studies at some schools found rates as high as 35 percent. After an in-depth investigation of the use of these drugs in high schools, The New York Times reported that there was “no reliable research” on which to base a national estimate, but that a survey of more than fifteen schools with high academic standards yielded an estimate from doctors and students that the percentage of students using these substances “ranges from 15 percent to 40 percent.”

  The Times went on to report, “One consensus was clear: users were becoming more common … and some students who would rather not take the drugs would be compelled to join them because of the competition over class rank and colleges’ interest.” Some doctors who work with low-income families have started prescribing Adderall for children to help them compensate for the advantages that children from wealthy families have. One of them, Dr. Michael Anderson, of Canton, Georgia, told the Times he thinks of it as “evening the scales a little bit.… We’ve decided as a society that it’s too expensive to modify the kid’s environment. So we have to modify the kid.”
