To Be a Machine


by Mark O'Connell


  And then, shortly after I watched the documentary, I learned about a lecture Sandberg was due to deliver at Birkbeck College on the topic of cognitive enhancement. I made plans to go to London. It seemed as good a place as any to begin.

  An Encounter

  IT OCCURRED TO me, as I established myself in the back row of a packed lecture hall in Birkbeck and took brisk stock of the assembled crowd, that the future, such as it was, looked a lot like the past. Dr. Anders Sandberg’s lecture had been organized by a group called London Futurists, a kind of transhumanist salon that had been meeting regularly since 2009 to discuss topics of interest to aspiring posthumans: radical extension of life spans, mind uploading, increased mental capacity through pharmacological and technological means, artificial intelligence, the enhancement of the human body through prostheses and genetic modification. We had gathered here to contemplate a profound societal shift, a coming transfiguration of the human condition, and yet there was no ignoring the fact that we were an overwhelmingly male group. Aside from the fact that almost all these faces were lit by the pallid luminescence of smartphone screens, this could have been happening at almost any point in the last two centuries: a group, composed mainly of men, arranged in tiered seating in a room in Bloomsbury, there to listen to another man talk about the future.

  A middle-aged gentleman with vigorous red eyebrows approached the lectern and took command of the room. This was David Wood, the chair of the London Futurists—a prominent transhumanist and tech entrepreneur. Wood had been a founder of Symbian, the first mass-market smartphone operating system, and his company Psion had been an early pioneer in the handheld computer market. He talked, in a meticulous Scottish accent, of how the next ten years would see more “fundamental and profound changes to the human experience than in any preceding ten-year period in history.” He talked about the technological modification of brains, the refinement and enhancement of cognition itself.

  “Can we get rid,” he asked, “of some of the biases and mistakes in reasoning that we’ve all inherited from our biology? Instincts that served us well when we roamed the African savanna, but which are not now very much in our favor?”

  The question seemed to encapsulate the transhumanist worldview, its conception of our minds and bodies as obsolete technologies, outmoded formats in need of complete overhaul.

  He introduced Anders, who was these days a futurist of note, and a research fellow at Oxford’s Future of Humanity Institute—an organization, founded in 2005 with an endowment from the tech entrepreneur James Martin, where philosophers and other academics were charged with conjuring and thinking through various scenarios for the future of the human species. Anders was recognizable still as the priestly young man whose strange and solitary observance I’d watched in the YouTube video, but he was in his early forties now, a fleshier and more substantial figure, adhering more or less to the disheveled house style of professional scholarship—the rumpled suit, the air of abstracted conviviality.

  He spoke for the better part of two hours on the topic of intelligence, on how it might be increased at the level of the individual, and at the level of the species. He spoke of methods of cognitive augmentation, both existing and imminent—of education, smart drugs, genetic selection, brain implant technologies. He spoke of how as humans age, they lose their capacity to assimilate and retain information; life extension technologies, he allowed, would go some way toward addressing this situation, but we would also need to improve how our brains functioned through the course of our lives. He spoke of the social and economic costs of suboptimal mental performance, of how misplaced house keys alone—the time and energy invested in trying to find them—cost the U.K. economy some £250 million in lost GDP every year.

  “There are a lot of these little losses in society all the time,” he said, “because of stupid mistakes, forgetfulness, and so on.”

  This struck me as an extreme manifestation of positivism. Anders spoke of intelligence as essentially a problem-solving tool, a function of productivity and yield—as something closer to the measurable processing power of a computer than any irreducibly human quality. In a general sense, I was fundamentally opposed to this conception of the mind. And yet in a personal sense I could not help but reflect on the fact that I myself had, through my own absentmindedness, only that morning squandered about £150, having somehow managed to book a hotel room in London for the night before I’d arrived, and having subsequently had to fork out for another one. I had always been somewhat scattered and forgetful, but since becoming a father—and resulting, at least in part, from such early parental phenomena as interrupted sleep, general distraction, and too much time spent watching episodes of Thomas and Friends on YouTube—my processing power, my memory capacity, had begun noticeably to decrease. And so as much as I was temperamentally resistant to the profoundly instrumentalist view of human intelligence Anders was advancing in his lecture, I couldn’t help but feel that I could probably stand a little enhancement myself.

  The thrust of his lecture was that biomedical cognitive enhancements would facilitate improved acquisition and retention of mental ability, of what he referred to as “human capital,” allowing for better reasoning and functioning in the world. He addressed the questions of social justice that arose from this—questions of what he called “the fair distribution of brains”—given that those in a position to afford enhanced brains were likely to be those people already occupying an elite position within society. His suggestion, though, was that less intelligent people would wind up benefiting more from enhancement technologies than those who were already very intelligent, and that the overall effects of increased general intelligence would inevitably benefit society as a whole—a kind of trickle-down economics of intelligence.

  All of this—the setup, the situation—was utterly familiar to me, and yet utterly strange. I had lately abandoned the sinking ship of an academic career for the hardly less precarious vessel of freelance writing. I had used up several years of my unextended life span getting a PhD in English literature, only to confirm my suspicion that a PhD in English literature was never going to lead me to the promised land of actual employment. I had spent much of my twenties and thirties trying to pay attention to people standing at lecterns and saying things. And yet the sorts of things that Anders Sandberg was saying were very different to the sorts of things I was used to hearing from people standing at lecterns. I was, yes, sitting in the back of a lecture hall and trying to focus on the matter at hand, an activity in which I was deeply and intricately experienced. But in no sense was I among my people. In no sense was this my world.

  —

  After the lecture, a contingent of assorted futurists migrated to an oak-paneled pub in Bloomsbury for some early afternoon drinking. By the time I sat down at the table with my pint of bitter, word had spread around the group that I was writing a book on transhumanism and related matters.

  “You’re writing a book!” said Anders, apparently delighted by the idea. He pointed to a hardback volume that sat in front of me on the table, a cultural history of severed heads I had been carrying around with me since I’d acquired it earlier that day. “Is that the book you’re writing?”

  “What, this?” I said, unsure as to whether I was missing some intricate transhumanist joke about cryonic head storage, or possibly time travel.

  “No, that one’s already been written,” I said, unnecessarily. “I’m writing a book about transhumanism and related topics.”

  “Ah, excellent!” said Anders.

  I wasn’t sure what to say. I almost told him that the book I was planning to write might not be the sort of book that he, or transhumanists in general, would believe to be excellent. I felt suddenly conscious of myself as an interloper among these rationalists and futurists, an odd and perhaps even slightly pitiful figure, with my antediluvian notebook and pen, an emissary of letters in the world of zeros and ones.

  I noted that Anders wore a pendant around his neck, a thing with a large medallion not unlike those devotional medals worn by especially pious Catholics. I was about to ask him about it when his attention was seized by an attractive Frenchwoman who wanted to talk about brain uploading.

  An aristocratic young man who had been sitting to my left now turned toward me and asked about this book I was writing. He was elegantly attired, his hair punctiliously crafted. His name was Alberto Rizzoli, he told me, and he was from Italy. (At one point, in reference to my book, he mentioned that his family used to be in the publishing business. Only later that evening, as I glanced through my notes, did it occur to me that Alberto was surely a scion of the Rizzoli media dynasty, which would make him the grandson of Angelo Rizzoli, who had produced Fellini’s La Dolce Vita and 8½.) He was studying at the Cass Business School in London, but was also working on a beta-stage tech start-up, which provided 3D printing materials for primary classrooms. He was twenty-one years old, and had considered himself a transhumanist since his teens.

  “I certainly can’t imagine myself at thirty,” he said, “without some kind of enhancement.”

  I myself was thirty-five, like Dante at the time of his vision—midway upon the journey of my life. And I was, for better or worse, unenhanced. As disturbed as I was by the idea of the cognitive augmentations Anders had spoken of in his lecture, I was nonetheless intrigued by the thought of what such technologies might do for me. Such technologies might, for instance, have freed me from the burden of having to take notes while talking to transhumanists, allowing me instead to record everything through some internal nanochip for purposes of later perfect recall, as well as, say, furnishing me with the extra-contextual information—in, as it were, real time—that the grandfather of the young Italian man I was speaking with had produced a bunch of Fellini films.

  A silver-haired man in a sport coat and expensive-looking shirt sat down across from Alberto and me. He had positioned himself snugly beside Anders, and was waiting for a gap in his conversation with the Frenchwoman. In the meantime, he had helped himself to a couple of pistachios from Anders’s snack bowl, one of which he had fumbled on the way to his mouth and dropped down the neck of his shirt, open to the ideally entrepreneurial three-to-four buttons. I watched him as he hooked a finger through a gap between two lower buttons and poked around momentarily before capturing the truant pistachio and popping it discreetly into his mouth. Our eyes met as he did so, and we smiled blandly in each other’s direction. He handed me a card, from which I learned that he was in the professional futurism business. (I considered making a lighthearted joke about how a business card, attractive as this particular one was, seemed an oddly old-fashioned method for a professional futurist to be announcing his status as such, but I thought better of it, and crammed the card into the section of my wallet that served as the somewhat overcrowded final resting place of such printed disjecta.)

  He had started out in artificial intelligence research, he said, but now made his living as a keynote speaker at business conferences, informing corporations and business leaders of trends and technologies that were going to disrupt their particular sectors. He spoke as though he were doing a brisk and slightly distracted run-through of a TED talk; his physical gestures were both emphatic and relaxed, suggesting a resolute optimism toward a horizon of vast and terrible disruptions. He spoke to me of those changes and opportunities that were at hand, of a near future in which AI would revolutionize the financial sector, and in which a great many lawyers and accountants would become literally redundant, their expensive labor made superfluous by ever smarter computers; he spoke to me of a future in which the law itself would be inscribed in the mechanisms through which we act and live, in which cars would automatically fine their drivers for breaking speed limits: a future in which there would in fact be no need for such things as drivers, or car manufacturers, given that vehicles would soon be sailing calmly out of showrooms like ghost-ships, still warm from the 3D printer from which they had lately emerged, according to the precise specifications of the consumer for whose home or workplace they were now setting course.

  I told him that the one reassuring aspect of my job as a writer was that I was unlikely to get replaced by a machine anytime soon. I might not make a lot of money, I admitted, but I was at least in no immediate danger of being ejected outright from the marketplace by a gadget that did exactly what I did, but more cheaply and efficiently.

  The man tilted his head from one side to the other, pursing his lips, as though considering whether to permit me this limited self-consolation.

  “Sure,” he conceded. “I mean certain kinds of journalism will probably not be replaced by AI. Opinion writing, in particular. People will probably always want to read opinions generated by actual humans.”

  Although hot takes were under no immediate threat, certain plays and films and works of prose fiction had, he said, already been written to order by computer programs. It was true that these plays and films and works of prose fiction were not very good, or so he had heard, but it was also true that computers tended to improve very quickly at things they initially did not do well. His point, I supposed, was that I and people like me were just as expendable as everyone else, just as fucked by the future. I considered asking him whether he thought computers might eventually replace even keynote speakers, whether the thought leaders of the next decade might fit in the palms of our hands, but realized that whatever answer he provided to this question would be cause for smug vindication on his part anyway, and so I resolved instead to include a description in my book of his retrieving a dropped pistachio from inside his expensive shirt—an act of petty and futile vengeance, and the kind of absurd irrelevance that would certainly be beneath the dignity and professional discipline of an automated writing AI.

  Anders and the attractive Frenchwoman to my right were engaged in what seemed to me an impenetrably technical discussion about the progress of research into mind uploading. The conversation had turned to Ray Kurzweil, the inventor and entrepreneur and director of engineering at Google who had popularized the idea of the Technological Singularity, an eschatological prophecy about how the advent of AI will usher in a new human dispensation, a merger of people and machines, and a final eradication of death. Anders was saying that Kurzweil’s view of brain emulation, among other things, was too crude, that it totally ignored what he called the “subcortical mess of motivations.”

  “Emotions!” said the Frenchwoman, emotionally. “He doesn’t need emotions! That is why!”

  “That might be true,” said Alberto.

  “He wants to become a machine!” she said. “That is what he really wants to be!”

  “Well,” said Anders, poking thoughtfully among the bowl of empty shells, searching in vain for an uneaten pistachio. “I also want to become a machine. But I want to be an emotional machine.”

  —

  When I finally spoke at length with Anders, he expanded on this desire of his to become a machine, this literal aspiration toward a condition of hardware. As one of the foremost thinkers within the transhumanist movement, he was known as much as anything for his advocacy and theorizing of the idea of mind uploading, of what was known among the initiates as “whole brain emulation.”

  It wasn’t, he insisted, that he wanted this right away; even if such a thing might be possible in the near future—and he stressed that we were nowhere close—it wouldn’t be desirable for humans to start getting uploaded into machines all of a sudden anyway. He spoke of the potential dangers of the sort of sudden convergence that techno-millenarians like Kurzweil refer to as the Singularity.

  “What would be a nice scenario,” he said, “is that we first get smart drugs and wearable technologies. And then life extension technologies. And then, finally, we get uploaded, and colonize space and so on.” If we managed not to extinguish ourselves, or to be extinguished, what we now think of as humanity would be the nucleus, he believed, of some greatly more vast and brilliant phenomenon that would spread across the universe and “convert a lot of matter and energy into organized form, into life in a generalized sense.”

  He had held this view, he said, since childhood, since consuming wholesale the contents of the Stockholm municipal library’s sci-fi section. In high school he read scientific textbooks for pure diversion, and kept a scrapbook of equations he found especially stimulating; he was excited, he said, by the movement of the logic, the lockstep progression of the thought—by the abstract symbols themselves more than the actual things they signified.

  One especially rich source of such equations was a book called The Anthropic Cosmological Principle by John D. Barrow and Frank J. Tipler. At first, Anders read the book primarily for these tantalizing calculations—“weird formulas,” as he put it, “about things like electrons orbiting hydrogen atoms in higher dimensions”—but like a kid with a copy of Playboy who eventually turns his attention to a Nabokov story, he began to take an interest in the text that surrounded them. The view of the universe advanced by Barrow and Tipler was of an essentially deterministic mechanism, in which “intelligent information processing must come into existence,” and increase exponentially over time. This teleological premise led Tipler, in his later work, to the idea of the Omega Point, a projection whereby intelligent life takes over all matter in the universe, leading to a cosmological singularity, which he claims will allow future societies to resurrect the dead.

  “The idea was a revelation to me,” Anders told me. “This theory that life will eventually control all matter, all energy, and calculate an infinite amount of information—that was kind of awesome for an information-obsessed teenager. This was something I realized we needed to work on.”
