g always holds in the long term. And he is not some lone-wolf academic with an eccentric theory of inequality. Scores of well-respected economists have given ringing endorsements to his book’s central thesis, including economics Nobel laureates Robert Solow, Joseph Stiglitz, and Paul Krugman. Krugman has written that
Piketty doesn’t just offer invaluable documentation of what is happening, with unmatched historical depth. He also offers what amounts to a unified field theory of inequality, one that integrates economic growth, the distribution of income between capital and labor, and the distribution of wealth and income among individuals into a single frame.*
The only solution to this growing problem, it seems, is the redistribution of the wealth concentrating within a tiny elite, using instruments such as aggressively progressive taxation (as exists in some European countries, which show a much better distribution of wealth). But the obvious difficulty is that political policymaking is itself greatly affected by the level of inequality, and this vicious positive-feedback loop makes things even worse. It is clearly the case now in the United States not only that the rich can hugely influence government policy directly but also that elite forces shape public opinion and affect election outcomes with large-scale propaganda efforts through media they own or control. This double-edged sword attacks and shreds democracy itself.
The resultant political dysfunction makes it difficult to address our most pressing problems—for example, lack of opportunity in education, lack of availability of quality healthcare, man-made climate change, and not least the indecent injustice of inequality itself. I’m not sure there is any way to stop, anytime soon, the growth in inequality we have seen over the last four or five decades, but I do believe it is one of the important things we have learned more about in the last couple of years. Unfortunately the news is not good.
The Age of Visible Thought
Peter Gabriel
Singer-songwriter, musician, humanitarian activist
It now seems inevitable that the decreasing cost and increasing resolution of brain-scanning systems, accompanied by a relentless increase in computer power, will take us soon to the point where our own thinking may be visible, downloadable, and open to the world in new ways.
It was the news that brain scanners are starting to be developed at consumer price levels that obsessed me this year.
Through the work of Mary Lou Jepsen, I was introduced to the potential of brain-reading devices and learned that the patterns generated while watching a succession of varied videos would provide the fundamental elements for connecting thought to image. A starting point was the work pioneered at Jack Gallant’s lab at UC Berkeley in 2011, which demonstrated that patterns of brain activity recorded with fMRI while a subject viewed an assortment of videos could be used to translate thoughts into digital images.
Recording more and more images and their corresponding brain patterns boosts the vocabulary in the individual’s visual dictionary of thought. Accuracy increases greatly with the quantity and quality of the data and with the power of the decoding algorithms. Jepsen persuaded me that this is realizable within a decade, within the cost range of consumer electronics, and in a form that appeals to non-techies. Laborious techniques and huge, power-hungry, multimillion-dollar systems based on magnetic fields will be succeeded by optical techniques where the advantages of consumer electronics can assert themselves; the power of AI algorithms will do the rest. This science-fiction future is not only realizable but, because of the enormous potential benefits, will inevitably be realized.
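For readers who want a concrete picture of what such a decoding pipeline involves, here is a minimal, purely illustrative sketch in Python. It is not the Gallant-lab method or Jepsen’s hardware; it only shows the general recipe described above: collect paired samples of image features and brain patterns, fit a decoder, and match a new brain pattern against a library of candidate clips. All data, dimensions, and model choices below are invented for illustration.

```python
# Toy sketch of video-based brain decoding (synthetic data, not a real pipeline):
# learn a linear map from voxel activity to image features, then reconstruct a
# viewed frame by nearest-neighbor lookup in a library of candidate clips.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_voxels, n_features = 500, 64        # hypothetical fMRI voxels; image-feature dims
n_train, n_library = 2000, 5000       # training frames; candidate clip library size

# Hypothetical "visual dictionary": paired (brain pattern, image feature) samples.
train_features = rng.normal(size=(n_train, n_features))
true_weights = rng.normal(size=(n_features, n_voxels))
train_voxels = train_features @ true_weights + rng.normal(scale=2.0, size=(n_train, n_voxels))

# Fit the decoder: predict image features from voxel activity.
decoder = Ridge(alpha=10.0).fit(train_voxels, train_features)

# Library of candidate clips (their image features) to match against.
library = rng.normal(size=(n_library, n_features))

def decode(voxel_pattern):
    """Return the index of the library clip whose features best match the brain pattern."""
    predicted = decoder.predict(voxel_pattern[None, :])[0]
    sims = library @ predicted / (np.linalg.norm(library, axis=1) * np.linalg.norm(predicted))
    return int(np.argmax(sims))

# Decode one held-out "thought": a new brain pattern evoked by an unseen frame.
test_features = rng.normal(size=n_features)
test_voxels = test_features @ true_weights + rng.normal(scale=2.0, size=n_voxels)
print("best-matching clip:", decode(test_voxels))
```

In this framing, the accuracy gains the paragraph mentions come from richer feature spaces, many more training clips per viewer, and better-regularized decoders, which is why quantity and quality of data matter so much.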
And so here we are: Our thoughts themselves are about to take a leap out of our heads, from our brains to computers, to the Internet, and to the world. We are entering the Age of Visible (and Audible) Thought. This will surely affect human life as deeply as any technology our imagination has yet devised, or any evolutionary advance.
The essence of who we are is contained in our thoughts and memories, which are about to be opened like tin cans and poured onto a sleeping world. Inexpensive scanners would enable us to display our thoughts and access those of others who do the same. The consequences and ethics of this have barely been considered. I imagine the pioneers of this research enjoying a heady Oppenheimer cocktail of anticipation and foreboding, of exhilaration and dread. Our task is to ensure that they do not feel alone or ignored.
One giant tech company is believed to have already backed off exploring the development of brain reading for Visual Thought, apparently fearing the potentially negative repercussions and controversy over privacy. The emergence of this suite of technologies will have enormous impact on the everyday ways we live and interact and can clearly transform, positively and negatively, our relationships, aspirations, work, creativity, and techniques for extracting information. Those not comfortable swimming in these transparent waters will not flourish. Perhaps we’ll need to create “swimming lessons” to teach us how to be comfortable being open, honest, and exposed—ready to navigate these waters of visible thought.
What else happens in a World of Visible Thought? One major difference is that as thought becomes closer and closer to action, with shorter feedback loops accelerating change, timescales collapse and the cozy security blanket of a familiar slowness evaporates. A journey for my grandfather from London to New York shrank from a perilous three weeks to a luxurious three hours for my generation on the Concorde. Similarly, plugging thought directly into the material world will all but eliminate the comfort of time lag. If I look outside at the streets, the buildings, the cars, I am just looking at thought turned into matter, the idea in its material form. With 3D printing and robotics, that entire process can become nearly instantaneous.
The past year has witnessed robots building bridges and houses, but these currently work from 3D blueprints. Soon we’ll be able to plug in the architect directly and, with a little bit of fine tuning, see her latest thoughts printed and assembled into a building immediately. The same goes for film, for music, and for every other creative process. Barriers between imagination and reality are about to burst open. Do we ignore it, or do we get into boat-building, like Noah? Here comes the flood. . . .
Our Changing Conceptions of What It Means to Be Human
Howard Gardner
Hobbs Professor of Cognition and Education, Harvard Graduate School of Education; author, Truth, Beauty, and Goodness Reframed
We live at a time of great, perhaps unprecedented, advances in digital technology (hardware/software) and biological (genetic/brain) research and applications. It’s easy to see these changes as wholly or largely positive, although as a card-carrying member of the pessimists’ society I can easily point to problematic aspects as well. But irrespective of how full (or empty) you believe the glass to be, a powerful question emerges: To what extent will our conceptions of what it means to be human change?
History records huge changes in our species over the last 5,000 years or so—and presumably prehistory would fill in the picture. But scholars have generally held the view that the fundamental nature of our species—the human genome, so to speak—has remained largely the same for at least 10,000 years and possibly much longer. As Marshall McLuhan argued, technology extends our senses; it does not fundamentally change them. Once one begins to alter human DNA (for example, through CRISPR) or the human nervous system (by inserting mechanical or digital devices), we are challenging the very definition of what it means to be human. And once one cedes high-level decisions to digital creations, or these artificially intelligent entities cease to follow the instructions programmed into them and rewrite their own processes, our species will no longer be dominant on this planet.
In a happy scenario, such changes will take place gradually, even imperceptibly, and they may lead to a more peaceful and even happier planet. But as I read the news of the day, and of the last quarter century, I discern little preparedness on the part of human beings to accept a lesser niche, let alone to follow Neanderthals into obscurity. And so I expect tomorrow’s news to highlight human resistance to fundamental alterations in our makeup, and quite possibly feature open warfare between old and newly emerging creatures. But there will be one difference from times past: Rather than looking for insights in the writings of novelists like Aldous Huxley or George Orwell or Anthony Burgess, we’ll be eavesdropping on the conversations among members of the third culture.
Complete Head Transplants
Kai Krause
Software pioneer; philosopher; author, A Realtime Literature Explorer
Early this year an old friend, a professor of neurology, sent me an article from a medical journal, Surgical Neurology International—at first glance, predictably, a concoction of specialist language. The “Turin Advanced Neuromodulation Group” is describing “Cephalosomatic Anastomosis” (CSA), to be performed with “a nanoknife made of a thin layer of silicon nitride with a nanometer sharp-cutting edge.”
Only slowly does it become clear that they are talking about something rather unexpected: “Kephale,” Greek for head; “Somatikos,” Greek for body; “Anastomosis,” Greek for a joining together—that prosaic CSA stands for a complete head transplantation. And that reverberated with me, the implications being literally mind-boggling.
The thought of a functioning brain reconnected to an entirely new body opens up any number of speculations. And has done so in countless sci-fi books and B movies. But there is a lot to consider.
The author, Italian surgeon Sergio Canavero, announced a few months later that he had a suitable donor for the head part and suddenly made it sound quite real, adding tangible details: The operation, to be performed in 2017, would take place in China, require a team of 150 specialists, take 36+ hours, and cost $15+ million. Then it hit the mass media. Many responses revolved around the ethics of such an action, using the F-word a lot (and I mean “Frankenstein”) and debating the scientific details of the spinal-cord fusion.
My stance on the ethical side is biased by a personal moment: In the mid-nineties I visited Stephen Hawking in Cambridge for a project, and he later visited Santa Barbara—both interactions, up close, left me with an overpowering impression. There was that metaphor of “the mind trapped in a body,” playing out in all its deep and poignant extreme—the most intelligent of minds weighed down so utterly by the near useless shell of a body. A deep sadness would overcome anyone witnessing it—far beyond the Hollywood movie adaptation.
There’s the rub, then: Who could possibly argue against this man’s choosing to lengthen his lifetime and gain a functioning body should such an option exist? Could anyone deny him the right to try, if medicine were up to the task? (Hawking is not a candidate even in theory, his head being afflicted by the disease as well, but he does serve as a touching and tragic example of that ethical side.)
Another personal connection for me is this: Critics called it “playing God” (imagine!). Human hubris. Where would the donors come from? Is this medicine just for the rich? Now, consider: The first such operation leads to the recipient’s death after eighteen days. It is repeated, and the subsequent 100 operations lead to nearly 90 percent of the patients not surviving past the two-year mark. No, that is not a prognostication for CSA; I am recalling events from nearly fifty years ago. In December 1967, Christiaan Neethling Barnard performed the first human heart transplantation, the eyes of the world upon him; his face was subsequently plastered on magazine covers across the globe. I was ten and remembered his double-voweled name as much as the unfathomable operation itself. He was met with exactly the same criticism, the identical ethical arguments.
After the dismal survival rate, the initial enthusiasm turned around, and a year later those condemning the practice were gaining ground. Only after the introduction of ciclosporin vastly improved the immune-rejection problem did the statistics turn in his favor; tens of thousands of such operations have since been performed. Every stage of progress has had critical voices loudly extrapolating curves into absurdity; back then, as now, there was doomsday talk of “entire prison populations harvested for donors,” and such.
Sadly, watching videos of Canavero on the Web is rather cringeworthy: slinging hyperbole such as “the world will never be the same,” naming his protocols HEAVEN and GEMINI, squashing a banana representing a damaged spinal cord versus a neatly sliced one to illustrate his ostensibly easy plan. He repeatedly calls it “fusing spaghetti” and even assessed the chances for his Russian donor at “90 percent to walk again.”
The Guardian notes that “he published a book, Donne Scoperte, or Women Uncovered, that outlined his tried-and-tested seduction techniques.” It seems clear that there is little place for levity when he belittles the details and glosses over the reality: millions of quadriplegic victims closely eyeing the chances of truly re-fusing spinal cords.
The story here is not about one celebrity poseur. In my view, it cannot happen by 2017, not by a long shot. But 2027, ’37, ’47? Looking backward, you can see the steady increase in surgical complexity that makes it almost inevitable this will one day be possible. And then the truly interesting questions come into play. If phantom limbs bring serious psychological issues, what would an entire phantom body conjure up? The self-image is such a subtle process—the complexity of signals, fluids, and messenger chemistry—how could it all possibly attain a state remotely stable, let alone “normal”?
Christiaan Barnard, asked why anyone would choose such a risky procedure, replied, “For a dying person, a transplant is not a difficult decision. If a lion chases you to a river filled with crocodiles, you will leap into the water convinced you have a chance to swim to the other side. But you would never accept such odds if there were no lion.”
Me, I dread even the dentist’s waiting room. But thirty years hence, maybe I, too, would opt for the crocodiles. If Hawking can survive longer, by all means he should. Some other characters I can think of, their best hope lies in acquiring a new head. Thus I am of two minds about complete head transplants.
The En-Gendering of Genius
Rebecca Newberger Goldstein
Philosopher, novelist; Visiting Professor, NYU; author, Plato at the Googleplex
For most of its history our species has systematically squandered its human capital by spurning the creative potential of half its members. Higher education was withheld from women in just about every place on Earth until the 20th century, with the few who persevered before then considered “unsexed.” Only in the last few decades has the gap closed so significantly that, at least in the U.S., women have earned more bachelor’s degrees than men every year since 1982 and, since 2010, the majority of doctoral degrees as well. This recent progress only underscores the past’s wasteful neglect of human resources.
Still, the gender gap has stubbornly perpetuated itself in certain academic fields, usually identified as STEM—science, technology, engineering, and mathematics—and this is as true in Europe as in the U.S. A host of explanations have been offered for the continued male dominance—some only in nervous, hushed voices—along with recommendations for overcoming the gap. If the underrepresentation of women in STEM isn’t the result of innate gender differences in interests and/or abilities (this last, of course, being the possibility that can only be whispered), then it’s important for us to overcome it. We’ve got enormously difficult problems to solve, both theoretical and practical, and it’s lunacy not to take advantage of all the willing and able minds that are out there.
Which is why I found a 2015 article published in Science by Andrei Cimpian and Sarah-Jane Leslie big news.* First of all, their data show that the lingering gender gap shouldn’t be framed in terms of STEM versus non-STEM. There are STEM fields—for example, neuroscience and molecular biology—that have achieved 50-percent parity in the number of PhDs earned by men and women in the U.S. And there are non-STEM fields—for example, music theory and composition (15.8 percent) and philosophy (31.4 percent)—where the gender gap rivals such STEM fields as physics (18 percent), computer science (18.6 percent), and mathematics (28.6 percent). So that’s the first surprise that their research delivers: that it’s not science per se that, for whatever reasons, produces stubborn gender disparity. And this finding in itself somewhat alters the relevance of the various hypotheses offered for the tenacity of the imbalance.
The hypothesis that Leslie and Cimpian tested is one I’ve rarely seen put on the table, and surely not in a testable form. They call it the FAB hypothesis—for field-specific ability beliefs. It focuses on the belief about whether success in a particular field requires pure innate brilliance, the kind of raw intellectual power that can’t be taught and for which no amount of conscientious hard work is a substitute. One could call it the Good-Will-Hunting quotient, after the 1997 movie featuring Matt Damon as a janitor at MIT who now and then, in the dead of night, pauses to put down his mop in order to effortlessly solve the difficult problems left scribbled on a blackboard.
To test the FAB hypothesis, the researchers sent out queries to practitioners—professors, postdocs, and graduate students—in leading U.S. universities, probing the extent to which the belief in innate brilliance prevailed in the field. In some fields, success was viewed as more a function of motivation and practice, while in others the Good-Will-Hunting quotient was more highly rated.
And here’s the second surprise: the strength of the FABs in a particular field predicts the percentage of women in that field more accurately than other leading hypotheses, including field-specific variation in work/life balance and reliance on skills for systematizing vs. empathizing. In other words, what Cimpian and Leslie found is that the more success within a field was seen as a function of sheer intellectual firepower, with words such as “gifted” and “genius” not uncommon, the fewer the women. The FAB hypothesis cut cleanly across the STEM/non-STEM divide.
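To make concrete what “predicts more accurately” means here, the following is a toy sketch, not the authors’ actual analysis: regress the share of women among PhD recipients in each field on a field-level brilliance-belief score and on competing predictors, and compare how well each one alone explains the data. The percentages are the ones quoted above; the field list and all predictor scores are invented placeholders.

```python
# Toy sketch of comparing predictors of the gender split across fields.
# Predictor values are hypothetical; only the PhD percentages come from the essay.
import numpy as np
from sklearn.linear_model import LinearRegression

fields = ["neuroscience", "molecular biology", "physics", "computer science",
          "mathematics", "philosophy", "music composition"]

# One row per field: [FAB score, work/life demands, systematizing emphasis] (all invented).
X = np.array([
    [3.2, 4.1, 3.0],
    [3.4, 4.3, 3.1],
    [5.1, 4.0, 4.6],
    [4.9, 3.8, 4.5],
    [5.3, 3.5, 4.4],
    [5.0, 3.2, 3.9],
    [5.2, 3.6, 3.3],
])
# Share of PhDs earned by women, per the essay (parity fields set to 50).
pct_women_phd = np.array([50.0, 50.0, 18.0, 18.6, 28.6, 31.4, 15.8])

# Fit each predictor separately and report how much variance it explains (R^2).
for name, col in zip(["FAB", "work/life", "systematizing"], X.T):
    r2 = LinearRegression().fit(col[:, None], pct_women_phd).score(col[:, None], pct_women_phd)
    print(f"{name:>14}: R^2 = {r2:.2f}")
```

The published finding is the analogue of the first line winning this comparison: the field’s brilliance-belief score accounts for the variation in women’s representation better than the rival hypotheses do.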