We’re Still Behind
Mary Catherine Bateson
Professor emerita, George Mason University; visiting scholar, Sloan Center on Aging and Work, Boston College; author, Composing a Further Life
On October 4, 1957, the Soviet Union launched Sputnik, the planet’s first artificial satellite. That was very big news, the beginning of an era of space exploration in which many nations launched satellites, some spending long periods in orbit. In the weeks after Sputnik, however, another story played out: the recognition that U.S. education was falling behind, not only in science but in other fields such as geography and foreign languages, prompted a range of reforms. This is still true. We are not behind at the cutting edge, but we are behind in general broad-based understanding of science, and this is not tolerable for a democracy in an increasingly technological world.
The most significant example is climate change. It turns out, for instance, that many basic terms are unintelligible to newspaper readers. Recently I encountered the statement that “a theory is just a guess—and that includes evolution,” not to mention most of what has been reconstructed by cosmologists about the formation of the universe. When new data is published that involves a correction or expansion of earlier work, this is taken to indicate weakness rather than the great strength of scientific work as an open system, always subject to correction by new information. When the winter temperature dips below freezing, you hear, “This proves that the Earth is not warming.” Most Americans are not clear on the difference between “weather” and “climate.” The U.S. government supports the world’s most advanced research on climate, but the funds to do so are held hostage by politicians convinced that climate change is a hoax. And we can add trickle-down economics and theories of racial and gender inferiority to the list of popular prejudices that many Americans believe are ratified by science.
Among the popular misconceptions of scientific concepts is a skewed view of cybernetics as dealing only with computers. It is true that key concepts developed in the field of cybernetics resulted in computers as an important by-product, but the more significant achievement of cybernetics was a new way of thinking about causation, now more generally referred to as systems theory. Listen to the speeches of politicians proclaiming their intent to solve problems like terrorism. It’s like asking for a single pill that will “cure” old age. If you don’t like x, you look for an action that will eliminate it, without regard to the side effects (bombing ISIL increases hostility, for example) or the effects on the user (consider torture). Decisions made with overly simple models of cause and effect are both dangerous and unethical.
The news that has stayed news is that American teaching of science is still in trouble, and that errors of grave significance are made based on overly simple ideas of cause and effect, all too often exploited and amplified by politicians.
Neural Hacking, Handprints, and the Empathy Deficit
Daniel Goleman
Psychologist, science journalist; author, A Force for Good: The Dalai Lama’s Vision for Our World
When I worked as a journalist at the science desk of the New York Times, our editors were constantly asking us to propose story ideas that were new, important, and compelling. The potential topics in science news are countless, from genetics to quantum physics. But if I were at the Times today, I’d pitch three science stories, all of which are currently under the collective radar, and each of which continues to unfold and will have mounting significance for our lives in years ahead.
For one: epigenetics. With the human genome mapped, the next step has been figuring out how it works, including what turns all those bits of genetic code on and off. Here everything from our metabolism to our diet to our environment and habits comes into play. A case in point is neuroplasticity. First considered seriously a decade or so ago, neuroplasticity—the brain’s constant reshaping through repeated experiences—presents a potential for neural hacking apps. As neuroscientists like Judson Brewer at Yale and Richard Davidson at the University of Wisconsin-Madison have shown, we can choose which elements of brain function we want to strengthen through sustained mind training. Do you want to better regulate your emotions, enhance your concentration and memory, become more compassionate? Each of these goals means strengthening distinct neural circuitry through specific, bespoke mental exercise, which might one day become a new kind of daily fitness routine.
For another: industrial ecology as a technological fix. This new discipline integrates such fields as physics, biochemistry, and environmental science with industrial design and engineering to create a new method—life-cycle assessment (or LCA)—for measuring the ecological costs of our materialism. LCA gives a hard metric for how something as ubiquitous as a mobile phone affects the environment and public health at every stage in its life cycle. This methodology gives us a fine-grained lens on how human activities degrade the global systems that support life and points to specific changes that would bring the most benefit. Some companies are using LCA to change how their products are made, so that they will replenish rather than deplete. As work at the Harvard T. H. Chan School of Public Health illustrates, this means using LCA to shift away from the footprint metric (how much damage we do to the planet) to the handprint—measuring the good we do, or how much we reduce our footprint. A news peg: Companies are about to release the first major net-positive products, which, over their entire life cycle, replenish rather than deplete.
Finally: the inverse relationship between power and social awareness, which integrates psychology into political science and sociology. Ongoing research at the University of California at Berkeley by psychologist Dacher Keltner, and at other research centers around the world, shows that people who are higher in “social power”—through wealth, status, rank, or the like—pay less attention, in face-to-face encounters, to those who hold less power. Lessened attention means lessened empathy and understanding. Thus, those who wield power (such as wealthy politicians) have virtually no sense of how their decisions affect the powerless. Movements like Occupy, Black Lives Matter, and the failed Arab Spring can be read as attempts to overcome this divide. Such an empathy deficit will augment political tensions far into the future. Unless, perhaps, those in power follow Gandhi’s dictate to consider how their decisions affect “the poorest of the poor.”
Send in the Drones
Diana Reiss
Professor, Department of Psychology, Hunter College; author, The Dolphin in the Mirror
The increasing use of drone technology is revolutionizing wildlife science and changing the kinds of things we can observe. As a marine mammal scientist who studies cetaceans—dolphins and whales—I see how drones afford extended perception, far less intrusive means of observing and documenting animal behavior, and new approaches to protecting wildlife. Drones (referred to more formally as UAVs, for unmanned aerial vehicles) bring with them a new set of remote-sensing and data-collection capabilities.
The Holy Grail of observing animals in the wild is not being there, because your very presence is often a disturbing influence. Drones are a solution to this problem. Imagine the feeling of exhilaration and presence as your drone soars above a socializing pod of whales or dolphins, enabling you to spy on them from on high. We can now witness much of what was the secret life of these magnificent mammals. Myriad behaviors and nuances of interactions that could not be seen from a research boat—or would have been interrupted by its approach—are now observable.
Animal health assessments and animal rescues are being conducted by veterinarians and researchers with the aid of drones. For example, the Whalecopter, a small drone developed by research scientists at Woods Hole Oceanographic Institution in Massachusetts, took high-resolution photographs of whales to document fat levels and skin lesions and then hovered in at closer range to collect samples of whale breath to study bacteria and fungi in their blow. NOAA scientists in Alaska are using drones to help them monitor beluga-whale strandings in Cook Inlet, providing critical information about the animals’ condition, location, number, relative age, and whether they are submerged or partially stranded. The relayed images from drones are often clearer than those obtained by traditional aerial surveys. Even if a drone cannot save an individual whale, getting more rapidly to a doomed whale enables scientists to conduct a necropsy on fresh tissue and determine the cause of death, which could help other whales survive.
Patrol drones are already being used to monitor and protect wildlife from poachers. One organization, Air Shepherd, has been deploying drones in Africa to locate poachers seeking elephant ivory and rhino horns. Programmed drones monitor high-traffic areas where the animals are known to congregate—areas known also to the poachers. They have been effective in locating poachers and informing the authorities of their whereabouts.
This is a new era of wildlife observation and monitoring. In my field, a future generation of cetacean-seeking drones may be around the corner—drones programmed to find cetacean-shaped forms and follow them. I can envision using a small fleet of “journalist drones” to monitor and provide real-time video feeds on the welfare of various species in our oceans, on our savannahs, and in our jungles. We might call it Whole World Watching (WWW) and create a global awareness, a more immediate connection between the world’s human population and the other species sharing our planet.
That Dress
Susan Blackmore
Psychologist; author, Consciousness: An Introduction
Could the color of a cheap dress create a meaningful scientific controversy? In 2015, a striped, body-hugging, £50 dress did just that. In February, Scottish mother Cecilia Bleasdale sent her family a poor-quality photo of a dress she bought for her daughter’s wedding. Looking at the image, some people saw the stripes as blue and black, others as white and gold. Quickly posted online, “that dress” was soon mentioned nearly half a million times. This simple photo had everything a meme needs to thrive: It was easy to pass on, accessible to all, and sharply divided opinions. #thedress was, indeed, called the meme of the year and even a “viral singularity.” Yet it did not die out as fast as it had risen; unlike most viral memes, this one prompted deeper and more interesting questions.
Scientists quickly picked up on the dispute and garnered some facts. Seen in daylight, the actual dress is indisputably blue and black. It is only in the slightly bleached-out photograph that white and gold is seen. In a study of 1,400 respondents who’d never seen the photo before, 57 percent saw blue and black, 30 percent saw white and gold, and about 10 percent saw blue and brown. Women and older people more often saw white and gold.
This difference is not like disputes over whether the wallpaper is green or blue. Nor is it like ambiguous figures, such as the famous Necker cube, which appears tilted toward or away from the viewer, or the duck/rabbit or wife/mother-in-law drawings. People typically see these bi-stable images either way and flip their perception between views, getting quicker with practice. Not so with “that dress.” Only about 10 percent of people could switch colors. Most saw the colors, resolutely, one way only and remained convinced they were right. What was going on became a genuinely interesting question for the science of color vision.
Vision science has long shown that color is not the property of an object, even though we speak of it as though it were. Color in fact emerges from a combination of the wavelengths of light emitted or reflected from an object and the kind of visual system looking at it. A normal human visual system, with three cone types in the retina, concludes “yellow” when any one of an indefinite number of different wavelength combinations affects its color-opponent system in a certain way. Thus a species with more cone types, such as the mantis shrimp, which has about sixteen types, would see many different colors where humans would see only the same shade of yellow.
When people are red/green color-blind, with only two cone types instead of three, we may be tempted to think they fail to see something’s “real” color. Yet there is no such thing. There are even rare people (mostly women) who have four cone types. Presumably they can see colors the rest of us cannot even imagine. This may help us accept the conclusion that the dress is not intrinsically one color or the other, but it still provides no clue as to why people see the dress so differently.
Could the background in the photo be relevant? In the 1970s, Edwin Land, inventor of the Polaroid camera, showed that the same colored square appears a different color depending on the squares surrounding it. This relates to an important problem that evolution has had to solve. If color information is to be useful, an object must look the same color on a bright sunny day as on an overcast one, yet the incident light is yellower at midday and bluer from a gloomy or evening sky. So our visual systems use a broad view of the scene to assess the incident light and then discount that when making color decisions, just like the automatic white balance (AWB) in modern cameras.
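Land’s demonstrations and a camera’s AWB both rest on the same move: estimate the incident light from the scene as a whole, then divide it out. A minimal sketch of the simplest such rule, the “gray-world” assumption, in pure Python (the pixel values are invented for illustration):

```python
# Gray-world white balance: assume the average color of the scene is
# neutral gray, estimate the illuminant from the per-channel means,
# then divide it out. A crude stand-in for what both cameras and
# visual systems do when "discounting" the incident light.
pixels = [  # a tiny image under yellowish light (R, G, B in 0..255)
    (200, 180, 120),
    (150, 140, 90),
    (90, 85, 60),
]

n = len(pixels)
means = [sum(p[c] for p in pixels) / n for c in range(3)]  # estimated illuminant
gray = sum(means) / 3  # target neutral level

balanced = [
    tuple(min(255, round(p[c] * gray / means[c])) for c in range(3))
    for p in pixels
]
print(balanced)  # channel means are now roughly equal
```

A camera’s real AWB pipeline is far more sophisticated, but the core logic is the same one the visual system appears to apply to the dress photo: infer the incident light, then discount it.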
This, it turns out, may solve the Great Dress Puzzle. It seems that some people take the incident light as yellowish, discounting the yellow to see blue and black, while others assume a bluer incident light and see the dress as white and gold. Do the age and sex differences provide any clues as to why? Are genes, or people’s lifetime experiences, relevant? The controversy is still stimulating more questions.
Was it a step too far when some articles suggested that #thedress could prompt a “worldwide existential crisis” over the nature of reality? Not at all, for color perception really is strange. When philosophers ponder the mysteries of consciousness, they may refer to qualia—private, subjective qualities of experience. An enduring example is “the redness of red,” because the experience of seeing color provokes all those questions that make the study of consciousness so difficult. Is someone else’s red like mine? How could I find out? And why, when all this extraordinary neural machinery is doing its job, is there subjective experience at all? I would guess that “that dress” has yet more fun to provide.
Anthropic Capitalism and the New Gimmick Economy
Eric R. Weinstein
Mathematician and economist; managing director, Thiel Capital
Consider a thought experiment: If market capitalism was the brief product of happy coincidences confined in space and time to the developed world of the 19th and 20th centuries (coincidences that no longer hold under 21st-century technology), what would our world look like if there were no system to take its place? I have been reluctantly forced to the conclusion that if technology had killed capitalism, economic news would be indistinguishable from today’s feed.
Economic theory, like the physics on which it’s based, is in essence an extended exercise in perturbation theory. Solvable and simplified frictionless markets are populated by rational agents, which are then all subjected to perturbations in an effort to recover economic realism. Thus, while economists do not, as outsiders contend, believe idealized models to be exactly accurate, it’s fair to say that they assume deviations from the ideal are manageably small. Let’s list a few such heuristics that may have recently been approximately accurate but aren’t enforced by any known law:
Wages set at the marginal product of labor are roughly sufficient to sustain consumption at a societally acceptable level.
Price is nearly equal to value, except in rare edge cases of market failure.
Prices and outputs fluctuate coherently so that it’s meaningful to talk of scalar rates of inflation and growth (rather than varying field concepts like temperature or humidity).
Growth can be both high and stable, with minimal interference by central banks.
The anthropic viewpoint (more common in physics than economics) on such heuristics would lead us to ask, “Is society now focused on market capitalism because it is a fundamental theory, or because we have just lived through the era in which it was possible due to remarkable coincidences?”
To begin to see the problem, recall that in previous eras innovations created high-value occupations by automating or obviating those of lower value. This led to a heuristic that those who fear innovation do so because of a failure to appreciate newer opportunities. Software, however, is different in this regard, and the basic issue is familiar to any programmer who has used a debugger. Computer programs, like life itself, can be decomposed into two types of components: (1) loops, which repeat with small variations, and (2) Rube Goldberg–like processes, which happen once.
If you randomly pause a computer program, you’ll almost certainly land in the former, because the repetitive elements are what gives software its power, by dominating the running time of almost all programs. Unfortunately, our skilled labor and professions currently look more like the former than the latter, which puts our educational system in the crosshairs of what software does brilliantly.
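The claim that a random pause almost always lands in a loop can be made concrete with a toy model (Python; the step counts are invented for illustration):

```python
import random

# Toy model of a program: a one-off "Rube Goldberg" setup phase that runs
# once, followed by a loop whose body repeats many times. Each unit is one
# "step" of execution; pausing the program at a random moment is modeled
# as drawing a uniform random step index.
setup_steps = 100           # one-time, idiosyncratic work
loop_iterations = 10_000    # repetitive work
loop_body_steps = 50

total_steps = setup_steps + loop_iterations * loop_body_steps

samples = 10_000
hits_in_loop = sum(
    1 for _ in range(samples)
    if random.randrange(total_steps) >= setup_steps  # steps past setup are loop steps
)
print(f"fraction of random pauses inside the loop: {hits_in_loop / samples:.4f}")
```

With these numbers the loop accounts for more than 99.9 percent of execution steps, which is why a sampled pause (the principle behind sampling profilers) almost never catches the one-off code.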
In short, what today’s flexible software is threatening is to “free” us from the drudgery of all repetitive tasks rather than those of lowest value, pushing us away from expertise, which we know how to impart, toward ingenious Rube Goldberg–like opportunities unsupported by any proven educational model. This shift in emphasis from jobs to opportunities is great news for a tiny number of today’s creatives but troubling for a majority who depend on stable and cyclical work to feed families. The opportunities of the future should be many and lavishly rewarded, but it’s unlikely they will ever return in the form of stable jobs.
A further problem is that software replaces physical objects with small computer files. Such files have the twin attributes of what economists call public goods: (1) The good must be inexhaustible (my use doesn’t preclude your use or reuse), and (2) the good must be non-excludable (the existence of the good means that everyone can benefit from it even if they don’t pay for it).