
Visions, Ventures, Escape Velocities: A Collection of Space Futures


by Ed Finn


  On top of this, how we perceive and respond to risk is further complicated by how our brains process information. Many of the decisions we make as individuals are based on mental shortcuts—heuristics—in what the psychologist Daniel Kahneman calls “System 1 thinking.”[2] As it turns out, we don’t have the mental bandwidth to consciously process every decision we make, together with its potential consequences. And so our brains relegate many decisions to subconscious routines, which are either learned through experience, or are hardwired in. This is useful, as it prevents us from being overwhelmed by decisions like how best to maneuver our coffee cup to our mouth, or how to put one foot in front of the other without falling over while walking. But it also creates problems when we’re faced with risks we haven’t evolved to handle. Like sending crewed missions to Mars or asteroids.

  In effect, heuristics are a great evolutionary response to staying alive, but they’re not always reliable in today’s technologically complex world. And this leads to unconscious bias in how we weigh risk-related information and make risk-based decisions.[3] For instance, we tend to be more cautious in unfamiliar surroundings and when faced with unfamiliar situations. We have a tendency to trust people and information that support what we “feel” is right, while rejecting information that feels wrong. We internally prioritize risks and benefits in ways that don’t always make sense to others. And we get complacent around risks we are familiar with.

  These biases can help us avoid potentially risky situations. But they also influence what we consider worth protecting, and how we make sense of trade-offs between the possible outcomes of actions we take—whether these outcomes are real, or simply things we perceive to be true.[4] One consequence of this is that we instinctively find it hard to make sense of numbers when it comes to risk—something I was rudely reminded of some years ago during that most intimate of risk calculations, a personal health crisis.

  I was suffering from persistent headaches at the time, and my healthcare provider advised me to have a CAT scan of my head to take a look around. Part of the procedure involves being injected with a contrast-enhancing dye, and just before the injection, I was asked to sign a waiver—a document acknowledging that I understood the risk involved, and I was good to go with the procedure.

  The risk, as it turned out, was pretty low—there was around a one-in-a-million chance of serious complications from being injected with the dye, including death. Unfortunately, this didn’t make my choice any easier. As a physicist, I’m expected to be good with numbers. Yet as I sat there trying to make sense of what a million-to-one chance of dying meant compared to the occasional headache, I couldn’t make any rational sense of whether the risk was worth it or not. I even got as far as trying to estimate on the fly how many people in the U.S. have CAT scans each year, and how many die as a result … this didn’t help!
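The on-the-fly estimate described above is, at heart, a simple expected-value calculation. A minimal sketch of that arithmetic follows, using a hypothetical scan count chosen purely for illustration (it is an assumption, not a sourced statistic):

```python
# Back-of-envelope sketch of the waiver math. The scan count below is
# a hypothetical, illustrative figure, NOT an actual U.S. statistic;
# the one-in-a-million risk is the number quoted on the waiver.
scans_per_year = 40_000_000      # assumed for illustration only
p_serious = 1 / 1_000_000        # quoted chance of serious complications

expected_serious_cases = scans_per_year * p_serious
print(expected_serious_cases)    # 40.0 under these assumed inputs
```

Notably, even once the arithmetic is done, the resulting number does nothing to settle whether any one person should sign the waiver—which is precisely the essay’s point about the limits of dispassionate analysis.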

  In the end I signed the waiver—not because I’d done the math and it made sense, but because that was what I was expected to do.

  Part of my issue was working out what was of value to me, and what was worth risking. Faced with the waiver, I had to weigh the value of occasional headaches (which, incidentally, cleared up of their own accord) against the value of not being dead. Yet the most important value, it turned out, was that of not embarrassing myself in front of the people waiting to inject and scan me by refusing to sign. At that point (I am embarrassed to say) the shame of walking away was far more important to me than a one-in-a-million chance of dying!

  What this incident reinforced for me is that what we consider to be important—and what we will do to protect it—is not always obvious, and doesn’t always align with a dispassionate analysis of the data. Perhaps it isn’t all that surprising that, when we make risk decisions, the importance or “value” of what we stand to lose becomes critical in the decision we make.

  This is true for individuals, but it extends to organizations as well. While it’s easier for a business or a government agency (for example) to use evidence and scientific analysis in risk decisions, there are equivalents of institutional heuristics and—critically—institutional ideas of what’s important, which heavily influence the decisions they make. As a result, what on the surface may look like a reluctance to take risks that doesn’t seem to be based in logic may actually be well-considered intent to avoid harm to something whose value isn’t immediately apparent to outsiders. This may be profit or economic growth. But it may equally be brand identity, customer base, or even deeply embedded institutional values. And as a result, an organization may quite rationally decide that its reputation and identity are not to be risked at any cost—not because it’s risk-averse, but because the consequences of “identity death” are simply too important to be traded against gains in other areas.[5]

  This complexity around risk and decisions comes through in Naam’s “The Use of Things.” Here, what is really important—the hands-on human dimension of space exploration—is less tangible, and maybe less “sellable” institutionally, than the more overt goal of mining asteroids for water. Yet it is the value of having a real person on the mission that ultimately drives the risk decisions in the story.

  In Ashby’s “Death on Mars,” a more subtle but perhaps more profound interplay of value and risk plays out. Here, we see risk in terms of mission value (establishing a Mars base), group value (trust and transparency), personal value (managing the process of dying), and conflicts between all three. Depending on how the story is approached, and where your sympathies lie as a reader, the risks—and the appropriateness of the decisions that are made—come across very differently. Should Donna have placed her right to die on her own terms above the mission goals? Was the risk of emotional pain to Khalidah that resulted from Donna concealing her illness worth what she gained from the deception? What would Song have risked by revealing what she knew? How important was the social “experiment” the crew was participating in, compared to what was important to each of them individually?

  These risk issues play out within a context that—in this case—is disconnected from centralized decision-making; presumably because of communication delays with Earth, but also possibly because of the nature of the mission. Within the context of the story, there is devolution of risk agency to the team orbiting Mars, and an expectation that risk decisions will align with established mission goals. This separation ends up amplifying the significance of each team member’s realization of what’s important to them (in effect, what is potentially at risk to each of them personally), and what they will do—or what they will trade—to protect this.

  What emerges is a complex risk landscape, where the risks include threats to dignity, integrity, and relationships. Within this landscape—one where someone will suffer no matter what is done—simply characterizing thinking and actions as “risk-averse” is not helpful. Instead, we should consider the degree to which individuals and the group as a whole are willing to contemplate and ultimately accept the consequences of actions, both to themselves and others. Risk in this instance is not a danger to be avoided, but an inevitability that reveals what the primary value is within a complex landscape, and what it is worth risking to sustain that value.

  In this way, “Death on Mars” creates a scenario that illuminates the complexity and personal nature of many risk decisions, and forces us to closely examine risk’s fundamental nature as a threat to something of value or importance, where the “value” that’s relevant extends far beyond conventional metrics of risk, and isn’t always universally shared. This of course runs the “risk” of complicating decisions in the real world (imagine a regulator including interpersonal relationships in risk assessments—it’s hardly likely to make the process any easier). And yet, this broader understanding of risk is essential to better understanding the consequences of actions, and making informed decisions. It also opens the way to thinking differently about how we protect and increase what is of value—especially where, as in the case of “Death on Mars,” what is of value to those with the opportunity to protect it may differ from what’s of value to the organization they work for.

  By approaching risk as a threat to value within a complex and interconnected landscape, risk conversations can be elevated from simplistic “go/no-go” options to conversations about how potential gains and losses in value may be balanced across all individuals and organizations affected by a decision. And this elevation in turn opens the door to creative and innovative approaches to protecting existing and future value. In “The Use of Things,” for instance, it’s the “how” of survival that becomes important. And in “Death on Mars,” it’s the “how” of death itself.

  From Donna’s perspective in “Death on Mars,” what is important to her is a meaningful death, and her dignity in being able to have control over when and how she dies. This is a deeply personal value, and one that isn’t understood by her companions. It’s also directly in conflict with what is important to some of them, and in this respect, what reduces risk for Donna (in the sense of a threat to meaning, control, and dignity) increases risk for others. Whether her decision was appropriate or not depends on your perspective. Donna’s “risk” and her response to it, as well as the rest of the crew’s response, profoundly affect the evolving risk landscape in a way that couldn’t be captured in either evaluations of risk aversion, or simple numbers.

  In Naam’s “The Use of Things,” we see another facet of risk that arises from approaching risk as a “threat to value.” In this case, it’s how thinking more broadly about potential consequences can lead to innovation in how risks are anticipated and managed. Here, the repurposing of the CALTROP mining bots to carry out a unique space rescue is interesting in two respects. Importantly, it makes visible a “hidden value” in the mission: the ultimate importance of preserving human life over the more overt need to demonstrate that water can be extracted from an asteroid. It also demonstrates a remarkable degree of anticipation and creativity in how the CALTROPs are programmed and designed with risk in mind.

  Reading Naam’s story, it has to be assumed that the communication time lag between the asteroid and Earth would have been too long for the CALTROPs to be remotely programmed in the time between Abrams being hit and his rescue. In other words, the machines must have been preprogrammed on Earth for such an eventuality. As part of the CALTROP design process, someone worked out that there was a possibility of a human operative becoming untethered in space, possibly with a compromised suit, and that it was worth building in a feature where the bots’ programming allowed them to prioritize human life over material extraction, coordinate their actions, and enact an improvised rescue mission.
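The anticipatory design imagined here—autonomous machines that rank a human life above their primary task—can be caricatured as a priority rule in a bot’s task scheduler. The sketch below is purely illustrative; the class and task names are invented for this example and appear nowhere in Naam’s story:

```python
# Illustrative sketch (invented, not from the story): a mining bot's
# task queue in which a rescue always preempts material extraction.
from dataclasses import dataclass, field
import heapq

RESCUE, MINING = 0, 1  # lower number = higher priority


@dataclass(order=True)
class Task:
    priority: int
    name: str = field(compare=False)  # name is ignored when ordering


def next_task(queue: list) -> Task:
    """Pop the highest-priority task; rescues outrank mining."""
    return heapq.heappop(queue)


tasks = [
    Task(MINING, "extract water ice"),
    Task(RESCUE, "retrieve untethered crew member"),
]
heapq.heapify(tasks)
print(next_task(tasks).name)  # "retrieve untethered crew member"
```

The design point the story dramatizes is that this preemption rule had to be decided, funded, and programmed on Earth, long before anyone knew whether it would ever fire.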

  This would have taken considerable resources, as well as some creative thinking around how the bots could be designed to respond to a threat to value under uncertain and unpredictable circumstances. Yet in a risk calculation in which human life holds the highest value, that anticipatory effort more than paid off.

  On a superficial reading, both Naam’s “The Use of Things” and Ashby’s “Death on Mars” can be interpreted as being about risk aversion—NASA’s aversion to risking human life and the mission, and Khalidah’s aversion to risking the death of a friend. But on a deeper read they help unpack the concept of risk from the perspective of what’s important to whom, and how existing and future value may be protected. And any illusion of risk aversion arises from a complex social calculus of what is worth fighting for.

  By focusing on the consequences of decisions and actions from multiple perspectives—something we are exploring at Arizona State University’s Risk Innovation Lab by thinking about risk as a “threat to current or future value”[6]—both of these stories highlight the need and opportunities for creativity and innovation in how we think and act on risk. In today’s increasingly interconnected and technologically complex world, this is becoming ever more important, as conventional risk thinking becomes further disconnected from real-world challenges and opportunities. And perhaps nowhere is this more relevant than in the multi-constituency and value-laden domain of space exploration.

  Space has a unique place in our social psyche, and with increasing global connectivity, citizens are becoming more engaged—and more demanding—in what happens outside the Earth’s atmosphere and beyond. Add in the emergence of private space companies and evolving public-private partnerships—all with their own ideas of what constitutes “value”—and you have the makings of a highly complex and convoluted risk landscape.

  When there were relatively few players in the space game, and critical decisions were largely the domain of government agencies, the concept of risk aversion might have had some value. As we move toward an increasingly complex web of players, though, it’s going to be increasingly important to understand risk from a different perspective.

  Both Ashby and Naam capture the complexity of this shifting risk landscape well. Their stories jointly hint at what might be lost—or what future value threatened—by taking a rigid and outmoded approach to risk with space exploration. But they also reveal the possibilities of increasing future “value”—not just in terms of knowledge creation and wealth, but also in terms of social and personal value—by approaching risk in a more creative and nuanced way. In fact, rather than avoiding risk entirely, both authors offer insights into what may be achieved by working with risk, and making decisions that ultimately strengthen and protect what is most valued by the community, while avoiding consequences that undermine that value.

  I doubt either author buys into a simplistic idea of risk aversion, any more than I do. Rather, their two stories support the concept of making smart decisions that protect what is valuable and important—not just to corporations and governments, but also to individuals and the communities they are a part of. This, I suspect, is an evolution of the old black-and-white mathematics of risk that will become increasingly important as we push the boundaries of space exploration, and weigh the many different types of values and voices that are tied up in reaching out into the solar system and beyond.

  Acknowledgments: There’s something wonderfully satisfying about the serendipitous insights that come from “yes and” collaborations between creative writers and technical experts. I am deeply grateful to Ramez Naam and Madeline Ashby for their inspiring works, and for helping me see the world I thought I knew through new eyes.

  [1] An aversion to risk in this context is closely associated with “loss-aversion,” where people will tend to hold on to what they already have, rather than risking losing it to gain something else. [back]

  [2] Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011). [back]

  [3] For more on how we perceive and respond to risks, see Paul Slovic, The Feeling of Risk: New Perspectives on Risk Perception (London: Earthscan, 2010). [back]

  [4] The National Academies of Sciences, Engineering, and Medicine report Communicating Science Effectively: A Research Agenda (2017) provides a good summary of what is known about how heuristics influence how people make sense of and use science-based information. The report is free to download at https://www.nap.edu/catalog/23674/communicating-science-effectively-a-research-agenda. [back]

  [5] The reality of corporate decision-making is, naturally, more complex than this, and involves an institutional “psychology” of decision-making that is often opaque. Yet institutional perceptions and articulations of “value” remain important in both informing decisions and weighing consequences. [back]

  [6] More information on how the Risk Innovation Lab is exploring risk from this perspective can be found at https://riskinnovation.asu.edu. [back]

  Section IV: Exoplanets

  Suddenly Nadia felt a breeze swirl through her nervous system, running up her spine and out into her skin; her cheeks tingled, and she could feel her spinal cord thrum. Beauty could make you shiver! It was a shock to feel such a physical response to beauty, a thrill like some kind of sex. And this beauty was so strange, so alien. Nadia had never seen it properly before, or never really felt it, she realized that now; she had been enjoying her life as if it were a Siberia made right, so that really she had been living in a huge analogy, understanding everything in terms of her past. But now she stood under a tall violet sky on the surface of a petrified black ocean, all new, all strange; it was absolutely impossible to compare it to anything she had seen before; and all of a sudden the past sheered away in her head and she turned in circles like a little girl trying to make herself dizzy, without a thought in her head. Weight seeped inward from her skin, and she didn’t feel hollow anymore; on the contrary she felt extremely solid, compact, balanced. A little thinking boulder, set spinning like a top.

  —Kim Stanley Robinson, Red Mars

  Shikasta

  by Vandana Singh

  Chirag:

  This is the first time I am speaking to you, aloud, since you died.

  I’ve learned by now that joy is of two kinds—the easy, mindless sort, and the kind that is earned hard, squeezed from suffering like blood from a stone. All my life I wanted my mother to see her son rise beyond the desert of deprivations that was our life—she wanted me to be a powerful man, respected by society—but so much of what she saw were my struggles, my desperation. So when the impossible happened, when our brave little craft was launched—the first crowdfunded spacecraft to seek another world—the unexpected shock of joy took her from illness to death in a matter of months. She died smiling—you remember her slight smile. You were always asking her why she didn’t let herself smile more broadly, laugh out loud. “Auntie,” you’d say, “smile!” That made her laugh, reluctantly. You were always pushing at limits, including those we impose on ourselves.

 
