Learning From the Octopus


by Rafe Sagarin


  Even the “realists” in the business literature, who scornfully reject the boxes and arrows in favor of collecting real data about real companies, remain too tightly constrained by their business backgrounds to capture the full extent to which learning is linked to adaptation, as it is in nature. In part this stems from a misappropriation of Darwinian thought that is still omnipresent in business thinking—the notion that success comes through “survival of the fittest.” This kind of thinking leads inevitably to the idea that a successful organization must optimize every component of its practice as it strives for perfection. In turn, this pathway requires constant benchmarking and comparison to a set of predetermined metrics.

  Harvard Business School professor David Garvin, who was among the first to call for more realism in the organizational management discussion, backed up his call with a detailed quantitative benchmarking tool that any organization can use to measure exactly how prone to learning each component of its organization is.13 The model asks employees dozens of questions in each of several “blocks” related to the organization’s learning environment, learning-related processes, and leadership, and scores for each block are reported on a 100-point scale. The organization can then compare its scores to benchmark averages of previously tested organizations and presumably work to improve the learning capabilities of each of its underperforming blocks.
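  To see how little machinery such a benchmark actually involves, here is a minimal sketch of the scoring arithmetic the tool implies, assuming a seven-point response scale. The block names, sample responses, and benchmark values are invented for illustration; this is not Garvin’s published instrument.

```python
# Illustrative only: block names, response scale, and benchmark values
# are assumptions, not Garvin's published survey.
BENCHMARKS = {"learning environment": 71,
              "learning processes": 68,
              "leadership": 76}

def block_score(responses, scale_max=7):
    """Average a block of 1..scale_max survey responses onto a 100-point scale."""
    return 100 * (sum(responses) / len(responses)) / scale_max

def report(block, responses):
    """Compare a block's score to the (hypothetical) benchmark average."""
    score = block_score(responses)
    gap = score - BENCHMARKS[block]
    side = "above" if gap >= 0 else "below"
    return f"{block}: {score:.0f}/100, {abs(gap):.0f} points {side} benchmark"

surveys = {"learning environment": [5, 6, 4, 5, 5],
           "learning processes": [3, 4, 4, 3, 5],
           "leadership": [6, 6, 5, 7, 6]}
for block, answers in surveys.items():
    print(report(block, answers))
```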

  The problem with this kind of approach is that it assumes learning has value for its own sake. By the time you get down to measuring each of what a researcher has determined to be the most important individual components of learning, you may have lost sight of why it is important to learn in the first place. That is, having the capacity to learn does not mean an organization will actually learn, or learn the right lessons, any more than having a Ferrari in your garage makes you a great driver.

  Robert Wears, an emergency medicine researcher at the University of Florida, noted this when commenting on the organizational learning literature. He cites a medical article that lamented the lack of hard evidence for learning among clinicians treating shock in children, even as the article reported a tenfold decline in children’s shock deaths over the same time period!14 At a conference on adaptability in military operations that I attended, a senior naval analyst pleaded for new ideas on how to better train troops to be adaptable, despite being unable to identify where or when troops on the ground were failing to adapt. Natural organisms don’t need to “benchmark” learning because nature makes it abundantly clear when learning is needed. Likewise, extreme circumstances in human societies, such as those found in emergency medicine and warfare, deliver their own unequivocal verdicts on how well individuals and organizations learn.

  A further assumption in trying to benchmark learning is that there is actually some desirability in optimizing each component of learning. Nature doesn’t give a fig for survival of the fittest, nor for optimization. If an organism is surviving fine and reproducing its genes through the range of environmental variation it experiences, then good enough. Although it is taken to have an objective, technical connotation, “optimization” in the business literature is a value-laden term, like “ruthless” or “cooperative” or “creative,” that appeals broadly across a wide spectrum but doesn’t necessarily indicate a useful response to a difficult environmental problem.

  In natural adaptive systems, value-laden terms need not apply for acceptance—an organism might be successful by cooperating, by altruistically helping its kin, or by building structures that other organisms rely on, or it might eat its own sisters, serially rape females, and dump toxic chemicals all around it, making the environment entirely inhospitable for everything else nearby—it doesn’t matter as long as it passes on its genes or helps a closely related individual pass on its closely related genes. Here I want to reassure you that it is not this value-less character of nature that I am advocating we replicate in society—I strongly believe that all sorts of values—ethical, economic, political, and social—necessarily come into play when changing policy and practice. Moreover, to be more precise, when it comes to organisms such as humans (and likely a number of other animals with advanced cognitive capacities), values do interact with the day-to-day struggle to survive, so that values can play a role in some evolutionary systems. Rather, I am suggesting that the truly important characteristics of natural learning systems—their ability to integrate all sorts of experiences, past, present, and hypothetical future, into patterns—can be inculcated in society without resorting to value-laden goals disguised as objective benchmarks.

  It just doesn’t matter how close the organism is to its own theoretically optimal performance. It might work at 25 percent of its capacity and still survive just fine in a given environment. An old acquaintance with a business background and an indefatigable entrepreneurial spirit once made a small fortune selling enamel lapel pins that stated simply, “110%” as “motivational gifts” for businesses to give to their employees. I doubt he would have done so well with pins that read “25%” or “Just Do Good Enough.” Yet in nature, the organism that gives 110 percent or even something close to 100 percent of its capacity to a given task is almost assuredly going to wind up dead.

  FORCES OF LEARNING

  Natural learning is neither a deliberate nor an optimized process. Indeed, the vast majority of organisms—even humans most of the time—learn without being consciously aware that they are learning. Learning shares many of the qualities of Darwinian evolution—it is made possible by selection for, and reproduction of, those variants that learn the right lessons from the environmental challenges put before them.

  These selective forces are likely at work behind a disturbing global trend that has been acutely experienced by U.S. and allied forces in Afghanistan and Iraq—namely that the objectively weaker sides of conflicts, in terms of technology, firepower, troop numbers, and financial resources, have become over the past 100 years or so increasingly likely to win wars.15 There are a number of reasons why this may be the case. In Vietnam, for example, both the greater familiarity with the terrain and acclimation to the climate were certainly a great help to the Vietcong. But evolutionary biologist and security analyst Dominic Johnson points to Darwinian selective forces as a likely culprit behind the success of weaker sides. The reasons for this lie in all three components of Darwinian evolution: variation, selection, and replication. First, insurgencies fighting regular armies tend to have a more diverse set of tactics at their disposal, whereas the regular armies they fight are constrained by long-standing institutional norms, ethical and legal constraints (such as the Geneva Conventions), and standard operating procedures (which themselves have become such an integral part of military operations that they are routinely referred to by their acronym: SOPs). In Iraq in particular, insurgents were also drawn from a much more diverse population than U.S. forces: the 311 foreign fighters captured in Iraq between April and October 2005 came from 27 different countries.16 The differences between an army specialist from Poughkeepsie, New York, and one from Lubbock, Texas, pale when compared to the variation between a hardened fighter from Sudan and an eager new al-Qaeda recruit from Syria. Second, largely due to superior firepower, selection forces (which take the form of killing and capturing enemy troops) are much stronger on insurgents than on regular armies. Not to put too fine a point on it, but U.S. soldiers and marines kill far more insurgents than insurgents kill U.S. troops. But this strengthens insurgencies in the long run. On average, the weakest and least adept fighters will be killed off or captured. The tactics that didn’t work will disappear. The hiding places that were easy to uncover won’t be used anymore. What the survivors then do sustains the evolutionary cycle for insurgencies—they replicate their successful ideas and tactics by recruiting and training new insurgents. The net result is that the insurgency as an organization (albeit a loosely controlled one) has learned better ways to fight a regular army. What this looks like quantitatively, as has been demonstrated in Iraq, is that the ratio of insurgents killed per U.S. soldier killed remains virtually unchanged, despite a huge escalation of resources and troop deployments by the United States.
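  For readers who want to see the logic run, here is a toy simulation of the variation, selection, and replication cycle just described. It is not a model of any real conflict; the population size, skill scores, and escalation schedule are all invented. It illustrates only one point: culling the weakest fighters each round raises the average skill of the survivors, which is one way to read a kill ratio that stays flat even as firepower escalates.

```python
import random

random.seed(1)

def campaign_round(insurgents, firepower):
    """Selection: the least skilled fighters are killed or captured.
    Replication: survivors recruit replacements who copy surviving
    (i.e., more successful) tactics, with a little noise (variation)."""
    insurgents.sort()                              # skill, lowest first
    killed = min(len(insurgents) - 1, firepower)
    survivors = insurgents[killed:]                # weakest die first
    recruits = [random.choice(survivors) + random.gauss(0, 0.05)
                for _ in range(killed)]
    return survivors + recruits

insurgents = [random.uniform(0, 1) for _ in range(1000)]   # initial skill
for year, firepower in enumerate([50, 100, 200, 400], start=1):  # escalation
    insurgents = campaign_round(insurgents, firepower)
    mean_skill = sum(insurgents) / len(insurgents)
    print(f"year {year}: firepower {firepower:>3}, mean skill {mean_skill:.2f}")
```

  In this toy, the harder selection bears down, the faster the surviving population’s mean skill climbs, so each escalation buys less than the last.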

  This particular kind of selective learning would seem to be a good example of learning from failure, and not one we would want to replicate in our own endeavors. Learning from failure in nature usually involves death. While the unfortunate individual with the weak mutation doesn’t learn, the species as a whole experiences a kind of learning by reducing the likelihood of poorly performing variations in the population. No one would advocate that we should tolerate higher mortality rates among troops because it would increase our opportunities to learn. But we do need to be aware that this type of learning can be an unintended and almost inevitable consequence of apparent short-term progress.

  More helpfully, we must also recognize that learning to survive in nature is a process of learning from both success and failure. Learning from success reinforces mutations that benefit survival. Successes are the creative outputs that provide new working models for survival. Nature learns much more from success than from failure. A sea otter pup acquires the same particular dietary preferences as its mother because it watches her successful foraging dives and learns how to avoid sea urchin spines or crab claws. Amphibians emerged on land because using the resources abundantly available there was a much more successful strategy than fighting for survival in an increasingly crowded ocean. In fact, over the history of life on Earth, despite several mass extinctions, the number of species and the number of unique niches they occupy have more or less continually increased.17 The millions of extant species today, and the countless individuals of all those species, are each success stories in nature.

  Societal organizations, by contrast, tend to do most of their learning from failures. After 9/11, Hurricane Katrina, and many other security disasters, long lists were made of how and why we failed to maintain security. At first blush, this would appear to be a good thing. But here, an important distinction arises between human society and the rest of the natural world. Ethically, we shouldn’t plan to learn from mistakes in the way that nature does, which typically involves a lot of death. When we do learn from mistakes, it means that we are too late to prevent the suffering and loss of life that occurred in the first place.

  A former student of mine, an active-duty lieutenant in the U.S. Coast Guard, noticed this tendency to learn from failure and ignore the lessons of success over her years of responding to oil spills and other hazards. When the Coast Guard experienced one of its biggest failures in recent years—the botched response to the relatively small 40,000-gallon M/V Cosco Busan oil spill on November 7, 2007, in San Francisco Bay—it immediately set to work studying and accounting for what went wrong. The commandant of the Coast Guard ordered an “Incident Specific Preparedness Review” to be carried out by a large group of local, federal, and international agencies with expertise in oil spills and disaster response. The result, several months later, was a massive document with over 190 recommendations, a number of which were implemented in subsequent Coast Guard practice.

  Yet when the Coast Guard deftly held back and cleaned up over 9 million gallons of oil spilled after Hurricane Katrina and Hurricane Rita in one of the most challenging cleanup environments possible, nary a word was spoken. In fact, the massive Townsend “After Action” report on Katrina identified 17 “Critical Challenges,” 125 recommendations, and 243 action items, covering everything from search-and-rescue to transportation infrastructure to human services, but none of them addressed oil spill cleanup, the one unqualified success after Katrina.18 The oil spilled by Katrina was one of the largest oil spills on record, approximately two-thirds the size of the 1989 Exxon Valdez spill. Yet so forgotten were the oil spills caused by Katrina that by the time of the 2008 presidential campaign, Republican candidate Mike Huckabee was able to argue publicly that “not one drop of oil was spilled” due to Katrina.19

  Why would it be important to learn from the modest success of the Katrina oil cleanup when so many other aspects of the Katrina response were unmitigated disasters? The answer would come just a few years later, on April 20, 2010, when the Deepwater Horizon deep-sea oil rig contracted by British Petroleum exploded, killing eleven people, and began hemorrhaging oil for months into the same Gulf of Mexico that had been wracked by Hurricane Katrina. Virtually none of the lessons learned from Katrina’s failures would be helpful in responding to the Deepwater Horizon spill—the spill didn’t create a refugee crisis or cripple land transportation routes—but as 2.5 million gallons of oil per day poured into the gulf, it seemed reasonable that previous lessons from successfully containing and cleaning up large amounts of oil under extremely difficult circumstances would be valuable.

  Why do we fall back on learning from mistakes rather than successes? In part, it’s because we often take a business or engineering approach to problem solving. Engineers seem to take a perverse pleasure in highlighting the importance of learning from failure. There are many books on famous engineering failures, and the striking 16-millimeter footage of the Tacoma Narrows suspension bridge oscillating like a sea snake before disintegrating plays extremely well in disaster documentaries.20 Engineers rightfully point out that these disasters help to make future bridge, building, and oil rig designs safer, and this works well in the intelligently designed world that engineers inhabit. But in the dynamic world of security, good designs are not nearly enough.

  As with the engineers, a mantra of the business learning literature is that organizations need to learn from their failures, and a view of “failure as the ultimate teacher” prevails. A 1993 paper, in fact, praised British Petroleum as an exemplar of learning from failure,21 noting that BP capitalizes on “constructive failure,” defined as a failure that provides “insight, understanding, and thus an addition to the commonly held wisdom of the organization.”22 Undoubtedly, the Deepwater Horizon disaster provided all the components of constructive failure to BP, but it killed eleven people, saddled the company with well over $1 billion in damages,23 was catastrophic to BP’s share price, slammed the door on the permissive regulatory romper room that had allowed BP and other oil companies to operate relatively cheaply in deep water, and, of course, resulted in one of the most extreme man-made environmental disasters in history. Looking to “failure as the ultimate teacher” isn’t too valuable if the whole school burns down.

  When we take a biological perspective on learning, we realize that we are biased toward learning from failure because of the selective forces at work. In nature, the selective agent acting on learning processes is anything that singles out one variant over another and helps it reproduce or kills it off—a violent storm that rips the weaker kelps off the rocks, a clever predator that lures deep-sea fish directly into its jaws with a glowing lantern, a picky mate that passes up the advances of any male companion whose claws or antlers or tail feathers are just a little too small.

  When it comes to how we respond to big events in society, it is often the news media that play the selective agent. After the Cosco Busan spill, images of hundreds of frustrated San Francisco volunteers waiting to clean up oiled birds, but held back by government bureaucrats, were disseminated by the national media. Those kinds of images result in calls to Congress and demands for investigations. By contrast, the Coast Guard’s valiant attempts to clean up oil spills following Hurricane Katrina hardly made newsworthy footage relative to images of people stranded on the roofs of their flooded houses and Americans begging for deliverance from the overwhelmed refugee camp in the Superdome.

  This force of selection isn’t likely to change soon. We cannot (and would not want to) order media outlets to report only good news. Moreover, there are some security concerns, especially ongoing or recurring events, for which learning from the last failure is helpful. A minor computer virus embedded in an attachment that temporarily disables your computer is generally all it takes for you to scan all future attachments for viruses. But the most dramatic, most costly, and most deadly failures are often idiosyncratic confluences of events that then cause a “paradigm shift” in how we view security. As Dominic Johnson and ecologist Elizabeth Madin have pointed out, we didn’t learn from the earlier warning signs of these catastrophes because of a range of individual and group barriers to learning, including human psychological biases that cause us to underestimate risk or underappreciate risks that we don’t sense directly, as well as institutional inertia and a political disposition toward maintaining the status quo.24 In these cases—Pearl Harbor, the 9/11 attacks, and Hurricane Katrina are all good examples—the catastrophe itself was the failure we finally learned from.

  So, how can we learn adaptively when the things we are most concerned about tend to be low-probability, one-time events? A now classic paper from the organizational learning literature, titled “Learning from Samples of One or Fewer,” tried to address this problem.25 The authors argue that organizations can learn from minimal experience, or from less-than-catastrophic events, in three ways: by deepening their experience of the event, by spreading the experience out through enlisting a wide range of people to analyze it, or by creating hypothetical events to mimic and learn from an experience, as in the sketch below. This last approach is particularly important when considering events that would be catastrophic in reality.
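  As a minimal sketch of that third strategy, the code below seeds thousands of hypothetical events from a single observed one by perturbing its parameters. The event fields, values, and perturbation ranges are invented for illustration; a real exercise would draw them from expert judgment rather than arbitrary distributions.

```python
import random

random.seed(7)

# One observed event (fields and values invented for illustration).
observed = {"volume_gal": 40_000, "response_hours": 12, "containment": 0.40}

def hypothetical(event):
    """Perturb the single observed event into a plausible variant."""
    return {
        "volume_gal": event["volume_gal"] * random.lognormvariate(0, 1.0),
        "response_hours": max(1.0, event["response_hours"] + random.gauss(0, 6)),
        "containment": min(1.0, max(0.0,
                           event["containment"] + random.gauss(0, 0.2))),
    }

scenarios = [hypothetical(observed) for _ in range(10_000)]

# Planners can then probe the ensemble (for example, the uncontained tail)
# instead of waiting for a second real catastrophe to learn from.
worst = max(scenarios, key=lambda s: s["volume_gal"] * (1 - s["containment"]))
print(f"worst simulated case: {worst['volume_gal']:,.0f} gallons, "
      f"{worst['containment']:.0%} contained")
```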

  Who learns like this, and does it actually work? As it turns out, human babies do, and they have an excellent track record of developing into high-functioning, highly adaptable adult humans. And babies, in turn, are not that different from security planners or security organizations. That is, both babies and security planners typically have very little experience with the things they need to learn about. There are countless objects, situations, behaviors, and natural phenomena that are perfectly commonplace to adults but that babies have never seen. Likewise, there are countless potential terrorist or cybercrime attacks, ways that fragile food distribution systems could collapse, and pathways along which natural disasters could strike that security planners have never confronted.

 
