Learning From the Octopus
USING REDUNDANCY AS A NATURAL EXPERIMENT
When, as a boy, I would accompany my grandfather on his trips to the Aqueduct or Belmont race tracks, my bets based on the most cleverly named horse rarely won, but they also didn’t significantly skew the odds relative to the contributions of thousands of presumably better handicappers than I. It is the collective wisdom of those handicappers, translated first through dollars bet and then into the statistical measure of the odds of a horse winning, that turns out to be a fairly good predictor of each horse’s success.12 The idea of pari-mutuel betting itself is based on the principle that many redundant actors will trend toward the correct solutions, making up for the small number of individuals who do very stupid things (like bet on the horses with the best names).
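The arithmetic behind those odds is simple enough to sketch in a few lines. This is a minimal illustration of how pari-mutuel odds emerge from the pool of bets; the horse names, wager amounts, and 15 percent track takeout here are all invented for the example:

```python
def parimutuel_odds(bets, takeout=0.15):
    """Compute approximate pari-mutuel payoff odds for each horse.

    bets: dict mapping horse name -> total dollars wagered on that horse.
    takeout: fraction the track keeps (an assumed 15% here).
    Returns dict of horse -> odds-to-1 (profit per dollar staked).
    """
    pool = sum(bets.values()) * (1 - takeout)  # net pool after the track's cut
    return {horse: pool / amount - 1 for horse, amount in bets.items()}

# The crowd's collective judgment, expressed in dollars (invented figures):
bets = {"Sea Biscuit": 6000, "Clever Name": 1000, "Longshot": 500}
odds = parimutuel_odds(bets)
# The heavily backed favorite ends up with short odds, while the horse
# few bettors trust pays out far more on the rare occasion it wins.
```

No individual sets the odds; they fall out of where the dollars land, which is exactly the redundancy the text describes.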
After the financial collapse of 2008, gambling and financial markets seem to be forever entwined, so it’s appropriate to recognize that the same logic of pari-mutuel betting underlies futures markets, which are financial instruments that allow investment based on the expected changes in the prices of commodities and other traded goods. It has been pointed out that futures markets have been used at least since the time of Aristotle,13 but their origins in nature go far deeper. Ants and other social insects, such as honeybees, send out individual scouts to search for better, safer homes, but they won’t move as a colony until a threshold quorum number of scouts indicate the attractiveness of the new site.14
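The ants’ decision rule can be caricatured in a few lines of code: each scout judges a candidate site imperfectly, and the colony commits only when approvals cross a quorum threshold. All the numbers here (scout count, quorum size, noise level) are invented for illustration:

```python
import random

def colony_decides(true_quality, n_scouts=20, quorum=12, seed=7):
    """Toy quorum sensing: scouts independently assess a nest site with
    noisy judgment; the colony moves only if enough scouts approve.
    Thresholds and noise levels are invented, not measured values.
    """
    rng = random.Random(seed)
    approvals = sum(
        1 for _ in range(n_scouts)
        if true_quality + rng.gauss(0, 0.1) > 0.5  # noisy individual judgment
    )
    return approvals >= quorum

# A genuinely good site clears the quorum; a poor one does not, even
# though any individual scout can misjudge either.
good = colony_decides(true_quality=0.9)
bad = colony_decides(true_quality=0.1)
```

The redundancy of many imperfect scouts, filtered through a threshold, yields a reliable collective verdict.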
These types of futures prediction schemes have been subsumed lately under the term crowdsourcing,15 which has been applied to energizing wide swathes of the public to improve business practices, guess the outcome of elections, develop translation services for obscure languages,16 engage in social movements, and determine how philanthropic resources should be distributed.
My friend Josh Donlan, who has a knack for proposing clever and controversial ideas that cross the borders between scientific ecology and public policy (he recently proposed “rewilding” North America by bringing back descendants of the massive mammals that once roamed the land, including camels, cheetahs, and lions17), has proposed a highly controversial futures market for endangered species18 in which the government would sell futures on species at risk—if the species population or critical habitat area decreases, investments accrue to conservation funds; if the species populations improve, investors receive a return on their speculation.
The notion of betting on the future rubs many people the wrong way, for many reasons. A recent proposal to let people bet on whether movies would become blockbusters was vehemently opposed by the film industry, which feared that people would manipulate the system and cause the financial ruin of expensive movies,19 and by film lovers on philosophical grounds: they felt it would turn their art into just another commodity.20 Some critics think that certain things shouldn’t be traded at all but instead heavily regulated by the government. For example, we shouldn’t have to speculate on the future of endangered species because the government should just do its job of protecting them. This is similar to criticism raised against “cap and trade” programs, which aim to reduce pollution by allowing heavy polluters to buy pollution credits and efficient producers to sell them, with the total number of credits capped at a certain amount. Opponents of these approaches feel that selling rights to pollute, or providing a way to make money off failure (a bomb at the box office, a woodpecker going extinct), is dismally cynical or even unethical.
Pentagon officials found this out the hard way when DARPA (the same agency that created the Grand Challenges) floated the idea for a “terrorism futures market,”21 which would have created a website for people to place bids on when they thought the next terrorist act or assassination would occur. Senator Barbara Boxer called the idea “very sick” when she raised the proposed project by surprise, just days before it was supposed to go live, during an unrelated hearing in the Foreign Relations Committee. A chorus of angry lawmakers from both parties quickly seconded Senator Boxer and called for firing the DARPA spooks who thought up the idea. The program was terminated almost immediately.
That many economists and security analysts still think a terrorism futures market is a really good idea22 is a clear illustration of a point I made earlier: even biologically inspired or biologically analogous solutions must pass through ethical, political, economic, and other social filters before they can be made practicable for society. This does not invalidate them—in fact, all solutions in nature go through many filters before they become adaptations that stay with organisms through their life and across generations. These filters may be the presence or absence of predators in the region, the particular environmental conditions of the time and place, the disease load carried by the organism, and, in more social organisms, very similar filters to those faced by human societies—an efficient new gathering strategy devised by a low-ranking chimpanzee, for example, might not get replicated just because of her status in society. The point is that these evolutionary filters are another example of redundancy that both ensures the elimination of bad ideas and strengthens those that survive. Rather than dismiss the various ethical, moral, economic, and political objections that are often raised when considering a change in policy, we would do better to embrace them and use them as tools to strengthen the adaptation.
Of course, the best example of an adaptive redundant system in society is the Internet. There are now billions of redundant observers and disseminators of information operating in a network loosely draped on a minimal architecture that is itself continually changing. Along with the rise of the Internet came a whole new field of science studying the properties of networks. The more scientists looked at human-made networks like the Internet, the more they saw nature. Network analyst Alessandro Vespignani has noted that the Internet “has become one of the first human artefacts that we study as a natural phenomenon.”23 The Internet is not that different from a network of terrorists plotting an attack, and both of these types of networks are not that different from organisms in an ecosystem or the relationships between different proteins in a yeast cell. There are striking similarities in how they form, how they respond to change, and how they are vulnerable.
Networks emerge in similar ways, through preferential links made between independent operators. In network science, these operators are called nodes. Your personal website could be a node, and maybe it’s linked to the company you work for, your friend’s blog, some websites you particularly like, and a search engine—most likely Google—but not the website of someone you’ve never met or a company you’ve never heard of, even though they are likely linked to Google as well. As a result of this preferential linking that doesn’t treat every single node equally, a small number of nodes grow to be incredibly important hubs of activity, and many nodes remain connected to just a few other places on the net. Networks in nature are not built much differently. Hermit crabs in a network of tide-pool organisms may be linked to snails because they use their shells, and both species might be linked to a larger crab species that eats them. A generalist predator, which eats a lot of different kinds of organisms, will be a hub in this network. So, too, could something like the tiny algae that coat the rocks and are grazed by many different species.
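The rich-get-richer growth described above can be sketched in a short simulation. This is a toy version of preferential attachment, not any particular published model, and every parameter is invented for illustration:

```python
import random

def grow_network(n_nodes, links_per_node=2, seed=1):
    """Grow a network in which newcomers prefer well-connected nodes.

    Returns a dict mapping node -> set of neighbors. A toy sketch of
    rich-get-richer growth, not a specific published model.
    """
    rng = random.Random(seed)
    graph = {0: {1}, 1: {0}}
    # Each node appears in `targets` once per link it holds, so picking
    # uniformly from this list automatically favors high-degree nodes.
    targets = [0, 1]
    for newcomer in range(2, n_nodes):
        graph[newcomer] = set()
        while len(graph[newcomer]) < links_per_node:
            chosen = rng.choice(targets)
            if chosen not in graph[newcomer]:
                graph[newcomer].add(chosen)
                graph[chosen].add(newcomer)
        targets.extend(graph[newcomer])
        targets.extend([newcomer] * links_per_node)
    return graph

g = grow_network(1000)
degrees = sorted((len(nbrs) for nbrs in g.values()), reverse=True)
# A handful of nodes become heavily linked hubs, while most of the
# network keeps only the couple of links it started with.
```

Nothing in the rule singles out future hubs in advance; the inequality between hubs and minor nodes simply accumulates from many small preferential choices.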
Understanding the roles and relationships of entities in a network, which is essentially what all ecologists do, has value far beyond developing plot lines for Animal Planet documentaries. Eric Berlow, an ecologist who studies food webs, argues that network science is primarily valuable because it can turn a complicated problem into a much more solvable complex problem. In a brilliant three-minute TED (technology, entertainment, and design) talk in Oxford in 2010, he used the same network analysis he uses on Sierra Nevada lake food webs to transform an über-complicated government-created diagram of U.S. strategy in Afghanistan—a tangled yarn ball of lines and arrows and boxes that was howlingly ridiculed in the media—into a very small set of truly actionable tasks.24 Instead of focusing on the whole complicated picture at once, Berlow highlighted the networked relationships between entities in the U.S. strategy and then cut out all the entities that were more than three or four degrees removed from the ultimate goal of “increasing popular support of the Afghan government.” Of this much smaller remaining set, he also eliminated the entities that no one could do anything about, such as the harsh terrain of Afghanistan. In less than three minutes, his analysis collapsed the entire complicated affair into just two necessary actions: active engagement of ethnic rivalries and religious beliefs, and fair, transparent economic development. Both are complex tasks, to be sure, but much more clearly understandable than the tangle of yarn that has ensnared the United States in Afghanistan for over a decade now.
Like natural systems, networks respond to change quite rapidly. Albert-László Barabási, whose book Linked is the most accessible and entertaining treatment of network science, likens networks to our own skin.25 Consider how much your skin can change—the nerve cells on your arm are just a few of millions of mildly important little nodes working away as part of a vast network. If you spill hot bacon grease on your arm, suddenly these nerve cells become the most important hub of activity—screaming out directions to the entire body and getting responses back: “Step away from the oven! Get the grease off! Send in the painkillers! Send in the immune system to fight off infection!” Likewise, consider how many sites that were once just lowly little nodes—someone’s personal project—found a niche and blossomed into the next go-to source for instant celebrity gossip, extreme political hyperventilating, or online shopping.
The Internet, like natural systems, doesn’t have any end goal or overarching values. So if change steers it toward providing the ultimate platform for global meet-ups of wannabe radical jihadists, that’s where it will go. Indeed, the Internet has evolved quite well with the changing face of global terrorism. It was once an outright recruitment tool for drawing fighters into al-Qaeda and other terrorist groups. But as large terrorist organizations were increasingly forced underground after 9/11, and as intelligence services’ scrutiny of jihadist websites grew, the recruitment mission largely dropped off. In its place, the sites of radicals became places to amplify the alleged insults of the foreign occupation of Iraq (a more effective, though indirect, means of recruitment) by posting the latest videos of allied atrocities such as the torture of prisoners at Abu Ghraib or the recorded killings of Americans by IED attack. More recently, chat rooms stemming from radical sites have been the places where disaffected youth from all over the world have been meeting up to flex their ideological street “cred” and share dreams of martyrdom.26
Can networks of would-be terrorists on the Internet, or the real-world networks of terrorists that arise from them (or more likely, from a number of kinship relations between radicalized individuals), be taken down? Like ecosystems, all networks are resilient to attack, to a point. If an attack takes out a node, like your personal website, the network will carry on just fine—this is akin to an individual animal dying in an ecosystem. It has been noted that by the time of the 9/11 attacks, several of the key hijackers on different planes had no close networked connections between one another, so that even if some had been caught, the rest of the terror network would have likely survived to carry out the rest of the attack.27 Even if an attack takes out a pretty important hub (akin to losing a whole species in an ecosystem), there may be some inconvenience, but as with the ecosystem “a new species will gain ascendancy,” and the network will anneal itself around the gap left by the departed hub. But as the threat gets larger, either by winding its way through many nodes in the network or by taking out some really big hubs, the resilience of the network is much less certain.
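That asymmetry, shrugging off random losses while faltering when hubs are hit, shows up even in a toy simulation. The hub-and-leaf topology and every number below are invented purely for illustration:

```python
import random

def largest_component(graph):
    """Size of the biggest connected cluster, found by depth-first search."""
    seen, best = set(), 0
    for start in graph:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            node = stack.pop()
            size += 1
            for nbr in graph[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    stack.append(nbr)
        best = max(best, size)
    return best

def remove_nodes(graph, victims):
    """Return a copy of the network with the victim nodes attacked."""
    victims = set(victims)
    return {node: {n for n in nbrs if n not in victims}
            for node, nbrs in graph.items() if node not in victims}

# Invented toy network: three interconnected hubs (0, 1, 2), each
# serving 30 leaf nodes, for 93 nodes in all.
graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
leaf = 3
for hub in (0, 1, 2):
    for _ in range(30):
        graph[leaf] = {hub}
        graph[hub].add(leaf)
        leaf += 1

intact = largest_component(graph)  # 93: the whole network holds together
random_victims = random.Random(0).sample(range(3, 93), 9)
after_random = largest_component(remove_nodes(graph, random_victims))  # 84
after_targeted = largest_component(remove_nodes(graph, [0, 1, 2]))     # 1
```

Losing nine random leaves barely dents the network, but losing just the three hubs shatters it into isolated fragments.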
We are learning now that with strong interconnection, we can also see catastrophic failure. Ferenc Jordan is a Hungarian scientist who studies all sorts of networks, from children’s social groups in schools to the power dynamics of social wasps to food web networks that bind different species in an ecosystem. He is especially interested in how these networks change when a big event happens—a popular child leaves the school, the queen wasp dies, or uncontrolled fishing removes a top predator. When a much bigger human tragedy occurred on July 7, 2005, Jordan was ready with the tools he uses to study networks to examine the attack on four London underground stations. He found that, of the 3.2 million possible combinations of stations to bomb, the terrorists chose the second most destructive combination in terms of structural damage to the underground network.28 Whether the terrorists actually used network analysis, chose the most intuitive or easiest set of stations to attack, or simply got lucky is unclear, but it suggests that studying networks has enormous value both offensively and defensively.
As our world becomes more interconnected, networks can actually become more vulnerable. This is because most networks now are not so isolated, but rather have become networks of networks. Recent analysis by Israeli and American scientists published in the journal Nature has shown that when networks are interdependent, failures in the nodes of one network will likely lead to failures of the network it is connected to.29 They also found that more heterogeneous networks, meaning those with fewer redundant copies of individual nodes, are more likely to collapse.30
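A caricature of that result: couple two small networks so that each node depends on a lifeline in the other, and a single failure can ping-pong back and forth until both collapse. The power-station and router names are invented, and this is a sketch of the general idea, not the model from the Nature paper:

```python
def cascade(dependency, initial_failures):
    """Propagate failures through dependencies between coupled networks.

    dependency maps each node to the single node (in the other network)
    it cannot function without; any node whose lifeline fails also fails.
    Returns the full set of failed nodes.
    """
    failed = set(initial_failures)
    frontier = list(initial_failures)
    while frontier:
        down = frontier.pop()
        for node, lifeline in dependency.items():
            if lifeline == down and node not in failed:
                failed.add(node)
                frontier.append(node)
    return failed

# Invented example: five power stations and five routers. Each router
# draws power from one station; each station is controlled through a
# router elsewhere in the ring, so the two networks are interdependent.
dependency = {}
for i in range(5):
    dependency[f"router{i}"] = f"power{i}"
    dependency[f"power{i}"] = f"router{(i - 1) % 5}"

failed = cascade(dependency, {"power0"})
# One failed power station ultimately drags down every node in both networks.
```

With no redundant lifelines, one local failure becomes total collapse; give each node a backup dependency and the cascade stops early.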
The same day that the authors’ rather theoretical paper appeared in Nature, it was validated in the real world through the misery of thousands of travelers stranded throughout Europe. A single volcanic eruption in Iceland, combined with just the right wind conditions, grounded all flights from most of Europe for several subsequent days. As flights were cancelled and passengers scrambled for other ways to get into and out of Europe, the entire transportation system came to a standstill. No planes, no trains, no boats, no rental cars. People were forced to camp out in airports, ironically stuck at an absolute standstill while located in some of the largest hubs of one of the largest transportation networks in the world.
Why didn’t redundancy work in this example? There are a number of likely reasons. For one thing, the threat was completely novel and was huge, encompassing the entire airspace very quickly. The airlines had little experience figuring out new flight patterns, safety precautions, and maintenance procedures for a giant ash cloud. As they began to compile this information in the days following the eruption, conflicting reports emerged, with some commercial airlines reporting few problems in test flights and the U.S. military reporting some worrisome conditions and deterioration of equipment during its test flights. One big contributor to the problem, seized on by critics of a single European Union, was that the EU treated the entire transportation network as a single entity, equally disabled by the ash. Only after many days of misery did the EU concede to divide its airspace into three units (a marginal improvement over just one unit), with different abilities to allow flights depending on local conditions.
Interconnected redundant systems solve plenty of security problems, but they also create literally a world of new security problems. This is because as ever more clever problem solvers interact, they elicit responses among their adversaries, who themselves must improve in order to keep up. The resulting escalation of armaments and defenses, and strategies and counter-strategies, that occurs between competing organisms (which is the topic of the next chapter) is both a response to the creative ways organisms use redundancy and a force for further adaptation. Just as some organisms use redundancy better than others (think beetles vs. centipedes), some organisms play this game of escalation better than others. The Cold War gave us an artificial sense that escalation was mostly about who had more, or more powerful, weapons, but as the next chapter shows, this kind of linear escalation is fairly limited in nature, and since the current state of security is far more wild than it was during the Cold War, we might want to pay attention to how biological escalation really works.
chapter six
BEYOND MAD FIDDLER CRABS
IN THE 1983 FILM War Games, two teenagers inadvertently nearly set off World War III when they hack into the computer of the North American Aerospace Defense Command (NORAD) and cause it to simulate a full-scale nuclear attack from the Soviet Union. In desperation they track down the disenchanted computer scientist, Stephen Falken, who programmed the computer, and hightail it in his helicopter to NORAD to stop the U.S. Air Force nuclear hawks before they unleash their missiles for real. As they watch the out-of-control computer, Dr. Falken shares a plan with the movie’s female lead:

FALKEN: Did you ever play tic-tac-toe?
JENNIFER: Yeah, of course.
FALKEN: But you don’t anymore.
JENNIFER: No.
FALKEN: Why?
JENNIFER: Because it’s a boring game. It’s always a tie.
FALKEN: Exactly. There’s no way to win.1
Accordingly, with the air force brass nervously watching on, they program the computer first to play tic-tac-toe against itself until it gets bored. Then they have it play thermonuclear war games against itself until it likewise figures out that there is no way to win, and it finally stops the real game it is playing with the U.S. nuclear stockpile.
It’s a captivating scene because the simplicity of the tic-tac-toe board and the horrifying trajectories of thousands of nuclear missiles painting a web around the world map contrasted on the huge NORAD video display exemplify the same bare logic—some conflicts can’t be won.
At the end of War Games, the audience breathes a collective sigh of relief. The DEFCON level returns to low; stability has returned. The United States and the Soviets both still have thousands of missiles pointed at one another, but the computer running them has learned, just as the humans who made the missiles and the computer programs had, that using them would benefit neither side. In the fantasy world of War Games and in the real world of the Cold War, this stability was known as “mutually assured destruction,” or MAD. The idea that any release from this stable state would lead to immediate and complete catastrophe became both the mechanism and the energy to maintain stability. MAD is what kept the Cold War cold.