Breakpoint: Why the Web Will Implode, Search Will Be Obsolete, and Everything Else You Need to Know about Technology Is in Your Brain


by Jeff Stibel


  Like all networks in a growth phase, the internet appears to be boundless. But the internet is a physical network, subject to physical limits. In our world of wireless computers, smartphones, tablets, and cloud storage, it’s easy for us to forget this. The internet is bound by the width of cables, the amount of energy available, and the capacity of routers and switches. So it is remarkable that we have gone from two to almost ten billion internet-connected devices in a mere half century and still have not hit a breakpoint.
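
  To get a sense of how fast that is, a quick back-of-the-envelope calculation (sketched here in Python, with the half-century span and device counts rounded to the figures cited above) turns those two data points into an average growth rate:

      # Rough growth implied by going from 2 connected devices to
      # about 10 billion in roughly 50 years. Purely illustrative.
      import math

      start_devices = 2
      end_devices = 10e9
      years = 50

      annual_growth = (end_devices / start_devices) ** (1 / years) - 1
      doubling_years = math.log(2) / math.log(1 + annual_growth)

      print(f"Average growth rate: ~{annual_growth:.0%} per year")
      print(f"Doubling time:       ~{doubling_years:.1f} years")

  Even rounded crudely, the network has been doubling in size roughly every year and a half for five decades, which is exactly the kind of hypergrowth that eventually collides with a carrying capacity.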

  In 1995, just as the internet was entering hypergrowth, many pundits were convinced it would collapse under the weight of that growth. Bob Metcalfe, inventor of Ethernet and namesake of Metcalfe’s Law (the networking principle that bigger is better), went so far as to say that it would “soon go spectacularly supernova and in 1996 catastrophically collapse.” Metcalfe gave 11 key reasons, including the rate of growth, the amount of spam online, and the limits of available bandwidth. Many people at the time agreed with him, but, of course, the internet did not collapse. Yet Metcalfe was largely correct: the internet was growing too fast, well beyond its carrying capacity.

  Traffic was simply too heavy. Consider America Online (AOL), the largest internet service provider at the time. In 1994, AOL openly admitted that it could not handle the load or demand of the internet. It started limiting the number of users online during peak times, almost begging customers to switch to competitors. The problems culminated in August 1996 with a huge outage that affected six million AOL users and ultimately forced AOL to refund millions of dollars to angry customers. Clearly, the population of the internet had overshot the carrying capacity of its environment—its bandwidth. Yet rather than imploding, the internet somehow continued to grow, with more people spending more time on the internet and creating even more traffic. The internet continued to accelerate, defying logic.

  The internet continues to grow because we keep moving it to new environments with increased carrying capacities. The biological equivalent would be if the reindeer herd, having eaten all the lichen on St. Matthew Island, swam to a nearby island that had more lichen, and then continued to grow its population. It’s like a crab finding a larger shell, or a brain that could somehow grow beyond its skull. In the non-biological world, Ponzi schemes work the same way: each successive round of the scam increases the carrying capacity so as to avoid reaching a breakpoint (though, as with any fixed environment, it is only a matter of time before the house of cards comes crashing down and somebody goes to jail). With the internet, we’ve loaded everything up and moved to a bigger island—and we’ve done it about a half dozen times.

  Most of us still remember dialing up in the 1990s with the accompanying sounds of screeching and static. The early internet relied exclusively on the telephone network, which was built to transmit analog data. Your computer was connected with a phone cable to your modem, which translated digital data to analog data and then sent that analog data into your phone jack. When you “dialed in,” your call was answered by your internet service provider, which then exchanged data with your computer through the modem. Modems were slow, excruciatingly so by modern standards. In 1991, modems worked at a speed of 14.4 kilobits per second (kbps). By 1996, the year in which Bob Metcalfe said the internet would collapse, we were cruising at around 33.6 kbps, which many considered to be the upper limit of speed available through a standard four-wire phone cable. But it wasn’t. The 56 kbps modem was invented in 1996 and became widely available in 1998. Same island, more lichen.

  It became increasingly clear that the phone network and the four-wire phone cable weren’t cut out for transmitting all this new digital data, and part of the solution ironically stemmed from Bob Metcalfe himself. Metcalfe had invented Ethernet (and its corresponding hardware) back in the 1970s. Ethernet networks, with their larger bandwidth connections, quickly became popular in universities and large companies.

  Cable broadband internet was introduced in the mid-1990s and became widespread at the turn of the century. Using the existing cable television network and its corresponding coaxial wiring, new cable modems, plus Metcalfe’s eight-wire RJ45 Ethernet cords, we were able to radically increase data speeds—from 56 kbps to between 1,000 and 6,000 kbps—or 1 to 6 megabits per second (many cable modems are currently capable of speeds up to 30 Mbps, but few internet service providers supported those speeds back then). This was a true bandwidth revolution, and we soon found ourselves with more carrying capacity than we knew what to do with. New island, fresh lichen.
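
  To make those numbers concrete, here is a rough calculation (a Python sketch that ignores protocol overhead and real-world line conditions) of how long a 5-megabyte file, roughly one MP3 song, would take to download at each of the speeds mentioned above:

      # Approximate download times for a 5 MB file at the connection
      # speeds discussed above. Illustrative only; real throughput is
      # lower because of protocol overhead and line quality.
      FILE_SIZE_BITS = 5 * 8 * 1_000_000   # 5 megabytes expressed in bits

      speeds_kbps = {
          "14.4 kbps modem (1991)": 14.4,
          "33.6 kbps modem (1996)": 33.6,
          "56 kbps modem (1998)": 56.0,
          "1 Mbps cable": 1_000.0,
          "6 Mbps cable": 6_000.0,
          "30 Mbps cable": 30_000.0,
      }

      for name, kbps in speeds_kbps.items():
          minutes = FILE_SIZE_BITS / (kbps * 1_000) / 60
          print(f"{name:24s} ~{minutes:6.1f} minutes")

  The same song that took the better part of an hour over a 1991 modem arrives in seconds over cable: same internet, a much bigger island.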

  Over time, we invented larger and faster connections—T1, T3, fiber optics. Collectively, we call these higher-capacity connections “broadband” because they carry a far broader band of data than the phone network ever could. In moving from our old phone system infrastructure to a shiny new broadband infrastructure, we essentially moved the internet from one island to a larger island. There was plenty of space to roam and lots of virtual lichen for us to eat. But now we’re getting too big even for this island.

  IV

  In the brain, limitations of skull size and energy consumption are offset by evolutionary innovations. It turns out that the brain is an expensive asset because it consumes so much of the body’s energy. In nature, food is often scarce, and hunting for calories is a time-consuming and dangerous task. One of the reasons that animals have relatively small brains is that efficiency trumps intelligence. In fact, humans have evolved cultural and technological tools to offset the energy hogs in our skulls.

  If you had to guess what one thing separates us from the rest of the animal kingdom, it would likely come from a predictable list: bipedalism, opposable thumbs, use of fire. These things are important, but Richard Wrangham, a Harvard University anthropologist, put forth a new theory that has recently been supported by research from Suzana Herculano-Houzel, a neuroscientist at the Federal University of Rio de Janeiro in Brazil. They have shown that what sets us apart is our ability to cook. For us to evolve bigger brains than our closest ape cousins, we needed to increase our caloric intake by over 700 calories per day. That may seem easy these days (one Big Mac would do the trick with a few calories to spare), but remember, we started off as raw foodies. That posed a huge problem for our former selves: eating raw food is incredibly time consuming—it takes a gorilla nearly 80 percent of its day to forage and consume the calories needed to maintain a brain one third our size. To grow our brains from ape-sized to human-sized would have required spending well over nine hours crunching veggies and chewing on raw meat each day. That would have left little time for anything else, rendering our larger brains useless.

  Cooking food actually changes its composition, which allows cooked food to be consumed more quickly and digested faster. By cooking food, our ancestors consumed many more calories than they would have otherwise, which provided fuel for their hungry, growing brains and left them with extra time to use those brains. Herculano-Houzel, after publishing her findings in the Proceedings of the National Academy of Sciences, went so far as to say that “the reason we have more neurons than any other animal alive is that cooking allowed this qualitative change.” We set ourselves apart from other animals because cooking increased our energy intake enough to support a bigger brain.

  We once believed that what made us smart was descending from the trees, becoming bipedal, or discovering fire, but perhaps it was our gluttonous consumption. We increased our carrying capacity by creating efficiencies that other animals did not have available. Or as Herculano-Houzel says, “The more I think about it, the more I bow to my kitchen. It’s the reason we are here.”

  V

  The internet is also an energy hog; green it is not. We are now beginning to understand the massive energy it will take to sustain the internet’s growth. Think of all the things that use energy: cars, factories, drilling, China. None of them is growing its energy appetite as quickly as the internet, which by recent estimates already accounts for roughly 2 percent of all energy consumed.

  As with cooking to increase caloric intake or migrating to find lichen, internet companies across the globe have moved to energy-rich environments. In fact, most internet companies don’t actually reside in Silicon Valley; their people may be there, but not their technologies. These companies have moved their systems to areas where energy is abundant. For example, the data centers for Google, Facebook, Netflix, and many other companies are housed near abundant and cheap energy sources. Some sit near water dams, others near wind power, still others near coal, natural gas, or nuclear power.

  Google alone uses enough energy each year to power 200,000 homes. That’s roughly 260 million watts, or one-quarter of the output of a large nuclear power plant. When you think of the meteoric growth of the internet, you can quickly see that there is an alarming problem ahead of us: the internet is on track to consume 20 percent of the world’s power, just as the brain consumes 20 percent of the body’s power. At the internet’s current rate of growth, it will get there within ten years.
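
  Those figures are easy to sanity-check with round numbers. The household draw and reactor output used below are assumptions made here for illustration (roughly 1.3 kilowatts per average home, roughly 1,000 megawatts per large reactor), not figures from the text above:

      # Back-of-the-envelope check of the figures above.
      # Assumed values: an average home draws about 1.3 kW continuously,
      # and a large nuclear reactor produces roughly 1,000 MW.
      google_power_watts = 260e6        # ~260 million watts, as cited above
      avg_home_watts = 1_300            # assumed continuous draw per home
      reactor_watts = 1_000e6           # assumed output of one large reactor

      homes_powered = google_power_watts / avg_home_watts
      share_of_reactor = google_power_watts / reactor_watts

      print(f"Homes powered:      ~{homes_powered:,.0f}")    # ~200,000
      print(f"Share of a reactor: ~{share_of_reactor:.0%}")  # ~26 percent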

  This leads to an obvious problem. If the internet continues along this growth trajectory, it could take down the entire energy grid and either collapse or accelerate global warming to an unsustainable rate in the process. Luckily the internet, like the brain, has evolved a few shortcuts to maximize its energy efficiency. Remember TCP and how the brain, ants, and the internet all use this technology to regulate the flow of information? TCP is basically an efficiency gateway. It actively looks for bottlenecks and frees them by slowing down transmissions. Those slowdowns paradoxically speed up the entire system, thereby creating efficiency and energy savings.
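
  The mechanism being described is TCP’s congestion control: senders steadily increase their transmission rate, and the moment the network shows signs of a bottleneck they back off sharply. The toy model below (a simplified additive-increase, multiplicative-decrease loop, not the actual protocol code) shows the basic rhythm:

      # Toy model of TCP-style congestion control (AIMD): grow the
      # sending rate steadily, and cut it sharply whenever the network
      # signals congestion (for example, a dropped packet).
      import random

      random.seed(1)
      capacity = 100    # hypothetical link capacity, packets per tick
      window = 1        # current sending window, packets per tick

      for tick in range(25):
          congested = window > capacity or random.random() < 0.05
          if congested:
              window = max(1, window // 2)   # multiplicative decrease: back off
          else:
              window += 10                   # additive increase: probe for room
          print(f"tick {tick:2d}: sending {window:3d} packets")

  Each sender voluntarily slowing down is what keeps the shared link from being overrun, which is the paradoxical speedup described above.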

  TCP isn’t the brain’s only trick. Our brains compartmentalize different functions to increase efficiency. Brain scientists call this modularity. We have distinct regions for language, vision, memory, and most other high-level cognitive functions. Speed and efficiency are the hallmarks of a modular system—it is much more economical when many of the areas that control a specific function are close together. Just imagine an airplane with half of the controls in the cockpit and the rest in the rear lavatory, and you’ll get the idea.

  Modularity has become the norm on the internet. We have structured large parts of the internet into what are called server farms, massive storage facilities housed near one another. Part of the reason for this is power constraints, but it is also an efficiency trick. Huge speed efficiencies result from having Facebook, Netflix, Amazon, and all of the smaller guys sharing space. It turns out that much of the internet is housed in large buildings known as “carrier hotels.” One carrier hotel in New York City has well over one million square feet of floor space, more than the Empire State Building. In a crowded city, it contains mostly computers and wires. Imagine the value of that building. Actually, you don’t have to imagine: Google bought it for $1.9 billion in the highest priced real estate transaction recorded across the globe in 2010. This particular piece of real estate was purchased for Google’s most valuable asset, which is not human capital, but computers and wires. And even though Google owns the building, the real estate continues to be shared with some of the largest—and smallest—names across the internet, creating a virtuous cycle of increased efficiency.

  The human brain’s capacity for reason, consciousness, judgment, and decision making is due in large part to a module called the cerebral cortex, a region that is larger as a percentage of our brains than it is in any other animal. Elephants may have bigger brains overall, but our relative cortical size is much larger. Invertebrates don’t have cortexes at all. The neocortex, the evolutionarily newest part of the cerebral cortex, reached its modern human form only about 200,000 years ago. It is responsible for virtually all areas of higher reasoning.

  Cloud computing is modularity at its finest, and it may evolve into the cerebral cortex of the internet. Most people think of cloud computing as a way to store information, which it is, but clouds do more than that. Computing clouds allow for independent computations to happen across the internet, giving individuals access to virtually unlimited computing resources. Where you were once limited to your own computers or servers to process information, the cloud allows you to tap the resources of universities, governments, and large companies such as Amazon, Google, IBM, and Microsoft.

  There is incredible efficiency associated with this model, as large entities can rent out idle computing resources at a fraction of the cost. But much more is going on behind the scenes. Clouds allow you to tie many small computers together to make large distributed supercomputers. Google’s cloud, for instance, is composed of nothing more than racks of inexpensive, off-the-shelf computers. But when you put together hundreds of thousands of those machines, the computational power is awesome, more powerful than any machine on earth (including biological machines like our own brains). Because the individual units are independent of one another, computations can happen in parallel, just as in the brain. Individual computers can’t do that; clouds make it happen. Parallel processing—where multiple things happen at once—has been linked directly to consciousness and self-awareness. It is here that real intelligence, and possibly even consciousness, will likely come online.
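
  The core idea, many cheap machines working on independent pieces of a problem at the same time, can be sketched with nothing more than Python’s standard library. Here a pool of local worker processes stands in for a cloud’s servers, and the prime-counting task is just a throwaway example of independent work:

      # Minimal sketch of independent computations running in parallel,
      # in the spirit of a cloud spreading work across many cheap machines.
      # The "machines" here are just local worker processes.
      from concurrent.futures import ProcessPoolExecutor

      def count_primes(bounds):
          """Count primes in [lo, hi) by naive trial division."""
          lo, hi = bounds
          count = 0
          for n in range(max(lo, 2), hi):
              if all(n % d for d in range(2, int(n ** 0.5) + 1)):
                  count += 1
          return count

      if __name__ == "__main__":
          # Split one big job into chunks that need nothing from one another.
          chunks = [(i, i + 100_000) for i in range(0, 1_000_000, 100_000)]
          with ProcessPoolExecutor() as pool:
              total = sum(pool.map(count_primes, chunks))
          print(f"Primes below 1,000,000: {total}")

  Because the chunks never need to talk to one another, adding more workers (or more machines) speeds the whole job up almost linearly, which is exactly the property that lets a cloud behave like one enormous computer.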

  The human brain has an incredibly efficient memory system, which is actually divided into two distinct systems. The first stores information permanently in the brain, which creates our long-term memories. The second is a fleeting memory system that remembers a small amount of information for a small amount of time. The reason for this is efficiency: pulling information from long-term memory is very costly in terms of energy. Short-term memory, on the other hand, is fluid and easily accessible; it’s an ideal repository for information the brain is likely to need in the near future. The downside is how small the memory system is—it turns out that our brains can hold only about seven pieces of short-term information at any given time. So short-term memory is quite limited, but that’s precisely why it’s so efficient.

  Mimicking the brain’s short-term memory, scientists at MIT invented content delivery networks, or CDNs, in the late 1990s. Since then, companies like Akamai and Edgecast have commercialized variations of the technology. Basically, this technology replicates what the brain does: it creates short-term storage for information that is used often. These companies have built servers all over the world whose purpose is to store information close to where you are. If you are in Singapore trying to reach Facebook or YouTube, you are likely going to see a copy of those pages served from a server in Singapore hosted by Akamai or Edgecast. Just imagine the amount of time and energy saved by having that information close to home instead of going across the globe to retrieve it. Not surprisingly, this is big business, as nearly 45 percent of all internet traffic flows through a content delivery network.
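
  The trick behind both short-term memory and a CDN is the same: keep a small set of frequently used items close at hand, and make the long, expensive trip only when something isn’t there. A minimal sketch of such a cache follows; the seven-item limit is a nod to short-term memory, and the fetch function is a hypothetical stand-in for the slow trip to a distant origin server:

      # Tiny illustration of edge caching: hold recently used items
      # locally and fetch from the faraway origin only on a miss.
      # The 7-item capacity echoes short-term memory; real CDN caches
      # are, of course, vastly larger.
      from collections import OrderedDict

      class EdgeCache:
          def __init__(self, fetch_from_origin, capacity=7):
              self.fetch = fetch_from_origin
              self.capacity = capacity
              self.items = OrderedDict()

          def get(self, url):
              if url in self.items:
                  self.items.move_to_end(url)    # recently used: keep it close
                  return self.items[url]
              content = self.fetch(url)          # slow, "across the globe" fetch
              self.items[url] = content
              if len(self.items) > self.capacity:
                  self.items.popitem(last=False) # evict the least recently used
              return content

      # Hypothetical origin fetch, standing in for a distant server.
      cache = EdgeCache(lambda url: f"<contents of {url}>")
      print(cache.get("example.com/video"))      # first request: fetched from origin
      print(cache.get("example.com/video"))      # second request: served locally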

  Myelin sheaths are another interesting innovation, one that has evolved only in vertebrate brains. Myelin is a fatty tissue that wraps the connections going from neuron to neuron. This wrapper acts as insulation, helping the signal traveling between neurons stay intact. The advantage of this, again, is speed and efficiency. Without myelin, the information traveling between neurons would decay faster or require more energy to make the trip. In that case, the brain would be smaller and slower; or, at a minimum, the neurons would need to be closer together. If neurons had to be close together to communicate, long spinal cords would be ineffective and would not allow for the simultaneous growth of intelligence and body size, the latter of which is necessary to consume energy to support the former. This is a key reason that vertebrates have larger brains (and vertebrae, for that matter).

  Samuel Morse and Alexander Graham Bell both used non-insulated copper wires in early versions of the telegraph and telephone. These lines worked reasonably well over short distances, but the communication decayed over long distances. So they added a synthetic version of myelin—a plastic coating to wrap around the copper. This insulation kept the electrical signal from decaying or completely falling off the copper. We now have all kinds of insulators, from metal and glass to plastic and ceramic. Each provides added levels of efficiency, reducing the need for energy. This is true with copper, aluminum, and even silicon.

  We are not perfect computers; our brains tend to fumble in the dark and make educated guesses. Imagine trying to calculate the trajectory of a flying object as well as its shape, distance, wind velocity, and speed. Computers can do this perfectly, without breaking a sweat. Now imagine lifting your arm and catching a ball—ah, easy. Computers compute; we guess. In that way, our brains are designed to be prediction engines, fallible and full of mistakes, and to these characteristics we owe our baseball prowess. Our brains make guesses and are often wrong. But being wrong can be a good thing: when a system is free to make mistakes, it can trade the exorbitant cost of perfection for enormous energy savings. The brain’s lack of perfection saves significant energy without reducing overall intelligence. The brain can do all that fancy ball handling with less energy than is consumed by a single 20-watt light bulb.

  The brain’s lack of perfection starts from the bottom, with the smallest brain component—the neuron. Neurons aren’t just fallible, they are downright faulty: neurons fail to fire when they should between 30 and 90 percent of the time. Our 100 billion neurons fire often enough that misfires do no real harm because the network has enough neurons to correct itself. So even at the neuronal level, the brain prefers expediency to perfection.
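
  A small simulation makes the point about redundancy. Suppose each unit misfires half the time (a rate within the range quoted above) and a signal is carried by a couple hundred redundant units at once; the numbers here are illustrative, not measurements:

      # Reliability through redundancy: each "neuron" fails to fire
      # half the time, yet a signal carried by many redundant units
      # is essentially never lost altogether.
      import random

      random.seed(0)
      FAILURE_RATE = 0.5    # each unit misfires 50% of the time (illustrative)
      POPULATION = 200      # redundant units carrying the same signal
      TRIALS = 10_000

      lost = 0
      for _ in range(TRIALS):
          fired = sum(random.random() > FAILURE_RATE for _ in range(POPULATION))
          if fired == 0:    # the signal is lost only if every unit misfires
              lost += 1

      print(f"Single unit misfires: {FAILURE_RATE:.0%} of the time")
      print(f"Signal lost entirely: {lost} times out of {TRIALS:,}")

  The chance of all 200 units failing at once is 0.5 raised to the 200th power, effectively zero, which is why the network as a whole can afford such sloppy components.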

  Researchers at Stanford and Caltech are currently looking at ways to replicate that error rate in transistors with the hope of reducing energy consumption without compromising performance. They have created the Neurogrid chip, which uses roughly 1/10,000th of the power of a traditional silicon chip. Like neurons, these chips are not perfect, and as a result they require millions of transistors to do the calculating work of just a few traditional ones. But that isn’t the point, of course. Ultimately, we are trying to catch the ball, not calculate its trajectory. IBM recently attempted to simulate the brain functionality of 55 million neurons using traditional chips. It succeeded, but at a cost of 320,000 watts of power. The Neurogrid chip, like the brain, could do similar work on less than a watt.

 
