Tubes: A Journey to the Center of the Internet


by Andrew Blum


  I stammered a bit: I wasn’t trying to hurt the Internet! I love the Internet! I tried to explain my journalistic imperative: that only by making people more aware of these places will there be the wherewithal to properly protect them. I believed that, yet it didn’t strike me as an argument that served his interests well. I turned the question back to him. I knew they wanted the attention; it would be good for business. So which was more important: that, or being quiet, perhaps quieter than his competitors? He shrugged, before delivering a parting shot: “Do you want to be the guy who says, ‘Here’s what you attack to take down the country’?” And then he talked for another hour.

  The truth was, his question stuck. In the course of visiting the Internet, when I arrived someplace new I often felt a kind of low dread that my journey was too eccentric to be palatable, that anyone I encountered would suspect ulterior motives, that I was a subject of suspicion. I was off the path, nosing around places where few, if any, ever did. I wasn’t so paranoid that I really believed I was being followed, but I didn’t feel entirely comfortable either. After all, what was I really doing? (“Oh, you know, just sharing details about your local critical infrastructure with the world.”)

  But I shouldn’t have worried. Inevitably, when I arrived at some unmarked building crucial to the functioning of the Internet, the same thing always happened. The veil of secrecy didn’t descend, but lifted. Instead of stumbling around in the dark looking for the network, it often felt as if the lights had come up, and the more a person knew about the physical infrastructure of the Internet, the less concerned he or she appeared to be about its security. The “secret” locations I was interested in were not so secret after all. Whoever happened to be in charge happily led me around, and nearly always spent extra time to make sure I understood what I was looking at.

  Over time I recognized that their openness wasn’t merely polite, but philosophical—an attitude in part derived from the Internet’s legendary robustness. Well-designed networks have redundancies built in; in the event of a failure at a single point, traffic would quickly route around it, so an engineer doing his job properly shouldn’t be worried. More often, the biggest threat to the Internet is an errant construction backhoe or, in one recent well-publicized case, a seventy-five-year-old grandmother in the country of Georgia slicing through a buried fiber-optic cable with a shovel, knocking Armenia offline for twelve hours.

  Yet above and beyond those practical concerns (or lack thereof) was a more philosophical rationale: the Internet is profoundly public. It has to be. If it were hidden, how would all the networks know where to connect? Equinix in Ashburn, for example, is unequivocally one of the most important network hubs in the world—as Equinix would be the first to tell you. (And if you enter “Equinix, Ashburn” into Google Maps, a friendly red flag will land square in the middle of the campus.) With the exception of certain totalitarian countries, a network doesn’t have to apply to any central authority to connect to another network; it just has to convince that network it’s worth its while. Or, even easier, just pay the network. The Internet has the character of a bazaar, with hundreds of independent players circulating around one another, working things out among themselves. This dynamic is at work physically, in buildings like the PAIX, Ashburn, and others. It’s at work geographically, as networks move to complement one another’s regional strengths. And it’s at work socially, when network engineers break bread and drink beer.

  When we’re sitting in front of our screens, the path by which everything comes to us is entirely obscured. We might notice that one page loads faster than another, or that a movie streaming from one site always looks better than one from another—a result, very likely, of fewer hops between the source and us. Sometimes this is obvious; I recall planning a trip to Japan, and waiting as local travel pages loaded like molasses. Other times it takes an extra leap of understanding; video-chatting with a friend in another city, I couldn’t get over how good the quality was until I remembered that she had the same home Internet service provider. The stream never had to leave the network. But by and large, when we enter an address in our browser, or an email arrives in our inbox, or an instant message flashes on the screen, there’s no clue whatsoever as to the path it took to get there, how far it traveled or how long it took. From out here, the Internet appears to have no texture, no grain; with rare exceptions, there’s no “weather”—conditions don’t change day to day.

  Yet looked at from within, the Internet is handmade, one link at a time. And it’s always expanding. The constant growth of Internet traffic requires the constant growth of the Internet itself, both in the thickness of its pipes and the geographic reach of individual networks. For the engineers, that means a network not busy being born is busy dying. As Eric Troyer said about Ashburn, “The goal of coming into sites like ours is to create as many vectors out to the logical Internet as you can. The more vectors, the more reliable your network becomes—and generally the cheaper it becomes because you have more ways to send your traffic.”

  So the Internet is public because it’s handmade. New links don’t just happen according to some automated algorithm, they need to be created: negotiated by two network engineers, then activated along a distinct physical path. That’s hard to make happen in secret.

  Making those connections between networks is known as “peering.” In the simplest terms, peering is the agreement to interconnect two networks—but that’s like saying “politics” is merely the activity of government. Peering implies that the two networks involved are “peers,” in the sense that they are of the same size and status, and therefore exchange data on more or less equal terms, and without money changing hands. But figuring out who’s your peer is a touchy business in any context. Inside the Internet, it’s made more complicated when peering can also mean “paid peering”—when something with a clearer value than data is added to tip the scales in one direction or another. In its subtleties and nuances, peering has a Talmudic quality, with a body of laws and precedents that are ostensibly public but require years of study to be properly understood. The consequences are huge. Peering allows information to flow freely across the Internet—by which I mean both liberally and at low cost. Without peering, online videos would clog the Internet’s pipes—YouTube might no longer be free. And service providers would accept less reliability in the name of lower costs. The Internet would be more brittle and expensive. Given those stakes, nowhere is the process of Internetworking more intense, and more fraught with occasional drama, than among the network engineers loosely known as “the peering community.”

  I went to observe them firsthand at one of the thrice-annual meetings of the North American Network Operators’ Group, or NANOG, at the Hilton in Austin, Texas. When I arrived, the hotel lobby was filled with men in jeans and fleece, chatting quietly with one another across the tops of laptops festooned with bumper stickers. These are the wizards behind the Internet’s curtains—although plumbers might be just as good an analogy. What they do certainly seems like magic. Collectively, they command a global nerve system of astonishing capabilities, even if most of the time its daily operations are mundane. But mundane or not, there’s no doubt we’re fearsomely dependent on the body of highly specialized knowledge that only they possess. When things go wrong in the middle of the night on the Internet’s biggest pipes, only the NANOGers know how to fix it. (And it’s a stale joke at the conference that if a bomb went off in its midst, who would be left to run the Internet?) They aren’t primarily bureaucrats or salespeople, policymakers or inventors. They are operators, keeping the traffic flowing on behalf of their corporate bosses. And on behalf of one another. The defining characteristic of the Internet is that no network is an island. Even the most crack engineer is useless without the engineer who runs the next network over. Accordingly, people don’t come to NANOG for the formal presentations. They come for the networking opportunities—and not “networking” as a figure of speech. Plenty of business cards were exchanged at the conference I attended, but so were Internet routes. A NANOG meeting is the human manifestation of the Internet’s logical links. It exists to cement the social bonds that underscore the Internet’s technical bonds—a chemical process aided by ample bandwidth and beer.

  The typical NANOGer will have the job title of “engineer” preceded by one of a handful of qualifiers like “data,” “traffic,” “network,” “Internet,” or, occasionally, “sales.” He—and nine out of ten attendees in Austin were men—might run the Internetwork of one of the biggest and most familiar suppliers of Internet content, like Google, Yahoo!, Netflix, Microsoft, or Facebook; one of the biggest owners of the Internet’s physical networks, like Comcast, Verizon, AT&T, Level 3, or Tata; or one of the companies variously serving the Internet’s inner workings, from equipment makers like Cisco or Brocade to cell-phone manufacturers like Research in Motion, to volunteer delegates from ARIN, the Internet’s contentious, United Nations–like governing body. Jay Adelson was a NANOG fixture until he left Equinix, and Eric Troyer rarely missed a meeting. Steve Feldman—the guy who built MAE-East—was the chair of the NANOG steering committee.

  If for most of us a given bit’s journey across the Internet is opaque and instantaneous, for a NANOGer it is as familiar as a walk to the grocery store. At least in his own Internet neighborhood he will know each link along the way. He can invariably diagram the logical links and, in all likelihood, picture the physical ones. He may have set it up himself, configuring the routers (perhaps even unpacking them from their original boxes), ordering the appropriate long-distance circuits (if not showing where they should be dug into the ground), and continually fine-tuning the flows of traffic. Martin Levy, an “Internet technologist” at Hurricane Electric, which runs a good-sized international backbone network, keeps a photo album of routers on his laptop, alongside pictures of his son. These are the people with the best mental maps of the Internet, the ones who have internalized its structure beyond all others. And they’re also the ones who know that its proper functioning—that every move you make online—depends on a clear and open path across the whole Internet, from end to end.

  The peering people divided into two camps: those looking for new networks to connect to their network; and the facility owners and Internet exchange operators who compete to host those physical connections in their buildings. The highest-powered of both sets tended to be more extroverted, bopping around during the coffee breaks slapping hands and handing out business cards. They were better dressed, and they bragged about how they could hold their alcohol. Take, for example, the peering link between Google and Comcast, the big US cable company. The YouTube videos, Gmail emails, and Google searches of Comcast’s fourteen million customers would, as much as possible, use the direct link between the companies’ networks, avoiding any third-party “transit” provider. Physically, that Comcast-Google link would be repeated a handful of times in places like Ashburn and the PAIX (and indeed in those two specifically). But socially, it is visible in the relationship between the peering coordinators, Ren Provo at Comcast and Sylvie LaPerrière at Google—two of the few women at the conference. Provo, whose official title is “Principal Analyst Interconnect Relations,” worked the crowd at NANOG in her Comcast bowling shirt, asking about people’s families and yelling jokes across the room. Her husband, Joe, is also a network engineer and high up in NANOG’s volunteer bureaucracy, making the Provos the conference’s unofficial power couple; many NANOGers speak fondly of their wedding weekend. LaPerrière is a charming French Canadian whose business card reads “Programme Manager.” She seems to be universally adored, though the adoration is tempered by an undercurrent of fear at her power. If you run a network, you want good links to Google—if only so your customers don’t complain their YouTube videos are jittery. LaPerrière makes it easy for them. For the most part her job is to say yes to all comers, since those good links are in Google’s interest as well. Their peering policy is “open”; in the macho jargon of NANOG that makes LaPerrière a “peering slut” (and would if she were a man, too). Not surprisingly, LaPerrière and Provo are good friends, and I often spotted them huddling in the hotel hallways. Their relationship helps smooth the way across a technically complex and financially fraught minefield of variables.

  “You can tell if your friend is telling the truth or not,” LaPerrière explained to me, before quickly adding that friendship has its limits. “In the long run you’ll only be successful if you’re truly representing your company, and you make it very clear that this is your company’s policy, not the ‘Sylvie policy,’” she said. “Friendship in my book doesn’t play a role at all. It just makes the interactions nicer.” Yet her protestations only make the broader point: a connection between networks is a relationship.

  And peering can get nasty. Occasionally a major network will “de-peer”—literally pull the plug on a connection and refuse to carry its combatant’s traffic, usually after failing to convince the other network that it should be paying them. In one famous de-peering episode in 2008, Sprint stopped peering with Cogent for three days. As a result, 3.3 percent of global Internet addresses “partitioned,” meaning they were cut off from the rest of the Internet, according to an analysis by Renesys, a company that tracks Internet traffic flows and the politics and economics of connection. Any network that was “single-homed” behind Sprint or Cogent—meaning they relied on the network exclusively to get to the rest of the Internet—was unable to reach any network that was “single-homed” behind the other. Among the better-known “captives” behind Sprint were the US Department of Justice, the Commonwealth of Massachusetts, and Northrop Grumman; behind Cogent were NASA, ING Canada, and the New York court system. Emails between the two camps couldn’t be delivered. Their websites appeared to be unavailable, the connection unable to be established. The web had broken into pieces.

  For Renesys, which makes a business out of measuring the quantity of Internet addresses handled by each network and reading the tea leaves as to their quality, a “de-peering event” like that is an amazing moment, like the lights being flipped on at a club. The relationships are revealed. The topography of the Internet is inherently public, or else it wouldn’t work—how would the bits know where to go? But the financial terms that underpin each individual connection are obscured—just as an office’s physical address is public while the details of its lease are private. The lesson Renesys was selling from this analysis was that anyone serious about their Internet should be “intelligently multi-homed.” Meaning: don’t route all your eggs through one network. The network engineers’ credo is “Don’t break the Internet.” But as Renesys’s Jim Cowie explained, that cooperation goes only so far. “When it gets to a level of seriousness, people get very quiet. There’s a huge amount of money and legal exposure at stake.”

  Traditionally, peering has been dominated by an exclusive club made up of the biggest Internet backbones, often known as the “Tier-1” carriers. In the strictest definition, Tier-1 networks don’t pay any other network for a connection; others pay them. A Tier-1 network has customers and peers, but it doesn’t have “providers.” What results is a tightly interconnected clique of giants, often whispered about as a “cabal.” Renesys tracks the relationships among them by “reading the shadows on the wall,” as Cowie put it, created by the routes each network broadcasts to the Internet routing table—the signs that say “this way to that website!” But because the exact agreements between networks are private even if the routes are public, the precise list of Tier-1 providers can be hard to write. In 2010, Renesys identified thirteen companies at the top of the heap, and four at the very top: Level 3, Global Crossing, Sprint, and NTT. But in 2011, Level 3 purchased Global Crossing, in a deal valued at $3 billion—so then there were three.

  However, peering has been evolving in recent years. As the Internet has grown, the practice has become increasingly distributed. It’s become more cost-effective for smaller networks to peer among themselves, in part because many smaller networks have become pretty big. And while peering used to be more common among regional networks (like those guys in Minnesota), it’s now more frequently seen at a global scale. These new peering players are different in that they are not primarily “carriers,” meaning they’re not in the business of carrying other people’s traffic; instead, they’re plenty worried about their own traffic. It’s like the Internet version of a university or company operating a shuttle bus between campuses, rather than relying on public transportation—or the huge companies that will do the same with a private airplane between cities. When there’s enough traffic between two points, it becomes worth it to move it yourself.

  They include some of the Internet’s most familiar names, including Facebook and Google. In recent years, both have put enormous resources into building out their global networks, in general not by laying new fiber-optic cables (although Google did partner on the construction of a new cable under the Pacific) but by leasing significant amounts of bandwidth within existing cables or buying individual fibers outright. In that sense, a network like Google’s or Facebook’s will be logically independent on a global scale: they each have their own private pathways, traveling within the existing physical pipes. The crucial advantage of this is that they can store their data anywhere they choose—primarily in Oregon and North Carolina, in Facebook’s case—and use their own networks to move it around freely on these private pathways parallel to the public Internet.
