The Silo Effect


by Gillian Tett


  Numerous other anthropologists have followed a similar path, studying not just the Western world but some of the most modern and complex parts of it. In the late twentieth century Karen Ho (at the University of Minnesota) spent several years studying the habitus of Wall Street banks, using the same intellectual framework that Bourdieu developed among the Kabyle to understand the mind-set of bankers.37 Caitlin Zaloom, another American anthropologist, has studied financial traders in Chicago and London.38 Alexandra Ouroussoff, a British anthropologist, has studied credit rating agencies.39 Douglas Holmes (at Binghamton University) has analyzed central banks and explored how institutions such as the European Central Bank and Bank of England use words and silences to shape markets.40 Annelise Riles (from Cornell Law School) has explored how international lawyers treat finance.41 Genevieve Bell (an anthropologist employed by Intel) has analyzed computing culture. Danah Boyd, a self-styled “digital anthropologist” employed by Yahoo, has explored how social media has shaped American teenagers.42 These are just a tiny sample of the work that thousands of anthropologists have done, and are still doing, in companies, government departments, urban communities, rural villages, and so on. However, wherever anthropologists work, their research tends to share particular traits: a focus on watching real life, usually through participant observation; a desire to connect all parts of society, rather than concentrate on one tiny corner; a commitment to analyzing the gap between rhetoric and reality, or the social silences that mark our lives; and, above all else, a passion for anthropos, the intellectual endeavor that drove Bourdieu: understanding the spoken and unspoken cultural patterns that shape human existence.

  However, by the time that the brilliant Frenchman died, his heritage was marked by a profound irony: though he had been highly influential in anthropology during the last couple of decades of his career, Bourdieu stopped calling himself an “anthropologist,” and started describing himself as a “sociologist” instead. He did that partly because he was offered a juicy job as “Professor of Sociology” at a Paris university. Another factor, however, was that as the twentieth century wore on, the disciplines of anthropology and sociology increasingly blended into one. As anthropologists started to study complex Western societies and sociologists conducted more on-the-ground research, it became harder to draw distinctions between the two academic fields. In any case, Bourdieu believed that it was ridiculous to fret too much about academic boundaries or labels. He did not want to put any discipline into a specialist box, and he hated the way that universities tended to classify academics into different, competing tribes. To Bourdieu, “anthropology” was not really an academic label or self-contained discipline, but an attitude toward life. It was an intellectual prism, or mode of inquiry, that anyone could use to get a richer understanding of the world, in combination with other fields such as economics and sociology. To be an anthropologist you did not need to sit inside a university, or have a doctorate. Instead, you needed to be humbly curious, ready to question, criticize, explore, and challenge ideas; to look at the world with fresh eyes and think about the classification systems and cultural patterns that we take for granted. “Anthropology demands the open-mindedness with which one must look and listen, record in astonishment and wonder that which one would not have been able to guess,” as Margaret Mead, the former doyenne of the American anthropology world, once observed.43

  That carries wide lessons, not least because it means that anthropology can be applied to many fields. In my case, for example, I started my career doing a PhD in anthropology in a classic manner. I traveled to Soviet Tajikistan, and spent many months in a remote mountainous village engaged in participant observation of the type pioneered by Malinowski. I wore Tajik clothes, lived with families, helped them in their daily chores, and spent hours observing the villagers, studying how this community used marriage rituals to express ethnic identity. (Essentially I concluded that the villagers maintained their Muslim identity in a supposedly atheist communist system by juggling these marriage rituals and symbols, subdividing their space and using marriage ties to define their social group.) However, like Bourdieu, I later became frustrated with what I saw of the world of academic anthropology. Although the discipline promotes the idea of taking an interconnected perspective on the world, university departments of anthropology can be surprisingly introverted and detached from the wider world. (That is partly because the discipline tends to attract people who are better at listening and observing than at thrusting themselves into the limelight. Its adherents also tend to be antiestablishment, and wary of dealing with the institutions of power, perhaps because they analyze them so extensively.) I, on the other hand, was eager to interact with the world in a more dynamic manner. So when an opportunity arose to move into journalism, I grabbed it; it seemed a place where I could use some of my training in observation and analysis. Writing stories felt like the anthropological equivalent of being on a speed date.

  But once you have conducted anthropology research, as I did, you never lose that perspective. Studying anthropology tends to change the way you look at the world. It leaves a distinctive chip in your brain, or lens over your eye. Your mind-set becomes instinctive: wherever you go or work, you start asking questions about how different elements of a society interact, looking at the gap between rhetoric and reality, noting the concealed functions of rituals and symbols, and hunting out social silences. Anyone who has been immersed in anthropology is doomed to be an insider-outsider for the rest of their life; they can never take anything entirely at face value, but are compelled to constantly ask: why? Anthropology, in other words, makes you permanently curious, cynical, and relativist. Adding that perspective to other fields of inquiry enhances your analysis, just as salt adds flavor to food.

  I certainly would not pretend that studying anthropology is the only way to get this insider-outsider perspective, or to question the cultural patterns around us. We all know some individuals who have an innate ability to question cultural rules, pierce social silences, see the story behind the story, and analyze social patterns, yet have never studied anthropology. But we also know many people who do not question the world; in fact, most people never analyze or question the cultural patterns—or habits—that guide them. Most of us are unthinking creatures of our environment, in the sense that we rarely challenge the ideas we inherit. But the key point is this: with or without formal training in anthropology, we all need to think about the cultural patterns and classification systems that we use. If we do, we can master our silos. If we do not, they will master us.

  Moreover, when people are mastered by rigid silos, this can cause debilitating problems, as I shall now explain in the following chapters, starting with the story of Sony—and its peculiar “octopus pots.”

  2

  OCTOPUS POTS

  How Silos Crush Innovation

  “I came to see, in my time at IBM, that culture isn’t just one aspect of the game. It is the game.”

  —Lou Gerstner, former CEO of IBM1

  THE MOOD IN THE VAST, majestic Venetian Ballroom in the Sands Expo and Convention Center in Las Vegas was hushed and excited. Hundreds of technology journalists and electronics experts sat before a huge video screen, suspended on a stage between ornate pillars and red velvet curtains. The lights went down and a giant animated mouse appeared on the screen, whiskers twitching; it was a character from Stuart Little, a hit children’s film of 1999.

  With a squeaky voice, the mouse announced some of the recent creative triumphs of Sony, the Japanese electronics and media group. “But you don’t just want to hear from me! Oh no! I gotta get out of the way for Nobuyuki Idei—Ideeeeiiiiii,” the mouse squeaked, as he leapt around a cartoon kitchen. “The CEO of Sony, Soniiiiiiii!”2 A tall, solemn, distinguished Japanese man stood up. The laughter died away. Once a year in November, the titans of the computing and electronic world gathered in Las Vegas for the Comdex trade fair for the computing industry. Just the day before, on November 13, 1999, Bill Gates, the legendary founder of Microsoft, had declared in a speech that the world was on the verge of an innovation revolution.3

  Now Idei was due to make the second big keynote. The audience was eager to hear how Sony would respond to this upheaval. Twenty years earlier, the Japanese group had earned extraordinary success by launching the popular Sony Walkman. The device had changed how millions of consumers listened to music, and earned Sony a reputation for being a hotbed of innovation. In the 1960s and 1970s it had produced radios and televisions; in the 1980s, camcorders, digital cameras, and video recorders; and in the 1990s Sony had jumped into computers and developed a vast music and film empire based out of America, generating hits such as Stuart Little.

  But could this successful corporation adapt to the Internet? Could it produce another hit like the Walkman? Idei knew that expectations for his speech were sky-high. He was determined not to disappoint. “The Internet and high-speed connection networks are both a threat and opportunity for us all,” he solemnly told the audience, likening the digital revolution to “the giant meteor that destroyed the dinosaurs” many millennia ago, in terms of its potential impact on traditional companies. “What we are and will be is a broadband entertainment company,”4 he added in careful, precise English. He had spent his life rising through the ranks of the giant Sony global edifice both in Japan and America.

  Next to Idei, on the Venetian Ballroom stage, sat George Lucas, the film director. “I am playing hooky from writing Star Wars Episode 2!” he declared, sparking laughter across the ballroom, before explaining that Sony’s new products were transforming how Star Wars and other films were made. “Whatever I can imagine, I know I can put on the screen somehow. This is it. This is the revolution, and I’m in the middle of it. It’s a great time to be alive.”5

  The excitement in the hall mounted. The Sony executives unveiled more gadgets, such as a new PlayStation console. “Just the fact that Lucas came on stage was impressive,” said Timothy Strachan, sales manager of Sydney-based Total Peripherals. “Having been in the industry for 13 years, it’s great to see Sony bring a game machine like the PlayStation 2 into the world of computing.”6 Then Steve Vai, a wild-haired guitar virtuoso, appeared on stage. He cut a sharp contrast to the impeccably neat, white-shirted Japanese executives. But Idei turned to Vai and asked him to play something. Chords ripped through the hall from his guitar. Then Vai casually pulled out a little device, the size of a packet of chewing gum, and revealed that this was yet another Sony invention: a digital music player called the “Memory Stick Walkman.”

  Howard Stringer, a British man with a cherubic face, who was running Sony’s operations in America, stood up and took the device. “Listen!” he said, speaking with a clipped, upper-crust British accent that was sometimes dubbed “BBC English.” The device was tiny. However, the chords were crystal clear. The audience applauded. The watching journalists and technical experts suddenly understood what was going on: the same company that had changed how the world listened to music back in 1979 by launching the Walkman was attempting to repeat the same trick. This time, however, it was producing a digital version of the Walkman, suitable for the Internet age.

  Would it work? On that exuberant day in the Venetian Ballroom in November 1999, most observers would have said yes. After all, Sony seemed to have everything that a company might need to build a twenty-first-century successor to the Walkman: creative consumer electronics engineers, slick designers, a computing division, expertise with video games, and a 50 percent stake in BMG, a music label bursting with famous artists such as Michael Jackson and Vai. No other company had so many advantages under one roof: not Samsung, Microsoft, Panasonic, or Steve Jobs’s Apple.

  But as the audience sat, gazing in awe, something peculiar occurred. Idei stepped forward and waved a second device. It was a Vaio MusicClip, a pen-size digital audio player. He explained that this device had also recorded the guitar music. The chords from Vai’s guitar pierced the hall again.

  By the standards of normal corporate strategy, this profusion of devices was profoundly odd. When consumer companies unveil new products, they tend to keep the presentation simple, to avoid confusing customers (or their own salesmen). Typically, they offer only one technology at a time in each specific niche. That was what Sony had done with the original, iconic Walkman. But now Sony was unveiling not one, but two different digital Walkman products, each of which used different proprietary technology. Indeed, soon afterward the company produced a third offering, known as the “Network Walkman.” The devices competed with each other. The company seemed to be fighting itself.

  To the audience, the risks of that strategy were not immediately clear. The plethora of devices was taken as a sign of the company’s eclectic, creative genius. But years later, when some of Sony’s own leaders looked back on that day in Las Vegas, they realized that all those devices had been an ominous signal of disaster. The reason Sony unveiled not one but two different digital Walkman devices in 1999 was that the company was completely fragmented: different departments of the giant Sony empire had each developed their own—different—digital music devices, using proprietary technology, known as ATRAC3, that was not widely compatible. None of these departments, or silos, was able to agree on a single product approach, or even to communicate with each other to swap ideas and settle on a joint strategy.

  This had debilitating consequences. Within a couple of years Sony had dropped out of the digital music game, paving the way for Apple to storm the market with the iPod. The only thing more startling than the presence of these silos was that so few people inside Sony could see just how crazy the situation had become, far less change it. Sony was a company sliding into tribalism, but its employees were so used to this pattern that they had failed to notice it at all.

  In some sense, this makes Sony no different from any other social group. As I explained in the previous chapter, humans always assume that the way that they organize the world around them is entirely natural and inevitable. The Kabyle Berbers whom Bourdieu studied considered it normal to put men in one part of the house and women in the other. The employees in New York’s City Hall took it for granted that the Fire Department should sit in a different section from other staff, or that government statistics should be kept in separate databases. So too, most of the employees at Sony assumed it was natural that the people developing computers sat in a separate department from the part of the company that was handling music.

  Some of the managers and staff could see that this pattern had drawbacks. Stringer, the British man running Sony’s American operations, was worried. Indeed, in the years after Sony unveiled its different—competing—digital Walkman devices, Stringer threw himself into the task of fighting the silos, with sometimes comical results. “What went wrong at Sony? Silos were a big part of it all,” Stringer later observed.

  However, on that heady day in 1999, the Sony staff felt too excited about their new gadgets to start questioning their cultural patterns. They were flush with success, and could not see the disaster in front of their noses, or the threat to the company symbolized by those competing gadgets on the Las Vegas stage.

  SONY DID NOT START as a bureaucratic behemoth. When the company first sprang to life after World War II,7 the Japanese group had an unusual spirit of flexibility and creativity. Traditionally, Japanese society has been marked—or marred—by a sense of rigid hierarchy and corporate discipline. Employees generally do not move between companies and juniors do not challenge their seniors or take risks by overturning established patterns. This sense of hierarchy and conformity was particularly strong during the militarist period of the 1930s. However, after Japan lost the war in 1945, the country became more open to change for a few years. So many senior men had been killed or discredited in the conflict that it was possible for young men to challenge the status quo.

  It was in these conditions that Sony was born. The group was founded by two young men, Akio Morita and Masaru Ibuka, who had first met on a military base in 1944. They were each working on an Imperial Army engineering project to develop a heat-seeking missile. They did not appear to have much in common: Morita was a young, well-bred scientist and heir to his family’s ancient sake-brewing business; Ibuka a gruff, antisocial engineer who hailed from a humble background. But the two men shared a passion for engineering and an iconoclastic view. So when Ibuka decided to use his engineering skills to set up a workshop in a bomb-damaged department store in central Tokyo, he persuaded Morita to leave his ancestral sake business and join the entrepreneurial venture.8

  The new company started with a dozen employees and the equivalent of $500 of capital, and tried ventures ranging from the production of electric rice cookers to selling sweetened miso soup and building a miniature golf course on a burned-out tenement lot. But then the group started performing radio repairs, and tried to copy a type of tape recording device that American soldiers were bringing into Japan.9 It was a challenging venture: Japan was so short of tools at the time that the only way to make magnetic tape for cassettes was to grind down magnets and glue the powder onto plastic, using chemical mixes cooked over a stove. “[We] made those first tapes by hand,” Morita recalled. “We would cut enough tape for a small reel and then we would lay out the long strip on the floor of our laboratory. [But] our first attempts to get a magnetic material were failures . . . the magnets we ground into powder were too powerful. . . . We ended up painting the coating on by hand with fine brushes made of the soft bristles from a raccoon’s belly.”10 But by 1950, Morita and Ibuka had found a way to copy those American tape recorders on a large scale and were selling them inside Japan, under the name Tokyo Tsushin Kogyo (Tokyo Telecommunications Engineering). Then Ibuka visited the United States and persuaded Western Electric, the parent company of Bell Laboratories, to sell him a license to manufacture transistors for $25,000. The company started making a tiny new portable radio for the Japanese market. They called this the “pocketable radio,” and it quickly sold on a massive scale, under a new brand name, “Sony” (chosen since it was easy to pronounce). “Miniaturization and compactness have always appealed to the Japanese,” Morita explained.11

 
