The Formula: How Algorithms Solve All Our Problems... and Create More


by Luke Dormehl


  Algorithmizing the World

  Can everything be subject to algorithmization? There are two ways to answer this question. The first is to approach it purely on a technical level. At present, no, everything cannot be “solved” by an algorithm. At time of writing, for instance, recognizing objects with anything close to the ability of a human is still a massive challenge. A young child only has to be shown a handful of “training examples” in order to identify a particular object—even if they have never seen that object before. An algorithm designed for a similar task, however, will frequently require long practice sessions in which the computer is shown thousands of different versions of the same thing and corrected by a human when it is wrong. Even then, an algorithm may struggle with the task when carrying it out in a real-world environment, in which it is necessary to disambiguate contours belonging to different overlapping objects.
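
  To put the contrast in concrete terms, what follows is a minimal sketch of that training process (a toy perceptron-style classifier on invented data, not any particular vision system): the program makes a guess for each labeled example and adjusts its internal weights only when the guess turns out to be wrong, the algorithmic equivalent of being corrected by a human.

```python
import random

def train_classifier(examples, labels, epochs=50, learning_rate=0.1):
    """Learn weights for a toy two-class "object recognizer".

    examples: list of feature vectors (lists of floats)
    labels:   list of 0/1 labels, one per example
    Whenever the current guess is wrong, the weights are nudged toward
    the right answer: the machine's version of being corrected.
    """
    n_features = len(examples[0])
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(examples, labels):
            score = sum(w * xi for w, xi in zip(weights, x)) + bias
            guess = 1 if score > 0 else 0
            error = target - guess            # 0 when the guess was right
            if error != 0:                    # learn only from mistakes
                weights = [w + learning_rate * error * xi
                           for w, xi in zip(weights, x)]
                bias += learning_rate * error
    return weights, bias

# Toy data standing in for thousands of labeled example images,
# each reduced here to just two numeric features.
random.seed(0)
examples = ([[random.gauss(1, 0.5), random.gauss(1, 0.5)] for _ in range(1000)]
            + [[random.gauss(-1, 0.5), random.gauss(-1, 0.5)] for _ in range(1000)])
labels = [1] * 1000 + [0] * 1000
weights, bias = train_classifier(examples, labels)
```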

  Computer scientist and teacher John MacCormick similarly cites grading students’ work as a task for which algorithms remain unsuited, since it is too complex (and, depending on the subject, too subjective) for a bot to carry out.5 Could both of these tasks be performed algorithmically in the future as computers continue to get more powerful? Absolutely. It is for this reason that it is dangerous to bet against a bot. A decade ago, respected MIT and Harvard economists Frank Levy and Richard Murnane published a well-researched book entitled The New Division of Labor, in which they compared the respective capabilities of human workers and computers. In an optimistic second chapter called “Why People Still Matter,” the authors described a spectrum of information-processing tasks ranging from those that could be handled by a computer (e.g., arithmetic), to those that only a human could do. One illustration they gave was that of a long-distance truck driver:

  The . . . truck driver is processing a constant stream of [visual, aural and tactile] information from his environment. . . . [T]o program this behavior we could begin with a video camera and other sensors to capture the sensory input. But executing a left turn against oncoming traffic involves so many factors that it is hard to imagine discovering the set of rules that can replicate a driver’s behavior. . . . Articulating [human] knowledge and embedding it in software for all but highly structured situations are at present enormously difficult tasks. . . . Computers cannot easily substitute for humans in [jobs like truck driving].6

  At least one part of this assertion is correct: computers cannot easily substitute for humans when it comes to driving. Certainly Levy and Murnane were not mistaken at the time that they were writing. The year their book was released, DARPA announced its Grand Challenge, in which entrants from the country’s top AI laboratories competed for a $1 million prize by constructing driverless vehicles capable of navigating a 142-mile route through the Mojave Desert. The “winning” team made it less than eight miles (in several hours) before it caught fire and shuddered to a halt.

  A lot can change in a decade, however, as you will know from reading Chapter 3, in which I discuss the success of Google’s self-driving cars. Should such technologies prove suitably efficient, there is every possibility that they will take over the jobs currently occupied by taxi drivers and long-distance drivers.

  Similar paradigm shifts are now taking place across a wide range of fields and industries. Consider Amazon, for instance. In Amazon’s early days (when it was just an online book retailer, rather than the leviathanic “everything store” described in a recent biography) it featured two rival departments, whose squabbling serves as a microcosm of sorts for the type of fight regularly seen in the age of The Formula. One department was made up of the editorial staff, whose job it was to review books, write copy for the website’s home page and provide a reassuringly human voice to customers still wary about handing over their credit card details to a faceless machine. The other group was referred to as the personalization team and was tasked with the creation of algorithms that would recommend products to individual users.

  Of the two departments, it was this latter division that won both the short-term battle and the long-term war. Their winning weapon was called Amabot and replaced what had previously been person-to-person, handcrafted sections of Amazon’s website with automatically generated suggestions that conformed to a standardized layout. “The system handily won a series of tests and demonstrated it could sell as many products as the human editors,” wrote Brad Stone in his well-researched 2013 history of Amazon.

  After the editorial staff had been rounded up and either laid off or else assigned to other parts of the company, an employee summed up the prevailing mood by placing a “lonely hearts” advertisement in the pages of a local Seattle newspaper on Valentine’s Day in 2002, addressing the algorithm that had rendered them obsolete:

  DEAREST AMABOT: If you only had a heart to absorb our hatred . . . Thanks for nothing, you jury-rigged rust bucket. The gorgeous messiness of flesh and blood will prevail!7

  This is a sentiment that is still widely argued—particularly when algorithms take on the kind of humanities-oriented fields I have approached in this book. However, it is also necessary to note that drawing a definite line only capable of being crossed by the “gorgeous messiness of [the] flesh” is a little like Levy and Murnane’s statements about which jobs are safe from automation.

  Certainly, there are plenty of jobs and other areas of life now handled by algorithm that were previously thought to be the sole domain of humans. Facial recognition, for instance, was once considered a feat only a select few higher-performing animals—humans among them—could manage. Today algorithms employed by Facebook and Google regularly recognize individual faces among the billions of personal images uploaded by users.

  Much the same is true of language and automated translation. “There is no immediate or predictable prospect of useful machine translation,” concluded a U.S. National Academy of Sciences committee in 1965. Leap forward half a century and Google Translate is used on a daily basis, offering two-way translation between 58 different languages: 3,306 separate translation services in all. “The service that Google provides appears to flatten and diversify inter-language relations beyond the wildest dreams of even the E.U.’s most enthusiastic language parity proponents,” writes David Bellos, author of Is That a Fish in Your Ear?: Translation and the Meaning of Everything.8 Even if Google Translate’s results aren’t always perfect, they are often “good enough” to be useful—and are getting better all the time.
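
  The figure of 3,306 follows from simple counting: with 58 languages, every ordered source-to-target pairing counts as a distinct service, and 58 × 57 = 3,306. A one-line check, purely illustrative:

```python
languages = 58
# every ordered source -> target pairing is a separate translation service
print(languages * (languages - 1))  # 3306
```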

  The Great Restructuring

  What is notable about The Formula is how, in many cases, an algorithm can replace large numbers of human workers. Jaron Lanier makes this point in his most recent book, Who Owns the Future?, by comparing the photography company Kodak with the online photo-sharing social network Instagram. “At the height of its power . . . Kodak employed more than 140,000 people and was worth $28 billion,” Lanier observes. “They even invented the first digital camera. But today Kodak is bankrupt, and the new face of digital photography has become Instagram. When Instagram was sold to Facebook for $1 billion in 2012, it employed only 13 people. Where did all those jobs disappear to? And what happened to the wealth that all those middle-class jobs created?”9

  What causes shock for many people commenting on this subject is how indiscriminate the automation is. Nothing, it seems, is safe from a few well-designed algorithms offering speed, efficiency and value for money. Increasing numbers of books carry doom scenarios related to industries struggling in the wake of The Formula. In Failing Law Schools, law professor Brian Tamanaha points to U.S. government statistics suggesting that until 2018 there will be only 25,000 new openings available for young lawyers—despite the fact that law schools will produce around 45,000 graduates during that same time frame. It is quite possible, Tamanaha writes, that this ratio will one day be remembered as the “good old days.” Indeed, it is conceivable that law firms will eventually stop hiring junior and trainee lawyers altogether, passing much of this work over to artificial intelligence systems instead. In keeping with this, a number of experts predict that there will be between 10 and 40 percent fewer lawyers a decade from now than there are today.10

  As Erik Brynjolfsson and Andrew McAfee suggest in their pamphlet “Race Against the Machine,” this is not so much the result of a Great Recession or a Great Stagnation as it is a Great Restructuring.11 The new barometer for which jobs are safe from The Formula has less to do with the social class of those traditionally holding them than with a trade-off between cost and efficiency. Professions and fields that have evolved to operate as inefficiently as possible (lawyers, accountants, barristers and legislators, for example) while also charging the most money will be particularly vulnerable when it comes to automation. To survive—as economist Theodore Levitt famously phrased it in his 1960 article “Marketing Myopia”—every industry must “plot the obsolescence of what now produces their livelihood.”12

  In the new algorithmic world, it is the computer scientists and mathematicians who will be increasingly responsible for making cultural determinations and will ultimately thrive in the Great Restructuring. Others will suffer the “end of work” described by social theorist Jeremy Rifkin in his book of the same name. This is a workplace in which “fewer and fewer workers will be needed to produce the goods and services for the global population.” As the costs of everything from legal bills to entertainment come down, so too will the availability of many types of work. As André Gorz writes in Farewell to the Working Class, “The majority of the population [will end up belonging to] the post-industrial neo-proletariat which, with no job security or definite class identity, fills the area of probationary, contracted, casual, temporary and part-time employment.”13 As job security and class identity are replaced by automation and algorithmic user profiles, the world may finally get the “twenty-hour working week and retirement at fifty” that previous generations of techno-utopians dreamed about. It just won’t necessarily be voluntary.14

  Spoons Instead of Shovels

  There is a famous anecdote about the American economist and statistician Milton Friedman visiting a country in Asia during the 1960s. Taken to a worksite where a new canal was being excavated, Friedman was shocked to see that the workers were using shovels instead of modern tractors and earthmovers. “Why are there so few machines?” he asked the government bureaucrat traveling with him. “You don’t understand,” came the answer. “This is a jobs program.” Friedman considered for a second, then replied, “Oh, I thought you were trying to build a canal. If it’s jobs you want, then you should give these workers spoons, not shovels.”

  You could, of course, extend this to any number of technologies. Tractors are more efficient earthmovers than shovels, as shovels are more efficient than spoons, and spoons are more efficient than hands. The question is, where do we stop the process? The cultural theorist Paul Virilio once pointed out how the invention of the ship was also the invention of the shipwreck. If this is the case, then how many shipwrecks do we need before we stop building ships? Those looking for stories of algorithms run amok can certainly find them with relative ease. On May 6, 2010, the Dow Jones Industrial Average plunged 1,000 points in just 300 seconds—effectively wiping out close to $1 trillion of wealth in a stock market debacle that became known as the Flash Crash. Unexplained to this day, the Flash Crash has been pinned on everything from the impact of high-speed trading to a technical glitch.15

  Yet few people would seriously put forward the view that algorithms are, in themselves, bad. Indeed, it’s not simply a matter of algorithms doing the jobs that were once carried out manually; in many cases algorithms perform tasks that would be impossible for a human to perform. In particular, algorithms like those used by Google, which rely on unimaginably large datasets, could never be replicated by hand. Consider also the algorithm developed by mathematician Max Little that is able to diagnose Parkinson’s disease down the phone line by “listening” to callers’ speech patterns for vocal tremors that are inaudible to the human ear.16
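
  Little’s actual dysphonia measures are considerably more sophisticated, but as a rough, hypothetical illustration of the kind of feature such an algorithm might compute, the sketch below calculates “jitter”, the cycle-to-cycle wobble in pitch, from a list of pitch periods assumed to have already been extracted from the call audio.

```python
def jitter(pitch_periods):
    """Relative average perturbation of consecutive pitch periods.

    pitch_periods: durations (in seconds) of successive vocal cycles,
    assumed to have been extracted from the phone-call audio beforehand.
    Higher values indicate a less steady voice, one of the cues
    associated (in this illustrative sketch) with vocal tremor.
    """
    if len(pitch_periods) < 2:
        raise ValueError("need at least two pitch periods")
    diffs = [abs(a - b) for a, b in zip(pitch_periods, pitch_periods[1:])]
    mean_period = sum(pitch_periods) / len(pitch_periods)
    return (sum(diffs) / len(diffs)) / mean_period

# Example: a slightly unsteady voice at roughly 100 Hz (periods near 10 ms).
periods = [0.0100, 0.0103, 0.0098, 0.0101, 0.0104, 0.0097]
print(f"jitter = {jitter(periods):.3%}")
```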

  The French economist Jean Fourastié humorously asked whether prehistoric man felt the same trepidation at the invention of the bronze sword that 20th-century man did at the birth of the atomic bomb. As technologies are invented and prove not to be the end of humanity, they recede into background noise, where they become fodder for further generations of disruptive technology, just as the strongest lions eventually weaken and are overtaken by younger, fitter ones. Confusing the matter further is the complex relationship we enjoy with technology on a daily basis. Like David Ecker, the Columbia Spectator journalist I quoted in the last chapter, most of us hold concerns over “bad” uses of technology, while enjoying everything good technology makes possible. To put it another way, how did I find out the exact details of the Flash Crash I mentioned above? I Googled it.

  Objectivity in the Post-mechanical Age

  One topic I continued to butt up against during the writing of this book (and in my other tech writing for publications like Fast Company and Wired) is objectivity. In each chapter, the question of objectivity never strayed far from either my own mind or the conversations I enjoyed with the technologists I had the opportunity to interview. In Chapter 1, it arose with the idea that there are certain objective truths algorithms can ascertain about an individual, be those the clinician-free means by which Larry Smarr diagnosed himself with Crohn’s disease or the “dividual” profiles constructed by companies like Quantcast. In Chapter 2, the dream of objectivity revolved around the ultrarational matching process at the heart of algorithmic dating, supposed to provide a “better” way of pairing us with romantic partners. In Chapter 3, objectivity was a way of making the law fairer, by creating legal algorithms that would ensure the law was enforced the same way every time. Finally, in Chapter 4, objectivity was about the universal rules that define something as a work of art.

  Objectivity is a term that is often associated with algorithms, and with the companies in thrall to them. For example, Google notes in its “Ten Things We Know to Be True” manifesto that “our users trust our objectivity.” Were we to think for too long about the fact that we expect a publicly traded company to be entirely objective (or even that such a thing is possible when it comes to filtering and ranking information), we might see the fundamental flaw in this expectation—but then again, when Google is providing search results in 0.21 seconds, there isn’t a great deal of time to think. “This is a very unique way of thinking about the world,” says scholar Ted Striphas, author of The Late Age of Print, who has spent the past several years investigating what he calls algorithmic culture. “It’s about degrees of pure objectivity, where you are never in the realm of subjectivity; you’re only in the realm of getting closer and closer to some inexorable truth . . . You are never wrong, you’re only ever more right.”

  As it happens, Google may actually be telling the truth here: their users really do seem to trust in their objectivity. According to a survey of web users carried out in 2005, only 19 percent of individuals expressed a lack of trust in their search engines, while more than 68 percent considered the search engines they used regularly to be fair and unbiased.17

  Science-fiction author Arthur C. Clarke famously wrote that “any sufficiently advanced technology is indistinguishable from magic.” Just as photography appeared magical to people over 100 years ago, so too does the speed with which algorithms work make us view them as authoritative and unaffected by human fallibility. Paste a block of text into Google’s translation services and in less than a second its algorithms can transform the words into any one of 58 different languages. The same is true of Google’s search algorithms, which, as a result of their “knowledge” about individual users, allow our specific desires and requirements to be predicted with almost preternatural accuracy. “Google works for us because it seems to read our minds,” says media scholar Siva Vaidhyanathan, “and, in a way, it does.”18

  As with magic, our reverence for Google’s work comes partly because we see only the end result and none of the workings. Not only are those workings black-boxed and obscured; they are also practically instantaneous. This effect doesn’t only serve to fool the technologically uninformed. At the start of his book Nine Algorithms That Changed the Future, accomplished mathematician and computer scientist John MacCormick writes, “at the heart of every algorithm . . . is an ingenious trick that makes the whole thing work.” He goes on to expand upon the statement, suggesting that:

  Since I’ll be using the word “trick” a great deal, I should point out that I’m not talking about the kind of tricks that are mean and deceitful—the kind of trick a child might play on a younger brother or sister. Instead, the tricks . . . resemble tricks of the trade or even magic tricks: clever techniques for accomplishing goals that would otherwise be difficult or impossible.19

  While the distinction is perhaps well intentioned, MacCormick’s error lies in his casual assumption of a sense of algorithmic morality. Stating that algorithms and the goals they aim to accomplish are neither good nor bad (although, to return to Melvin Kranzberg’s first law of technology, nor are they neutral) seems an extraordinarily sweeping and unqualified statement. Nonetheless, it is one that has been made by a number of renowned technology writers. In a controversial 2008 article published in Wired magazine, journalist Chris Anderson announced that the age of big datasets and algorithms equaled what he grandly referred to as “The End of Theory.” No more would we have to worry about the elitist theories of so-called experts, Anderson said.

  There is now a better way. Petabytes allow us to say: “Correlation is enough.” We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.20

 
