The Formula: How Algorithms Solve All Our Problems... and Create More


by Luke Dormehl


  Deleuze expanded on this idea later in his life by discussing what he called the “society of control.”45 Echoing Toffler’s Third Wave thesis that we have progressed from a society based on the production of physical goods to an economy founded on information and financialization, Deleuze examined how the structures of power and control have changed along the way. In previous, disciplinary societies, he argues, control occurred within specific enclosed sites, such as the school, the workplace or the family home. Each of these came with its own unique set of rules, which applied only to that site. Between sites, people were relatively unmonitored, and there was still room for uncontrolled life to occur. This changes in a control society, in which regulation is continuous, albeit less obvious than in a disciplinary society. Since power is everywhere, it also appears to be nowhere. Rather than being forced to fit into preexisting molds, Deleuze argues, we are encased in an enclosure that transforms from moment to moment—like a giant sieve whose mesh strains and bulges from point to point.

  This seems to do a good job of summing up the new algorithmic identity. In the digital age, everyone can have a formula of his or her own. Companies like Quantcast and Google get no benefit at all from everyone acting in the same way, since uniform behavior allows no market segmentation to occur. It is for this reason that articles like Steven Poole’s May 2013 cover story for the New Statesman, “The Digital Panopticon,” invoke the wrong metaphor when it comes to big data and algorithmic sorting.46

  The panopticon, for those unfamiliar with it, was a prison designed by English philosopher and social theorist Jeremy Bentham in the late 18th century. Circular in design and with a large watchtower in the center, the prison was built on the theory that prisoners would behave as if they were being watched at all times—with the mere presence of the watchtower being as effective as iron bars in regulating behavior and ensuring that everyone acted the same way. (A similar idea is behind today’s open-plan offices.)

  As former Wired editor Chris Anderson argues in The Long Tail, modern commerce depends upon market segmentation.47 Unless you’re selling soap, aiming a product at a homogeneous mass audience is a waste of time. Instead, vendors and marketers increasingly focus on niche audiences. For niches to work, companies need to know our eccentricities, so that they can figure out which tiny interest group we belong to. In this sense, an authoritarian device like the panopticon—which makes everyone act the same way—is counterproductive. In algorithmic sorting, audiences know they are being surveilled; they just don’t care. The apparatus of capture (how companies recognize us) and the apparatus of freedom (buying the products that best sum us up as individuals) are entwined so totally as to be almost inseparable. As French philosopher Jacques Ellul argued in The Technological Society, the citizens of the future (he was writing in the early 1960s) would have everything their hearts desired, except their freedom.48 This chilling concept is one that was more recently expanded upon by media historian Fred Turner:

  If the workers of the industrial factory found themselves laboring in an iron cage, the workers of many of today’s post-industrial information firms often find themselves inhabiting a velvet gold mine . . . a workplace in which the pursuit of self-fulfillment, reputation, and community identity, of interpersonal relationships and intellectual pleasure, help to drive the production of new media goods.49

  In her work on digital identity, MIT psychoanalyst Sherry Turkle talks about our different “selves” as windows. “Windows have become a powerful metaphor for thinking about the self as a multiple, distributed system,” she wrote in her 1995 book Life on the Screen, a magnum opus that landed her on the cover of Wired magazine. “The self is no longer simply playing different roles in different settings at different times. The life practice of windows is that of a [de-centered] self that exists in many worlds, that plays many roles at the same time.”50

  This view of the self as a “multiple, distributed system” (or else a slightly clunky Microsoft operating system) was meant to be empowering. The de-centered, windowed self means that the woman who wakes up in bed next to her husband can walk downstairs, close the “wife” window and open the “mother” one in order to make breakfast for her daughter, before hopping in the car and driving to work, where she will once again clear her existing windows and open a new one titled “lawyer” or “doctor.” This is, of course, the thing about windows: they can be opened and closed at will.

  What Fred Turner describes, on the other hand, is a world in which multiple subjectivities exist, but these subjectivities constantly crash into one another. Unlike the “windowed self” or the “segmentary animal” who fills different roles at school, in the workplace and at home, where The Formula is involved these roles are not isolated to one location but affect one another in intricate, granular and often invisible ways.

  Keeping Up Appearances

  It might not even matter whether specific pieces of data shaping our identity are “true” or not. Whether we are physically male or female, or consider ourselves to be male or female, what will ultimately determine how we are treated online are the conclusions reached by algorithms. A New York Times article from April 2013 related the story of an unnamed friend of the writer’s who received, by mail, a flyer advertising a seminar for patients suffering from multiple sclerosis, hosted by two drug companies, Pfizer and EMD Serono. Spam e-mails and their physical counterparts are, of course, nothing new. What was alarming about this situation, however, was not the existence of the message, but its targeting. The recipient was not an MS sufferer, although she had spent some time the previous year looking up the disease on a number of consumer health sites. As the arrival of the flyer proved, her name and contact details had been added, somewhere, to a database of MS sufferers in Manhattan. The ramifications of this were potentially vast. “Could [my friend], for instance, someday be denied life insurance on the basis of that profile?” the author asks. “She wanted to track down the source of the data, correct her profile and, if possible, prevent further dissemination of the information. But she didn’t know which company had collected and shared the data in the first place, so she didn’t know how to have her entry removed from the original marketing list.”51

  John Cheney-Lippold, the scholar who coined the term “new algorithmic identity,” says that the top-performing male students in his classroom are regularly characterized as female online. “Who is to say that they’re not female in that case?” he asks rhetorically. In a world in which terms like “male” and “female” are simply placeholders for specific types of behavior, traditional distinctions of discrimination break down. With this in mind, who is to say what “gender” or “racial” discrimination will look like by, for example, the year 2040? If gender discrimination is based not on anything physical but rather on the inferences of algorithms, can a man be barred from certain services because he skews statistically female in his clicks? What does it mean for the future of racial politics if a young white male growing up in a working-class environment in inner-city Detroit is classified “blacker” than an older, educated African-American female living in Cambridge, Massachusetts? Could a person be turned down for a job on the basis that it is a role black males statistically do worse in (whatever that might be), even if the individual in question is physically a white female?

  These types of discriminatory behavior could prove challenging to break, particularly since they are largely invisible and in most cases users will never know how they have been categorized. Unlike the shared history that was drawn on to help bring about the civil rights or women’s lib movements, algorithmically generated consumer categories have no cultural background to draw upon. What would have happened in the case of Rosa Parks’s December 1955 protest—which garnered the support of the African-American community at large—had she been discriminated against not purely on the basis of her skin color, but on several thousand uniquely weighted variables based upon age, location, race and search term history? There is racism, ageism and sexism, but is there an “ism” for every possible means of demographic and psychographic exclusion? Unlike previous discriminatory apparatuses, today’s categories of differentiation may be multiply cross-referenced to the point where it becomes difficult to isolate the single overriding factor that leads to a person being denied credit—or for that person to proactively alter his or her perceived desirability.

  It is also worth noting that gender and race do not exist as stable concepts but rather in a state of constant flux. Like the concept of a “character” that a person builds but never finishes, specific categories like maleness are not simply inferred by algorithms and then established from that point on; they have to be reinforced on a constant basis.52 A user may be considered male, but if they then begin searching for more “female” subjects, they will be reclassified. The man who regularly buys airline tickets might be recognized as increasingly female, while a female with a keen interest in sports or world news becomes statistically male.

  Categorization can also move beyond what we might traditionally think of as categories. If concepts like “creativity” and “perceptiveness” are successfully quantified and linked to consistent types of behavior, these might take on as much importance as gender or race. It was this world that Deleuze was addressing when he predicted that our credit cards and social security numbers would increasingly become more significant identity markers than the color of our skin or the place we went to school. “A man is no longer a man confined,” he wrote, “but a man in debt.”

  CHAPTER 2

  The Match & the Spark

  Each summer, thousands of freshly qualified doctors graduate from medical school in the United States. The next step of their training involves being paired up with a teaching hospital where they can continue to learn their craft, a process referred to as residency.

  Deciding which doctors go to which hospitals involves a two-way matching process in which doctors and hospitals each have their own lists of preferences, and the task is to match the two sides up in such a way that everyone is as satisfied as possible with the outcome.

  Of course, this is easier said than done. Among both doctors and hospitals, some are more popular and in higher demand than others. A hospital might be geographically preferable, for example: perhaps situated in a big city, or in an especially scenic location. It might be preferable based on its overall reputation, or because of a member of staff who is held in particularly high regard within the medical teaching community.

  Think about how difficult it can be to decide, by consensus, where to go on a family holiday. Now imagine that each location around the world can only be visited by one family at a time, but that all families still have to go on holiday during the same week. Now imagine that those holiday destinations also have their own preferences about which families they want to welcome.

  In 1962, two American economists named David Gale and Lloyd Shapley set out to devise an algorithmic solution to just this conundrum. What they came up with was a solution to the Stable Marriage Problem—the procedure behind what is known as “The Match.”1

  To explain The Match, picture a remote island, cut off from the rest of civilization. On this island live an even number of men and women of marriageable age. The problem asks that all of these people be paired up in what we might consider stable relationships. To explain what is meant by “stable,” first allow me to explain what counts as an “unstable” marriage. Suppose that two of the marriageable men on this island are named James and Rob, while two of the marriageable women are named Ruth and Alice. James is married to Ruth, although he secretly prefers Alice. Alice is married to Rob, but she secretly prefers James. Both, in other words, are married to other people, but would be better off if they were matched together. It is not beyond the limits of our imagination to suppose that one day James and Alice will run off together—hence the lack of stability. The goal of the algorithm is to match everyone up in such a way that no two people would rather be paired with each other than with their respective partners.
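  For readers who prefer code to prose, the stability test can be made concrete. Below is a minimal Python sketch—the function name and data layout are my own illustration, not anything from Gale and Shapley—that scans a finished matching for exactly the kind of runaway pair that James and Alice represent:

```python
def is_stable(matches, men_prefs, women_prefs):
    """Check a matching for 'runaway' pairs.

    matches: dict mapping each man to his wife.
    men_prefs / women_prefs: dicts mapping each person to a list of
    everyone of the opposite sex, ranked from most to least preferred.
    """
    husband_of = {woman: man for man, woman in matches.items()}
    for man, his_prefs in men_prefs.items():
        wife_rank = his_prefs.index(matches[man])
        # Every woman this man secretly prefers to his own wife...
        for woman in his_prefs[:wife_rank]:
            her_prefs = women_prefs[woman]
            # ...must prefer her current husband to him; otherwise the
            # two of them would run off together, like James and Alice.
            if her_prefs.index(man) < her_prefs.index(husband_of[woman]):
                return False
    return True
```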

  There are a few rules to consider before any actual matching takes place. At the start of the problem, every marriageable man and woman on the island is asked to compile a list of preferences, ranking every member of the opposite sex from most to least appealing. James might list Alice first, then Ruth, then Ruth’s friend Charlotte, and so on. Only men may propose to women, although women have the right not only to refuse to marry a particular man if they deem him a bad match, but also to accept proposals on a tentative basis, keeping their options open in case someone better comes along. The marriage proposal process works in rounds, which I will describe as “days” in order to keep things simple.

  On the morning of the first day, every man proposes to his first choice of wife. Certain women will be in the fortunate position of having received multiple proposals, while others will have received none. On the afternoon of the first day, each woman rejects all suitors except for her current best available option, whom she tentatively agrees to marry, knowing that she can ditch him later on. (This is referred to as a “deferred acceptance.”) Come dawn of the second day, those men who remain single propose to their next-best choices. That afternoon, the women who accepted marriage on day one have the chance to trade up, if the man who proposed to them on day two is, in their view, preferable to the person they are engaged to. This process continues for days three, four, five, et cetera, until all couples are matched in stable relationships. At this point the algorithm terminates.
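  The procedure just described is known in the literature as the Gale-Shapley, or “deferred acceptance,” algorithm, and it is compact enough to sketch in a few lines of Python. One caveat: for simplicity the sketch below processes one unengaged man at a time rather than in synchronized “days,” which (with men proposing) arrives at the same final matching. The names and data layout are, again, my own illustration:

```python
def stable_match(men_prefs, women_prefs):
    """Men propose in preference order; each woman tentatively keeps
    her best offer so far ('deferred acceptance') and trades up when
    a better suitor comes along."""
    # rank[w][m]: how highly woman w ranks man m (0 = first choice)
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}  # next woman each man will try
    fiance = {}                              # woman -> man she is engaged to

    free_men = list(men_prefs)
    while free_men:
        man = free_men.pop()
        woman = men_prefs[man][next_choice[man]]
        next_choice[man] += 1
        if woman not in fiance:
            fiance[woman] = man              # first offer: accept tentatively
        elif rank[woman][man] < rank[woman][fiance[woman]]:
            free_men.append(fiance[woman])   # she ditches her fiance for him
            fiance[woman] = man
        else:
            free_men.append(man)             # rejected; he tries his next choice
    return {man: woman for woman, man in fiance.items()}
```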

  Here’s an example:

  Name        First Choice   Second Choice   Third Choice   Fourth Choice
  Alice       Tim            James           Rob            Rajiv
  Ruth        James          Rajiv           Tim            Rob
  Charlotte   Rob            Rajiv           Tim            James
  Bridgette   Rajiv          Rob             James          Tim
  James       Alice          Bridgette       Charlotte      Ruth
  Rob         Alice          Ruth            Charlotte      Bridgette
  Tim         Ruth           Bridgette       Charlotte      Alice
  Rajiv       Charlotte      Alice           Ruth           Bridgette

  On day one, each man proposes to his top choice. James and Rob propose to Alice, Tim proposes to Ruth, and Rajiv proposes to Charlotte. Of Alice’s two proposals she prefers James to Rob and so accepts his offer, knowing that she might well do better later on. Ruth, meanwhile, accepts Tim’s proposal, while Charlotte accepts Rajiv’s. On day two, Rob—rejected by Alice on the first day—proposes to Ruth. Rob, however, is Ruth’s fourth choice and she remains engaged to Tim. On day three, Rob tries again and asks Charlotte to marry him. Rob is Charlotte’s first choice and so she ditches Rajiv and becomes engaged to Rob. On day four, the newly single Rajiv asks Alice to marry him, although she elects to stay with James. On day five, Rajiv then asks Ruth to marry him, and Ruth breaks it off with Tim and becomes engaged to Rajiv. On day six, Tim proposes to Bridgette who, though he is her fourth and last choice of match, has no other proposals and so agrees to marry him. The final couples are therefore as follows:

  James and Alice

  Rob and Charlotte

  Rajiv and Ruth

  Tim and Bridgette
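  Transcribing the preference table into the two sketches from earlier reproduces this result exactly, and confirms that no runaway pairs remain:

```python
men = {
    "James": ["Alice", "Bridgette", "Charlotte", "Ruth"],
    "Rob":   ["Alice", "Ruth", "Charlotte", "Bridgette"],
    "Tim":   ["Ruth", "Bridgette", "Charlotte", "Alice"],
    "Rajiv": ["Charlotte", "Alice", "Ruth", "Bridgette"],
}
women = {
    "Alice":     ["Tim", "James", "Rob", "Rajiv"],
    "Ruth":      ["James", "Rajiv", "Tim", "Rob"],
    "Charlotte": ["Rob", "Rajiv", "Tim", "James"],
    "Bridgette": ["Rajiv", "Rob", "James", "Tim"],
}

matches = stable_match(men, women)
for man, woman in sorted(matches.items()):
    print(man, "and", woman)
# James and Alice
# Rajiv and Ruth
# Rob and Charlotte
# Tim and Bridgette

print(is_stable(matches, men, women))  # True
```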

  There are several neat attributes to this particular algorithm. Among its most impressive qualities is the fact that not only does it manage to find a partner for everyone, but it does so efficiently and in a way that leaves every couple stable. It is, of course, only in the most unlikely of cases that everyone will receive his or her first choice of partner, and the odds lengthen as the numbers increase. Four boys and four girls may well all receive their first choices, but would 40 boys and 40 girls? Or 400?

  This algorithm, it should be noted, favors whichever side does the proposing (in this case the men). Were it to work the other way around from this male-optimal scenario (with the women asking the men to marry them, and the men doing the rejecting or the accepting) the algorithm for solving this particular problem would have terminated after one day—since each woman’s first choice happens to be a different man, every proposal is accepted on the spot—with the couples arranged as follows:

  Alice and Tim

  Ruth and James

  Charlotte and Rob

  Bridgette and Rajiv
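  Using the earlier sketch, checking this is simply a matter of swapping the two preference dictionaries so that the women become the proposers:

```python
# Women propose this time: pass the dictionaries in the opposite order.
for woman, man in sorted(stable_match(women, men).items()):
    print(woman, "and", man)
# Alice and Tim
# Bridgette and Rajiv
# Charlotte and Rob
# Ruth and James
```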
