The Formula: How Algorithms Solve All Our Problems... and Create More

by Luke Dormehl


  It is into this frame that “Ambient Law” enters.27 Ambient Law refers to the idea that instead of requiring lawyers to call attention to items of legal significance around us, laws can be both embedded within and enforced by our devices and environment. Ambient Law is a vision of the future in which autonomic smart environments take an unprecedented number of decisions for and about us on a constant, real-time basis. Autonomic computing’s central metaphor is that of the human body’s autonomic nervous system. In the same way that the body regulates temperature, breathing and heart rate without us having to be consciously aware of what is happening, so too does autonomic computing dream of algorithms that self-manage, self-configure and self-optimize—without the need for physical or mental input on the part of users.

  One example of Ambient Law might be the “smart office,” which continuously monitors its own internal temperature and compares these levels to those stipulated by health and safety regulations. In the event that a specified legal limit is exceeded, an alarm could be programmed to sound. Another usage of Ambient Law is the car that refuses to be driven by individuals with an excessive level of alcohol in their bloodstream. A number of different car manufacturers have developed similar technology in recent years. In a system developed by Japanese carmaker Nissan, would-be motorists are monitored from the moment they get behind the wheel. An alcohol odor sensor is used to check their breath, another sensor tests for alcohol in the sweat of their palm as they touch the gear stick, and a miniature dashboard camera monitors their face and eye movements—looking for increased blinking to indicate drowsiness, or a drooping mouth to suggest yawning. Based on an averaging of all of these biometrics, the car’s in-built algorithms then decide whether or not an individual is safe to drive. If the answer is negative, the car’s transmission locks, a “drunk-driving” voice alert sounds over the car’s satellite navigation system, and the driver’s seat belt tightens around them to provide (in the words of a Nissan spokesperson) a “mild jolt” designed to snap them out of their stupor.28
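  To make the mechanics concrete, the following is a minimal sketch of how such an embedded, self-enforcing rule might look in code. The sensor names, the 0-to-1 scales and the 0.5 cut-off are illustrative assumptions, not Nissan's actual parameters.

```python
# A minimal Ambient Law sketch, loosely modeled on the in-car system
# described above. All sensor names, scales and thresholds are invented
# for illustration; they are not the manufacturer's real values.
from dataclasses import dataclass


@dataclass
class BiometricReading:
    breath_alcohol: float  # odor sensor reading, 0.0 (none) to 1.0 (strong)
    palm_alcohol: float    # sweat sensor on the gear stick, same scale
    drowsiness: float      # camera score from blink rate and yawning, same scale


def fit_to_drive(reading: BiometricReading, threshold: float = 0.5) -> bool:
    """Average the biometric signals and compare the result to a single
    cut-off, mirroring the 'averaging of all of these biometrics' above."""
    score = (reading.breath_alcohol + reading.palm_alcohol + reading.drowsiness) / 3
    return score < threshold


if __name__ == "__main__":
    driver = BiometricReading(breath_alcohol=0.7, palm_alcohol=0.6, drowsiness=0.4)
    if not fit_to_drive(driver):
        print("Transmission locked; drunk-driving alert issued.")
```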

  The Politics of Public Space

  These technologies unnerve some people because of what they suggest about algorithms’ new role as moral decision-makers. One only has to look at the hostile reaction afforded Apple when it started censoring “objectionable” content in its App Store to see that many computer users view morality and technology as two unrelated subjects. This is an understandable reaction, but one that also shows a lack of awareness about the historical role of “technology.” If science aims for a better understanding of the world, then technology (and, more accurately, technologists) has always sought to change it. The result is a discipline that is inextricably tied in with a sense of morality, regardless of how much certain individuals might try to deny it. In this way, technology is a lot like law, with both designed as man-made forces for regulating human behavior.

  In a 1980 essay entitled “Do Artifacts Have Politics?,” the sociologist Langdon Winner singled out several of the bridges over the parkways on Long Island, New York.29 Many of these bridges, Winner observed, were extraordinarily low, with as little as nine feet of clearance at the curb. Although the majority of people seeing them would be unlikely to attach any special meaning to their design, they were actually an embodiment of the social and racial prejudice of designer Robert Moses, who was responsible for building many of the roads, parks, bridges and other public works in New York between the 1920s and 1970s. With the low bridges, Moses’s intention was to allow only whites of the “upper” and “comfortable middle” classes access to the public park, since these were the only demographics able to afford cars. Because poorer individuals, many of them black, relied on taller public buses, they were denied access to the park, since the buses were unable to clear the low overpasses and were forced to find alternative routes. In other words, Moses built bias (and a skewed sense of morality) into his designs. As New York town planner Lee Koppelman later recalled, “The old son of a gun . . . made sure that buses would never be able to use his goddamned parkways.”30

  While the neo-libertarian Google might be a million miles from Moses’s attitudinal bias, it is difficult not to look at the company’s plans to use data-mining algorithms to personalize maps and see (perhaps unintentional) strains of the same stuffy conservatism. Over the past decade, Google Maps has become a ubiquitous part of many people’s lives, vital to how we move from one place to another on a daily basis. As journalist Tom Chivers wrote in the Daily Telegraph, “Of all of the search giant’s many tentacles reaching octopus-like into every area of our existence, Maps, together with its partner Google Earth and their various offspring, can probably claim to be the one that has changed our day-to-day life the most.”31 In 2011, while speaking to the website TechCrunch, Daniel Graf, the director of Google Maps for mobile, asked rhetorically, “If you look at a map and if I look at a map [my emphasis], should it always be the same for you and me? I’m not sure about that, because I go to different places than you do.”32 The result of this insight was that from 2013 onward, Google Maps began incorporating user information to direct users toward those places most likely to be home to like-minded individuals, or subjects that they have previously expressed an interest in. “In the past, such a notion would have been unbelievable,” Google crowed in promotional literature. “[A] map was just a map, and you got the same one for New York City, whether you were searching for the Empire State Building or the coffee shop down the street. What if, instead, you had a map that’s unique to you, always adapting to the task you want to perform right this minute?”
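  Google has never published the signals behind this personalization, but the basic mechanism can be sketched in a few lines of code. The interest tags, place names and scoring rule below are purely illustrative assumptions, not the company's actual ranking logic.

```python
# Hypothetical sketch of interest-based map personalization of the kind
# described above. The tags, places and scoring rule are invented; Google's
# actual ranking signals are not public.
user_interests = {"coffee", "bookshops", "cycling"}

places = [
    {"name": "Empire State Building", "tags": {"landmark", "viewpoint"}},
    {"name": "Corner Coffee Shop", "tags": {"coffee", "wifi"}},
    {"name": "Velo Repairs", "tags": {"cycling", "repairs"}},
]


def personalized_ranking(places, interests):
    # Rank places by how many of the user's prior interests they match,
    # so two users searching the same city see differently ordered maps.
    return sorted(places, key=lambda p: len(p["tags"] & interests), reverse=True)


for place in personalized_ranking(places, user_interests):
    print(place["name"])
```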

  But while this might be helpful in some senses, its intrinsic “filter bubble” effect may also result in users experiencing less of the serendipitous discovery they would get from a traditional map. Like the algorithmic matching of a dating site, only those people and places determined on your behalf as suitable or desirable will show up.33 As such, while applying The Formula to the field of cartography might be a logical step for Google, it is potentially troubling. Stopping people of a lower economic status from seeing the houses and shops catering to those of higher economic means—or those of one religion from seeing on their maps the places of worship belonging to those of another—might initially seem a viable means of reducing conflict, but it would do nothing in the long term to promote tolerance, understanding or a leveling of the playing field.

  This situation might only be further exacerbated were certain algorithms, like the Nara recommender system I described in Chapter 1, to be implemented incorrectly. By looking not at where an individual currently is, but where advertisers would eventually like them to be, people reliant on algorithms for direction could be channeled down certain routes—like actors playing along to a script.

  Your Line, My Line

  It was the French philosopher and anthropologist Bruno Latour—picking up from where sociologist Langdon Winner left off—who first put forward the notion of technological “scripts.”34 In the same way that a film script or stage play prescribes the actions of its performers, so too did Latour argue that technology can serve to modify the behavior of its users by demanding to be dealt with in a certain way.35 For instance, the disposability of a plastic coffee cup, which begins to disintegrate after only several uses, will encourage people to throw it away. A set of heavy weights attached to hotel keys similarly makes it more likely that they will be returned to the reception desk, since the weight will make the keys cumbersome to carry around.

  Less subtle might be the springs attached to a door that dictate the speed at which people should enter a building—or the concrete speed bumps that prompt drivers to drive slowly, or else risk damaging their shock absorbers. Less subtle still would be the type of aforementioned Ambient Law that ensures that a vehicle will not start because its driver is inebriated, or the office building that not only sounds an alarm but also turns off workers’ computer screens because a certain heat threshold has been reached, and they should exit for their own safety.

  As with Moses’s low-hanging bridges, such scripts can be purposely inscribed by designers. By doing this, designers delegate specific responsibilities to the objects they create, and these can be used to influence user behavior—whether that be encouraging them to conform to particular social norms or forcing them into obeying certain laws.36 Because they serve as an “added extra” on top of the basic functionality of an object or device, scripts pose a number of ethical questions. What, for example, is the specific responsibility of the technology designer who serves as the inscriber of scripts? If laws or rules are an effort to moralize other people, does this differ from attempts to moralize technology? Can we quantify in any real sense the difference between a rule that asks that we not waste water in the shower and the use of a water-saving showerhead technology that ensures that we do not?

  In their book Nudge: Improving Decisions about Health, Wealth, and Happiness, authors Richard Thaler and Cass Sunstein recount the story of a fake housefly placed in each of the urinals at Schiphol Airport in Amsterdam. By giving urinating men something to aim at, spillage was reduced by a whole 80 percent.37 While few would likely decry the kind of soft paternalism designed to keep public toilets clean, what about the harder paternalism of a car that forcibly brakes to stop a person breaking the speed limit? To what degree can actions be considered moral or law-abiding if the person carrying them out has no choice but to do so? And when particular actions are inscribed by designers (or, in the case of The Formula, computer scientists), who has the right to implement and enforce them?

  While it would be a brave (and likely misguided) person who would step up and defend a drunk driver’s right to drive purely on democratic grounds, the question of the degree to which behavior should be rightfully limited or regulated is one central to moral philosophy. In some situations it may appear to be morally justified. In others, it could just as easily raise associations with the kind of totalitarian technocracy predicted in George Orwell’s Nineteen Eighty-Four.

  Unsurprisingly, there is a high level of disagreement about where the line in the sand should be drawn. Roger Brownsword, a legal scholar who has written extensively on the topic of technological regulation and the law, argues that the autonomy that underpins human rights means that a person should have the option of either obeying or disobeying a particular rule.38 At the other end of the spectrum is Professor Sarah Conly, whose boldly titled book, Against Autonomy, advocates “[saving] people from themselves” by banning anything that might prove physically or psychologically detrimental to their well-being. These include (but are by no means limited to) cigarettes, trans fats, excessively sized meals, the ability to rack up large amounts of debt, and the spending of too much of one’s paycheck without first making the proper saving provisions. “Sometimes no amount of public education can get someone to realize, in a sufficiently vivid sense, the potential dangers of his course of behavior,” Conly writes. “If public education were effective, we would have no new smokers, but we do.” Needless to say, in Conly’s world of hard paternalism there are more speed bumps than there are plastic coffee cups.39

  The Prius and the Learning Tree

  On the surface, the idea that we should be able to enforce laws by algorithm makes a lot of sense. Since legal reasoning is logical by nature, and logical operations can be automated by a computer, couldn’t codifying the legal process help make it more efficient than it already is? In this scenario, deciding legal cases would simply be a matter of entering the facts of a particular case, applying the rules to the facts, and ultimately determining the “correct” answer.
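  In code, that picture of legal reasoning looks something like the toy example below: a handful of facts go in, a rule is applied mechanically, and an answer comes out. The rule and the facts of the case are invented for illustration.

```python
# Toy illustration of the "enter the facts, apply the rules, read off the
# answer" picture of automated legal decision-making. The rule and the
# facts of the case are invented for illustration.
def decide(case: dict) -> str:
    # Rule: a contract signed by someone under 18 is voidable.
    if case["signer_age"] < 18:
        return "contract voidable"
    return "contract enforceable"


facts = {"signer_age": 17, "contract_value": 500}
print(decide(facts))  # -> "contract voidable"
```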

  In his work, American scholar Lawrence Lessig identifies law and computer code as two sides of the same coin. Lessig refers to the laws created by Congress in Washington, D.C., as “East Coast code” and the laws that govern computer programs as “West Coast code,” in reference to the location of Silicon Valley.40 Once a law of either type is created, Lessig argues, it becomes virtual in the sense that from this point forward it has an existence independent of its original creator. Lessig was hardly the first person to explore this similarity. Three hundred years before Lessig’s birth, the great mathematician and coinventor of calculus, Gottfried Leibniz, speculated that legal liability could be determined using calculation. Toward the end of the 19th century another larger group of legal scholars formed the so-called jurimetrics movement, which argued that the “ideal system of law should draw its postulates and its legislative justification from science.”41

  While both Leibniz and the jurimetrics movement were misguided in their imagining of the legal system as a series of static natural laws, their dream was—at its root—an honest one, based on the idea that science could be used to make the law more objective. Objectivity in a legal setting means fairness and impartiality. The person who fails to act objectively has allowed self-interest or prejudice to cloud their judgment. The belief was that, by turning legal reasoning into a system that would interpret rules the same way every time, a consistency could be achieved to rival that seen in the hard sciences.

  The problem with the jurimetrics approach to law was demonstrated most effectively by an experiment carried out in 2013, designed to examine the challenges of turning even the most straightforward of laws into an algorithm. For the study, 52 computer programmers were assembled and split into two groups. Each group was tasked with creating an algorithm that would issue speeding tickets to the driver of a car whenever it broke the speed limit. Both groups were provided with two datasets: the legal speed limit along a particular route, and information about the speed of a particular vehicle (a Toyota Prius) traveling that route on a previous occasion, collected using an on-board computer. The data showed that the Prius rarely exceeded the speed limit, and on those occasions that it did, it did so only briefly and by a moderate degree. To make things more morally ambiguous, these violations occurred at times during the journey when the Prius was set to cruise control. The journey was ultimately completed safely and without incident.

  The first group of computer programmers was asked to write their algorithm so that it conformed to “the letter of the law.” The second was asked to meet “the intent of the law.” Unsurprisingly, the two groups reached very different conclusions. The “intent of the law” group issued between zero and 1.5 tickets to the driver for the journey. The “letter of the law” group, on the other hand, issued a far more draconian 498.3 tickets. The astonishing disparity between the two groups came down to two principal factors. Where the “intent of the law” group allowed a small amount of leeway when crossing the speed limit, the “letter of the law” group did not. The “letter of the law” group also treated each separate sample above the speed limit as a new offense, thereby allowing a continuous stream of tickets to be issued in a manner not possible using single speed cameras.42
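  A rough sketch shows how the two readings of the same rule can diverge so dramatically on identical data. The speed trace, the 10 percent tolerance and the one-ticket-per-episode logic below are illustrative assumptions, not the parameters used in the actual study.

```python
# Sketch of how a "letter of the law" and an "intent of the law" algorithm
# can diverge on the same data. The speed trace, the tolerance margin and
# the per-episode ticketing are invented; they are not the study's values.
SPEED_LIMIT = 65  # mph

# One speed reading per second; the brief excursions above 65 mph mimic
# the short cruise-control overshoots described above.
trace = [63, 64, 66, 67, 66, 64, 63, 65, 68, 66, 64, 63]


def letter_of_the_law(samples, limit):
    # Every individual sample above the limit counts as a fresh offense,
    # so a few seconds of speeding can generate a stream of tickets.
    return sum(1 for speed in samples if speed > limit)


def intent_of_the_law(samples, limit, tolerance=0.10):
    # Allow a margin over the limit, and count at most one offense per
    # continuous episode of speeding rather than one per sample.
    threshold = limit * (1 + tolerance)
    tickets, speeding = 0, False
    for speed in samples:
        if speed > threshold and not speeding:
            tickets, speeding = tickets + 1, True
        elif speed <= threshold:
            speeding = False
    return tickets


print(letter_of_the_law(trace, SPEED_LIMIT))  # 5 tickets
print(intent_of_the_law(trace, SPEED_LIMIT))  # 0 tickets
```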

  Rules and Standards

  This raises the question of “rules” versus “standards.” Broadly speaking, individual laws can be divided up into these two distinct camps, which occupy opposite ends of the legal spectrum. To illustrate the difference between a rule and a standard, consider two potential laws, both designed to crack down on unsafe driving. A rule might state, “No vehicle shall drive faster than 65 miles per hour.” A standard, on the other hand, may be articulated as “No one shall drive at unsafe speeds.” The subjectivity of standards means that they require human discretion to implement, while rules exist as hard-line binary decisions, with very little in the way of flexibility.
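  Put in code, the asymmetry is stark: the rule reduces to a single comparison a machine can apply directly, while the standard has no obvious implementation at all. The function names and the stubbed-out judgment below are a sketch, not a real enforcement system.

```python
# A rule collapses into a binary test a machine can apply directly; the
# standard is left as a stub because "unsafe" depends on context and
# judgment that a single comparison cannot capture. Illustrative only.
def violates_rule(speed_mph: float, limit_mph: float = 65) -> bool:
    """Rule: 'No vehicle shall drive faster than 65 miles per hour.'"""
    return speed_mph > limit_mph


def violates_standard(speed_mph: float, context: dict) -> bool:
    """Standard: 'No one shall drive at unsafe speeds.'"""
    # What counts as unsafe depends on road surface, visibility, traffic
    # and more: there is no single threshold to hard-code.
    raise NotImplementedError("requires human discretion")


print(violates_rule(72))  # True
```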

  Computers are constantly getting better at dealing with the kind of contextual problems required to operate in the real world. For example, an algorithm could be taught to abandon enforced minimum speed limits on major roads in the event that traffic or weather conditions make adhering to them impossible. Algorithms used for processing speed camera information have already shown themselves capable of picking out people who are new to a certain area. Drivers who are unfamiliar with a place might find themselves let off if they are marginally in excess of the speed limit, while locals spotted regularly along a particular route may find themselves subject to harsher treatment. However, both of these exceptions take the form of preprogrammed rules, as opposed to evidence that computers are good at dealing with matters of ambiguity.
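  Both exceptions can be written down as explicit, preprogrammed conditions, which is precisely the point: the flexibility lives in rules the designer anticipated, not in any judgment by the machine. The parameter names and the 10 percent leniency margin below are illustrative assumptions.

```python
# Sketch of the two exceptions described above, expressed as they would
# have to be in practice: explicit, preprogrammed rules rather than
# judgment. Parameter names and the leniency margin are invented.
def minimum_speed_violation(speed_mph, minimum_mph, bad_weather, heavy_traffic):
    # Suspend the minimum-speed rule when conditions make compliance impossible.
    if bad_weather or heavy_traffic:
        return False
    return speed_mph < minimum_mph


def issue_speeding_ticket(speed_mph, limit_mph, driver_is_local, leniency=0.10):
    # Forgive a marginal excursion for drivers unfamiliar with the area,
    # but hold locals seen regularly on the route to the exact limit.
    margin = 0 if driver_is_local else limit_mph * leniency
    return speed_mph > limit_mph + margin


print(minimum_speed_violation(38, 40, bad_weather=True, heavy_traffic=False))  # False
print(issue_speeding_ticket(68, 65, driver_is_local=False))  # False
print(issue_speeding_ticket(68, 65, driver_is_local=True))   # True
```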

  This technological drive toward more objective “rules” is one that has played out over the past several centuries. For instance, in his book Medicine and the Reign of Technology, Stanley Joel Reiser observes how the invention of the stethoscope

  helped to create the objective physician, who could move away from involvement with the patient’s experiences and sensations, to a more detached relation, less with the patient but more with the sounds from within the body.43

  Similar sentiments are echoed by sociologist Joseph Gusfield in The Culture of Public Problems: Drinking-Driving and the Symbolic Order, in which he argues that the rise of law enforcement technologies stems from a desire for objectivity, centered around that which is quantifiable. To illustrate his point, Gusfield looks at the effect that the introduction of the Breathalyzer in the 1950s had on the previously subjective observations law enforcement officials had relied on to determine whether or not a person was “under the influence.” As Gusfield writes:

  [Prior to such tests, there was] a morass of uncorroborated reports, individual judgments, and criteria difficult to apply to each case in the same manner. Both at law and in the research “laboratory,” the technology of the blood level sample and the Breathalyzer meant a definitive and easily validated measure of the amount of alcohol in the blood and, consequently, an accentuated law enforcement and a higher expectancy of convictions.44

 
