Army of None


by Paul Scharre


  As automobiles and aircraft demonstrate, it is meaningless to refer to a system as “autonomous” without referring to the specific task that is being automated. Cars are still driven by humans (for now), but a host of autonomous functions can assist the driver, or even take control for short periods of time. The machine becomes “more autonomous” as it takes on more tasks, but some degree of human involvement and direction always exists. “Fully autonomous” self-driving cars can navigate and drive on their own, but a human is still choosing the destination.

  For any given task, there are degrees of autonomy. A machine can perform a task in a semiautonomous, supervised autonomous, or fully autonomous manner. This is the second dimension of autonomy: the human-machine relationship.

  Semiautonomous Operation (human in the loop)

  In semiautonomous systems, the machine performs a task and then waits for a human user to take an action before continuing. A human is “in the loop.” Autonomous systems go through a sense-decide-act loop similar to the military OODA loop, but in semiautonomous systems the loop is broken by a human. The system can sense the environment and recommend a course of action, but cannot carry out the action without human approval.

  Supervised Autonomous Operation (human on the loop)

  In supervised autonomous systems, the human sits “on” the loop. Once put into operation, the machine can sense, decide, and act on its own, but a human user can observe the machine’s behavior and intervene to stop it, if desired.

  Fully Autonomous Operation (human out of the loop)

  Fully autonomous systems sense, decide, and act entirely without human intervention. Once the human activates the machine, it conducts the task without communication back to the human user. The human is “out of the loop.”
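
  These three relationships can be made concrete in pseudocode. The sketch below is a minimal illustration of my own, not drawn from any real system; sense, decide, act, and the human approval and abort hooks are all hypothetical placeholders.

    # A minimal sketch of the three human-machine relationships.
    # All names (sense, decide, act, the human_* hooks) are hypothetical
    # placeholders, not the interface of any real system.

    def sense():
        return {"temperature": 72}   # stand-in for reading the environment

    def decide(observation):
        return "cool" if observation["temperature"] > 70 else "idle"

    def act(action):
        print("executing:", action)

    def semiautonomous(human_approves):
        """Human IN the loop: nothing happens without explicit approval."""
        action = decide(sense())
        if human_approves(action):
            act(action)

    def supervised(human_aborts):
        """Human ON the loop: the machine acts on its own, but a watching
        human can stop it."""
        action = decide(sense())
        if not human_aborts(action):
            act(action)

    def fully_autonomous():
        """Human OUT of the loop: sense, decide, act, with no one watching."""
        act(decide(sense()))

  The code itself is trivial; the entire difference between the three modes is where the human sits relative to the sense-decide-act loop.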

  Many machines can operate in different modes at different times. A Roomba that is vacuuming while you are home is operating in a supervised autonomous mode. If the Roomba becomes stuck—my Roomba frequently trapped itself in the bathroom—then you can intervene. If you’re out of the house, then the Roomba is operating in a fully autonomous capacity. If something goes wrong, it’s on its own until you come home. More often than I would have liked, I came home to a dirty house and a spotless bathroom.

  It wasn’t the Roomba’s fault it had locked itself in the bathroom. It didn’t even know that it was stuck (Roombas aren’t very smart). It had simply wandered into a location where its aimless bumping would nudge the door closed, trapping it. Intelligence is the third dimension of autonomy. More sophisticated, or more intelligent, machines can be used to take on more complex tasks in more challenging environments. People often use terms like “automatic,” “automated,” or “autonomous” to refer to a spectrum of intelligence in machines.

  Automatic systems are simple machines that don’t exhibit much in the way of “decision-making.” They sense the environment and act. The relationship between sensing and action is immediate and linear. It is also highly predictable to the human user. An old mechanical thermostat is an example of an automatic system. The user sets the desired temperature and when the temperature gets too high or too low, the thermostat activates the heat or air conditioning.

  Automated systems are more complex, and may consider a range of inputs and weigh several variables before taking an action. Nevertheless, the internal cognitive processes of the machine are generally traceable by the human user, at least in principle. A modern digital programmable thermostat is an example of an automated system. Whether the heat or air conditioning turns on is a function of the house temperature as well as what day and time it is. Given knowledge of the inputs to the system and its programmed parameters, the system’s behavior should be predictable to a trained user.
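
  The difference between the two thermostats can be shown in a few lines of code. This is an illustrative sketch only; the setpoints and the occupancy schedule are invented, not the logic of any actual product.

    # Illustrative sketch; the setpoints and schedule are invented.

    def automatic_thermostat(temp, setpoint=70):
        # Automatic: one input, one rule, an immediate and linear
        # link between sensing and action.
        if temp < setpoint:
            return "heat on"
        if temp > setpoint:
            return "cooling on"
        return "off"

    def automated_thermostat(temp, day, hour):
        # Automated: weighs several inputs (temperature, day, time)
        # against a programmed schedule. Still traceable: given the
        # inputs and the parameters, the output is predictable.
        weekday = day not in ("Sat", "Sun")
        away = weekday and 8 <= hour < 18
        setpoint = 62 if away else 70
        if temp < setpoint:
            return "heat on"
        if temp > setpoint:
            return "cooling on"
        return "off"

  Calling automated_thermostat(58, "Mon", 12) returns "heat on": the house has fallen below even the energy-saving daytime setpoint. The behavior is more complex than the mechanical thermostat's, but a trained user who knows the schedule can still trace it.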

  Spectrum of Intelligence in Machines

  “Autonomous” is often used to refer to systems sophisticated enough that their internal cognitive processes are less intelligible to the user, who understands the task the system is supposed to perform, but not necessarily how the system will perform that task. Researchers often refer to autonomous systems as being “goal-oriented.” That is to say, the human user specifies the goal, but the autonomous system has flexibility in how it achieves that goal.

  Take a self-driving car, for example. The user specifies the destination and other goals, such as avoiding accidents, but can’t possibly specify in advance every single action the autonomous car is supposed to perform. The user doesn’t know where there will be traffic or obstacles in the road, when lights will change, or what other cars or pedestrians will do. The car is therefore programmed with the flexibility to decide when to stop, go, and change lanes in order to accomplish its goal: getting to the destination safely.

  In practice, the line between automatic, automated, and autonomous systems is still blurry. Often, the term “autonomous” is used to refer to future systems that have not yet been built, but once they do exist, people describe those same systems as “automated.” This is similar to a trend in artificial intelligence where AI is often perceived to encompass only tasks that machines cannot yet do. Once a machine conquers a task, then it is merely “software.”

  Autonomy doesn’t mean the system is exhibiting free will or disobeying its programming. The difference is that unlike an automatic system where there is a simple, linear connection from sensing to action, autonomous systems take into account a range of variables to consider the best action in any given situation. Goal-oriented behavior is essential for autonomous systems in uncontrolled environments. If a self-driving car were on a closed track with no pedestrians or other vehicles, each movement could be programmed into the car in advance—when to go, stop, turn, etc. But such a car would not be very useful, as it could only drive in a simple environment where every action could be predicted. In more complex environments or when performing more complex tasks, it is crucial that the machine be able to make decisions based on the specific situation.
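
  One way to see the contrast is to compare a scripted controller with a goal-oriented one. The sketch below is a loose illustration; every name in it is invented, and real self-driving cars use far more elaborate planners.

    # Loose illustration; all names are invented for this sketch.

    def execute(action):
        print("executing:", action)

    # Scripted control: every movement fixed in advance. Works only on
    # a closed track where every action can be predicted.
    def closed_track_run():
        for action in ["go", "turn left", "go", "stop"]:
            execute(action)

    # Goal-oriented control: the human specifies only the goal (the
    # destination); the system chooses each action based on what it
    # senses at that moment.
    def drive_to(destination, sense):
        while sense("position") != destination:
            if sense("obstacle_ahead") or sense("light") == "red":
                execute("stop")   # conditions no user could script in advance
            else:
                execute("go")

  The script fails the moment anything unexpected appears; the goal-oriented loop absorbs it. That flexibility is exactly what makes the system useful outside a closed track, and exactly what makes its specific actions harder to predict.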

  This greater complexity in autonomous systems is a double-edged sword. The downside of more sophisticated systems is that the user may not be able to predict their specific actions in advance. The feature of increased autonomy can become a flaw if the user is surprised in an unpleasant way by the machine’s behavior. For simple automatic or automated systems, this is less likely. But as the complexity of the system increases, so does the difficulty of predicting how the machine will act.

  It can be exciting, if a little scary, to hand over control to an autonomous system. The machine is like a black box. We specify its goal and, like magic, the machine overcomes obstacles to reach the goal. The inner workings of how it did so are often mysterious to us; the distinction between “automated” and “autonomous” is principally in the mind of the user. A new machine only feels “autonomous” because we don’t yet have a good mental model for how it “thinks.” As we gain experience with the machine and begin to better understand it, the layers of fog hiding the inner workings of the black box dissipate, revealing the complex logic driving its behavior. We come to decide the machine is merely “automated” after all. In understanding the machine, we have tamed it; the humans are back in control. That process of discovery, however, can be a rocky one.

  A few years ago, I purchased a Nest “learning thermostat.” The Nest tracks your behavior and adjusts the house’s temperature as needed, “learning” your preferences over time. There were bumps along the way as I discovered various aspects of the Nest’s functionality, and occasionally the house was too warm or too cold, but I was sufficiently enamored of the technology that I was willing to push through these growing pains. My wife, Heather, was less tolerant of the Nest. Every time it changed the temperature on its own, disregarding an instruction she had given, she viewed it more and more suspiciously. (Unbeknownst to her, the Nest was following other guidance I had given it previously.)

  The final straw for the Nest was when we came home from summer vacation to find the house a toasty 84 degrees, despite my having gone online the night before and set the Nest to a comfortable 70. With sweat dripping off our faces, we set our bags down in the foyer and I ran to the Nest to see what had happened. As it turned out, I had neglected to turn off the “auto-away feature.” After the Nest’s hallway sensor detected no movement and discerned we were not home, it reverted—per its programming—to the energy-saving “away” setting of 84 degrees. One look from Heather told me it was too late, though. She had lost trust in the Nest. (Or, more accurately, in my ability to use it.)

  The Nest wasn’t broken, though. The human-machine connection was. The same features that made the Nest “smarter” also made it harder for me to anticipate its behavior. The disconnect between my expectations of what the Nest would do and what it was actually doing meant the autonomy that was supposed to be working for me ended up, more often than not, working against my goals.

  HOW MUCH SHOULD WE TRUST AUTONOMOUS SYSTEMS?

  All the Nest did was control the thermostat. The Roomba merely vacuumed. Coming home to a Roomba locked in the bathroom or an overheated house might be annoying, but it wasn’t a catastrophe. The tasks entrusted to these autonomous systems weren’t critical ones.

  What if I were dealing with an autonomous system performing a truly critical function? What if the Nest were a weapon, and my inability to understand it led to failure?

  What if the task I was delegating to an autonomous system was the decision whether or not to kill?

  3

  MACHINES THAT KILL

  WHAT IS AN AUTONOMOUS WEAPON?

  The path to autonomous weapons began 150 years ago in the mid-nineteenth century. As the second industrial revolution was bringing unprecedented productivity to cities and factories, the same technology was bringing unprecedented efficiency to killing in war.

  At the start of the American Civil War in 1861, inventor Richard Gatling devised a new weapon to speed up the process of firing: the Gatling gun. A forerunner of the modern machine gun, the Gatling gun employed automation for loading and firing, allowing more bullets to be fired in a shorter amount of time. The Gatling gun was a significant improvement over Civil War–era rifled muskets, which had to be loaded by hand through the muzzle in a lengthy process. Well-trained troops could fire three rounds per minute with a rifled musket. The Gatling gun fired over 300 rounds per minute.

  In its time, the Gatling gun was a marvel. Mark Twain was an early enthusiast:

  [T]he Gatling gun . . . is a cluster of six to ten savage tubes that carry great conical pellets of lead, with unerring accuracy, a distance of two and a half miles. It feeds itself with cartridges, and you work it with a crank like a hand organ; you can fire it faster than four men can count. When fired rapidly, the reports blend together like the clattering of a watchman’s rattle. It can be discharged four hundred times a minute! I liked it very much.

  The Gatling gun was not an autonomous weapon, but it began a long evolution of weapons automation. In the Gatling gun, the process of loading bullets, firing, and ejecting cartridges was all automatic, provided a human kept turning the crank. The result was a tremendous expansion in the amount of destructive power unleashed on the battlefield. Four soldiers were needed to operate the Gatling gun, but by dint of automation, they could deliver the same lethal firepower as more than a hundred men.

  Richard Gatling’s motivation was not to accelerate the process of killing, but to save lives by reducing the number of soldiers needed on the battlefield. Gatling built his device after watching waves of young men return home wounded or dead from the unrelenting bloodshed of the American Civil War. In a letter to a friend, he wrote:

  It occurred to me that if I could invent a machine—a gun—which could by its rapidity of fire, enable one man to do as much battle duty as a hundred, that it would, to a great extent, supersede the necessity of large armies, and consequently, exposure to battle and disease be greatly diminished.

  Gatling was an accomplished inventor with multiple patents to his name for agricultural implements. He saw the gun in a similar light—machine technology harnessed to improve efficiency. Gatling claimed his gun “bears the same relation to other firearms that McCormick’s reaper does to the sickle, or the sewing machine to the common needle.”

  Gatling was more right than he knew. The Gatling gun did indeed plant the seeds of a revolution in warfare, a break from the old ways of killing people one at a time with rifled muskets and a shift to a new era of mechanized death. The future Gatling wrought was not one of less bloodshed, however, but of unimaginably more. The Gatling gun laid the foundations for a new class of machine: the automatic weapon.

  AUTOMATIC WEAPONS: MACHINE GUNS

  Automatic weapons came about incrementally, with inventors building on and refining the work of those who came before. The next tick in the gears of progress came in 1883 with the invention of the Maxim gun. Unlike the Gatling gun, which required a human to hand-crank the gun to power it, the Maxim gun harnessed the physical energy from the recoil of the gun’s firing to power the process of reloading the next round. Hand-cranking was no longer needed, and once firing was initiated, the gun could continue firing on its own. The machine gun was born.

  The machine gun was a marvelous and terrible invention. Unlike semiautomatic weapons, which require the user to pull the trigger for each bullet, automatic weapons will continue firing so long as the trigger remains held down. Modern machine guns come in all shapes and sizes, from the snub-nosed Uzi that plainclothes security personnel can tuck under their suit jackets to massive chain guns that rattle off thousands of rounds per minute. Regardless of their form, their power is palpable the moment you fire one.

  As a Ranger, I carried an M249 Squad Automatic Weapon, or SAW, a single-person light machine gun carried in infantry fire teams. Weighing seventeen pounds without ammunition, the SAW is on the hefty side of what can be considered “hand held.” With training, the SAW can be fired from the shoulder standing up in short controlled bursts, but is best used lying on the ground. The SAW comes equipped with two metal bipod legs that can be flipped down to allow the gun to stand elevated off the dirt. One does not simply lie on the ground and fire the SAW, however. The SAW has to be managed; it has to be controlled. When fired, the weapon bucks and moves like a wild animal from the rapid-fire recoil. At a cyclic rate of fire, with the trigger held down, the SAW will fire 800 rounds per minute. That’s thirteen bullets streaming out of the barrel per second. At that rate of fire, a gunner will rip through his entire stash of ammunition in under two minutes. The barrel will overheat and begin to melt.

  Using the SAW effectively requires discipline. The gunner must lean into the weapon to control it, putting his weight behind it and digging the bipod legs into the dirt to pin the weapon down as it is fired. The gunner fires in short bursts of five to seven rounds at a time to conserve ammunition, keep the weapon on target, and prevent the barrel from overheating. Under heavy firing, the SAW’s barrel will glow red hot—the barrel may need to be removed and replaced with a spare before it begins to melt. The gun can’t handle its own power.

  On the other end of the spectrum of infantry machine guns is the M2 .50 caliber heavy machine gun, the “ma deuce.” Mounted on military trucks, the .50 cal is the gun that turns a simple off-road truck into a piece of lethal machinery, the “gun truck.” At eighty pounds—plus a fifty-pound tripod—the gun is a behemoth. To fire it, the gunner leans back in the turret to brace himself or herself and thumbs down the trigger with both hands. The gun unleashes a powerful THUK THUK THUK as the rounds exit. The half-inch-wide bullets can sail over a mile.

  Machine guns changed warfare forever. In the late 1800s, the British Army used the Maxim gun to aid in their colonial conquest of Africa, allowing them to take on and defeat much larger forces. For a time, to the British at least, machine guns might have seemed like a weapon that lessened the cost of war. In World War I, however, both sides had machine guns and the result was bloodshed on an unprecedented scale. At the Battle of the Somme, Britain lost 20,000 men in a single day, mowed down by automatic weapons. Millions died in the trenches of World War I, an entire generation of young men.

  Machine guns accelerated the process of killing by harnessing industrial age efficiency in the service of war. Men weren’t merely killed by machine guns; they were mowed down, like McCormick’s mechanical reaper cutting down stalks of grain. Machine guns are dumb weapons, however. They still have to be aimed by the user. Once initiated, they can continue firing on their own, but the guns have no ability to sense targets. In the twentieth century, weapons designers would take the next step to add rudimentary sensing technologies into weapons—the initial stages of intelligence.

  THE FIRST “SMART” WEAPONS

  From the first time a human threw a rock in anger until the twentieth century, warfare was fought with unguided weapons. Projectiles—whether shot from a sling, a bow, or a cannon—follow the laws of gravity once released. Projectiles are often inaccurate, and the degree of inaccuracy increases with range. With unguided weapons, destroying the enemy hinged on getting close enough to deliver overwhelming barrages of fire to blanket an area.

  In World War II, as rockets, missiles, and bombs increased the range at which combatants could target one another—but not their accuracy—militaries sought to develop methods for precision guidance that would allow weapons to accurately strike targets from long distances. Some attempts to insert intelligence into weapons were seemingly comical, such as behaviorist B. F. Skinner’s efforts to control a bomb by the pecking of a pigeon on a target image. Skinner’s pigeon-guided bomb might have worked, but it never saw combat. Other attempts to implement onboard guidance measures did, giving birth to the first “smart” weapons: precision-guided munitions (PGMs).

 
