Future Crimes


by Marc Goodman


  Admittedly, some of these new developments sound like something out of a Philip K. Dick novel. For instance, nanny bots have already been launched in South Korea and Japan. They can play games and carry out limited conversations with speech recognition. Many use the robot’s eyes to transmit live video of your children to your computer or smart phone. NEC’s PaPeRo robot nanny also allows you to speak with your children directly or via text messages, which the robot can read to your child, and SoftBank’s Pepper proclaims that “it can read your child’s emotions and facial expressions and respond appropriately.” Though robo-nannies may prove helpful to sleep-deprived, overworked parents everywhere, another area of personal robotics that is expanding even more rapidly is that of elder-care bots. Given demographic trends and aging populations in developed countries around the world, there is a dearth of caretakers to provide the emotional and physical support required for the elderly. Nowhere is this challenge as great as in Japan, where nearly 25 percent of the population is over sixty-five. To help alleviate the problem, Prime Minister Shinzō Abe’s government allocated ¥2.39 billion in 2013 to assist the national development of elder-care robots. One such example is Paro, an adorably cute white baby harp seal robot meant to keep the elderly company. Paro “can recognize individual voices, track motion and remember behaviors that elicit positive responses from patients.” When petted, it responds by cooing and cuddling up to any person touching it. Thousands of Paro units have been sold globally, and they have proven particularly useful with advanced dementia patients in reducing levels of violence and improving mood. Recognizing the market need for elder-care robots, iRobot (maker of vacuums and killer bots) has opened a new division specifically to serve seniors.

  One of the fastest-growing types of elder-care bot is the telepresence robot—a machine that allows people to “move virtually through a distant building by remotely controlling a wheeled robot equipped with a camera, microphone, loudspeaker and screen displaying live video” of the face of the person controlling the bot over the Internet. Robots such as the MantaroBot and the EU’s GiraffPlus allow children to “beam in” from thousands of miles away and remotely drive a wheeled bot with an iPad-type face in order to interact with aging parents. Relatives can check on their elderly loved ones, eat meals with them via Skype-like video conversations, and even confirm that they have woken up and not fallen in their own homes. It’s not just worried adults who are using telepresence bots to check in on their parents; increasingly, the machines are becoming mainstays in hospitals as well. iRobot’s RP-VITA (Remote Presence Virtual + Independent Telemedicine Assistant) allows doctors, particularly specialists, to appear at their patients’ bedsides and diagnose them without having to be physically in the same room. With the push of a button on an iPad, a doctor across town or around the world can direct the robot to the patient’s bedside, zoom in on his pupils, and even have a nurse place a stethoscope on his chest to hear his heartbeat remotely. Whether robots have better bedside manner remains to be seen.

  Businesses too are starting to realize the value of having telepresence robots in the office, allowing employees to abstract their physical presence through remotely controlled devices. Companies such as Suitable Technologies and Double Robotics have models that cost around $3,000 and allow employees to work from home while their robotic alter egos wander the hallways at the office, walk up to colleagues at their desks, or catch up on all the latest gossip in the lunchroom. Even the famed NSA leaker Edward Snowden used a telepresence bot to give a presentation to an audience of thousands at TED 2014 in Vancouver, all without the bother of leaving the safety of his undisclosed location in Russia.

  Humans Need Not Apply

  As time moves on, we will see robots emerge for every possible job and purpose. Starwood hotels have already introduced robotic butlers, “on call day and night.” They can find their way to any guest’s room and deliver that toothbrush you forgot or the room service you ordered, freeing up staff to work on other tasks. Momentum Machines’ burger bot can crank out 360 perfectly cooked-to-order hamburgers per hour, each with the precise toppings (lettuce, ketchup, onions) requested by the customer.

  A 2013 study by Oxford University on the future of work conducted a detailed analysis of over seven hundred occupations and concluded that 47 percent of U.S. employees are at high risk of losing their jobs to robotic automation as soon as 2023. Those working in the transportation field (taxi drivers, bus drivers, long-haul truck drivers, FedEx drivers, pizza delivery drivers) face particular risk, with up to a 90 percent certainty that their jobs will be replaced by autonomous vehicles. But it’s not just low-level positions that are at risk. News outlets such as the Associated Press and the Los Angeles Times are using bots and algorithms to automatically write thousands of articles on topics as diverse as homicides, earthquakes, and the latest business earnings. Biopsies can be “analyzed more efficiently by image-processing software than lab techs,” and QuickBooks can handle the majority of tasks performed by an accountant. Many believe that it is the growth of automation and robotics that has led to the deep wage stagnation we have seen since 2004. Bill Gates was prescient in his predictions regarding the future of robotics and the presence of a robot in every home and office. But whether your job is flipping burgers, driving a truck, or writing breaking news, anybody who has read or seen John Steinbeck’s The Grapes of Wrath knows that industrial transitions are brutal for those left behind.

  Now even foreign outsourcing may be replaced by robo-sourcing, eliminating more jobs for human beings both domestically and overseas. As machines become smarter and more capable, the human race may enjoy an incredible renaissance in which all our daily chores are carried out by bots, leaving us to a life of leisure, with unlimited free time to sing, dance, and paint while sunning our atrophying muscles on a beach somewhere. Alternatively, society might descend into chaos as the mass unemployed and unemployable revolt against the few human czars controlling the world’s robots. The scenario could tip in either direction depending on the public policy, legal, economic, and ethical decisions we make today.

  Robot Rights, Law, Ethics, and Privacy

  A man without ethics is a wild beast loosed upon this world.

  ALBERT CAMUS

  While nobody would argue your Roomba should be covered under the UN’s Universal Declaration of Human Rights, as robots grow more intelligent and potentially sentient in the distant future, such questions will undoubtedly be raised. In the meantime, the robots in our world bring with them a wide array of public policy, legal, and ethical issues beyond their impact on the workforce. If a robotic surgeon accidentally punctures an artery, leading to a patient’s death, can the surviving family sue the robot or its manufacturer for malpractice? When a self-driving car gets in an accident, who will be at fault? Can the non-driving passenger be sued? The car company? The firm that wrote the driving and navigation software? When it’s clear an autonomous vehicle is about to become involved in an unavoidable collision, should its crash-optimization algorithm cause it to hit the telephone pole (killing the passenger), the motorcyclist to the left, the Chevy on the right, or the pedestrian straight ahead? Though our ability to build and field robots is racing ahead exponentially, ethically we remain infants.

  While ubiquitous robots are on the horizon, there is a paucity of roboethicists, policy experts, and legislators capable of keeping up with the complex questions these scientific developments will pose for humanity. In particular, we will see new and previously unimaginable assaults upon our privacy. Just like social media sites, apps, and mobile phones before them, robots will come with terms of service that will detail conditions that protect robo-manufacturers and affect your privacy. Though your robot vacuum, elder-care bot, or play toy may sit in the corner looking innocuous and cute, ready to serve at a moment’s notice, it is armed with an array of cameras, microphones, and sensors capable of seeing and recording everything you do in the privacy of your own home.

  Hobbyist drones equipped with HD cameras are already posing privacy threats previously not encountered. In mid-2014, a young woman in Seattle living on the twenty-sixth floor of an apartment building was surprised to see a quadcopter (a small helicopter with four rotors) hovering just outside her window filming her as she changed in her bedroom—a robotic Peeping Tom for the twenty-first century. In another Seattle incident, a man decided to hover his camera-equipped personal drone over a neighbor’s backyard. When a woman heard the noise, which she thought was a garden weed whacker, she opened the curtains of her second-story bedroom to investigate, only to see a drone hovering outside her window just a few feet away. She sent her husband to investigate, and he found a neighbor flying the drone, but when the robo-invading pilot was asked to stop filming immediately, he refused, claiming it was legal for him to do so. He may be right.

  While walking on somebody else’s lawn is trespassing, flying over it with a helicopter (large or small) is not, thanks to a 1946 Supreme Court decision that declared, “Air is a public highway.” Of course the Seattle cops called to the scene of these incidents were confused, and they aren’t the only ones. In a 2012 report on private drones flying over the United States, the GAO concluded, “Currently, no federal agency has specific statutory responsibility to regulate privacy matters relating to Unmanned Aircraft Systems for the entire federal government. Given the ability of these devices to house high-powered cameras, infrared sensors, facial recognition technology, and license plate readers, some argue that drones present a substantial privacy risk.” Ya think?

  Questions about who owns the air rights above a property and who can be filmed where are just the beginning of a deeply complex set of legal, ethical, and public policy matters that will undoubtedly arise with much greater frequency as the number of robots in use throughout our society proliferates. Perhaps the earliest attempts to deal with these foundational questions came in 1942 from Isaac Asimov when he published his short story “Runaround,” in which he coined the term “robotics” and presented his famed Three Laws of Robotics:

  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

  While Asimov gives us an excellent starting point by which to consider these issues, we certainly could not program a machine at this point in time to concretely understand the concept of breakfast, let alone a construct as abstract as “harm.” Robots would likely require a much more flexible and adaptive code of ethics, one that we have not even come close to constructing thus far. Yet the drive toward widespread industrial, military, medical, and personal robotics continues, and accidents are bound to happen.

  Danger, Will Robinson

  “Danger, Will Robinson!” was the phrase oft repeated by the protective robot guarding a young space adventurer to warn the boy of impending threats in the 1960s television show Lost in Space. If only all robots took such precautions in their interactions with all human beings. As people interact more and more with robots, there are unforeseen consequences, not the least of which are serious injuries or even death at the hands of machines, even those meant to help. In 2013, the FDA launched an investigation into numerous incidents of harm caused by Intuitive Surgical’s da Vinci medical robot, incidents that the company allegedly failed to report to the government as required by law. In one instance, a man suffered a perforated colon during his prostate surgery; in another, the robot grabbed a patient’s abdominal tissue during a colorectal surgery, refusing to release it despite the human surgeon’s efforts to open the jaws of the machine’s hand. It wasn’t until the da Vinci was fully rebooted that it finally let go. In another case, a woman was struck in the face by a surgical robot during her hysterectomy.

  The overwhelming majority of injuries in human-robot interaction occur as a result not of surgical bots but of industrial ones. Though no comprehensive global statistics on robo-accidents exist, there are numerous reports of such incidents. In 2007, for example, a worker in Stockholm who thought he had turned off the power to a robot approached the machine to repair it. Unfortunately, the power was still on, and the robot suddenly came to life, firmly grabbed the man by the head, lifted him off the ground, and broke four of his ribs before he was able to struggle free. In a collision between man and machine, it is the machine that is likely to win, and many cases have resulted in death. One of the first cases of robotic homicide occurred in 1981, when a thirty-seven-year-old employee of Kawasaki Heavy Industries named Kenji Urada was working to repair a robot that he had not turned off completely. Unable to sense him, the robot’s powerful hydraulic arm accidentally knocked the man into a nearby grinding machine, where he was crushed to death. Back in the United States, a car factory employee was killed in 2001 when he entered a robot’s unlocked cage to clean it. The machine, mistaking the worker for an auto part, grabbed the man by the neck, pinning him until he was asphyxiated. According to the Occupational Safety and Health Administration, at least thirty-three such deaths have occurred in the United States alone—a number that is likely to go up as robots leave their cages and begin walking among us. Apparently, not all robots have heard of Mr. Asimov and his three laws.

  Robotic accidents become much worse once someone decides it is a good idea to give robots fully automatic weapons, as members of the South African National Defence Force discovered in 2007 during a live-fire training exercise. A computer-controlled Oerlikon MK5 twin-barreled anti-aircraft gun suffered an apparent software glitch, causing the device to fire in full-auto mode, at a rate of 550 rounds per minute, while spinning wildly in 360-degree circles like an out-of-control garden hose. When it was all over, nine soldiers, including several female officers, were dead, and another fourteen were gravely injured, leaving behind a blood-splattered scene reminiscent of a Terminator movie. The incident goes to show that when a robot suffers a computer “blue screen of death,” it can lead to actual deaths and have far-reaching impact in our shared physical space. It’s not just industrial or ground-based bots that can fail; so too can flying ones.

  According to a Washington Post report, more than four hundred military UAVs have accidentally fallen from the sky, domestically and overseas, “slamming into homes, farms, runways, highways and in one case an Air Force C-130 Hercules cargo plane while mid-flight.” That nobody has died in any of the reported incidents is little short of a miracle. In 2009, a drone pilot lost control of an armed Reaper UAV with a sixty-six-foot wingspan, which then flew uncontrollably across Afghanistan. The renegade flying robot was stopped only when U.S. jet fighters intervened and shot it down before it could enter the airspace of Tajikistan.

  Closer to home, nearly fifty drones have crashed in the United States, including a 375-pound army drone that smashed into the ground next to a Pennsylvania elementary school, “just a few minutes after students went home for the day.” Robotic accidents remain the exception, occurring relatively infrequently, and active measures are being taken to equip robots with collision detection and avoidance systems to prevent many of these industrial-type accidents. Nevertheless, given the expected tremendous growth in home bots, work bots, factory bots, doc bots, and war bots, the potential for harm is far from trivial—a risk that will grow significantly when robots join the IoT and can be hacked from afar by malicious actors.

  Hacking Robots

  In the future, when Microsoft leaves a security flaw in their code it won’t mean that somebody hacks your computer. It will mean that somebody takes control of your servant robot and it stands in your bedroom doorway sharpening a knife and watching you sleep.

  DANIEL H. WILSON, ROBOTICIST AND AUTHOR

  There are dozens of robotic operating systems, mostly proprietary, running everything from military weapons systems to SCADA industrial control systems. But just as desktop computers and smart phones coalesced around a few leading operating systems, the same is happening in robotics with ROS—the Robot Operating System. That consolidation will have a tremendous positive impact on the future of robotics, because programmers will not have to reinvent the wheel every time they want to encode a particular function in a robot. ROS is free and open source, providing modules for robotics simulation, movement, vision, navigation, perception, facial recognition, and so forth. It is exactly these types of open-source community efforts and shared experience building, barely conceivable just a few years ago, that allow companies like Rethink Robotics to offer Baxter for $22,000 instead of $200,000.
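
  To make that reuse concrete, here is a minimal sketch of what a ROS node can look like, assuming the ROS 1 Python client library (rospy) and the standard geometry_msgs message package; the node and topic names are illustrative rather than tied to any particular robot. Instead of writing motor-control code from scratch, a developer simply publishes velocity commands to a conventional topic and lets the shared plumbing do the rest.

    #!/usr/bin/env python
    # Minimal ROS 1 node sketch: publish forward-velocity commands at 10 Hz.
    # Assumes rospy and geometry_msgs are installed; the node and topic names
    # below are illustrative only.
    import rospy
    from geometry_msgs.msg import Twist

    def drive_forward():
        rospy.init_node('simple_driver')                        # register with the ROS master
        pub = rospy.Publisher('cmd_vel', Twist, queue_size=10)  # velocity-command topic
        rate = rospy.Rate(10)                                   # ten messages per second
        cmd = Twist()
        cmd.linear.x = 0.2                                      # creep forward at 0.2 m/s
        while not rospy.is_shutdown():
            pub.publish(cmd)
            rate.sleep()

    if __name__ == '__main__':
        try:
            drive_forward()
        except rospy.ROSInterruptException:
            pass

  The same few lines can drive anything from a hobby rover to an industrial platform listening on that topic, which is exactly why a shared operating system lowers costs so dramatically and, as discussed below, why it also hands attackers a single, well-documented target.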

  ROS, originally developed at Willow Garage in 2007, is now maintained by the Open Source Robotics Foundation and runs on everything from small toys to large industrial robots. As noted numerous times throughout this book, there has never been a computer that could not be hacked, a dictum that applies to robots as well, with important implications for our common security. The hackers’ task will be unwittingly assisted by a standardized Robot Operating System, which will give them a unified target to attack. Standardization around a universal ROS paves the way for large-scale cyber attacks, just as we saw with PCs. Importantly, there is a momentous difference between hacking robots and hacking other computing systems and objects on the IoT: robots will be perennially moving about our physical space, walking, driving, running, flying, and swimming all around us. Robots, connected to the Internet, can be hacked and redirected in any number of dangerous and sinister ways—a fact that has not escaped the attention of criminals and terrorists alike. When robots are commandeered, not only can hackers use the machine’s sensors to spy, but they can also use the device’s robotic actuators, arms, legs, and wheels to follow, hit, kick, push, shoot, stab, drag, and kill.

 
