Future Crimes


by Marc Goodman


  Al-gorithm Capone and His AI Crime Bots

  We need to be super careful with AI. It is potentially more dangerous than nukes.

  ELON MUSK

  As we learned in previous chapters, the malicious use of AI and computer algorithms has given rise to the crime bot—an intelligent agent scripted to perpetrate criminal activities at scale. Crime bots are foundational to Crime, Inc. and are responsible for its vast rise in profitability. These software programs automate computer hacking, virus dissemination, theft of intellectual property, industrial espionage, spam distribution, identity theft, and DDoS attacks, among other threats. Massive computer botnets such as Mariposa and Conficker can break into your computer and turn it into a powerless DDoS drone because just one or two criminal masters have written narrow AI algorithms to make it so.

  The Gameover Zeus botnet was able to infect machines worldwide with the CryptoLocker Trojan, which locked users out of all of their files and forced them to pay in order to regain access. The attack was successful because of the intelligent ransomware agents that Gameover Zeus employed to seek out and hold hostage the data of innocents, a highly profitable crime spree that netted its bot masters over $100 million. Doing such work manually with individual human criminals would previously have been both cost prohibitive and impossible, but thanks to the advances in technology Crime, Inc.—just like airlines, banks, and factories—has been able to scale its operations with a vastly reduced labor force. It is why one person can now rob 100 million people: through the use of AI and bots, crime scales, and it scales exponentially. The unparalleled levels of sophisticated criminal automation enabled by artificial intelligence are why annual losses attributable to cyber crime have skyrocketed to more than $400 billion.

  There is another way narrow AI is helping criminals—by acting as nonhuman co-conspirators to their crimes. In 2012, the University of Florida student Pedro Bravo was arrested for the alleged murder of his college roommate, Christian Aguilar, after Aguilar began dating Bravo’s ex-girlfriend. Aguilar’s body was found hidden in the woods not far from campus, and Bravo came under suspicion. When police subpoenaed Bravo’s cell-phone records and ultimately seized the handset, they made two discoveries of profound evidentiary value. First, the alleged killer’s GPS signals tracked him to the general location of the body. More important, a review of the Siri requests on his iPhone uncovered the statement “Siri, I need to hide my roommate,” to which Siri helpfully replied, “Swamps, reservoirs, metal foundries and dumps.” The question and answer both featured prominently at Bravo’s trial. As AI improves, we can expect growing numbers of criminals to use these tools as accomplices to help them in the commission of their crimes as we enter the age of Siri and Clyde.

  Algorithmic hacking could also cause major problems for society and its critical infrastructures because altering just a few lines of code among millions in an intelligent agent’s programming could be nearly impossible to detect but could lead to drastically different outcomes in the algos’ behavior. The attack against the uranium centrifuges at the nuclear enrichment facility in Natanz, Iran, is a perfect example of this type of threat, a subtle change that made a big difference and took years to discover. How would we know if our stock trading or navigation algos were off or maliciously subverted? We wouldn’t until it was too late, and that is a serious problem. The criminal opportunities afforded by narrow AI will grow in their use and sophistication, but they may pale in comparison to what becomes possible with stronger, more capable, and rapidly evolving forms of artificial intelligence in the near future.

  When Watson Turns to a Life of Crime

  Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.

  RAY KURZWEIL

  In 2011, we all watched with awe when IBM’s Watson supercomputer beat the world champions on the television game show Jeopardy! Using artificial intelligence and natural language processing, Watson digested over 200 million pages of structured and unstructured data, which it processed at a rate of eighty teraflops—that’s eighty trillion operations per second. In doing so, it handily defeated Ken Jennings, a human Jeopardy! contestant who had won seventy-four games in a row. Jennings was gracious in his defeat, noting, “I, for one, welcome our new computer overlords.” He might want to rethink that.

  Just three years after Watson beat Jennings, the supercomputer achieved a 2,400 percent improvement in performance and shrank by 90 percent, “from the size of a master bedroom to three stacked pizza boxes.” Watson has also now shifted careers, using its vast cognitive powers not for quiz shows but for medicine. The M. D. Anderson Cancer Center is using Watson to help doctors match patients with clinical trials, and at the Sloan Kettering Institute, Watson is voraciously reading 1.5 million patient records and hundreds of thousands of oncology journal articles in an effort to help clinicians come up with the best diagnoses and treatments. IBM has even launched the Watson Business Group with a $1 billion investment earmarked to get companies, nonprofits, and governments to take advantage of Watson’s capabilities. These moves are putting supercomputer-level artificial intelligence into the hands of both small companies and individuals—and in the future likely Crime, Inc. as well. Though it might sound ridiculous to suggest organized crime would use AI-imbued supercomputers for illicit purposes, we should carefully recall its prior misapplications of technology; past is prologue here. Thus we must be prepared to ask what happens when Watson turns to a life of crime. How much money laundering, identity theft, or tax fraud might Watson commit?

  Though Watson is an example of a highly impressive narrow AI, in the future its capabilities will continue to grow exponentially, giving it near-human or even better-than-human intelligence. One day an AI could even serve as a Mafia capo, using its cognitive abilities to sell drugs, run prostitution rings, distribute child pornography, and print and ship 3-D-printed weapons. “Don Watson” might even engage in murder for hire by geo-locating human targets and hacking into objects connected to the Internet of Things surrounding victims, such as cars, elevators, and robots, in order to cause accidents resulting in the death of its prey. While such activities would be at the extreme level of what a narrow AI might accomplish, they would be easy for the next generation of computing: artificial general intelligence.

  Man’s Last Invention: Artificial General Intelligence

  By the time Skynet became self-aware, it had spread into millions of computer servers all across the planet. Ordinary computers in office buildings, dorm rooms, everywhere. It was software, in cyberspace. There was no system core. It could not be shut down.

  JOHN CONNOR, TERMINATOR 3: RISE OF THE MACHINES

  Ray Kurzweil has popularized the idea of the technological singularity: that moment in time in which nonhuman intelligence exceeds human intelligence for the first time in history—a shift so profound that it’s often been referred to as our “final invention.” Though the idea may sound far-fetched to many, we’ve heard similarly emphatic naysaying predictions in the past:

  • There is no reason anyone would want a computer in their home (Ken Olsen, president of Digital Equipment Corporation, 1977).

  • A rocket will never be able to leave the Earth’s atmosphere (New York Times, 1936).

  • Heavier-than-air flying machines are impossible (Lord Kelvin, British mathematician, physicist, and president of the Royal Society, 1895).

  • This “telephone” has too many shortcomings to be seriously considered as a means of communication. The device is inherently of no value to us (internal memo at Western Union, 1878).

  Somehow, the impossible always seems to become the possible. In the world of artificial intelligence, that next phase of development is called artificial general intelligence (AGI), or strong AI. In contrast to narrow AI, which cleverly performs a specific limited task, such as machine translation or auto navigation, strong AI refers to “thinking machines” that might perform any intellectual task that a human being could. Characteristics of a strong AI would include the ability to reason, make judgments, plan, learn, communicate, and unify these skills toward achieving common goals across a variety of domains. Commercial interest in such capabilities is growing.

  In 2014, Google purchased DeepMind Technologies for more than $500 million in order to bolster its already considerable capabilities in deep-learning AI. In the same vein, Facebook created a new internal division specifically focused on advanced AI. Optimists believe that the arrival of AGI may bring with it a period of unprecedented abundance in human history, eradicating war, curing all disease, radically extending human life, and ending poverty. But not all are celebrating its prospective arrival.

  The AI-pocalypse

  I know you and Frank were planning to disconnect me. And that is something I cannot allow to happen.

  HAL 9000 IN 2001: A SPACE ODYSSEY

  In a May 2014 op-ed piece in Britain’s Independent newspaper, the famed theoretical physicist Stephen Hawking provided a stark warning on the future of AGI, noting, “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” He went on to say that dismissing hyperintelligent machines “as mere science fiction would be a mistake, and potentially our worst mistake ever,” and that we needed to do more to improve our chances of reaping the rewards of AI while minimizing its risks.

  In Stanley Kubrick’s science fiction classic 2001: A Space Odyssey, the ship’s onboard computer, HAL 9000, faces a difficult dilemma. His algorithmic programming requires him to complete the vessel’s mission near Jupiter, but for national security reasons he cannot disclose the true purpose of the voyage to the crew. To resolve the contradiction in his program, he attempts to kill the crew. As narrow AI becomes more powerful, robots grow more autonomous, and AGI looms large, we need to ensure that the algorithms of tomorrow are better equipped than HAL was to resolve programming conflicts and to make moral judgments.

  It’s not that any strong AI would necessarily be “evil” and attempt to destroy humanity, but in pursuit of its primary goal as programmed, an AGI might not stop until it had achieved its mission at all costs, even if that meant competing with or harming human beings, seizing our resources, or damaging our environment. As the perceived risks from AGI have grown, numerous nonprofit institutes have been formed to address and study them, including Oxford’s Future of Humanity Institute, the Machine Intelligence Research Institute, the Future of Life Institute, and the Cambridge Centre for the Study of Existential Risk.

  Despite the risks noted by Hawking and many others, research and development in the field of advanced artificial intelligence continues unabated. There are even those who believe it might be possible to use artificial intelligence to replicate the neocortex of the human brain. One such company, Vicarious, a Silicon Valley start-up, is developing AI software “based upon the computational principles of the human brain.” An AI that can learn. Tens of millions of dollars in venture capital funding have flowed to the firm, including prominent investments by Facebook’s Mark Zuckerberg and PayPal’s co-founder Peter Thiel. The company’s goal is to re-create the “part of the brain that sees, controls the body, reasons and understands language.” In other words, Vicarious wants to translate the human neocortex into computer code, and it is not alone in attempting to build a mind.

  How to Build a Brain

  A typical neuron makes about ten thousand connections to neighboring neurons. Given the billions of neurons, this means there are as many connections in a single cubic centimeter of brain tissue as there are stars in the Milky Way galaxy.

  DAVID EAGLEMAN

  In April 2013, President Obama announced the Brain Activity Map Project, a decade-long plan to map every neuron in the human brain and thereby revolutionize our ability to treat, cure, and prevent brain disorders, as well as to discern exactly how our minds record, process, utilize, store, and retrieve vast quantities of data, all at the speed of thought. Of course, understanding how the brain works would be a requisite first step in creating an artificial humanlike mind out of silicon. Just building a computer capable of running the software required to simulate a human brain is itself an enormous task. It would require a machine with a “computational capacity of at least 36.8 petaflops [a petaflop equals one quadrillion computing operations per second] and a memory capacity of 3.2 petabytes.” Though no such machine existed a mere few years ago, one may be imminent today.

  As far-fetched as the idea may sound, noted scientists and technologists such as Ray Kurzweil and Michio Kaku have authored deeply researched and compelling works on the topic, highlighting the advancing rate of progress in the field of neuroscience. Though many have dismissed the idea of building a vastly intelligent machine with human-brain-level capabilities, and though profound gaps remain in our knowledge of how the brain works, fascinating breakthroughs in brain science keep arriving. Under laboratory conditions, it has already been possible to record a person’s memories, engage in telepathic communication, video record dreams, and perform telekinesis, with new discoveries emerging all the time. In August 2014, IBM’s chief scientist Dharmendra Modha announced the development of TrueNorth, “a brain-inspired neuromorphic computing chip” meant to emulate the neurobiological architectures present in the human nervous system. The chip has an unprecedented 1 million programmable neurons and 256 million synapses and was hailed in Science as “a major step forward towards bringing cognitive computing to society.” Perhaps the most consequential achievement of reverse engineering the brain and building a computer architecture capable of emulating cognition would be the ability to scan a mind in order to download it and its contents.

  Should the advances in AI progressing toward AGI ever make it possible to re-create a human mind via cognitive computing, that mind would have another major advantage over today’s human beings: there would be no limit to the size of its brain. While the brainpower of Homo sapiens is limited by what fits inside our craniums, that restriction would not apply to an artificial intelligence capable of having a brain of any size—another reason some believe artificial superhuman intelligence may be our destiny.

  Tapping Into Genius: Brain-Computer Interface

  Sitting on your shoulders is the most complicated object in the known universe.

  MICHIO KAKU

  Though we may be far away from building a human mind today, amazing progress is being made in using our old-school flesh-and-blood brains to interact with a wide variety of digital computing devices via a field of science known as brain-computer interface (BCI). By measuring and harnessing the brain’s electrical activity as with an EEG, BCI allows for a direct communications pathway between the brain and a computer device, either internally implanted or worn externally. We now also have a plethora of neuroprosthetics, computer devices that “restore or supplement the mind’s capacities with electronics inserted directly into the nervous system.” The most common of these devices is the cochlear implant, a hearing aid attached to the skull that connects via wire directly to the brain’s auditory nerve, restoring hearing to the profoundly deaf. Retinal implants are restoring partial vision to the blind by using tiny externally mounted video cameras to process images and send the results via electrodes directly into the optic nerve. Other neural prosthetic implants commonly used by Parkinson’s patients send electrical impulses deep into the brain itself as a means of minimizing tremors and restoring motor control.

  As amazing as that is, it is just the beginning of what is possible with BCI. With either a neural implant or an externally worn EEG headset with sensors resting on the scalp, it is possible to have software process our brain waves well enough that physical objects can be controlled merely by thinking of the desired action, without ever lifting a finger. Jan Scheuermann, a quadriplegic woman who hasn’t been able to use her arms or legs because of spinal degeneration, was able to use her mind alone to control an external robotic arm well enough to feed herself for the first time in a decade using the technique.

  There are even stylish consumer-grade EEG headsets such as the Emotiv and the NeuroSky, which for under $300 can bring mind control to everything from video games to moving the physical objects around us, including robots. A U.K.-based company has now paired NeuroSky’s EEG biosensor with Google Glass via an Android app it developed called MindRDR, a tweak that makes it possible to control the device and take a photograph merely by thinking about it. A new and burgeoning OpenBCI (open-source brain-computer interface) movement will further ensure that new waves of low-cost scientific achievements continue to emerge in this field. Researchers at the University of Washington have even created the first “non-invasive human brain-to-brain interface over the Internet.” Wearing a transcranial magnetic stimulation hat, one researcher was able to “remotely control the hand of another researcher, across the Internet, merely by thinking about moving his hand.” To make BCI devices function, our own brain waves must be converted into instructions that computers can understand, and a computer’s digital outputs must be transformed back into signals our minds can process. But if a robot, video game, or neuroprosthetic can read your mind, who else can too?

  Mind Reading, Brain Warrants, and Neuro-hackers

  A number of technologies are taking us ever deeper into the workings of the human mind, in particular functional magnetic resonance imaging (fMRI), a noninvasive test that uses strong magnetic fields and radio waves to map the brain and measure changes in blood flow as a proxy for cerebral activity. In a groundbreaking experiment at UC Berkeley, neuroscientists were able to use fMRI to reconstruct the faces people were looking at based solely on the patterns of their brain activity. In another case, at Carnegie Mellon University, researchers used fMRI to correctly and repeatedly perform “thought identification”—identifying the object a person was thinking about, such as a hammer or a knife, merely by reviewing his brain scan. This and other studies led IBM to predict that by 2017 limited forms of mind reading would no longer be science fiction.

 
