by Marc Goodman
Wilson’s efforts have left Congress in the dust: lawmakers failed to pass the legislation introduced to prohibit 3-D-printed weapons. These plastic firearms can be nearly impossible to detect with standard metal detectors, as a team of Israeli investigative reporters proved by smuggling a 3-D-printed gun into the highly secure Knesset building, twice. In the meantime, dozens of other digital gunsmiths have improved upon the original Liberator and posted their own digital gun files online. Other repositories of online designs for 3-D-printed weapons have been created, including some with plans for hand grenades and mortar rounds. The FBI’s Terrorist Explosive Device Analytical Center is concerned by the trend and recently purchased its own 3-D printer to investigate how terrorists might use such devices to build IEDs. The weapons conundrum posed by 3-D printers is not static: as these devices grow in size and capability, they will be able to fabricate even larger weapons, including shoulder-fired missile launchers and large military-style robots.
With digital manufacturing, national border inspections become meaningless. Why risk smuggling weapons or drugs into the country when you can simply print your guns, pills, or bombs after you cross the border? The challenges 3-D printing poses to international security are not just limited to crime and terrorism; they will affect long-standing instruments of international law, such as weapons bans. Need parts for uranium centrifuges in Iran? No problem, just print them. Embargoes and even naval blockades, our traditional tools for ensuring global security against rogue regimes, will fail epically as larger and more sophisticated 3-D printers become mainstream. The old paradigms of national borders, guards, gates, and tall fences may well become outdated as technology develops much more rapidly than our security mechanisms—the new normal that will be even further exacerbated by a host of new science-fiction-like technologies coming online in the very near future.
CHAPTER 16
Next-Generation Security Threats:
Why Cyber Was Only the Beginning
We have arranged things so that almost no one understands science and technology. This is a prescription for disaster. We might get away with it for a while, but sooner or later this combustible mixture of ignorance and power is going to blow up in our faces.
CARL SAGAN
“Breaking: Two Explosions in the White House and Barack Obama Is Injured,” reported the Associated Press on its official Twitter news feed at 1:07 p.m. on April 23, 2013. In an instant, the AP’s two million followers had retweeted the news thousands of times, and the world went into panic mode. On Wall Street, the reaction was both swift and staggering: the Dow Jones Industrial Average and the S&P 500 plummeted. Within three minutes, the AP’s tweet had wiped out $136 billion in shareholder value.
Thereafter, the tweets flew fast and furious. At 1:13 p.m., the AP confirmed that the explosion-reporting tweet was bogus. At 1:16 p.m., the White House press secretary, Jay Carney, was forced to comment on live TV: “I can say that the President is fine, I was just with him.” Finally at 1:17 p.m., the Syrian Electronic Army (SEA) admitted it had hacked the Associated Press. Within a matter of nine minutes, the SEA was able to rock some of the world’s most powerful institutions, from Wall Street to the White House, with one wayward tweet. What the hell just happened?
When the news of an explosion at 1600 Pennsylvania Avenue broke, the market suspected a probable terrorist attack and immediately foresaw the profound negative impact it would have; after all, 9/11 was estimated to have cost America $3.3 trillion in economic losses. Traders immediately began dumping their shares, and the exchanges went into free fall. But these traders weren’t the Gordon Gekko, masters-of-the-universe types with slicked-back hair and $10,000 suits of yesteryear. In fact, they weren’t even human. At hedge funds, investment banks, and pension funds across the tristate area and around the world, networks of supercomputers were doing the trading en masse, slaves to their algorithmic programming.
Gekko and the majority of his human lot on the trading floors lost out to computers in 1999, replaced by ultrafast electronic high-frequency trading (HFT) platforms. These algorithms (algos) are a form of artificial intelligence, fully empowered to make trading decisions and spend money on their clients’ behalf. As of 2015, they represent up to 70 percent of U.S. equity trading volume. These software programs (written by human beings) carry out step-by-step calculations and automated reasoning in order to respond to fluctuations in the market and parse machine-readable news to drive maximal profit to their masters. Simplistically, positive quarterly earnings from a company mean buy, and a terrorist attack means sell. The supercomputers behind the trading platforms are voracious readers, working 24/7 to uncover tidbits of data that can move the markets. Just one news service alone, Thomson Reuters, feeds these HFT algos by scanning fifty thousand distinct news sources and four million social media sites at speeds no human being could ever possibly match. The vast networks of HFT machines can collectively make trillions of calculations per second, and trades can be executed in less than half a millionth of a second, thousands of times faster than the blink of an eye.
When the artificial-intelligence-based algorithmic trade bots came across a tweet mentioning “explosions,” “Obama,” and “White House” in the same sentence from a source they had been trained to trust, the Associated Press, it took them just a few thousandths of a second to respond. As they did, other algorithms picked up on the activity, and soon a full-on snowball effect was in play. Algorithms began selling en masse, erasing $136 billion in valuation in a mere three minutes. Any human being looking closely at the tweet might have noticed it was poorly phrased, was not in AP’s style format, and failed to capitalize the word “breaking,” as is AP’s convention; such subtleties were lost on the robo-traders. By then, however, the damage had been done. When the dust settled, many firms had lost millions of dollars. The Syrian Electronic Army, an international hacking group with ties to Bashar al-Assad’s regime, admitted its role in the attack and mocked the president by using the hashtag #byebyeObama on its own Twitter account, @official_SEA6. It also was happy to let the world know that the password for the AP’s Twitter account was APM@rketing. FBI and intelligence officials had come across the SEA before, when it previously hacked the New York Times, the BBC, and CBS News, but its latest attack was enough to have it branded a terrorist organization by some and land it on the FBI’s most wanted list.
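To make the mechanics concrete, here is a minimal Python sketch of the kind of keyword-driven trading rule described above. It is purely illustrative: the source list, word lists, and function names are invented here, and real HFT systems are immensely more sophisticated. But it captures the essential failure: a bot that trusts the byline and matches keywords has no way to notice that a tweet’s phrasing or capitalization is off.

```python
# Hypothetical sketch of a naive news-driven trading rule; not any
# firm's actual system. Real HFT platforms are vastly more complex.

TRUSTED_SOURCES = {"AP", "Reuters"}       # sources the bot has learned to trust
PANIC_WORDS = {"explosion", "explosions", "attack", "bomb"}
BULLISH_WORDS = {"beats", "record", "profit"}

def react_to_headline(source: str, headline: str) -> str:
    """Return a trade signal for a headline: 'SELL', 'BUY', or 'HOLD'."""
    if source not in TRUSTED_SOURCES:
        return "HOLD"
    words = set(headline.lower().split())
    if words & PANIC_WORDS:
        return "SELL"       # a terrorist attack means sell
    if words & BULLISH_WORDS:
        return "BUY"        # positive earnings mean buy
    return "HOLD"

# The fake AP tweet trips the panic rule in microseconds; nothing here
# checks style, capitalization, or phrasing the way a human reader would.
print(react_to_headline("AP", "Breaking: Two Explosions in the White House "
                              "and Barack Obama Is Injured"))  # -> SELL
```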
The AP Twitter White House explosion debacle was not the first time algorithms had run amok on Wall Street, and it surely won’t be the last. More important, a Securities and Exchange Commission investigation into these types of incidents, including the infamous Flash Crash in May 2010, concluded the market, dominated by ultrafast trading algorithms, “had become so fragmented and fragile that a single large trade could send stocks into a sudden spiral.” In a world now measured in millionths of a second and heading exponentially faster all the time, there is literally no time for human intervention once the algos begin to go awry. The Syrian Electronic Army’s ability to roil global financial markets in an instant lays bare the economic risks of cyber terrorism to a deeply interconnected world, automated by computers and operating at near the speed of light. But this story reveals much more than just a tale of woe about the perilous state of our common economic security. It is a harbinger of things to come. Whether we realize it or not, we are increasingly turning more of our lives over to computer algorithms and artificial intelligence to make decisions for us. For those who recall John Connor’s rather unpleasant interactions with Skynet in the film The Terminator, it is a decision that is fraught with risk.
Nearly Intelligent
The question of whether a computer is playing chess, or doing long division, or translating Chinese, is like the question of whether robots can murder or airplanes can fly … These are questions of decision, not fact; decision as to whether to adopt a certain metaphoric extension of common usage.
NOAM CHOMSKY
When the computer scientist John McCarthy coined the term “artificial intelligence” in 1956, he defined it succinctly as “the science and engineering of making intelligent machines.” Today artificial intelligence (AI) more broadly refers to the study and creation of information systems capable of performing tasks that resemble human problem-solving capabilities, using computer algorithms to do things that would normally require human intelligence, such as speech recognition, visual perception, and decision making. These computers and software agents are not self-aware or intelligent in the way people are; rather, they are tools that carry out functionalities encoded in them and inherited from the intelligence of their human programmers. This is the world of narrow or weak AI, and it surrounds us daily.
Weak AI can be a powerful means for accomplishing specific and narrow tasks. When Amazon, TiVo, or Netflix recommends a book, TV show, or film to you, it is doing so based on your prior purchases, viewing history, and demographic data that it crunches through its AI algorithms. When you get an automated phone call from your credit card company flagging possible fraud on your account, it’s AI saying, “Hmm, Jane doesn’t normally purchase cosmetics in Manhattan and a laptop in Lagos thirty minutes apart.” Google Translate could not be accomplished without AI, nor could your car’s GPS navigation or your chat with Siri.
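The fraud-flagging example reduces to a few lines of code. The sketch below implements a toy “impossible travel” rule of the sort just described; the speed threshold, field names, and coordinates are assumptions chosen for illustration, not any card network’s actual logic.

```python
# A toy "impossible travel" fraud rule; thresholds and the haversine
# helper are illustrative assumptions, not a real issuer's system.
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def looks_fraudulent(txn1, txn2, max_speed_kmh=900):
    """Flag two card swipes if the implied travel speed beats a jetliner's."""
    hours = abs(txn2["time"] - txn1["time"]) / 3600
    km = distance_km(txn1["lat"], txn1["lon"], txn2["lat"], txn2["lon"])
    return hours == 0 or km / hours > max_speed_kmh

# Cosmetics in Manhattan, then a laptop in Lagos thirty minutes later:
manhattan = {"lat": 40.78, "lon": -73.97, "time": 0}
lagos = {"lat": 6.52, "lon": 3.38, "time": 1800}   # 30 minutes later
print(looks_fraudulent(manhattan, lagos))          # -> True
```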
Talk to My Agent
Technology is, after all, merely the physical manifestation of the human will, and when it comes to AI agents, that human can be digitally magnified a billionfold. Whether you’re a high-frequency Wall Street trader, a malware author, a medical researcher, a marketer, an astronomer, a dictator, or a drone builder, narrow AI is the workhorse of the automation age.
DANIEL SUAREZ
When you set your DVR to record the latest episode of Mad Men or schedule the alarm on your iPhone to wake you at 7:00 a.m., you are actually programming software to act as an intelligent agent on your behalf. AI is software you imbue with agency to represent you elsewhere in society. Moving forward, we will all come to rely on digital “bot-lers” such as these to help us manage nearly all tasks in our lives, from the mundane to the life changing.
As narrow AI capabilities grow, we are seeing algorithms play increasingly active roles throughout more and more businesses and professions. In medicine, “computer-aided diagnostics” are helping physicians to interpret X-ray, MRI, and ultrasound results much more rapidly, using algorithms and highly complex pattern-recognition techniques to flag abnormal test results. The legendary Silicon Valley entrepreneur and investor Vinod Khosla has referred to this as the age of Dr. A.—Dr. Algorithm—hailing a revolution in health care in which we won’t need the average human doctor, instead finding much better and cheaper care for 90–99 percent of our medical needs through AI, big data, and improved medical software and diagnostics. It’s not just physicians who face massive disruption from algorithmic competition; armies of expensive lawyers are finding themselves replaced by cheaper software. Today, artificial intelligence e-discovery software can analyze millions of pretrial documents, sifting, sorting, and ranking them for potential evidentiary value at a speed no human attorney could match—all for only 15 percent of the cost. But what do we really know about these algorithms and the mathematical processes behind them? Precious little, as it turns out.
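At its core, e-discovery triage is a ranking problem. The sketch below is a deliberately simplistic Python illustration, assuming a bare keyword-frequency score; commercial systems use far richer techniques such as predictive coding and machine learning, but the principle of machines sorting documents by estimated relevance is the same.

```python
# Minimal sketch of e-discovery-style relevance ranking. Real systems
# use far richer models; this just scores documents against keywords.
from collections import Counter

def score(doc: str, keywords: set) -> int:
    """Count how often case-relevant keywords appear in a document."""
    counts = Counter(doc.lower().split())
    return sum(counts[k] for k in keywords)

def rank_documents(docs: dict, keywords: set) -> list:
    """Return documents sorted from most to least potentially relevant."""
    return sorted(((name, score(text, keywords)) for name, text in docs.items()),
                  key=lambda pair: pair[1], reverse=True)

docs = {
    "memo_001": "the merger price was discussed before the announcement",
    "memo_002": "lunch schedule for the quarterly offsite",
}
print(rank_documents(docs, {"merger", "price", "announcement"}))
```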
Black-Box Algorithms and the Fallacy of Math Neutrality
One and one is two. Two plus two equals four. Basic, eternal, immutable math. The type of stuff we all learned in kindergarten. But there is another type of math—the math encoded in algorithms—formulas written by human beings and weighted to carry out their instructions, their decision analyses, and their biases. When your GPS device provides you with directions using narrow AI to process the request, it is making decisions for you about your route based on an instruction set somebody else has programmed. While there may be a hundred ways to get from your home to your office, your navigation system has selected one. What happened to the other ninety-nine? In a world run increasingly by algorithms, it is not an inconsequential question or a trifling point.
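The point is easy to see in code. Below is a small sketch of Dijkstra’s shortest-path algorithm, the classic technique underlying route planning (actual navigation systems use heavily optimized variants). The graph and its weights are invented for illustration; the “best” route is entirely a function of whatever the weights encode, which is precisely where the other ninety-nine routes disappear.

```python
# A tiny Dijkstra shortest-path sketch. The point: the "best route"
# depends entirely on the edge weights someone programmed
# (driving time? distance? tolls?).
import heapq

def best_route(graph, start, goal):
    """Return (cost, path) minimizing whatever the weights encode."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, []):
            heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical weights: minutes of driving time. Reweight by scenery
# or toll cost and the one route the system "selects" changes.
graph = {
    "home":      [("highway", 10), ("back_road", 25)],
    "highway":   [("office", 15)],
    "back_road": [("office", 5)],
}
print(best_route(graph, "home", "office"))  # -> (25, ['home', 'highway', 'office'])
```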
Today we have the following:
• algorithmic trading on Wall Street (bots carry out stock buys and sells)
• algorithmic criminal justice (red-light and speeding cameras determine infractions of the law)
• algorithmic border control (an AI can flag you and your luggage for screening)
• algorithmic credit scoring (your FICO score determines your creditworthiness)
• algorithmic surveillance (CCTV cameras can identify unusual activity by computer vision analysis, and voice recognition can scan your phone calls for troublesome keywords)
• algorithmic health care (whether or not your request to see a specialist or your insurance claim is approved)
• algorithmic warfare (drones and other robots have the technical capacity to find, target, and kill without human intervention)
• algorithmic dating (eHarmony and others promise to use math to find your soul mate and the perfect match)
Though the inventors of these algorithmic formulas might wish to suggest they are perfectly neutral, nothing could be further from the truth. Each algorithm is saturated with the profound human bias of the person or people who wrote the formula. But who governs these algorithms, and how do they behave in shaping our lives? We have no idea. They are black-box algorithms, shrouded in secrecy and often declared trade secrets, protected by intellectual property law. Just one algorithm—the FICO score—plays a major role in each American’s access to credit, whether or not you get a mortgage, and what your car loan rate will be. But nowhere is the formula published; indeed, it is a closely guarded secret, one that earns FICO hundreds of millions of dollars a year. And what if there is a mistake in the underlying data or the assumptions inherent in the algorithm? Too bad. You’re out of luck. The near-total lack of transparency in the algorithms that run the world means that we the people have no insight into and no say over profoundly important decisions being made about us and for us. The increasingly concentrated power of algorithms in our society has gone unnoticed by most, but without insight and transparency into the algorithms running our world, there can be no accountability or true democracy. As a result, the twenty-first-century society we are building is becoming increasingly susceptible to manipulation by those who author and control the algorithms that pervade our lives.
We saw a blatant example of this abuse in mid-2014 when a study published by researchers at Facebook and Cornell University revealed that social networks can manipulate the emotions of their users simply by algorithmically altering what they see in the news feed. In the study, published in the Proceedings of the National Academy of Sciences, Facebook changed the update feeds of 700,000 of its users to show them either sadder or happier news. The result? Users seeing more negative news felt worse and posted more negative things, the converse being true for those seeing the happier news. The study’s conclusion: “Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness.” Facebook never explicitly notified the affected users (including children aged thirteen to eighteen) that they had been unwittingly selected for psychological experimentation. Nor did it take into account what existing mental health issues, such as depression or suicidality, users might already be facing before callously deciding to manipulate them toward greater sadness. Though Facebook updated its terms of service to grant itself permission to “conduct research” after it had completed the study, many have argued that the social media giant’s activities amounted to human subjects research, a threshold that would have required prior ethical approval by an institutional review board under federal regulations. Sadly, Facebook is not the only company to algorithmically treat its users like lab rats.
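For illustration only, here is a schematic Python sketch of how a feed algorithm can tilt what users see by sentiment. The word lists and function are invented and vastly simpler than Facebook’s actual systems, and the real experiment withheld only a fraction of matching posts rather than all of them, but the mechanism at issue, silently filtering a feed by emotional valence, is the same.

```python
# Invented, schematic example of sentiment-based feed filtering; not
# Facebook's actual algorithm. The real study withheld only a
# percentage of matching posts rather than all of them.
NEGATIVE = {"sad", "awful", "terrible", "lonely"}
POSITIVE = {"happy", "great", "wonderful", "love"}

def filtered_feed(posts, suppress=NEGATIVE):
    """Return the feed with posts of the suppressed sentiment withheld."""
    return [p for p in posts if not set(p.lower().split()) & suppress]

posts = ["feeling happy today", "so sad and lonely", "what a wonderful trip"]
print(filtered_feed(posts))                     # the sad post silently vanishes
print(filtered_feed(posts, suppress=POSITIVE))  # or tilt the feed the other way
```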
The lack of algorithmic transparency, combined with an “in screen we trust” mentality, is dangerous. When big data, cloud computing, artificial intelligence, and the Internet of Things merge, as they are already doing, we will increasingly have physical objects acting on our behalf in 3-D space. Having an AI drive a robot that brews your morning coffee and makes breakfast sounds great. But if we recall the homicide in 1981 of Kenji Urada, the thirty-seven-year-old employee of Kawasaki who was crushed to death by a robot, things don’t always turn out so well. In Urada’s case, further investigation revealed it was the robot’s artificial intelligence algorithm that erroneously identified the man as a system blockage, a threat to the machine’s mission to be immediately dealt with. The robot calculated that the most efficient way to eliminate the threat was to push “it” with its massive hydraulic arm into the nearby grinding machine, a decision that killed Urada instantly before the robot unceremoniously returned to its normal duties. Despite the obvious challenges, the exponential productivity boosts, dramatic cost savings, and rising profits attainable through artificial intelligence systems are so great there will be no turning back. AI is here to stay, and never one to miss an opportunity, Crime, Inc. is all over it.