The first ever computer virus to spread ‘in the wild’ outside of a laboratory network started as a practical joke. In February 1982, Rich Skrenta wrote a virus that targeted Apple II home computers. A fifteen-year-old high school student in Pennsylvania, Skrenta had designed the virus to be annoying rather than harmful. Infected machines would occasionally display a short poem he’d written.[5]
The virus, which he called ‘Elk Cloner’, spread when people swapped games between computers. According to network scientist Alessandro Vespignani, most early computers weren’t networked, so computer viruses were much like biological infections. ‘They were spreading on floppy disks. It was a matter of contact patterns and social networks.’[6] This transmission process meant that Elk Cloner didn’t get much further than Skrenta’s wider friendship group. Although it reached his cousins in Baltimore and made its way onto the computer of a friend in the US Navy, these longer journeys were rare.
Yet the era of localised, relatively harmless viruses wouldn’t last long. ‘Computer viruses quickly drifted into a completely different world,’ said Vespignani. ‘They were mutating. The transmission routes were different.’ Rather than relying on human interactions, malware adapted to spread directly from machine to machine. As malware became more common, the new threats needed some new terminology. In 1984, computer scientist Fred Cohen came up with the first definition of a computer virus, describing it as a program that replicates by infecting other programs, just as a biological virus needs to infect host cells to reproduce.[7] Continuing the biological analogy, Cohen contrasted viruses with ‘computer worms’, which could multiply and spread without latching onto other programs.
Online worms first came to public attention in 1988 thanks to the ‘Morris worm’, created by Cornell student Robert Morris. Released on 2 November, it spread quickly through ARPANET, an early version of the Internet. Morris claimed that the worm was meant to transmit silently, in an effort to estimate the size of the network. But a small tweak in its code would cause some big problems.
Morris had originally coded the program so that when it reached a new computer, it would start by checking whether the machine was already infected, to avoid installing multiple worms. The problem with this approach was that it made the worm easy to block: users could in essence ‘vaccinate’ their computers against it by mimicking an infection. To get around this, Morris had the worm sometimes duplicate itself on a machine that was already infected. But he underestimated the effect this would have. When it was released, the worm spread and replicated far too quickly, causing many machines to crash.[8]
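Accounts of the worm’s behaviour suggest this fix amounted to a probabilistic check, often quoted as a one-in-seven chance of reinstalling regardless. A minimal sketch of that logic in Python – the names and exact figure here are illustrative, not Morris’s actual code:

```python
import random

REINFECT_CHANCE = 1 / 7  # widely quoted figure; an assumption, not verified source code

def should_install(machine_reports_infected: bool) -> bool:
    """Decide whether to install a fresh copy of the worm on this machine."""
    if not machine_reports_infected:
        return True
    # A worm that always trusted the check could be blocked by faking an
    # infection, so occasionally install another copy anyway.
    return random.random() < REINFECT_CHANCE
```

Because a busy machine could now be hit again and again, each extra copy added load, which is why infected hosts slowed to a crawl and crashed.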
The story goes that the Morris worm eventually infected 6,000 computers, around 10 per cent of the internet at the time. According to Morris’s contemporary Paul Graham, however, this was just a guess, which soon spread. ‘People like numbers,’ he later recalled. ‘And so this one is now replicated all over the Internet, like a little worm of its own.’[9]
Even if the Morris outbreak number were true, it would pale in comparison to modern malware. Within a day of the Mirai outbreak starting in August 2016, almost 65,000 devices had been infected. At its peak, the resulting botnet consisted of over half a million machines, before shrinking in early 2017.
Yet Mirai did share a similarity with the Morris worm, in that its creators hadn’t expected the outbreak to get so out of hand. Although Mirai would hit headlines when it affected websites like Amazon and Netflix in October 2016, the botnet had initially been designed for a far more niche purpose. When the FBI traced its origins, they discovered it had started with a twenty-one-year-old college student named Paras Jha, his two friends, and the computer game Minecraft.
Minecraft has over fifty million active users globally, who play together in vast online worlds. The game has been hugely profitable for its creator, who bought a $70m mansion after selling Minecraft to Microsoft in 2014.[10] It has also been lucrative for people who run the independent servers that host Minecraft’s different virtual landscapes. While most online multiplayer games are controlled by a central organisation, Minecraft operates as a free market: people can pay to access whichever server they want. As the game became more popular, some server owners found themselves making hundreds of thousands of dollars a year.[11]
Given the increasing amount of money on the line, a few owners decided to try and take out their rivals. If they could direct enough fake activity at another server – what’s known as a ‘distributed denial of service’ (DDoS) attack – it would slow down the connection for anyone playing. This would frustrate users into looking for an alternative server, ideally the one owned by the people who organised the attack. An online arms market emerged, with mercenaries selling increasingly sophisticated DDoS attacks, and in many cases also selling protection against them.
This was where Mirai came in. The botnet was so powerful it would be able to outcompete any rivals attempting to do the same thing. But Mirai didn’t remain in the Minecraft world for long. On 30 September 2016, a few weeks before the Dyn attack, Jha and his friends published the source code behind Mirai on an internet forum. This is a common tactic used by hackers: if code is publicly available, it’s harder for authorities to pin down its creators. Someone else – it’s not clear who – then downloaded the trio’s code and used it to target Dyn with a DDoS attack.
Mirai’s original creators – who were based in New Jersey, Pittsburgh and New Orleans – were eventually caught after the FBI seized infected devices and painstakingly followed the chain of transmission back to its source. In December 2017, the three pleaded guilty to developing the botnet. As part of their sentence, they agreed to work with the FBI to prevent other similar attacks in the future. A New Jersey court also ordered Jha to pay $8.6 million in restitution.[12]
The Mirai botnet managed to bring the internet to a halt by targeting the Dyn web address directory, but on other occasions, web address systems have helped someone stop an attack. As the WannaCry outbreak was growing in May 2017, British cybersecurity researcher Marcus Hutchins got hold of the worm’s underlying code. It contained a lengthy gibberish web address – iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com – that WannaCry was apparently trying to access. Hutchins noticed the domain wasn’t registered, so bought it for $10.69. In doing so, he inadvertently triggered a ‘kill switch’ that ended the attack. ‘I will confess that I was unaware registering the domain would stop the malware until after I registered it, so initially it was accidental,’ he later tweeted.[13] ‘So I can only add “accidentally stopped an international cyber attack” to my résumé.’
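The mechanism Hutchins stumbled on follows a simple pattern: before doing anything else, the malware asks whether a particular domain exists, and stands down if it does. A minimal sketch of that pattern – with a stand-in domain, not WannaCry’s actual code:

```python
import socket
import sys

# Stand-in address; the real kill switch used the gibberish domain quoted above
KILL_SWITCH_DOMAIN = "killswitch-example.invalid"

def kill_switch_active() -> bool:
    """Return True if the kill-switch domain resolves, i.e. has been registered."""
    try:
        socket.gethostbyname(KILL_SWITCH_DOMAIN)
        return True
    except socket.gaierror:
        return False

if kill_switch_active():
    sys.exit()  # once Hutchins registered the domain, this branch fired everywhere
# ...otherwise, carry on spreading...
```

Registering the domain flipped that first check for every new infection at once, which is why a $10.69 purchase could halt a global outbreak.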
One of the reasons Mirai and WannaCry spread so widely is that the worms were very efficient at finding vulnerable machines. In outbreak terms, modern malware can create far more opportunities for transmission than its predecessors could. In 2002, computer scientist Stuart Staniford and his colleagues wrote a paper titled ‘How to 0wn the Internet in Your Spare Time’[14] (in hacker culture, ‘0wn’ means ‘control completely’). The team showed that the ‘Code Red’ worm, which had spread through computers the previous year, had actually been fairly slow. On average, each infected server had infected only 1.8 other machines per hour. That was still much faster than measles, one of the most contagious human infections: in a susceptible population, a person with measles will infect 0.1 others per hour on average.[15] But it was slow enough that, like a human outbreak, Code Red took a while to really take off.
Staniford and his co-authors suggested that, with a more streamlined, efficient worm, it would be possible to get a much faster outbreak. Borrowing from Andy Warhol’s famous ‘fifteen minutes of fame’ quote, they called this hypothetical creation a ‘Warhol worm’, because it would be able to reach most of its targets within this time. However, the idea didn’t stay hypothetical for long. The following year, the world’s first Warhol worm surfaced when a piece of malware called ‘Slammer’ infected over 75,000 machines.[16] Whereas the Code Red outbreak had initially doubled in size every 37 minutes, Slammer doubled every 8.5 seconds.
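Those doubling times imply vastly different growth rates. Treating the early phase as simple exponential growth – an approximation, since real outbreaks slow as targets run out – the conversion is just the natural log of 2 divided by the doubling time:

```python
import math

def growth_rate_per_second(doubling_time_seconds: float) -> float:
    """Exponential growth rate implied by a given doubling time."""
    return math.log(2) / doubling_time_seconds

code_red = growth_rate_per_second(37 * 60)  # doubled every 37 minutes
slammer = growth_rate_per_second(8.5)       # doubled every 8.5 seconds

# Slammer's doubling time was (37 * 60) / 8.5, roughly 260 times shorter
print(f"Code Red: {code_red:.6f} per second")
print(f"Slammer:  {slammer:.4f} per second")
print(f"Ratio:    {slammer / code_red:.0f}x")
```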
Slammer had spread quickly at first, but it soon burned itself out as it became harder to find susceptible machines. The eventual damage was also limited. Although the sheer volume of Slammer infections slowed down many servers, the worm wasn’t designed to harm the machines it infected. It’s another example of how malware can come with a range of symptoms, just like real-life infections. Some worms are near invisible or display poems; others hold machines to ransom or launch DDoS attacks.
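That rise-and-stall shape is exactly what a simple epidemic model predicts once susceptible hosts run low. A minimal sketch, assuming an SI (susceptible–infected) model with illustrative parameters rather than Slammer’s real ones:

```python
# SI model: growth is exponential while almost every machine is susceptible,
# then stalls as the pool of uninfected machines empties. Illustrative only.
N = 75_000      # total susceptible machines, roughly Slammer's eventual reach
infected = 1.0
rate = 0.08     # per-second growth rate, of the order implied by 8.5s doubling

for t in range(0, 601, 60):
    print(f"t = {t:3d}s: infected ≈ {infected:8,.0f}")
    for _ in range(60):
        # new infections scale with current infections times the
        # fraction of machines still susceptible
        infected += rate * infected * (1 - infected / N)
```

Run it and the count rockets for the first couple of minutes, then flattens as the last susceptible machines are found – a burnout driven by depletion, not by any defence.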
As shown by the Minecraft server attacks, there can be an active market for the most powerful worms. Such malware is commonly sold in hidden online marketplaces, like the ‘dark net’ markets that operate outside the familiar, visible websites we can access with regular search engines. When security firm Kaspersky Lab researched the options available in these markets, they found people offering to arrange a five-minute DDoS attack for as little as $5, with an all-day attack costing around $400. Kaspersky calculated that running a botnet of around 1,000 computers would cost about $7 per hour. Sellers charge an average of $25 an hour for such attacks, generating a healthy profit margin.[17] The year of the WannaCry attack, the dark net market for ransomware was estimated to be worth millions of dollars, with some vendors making six-figure salaries (tax-free, of course).[18]
Despite the popularity of malware with criminal groups, it’s suspected that some of the most advanced examples originally evolved from government projects. When WannaCry infected susceptible computers, it did so by exploiting a so-called ‘zero-day’ loophole – a software vulnerability that isn’t publicly known. The loophole behind WannaCry was allegedly identified by the US National Security Agency as a way of gathering intelligence, before somehow finding its way into other hands.[19] Tech companies are willing to pay a lot to close these loopholes. In 2019, Apple offered a bounty of up to $2 million for anyone who could hack into the new iPhone operating system.[20]
During a malware outbreak, zero-day loopholes can boost transmission by increasing the susceptibility of target machines. In 2010, the ‘Stuxnet’ worm was discovered to have infected Iran’s Natanz nuclear facility, where, according to later reports, it was able to damage centrifuges vital to the facility’s uranium enrichment. To spread through the Iranian systems, the worm had exploited four zero-day loopholes, which was almost unheard of at the time. Given the sophistication of the attack, many in the media pointed to the US and Israeli military as potential creators of the worm. Even so, the initial infection may have been the result of something far simpler: it’s been suggested that the worm got into the system via a double agent with an infected USB stick.[21]
Computer networks are only as strong as their weakest links. A few years before the Stuxnet attack, hackers successfully accessed a highly fortified US government system in Afghanistan. According to journalist Fred Kaplan, Russian intelligence had supplied infected USB sticks to several shopping kiosks near the NATO headquarters in Kabul. Eventually, an American soldier bought one and used it with a secure computer.[22] It’s not only humans who pose a security risk. In 2017, a US casino was surprised to discover its data had been flowing to a hacker’s computer in Finland. The real shock, though, was the source of the leak: rather than targeting the well-protected main server, the attacker had got in through the casino’s internet-connected fish tank.[23]
Historically, hackers have been most interested in accessing or disrupting computer systems. But as technology increasingly becomes internet-connected, there is growing interest in using computer systems to control other devices. This can include highly personal technology. While that casino fish tank was being targeted in Nevada, Alex Lomas and his colleagues at British security firm Pen Test Partners were wondering whether it was possible to hack into Bluetooth-enabled sex toys. It didn’t take them long to discover that some of these devices were highly vulnerable to attack. Using only a few lines of code, they could in theory hack a toy and set it vibrating at its maximum setting. And because these devices allow only one connection at a time, the owner would have no way of turning it off.[24]
Of course, Bluetooth devices have a limited range, so could hackers really pull this off? According to Lomas, it’s certainly possible. He once checked for nearby Bluetooth devices while walking down a street in Berlin. Looking at the list on his phone, he was surprised to see a familiar ID: it was one of the sex toys his team had shown could be hacked. Someone was presumably carrying it with them, unaware that a hacker could easily switch it on.
It’s not just Bluetooth toys that are susceptible. Lomas’ team found other devices were vulnerable too, including a brand of sex toy with a WiFi-enabled camera. If people hadn’t changed the default password, it would be fairly easy to hack into the toy and access the video stream. Lomas has pointed out that the team has never tried to connect to a device outside their lab. Nor did they do the research to shame people who might use these toys. Quite the opposite: by raising the issue, they wanted to ensure that people could do what they wanted without fear of being hacked, and in doing so pressure the industry to improve standards.
Sex toys aren’t the only devices at risk. Lomas found that the same Bluetooth trick worked on his father’s hearing aids. And some targets are even larger: computer scientists at Brown University discovered that it was possible to gain access to research robots, due to a loophole in a popular robotics operating system. In early 2018, the team managed to take control of a machine at the University of Washington (with the owners’ permission). They also found threats closer to home. Two of their own robots – an industrial helper and a drone – were accessible to outsiders. ‘Neither was intentionally made available on the public Internet,’ they noted, ‘and both have the potential to cause physical harm if used inappropriately.’ Although the researchers focused on university-based robots, they warned that similar problems could affect machines elsewhere. ‘As robots move out of the lab and into industrial and home settings, the number of units that could be subverted is bound to increase manifold.’[25]
The internet of things is creating new connections across different aspects of our lives. But in many cases, we may not realise exactly where these connections lead. This hidden network became apparent at lunchtime on 28 February 2017, when several people with internet-connected homes noticed that they couldn’t turn on their lights. Or turn off their ovens. Or get into their garages.
The glitch was soon traced to Amazon Web Services (AWS), the company’s cloud computing subsidiary. When a person hits the switch to turn on a smart light bulb, the switch will typically notify a cloud-based server – such as one run on AWS – potentially located thousands of miles away. The server then sends a signal back to the bulb to turn it on. That February lunchtime, however, some of the AWS servers had briefly gone offline. With the servers down, a large number of household devices had stopped responding.[26]
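The round trip can be sketched in a few lines. Everything here – the URL, the endpoint, the payload – is hypothetical, but the shape is the point: switch and bulb talk to a distant server, not to each other.

```python
import requests  # many real devices use MQTT or vendor SDKs instead

CLOUD_API = "https://cloud.example.com/devices"  # hypothetical cloud endpoint

def flip_switch(bulb_id: str, on: bool) -> bool:
    """Send the command to the cloud; the bulb only changes if the server answers."""
    try:
        response = requests.post(f"{CLOUD_API}/{bulb_id}/state",
                                 json={"on": on}, timeout=5)
        return response.ok
    except requests.ConnectionError:
        # With the cloud servers offline - as parts of AWS briefly were -
        # the command never completes, even though switch and bulb both work.
        return False
```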
AWS has generally been very reliable – the company promises working servers over 99.99 per cent of the time – and if anything this reliability has boosted the popularity of such cloud computing services. In fact, they’ve become so popular that almost three-quarters of Amazon’s recent profits have come from AWS alone.[27] However, widespread use of cloud computing, combined with the potential impact of a server failure, has led to suggestions that AWS might be ‘too big to fail’.[28] If large amounts of the web rely on a single company, small problems at the source could be greatly amplified. Related concerns surfaced in 2018, when Facebook announced that millions of its users had been affected by a security breach. Because many people use their Facebook account to sign in to other websites, such attacks may spread further than users initially realise.[29]
This isn’t the first time we’ve met this combination of hidden links and highly connected hubs. These are the same network quirks that made the pre-2008 financial system vulnerable, allowing seemingly local events to have an international impact. In online networks, however, these effects can be even more extreme. And this can lead to some rather unusual outbreaks.
Not long after the millennium bug came the ‘love bug’. In early May 2000, people around the world received e-mails with a subject line that read ‘ILOVEYOU’. The message carried a computer worm, disguised as a text file containing a love letter. When opened, the worm corrupted files on that person’s computer and e-mailed itself to everyone in their address book. It spread widely, crashing the e-mail systems of several organisations, including the UK parliament. Eventually IT departments rolled out countermeasures, which protected computers against the worm. But then something odd happened. Rather than disappear, the worm persisted. Even a year later, it was still one of the most active bits of malware on the internet.[30]
Computer scientist Steve White had noticed the same thing happening with other computer worms and viruses. In 1998, he’d pointed out that such bugs would often linger online. ‘Now here’s the mystery,’ White wrote.[31] ‘Our evidence on virus incidents indicates that, at any given time, few of the world’s systems are infected.’ Although viruses persisted for a long time in the face of control measures, suggesting they were highly contagious, they generally infected relatively few computers, which implied they weren’t that good at spreading.
What was causing this apparent paradox? A couple of months after the love bug attack, Alessandro Vespignani and fellow physicist Romualdo Pastor-Satorras came across White’s paper. Computer viruses didn’t seem to behave like biological epidemics, so the pair wondered if the structure of the network might have something to do with it. The previous year, a study had shown that there was a lot of variation in popularity on the world wide web: most websites had very few links, while some had a vast number.[32]
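That variation matters for contagion. One way to see why: simulate a simple infect-and-recover process on a network where a handful of hubs have vastly more links than everyone else. A sketch with illustrative parameters, using the networkx library (the specific numbers are assumptions, chosen only to show the effect):

```python
import random
import networkx as nx

# Heavy-tailed network: most nodes get a couple of links, a few get hundreds
G = nx.barabasi_albert_graph(n=5000, m=2, seed=1)

infected = set(random.sample(list(G.nodes), 50))
beta, recovery = 0.05, 0.5   # weak transmission, quick recovery (illustrative)

for step in range(200):
    nxt = set()
    for node in infected:
        if random.random() > recovery:   # some nodes stay infected
            nxt.add(node)
        for neighbour in G[node]:        # each contact may pass it on
            if random.random() < beta:
                nxt.add(neighbour)
    infected = nxt

# Typically a small but stubborn fraction remains infected: the hubs keep
# reseeding the outbreak even though most machines stay clean at any moment
print(f"{len(infected)} of {G.number_of_nodes()} nodes infected after 200 steps")
```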