by Amit Katwala
In its simplest form, an algorithm is just a set of instructions or rules to follow when solving a problem, but quantum algorithms have to be designed in accordance with the unique properties of quantum mechanics. ‘Quantum computers use interference in their fundamental algorithms,’ Shor explains. ‘You can think of the computation going through all sorts of different paths – and each path has a corresponding phase. If you can somehow arrange the computation so that all the correct answers have the same phase, the probability of seeing a correct answer gets increased.’
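To see the arithmetic behind that idea, here is a toy sketch in Python (ordinary classical code, not a quantum program): each path contributes a complex amplitude carrying a phase, and the probability of an outcome is the squared size of the summed amplitudes.

```python
import numpy as np

# Two computational paths leading to the same outcome, each carrying a phase.
# When the phases agree the amplitudes reinforce; when they are opposed they
# cancel - the interference Shor describes.
aligned = np.exp(1j * 0.0) + np.exp(1j * 0.0)    # same phase
opposed = np.exp(1j * 0.0) + np.exp(1j * np.pi)  # opposite phase
print(abs(aligned) ** 2)  # 4.0   - probability amplified
print(abs(opposed) ** 2)  # ~0.0  - probability suppressed
```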
Shor’s algorithm applies that insight to create a method for factoring large numbers. It uses a series of mathematical tricks to turn the number to be factored into a ‘wave’ of smaller ones that repeat in some fixed sequence. Shor realised that the rhythm and frequency of this wave – known as its period – are related to the factors of the original number. He draws an analogy with a diffraction grating, a physical object that splits light into different wavelengths like a prism: a beam of white light goes in one end, and a rainbow emerges from the other. All the light of a certain wavelength gets amplified at one point in space, and all the light of another gets amplified at a different point. Shor’s algorithm acts like a mathematical prism: large numbers go in one end, and the factors come out of the other.
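The overall shape of the method can be sketched in ordinary Python. In the toy code below – a sketch, with our own function names – the period is found by brute force, which is exactly the step that takes exponential time classically and that a quantum computer replaces with its interference-based ‘prism’; the final step, recovering factors from the period, is standard number theory.

```python
import math
import random

def find_period(a, N):
    """Brute-force the period r of f(x) = a^x mod N.
    (This is the step a quantum computer performs exponentially
    faster, using interference to pick out the repetition.)"""
    r, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        r += 1
    return r

def shor_factor(N):
    """Classical skeleton of Shor's algorithm: reduce factoring
    to period finding, then recover a factor via gcd."""
    while True:
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g          # lucky guess: a already shares a factor with N
        r = find_period(a, N)
        if r % 2 == 0:
            candidate = math.gcd(pow(a, r // 2) - 1, N)
            if 1 < candidate < N:
                return candidate

print(shor_factor(15))  # prints 3 or 5
```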
Shor’s method is laborious and unwieldy for classical computers but, crucially, it is structured in such a way that the calculations involved can be performed simultaneously on a quantum computer. ‘This was the first provable case of a problem where a classical computer needs to expend exponential resources, and a quantum computer doesn’t,’ says Rocchetto. Shor’s algorithm can factor large numbers exponentially quicker than classical computers – the bigger the number, the more of an advantage it provides. It caused shockwaves across the quantum community, and helped kickstart the field in a big way because of its implications for cybersecurity, which we’ll cover in detail in the next chapter. ‘This is what ignited a lot of interest in the field – both on the theoretical side and also on the commercial and defence side,’ Rocchetto says. ‘It was a breakthrough for both communities.’
Similarly powerful breakthroughs have proved harder to find, however. Most other quantum algorithms that have been developed so far only offer a more modest quadratic speed-up instead of the exponential increase provided by Shor’s algorithm. ‘The quadratic speed-up – where we improve the run time of the classical algorithm, but not enough to make it an easy problem – can be extremely relevant in real-world scenarios,’ says Rocchetto. ‘These quadratic speed-ups are harder to detect – it’s a matter of trying in practice – but they can still be extremely significant. They could help save companies immense amounts of money.’
In 1996, another Bell Labs researcher devised an algorithm that could help explain why Google in particular is pumping so much money into building a quantum computer. Lov Grover demonstrated that quantum computers could be used to massively speed up how quickly an unordered database could be searched. If you give a classical computer the phone book, and ask it to find a particular number, it has to check each and every entry in turn until it finds the right one. You can speed up the process by throwing multiple processors at the problem – start one going from the top of the list and one from the bottom, for instance – but the average time required still grows in direct proportion to the size of the list to be searched.
Grover’s algorithm describes a method for searching a database using the power of quantum interference. By encoding each of the items in the list in superposition, their waves can be manipulated so that the wrong answers cancel one another out, and the right answers reinforce one another. Again, the method is too complex to describe in detail here, but the implications are huge. Over multiple steps of processing, the correct answer simply rises to the top – as if all the other numbers in the phonebook had slowly faded away. Grover showed that while a classical computer would have to look through 500,000 items on average to find the right one in a list a million entries long, a quantum computer running his algorithm would only need around a thousand steps. It provides a quadratic speed-up over classical devices: a quantum machine only needs a number of steps close to the square root of the number of items. And because each extra qubit doubles the number of states a register can hold, a million items can be encoded in just 20 qubits.
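For readers who want to peek inside, the wave arithmetic can be simulated classically for small cases. In the sketch below (plain Python and NumPy – the simulation itself is classical, so only the iteration count matters), the ‘oracle’ flips the phase of the right answer and the ‘diffusion’ step reflects every amplitude about the average, gradually concentrating probability on the answer: around 800 rounds for a list of roughly a million items.

```python
import numpy as np

def grover_search(n_items, marked):
    """Toy state-vector simulation of Grover's algorithm: start in a
    uniform superposition, then repeatedly phase-flip the marked item
    and reflect every amplitude about the mean."""
    state = np.full(n_items, 1 / np.sqrt(n_items))
    iterations = int(round(np.pi / 4 * np.sqrt(n_items)))
    for _ in range(iterations):
        state[marked] *= -1                # oracle: phase-flip the answer
        state = 2 * state.mean() - state   # diffusion: reflect about the mean
    return iterations, state[marked] ** 2  # probability of the right answer

steps, prob = grover_search(2 ** 20, marked=42)  # ~a million items, 20 qubits
print(steps, prob)  # 804 iterations, probability very close to 1
```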
Together, Shor and Grover equipped quantum computers with two powerful weapons that promised to transform everything from cryptography to finance. In theory, anyway. But in the real world – where equations and elegant algorithms crash into reality – things are a lot more complicated.
Margins of error
Shor’s and Grover’s algorithms were designed for a perfect quantum computer. But when they were written in the mid-1990s such a thing didn’t exist, and it still doesn’t. Despite claims of quantum supremacy, we’re still in the ‘NISQ’ era, which means that the best devices are still noisy and error-prone – their qubits still flip into the wrong state or slide out of superposition. Even the best quantum computers – chilled to near-impossible temperatures and isolated behind shields – still have error rates that are too high. According to a 2019 talk1 by John Preskill, the probability of error per two-qubit gate, using the best hardware available at the time, was about 0.1 per cent. Similarly, the probability of error per measurement was around 1 per cent – way too high for anything useful. ‘Theory is well ahead of experiment,’ says Rob Young. ‘The really big, important problem is that the vast majority of algorithms that are being considered today are so far ahead of where the performance metrics of the real quantum systems are, that you don’t know if they are ever going to be useful.’
For a long time, many thought that the problem of error correction was insurmountable – a fatal flaw in quantum computing. Some researchers still believe this. ‘My analysis shows that noisy quantum computers with a few dozen qubits deliver such primitive computational power,’ the Yale professor Gil Kalai told WIRED in 2019, ‘that it will simply not be possible to use them as the building blocks we need to build quantum computers on a wider scale.’2
Errors are a problem for classical computers too, but there are well-established systems in place to identify and correct them. These are based on sending several copies of the same message and comparing them for differences; on sending each bit a set number of times, so that even if one or two get randomly flipped the overall message is still clear; or on using what are known as parity bits to check that a message has been correctly received.
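Both classical tricks fit in a few lines of Python – a toy sketch, with illustrative names:

```python
def send_with_repetition(bit, copies=3):
    # repetition code: transmit each bit several times
    return [bit] * copies

def majority_vote(received):
    # even if one copy flipped in transit, the majority recovers the bit
    return int(sum(received) > len(received) / 2)

def parity_bit(bits):
    # 1 if the message contains an odd number of 1s, 0 otherwise
    return sum(bits) % 2

message = [1, 0, 1, 1]
encoded = [send_with_repetition(b) for b in message]
encoded[2][0] ^= 1                                  # noise flips one copy
decoded = [majority_vote(block) for block in encoded]
assert decoded == message                           # the error was corrected
assert parity_bit(decoded) == parity_bit(message)   # parity check passes
```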
That doesn’t work for quantum computers, because each of those systems involves measuring the bits – which knocks them out of superposition and kills the benefits. It is possible to mitigate some of the error simply by repeating each computation hundreds or thousands of times, which is what Google’s researchers did during their quantum supremacy experiment – repeat measurements help to separate the signal from the noise. But mitigating noise doesn’t solve the problem, particularly when you try to scale that up to larger devices with more qubits.
However, clever work by Shor and others proved that quantum error correction is possible, if you have enough qubits. ‘To protect against multiple errors, you would need to surround each qubit with dozens of redundant qubits,’ explains George Johnson in his book A Shortcut Through Time. ‘And then, when an error was found, you would have to worry about more errors creeping into the correction process itself. Guarding against that would require still more redundancy.’
These error-correction algorithms are designed in such a way as to measure the error without affecting the computation – they use ‘parity bits’ that aren’t part of the calculation, but are affected by it if something goes wrong. A similar system is used for credit card numbers – the first 15 digits of the long number on the front of the card are determined by your bank and card issuer, but the final digit is generated by feeding the first 15 digits into a special algorithm. This allows whoever is processing someone’s card details to catch typing errors without having to check the number with the bank – but it requires extra resources (in this case, an extra digit in the card number).
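The particular scheme used on real payment cards is the Luhn algorithm. A compact Python version, for the curious (the sample digits are invented):

```python
def luhn_check_digit(first_15):
    """Derive a card's final digit from its first 15 via the Luhn algorithm."""
    total = 0
    # counting back from the position next to the (future) check digit,
    # every second digit is doubled; doubled values above 9 drop by 9
    for i, ch in enumerate(reversed(first_15)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

print(luhn_check_digit("453914880343646"))  # 7 - the 16-digit number is valid
```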
‘The problem with error correction is that if you want to correct errors and you do not have very accurate quantum gates, it takes an enormous amount of overhead,’ says Shor. ‘It’s going to multiply the number of qubits and the time you need to run the computer by thousands or tens of thousands.’ While on paper Shor’s algorithm needs only a few thousand qubits, in practice you’d need far more to handle error correction when factoring large numbers – maybe a million or more physical qubits, according to Boixo. ‘In terms of practical application, you would definitely need a fault-tolerant quantum computer or a significant advancement in the algorithm itself.’ Others are working on fault-tolerant algorithms, which would be naturally resistant to errors because of the way they’re written. But even those will require vast improvements in the underlying hardware. Despite Google’s achievement of quantum supremacy, the NISQ era leaves us a long way off practical quantum computers.
But that hasn’t stopped companies ploughing ahead, and trying to deploy some of the quantum computing techniques they’ve learned to everything from reducing traffic jams to diagnosing cancer.
Traffic unjammed
Seattle is a city that is being devoured by big tech. Between Microsoft’s home in Redmond and the voracious appetite of Amazon, the city is a mess of cranes and construction works, with numerous road closures adding to the traffic problems caused by its harbour setting and inclement weather. An incident on one of the city’s many bridges can quickly lead to long traffic jams.
Mapping apps, which are designed to reroute drivers when there are potential delays, can actually end up making the problem worse. They’re selfish. When you request a route, Google Maps will give you the quickest route for you at that moment, without taking into account how your journey might delay the journeys of other users, or how hundreds of people using the same section of road will lead to bottlenecks. A more balanced routing system would consider each of the route requests together, and suggest routes that helped to minimise bottlenecks and keep traffic flowing faster. But there’s a practical barrier: a system like that would require vast amounts of computational power. This is what’s known as an optimisation problem. There are thousands of different types, from working out the best order for a delivery driver’s stops, to deciding how many packages will fit in the back of the truck before you send it on its way. Anything where you need to maximise efficiency or minimise costs is an optimisation problem, and right now they’re largely tackled by trial and error.
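A toy model – with invented journey times and a deliberately crude congestion formula – shows why selfish routing clogs the fastest road while coordinated routing spreads the load:

```python
from collections import Counter

# three candidate routes and their free-flow journey times (minutes)
routes = {"bridge": 10, "tunnel": 12, "ring road": 15}

def travel_time(route, load):
    # crude congestion model: every extra car on a route slows it down
    return routes[route] + 0.5 * load

def selfish(drivers):
    # everyone independently picks the route that looks fastest when empty
    return Counter({min(routes, key=routes.get): drivers})

def balanced(drivers):
    # a central planner assigns drivers one at a time to whichever
    # route is currently quickest, spreading out the load
    loads = Counter()
    for _ in range(drivers):
        best = min(routes, key=lambda r: travel_time(r, loads[r]))
        loads[best] += 1
    return loads

for plan in (selfish, balanced):
    loads = plan(100)
    worst = max(travel_time(r, loads[r]) for r in loads)
    print(f"{plan.__name__}: {dict(loads)} worst trip {worst:.0f} min")
    # selfish: everyone on the bridge, worst trip 60 min
    # balanced: load spread across all three, worst trip ~29 min
```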
Most optimisation problems, including the famous travelling salesman problem (see p. 11), don’t have the kind of underlying structure that would allow quantum computers to solve them exponentially quicker than classical computers. But quantum computers could certainly solve them more quickly, according to Christopher Savoie of Zapata Computing, and come up with considerably better answers. And in the meantime, even though we still live in a pre-quantum-computer era, the algorithms that have been devised to run on them can be used to solve smaller optimisation problems on existing, classical hardware.
In 2018, for instance, Microsoft’s quantum researchers teamed up with Ford on a project aimed at improving traffic flow in Seattle. They created a virtual model of traffic in the city, and ran a simulation with more than 5,000 vehicles simultaneously requesting one of ten different routes each. According to Microsoft, using quantum-inspired algorithms instead of the selfish routing used by current systems resulted in a 73 per cent reduction in congestion and an 8 per cent drop in commuting time – savings that would add up to more than 55,000 hours a year for the drivers of these vehicles if deployed in the real world. ‘Optimisation is the most fruitful area for exploration, because there are these complex problems that pop up in every industry,’ says Ben Porter, director of business development for Microsoft’s quantum operations. ‘It doesn’t matter whether you’re talking automotive, aerospace, utilities – there are some rich areas for us to explore.’
Financial services are another potentially lucrative area where quantum computers could have a big impact in future. The 2008 financial crash was caused, in part, by the inability of banks and regulators to correctly analyse risk in a complex system. Researchers working with IBM have been testing quantum algorithms to see if they might be better than classical ones at running a Monte Carlo simulation, a common way of analysing risk that often takes days to run through millions of simulations of a particular scenario. ‘A quantum computer can provide a quadratic speed-up – instead of many millions of scenarios, we only require a few thousand to achieve the same accuracy,’ says IBM mathematician Stefan Woerner.3 That’s enough to take the running time for a Monte Carlo simulation from something that needs to be done overnight to something that can happen in close to real time – you can imagine the implications for stock market traders.
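The classical side of such a calculation is easy to sketch. In the toy Python below, the return model and its parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def value_at_risk(scenarios, confidence=0.99):
    # classical Monte Carlo: simulate many scenarios of a portfolio's
    # daily return (a toy normal model with invented parameters), then
    # read off the loss exceeded only 1 per cent of the time
    returns = rng.normal(loc=0.0005, scale=0.02, size=scenarios)
    return -np.quantile(returns, 1 - confidence)

# classical accuracy improves only with the square root of the scenario
# count: 100x more simulations buys just 10x more precision. Quantum
# amplitude estimation would square-root this cost - roughly the accuracy
# of a million scenarios from a few thousand quantum steps.
print(value_at_risk(1_000_000))
```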
But perhaps one of the most exciting applications of quantum computing could be in the field of machine learning. For decades, algorithms for classical computers were written by hand – painstakingly crafted by coders, who were like chefs writing down detailed recipes. But as computing power got cheaper, artificial intelligence and machine learning came to the fore. Now, the algorithms behind everything from facial recognition to online translation are more likely to be created by training a general-purpose program on a vast set of data. These technologies are hugely powerful. They can, for instance, diagnose lung cancer from scans more successfully than human experts. But they’re only as good as the data you feed them. If you don’t have good enough data, or your underlying data is biased, you end up with flawed, biased algorithms.
Quantum computing offers the possibility of creating data where none actually exists in the real world, through a process known as generative modelling. It’s a process that’s already being carried out on classical computers, but quantum devices could do it faster and at a larger scale. ‘If we have a sample of a hundred things, we can use generative modelling to create things that are similar,’ explains Savoie. The additional power of quantum computers could be used to extrapolate from limited data sets, and feed machine-learning algorithms with data, even when we don’t have it. ‘Enhancing this allows us to do things with scant data – whether that’s looking for rare lung cancer in MRIs, or in facial recognition, where you have a lot of pictures of the side of a face but not the front of a face,’ says Savoie. It is, he says, like creating deep fakes that are as good as the real thing.
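At its very simplest – far simpler than the quantum circuits Savoie has in mind – generative modelling can mean nothing more than fitting a distribution to a small sample and drawing fresh points from the fit, as in this classical sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# a 'scant' data set: 100 measurements of some real-world quantity
real_sample = rng.normal(loc=5.0, scale=1.2, size=100)

# the simplest possible generative model: fit a distribution to the
# sample, then draw brand-new synthetic points from the fit
mu, sigma = real_sample.mean(), real_sample.std()
synthetic = rng.normal(loc=mu, scale=sigma, size=10_000)
print(round(mu, 2), round(sigma, 2), synthetic[:3])
```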
As deep fakes have demonstrated, tools like AI and machine learning can be hugely beneficial, or tremendously disruptive, depending on whose hands they’re in. Quantum computers are likely to have the same impact. If they work, they will undoubtedly bring benefits to the worlds of biology, chemistry and physics. But there are risks to such powerful machines being concentrated in the hands of a select few large companies or national governments. Some are worried that quantum computers could also upend the security systems that protect everything from banking systems to military secrets.
4
Cracking the code
In June 2013, the American whistleblower Edward Snowden released thousands of classified documents to the media relating to the practices of the US National Security Agency (NSA). The documents created uproar in the press and shockwaves in global politics, as the extent of the NSA’s surveillance of its own citizens and those of other countries came to light. Snowden was forced into exile.
The famous incident had another facet: it was also a pivotal moment in the global battle to gain the upper hand in quantum technologies. While security agencies stockpile vast amounts of encrypted data in the hope of one day having a quantum computer that can decode it, researchers are scrambling to develop quantum-secure communication and encryption methods.
According to analysis by the Washington think-tank Center for a New American Security (CNAS),1 Snowden’s revelations spooked the Chinese government so much that it began searching for new, home-grown solutions for cybersecurity that would protect it from the NSA’s prying eyes. ‘Chinese leaders seem to hope that quantum networks can serve as a shield that ensures the “absolute security” of critical communications,’ write the report’s authors, Elsa Kania and John Costello.
The security protocols that many nations rely on could soon be under threat because of quantum computers, and because of Peter Shor in particular. When he wrote his factoring algorithm in 1994, it immediately attracted attention from the defence industry, because it threatened to break the underlying encryption technology that supports much of our cybersecurity infrastructure.
All manner of sensitive data, from military documents to credit card numbers, is protected using codes that rely on the difficulty of factoring large numbers. This method, known as RSA encryption after its creators Ronald Rivest, Adi Shamir and Leonard Adleman, is the best-known form of public-key cryptography. Before RSA, if you wanted to send a coded message, you’d take the text and then pass it through a series of well-defined steps. These could be something as simple as substituting letters, so that A becomes B, B becomes C and so on, or something more complicated. One difficult-to-crack method involves turning a string of text into a long string of digits, and then multiplying or adding a random number to it. Only someone who knows this random number, known as the ‘key’, can easily decrypt and read the message’s contents. The challenge is getting the key to the recipient without it also being intercepted – in the past, banks employed couriers with locked briefcases to distribute new keys.
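A toy version of such a pre-RSA cipher takes a few lines of Python – and makes the key-distribution problem obvious, since sender and recipient must share the same secret number:

```python
import random

def shift_encrypt(plaintext, key):
    # shift every letter forward by the secret key (A->B, B->C for key 1)
    return "".join(
        chr((ord(c) - ord("A") + key) % 26 + ord("A")) if c.isalpha() else c
        for c in plaintext.upper()
    )

def shift_decrypt(ciphertext, key):
    # decryption is just the same shift in reverse
    return shift_encrypt(ciphertext, -key)

key = random.randrange(1, 26)          # the secret both parties must share
secret = shift_encrypt("MEET AT NOON", key)
print(secret, "->", shift_decrypt(secret, key))
```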