
Quantum Computing


by Amit Katwala


  For a while, it was possible to sidestep Feynman’s argument – at least in practical terms – because classical computers seemed to be advancing so quickly. In 1965, Gordon Moore, who would go on to co-found Intel, formulated ‘Moore’s Law’, which states that the number of transistors that can be crammed onto an integrated circuit (and with it the number of bits that can be processed) doubles roughly every two years. That has been the case for decades, and it’s what has helped drive an era of astonishing technological progress. Packing smaller and more efficient transistors onto chips with increasingly elaborate structures has enabled computers with more storage, more memory and more power. They can run ever more powerful programs and simulations.

  But in recent years, various problems have arisen. Transistors have become so small that we’re starting to run into barriers. The first is energy. Adding more and more transistors makes chips ever more power-hungry, which means you need to either make each individual switch more efficient, or find a way of massively increasing the amount of power the computer uses. The second problem is heat. Although chipmakers do their best, the process of performing calculations inevitably generates heat, which requires ever more elaborate cooling systems.

  The third problem is physics itself. Moore’s Law is slowing down because, once you get to a really small scale, the laws of physics change, and quantum mechanics takes over. That means it’s harder to make the kind of incremental improvements that have driven so much progress. In 2012, Australian researchers created a transistor that consisted of a single atom, switching between two states to signify 1s and 0s.[3] Today, transistors are being built with features smaller than 22 nanometres (for comparison, a human hair is about 80,000 nanometres thick).

  Chipmakers have no choice but to grapple with quantum effects, which, as Google’s Tony Megrant points out, makes quantum physics an inevitable part of computing’s future. ‘As our field is just forming, you see an equal number of articles about the end of Moore’s Law,’ says Megrant. ‘Things have been miniaturised so far that quantum effects are showing up. That’s an error for them, whereas it’s the thing we’re harnessing.’

  In 1985, the Oxford-based physicist David Deutsch went a step further than Feynman. Like Feynman, Deutsch has acquired almost legendary status in the world of quantum physics – he’s famous for almost never being seen outside his house in Oxford, where he works late into the night with his head in other worlds. Most of his work has been focused on the notion of the multiverse – the idea that our universe is just one of a near-infinite number of parallel universes, with every possible future unfolding simultaneously in one of them. That includes everything from a universe that’s identical to ours until the moment the coin you just flipped lands on heads instead of tails, to a universe where Earth wasn’t hit by a comet and dinosaurs evolved into intelligent beings with cars and planes and microchips of their own.[4]

  Deutsch realised that a computer built from quantum components could be far more than just a physics simulator. Instead of bits, which can only be 1 or 0, these components – which would eventually become known as quantum bits, or ‘qubits’ – can be 1, 0 or in a state of superposition where they are ‘both 1 and 0 at the same time’.

  That’s the simple explanation, and it’s the one you’ll usually encounter in news articles and popular science books like this one. The truth is a little more complicated. A qubit in superposition is not technically in both states at the same time. Instead, it’s got some probability of being 1 and some probability of being 0 – and observing it causes it to ‘collapse’ into one of the two states, as when you reveal the result of a coin toss.

  But in basic terms, you can think of a qubit as a globe, with 1 at the North Pole, 0 at the South Pole, and superposition as any other point on the sphere. Or imagine a coin: if heads is 1 and tails is 0, then superposition is a spinning coin, laden with unrealised potential futures.
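  To make the coin picture concrete, here is a minimal sketch of a single qubit in Python, modelling the state as a pair of numbers whose squares give the odds of reading 0 or 1 (a simplification of the full quantum description, which uses complex amplitudes; the code is an illustration, not something from the book):

    import random

    class Qubit:
        def __init__(self, amp0, amp1):
            # The two amplitudes; their squares must sum to 1.
            self.amp0, self.amp1 = amp0, amp1

        def measure(self):
            # Observation 'collapses' the superposition: afterwards
            # the qubit is definitely 0 or definitely 1.
            outcome = 0 if random.random() < self.amp0 ** 2 else 1
            self.amp0, self.amp1 = (1.0, 0.0) if outcome == 0 else (0.0, 1.0)
            return outcome

    # An equal superposition: a spinning coin with a 50/50 chance of each face.
    q = Qubit(2 ** -0.5, 2 ** -0.5)
    print(q.measure())  # prints 0 or 1; after this, the coin has landed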

  Qubits can hold more information, more efficiently, than bits. To describe the state of one qubit, you’d need at least two bits (0 on a qubit would be 00, 1 would be 11, and superposition would be 01 or 10). For two qubits, you’d need four bits, for three qubits eight bits, and so on. To describe the state of 300 qubits, you’d need more bits than there are atoms in the known universe. You’d need 72 billion gigabytes of classical computer memory to store as much information as the 53 qubits on Google’s Sycamore chip, for instance. ‘This is an exponential growth of the numbers that we would have to keep track of to describe this quantum system,’ explains Tony Megrant. ‘This is the exponential power that you hear of in quantum computing.’
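  The arithmetic behind that claim is easy to check for yourself: a full classical description of n qubits needs 2^n numbers. A rough sketch (the 16 bytes per amplitude is this example’s assumption, not a figure from the book):

    # How fast the classical description of a quantum state grows.
    for n in (1, 2, 3, 53, 300):
        amplitudes = 2 ** n
        memory_gb = amplitudes * 16 / 1e9  # assuming 16 bytes per amplitude
        print(f"{n:3d} qubits -> {amplitudes:.3e} amplitudes, ~{memory_gb:.3e} GB")

  By 53 qubits the count of amplitudes is already in the quadrillions, and 2^300 comfortably exceeds the estimated number of atoms in the known universe.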

  Deutsch figured out that a computer built of qubits instead of bits could use the uncertainty of quantum mechanics to its advantage. As well as simulating nature more efficiently, a machine made from qubits would be able to handle uncertainty more efficiently, and tackle certain problems thousands or millions of times faster. Instead of trying out each path of a maze in turn, it could effectively go down every single path in parallel. It’s a bit like holding your finger in the pages of a choose-your-own-adventure book – and seeing the consequences of every decision point at once.

  If multiple qubits are coupled together, their interference patterns can be carefully choreographed so that the paths leading to wrong answers cancel one another out, while those leading to the right answer reinforce one another. The result is an exponential increase in computing power for certain types of problem. This is why some believe that quantum computers could go well beyond the confines of classical computers to create powerful new materials, turbocharge the fight against climate change and completely upend cryptography.
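  You can see that cancellation at work in the simplest possible case, using the standard Hadamard gate from quantum computing textbooks (this sketch is illustrative, not drawn from the book):

    import numpy as np

    # The Hadamard gate puts a qubit into equal superposition.
    H = np.array([[1, 1],
                  [1, -1]]) / np.sqrt(2)

    state = np.array([1.0, 0.0])  # start as a definite 0
    state = H @ state             # equal superposition: [0.707, 0.707]
    state = H @ state             # applied again, the two 'paths' interfere
    print(np.round(state, 6))     # [1. 0.]: paths to 1 cancel, paths to 0 reinforce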

  The very nature of quantum mechanics, however, poses a fundamental challenge for computing. To do calculations, you need to be able to measure things, and pass on the results of what you find to the next stage of the calculation. But measuring something in superposition knocks it out of that state: the photon no longer appears to be in two places at once; Schrödinger’s cat is either dead or alive. You need to be able to move that spinning coin around without disturbing its spin.

  In fact, this is possible, thanks to another weird feature of quantum mechanics called ‘entanglement’. When two electron waves interact with each other, each one leaves a mark on the other. This means that they’re inextricably linked, or ‘entangled’, no matter how far apart they drift. Even if they’re separated by billions of miles, measuring one entangled particle instantly changes the state of the other one – the wave function of both particles collapses at the same time. It’s an observation that has puzzled physicists for decades. Something links the two particles, and it solves the measurement problem: quantum information can be passed from one qubit to another without collapsing the underlying superposition.
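  A toy version of an entangled pair shows how strong the correlation is. The sketch below uses the textbook ‘Bell state’, in which two qubits are either both 0 or both 1 (again, an illustration rather than anything from the book):

    import numpy as np

    # Amplitudes for the joint states 00, 01, 10, 11.
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
    probs = np.abs(bell) ** 2  # [0.5, 0, 0, 0.5]

    rng = np.random.default_rng()
    print(rng.choice(["00", "01", "10", "11"], p=probs))
    # Always prints '00' or '11': measure one qubit and you instantly
    # know the other, no matter how far away it is.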

  Deutsch’s insights were of critical importance, and by 1992 people were starting to pay attention to the world of quantum computing. But the idea might have remained in the world of theory if it hadn’t been for Giuseppe Castagnoli, head of IT at Elsag Bailey, a manufacturer of industrial control systems that is now part of ABB. ‘At the time I was in charge of the Information Communication Technology Division of Elsag Bailey, and was personally interested in quantum computation, which was in its very early stage,’ remembers Castagnoli, now 78 and still publishing papers on quantum cryptography. ‘When I saw the possibility of an industrial application of quantum computation and communication, I approached the scientific community.’

  ‘He persuaded his company that, instead of sponsoring some art exhibition, he would sponsor a series of conferences,’ recalls Artur Ekert, a professor of quantum physics at the University of Oxford and an early attendee of Castagnoli’s annual workshops at Villa Gualino, a hillside hotel overlooking Turin, from 1993 to 1998. Here, the young academics who are now among the most influential people in quantum computing rubbed shoulders and exchanged ideas.

  In 1994, Ekert gave a talk at the International Conference on Atomic Physics in Boulder, Colorado, based on some of the ideas he’d absorbed at Villa Gualino. For the first time, he broke down quantum computation into its basic building blocks, drawing parallels with classical devices and describing the types of switches and logic gates that would be needed to build a quantum machine. Logic gates do things like combine the inputs from two bits and spit out one answer – outputting a 1 only if both inputs are 1, for instance.
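  In Python, those classical building blocks are one-liners; NAND on its own is enough to construct any other gate (a quick sketch for orientation, not material from Ekert’s talk):

    def AND(a, b):
        return a & b        # 1 only if both inputs are 1

    def NAND(a, b):
        return 1 - (a & b)  # a universal classical gate

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "-> AND:", AND(a, b), " NAND:", NAND(a, b))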

  Ekert’s talk marked the birth of quantum computing as an industry. ‘This meeting started the whole avalanche,’ he says. ‘All of a sudden the computer scientists were talking about algorithms; atomic physicists saw that they could play a role. Later it started spilling over into other fields, it started accelerating, and it became the industry you see today.’

  Before it could become an industry, though, scientists had to figure out how to actually build a qubit. In the 1990s, this was still an entirely theoretical construct. To make quantum computing work, scientists needed to find or create something that was small enough to adhere to the laws of quantum mechanics, but also big enough to be reliably controlled. It’s a quest that has pushed our understanding of physics and material science to the limit.

  2

  Building the impossible

  Google’s quantum lab is just north of Santa Barbara, California, in a squat, beige building around the corner from a beer distributor. It doesn’t look like the home of the next big breakthrough in technology – but you could probably have said the same thing about Bletchley Park in the 1940s. There are rows of messy desks, and surfboards hanging on the wall – at lunch, a group of engineers play Nintendo in a meeting room named after Richard Feynman, and there’s a bed and bowl set aside for Qubit, the office dog.

  The real action is in the hardware lab itself, through a set of double doors plastered with nerdy stickers featuring jokes that you’d need a PhD in quantum mechanics to understand. Inside, everything is clean, white and precise. The hum of machinery fills the air, and there’s a quiet, almost reverent level of concentration among the handful of people working in the area. This is where Google’s 53-qubit Sycamore chip achieved quantum supremacy, and ushered in a new dawn for computer science.

  When Artur Ekert gave his talk at Boulder in 1994, the qubit was still an entirely theoretical construct. To make quantum computers a reality, physicists had to find a way to actually build a qubit – a single switch that could be reliably flicked between one and zero, and that could also exist in a state of superposition. It needed to be something small enough to adhere to the laws of quantum mechanics, but big enough and stable enough to be reliably controlled.

  Those two conditions are challenging enough. But there’s a third problem that quantum engineers spend their time grappling with: interference – not from other qubits, but from the outside world. When you’re working on the quantum scale, the slightest noise can nudge a qubit out of the delicate state of superposition, like a breeze blowing out a candle or toppling a spinning coin. In the industry this is called ‘decoherence’, and it typically happens within a fraction of a second. ‘You’re simultaneously trying to really well isolate the inner workings of a quantum computer and yet be able to tell it what to do and to get the answer out of it,’ says Chetan Nayak, Microsoft’s general manager of quantum hardware.

  It’s all part of a delicate balancing act. Each quantum computation is a frantic race to perform as many operations as possible in the fraction of a second before a qubit ‘decoheres’ out of superposition. ‘The lifetime of the quantum information is super-short,’ explains Jan Goetz of the Finnish start-up IQM, which is developing technology to try to increase the clock speed of quantum chips and improve their performance in this regard. ‘The more complex you make the processors, the more the lifetime goes down.’

  Every new control method you add also introduces more interference and noise. ‘Every time we add a line into that it actually adds decoherence,’ says Google’s Tony Megrant. ‘We’re hurting our device, but can we somehow overall end up better? The most challenging thing is how can you get a quantum system with sufficient lifetime to operate long enough that you can do meaningful things.’
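  A back-of-the-envelope model shows why lifetime matters so much. If a qubit stays coherent for a time T and each operation takes a time t, the chance of finishing an n-step computation intact shrinks exponentially with n (the numbers below are this sketch’s assumptions, not figures from Google or IQM):

    import math

    T = 100e-6  # assumed coherence lifetime: 100 microseconds
    t = 25e-9   # assumed time per operation: 25 nanoseconds

    for n_ops in (10, 100, 1000, 10000):
        p_intact = math.exp(-n_ops * t / T)
        print(f"{n_ops:6d} operations -> {p_intact:.1%} chance of staying coherent")

  Longer lifetimes or faster gates buy you more operations per race.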

  That’s why most of the room is taken up by six cryostats, arranged in two rows of three. These are nested metal cylinders, each painted in one of Google’s corporate colours, hanging down from the ceiling and narrowing slightly from top to bottom, like a chandelier. They’re designed to gradually cool a quantum chip to a temperature that’s colder than outer space, and keep it completely isolated from heat, noise, vibration and electromagnetic interference. The cryostats gradually step down the temperature – each level gets progressively colder, and it takes the whole machine almost two days to get the quantum chip down to 10 millikelvin, and nearly a week to warm back up to room temperature. ‘It’s all about how to protect the lifetime of the system,’ says Yu Chen, a quantum research scientist at Google who focuses on measuring and calibrating the systems.

  Computing with lasers

  Google’s technology is just one of a number of different attempts to build working qubits. Artur Ekert’s talk sparked research in several different directions – all proceeding in parallel like climbers taking different routes up a mountain. There have been dozens of different approaches dating back to the early days of quantum computing – qubits have been suspended in laser beams, trapped in diamonds and inferred from the aggregate magnetic alignment of billions of particles in a machine that works like an MRI scanner. Some routes offer a gentler starting slope before accelerating in difficulty, while others have a steeper initial learning curve, but promise to be easier to scale up to the thousands or millions of qubits we’ll eventually need to solve real-world problems.

  The earliest efforts were based on ion traps, a technology that grew out of well-established work on atomic clocks, which were developed in the mid-twentieth century to provide supremely accurate timekeeping by tapping into the internal metronome of an atom. An ion is an atom with a positive or negative charge (rather than a neutral one). In trapped-ion quantum computing, qubits are formed from individual ions which are held in tiny wells, with pulses of light used to nudge them between different states, and stop them jiggling around, like a cruise ship being held in place by tug boats.[1] In May 1995, Peter Zoller and Ignacio Cirac from the University of Innsbruck in Austria – who were also part of the nascent quantum community that met annually in Turin – published a paper[2] describing how ions could be used as qubits for simple operations, and how the states of 1, 0 and superposition could be encoded in the rocking motion of the ion, or the energy level or ‘spin’ of one of the electrons that orbits it.

  Ekert had described the need to build a particular type of quantum logic gate known as a CNOT gate – a two-qubit gate where the second (target) qubit flips between 1 and 0 only if the first (control) qubit is in a certain state. In classical computers, which consist of a complex network of logic gates composed of individual bits, you can make any type of circuit from what are known as NOR and NAND gates. CNOT gates, combined with operations on single qubits, are the quantum equivalent. If they could be achieved, Ekert said, it meant that computer scientists would have everything they needed to start building quantum circuits.
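  Written out as a table of inputs and outputs, the CNOT gate is simple to state. This sketch uses its standard textbook matrix form (an illustration, not a reconstruction of Ekert’s talk):

    import numpy as np

    # Basis order: 00, 01, 10, 11, with the control qubit written first.
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    basis = ["00", "01", "10", "11"]
    for i, label in enumerate(basis):
        state = np.zeros(4)
        state[i] = 1  # start in a definite two-qubit state
        out = CNOT @ state
        print(label, "->", basis[int(np.argmax(out))])
    # 00 -> 00, 01 -> 01, 10 -> 11, 11 -> 10:
    # the target flips only when the control is 1.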

  In 1995, just a year later, a team at the National Institute of Standards and Technology (NIST) at Boulder was able to build a working CNOT gate using a single ion of the element beryllium. A laser pulse would change the spin of the orbiting electron from up to down (or 1 to 0), but only if the ion was vibrating in a certain way (which corresponded to 1 rather than 0). They’d created a working two-qubit quantum gate – but they’d need many more than that to build a working quantum computer.

  Ion traps have perhaps the gentlest gradient of all the routes up the quantum mountain, because they’re not reliant on any new technologies being developed. ‘The physics is done,’ says Peter Chapman, president and CEO of Baltimore-based, Amazon-backed IonQ, which is attempting to commercialise a trapped-ion quantum computer. ‘I like to joke that IonQ computers are largely built and delivered in parts from Amazon – it’s pretty much off the shelf.’

  Using ions has cons as well as pros. On the plus side, ions don’t have to be made – they simply exist in nature. ‘We don’t have a manufacturing problem,’ Chapman says. ‘Mother Nature manufactures atoms.’ That brings huge advantages in terms of the amount of shielding required – trapped ions are much less likely to get knocked out of superposition by environmental interference.

  On the minus side, because ions are so small, trapped-ion quantum computing is harder – some say impossible – to scale to the level required for a useful quantum computer, which would need hundreds or thousands of qubits. Chapman argues that this could be achieved by creating separate, smaller quantum devices and shuttling information between them using light.

  Another early approach to quantum computing offered a quite different solution to the problem of decoherence. Instead of trying to isolate qubits from the environment, it simply accepted that some would inevitably decohere, and solved the problem by sheer force of numbers.

  In 2001, IBM researchers in San Jose, California, led by Isaac Chuang were able to create a working quantum computer using a small amount of liquid and magnets. They used a molecule containing five fluorine atoms and two carbon atoms. Each of the seven atoms in this molecule has its own spin state, so effectively the molecule could operate as a seven-qubit quantum computer, if you could reliably control it.

 
