by John Gribbin
There's one small catch. In order to complete the transformation, information about the way the first photon was tweaked has to be transmitted to the location of the second photon by conventional means, no faster than the speed of light. This information is then used to tweak the second photon in just the right way (not the same way that the first photon was tweaked, but in a kind of reverse process) to complete the transformation. In effect, the conventional signal tells the system what tweak has been applied to photon number one, and the system then does the opposite to photon number two. Quantum teleportation requires both a quantum “channel” and a classical “channel”; it takes two signals to dance the teleportation tango.
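For readers who like to see the bookkeeping, here is a minimal sketch in Python of why it takes both channels. This is my own illustration, not the protocol of any particular experiment: each of the four possible outcomes of the sender's measurement leaves the distant photon in a different "tweaked" version of the original state, and only the two classical bits reveal which corrective tweak restores it.

```python
import numpy as np

# Pauli operators: the corrective "tweaks" available at the receiving end
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def bell(x, y):
    """Bell state labelled by two classical bits: (|0,y> + (-1)^x |1,1-y>)/sqrt(2)."""
    v = np.zeros(4, dtype=complex)
    v[y] = 1
    v[2 + (1 - y)] = (-1) ** x
    return v / np.sqrt(2)

# The state to be teleported (any normalized one-qubit state will do)
psi = np.array([0.6, 0.8j], dtype=complex)

# Qubit 0: the unknown state; qubits 1 and 2: the entangled pair shared
# between sender and receiver -- the quantum "channel"
state = np.kron(psi, bell(0, 0)).reshape(2, 2, 2)

overlaps = []
for x in (0, 1):
    for y in (0, 1):
        # The sender's Bell measurement on qubits 0 and 1 yields two
        # classical bits (x, y); project out the receiver's conditional state
        B = bell(x, y).reshape(2, 2)
        bob = np.einsum('ij,ijk->k', B.conj(), state)
        bob /= np.linalg.norm(bob)
        # The classical "channel": the bits (x, y) tell the receiver which
        # tweak -- Z applied x times, X applied y times -- undoes the damage
        fixed = np.linalg.matrix_power(Z, x) @ np.linalg.matrix_power(X, y) @ bob
        overlaps.append(abs(np.vdot(psi, fixed)))

print(overlaps)  # all four outcomes recover psi exactly
```

Without the two classical bits, the receiver holds one of four equally likely scrambled states and cannot tell which; that is why the scheme can never outrun light.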
A large and successful research effort has gone into making this a reality, not least because quantum information offers a way of transmitting information utterly securely using systems that cannot be cracked. I have explained the details in my book Schrödinger's Kittens, but the essential point is that information traveling by the quantum “channel” cannot be read by a third party; in addition, any attempt to eavesdrop will alter the quantum state of the photons, making it obvious that they have been interfered with. This is not the reason why teleportation helps in the design of quantum computers; indeed, in recent times headline-making developments in quantum teleportation have concentrated on much larger scales than those appropriate for computation. But their success emphasizes the reality of the process, and how good scientists now are at working with quanta.
In 2012, two record-breaking experiments made those headlines—both of which will probably have been superseded by the time you read this. First, a large group of Chinese researchers succeeded in teleporting a quantum state through 97 kilometers of open air across Qinghai Lake, using a telescope to focus the photons. Almost as an aside, the experiments confirmed the by-now-expected violation of Bell's inequality, offering insight for the theorists into the foundations of quantum physics. A few weeks later, a team from Austria, Canada, Germany and Norway teleported the properties of a photon across a distance of 143 kilometers, from the astronomical observatory at La Palma, in the Canary Islands, to a European Space Agency ground station on the neighboring island of Tenerife. Both the transmitting station and the receiving station were located roughly 2,400 meters above sea level, where the air is thin and atmospheric interference is reduced.
But the air is even thinner at higher altitudes, so that in some ways it should be easier to carry out quantum teleportation, and achieve secure communication, by pointing the beams upward to a satellite. The distances involved are very similar to those already achieved on the ground, and although there are, of course, many other problems involved in establishing this kind of satellite communication, the Chinese are already planning a satellite experiment, provisionally scheduled for launch in 2016 or 2017, to test the possibilities, using ground stations in Europe and in China to communicate with the satellite simultaneously for a few minutes in each orbit.17 This is particularly important because this kind of quantum information is soon lost if the photons are sent through fiber-optic cables. The leader of the Chinese team, Pan Jianwei, of the University of Science and Technology of China in Hefei, envisages an eventual network of satellites acting as repeater stations for global coverage of a quantum communications network. This could be the basis of an utterly secure quantum Internet; and in all probability many of the computers plugged into that Internet will by then themselves be running on quantum principles, including teleportation.
In connection with this work, Chinese researchers have devised ever better techniques for entangling photons. In 2004, they could produce a few four-photon entanglement events every second; by 2012, they could produce entangled groups of four photons at a rate of a few thousand per second. This is important for the communications work, but also, as I shall shortly explain, for some kinds of quantum computing. It's time to return to the main thread of my story.
FUN WITH PHOTONS18
Serge Haroche was born on September 11, 1944—making him just seven months younger than David Wineland—in Casablanca, Morocco, which was then a French “protectorate.” It did not become fully independent until 1956, at which point the Haroche family (his father was a lawyer and his mother a Russian-born teacher) left for France. He graduated from the École Normale Supérieure in Paris in 1967, and received his PhD from the Pierre and Marie Curie University of Paris in 1971. While working for his doctorate, Haroche was a research associate at the National Center for Scientific Research (CNRS, from the French title), where he stayed as a research fellow from 1971 to 1973 (including a year, 1972/3, as a visitor at Stanford University) and as a senior research fellow from 1973 to 1975. In 1975 he was appointed a professor at the University of Paris, moving in 2001 to the Collège de France, where he remains, as Professor of Quantum Physics. As well as his first Stanford trip, Haroche has at various times been a visiting scientist at Stanford, Harvard, Yale and MIT.
The common thread of Wineland's and Haroche's work is that both of them make direct observations of individual quantum systems without disturbing their quantum states. But they approach the task from opposite directions. Wineland traps individual ions and both manipulates and monitors their behavior with light; Haroche, a pioneer of the technique known as cavity quantum electrodynamics (CQED), uses atoms to manipulate and monitor the behavior of trapped photons. He actually uses microwave photons, with wavelengths longer than those of visible light, but to a physicist all photons are “light.”
The way to trap a photon is with mirrors. In Haroche's lab in Paris, concave half-spherical mirrors made of superconducting material are placed about 3 centimeters apart, forming a hollow cavity, and cooled to less than one degree above absolute zero. These mirrors are so reflective that a photon can bounce back and forth between them for 130 milliseconds (more than a tenth of a second) before it is absorbed. Since photons travel at the speed of light, just under 300,000 kilometers per second, this means that an individual photon travels roughly 40,000 kilometers, back and forth across the same 3 centimeters, before it is absorbed. This is approximately equivalent to flying once around the equator of the Earth. While it is doing so, Haroche manipulates it using rubidium atoms in a specially prepared state, known as Rydberg atoms after the Swedish physicist Johannes Rydberg, a nineteenth-century pioneer of atomic spectroscopy. In a Rydberg atom, the outer electrons have been given an energy boost which has lifted them up into “orbits” much farther out from the nucleus and the inner electrons than usual. Such atoms may be as much as 125 nanometers across, a thousand times bigger than ordinary atoms, and they interact strongly with microwave photons through a phenomenon known as the dynamical Stark effect. A Rydberg atom sent through the cavity at a carefully controlled speed just has time to interact with the photon, inducing a phase shift in the atom's quantum state—in effect, a change of sign but not a change of size. The photon is unaffected by this, but in the process it has become entangled with the Rydberg atom. By analyzing the state of the Rydberg atom after it emerges from the cavity, and comparing it with the state it was in when it entered, Haroche and his colleagues can determine the state of the photon without making it decohere. In an extension of this technique, they can count the number of photons in the cavity, which is much harder than you might guess.
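The numbers quoted above are easy to check on the back of an envelope; here is that arithmetic spelled out (the 130 millisecond lifetime and 3 centimeter spacing are the figures from the text):

```python
# Back-of-envelope check of the cavity numbers quoted above
c = 299_792.458               # speed of light, km per second
lifetime = 0.130              # photon storage time, ~130 milliseconds
mirror_gap_km = 0.03 / 1000   # 3 centimeters expressed in kilometers

distance_km = c * lifetime            # total path flown before absorption
bounces = distance_km / mirror_gap_km # number of crossings of the gap

print(round(distance_km))   # ~39,000 km, roughly Earth's circumference
print(f"{bounces:.1e}")     # ~1.3 billion crossings of the 3 cm gap
```

A single photon really does make on the order of a billion round trips between those two mirrors before it is lost.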
Using these techniques, Haroche and his colleagues have been able to put the photons into a superposition (a “Schrödinger's cat state”) in which, in effect, they are equivalent to waves going in opposite directions at the same time, then to monitor them with Rydberg atoms to see how long it takes for the superposition to decohere. With feedback processes being used to preserve the “cat state” for longer, this is one possible route to a qubit based on light. But other teams are also having fun with photons, and may be closer to a breakthrough in the field of what is now called quantum photonics.
With their love for acronyms, physicists sometimes refer to the processes involved in quantum computation as QIP, for Quantum Information Processing. Single photons can be manipulated relatively easily, using properties such as polarization, to act as single qubits; but, as we have seen, a key step in QIP is the ability to manipulate pairs of qubits to act as CNOT gates. This involves “flipping” the state of a target qubit (T) only if the control qubit (C) is in the state 1. The way this is achieved in what has become known as linear optical quantum computing is for the control and target qubits (photons) to be shepherded through an optical network of mirrors, half-silvered mirrors, and so on, together with two other photons. The conditions for the CNOT operation to occur exist within the network (only for the C and T photons), but the experimenters only know if it has taken place when the photons emerge. There are two detectors at the output end of the business part of the network, and in this case, if a single photon is detected at each output, then the CNOT operation has taken place. This happens only about one-sixteenth of the time—as ever in the quantum world we have to deal with probabilities, not certainties. This is unfortunate, because the probability of a successful computation decreases exponentially as the number of CNOT gates increases, making scaling impractical. But there is a way out.
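The CNOT rule, and the reason the one-in-sixteen success rate is so painful, can both be written down in a few lines (a sketch of the ideal gate's truth table, not of the optical network itself):

```python
import numpy as np

# CNOT truth table: flip the target qubit T only when the control C is 1
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

labels = ["00", "01", "10", "11"]   # basis states written as |C T>
table = {}
for i, label in enumerate(labels):
    v = np.zeros(4)
    v[i] = 1.0
    table[label] = labels[int(np.argmax(CNOT @ v))]
print(table)  # {'00': '00', '01': '01', '10': '11', '11': '10'}

# A linear-optical CNOT succeeds only about 1/16 of the time, and every
# gate in a circuit must succeed, so the naive success probability
# collapses exponentially with circuit size:
for n_gates in (1, 5, 10):
    print(n_gates, "gates:", (1 / 16) ** n_gates)
```

Ten gates in a row would succeed less than once in a trillion attempts, which is why the teleportation trick described next is essential.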
Now comes the clever bit—and it involves quantum teleportation. Remember those two extra photons, passing through the network without being involved in the CNOT operation? The mechanics of the process are complicated, but it is possible to entangle each of the “working” photons with one of the “spare” photons, and to combine this with a teleportation process. Within the network, the CNOT operation is attempted repeatedly, until the operation is successful, and only then does the result appear as output. In terms of classical physics, this seems like pure quantum magic, and almost like time travel—the control and target qubits are teleported onto the output photons only after it is known that the gate has succeeded. The teleportation technique could also be applied in reading data from other kinds of quantum computers, such as those based on trapped ions.
I have described the basics of linear optical quantum computing in terms of conventional optical components such as mirrors and beam splitters, and this is indeed the way the first CNOT gate for single photons was built, by Jeremy O'Brien and his colleagues at the University of Queensland, in Australia. It took up an entire laboratory bench, a couple of square meters in area, with photons propagating through the air. Fine for demonstrating that the optical CNOT operation could be made to work, but hardly practicable for scaling up to a working quantum computer containing hundreds or thousands of gates. But Colossus was also based on glass technology—valves—and thanks to miniaturization using semiconductors, we now have computers far more powerful than Colossus, containing far more gates, that we can hold in the palm of a hand. Classical computing had to wait for semiconductor technology to be developed before the computers could be miniaturized; but Turing's heirs, such as O'Brien, already have the semiconductor technology, enabling them to move straight from the lab bench “proof of principle” to miniaturization. That's just what O'Brien has done, now working at the University of Bristol, in England, where he is director of the Centre for Quantum Photonics.
By 2008, O'Brien and his colleagues had developed a device containing hundreds of CNOT gates in a piece of silicon just 1 millimeter thick. Instead of mirrors and beam splitters, the device steers photons through a network of waveguides each a millionth of a meter across: channels of transparent silica grown onto a silicon base using standard industrial techniques.19 Indeed, these “chips,” just 70 mm by 3 mm, were manufactured by industry—CIP Technologies, of Ipswich, in Suffolk—not by university technicians. The “silica on silicon” technology is widely used in optical telecommunication devices, where the silica guides light in the same way as optical fibers but on a smaller scale. In the pioneering Bristol device, four photons are guided into the network and are put into a superposition of all possible four-bit inputs; the calculation performed by gates inside the network creates an entangled output, which is collapsed by measuring the output states of the appropriate pair of photons. In this way, the Bristol team used Shor's algorithm to determine the factors of 15, proudly finding the answer 3 and 5. All done at room temperature in a device superficially similar to a common computer chip.
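The logic of factoring 15 this way is easy to follow in outline. Here is a classical skeleton of Shor's algorithm (my own sketch, with the period found by brute force; the whole point of the Bristol chip is that the entangled photons do that middle step):

```python
from math import gcd

def shor_factor(N, a):
    """Classical skeleton of Shor's algorithm. The step a quantum
    computer speeds up is the period-finding in the middle; here it is
    done by brute force, which is fine for N = 15."""
    assert gcd(a, N) == 1
    # Find the period r: the smallest r > 0 with a^r = 1 (mod N)
    r = 1
    while pow(a, r, N) != 1:
        r += 1
    if r % 2 == 1:
        return None  # odd period: pick a different a and try again
    half = pow(a, r // 2)
    # The factors of N hide in gcd(a^(r/2) +/- 1, N)
    factors = {gcd(half - 1, N), gcd(half + 1, N)} - {1, N}
    return sorted(factors) or None

print(shor_factor(15, 7))  # [3, 5]
```

With a = 7 the period is 4, and the greatest common divisors of 7² ± 1 with 15 deliver the factors 3 and 5—the same answer the four photons on the chip proudly produced.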
“These results show,” said the team in an article in Science,20 “that it is possible to directly ‘write’ sophisticated photonic quantum circuits onto a silicon chip, which will be of benefit to future quantum technologies based on photons, including information processing.”
Although “all” you need to do to tackle bigger problems is to add more qubits, the “adding more qubits” step is at present a major hurdle, since it means finding a reliable source of single photons. But there is no reason to think this is beyond the technological capabilities of an industry that has already developed the conventional microchip. Another possible approach is to combine this technique with one of the other techniques I have described, using trapped ions or quantum dots. The Bristol Centre is patenting key aspects of the technology, and plans to offer it under license; Nokia and Toshiba (who are manufacturing devices for the Bristol group) are already working on the development of photonic chips based on the Bristol breakthrough. The Bristol team are enthusiastic—perhaps understandably over-enthusiastic—about the possibilities. Speaking in 2012, Mark Thompson, a leading member of the Bristol group, said that single-purpose “computers” for use in cryptography should be available (at least to customers with deep pockets) within three years, because they only need one pair of entangled photons, or two pairs allowing for the teleportation trick; that is plausible enough. Soon after, he expects to have chips each using twenty pairs of working photons to produce 10-qubit computers that will be able to carry out some kinds of calculations faster than conventional computers;21 other researchers regard this as the limit of practical possibilities with existing photon sources. But Thompson also expects to have systems using hundreds of qubits that can be applied to specific tasks—such as determining the shape of a molecule in what is known as protein folding, a key step in the development of new drugs—“within ten years.” If these projections are well founded, they make quantum photonics very much the front runner in the quantum computer race at the time of writing, late in 2012; but perhaps the more ambitious forecasts should be taken with a pinch of salt.
As with conventional chips, though, if a working prototype can be developed, virtually all of the expense attaches to designing and manufacturing the first chip. Once you have set up the machinery to make one, you can make a million at marginal extra cost. This raises the real possibility, for example, of incorporating quantum technology into smartphones, although maybe not on a ten-year timescale. Why bother? Well, although quantum computers can be used to crack conventional codes, they can also be used to create codes that cannot be cracked even by quantum computers.22 This doesn't just mean that celebrities and politicians could enjoy making phone calls without the fear of being hacked. It means that we could all administer our bank accounts or handle other sensitive data from our phones without worrying that the information might be captured by a third party and misused. Even if our computers and smartphones will, as I have explained, still have to use classical methods for dealing with most problems, the prospect of unhackable computers and smartphones is alone sufficient to justify the effort going into computing with quanta.
This brings my story to a pleasingly symmetrical conclusion. It started with Alan Turing and the need to crack codes; it ends with Turing's heirs and the need for uncrackable codes. From Colossus to qubits, the story is essentially the same.
The story of quantum computing is usually told, as I have told it here, in terms of small numbers of quantum entities, entanglement, and superposition. Everything I have told you about this approach to computing with quanta is correct. But there may be something else going on, something which could involve a different approach to computing with quanta, and may also provide insight into the foundations of quantum mechanics. You may, perhaps, have noticed something odd about the very first quantum computer I described, which was indeed the first one to apply Shor's algorithm successfully—the NMR technique which used a thimbleful of liquid to factorize the number 15. Several physicists pointed out that in a liquid at room temperature it is not possible to maintain entanglement and superpositions; the nuclear spins are shaken up by interactions between particles and cannot be neatly aligned. And yet, the NMR systems work! Whatever the intention of the people who designed those experiments, they must, as the experimenters themselves came to acknowledge, be working for another reason. What was it that gave these systems the power to run Shor's algorithm?
The answer seems to be a phenomenon known as discord, a term introduced to quantum physics by Wojciech Zurek, of the Los Alamos National Laboratory, New Mexico, in 2000. Quantum discord provides a measure of quantum correlations, and in particular of how much a system is disturbed when it is measured and information is obtained from it. Everyday objects, including classical computers, are unchanged when they are observed, so they have zero discord; quantum systems, as we have seen, are affected by being forced to collapse into specific states, so they have positive values of discord. The person credited with applying this idea to quantum computing is Animesh Datta, of the University of Oxford. He built on work by Emanuel Knill, now at NIST, and Raymond Laflamme, now at the University of Waterloo, Canada. They had raised the question of what would happen if a qubit in a “mixed” state (that is, not 0 or 1 but some messier state, say, one-third 0 and two-thirds 1) were sent through an entangling gate with a “pure” qubit, in a definite state (0 or 1). Mixed qubits cannot be used for entangling; but they can interact with pure qubits, and it turned out that such a quantum interaction between the mixed and pure qubits, which is described mathematically by discord, can be used in computation. In other words, instead of preparing and controlling two pure qubits on their way through the network of gates that make up a quantum computer, only a single carefully controlled pure qubit is needed, while the other one can be knocked about by its interactions with the surrounding environment. This seemed to be the reason for the success of the early NMR experiments. As long as some of the nuclei were playing ball by being aligned in accordance with the expectations of the researchers, it didn't matter that most were being jostled out of the pure states that had seemed so important.
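The distinction between "pure" and "mixed" qubits can be made precise in the density-matrix language physicists use for such states. The sketch below illustrates only that vocabulary—the one-third/two-thirds mixture from the text versus a definite state—not Datta's discord calculation itself:

```python
import numpy as np

# A "pure" qubit: a definite state, here 0
pure = np.array([[1, 0],
                 [0, 0]], dtype=complex)
# The "messier" mixed qubit from the text: one-third 0, two-thirds 1
mixed = np.diag([1 / 3, 2 / 3]).astype(complex)

def purity(rho):
    """tr(rho^2): exactly 1 for a pure state, less than 1 for a mixed one."""
    return float(np.trace(rho @ rho).real)

print(purity(pure))   # 1.0
print(purity(mixed))  # 5/9, about 0.56
```

It is qubits of the second kind—jostled by their surroundings, purity well below 1—that the NMR experiments were full of, and that discord shows can still contribute to a computation when paired with a single carefully controlled pure qubit.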