
Know This


edited by John Brockman


  Hayden and Harlow’s work connects physics and computer science in a totally unprecedented way. Physicists have long speculated that information plays a fundamental role in physics. It’s an idea that dates back to Konrad Zuse, the German engineer who built the world’s first programmable computer in his parents’ living room in 1938 and, three years later, the relay-based Z3, since shown to be Turing-complete. In 1969, Zuse wrote a book called Calculating Space, in which he argued that the universe itself is a giant digital computer. In the late 1980s, the physicist John Wheeler began advocating for “it from bit”—the notion that the physical, material world is, at bottom, made of information. Wheeler’s influence drove the burgeoning field of quantum information theory and led to quantum computing, cryptography, and teleportation. But the idea that computational complexity might not only describe the laws of physics but actually uphold the laws of physics is entirely new.

  It’s odd, at first glance, that something as practical as resource constraints could tell us anything deep about the nature of reality. And yet in quantum mechanics and relativity, such seemingly practical issues turn out to be equally fundamental. Einstein deduced the nature of spacetime by placing constraints on what an observer can see. Noticing that we can’t measure simultaneity at a distance gave him the theory of special relativity; realizing that we can’t tell the difference between acceleration and gravity gave him the curvature of spacetime. Likewise, when the founders of quantum mechanics realized that it is impossible to accurately measure position and momentum, or time and energy, simultaneously, the strange features of the quantum world came to light. That such constraints were at the heart of both theories led thinkers such as Arthur Stanley Eddington to suggest that at its deepest roots physics is epistemology. The new computational complexity results push further in that direction.
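
  As a reminder of the kind of constraint being invoked (a standard textbook statement, not part of the original essay), the Heisenberg uncertainty relations can be written as

  \Delta x \,\Delta p \ge \hbar/2, \qquad \Delta E \,\Delta t \gtrsim \hbar/2,

  where the energy–time relation has a subtler interpretation, since time enters quantum mechanics as a parameter rather than as an observable.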

  So that’s the news: a profound connection between information, computational complexity, and spacetime geometry has been uncovered. It’s too early to say where these clues will lead, but it’s clear now that physicists, computer scientists, and philosophers will all bring something to bear to illuminate the hidden nature of reality.

  Einstein Was Wrong

  Hans Halvorson

  Professor of philosophy, Princeton University

  We’ve known about “quantum weirdness” for more than 100 years, but it’s still making headlines. In the summer of 2015, experimental groups in Boulder, Delft, and Vienna announced that they had completed a decades-long quest to demonstrate quantum nonlocality. The possibility of such nonlocal effects first captured the attention of physicists in the 1930s, when Einstein called it “spooky action at a distance”—indicating that he perceived it as a bug of the nascent theory. But on this particular issue, Einstein couldn’t have been more wrong: Nonlocality isn’t a bug of quantum mechanics, it’s a pervasive feature of the physical world.

  To understand why the scientific community has been slow to embrace quantum nonlocality, recall that 19th-century physics was built around the ideal of local causality. According to this ideal, for one event to cause another, those two events must be connected by a chain of spatially contiguous events. In other words, for one thing to affect another, the first thing needs to touch something, which touches something else, which touches something else . . . eventually touching the other thing.

  For those of us schooled in classical physics, the notion of local causality might seem central to a rational outlook on the physical world. For example, I don’t take reports of telekinesis seriously—and not because I’ve taken the time to examine all the experiments that have tried to confirm its existence. No, I don’t take reports of telekinesis seriously because it seems irrational to believe in some sort of causality that doesn’t involve things moving through space and time.

  But QM appears to conflict with local causality. According to QM, if two particles are in an entangled state, then the outcomes of measurements on the second particle will always be strictly correlated (or anticorrelated) with measurements on the first particle—even when the second particle is far, far away from the first. Quantum mechanics also claims that neither the first nor the second particle has any definite state before the measurements are performed. So what explains the correlations between the measurement outcomes?
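
  To make this concrete (a standard illustration in the usual notation, not drawn from the essay itself), consider two spin-1/2 particles prepared in the singlet state

  |\psi\rangle = \tfrac{1}{\sqrt{2}} \left( |\uparrow\rangle_1 |\downarrow\rangle_2 - |\downarrow\rangle_1 |\uparrow\rangle_2 \right).

  Neither particle has a definite spin of its own before measurement, yet whenever both are measured along the same axis the outcomes are perfectly anticorrelated, however far apart the particles are.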

  It’s tempting to think that quantum mechanics is just wrong when it says that the particles aren’t in any definite state before they’re measured. In fact, that’s exactly what Einstein suggested in the famous “EPR” paper he wrote with Podolsky and Rosen. However, in the 1960s, John Stewart Bell showed that EPR could be put to experimental test. If, as suggested by Einstein, each particle has its own state, then the results of a certain crucial experiment would disagree with the predictions made by quantum mechanics. Thus, in the 1970s and 1980s, the race was on to perform this crucial experiment—an experiment that would establish the existence of quantum nonlocality.

  The experiments of the 1970s and 80s came out decisively in favor of quantum nonlocality. However, they left open a couple of loopholes. It was only in 2015 that the ingenious experimenters in Boulder, Delft, and Vienna were able to definitively close these loopholes—propelling quantum nonlocality back into the headlines.

  But is it news that quantum mechanics is true? Didn’t we already know this—or at least wasn’t the presumption strongly in its favor? Yes, the real news here isn’t that quantum mechanics is true. The real news is that we are learning how to harness the resources of a quantum world. In the 1920s and 1930s, quantum nonlocality was a subject of philosophical perplexity and debate. In 2015, questions about its meaning are being replaced by questions about what we can do with it. For instance, quantum nonlocality could facilitate information-theoretic and cryptographic protocols that far exceed anything that could have been imagined in a world governed by classical physics. And this is the reason quantum nonlocality is still making headlines.

  But don’t get carried away—quantum nonlocality still doesn’t make it rational to believe in telekinesis.

  Replacing Magic with Mechanism?

  Ross Anderson

  Professor of security engineering, Computer Laboratory, University of Cambridge; author, Security Engineering

  The most thought-provoking scientific meeting I went to in 2015 was Emergent Quantum Mechanics, organized in Vienna by Gerhard Groessing. This is the go-to place if you’re interested in whether quantum mechanics dooms us to a universe (or multiverse) that can be causal or local but not both, or whether we might just make sense of it after all. The big new theme was emergent global correlation. What is this, and why does it matter?

  The core problem of quantum foundations is the Bell tests. In 1935, Einstein, Podolsky, and Rosen noted that if you measured one of a pair of particles that shared the same quantum-mechanical wave function, this would immediately affect what could be measured about the other, even if it were some distance away. Einstein held that this “spooky action at a distance” was ridiculous, so quantum mechanics must be incomplete. This was the most cited paper in physics for decades. In 1964, the Irish physicist John Bell proved that if particle behavior were explained by hidden local variables, their effects would have to satisfy an inequality that would be broken in some circumstances by quantum-mechanical behavior. In 1969, Clauser, Horne, Shimony, and Holt proved a related theorem that limits the correlation between the polarizations of two photons, assuming that this polarization is carried entirely by and within them. In 1972, Freedman and Clauser showed this limit was violated experimentally, followed by Aspect, Zeilinger, and many others. These “Bell tests” convince many physicists that reality must be weird: maybe nonlocal, noncausal, or even involving multiple universes.
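
  For readers who want the shape of the 1969 result (stated here in standard notation, which is not the essay’s own), let E(a, b) denote the correlation between outcomes when the two photons are analyzed at polarizer settings a and b. Any local-hidden-variable account must satisfy the CHSH inequality

  S = E(a, b) - E(a, b') + E(a', b) + E(a', b'), \qquad |S| \le 2,

  whereas quantum mechanics predicts values of |S| up to 2\sqrt{2} for suitably chosen settings; it is this excess that the Bell-test experiments measure.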

  For example, it’s possible to entangle photon A with photon B, then B with C, then C with D, and measure that A and D are correlated, despite the fact that they didn’t exist at the same time. Does this mean that when I measure D, some mysterious influence reaches backward in time to A? The math doesn’t let me use this to send a message backward in time—say, to order the murder of my great-grandfather (the no-signaling theorem becomes a kind of “no-TARDIS theorem”)—but such experiments are still startlingly counterintuitive.

  At the Vienna conference, a number of people advanced models according to which quantum phenomena emerge from a combination of local action and global correlation. As the Nobel prizewinner Gerard ’t Hooft put it in his keynote talk, Bell assumed that spacelike correlations are insignificant, and this isn’t necessarily so. In Gerard’s model, reality is information, processed by a cellular automaton fabric operating at the Planck scale, and fundamental particles are virtual particles—like Conway’s gliders but in three dimensions. In a version he presented at the previous EmQM event in 2013, the fabric is regular, and its existence may break gauge invariance just enough to provide the needed long-range correlation. The problem was that the Lorentz group is open, which seemed to prevent the variables in the automata from being bitstrings of finite length. In his new version, the automata are randomly distributed. This was inspired by an idea of Stephen Hawking’s on balancing the information flows into and out of black holes.

  In a second class of emergence models, the long-range order comes from an underlying thermodynamics. Groessing has a model in which long-range order emerges from subquantum statistical physics; Ariel Caticha has a model with a similar flavor, which derives quantum mechanics as entropic dynamics. Ana María Cetto looks to the zero-point field and sets out to characterize active zero-point field modes that sustain entangled states. Bei-Lok Hu adds a stochastic term to semiclassical gravity, whose effect after renormalization is nonlocal dissipation with colored noise.

  There are others. The quantum-cryptography pioneer Nicolas Gisin has a new book on quantum chance in which he suggests that the solution might be nonlocal randomness—a random event that can manifest itself at several locations. My own suspicion is that it might be something less colorful; perhaps the quantum vacuum just has an order parameter, like a normal superfluid or superconductor. If you want long-range order that interacts with quantum systems, we have quite a few examples and analogs to play with.

  But whether you think the quantum vacuum is God’s computer, God’s bubble bath, or even God’s cryptographic keystream generator, there’s suddenly a sense of excitement and progress, of ideas coming together, of the prospect that we might just possibly be able to replace magic with mechanism.

  There may be a precedent. For about forty years after Galileo, physics was a free-for-all. The old Ptolemaic certainties had been shot away and philosophers’ imaginations ran wild. Perhaps it would be possible, some said, to fly to America in eight hours in a basket carried by swans? Eventually Newton wrote the Principia and spoiled all the fun. Theoretical physics has been stuck for the past forty years, and imaginations have been running wild once more. Multiple universes that let stuff travel backward in time without causing a paradox? Or perhaps it’s time for something new to come along and spoil the fun.

  Quantum Entanglement Is Independent of Space and Time

  Anton Zeilinger

  Physicist, University of Vienna; scientific director, Institute of Quantum Optics and Quantum Information; author, Dance of the Photons

  The notion of quantum entanglement, famously called “spooky action at a distance” by Einstein, emerges more and more as having deep implications for our understanding of the world. Recent experiments have verified the fact that quantum correlations between two entangled particles are stronger than any classical, local pre-quantum worldview allows. So, since quantum physics has predicted these measurement results for at least eighty years, what’s the deal?

  The point is that the predictions of quantum mechanics are independent of the relative arrangement in space and time of the individual measurements: fully independent of their distance, independent of which is earlier or later, etc. One has perfect correlations between all parts of an entangled system, even though these correlations cannot be explained by properties the individual systems carried before measurement. So quantum mechanics transgresses space and time in a very deep sense. We would be well advised to reconsider the foundations of space and time in a conceptual way.

  To be specific, consider an entangled ensemble of systems. This could be two photons, or any number of photons, electrons, atoms—and even larger systems, like atomic clouds at low temperature or superconducting circuits. We now do measurements individually on those systems. The important point is that for a maximally entangled state, quantum physics predicts random results for the individual entangled property. For photons, say, this could be the polarization. That is, for a maximally entangled state of two or more entangled photons, the polarization observed in the experiment could be anything: horizontal, vertical, linear along any direction, right-handed circular, left-handed circular, or any elliptical state. Thus, if we do a measurement, we observe a random polarization. And this holds for each individual photon of the entangled ensemble. But a maximally entangled state predicts perfect correlations between the polarizations of all photons making up the entangled state.
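
  A minimal two-photon version of this (written in standard notation, not the essay’s own) is the maximally entangled polarization state

  |\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}} \left( |H\rangle_1 |H\rangle_2 + |V\rangle_1 |V\rangle_2 \right).

  Neither photon alone carries a definite polarization, so each individual measurement gives a random outcome; yet whenever the two photons are analyzed in the same linear basis, whatever its orientation, the two results always agree.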

  To me, the most important message is that the correlations between particles, like photons, electrons, or atoms—or larger systems, like superconducting circuits—are independent of which of the systems are measured first and how large the spatial distance between them is.

  At first glance, this might not seem surprising. After all, if I measure the heights of the mountain peaks around me, it doesn’t matter in which sequence I do the measurements, or whether I measure the more distant ones first or the nearer ones. The same is true for measurements on entangled quantum systems. However, the important point is that the first measurement on any system entangled with others instantly changes the common quantum state describing them all; the subsequent measurement on the next system does that again, and so on, until, in the end, all measurement results on all systems entangled with each other are perfectly correlated.
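
  In the two-photon state written above, this sequential updating can be sketched as (again in the standard formalism, offered as an illustration rather than as the essay’s own notation)

  |\Phi^{+}\rangle \longrightarrow |H\rangle_1 |H\rangle_2 \quad \text{if photon 1 is found horizontally polarized,}

  after which the second photon, however distant and whenever it is measured, will also be found horizontal.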

  Moreover, as recent experiments finally prove, we now know that this cannot be explained by any communication limited by Einstein’s cosmic speed limit, the speed of light. Also, one might think it makes a difference whether the two measurements are done one after the other, in such a way that a signal could tell the second what to do as a consequence of the earlier measurement, or whether they are separated by such a distance and performed so nearly simultaneously that no signal is fast enough to do so. The observed correlations are the same in both cases. Thus, it appears that on the level of measurements of properties of members of an entangled ensemble, quantum physics is oblivious to space and time.

  An understanding is possible via the notion of information—information seen as the possibility of obtaining knowledge. Then quantum entanglement describes a situation where information exists about possible correlations between possible future results of possible future measurements, without any information existing for the individual measurements. The latter explains quantum randomness, the former quantum entanglement. And both have significant consequences for our customary notions of causality.

  It remains to be seen what the consequences are for our notions of space and time—or spacetime, for that matter. Spacetime itself cannot be above or beyond such considerations. I suggest we need a new deep analysis of spacetime, a conceptual analysis perhaps analogous to the one done by the Viennese physicist-philosopher Ernst Mach, who kicked Newton’s absolute space and absolute time from their throne. The hope is that in the end we will have new physics analogous to Einstein’s new physics in the two theories of relativity.

  Breakthroughs Become Part of the Culture

  Lisa Randall

  Theoretical physicist, Harvard University; author, Dark Matter and the Dinosaurs: The Astounding Interconnectedness of the Universe

  Some of the interesting discoveries and observations of the last year include a new species of human; observations of dwarf planets including Pluto, which demonstrated that Pluto was more geologically active than anticipated; data on species loss indicating a track toward a sixth extinction; and a careful measurement of the timing of the impact triggering the K-Pg extinction and enhanced Deccan Traps volcanic activity, indicating that they occurred at essentially the same time, thus both contributing to species loss 66 million years ago. But news in science is usually the product of many years of effort, even when it appears to be a sudden revolutionary discovery, and the headlines of any given year are not necessarily representative of what is most significant.

  So I’m going to answer a slightly different question, which is what advances I expect we’ll hear about in the coming decade, bearing in mind that the most common stories concern news that in some global sense hasn’t changed all that much. Crisp clean events and many important discoveries are news, but for only a short time. True breakthroughs become part of the culture. General relativity was news in 1915 and the bending of light was news in 1919. Yet although general relativity factors into news today, the theory itself is no longer news. Quantum mechanics stays in the news, but only because people don’t want to believe it, so incremental verifications are treated as newsworthy.

  Instead of saying more about the important discoveries of the last year, I’ll give a few examples of scientific advances I expect we might hear about in the next few. The first is the type we won’t really solve but will marginally, incrementally develop. The second is a type where we will make advances but the news won’t necessarily reflect the most important implications. The third might be a true breakthrough that largely solves a puzzle—like the Higgs boson discovery, which was big news in 2012 but (though still exciting and an important guidepost for the future of particle physics) is no longer news today.

 
