Richard Feynman


by John Gribbin


  The experiment with two holes shows that, even for an entity we are used to thinking of as a particle (such as an electron), something (either the particle itself or the probability wave) goes through both holes in the experiment and interferes with itself in this way to determine the pattern on the screen. But suppose we make an experiment with four holes, instead of just two. Now, obviously, the ‘something’ has to travel through all four holes and make the appropriate interference pattern, and this can be calculated using the rules we have just sketched out. The same is true for an experiment with three holes, or a hundred, or any number you like. You can imagine making more and more holes until there is nothing left to obstruct the path of the electrons or photons at all – you have an experiment with no holes, or one hole, or infinitely many holes, depending on your point of view. One of Feynman’s key insights was that you can still treat the electron or photon (or anything else) as having gone through each of the infinite number of holes, adding up the probabilities associated with each path in the usual way. Integrating (adding up) the probabilities for literally every possible path from the source of the light or electrons to the detector screen on the far side of the experiment then gives you the result that the overwhelmingly most probable path for the particle to follow is a straight line from the source to the detector. For more complicated paths, the phases of adjacent trajectories are exactly opposed to one another (the arrows point in opposite directions), and they all cancel out, leaving just the path that is expected from classical physics. It is only near the classical path (the path of least action) that the probabilities add up and reinforce one another, because they are in phase. And so Feynman’s path integral approach to quantum mechanics does indeed also give classical mechanics, and all of classical optics, from the same set of equations.
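
  For readers who like to see the arithmetic, here is a minimal numerical sketch of this ‘adding up the arrows’ idea, written in Python. It is not a calculation from the book: the geometry, the wavelength and the cut-off for ‘near-straight’ paths are all invented illustrative values, chosen only to show that the paths close to the straight line supply essentially the whole sum while the rest cancel.

```python
import numpy as np

# Toy 'sum over paths': something travels from a source at (0, 0) to a
# detector at (2, 0), crossing an intermediate screen at x = 1 that is all
# holes.  Each path is labelled by the height y at which it crosses the
# screen, and contributes a little rotating arrow exp(i * k * path_length).
wavelength = 0.05                      # arbitrary units, small compared to the geometry
k = 2 * np.pi / wavelength             # wavenumber sets how fast the phase turns

y = np.linspace(-1.0, 1.0, 20001)      # heights of the crossing point (the 'holes')
dy = y[1] - y[0]
path_length = 2 * np.sqrt(1 + y**2)    # source -> screen -> detector, both legs equal
arrows = np.exp(1j * k * path_length)  # one phase arrow per path

all_paths = np.abs((arrows * dy).sum())
near_straight = np.abs(y) < 0.1        # only the paths close to the straight line
core_paths = np.abs((arrows[near_straight] * dy).sum())

print(f"|sum over every path|          = {all_paths:.3f}")
print(f"|sum over near-straight paths| = {core_paths:.3f}")
# The second number comes out about the same size as the first, even though
# it uses only a tenth of the paths: far from the straight line the phases of
# neighbouring paths race around and cancel, so essentially the whole answer
# comes from the narrow bundle of paths around the classical one.
```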

  Figure 8. Common sense (and schoolbook physics) tells us that ‘light travels in straight lines’.

  This is such a dramatic discovery that it is worth showing one example of how it makes us think again about familiar features of the world, such as the idea that ‘light travels in straight lines’. In Figure 8, we show how classical optics teaches us that light is reflected from a mirror. This is so familiar that it seems to fly in the face of common sense to suggest that the image you see in the mirror is a result of light coming from the source in all directions, bouncing off the mirror at all kinds of crazy angles and reaching your eye that way, as it looks in Figure 9. That, though, is exactly what happens, according to Feynman. But the light travelling by crazy angles gets cancelled out by neighbouring light that is equally strong but has opposite phase, so that you do not become aware of it. Because of phase differences, the amplitudes only add up and reinforce each other near the path of least time from the source to your eye – the Principle of Least Action is at work, and as Feynman put it in QED, ‘where the time is least is also where the time for nearby paths is nearly the same’, which is why the probabilities add up there.

  Figure 9. Feynman says that light travels by every conceivable crazy path from the source to your eye, bouncing off the mirror at all kinds of angles (and even travelling by weird routes that do not involve bouncing off the mirror at all).

  You can actually prove, by yourself, that light from the edges of the mirror really is entering your eyes by some of the crazy routes shown in Figure 9. In the more scientifically precise version of such an experiment, you first cover up all the mirror except for a bit out by the edge, so that only that part can reflect. Way out on the edge of the mirror, although the probabilities for neighbouring paths cancel out, you can still find thin strips of mirror where the probabilities all add up. The trouble is, these strips are separated from one another by equally thin strips for which the probabilities are exactly out of phase with the first set of strips, so you see no light from the edge of the mirror. All you have to do, though, is cover up alternating strips of mirror. You are left with half as much working mirror, but now all the paths are in phase, and you really will see the light coming to you from these crazy angles (Figure 10).
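
  The same kind of toy calculation can mimic this mirror demonstration. The sketch below is only an illustration under assumed numbers (the positions of the source and eye, the wavelength and the width of the edge patch are all invented); it adds up the arrows for every bounce point on the mirror, then for a bare patch near the edge, and then for the same patch with the out-of-phase strips blacked out.

```python
import numpy as np

# Toy version of the mirror demonstration: a source and an eye sit above a
# flat mirror, every point on the mirror is a possible bounce point, and each
# bounce contributes an arrow exp(i * k * L), where L is the total path
# length via that point.  All numbers are illustrative.
wavelength = 0.01
k = 2 * np.pi / wavelength
source = (-1.0, 0.5)                 # (x, height) of the light source
eye = (1.0, 0.5)                     # (x, height) of the eye

x = np.linspace(-2.0, 2.0, 400001)   # bounce points along the mirror
dx = x[1] - x[0]
length = (np.hypot(x - source[0], source[1]) +
          np.hypot(eye[0] - x, eye[1]))
arrows = np.exp(1j * k * length)

# 1. The whole mirror: the answer is dominated by bounce points near the
#    centre, where the path length (and hence the phase) changes slowly.
whole = np.abs((arrows * dx).sum())

# 2. A bare patch near the edge: the phases race around the clock, so
#    neighbouring arrows point in opposite directions and almost cancel.
edge = (x > 1.5) & (x < 1.9)
edge_only = np.abs((arrows[edge] * dx).sum())

# 3. The same patch turned into a 'grating': black out every bounce point
#    whose arrow points the wrong way (cosine of the phase negative).  What
#    is left all adds up, and the edge suddenly reflects strongly.
grating = edge & (np.cos(k * length) > 0)
grating_only = np.abs((arrows[grating] * dx).sum())

print(f"whole mirror      : {whole:.4f}")
print(f"bare edge patch   : {edge_only:.6f}")
print(f"edge as a grating : {grating_only:.4f}")
# The bare edge patch gives almost nothing, while the masked ('grating')
# edge gives a contribution comparable to the whole mirror.
```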

  The set-up is called a diffraction grating, and because the effect depends to some extent on the wavelength of the light, if you do it with ordinary light you will see a colourful rainbow pattern. And you don’t even have to go to the trouble of laying out a mirror and covering it with strips of cloth carefully cut to a precise width. The spacing you need to produce the effect with ordinary light is the same as the spacing of the grooves on an ordinary compact disc. Just hold a CD under the light, and you will see for yourself a rainbow pattern caused by photons bouncing off the disc at the ‘wrong’ angles – quantum electrodynamics made visible in your own home. Whether taking the path of least time or bouncing around at ‘crazy’ angles, ‘Light doesn’t really travel only in a straight line’, said Feynman; ‘it “smells” the neighbouring paths around it, and uses a small core of nearby space.’
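
  A rough calculation shows why the CD gives a rainbow. The track spacing of about 1.6 micrometres used below is a typical figure for a compact disc, not one quoted in the text, and the standard grating relation d sin θ = mλ is applied to the first-order reflection.

```python
import math

# Rough numbers for the CD 'rainbow': treat the spiral of grooves as a
# grating with a spacing of about 1.6 micrometres (a typical CD value,
# assumed here) and use d * sin(theta) = m * wavelength with m = 1.
d = 1.6e-6                                   # groove spacing in metres
for name, wavelength in [("violet", 400e-9), ("green", 550e-9), ("red", 700e-9)]:
    theta = math.degrees(math.asin(wavelength / d))
    print(f"{name:>6}: first-order beam at about {theta:.0f} degrees")
# Different colours come off at different angles, which is the rainbow you
# see when you tilt a CD under a lamp.
```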

  Figure 10. We don’t normally see light bouncing off mirrors at crazy angles because the light cancels out everywhere except near the path of least time. But if strips of mirror are carefully blacked out to stop the cancelling, light really is seen to be reflected at all kinds of weird angles.

  Which brings us on to the famous Feynman diagrams. The archetypal Feynman diagram is a spacetime diagram which represents an interaction between two electrons that involves the exchange of a photon. The electrons approach one another, exchange the photon, and move apart (Figure 11). But there is much more to this kind of diagram than appears at first sight. For a start, the exchange of the photon represented by the wiggly line should not be taken as a ‘classical’ particle following a single spacetime path, but as the sum over histories of all possible ways in which that photon could have gone from one particle to the other. The wiggly line doesn’t represent a path, but a summation of all possible paths – a path integral. Secondly, what goes on at the junctions of a Feynman diagram, where different lines intersect, is precisely determined by the rules of quantum electrodynamics. Each kind of intersection – each vertex – represents a different kind of interaction, each with its own precise meaning and its own set of equations that describe what is going on. In this sense, a few Feynman diagrams can represent a kind of shorthand for the hundreds of equations required by Schwinger’s or Tomonaga’s approach to QED. In January 1988, Feynman stressed that:

  Figure 11. The archetypal Feynman diagram. Two particles (perhaps two electrons) approach one another, interact by the exchange of a force-carrying particle (in this case, a photon) and are deflected.

  The diagrams were intended to represent physical processes and the mathematical expressions [our italics] used to describe them. Each diagram signified a mathematical expression. Mathematical quantities were associated with points in space and time. I would see electrons going along, being scattered at one point, then going over to another point and being scattered there, emitting a photon and the photon goes over there. I would make little pictures of all that was going on; these were physical pictures involving the mathematical terms. These pictures evolved only gradually in my mind … they became a shorthand for the processes I was trying to describe physically and mathematically … I was conscious of the thought that it would be amusing to see these funny-looking pictures in the Physical Review.2

  One of the most important features of these diagrams is that they treat particles and antiparticles on an equal footing, which is what makes Feynman’s theory Lorentz invariant, in line with the requirements of relativity theory. By treating particles and antiparticles in the same way, the nature of the infinities that arise in QED becomes clear (at least to a mathematician), and Freeman Dyson proved that the infinities that arise in interactions described by Feynman diagrams are always of the kind which can be removed by renormalization – a dramatic result which did much to persuade other physicists of the value of Feynman’s approach. Today, one of the chief criteria used to decide whether a new idea in particle physics is worth pursuing is whether or not the theory is renormalizable – that is, whether or not it can be described using Feynman diagrams. If it cannot, then it is rejected out of hand.

  Feynman’s ‘funny-looking pictures’ have become so important both because they really do incorporate all of the complex mathematical rules, and because they give a direct practical insight into what is going on. To use them properly (to get numbers out of the calculations to compare with experiments), you need to understand the mathematics. But to get an idea of what is going on, you only need the pictures – and that’s all we are going to be concerned with now as we indicate how that fantastically accurate calculation of the magnetic moment of the electron was worked out. With the physical insight provided by the pictures, Feynman diagrams can even give a picture of processes too complicated to be calculated, but which have a clear physical meaning that could only be derived from Schwinger’s pages of equations by a virtuoso mathematician. To a virtuoso, this democratization of physics may seem unnecessary; many years later, Schwinger described the effect of the Feynman diagram as ‘bringing computation to the masses’;3 he did not intend this as a compliment.

  Figure 12. A Feynman diagram can also describe how an electron moving from A to B is deflected when it interacts with a magnetic field (when it meets a photon from a magnet).

  The simplest version of the interaction between an electron and the field of a magnet can be represented in a diagram like Figure 12. A photon from the magnet is absorbed by the electron. If the situation were really that simple, the calculated magnetic moment of the electron would be 1. In fact, as we have mentioned, it is actually a little bigger, about 1.00116. But the electron can also be involved in a kind of self-interaction, in which it emits a photon and later reabsorbs the same photon (called a ‘virtual’ photon), while in between, it interacts with the photon from the magnet. This is represented in a Feynman diagram like Figure 13. And when you do the corresponding calculation, you get a value for the magnetic moment, allowing for all possible interactions of this kind, a bit bigger than 1, but still not quite as big as the experimental value. It was this single virtual photon version of the calculation that showed physicists they were on the right track in the 1940s.
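
  The size of that first correction can be checked on the back of an envelope. Assuming only the standard value of the fine-structure constant, alpha, the single-virtual-photon correction has the size alpha divided by 2π, which gives the 1.00116 quoted above (the remaining differences from experiment only show up in later decimal places).

```python
import math

# Back-of-the-envelope check of the figure quoted in the text, assuming only
# the standard value of the fine-structure constant.  The first correction to
# the 'simple' value of 1 has the size alpha / (2 * pi).
alpha = 1 / 137.036                  # fine-structure constant (approximate)
first_correction = alpha / (2 * math.pi)
print(f"1 + alpha/(2*pi) = {1 + first_correction:.5f}")   # prints 1.00116
```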

  Figure 13. Things are not quite as simple as they seem in Figure 12. The electron can emit a virtual photon, and then reabsorb it, as well as interacting with the photon from the magnet. More and more complicated loops can be added, but happily in this case they have smaller and smaller influences on the interaction.

  Of course, the next step in the process is obvious. You have to consider the possibility that the electron emits two photons, one after the other, and reabsorbs them. Sure enough, when you do the calculation you get an answer a little closer to the experimental figure. But now the calculations are getting difficult, and it took two years for all the possibilities involving two of these virtual photons to be included. It wasn’t until the middle of the 1980s that the calculation involving up to three virtual photons was carried through, giving the value for the magnetic moment that we quoted at the beginning of this chapter, in very close agreement with the experiments. And, equally significantly, we can see immediately why the theory doesn’t yet give precise agreement with experiment – we have not yet included the effects of four virtual photons, or five, or still greater numbers. Happily though, the correction gets smaller for each extra photon in the calculation, and the results for three virtual photons are good enough to satisfy most people.
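
  To see why the corrections shrink, note that each extra virtual photon brings in roughly another factor of the fine-structure constant divided by π, about 1 part in 430. The precise coefficient multiplying each power is exactly what the heroic diagram-by-diagram calculations supply, and is not computed in this illustrative sketch.

```python
import math

# Why each extra virtual photon matters less: every additional photon in a
# diagram contributes roughly another factor of alpha/pi, so the successive
# corrections shrink by roughly three orders of magnitude each time.
alpha = 1 / 137.036
for n in range(1, 5):
    print(f"order {n}: (alpha/pi)^{n} is roughly {(alpha / math.pi) ** n:.1e}")
```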

  It’s just as well that the correction gets smaller for higher numbers of virtual photons – for higher ‘order’ in the calculation – because there are yet further complications that really ought to be included, if not in the calculations then at least in our mental picture of what is going on around an electron, or any other quantum entity. It’s easy to think that you understand where the energy required to make a virtual photon can come from. A single photon doesn’t carry a lot of energy, and no doubt the electron can spare some of its kinetic energy, or whatever, to make the photon. But this isn’t quite the right picture.

  There is one key ingredient of quantum mechanics that we have not yet discussed, and it is called uncertainty. In the quantum world, it turns out, it is impossible for all of the properties of a quantum entity, such as a photon or an electron, to be specified at the same time. This restriction was first worked out, in the 1920s, by Werner Heisenberg, and is known as Heisenberg’s Uncertainty Principle, or just as the Uncertainty Principle. The important point is that it has nothing to do with our clumsiness in trying to make measurements of the properties of tiny things like electrons; it is built into their very nature.4 So, for example, an electron cannot have both a precise location in space and a precise momentum (a definite direction) at the same time. It may have a very well-defined location (as when it makes a spot of light on a detector screen), but then the electron itself cannot ‘tell’ where it is going next. Or it may have very well-defined momentum, as when it is travelling along a certain trajectory, but then the electron itself does not ‘know’ exactly where it is along that trajectory.
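
  In its usual quantitative form, which the text describes only in words, the principle says that the spread in position multiplied by the spread in momentum can never be smaller than ħ/2. A small illustrative calculation (the atom-sized box is an assumed figure, not one from the book) shows how severe the trade-off is.

```python
# A rough feel for the Uncertainty Principle in its usual quantitative form,
# delta_x * delta_p >= hbar / 2.
hbar = 1.055e-34        # reduced Planck constant, J*s
m_e = 9.109e-31         # electron mass, kg

delta_x = 1e-10         # pin the electron down to roughly the size of an atom (m)
delta_p = hbar / (2 * delta_x)      # minimum spread in momentum, kg*m/s
delta_v = delta_p / m_e             # corresponding spread in speed, m/s
print(f"momentum spread >= {delta_p:.1e} kg m/s  (speed spread ~ {delta_v:.0e} m/s)")
# Squeezing the electron into an atom-sized box forces a speed uncertainty of
# several hundred kilometres per second: the fuzziness is built in.
```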

  Uncertainty also applies to the energy available to make virtual particles. According to the Special Theory of Relativity, you need a certain amount of energy, mc², to make an electron. In fact, since the quantum rules only allow the creation of electron–positron pairs, you need 2mc² to make the pair. But quantum uncertainty says that for a short enough time (a very short time!) the Universe cannot be certain that there isn’t that much energy in any tiny volume of empty space. So electron–positron pairs can be created anywhere and everywhere, provided that they almost immediately get back together and annihilate one another. The more energy you ‘borrow’, the quicker you have to pay it back.
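
  The same trade-off can be put in numbers using the rough energy–time form of the principle, ΔE × Δt ≈ ħ. The sketch below is illustrative only; it uses standard values for the electron mass, the speed of light and Planck’s constant.

```python
# How long the Universe will 'not look': with the rough energy-time form of
# the Uncertainty Principle, delta_E * delta_t ~ hbar, the time allowed for a
# borrowed electron-positron pair (energy 2 * m_e * c^2) is tiny.
hbar = 1.055e-34                    # reduced Planck constant, J*s
m_e = 9.109e-31                     # electron mass, kg
c = 2.998e8                         # speed of light, m/s

borrowed_energy = 2 * m_e * c**2    # about 1.6e-13 J (roughly a million electronvolts)
lifetime = hbar / borrowed_energy
print(f"pair can exist for about {lifetime:.1e} seconds")   # ~ 6e-22 s
# Borrow less energy (a low-energy photon, say) and the loan can run for much
# longer, which is why virtual photons are so plentiful.
```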

  This is where virtual photons actually ‘come from’. They don’t have to borrow any energy from the electrons involved in an interaction. They borrow it from empty space – from nothing at all – while, in a sense, the Universe isn’t looking. Because photons carry little energy, virtual photons can be made in profusion in this way, and last for a relatively long time. But quantum uncertainty says that during its existence, the low-energy photon can, very briefly, borrow a lot more energy from nothing at all, and turn itself into an electron–positron pair. The pair promptly gives back the energy and disappears, turning back into a photon, but the process can repeat during the lifetime of the virtual photon. And even these virtual electrons and virtual positrons can be involved in the whole business of creating photons and virtual pairs. Each ‘real’ electron is actually surrounded by a frothing cloud of virtual photons and other entities, popping in and out of existence all the time.

  In spite of this complexity, QED is so good that it can be used to calculate, with the aid of Feynman diagrams, all kinds of messy interactions involving photons being exchanged between charged particles. It is the cloud of virtual photons (and other things) around an electron which prevents it from behaving as a ‘bare’ point charge and reduces the self-interaction from infinity to a small amount responsible for the Lamb shift. But QED can do more than explain everything there is to explain about the behaviour of photons and electrons. It provides the template with which physicists have built their theories of the workings of those other forces we mentioned, the ones that operate within the nucleus.

  One of these forces is called the strong interaction, because it is the strongest of all the four forces of nature. It is an attractive force that holds the nucleus together, operating on both neutrons and protons and overcoming the electrical repulsion between all the positively charged protons in the nucleus, which tries to blow the nucleus apart. The other nuclear force is called the weak interaction, because it is weaker than the strong interaction. Very little was known about the weak interaction in the 1940s, but after the success of QED in explaining electromagnetism, in the 1950s many physicists worked on the problem of developing a deeper understanding of the force – Feynman was also involved in some of this work, as we shall see in Chapter 8. Two physicists, Abdus Salam and Steven Weinberg, independently cracked the problem in the 1960s, and shared the Nobel Prize for their efforts in 1979. Again, we won’t go into the (sometimes hairy) mathematical details; the relevant point is that the resulting theory of the weak interaction is exactly like the QED theory of electromagnetism, and can be understood in terms of Feynman diagrams involving a greater variety of particles (which is one reason why the mathematics is hairy).

  The particles that can take part in weak interactions are the proton and neutron, on one side, and the electron and an associated particle called the neutrino on the other side. Protons and neutrons are members of a family called baryons, and electrons and neutrinos are members of a family called leptons. Moving between the two families there are so-called intermediate vector bosons, which play the role in the weak interaction that photons do in electromagnetism – only there are three kinds of vector boson, one with zero charge (dubbed Z⁰), one carrying a unit of positive charge (dubbed W⁺), and one carrying a unit of negative charge (the W⁻ boson). Unlike photons, these bosons each have mass. There is one other important rule. The total number of baryons involved in an interaction always stays the same, and the total number of leptons always stays the same.

  The basic process of radioactive decay is seen at its most simple when a neutron sits on its own, outside an atom. Within a few minutes, the neutron will decay, spitting out an electron and transforming itself into a proton. Electric charge is conserved, because the positive charge on the proton and the negative charge on the electron cancel out. The number of baryons is conserved, because you start with one (a neutron) and end up with one (a proton). At first sight, it seems that the world has gained a lepton (the electron); but it turns out that in neutron decay another particle, an antineutrino, is always produced as well. So there are still zero leptons overall, since a particle and an antiparticle cancel each other out, for these purposes, in the same way that the positive charge and the negative charge cancel each other out.
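
  That bookkeeping can be set out explicitly. The little sketch below simply tallies charge, baryon number and lepton number before and after the decay; the table of quantum numbers is standard physics rather than something quoted from the book.

```python
# Bookkeeping for neutron decay, n -> p + e- + antineutrino, checking the
# conservation rules described in the text.  Quantum numbers listed are
# electric charge, baryon number, lepton number (antiparticles count as -1).
PARTICLES = {
    #              charge  baryon  lepton
    "neutron":      ( 0,     1,      0),
    "proton":       (+1,     1,      0),
    "electron":     (-1,     0,      1),
    "antineutrino": ( 0,     0,     -1),
}

def totals(names):
    """Add up (charge, baryon number, lepton number) for a list of particles."""
    return tuple(sum(PARTICLES[n][i] for n in names) for i in range(3))

before = totals(["neutron"])
after = totals(["proton", "electron", "antineutrino"])
print("before:", before)   # (0, 1, 0)
print("after :", after)    # (0, 1, 0)  -- charge, baryons and leptons all balance
```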

 
