Cosmology: A Very Short Introduction

by Peter Coles


  Each of the fermions also has a mirror-image version called its antiparticle. The antiparticle of the electron is the positron; there are also antiquarks and antineutrinos.

  The theory of QED describes interactions between the charged fermions. The next force to come under the spotlight was the weak nuclear force, which is responsible for the decay of certain radioactive materials. The weak interaction involves all kinds of fermions, including the neutrinos which, being uncharged, cannot feel the QED interaction. As in the case of electromagnetism, weak forces between particles are mediated by other particles – not photons, in this case, but massive particles called the W and Z bosons. The fact that these particles have mass (unlike the photon) is the reason why the weak nuclear force has such a short range and its effects are confined to the tiny scales of an atomic nucleus. The W and Z particles otherwise play the same role in this context as the photon does in QED: they, and the photon, are examples of what are known as gauge bosons.

  15. Building blocks of matter. The standard model of particle physics consists of a relatively small number of basic particles. There are quarks arranged in three generations, each of which contains two particles; heavy nuclear particles are made of such quarks. The leptons are arranged in a similar fashion. The quarks and the leptons are fermions; forces between them are mediated by bosons (on the right) called the photon, the gluons, and the weak W and Z bosons.

  The theory of the strong interactions responsible for holding the quarks together in hadrons is called quantum chromodynamics (or QCD), and it is built upon similar lines to QED. In QCD, there is another set of gauge bosons to mediate the force. These are called gluons; there are eight of them. In addition, QCD involves a property called ‘colour’ which plays a similar role to that of electric charge in QED.

  The drive for unification

  Is it possible, taking a cue from Maxwell’s influential unification of electricity and magnetism in the 19th century, to put QED, the weak interactions, and QCD together in a single overarching theory?

  A theory that unifies the electromagnetic force with the weak nuclear force was developed around 1970 by Glashow, Salam, and Weinberg. Called the electroweak theory, this represents these two distinct forces as being the low-energy manifestations of a single force. When particles have low energy, and are moving slowly, they do feel the different nature of the weak and electromagnetic forces. Physicists say that at high energies there is a symmetry between the electromagnetic and weak interactions: electromagnetism and the weak force appear different to us at low energies because this symmetry is broken. Imagine a pencil standing on its end. When vertical it looks the same from all directions. A random air movement or passing lorry will cause it to topple: it could fall in any direction with equal probability. But when it falls, it falls some particular way, picking out some specific direction. In the same way, the difference between electromagnetism and weak nuclear forces could be just happenstance, a chance consequence of how the high-energy symmetry was broken in our world.

  The electroweak and strong interactions coexist in a combined theory of the fundamental interactions called the standard model. It’s an amazing success that all the principal particles predicted by the standard model have now been discovered, with only one exception. (A special boson, called the Higgs, is required to explain the masses in the standard model and it has so far defied detection.) This model, however, does not provide a unification of all three interactions in the same way that the electroweak theory does for two of them. Physicists hope eventually to unify all three of the forces discussed so far in a single theory, which would be known as a Grand Unified Theory, or GUT. There are many contenders for such a theory, but it is not known which (if any) is correct.

  One idea associated with unified theories is supersymmetry. According to this hypothesis, there is an underlying symmetry between the fermions and the bosons, two families which are treated separately in the standard model. In supersymmetric theories, every fermion has a boson ‘partner’ and vice versa. Quarks have bosonic partners called squarks, neutrinos have sneutrinos, and so on. The photon, a boson, has a fermion partner called the photino; the partner of the Higgs boson is the Higgsino. One of the interesting possibilities of supersymmetry is that at least one of the myriad particles expected to reveal themselves at very high energies might be stable. Could one of these particles make up the dark matter that seems to pervade the Universe?

  Baryogenesis

  It is clear that the idea of symmetry plays an important role in particle theory. For example, the equations that describe electromagnetic interactions are symmetric when it comes to electrical charge. If one changed all the positive charges into negative charges, and vice versa, Maxwell’s equations that describe electromagnetism would still be correct. To put it another way, the choice of assigning negative charge to electrons and positive charges to protons is arbitrary: it could have been done the other way around, and nothing would be different in the theory. This symmetry translates into the existence of a conservation law for charge; electrical charge can be neither created nor destroyed. It seems to make sense that our Universe should not have a net electrical charge: there should be just as much positive charge as negative charge, so the net charge is expected to be zero. This seems to be the case.

  The laws of physics also seem to fail to distinguish between matter and anti-matter. But we know that ordinary matter is much more common than anti-matter. In particular, we know that the number of baryons (protons and neutrons) exceeds the number of anti-baryons. Baryons actually carry an extra kind of ‘charge’ called their baryon number B, so the Universe carries a net baryon number. Like the net electric charge, one would have thought that B should be a conserved quantity. But if B is conserved and is not zero now, there seems to be no avoiding the conclusion that it cannot have been zero at any time in the past. The problem of generating this asymmetry – the problem of baryogenesis – perplexed scientists working on the Big Bang theory for some considerable time.

  In 1967 the Russian physicist Andrei Sakharov was the first to work out under what conditions there could actually be a net baryon asymmetry and to show that, in fact, baryon number need not be a conserved quantity. He was able to produce an explanation in which the laws of physics are indeed baryon-symmetric, and at early times the Universe had no net baryon number, but as it cooled a gradual preference for baryons over anti-baryons emerged. His work was astonishingly prescient, because it was performed long before any unified theories of particle physics were constructed. He was able to suggest a mechanism which could produce a situation in which for every thousand million anti-baryons in the early Universe, there were a thousand million and one baryons. When a baryon and an anti-baryon collide, they annihilate in a puff of electromagnetic radiation. In Sakharov’s model, most of the baryons would encounter anti-baryons and be annihilated in this way. We would eventually be left with a universe containing thousands of millions of photons for every baryon that survives. This is actually the case in our Universe: the cosmic microwave background radiation contains billions of photons for every baryon. The explanation of this is a pleasing example of the interface between particle physics and cosmology, but it is by no means the most dramatic. In the next chapter, I will discuss the idea of cosmic inflation, according to which subatomic physics is thought to affect the entire geometry of the Universe.
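
  The arithmetic behind that one-in-a-thousand-million excess is easy to check. Here is a minimal sketch using the illustrative numbers from the paragraph above; the assumption of two photons per annihilation is a simplification:

```python
# Illustrative bookkeeping for Sakharov's baryon asymmetry.
anti_baryons = 1_000_000_000   # anti-baryons in some early-Universe volume
baryons = anti_baryons + 1     # a thousand million and one baryons

# Every anti-baryon finds a baryon and annihilates; assume each
# annihilation releases two photons (a simplification).
annihilations = min(baryons, anti_baryons)
photons = 2 * annihilations
survivors = baryons - annihilations

print(f"Surviving baryons: {survivors}")                # 1
print(f"Photons per surviving baryon: {photons / survivors:,.0f}")
```

  The output – thousands of millions of photons per surviving baryon – is the kind of ratio observed in the cosmic microwave background.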

  Chapter 6

  What’s the matter with the Universe?

  Is the Universe finite or infinite? Will the Big Bang end in a Big Crunch? Is space really curved? How much matter is there in the Universe? And what form does this matter take? One would certainly hope that a successful scientific cosmology could provide answers to questions as basic as these. The answers depend crucially upon a number known as Ω (Omega). Astronomers have long grappled with the problem of how to measure Ω using observations of the Universe around us, with only limited success. Dramatic progress in the development and application of new technology now suggests the possibility that the value of Ω may finally be pinned down within the next few years. But there is a sting in the tail. The most recent observations suggest that Ω does not, after all, hold all the answers. The issue of Ω is, however, not entirely an observational one, because the precise value that this quantity takes holds important clues about the very early stages of the Big Bang and about the structure of our Universe on very large scales. So why is Ω so important and its value so elusive?

  The quest for Ω

  To understand the role of Ω in cosmology, it is first necessary to remember how Einstein’s general theory of relativity relates geometrical properties of space-time (such as its curvature and expansion) to the physical properties of matter (such as its density and state of motion). As I explained in Chapter 3, the application of this complicated theory in cosmology is greatly simplified by the introduction of the Cosmological Principle. In the end, the evolution of the entire Universe is governed by one relatively simple equation, now known as the Friedmann equation.
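
  For reference, one standard form of the Friedmann equation is the following (a minimal statement, with a(t) the cosmic scale factor describing the expansion, ρ the mean density of matter, k the curvature constant, G Newton’s gravitational constant, and c the speed of light):

$$\left(\frac{\dot{a}}{a}\right)^{2} = \frac{8\pi G}{3}\,\rho - \frac{kc^{2}}{a^{2}}$$

  The left-hand side is the square of the expansion rate; the terms on the right involve the density and the curvature, which is what allows the equation to be read as a statement about energy, as the next paragraph explains.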

  The Friedmann equation can be thought of as expressing the law of conservation of energy for the Universe as a whole. Energy comes in many different forms throughout nature, but only two relatively familiar forms are involved here. A moving object, such as a bullet, carries a type of energy called kinetic energy, which depends upon its mass and velocity. Obviously, since the Universe is expanding and all the galaxies are rushing apart, the Universe contains a great deal of kinetic energy. The other form of energy is potential energy, which is a little more difficult to understand. Whenever an object is moving and interacting through some kind of force, it can gain or lose potential energy. For example, imagine a weight tied on the end of a dangling piece of string. This makes a simple pendulum. If I raise the weight, it gains potential energy because I have to work against gravity to lift it. If I then release the weight, the pendulum begins to swing. The weight picks up kinetic energy and, as it drops, loses potential energy. Energy is transferred between the two types in this process, but the total energy of the system is conserved. The weight will swing to the bottom of its arc, where it has no potential energy, but it will still be moving. It will in fact describe a complete cycle, returning eventually to the top of its arc, at which point it stops (instantaneously) before starting another swing. At the top, it has no kinetic energy but maximum potential energy. Wherever the weight is, the total energy of the system is constant. This is the law of conservation of energy.
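
  In symbols, the pendulum’s bookkeeping is the elementary statement that (with m the mass of the weight, v its speed, h its height above the bottom of the arc, and g the acceleration due to gravity):

$$E = \underbrace{\tfrac{1}{2}mv^{2}}_{\text{kinetic}} + \underbrace{mgh}_{\text{potential}} = \text{constant}$$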

  In cosmological terms, the kinetic energy depends crucially on the expansion rate or, in other words, upon the Hubble constant H₀. The potential energy depends on the density of the Universe, i.e. upon how much matter there is per unit volume. Unfortunately, this quantity is not known at all accurately: it is even less certain than the value of the Hubble constant. If we knew the mean density of matter and the value of H₀, however, we could calculate the total energy of the Universe. This would have to be constant in time, in accordance with the law of conservation of energy (or, in this context, the Friedmann equation).

  Setting aside the technical difficulties that arise when General Relativity is involved, we can now discuss the evolution of the Universe in broad terms using familiar examples from high-school physics. For instance, consider the problem of launching a vehicle from Earth into space. Here the mass responsible for the gravitational potential energy of the vehicle is the Earth. The kinetic energy of the vehicle is determined by the power of the rocket we use. If we give the vehicle only a modest rocket, so that it doesn’t move very quickly at launch, then the kinetic energy is small and may be insufficient for the vehicle to escape from the attraction of the Earth. Consequently, the vehicle goes up some way and then comes back down again. In terms of energy, what happens is that the rocket uses up its kinetic energy, given expensively at launch, to pay the price in terms of potential energy for its increased height. If we used a bigger rocket, it would go higher before crashing down to the ground. Eventually, we would find a rocket big enough to supply the vehicle with enough energy for it to buy its way completely out of the gravitational field of the Earth. The critical launch velocity here is usually called the escape velocity: above the escape velocity, the rocket keeps on going for ever; below it, the rocket comes crashing down again.
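
  As a concrete check on this picture, here is a minimal sketch that computes the Earth’s escape velocity from the standard formula v_esc = √(2GM/r), using textbook values for the constants:

```python
import math

# Escape velocity from a body of mass M, starting at distance r from
# its centre: v_esc = sqrt(2 * G * M / r).
G = 6.674e-11        # Newton's gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # mass of the Earth, kg
r_earth = 6.371e6    # mean radius of the Earth, m

v_esc = math.sqrt(2 * G * M_earth / r_earth)
print(f"Escape velocity: {v_esc / 1000:.1f} km/s")  # about 11.2 km/s
```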

  In the cosmological setting the picture is similar, but the critical quantity is not the velocity of the rocket (which is analogous to the Hubble constant and is therefore known, at least in principle), but the mass of the Earth (or, in the cosmological case, the density of matter). It is therefore most useful to think about a critical density of matter, rather than a critical velocity. If the real density of matter exceeds the critical density, then the Universe will eventually recollapse: its gravitational energy is sufficient to slow down, stop, and then reverse the expansion. If the density is lower than this critical value, the Universe will carry on expanding forever. The critical density turns out to be extremely small. It also depends on H₀, but is on the order of one hydrogen atom per cubic metre. Most modern experimental physicists would consider material with such a low density to be a very good example of a vacuum!
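
  That estimate is easy to reproduce. A minimal sketch, assuming a round illustrative value of H₀ = 70 km/s per megaparsec (the true value is uncertain) and the standard expression ρ_c = 3H₀²/8πG, which follows from the Friedmann equation:

```python
import math

# Critical density of the Universe: rho_c = 3 * H0^2 / (8 * pi * G).
# H0 = 70 km/s/Mpc is an assumed round value, for illustration only.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.086e22           # one megaparsec in metres
H0 = 70 * 1000 / Mpc     # Hubble constant converted to s^-1

rho_c = 3 * H0**2 / (8 * math.pi * G)
m_hydrogen = 1.67e-27    # mass of a hydrogen atom, kg

print(f"Critical density: {rho_c:.2e} kg/m^3")      # ~9e-27 kg/m^3
print(f"About {rho_c / m_hydrogen:.1f} hydrogen atoms per cubic metre")
```

  With these numbers the answer comes out at a few hydrogen atoms per cubic metre, the same order of magnitude as the figure quoted above.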

  16. The Friedmann models. As well as having various options for curved space, the Friedmann models can also behave in different ways as they evolve with time. If Ω is greater than one then the expansion will eventually stop and the Universe will recollapse. If it is less than one the Universe will expand forever. Poised between these is the flat Universe with Ω finely tuned to be exactly unity.

  And now, at last, we can introduce the quantity Ω: it is simply the ratio of the actual density of matter in the Universe to the critical value that marks the dividing line between eternal expansion and ultimate recollapse. Ω = 1 marks that dividing line: Ω < 1 means an ever-expanding Universe, and Ω > 1 indicates one that recollapses in the future to a Big Crunch. Whatever the precise value of Ω, however, the effect of matter is always to slow down the expansion of the Universe, so that these models always predict a cosmic deceleration – but more of that shortly. The long-term viability of the cosmological expansion is not the only issue whose resolution depends on Ω, because these arguments based on simple ideas of energy from Newtonian physics are not the whole story. In Einstein’s general theory of relativity, the total energy-density of material determines the global curvature of space, as I described in Chapter 3. A space of negative global curvature results in models with Ω less than 1. A model with negative curvature is called an open universe model. A positively curved (closed) model pertains if Ω exceeds unity. In between, there is the classic British compromise universe, poised between eternal expansion and eventual recollapse, which has Ω exactly equal to unity. This model also has a flat geometry in which Euclid’s theorems all apply. What a relief it would be if the Universe chose this simplest of all options!
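
  In compact notation, with ρ the actual mean density and ρ_c the critical density estimated in the sketch above:

$$\Omega = \frac{\rho}{\rho_{c}}$$

  so that Ω < 1 corresponds to an open (negatively curved) universe, Ω = 1 to a flat one, and Ω > 1 to a closed universe destined to recollapse.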

  The quantity Ω determines both the geometry of space on cosmological scales and the eventual fate of the Universe, but it is important to stress that the value of Ω is not at all predicted in the standard Big Bang model. It may seem a fairly useless kind of theory that is incapable of answering the basic questions that revolve around Ω, but in fact that is an unfair criticism. As I have explained, the Big Bang is a model, rather than a theory. As a model, it is mathematically self-consistent and can be compared with observations, but it is not complete. In this context this means that Ω is a ‘free’ parameter, in much the same way as the Hubble constant H₀. To put it another way, the mathematical equations of the Big Bang theory describe the evolution of the Universe, but in order to calculate a specific example we need to supply a set of initial conditions to act as a starting point. Since the mathematics on which the model is based breaks down at the very beginning, we have no way of fixing the initial conditions theoretically. The Friedmann equation is well defined whatever the values of Ω and H₀, but our Universe happens to have been set up with one particular numerical combination of these quantities. All we can do, therefore, is to use observational data to make inferences about the cosmological parameters: they cannot, at least with the knowledge presently available and within the framework of the standard Big Bang, be deduced by reason alone. On the other hand, there is the opportunity to use present-day cosmological observations to learn about the very early Universe.

  The search for two numbers

  The importance of determining the cosmological parameters was recognized early on in the history of cosmology. Indeed, the distinguished astronomer Allan Sandage (formerly Hubble’s research student) once wrote a paper entitled ‘Cosmology: The Search for Two Numbers’. Two decades later, we still don’t know the two numbers. To understand why, we need to look at the different kinds of observation that can inform us about Ω, and at the kinds of results they have produced. There are many different types of observation, but they can be grouped into four main categories.

 
