Higgs: The Invention and Discovery of the 'God Particle'



  Gell-Mann, Fritzsch, and Bardeen now worked together to explore the options. They wanted to see if it was possible to reconcile the results on neutral pion decay with a variation of the original model of fractionally charged quarks.

  As Han and Nambu had suggested, what they needed was a new quantum number. Gell-Mann decided to call this new quantum number ‘colour’. In this new scheme, quarks would possess three possible colour quantum numbers: blue, red, and green.*

  Baryons would be constituted from three quarks of different colour, such that their total ‘colour charge’ is zero and their product is ‘white’. For example, a proton could be thought to consist of a blue up-quark, a red up-quark and a green down-quark (u_b u_r d_g). A neutron would consist of a blue up-quark, a red down-quark and a green down-quark (u_b d_r d_g). The mesons, such as pions and kaons, could be thought to consist of coloured quarks and their anti-coloured anti-quarks, such that the total colour charge is zero and the particles are also ‘white’.

  It was a neat solution. The different quark colours provided the extra degree of freedom and meant that there was no violation of the Pauli exclusion principle. Tripling the number of different types of quarks meant that the decay rate of the neutral pion was now accurately predicted. And nobody could expect to see the colour charge revealed in experiments as this was a property of quarks and the quarks are ‘confined’ inside white-coloured hadrons. Colour could not be seen because nature demands that all observable particles are white.

  ‘We gradually saw that that [colour] variable was going to do everything for us!’ Gell-Mann explained. ‘It fixed the statistics, and it could do that without involving us in crazy new particles. Then we realized that it could also fix the dynamics, because we could build an SU(3) gauge theory, a Yang–Mills theory, on it.’10

  By September 1972, Gell-Mann and Fritzsch had elaborated a model consisting of three fractionally charged quarks which could take three ‘flavours’ – up, down, and strange – and three colours, bound together by a system of eight coloured gluons, the carriers of the strong ‘colour force’. Gell-Mann presented the model at a conference on high-energy physics held to mark the opening of the National Accelerator Laboratory in Chicago.

  But he was already beginning to have second thoughts. Once more troubled particularly by the status of the quarks and the mechanism by which they are permanently confined, Gell-Mann gave the theory a somewhat muted fanfare. He mentioned a variation of the model featuring a single gluon. He emphasized that the quarks and gluons were ‘fictitious’.

  By the time he and Fritzsch came to write up the lecture, they had been overtaken by their doubts. ‘In preparing the written version,’ he later wrote, ‘unfortunately, we were troubled by the doubts just mentioned, and we retreated into technical matters.’11

  This failure of courage is not so difficult to understand. If the coloured quarks really were permanently confined inside ‘white’ baryons and mesons, such that their fractional electric charges and their colour charges can never be seen, then it could be argued that all speculation about their properties was inherently idle.

  The theorists were now very close to a grand synthesis: a combination of quantum field theories based on an SU(3)×SU(2)×U(1) symmetry – what would become known as the Standard Model. This synthesis would set the theoretical stage for experimental particle physics for the next thirty years. The hesitancy was simply a deep breath before the plunge.

  In fact, tantalizing evidence for the existence of quarks had emerged just a few years earlier from high-energy collisions involving electrons and protons. The results of experiments conducted at the Stanford Linear Accelerator Center (SLAC) in California hinted strongly that the proton consists of point-like constituents.

  But it was not clear that these point-like constituents were quarks. Even more puzzling, the results also suggested that, far from being held in a tight grip inside the proton, the constituents behaved as though they were entirely free to roam around inside their larger hosts. How was this meant to be compatible with the idea of quark confinement?

  The theorists’ work was almost complete. The Standard Model was almost in place. It was now the turn of the experimentalists.

  PART II

  Discovery

  6

  Alternating Neutral Currents

  ____________

  In which protons and neutrons are shown to have an internal structure and the predicted neutral currents of the weak nuclear force are found, and then lost, and then found again

  Cosmic rays produce some of the highest-energy particle collisions ever observed, much higher in some instances than can be achieved even with today’s particle colliders.* But the origin of the rays is mysterious, and the particles and energies involved in triggering events are unknown. Successful cosmic ray experiments rely on chance detection of new particles or new processes, detection that can prove difficult to replicate.

  Despite the success of cosmic ray experiments in uncovering the positron, the muon, pions, and kaons in the two decades spanning the 1930s to the early 1950s, further progress in particle physics had to await the development of ever-more powerful man-made particle accelerators.

  The first accelerators were built in the late 1920s. These were linear accelerators, producing acceleration of electrons or protons by passing them through a linear sequence of oscillating electric fields. A related high-voltage machine was used in 1932 by John Cockcroft and Ernest Walton to produce high-speed protons which were then fired at stationary targets, transmuting the target nuclei in the first artificially induced nuclear reactions.*

  American physicist Ernest Lawrence invented an alternative accelerator design in 1929. This involved using a magnet to confine a stream of protons to move in a spiral whilst accelerating them to higher and higher speeds using an alternating electric field. He called it the cyclotron.

  Lawrence was also something of a showman, with grand ambitions. There followed a succession of larger and larger machines, culminating in 1939 with a design for a gargantuan super-cyclotron with a magnet weighing two thousand tons. Lawrence estimated that this would deliver proton energies of 100 million electron volts (100 MeV), on the threshold of the energies required for protons to penetrate the nucleus. Lawrence approached the Rockefeller Foundation with requests for support. His pitch was greatly strengthened when, in the middle of a game of tennis, he was informed that he had just won the 1939 Nobel Prize for physics.

  With the outbreak of war, Lawrence’s cyclotron technology was diverted to the problem of separating quantities of uranium-235 sufficient to produce the atom bomb that was dropped on Hiroshima. The electromagnetic isotope separation facility Y-12, constructed at Oak Ridge in eastern Tennessee, was based on Lawrence’s cyclotron design.*

  The magnets used at Y-12 were 250 feet long and weighed between three thousand and ten thousand tons. Their construction exhausted America’s supply of copper, and the US Treasury had to loan the Manhattan Project fifteen thousand tons of silver to complete the windings. The magnets required as much power as a large city and were so strong that workers could feel the pull of magnetic force on the nails in their shoes. Women straying close to the magnets would occasionally lose their hairpins. Pipes were pulled from the walls. Thirteen thousand people were employed to run the plant, which began operation in November 1943.

  This was the first example of what would become known as ‘big science’.

  The cyclotron used a constant magnetic field strength and a fixed-frequency electric field, and so had an inherent particle-energy limit of about 1000 MeV (or 1 giga electron volt, GeV). To access yet higher energies, it is necessary to drive the accelerated particles in bunches around a circular track along which both the magnetic and electric fields are synchronously varied. Early examples of such synchrotrons included the Bevatron, a 6.3 GeV accelerator built in 1950 at the Radiation Laboratory in Berkeley, California, and the Cosmotron, a 3.3 GeV machine built in 1953 at Brookhaven National Laboratory in New York.
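  The cyclotron’s ceiling follows from its resonance condition, the standard result f = qB/2πm (not spelled out in the text): the fixed-frequency field stays in step with the protons only while their mass is effectively constant, and as they approach the speed of light their relativistic mass grows and they fall out of step. A back-of-envelope sketch of the effect:

```python
# Why the classical cyclotron tops out: the resonance condition fixes
# the orbital frequency f = qB / (2*pi*m), valid only while the mass m
# stays constant. Relativistic protons effectively have mass gamma*m
# and orbit more slowly, drifting out of phase with the fixed-frequency
# field. Standard textbook estimate, not a calculation from the book.
import math

Q = 1.602e-19   # proton charge, coulombs
M = 1.673e-27   # proton rest mass, kg

def cyclotron_frequency(b_tesla, gamma=1.0):
    """Orbital frequency in hertz; gamma > 1 models the relativistic lag."""
    return Q * b_tesla / (2.0 * math.pi * gamma * M)

f_rest = cyclotron_frequency(1.5)             # slow protons in a 1.5 T field
f_fast = cyclotron_frequency(1.5, gamma=1.2)  # ~188 MeV of kinetic energy
print(f"slow: {f_rest/1e6:.1f} MHz, fast: {f_fast/1e6:.1f} MHz")
```

The synchrotron sidesteps the problem by varying the field frequency (and the magnetic field) in step with the accelerating bunches.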

  Other countries began to get in on the act. On 29 September 1954, eleven western European countries ratified a convention to establish the Conseil Européen pour la Recherche Nucléaire (the European Council for Nuclear Research, or CERN).* Three years later a 10 GeV proton synchrotron was inaugurated by the Soviet Union’s Joint Institute for Nuclear Research in Dubna, 120 kilometres north of Moscow. CERN soon followed in 1959 with a 26 GeV proton synchrotron in Geneva.

  Funding for high-energy physics in America was greatly increased as the race for Cold War technological supremacy reached white heat in the 1960s. The Alternating Gradient Synchrotron was constructed at Brookhaven in 1960, capable of operating at 33 GeV. It seemed evident that the future development of particle physics lay in the hands of synchrotron designers, pushing the technology to ever greater collision energies.

  So, when construction of a new $114-million 20 GeV linear electron accelerator commenced in 1962 at Stanford University in California, many particle physicists dismissed it as an irrelevant machine, capable only of second-rate experiments.

  But some physicists recognized that the emphasis on ever higher-energy hadron collisions had come at the cost of subtlety. The synchrotrons were used to accelerate protons and smash them into stationary targets, including other protons. As Richard Feynman explained, proton–proton collisions were ‘…like smashing two pocket watches together to see how they are put together.’1

  The Stanford Linear Accelerator Center (SLAC) was built on 400 acres of Stanford University grounds about 60 kilometres south of San Francisco. It reached its 20 GeV design beam energy for the first time in 1967. The three-kilometre accelerator is linear, rather than circular, because bending electron beams into a circle using intense magnetic fields results in dramatic energy loss through emission of X-ray synchrotron radiation.
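  The scale of that penalty can be sketched with the standard result (not given in the text) that the energy radiated per turn grows as the fourth power of the Lorentz factor γ = E/mc². At equal beam energy and bending radius, electrons, with a rest mass nearly 2,000 times smaller than a proton’s, radiate roughly ten trillion times more:

```python
# Rough comparison of synchrotron radiation loss per turn for electrons
# versus protons at the same beam energy and bending radius. Uses the
# standard scaling dE-per-turn ~ gamma^4 / radius (classical result,
# not from the book itself -- an illustrative back-of-envelope sketch).

ELECTRON_MASS_MEV = 0.511   # electron rest energy, MeV
PROTON_MASS_MEV = 938.3     # proton rest energy, MeV

def gamma(beam_energy_mev, rest_energy_mev):
    """Lorentz factor for a given beam energy."""
    return beam_energy_mev / rest_energy_mev

def relative_loss(beam_energy_mev):
    """Electron-to-proton radiation-loss ratio at equal energy and
    radius: (gamma_e / gamma_p)^4 = (m_p / m_e)^4."""
    ge = gamma(beam_energy_mev, ELECTRON_MASS_MEV)
    gp = gamma(beam_energy_mev, PROTON_MASS_MEV)
    return (ge / gp) ** 4

ratio = relative_loss(20_000.0)  # 20 GeV, as at SLAC
print(f"electron/proton loss ratio: {ratio:.2e}")  # ~1.1e13
```

Hence proton machines could be circular synchrotrons while a 20 GeV electron machine had to be built as a three-kilometre straight line.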

  When an electron collides with a proton, three different types of interaction may result. The electron may bounce relatively harmlessly off the proton, exchanging a virtual photon, changing the electron’s velocity and direction but leaving the particles intact. This so-called ‘elastic’ scattering yields electrons with relatively high scattered energies clustered around a peak.

  In a second type of interaction, the collision may exchange a virtual photon which kicks the proton into one or more excited energy states. The scattered electron comes away with less energy as a result, and a chart of scattered energy vs. yield shows a series of peaks or ‘resonances’ corresponding to different excited states of the proton. Such scattering is ‘inelastic’: although both electron and proton emerge intact from the interaction, new particles (such as pions) may be created as the excited states decay. In essence, the energy of the collision, and of the exchanged virtual photon, has gone into the creation of new particles.

  The third type of interaction is called ‘deep inelastic’ scattering, in which much of the energy of the electron and the exchanged virtual photon goes into destroying the proton completely. A spray of different hadrons results and the scattered electron recoils, now with considerably less energy.

  Studies of deep inelastic scattering at relatively small angles from a liquid hydrogen target began at SLAC in September 1967. They were carried out by a small experimental group including MIT physicists Jerome Friedman and Henry Kendall and Canadian-born SLAC physicist Richard Taylor.

  They focused their attention on the behaviour of something called the ‘structure function’ as a function of the difference between the initial electron energy and the scattered electron energy. This difference is related to the energy lost by the electron in the collision or the energy of the virtual photon that is exchanged. They saw that as the virtual photon energy was increased, the structure function showed marked peaks corresponding to the expected proton resonances. However, as the energy increased further, these peaks gave way to a broad, featureless plateau that fell away gradually as it extended well into the range of deep inelastic collisions.
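  In modern notation (which the text does not use), that energy difference is the energy transfer ν = E − E′, and the plateau behaviour is conventionally expressed through the Bjorken scaling variable x = Q²/2Mν, where Q² is the squared four-momentum carried by the virtual photon. A small sketch of the standard kinematics, with made-up example numbers rather than SLAC data:

```python
# Illustrative kinematics for deep inelastic electron-proton scattering.
# The "difference between initial and scattered electron energy" is the
# energy transfer nu = E - E'; the Bjorken scaling variable is
# x = Q^2 / (2 * M * nu). Standard DIS definitions, not from the book;
# the example energies and angle below are invented for illustration.
import math

PROTON_MASS_GEV = 0.938

def q_squared(e_in, e_out, theta_rad):
    """Q^2 for an (approximately massless) electron scattered by theta."""
    return 4.0 * e_in * e_out * math.sin(theta_rad / 2.0) ** 2

def bjorken_x(e_in, e_out, theta_rad):
    nu = e_in - e_out          # energy transferred to the proton
    q2 = q_squared(e_in, e_out, theta_rad)
    return q2 / (2.0 * PROTON_MASS_GEV * nu)

# e.g. a 20 GeV electron scattered to 8 GeV at 10 degrees:
x = bjorken_x(20.0, 8.0, math.radians(10.0))
print(f"Bjorken x = {x:.2f}")
```

What the MIT-SLAC group observed, in these terms, was that the structure function depended on x alone rather than on the beam energy separately – the ‘scaling’ behaviour Bjorken had anticipated.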

  Rather curiously, the shape of the function appeared to be largely independent of the initial electron energy. The experimentalists didn’t understand why.

  But American theorist James Bjorken did. Bjorken had obtained his doctorate at Stanford University in 1959 and had recently returned to California after a spell at the Niels Bohr Institute in Copenhagen. Just before SLAC was completed, he had developed a method for predicting the outcomes of electron–proton collisions, based on a rather esoteric application of quantum field theory.

  In this model, it was possible to think of the proton in two distinct ways. It could be considered as a solid ‘ball’ of material substance, with mass and charge distributed evenly. Or it could be thought of as a region of largely empty space containing discrete, point-like charged constituents, much as the atom had been shown in 1911 to be empty space containing a tiny, positively charged nucleus.

  These two very different ways of thinking about the structure of the proton would produce very different scattering results. Bjorken had understood that electrons of sufficient energy could penetrate the interior of a ‘composite’ proton and collide with its point-like constituents. In the region of deep inelastic collisions, the electrons would be scattered in greater numbers and at larger angles, and the structure function would behave in the way now being revealed by the experiments.

  Bjorken had drawn back from declaring that such point-like constituents might be quarks. The quark model was still treated with derision by the physics community and there were alternative theories available that were better regarded. Arguments over the interpretation of the data broke out even within the group of MIT-SLAC physicists. Consequently, the physicists did not rush to declare the results as evidence for the existence of quarks.

  And there the matter rested for another ten months.

  Richard Feynman visited SLAC in August 1968. After working on the weak nuclear force and aspects of quantum gravity, he had decided to turn his attention back to high-energy physics. His sister Joan lived in a house near the SLAC facility, and during visits to her he would take the opportunity to ‘snoop around’ at SLAC to find out what was happening in the field.

  He heard about the work of the MIT-SLAC group on deep inelastic scattering. A second round of experiments was about to begin, but the physicists were still puzzling over the interpretation of the data from the previous year.

  Bjorken was out of town, but his new postdoctoral research associate Emmanuel Paschos told Feynman about the behaviour of the structure function and asked him what he thought. When Feynman saw the data he declared: ‘All my life I’ve looked for an experiment like this, one that can test a field theory of the strong force!’2 He figured it out that night in his motel room.

  He believed that the behaviour the MIT-SLAC physicists had seen was related to the distribution of momentum of point-like constituents deep inside the proton. Feynman called these constituents ‘partons’ – literally ‘parts of protons’ – to avoid getting entangled with any specific model for the interior of the proton.*

  ‘I’ve really got something to show youse guys,’ Feynman declared to Friedman and Kendall the next morning. ‘I figgered it all out in my motel room last night!’3 Bjorken had already arrived at most of the conclusions that Feynman now drew, and Feynman acknowledged his priority. But, once again, Feynman was describing the physics in a far simpler, yet richer, more visual way. When he returned to SLAC in October 1968 to deliver a lecture on the parton model, it was like setting a fire. Nothing breeds confidence in a bold idea like its enthusiastic advocacy by a Nobel Laureate.

  Were partons actually quarks? Feynman didn’t know and didn’t care, but Bjorken and Paschos soon had a detailed model of partons based on a triplet of quarks.

  Further studies of deep inelastic scattering of electrons from neutrons at SLAC and results from studies of the scattering of neutrinos from protons at CERN provided further supporting evidence. By mid-1973, quarks had officially ‘arrived’. They might have been conceived partly in jest as a strange quirk of nature, but they had now taken a decisive step towards acceptance as real constituents of the hadrons.

  Some important questions remained unanswered. The behaviour of the structure function could only be properly understood if the quarks were assumed to be individually bouncing around inside the proton or neutron, completely independently of each other. And yet, the 20-GeV electrons had struck the individual quarks, resulting in the destruction of the target nucleon hosts, so how come no free quarks had been liberated?

  It didn’t make sense. If the strong force kept the quarks so tightly bound inside the nucleons that they were forever ‘confined’ and could never be seen, how could it be that inside the nucleons they were moving about apparently so freely?

  ____________

  By the end of 1971, a fully fledged quantum field theory of electro-weak interactions had been worked out, and the theorists’ confidence was growing. Symmetry-breaking using the Higgs mechanism could explain the distinction between electromagnetism and the weak nuclear force, which would otherwise be the same universal electro-weak force. Symmetry-breaking had left the photon massless whilst giving mass to the carriers of the weak force. The weak force demanded two charged force carriers, the W+ and W− particles, and a neutral force carrier, the Z0. If the Z0 existed then interactions involving its exchange could be expected to be manifested in the form of weak neutral currents.

 
