Lawrence Krauss - The Greatest Story Ever Told--So Far



  electromagnetic and weak interactions got stronger.

  It didn’t take a rocket scientist to wonder whether the strength of

  the three different interactions might become identical at some

  small-distance scale. When they did the calculations, they found

  (with the accuracy with which the interactions were then measured)

  that such a unification looked possible, but only if the scale of

  unification was about fifteen orders of magnitude smaller than the

  size of the proton.
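  The logic of that calculation can be written compactly. To a first (one-loop) approximation, the inverse strength of each interaction drifts logarithmically with energy (equivalently, with the inverse of distance), at a rate set by a coefficient that depends only on which particles exist in the theory. A schematic sketch of the standard form, with the reference scale taken to be wherever the couplings are measured:

```latex
% One-loop running of the three gauge couplings (schematic).
% \alpha_i = g_i^2 / 4\pi for i = 1, 2, 3; the coefficients b_i depend
% only on the particle content of the theory; \mu_0 is the scale at
% which the couplings are measured.
\frac{d\,\alpha_i^{-1}(\mu)}{d\ln\mu} \;=\; -\frac{b_i}{2\pi}
\qquad\Longrightarrow\qquad
\alpha_i^{-1}(\mu) \;=\; \alpha_i^{-1}(\mu_0) \;-\; \frac{b_i}{2\pi}\,\ln\frac{\mu}{\mu_0}.
% Unification means all three \alpha_i^{-1} meet at a single scale M_X.
```

  Because the drift is only logarithmic, the couplings have to be followed over an enormous range of scales before they approach one another, which is why the unification scale comes out so far beyond anything directly accessible.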

  This was good news if the unified theory was the one proposed by

  Georgi and Glashow—because if all the particles we observe in

  nature got unified in this new large gauge group, then new gauge

  bosons would exist that produce transitions between quarks (which

  make up protons and neutrons), and electrons and neutrinos. That

  would mean protons could decay into other lighter particles. As

  Glashow put it, “Diamonds aren’t forever.”

  Even then it was known that protons must have an incredibly

  long lifetime. Not just because we still exist almost 14 billion years

  after the Big Bang, but because we don’t all die of cancer as children.

  If protons decayed with an average lifetime smaller than about a

  billion billion years, then enough protons would still decay in our

  bodies during our childhood to produce enough radiation to kill us.

  Remember that in quantum mechanics, processes are probabilistic.

  If an average proton lives a billion billion years, then if one has a

  billion billion protons, on average one will decay each year. A lot

  more than a billion billion protons are in our bodies.
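  To put rough numbers on that argument: for a lifetime far longer than the time we watch, the expected number of decays in a sample of N protons over a time t is simply N times t divided by the lifetime. A back-of-the-envelope sketch in code (the figure of roughly 2 × 10^28 protons in a body is my own estimate, treating a person as about 70 kilograms of water):

```python
# Expected proton decays per year inside a human body, for a given
# proton lifetime. For a lifetime tau much longer than the observation
# time t, the expected number of decays is roughly N * t / tau.

AVOGADRO = 6.022e23                                # molecules per mole
PROTONS_PER_GRAM_WATER = 10 / 18 * AVOGADRO        # H2O: 10 protons, 18 g/mol

def expected_decays(n_protons, lifetime_years, years=1.0):
    """Mean number of decays over `years`, valid when lifetime >> years."""
    return n_protons * years / lifetime_years

# Rough assumption: a 70 kg person modeled as 70,000 g of water.
body_protons = 70_000 * PROTONS_PER_GRAM_WATER     # about 2.3e28 protons

for lifetime in (1e18, 1e30, 1e33):                # lifetimes in years
    print(f"lifetime {lifetime:.0e} yr -> "
          f"{expected_decays(body_protons, lifetime):.1e} decays per year")
```

  A lifetime of a billion billion (10^18) years would mean tens of billions of decays per year inside each of us; lifetimes of 10^30 years or more mean essentially none, which is why the limit from our own bodies only goes so far.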

  However, with the incredibly small proposed distance scale and

  therefore the incredibly large mass scale associated with

  spontaneous symmetry breaking in Grand Unification, the new

  gauge bosons would get large masses. That would make the

  interactions they mediate so short-range that they would be

  unbelievably weak on the scale of protons and neutrons today. As a

  result, while protons could decay, they might live, in this scenario,

  perhaps a million billion billion billion years before decaying. No

  problem.
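  The standard way to see why is a dimensional estimate (a sketch of the textbook scaling argument, not a precise prediction of any particular model). The classic channel has a proton turning into a positron and a neutral pion, mediated by one of the new superheavy gauge bosons of mass M_X, and the rate is suppressed by four powers of that mass:

```latex
% Canonical GUT proton-decay channel and the rough lifetime estimate.
p \;\to\; e^{+} + \pi^{0},
\qquad
\tau_p \;\sim\; \frac{M_X^{\,4}}{\alpha_{\mathrm{GUT}}^{2}\, m_p^{5}} .
% Raising the unification scale M_X by a factor of 10 lengthens the
% predicted lifetime by a factor of 10,000.
```

  With M_X some fifteen orders of magnitude above the proton mass, that fourth-power suppression is what stretches the lifetime to such absurdly long times.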

  • • •

  With the results of Glashow and Georgi, and Georgi, Quinn, and

  Weinberg, the smell of grand synthesis was in the air. After the

  success of the electroweak theory, particle physicists were feeling

  ambitious and ready for further unification.

  How would one know if these ideas were correct, however? There

  was no way to build an accelerator to probe an energy scale a million

  billion times greater than the rest mass energy of protons. Such a

  machine would have to have a circumference of the Moon’s orbit.

  Even if it were possible, considering the earlier debacle over the SSC,

  no government would ever foot the bill.

  Happily, there was another way, using the kind of probability

  arguments I just presented that give limits to the proton lifetime. If

  the new Grand Unified Theory predicted a proton lifetime of, say, a

  thousand billion billion billion years, then if one could put a

  thousand billion billion billion protons in a single detector, on

  average one of them would decay each year.

  Where could one find so many protons? Simple: a few tons of water

  contain that many, and a tank of a few thousand tons contains a thousand times more.
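  The proton count behind that statement is easy to check. A sketch of the arithmetic (the tonnages below are round illustrative numbers, not the specifications of any particular experiment):

```python
# How many protons does a tank of water hold, and how many decays per
# year would a given proton lifetime imply? (Back-of-the-envelope sketch.)

AVOGADRO = 6.022e23
PROTONS_PER_GRAM_WATER = 10 / 18 * AVOGADRO    # H2O: 2 hydrogen protons + 8 in oxygen

def protons_in_water(tonnes):
    """Total protons in `tonnes` of water (1 tonne = 1e6 grams)."""
    return tonnes * 1e6 * PROTONS_PER_GRAM_WATER

ASSUMED_LIFETIME = 1e33                        # years, illustrative only

for tonnes in (3, 3_000, 50_000):
    n = protons_in_water(tonnes)
    print(f"{tonnes:>6} tonnes ~ {n:.1e} protons "
          f"-> ~{n / ASSUMED_LIFETIME:.3f} expected decays per year")
```

  A few tonnes of water already holds about 10^30 protons, and a 3,000-tonne tank holds about 10^33, so a multi-kiloton detector watched for a few years can probe lifetimes approaching a million billion billion billion years.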

  So all that was required was to get a tank of, say, three thousand

  tons of water, put it in the dark, make sure there were no

  radioactivity backgrounds, surround it with sensitive phototubes that

  can detect flashes of light in the detector, and then wait for a year to

  see a burst of light when a proton decayed. As daunting as this may

  seem, at least two large experiments were commissioned and built to

  do just this, one deep underground next to Lake Erie in a salt mine,

  and one in a mine near Kamioka, Japan. The mines were necessary

  to screen out incoming cosmic rays that would otherwise produce a

  background that would swamp any proton decay signal.

  Both experiments began taking data around 1982–83. Grand

  Unification seemed so compelling that the physics community was

  confident a signal would soon appear, confirming Grand Unification

  and marking the culmination of a decade of amazing change and discovery

  in particle physics—not to mention another Nobel Prize for

  Glashow and maybe some others.

  Unfortunately, nature was not so kind in this instance. No signals

  were seen in the first year, the second, or the third. The simplest

  elegant model proposed by Glashow and Georgi was soon ruled out.

  But once the Grand Unification bug had caught on, it was not easy

  to let it go. Other proposals were made for unified theories that

  might cause proton decay to be suppressed beyond the limits of the

  ongoing experiments.

  On February 23, 1987, however, another event occurred that

  demonstrates a maxim I have found is almost universal: every time

  we open a new window on the universe, we are surprised. On that

  day a group of astronomers observed, in photographic plates

  obtained during the night, the closest exploding star (a supernova)

  seen in almost four hundred years. The star, about 160,000 light-

  years away, was in the Large Magellanic Cloud—a small satellite

  galaxy of the Milky Way observable in the southern hemisphere.

  If our ideas about exploding stars are correct, most of the energy

  released should be in the form of neutrinos, even though the visible

  light released is so great that supernovas are the brightest cosmic

  fireworks in the sky when they explode (at a rate of about one

  explosion per hundred years per galaxy). Rough estimates then

  suggested that the huge IMB (Irvine-Michigan-Brookhaven) and

  Kamiokande water detectors should see about twenty neutrino

  events. When the IMB and Kamiokande experimentalists went back

  and reviewed their data for that day, lo and behold IMB displayed

  eight candidate events in a ten-second interval, and Kamiokande

  displayed eleven such events. In the world of neutrino physics, this

  was a flood of data. The field of neutrino astrophysics had suddenly

  reached maturity. These nineteen events produced perhaps nineteen

  hundred papers by physicists, such as me, who realized that they

  provided an unprecedented window into the core of an exploding

  star, and a laboratory not just for astrophysics but also for the

  physics of neutrinos themselves.

  Spurred on by the realization that large proton-decay detectors

  might serve a dual purpose as new astrophysical neutrino detectors,

  several groups began to build a new generation of such dual-purpose

  detectors. The largest one in the world was again built in the

  Kamioka mine and was called Super-Kamiokande, and with good

  reason. This mammoth fifty-thousand-ton tank of water, surrounded

  by 11,800 phototubes, was operated in a working mine, yet the

  experiment was maintained with the purity of a laboratory clean

  room. This was absolutely necessary because in a detector of this size

  one had to worry not only about external cosmic rays, but also about

  internal radioactive contaminants in the water that could swamp any

  signals being searched for.

  Meanwhile, interest in a related astrophysical neutrino signature

  also reached a new high during this period. The Sun produces

  neutrinos due to the nuclear reactions in its core that power it, and

  over twenty years, using a huge underground detector, Ray Davis had

  detected solar neutrinos, but had consistently found an event rate

  about a factor of three below what was predicted using the best

  models of the Sun. A new type of solar neutrino detector was built

  inside a deep mine in Sudbury, Canada, which became known as the

  Sudbury Neutrino Observatory (SNO).

  Super-Kamiokande has now been operating almost continuously,

  through various upgrades, for more than twenty years. No proton-

  decay signals have been seen, and no new supernovas observed.

  However, the precision observations of neutrinos at this huge

  detector, combined with complementary observations at SNO,

  definitely established that the solar neutrino deficit observed by Ray

  Davis is real, and moreover that it is not due to astrophysical effects

  in the Sun but rather due to the properties of neutrinos. At least one

  of the three known types of neutrinos is not massless—although it

  has a small mass indeed, perhaps a hundred million times smaller

  than the mass of the next-lightest particle in nature, the electron.

  Since the Standard Model does not accommodate neutrino masses,

  this was the first definitive observation that some new physics,

  beyond the Standard Model and beyond the Higgs, must be

  operating in nature.

  Soon after this, observations of higher-energy neutrinos that

  regularly bombard Earth as high-energy cosmic-ray protons hit the

  atmosphere and produce a downward shower of particles, including

  neutrinos, demonstrated that a second type of neutrino also has mass. This

  mass is somewhat larger, but still far smaller than the mass of the

  electron. For these results, team leaders at SNO and Super-Kamiokande

  were awarded the 2015 Nobel Prize in Physics—a week before I

  wrote the first draft of these words. To date these tantalizing hints of

  new physics are not explained by current theories.

  The absence of proton decay, while disappointing, turned out to

  be not totally unexpected. Since Grand Unification was first

  proposed, the physics landscape had shifted slightly. More precise

  measurements of the actual strengths of the three nongravitational

  interactions—combined with more sophisticated calculations of the

  change in the strength of these interactions with distance—

  demonstrated that if the particles of the Standard Model are the only

  ones existing in nature, the strengths of the three forces do not unify

  at a single scale. In order for Grand Unification to take place, some

  new physics at energy scales beyond those that have been observed

  thus far must exist. The presence of new particles would not only

  change the rate at which the three known interactions change with

  scale so that they might unify at a single energy scale, but would also

  tend to drive up the Grand Unification scale and thus suppress the

  rate of proton decay—leading to predicted lifetimes in excess of a

  million billion billion billion years.
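  One way to see why the improved measurements were so decisive: at the one-loop level, the scale at which any two of the couplings cross follows directly from their measured values and from coefficients fixed by the particle content. A schematic sketch of that condition:

```latex
% Scale M_ij at which couplings i and j become equal, from one-loop
% running; \mu_0 is the scale at which the couplings are measured.
\ln\frac{M_{ij}}{\mu_0}
  \;=\; \frac{2\pi\left[\alpha_i^{-1}(\mu_0)-\alpha_j^{-1}(\mu_0)\right]}{b_i-b_j}.
% Genuine unification requires M_{12} = M_{13} = M_{23}. With only the
% Standard Model particles setting the b_i, the three crossing scales
% come out close to one another but measurably different; extra
% particles change the b_i and can both reconcile the scales and push
% them higher.
```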

  As these developments were taking place, theorists were driven

  by new mathematical tools to explore a possible new type of

  symmetry in nature, which became known as supersymmetry. This

  fundamental symmetry is different from any previously known

  symmetry, in that it connects the two different types of particles in

  nature, fermions (particles with half-integer spins) and bosons

  (particles with integer spins). The upshot of this (many other books,

  including some by me, explore this idea in detail) is that if this

  symmetry exists in nature, then for every known particle in the

  Standard Model at least one corresponding new elementary particle

  must exist. For every known boson there must exist a new fermion.

  For every known fermion there must exist a new boson.

  Since we haven’t seen these particles, this symmetry cannot be

  manifest in the world at the level at which we experience it, and it must be

  broken, meaning the new particles will all get masses that could be

  heavy enough so that they haven’t been seen in any accelerator

  constructed thus far.

  What could be so attractive about a symmetry that suddenly

  doubles all the particles in nature without any evidence of any of the

  new particles? In large part the seduction lay in the very fact of

  Grand Unification. Because if a Grand Unified Theory exists at a

  mass scale fifteen to sixteen orders of magnitude higher than the rest

  mass energy of the proton, this is also about thirteen orders of

  magnitude higher than the scale of electroweak symmetry breaking.

  The big question is why and how such a huge difference in scales can

  exist for the fundamental laws of nature. In particular, if the

  Standard Model Higgs is the true last remnant of the Standard

  Model, then the question arises, Why is the energy scale of Higgs

  symmetry breaking thirteen orders of magnitude smaller than

  the scale of symmetry breaking associated with whatever new field

  must be introduced to break the GUT symmetry into its separate

  component forces?
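  In round numbers (the electroweak figure quoted is the familiar value of the Higgs field’s vacuum expectation value, about 246 GeV, and the proton’s rest mass energy is just under 1 GeV):

```latex
% Rough scales involved in the hierarchy question:
M_{\mathrm{GUT}} \;\sim\; 10^{15}\text{--}10^{16}\ \mathrm{GeV},
\qquad
v_{\mathrm{EW}} \;\simeq\; 246\ \mathrm{GeV},
\qquad
m_p c^2 \;\simeq\; 0.94\ \mathrm{GeV}.
% Hence M_GUT is roughly 10^13 times the electroweak scale and
% 10^15 to 10^16 times the proton's rest mass energy.
```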

  The problem is a little more severe than it appears. Scalar

  particles such as the Higgs have several new quantum mechanical

  properties that are unlike those of fermions or spin-1 particles such

  as gauge particles. When one considers the effects of virtual particles,

  including particles of arbitrarily large mass, such as the gauge

  particles of a presumed Grand Unified Theory, these tend to drive

  up the mass and symmetry-breaking scale of the Higgs so that it

  essentially becomes close to, or identical to, the heavy GUT scale.

  This generates a problem that has become known as the naturalness

  problem. It is technically unnatural to have a huge hierarchy

  between the scale at which the electroweak symmetry is broken by

  the Higgs particle and the scale at which the GUT symmetry is

  broken by whatever new heavy scalar field breaks that symmetry.

  The brilliant mathematical physicist Edward Witten argued in an

  influential paper in 1981 that supersymmetry had a special property.

  It could tame the effect that virtual particles of arbitrarily high mass

  and energy have on the properties of the world at the scales we can

  currently probe. Because virtual fermions and virtual bosons of the

  same mass produce quantum corrections that are identical except

  for a sign, if every boson is accompanied by a fermion of equal mass,

  then the quantum effects of the virtual particles will cancel out. This

  means that the effects of virtual particles of arbitrarily high mass and

  energy on the physical properties of the universe on scales we can

  measure would now be completely removed.

  If, however, supersymmetry is itself broken, then the quantum

  corrections will not quite cancel out. Instead they would yield

  contributions to masses that are of the same order as the

  supersymmetry-breaking scale. If that scale were comparable to the scale of

  the electroweak symmetry breaking, then it would explain why the

  Higgs mass scale is what it is. And it also means we should expect to

  begin to observe a lot of new particles—the supersymmetric partners

  of ordinary matter—at the scale currently being probed at the LHC.
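  Schematically, and following the standard textbook presentation rather than anything specific to this account: a virtual fermion and a virtual scalar shift the Higgs mass-squared in opposite directions, so with superpartners of matched mass the dangerous pieces cancel, and with superpartners split in mass the leftover is governed by the size of that splitting.

```latex
% Schematic one-loop shifts of the Higgs mass-squared from a virtual
% fermion f and a virtual scalar S, where \Lambda stands for the highest
% mass scale (for instance the GUT scale) circulating in the loops:
\Delta m_H^2\big|_{\mathrm{fermion}} \;\sim\; -\,\frac{|\lambda_f|^2}{16\pi^2}\,\Lambda^2,
\qquad
\Delta m_H^2\big|_{\mathrm{scalar}} \;\sim\; +\,\frac{\lambda_S}{16\pi^2}\,\Lambda^2 .
% With exact supersymmetry the couplings and masses are matched pair by
% pair and the \Lambda^2 pieces cancel. With supersymmetry broken, the
% residue is of order
\Delta m_H^2 \;\sim\; \frac{\lambda}{16\pi^2}\,\left(m_S^2 - m_f^2\right)
  \ln\frac{\Lambda}{m_S},
% which stays near the electroweak scale provided the superpartner mass
% splitting does.
```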

  This would solve the naturalness problem because it would

  protect the Higgs boson masses from possible quantum corrections

  that could drive them up to be as large as the energy scale associated

  with Grand Unification. Supersymmetry could allow a “natural”

  large hierarchy in energy (and mass) separating the electroweak scale

  from the Grand Unified scale.

  That supersymmetry could in principle solve the hierarchy

  problem, as it has become known, greatly increased its stock with

  physicists. It caused theorists to begin to explore realistic models that

  incorporated supersymmetry breaking and to explore the other

  physical consequences of this idea. When they did so, the stock price

  of supersymmetry went through the roof. For if one included the

  possibility of spontaneously broken supersymmetry in calculations

  of how the three nongravitational forces change with distance, then

  suddenly the strength of the three forces would naturally converge at

  a single, very small-distance scale. Grand Unification became viable

  again!
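  A rough numerical sketch of that convergence, using commonly quoted one-loop coefficients and approximate measured coupling strengths at the Z-boson mass (all of the values below are textbook approximations, not a precision fit):

```python
import math

# Approximate inverse coupling strengths at the Z mass (about 91 GeV),
# with the usual GUT normalization for the hypercharge coupling.
ALPHA_INV_MZ = (59.0, 29.6, 8.5)     # (alpha_1, alpha_2, alpha_3) inverted
MZ = 91.2                            # GeV

# One-loop coefficients b_i, which depend only on the particle content.
B_SM   = (41 / 10, -19 / 6, -7)      # Standard Model fields only
B_MSSM = (33 / 5, 1, -3)             # with superpartners added (the MSSM)

def crossing_scale(i, j, b):
    """Energy (GeV) at which couplings i and j become equal at one loop."""
    t = 2 * math.pi * (ALPHA_INV_MZ[i] - ALPHA_INV_MZ[j]) / (b[i] - b[j])
    return MZ * math.exp(t)

for label, b in (("Standard Model only", B_SM), ("with supersymmetry", B_MSSM)):
    scales = [crossing_scale(i, j, b) for i, j in ((0, 1), (0, 2), (1, 2))]
    print(label + ":", ", ".join(f"{s:.1e} GeV" for s in scales))
```

  With Standard Model content alone the three pairwise crossing scales disagree by more than an order of magnitude; with the superpartners included they land close to a common scale of roughly 2 × 10^16 GeV, which is the sense in which Grand Unification became viable again.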

  Models in which supersymmetry is broken have another

  attractive feature. It was pointed out, well before the top quark was

  discovered, that if the top quark was heavy, then through its

  interactions with its supersymmetric partners, it could produce

  quantum corrections to the Higgs particle properties that would

  cause the Higgs field to condense at its currently measured energy

  scale if Grand Unification occurred at a much higher, superheavy

  scale. In short, the energy scale of electroweak symmetry breaking

  could be generated naturally within a theory in which Grand

  Unification occurs at a much higher energy scale. When the top

  quark was discovered and indeed was heavy, this added to the

  attractiveness of the possibility that supersymmetry breaking might

 
