by Peter Ward
This discovery was singular enough that we will paraphrase the abstract of the paper itself: molecular fossils (biomarkers) recovered from cores drilled through 2,700-million-year-old sedimentary strata, in an ancient part of the deep Australian sedimentary rock record, indicate that when these strata were deposited the environment was shared by photosynthesizing bacteria called cyanobacteria, pushing the oldest-known occurrence of these tiny, oxygen-producing microbes far back in time. But even more surprisingly, a second kind of biomarker, called steranes, found in the sampled strata provided persuasive evidence that not only prokaryotic life forms were present, but eukaryotes as well—a group whose first fossils come from strata as much as a billion years younger than the rock cores of this study.
This paper, published in the prestigious journal Science, hurled a revolutionary new finding at the scientific world for two reasons: the presence of oxygen-producing photosynthesis at a very early date, and the even more surprising discovery, in the same old rocks, of one of the three great groups, or domains, of life, the Eukarya (the others being Bacteria and Archaea, both microbial and dominantly single celled). All this evidence came from cores extracted from deep in the Earth. The take-home point: both photosynthetic bacteria and eukaryotes existed far earlier than previously thought, all the way back to 2,700 million years ago. This electrifying paper in one fell swoop rewrote scientific history, and the history of life as well.
But science is about doubting and questioning. Let us jump ahead almost ten years, to 2008, and look at another paper on this subject, one of whose coauthors is the same J. Brocks who was senior author of the 1999 Science paper mentioned above. Here are the two salient sentences: “The oldest fossil evidence for eukaryotes and cyanobacteria therefore reverts to 1.78–1.68 billion years ago and around 2.15 billion years ago, respectively. Our results eliminate the evidence for oxygenic photosynthesis at about 2.7 billion years ago and exclude previous biomarker evidence for the long delay (circa 300 million years) between the appearance of oxygen producing cyanobacteria and the rise in atmospheric oxygen 2.45–2.32 billion years ago.”
Quite a difference! So what happened between 1999 and 2008 to cause this abrupt scientific volte-face?
The original biomarker studies from the late 1990s were criticized on several fronts, including the fact that many ancient biochemical pathways that do not use oxygen are known to have been “updated” after the great oxygenation event to incorporate enzymes that do. However, the real problem with the biomarker studies was the methodology used to get the samples, not the analyses of what was in the samples. The investigators were finding the precious biomarkers, all right. But when, exactly, did the biomarkers get into the cores? Rocks are not the impermeable, hard, and durable objects we usually take them for, but actually exist often in environments where chemical changes—and later contamination—occur. In the late 1990s there was not yet sufficient appreciation of the intense need for testing for—and eliminating—the chance of younger contamination in these ancient samples, particularly when the putative biomarkers are present in concentrations less than that of the surrounding air.
Thus it was to the horror of the mainstream biomarker community that one of its rising stars—Jochen Brocks of the Australian National University—suddenly changed his tune in 2005 (ultimately leading to the 2008 article cited above), arguing that his own thesis work documenting the presence of Archean biomarkers was confounded by contamination! That, in turn, led one of the major geobiology funding agencies (the Agouron Institute) to support a critical repeat of the biomarker scientific drilling projects, with new means of testing for contamination. The result (as of this writing in mid-2014) is that no biomarkers were found. In fact, at a meeting late in 2013 the source of the contamination was revealed to be a stainless steel saw blade that had been made “stainless” by its manufacturer through high-pressure impregnation with petroleum products! As of this writing, the biomarker community has not developed intellectually rigorous tests to prove that any of the organic biomarkers in Archean rocks date from the time the sediments accumulated.
Another front in the great debate about the origin of molecular oxygen in Earth’s atmosphere was opened using a new kind of Earth history tool: comparing the relative abundances of sulfur isotopes. We have already seen (and will see again, in the sections on mass extinction) that comparing the compositions of carbon isotopes is useful for studying life, and was even used to try to decide when the first life appeared on Earth, since living cells favor specific isotopes of a given element (such as carbon, oxygen, or, as we show here, sulfur) over the others. In normal chemical reactions, light isotopes move through reaction series slightly faster than heavy ones, because the lighter isotopes form slightly weaker chemical bonds that can be made and broken faster, producing higher reaction rates; because of this, plants prefer the lightest isotopes of carbon and oxygen over their slightly heavier sister isotopes. James Farquhar, Mark Thiemens, and colleagues at the University of California, San Diego, came up with a new method in 2000 that uses the relative numbers of sulfur isotopes found in rocks of known age to tell us when particular kinds of life might have arisen.
Farquhar and Thiemens analyzed the pattern of sulfur isotopes in sedimentary rocks from Archean to Paleozoic time, finding large, mass-independent variations in sulfur isotopes prior to about 2.4 billion years ago. In rocks younger than these the fluctuations disappear, and the best interpretation is that this change was caused by a loss of the ultraviolet radiation that had been hitting molecules of SO2 in Earth’s atmosphere. That could have happened only through the formation—the first formation, at that—of the ozone layer that exists to this day. If there is no oxygen, there is no ozone screen, and we now have evidence that there was no ozone layer before about 2.4 billion years ago. After this, many other sedimentary indicators start to suggest the presence of atmospheric oxygen.
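The sulfur signal can be made concrete with a little arithmetic. Isotope abundances are reported as per-mil deviations from a standard (delta values), and ordinary, mass-dependent chemistry ties the delta-33S deviation to the delta-34S deviation through an exponent of about 0.515; the anomaly Farquhar and Thiemens tracked is the departure from that tie. The numbers below are illustrative values, not measurements from their paper; a minimal sketch in Python:

```python
def capital_delta_33(d33, d34):
    """Mass-independent sulfur anomaly (per mil): how far the
    measured delta-33S departs from the mass-dependent line,
    which scales delta-34S by an exponent of ~0.515."""
    return d33 - 1000.0 * ((1.0 + d34 / 1000.0) ** 0.515 - 1.0)

# Ordinary chemistry keeps d33 ~ 0.515 * d34, so the anomaly is near zero.
print(capital_delta_33(2.06, 4.0))   # essentially 0 per mil

# UV photolysis of SO2 (possible only with no ozone screen) breaks that rule.
print(capital_delta_33(6.0, 4.0))    # a positive anomaly of several per mil
```

Anomalies of several per mil, of either sign, are what vanish from the sedimentary record after about 2.4 billion years ago, once an ozone layer began screening out the ultraviolet.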
So there was no oxygen before 2.4 billion years ago, at least not enough to create an ozone layer. But were there any cyanobacteria anywhere at all? Probably not. When it became clear that the major scientific drilling program in South Africa (funded by the Agouron Institute mentioned above) had missed the great oxygenation event, the institute allowed the team to drill two more holes through slightly younger sediments in South Africa, which certainly did cross this event. This is the time interval between ~2.4 and 2.2 GA, the earliest part of what is called the Paleoproterozoic. And they found something rather peculiar. As noted above, the minerals pyrite and uraninite, and the sulfur isotopes, are very strong indicators of the lack of oxygen. At the other end of this spectrum is the element manganese, which is usually an equally powerful indicator of the presence of free molecular oxygen. The new data show copious levels of sedimentary manganese oxide, but in the same rock that carries the other indicators of the absence of oxygen!
But it was more complex. Our junior colleague at Caltech, Woodward Fischer, working with graduate student Jena Johnson and Caltech alumnus Sam Webb (in charge of one of the microanalytical beam lines at the Stanford linear accelerator), decided to look further.5 It turns out that the same sediments that carry this slug of sedimentary manganese also contain silt-sized grains of detrital pyrite and uraninite, and the isotopic sulfur signature that demands no free oxygen (well, less than 1 ppm). This was completely unexpected, but it gets worse. Working with another Young Turk colleague at Caltech, Mike Lamb—an expert in the geophysics of mineral transport during sedimentation—they extended this no-oxygen constraint to the entire depositional system. The silt at the edge of the delta where we sampled it had to have originally been eroded from a continent somewhere, then transported through a river system, through meandering streams, coastal estuaries, near-shore sedimentary environments, and out to the distal toe of the delta. None of these environments could have had even 1 ppm of free molecular oxygen6 (and so were obviously not affected by glacial meltwater, which might have had a little bit). Oxygenic cyanobacteria have well-known nutrient requirements—principally iron and phosphorus7—that would have been provided in many places along this depositional pathway. They produce copious quantities of oxygen—bubbles—when they grow. If any of these “islands of oxygenic photosynthesis” had actually existed, then where were they? The worst place for them to grow would be far out at sea, away from these nutrient sources. That was the vision of Preston Cloud mentioned above, but it frankly does not make sense in this context. The survival of those sedimentary indicators of anoxia is totally incompatible with the presence of oxygen—and cyanobacteria—anywhere in the environments those grains traveled through.
Overlap interval of contradictory geochemical signals for the rise of oxygen. Silt-sized rounded grains of pyrite and uraninite, which are quickly destroyed by the faintest whiff of oxygen, are associated with the first pulses of sedimentary manganese, which normally requires molecular oxygen. This overlap interval (inside the magnifying glass) may be the hint of a manganese-precipitating photosynthetic bacterium, which would be an important evolutionary stepping-stone on the path to oxygenic photosynthesis. (Diagram courtesy of Woodward Fischer, Caltech)
So what could the resolution of this paradox be? We think that the oxygen-emitting system of cyanobacteria had not evolved by this time (2.4 billion years ago), but that many of the evolutionary steps needed to get there had already been taken. It turns out that the biochemical complex in oxygen-releasing photosynthesis that collects the energy to split water, releasing oxygen, relies on a cluster of four manganese atoms, with a calcium atom thrown in for good luck. When this protein is made from scratch in living plants, the manganese atoms are drawn into the complex one at a time, with the aid of photons that oxidize them. We suggested that these unique bursts of manganese in the sediments (not timid whiffs) might be the product of an evolutionary ancestor of the cyanobacteria that fed on reduced manganese dissolved in the water, using it as a source of the electrons needed to do photosynthesis.8 Many primitive photosynthetic bacteria are known to do this with H2S, organic carbon, and ferrous iron, but none have yet been found that can use manganese. Photosynthesis of this sort would leave copious amounts of a waste product—manganese oxide—behind in the sediment, but would not release the molecular O2 that would destroy the sedimentary pyrite or uraninite, or create an ozone screen to change sulfur chemistry. This overlap interval, in which sedimentary manganese coexists with rounded, detrital grains of pyrite and uraninite, occurs in one—and only one—brief interval of geological time, between ~2.4 and ~2.35 billion years ago.9 If that is indeed the time that this protein evolved, all of the other indirect suggestions of earlier oxygenic photosynthesis must be wrong. This is a new and controversial interpretation we are posing here. But we are confident it is the correct one.
In our model, this manganese-oxidizing microbe, newly arisen in all probability through some random mutation, dominated the ecosystem for a few million years until it managed to deplete the surface waters of soluble manganese. Then, through some biochemical rearrangement, this tiny new kind of microbe became capable of grabbing electrons directly from water molecules, releasing copious quantities of O2 in the process. That would have been the first true cyanobacterium. Because water is essentially everywhere, its growth would no longer be limited by the supply of electron donors in the environment; only trace levels of iron and phosphate are needed for it to grow. And during this interval of time there are clear records of glacial deposits, which contain plenty of iron, phosphate, and other nutrients for these new cyanobacteria to grow on. In fact, this glacially fertilized growth would be capable of destroying the planetary greenhouse in less than a million years by removing two important gases—CO2 and methane—too rapidly for the system to recover.10 The result of the sudden destruction of the greenhouse would be a global glaciation, termed a “snowball Earth” event.
We apologize for the complex chemistry necessary in the preceding section. But to get this story right requires complexity. As we see now, the world was unalterably changed from this point onward.
A SNOWBALL FROM HELL
In all of Earth history, we have rarely seen ocean stratification (where the ocean has a thin, oxygenated upper layer over a much thicker layer that is not) when Earth’s polar regions are glaciated. Cold water sinks at the poles, driving circulation. On top of that, the glaciers themselves are very good at grinding continental rocks into powder and throwing it back into the oceans, where the tiny particles of rusted iron and phosphorus supply two of the same key ingredients of the fertilizer we use on our lawns and gardens today. Satellite images of melting icebergs show a plume of photosynthetic activity in their wake, confirming the powerful effect that a little ground-up rock can have on oceanic productivity. And a great debate is raging even today about the effect of an illegal iron-dumping experiment conducted in 2012 off Haida Gwaii (formerly the Queen Charlotte Islands) in the Pacific Northwest, which was followed only two years later by a massive increase in salmon.
During Archean and Early Proterozoic time there were several major glacial intervals before the great oxidation event, including three minor episodes from ~2.9 to 2.7 billion years ago, and several more between 2.45 and 2.35 GA. A simple calculation suggests that the amount of iron and phosphate dumped into the oceans during any of those glacial advances would have been more than enough for cyanobacteria—if they had evolved by then—to completely overwhelm the anoxic surface environment, and flip the planetary atmosphere and surface ocean into a stable oxygen-rich situation like today; it would have taken less than 1 million years to do so.11 The fact that it did not happen then is another strong line of reasoning that oxygenic photosynthesis had not yet evolved.
The youngest, and firmest, constraint on the presence of copious oxygen in the atmosphere comes from a vast deposit of the mineral manganese, known as the Kalahari manganese field in South Africa, dated at 2.22 GA, in the same basin where the Agouron drilling project sampled. This deposit is enormous: a blanket fifty meters thick that covers nearly five hundred square kilometers, deposited on a continental shelf. There is no trace of detrital pyrite, uraninite, or weird sulfur isotopes. It could only have formed under an oxygen-rich atmosphere, and it thus gives us the oldest date at which we are sure that the world of cyanobacteria, the ozone shield, and oxygen in both the sea and air existed.
Between this deposit and the underlying manganese overlap interval is another peculiar beast—a glaciation so severe that it marched into the tropics12 and most likely froze the entire ocean surface, producing the first of the snowball Earth episodes.13
This first snowball Earth episode (the term was actually coined by coauthor Kirschvink) may have lasted nearly 100 million years.14 So what is a snowball Earth? Such events were, in fact, first discovered in younger rocks.
We now know that glacial deposits produced between 717 and 635 million years ago can be found on virtually all the continents. Two geologists working in the first half of the twentieth century, Brian Harland of the UK and Douglas Mawson of Australia, recognized early on that there was a great infra-Cambrian ice age of unusually large, seemingly global extent. Although they recognized clear features of unambiguous glacial origin—drop stones, tillites, and glacially striated pavements at the bottom of the units—several features of these deposits were puzzling. Many of the clasts were composed of shallow-water limestone, as if the glaciers had marched out over carbonate platforms like those in the Bahamas (which today form only in the tropics), ripping up pieces and carrying them away. The deposits were also associated with an unusual occurrence of banded ironstones, similar to those that had disappeared from Earth nearly a billion years earlier, and the glacial sediments were usually covered by layers of limestone (again, a “fingerprint” of low-latitude formation). In a 1964 review article published in Scientific American, Harland argued that the glaciers must have reached the equator, because some of the deposits would have been at low latitudes no matter where Earth’s rotation axis had wandered. Harland also specifically rejected the idea that the oceans might have frozen over, as that would invoke the “ice catastrophe” from which climate modelers assured him the planet could never have escaped.
Measuring the latitude of continents in the past is a specialty of a branch of geophysics called paleomagnetism, which studies the fossil record of Earth’s magnetic field. Earth’s field is vertical at the poles but horizontal at the equator. Hence, measuring the angle of the ancient magnetization with respect to the (horizontal) bedding planes provides an estimate of the latitude at which the rocks formed. Unfortunately, it is necessary to actually prove that the magnetism one measures is as ancient as the rock, and was not acquired during recent weathering or some metamorphic event. (To be meaningful, we must study things that really and truly date to the time that the rocks formed. This is the flaw with the Precambrian biomarker studies noted earlier.)
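For a field shaped like a geocentric axial dipole, the relation paleomagnetists use has a simple closed form: tan(I) = 2 tan(latitude), where I is the magnetic inclination frozen into the rock. A minimal sketch in Python (the sample inclinations are hypothetical, chosen only for illustration):

```python
import math

def paleolatitude(inclination_deg):
    """Latitude (degrees) implied by a measured magnetic inclination,
    assuming a geocentric axial dipole: tan(I) = 2 * tan(latitude)."""
    inc = math.radians(inclination_deg)
    return math.degrees(math.atan(math.tan(inc) / 2.0))

print(paleolatitude(0.0))    # horizontal field: equator (0 degrees)
print(paleolatitude(90.0))   # vertical field: pole (90 degrees)
print(paleolatitude(30.0))   # shallow inclination maps to a low latitude (~16 degrees)
```

A shallow inclination that holds steady through a thick pile of glacial sediments, and that survives the field tests for later overprinting described above, is exactly the kind of evidence that can place a glacier in the tropics.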
The possibility of testing this low-latitude glaciation hypothesis attracted many early attempts at paleomagnetic analysis. However, in 1966 a new paradigm for the geological sciences was proposed—that of plate tectonics. If the continents could move relative to each other, it was then possible that all of the infra-Cambrian glacial sediments actually formed at the poles, and plate tectonics could have moved them down to their present position in low latitudes. The idea of low-latitude Precambrian glaciation basically dropped off the geophysical radar screen. It just seemed too far-fetched to the scientists studying the early Earth.