So far, we’ve discussed isotopes, microfossils, and stromatolites. Further possible indicators of early life are biomarkers, which are recognizable derivatives of biological molecules. We’re all familiar with the idea that fossil skeletons allow us to distinguish a Tyrannosaurus rex from a Triceratops. On the microscopic scale, microbes can leave behind remnants of individual organic molecules, consisting of skeletal rings or chains of carbon atoms. These ‘molecular backbones’ come from particular molecules that are found only in certain types of microbe. So biomarkers not only confirm the presence of past life but can also indicate specific forms of life.
Unfortunately, like some microfossils, ancient biomarkers are also mired in controversy. The oldest reported biomarkers date from about 2.8 Ga and appear to show molecules suggesting the presence of oxygen-producing cyanobacteria. But the sampled rocks may have been contaminated by younger organic material. More certain biomarkers are found in rocks from 2.5 Ga inside tiny fluid inclusions that appear uncontaminated.
Despite the limitations of some evidence, the overall record shows that Earth was inhabited by 3.5 Ga—within 1 billion years of its formation and shortly after heavy impact bombardment, which suggests that life might originate fairly quickly on geological timescales on suitable planets. This means that it’s not out of the question that Mars, which we think had a geologically short window of habitability (Chapter 6), may also have evolved life.
Chapter 4
From slime to the sublime
How did Earth maintain an environment fit for life?
However life started, once established, it persisted for over 3.5 billion years and evolved from microbial slime to the sophistication of human civilization. During this period, the Earth maintained oceans and, for the most part, a moderate climate, even though there was an increase of about 25 per cent in the amount of sunlight. The gradual brightening of the Sun is a consequence of the way that the Sun shines on the main sequence. When four hydrogen nuclei are fused into one helium nucleus in the Sun’s core, there are fewer particles, so material above the core presses inwards to fill available space. The compressed core warms up, causing fusion reactions to proceed faster, so that the Sun brightens about 7–9 per cent every billion years. This theory is confirmed by observations of Sun-like stars of different ages.
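The pace of this brightening is often summarized by a simple formula fitted to detailed solar evolution models (an approximation, but a widely used one):

L(t) = L(present) ÷ [1 + (2/5) × (1 − t/t0)]

where t is the Sun’s age and t0 ≈ 4.6 billion years is its present age. At 3.5 Ga, when the Sun was about 1.1 billion years old, the formula gives roughly 77 per cent of the present brightness, and near the present day it reproduces the 7–9 per cent increase per billion years.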
If the Earth had possessed today’s atmosphere at about 2 Ga or earlier, the whole planet should have been frozen under the fainter Sun, but geological evidence suggests otherwise. This puzzle is the faint young Sun paradox. There is evidence for liquid water back to at least 3.8 Ga, which includes, for example, the presence of sedimentary rocks that were formed when water washed material from the continents into the oceans.
There are three ways to resolve the faint young Sun paradox. The most likely explanation involves a greater greenhouse effect in the past. Another suggestion is that the ancient Earth as a whole was darker than today and absorbed more sunlight, although there’s scant supporting evidence. A third idea is that the young Sun shed lots of material in a rapid outflow (solar wind) to space so that the Sun’s core wasn’t as compressed and heated over time by overlying weight as assumed above. If the mass loss was just right, the Sun could have started out as bright as today. However, observations of young Sun-like stars presently don’t support the third hypothesis.
In considering the first idea, we need to appreciate that the atmosphere warms the Earth to an extent depending on atmospheric composition. Without an atmosphere (and assuming that the amount of sunlight that the Earth reflects stayed the same), the Earth’s surface would be a chilly –18°C. Instead, today’s average global surface temperature is +15°C. The 33°C difference (= 18 + 15) is the size of Earth’s greenhouse effect, which is the warming caused by the atmosphere.
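The −18°C figure comes from a standard energy-balance estimate: an airless planet warms until the infrared energy it radiates matches the sunlight it absorbs. Using the Stefan–Boltzmann law, the balance temperature is

T = [S × (1 − A) ÷ 4σ]^(1/4)

where S ≈ 1,360 watts per square metre is the sunlight arriving at Earth’s distance, A ≈ 0.3 is the fraction reflected, and σ = 5.67 × 10^−8 in SI units is the Stefan–Boltzmann constant. The factor of 4 arises because the Earth intercepts sunlight over a circular cross-section but radiates from its entire spherical surface, which has four times that area. The result is about 255 K, which is −18°C.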
How does the greenhouse effect work? A planet’s surface is heated by visible sunlight, causing it to glow in the infrared just as your warm body shines at those wavelengths. An atmosphere tends to be mostly opaque to the infrared radiation coming up from the surface below, so it absorbs the infrared energy and warms. Because the atmosphere is warm it also radiates in the infrared. Some of this radiation from the atmosphere travels back down to the planet. So the surface of a planet is warmer than it would be in the absence of an atmosphere because it receives energy from a heated atmosphere in addition to visible sunlight.
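A toy ‘single-slab’ calculation shows the effect at its starkest (an idealization in which the atmosphere absorbs all of the surface’s infrared, which real air doesn’t quite do). A slab that absorbs everything from below and radiates equally up and down leaves the surface, in balance, radiating twice the energy the planet sheds to space, so the surface temperature becomes 2^(1/4) × 255 K ≈ 303 K, about +30°C instead of −18°C. The real atmosphere is only a partial absorber, which is why the actual warming is 33°C rather than this idealized 48°C.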
At 3.5 Ga, when the Sun was 25 per cent fainter, a 50°C greenhouse effect would have been needed to maintain the same global average temperature on Earth that we enjoy today with a 33°C greenhouse effect. A stronger greenhouse effect would have been possible if there had been greater levels of greenhouse gases, which are those responsible for absorbing infrared radiation coming from the Earth’s surface. Today, water vapour accounts for about two-thirds of the 33°C greenhouse effect, and carbon dioxide (CO2) accounts for most of the rest. However, water vapour condenses as rain or snow, so its concentration is basically a response to the background temperature set by atmospheric CO2, which doesn’t condense. In this way, CO2 controls today’s greenhouse effect even though its level is small. Around 1700, there were about 280 parts per million of CO2 in Earth’s atmosphere (meaning that in a million molecules of air, 280 were CO2), while in 2010 there were 390 parts per million. Since industrialization, CO2 has been released from deforestation and burning fossil fuels such as oil or coal. In the late Archaean, an upper limit on CO2 of tens to a hundred times the pre-industrial level is deduced from chemical analyses of palaeosols, which are fossilized soils. But even such levels of CO2 were not enough to counter the faint young Sun.
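The 50°C figure follows from the same energy balance as before. The airless temperature scales as the fourth root of the Sun’s brightness, so a 25 per cent fainter Sun gives 255 K × (0.75)^(1/4) ≈ 237 K, or about −36°C. Lifting the surface to today’s +15°C therefore requires roughly 51°C of greenhouse warming, which rounds to the 50°C quoted.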
In fact, the Archaean atmosphere had less than one part per million of molecular oxygen (O2), which implies that methane (CH4) was an important greenhouse gas. Today, atmospheric methane is at a low level of 1.8 parts per million because it reacts with oxygen, which is the second most abundant gas in the air at 21 per cent. (Most of today’s air is nitrogen (N2), at 78 per cent, but neither oxygen nor nitrogen is a greenhouse gas.) In the Archaean, the lack of oxygen would have allowed atmospheric methane to reach a level of thousands of parts per million. Methane wouldn’t accumulate without limit because it can be decomposed by ultraviolet light in the upper atmosphere. Subsequent chemistry involving methane’s decomposition fragments can generate other hydrocarbons, i.e. chemicals made of hydrogen and carbon, including ethane gas (C2H6) and a smog of sooty particles. A combination of methane, ethane, and carbon dioxide—plus the water vapour that builds up in response to the temperature set by these non-condensable greenhouse gases—would have provided enough greenhouse effect to offset the fainter Sun. This assumes that there was a source of methane from microbial life, just like today, which is plausible because the metabolism for methane generation is ancient (see Chapter 5).
Of course, we could ask what the Earth’s early climate was like before life. In that case, CO2 probably controlled the greenhouse effect, as today. In fact, throughout much of Earth history, there has been a geological cycle of CO2 that regulated climate on timescales of about a million years. (It still operates today but is far too slow to counteract human-induced global warming.) Essentially, atmospheric CO2 dissolves in rainwater and reacts with silicate rocks on the continents. The dissolved carbon from this chemical weathering then travels down rivers to the oceans, where it ends up in rocks on the seafloor, such as in limestone, which is calcium carbonate (CaCO3). If deposition of carbonates were all that were happening, the Earth would lose atmospheric CO2 and freeze, but there is a mechanism that returns CO2 to the atmosphere. Seafloor carbonates are transported on slowly moving oceanic plates that descend beneath other plates in the process of subduction. An example today is the South Pacific ‘Nazca’ plate, which is sliding eastwards under Chile. Carbonates are squeezed and heated during subduction, causing them to decompose into CO2. Volcanism (where rocks melt) and metamorphism (where rocks are heated and pressurized but don’t melt) release CO2. The whole cycle of CO2 loss and replenishment is called the carbonate–silicate cycle and behaves like a thermostat. If the climate gets warm, more rainfall and faster weathering consume CO2 and cool the Earth. If the Earth becomes cold, CO2 removal from the dry air is slow, so CO2 accumulates from geological emissions, increasing the greenhouse effect.
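For readers who like to tinker, the thermostat can be captured in a toy numerical sketch, written here in Python. Everything in it is illustrative rather than realistic: the constants are invented, warming is assumed to rise logarithmically with the CO2 level, and weathering is assumed to double for every 10°C of warming (a commonly quoted rule of thumb). The point is only that, wherever CO2 starts, it relaxes to the level at which removal by weathering balances volcanic supply.

import math

# Toy carbonate-silicate thermostat. All constants are illustrative,
# not real Earth values.
VOLCANIC_SUPPLY = 1.0     # CO2 added by volcanoes, arbitrary units per step
T_BALANCE = 15.0          # temperature (deg C) at which removal equals supply
DEG_PER_DOUBLING = 4.0    # warming for each doubling of the CO2 level

def temperature(co2):
    # Warming rises logarithmically with CO2 (co2 = 1 gives T_BALANCE).
    return T_BALANCE + DEG_PER_DOUBLING * math.log2(co2)

def weathering(co2):
    # CO2 removal by silicate weathering doubles per 10 deg C of warming.
    return VOLCANIC_SUPPLY * 2.0 ** ((temperature(co2) - T_BALANCE) / 10.0)

co2 = 8.0                 # start far above the balance point
for step in range(100):
    co2 += VOLCANIC_SUPPLY - weathering(co2)   # supply minus removal
print(round(co2, 2), round(temperature(co2), 1))   # settles at 1.0 and 15.0

Starting instead below the balance point (say co2 = 0.2) converges to the same answer from the other side, which is exactly the thermostat behaviour just described.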
The carbonate–silicate cycle probably regulated climate before life originated. It then likely played an increasingly important role as a thermostat after levels of methane greenhouse gas declined in two steps when atmospheric oxygen concentrations increased, first around 2.4 Ga and then 750–580 Ma (Ma = millions of years ago).
A caveat is that the cycle may have operated differently in the Hadean and possibly the Archaean because plate tectonics—the large-scale motion of geologic plates that ride on convection cells within the mantle below—probably had a different style. Radioactive elements in the mantle generate heat when they decay, and more were decaying on the early Earth. So, on the one hand, a hotter, less rigid Hadean mantle should have allowed oceanic crust to sink more quickly. On the other hand, a hotter mantle should have produced more melting and thicker, warmer oceanic crust that was less prone to subduct. Overall, the presence of granites inferred from zircons (Chapter 3) implies that crust must have been buried somehow because granite is produced when sunken crust melts. However, exactly how tectonics operated on the early Earth remains an open question.
The Great Oxidation Event: a step towards complex life
The most drastic changes in the Earth’s atmospheric composition have been increases in oxygen, which were just as important for the evolution of life as variations in greenhouse gases. For most of Earth’s history, oxygen levels were so low that oxygen-breathing animals were impossible. The Great Oxidation Event is when the atmosphere first became oxygenated, 2.4–2.3 Ga. However, oxygen levels only reached somewhere between 0.2 and 2 per cent by volume, not today’s 21 per cent. Large animals were precluded until around 580 Ma, after oxygen had increased a second time to levels exceeding 3 per cent (Fig. 3).
Nearly all atmospheric oxygen (O2) is biological. A tiny amount is produced without life when ultraviolet sunlight breaks up water vapour molecules (H2O) in the upper atmosphere, causing them to release hydrogen. Net oxygen is left behind when the hydrogen escapes into space, thereby preventing water from being reconstituted. But abiotic oxygen production is small because the upper atmosphere is dry. Instead, the major source of oxygen is oxygenic photosynthesis, in which green plants, algae, and cyanobacteria use sunlight to split water into hydrogen and oxygen. These organisms combine the hydrogen with carbon dioxide to make organic matter, and they release O2 as waste.
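As a net budget, the abiotic route amounts to:

water + ultraviolet light = hydrogen (lost to space) + oxygen
2H2O + ultraviolet light = 2H2 + O2

though, as noted, the dryness of the upper atmosphere keeps this leak small.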
3. The approximate history of atmospheric oxygen, based on geologic evidence (ppm = parts per million; ppt = parts per trillion)
Another, more primitive, type of microbial photosynthesis that doesn’t split water or release oxygen is anoxygenic photosynthesis. In this case, biomass is made using sunlight and hydrogen, hydrogen sulphide, or dissolved iron in hydrothermal areas around volcanoes. Today, microbial scum grows this way in hot springs.
Before plants and algae evolved, the earliest oxygen-producing organisms were similar to modern cyanobacteria. Cyanobacteria are bluish-green bacteria that teem in today’s oceans and lakes. DNA studies show that a cyanobacteria-like microbe was the ancestor of plants, algae, and modern cyanobacteria. Consequently, we might suppose that the atmosphere became oxygenated once cyanobacteria evolved. However, evidence suggests that cyanobacteria were producing oxygen long before it flooded the atmosphere. A plausible explanation is that reductants, which are chemicals that consume oxygen, at first rapidly overwhelmed the oxygen. Reductants include gases such as hydrogen, carbon monoxide, and methane that come from volcanoes, geothermal areas, and seafloor vents. Distinctive iron-rich sedimentary rocks called banded iron formations, dating from the Archaean, show that there was considerable dissolved iron in the Archaean ocean, unlike today’s; this iron would also have reacted with oxygen.
The first signs of photosynthetic oxygen appear about 2.7–2.8 Ga, according to evidence from stromatolites and the presence of chemicals that became soluble by reacting with oxygen. In north-west Australia, stromatolites in rocks called the Tumbiana Formation once ringed ancient lakes. In theory, microbes might have built such stromatolites using anoxygenic photosynthesis, but there’s no evidence for the hydrothermal emissions needed for this metabolism. Instead, cyanobacteria using oxygenic photosynthesis probably constructed the stromatolites, consistent with tufts and pinnacles in the stromatolites that are produced when filaments of cyanobacteria glide towards sunlight. Furthermore, molybdenum and sulphur, which are elements only soluble when oxidized, become concentrated in seafloor sediments after 2.8 Ga to levels that are possible if microbes oxidized these elements on land using local sources of oxygen such as stromatolites.
Another reason why oxygen didn’t immediately accumulate in the Earth’s atmosphere is that its production is mostly a zero-sum process. When a molecule of oxygen is made, an accompanying molecule of organic carbon is generated, as summarized in the following equation:
carbon dioxide + water + sunlight = organic matter + oxygen
CO2 + H2O + sunlight = CH2O + O2
The process easily reverses, i.e. organic matter reacts with oxygen to regenerate carbon dioxide and water. Also, the Archaean atmosphere’s lack of oxygen meant that microbes could readily convert organic carbon into methane gas, as happens today in smelly, oxygen-free lake or seafloor sediments. In the air, methane could react with the oxygen to recreate water vapour and carbon dioxide. In both cases—direct reversal or indirect cancellation with methane—no net oxygen is produced, despite the presence of photosynthesis.
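Written as equations, the two cancellation routes are:

organic matter + oxygen = carbon dioxide + water
CH2O + O2 = CO2 + H2O

methane + oxygen = carbon dioxide + water
CH4 + 2O2 = CO2 + 2H2O

The first is the direct reversal of photosynthesis; the second is the indirect cancellation through methane.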
However, a tiny fraction (about 0.2 per cent, today) of organic carbon is buried in sediments and separated from oxygen, which prevents the two from recombining. Every atom of organic carbon that gets buried is equivalent to one net O2 molecule that’s freed up. Of course, this ‘freed’ oxygen can easily react with many other substances besides organic carbon, including geological gases and dissolved minerals such as iron. Nonetheless, a tipping point was reached when the flow of reductants to the atmosphere dropped below the net flow of oxygen associated with organic carbon burial, causing the Great Oxidation Event. Afterwards, a plateau of 0.2–2 per cent oxygen was reached, probably because oxygen that dissolved in rainwater reacted significantly with continental minerals, preventing its further accrual.
Indeed, a surge in oxidation is the evidence for the Great Oxidation. ‘Red beds’—rust-coloured continental surfaces that arise when iron minerals react with oxygen—appear. Today, reddish land surfaces are common, such as in the American south-west. A change in sulphur processing in the atmosphere also occurred. Before the Great Oxidation, when there was less than one part per million of atmospheric oxygen, red- and yellow-coloured particles of elemental sulphur literally fell out of the sky. They formed in chemical reactions at many kilometres altitude when sulphur dioxide gas from volcanoes was broken up by ultraviolet light. The falling sulphur particles carried a sulphur isotope composition into rocks that indicates atmospheric formation. After the Great Oxidation, oxygen combined with atmospheric sulphur, preventing elemental particles from forming. The sky cleared and the isotopic signature vanishes from sedimentary rocks.
With the Great Oxidation, Earth’s stratospheric ozone layer formed. This region of concentrated ozone between 20 and 30 kilometres in height shields the Earth’s surface from harmful ultraviolet rays. If there were no ozone layer, you would be severely sunburned in tens of seconds. Ozone is a molecule of three oxygen atoms (O3), which comes from oxygen. An oxygen molecule (O2) is split by sunlight into oxygen atoms (O), which then combine with other O2 molecules to make ozone (O3).
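Written as reactions, the ozone chemistry is:

O2 + ultraviolet light = O + O
O + O2 = O3 (a third molecule must take part in the collision to carry away excess energy)

Both steps consume oxygen, which is why an ozone shield had to await the Great Oxidation.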
The cause of the Great Oxidation is debated but must boil down to either a global increase in oxygen production from more organic burial or a decrease in oxygen consumption. The problem with the first idea is that organic matter extracts the light carbon isotope, carbon-12, from seawater, but seawater carbon, recorded in ancient limestones, doesn’t steadily decrease in carbon-12 content. That leaves the second idea.
But what would have caused the flow of reductants to diminish? One possibility is that the relatively large abundances of hydrogen-bearing gases such as methane and hydrogen (which are inevitable in an atmosphere without oxygen) meant that hydrogen escaped rapidly from the upper atmosphere into space. The loss of hydrogen oxidized the solid planet, and because oxidized rocks hold fewer reductants, their further release was gradually throttled. Oxidation happens because you can generally trace escaping hydrogen back to its source in water (H2O), even if hydrogen atoms go through methane (CH4) or hydrogen (H2) intermediaries. So when hydrogen leaves, oxygen gets left behind where the hydrogen originated. Water vapour itself has trouble reaching the upper atmosphere because it condenses into clouds, but other hydrogen-bearing gases such as methane don’t condense and sneak the hydrogen out into space.
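A representative, deliberately schematic example of this planetary oxidation is iron minerals reacting with water, followed by the hydrogen escaping:

iron oxide + water = more oxidized iron oxide + hydrogen (lost to space)
2FeO + H2O = Fe2O3 + H2

The rock is left holding the oxygen that the departed hydrogen once shared with it in water.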
A rough analogy for the Great Oxidation is an acid–base titration of the sort performed in high school by dripping acid into an alkaline solution. Suddenly, the solution changes colour, typically from clear to red. The ancient atmosphere passed through a similar transition from hydrogen rich to oxygen rich. Rather than acid versus base, we call such a transition a ‘redox titration’ because it’s a competition between reductants such as hydrogen and oxidants such as oxygen.
A boring billion years ended by the advent of animal life
The ocean and land adjusted to the change of the Great Oxidation, and by about 1.8 Ga, atmospheric oxygen had settled down and remained between 0.2 and 2 per cent for an amazingly long time. The evolution towards complex life was slow, perhaps because anoxic waters, often rich in dissolved iron or hydrogen sulphide, underlay a moderately oxygenated surface ocean. Such anoxic conditions are toxic for complex life. In fact, evolution was so sluggish that the interval from 1.8 to 0.8 Ga is called the boring billion. According to one scientific paper, ‘never in the course of Earth’s history did so little happen to so much for so long’!