Seeing Further

by Bill Bryson


  From the utopianism of the 1930s to the bland consumerism of the 1960s and the sleek monochrome minimalism of the 1980s, the mood of the developed world can be gauged from its polymer consumables. Today our bulk plastics are struggling towards a more environmentally friendly image, being biodegradable, made from non-oil-based ingredients, or more easily recycled. Meanwhile, high-tech plastics infiltrate the information technology once monopolised by silicon. Electronic circuits are being written with plastic, manufactured with cheap printing technology instead of demanding expensive high-vacuum conditions. Glowing television screens can be created from all-plastic light-emitting diodes on sheets as thin and flexible as paper.

  Even paper itself is being reinvented, partly in plastic, for the information age. It is one of those fabrics that are hard to improve: its cheapness, durability, portability and readability (thanks to the high brightness contrast with ink, whether in bright or dim light) have secured the survival of the book and the newspaper in the digital age. But now the benefits of information technology are being combined with those of paper in a material commonly called e-paper or (to turn the idea on its head) e-ink: a plastic sheet with the lightness and appearance of paper on which the ink can be rearranged electronically. A sheet of the stuff, connected to a microchip loaded with data, is an entire library. These heady possibilities should come with a warning, however, for Bacon was right to say that power stems from knowledge and not mere information.

  Engineering Life

  We have also means to make divers plants rise by mixtures of earths without seeds; and likewise to make divers new plants, differing from the vulgar; and to make one tree or plant turn into another. We have also parks and enclosures of all sorts of beasts and birds … By art likewise, we make them greater or taller than their kind is; and contrariwise dwarf them, and stay their growth: we make them more fruitful and bearing than their kind is; and contrariwise barren and not generative. Also we make them differ in colour, shape, activity, many ways … Neither do we this by chance, but we know beforehand, of what matter and commixture what kind of those creatures will arise.

  Of all the ‘marvels’ in Bensalem, these are surely the most chilling, not least because of the apparent nonchalance with which the priests of Salomon’s House tamper with living nature. Here most of all Bacon’s treatise takes on a Faustian cast, and it is an easy matter to trace the path from New Atlantis to Mary Shelley, whose fantastic fable of life remade tapped into centuries of apprehension about the consequences of scientific hubris.

  Of course, we cannot read Bacon’s comments now without thinking of biotechnology and genetic engineering, which permit the ‘commixture’ of creatures: spider genes in goats, plants loaded with the genetic defensive armoury of quite different (even animal) species. We can hollow out animal eggs and load them up with human genomes, and then grow them into embryos. And this is just the beginning. It is probably a matter of a few years before new species are designed on the blackboard and manufactured with genomes synthesised in the laboratory, collections of genes handpicked more fastidiously than anything selective breeding can achieve. These genomes might be transferred into emptied cells, or simply allowed to override the existing genetic instruction manuals of ordinary bacteria. This ‘synthetic biology’ will represent a new origin of life, after a fashion: the first organisms outside the great chain of being that began almost four billion years ago. And tellingly, such efforts are now framed in terms that relate more to an information age than to the molecular biology of Crick and Watson: organisms, we are told, are being ‘reprogrammed’ with new ‘software’, and then ‘rebooted’ to get them running. The redesign of life ‘from scratch’ will be accompanied by well-motivated concerns about safety and ethics, but it will also confront us with deeper questions that we have previously preferred to keep at arm’s length: What is life? When does it begin? What is ‘natural’?

  These questions that weigh so heavily for us now might have been regarded as far less burdensome within the uncompromisingly mechanistic worldview of Francis Bacon. Like his contemporaries René Descartes and Thomas Hobbes, he considered all phenomena, whether the workings of the human body or of the stars, to have rational, material causes. Everything was so many atoms, colliding in insensate profusion. Moreover, Bacon’s outlook (which accords with that of most scientists and engineers throughout the ages) is essentially optimistic, guided by a belief that the human lot can be improved by technical means. He was eager to free the sciences from religious shackles, to abandon the hierarchy of the Earth and heavens and a reliance on teleological explanations.

  By and large, those aspirations still underpin efforts to engineer biology. Some of biotechnology’s earliest successes stemmed from an image of living cells, primarily bacteria, as microscopic factories for manufacturing sorely needed drugs. The development in the 1970s of recombinant DNA technology, which enabled genes to be sliced out of the genome of one organism and spliced into that of another, using natural enzymes that conduct such cutting and pasting, enabled human insulin to be derived by fermentation of genetically modified Escherichia coli bacteria. Sights are now set not just on pharmaceuticals but on cleaner fuels, greener manufacturing of materials, biological clean-up of environmental contaminants, even ‘wet nano-robots’ that engage hand-to-hand with disease agents.
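
  The ‘cutting and pasting’ described above can be caricatured in a few lines of code. What follows is a toy sketch in Python, not real molecular biology: the plasmid and gene sequences are invented, and the enzyme and ligase steps are reduced to string operations, but it captures the logic of recombinant DNA – find a recognition site, cut, and splice in the foreign gene.

    # Toy model of recombinant DNA 'cut and paste' - illustration only.
    # EcoRI is a real restriction enzyme recognising GAATTC, but these
    # sequences are invented and the chemistry is reduced to strings.
    ECORI_SITE = "GAATTC"

    def splice(plasmid: str, insert: str) -> str:
        """Cut the plasmid at its first EcoRI site and ligate the insert in."""
        cut = plasmid.find(ECORI_SITE)
        if cut == -1:
            raise ValueError("no EcoRI recognition site in plasmid")
        cut += 1  # EcoRI cuts this strand between the G and the AATTC
        return plasmid[:cut] + insert + plasmid[cut:]

    plasmid = "ATGCGAATTCTTAA"    # hypothetical bacterial plasmid
    gene = "AAACCCGGG"            # hypothetical gene of interest
    print(splice(plasmid, gene))  # ATGCGAAACCCGGGAATTCTTAA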

  There was, as we can see, nothing new in the materialistic conception of life that enabled biotechnologists to view it as amenable to principles of construction and design. And indeed the reception of this ‘cut-and-paste’ approach to the living world was, all things considered, relatively muted: so long as it declines to re-engineer human beings (and perhaps other higher organisms), biotechnology tends to be seen as just another industrial process, more akin to brewing than to vivisection. While opponents of genetic modification have played on philosophically suspect notions of the ‘natural’ and ‘unnatural’, most of the resistance to its introduction has been motivated by concerns about commercial ownership and responsibility, and about public health: issues that might reasonably be raised (and often are) for any new technology. As far as the ‘sanctity of life’ is concerned, public opinion often shows a solipsistic parochialism. Yet if there is one lesson to be drawn from the controversy in Europe about genetically modified organisms (apart from a reminder of the unwelcome influence of mass media), it is that technologies are unlikely to gain easy acceptance until they can demonstrate tangible benefits to potential consumers.

  All the same, scientists have been revealingly eager to exploit public sympathy, or at least tolerance, towards ‘pure’ science in the promotion of biotechnological initiatives with decidedly applied goals. The Human Genome Project was in truth something of a mixture of both, to the extent that the distinction is meaningful at all; but the rhetoric with which the project advertised itself was concerned with uncovering the secrets – in the deeply misleading metaphor, ‘reading the book’ – of life. The project was entirely dependent on technical advances, and it gave rise to no new theories but rather to an impressive and immensely useful (but only patchily understood) data bank. The frequent comparisons with the Moon landings were more apt than perhaps intended, since both were feats of technical prowess more than they were voyages into the scientific unknown.

  Strikingly, then, the extension of engineering ideas to biology has so far been regarded with scarcely more distaste or disdain than is reserved for engineering more generally, and the complaints are often of much the same nature. Few even perceive the philosophical boldness of a word such as ‘bioengineering’, which is commonly accepted with the indifference one might expect to see accorded to a branch of automotive engineering. Perhaps we are more the heirs to Bacon’s vision than we realise. Even concerns about the prospect of the de novo creation of life are so far voiced only by rather minor pressure groups, and they too tend to focus on safety issues. Battle lines are only really drawn when biological technologies impinge on human life, as in the cases of stem-cell technology, embryo research and assisted conception. Only here have certain traditional belief systems deemed it necessary to impose assumptions about what life consists in.

  Distorted, dogmatic, and dangerous though such assumptions may sometimes be, buried within them are some genuine questions about the ethical responsibilities of the engineer. Opinions may differ on the boundaries of human dignity, but it is surely right that these boundaries feature in any consideration of what we might and might not make. And the desirability of a technological goal is not to be determined simply by a health-and-safety or cost-benefit analysis, but by a careful consideration of the difficult question of whether it seems likely in the long term, on balance, to serve human welfare and well-being. The disturbing aspect of Bacon’s utopian scientific writings is often not so much what they consider possible, but how readily he assumes that humankind has the wisdom to handle such power.

  WHY ENGINEERING MATTERS

  As I write, the Large Hadron Collider, the world’s biggest atom-smasher at CERN in Geneva, has switched on with an almost unprecedented media jamboree*. Asked about the practical value of it all, Stephen Hawking has said that ‘modern society is based on advances in pure science that were not foreseen to lead to practical applications’. It’s a common claim, and it subtly reinforces the hierarchy that Medawar identified: technology and engineering are the humble offspring of pure science, the casual cast-offs of a more elevated pursuit.

  I don’t believe that such pronouncements are intended to denigrate applied science as an intellectual activity; they merely speak into a culture in which that has already happened. Pure science undoubtedly does lead to applied spin-offs, but this is not the norm. Rather, most of our technology has come from explicit and painstaking efforts to develop it. And this is simply a part of the scientific enterprise. A dividing line between pure and applied science makes no sense at all, running as it does in a convoluted path through disciplines, departments, even individual scientific papers and careers. Research aimed at applications fills the pages of the leading journals in physics, chemistry and the life and Earth sciences; curiosity-driven research with no real practical value is abundant in the ‘applied’ literature of the materials, biotechnological and engineering sciences. The fact that ‘pure’ and ‘applied’ science are useful and meaningful terms seduces us sometimes into thinking that they are real, absolute and distinct categories.

  This isn’t merely a semantic issue. Concerns about a decline in university admissions for science and engineering are more or less universal among the various disciplines, but there is good reason to suspect that the sciences deemed to be more ‘pure’ retain a greater attraction for the brightest students among those who still gravitate in this direction – even though employment prospects for an engineer are better than for a string theorist (who in recent years has seemed likely to end up on Wall Street). In 1998 the President of the US National Academy of Engineering, William A. Wulf, stated: ‘We need to understand why in a society so dependent on technology, a society that benefits so richly from the results of engineering, a society that rewards engineers so well, engineering isn’t perceived as a desirable profession.’ Yet many of the most pressing global problems – clean energy generation, the management of water resources, securing nuclear non-proliferation, creating less waste and more efficient use of material resources – cry out for technological expertise.

  There’s no simple formula for the rehabilitation of the engineering, synthetic and technological (in the oldest sense) aspects of science. Celebrating their achievements is all very well, although it remains a conundrum why, for example, the British people seem to hold Isambard Kingdom Brunel in such high esteem without showing much inclination to follow in his footsteps. But no amount of flag-waving can disguise the fact that the practical sciences, the craft sciences if you will, have always had and will always have a double-edged nature: along with life-saving drugs, safer transportation, more accessible information and solar power come pollution, landfills and nuclear weapons. The conventional talk of ‘dual-use’ technology should rather acknowledge the reality of a thousand uses, guided by as many agendas. As US writer Richard Powers puts it in his 1998 novel Gain, an exploration of the social politics of industrial chemistry, ‘People want everything. That’s their problem.’

  Science does itself no favours when it tries to skip away from such complex issues with talk of ‘pure knowledge’, untainted by the marketplace. That’s a privileged position enjoyed by a very few of its practitioners, who even then cannot be sure that their seemingly arcane ideas won’t end up guiding the fabrication and operation of some device or other. Science is about making stuff, just as much as it is about understanding stuff. The two go hand in hand, and always have done. Francis Bacon implied as much; but in the twenty-first century, disciplines such as nanotechnology, quantum information technology and synthetic biology are blurring as never before the false distinctions between thinking and doing. So what shall we make tomorrow?

  * At the time of publication, the hiatus caused by the Large Hadron Collider’s subsequent malfunction is almost at an end.

  14 PAUL DAVIES

  JUST TYPICAL: OUR CHANGING PLACE IN THE UNIVERSE

  Paul Davies is a British-born theoretical physicist, cosmologist, astrobiologist and best-selling author. He is Director of the Beyond Center for Fundamental Concepts in Science and co-Director of the Cosmology Initiative, both at Arizona State University. He has written 28 books including The Mind of God, About Time, How to Build a Time Machine, The Fifth Miracle and The Goldilocks Enigma. His latest book, The Eerie Silence, is about the search for intelligent life in the universe.

  THE DEVELOPMENT OF COSMOLOGY HAS CONFIRMED OVER AND OVER AGAIN THAT WE DO NOT OCCUPY A CENTRAL POSITION IN THE GREAT SCHEME OF THINGS. BUT AS PAUL DAVIES EXPLAINS, THE STORY OF OUR REALISATION THAT WE HOLD NO SPECIAL PLACE IN THE COSMOS COULD YET BE A TALE WITH A TWIST.

  When the Royal Society was founded 350 years ago, the Copernican revolution was only a few decades old. Before Copernicus, many people believed the Earth lay at the centre of the universe and mankind was the pinnacle of creation. The discovery that Earth is but one planet among several orbiting the Sun came as a shock and forced human beings to drastically re-evaluate their place in the universe. It is a lesson that has been repeated often in the centuries that followed. The pivotal change that occurred with Copernicus was so far-reaching that scientists refer to ‘the Copernican principle’ quite generally to mean that our situation in the universe should not be in any way special or privileged. Expressed simply, the Copernican principle asserts that we are typical. Some of the deepest unanswered questions in cosmology and astrobiology in the twenty-first century concern whether and when that principle might break down.

  The Copernican principle has been a remarkably reliable guide when applied to astronomy and cosmology, although it got off to a bad start. In the seventeenth century it was widely believed that the other planets and moons in the solar system resembled Earth, even to the extent of being inhabited by plants, animals and sentient beings. Kepler, for example, wrote a treatise about the denizens of Earth’s moon. Galileo pioneered the use of the telescope to study the heavens, and it soon became clear that the other planets differ in many respects from Earth; within the solar system, then, Earth turns out to be a very atypical planet. But Galileo also discovered that the Sun is an undistinguished star among a vast number that collectively make up the Milky Way galaxy. Later measurements established that the galaxy contains about four hundred billion stars in total, arranged in a disc shape and embellished by spiral arms sprouting from a central spherical bulge. The entire assemblage is about one hundred thousand light years across.

  At the turn of the twentieth century, it was widely believed that the Copernican principle might soon fail in two key respects. The first concerned the distribution of stars in the universe. The Dutch astronomer Jacobus Kapteyn made a painstaking analysis and concluded that the Sun lay in a privileged position near the centre of the Milky Way, with the galaxy a sort of ‘island universe’ surrounded by a seemingly limitless void. But within a decade or two this model was refuted. As far as we can tell, there is after all nothing very special about the location of the solar system. It actually resides in one of the spiral arms about twenty-five thousand light years from the galactic centre – middle suburbia, if you like.

  Related to the question of the structure of the galaxy was a controversy concerning the wispy patches of light painstakingly catalogued in the eighteenth century by Frenchman Charles Messier. Some astronomers maintained they were far-flung galaxies in their own right – other ‘Milky Ways’. The alternative view was that these nebulae were clouds of glowing gas located within the Milky Way. The dispute was finally settled when telescopes became powerful enough to image individual stars in some of the nebulae, revealing them to be other ‘island universes’, or galaxies, many very similar to the Milky Way. We now know that the Milky Way is in fact a typical galaxy, just as the Sun is a typical star, so the Copernican principle works on an extra-galactic scale too.

  At the same time as the true nature of extra-galactic nebulae was being established, similar observations revealed that the other galaxies are in motion with respect to ours and each other, a feature that could readily be deduced from the Doppler shift in the spectral lines of their light. Edwin Hubble in the USA found a systematic pattern to this motion, which can be summarised by saying that the entire universe is expanding: the galaxies are, on average, moving away from each other. Running ‘the great cosmic movie’ backwards suggests that, some billions of years ago, the matter in the universe was compressed into a small volume of space and was expanding very rapidly, a state of affairs now called the big bang.
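
  The chain of reasoning can be made concrete with a little arithmetic. The sketch below, in Python, is illustrative only: the spectral line, its observed wavelength and the value of the Hubble constant are assumptions made for the example, but the two steps – the Doppler shift giving a recession velocity, and Hubble’s law (velocity proportional to distance) turning that into a distance – are the real ones.

    # Illustrative only: the wavelengths and H0 below are assumed values.
    C = 299_792.458   # speed of light, km/s
    H0 = 70.0         # assumed Hubble constant, km/s per megaparsec

    def recession_velocity(observed_nm: float, rest_nm: float) -> float:
        """Non-relativistic Doppler estimate: v = c * (shift / rest)."""
        return C * (observed_nm - rest_nm) / rest_nm

    # A hydrogen line with rest wavelength 656.3 nm, observed at 660.0 nm:
    v = recession_velocity(660.0, 656.3)
    d = v / H0  # Hubble's law: v = H0 * d, so d = v / H0
    print(f"receding at ~{v:.0f} km/s, roughly {d:.0f} megaparsecs away")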

 
