A History of the World
All that said, the prospect of machines matching and outpacing human intelligence in many fields is clearly a real one, and the change is taking place all around us today. The interweaving of billions of people’s imaginative lives through the Internet is the most obvious way technology has changed our lives recently; but Artificial Intelligence, or AI, may soon prove much more significant. Major advances in enabling machines to ‘see’ (one of the hardest problems) and to respond to natural human language are occurring now. New insights into the way the chemical-biological human brain processes information, and how this might be mimicked by later generations of computer, are lively subjects in universities and laboratories. So Kasparov’s look of astonishment when Deep Blue made its crucial move ought to be remembered as a special moment in human history.
The dream of machines able to match human intelligence is an ancient one, but it only became a serious scientific subject in the 1950s, thanks to advances in computer science and, to a lesser extent, in the understanding of the brain. Alan Turing, the brilliant scientist and pioneer in computing who was critical to Britain’s wartime organization at Bletchley Park (which broke secret German codes), became fascinated with it. He had worked before the war on computer theory; in 1936 he had proposed what became known as a ‘Turing machine’, which would read and write symbols on a long tape, following a fixed table of rules, to carry out mathematical calculations. At this time, punched cards and vacuum tubes were the best technology available, but war tends to accelerate invention, and the ‘Colossus’ machines used at Bletchley Park to break Nazi codes are generally regarded as the world’s first proper computers, in the sense of being programmable and digital, as well as electronic.
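Turing’s abstract machine is simple enough to sketch in a few lines of code. What follows is my own minimal illustration in Python, not Turing’s formulation: the rule table is an invented example that adds one to a binary number written on the tape.

```python
# A toy Turing machine: a head reads and writes symbols on a tape,
# moving left or right according to a fixed table of rules.
# This rule table (an invented example) adds 1 to a binary number.

def run(tape, rules, state="start", head=0):
    while state != "halt":
        symbol = tape.get(head, "_")                    # "_" is a blank cell
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return tape

# (state, symbol read) -> (symbol to write, head move, next state)
rules = {
    ("start", "0"): ("0", "R", "start"),   # scan right over the digits
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),   # past the last digit: turn round
    ("carry", "1"): ("0", "L", "carry"),   # 1 plus carry = 0, carry the 1
    ("carry", "0"): ("1", "L", "halt"),    # 0 plus carry = 1, finished
    ("carry", "_"): ("1", "L", "halt"),    # a new leading digit is needed
}

tape = {i: s for i, s in enumerate("1011")}             # binary 1011 = 11
result = run(tape, rules)
print("".join(result[i] for i in sorted(result) if result[i] != "_"))
# prints 1100 (binary for 12)
```

Everything a modern digital computer does can, in principle, be broken down into steps of this kind, which is why so simple an idea proved so powerful.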
In 1950, Turing proposed his famous ‘Turing test’. This posited that if a judge, conversing by keyboard with a human being and a computer whose identities were hidden, was unable to tell which was which, the computer had passed the test. This, he suggested, was the sensible, measurable answer to the question of whether machines could ever think, or achieve consciousness. He did not live to see the advances that would follow. Turing was gay, and in 1952 he was convicted of ‘gross indecency’ with another man and obliged to accept chemical castration as part of his punishment, as well as losing his clearance to work on government projects. He died of cyanide poisoning, probably an act of suicide, in 1954.
Two years later a conference took place at Dartmouth College, New Hampshire, where Marvin Minsky, one of the fathers of AI, and John McCarthy, the computer scientist who coined the term, led discussions on natural language, computer programming and mathematical logic. It was a breakthrough moment for the new discipline. Back then, the optimism of people like Minsky and McCarthy ran far ahead of what was possible. Urged on by fiction writers like Arthur C. Clarke, researchers predicted in the late 1950s and 60s that artificial intelligence would have arrived by the 1970s or 80s. Turing himself had focused on chess as a useful test-system for AI because of its complex logic and patterning; in 1958 two key scientists at Carnegie Mellon University, Pittsburgh, had predicted that by 1968 a digital computer would be the world’s chess champion.56 All that had held them back was lack of computing power – the physical slowness of the machines available.
But they were working on it. After transistors replaced the old vacuum tubes, the problem was packing together and powering enough of them. Transistors are basic semiconductor devices that switch electronic signals, and thus the essential components of digital computing. The first generation used copper wires and were relatively slow. Many people worked on the problem, but it was an employee of Texas Instruments called Jack Kilby who is generally credited with the decisive advance. In 1958 he etched transistor elements onto slices of germanium, a semi-metal in the same chemical group as carbon and silicon, and connected them with fine gold wires to an oscillator and amplifier. Silicon would soon prove to work better, but the ‘chip’ had been born – in essence, pieces of cooked, sliced sand that are engraved using ultraviolet light and gas to turn them into electrical switches. In 1965 Gordon Moore, later a co-founder of Intel Corporation, predicted a repeated doubling of the number of transistors that could be fitted onto a circuit (initially every year, a rate he later revised to roughly every two years), and though widely criticized for this explosive exponential prediction, he has been proved largely correct. By the late 1970s, entire microprocessors were being put on a single chip. For the IBM team this was essential to their chess-playing machine, which began with a circuit board of six thousand transistors.
What can we expect next? The enthusiasts rest heavily on the idea of acceleration, or exponential growth – the notion that technological progress multiplies by a constant figure, rather than simply adding a constant (as in linear growth). The difference is between a steadily rising straight line and one that starts almost flat and then suddenly erupts upwards into a near-vertical ‘take-off’. A graph of human population increase during the timespan described in this book shows something like this. Moore’s law on computing power does the same. More generally and unscientifically, much of the underlying shape of the story told here is of exponential growth – the millennia of hunter-gathering, followed by the relative speed of the farming revolution, then the ever-faster hurtle through towns, cities, empires and industrial technology.
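To put the distinction in symbols (my gloss, not the author’s): linear growth adds a fixed amount at each step, while exponential growth multiplies by a fixed ratio, and the gap between the two eventually becomes overwhelming.

```latex
% Linear growth: add a constant c at each step.
% Exponential growth: multiply by a constant ratio r at each step.
\[
x_n = x_0 + c\,n \quad \text{(linear)}
\qquad\text{versus}\qquad
x_n = x_0\,r^{\,n} \quad \text{(exponential)}
\]
```

With Moore’s doubling ratio of r = 2, an illustrative circuit of 64 transistors grows to 64 × 2¹⁰ = 65,536 after ten doublings, while linear growth adding 64 a year would reach only 704 in the same time.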
The scientist and writer Ray Kurzweil has popularized the phrase ‘the Singularity’ – dignified, like God, with a capital letter – which he defines as the time when the pace of change is so rapid and profound that human life is transformed. The idea came from a mathematician and science-fiction writer called Vernor Vinge, who boldly plumped for the year 2030 when ‘computer super-intelligence’ would give birth to the Singularity, leading to a time when large computer networks might wake up as a superhuman intelligence. The language is close to religious, and may yet provoke a new religion or cult. Kurzweil proclaims ‘a transforming event looming in the first half of the twenty-first century’. Like a black hole changing the patterns of matter and energy, ‘this impending Singularity in our future is increasingly transforming every institution and aspect of human life, from sexuality to spirituality’.
Popular culture has been quick to seize on the darker possibilities of this phenomenon for human freedom. If Shakespeare wrote his history plays partly as a warning to Tudor audiences about the future, the Hollywood creators of The Terminator, Blade Runner, The Matrix and many other films are warning their twenty-first-century audiences about the possible outcomes of exponential growth in computer intelligence. The merger of the human and the human-built would mean that people could go beyond their frail, biological bodies – in terms not just of how long they live but of how well they think. ‘Our thinking,’ Kurzweil writes, ‘is extremely slow: the basic neural transactions are several million times slower than contemporary electronic circuits. That makes our physiological bandwidth extremely limited compared to the exponential growth of the overall human knowledge base . . . The Singularity will allow us to transcend these limitations of our biological bodies and brains.’57
There are plenty of sceptics, who argue that machines will continue to be useful tools for mankind, perhaps soon driving our cars and trains and cleaning our homes, just as today’s machines stand in for factory workers or researchers; but that they will not achieve consciousness, or threaten our human control of the planet. Jack Schwartz, an American mathematician, forcefully argued that computers cannot, like brains, take ‘relatively disorganized information’ and use internally organized structures to generate actions and thinking in the real world. But he also warned that if AI really is happening, ‘Since man’s near-monopoly of all higher forms of intelligence has been one of the most basic facts of human existence throughout the past history of this planet, such developments would clearly create a new economics, a new sociology, and a new history.’58
Though scientists vigorously debate the meaning of ‘consciousness’ – is it anything more than very integrated and sophisticated neural networks dealing with information? – the proposed Singularity touches the depths of human self-understanding. Technologies are never neutral in their effects, nor predictable. Early telephones were proposed as devices for listening at home to concerts of classical music, while early enthusiasts for the Internet saw it as a worldwide academic library rather than as a place for social networking, politics or porn. Some scientists are starting to turn their attention to whether artificial or machine intelligence can be programmed to be wise, as well as to learn and self-replicate.
The key underlying theme of this history has been the mismatch between growth in mankind’s technical ability to shape the world (in which AI follows on from the growing of fatter carrots, the invention of gunpowder and steam engines) and the lack of development in mankind’s political ability to govern itself successfully. Good government generally leads to technical advance, since free speech, reliable patent law, the ability to make profits and the guarantee of personal security all encourage inventors. But it does not necessarily work the other way: technical advance does not produce political virtue. Bad government, whether that means oppression, or a heedless enthusiasm for consumption today with no thought for later generations, or merely corruption, is more widespread; and the technological fruits of good government tend to fall into bad hands.
Garry Kasparov played, thought and lost that game in 1997 as a mercurial, flawed human being. Feng-Hsiung Hsu was right: Kasparov lost not to a machine but to tool-making humans, as human as him, as passionate as him. Later they were able to dismantle the computer they had created with the sole intention of beating him, and put it away. But like nuclear weapons, or the Internet, the biggest technological advances cannot be easily dismantled and set aside. They are handed over to the dangerous and uncertain arena of politics. So it is good to report that Kasparov, having retired from chess, then threw himself into the cause of political reform in his Russian homeland, where he has become a vocal, and apparently fearless, critic of President Putin’s authoritarian government.
A Fair Field Full of Folk
In the late 1300s a clerk from rural England, William Langland, had a vision of the world’s crowded humanity, or as he put it in his Christian poem Piers Plowman, ‘a fair field full of folk’. He could never have envisioned just how full the field would get. In his day, there were perhaps twice as many people alive as there had been when Jesus Christ was born. Since 1950, the population has grown at up to a hundred times the speed it grew after the invention of agriculture, and ten thousand times as fast as it did before that. It is now about seven billion, or seven times what it was as the industrial revolution got going.
This is a great human achievement. Those who say there are simply too many of us, too many individual life experiences, have got to imagine wiping out many billions of other human beings (rarely themselves, or their families) – a genocidal vision outdoing any of the maniac rulers described in this book. The huge increase in population in the last century, and continuing in this one, is a problem caused by success – by the success of vaccination and clean-water programmes, and of the ‘green revolution’ in agriculture. Without the latter, which after 1940 brought mechanization, new crop varieties, irrigation and fertilizers, it has been estimated that mankind would have needed extra farmland the size of North America to feed itself. To put it another way, some two billion people are alive because of it. Yet most observers believe so many billions of humans are too many for the planet to sustain indefinitely; we need too much water, consume too much carbon-based energy, and take over too much land to feed ourselves, for the biosphere to cope.
By far the best-known problem is climate change, the effect of a sharp rise in the amount of carbon dioxide in the atmosphere. This is mainly caused by the burning of fossil fuels; as a greenhouse gas, carbon dioxide stops the planet cooling itself as efficiently as it needs to, thereby raising temperatures. By how much, and with exactly what effect, remains unknown. An increase in ‘wild’ or unpredictable weather patterns may be one of the consequences. Looking at possible projections, this is either a problem rather overstated today, one that can be dealt with by greener ways of generating energy, or an imminent catastrophe that could make this the last human century. But the scientific consensus tilts towards the alarming end of the spectrum. Another English visionary, the scientist James Lovelock, who pioneered a way of thinking about the Earth as a living entity (it was always a metaphor), speaks for many who are terrified about the possible effects. Unlike Langland, Lovelock talks of the planet suffering ‘a fever brought on by a plague of people’.59
Climate change is only the most discussed of the effects of the vast leap in human numbers. Though the planet is girdled by water, relatively little of it is fresh water and readily available for humans to use for growing food, for drinking and in industry. There are now severe water shortages in many parts of the world, particularly Asia and Africa, as more and more people suck from rivers that grow no larger – or, because of the construction of huge dams, have grown smaller.
The quality of soil is another looming problem. Soil is where the world’s eighty-mile crust of rock meets the atmosphere – where geology meets biology. It is both very thin and utterly precious. The historian J.R. McNeill describes it beautifully: ‘It consists of mineral particles, organic matter, gases, and a swarm of tiny living things. It is a thin skin, rarely more than hip deep, and usually much less. Soil takes centuries or millennia to form. Eventually it all ends up in the sea through erosion. In the interval between formation and erosion, it is basic to human survival.’60 Even after the breakthroughs of Haber and others (mentioned earlier), the degradation of soils across much of the world has reached a point where intensive use of fertilizer no longer improves crop yields. In Africa, food production per head has actually fallen since 1960; in China, about a third of arable land has been abandoned because of erosion.
Then there are the problems of deforestation and the extinction of species. Humans have always destroyed forests, both for the wood itself (a problem for the ancient Greeks, the Nazca and the Japanese, as we have seen) and to expand their farmland. Northern Europe was once covered in trees. But the deforestation of the twentieth century was particularly dramatic, removing perhaps half of the remaining total, and it was concentrated in tropical areas, notably the South American rainforests of the Amazon and Orinoco, and in West Africa and Indonesia. The importance of forests for maintaining the health of the atmosphere, and for coping with the carbon problem, is now well understood; but these rainforests contain a very high proportion of endangered plant, insect and animal species, which may in turn harbour many useful secrets for human survival. If, as many scientists predict, around 30 per cent of current species were to become extinct over the next century, that would be a huge planetary event, another mistake by the clever ape.
Two last problems must be added to this rather woeful litany. Overfishing and the acidification of the oceans are causing an environmental disaster that would be a worldwide scandal if we could see clearly below the waves; and it is a disaster affecting an important source of food. Add to this the atmospheric pollution in the megacities that increasingly dominate as human habitations (more than half of us now live in cities), which has caused a huge loss of life, albeit mostly among the old and the weak. McNeill estimates a twentieth-century toll from air pollution of up to forty million people, equivalent to the combined casualties of both world wars, or about the same as the 1918–19 flu pandemic.
Like other problems, this was a ‘failure of success’, in this case caused by the arrival of cars, air travel and a materially richer lifestyle; many of those most affected by pollution have migrated from villages and small towns to the cities, prepared to live in slums, favelas or shanty towns simply to have the chance to exploit the greater opportunities of urban life. Across the globe, the move from the countryside to cities (most dramatic in China and India) is the single biggest migration in human history.
In finishing this history I was greatly tempted to find a subject other than ‘the environment’. Warnings of global catastrophe hang over us everywhere, darkening our imaginations. Yet the fourfold increase in humanity during the past century is surely the biggest single piece of news. The fresh problems it throws up cannot be smuggled into other stories, or relegated. It is the final evidence for one of the major themes of this book, mankind’s extraordinary technical intelligence. It is the higher end of the curve that began with fire and hand-axes, moved through the selection of grasses and the domestication of animals, and advanced to steam engines, vaccination and beyond.
But it also confronts us with the second obvious theme, which is the long lag in our advancing political and social intelligence. Only by doing better here can we solve the failures of success.
The news is not all bad, by any means. Let us return to Steven Pinker, who in The Better Angels of Our Nature pointed out that we are far less likely to die violently today than ever before. Overall, early societies had much higher rates of killing than later ones. Some criticized his arguments about the death-rates in hunter-gatherer societies (dealt with earlier in this book), but his statistical evidence from medieval times onward has been generally accepted. The decrease in killings has come about partly because, as states grew larger in size and smaller in number, there were fewer wars between them. Partly it reflects the increase in law and order, especially in cities. Partly, too, it reflects the humanitarian movements of modern times, from the Enlightenment campaigns against slavery and torture to the growing intolerance of domestic violence today. As we get to know more about each other’s lives and live in more heavily managed, crowded societies, we seem to be becoming less violent, and kinder.