Homo Deus
If the algorithm makes the right decisions, it could accumulate a fortune, which it could then invest as it sees fit, perhaps buying your house and becoming your landlord. If you infringe on the algorithm’s legal rights – say, by not paying rent – the algorithm could hire lawyers and sue you in court. If such algorithms consistently outperform human fund managers, we might end up with an algorithmic upper class owning most of our planet. This may sound impossible, but before dismissing the idea, remember that most of our planet is already legally owned by non-human intersubjective entities, namely nations and corporations. Indeed, 5,000 years ago much of Sumer was owned by imaginary gods such as Enki and Inanna. If gods can possess land and employ people, why not algorithms?
So what will people do? Art is often said to provide us with our ultimate (and uniquely human) sanctuary. In a world where computers replace doctors, drivers, teachers and even landlords, everyone would become an artist. Yet it is hard to see why artistic creation will be safe from the algorithms. Why are we so sure computers will be unable to better us in the composition of music? According to the life sciences, art is not the product of some enchanted spirit or metaphysical soul, but rather of organic algorithms recognising mathematical patterns. If so, there is no reason why non-organic algorithms couldn’t master it.
David Cope is a musicology professor at the University of California in Santa Cruz. He is also one of the more controversial figures in the world of classical music. Cope has written programs that compose concertos, chorales, symphonies and operas. His first creation was named EMI (Experiments in Musical Intelligence), which specialised in imitating the style of Johann Sebastian Bach. It took seven years to create the program, but once the work was done, EMI composed 5,000 chorales à la Bach in a single day. Cope arranged a performance of a few select chorales in a music festival at Santa Cruz. Enthusiastic members of the audience praised the wonderful performance, and explained excitedly how the music touched their innermost being. They didn’t know it was composed by EMI rather than Bach, and when the truth was revealed, some reacted with glum silence, while others shouted in anger.
EMI continued to improve, and learned to imitate Beethoven, Chopin, Rachmaninov and Stravinsky. Cope got EMI a contract, and its first album – Classical Music Composed by Computer – sold surprisingly well. Publicity brought increasing hostility from classical-music buffs. Professor Steve Larson from the University of Oregon sent Cope a challenge for a musical showdown. Larson suggested that professional pianists play three pieces one after the other: one by Bach, one by EMI, and one by Larson himself. The audience would then be asked to vote who composed which piece. Larson was convinced people would easily tell the difference between soulful human compositions, and the lifeless artefact of a machine. Cope accepted the challenge. On the appointed date, hundreds of lecturers, students and music fans assembled in the University of Oregon’s concert hall. At the end of the performance, a vote was taken. The result? The audience thought that EMI’s piece was genuine Bach, that Bach’s piece was composed by Larson, and that Larson’s piece was produced by a computer.
Critics continued to argue that EMI’s music is technically excellent, but that it lacks something. It is too accurate. It has no depth. It has no soul. Yet when people heard EMI’s compositions without being informed of their provenance, they frequently praised them precisely for their soulfulness and emotional resonance.
Following EMI’s successes, Cope created newer and even more sophisticated programs. His crowning achievement was Annie. Whereas EMI composed music according to predetermined rules, Annie is based on machine learning. Its musical style constantly changes and develops in reaction to new inputs from the outside world. Cope has no idea what Annie is going to compose next. Indeed, Annie does not restrict itself to music composition but also explores other art forms such as haiku poetry. In 2011 Cope published Comes the Fiery Night: 2,000 Haiku by Man and Machine. Of the 2,000 haikus in the book, some are written by Annie, and the rest by organic poets. The book does not disclose which are which. If you think you can tell the difference between human creativity and machine output, you are welcome to test your claim.18
In the nineteenth century the Industrial Revolution created a huge new class of urban proletarians, and socialism spread because no one else managed to answer their unprecedented needs, hopes and fears. Liberalism eventually defeated socialism only by adopting the best parts of the socialist programme. In the twenty-first century we might witness the creation of a new massive class: people devoid of any economic, political or even artistic value, who contribute nothing to the prosperity, power and glory of society.
In September 2013 two Oxford researchers, Carl Benedikt Frey and Michael A. Osborne, published ‘The Future of Employment’, in which they surveyed the likelihood of different professions being taken over by computer algorithms within the next twenty years. The algorithm developed by Frey and Osborne to do the calculations estimated that 47 per cent of US jobs are at high risk. For example, there is a 99 per cent probability that by 2033 human telemarketers and insurance underwriters will lose their jobs to algorithms. There is a 98 per cent probability that the same will happen to sports referees, 97 per cent that it will happen to cashiers and 96 per cent to chefs. Waiters – 94 per cent. Paralegal assistants – 94 per cent. Tour guides – 91 per cent. Bakers – 89 per cent. Bus drivers – 89 per cent. Construction labourers – 88 per cent. Veterinary assistants – 86 per cent. Security guards – 84 per cent. Sailors – 83 per cent. Bartenders – 77 per cent. Archivists – 76 per cent. Carpenters – 72 per cent. Lifeguards – 67 per cent. And so forth. There are of course some safe jobs. The likelihood that computer algorithms will displace archaeologists by 2033 is only 0.7 per cent, because their job requires highly sophisticated types of pattern recognition, and doesn’t produce huge profits. Hence it is improbable that corporations or government will make the necessary investment to automate archaeology within the next twenty years.19
Of course, by 2033 many new professions are likely to appear, for example, virtual-world designers. But such professions will probably require much more creativity and flexibility than your run-of-the-mill job, and it is unclear whether forty-year-old cashiers or insurance agents will be able to reinvent themselves as virtual-world designers (just try to imagine a virtual world created by an insurance agent!). And even if they do so, the pace of progress is such that within another decade they might have to reinvent themselves yet again. After all, algorithms might well outperform humans in designing virtual worlds too. The crucial problem isn’t creating new jobs. The crucial problem is creating new jobs that humans perform better than algorithms.20
The technological bonanza will probably make it feasible to feed and support the useless masses even without any effort on their part. But what will keep them occupied and content? People must do something, or they will go crazy. What will they do all day? One solution might be offered by drugs and computer games. Unnecessary people might spend increasing amounts of time within 3D virtual-reality worlds, which would provide them with far more excitement and emotional engagement than the drab reality outside. Yet such a development would deal a mortal blow to the liberal belief in the sacredness of human life and of human experiences. What’s so sacred in useless bums who pass their days devouring artificial experiences in La La Land?
Some experts and thinkers, such as Nick Bostrom, warn that humankind is unlikely to suffer this degradation, because once artificial intelligence surpasses human intelligence, it might simply exterminate humankind. The AI is likely to do so either for fear that humankind would turn against it and try to pull its plug, or in pursuit of some unfathomable goal of its own. For it would be extremely difficult for humans to control the motivation of a system smarter than themselves.
Even preprogramming the system with seemingly benign goals might backfire horribly. One popular scenario imagines a corporation designing the first artificial super-intelligence, and giving it an innocent test such as calculating pi. Before anyone realises what is happening, the AI takes over the planet, eliminates the human race, launches a conquest campaign to the ends of the galaxy, and transforms the entire known universe into a giant super-computer that for billions upon billions of years calculates pi ever more accurately. After all, this is the divine mission its Creator gave it.21
A Probability of 87 Per Cent
At the beginning of this chapter we identified several practical threats to liberalism. The first is that humans might become militarily and economically useless. This is just a possibility, of course, not a prophecy. Technical difficulties or political objections might slow down the algorithmic invasion of the job market. Alternatively, since much of the human mind is still uncharted territory, we don’t really know what hidden talents humans might discover, and what novel jobs they might create to replace the losses. That, however, may not be enough to save liberalism. For liberalism believes not just in the value of human beings – it also believes in individualism. The second threat facing liberalism is that in the future, while the system might still need humans, it will not need individuals. Humans will continue to compose music, to teach physics and to invest money, but the system will understand these humans better than they understand themselves, and will make most of the important decisions for them. The system will thereby deprive individuals of their authority and freedom.
The liberal belief in individualism is founded on the three important assumptions that we discussed earlier in the book:
1. I am an in-dividual – i.e. I have a single essence which cannot be divided into any parts or subsystems. True, this inner core is wrapped in many outer layers. But if I make the effort to peel these external crusts, I will find deep within myself a clear and single inner voice, which is my authentic self.
2. My authentic self is completely free.
3. It follows from the first two assumptions that I can know things about myself nobody else can discover. For only I have access to my inner space of freedom, and only I can hear the whispers of my authentic self. This is why liberalism grants the individual so much authority. I cannot trust anyone else to make choices for me, because no one else can know who I really am, how I feel and what I want. This is why the voter knows best, why the customer is always right and why beauty is in the eye of the beholder.
However, the life sciences challenge all three assumptions. According to the life sciences:
1. Organisms are algorithms, and humans are not individuals – they are ‘dividuals’, i.e. humans are an assemblage of many different algorithms lacking a single inner voice or a single self.
2. The algorithms constituting a human are not free. They are shaped by genes and environmental pressures, and take decisions either deterministically or randomly – but not freely.
3. It follows that an external algorithm could theoretically know me much better than I can ever know myself. An algorithm that monitors each of the systems that comprise my body and my brain could know exactly who I am, how I feel and what I want. Once developed, such an algorithm could replace the voter, the customer and the beholder. Then the algorithm will know best, the algorithm will always be right, and beauty will be in the calculations of the algorithm.
During the nineteenth and twentieth centuries, the belief in individualism nevertheless made good practical sense, because there were no external algorithms that could actually monitor me effectively. States and markets may have wished to do exactly that, but they lacked the necessary technology. The KGB and FBI had only a vague understanding of my biochemistry, genome and brain, and even if agents bugged every phone call I made and recorded every chance encounter on the street, they did not have the computing power to analyse all this data. Consequently, given twentieth-century technological conditions, liberals were right to argue that nobody can know me better than I know myself. Humans therefore had a very good reason to regard themselves as an autonomous system, and to follow their own inner voices rather than the commands of Big Brother.
However, twenty-first-century technology may enable external algorithms to know me far better than I know myself, and once this happens, the belief in individualism will collapse and authority will shift from individual humans to networked algorithms. People will no longer see themselves as autonomous beings running their lives according to their wishes, and instead become accustomed to seeing themselves as a collection of biochemical mechanisms that is constantly monitored and guided by a network of electronic algorithms. For this to happen, there is no need of an external algorithm that knows me perfectly, and that never makes any mistakes; it is enough that an external algorithm will know me better than I know myself, and will make fewer mistakes than me. It will then make sense to trust this algorithm with more and more of my decisions and life choices.
We have already crossed this line as far as medicine is concerned. In the hospital, we are no longer individuals. Who do you think will make the most momentous decisions about your body and your health during your lifetime? It is highly likely that many of these decisions will be taken by computer algorithms such as IBM’s Watson. And this is not necessarily bad news. Diabetics already carry sensors that automatically check their sugar level several times a day, alerting them whenever it crosses a dangerous threshold. In 2014 researchers at Yale University announced the first successful trial of an ‘artificial pancreas’ controlled by an iPhone. Fifty-two diabetics took part in the experiment. Each patient had a tiny sensor and a tiny pump implanted in his or her stomach. The pump was connected to small tubes of insulin and glucagon, two hormones that together regulate sugar levels in the blood. The sensor constantly measured the sugar level, transmitting the data to an iPhone. The iPhone hosted an application that analysed the information, and whenever necessary gave orders to the pump, which injected measured amounts of either insulin or glucagon – without any need of human intervention.22
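The artificial pancreas described above is, in essence, a simple closed feedback loop: sensor reading, analysis, pump command, repeat. The following is a minimal illustrative sketch of that loop in Python; the threshold values, dose sizes, check interval and function names are all hypothetical stand-ins, since the passage describes only the general architecture, not the actual device interfaces.

```python
import random
import time

# Assumed thresholds (mg/dL) -- not taken from the text, purely illustrative.
HIGH_GLUCOSE = 180   # above this, deliver insulin
LOW_GLUCOSE = 70     # below this, deliver glucagon

def read_sensor():
    """Stand-in for the implanted sensor: returns a simulated glucose reading."""
    return random.randint(50, 250)

def dose_insulin(units):
    """Stand-in for commanding the pump to inject insulin."""
    print(f"pump: injecting {units} unit(s) of insulin")

def dose_glucagon(units):
    """Stand-in for commanding the pump to inject glucagon."""
    print(f"pump: injecting {units} unit(s) of glucagon")

def control_loop(checks=5, interval_seconds=1):
    """Check the sugar level at fixed intervals and correct it with no human input."""
    for _ in range(checks):
        glucose = read_sensor()
        print(f"sensor: glucose at {glucose} mg/dL")
        if glucose > HIGH_GLUCOSE:
            dose_insulin(units=1)
        elif glucose < LOW_GLUCOSE:
            dose_glucagon(units=1)
        time.sleep(interval_seconds)

if __name__ == "__main__":
    control_loop()
```

The point of the sketch is simply that, once the loop is closed, the decision about when and how much hormone to inject is made by the algorithm rather than by the patient.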
Many other people who suffer from no serious illnesses have begun to use wearable sensors and computers to monitor their health and activities. The devices – incorporated into anything from smartphones and wristwatches to armbands and underwear – record diverse biometric data such as blood pressure. The data is then fed into sophisticated computer programs, which advise you how to change your diet and daily routines so as to enjoy improved health and a longer and more productive life.23 Google, together with the drug giant Novartis, is developing a contact lens that checks glucose levels in the blood every few seconds, by testing tear contents.24 Pixie Scientific sells ‘smart diapers’ that analyse baby poop for clues about the baby’s medical condition. In November 2014 Microsoft launched the Microsoft Band, a smart armband that monitors, among other things, your heartbeat, the quality of your sleep and the number of steps you take each day. An application called Deadline goes a step further, telling you how many years of life you have left, given your current habits.
Some people use these apps without thinking too deeply about it, but for others this is already an ideology, if not a religion. The Quantified Self movement argues that the self is nothing but mathematical patterns. These patterns are so complex that the human mind has no chance of understanding them. So if you wish to obey the old adage and know thyself, you should not waste your time on philosophy, meditation or psychoanalysis, but rather you should systematically collect biometric data and allow algorithms to analyse them for you and tell you who you are and what you should do. The movement’s motto is ‘Self-knowledge through numbers’.25
In 2000 the Israeli singer Shlomi Shavan conquered the local playlists with his hit song ‘Arik’. It’s about a guy who is obsessed with his girlfriend’s ex, Arik. He demands to know who is better in bed – him, or Arik? The girlfriend dodges the question, saying that it was different with each of them. The guy is not satisfied and demands: ‘Talk numbers, lady.’ Well, precisely for such guys, a company called Bedpost sells biometric armbands you can wear while having sex. The armband collects data such as heart rate, sweat level, duration of sexual intercourse, duration of orgasm and the number of calories you burnt. The data is fed into a computer that analyses the information and ranks your performance with precise numbers. No more fake orgasms and ‘How was it for you?’26
People who experience themselves through the unrelenting mediation of such devices may begin to see themselves as a collection of biochemical systems more than as individuals, and their decisions will increasingly reflect the conflicting demands of the various systems.27 Suppose you have two free hours a week, and you are unsure whether to use them in order to play chess or tennis. A good friend may ask: ‘What does your heart tell you?’ ‘Well,’ you answer, ‘as far as my heart is concerned, it’s obvious tennis is better. It’s also better for my cholesterol level and blood pressure. But my fMRI scans indicate I should strengthen my left pre-frontal cortex. In my family, dementia is quite common, and my uncle had it at a very early age. The latest studies indicate that a weekly game of chess can help delay the onset of dementia.’
You can already find much more extreme examples of external mediation in the geriatric wards of hospitals. Humanism fantasises about old age as a period of wisdom and awareness. The ideal elder may suffer from bodily ailments and weaknesses, but his mind is quick and sharp, and he has eighty years of insights to dispense. He knows exactly what’s what, and always has good advice for the grandchildren and other visitors. Twenty-first-century octogenarians don’t always look like that. Thanks to our growing understanding of human biology, medicine keeps us alive long enough for our minds and our ‘authentic selves’ to disintegrate and dissolve. All too often, what’s left is a collection of dysfunctional biological systems kept going by a collection of monitors, computers and pumps.