The Scientific Secrets of Doctor Who


by Simon Guerrier

‘It’s not an AI at all,’ said Gina. ‘It’s a massive scam.’

  ‘A scam?’ said Ace. She glanced at the Doctor. ‘But you said it works.’

  ‘Oh, it works all right,’ said Gina. ‘It’s a cunningly disguised gateway for the most massive crowd-sourcing project of all time. Raymond got rich not by inventing a super-intelligent inhuman mind, but by exploiting millions of very human minds, all over the developing world, and paying them a pittance.’

  Raine came and perched on the arm of Gina’s chair, staring at the computer screen. ‘But surely these people must have realised what they were doing. Why didn’t they blow the whistle?’

  ‘That’s where we come to the one bit which I have to grudgingly admit actually is rather clever,’ said Gina. ‘The so-called Canterbury AI minces up the task into a vast number of tiny chunks, all of which are worked on separately.’

  The Doctor nodded. ‘So each person only contributes a microscopic part of the whole – like one piece of Lego in a vast structure. None of them has any idea what the finished product will look like.’

  ‘Exactly.’

  ‘But his game is over now,’ said the Doctor, smiling. ‘Thanks to Raine and Ace.’ He leaned close to Gina. ‘Can you release the “AI” online as a resource open for everyone to use, with the crowd-source workers receiving a fair share of the revenues?’

  Gina nodded and chuckled as she began to type at the keyboard. ‘He’s going to find that the price of his stock can go down as well as up.’

  Ace looked up as Raine rose to her feet and walked towards the door.

  ‘Where are you going?’

  ‘Out in the garden, to pick a few pears,’ said the girl who stole the stars.

  * * *

  ‘Your trouble is, Drathro, that you’ve no concept of what life is.’

  ‘I have studied my work units for five centuries. I understand all their responses. What you would call life.’

  ‘Understanding is not the same as knowing.’

  The Sixth Doctor and the L3 robot Drathro, The Trial of a Time Lord (1986)

  * * *

  In The War Machines (1966), the First Doctor faces a terrible foe – a computer that can use a telephone line to communicate with other computers. In The Green Death (1973), people’s movements through a building are watched by automated cameras linked to a computer. In The Greatest Show in the Galaxy (1988–1989), the TARDIS is invaded by a robot who takes over the controls to play a video advertising a circus.

  At the time these stories were broadcast, these were all imaginative science fiction ideas. Today, the internet, CCTV and sophisticated electronic junk mail are all part of our daily lives – but there’s an important difference. In The War Machines, The Green Death and The Greatest Show in the Galaxy, these artificial systems can speak, and even argue with the Doctor. They’re intelligent, thinking machines.

  At least, we don’t consider the computers that run CCTV or send junk mail today to be intelligent – but then, that depends on what intelligence means. The word comes from the Latin for perceiving or understanding. Surely a CCTV camera perceives whatever it is recording, and when we give a computer an instruction, such as typing a T on the keyboard, it ‘understands’ how to respond and puts a T on the screen.

  Over the years, scientists have tried to devise ways to measure and assess intelligence more accurately. In 1912, the German psychologist William Stern proposed the intelligence quotient, or IQ. Modern versions of the IQ test include studies of comprehension, reasoning, memory and processing speed. A ‘norming sample’ of participants taking the test is used to work out an average level of intelligence, which is set at 100. Once this average has been found, anyone else taking the test can see whether they are above or below it, and by how much.
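  To see how a norming sample works in practice, here’s a rough sketch in Python – just for illustration, with invented raw scores, and assuming the common convention of scaling to a mean of 100 and a standard deviation of 15:

from statistics import mean, stdev

# Invented raw test scores from a hypothetical norming sample.
norming_sample = [41, 55, 48, 62, 50, 47, 53, 58, 44, 52]

mu = mean(norming_sample)
sigma = stdev(norming_sample)

def iq(raw_score):
    # Scale a raw score so the norming sample averages 100,
    # with a standard deviation of 15.
    return round(100 + 15 * (raw_score - mu) / sigma)

print(iq(51))  # 100: exactly the sample average
print(iq(62))  # above 100: better than the sample average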

  Studies have shown that those with higher IQ scores are likely to be healthier, live longer and earn more in better jobs. Yet some are sceptical of using IQ to predict future behaviour, and argue that IQ is often an indicator not of the intelligence a person was born with but of the way their upbringing and environment have shaped their responses – in Chapter 11, we talked about the debate between nature and nurture. In fact, IQ seems not to be innate; it’s the sort of test you can train for. Others point out that IQ gives only a partial picture of a person’s mental ability: for example, the test does not measure creativity, emotional intelligence or experience.

  There’s something of this debate in the Doctor Who story The Ribos Operation (1978), in which the Doctor and his new companion Romana are sent on a quest to collect pieces of the Key to Time. Romana is highly intelligent. She claims to have graduated from the Time Lord Academy with a triple first, while the Doctor only scraped through with fifty-one per cent at the second attempt. But, as the story shows, she’s naive, inexperienced and lacks the ability to think laterally and deduce from evidence – essential skills if she is to help recover pieces of the Key. She might be intelligent, but she has a lot to learn from the Doctor.

  Three stories later, The Androids of Tara (1978) begins with the Doctor playing chess against his robot dog, K-9. Romana looks briefly at the board and concludes that K-9 will win in twelve moves. K-9 corrects her: he’ll do it in eleven. The Doctor stares at the board doubtfully.

  * * *

  ‘Mate in eleven? Oh yes, oh yes. Well, that’s the trouble with chess, isn’t it? It’s all so predictable.’

  The Fourth Doctor, The Androids of Tara (1978)

  * * *

  It’s a funny scene, but the Doctor has a point. He might not be good at exams or chess, but he has a knack for thinking in ways that aren’t expected. Battling monsters requires a different kind of intelligence.

  We talked in Chapter 8 about how a team of British mathematicians led by Alan Turing was able to crack Nazi secret codes in the Second World War, and how that work resulted in a new invention – the programmable computer Colossus, designed by Tommy Flowers. In fact, Colossus was only part of the solution to cracking the codes, and what actually happened can tell us a lot about the nature of intelligence – and how it applies to machines.

  The Nazis used radio to send instructions and news to their forces spread all over the world. The British and other Allied forces could listen in to these messages but couldn’t understand them because they were in code. Obviously, cracking that code would make a huge difference to the progress of the war, so sizeable resources and some of the brightest minds were devoted to the problem.

  The Nazi codes were produced on Enigma machines, which looked like normal keyboards but contained rotating discs called rotors. If you input a letter into the machine, such as pressing the letter T on the keyboard, the rotors turned and produced a different letter as the output – for example, the next letter in the alphabet: U. If you had just one rotor in your Enigma machine, substituting each letter typed on the keyboard for the next letter along in the alphabet, you might produce a message from English words that said:

  UIF TDJFOUJGJD TFDSFUT PG EPDUPS XIP

  But with such a simple system, it wouldn’t take your enemies long to crack this code – if you use a pen and paper, you can decipher it pretty quickly, can’t you?
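  You can let a computer do the enciphering, too. Here’s a minimal sketch in Python (ours, for illustration – the names and details aren’t from any historical machine) of the one-step substitution used above:

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def shift(text, by=1):
    # Shift each letter 'by' places along the alphabet, wrapping Z back to A.
    out = []
    for ch in text:
        if ch in ALPHABET:
            out.append(ALPHABET[(ALPHABET.index(ch) + by) % 26])
        else:
            out.append(ch)  # leave spaces and punctuation alone
    return "".join(out)

coded = shift("THE SCIENTIFIC SECRETS OF DOCTOR WHO", by=1)
print(coded)             # UIF TDJFOUJGJD TFDSFUT PG EPDUPS XIP
print(shift(coded, -1))  # shifting back by one recovers the message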

  To make such a code harder to break, instead of substituting each letter in exactly the same way – moving up or down the alphabet by a given number of letters – you could randomise the substitutions: every T swapped for U, but every H swapped for A and every E swapped for Z, and so on. But there were ways around that, too. For example, E is the letter most frequently used in the German language – and in English, too – so you could check your coded messages for the most frequently used letter, and then assume that was E. The next most common letters in German are N, S, R and I; in English they are T, A, O and N. You could also look for the most common letter pairs (in English: TH, HE and AN) and the most common double letters (in English: LL, EE and SS).

  It might take some trial and error – in the English words used to produce the coded message above, E appeared four times, but so did C, O and T. However, deducing just a handful of the substitutions would give you most of the letters in a word, allowing you to guess the rest – revealing yet more substitutions that might help with other words. Having decoded most of the words in a sentence, you could probably work out the rest. You might have worked out what the coded message above said before you’d decoded all of it.
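  Counting letter frequencies is exactly the kind of drudgery a computer is good at. A quick sketch in Python (ours, for illustration), run on the coded message above:

from collections import Counter

coded = "UIF TDJFOUJGJD TFDSFUT PG EPDUPS XIP"
counts = Counter(ch for ch in coded if ch.isalpha())

# In a long message, the most common coded letter probably stands for E.
for letter, n in counts.most_common(5):
    print(letter, n)

# This message is too short for the trick to work cleanly: F (the coded E)
# ties on four appearances with D, P and U (the coded C, O and T),
# just as described above.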

  To make codes harder to break, the first Enigma machines contained three separate rotors. Each time a letter was pressed, the rotors turned inside the machine by different amounts. That meant that the first time you pressed the T on the keyboard the machine might substitute a U – but the next time you pressed the T, the rotors would be in different positions so the machine would substitute something else, such as a J. That made the code much more secure: you couldn’t just puzzle it out with pen and paper. You’d need an Enigma machine of your own (or understand how one worked) and you’d also need to know the positions of the three rotors when the coded message was sent.
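  We can imitate the rotor idea in a few more lines of Python. This is a deliberately simplified toy – a single ‘rotor’ that steps one place per keypress – not a real Enigma simulation:

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def rotor_encode(text, start=1):
    # Shift each successive letter one place further along the alphabet,
    # so the same plaintext letter comes out differently every time.
    out, step = [], start
    for ch in text:
        if ch in ALPHABET:
            out.append(ALPHABET[(ALPHABET.index(ch) + step) % 26])
            step += 1  # the 'rotor' turns after every keypress
        else:
            out.append(ch)
    return "".join(out)

print(rotor_encode("TT TT"))  # UV WX - one letter, four different outputs

  To decode the message, you need to know the starting position – just as anyone decoding a real Enigma message needed to know how the rotors were set.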

  If each rotor was divided into 26 positions – so that you could adjust it to substitute a T for any letter in the alphabet – then three rotors meant there were 26 x 26 x 26 or 17,576 possible starting combinations. A fourth rotor added to later machines meant 456,976 possible combinations. Special plugboards added to the machines made the codes even more secure: a plugboard with 10 leads in it meant more than 150 trillion possible combinations. That presented a serious challenge to anyone trying to crack the codes. But the Nazis also changed the settings they used on the Enigma machines every 24 hours. If breaking the codes was to be of any practical use, it had to be done very quickly every day.
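  Those figures are easy to check. A few lines of Python, using the standard counting formula for plugboard pairings:

from math import factorial

print(26 ** 3)  # 17,576 starting positions for three rotors
print(26 ** 4)  # 456,976 for four rotors

# A plugboard with 10 leads pairs up 20 of the 26 letters: choose the
# 6 letters left unplugged, then pair off the remaining 20.
plugboard = factorial(26) // (factorial(6) * factorial(10) * 2 ** 10)
print(plugboard)  # 150,738,274,937,250 - just over 150 trillion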

  In the 1930s, Polish mathematicians cracked some of the simpler versions of the Enigma code using machines they called ‘cryptological bombs’, which could cycle through all 17,576 possible starting combinations of a three-rotor system in about two hours. The Polish codebreakers also spotted clues in the coded messages that could help narrow down the search. Some words or phrases appeared regularly: many messages began ‘ANX’ – ‘an’ being the German for ‘to’, with the ‘X’ standing in for a space – followed by someone’s name.

  When the Enigma machines became more complex, these clues – called ‘cribs’ by the British – became very important. Alan Turing improved upon the design of the Polish codebreaking machine, and used the Polish name for it, ‘bombe’. The first British bombe, produced in 1940, could use any known part of the message to help crack the whole of the code, in what is called a ‘known plaintext attack’. Finding an effective crib greatly increased the speed at which the British could decode the Nazi messages. Later bombes, and the first programmable computer – Colossus, built in 1943 – were more sophisticated at cracking codes, but all still relied on cribs.
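  We can sketch a ‘known plaintext attack’ on the toy rotor cipher from a few pages back: try every possible starting position, and keep the one that makes the crib appear. (A real bombe searched all 17,576 rotor settings electromechanically; our toy has only 26, and the message below is invented.)

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def rotor_decode(text, start):
    # Undo the toy rotor cipher for a guessed starting position.
    out, step = [], start
    for ch in text:
        if ch in ALPHABET:
            out.append(ALPHABET[(ALPHABET.index(ch) - step) % 26])
            step += 1
        else:
            out.append(ch)
    return "".join(out)

intercepted = "FTEONXPDNZ"  # enciphered with the toy rotor cipher above
crib = "ANX"                # we suspect the message begins 'ANX'

for start in range(26):
    guess = rotor_decode(intercepted, start)
    if guess.startswith(crib):
        print(start, guess)  # prints: 5 ANXGENERAL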

  Turing’s efforts inspired the Doctor Who story The Curse of Fenric (1989), in which brilliant mathematician Doctor Judson has built a computer that he assures his old friend Commander Millington can break the Nazi codes. Millington has his own methods, as the Doctor and his companion Ace find out when they explore the army base and stumble on a most unlikely room:

  * * *

  ‘This is a perfect replica of the German naval cipher room in Berlin. Even down to the files…’

  ‘Commander Millington’s a spy?’

  ‘Oh no, no, no, no. He’s just trying to think the way the Germans think, to keep one step ahead.’

  The Seventh Doctor and Ace, The Curse of Fenric (1989)

  * * *

  It’s a remarkable sight – a perfect recreation of a Nazi office in the middle of a British army base – but it’s indicative of exactly the kind of creative, unconventional thinking that really was used to crack the codes.

  This kind of thinking required a certain kind of intelligence – and a creative approach to recruitment. In January 1942, the Daily Telegraph ran a competition with £100 offered to anyone who could solve a particularly difficult crossword in under twelve minutes. The crossword was a mix of general knowledge, anagrams and riddles. As well as the cash prize, those who took part were contacted by the War Office and offered work in codebreaking.

  There were overlaps between solving crosswords and cracking codes. Deducing the answer to the crossword clue 1 Across would provide you with letters in some of the Down clues – just as working out some of the substitutions in a code could make it easy to work out the rest, as we saw before. More than that, it often helps in solving a crossword to get inside the mind of the person who set it, knowing their particular habits and preferences, the kinds of clue they like to set. Codebreakers learnt to get inside the minds of the Nazi operators sending the coded messages, too. One codebreaker, Mavis Batey, even deduced that two operators both had girlfriends called Rosa – and used the name in their messages. Ironically, the creative, unconventional thinking required to crack the codes depended on the ability to think just like someone else – to imitate them.

  After the war, Turing continued to work on developments in computers. His Automatic Computing Engine, or ACE, was one of the earliest computers to store electronic programs in its memory – before then, computers such as Colossus had to be rewired manually before starting any new task. As computers became more sophisticated and capable of more complex tasks, some people started to ask whether they would one day be able to think for themselves – and how we would be able to judge the moment when they did.

  Turing also applied himself to this problem, and in 1950 published a short paper on the subject: ‘Computing Machinery and Intelligence’. He begins by arguing that the question ‘Can machines think?’ is tricky, because it depends on our definitions of intelligence. As we’ve already seen, there are lots of different kinds of intelligence, so instead Turing proposed a game: without seeing them and only judging their answers to a series of questions, can we tell a computer and person apart? Turing’s argument was that if a computer can imitate us well enough to fool us into thinking that it’s a person, it must present the same apparent intelligence as a person. We don’t test the intelligence of living people before we speak to them – we assume it’s there. So why don’t we assume intelligence in a computer, too?

  At the time, Turing’s imitation game – now better known as the ‘Turing test’ – was an interesting thought experiment. But today, we are used to computer systems that could well pass that test. Companies increasingly offer a customer support tool on their websites that makes us think we’re chatting to a person but is actually an automated system. Computer viruses are often contained in messages that apparently come from our friends or other trusted sources. There’s also an example related to Doctor Who.

  For three months in 2006, British Telecom had a system that translated text messages into voice messages on a telephone landline – and used Fourth Doctor actor Tom Baker as the voice. It took Baker eleven days to record 11,593 sounds and phrases that the automated system could then assemble in different combinations to match the content of text messages. It even knew to translate abbreviations such as ‘xx’ to ‘kiss kiss’ and ‘gr8’ to ‘great’. Some people had fun sending messages that made the automated voice say daft things – or even ‘sing’ famous songs. But a few people who knew Tom Baker in real life found it a bit confusing: for them, the automated system passed the Turing test.fn1
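  Part of the trick is simple text normalisation – expanding abbreviations before picking which recorded sounds to play. A toy sketch in Python, using only the two examples above (a real system would have had many more rules):

# Expansions taken from the examples in the text.
EXPANSIONS = {"gr8": "great", "xx": "kiss kiss"}

def normalise(message):
    # Replace any known abbreviation; leave other words alone.
    return " ".join(EXPANSIONS.get(word.lower(), word) for word in message.split())

print(normalise("That was gr8 xx"))  # That was great kiss kiss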

  Today, we’re used to automated systems predicting our responses. Online shops offer us deals and recommendations based on our previous purchases. Social media recommend people or news stories we might want to engage with based on our current network of friends. Some of these predictions might be wrong, but the more sophisticated systems can make surprisingly accurate matches – as if they know us better than we do ourselves.

  Already, voice recognition systems are used in automated phone lines – you are asked to say ‘yes’ or give other simple answers when prompted. As these systems improve, we’ll find ourselves talking to more and more computers, and those computers will have ever more sophisticated ways of predicting our responses, so that it will become ever more difficult to know if the person we’re talking to is a person or not.

  When it gets to that point, aren’t they then people, too?

  * * *

  ‘The trouble with computers, of course, is that they’re very sophisticated idiots. They do exactly what you tell them at amazing speed, even if you order them to kill you.’

  The Fourth Doctor, Robot (1974–1975)

  * * *

  We can use Doctor Who to explore our problematic attitude to artificial intelligence. In the series, there is often a clear distinction: artificial intelligence is not on the same level as human (or alien) life. When battling the generally quite friendly, intelligent robot in Robot, the Doctor has few of the moral quandaries he does when facing even the Daleks. He destroys the robot but (as we saw in Chapter 9) lets the Daleks live.

  In fact, his companion Sarah argues against both these decisions. She tells him the Daleks are ‘the most evil creatures ever invented. You must destroy them.’ Yet, upset by the destruction of the robot, she says something that implies it had passed the Turing test – and the Doctor agrees.

  * * *

  ‘I had to do it, you know.’

  ‘Yes, yes, I know. It was insane and it did terrible things, but at first, it was so human.’

  ‘It was a wonderful creature, capable of great good, and great evil. Yes, I think you could say it was human.’

  The Fourth Doctor and Sarah Jane Smith, Robot

  * * *

  The Doctor describes his robot dog K-9 as his ‘best friend’ on more than one occasion, but in School Reunion (2006) he seems happy to send K-9 to his death to stop the Krillitane invasion. Well, he’s not happy, exactly – but we can’t imagine him allowing a human companion to make the same kind of sacrifice. Does it make it OK that he can rebuild a robot companion after it is destroyed? And what about when, in Planet of Fire (1984), the Doctor terminates the ‘life’ of robot companion Kamelion and then doesn’t rebuild him? Kamelion asks the Doctor to terminate him, but it’s something that the Doctor would surely never do with a human companion.

 
