Although the computer offers a reality of sorts, until now the cyber-world has always had its limits, its boxed boundary easy to encompass with a sweep of the eye. We would need a three-dimensional, all-pervading environment to completely seduce us away from real reality. Computers may dominate our lives, but we know where the division lies between screen and ‘out there’. Robots, on the other hand – moving about as they do in three dimensions – might have the potential to seduce us away from the real world, were they not still the clunky heroes of heavy-handed science fiction.
Films featuring robots, even Steven Spielberg's recent offering AI, usually start with the idea that less-than-benign scientists are trying to generate human intelligence in a robotic or computerized guise. This scenario is compelling, as humanoid robots most obviously generate a good storyline. In particular, most of us are intrigued by the idea of beings alarmingly cleverer than us but at the same time flawed by a fatal human-like disposition towards world domination. Even the brilliant physicist Stephen Hawking is not immune to this kind of musing. He warns that humans must change their DNA, or else: ‘The danger is real that this computer intelligence will develop and take over the world. We must develop as quickly as possible technologies that make possible a direct connection between brain and computer, so that artificial brains contribute to human intelligence rather than opposing it.’
The big worry is clearly that computers and ultimately robots are about to overtake us, although it is not exactly obvious what they are really about to do. Although the development of robots has lagged decades behind that of computers, the new generation of robots is about to get far more serious. At last, the robots of the future will have cast off their sci-fi heritage, having little in common with their predecessors – those tin-men of Hollywood finally consigned to nostalgic 20th-century memory. But why has it taken so long for robots to be anything other than metaphorically hard-hearted, cold-blooded and usually bent on wiping out humans?
Our fascination with the eternal polarization of good and bad, with fairy tales of intrinsically evil machines, is not enough to explain the surprisingly slow technological development of robots in real life. Steve Grand, of the company Cyberlife, has a much more persuasive list of reasons. He suggests that the difficulty may have started as far back as the middle of the 20th century, when visionaries such as Alan Turing were carrying out pioneering work on artificial intelligence. Turing wanted a challenge for very primitive computers that had no sensory device other than a paper-tape reader. Chess seemed a good standard problem as a starting point for such machines. But even Turing himself, it is worth noting, recognized that competence at the game was a poor indicator of intelligence, and that it was important to recognize what he termed ‘situatedness’: responses to a real, complex environment.
Another reason why robots have not really impressed anyone as yet could be, claims Grand, to do with testosterone – more specifically, the fact that it has been mainly men who have worked on robotic projects. Men, so the argument runs, favour a ‘top down’ form of control, as ascribed traditionally to the Hollywood robots, rather than interaction with the outside world in a way that would allow an intelligence to evolve ‘bottom up’. Until now most of these hormonally challenged individuals have made the erroneous assumption that the brain ‘does things’ to data; the alternative, and one that is far closer to the physiology of our ever-adaptive human brains, is a two-way street whereby incoming information changes the brain, online, and at the same time alters whatever particular and personal evaluations and interpretations that brain had formulated. In effect, the robot should be permanently reprogramming itself in the light of experience. A further difficulty, arguably also arising from the notorious male inability to multitask, is that most earlier ‘intelligent’ machines have processed information in series, i.e. sequentially, rather than performing several functions simultaneously, in parallel. Leaving aside the issue of whether AI (Artificial Intelligence) researchers have not delivered because they are, according to Grand, ‘blinkered, domineering, chess-playing (male) nerds’, we clearly need to define what exactly we want from our robots in the future.
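To make the contrast concrete, here is a minimal sketch of such ‘bottom-up’, two-way adaptation: a toy learner whose internal weights are rewritten by every experience, rather than a fixed program that merely ‘does things’ to data. The learning rule, inputs and numbers are all illustrative assumptions, not drawn from any actual robotics project.

```python
import random

# Toy 'bottom-up' learner: every experience nudges the internal
# weights, so the system is permanently reprogramming itself in
# the light of experience rather than applying a fixed rule.
weights = [0.0, 0.0]        # the system's current internal 'evaluations'
LEARNING_RATE = 0.1

def respond(sensor_input):
    """Produce a response from the current internal state."""
    return sum(w * x for w, x in zip(weights, sensor_input))

def experience(sensor_input, outcome):
    """Incoming information changes the 'brain' online: the gap
    between prediction and outcome rewrites the weights."""
    global weights
    error = outcome - respond(sensor_input)
    weights = [w + LEARNING_RATE * error * x
               for w, x in zip(weights, sensor_input)]

# Each interaction with the environment reshapes all future responses.
for _ in range(500):
    x = [random.random(), random.random()]
    experience(x, 2.0 * x[0] - 1.0 * x[1])   # a regularity in the world

print(weights)   # drifts towards [2.0, -1.0] purely through experience
```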
The goals of true AI have now shifted from creating a ‘disembodied, rational intelligence’. ‘Intelligence’ itself is a very loaded term. A central problem is that it is always defined operationally – whether scoring highly in IQ tests or surviving and thriving on the primeval savannah. Yet when we speak of intelligence, we are usually uncomfortably acknowledging an additional feature, one that actually harks back to the Latin roots for ‘understanding’. There is a queasy sense that operational tests and definitions are not really getting to the essence of intelligence – that inner, subjective ability, which is so much harder to test and measure, where we understand something at the visceral level, and do not just give learnt, automated responses. The less the emphasis on ‘automated’ and the more we see of ‘learnt’ behaviour in robots, the nearer they may come to eventually being labelled ‘intelligent’.
The robots of the future will be far more interactive and prepared to learn. For example, scientists at Glasgow University have now developed a 6ft 2in robot known as ‘DB’ – Dynamic Brain. DB has as many joints as a human, powered by hydraulics. Moreover, a human and DB have learnt to press their ‘hands’ together and move them around, as in tai chi, in a fashion that is apparently ‘mutually satisfying’. Note that a term describing a subjective inner state, a feeling, has already crept into the description of the robot's actions, implying that DB might actually be conscious. Frank Pollick, one of DB's creators, certainly does not go that far explicitly but maintains that robots will have to ‘appear’ to express emotions and transmit social signals. Such emphasis on emotion is of course a radical departure from the Hollywood scene, but applies only to robots specifically designed for those tasks where it is appropriate. And that question of appropriateness is an important one. If we are to get real about robots, then we have to say goodbye to the old romance of metal simulacra of ourselves, and face up to a future in which there is a diverse range of very different robots – not highly adaptable humanoid generalists but utterly focused specialists.
At the University of California, San Diego, for example, AI boffins are developing an all-terrain wheeled rescue robot, with the basic aim of cutting five minutes from response times, and saving an estimated forty-nine more lives a year. The idea is that a scanning system detects an accident, and immediately deploys a robot, equipped with cameras and wireless link, to investigate. No delicate introspection or sophisticated inner state is needed here – just the fast and focused tackling of a one-off problem.
Another type of robot designed to save lives in a different way is the ‘Virtual Human’, an artificial but integrated system of vital organs that can be used for all manner of pharmaceutical or safety testing. Far from a mere concept on a screen, the technology is already being developed for an artificial ‘real’ system that actually breathes, with cells that replicate and die, and blood that flows. In short, the Virtual Human will work just like a human, except that it will be controlled by a computer – with no pretensions at all to its own brain. In this particular instance, however, that is not the point: the goal here is to model the interactions and interplay between the mechanical and chemical systems that constitute the plumbing and running of the body, not to endow the robot with any type of thought processes, let alone consciousness.
The whole point of a model is that you have extracted the salient feature in which you are interested whilst ignoring everything else that constitutes the organism or system: an aircraft could be a ‘model’ of a bird, without having a beak or feathers, if the salient feature was flight. So if you are, after all, interested primarily in the interplay of biochemical mechanisms and cascades that underpin drug action, or the physiological domino effects of injuries sustained by a crash victim, consciousness and other mental functions are not a central issue; in fact, such considerations would presumably complicate your endeavours unnecessarily.
As it is, the Virtual Human project is awesome since it entails billions of megabytes – far more data than the human-genome project. Still, in the future, such number-crunching activities will be increasingly the norm. We are now comfortable with the concept of bio-informatics, where computers can zip through masses of genome computations in minutes rather than the several years it would have taken only a decade or so ago. Soon, however, we may have far more complex and sophisticated cross-referenced databases of all bodily reflexes and interactions: ‘physio-informatics’!
Although it may take years to come to pass, it is undeniable that different genres of robots – each with a single job to do – will dominate medicine and surgery in the future. For example, a robot could take biopsies of brain tumours with greater precision than a human surgeon can consistently achieve. The brain, locked as it is in the skull, is not an easy terrain in which to locate the precise region you wish to sample without causing too much collateral damage to healthy tissue nearby. The cerebral target, lying as it does deep and invisible within banks of neurons, has to be identified by manoeuvring within a three-dimensional map – a little like the classic game of battleships. My friend and colleague, the brain surgeon Henry Marsh, has likened current neurosurgical practice to a large JCB digger attempting to pick up a safety pin. But an error of a mere fraction of a millimetre off-target can make all the difference to how a patient lives the rest of their life. Clearly, there is a case for a more mechanized and reliable approach.
Machines are already used for orientation around the brain in a new procedure to treat Parkinson's disease that involves implanting electrodes in the brain. Parkinson's disease is a severe disorder of movement; the sufferer has uncontrollable tremor, muscle stiffness and, perhaps most debilitating and distressing of all, is unable to translate thought into action. The malfunction arises in a small region deep in the brain (the substantia nigra), where key neurons manufacture an all-important chemical messenger, the transmitter dopamine. Electrical stimulation of this system will boost, artificially, the release of dopamine. But first locating the precise position of the dopamine system within the brain is vital. In the new automated procedure, titanium beads around the patient's head provide landmarks on a scan that enable a robot to find the precise 3D coordinates, and then zoom in on the relevant site for biopsy or electrode implant. For the moment, this device may be stretching the definition of a robot; after all, at present the machines are simply precision manipulators. Yet such systems are probably just the first in an increasingly sophisticated series in which automation features more and more, and human participation becomes more and more remote – perhaps eventually disappearing altogether.
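To give a flavour of the geometry involved, here is a minimal sketch of how matched landmarks, like those titanium beads, allow a target located on a scan to be mapped into a robot's own coordinate frame. The rigid-body fit is the standard Kabsch (SVD) method; the bead positions and target below are invented purely for illustration.

```python
import numpy as np

def rigid_fit(scan_pts, robot_pts):
    """Kabsch fit: find rotation R and translation t such that
    robot_pts ~ scan_pts @ R.T + t, from matched landmark pairs."""
    cs, cr = scan_pts.mean(axis=0), robot_pts.mean(axis=0)
    H = (scan_pts - cs).T @ (robot_pts - cr)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cr - R @ cs

# Hypothetical bead positions, first as located on the scan...
scan_beads = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                       [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]])
# ...then the same beads as measured in the robot's frame (here the
# 'true' relation is a 90-degree turn about z plus a shift).
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
robot_beads = scan_beads @ Rz.T + np.array([5.0, 5.0, 5.0])

R, t = rigid_fit(scan_beads, robot_beads)
target_on_scan = np.array([3.0, 4.0, 7.0])  # site identified on the scan
print(R @ target_on_scan + t)               # the same site in robot coordinates
```

With four or more beads the fit is over-determined, so small measurement errors tend to average out in the least-squares solution, which is exactly what you want when a fraction of a millimetre matters.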
If and when these automated procedures prove to be completely failsafe, robots will gradually replace human surgeons. It is quite easy to imagine a scenario, not too far off, where the surgeon takes a back seat for most procedures, on hand only for unexpected emergencies. Ultimately, perhaps, every eventuality from bursting arteries to sudden cardiac failure to a dangerous lightening of anaesthesia will be programmed into the software to be catered for by the cyber-surgeon. Gradually engineers may feature in surgical planning and procedures more than the doctors, who by this time may be sitting in another room, perhaps miles away, interfacing with the machines by voice command. Within this century we will see a radical change in the traditional medical professions. For tasks that place a premium on rapid access to large amounts of information, combined with utterly reproducible manual precision, robots will be strong candidates. Further down the line machines could even replace the human designers themselves…
But not all such diverse robotic agendas need be exclusive to life-threatening situations. On a less (literally) vital note, robots could arguably be a new source of fun: for example, robot football has been under way since 1997. On 4 August 2001 the fifth World Robot Soccer competition, RoboCup, was held in Seattle. Over a hundred entrants competed in four different leagues according to size and ability. The essential rule was that the players function independently, without any remote control whatsoever. Admittedly, most were on wheels and used a scoop, not a foot. Yet a good time was had by all; such was the enthusiasm – of the spectators as opposed to the participants – that the ultimate goal now is to develop a team of humanoid robots, by 2050, to take on the human victors of the most recent World Cup, and play according to FIFA rules.
These football-playing robots, like their medical counterparts, are designed to do a specific job, one that follows naturally from playing chess, albeit in a far more unpredictable and complex environment. Once again they do not have to have any subjective inner states, any emotion. But just think about the human fans, not those who have had a hand in creating the robots in the first place but the general public, adults and kids, spending an afternoon at the match. The question of whether mechanical players would stir up emotion in a human spectator, beyond the pleasure of the novelty, is an uncertain one. I suspect a fan would ultimately prefer to share in the glory and excitement of a human team's elation rather than observe the preprogrammed fulfilment of duty by a machine, regardless of whether a robot's skills might prove superior to those of its human counterpart. But then I am viewing the situation with my turn-of-the-century mind.
Perhaps in the future it will not be that David Beckham or Alan Shearer are interchangeable with some robot in both our hearts and minds, nor that we turn off at the prospect of robo-footie, but rather that our position will be midway between the two. We shall certainly be involved with robot-like activities in both work and leisure, yet at the same time there will be a covert distinction that the robot world is in some way ‘different’. But might such a boundary become increasingly blurred, especially as the last of the baby boomers give way to generations that cut their teeth on IT?
There is no doubt that robots of the future will be highly interactive and efficient at what they are designed to do, and will in turn make our lives more functionally streamlined. However, the real impact on the lives we are about to live in this century surely boils down to whether we view robots as independent beings and ultimately, of course, whether they will ever con us into thinking that they have views about us, and enjoy that secret, private world that makes life, for us humans, worth living: the inner, subjective state of consciousness.
Of course, merely behaving as though you have emotions, and indeed tugging at emotions in others, in no way implies consciousness. Just look at the pet robot-dog from Sony, named ‘Aibo’ after the Japanese for ‘companion’. As soon as Aibo came on the market, 3,000 were sold in twenty minutes at $2,500 each! No one claims, least of all Sony, that Aibo is conscious, but there is perhaps a nagging, deep-seated conviction that any creature interacting with you as Aibo does must really be feeling something.
A journalist, Jon Wurtzel, had the intriguing assignment of recording his daily experiences as he looked after Aibo. At first, he claims, he felt a ‘strong emotional response’. Indeed, as soon as Aibo emerged from the packaging, Wurtzel felt himself grinning. Initially, he enjoyed petting the silicon canine and the seemingly enthusiastic responses with which his overtures were greeted. Yet as time went on he grew increasingly frustrated because he felt no further relationship developed; Jon Wurtzel felt ‘let down’.
Still, Sony are now taking some 60,000 orders a month for a new version of Aibo, this one resembling an unlikely pet, a lion cub. A tame lion cub is perhaps more a creature of the imagination than a domestic dog, and as such could allow for an easier suspension of disbelief if it deviates from what we would normally have expected of a developing relationship with a pet. The cub has more touch sensors for ‘intimate interaction’, plus it will ‘understand’ fifty words and be able to imitate the tones of human voices. An added feature is that the owner will be able to set preprogrammed movements for the robot through a personal computer. It will be interesting to see how someone like Jon Wurtzel, or indeed any of us, might fare with this new pet. Will we still feel ‘let down’, or will the new features – or the fact that the ‘pet’ is no longer like a dog – actually change our expectations?
Another factor that has been missing to date in this cyber-pet is the demand for constant attention. The toy Tamagotchi is only two-dimensional but arguably stirs the emotions by threatening to ‘die’ if left unattended. Or it could be that we subconsciously demand a greater repertoire of expressions and gestures in others in order to be convinced that they are sentient. It is hard to specify exactly what it is about a face and its expressions that makes us break into a grin and indeed what mysterious sights and sounds make us work at sustaining a continuing emotional interaction. Following this train of thought, scientists at Tokyo University are working on a robot with more human features, such as video cameras for eyes, which are programmed to mirror the expression of anyone looking at her; yes, this face is female. Along similar lines, the best-known robot for facial expressions is ‘Kismet’ at MIT. However, a team at the Robotics Institute at Carnegie Mellon are developing a robot with a ‘friendly’ face. This time the face in question is actually a flat-screen monitor, with animated graphics of a model (again female) face; perhaps the most important innovation of all is that the friendly face moves her lips in synchrony with her voice output. In the future the plan is to equip the face with sophisticated software so that she can ‘work out’ the type of people more likely to be willing to interact with her, and then focus on them.
So in the future you will walk into a room full of people, and then suddenly the face on the wall – or indeed of a robot that has been moving around – will single you out and start talking to you alone. And the technology will not stop at picking up on a human's subliminal body language at a party but extend to maintaining an apparent awareness of the complex social rules that underpin our lives.