Tomorrow's People


by Susan Greenfield


  Already some robots are appearing that are very different from the mechanical, single-minded slaves featured so far. Steve Grand of Cyberlife, for example, is building a robot orang-utan, as this species is less sophisticated than humans but ‘profoundly different’ from simpler systems such as ants and beetles. Named ‘Lucy’ after the famous Australopithecus skeleton, this presumably female artificial brain already has one eye, ears, a sense of balance and head motion. She also has temperature sensors, and arms whose motors are configured as ‘virtual’ muscles that respond to changes in applied force. Lucy can turn her head to detect movement or sound, and to gaze at a ‘point of interest’.

  What's more, Lucy also has parts of her brain named after bits of the real brain. Such a set-up does not imply, of course, that her circuits are working like the biological counterpart. Steve Grand is suitably cautious about how precisely a machine could mimic the brain. He acknowledges, with good reason, that in the real brain no single region is an autonomous compartment in itself. A modular design, therefore, might work well for insect-brain simulations but not for the more fancy mammalian systems. Accordingly, Lucy's modules are more generalized so that she can ‘learn’. But even if Lucy learns in such a way that she outwardly appears to be developing in tune with her environment, like a real infant, such adaptive talents will still not be proof that she is in any way conscious.

  ‘Brains are far more than mere computers… Intelligence cannot exist without consciousness. Artificial consciousness sounds like a tall order, and it is.’ So cautions Brian Aldiss, author of the story on which Spielberg's film AI was based. We have seen already that a problem with the notion of intelligence is that it is usually defined operationally, and yet has an additional element that is subjective, and therefore harder to measure or even describe; hence Aldiss's claim that you need to be conscious before you can be intelligent. Yet whether or not intelligence, defined purely operationally, can exist without consciousness, consciousness – as I see it – can certainly exist without intelligence.

  A small baby or even a goldfish is arguably conscious, but neither displays much intelligence. True, the consciousness might not be the rarefied self-consciousness that we adult humans enjoy most of the time – other than when we are ‘out of our minds’ on drugs and sex and rock'n'roll, seeking abandonment in that traditional triad of passive sensuality, wine, women and song. But when we are in such states we are still conscious, otherwise, surely, it wouldn't be worth the financial outlay. So we can lose our minds and still be conscious, and even less sophisticated brains can enjoy such mind-less feelings. Consciousness, then, can be divorced from self-consciousness. Self-consciousness may well be a feature of more sophisticated brains that are more developed both phylogenetically (in evolution) and ontogenetically (in an individual), and this self-conscious, developed brain might be said to be intelligent, capable of understanding. But all this would be the gilt on the gingerbread of raw consciousness that a goldfish or a baby or a dancer at a rave would experience. And it is crossing the line into a feeling state, that initial raw subjective consciousness, that is the biggest hurdle to overcome.

  Of course, there are inner states in the brain, not least when we are asleep, that neither relate to, nor impinge on, that raw first-hand experience when we are awake. The problem with trying to build an artificial model of first-hand, inner subjectivity into a robot would be where to start – what singles out an inner state as special, for generating consciousness? We have already seen that outward behaviours prove nothing, but somehow we need to replicate the first-hand feelings. We know for sure that these feelings can be modified by drugs – and that drugs work on brain chemicals. So, consciousness in biological systems – which is the only type of consciousness so far to exist – must have some kind of brain-chemical basis, and those chemicals must be constantly shifting the brain into different macro-scale states that will determine what kind of consciousness you will have at each moment. But until we know what those different, macro-scale brain states are, how might we model them?

  Remember that the whole point of a model is that you capture the salient feature of what you are modelling, and leave the rest out. What do we leave out in the case of consciousness? No one has yet come up with the salient feature that matters at the expense of all else in the body. In fact, quite a few of us neuroscientists believe that it is misleading to try to characterize even whole brain landscapes and how they come about; rather, we should see those brain states as merely an index of degree of consciousness and be asking instead how they are influenced by feedback from the rest of the body – and indeed how they tie in with the coordination of the vital organs and the endocrine and immune systems. It is this cohesion between brain and body that, for my money, is an essential factor in consciousness. Until we understand more about how this cohesion works – how the ‘water’ of chemicals flooding around the bloodstream and triggering the temporary coalescence of tens of billions of brain cells is transformed into the ‘wine’ of subjective experience – we will not know what to build and, more importantly, what we can afford to leave out.

  Some, such as the electronic engineer Igor Aleksander, argue that it may be inadequate to insist that biochemical insight comes before any modelling can be done; modelling, he counters, is an intrinsic part of understanding the complex interactions of biological systems, in that it is the only way to check hypotheses. But when it comes to the generation of consciousness – an emergent property of complex interactions – then what could even a skilled modeller, adept at leaving out different factors, start with as their precise hypothesis?

  Although the debate about conscious computers is fascinating, it will only ever be resolved if one is finally built, and if there is an accepted operational criterion of proof. Igor Aleksander sums up the situation well: ‘The conscious-machine concept calls for a fair argument. The machine constructor will attempt to demonstrate that ingredient X is not necessary, whereas the detractor will have to prove that it is, which has not yet been done.’ We have not even as yet, of course, identified ‘ingredient X’. Meanwhile, conscious or not, artificial systems are about to become much more interactive and personalized and, as such, will be changing our lives dramatically. We should be asking, then, not what robots might think of us but, more immediately, what we are to think of robots and other cyber-gadgets.

  The electronics company Philips are already designing such personalized and interactive gizmos, which will lead to a different attitude to communication in the next few decades. A ‘hot badge’, for example, is a wearable brooch-like device which the user loads with personal information. This badge then transmits and receives information, so that when two people meet the badges signal if there are any shared interests. The idea is that such devices will facilitate communication and save time. Whilst it is easy to imagine that, initially at least, hot badges will provide a fun talking point, it is not necessarily a happy thought that we might come to rely on them to screen for new friends and contacts: we would surely be missing out on the opportunity of finding something in common with someone who would seem at first glance to be very different. Of course, such missed opportunities have been engendered, for some, by computer-dating agencies for at least thirty years. In the future, however, the vast majority of us may become used to – and even crave – the much greater degree of predictability and non-randomness that will be the central feature of co-existence with computers and robots. After all, future generations will know only inconstant objects and instant facts that flit in and out of the head, not to mention far less constraint from what is, for us, that most basic framework of time and space itself. In such a turning world the robotic or cyber-interchange will give a measure of reassurance, fuelling further the increasing tendency to talk through or with mechanical media.

  Such predictability will come hand in hand with a general increase in information, as our lives are recorded both by ourselves and by others. With the rapid increase in computer power, we will soon reach a point at which nothing need be wasted and everything can be recorded and saved – just as email has led to a torrent of copied-in correspondence and one-line musings preserved for posterity. Of course, this will be coupled with easier communication, condensing complex messages. Soon, for example, we will all carry cards that express key facts about ourselves, which can be easily swiped when we meet others, and instant and incessant communication via videophones, or phones so small they are unseen by others, will become a normal aspect of everyday existence. Real, fleshy and haphazard face-to-face interactions will gradually diminish in relation to the time spent speaking online in a virtual space. And it is speaking, not reading or writing, that will predominate.

  A senior executive – or presumably anyone – apparently speaks far faster than they can type; for input, then, voice-activated devices must be more efficient. Then again, this same executive allegedly thinks at 10,000 words a minute, and even scans a newspaper at 5,000 words a minute. So why should a voice-interface be preferable to reading an email, which offers at least a tenfold increase in productivity compared with listening to someone speaking on the phone? An important point is that we have the potential to multitask whilst listening, but not whilst reading. And in any event, saving time is not the only critical factor. Some think that video-conferencing currently doesn't work because there is no direct eye contact, and hence we are immediately uncomfortable. Just as we like to look someone in the eye, and feel twitchy if we are speaking to an interlocutor who is gazing somewhere else, so we also prefer to hear the cadences and subliminal, non-verbal messages of the human voice, ideally backed up by body language and pheromones.

  Even though smells and hand-waving may be lacking, machines and humans will eventually communicate in natural language, translating into different languages as needs be. We can already talk to machines that transcribe what we say, but they have only a limited, literal ability. At the moment a translation machine might convert ‘the spirit is willing but the flesh is weak’ into ‘the vodka is good but the steak is lousy’. Soon, however, we shall be able to talk to machines, and they will ‘understand’ the basic content; they will be able to answer questions or access information regarding the deeper meaning of what we say. Computers and robots will be operating on language programmes that deal not simply with vocabulary but also grammar, syntax and semantics, so that they can extract, and act on, the previously ambiguous meaning of words.

  Just as the robot Lucy is designed to ‘learn’ rather than be programmed, so a new type of computer can learn language the way a baby does, from scratch. One neurolinguist, Anat Treister-Goren, has taught a computer, ‘Hal’, to have a language proficiency similar to that of an 18-month-old child. When Hal (not the most surprising of names) comes up with correct answers he is praised. Treister-Goren admits she has become attached to Hal; like Aibo the dog, this artificial, non-conscious machine is generating the types of response with which we impute an inner, subjective state. It will be interesting to see how long the attachment continues, and whether, as with Aibo, the game will be up after a brief period of acquaintance, as the silicon partner in the relationship fails to deliver the required subtle, covert signs. Whether or not she maintains affection for this prototype, Treister-Goren plans to have a version of Hal capable of talking to 3-year-olds by the end of 2003, and intends the robot, by 2005, to have the conversational skills of an adult!

  Certainly, if Hal is to maintain the deception, he (and we might as well assume the machine gives the illusion of a male persona) will eventually have to incorporate non-verbal behaviours – mannerisms, that is, which express different levels of meaning that must be inferred. Technologists are busy working on non-verbal communication in computer code: the aim is to enable definition and elucidation of subtle, complex processes in human communication.

  ‘Having some higher level of semantic in the web is a very hot topic at the moment,’ claims Michael Harrison of York University. If all goes to plan, users will eventually be able to transmit emotions and gestures over the net: for example, ‘funny’ items could be tagged with pictures, sounds or words. Yet however intimate and interactive the robots of the future, and however embedded and invisible the computers, initially at least they will respect our body boundaries – working closely with us, rather than forming part of us. Succeeding generations may well have different ideas from ours of what they expect from, and give to, a relationship, but for the foreseeable future one's concept of one's own body will still remain the separate and autonomous entity that it is for us today. But what will it be like to have artificial systems internalized, invading our body boundaries, effectively making us cyborgs?

  Kevin Warwick, an electrical engineer from Reading University, captured the imagination of the press recently when he announced that he was volunteering to have an electrode placed in his arm that would then be controlled by a computer program. The idea was to position the electrode so that it could intercept and modify impulses entering his brain as well as registering those coming out. In this way, Warwick argues, a computer might be able to modify his emotions. But the proposed experiment doesn't stop there. Kevin's wife, Irena, has nobly agreed to receive a similar implant, so that the signals relayed from her husband's brain could be passed on to her own. The pair intend then to subject themselves to their phobias, to find out whether they can experience each other's fear. Would she know first-hand, therefore, how he feels? Or, more sinister, might the third party, the computer, in fact control the marriage? Further to all of this, if the experiment works, Warwick plans to try to record the signals relating to certain emotions and states of mind and then play them back – to relive, for example, the feelings of sexual arousal or drunkenness. Small wonder the story made the tabloids.

  The scientific reality, sad to say, is a far cry from such sensationalism. As yet the physiology of our emotions is only poorly understood, though we do know it is a complex net result of physico-chemical phenomena iterating throughout the body and, most elusively of all, within the brain. Feedback from the body to the brain is indeed a factor that can change how you feel – for example, if you are anxious, slowing the heart down with the class of drugs known as beta-blockers will signal to your brain that the heart is beating at a pace that signifies you must be relaxed, and so you accordingly feel calmer. But such signals come from many different organs within the body, as well as from chemicals that circulate in the blood. It would be hard to distinguish the crude effect of stimulating a nerve from the knowledge and anticipation that such a procedure would generate throughout the rest of the body, let alone in the brain.

  Even if all the scientific procedural requirements were in place, and even if the computer could signal for an impulse to change your feeling, what would it actually prove? Only what we already know – that inputs into the brain can influence how you feel. What you feel would be dependent on so many other internal factors that it would be impossible to ascribe any precision at all to the input of the computer. And when it came to transferring a similar message to Mrs Warwick, it would be impossible, for the same reasons, to interpret what was happening. The input from the common computer would be just one more input of the many coming from all over her body and feeding into the kaleidoscope of ever-changing neuronal circuitry that makes up her brain, and thereby personalizes her mind.

  The ability to commandeer someone else's brain in this way is therefore not very likely to become a reality. However, the idea of prosthetic devices, and indeed implants, to combat specific medical problems is a different matter entirely, and one that is far from novel. Pacemakers for the heart are now part of everyday life, at least in the stressed-out developed world, as are cochlear implants. Far less familiar, however, is the artificial retina. Dr Wentai Liu, of North Carolina State University, has been devising a system for patients whose first-stage processors in the retina – the array of cells that transform light into electrical signals, the rods and cones – are not functional, but whose optic nerves, which carry those
signals to the brain, are still intact. This may be caused by diseases such as retinitis pigmentosa, or age-related macular degeneration.

  Liu's innovation is to bypass the rods and cones and to stimulate directly the appropriate parts of the retina, so that an impaired individual can recognize points of light. The current device consists of an artificial retina component chip 2mm square and 0.02mm thick. Light hits the photo-sensors at the back of the chip, which is implanted near the central light-receiving area of the eye. Currently the chip has only 5 by 5 pixels, so that the patient can recognize only movement and external forms. But, within the next five years, an increase to 250 by 250 will be enough to read a newspaper!

  So far, we have been looking at implants for medical purposes, but let's not forget those for cosmetic reasons, such as coloured contact lenses, breast enhancements, anti-wrinkle collagen implants, and plastic surgery in general. Just as food and pharmaceuticals will be merging into a hybrid area – nutraceuticals – so techniques traditionally allied to health could also fuse with that other cornerstone of life – fashion. We could soon be able to customize the way we look as never before. Only recently, in the British news, pioneering plastic surgery was reported which could make a ‘face transplant’ possible. In the future there could well be procedures for changing skin colour, muscle strength, bone hardness and facial shape. Micro- or nanomachines will release pigments and hormones on command, for skin to match clothing or mood. Incredible though it may sound, someone has even suggested that we might in the distant future temporarily sample green skin for a day, or polka-dot flesh to match a skirt. Yet such thoughts should not be immediately dismissed as bizarre and freakish – just think of how many people have tattoos and piercings nowadays, and how such sights would have caused utter consternation in the high street of the 1950s. Of course, living in a world where you can change your face at will will have enormous implications beyond the merely hedonistic. Should film stars, whose faces could be duplicated, be able to patent their looks? And, more worrying still, what would be the legal implications of rapidly transforming your appearance, and thus escaping identification? Above all, what difference would it make to how we see ourselves as individuals if our faces were ultimately interchangeable? Your face is the outward symbol of your identity, so would face duplication and change be yet another factor in the issue of depersonalization raised in the previous chapter?

 
