Tomorrow's People


by Susan Greenfield


  A third approach that is as fast as brain operations themselves is MEG (magneto-encephalography), a technique that monitors the minute changes in the magnetic fields generated when brain cells are producing their electrical signals. Now the timescale is down to thousandths of a second, but one problem is that the magnetic fields become increasingly hard to detect as one tries to probe deeper into the brain. The other difficulty is the relative lack of precision with which it is possible to visualize individual brain cells, or even clusters of cells – far worse than with fMRI. Obviously, one solution will be to combine the two techniques, marrying the fast timescale of MEG with the precise spatial resolution of fMRI. But even then there will still be a long way to go to capture the activity of small networks of cells working together over a fraction of a second.

  This ideal scenario may actually already have been realized, at an experimental stage, using optical dyes that fluoresce as brain cells become active. In rat brains, for example, neuroscientists can already detect a single cell working within a time frame of milliseconds. When we look at the brain in this way we can see, on a slowed-down film, that hundreds of thousands of cells can synchronize together and then just as suddenly stop, within a mere ten milliseconds. Such a flash flood would be completely lost with current imaging techniques in humans; but this ‘optical imaging’ entails applying toxic dyes, and thus is of use only in the laboratory, not the clinic.

  The challenge now is to develop a technique that works over the same timescale as these dyes but that is non-invasive – the ideal would be to rely not on blood flow but on the voltage generated in neurons themselves – and at the same time to have a way of monitoring that is non-toxic. As such, the barriers now might be merely technical: it may be possible, for example, to devise a way of exploiting the fast time frame of voltage changes across the membranes of neurons, as optical imaging does, but to monitor them with the kind of non-invasive detection methods of fMRI. In any case, it is not completely crazy to predict that, within the next decade or so, we will be able to observe the activity of the brain in an awake human subject and correlate what they are doing or thinking or feeling with the shifting configurations of the active assemblies of working neurons in their brain. What this exercise actually tells us about how the brain works is another matter – we will need to know more than just the fact that a brain region is active during the performance of a certain task in order to work out how the neurons operate within that brain region, and how in turn all that cohesive activity fits into the grand scheme of holistic brain function.

  But some, like Ray Kurzweil or Freeman Dyson, predict a far more dramatic turn of events ten or twenty years on, and a far more radical development in the way in which we understand the brain. If we can achieve non-invasive brain scans using not cumbersome external equipment but nanorobots of some sort within the brain, scanning could take place more cheaply, and therefore much more often and in many more of us. This would lead to two further possibilities. Firstly, our brains could be scanned, ultimately, all the time. We could therefore provide an endless output of our brain operations to any interested third party. The second possibility is more sinister still: if it becomes possible to monitor something easily, and to overcome the issue of decoding, then it is only a small step to manipulating it with equal facility. Hence even if we didn't know exactly how certain wholesale brain-cell assemblies and configurations led to certain states of mind, we could still induce those states of mind simply by driving the brain into the known configurations. This is very different from ‘merely’ attempting to manipulate thought through isolated local implants with a limited sphere of stimulation: this time the wholesale landscape of the brain – not a focal, local region – is the target.

  And, contemplating this very big question of whether it would be possible to manipulate the brain with such precision, we can let our imaginations run still further. It may eventually be possible to download information about those brain states onto a chip or CD. Might we end up with the digital equivalent of an individual human mind, disembodied entirely from all the messy biology that created it? Jim Gray of Microsoft foresees that neither the cost nor the size of the computation required will be any problem in creating a complete digital video record of one's life. The problem would be more how to digest, analyse, organize and retrieve this information. In fact it goes still deeper than that: how do we identify what we mean in the first place by ‘information’? There is no problem in counting, say, a memory of the date of the Battle of Hastings as information, but when it comes to a memory of an event – a day at the sea last summer, for example – then things become potentially far more tricky in terms of what you would download.

  For example, imagine you had captured one scene of the day at the sea as one of my personal memories: there it is on the screen, a windswept, deserted beach, with the shadows long across the sand. Of course, you could easily look up at what point in my life the scene had featured, and you could most probably work out whether it was sunrise or sunset, but what else would the scene actually tell you? Add in the sounds of waves crashing on the beach and sucking back, along with the calls of gulls and even the smells of fresh air and salt, but still you would not actually be sharing my personal memory. If I recalled such a scene, however, there would also be my invisible individual presence: my covert hopes for the day or evening; the unstated reason why I was there in the first place; the wider background of feelings, mood and disposition culminating in the conscious need for a holiday, and the expectation of returning to the city the next day; along with a lifetime of culture and prejudices leading to my general views of holidays, cities, beaches and isolation, and so on. In order, therefore, to share my particular take on this scene you would have to download a vast amount of additional information: my whole life story, and therefore most of my other memories as well – which would itself need to be set in the context of ever wider value systems and assumptions.

  But even if you could cross-reference exhaustively, how would you get to know my personal feelings concerning that day? You would need to download everything about me, not just the information content of my brain, but the ongoing status of my hormone and immune systems as well. More difficult still, how would you address the tricky problem that my attitude to the day has continually changed as my life has unfolded, and that the memory has been revised accordingly as I have aged? At what stage would you be accessing the memory, and how would you know precisely what it felt like to be me on the beach at that time? Cyber-biographer that you may be, you would not have hacked into the essence of that memory because the day at the sea never existed as a free-standing, objective phenomenon; it is simply not reducible to ‘information’.

  Even if we had the awesome technology to transfer every piece of information contained in the brain to an artificial system, it would still not be the same as a real brain in terms of how that information was used. A central issue is that in the brain the hardware and software are effectively one and the same. The size and shape of a brain cell are critical to how it operates: these physical characteristics will determine its efficiency in integrating incoming electrical blips into an all-or-none signal. This signal will then become one of up to 100,000 inputs to the next neuron along. However, the overall size and shape of a neuron is highly dynamic, subject to change in accordance with how hard the cell is working, which in turn is dependent on how actively the neuron is being stimulated by other neurons. The physical features of the neurons, and the network they form with other neurons – the hardware – are thus impossible to disentangle from the activity of those neurons during certain brain operations – the software. This feature of intertwined structure and function must be kept in mind if we dream, as many do, of building a machine ‘just like the brain’, or even better.
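
  A toy model makes the point concrete. The minimal sketch below (Python, with purely illustrative numbers rather than anything measured) treats a neuron as a simple leaky integrate-and-fire unit: its ‘hardware’ parameters (a capacitance and a leak, standing in for size and shape) directly determine how the very same stream of incoming blips is turned into all-or-none signals.

    import numpy as np

    def integrate_and_fire(inputs, capacitance=1.0, leak=0.1, threshold=1.0, dt=1.0):
        """Leaky integrate-and-fire cell: the membrane 'hardware' (capacitance,
        leak) sets how incoming blips are summed into all-or-none spikes.
        All values are illustrative, not physiological."""
        v = 0.0
        spike_times = []
        for t, drive in enumerate(inputs):
            # the membrane potential integrates the input and decays through the leak
            v += dt * (drive - leak * v) / capacitance
            if v >= threshold:           # all-or-none signal
                spike_times.append(t)
                v = 0.0                  # reset after firing
        return spike_times

    # The same input stream gives different outputs once the cell's physical
    # characteristics change - structure and function cannot be disentangled.
    stimulus = np.random.rand(200) * 0.3
    print(integrate_and_fire(stimulus, capacitance=1.0))
    print(integrate_and_fire(stimulus, capacitance=3.0))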

  There are really two completely distinct issues for the potential artificial brain of the future. The first issue is that of creating synthetic brains that have faster processing power than ours do – an issue of quantity. The second issue is that of designing synthetic brains that actually do what ours do, perhaps even better – this is an issue of quality that does not necessarily follow from an improved quantity in brainpower.

  Let's look first at the simpler issue of quantity. Unlike most things to do with the brain, its brute processing power can be readily measured and compared with that of present and future computers. According to Ray Kurzweil, by 2019 a mere $1,000 will purchase processing power to match that of the human brain. In a similar spirit, Hans Moravec evaluates the growth in computer power in terms of one task in particular: computer vision. The power to process at 1 MIPS (million instructions per second) is sufficient for a computer to extract simple features from real-time imagery, say, tracking a white spot on a mottled background. At 10 MIPS, a system can follow complex grey-scale patches; such power already underlies the ‘smartness’ of smart bombs and cruise missiles. Processing at 100 MIPS, a system can follow moderately unpredictable features like roads; 1,000 MIPS is sufficient for coarse-grained, 3D spatial awareness; 10,000 MIPS will allow a system to find 3D objects in clutter. For robot vision, 1,000 MIPS is needed to match the precision of the human retina.

  If AI is to progress, this level of power must become more affordable for modelling the brain. If 100 million MIPS can simulate 100 billion neurons, namely a brain, one neuron would be worth about 1,000 instructions per second. However, this stratospheric capability would still not be enough, as a neuron could conceivably generate up to 1,000 electrical blips (action potentials) per second. The important thing is to improve the ratio of memory to speed and to develop ever more miniaturized components, which will have less inertia and operate more quickly using less energy.
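
  For readers who like to see the sums, a few lines of Python spell the estimate out, using only the round figures quoted above; this is back-of-envelope arithmetic, not measurement.

    # Back-of-envelope arithmetic with the round figures quoted in the text.
    total_mips = 100e6                           # 100 million MIPS for a whole brain
    instructions_per_second = total_mips * 1e6   # 1 MIPS = 10**6 instructions per second
    neurons = 100e9                              # 100 billion neurons

    per_neuron = instructions_per_second / neurons
    print(per_neuron)                            # about 1,000 instructions per second per neuron

    peak_spikes_per_second = 1000                # a neuron may fire up to ~1,000 times a second
    print(per_neuron / peak_spikes_per_second)   # roughly one instruction per spike - hardly generous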

  Still, one prediction is that within fifteen years human-level AI, or rather the artificial ‘processing power’ of machines, could be operating a hundred times more rapidly than we do when we are thinking. At the moment speed, not memory, is the limiting factor. The processing power of the human brain seems staggering: 100 million to 100 billion MIPS. Compare this value with the 1,000 MIPS of a high-range PC today, and with the most powerful supercomputer, which can field 10 million MIPS. By 2005, Blue Gene, from IBM, will be able to offer one billion MIPS.

  But speed is not everything: we cannot simply extrapolate from Blue Gene in 2005 to a super-human AI ten years later. A primary constraining factor is simple feasibility. In 1965 Gordon Moore, later co-founder of Intel, introduced the now-famous law that bears his name, which predicted that the number of transistors per square inch of integrated circuit – and hence processor speed – should double every year. This figure has since been revised to every eighteen months. However, Moore's Law is doomed to become obsolete eventually, due to physical limitations on the size of the chips that store and process information. Some predict that the limits of current silicon technology will be reached as early as 2007.
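
  To see what an eighteen-month doubling time implies, the short sketch below (Python) projects transistor counts forward; the starting year and count are assumed round figures chosen only for illustration.

    # Illustrative projection of Moore's Law with an 18-month doubling time.
    # The 2003 baseline of 100 million transistors is an assumed round figure.
    def transistors(year, base_year=2003, base_count=100e6, doubling_years=1.5):
        return base_count * 2 ** ((year - base_year) / doubling_years)

    for year in (2003, 2007, 2010, 2015, 2020):
        print(year, f"{transistors(year):.2e}")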

  This is where the issue of quality in computer and brain processing comes in – what we are actually building when we attempt to build an artificial brain. The quantity arguments of those such as Kurzweil turn out in any case to be largely irrelevant. Merely counting MIPS equates the brain to an information cruncher, which it most certainly is not. A proper artificial brain, akin to a biological one, would need to include the chaotic chemical and electrical events therein, and the intricate emergent properties that arise from them. And even if we could circumvent the limitations to Moore's Law, and even if we lowered our sights to developing something that did not emulate a human brain in its entirety, what might it actually do? One approach is to ignore the underlying molecular and biochemical mechanisms and concentrate on copying a macro-trait of our brains – learning through strengthening of connections; this was the strategy behind the design of Steve Grand's Lucy. Many other AI followers, such as Nick Bostrom and Ray Kurzweil, envisage that computers will learn best by strengthening their artificial neuronal connections (‘synapses’) by means of repeated experiences; the main requirement is that the learning will be unsupervised. Simulation of the senses could be easy with video-cameras, microphones and haptic sensors.
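
  The macro-trait in question, connections strengthened purely by repeated and unsupervised experience, is captured by the classic Hebbian rule. The sketch below is a generic toy version in Python, not a description of Lucy or of any system Kurzweil or Bostrom has actually proposed.

    import numpy as np

    def hebbian_update(weights, pre, post, rate=0.01):
        """Strengthen a connection whenever its pre- and post-synaptic units
        are active together (Hebbian learning). No teacher or error signal
        is involved, so the learning is unsupervised."""
        return weights + rate * np.outer(post, pre)

    rng = np.random.default_rng(0)
    weights = np.zeros((4, 8))            # 8 inputs feeding 4 output units
    pattern = rng.random(8)               # a repeatedly 'experienced' stimulus

    for _ in range(100):                  # repeated experience of the same input
        post = weights @ pattern          # simple linear response
        post = np.maximum(post, 0.01)     # a trickle of activity so learning can begin
        weights = hebbian_update(weights, pattern, post)

    print(weights.round(3))               # connections driven by the input have grown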

  On the other hand, Igor Aleksander points out that these ‘synaptic weight changes’ are only one method of learning in neural models. In his own simulations he uses over fifty different types of neurons, and they do not all necessarily learn. In those that do, some learning is indeed ‘one-shot’, as in the real brain, rather than being always incremental. After all, there is far more to the different types of learning and memory of which the brain is capable than the simple algorithm of strength-of-synapse-through-experience. Even if we overlook the micro level of cellular changes, the ever-shifting hardware-software interface, there is still the next level up, beyond nets of neurons: the gross three-dimensional brain itself. Remember that the brain is not a homogeneous tabula rasa – rather, in some as yet poorly understood grand scheme, anatomical brain regions have differentiated functions.
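
  The contrast Aleksander draws can be made concrete with a toy example (Python; purely illustrative, and not taken from his simulations): an incremental learner creeps towards an experience over many repetitions, whereas a one-shot learner lays the trace down in full at the first exposure.

    def incremental_learn(memory, experience, rate=0.1, repetitions=5):
        """Synapse-like learning: the stored value moves a little towards the
        experience with each repetition."""
        for _ in range(repetitions):
            memory += rate * (experience - memory)
        return memory

    def one_shot_learn(store, key, experience):
        """One-shot learning: a single exposure is enough to record the trace."""
        store[key] = experience
        return store

    print(incremental_learn(0.0, 1.0))                   # ~0.41 after five repetitions
    print(one_shot_learn({}, "day_at_the_sea", 1.0))     # stored immediately, in full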

  It is still a mystery how this elaborate nested architecture coordinates itself to give rise to brain functions. For example, the outer layer of the brain, the cortex, is domain-specific: it looks homogeneous, yet different areas are clearly linked to specific senses, and others to vaguer ‘association’ functions relating to memory and thinking. A puzzle that has plagued me personally is why electrical signals arriving at one part of the cortex give rise to a visual experience, and in another to an auditory one. As soon as electromagnetic or sound waves in the physical world have been transduced by the retina or the cochlea respectively into an electrical impulse, the brain should have no obvious basis for being able to tell any difference. But it does. Some have suggested that the critical factor in determining the type of sensory experience is the connectivity of different regions of cortex to other brain areas. But surely this is simply postponing the question rather than answering it. Why should one set of connections give rise to sight, and another to the totally different experience of hearing? How can one signal be intrinsically different from the other when all their electrical features are the same? Certainly a factor in this apparent miracle is interaction with the environment. In blind people, cells in the visual cortex respond to the tactile stimulation of reading Braille; similarly, there are (admittedly rare) cases of synaesthesia – seeing sounds and hearing colours – which also testify to a far from rigid compartmentalization of function in brain areas.

  So how do these considerations help us in building a more humanlike brain, as opposed to a machine that learns efficiently? Steve Grand, along with most true brain modellers, has recognized the importance of chemicals within the brain. He factors ‘levels of a punishment chemical’ into his particular system, whilst sleep is modelled by ‘simply attaching chemoreceptors to their threshold parameters and defining the necessary chemical reactions’. Yet chemicals themselves are not autonomous brain components – it is how they change the configurations of brain cells that counts – and the same chemical can work in different ways in different parts of the brain.
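
  One way to picture the kind of mechanism Grand describes is a firing threshold that is itself a function of a circulating chemical level. The toy sketch below (Python, with invented names and numbers) shows only the general idea and makes no claim to reproduce his system.

    def firing_threshold(base_threshold, chemical_level, sensitivity=0.5):
        """Toy 'chemoreceptor': the cell's threshold parameter rises with the
        level of a circulating chemical, so the very same neuron behaves
        differently as the chemical ebbs and flows. Names and numbers are
        invented for illustration."""
        return base_threshold * (1.0 + sensitivity * chemical_level)

    for level in (0.0, 0.5, 1.0):         # e.g. a 'punishment' chemical building up
        print(level, firing_threshold(1.0, level))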

  Another approach, albeit still hypothetical, would be to use the nascent technology of nanoscience, which holds the promise of unprecedented control over the structure of matter. In theory, nanoscience might eventually enable us to register the position of every neuron and synapse in the brain, and make a map that could be scanned and copied. But like the Human Genome Project, such an exercise would require no insight into human cognition at all. We would therefore end up with a simulacrum of the brain, not a model that extracted the salient features and left out the rest – rather, nanoscience would give us an artificial brain indistinguishable from a real one.

  But for many, the promises of nanoscience depend on the impossible – on changing the fundamental laws of chemistry and how basic bonds form matter. For theoretical as well as practical reasons, then, it is best not to assume that we will soon be building simulacra of biological brains, but to think instead of the more realistic prospect of artificial brains – some kind of model or abstraction of their biological counterparts. A big question is, then, whether such powerful systems will actually rival us in terms of original thought and indeed consciousness – whether they will be our equals on a qualitative level.

  Whatever the answer to this question, it seems fairly certain that, on the quantitative level, computers will soon overtake us in brute processing power – so long as we can deal with the problem posed by the physical limitations on the storage of information. And once they do so they will surely be capable of artificial evolution, designing machines more effectively than humans can. Such machines will also accelerate other areas of technology. Once computers can use oral communication, the web could be superseded by the Machine, a single entity of millions of connected speaking machines. The Machine will know a user's profile of interests and preferences by as early as 2011, and by 2021 will control economic management, so that supply and demand become perfectly balanced. In one sense, the computers will have taken over control of the planet.

  But of course an automated system is far from being autocratic, or even autonomous. The prophecies of those like electrical engineer Kevin Warwick, and indeed of futurologist Ian Pearson, suggest an independent-minded executive – a silicon controller, independent to an extent but ultimately answerable to a human – rather than a mere slave-system that makes life easier for us.

 
