Tomorrow's People

by Susan Greenfield


  Already, three-quarters of American high-school students prefer researching on the internet to using reference books: People of the Screen indeed. Moreover, there is some evidence that computer-assisted learning is beneficial. Middle-school students in rural Georgia who have been given laptops are showing improvements in grades and attendance rates, whilst many of their siblings who had dropped out of school have returned. One reason for this trend could be that learning is optimal when interactive – computer learning, using keyboard, screen and mouse, enables more interaction than the standard classroom does.

  On the other hand, the arrival of computers in the classroom has not been accompanied by any statistically significant improvement in pupils' academic achievement. Some warn that we should learn from the mistakes of failed IT investment in the workplace in the 1980s. A key issue overlooked some twenty or so years ago was the attendant need to spend two to four times more on training than on the technology itself. Indeed, apparently three to five times more needs to be invested in organizational restructuring and job redesign than in the actual equipment. But finding funds is already a huge problem for educational establishments, not to mention the difficulties of training every teacher.

  Schools in the Information Age will not simply be the same kind of schools that we went to with merely more computers in them! One of the most fundamental changes in the future, as we saw earlier, will be in oral communication with, and personalization of, machines. Children will grow up interacting with dozens of ‘personologies’ emitted from their PCs, and therefore will be as comfortable communicating and socializing in the cyber-world as in the real one, perhaps even more so. The brain is particularly plastic, impressionable, when it is developing. Early exposure to computers, hypertexting, mouse manipulation, menus and binary decisions will inevitably leave its mark on the nascent synapses. A recent finding widely reported in the news was that young people have already developed thumbs as dextrous as their fingers, due to incessant playing of the Game Boy and text messaging; believe it or not, children are even starting to point with their thumbs! So surely any day now exams, like all school activities, will be converted to a screen-based medium. This change will be welcomed even by the current generation who, increasingly used to assimilating information via keyboard and screen, are still forced to sit for three hours with pen and paper and expected to express themselves in increasingly obsolete and arduous handwriting.

  However, we have no idea whether this new type of environment will be ultimately beneficial or deleterious. It could be the case that multimedia stimulation, assaulting the senses, hard-wires the brain for faster cognitive processing. On the other hand, what about reflection and imagination? Will the ready-made, second-hand images obviate the need or opportunity to think up one's own? Will the urgency of the multimedia moment keep the next generations ruthlessly reacting to the present, with no time left to retreat into daydreams?

  But perhaps we should not judge new minds by old values. Since the essence of the human brain has been, for tens of thousands of years, adaptability to new external demands, perhaps we should simply face the fact that the new generation of brains will be fundamentally different from ours, in that they will be specifically suited, cognitively and physically, to computers and a cyber-world. Of course, for those born at the beginning of this century technology will be changing even faster; more than ever, succeeding generations will need to adapt to technical innovation. However, one fundamental question – given the potential of the screen to protect upcoming generations from the hectic, random heave of humanity ‘out there’ – is whether young people will be able to integrate material that they can understand intellectually but not necessarily appreciate emotionally. Will the new way of life in the 21st century mean that young people are more mature, or less?

  My mother, born in 1927, often used to tell me that in her day there was ‘no such thing as teenagers’. In fact, the concept is still unknown in some less developed countries, and it was indeed only in the post-war world of the second half of the 20th century that teenage culture started to flourish. Before then adolescents used to be apprentices, students, soldiers and farmers, but not teenagers. Reform of the child labour laws in the 1930s, the spread of suburbia and targeted youth marketing in the 1950s combined to give rise to the culture, or cultures, of the iconic dress, music, speech, ideas and behaviour that each of us remembers as ‘special’ in our own era, and, of course, superior to anything else before or since.

  Yet it may be that adolescents are doomed to return to their original obscurity as the teenage age gap closes. Certainly few would dispute the precocity of the young of today, let alone of the future. The average age for starting menstruation is now twelve, compared to fifteen in the 1800s. There has also been a dramatic increase in teenage mothers – the UK currently has the highest number of unintended teenage pregnancies in Europe, and in a recent survey almost 38 per cent of girls aged fifteen admitted to having had sexual intercourse. A conspicuous consumer culture geared, via magazines, adverts and music, to steering ever-younger girls towards clothes and boyfriends has been blamed by many critics for causing this change. And this is not so much ‘growing up’, in terms of having more control and insight over one's feelings, as simply losing one's childhood at an earlier age.

  Yet the demise of teenagers is not entirely due to the increased potential for IT-agility and independence at a pre-teen age, but also rests on the notion that the process of getting older is slowing down in grown-ups. Certainly, the introduction of lifelong learning, and its necessity in the workplace, means that adults need to preserve their learning skills as much as possible whilst somehow still using their previous experience to best advantage. These contradictory demands of being open to innovation but at the same time being able to evaluate new people, processes or things in the light of experience could well cause emotional turmoil and distress. But the central issue here is that adults will be behaving, or trying to behave, more like young people for longer. Undoubtedly, the advances in healthcare and health awareness, combined with an increased flexibility in choosing a life partner, will blur the distinction between the teenager and the twenty- or even the thirty-somethings, just as advances in healthcare will blur the distinction between middle and old age.

  But irrespective of blurring age boundaries, the problems traditionally faced by teenagers, and now by those behaving like teenagers, will not go away: concerns of identity and introspection will, if anything, probably increase. According to the 1995 Youth Risk Behavior Survey, 30 per cent of American high-school girls reported seriously considering suicide, along with 18 per cent of boys – indeed suicide is one of the leading causes of death in this age group. There are increasing expectations to study and perform better, plus peer pressure to have sex, with the attendant risks of HIV and early pregnancy. Making your own decisions and sorting out your own system of values has never been so important as for the young person in the modern cyber-shadowed family, nor so difficult, given the diminution in parental influence coupled with the decline, and perhaps eventual demise, of the notion of a constant and unambiguous individual self – especially when it comes to that traditional teenage obsession: relationships, or more specifically their initiation – flirting.

  As with its simple forerunner, text messaging, the appeal of cyber-flirting seems obvious. It makes you feel connected – part, even, of an atavistic type of tribe of other young people. There is also a feeling of immediacy and hence excitement, as well as the novelty of the unknown. A more sinister and deeper concern, however, is an issue that has been cropping up time and again as we ponder what the future holds – the issue of identity. One cyber-flirter claimed to enjoy the activity because it ‘makes me feel like a different person’. The staccato texting style can conceal a multitude of difficulties with interpersonal skills: concealment behind a persona of sophisticated software would be the natural next step for the fledgling adult nurtured by the cyber-world. And as with the prospect of a fictional cyber-family, so the extension of cyber-flirting could be flirting with fictional lovers. As a sign of the times, and also as evidence that such predictions are not too outlandish, I was amazed to discover that such a service already exists: for only $2.40 a month a Japanese mobile-phone company arranges messages from virtual boyfriends – computer-generated, completely fictitious characters.

  Just imagine how this trend may develop a few decades on, when teenagers, currently the most IT-agile generation, with the most leisure and freedom from professional and personal commitments, no longer have a monopoly in any of these areas. We might be faced with a society without the ‘real’ courtship that has characterized Western society for centuries, with all the traditional angst, fun, plotting and suspense replaced by a more anodyne activity where the individual is less vulnerable but, by the same token, not as fulfilled.

  ‘Children's physical space is getting smaller at the same time as the cyberworld is getting bigger,’ warns Judith Wagner, a child development specialist from Whittier College, California. Young people are spending more time indoors in front of their PCs; ‘nature’ is now something they see on a video. Then again, more interaction with the physical world may be possible, though from a remote and safe distance. For example, in 1994 Ken Goldberg developed the Mercury Project, which combined the web and robots to enable a network of individuals to operate, via their PCs, a robotic excavator. This robotic arm was guided, remotely, in an attempt to locate real items of ‘treasure’ buried in a real tub. He followed this with the more team-like sophisticated activity of ‘Telegarden’. This time a real, six-foot-wide tub was divided up between a network of PC-users who each cultivated a small patch via a robot arm. This is the first true manipulation of the outside world from the cyber-world, inside, viewing reality indirectly. Similarly, though on a sweeping scale, the web has already made it possible for anyone to observe planet Earth live on screen, as shots are relayed in real time via satellite. ‘I felt like I was seeing God,’ enthuses Technophile Mark Pesce.

  The impact of the Information Age is not simply the psychological revolution and evolution of a cyber-friend, but rather the raw reality that your computer will connect you intimately with the whole planet. Some have even developed the somewhat fanciful metaphor of the Earth as a brain, with humans as previously isolated cells and the connections between them now newly forged as the net. In any event, such cyber-globalization means that we are due for some big cultural changes. Along with the facility for translating languages in real time and access to worldwide news, there may even be a ‘cybrarian’, which will use voice recognition to find the educational materials that students need on the internet.

  Moreover, information will be organized in the non-linear, more hypertext style of free association. And such associations will be expressed in visual media to convey experiences, in stark contrast to words, which convey ideas. So the next generation may well have more visual sensibilities, and be as proficient at manipulating images as their parents and grandparents – us – once were with words. Once literacy is truly as outdated as the slide rule and log tables are today, education will be transformed entirely into an experience rather than a thought process. For example, the mandarins at the BBC already foresee screen stuntmen using Pythagoras' theorem to calculate a trajectory for falling, a fire-eater demonstrating the nature of the elements, and 3D animation of atoms. But graphic and lively though such material might be compared to the ancient chalk-and-talk methodology, there are drawbacks.
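  As a rough illustration of the kind of calculation such a screen stuntman would perform – a minimal sketch in Python, with made-up figures rather than anything from the BBC's material – Pythagoras' theorem gives the straight-line length of a fall from its vertical drop and horizontal run:

```python
import math

# Illustrative figures only – a hypothetical stunt fall, not data from the text.
drop_height_m = 12.0     # vertical drop from ledge to airbag
horizontal_run_m = 5.0   # horizontal distance cleared during the fall

# Pythagoras' theorem: the straight-line path is the hypotenuse of the
# right triangle formed by the vertical drop and the horizontal run.
straight_line_m = math.sqrt(drop_height_m ** 2 + horizontal_run_m ** 2)

print(f"Straight-line length of the fall: {straight_line_m:.1f} m")  # 13.0 m
```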

  The first problem is that an emphasis on interactivity would blur the distinction between the new tendency for a visceral, immediate response and a thought-out opinion. If a student expresses a view, how will it stack up and register against that of an expert? How will such a comparison be made? In the long run there may not be any ‘experts’ at all, as the notion of a corpus of knowledge will have become obsolete. And since the children of the future will no longer need a long attention span to follow a linear narrative of words, but rather will be trapped in the immediacy of the ‘now’, ever-stronger flashing lights and bleeps may be needed to sustain motivation or concentration over time frames of seconds.

  Secondly, this noisy, bright and fast-moving display on the screen – transformed at the touch of a button, or of the screen itself, at any moment you choose – cannot help students work out abstract concepts from instant, screen-based evidence flashed up as literal images before their eyes. Indeed, there is a risk that the use of new technologies for education will shift the focus to passive ‘fun’, indistinguishable from the rest of our sensorily overloaded cyber-lives.

  But then again, we saw at the outset that interaction has huge potential to be educative. Take, for example, Lego Mindstorms, computer-controlled, programmable Lego bricks. Apparently, when Mindstorms was undergoing trials the children used for the testing couldn't believe that what they were doing might soon be regarded as school-work: it was too much fun. The Mindstorms bricks needn't be attached to a computer. Rather, the software is downloaded to a brick that then runs the programme. Mindstorms bricks seem to have struck just the right balance: there is enough pre-packaged technology without hampering creativity. The child, and indeed the interested adult, learns by doing. In general, interaction seems not sufficient in itself but rather a necessary starting point from which to develop the best methods for teaching.
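  To make that ‘download, then run standalone’ pattern concrete, here is a minimal sketch in Python; the Brick class and its methods are invented stand-ins for illustration, not the actual Mindstorms programming interface:

```python
# Hypothetical stand-in for a programmable brick: a PC downloads a program
# to the brick, which then executes it with no computer attached.
class Brick:
    def __init__(self):
        self.program = []

    def download(self, program):
        # PC side: transfer the list of steps into the brick's memory.
        self.program = list(program)

    def run(self):
        # Brick side: step through the stored program autonomously.
        for step in self.program:
            print("executing:", step)

brick = Brick()
brick.download(["motor A forward", "wait 2 seconds", "motor A stop"])
brick.run()  # the brick now runs on its own, detached from the computer
```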

  The advent of nanotechnology, which promises unprecedented control over matter, could also inspire the logical successor to Mindstorms. Mark Pesce predicts a similar ‘toy’ for manipulating atoms. Children are already learning with a computer coupled to bricks; so the hypothetical child of the future might mess around with a plastic nut – essentially a pile of atoms – linked to a computer. It would be relatively straightforward for the child to create a ring of carbon – benzene – then to apply basic mechanics at this atomic level to end up with, say, a molecular calculator. The big question is, of course, whether nanotechnology will ever deliver its potential and achieve this type of precision; many are understandably sceptical.

  Nonetheless, a scaled-up version, at least, might be feasible, whereby components are assembled not on a nano- but on a micro-scale, instructed by the appropriate software. The PlayStation 2, launched in November 2000, proved the fastest-selling consumer electronics product in history – and it offers a glimpse of just how versatile cyber-toys will be in the future. As well as playing games, the magic box can play DVDs and provide PC-type connectors. There are film-like simulations of the human face, as well as an image scanner and an interface with a camcorder. The user can thereby insert images of themselves as game characters. We saw that future readers – perhaps they should now be called simply consumers – may be able to determine the course of a novel and make an active contribution to the narrative; so here they are, starring in a game. Once more the line is blurred between the individual who has created a work and the person who is reading or watching – the barriers between one mind and another are breaking down into a kind of all-encompassing networked brain. Moreover, as children and adults alike participate in novels and games, and as those same users have less and less practice at abstract thought, less imagination and less time for reflection, so there is a risk that the significance of facts, and the desire to understand what is happening to and around you, may diminish.

  So perhaps education of the future should emphasize context instead of facts, since these can be readily accessed rather than learnt by rote; homework may consist of placing those facts into different conceptual frameworks – hypertexting will come into its own. In essence, learning as we know it may vanish in favour of a free-association hypertexting that is gradually rationalized and expanded. So, a linear knowledge, say, of the Tudor monarchs of England, with related insights into the literature and history of the 16th century, will be replaced by the term ‘Henry VIII’ cross-referenced to obesity, syphilis, divorce, gender selection, as well as to marine warfare, Martin Luther, Hampton Court, red hair and other less obvious associations. But a child will no longer actually ‘know all about’ Tudor England, nor ‘understand’ the factors that drove the Reformation in Europe, nor even perhaps have a grasp of the general concept of, say, religion.
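  A minimal sketch, assuming nothing beyond the cross-references named above, of how such free-association hypertexting might be represented – the links graph and the associations helper below are purely illustrative Python:

```python
# Illustrative hypertext graph: each term maps to its free associations,
# replacing a linear narrative of Tudor history with cross-references.
links = {
    "Henry VIII": ["obesity", "syphilis", "divorce", "gender selection",
                   "marine warfare", "Martin Luther", "Hampton Court",
                   "red hair"],
    "Martin Luther": ["Reformation", "Henry VIII"],
    "divorce": ["Henry VIII", "Reformation"],
}

def associations(term, depth=1):
    """Collect every term reachable within `depth` hops of cross-references."""
    found, frontier = {term}, {term}
    for _ in range(depth):
        frontier = {n for t in frontier for n in links.get(t, [])} - found
        found |= frontier
    return sorted(found - {term})

# Two hops from 'Henry VIII' also picks up 'Reformation', reached only
# indirectly via Martin Luther or divorce.
print(associations("Henry VIII", depth=2))
```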

  Moreover, at the moment we all need an appropriate knowledge-base before we can hypertext at all. How will that knowledge be obtained in the future? Information is not the same as knowledge, and somehow core concepts will have to be in place in young minds in order for them to ask the appropriate questions regarding incoming information. Our children and grandchildren may well be able to roam the planet and interact with it from the other side of a screen, and catalogue facts pertaining to acid rain or depletion of the ozone layer with far more authority and cross-referencing than we ever could. But they will perhaps never take time to reflect on ways of putting those facts together in a way that we would currently characterize as understanding, at the very least, or as a creative idea, at the very best.

  A further concern is whether children of the future will ever need or want to move from the wide, sanitized world opened at a touch of the keyboard to a more visceral experience within the confined limits of a real back garden, street or park. As education becomes an ongoing experience, and therefore less differentiated from everyday life, and as that experience is increasingly screen-derived, perhaps not just the notion of ‘learning’ but even the traditional concepts of ‘school’ and ‘university’ will start to become meaningless.

 
