Conscious


by Vic Grout


  *

  He was hardly surprised to get a call from Jenny Smith the following morning. He was putting together some final reports for the most recent contracts he had worked on. Bob liked to submit a brief concluding note along with his invoice for services rendered. He doubted anyone actually read them. He often admitted that, ‘if an expert comes in, explains to you what was wrong with your network, fixes the problem and explains whatever it is that you mustn’t do again, why would you bother to read a report a fortnight later that tells you the same thing?’ However, it gave him a sense of completing the work properly, so he always wrote them. His smart-watch buzzed and he jabbed at the screen.

  “I take it you’ve seen this RFS thing?” Jenny snapped, clearly short of temper.

  “Well, seen it, yes … but absolutely no idea what it is or what any of it means,” asserted Bob quickly, lest there be any misunderstanding.

  “Damn!” grunted Jenny. “I was hoping you were going to help me out with this.”

  “Er, help you out?”

  “Yes, I got caught by surprise by the University’s Marketing Department late last night: that’ll teach me to answer the phone half-cut! The BBC wanted someone to talk about RFS on the lunchtime news today. I actually suggested you but they said they wanted an academic. It was late and I was tired so I said ‘yes’ almost before I’d thought about it. My thinking then was that no-one would know what this was about so I’d be as good at saying that as anyone else.” She sighed. “But thinking about it fresh this morning, I’m doubting that’s the impression they’ve been given. They may be expecting some sort of insight from me and I don’t think I can deliver that. Have you any idea what’s going on?”

  “Well, it has to be some sort of malware,” suggested Bob. “I just can’t see what else it can be. It’s not as if tens of thousands – I don’t know, maybe hundreds of thousands; maybe millions – of peripheral devices are going to independently develop faults. The networks themselves seem OK and …” He paused, reflecting on some of the earlier download delays. “Actually, come to think of it, the network – the Internet, I mean, obviously – was a bit of a pain this morning. Maybe the faults aren’t just on the edges?”

  “Anyway,” he continued, “whether they are or not, this can’t be coincidence. There has to be someone behind it; nothing else makes sense. But I can’t get much further than that. Who’s behind it and how it’s happening, I’ve no idea. This isn’t a conventional cyber-attack, for a start. It’s bigger than anything that’s ever been seen before, it’s spread across more platforms, it’s just utterly random and I can’t see any evidence of how it’s coordinated or whatever malware it might be is being spread. And who’s to benefit? Also, there’s just no consistency: it’s not as if everything’s breaking; just the odd thing here and there. That’s simply not the way it normally works: even malware tends to be predictable – deterministic really.”

  “You mean deterministic as in same-thing-in, same-thing-out, always?” suggested Jenny. She was aware that there were definitions of the word beyond computer science.

  “Yes, that’s right,” Bob confirmed. “This isn’t deterministic. Whatever these devices seem to be responding to, it doesn’t always have the same effect. It’s as if it’s not conventional computer logic at work here. It’s almost …” He paused to reflect. “It’s almost fuzzy.”

  “OK, well, that’s pretty much what I was going to say!” laughed Jenny nervously. “And I might quote you on that!”

  *

  Bob rarely watched TV, certainly not on Sundays, but he naturally made an exception now. A few hours later, he was nestled in his armchair in the corner of the lounge, with a cup of tea, ready to see Jenny on the lunchtime news. He could have stayed and watched it in his office but this was his weekend compromise. Jill was getting Sunday dinner ready in the kitchen.

  If the interview was an unpleasant experience for Bob to witness, it must have been a whole lot worse for Jenny. The piece started well enough but descended rapidly. Naturally, it was the lead news item and began with a short documentary-type summary of the emerging phenomenon, the delivery shared between the anchor-woman in the studio and a couple of reporters out on the streets. A problem the TV channels were having was that it was proving difficult to catch RFS events actually taking place. Generally the best they could do was after-shots of devices that had already malfunctioned, either stopped inappropriately or returned to normal. So, for anyone still unaware of what was unfolding around them, the story had a slightly spoof air to it. However, for the non-sceptic, the sight of the two reporters explaining malfunctioning displays, signals, controls and devices in general would have already been familiar. The focus then returned to the studio, where ‘Professor Jenny Smith, now joins us to shed some light on these strange happenings’. Bob winced inwardly.

  “Professor Smith,” started the interviewer, “so, what do we think is behind all this RFS?” And it went rapidly downhill from there.

  If Jenny had started with a frank admission that neither she nor anyone else had the faintest idea, the interview would probably have been short, disappointing but reasonably painless. However, she made the fatal mistake of opening with a somewhat more non-committal answer.

  “Well, it’s hard to say for sure,” she said nervously (inadvertently giving the impression that she might, if pressed, be able to say something or at least suggest it without being sure), and then proceeded to outline all the things she did not know about RFS. She repeated essentially what Bob had said about there having to be some malicious intent behind it all but that it was impossible to say who or indeed exactly what. She managed to list a few things it could not be but had little – nothing really – to offer by way of firm suggestions. At one point, she began to hint towards military research gone wrong but realised, in mid-sentence, that she could not justify any such claim. She then shifted towards the possibility of cyber-terrorism.

  Neither did she have any real suggestions to offer for what type of technology the attacks (if indeed that was what they were) might be using. ‘How are so many different types of device being affected across the world, Professor Smith?’ She did not know. Again, there were plenty of things to say about how this could not be effected but no insight into how it might. The interviewer initially tried to help by turning to new questions but each new question merely turned out to be another Jenny failed to answer. Gradually the tone became more confrontational. Why could the academic and scientific community (of which she was apparently being offered as the spokesperson) not explain what was going on? What were people like her paid for if they could not step up to the mark when their country, and the world, needed them? Who should they be talking to if this was beyond the experience or ability of the scientists? In total, her humiliating descent from respected senior academic to pointless schedule-filler took less than eight embarrassing minutes. The concluding ‘Thank you, Professor Smith’ had a distinct ‘Thanks for nothing!’ undertone.

  *

  But Jenny was tough, and not one to stay down long. Within the hour, she was back on the line to Bob and both were laughing at the daft futility of it all.

  “And what exactly did they expect?” she giggled almost hysterically. “I told them they’d be better off with you!”

  “Oh, yes,” snorted Bob derisively. “That would have made all the difference, of course, wouldn’t it? Because I’ve got all the answers!”

  They were interrupted by a ring at the door. The couriers had come to collect Hattie.

  Chapter 5: The Singularity

  There were a couple of pieces of better news the next morning. Firstly, the missing plane had landed safely, albeit in something of an emergency fashion and not at the intended destination airport. Failure of ground-based communications systems had been the biggest problem; the plane’s own navigation system had, on the whole it appeared, behaved. The authorities were in some doubt as to whether this had been a ‘real case of RFS’ or ‘a conventional equipment fault’. That’s a stupid thing to say, thought Bob, with mild irritation. If you don’t know what’s causing RFS, how can you make arbitrary judgements about whether something’s been caused by it? That’s like saying you don’t know whether that unexplained thing in the sky was a real UFO! He genuinely wondered sometimes whether the average inhabitant of Planet Earth could, left to their own devices, think their way out of an unlocked shed.

  The second welcome news was to be found in a message from Andy sent hours before. (Andy appeared to be an even earlier riser than him these days. Bob reflected, with some amusement, that he had not been like that at university. He had nearly failed his first year as a result of only ever attending lectures in the afternoon!) Andy was asking for some more exact details of Bob’s European itinerary as he thought they might be able to meet at some point. After a haphazard dialogue throughout the day, it was established that they indeed could, in Luxembourg to be precise, the evening after Bob’s appointment with The Commission.

  There was an international conference on The Ethics of Technology in Luxembourg City the following week, Andy explained, which had unexpectedly lost its keynote speaker and guest of honour at the last moment. Under normal circumstances, such an obligation was one of the most binding commitments an academic could enter into and not to be withdrawn from for all but the most exceptional of reasons. On this occasion, however, the intended speaker had died unexpectedly and was to be forgiven. However tolerantly the conference organising committee were prepared to look upon such a ‘no show’, though, it did leave them somewhat in the lurch, so Andy, who knew one of the conference co-chairs, had agreed to save the day through the lure of an all-expenses-paid trip and a welcome break from his normal routine.

  Bob was staying in a hotel on the east side of the city, mid-way between the centre and the airport. He had chosen it for its proximity to the Jean Monnet Building used by The Commission. Andy was going to be able to stay wherever he liked, being in the position of offering the conference such a favour. Although his conference was more central, the whole place was so small, and taxis so reasonable, that the conference committee would agree to Andy staying at the same hotel without any question. They arranged to meet there for dinner on Saturday, the evening before Andy’s talk.

  Andy’s final message of the day had the following addition.

  “BTW, Aisha and I have knocked out the attached over the weekend. I’d been asked to do something ‘futuristic’ as an editorial for an ethics newsletter so we worked together on it. I wrote the first draft, then it went through the ‘Aisha filter’ and we’ve batted it to-and-fro since. It’s probably not quite the piece I would have given them, left to my own devices, but I think it reads quite well. See what you think? (I’ve sent it to Jenny too.)”

  Bob opened the attachment on his reader, settled back with another cup of tea, and read with interest.

  ‘How Singular is the Singularity?’ (Andrew Jamieson and Aisha Davies)

  If recent headlines are anything to go by, opinion on the likelihood – and impact – of the ‘Technological Singularity’ is diverging rapidly. Is this largely because we don’t even agree on what it is?

  ‘Artificial Intelligence’ (AI) is certainly in the news a lot at the moment. But so are robots; and Kurzweil’s Singularity; and machine evolution; and transhumanism. Are these the same thing? Are they even related? If so, how? What exactly should we be arguing about? Are we worried precisely because we don’t even understand the questions?

  Well, perhaps to make a start, we should point out that intelligence isn’t the same thing as evolution (in any sense). That’s obvious and accepted for ‘conventional’ life-on-earth but we seem to be getting a bit confused between the two when it comes to machines. Developments in both may proceed in parallel and one may eventually lead to the other (although which way round is debatable) but they’re not the same thing.

  Biological evolution, as our natural example, works by species continuing to adapt to their environment. If there’s any intelligence at all in that process, it’s in the ingenuity of how the algorithm itself solves the problem – not the species in question. Depending on what we mean by intelligence (we’ll have a go at this further on), an individual within a species may or may not possess intelligence – if the individual doesn’t, then a group of them collectively might – but either way, it’s not required. Evolution works through random mutations producing better specimens; neither the species nor an individual can take credit for that – it’s all down to the algorithm fitting the problem space. Many species are supremely adapted to their environment but their individuals would fail most common definitions of intelligence.
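  A deliberately crude sketch makes the point. In the toy loop below (an illustration invented for this piece, not a model of real biology), fitness is simply the number of 1-bits in a genome, mutation is blind bit-flipping, and selection just keeps the better specimen. Nothing in the genome is remotely intelligent; any apparent ingenuity belongs to the loop.

```python
import random

def fitness(genome):
    """The 'environment': fitness is simply the number of 1-bits."""
    return sum(genome)

def mutate(genome, rate=0.05):
    """Blind mutation: each bit flips independently with a small probability."""
    return [1 - bit if random.random() < rate else bit for bit in genome]

def evolve(length=32, generations=500, seed=0):
    """Mutation plus selection; the individuals themselves 'decide' nothing."""
    random.seed(seed)
    parent = [random.randint(0, 1) for _ in range(length)]
    for _ in range(generations):
        child = mutate(parent)
        # Selection: the better (or equal) specimen survives.
        # Any credit belongs to the loop, not to the genome.
        if fitness(child) >= fitness(parent):
            parent = child
    return parent
```

  Run for long enough, the genome drifts towards all-ones without any individual ever ‘understanding’ the problem – supremely adapted, by no common definition intelligent.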

  For a slightly tangential example, we might reasonably expect ongoing engineering advances to lead to continually improving travel (or communications or healthcare or safety or comfort or education or entertainment or whatever) but these improvements might arise in other ways too. Engineering isn’t the same thing as Travel. Intelligence isn’t the same thing as Evolution. So which of these is involved in ‘The Singularity’?

  Well, the clearest – but somewhat generic and by no means universally accepted – definition of the ‘Technological Singularity’ (TS) is a point in the future where machines are able to automatically build other machines with better features than themselves. There’s then an assumption that this process would soon accelerate so that new generations of machines would appear increasingly quickly and with increasing sophistication. If this improvement in performance becomes widespread and/or general – i.e. it goes beyond being simply better suited for a particular, narrow role – then it becomes a bit hard to see where it might all end. It’s debatable, in a pure scientific sense, whether this makes for a genuine ‘singularity’ (compare with black holes and y = 1/x at x=0) but it would clearly be a period of considerable uncertainty.

  And it’s not a particularly mad idea really. We already use computers to help design the next generation of machines, including themselves; in fact, many complex optimisation problems in layout, circuitry, etc. are entirely beyond human solution today. We also have machines producing machines – or components of machines, from simple 3D printers to complex production lines; and, once again, the efficiency and/or accuracy of the process is way beyond what a human could manage. In principle, all we have to do is merge together automated design and automated production and we have replication. Repeated replication with improvements from generation to generation is evolution. No-one’s explicitly mentioned intelligence.

  OK, there are a couple of reality checks needed here before we go much further. Firstly, the technology still has a long way to go to get to this point. The use of software and hardware in design and production is still pretty piecemeal compared to what would be necessary for automatic replication; there’s a lot of joining up to do yet. Computers largely assist in the process today, rather than own it; something altogether more complete is needed for machines ‘giving birth’ to new ones. On the other hand, common suggestions for the arrival of the TS (although almost entirely for the wrong reasons) centre around 2050. This is quite conceivable: three decades or so is a huge time in technological advancement – almost anything’s possible.

  Secondly, we may not have explicitly mentioned intelligence on the road to automatic replication, but some of this adaptation might sound like it? Autonomously extending optimisation algorithms to solve new problem classes, for example, certainly fits most concepts of ‘intelligent software’. This is more difficult and it depends on definitions (still coming) but we come back once more to cause not being effect. In a strict sense, replication (and therefore evolution) isn’t dependent on intelligence; after all, it isn’t with many conventional life forms. It’s possible to imagine, say, an industrial manufacturing robot that was simply programmed to produce a larger version of itself – mechanically difficult today, certainly, but not intelligent. Anyway, the thing that might worry us most about a heavily-armed human or robot wouldn’t necessarily be its intelligence; in fact, it might be its lack of it. (More on this later too.)

  So intelligence isn’t directly required for the TS; what is required is the establishment of an evolutionary process. In particular, when people say things like “The TS will occur when we build machines with the neural complexity of the human brain”, they’ve missed the point spectacularly – both conceptually and, as it happens, even numerically (still to come). However, it can’t be entirely denied that some form of machine ‘intelligence’ will probably have a hand in all this. At the very least, developments in AI are likely to continue alongside the filling-in-the-gaps necessary for machine replication, so we’re going to have to get to grips with what it means somehow …

  And right here is where it gets very difficult. Because there’s simply no standard, accepted, agreed definition of ‘intelligence’, not even for conventional life; in fact the word is clearly used to mean different things in different contexts.

  We won’t even begin to attempt to describe all the different, and multi-dimensional, definitions of intelligence here. Even on a single axis, they sit somewhere on a spectrum from the crude intelligent=clever extreme to the (in fact, equally crude but with a deceptive air of sophistication) intelligent=conscious. It will even upset many to use ‘self-aware’ and ‘conscious’ as synonyms, but we will here for simplicity. No single definition works. By some, conscious life isn’t intelligent if it isn’t ‘clever enough’; by others, an automaton might be if it solves fixed problems ‘fast enough’.

  And of course, it gets worse when we try to apply this to computers and machines. By some definitions, a pocket calculator is intelligent because it processes data quickly; by others, a robot that was superior to a human in every single mental and physical way wouldn’t be if it were conventionally programmed and wasn’t aware of its own existence. (Is an AI robot more or less intelligent than a dog, or a worm, or a microbe?) We sometimes try to link AI to some level of adaptability – a machine extending its ability beyond its initial design or configuration to new areas – but this proves very difficult to tie down in practice. (At what point is a computer really writing its own code, for example?) Furthermore, there are two philosophically different types of machine intelligence to consider: that which is (as it is now) the result of good human design (artificial intelligence) and that which arises from the machine somehow ‘waking up’ and becoming self-aware (real intelligence).

 
