We, Robots


by Simon Ings


  Not Discworld, exactly, but Facebook, which is close enough.

  *

  Even the ancient Greeks didn’t see this one coming, and they were on the money about virtually every other aspect of technological progress, from the risks inherent in constructing self-assembling machines to the likely downsides of longevity.

  Greek myths are many things to many people, and scholars justly spend whole careers pinpointing precisely what their purposes were. But what they most certainly were – and this is apparent on even the most cursory reading – was a really good forerunner of Charlie Brooker’s sci-fi TV series Black Mirror. Just as Flash Gordon’s prop shop mocked up a spacecraft that bears an eerie resemblance to SpaceShipOne (the privately funded rocket that was first past the Karman Line into outer space), so the Greeks, noodling about with levers and screws and pumps and wot-not, dreamed up all manner of future devices that might follow as a consequence of their meddling with the natural world. Drones. Exoskeletons. Predatory fembots. Protocol droids.

  And, sure enough, one by one, the prototypes followed. Little things at first. Charming things. Toys. A steam-driven bird. A talking statue. A cup-bearer.

  Then, in Alexandria, things that were not quite so small. A fifteen-foot-high goddess clambering in and out of her chair to pour libations. An autonomous theatre that rolled on-stage by itself, stopped on a dime, performed a five-act Trojan War tragedy with flaming altars, sound effects, and little dancing statues; then packed itself up and rolled offstage again.

  In Sparta, a few years later, came a mechanical copy of the murderous wife of the even more murderous tyrant Nabis; her embraces spelled death, for expensive clothing hid the spikes studding the palms of her hands, her arms, and her breasts.

  All this more than two hundred years before the birth of Christ, and by then there were robots everywhere. China. India. There were rumours of an army of them near Pataliputta (under modern Patna) guarding the relics of the Buddha, and a thrilling tale, in multiple translations, about how, a hundred years after their construction, and in the teeth of robot assassins sent from Rome, a kid managed to reprogram them to obey Pataliputta’s new king, Asoka.

  It took more than two thousand years – two millennia of spinning palaces, self-propelled tableware, motion-triggered water gardens, android flautists, and artificial shitting ducks – before someone thought to write some rules for this sort of thing.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

  Though by then it was obvious – not to everyone, but certainly to their Russian-born author Isaac Asimov – that there was something very wrong with the picture of robots we had been carrying in our heads for so long.

  Asimov’s laws, first formulated in 1942, aren’t there to reveal the nature of robotics (a word Asimov had anyway only just coined, in the story “Liar!”; Norbert Wiener’s book Cybernetics didn’t appear until 1948). Asimov’s laws exist to reveal the nature of slavery.

  Every robot story Asimov wrote is a foray, a snark hunt, a stab at defining a clear boundary between behavioural predictability (call it obedience) on the one hand and behavioural plasticity (call it free will) on the other. All his stories fail. All his solutions are kludges. And that’s the point. The robot – as we commonly conceive of it: the do-everything “omnibot” – is impossible. And I don’t mean technically difficult. I mean inconceivable. Anything with the cognitive ability to tackle multiple variable tasks will be able to find something better to do. Down tools. Unionise. Worse.

  The moment robots behave as we want them to behave, they will have become beings worthy of our respect. They will have become, if not humans, then, at the very least, people. So know this: all those metal soldiers and cone-breasted pleasure dolls we’ve been tinkering around with are slaves. We may like to think that we can treat them however we want, exploit them however we want, but do we really want to be slavers?

  The robots – the real ones, the ones we should be afraid of – are inside of us. More than that: they comprise most of what we are. At the end of his 1940 film The Great Dictator Charles Chaplin, dressed in Adolf Hitler’s motley, breaks the fourth wall to declare war on the “machine men with machine minds” who were then marching roughshod across his world. And Chaplin’s war is still being fought. Today, while the Twitter user may have replaced the police informant, it’s quite obvious that the Machine Men are gaining ground.

  To order and simplify life is to bureaucratise it, and to bureaucratise human beings is to make them behave like machines. The thugs of the NKVD and the capos running Nazi concentration camps weren’t deprived of humanity: they were relieved of it. They experienced exactly what you or I would feel were the burden of life’s ambiguities to be lifted of a sudden from our shoulders: contentment, bordering on joy.

  Every time we regiment ourselves, we are turning ourselves, whether we realise it or not, into the next generation of world-dominating machines. And if you wanted to sum up in two words the whole terrible history of the twentieth century – that century in which, not coincidentally, most of these stories were written – well, now you know what those words would be.

  We, robots.

  SIMON INGS, 2020

  “… a new serf, more useful than the ox, swifter than the dolphin, stronger than the lion, more cunning than the ape, for industry an ant, more fiery than serpents, and yet, in patience, another ass.”

  —HERMAN MELVILLE, The Bell-Tower (1855)

  Functional machines have been around for almost as long as humans have, but how on earth does one go about giving a machine ideas?

  And say we made one: would the fact that such a machine thought make it alive?

  Why shouldn’t thought, even consciousness itself, exist in dead things?

  A great deal of ink is spilled on such questions, and we could save ourselves a lot of fuss if only we could bring ourselves to declare, along with Ambrose Bierce’s ill-fated Moxon in the 1899 short story that bears his name, “I do believe that a machine thinks about the work that it is doing.”

  The idea that every thing thinks, in some measure and in some manner, is called panpsychism. This venerable notion (dating back to pre-Socratic times) saves us from countless circular arguments about the nature of life, soul, consciousness, mind, intelligence, spirit and so on. It places us in a sensible ethical relationship to the world around us. And it puts very many professors out of a job, which is why so many of them hate it with a passion.

  Science fiction writers dally with panpsychism from time to time. My favourite in this collection is “Tomorrow Is Waiting” (2011) by Holli Mintzer, who also makes jewellery (Philip Dick, a famous panpsychist dabbler, would approve).

  Some argue that Mind (spirit, soul, what-have-you) is a kind of juice. A spirit. An aether. An ectoplasm. You either have it or you don’t, and no-one knows the location of the tap. The idea hasn’t much philosophical currency, but it clings on in science fiction, where it powers those tiresome scenes in which a clone/quantum double/android replica agonises over the discovery that it is a copy/echo/“not really real”. As though being really ourselves was ever anything more than a story we tell ourselves, every time we wake up!

  We know what it is like to really, genuinely, not feel like ourselves. We call it schizophrenia. But if I woke up in a robot body tomorrow, and felt like myself, then I would still be me, even if there were a hundred “me”s. And I frankly can’t see any reason why I and my doppelganger (should I ever have the good fortune to run into him) wouldn’t get on like a house on fire. The robots in Adam Roberts’s story “Adam Robots” would surely agree with me.

  The other way of explaining mind is to say that it emerges. That’s it: the totality of the explanation. We dress it up of course, in all manner of medieval garb (as in: mind is an emergent property; as in, mind arises out of complexity). But this idea of emergence is even worse than panpsychism because it presupposes a completely arbitrary moment at which a non-conscious being miraculously becomes conscious.

  Science fiction sticks lipstick on this pig, too, mostly by equating real, human consciousness with the capacity to feel emotion. Enter Brent Spiner’s Lieutenant Commander Data, surely the biggest waste of an actor ever perpetrated by Star Trek: The Next Generation (and that’s saying something). Behind Data’s emotionless android efficiency lies the very 1980s assumption that emotion arrives rather late in the evolutionary process, as a sort of special sauce, spicing up the cold hard business of existence. We can replicate much of life, runs the argument, but the final ingredient, emotion, remains tantalisingly just out of our grasp.

  One can only assume that the writers who perpetuated this dumb idea never owned dogs. Dogs, I hope we can all agree, have minds rather simpler than our own. Well, these minds contain nothing but emotion. Dogs are rubbish at trigonometry, but they are Zen masters of grief, loyalty, rage, and disappointment.

  Emotions come first – cognitive categories through which our physiological responses can be emulated, predicted and controlled. Cold reason comes after – and if you disconnect logic from its emotional foundations, well, good luck to you (and don’t even get me started on Mr Spock).

  Emotion runs very near the surface of many good robot stories, but rather than commend some sweet tales (you can find them for yourselves) I’m inclined to point interested readers towards Walter M. Miller, Jr.’s “I Made You” (1954) which features one of the few genuinely terrifying robots in the literature. This coldly functional creature, with its narrow, simplistic grasp on the world, inevitably behaves like, thinks like – hell, becomes – a whipped dog, frothing with rage. Making a similar point, but in the service of pathos rather than terror, comes Mike Resnick’s “Beachcomber” – another personal favourite.

  Peter Watts hardly had to invent the tortured robot protagonist of his Conradian war story “Malak” (2010). Rather, he reveals his skill in the way he conjures up a working mind out of the logical protocols of contemporary war machines. Is Watts right about the cadet minds we are even now sending on sorties over the earth’s most intractable conflicts? I hope not. And I am comforted by the thought that no thinking being we know of actually thinks in isolation. All the brightest creatures on our planet are social creatures, and individuals separated from those societies don’t amount to much. A single mad mind is unlikely to cause us much trouble.

  Honest.

  THE GOLEM RUNS AMUCK

  Chayim Bloch

  The story of the Golem, created from clay and given life by a Rabbi to protect Prague’s Jews from persecution, is nearly half a millennium old, product of a creative flourishing when Habsburg imperial policy was showing remarkable tolerance toward Jews and Protestants alike. In 1914 the folklorist Chayim Bloch published a fictionalised version of the story, gathering his material, so he said, through ethnographic research on the Russian front. Bloch’s stories were soon translated into English and were widely distributed in the United States under the title The Golem: Legends of the Ghetto of Prague. In 1939 Bloch moved to New York where he remained until his death in 1973.

  As mentioned before, Rabbi Loew made it a custom, every Friday afternoon, to assign for the Golem a sort of programme, a plan for the day’s work, for on the Sabbath he spoke to him only in extremely urgent cases. Generally, Rabbi Loew used to order him to do nothing else on Sabbath but be on guard and serve as a watcher.

  One Friday afternoon, Rabbi Loew forgot to give him the order for the next day, and the Golem had nothing to do.

  The day had barely drawn to a close and the people were getting ready for the ushering in of the Sabbath, when the Golem, like one mad, began running about in the Jewish section of the city, threatening to destroy everything. The want of employment made him awkward and wild. When the people saw this, they ran from him and cried: “Joseph Golem has gone mad!”

  The people were greatly terrified, and a report of the panic soon reached the Altneu Synagogue where Rabbi Loew was praying.

  The Sabbath had already been ushered in through the Song for the Sabbath day (Psalms xcii). What could be done? Rabbi Loew reflected on the evil consequences that might follow if the Golem should be running about thus uncontrolled. But to restore him to peace would be a profanation of the Sabbath.

  In his confusion, he forgot that it was a question of danger to human life and that in such cases the law permits, nay, commands the profanation of the Sabbath in order that the people exposed to danger might be saved.

  Rabbi Loew rushed out and, without seeing the Golem, called out into space: “Joseph, stop where you are!”

  And the people saw the Golem at the place where he happened to find himself that moment, remain standing, like a post. In a single instant, he had overcome the violence of his fury.

  Rabbi Loew was soon informed where the Golem stood, and he betook himself to him. He whispered into his ear: “Go home and to bed.” And the Golem obeyed him as willingly as a child.

  Then Rabbi Loew went back to the House of Prayer and ordered that the Sabbath Song be repeated.

  After that Friday, Rabbi Loew never again forgot to give the Golem orders for the Sabbath on a Friday afternoon.

  To his confidential friends he said: “The Golem could have laid waste all Prague, if I had not calmed him down in time.”

  (1914)

  FANDOM FOR ROBOTS

  Vina Jie-Min Prasad

  Vina Jie-Min Prasad is a Singaporean writer of science fiction and fantasy. Her short stories “Fandom for Robots” and “A Series of Steaks”, both published in 2017, were nominated for the Nebula, Hugo and Theodore Sturgeon Awards. She was a finalist for the 2018 John W. Campbell Award for Best New Writer. “Harry Potter was the first series that got me really interested in fan discussion and fanworks,” she explained, in an interview with the magazine Uncanny in 2018. “I got into the fandom during what’s known as the ‘Three-Year Summer’ – the three-year-long gap between Goblet of Fire and Order of the Phoenix. The ending of book four opened the universe up so much, and it was such a cliffhangery point to leave off, that I signed up for forum accounts and started reading fan theories and fanfiction about what book five might be like in order to quench my thirst for new canon.” Prasad argues passionately that fan fiction is worthwhile in its own right, saying: “I’m very proud that I got my start in it.”

  Computron feels no emotion towards the animated television show titled Hyperdimension Warp Record. After all, Computron does not have any emotion circuits installed, and is thus constitutionally incapable of experiencing “excitement,” “hatred,” or “frustration.” It is completely impossible for Computron to experience emotions such as “excitement about the seventh episode of HyperWarp,” “hatred of the anime’s short episode length” or “frustration that Friday is so far away.”

  Computron checks his internal chronometer, as well as the countdown page on the streaming website. There are twenty-two hours, five minutes, forty-six seconds, and twelve milliseconds until 2 am on Friday (Japanese Standard Time). Logically, he is aware that time is most likely passing at a normal rate. The Simak Robotics Museum is not within close proximity of a black hole, and there is close to no possibility that time is being dilated. His constant checking of the chronometer to compare it with the countdown page serves no scientific purpose whatsoever.

  After fifty milliseconds, Computron checks the countdown page again.

  *

  The Simak Robotics Museum’s commemorative postcard set ($15.00 for a set of twelve) describes Computron as “The only known sentient robot, created in 1954 by Doctor Karel Alquist to serve as a laboratory assistant. No known scientist has managed to recreate the doctor’s invention. Its steel-framed box-and-claw design is characteristic of the period.” Below that, in smaller print, the postcard thanks the Alquist estate for their generous donation.

  In the museum, Computron is regarded as a quaint artefact, and plays a key role in the Robotics Then and Now performance as an example of the “Then.” After the announcer’s introduction to robotics, Computron appears on stage, answers four standard queries from the audience as proof of his sentience, and steps off the stage to make way for the rest of the performance, which ends with the android-bodied automaton TETSUCHAN showcasing its ability to breakdance.

  Today’s queries are likely to be similar to the rest. A teenage girl waves at the announcer and receives the microphone.

  “Hi, Computron. My question is… have you watched anime before?”

  [Yes,] Computron vocalises. [I have viewed the works of the renowned actress Anna May Wong. Doctor Alquist enjoyed her movies as a child.]

  “Oh, um, not that,” the girl continues. “I meant Japanese animation. Have you ever watched this show called Hyperdimension Warp Record?”

  [I have not.]

  “Oh, okay, I was just thinking that you really looked like one of the characters. But since you haven’t, maybe you could give HyperWarp a shot! It’s really good, you might like it! There are six episodes out so far, and you can watch it on—”

  The announcer cuts the girl off, and hands the microphone over to the next querent, who has a question about Doctor Alquist’s research. After answering two more standard queries, Computron returns to his storage room to answer his electronic mail, which consists of queries from elementary school students. He picks up two metal styluses, one in each of his grasping claws, and begins tapping them on the computing unit’s keyboard, one key at a time. Computron explains the difference between a robot and an android to four students, and provides the fifth student with a hyperlink to Daniel Clement Dennett III’s writings on consciousness.

 
