
The Improbable Rise of Singularity Girl


by Bryce Anderson


  We are making more sophisticated decisions today than we were twenty thousand years ago, and we're making those decisions better than our ancestors could have. But much of our decision-making doesn't happen in our heads. Bureaucracy -- in the non-pejorative sense of "formally defined procedures for choosing between actions" -- has taken over, underpinned by new technologies that track data and implement the byzantine rules that someone, somewhere came up with. I think bureaucracy can scale, but it leaves us with a society that isn't exactly a human one, and in its current form it seems very slow to respond to either human wants or our long-term good.

  But what happens when -- as it does in the story -- intelligence becomes too cheap to meter? What if we can improve our brains themselves, or even better, build minds that far surpass ours (which in turn build minds that far surpass theirs)? Such minds might be capable of deep love and grief, and to deny them humanity might seem willful and perverse. But at the same time, their existence relegates the unaugmented, squishy mind to the dustbin of history. Honestly, it's hard for me to imagine Homo sapiens being the pinnacle of evolution, rather than a mere stepping stone down an unfamiliar road. The transition seems inevitable to me; the only uncertainty is whether the transfer of power is a smooth, peaceful, nostalgic affair, or an unholy mess that involves lots of screaming.

  If Ray Kurzweil were here, he'd probably argue that we would co-evolve with these ever-increasing intelligences. I don't see that it makes much difference: strap a billion-fold brain onto me, or onto a garden gnome, and there's about the same amount of 'me' in the result either way. I guess that's why I decided to have Helen let go of her old self after achieving superintelligence.

  The final limit may be the limit of the physically possible. That's kind of where I was going with the whole "patching the laws of physics" thing in the last chapter. [Belated spoiler alert] If nothing else holds the ever-expanding intelligence back, it stands to reason that it will either learn everything that can be known about the universe that contains it, or -- if there are fundamental limits to knowability -- it will spend the next twenty billion years beating its head against the brick wall standing between it and complete understanding. I'd like to imagine that as heat death closes in on the Universe, the last thought to go through its mind might be, "I wish I'd figured out what The Duck meant."

  I've been toying with the idea of a sequel, wherein the ascended Helen is introduced to the millions of minds that got there before she did. Each has become so adept at manipulating matter at the small scale that they eventually built themselves into the fabric of the universe, where they live out their immortal existences at Planck speed. But I'm about as qualified to write that story as my pet rabbit is. This story already stretched my weirdness gland to its outer limits.

  Moving on. I expect to get complaints about how I ended the book: with humanity trapped under the thumb of a not-entirely-benevolent dictator who enforces complete order on the world. It's not obvious that there is any path of escape, and this control is justified by the possibility of a planet-killing quantum bomb. Admittedly, this was a post-hoc justification for what would appear to be an intolerable level of control. But I think we have to come to terms with the raw power of all these rapidly progressing technological fields (computation, AI, gene wrangling, nanoscale machines). I fear that the maxim that will govern the dawning era will be, "If anyone can build it, somebody will."

  We're already living in a world where one person with the right knowledge and the right equipment can make an unholy mess, but really, we haven't seen anything yet. Just wait until you can 3D print your own biolab or nuclear warhead (BYO U235, of course).

  In The Singularity Is Near, Ray Kurzweil argues fairly convincingly that our ability to defend against the dangers of new technologies will rise in lockstep with the powers of those technologies. There will be mistakes, and some very costly accidents, but no "gray goo" scenario. By the time we're able to create such a substance, we'll understand it very well, and we'll perhaps have already equipped the planet with an immune system to defend against it.

  I'm less hopeful than Kurzweil. It's easier to destroy than to create. Take it up with thermodynamics; I'm just the messenger. I suspect that as technology expands our ability to do anything we want, the power to destroy will increase faster than the power to create, disproportionately advantaging the ornery and self-destructive among us.

  Maybe abundance will tame us. If we solve the mammoth problem of providing a comfortable, interesting life for the ten to twelve billion people who will be knocking around on this ball, then perhaps our noble intentions can overwhelm our fear, greed, and resentment through sheer numbers. My fondest hope is for a world where everyone's needs are met, and nobody feels a lack so keenly that they could be tempted into a Pyrrhic act of destruction.

  But I don't see such societal satiation happening under a capitalist system, because capitalism pretty much depends on the bulk of humanity being dissatisfied and ever-striving for more work and more reward. The other problem is that, as robotics and AI convert the economy's demand for labor into a demand for capital, many people will lose the purchasing power needed to satisfy their wants. Surrounded by shameless displays of wealth, yet struggling for even the basic necessities of life? Sounds like a recipe for frustration and a thirst for vengeance. I know these are fightin' words, but I think communism is much better suited to an age of abundance than capitalism is.

  But hey, what do I know? Like everybody else, my mental map of the world is fuzzy, and not adequate to the sort of prognostications I'm trying to make here. I have no special insight, and life is short. It will be even shorter if your Roomba has its way. So let's just acknowledge that, whatever our political differences, we all agree that we shouldn't be equipping Roombas with laser cannons.

  Bryce
