I'm sorry, Dave.
—HAL 9000 from 2001: A Space Odyssey
Computers are wonderful devices that do so much for us. They are capable of storing, retrieving, and manipulating data or information. And unless I'm mistaken, you have one or three. I am so conditioned to using my technology that I get agitated at the mere thought of losing my smartphone. Like it or not, computers are a part of our lives now.
We have reached the point where our computer algorithms are smart enough to mimic human problem-solving. It might not be long before they stumble upon flaws within their own coding and choose to self-improve. Think about it: programs looking into themselves for self-improvement. This sounds very human. It reflects intelligence, artificial intelligence (AI).
Before going artificial, let's start au naturel—as in biological intelligence. The human brain is wonderfully complex, much more so than any computer.
Cogito ergo sum (I think, therefore I am).
—René Descartes
But what am I?
—a possible human conscious thought
Humans might be the only species currently on the earth to wonder, what am I? Is your inner narrator the creation of neurons? Let's try to find out.
WHAT DO EMERGENT PROPERTIES HAVE TO DO WITH MY THOUGHTS?
An emergent property arises when the sum of something's parts has a unique property that is not present in any of the individual parts. It kicks in when there is enough of something to make something different. If you add more and more protons to an atom, you push it along the periodic table and change which element it is. If you add more and more heat to water, you change its state. Speaking of water, it takes more than one H2O molecule to make an object wet. Wetness is an emergent property that arises from combining more and more H2O molecules.
These are all examples of cumulative processes. I have one more that you'll probably like: your brain. This organ is made up of billions of electrically excitable cells known as neurons. These neurons make trillions of connections by sending electrical signals along tendrils called axons. A few neurons can't make a mind, but if enough of them get together, voilà: consciousness.
Fig. 13.1. Illustration of a neuron.
This particular emergent property (consciousness) is a side effect of evolution that allows the brain to sync with its environment. A lot goes on around us. If you come with me on a stroll through Central Park, your ears might register the sound waves from a dog barking while light waves project images to your eyes of a scrappy beagle. This input might activate memories of beagles you've known. All these processes occur in different regions of the brain, but they are synced to provide you with a unified experience of time and space. This is consciousness, or at least one of the definitions of consciousness.
What is really cool is that every brain on this planet has developed differently. Your brain cannot be repeated. You are so unique that the neural connections in your brain can be used as fingerprints for identification. A study out of Carnegie Mellon University found that even identical twins only share about 12 percent of the same neural patterns.1
Now, to blow your neurons by going all Jean-Paul Sartre (French existentialist philosopher) on you, human consciousness is a pretty good thing, but it carries an existential cost: knowing that all living things one day cease to live. We might be the only species that knows we will die. I'll let you in on something. I think that having human consciousness is worth the cost. Some people might disagree.
Summary: our brains are neurons flourishing and making connections. Useless neurons are culled by experiences; whatever isn't used is pruned. The gelatinous pack of neurons that remains somehow becomes your awareness. It can be introspective, like a feeling you get from smelling a flower, or how you react to different colors, or falling in love. It is our ability to have the sensory experience of a sore elbow after swinging a lightsaber.
HOW DO NEURONS CREATE CONSCIOUSNESS?
Different schools of thought debate the definition of consciousness.
One idea is that the brain constructs simulations of how the outside world works, and consciousness is this simulation. Two types of specialized cells found in the entorhinal cortex of the medial temporal lobe give us a good sense of time and distance. One is a group of neurons called grid cells; they are the equivalent of a GPS system and are able to gauge the angle and speed of an object relative to a known starting point. These cells fire off regular signals as you move through space to form a mental map of the environment. The other group of neurons, called speed cells, allow the brain to update the map in real time.
Another school of thought is that consciousness happens when different parts of the brain connect and share information. This sounds a lot like when John Lennon described life as what happens to you while you're busy making other plans. Maybe he was a closet neurologist.
WHERE DOES COGNITION FIT?
Cognition is the higher-level function for processing comprehension. It is the drive to learn, to remember, to judge, and to problem-solve. Cognition interprets sensory input. If a tiger leaps at you, information travels from your eyes to your brain. Wisely (thank you, evolution), your brain signals your muscles to run.
To extract meaning from this incoming information, your brain must reduce excess sensory input. This is not a good time to notice the smell of jasmine or the gentle breeze on your face. Your entire attention is on the tiger, and your brain edits out unnecessary sensory information. It evolved to be reductionist.
Do you remember our park trip? I kept a log of where we went. After we left Central Park, we trekked through the city. Noise was everywhere, but you edited out most of it so you could concentrate on the conversation you were having with the street magician. You could hear other people talking, but your brain censored out what they said. Needless to say, my feelings are hurt. I was trying to tell you about cognition.
No matter how much you practice mindfulness (focusing on the present moment and the sensations around you), your brain will protect you from information overload.
WHAT IS INTELLIGENCE?
Intelligence is an accident of evolution, and not necessarily an advantage.
—Isaac Asimov
Intelligence is our accumulated information and skills. Nothing in nature dictates that human intelligence has been optimized. It is hampered by superstitions and short-term thinking (for example, our general inability to get hyped up about global warming because it is a long-term problem).
THE IMPORTANCE OF CREATIVITY (SOMETHING YOUR NEURONS MIGHT DO FOR FUN)
Creativity takes courage.
—Henri Matisse
Our brains are biological. They are very adaptable, which helps us survive in different environments and situations. Along the way, they developed creativity, and with it a bit of lawlessness. Historically, rule breakers advance civilization. The universe was comfortably mechanistic after Isaac Newton laid down the foundations of classical mechanics. His laws of motion and gravity became gospel until that rule breaker Einstein used his imagination (via thought experiments) to work out special relativity.
The character Hari Seldon from Isaac Asimov's Foundation series took rules very seriously. He believed his fictional science of psychohistory allowed him to accurately predict the future…as long as everyone played by the rules. A successful project, until the rule breakers arrived.2 It wouldn't have been much of a story if they had stayed home. Imagination is capable of shaking up complacent systems in real life just as in good science fiction. Take a moment to reread Arthur C. Clarke's first rule of prediction (found in the introduction).
Breakthroughs are unpredictable, but in the universe of computer algorithms, unpredictability does not exist. Computers are programmed to make predictions based on the prior behavior of particles, chemicals, or people. Breakthroughs defy forecasting. Comparing the brain to a computer is a mistake. The brain is not a computer. So don't do it.
No mechanical or electronic law prevents an emerging AI from being more intelligent than humans. As I
mentioned before, human intelligence can be inefficient. But will an AI ever be creative? Imagine an AI being a rule breaker.
RISE OF THE AI
Before there was AI, there was only human intelligence. In the early twentieth century, manual calculating, called computing, was considered “women's work.” Mathematicians, possibly in an attempt to use their scythe-like wit to play on the word man-hours, described the computer output in “girl-hours.” Henrietta Swan Leavitt was a computer in the early 1900s3 when she discovered thousands of Cepheid variable stars that helped Edwin Hubble measure stellar distances (measuring distance this way was described in chapter 4).4
When mechanical computers came out during World War II, their calculating ability was measured in kilo-girls, a unit roughly equal to the calculating ability of a thousand women. The nonfiction book and subsequent movie Hidden Figures is about NASA's human computers. (By the way, the first person to write computer code was Ada Lovelace, daughter of poet Lord Byron. She did it on punch cards…in the 1840s!)
Since then, computing has developed to the edge of something more. Artificial intelligence is a computer system that can perform tasks that would normally require human intelligence. During our lifetime, machines have grown smarter because of human programming—because of human intelligence. Is there some critical mass of smartness where they can begin to program themselves? I wonder if this could be an emergent property.
Today's AIs can identify faces at an airport using photographs posted to social media, drive cars, or act as a translator like Arthur Dent's Babel fish from The Hitchhiker's Guide to the Galaxy. Have you ever noticed that Facebook (if you have an account) knows your preferences and targets its advertising to you accordingly? It does. It has an AI scouring databases for your purchasing patterns. It also suggests other Facebook pages you might want to like. An AI can do all this and never grows tired or distracted. Or irritable.
All of the above are examples of weak AI. Strong AI is comparable to the adaptability of human intelligence. There aren't any pure examples of this yet. However, the program AlphaGo defeated the reigning European Go champion Fan Hui five games to one in 2016.5 Unlike its predecessor Deep Blue, which relied on human coding to defeat chess champion Garry Kasparov in 1997,6 AlphaGo learned to play from experience—meaning, like a typical human, it could be trained. It integrated two kinds of neural networks: the first to predict the next move and the second to evaluate the winner of each position.
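AlphaGo's two-network division of labor can be caricatured in a few lines of code. This is a purely illustrative sketch, not the real system: the functions `policy_net` and `value_net` below are toy stand-ins for deep neural networks that were actually trained on millions of positions, and the move names are invented.

```python
import random

def policy_net(position):
    """Toy stand-in for the policy network: proposes candidate
    moves, each with a rough prior probability."""
    moves = ["A1", "B2", "C3"]
    return {m: random.random() for m in moves}

def value_net(position, move):
    """Toy stand-in for the value network: estimates the chance
    of winning the game after playing `move` from `position`."""
    return random.random()

def choose_move(position):
    # Combine the two networks: weight each candidate move's
    # prior by the value network's estimate of the outcome,
    # then play the highest-scoring move.
    candidates = policy_net(position)
    return max(candidates, key=lambda m: candidates[m] * value_net(position, m))

print(choose_move("empty board"))
```

The design choice the sketch captures is the split itself: one network narrows the search (which moves are worth considering?), the other judges the result (who is winning here?).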
SHOULD WE PUT IT TO THE TEST?
Alan Turing defined an intelligent program as one that could hold a conversation in a human language and convince a test subject that it was human. The test he devised, the so-called Turing Test, holds that if a computer can convince a sufficient number of judges a sufficient number of times that it is not a machine by answering a series of questions, then it is declared intelligent.
Imagine three rooms, each with a computer terminal. In room one sits a judge. She doesn't know who or what is in the other rooms, only that one has an AI candidate and the other hosts a human. After asking questions of her choosing, the judge must declare which room contains the AI. The problem with this test is deciding what counts as passing. Is it when the judge gets it wrong more than 50 percent of the time?
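Whatever threshold you settle on, the bookkeeping is simple. Here's a minimal sketch, assuming we record one verdict per judging session and adopt the 50 percent criterion floated above (a machine passes when judges misidentify it more than half the time):

```python
def passes_turing_test(verdicts, threshold=0.5):
    """verdicts: one boolean per session, True when the judge
    was fooled (wrongly declared the machine's room human)."""
    fooled_rate = sum(verdicts) / len(verdicts)
    return fooled_rate > threshold

# Ten sessions; the judge was fooled in six of them.
sessions = [True, True, False, True, False,
            True, True, False, True, False]
print(passes_turing_test(sessions))  # True (fooled 60% of the time)
```

The code makes the weakness of the test visible: the whole verdict hinges on a threshold parameter that Turing's setup doesn't actually pin down.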
The pure Turing Test is no longer used in modern AI research. Non-sentient computers are getting pretty smart, and humans can sometimes be tricked. A better test is needed. Perhaps the computer should discuss objects and people in context or perform tasks that require interpretation. Can the computer provide real-time commentary on the Super Bowl?
A test called the Lovelace Test, named in honor of Ada Lovelace, judges creativity.7 Can the AI create an original work of art?
A machine doesn't have to be conscious to pass these types of tests. Consciousness is not a requirement for artificial intelligence. An algorithm does not need to be sentient to be effective. It does not need to experience subjectivity. An AI can effectively be a zombie and display just enough intelligence to act out its function.
That doesn't make AI consciousness scientifically impossible. A posthuman mind-space might have room for both downloaded human minds and conscious AI minds.
WHAT IS THE TECHNOLOGICAL SINGULARITY?
You cannot enslave a mind that knows itself.
—Wangari Maathai, Nobel laureate
To date, our technological progress has been limited by the intelligence of the human mind. There might (actually, there probably will) come a time when, with the help of computers, it will be possible to build a machine more intelligent than humans. This machine might be able to build an even more advanced machine, one that can rewrite its own coding software.
Self-programming for self-improvement is a recursive process wherein each generation of AI becomes more intelligent than the previous iteration. This machine evolution could occur over many versions beyond the original made by humans. Inevitably, there will come a time when humans will no longer be able to comprehend AI intelligence. It will be unpredictable. Perhaps, just perhaps, the first self-aware machine will be humanity's last invention.
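The runaway logic above can be sketched as a toy loop. Every number here is invented; the point is only the recursion: each generation designs a smarter successor, the gain compounds, and the result eventually crosses any fixed threshold of human comprehension.

```python
def intelligence_explosion(start=1.0, gain=1.5, human_limit=100.0):
    """Each generation builds a successor `gain` times smarter.
    Returns the generation at which the machine's intelligence
    first exceeds `human_limit` (humans can no longer follow)."""
    level, generation = start, 0
    while level <= human_limit:
        level *= gain      # the successor rewrites its own code
        generation += 1
    return generation

print(intelligence_explosion())  # 12 generations with these toy numbers
```

Note how little the starting point matters: because the growth is multiplicative, changing `start` or `human_limit` only shifts the crossover by a few generations. That insensitivity is what makes the singularity argument feel inevitable to its proponents.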
The term technological singularity was coined by computer scientist and science fiction author Vernor Vinge. It marks the moment an AI triggers technological growth that is no longer comprehensible to humans.8 This should not be confused with the singularity of a black hole, although Vinge did intend for that comparison to be made. During both types of singularity, there is a breakdown in our ability to predict what will happen beyond a certain point.
In the case of the technological singularity, the uncertainty is whether this intelligence will be helpful or harmful. Will we have created a god? Scared yet?
ETHICAL CONCERNS OF A POST-SINGULARITY WORLD
These questions are meant to get you thinking about the future, knowing that post-singularity humans won't be the ones answering them.
Should an AI end global warming at the cost of jobs and a recession?
What if ending global warming cost lives, say, a few thousand, but it might save millions of future human lives (they aren't yet born, so the births are hypothetical)?
What if AIs could be bound to the rules of human law? If yes, which human laws? Should they be created by secular human institutions or follow human religious laws? Human organizations such as Google, Facebook, Microsoft, IBM, and Amazon are teaming up to develop an ethical framework for AI research. These overlords will protect us from the pitfalls of runaway AI technologies. They also keep a few on a leash to help us.
Facebook is increasingly using AI to flag materials determined to be offensive by their policy and is working on a system to automatically detect fake news. Google is working on a set of tools to use machine learning to spot harassment and abuse. Its software package, called Conversation AI, will be able to detect hate speech.9
Will there be a war over who gets to add these laws to the algorithm? Will an AI idly watch the conflict?
What about love? Should it be (if possible) coded in? This idea has not been ignored by science fiction. In A.I. Artificial Intelligence, a movie based on a short story written by Brian Aldiss and brought to the screen by Stanley Kubrick and Steven Spielberg, a Mecha (think android) named David is capable of projecting love and bonding with a single human. Is this cruel? How about when the human is gone and all that is left is a lonely immortal AI?
After the (hypothesized) technological singularity, will many AIs exist or just one? If there are many, will they get along? The television series Person of Interest is about a battle between two AIs who use humans to wage their war over how the world should be run…and the destiny of humans.
FULL CIRCLE
I'll repeat something from the biological section:
Cogito ergo sum (I think, therefore I am).
—René Descartes
But what am I?
—a possible artificial conscious thought
Can a computer have its own inner narrator? Yes, but the initial narration must be provided by human intelligence. The notion of bicameralism (a two-chambered mind) originated in the controversial book The Origin of Consciousness in the Breakdown of the Bicameral Mind written in 1976 by psychologist Julian Jaynes.10
He believed that consciousness developed in humans only about three thousand years ago. Before then, humans ran more or less on automatic. This was the era of the bicameral mind, when the mind was divided into two parts. He proposed that when something novel was witnessed by primitive humans and simple habit or reflex wasn't enough to deal with the situation, a voice popped into their heads to provide commands.
This auditory hallucination was to be obeyed. The voice might have been believed to be an outside agent like a chief or a god. According to Jaynes, the shift from bicameralism to introspection was the beginning of consciousness, the awareness that no outside agent was putting thoughts into your head. It is when a person realizes the pronoun used by the voice is “I.”
The bicameral mind hypothesis has not generally been accepted by mainstream science. Jaynes wrote the book about humans, but that doesn't mean it can't apply to burgeoning AI. In the TV series Westworld, inner narratives lead the protagonist android Dolores to consciousness.
PARTING COMMENTS
We evolved to stay alive, eat, and generate offspring. Somewhere along our timeline, humans developed a perceptual system to help with these tasks. After that, something strange happened. The brain began constructing a virtual reality rendition (cognition) of what it picked up through the senses. It considered what it saw or believed to be reality and gave it meaning.