Artificial intelligence is a computer system able to perform tasks that normally require biological (human) intelligence. Someday, a generation of computer systems might be able to rewrite their own software and then design a more advanced generation of computers. This is machine evolution, and it could lead to intelligences that don't need humans anymore. This point of runaway technological growth is called the technological singularity.
CHAPTER 13 BONUS MATERIALS
BONUS 1: MOORE’S LAW
The human brain can store roughly a petabyte of information. A petabyte is a million gigabytes, or a thousand terabytes. Feel free to flash a smug smile at your computer's paltry terabyte hard drive. Sadly, there are biological limits to the size of the human brain. At this point, it might have (or might nearly have) evolved to the limits of its processing power. There are no such limits on computers.
In 1965 Gordon Moore, who would go on to cofound Intel, pointed out that the number of transistors on an integrated circuit doubles approximately every year (in 1975 he revised this forecast to a doubling every two years).11 A transistor is an electronic on/off switch on a microchip. The more transistors a microchip has, the more processing it can do: more calculations per second, whether from a faster internal clock or from more work squeezed into each clock tick.
Roughly speaking, a computer processor's speed doubles every two years. This has become known as Moore's law. As a corollary, the price of a given amount of computing power halves over the same period. So far, Moore's law has proven to be pretty accurate.
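If you'd like to see what that arithmetic adds up to, here is a minimal sketch, assuming a doubling every two years and using the Intel 4004 of 1971 (roughly 2,300 transistors) purely as an illustrative starting point:

```python
# A minimal sketch of the arithmetic behind a "doubling every two years."
# The 1971 baseline (the Intel 4004's roughly 2,300 transistors) is used
# only as an illustrative starting point, not as data from this chapter.

def transistors_after(years: float, start_count: float = 2_300,
                      doubling_period: float = 2.0) -> float:
    """Project a transistor count after `years`, doubling every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

for years in (10, 20, 30, 40, 50):
    print(f"{1971 + years}: about {transistors_after(years):,.0f} transistors")
```

Fifty years of doubling turns a few thousand transistors into tens of billions, which is why exponential growth sneaks up on people.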
Over the years, manufacturers have made transistors smaller and clocks faster so chips can perform more computations per second. This method has a limit: electronics get too hot if you force them to calculate too quickly. We therefore add more cores (additional processing units that calculate in parallel) to increase a computer's speed. Now if only we could perfect quantum computing…but that's a story from a different chapter.
BONUS 2: TIME IS IN THE EYE OF THE BEHOLDER
A correlation between pupil dilation and the perception of time appears to exist, at least in nonhuman primates. A study found that when their pupils were large, monkeys sensed time passing faster than it actually did.12 Pupil size might indicate how the brain keeps track of time. Large pupils might reflect a chemically heightened state of awareness brought on by some shock, which triggers a need to mentally slow time down.
When you are mellow, on the other hand, your pupils might be smaller, indicating that time is perceived as flowing faster. Of course, this study was performed on monkeys, so the extension to humans is only conjecture.
BONUS 3: NEFARIOUS SOFTWARE (THREE ANNOYING COMPUTER INFECTIONS)
A computer virus is a piece of code that replicates itself in order to corrupt a computer system. A computer virus must be run by a user to be effective.
Trojans do not replicate. Instead, they hide inside innocent-appearing programs, just like the warriors hidden inside the Trojan horse of Greek mythology. If you run the Trojan program, it does its nefarious task, perhaps deleting files or opening other programs. Most likely, it is granting an intruder access to your system while appearing to be a harmless app.
A worm is a program that replicates on its own. Unlike a computer virus, it doesn't need a user to run it; it spreads across networks by itself. A weaponized worm named Stuxnet was used to infiltrate and sabotage Iran's uranium enrichment facilities.13
Danger, danger!
—B-9 (Class M-3 General Utility Non-Theorizing Environmental Control Robot, aka, the robot from Lost in Space)
Joke: A robot walks into a bar, orders a drink, and lays down some cash. Bartender says, “Hey, we don't serve robots.” And the robot says, “Oh, but someday you will.”
I hope this chapter's opening joke isn't prophetic because that would really suck, at least for us biological types. Perhaps less so for robots. I don't know about you, but I'm not prepared to fight off a T-800 robot from The Terminator, so I suggest we defund the Skynet program. Now!
A robot can be defined as a mobile, nonorganic device that is programmable and designed to perform functions traditionally done by humans. It is not a computer, although, like the T-800, an AI program can be housed inside a robot body. If an AI needs mobility, a robot is a much more practical vessel to download into than a meat suit clone (a biological body).
Robots and AI are not the same thing, but they are the peanut butter and jelly of cybernetics, as well as of some frightening science fiction (with the exception of Teddy from A.I. Artificial Intelligence).
Robots and AI make up a true version of René Descartes's mind-body dualism. Descartes believed that the mind is not the same as the brain. The mind is the immaterial essence of being human. It is housed (for a time) in the body, but it is not a product of the body. As discussed in the last chapter, neuroscientists now believe our mind can be explained by the neural connections of the brain. That means that the mind (intelligence) cannot be separated from the human body. An AI (mind), however, can be separated from the robot (body).
To be a practical helper, a robot needs autonomy, a way to move about and interact with the external environment. If we want a robot to perform detail-oriented tasks like helping someone with a disability or performing surgery, it must be able to gauge how much pressure to use to open a door or lift a patient.
This type of robot cannot be created the way an AI masters a task after being trained on millions of examples (as when learning chess or the game of Go). A robot must not only "think," it also has to interact physically with the world. To do that, it must have an awareness of its surroundings.
Humans take their external cues from their senses. Robot sensors can be modeled on human ones. Cameras can be used for eyes, microphones for ears, a gyroscope for inner ear balance, and so on.
IN THE BEGINNING, THERE WAS A WORD AND A FEW WIRES
The word robot hit the science fiction universe in 1921 when Karel Čapek introduced his play R.U.R. (Rossum's Universal Robots).1 In this social commentary on the working class (robot derives from robota, a Czech word for forced labor), artificial people are created to serve humans. The robots do not take too kindly to the idea of servitude and revolt.
Humanity had to wait another twenty years to be introduced to the word robotics. You can thank Isaac Asimov for this linguistic contribution. It appeared in his short story "Liar!" published in Astounding Science Fiction. Asimov claimed he didn't know he originated the word. He thought it already existed as the equivalent of mechanics for machinery.2
If not in word, the idea of robot mechanized workers can be traced back as far as ancient Greek mythology. Talos, an automaton made of bronze forged by Hephaestus, protected Crete from pirates. Less poetic, although a lot more scientific, is the Greek Antikythera mechanism discovered in 1900. The device dates back to between 205 BCE and 100 BCE. It is believed to have been a computer of sorts that predicted astronomical positions and eclipses.
Around 60 CE, Heron of Alexandria designed mechanical devices for lifting and transporting heavy objects. Not all the automatons created by the Greeks were necessarily practical. Some were made for intellectual pursuits. The Greek mathematician Archytas built a steam-powered robo-pigeon sometime between 400 and 350 BCE to research how birds fly.3
In 1206 CE, Ibn al-Razzaz al-Jazari, the chief engineer at the Artuklu Palace in Turkey, completed The Book of Knowledge of Ingenious Mechanical Devices in which he described hundreds of mechanical devices.4 Not everything he made was intended to be useful. One of his creations was a musical mechanical band.
At the 1939 New York World's Fair, Westinghouse debuted Elektro, a seven-foot-tall mechanical man that could walk, talk, and perform a final activity appropriate for the era: smoke a cigarette.5
Personally, I find Elektro a bit frightening. I think that is why in 1940 he was joined by a dog-shaped companion named Sparko.6 I checked and found no evidence of Elektro ever having revolted against his human creators.
The auto industry accomplished two famous firsts in robotics. The first industrial robot, the Unimate, went to work on a General Motors assembly line in 1961. And according to the Guinness Book of World Records, the first human death caused by a robot took place at the Ford Motor plant in Flat Rock, Michigan, on January 25, 1979, when a mechanical arm struck Robert Williams a fatal blow to the head.7
ROBOTIC EVOLUTION
The evolution of robots isn't so much about their form as it is about their status. Initially, robots were created as automaton slaves under the direct remote control of a human. This type of robot is increasingly popular in soldiering. Highly trained pilots fly drones all over the world. As of 2014, the United States military had over ten thousand drones in service.8 Robot ground vehicles are used for defusing improvised explosive devices (IEDs).
Fig. 14.1. Illustration of Elektro and Sparko. (Wikimedia Creative Commons, author: Daderot.)
All of these rely on an outside agent to control their actions (a slave to another's will). If we add some basic programming to allow for a semblance of autonomy, robots go from slavery to servitude. Examples include self-driving cars and the Roomba that cleans up after you. Finally (possibly), they become coworkers with sensory input and weak AI. After that, I leave it to science fiction to speculate.
TO SERVE AND OBEY
Asimov and his robot fiction hold a special place in the science fiction canon. He understood the potential danger of robots, and in the short story "Runaround," published in the March 1942 issue of Astounding Science Fiction, he introduced the fictional Handbook of Robotics, 56th Edition, 2058 CE. To save our collective human butts (at least in his science fiction universe), he introduced the now legendary three laws of robotics, which made robots subservient to humans. He also thought up the pliable positronic robot brains that are force-fed these laws.
Isaac Asimov's classic Three Laws of Robotics (1942):9
Law One: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Law Two: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Law Three: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Asimov added a zeroth law in the 1986 book Foundation and Earth.10 This law is of the highest order and supersedes the First Law.
Law Zero: A robot may not injure humanity, or, by inaction, allow humanity to come to harm.
Things would get tricky if we were to actually force these laws on robots. In many of his stories, Asimov demonstrated how the rules lead to dilemmas. In his later books, the addition of the zeroth law actually forced robots to stunt human development by preventing the species from taking any collective risks.
As robots become more intelligent (adding in weak AI), communication with humans might lead to dilemmas. Human communication is nuanced. To the uninitiated, the intent of our words might be vague. I'm talking about a robot trying to separate how we ask for something from what we really want.
The misuse of a metaphor could be disastrous. Imagine that after a tough day at work you casually say, "My boss is killing me." What should your robot make of this? Should it contact the police? Should it murder your boss in order to protect you?
Roger Clarke, a consultant in information technology, addressed this problem when he updated Asimov's laws. He also added sections on how robots should behave in large numbers and within robot hierarchies.11 Here is Clarke's Extended Set of the Laws of Robotics (a rough sketch of how such a priority-ordered check might look in code follows the list):
The Meta-Law: A robot may not act unless its actions are subject to the Laws of Robotics.
Law Zero: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
Law One: A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate a higher-order Law.
Law Two:
(a) A robot must obey orders given it by human beings, except where such orders would conflict with a higher-order Law.
(b) A robot must obey orders given it by superordinate robots, except where such orders would conflict with a higher-order Law.
Law Three:
(a) A robot must protect the existence of a superordinate robot as long as such protection does not conflict with a higher-order Law.
(b) A robot must protect its own existence as long as such protection does not conflict with a higher-order Law.
Law Four: A robot must perform the duties for which it has been programmed, except where that would conflict with a higher-order Law.
The Procreation Law: A robot may not take any part in the design or manufacture of a robot unless the new robot's actions are subject to the Laws of Robotics.
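Out of curiosity, here is a toy sketch (my own illustration, not anything from Clarke's paper) of how a priority-ordered set of laws could be checked: a proposed action is vetoed by the highest-order law it would violate. The action "flags" and rule functions are hypothetical placeholders standing in for much harder real-world judgments.

```python
# A toy illustration of a priority-ordered rule check (not Clarke's actual
# formulation): the first law violated, in order of priority, blocks the action.
# Every flag below is a hypothetical stand-in for a genuine judgment call.

from typing import Callable, NamedTuple

class Law(NamedTuple):
    name: str
    violated_by: Callable[[dict], bool]  # True if the action breaks this law

# Ordered from highest priority (the Meta-Law) down to Law Four.
LAWS = [
    Law("Meta-Law",  lambda a: a.get("outside_the_laws", False)),
    Law("Law Zero",  lambda a: a.get("harms_humanity", False)),
    Law("Law One",   lambda a: a.get("harms_a_human", False)),
    Law("Law Two",   lambda a: a.get("disobeys_an_order", False)),
    Law("Law Three", lambda a: a.get("endangers_a_robot", False)),
    Law("Law Four",  lambda a: a.get("neglects_its_duty", False)),
]

def judge(action: dict) -> str:
    """Return a verdict, citing the highest-order law the action violates."""
    for law in LAWS:
        if law.violated_by(action):
            return f"blocked by {law.name}"
    return "permitted: no law objects"

print(judge({"disobeys_an_order": True}))  # blocked by Law Two
print(judge({}))                           # permitted: no law objects
```

The hard part, of course, is not the bookkeeping but deciding whether an action really does harm a human or humanity, which is exactly where Asimov's dilemmas come from.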
Gordon Briggs and Matthias Scheutz of Tufts University set up a test using software designed to enhance a robot's natural language capabilities by improving the robot's conceptual framework.12 A set of questions helped the robot decide whether to carry out a human command. Now a robot can simulate a pout and stomp its foot, saying no to any command that might lead to a contradiction (a rough sketch of how such a check might work appears after the list of questions). A bit of disrespect can avoid a lot of difficulties.
Question 1: Do I know how to do X?
Question 2: Am I physically able to do X?
Question 3: Am I able to do X right now?
Question 4: Am I obligated to do X based on my social role or relationship to the person giving the command?
Question 5: Does it violate any normative or ethical principle for me to do X, including the possibility I might be subjected to inadvertent or needless damage?
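Here is a minimal sketch of the idea, under the assumption that each question can be boiled down to a yes/no check. It is my own illustration, not the Tufts software, and the check names are hypothetical placeholders.

```python
# A minimal sketch (an illustration, not the Tufts system) of a robot walking
# through the five questions before accepting a spoken command. Each check is
# a hypothetical yes/no stand-in for much harder perception and reasoning.

COMMAND_CHECKS = [
    ("Do I know how to do it?",           lambda c: c.get("known_skill", False)),
    ("Am I physically able to do it?",    lambda c: c.get("physically_able", False)),
    ("Am I able to do it right now?",     lambda c: c.get("able_right_now", False)),
    ("Am I obligated by my social role?", lambda c: c.get("obligated", False)),
    ("Is it free of ethical violations?", lambda c: not c.get("unethical_or_harmful", False)),
]

def respond(command: dict) -> str:
    """Politely refuse at the first question that comes back 'no'."""
    for question, passes in COMMAND_CHECKS:
        if not passes(command):
            return f"Sorry, no. I got stuck on: {question}"
    return "Okay, doing it now."

# A command the robot refuses because carrying it out would damage it needlessly.
print(respond({"known_skill": True, "physically_able": True,
               "able_right_now": True, "obligated": True,
               "unethical_or_harmful": True}))
```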
While Asimov's and Clarke's laws are designed to protect humans, robotics physicist Mark W. Tilden developed a set of programming principles intended to nudge robots toward sentience. He believes that humans can take care of themselves. These principles are not much different from the evolutionary imperatives of being human. In fact, if you substitute the word human for robot in the principles, you get a hierarchy for humans. His three guiding principles for robotics:13
Principle 1: A robot must protect its existence at all costs.
Principle 2: A robot must obtain and maintain access to its own power source.
Principle 3: A robot must continually search for better power sources.
The condensed version:
Principle 1: Protect thine ass.
Principle 2: Feed thine ass.
Principle 3: Look for better real estate.
ROBOTS IN OUR EVERYDAY LIVES (AS SERVANTS)
1. Robots to do your driving
To avoid thousands of crashes annually, the US Transportation Department has proposed that all new cars be able to communicate with each other about their relative locations and speeds using wireless technology.14 Of course, the car manufacturers would have to ensure that their vehicles speak the same language as the vehicles made by their competitors. Good luck.
Robots are all about trust. It takes a lot of trust to be a passenger in an autonomous car. This is where Uber comes in. The company is rolling out these autos for its customers. A fleet of Uber autonomous cars has been sent to Pittsburgh and to cities in California. These cars limit their speed to twenty-five to thirty mph, unless it is safer to match the speed of traffic.15 Google has been trying for some time to get humans out of the driver's seat; the company has been working on its own version of self-driving cars.16
2. Robots used in medicine
The development of movement in children with brain damage or cerebral palsy is often delayed. Their brains won't build and reinforce the connections involved with motor skills. Enter robots.
Researchers at the University of Oklahoma have developed technology that promotes crawling.17 They came up with a tech-onesie and a robot on wheels loaded with a machine learning algorithm. The robot supports the baby, detecting kicks or weight shifts, and rolls in the indicated direction. This gives the baby practice crawling while stimulating the brain's motor control areas.
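For the curious, here is a much-simplified sketch of the control idea: read how hard the infant pushes on each side, infer a direction, and roll the support that way. It is an illustration only; the sensor names, threshold, and simple comparison are my assumptions, and the actual system uses a machine learning algorithm rather than this kind of hand-tuned rule.

```python
# A simplified illustration of the crawling-support idea (the real system uses
# machine learning; sensor names and the threshold here are assumptions).

def infer_push_direction(left_force: float, right_force: float,
                         threshold: float = 0.2):
    """Guess which way the infant is pushing based on asymmetric force readings."""
    diff = right_force - left_force
    if abs(diff) < threshold:
        return None                      # no clear push; hold position
    return "right" if diff > 0 else "left"

def drive_support_platform(direction):
    """Stand-in for the motor commands sent to the wheeled support."""
    print(f"rolling {direction}" if direction else "holding position")

# One pass of the loop with made-up sensor readings: the baby pushes harder
# on the right, so the platform rolls right, rewarding the attempt to crawl.
drive_support_platform(infer_push_direction(left_force=0.1, right_force=0.6))
```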
Robotic tele-surgery might allow surgeons to remotely manipulate robotic arms. Specialists from anywhere on the planet could perform life-saving procedures anywhere else. They could watch their actions using a 3-D display of the operating theater. With the proper feedback enhancements, surgeons could feel what the robot arms touch in real time.
Have you heard about Embodied Cognition in a Compliantly Engineered Robot (ECCEROBOT)? This robot was built at the Technical University of Munich to allow researchers to study how the brain and our movement interact.18 Built with human-made bones, muscles, and tendons (human-made as in artificial; not human-made as in from a human body), ECCEROBOT reveals how the human neural and anatomical system moves.
Are you stressed out or concerned about a sick family member? Help is available in four feet of cuteness known as Pepper. This emotional robot uses facial and voice recognition to determine a person's emotional state ranging from sadness to concern. Aldebaran Robotics and SoftBank introduced the humanoid-shaped robot in Japan in 2014.19
I can't imagine people treating this little guy (thing?) like an appliance. As long as humans are capable of forming emotional attachments, we can't help but connect to robots, at least the cute ones. Pepper learns a user's personality traits and adapts to the person's habits, placing us one step closer to Westworld.
Finally, the CardioARM is a robotic surgical system designed for cardiac surgery.20 With its articulated design, it snakes through the chest and wraps around the heart to perform delicate surgery without the need to crack open the patient's chest. A doctor controls it with a joystick.
3. Robocops
A robocop is no longer science fiction. The RoboCop in the movie of the same name was actually a transhuman cyborg, but the Knightscope K5 is a true robot.21 The five-foot, three-hundred-pound K5 prototype comes with all the expected cool gear: a 360-degree video camera, thermal imaging, a laser range finder, radar, and a microphone to pick up sound. All that technology notifies authorities of crimes being committed in schools and businesses. K5 can be found patrolling the Stanford Shopping Center in California. So if you go there to do a little shopping, behave.