Know This


by John Brockman


  Deep Learning, Semantics, and Society

  Steve Omohundro

  Scientist, Possibility Research, Self-Aware Systems; cofounder, Center for Complex Systems Research

  Deep-learning neural networks are the most exciting recent technological and scientific development. Technologically, they are soundly beating competing approaches in a wide variety of contests including speech recognition, image recognition, image captioning, sentiment analysis, translation, drug discovery, and video-game performance. This has led to huge investments by the big technology companies and the formation of more than 300 deep-learning startups with more than $1.5 billion of investment.

  Scientifically, these networks are shedding new light on one of the most important scientific questions of our time: “How do we represent and manipulate meaning?” Many theories of meaning have been proposed that involve mapping phrases, sounds, or images into logical calculi with formal rules of manipulation. For example, Montague semantics tries to map natural-language phrases into a typed lambda calculus.

  The deep-learning networks naturally map input words, sounds, or images into vectors of neural activity. These vector representations exhibit a curious “algebra of meaning.” For example, after training on a large English language corpus, Tomas Mikolov’s Word2Vec exhibits this strange relationship: “King - Man + Woman = Queen.” His network tries to predict words from their context (or vice versa). The shift of context from “The king ate his lunch” to “The queen ate her lunch” is the same as from “The man ate his lunch” to “The woman ate her lunch.” The statistics of many similar sentences lead to the vector from “king” to “queen” being the same as from “man” to “woman.” It also maps “prince” to “princess,” “hero” to “heroine,” and many other similar pairs. Other “meaning equations” include “Paris - France + Italy = Rome,” “Obama - USA + Russia = Putin,” “Architect - Building + Software = Programmer.” In this way, these systems discover important relational information purely from the statistics of training examples.
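  A minimal sketch of this vector arithmetic, using hand-assigned toy vectors rather than a trained Word2Vec model (the vector values and three-dimensional size here are illustrative assumptions, not learned embeddings):

```python
import numpy as np

# Toy "embeddings"; a real Word2Vec model learns such vectors from corpus
# statistics, but the nearest-neighbor arithmetic below is the same.
vectors = {
    "king":  np.array([0.8, 0.9, 0.1]),
    "queen": np.array([0.8, 0.1, 0.9]),
    "man":   np.array([0.2, 0.9, 0.1]),
    "woman": np.array([0.2, 0.1, 0.9]),
}

def nearest(target, exclude):
    """Vocabulary word whose vector has the highest cosine similarity to target."""
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vectors if w not in exclude),
               key=lambda w: cos(vectors[w], target))

# "king" - "man" + "woman" lands nearest to "queen"
print(nearest(vectors["king"] - vectors["man"] + vectors["woman"],
              exclude={"king", "man", "woman"}))
```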

  The success of these networks can be thought of as a triumph of distributional semantics, first proposed in the 1950s. Meaning, relations, and valid inference all arise from the statistics of experiential contexts. Similar phenomena were found in the visual domain in Radford, Metz, and Chintala’s deep networks for generating images. The vector representing a smiling woman minus the woman with a neutral expression plus a neutral man produces an image of the man smiling. A man with glasses minus the man without glasses plus a woman without glasses produces an image of the woman with glasses.

  Deep-learning neural networks now have hundreds of important applications. A classical challenge for industrial robots is to use vision to find and pick up a desired part from a bin of disorganized parts. An industrial-robot company recently reported success at this task using a deep neural network with eight hours of training. A drone company recently described a deep neural network that autonomously flies drones in complex real-world environments. Why are these advances happening now? For these networks to learn effectively, they require large training sets, often with millions of examples. This, combined with the large size of the networks, means that they also require large amounts of computational power. These systems are having a big impact now because the Web is a source of large training sets, and modern computers with graphics co-processors have the power to train them.

  Where is this going? Expect these networks to soon take on every conceivable application. Several recent university courses on deep learning have posted their students’ class projects. In just a few months, hundreds of students were able to use these technologies to solve a wide variety of problems that would have been regarded as major research programs a decade ago. We are in a kind of Cambrian explosion of these networks right now. Groups all over the world are experimenting with different sizes, structures, and training techniques, and other groups are building hardware to make them more efficient.

  All of this is exciting, but it also means that artificial intelligence is likely to soon have a much bigger impact on our society. We must work to ensure that these systems have a beneficial effect—and to create social structures that help integrate the new technologies. Many of the contest-winning networks are “feedforward” from input to output. These typically perform classification or evaluation of their inputs and don’t invent or create anything. More recent networks are “recurrent nets,” which can be trained by “reinforcement learning” to take actions to best achieve rewards. This kind of system is better able to discover surprising or unexpected ways of achieving a goal. The next generation of networks will create world models and do detailed reasoning to choose optimal actions. That class of system must be designed very carefully to avoid unexpected undesirable behaviors. We must carefully choose the goals we ask these systems to optimize. If we can develop the scientific understanding and social will to guide these developments in a beneficial direction, the future is bright indeed!
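  A rough illustration of the structural difference drawn above, as a NumPy sketch with arbitrary toy sizes rather than any particular contest-winning architecture: a feedforward net maps each input straight to an output and remembers nothing, while a recurrent net carries a hidden state across steps, which is what lets it be trained (for example, by reinforcement learning) to act over time.

```python
import numpy as np

rng = np.random.default_rng(0)
W_in  = rng.normal(size=(4, 3))   # input -> hidden
W_h   = rng.normal(size=(4, 4))   # hidden -> hidden (the recurrence)
W_out = rng.normal(size=(2, 4))   # hidden -> output

def feedforward(x):
    # One input in, one output out; nothing is remembered between calls.
    return W_out @ np.tanh(W_in @ x)

def recurrent(inputs):
    # The hidden state h threads through time, so earlier inputs
    # influence later outputs (and, with a reward signal, later actions).
    h, outputs = np.zeros(4), []
    for x in inputs:
        h = np.tanh(W_in @ x + W_h @ h)
        outputs.append(W_out @ h)
    return outputs

print(feedforward(np.ones(3)))
print(recurrent([np.ones(3), np.zeros(3), np.ones(3)]))
```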

  Seeing Our Cyborg Selves

  Thomas A. Bass

  Professor of English and journalism, University at Albany, SUNY; author, The Spy Who Loved Us

  We are still rolling down the track created by Moore’s Law, which means that news about science and technology will continue to focus on computers getting smaller, smarter, faster, and increasingly integrated into the fabric of our everyday lives—in fact, integrated into our bodies as prosthetic organs and eyes. Our cyborg selves are being created out of advances not only in computers but also in computer peripherals. This is the technology that allows computers to hear, touch, and see.

  Computers are becoming better at “seeing” because of advances in optics and lenses. Manufactured lenses, in some ways better than human lenses, are getting cheap enough to put everywhere. This is why the news is filled with stories about self-driving cars, drones, and other technology that relies on having lots of cameras integrated into objects.

  This is also why we live in the age of selfies and surveillance. We turn lenses on ourselves as readily as the world turns lenses on us. If once we had a private self, this self has disappeared into curated images of ourselves doing stuff that provokes envy in the hearts of our less successful “friends.” If once we walked down the street with our gaze turned outward on the world, now we walk with our eyes focused on the screens that mediate this world. At the same time, we are tracked by cameras that record our motion through public space, which has become monitored space.

  Lenses molded from polymers cost pennies to manufacture, and the software required to analyze images is getting increasingly smart and ubiquitous. Lenses advanced enough for microscopy now cost less than a dollar. A recent issue of Nature Photonics, reporting on work done by researchers in Edinburgh, described cameras that use single photons to take pictures around corners and in other places the human eye can’t see. This is why our self-driving cars will soon have lower insurance rates than the vehicles we currently navigate around town.

  The language of sight is the language of life. We get the big picture. We focus on a problem. We see—or fail to see—each other’s point of view. We have many ways of looking, and more are being created every day. With computers getting better at seeing, we need to keep pace with understanding what we’re looking at.

  The Rejection of Science Itself

  Douglas Rushkoff

  Media analyst; documentarian; author, Throwing Rocks at the Google Bus

  I’m most interested by the news that an increasing number of people are rejecting science altogether. With 31 percent of Americans believing that human beings have existed in their current form since the beginning and only 35 percent agreeing that evolution happened through natural processes, it’s no wonder that parents reject immunization for their children and voters support candidates who value fervor over fact.

  To be sure, science has brought some of this on itself, by refusing to admit the possibility of any essence to existence and by too often aligning with corporate efforts to profit from discoveries with little concern for the long-term effects on human well-being.

  But the dangers of an antiscientific perspective, held so widely, are particularly perilous at this moment in technological history. We are fast acquiring the tools of creation formerly relegated to deities. From digital and genetic programming to robots and nanotechnology, we are developing things that, once created, will continue to act on their own. They will adapt, defend, and replicate, much as life itself. We have evolved into the closest things to gods this world has ever known, yet most of us have yet to acknowledge the actual processes that got us to this point.

  That so many trade scientific reality for provably false fantasy at precisely the moment when we have gained such powers may not be entirely coincidental. But if these abilities are seen as something other than the fruits of science, and are applied with utter disregard to their scientific context, I fear we will lack the humility required to employ them responsibly.

  The big science story of the century—one that may even decide our fate—will be whether or not we accept science at all.

  Re-thinking Artificial Intelligence

  Rodney A. Brooks

  Roboticist; Panasonic Professor of Robotics, emeritus, MIT; author, Flesh and Machines

  This past year there has been an endless supply of news stories, as distinct from news itself, about artificial intelligence. Many of these stories concerned the opinions of eminent scientists and engineers who do not work in the field, about the almost immediate dangers of superintelligent systems waking up and not sharing human ethics and being disastrous for humankind. Others have quoted people in the field on the immorality of having AI systems make tactical military decisions. Still others report that various car manufacturers predict the imminence of self-driving cars on our roads. Yet others cite philosophers (amateur and otherwise) on how such vehicles will have to make life-or-death decisions.

  My own opinions on these topics are counter to the popular narrative; mostly I think people are getting way ahead of themselves. Arthur C. Clarke’s third law is that any sufficiently advanced technology is indistinguishable from magic. These news stories, and the experts prompting them, are jumping so far ahead of the state of the art in AI that they talk about a magic future variety of it, and once magic is involved, any consequence one desires or fears can be derived.

  There has also been recent legitimate news on artificial intelligence, most of it centering on the stunning performance of deep-learning algorithms—the back-propagation ideas of the mid-1980s now extended, by better mathematics, to many more than just three network layers, and extended in computational resources by the massive computer clouds maintained by West Coast U.S. tech titans and also by the clever use of GPUs (graphics processing units) within those clouds.
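  For readers who want to see the underlying mechanism, here is a minimal sketch of back-propagation in plain NumPy: a small two-layer network trained by gradient descent on the XOR problem. Modern deep learning applies this same recipe to far deeper networks, far larger data sets, and GPUs; the layer sizes, learning rate, and step count below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)         # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)            # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)            # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: push the error gradient back layer by layer (the chain rule).
    dp = p - y                                           # cross-entropy gradient w.r.t. the pre-sigmoid output
    dW2, db2 = h.T @ dp, dp.sum(axis=0)
    dh = (dp @ W2.T) * (1 - h ** 2)                      # back through the tanh nonlinearity
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    # Gradient-descent update.
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.5 * grad

print(np.round(p, 2))   # approaches [[0], [1], [1], [0]]
```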

  The most practical immediate effect of deep learning is that speech-understanding systems are noticeably better than just two or three years ago, enabling new services on the Web or on our smartphones and home devices. We can easily talk to those devices now and have them understand us. The frustrating speech interfaces of five years ago are gone.

  The success of deep learning has, I believe, led many people to wrong conclusions. When someone displays a particular performance in some task—translating text from a foreign language, say—we have an intuitive understanding of how to generalize to what sort of competence the person has. For instance, we know that the person understands that language and can answer questions about which of the people in a story about a child dying in a terrorist attack, say, would mourn for months and who would feel they had achieved their goals. But the translation program likely has no such depth of understanding. One cannot apply the normal generalization from performance to competence to make similar generalizations for AI programs.

  By now we have started to see a trickle of news stories running counter to the narrative of artificial intelligence’s runaway success. I welcome these stories, as they strike me as bringing reality back to the debates about our future relationship to AI. There are two sorts of such stories:

  The first is about the science, with many researchers now declaring that a lot more science needs to be done to come up with learning algorithms mimicking the broad capabilities of humans and animals. Deep learning, by itself, won’t solve many of the learning problems for general AI—for instance, where spatial or deductive reasoning is involved. Further, all the breakthrough results we’ve seen have been years in the making, and there’s no reason to expect a sudden and sustained series of them, despite the enthusiasm of young researchers who weren’t around during the last three waves of such predictions, in the 1950s, 1960s, and 1980s.

  The second class of stories is about how self-driving cars and drivers of other cars interact. When large physical kinetic masses are in close proximity to human beings, the rate of adoption has been much slower than that of, say, JavaScript in Web browsers. There has been a naïve enthusiasm that fully self-driving cars will soon be deployed on public roads. The reality is that there will be fatal accidents (even things built by incredibly smart people sometimes blow up), which will cause irrational levels of caution (given the daily death toll worldwide of more than 3,000 automobile fatalities caused by people). The latest news stories document the high accident rate of self-driving cars under test; so far, all are minor accidents and attributable to errors on the part of the other driver, the human. The cars themselves are driving perfectly, goes the narrative, and not breaking the law like all humans do, so it’s the humans that are at fault. When you’re arguing that those pesky humans just don’t get a technology, you’ve already lost the argument. A lot more work must be done before self-driving cars are loosed in environments where ordinary people are also driving, no matter how shiny the technology seems to the engineers building it.

  The hype in the news about AI is finally being met with a little pushback. There will be screams of indignation from true believers, but eventually this bubble will fade. We’ll gradually see more and more effective uses of AI in our lives, but it will be slow and steady—not explosive and not existentially dangerous.

  I, for One

  Joshua Bongard

  Associate professor of computer science, University of Vermont; author, How the Body Shapes the Way We Think

  “Welcome, our new robot overlords,” I will say when they arrive. As I sit here nursing a coffee, watching the snow fall outside, I daydream about the coming robot revolution. The number of news articles about robotics and AI is growing exponentially, indicating that superintelligent machines will arise shortly. Perhaps in 2017.

  As a roboticist myself, I hope to contribute to this phase change in the history of life on Earth. The human species has recently painted itself into a corner and—global climate conferences and nuclear nonproliferation treaties notwithstanding—seems unlikely to find a way out with biological smarts alone: We’re going to need help. And the growing number of known Earth-like yet silent planets indicates that we can’t rely on alien help anytime soon. We’re going to need homegrown help. Machine help. There is much that superintelligent machines could help us with.

  Very, very slowly, some individuals in some human societies have been enlarging their circles of empathy: human rights, animal cruelty, and microaggressions are recent inventions. Taken together, they indicate that we are increasingly able to place ourselves in others’ shoes. We can feel what it would be like to be the target of hostility or violence. Perhaps machines will help us widen these circles. My intelligent frying pan may suggest sautéed veggies over the bloody steak I’m about to drop into it. A smartphone might detect cyberbullying in a photo I’m about to upload and suggest that I think about how that might make the person in the photo feel. Better yet, we could imbue machines with the goal of self-preservation, mirror neurons to mentally simulate how others’ actions may endanger their own continued existence, and the ability to invert those thought processes so that they can realize how their own actions threaten the existence of others. Such machines would then develop empathy. Driven by sympathy, they would feel compelled to teach us how to strengthen our own abilities in that regard. In short: future machines may empathize about humans’ limited powers of empathy.

  The same neural machinery that enables us (if we so choose) to imagine the emotional or physical pain suffered by another also allows us to predict how our current choices will influence our future selves. This is known as prospection. But humans are also lazy; we make choices now that we come to regret later. Machines could help us here, too. Imagine neural implants that can directly stimulate the pain and pleasure centers of the brain. Such a device could make you feel sick before your first bite into that bacon cheeseburger rather than after you’ve finished it. A passive-aggressive comment to a colleague or loved one would result in an immediate fillip to the inside of the skull.

  In the same way that machines could help us maximize our powers of empathy and prospection, they could also help us minimize our agency-attribution tendencies. If you’re a furry little creature running through the forest and you see a leaf shaking near your path, it’s safer to attribute agency to the leaf’s motion than to not: Better to believe there’s a predator hiding behind the leaf than to attribute its motion to wind. Such paranoia stands you in good Darwinian stead, in contrast to another creature who thinks “Wind” and ends up eaten. It is possible that such paranoid creatures evolved into religious humans who saw imaginary predators (i.e., gods) behind every thunderstorm and stubbed toe. But religion leads to religious wars and leaders who announce, “God made me do it.” Such defenses don’t hold up well in modern humanist societies. Perhaps machines could help us correctly interpret the causes of each and every sling and arrow of outrageous fortune we experience in our daily lives. Did I miss my bus because I’m being punished for the fact that I didn’t call my sister yesterday? My Web-enabled glasses immediately flick on to show me that bus schedules have become more erratic due to this year’s cut in my city’s public transportation budget. I relax as I start walking to the subway: It’s not my fault.

 
