
Hit Refresh


by Satya Nadella


  One of our top AI researchers decided to try an experiment to demonstrate how a computer can learn to learn. A highly esteemed computer scientist and medical doctor, Eric Horvitz, runs our Redmond research lab and has long been fascinated with machines that perceive, learn, and reason. His experiment was to make it easier for a visitor to find him, and to free up his human assistant for more critical work than the mundane task of constantly giving directions. So, to visit his office, you enter the ground-floor lobby, where a camera and computer immediately notice you, calculate your direction, pace, and distance, and then predict where you are headed so that an elevator is already waiting for you. Getting off the elevator, a robot says hello and asks if you need help finding Eric’s desk among the confusing corridors and warren of surrounding offices. Once there, a virtual assistant has already anticipated your arrival, knows Eric is finishing up a phone call, and asks if you’d like to be seated until Eric is available. The system received some basic training but, over time, learned to learn on its own, so that programmers were no longer needed. It was trained, for example, to know what to do if someone in the lobby pauses to answer a call or stops to pick up a pen that’s fallen on the floor. It began to infer, to learn, and to program itself.

  Peter Lee is another gifted AI researcher and thinker at Microsoft. In a meeting one morning in his office, Peter reflected on something the journalist Geoffrey Willans once said: “You can never understand one language until you understand at least two.” Goethe went further: “He who does not know foreign languages does not know anything about his own.” Learning or improvement in one skill or mental function can positively influence another one. The effect is called transfer learning, and it’s seen not only in human intelligence but also in machine intelligence. Our team, for example, found that if we trained a computer to speak English, it learned Spanish or another language faster.
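
  To make the idea concrete, here is a minimal sketch of how transfer learning is commonly applied in practice (not Microsoft’s actual speech system): a model already trained on one language is frozen and reused, and only a small new layer is trained for the second language. The layer sizes and vocabulary numbers below are illustrative assumptions.

    import torch
    import torch.nn as nn

    # Illustrative encoder standing in for a model pretrained on English.
    class Encoder(nn.Module):
        def __init__(self, vocab_size=1000, hidden=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, hidden)
            self.rnn = nn.GRU(hidden, hidden, batch_first=True)

        def forward(self, tokens):
            x = self.embed(tokens)
            _, state = self.rnn(x)
            return state.squeeze(0)    # a fixed-size sentence representation

    encoder = Encoder()                # imagine these weights were learned on English
    for p in encoder.parameters():
        p.requires_grad = False        # freeze what was already learned

    spanish_head = nn.Linear(64, 500)  # small new layer for a hypothetical Spanish vocabulary

    # Only the new head is trained, which is why the second language comes faster.
    optimizer = torch.optim.Adam(spanish_head.parameters(), lr=1e-3)

  Freezing the shared encoder is what lets the second language reuse everything the network already learned about language in general from the first.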

  Peter’s team decided to invent a real-time, language-to-language translator that breaks the language barrier by enabling a hundred people at one time to speak in nine different languages, or type messages to one another in fifty different languages. The result is inspiring. Workers all over the globe can be linked via Skype or simply by speaking into their smartphones and understand one another instantly. A Chinese speaker can present a sales and marketing plan in her native language and teammates listening in can see or hear what’s being said in their native languages.

  My colleague, Steve Clayton, told me the story of how profound this technology was for his multicultural family. He said the first time he saw the technology demonstrated he knew that his young children, English speakers, would for the first time be able to have a live conversation with their Chinese-speaking relatives.

  Looking ahead, many others will use our tools to expand the translator beyond the initial languages we began with. A healthcare company, for example, may want to create English, Spanish, and other highly specialized versions of the translator that speak the language of medicine. An AI tool would be used to watch and listen to health-care professionals talk, and then, after a period of observation, would automatically generate a new model for a health-care–specific version. A Native American tribe might preserve its language by having the system listen to its elders speak. The optimal state will be when those AI systems not only translate but improve—perhaps converting a conversation into ideas about improving patient care, or converting a conversation into an essay.

  The holy grail for AI has long been a really good personal agent that can assist you in meaningful ways to get the most out of life at home and work. Cortana, named for our synthetic intelligence character in the popular video game Halo, is a fascinating case study of where we stand today and how we hope one day to deliver a highly effective alter ego—an agent that knows you deeply. It will know your context, your family, your work. It will also know the world. It will be unbounded. And it will get smarter the more it’s used. It will learn from its interactions with all of your apps as well as from your documents and emails in Office.

  Today, there are more than 145 million Cortana users each month in 116 countries. Those customers already have asked 13 billion questions, and with each question the agent is learning to become more and more helpful. In fact, I’ve come to rely on Cortana’s commitment feature, which searches through my emails hunting for promises I’ve made and then gently reminds me as the deadline approaches. If I told someone I’d follow up with them in three weeks, Cortana makes a note of that and reminds me later to ensure I keep my commitment.
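
  As a rough illustration only, and not how Cortana is actually built, the heart of such a commitment feature can be pictured as scanning an outgoing message for a promise phrase plus a time frame and turning the pair into a reminder date. Everything in the sketch below, from the phrase list to the date handling, is a hypothetical simplification.

    import re
    from datetime import datetime, timedelta

    # Hypothetical pattern: a promise ("I'll follow up / send / get back to you")
    # followed somewhere by a time frame ("in 3 weeks", "in 10 days").
    COMMITMENT = re.compile(
        r"\bI(?:'ll| will)\s+(?P<action>follow up|send|get back to you)\b.*?"
        r"\bin\s+(?P<count>\d+)\s+(?P<unit>day|week)s?\b",
        re.IGNORECASE,
    )

    def find_commitment(email_text, sent_at):
        match = COMMITMENT.search(email_text)
        if not match:
            return None
        days = int(match.group("count"))
        if match.group("unit").lower() == "week":
            days *= 7
        return match.group("action"), sent_at + timedelta(days=days)

    print(find_commitment("I'll follow up with you in 3 weeks.", datetime(2017, 1, 2)))
    # ('follow up', ...) with a reminder date three weeks after the message was sent

  A production system would rely on machine-learned language understanding rather than a fixed pattern, but the shape of the problem is the same: find the promise, find the deadline, and remind the sender before it slips.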

  Our Cortana team, part of a relatively new AI and research division, works in a tall Microsoft building in downtown Bellevue with windows looking out over the Pacific Northwest’s lakes and mountains. The beauty of these surroundings coupled with the mandate to push the edges of innovation has attracted incredible talent—designers, linguists, knowledge engineers, and computer scientists.

  Jon Hamaker, one of the group’s engineering managers, says his goal is for customers to tell him “I couldn’t live without Cortana—she saved me again today.” He and his team spend their days thinking through scenarios that would make that true. What do our users do—how, when, where, and with whom do they interact? What would build a bond with the user? How can we save the user time, reduce the user’s stress, help the user stay one step ahead of everyday challenges? Hamaker’s quest is to capture every type of data from sources including GPS, email, calendar, and correlative data from the web and turn that data into understanding, and even empathy. Perhaps your digital assistant will schedule time to ask you questions that will help fill in gaps where the data is insufficient in order to help you more. Perhaps the assistant will be helpful in times of uncertainty—when you’re in a new place where the currency and the language are foreign, for example.

  Those kinds of uncertainties fascinate our engineers who focus on semantic ontologies, the study of interrelationships among people and entities. Their ambition is to develop an agent that can do much more than simply get you a search result. They dream of a day when a digital agent will understand context and meaning, using them to better predict what you need and want. The digital assistant should always have a good answer, sometimes even an answer to a question you didn’t know you had.

  Emma Williams is not an engineer. She was an English literature scholar with a focus on Anglo-Saxon and Norse literature. Her job is to think through the emotional intelligence (EQ) design of our AI products, including Cortana. She’s confident about the IQ of the team we have working on agents; she wants to make sure we have the EQ as well.

  One day she discovered a new build of Cortana in which Cortana displayed anger when asked certain questions. Williams promptly put her foot down. (If medieval Norse tales about Vikings taught her anything, it’s that pillaging while searching for resources should not be part of discovering new things.) She made the point that Cortana offers an implicit promise to users that she will always be calm, cool, and collected. Rather than becoming angry with you, Cortana should understand your emotional state, whatever it is, and respond appropriately. The team revised Cortana in accordance with Williams’s sensibilities.

  If this journey toward an AI-powered assistant is one of a million miles, we’ve walked only the first few of those miles. But these first few steps are inspiring ones when we contemplate what they may produce.

  My former colleague David Heckerman is a distinguished scientist who has spent thirty years working on AI. Years ago, he created one of the first effective spam filters by figuring out the weak link of his adversaries—the spammers who clog your in-box with junk mail—and foiled their attempts to succeed. Today the team he built at Microsoft develops machine learning algorithms designed to discover and exploit weak links in HIV, the common cold, and cancer. HIV, the virus that causes AIDS, mutates rapidly and broadly in a human body, but there are constraints in how the virus mutates. The advanced machine learning algorithms we’ve built have discovered which sections of HIV proteins are absolutely essential to their function so that a vaccine can be trained to attack those very regions. Using clinical data, his team can simulate mutations and identify targets. Similarly, they are taking genomic sequences for a cancer tumor and predicting the best targets for the immune system to attack.

  If the potential for this AI work is breathtaking, the potential for quantum computing is mind-blowing.

  * * *

  Santa Barbara, California, is closer to Hollywood than it is to Silicon Valley. Its casual, beach-front college campus just north of Tinseltown is the unlikely center of quantum computing development, the very future of our industry. Its proximity to Hollywood is fitting, since a film script may be a better guide to quantum physics and mechanics than a textbook. Rod Serling’s The Twilight Zone likely put it best: “You’re traveling through another dimension, a dimension not only of sight and sound but of mind. A journey into a wondrous land whose boundaries are that of imagination. That’s the signpost up ahead—your next stop, the Twilight Zone.”

  Defining quantum computing is no simple feat. Originating in the 1980s, quantum computing leverages certain quantum physics properties of atoms or nuclei that allow them to work together as quantum bits, or qubits, to be the computer’s processor and memory. By interacting with each other while being isolated from our environment, qubits can perform certain calculations exponentially faster than conventional, or classical, computers.

  Photosynthesis, bird migration, and even human consciousness are studied as quantum processes. In today’s classical computing world, our brain thinks and our thoughts are typed or spoken into a computer that in turn provides feedback on a screen. In a quantum world, some researchers speculate that there will be no barrier between our brains and computing. It’s a long way off, but might consciousness one day merge with computation?

  “If quantum mechanics hasn’t profoundly shocked you, you haven’t understood it yet,” the Danish Nobel physicist Niels Bohr once said. A later Nobel physicist, Richard Feynman, proposed the notion of quantum computing, unleashing today’s global pursuit to harness quantum mechanics for computation. Among those racing to understand it are Microsoft, Intel, Google, and IBM as well as startups like D-Wave and even governments with hefty national defense budgets. The shared hope is that quantum computing will utterly transform the physics of computing itself.

  Of course, if building a quantum computer were easy, it would have been done by now. While classical computing is bound by its binary code and the laws of physics, quantum computing advances every kind of calculation—math, science, and engineering—from the linear world of bits to the multidimensional universe of qubits. Instead of being simply a 1 or a 0 like the classical bit, qubits can be every combination—a superposition—which enables many computations all at once. Thus, we enter a world in which many parallel computations can be simultaneously answered. In a properly constructed quantum algorithm, the result is, according to one of our scientists, “a great massacre in which all or most of the wrong answers are canceled out.”
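
  In the standard notation of quantum mechanics (a general statement of the physics, not anything particular to our machines), a single qubit is a weighted blend of 0 and 1, and n qubits together carry an amplitude for every one of the 2^n possible bit strings at once:

    |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1

    |\Psi\rangle = \sum_{x \in \{0,1\}^n} c_x\,|x\rangle

  A well-designed quantum algorithm choreographs interference among those amplitudes so that the c_x belonging to wrong answers cancel, which is the “great massacre” our scientist describes, while the amplitudes for correct answers reinforce one another and dominate the final measurement.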

  Quantum computing is not only faster than conventional computing, but its workload obeys a different scaling law—rendering Moore’s Law little more than a quaint memory. Formulated by Intel co-founder Gordon Moore, Moore’s Law observes that the number of transistors in a device’s integrated circuit doubles approximately every two years. Some early supercomputers ran on around 13,000 transistors; the Xbox One in your living room contains 5 billion. But Intel in recent years has reported that the pace of advancement has slowed, creating tremendous demand for alternative ways to provide faster and faster processing to fuel the growth of AI. The short-term results are innovative accelerators like graphics-processing unit (GPU) farms, tensor-processing unit (TPU) chips, and field-programmable gate arrays (FPGAs) in the cloud. But the dream is a quantum computer.
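
  A back-of-the-envelope sketch makes the doubling concrete. The starting year below is an assumption chosen only for illustration; the two transistor counts are the figures cited above.

    # Moore's Law as a simple recurrence: transistor counts double every two years.
    transistors = 13_000                 # the early-supercomputer figure cited above
    year = 1975                          # assumed starting year, for illustration only

    while transistors < 5_000_000_000:   # the Xbox One figure cited above
        transistors *= 2
        year += 2

    print(year, f"{transistors:,}")      # 19 doublings later: 2013, about 6.8 billion

  The point is the shape of the curve rather than the particular numbers: exponential doubling is exactly the kind of growth that stalls as transistors approach physical limits, which is why the industry is reaching for accelerators and, ultimately, qubits.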

  Today we have an urgent need to solve problems that would tie up classical computers for centuries, but that could be solved by a quantum computer in a few minutes or hours. For example, the speed and accuracy with which quantum computing could break today’s highest levels of encryption is mind-boggling. It would take a classical computer 1 billion years to break today’s RSA-2048 encryption, but a quantum computer could crack it in about a hundred seconds, or less than two minutes. Fortunately, quantum computing will also revolutionize classical computing encryption, leading to ever more secure computing.
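
  The asymmetry comes from the algorithms underneath those estimates. The best known classical attack on RSA, the general number field sieve, takes sub-exponential time in the key value N, while Shor’s quantum factoring algorithm needs only a polynomial number of operations in the number of key bits n. The expressions below are the commonly cited asymptotic forms, with lower-order terms and constants omitted.

    \text{classical (GNFS):}\quad \exp\!\Big(\big(\tfrac{64}{9}\big)^{1/3}(\ln N)^{1/3}(\ln\ln N)^{2/3}\Big)

    \text{quantum (Shor):}\quad O(n^{3})\ \text{gate operations}

  Polynomial versus sub-exponential is the whole story: lengthen the key and the classical cost explodes, while the quantum cost grows only modestly.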

  To get there we need three scientific and engineering breakthroughs. The math breakthrough we’re working on is a topological qubit. The superconducting breakthrough we need is a fabrication process to yield thousands of topological qubits that are both highly reliable and stable. The computer science breakthrough we need is new computational methods for programming the quantum computer.

  At Microsoft, our people and our partners right now are working with the transport, experimental and theoretical physics, and the mathematics and computer science that will one day make quantum computing a reality. The hotbed of this activity is Station Q, which is co-located with the theoretical physics department at the University of California at Santa Barbara. Station Q is the brainchild of Michael Freedman, who won math’s top award, the Fields Medal, at the International Congress of the International Mathematical Union in 1986 at age thirty-six. He went on to join Microsoft Research. He’s assembled some of the world’s leading quantum talent in Santa Barbara—the theoretical physicists whose pencil-and-paper calculations provide fodder for experimental physicists, who in turn play with those theoretical conjectures to build experiments that down the road can be used by electrical engineers and app developers to bring quantum computing to market.

  * * *

  It’s just after noon at Station Q and, over tacos al pastor, two theoretical physicists are badgering an experimental physicist about his latest findings. They are arguing over developments in an inquiry focused on a complex corner of the math and physics world known as Majorana fermions, or particles, which hold promise for the kind of superconducting we need to invent a steady-state quantum computer. Sunlight bounces off the Pacific Ocean from nearby Campus Point, illuminating the countless chalky equations they’ve chiseled onto the blackboards that encircle the conference room.

  This is just the kind of intensive, real-time collaboration it will take to produce the breakthroughs we need. Craig Mundie, the visionary former chief technology officer of Microsoft, created our quantum effort years ago, but the academic process was cumbersome. A theoretical physicist publishes an idea. An experimental physicist tests that theory and then publishes the results. When the experiment fails or produces suboptimal results, the theorist then criticizes the experiment’s methodology and updates the original theory. The whole process starts again.

  Now the demand for quantum computing has sped up the race for discovery, and the only way to get there first is to shorten the time in between theory, experiment, and building something. The search for a quantum computer has become something of an arms race. Needing to move more quickly and to be more efficient and outcome-oriented, we have set a goal and timeline to build a quantum computer that can do something useful, something classical computers can’t, and that will require thousands of qubits. To get there, we’ve pressed for greater collaboration. We brought together some of the greatest minds in the world and asked them to work together on an equal basis and to approach problems together with openness and humility. We agreed that experimental and theoretical scientists would sit together or work closely over Skype to shape the ideas and the tests, a practice that has greatly streamlined the process.

  So far, we’ve had more than thirty patents issued, but the finish line remains distant. While the races for the cloud, artificial intelligence, and mixed reality have been loud and well-publicized, the quantum computing race has gone largely unnoticed, in part because of its complexity and secrecy.

  A worthy target for quantum will be advancing AI’s ability to truly comprehend human speech and then accurately summarize it. Even more promising, quantum computing may ultimately save lives through incredible medical breakthroughs. For example, the computational problem of developing a vaccine to target HIV exhausts present computational resources, since the HIV protein coat is highly variable and constantly evolving. As a result, an HIV vaccine has been projected to be ten years away now for several decades. With a quantum computer, we could approach this problem in a new way.

  The same can be said of a dozen other areas in which technology is “stuck”—high-temperature superconductors, energy-efficient fertilizer production, string theory. A quantum computer would allow a new look at our most compelling problems.

  Computer scientist Krysta Svore is at the heart of our quest to solve problems on a quantum computer. Krysta received her PhD from Columbia University focusing on fault tolerance and scalable quantum computing, and she spent a year at MIT working with an experimentalist designing the software needed to control a quantum computer. Her team is designing an exotic software architecture that assumes our math, physics, and superconducting experts succeed in building a quantum computer. To decide which problems her software should go after first, she invited quantum chemists from around the world to make presentations and to brainstorm. One problem stood out. Millions of people around the world go hungry because of inadequate food production or flawed distribution. One of the biggest problems with food production is that it requires fertilizer, which can be costly and draining on our environmental resources. Making fertilizer requires converting nitrogen from the atmosphere into ammonia, a conversion that soil bacteria carry out naturally but that industry achieves only with enormous heat and pressure. This chemistry, known as the Haber process, has not been improved upon since Fritz Haber and Carl Bosch invented it in 1910. The problem is so big and so complex there simply have not been breakthroughs. A quantum computer in partnership with a classical computer, however, can run massive experiments in order to discover a new, artificial catalyst that can mimic the bacterial process and reduce the amount of methane gas and energy required to produce fertilizer, reducing the threat to our environment.
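
  For reference, the net reaction the Haber process drives is shown below; the industrial conditions are stated only approximately, to anchor the chemistry.

    \mathrm{N_2 + 3\,H_2 \;\rightleftharpoons\; 2\,NH_3}

  Industry pushes this equilibrium forward with an iron catalyst at high temperature and pressure, while nitrogen-fixing bacteria manage the same conversion at ambient conditions, which is why simulating and mimicking their catalyst is such an enticing target for a quantum computer working alongside a classical one.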

 
