AI Superpowers


by Kai-Fu Lee


  But not every technological revolution follows this pattern. Often, once a fundamental breakthrough has been achieved, the center of gravity quickly shifts from a handful of elite researchers to an army of tinkerers—engineers with just enough expertise to apply the technology to different problems. This is particularly true when the payoff of a breakthrough is diffused throughout society rather than concentrated in a few labs or weapons systems.

  Mass electrification exemplified this process. Following Thomas Edison’s harnessing of electricity, the field rapidly shifted from invention to implementation. Thousands of engineers began tinkering with electricity, using it to power new devices and reorganize industrial processes. Those tinkerers didn’t have to break new ground like Edison. They just had to know enough about how electricity worked to turn its power into useful and profitable machines.

  Our present phase of AI implementation fits this latter model. A constant stream of headlines about the latest task tackled by AI gives us the mistaken sense that we are living through an age of discovery, a time when the Enrico Fermis of the world determine the balance of power. In reality, we are witnessing the application of one fundamental breakthrough—deep learning and related techniques—to many different problems. That’s a process that requires well-trained AI scientists, the tinkerers of this age. Today, those tinkerers are putting AI’s superhuman powers of pattern recognition to use making loans, driving cars, translating text, playing Go, and powering your Amazon Alexa.

  Deep-learning pioneers like Geoffrey Hinton, Yann LeCun, and Yoshua Bengio—the Enrico Fermis of AI—continue to push the boundaries of artificial intelligence. And they may yet produce another game-changing breakthrough, one that scrambles the global technological pecking order. But in the meantime, the real action today is with the tinkerers.

  INTELLIGENCE SHARING

  And for this technological revolution, the tinkerers have an added advantage: real-time access to the work of leading pioneers. During the Industrial Revolution, national borders and language barriers meant that new industrial breakthroughs remained bottled up in their country of origin, England. America’s cultural proximity and loose intellectual property laws helped it pilfer some key inventions, but there remained a substantial lag between the innovator and the imitator.

  Not so today. When asked how far China lags behind Silicon Valley in artificial intelligence research, some Chinese entrepreneurs jokingly answer “sixteen hours”—the time difference between California and Beijing. America may be home to the top researchers, but much of their work and insight is instantaneously available to anyone with an internet connection and a grounding in AI fundamentals. Facilitating this knowledge transfer are two defining traits of the AI research community: openness and speed.

  Artificial intelligence researchers tend to be quite open about publishing their algorithms, data, and results. That openness grew out of the common goal of advancing the field and also from the desire for objective metrics in competitions. In many physical sciences, experiments cannot be fully replicated from one lab to the next—minute variations in technique or environment can greatly affect results. But AI experiments are perfectly replicable, and algorithms are directly comparable. They simply require those algorithms to be trained and tested on identical data sets. International competitions frequently pit different computer vision or speech recognition teams against each other, with the competitors opening their work to scrutiny by other researchers.

  The speed of improvements in AI also drives researchers to instantly share their results. Many AI scientists aren’t trying to make fundamental breakthroughs on the scale of deep learning, but they are constantly making marginal improvements to the best algorithms. Those improvements regularly set new records for accuracy on tasks like speech recognition or visual identification. Researchers compete on the basis of these records—not on new products or revenue numbers—and when one sets a new record, he or she wants to be recognized and receive credit for the achievement. But given the rapid pace of improvements, many researchers fear that if they wait to publish in a journal, their record will already have been eclipsed and their moment at the cutting edge will go undocumented. So instead of sitting on that research, they opt for instant publication on websites like www.arxiv.org, an online repository of scientific papers. The site lets researchers instantly time-stamp their research, planting a stake in the ground to mark the “when and what” of their algorithmic achievements.

  In the post-AlphaGo world, Chinese students, researchers, and engineers are among the most voracious readers of www.arxiv.org. They trawl the site for new techniques, soaking up everything the world’s top researchers have to offer. Alongside these academic publications, Chinese AI students also stream, translate, and subtitle lectures from leading AI scientists like Yann LeCun, Stanford’s Sebastian Thrun, and Andrew Ng. After decades spent studying outdated textbooks in the dark, these researchers revel in this instant connectivity to global research trends.

  On WeChat, China’s AI community coalesces in giant group chats and multimedia platforms to chew over what’s new in AI. Thirteen new media companies have sprung up just to cover the sector, offering industry news, expert analysis, and open-ended dialogue. These AI-focused outlets boast over a million registered users, and half of them have taken on venture funding that values them at more than $10 million each. For more academic discussions, I’m part of the five-hundred-member “Weekly Paper Discussion Group,” just one of the dozens of WeChat groups that come together to dissect a new AI research publication each week. The chat group buzzes with hundreds of messages per day: earnest questions about this week’s paper, screen shots of the members’ latest algorithmic achievements, and, of course, plenty of animated emojis.

  But Chinese AI practitioners aren’t just passive recipients of wisdom spilling forth from the Western world. They’re now giving back to that research ecosystem at an accelerating rate.

  CONFERENCE CONFLICTS

  The Association for the Advancement of Artificial Intelligence had a problem. The storied organization had been putting on one of the world’s most important AI conferences for three decades, but in 2017 they were in danger of hosting a dud event.

  Why? The conference dates conflicted with Chinese New Year.

  A few years earlier, this wouldn’t have been a problem. Historically, American, British, and Canadian scholars have dominated the proceedings, with just a handful of Chinese researchers presenting papers. But the 2017 conference had accepted an almost equal number of papers from researchers in China and the United States, and it was in danger of losing half of that equation to their culture’s most important holiday.

  “Nobody would have put AAAI on Christmas day,” the group’s president told the Atlantic. “Our organization had to almost turn on a dime and change the conference venue to hold it a week later.”

  Chinese AI contributions have occurred at all levels, ranging from marginal tweaks of existing models to the introduction of world-class new approaches to neural network construction. A look at citations in academic research reveals the growing clout of Chinese researchers. One study by Sinovation Ventures examined citations in the top one hundred AI journals and conferences from 2006 to 2015; it found that the percentage of papers by authors with Chinese names nearly doubled from 23.2 percent to 42.8 percent during that time. That total includes some authors with Chinese names who work abroad—for example, Chinese American researchers who haven’t adopted an anglicized name. But a survey of the authors’ research institutions found the great majority of them to be working in China.

  A recent tally of citations at global research institutions confirmed the trend. That ranking of the one hundred most-cited research institutions on AI from 2012 to 2016 showed China ranking second only to the United States. Among the elite institutions, Tsinghua University even outnumbered places like Stanford University in total AI citations. These studies largely captured the pre-AlphaGo era, before China pushed even more researchers into the field. In the coming years, a whole new wave of young Ph.D. students will bring Chinese AI research to a new level.

  And these contributions haven’t just been about piling up papers and citations. Researchers in the country have produced some of the most important advances in neural networks and computer vision since the arrival of deep learning. Many of these researchers emerged out of Microsoft Research China, an institution that I founded in 1998. Later renamed Microsoft Research Asia, it went on to train over five thousand AI researchers, including top executives at Baidu, Alibaba, Tencent, Lenovo, and Huawei.

  In 2015, a team from Microsoft Research Asia blew the competition out of the water at the global image-recognition competition, ImageNet. The team’s breakthrough algorithm was called ResNet, and it identified and classified objects from 100,000 photographs into 1,000 different categories with an error rate of just 3.5 percent. Two years later, when Google’s DeepMind built AlphaGo Zero—the self-taught successor to AlphaGo—they used ResNet as one of its core technological building blocks.

  The Chinese researchers behind ResNet didn’t stay at Microsoft for long. Of the four authors of the ResNet paper, one joined Yann LeCun’s research team at Facebook, but the other three have founded and joined AI startups in China. One of those startups, Face++, has quickly turned into a world leader in face- and image-recognition technology. At the 2017 COCO image-recognition competition, the Face++ team took first place in three of the four most important categories, beating out the top teams from Google, Microsoft, and Facebook.

  To some observers in the West, these research achievements fly in the face of deeply held beliefs about the nature of knowledge and research across political systems. Shouldn’t Chinese controls on the internet hobble the ability of Chinese researchers to break new ground globally? There are valid critiques of China’s system of governance, ones that weigh heavily on public debate and research in the social sciences. But when it comes to research in the hard sciences, these issues are not nearly as limiting as many outsiders presume. Artificial intelligence doesn’t touch on sensitive political questions, and China’s AI scientists are essentially as free as their American counterparts to construct cutting-edge algorithms or build profitable AI applications.

  But don’t take it from me. At a 2017 conference on artificial intelligence and global security, former Google CEO Eric Schmidt warned participants against complacency when it came to Chinese AI capabilities. Predicting that China would match American AI capabilities in five years, Schmidt was blunt in his assessment: “Trust me, these Chinese people are good. . . . If you have any kind of prejudice or concern that somehow their system and their educational system is not going to produce the kind of people that I’m talking about, you’re wrong.”

  THE SEVEN GIANTS AND THE NEXT DEEP LEARNING

  But while the global AI research community has blossomed into a fluid and open system, one component of that ecosystem remains more closed off: big corporate research labs. Academic researchers may rush to share their work with the world, but public technology companies have a fiduciary responsibility to maximize profits for their shareholders. That usually means less publishing and more proprietary technology.

  Of the hundreds of companies pouring resources into AI research, let’s return to the seven that have emerged as the new giants of corporate AI research—Google, Facebook, Amazon, Microsoft, Baidu, Alibaba, and Tencent. These Seven Giants have, in effect, morphed into what nations were fifty years ago—that is, large and relatively closed-off systems that concentrate talent and resources on breakthroughs that will mostly remain “in house.”

  The seals around corporate research are never airtight: team members leave to found their own AI startups, and some groups like Microsoft Research, Facebook AI Research, and DeepMind still publish articles on their most meaningful contributions. But broadly speaking, if one of these companies makes a unique breakthrough—a trade secret that could generate massive profits for that company alone—it will do its best to keep a lid on it and will try to extract maximum value before the word gets out.

  A groundbreaking discovery occurring within one of these closed systems poses the greatest threat to the world’s open AI ecosystem. It also threatens to stymie China in its goal of becoming a global leader in AI. The way things stand today, China already has the edge in entrepreneurship, data, and government support, and it’s rapidly catching up to the United States in expertise. If the technological status quo holds for the coming years, an array of Chinese AI startups will begin fanning out across different industries. They will leverage deep learning and other machine-learning technologies to disrupt dozens of sectors and reap the rewards of transforming the economy.

  But if the next breakthrough on the scale of deep learning occurs soon, and it happens within a hermetically sealed corporate environment, all bets are off. It could give one company an insurmountable advantage over the other Seven Giants and return us to an age of discovery in which elite expertise tips the balance of power in favor of the United States.

  To be clear, I believe the odds are slightly against such a breakthrough coming out of the corporate behemoths in the coming years. Deep learning marked the largest leap forward in the past fifty years, and advances on this scale rarely come more than once every few decades. Even if such a breakthrough does occur, it’s more likely to emerge out of the open environment of academia. Right now, the corporate giants are pouring unprecedented resources into squeezing deep learning for all it’s worth. That means lots of fine-tuning of deep-learning algorithms and only a small percentage of truly open-ended research in pursuit of the next paradigm-shifting breakthrough.

  Meanwhile, academics find themselves unable to compete with industry in practical applications of deep learning because of the requirements for massive amounts of data and computing power. So instead, many academic researchers are following Geoffrey Hinton’s exhortation to move on and focus on inventing “the next deep learning,” a fundamentally new approach to AI problems that could change the game. That type of open-ended research is the kind most likely to stumble onto the next breakthrough and then publish it for all the world to learn from.

  GOOGLE VERSUS THE REST

  But if the next deep learning is destined to be discovered in the corporate world, Google has the best shot at it. Among the Seven AI Giants, Google—more precisely, its parent company, Alphabet, which owns DeepMind and its self-driving subsidiary Waymo—stands head and shoulders above the rest. It was one of the earliest companies to see the potential in deep learning and has devoted more resources to harnessing it than any other company.

  In terms of funding, Google dwarfs even its own government: U.S. federal funding for math and computer science research amounts to less than half of Google’s own R&D budget. That spending spree has bought Alphabet an outsized share of the world’s brightest AI minds. Of the top one hundred AI researchers and engineers, around half are already working for Google.

  The other half are distributed among the remaining Seven Giants, academia, and a handful of smaller startups. Microsoft and Facebook have soaked up substantial portions of this group, with Facebook bringing on superstar researchers like Yann LeCun. Of the Chinese giants, Baidu went into deep-learning research earliest—even trying to acquire Geoffrey Hinton’s startup in 2013 before being outbid by Google—and scored a major coup in 2014 when it recruited Andrew Ng to head up its Silicon Valley AI Lab. Within a year, that hire was showing outstanding results. By 2015, Baidu’s AI algorithms had exceeded human abilities at Chinese speech recognition. It was a great accomplishment, but one that went largely unnoticed in the United States. In fact, when Microsoft reached the same milestone a year later for English, the company dubbed it a “historic achievement.” Ng left Baidu in 2017 to create his own AI investment fund, but the time he spent at the company both testified to Baidu’s ambitions and strengthened its reputation for research.

  Alibaba and Tencent were relative latecomers to the AI talent
race, but they have the cash and data on hand to attract top talent. With WeChat serving as the all-in-one super-app of the world’s largest internet market, Tencent possesses perhaps the single richest data ecosystem of all the giants. That is now helping Tencent to attract and empower top-flight AI researchers. In 2017, Tencent opened an AI research institute in Seattle and immediately began poaching Microsoft researchers to staff it.

  Alibaba has followed suit with plans to open a global network of research labs, including in Silicon Valley and Seattle. Thus far, Tencent and Alibaba have yet to publicly demonstrate the results of this research, opting instead for more product-driven applications. Alibaba has taken the lead on “City Brains”: massive AI-driven networks that optimize city services by drawing on data from video cameras, social media, public transit, and location-based apps. Working with the city government in its hometown of Hangzhou, Alibaba is using advanced object-recognition and predictive transit algorithms to constantly tweak the patterns for red lights and alert emergency services to traffic accidents. The trial has increased traffic speeds by 10 percent in some areas, and Alibaba is now preparing to bring the service to other cities.

  While Google may have jumped off to a massive head start in the arms race for elite AI talent, that by no means guarantees victory. As discussed, fundamental breakthroughs are few and far between, and paradigm-shifting discoveries often emerge from unexpected places. Deep learning came out of a small network of idiosyncratic researchers obsessed with an approach to machine learning that had been dismissed by mainstream researchers. If the next deep learning is out there somewhere, it could be hiding on any number of university campuses or in corporate labs, and there’s no guessing when or where it will show its face. While the world waits for the lottery of scientific discovery to produce a new breakthrough, we remain entrenched in our current era of AI implementation.
