
Visions of the Future




Among his many honors, Ray received the National Medal of Technology, was inducted into the National Inventors Hall of Fame, holds twenty honorary doctorates, and has received honors from three U.S. presidents.

Ray has written five national best-selling books, including the New York Times best sellers The Singularity Is Near at http://amzn.to/1D6rFBF and How To Create A Mind at http://amzn.to/175bCtk. He is a Director of Engineering at Google, heading up a team developing machine intelligence and natural language understanding.

This article was originally written and published at http://www.kurzweilai.net/the-significance-of-watson before the tournament, which Watson went on to win.

IBM’s “Watson” DeepQA program, running on IBM Power7 servers.

  (Image: IBM T.J. Watson Research Labs)

In The Age of Intelligent Machines1, which I wrote in the mid-1980s, I predicted that a computer would defeat the world chess champion by 1998. My prediction was based on the predictable exponential growth of computing power (an example of what I now call the “law of accelerating returns”2) and on an estimate of the level of computing needed to achieve a chess rating of just under 2800 (sufficient to defeat any human, although lately the best human chess ratings have inched above 2800).

I also predicted that when that happened we would either think better of computer intelligence, think worse of human thinking, or think worse of chess, and that if history were a guide, we would downgrade chess.

Deep Blue defeated Garry Kasparov in 1997, and indeed we were immediately treated to rationalizations that chess was not really exemplary of human thinking after all. Commentators pointed out that Deep Blue’s feat just showed how good computers were at high-speed logical analysis and that chess was merely a matter of managing the combinatorial explosion of moves and countermoves. Humans, on the other hand, could deal with the subtleties and unpredictable complexities of human language.

I do not entirely disagree with this view of computer game playing. The early success of computers with logical thinking, even at such tasks as proving mathematical theorems, showed what computers were good at. Recall that in the 1950s the Logic Theorist, a forerunner of CMU’s “General Problem Solver,” found a proof of a theorem from Russell and Whitehead’s Principia Mathematica more elegant than the original, one of the early successes of the AI field that led to premature confidence in AI.

  Computers could keep track of vast logical structures and remember enormous databases with great accuracy. Search engines such as Google and Bing continue to illustrate this strength of computers.

Indeed, no human can do what a search engine does, but computers have still not shown an ability to deal with the subtlety and complexity of language. Humans, on the other hand, have been unique in our ability to think in a hierarchical fashion, to understand the elaborate nested structures in language, to put symbols together to form an idea, and then to use a symbol for that idea in yet another such structure. This is what sets humans apart.

That is, until now. Watson is a stunning example of the growing ability of computers to successfully invade this supposedly unique attribute of human intelligence. If you watch Watson’s performance, it appears to be at least as good as the best “Jeopardy!” players at understanding the nature of the question (or I should say the answer, since “Jeopardy!” presents the answer and asks for the question, which I always thought was a little tedious). Watson is then able to combine this human level of language understanding with a computer’s innate ability to accurately master a vast corpus of knowledge.

  I’ve always felt that once a computer masters a human’s level of pattern recognition and language understanding, it would inherently be far superior to a human because of this combination.

We don’t know yet whether Watson will win this particular tournament, but it won the preliminary round, and the point has been made regardless of the outcome. There were chess machines before Deep Blue that just missed defeating the world chess champion, but they kept getting better, and their passing the threshold of defeating the best human was inevitable. The same is now true with “Jeopardy!”

  Yes, there are limitations to “Jeopardy!” Like all games, it has a particular structure and does not probe all human capabilities, even within understanding language. Already commentators are beginning to point out the limitations of “Jeopardy!,” for example, that the short length of the queries limits their complexity.

  For those who would like to minimize Watson’s abilities, I’ll add the following. When human contestant Ken Jennings selects the “Chicks dig me” category, he makes a joke that is outside the formal game by saying “I’ve never said this on TV, ‘chicks dig me.’” Later on, Watson says, “Let’s finish Chicks Dig Me.” That’s also pretty funny and the audience laughs, but it is clear that Watson is clueless as to the joke it has inadvertently made.

However, Watson was never asked to make commentaries, humorous or otherwise, about the proceedings. It is clearly capable of dealing with a certain level of humor within the queries, and I believe that, if suitably programmed, it could also make appropriate and humorous comments about the situation it is in.

  It is going to be more difficult to seriously argue that there are human tasks that computers will never achieve. “Jeopardy!” does involve understanding complexities of humor, puns, metaphors and other subtleties. Computers are also advancing on a myriad of other fronts, from driverless cars (Google’s cars have driven 140,000 miles through California cities and towns without human intervention) to the diagnosis of disease.

  WATSON ON YOUR PC OR MOBILE PHONE?

Watson runs on 90 servers, although it does not go out to the Internet. When will this capability be available on your PC? It was only five years between Deep Blue in 1997, which was a specialized supercomputer, and Deep Fritz in 2002, which ran on eight personal computers and did about as well.

  This reduction in the size and cost of a machine that could play world-champion level chess was due both to the ongoing exponential growth of computer hardware and to improved pattern recognition software for performing the key move-countermove tree-pruning decision task. Computer price-performance is now doubling in less than a year, so 90 servers would become the equivalent of one in about seven years. Since a server is more expensive than a typical personal computer, we could consider the gap to be about ten years.
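As a back-of-the-envelope check on those figures, here is a minimal sketch; the one-year doubling period comes from the article, while the tenfold server-to-PC price gap is an illustrative assumption of mine, not a stated figure:

```python
import math

# The article's assumption: computer price-performance doubles in less
# than a year; take one year per doubling as a conservative figure.
servers = 90
doublings_to_one_server = math.log2(servers)   # ~6.5 doublings
print(f"~{doublings_to_one_server:.1f} years for 90 servers to fold into one")

# A server costs more than a typical PC. Assuming (hypothetically) a
# roughly 10x price gap adds log2(10) ~ 3.3 more doublings, which is
# how one lands on the article's "about ten years" for a single PC.
price_gap = 10
print(f"~{doublings_to_one_server + math.log2(price_gap):.1f} years to one PC")
```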

But the trend is definitely moving towards cloud computing, in which supercomputer capability will be available in bursts to anyone; in that case, Watson-like capability would be available to the average user much sooner. I do expect the type of natural language processing we see in Watson to show up in search engines and other knowledge retrieval systems over the next five years.

  PASSING THE TURING TEST

How does all of this relate to the Turing test? Alan Turing based his eponymous test entirely on written human language, reflecting his (in my view accurate) insight that human language embodies all of human intelligence. In other words, there are no simple language tricks that would enable a computer to pass a well-designed Turing test. A computer would need to actually master human levels of understanding to pass this threshold.

  Incidentally, properly designing a Turing test is not straightforward and Turing himself left the rules purposely vague. How qualified does the human judge need to be? How human does the judge need to be (for example, can he or she be enhanced with nonbiological intelligence)? How do we ensure that the human foils actually try to trick the judge?

How long should the sessions be? Mitch Kapor and I bet $20,000 ($10,000 each, with the proceeds to go to the charity of the winner’s choice) on whether a computer would pass a Turing test by 2029. I said yes and he said no. We spent considerable time negotiating the rules, which you can see at http://www.kurzweilai.net/a-wager-on-the-turing-test-the-rules.

What does this achievement with “Jeopardy!” tell us about the prospect of computers passing the Turing test? It certainly demonstrates the rapid progress being made on human language understanding. There are many other examples, such as CMU’s Read the Web project, whose NELL (Never Ending Language Learner)3 is currently reading documents on the Web and accurately understanding most of them.

  With computers demonstrating a basic ability to understand the symbolic and hierarchical nature of language (a reflection of the inherently hierarchical nature of our neocortex), it is only a matter of time before that capability reaches Turing-test levels. Indeed, if Watson’s underlying technology were applied to the Turing test task, it should do pretty well. Consider the annual Loebner Prize competition, one version of the Turing test. Last year, the best chatbot fooled the human judges 25 percent of the time, and the competition requires only a 30 percent level to pass.

Given that contemporary chatbots already do well in the Loebner competition, a system based on Watson technology would likely pass the Loebner threshold4. In my view, however, that threshold is too easy: such a system would not be likely to pass the more difficult test that Mitch Kapor and I defined. But the outlook for my bet, which does not come due until 2029, is looking pretty good.

It is important to note that part of engineering a system to pass a proper Turing test is that it will need to dumb itself down. In a movie I wrote and co-directed, The Singularity is Near, A True Story about the Future5, an AI named Ramona needs to pass a Turing test, and indeed she has this very realization. After all, if you were talking to someone over instant messaging and they seemed to know every detail of everything, you’d realize it was an AI.

What will be the significance of a computer passing the Turing test? If it is really a properly designed test, it would mean that this AI is truly operating at human levels, and I for one would then regard it as human. I’m expecting this to happen within two decades, but I also expect that when it does, observers will continue to find things wrong with it.

  By the time the controversy dies down and it becomes unambiguous that nonbiological intelligence is equal to biological human intelligence, the AIs will already be thousands of times smarter than us. But keep in mind that this is not an alien invasion from Mars. We’re creating these technologies to extend our reach. The fact that farmers in China can access all of human knowledge with devices they carry in their pockets is a testament to the fact that we are doing this already.

  Ultimately, we will vastly extend and expand our own intelligence by merging with these tools of our own creation.

  ENDNOTES

1. The Age of Intelligent Machines at http://www.kurzweilai.net/the-age-of-intelligent-machines-prologue-the-second-industrial-revolution.

2. Law of Accelerating Returns at http://lifeboat.com/ex/law.of.accelerating.returns.

3. NELL: Never-Ending Language Learning at http://rtw.ml.cmu.edu/rtw/.

4. Loebner Prize at http://aisb.org.uk/events/loebner-prize.

5. The Singularity is Near, A True Story about the Future at http://singularity.com/themovie/index.php.

  PROOF THAT THE END OF MOORE’S LAW IS NOT THE END OF THE SINGULARITY

  eric klien

  Eric is President of Lifeboat Foundation. Read his bio at http://lifeboat.com/ex/bios.eric.klien.

  The following was first published on our blog at http://lifeboat.com/blog/2014/12/proof-that-the-end-of-moores-law-is-not-the-end-of-the-singularity and reached #21 on reddit.

Samsung 850 Pro: a solution to the end of Moore’s Law.

During the last few years, the semiconductor industry has been having a harder and harder time miniaturizing transistors, the latest problem being Intel’s delayed roll-out of its new 14 nm process. The best way to confirm this slowdown in the progress of computing power is to try to run your current programs on a 6-year-old computer. You will likely have few problems, since computers have not sped up greatly during the past 6 years. If you had tried this experiment a decade ago, you would have found a 6-year-old computer close to useless, because Intel and others were then getting much greater performance gains per year than they are getting today.

Many are unaware of this problem because improvements in software, and the current trend of having software rely on specialized GPUs instead of CPUs, have made this slowdown in performance gains less evident to the end user. (The more specialized a chip is, the faster it runs.) But despite such workarounds, people are already changing their habits, such as upgrading their personal computers less often. Recently, people upgraded their ancient Windows XP machines only because Microsoft forced them to by discontinuing support for the still popular operating system. (Windows XP was the second most popular desktop operating system in the world the day after Microsoft ended all support for it, at which point it was a 12-year-old operating system.)

It would be unlikely that AIs would become as smart as we are by 2029, as Ray Kurzweil has predicted, if we depended on Moore’s Law alone to create the hardware for AIs to run on. But all is not lost. Previously, electromechanical technology gave way to relays, then to vacuum tubes, then to solid-state transistors, and finally to today’s integrated circuits. One possibility for a sixth paradigm to continue the exponential growth of computing is to go from 2D integrated circuits to 3D integrated circuits. There have been small incremental steps in this direction; for example, Intel introduced 3D tri-gate transistors with its first 22 nm chips in 2012. While these transistors are slightly taller than the previous generation’s, the performance gains from this technology were not great. (Intel is simply making its transistors taller and thinner; it is not stacking them on top of each other.)

But quietly this year, 3D technology has finally taken off. The recently released Samsung 850 Pro1, which uses 42 nm flash memory, is competitive with products that use 19 nm flash memory. Considering that, on a regular flat chip, a 42 nm cell is (42 × 42) / (19 × 19) = 4.9 times as big as a 19 nm cell, so that a planar 42 nm chip stores 4.9 times less data in the same area, how did Samsung pull this off? It used its new 3D V-NAND architecture, which stacks 32 cell layers on top of one another. It wouldn’t be that hard to go from 32 layers to 64, then to 128, and so on. Expect flash drives to have greater capacity than hard drives in a couple of years! (Hard drives are running into their own version of the end of Moore’s Law.) Note that by using 42 nm flash memory instead of 19 nm flash memory, Samsung is able to use bigger cells that can handle more read and write cycles.
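The density arithmetic behind Samsung’s move can be made explicit; this is a minimal sketch using only the cell sizes and layer counts given above:

```python
# Planar penalty: a 42 nm cell occupies (42/19)^2 the area of a 19 nm cell,
# so a flat 42 nm chip stores ~4.9x less data in the same footprint.
area_penalty = (42 * 42) / (19 * 19)
print(f"area penalty: {area_penalty:.2f}x")                 # ~4.88x

# Stacking 32 layers more than repays that penalty.
layers = 32
print(f"net density gain: {layers / area_penalty:.1f}x")    # ~6.6x

# Each doubling of the layer count doubles capacity without shrinking
# the cell, which is why 32 -> 64 -> 128 layers is a plausible roadmap.
for layers in (64, 128):
    print(f"{layers} layers: {layers / area_penalty:.1f}x")
```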

Samsung is not the only one with this 3D idea. For example, Intel has announced2 that it will be producing its own 32-layer 3D NAND chips in 2015. And 3D integrated circuits are, of course, not the only potential solution to the end of Moore’s Law; Google, for example, is getting into the quantum computer business, which is another possible solution.3 But there is a huge difference between a theoretical solution being tested in a lab somewhere and something you can buy on Amazon today.

Finally, to give you an idea of how fast things are progressing: a couple of months ago, Samsung’s best technology was based on 24-layer 3D MLC chips, and now Samsung has already announced4 that it is mass producing 32-layer 3D TLC chips, which hold 50% more data per cell than the 32-layer 3D MLC chips currently used in the Samsung 850 Pro.
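The 50% figure follows directly from bits per cell: MLC (multi-level cell) flash stores two bits per cell, while TLC (triple-level cell) stores three. A quick check:

```python
# MLC stores 2 bits per cell; TLC stores 3.
mlc_bits, tlc_bits = 2, 3
gain = tlc_bits / mlc_bits - 1
print(f"TLC holds {gain:.0%} more data per cell than MLC")   # 50%
```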

  The Singularity is near!

  ENDNOTES

1. The Samsung 850 Pro is available at http://amzn.to/1BifBPu.

2. The Intel announcement is at http://www.extremetech.com/computing/194911-intel-announces-32-layer-3d-nand-chips-plans-for-larger-than-10tb-ssds.

3. Google’s entry into quantum computers is discussed at http://www.technologyreview.com/news/530516/google-launches-effort-to-build-its-own-quantum-computer/.

4. Learn about 32-layer 3D TLC chips at http://www.kitguru.net/components/memory/anton-shilov/samsung-confirms-mass-production-of-tlc-3d-v-nand-flash-memory/.

  THE FUTURE OF ENERGY:

  TOWARDS THE “ENERGULARITY”

  josé cordeiro, mba, phd

José (http://cordeiro.org) studied science at Universidad Simón Bolívar, Venezuela; engineering at the Massachusetts Institute of Technology, Cambridge; economics at Georgetown University, Washington; and management at INSEAD, France. He is chair of the Venezuela Node of The Millennium Project; founding faculty and energy advisor at Singularity University at NASA Research Park in Silicon Valley, California; founder of the World Future Society’s Venezuela Chapter; cofounder of the Venezuelan Transhumanist Association; and former director of the Club of Rome (Venezuela Chapter), the World Transhumanist Association, and the Extropy Institute.

  Abstract

Homo sapiens sapiens is the only species that has learned how to harness the power of fire. The conscious generation and use of external energy plays a unique role in our human and cultural evolution, from harnessing fire to developing nuclear fusion. Humanity has gone through several energy “waves,” advancing exponentially from wood to coal, to oil, to gas, and eventually to hydrogen/solar/nuclear in a continuous process of “decarbonization” and “hydrogenization” of our energy sources. The latest transition, from fossil and scarce fuels to more renewable and abundant energy sources, might not be easy, but it has already started.

  The creation of an Energy Network or “Enernet” will allow us to connect the whole world and to increase, not reduce, our energy consumption. With the Enernet, energy and power will become abundant and basically free, just like information and bandwidth are today thanks to the Internet. Storage considerations are also important, but new batteries and other advanced technologies will make the Enernet more resilient and create positive network effects. This is fundamental for improving the living standards of all people around the world and for moving into the next planetary transition: energy is essential for solving humanity’s needs on Earth and for exploring and colonizing the universe.

 
