Know This


Edited by John Brockman


  For physicists, the most fundamental biological question relates to the basic physical principles behind life. How do the laws of physics, far from thermal equilibrium, lead to the spontaneous formation of matter that can self-organize and evolve into ever more complex structures? To answer this, we need to abstract the organizing principles of living systems from the platform of chemistry underlying the biology—and thus perhaps show that life on Earth is not a miraculous, chance event but an inevitable consequence of the laws of physics. Understanding why life occurs at all would let us predict confidently that life exists elsewhere and perhaps even tell us how it could be detected.

  This is important because of another discovery, which appeared online in the scientific journal Icarus with the title “Enceladus’s measured physical libration requires a global subsurface ocean,” by P. C. Thomas, et al. This, too, is a coming-of-age story, and recounts a triumph of human ingenuity. NASA sent a spacecraft to Saturn and for seven years it observed with exquisite accuracy the rotation of the moon Enceladus. Enceladus wobbles as it rotates. You probably know that if you have two eggs, one hard-boiled and the other not, you can tell which is which by spinning them and seeing what happens when you stop (try it!).

  The big news is that Enceladus is like the raw egg. It wobbles as if filled with liquid. There’s a worldwide ocean of water under its surface of solid ice—an ocean presumably kept above freezing by tidal friction and geothermal activity. Enceladus is one place in the solar system where we know there is a large body of warm water and geothermal activity, potentially capable of supporting life as we know it.

  The same wonderful spacecraft photographed fountains of water and vapor spurting from Enceladus’s south pole and has flown through them to see what molecules are present. Future missions to the Fountains of Enceladus will look specifically for life. I hope Q-Bio will be there too, at least in spirit, predicting what to look for given the moon’s geochemistry. And perhaps even predicting that we should confidently expect life everywhere we look.

  Mathematics and Reality

  Clifford Pickover

  Author, trilogy: The Math Book, The Physics Book, The Medical Book

  A recent headline in the journal Nature declared “Paradox at the heart of mathematics makes physics problem unanswerable.” 3 Quarks Daily weighed in with “Gödel’s incompleteness theorems are connected to unsolvable calculations in quantum physics.” Indeed, the degree to which mathematics describes, constrains, or makes predictions about reality is sure to be a fertile and important discussion topic for years or even centuries to come.

  In 1931, mathematician Kurt Gödel demonstrated that some statements are undecidable: impossible to prove either true or false within a given formal system. In his first incompleteness theorem, Gödel showed that there will always be statements about the natural numbers that are true but unprovable within the system. We now leap forward more than eighty years and learn that Gödel’s principle appears to make it impossible to calculate an important property of a material—namely, the gap between the lowest energy levels of its electrons. Although this finding concerns an idealized model of the atoms in a material, some quantum information theorists, such as Toby Cubitt, suggest that it limits the extent to which we can predict the behavior of certain real materials and particles.

  Even prior to this finding, mathematicians had discovered unlikely connections between prime numbers and quantum physics. In 1972, physicist Freeman Dyson and number theorist Hugh Montgomery noticed that the statistical distribution of the zeros along Riemann’s critical line of the zeta function mysteriously corresponds to the distribution of experimentally recorded energy levels in the nucleus of a large atom; the distribution of those zeros is, in turn, tied to the distribution of the prime numbers.
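The correspondence Dyson recognized can be stated precisely. In Montgomery’s pair-correlation conjecture, the suitably normalized spacings u between zeros of the zeta function follow the density

```latex
% Pair-correlation density of normalized spacings u between zeta zeros on the
% critical line; the identical density governs eigenvalue spacings of the large
% random Hermitian (GUE) matrices physicists use to model heavy nuclei.
R_2(u) = 1 - \left( \frac{\sin \pi u}{\pi u} \right)^{2}
```

which is exactly the eigenvalue statistic of random-matrix theory—the connection Dyson spotted in Montgomery’s formula.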

  Of course, there is a great debate as to whether mathematics is a reliable path to the truth about the universe and reality. Some suggest that mathematics is essentially a product of the human imagination and we simply shape it to describe reality.

  Nevertheless, mathematical theories have sometimes been used to predict phenomena that were not confirmed until years later. Maxwell’s equations, for example, predicted radio waves. Einstein’s field equations suggested that gravity would bend light and that the universe is expanding. Physicist Paul Dirac once noted that the abstract mathematics we study now gives us a glimpse of physics in the future. In fact, his equations predicted the existence of antimatter, which was subsequently discovered. Similarly, mathematician Nikolai Lobachevsky said that “there is no branch of mathematics, however abstract, which may not someday be applied to the phenomena of the real world.”

  Mathematics is often in the news, particularly as physicists and cosmologists make spectacular advances, even contemplating the universe as a wave function and speculating on the existence of multiple universes. Because the questions that mathematics touches on can be quite deep, we will continue to discuss the implications of the relationship between mathematics and reality perhaps for as long as humankind exists.

  Synthetic Learning

  Kevin Kelly

  Senior Maverick and cofounder, Wired; author, The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future

  Researchers at DeepMind, an AI company in London, recently reported that they taught a computer system how to learn to play forty-nine simple video games—not “how to play video games” but how to learn to play the games. This is a profound difference. Playing a video game, even one as simple as the 1970s classic game Pong, requires a suite of sophisticated perception, anticipation, and cognitive skills. A dozen years ago, no algorithms could perform these tasks, but today these game-playing codes are embedded in most computer games. When you play a 2015 video game, you’re usually playing against refined algorithms crafted by genius human coders. But rather than program this new set of algorithms to play a game, the DeepMind AI team programmed their machine to learn how to play. The algorithm (a deep neural network) started out with no success in the game and no skill or strategy and then adjusted its own internal connections as it played the game, rewarded for improving. The technical term is “reinforcement learning”: learning by trial and error, guided only by a reward signal. By the end of hundreds of rounds, the neural net could play the game as well as human players, sometimes better.
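The reward-driven loop can be sketched in miniature. The two-state toy game and all numbers below are invented for illustration; DeepMind’s system used a deep neural network rather than this tabular Q-learning, but the principle of improving from reward alone is the same:

```python
import random

random.seed(0)  # make this sketch deterministic

# Hypothetical two-state toy game, invented for illustration: taking action 1
# in state 0 wins (reward 1.0, episode ends); every other move gives no reward.
def step(state, action):
    if state == 0 and action == 1:
        return None, 1.0            # terminal, with reward
    return 1 - state, 0.0           # bounce to the other state, no reward

Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}   # value table, starts blank
alpha, gamma, epsilon = 0.5, 0.9, 0.1               # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state is not None:
        # epsilon-greedy: mostly exploit what has been learned, sometimes explore
        if random.random() < epsilon:
            action = random.choice((0, 1))
        else:
            action = max((0, 1), key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        best_next = 0.0 if next_state is None else max(Q[(next_state, a)] for a in (0, 1))
        # nudge the value estimate toward the observed reward plus future value
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

print(round(Q[(0, 1)], 3))  # the winning move is now valued near 1.0
```

No rules of the game were programmed in; the table of values is built entirely from the reward signal, which is the sense in which the machine learned to play rather than was taught.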

  This learning should not be equated with human intelligence. The mechanics of its learning are vastly different from how we learn. It won’t displace humans or take over the world. However, this kind of synthetic learning will grow in capabilities. The significant news is that learning—real, self-directed learning—can be synthesized. Once learning can be synthesized, it can be distributed into all kinds of ordinary devices and functions. It can enable self-driving cars to get better or medical diagnosing programs to improve with use.

  Learning, like many other attributes we thought only humans owned, turns out to be something we can program machines to do. Learning can be automated. While simple second-order learning (learning how to learn) was once rare and precious, it will now become routine and common. Just like tireless powerful motors and speedy communications a century ago, learning will quickly become the norm in our built world. All kinds of simple things will learn. Automated synthetic learning won’t make your oven as smart as you are, but it will make better bread.

  Very soon, smart things won’t be enough. Now that we know how to synthesize learning, we’ll expect all things to automatically improve as they’re used, just as DeepMind’s game learner did. Our surprise in years to come will be in the many unlikely places we’ll be able to implant synthetic learning.

  A Genuine Science of Learning

  Keith Devlin

  Mathematician; executive director, H-STAR Institute, Stanford University; author, The Man of Numbers: Fibonacci’s Arithmetic Revolution

  The education field today is much like medicine was in the 19th century—a human practice guided by intuition, experience, and occasionally inspiration. It took the development of modern biology and biochemistry in the early part of the 20th century to provide the solid underpinnings of today’s science of medicine.

  To me—a mathematician who became interested in mathematics education in the second half of my career—it seems we may at last be seeing the emergence of a genuine science of learning. Given the huge significance of education in human society, that would make it one of the most interesting and important of today’s science stories.

  At the risk of raising the ire of many researchers, I should note that I am not basing my assessment on the rapid growth in educational neuroscience—you know, the kind of study where a subject is slid into an fMRI machine and asked to solve math puzzles. Those studies are valuable, but at the present stage, at best, they provide tentative clues about how people learn and little that is specific about how to help people learn. (A good analogy would be trying to diagnose an engine fault in a car by moving a thermometer over the hood.) Someday educational neuroscience may provide a solid basis for education the way, say, the modern theory of genetics advanced medical practice. But not yet. Rather, the science of learning emerges from the possibilities Internet technology brings to the familiar experimental cognitive-science approach.

  The problem that has traditionally beset learning research has been its essential dependence on the individual teacher, which makes it nearly impossible to run the kinds of large-scale, control-group, intervention studies common in medicine. Classroom studies invariably end up as studies of the teacher as much as of the students, and often measure the effect of the students’ home environment rather than what goes on in the classroom.

  For instance, news articles often cite the large number of successful people who as children attended a Montessori school, a figure hugely disproportionate to the relatively small number of such schools. Now, it may well be that Montessori educational principles are good ones, but it’s also true that such schools are magnets for passionate, dedicated teachers, and the pupils who attend them do so because they have parents who decide to enroll their offspring in such a school and have already raised their children in a learning-rich home environment.

  Internet technology offers an opportunity to carry out medical-research-like, large-scale control-group studies of classroom learning which can significantly mitigate the teacher effect and home effect, allowing useful investigation of different educational techniques. Provided you collect the right data, Big Data techniques can detect patterns that cut across the wide range of teacher/teacher and family/family variation, allowing useful conclusions to be drawn.

  A crucial requirement is that a significant part of the actual learning be done in a digital environment, where every action can be captured. This is not easily achieved. The vast majority of educational software products operate around the edges of learning: providing the learner with information; asking questions and capturing their answers (in a machine-actionable, multiple-choice format); and handling course logistics with a learning management system.

  What is missing is any insight into what is actually going on in the student’s mind—something that can be very different from what the evidence shows. Mathematics learning offers a famous illustration from several decades ago, in a study now referred to as “Benny’s Rules”: a child who had aced a progressive battery of programmed learning cycles was found to have constructed an elaborate internal, rule-based “mathematics” that let him pass all the tests with flying colors but was completely false and bore no relation to actual mathematics.

  But real-time, interactive software allows for much more than we have seen flooding out of such tech hotbeds as Silicon Valley. To date, some of the more effective uses from the viewpoint of running large-scale, comparative-learning studies have been by way of learning video games—so-called game-based learning. (It remains an open question how significant the game element is in terms of learning outcomes.)

  In elementary through middle-school mathematics learning (the research I am familiar with), what has been discovered, by a number of teams, is that digital learning interventions of as little as ten minutes a day can, within a month, result in significant learning gains when measured by a standardized test—with improvements of as much as 20 percent in some key thinking skills. That may sound like an educational magic pill; it almost certainly is not. It’s most likely an early sign that we know even less about learning than we thought we did.

  Part of what’s going on is that many earlier studies measured knowledge rather than thinking ability. The learning gains found in the studies I refer to are not knowledge acquired or algorithmic procedures mastered but high-level problem-solving ability. What’s exciting about these findings is that in today’s information- and computation-rich environment those human problem-solving skills are now at a premium.

  Like any good science, and in particular any new science, this work has generated far more research questions than it has answered. Indeed, it is too early to say whether it has answered any questions. Rather, as of now we have a scientifically sound method to conduct experiments at scale, some suggestive early results, and a long and growing list of research questions—all testable. Looks to me like we’re about to see the emergence of a genuine science of learning.

  Bayesian Program Learning

  John C. Mather

  Senior astrophysicist, Observational Cosmology Laboratory, NASA’s Goddard Space Flight Center; recipient, 2006 Nobel Prize in physics; co-author (with John Boslough), The Very First Light

  You may not like it! But artificial intelligence jumped a bit closer in 2015 with the development of Bayesian program learning, described by Lake, Salakhutdinov, and Tenenbaum in Science (“Human-level concept learning through probabilistic program induction”). It’s news because for decades I’ve been hearing about how hard it is to achieve artificial intelligence, and the most successful methods have used brute force. Methods based on understanding the symbols and logic of things and language have had a tough time. The challenge is to invent a computer representation of complex information and then enable a machine to learn that information from examples and evidence.

  Lake et al. give a mathematical framework, an algorithm, and a computer code that implements it, and their software has learned to read 1,623 handwritten characters in fifty languages as well as a human being. They write: “Concepts are represented as simple probabilistic programs—that is, probabilistic generative models expressed as structured procedures in an abstract description language.” Also, a concept can be built up by re-using parts of other concepts or programs. The probabilistic approach handles the imprecision both of definitions and examples. (Bayes’ theorem tells us how to compute the probability of something complicated if we know the probabilities of various smaller things that go into the complicated thing.) Their system can learn very quickly, sometimes in one shot or from a few examples, in a humanlike way, and with humanlike accuracy. This ability is in dramatic contrast to competing methods depending on immense data sets and simulated neural networks, which are always in the news.
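The parenthetical on Bayes’ theorem can be made concrete with a small calculation. The two candidate concepts, priors, and likelihoods below are invented for illustration and are not the Lake et al. model; only the mechanics of the update come from Bayes’ theorem:

```python
# Toy Bayes'-theorem update with invented numbers (not the Lake et al. model).
# Two candidate character concepts could explain one observed stroke pattern.
priors = {"concept_A": 0.5, "concept_B": 0.5}        # P(concept), before seeing data
likelihoods = {"concept_A": 0.8, "concept_B": 0.2}   # P(stroke | concept)

# Bayes' theorem: P(concept | stroke) = P(stroke | concept) * P(concept) / P(stroke)
unnormalized = {c: likelihoods[c] * priors[c] for c in priors}
evidence = sum(unnormalized.values())                # P(stroke), the normalizer
posterior = {c: p / evidence for c, p in unnormalized.items()}

print(posterior["concept_A"])  # ≈ 0.8: one observation shifts belief toward concept A
```

A single observation already concentrates belief on the better-fitting concept, which is the flavor of the one-shot, humanlike learning the authors report.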

  So now there are many new questions: How general is this approach? How much structure do humans have to give it to get it started? Could it really be superior in the end? Is this how living intelligent systems work? How could we tell? Can this computer system grow enough to represent complex concepts important to humans in daily life? Where are the first practical applications?

  This is a long-term project, without any obvious limits to how far it can go. Could this method be so efficient that it doesn’t take a super-duper supercomputer to achieve, or at least represent, artificial intelligence? Insects do very well with a tiny brain, after all. More generally, when do we get accurate transcriptions of multi-person conversations, instantaneous machine translation, scene recognition, face recognition, self-driving cars, self-directed drones safely delivering packages, machine understanding of physics and engineering, machine representation of biological concepts, and machine ability to read the Library of Congress and discuss it in a philosophy or history class? When will my digital assistant really understand what I want to do? Is this how the intelligent Mars rover will hunt for signs of life on Mars? How about military offense and defense? How could this system implement Asimov’s three laws of robotics to protect humans from robots? How would you know whether to trust your robot? When will people be obsolete?

  I’m sure many people are already working on all these questions. I see opportunities for mischief, but the defense against the dark arts will push rapid progress, too. I am both thrilled and frightened.

  FSM (Feces-Standard Money)

  Jaeweon Cho

  Professor, Environmental Engineering, UNIST, Ulsan, Republic of Korea

  We are facing problems from two of our great inventions: money and flushing the toilet.

  We all use money, but we can be isolated from money at the same time. Money is one of the greatest human inventions, but it may also be among the worst ever created.

  Our present monetary system has nothing to do with anything that comes from human beings. While we can do many things with money in our modern societies, there are no significant connections between money and ourselves. Thus, it can be hypothesized that whenever we use money, we are isolating ourselves from the world.

  Flushing the toilet—a second great invention—also has both positive and negative aspects. While it deals effectively with issues of hygiene, when we flush the toilet we are flushing our excretion into the natural environment, and this leads to severe problems.

 
