Know This


Edited by John Brockman


  The goals of deep science stand in contrast to those of broad science. Broad science seeks to map all nodes within a level of analysis as well as links between them (for example, all neurons and their connections in a model organism, like a worm). Comprehensive characterization is a necessary step in mapping the landscapes of new data produced by novel techniques. Examples of broad-science approaches include connectomic attempts to characterize all brain cells in a circuit, or computational efforts to digitally model all circuit components. Broad-science initiatives implicitly assume that by fully characterizing a single level of analysis, a better understanding of higher-order functions will emerge. Thus, a single expert at one level of analysis can advance the field through persistent application of relevant methods.

  Due to more variables, methods, and collaborators, deep-science approaches pose greater coordination challenges than broad-science approaches. Which nodes to target or levels to link might not be obvious at first and might require many rounds of research. Although neuroscientists have long distinguished different levels of analysis, they have often emphasized one level to the exclusion of others, or assumed that links across levels were arbitrary and thus unworthy of study. New techniques, however, have raised possibilities for testing links across levels. Thus, one deep-science strategy might involve targeting links that causally connect ascending levels of analysis. For instance, recent evidence indicates that optogenetic stimulation of midbrain dopamine neurons (the hardware level) increases fMRI activity in the striatum (the process level), which predicts approach behavior (the goal level) in rats and humans.
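
  The logic of such a cross-level test can be made concrete with a toy mediation analysis. The sketch below (Python, simulated data; the variable names and effect sizes are hypothetical stand-ins, not the cited rat and human results) asks whether a process-level signal statistically carries the effect of a hardware-level manipulation on a goal-level behavior.

```python
# Toy mediation analysis of a cross-level link; all data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 200

stim = rng.integers(0, 2, n).astype(float)       # hardware level: stimulation on/off
striatum = 0.8 * stim + rng.normal(0, 1, n)      # process level: fMRI signal
approach = 0.6 * striatum + rng.normal(0, 1, n)  # goal level: behavior

def ols(X, y):
    """Least-squares coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

total = ols(stim, approach)[1]  # hardware -> goal, ignoring the middle level
direct = ols(np.column_stack([stim, striatum]), approach)[1]  # mediator held fixed
print(f"total={total:.2f}  direct={direct:.2f}  mediated={total - direct:.2f}")
```

  If the direct effect shrinks toward zero once the process-level signal is held fixed, the middle level is statistically carrying the link between stimulation and behavior.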

  While deep-science findings are not yet news, I predict they soon will be. Deep science and broad science are necessary complements, but broad-science approaches currently dominate. By linking levels of analysis, deep-science approaches may more rapidly translate basic neuroscience knowledge into behavioral applications and healing interventions—which should be good news for all.

  A World That Counts

  Alex (Sandy) Pentland

  Toshiba Professor of Media, Arts, and Sciences, MIT; director, MIT Human Dynamics Lab and the Connection Science program; author, Social Physics

  In 2014 a group of Big Data scientists (including myself), representatives of Big Data companies, and the heads of National Statistical Offices from nations in both the Northern and Southern Hemispheres met at United Nations headquarters and plotted a revolution. We proposed that all of the nations of the world begin to measure poverty, inequality, injustice, and sustainability in a scientific, transparent, accountable, and comparable manner. Surprisingly, this proposal was approved by the U.N. General Assembly in 2015, as part of the 2030 Sustainable Development Goals.

  This apparently innocuous agreement is informally known as the Data Revolution within the U.N., because for the first time there is an international commitment to discover and tell the truth about the state of the human family as a whole. Since the beginning of time, most people have been isolated, invisible to government, and without information about or input to governmental health, justice, education, or development policies. But in the last decade this has changed. As our U.N. Data Revolution report, titled A World That Counts, states:

  Data are the lifeblood of decision-making and the raw material for accountability. Without high-quality data providing the right information on the right things at the right time, designing, monitoring and evaluating effective policies becomes almost impossible. New technologies are leading to an exponential increase in the volume and types of data available, creating unprecedented possibilities for informing and transforming society and protecting the environment. Governments, companies, researchers and citizen groups are in a ferment of experimentation, innovation and adaptation to the new world of data, a world in which data are bigger, faster and more detailed than ever before. This is the data revolution.

  More concretely, the vast majority of humanity now has a two-way digital connection that can send voice, text, and, most recently, images and digital sensor data, because cell-phone networks have spread nearly everywhere. Information is suddenly potentially available to everyone. The Data Revolution combines this enormous new stream of data about human life and behavior with traditional data sources, enabling a new science of “social physics,” which can let us detect and monitor changes in the human condition and provide precise, nontraditional interventions to aid human development.
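
  To make the idea tangible, here is a deliberately simple sketch (Python; the districts, columns, and numbers are all hypothetical) of the basic move: join an aggregated, anonymized phone-activity stream to a traditional survey source and check whether the behavioral signals track the survey ground truth.

```python
# Hypothetical example: combining a new data stream with traditional statistics.
import pandas as pd

phone = pd.DataFrame({
    "district": ["A", "B", "C", "D"],
    "calls_per_capita": [12.1, 3.4, 8.7, 1.9],      # from cell-network logs
    "contact_diversity": [0.81, 0.42, 0.66, 0.35],  # entropy of contact lists
})
survey = pd.DataFrame({
    "district": ["A", "B", "C", "D"],
    "poverty_rate": [0.12, 0.48, 0.22, 0.61],       # traditional survey measure
})

merged = phone.merge(survey, on="district")
# Do the behavioral signals co-vary with the survey ground truth?
print(merged.corr(numeric_only=True)["poverty_rate"])
```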

  Why would anyone believe that anything will actually come from a U.N. General Assembly promise that the National Statistical Offices of the member nations will measure human development openly, uniformly, and scientifically? It is not because anyone hopes that the U.N. will manage or fund the measurement process. Instead, we believe that uniform scientific measurement of human development will happen because international development donors are finally demanding scientifically sound data to guide aid dollars and trade relationships.

  Moreover, once reliable data about development start becoming familiar to business people, supply chains and private investment will begin paying attention. A nation with poor measures of justice or inequality normally also has higher levels of corruption, and a nation with a poor record in poverty or sustainability normally also has a poor record of economic stability. As a consequence, nations with low measures of development are less attractive to business than nations with similar costs but better human-development numbers.

  Historically we have been blind to the living conditions of the rest of humanity; violence or disease could spread to pandemic proportions before the news would make it to the ears of central authorities. We are now beginning to be able to see the condition of all of humanity with unprecedented clarity. Never again should it be possible to say “We didn’t know.” No one should be invisible. This is the world we want—a world that counts.

  Programming Reality

  Neil Gershenfeld

  Physicist; director, MIT’s Center for Bits and Atoms; author, The Nature of Mathematical Modeling

  The most notable scientific news story in 2015 was not obviously about science. What was apparent was the coverage of diverging economic realities. Much of the world struggled with income inequality, persistent unemployment, stagnant growth, and budgetary austerity, amid record corporate profits and a growing concentration of wealth. In turn, this gulf fed the noisy emergence of far-right and far-left political movements promising a return to a better time decades (or centuries) ago. And these drove a range of conflicts whose common thread was that they occurred in failing and failed economies.

  So what do all these dire news stories have to do with science? They share an implicit syllogism so obvious it’s never mentioned: Opportunity comes from creating jobs because jobs create income, and inequality is due to the lack of income. That’s what’s no longer true. The unseen scientific story is the breaking of the historical relationship between work and wealth by removing the boundary between the digital and physical worlds.

  Some discoveries arrive as an event, like the flash of a lightbulb; some are best understood in retrospect as the accumulation of a body of work, where the advance is to take it seriously. This is one of those. Digitizing communication and computation required a few decades each, leading to a revolution in how knowledge is created and shared. The coverage now of 3D printing and the maker movement is only the visible tip of a much bigger iceberg, digitizing not just design descriptions for computer-controlled manufacturing machines (which is decades old) but also the designs themselves by specifying the assembly of digital materials.

  Life is based on a genetic code that determines the placement of twenty standard amino acids; that was discovered (by molecular biology) a few billion years ago. We’re now learning how to apply this insight beyond molecular biology; emerging research is replacing processes that continuously deposit or remove materials with ones that code the reversible construction of discrete building blocks. This is being done across disciplines and length scales, from atomically precise manufacturing to whole-genome synthesis of living cells to the three-dimensional integration of functional electronics to the robotic assembly of modular aircraft and spacecraft. Taken together, these add up to programming reality—turning data into things and things into data.
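
  The shift from continuous deposition to coded assembly fits in a few lines. The sketch below is a minimal, hypothetical data structure (not any lab’s actual software): a design is a reversible sequence of typed blocks placed on a lattice, so the design itself is data.

```python
# Minimal sketch of a "digital materials" design: discrete, typed blocks on
# a lattice, assembled and disassembled reversibly. Names are illustrative.
class DigitalAssembly:
    def __init__(self):
        self.blocks = {}  # (x, y, z) lattice site -> part type

    def place(self, pos, part):
        """Construction is coded: occupy one lattice site with one part."""
        if pos in self.blocks:
            raise ValueError(f"site {pos} already occupied")
        self.blocks[pos] = part

    def remove(self, pos):
        """Reversibility: take a block back out and recover the raw part."""
        return self.blocks.pop(pos)

    def to_code(self):
        """The design itself is data: a list of placement instructions."""
        return sorted(self.blocks.items())

asm = DigitalAssembly()
asm.place((0, 0, 0), "structural")
asm.place((1, 0, 0), "conductor")
asm.remove((1, 0, 0))   # error correction: a misplaced block is not waste
print(asm.to_code())    # [((0, 0, 0), 'structural')]
```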

  Returning to the news stories from 2015: Going to work commonly means leaving home to travel to somewhere you don’t want to be, to do something you don’t want to do, producing something for someone you’ll never see, to get money to pay for something you want. What if you could instead just make what you want? In the same way that digitizing computing turned information into a commodity, digitizing fabrication reduces the cost of producing something to the incremental cost of its raw materials.

  In the largest-ever gathering of heads of state, the Sustainable Development Goals were launched at the U.N. in 2015. These target worthy aims, including ending poverty and hunger, ensuring access to healthcare and energy, building infrastructure, and reducing inequality. Left unsaid is how to accomplish these goals, which will require spending vast amounts of money. But development needn’t recapitulate the Industrial Revolution; just as developing countries have skipped over landlines and gone right to mobile phones, mass manufacturing with global supply chains can be replaced with sustainable local on-demand fabrication of all the ingredients of a technological civilization. This is a profound challenge, but it’s one with a clear research roadmap and is the scientific story behind the news.

  Pointing Is a Prerequisite for Language

  N. J. Enfield

  Professor of linguistics, University of Sydney; research associate, Language and Cognition Group, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; author, Natural Causes of Language

  Research in developmental and comparative psychology has discovered that the humble pointing gesture is a key ingredient of the capacity for developing and using human language, and indeed for the very possibility of human social interaction.

  Pointing gestures seem simple. We use them all the time. I might point when I give someone directions to the station, when I indicate which loaf of bread I want to buy, or when I show you where you have spinach stuck in your teeth. We often accompany such pointing gestures with words, but for infants who cannot yet talk, these gestures can work on their own.

  Infants begin to communicate by pointing at about nine months of age; it’s a year before they can produce even the simplest sentences. Careful experimentation has established that prelinguistic infants can use pointing gestures to ask for things, to help others by pointing things out to them, and to share experiences with others by drawing attention to things they find interesting and exciting.

  Pointing does not just manipulate the other’s focus of attention; it momentarily unites two people through a shared focus on something. With pointing, we do not just look at the same thing, we look at it together. This is a particularly human trick, and arguably what ultimately makes social and cultural institutions possible. Being able to point and to comprehend the pointing gestures of others is crucial to achieving “shared intentionality,” the ability to build relationships through the sharing of perceptions, beliefs, desires, and goals.

  Comparative psychology finds that pointing (in its full-blown form) is unique to our species. Few nonhuman species seem able to comprehend pointing (notably, domestic dogs can follow pointing, while our closest relatives among the great apes cannot), and there is little evidence of pointing occurring spontaneously between members of any species other than our own. Apparently only humans have the social-cognitive infrastructure needed to support the kind of cooperative and prosocial motivations that pointing gestures presuppose.

  This suggests a new place to look for the foundations of human language. While research on language in cognitive science has long focused on its logical structure, the news about pointing suggests an alternative: that the essence of language is found in our capacity for the communion of minds through shared intentionality. At the center of it is the deceptively simple act of pointing, an act that must be mastered before language can be learned at all.

  Macro-Criminal Networks

  Eduardo Salcedo-Albarán

  Philosopher; director, Scientific Vortex, Inc.

  Powerful computation today boosts our ability to perceive and understand the world. The more data we process and analyze, the more natural and social phenomena we discover and understand. Copious social data reveal global trends. For instance, analyzing masses of judicial information with current computational tools has exposed a new and complex social phenomenon: macro-criminal networks.

  Our brains can make sense only of social networks in which about 150 to 200 individuals participate. Known as “Dunbar’s number,” this is an approximation of the social-network size we can meaningfully interact with. Macro-criminal networks, which run far larger, therefore cannot be perceived or analyzed without computational power, algorithms, and the right concepts of social complexity.
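
  A toy computation makes the point, as sketched below (Python with networkx, entirely synthetic data): build a network of a thousand actors, far past Dunbar’s limit, and let an algorithm surface the brokers who bridge groups; no analyst could find them by inspection.

```python
# Synthetic illustration: finding brokers in a network too large to eyeball.
import networkx as nx

# 20 loosely coupled "communities" of 50 actors each -- 1,000 nodes in all,
# with messy links between groups rather than a single command hierarchy.
g = nx.relaxed_caveman_graph(20, 50, p=0.05, seed=42)
print(g.number_of_nodes(), "actors")

# The structurally important actors are the bridges between groups, not
# necessarily the best-connected members of any one group. Betweenness
# centrality (here a sampled estimate) ranks those brokers.
brokers = nx.betweenness_centrality(g, k=100, seed=42)
print("likely brokers:", sorted(brokers, key=brokers.get, reverse=True)[:5])
```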

  Unfortunately, we as a society lack tools, legislation, and enforcement mechanisms to confront these global, resilient, and decentralized structures, which are characterized by messy hierarchies and various types of leaders. Macro-criminal networks overwhelm most law-enforcement agents, who still search for criminal organizations with simple hierarchies run by full-time criminals and commanded by a single boss. This classic idea of “organized crime” is outdated and doesn’t reflect the complexity of the macro-criminal networks now being uncovered.

  Investigating and prosecuting crime today without the right concepts or computational tools for processing and analyzing enormous amounts of data is like studying galaxies with 17th-century telescopes. The greatest challenge in confronting macro-criminal networks is not the adoption of powerful computers or the application of deep learning but modifying the mindset of scholars, investigators, prosecutors, and judges. Legislation focused on one victim and one victimizer leads to flawed analysis and insufficient enforcement against such systemic crimes as the corruption in Latin America and West Africa, human trafficking in Eastern Europe, and forced displacement in Central Africa. As a consequence, the structures supporting those crimes worldwide are overlooked.

  Crime in its various expressions is always news. From corruption to terrorism and trafficking activities, crime affects our way of life while hampering development in various countries. Understanding the phenomenon of huge, resilient, and decentralized criminal macro-structures is critical for achieving global security. We need to commit and allocate the right scientific, institutional, and economic resources to deal with it.

  Virtual Reality Goes Mainstream

  Thomas Metzinger

  Philosopher, Johannes Gutenberg-Universität Mainz; editor, Open-Mind.net; author, The Ego Tunnel

  Suppose you have just popped one of those new hedonic enhancement pills for virtual environments. Not the dramatic, illegal stuff, just the legal pharmaceutical enhancement that comes as a direct-to-consumer advertising gift with the gadget itself. It has the great advantage of blocking nausea and thereby stabilizing the real-time, fMRI-based neurofeedback loop into your own virtual reality (allowing you to interact with the unconscious causes of your own feelings directly, as if they were now part of an external environment), while at the same time nicely minimizing the risk of depersonalization disorder and Truman Show delusion. These pills also reliably prevent addiction and the diminished sense of agency upon reentering the physical body following long-term immersion—at least the package leaflet says so. As you turn on the device, two of your “Selfbook-friends” are already there, briefly flashing their digital subject identifiers. Their avatars immediately make eye contact and smile at you, and you automatically smile back, while you feel the pill taking effect. Fortunately, they can see neither the new Immersive Porn trial version nor the expensive avatar that represents your Compassionate Self. You only use that twice a week in your psychotherapy sessions. The NSA, however, sees everything.

  In 2016, VR will finally break through at the mass-consumer level. Moreover, users will soon be able to toggle between virtual, augmented, and substitutional reality, experiencing virtual elements intermixed with their “actual” physical environment, or an omnidirectional video feed giving them the illusion of being in a different location in space and/or time. Oculus Rift, Zeiss VR One, Sony PlayStation VR, HTC Vive, Samsung’s Galaxy Gear VR, and Microsoft’s HoloLens are just the beginning, and it’s hard to predict the psychosocial consequences over the next two decades, as accelerating technological development is driven by massive market forces rather than by scientists. There will be great benefits (just think of the clinical applications) and a host of new ethical issues, ranging from military applications to data protection; for example, “kinematic fingerprints” generated by motion-capture systems, avatar ownership, and individuation will become important questions for regulatory agencies to consider.
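
  The “kinematic fingerprints” worry is easy to make concrete. The sketch below (simulated numbers; the features, names, and matching rule are hypothetical) shows how even a handful of summary statistics of head motion could re-identify a user across sessions.

```python
# Hypothetical re-identification from motion-capture summary statistics.
import numpy as np

rng = np.random.default_rng(7)

# Each user has a stable motion "style" (4 illustrative features).
users = {u: rng.normal(0, 1, size=4) for u in ["alice", "bob", "carol"]}

def session(style):
    """One VR session reduced to noisy per-session motion statistics."""
    return style + rng.normal(0, 0.1, size=4)

# Enrollment: average a few labeled sessions into a per-user template.
templates = {u: np.mean([session(s) for _ in range(5)], axis=0)
             for u, s in users.items()}

# An unlabeled session is matched to the nearest template.
probe = session(users["bob"])
guess = min(templates, key=lambda u: np.linalg.norm(templates[u] - probe))
print("re-identified as:", guess)  # almost always "bob"
```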

  The real news, however, may be that the general public will gradually acquire a new and intuitive understanding of what their conscious experience really is and what it always has been. VR is the representation of possible worlds and possible selves, with the aim of making them appear real—ideally, by creating a subjective sense of “presence” in the user. Interestingly, some of our best theories of the human mind and conscious experience describe such experience in a similar way. Leading theoretical neurobiologists, like Karl Friston, and eminent philosophers, like Jakob Hohwy and Andy Clark, describe it as the constant creation of internal models of the world, virtual neural representations of reality which express probability density functions and work by continuously generating hypotheses about the hidden causes of sensory input, minimizing their prediction error. In 1995, Finnish philosopher Antti Revonsuo pointed out that conscious experience is a virtual model of the world—a dynamic internal simulation which in standard situations cannot be experienced as a virtual model because it is phenomenally transparent: We “look through it” as if we were in direct and immediate contact with reality.

  What is historically new, and what creates not only novel psychological risks but also entirely new ethical and legal dimensions, is that one virtual reality gets ever more deeply embedded into another virtual reality. The conscious mind, which has evolved under specific conditions and over millions of years, now gets causally coupled and informationally woven into technical systems for representing possible realities. Increasingly, consciousness is not only culturally and socially embedded but also shaped by a specific technological niche that, over time, acquires rapid, autonomous dynamics and new properties. This creates a complex convolution, a nested form of information flow in which the biological mind and its technological niche influence each other in ways we are just beginning to understand.
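
  The predictive-processing picture sketched above reduces to a toy loop. The illustration below is not a model drawn from Friston, Hohwy, or Clark, just the bare scheme: hold an estimate of a hidden cause and nudge it along the prediction error until the internal model matches the sensory stream.

```python
# Toy predictive processing: infer a hidden cause by minimizing prediction error.
import numpy as np

rng = np.random.default_rng(1)
hidden_cause = 3.0   # the world's true hidden state (never observed directly)
estimate = 0.0       # the "internal model" of that cause
rate = 0.1           # how strongly each error updates the model

for _ in range(200):
    sensory_input = hidden_cause + rng.normal(0, 0.5)  # noisy observation
    prediction_error = sensory_input - estimate        # hypothesis vs. input
    estimate += rate * prediction_error                # descend the error

print(f"inferred cause: {estimate:.2f} (true value {hidden_cause})")
```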

 
