Know This
This ode of Horace starts with the energetic advice not to ask questions that cannot be answered. It’s an eternal reminder, and not only for scientists; it is very old news that has to be repeated regularly. Science’s purpose is to discover right and good questions—indeed, often unasked questions—before trying to supply an answer. But what are criteria for right questions? How do we know that a question can be answered and does not belong to the realm of irrationality? How can a mathematician trust that a proof will be possible before spending years to find it? Apparently, the power of implicit knowledge, or intuition, is much stronger than we are inclined to believe. Albert Einstein is said to have remarked: “The intuitive mind is a sacred gift and the rational mind is a faithful servant. We have created a society that honors the servant and has forgotten the gift.”
In poetry we often find representations of such implicit knowledge and intuition with high scientific value, opening new windows of potential discoveries. If read (even better, spoken) with an open mind, poetry can serve as a bridge, an effortless link, between the different cultures. Thus, poetry does not belong to the humanities only (if at all); poems in all languages express anthropological universals and cultural specifics in a unique way and provide insights into human nature, the mode of thinking and experiencing, often shadowed by a castrated scientific language.
After his warning with respect to questions that cannot be answered, Horace suggests that one should simply accept reality (ut melius quidquid erit pati), and he gives the frustrating but good advice to bring our great hopes into a smaller space (spatio brevi spem longam reseces). This is hard to take; scientists always want to go beyond the limits of our mental power. But it is good to be reminded that our evolutionary heritage has dictated limits of reasoning and insights which must be accepted and should instill modesty.
Other cultures, too, have pointed out such limits. More than 2,000 years ago, Lao Tzu, in the Tao Te Ching, says: “To know about not knowing is the highest” (in pinyin with indication of the tones: zhi-1 bu-4 zhi-1, shang-4). To accept such an attitude is not easy, and it may be impossible to suppress the search for causality, as expressed by the French poet Paul Verlaine: “It is the greatest pain not to know why” (C’est bien la pire peine de ne savoir pourquoi).
Apparently, poets (of course not all of them) have some knowledge about our mental machinery which can guide scientific endeavors. But there is also a problem, which is language itself: Can poetry be translated? Can even scientific language be translated veridically? Of course not. Take the English translation, “Seize the present,” of Carpe diem. Does the English “present” cover equivalent connotations in Chinese, German, or any other language? The English “present” evokes different associations compared to the German Gegenwart or the Chinese xianzai. “Present” is associated with sensory representations, whereas Gegenwart has a more active flavor; the component warten refers either to “to take care of something” or “to wait for something,” and it is thus also past- and future-oriented. The Chinese xianzai is associated with the experience of existence in which something is accessible by its perceptual identity; it implies a spatial reference, indicating the here as the locus of experience, and it is also action-oriented. Although the different semantic connections are usually not thought of explicitly, they still may create a bias within an implicit frame of reference.
What follows? We must realize that the language one uses, including that of scientific discourse, is not neutral with respect to what one wants to express. But this is not a limitation; if one knows several languages—and a scientist knows several languages anyway—it is a rich source of creativity. Some sentences, however, do not suffer in translation; they are easily understood and they last forever. When Horace says that while we talk, the envious time is running away (dum loquimur, fugerit invida aetas), one is reminded of scientific (and political) discussions full of words with not too much content—not exactly new news.
Linking the Levels of Human Variation
Elizabeth Wrigley-Field
Assistant professor, Department of Sociology, University of Minnesota-Twin Cities
We are rewriting the story of human populations with data that depict individuals simultaneously from above and below: at scales geographic and genetic, from social networks to microbial networks. What is new is not the aspiration to integrate each level of human experience but the data that make it possible.
When we have only one kind of data, we can find only one kind of answer. But in the social sciences, explanations are like ecosystems. The presence of people—and their leftovers—enables mice to live in a house; the presence of a cat constrains them. Just as a species’ niche expands or contracts with the presence of other species, explanatory factors, too, are constrained and enabled by the presence of other factors. Data that combine disparate scales reveal this expansion and contraction of explanatory space.
Consider what makes someone smoke tobacco. We all know that in the United States, fewer people smoke today than fifty years ago, just before a major cultural and regulatory shift began. What the sociologist Jason Boardman and his colleagues have now shown is that whether someone smokes is more heavily influenced by their genes today than it was before smoking was stigmatized. In the 1960s, when every hostess had an ashtray and every stranger had a light, it didn’t take much to decide to light up; today, when nicotine prompts dirty looks, it often takes a powerful biological urge that afflicts us unequally. The changed culture makes room for our genes to determine whether we smoke; our genes limit the room for the cultural shift to change what we do. Data only on genes or only on the shift in norms would give us one kind of answer about why people smoke, but both together show us how each constrains the other.
Or consider the rise of antibiotic-resistant staph bacteria. The epidemiologist Diane Lauderdale and her colleagues are analyzing a particular cause of this deadly epidemic in Chicago: prison. Their work triangulates knowledge at the micro scale of how the bacterium passes from one person’s skin to another’s with knowledge at the macro scale that determines whose skin touches whose: how people move in and out of crowded jails, where they live when they leave, what sports they play. The result is increasingly realistic models of interaction between microbes and humans—not only as individuals or as populations but both at the same time.
This is the future of the population sciences: zooming simultaneously inside individuals, to their microbes or their genes, and outside them to their social norms, their neighborhoods, the laws regulating them. Data that zoom in both directions don’t just let us ask new questions—they let us ask a new type of question, one that embraces the contingent and contextual nature of human behavior. The social sciences vacillate between broad generalities (which usually turn out to be less general than they appear) and particularistic studies of specific settings. Only data that link the levels of human experience let us fill in the gloriously contingent middle. In the human sciences, the scope conditions are the story.
When you’re making a map, you don’t just want to know what goes inside the borders; you want to know where the borders are. Explanations should map the space of possibilities, and data that span the levels of human variation let us explore the borders.
Challenging the Value of a University Education
Steve Fuller
Philosopher; Auguste Comte Chair in Social Epistemology, University of Warwick, U.K.; co-author (with Veronika Lipinska), The Proactionary Imperative: A Foundation for Transhumanism
Just in time for the start of the 2015–16 academic year, the U.K. branch of one of the world’s leading accounting firms, Ernst & Young, announced that it would no longer require a university degree as a condition of employment. Instead it would administer its own tests to prospective junior employees. In the future, this event will be seen as the tipping point toward the end of the university as an all-purpose credentials mill that feeds the “knowledge-based” economy.
University heads have long complained that economists demean their institution when they reduce its value to a labor-market signal: A good degree = a good job prospect. Yet it would seem that even the economists have been too generous to universities. To be sure, Silicon Valley and its emulators have long administered their own in-house tests to job candidates, but Ernst & Young gained international headlines for being a large mainstream elite employer that has felt compelled to turn to such an approach.
When one considers the massive public and, increasingly, private resources dedicated to funding universities, and the fact that both teaching and research at advanced levels can be and have been done more efficiently outside of universities, the social function of universities can no longer be taken for granted.
As the Ernst & Young story suggests, a prime suspect is the examination system, which has always sat uneasily between the teaching and research functions of the university. At best, exams capture a student’s ability to provide a snapshot of a field in motion. But photography is a medium better suited for the dead or the immortal than for ongoing inquiry, where a premium is placed on the prospect that many of our future beliefs will be substantially different from our present ones. A recurring theme in the life stories of great innovators of the modern period, starting with Einstein, is the failure of the exam system to bring out their true capacities. It’s not that the thinking of these innovators wasn’t transformed by their academic experience; rather, it’s that academia lacked an adequate means of registering that transformation.
One charitable but no less plausible diagnosis of many of the errors routinely picked up by examiners is that they result from students’ having suspended conventional assumptions in the field in which they are being examined. Yet these assumptions may be challenged if not overturned in the not-too-distant future. Thus, what strikes the examiner as corner-cutting sloppiness may capture an intuition that amounts to a better grasp of the truth of some matter.
But what sort of examination system would vindicate this charitable reading of error and thereby aid in spotting the next generation of innovators? It is not obvious that an in-house exam administered by, say, Ernst & Young will be any less of an epistemic snapshot than an academic exam, if it simply tests for the ability to solve normal puzzles in normal ways. The in-house exam will just be more content-relevant to the employer.
An alternative would be to make all university examinations tests in counterfactual reasoning. In effect, students would be provided access to the field’s current state of knowledge—the sort of thing they would normally regurgitate as exam answers—and then be asked to respond to scenarios in which the assumptions behind the answers are suspended in various ways. Thus, students would be tested at once for their sense of how the current state of knowledge hangs together and their ability to reassemble that knowledge strategically under a state of induced uncertainty.
When the great Prussian philosopher-administrator Wilhelm von Humboldt made the “unity of teaching and research” the hallmark of the modern university 200 years ago, his aim was to propel Germany onto the world stage at a time when it was playing catch-up with the political and economic innovations coming from France and Britain. In the process, he transformed the academic into a heroic figure who led by example. “Humboldtian” academics were people whose classroom performance inspired a questing spirit in students as they tried to bring together the often inchoate elements of their field into a coherent whole that pointed the way forward. The ultimate validity of any such synthesis mattered less than the turn of mind the performance represented—one that remained “never at rest,” to recall the title of the standard biography of Isaac Newton.
The move by Ernst & Young to administer its own purpose-built examinations is an attempt to produce a more targeted and less expensive version of what it, and much of society, thinks is the source of value in a university education. Universities will fail if they try to compete on those terms. However, they may survive if they learn how to examine in the spirit of Humboldt.
The Hermeneutic Hypercycle
Maximilian Schich
Associate professor for art and technology, University of Texas at Dallas
The most exciting news in our scientific quest to understand the nature of culture is not a single result but a fundamental change in the metabolism of research: With increasing availability of cultural data, increasingly robust quantification nurtures further qualitative insight; taken together, the results inspire new conceptual and mathematical models, which in turn put into question and allow for faster modification of existing data models. Closing the loop, better models lead to more efficient collection of even more cultural data. In short, the hermeneutic circle is replaced by a hermeneutic hypercycle. Driven by the quantification of nonintuitive dynamics, cultural science is accelerated in an autocatalytic manner.
The original “hermeneutic circle” characterizes the iterative research process of the humanist to understand a text or an artwork. The circle of hermeneutic interpretation arises as understanding specific observations presupposes an understanding of the underlying worldview, while understanding the worldview presupposes an understanding of specific observations. As such, the hermeneutic circle is a philosophical concept that functions as a core principle of individual research in the arts and humanities. Friedrich Ast explained it implicitly in 1808; Heidegger and Gadamer further clarified it in the mid-20th century.
Unfortunately, the advent of large database projects in the arts and humanities has almost disconnected the hermeneutic circle in practice. Over decades, the database models meant to embody the underlying worldview were mostly established using formal logic and a priori expert intuition. Database curators then collected vast numbers of specific observations, enabling further traditional research while failing to feed systematic updates back into the underlying database models.
As a consequence, “conceptual reference models” are frozen, sometimes as ISO standards, and out of sync with the nonintuitive complex patterns that would emerge from large numbers of specific observations by quantitative measurement. A systematic data science of art and culture is now closing the loop by adding quantification, computation, and visualization.
The “hermeneutic hypercycle” is a term that returned no result in search engines before this contribution went online. A product of horizontal meme-transfer, it combines the hermeneutic circle with the concept of the catalytic hypercycle, as introduced by Eigen and Schuster. Like the carbon cycle that keeps our Sun shining and the citric-acid cycle that generates energy in our cells, the hermeneutic circle in data-driven cultural analysis can be understood as a cycle of reactions, here to nurture our understanding of art and culture.
The cycle of reactions is a catalytic hypercycle, as data collection, quantification, interpretation, and data modeling all feed back to catalyze themselves. Their cyclical connection provides a mutual corrective of bias (avoiding an error catastrophe) and leads to a vigorous growth of the field (as we learn what to learn next). In simple words, data collection leads to more data collection, quantification leads to more quantification, interpretation leads to more interpretation, and modeling leads to more modeling. Altogether, data collection nurtures quantification and interpretation, which in turn nurtures modeling, which again nurtures data collection, etc.
It is fascinating to observe the resulting vigorous growth of cultural research. While the naming game of competing terms such as digital humanities, culture analytics, culturomics, or cultural data science is still going on, it becomes ever clearer that we are on our way to a sort of systems biology of cultural interaction, cultural pathways, and cultural dynamics, broadly defined. The resulting “systematic science of the nature of culture” is exciting news, as most issues, from religious fundamentalism to climate change, require cultural solutions and “Nature cannot be fooled.”
Rethinking Authority with the Blockchain Crypto Enlightenment
Melanie Swan
Philosopher, economic theorist, New School for Social Research
If a central problem in the contemporary world could be defined, it might be called adapting ourselves to algorithmic reality. The world is marked by an increasing presence of technology, and the key question is whether our relation with it will be empowering or enslaving. To have an enabling relation, we may need to mature and grow in new ways. The fear is that just as human-based institutions can oppress, so too might technology-orchestrated realities, and indeed this case might be worse. Blockchain technology is the newest and most emphatic example of algorithmic reality, news that makes us consider our relation with technology more seriously.
Blockchain technology (the secure distributed ledger software that underlies cryptocurrencies like Bitcoin) connotes Internet II (the transfer of value) as a clear successor position to Internet I (the transfer of information). This means that all human interaction regarding the transfer of value—including money, property, assets, obligations, and contracts—could be instantiated in blockchains for quicker, easier, less costly, less risky, and more auditable execution. Blockchains could be a tracking register and inventory of all the world’s cash and assets. Orchestrating and moving assets with blockchains concerns both immediate and future transfer, whereby entire classes of industries, like the mortgage industry, might be outsourced to blockchain-based smart contracts, in an even more profound move to the automation economy. Smart contracts are radical as an implementation of self-operating artificial intelligence and also through their parameter of blocktime—rendering time, too, assignable rather than fixed.
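The auditability the essay describes comes from the core data structure itself: each entry in the ledger is bound to everything recorded before it by a cryptographic hash. The following is a minimal illustrative sketch in Python, not any real blockchain's implementation; the record fields and function names are invented for the example, and it omits the distributed consensus that actual cryptocurrencies layer on top.

```python
import hashlib
import json

def block_hash(contents):
    # Hash a block's contents, which include the previous block's hash,
    # so altering any earlier record invalidates every later block.
    payload = json.dumps(contents, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, record):
    # Link the new record to the current tip of the chain.
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "record": record}
    block["hash"] = block_hash({"prev": prev, "record": record})
    chain.append(block)
    return chain

def verify(chain):
    # An auditor re-derives every hash; any edited entry breaks the chain.
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        if block["hash"] != block_hash({"prev": block["prev"], "record": block["record"]}):
            return False
        prev = block["hash"]
    return True

ledger = []
append_block(ledger, {"from": "alice", "to": "bob", "amount": 5})
append_block(ledger, {"from": "bob", "to": "carol", "amount": 2})
assert verify(ledger)

ledger[0]["record"]["amount"] = 500  # tampering with history...
assert not verify(ledger)            # ...is immediately detectable
```

The design choice this sketch isolates is why the ledger is "less risky and more auditable": verification requires no trusted authority, only recomputation.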