This isn’t entirely useless; it’s “normal science” that grows by progressive accretion, employing the bricklayers rather than the architects of science. If a new experimental observation (e.g., bacterial transformation; ulcers cured by antibiotics) threatens to topple the edifice, it’s called an anomaly, and the typical reaction of those who practice normal science is to ignore it or brush it under the carpet—a form of psychological denial surprisingly common among my colleagues.
This is not an unhealthy reaction, since most anomalies turn out to be false alarms; the baseline probability of their survival as real anomalies is small, and whole careers have been wasted pursuing them (think polywater and cold fusion). Yet even such false anomalies serve the useful purpose of jolting scientists from their slumber by calling into question the basic axioms that drive their particular area of science. Conformist science feels cozy, given the gregarious nature of humans, and anomalies force periodic reality checks even if the anomaly turns out to be flawed.
More important, though, are genuine anomalies that emerge every now and then, legitimately challenging the status quo, forcing paradigm shifts, and leading to scientific revolutions. Conversely, premature skepticism toward anomalies can lead to stagnation of science. One needs to be skeptical of anomalies but equally skeptical of the status quo if science is to progress.
I see an analogy between the process of science and that of evolution by natural selection. For evolution, too, is characterized by periods of stasis (= normal science), punctuated by brief periods of accelerated change (= paradigm shifts), based on mutations (= anomalies), most of which are lethal (= false theories) but some of which lead to the budding-off of new species and phylogenetic trends (= paradigm shifts).
Since most anomalies are false alarms (spoon-bending, telepathy, homeopathy), one can waste a lifetime pursuing them. So how does one decide which anomalies to invest in? Obviously one can do so by trial and error, but that can be tedious and time-consuming.
Let’s take three well-known examples: (1) continental drift, (2) bacterial transformation, and (3) telepathy. All of these were anomalies when they arose, because they didn’t fit the big picture of normal science at the time. The evidence that all the continents broke off and drifted away from a giant supercontinent was staring people in the face, as Wegener noted in the early twentieth century. Coastlines coincided almost perfectly; certain fossils found on the east coast of Brazil were exactly the same as the ones on the west coast of Africa, etc. Yet it took fifty years for the idea to be accepted by the skeptics.
Anomaly (2) was observed by Fred Griffith, decades before DNA and the genetic code. He found that if you inject a heat-treated, dead, virulent species of bacteria (pneumococcus S) into a rat previously infected with a nonvirulent species (pneumococcus R), species R becomes transformed into species S, thereby killing the rat. About fifteen years later, Oswald Avery found that you can even do this in a test tube; dead S would transform live R into live S if the two were simply incubated together; moreover, the change was heritable. Even the juice from S did the trick, leading Avery to suspect that a chemical substance in the juice—DNA—might be the carrier of heredity. Others replicated this. It was almost like saying, “Put a dead lion and eleven pigs into a room and a dozen live lions emerge,” yet the discovery was largely ignored for years, until Watson and Crick deciphered the mechanism of transformation.
The third anomaly—telepathy—is almost certainly a false alarm.
You will see a general rule of thumb emerging here. Anomalies (1) and (2) were not ignored for lack of empirical evidence. Even a schoolchild can see the fit between continental coastlines or the similarity of fossils. (1) was ignored solely because it didn’t fit the big picture—the notion of terra firma, or a solid, immovable Earth—and there was no conceivable mechanism that would allow continents to drift, until plate tectonics was discovered. Likewise, (2) was repeatedly confirmed but ignored because it challenged the fundamental doctrine of biology—the stability of species. But notice that the third, telepathy, was rejected for two reasons: first, because it didn’t fit the big picture; and second, because it was hard to replicate. This gives us the recipe we are looking for: focus on anomalies that have survived repeated attempts to disprove them experimentally but are ignored by the establishment solely because you can’t think of a mechanism. But don’t waste time on ones that have not been empirically confirmed despite repeated attempts (or ones for which the effect becomes smaller with each attempt—a red flag!).
Words themselves are paradigms, or stable “species” of sorts, that evolve gradually with progressively accumulating penumbras of meaning or sometimes mutate into new words to denote new concepts. These can then consolidate into chunks with “handles” (names) for juggling ideas around, generating novel combinations. As a behavioral neurologist, I am tempted to suggest that such crystallization of words, and juggling them, is unique to humans and occurs in brain areas in and near the left TPO (temporal-parietal-occipital junction). But that’s pure speculation.
Recursive Structure
David Gelernter
Computer scientist, Yale University; chief scientist, Mirror Worlds Technologies; author, Mirror Worlds
Recursive structure is a simple idea (or shorthand abstraction) with surprising applications beyond science.
A structure is recursive if the shape of the whole recurs in the shape of the parts: for example, a circle formed of welded links that are circles themselves. Each circular link might itself be made of smaller circles, and in principle you could have an unbounded nest of circles made of circles made of circles.
The idea of recursive structure came into its own with the advent of computer science (that is, software science) in the 1950s. The hardest problem in software is controlling the tendency of software systems to grow incomprehensibly complex. Recursive structure helps convert impenetrable software rain forests into French gardens—still (potentially) vast and complicated but much easier to traverse and understand than a jungle.
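To make the idea concrete in software terms, here is a minimal sketch of Gelernter’s welded-circles example as a recursive data type. It is my own illustration, not anything from the essay; the Circle class and the total_circles function are hypothetical names introduced only for this example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Circle:
    """A circle whose links may themselves be circles made of circles."""
    radius: float
    links: List["Circle"] = field(default_factory=list)

def total_circles(c: Circle) -> int:
    """Count every circle in the nest; the function recurs just as the shape does."""
    return 1 + sum(total_circles(link) for link in c.links)

# A big circle of three links, each itself a circle of two smaller circles.
nested = Circle(10.0, [Circle(1.0, [Circle(0.1), Circle(0.1)]) for _ in range(3)])
print(total_circles(nested))  # prints 10
```

The point is the shape of the code: because the function has the same form at every scale, understanding one level of the nest is understanding them all, which is exactly how recursive structure tames complexity.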
Benoit Mandelbrot famously recognized that some parts of nature show recursive structure of a sort: A typical coastline shows the same shape or pattern whether you look from six inches or sixty feet or six miles away.
But it also happens that recursive structure is fundamental to the history of architecture, especially to the Gothic, Renaissance, and Baroque architecture of Europe—covering roughly the five hundred years between the thirteenth and eighteenth centuries. The strange case of “recursive architecture” shows us the damage one missing idea can create. It suggests also how hard it is to talk across the cultural Berlin Wall that separates science and art. And the recurrence of this phenomenon in art and nature underlines an important aspect of the human sense of beauty.
The reuse of one basic shape on several scales is fundamental to Medieval architecture. But, lacking the idea (and the term) “recursive structure,” art historians are forced to improvise ad-hoc descriptions each time they need one. This hodgepodge of improvised descriptions makes it hard, in turn, to grasp how widespread recursive structure really is. And naturally, historians of post-Medieval art invent their own descriptions—thus obfuscating a fascinating connection between two mutually alien aesthetic worlds.
For example: One of the most important aspects of mature Gothic design is tracery—the thin, curvy, carved stone partitions that divide one window into many smaller panes. Recursion is basic to the art of tracery.
Tracery was invented at the cathedral of Reims circa 1220 and used soon after at the cathedral of Amiens. (Along with Chartres, these two spectacular and profound buildings define the High Gothic style.) To move from the characteristic tracery design of Reims to that of Amiens, just add recursion. At Reims, the basic design is a pointed arch with a circle inside; the circle is supported on two smaller arches. At Amiens, the basic design is the same—except that now the window recurs in miniature inside each smaller arch. (Inside each smaller arch is a still smaller circle supported on still smaller arches.)
In the great east window at Lincoln Cathedral, the recursive nest goes one step deeper. This window is a pointed arch with a circle inside; the circle is supported on two smaller arches—much like Amiens. Within each smaller arch is a circle supported on two still smaller arches. Within each still smaller arch, a circle is supported on even smaller arches.
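The progression from Reims to Amiens to Lincoln can be read as a single recursive rule applied to greater depth. As a hedged sketch, again my own illustration rather than anything in the essay, a hypothetical tracery function might model it like this:

```python
def tracery(depth: int) -> dict:
    """A pointed arch holding a circle supported on two smaller arches.

    At depth 0 the smaller arches are plain (roughly Reims); at depth 1 the
    whole window recurs in miniature inside each smaller arch (Amiens); at
    depth 2 the nest goes one step deeper still (the Lincoln east window).
    """
    supports = ["plain arch", "plain arch"] if depth == 0 else [
        tracery(depth - 1), tracery(depth - 1)]
    return {"pointed arch": {"circle": True, "supported by": supports}}

reims, amiens, lincoln = tracery(0), tracery(1), tracery(2)
```

Each extra unit of depth is one more level of self-similar subdivision, which is all that separates the three designs.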
There are other recursive structures throughout Medieval art.
Jean Bony and Erwin Panofsky were two eminent twentieth-century art historians. Naturally, they both noticed recursive structure. But neither man understood the idea in itself. And so, instead of writing that the windows of Saint-Denis show recursive structure, Bony said that they are “composed of a series of similar forms progressively subdivided in increasing numbers and decreasing sizes.” Describing the same phenomenon in a different building, Panofsky wrote of the “principle of progressive divisibility (or, to look at it the other way, multiplicability).” Panofsky’s “principle of progressive divisibility” is a fuzzy, roundabout way of saying “recursive structure.”
Louis Grodecki noticed the same phenomenon—a chapel containing a display platform shaped like the chapel in miniature, holding a shrine shaped like the chapel in extra-miniature. And he wrote: “This is a common principle of Gothic art.” But he doesn’t say what the principle is; he doesn’t describe it in general or give it a name. Wilhelm Worringer, too, had noticed recursive structure. He described Gothic design as “a world which repeats in miniature, but with the same means, the expression of the whole.”
So each historian makes up his own name and description for the same basic idea—which makes it hard to notice that all four descriptions actually describe the same thing. Recursive structure is a basic principle of Medieval design; but this simple statement is hard to say, or even think, if we don’t know what “recursive structure” is.
If the literature makes it hard to grasp the importance of recursive structure in Medieval art, it’s even harder to notice that exactly the same principle recurs in the radically different world of Italian Renaissance design.
George Hersey wrote astutely of Bramante’s design (ca. 1500) for St. Peter’s in the Vatican that it consists of “a single macrochapel . . . , four sets of what I will call maxichapels, sixteen minichapels, and thirty-two microchapels.” “The principle [he explains] is that of Chinese boxes—or, for that matter, fractals.” If only he had been able to say that “recursive structure is fundamental to Bramante’s thought,” the whole discussion would have been simpler and clearer—and an intriguing connection between Medieval and Renaissance design would have been obvious.
Using the idea of recursive structure, instead of ignoring it, would have had other advantages, too. It helps us understand the connections between art and technology; it helps us see the aesthetic principles that guide the best engineers and technologists, and the ideas of clarity and elegance that underlie every kind of successful design. These ideas have practical implications. For one, technologists must study and understand elegance and beauty as design goals; any serious technology education must include art history. It also invites us to reflect on the connection between great art and great technology on the one hand and natural science on the other.
But without the right intellectual tool for the job, new instances of recursive structure make the world more complicated instead of simpler and more beautiful.
Designing Your Mind
Don Tapscott
Business strategist; chairman, Moxie Insight; adjunct professor, Rotman School of Management, University of Toronto; author, Grown Up Digital: How the Net Generation Is Changing Your World; coauthor (with Anthony D. Williams), Macrowikinomics: Rebooting Business and the World
Given recent research about brain plasticity and the dangers of cognitive load, the most powerful tool in our cognitive arsenal may well be design. Specifically, we can use design principles and discipline to shape our minds. This is different from acquiring knowledge. It’s about designing how each of us thinks, remembers, and communicates—appropriately and effectively for the digital age.
Today’s popular hand-wringing about the digital age’s effects on cognition has some merit. But rather than predicting a dire future, perhaps we should be trying to achieve a new one. New neuroscience discoveries give hope. We know that brains are malleable and can change depending on how they are used. The well-known study of London taxi drivers showed that a brain region involved in memory formation was physically larger in the drivers than in non-taxi-driving individuals of a similar age. This effect did not extend to London bus drivers, supporting the conclusion that the taxi drivers’ need to memorize the multitude of London streets drove structural changes in the hippocampus.
Results from studies like these support the notion that even among adults the persistent, concentrated use of one neighborhood of the brain really can increase its size and presumably its capacity. Not only does intense use change adult brain regional structure and function, but temporary training and perhaps even mere mental rehearsal seem to have an effect as well. A series of studies showed that tactile (Braille-character) discrimination can be improved in sighted people who are blindfolded. Brain scans revealed that after only five days of blindfolding for over an hour at a time, participants’ visual cortex had become more responsive to auditory and tactile input.
The existence of lifelong neuroplasticity is no longer in doubt. The brain runs on a “use it or lose it” motto. So could we use it to build it right? Why don’t we use the demands of our information-rich, multistimuli, fast-paced, multitasking digital existence to expand our cognitive capability? Psychiatrist Dr. Stan Kutcher, an expert on adolescent mental health who has studied the effect of digital technology on brain development, says we probably can: “There is emerging evidence suggesting that exposure to new technologies may push the Net Generation [teenagers and young adults] brain past conventional capacity limitations.”
When the straight-A student is doing her homework at the same time as five other things online, she is not actually multitasking. Instead, she has developed better active working memory and better switching abilities. I can’t read my e-mail and listen to iTunes at the same time, but she can. Her brain has been wired to handle the demands of the digital age.
How could we use design thinking to change the way we think? Good design typically begins with some principles and functional objectives. You might wish to perceive and absorb information effectively, concentrate, remember, infer meaning, be creative, write, speak, and communicate well, and enjoy important collaborations and human relationships. How could you design your use of (or abstinence from) media to achieve these goals?
Something as old-school as a speed-reading course could increase your input capacity without undermining comprehension. If it made sense in Evelyn Wood’s day, it is doubly important now, and we’ve learned a lot since then about how to read effectively.
Feeling distracted? The simple discipline of reading a few full articles per day rather than just the headlines and summaries could strengthen attention.
Want to be a surgeon? Become a gamer, or rehearse while on the subway. Mental rehearsal can produce changes in the motor cortex as big as those induced by physical movement. In one study, one group of participants was asked to play a simple five-finger exercise on the piano while another group was asked to think about playing the same tune in their heads, imagining the same finger movements one note at a time. Both groups showed changes in their motor cortex, and the changes in the group that rehearsed the tune mentally were as great as in the group that played it physically.
Losing retention? Decide how far you want to apply Albert Einstein’s law of memory. When asked why he went to the phone book to get his own number, he replied that he memorized only those things he couldn’t look up. There’s a lot to remember these days. Between the dawn of civilization and 2003, five exabytes of data were collected (an exabyte equals 1 quintillion bytes). Today five exabytes of data are collected every two days! Soon it will be five exabytes every few minutes. Humans have a finite memory capacity. Can you develop criteria for which memories will be inboard and which outboard?
Or want to strengthen your working memory and ability to multitask? Try reverse mentoring—learning with your teenager. This is the first time in history when children are authorities about something important, and the successful ones are pioneers of a new paradigm in thinking. Extensive research shows that people can improve cognitive function and brain efficiency through simple lifestyle changes, such as incorporating memory exercises into their daily routine.
Why don’t schools and universities teach design for thinking? We teach physical fitness, but when it comes to brain fitness we emphasize cramming young heads with information and testing their recall. Why not offer courses that emphasize designing a great brain?
Does this modest proposal raise the specter of “designer minds”? I don’t think so. The design industry is something done to us. I’m proposing that we each become designers.
Free Jazz
Andrian Kreye
Editor, The Feuilleton (arts and essays) of the German daily Sueddeutsche Zeitung, Munich
It’s always worth taking a few cues from the mid-twentieth-century avant-garde. When it comes to improving your cognitive toolkit, free jazz is perfect. It is a highly evolved new take on an art that has—at least, in the West—been framed by a strict set of twelve notes played in accurate fractions of bars. It is also the pinnacle of a genre that began with the blues, just a half century before Ornette Coleman assembled his infamous double quartet in the A&R Studio in New York City one December day in 1960. In science terms, that would mean an evolutionary leap from elementary-school math to game theory and fuzzy logic in a mere fifty years.