We dedicated our initial chapters to explaining how the human brain processes information. The machinery built by our roughly one hundred billion neurons would, in principle, allow us to see and remember everything in excruciating detail. However, we saw in the cases of Shereshevskii, Funes, and the savants that such boundless memory limits the ability to think; so, far from memorizing everything, the brain instead focuses on relatively sparse information and extracts meaning by processing it redundantly, many times and in many different ways. It is for this exact reason that we highlighted the importance of delegating trivial memory tasks to modern-day gadgets, while resisting the temptation to be constantly bombarded with information; it is also the reason we criticized an educational system that values the capacity to memorize over the ability to comprehend. From past experience, we infer the perceptual information that our brain does not register; these unconscious inferences lead to the construction of Helmholtz’s signs, in the case of vision, and of Bartlett’s schemas, in the case of memory; they are the assumptions we constantly make and that sometimes lead us to be fooled by optical illusions or false memories.
This is not exactly a strategy we would be inclined to follow when designing a robot or a computer. In designing a data-processing system, we tend to prioritize accuracy and efficiency, acquiring the maximum possible information and using the minimum necessary processing power to store it and retrieve it faithfully later. In terms of data-storage efficiency, the process implemented in our brains is exorbitantly expensive, imprecise, and extremely inefficient, but it is, in fact, fundamental to our ability to apprehend information. Though a computer can store thousands of high-resolution photographs, it is unable to understand them as we do. We perceive and remember very little because our brain prioritizes understanding. Our ability to extract meaning and understand is the result of millions of years of evolution, of trial and error that settled on the best possible strategy after attempting countless others. A brilliant inventor in search of a revolution in artificial intelligence could, in principle, try to replicate the strategy employed by our brain—in fact, replicating basic brain principles led to major recent breakthroughs with the development of deep neural networks15—but duplicating its parallel processing and redundancy would not be enough. The key lies in selecting exactly what to process and how to process it. The scant information we choose to process depends on the task we have at hand—for example, we see the same book very differently if we are looking for something to read than if we need a way to raise the computer monitor. This flexibility in attributing meaning, in selecting which information to process and which to discard, is what defines our intelligence. Our limitation in the processing and retrieval of information is precisely what distinguishes us from savants, other animals, HAL 9000, the internet, a replicant, or the Terminator. Our capacity to manage and relate abstractions, coded by concept neurons in the hippocampus, is the basis of our memory—and, perhaps, the cornerstone of what makes us human.
NOTES
Chapter 1
1. Curiously, Roy Batty’s final words, repeatedly quoted by sci-fi film buffs, are not in Dick’s book, nor do they appear in the film’s original script. They were sketched by Rutger Hauer shortly before the scene was shot.
2. Similar arguments have been put forth by Ray Kurzweil (a famous futurist and inventor of the first print-to-speech reading machine for the blind) to defend the idea of a cybernetic, “transhuman” being that could transcend the many weaknesses of our bodies and, presumably, of our brains.
3. To simplify matters, I am leaving aside the complex processes that unfold while neurons are not firing, known collectively as subthreshold activity.
4. Hopfield’s original paper from the early 1980s opened up an important research avenue in neuroscience. To give you an idea of the impact of this work, while most scientific papers are cited at most a few times by other papers, Hopfield’s paper has over 18,000 citations to date. See: John Hopfield. “Neural networks and physical systems with emergent collective computational abilities.” Proceedings of the National Academy of Sciences 79 (1982): 2554–2558.
5. Santiago Ramón y Cajal. “The Croonian Lecture: La fine structure des centres nerveux.” Proceedings of the Royal Society of London 55 (1894): 444–468.
6. Donald Hebb. The Organization of Behavior: A Neuropsychological Theory. New York: John Wiley and Sons, 1949.
7. Bliss and Lømo’s work was published in: Tim Bliss and Terje Lømo. “Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the perforant path.” Journal of Physiology 232 (1973): 331–356.
8. Among other works that show the relation between LTP and memory formation, refer to: R. Morris, E. Anderson, G. Lynch and M. Baudry. “Selective impairment of learning and blockade of long-term potentiation by an N-methyl-D-aspartate receptor antagonist, AP5.” Nature 319 (1986): 774–776.
9. Recent estimates give a more precise figure of 86 billion neurons: Suzana Herculano-Houzel. “The human brain in numbers: a linearly scaled-up primate brain.” Frontiers in Human Neuroscience 3 (2009): article 31.
10. Of course, this number depends on the type of sand and the truck’s capacity. Considering that a grain of sand can have a diameter between 0.02 mm and 2 mm, let us assume an average diameter of 0.5 mm. One centimeter can thus hold twenty grains of sand side by side, and a volume of one cubic centimeter can hold approximately 20 × 20 × 20 = 8,000 grains of sand. The cargo compartment of a truck measures approximately 5 m × 2 m × 1.5 m, which corresponds to a volume of 15 million cubic centimeters. This means that a truck can transport some 15 million times 8,000, or 1.2 × 10^11, grains of sand, which approximately corresponds to the number of neurons in the brain. Following this analogy, the number of neurons in a snail’s brain corresponds to approximately a pinch of sand, the total in a fly or an ant to a soupspoon full of sand, in a bee or a cockroach to the amount of sand in a small coffee cup, in a frog to a two-liter bottle full of sand, in a mouse to the sand in a bucket, in a cat to a wheelbarrow full of sand, and the number of neurons in a macaque monkey corresponds to the sand that fits in an excavator shovel. However, intelligence is not just determined by the number of neurons an animal has, as the number of neurons in the brain of an African elephant corresponds to three cargo trucks full of sand and the number in a whale to five cargo trucks. What really matters is how the neurons connect to each other, forming complex circuits that underlie different brain functions.
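For readers who want to check this back-of-the-envelope arithmetic, here is a minimal sketch in Python; the grain diameter and truck dimensions are the same rough assumptions used in the note, not measured values.

    # Rough estimate from this note; every figure is an assumption, not a measurement.
    grain_diameter_mm = 0.5                     # assumed average grain of sand
    grains_per_cm = 10 / grain_diameter_mm      # 20 grains side by side in one centimeter
    grains_per_cm3 = grains_per_cm ** 3         # about 8,000 grains per cubic centimeter
    truck_volume_cm3 = 500 * 200 * 150          # 5 m x 2 m x 1.5 m, in cubic centimeters
    grains_per_truck = truck_volume_cm3 * grains_per_cm3
    print(f"{grains_per_truck:.1e} grains per truck")   # 1.2e+11, roughly the number of neurons in a human brain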
11. In this case, we consider the beach to have a width of 50 meters and a depth of 25 meters (half as much as the width).
12. This value corresponds to a specific configuration but provides an order-of-magnitude estimate. For more details, see: E. Gardner. “Maximum storage capacity in neural networks.” Europhysics Letters 4 (1987): 481–485.
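As a point of reference only (the note does not specify the configuration, so these are standard results from this literature rather than necessarily the exact value being cited), the capacity of such networks is usually expressed as a number of random, uncorrelated patterns that can be stored per neuron:

    p_max ≈ 0.14 × N    (classic estimate for a fully connected Hopfield network of N neurons)
    p_max ≈ 2 × N       (Gardner’s capacity limit for a simple perceptron with N inputs)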
13. Although it is almost impossible to estimate the fraction of neurons generally involved in the encoding of memories, some studies in monkeys estimate that about 1.7 percent of the neurons in the inferotemporal cortex are involved in memory-retrieval tasks. For more details, see: Kuniyoshi Sakai and Yasushi Miyashita. “Neural organization for the long-term memory of paired associates.” Nature 354 (1991): 152–155.
Chapter 2
1. This work was published in: Kristin Koch, Judith McLean, Ronen Segev, Michael A. Freed, Michael J. Berry II, Vijay Balasubramanian, and Peter Sterling. “How much the eye tells the brain.” Current Biology 16 (2006): 1428–1434.
2. Binary numbers are sequences of digits, each of which can have only one of two values, 0 or 1. For example, 0001 equals 1 in decimal notation, 0010 equals 2, 0011 equals 3, 0100 equals 4, and so on. It is easy to implement binary numbers in digital circuits, and for that reason they are the basic language of computers.
3. Analogously, three bits can represent eight objects, four bits sixteen objects, and in general, N bits can represent 2^N objects.
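A minimal Python sketch of the two notes above; the particular bit counts are chosen only for illustration.

    # N bits can label 2**N distinct objects (note that 24 bits already give about 16.8 million).
    for n_bits in (1, 2, 3, 4, 8, 24):
        print(n_bits, "bits can represent", 2 ** n_bits, "objects")

    # The binary notation itself: the decimal number 4 written with four binary digits is 0100.
    print(format(4, "04b"))    # prints 0100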
4. Claude Shannon (1916–2001) studied electrical engineering and mathematics at the University of Michigan and graduated at only twenty years of age. He then earned a master’s degree at MIT, where he applied algebraic principles to the development of circuits, and during the war worked on cryptography at Bell Labs, developing and cracking secret codes. After the war, Shannon dedicated himself to the subject in which he was to obtain his greatest achievements: the study of the encoding and optimal transmission of information. Shannon introduced concepts such as “Shannon entropy,” which is used to measure (in bits) the amount of information contained in a message. Shannon’s most celebrated work is a paper published in 1948 that originated information theory: Claude Shannon. “A Mathematical Theory of Communication.” Bell System Technical Journal 27 (1948): 379–423 and 623–656.
For the application of information theory to neuroscience, see for example: Rodrigo Quian Quiroga and Stefano Panzeri. “Extracting information from neural populations: Information theory and decoding approaches.” Nature Reviews Neuroscience 10 (2009): 173–185.
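As an illustration of the Shannon entropy mentioned in this note (a minimal sketch, not a reproduction of anything in the cited papers): for a source whose symbols occur with probabilities p_i, the entropy H = −Σ p_i log2(p_i) is the average number of bits carried by each symbol.

    import math

    def shannon_entropy(probabilities):
        # Average information per symbol, in bits.
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    print(shannon_entropy([0.5, 0.5]))    # a fair coin toss carries 1.0 bit
    print(shannon_entropy([0.9, 0.1]))    # a heavily biased coin carries about 0.47 bits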
5. With 24 bits it is possible to generate 2^24 = 16,777,216 (more than 16 million) different colors. Nowadays there are monitors with a color depth of 32 bits, but their color resolution is essentially indistinguishable from that of a 24-bit monitor.
6. As was to be expected, this claim did not go unnoticed, and it was indeed disputed by an expert on the topic, who estimated that the minimum resolution at which the eye becomes unable to differentiate pixels at a 30 cm distance is 477 ppi. However, in support of Jobs’s statement (or of the group of researchers at Apple who provided him with the figure), a later article in Discover magazine showed that only someone with perfect vision would be able to differentiate pixels at 300 ppi, and that this resolution is more than sufficient for most people. For more details on this discussion, see www.wired.com/2010/06/iphone-4-retina-2 and http://blogs.discovermagazine.com/badastronomy/2010/06/10/resolving-the-iphone-resolution.
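The disagreement comes down largely to the visual acuity one assumes. A minimal sketch of the underlying geometry, taking a viewing distance of 12 inches (about 30 cm); the acuity figures are standard textbook assumptions, not values taken from the articles cited:

    import math

    def max_useful_ppi(distance_inches, acuity_arcmin):
        # Pixel density beyond which adjacent pixels subtend less than the assumed visual acuity.
        pixel_size_inches = distance_inches * math.tan(math.radians(acuity_arcmin / 60))
        return 1 / pixel_size_inches

    print(max_useful_ppi(12, 1.0))    # ~286 ppi, assuming typical 20/20 acuity (1 arcminute)
    print(max_useful_ppi(12, 0.6))    # ~477 ppi, assuming near-perfect acuity (0.6 arcminute)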
7. The bottom image was taken at the British Museum in London by Carlos Pedreira and Joaquín Navajas, two students in my laboratory. Using a portable eye tracker, Carlos and Joaquín found that, in the course of a few minutes in a museum room, people looked on average at some fifty objects for more than one second. The surprising result was that when, after leaving the room, people were asked what they had seen, they could remember only five or so objects. This fact gives rise to several interesting conclusions, but we defer until later the discussion of how little we remember.
8. These days, an eye tracker simply films the pupil with a digital camera. In Yarbus’s time, experiments were much more tedious, since eye movements were recorded using the reflection of a beam of light on a small mirror mounted on something resembling a contact lens attached to the subject’s eyeball. These techniques, as well as several eye-tracking results, are described in Yarbus’s classic: Alfred Yarbus. Eye Movements and Vision. New York: Plenum Press, 1967.
9. This experiment was carried out in my laboratory for a documentary, aired in England by Channel 4, about the way we perceive art. We went beyond elementary observations (like the fact that we tend to concentrate on the eyes when we look at a face) and studied how gaze patterns changed after we modified details of the paintings using Adobe Photoshop. In another experiment, we used an eye tracker to study how people observed works of art at the Tate Gallery and found that their fixation patterns were radically different when they looked at reproductions displayed on a computer, highlighting the importance of seeing original works of art at the museum. For more details about these experiments, see: Rodrigo Quian Quiroga and Carlos Pedreira. “How do we see art: an eye-tracker study.” Frontiers in Human Neuroscience 5 (2011): article 98.
And: Jennifer Binnie, Sandra Dudley, and Rodrigo Quian Quiroga. “Looking at Ophelia: A comparison of viewing art in the gallery and in the lab.” Advances in Clinical Neuroscience and Rehabilitation 11 (3) (2011): 15–18.
10. Art is so subjective that, whereas Van Gogh’s paintings reach astounding prices nowadays, the artist himself managed to sell only a single painting in his lifetime; so subjective that we typically require some objective guideline, like the artist’s renown, the opinions of critics, or the majesty of the surroundings, to decide which works of art are good and which are not. Joshua Bell, a famous violinist who routinely fills the most prestigious concert halls, was barely noticed by a handful of people as he played Bach on his Stradivarius in a subway station.
11. I was lucky to have Mariano rotate for one year in my laboratory, bridging ideas from art and neuroscience about visual perception. The result of this collaboration was “The Art of Visual Perception,” an art and science show exhibited in a gallery in England. For more details, see www.youtube.com/watch?v=cg8RZE65Na4.
Chapter 3
1. For an entertaining but rigorous discussion of the way neurons are organized in the retina, see Chapter 3 of the book by David Hubel, a disciple of Kuffler’s who went on to share the Nobel Prize for Physiology or Medicine with Torsten Wiesel for their study of the primary visual cortex, the first area in the cortex that receives information from the retina: David Hubel. Eye, Brain and Vision (Second Edition). Scientific American Library Series, London/New York: W. H. Freeman, 1995.
For a free online version of the book, see: http://hubel.med.harvard.edu/index.html.
2. This is a principle well known to visual artists, who use contrast to highlight the brightness of a given color in their palette. For a fascinating description of the subject, see: Margaret Livingstone. Vision and Art: The Biology of Seeing. New York: Harry N. Abrams, 2008.
3. For more details, see: Horace Barlow. “The Ferrier lecture 1980: Critical limiting factors in the design of the eye and visual cortex.” Proceedings of the Royal Society of London B 212 (1981): 1–34.
4. This is, of course, just a very brief allusion to the philosophical roots of this discussion. For more detailed treatments of the subject, refer, for example, to: Anthony Kenny. A New History of Western Philosophy. Oxford: Oxford University Press, 2012.
And: Bertrand Russell. A History of Western Philosophy. London: Routledge Classics, 1946.
5. The breadth of the contributions of Helmholtz (1821–1894) to different areas of science is truly astounding. Among other things, Helmholtz formulated the principle of conservation of energy and postulated the notion of free energy in thermodynamics, invented the ophthalmoscope to examine the retina, measured the conduction speed within nerves, derived a mathematical description of acoustic vibrations, and established the modern theory of colors using three variables (hue, saturation, and brightness) to characterize them.
6. Several authors, in particular David Hubel and Torsten Wiesel, developed an animal model to study alterations in behavior and in the response patterns of neurons in visual areas caused by visual deprivation. To that end, they sewed shut the eyelids of cats of different ages and for different spans of time (usually a few days after birth and for three months) and then studied the behavior of the animals after their eyes were reopened. For more information, refer to: Torsten Wiesel and David Hubel. “Effects of visual deprivation on morphology and physiology of cells in the cat’s lateral geniculate body.” Journal of Neurophysiology 26 (1963): 978–993.
7. S.B.’s case, along with a brief historical overview of similar cases, is described in: Richard Gregory and Jean Wallace. “Recovery from early blindness: A case study.” Experimental Psychology Society Monograph No. 2. London: Heffer, 1963.
Oliver Sacks describes a similar case in his book An Anthropologist on Mars. The movie At First Sight, starring Val Kilmer, is based on this story.
8. The Man Who Mistook His Wife for a Hat is indeed the title of one of Sacks’s most famous books.
Chapter 4
1. Jorge Luis Borges. Ficciones. Buenos Aires: Sur, 1944.
2. William James. The Principles of Psychology. Vol. 1. New York: Henry Holt, 1890, p. 680.
3. Themistocles was the strategist behind the Greek naval defense against the Persian invasions and, according to Cicero, possessed an extraordinary memory.
4. Rodrigo Quian Quiroga. Borges and Memory. Cambridge, MA: MIT Press, 2012.
5. Gustav Spiller. The Mind of Man: A Text-Book of Psychology. London: Swan Sonnenschein & Co., 1902.
As I investigated the pursuits and readings that could have inspired Borges’s brilliant vision of memory in “Funes the Memorious,” I stumbled by chance upon Spiller’s book in Borges’s library. The book had a note on the first page, in Borges’s own handwriting, referring to the fragment where Spiller estimates the number of memories he collected throughout his life. For more details, see Chapter 2 of Borges and Memory.
6. The very small number of memories we keep from our first years of life is a phenomenon known as childhood amnesia. Childhood amnesia has attracted the attention of neuroscientists and psychologists, especially after Sigmund Freud published a set of seminal studies of subconscious repression during childhood. For more details, refer to Chapter 12 of: Alan Baddeley, Michael Eysenck, and Michael Anderson. Memory. New York: Psychology Press, 2009.
7. Galton quantified his memory capacity by assessing the number of recollections brought forth by a set of words. This work was published in: Francis Galton. “Psychometric experiments.” Brain 2 (1879): 149–162.
And also as part of a book: Francis Galton. Inquiries into Human Faculty and Its Development. London: Dent & Sons, 1907.