
The New Science of the Mind


by Mark Rowlands


  These intuitive claims suggest what we might call a deflationary response to the ecological and enactive arguments we have encountered. The deflationary response accepts the arguments, but tries to severely restrict their influence. Basically, the response goes like this: OK, we'll give you perception. Perception, we'll accept, involves manipulation and transformation of external information-bearing structures, and so is a process that extends into a perceiving organism's environment, and is not restricted to processes occurring inside that organism's head. We'll accept all that-but that's as far as it goes. Real cognition-cognition central, if you like-is restricted to processes occurring inside the cognizing organism's brain. In other words, perception is a peculiarly peripheral and decidedly idiosyncratic form of cognition; and we cannot use it as a template for understanding cognition in general.

  This response holds on to the sandwich model itself, but merely rearranges our understanding of its ingredients. Perception breaks down into two components, one of which is to be found in the sandwich's filling, and the other to be found in the bread. The conclusion is clear. If the new science is to deal the Cartesian conception anything more than a glancing blow, it cannot restrict its arguments to perception: it must go after cognition central. Here the general strategy of the nascent new science has been to try and show that cognition central is not, in fact, as central as we might think. Or, to put the same point another way: cognition is a lot more like perception than we have hitherto realized or been willing to accept.

  An early, but nonetheless important, development of this general theme is to be found in the work of the Soviet psychologists Alexander Luria and Lev Vygotsky. In a classic series of studies originally published in 1930, but not reproduced in English until 1992, Luria and Vygotsky consider the difference in memory tasks facing two types of people. On the one hand, there is the African envoy who has to remember word for word the message of his tribal chief. On the other, there is the Peruvian kvinu officer. Kvinus are a system of knots, and were used in ancient Peru, China, Japan, and several other parts of the world. They are conventional external representations, in essence a forerunner of written language, used to record various sorts of information (e.g., about the state of the army, census statistics, taxes) or to provide instructions (e.g., to remote provinces). The Chudi tribe of Peru had a special officer assigned to the task of tying and interpreting kvinus. In the early stages of development of the kvinu system, such officers were rarely able to read other people's kvinus unless they were accompanied by verbal comment. However, over time, the system was refined and standardized to such an extent that kvinus could be used to record all the major matters of state, and to depict laws and events.

  Luria and Vygotsky argue that the development of kvinus will have had a profound effect on the strategies of remembering employed by those who could use them. The African envoy, who has no similar external system of representation, must remember verbatim the perhaps lengthy message of his tribal chief. He has to remember not simply the general gist of the message, but also, and much more difficult, the precise sequence of words uttered by his chief. The Peruvian kvinu officer, on the other hand, does not have to remember the information contained in the knot he has tied. He has to remember only the "code" that will allow him to access the information contained in the knot. The African envoy relies on outstanding biological memory to remember the information he is to transmit. The Peruvian officer's reliance on this sort of memory is much less, amounting to, at most, the remembering of the code. Once he knows this, he is able to tap into a potentially unlimited amount of information contained in the kvinu system.

  Once an external information store of this sort becomes available, Luria and Vygotsky argue, it is easy to see how memory is going to develop. As external forms of information storage increase in number and sophistication, naked biological memory is going to become progressively less and less important. Therefore, Luria and Vygotsky predict, types of memory that are clearly biological will have a tendency to wither away. The most obvious implication will be for episodic memory (see box 2.1): the outstanding episodic memory of both primitive cultures and children will diminish significantly with the process of enculturation. The cultural evolution of memory is, therefore, also involution-the withering away of vestigial forms.

  The Peruvian kvinu officer, having learned the appropriate code, has access to potentially more information than the African envoy could absorb in a lifetime. And this is not brought about by way of internal development of the brain. The internal demands on the Peruvian officer, that is, the amount of information he must process internally, are, if anything, far lower than those placed on the African envoy. The Peruvian is in a position to appropriate more information at less internal cost. Moreover, kvinus form a fairly basic external representational system: the amount and kinds of information they can embody are strictly limited. But with the development of more sophisticated forms of external representation-in particular, language-capable of carrying greater quantities and varieties of information, the benefits of learning the code that allows you to tap into this information increase accordingly.

  Box 2.1

  Varieties of Remembering

  Today psychologists generally distinguish three types of memory. This tripartite distinction had not been made explicit at the time of Luria and Vygotsky's studies; I have therefore translated their claims into this contemporary terminology.

  1. Procedural memory is the mnemonic component of learned, as opposed to fixed, action patterns: to have procedural memory is to remember how to do something that you have previously learned. For this reason, it is sometimes referred to as knowing how (Ryle 1949) or habit memory (Bergson 1908/1991; Russell 1921). The most obvious examples of procedural memory are embodied skills such as riding a bicycle, playing the piano, or skiing. Procedural memory has nothing essentially to do with conscious recall of prior events: one can, in principle, know how to do something while having completely forgotten learning to do it.

  2. Semantic memory is memory of facts (Tulving 1983). You might remember that, for example, Ouagadougou is the capital of Burkina Faso. It is not immediately clear to what extent this category is distinguishable from the category of belief. What is the difference between, for example, believing that Ouagadougou is the capital of Burkina Faso and remembering this fact? Neither beliefs nor memories need be consciously recalled or apprehended by a subject in order to be possessed by that subject. (That is, beliefs are dispositional, rather than occurrent, items [see box 3.1].) Therefore, it seems likely that semantic memories are simply a subset of beliefs. Not all beliefs qualify as semantic memories. If I perceive that the cat is on the mat, and form the belief that the cat is on the mat on this basis, it would be very odd to claim that I remember that the cat is on the mat. However, any semantic memory does seem to be token-identical with a belief: the claim that I remember that p without believing that p seems to be contradictory.

  3. Episodic memory, sometimes called "recollective memory" (Russell 1921), is a systematically ambiguous expression. Often it is used to denote memory of prior episodes in a subject's life (Tulving 1983, 1993, 1999; Campbell 1994, 1997). However, it is also sometimes taken to denote memory of prior experiences possessed by that subject. For example, Locke understood (episodic) memory as a power of the mind "to revive perceptions which it has once had, with this additional perception annexed to them, that it has had them before" (1690/1975, 150). In a similar vein, Brewer defines episodic memory as a reliving of one's phenomenal experience from a specific moment in one's past, accompanied by a belief that the remembered episode was personally experienced by the individual in the past (Brewer 1996, 60). The ambiguity embodied in the concept of episodic memory, then, is that between the episode experienced and the experience of the episode. This ambiguity is significant, but it can be accommodated in a sufficiently sophisticated account of episodic memory.

  According to Luria and Vygotsky, remembering in literate cultures follows the sort of dynamic interplay between internal and external processes that we find in the Peruvian kvinu officer. It is characteristic of modern memory to offload part of the task of remembering into external information-bearing structures-written language being the most obvious and important one. In the case of semantic memory at least, remembering for us consists largely in retaining the "code" that will allow us to plug into the rich and varied stores of information around us.

  Luria and Vygotsky's account of remembering parallels, in many essential respects, the ecological and enactive models of perception we examined earlier. Crucial to their account is the idea of structures that (i) are external to the remembering subject, and (ii) carry information relevant to the memory task in question. If in possession of the requisite "code," the remembering subject can use or deploy these structures to reduce the amount of internal information processing that he or she needs to perform in the accomplishing of a given memory task. In Luria and Vygotsky's account, external representational structures can go proxy, at least in part, for internal representational structures. Therefore, at least some of the role played in Cartesian cognitive science by mental representations can be taken over by the remembering subject acting on the world in appropriate ways. Merlin Donald (1991) has supplied an impressive development of this general theme in his account of the origin of the modern mind.

  5 Neural Networks and Situated Robotics

  The development of robotics in the past two decades has been decisively shaped by two factors. First, there was the development of connectionist or neural network models of cognition. Second, there was the role assigned to environmental interaction-manipulation and exploitation of environmental factors or circumstances-in the modeling of cognitive processes. These two factors are not unrelated.

  Box 2.2

  Neural Networks

  Neural networks are, in essence, pattern-mapping devices. Pattern mapping, in this sense, is made up of four different types of process. Pattern recognition is the mapping of a given pattern onto a more general pattern. Pattern completion is the mapping of an incomplete pattern onto a complete version of the same pattern. Pattern transformation is the mapping of one pattern onto a different but related pattern. And pattern association is the arbitrary mapping of one pattern onto another, unrelated pattern (Bechtel and Abrahamsen 1991, 106).
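  To fix ideas, the following toy Python sketch illustrates the four operations just listed. The stored patterns, the distance measure, and all the names in it are invented for this illustration; a real neural network would realize such mappings through learned connection weights rather than explicit lookup.

    # Toy illustration of the four pattern-mapping operations.
    # All patterns and mappings are invented for the example.
    STORED = {
        "A": [1, 0, 1, 0, 1, 0],
        "B": [1, 1, 0, 0, 1, 1],
    }

    def distance(p, q):
        return sum(1 for x, y in zip(p, q) if x != y)

    def recognize(pattern):
        # Pattern recognition: map an input onto the closest stored pattern.
        return min(STORED, key=lambda name: distance(pattern, STORED[name]))

    def complete(partial):
        # Pattern completion: fill in missing entries (None) from the best-matching stored pattern.
        known = [(i, v) for i, v in enumerate(partial) if v is not None]
        best = min(STORED, key=lambda n: sum(1 for i, v in known if STORED[n][i] != v))
        return list(STORED[best])

    def transform(pattern):
        # Pattern transformation: map a pattern onto a related pattern (here, its inverse).
        return [1 - x for x in pattern]

    def associate(pattern, table):
        # Pattern association: map a pattern onto an arbitrary, unrelated pattern.
        return table[tuple(pattern)]

    print(recognize([1, 0, 1, 0, 1, 1]))       # closest to "A"
    print(complete([1, None, 0, 0, None, 1]))  # the stored "B" pattern
    print(transform(STORED["A"]))              # [0, 1, 0, 1, 0, 1]
    print(associate(STORED["A"], {tuple(STORED["A"]): STORED["B"]}))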

  A neural network is made up of a collection of nodes or units-the rough equivalent of neurons. Each node can be connected to various other ones. The underlying idea is that activation in one unit can affect, or have an impact on, activation in the other units to which it is connected. However, the nature of this impact can vary along at least three dimensions. First, the connections can be of varying strengths. The strength of a connection between two units A and B is a function of how much of the activation of A is transferred to B. Even if A and B are connected in such a way that activation in A is transmitted to B, it might be that half of the activation of A is transmitted, or a quarter. It might be that the activation of A is augmented when it is passed on to B: for example, A fires at level q, but the quantity of activation passed on to B is 2q.

  The second type of variability derives from the fact that connections can be both excitatory and inhibitory. The connection between A and B is excitatory if the activation in A tends, all things being equal, to produce activation in B or increase the activity in B if B is already firing (in ways determined by the strength of the connection between them). The connection between A and B is inhibitory if activation in A tends to stop or dampen down activity in B.

  The third source of variability derives from the fact that any unit may (but need not) have a threshold level of activation below which it will not fire. Thus, for example, even if the activation q from A is passed on to B, B does not fire because its threshold level is greater than q.

  The units of a neural network are arranged into layers: an input layer, an output layer, and one or more hidden layers. Connections between units occur both between layers and within layers.
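  As a concrete illustration of these three sources of variability and of the layered arrangement, here is a minimal Python sketch of a single feedforward pass through a very small network. The weights, thresholds, and layer sizes are made up for the example, and for simplicity only between-layer connections are included: a positive weight stands for an excitatory connection, a negative weight for an inhibitory one, the size of the weight corresponds to the strength of the connection, and a unit whose summed input does not exceed its threshold does not fire.

    # Minimal feedforward pass; all numbers are illustrative only.
    # weight > 0: excitatory connection; weight < 0: inhibitory connection.
    # A unit fires only if its summed input exceeds its threshold.
    def layer_activations(inputs, weights, thresholds):
        # weights[j][i] is the strength of the connection from input unit i to unit j;
        # thresholds[j] is unit j's firing threshold.
        outputs = []
        for w_row, theta in zip(weights, thresholds):
            total = sum(w * a for w, a in zip(w_row, inputs))
            outputs.append(total if total > theta else 0.0)
        return outputs

    # Input layer -> hidden layer -> output layer.
    hidden = layer_activations(
        inputs=[1.0, 0.5],
        weights=[[0.5, 2.0],    # unit H1 receives half of input 1, double of input 2
                 [1.0, -1.0]],  # unit H2 is excited by input 1, inhibited by input 2
        thresholds=[0.2, 0.2],
    )
    output = layer_activations(hidden, weights=[[1.0, 1.0]], thresholds=[0.5])
    print(hidden, output)  # [1.5, 0.5] [2.0]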

  For our purposes, the details of neural networks are less important than their characteristic strengths and weaknesses. Specifically, neural networks are very good at certain tasks, and very bad at others. Much of the allure of these networks is that the tasks they are good at are the sorts of tasks that can easily, or relatively easily, be performed by human beings. Conversely, the sorts of tasks that neural networks are not very good at are ones that humans also find difficult. With traditional symbolic systems, in contrast, the relation is reversed. Traditional "rules and representations" systems are very good at the tasks that humans find difficult, and very bad at tasks that humans find easy. This suggests that neural networks will provide far more realistic models of human cognition than traditional systems.

  The tasks that human beings and connectionist systems seem to be very good at correspond, broadly speaking, to the tasks that can be easily reduced to pattern-mapping operations: recognizing, completing, transforming, and associating patterns. These include visual perception/recognition tasks, categorization, recalling information from memory, and finding adequate solutions to problems with multiple, partial, and inconsistent constraints. The tasks that humans and neural networks are relatively bad at include, most notably, logical and mathematical calculations and formal reasoning in general.

  From a neural network perspective, the problem with a process such as formal reasoning is that it does not seem reducible to pattern-mapping operations: such reasoning conforms to a structure that does not seem to be replicable by way of pattern-mapping. However, the problem cuts both ways. According to the rules and representations approach, the ability of humans to engage in formal reasoning processes is to be explained in terms of there existing inside human brains both mental representations and rules governing the transformation of those representations. These rules and representations mirror those of a formal system such as logic or mathematics. One of the problems with this sort of approach is that it makes it difficult to understand why humans are so bad at formal reasoning processes such as those involved in mathematics or deductive logic. More precisely, on the traditional approach, it is difficult to see why humans should be subject to the characteristic patterns of error they in fact exhibit when engaged in formal reasoning. If formal reasoning is a matter of manipulating structures according to rules, and if the relevant structures and rules are contained in the brain, then it seems that we should be, if not infallible, at least a lot better than we are at formal reasoning.

  Neural network approaches face the opposite problem: explaining how humans can be so good at formal reasoning. Neural networks specialize in pattern-mapping operations, and processes of formal reasoning don't reduce to these. Therefore, it seems that neural network approaches are going to have difficulty explaining how humans have achieved the level of competence in formal reasoning that they have in fact achieved. In short, with respect to formal reasoning of the sort exhibited in mathematics and formal logic, humans are not as good as traditional approaches predict they should be, and they're not as bad as neural network approaches predict they should be.

  For our purposes, what is important is the strategy that connectionist theorists adopted to explain the human facility in formal reasoning. Rumelhart, McClelland, and the PDP Research Group's (1986) neural network account of our ability to engage in mathematical reasoning was based on the idea of embedding the network in a larger environment, one that the network was able to utilize in appropriate ways. Consider mathematical reasoning. In a fairly simple case of multiplication, say, 2 x 2 = 4, most of us can learn to just see the answer. This, Rumelhart et al. suggest, is evidence of a pattern-completing mechanism of the sort that can be easily modeled by a neural network. But, for most of us, the answer to more complex multiplications will not be so easily discernible. For example, 343 x 822 is not easy to do in the head. Instead, we avail ourselves of an external formalism that reduces the larger task to an iterated series of smaller tasks (see also Clark 1989). Thus, we write the numbers down on paper, and go through a series of simple pattern-completing operations (2 x 3, 2 x 4, etc.), storing the intermediate results on paper according to a well-defined algorithm. Rumelhart et al.'s point is that if we have a neural network that is embedded, in the sense that it is incorporated into a further system that is capable of manipulating mathematical structures external to it, then a process such as long multiplication, which ostensibly requires postulation of internally instantiated mathematical symbols, can be reduced to other processes that require no such thing. The main features of this embedded network (a simple code sketch follows the list below) are:

  1. A pattern-recognition device necessary for recognizing external structures such as "2," "x," "3," and so on.

  2. A pattern-completion device necessary for completing already recognized patterns such as "2 x 3 =."

  Both sorts of device are easily implemented in a neural network.

  3. A capacity to manipulate mathematical structures in the environment. Thus, for example, upon recognition of the pattern "2 x 3 =," the embedded system is able to complete that pattern and then, crucially, write or record the numeral "6." This then forms a new pattern for the system to recognize, and its completion and recording of this will, in turn, direct it to a further pattern to be recognized and completed, and so on.
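  The following Python sketch, which is not Rumelhart et al.'s own model but only an illustration of the strategy, shows how these three features can carry out long multiplication. The pattern-completion device is stood in for by a memorized single-digit times table, and the "paper" is an ordinary list onto which each intermediate result is written; at no point does the system hold an internal representation of the whole problem.

    # Long multiplication by iterated single-digit pattern completions,
    # with every intermediate result recorded on external "paper".
    TIMES_TABLE = {(a, b): a * b for a in range(10) for b in range(10)}

    def complete_pattern(a, b):
        # Stand-in for the pattern-completion device: it "just sees" a x b
        # for single digits, and can do nothing more than that.
        return TIMES_TABLE[(a, b)]

    def long_multiply(x, y):
        paper = []                                  # the external medium
        x_digits = [int(d) for d in str(x)][::-1]   # digits recognized one at a time
        y_digits = [int(d) for d in str(y)][::-1]
        for shift, yd in enumerate(y_digits):
            row, carry = [], 0
            for xd in x_digits:
                product = complete_pattern(xd, yd) + carry  # one small completion
                row.append(product % 10)                    # write this digit down
                carry = product // 10                       # note the carry
            if carry:
                row.append(carry)
            paper.append(int("".join(str(d) for d in reversed(row))) * 10 ** shift)
        return sum(paper)                           # add up the recorded rows

    print(long_multiply(343, 822))  # 281946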

 
