by Joshua Cohen
That was the hypothesis of interwar American psychologists, whose attempts at conditioning attention would capitalize—commercialize—Behaviorism. Meanwhile, the Soviet Union had nationalized the attention of its comrade-citizens, so as to redistribute it more equally, if more covertly, in bedroom cameras and water-closet mics. Only an animal can read the name Pavlov without a reaction.
Ivan Petrovich Pavlov (1849–1936). His experimentation with animals conditioned him to zoomorphism, a belief in his animal self. To Pavlov, animal life was associative: a furry warren of innate and acquired reflexes, the former of which are instinctive (developed by the species for survival, and heritable), the latter, learned behaviors. The past teaches the future how to respond; everything not a response is a stimulus to self, while your own responses are merely stimuli to others—especially in conditions of cost/benefit, or punishment/reward (the system is nonexistent but rigged).
Behaviorism rejected volition in the sense of free will, and so sought to deny the discriminatory, or preferencing/privileging, aspects of attention (what to ignore, the ordering of responses to simultaneous stimuli). It accomplished this by contextualizing them socially: If you believe you have volition, and you’re surrounded by others who believe they have volition, you might be rewarded; however, if you believe you have volition, and you’re surrounded by others who don’t believe they have volition, you will be punished, for certain (Pavlov died the year of the great show trials and purges). For John Watson (1878–1958), speaking for Behaviorism, the school he established: “Attention is merely, then, with us, synonymous with the complete dominance of one habit system.” Non-Behaviorist theorists of attention were misbehaving, in the words of Gilbert Ryle, by “misdescribing heed in the contemplative idiom” (The Concept of Mind, 1949).
Gestalt psychology (Gestalt: “entirety of form,” or “shape,” “quiddity,” “haecceity”), codified coevally with Pavlovian conditioning, appears to offer a middle ground: While some Gestaltists expressly denied attention, others just attempted to deprive the human of a choice in its control. Emerging from the laboratory experiments of Wundt, Gestalt sought the data to prove, in the definition of the discipline’s principal founder, Max Wertheimer (1880–1943), that all physical and mental phenomena both were, and had to be perceived and conceived of as, “wholes [Ganzen], whose behavior is not determined by that of their individual parts [Stücke], conversely the behavior of the parts is determined by the structural laws [Strukturgesetzen] of the whole.” To Gestalt, the wholeness of attention, then, had to consist of its response, and the constituent piecework—what neuropsychologists regarded as the discriminatory or preferencing/privileging capacities—was subordinated as features, or behaviors, of the stimulus itself. This demotion was merely an auxiliary purpose of Danish psychologist Edgar Rubin’s The Nonexistence of Attention, 1921, Gestalt’s extremest attention text—an attempt to disprove attentivity by proving that a person was unable to selectively focus on one specific reading of an optical illusion (either “the figure,” or “the ground”).
In illusions like “Rubin’s Vase,” a specific reading (of vase or faces) can result only in a reading of the whole (vase and faces), while a reading of the whole can result only in a reading of the specific (mentally, however, one can conceive of either and both, the specific and the whole).
Figure and ground constantly change, yet the differences between them, according to Rubin, “are essentially concrete”—“thus the use of the term ‘attention’ was rendered obsolete.”
* * *
I’VE WRITTEN THIS ESSAY, I’ve written thus far, because I was interested in the subject. Which was: whether I was able to write about something I wasn’t interested in, something I loathed. I needed the money. I was responding to pressure (social/professional), a blinking cursor (the computer’s bell).
I’d always wanted to write an essay about nothing that was also an essay about everything, but the only thing I’d lacked was a subject, until.
It’s a curious feature of science papers on attention—they’re all much shorter, divided into much shorter sections, than anything I’m used to reading from the academic humanities. Also, they repeat too frequently—too frequently. In terms of structure, it’s as if their researchers couldn’t trust me, couldn’t trust themselves, to pay attention throughout. Each begins with an Abstract, proceeds to an Introduction, moves to a Methodology, transitions to a Results, and ends with a Conclusion, yet at each stage reiterating a later, or earlier, stage, and so progressing and regressing both—from the conclusiveness of the Abstract (“We have examined the efficiency of attentional networks across age and after five days of attention training [experimental group] compared with different types of no training [control groups] in four-year-old and six-year-old children. Strong improvement in executive attention and intelligence was found from ages four to six years. Both four- and six-year-olds showed more mature performance after the training than did the control groups”), to the results-expectancy of the Introduction (“In this study, we explore how a specific educational intervention targeted at the executive attention network might influence its development. We explore training at ages four and six years so that we might compare influence of specific training at these two ages with general improvement due to development. The intervention we developed was designed to train attention in general, with a special focus on executive control in children of four years of age and older. We adopted a method used to prepare macaque monkeys for space travel and modified the various training modules to make them accessible and pleasant for young children. Before and after training, we assayed attention skills of the children by giving them the Child ANT [Attention Network Test, available @ sacklerinstitute.org/cornell/assays_and_tools] while monitoring brain activity from 128 scalp electrodes. We also measured their intelligence. Their parents filled out a temperament questionnaire about the children as well”), to the Methodology, which is attentivity itself, the very tautology being tested for (“Electroencephalogram [EEG] Recording and Data Processing: […] Forty of the forty-nine four-year-old participants and twenty-three of the twenty-four six-year-old participants agreed to wear the sensor net that allows acquiring EEG data. […] Genotyping Procedure: Cheek swabs were collected from most of the six-year-olds […] and genotyping of the DAT1 gene was performed. […] Training Program: The first three exercises taught the children to track a cartoon cat on the computer screen by using the joystick. […] The anticipation exercises involved teaching the children to anticipate the movement of a duck across a pond by moving the cat to where they thought the duck would emerge. […] The stimulus discrimination exercises consisted of a series of trials in which the child was required to remember a multiattribute item (different cartoon portraits) to pick out of an array”), to the introduction-memory of the Results—INSERT CHARTS—then, again, the abstracted Conclusion: The six-year-olds exhibited better executive attention than the four-year-olds, in the main, though the trained four-year-olds produced an EEG pattern similar to that of the untrained six-year-olds, while the trained six-year-olds evinced a more adult pattern, at midline frontal brain/electrode position Fz—though, too, genetics matter.
Those with the homozygous long/long genotype had less difficulty resolving conflict than those with the heterozygous long/short genotype. “We found that the long form of the DAT1 gene was associated with stronger effortful control and less surgency (extraversion). This finding suggests that the less outgoing and more controlled children may be less in need of attention training” (quotations from “Training, maturation, and genetic influences on the development of executive attention,” M. R. Rueda, M. Rothbart, B. McCandliss, L. Saccomanno, M. Posner, Proceedings of the National Academy of Sciences, 2005).
(positioning of EEG electrodes on cerebral cortex; letters label lobes: [F] frontal, [T] temporal, [P] parietal, [O] occipital; odd numbers label left hemisphere, even label right; the closer to the midline, the smaller the number; “z” indicates electrode placement directly atop the midline)
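If the caption’s convention seems opaque, it is mechanical enough to execute. Below is a minimal sketch in Python (mine, not the researchers’; the function name and the sample labels are illustrative assumptions) that decodes a 10–20 system electrode label according to the rules just given:

```python
# Minimal sketch: decode a 10-20 system EEG electrode label per the
# caption's rules. Illustrative only; not from the cited study.

def decode_electrode(label: str) -> str:
    # Letters label lobes: frontal, temporal, parietal, occipital.
    lobes = {"F": "frontal", "T": "temporal", "P": "parietal", "O": "occipital"}
    lobe = lobes.get(label[0].upper(), "unknown")
    suffix = label[1:]
    if suffix.lower() == "z":
        # "z" marks placement directly atop the midline.
        return f"{label}: {lobe} lobe, atop the midline"
    number = int(suffix)
    # Odd numbers label the left hemisphere, even the right;
    # the smaller the number, the closer to the midline.
    hemisphere = "left" if number % 2 else "right"
    return f"{label}: {lobe} lobe, {hemisphere} hemisphere (number {number}; smaller sits nearer the midline)"

for tag in ("Fz", "F3", "F4", "O1"):
    print(decode_electrode(tag))
```

Run on those labels, the sketch places Fz, the electrode the study’s Conclusion singles out, squarely on the frontal midline.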
The experimental method, which once led with its theories or hypotheses, tested them, and then presented its conclusions, now leads with its conclusions (what follows, it follows, is conclusive as well). In cognitive neuroscience especially, different groups are subjected to the same or similar tests, though attention’s traditional tempororubric of response time, RT, has been amplified by the spatiorubric of imaging: initially with MRI (magnetic resonance imaging, which uses a magnetic field to align the body’s hydrogen atoms, emits radio-frequency pulses that force the atoms out of alignment, then terminates the pulses; the energy expelled by the atoms as they automatically realign is converted into a picture) and PET machines (positron emission tomography, which detects gamma rays emitted by a radionuclide tracer, bound to a glucose molecule, injected into the subject)—1970s to 2000. Both machines, though, have been millennially supplanted by the fMRI, a device that, as implied by the prefix, gives “functional,” real-time (another RT) portraits of the brain, measuring neuron activity from oxygenated bloodflow without exposing the subject to undue radiation.*1
In attention experiments, results must be compared from among these machines with the findings of genetic testing, and those too with the results of batteries of aptitude and personality assessments—producing, cumulatively, better likenesses of the subjects and scientists both, yet no more complete a concept of what attention is, let alone of what it can be. Instead, what’s obtained from the color blobs of the fMRI—kinetic fluoresced figurative Rorschachs—is not the idea of what’s happening, only the idea that something’s happening: somewhere, somewhen, somewhy. Technological impartiality can be a misnomer (that moldering bias effect by which the subject being scrutinized becomes cognizant of the scrutiny, and so behaves “atypically,” and so scans “atypically”); brain activity can be misleading (the realization that all neuronal activity following a stimulus doesn’t have to be stimulus-response)—though such problems pertain to attention only if attention is what is being examined, not just how machines and brains interact if given parallel, yet intersecting, tasks.
Neuroacademia, then, becomes split into lobes—complementary studies. A scanner stimulates the subjects as they complete a routine. The scientists monitor, attuned to whether all experimentally valid cues—like an expected light or sound—engage (for example) the dorsal frontoparietal regions (regarded as the primary network of “selective attention”), and all experimentally invalid cues—like an unexpected light or sound—engage that same dorsal region but in cooperation with (for example) a secondary ventral frontoparietal network (involved with interrupting “selective attention” and reorienting it toward evaluation of the relevance of new stimuli).
Note, then, that in one experiment alone, the attention being sought can be presented as both a property (the automatic concentration of mind) and a function, which doesn’t just select (what to attend to), but also discriminates (as to how to attend), and is even capable of thought (in its differentiation between experimentally valid and invalid cues and between correct and incorrect responses).
“Attention,” conclusively, must have “a basic static architecture,” along with “a substrate processual structure,” neither of which had been perceivable, until it was conceived, and so became “attention,” at least in the wholeness of a part.
But if selectivity is to be admitted to attention, it must be selected among itself—with psychological distinctions admitted between “abience,” the decision to attend to another thing based on a refusal of present attention; “adience,” the decision to attend to the present thing based on a refusal of any and all other attention; “acturience,” the decision to attend to a thing with the purpose of changing its nature; and “avoidance,” the inability to decide on an alternate attention—and the methods of all must be tracked. Preferencing/privileging order is tested by instructing subjects to complete a series of tasks, in any order they choose, then to complete another related series in an illogical order—“related” and “illogical” as defined by the experiment.
Neuroacademia discriminates most intensely among attention’s discriminatory aspects. Not only can attention be “overt,” “covert,” “active/focused,” “directed/voluntary,” “split/divided,” “distributed,” “sustained,” “restored/alternating,” “shifting,” “deconcentrated,” “conditional,” “phasic,” “limbic,” “cortical,” but it can be all that and also “exogenous”—externally driven, stimulated from without, from the “bottom up”—or “endogenous”—internally driven, stimulated from within, from the “top down”: “introspective,” “retrospective.” Neuroacademia relevates these designations to the nth degree—a thoroughgoing depth.
Following a stimulus, some neurons “fire”—respond—immediately, with “spikes,” while others are constantly “chattering,” or “babbling,” emitting signals of increasing length in response to increasing stimulus amplitude/intensity. The territories of this verbose combat are divided into demi-hemi-spheres, by either attentive activity or type: one region “activates,” “controls,” “drives,” or “executes”; another serves to “modulate,” or “inflect.” The first affects neurons directly, increasing or decreasing their synaptic activity; the second affects them indirectly (a third might integrate them with a fourth). The idea is that “executive” and “legislative” branches are separated from each other to make the best or at least most efficient use of prior information or memory (long and short: the “judiciary”), so as not to retain any information or intelligence previously registered—no duplicates. (Quotes in the paragraphs above are sourced from scholarly lit published since 2000.)
Ultimately, though, the major cleavage in the brain, and so in attention, obtains between the areas that receive the stimulus—the primary visual cortex (cerebral cortex, occipital lobe), and the primary auditory cortex (cerebral cortex, temporal lobe); along with the extrastriate cortices, bilocated adjacent to the primary visual, which aid in spatial and shape distinctions; and superior temporal area 22, bilocated adjacent to the primary auditory, which aids in pitch recognition on the right side, and word recognition on the left—and the areas that process it. The connections between such sensory receptors and the disparate sites of sense processing break down in the lab, under the scrutiny of technological receptors and processors, especially if the subjects themselves are already “broken.” Which is the most enduring neuroirony: If attention is locatable, it is sensory; if it is measurable, it is processual; as both, as neither, it can be defined only by its absence—by a subject’s inability to focus; an inability to decide on what to focus; an inability to switch focuses; an inability to resist switching focuses—and this absence is most noticeably present in subjects who are “diseased.” They themselves become the flaw in the connection. In neuroacademia, positive evidence for attention deficit might be a lesion or tumor, but negative evidence remains deaf and blind. Anecdotal. Situational. It’s the traumatized who set the average, and what are impairments to them can only be hypotheses to the rest.*2
* * *
IN THE PROCESS OF digging around in the prop trunk of punishment—stockades, pillories, cages, loops, hoops, and saddles—Michel Foucault (1926–84) exhumed Jeremy Bentham’s 1791 panopticon, a prison-theater-in-the-round. What Bentham had proposed as a measure to rehabilitate criminals—immuring them in glassed cells situated around a central tower, from which a warden could observe them, without being observed himself—became, in Foucault’s conception, an architectural paragon of postwar Europe. Within this prison’s mortifying exposure, European citizens, bereft of the reciprocities of family and community, could at least count on the recognizance of the state.
Though concomitant with this governmental warden being able to see and hear, yet not be seen or heard—the panopticon was also always a panauricon—was the prisoners’ paranoia that no such entity existed. The inmates became so accustomed to the idea of an audience, in a sense, that when they began to doubt its existence—Sovietism had been checked, and torture was never officially sanctioned in Western prisons—they began to reform into an audience themselves. This, at least, was the emendation of Thomas Mathiesen (b. 1933), who posited that Foucault’s interpretation had been canceled by the fall of the Iron Curtain, but that the individual, far from being freed, had rather absorbed totalitarianism bodily, psychologically, to culminate in a warding—or governance—of self. A self-monitoring, an autoregulation, would define life in the globalized synopticon/synauricon. With the convicts converted to spectators, the “warden” became the ward—a force bestowing fame and craving capital. Where the few had engaged the many, now the many would engage the few—still not interacting personally, but through the walls of glass cells refurbished into voluntary vitrines, as transpicuous as prophylactics. Bentham’s original template, drafted by architect Willey Reveley, resembled an amphitheater, a tiered stadium fit for bread and circuses, touchdown dances and brute gang-rape cheers (the fans incarnating the on-field teams, reenacting the carnage), but also it resembled: a wraparound surveillant screen bank.