
War Play


by Corey Mead


  To meet this need, Yerkes and a team of psychologists developed the Alpha and Beta tests, the armed services’ first group examinations. The Alpha test, administered to literate English-speakers, was a multiple-choice written examination, while the Beta test, which did not use language, was intended for illiterates and non-English-speakers. The tests’ formats would not look unfamiliar to test-takers today—they featured questions based on synonyms and antonyms, analogies, and the unscrambling of sentences. The results of the Alpha led to cries of a national literacy “problem”: of the 1.7 million men who took the exam, 30 percent could not read the form well enough to understand it. Given that the majority of these men had had some type of formal schooling, the results alerted educators nationwide to problems with reading instruction in the schools, the first time that methods of teaching literacy had been debated on such a large scale.

  From the army’s perspective, the tests’ benefit was that they provided supposedly objective criteria with which soldiers could be classified, trained, and placed into appropriate jobs—including, importantly, officer training. The tests could also be used to identify soldiers with low skills who, provided they received remedial training, might still perform some useful function. And while intelligence testing convinced some cynics that many adults weren’t mentally capable of benefiting from education, the war years also proved that thousands of people thought uneducable could gain basic reading skills in a mere six to twelve weeks. By 1919, almost 25,000 illiterate and nonnative personnel had received this quick-fix instruction.

  The large amount of data collected by Yerkes and his colleagues was ultimately used to more pernicious effect than their military mandate called for. Manipulated and inaccurately analyzed in a way that supported racist and eugenicist beliefs for decades to come, the data, as Stephen Jay Gould wrote in his classic work The Mismeasure of Man, gave rise to three “facts” that greatly influenced American social policy:

  The average mental age of white American adults [stood] just above the edge of moronity at a shocking and meager thirteen;

  European immigrants [could] be graded by their country of origin . . . The darker peoples of southern Europe and the Slavs of eastern Europe [were] less intelligent than the fair peoples of western and northern Europe;

  The Negro [was] at the bottom of the scale with an average mental age of 10.41.

  To the major proponents of testing at the time, almost all of whom were ardent eugenicists, the data seemed to verify their extant racial and cultural prejudices—prejudices that continued to be built into the structure of standardized testing throughout the twentieth century and on into today. Thomas Sticht notes that the army Alpha and Beta tests also “produced a way of thinking about intelligence and aptitude that has continued to underpin the use of mental tests” in the military and in our public schools, including the basic concept of mental categories and the idea that standardized tests can reliably sort people into the various categories.

  World War I also marked the first time that specialized training was a major military priority, as it became clear that soldiers needed to operate increasingly complex equipment. During the Civil War period, over 90 percent of enlistees had been engaged in nontechnical combat-related activities, while less than 10 percent of the force worked as craftsmen, clerical personnel, or technical personnel. By World War I, however, fewer than 50 percent of enlistees performed nontechnical combat-related duties, while skilled and semiskilled personnel were needed in great numbers. For this reason, adult and vocational training became a major concern for the armed forces during World War I, when technical specialists were in short supply. (Fewer than 20 percent of the necessary specialists were available without training.) This prompted the army to adopt a functional approach to literacy, in which lessons focused exclusively on job-related tasks. Comprehension, not just decoding, became the goal, as soldiers now had to be literate enough to read text-based manuals relating to their jobs. This resulted in a number of experimental programs, such as one at Camp Grant in Illinois, where soldiers received lessons in reading, math, and civics along with technical and vocational training. The declared aim of these programs was “to develop arrested mentality” as quickly as possible.

  The military’s adult education efforts were bolstered by the National Defense Act of 1916, which ensured that soldiers would “be given the opportunity to study and receive instruction upon educational lines of such a character as to increase their military efficiency and enable them to return to civil life better equipped for industrial, commercial, and general business occupations.” The war effort also led to the passage of the Smith-Hughes Act in 1917, which greatly expanded vocational training in high schools nationwide.

  After World War I, the attention generated by the army’s wartime experience, along with the growing eagerness of school administrators to embrace science-based methods of organization, led to the growth of standardized achievement testing as a mass educational movement. The first Scholastic Aptitude Test (SAT), given in 1926, was a modified version of the army’s Alpha exam—an unsurprising development, given that the well-known eugenicist Carl Brigham, a key figure in developing the army tests during World War I, was tapped by the College Board to develop the SAT. Many of the original test’s questions made the military connection explicit, as in this math problem: “A certain division contains 5,000 artillery, 15,000 infantry, and 1,000 cavalry. If each branch is expanded proportionately until there are in all 23,100 men, how many will be added to the artillery?”
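
  The arithmetic, for the record, reduces to a single proportion:

  \[ \frac{23{,}100}{5{,}000 + 15{,}000 + 1{,}000} = 1.1, \qquad 5{,}000 \times 1.1 - 5{,}000 = 500 \ \text{men added to the artillery} \]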

  While the bulk of the army’s instructional efforts ceased after the end of World War I because of the drawing down of American forces, scattered efforts to train illiterate soldiers remained. Using army booklets as source material, the lessons emphasized reading and writing, although the booklets’ content concerned history, civics, basic hygiene, and other topics aimed at enabling soldiers to become “productive” members of society. In this same period, the use of scientific personnel research for selection and classification mostly fell by the wayside, as the military contracted to its prewar size. At the beginning of World War II, the army inducted soldiers based merely on the inductee’s own assertion that he could understand simple orders given in English.

  Following the attack on Pearl Harbor, tests were brought back into play in order to screen the vast numbers of new soldiers, not only to determine their fitness for military service but also to slot them into aptitude-appropriate jobs. The Army General Classification Test (AGCT) took the place of the old Alpha. (Although the AGCT was intended as an examination of general learning ability, its norms were based solely on the responses of white male soldiers and Civilian Conservation Corps members.) As the war continued, the large numbers of illiterate and semiliterate inductees spurred the development of additional unwritten exams. Between 1941 and 1945, the years of U.S. involvement in World War II, almost every important American in the field of testing was involved with the military in some way.

  World War II also fueled the expansion of standardized testing by generating a need for the kinds of skilled and educated people who, once recruited, would be key military assets. This in turn focused attention on the importance of college study. With the enactment of the G.I. Bill in 1944, tens of thousands of veterans entered college for the first time. The sudden appearance of so many new students greatly increased the appeal of the SAT for school administrators, owing to the efficiency of its multiple-choice format. Though unintended by its creators, the G.I. Bill in effect did away with the elitist notion that blue-collar personnel were not fit for college study and that college should be reserved for the privileged.

  World War II brought other instructional changes as well. The large numbers of new personnel in the war, for example, required a training model based in large part on group learning in classroom environments. To facilitate this model, military psychologists made significant advances in the development and use of educational technology. These psychologists also examined the military’s existing learning principles to determine whether they really worked in the training realm. From this examination came a newfound military focus on such principles as “part-task training,” in which complex tasks are analyzed and then broken down into components that can be mastered more easily. This process is today a crucial part of the civilian “vocationalism” movement, which, as its name implies, emphasizes vocational training in public education.

  Between 1941 and 1945, the military launched a program of adult basic education whose scale was perhaps unrivaled in human history. This program established permanently for the armed forces the idea that adult learning could significantly improve job performance. It also made education relevant to soldiers by giving them academic credit for knowledge gained in the workplace, another innovation that quickly made its way into the civilian world.

  Given the relatively short time period (twelve weeks at most) in which soldiers received instruction during the war, the focus was on specific military-related content knowledge, and instructional materials were tailored to a fourth-grade reading level. At the same time, the criterion by which an “acceptable” level of literacy among soldiers was defined fluctuated widely depending on manpower needs. Between 1941 and 1945, the minimum standards for enlistment underwent constant revision; the more recruits the army needed, the more it depended on evidence of educability, rather than level of education, in determining who could join its ranks.

  The increased military emphasis on education during World War II also resulted in the development of General Educational Development (GED) tests, which enabled soldiers to use their military experience to qualify for high school equivalency degrees. Initially designed by the staff of the United States Armed Forces Institute (USAFI), the tests were at first restricted to military personnel and veterans. Beginning in 1947, however, the GED was offered to civilian adults; by the end of the next decade, more nonveteran than veteran adults were taking the test. Today, of course, the GED is used widely throughout North America for the purpose of high school equivalency certification. In the United States, it accounts for almost 15 percent of annual high school diplomas.

  The United States Armed Forces Institute was important for another reason, too. Tasked by the War Department to be a correspondence school for enlisted men, and using the U.S. Postal Service at home and the army post office system abroad, the USAFI pioneered the large-scale use of distance education. Study by correspondence was viewed as an efficient and attractive alternative for soldiers stationed overseas: there was no set starting or ending date, students could work at their own pace, and the courses were equally appropriate for individuals and groups. By the end of World War II, the USAFI was operating branches in places as varied and far-flung as Puerto Rico, Anchorage, London, Rome, Manila, New Delhi, Cairo, and Panama.

  The Development of Computer-Based Learning

  During World War II, the military’s focus on literacy stemmed less from the war’s expanding reach than from crucial and complex changes in the actual conduct of war. As Deborah Brandt relates, military-related advances required soldiers to “mediate technologies, gather intelligence, operate communication systems, and run bureaucracies that were growing faster and more elaborate” at every turn. Yet the technological advances of World War II didn’t just drive the military’s literacy standards upward—they also, and more profoundly, “changed the rationale” for education, from one of morality to one of productivity. Literacy had long been conceived of as a tool for social and religious stability: it enabled citizens to read their Bibles and served to acculturate and “tame” the immigrant masses. But during the war, literacy was detached from its moral associations. It became instead what Brandt calls “a needed raw material in the production of war.” As a result, education was transformed from “an attribute of a ‘good’ individual” into a resource “vital to national security and global competition.”

  The technological basis of this shift has played out in the military’s post–World War II learning innovations, as the military, more than any other single institution, has forged the link between education and technology. For decades the armed forces have been the world’s most significant funders and developers of computer-based instruction and educational technology, both independently and through partnerships with industry. As one account puts it, “Computers would probably have found their way into classrooms sooner or later, but without [ongoing military support] it is unlikely that the electronic revolution in education would have progressed as far and as fast as it has.” In addition to computers, the military has funded multimedia applications, simulations, instructional films, instructional television, overhead projectors, intelligent tutoring systems, teaching machines, and language laboratories. Not all of these were invented by the military, but all were developed, refined, and popularized by the military.

  Computer-based education has its roots in World War II–era military research into man-machine systems—that is, integrated systems of humans and machines. Historian Martin van Creveld outlines the questions that in the 1940s and ’50s captured the imaginations of military engineers and psychologists: “Which are the strong points of man, and which are those of the new machines? How . . . should the burden of work be divided among them? How should communication between man and machine . . . be organized?” These inquiries lay at the heart of the military’s initial forays into computer-based education.

  In 1958, behavioral psychologist B. F. Skinner published an influential article, “Teaching Machines,” that attracted wide notice in the military and industry alike. (A teaching machine is a device used for automated instruction.) That same year, the American Psychological Association and the Air Force Office of Scientific Research held symposiums on the topic within four months of each other. Not coincidentally, 1958 also saw the founding of the Advanced Research Projects Agency, the agency now known as DARPA. Conceived of as a response to the perceived Soviet threat, DARPA provided essential funding for research in computer-based instruction.

  The most notable project of the era was PLATO (Programmed Logic for Automated Teaching Operations), based at the University of Illinois at Urbana-Champaign. Funded by the air force, army, and navy, PLATO was a computer system designed expressly for educational purposes; researchers wanted to highlight both the pedagogic and the economic advantages of computer-based instruction. In a pioneering turn, PLATO had a plasma screen that offered text and rudimentary animation. The project also led to the development of a programming language for educational software. For years PLATO was the world’s most widely used computer-based instructional system.

  Many of the other most significant advances in computer-based education during the 1950s and ’60s derived from the air force’s Semi-Automatic Ground Environment, or SAGE. From this project came significant advances in core memory, keyboard input, graphic displays, and digital communication over telephone lines. The SAGE system also pioneered the design of “user-friendly interfaces”—think, for example, of help menus or online instruction aids—that teach users how to work with a particular system. On a more abstract level, the SAGE program fostered the idea that computer-based systems could be used as tools to enhance cognition. In this model, the functioning of the human brain was reconceived in the image of computer processing, which ultimately led to the field of cognitive psychology. Decision-making and problem-solving, in both real-world and simulated environments, were the focus of SAGE’s computer-based instructional efforts. Altogether, SAGE marked the world’s first example of computer-managed instruction.

  The Education Gospel and Technological Change

  As this chapter shows, the military has had a significant influence on American education. Channeled through a variety of intermediaries, especially the corporate world, military-sponsored methods, concepts, and technologies have repeatedly ended up in our public schools. The military’s longstanding technological focus also explains why it has been a driving force behind what W. Norton Grubb and Marvin Lazerson term the “education gospel,” the social assumptions surrounding the postindustrial transformation of public education’s purpose. This is how the authors describe the gospel’s vision: “The Knowledge Revolution (or the information society, or the high-tech revolution) is changing the nature of work, shifting away from occupations rooted in industrial production to occupations associated with knowledge and information. This transformation has both increased the skills required for new occupations and updated the three R’s, enhancing the importance of ‘higher-order’ skills, including communications skills, problem solving, and reasoning.” Without the military’s seminal role, this transformation would not have assumed the same shape and form nor happened so quickly.

  In keeping with this transformation, the authors note, workers today are required to engage in the kind of lifelong learning that enables them to keep up with rapid technological advances. The military has shaped this larger societal shift by providing a great deal of the technical apparatus and institutional rationale behind it. Decades ago, sociologist Daniel Bell noted that military technology was the “major determinant” of what he later termed the “information society.” Taking his cue from Bell’s analysis, Douglas Noble argues that the military’s influence on “information theory, systems analysis, nuclear energy and transistors, . . . automation, robotization, [and] bioengineering” made it the “advance guard” of our high-tech economy.

 
