The Story of Psychology


by Morton Hunt


  The human engineering aspects include a number of physical features of the workplace and job that I/O psychologists pay attention to. Among them:31

  —the “work-space envelope,” including such factors as privacy and crowding, lighting, the spatial relationships of desks and chairs in relation to shelves, files, and doors, the best height for work surfaces, and many similar matters;

  —noise in the workplace, which can generate stress and interfere with cognitive processes;

  —specialization of the job, which makes for efficiency and high output, but workers who do the same thing all day (welding one corner of a car door, skinning chicken breasts, entering deposits and withdrawals on a computer) find their work monotonous, fatiguing, and lacking in meaning.

  Psychologists can make useful suggestions about these workplace characteristics, but all of them cost money, although the argument has been made that more comfortable and less bored workers actually do more and better work and that employee turnover is reduced.

  But human engineering is only one facet of the much larger subject of job satisfaction, a major concern of I/O psychologists. This is a broad and complex subject; we will content ourselves here with merely noting, first, the major organizational causes of job satisfaction, as summarized by psychologist Robert Baron of Rensselaer Polytechnic Institute and two co-authors.32

  —a comfortable, pleasant work setting (the result of good solutions to the three engineering problems just mentioned),

  —a fair reward system,

  —high respect for the boss,

  —participation in decision making, and

  —appropriate workload.

  In addition, there are four personal causes of job satisfaction:

  —the individual’s status,

  —seniority,

  —a good match between the employee’s interests and work, and

  —genetic factors. Genetic factors? Yes. Studies of identical twins separated at birth and raised apart have found that despite their different upbringings and life experiences, they have very similar levels of job satisfaction, which strongly suggests that innate personality traits play a considerable part in it.33

  Fitting the person to the job: In large part this consists of assessing the ability of potential employees to perform a particular job. But in the case of managers, it also calls for appraising them after some years on the job in order to determine who has been moving up and looks like high-level material, and who seems stuck and unlikely ever to contribute much. Companies have good reason to want to know which prospective employees to bet on. One insurance company reckoned in 1974 that it cost $31,600 to replace a salesperson and $185,000 to replace a sales manager; the figures would be roughly four times as large today.34

  Employee testing began, as we saw, before World War I. It has grown steadily ever since; nowadays a majority of large organizations and some smaller ones use tests in personnel selection. The evidence is that it pays off. A typical study, made for an artificial ice plant, found that of applicants for maintenance positions whose test scores ranged from 103 to 120, 94 percent were later rated as superior on the job; of those whose scores ranged from 60 to 86, only 25 percent were rated that highly.35

  Tests for blue-collar jobs range from paper-and-pencil quizzes measuring knowledge of the job to “work sample tests” in which the applicant performs tasks similar to those of the actual job. White-collar job tests similarly range from written ones measuring verbal fluency, numerical ability, reasoning ability, and other cognitive skills, to those in which the applicant does filing, gives directions based on maps, handles emergency phone calls, and the like.

  At many companies, applicants for managerial positions undergo a rigorous evaluation procedure known as assessment. Henry Murray, of TAT fame, and others developed assessment during World War II as a means of selecting intelligence agents for the OSS (Office of Strategic Services, the predecessor of the CIA). OSS assessment, as we saw in an earlier chapter, relies on personality tests and observations of the candidates in several artfully contrived situations. After the war, some of the psychologists who had worked in the OSS assessment project adapted the method to other purposes at the Institute for Personality Assessment and Research in Berkeley. Abandoning the qualifications of spies for more mundane concerns, they developed assessment protocols for dozens of specialties ranging from law school student to Mount Everest climber and from M.B.A. candidate to mathematician.36

  But it was Douglas Bray, a psychologist at AT&T, who worked out the method of personnel assessment that became the model for American business and industry. Bray, born in Massachusetts, had made his way as far as graduate school at Clark University, where he earned a master’s in psychology before being drafted in 1941. He was assigned to the Air Corps’s aviation psychology program, where he helped create paper-and-pencil tests, psychomotor skills tests, and simulations to screen candidates for training as pilots, navigators, bombardiers, and aerial gunners.37

  The work gave Bray an abiding interest in assessment. After the war he earned a doctorate in social psychology at Yale and taught for some years, but in 1955 he had the lucky break that started him on the real work of his life. A former professor recommended him to AT&T, which needed a psychologist to conduct a long-term study on selecting people who could become highly effective managers. At the time, AT&T was hiring as many as six thousand college graduates a year and promoting thousands more from vocational jobs to management jobs; knowing how to pick winners would be of immense value.

  In Bray, it had picked a winner before having a method for doing so. Within a year he had assembled a staff, devised an assessment protocol, and begun using it in an “assessment center” in the headquarters of Michigan Bell in St. Clair. (Michigan Bell was the first company in the AT&T system to participate in the managerial-career study.) At the assessment center, twelve management candidates at a time would spend three days undergoing interviews, completing a battery of cognitive tests, personality inventories, attitude scales, and projective tests, and taking part in three major behavioral simulations—leaderless group discussion, a business game, and “In-Basket,” an individual exercise in which each participant was handed a sheaf of memos, letters, and requests, and had to make decisions, write replies, and take other appropriate actions. Eight assessors, chiefly psychologists, spent a week observing and evaluating the participants in each group.38

  As in all longitudinal research, the hardest part for Bray was waiting to gather evidence that the assessment method was valid. Eight years, and again twenty years, after each participant’s assessment, Bray conducted reassessments. The results strongly validated his method. After twenty years, 43 percent of the college graduates who had been rated the most promising had reached the fourth (of six) level or higher of management, as against only 20 percent of those judged less promising. Of non-college men, 58 percent of those highly rated by the assessment had made it to the third level or higher, but only 22 percent of those not highly rated had risen that far.39

  Bray’s assessment center and method did not catch on for some years, but in the expansive economic atmosphere of the 1970s the approach mushroomed; by 1980 there were about a thousand assessment centers, and by 1990 some two thousand.40 Since then, the number has decreased somewhat because costs proved too high to be practical for most positions, but assessment centers continue to be widely used in the U.S. and almost every industrialized country for identifying or selecting senior-level talent.41 Today, assessment in a center can take as little as one day, and evaluation has been much speeded up by replacing paper-and-pencil tests with computerized Q-and-A programs, and group exercises with computerized and video-aided simulations.

  Many of the Bray techniques, in simplified and speeded-up form, are being used by the multitude of assessment organizations now operating on the Web.42 Bray has won six awards for his work as an applied psychologist, including one from the American Psychological Association, which presented him in 1991 with the Gold Medal for Life Achievement in the Application of Psychology.

  The Use and Misuse of Testing

  The testing of job applicants by employers is only a small part of what is now one of psychology’s most extensive influences on American life. Each year scores of millions of Americans take standardized multiple-choice tests published by over a hundred companies, some of which are multimillion-dollar enterprises. Thanks to the federal No Child Left Behind Act, in 2006 every student from the third to the eighth grade and one high school grade had to take state tests—about 45 million in all. (It was estimated by the Government Accountability Office that states would spend anywhere from $1.9 billion to $5.3 billion from 2002 to 2008 to implement No Child Left Behind–mandated tests.43) Add to that all the IQ tests given in schools throughout the nation, the standardized tests required for certification in the professions, the tests administered to many would-be employees by companies, the SAT, ACT, and other tests that play a role in college admissions, the personality and other tests given to patients by psychotherapists, and many others, and it is evident that testing is one of psychology’s most successful applications to daily life. It has become a major means by which our society makes decisions about people’s lives in education, employment, physical and mental health treatment, the civil service, and the military. And even love and mating: A number of dating services now use personality and other tests to generate “matches” between people.44

  Binet’s aim in developing intelligence tests, early in the century, was to benefit both the children and society by determining which children needed special education. Similarly, psychological and employment tests have always been basically diagnostic, meant to benefit the people being tested and those who deal with them. The extraordinary expansion of testing in the past several decades is evidence that it does serve these purposes. Testing is, in fact, essential to the functioning of modern society; schools, universities, large industries, government, and the military would be crippled and all but inoperable if they were suddenly deprived of the information they gain from it.

  Yet testing can lend itself to misuse, the most serious example being the favoring of certain racial and economic groups and the handicapping of others. The obvious case in point is the effect of testing on the educational and employment opportunities of whites as compared with blacks, Hispanics, and other disadvantaged groups.

  To people with an unqualified hereditarian view of human abilities, the use of intelligence and achievement tests poses no ethical problem. They believe that middle- and upper-class people do better on such tests, on the average, than lower-class people simply because they are better intellectually endowed by nature. As we saw, the followers of Galton were convinced that heredity accounts for the differences between the average scores on IQ and other mental tests of people of different classes and races. It was on this basis that schools throughout the country began testing students fairly early in the century and placing the higher-scoring in academic programs and the lower-scoring in “vocational” programs, thus preparing students for what were taken to be their manifest stations in life.

  If that reasoning were correct, such testing and placement would be not only fair but in the best interests of the individuals and of society. But what if the test scores reflect the influence of environment? What if poverty and social disadvantage prevent children and adults from developing their latent abilities, causing them to score lower than those from favored backgrounds? If that is the case, the use of test scores to measure supposedly innate ability and to determine each individual’s educational and employment opportunities is a grave injustice and a major source of social inequity.

  Time and again, for more than sixty years, controversy has raged over the extent to which the scores of IQ and other cognitive ability tests measure innate abilities and the extent to which they reflect life experience. But it became clear in recent decades that the data used by both hereditarian and environmentalist psychologists, chiefly derived from cross-sectional samples (samples of people of different ages), did not adequately explain the processes observed by Piaget and other developmental psychologists. Longitudinal studies tracing the course of development in individuals revealed that nature and nurture are not static, fixed components but are interactive and highly variable over time. At any point in life, an individual’s intellectual and emotional development is the product of the continuing interaction of his or her experiences and innate capabilities.

  Then, too, most developmentalists have come to believe that different genotypes are affected to different degrees by environment; each has its own “reaction range.” As Irving Gottesman, emeritus professor at the University of Minnesota Medical School, has explained, an individual with Down syndrome may, in an enriched environment, attain a level of intellectual development only modestly higher than he would in a restricted, impoverished environment; an individual with the hereditary equipment of a genius may, in an excellent environment, reach a level of development very much higher than he would in a poor environment.45 Thus, at low levels of innate ability the influence of environment is far less than it is at high levels.

  Such generalizations, however, tell us only about categories, not about the relative influences of nature and nurture on any one person; there are too many idiosyncratic and incalculable factors in each person’s history to permit analysis of the relative roles of heredity and environment on the individual’s development. It is therefore impossible, at least at present, to precisely determine innate intellectual ability from an individual’s test scores.

  That being so, how can testing be used to determine schooling and job placement without unfairly benefiting privileged middle-class persons and unfairly penalizing the disadvantaged? The answer, so far, has been to control testing by political and legal means. The Civil Rights Act of 1964 and its amendments gave minority and other disadvantaged groups a legal toehold from which to attack testing as discriminatory and to demand remedial action. They challenged educational and employment tests in court, sometimes successfully, on the grounds that some of the materials are familiar to whites but not to most minority groups and, more broadly, that minority groups, particularly blacks and Hispanics, grow up under such social disadvantages that any test, even one based on symbols rather than words and ostensibly “culture fair,” is unfair.

  The radical remedy demanded by some activist groups at the height of the civil rights ferment in the 1960s was the abandonment of testing, and, as mentioned earlier, the city administrations of New York, Washington, D.C., and Los Angeles actually banned intelligence testing in the elementary schools.46 But the opponents of testing had majority power only in a few large cities, and in any case placing slow learners and the handicapped in the same classrooms as normal and gifted children so slowed down the education of the latter group that the efforts to eliminate testing soon failed.

  Similar attacks on the use of college qualifying tests were made by some civil rights activists and groups. Ralph Nader, for one, charged in 1980 that the SATs discriminate against minority students, most of whom come from culturally impoverished backgrounds. Complaints and pressure against the SATs continued. Spokespersons for minorities have lately kept up a drumfire of charges against the SAT, claiming among other things that analogies used in the test are culture-bound and unfair to students with nonwhite, non-middle-class backgrounds, as are certain items using special, class-related words like “regatta,” and that the readers who grade a new writing section in the SAT are likely to favor stylistically and grammatically Standard English, marking down students whose style employs idioms, phrases, or word patterns more common to communities of color. The College Board vigorously denies all of these charges, asserting that there is no research indicating that analogy questions are culturally biased, that data about the use of “regatta” show that minority students found the question using it no more difficult than did white students, and that the English teachers who read the essays “are trained to ignore errors in grammar, spelling, or punctuation until those errors are so bad as to get in the way of making sense of the student’s argument.”47 The jury is still out.

  In the realm of employment testing, activists scored a major success, at least temporarily. The General Aptitude Test Battery (GATB), which measures a number of cognitive abilities and some aspects of manual dexterity, was developed in the 1940s by the U.S. Employment Service and was long used by that bureau and many of its state and local offices as the basis of referrals to employers. But the average GATB scores of minority groups were well below those of the majority groups, so if test scores resulted in, say, 20 percent of whites being referred for a particular job, only 3 percent of blacks and 9 percent of Hispanics might be referred for the same job.

  The amended Civil Rights Act made it illegal to use the scores in this way, not because the tests failed to measure abilities wanted by employers but because national policy required giving the disadvantaged compensatory advantages.48 Rulings by the Equal Employment Opportunity Commission and a number of court decisions led to a solution known as “within-group norming” or “race norming.” Under this policy, test takers were referred for jobs not on the basis of their raw scores but according to where they ranked within their own racial or ethnic group. A black who scored in the eighty-fifth percentile of black test takers would be put on an equal footing with a white who scored in the eighty-fifth percentile of the whites, even though the black’s score was lower than the white’s. A black with the same score as a white would be rated higher than the white.49 In the 1980s the employment services of thirty-eight states used race norming, some more than others. Employers, by and large, went along with the method, mainly because it helped them meet government affirmative action requirements.
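  To make the arithmetic of within-group norming concrete, the short Python sketch below (not from Hunt’s text) computes percentile ranks within each group from a handful of hypothetical scores and generic group labels; it simply illustrates the mechanism described above, in which two candidates with different raw scores can end up at the same within-group percentile.

from collections import defaultdict
from bisect import bisect_right

def within_group_percentiles(candidates):
    # candidates: list of (name, group, raw_score) tuples; the groups and
    # scores used here are purely hypothetical, for illustration only.
    scores_by_group = defaultdict(list)
    for _, group, score in candidates:
        scores_by_group[group].append(score)
    for scores in scores_by_group.values():
        scores.sort()
    # A candidate's percentile is the share of scores in his or her own
    # group that fall at or below the candidate's raw score.
    results = []
    for name, group, score in candidates:
        group_scores = scores_by_group[group]
        pct = 100.0 * bisect_right(group_scores, score) / len(group_scores)
        results.append((name, group, score, pct))
    return results

applicants = [
    ("A1", "group_1", 72), ("A2", "group_1", 65), ("A3", "group_1", 58),
    ("B1", "group_2", 60), ("B2", "group_2", 54), ("B3", "group_2", 47),
]
for name, group, raw, pct in within_group_percentiles(applicants):
    # B1's raw score (60) is lower than A1's (72), yet each ranks at the
    # top of his or her own group, so both receive the same percentile.
    print(f"{name} ({group}): raw score {raw}, within-group percentile {pct:.0f}")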

 
