
The One World Schoolhouse: Education Reimagined


by Salman Khan


  The foregoing is not intended as a wholesale condemnation of our current educational system. I’m not proposing that we shut down the schools and start over. What I am suggesting, however, is that we adopt a more questioning and skeptical stance toward the educational customs and assumptions we’ve inherited. Those customs, as I hope I’ve made clear, were the products of particular times and circumstances, established by human beings with human flaws and limited wisdom, whose motives were often complicated. That doesn’t mean there aren’t some good ideas in our traditional approach. Most people who’ve been to school, after all, can read and write, know some basic math and science, and hopefully have picked up some useful social skills as well. To that extent, school works. But we do ourselves and our kids a disservice if we fail to look past those minimum requirements and recognize the places where the system has become creaky and archaic, and why old customs and standards no longer suffice.

  Swiss Cheese Learning

  As we’ve seen, our current system divides disciplines into “subjects,” and further divides the subjects into independent units, thereby creating the dangerous illusion that the topics are discrete and unconnected. While that’s a serious problem, there’s an even more basic failing here: Chances are that the topics themselves have not been covered thoroughly enough, because our schools measure out their efforts in increments of time rather than in target levels of mastery. When the interval allotted for a given topic has run out, it’s time to give a test and move on.

  Let’s consider a few things about that inevitable test. What constitutes a passing grade? In most classrooms in most schools, students pass with 75 or 80 percent. This is customary. But if you think about it even for a moment, it’s unacceptable if not disastrous. Concepts build on one another. Algebra requires arithmetic. Trigonometry flows from geometry. Calculus and physics call for all of the above. A shaky understanding early on will lead to complete bewilderment later. And yet we blithely give out passing grades for test scores of 75 or 80. For many teachers, it may seem like a kindness or perhaps merely an administrative necessity to pass these marginal students. In effect, though, it is a disservice and a lie. We are telling students they’ve learned something that they really haven’t learned. We wish them well and nudge them ahead to the next, more difficult unit, for which they have not been properly prepared. We are setting them up to fail.

  Forgive a glass-half-empty sort of viewpoint, but a mark of 75 percent means you are missing fully one-quarter of what you need to know (and that is assuming it is on a rigorous assessment). Would you set out on a long journey in a car that had three tires? For that matter, would you try to build your dream house on 75 or 80 percent of a foundation?

  It’s easy to rail against passing students whose test scores are marginal. But I would press the argument further and say that even a test score of 95 should not be regarded as good enough, as it will inevitably lead to difficulties later on.

  Consider: A test score of 95 almost always earns an A, but it also means that 5 percent of some important concept has not been grasped. So when the student moves on to the next concept in the chain, she’s already working with a 5 percent deficit. Even worse, many deficiencies have been masked by tests that have been dumbed down to the point that students can get 100 percent without any real understanding of the underlying concept (they require only formula memorization and pattern matching).

  Continue our progression through another half dozen concepts, which might bring our hypothetical student to, say, Algebra II or Pre-Calc. She’s been a “good” math student all along, but all of a sudden, no matter how much she studies and how good her teacher is, she has trouble comprehending what is happening in class.
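To see how quickly small gaps add up, here is a toy calculation of my own, not a model from the book: suppose each new unit builds on the last, so that an “A” student’s 5 percent deficit compounds multiplicatively at every step.

```python
# Toy model (illustrative assumption only): each unit's effective
# mastery is multiplied by the fraction retained from the unit before.
retained_per_unit = 0.95  # a score of 95: 5 percent missed each time

mastery = 1.0
for unit in range(6):  # half a dozen concepts later...
    mastery *= retained_per_unit

print(f"Effective mastery after 6 units: {mastery:.0%}")  # → 74%
```

Under this (admittedly crude) assumption, six straight A’s still leave the student commanding only about three-quarters of the material she now needs.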

  How is this possible? She’s gotten A’s. She’s been in the top quintile of her class. And yet her preparation lets her down. Why? The answer is that our student has been a victim of Swiss cheese learning. Though it seems solid from the outside, her education is full of holes.

  She’s been tested and tested, but the tests have lacked rigor and any deficiencies they identified weren’t corrected. She’s been given gold stars for her 95s—or even 100s—on superficial exams, and that’s fine; there’s nothing wrong with giving kids gold stars. But she should also have been given a review of the 5 percent of problems that she missed. The review should have been followed by a rigorous retest; if the retest resulted in anything less than 100 percent, the process should have been repeated. Once a certain level of proficiency has been attained, the learner should attempt to teach the subject to other students, thereby developing a deeper understanding of her own. As she progresses, she should keep revisiting the core ideas through the lenses of different, active experiences. That’s the way to get the holes out of Swiss cheese learning. It is, after all, much better and more useful to have a deep understanding of algebra than a superficial understanding of algebra, trigonometry, and calculus. Students with deep backgrounds in algebra find calculus intuitive.
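The review-and-retest cycle described above can be sketched as a simple loop. This is a toy simulation of my own, not actual classroom software, and the assumption that each targeted review closes half of the student’s remaining gap is purely illustrative.

```python
# Schematic of the mastery loop: review exactly what was missed,
# retest, and repeat until essentially nothing is missing.
def review(mastery):
    """Illustrative assumption: each review closes half the remaining gap."""
    return mastery + 0.5 * (1.0 - mastery)

mastery = 0.80   # a "passing" student, with 20 percent holes
cycles = 0
while mastery < 0.999:          # retest until essentially 100 percent
    mastery = review(mastery)   # review only the missed material...
    cycles += 1                 # ...then test again

print(f"{cycles} review/retest cycles to reach {mastery:.1%} mastery")
# → "8 review/retest cycles to reach 99.9% mastery"
```

The point of the sketch is the loop’s exit condition: progress is gated on demonstrated mastery, not on the calendar.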

  As a practical matter, our conventional classroom model does not generally allow for these customized reviews and retests, still less for moving beyond memorization to experience the concepts through open-ended, creative projects. This is one of the central ways in which the model proves archaic and no longer serves our needs.

  The experience of the historically good student who suddenly cannot follow an advanced class because of a Swiss cheese foundation could best be termed hitting a wall. And it is commonplace. Most of us have seen classmates go through this, or have gone through it ourselves. It’s a horrible feeling, one that leaves the student with nothing but frustration and helplessness.

  Let’s look at a couple of subjects where students—even previously very successful students—classically hit the wall. One of these is organic chemistry—a discipline that has converted generations of pre-med students into English majors. Is organic chemistry more difficult than freshman general chemistry? Yes—that’s why it comes after. But at the same time it’s just an extrapolation of the concepts in the first-year course. If you truly understand inorganic chemistry, then organic makes intuitive sense. But absent a firm grasp of the basics, organic chemistry doesn’t feel intuitive at all; rather, it seems like a daunting, dizzying, and endless progression of reactions that need to be memorized. Faced with such a mind-numbing chore, many students give up. Some, by superhuman effort, power through. The problem is that memorization without intuitive understanding can’t remove the wall, but only push it back.

  An even more vivid example of the power of Swiss cheese learning to wreak havoc is provided by calculus—possibly the most common subject on which students meet their Waterloo. This is not because calculus is fundamentally so difficult. It is because calculus is a synthesis of much that has gone before. It assumes complete mastery of algebra and trigonometry. Calculus has the power to solve problems that are beyond the reach of more elementary forms of math, but unless you’ve truly understood those more elementary concepts, calculus is of no use to you. It’s this element of synthesis, of pulling it all together, that gives calculus its beauty. At the same time, however, it’s why calculus is so likely to reveal the cracks in people’s math foundations. In stacking concept upon concept, calculus is the subject most likely to tip the balance, reveal the dry rot, and send the whole edifice crashing down.

  Another consequence of Swiss cheese learning is the very common but perplexing inability of many people—even very bright people with top-tier educations—to connect what they have studied in the classroom to questions they encounter in the outside world. Examples of this abound in everyday life; let me present one such instance from my own experience as a hedge fund analyst.

  My work in that capacity consisted partly in interviewing CEOs and CFOs of publicly traded companies so that I could understand their businesses well enough to make informed predictions about their future performance. One day I asked a CFO why his company’s marginal cost of production seemed higher than that of its competitors. (The marginal cost of production refers to the expense of creating one extra unit of a product, before the “fixed costs” of a factory and other corporate overhead have been figured in. In other words, it’s the labor and materials price of that one single widget.) The CFO looked at me—a tad suspiciously, as if he imagined that some sort of corporate espionage might be afoot—and told me that information about marginal cost was considered proprietary, and he had no idea where I’d come up with my number.

  I told him that he’d given me that number himself.

  He scratched his chin, crossed and uncrossed his ankles.

  I pointed out that included in the company’s publicly stated filings were numbers for the cost of goods sold from two different periods, along with reports of the number of units sold. Figuring out the marginal cost of production, then, was a matter of doing a little elementary math—specifically, solving two equations with two unknowns, a type of problem that is the stuff of eighth-grade algebra.
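That eighth-grade algebra can be spelled out. The figures below are hypothetical (the text quotes no actual filing numbers); with cost of goods sold and units sold for two periods, you have two equations in two unknowns, fixed cost and marginal cost:

```python
# Back-of-the-envelope marginal cost from two periods of public filings.
# All figures are hypothetical, for illustration only.
cogs_p1, units_p1 = 5_200_000, 40_000   # cost of goods sold, units (period 1)
cogs_p2, units_p2 = 6_100_000, 49_000   # cost of goods sold, units (period 2)

# Two equations in two unknowns:
#   cogs_p1 = fixed + marginal * units_p1
#   cogs_p2 = fixed + marginal * units_p2
# Subtracting the first from the second eliminates the fixed cost:
marginal = (cogs_p2 - cogs_p1) / (units_p2 - units_p1)
fixed = cogs_p1 - marginal * units_p1

print(f"Marginal cost per unit: ${marginal:,.2f}")   # → $100.00
print(f"Implied fixed cost:     ${fixed:,.0f}")      # → $1,200,000
```

Nothing here goes beyond the substitution-and-elimination technique taught in a first algebra course, which is precisely the author’s point.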

  Now, I tell this story not to embarrass or criticize the CFO. He was a bright guy with an Ivy League education, and his math background extended to calculus and beyond. Clearly, though, there seemed to be something wrong, something missing, in the way that he’d been taught. He’d apparently studied algebra with an eye toward getting a good grade on the test that was the climax of the unit; presumably the test centered on the working out of a handful of problems, and the problems consisted of solving for variables that had no apparent meaning in the real world. What, then, was the point of learning algebra? What was algebra actually about? What could algebra do? These very basic questions, it seemed, had gone unexplored.

  This failure to relate classroom topics to their eventual application in the real world is one of the central shortcomings of our broken classroom model, and is a direct consequence of our habit of rushing through conceptual modules and pronouncing them finished when in fact only a very shallow level of functional understanding has been reached. What do most kids actually take away from algebra? Sadly, the usual takeaway is that it’s about a bunch of x’s and y’s, and that if you plug in a few formulas and procedures that you’ve learned by rote, you’ll come up with the answer.

  But the power and importance of algebra is not to be found in x’s and y’s on a test paper. The important and wonderful thing is that all those x’s and y’s can stand in for an infinitely diverse set of phenomena and ideas. The same equations that I used to figure out the production costs of a public company could be used to calculate the momentum of a particle in space. The same equations can model both the optimal path of a projectile and the optimal price for a new product. The same ideas that govern the chances of inheriting a disease also inform whether it makes sense to go for a first down on fourth-and-inches.

  The difficulty, of course, is that getting to this deeper, functional understanding would use up valuable class time that might otherwise be devoted to preparing for a test. So most students, rather than appreciating algebra as a keen and versatile tool for navigating through the world, see it as one more hurdle to be passed, a class rather than a gateway. They learn it, sort of, then push it aside to make room for the lesson to follow.

  Tests and Testing

  Let’s now look at another aspect and some other implications of our long and largely unexamined habits of classroom teaching and testing. To do this, let’s start by asking one of those incredibly basic questions: What do tests really test?

  At first glance this question might seem so simple as to be trivial, but the longer and deeper you look at it, the less self-evident the answer becomes.

  Consider some things that tests don’t test.

  Tests say little or nothing about a student’s potential to learn a subject. At best, they offer a snapshot of where the student stands at a given moment in time. Since we have seen that students learn at widely varying rates, and that catching on faster does not necessarily imply understanding more deeply, how meaningful are these isolated snapshots?

  Tests say nothing about how long learning will be retained. Recalling what we’ve learned about how the brain stores information, retention involves the effective transfer of knowledge from short-term memory to long-term memory. Some students seem to have a knack for keeping facts and figures and formulas in short-term memory just exactly as long as they need them for a grade. After that, who knows? Conventional testing doesn’t tell us.

  Testing tells us little or nothing about the why of right or wrong answers. In a given instance, does a mistake suggest an important concept missed or only a moment’s carelessness? If a student fails to finish an exam, did she give up in frustration or simply run out of time? Given the time she needed, how well might she have done? On the other hand, what does a correct answer tell us about a student’s quality of reasoning? Was the correct answer the result of deep understanding, a brilliant intuition, rote memorization, or a lucky guess? Usually it’s impossible to tell.

  Finally, tests are by their nature partial and selective. Say a particular module has covered concepts A through G. The test—by design or by randomness—mainly addresses concepts B, D, and F. The students who, on a hunch or by sheer dumb luck, have geared their preparation toward that subset of the subject matter will probably test much better. Does this suggest greater mastery of the entire subject? Again, given traditional classroom approaches, there’s just no way to know.

  So then, coming back to our original question—what do tests actually test?—it seems that the most that can be confidently said is this: Tests measure the approximate state of a student’s memory and perhaps understanding, in regard to a particular subset of subject matter at a given moment in time, it being understood that the measurement can vary considerably and randomly according to the particular questions being asked.

  That’s a pretty modest statement of what we should reasonably expect to glean from testing, but I would argue that it’s all that the data justify. To be sure, the data could and should be improved; as we’ll see, broadening and deepening the range of what we can learn from students’ exercises and test results is at the very heart of the improvements I would propose to our current system. For now, however, suffice it to say that our overreliance on testing is based largely on habit, wishful thinking, and leaps of faith.

  For all of that, conventional schools tend to place great emphasis on test results as a measure of a student’s innate ability or potential—not only on standardized tests, but on thoroughly unstandardized end-of-term exams that may or may not be well designed—and this has very serious consequences. What are we actually accomplishing when we hand out those A’s and B’s and C’s and D’s? As we’ve seen, what we’re not accomplishing is meaningfully measuring student potential. On the other hand, what we’re doing very effectively is labeling kids, squeezing them into categories, defining and often limiting their futures.

  This outcome is actually what the Prussian architects of our standard classroom model explicitly intended. Tests determined who would go to school beyond eighth grade and who would not. This, in turn, would dictate who was eligible for the more prestigious and remunerative professions, and who would be consigned to a lifetime of menial labor and low social status. Early industrial society needed a lot of lower-end workers, after all, people who worked with their hands and backs rather than their minds. The Prussian version of “tracking” students assured a plentiful labor supply. Moreover, since the testing process, for all its flaws and limitations, could claim to be “scientific” and objective, there was at least the illusion of fairness in the system. If you didn’t look too closely—if you factored out things like family wealth and political connections and the wherewithal to hire private tutors—the system could pass for a meritocracy.

  To be clear, I am not antitesting. Tests can be valuable diagnostic tools to identify gaps in learning that need to be fixed. Well-designed tests can also be used as evidence that someone actually knows a subject domain well at a specific point in time. What is important to remember, however, is to have a solid dose of skepticism when interpreting results from even the most well-designed tests; they are, after all, just imperfect human constructs.

  Tests also change. If the changes could be solely ascribed to evolving insights into educational methods, that would be great. In the real world, however, things are seldom so straightforward. Economics and politics factor in, as does a strange Alice-in-Wonderland kind of cockeyed logic; tests change, in part, so that the results will come closer to what the testers think they should be.

  In a fascinating recent instance of this, the state of New York hired a new company to redesign the standardized tests administered to millions of third through eighth graders.5 Why the expensive overhaul? Two seemingly contradictory reasons. In 2009, the old tests seemed to have become too predictable, so that students and teachers, having a pretty good idea of what was coming, were doing mere test prep rather than real teaching and learning. Test scores were high… too high to be considered reliable. Responding to criticism regarding the perceived laxity of their standards, the New York State Board of Regents ordered its then–testing company to make the tests more difficult. It complied, and perhaps did too good a job; scores plummeted. To state what should be obvious, teachers didn’t get less good and students didn’t get less smart from one year to the next. So who was really being tested here—the students or the testers?

  Apparently the testers flunked, because the state fired them and hired a different company, giving the new designers an extremely specific set of guidelines. Questions should not be “tricky.” Possibly misleading use of negatives—“Which of the following words cannot be used to describe the tone of this passage?”—was disallowed, as were those old standbys “none of the above” or “all of the above.” So finicky had the regents become that they even specified the fonts that should be used for maximum legibility. Moreover, they mandated that reading samples should “have characters that are portrayed as positive role models [and] have a positive message.” What all this positivity has to do with any sort of objective measure of reading competence is too subtle for me. Clearly, this is politics, not pedagogy.

 
