The Massachusetts Comprehensive Assessment System, better known as MCAS, is 21 years old. Passed in 1993 as part of a groundbreaking education reform act, it was first administered in fourth, eighth, and 10th grades — and used as a graduation requirement, at a time when the only statewide requirements were four years of physical education and a year of US history. Since then, MCAS has grown in frequency and influence. Tests have been added in new grade levels, partly out of fairness to teachers, who complained that they shouldn’t be held accountable for previous years’ work. State officials have tied MCAS scores to administrative interventions, and placed more emphasis on schools’ year-to-year growth. Those changes, intended to better measure student progress, have led to an unintended but predictable consequence: more classroom time spent preparing for the tests.
Test preparation should never be the goal of high-stakes testing; a well-educated student should be able to succeed on a well-written test without excessive drills. But critics, in Massachusetts and elsewhere, have long complained that in practice, test prep crowds out classroom projects and creative, expansive learning. Lately, that resistance has grown louder, accompanied by some specific complaints: about schools overloading on diagnostic tests in bids to improve their MCAS scores, or posting students’ scores on public walls as an attempt at motivation. Last year, the powerful Massachusetts Teachers Association elected Barbara Madeloni, a fierce critic of standardized testing, as its president. The current education secretary and the chairman of the state Board of Elementary and Secondary Education, both appointed by Governor Deval Patrick, have voiced skepticism about how much testing is done. Meanwhile, the board is considering dropping MCAS altogether in favor of PARCC, the computer-based test aligned with the Common Core standards, which have been adopted in most states.
It is in this context that the state plans to order a study of MCAS testing, to be conducted by an outside consultant and completed next April. State officials say they hope the study will attach empirical data to the anecdotal evidence, and some clearly think it will prove that schools test too much. Others believe it will show that the problems are limited to certain overzealous districts, and that most schools are handling their testing obligations in stride. Governor-elect Charlie Baker, who will inherit this issue before the study is complete, is open to reevaluating some of the details around testing, but not testing itself. A spokesman said Baker won’t favor “pulling back from our obligation, and commitment, to independently measuring student achievement.”
Baker is right. (He’s also right to question whether a consultant is needed, at extra cost, or whether the schools can accomplish the task in-house.) Still, a study is worthwhile, so long as it asks the right questions. It could uncover best and worst practices, identifying districts that manage to administer tests without disrupting entire schools — as well as those that take the pressure too far, at students’ expense. It could also suggest ways the state could decrease the frequency of tests while still garnering useful information. After so many years of practice, some tweaking is likely in order, but the original purpose of education reform can’t be forgotten. High standards — and measurable standards — are one reason Massachusetts schools routinely top the nation. When they’re used correctly, tests are not the enemy.