In 1995, Reed College, the fiercely intellectual liberal arts college in Portland, Ore., made national headlines by refusing to participate in US News & World Report’s annual “Best Colleges” guide.
Reed’s decision, which I happily embraced when I became its president in 2002, was a statement of its academic values. In its view, the entire enterprise of arraying educational institutions in a one-size-fits-all, ordinal hierarchy, with its faux precision and dubious methodologies, was deeply antithetical to the scholarly standards to which it held its own faculty and students.
The following year, US News punished Reed for its act of defiance, dropping it from the top quartile of liberal arts colleges to the bottom quartile. Even today there is ample evidence that Reed’s ranking is artificially deflated. But the college has not only survived, it has thrived, wearing its rebellious stance as a badge of academic integrity that many applicants find attractive.
Earlier this year, Columbia University also withdrew its submission to US News for the 2023 rankings, but for a very different reason: One of its own mathematics professors had alleged that the school submitted inflated statistics for 2022, and Columbia said it would need more time to evaluate its data-gathering procedures.
But Columbia’s withdrawal did not stop the publication from ranking it anyway: In the 2022-23 guide released Sept. 12, Columbia plunged from No. 2 to No. 18. Whether the drop in ranking and the revelations of misreporting will injure Columbia in the long run depends on what the university does next.
My advice — and my hope — is that Columbia will pull out of the rankings permanently. In so doing, it could make a powerful statement about the distorting effects rankings have had on American higher education.
The formulas used to calculate schools’ relative scores are completely arbitrary. From the hundreds of metrics that might be used to describe and evaluate institutions of higher education, rankers pick out a small handful — US News uses 17. Then they assign each of those metrics an equally arbitrary arithmetical weight, combine them, and calculate a composite score for each school. The choice of variables and the assignment of weights is based on pure guesswork by a group of journalists, with no underpinning in scientific research or theory.
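The arbitrariness is easy to demonstrate. The sketch below (a hypothetical illustration with invented schools, metrics, and weights, not US News’s actual formula or data) computes a weighted composite score of the kind described above and shows that changing nothing but the weights reverses the resulting order:

```python
# Hypothetical illustration of a weighted composite ranking.
# The metrics, schools, and weights here are invented for demonstration;
# they are not US News's actual inputs.

def composite_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of normalized metrics (higher is better)."""
    return sum(weights[k] * metrics[k] for k in weights)

# Invented metrics, normalized to a 0-1 scale.
schools = {
    "School A": {"faculty_salary": 0.9, "small_class_share": 0.4},
    "School B": {"faculty_salary": 0.5, "small_class_share": 0.9},
}

# One ranker weights salary 7x the other metric...
weights_salary_heavy = {"faculty_salary": 0.875, "small_class_share": 0.125}
# ...another flips the emphasis. Neither choice is more "scientific".
weights_class_heavy = {"faculty_salary": 0.125, "small_class_share": 0.875}

rank1 = sorted(schools, key=lambda s: composite_score(schools[s], weights_salary_heavy), reverse=True)
rank2 = sorted(schools, key=lambda s: composite_score(schools[s], weights_class_heavy), reverse=True)
# Same schools, same data: the two weightings produce opposite orderings.
```

Nothing in the data changed between the two orderings; only the ranker’s taste did. That is the sense in which the resulting hierarchy is arbitrary.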
Why, for example, does US News base its rankings on how much a school pays its faculty, but not on how well those faculty teach? Why does the publication’s formula use spending on instruction, student activities, and administration, but not on financial aid? Why does the formula weigh a school’s faculty salary levels seven times as much as its student/faculty ratio? The editors have never been able to offer a real explanation for any of this.
The rankings rely almost completely on unverified, unaudited data submitted by the very schools being ranked. Throughout their history, the US News rankings have been plagued by reports of schools distorting and falsifying the statistics they report. The accusations against Columbia University are just the latest example. Would you trust a restaurant ranking based purely on the chef’s description of the quality of its food? Would you invest in a company based on unaudited, self-reported financial data?
In a desperate attempt to improve their scores, countless colleges have distorted their practices and policies, often to the disadvantage of academic quality. For example, the US News formula gives a school credit for having a high percentage of small classes. Fair enough. But because it asks colleges to calculate this figure for the fall semester only, many schools have shifted large lecture courses to the spring semester, forcing students to delay taking the introductory courses they need in order to progress in their academic programs. As another example, the use of SAT scores to evaluate colleges’ “student selectivity” has discouraged those schools from taking chances on especially promising, hard-working, creative applicants with lower scores.
Leading rankings have notoriously glorified wealth and prestige, encouraging schools to privilege the already privileged — and disadvantage the already disadvantaged — in their admissions programs. For example, the formula used by US News gives wealth-dependent measures of spending per student six times as much weight as its “social mobility” factor, based on the school’s percentage of Pell Grant recipients. Similarly, the mad scramble to improve acceptance and yield rates led many schools to increase the use of binding early admissions programs that have repeatedly been shown to favor wealthy applicants. Likewise, schools have dramatically increased the use of so-called “merit” aid grants to lure higher-income students with elevated SAT scores, often at the cost of providing adequate need-based aid to lower-income students.
All of these flaws with the ranking system are well known to college and university leaders. During my tenure as president of Reed College, I counseled dozens of my counterparts at other colleges to follow our lead and reject the rankings. To my disappointment — but not surprise — very few did. Fear of punishment or loss of competitive advantage usually triumphed.
But the momentum is shifting. This year, 17 percent of schools ranked by US News refused to respond to its statistical questionnaire and 66 percent refused to respond to its “peer assessment” survey. Recent revelations of data manipulation by such nationally prominent universities as Temple, Rutgers, the University of Southern California, Villanova, and Columbia have shown how the entire college rankings system rests on a foundation of sand.
These signs give me hope that higher education will finally muster the courage to point out that this particular emperor has no clothes.
Colin Diver is the former president of Reed College and former dean of the law schools at Boston University and the University of Pennsylvania. A Boston native, he is the author of “Breaking Ranks: How the Rankings Industry Rules Higher Education and What to Do about It.”