The Unintended Consequences of Rankings
Authored on: Oct 30, 2013
We are a society obsessed with quantifying and ranking things. Neurosurgeons, small towns, and nasal sprays all have their own ranking lists. Someone is a winner and someone is a loser.
While many of us are aware of this increased quantification and vaguely understand its potential dangers, Michael Sauder (Sociology, CLAS) is working to make the unintended consequences of this trend and fascination clear, especially in the world of education.
Ten-year study of law school rankings.
“Our timing was good,” says Sauder of the start of the decade-long study he has conducted with Northwestern University sociologist Wendy Espeland. “Although the rankings had started in the 1980s, schools weren’t really taking them seriously until the mid-1990s. By the time we started talking to administrators, faculty, and students, people understood that the rankings were having an effect, but there wasn’t any empirical data on it yet.” To date, he and Espeland have conducted about 170 interviews with people involved in legal education across the country.
After publishing several articles on the rankings, Sauder and Espeland are now completing a book, tentatively titled Fear of Falling: How Media Rankings Changed Legal Education in America (forthcoming from the Russell Sage Foundation). Sauder came to the work via an interest in status; he sees the rankings as a formalized status system. Espeland is interested in what she calls “commensuration”: the ways qualitative information is translated into quantitative data, and the consequences of that translation, such as the U.S. government’s use of qualitative research to set a price on Native American land.
Schools are changing services, scholarships, and even missions.
With regard to law school rankings, the common belief was that they were affecting the legal landscape, but no one was certain what that meant. Sauder and Espeland’s interviews reveal that law schools are changing their missions, their career services, their scholarship offerings, and sometimes even their staff structure in response to the rankings.
“We have found that schools often respond to the numbers rather than the phenomena that they’re supposed to represent,” says Sauder. For example, many schools have tried to raise their reputational scores among peers by spending significant sums on promotional materials that are sent only to administrators at other schools. Why administrators? Because their responses to the U.S. News survey play a significant role in determining a school’s overall rank. (The process is not unlike that of the Oscars, in which other people in the industry are the judges.)
“In a few cases, schools have spent millions of dollars effectively advertising themselves to other schools,” says Sauder.
Information has become the name of the game.
One of the most important criteria in the ranking formula is the percentage of employed graduates. Sauder says that for years, schools counted only the number of graduates working in the legal field. But a few astute schools started to include any form of employment in this statistic (e.g., driving a taxi, working the counter at McDonald’s, shelving books in the law school library), pushing their employment numbers—and thus their overall rankings—higher. In order not to lose ground in the rankings, almost every school has now adopted this looser interpretation of employment status.
Simply gathering this information is so time-consuming that many career services staff and assistant deans have told Sauder that they now spend the majority of their office’s time and resources tracking student employment rather than providing employment guidance. In one extreme case, a career services staff member had been fired the day before her meeting with Sauder because her dean was dissatisfied with the numbers.
The effects of school rankings extend overseas. Sauder has received reports from Japan, Russia, and Iceland, as well as from countries in the European Union, that national governments are directing more funding to the elite institutions most likely to fare well in the rankings. Regional schools are losing ground as a result.
Rankings provide service.
Sauder is not a complete critic of the rankings. He says that amid a glut of information, they serve a purpose by simplifying a complicated process. And he believes that Robert Morse, director of data for U.S. News, and the magazine try to do the best job possible to provide relative measures of educational quality.
But he also believes there is a better system. More rankings could actually be better, he says, giving the example of business schools, which are ranked by four or five major organizations, including the Wall Street Journal and Forbes. “If a school ranks fourth in one system and 45th in another,” he explains, “it becomes pretty clear that something is off-kilter.”
"These are complicated institutions."
He also believes that ranking ordinally, that is, assigning first, second, and third places, is misleading. “These are complicated institutions with different goals and missions,” he notes. “Making such fine distinctions is really impossible. But students and donors don’t understand that when a school drops by four places it’s a statistical variation, not a change in quality.”
Sauder hopes that students, as well as administrators, become more savvy about the rankings as a result of this research: “It’s more important to choose a school that feels like a good fit instead of based on rankings.”
In the last chapter of their book, Sauder and Espeland are looking at the ways rankings and other forms of quantified accountability measures have spread into additional areas, such as crime statistics and healthcare. Sauder says, “I hope that this work can shed new light on the unintended consequences that public measures produce and provide ideas about how they might be improved.”