
Can You Trust U.S. News & World Report’s College Rankings?

How do students get the best value out of their college education? And how effectively do college ranking sites deliver on that need? As the rising cost of college far outpaces wages for American workers, this question of value is more important than it has ever been. The good news is that students have more choices than ever before. The bad news is that navigating these choices can be exceedingly difficult. College ranking leaders generally operate under the premise that their ranking lists make this process easier. But do they really? Are college ranking leaders actually providing the information that students need to make smart decisions? Or are they feeding students into a college industry that profits from reputation over outcomes? Are college rankings the ultimate case of style over substance?

We examine these questions and more with a closer look at the ranking sector through the prism of its most widely recognized and influential player—U.S. News & World Report.

***

The United States is home to just under 20 million college students. Every year, this number includes nearly 3 million new arrivals—each shopping for one of the biggest purchases they’ll make in their lives. According to IBISWorld, “The market size, measured by revenue, of the Colleges & Universities industry is $568.2bn in 2022.” An industry of that scale, catering predominantly to young consumers, deserves strong consumer protections and good information.

Instead, we have U.S. News & World Report. 

How effectively does this top gatekeeper inform and protect consumers?

To answer this question, let’s consider a recent scandal. In November 2021, the former dean of Temple University’s Fox School of Business, Moshe Porat, was convicted on fraud charges for his lead role in a conspiracy to inflate his school’s ranking with U.S. News & World Report. Today, Porat faces the possibility of up to 25 years in prison and financial penalties amounting to half a million dollars. But before the scheme unraveled, Porat succeeded in quadrupling enrollment in his program and raking in millions in donations.

How did he do it?

In 2015, USNWR debuted its first ranking of the top online MBA programs in the country. Fox landed the top spot and held it for the next three years. Charges against Porat reveal that he ordered his staff to provide inaccurate information about the school’s application and acceptance processes.

It may not be a surprise to learn that this self-reporting methodology is vulnerable to exploitation, particularly with so much at stake for schools. What is surprising is that Porat succeeded at defrauding the top college ranker for so long. NBC News points out that the scheme was inspired by a basic revelation, noting that “Porat ordered his staff members to send inaccurate information about the program after he learned that U.S. News & World Report lacked the resources to audit any of the data submitted by the schools, according to the indictment.”

This is startling not for what it implies specifically about the Temple University scandal, but for what may be hiding beneath the surface. Temple’s top spot was entirely manufactured by the school itself, and all evidence suggests that the endeavor paid off in both enrollment and endowment…until its perpetrators were caught, of course. But this is entirely beside the point.

The point is that this conviction marks the inevitable convergence of two factors:

  • The outsized influence of U.S. News & World Report in shaping the economic fortunes of colleges; and
  • The damning vulnerability of U.S. News & World Report rankings to manipulation, gaming, and outright fraud.

While Porat’s crimes may be anecdotal, and while they may fall on the more egregious end of the tactical spectrum, they underscore just how problematic the role of USNWR has become. 

Why is USNWR such a big problem?

USNWR has long been the target of harsh criticism and yet remains a singularly dominant source from which students derive information and from which colleges derive reputation. In this fiercely competitive and increasingly global higher education sector, perception overshadows performance. Reputation is everything.

U.S. News & World Report holds a monopoly over both perception and reputation. Its rankings are a major driver of endowment decisions, enrollment figures, and employment opportunities. USNWR plays a determining role in how aspiring college students select their learning destination, how educators and researchers build their reputations, and how colleges position themselves in the marketplace.

And on a broader scale, the ranking methodology used by U.S. News & World Report exerts so much control over the behavior of the college sector that it should bear as much responsibility as federal and state governments, and the schools themselves, for the high cost of college, the student debt crisis, and the low completion rates plaguing American universities. Moreover, as smaller colleges are crushed under the weight of reputation-driven competition, USNWR shoulders some responsibility for tilting the scales even further in favor of the big players.

But this isn’t meant to serve as a broadside attack on U.S. News & World Report. As the dominant force in a ranking sector brimming with flawed metrics and, in some cases, deceptive intent, U.S. News & World Report is simply the perfect embodiment of what is most problematic about the college ranking business: its vulnerability to manipulation, its arbitrarily weighted metrics, its inherent biases, and its disproportionate influence on the decisions that college students make and the investments that colleges prioritize.

A Note on the Competition in the Ranking Sector

U.S. News & World Report is the most widely read of the college rankers, topping a trio of influential rankers that also includes the Academic Ranking of World Universities (ARWU) from the Shanghai Ranking Consultancy and the Quacquarelli Symonds (QS) World University Rankings. Another leader in the space, the Times Higher Education (THE) World University Rankings, actually began through a partnership with QS before splintering off into its own ranking concern.

Other ranking players include The Princeton Review, Bloomberg Business, and the U.S. Department of Education, whose College Scorecard indexes practical indicators of the value offered by each college and university.

According to Simon Marginson, a professor from the Centre for the Study of Higher Education at the University of Melbourne, we can divide today’s leading college ranking strategies into three categories:

  • Those that rely entirely on bibliometric or research-based metrics (such as ARWU’s Shanghai Rankings);
  • Those that rely on a multi-indicator system which assigns different weights to indicators both subjective and objective (such as U.S. News & World Report); and 
  • Those that rely particularly heavily on reputational measures like survey responses (such as Quacquarelli Symonds).

Each ranking system has strengths, and each has weaknesses. Understanding and recognizing these strengths and weaknesses can truly illuminate the value of each ranking approach. Know what to look for (and what to take with a grain of salt), and you can learn quite a bit about a given college or university.

Each of these college ranking groups employs its own set of metrics, and in doing so, each prioritizes certain features when rating the excellence of a given college. Rather than dissect each methodology individually, we have linked directly to the methodology page for each of the rankers noted above. We simply advise that before you trust a ranking, you understand exactly what it proposes to measure.

That said, we use USNWR as our case study here because it has carved out such a dominant space in the ranking sector and because, upon closer examination, it serves as a revealing demonstration of just how little this type of ranking can tell us about the actual value of a college degree from one institution versus another.

The College Ranking Landscape

USNWR is a favorite punching bag for critics in the higher education sector. But our goal here is not to judge the rankings provided by USNWR as ‘good’ or ‘bad.’ Instead, we believe a closer examination of its methodology should shed greater light on exactly what it is that USNWR ranks. 

Before we can do that, let’s take a closer look at the history of college ranking. In the first half of the 20th century, most universities relied on a common metric—the number of students who had graduated as so-called “best men.” For instance, says NPR, Prentice and Kunkel published a guide from the 1930s through the 1950s that listed colleges based on how many of their alumni appeared “in the social bible Who’s Who.”

Of course, Prentice and Kunkel’s singular influence is a distant memory. In its place is a fairly robust set of online competitors in the college ranking space, each holding the trademark on its own distinctive ranking strategy.

The U.S. News & World Report rankings are the best known and most frequently cited of the college ranking publications. Though these rankings were concentrated largely on the U.S. market for most of their history, they have served as the commercial template for other competitors in the field. USNWR’s annual lists have had a profound influence on how colleges are perceived by students, parents, alumni, and employers. The magazine published its first “America’s Best Colleges” report in 1983, largely setting into motion the development of the industry that drives the focus of our discussion today.

The magazine began publishing the report annually in 1987 and has since become the most frequently quoted of the college ranking outlets. At its inception, the ranking was entirely subjective, based on survey responses from university and college presidents. Starting in 1988, U.S. News undertook an effort to use more meaningful quantitative data in its rankings. Accordingly, the Best Colleges ranking methodology was created by Robert Morse, who continues to serve as the chief data strategist for U.S. News to this day.

As noted, for the bulk of their history, the U.S. News rankings focused solely on U.S. colleges. Only in 2014 did U.S. News unveil its Best Global Universities rankings. Entering a field already dominated by the Shanghai Ranking, the QS World University Rankings, and the Times Higher Education rankings, U.S. News became the first major U.S.-based publisher to add a global ranking to an already competitive conversation.

For an overview of this space, Richard Holmes, an industry-leading ranking watchdog, maintains a reference site called University Ranking Watch, where he provides ongoing critique and analysis of the various university ranking systems in circulation today. He warns that for many ranking sites, methodologies are in near-constant flux. Whether in response to academic criticism, in an effort to refine existing strategies, or simply in the interest of generating new headlines, ranking services tend to revisit their metrics and formulae on a fairly regular basis, often with notable changes in outcome.

This has certainly become standard operating procedure for USNWR, which has not only transformed dramatically since its inception, but which transforms in subtle, almost imperceptible, yet consequential ways nearly every single year.

USNWR Builds Its Own Landscape

As domestic U.S. rankers go, the success of USNWR is unmatched. In fact, its rankings became so popular that by 2010, U.S. News had largely dispensed with the news-magazine model that once defined it, focusing instead on its franchise ranking packages. The strategy has proven effective. As a testament to its popularity, U.S. News & World Report’s 2014 Best Colleges rankings attracted 2.6 million unique visitors and 18.9 million page views on the single day of their release. To give you an idea of how many people are looking, that traffic originated from more than 3,000 different sites. Google and Facebook led the charge, but visitors from all over the web are funneled to these rankings every year.

U.S. News & World Report provides a broad array of ranking categories based on discipline, geography, degree level, and cost. However, for our purposes, the analysis that follows will largely concern the method used to produce its general ranking of the Best National and Regional Colleges and Universities in 2022. 

The U.S. News Ranking Methodology

According to its own reporting, the U.S. News ranking system is based on two pillars: quantitative measures proposed by education experts as reliable indicators of academic quality and the magazine’s own research on the indicators deemed to matter most in education. 

Before calculating its rankings, U.S. News divides all institutions according to their type: National Universities; National Liberal Arts Colleges; Regional Universities; Regional Colleges; and Two-Year Colleges. Regional Universities and Colleges are also subsequently broken down geographically: North, South, Midwest, and West.

The factors used to determine ranking are weighted according to U.S. News & World Report’s assessment of the importance of each measure. Each college or university is then ranked against its counterparts based on a composite weighted score. For its 2022 Best Colleges list, U.S. News ranked 1,466 U.S. bachelor’s degree-granting institutions, using “17 measures of academic quality.”

Ranking Criteria

These measures are derived using the following weighted criteria:

  • Graduation and Retention Rates (22%)—Graduation and Retention metrics are based on two components—six-year graduation rate (80%) and first-year retention rate (20%).
  • Undergraduate Academic Reputation (20%)—A Peer Assessment Survey is used to measure faculty dedication to teaching and the quality of academic programs based on feedback from two survey populations: “top academics” such as presidents, provosts, and deans of admissions; and high school guidance counselors.
  • Faculty Resources (20%)—This category is intended to showcase each college or university’s commitment to instruction and is based on five factors:
  1. Class Size Index (40%)
  2. Faculty Compensation (35%)
  3. Proportion of Professors with Terminal Degree in Their Field (15%)
  4. Student-faculty Ratio (5%)
  5. Proportion of Full-Time Faculty (5%)
  • Financial Resources (10%)—This metric is based on “the average spending per student on instruction, research, student services and related educational expenditures” across the two years prior to each ranking.
  • Graduation Rate Performance (8%)—“Performance” here is measured according to USNWR’s own unique formula in which they “compared each college’s actual six-year graduation rate with what we predicted for its fall 2014 entering class. The predicted rates were modeled from factors including admissions data, the proportion of undergraduates who were awarded Pell Grants, school financial resources, the proportion of federal financial aid recipients who are first-generation college students, and National Universities’ math and science orientations. We divided each school’s actual graduation rate by its predicted rate and took a two-year average of the quotients for use in the rankings.” (A minimal sketch of this arithmetic appears just after this list.)
  • Student Selectivity for the Fall 2020 Entering Class (7%)—Student selectivity is based on two components: SAT/ACT scores (65%) and admission of high school students in the top 10% of their class (35%). Recent changes include the removal of acceptance rate and the application of a 15% discount on test scores for the growing number of test-optional schools where fewer than 50% of new applicants submit scores.
  • Social Mobility (5%)—This more recently developed metric concerns each school’s performance in graduating students who have received need-based Pell Grants, and is based on two components: Six-Year Pell Grant Graduation Rates (50%); and Pell Grant Graduation Rate Performance based on the proportion of Pell Grant recipients in the total student population (50%).
  • Graduate Indebtedness (5%)—Another recent addition to the USNWR ranking formula, this metric is based on two components: a Graduate Indebtedness Total (60%) based on average accumulated federal loan debt among 2019 and 2020 bachelor’s degree recipients as compared to the median among ranked schools; and Graduate Indebtedness Proportion With Debt (40%) based on the percentage of 2019 and 2020 graduates who borrowed federal loans.
  • Alumni Giving Rate (3%)—This variable is derived from the percentage of living alumni with a bachelor’s degree who donated to their school between 2018 and 2020. 
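
To make the Graduation Rate Performance arithmetic described above concrete, here is a minimal sketch in Python: divide each school’s actual six-year graduation rate by its predicted rate, then average the quotients across two years. The figures and predicted rates below are hypothetical; USNWR models its predictions from the admissions and financial-aid factors it describes but does not publish the model itself.

```python
# Minimal sketch of the Graduation Rate Performance quotient (hypothetical data).

def graduation_rate_performance(actual_rates, predicted_rates):
    """Two-year average of actual/predicted six-year graduation-rate quotients."""
    quotients = [actual / predicted for actual, predicted in zip(actual_rates, predicted_rates)]
    return sum(quotients) / len(quotients)

# Hypothetical school: actual graduation rates of 84% and 86% against
# predicted rates of 80% and 82%. A result above 1.0 means the school
# "over-performed" its prediction.
score = graduation_rate_performance([0.84, 0.86], [0.80, 0.82])
print(round(score, 3))  # ~1.049
```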

Once these variables have been calculated, weighted scores are assigned and combined. All scores are then rescaled onto a 100-point scale.
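
As a rough illustration of how these pieces might fit together, here is a minimal sketch in Python of a composite weighted score built from the top-level weights listed above and then rescaled to 100 points. The per-school indicator values and the rescaling convention (pinning the top school to 100) are assumptions for illustration; USNWR does not publish its full normalization procedure.

```python
# Minimal sketch of a composite weighted score, using the top-level weights above.
# Indicator values are hypothetical and assumed to be normalized to a 0-1 range.

WEIGHTS = {
    "graduation_and_retention": 0.22,
    "reputation": 0.20,
    "faculty_resources": 0.20,
    "financial_resources": 0.10,
    "graduation_rate_performance": 0.08,
    "student_selectivity": 0.07,
    "social_mobility": 0.05,
    "graduate_indebtedness": 0.05,
    "alumni_giving": 0.03,
}  # weights sum to 1.00

def composite_score(indicators):
    """Weighted sum of normalized (0-1) indicator values for one school."""
    return sum(WEIGHTS[name] * value for name, value in indicators.items())

def rescale_to_100(raw_scores):
    """Scale raw composite scores so the highest-scoring school receives 100."""
    top = max(raw_scores.values())
    return {school: round(100 * score / top, 1) for school, score in raw_scores.items()}

# Two hypothetical schools with made-up indicator values.
schools = {
    "School A": {name: 0.9 for name in WEIGHTS},
    "School B": {name: 0.7 for name in WEIGHTS},
}
raw = {school: composite_score(values) for school, values in schools.items()}
print(rescale_to_100(raw))  # {'School A': 100.0, 'School B': 77.8}
```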

What’s wrong with USNWR’s ranking methodology?

Surely you’ve heard the saying, “if you can’t explain it simply, you don’t understand it well enough.”

Try finding a simple way to explain the dense formula that USNWR used to rank the Best Colleges. Full disclosure—I skipped a lot of the minutiae for your convenience. You can get the long version here, and you should.

But here’s the key takeaway. Complexity does not equal accuracy. In fact, it may be USNWR’s most powerful form of defense—authority through obfuscation. The tactic has not insulated USNWR from criticism, however.

The criteria outlined above have an enormous bearing on the reputation, enrollment, and endowment of American colleges and universities. Case in point: studies by the National Bureau of Economic Research determined that rankings have a direct and measurable impact on admission rates, the SAT scores of incoming freshmen, and even the evaluations rendered by bond-rating organizations like Moody’s.

Since the release of the very first ranking in 1983, U.S. News & World Report has engendered criticism and, more recently, outright hostility. For a thorough timeline of the claims lobbed against the ranker, check out College Rankings Held Hostage: The Undeserved Monopoly of U.S. News.

For a closer look at the reasons USNWR deserves so much of the criticism it receives, let’s zoom in on a few of the core flaws in its methodology. 

U.S. News is Always Tinkering

In our review of the methodology, we noted that there have been a few changes to the formula both this year and in the years leading up to the current ranking. For instance, USNWR lowered the threshold this year for qualifying as a test-optional school, and thus for earning the 15% discount on standardized test weighting. This marks only the second year that USNWR has incorporated Graduate Indebtedness into its formula, and the third year the ranker has accounted for Social Mobility.

On their face, these changes are proposed as a way to address shifting educational and cultural realities, to refine the ability of U.S. News to accurately measure excellence, and to redress criticism of the shortcomings in its methodology. But are these changes to the formula purposeful? Do they truly improve the accuracy of ranking outcomes? Are they even designed to do so?

The Atlantic offers a different theory, citing critics who suggest that these slight annual adjustments are merely changes in window dressing for the same annually released product. Think of it as an iPhone—a paragon of planned obsolescence, each iteration of your hardware inching another step closer to irrelevance with every software update.  

Every year that USNWR tweaks its formula, it lends one more shred of evidence to the observation that the weighting of criteria is essentially arbitrary. With the addition or subtraction of each metric, with the slight modification of components constituting each metric, with subtle percentage-point changes in the weighting of criteria, the rankings from prior years have become less accurate even as no material changes have occurred at the schools ranked. 

It doesn’t take a statistician to deduce that the rankings published this year will be rendered inaccurate in just a few years’ time, and more than likely because of tweaks to the USNWR methodology rather than material changes in the value proposition and educational offerings at the schools gaining and losing stature on its list. If that’s true, how can we really trust the accuracy of these rankings today?

U.S. News Uses Highly Subjective Metrics

In 1997, U.S. News commissioned the National Opinion Research Center to produce a comprehensive critique of its ranking methodology. The findings offered a problematic conclusion: the Center found little apparent empirical justification for the weighting assigned to different variables. The importance ascribed to each, it found, has little “defensible empirical or theoretical basis.”

Again, the ease with which these weights are adjusted from year to year lends credence to the Center’s conclusions. In spite of the considerable influence that the U.S. News rankings have over the college sector writ large, critics argue that the methodology used to derive them simply lacks proper experimental rigor or academic authority.

The reliance on peer review stands out as particularly troubling as it accounts for a full 20% of the composite score given to each school. By its very definition, this source of data is entirely subjective and entirely without transparency. Calling it a “black box” criterion, a study from Stanford notes of reputation surveys that “there is little transparency to the way they are calculated. The inner workings of these metrics are mysterious and poorly understood except by a few in the know. Whereas figures like graduation rates, SAT scores, and acceptance rates are public knowledge, the survey participants and results that lead to these black box metrics are not identified or reported.”

The study also alleges that reputation surveys reflect something akin to cronyism in the college ranking business. Schools of high repute reward one another with friendly marks of approval in an unspoken arrangement that retains the basic pecking order of prestige in higher education. Perhaps even more importantly, this subjective metric essentially elevates schools for their prestige as opposed to their educational offering. While there is often a correlation between the two, they should not be conflated.

Unfortunately, by weighting the reputation survey at 20% of a school’s total score, USNWR makes it impossible to avoid this conflation. To reiterate an earlier point—perception overshadows performance. For more on the outsized impact of this metric across the college ranking sector, check out The Problematic Influence of Reputation Surveys in College Ranking.

USNWR Rankings Lean Heavily on Metrics With Inherent Socioeconomic Bias

The subjectivity of the weighting and the hazy rationale behind the selection of some criteria to the exclusion of others are both cause for further scrutiny. Some critics have argued that these conditions facilitate a ranking system that is inherently tilted to the advantage of colleges that are both more costly and less economically accessible.

As Salon phrases it, “these rankings exhibit a callous disregard for college affordability, prioritizing schools that spend more money on flashy amenities rather than scholarships and grants.” The article goes on to argue that “the magazine glamorizes selectivity, which creates a culture of exclusion that shuns low-income students the hardest.”

The U.S. News & World Report rankings reward those colleges that achieve greater exclusivity. Exclusivity, Salon argues, becomes a proxy for a college’s worth, which is not only inherently inegalitarian but may also not be the truest indicator of a school’s value.

Salon suggests that the way these rankings weight certain characteristics like selectivity and donations, to the total disregard of affordability, may actually help to drive up and rationalize the high cost of those schools which are already least financially accessible. Of particular consequence is the formula’s heavy reliance on graduation and retention rate metrics. The single likeliest cause of college non-completion is financial distress. 

Forbes reports: “The current system makes colleges look good when they enroll students who are already well positioned to succeed, which really means students who are affluent. It makes a college look bad when it does the hard work of patiently supporting less well-off, less well-prepared students as they work toward their degrees.”

The most heavily weighted metric in USNWR’s formula is also its most vulnerable to selection bias. In this sense, USNWR fosters a system that rewards schools for de-prioritizing students with financial disadvantages. The recently added 5% nod to social mobility seems unlikely to offset this far more encompassing priority. 

For more, find out What’s wrong with using graduation rates to rank colleges?

USNWR Lacks the Capacity to Audit Its Findings

Most alarming of our findings is the detail that emerged from accounts of the scandal at Temple’s Fox School of Business. Dean Porat’s crimes went undetected—indeed, were even rewarded—for a full three years. How?

In spite of the incredible capacity it has to influence the behavior of students and colleges, U.S. News & World Report lacks either the resources or the authority to audit the data it collects. To say that this makes USNWR vulnerable to gaming and manipulation seems too great an understatement.

It would be more accurate to say that it invites fraud and deception. To wit, one need not concoct some clever statistical ruse to manipulate the standings. There are easier methods. Schools seeing no more direct path upward may simply fabricate their numbers.

Nearly a decade ago, ProPublica identified the problem, noting at the time that five colleges had admitted to overstating their admissions statistics: Bucknell University, Claremont McKenna College, Emory University, George Washington University, and Tulane University’s business school. And in each of these cases, the scandals were only exposed to the light of day by the colleges themselves.

There’s little evidence that such revelations brought about a major crisis of conscience for USNWR. Some five years later, in 2018, eight schools including Hampton University, Saint Louis University, and Dakota Wesleyan University were found to have submitted incorrect data and consequently gained undeserved ground in the rankings. In 2019, the University of Oklahoma revealed that it had submitted inaccurate alumni donation rates to the leading ranker…for 20 years!

This is not a great track record when it comes to quality control. Considering that little has been done to resolve the root issue—reliance on self-reporting and an absence of data auditing—we have little reason to believe that some schools won’t resort to deception. In the era of the cash-strapped college, there are simply too many dollars at stake to presume otherwise.

A Final Word On Methodology, Manipulation, and the Simple Joy of Opting Out

In a scathing criticism of the ranking system, and in explanation of his school’s decision to boycott the rankings altogether, Reed College president Colin Diver wrote in 2005 that these rankings “create powerful incentives to manipulate data and distort institutional behavior for the sole or primary purpose of inflating one’s score. Because the rankings depend heavily on unaudited, self-reported data, there is no way to ensure either the accuracy of the information or the reliability of the resulting rankings.”

Reed had already proved its point when it opted to withhold the data requested by U.S. News for its annual ranking in 1995. Without experiencing any actual decline in its performance or the metrics related thereto, Reed dropped precipitously in the rankings, falling from the 2nd quartile to the 4th.

This impact demonstrates just how little independent verification U.S. News does of the statistics it draws from self-reporting. Conduct your own research to see how well Reed has done in the years since. Ironically, by releasing itself from the heavy burden of competing in the rankings game, Reed found itself more readily capable of advancing priorities specific to learning, curriculum and academics. These priorities organically rose to the top as Reed found a greater raison d’être than merely gaming a ranking system. Today, Reed is thriving, a fact best exemplified when the school is ranked using empirical metrics. 

Is there value in USNWR Rankings?

Now that we’ve taken a closer look at the methodology, let’s once again step back for a look at the bigger picture. Obviously, the criticism above merely joins an already loud chorus of voices calling for change. But USNWR continues to wield influence. You may be hard-pressed to undertake a search for the right college without encountering its ubiquitous ranking lists and stamps of approval.

With that in mind, if you do plan to consult U.S. News & World Report as you make an enrollment decision, it is important to take its rankings with a grain of salt. There is valuable information to be gleaned from these rankings, provided you have a clear sense of the metrics used to arrive at them.

First and foremost, consider these rankings an indicator of reputation. This, above all else, is at the root of the U.S. News ranking system. The 20% weighting given to reputation indicators, combined with the inherent biases toward student selectivity, means that much of the ranking is based on preconception and public image. In essence, these rankings foster and preserve a hierarchy in education that long predates U.S. News & World Report.

This is not to suggest that its rankings are without merit. Reputation is a meaningful indicator of quality in education—if not inherently, at least incidentally. For most schools, this reputation extends from a track record of success, academic excellence, positive post-graduate employment outcomes, and the attraction of notable scholars. So if you are navigating these rankings as you search for a school, consider that a high ranking and a positive reputation have real-world implications from the abilities of your professors and classmates to the connections and job prospects that await you upon graduation.

Use these rankings to gain a sense of the tiers of prestige separating institutions of higher learning. U.S. News enjoys a self-fulfilling role in reinforcing and giving heft to the reputations of well-regarded institutions. There is value in knowing where the general public consensus lands on a given school. USNWR does indeed hold a monopoly on perception.

U.S. News rankings can be useful if reputation is your top priority. They may not, on the other hand, be very useful as you seek out a cost-effective way to pursue a top quality education. The skyrocketing cost of tuition, the challenges of loan repayment, and the difficult employment realities that often await graduates suggest that affordability is a paramount concern for most college aspirants. Be aware that this concern is not addressed by U.S. News & World Report’s rankings. 

Look elsewhere if you seek information on affordability, accessibility, socioeconomic diversity, and even post-graduate employment figures. More importantly, be skeptical of the empirical soundness of the data underlying these rankings. Even if you believe that every metric included here is properly formulated, intuitively weighted, and constructed without bias, you would also have to believe that every school submitting its data is doing so with integrity. Recent history casts serious doubt on that likelihood.

Your best defense is a healthy supply of skepticism.