
Where Do Rankings Come From?

The most popular and widely used ranking for undergraduate programs is undoubtedly the US News and World Report ranking. Usually, when someone gives an overall ranking for a school in the US, this is the number they are referring to. These rankings do not, in fact, measure the university as a whole; they primarily measure indicators related to undergraduate education. So that number isn't as meaningful as you might think. In any case, whether or not you decide that the rankings are an accurate and useful tool for choosing a university, you should know what the number means. Let's look at the methodology of the US News and World Report rankings, and some critiques of it.

The Methodology and Critiques

First, the rankings give the highest weight to peer assessment. This information is collected by asking university administrators – presidents, provosts, and admissions deans – to rank universities. Reputation can be important and is very hard to quantify, so peer assessment is not a bad idea. However, one critique of this method is that presidents and deans of admissions are not necessarily in the best position to have detailed and up-to-date information. For one thing, they tend to be very busy running their own universities. For another, no one consults professors, who tend to know their colleagues at other universities well and who heavily influence the quality of a university education. Nor are students, who may have inside information not available to the public, surveyed. It should also be noted that every year, the number of university administrators who fill out the peer assessment forms goes down!

The rankings also give high priority to retention rates – the percentage of undergraduates who return to a university after their first year – and the percentage of undergraduates who graduate within six years (four years being the norm). However, critics of the rankings note that some of this has to do with the student, not the university. For example, a first-year may not return for a second year because he needs to help his family, or simply because he doesn't like the food. In some cases, the student may have a highly specific need that the university doesn't meet – such as an excellent music recording studio, or a professor who specializes in Egyptian hieroglyphics.

Faculty resources are measured by student-faculty ratio, the number of classes with fewer than 20 students, the number of classes with more than 50 students, average faculty salary, the proportion of full-time faculty, and the proportion of faculty holding the highest degree in their field. Again, this applies only to faculty teaching undergraduates. And again, critics point out that much of this has to do with how rich the university is. Harvard is a very wealthy university that can afford to pay its professors well and hire enough of them to maintain a good student-faculty ratio. Of course, it's a good sign when a university spends its funds on professors and not, say, new fountains. But none of this tells us whether the professors teach well. A very famous professor may command an incredible salary for the prestige, or even the consulting business, he brings to the university, and never set foot in a classroom.

Notably, spending per student factors into the rankings too, without any measure of how exactly those funds are used. Alumni giving also factors in, on the theory that graduates donate to their university if they enjoyed it, so a university where a high percentage of alumni donate must be delivering satisfaction. Not a bad theory, but it does mean that the financial resources of a university carry a high overall weight in the rankings, both directly and indirectly. Also, some universities are more likely than others to produce highly paid alumni, and highly paid alumni are more likely to donate, being in a position to do so. An excellent arts school may not have wealthy alumni who can afford to give in massive proportions.

Student selectivity is the next factor we come to: 15% of a university's ranking is based on the qualifications of enrolled first-year students and on the proportion of applicants who are accepted. This is probably the most heavily criticized element, because the key question is not how smart the students are when they come in; it's how smart they are when they leave! Does the university add value to its students, or does it just accept smart students who learn nothing for four years? Many people also note that some universities improve their score in this area simply by soliciting more applications or rejecting more students. In fact, this may be creating a highly negative trend in US higher education, where students with good but not excellent qualifications are not accepted to top schools that are trying to boost their ratings.

Graduation rate performance, a measure of what students actually get out of the university, is also a factor in the rankings, but at only 5%. It is measured as the difference between a school's actual graduation rate and the rate the magazine predicts for it, which is a somewhat subjective and indirect way to measure what students get out of a university. For example, a school predicted to graduate 85% of its students that actually graduates 90% scores well on this measure.
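The prediction step is where the subjectivity comes in: the magazine builds a statistical model of what a school's graduation rate "should" be given its inputs, and the choice of model is its own. Here is a minimal sketch of the idea in Python, using hypothetical predictors (median SAT score and spending per student) and made-up numbers purely for illustration – this is not the actual US News model:

    import numpy as np

    # Hypothetical inputs for four schools: median SAT score and
    # spending per student (in $1000s). All numbers are invented
    # for illustration; the real model and its inputs are US News's own.
    X = np.array([
        [1400, 45.0],
        [1250, 30.0],
        [1100, 22.0],
        [1500, 60.0],
    ])
    actual = np.array([0.93, 0.84, 0.71, 0.90])  # observed six-year grad rates

    # Fit a simple linear model: predicted = b0 + b1*SAT + b2*spending.
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, actual, rcond=None)
    predicted = A @ coef

    # "Graduation rate performance" is actual minus predicted: a positive
    # value means the school graduates more students than its inputs
    # would suggest. This difference is what gets the 5% weight.
    performance = actual - predicted
    for i, p in enumerate(performance):
        print(f"School {i}: {p:+.3f}")

Everything here hinges on which inputs the model includes and how they are weighted: change the model, and the "performance" numbers change with it. That is exactly why critics call the measure indirect.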

The Washington Monthly has also published a critique drawing on a report on the rankings that US News and World Report itself commissioned in 1997. Many of the critiques I make here are also noted there, along with recommendations for improvement.

Poor Number 20

As you statistics majors may have noticed, it is possible for a low-ranked university to equal or even surpass higher-ranked universities in certain aspects. For example, Princeton is ranked number one this year. Brown University – widely considered an excellent school in the US – is ranked 15th. It must be a far inferior university, right?

You might be surprised to hear that Brown is 3rd in the country for graduation and retention. Princeton is actually only 2nd, while Harvard (ranked number 2 overall this year) is 1st. In fact, 97% of all freshmen return to Brown for a second year – equal to Harvard's rate and only one point lower than Princeton's. Only 10% of classes at both Princeton and Brown have more than 50 students, meaning Brown also provides plenty of smaller classes where kids don't get lost or remain unknown to their professors. Brown has a higher proportion of full-time professors than either Princeton or Harvard (94% at Brown, 91% at Princeton, and 92% at Harvard). Selectivity in terms of the SAT scores of accepted students is virtually identical at all three universities.

I could do the same analysis for any number of universities. Even Colorado State University, ranked 124th, has a higher proportion of full-time faculty and fewer classes with more than 50 students than Princeton or Harvard, and is as selective as the University of Iowa, ranked 64th. The point is that a ranking is not a single number handed down by God; it is a composite measure, and as such the final number doesn't mean the university is superior in every way.
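To see why, it helps to look at how a weighted composite behaves. Here is a minimal sketch in Python; the category weights loosely echo the ones described above, but both the weights and the component scores are hypothetical, not real US News data:

    # Hypothetical component weights, loosely following the categories
    # discussed above. Neither the weights nor the scores below are
    # real US News data; they are invented to show the mechanics.
    weights = {
        "peer_assessment": 0.25,
        "retention_graduation": 0.20,
        "faculty_resources": 0.20,
        "selectivity": 0.15,
        "financial_resources": 0.10,
        "alumni_giving": 0.05,
        "grad_rate_performance": 0.05,
    }

    school_a = {"peer_assessment": 99, "retention_graduation": 96,
                "faculty_resources": 95, "selectivity": 97,
                "financial_resources": 98, "alumni_giving": 90,
                "grad_rate_performance": 80}

    # School B beats School A on retention/graduation and faculty
    # resources, yet still loses overall.
    school_b = {"peer_assessment": 88, "retention_graduation": 97,
                "faculty_resources": 96, "selectivity": 97,
                "financial_resources": 85, "alumni_giving": 75,
                "grad_rate_performance": 85}

    def composite(scores):
        """Weighted sum of a school's component scores."""
        return sum(weights[k] * scores[k] for k in weights)

    print(f"School A: {composite(school_a):.2f}")  # 95.80
    print(f"School B: {composite(school_b):.2f}")  # 91.65

School A finishes more than four points ahead even though School B is the better school on two of the seven components. A composite simply averages those differences away – which is exactly the Brown-versus-Princeton situation described above.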

So now you know what the rankings measure. Note what they do not measure: the quality of the education itself, students' chances of getting a good job after graduation, student satisfaction, academic rigor, the usefulness of the knowledge gained, and a bunch of other things that might matter more to you than how many alumni donated to the school. Now, there are reasons these things aren't measured – namely, that it's really hard to quantify student satisfaction or the quality of an education – and US News does its best to measure them indirectly. The problem likely lies in the concept of rankings altogether rather than in the flaws of any one methodology. So don't think that little number is as objective OR as useful as it seems!
