It’s the Number 1 time for Rankings – Part II

As promised in the first part of this post, here is a description of US News’ ranking procedure, for non-IR types.

US News sends out five surveys every year: three to the IR office of every college and university, one to the Presidents, Provosts, and Admissions Deans at every college, and one to high school guidance counselors. The surveys that go to H.S. guidance counselors and to college Presidents et al. are very similar, and are called the “Reputation Survey” and the “Peer Assessment,” respectively. Each lists all of the institutions in a category (Swarthmore’s is National Liberal Arts Colleges) and asks the respondent to rate the quality of the undergraduate program at each institution on a one-to-five scale, with an option for “don’t know.” Responses on these two surveys make up the largest, and most controversial, component of the US News ranking: the “Academic Reputation” score. It’s the beauty contest.

The three surveys that are sent to the IR office ask questions about 1) financial aid; 2) finances; and 3) everything else. This year, these three surveys included 713 questions. I wish that were a typo. We consult with other offices, crunch a lot of data, do an awful lot of checking and follow-up, and many, many hours and days later, submit our responses to US News. Then there are several rounds of checks and verifications, in which US News flags items that seem odd based on previous years’ responses, and we must tell them “oops – please use this instead” or “yes, it is what I said it is.” Of those 700-plus items, US News uses about a dozen or two in their rankings; the rest go into other publications and products – on which I’m sure they make oodles of money. Here are the measures that are used for ranking our category of institution, and the weights assigned to each measure in computing the final, single score on which we are ranked:

Category and weight in total score, with each measurement and its weight within the category:

22.5%  Academic Reputation
       67%   Average peer rating (“Peer Assessment”)
       33%   Average H.S. counselor rating (“Reputation Survey”)
15%    Student Selectivity
       10%   Acceptance rate
       40%   Percent in top 10% of high school class
       50%   SAT / ACT scores
20%    Faculty Resources
       35%   Average salary + fringe for ranked faculty (cost-of-living adjusted)
       15%   Percent of full-time faculty with PhD or terminal degree
       5%    Percent of faculty who are full-time
       5%    Student/faculty ratio
       30%   Small classes (% with fewer than 20 students)
       10%   Big classes (% with more than 50 students)
20%    Graduation and Retention
       80%   Six-year graduation rate
       20%   Freshman retention rate
10%    Financial Resources
       100%  Expenditures per student
7.5%   Graduation Rate Performance
       100%  Actual rate minus rate predicted by formula
5%     Alumni Giving Rate
       100%  Number of alumni giving / number of alumni of record (graduates)

The percentages next to the individual measurements reflect each measure’s contribution to the category it belongs to, while the percentage next to each category reflects that category’s weight in the overall final score. So, for example, acceptance rate has the least effect on the Student Selectivity measure, accounting for only 10% of that category’s score – and since the category itself carries only 15% of the total, acceptance rate moves a very small slice of the final number, as the arithmetic below shows. As I mentioned, the Academic Reputation score counts the most.
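A measure’s pull on the final score is simply its weight within its category times the category’s weight in the total. A quick sketch, using weights from the table above (nothing here is US News’s own code):

```python
# Contribution of one measure to the final score = measure weight
# within its category times the category's weight in the total.
category_weight = 0.15  # Student Selectivity's share of the final score
measure_weight = 0.10   # Acceptance Rate's share of Student Selectivity

print(category_weight * measure_weight)  # 0.015 -> just 1.5% of the final score
```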

The way that US News comes up with a single score is by first converting each measure to a z-score (remember your introductory statistics?), which is a standardized measure that reflects a score’s standing among all the scores in the distribution, expressed as a proportion of the standard deviation: z = (score minus the mean) / standard deviation. If an institution had a six-year graduation rate that was one standard deviation above the average for all institutions, the z-score would be 1.0.
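To make the arithmetic concrete, here is a minimal Python sketch; the graduation rates are invented for illustration:

```python
# z-score: distance from the mean, measured in standard deviations.
from statistics import mean, pstdev

rates = [0.95, 0.85, 0.75, 0.85]  # six-year graduation rates, all institutions
mu, sigma = mean(rates), pstdev(rates)

z_scores = [round((r - mu) / sigma, 2) for r in rates]
print(z_scores)  # [1.41, 0.0, -1.41, 0.0]
```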

This transformation is VERY important. With z-scores at the heart of the method, one cannot guess whether an improvement – or a drop – in a particular measure will result in an improved ranking. It is our standing on each measure, relative to everyone else, that matters. If our average SAT scores increased, but everyone else’s went up even more, our position in the distribution would actually drop.
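Here is that relative-standing effect in miniature, with invented SAT averages: ours rises from 1400 to 1410, but everyone else’s rises more, so our z-score falls.

```python
# Improvement in absolute terms, decline in relative standing.
from statistics import mean, pstdev

def z(score, scores):
    return (score - mean(scores)) / pstdev(scores)

last_year = [1300, 1350, 1400, 1450]  # our school is the 1400
this_year = [1380, 1430, 1410, 1520]  # we improved to 1410...

print(round(z(1400, last_year), 2))  # 0.45
print(round(z(1410, this_year), 2))  # -0.48: ...but our standing dropped
```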

So then they weight, combine, weight again, combine (convert to positive numbers somewhere in there, average a few years together somewhere else, an occasional log transformation, …), and out pops a final score, which is rescaled to a maximum value of 100. (I always picture the Dr. Seuss Star-Belly Sneetch machine.) One single number.
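The exact pipeline is US News’s own, but the basic weight-and-combine arithmetic is easy to sketch. In this toy version (category weights from the table above; the z-scores and the rescaling rule are my inventions), the top school lands at 100:

```python
# Weighted sum of per-category z-scores, rescaled so the best score is 100.
# The real pipeline (sign conventions, multi-year averages, log transforms)
# is not public; this shows only the basic idea.

WEIGHTS = {
    "academic_reputation": 0.225,
    "student_selectivity": 0.15,
    "faculty_resources": 0.20,
    "graduation_retention": 0.20,
    "financial_resources": 0.10,
    "grad_rate_performance": 0.075,
    "alumni_giving": 0.05,
}

def raw_score(z_by_category):
    """Weighted sum of a school's per-category z-scores."""
    return sum(WEIGHTS[c] * z for c, z in z_by_category.items())

def rescale(scores):
    """Shift scores positive, then scale so the top school gets 100."""
    low = min(scores.values())
    top = max(s - low for s in scores.values())
    return {n: round(100 * (s - low) / top) for n, s in scores.items()}

schools = {
    "College A": {c: 1.0 for c in WEIGHTS},   # one SD above average everywhere
    "College B": {c: 0.0 for c in WEIGHTS},   # exactly average everywhere
    "College C": {c: -1.0 for c in WEIGHTS},  # one SD below everywhere
}
print(rescale({n: raw_score(z) for n, z in schools.items()}))
# {'College A': 100, 'College B': 50, 'College C': 0}
```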

But there are a couple of other features of the method worth mentioning. One is that the average faculty compensation for each institution is weighted by a cost-of-living index, which US News doesn’t publish because it is proprietary (they purchased it from Runzheimer). It is also very outdated (2002). As Darren McGavin said when opening the leg lamp box in A Christmas Story, “Why, there could be anything in there!” Another unique feature is the “Graduation Rate Performance” measure, which compares our actual graduation rate with what US News predicts it ought to be, given our expenditures, our students’ SAT scores and high school class standing, and our percentage of students who are Pell grant recipients. Their prediction is based on a regression formula that they derive from the data submitted to them by all institutions. Did I mention the penalty for being a private institution? Yes, private institutions tend to have higher graduation rates, so if you are a private institution, the formula expects your rate to be higher too.
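US News’s coefficients aren’t published, but the mechanics are easy to sketch. Every number below is invented, and an ordinary least squares fit stands in for whatever model they actually use:

```python
# "Graduation Rate Performance," sketched: regress graduation rate on
# institutional inputs, then score each school on actual minus predicted.
# Predictors mirror the post (spending, SAT, class standing, percent Pell,
# a private/public flag); all of the data below are invented.
import numpy as np

# columns: spend per student ($k), avg SAT, frac top-10% HS, frac Pell, private
X = np.array([
    [80.0, 1400, 0.85, 0.15, 1],
    [45.0, 1250, 0.55, 0.30, 0],
    [60.0, 1320, 0.70, 0.22, 1],
    [30.0, 1100, 0.35, 0.45, 0],
    [55.0, 1280, 0.60, 0.28, 1],
    [35.0, 1150, 0.40, 0.40, 0],
    [70.0, 1380, 0.80, 0.18, 1],
    [40.0, 1200, 0.50, 0.35, 0],
])
actual = np.array([0.93, 0.80, 0.88, 0.62, 0.84, 0.68, 0.91, 0.74])

# ordinary least squares, with an intercept column prepended
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, actual, rcond=None)

performance = actual - A @ coef  # positive = beating the formula's prediction
```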

Institutions are ranked within their category, based on that final single score, and with much fanfare the rankings are released.

Published by

Robin Huntington Shores

Currently the Director of Institutional Research and Assessment at Swarthmore College, Robin has worked in Institutional Research for over 20 years at a range of institutions.