USN&WR is playing round robin with rankings. They are shrewd enough to recognize consumer behavior in weighting the ranking factors, ensuring countless nervous families continue to use their rankings when compiling a prospective college list. An example of their detailed tracking: once it got out that USN&WR includes the number of top-10% students attending in its calculation of a college's ranking, savvy high schools stopped providing class rankings.
In their effort to measure more “output” rather than input, they’ve changed a number of factors. Here is a small sample of the changes and what they mean:
Class ranking goes down in importance, from 40% to 25%. This is actually the result of some circular thinking: high schools began removing class rank when it became apparent colleges were using the “top 10%” cutoff to admit or deny students. Since fewer high schools are now reporting it, USN&WR is lowering its weight. The question remains: will colleges change their admit/deny process now that this factor carries less weight? And how will they decide admit/deny when the top 10% is not provided?
Test scores, however, are rising in relative importance, from 50% to 65%. So the ranking now places more weight on an artificial snapshot measurement than on four years of performance. It will be interesting to see the impact on the many schools that are now going test optional (see the fairtest.org website), and the change is sure to fuel more prep time and repeat tests for frantic, nervous students.
Graduation rate performance is increasing in weight to 7.5% and will now apply to all Regional Universities and Colleges. A few interesting notes about the one measurement that is actually outcome-based:
As before, they are measuring 6-year graduation rates, not 4-year. For most families, actually performing to this standard creates a 50% increase in college expenses: two extra years on top of a four-year base.
It uses a complex formula that takes the college's student population into consideration, looking at the number of students awarded Pell Grants, incoming test scores (giving test scores even more weight!), and more, to predict a graduation rate. USN&WR then ranks colleges on whether they exceed or fall short of their prediction.
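USN&WR does not publish the exact regression it uses, but the basic idea can be sketched in a few lines: predict a graduation rate from inputs like Pell Grant share and incoming test scores, then score the college on actual minus predicted. Everything below, including the coefficients and function names, is a made-up illustration of the concept, not the real formula.

```python
# Hypothetical sketch of "graduation rate performance". The real
# USN&WR model and its coefficients are not public; these numbers
# are invented purely to illustrate the predicted-vs-actual idea.

def predicted_grad_rate(pell_share, avg_sat):
    """Toy linear model: a higher Pell Grant share predicts a lower
    6-year graduation rate; higher incoming scores predict a higher one."""
    return 0.9 - 0.4 * pell_share + 0.0002 * (avg_sat - 1000)

def performance(actual_rate, pell_share, avg_sat):
    """Positive = the college graduates more students than predicted."""
    return actual_rate - predicted_grad_rate(pell_share, avg_sat)

# A college with 30% Pell students and an average SAT of 1100 is
# predicted (by this toy model) to graduate 80%; if it actually
# graduates 85%, it beats its prediction by 5 points.
print(round(performance(0.85, 0.30, 1100), 2))
```

The point of the measure, in other words, is not the raw graduation rate but the gap between what a college's inputs predict and what it actually delivers.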
USN&WR also acknowledges that there are other great schools that deserve recognition. Why have only one #1 when there can be many? So USN&WR produces a number of ranked lists, including:
Regional Universities (ranked within geographic region)
Liberal Arts colleges
Business programs – numerous categories (ranked solely on peer assessment rankings)
Engineering Programs – numerous (ranked solely on peer assessment rankings)
A+ Schools for B students (National & Liberal Arts)
Up and coming colleges
Academic programs to look for
Historically Black universities and colleges
Best undergrad teaching (new in 2010)
Unranked specialty schools
However, not all these lists use the same criteria and criteria weighting as the main list. Many of them use peer and/or student reviews as the ranking criteria, making them subjective rather than objective reviews. Actually, it can be argued that all of the rankings are subjective, since most of the information used is self-reported, and on more than one occasion it has been discovered that the numbers presented do not reflect the actual facts. Tulane University, Bucknell University, Claremont McKenna College, Emory University, Providence College and George Washington University were all outed this year for providing false information. Some overrepresented the number of students in the top 10% of their high school class; others inflated SAT scores. Still other schools have misstated their acceptance rates to look more exclusive. This is true for both lower- and higher-ranking schools, including Dominican (which reported a 54% acceptance rate that was actually 73%) and Tulane (which reported 57% acceptance to its business school when in reality 93% were accepted).
Is it beginning to feel like you need a degree in statistical analysis to really understand the complexities that make up the rankings? If so, you are not alone. Rather than trying to understand every piece of the calculations, create your own rankings to determine college fit.
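A personal ranking can be as simple as a weighted average of the factors your family actually cares about. Here is a minimal sketch; the factors, weights, and scores below are placeholders to show the mechanics, and you would substitute your own (cost, distance, majors, financial aid, campus feel, and so on).

```python
# A minimal personal "college fit" ranker. All factors, weights, and
# scores are illustrative placeholders -- substitute your own.

weights = {"affordability": 0.4, "academic_fit": 0.4, "campus_life": 0.2}

# Score each college 1-10 on each factor (your judgments, not USN&WR's).
colleges = {
    "College A": {"affordability": 9, "academic_fit": 6, "campus_life": 7},
    "College B": {"affordability": 5, "academic_fit": 9, "campus_life": 8},
}

def fit_score(scores):
    """Weighted average of this college's factor scores."""
    return sum(weights[factor] * scores[factor] for factor in weights)

# Print the colleges from best fit to worst.
for name in sorted(colleges, key=lambda n: fit_score(colleges[n]), reverse=True):
    print(f"{name}: {fit_score(colleges[name]):.1f}")
```

Unlike USN&WR's one-size-fits-all formula, the weights here are yours, which is the whole point: the "best" college is the one that best fits your student.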