College Rankings

David Webster stated in 1986 that there are two elements that define college rankings. The first is that academic quality can be measured by selected criteria. For example, in many studies the reputation of the faculty and the selectivity of students are used as measures of an institution's quality. The second element is that using these measurements leads to an ordering of institutions. In other words, since quality is in short supply, there can be only one number-one school. Therefore, unlike classifications (e.g., Carnegie classifications), which group institutions by type, or guides, which give information on individual colleges (e.g., Peterson's Guide to Four Year Colleges), rankings order institutions from best to worst.

History of Rankings

This notion of ranking the academic excellence of U.S. colleges and universities is not new. For nearly 100 years various organizations have attempted to rank postsecondary institutions. In 1910 James Cattell from Columbia University offered rankings in American Men of Science that assessed the "scientific strength" of elite institutions by looking at the reputations of their science and social science faculty. Most early efforts applied the ranking to the college as a whole, rather than to individual departments. The rankings also tended to be based on what happened to the students after graduation instead of the accomplishments of the school's faculty. Cattell's work is an early exception.

E. Grady Bogue and Robert L. Saunders offered a brief history of graduate school rankings in 1992. They reported that the first graduate school study was conducted in 1925 by Raymond Hughes. He called on his fellow faculty members at Miami University in Ohio to draw up a list of quality universities and to identify national scholars in specific fields of study to serve as raters. Ultimately, in A Study of Graduate Schools of America, Hughes relied on forty to sixty raters to assess twenty disciplines for graduate study at thirty-six universities. He followed up this ranking with another in 1934 for the American Council on Education. In this report, he assessed fifty disciplines and increased the number of raters to 100. Graduate programs were not ranked again until 1959, when Hayward Keniston conducted his assessment of them. The list of schools was surprisingly similar to the work done by Hughes in 1925. Two other well-known graduate school studies were done by Allan Cartter in 1966 and Kenneth D. Roose and Charles J. Anderson in 1970.

Since that time, there have been several other notable studies assessing graduate education. One major study was conducted in 1982 for the Conference Board of Associated Research Councils. It was far more comprehensive than the earlier efforts, covering thirty-two disciplines at 228 institutions. Then, in 1995, the National Research Council's Committee for the Study of Research-Doctoral Programs assessed forty-one disciplines and 274 institutions using over 7,500 raters. These 1995 rankings included both reputational ratings based on the opinions of faculty and objective data that focused on student-faculty ratios, number of programs, and faculty publications and awards. In 1990 U.S. News and World Report began to offer its rankings of graduate and professional programs, focusing on business, law, medicine, and engineering.

In general, the early rankings efforts were not distributed widely. Most of these attempts were viewed only by "academic administrators, federal agencies, state legislators, graduate student applicants, and higher education researchers" (Stuart, p. 16). The audience, however, grew substantially when U.S. News and World Report began publishing rankings of undergraduate institutions in 1983. By the late 1990s, U.S. News and World Report, Time partnering with the Princeton Review, Newsweek partnering with Kaplan Testing Service, and Money magazine were selling an estimated 6.7 million copies of their special rankings issues annually. As Patricia M. McDonough and her associates illustrated in 1998, rankings have become big business. It should be noted that there are all kinds of college rankings besides those that look at academic quality. For instance, Money magazine determines the "Best College Buys" and the Princeton Review names the top party schools.

In spite of the numerous methods employed over the years, academic rankings have been amazingly stable (see Table 1). Curiously, there is just enough change to give the listings credibility. The number-one school may change from year to year, but, in general, schools near the top of the list decades ago are generally seen near the top of the list in the early twenty-first century. In 1991 Alexander Astin contended that the stability could be explained by "the fact that beliefs about the institutional hierarchy in American higher education affect our perceptions of both graduate and undergraduate programs and are highly resistant to change" (p. 37), and that this "folklore" regarding an institution's quality affects students' college choices as well as the perceptions of institutional raters. Therefore, according to Astin, rankings reflect the myth of quality, rather than the reality of it.

The Pros and Cons of Rankings

Proponents contend that the main advantage of rankings is that they provide a way for families to make "sound economic decisions" regarding the education of their children. Rankings serve as a type of consumer report for families wishing to compare colleges and universities. Webster claimed that rankings bring to the attention of some families little-known schools that may be good choices for their children. He also found that ranking approaches have become more standardized because U.S. News and World Report publishes its ranking methods.

McDonough and her colleagues offered several additional reasons for the public's interest in these publications. First, ever since the Watergate scandal, the American public has developed a skeptical attitude toward the country's national institutions. Therefore, as college admissions grow more chaotic, the public turns to these seemingly unbiased resources for help in their college searches. Thus, rankings help reduce the risk in a student's college choice. Second, the highly competitive race for places at the university further encourages families to seek objective, comparative data. Families believe the higher the ranking, the better the reputation of a college or university. According to Charles J. Fombrun, the college's "reputation is a cue to consumers of what they can expect; a reputation acts as a guarantee of quality" (McDonough, Antonio, Walpole, and Perez, p. 515). Third, students and families eager to bask in the glow of attending a highly ranked institution rely on these published reports to inform their college-going decisions.

In spite of these attributes, rankings are not without critics. Since the beginning, colleges have cried foul at the publication of rankings. For example, in 1986 Webster reported that the 1911 effort by the U.S. Bureau of Education was withheld by two U.S. presidents because of the outcry against it from college administrators. Today, the complaints are just as vociferous. Steven Sample, the president of the University of Southern California, called the rankings "silly" and "bordering on fraud" (Trounson and Gottlieb, p. A12). Theodore Mitchell, president of Occidental College, contended that "the rankings are 'a distortion of an institution's character and, at worst, a kind of tyrannical tool to get institutions to chase after a single vision of what good higher education is'" (Trounson and Gottlieb, p. A12). William Massy, director of the Stanford Institute for Higher Education Research, and Robert Zemsky, director of the Institute for Research on Higher Education at the University of Pennsylvania, went so far as to say that rankings encourage "the kind of competition that puts higher education at risk" (Webster 1992, p. 19). The criticisms even come from within the U.S. News and World Report organization. Amy Graham, an economist who was responsible for the list for two years in the late 1990s, stated that the methods for data collection are "misleading" and "produce invalid results" (Trounson and Gottlieb, p. A12).

TABLE 1

Debra Stuart in 1995 offered a number of other common criticisms of rankings. First, "there is no consensus about how to measure academic quality" (Stuart, p. 18). Therefore, how does one make sense of the various rankings efforts? Additionally, Astin's concern regarding the stability of rankings suggests that myth and institutional perceptions may have as much to do with the rankings as the methods used to determine them. In fact, the methods for assessing quality reflect a bias toward institutional size, student test scores, and the number of "star" faculty. Astin and others question this definition of quality, because it has nothing to do with the student's college experience or learning.

Second, Stuart stated that raters are biased depending on their own affiliations and knowledge of institutions. Would the rankings be the same with different raters? Third, she suggested that there is a halo effect. For example, one highly ranked department at a college or university may provide sufficient glow to allow other departments at that institution to be more highly ranked than is warranted. Fourth, the timing of assessments may affect the outcome. If studies are conducted close on each other's heels, then the results of one may affect the raters' views for the second. Also, if the studies are not done regularly, then the standing ranking may not reflect changes in the department, good or bad. And finally, the use of different sorts of methodologies makes comparisons between reports impossible.

Another criticism leveled at rankings is that colleges change their own processes and procedures in an attempt to improve their rankings. They do this because it is believed that high rankings positively affect admissions. Highly ranked schools have seen an increase in the number of student applications, a rise in the average SAT scores of entering students, and less need for financial aid offers to attract students. So, for example, the practice of early acceptance, which commits early applicants, who tend to be high achievers, to attend an institution if accepted, distorts selectivity and yield figures (i.e., the percentage of admitted students who actually accept admission offers).

The effects of rankings, however, are not limited to admissions. College administrators use the data to make decisions regarding resource allocations. The pressure is therefore on for programs to do well, and departments may manipulate their reporting in ways that improve their placement on the list. For example, administrators at Cornell University removed from their alumni lists students who had never graduated "before computing the fraction of alumni who contributed to the university. This change improved Cornell's reported alumni giving rate, a factor used by U.S. News" to assess quality (Monks and Ehrenberg, p. 44).

Yet not all adaptations are seen as negative. Webster noted in 1992 that it is a virtue when institutions improve their facilities in an effort to raise their rankings. For example, administrators at Texas A&M University acknowledged that they use the rankings to spur changes, such as to class size, with the goal of being ranked a top-ten university.

Central to much of the criticism is that current rankings do not consider student learning in their assessment of an institution. As a result, an alternative to the U.S. News and World Report rankings, the National Survey of Student Engagement, has been developed; it attempts to measure student learning and satisfaction and hopes to create "national benchmarks of effective educational practices" (Reisberg, p. A67). The survey is still in its infancy, and it is unclear whether it will supplant the U.S. News and World Report rankings or simply offer valuable supplemental information for prospective students, their families, and researchers.

Nevertheless, "academic quality rankings, despite all their faults, have been useful, from their beginnings, in providing more accurate information about the comparative quality of American colleges, universities, and individual departments and professional fields of study than any other source" (Webster 1986, p. 9). Families have come to rely on these studies to make their college-choice decisions, so rankings are most likely here to stay.

See also: College Search and Selection; Higher Education in the United States, subentry on System.

Bibliography

Astin, Alexander W. 1991. Achieving Educational Excellence. San Francisco: Jossey-Bass.

Bogue, E. Grady, and Saunders, Robert L. 1992. The Evidence for Quality. San Francisco: Jossey-Bass.

Ehrenberg, Ronald G., and Hurst, Peter J. 1996. "The 1995 NRC Ratings of Doctoral Programs: A Hedonic Model." Change 28 (3):46–55.

Gose, Ben. 1999. "A New Survey of 'Good Practices' Could Be an Alternative to Rankings." Chronicle of Higher Education October 22:A65.

Hoover, Eric. 2002. "New Attacks on Early Decision." Chronicle of Higher Education January 11:A45.

Hossler, Donald, and Foley, Erin. 1995. "Reducing the Noise in the College Choice Process: The Use of College Guidebooks and Ratings." In Evaluating and Responding to College Guidebooks and Rankings, ed. R. Dan Walleri and Marsha K. Moss. San Francisco: Jossey-Bass.

Machung, Anne. 1998. "Playing the Rankings Game." Change 30 (4):12–16.

McDonough, Patricia M.; Antonio, Anthony Lising; Walpole, Marybeth; and Perez, Leonor Xochitl. 1998. "College Rankings: Democratized College Knowledge for Whom?" Research in Higher Education 39:513–537.

Monks, James, and Ehrenberg, Ronald G. 1998. "U.S. News and World Report's College Rankings: Why They Do Matter." Change 31 (6):42–51.

Reisberg, Leo. 2000. "Are Students Actually Learning?" Chronicle of Higher Education November 17:A67.

Stuart, Debra. 1995. "Reputational Rankings: Background and Development." In Evaluating and Responding to College Guidebooks and Rankings, ed. R. Dan Walleri and Marsha K. Moss. San Francisco: Jossey-Bass.

Trounson, Rebecca, and Gottlieb, Jeff. 2001. "'Best Colleges' List Released amid Criticism." Los Angeles Times September 7.

Webster, David S. 1986. Academic Quality Rankings of American Colleges and Universities. Springfield, IL: Charles C. Thomas.

Webster, David S. 1992. "Rankings of Undergraduate Education in U.S. News and Money: Are They Any Good?" Change 24 (2):18–31.

Webster, David S., and Skinner, Tad. 1996. "Rating Ph.D. Programs: What the NRC Report Says and Doesn't Say." Change 28 (3):22–45.

Internet Resource

U.S. News and World Report. 2002. "America's Best Colleges 2002: Liberal Arts Colleges-Bachelor (National), Top 50." <www.usnews.com/usnews/edu/college/rankings/libartco/tier1/t1libartco.htm>

Barbara F. Tobolowsky
