University Rankings Revisited

Review of

The Rise of American Research Universities

By Hugh Davis Graham and Nancy Diamond. Baltimore, Md.: Johns Hopkins University Press, 1997, 319 pp.

To the list of life’s very few certainties, Americans at least may confidently add a new category: ratings. We are an extremely competitive people, a fact reflected in our political and economic systems. Moreover, we are not content that every contest simply produces a winner and a loser; we yearn to know who among the winners is the best of the best. The proliferation of “best-of” lists is limited only by the imaginations of marketing specialists.

Until relatively recently, higher education was only marginally involved in the ratings game. Although intercollegiate athletics has been one of the main arenas of ratings madness, we hardly ever saw crazed university presidents, their faces painted, waving their index fingers and screaming, “We’re number one,” into TV cameras. Today, however, we have the functional equivalent of that finger waving in the reactions each year to the ups and downs of the institutional ratings published by U.S. News and World Report. It is surely one of the least savory developments in the recent history of higher education.

It is not, however, wholly unprecedented. In 1925, Raymond Hughes, president of Iowa State College, ranked 24 graduate programs in 38 universities. Others then quickly aggregated his data into institutional rankings. In 1957, Hayward Keniston undertook a systematic ranking of universities by asking department heads at 25 “leading” universities to rate the graduate departments. Since then, four national studies have provided fodder for institutional rankings. The two most recent, published in 1982 and 1995 by the National Research Council (NRC), are the most sophisticated and the most sensitive to the essential silliness of attempting to rank in order of quality entities as complex and diverse as universities. Alas, within days of the publication of the NRC studies, university public relations offices were cranking out analyses to the media demonstrating how well their institutions fared and/or why the study methodology failed to do justice to their splendid programs. Little, it seemed, had changed since 1925.

This history of ratings in higher education is recounted in useful detail in this excellent book by Hugh Davis Graham, professor of American history at Vanderbilt University, and Nancy Diamond, who has a Ph.D. in public policy from the University of Maryland, Baltimore County. Indeed, the book contains just about everything worth knowing about the attempts to rank U.S. research universities. Unfortunately, the authors point out, all previous studies relied heavily on rankings based on reputation. Reputation may be an increasingly perishable commodity in public life these days, but it is remarkably stable in academic life, and not always with justification. The well-known halo effect can mask declines in quality and dampen the perception of quality improvements. As Graham and Diamond write, “Reputational surveys, by capturing shared perceptions of institutions’ rising and falling status in the academic pecking order, reinforce and prolong the reputations they survey.”

New kids on the block

But the authors’ interest in rankings is not the prurient one that puts U.S. News and World Report’s annual higher education issue right up there with Sports Illustrated’s swimsuit issue in newsstand sales. They have a serious point to make, and they make it convincingly: “The central argument of this book,” they write, “is that new research universities did emerge from the competitive scramble of 1945 to challenge more successfully than has been realized the hegemony of traditional elites.”

On one level, that conclusion seems so obvious as to be almost self-evident. The sheer number of universities that are now major actors in the research enterprise would seem to belie any notion of a system in which the rich get richer and the rest scramble for the leftovers. But in fact that notion has periodically dominated federal research policy and is the most commonly stated justification for the rise in academic pork-barrel spending. In the heat of politics and institutional aggrandizement, even what is obvious sometimes yields to what is advantageous, and the political advantage for some years now has been on the side of those who cry poor.

But there is more to the matter than simply the number of universities now in the research system as compared with some earlier period. The authors want to prove a further point: The rank order of universities as producers of research has changed, and a surprising number of newcomers have made it into the upper division of the big leagues.

As their instrument for demonstrating the change, Graham and Diamond have devised a set of indices that measures research productivity and eliminates the bias for sheer size that inevitably accompanies rankings that emphasize the volume of sponsored research. They focus instead on per capita publication output, especially publication in the leading peer-reviewed journals in the major disciplines. There are no perfect measures, including these, but it turns out that looking at universities over time using these measures is quite revealing and often surprising.

The envelope, please

There are few surprises at the top. In the authors’ rankings, the leading private universities, as in all previous studies, are still the national leaders. The authors explain this long record of high quality without the conspiratorial overtones that sometimes accompany such analyses: “First, their sovereignty as private entities enabled them to move quickly to exploit targets of opportunity, and their prestige and prior service guaranteed access to the corridors of national power . . . [but] second, and less well understood than the prestige factor, was a structural advantage. The private research universities were organized in a way that maximized, far more than public universities, the proportion of campus faculty whose research fields were supported by major federal funding agencies.” There was less pressure in private universities for instruction and applied research and greater freedom to emphasize basic research, which was a better fit with government funding policies.

Even among the privates, though, there has been considerable movement into the top rank. By the authors’ measures, 10 of the institutions in their top 25 for research productivity did not make the top 25 in the reputational rankings of the 1982 NRC study.

The biggest surprises are in the public sector. The authors place 14 public universities in their top 25 and 21 in the top 33; none of these schools showed up in comparable places in the 1982 NRC study. All general campuses of the University of California are on the list, as are three State University of New York campuses. Of that group, only U.C. Berkeley was a major university before World War II; most did not open until after the war, and all are now major universities. Thus, the argument that the research system is dominated by a few institutions whose faculty form an old-boy network committed to taking care of their own simply does not wash. It is probably too much to expect persuasive evidence to prevail over political and institutional self-interest, but it should at least be harder from now on to make the stale old arguments with a straight face.

Fortunately, the authors are not satisfied simply to have devised a different way of ranking universities. Their purpose is to understand why the system has developed as it did. What accounts for the extraordinary productivity of not just single institutions, or a small group of institutions, but of the system as a whole? Those who have envied the success of U.S. universities (and that includes most of the rest of the world) may not be happy with the answers advanced by Graham and Diamond. It is, they write, a story of American exceptionalism.

“The historic American preference for decentralized authority and weak governmental institutions has exacted a price: vigilante justice, chattel slavery, incompetent militias, debased currencies, fraudulent securities, medical quackery, and poisoned food and drugs. In higher education, however, the nation’s historic proliferation of weak institutions in a decentralized environment paid surprising dividends in the modern era of market competition.” When the war and its aftermath produced opportunities in the form of large federal funding of research and huge state investments in public educational institutions, the tradition of competition and the flexibility made possible by the absence of an overarching government ministry unleashed enormous energies and an unprecedented expansion of research activity centered in universities.

Success factors

The authors identify three key factors in explaining the success of the system as a whole and of the institutions within it. First, as already noted, the United States’ large, dynamic private institutions provided the initial engine for the growth of university-based research, and they continue to dominate the research system even as they provide a spur and a model to their public sector competitors. “The unique historical role of elite private institutions in American higher education has been a source of both pride and resentment. On the one hand, the well-endowed private elite institutions, by bidding up the stakes in ongoing market competition, have raised the level of academic support and performance in public universities. On the other hand, private universities, enjoying a large measure of freedom from the bureaucratic and regulatory constraints of the public sector, have nonetheless relied heavily on government tax and fiscal policies to fund research, build and equip the research infrastructure, and subsidize high tuitions.”

Second, the authors note the enormous importance of biomedical research in the total system of research support and, as a consequence, the huge advantage conferred on a university with a first-class medical school. Indeed, of the leading nontechnical institutions in the authors’ rankings, only Princeton and Berkeley make it to the top without a medical school.

Finally, a number of state schools (Berkeley, Wisconsin, Michigan, Minnesota, and Illinois) have long been among the leading U.S. universities. Thus, it was not necessary to invent a new model for a high-quality public university.

For those who make policy about research and decide who gets what, the authors have presented a useful paradox. On the one hand, they provide strong evidence of the respect due to those institutions whose faculty and leadership have sustained high levels of quality over a long period of time. On the other, they provide an incentive and one set of means to look for quality in other places. In fact, there is no necessary conflict between the two. Scarce public resources should go to those who are judged in fair competition to be likely to do the best work. There is something to be said for that proposition in any public program; there is everything to be said for it when the decisions involve intellectual work. In order to be fair, though, those who judge the competition must be prepared to put aside preconceptions based solely on reputation and look for ways to find high quality wherever it may exist.

In this fine book, Graham and Diamond make an important contribution to our understanding of what actually happened during that amazing period in the history of higher education that began with World War II. It is a pleasing addition to their accomplishment that they have done it by turning the feared and despised tool of institutional rankings into an instrument for better understanding that history.

Rosenzweig, Robert M. “University Rankings Revisited.” Issues in Science and Technology 13, no. 4 (Summer 1997).