Thomas F. Connolly, Professor of Humanities and Social Sciences, Prince Mohammad Bin Fahd University

Thomas F. Connolly is Professor of Humanities and Social Sciences at Prince Mohammad Bin Fahd University and is a Fellow of the Center for Futuristic Studies. He is the author of dozens of articles and three books: George Jean Nathan and the Making of Modern American Drama Criticism; Genus Envy: Nationalities, Identities, and the Performing Body of Work; and British Aisles: Studies in English and Irish Drama. He serves on the editorial boards of several scholarly journals. He has been a consultant and commentator for the BBC, The New York Times, The New Yorker, CBS, PBS, and NPR. Dr. Connolly has also worked in state government and for the Department of State’s United States Information Service. A former Fulbright Senior Scholar, Dr. Connolly is the recipient of the Parliamentary Medal of the Czech Republic, awarded for “the educational improvement of the nation.” His next book, “Good-bye Good Ol’ USA: What America Lost in World War II,” is forthcoming from Houghton Mifflin/PMU Press.


What does it tell you that administrators and admissions officers are obsessed with rankings? Furthermore, what does it tell you that faculty find them dubious? Rankings are a marketing tool for the education industry. They reveal how the need, or greed, for funding has suborned institutions of higher learning, and they reflect the preoccupation with indexing. I taught in the United States and Europe for decades and published three books and dozens of articles, yet before I came to Asia I had never heard of SCOPUS or Web of Science. When I was confronted with the influence of these colossi, I contacted colleagues in the USA about their significance. I was met with the equivalent of blank stares. In highly specialized technical fields, there was some acknowledgement, but across the arts and sciences the consensus was that one knew the important journals in one’s field and that such things as “impact factor” were the product of sophistic rather than sophisticated calculators. However, in the past decade, the United Kingdom’s higher education system has been subjugated by impact factoring.

One fears that the same mindset that proffers quantified student evaluations as evidence of teaching quality lies behind this sort of marketing ploy, though student evaluations have somewhat more to do with academics than the gaming that goes on with ratings and rankings. The outstanding example is the notorious success of Northeastern University in Boston, Massachusetts. In a few years, it went from a third-tier commuter school to a highly ranked and “competitive” university that expanded exponentially at every level. By then, its president’s salary was more than double that of Harvard’s president; he lived in a multi-million-dollar townhouse, and a chauffeured limousine drove him the fifteen minutes to his office. How did this happen? The president took a laser-beam approach to improving Northeastern’s position in the annual U.S. News Best Colleges survey. The academic validity of this publication has been compared to the usefulness of the Sports Illustrated “swimsuit” issue as a guide to Olympic swimming gear. Nevertheless, it has a host of imitators who exacerbate the reduction of the pursuit of a college degree to a set of consumer check-offs.

If one chooses to read the literature that questions the efficacy of these quantification pyramids, which also purport to offer qualitative certification, one will discover negative assessments ranging from Better Business Bureau-style complaints to elaborate statistical dismantlings of their claims. An additional, serious flaw in the SCOPUS/Web of Science impact factor scheme is its inability to quantify monographs or books. For this reason, professors at some universities are discouraged from writing books and instructed to concentrate on articles.

News stories such as the fraud indictment of the former business school dean at Temple University in Philadelphia, Pennsylvania, do not bolster the ratings boosters’ case for objectivity. This erstwhile administrator knew that tuition and donations depended on his school’s ranking, so he fabricated data.

Again, one must ask: if commercially produced rankings are valid as more than a marketing tool, why are administrators so desperate to legitimize them? A parallel rankings industry has arisen that skillfully, if craftily, mirrors the education industry. It offers publications, conferences, and reams of research to justify itself. Any academic who examines these works is immediately suspicious because they are produced under commercial auspices. (One notes as well that participation by professors at these conferences does not “count” at rank-conscious universities because such gatherings are not “indexed.”) I confess that I know this because I have been involved in such a conference and have been approached by one of their publications about the placement of articles. I was told that I could not write the articles about my university; a staff member would do that, and I was only to offer superficial guidance. The staff member, who appeared to be barely out of college, had never set foot on the campus, nor had she any knowledge of it. The articles would be produced according to a template.

What is to be done? Universities must take courage. There are universities that refuse to participate in the ranking system and decline to provide the requested data. Administrators ought to leave the assessment of faculty members’ scholarly endeavors to their respective departments. Professors should assert themselves and insist that the bureaucratic approach to scholarship cease. The residue of number crunching must be consigned to the dustbin of history.


**The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy or position of Higher Education Digest**
