There’s a Problem With The Way We Rank Scientific Journals

by Ludvig Bohlin

The research community has passed the 50 million mark in total published scientific papers, and each year approximately 2.5 million new papers are added. The growing body of research means great opportunities, but it also poses challenges when it comes to navigating the immense number of publications. One way to navigate is to use rankings, but the fact that we still rely on methods that are 50 years old to solve this problem raises some interesting questions. Is there a problem with the way we rank science?

Increased research funding, in combination with digitalization, is driving the growing number of scientific publications. Along with the increase in publications, we have also seen a substantial increase in the total number of scientific journals. Add to this the growing number of predatory or fake journals, which publish high volumes of poor-quality research, and you have a veritable jungle of journals to wade through. How is it even possible to find relevant research in this jungle?

While the only way to fully evaluate the quality and impact of a scientific paper is to read and analyse it, this approach takes far too much time even for a relatively small number of publications. To simplify the task, scientists have therefore constructed methods for finding relevant publications. For example, publications in similar research fields are grouped together in the same journals, and these journals are in turn ranked by importance. A journal's importance reflects how widely it is read and how it is perceived in the community, and it is typically boiled down to a ranking based on how often the publications in the journal are cited.
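To make this concrete, here is a minimal Python sketch of the kind of citation-count score that underlies classic citation-based rankings such as the impact factor. The journal names, citation records, and numbers are purely illustrative assumptions, not real data:

```python
# Illustrative citation records: (citing_year, cited_journal, cited_pub_year).
# In a real ranking these would come from a citation database.
citations = [
    (2016, "Journal A", 2015),
    (2016, "Journal A", 2014),
    (2016, "Journal B", 2015),
    (2016, "Journal A", 2015),
]

# Number of citable items each journal published in the two preceding years.
citable_items = {"Journal A": 2, "Journal B": 4}

def impact_factor_style_score(journal, year):
    """Citations in `year` to items published in the two previous years,
    divided by the number of citable items in those years."""
    cites = sum(
        1 for (cy, j, py) in citations
        if cy == year and j == journal and py in (year - 1, year - 2)
    )
    return cites / citable_items[journal]

for j in citable_items:
    print(j, impact_factor_style_score(j, 2016))
# Journal A scores 3/2 = 1.5, Journal B scores 1/4 = 0.25
```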

As the number of journals has multiplied, journal rankings have become increasingly important for scientific decisions. Not only are the rankings widely used to evaluate impact and quality; from submissions and subscriptions to grants and hirings, researchers, policy makers, and funding agencies all make important decisions influenced by journal rankings. There is no doubt that journal rankings play a crucial role in science. However, there are some problems with the rankings we use today, for example:

  •  The ranking data is not open
  •  The ranking can be misleading
  •  The ranking system is not transparent

In our paper, we examine how robust traditional rankings are to the selection of included journals. Typically, traditional rankings are derived from a network of citations between a selection of journals, so the rankings unavoidably depend on this selection. However, little is known about how strong that dependence is. To address this problem, we compare the robustness of traditional journal rankings with that of our own ranking method.
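As a toy illustration of this dependence (not the exact method from the paper), the sketch below builds a hypothetical journal citation network and computes a network-based ranking, PageRank, once on the full selection of journals and once with one journal dropped. All journal names and citation counts are made up:

```python
import networkx as nx

# Hypothetical journal citation network: an edge (u, v, w) means that
# articles in journal u cited articles in journal v a total of w times.
edges = [
    ("A", "B", 30), ("B", "A", 10),
    ("B", "C", 20), ("C", "A", 25),
    ("C", "B", 5),  ("A", "C", 15),
]

def rank(journals):
    """PageRank restricted to the chosen selection of journals."""
    g = nx.DiGraph()
    g.add_weighted_edges_from(
        (u, v, w) for u, v, w in edges if u in journals and v in journals
    )
    return nx.pagerank(g, weight="weight")

print(rank({"A", "B", "C"}))  # full selection
print(rank({"A", "B"}))       # drop journal C: scores of A and B shift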

We show that traditional rankings are less robust to the selection of journals than our model, which uses more citation data over a longer time period. Unlike traditional rankings, our model also provides an open and transparent ranking, and its goal is to capture the actual behavior of researchers as they navigate between journals. Our model thus provides a way to overcome the problems associated with the way we rank scientific journals.
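To give a flavor of what using longer citation patterns can mean, the sketch below (my own illustration, not the paper's implementation) contrasts first-order transitions between journals, which depend only on the current journal, with second-order transitions, which also remember the previously visited journal. The citation paths are invented:

```python
from collections import Counter

# Hypothetical citation paths: a reader follows a citation from a paper
# in the first journal to the second, then on to the third.
paths = [
    ("A", "B", "A"), ("A", "B", "A"),
    ("C", "B", "C"), ("C", "B", "C"),
    ("A", "B", "C"),
]

# First-order: where flow goes next depends only on the current journal.
first_order = Counter((b, c) for _, b, c in paths)

# Second-order: the next step also depends on where the flow came from,
# which can capture readers tending to return to their own field.
second_order = Counter(((a, b), c) for a, b, c in paths)

print(first_order)   # from B: 2x to A, 3x to C -- no memory of origin
print(second_order)  # from (A, B): mostly back to A; from (C, B): back to C
```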

Read the original paper

Bohlin, L., Viamontes Esquivel, A., Lancichinetti, A., and Rosvall, M. (2016). Robustness of journal rankings by network flows with different amounts of memory. Journal of the Association for Information Science and Technology, 67: 2527–2535.