
Quality control in research: the mysterious case of the bouncing impact factor

Research must be reliable, and publication is part of our quality control system. Scientific articles are reviewed by peers and screened by editors. Reviewers ideally help improve the project and its presentation, and editors ideally select the best papers to publish.

Perhaps to help scientists through the sea of scholarly articles, an attempt has been made to quantify which journals are most important to read — and to publish in. This system — called impact factor — is used as a proxy for quality in decisions about hiring, grants, promotions, prizes and more. Unfortunately, that system is deeply flawed.

Impact factor is a scam. It should no longer be part of our quality control system.

What is impact factor?

A journal’s impact factor is assigned by Thomson Reuters, a private corporation, and is based on the listings they include in their annual Journal Citation Reports.

To calculate the impact factor for a journal in 2014, we have to know both the number of articles published in 2012 and 2013 and the number of citations those articles received in 2014; the latter is then divided by the former. If a journal publishes a total of 100 articles in 2012 and 2013, and if those articles collectively garner 100 citations in 2014, then the impact factor for 2014 is 100/100 = 1. Björn Brembs illustrates it like this.

[Slide: Brembs’ illustration of the two-year impact factor calculation]
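
For concreteness, here is a minimal sketch of that arithmetic in Python. The function name is mine; the 100-articles, 100-citations case is the one from the paragraph above.

```python
def impact_factor(citable_items, citations):
    """Impact factor for year Y: citations received in year Y to items
    published in years Y-1 and Y-2, divided by the number of citable
    items published in those two years."""
    return citations / citable_items

# The worked example above: 100 articles published across 2012 and
# 2013 that collectively receive 100 citations in 2014.
print(impact_factor(citable_items=100, citations=100))  # 1.0
```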

Impact factor in the humanities

Impact factors in the natural sciences are much higher than those in the humanities. Journals in medicine or science can have impact factors as high as 50. In contrast, Language, the journal of the Linguistic Society of America, has an impact factor under 2, and many humanities journals are well under 1.

If impact factor merely indicated readership, this gap might be fair; journals in medicine or science may well have 50 times the readership of even the biggest humanities journals. But when impact factor accords prestige, and even becomes a surrogate for quality, the variation can give the impression that research in medicine and the sciences is of higher quality or more important than research in the humanities. I would wager that many political debates at universities are fed by such attitudes.

Fortunately, there is a much simpler explanation for low impact factors in the humanities. While articles in top science journals often run just a few pages, those in the humanities are more likely to run a few dozen. Naturally, it takes more time to review or revise a long article. As a result, many top journals in the humanities take 2-3 years from initial submission to publication. 2-3 years! This means that the window of measurement for the impact factor calculation often closes before a paper is cited even once.

What counts as an article?

Impact factor can be changed in two ways, and both of them are sometimes gamed. One option is to increase the number of citations. Editors have been known to practice coercive citation, as I wrote about in How journals manipulate the importance of research and one way to fix it.

The second way to increase impact factors is to shrink the number of articles in the denominator. In addition to articles, journals publish letters, opinion pieces, and replies. These are rarely cited, and sometimes editors negotiate with Thomson Reuters about which of them should be excluded from the count; excluding them shrinks the denominator, while any citations they do receive still count in the numerator. The impact factor game provides an amusing description of this process.

Current Biology saw its impact factor jump from 7 to almost 12 from 2002 to 2003. In a recent talk, Brembs reveals how this happened.

In the calculation of Current Biology’s 2002 impact factor, the journal was reported to have published 528 articles in 2001. But for the 2003 calculation, for which 2001 is still relevant, that number had been reduced to 300. No wonder the impact factor took a hop! They can’t both be right, and I wouldn’t be surprised if negotiations were involved.

[Slide: Brembs’ figures for Current Biology’s 2001 article counts]
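
To see why a smaller article count produces such a jump, here is a hypothetical back-of-the-envelope version in Python, simplified to the disputed year. Only the article counts (528 and 300) come from Brembs’ slides; the citation figure is invented purely to make the arithmetic visible.

```python
# Simplified illustration: hold the citation count fixed and vary only
# the number of "citable items" reported for 2001. The citation figure
# below is assumed, not a reported number.
citations = 3700

print(round(citations / 528, 1))  # 7.0  -> with the count used in 2002
print(round(citations / 300, 1))  # 12.3 -> with the count used in 2003
```

The citations stay the same; only the denominator changes, and the impact factor nearly doubles.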

We must build an infrastructure for research that delivers genuine quality control. Ad hoc measurement windows that treat different fields differently, and systems in which importance gets confounded with commercial interests, cannot be part of it.

And if we succeed in finding new ways to determine quality, impact factor will surely get bounced.

Many of the points in Brembs’ talk, When decade-old functionality would be progress: the desolate state of our scholarly infrastructure, deserve the attention of those who think about improving scientific communication. In addition to the slides, Brembs and his colleagues Katherine Button and Marcus Munafò have written an important paper, Deep impact: unintended consequences of journal rank, which I’ve also discussed at The Guardian in Science research: 3 problems that point to a communications crisis. Brembs gave the talk at the 2014 Munin Conference at the University of Tromsø.

