The work of women researchers is worth about 80% of the work of their male colleagues. At least, that’s what you’d have to believe according to one standard measure of quality.
The value of a research article is often determined by the number of times other researchers mention that article in their own writing.
Citation statistics easily become a proxy for quality assessments in hiring and promotion processes and also when grants are awarded. International rankings of universities sometimes consider how many times an institution’s professors get cited, too.
The number of citations, in other words, is treated as a reliable and accurate indicator of the quality of an article for some very important decisions.
Unfortunately, it’s not. This becomes painfully clear when we notice that men and women researchers get cited at different rates, and when we try to understand why.
The gender citation gap
A new study in the field of International Relations (IR) demonstrates that men get cited more than women. This study is a good model for studying other fields, too. Furthermore, it illustrates how social and subconscious factors slow down the advancement of science.
Here’s a taste of the fascinating work by Daniel Maliniak, Ryan Powers and Barbara F. Walter, but I’ll warn you right now, the conclusions are depressing.
A research article written by a woman and published in any of the top journals will still receive significantly fewer citations than if that same article had been written by a man.
Some will think this is unsubstantiated and unprovable, but what if it’s true? If it is, it might explain the famous leaky pipeline whereby women don’t advance all the way to the top. Why don’t women get ahead? Because their work is cited less, which makes them weaker candidates to be hired, promoted, and supported. And why doesn’t their work get cited? Let’s see what the explanation could be.
A database containing 25 years of articles from the 12 leading IR journals was used for the study. The database itself provides information about the authors, their institutions, some aspects of the content, and more. The researchers correlated the articles in this database with citation information available through the Web of Knowledge database.
Over the period studied, articles authored by men alone received about 25 citations on average, while articles authored by women alone received about 20. That ratio, 20 to 25, is the 80% figure from the opening. This may seem like a small difference, but the average article in the humanities and social sciences hardly gets cited at all — on average, less than once a year — so even these small numbers strongly impact the perceived quality of the work.
Many hypotheses were entertained about the cause of the citation difference.
Articles published by women in the top IR journals are cited less often than those written by men even after controlling for the age of publication, whether the author came from a [top research] school, the topic under study, the quality of the publishing venue, the methodological and theoretical approach, and the author’s tenure status.
If these factors didn’t explain the difference, what might? There are a couple of intriguing possibilities. One involves the way we cite each other. And one involves the way we cite ourselves.
A self-citation is a reference in an article to other work by the author of the article. There’s nothing inherently bad or problematic about self-citation. Scholars have long-term projects and they publish their results bit by bit; it’s only natural to refer to earlier work when it forms part of the context for a new article.
But apparently there’s something a little awkward for some about citing their own work. Women are more reserved about referring to themselves than men are. In the IR study, men cited their own previous work about twice as much as women did.
Could the citation gap between men’s and women’s work boil down to self-citations? Maliniak, Powers and Walter tried to answer this question by subtracting self-citations from the totals in the database, but the gender gap persisted.
It turns out that self-citation leads to more citations by others: through self-citation, colleagues become aware of the work and may refer to it themselves. So the IR researchers went one step further and subtracted not only self-citations but also an estimate of the non-self-citations that could be construed as the self-citation bonus (based on independent work on the long-term effects of self-citation). Even this move failed to level out the gender gap.
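To make the subtraction exercise concrete, here is a minimal sketch of the idea in Python. All the article records, citation counts, and the bonus multiplier below are invented for illustration; they are not values from the Maliniak, Powers and Walter study.

```python
# Hypothetical data: (author_gender, total_citations, self_citations).
# The numbers are made up for this sketch.
articles = [
    ("m", 30, 4), ("m", 22, 3), ("m", 28, 5),
    ("w", 21, 1), ("w", 19, 2), ("w", 20, 1),
]

# Assumed: each self-citation eventually attracts some extra citations
# from others (the "self-citation bonus"). This multiplier is a
# placeholder, not an empirical estimate.
BONUS_PER_SELF_CITATION = 0.5

def adjusted_mean(gender):
    """Mean citations per article after removing self-citations
    and the estimated bonus they generate."""
    rows = [(t, s) for g, t, s in articles if g == gender]
    adjusted = [t - s - BONUS_PER_SELF_CITATION * s for t, s in rows]
    return sum(adjusted) / len(adjusted)

gap = adjusted_mean("m") - adjusted_mean("w")
print(f"adjusted gap: {gap:.2f} citations per article")
```

The point of the exercise is that if the gap were driven by self-citation habits alone, `gap` would shrink to roughly zero after these subtractions; in the study, it did not.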
Citation alone isn’t the whole story, of course. Articles also gain position and authority in a field based on where they are cited and by whom. Network analyses are revealing on this point, too, showing that the work of women is not only cited less, but that it also is less authoritative in the sense that it is less likely to be cited in the most central articles in the field.
Network analyses also reveal that men aren’t the only ones with a bias. Women and men alike tend to cite more work by their own sex than by the other. If there are more men in a field, then this discovery is also part of the explanation for the sex-based difference in citation frequency.
Networks matter. Producing high-quality work is not sufficient for research to gain the attention of the widest number of scholars or have the greatest impact. Scholars tend to cite scholars they know. If networks tend to bifurcate along gender lines, then any field that is disproportionately male will also disproportionately favor men’s work.
Think about what that means. Men cite men more and women cite women more. So, if a field is 80% men and 20% women, then of course the work of men is going to get a lot more citations than the work of women. Could journals and editors help solve this with gentle questions or nudges?
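A toy expected-value calculation shows how composition plus own-gender preference can produce a per-article gap. All the numbers below (the 80/20 split, the size of the preference, the citations handed out per author) are invented for illustration; they are not estimates from the study.

```python
# Toy model: a field that is 80% men, where each gender cites its own
# gender 10 percentage points more often than the field's composition
# alone would predict. Purely illustrative numbers.

P_MEN, P_WOMEN = 0.80, 0.20   # field composition
DELTA = 0.10                  # own-gender preference above the baseline
CITES_PER_AUTHOR = 10         # citations each author hands out

men_cite_men = P_MEN + DELTA      # 0.90 of men's citations go to men
women_cite_men = P_MEN - DELTA    # women favour women, so only 0.70

# Expected citations received per author, by gender: total flow into
# each gender's work divided by that gender's share of authors.
to_men = CITES_PER_AUTHOR * (P_MEN * men_cite_men
                             + P_WOMEN * women_cite_men) / P_MEN
to_women = CITES_PER_AUTHOR * (P_MEN * (1 - men_cite_men)
                               + P_WOMEN * (1 - women_cite_men)) / P_WOMEN

print(f"per-author citations: men {to_men:.2f}, women {to_women:.2f}")
```

With these made-up inputs, men's work ends up with noticeably more citations per author than women's, even though every article is assumed to be of equal quality. The gap here comes entirely from who knows and cites whom.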
The citation hoax
There is a gender citation gap and it demonstrates that citation count is neither an objective nor reliable indicator of quality. Uncovering this hoax is important for at least two reasons.
First of all, citations are increasingly employed in various kinds of professional evaluations; the emergence of Google Scholar has made it even easier to gather this information and once it’s there, it’s more likely to be used. As citations become an important measure of quality in promotion reviews, the citation gap will strengthen the disproportionate promotion of men over women.
Secondly, given the irrational skewing of citation and authority against work done by women, the ideas contained in that work are under-utilized in the further advancement of research since they are less well known by others. The frontiers of knowledge will not be pushed forward as they should be, but will rather reflect a bias that downgrades the work of women and leaves us all as the literally unknowing victims of a hoax.
What would your colleagues say about this? Share this link and ask them!
Are you an editor or a publisher? What could journals do to help counter the gender citation gap?
Research consulted: The gender citation gap in international relations, Daniel Maliniak, Ryan Powers and Barbara F. Walter, International Organization, August 2013, pp. 1–34. [Non-paywall version]
I encourage you to republish this article online and in print, under the following conditions.
- You have to credit the author.
- If you’re republishing online, you must use our page view counter and link to its appearance here (included at the bottom of the HTML code), and include the links from the story. In short, grab the HTML code below the post and use all of it.
- Unless otherwise noted, all my pieces here have a Creative Commons Attribution license -- CC BY 4.0 -- and you must follow the (extremely minimal) conditions of that license.
- Keeping all this in mind, please take this work and spread it wherever it suits you to do so!