Gender Equality

The great citation hoax: Proof that women are worse researchers than men

The work of women researchers is worth about 80% of the work of their male colleagues. At least, that’s what you’d have to believe according to one standard measure of quality.

The value of a research article is often determined by the number of times other researchers mention that article in their own writing.

Citation statistics easily become a proxy for quality assessments in hiring and promotion processes and also when grants are awarded. International rankings of universities sometimes consider how many times an institution’s professors get cited, too.

There is a gender citation gap showing that it’s a hoax to claim citation count is an objective and reliable indicator of quality.

The number of citations, in other words, is treated as a reliable and accurate indicator of the quality of an article for some very important decisions.

Unfortunately, it’s not. This becomes painfully clear when we notice that men and women researchers get cited at different rates, and when we try to understand that.

The gender citation gap

A new study in the field of International Relations (IR) demonstrates that men get cited more than women. This study is a good model for studying other fields, too. Furthermore, it illustrates how social and subconscious factors slow down the advancement of science.

Here’s a taste of the fascinating work by Daniel Maliniak, Ryan Powers and Barbara F. Walter, but I’ll warn you right now, the conclusions are depressing.

A research article written by a woman and published in any of the top journals will still receive significantly fewer citations than if that same article had been written by a man.

Some will think this is unsubstantiated and unprovable, but what if it’s true? If it is, it might explain the famous leaky pipeline whereby women don’t advance all the way to the top. Why don’t women get ahead? Because their work is cited less, which makes them less good candidates to be hired, promoted, and supported. And why doesn’t their work get cited? Let’s see what could possibly be the explanation.

A database containing 25 years of articles from the 12 leading IR journals was used for the study. The database itself provides information about the authors, their institutions, some aspects of the content, and more. The researchers correlated the articles in this database with citation information available through the Web of Knowledge database.

Over a period of several years, the average number of citations of articles authored by men alone was about 25 while it was about 20 for articles authored by women alone. This may seem like a small difference, but the average article in the humanities and social sciences hardly gets cited at all — on average, less than once a year — so even these small numbers strongly impact the perceived quality of the work.

Many hypotheses were entertained about the cause of the citation differences.

Articles published by women in the top IR journals are cited less often than those written by men even after controlling for the age of publication, whether the author came from a [top research] school, the topic under study, the quality of the publishing venue, the methodological and theoretical approach, and the author’s tenure status.

If these factors didn’t explain the difference, what might? There are a couple of intriguing possibilities. One involves the way we cite each other. And one involves the way we cite ourselves.


A self-citation is a reference in an article to other work by the author of the article. There’s nothing inherently bad or problematic about self-citation. Scholars have long-term projects and they publish their results bit by bit; it’s only natural to refer to earlier work when it forms part of the context for a new article.

But apparently there’s something a little awkward for some about citing their own work. Women are more reserved about referring to themselves than men are. In the IR study, men cited their own previous work about twice as much as women did.

Could the gap in citation counts between the work of men and women boil down to self-citations? Maliniak, Powers and Walter tried to answer this question by subtracting self-citations from the totals in the database, but the gender gap persisted.

It turns out that self-citation leads to more citations by others: through self-citation, colleagues become aware of the work and may refer to it themselves. So the IR researchers went one step further and subtracted not only self-citations but also a number of non-self-citations that could be construed as the self-citation bonus (based on independent work on the long-term effects of self-citation). But even this move failed to level out the gender gap.
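The kind of adjustment described above can be illustrated with a small sketch. The numbers below are made up for illustration (loosely echoing the 25-vs-20 averages and the two-to-one self-citation rates mentioned in this post), and the one-for-one "bonus" factor is an assumption for the example, not a figure from the study:

```python
# Illustrative sketch: remove self-citations, plus an assumed number of
# extra citations that each self-citation attracts from others.
# All numbers here are hypothetical, not the study's data.

def adjusted_citations(total, self_cites, bonus_per_self_cite=1.0):
    """Subtract self-citations and an assumed self-citation bonus
    (extra citations by others triggered by each self-citation)."""
    bonus = self_cites * bonus_per_self_cite
    return total - self_cites - bonus

# Hypothetical averages, with men self-citing about twice as often as women.
men = adjusted_citations(total=25, self_cites=4)    # 25 - 4 - 4.0 = 17.0
women = adjusted_citations(total=20, self_cites=2)  # 20 - 2 - 2.0 = 16.0

print(men, women)  # → 17.0 16.0 — the gap shrinks but does not disappear
```

Even under this generous adjustment, the illustrative gap narrows without closing, which is the pattern the researchers report.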

Citation networks

Citation alone isn’t the whole story, of course. Articles also gain position and authority in a field based on where they are cited and by whom. Network analyses are revealing on this point, too, showing that the work of women is not only cited less, but is also less authoritative in the sense that it is less likely to be cited in the most central articles in the field.

Network analyses also reveal that men aren’t the only ones with a bias. Women and men alike tend to cite more work by their own sex than by the other. If there are more men in a field, then this discovery is also part of the explanation for the sex-based difference in citation frequency.

Networks matter. Producing high-quality work is not sufficient for research to reach the widest audience of scholars or have the greatest impact. Scholars tend to cite scholars they know. If networks tend to bifurcate along gender lines, then any field that is disproportionately male will also disproportionately favor men's work.

Think about what that means. Men cite men more and women cite women more. So, if a field is 80% men and 20% women, then of course the work of men is going to get a lot more citations than the work of women. Could journals and editors help solve this with gentle questions or nudges?
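A back-of-the-envelope model makes that arithmetic concrete. Suppose authors cite same-sex papers at some multiple of the rate at which they cite other-sex papers (the factor of 2 below is an assumed homophily parameter, not an estimate from the study). With equal-sized groups the preference would cancel out; in an 80/20 field, the majority's preference dominates, so women's papers end up cited less per paper:

```python
# Toy model of same-sex citation preference in a majority-male field.
# The own_pref multiplier is an assumption for illustration only.

def per_paper_citations(share_men, own_pref=2.0, cites_per_author=1.0):
    """Expected citations per paper when each author cites same-sex
    papers at own_pref times the rate of other-sex papers."""
    share_women = 1 - share_men
    # Fraction of a man's (resp. a woman's) citations that go to men's papers.
    men_to_men = share_men * own_pref / (share_men * own_pref + share_women)
    women_to_men = share_men / (share_men + share_women * own_pref)
    # Pool all citations, then divide each sex's share by its paper share.
    to_men = share_men * men_to_men + share_women * women_to_men
    to_women = 1 - to_men
    return (cites_per_author * to_men / share_men,
            cites_per_author * to_women / share_women)

men, women = per_paper_citations(share_men=0.8)
print(round(men, 2), round(women, 2))  # → 1.06 0.78
```

In this toy model a woman's paper collects only about three-quarters of the citations a man's paper does, even though every author applies exactly the same preference. The skew comes entirely from the 80/20 composition of the field.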

The citation hoax

There is a gender citation gap and it demonstrates that citation count is neither an objective nor reliable indicator of quality. Uncovering this hoax is important for at least two reasons.

First of all, citations are increasingly employed in various kinds of professional evaluations; the emergence of Google Scholar has made it even easier to gather this information and once it’s there, it’s more likely to be used. As citations become an important measure of quality in promotion reviews, the citation gap will strengthen the disproportionate promotion of men over women.

Secondly, given the irrational skewing of citation and authority against work done by women, the ideas contained in that work are under-utilized in the further advancement of research since they are less well known by others. The frontiers of knowledge will not be pushed forward as they should be, but will rather reflect a bias that downgrades the work of women and leaves us all as the literally unknowing victims of a hoax.

What would your colleagues say about this? Share this link and ask them!

Are you an editor or a publisher? What could journals do to help counter the gender citation gap?

Research consulted: The gender citation gap in international relations, Daniel Maliniak, Ryan Powers and Barbara F. Walter, International Organization, August 2013, pp. 1-34. [Non-paywall version]

My interest in moving universities towards balance encompasses gender equality, the communication of scientific results, promoting research-based education, and leadership development more generally.



  • Ba-ldei Aga says:

Perhaps I missed it, but I did not understand how the sex ratio of authors was controlled, or what was done with mixed-gender author teams.

    • Curt Rice says:

      The part of the research I report on here is mostly about single-authored papers. There is more to be told, though, and the co-author scenario isn’t any more uplifting. Basically, women get cited more when they co-author with a man than when they write on their own. Check out the original article, via the link above, to read more about that.

      • Ba-ldei Aga says:

        1 more point:
is it really possible to determine an author’s gender from a name, e.g., JK Smith?

        • Curt Rice says:

          It’s not always easy, that’s right. The authors of the research article this blog entry is based on describe their procedure as follows:

          This coding is based first on the pronouns that the individual authors use to refer to themselves in articles or on their department website. If no pronoun is used by the individual, we looked for photographs of the individual on their department or personal website. Finally, if no pronoun usage or photo was available, we coded based on the most common gender associated with the individual. In cases where a name was not overwhelmingly associated with one gender or another, we left the gender of the article as missing data.

  • Tim says:

Nice summary of the findings. Your conclusion is that because citation patterns differ by gender, citation counts must be a bad judge of article quality.
In other words, you believe there is no possible way for women to be worse researchers than men, and so citation counts as a tool for judging article quality must be discarded.

The other option is to view citations as showing that there is a gender gap in academic articles: that women may be suffering in academia, and that citation counts reveal this.

    • Stephen says:

I understand they selected articles by journal. So unless you are saying that journals accept lower-quality articles from women than from men, and that referees are more forgiving of faults in articles by women than in articles by men, the assumption should be that the average quality of articles in a journal is the same regardless of the sex of the author.

If you are making that claim, then you have to explain why the bias favouring articles written by women suddenly disappears when it comes to citing them.

  • Thank you for highlighting this research. I have long suspected that this is what is happening. See my latest post on my website

  • bealoideas says:

There are a lot of assumptions in this article, and therefore it does not logically hold together. As Tim mentioned, the difference could be a secondary effect of the leaky pipeline, for instance a long-term negative effect of maternity leave. We all know that citation metrics have their flaws, but they are certainly not a hoax. They should only be used in combination with other measures, as we have always known.

    • Curt Rice says:

      The blog entry here is based on an article and in 1000 words I can’t cover everything. The researchers who wrote the article — linked to above, at the end of the blog entry — address this concern of yours and demonstrate that it is not a relevant factor in the data they are looking at.

  • El Jefe says:

    I think the network effect is a big one – after a while in a field, you get a feeling for who is already doing good work, so you tend to bias towards looking at their stuff first, or you cite them because everyone is familiar with their work, it’s well accepted etc.

One of the other major places you pick up good references is at conferences, where you see and meet people, and get to talk to them one on one to actually get at their thoughts on the subject. From the conversations you have, you are prompted to look at their work (or not). If it’s still a grade-school boys-vs-girls mixing mentality (and it is – I do notice that the young PhD students still clump by sex at conferences), then women’s networks are bound to grow more slowly – because most of the researchers are male!

    • Curt Rice says:

      I think these points are indeed part of the story: we do tend to look at work by people we’ve encountered before, or people who we meet at conferences. And so as these networks get gendered, then citation practices also get gendered. Conference travel is important, especially early in an academic career … which is why we have to realize that gender equality at work is not trivially related to gender equality at home.

  • Simon Rose says:

    It’s worth considering, notably in the field of IR Scholarship, that the predominant theories are based on rationalism; Realism notably, which dominates the discourse in America (and as a result much of the rest of the world… not least because of its legitimisation and subsequent commercial linkage to the continued global predominance of the US military)… with that being said, and this might be a sweeping generalisation, but still one based on my own experience studying in the field; female scholars seem to tend to be interested in more ‘progressive’ social theories, and so are often dismissed by their male counterparts on theoretical grounds… i.e. for being less focused on abstract ideas and more on immediate ‘on the ground’ human concerns such as identity politics rather than rationally motivated behaviours (as if this is scholarly infancy and does not show ‘higher thought’). Of course if we consider how this landscape allows for the status-quo to be ongoing (i.e. the predominance of realism in the academic sphere and its reflection in the ‘real world’), then women researchers might well continue to be sidelined by some kind of imagined political necessity. …I can only speak from my experience as an IR student so it could well be a different picture elsewhere, i.e. women researchers not getting a look in for more widespread reasons that make my own observations of a secondary nature.

  • kerry heseltine says:

From two sides I have noted this over the years; most recently, my wife fails miserably in driving to a site I have carefully explained to her how to find. She without fail phones me, crying, wishing for me to pick her up!
Lastly, in my college years the gals with whom I dealt in research tended to want an easier way out than spending time going through mazes of materials. Though they were more willing to have me date them.
Sincerely, Mr. Kerry Heseltine
Men are men and women will never be men. Thank God! But I do really love the fair sex.

  • Tim says:

    I don’t think there are a lot of people out there who are so sexist that they’re going to go the extra mile to find out the gender of Smith J, Ramachandran P, or Ying T before they decide to cite their work or not. They’re going to cite the work based on the information presented therein, nothing more.

    If women aren’t getting cited as much as men, it’s because they’re not doing as much quality research. The truth hurts, but trying to pretend that researchers are sexistly discriminating which articles they cite based on how many men wrote it is ridiculous.

    • Curt Rice says:

      You’re building a straw man argument, Tim. No one is claiming that people figure out the sex of an author and then avoid citing the work if it is by a woman. But people who work in research fields and write scholarly articles know that there are many factors feeding our awareness of the literature, including what journals we read, who we know, what talks we hear at conferences, what is cited in the reference lists of works we read, etc. It is not the case that we construct our reference lists based on quality alone. And my point here is not that researchers are “sexistly discriminating” but rather that there is research demonstrating that the work of women gets cited less than the work of men, also when quality is controlled for. That requires explanation. What’s yours?

      • Might I propose... says:

No matter how you slice it, it seems that attributing this to sexism is jumping onto the bandwagon, so to speak. A strawman of sexism is brought up in a field where it is the facts presented which are important, not who is presenting them – unless that is someone who has made a name for themselves.

        • Curt Rice says:

          I guess I don’t understand what you’re saying. First of all, I don’t see anyone attributing this to anything, but rather just raising the facts. Of course, as researchers, we want to explain the facts. Two interesting hypotheses in this context are the effects of networking, e.g. conference attendance, on publication and citation rates, and the effects of self-citation. Who is saying “sexism” here? But it is a bit naïve to think that science is about the presentation of facts. That’s not the business we’re in. We’re in the business of understanding, of discovering what hasn’t been known before, and of building models to interpret the world. So, “facts” falls a bit short as a characterization of the issue — and your idea that the name behind the facts can matter just demonstrates the importance of networking in the communication of scientific results.

      • Frank Franky says:

        I read the original article. You wrote to Tim above that “Quality is controlled for” in the comparisons.
        I saw many variables controlled for, but none that indicate the value of an article with respect to analytical depth, originality, theoretical advancement—you know, features that relate to how interesting the substance of an article is to another scholar. So, quality was not controlled for, as far as I can see.

  • Brian says:

I think the greatest example of the institutionalised sexism is the blatantly misogynistic response in most of the male comments on this article – women cannot drive, they’re more interested in dating (read: sex) than studying, they simply are worse at research (maybe their mammaries got in the way?), fairer sex, they have a baby and ... somehow that makes a difference – oh please!
There seems to be a basic lack of recognition that the system as a whole was founded and set up by males only. So to be effective and get a better outcome, the rules need to be opened up and readjusted with both sexes at the table. You see, the entire system is in itself misogynistic. What is exciting in higher learning is the idea of having all departments working together for common outcomes, each bringing a different perspective to the table. Maybe the much-needed change will be born from this. But with the predominance of the sciences in higher learning lately, it may take a while.

  • Dipak Basu says:

The author, as a Scandinavian who acts as a drum-beater for the Anglo-Americans (look at how many Americans got the now-discredited Nobel Prize, and how many non-Americans were never considered), has great faith in citations and in the so-called top journals.
First, there is a rumour that there exists an army of 50,000 Chinese internet warriors. There could be similar Anglo-American internet warriors. Their job is to increase citations by artificial means, by publishing journals just to cite each other or their clients. It is possible that there are agents who receive money from authors to employ other authors to cite them. When citations determine the fate of authors, it is obvious that the efficient market system will create a market to fill the gap in the citation market.
What are the top journals? If you look at the Research Assessment Committee of the UK universities, almost all top journals are either British or American. Very few journals from other countries are there, except from drum-beaters for the Anglo-Americans like the Scandinavians and the Dutch. Others are excluded. Does that mean these top journals are publishing great research? Hardly.
The editors of these top journals, and of other journals as well, are publishing each other’s articles and the articles produced by their students and by those who can pay for it in many different ways. Most articles are desk-rejected, i.e., the editors never read them at all.
Thus, citations and the top journals are the least credible indicators of good research. In fact, hardly any Japanese journals participate in the citation game. Does that mean the journals produced by the Japanese universities are all rubbish?
The same question can be raised about the World University Ranking game, produced by some British newspapers like The Times, The Guardian, The Economist magazine and The Financial Times as a great marketing effort to attract foreign students to the Anglo-American universities, while the majority of universities in most countries of the world do not participate in this British game organized by these British newspapers.
Joan Martin of Stanford University, before the days of ‘Turn It In’ software, did an experiment. She picked 10 articles from 10 top journals in Economics & Management and submitted them to 10 other top journals, keeping the authors’ names intact. Eight of them were rejected and the other two authors were asked to revise their articles. None of the editors or reviewers of the top journals could tell that these had already been published in other top journals.

    • Curt Rice says:

      I think if you read this entry again and the other postings on this blog, you will see that I have very little faith in measurements like citations, impact factor, international rankings, and the idea that high visibility journals contain the highest quality articles. In other words, you ascribe to me a position which is in fact nearly the opposite of what I claim in my writings here. Do you have a reference for the experiment by Martin?

  • Dipak Basu says:

    Organizational Culture: Mapping the Terrain by Joanne Martin , Sage Publisher, 2001


