Traditional scientific communication directly threatens the quality of scientific research. Today’s system is unreliable — or worse! Our system of scholarly publishing reliably gives the highest status to research that is most likely to be wrong.
This system determines the trajectory of scientific careers. The longer we stick with it, the worse it is likely to get.
These claims and the problems described below are grounded in research recently presented by Björn Brembs and Marcus Munafò in Deep Impact: Unintended consequences of journal rank.
We have a system for communicating results in which the need for retraction is exploding, the replicability of research is diminishing, and the most standard measure of journal quality is becoming a farce.
Retraction rates
Retraction is one possible response to discovering that something is wrong with a published scientific article. When it works well, journals publish a retraction statement identifying the reason for the retraction.
Retraction rates have increased tenfold in the past decade, after many years of stability, and a new paper in the Proceedings of the National Academy of Sciences demonstrates that two-thirds of all retractions follow from scientific misconduct: fraud, duplicate publication and plagiarism (Ferric C. Fang, R. Grant Steen & Arturo Casadevall: Misconduct accounts for the majority of retracted scientific publications).
Even more disturbing is the finding that the most prestigious journals have the highest rates of retraction, and that fraud and misconduct are greater sources of retraction in these journals than in less prestigious ones.
Among articles that are not retracted, there is evidence that the most visible journals publish less reliable (i.e., less replicable) research results than lower-ranking journals. This may be due to a preference among prestigious journals for spectacular or novel findings, a phenomenon known as publication bias (e.g. P.J. Easterbrook, R. Gopalan, J.A. Berlin and D.R. Matthews, Publication bias in clinical research, The Lancet). Publication bias, in turn, is a direct cause of the decline effect.
The decline effect
One cornerstone of the quality control system in science is replicability: research results should be described so carefully that others can obtain them by following the same procedure. Yet journals generally are not interested in publishing mere replications, which gives this particular quality-control measure rather low status, however important it may be, for example in studying potential new medicines.
When studies are reproduced, the resulting evidence is often weaker than in the original study. Indeed, Brembs and Munafò review research leading them to claim that “the strength of evidence for a particular finding often declines over time.”
In a fascinating piece entitled The truth wears off, the New Yorker offers the following interpretation of the decline effect.
The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out.
Yet it is exactly the spectacularity of statistical flukes that increases the odds of getting published in a high-prestige journal.
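To make that mechanism concrete, here is a minimal, purely illustrative simulation; all of the numbers are invented and are not meant to model any real literature. Many studies measure the same true effect, a prestigious journal publishes only the most spectacular estimates, and replications of those published results regress back toward the truth.

```python
import random
import statistics

# Purely illustrative: the numbers are invented to show the mechanism,
# not to model any real literature.
random.seed(1)

TRUE_EFFECT = 0.2          # the real underlying effect size
NOISE = 0.3                # sampling error in any single study
N_STUDIES = 10_000
PUBLICATION_CUTOFF = 0.6   # only "spectacular" estimates reach a top journal

# Each original study observes the true effect plus noise.
original = [random.gauss(TRUE_EFFECT, NOISE) for _ in range(N_STUDIES)]

# Publication bias: a prestigious journal prints only the flashiest results.
published = [e for e in original if e >= PUBLICATION_CUTOFF]

# Replications draw fresh noise around the same truth, so the inflated
# published estimates regress back toward TRUE_EFFECT.
replications = [random.gauss(TRUE_EFFECT, NOISE) for _ in published]

print(f"true effect:           {TRUE_EFFECT:.2f}")
print(f"mean published effect: {statistics.mean(published):.2f}")     # inflated
print(f"mean replication:      {statistics.mean(replications):.2f}")  # close to truth
```

Nothing about the underlying science changes between the original studies and the replications; the "decline" comes entirely from which results were selected for publication.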
The politics of prestige
One approach to measuring the importance of a journal is to count how many times scientists cite its articles; this strategy has been formalized as the impact factor. Publishing in journals with high impact factors feeds job offers, grants, awards, and promotions. A high impact factor also enhances the popularity and profitability of a journal, and editors and publishers work hard to increase it, primarily by trying to publish what they believe will be the most important papers.
However, the impact factor can also be illegitimately manipulated. The calculation involves dividing the number of recent citations to a journal's content by the number of articles the journal published in the same period. But what counts as an article? Do editorials count? What about reviews, replies or comments?
By negotiating to exclude some pieces from the denominator in this calculation, publishers can increase the impact factor of their journals. In The impact factor game, the editors of PLoS Medicine describe the negotiations determining their impact factor. An impact factor in the 30s is extremely high, while most journals are under 1. The PLoS Medicine negotiations considered candidate impact factors ranging from 4 to 11. This process led the editors to “conclude that science is currently rated by a process that is itself unscientific, subjective, and secretive.”
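In rough terms, and with invented numbers purely for illustration, the arithmetic of the denominator game looks like this:

```python
def impact_factor(citations: int, citable_items: int) -> float:
    """Roughly the standard two-year impact factor: citations received in a
    given year to items from the two preceding years, divided by the number
    of 'citable' items published in those two years."""
    return citations / citable_items

# Invented numbers, purely for illustration.
citations = 3000
all_items = 750            # everything printed: research articles, reviews,
                           # editorials, replies, comments ...
research_articles = 300    # what the publisher argues should count as 'citable'

print(impact_factor(citations, all_items))          # 4.0
print(impact_factor(citations, research_articles))  # 10.0
```

The citations themselves do not change; only the bookkeeping of what counts as "citable" does, and that bookkeeping is exactly what the PLoS Medicine editors describe negotiating.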
A more cynical strategy for raising impact factor is when editors ask authors to cite more articles from their journals, as I described in How journals manipulate the importance of research and one way to fix it.
A crisis for science
The problems discussed here are a crisis for science and the institutions that fund and carry out research. We have a system for communicating results in which the need for retraction is exploding, the replicability of research is diminishing, and the most standard measure of journal quality is becoming a farce.
Ranking journals is at the heart of all three of these problems. For this reason, Brembs and Munafò conclude that the system is so broken it should be abandoned.
Getting past this crisis will require both systemic and cultural changes. Citations of individual articles can be a good indicator of quality, but the excellence of individual articles does not correlate with the impact factor of the journals in which they are published. Once we have convinced ourselves of that, we must draw the consequences for the evaluation processes on which scientific careers are built, and we must push forward nascent alternatives such as Google Scholar and others.
Politicians have a legitimate need to impose accountability, and while the ease of counting — something, anything — makes it tempting for them to infer quality from quantity, it doesn’t take much reflection to realize that this is a stillborn strategy.
If we believe that research represents one of the few true hopes for moving society forward, then we have to face this crisis. It will be challenging, but there is no other choice.
What’s your take on publishing in your field? Are these issues relevant? Are you concerned? If so, I invite you to leave a comment below, or to help keep the discussion going by posting this on Facebook, Twitter, or your favorite social medium.
For a little more on Brembs and Munafò’s article, see a brief note by Brembs on the London School of Economics’ Impact of Social Science blog and discussion at Physics Today’s blog. And don’t forget to follow Retraction Watch.
Finally, a disclosure: The New Yorker article cited above was written by Jonah Lehrer, one of the subjects of my piece: Whaddaya mean plagiarism? I wrote it myself! How open access can eliminate self-plagiarism.
This post subsequently appeared at The Guardian as Science research: three problems that point to a communications crisis.
29 Comments
Republish
I encourage you to republish this article online and in print, under the following conditions.
- You have to credit the author.
- If you’re republishing online, you must use our page view counter and link to its appearance here (included in the bottom of the HTML code), and include links from the story. In short, this means you should grab the html code below the post and use all of it.
- Unless otherwise noted, all my pieces here have a Creative Commons Attribution licence -- CC BY 4.0 -- and you must follow the (extremely minimal) conditions of that license.
- Keeping all this in mind, please take this work and spread it wherever it suits you to do so!
This post states very clearly some of the problems with the publication of our scientific results. I would say that most of these problems would be solved by an open conversation about the results after their publication. Let's be perfectly honest: right now the game we play is to convince two or three reviewers to accept an article to Nature (regardless of what the rest of the field thinks of it) in order to be in a good position for getting the next grant or position. If this weren't the game, there would be far less pressure to publish in flashy journals and far fewer retractions. Couldn't we shift this game towards publishing solid science if what counted was to publish a finding that held up to scrutiny by everyone interested (not just the two or three reviewers)?

This seems to be the case in some fields of physics, where there is an open conversation about results after their publication, but it is sorely missing from the life sciences. Why are biomedical researchers so reluctant to comment on articles on sites like pubpeer.com, where there is a clear and open way to discuss publications? I use sites like these to comment on articles that grab my interest (either negatively or positively), and very often the authors respond to clarify the issues. The system works very well, and if it were adopted by a critical mass of biomedical researchers it would certainly shift the publication game towards a better and more open system of publication (and it would put pressure on all of us to publish quality and not flash).
Thanks for leaving these thoughts, Paul. I think you hit the nail on the head when it comes to what drives the system. The physicists are way ahead of us, at least in one sense: they put everything on archives, let it float around informally, and then publish only if they need the belt-notch. These are also the folks who have 1000+ authors on some articles! You’re asking for debate and discussion — surely a strategy for improving both science and the communication thereof. Sign me up!
Curt,
The more papers that people comment on, the quicker the community as a whole will become used to the process and the quicker it will become the norm. Sign yourself up to one of these commenting sites and start a debate on a publication. Authors are usually notified automatically and comments can be anonymous. Tell everyone to do the same, and together we can all start getting the biomed community used to the process.
There are noteworthy points here, but the conclusions seem blown out of proportion. First, the possibility of malpractice and manipulation does not mean that a system is fundamentally flawed. Second, issues that are particular to positivist science cannot be extended to academic publishing in general. Some of the background documents and posts referred to here represent extremist views, but LSE's Impact of Social Science blog as a whole is more diverse. See also http://scholarlykitchen.sspnet.org/.
Thanks for leaving this feedback, Jørgen. I know that the conclusions seem big, but I think we're talking about more than the possibility of malpractice and manipulation. We're talking about a system which rewards those things and thereby can be charged with encouraging them. And, indeed, we are seeing the effects of this. With the technological options we have today, it's easy to imagine systems which are considerably different, more likely to identify and reward quality, and more likely to weed out the bad stuff. One example is the open evaluation system I described in another post: http://curt-rice.com/2012/12/17/open-evaluation-11-sure-steps-and-2-maybes-towards-a-new-approach-to-peer-review/
So, dramatic as it sounds, when I hear (as I recently have) a very highly ranked fellow responsible for giving out hundreds of millions of crowns say, "he had 5 papers in Science and Nature in the last 2 years," the only possible conclusion is that we're taking huge risks that are completely unjustified. It's a rough conclusion to reach, but I think the research on this matter is compelling. Alas …
Carry on!
I am happy someone wants to debate these topics.
“First, the possibility of malpractice and manipulation does not mean that a system is fundamentally flawed.”
No, of course not, but it is not so difficult to find studies that document flaws all over the scholarly system.
“Second, issues that are particular to positivist science can not be extended to academic publishing in general.”
Of course, there are a lot of different cultures of publishing and ranking in the different fields, but again, it is not difficult to find problematic issues in most academic fields.
What do you mean by extremist views? Are they wrong, or do you mean that the examples are very odd?
You refer to the Scholarly Kitchen, the lobbying blog of the biggest publishers, without reference to a specific post. That is like referring to the Bible; it does not say much, in my opinion.
One thing that strikes me is that some parts of science are better off than others.
But to your points, there’s no doubt that a significant amount of scientific research is flawed, that bias pervades it as it does every other human enterprise. The issues you mention are all significant, and I share your concern.
On the other hand, despite the often wretched state of science, I think it’d be impossible to find a large institution that’s anywhere near as impartial, self-correcting, and meritocratic. “Science ain’t much, but it’s better than the alternatives.”
I so want to be on the same team as you, Mike. I want to believe that science is meritocratic and self-correcting. But I’m completely disillusioned about the status quo, although that doesn’t leave me without hope or a vision.
We do not reward merit in the university system. We reward good networks. That goes for grants, hiring, promotions, etc. And I've had a stellar career in those regards, so there's no bitterness here for me personally. There is now research on how hiring and promotion work; cf. my other writings on women's careers or the publishing system. There's simply no empirical basis for believing that if you do everything right, you'll be rewarded. That's not the world I live in, at least.
But maybe what I’m trying to do here is part of the self-correcting process 🙂
I think we are on the same team. I wouldn't say science is anywhere close to being impartial, self-correcting, or meritocratic. But I would say it's less partial, more self-correcting, and less corrupt than any other institution of the same size, now and throughout history.
In absolute terms, science does a manifestly terrible job at most things. In relative terms… it doesn’t look so bad. Cold comfort perhaps…
Mike,
Winston Churchill said this one time:
“Many forms of Government have been tried, and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed, it has been said that democracy is the worst form of government except all those other forms that have been tried from time to time.”
Sir Winston Churchill, Hansard, November 11, 1947
"Tried from time to time" means that we should not give up on finding something better than democracy; it is just the best we have tried so far. With the Internet, there should be big possibilities for making the scholarly system better. Perhaps it will get better without any special effort to promote open science, but that could take a long time. Right now the sciences have problems keeping up with the development of antibiotics, malaria treatments, climate technology, the imbalances in the world caused by poverty, etc. That is why we need to speed up the scholarly system. We are racing against the clock.
Pål,
I would highly recommend Michael Nielsen’s work on open science. This is a great overview:
http://michaelnielsen.org/blog/the-future-of-science-2/
An interesting post indeed, yet there are even more questions related to quality in science, some of them fundamental, which find no mention in the post.
What is quality in science, and how do we measure it? The assumed notion of quality is getting more articles published in more "prestigious" journals, and thus having a high Impact Factor. Yet there is no assessment of what I call Quotation Context Polarity: is the article quoted in a positive, negative, or possibly neutral way?
As evidence for it let me quote from the first comment: “I use sites like these to comment on articles that grab my interest (either negatively or positively)”. As far as I know, and I don’t know that much so I might be wrong, the Impact Factor does not reflect the Quotation Context Polarity.
What actually is science? Assuming one could agree on criteria for measuring quality in science, are these criteria applicable to all sciences, including religious studies, musicology, linguistics, computational linguistics, mathematics, biology, chemistry, etc.? The author doesn't mention that there might be crucial differences in setting up a quality assessment device for biology vs. mathematics vs. linguistics.
It is true that such basic questions cannot be debated at length in a post; yet one should at least mention them.
As for the conclusions, I don't think that pushing "nascent alternatives such as Google Scholar and others forward" is THE solution to the current problems. In my humble opinion, a better solution is to have universities educate students to be honest, first and foremost with themselves; yet this might be too late to teach at university, and it should start at a very early stage of a person's life.
“Traditional scientific communication” = Scientific communication 1.0.
Just putting it online does not make it a 2.0.
I really hope it is self-correcting. For this, the merit function should reward proper scientific research, which currently does not seem to be the case.
The reality is very sad, with the increase in retractions, especially from high-IF journals, as indicated by Retraction Watch. As for the adoption of the IF, I agree completely with Björn and Marcus's arXiv paper.
A start would be for those scientists (hopefully active researchers) responsible for funding, or for hiring new junior scientists and professors, to allow for transparent criteria of evaluation.
Regarding the IF, IMHO it should be noted that a publication in any journal should be appropriately weighted. That is, if the IF is 20, then your publication is only above average if it attains at least 20 citations. I say this because publication in a high-IF journal does not guarantee citations; such papers mainly attract "insignificant" citations. I categorize as "insignificant" the citations in the introduction section, which bring the reader into context and contribute little or nothing, compared to citations in the methodology, results or conclusion sections, which in some way contribute to replicability.
The one reliable difference between high-IF journals and others is that the former have fewer papers with zero citations. The editor of Nature recently gave a talk here in Tromsø, and he noted that 80% of the impact factor of Nature is due to about 25% of the papers published.
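A minimal sketch of the article-level weighting suggested in the comment above; the titles, citation counts and impact factors below are hypothetical, purely to show the arithmetic:

```python
# Hypothetical papers: does an article 'beat' the journal it appeared in?
papers = [
    {"title": "Paper A", "journal_if": 20.0, "citations": 35},
    {"title": "Paper B", "journal_if": 20.0, "citations": 3},
    {"title": "Paper C", "journal_if": 2.5, "citations": 12},
]

for p in papers:
    # Above average for its venue only if it collects at least as many
    # citations as the journal's impact factor would predict.
    verdict = "above" if p["citations"] >= p["journal_if"] else "below"
    ratio = p["citations"] / p["journal_if"]
    print(f"{p['title']}: {verdict} the journal average (ratio {ratio:.1f})")
```

On this kind of accounting, a modestly cited paper in a modest journal can easily outperform a barely cited paper in a high-IF one.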
“high status to research that is wrong”
It would seem your post is a nice theoretical background to an ongoing experiment on the making of science that I have been engaged with in the last few years…
In short:
In 2004, an MIT (now EPFL) group published a paper in Nat Mater proposing the concept of “stripy nanoparticles”; the same group went on to publish over 25 papers (including in Science, PNAS, JACS, Nat Mater, Nature Nano, Nature Comm) based on that concept. Unfortunately, the 2004 paper is based on an obvious experimental artifact. My experiment has been to challenge those publications.
A major result was obtained at the end of last year with the publication of "stripy nanoparticles revisited", after a peer review process which took over three years (!):
http://raphazlab.wordpress.com/2012/11/23/stripy-nanoparticles-revisited/
with a few follow-up discussions on my blog, on other blogs and in online publications (links at the bottom of the piece below):
http://raphazlab.wordpress.com/2012/12/13/stripy-revisited-posts-where-to-start/
A first lesson from the ongoing experiment is therefore that, not only “journals generally are not interested in publishing mere replications”… it is incredibly hard (and slow) to challenge problematic data (even when the problems are obvious, see little video in the first post above).
A second lesson is that the independence of editors is somewhat theoretical… A Nature Materials editor went on my blog and submitted ~15 comments in three days criticizing the blog… anonymously! (before being identified and submitting a somewhat convoluted apology: http://raphazlab.wordpress.com/2013/01/02/response-to-on-my-pep-pamies-comments-on-levys-blog/)
A third lesson (TBC, only preliminary results) is that maybe some journals are not taking the issue of data reuse very seriously…
http://ferniglab.wordpress.com/2013/02/06/responses-to-evidence-of-self-plagiarism/
The article and comments are a nice discussion piece. I am, however, a little concerned about the lack of clarity of context. The issues identified are largely about problems in the publication process and uses of that process (such as hiring). With the exception of outright fraud, they say nothing about the quality of the research itself. For example, duplication, plagiarism, publication bias, and the decline effect have no bearing on the quality of the actual scientific research. The problem is in the interpretation, in a sort of biased "web of trust" context.
Regression towards the mean isn't something we can do away with. It is an inherent part of the scientific process and is why no individual research result holds much value. I've always complained about sensationalist media headlines about a "new discovery". More likely than not, the more sensational it is, the less likely it will survive replication. This isn't a problem with science itself but with PR. Sensationalism sells, people buy it, and people get dismayed to find out next year that it was wrong. Yet we scientists don't trust single results, and good science trudges onward, weeding out the outliers and building on repeatable results.
The problem here is that the sensationalistic problem has found its way into scientific journals and hiring of researchers. Science itself is fine, and the research is fine too, building slowly and statistically over time as it always has. I wish that was made more clear. I already see people using these sorts of “science is unreliable” articles for anti-science and anti-intellectual purposes which are not warranted.
That being said, I do see the effects of these publication problems on friends and colleagues, some choosing to leave research entirely. I do wish to support efforts to fix this very serious problem. I just wish everyone was more clear on the boundaries of the actual problem. Research can be trusted as much as always if you understand how it is supposed to be trusted, which is to be skeptical until your objections are less likely than the consistent results.
I share your concern about pieces like this one I’ve written being used by anti-intellectuals, and have thought about that quite a bit when writing this type of blog entry. What I end up thinking on that point is that it’s even worse if the anti-intellectuals figure out the problems we have but haven’t tried to fix. And, it’s also not good when politicians who support science refer to published results as though they’re suddenly facts, just because they appeared in Science or Nature.
I do agree that the issue here is about communication more than science, although if a greater degree of replication were required prior to publishing, that might cut back on the more sensational results, which themselves are the source of bigger regressions to the mean.
Increasingly, I think that a sensible way of "counting" to find quality is more likely to be found with a focus on individual articles (e.g. citations of that article) than through the aggregations that evaluation of a whole journal entails.
The distinction made above between "science" (doing fine, trudging along, building solidly on reproducible results) and something else, the "sensationalistic problem" (somewhat external to science, not affecting the quality of scientific research), is not that helpful. Communication is an integral part of the scientific process. Sensationalism results in publications of absurd claims which are not just statistical fluctuations. The very limited space (and appetite) for controversies and debate results in those being essentially accepted and highly cited (even if never reproduced by other groups). Those 'high impact' studies shape fields and affect funding and careers: they undoubtedly affect the quality (and nature) of the scientific research.
To improve the quality of research I would suggest the following measures:
1) All experiments to be designed and planned (with date and time of execution) by the researcher or research team and then reviewed by a neutral research team (neutral to the research project being reviewed) before the experiments are executed. Of course, the process should guarantee the protection of intellectual and industrial property.
2) All experiments to be video- and audio-recorded and saved permanently. The saved material to be available for review by interested parties. Of course, the process should guarantee the protection of intellectual and industrial property.
3) All experiments to be monitored physically by at least one neutral researcher or any citizen interested in monitoring the execution of the experiment. There should be a written protocol signed by the researcher and the monitor(s).
4) All researchers and monitors who manipulate or falsify research data should be banned from doing further "research" or "monitoring" and, where appropriate, should be tried as criminals.
I know this will increase the price of doing research, but my personal opinion is that we are all paying a very high price for the ongoing abuse, manipulation, falsification and fabrication of research.
Sincerely,
Dr. Ahmet Sallmani
Great post. We DO have a problem. The number of retractions is way below the level at which they should be occurring. The vested self-interest of authors, institutions and journals means that where we should have a retraction, we get a correction. These are not corrections of mistakes but of misconduct or fraud. If our undergraduate students produce work of this sort, they get a zero. We should also consider the effect on our graduate students and postdocs. Out comes a paper in their field; they read it and get totally depressed. Why? Because some fraudster has got a paper in a "major" journal (aka one that guarantees a thesis or a tenured position) and it is clearly wrong, e.g., re-used data, copied and pasted, for different experimental conditions. Some years later they may see a "correction". What do they do? Stay in science, staying true to the messiness of data, become tempted to cheat, or leave through disillusion? We are killing off the lifeblood of science by pandering to vested self-interest and turning a blind eye to corruption.
This is an issue of corruption. This is the correct word to use.
The cure?
As Pål Lykkja said above, there is only one cure, democracy. This means openness and transparency. I publish a paper, you have access to the raw data and are free to comment on it. If the paper is flawed, then it is retracted. We are a long way from that, but moving towards it, slowly.
Thank you for your nice essay. Reading your article reminds me of an experience I had with 'detecting' two clear cases of plagiarism committed by a Ph.D. graduate in two articles drawn from his dissertation. The articles were co-authored with his supervisors, as is the usual case. I had been following his other articles, which were also plagiarized, and I once notified him about such scientific misconduct in one of his articles. However, it seemed to me that he did not care about such issues at all, as he published the two new articles anyway. This time I went further and notified the editors of those articles; the editors found 'clear' cases of plagiarism and decided to retract. But since the supervisor believed he had nothing to do with these cases (as he had not), he put a huge amount of pressure on the editors not to retract. Finally, my identity was revealed to the supervisor and I faced tons of pressure from him, because he described my actions as those of a traitor and … Now I am dubious about whether I should have informed the editors or not.
I think one of the reasons (at least in the case described) is that researchers do the research and publish articles just for grants/tenure/etc.; there is no real interest in the research per se.
Thank you for reading this very long comment.
This typifies the climate change saga. Just recently I wrote about the snow showers of 1948, the 1960s and the 1970s, which were often over 600 mm.
It is almost certain that if we had snow like that this year it would be blamed on climate change, and because in the last 30-40 years we have been lucky to see 150 mm of snow, those under 40 years old have probably never seen a lot of snow. They could easily be convinced it was a new phenomenon and due to climate change. Take away all the money gained by spreading alarm and lies and the truth would come to the forefront. As lies of this nature are so profitable, all speculative data has been skewed in favour of the money earner, climate change.
Donald Merritt
Academia has divorced itself from science. Science will now be performed by individuals working outside academia who are motivated only by an interest in pursuing the truth. True science will also be accurately assessed only by such individuals. Thus, science will re-establish itself in the form it used to have. Meanwhile, universities will be seen increasingly for what they are – farcical and ridiculous – until they are all abolished. This will occur alongside an enormous reduction in the human population, when the effects of overpopulation lead to a catastrophic shortage of food. This will all happen before 2050.
I find this with Darwinism. I believe certain anti-ID and anti-Creationist organisations regularly write en masse to try and ‘drown out’ dissenting evidences. They employ flash mobs to intimidate and bully universities, colleges, and institutions which attempt to present lectures or courses on dissenting views. This is religion, not science. We have seen on several occasions, how books which dissent from the Darwinian religion, are given bad reviews, marked down into 1-star or ‘very poor’ status on publication sites, BEFORE the books have actually been released. There are instances of fraudulent evidence presented by Darwinists, and their media supporters, with great fanfare, glossy double-page spreads, exhibitions, and so forth, but which are later retracted with a tiny entry on page z somewhere.
I think certain sciences are going backwards. I suspect it is the result of driving Christian ethics out of the education system.
The increase in retraction rates: is it because scientific misconduct has increased, or is it because information nowadays flows more freely and, as a consequence, it is easier to detect scientific misconduct?
This reminds me of crime figures which are far more dependent on a) resources aimed at detection, and b) the willingness of people to report crime than on the number of crimes actually committed. Can we be sure that the true rate of fraud (which may or may not be reflected by retraction rate) has changed at all or at least anywhere near as dramatically as suggested in Fang et al.? The hypothesis that retraction rate is a close and robust indicator of the underlying problems is by no means established by this study.
This is a great article, but I have to say that some groups out there really do try their best to check and recheck their own work. On the other hand… there are quite a few more who seem never willing to refute their own original claims.
One of my own personal flaws as a researcher is designing experiments where I am blind to my own bias, where I end up inadvertently creating experimental conditions that weight themselves towards my hypothesis. I do my best to correct for this, but, if we are going to be honest, it is challenging. This is natural, especially in the preliminary stage. There are so many permutations and angles to cover that we make mistakes in our methods; we tend to think of why things could be true, rather than why they should be false.
If we have a ‘success’ in finding something new, many times there is a rush to publish, before we have done a perfect job of looking for what else could have caused it. But if we’re honest, we try and do a better job next time. One of the things I am proud of is learning to be directly critical of my own work in the discussion – I do my very best to highlight where my work is lacking, or where my assumptions could be causing a blind spot -and to put it on paper, where everyone will see it. I also try, when possible, to collaborate with someone who disagrees with my hypothesis. It makes the politics hard, and the science harder, but I think my experiments end up at a much higher quality this way.
Finally, not only is there a real dearth of follow up studies getting published, but there is a real lack of ‘that didn’t work’ publications out there. That eliminates a huge body of work that (just as erroneously) found no effect, when in fact there was one. I wonder how many ‘successful’ experiments follow a failed experiment, in the same basic track.
I just want to tell you my story. I somehow got myself a name for being "creative" and became a star when I was younger. Then I started challenging some fundamentals in the field. After being proven correct in the field, I lost my NIH grant, lost my job and sold fish for a living.
I managed to find a research position again. I continued challenging fundamentals, and got kicked out of research again. Luckily, I still have a job. I devoted all my spare time to continuing my research. I had very limited resources to empirically prove my theory, until I read a famous article from the big people who put me through this.
In their paper, Figure 1 shows the ideal curve and Figure 2 shows the human curve, as functions of a parameter A ranging from 1.0 to 4.0. According to their theory, the ideal curve should always be higher than the human curve. In Figure 8, they combined Figures 1 and 2 to demonstrate that the ideal is higher. However, parameter A only ranges from 1.0 to 2.3 in Figure 8. I got suspicious, and using Photoshop I overlapped their Figures 1 and 2: clearly, the two curves cross at A=2.5, and the human curve is higher than the ideal at A>2.5. There are more such examples in this paper where their data supports my theory although their conclusion is the opposite of mine….
I will present it at a coming conference, and I will pay for the conference out of my own pocket. I don't know what will happen to me. They have put me through every humiliation and punishment. I don't want revenge; I only want an equal chance to use my ability to contribute to humanity. I am a scientist, and a good one.