The dark underbelly of scientific publishing pt.1
What’s this about?
In a publications-as-currency environment, it is no surprise that authorship itself becomes a commodity. As I curate a list of open access journals in the field of Educational Technology, my interest was piqued when I first heard about the increasing prevalence of hijacked journals and academic paper mills, initially uncovered by Anna Abalkina. Of course, I wanted to learn whether and to what extent this dark underbelly of academic publishing also lingers on the edges of my own field, Educational Technology. Or worse, maybe it is not such a fringe phenomenon after all? In this blog post I provide a brief introduction to the topic, show some examples, and alert fellow researchers to this problem. Spoiler: Education and Educational Technology have not been spared, although the scope of the problem is hard to estimate. What we do know is likely just the tip of the iceberg.
Let me start with an example. Navigate to this URL and have the page translated by your browser: http://123mi.ru/1/
Find the search bar at the top and type in your favorite keyword (more general keywords work better here). At the time of writing, searching for “Educational Technology” returns 62 “offers” out of a total of 11761. The search produces a list of article titles with information about the quartile of the publication venue, keywords indicating the relevant disciplines, a submission deadline, and a price. Let me spell this out clearly: You can RIGHT NOW buy authorship for any of these papers (see screenshot below).
As you can see, the last author position has already been sold for 64k rubles (around 1000 euros at the time of writing). Of course, the first author position will cost you a bit more, about 1500 euros. Please note that you are not buying potential authorship of a paper that may or may not be published. No, you are paying for a publication in a Q2 journal, indexed in Scopus. This means that once you’ve sent the money, you can go ahead and add this line to your CV.
How prevalent is this really?
Okay, so now that you’ve seen that this exists, maybe you are wondering how big of a problem this really is? Surely, this could simply be a scam, no? Let me direct you to another URL. This one is a list made by diligent researchers looking into the issue of paper mills: List of suspicious papers
This list contains published papers deemed suspicious, meaning there is reason to believe that they stem from a paper mill, in this case the Tanu.pro paper mill. Scrolling through this list, you see that all major scientific publishing houses appear in one or more entries: Springer, Elsevier, Wiley, SAGE, and many more. Let’s look at one journal specifically: Review of Education (ROE), published by Wiley, one of the four journals of BERA (the British Educational Research Association). There are 9 entries for ROE, all from 2022. Scrolling to the right, you find a link to pubpeer.com, with a comment explaining why a given paper has been flagged as suspicious. The initial reason for flagging these papers is that the corresponding authors’ email addresses are associated with this paper mill (i.e. tanu.pro, nuos.pro, etc.); the comments point to logical flaws or contradictions in the papers as additional evidence. Although the evidence is circumstantial, this means ROE likely published papers where authors paid for coauthorship. Moreover, the scientific merits of these papers are highly questionable, but more about that later.
Maybe you are thinking “Okay sure, this is a journal by BERA, but it’s only been around since 2020 and doesn’t even have an impact factor. Can you show me a journal with significant impact and standing?”. Glad you asked. Let me point you to the entries for suspicious papers in Thinking Skills and Creativity, published by Elsevier, established in 2006, Impact Factor 3.1. If you scroll to the right, you will see that some of these have already been retracted by the journal, which is nice, as it indicates at least some degree of editorial oversight. However, in the short time of their existence (all published since 2021), these six papers have garnered a total of 25 citations (at the time of writing this blog), so arguably they have already become enmeshed with the scientific literature. Unfortunately, it is a known scientometric phenomenon that retracted articles continue to be cited (read here).
Maybe in a last effort of disbelief you are now saying “Okay sure, but this isn’t my field, these are not Educational Technology journals”. Again, I have bad news for you. Have a look at the November 2022 issue of Education and Information Technologies (IF: 3.66) and scroll all the way down. There you’ll find two retraction notices by the editor, explaining that authorship for these papers was bought.
Of course, Tanu.pro is not the only paper mill. There are many, many more, one of which has also been documented with a list of suspicious papers: International Publisher. For further reading on Chinese paper mills uncovered by Elizabeth Bik, see here.
So how does this work?
Don’t we have peer review to guard us from this? Yes, we do, but it appears that peer review can be hijacked. But how would that work? Read this retraction note from Thinking Skills and Creativity to hear it from the editors themselves: “After a thorough investigation, the Editors have concluded that the acceptance of this article was based upon the positive advice of three unreliable reviewer reports. The reports were submitted from email accounts which were provided to the journal as suggested reviewers during the submission of the article. The reviewer accounts did not respond to the journal request to confirm the reviewer identity, and the Editors decided to retract the article.” So apparently, one strategy to sell authorship is to submit to journals that rely on author-suggested reviewers. Day (2022) shows that paper mills create fake referee accounts at journals, mimicking the identities of real researchers.
But how are these papers created in the first place? As Abalkina and Bishop (2022) explain, a dizzying array of normal-looking but nonetheless fraudulent papers was discovered in cancer biology and genetics. In these areas, paper templates were reused and only the details varied between papers. Another approach is using some kind of artificial intelligence to generate papers from scratch, or to change the wording of published papers just enough to avoid plagiarism detection. An amusing side effect of this is the appearance of tortured phrases. Of course, these approaches work best in journals with weak editorial oversight. For this reason, special issues may be targeted specifically, as guest editors with vested interests can provide access to legitimate journals. As Abalkina and Bishop (2022) show, paper mill papers often have jarring flaws, such as impenetrable writing and incomplete (or entirely missing) descriptions of methods, and would not normally pass peer review. Unsurprisingly, the faked peer reviews are also weak, requesting minor revisions to include newer references, correct typos, or change conclusions.
The scope of paper mill fraud can hardly be overstated. Paper mills likely operate in most if not all areas of research, injecting uncounted numbers of papers into the literature (see here, here, and here for more examples), with medicine being the most common target. To give an indication of scope, Thinking Skills and Creativity alone recently retracted around 50 papers due to suspected fraud. As a little side hustle, fraudulent papers sometimes systematically cite particular (non-fake) research. This is done to game citation metrics, presumably also paid for by some desperate researchers in need of an inflated h-index.
How can we identify paper mill papers?
In some cases, it’s easy. All you have to do is read the paper in order to identify it as fraudulent. The depressing implication is that you might be the first person to actually read the piece, otherwise it wouldn’t have been published in the first place. In other instances, it’s not so easy. Looking at the few known papers in Educational Technology stemming from a paper mill, you would be hard pressed to immediately and unmistakably identify them as crap. For example, by no means is this a good paper (being about “learning styles” gave it away) but, honestly, I’ve read worse. So what to look out for?
Here is a set of characteristics that may indicate something fishy:
- Corresponding author lists an email address associated with a paper mill (e.g. kpi.com, tanu.pro, nuos.pro, unesp.co.uk, murdoch.in etc.)
- Author list contains *four* authors (authorship is usually sold in sets of four)
- Authors are from the Russian Federation, Kazakhstan, less frequently Azerbaijan
- Tortured phrases in writing
- Unclear, vague, or even impenetrable writing
- Authors’ ORCID profile is empty
- This particular set of authors has no history of previous collaboration
Of course, aside from the suspicious email addresses, no single characteristic is so indicative that it should lead us to a damning conclusion. However, if several of these features co-occur, we should be vigilant. Possibly, the paper you’re suspicious of is already included in one of several lists of suspicious papers. If not, consider adding it to such a list to make a case for retraction upon investigation.
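To make the heuristic nature of these signals concrete, here is a minimal sketch of how such a checklist could be turned into a screening aid. Everything here is an illustrative assumption, the field names, the domain list, and the flag wording; this is not an established detection tool, and flags only warrant closer inspection, never a verdict.

```python
# Illustrative sketch: collecting the red flags listed above for one paper.
# Field names and domains are assumptions for this example, not a real tool.

SUSPICIOUS_DOMAINS = {"tanu.pro", "nuos.pro", "kpi.com", "unesp.co.uk", "murdoch.in"}

def red_flags(paper: dict) -> list[str]:
    """Return the heuristic red flags a paper triggers (empty list = none)."""
    flags = []
    # Email domain of the corresponding author matches a known paper-mill domain
    domain = paper.get("corresponding_email", "").rsplit("@", 1)[-1].lower()
    if domain in SUSPICIOUS_DOMAINS:
        flags.append("paper-mill email domain")
    # Authorship is reportedly sold in sets of four
    if len(paper.get("authors", [])) == 4:
        flags.append("exactly four authors")
    # Empty ORCID profiles for all authors
    if paper.get("orcid_profiles_empty"):
        flags.append("empty ORCID profiles")
    # No history of previous collaboration among the authors
    if not paper.get("prior_collaboration", True):
        flags.append("no previous collaboration among authors")
    return flags
```

Note that, matching the caveat above, a sensible threshold would require several co-occurring flags, with the email domain weighted far more heavily than the rest.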
What to make of this?
Unfortunately, I cannot think of any silver lining here, no upbeat message to conclude this blog post. The problem is: We are working in a publish-not-read environment. We are producing more papers than can ever be read, and publications are reduced to a metric, in some cases fully decoupled from scientific merit. Yes, the incentives are severely misaligned, but that alone is no excuse. We would not be in this situation if we, collectively, were more diligent in reading research (and maybe also published less, but that’s a whole other can of worms). I think it is no exaggeration to say that many senior researchers, in a given year, publish more scientific articles than they read. We tend to forget that the so-called self-correcting nature of science is not some law of nature but something that *we* have to do. Only by actually reading the scientific literature can we identify what is a genuine scientific contribution and what is utter crap. If it is crap, we can investigate, notify editors and journals, and a paper might be retracted. If paid authorship and faked peer review increasingly lead to retraction, the whole business model falls apart quickly.
To be clear, I am not excluding myself from this accusation. I, too, don’t read enough, because reading can be boring (yup, I said it) and it is not incentivised. The situation may be different if all papers were like this, though…
For the next blog on the dark underbelly of academic publishing, I’ll talk about hijacked journals. Buckle up!