© 2025 MJH Life Sciences™ and OncLive - Clinical Oncology News, Cancer Expert Insights. All rights reserved.
Having grown up in the cradle of academic medicine, I take no pleasure in writing this commentary. Unfortunately, the rapidly increasing dysfunction surrounding what has for decades reasonably been considered the coin of the realm, the publication of original research or frequently cited review articles in high-impact peer-reviewed scientific journals, can no longer be dismissed, and its seriousness cannot be overstated.
The current state of affairs is not the result of a single event, nor did it reach its dreary condition over a short period. It is no longer deniable, however, that the dysfunction has accelerated palpably in both magnitude and negative effect, which, of course, raises serious concerns.
This discussion begins by highlighting a recent report from Australian investigators who employed a machine learning model to screen the oncology literature from 1999 to 2024 for peer-reviewed publications bearing the hallmarks of production by fraudulent paper mills.1 Distressingly, the effort identified 9.87% of the entire literature as falling into this category, with an increasing percentage of published manuscripts flagged as fraudulent in the most recent years of the analysis. To some observers, perhaps the most surprising finding was that a similar percentage of paper mill manuscripts appeared in the “top 10% of journals by impact factor.” Finally, and remarkably, “over 170,000 papers by authors from Chinese institutions were flagged” by this paper mill model.
To be clear, the model’s identification of a manuscript as possibly originating from a paper mill does not prove the accusation. However, the magnitude of the observations, together with the evidence that the percentage of papers falling into this category has risen over the past several decades, must be considered a matter of great concern.
A second report highlights an additional worrying trend: fake reviewers are being selected for fraudulent manuscripts.2 This analysis revealed a particularly insidious problem. Once individuals are identified as authors of peer-reviewed manuscripts, regardless of the scientific validity of the papers or even the existence of these individuals, they become more likely to be asked to serve as peer reviewers. When new manuscripts are submitted from these paper mill organizations, the authors may “suggest” specific individuals as potential “reviewers” who are primed to submit favorable reviews. While clearly complex, this scenario demonstrates the sophistication and coordination of the groups responsible for this rapidly expanding paper mill enterprise.
In summary, the scientific publication peer-review process is on an increasingly unstable footing. While the academic leadership, scientific journal publishers, and institutions that are most frequently responsible for paying the bills (eg, governmental agencies and universities) discuss the future of this vital operation, the situation worsens.
Two fundamental components of the peer-review process are under active challenge: trust, and the willingness of the broad scientific community (including, in the case of the medical sciences, active clinicians and investigators) to volunteer their time and expertise to ensure, as much as humanly possible, the existence and continuation of a highly robust peer-reviewed literature.
In an earlier era, and speaking specifically of clinical medicine, there were far fewer peer-reviewed journals, whether general medical or specialty-focused. A request from the editor of one of these journals was likely met with at least modest enthusiasm (allowing always for the demands of individual time management); it signaled that the requestor, or someone on the journal staff, recognized that the invitee had achieved the level of expertise required to participate in this critical process.
There was even the realistic expectation (or perhaps, more accurately, the possibility) that those who examined submitted reviews would, at some level, evaluate the quality, completeness, and effort that went into the exercise. One might have considered it an honor to be asked (and that likely remains the case for many scientific publications), particularly for a more junior faculty member or academically inclined clinician.
Today, however, with the veritable explosion in the number of self-described “peer-reviewed scientific journals,” many of them recognized as nothing more than shady business ventures (“predatory journals”),3 it is often difficult for an individual asked to provide a peer review to know whether the request comes from a legitimate entity. And, of course, as noted above, the request to review comes without compensation, even though individuals and organizations are well recognized to profit from this publication venue.
Further, because of the ever-increasing specialization and complexity inherent in basic and translational laboratory science and in clinical medicine, the demands of providing an objectively meaningful peer review only continue to grow. Team science in both the clinical and nonclinical domains, the uniqueness and nuances of specific research efforts, and the requirement for complex statistical analysis all intensify questions about the adequacy of the peer-review process. One might reasonably ask how one or even several peer reviewers can adequately evaluate the reported science in a manuscript that includes several dozen or even more than 100 authors.4
Publishers are clearly attempting to deal with this rapidly evolving situation, including through strategies designed to limit the widespread mining of large public databases, a common paper mill tactic for generating meaningless but “statistically significant” observations to which the names of individuals willing to pay to play can be attached as authors.5 Unfortunately, well-intended strategies designed to improve the availability of scientific publications have inadvertently added to the problem. In the “open access” arrangement, who is best positioned to have the necessary funds: an individual researcher, a university struggling to sustain its operations, or a pay-to-play paper mill? The answer is painful, but obvious.
Finally, it is critical to acknowledge the role of artificial intelligence (AI) in both accelerating the problem and providing at least one essential component of a solution. The widespread availability of AI tools has dramatically expanded the opportunity for paper mills and other bad actors to create and submit fraudulent science and, as noted above, to undermine the peer-review process itself.
However, one might hope that the large community of organizations and individuals deeply invested in the future of scientific communication (eg, academics, publishers, governmental regulators, and patient advocates) will find a way to work together, effectively employing tools such as AI to develop, pilot, and ultimately embrace approaches that maintain and enhance the essential objectivity and quality of that process.