Advocacy Campaign:
Responsible AI in Science
Generative AI has enabled the proliferation of large volumes of plausible but false, inaccurate, or low-quality scientific material. Once published, such material is difficult to detect, correct, or remove, and it can propagate through citations, reviews, and policy long after the fact.
At scale, this creates a catastrophic risk: the degradation of the scientific record itself, and a society-wide loss of trust in science as a reliable basis for evidence-based decision-making. Without trust in science and the research-to-policy pipeline, our democracies may be unable to address the major challenges of the 21st century.
In 2025, we published Managing the Risks of Generative AI in Academic Publishing, which examined how these risks arise and where meaningful intervention is still possible.
Leverage Point
Responsibility for AI use in research is spread across thousands of researchers, journals, and institutions. But control over what enters and persists in the scientific record is not.
A small number of global academic publishers act as gatekeepers for the majority of published science. Their policies shape disclosure norms, screening practices, and enforcement across the entire system. Today, those AI policies are insufficient, inconsistent, and largely unenforced.
Our Campaign
We are running a focused advocacy campaign aimed at this bottleneck. The objective is to engage major academic publishers to review, strengthen, and align their AI policies, particularly around disclosure, screening, and enforcement, before low-quality AI-generated research becomes structurally embedded in the scientific literature.
This is not a campaign against AI. It is an effort to ensure that scientific publishing retains the basic safeguards needed for science to remain cumulative and trustworthy, and ultimately useful to our societies, in addition to expanding the sum of human knowledge for its own sake.
Modern societies depend on science to coordinate policy, regulation, and long-term decision-making. If confidence in the scientific record erodes at scale, that coordinating function breaks down.
This campaign targets one of the few places where that trajectory can still be altered.
Managing the Risks of Generative AI in Academic Publishing
Peer-reviewed literature is critical to humanity’s endeavor to expand the boundaries of knowledge, but the rising use of large language models in academia poses a systemic challenge to progress. Significant gaps remain in current safeguards, underscoring the need for greater standardization and stronger enforcement.