Advocacy Campaign:
Responsible AI in Science

Generative AI has enabled the production of large volumes of plausible but false, inaccurate, or low-quality scientific material. Once published, such material is difficult to detect, correct, or remove, and it can propagate through citations, reviews, and policy long after the fact.

At scale, this creates a catastrophic risk: degradation of the scientific record itself, and a society-wide loss of trust in science as a reliable basis for evidence-based decision-making. Without trust in science and in the research-to-policy pipeline, our democracies may be unable to address the major challenges of the 21st century.

In 2025, we published Managing the Risks of Generative AI in Academic Publishing, which examined how these risks arise and where meaningful intervention is still possible.

Leverage Point

Responsibility for AI use in research is spread across thousands of researchers, journals, and institutions. But control over what enters and persists in the scientific record is not.

A small number of global academic publishers act as gatekeepers for the majority of published science. Their policies shape disclosure norms, screening practices, and enforcement across the entire system. Today, those AI policies are insufficient, inconsistent, and largely unenforced.

Our Campaign

We are running a focused advocacy campaign aimed at this bottleneck. The objective is to engage major academic publishers to review, strengthen, and align their AI policies, particularly around disclosure, screening, and enforcement, before low-quality AI-generated research becomes structurally embedded in the scientific literature.

This is not a campaign against AI. It is an attempt to ensure that scientific publishing retains the basic safeguards needed for science to remain cumulative, trustworthy, and ultimately useful to our societies, beyond expanding the sum of human knowledge for its own sake.

Modern societies depend on science to coordinate policy, regulation, and long-term decision-making. If confidence in the scientific record erodes at scale, that coordinating function breaks down.

This campaign targets one of the few places where that trajectory can still be altered.

Why This and Why Now?

See the full explanation of the catastrophic risk of inaction.

Managing the Risks of Generative AI in Academic Publishing
Research Report, GIEF

The scientific record is a critical global public good underpinning evidence-based governance, technological progress, and ultimately epistemic trust in our societies. Generative AI is being integrated into academic research and publishing faster than safeguards are being standardized or enforced. Without structural reform at the level of major publishers, the risk is large-scale epistemic contamination: a degradation of the very knowledge systems modern societies depend upon, threatening a collapse in trust across society.
