Responsible AI in Science:
An Open Letter to the World's Academic Publishers
The scientific record is one of civilization's most consequential public institutions. It is the foundation on which policy is made and shared challenges are addressed, from medicine to climate to technological innovation to democratic governance. Science functions as an institution because it is cumulative, verifiable, and trusted. That epistemic trust in science is among the most important foundations of stability in our democracies.
That trust is now under existential threat.
Generative AI is now entering the scientific record at a speed and volume that the self-correcting institutions science relies on (peer review, replication, post-publication correction) cannot keep pace with. In 2022, heavily AI-generated content was essentially absent from peer-reviewed literature; by 2024, credible estimates suggested that roughly 20% of published papers [1,2,3] showed signs of heavy AI involvement, including substantial AI-written passages, fake references, or synthetic data. The trend is accelerating. Once embedded, AI-generated material is cited and built upon, and after only a few cycles of reuse it becomes much harder to audit. At that point, work grounded in real evidence and work built on synthetic science become indistinguishable.
This is already happening, and it is a threat to trust in the scientific record as a whole.
The problem extends beyond papers. An analysis of peer reviews at ICLR 2026 found that 21% were fully AI-generated, and that over half showed some AI involvement [4]. The gatekeeping mechanism is now compromised by the same technology it is supposed to evaluate. Publishers cannot protect the scientific record if the review process itself is synthetic.
In Managing the Risks of Generative AI in Academic Publishing [5], we analyzed the AI policies of the world's top 12 academic publishers, the organizations responsible for more than two-thirds of all published science. While basic disclosure requirements existed as of July 2025, policies are inconsistent and lack specificity, and detection mechanisms and enforcement are rarely mentioned. The system currently relies almost entirely on author and journal self-reporting. That is not a sufficient safeguard in the age of fast, convincing generative AI.
You, the major publishers, sit at a crucial intervention point. No other actors in the scientific ecosystem have the reach, the authority, or the structural position to act at the scale this moment requires. This is an enormous responsibility, but it is also a historic opportunity to become the protectors of the scientific record. Acting now will ultimately benefit science, academia, society, and publishers and journals themselves, by ensuring the enduring legacy and legitimacy of the scientific record.
We are asking you to take it. Specifically:
Align AI usage and disclosure requirements across your portfolios by working with experts to ensure policies tackle the issues outlined here.
Move beyond self-reporting toward real detection and real enforcement mechanisms.
Join us in forming a publisher-led taskforce, working with existing organizations and coalitions, to build the governance infrastructure science now needs.
We are inviting you to help shape what comes next, and to protect trust in science and factfulness in our society.
Safeguarding epistemic trust is a basic condition for effective governance, for international cooperation, and for the stability of our societies. The scientific record took centuries to build. It will not take centuries to lose. We are asking you to act before the damage becomes irreversible.
From the undersigned scientists, AI developers, science communicators, academics, policymakers, and public voices
This open letter is coordinated by the Geopolitical Insight and Education Foundation (GIEF), an independent, Washington, DC-based policy and research think tank focused on democratic stability and resilience. To add your signature, fill out the form below.
By signing this letter, your name, title, and organization may be displayed publicly on this page alongside other signatories. Your email address is collected for verification purposes only and will never be displayed or shared with third parties. You may request removal of your signature at any time by contacting info@giefoundation.net. For full details, see our Privacy Policy.