The Catastrophic Risk of Inaction
WHY THIS PROGRAM MATTERS
The civilization-level risk posed by AI pollution of science is easy to underestimate, but it is real. GIE Foundation’s view is that generative AI poses a near-term catastrophic risk: by degrading the reliability of science, the system modern societies depend on to decide what is true, it threatens the collapse of trust and factfulness in society as a whole.
What makes this dangerous is not only that AI sometimes produces bad material, but the speed and volume at which convincing synthetic content is entering the scientific record and embedding itself there. In 2022, AI-generated text, data, or references were essentially absent from peer-reviewed literature. By 2024, credible estimates suggest that roughly 17% of published papers already show signs of AI involvement in substantive content: AI-written passages, fabricated or distorted references, and synthetic or manipulated data, with the trend accelerating. This has happened far faster than the institutions science relies on - peer review, replication, and post-publication correction - can plausibly respond.
TWO HARMS
This produces two orders of harm: utility-level harm and societal-level harm.
On utility harm: much of the AI-generated material is low quality or false (hallucinated), but it is good enough to pass review and to be cited. Once that happens, the damage compounds: false results do not remain isolated; they are reused, summarized, and embedded in later work. In areas such as medicine, climate research, or engineering, this can lead directly to harmful decisions made on the basis of evidence that only appears to exist.
The more serious problem is the speed and scale of pollution; this is the societal-level, existential harm to our democracies. Science functions because it is cumulative: new work assumes that the existing literature is broadly reliable. If synthetic or unreliable material enters the record far faster than it can be identified and corrected, that assumption stops holding. After only a few rounds of citation and reuse, it becomes extremely difficult to distinguish work grounded in real evidence from work resting on synthetic foundations. At that point, the scientific record as a whole becomes suspect: the default assumption becomes that any piece of science may or may not be grounded in objective fact, and can be used, abused, or ignored accordingly.
When trust in science, and in facts themselves, erodes at scale, the consequences are not confined to research. Science is the backbone of collective decision-making in modern societies, from public health to technology regulation to long-term risk management. If there is no longer confidence that scientific claims correspond to reality, evidence stops settling disagreements or informing debate. Policy inevitably becomes a contest of persuasion rather than verification, and institutions lose the ability to coordinate action on the basis of shared facts.
CATASTROPHIC RISK
This is a catastrophic risk to society’s functioning. Truth and factfulness have already been lost from the media and politics; science is the last bastion of empiricism. If science also falls to subjectivity, it may be impossible to re-establish factfulness at all.
A society that no longer trusts its mechanisms for producing knowledge cannot govern itself coherently, and cannot respond effectively to severe or systemic threats. Generative AI makes this outcome plausible for the first time by enabling a flood of convincing but unreliable material that overwhelms existing safeguards. Once the scientific record is widely believed to be polluted beyond repair, trust may not return simply by producing more research.
What follows from this is a crisis for objective truth itself. Science is the last large-scale institution that still produces claims intended to be objectively true, verifiable, and independent of persuasion. If even this system becomes widely suspected of being saturated with synthetic, unverifiable material, the assumption that there exists a shared external reality will collapse. Information will stop being something that can be checked and become something that can be chosen. Claims will be evaluated not by correspondence to evidence, but by alignment with identity, interest, or power. This is the dynamic often described as the “dead internet theory”: an environment flooded with content that looks real but is no longer grounded in anything outside itself, destroying its utility as an information network. In practice, science would become an “epistemic dead zone”: an environment where information still looks authoritative but no longer reliably corresponds to anything outside itself.
WHAT THIS MEANS FOR DEMOCRACY
Once this shift occurs, democratic society will be impossible to sustain, because democracy depends on disagreement over values and policies, not over basic facts. Courts, regulators, elections, public health systems, and markets all rely on the premise that evidence can, in principle, settle what is true. If that premise collapses, institutions lose legitimacy and collective decision-making becomes impossible. At that point, the problem is no longer misinformation or bad policy; it is the irrecoverable loss of a shared reality altogether. Large-scale risks, such as pandemics, climate stress, technological hazards (including AI itself), and geopolitical crises, become ungovernable, because societies can no longer agree on the reality that any response is meant to address. The ‘conversation’ that drives democratic governance cannot function without a shared language of facts.
This program focuses on the narrow point where intervention is still realistic - the small number of global academic publishers that decide what enters and persists in the scientific record. Improving AI disclosure, governance, and enforcement at this level offers a credible chance to slow or prevent a broader collapse in epistemic trust.
The scale of generative AI creates the risk that science stops functioning as a shared reference point for reality at all; at that point, democratic collapse becomes inevitable.