COMMENTARIES
Shorter, conceptual, accessible pieces offering timely framing and strategic perspective. Useful for raising public, media, and political awareness.
Modeling State Recognition as a Displacement Stabilizer in the Horn of Africa
Outdated diplomatic orthodoxy is blocking Somaliland’s role in stabilizing the Horn of Africa. A pragmatic shift is urgently needed to reflect the territory’s de facto governance and relieve mounting displacement pressures. Recognition or functional inclusion would unlock access to aid, security cooperation, and climate finance: tools Somaliland already has the capacity to use. Failing to act leaves migration unmanaged, maritime routes exposed, and one of the most stable actors in the region excluded from solutions.
Rebooting Nuclear Safety Regulation for the Electrification Era
Outdated and over-precautionary safety regulation is holding back nuclear power. A bold rethink is urgently needed to enable capacity expansion at lower cost. This is vital to unlock nuclear’s unique potential to meet fast-rising demand for reliable, affordable electricity while also driving down greenhouse gas emissions.
Mercy Outlasts Missiles
A century ago, amid famine and revolution, the US mounted a huge humanitarian relief mission, feeding millions in Soviet Russia. As the war in Ukraine drags on, this forgotten act reminds us that compassion is not weakness but geopolitical wisdom.
AI On The Frontlines
Large language models are being used intensively by both sides in the Ukraine war, demonstrating their potential for offensive propaganda, but also how far they can help defend against political disinformation. Can AI help democracies win the information war?
Taiwan's Nightmare Scenario
The world is acutely aware of the risk that China might seek to take back Taiwan by force. Less well understood is the possibility that Beijing could bring the island to its knees through a strategy of physical isolation, one that could prove far harder for the West to resist.
Invisible Lies: AI and the Future of Academic Integrity
As large language models are used more and more widely in academic and other kinds of writing, model hallucinations risk introducing errors that propagate when AI-generated text is itself harvested as training data. Without countermeasures, this process risks creating feedback loops of misinformation and, ultimately, model collapse.