What AI Governance Can Learn From Climate – And Why It Mostly Hasn't

For roughly three decades, climate governance has been our most sustained real-world experiment in managing a slow-moving, civilisation-scale risk. Not because it has worked especially well, but because it has forced institutions to confront something genuinely hard: acting when harms are unevenly distributed across time and geography, and when feedback from decision-making arrives only after the damage has already begun.
I have been working at the intersection of climate and AI governance (UK Country Representative of the Global Ecovillage Network; Arcadia Impact AI Governance Taskforce), international justice (Platform for Peace & Humanity), and foresight (Futures4Europe). What strikes me is how little this accumulated experience informs contemporary AI safety debates. The communities are strikingly siloed – and given that AI is advancing far faster than climate change ever did, that seems like a problem.
We are behaving, in some ways, as though we have never encountered a high-stakes global risk before.
The parallels that are drawn – and the ones that aren’t
Comparisons between AI and climate change do get drawn occasionally. When they do, they usually focus on democratic accountability, compressed decision cycles, representation and legitimacy, or public trust. These are real concerns, but they tend to pivot toward speculative political outcomes rather than the underlying question: what happens when risks accelerate faster than institutional learning?
Climate research communities spent years developing tools to reason across long time horizons – accounting for feedback loops, identifying lock-in dynamics, anticipating tipping points. AI governance faces comparable structural challenges: non-linear capability growth, deployment decisions that may be difficult or impossible to reverse, and incentives that systematically reward speed over caution.
There’s also a version of the comparison that narrows too quickly. An Oxford law blog I recently came across framed AI and climate change as twin transformations, then spent most of its length on AI’s carbon footprint. That question matters, but it treats AI as an environmental hazard rather than a global-risk technology. (That said, the IEA projects global data-centre electricity consumption will reach around 945 TWh by 2030 – roughly equivalent to Japan’s current total electricity use. Worth knowing about.)
Where the analogies are genuinely useful
The UNU Institute for Environment and Human Security has argued that AI governance frameworks can learn from climate adaptation – specifically from the inflection point when adaptation stopped being treated as a niche environmental concern, and was reframed as a cross-sectoral risk affecting security, infrastructure, and economic stability. Climate governance accelerated once those linkages were recognised and institutionalised.
The solar geoengineering parallel is instructive, and underused. Both geoengineering and frontier AI are global-scale technologies characterised by profound scientific uncertainty, asymmetric incentives, and the risk of unilateral deployment by a small number of actors capable of forcing a planetary transition. Geoengineering has long been haunted by what some call the “governance-gap paradox” – the need for regulation before technical feasibility is fully proven, because by the time it is proven, the window may have closed. Yet solar geoengineering startups are now entering a commercial take-off phase without adequate governance frameworks. That trajectory should look familiar to AI governance activists. Climate activists have spotted the pattern and are watching the AI governance space keenly.
The lesson I draw from this is that once frontier-scale technologies attract serious capital, the window for responsible governance narrows fast. California’s SB 1047 is a case in point. The bill – which would have required frontier model developers to implement basic safety protocols – passed both chambers of the state legislature with strong support, only to be vetoed by Governor Newsom in September 2024 after an intense industry lobbying campaign. Among those who publicly opposed it was former House Speaker Nancy Pelosi, whose household held between $16 million and $80 million in AI-adjacent stocks, including Nvidia, Amazon, Google, and Microsoft, at the time of her opposition (American Prospect, 2024). The bill had been endorsed by Geoffrey Hinton, Yoshua Bengio, Elon Musk, and Anthropic. The governance window, in other words, was open, but investment capital closed it.
If we wait until highly capable models are deployed across critical infrastructure, the options shrink dramatically. In my current work with Arcadia Impact – developing severity thresholds for AI incident escalation – I have already seen how difficult it is to define governance triggers before systems are deployed. As the California SB 1047 case illustrates, once deployment occurs, political and institutional incentives shift toward preserving existing capabilities rather than constraining them. This is why calls for pre-deployment licensing, capability forecasting, and international coordination are not alarmist – they are simply late.
Where the strongest analyses still underestimate
Even comparative analyses I otherwise find useful tend to underestimate the tempo of AI risk. A November 2024 analysis categorised AI impacts as “intermittent” and “non-linear,” labelled AI a “sectoral” rather than a collective risk, and described its economic stakes as “low to medium.” This framing already feels far behind the curve to me – though perhaps not for the reasons most commonly cited.
Common framings of AI governance urgency understandably lean on the most dramatic examples: autonomous weapons use in conflicts where the forces are still overwhelmingly human,[1] AI in cyber operations, deceptive model behaviour. These are real and documented, but they are neither new nor quite what they appear to be.
Whether harmful outcomes emerge from misaligned systems that deceptive human actors can exploit, or from entirely rational competitive incentives – companies deploying models faster than safety work allows, cutting corners to reduce costs, prioritising capability over accountability – the governance gap is the same. The problem does not require AI to “go rogue”; it only requires that no adequate framework exists when the consequences compound.
A direct comparison makes the point:
Climate risk – short-term: extreme weather events; medium-term: ecosystem degradation, biodiversity loss; long-term: ocean-current collapse, polar permafrost thaw.
AI risk – short-term: biased automated decision-making, AI-driven cyberattacks; medium-term: power concentration, pervasive AI surveillance; long-term: misaligned advanced systems operating beyond human control.
Both trajectories involve cascading risks and feedback loops. The difference is the timescale. Climate unfolds over generations. AI risk may be compressed into a few training cycles – some say as early as 2027.
Why climate communities are natural allies
As climate activists, we understand how windows of opportunity open and close – the gap between the 1992 Rio Earth Summit and meaningful action is the story of a window missed. We know how early decisions lock in structural disadvantages: carbon-intensive infrastructure commits us to decades of emissions regardless of subsequent political will, and between now and 2030, $90 trillion in global infrastructure investment could either deepen or begin to break that lock-in. We know how problems can resist standard policy tools – Australia repealed its carbon tax within two years of introduction; France’s fuel tax increases triggered a political revolt. We have watched bifurcated responses undermine collective action: US withdrawal from the Paris Agreement – twice – demonstrated that no framework is stable when its largest actors treat participation as optional. And we have spent decades arguing that uneven risk distribution demands a coordinated response: the nations facing existential loss from sea-level rise – Tuvalu, Kiribati, the Maldives – contribute less than 1% of global emissions.
None of this is abstract for us. These are the structural features of governing a global commons under political inertia. These patterns also map almost directly onto frontier AI governance.
Then there is the psychological parallel. Climate change appeared too abstract and too slow-moving to demand aggressive early action, becoming politically unavoidable only once harms were visible – by which point much of the damage was already locked in. AI risk has the opposite problem: it moves at a pace that denies policymakers the time needed to form new instincts. Slow recognition is as dangerous as slow response, just for different reasons.
The Montréal Protocol is the counterexample I keep returning to. When the scientific community accepted the evidence of ozone depletion, governments acted with unusual speed. The protocol was negotiated in 1987, within two years of the critical findings, establishing a stabilisation period before irreversible damage set in. It demonstrates that precaution taken early enough can avert worst-case outcomes even under genuine uncertainty. Our current inability to forecast the capabilities of the next generation of AI systems is not a reason to wait – it may be the strongest case for acting before thresholds are crossed.
I have consistently argued that frontier AI needs something equivalent to the International Civil Aviation Organisation: a new aircraft design cannot enter service without its plans being scrutinised and approved. We should be doing the same with foundation models.
The case for bridging these communities
AI governance is largely driven by a few thousand professionals, many of whom share common assumptions and common blind spots. It draws on concepts long familiar in climate governance, peace studies, and foresight – systemic risk, irreversibility, collective action problems, path dependence – but it often does so without engaging the communities that have been working on those concepts for decades.
Climate communities have spent decades on precaution under uncertainty. Peace communities know what effective treaties and de-escalation frameworks need to look like. Foresight work is built on detecting and aggregating weak signals and on tracing path dependence. Why, then, do these communities so rarely intersect with AI governance work, given how directly their accumulated knowledge applies?
The window for precaution in climate governance was measured in decades, and we still struggled to use it well. In AI, the equivalent window may be measured in years. Keeping these conversations siloed risks repeating – at far greater speed – the failures we now look back on in climate action. I have become increasingly frustrated by this as a practical constraint on my own work. I move between climate governance spaces, AI safety discussions, international justice forums, and foresight networks regularly. The conversations are often uncannily parallel, sometimes using different terminology for identical concepts, frequently reinventing frameworks that already exist elsewhere. The waste is significant.
More importantly, the missed synthesis means we are slower than we should be at recognising patterns, slower at adapting lessons, and slower at building institutional muscle memory. This is not merely an analytical claim. It is a practical problem about where talent, attention, and cross-community relationships need to move – fairly urgently.