Drawing Red Lines w/ Su Cizem
Apr 6, 2026 · 3 min read
Into AI Safety
Technology has been outpacing policy for some time now, and the advent of AI isn't changing that. So what can we do to maintain safety despite the uncertainty? Su Cizem has spent the last few years trying to answer that question. As an analyst at the Future Society, she works on global AI governance, specifically on building international consensus around AI red lines: the thresholds we collectively agree must never be crossed. In this conversation, Su walks through her path from philosophy to policy, the evolution of the global AI safety summit series, why voluntary commitments from AI labs aren't enough, and what it would actually take to make international cooperation on AI safety real.
INTERVIEW RECORDED 2026.03.12; ASIDES RECORDED 2026.03.31; TRANSCRIPT
Chapters
00:00:00 ❙ Introduction
00:03:23 ❙ From Philosophy to Policy
00:22:25 ❙ What AI Governance Actually Means
00:26:49 ❙ The Summit Series
00:43:01 ❙ Drawing The Red Lines
01:10:51 ❙ Can These Companies Govern Themselves?
01:24:01 ❙ Breaking Into The Field
01:27:51 ❙ Closing Thoughts & Outro
Links
Red Lines
- Global Call for AI Red Lines
- The Future Society report - “Facing the Stakes of AI Together”: 2025 Athens Roundtable Report
- The Future Society blogpost - Key Takeaways from the India AI Impact Summit
- The Future Society blogpost - Where Do We Draw the Line? Inside the IASEAI Expert Workshop on AI Red Lines
Global AI Safety Summit Series
- Wikipedia article - AI Safety Summit 2023
- UK government press release - The Bletchley Declaration by Countries Attending the AI Safety Summit
- Politico article - How the global effort to keep AI safe went off the rails
- TechPolicy.Press article - US Delegation Heads to India AI Summit Intent on ‘Domination’
- TIME article - World Leaders Near Declaration on AI, Indian Government Says
- Indian government press release - Championing Inclusive and Multilingual AI for the Global South, India Unveils New Delhi Frontier AI Commitments
- AI Impact Summit Declaration
- The Future Society report - Citizens and Experts Unite: Final Report on Global Consultations for France’s 2025 AI Action Summit
Anthropic vs. DoD
- TechPolicy.Press article - A Timeline of the Anthropic-Pentagon Dispute
- Congressional Research Service report - Pentagon-Anthropic Dispute over Autonomous Weapon Systems: Potential Issues for Congress
- Center for American Progress article - The Department of Defense’s Conflict With Anthropic and Deal With OpenAI Are a Call for Congress To Act
- The Guardian article - AI got the blame for the Iran school bombing. The truth is far more worrying
OpenAI + Google Employee Open Letter
- The open letter - We Will Not Be Divided
- Engadget article - Google and OpenAI employees sign open letter in ‘solidarity’ with Anthropic
- TechCrunch article - Employees at Google and OpenAI support Anthropic’s Pentagon stand in open letter
OpenAI’s New DoDeal
- The Register article - Altman said no to military AI abuses – then signed Pentagon deal anyway
- CNBC article - Sam Altman tells OpenAI staffers that military’s ‘operational decisions’ are up to the government
- Transformer article - OpenAI’s Pentagon red lines are a mirage
- OpenAI press release - Our agreement with the Department of War
- EFF article - The Anthropic-DOD Conflict: Privacy Protections Shouldn’t Depend On the Decisions of a Few Powerful People
Peter Singer
- Singer’s 1972 paper - Famine, Affluence, and Morality
- Singer’s book of the same name
- The Partially Examined Life podcast episode - Episode 150: Guest Peter Singer on Famine, Affluence, and Morality
Responsible Scaling Policies, etc.
- SaferAI report - Evaluating AI Providers’ Frontier AI Safety Frameworks
- Anthropic’s RSP
- OpenAI’s Preparedness Framework
- Google DeepMind’s Frontier Safety Framework
- Meta’s Frontier AI Framework
- Don’t Worry About the Vase article - Anthropic Responsible Scaling Policy v3: A Matter of Trust