AI governance ignores hard-won climate lessons on systemic risk, lock-in, and precaution, repeating those mistakes at greater speed and with a narrower window to act.
How does what is likely the biggest AI safety training organization operate? Li-Lian Ang shares their strategy.
Will Petillo discusses PauseAI's grassroots movement to address risks from frontier AI development.
Tristan and Felix de Simone join me to discuss the value of constituent communication on AI.
As always, the best things come in 3s: dimensions, musketeers, pyramids, and... 3 installments of my interview with Dr. Peter Park!
Join me for round 2 with Dr. Peter Park, an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT.
Dr. Peter Park, an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT, joins me to discuss his non-profit, StakeOut.AI.