Scaling AI Safety Through Mentorship w/ Dr. Ryan Kidd

Feb 1, 2026 · Into AI Safety · 3 min read

What does it actually take to build a successful AI safety organization? I’m joined by Dr. Ryan Kidd, who has co-led MATS as it grew from a small pilot program into one of the field’s premier talent pipelines. In this episode, he reveals the low-hanging fruit in AI safety field-building that most people are missing: the amplifier archetype.

I pushed Ryan on some hard questions, from balancing funder priorities against research independence to building a robust selection process for both mentors and participants. Whether you’re considering a career pivot into AI safety or already working in the field, this conversation offers practical advice on how to actually make an impact.

As part of my effort to make this whole podcasting thing more sustainable, I have created a Kairos.fm Patreon that includes an extended version of this episode. Supporters get access to these extended cuts, as well as other perks in development.

INTERVIEW RECORDED 2026.01.06; ASIDES RECORDED 2026.01.25; TRANSCRIPT

Chapters

00:00:00 ❙ Intro
00:08:16 ❙ Building MATS Post-FTX & Summer of Love
00:13:09 ❙ Balancing Funder Priorities and Research Independence
00:19:44 ❙ The MATS Selection Process
00:33:15 ❙ Talent Archetypes in AI Safety
00:50:22 ❙ Comparative Advantage and Career Capital in AI Safety
01:04:35 ❙ Building the AI Safety Ecosystem
01:15:28 ❙ What Makes a Great AI Safety Amplifier
01:21:44 ❙ Lightning Round Questions
01:30:30 ❙ Final Thoughts & Outro

Ryan’s Writing

  • LessWrong post - Talent needs of technical AI safety teams
  • LessWrong post - AI safety undervalues founders
  • LessWrong comment - Comment permalink with 2025 MATS program details
  • LessWrong post - Talk: AI Safety Fieldbuilding at MATS
  • LessWrong post - MATS Mentor Selection
  • LessWrong post - Why I funded PIBBSS
  • EA Forum post - How MATS addresses mass movement building concerns

FTX Funding of AI Safety

  • LessWrong post - An Overview of the AI Safety Funding Situation
  • Fortune article - Why Sam Bankman-Fried’s FTX debacle is roiling A.I. research
  • Cointelegraph article - FTX probes $6.5M in payments to AI safety group amid clawback crusade
  • FTX Future Fund article - Future Fund June 2022 Update (archive)
  • Tracxn page - Anthropic Funding and Investors

Training & Support Programs

Funding Organizations

Coworking Spaces

Research Organizations & Startups

Other Sources

  • AXRP website - The AI X-risk Research Podcast
  • LessWrong post - Shard Theory: An Overview