Next Steps at BlueDot w/ Li-Lian Ang
Sep 15, 2025 · 4 min read

Into AI Safety
I'm joined by my good friend, Li-Lian Ang, first hire and product manager at BlueDot Impact. We discuss how BlueDot has evolved from its original course offerings to a new "defense-in-depth" approach, which focuses on three core threat models: reduced oversight in high-risk scenarios (e.g. accelerated warfare), catastrophic terrorism (e.g. rogue actors with bioweapons), and the concentration of wealth and power (e.g. supercharged surveillance states). On top of that, we cover how BlueDot's strategies account for and reduce the negative impacts of common issues in AI safety, including exclusionary tendencies, elitism, and echo chambers.
2025.09.15: BlueDot Impact is hiring right now, and should be looking to fill additional positions in the near future! If you think you’d be a good fit, I definitely recommend applying; I had a great experience when I contracted as a course facilitator. If you end up applying, let them know you found out about the opportunity from the podcast!
Follow Li-Lian on LinkedIn, and look at more of her work on her blog!
As part of my effort to make this whole podcasting thing more sustainable, I have created a Kairos.fm Patreon which includes an extended version of this episode. Supporting gets you access to these extended cuts, as well as other perks in development.
INTERVIEW RECORDED 2025.08.27; ASIDES RECORDED 2025.09.07
Chapters
00:00:00 ❙ Intro
00:03:23 ❙ Meeting Through the Course
00:05:46 ❙ Eating Your Own Dog Food
00:13:13 ❙ Impact Acceleration
00:22:13 ❙ Breaking Out of the AI Safety Mold
00:26:06 ❙ BlueDot’s Risk Framework
00:41:38 ❙ Dangers of "Frontier" Models
00:54:06 ❙ The Need for AI Safety Advocates
01:00:11 ❙ Hot Takes and Pet Peeves
Links
- BlueDot Impact website
Defense-in-Depth
- BlueDot Impact blogpost - Our vision for comprehensive AI safety training
- Engineering for Humans blogpost - The Swiss cheese model: Designing to reduce catastrophic losses
- Open Journal of Safety Science and Technology article - The Evolution of Defense in Depth Approach: A Cross Sectorial Analysis
X-clusion and X-risk
- Nature article - AI Safety for Everyone
- Ben Kuhn blogpost - On being welcoming
- Reflective Altruism blogpost - Belonging (Part 1: That Bostrom email)
AIxBio
- RAND report - The Operational Risks of AI in Large-Scale Biological Attacks
- OpenAI “publication” (press release) - Building an early warning system for LLM-aided biological threat creation
- Anthropic Frontier AI Red Team blogpost - Why do we take LLMs seriously as a potential source of biorisk?
- Kevin Esvelt preprint - Foundation models may exhibit staged progression in novel CBRN threat disclosure
- Anthropic press release - Activating AI Safety Level 3 protections
Persuasive AI
- Preprint - Lies, Damned Lies, and Distributional Language Statistics: Persuasion and Deception with Large Language Models
- Nature Human Behaviour article - On the conversational persuasiveness of GPT-4
- Preprint - Large Language Models Are More Persuasive Than Incentivized Human Persuaders
AI, Anthropomorphization, and Mental Health
- Western News article - Expert insight: Humanlike chatbots detract from developing AI for the human good
- AI & Society article - Anthropomorphization and beyond: conceptualizing humanwashing of AI-enabled machines
- Artificial Ignorance article - The Chatbot Trap
- Making Noise and Hearing Things blogpost - Large language models cannot replace mental health professionals
- Idealogo blogpost - 4 reasons not to turn ChatGPT into your therapist
- Journal of Medical Society editorial - Importance of informed consent in medical practice
- Indian Journal of Medical Research article - Consent in psychiatry - concept, application & implications
- MediaNama article - The Risk of Humanising AI Chatbots: Why ChatGPT Mimicking Feelings Can Backfire
- Becker’s Behavioral Health blogpost - OpenAI’s mental health roadmap: 5 things to know
Miscellaneous References
- Carnegie Council blogpost - What Do We Mean When We Talk About “AI Democratization”?
- Collective Intelligence Project policy brief - Four Approaches to Democratizing AI
- BlueDot Impact blogpost - How Does AI Learn? A Beginner’s Guide with Examples
- BlueDot Impact blogpost - AI safety needs more public-facing advocacy
Relevant Podcasts from Kairos.fm
- Scaling Democracy w/ Dr. Igor Krawczuk for AI safety exclusion and echo chambers
- Getting into PauseAI w/ Will Petillo for AI in warfare and exclusion in AI safety