Scaling Democracy w/ Dr. Igor Krawczuk

May 21, 2024 · Into AI Safety · 6 min read
The *almost* Dr. Igor Krawczuk joins me for what is the equivalent of four of my previous episodes. We get into all the classics: eugenics, capitalism, philosophical toads... Need I say more?

If you’re interested in connecting with Igor, head on over to his website, or check out placeholder for thesis (it isn’t published yet).

Because these show notes have a whopping 115 additional links below, I'll highlight some that I think are particularly worthwhile:

Chapters

00:02:32 ❙ Introducing Igor
00:10:11 ❙ Aside on EY, LW, EA, etc., a.k.a. lettersoup
00:18:30 ❙ Igor on AI alignment
00:33:06 ❙ “Open Source” in AI
00:41:20 ❙ The story of infinite riches and suffering
00:59:11 ❙ On AI threat models
01:09:25 ❙ Representation in AI
01:15:00 ❙ Hazard fishing
01:18:52 ❙ Intelligence and eugenics
01:34:38 ❙ Emergence
01:48:19 ❙ Considering externalities
01:53:33 ❙ The shape of an argument
02:01:39 ❙ More eugenics
02:06:09 ❙ I’m convinced, what now?
02:18:03 ❙ AIxBio (round ??)
02:29:09 ❙ On open release of models
02:40:28 ❙ Data and copyright
02:44:09 ❙ Scientific accessibility and bullshit
02:53:04 ❙ Igor’s point of view
02:57:20 ❙ Outro

Links to all articles and papers mentioned throughout the episode can be found below, in order of their appearance. All references are included, including those mentioned only in the extended version of this episode.