Polysemanticity w/ Dr. Darryl Wright
Jan 22, 2024 · 1 min read

Into AI Safety
Darryl and I discuss his background, how he became interested in machine learning, and a project we are currently working on that investigates penalizing polysemanticity during the training of neural networks. A rough sketch of what such a penalty can look like follows below.
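As a loose illustration of what "penalizing polysemanticity" can mean in practice, here is a minimal PyTorch sketch that adds an auxiliary penalty on hidden activations to the standard training loss. The specific penalty form (an L1 sparsity term) and the `penalty_weight` coefficient are illustrative assumptions for this sketch, not the formulation we discuss in the episode.

```python
import torch
import torch.nn as nn

# Hypothetical two-layer classifier whose hidden activations we want to
# push toward monosemanticity. The L1 penalty below is an assumed stand-in;
# the episode discusses the actual approach.
class ToyModel(nn.Module):
    def __init__(self, n_in=784, n_hidden=64, n_out=10):
        super().__init__()
        self.hidden = nn.Linear(n_in, n_hidden)
        self.out = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        h = torch.relu(self.hidden(x))
        return self.out(h), h  # expose activations for the penalty

model = ToyModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
penalty_weight = 1e-4  # illustrative coefficient

def training_step(x, y):
    logits, h = model(x)
    task_loss = criterion(logits, y)
    # An L1 penalty encourages each input to activate few hidden units,
    # one crude proxy for discouraging polysemantic neurons.
    sparsity_penalty = h.abs().mean()
    loss = task_loss + penalty_weight * sparsity_penalty
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key design point is that the penalty is added to, rather than replacing, the task loss, so the trade-off between task performance and interpretability is controlled by a single coefficient.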

Chapters
01:46 – Interview begins
02:14 – Supernovae classification
08:58 – Penalizing polysemanticity
20:58 – Our "toy model"
30:06 – Task description
32:47 – Addressing hurdles
39:20 – Lessons learned
Links
Links to all articles and papers mentioned throughout the episode are listed below, in order of appearance.
- Zooniverse
- BlueDot Impact
- AI Safety Support
- Zoom In: An Introduction to Circuits
- MNIST dataset on PapersWithCode
- MNIST on Wikipedia
- Clusterability in Neural Networks
- CIFAR-10 dataset
- Effective Altruism Global
- CLIP Blog
- CLIP on GitHub
- Long Term Future Fund
- Engineering Monosemanticity in Toy Models