OpenAI's o1 System Card, Literally Migraine-Inducing

Dec 22, 2024 · muckrAIkers · 3 min read
Model cards were introduced as a measure to increase transparency and understanding of machine learning models, but the idea has been perverted into the marketing gimmick exemplified by OpenAI's o1 system card. To demonstrate the adversarial stance we believe is necessary to draw meaning from these press-releases-in-disguise, we conduct a close reading of the system card. Be warned: there's a lot of muck in this one.
EPISODE RECORDED 2024.12.08

Chapters

00:00:00 ❙ Intro
00:00:00 ❙ Recorded 2024.12.08
00:00:54 ❙ Actual intro
00:03:00 ❙ System cards vs. academic papers
00:05:36 ❙ Starting off sus
00:08:28 ❙ o1, continued
00:12:23 ❙ Rant #1: figure 1
00:18:27 ❙ A diamond in the rough
00:19:41 ❙ Hiding copyright violations
00:21:29 ❙ Rant #2: Jacob on "hallucinations"
00:25:55 ❙ More ranting and "hallucination" rate comparison
00:31:54 ❙ Fairness, bias, and bad science comms
00:35:41 ❙ System, dev, and user prompt jailbreaking
00:39:28 ❙ Chain-of-thought and Rao-Blackwellization
00:44:43 ❙ "Red-teaming"
00:49:00 ❙ Apollo's bit
00:51:28 ❙ METR's bit
00:59:51 ❙ Pass@???
01:04:45 ❙ SWE-bench Verified
01:05:44 ❙ Appendix bias metrics
01:10:17 ❙ The muck and the meaning

Additional o1 Coverage

  • NIST + AISI report - US AISI and UK AISI Joint Pre-Deployment Test
  • Apollo Research’s paper - Frontier Models are Capable of In-context Scheming
  • VentureBeat article - OpenAI launches full o1 model with image uploads and analysis, debuts ChatGPT Pro
  • The Atlantic article - The GPT Era Is Already Ending
  • Patrick Chao tweet

On Data Labelers

  • 60 Minutes article + video - Labelers training AI say they’re overworked, underpaid and exploited by big American tech companies
  • Reflections article - The hidden health dangers of data labeling in AI development
  • Privacy International article - Humans in the AI loop: the data labelers behind some of the most powerful LLMs’ training datasets

Chain-of-Thought Papers Cited

  • Paper - Measuring Faithfulness in Chain-of-Thought Reasoning
  • Paper - Language Models Don’t Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting
  • Paper - On the Hardness of Faithful Chain-of-Thought Reasoning in Large Language Models
  • Paper - Faithfulness vs. Plausibility: On the (Un)Reliability of Explanations from Large Language Models

Other Mentioned/Relevant Sources

Unrelated Developments

  • Cruz’s letter to Merrick Garland
  • AWS News Blog article - Introducing Amazon Nova foundation models: Frontier intelligence and industry leading price performance
  • BleepingComputer article - Ultralytics AI model hijacked to infect thousands with cryptominer
  • The Register article - Microsoft teases Copilot Vision, the AI sidekick that judges your tabs
  • Fox Business article - OpenAI CEO Sam Altman looking forward to working with Trump admin, says US must build best AI infrastructure