AI, Reasoning or Rambling?
Jul 13, 2025 · 2 min read

muckrAIkers
In this episode, we redefine AI’s “reasoning” as mere rambling, exposing the “illusion of thinking” and “Potemkin understanding” in current models. We contrast the classical definition of reasoning, which requires logic and consistency, with Big Tech’s new version, a generic statement about information processing. We then explain how Large Rambling Models generate long, often irrelevant traces whose apparent benchmark gains are largely attributable to best-of-N sampling and benchmark gaming (see the sketch below).
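For listeners who want the mechanics behind that last claim, here is a minimal, hypothetical Python sketch of best-of-N sampling with an oracle verifier. Everything in it (the `p_correct` per-sample success rate, the `solve_once` stand-in for a model call, the simulation harness) is an illustrative assumption, not code from the episode or the cited papers; the point is just the arithmetic that pass@N = 1 − (1 − p)^N, so drawing many samples makes a weak model look strong on any benchmark that accepts the best attempt.

```python
import random

def solve_once(p_correct: float) -> bool:
    """One independent attempt at a task; succeeds with probability
    p_correct. A stand-in for a single (possibly rambling) model sample."""
    return random.random() < p_correct

def best_of_n(p_correct: float, n: int) -> bool:
    """Best-of-N with a perfect verifier: the task counts as solved
    if ANY of the n samples is correct (i.e. pass@N scoring)."""
    return any(solve_once(p_correct) for _ in range(n))

def estimate_pass_rate(p_correct: float, n: int, trials: int = 20_000) -> float:
    """Monte Carlo estimate of the pass@N rate over many tasks."""
    return sum(best_of_n(p_correct, n) for _ in range(trials)) / trials

if __name__ == "__main__":
    p = 0.10  # a deliberately weak per-sample success rate
    for n in (1, 4, 16, 64):
        # Analytically, pass@N = 1 - (1 - p)^N: even p = 0.10 climbs
        # past 99% by N = 64 with no change to the underlying model.
        print(f"n={n:3d}  simulated={estimate_pass_rate(p, n):.3f}  "
              f"analytic={1 - (1 - p) ** n:.3f}")
```

This repeated-sampling effect is exactly what the “Large Language Monkeys” preprint linked below studies at scale; the sketch only shows why reported scores can rise sharply without any improvement in single-shot reasoning.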
Words and definitions actually matter! Carelessness leads to misplaced investments and an overestimation of systems that are currently just surprisingly useful autocorrects.
EPISODE RECORDED 2025.07.07
Chapters
00:00:00 - Intro
00:00:40 - OBB update and Meta's talent acquisition
00:03:09 - What are rambling models?
00:04:25 - Definitions and polarization
00:09:50 - Logic and consistency
00:17:00 - Why does this matter?
00:21:40 - More likely explanations
00:35:05 - The "illusion of thinking" and task complexity
00:39:07 - "Potemkin understanding" and surface-level recall
00:50:00 - Benchmark gaming and best-of-N sampling
00:55:40 - Costs and limitations
00:58:24 - Claude's anecdote and the Vending Bench
01:03:05 - Definitional switch and implications
01:10:18 - Outro
Links
- Apple paper - The Illusion of Thinking
- ICML 2025 paper - Potemkin Understanding in Large Language Models
- Preprint - Large Language Monkeys: Scaling Inference Compute with Repeated Sampling
Theoretical understanding
- Max M. Schlereth Manuscript - The limits of AGI part II
- Preprint - (How) Do Reasoning Models Reason?
- Preprint - A Little Depth Goes a Long Way: The Expressive Power of Log-Depth Transformers
- NeurIPS 2024 paper - How Far Can Transformers Reason? The Globality Barrier and Inductive Scratchpad
Empirical explanations
- Preprint - How Do Large Language Monkeys Get Their Power (Laws)?
- Andon Labs Preprint - Vending-Bench: A Benchmark for Long-Term Coherence of Autonomous Agents
- LeapLab, Tsinghua University and Shanghai Jiao Tong University paper - Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model?
- Preprint - RL in Name Only? Analyzing the Structural Assumptions in RL post-training for LLMs
- Preprint - Mind The Gap: Deep Learning Doesn’t Learn Deeply
- Preprint - Measuring AI Ability to Complete Long Tasks
- Preprint - GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models
Other sources
- Zuck’s Haul webpage - Meta’s talent acquisition tracker
- Hacker News discussion - Opinions from the AI community
- Interconnects blogpost - The rise of reasoning machines
- Anthropic blog - Project Vend: Can Claude run a small shop?