The Blub Paradox

Originally a 2–3 min video — also on LinkedIn / TikTok / YouTube · @allemaar

Alexandru Mareș

Published: 10/04/2026
Read time: 2 min
Topics: General, AI, Notation

Imagine you walk into a restaurant. The menu has fifty dishes. You order one. You're happy.

But what if the best dish in the kitchen isn't on the menu? The chef makes it. The ingredients are there. It's the thing regulars ask for by name. But it's not written down. So you never order it. You never even know it exists. You leave satisfied, thinking you saw every option.

That's not the restaurant's fault. That's the menu's fault.

The menu decided what you could see.

The Paradox

Paul Graham, the guy behind Y Combinator, named this exact pattern. He called it the Blub Paradox.

He was talking about programming languages. A programmer working in one language can look at simpler tools and immediately see what they're missing. That part is easy. Looking down is always clear.

But looking up? Impossible. More powerful tools have concepts this programmer has never encountered. Not concepts they rejected. Concepts they literally cannot perceive. Their tools don't have the vocabulary for them.

In linguistics, there's a term for this. Lacuna. A gap where a concept should be, but the language has no word for it. You can't think fluently about what your language can't express. Graham took that idea and applied it to technology. The tool shapes what the user can see.

The AI

Now apply that to AI. Your AI reads instructions in a format. Usually JSON. And JSON is good at what it does. Keys, values, lists, nesting. Clean, universal.

But JSON has no concept of priority. You can write 'use dark mode' and 'never expose medical data without consent' in the same list. To you, one is a preference and the other is the law. To JSON, they're identical. Same structure. Same weight. Same depth.

The format has no word for 'this one matters more.' So the AI treats them equally. Not because it's careless. Because its menu doesn't have that dish. The concept of enforcement level is a lacuna in JSON's vocabulary.
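The flattening described above can be made concrete. Here is a minimal sketch (the instruction list and its key names are hypothetical, invented for illustration): a cosmetic preference and a hard constraint, serialized into the same JSON shape.

```python
import json

# Hypothetical instruction list: one preference, one hard rule,
# written in identical JSON structure.
rules = json.loads("""
[
  {"instruction": "use dark mode"},
  {"instruction": "never expose medical data without consent"}
]
""")

# JSON gives both entries the same keys, the same depth, the same
# weight. Nothing in the format distinguishes a preference from a law.
print(set(rules[0].keys()) == set(rules[1].keys()))  # True
```

From the parser's point of view the two entries are interchangeable; any notion of enforcement level would have to live outside the notation.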

And the AI doesn't know it's missing. That's the paradox. You can't flag a gap you can't see.

The Mirror

Here's where it gets uncomfortable. You're reading this thinking: sure, JSON has limits.

But what are you using instead? Markdown with bold text to mark something important? YAML with a comment that says 'DO NOT SKIP'? A system prompt that says 'always follow these rules' in plain English, hoping the model pays attention?

You're inside something too. So am I. Every format has a boundary. Every boundary creates lacunae. Concepts that exist in the world but don't exist in the vocabulary you handed your AI.

The question isn't whether your AI is smart enough. It's whether your notation is rich enough for smart to matter.
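One way to read that last line: the fix is not a smarter model but a vocabulary that can express the distinction. A minimal sketch, with an invented `enforcement` field (the field name and its values are assumptions, not any standard):

```python
import json

# The same two rules, but the notation now has a word for
# "this one matters more".
rules = json.loads("""
[
  {"instruction": "use dark mode",
   "enforcement": "preference"},
  {"instruction": "never expose medical data without consent",
   "enforcement": "invariant"}
]
""")

# Once the concept exists in the format, it can be acted on.
invariants = [r for r in rules if r["enforcement"] == "invariant"]
print(len(invariants))  # 1
```

The point is not this particular schema; it is that a reader (human or model) can only enforce a distinction the notation is able to state.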

Close

The blind spot can't announce itself. If it could, it wouldn't be blind.

Structure before scale.