The Blub Paradox
Originally a 2–3 min video — also on LinkedIn / TikTok / YouTube · @allemaar
Imagine you walk into a restaurant. The menu has fifty dishes. You order one. You're happy.
But what if the best dish in the kitchen isn't on the menu? The chef makes it. The ingredients are there. It's the thing regulars ask for by name. But it's not written down. So you never order it. You never even know it exists. You leave satisfied, thinking you saw every option.
That's not the restaurant's fault. That's the menu's fault.
The menu decided what you could see.
Paul Graham, the guy behind Y Combinator, named this exact pattern. He called it the Blub Paradox.
He was talking about programming languages. A programmer working in one language can look at simpler tools and immediately see what they're missing. That part is easy. Looking down is always clear.
But looking up? Impossible. More powerful tools have concepts this programmer has never encountered. Not concepts they rejected. Concepts they literally cannot perceive. Their tools don't have the vocabulary for them.
In linguistics, there's a term for this. Lacuna. A gap where a concept should be, but the language has no word for it. You can't think fluently about what your language can't express. Graham took that idea and applied it to technology. The tool shapes what the user can see.
Now apply that to AI. Your AI reads instructions in a format. Usually JSON. And JSON is good at what it does. Keys, values, lists, nesting. Clean, universal.
But JSON has no concept of priority. You can write 'use dark mode' and 'never expose medical data without consent' in the same list. To you, one is a preference and the other is the law. To JSON, they're identical. Same structure. Same weight. Same depth.
The format has no word for 'this one matters more.' So the AI treats them equally. Not because it's careless. Because its menu doesn't have that dish. The concept of enforcement level is a lacuna in JSON's vocabulary.
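The point can be sketched in a few lines of Python. To the parser, a preference and a hard constraint are indistinguishable; the only way to express "this one matters more" is to invent a field (here a hypothetical `level` key) that the format itself never asks for:

```python
import json

# Two instructions, side by side in the same JSON list: one is a
# preference, the other is the law. JSON can't tell them apart.
rules = json.loads("""
[
  {"rule": "use dark mode"},
  {"rule": "never expose medical data without consent"}
]
""")

# Structurally identical: same keys, same nesting depth, same "weight".
assert list(rules[0].keys()) == list(rules[1].keys())

# One workaround is to supply the missing word ourselves -- a
# hypothetical "level" field that no JSON schema requires:
leveled = [
    {"rule": "use dark mode", "level": "preference"},
    {"rule": "never expose medical data without consent",
     "level": "hard_constraint"},
]

# Now a consumer can at least filter or sort by enforcement level.
hard = [r["rule"] for r in leveled if r["level"] == "hard_constraint"]
```

Even then, the priority lives in a convention you made up, not in the notation. Nothing in JSON stops the next tool in the pipeline from ignoring the `level` field entirely.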
And the AI doesn't know it's missing. That's the paradox. You can't flag a gap you can't see.
Here's where it gets uncomfortable. You're watching this thinking: sure, JSON has limits.
But what are you using instead? Markdown with bold text to mark something important? YAML with a comment that says 'DO NOT SKIP'? A system prompt that says 'always follow these rules' in plain English and hopes the model pays attention?
You're inside something too. So am I. Every format has a boundary. Every boundary creates lacunae. Concepts that exist in the world but don't exist in the vocabulary you handed your AI.
The question isn't whether your AI is smart enough. It's whether your notation is rich enough for smart to matter.
The blind spot can't announce itself. If it could, it wouldn't be blind.
Structure before scale.