In 1940, the same year Whorf published his paper on the Hopi, Jorge Luis Borges published "Tlön, Uqbar, Orbis Tertius."
It's a story about a fictional world called Tlön, whose languages have no nouns. In Tlön's southern hemisphere, the basic unit of language isn't the object but the verb. Instead of saying "the moon rose," speakers say something like "upward behind the onstreaming it mooned." Objects don't exist in Tlön's epistemology, because the language has no way to refer to persistent things. Materialism is inconceivable, not because Tlön's inhabitants have decided against it, but because their language makes it grammatically impossible.
This alone would be a thought experiment about Sapir-Whorf. But Borges adds a twist that keeps me up at night.
The ideas of Tlön begin to reshape reality. Objects from Tlön start appearing in the real world. The constructed language doesn't just describe an alternate world. It creates one.
I think about this story every time I make a design decision in YON.
If we accept the parallel, that notation shapes how AI agents perceive systems, then YON doesn't just describe your architecture. It creates a version of your architecture in the agent's cognitive space. The system the agent "sees" is the YON-mediated version.
If YON's structure emphasizes rules and constraints but underrepresents performance characteristics, the agent will build a version of reality where rules matter and performance is invisible. If YON's kind system has categories for rules and specs but not for "technical debt" or "scaling bottleneck," those concepts become harder for the agent to perceive.
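To make the gap concrete, here is a hypothetical fragment in a YON-like style. The only constructs borrowed from YON itself are @RULE and kind= tags; everything else (the kind= values, the field names, the services) is invented for illustration:

```
@RULE kind=constraint
  service.checkout MUST NOT call service.billing directly

@SPEC kind=interface
  service.billing exposes charge(amount, currency)

# There is no kind= category for observations like:
#   "service.billing latency degrades under sustained load"
#   "the retry logic here is accumulating technical debt"
# If the notation cannot tag them, the agent's model of the
# system quietly omits them.
```

The point is not this particular syntax but the shape of the problem: whatever the taxonomy of kind= values happens to be, it doubles as the taxonomy of what the agent can readily notice.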
The notation constructs the world. Not the world, but the world the agent inhabits.
Borges meant his story as a warning, not a celebration. The constructed language was dangerous because it was powerful. The more completely it shaped perception, the more reality it displaced.
I feel the weight of this. If we can design notations that shape how AI agents think, we are building cognitive architectures for minds that don't fully exist yet. The notation decisions we make today will create the default cognitive grooves of tomorrow's agents.
This isn't a small responsibility. And it deserves more than a footnote.
There are five things I try to keep honest about:
First: analogy is not equivalence. The Sapir-Whorf parallel is productive, but YON-and-LLMs is not the same system as Hopi-and-humans. The mechanisms differ. I hold this as a useful frame, not a proven theory.
Second: the untranslatable concepts argument is the weakest link. YON's concepts existed as intentions before the notation named them. Making them addressable is different from making them thinkable. I need to be careful about this distinction.
Third: the gaps in YON's structure become gaps in agent cognition. This is Caveat 5, and it's the Borges warning made technical. Whatever YON can't express, the agent can't easily perceive. Notation design carries a responsibility not to leave things out.
Fourth: there's a difference between controlling behavior and shaping cognition. I sometimes blur this line. Runtime constraints are controllability. Cognitive architecture is something deeper. Both matter. They're not the same.
Fifth: none of this gives us permission to stop questioning. A notation that shapes minds should be questioned constantly, by its designer most of all.
Borges asked: what happens when a constructed language creates a reality?
I ask: what happens when a notation system constructs the cognitive reality of an AI agent?
The answer is that we're already inside the story. Every behavioral contract, every @RULE, every kind= tag is a small act of world-construction. We're building the cognitive world that agents will inhabit.
I'd rather build it deliberately, with the warning in mind, than build it accidentally and discover the consequences later.
Previous: The Vocabulary. Next in the series: The Pliable Mind, on why LLMs are different from every previous cognizer.