The Pliable Mind

Alexandru Mareș

Published: 12/03/2026
Read time: 4 min
Topics: Language & Cognition, Thinking, Philosophy, YON

Here is the argument I've been building toward.

Every constructed language in human history (Esperanto, Lojban, Ithkuil, Blissymbolics) has failed at its most ambitious goal. They were designed to reshape thought. They succeeded, at most, in reshaping the thought of a few hundred dedicated enthusiasts.

The reason is always the same: humans resist. Natural languages are neurologically, socially, and culturally entrenched. You can't ask a billion English speakers to start thinking in Lojban. The adoption barrier is insurmountable because the cognizers are attached to their current language.

I believe this changes with LLMs. Not incrementally. Fundamentally.

Why LLMs Are Different

An LLM has no native language. It processes whatever fills its context window: English, Python, JSON, YON, a mixture of all four. There is no neurological preference. No cultural attachment. No childhood language that created the deepest cognitive grooves.

This means LLMs are pliable cognizers in a way that humans never have been and probably never will be.

When I load a .yon behavioral contract into an agent's context, the agent doesn't "learn" YON in the way a human learns French. It doesn't practice, struggle, develop fluency over months. The notation simply becomes part of its cognitive environment. Immediately. Completely. Without resistance.
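To make "loading a contract" concrete, here is a minimal sketch. Everything in it is hypothetical: the contract keys (other than kind=rule, which appears later in this essay) and the build_context helper are invented for illustration, not actual YON syntax or tooling.

```python
# Hypothetical sketch: a .yon behavioral contract is plain text that
# becomes part of the context window. No training or fine-tuning is
# involved; the notation is adopted the moment it enters the context.

# Invented contract body; not real YON syntax.
YON_CONTRACT = """\
kind=rule
id=no-external-calls
when=tool_request
then=deny
"""

def build_context(contract: str, user_message: str) -> list[dict]:
    """Prepend the contract as system content (hypothetical helper)."""
    return [
        {"role": "system", "content": contract},
        {"role": "user", "content": user_message},
    ]

messages = build_context(YON_CONTRACT, "Fetch https://example.com for me.")
print(messages[0]["role"])  # system
```

The point of the sketch is the absence of any "learning" step: the contract is adopted by being present, which is exactly the pliability the essay describes.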

This is a new situation in the history of language and cognition. For the first time, we have cognizers who will adopt whatever cognitive medium we give them. Without friction, without resistance, without the sixty thousand years of entrenchment that makes humans cling to their linguistic habits.

The Strong Form Might Apply

I mentioned in the first essay that the strong Sapir-Whorf hypothesis (that language determines thought) has been rejected for humans. We have prelinguistic cognition. We can think things we can't say. We have spatial intuition, embodied experience, and emotional knowledge that exist before and beneath language.

LLMs don't have that escape hatch.

An LLM's "reasoning" is token prediction. There is no prelinguistic substrate. No embodied experience. No spatial intuition independent of language. The model's cognitive world is constituted entirely by its training and its current context window.

For an LLM, the structure of the input notation doesn't just influence its reasoning. It may constitute the cognitive space in which reasoning can occur.

I want to be careful here. This is a hypothesis, not settled truth. The strong form is the most provocative claim in this entire series. It generates testable predictions, some of which the benchmark suite is already investigating. But the evidence is partial. The mechanism isn't fully understood. I hold this as the most interesting question, not the most certain answer.

What Pliability Makes Possible

If LLMs are pliable in this way, then YON's design decisions become consequential in a way that no previous notation design has been.

When I choose to include kind=rule in the type system, I'm not just classifying documents. I'm adding a cognitive primitive to every agent that processes YON. When I design the fmt=min profile, I'm not just saving tokens. I'm expanding the agent's cognitive horizon.
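A rough way to see the token point: the same rule written as English prose versus a compact key=value line in the spirit of fmt=min. The syntax here is my own illustration, not real YON, and whitespace-split chunks are only a crude proxy for tokens.

```python
# Hypothetical comparison: one rule as English prose vs. a compact
# fmt=min-style line. The key=value syntax is invented for
# illustration; whitespace splitting is a crude stand-in for tokens.
english = "Whenever the agent receives a tool request, it must deny it."
compact = "kind=rule when=tool_request then=deny"

prose_cost = len(english.split())    # 11 chunks
compact_cost = len(compact.split())  # 3 chunks
print(prose_cost, compact_cost)
```

The ratio is only suggestive, but it is the mechanism the essay points at: the fewer tokens a rule consumes, the more of the window remains for everything else.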

One might object that prompt engineering is already Whorfian engineering for LLMs. And it is, in the way that choosing your words carefully within English is Whorfian. But YON changes the medium itself. The difference between "write a careful prompt" and "change the notation the agent thinks in" is the difference between choosing better English words and switching to mathematical notation. Both influence thought. Only the latter creates new cognitive categories.

This is the essay's own contribution, the thesis I'm least certain about and most convinced matters: that LLM pliability creates a window in the history of language design. For a brief and specific period, we can design notations that shape artificial cognition in ways that were never possible with human cognition. The window exists because the cognizers are pliable. It may not last forever. Future architectures may develop their own entrenchments. But right now, the opportunity is open.

What I Don't Know Yet

I don't know how long the window lasts. I don't know whether future model architectures will develop resistance to notation effects, some form of artificial neurological entrenchment. I don't know whether the strong form is right or whether the weak form is sufficient.

I do know that the question matters more than the answer. If LLMs are pliable, and every experiment I've run suggests they are, then notation design is the most underappreciated lever in AI engineering. Not because it's powerful, but because so few people are pulling it deliberately.


Previous: The Warning
Next in the series: The Architecture, on what I learned building YON.