
The Strong Form

Originally a 2–3 min video — also on LinkedIn / TikTok / YouTube · @allemaar

Alexandru Mareș

Published 13/04/2026 · Read time: 2 min · Topics: General, AI, YON

Imagine you can hear shapes.

A bat can. Echolocation. Every surface in a room has a sound. A moth three meters away has a texture your ears can feel. In 1974 a philosopher named Thomas Nagel wrote a famous paper about this. He asked: what is it like to be a bat? And his point wasn't that we can't know. His point was that there IS an answer. The bat has a felt, first-person world. Alien to us. But real.

The Floor Drops

Now ask the same question about an AI. And the floor drops. There's no felt experience we can point to. No hearing. No seeing. Nothing humming underneath the words.

The bat has a world you can't reach. The AI has no world at all.

Sit with that. Because it changes something.

The Dangerous Idea

In the 1930s, two linguists, Edward Sapir and Benjamin Lee Whorf, had a dangerous idea. They said language shapes thought. There's a weak version: your language influences what you find easy to think about. That one's broadly accepted. And a strong version: your language determines what you can think at all. The strong version? Rejected. Because humans have a whole layer of thinking that happens before language. You can feel what you can't name. You can picture what you can't describe.

But here's the thing. We were talking about humans. An AI doesn't have a layer before language. There IS no 'before language' for an AI. Which means — maybe — strong Sapir-Whorf is wrong for humans... but right for AI.

The Curtain Is The Show

For an AI, notation isn't shaping thought. It IS thought. The format isn't the wrapper around the mind. It IS the mind. Change the shape of the input — you change what the AI can think. Not grooves in a landscape. The whole landscape. The curtain is the show.

The Evidence

I know this sounds like philosophy. It's not. I tested it. 600 benchmarks across six pillars. The sixth one is literally named Sapir-Whorf: 369 tests measuring how notation shapes what an AI can think. Budget models gain up to 40 points of accuracy just from the structure of the input. Zero hallucinated fields. 6,000 test runs. Zero failures.

That's what I built. I call it YON — a notation shaped like thought, for a system that only knows shape. Getting ready for release. JSON is your API format. YON is the Cognitive Architecture your AI thinks in.
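YON's actual syntax hasn't been published, so as a purely hypothetical illustration of the idea, here is the same record rendered two ways: once as JSON, once in a made-up indentation-based notation. The function name and the sketch format are assumptions for illustration only, not YON itself; the point is simply that identical data can arrive in very different shapes.

```python
import json

# A small record, first in its JSON shape (the API/interchange format).
record = {"animal": "bat", "sense": "echolocation", "range_m": 3}
as_json = json.dumps(record)

# The same record in a hypothetical indentation-based notation.
# NOT real YON (its syntax is unreleased); this only illustrates
# "same data, different shape of input".
def to_indented(d, depth=0):
    lines = []
    for key, value in d.items():
        if isinstance(value, dict):
            lines.append("  " * depth + key + ":")
            lines.extend(to_indented(value, depth + 1))
        else:
            lines.append("  " * depth + f"{key}: {value}")
    return lines

as_indented = "\n".join(to_indented(record))
print(as_json)      # braces, quotes, commas
print(as_indented)  # bare keys, indentation, one fact per line
```

Both strings carry the same facts; what differs is the shape the model has to think through.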

Word By Word

So the next time you write a prompt. A schema. A system instruction — remember what you're actually doing. You're not giving instructions to a mind. You're building the mind. Word. By. Word.