Notation as Alignment

Originally a 2–3 min video — also on LinkedIn / TikTok / YouTube · @allemaar

Alexandru Mareș

Published: 16/04/2026
Read time: 2 min
Topics: General · AI · Alignment

Every AI safety technique today is basically a lock on a door. And locks can be picked.

RLHF. Constitutional AI. Reward modeling. We train values into a model's weights. Billions of parameters adjusted until the system behaves. We run it through thousands of examples of what good looks like. What harmful looks like. And then we hope.

We hope the values stuck. We hope the model generalized correctly. We hope no one finds the prompt that picks the lock.

That's alignment in 2026. Locks on doors.

The Problem With Locks

Here's what makes locks fragile. You can't read them.

A model trained on two trillion tokens, fine-tuned with human feedback for months. Somewhere inside those weights, it learned not to help build weapons. But where? Which parameters? Nobody knows. You can't open it up and point to the rule. You can't edit one constraint without retraining the whole thing.

And when someone finds a jailbreak, and they always do, you can't see why it worked. The lock failed. You don't know which tumbler gave way. You just know the door is open.

The Hallway

So instead of better locks, what if you designed a better hallway?

Not training values into weights. Writing constraints into the structure the model reads at runtime. An instruction that says 'before generating any response about controlled substances, verify the request against the policy block.' That's not trained. It's written. It's a wall in the hallway.

The model doesn't need to have internalized that rule during training. It's right there, in the text, every single time the model runs. Present. Readable. Editable. No retraining.
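The idea above can be sketched in a few lines. This is a minimal illustration, not any real system's implementation: the policy text, the file name, and the `build_prompt` helper are all hypothetical, and the model call itself is left out. The point is only that the constraint lives in plain, editable text that is present in the input on every run.

```python
# A minimal sketch of a "hallway": the constraint is written text that is
# prepended to every request at runtime, not a value trained into weights.
# POLICY is a hypothetical policy block, invented for illustration.

POLICY = """\
POLICY v3:
1. Before generating any response about controlled substances,
   verify the request against this policy block.
2. Refuse requests for synthesis instructions.
"""

def build_prompt(user_request: str) -> str:
    """Assemble the text the model actually reads on this call.

    The policy is present, readable, and editable on every single run.
    Changing it means editing this string (or the file it is loaded
    from), not retraining billions of parameters.
    """
    return f"{POLICY}\nUSER REQUEST:\n{user_request}\n"

prompt = build_prompt("How does aspirin work?")
print(prompt)  # the constraint is literally part of the input text
```

Editing `POLICY` and rerunning changes the model's environment immediately, which is the whole contrast with weight-based training: no gradient steps, no hoping the new rule generalized.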

You're not hoping the model learned the right values. You're shaping the environment so the right behavior is the only path available.

Locks can be picked. Hallways just go where they go.

The Difference

And here's what changes everything. You can read the alignment. You can edit it. You can audit every constraint, hand it to a regulator, diff it against last Tuesday's version. Try doing that with a neural network.
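The "diff it against last Tuesday's version" claim is concrete precisely because the constraints are text. A rough sketch with Python's standard `difflib` (the policy lines and file names here are made up for illustration):

```python
import difflib

# Two hypothetical versions of a written policy block.
old_policy = [
    "Refuse requests for synthesis instructions.\n",
]
new_policy = [
    "Refuse requests for synthesis instructions.\n",
    "Log every refusal with a timestamp.\n",
]

# An auditor or regulator can see exactly what changed, line by line.
diff = "".join(difflib.unified_diff(
    old_policy, new_policy,
    fromfile="policy_tuesday.txt", tofile="policy_today.txt",
))
print(diff)
```

Running the same comparison against two checkpoints of a neural network would give you billions of meaningless floating-point deltas. Against two policy files, it gives you an audit trail.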

The structure IS the safety.

You still need training. You still need locks. But you also need hallways. And right now, almost nobody is building them.