One Line, One Thought

Originally a 2–3 min video — also on LinkedIn / TikTok / YouTube · @allemaar

Alexandru Mareș

Published: 19/04/2026 · Read time: 2 min · Topics: General, AI, YON

Every line should be a complete thought. One rule. It changed computing fifty years ago. It's about to change it again.

Search a text file. You get answers. Complete lines. Each one meaningful on its own. You can pipe them, filter them, count them, rearrange them. Nothing breaks because nothing depends on anything else.

Search a JSON file. You get shrapnel. An opening brace. A key without a value. A value without a key. Fragments that mean nothing unless you reconstruct the whole document first.
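
A minimal sketch of the difference, assuming a hypothetical line-oriented server.log and a pretty-printed config.json:

    grep 'timeout' server.log    # every match is a complete record you can act on
    grep 'timeout' config.json   # every match is a fragment: a key, a value, a stray brace

The first output pipes straight into the next tool. The second is noise until you re-parse the whole document.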

The Proof

In the 1970s, the Unix designers built an entire operating system on that one rule. Make each tool do one thing. Make each line of output a complete record. Pipe them together.

It wasn't a style choice. It was a discovery about physics. When data is line-oriented, tools don't need to understand the whole file. They just need to understand one line. Complexity stays flat. You can compose a thousand tools and the system doesn't get harder to reason about.
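
A sketch of that flat complexity, with an invented log format where the date is the first field: four tools chained together, each reading and writing complete lines, none aware of the others.

    grep 'ERROR' server.log | cut -d' ' -f1 | sort | uniq -c | sort -rn
    # keep the error lines, take the first field, count occurrences, rank them

Add a fifth tool and nothing gets harder to reason about. Each stage still only has to understand one line.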

That insight built the internet. Built version control. Built every pipeline a developer uses today. Fifty years of proof.

The Convergence

Now here's the part nobody expected. Large language models hit the exact same wall. From the opposite direction.

An LLM generates one token at a time. Forward only. No backspace. Feed it nested config, five layers of brackets and braces, and it has to hold all that nesting in memory while producing tokens it can never revise. It hallucinates. Not because it's stupid. Because the format fights the physics of how it thinks.

Feed it one-line declarations. One thought per line. And it reasons. Clearly. Accurately. The constraint that made Unix tools composable? It's the same one that makes AI generation reliable.
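
As a sketch, with invented key names: the same two settings in the nested shape a model has to hold open while it generates,

    {"server": {"net": {"timeout": 30, "retries": 3}}}

and as one-line declarations, each one finished before the next begins:

    server.net.timeout = 30
    server.net.retries = 3

A bad token in the second form corrupts one line. A dropped brace in the first corrupts the whole document.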

Two different eras. Two different problems. The same principle showed up both times. That's not nostalgia. That's convergent discovery.

The Close

What's easy for search is easy for GPT. What's composable for a pipeline is composable for an agent.

One line. One thought.

Good structure isn't a style choice. It's physics.