One Line, One Thought
Originally a 2–3 min video — also on LinkedIn / TikTok / YouTube · @allemaar
Every line should be a complete thought. One rule. It changed computing fifty years ago. It's about to change it again.
Search a text file. You get answers. Complete lines. Each one meaningful on its own. You can pipe them, filter them, count them, rearrange them. Nothing breaks because nothing depends on anything else.
Search a JSON file. You get shrapnel. An opening brace. A key without a value. A value without a key. Fragments that mean nothing unless you reconstruct the whole document first.
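
Here's that contrast as a minimal Python sketch. The file names are hypothetical; the point is what each search gives you back.

    import json

    # Line-oriented: every match is a complete record on its own.
    with open("server.log") as f:          # hypothetical file
        errors = [line.rstrip("\n") for line in f if "ERROR" in line]
    print(len(errors))                     # count them
    print(sorted(errors)[:3])              # rearrange them; each line still means something

    # Nested JSON: no fragment means anything until the whole
    # document is reconstructed. One bad brace and you get nothing.
    with open("config.json") as f:         # hypothetical file
        doc = json.load(f)                 # all-or-nothing parse

The log search degrades gracefully. The JSON parse is all or nothing.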
In the 1970s, the Unix designers built an entire operating system on that one rule. Make each tool do one thing. Make each line of output a complete record. Pipe them together.
It wasn't a style choice. It was a discovery about physics. When data is line-oriented, tools don't need to understand the whole file. They just need to understand one line. Complexity stays flat. You can compose a thousand tools and the system doesn't get harder to reason about.
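
A sketch of that property in Python: two stages that each understand one line at a time, composed like a pipe. The stage names are borrowed from the Unix tools they imitate.

    import itertools
    import sys

    # Each stage reads lines and yields lines; none needs the whole file.
    def grep(lines, needle):
        return (line for line in lines if needle in line)

    def head(lines, n):
        return itertools.islice(lines, n)

    # Composed like a shell pipe: ... | grep ERROR | head -10
    for line in head(grep(sys.stdin, "ERROR"), 10):
        sys.stdout.write(line)

Adding a third stage, or a thirtieth, changes nothing about the first two. That's the flat complexity.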
That insight built the internet. Built version control. Built every pipeline a developer uses today. Fifty years of proof.
Now here's the part nobody expected. Large language models hit the exact same wall. From the opposite direction.
An LLM generates one token at a time. Forward only. No backspace. Feed it nested config, five layers of brackets and braces, and it has to track every unclosed bracket across the whole generation while producing tokens it can never revise. It hallucinates. Not because it's stupid. Because the format fights the physics of how it thinks.
Feed it one-line declarations. One thought per line. And it reasons. Clearly. Accurately. The constraint that made Unix tools composable? It's the same one that makes AI generation reliable.
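
Here's a sketch of what that flattening looks like, assuming a simple dotted-path convention. The config contents are invented.

    def flatten(node, prefix=""):
        # One complete declaration per line: full path, one value.
        if isinstance(node, dict):
            for key, value in node.items():
                yield from flatten(value, f"{prefix}{key}.")
        else:
            yield f"{prefix[:-1]} = {node}"

    config = {"server": {"tls": {"cert": "/etc/ssl/cert.pem", "port": 443}}}
    print("\n".join(flatten(config)))
    # server.tls.cert = /etc/ssl/cert.pem
    # server.tls.port = 443

Each line names its full path, so whoever produces or consumes it, model or grep, tracks no surrounding state.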
Two different eras. Two different problems. The same principle showed up both times. That's not nostalgia. That's convergent discovery.
What's easy for search is easy for GPT. What's composable for a pipeline is composable for an agent.
One line. One thought.
Good structure isn't a style choice. It's physics.