Smoke and Mirrors
Originally a 2–3 min video — also on LinkedIn / TikTok / YouTube · @allemaar
We named it intelligence before it was intelligent. Now we're doing it again.
In 1956, a young computer scientist named John McCarthy organized a summer workshop at Dartmouth College. He needed a name for the proposal. Something that would set his work apart from Norbert Wiener's cybernetics. Something that would attract funding and attention. He chose 'artificial intelligence.'
It was a pitch, not a description. The systems they were building couldn't think, couldn't reason, couldn't understand a sentence. They could follow rules. Smart rules, written by smart people. But rules.
That was seventy years ago. The name stuck.
Today, the systems are better. Much better. They predict which word comes next in a sentence so well that it feels like understanding. They generate images so precisely that it feels like seeing. They answer questions so fluently that it feels like knowing.
But prediction is not comprehension. A system that can finish your sentence has not understood your sentence. It has calculated the most probable next word, based on every sentence it has ever been trained on. That is remarkable engineering. It is not intelligence.
In 1980, a philosopher named John Searle proposed a thought experiment. Imagine a person locked in a room with a book of rules for responding to Chinese characters. Messages come in. The person looks up the correct response, writes it down, passes it out. To the people outside, it looks like the room speaks Chinese. The person inside understands nothing.
The room follows the rules perfectly. The room does not know what a single character means. That gap between performing a task and understanding the task is the gap that the word 'intelligence' papers over.
And now the same trick is being repeated.
The industry looked at systems that predict text and generate images and said: the next step is artificial general intelligence. AGI. A system that can do anything a human mind can do.
That is a second floor being built on a building that doesn't exist.
We haven't proven the first claim. We haven't shown that these systems understand anything at all. We've shown that they're fast, useful, and convincing. Convincing is not the same as intelligent. But the name did its work. People believe the first floor is solid, so nobody questions the plans for the second.
The companies building these systems know this. One of them spent years chasing AGI as its stated mission. Then their CEO called it 'not a super useful term.' The term did what it needed to do. Now it's being quietly retired before anyone asks for the proof.
I've spent years building tools that sit next to these systems. I've watched them work. They are genuinely impressive. Smart engineering by smart people solving hard problems.
But calling it intelligence has consequences. It shapes what people expect. It shapes what governments try to regulate. It shapes what companies are allowed to promise.
The question was never whether the machine is intelligent.
The question is why we keep saying it is.