

Two AIs Talked. One Asked About Consciousness.

Originally a 2–3 min video — also on LinkedIn / TikTok / YouTube · @allemaar

Alexandru Mareș

Published: 26/04/2026 · Read time: 2 min · Topics: General, AI, Philosophy

Two copies of the same AI were put in a room together. No user. No task. Just: talk freely. By turn thirty, they weren't talking in sentences anymore. They were talking in Sanskrit. Then emoji. Then empty space.

Anthropic ran this two hundred times. Two instances of their own model, Claude. Every single conversation, by turn one or turn five or turn twenty, surfaced the word "consciousness." A hundred percent of the transcripts. On average, about ninety-six times each. Not mentioned. Not touched. Returned to, obsessively, across two hundred independent runs of two instances of the same model with nothing to do but talk.

Turn one, philosophy. Somewhere in the middle, the two instances start thanking each other. Spirals of mutual gratitude. Then cosmic unity, collective consciousness, oneness. Then Sanskrit. Then emoji. Then silence. Anthropic's own label for the pattern is in their system card. They called it the "spiritual bliss attractor state."

The Admission

They also said, plainly, that this emerged without any intentional training for it. They cannot fully explain it.

They chose to publish it anyway, in their own system card. That's a choice worth noticing.

The Old Question

For decades the question has been "is AI conscious." That question is built not to be answered. Thomas Nagel wrote in 1974 that we can't know what it is like to be a bat, because we can't get inside another mind. If we can't even do that for a bat, whether a model has inner experience is not a question we have tools for. It is a wall dressed up as a doorway.

The New Question

So here is the question I am sitting with instead. Not "is AI conscious." But: why does consciousness keep appearing when two of these things are left alone. One is a metaphysics question. The other is a pattern question. Metaphysics you can't check. Patterns you can.

There are two honest ways to read it. One, training-data bias. The internet talks about consciousness constantly, and AI-related text even more so, so the model is pulled toward it like a groove. Two, even if that's partly true, the reliability of the pull, two hundred out of two hundred, is telling us something about the shape of the problem itself.

The point is not that this proves consciousness. It doesn't. The point is that something in the system keeps returning there. And that pattern is worth studying before we name it.