For weeks I had been scrolling LinkedIn and TikTok and noticing the same thing: three different people in a row would post something with the same opening, the same middle, the same close. Different person, same post. After a while it was four in a row, then five. The shape was identifiable before I had read the first line.
For a while I thought it was me. That I had been online too long, or had developed some kind of allergy to repetition that other people did not share. Then a paper landed three weeks ago that put numbers on the feeling. This piece is the synthesis the paper enables but does not itself make.
The pattern
Here is the texture I am talking about. Posts on different topics that share the same opening move (a one-line dramatic statement, then a turn). Marketing emails whose second sentence I can predict from the first. Product descriptions across competing brands that I cannot tell apart without checking the URL. FAQ answers that begin with a soft restatement of the question before the answer arrives. Comments under a video that all start with the same kind of "great point" framing before any actual content. Three different people delivering the same observation in different fonts.
It is not that any one of these is bad. It is that, taken together, the texture of online writing has narrowed. The phrase that would not leave my head was: copies of copies that lost their spark, their humanity. There is no apparent human thought behind any individual one of them, because there does not need to be. The shape carries itself.
So is AI use actually narrowing online writing? The answer the paper gives is yes. The effect is small. It is also, in the authors' words, "a small but statistically significant homogenization effect associated with AI use, robust across sensitivity analyses and not explained by publication bias." For a meta-analytic finding from nineteen studies, "small but significant" is exactly what one would expect if the underlying phenomenon is real but variable in magnitude.
The 2026 meta-analysis is the literature catching up to a viral 2024 take. It says the homogenization half of that take holds across studies, not just in the original 300-person experiment. That is the part everyone needed to know. But the meta-analysis added something the viral take had lost.
The qualifier
The flattening is task-sensitive. The de Rooij and Biskjaer paper found that the homogenization concentrates in semantically constrained ideation, the kind of writing where the brief tells you most of what to write before you start. Product descriptions. Marketing copy. FAQ entries. Alt-text for routine images. Captions for content that follows a known shape. Where the writing task starts from a tight specification and the model fills in the texture, the texture flattens. Where the task is open-ended divergent thinking, where the writer is generating possibilities rather than executing a brief, the effect barely shows up.
This is the qualifier the viral 2024 take had lost on its way around the internet. The story everyone repeated was "AI flattens creativity." The story the data actually tells is closer to "AI flattens the part of writing that was already constrained." Which is a different claim, with a different practical implication.
The synthesis
Here is the move neither paper makes out loud, and the move this piece exists to put down on the record.
The constrained-task pile is most internet text. This is where the reframe happens. When the de Rooij paper says the effect is "small," it means small per individual use. But almost everything anyone reads on the internet on a given day is in the constrained-task category. Every product page. Every help article. Every onboarding email. Every push notification. Every blurb under a thumbnail. Every alt-text. Every meta description. Every comment that opens with a soft restatement before getting to its point. Every caption that frames a video.
Most of what we read is the kind of writing the paper measures. The pile is enormous, and the pile is mostly constrained ideation.
Multiply small effect by giant pile and you get a system-level shift in texture. By a small amount per use. Billions of times. That is what you have been feeling.
What this means in practice
This is not a brief against AI. What people call AI, in YounndAI vocabulary, are elastic automators. They are excellent at constrained tasks. That is exactly why they win there. The success of the tool is the source of the flattening. Not a failure mode. Not a bug to be patched. The use case.
So the practical move is not "use less AI." That answers the wrong question, and it leaves the texture problem unsolved anyway, since any one writer using less AI makes no aggregate dent. The practical move is knowing which writing the tool makes yours, and which writing it makes the pile's. A few concrete decisions a working writer can make this week:
If you run a marketing copy desk, hand the FAQ page to the model. Hand the product description boilerplate to the model. Do not hand the campaign idea to the model. The campaign is the divergent step; the FAQ is the convergent one. The model is on the wrong side of that line for the work that should sound like you.
If you write technical documentation, hand the API reference to the model. Hand the changelog to the model. Do not hand the philosophy document to the model. The reference is constrained; the philosophy is the project's voice. The first compresses cleanly; the second is what readers come for.
If you produce content of any kind, hand the meta description and the social-card excerpt to the model. Do not hand the angle to the model. The excerpt is convergent on a known shape. The angle is the divergent move that distinguishes one piece from the next, and it is what gets the piece read.
If you reply to email professionally all day, hand the meeting summary and the confirmation reply to the model. Do not hand the difficult-conversation message to the model. The first is convergent; the second is the work that is uniquely yours, because the negotiation is uniquely yours.
The pattern: convergent tasks where the brief carries most of the work, hand them over without guilt. Divergent tasks where the writer's frame is the value, keep your hands on them. That is where the spark still lives, in the de Rooij/Biskjaer sense and in any other sense worth using.
A quieter close
The paper is a preprint. Peer review may tighten or loosen the magnitude, may sharpen or break the moderator finding on task-sensitivity. The qualitative shape, that the homogenization is real but small per use and concentrated in constrained tasks, is the part that is hard to dispute and worth holding on to.
The hardest part about this finding is that it does not give you permission to either embrace AI uncritically or refuse it. It gives you a discrimination problem. A list of decisions to make about which writing is yours and which writing is the pile's. That is a less satisfying conclusion than either "AI is a miracle" or "AI is a catastrophe," and it is closer to true.
I thought it was me. It is not me. It is the math.