
Copies of Copies

Alexandru Mareș · Published 07/05/2026 · Read time 7 min

[Figure: a grid of pale rectangular tiles fading into one another, each one a slightly weaker copy than the last, suggesting a textural narrowing across iterations.]
References
  1. Alwin de Rooij and Michael Mose Biskjaer (2026). "Does generative AI make us think alike? A systematic review and meta-analysis of homogenization effects in human–AI co-creation." PsyArXiv preprint.
  2. Anil R. Doshi and Oliver P. Hauser (2024). "Generative AI enhances individual creativity but reduces the collective diversity of novel content." Science Advances.
  3. Alexandru Mareș (2026). "Elastic Automators: A Diagnostic Vocabulary for Language-Model-Driven Workflow Systems."


    For weeks I had been scrolling LinkedIn and TikTok and noticing the same thing: three different people in a row would post something with the same opening, the same middle, the same close. Different person, same post. After a while it was four in a row, then five. The shape was identifiable before I had read the first line.

    For a while I thought it was me. That I had been online too long, or had developed some kind of allergy to repetition that other people did not share. Then a paper landed three weeks ago that put numbers on the feeling. This piece is the synthesis the paper enables but does not itself make.

    The pattern, named

    Here is the texture I am talking about. Posts on different topics that share the same opening move (a one-line dramatic statement, then a turn). Marketing emails whose second sentence I can predict from the first. Product descriptions across competing brands that I cannot tell apart without checking the URL. FAQ answers that begin with a soft restatement of the question before the answer arrives. Comments under a video that all start with the same kind of "great point" framing before any actual content. Three different people delivering the same observation in different fonts.

    It is not that any one of these is bad. It is that, taken together, the texture of online writing has narrowed. The phrase that would not leave my head was: copies of copies that lost their spark, their humanity. There is no apparent human thought behind any individual one of them, because there does not need to be. The shape carries itself.

    The evidence

    Three weeks ago, on April 14, 2026, Alwin de Rooij at Tilburg University and Michael Mose Biskjaer at Aarhus University released a preprint on PsyArXiv titled "Does generative AI make us think alike? A systematic review and meta-analysis of homogenization effects in human–AI co-creation." They pulled together nineteen empirical studies and sixty-one effect sizes on what happens when humans use generative AI to write or generate ideas. They asked one question across all of them: when a tool is in the loop, does the output get more similar across people?

The answer is yes. The effect is small. It is also, in the authors' words, "a small but statistically significant homogenization effect associated with AI use, robust across sensitivity analyses and not explained by publication bias." For a meta-analytic finding from nineteen studies, "small but significant" is exactly what one would expect if the underlying phenomenon is real but variable in magnitude.
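To make "small but significant" concrete, here is a minimal sketch of the kind of pooling a meta-analysis does: inverse-variance weighting with a DerSimonian-Laird random-effects step. The effect sizes and variances below are invented for illustration; they are not the paper's data, and the paper's actual estimation procedure may differ.

```python
import math

# Inverse-variance pooling with a DerSimonian-Laird random-effects step.
# The numbers are INVENTED for illustration; they are not the paper's data.
effects = [0.12, 0.05, 0.30, -0.04, 0.18, 0.09, 0.22, 0.01]   # toy effect sizes
variances = [0.010, 0.012, 0.020, 0.015, 0.010, 0.012, 0.025, 0.015]

# Fixed-effect pooling: weight each study by 1 / variance.
w = [1.0 / v for v in variances]
fixed = sum(wi * g for wi, g in zip(w, effects)) / sum(w)

# Between-study heterogeneity (tau^2), DerSimonian-Laird estimator.
q = sum(wi * (g - fixed) ** 2 for wi, g in zip(w, effects))
c = sum(w) - sum(wi * wi for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(effects) - 1)) / c)

# Random-effects pooling: widen each study's variance by tau^2.
w_re = [1.0 / (v + tau2) for v in variances]
pooled = sum(wi * g for wi, g in zip(w_re, effects)) / sum(w_re)
se = math.sqrt(1.0 / sum(w_re))

print(f"pooled effect = {pooled:.3f}")
print(f"95% CI = [{pooled - 1.96 * se:.3f}, {pooled + 1.96 * se:.3f}]")
# A pooled estimate like this one sits close to zero and still excludes
# it: small, but statistically significant.
```

The point of the sketch is only the shape of the logic: many noisy studies, each individually ambiguous, can pin down a small effect with a confidence interval that excludes zero.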

This is not the first paper to land on this. Anil Doshi and Oliver Hauser, in "Generative AI enhances individual creativity but reduces the collective diversity of novel content," published in Science Advances in 2024, ran a single experiment with roughly three hundred participants writing short stories with and without AI assistance. They found that individual creativity rose with AI help, while the collective diversity of stories produced by the AI-assisted group fell. Each writer's gain was the population's loss. That single finding traveled fast through 2024 and 2025, becoming a kind of viral shorthand: AI makes you better and everyone else worse at the same time.

    The 2026 meta-analysis is the literature catching up to that viral take. It says the homogenization half holds across studies, not just in the original 300-person experiment. That is the part everyone needed to know. But the meta-analysis added something the viral take had lost.

    The qualifier

    The flattening is task-sensitive. The de Rooij and Biskjaer paper found that the homogenization concentrates in semantically constrained ideation, the kind of writing where the brief tells you most of what to write before you start. Product descriptions. Marketing copy. FAQ entries. Alt-text for routine images. Captions for content that follows a known shape. Where the writing task starts from a tight specification and the model fills in the texture, the texture flattens. Where the task is open-ended divergent thinking, where the writer is generating possibilities rather than executing a brief, the effect barely shows up.

    This is the qualifier the viral 2024 take had lost on its way around the internet. The story everyone repeated was "AI flattens creativity." The story the data actually tells is closer to "AI flattens the part of writing that was already constrained." Which is a different claim, with a different practical implication.

    The synthesis

    Here is the move neither paper makes out loud, and the move this piece exists to put down on the record.

    The constrained-task pile is most internet text. This is where the reframe happens. When the de Rooij paper says the effect is "small," it means small per individual use. But almost everything anyone reads on the internet on a given day is in the constrained-task category. Every product page. Every help article. Every onboarding email. Every push notification. Every blurb under a thumbnail. Every alt-text. Every meta description. Every comment that opens with a soft restatement before getting to its point. Every caption that frames a video.

    Most of what we read is the kind of writing the paper measures. The pile is enormous, and the pile is mostly constrained ideation.

    A small per-use homogenization effect, multiplied across the entire pile of constrained-task internet writing, aggregates into the felt textural narrowing many readers have been registering since 2024. This is the synthesis the de Rooij/Biskjaer 2026 meta-analysis ratifies but does not itself state.

Multiply a small effect by a giant pile and you get a system-level shift in texture. By a small amount per use. Billions of times. That is what you have been feeling.
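One way to see the arithmetic is a toy simulation, which is mine, not either paper's. Treat each text as a point in a two-dimensional "style space," let AI assistance nudge every draft a small step toward a shared attractor, and measure the population's diversity as mean pairwise distance.

```python
import math
import random

# Toy model of "small effect x giant pile" -- illustrative only, not the
# method of either paper. Each text is a point in 2-D style space; AI
# assistance pulls every draft a small step toward a shared attractor
# at the origin (the model's house style).
random.seed(0)

PULL = 0.05        # small per-use homogenization step (5% toward the attractor)
N = 2000           # size of the pile

def draft():
    """An unassisted draft: a random point in style space."""
    return (random.gauss(0.0, 1.0), random.gauss(0.0, 1.0))

def assist(p, pull=PULL):
    """An AI-assisted revision: move `pull` of the way toward the origin."""
    return ((1.0 - pull) * p[0], (1.0 - pull) * p[1])

def diversity(points, samples=20_000):
    """Monte Carlo estimate of mean pairwise distance."""
    total = 0.0
    for _ in range(samples):
        a, b = random.sample(points, 2)
        total += math.dist(a, b)
    return total / samples

pile = [draft() for _ in range(N)]
assisted_pile = [assist(p) for p in pile]

print(f"diversity before: {diversity(pile):.3f}")
print(f"diversity after:  {diversity(assisted_pile):.3f}")
# No single draft moves much, but the whole pile's texture narrows by the
# same small factor, everywhere, at once.
```

The contraction here is uniform, so the per-use step and the population-level narrowing are the same five percent; the point is that a shift too small to notice in any one draft gets applied to nearly everything you scroll past.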

    What this means in practice

    This is not a brief against AI. What people call AI, in YounndAI vocabulary, are elastic automators. They are excellent at constrained tasks. That is exactly why they win there. The success of the tool is the source of the flattening. Not a failure mode. Not a bug to be patched. The use case.

So the practical move is not "use less AI." That is the wrong question, and it leaves the texture problem unsolved anyway, since one writer using less AI makes no aggregate dent. The practical move is knowing which writing the tool makes yours, and which writing it makes the pile's. A few concrete decisions a working writer can make this week:

    If you run a marketing copy desk, hand the FAQ page to the model. Hand the product description boilerplate to the model. Do not hand the campaign idea to the model. The campaign is the divergent step; the FAQ is the convergent one. The model is on the wrong side of that line for the work that should sound like you.

    If you write technical documentation, hand the API reference to the model. Hand the changelog to the model. Do not hand the philosophy document to the model. The reference is constrained; the philosophy is the project's voice. The first compresses cleanly; the second is what readers come for.

    If you produce content of any kind, hand the meta description and the social-card excerpt to the model. Do not hand the angle to the model. The excerpt is convergent on a known shape. The angle is the divergent move that distinguishes one piece from the next, and it is what gets the piece read.

    If you reply to email professionally all day, hand the meeting summary and the confirmation reply to the model. Do not hand the difficult-conversation message to the model. The first is convergent; the second is the work that is uniquely yours, because the negotiation is uniquely yours.

    The pattern: convergent tasks where the brief carries most of the work, hand them over without guilt. Divergent tasks where the writer's frame is the value, keep your hands on them. That is where the spark still lives, in the de Rooij/Biskjaer sense and in any other sense worth using.
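If it helps to make the triage mechanical, here is the rule as a minimal routing table. The task labels, and the line itself, are mine, drawn from the examples above; any real desk would draw the convergent/divergent line for its own work.

```python
# A toy routing table for the convergent/divergent triage above. The task
# labels and the line itself are illustrative; draw your own for your desk.
DELEGATE = "hand to the model"
KEEP = "keep your hands on it"

TRIAGE = {
    # Convergent: the brief carries most of the work.
    "faq page": DELEGATE,
    "product description boilerplate": DELEGATE,
    "api reference": DELEGATE,
    "changelog": DELEGATE,
    "meta description": DELEGATE,
    "meeting summary": DELEGATE,
    "confirmation reply": DELEGATE,
    # Divergent: the writer's frame is the value.
    "campaign idea": KEEP,
    "philosophy document": KEEP,
    "piece angle": KEEP,
    "difficult-conversation message": KEEP,
}

def route(task: str) -> str:
    """Route a task; unknown tasks default to KEEP, the safe side of the line."""
    return TRIAGE.get(task.lower(), KEEP)

print(route("Changelog"))       # hand to the model
print(route("Campaign idea"))   # keep your hands on it
print(route("Eulogy"))          # keep your hands on it (default)
```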

    A quieter close

    The paper is a preprint. Peer review may tighten or loosen the magnitude, may sharpen or break the moderator finding on task-sensitivity. The qualitative shape, that the homogenization is real but small per use and concentrated in constrained tasks, is the part that is hard to dispute and worth holding on to.

    The hardest part about this finding is that it does not give you permission to either embrace AI uncritically or refuse it. It gives you a discrimination problem. A list of decisions to make about which writing is yours and which writing is the pile's. That is a less satisfying conclusion than either "AI is a miracle" or "AI is a catastrophe," and it is closer to true.

    I thought it was me. It is not me. It is the math.