
Alexandru Mareș (@allemaar)

The First Law That Doesn't Know What AI Is

Figure: An off-white paper plate with a low silver-glass bracket, six small artefacts inside it (a spell-checked text card, a learning thermostat dial, a folded navigation map, a spam-filter sieve, a frontier-model document, and a steering wheel), and a single faint amber line tracing the bracket's open seam.
References
  1. European Union (2024). Regulation (EU) 2024/1689 (EU AI Act)
  2. European Union (2024). EU AI Act, Article 3(1) — Definition of 'AI system'
  3. Walter Ong (1982). Orality and Literacy
  4. Jack Goody (1977). The Domestication of the Savage Mind
  5. Alexandru Mareș (2026). Elastic Automators: A Diagnostic Vocabulary for Language-Model-Driven Workflow Systems
  6. Alexandru Mareș (2026). Notation as Alignment

Published: 06/05/2026
Read time: 7 min
Topics: General, AI Regulation, Ontology

    On August 2, this year, twenty-seven countries start enforcing a law against AI systems. That date is three months from today. The law itself was passed in July 2024, two summers ago, and has had twenty-two months to settle. The text is finished. The courts are warming up. The inspectors are getting their badges. Only one thing is still missing. A working definition of what they're enforcing it on.

    A note on the date for precision-minded readers. Parts of the law are already in force. Prohibited-practice rules came online in February 2025; general-purpose AI provider obligations in August 2025. What changes on August 2, 2026 is the bulk of the obligations together with the Commission's enforcement powers and the penalty regime. So "starts enforcing" is shorthand for "begins full enforcement"; that is what the calendar feels like to a company sitting outside the prohibited-practice carve-outs, which is most companies.

    That's the part that keeps me coming back. I've been writing about format and meaning for a couple of years, mostly inside small corners of the AI conversation. Now there's a continent-scale enforcement deadline pinned to a single noun, and the noun is still being argued over by the people who use it for a living. The law doesn't have the luxury of waiting for them to settle it.

    Article 3, paragraph 1

    Article 3, paragraph 1, is where the law tries to define the noun. Read the way a court will read it, the definition runs, roughly: a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs.

    Read that again, slowly.

    A spell-checker fits that. A thermostat with a learning mode fits that. A navigation app rerouting around traffic fits that. A spam filter fits that. The recommendation engine that suggests the next email reply fits that. So does a frontier model writing legal briefs. So does the system that drives a car.

    The law writes one bracket around all of them. Same bracket. Same word.
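The over-breadth is easy to make concrete. Here is a toy sketch of my own, not the statute's actual legal test: Article 3(1)'s definition reduced to three predicates, with adaptiveness left out because the text says a system "may" exhibit it, so it cannot be a gating condition. All names below are hypothetical illustrations.

```python
# Toy model (my illustration, not a legal test) of the Article 3(1) bracket.
from dataclasses import dataclass


@dataclass
class System:
    name: str
    machine_based: bool   # "a machine-based system"
    some_autonomy: bool   # "varying levels of autonomy" — any level counts
    infers_outputs: bool  # "infers from the input it receives how to generate outputs"


def inside_bracket(s: System) -> bool:
    # Adaptiveness "may" be exhibited after deployment, so it is not
    # required; only these three conjuncts gate membership in this sketch.
    return s.machine_based and s.some_autonomy and s.infers_outputs


artefacts = [
    System("spell-checker", True, True, True),
    System("learning thermostat", True, True, True),
    System("navigation app", True, True, True),
    System("spam filter", True, True, True),
    System("frontier model", True, True, True),
    System("self-driving system", True, True, True),
]

print([s.name for s in artefacts if inside_bracket(s)])  # all six pass
```

Under this reading, every artefact on the paper plate satisfies the conjunction, which is the point: the predicates are broad enough that the 1990s autocorrect engine and the frontier model come out identical.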

It is worth sitting with that for a moment. The bracket has a single boundary line, and inside it is a heterogeneous collection of artefacts whose only common feature is the legal language describing them. A 1990s autocorrect engine and a 2025 multimodal foundation model are, on the page, the same kind of thing. The law's category is not the field's category, because the field has not had to commit to a single category, and the law has.

    A stable noun the field doesn't have

    The people who build these systems don't agree on what an AI system is. The people who study them don't agree. The people who fund them don't agree. Put three engineers in a room and you can walk out with four definitions, each defensible inside its sub-discipline and contradictory outside of it. A computer scientist trained in symbolic AI carries one definition. A machine-learning researcher trained on probabilistic models carries another. A robotics engineer working on embodied control carries a third. None of them is wrong inside their own field. They are simply not talking about the same object.

    The law does not have that option.

    A law needs a stable noun. It needs something a court can point to.

    So it writes one, in legal prose, and the moment it does, the disagreement that existed in the field becomes invisible inside the bracket. A judge cannot suspend a hearing while the discipline finishes its argument. The bracket is the operative reality.

    Format does work on what it holds

    This is the part I keep coming back to. The format you write something in decides what the something can be.

    Walter Ong made that point in 1982. Writing, he argued, didn't just record speech; it restructured the consciousness of the people who used it. Spoken thought is fluid, embodied, situated. Written thought is fixable, examinable, citable. The shift from oral to literate culture wasn't a change in subject matter; it was a change in the kind of thinking that became possible. The notation does work on the mind.

    Jack Goody made the same point a little earlier, in 1977. He looked at the list, the table, the formal definition, and showed that they aren't neutral containers. The list creates categories that the spoken sentence cannot hold. The table forces a row-and-column logic onto material that, in speech, would refuse to sit still. Once you write something into a list, you are already doing classification work that the list itself imposes.

    A statute is a very specific format. It needs a defined term. It needs an enforceable boundary. It needs a verb a court can rule on. The format of legal prose has its own grammar of inclusion and exclusion: a thing is either inside the defined term or it isn't. There is no provisional category, no "let's wait and see," no footnote that says this definition is unstable and the field is still working on it.

What the world calls AI, and what I have called elastic automation in my own vocabulary, is a moving category. It hasn't held still long enough for any field to define it cleanly. Today's frontier model is barely the same kind of thing as last year's; the boundary between "language model" and "agent" was rewritten across 2024–2025 by the cohort of systems that started acting in browsers and CRMs rather than just answering questions. The law is asking this category to hold still anyway.

    That isn't a complaint about the law. It's a comment about the format.

    Notation as Alignment, scaled to a continent

    This connects directly to a principle I've been calling Notation as Alignment. Whatever you write something down in writes back. The notation does work on the thing being notated. In small contexts, this looks like a tooling choice: how you serialize a request to a model shapes what the model can answer well. At larger scales, it looks like a culture choice: how a discipline writes its objects determines what counts as a research question inside it. Now, on August 2, it looks like a continent's worth of enforcement.

    Legal prose is a kind of notation. What it can hold determines what gets enforced. In a courtroom, the bracket of Article 3(1) becomes the operative reality, regardless of how messy the underlying category is in any laboratory. A research paper can hedge. A statute cannot. The hedge is what gets stripped out when the format changes.
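The stripping of the hedge can itself be sketched. In this small illustration of mine (the thresholds and judgments are invented, not from the statute or any real assessment), the field's graded confidence that an artefact is an "AI system" gets written into a statute-like record whose membership field is a plain boolean, and the gradations do not survive the conversion.

```python
# Sketch (my illustration): what survives when a hedged judgment is
# written into a notation that only holds yes/no membership.

def write_into_statute(confidence: float) -> bool:
    """A statute-like coercion: there is no provisional category, so in
    this sketch any nonzero chance of membership lands inside the bracket."""
    return confidence > 0.0


# Invented field judgments: graded, arguable, still being debated.
field_judgments = {
    "spell-checker": 0.2,        # the field would mostly say no
    "learning thermostat": 0.3,  # contested
    "frontier model": 1.0,       # uncontroversial
}

bracket = {name: write_into_statute(c) for name, c in field_judgments.items()}
print(bracket)  # every value is True; the gradations are gone
```

The interesting failure is not that the boolean is wrong; it is that the format has no slot for "arguably" or "the discipline hasn't settled this," so the hedge has nowhere to go.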

    What this means in practice

    The text of the law is final. The dictionary the text relies on is still being written. The dictionary will be written, mostly, by the first cases that get litigated. By inspectors knocking on doors. By companies asking their lawyers whether the spell-checker counts, whether the search bar counts, whether the internal forecasting tool counts.

    Three months out, the practical questions are not about whether the law is good or bad. They are about which side of the bracket each artefact lands on. A bank with a fraud-detection model that has been running for a decade now has to ask whether that model is, today, an "AI system" under the operative definition. A small SaaS shipping a smart-reply feature has to ask the same question with less budget and worse access to counsel. The cost of being on the wrong side of the bracket is asymmetric. The cost of being inside the bracket and not knowing it is higher than the cost of being outside it and overpreparing.

    The first wave of cases will write the dictionary in retrospect. Whatever pattern those cases set will reach back to clarify Article 3(1), and the clarification will not look like a re-drafting of the statute. It will look like a body of decisions that quietly fixes which artefacts the bracket actually held all along.

    The inspectors arrive in August. The dictionary is still being written.

    That's the situation. Calm on paper. Open in practice. Ninety days from now, we begin to find out what the EU thinks an AI system actually is. Whatever it turns out to be, the notation will do its work either way.