
Alexandru Mareș

What Notation Did to History

References

  1. Jeff Miller / MacTutor (St Andrews) (2024). Earliest Uses of Symbols of Calculus. [article]
  2. J. J. O'Connor and E. F. Robertson (MacTutor) (2000). Sir Isaac Newton, biography. [article]
  3. Florian Cajori (1928). A History of Mathematical Notations. [book]
  4. Wikipedia contributors (2025). Notre-Dame school (Pérotin / Magnus liber organi, ~1200). [article]
  5. Wikipedia contributors (2025). Proto-cuneiform (Uruk, ~3200 BCE). [article]
  6. Walter Ong (1982). Orality and Literacy. [book]
  7. Jack Goody (1977). The Domestication of the Savage Mind. [book]
  8. Alexandru Mareș (2026). YON — YounndAI Object Notation. [essay]

    Published: 08/05/2026 · Read time: 7 min
    Topics: General, Notation, Cognition, History, Philosophy

    I spend my days designing a notation. The systems people now call AI talk through it. I use it daily, and the test I run on myself is simple. Do I think differently this month than I did last month? On the days I can answer yes, the answer is almost never that I got smarter. The answer is that the notation let me think a thing I couldn't think before.

    That test isn't new. I'm at one end of a pattern that runs through several thousand years of human history, and through three notation moments in particular. Those three moments are what I want to look at now.

    Late 1675

    In late 1675, Gottfried Wilhelm Leibniz sat down in a room in Paris and wrote three symbols in a private notebook: dx, dy, and dy over dx. He had written the integral sign on the 29th of October; two weeks later, on the 11th of November, he wrote the differentials for the first time. It was the first time anyone had written the differential calculus that way. Leibniz didn't publish it for another nine years. The notebook stayed private (Leibniz's late-1675 Paris manuscripts).

    By the time Leibniz wrote those three symbols, Isaac Newton had been working on the same calculus for almost a decade. He'd developed it in 1666, during the years Cambridge sent him home to wait out the plague (Newton's 1666 fluxions). He called his version the method of fluxions, and his notation was a single dot above a letter: ẋ for the first derivative, ẍ for the second. The dots stacked.

    Mathematically, Newton's notation was equivalent to Leibniz's. You could prove the same theorems either way. Typographically, the two systems lived on different planets. Newton's notation worked beautifully on a chalkboard for a single mathematician working on first and second derivatives. It collapsed when you needed a fourth or fifth derivative. It was almost impossible to typeset cleanly in a printed book. The dots crowded the letter; the line height strained; the print shops choked.

    When Leibniz finally published his version in the Acta Eruditorum in 1684, the continent read it. The continent adopted it. By the early 1700s, mathematicians like Johann Bernoulli, and later Leonhard Euler, Pierre-Simon Laplace, and Joseph-Louis Lagrange, were stacking entire architectures on top of d over dx. The notation generalized. It could carry.

    British mathematicians stayed loyal to Newton. After the priority dispute between Newton and Leibniz, in which Newton (by then Master of the Mint, with real institutional power) used that power to have Leibniz declared the loser, British mathematics became a matter of national pride. The notation choice came bundled with that loyalty. They kept the dots.

    For the next hundred years, the continent leapt ahead. British mathematics fell behind. A full century. Cajori's A History of Mathematical Notations (1928) documented this in detail, and the picture has held up. The typographic limits of Newton's notation, more than the personalities of his successors, are what made the difference.

    Newton was the smarter man. He had the math nine years earlier. He lost because his notation didn't scale. The slower man chose better symbols.

    A cathedral, around the year 1200

    Roughly five hundred years before Leibniz, in Paris, a composer named Pérotin was writing music for the cathedral of Notre-Dame. He worked in a tradition that produced what is now called the Magnus liber organi, the great book of organum. Within it, Pérotin made a leap nobody had made before: he wrote music for four independent vocal lines.

    Four melodies sounding at the same time. Modern listeners don't hear that as remarkable, because we've grown up with most of a millennium of polyphonic music behind us. But for someone working in 1200, four-voice polyphony was a new kind of thinking. You cannot hold four independent voices in your head while writing them. The cognitive load is too high. Music notation that could express four-voice polyphony, with staff lines, neumes, and rhythmic mode markers, gave Pérotin a writing surface where he could think four lines at once.

    The notation didn't write down a song he was already humming. It made that kind of song composable. Pérotin's manuscripts wrote down a shape no one had ever sung before.

    A scribe in Uruk, three thousand years earlier

    Three thousand two hundred years before Christ, in the city of Uruk in what is now southern Iraq, a scribe pressed a sharpened reed into wet clay. The marks they made were proto-cuneiform, a precursor to the wedge-shaped writing that would carry Mesopotamian civilization for the next three thousand years (Uruk proto-cuneiform, ~3200 BCE).

    Walter Ong, in Orality and Literacy (1982), and Jack Goody, in The Domestication of the Savage Mind (1977), wrote the foundational treatments of what those marks did. Their case is unusually strong, because the contrast is so clean. We have detailed records of pre-writing societies, and we know what those societies could and could not sustain.

    Pre-writing societies could not sustain law that survived its lawmakers. They could not maintain records that crossed generations cleanly. They could not run bureaucracy that outlasted the people who ran it. Each of these things requires a notation that doesn't depend on memory. Memory is too short and too deformable. A clay tablet survives the people who pressed it.

    It is not that Mesopotamians before writing were stupid, and it is not that the people who invented writing were geniuses. The thing that mattered was the artifact: a notation that survived memory. With it, three new kinds of human institution became possible.

    The pattern, and the test

    This is the part most readers get backwards. We treat notation as packaging, the way ribbon is packaging for a gift. The thought is the gift; the wrapping is decoration; the wrapping doesn't change the gift.

    That picture is wrong. Notation is not packaging. It is the part that compounds.

    Pérotin's notation didn't decorate a song he was already humming. It made the song possible. Leibniz's d over dx didn't dress up a calculus he already had in his head. It made the next century of calculus possible. The Uruk scribe's reed-on-clay didn't transcribe a law that already existed somewhere. It made the law possible by giving it a body that survived the lawmaker.

    Reviel Netz makes a related case for Greek mathematics in The Shaping of Deduction in Greek Mathematics (1999). The Greek mathematicians' notation, with its lettered diagrams and its specific style of proof-by-construction, made certain kinds of geometric thought possible that the same Greeks could not have done without it. The pattern repeats. Each notation creates the kind of thought that wasn't there before.

    We are inside another one of these moments now. The notation we settle on for the systems people now call AI will decide what becomes thinkable next. The shape of the thing is being chosen this decade, by people sitting at desks like the one Leibniz sat at in 1675, with no idea which symbol or syntax will turn out to be the one that scales.

    I am one of those people. I spend my days designing one of these notation systems, and I use it daily. The test I run on myself is the only test I trust. Do I think differently this month than I did last month? When the answer is yes, it is almost never because I got smarter. It is because the notation let me think a thing I couldn't think before. That is what notation does. That is the only test that matters.

    What we cannot see

    A common reflex, when reading a story like Leibniz's, is to feel quietly superior to the people inside it. We have calculus now. We have a notation that scales. The British mathematicians who stayed loyal to Newton's dots seem, from where we sit, to have been making a mistake we can see clearly.

    We are not smarter than Leibniz. We work in the notation he built. We are inside a notation moment of our own, and we cannot see clearly which symbols on which desks will turn out to have been the ones that compounded. The same is true of every prior moment in the pattern. The Uruk scribes did not know they were inventing law. Pérotin did not know he was inventing the cognitive shape of polyphony. Leibniz did not know his three-symbol shorthand would carry three centuries of physics.

    We never know what we couldn't think. Until the new notation lets us think it.