The Moment AI Stopped Being a Tool

- Anthropic (2024). Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku.
- Salesforce (2024). Agentforce.

The cursor moved across my screen. My hand wasn't on the mouse. Nothing on the desk had moved. The pointer slid across an open browser, found a button, and clicked it — then it kept going. It opened a tab, typed an address, scrolled, and a form filled itself out, field by field. I watched a small task get done by nobody, and for a few seconds I forgot what I was looking at.
That moment is not a capability surprise. It is a category surprise. The same kind of machine that had been answering my questions for two years stopped waiting for them — and eighteen months later, in 2026, the scene above is normal.
Back in October 2024, Anthropic shipped Computer Use. The model looks at the screen, moves the cursor, clicks, and types. Salesforce shipped Agentforce within the same week; it began running CRM workflows on its own. A few months later, in January 2025, OpenAI shipped Operator — a research preview with its own browser, filling forms and placing orders. Three companies. Three surfaces. One verb.
A year and a half on, Gartner forecasts that forty percent of enterprise applications will feature task-specific AI agents by year-end 2026, up from less than five percent in 2025; the press release ran on August 26, 2025. The cohort that shipped the verb was small. The verb itself is now everywhere inside the building.
When a thing acts, somebody owns the consequence. With a tool, that's the user. A hammer doesn't make a mistake — the carpenter does. A compiler doesn't ship a bug — the engineer does. With an actor, the chain breaks somewhere in the middle of the action, and the question of who is at fault gets harder to answer cleanly.
Hannah Arendt drew a line through this in 1958, in The Human Condition. She split human activity into three. Labor keeps the body alive. Work makes the durable thing. Action is the one that begins something new in the world. Eighteen months ago, software did labor and work — it computed, it rendered, it stored. It didn't act. Now part of it does. That changes who owns what when it goes wrong.
AI didn't get smarter eighteen months ago. The capability curve kept climbing the way it had been climbing. What changed sat one level up. The verb changed. From query to operate. From answer the question to perform the action. From look up the address to book the flight. That is a different category of thing.
This is the category I named elastic automation. What people call AI is automation flexible enough to negotiate language — and now, flexible enough to negotiate outcomes. I have watched the same loop run inside my own work for a year and a half. It reads context, picks a move, does the move, checks the result, and picks the next one. Same grammar. Larger stage now.
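Here is a minimal sketch of that loop in Python. Every name in it (run_loop, pick_move, execute, check) is mine, chosen for illustration; no vendor's API looks like this. The point it makes is structural: the choice of the next move lives inside the loop, not with the user.

```python
# A minimal sketch of the elastic-automation loop: read context,
# pick a move, do the move, check the result, pick the next one.
# All names are illustrative; no vendor API is implied.

from dataclasses import dataclass


@dataclass
class Step:
    action: str   # what the agent chose to do
    result: str   # what the world said back
    ok: bool      # did the check pass?


def run_loop(goal: str, pick_move, execute, check, max_steps: int = 10) -> list[Step]:
    """Run the read-pick-do-check loop until the goal is met or the budget runs out."""
    context: list[Step] = []
    for _ in range(max_steps):
        action = pick_move(goal, context)      # read context, pick a move
        if action is None:                     # the picker decided the goal is met
            break
        result = execute(action)               # do the move: the part that touches the world
        ok = check(goal, action, result)       # check the result
        context.append(Step(action, result, ok))  # the next pick sees this outcome
    return context


# Trivial stand-ins, just to show the shape of the loop:
moves = iter(["open_tab", "type_address", "fill_form"])
trace = run_loop(
    goal="submit the form",
    pick_move=lambda goal, ctx: next(moves, None),
    execute=lambda action: f"did {action}",
    check=lambda goal, action, result: True,
)
```

Swap in a model for pick_move and a browser for execute and you have the scene from the opening paragraph. The grammar is the same either way.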
Once the grammar moves, the risk calculus moves with it. Decision-making is no longer downstream of the user; it is co-located with the system. Responsibility attribution becomes a design problem, not a courtesy footnote. The systems we ship now have to declare what they will operate on, what they will not, and where the human is in the loop on purpose, not by accident.
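One way to make that declaration concrete is a scope the loop must consult before every action. The sketch below is built on my own assumptions, not any shipping product's schema; the field names and the authorize function are hypothetical.

```python
# A sketch of a declared operating scope: what the agent may touch,
# what it may not, and which actions require a human to approve first.
# Schema and names are assumptions for illustration.

from dataclasses import dataclass, field


@dataclass
class Scope:
    allowed: set[str] = field(default_factory=set)         # actions the agent may take alone
    forbidden: set[str] = field(default_factory=set)       # actions it must never take
    needs_approval: set[str] = field(default_factory=set)  # actions gated on a human


def authorize(scope: Scope, action: str, approve) -> bool:
    """Decide whether an action may run. `approve` is the human in the loop, on purpose."""
    if action in scope.forbidden:
        return False
    if action in scope.needs_approval:
        return approve(action)        # the deliberate human checkpoint
    return action in scope.allowed    # anything undeclared is denied by default


scope = Scope(
    allowed={"open_tab", "type_address"},
    forbidden={"delete_account"},
    needs_approval={"place_order"},
)
# Stand-in for a real approval UI:
ok = authorize(scope, "place_order", approve=lambda a: True)
```

The design choice worth noticing is the last line of authorize: anything not declared is denied. Deny-by-default is what makes the human's position in the loop deliberate rather than accidental.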
That was the moment. Three companies. One verb. Eighteen months on, it's everywhere. The grammar changed first. The risk calculus followed. A tool does not act. An actor does. That is the shift.