Alexandru Mareș
# Cluster: Textual Kinematics

## Short definition

The Cluster of work covering **Textual Kinematics (TK)** — text analyzed as a dynamical system, with measurable rhythm and velocity. TK detects AI-generated text via the temporal physics of writing, and opens an interpretable research surface that classifier-based detection lacks.

## Long explanation

Most AI text detectors are classifiers — they train on examples and become brittle as new generators ship. Textual Kinematics works differently. Instead of pattern-matching surface features, TK measures *generation-process invariants*: the temporal physics that any LLM-driven generator inherits from its sampling process. The signal lives in the dynamics, not the lexicon.
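To make "the signal lives in the dynamics" concrete, here is a minimal illustrative sketch — not TK's actual signal set, which the source doesn't specify. It treats words-per-sentence as a crude rhythm proxy, takes first differences as a "velocity," and scores how erratic that velocity is. All function names are hypothetical.

```python
import re
import statistics

def rhythm_series(text: str) -> list[int]:
    """Crude rhythm proxy: words per sentence, in document order."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def velocity(series: list[int]) -> list[int]:
    """First differences of the rhythm series: how fast the pacing changes."""
    return [b - a for a, b in zip(series, series[1:])]

def smoothness(text: str) -> float:
    """Spread of the velocity. Lower = smoother pacing; higher = erratic.
    (Illustrative only; TK's real signals are process-level, not lexical.)"""
    v = velocity(rhythm_series(text))
    return statistics.pstdev(v) if len(v) > 1 else 0.0
```

A perfectly regular text (every sentence the same length) scores 0.0; pacing that lurches between long and short sentences scores higher. The point of the sketch is the *shape* of the approach: measure a time series derived from the text, not a bag of surface features.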

The central hypothesis (the **Classical-Jazz Hypothesis**) is that LLM-generated text exhibits smooth, classical dynamics, while human-generated text exhibits erratic, jazz-like dynamics. The strongest TK signal — `adversarialDeviation` (Δ=32.1 on the canonical benchmark) — measures inter-signal agreement; it's a meta-signal that's harder to defeat by training a generator on detector outputs.
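The actual definition of `adversarialDeviation` isn't given here; as an assumption-laden sketch, "inter-signal agreement" can be pictured as the mean pairwise gap among several per-document detector signals, each pre-normalized to [0, 1]. A generator tuned to fool one signal tends to move it out of line with the others, which *raises* the meta-signal.

```python
from itertools import combinations

def inter_signal_deviation(scores: dict[str, float]) -> float:
    """Mean pairwise absolute gap among per-document detector signals,
    each assumed normalized to [0, 1]. Hypothetical stand-in for an
    agreement-based meta-signal: evading one signal in isolation
    increases the disagreement instead of hiding it."""
    pairs = list(combinations(scores.values(), 2))
    if not pairs:
        return 0.0
    return sum(abs(a - b) for a, b in pairs) / len(pairs)
```

For example, `inter_signal_deviation({"rhythm": 0.9, "burstiness": 0.85, "entropy_slope": 0.2})` is high because one signal has been pulled away from the other two — exactly the pattern a partially successful adversary would leave.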

This Cluster collects every Body about TK: the foundational essays, method-paper drafts, detector benchmarks, and case studies. **SYL** ([[syl|/concepts/syl]]) is the production implementation — a separate concept, in scope here as the productized form.

## Why it matters

TK is Alex's **research method** (implemented as a YounndAI product via SYL) — the analytical surface that lets the program speak about generated text rigorously rather than impressionistically. It also serves an internal role: SAI cognitive instances use SYL-scored trust to weight incoming text Mems, closing what would otherwise be a self-declared trust gap. The research method, the production tool, and the cognitive substrate are all aligned around the same insight.

Outside YounndAI's implementations, TK is a contribution to the AI-detection field that doesn't depend on classifier-arms-race dynamics. As long as LLM generators inherit sampling-based generative architectures, they leave temporal signatures TK can read.

## Best starting point

1. **The TK project hub:** [[GUIDE|TK — textualkinematics.org]] (`alm-os/Projects/TK - textualkinematics.org/GUIDE.md`)
2. **The companion concept cards:** [[textual-kinematics|/concepts/textual-kinematics]] and [[syl|/concepts/syl]]
3. **Dynamics essays in the timeline** — see related Bodies below.

## Main paper / article / repo

- **Project hub (vault):** [[GUIDE|TK]]
- **Domain:** [textualkinematics.org](https://textualkinematics.org)
- **Concept card:** [[textual-kinematics|/concepts/textual-kinematics]]
- **Production implementation:** [[syl|SYL]] — stringyourlines.com
- **Method papers:** in development (planned formalization paper; benchmark paper; case-study series).

## All related Bodies

- [[2026-E0024 - The AI That Writes Like Everyone and No One/_metadata|E0024 — The AI That Writes Like Everyone and No One]] — the rhythm-collapse argument
- [[2026-E0030 - What Happens When AI Trains on AI/_metadata|E0030 — What Happens When AI Trains on AI]] — synthetic-data feedback loops; how TK signals decay across model generations
- [[2026-E0027 - Expect the Lie/_metadata|E0027 — Expect the Lie]] — adjacent: trust assessment for LLM outputs
- [[2026-E0031 - The Chinese Room Has a New Tenant/_metadata|E0031 — The Chinese Room Has a New Tenant]] — adjacent: the limits of behavioral evidence for cognition
- (More Bodies as the Arc continues.)

## Videos / diagrams / infographics

- Per-episode shorts and visual briefs; permalinks captured in each episode's `_metadata.md`.
- Future: TK signal-visualization diagrams; Δ-deviation traces; rhythm spectrograms.

## External references

- Information-theoretic analyses of generated text (literature in NLP / ML).
- Dynamical-systems approaches in linguistics (mostly historical; TK extends to LLM generation).
- AI text detection prior art (GPTZero, OpenAI's classifier, etc.) — TK contrasts methodologically.

## Related topics

- [[elastic-automators|Cluster: Elastic Automators]] — different angle on the same generators
- [[ai-cognition|Cluster: AI Cognition]] — context for what TK can and can't tell us about cognition
- AI text detection (broader Cluster TBD)

## FAQs

**Q. Can TK be defeated by adversarial training?**
A. Surface-feature detectors can be defeated by training on their own outputs. TK is harder to defeat because the signal lives in generation-process invariants. An adversarial generator would need to reshape its sampling dynamics, not just its surface tokens — a much deeper architectural change.

**Q. Does TK distinguish among models (GPT vs Claude vs Gemini)?**
A. Sometimes. The Classical-Jazz Hypothesis is about LLMs in general vs. humans, but secondary signatures vary across model families and decoding settings.

**Q. Is TK competing with classifier-based detection or complementing it?**
A. Complementary in production (an ensemble can use both). Methodologically, TK opens a research surface classifiers don't — interpretable, generative-process-grounded signals.
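One way such an ensemble might look in practice — purely a sketch, with hypothetical names and thresholds — is a convex blend of the two scores, plus an explicit "review" route when classifier and TK disagree strongly, since that disagreement is itself informative:

```python
def ensemble_verdict(classifier_p: float, tk_score: float,
                     threshold: float = 0.5, disagreement: float = 0.4) -> str:
    """Blend a classifier probability and a TK dynamics score (both in
    [0, 1]); route strongly conflicting cases to human review. All
    parameter values here are illustrative, not SYL's actual settings."""
    if abs(classifier_p - tk_score) >= disagreement:
        return "review"  # the two detection surfaces conflict
    blended = 0.5 * classifier_p + 0.5 * tk_score
    return "ai" if blended >= threshold else "human"
```

The design choice worth noting: the conflict check runs *before* the blend, so a high classifier score can never silently outvote a low TK score (or vice versa) — the complementarity is preserved rather than averaged away.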

## Latest updates

- *(production)* — SYL v7 in deployment.
- *(in development)* — TK methods paper.
- Episodes E0024, E0027, E0030, E0031 published or drafted.