What Stays Human When the Machine Becomes Fluent
Notes before a salon — May 13, Chinatown.
In the early 2000s, researchers studying epilepsy patients identified a single neuron in one patient’s brain that fired only when she encountered Jennifer Aniston. Her photograph, her name in text, even a stylized cartoon drawing — the same cell, every time. That is how human cognition organizes meaning: symbolically.
Large language models work differently. They don't hold concepts. Given everything you've typed so far, the model calculates which token is most probable next, then the next, and the next. There is no Jennifer Aniston neuron inside a model. There are billions of weights, tuned on vast stretches of text, that together assign each word a probability in the company of the words around it.
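For readers who want the mechanism made concrete, here is a minimal sketch of that loop in Python. Everything in it is a toy of my own invention, including the probability table; a real model computes its distribution from billions of learned weights, not a lookup. But the loop itself is faithful to the architecture: condition on what came before, sample what is statistically likely to come next, append, repeat.

```python
import random

# A toy stand-in for a language model, invented purely for
# illustration: hand-written next-token probabilities. A real model
# computes this distribution from billions of learned weights, not a
# lookup table, but the generation loop is the same.
NEXT_TOKEN_PROBS = {
    "the":  {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
    "cat":  {"sat": 0.6, "ran": 0.3, "<end>": 0.1},
    "dog":  {"ran": 0.7, "sat": 0.2, "<end>": 0.1},
    "sat":  {"down": 0.8, "<end>": 0.2},
    "ran":  {"away": 0.8, "<end>": 0.2},
    "down": {"<end>": 1.0},
    "away": {"<end>": 1.0},
}

def generate(prompt: str, max_tokens: int = 10) -> str:
    """Autoregressive loop: look at the last token, sample the next
    one from its conditional distribution, append, repeat."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(tokens[-1])
        if dist is None:
            break
        choices, weights = zip(*dist.items())
        nxt = random.choices(choices, weights=weights)[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat down"
```

Nothing in that loop knows what a cat is. It only knows what tends to follow "the".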
This sounds like a technical detail. It is the architectural fact that shapes what these systems do, where they fail, and what they quietly optimize for.
When a prediction engine writes a sentence about grief, it produces a sequence of tokens that are statistically likely to appear in sentences about grief. The output can sound felt. It can be useful. But it is not generated from any held understanding of what grief is. The fluency is real. Underneath the fluency, there is no one — there is a probability surface that has learned how grief sounds.
Most days, this distinction is invisible. The model's drafts are serviceable, the use cases are real, and a draft is a draft.
But the distinction matters the moment language becomes a vehicle for connection. A therapist’s note about a patient. A designer’s rationale for a decision. A parent’s letter to their child. When a sentence carries a person, what is actually inside the sentence becomes the question.
We have spent a few centuries treating language as the way meaning travels between people. The model treats language as the data set over which meaning is approximated.
I keep returning to two questions.
The first is about attention. Predictive interfaces are built to reduce friction: suggest the next sentence, autocomplete the thought, finish your email before you have quite figured out what you wanted to say. But friction is sometimes where thinking happens. The pause before you name something. The word you reach for and reject. If the model fills that pause, the thought is not composed; it is selected from a menu of plausible completions. The question is not whether the menu is well-stocked. It is what kind of mind gets formed by choosing from it.
The second is about discernment. Working close to the model, editing its prose, arguing with its reasoning, is its own kind of literacy. You start to feel where it is grounded and where it is confabulating. You start to see the shape of plausibility, and to see that it is not always the shape of truth. Most users will never get this close. The question is what literacy, if any, reaches them.
These are not anti-AI questions. The model is here. It is useful. Pretending otherwise is a posture rather than a position. The questions are about staying intentional inside the use — knowing when fluency is doing the work and when meaning is, and noticing the moments those two part ways.
Curiosity over conclusions is most of what I think this asks for. The conversations I am interested in are with people who can narrate their logic rather than prescribe a worldview. Most public discourse about AI runs on prescription. The interesting room runs on noticing.
I am hosting the second AI Literacy Salon on this question on May 13 in Chinatown. Stephen P. Williams (a writer currently teaching a large AI company's models how to write better), Kristy Zadrozny (a New York therapist), and Johnny Dance (who writes on mathematics, games, and ontology) will be in conversation.

