I recently met myself. Or something that was supposed to be me. Delphi had been trained on my content. ElevenLabs had been trained on my voice. By any reasonable technical standard, the result was impressive. The cadence was right. The vocabulary was plausible. And yet the moment I heard it, I knew something was off. Not broken. Off. The simulation hadn’t failed. It had succeeded at the wrong thing entirely.
This is the uncanny valley nobody talks about. Not the one where a robot’s face almost looks human, but the one where your own words come back to you, emptied out. You recognize the shape of what you said, but the thing that made you say it is gone. It’s like hearing a cover of a song you wrote, performed by someone who learned every note but never knew what the song was about. The technical execution is there. The reason it existed in the first place is not.
What’s missing isn’t data. It’s not a fidelity problem. It’s a category error: the assumption that capturing enough of someone’s output captures something essential about them, that replicating the signal with sufficient precision yields the source. It doesn’t.
I keep coming back to the distinction between creativity and production. These aren’t two points on a spectrum. They’re two different things.
Creativity is pre-linguistic. It happens before the sentence, before the image, before the sound. It’s the moment something that didn’t exist begins to take shape. Not as an artifact. As a direction. An instinct. A commitment to this and not that, before you can fully explain why.
Everything after that moment is production. The writing, the editing, the recording, the designing. That’s the craft of turning the creative impulse into something others can encounter. And production is where AI lives. Entirely. It operates on the artifact layer. It can recombine, extend, polish, and mimic what’s already been produced. What it cannot do is reach back into the moment before production began.
This matters more than people think. Creativity can’t be scraped. It can’t be trained on. It precedes the artifact, which means no amount of ingesting artifacts will reverse-engineer the thing that generated them. You can feed a model every essay I’ve ever written, and it will learn my patterns. It will not learn why I chose them, or what I was reaching for when I did.
In this context, aesthetics takes on a very different meaning than the one we normally give it. People treat it as a synonym for style. A visual preference. A tone. A vibe. But aesthetics is what makes direction possible, what makes one choice meaningfully different from another.
When you have real aesthetic judgment, you can walk into a room full of options and know which one to pursue. You can look at a draft and feel that one word lands harder than another. You can make the call that pushes a project forward, or the call to let it go. Not because you ran the numbers, but because something in your body told you this is right, or this is wrong, and you’ve been right enough times to trust the signal. Aesthetics is what lets you move. It’s the layer underneath every meaningful decision about what to make and how.
And you only develop it by living with the output of your choices. In practice, over time, through the accumulation of choices that mattered. It’s embodied. It requires a presence that can be affected by what it creates.
Utility, by contrast, is disembodied by definition. Its function is stripped of uniqueness. A useful output doesn’t need to be yours. It doesn’t carry your intuition. It just needs to work.
This is exactly where Delphi’s replication breaks down. It can produce sentences that sound like mine, but it will never feel the weight of choosing one over another. It doesn’t care if it’s wrong, because it can’t mean any of it.
AI is worth using when it moves in the direction of something you’ve already decided to write. Use it to clean up data you’ve already decided matters. Use it to accelerate production on a project whose creative direction you own completely.
But the moment you use AI to figure out what to say, what to care about, what direction to go, you’ve crossed a line. You’ve handed over the one thing that makes the work yours.
I see this constantly now. Someone uses AI to generate a brand strategy, a content calendar, and a positioning document. They present it with confidence. But push on any single point. Why this word and not that one? Why this audience? Why this tone? There’s nothing underneath. No rationale that comes from lived experience. No moment where they chose a direction and felt the value of it. They show up with a disembodied artifact that cannot grow with their thinking.
AI has no meaning of its own, so its output can never stand up to scrutiny.
The difference between using AI well and using it badly comes down to whether you had something to say before you opened the program. If you did, AI accelerates you. If you didn’t, AI gives you the illusion that you did, and that illusion has a half-life. It shows. Maybe not immediately. But eventually, and always.
The real literacy of our moment isn’t knowing how to use AI. It’s knowing what layer you’re operating on.
When someone shows you a beautifully produced piece of content, the question worth asking isn’t “Is this good?” It’s “What layer did this come from?” Was there a genuine creative act underneath? A person who took the time to pursue and articulate meaning, and who made a place for that meaning to meet others?
The risk now isn’t that AI will produce bad work. It’s that AI will produce work good enough to fool us into thinking the creative layer is there when it isn’t.
All technology exists to mediate human communication; that connection is always at the level of meaning.
Organizations will ship strategies nobody actually believes in. Writers will publish essays nobody actually wrote. It will be editorial communication all the way down, with no generative value (value that stays after you leave the room).
The cost of not being able to tell the difference is enormous. It’s not just a quality problem. It’s a trust problem. When you can’t distinguish between a genuine voice and a well-produced simulation, you lose the ability to know who actually stands behind what they’re saying.
This is the new literacy: not prompt engineering, not tool fluency, but the ability to look at a piece of work and articulate its value, both structured and open-ended.

