Notes from AI Literacy Salon #1 – March 26, 2026
Most conversations about AI start with what it can do. This one started with what it actually is.
At the first AI Literacy Salon, hosted by In Process Coaching and joined by guests Noa Dolberg (Google’s Material Design), Cherae Robinson (Head of Platform, Flybridge), and Sam Spurlin (Founder, Deliberate Works), we spent an evening thinking together about a question that doesn’t get asked enough: when does AI help, and when does it just look like it’s helping?

The salon was built for creative professionals – not because they need to become technologists, but because AI is reshaping the terrain they walk on. The goal wasn’t to teach prompts or tools. It was to build a way of thinking about what AI does well, where it falls apart, and why that distinction matters for anyone doing creative work.
The prediction engine
Here’s the starting point: almost everything AI does comes back to one word – prediction.
Given an image, predict what’s in it. Given a string of words, predict what comes next. Given a customer’s behavior, predict what they’ll do. AI finds patterns in past data and uses them to make guesses about future or unseen data. That’s the engine under the hood, whether you’re talking about medical imaging, language translation, recommendation algorithms, or code generation.
And when prediction works, it's transformative. It catches tumors earlier than human eyes can. It makes text accessible in more than 100 languages. It surfaces the song you didn't know you needed. It turns descriptions into working prototypes. There is real, tangible value here.
But prediction has edges. It works well when the future resembles the past, when there’s enough quality data, when the problem has a clear “right answer,” and when the context is stable and well-defined. When those conditions break down, so does the prediction. And for creative work, those conditions break down constantly.
The ambiguity problem
This is where the salon got interesting. Many of the things that matter most in creative work are not prediction problems. They are ambiguity problems.
What should this brand feel like? Is this essay saying something true? What does this community actually need? Should we take this project in a new direction? These questions don’t have answers hiding in past data. They require judgment, interpretation, and taste.
Prediction asks: given what has happened, what will happen next? Ambiguity reduction asks: what is actually going on, and what should we do about it?
AI is exceptional at the first. It is, at best, a thinking partner for the second. Knowing which one you’re facing – that’s the skill.
This is also where creatives tend to get tripped up. “AI will replace designers” assumes design is prediction. “AI will write your strategy” assumes strategy is prediction. The hype cycle collapses prediction and ambiguity reduction into a single concept. But some parts of creative work are prediction-like – layout suggestions, copy variants, image generation. The parts that aren’t? That’s where your value lives.
How many dogs are in Greenpoint, Brooklyn?
To make this concrete, the salon included a Fermi estimation exercise. The question: how many dogs are there in Greenpoint, Brooklyn?
Nobody knows the exact answer. That’s the point. A Fermi estimation walks you through breaking an unknowable question into smaller, estimable pieces – population, household size, dog ownership rates, neighborhood density. Each step doesn’t give you “the answer.” Each step reduces the ambiguity around the answer.
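The chain of estimates above can be sketched in a few lines of code. Every number here is a placeholder assumption for illustration (the salon's actual figures weren't recorded in these notes); the point is the structure: each input is small enough to reason about, and running low and high scenarios shows where the uncertainty lives.

```python
# A minimal sketch of a Fermi estimate for "how many dogs are in Greenpoint?"
# All inputs are illustrative assumptions, not real data.

def fermi_dogs(population, household_size, ownership_rate, dogs_per_owner):
    """Chain small, estimable quantities into one rough answer."""
    households = population / household_size          # people -> households
    dog_households = households * ownership_rate      # share that own a dog
    return dog_households * dogs_per_owner            # dogs per owning household

# Low and high scenarios make the shaky assumptions explicit:
low = fermi_dogs(population=35_000, household_size=2.5,
                 ownership_rate=0.20, dogs_per_owner=1.0)
high = fermi_dogs(population=45_000, household_size=1.8,
                  ownership_rate=0.40, dogs_per_owner=1.3)

print(f"Estimated dogs in Greenpoint: {low:,.0f} to {high:,.0f}")
```

Unlike a single confident number, the spread between the low and high scenarios is itself the useful output: it tells you which assumptions are worth checking first.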
This is the opposite of what a prediction model does. If you asked an AI, “How many dogs are in Greenpoint?” it would give you a number. Confidently. Immediately. But it wouldn’t show you the reasoning chain. It wouldn’t flag which assumptions are shaky. It wouldn’t tell you where the real uncertainty lives.
The people in the room did all of that using salon.ai-literacy.space. And the difference wasn't that they got a better number – it was that they got a better understanding of the question.
Confidence is not competence
One of the most disorienting things about modern AI: it speaks with certainty even when it’s guessing. A prediction model doesn’t know what it doesn’t know. It doesn’t experience doubt.
You do. That’s a feature, not a bug.
When attendees paired up to discuss what they wished they could predict – whether a stock will go up, whether it’ll rain Saturday, whether people will connect with their work – the conversation naturally split into two categories. Objective predictions, where a right answer exists even if it’s hard to reach. And subjective ones, where there is no right answer, only your answer. AI can take a real swing at objective predictions. Subjective goals? It can only simulate an opinion.
A framework to take home
The salon left participants with a single, portable question to carry into their work: Is this a prediction problem or an ambiguity problem?
If it’s a prediction, explore AI tools. They might save you real time. If it’s ambiguity, use AI as a sparring partner, not a decision-maker. The ambiguity is yours to reduce.
You don’t need to become an AI expert. You need to become a better judge of when AI expertise is relevant – and when it isn’t. AI literacy isn’t about learning prompts or tools; those change every six months. It’s about knowing what kind of problem you’re facing.
The creative professionals who thrive will be the ones who can hold ambiguity while strategically using prediction where it helps. The future isn’t something to predict. It’s something to shape. And shaping requires exactly the kind of thinking that no prediction engine can do for you.
The AI Literacy Salons are hosted by Nitzan Hermon through In Process Coaching and Critical Business School. Learn more at ai-literacy.space or criticalbusinessschool.com.
