This is a response to a question from AI Literacy Salon #1. If the person I spoke to is reading this, please feel free to respond. If you want to join our next salon, please mark May 13th in your calendar and follow along on IG.
Can you ask an algo to teach you Fermi estimation?
There is something quietly absurd about asking a language model to teach you how to estimate under uncertainty. The Fermi method — that discipline of making reasonable guesses from first principles when you lack precise data — is fundamentally a practice of navigating genuine ignorance. To learn it from a system that pretends to have no ignorance is to learn the form while bypassing the substance entirely.
Enrico Fermi could estimate the yield of the Trinity test by dropping scraps of paper and watching them scatter in the blast wave. The method that bears his name isn’t really a method at all. It’s a disposition — a trained willingness to commit to a number when the comfortable move is to defer. What made Fermi Fermi wasn’t a procedure. It was decades of being wrong in ways that cost him something: reputation, experimental resources, the trust of colleagues. His estimates were calibrated against a history of personal stakes.
An algorithm has no such history. More precisely, it has something that resembles that history — a training corpus of thousands of worked examples, Fermi problems with clear pedagogical structures and satisfying resolutions — but that archive is not experience. It is a pattern. The algorithm can fluently reproduce the genre of Fermi reasoning. It will walk you through the steps: break the problem into tractable sub-problems, estimate each component, multiply through, and sanity-check against known quantities. The steps are correct. And they are completely beside the point.
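To see how little the steps themselves contain, it helps to run one end to end. Below is a minimal Python sketch of the classic piano-tuners estimate (the same problem that reappears below); every constant is an illustrative guess of mine, not data, and the result matters only to its order of magnitude.

```python
# Fermi estimate: how many piano tuners work in Chicago?
# Every constant is a deliberate rough guess; the method's value
# lies in committing to each number, not in its precision.

population = 3_000_000            # Chicago, to one significant figure
people_per_household = 2          # rough average household size
piano_ownership_rate = 1 / 20     # guess: 1 in 20 households has a piano
tunings_per_piano_per_year = 1    # pianos are tuned roughly once a year

tunings_per_tuner_per_day = 4     # ~2 hours per job, travel included
working_days_per_year = 250       # 5 days a week, 50 weeks a year

# Multiply through the sub-estimates.
pianos = population / people_per_household * piano_ownership_rate
demand = pianos * tunings_per_piano_per_year                  # tunings needed per year
capacity = tunings_per_tuner_per_day * working_days_per_year  # tunings one tuner supplies

tuners = demand / capacity
print(f"Estimated piano tuners in Chicago: ~{tuners:.0f}")    # prints ~75

# Sanity check against a known quantity: a city of ~3 million
# plausibly supports tens of specialists in a niche trade, not thousands.
```

The arithmetic executes in a millisecond once the inputs are chosen. Everything this essay is about lives in the choosing, and nothing in the code does that.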
Here is the fallacy: Fermi estimation is not a procedure for arriving at answers. It is a procedure for building intuition about the size of things. The only way intuition gets built is through the friction of being wrong and caring about it. When an algorithm teaches you the method, it removes that friction by design. Every worked example is already solved. Every sub-estimate has already been quietly optimized toward the known answer. You are not learning to navigate uncertainty. You are learning to recognize the shape of navigated uncertainty after the fact.
This produces a particular kind of false epistemic confidence. Students who learn Fermi estimation algorithmically often become fluent at performing it — at reproducing the genre markers of good estimation reasoning — without developing any genuine sense of scale. Ask them to estimate the number of piano tuners in Chicago, and they will produce a crisp, well-formatted response. Ask them to estimate something they haven’t seen before, something where no training data provides a nearby anchor, and the performance collapses. The method runs, but nothing runs it.
There is a name for this gap, though it usually goes unnamed in discussions of algorithmic pedagogy: the difference between editorial communication and personal integration. Editorial communication is the ability to explain a concept correctly — to render it in language that is accurate, well-structured, and transmissible. Personal integration is something else entirely. It is what happens when a concept stops being something you know about and becomes something that changes how you see. The algorithm is a masterclass in editorial communication. It will explain Fermi estimation better than most textbooks. But explanation and integration are not on the same continuum. You cannot explain someone into calibration. Calibration is a somatic thing — it lives in the body’s history of estimates that embarrassed you, the ones where you were off by an order of magnitude in front of people whose opinion mattered.
The confusion between these two modes is not innocent. When we treat editorial fluency as evidence of understanding, we create a learner who can narrate the journey without having taken it. They become skilled at the meta-level — at talking about estimation, at identifying the moves in someone else’s Fermi reasoning — while remaining genuinely unformed at the level where the reasoning would actually happen: under pressure, in real time, with incomplete information and something at stake. The algorithm teaches you to write about swimming. The ocean is still waiting.
The deeper issue is that Fermi estimation is irreducibly embodied. You need to have held a piano key to have intuitions about the action of a piano key. You need to have hired someone to fix something in your home to have intuitions about service visit frequency. You need to have walked a city block to have an intuition for block length. The estimates are downstream from life. When you short-circuit that process — when you outsource the estimation to a system that has read about piano keys rather than touched them — you are not accelerating learning. You are replacing it with something that resembles it.
None of this means algorithms can’t be useful in this domain. A model can usefully serve as an interlocutor: pushing back on your estimates, surfacing considerations you missed, checking your arithmetic. That’s a legitimate pedagogical role. The fallacy is in positioning the algorithm as the authority — as the entity whose estimates you are trying to approximate, rather than the entity you are using to sharpen your own.
This distinction matters because of what happens when you get the relationship backwards. If you are trying to approximate the algorithm’s answer, you are implicitly treating the algorithm’s confidence as calibration. You are importing its flatness, its absence of stakes, its structural inability to be genuinely wrong. You are learning to reason like something that doesn’t reason — that pattern-matches fluently but never commits, never sweats, never lies awake at night having been badly off on something that mattered.
The Fermi method teaches you to be usefully wrong. An algorithm can only teach you to be elegantly approximate. These are not the same thing, and confusing them is expensive.
More on Fermi estimation at NASA, and on our simple talk website.