In a recent AI Literacy workshop, I posed a prompt: *What do you wish an algorithm could predict?* As the answers moved from the practical towards bigger, harder questions, I arrived at a realization I haven’t been able to shake since.
Algorithms cannot predict anything subjective.
That sounds obvious when you say it like that. But the implications aren’t obvious at all, and I think they’re worth sitting with.
Prediction, in the algorithmic sense, requires a target -- some outcome the system can be trained against. Did the customer churn or not? Was the transaction fraudulent or not? Did the borrower default or not? These work as prediction tasks because there’s a definitive answer to check against. The algorithm learns by comparing what it guessed to what actually happened, and adjusting.
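To make that mechanics concrete, here’s a minimal sketch in Python (using scikit-learn, with made-up churn data) of what “training against a target” means. The only reason the model can learn, or be scored at all, is that a column of recorded outcomes exists to check its guesses against.

```python
# Minimal sketch: supervised prediction needs a verifiable target column.
# The feature values and churn labels below are invented for illustration.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Each row: [months_as_customer, support_tickets_last_year]
X = [[2, 5], [36, 0], [4, 3], [48, 1], [6, 4], [60, 0]]
y = [1, 0, 1, 0, 1, 0]  # 1 = churned, 0 = stayed -- the recorded ground truth

model = LogisticRegression()
model.fit(X, y)                    # "learning" = adjusting to match the recorded outcomes
guesses = model.predict(X)
print(accuracy_score(y, guesses))  # scoring is only possible because y exists
```

Swap that outcome column for “did this person flourish?” and there’s nothing defensible to put in it -- which is the whole problem.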
Now ask: did this person flourish?
There’s no ground truth. There is no dataset where someone recorded, objectively and beyond dispute, whether a given life constituted flourishing. You can’t train a model on it because there’s nothing to train it toward. It’s not that we lack enough data or that the models aren’t sophisticated enough. The problem is more basic than that: a prediction requires a right answer, and subjective questions don’t have one. They have interpretations, frameworks, arguments, traditions -- but not a verifiable outcome you can score a model against.
This isn’t a gap that better engineering closes. It’s a category error. Asking an algorithm to predict something subjective is like asking a scale to measure a color. The instrument doesn’t fail at the task -- the task doesn’t coherently exist for that instrument.
And yet we ask these questions constantly. We ask algorithms to predict “risk,” “quality,” “success,” “fit,” “potential” -- words that feel concrete until you try to pin them to a single measurable definition. Each of these smuggles in subjectivity while wearing the uniform of objectivity. The algorithm returns a number, and numbers feel like facts.
Here’s where it gets political, and I mean that in the most literal sense.
Because algorithms need a target variable to function, someone has to provide one. When the concept is subjective, that means someone has to *manufacture* a target -- choose indicators, set thresholds, decide what gets measured and what gets left out. Someone has to translate “flourishing” into a column in a spreadsheet.
That translation is where politics enters, quietly and permanently.
Say you’re building a system to predict community wellbeing. You need to operationalize “wellbeing.” So you pick: median income, life expectancy, crime rates, and educational attainment. Reasonable choices. But now you’ve made a series of decisions about what wellbeing *is*, and those decisions carry consequences. A community rich in social cohesion but low in median income scores poorly. A place with high earnings but deep isolation scores well. The model didn’t decide that. A person did when they chose the indicators. The model inherited the worldview and produced a numerical output.
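To see how much work that translation does, here’s a toy sketch with invented numbers: two equally “reasonable” definitions of wellbeing, applied to the same two hypothetical communities, flip the ranking.

```python
# Toy illustration with invented numbers: the "model" is trivial;
# the chosen definition of wellbeing does all the work.
communities = {
    "Riverside": {"median_income": 0.9, "life_expectancy": 0.7, "social_cohesion": 0.2},
    "Hillcrest": {"median_income": 0.4, "life_expectancy": 0.8, "social_cohesion": 0.9},
}

# Two defensible-sounding operationalizations of "wellbeing"
definition_a = ["median_income", "life_expectancy"]
definition_b = ["life_expectancy", "social_cohesion"]

def wellbeing_score(indicators, chosen):
    """Average the chosen indicators (each already normalized to 0-1)."""
    return sum(indicators[name] for name in chosen) / len(chosen)

for chosen in (definition_a, definition_b):
    ranked = sorted(communities, key=lambda c: wellbeing_score(communities[c], chosen), reverse=True)
    print(chosen, "->", ranked)
# definition_a ranks Riverside first; definition_b ranks Hillcrest first.
# Same data, different politics baked into the target.
```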
Whoever defines the target variable has, in effect, legislated what the concept means. And the algorithm launders that choice -- it takes a decision made by a person with a particular perspective and presents the result as if it were discovered rather than decided. The output looks like a measurement. It’s actually policy.
This is what I mean by outsourcing personal politics. When we ask an algorithm to predict human flourishing, we’re not just asking a technical question. We’re asking someone -- a designer, a product manager, a research team -- to define what a good life looks like, encode that definition, and then deploy it at scale, behind the comforting facade of mathematical neutrality.
The pattern repeats everywhere once you see it. Predictive policing systems predict “crime,” but “crime” in the training data refers to where arrests were made, which reflects where police were already deployed. Hiring algorithms predict “job performance,” but performance might mean manager ratings, revenue, or retention -- each definition producing a different model with different winners. Student success platforms predict “success,” but is success graduation? GPA? post-graduation earnings? Each choice is a stance dressed up as a variable.
None of this means algorithms are useless, or that prediction has no value. It means the interesting question is almost never “can an algorithm predict X?” The interesting question is: who decided what X means, and do we agree with them?
That question doesn’t get answered by a system. It gets answered by people talking to each other -- in rooms, in organizations, across teams. It’s a leadership question. The most important part of any predictive system isn’t the model or the data. It’s the definition. And definitions are forged in conversation, not in code.
What struck me in the workshop wasn’t just the realization itself. It was the fact that it surfaced through exchange -- people bouncing ideas off each other, pushing on each other’s assumptions, arriving somewhere none of us would have gotten alone. That’s the thing an algorithm can’t replicate: the interpersonal negotiation of meaning. The back-and-forth where someone says “flourishing” and someone else says “whose flourishing?” and the whole room has to sit with the discomfort of not having a clean answer.
Human flourishing is the thing we’ve been arguing about for as long as we’ve been capable of argument. It is not a technical problem with a solution that experts can implement. It’s an ongoing, necessary negotiation about values -- and that negotiation only happens between people, face-to-face, willing to be changed by each other’s perspectives.
Leaders who deploy predictive systems have a responsibility here that can’t be delegated to the model. They’re the ones who decide what gets defined, who’s consulted, and whether the definition stays open to revision. That’s not a technical responsibility. It’s a human one.
Asking an algorithm to predict flourishing doesn’t resolve the negotiation. It just hides it inside a black box and calls the output a score.
So the next time someone pitches you a system that claims to measure, predict, or optimize for something as rich and contested as flourishing, or wellbeing, or fairness -- ask to see the definition. Ask whose definition it is. Ask what conversations led to it, and whether those conversations are still happening.
That’s where the real prediction lives: not in the model, but in the quality of the exchange that built it.
We have 4 more spots for the AI Literacy Salon on the 26th at Index in Greenpoint.

