Taste and Accountability
The durable human economic activities aren't about what AI can't do yet. They're about what choice requires.
When people ask what humans will do in an AI economy, the answers usually come in the form of capability claims. Humans are more creative. Humans have empathy. Humans are better at novel situations. These might be true today, but they're empirical claims — and empirical claims about AI capabilities have a short shelf life. Whatever humans can do that AI can't, check back next quarter.
But there are two economic activities that aren't about capability at all. They're structural. Taste — the capacity to choose well according to values you actually hold. And accountability — the willingness to bear consequences for your choices. These aren't things AI is bad at yet. They're things that require being a certain kind of entity.
Start with choice.
Choice only makes sense under constraint. If nothing is limited — if you have infinite time, infinite resources, if no option forecloses any other — then selecting between alternatives costs nothing. And if it costs nothing, it isn't really choosing. It's just... picking.
Human choice has weight because human life is finite. When you decide to learn Go instead of Rust, you're spending time you won't get back on one thing instead of another. When you pick an architecture for a system, you're foreclosing other architectures — not just theoretically, but practically, because you'll live with the consequences for years. Every real choice is made at the expense of other choices. That's what makes it a choice.
Accountability is downstream of choice. Taste is upstream. Both require finitude.
Accountability is what happens after you choose. You made a consequential decision and now you live in the world that decision created. The doctor who recommended a treatment plan sees the patient in follow-up. The engineer who signed off on the architecture gets paged when it fails at 4am. The founder who picked the wrong market watches the runway shrink. Accountability isn't a policy or a contract. It's the structure of being an entity that persists through time, for whom consequences accumulate.
Taste is what accumulates before you choose. It's the compressed residue of all your prior consequential decisions — the pattern-recognition that develops from choosing under constraint, living with results, and doing it again thousands of times. The senior engineer's instinct that a certain abstraction will cause pain isn't a logical derivation. It's scar tissue. It's the residue of having chosen wrong before, in ways that cost something real.
AI doesn't operate under constraint in any meaningful sense.
It has access to more information than any human will ever process. It doesn't experience time pressure. It doesn't get tired or desperate. It has never had to bet its career on one technology and live with that bet for five years. When it selects between options, nothing is foreclosed, nothing is irreversible, nothing has weight.
When I ask Claude to recommend a technology stack, it synthesizes patterns from training data and conversation context. It's doing something — something useful — but it isn't choosing in the sense that matters. If the recommendation turns out to be wrong, nothing happens to Claude. There's no version of Claude that carries that forward, that develops a scar-tissue instinct about that particular mistake, that updates its judgment through the weight of lived consequence.
For a lot of use cases, this is actually a feature, not a bug. Claude isn't biased by one bad experience. It gives you fresh analysis every time. But it also means Claude can't develop taste. It can only approximate taste by pattern-matching on the expressed preferences of people who have it.
The gap isn't capability. It's ontology.
And this matters for accountability too. When your oncologist recommends a treatment plan, something is happening beyond information transfer. That person has a medical license they spent a decade earning and can lose. They have malpractice insurance priced on their history of outcomes. They'll be the one looking you in the eye at follow-up if the treatment fails. They carry the accumulated weight of every patient they've treated. All of that structures their judgment in a way that pure intelligence doesn't replicate.
Accountability requires a persistent entity with continuity of experience, for whom consequences accumulate over time, who has skin in the game that isn't simulated. It requires being the kind of thing that can put a persistent self on the line. The lawyer who signs the brief, the doctor who signs the prescription, the engineer who signs off on the bridge — they're all doing something AI structurally cannot do.
Here's where the structural argument becomes clear.
It's not that AI is bad at choosing or bad at bearing consequences. It's that what AI does when it selects between options isn't choice in the sense that matters economically. And the kind of judgment that comes from accumulated consequential experience isn't available to something that doesn't accumulate experience through constraint.
To close this gap, you'd have to make AI finite, consequential, and persistent: a self that continues through time, that can only do one thing instead of another, for which choices foreclose alternatives, and for which outcomes have weight. But that would be a fundamentally different kind of entity from what we currently mean by AI. You wouldn't be building a tool anymore. You'd be creating a being.
That's why the durability here is structural rather than temporary. It's not "humans are still better at taste and accountability for now, but AI will catch up." The gap isn't a capability gap. It's a gap in what kind of entity you are. And you can't close an ontological gap with better training data.
In a world where execution is increasingly abundant — where AI can produce code, content, analysis at near-zero marginal cost — the highest-leverage activities are the ones upstream and downstream of execution. Choosing what to execute, based on values forged through constrained experience. And bearing the consequences of those choices, with a self that's genuinely on the line.
These aren't skills AI lacks. They're not even capabilities in the normal sense. They're the things that only make sense for entities that are finite, persistent, and shaped by their own histories of consequential choice.
Most discourse about humans and AI frames the question as: what can humans still do? But that's the wrong question. The right question is: what does being human mean for the structure of choice and consequence? And the answer is that taste and accountability aren't achievements to protect from automation. They're features of finitude. They're part of the package of being the kind of thing that lives once, chooses under constraint, and bears what follows.
That's not a moat that erodes with capability gains. It's just what choice requires.