All Intelligence Is Jagged
Smoothness is scaffolding, not intelligence
Ethan Mollick coined the term "jagged intelligence" to describe AI's capability profile: superhuman at some tasks, inexplicably bad at others. GPT-5.4 can one-shot a working app but fumbles a calendar event. Claude writes legal briefs better than junior associates but can't reliably click a dropdown. The frontier is brilliant and incompetent at the same time, and nobody knows where the boundary falls until they hit it.
This is treated as a uniquely AI problem. A limitation to be solved. Evidence that the models aren't really intelligent yet, because real intelligence is smooth — it doesn't have these weird gaps.
But real intelligence does have these weird gaps. Every intelligence that has ever existed has them. What we're calling "jagged" is just what intelligence looks like before the scaffolding hides it.
Start with humans.
A person can navigate a crowded room while holding a conversation, tracking social dynamics, monitoring their body language, and planning their route to the bar — all simultaneously, all unconsciously. This is an extraordinary feat of real-time multimodal computation. No AI system on Earth can do it.
That same person cannot multiply 847 by 293 in their head. Not quickly, not reliably, and for most people, not at all. This is a trivial computation. A calculator from 1972 does it instantly.
That's not a smooth capability profile. That's jagged. Spectacularly jagged. Superhuman at embodied spatial-social navigation, subhuman at three-digit multiplication. The gap between these two capabilities is enormous and no amount of training closes it, because they require fundamentally different kinds of computation and human neural architecture is native to one and not the other.
But nobody calls humans "jaggedly intelligent." Nobody writes papers about the puzzling inconsistency of human cognition. Nobody says "well, humans aren't really intelligent because they can't multiply large numbers."
Why not? Because we built scaffolding.
The calculator. The abacus before it. Writing before that. Every cognitive tool humans have ever built exists because human intelligence is jagged and the scaffolding fills the gaps.
Writing scaffolds memory. Without it, you can hold maybe seven items in working memory for about twenty seconds. With it, you can maintain arbitrarily complex thought across years. That's not a minor assist. That's the difference between oral culture and civilization. Writing didn't make humans smarter. It filled a gap in a jagged capability profile so completely that we forgot the gap was there.
Mathematics scaffolds quantitative reasoning. Humans have approximate number sense — we can eyeball "roughly thirty" versus "roughly sixty." Precise calculation requires notation systems, algorithms, tools. Every piece of mathematical infrastructure exists because the native hardware can't do it.
Institutions scaffold coordination. A single human can maintain maybe 150 stable relationships (Dunbar's number). A corporation coordinates thousands. Not because the humans inside it transcended their cognitive limits, but because the organizational structure — roles, hierarchies, processes, communication protocols — is scaffolding that makes a jagged social intelligence look smooth at scale.
The person who "can do math" is a person plus notation plus algorithms plus a calculator plus twelve years of education. The person who "can manage a team of fifty" is a person plus org charts plus email plus calendars plus HR policies plus centuries of accumulated management practice. Strip the scaffolding and you get the native capability: an ape who can count to seven and maintain a hundred and fifty relationships.
That's the jagged intelligence. Everything else is infrastructure.
Now look at what this means for the concept of "general intelligence."
Human intelligence looks general-purpose. We do language, math, spatial reasoning, social cognition, abstract planning, creative work. We seem to do everything, and we do it all at a level that feels integrated and smooth.
But we don't do everything. The scaffolding does. Human intelligence is natively good at a specific set of things: embodied navigation, social cognition, pattern recognition, language (spoken, not written — writing was invented five thousand years ago and we still need twelve years of school to learn it well). Everything else is a compensatory technology bolted on to fill gaps in what the native intelligence can't do.
"General intelligence" is what jagged intelligence looks like after ten thousand years of infrastructure development. It's not a property of the mind. It's a property of the mind-plus-scaffolding system.
This is where the AI discourse goes wrong.
When people say "AI is jaggedly intelligent," they're implicitly comparing it to a standard — human intelligence — that they believe is smooth. The comparison suggests a deficit: AI hasn't gotten there yet. It's inconsistent. It has weird gaps. Once those gaps fill in, it'll achieve the smoothness that humans have, and then we'll call it general.
But human smoothness is an illusion. It's scaffolding all the way down. The comparison isn't "jagged AI" versus "smooth human." It's "jagged AI with three years of scaffolding" versus "jagged human with ten thousand years of scaffolding."
Of course the human system looks smoother. It's had over three thousand times longer to build infrastructure around its gaps.
This is structurally identical to the argument in the language data tweet: comparing a child's 100 million words to an LLM's 15 trillion tokens looks damning — until you account for the child's 25 petabytes of multimodal sensory data and the genome's 750MB of evolutionary pretraining. The "efficiency gap" only exists if you ignore everything you're not measuring. The "smoothness gap" only exists if you ignore the scaffolding.
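To make that accounting concrete, here is a back-of-the-envelope sketch of the ratios cited above. All figures are the essay's illustrative estimates (and an assumed ~4 bytes per token), not measurements:

```python
# Back-of-the-envelope comparison of training "data budgets".
# Every number here is an illustrative estimate from the essay, not a measurement.

child_words = 100e6          # words a child hears (estimate)
llm_tokens = 15e12           # tokens in a frontier LLM's corpus (estimate)
bytes_per_token = 4          # rough average for English text (assumption)

child_sensory_bytes = 25e15  # ~25 PB of multimodal sensory input (estimate)
genome_bytes = 750e6         # ~750 MB of "evolutionary pretraining" (estimate)
llm_text_bytes = llm_tokens * bytes_per_token  # ~60 TB

# Naive comparison: the LLM sees vastly more *language* data.
naive_ratio = llm_tokens / child_words

# Fuller comparison: the child's sensory stream dwarfs the text corpus.
full_ratio = child_sensory_bytes / llm_text_bytes

print(f"LLM tokens per child word: {naive_ratio:,.0f}x")
print(f"Child sensory bytes per LLM text byte: {full_ratio:,.0f}x")
```

The naive ratio comes out around 150,000× in the LLM's favor; include the sensory stream and the comparison flips to roughly 400× in the child's favor. The "efficiency gap" is an artifact of which column you total.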
The interesting question isn't "when will AI become generally intelligent." It's "how fast can we build scaffolding for AI's specific jaggedness profile?"
And the answer is: fast. Because building scaffolding for AI is a software problem, and software is the domain AI is already best at.
In The Text Distance, Patrick argued that AI transforms text-native domains first because text is its native medium. Building AI scaffolding is text-native work — APIs, integrations, tool use, agent frameworks, programmatic interfaces. Every API built, every tool-use protocol developed, every agent framework deployed is scaffolding that fills a gap in AI's capability profile.
Human scaffolding accumulated over millennia because the scaffolding itself had to be built by jagged human minds, slowly, with enormous effort. Writing took centuries to develop. Mathematics took millennia. Institutions evolved over generations.
AI scaffolding is being built by AI. The recursive acceleration is obvious: the better AI gets at building software, the faster it builds the scaffolding that makes it less jagged, which makes it better at building software. This is the same recursive move from The Text Distance — AI closing its own text distance — but now applied to its entire capability profile.
The jaggedness profiles are mirror images, and this matters.
In Wordcels and Shape Rotators, Patrick drew the line between embodied intelligence and symbolic intelligence. Humans are natively embodied: spatial reasoning, sensorimotor control, social cognition, physical interaction. Our symbolic capabilities — math, logic, programming, formal reasoning — are the jagged parts, the gaps we've spent millennia scaffolding.
LLMs are natively symbolic: language, logic, code, formal reasoning, cross-domain conceptual connection. Their embodied capabilities — spatial interaction, continuous control, sensorimotor loops — are the jagged parts.
The profiles are almost perfectly inverted. What humans do natively, AI struggles with. What AI does natively, humans need scaffolding for.
This is why the browser use problem from Browser Use Is a Robotics Problem is so stubborn. It's not a capability that's slightly behind the frontier. It sits in the deepest trough of AI's jaggedness profile — the embodied/sensorimotor gap that is to AI what three-digit multiplication is to humans. You can build scaffolding for it (robotics solutions, continuous perception, motor primitives), but you can't make the native architecture do it any more than you can train a human to natively multiply large numbers without tools.
And this is why This Time Isn't Different is right about adoption timelines. The scaffolding that would let AI operate in embodied domains — the institutional interfaces, the API layers, the programmatic access to systems currently locked behind GUIs — is the equivalent of building writing systems and mathematical notation. It's infrastructure that converts an alien jaggedness profile into something that looks smooth. That infrastructure is being built. But it's being built through institutions, and institutions move at institutional speed.
Here's the part that should change how you think about AGI.
The benchmark discourse implicitly treats intelligence as a single dimension. A model scores 75% on OSWorld, 50% on SWE-bench, 90% on MMLU. The scores get plotted on a line. The line goes up. People extrapolate to 100% on everything and call that AGI.
But intelligence isn't a line. It's a surface with peaks and valleys. Every intelligence — human, animal, artificial — has a characteristic topography. The peaks are where the native architecture excels. The valleys are the gaps. What we call "general intelligence" is what the surface looks like after you've filled the valleys with scaffolding until everything is level.
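The topography point can be shown with a toy example. Both profiles below are made up for illustration; the point is only that a single averaged score cannot distinguish a level surface from a jagged one:

```python
# Toy illustration: one averaged benchmark score hides the shape of a
# capability profile. All scores are invented for the example.
from statistics import mean, pstdev

benchmarks = ["OSWorld", "SWE-bench", "MMLU", "calendar", "dropdown"]

smooth_model = [0.70, 0.70, 0.70, 0.70, 0.70]   # hypothetical "level" profile
jagged_model = [0.95, 0.90, 0.98, 0.35, 0.32]   # hypothetical peaks and valleys

for name, scores in [("smooth", smooth_model), ("jagged", jagged_model)]:
    # Same mean, very different spread: the line hides the valleys.
    print(f"{name}: mean={mean(scores):.2f}  spread={pstdev(scores):.2f}")
```

Both models average 0.70, so on a one-dimensional plot they are the same point; only the spread reveals that one of them has valleys.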
AGI, defined as "AI that's good at everything," is the wrong target. It's asking for a mind with no valleys. No mind has ever had no valleys. What minds have is scaffolding. And the question for AI is the same question it's always been for humans: not "how do we eliminate the gaps?" but "how do we build infrastructure that makes the gaps invisible?"
We've been doing this for ourselves for ten thousand years. We're just starting to do it for AI. The trajectory is obvious. The timeline is not, because the timeline depends on institutions, not technology.
Mollick's "jagged intelligence" is a useful observation. But it implies the wrong frame: that AI is uniquely inconsistent, that the jaggedness is a problem to be solved, that smooth intelligence is the normal state and AI hasn't achieved it yet.
The reframe: all intelligence is jagged. Always has been. The smoothness you see in human capability isn't intelligence. It's ten thousand years of infrastructure hiding the gaps. The human mind is a spectacular cognitive achievement wrapped in so much scaffolding that we forgot what the raw capability actually looks like.
AI is a different spectacular cognitive achievement, three years into its scaffolding build. Judging it against a ten-thousand-year-old infrastructure project and finding it inconsistent is like judging a civilization that just invented writing against one with universities. The gap is real. It's not a gap in intelligence. It's a gap in scaffolding.
Give it time. Or rather — give it tools. It's building them itself.