This Time Isn't Different
How one fact about transformer architecture becomes a macroeconomic prediction
In Browser Use Is a Robotics Problem, I argued that AI's inability to reliably interact with graphical interfaces isn't a capability gap — it's a category error. Browser use requires continuous sensorimotor control: spatial precision, real-time feedback, error recovery in a changing environment. Transformers are discrete symbolic reasoners. There is literally no place in the architecture for continuous feedback to happen. The problem isn't that the models need to get smarter. It's that the computation they're being asked to do is the wrong type of computation for what they are.
That essay was about a technical limitation. This one is about where it leads — because the consequences cascade much further than I initially realized.
Here's the thing I didn't fully draw out: if browser agents actually worked, the economic story would change completely.
If AI could reliably navigate a GUI the way a human does — click the right buttons, fill in the right fields, handle dropdowns and modals and scroll positions — then it could slot into any existing software system without that system's cooperation. Epic doesn't need to build an API. SAP doesn't need to open up. The legacy claims processing platform doesn't need to change anything. The AI just uses the software the way a human employee would, except faster and cheaper and around the clock.
In that world, the only barrier to AI automation is whether the AI is smart enough to do the job. And for a huge range of white-collar work — insurance claims, accounts payable, HR administration, medical billing — the answer is already yes. The intelligence is there. If browser agents worked, we'd be looking at rapid, bottom-up automation across every sector that runs on desktop software. Any worker's job that lives inside a GUI is immediately accessible.
But browser agents don't work. Not because the models are too dumb, but because it's a robotics problem. The screenshot-reason-click loop is fundamentally the wrong computation for the task. And that's not getting solved by scaling up the same architecture. It requires something different — continuous perception, learned motor primitives, spatial memory — solutions that look like robotics, not language modeling.
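The mismatch in that loop can be made concrete with a toy sketch. This isn't real browser automation — every name and number below is invented for illustration — but it shows the structural problem: a discrete agent observes a frozen snapshot, spends time reasoning over it, and then acts on coordinates the interface has already moved past.

```python
# Toy illustration of the screenshot-reason-click loop (all values invented).
# A UI element drifts continuously (think: a page reflowing as content loads)
# while the agent perceives once, reasons, then acts on the stale frame.

REASONING_DELAY = 0.5   # seconds spent on one "forward pass" over the screenshot
DRIFT_SPEED = 40.0      # pixels per second the element moves during that time

def target_position(t):
    """Where the UI element actually is at time t."""
    return 100.0 + DRIFT_SPEED * t

def discrete_agent_click(t):
    """Screenshot at time t, reason for REASONING_DELAY, then click the stale coordinate."""
    observed = target_position(t)      # frozen snapshot of the world
    act_time = t + REASONING_DELAY     # the world keeps moving while the model thinks
    actual = target_position(act_time)
    return observed, actual

observed, actual = discrete_agent_click(t=0.0)
miss = actual - observed
print(f"clicked at {observed:.0f}px, target now at {actual:.0f}px, missed by {miss:.0f}px")
# → clicked at 100px, target now at 120px, missed by 20px
```

A human (or a robot with closed-loop control) corrects continuously during the movement; this loop has no step where a correction could enter, which is the architectural point.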
Which means the bypass doesn't exist. AI can't just slot in. And that single technical fact — a discrete architecture can't do continuous control — becomes load-bearing for the entire pace of economic transformation.
Because now look at what's behind the wall.
Legacy enterprise vendors — Epic in healthcare, SAP in manufacturing, Oracle in finance, and dozens of others running huge chunks of the economy — have a business model that actively resists the alternative path. If AI can't navigate their GUI, the other option is a programmatic interface. An API. But opening a clean, comprehensive API is an existential threat to their business model and they know it.
Their moat is lock-in. They've spent decades building systems that are deeply embedded in their customers' operations, with proprietary data formats, complex configuration, and switching costs measured in years and millions of dollars. If an AI agent can do everything through an API, then the agent becomes the interface and the platform becomes commoditized infrastructure. The platform that was the product becomes a detail underneath something else.
So they won't open up. They'll bolt on their own AI features — "Now with AI-powered clinical insights!" — because that lets them charge more while keeping the lock-in. But the kind of deep, comprehensive API access that would let a third-party AI agent actually operate the system? That's giving away the store.
The robotics problem preserves the institutional wall. The institutional wall gates the pace of adoption. One architectural fact about how attention heads process information cascades into the reason millions of white-collar jobs are safe for now.
And the institutional wall is thick.
Consider what bottom-up AI adoption actually requires. The optimistic story goes: individual workers start using AI tools, they become more productive, adoption spreads organically. And that's roughly what's happening for workers who operate in open environments — text editors, browsers, email. Software engineers, marketers, writers. Their workflows live in text, their tools are accessible, and AI walks right in.
But for any worker whose job runs through a proprietary enterprise platform, bottom-up adoption hits the wall immediately. A claims adjuster can't just start using an AI agent to handle their workflow because the workflow lives inside a legacy platform and there's no way for the AI to touch it. The browser can't be automated reliably. The API doesn't exist or is deliberately limited. The worker is locked in and the AI is locked out.
So adoption has to come from the top. From the decision to switch platforms entirely, or from the vendor deciding to open up, or from the enterprise negotiating custom API access. All of which are slow, expensive, politically fraught processes that happen on procurement timelines, not technology timelines.
There's a human capital dynamic that slows it further. The people whose jobs would be automated have every incentive to resist or slow-walk adoption, and they're often the ones with the institutional knowledge about how the current systems work. The person who knows all the quirks of the Epic configuration is simultaneously the person most threatened by AI and the person you need most during a migration. That's a very effective brake.
Now add the incentive asymmetry.
The places where AI adoption is fastest are the places where it's additive — where AI lets you do things you literally couldn't do before. Drug discovery, materials science, financial modeling, scientific research. Pfizer isn't going to fire half its researchers because AI speeds up discovery. They're going to discover more drugs. The economic value creation is additive. The people working there see their roles change but the headcount pressure is low because the pie is growing. And there's a natural internal champion: scientists and researchers excited about AI because it makes their work more interesting.
The places where AI would displace the most labor — insurance claims processing, accounts payable, HR administration, medical billing, all that unsexy middle office work — are precisely the domains where adoption is slowest. The legacy vendor lock-in. The migration costs. The fact that cutting costs is a less exciting boardroom conversation than growing revenue. The organizational inertia of mid-market companies that don't have innovation teams or CTO-level champions for this. And the people most threatened being the most effective resisters.
Growth pulls adoption forward. Cost-cutting hits every friction at once. The labor displacement everyone worries about is concentrated in exactly the part of the economy best insulated from rapid change.
All of which leads to an analogy that I think is more precise than the ones currently circulating.
The AI hype narrative says this is like the internet — everything changes in a decade. The AI skeptic narrative says the bubble will pop. Neither is right.
This is electrification.
Electricity was obviously superior to steam from day one. Nobody seriously argued that steam was better. The technology was clearly transformative, and it was — it reshaped every sector of the economy, changed how cities were built, altered the entire structure of daily life.
Full adoption took forty years.
Not because of capability skepticism. Because of rewiring costs. Factories built for steam had mechanical power transmission — a central engine driving shafts and belts through the building. Converting to electric motors meant ripping out the entire power infrastructure and redesigning the factory floor. The factories that were built for electric from scratch had enormous advantages — they could put motors at each machine, redesign workflows, optimize layouts. But retrofitting the existing factories took decades because the switching costs were massive and the existing setup, while inferior, still worked.
That's exactly the landscape for AI. Software development, content creation, scientific research — domains that were already text-native, already API-accessible, already structured for the kind of work AI does natively — these are the factories built for electric. They're adopting immediately. They're already transformed.
Healthcare running on Epic. Manufacturing running on SAP. Finance running on Oracle. Government running on systems older than the people operating them. These are the steam factories. The technology to transform them exists. The intelligence is there. But the rewiring — the platform migrations, the API buildouts, the organizational restructuring, the workforce transitions — that takes decades. Not because anyone is wrong about the technology. Because the institutional apparatus is the same institutional apparatus it's always been.
"This time is different" is almost always wrong. And the people saying it are usually technologists reasoning from the capability of the technology in isolation rather than from the system the technology has to be adopted into. They don't personally experience much institutional friction — they work in environments optimized for rapid adoption by design — so they underweight it.
The better frame: the technology is genuinely different this time. But the humans and institutions it has to flow through are the same as they've always been. And historically, the humans and institutions are the bottleneck, not the technology.
What's funny is that this framing is actually more bullish on AI in the long run, not less. The hype narrative sets up expectations for rapid transformation, which leads to the inevitable disappointment cycle — "AI didn't change everything in three years, therefore it was overhyped." Investment pulls back. Talent disperses. Useful projects get defunded. The electrification framing says: this takes twenty to forty years to fully play out, the impact is enormous but unevenly distributed across time and sectors, and the real value accrues to people who understand the adoption dynamics rather than just the technology. That's a more patient thesis but it's a more robust one. It doesn't depend on everything changing at once.
Here's the landing, and it's the thing I didn't expect to arrive at when I started thinking about why AI can't click a dropdown.
The walls aren't failures. They're features.
The robotics problem — AI's inability to do continuous sensorimotor control through visual interfaces — isn't just a technical limitation. It's the thing that preserves the institutional barriers that slow adoption to a survivable pace. If browser agents worked perfectly, if AI could slot into every legacy system through the GUI, the displacement would be sudden and brutal. Millions of white-collar workers whose jobs run through proprietary platforms would face automation on technology timelines — months, not years.
Instead, they face it on institutional timelines. Platform migrations that take years. Vendor negotiations that take quarters. Organizational restructuring that happens incrementally. People retire out naturally. New entrants train for different roles. The economy adjusts.
Every past technological revolution needed this kind of shock absorption and never had enough of it. The transition from agricultural to industrial labor was devastating precisely because it happened faster than institutions could adapt. The AI transition has something those didn't: a set of technical and institutional frictions that naturally pace the transformation to something humans and institutions can absorb.
We already live in a world where CRISPR coexists with fax machines. Where SpaceX lands rockets autonomously while the DMV requires you to show up in person with a physical document. The frontier and the middle have never moved at the same speed. AI doesn't break that pattern. It extends it.
Blockbuster drugs and maybe fusion. Insurance claims still processed through legacy platforms. Which is, incidentally, not so different from what we have now — amazing discoveries alongside middle office work still sometimes happening on paper or by fax. The future isn't evenly distributed and never has been. The specific mechanism that keeps it uneven this time — a discrete architecture that can't do continuous control, preserving institutional walls that gate the pace of change — is new. The pattern is ancient.
The technology is genuinely different. The humans are the same. And the humans, as always, are the bottleneck.
This time isn't different. Thank god.