Vibe Coding Didn't Democratize Software, It Tokenized It
As software development shifts from requiring specialized skills—built on multiple layers of technical understanding—to describing intent in plain English (or your language of choice), the act of producing software appears to become accessible to a much wider audience. The people best positioned to excel may not be well versed in software at all, but rather those who are good at expressing ideas clearly, thinking iteratively, and breaking problems down.
At first glance, that sounds like perhaps the most egalitarian shift our industry has ever seen.
But there’s a quieter change brewing while we’ve been distracted by the expanding capabilities of AI tools. As AI reshapes how software is written, it reintroduces something we spent decades deliberately removing: cost.
And once cost becomes the bottleneck, software stops being democratic very quickly.
The March Towards a Democratized Software Industry
Software development has always been full of friction. Over time, much of that friction was systematically reduced.
Languages and frameworks moved up the abstraction stack, making software less arcane. The internet created communities where knowledge could be shared freely. Software became modular, allowing developers to build on existing work and outsource the most specialized pieces.
Open source projects were especially transformative. An individual developer could run the same operating systems, databases, and tooling as a Fortune 500 company—and contribute fixes or features back upstream.
Containers made environments repeatable, and orchestrators like Kubernetes made workloads portable across clouds. Infrastructure-as-code tools like Terraform pushed against vendor lock-in and made cloud infrastructure reproducible.
The web itself pulled millions more people into software, allowing them to program at a higher level of abstraction, and to immediately share the results of their work with the world.
Ironically, for many developers the hardest barrier to entry eventually became the interview process, which was often decoupled from real day-to-day work, and in many cases artificially constrained access to available tools.
That environment parity led to a thriving industry, with a mix of individuals, startups, and large organizations.
The Old Gatekeepers Faded
Over time, some parts of software did become more expensive. Venture-funded SaaS products tightened free tiers. Infrastructure price declines slowed.
Still, things remained relatively balanced. A developer might not have access to a Datadog subscription for monitoring, but open source observability tools were good enough to get to real scale. Output was limited less by money than by curiosity, motivation, and persistence.
Even organizational scale wasn’t a guaranteed advantage. Small, focused teams often outperformed much larger ones, where coordination and operational drag set in. This dynamic helped topple incumbents and prevented permanent monopolies.
Cost rarely decided who could build.
The New World: Vibe Coding Changes What’s Scarce
AI-assisted development quietly inverts this equation.
Output is increasingly mediated by tokens. Code generation, architectural iteration, documentation, testing, debugging, and even security scanning are all metered. As models improve, fewer people seriously downplay the impact of this shift.
A company may have no issue spending $500 or more per developer each month on AI tools and frontier models if they believe it to be an alternative to growing their headcount. Organizations can pay for early access, higher usage limits, and more capable models—because tokens now compress planning, execution, QA, and documentation into one spendable resource.
Tokens determine how much you can iterate, how many experiments you can run in parallel, and how quickly you can converge on a solution.
If tokens are cheap or free, exploration flourishes. When they aren’t, iteration becomes constrained.
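To make the constraint concrete, here is a back-of-the-envelope sketch. Every number in it (the monthly budgets, the per-million-token price, the tokens consumed per iteration) is a hypothetical assumption for illustration, not a quoted rate from any provider.

```python
# Hypothetical: how many generate-review-revise cycles does a token budget buy?
# All prices and sizes below are illustrative assumptions, not real rates.

def iterations_per_month(budget_usd, price_per_mtok, tokens_per_iteration):
    """Number of full iteration cycles a monthly budget covers."""
    cost_per_iteration = (tokens_per_iteration / 1_000_000) * price_per_mtok
    return int(budget_usd / cost_per_iteration)

# A hobbyist on a $20/month plan vs. a funded seat at $500/month,
# assuming ~$10 per million tokens and ~50k tokens per agentic iteration.
hobbyist = iterations_per_month(20, 10, 50_000)
funded = iterations_per_month(500, 10, 50_000)

print(hobbyist)  # 40 iterations per month
print(funded)    # 1000 iterations per month
```

Under these made-up numbers, the budget gap translates directly into a 25x gap in how many experiments each developer can afford to run.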
Today, a skilled developer can still outperform frontier models on judgment and correctness, but no longer on speed. For some classes of problems, the speed gap is already measured in orders of magnitude.
Onwards Towards a Compute-Rich Future
It doesn’t require much imagination to see where this leads.
As models continue improving, organizations may stop treating generative AI as a productivity tool and start treating it as a different kind of development resource entirely.
An independent developer might run a task autonomously for seconds or minutes. A well-funded company could run dozens of agents in parallel for days—deeply exploring multiple approaches, testing aggressively, and converging on solutions through sheer computational volume.
These tools aren’t mainstream yet, but frontier model labs are already experimenting with them, claiming significant progress.
At that point, the relevant metric won’t be team size or developer talent—it’ll be tokens per developer. And the organizational overhead that once slowed large companies down—meetings, context switching, coordination costs—starts to matter less. AI agents don’t have meetings. They don’t burn out. They don’t have egos or get into arguments.
A developer with unlimited agent access isn’t just more productive than one without. They’re operating according to a completely different model.
The gap between what two programmers can produce may soon be measured less by skill or effort, and more by how many tokens they can afford to burn.
The Economic Gravity of Tokens
Frontier models cost hundreds of millions of dollars to train. Even after training, inference and operations require massive GPU infrastructure. Today, much of that cost is hidden behind VC subsidies.
But that phase rarely lasts.
We’ve seen this pattern before:
- subsidized abundance
- market consolidation
- rising prices
Ride sharing followed it. Food delivery followed it. SaaS followed it. Software infrastructure won’t be the exception.
AI companies do not get to stay unprofitable forever, and the recent push towards advertising and, potentially, adult content may be a sign that the bill is coming due.
What Actually Gets Democratized
Vibe coding lowers the barrier to entry while raising the barrier to scale.
It helps individuals ship something once. It helps organizations ship continuously. As costs rise, the gap between those two groups widens.
The Utopian Outcome Is Still Possible
Fortunately, this future isn’t inevitable.
A genuinely democratic outcome depends on a few things going right:
- Model progress plateauing. If capabilities stabilize, training costs amortize and spending stops being the primary differentiator.
- Open models staying competitive. Even slightly weaker open models provide alternatives, price anchors, and leverage against rent extraction.
- Inference moving local. As hardware acceleration improves, more inference can shift from centralized cloud services to local machines. The cost of inference doesn’t disappear, but its shape changes: instead of paying per token, developers amortize inference across hardware they already own. The marginal cost of iteration approaches zero, and pricing power shifts away from model providers and back toward users.
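That shape change can be sketched with a simple amortization comparison. Every number below (hardware price, lifetime, monthly token volume, cloud rate) is a hypothetical assumption chosen for illustration, not a real price.

```python
# Hypothetical: metered cloud tokens vs. amortized local hardware.
# Every number here is an assumption for illustration.

def cloud_cost(tokens, price_per_mtok):
    """Dollar cost of running `tokens` through a metered cloud API."""
    return tokens / 1_000_000 * price_per_mtok

def local_cost_per_mtok(hardware_usd, lifetime_months, mtok_per_month):
    """Effective per-million-token cost of hardware you already own."""
    return hardware_usd / (lifetime_months * mtok_per_month)

# A $2,000 workstation used for 36 months at 500M tokens/month
# works out to roughly $0.11 per million tokens.
effective = local_cost_per_mtok(2000, 36, 500)
print(round(effective, 2))  # 0.11

# Against an assumed $10 per million tokens in the cloud, the
# marginal cost of iteration drops by roughly 90x.
print(int(10 / effective))  # 90
```

The point of the sketch is not the specific figures but the structure: once the cost is a fixed, sunk purchase rather than a running meter, each additional iteration is effectively free.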
Taken together, these forces could restore an equilibrium where creativity, judgment, and persistence once again matter more than spend.
What’s at Stake
There is a lot at stake. AI coding does not automatically democratize software; in fact, it risks doing the opposite.
We are clearly in a period of significant transition, and transitions always leave some groups behind. Whether this era expands opportunity or concentrates it depends largely on economics: competition, openness, and who controls the meter.
If we care about democratizing software development, we should pay close attention to cost—before it quietly becomes the gatekeeper again.