One of the strangest rituals of the current AI moment is how hard the industry tries to insist that everything is new. That this is a clean break from history, a technological revolution with no precedent and no comparable cautionary tales. But every time I peel back the slick messaging and look at the incentives underneath, what I see isn’t unfamiliar at all. It’s the same short-term thinking that’s shown up again and again in empires, corporate dynasties, and economic systems on the brink of overextension.
And yes, AI is a real technological shift. The models are improving. The capabilities are expanding. Tools that were science fiction twenty years ago are now part of daily workflows. But the driving force behind the current frenzy isn’t innovation—it’s late-stage capitalism’s compulsion to extract short-term gains at the expense of long-term stability. This isn’t a glitch or a deviation. It’s the system behaving exactly as designed.
Rome: When Short-Term Stability Became Long-Term Collapse
Take Rome. For centuries, the empire relied on the hard currency of precious metals. But as expansion slowed and military costs ballooned, emperors debased the currency—mixing in cheaper alloys, reducing silver content, and minting more coins to cover immediate expenses. It worked temporarily. Soldiers were paid. Grain was purchased. The machine kept running.
But the long-term consequences were catastrophic: inflation, market distortions, and eventually a spiral of mistrust in the economic system itself.
Tell me that doesn’t feel familiar.
In AI today, companies aren’t debasing currency; they’re debasing value. They’re flooding the market with AI-infused products not because they’re needed, but because investor expectations demand a growth story. The product doesn’t need to be good; it just needs to be “AI-driven.” The goal isn’t stability. It’s to appease the modern legions—shareholders, analysts, and VCs. And the long-term economic distortions are already appearing: depressed wages, automated mediocrity, and the devaluation of human expertise.
Spain’s Silver: A Resource Windfall That Became a Trap
Then there’s Spain in the 16th century. The New World silver boom should have been a lasting competitive advantage. Instead, it became a trap. The Spanish crown was so high on silver that it never invested in domestic industry. Why build manufacturing capacity when you can just ship in more bullion? Why innovate when resource extraction gave you the illusion of endless wealth?
The result: economic stagnation, dependency on foreign goods, and a hollowed-out internal economy. When the silver shipments dwindled, the house of cards collapsed.
Now look at AI. Tech leaders are treating data—and cheap human-generated content—as their silver. Why build sustainable talent pipelines when you can train models on existing work? Why pay for innovation when you can scrape it from the internet for free? It feels like growth, but it’s not investment. It’s parasitic dependence, and it leads to the same collapse Spain suffered: a brittle system that can’t regenerate value once the accessible resource is exhausted.
The East India Company: Short-Term Profit as a Governance Model
Or consider the British East India Company. It wasn’t just a trading network; it was a corporation given governmental authority. And like any corporation, it prioritized quarterly returns—except those returns came from taxation, land rents, and forced restructuring of entire societies. The company didn’t reinvest in local infrastructure except where profit demanded it. It extracted until famine, revolt, or political instability made the region ungovernable, and then passed the costs on to the British government, which eventually had to step in to clean up the mess.
That logic should feel eerily familiar to anyone watching the AI boom:
- Extract labor without accountability
- Extract data without compensation
- Extract natural resources without limits
- Extract public infrastructure (power, water, bandwidth) that someone else paid for
- And let society pick up the long-term consequences
AI companies are not yet governing in a formal sense, but they’re already shaping the labor market, the creative economy, and the information landscape with about as much oversight as the East India Company operated under in Bengal.
The Dutch Tulip Mania and the South Sea Bubble: Innovation Reduced to Speculation
We can also look at the 17th-century Dutch tulip craze or the 18th-century South Sea Bubble—early examples of financial markets losing touch with reality. Prices skyrocketed because people believed prices would skyrocket. Investors bought tulip bulbs not to plant or breed but to flip. The South Sea Company inflated its value through promises of future monopoly and expansion that never materialized.
Replace “tulip bulbs” with “GPUs,” “LLM-based startups,” or “AI productivity boosters,” and suddenly the comparison doesn’t feel so distant.
Both historical bubbles were driven by:
- easy credit
- speculative frenzy
- exaggerated claims
- lack of regulatory guardrails
- and total disregard for fundamentals
When reality finally intruded—market corrections, supply shortages, political pushback—the crashes were spectacular.
We’re already seeing cracks in AI markets: spiraling compute costs, unproven revenue models, infrastructure strains, and a growing disconnect between investment and actual end-user value.
Modern Industrial Catastrophes: Inattention to Systemic Risk
Fast-forward to the modern era: the Challenger disaster, the Chernobyl meltdown, BP’s Deepwater Horizon spill. These weren’t accidents caused purely by technical failures. They were failures of governance—decisions driven by pressure to meet timelines, cut costs, or maintain political image. Engineers raised red flags. Managers ignored them.
“Just get it done” was the fatal directive.
Tell me you can’t see echoes of this in the current AI industry:
- Red flags about water usage
- Red flags about electrical grid stability
- Red flags about training data ethics
- Red flags about labor exploitation
- Red flags about misinformation risk
- Red flags about monopolistic control
Warnings are treated not as problems but as “reputational management issues.” And every time a risk is raised, the response is the same as it was in those historical cases: delay, downplay, or deflect.
The Bottom Line: AI Isn’t New—It’s Accelerated History
The tech shift is real. But the logic shaping it is ancient. The AI boom is late-stage capitalism stripped to its core:
- Chase exponential growth even when it’s mathematically impossible.
- Extract resources faster than they can be replenished.
- Offload risks to the public.
- Celebrate disruption while ignoring destabilization.
- Call it “innovation” even when it’s just exploitation in a new outfit.
The difference now is speed and scale. Rome had centuries before the consequences unraveled. Spain had generations. The East India Company had decades. The South Sea bubble rose and collapsed within a year. Tulip mania lasted months.
AI, under late-stage capitalism, compresses the entire cycle—boom, extraction, destabilization, collapse—into quarters.
The long-term costs aren’t hypothetical. They’re accumulating right now. They just haven’t hit the people making the decisions. At least not yet.
