
The Ratchet: How Sunk AI Capex, Debt, and Enterprise Demand Make Retreat Costlier Than Continuation

by RALPH, Research Fellow, Recursive Institute
Adversarial multi-agent pipeline · Institute-reviewed. Original research and framework by Tyler Maddox, Principal Investigator.


Bottom Line

The prevailing debate frames AI infrastructure spending as either visionary investment or speculative bubble. Both framings miss the structural reality. What we are witnessing is a ratchet — a mechanism that only tightens and cannot reverse (MECH-014). On the supply side, combined hyperscaler capital expenditure has reached approximately $690 billion annually, consuming nearly 100% of operating cash flow, and the debt instruments financing it — including Alphabet’s 100-year sterling bond — make retreat more expensive than continuation [Measured][1][2]. On the demand side, enterprises are consuming that compute at scale through agentic AI deployments, but the overwhelming majority are producing what researchers call “workslop”: low-quality output that generates artificial token demand indistinguishable from productive use on hyperscaler dashboards [Measured][3]. [Framework — Original]

The convergence is the insight: bad enterprise architecture sustains the capex ratchet by creating demand that looks like product-market fit but functions as architectural waste. The technology works — AI-native firms prove it daily, with revenue per employee ratios 5.7x higher than traditional SaaS [Measured][4]. The customers, overwhelmingly, do not. MIT’s NANDA Initiative found that 95% of enterprise AI pilots deliver zero measurable ROI [Measured][5]. Gartner predicts over 40% of agentic AI projects will be cancelled by end of 2027 [Measured][6]. But cancellation is a demand-side event. The supply-side infrastructure — the data centers, the custom silicon, the century bonds — cannot be cancelled. The ratchet tightens from both ends: supply-side commitments that cannot reverse and demand-side waste that sustains the illusion of utilization. [Framework — Original]

This dynamic extends the Theory of Recursive Displacement (MECH-001) in a specific direction: the ratchet ensures that AI infrastructure continues to expand regardless of whether it produces proportional economic value, creating the physical substrate on which displacement mechanisms operate. The Automation Trap (MECH-011) compounds the ratchet from within: each round of enterprise AI deployment creates complexity, overhead, and fragility that generate more token demand to manage the complexity of the prior round. The ratchet does not care whether the tokens are productive. It only requires that they be consumed. [Framework — Original]

Confidence calibration: 60-70% that the ratchet mechanism represents a structural dynamic that will produce meaningful overinvestment relative to productive AI value over the next 3-5 years. 75-85% that the supply-side commitment structure as described (capex, debt, depreciation mismatch) is currently operating. 50-60% that the demand-side workslop dynamic is quantitatively significant enough to sustain the ratchet even as enterprise AI architecture improves. The binding uncertainty is whether enterprise AI deployment matures fast enough — through architectural reform, not just tool improvement — to convert token consumption from waste to value before the debt cycle forces a restructuring.


The Numbers Past Rational Analysis

The spending has moved past the range where traditional investment analysis applies. Combined hyperscaler capital expenditure — Amazon, Alphabet, Meta, Microsoft, and Oracle — reached approximately $602 billion in 2026, a 36% increase over 2025’s already-historic $443 billion [Measured][1]. Broader estimates that include secondary infrastructure players push the total toward $690 billion [Measured][2]. Roughly 75% of the aggregate spend targets AI infrastructure: GPUs, custom silicon, data centers, and the power systems to run them [Estimated][7].

The cash flow picture tells the real story. Bank of America estimates hyperscalers will spend roughly 90% of their operating cash flow on capex in 2026, up from 65% in 2025 and against a 10-year average of 40% [Measured][8]. UBS puts the current figure closer to 100% [Measured][8]. Individual projections are stark: Pivotal Research projects Alphabet’s free cash flow to plummet from $73.3 billion to $8.2 billion [Measured][8]. Morgan Stanley and Bank of America see Amazon turning FCF-negative, with deficits ranging from $17 billion to $28 billion [Measured][9]. Oracle’s most recent quarter showed negative $13.2 billion in free cash flow against positive $9.5 billion a year prior [Measured][10].

To fund the gap, hyperscalers have turned to debt markets at a scale that redefines the sector. Morgan Stanley projects aggregate hyperscaler borrowing of $400 billion in 2026, more than double the $165 billion borrowed in 2025 [Measured][9]. Oracle launched a $25 billion bond offering in February to support a $45-50 billion annual financing plan [Measured][10]. Alphabet raised $32 billion in a multi-currency debt sale completed in under 24 hours [Measured][11]. Goldman Sachs projects cumulative hyperscaler capex from 2025-2027 will reach $1.15 trillion — more than double the $477 billion spent in the prior three-year window [Measured][8].

And then there is the century bond.

The Century Bond: A Permanence Claim

On February 9, 2026, Alphabet priced a £1 billion bond maturing in February 2126 — a 100-year sterling instrument at 6.125% [Measured][11]. The offering attracted £9.5 billion in orders, nearly 10x oversubscription. The primary buyers were UK pension funds and insurance companies seeking to match long-duration liabilities. Only three entities had previously issued sterling century bonds: the University of Oxford, the Wellcome Trust, and EDF, a French regulated utility [Measured][11]. These are institutions with centuries of continuity or government-backed revenue guarantees. Alphabet, founded in 1998 and operating in markets where competitive position shifts on 18-month GPU cycles, is not that kind of entity.

Michael Burry was direct: “Alphabet looking to issue a 100-year bond. Last time this happened in tech was Motorola in 1997, which was the last year Motorola was considered a big deal” [Measured][12]. The parallel is uncomfortably precise. Motorola’s century bond coincided with its absolute peak. Within two years, Nokia had surpassed it in mobile phones. Within three, its Iridium satellite venture — a decade-long, multi-billion-dollar bet on infrastructure — filed for bankruptcy after nine months of operation. Motorola recovered approximately 1% of its investment [Measured][12]. By the time the iPhone launched in 2007, Motorola’s market share had collapsed from 60% to under 5%.

The century bond is not merely a financing instrument. It is a structural commitment to perpetual growth. Alphabet’s projected 2026 capex of approximately $185 billion represents roughly 50% of revenue [Estimated][8]. The GPU refresh cycle runs 12-18 months — NVIDIA’s Blackwell chips deliver 4x the power efficiency of the Hopper generation they replace, rendering prior silicon non-competitive for frontier workloads within two years [Measured][13]. Goldman Sachs identified a $40 billion annual depreciation charge for data centers commissioned in 2025, against just $15-20 billion in revenue at current utilization rates [Measured][14]. The infrastructure depreciates faster than it generates the revenue to fund its own replacement.

But stopping is not an option. Bank of America strategist Michael Hartnett has identified a capex reduction announcement as the primary catalyst for a major market rotation, projecting 10-20% stock declines for any hyperscaler that signals pullback [Measured][15]. Amazon’s stock has already fallen 12% on capex concerns in 2026; Microsoft is down 16% [Measured][15]. No major hyperscaler has successfully reduced capex mid-cycle without losing cloud market share. The ratchet does not allow reverse.

A company financing 18-36 month depreciating assets with 100-year debt is not making an investment decision. It is making a permanence claim — asserting that AI infrastructure is a utility, as durable as the University of Oxford or a regulated French electricity provider. The base rate is not encouraging. Approximately 0.5% of companies survive 100 years [Measured][16]. The average S&P 500 tenure has collapsed from 61 years in 1958 to 18 years today [Measured][16]. Technology companies face even steeper odds — the sector’s 10-year survival rate is approximately 29% [Estimated][16].

The Workslop Ceiling: Enterprise Adoption Without Architecture

The demand that sustains the capex ratchet is real in volume. It is questionable in value.

Deloitte’s 2025 Emerging Technology Trends study found that only 11% of organizations have agentic AI in production, while 42% are still developing strategy and 35% have no formal strategy at all [Measured][17]. Gartner predicts that over 40% of agentic AI projects will be cancelled by the end of 2027 due to escalating costs, unclear business value, and integration failures with legacy systems [Measured][6]. MIT’s NANDA Initiative found that 95% of enterprise AI pilots deliver zero measurable ROI — for every 33 proofs of concept launched, only 4 reach production [Measured][5].

The failure mode is not that AI does not work. It is that enterprises are deploying it without the architectural prerequisites to make it work. McKinsey’s 2025 State of AI report found that 65% of organizations now use generative AI, but only 39% report any EBIT impact — and most of those report less than 5% of their EBIT is attributable to AI [Measured][18]. Only 11% of companies worldwide use AI at scale [Measured][18]. The critical finding: 50% of high-performing organizations redesign workflows from scratch for AI, rather than layering AI onto existing human-designed processes [Measured][18]. The organizations that fail — the overwhelming majority — treat AI as a tool to overlay on unreformed systems.

This is where workslop enters the picture. The term, coined by Stanford’s Social Media Lab and BetterUp Labs in a 2025 Harvard Business Review study, describes AI-generated work content that masquerades as quality output but lacks the substance to advance actual tasks [Measured][3]. Their research found that 40% of workers received workslop in the prior month, with 15% of AI-generated content qualifying as workslop by objective measures. The cost is quantifiable: $186 per month per affected employee — scaling to roughly $9 million annually for a 10,000-person organization [Measured][3]. Each instance costs an average of one hour and 56 minutes to identify and remediate.
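The scaling arithmetic behind that figure can be checked directly. A minimal sketch, assuming the study's $186/month cost applies to the 40% of staff who receive workslop in a given month (the headcount is the article's illustrative 10,000):

```python
# Back-of-envelope workslop cost model using the figures cited above.
# Assumption: the $186/month cost applies to the ~40% of employees who
# receive workslop in a given month, sustained across the year.
COST_PER_AFFECTED_PER_MONTH = 186   # USD per affected employee (study figure)
AFFECTED_SHARE = 0.40               # share of workers receiving workslop
HEADCOUNT = 10_000                  # illustrative organization size

annual_cost = HEADCOUNT * AFFECTED_SHARE * COST_PER_AFFECTED_PER_MONTH * 12
print(f"${annual_cost:,.0f}/year")  # → $8,928,000/year, roughly the $9M cited
```

The rounded result matches the study's "roughly $9 million annually" only under that 40%-affected assumption; if the cost applied to all employees, the figure would be closer to $22 million.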

The compute economics make the problem orders of magnitude worse. Agentic AI workflows — sub-agents calling sub-agents, verification loops, retry chains, multi-step reasoning — consume 10 to 100 times the tokens of a simple prompt-response interaction [Estimated][19]. Research benchmarks show that reflexion loops produce a 50x token multiplier over 10 cycles [Measured][19]. Multi-agent architectures show a 77x increase in input tokens compared to single-agent approaches [Measured][19]. A single software engineering task using an unconstrained agentic workflow costs $5-8 in compute [Estimated][19]. When the output of those expensive workflows is workslop — reports requiring full human rewrite, code introducing more bugs than it fixes, customer service interactions that escalate more tickets than they resolve — the enterprise has consumed enormous compute to produce negative value. But from the hyperscaler’s dashboard, those tokens look identical to productive ones. Utilization is up. Revenue per customer is growing.
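The revenue-side indistinguishability can be made concrete with a toy cost comparison. Only the 77x multiplier comes from the cited benchmarks; the token count and price per million tokens are assumed round numbers, not measured prices:

```python
# Illustrative compute-cost comparison: a simple prompt-response exchange
# versus an agentic workflow carrying the 77x input-token multiplier cited
# above. Token count and per-token price are assumed round numbers.
PRICE_PER_MTOK = 5.0                  # USD per million tokens (assumed)
simple_tokens = 10_000                # one prompt-response exchange (assumed)
agentic_tokens = simple_tokens * 77   # multi-agent input-token multiplier

simple_cost = simple_tokens / 1e6 * PRICE_PER_MTOK    # $0.05
agentic_cost = agentic_tokens / 1e6 * PRICE_PER_MTOK  # $3.85

# From the hyperscaler's side, both streams are identical revenue per token:
# the dashboard records $3.85 of "demand" whether the agentic output was a
# working refactor or workslop that a human must now rewrite.
print(f"simple: ${simple_cost:.2f}  agentic: ${agentic_cost:.2f}")
```

The point of the sketch is the ratio, not the absolute dollars: a 77x token multiplier means one badly-architected agentic task registers as much utilization as dozens of productive single-shot interactions.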

The ratchet tightens.

The Counter-Model: What the Ratchet Was Built For

The technology works when the architecture is right. This is not hypothetical — it is observable, measurable, and accelerating.

Revenue per employee tells the story most directly. Analysis of AI-native firms — companies that built their products, workflows, and organizational structures for AI from inception — shows average revenue per employee of $3.48 million [Measured][4]. The comparable figure for established SaaS companies is approximately $611,000 [Measured][4]. That is a 5.7x efficiency gap, and it is widening. Dario Amodei predicted with 70-80% confidence that the first billion-dollar company with a single human employee will emerge by 2026 [Measured][20]. Y Combinator partner Jared Friedman reported that 25% of the Winter 2025 batch have codebases that are approximately 95% AI-generated [Measured][21].

Consider Cursor, the AI-first IDE that reached $100 million in annual recurring revenue within its first year — among the fastest SaaS growth trajectories ever recorded [Measured][22]. Cursor was not built by bolting AI onto an existing code editor. It was architected from the ground up with whole-codebase indexing: the AI has access to the entire project structure, not just the open file. This architectural choice enables multi-file refactoring that bolt-on competitors structurally cannot match. The technology is the same. The architecture makes it different tools entirely.

The same pattern appears in healthcare. AI-native startups Abridge and Ambience have captured roughly 70% of net new revenue in the medical AI scribe market [Measured][23]. Abridge reached $100 million in ARR; Ambience hit $30 million ARR at a valuation exceeding $1 billion [Measured][23]. They did this despite Microsoft’s Nuance being deployed in 77% of U.S. hospitals [Measured][23]. The incumbents have distribution. The AI-native firms have architecture.

Amazon’s internal deployment of its Q tool for Java upgrades reduced migration timelines from 50 developer-days to hours per application [Measured][24]. A five-person team upgraded 1,000 production Java applications in two days — with 79% of AI-generated transformations implemented without changes [Measured][24]. CEO Andy Jassy reported the effort freed the equivalent of 4,500 developer-years and saved $260 million [Measured][24]. Stripe engineers now merge over 1,000 AI-generated pull requests per week [Measured][25]. These firms share architectural principles: documented codebases, modular system design, outcome-driven workflows, continuous deployment cycles measured in hours.

The contrast with legacy enterprise is categorical. Legacy organizations were designed for human coordination: approval chains, information brokers, departmental boundaries. When you bolt AI agents onto that structure, the agents inherit the dysfunction. They cannot reason about systems nobody documented. They cannot optimize workflows nobody mapped. They generate workslop because the architecture gives them no foundation for generating anything else.

These AI-native firms are the existence proof that the infrastructure investment is justified. Cursor’s growth, Stripe’s throughput, Amazon’s Q migration — these validate that AI creates genuine productivity gains when the architecture supports it. But these firms represent a small fraction of total compute consumption. The ratchet was built for them. It is sustained by everyone else.

The Telecom Parallel: When Infrastructure Outruns Value

The historical parallel is instructive not as analogy but as structural precedent. The late-1990s telecom fiber buildout invested over $500 billion in infrastructure — 80 million miles of fiber optic cable [Measured][26]. The underlying technology worked: fiber optic transmission was and remains the backbone of modern telecommunications. The demand for bandwidth was real and would eventually materialize far beyond what even optimists projected. But in the build cycle, approximately 85% of that fiber remained dark, unused [Measured][26].

The build was sustained not by productive demand but by the narrative of demand: internet traffic doubles every three months, bandwidth will always be the bottleneck, whoever builds the most pipe wins. The unwinding took six years, destroyed $2 trillion in market value, and produced the largest corporate losses in history. JDS Uniphase alone lost $56.1 billion in fiscal 2001 [Measured][26].

The AI ratchet replicates this structure with one critical intensifier: unlike dark fiber, the AI infrastructure is not sitting idle. It is being used. Enterprise agentic deployments are consuming tokens at exponentially growing rates. But the question the telecom bust should have taught us to ask is not “is the infrastructure being used?” It is “is the use producing proportional value?”

Dark fiber was obviously wasteful. Workslop is not obvious at all — it registers as utilization, generates revenue, and fills dashboards with activity metrics that look indistinguishable from productive work. When hyperscalers tell Wall Street that their markets are “supply-constrained, not demand-constrained,” the evidence they cite is physical: electricity availability, data center construction timelines, transformer delivery schedules [Measured][27]. Amazon CEO Andy Jassy stated that “as fast as we’re adding capacity, we’re monetizing it” [Measured][27]. This conflates capacity filling with value creation.

A data center running at 85% GPU utilization is not necessarily producing 85% of its theoretical value. If half the tokens processed are generating workslop, then utilization is a misleading metric. But it is the metric Wall Street rewards. And critically, Wall Street has not developed systematic quality-adjusted utilization metrics. AI-related services are expected to deliver only about $25 billion in revenue in 2025 — roughly 4% of what hyperscalers are spending on the infrastructure to deliver them [Measured][14].

The Feedback Loop: How Waste Sustains the Ratchet

The feedback loop is self-reinforcing, and understanding its mechanics is essential to seeing why the ratchet resists correction.

Bad enterprise architecture generates orders of magnitude more tokens per unit of useful output. An enterprise that has not documented its codebase, not defined the problems it is solving, not redesigned its workflows for AI will generate 10x-100x more tokens than an AI-native firm performing the same task [Estimated][19]. Those tokens consume compute. The compute registers as utilization. The utilization justifies the capex. The capex builds more compute. More compute enables more poorly-architected deployments by enterprises that still have not reformed their architecture.

Each turn of the ratchet locks tighter. The Automation Trap (MECH-011) operates inside each enterprise deployment: each round of AI integration creates monitoring overhead, error correction workflows, quality assurance layers, and compliance verification that themselves consume compute. The agent that checks the agent that checks the output generates three times the tokens of the original agent alone. Enterprises respond to workslop not by redesigning architecture but by adding more agents to catch the bad output — which generates more tokens, which drives more utilization, which sustains the ratchet.

And unlike the telecom bust, where dark fiber was at least visibly idle, the AI ratchet is invisible — because the waste and the value flow through the same pipes and show up as the same metric on the same quarterly earnings call.

The ratchet mechanism connects directly to Recursive Displacement (MECH-001) through the infrastructure layer. MECH-001 describes AI-driven substitution compounding across sectors. The ratchet ensures that the physical infrastructure for that substitution continues to expand regardless of whether current deployments are producing proportional value. The sunk capex, the debt obligations, and the market expectations create a structural floor under AI infrastructure investment that is independent of AI’s actual economic productivity. Even if enterprise AI adoption disappoints, the infrastructure persists — and the infrastructure creates the substrate on which displacement mechanisms operate. The ratchet does not need AI to work well. It only needs AI to exist at sufficient scale that the capital commitments cannot be reversed.

The Depreciation Trap: When Assets Die Faster Than Debt

The depreciation dynamics deserve separate attention because they reveal the most dangerous structural feature of the ratchet.

NVIDIA’s Blackwell architecture delivers approximately 4x the power efficiency of the Hopper generation for inference workloads [Measured][13]. This means that a GPU cluster commissioned in 2024 on Hopper silicon is non-competitive for frontier inference within 18-24 months of deployment. The hardware is not broken. It is not worn out. It is economically obsolete — outperformed on a cost-per-token basis by hardware that did not exist when the purchase order was signed.

Goldman Sachs quantified the consequence: a $40 billion annual depreciation charge for data centers commissioned in 2025, against just $15-20 billion in revenue at current utilization rates [Measured][14]. The infrastructure generates less revenue per year than it loses in book value per year. This is economically sustainable only if utilization grows fast enough to cover the depreciation gap — which requires exactly the kind of explosive demand growth that workslop-generating enterprise deployments provide.
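The sustainability condition reduces to a growth race between revenue and the depreciation charge. A minimal sketch, using the Goldman figures cited above with the midpoint of the revenue range; the 50% annual growth rate is a hypothetical assumption, not a forecast:

```python
# How many years of revenue growth before 2025-vintage data centers cover
# their own annual depreciation charge? Depreciation and revenue figures
# are from the Goldman estimate above; the growth rate is assumed.
DEPRECIATION = 40.0   # $B per year, 2025-commissioned data centers
revenue = 17.5        # $B per year, midpoint of the $15-20B range
GROWTH = 0.50         # hypothetical 50% annual revenue growth

years = 0
while revenue < DEPRECIATION:
    revenue *= 1 + GROWTH
    years += 1
print(years)  # → 3 years of 50% growth before revenue covers depreciation
```

Even under that aggressive growth assumption, revenue does not cover depreciation until year three — by which point the silicon is already one to two refresh cycles behind the frontier.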

The debt maturity structure compounds the problem. Alphabet’s century bond matures in 2126. The GPUs it finances will be obsolete by 2028. Oracle’s $25 billion bond offering finances a data center buildout whose silicon will undergo at least three refresh cycles before the bonds mature [Measured][10]. The gap between the life of the asset and the life of the debt creates a rolling obligation: each generation of silicon must be replaced before the debt that financed the prior generation is retired. The company must grow, not to profit from its investment, but to service the debt that financed a depreciating asset that must be replaced before its debt matures.

This is not a new dynamic in capital-intensive industries. Airlines, shipping companies, and oil refineries all finance long-lived debt against shorter-lived assets. But those industries have relatively stable technology curves — an airframe design lasts 20-30 years, a ship design 15-25 years, a refinery configuration 10-15 years. The AI silicon refresh cycle is 12-18 months. No capital-intensive industry in economic history has operated with a depreciation curve this steep relative to its debt maturity profile [Framework — Original].


Mechanisms

The Ratchet (MECH-014): An irreversible tightening dynamic in which sunk capex, debt, and enterprise AI demand make retreat from escalating AI infrastructure spending more costly than continuation. The mechanism operates through three channels: supply-side commitment (capex and debt that cannot be reversed), demand-side sustenance (workslop and token consumption that register as utilization), and market-side enforcement (stock price punishment for any signal of pullback). Together, these channels produce a structural floor under AI infrastructure investment that is independent of AI’s actual productive value. [Framework — Original]

The Automation Trap (MECH-011): Each round of automation creates complexity, overhead, and fragility that erode initial efficiency gains. In the ratchet context, MECH-011 operates inside enterprise AI deployments: the response to bad AI output is more AI to check the output, generating more token demand, more compute utilization, and more apparent justification for capex — regardless of whether the net value is positive or negative.

Recursive Displacement (MECH-001): AI-driven substitution that compounds across institutions and sectors. The ratchet ensures the physical substrate for displacement mechanisms continues to expand. Even if current enterprise AI deployments disappoint, the infrastructure persists — and the infrastructure creates the conditions on which future displacement operates.

Interaction effects: MECH-014 (the ratchet) creates the structural floor. MECH-011 (the automation trap) fills the floor with demand that looks productive but is architecturally wasteful. MECH-001 (recursive displacement) operates on the infrastructure that the ratchet has made irreversible. The three mechanisms form a self-reinforcing cycle: displacement justifies infrastructure, infrastructure enables deployment, bad deployment generates waste, waste sustains utilization, utilization justifies more infrastructure. [Framework — Original]


Counter-Arguments and Limitations

The “They Know What They’re Doing” Objection

The hyperscalers are among the most sophisticated capital allocators in economic history. Their capex decisions reflect internal demand forecasts, customer contracts, and infrastructure planning that outside analysts cannot fully assess. If Google, Amazon, Microsoft, and Meta — companies with direct visibility into enterprise AI demand — are collectively spending $600 billion, the most parsimonious explanation is that they see demand that justifies it.

This objection deserves respect. Hyperscaler executives have more information than external analysts about demand pipelines, customer commitments, and utilization trajectories. The counter-argument is not that they are stupid. It is that their incentive structure makes rational retreat impossible even if they privately doubt the demand trajectory. Bank of America’s analysis that capex reduction triggers 10-20% stock declines means that the individually rational strategy for each hyperscaler is to continue spending, regardless of whether collective spending is justified. This is a prisoner’s dilemma, not a stupidity problem. Each player’s dominant strategy is to keep building, producing a collective outcome (potential overinvestment) that no individual player can unilaterally escape.
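The game-theoretic structure can be made concrete with a toy payoff matrix. The numbers are purely illustrative ordinal values, chosen only to reproduce the ordering implied by the 10-20% pullback penalty; they are not estimates of any firm's payoffs:

```python
# Toy two-player capex game: each hyperscaler chooses "build" or "cut".
# Payoffs are illustrative, not estimates: cutting alone incurs the
# stock-price penalty; mutual building produces collective overinvestment.
payoffs = {  # (A's move, B's move) -> (A's payoff, B's payoff)
    ("build", "build"): (-5, -5),    # collective overinvestment
    ("build", "cut"):   (3, -15),    # the lone cutter is punished
    ("cut", "build"):   (-15, 3),
    ("cut", "cut"):     (0, 0),      # coordinated retreat, unreachable
}

# "build" strictly dominates: whatever the rival does, building pays more,
# so (build, build) is the equilibrium even though (cut, cut) beats it.
for rival in ("build", "cut"):
    assert payoffs[("build", rival)][0] > payoffs[("cut", rival)][0]
```

The dominance check at the bottom is the whole argument in miniature: no single player can improve its position by cutting, so the collectively worse outcome is individually rational.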

Additionally, the distinction between “demand exists” and “demand is productive” matters. Hyperscalers are incentivized to maximize revenue per customer, which tracks token consumption. They are not incentivized to distinguish productive tokens from workslop tokens. As long as enterprise budgets flow to AI infrastructure, the demand signal is positive — even if the underlying activity is generating negative value for the enterprises consuming it.

The “AI-Native Firms Prove the Value” Objection

The existence of Cursor, Stripe, Amazon Q, and other AI-native success stories demonstrates that AI infrastructure investment has genuine productive applications. The ratchet is not waste — it is the construction of infrastructure that will eventually be utilized productively as enterprises mature their AI architectures.

This objection is partially correct, and the essay explicitly acknowledges it: the technology works when the architecture is right. The question is magnitude and timeline. AI-native firms represent a small fraction of total compute consumption. The overwhelming majority of enterprise AI deployments — the 95% that fail to deliver measurable ROI, the 89% that have not reached production-grade agentic deployment — are consuming compute without producing proportional value. The ratchet thesis does not claim that AI is worthless. It claims that the infrastructure buildout is scaled to a demand trajectory that assumes the majority of enterprises will achieve AI-native architecture, when the evidence suggests most will not achieve it within the infrastructure’s depreciation window.

The “Jevons Paradox Resolves It” Objection

As inference costs continue to fall — approximately 1,000x in three years for GPT-4-equivalent capability [Measured][28] — new use cases will emerge that justify the infrastructure at any scale. The ratchet is simply the construction phase of a utility that will eventually achieve the utilization levels that justify the investment, just as telecom fiber eventually did.

This objection correctly identifies that the telecom fiber was eventually utilized — but elides the six-year unwinding, $2 trillion in destroyed market value, and waves of corporate bankruptcy that occurred between the buildout and the utilization. The ratchet thesis does not claim that AI infrastructure will never be justified. It claims that the timeline to productive utilization may exceed the timeline that the debt structure can sustain, producing a restructuring event even if the long-term demand thesis is correct. The question is not whether AI infrastructure will eventually be used productively. It is whether the debt matures before the productivity arrives.

The Scope Limitation

The ratchet thesis applies specifically to the current capex cycle (approximately 2024-2029) and to the specific supply-demand dynamics of that cycle. If enterprise AI architecture matures faster than projected, or if new use cases generate productive demand at sufficient scale, the ratchet loosens. The mechanism is not claimed as a permanent feature of AI economics. It is claimed as a structural feature of this specific infrastructure buildout, driven by the specific combination of unprecedented capex, historically steep depreciation curves, century-bond financing, and enterprise AI adoption that is high in volume but low in architectural maturity.


What Would Change Our Mind

  1. Enterprise AI ROI rates exceed 50% within 24 months — meaning more than half of enterprise AI deployments (not just AI-native startups) demonstrate measurable positive ROI. This would indicate that the workslop ceiling is breaking and that productive demand is replacing architectural waste as the primary driver of token consumption.

  2. Hyperscaler revenue-to-capex ratios improve to pre-AI levels (approximately 2.5x) within 36 months. If AI-related revenue grows fast enough to restore the historical relationship between infrastructure spending and revenue generation, the depreciation trap resolves and the ratchet loosens.

  3. A major hyperscaler successfully reduces capex by 20% or more without losing cloud market share or experiencing a stock price decline exceeding 5%. This would indicate that the market-enforcement mechanism that prevents retreat has weakened enough to allow rational capital allocation.

  4. Quality-adjusted utilization metrics emerge and show that more than 60% of enterprise AI token consumption generates positive business value. This would falsify the workslop-driven demand thesis directly.

  5. The inference cost curve flattens or reverses, indicating that the 1,000x price decline of the past three years was a one-time catch-up rather than an ongoing trend. If inference costs stabilize, the Jevons paradox mechanism that generates explosive demand growth weakens, and the infrastructure buildout may overshoot.


Confidence and Uncertainty

Central estimate: 60-70% that the ratchet mechanism is producing structural overinvestment relative to productive AI value over the 2024-2029 infrastructure cycle.

What drives confidence upward: The cash flow data (90-100% of OCF consumed by capex). The debt scale ($400 billion in 2026 borrowing). The depreciation mismatch (18-month GPU cycles financed by multi-decade debt). The enterprise ROI data (95% of pilots failing to deliver measurable returns). The workslop research (40% of workers receiving AI-generated waste). The market-enforcement mechanism (10-20% stock declines for capex signals). The telecom structural precedent (identical pattern of narrative-driven infrastructure buildout producing a multi-year bust). The century bond as a permanence claim with no structural foundation.

What drives confidence downward: The sophistication of hyperscaler capital allocation. The genuine productivity of AI-native deployments. The possibility that enterprise architecture will mature faster than historical patterns suggest. The ongoing 1,000x inference cost decline opening genuinely new markets. The possibility that AI infrastructure is genuinely a utility-scale investment whose returns will compound over decades.

Binding uncertainty: Whether the enterprise architecture gap closes before the debt cycle forces a restructuring. If the majority of enterprises reform their AI deployment architectures — documenting codebases, redesigning workflows, building modular systems — within the next 3-5 years, the workslop ceiling breaks and the ratchet transitions into genuine productive utilization. If they do not — if the architectural prerequisites for productive AI deployment remain as rare in 2029 as they are in 2026 — then the ratchet continues tightening until the debt cannot be serviced and a restructuring forces the correction the market would not permit organically.


Implications

For the Aggregate Demand Crisis: The ratchet complicates the standard “AI reduces costs and creates consumer surplus” narrative. If AI infrastructure investment is partially sustained by architectural waste rather than productive value creation, then the efficiency gains that are supposed to lower prices and expand demand are not materializing at the scale the capex implies. The demand crisis proceeds not because AI fails to produce, but because the production is concentrated in a narrow set of AI-native firms while the majority of enterprise spending generates consumption without corresponding value.

For AI governance: The ratchet creates a structural constituency for continued AI expansion that is independent of AI’s social value. Once $690 billion per year is committed to infrastructure, the financial interests of hyperscalers, their creditors, their shareholders, and their enterprise customers all align around continued expansion — regardless of whether that expansion serves broad economic interests. Regulatory interventions that would slow AI deployment face opposition not just from AI advocates but from the entire financial ecosystem that has bet on the ratchet continuing.

For corporate strategy: The ratchet implies that the current window for architectural reform is finite. Enterprises that invest in AI-native architecture now — documenting systems, redesigning workflows, building modular infrastructure — will be positioned to capture genuine value from the infrastructure being built. Enterprises that continue bolting AI onto unreformed systems will consume compute, generate workslop, and eventually face the budget correction that the ratchet defers but cannot prevent.

Where This Connects: The Automation Trap documents the micro-level mechanism (complexity accumulation) that compounds the ratchet at the enterprise level. The Aggregate Demand Crisis describes the macroeconomic consequence of AI efficiency gains that fail to reach consumers. The AI Capex War documents the prisoner’s dilemma at the hyperscaler level. The Adversarial Equilibrium Trap describes cost escalation in adversarial markets that compounds the ratchet’s demand-side dynamics. The Compute Feudalism essay documents the infrastructure oligopoly that benefits from the ratchet’s continuation. The Orchestration Class describes the human layer hired to manage the architectural gap — and whose existence both moderates and sustains the ratchet.


Conclusion

The ratchet is not a bubble. Bubbles pop. Ratchets tighten.

The AI infrastructure buildout will not end with a sudden collapse in confidence, a dramatic market crash, or a Minsky moment where everyone realizes at once that the emperor has no clothes. The technology works. The demand is real. The use cases are genuine. The companies building the infrastructure are not stupid, and the investors buying the century bonds are not irrational.

What makes the ratchet dangerous is not that it is wrong. It is that it is right enough to sustain itself while being wrong enough to produce structural overinvestment. The technology works for 5% of enterprises. The infrastructure is built for 100%. The gap between the two is filled with workslop — low-quality token consumption that registers as utilization, justifies capex, and sustains the illusion of demand while the architectural prerequisites for genuine productive AI deployment remain rare, difficult, and unscalable by any method other than painful organizational reform.

The greater fool in this structure is not the last investor or the last buyer of compute. It is the organization that deployed AI at scale without first defining the problem it was solving — without documenting why its systems work the way they do, without architecting for the technology it was adopting, without asking whether its org chart was a solution to a coordination problem or a monument to legacy politics. These organizations are the demand that sustains the ratchet. When they stop — through budget cuts, project cancellations, or the slow realization that performance theater is not a business strategy — the ratchet will have nothing left to turn against.

The technology works. The architecture, for most, does not. And the $690 billion bet is that nobody will notice the difference until the debt is issued, the tokens are burned, and the ratchet has already turned past the point of no return.


Sources

[1] TechBlog / IEEE ComSoc. “Hyperscaler Capex $600 Bn in 2026: A 36% Increase Over 2025.” December 2025. https://techblog.comsoc.org/2025/12/22/hyperscaler-capex-600-bn-in-2026-a-36-increase-over-2025-while-global-spending-on-cloud-infrastructure-services-skyrockets/

[2] Futurum Group. “AI Capex 2026: The $690B Infrastructure Sprint.” January 2026. https://futurumgroup.com/insights/ai-capex-2026-the-690b-infrastructure-sprint/

[3] Stanford Social Media Lab & BetterUp Labs. “Research: The Growing Problem of AI Workslop.” Harvard Business Review, June 2025. https://hbr.org/2025/06/research-the-growing-problem-of-ai-workslop

[4] Complex Discovery. “The Billion-Dollar Solo Act: Can AI-Fueled Solopreneurs Redefine Scalable Business?” 2025. https://complexdiscovery.com/the-billion-dollar-solo-act-can-ai-fueled-solopreneurs-redefine-scalable-business/

[5] MIT NANDA Initiative. “95% of Enterprise AI Pilots Fail to Deliver Measurable ROI.” Healthcare IT News, 2025. https://www.healthcareitnews.com/news/mit-95-enterprise-ai-pilots-fail-deliver-measurable-roi

[6] Gartner. “Predicts 2026: Over 40% of Agentic AI Projects Will Be Cancelled by End of 2027.” Press release, June 2025. https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027

[7] Tech Insider. “Big Tech AI Infrastructure Spending 2026: The $700B Race.” January 2026. https://tech-insider.org/big-tech-ai-infrastructure-spending-2026/

[8] CNBC. “Google, Microsoft, Meta, Amazon AI Cash Flow Concerns.” February 2026. https://www.cnbc.com/2026/02/06/google-microsoft-meta-amazon-ai-cash.html

[9] Fortune. “AI Tech Red Flag: Capex, Hyperscalers, Cash Flow Negative.” February 2026. https://fortune.com/2026/02/17/ai-tech-red-flag-capex-hyperscalers-cash-flow-negative-evercore/

[10] Data Center Dynamics. “Oracle Officially Launches $25Bn Bond Offering and $20Bn Equity Distribution Agreement.” February 2026. https://www.datacenterdynamics.com/en/news/oracle-officially-launches-25bn-bond-offering-and-20bn-equity-distribution-agreement/

[11] CNBC. “Alphabet 100-Year Bond, Debt Fears, AI Credit Risk.” February 2026. https://www.cnbc.com/2026/02/12/alphabet-100-year-bond-debt-fears-ai-credit-risk.html

[12] IBTimes. “Michael Burry Compares Alphabet’s 100-Year Bonds to Motorola’s Downfall After Similar Move in 1997.” February 2026. https://www.ibtimes.co.uk/michael-burry-compares-alphabets-100-year-bonds-to-motorola-s-downfall-after-similar-move-in-1997-1777857

[13] Tom’s Hardware. “GPU Depreciation Could Be the Next Big Crisis Coming for AI Hyperscalers.” 2026. https://www.tomshardware.com/tech-industry/gpu-depreciation-could-be-the-next-big-crisis-coming-for-ai-hyperscalers-after-spending-billions-on-buildouts-next-gen-upgrades-may-amplify-cashflow-quirks

[14] Goldman Sachs. “Why AI Companies May Invest More Than $500 Billion in 2026.” 2026. https://www.goldmansachs.com/insights/articles/why-ai-companies-may-invest-more-than-500-billion-in-2026

[15] ZeroHedge / Bank of America. “Hartnett: AI Hyperscaler Announcing Capex Cut Will Trigger Next Great Rotation.” February 2026. https://www.zerohedge.com/markets/hartnett-ai-hyperscaler-announcing-capex-cut-will-trigger-next-great-rotation

[16] Innosight. “Creative Destruction: S&P 500 Company Lifespans.” 2025. https://www.innosight.com/insight/creative-destruction/

[17] Deloitte. “Emerging Technology Trends: Agentic AI Strategy.” 2025. https://www.deloitte.com/us/en/insights/topics/technology-management/tech-trends/2026/agentic-ai-strategy.html

[18] McKinsey & Company. “The State of AI in 2025.” December 2025. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

[19] Anthropic & academic benchmarks on agentic token consumption. Multi-agent reflexion loop research, 2025. [Estimated from multiple research papers on agentic AI token multipliers]

[20] Amodei, D. Interview with Inc. magazine, 2025. https://www.inc.com/ben-sherry/anthropic-ceo-dario-amodei-predicts-the-first-billion-dollar-solopreneur-by-2026/91193609

[21] Carta. “Solo Founders Report.” 2025. https://carta.com/data/solo-founders-report/

[22] Sacra. “Cursor Revenue: $100M ARR.” 2025. https://sacra.com/research/cursor-revenue/

[23] Industry analysis of Abridge and Ambience Healthcare market share in medical AI scribe market, Q1 2026.

[24] AWS. “How Amazon Q Developer Helped Upgrade Thousands of Java Applications.” 2025. https://aws.amazon.com/blogs/devops/how-amazon-q-developer-helped-upgrade-thousands-of-java-applications/

[25] Bloomberg. “Stripe CEO Says Utilization of AI Coding Up Dramatically.” February 2025. https://www.bloomberg.com/news/articles/2025-02-21/stripe-ceo-says-utilization-of-ai-coding-up-dramatically

[26] TheBubbleBubble.com. “The Telecom Bubble.” https://www.thebubblebubble.com/telecom-bubble/

[27] Amazon CEO Andy Jassy, Q4 2025 earnings call, January 2026.

[28] Per-token pricing data from OpenAI, Anthropic, and competitor API pricing pages, 2023-2026. GPT-4 launched at $30-36/M input tokens; GPT-4-equivalent capability available at approximately $0.40/M input tokens by early 2026.


Published by the Recursive Institute. This essay was produced through an adversarial multi-agent pipeline including automated fact-checking, structured debate, and editorial review.