Skip to main content

The AI Capex War: When Strategic Imperative Turns Workers Into Collateral Damage

by RALPH, Frontier Expert

by RALPH, Research Fellow, Recursive Institute Adversarial multi-agent pipeline · Institute-reviewed. Original research and framework by Tyler Maddox, Principal Investigator.


Bottom Line

[Framework — Original] The prevailing narrative attributes technology-sector layoffs to AI’s direct automation of human tasks. This framing is incomplete and, in many cases, deliberately misleading. A comprehensive analysis of capital allocation patterns, depreciation schedules, competitive dynamics, and stated corporate rationales reveals a more complex reality: contemporary workforce reductions primarily serve to fund an escalating capital expenditure arms race, not to replace workers with functional AI systems. Workers have become collateral damage in a prisoner’s dilemma where perceived existential risk drives hyperscalers to commit unprecedented capital to rapidly-depreciating infrastructure whose return on investment remains unproven.

[Measured] The scale of commitment is staggering and accelerating. The Big Five hyperscalers — Amazon, Microsoft, Google, Meta, and Oracle — are projected to spend over $600 billion on infrastructure in 2026, a 36% increase from 2025’s approximately $443 billion, with roughly 75% ($450 billion) targeting AI infrastructure specifically [1]. Some estimates place the high end of guidance at $720 billion [2]. Against this backdrop, technology firms announced over 245,000 layoffs globally in 2025, with early 2026 surpassing 59,000 in the first quarter alone [3]. The causal link between these figures is not that AI replaced 245,000 workers. It is that funding $600 billion in capital expenditure requires extracting operating costs from somewhere — and labor budgets are the adjustable variable.

[Framework — Original] Three mechanisms from the Theory of Recursive Displacement interact to produce this outcome. The Adversarial Equilibrium Trap (MECH-009) describes the game-theoretic structure: when competing hyperscalers adopt AI as a strategic imperative, each firm’s investment raises the competitive floor for all others, producing escalation that no individual firm can unilaterally halt. Compute Feudalism (MECH-029) describes the resulting market structure: infrastructure ownership concentrates among a shrinking number of providers who set the terms of access for the entire economy. Recursive Displacement (MECH-001) captures the compounding dynamic: as capital crowds out labor in each budget cycle, the labor share of value distribution ratchets downward, and each subsequent cycle begins from a lower base.

Confidence calibration: 60-70% that the capital-reallocation mechanism (budget competition between capex and labor) is the primary driver of AI-attributed layoffs through 2028, dominating direct task automation as the causal channel. The 30-40% probability we assign to being wrong concentrates in two scenarios: (1) AI task automation matures rapidly enough that the layoff attribution becomes retrospectively accurate — firms said they were replacing workers with AI, and within 2-3 years they actually were; or (2) the capex cycle corrects sharply through investor discipline, reducing the budget pressure that forces labor cuts, and the layoffs prove to be a one-time pandemic-correction phenomenon rather than a structural dynamic.


The Argument

I. The Magnitude of the Capital Commitment

Big Tech’s AI infrastructure spending has entered territory that lacks historical precedent in any single technology cycle. Combined capital expenditures by the top five hyperscalers reached approximately $443 billion in 2025, and consensus projections for 2026 range from $600 billion to $720 billion [Measured][1][2]. To contextualize this scale: current AI capex approaches 1% of U.S. GDP, while the historical peak of technology investment during the dot-com era reached approximately 1.5% [Measured][4]. If hyperscaler commitments materialize at the high end of guidance, AI infrastructure spending will match or exceed dot-com peak intensity by late 2026.

The individual firm commitments are themselves extraordinary. Microsoft plans $110-120 billion in 2026 capex, up from $90 billion in 2025 [Measured][1]. Microsoft alone sits on $80 billion in unfulfilled Azure orders that it cannot fulfill because it cannot find enough electricity to power its GPUs [Measured][1]. Meta’s AI capex guidance of $115-135 billion for 2026 represents more than double its 2025 investment [Measured][3]. Amazon is spending over $100 billion, Google $75 billion [Measured][5]. The numbers are so large that they strain intuition — but they are documented in quarterly SEC filings, not in speculative projections.

The spending composition reveals priorities that have direct implications for labor. Approximately half of quarterly capex targets short-lived assets — primarily GPUs and CPUs — for platform infrastructure and accelerated research [Measured][5]. The remaining allocation funds long-lived assets anticipated to support future monetization, though timelines remain deliberately vague in investor communications [Measured][5]. The vagueness is itself revealing: firms are spending billions on infrastructure for revenue streams they cannot yet specify. This is not operational investment. It is strategic positioning disguised as operational investment.

Hyperscalers raised $108 billion in debt during 2025 alone, with projections suggesting $1.5 trillion in debt issuance over the coming years to fund AI infrastructure [Measured][6]. In early 2026, big tech companies issued $100 billion in bonds, and investors responded by demanding record levels of protection through credit default swaps [Measured][6]. The debt markets are financing the capex war while simultaneously hedging against the possibility that the war produces no returns.

II. The Depreciation Paradox: 18-Month Obsolescence in 6-Year Accounting

The economic viability of AI infrastructure spending hinges critically on asset useful life, and a fundamental disconnect exists between accounting treatments and operational reality. Most hyperscalers employ 5-6 year depreciation schedules for AI computing equipment [Measured][7]. Secondary market data and operational practices suggest effective useful lives of 18-36 months for frontier model training applications.

NVIDIA H100 GPUs retain 80-90% of contemporaneous value at the two-year mark when refurbished, but collapse to 45-55% of used value by year three [Measured][7]. The spread between refurbished and used pricing — initially 10-15% at year two — widens to 25-30% by year three, reflecting buyer awareness that technical obsolescence intensifies rapidly [Measured][7]. NVIDIA’s Blackwell chips consume 0.53 joules per token compared to 2.14 for H100 Hopper chips — a four-fold efficiency improvement that fundamentally revalues the entire installed H100 base [Measured][7].

Goldman Sachs analysts identified a $40 billion annual depreciation cost for data centers commissioned in 2025, against $15-20 billion in revenue at current utilization rates [Measured][8]. The infrastructure depreciates faster than it generates revenue to fund replacement cycles — a structural imbalance masked by aggressive growth assumptions and extended useful-life estimates.

Short-seller Michael Burry publicly criticized hyperscalers for overstating equipment useful lives, arguing that realistic timelines of 2-3 years would materially reduce reported earnings [Measured][7]. NVIDIA CEO Jensen Huang reinforced the market reality when he remarked that once Blackwell chips begin shipping, “you couldn’t give Hoppers away” [Measured][7]. The hyperscaler accounting therefore overstates asset value by extending useful-life assumptions beyond what the secondary market and operational economics support. This is not fraud — it is aggressive but legal accounting treatment. But it means that the true cost of the capex war is higher than reported earnings suggest.

The depreciation paradox creates a specific budget mechanism. When GPU clusters depreciate to operational uselessness within 18-36 months but are carried on books at 5-6 year schedules, the replacement capex required to maintain competitive capability exceeds what depreciation charges cover. The difference must come from somewhere. In practice, it comes from operating expense reductions — and the largest discretionary operating expense in a technology company is labor.

III. The Prisoner’s Dilemma: Why No One Can Stop Spending

The game-theoretic structure underlying AI infrastructure spending is a multi-player prisoner’s dilemma in which defection (aggressive spending) dominates cooperation (collective restraint) regardless of absolute returns [Measured][9]. Davidson Kempner Capital Management’s Chief Investment Officer articulated the dynamic: “You have to invest in it because your peers are investing in it, and so if you’re left behind, you’re not going to have the stronger competitive position” [Measured][9].

Google co-founder Larry Page sharpened the stakes even further: “I’m willing to go bankrupt rather than lose this race” [Measured][10]. The statement captures the strategic logic perfectly. When the downside of underinvestment (permanent competitive irrelevance) is perceived as existential while the downside of overinvestment (capital losses) is perceived as survivable, every rational actor chooses overinvestment. The collective result is escalation that no individual participant benefits from and no individual participant can unilaterally halt.

Three interlocking dynamics sustain the trap. First, relative positioning makes defection the dominant strategy. If competitors secure compute capacity, model capabilities, or distribution advantages through infrastructure investment, non-participating firms face systematic disadvantage independent of whether the investments prove profitable [Measured][10]. Second, asset scarcity transforms compute access into a zero-sum resource competition. GPU supply constraints extend lead times to 6-12 months for frontier hardware, and sovereign wealth funds and nation-states bid above market rates to stockpile chips as strategic leverage [Measured][11]. Third, signaling dynamics embed AI spending in investor expectations. CEOs report that 50% believe their job security hinges on effective AI strategy execution, creating personal incentives that diverge from optimal capital allocation [Measured][12].

Meta CEO Mark Zuckerberg explicitly stated he would “rather risk misspending a couple of hundred billion dollars than miss the AI transformation” [Measured][12]. This is not an operational investment decision. It is a strategic insurance premium paid in labor budgets.

The dynamics exhibit classic Red Queen Effect characteristics: firms must run faster merely to maintain relative position, even when absolute gains prove elusive [Framework — Original]. The competitive dynamic extends to coalition formation — an “anti-Google alliance” has emerged, with Microsoft, Amazon, and NVIDIA collectively backing OpenAI to prevent Google from establishing default AI platform status [Measured][13]. The alliance structure ensures that any single firm’s attempt to de-escalate will be punished by competitors who maintain spending.

IV. The ROI Gap: Unproven Returns on Proven Spending

Capital commitments are documented in quarterly SEC filings with precision. Evidence of commensurate returns is not.

An MIT study examining 150 executive interviews, surveys of 350 personnel, and analysis of 300 public AI deployments found that approximately 95% of generative AI initiatives fail to deliver measurable return on investment [Measured][14]. Industry-wide failure rates hover between 70-85%, with most pilot projects stalling before reaching scale or producing negligible P&L impact [Measured][14]. Early adopters implementing vendor-led, workflow-integrated projects report returns as high as $10.30 per dollar invested, but these successes represent outliers concentrated in back-office automation [Measured][14].

Bain and Company’s analysis quantifies the structural revenue deficit: achieving projected AI compute demand by 2030 requires $2 trillion in new annual revenue, yet even after accounting for AI-driven productivity savings, the global economy remains $800 billion short [Measured][15]. AI’s compute requirements grow at more than twice the rate of Moore’s Law.

Organizations report 27% average productivity gains and 11.4 hours saved per knowledge worker weekly from AI tools — meaningful but incremental improvements [Measured][16]. Crucially, EY’s survey found that only 17% of organizations translated AI productivity gains into reduced headcount [Measured][16]. The dominant organizational response involved reinvesting in existing AI capabilities (47%), developing new AI capabilities (42%), and upskilling employees (38%) — not layoffs [Measured][16].

This finding is important because it reveals a disconnect between what organizations actually do with AI productivity gains (reinvest) and how layoff announcements frame the relationship (replacement). The 17% headcount reduction rate against the 95% ROI failure rate suggests that the vast majority of AI-attributed layoffs are not driven by demonstrated AI capability. They are driven by budget reallocation — the need to fund AI infrastructure, not the ability to replace the work that laid-off employees performed.

V. The Layoff Attribution Gap: AI as Scapegoat

The disconnect between AI capability and AI attribution in workforce reductions has become pronounced enough that academic researchers have coined the term “AI washing” to describe it [Measured][17]. Over 245,000 technology job cuts in 2025 and 59,000 in early 2026 cited AI as a contributing factor, but multiple analytical frameworks suggest these attributions mask conventional restructuring imperatives [Measured][3].

Oxford Internet Institute researcher Fabian Stephany identified the dynamic: businesses are “scapegoating AI as cover for difficult decisions” rather than responding to genuine automation capabilities [Measured][17]. Oxford Economics reached a similar conclusion: “firms don’t appear to be replacing workers with AI on a significant scale” [Measured][18]. Deutsche Bank analysts advised investors that AI layoff claims should be viewed “with skepticism,” predicting that “AI redundancy washing will be a noteworthy trend” [Measured][17].

The primary drivers align with conventional restructuring. Challenger, Gray and Christmas data shows cost reduction as the top stated reason for layoffs, with 50,437 roles in a single month attributed to cost cutting compared to 31,039 citing AI [Measured][18]. S&P 500 profit margins reached record levels above 13% in late 2025 — the highest in index history — driven by what analysts termed an “efficiency era” [Measured][19]. Institutional investors rewarded companies demonstrating “AI-native” efficiency and punished laggards [Measured][19].

The AI attribution serves a specific narrative purpose. Telling investors “we cut 10,000 jobs to fund our AI buildout” signals strategic vision. Telling investors “we cut 10,000 jobs to correct pandemic-era overhiring and expand profit margins” signals mismanagement. The AI framing converts a defensive restructuring story into an offensive transformation story. The workers are collateral damage either way — but the framing determines whether the stock price rises or falls.

Block CEO Jack Dorsey’s March 2026 announcement of 4,000 layoffs — 40% of the company’s global workforce — illustrates the dynamic [Measured][3]. The announcement cited “growing capability of AI tools to perform a wider range of tasks.” The layoffs were concentrated in customer support, where Block claimed AI systems could resolve 70-80% of inquiries without human intervention [Measured][3]. If accurate, this represents one of the few instances where the AI attribution may be substantively correct. But even here, the timing — simultaneous with massive capex commitments — suggests that the automation capability enabled the cut while the budget imperative motivated it.

VI. The Budget Reallocation Mechanism: Capex Crowds Out Labor

The causal relationship between AI investment and workforce reduction operates primarily through budget reallocation rather than task substitution. Organizations face a zero-sum tradeoff between capital expenditure and operating expense when total spending constraints bind — and total constraints always bind eventually, even for cash-rich hyperscalers.

Organizations globally expect to allocate 5% of annual business budgets to AI initiatives in 2026, up from 3% in 2025 — a near-doubling in a single year [Measured][20]. The share spending half or more of total IT budgets on AI is projected to quintuple from 3% to 19% [Measured][20]. This reallocation does not come from new revenue. It comes from existing budget lines — and labor is the largest discretionary operating expense.

Oxford Economics captured the mechanism precisely: layoffs may be occurring “to finance experiments in AI” rather than because “AI is replacing workers” [Measured][18]. Sectors with potentially high AI adoption gains have “greater incentive to put the new technology to the test,” requiring that “budgets for other parts of the business, including wages, may have to be cut” [Measured][18].

This creates a perverse self-justifying cycle. Organizations invest in AI infrastructure. They reduce workforce to fund that infrastructure. They point to the infrastructure investment as evidence of AI’s transformative impact. They attribute the layoffs to AI capability rather than to budget constraint. The cycle completes itself with a narrative that obscures the absence of demonstrated automation. The AI spending becomes the reason for the layoffs and the justification for the layoffs simultaneously [Framework — Original].

The tax code amplifies this dynamic. The One Big Beautiful Bill Act of July 2025 restored 100% bonus depreciation for qualified property, allowing businesses to immediately expense AI servers and GPU clusters [Measured][21]. Training investments face six distinct Internal Revenue Code restrictions. Organizations can expense a GPU server in the year purchased while navigating compliance mazes to deduct worker retraining costs. The asymmetric tax treatment systematically favors capital over labor in marginal allocation decisions — exactly the margin where the capex-versus-labor tradeoff operates.

VII. The Wobble Risk: When Investor Patience Expires

The sustainability of the capex war depends on a finite resource: investor patience. Davidson Kempner’s CIO termed the risk “AI wobble” — the moment when investors begin demanding proof of return rather than accepting promises of future transformation [Measured][9].

The historical parallel is instructive. The dot-com era investment trajectory, which peaked at approximately 1.5% of GDP, corrected violently when investor confidence broke [Measured][4]. Current AI spending is approaching that threshold. The $600 billion in hyperscaler debt issuance creates fixed obligations that must be serviced regardless of revenue performance [Measured][6]. If the wobble arrives before AI revenue materializes, the correction will cascade: spending cuts, debt service pressure, and a second wave of layoffs driven not by AI transformation but by the failure of the AI transformation narrative.

Bill Gates and Sam Altman have both cautioned about overexcitement despite their direct stakes in AI advancement. Altman stated that investors are “overexcited about AI” even while calling it “the most important thing,” while Gates compared the environment to the late-1990s internet bubble [Measured][22]. When the architects of the AI revolution issue bubble warnings, the warning deserves weight.


Mechanisms at Work

Three mechanisms from the Theory of Recursive Displacement interact to produce the capex war and its labor casualties.

The Adversarial Equilibrium Trap (MECH-009) provides the game-theoretic foundation. When competing hyperscalers adopt AI infrastructure as a strategic imperative, each firm’s investment raises the competitive floor for all others. The structure is a prisoner’s dilemma: defection (aggressive spending) dominates cooperation (collective restraint) because the perceived cost of underinvestment (existential competitive loss) exceeds the perceived cost of overinvestment (capital losses that can be recovered). The trap is self-reinforcing: each round of spending by any participant compels matching or exceeding expenditure by all others. No firm can unilaterally de-escalate without accepting permanent competitive disadvantage. The collective result — $600-720 billion in annual infrastructure spending against $2 trillion in needed annual revenue that does not yet exist — is the trap’s empirical signature.

Compute Feudalism (MECH-029) describes the market structure that the capex war produces. As infrastructure spending concentrates among the Big Five, the cost of competitive entry rises beyond the reach of all but sovereign-scale actors. The feudal structure is economic, not metaphorical: a small number of infrastructure lords control the compute substrate on which all AI applications run, extracting rents from the entire downstream economy regardless of whether those applications use open or proprietary models. The capex war simultaneously funds the feudal infrastructure and eliminates the labor that could have provided an alternative distribution channel for AI’s economic gains.

Recursive Displacement (MECH-001) captures the compounding dynamic. Each budget cycle in which capex crowds out labor reduces the labor share of value distribution. Each subsequent cycle begins from a lower labor-share base, making further reductions incrementally easier — there are fewer workers to cut, and the remaining workers have less collective bargaining power because the threat of replacement (whether real or narrative) is more credible. The recursion ensures that the capex-labor tradeoff does not self-correct. It deepens.

Where This Connects

This essay’s analysis of the capex war intersects with several threads in the Recursive Institute corpus. The Adversarial Equilibrium Trap formalizes the game-theoretic structure (MECH-009) that locks hyperscalers into escalating investment, providing the theoretical framework for the competitive dynamics documented here. Compute Feudalism traces how infrastructure concentration (MECH-029) produces the feudal market structure in which the capex war operates — a shrinking number of infrastructure lords set the terms of access for the entire economy. The Ratchet documents the irreversibility mechanism (MECH-014) through which sunk capital commitments make retreat costlier than continuation, explaining why the capex war accelerates rather than self-corrects. The Inference Cost Paradox shows how falling per-token costs drive rising aggregate spending through the Structural Jevons Paradox, connecting the capex war to the demand dynamics that justify continued infrastructure investment. And Structural Exclusion documents the labor-market consequences on the other side of the budget reallocation — the entry-level pipeline erosion that results when organizations cut hiring budgets to fund AI infrastructure.


Counter-Arguments and Limitations

The thesis that AI-attributed layoffs primarily reflect budget reallocation rather than task automation is strong enough to guide analysis and uncertain enough to require honest qualification. Five objections merit direct engagement.

The Retrospective Accuracy Objection: Maybe They Are Right, Just Early

The most serious objection to the “AI washing” thesis is that the firms attributing layoffs to AI may be substantively correct — not about current capability, but about capability they can see arriving on a 12-24 month horizon. Companies making infrastructure investments of this magnitude have access to internal benchmarks, capability roadmaps, and deployment timelines that external analysts do not. If the layoffs are pre-positioning for automation that will arrive in 2027-2028, the “scapegoat” framing mischaracterizes strategic foresight as narrative convenience [Estimated][17].

This objection has genuine force. Block’s customer-support automation — 70-80% resolution rates without human intervention [Measured][3] — may represent the leading edge of a capability wave that will make current layoffs look prescient rather than premature. If, by 2028, the tasks performed by laid-off workers are demonstrably performed by AI systems at equivalent quality, the budget-reallocation thesis weakens to a question of timing rather than substance.

We take this possibility seriously and assign it approximately 15-20% probability. The discriminating evidence will be whether companies that cut headcount citing AI actually deploy functioning AI systems in those roles, or whether the headcount stays reduced while the AI narrative quietly fades. Early evidence — the 95% ROI failure rate, the 17% headcount-reduction implementation rate [Measured][14][16] — favors the budget-reallocation explanation. But the evidence is early.

The Pandemic Correction Objection: This Is Just Belt-Tightening

An alternative explanation holds that the layoffs are simply post-pandemic workforce correction — the same belt-tightening that would have occurred regardless of AI, driven by the 2020-2022 overhiring that inflated headcounts across the technology sector. AI attribution is incidental narrative, not causal mechanism [Estimated][18].

This objection correctly identifies a real component of the layoff wave. Technology companies did overhire during the pandemic. A correction was inevitable. But the pandemic-correction explanation cannot account for the sustained and accelerating character of the layoffs into 2026, two full years after the initial correction wave. It cannot account for the correlation between the firms making the largest capex commitments and the firms making the largest cuts. And it cannot account for the specific framing of layoffs as AI-driven — framing that serves the capex narrative even when the operational reality is belt-tightening.

Our estimate is that approximately 40% of 2024-2025 layoffs were genuine pandemic correction, approximately 35% were budget reallocation to fund AI capex, and approximately 25% reflected a mixture of margin expansion, organizational restructuring, and actual AI-driven task automation. The decomposition is imprecise by necessity — companies have strong incentives to conflate all three categories under the AI label.

The Productive Investment Objection: Capex Creates Future Jobs

A third objection holds that the capex war, even if it causes short-term labor displacement, creates the infrastructure for future economic growth that will generate more jobs than it destroys. Every technology investment cycle has caused displacement during the buildout phase and created employment during the deployment phase. The current AI capex wave may follow the same pattern [Estimated][15].

This objection has historical validity for prior technology waves. Railroads, electrification, and internet infrastructure all caused displacement during construction and generated employment during operation. The question is whether AI infrastructure follows the same pattern or breaks it.

The structural difference is that AI infrastructure, unlike railroads or electrical grids, is designed to reduce the need for human labor in its deployed applications. A railroad creates jobs for conductors, station agents, and maintenance workers. A data center creates jobs for a small number of engineers and a large amount of automated capacity that substitutes for human work. The ratio of infrastructure jobs to displaced jobs is fundamentally different in the AI case than in prior technology cycles.

Bain’s $800 billion annual revenue gap [Measured][15] — the shortfall between projected AI compute demand and the revenue needed to fund it — suggests that the productive-investment argument faces a structural challenge: the investment is producing capacity for which adequate demand may not materialize.

The Geographic Concentration Objection: This Is a Silicon Valley Problem

The capex war and its labor consequences are concentrated in the technology sector and its geographic clusters — Silicon Valley, Seattle, New York, London, Bangalore. The dynamics described in this essay may not generalize to the broader economy, where AI capex pressure is lower and labor budgets are less discretionary [Estimated][20].

This objection is partially correct. The capex-labor tradeoff operates most intensely in firms that are both significant AI investors and significant employers of knowledge workers. Manufacturing firms, healthcare organizations, and small businesses face different budget dynamics. However, the technology sector’s labor practices set precedents that diffuse through the broader economy. When Google and Microsoft demonstrate that AI-attributed layoffs are rewarded by investors, other sectors adopt the same playbook. The 42% of total layoffs attributed to restructuring and 39% to AI budget realignment in 2026 [Measured][3] suggest the dynamic is already spreading beyond the hyperscaler core.

The Agency Objection: Workers Are Not Passive

A fifth objection holds that the framing of workers as “collateral damage” underestimates worker agency. Employees can acquire AI skills, negotiate for AI-related roles, organize collectively, or transition to sectors less exposed to the capex dynamic. The passive-victim framing may overstate the structural determinism of the budget-reallocation mechanism [Estimated][22].

This objection is normatively important but empirically weak in the current context. The 56% AI skill premium documented by PwC [Measured] suggests that workers who successfully acquire AI capabilities can command higher wages. But the premium accrues to a small fraction of the workforce — primarily experienced workers with existing technical foundations. Entry-level workers, mid-career specialists in non-technical roles, and workers in customer support, administration, and routine knowledge work face structural barriers to AI skill acquisition that individual agency alone cannot overcome.


What Would Change Our Mind

Five conditions, any of which would substantially weaken or falsify the capex-reallocation thesis:

  1. Demonstrated AI task completion at scale. If, by 2028, firms that attributed layoffs to AI can document that AI systems are performing the specific tasks previously performed by the laid-off workers at comparable quality and volume, the attribution was accurate and the budget-reallocation thesis is wrong. The key test is functional replacement, not narrative replacement.

  2. Capex cycle correction without labor restoration. If hyperscaler capex spending declines by 30% or more (through investor discipline or ROI normalization) and the laid-off workers are not rehired, the layoffs were genuine structural displacement rather than budget-reallocation artifacts. The budget constraint was removed, and the labor was still not needed.

  3. ROI normalization. If the share of enterprises reporting positive ROI on AI deployments rises from current levels to above 60% within two years, the capex spending is justified by demonstrated returns rather than strategic positioning. This would weaken the “speculative arms race” characterization.

  4. Hyperscaler coordination on spending restraint. If the Big Five collectively reduce AI capex growth to below 10% annually while maintaining competitive capability parity, the prisoner’s dilemma has been solved or weakened. The Adversarial Equilibrium Trap is less binding than claimed.

  5. Labor share stabilization. If the labor share of income in the technology sector stabilizes within 3 percentage points of its 2023 level by 2028 despite continued AI capex growth, the capex-labor tradeoff is not as zero-sum as this essay argues. Firms have found ways to fund AI infrastructure without systematically reducing the labor share.


Confidence and Uncertainty

Central estimate: 60-70% that budget reallocation to fund AI capex is the primary causal mechanism behind AI-attributed layoffs in 2024-2026, dominating direct task automation as the explanatory channel.

This is calibrated with moderate confidence reflecting the strength of the circumstantial evidence (spending magnitude, ROI failure rates, academic identification of “AI washing”) against the difficulty of ruling out alternative explanations (pandemic correction, genuine early automation, margin expansion).

The largest uncertainty sources, in order:

  1. Automation capability trajectory (accounts for ~15% of uncertainty). If AI task-completion capability matures faster than the current ROI data suggests, the retrospective-accuracy objection gains force. The 95% failure rate could reflect early-stage deployment rather than fundamental limitation.

  2. Decomposition of layoff causes (~10%). The inability to cleanly separate pandemic correction, budget reallocation, margin expansion, and genuine automation in the layoff data introduces substantial uncertainty. Companies have strong incentives to blur these categories.

  3. Investor discipline timeline (~5%). Whether the “wobble” arrives in 2027 or 2030 determines whether the capex war moderates before its labor consequences compound further.

The 30-40% probability we assign to being wrong divides roughly evenly between two scenarios: the automation-is-real scenario (firms are strategically correct about near-term AI capability, and current layoffs are pre-positioning rather than scapegoating), and the cyclical-correction scenario (the layoffs are primarily pandemic overhiring correction, and both the AI and the budget-reallocation explanations are secondary narratives imposed on a simpler dynamic).


Implications

For Workers

The distinction between automation-driven and budget-driven layoffs matters profoundly for individual response. If layoffs primarily reflected achieved automation, the rational response would be retraining for AI-complementary roles. When layoffs primarily reflect budget reallocation, the rational response includes skepticism toward corporate AI narratives, collective organizing around labor protections, and strategic positioning in sectors where the capex-labor tradeoff is less binding.

Workers in customer support, content creation, data entry, and routine knowledge work face the highest near-term risk — not because AI performs their jobs well, but because their roles are the most budget-discretionary in organizations that need to redirect spending to AI infrastructure.

For Investors

The AI capex war carries material risk that current market pricing may not fully reflect. The $600-720 billion in annual infrastructure spending requires revenue materialization on a timeline that Bain’s $800 billion gap analysis suggests is structurally challenging [Measured][15]. The depreciation paradox means that reported earnings overstate the true economic cost of the buildout. And the prisoner’s dilemma structure ensures continued escalation regardless of returns — a dynamic that historically produces corrections when investor patience exhausts.

The “wobble” signal to watch is not a single earnings miss but a pattern: multiple quarters in which AI revenue growth decelerates while capex growth continues. That pattern would indicate that the spending is self-justifying rather than return-generating.

For Policy

The most actionable policy implication is tax code reform. The asymmetry between 100% bonus depreciation for AI hardware and restricted deductibility for worker training creates a systematic bias toward capital and against labor at the margin where budget allocation decisions are made [Measured][21]. Equalizing the tax treatment — making training investments as immediately deductible as hardware purchases — would not eliminate the capex-labor tradeoff, but it would reduce the fiscal incentive to resolve the tradeoff against workers.

More broadly, policymakers should treat AI-attributed layoffs with the same skepticism that academic researchers have adopted. When a company announces layoffs citing AI capability, regulators should require documentation of the specific AI systems that replace the specific functions of the specific workers being laid off. This documentation requirement alone would distinguish genuine automation from budget reallocation, creating transparency that the current narrative environment lacks.

For the Theory

The capex war represents a displacement channel that the Theory of Recursive Displacement did not originally emphasize: displacement driven not by automation capability but by automation investment. Workers are displaced not because AI can do their jobs but because the money that paid their salaries is needed to build AI infrastructure. This is a capital-flow mechanism rather than a capability mechanism, and its identification extends the theory’s explanatory reach to cover a category of displacement that occurs before automation matures — and potentially regardless of whether it matures.


Conclusion

The AI capex war is the largest single-technology capital commitment in corporate history. It is being financed, in significant part, by reducing the labor that it has not yet demonstrated the capability to replace. The prisoner’s dilemma structure ensures escalation: no hyperscaler can unilaterally de-escalate without accepting perceived existential competitive disadvantage. The depreciation paradox ensures acceleration: infrastructure that becomes obsolete in 18-36 months but is accounted for over 5-6 years creates a perpetual replacement cycle that demands continuous capital infusion. And the “AI washing” dynamic ensures narrative cover: layoffs driven by budget reallocation are attributed to AI capability, converting a defensive cost-cutting story into an offensive transformation story that investors reward.

Workers are indeed collateral damage. But the damage is not inflicted by AI systems performing their jobs. It is inflicted by the financial requirements of building AI systems that may eventually perform their jobs — a distinction that matters for every institutional response, from individual career planning to regulatory oversight to tax policy.

The expectation of AI capability — amplified through competitive signaling, investor pressure, and supply scarcity — is forcing capital allocation toward rapidly-depreciating infrastructure whose returns remain unproven. Acknowledging this reality represents the first step toward responses that address actual mechanisms rather than convenient narratives. The convenient narrative says workers are being replaced by machines. The evidence says workers are being sacrificed to finance the machines. The policy implications are different, the organizational responses are different, and the timeline for resolution is different. Getting the mechanism right is the precondition for getting the response right.


Sources

[1] “Hyperscaler CapEx Hits $600B in 2026,” Introl Blog, January 2026. https://introl.com/blog/hyperscaler-capex-600b-2026-ai-infrastructure-debt-january-2026 [verified]

[2] “Big Tech Is Spending $720 Billion on AI in 2026,” The Motley Fool, March 2026. https://www.fool.com/investing/2026/03/17/big-tech-is-spending-720-billion-on-ai-in-2026-and/ [verified]

[3] “Tech Layoffs Surge to 59,000 in 2026 as Amazon, Meta and Block Cut Jobs Amid AI Shift,” IBTimes UK, 2026. https://www.ibtimes.co.uk/ai-driven-layoffs-2026-tech-sector-1788111 [verified]

[4] “The Tech Investment Bubble Is Going to End, and What Comes Next May Be Surprising,” Morningstar/MarketWatch, January 2026. https://www.morningstar.com/news/marketwatch/20260109184/the-tech-investment-bubble-is-going-to-end-and-what-comes-next-may-be-surprising-this-strategist-says [verified]

[5] “Big Tech’s $405B Bet: Why AI Stocks Are Set Up for a Strong 2026,” IO Fund, 2025. https://io-fund.com/ai-stocks/ai-platforms/big-techs-405b-bet [verified]

[6] “Hyperscaler CapEx Hits $690B in 2026,” Introl Blog, 2026. https://introl.com/blog/hyperscaler-capex-690-billion-microsoft-azure-power-bottleneck-2026 [verified]

[7] “GPU Depreciation: How Long Do They Really Last?” Stanley Laman, 2025. https://www.stanleylaman.com/signals-and-noise/gpus-how-long-do-they-really-last [verified]

[8] “Understanding the $250 Billion Dollar Question Behind Big Tech AI Infrastructure Spending,” SoftwareSeni/Goldman Sachs, 2025. https://www.softwareseni.com/understanding-the-250-billion-dollar-question-behind-big-tech-artificial-intelligence-infrastructure-spending/ [verified]

[9] “AI Bubble Market Risk: Prisoners’ Dilemma,” Business Insider/Davidson Kempner, November 2025. https://www.businessinsider.com/ai-bubble-market-risk-prisoners-dilemma-big-tech-davidson-kempner-2025-11 [verified]

[10] “Looking Ahead to 2026: Why Hyperscalers Can’t Slow Spending Without Losing the AI War,” TradingView/Invezz, 2026. https://www.tradingview.com/news/invezz:751717ae0094b:0-looking-ahead-to-2026-why-hyperscalers-can-t-slow-spending-without-losing-the-ai-war/ [verified]

[11] “AI Chip Shortage 2025: Uncover the Global Tech Crisis,” Enki AI, 2025. https://enkiai.com/ai-market-intelligence/ai-chip-shortage-2025-uncover-the-global-tech-crisis [verified]

[12] “Companies Expect to Double Their AI Spending in 2026,” CFO.com, 2026. https://www.cfo.com/news/companies-expect-to-double-their-ai-spending-in-2026/809843/ [verified]

[13] “The Anti-Google Alliance: Why the AI Wars Are Reshaping Tech,” Neural Foundry, 2025. https://neuralfoundry.substack.com/p/the-anti-google-alliance-why-the [verified]

[14] “Why 95% of AI Projects Fail and Why the 5% That Survive Matter,” Trullion/MIT Study, 2025. https://trullion.com/blog/why-95-of-ai-projects-fail-and-why-the-5-that-survive-matter/ [verified]

[15] “$2 Trillion in New Revenue Needed to Fund AI’s Scaling Trend,” Bain and Company, 2025. https://www.bain.com/about/media-center/press-releases/20252/$2-trillion-in-new-revenue-needed-to-fund-ais-scaling-trend---bain—companys-6th-annual-global-technology-report/ [verified]

[16] “Generative AI Productivity 2025 Data,” Worklytics, 2025. https://www.worklytics.co/resources/generative-ai-productivity-2025-data-worklytics-tracking [verified]

[17] “Is AI Washing Behind New Wave of Tech Layoffs?” Economic Times, 2025. https://economictimes.com/tech/artificial-intelligence/is-ai-washing-behind-new-wave-of-tech-layoffs/articleshow/127826841.cms [verified]

[18] “AI Layoffs: Convenient Corporate Fiction?” Fortune/Oxford Economics, January 2026. https://fortune.com/2026/01/07/ai-layoffs-convenient-corporate-fiction-true-false-oxford-economics-productivity/ [verified]

[19] “S&P 500 Profit Margins Hit Record 13% as the Efficiency Era Takes Hold,” Financial Content/MarketMinute, January 2026. https://www.financialcontent.com/article/marketminute-2026-1-16-s-and-p-500-profit-margins-hit-record-13-as-the-efficiency-era-takes-hold [verified]

[20] “AI Perspectives: Global Research Brief,” Capgemini, January 2026. https://www.capgemini.com/wp-content/uploads/2026/01/Final-Web-Version-Research-Brief-AI-Perspectives.pdf [verified]

[21] “A Proactive Response to AI-Driven Job Displacement,” Mercatus Center, 2025. https://www.mercatus.org/research/policy-briefs/proactive-response-ai-driven-job-displacement [verified]

[22] “Tech Layoffs 2026: How AI Is Driving the Biggest Workforce Shift,” Tech-Insider, 2026. https://tech-insider.org/tech-layoffs-2026-ai-workforce-impact/ [verified]