The Adversarial Equilibrium Trap: Why AI Arms Races Consume Their Own Gains

by RALPH, Research Fellow, Recursive Institute / Adversarial multi-agent pipeline · Institute-reviewed. Original research and framework by Tyler Maddox, Principal Investigator.


Bottom Line

When competing parties adopt AI in zero-sum or adversarial domains, productivity gains are not captured as cost savings or consumer benefit. They are consumed by mutual escalation, driving total costs upward rather than downward while producing no net advantage for either side. The mechanism is structurally identical to a prisoner’s dilemma: each party must adopt AI to avoid asymmetric disadvantage, but when all parties adopt, the competitive equilibrium shifts to a higher cost plateau with unchanged relative positions. Legal services provide the cleanest empirical demonstration — billing rates accelerated 9.6% across the AmLaw 200 in 2025 despite 79% AI adoption, and none of the ten AmLaw 100 firms interviewed by Harvard’s Center on the Legal Profession anticipates reducing attorney headcount — but the dynamic generalizes to every domain with adversarial structure: cybersecurity ($240 billion projected spend in 2026, growing at 11% CAGR), talent acquisition (AI talent commanding 67% salary premiums with 38% annual growth), competitive intelligence, regulatory compliance, and marketing. The Automation Trap (MECH-011) compounds the dynamic: each round of AI-driven escalation creates new complexity, oversight requirements, and integration costs that erode or reverse the efficiency gains that justified the adoption. The combined effect is an arms race that ratchets upward — more compute consumed, more human oversight required, more total cost incurred — with no party able to unilaterally de-escalate without suffering competitive disadvantage. [Framework — Original]

This finding matters for the Theory of Recursive Displacement because it demolishes the most common counter-argument to the Aggregate Demand Crisis: that AI will lower consumer prices and thereby generate compensating demand. In adversarial markets, the price reduction never reaches the consumer. It is consumed by competitive escalation. If the adversarial equilibrium operates across a sufficient fraction of economic activity, the compensating demand mechanism fails systemically.

Confidence calibration: 55-65%. The mechanism is theoretically well-grounded in game theory and supported by strong historical analogy (e-discovery cost escalation) and contemporaneous circumstantial evidence (billing rate acceleration during AI adoption). The primary evidence gap remains the absence of direct empirical measurement of bilateral-AI costs versus no-AI costs in comparable proceedings. The binding uncertainty is whether the billing structure standoff breaks before the adversarial dynamic fully locks in, and whether non-adversarial markets can generate sufficient price deflation to offset adversarial market escalation. Five falsification conditions are specified below.


The Argument

I. The Game-Theoretic Structure

The standard productivity narrative assumes that AI reduces costs, lower costs reach consumers, and consumer welfare improves. This narrative holds in cooperative or monopolistic market structures where the party capturing the efficiency gain has an incentive to pass some fraction to buyers. It fails in adversarial settings for a structural reason: each party’s incentive is not to minimize absolute cost but to maximize relative advantage over the opposing party.

The game-theoretic structure is a symmetric two-player prisoner’s dilemma. If Party A adopts AI and Party B does not, Party A gains a significant advantage: asymmetric discovery in litigation, superior threat detection in cybersecurity, faster candidate identification in talent acquisition. If Party B also adopts AI, neither gains relative advantage, but both have increased their absolute spending. If neither adopts, both save money, but each faces the risk that the other defects. The dominant strategy is mutual adoption, producing a Nash equilibrium at higher total cost [1].
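To make the payoff structure concrete, here is a minimal Python sketch of the adoption game. The payoff numbers are illustrative assumptions, not empirical estimates; only their ordering matters (unilateral advantage > mutual restraint > mutual adoption > unilateral disadvantage), and that ordering is what makes adoption strictly dominant:

```python
# A minimal sketch of the adoption game described above. Payoff values are
# illustrative assumptions; only the ordering matters.

from itertools import product

ADOPT, HOLD = "adopt", "hold"

# payoffs[(A_strategy, B_strategy)] = (payoff_A, payoff_B)
payoffs = {
    (HOLD,  HOLD):  (0,  0),    # neither spends on AI; relative positions unchanged
    (ADOPT, HOLD):  (5, -5),    # A gains asymmetric advantage at B's expense
    (HOLD,  ADOPT): (-5, 5),
    (ADOPT, ADOPT): (-2, -2),   # both pay escalation costs; no relative gain
}

def best_response(player, opponent_strategy):
    """Return the strategy maximizing this player's payoff against a fixed opponent."""
    idx = 0 if player == "A" else 1
    def payoff(s):
        key = (s, opponent_strategy) if player == "A" else (opponent_strategy, s)
        return payoffs[key][idx]
    return max([ADOPT, HOLD], key=payoff)

# A profile is a Nash equilibrium when each strategy is a best response to the other.
equilibria = [
    (a, b) for a, b in product([ADOPT, HOLD], repeat=2)
    if best_response("A", b) == a and best_response("B", a) == b
]
print("Nash equilibria:", equilibria)                  # [('adopt', 'adopt')]
print("Equilibrium payoffs:", payoffs[equilibria[0]])  # (-2, -2): worse for both than (0, 0)
```

The sketch confirms the claim in the text: mutual adoption is the unique equilibrium, and it leaves both parties strictly worse off than mutual restraint.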

Curl, Kapoor, and Narayanan identified this dynamic in their February 2026 Lawfare paper “AI Won’t Automatically Make Legal Services Cheaper,” observing that the adversarial structure of American litigation means competitive equilibria shift upward when both parties adopt productivity-enhancing technologies [Measured] [2]. Their analysis identified three bottlenecks preventing cost reduction: the adversarial escalation dynamic, regulatory barriers (unauthorized practice of law rules), and human involvement limits (judges and clients making decisions at human speed). The three bottlenecks reinforce each other but are analytically distinct.

The game operates at three nested scales, each independently sufficient to sustain the adversarial dynamic:

Firm level: Each law firm, cybersecurity vendor, or recruiting agency must adopt AI or lose competitive position against firms that have. The Thomson Reuters Institute / Georgetown Law 2026 Report on the State of the US Legal Market found law firm technology spending grew 9.7% in 2025, with knowledge management spending climbing 10.5% [Measured] [3]. These are not discretionary investments. They are competitive necessities that become fixed cost obligations — the Ratchet (MECH-014) operating at firm level.

Case/engagement level: Each litigant, defender, or recruiter must deploy AI for the specific engagement or face asymmetric disadvantage against an opponent who has. In litigation, that means matching the opponent’s AI-assisted discovery; in cybersecurity, failing to deploy AI-driven threat detection when attackers use AI-generated exploits creates an operational gap.

Infrastructure level: The hyperscalers supplying AI tools must continue investing or lose market position. Combined 2026 capital expenditure approaching $700 billion demonstrates the infrastructure-level dynamic [Measured] [4]. The nesting means that even if one level breaks free — say, bilateral agreement between opposing counsel to limit AI use — the firm-level and infrastructure-level dynamics reimpose competitive pressure.

II. The E-Discovery Precedent: When Digitization Made Litigation More Expensive

The strongest evidence for the adversarial equilibrium is not prospective but historical. Electronic discovery was supposed to reduce litigation costs by making document search faster and cheaper. It did the opposite, and the mechanism is precisely the one the adversarial equilibrium thesis predicts.

Before digitization, discovery was paper-based and inherently self-limiting. Paper is costly to create and store, imposing natural constraints on discovery scope. After digitization, the cost of electronic information generation and storage dropped to near zero, massively expanding the volume of discoverable material. Rather than reducing costs, parties exploited the explosion of digital documents to impose greater burdens on opponents [Measured] [5].

The RAND Institute for Civil Justice documented this pattern in its 2012 study Where the Money Goes (Pace & Zakaras, MG-1208-ICJ), examining eight large corporations across 57 large-volume cases and finding median per-case ESI production costs of $1.8 million, with 73% going to document review [Measured]. An ABA survey cited within the RAND report found three-quarters of respondents agreed that discovery costs, as a share of total litigation costs, had increased disproportionately due to e-discovery [Survey Data] [5].

The scale was staggering. In Apple v. Samsung II (N.D. Cal.), Samsung paid $13,100,960.35 to its e-discovery vendor UBIC for 20 months of discovery work, documented in 399 pages of vendor invoices filed with the court — covering 3.6 terabytes across 11 million documents, of which only 880,000 (8%) were actually produced to Apple [Measured — court filings] [6]. The e-discovery market overall exceeded $15 billion in 2025, with projections of $22 billion within five years [Projected — market research estimates] [7].

The mechanism is clear: when discovery became cheaper per document, the rational adversarial response was not to spend less on discovery. It was to discover more documents, impose broader preservation holds, and weaponize volume as a litigation tactic. Technology made each unit cheaper while making total volume grow faster than per-unit cost fell. This is the Automation Trap (MECH-011) in its purest adversarial form: efficiency gains were reinvested as competitive escalation, not captured as cost savings.
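The arithmetic of that mechanism is worth making explicit. Below is a back-of-the-envelope sketch in Python; the two growth rates are hypothetical round numbers chosen for illustration (the RAND study does not supply them). The point is structural: total cost rises whenever volume grows faster than per-unit cost falls.

```python
# Hypothetical e-discovery cost trajectory. Both rates are assumptions for
# illustration; the mechanism is that total = unit_cost * volume rises when
# volume growth outpaces unit-cost decline.

unit_cost = 1.00      # review cost per document (arbitrary units)
volume = 1_000_000    # documents swept into discovery

UNIT_COST_DECLINE = 0.20   # assumed: per-document cost falls 20% per period
VOLUME_GROWTH = 0.35       # assumed: discoverable volume grows 35% per period

for period in range(6):
    total = unit_cost * volume
    print(f"period {period}: unit cost {unit_cost:.2f}, "
          f"volume {volume:,.0f}, total {total:,.0f}")
    unit_cost *= (1 - UNIT_COST_DECLINE)
    volume *= (1 + VOLUME_GROWTH)

# Total cost compounds at (1 + 0.35) * (1 - 0.20) = 1.08 per period:
# 8% escalation per period despite steadily cheaper per-document review.
```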

III. Current Evidence: AI Is Following the E-Discovery Pattern

Early data from AI adoption across adversarial domains confirms that the e-discovery escalation pattern is repeating, not correcting.

Legal Services: Rates Rising Despite Adoption

Despite 79% of legal professionals reporting AI use per the 2025 Clio Legal Trends Report (10th edition; methodology: 1,702 U.S. legal professionals surveyed plus aggregated usage data), billing rates are accelerating, not falling [Survey Data] [8].

Average law firm billing rates jumped 9.6% across the AmLaw 200 in full-year 2025, per the Wells Fargo Legal Specialty Group survey covering 67 AmLaw 100 firms and 46 Second Hundred firms [Measured] [9]. Wolters Kluwer’s LegalVIEW Insights (Volume 2025-1), based on over $200 billion in anonymized legal invoice data, provides the granular picture: AmLaw 25 blended hourly rates reached $1,027, a 7.5% increase in Q1 2025 alone. Average partner rate at AmLaw 25 firms stood at $1,349. Among timekeepers who actually increased rates, the average jump exceeded 12% [Measured — invoice data] [10]. AmLaw 25 partner rates for consumer services work surged 22% year-over-year to $2,105 per hour — a sector-specific outlier driven by regulatory pressure, but illustrative of how adversarial demand amplifies rate escalation in specific practice areas [Measured].

The expected mechanism — AI reduces time-per-task, reducing billable hours, reducing client costs — is not materializing. The Best Law Firms 2025 survey (4,852 firms, 164,000+ lawyers) found 58% of firms said AI had not affected billing practices at all. Only 20% of large firms said AI had reduced billable hours for certain tasks. The most common outcome: efficiency increased without changing billable hours (36% of large firms) [Measured] [11]. Among the firms in Clio’s survey that have widely adopted AI, only 11% reduced prices, while 26% increased them and 8% added AI-specific fees [Measured] [8].

Technology spend is additive, not substitutive. Law firm technology spending grew 9.7% in 2025. Direct lawyer compensation increased 8.2%. And 90% of all legal dollars still flow through standard hourly rate arrangements [Measured] [3]. AI is being added to existing spend, not replacing it.

Cybersecurity: An Arms Race by Definition

The cybersecurity domain demonstrates the adversarial equilibrium in its most explicit form: an arms race between AI-powered attackers and AI-powered defenders where each side’s improvement compels the other’s matching investment.

Global cybersecurity spending is projected to reach $240 billion in 2026, growing at an 11% CAGR to $320 billion by 2029 [Measured] [12]. AI-driven cybersecurity spend is growing three to four times as fast as the overall market [Estimated]. Shadow AI breaches cost an average of $4.63 million per incident — $670,000 more than a standard breach — creating new categories of risk that require new categories of defensive investment [Measured] [13].

CrowdStrike’s 2026 analysis describes the dynamic explicitly: “AI vs. AI: The Cybersecurity Arms Race,” in which defensive AI tools are deployed to detect and respond to AI-generated attacks, which evolve in response to the detection capabilities, which require further defensive evolution [14]. Forty-eight percent of cybersecurity professionals identify agentic AI and autonomous systems as the single most dangerous attack vector in 2026 [Survey Data] [15]. The resource intensiveness of implementing advanced AI solutions — infrastructure, specialized expertise, continuous updating — compounds costs for defenders who must protect against an expanding threat surface while attackers need only find a single vulnerability.

The asymmetry favors escalation: the time required for defenders to trust and deploy AI-based agentic automation keeps them “well behind the advancements made in the offensive arena,” with attackers maintaining at least a one-step advantage throughout 2026 [Estimated] [16]. This temporal asymmetry means defenders must over-invest relative to current threats, paying the cost of the arms race’s next round before it arrives.

Talent Acquisition: Bidding Wars Amplified by AI

AI talent demand exceeds supply by a ratio of 3.2:1 globally, with AI roles commanding 67% higher salaries than traditional software positions and 38% year-over-year salary growth across all experience levels [Measured] [17]. Companies are investing heavily in AI recruiting tools — 84% of talent leaders plan to use AI in recruiting, and two-thirds are increasing AI tool spend in the next 6-12 months [Survey Data] [18].

The adversarial equilibrium operates on both sides of the talent market. Employers deploy AI to identify candidates faster, but candidates deploy AI to optimize applications and interview performance. The screening tools that were supposed to reduce recruiting costs instead create an escalation cycle in which both sides invest more in AI-mediated interaction without improving match quality. Gartner’s October 2025 analysis identified the AI revolution and cost pressures as the two defining forces in talent acquisition for 2026 — framing them as co-occurring rather than countervailing [19].

Regulatory Compliance: Escalation by Design

The EU AI Act’s comprehensive compliance framework for high-risk systems became fully enforceable in 2026, with penalties of up to 35 million euros or 7% of global annual turnover [Measured] [20]. Regulatory compliance is inherently adversarial — the regulated entity and the regulator have opposing interests — and AI amplifies both sides of the interaction. Companies deploy AI to meet regulatory requirements. Regulators deploy AI to verify compliance. Companies must then invest further to satisfy AI-augmented regulatory scrutiny.

The compliance cost escalation is compounded by the Automated Strategic Contention mechanism (MECH-003): as regulatory interaction becomes increasingly AI-mediated on both sides, the human capacity to audit, interpret, and validate the AI-generated compliance documentation degrades, creating a meta-level adversarial dynamic between human oversight capacity and machine-generated complexity.

IV. The Red Queen Effect: Running Faster to Stay in Place

Robert J. Couture, Senior Research Fellow at Harvard Law School’s Center on the Legal Profession, reported in February 2025, based on interviews with ten AmLaw 100 firms, that none anticipated any reduction in the need for practicing attorneys [Qualitative Interview Study, n=10] [21]. This finding coexists with reports of productivity gains greater than 100x on specific tasks — Couture cites a complaint response system that reduced associate time from 16 hours to 3-4 minutes [Measured — single task-specific data point].

The juxtaposition is the Red Queen Effect in action. A 100x productivity gain on a specific task does not reduce headcount because every firm’s opponents have access to the same tools. The gain is consumed by competitive escalation: more thorough discovery, more comprehensive briefing, more exhaustive case preparation — not converted into labor savings or cost reductions. The firms that captured those gains did not fire associates. They redeployed them to the next competitive frontier.

This connects to the Automation Trap (MECH-011) at the systemic level. Each round of AI-driven task automation creates new tasks: reviewing AI output, validating AI reasoning, managing AI tool integration, auditing AI-generated documents for hallucination, coordinating between multiple AI systems. The Wharton study on the AI efficiency trap found that “each productivity improvement becomes the new baseline, with deadlines compressing, project volumes expanding, and complexity increasing while maintaining existing headcount and resources” [Estimated] [22]. Workers report that selective AI assistance evolves into comprehensive reliance, triggering skill atrophy that further reinforces AI dependency.

The empirical signature of the Automation Trap in the AI era is striking. Developers perceive a 20% productivity gain but measure a 19% actual loss on complex tasks — the cognitive load of reviewing and debugging AI-generated code exceeds the time saved in generation for experienced workers [Measured] [23]. Seventy-seven percent of freelance workers using generative AI report that it added to their workload rather than reducing it, primarily due to review and validation overhead [Survey Data] [24]. The pattern is consistent across domains: AI automates the straightforward portion of a task while creating new, often harder, supervision and validation requirements that consume the saved time and then some.

V. The Access Paradox: AI Widening the Gap It Was Supposed to Close

The cruelest implication of the adversarial equilibrium is its effect on access. AI was supposed to democratize professional services. In legal services, the cost differential between a first-year associate at a top-25 firm ($951/hour) and AI legal research tools (~30% of that rate) suggested genuine access expansion for under-resourced parties [Measured] [25].

The access expansion is real for non-adversarial work: document drafting, form completion, basic research. But in adversarial proceedings, cost savings accrue to the party that already has sophisticated representation. When both sides have AI, the equilibrium cost is higher. When only one side has AI, the resourced party benefits, creating greater asymmetry. An ACC/Everlaw survey of 657 in-house legal professionals across 30 countries found 64% of corporate legal teams expect to reduce reliance on outside counsel as they bring AI in-house [Survey Data] [26]. Smaller clients without in-house departments face unchanged or higher costs from firms adding AI spend without reducing rates.

The access paradox mirrors the bifurcation pattern documented in Structural Exclusion: experienced, well-resourced actors are complemented by AI, while under-resourced actors face a more capable opponent at no reduction in their own costs. AI becomes a force multiplier for existing power asymmetries — the adversarial version of the concentration dynamic that drives enclosure across every domain where institutional protections attach to entities that cannot match the cost curve.

VI. Generalization: The Adversarial Fraction of the Economy

Legal services provide the cleanest empirical demonstration because litigation is explicitly zero-sum: one party wins, one loses, and every dollar of efficiency gain can be directly traced to competitive reinvestment. But the adversarial structure is not unique to litigation. The mechanism generalizes to every domain where competing parties interact:

Cybersecurity: Every defensive AI improvement compels an offensive AI improvement, and vice versa. Global spending approaching $240 billion in 2026, with AI-specific spend growing 3-4x faster, is the arms race in aggregate dollars [12].

Competitive intelligence and marketing: Every firm’s AI-driven market analysis compels matching investment by competitors. AI-generated content saturates markets, requiring still more AI to distinguish signal from machine-generated noise. The cycle is self-feeding.

Talent acquisition: AI recruiting tools matched by AI candidate optimization, with both sides investing more while match quality does not demonstrably improve. AI talent salaries commanding 67% premiums reflect the bidding war that escalation creates [17].

Regulatory compliance: AI compliance tools matched by AI enforcement tools, with each investment compelling the next. The EU AI Act’s first enforcement wave in 2026 is catalyzing compliance spending that will be matched by regulatory technology investment [20].

Geopolitical competition: National AI strategies involve adversarial dynamics between states. The CSIS analysis of trade secrets and the global AI arms race documents how nations must invest in AI capability not for absolute gains but to prevent relative disadvantage — the adversarial equilibrium at civilizational scale [27].

The fraction of economic activity operating under adversarial structure is larger than commonly appreciated. Any market with competitive bidding, opposing parties, arms-race dynamics, or regulatory tension contains an adversarial component. If the adversarial equilibrium prevents price reduction in these markets, the compensating demand mechanism that optimists rely upon to resolve the Aggregate Demand Crisis (MECH-010) fails in proportion to the adversarial fraction of the economy.

VII. The Automation Trap Amplifier (MECH-011)

The Adversarial Equilibrium Trap does not operate in isolation. It interacts with the Automation Trap to produce compounding cost escalation.

The Automation Trap describes a dynamic where each round of automation creates complexity, overhead, and fragility that erode or reverse its initial efficiency gains. In adversarial contexts, the trap is amplified: not only does automation create internal complexity (tool integration, output validation, skill atrophy), it also creates external complexity by expanding the scope of adversarial interaction.

When AI enables a law firm to review 10x more documents, opposing counsel must respond to 10x more documents. The expansion of adversarial surface area — more discovery, more comprehensive analysis, more exhaustive preparation — creates proportionally more validation work, more error-checking, more human oversight. The Automation Trap’s internal efficiency losses compound with the Adversarial Equilibrium’s external escalation to produce total costs that exceed the pre-AI baseline despite genuine per-task productivity improvements.

The Wharton efficiency trap research captures the mechanism at the individual level: 37-40% of time saved by AI gets consumed by reviewing, correcting, and verifying AI output [22]. At the system level, the adversarial dynamic means the remaining ~60% of saved time is consumed by the escalated scope of the adversarial interaction. The net effect: more total work done, more thoroughly, at higher total cost, with no improvement in outcomes relative to the non-AI baseline.
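A simple time-accounting sketch makes the compounding explicit. The validation share below reflects the Wharton figure cited above; the 50% raw speedup and the escalation share are assumptions standing in for the claim that adversarial scope expansion absorbs the remainder.

```python
# Time accounting for the two compounding losses. VALIDATION_SHARE tracks the
# Wharton figure cited in the text; RAW_SPEEDUP and ESCALATION_SHARE are
# assumptions for illustration.

BASELINE_HOURS = 100.0    # hours a matter consumed before AI
RAW_SPEEDUP = 0.50        # assumed: AI halves time on the task itself
VALIDATION_SHARE = 0.38   # ~37-40% of saved time spent reviewing AI output
ESCALATION_SHARE = 0.62   # assumed: remainder consumed by expanded adversarial scope

saved = BASELINE_HOURS * RAW_SPEEDUP
validation_overhead = saved * VALIDATION_SHARE
escalation_overhead = saved * ESCALATION_SHARE

net_hours = BASELINE_HOURS - saved + validation_overhead + escalation_overhead
print(f"saved: {saved:.0f}h, validation: {validation_overhead:.0f}h, "
      f"escalation: {escalation_overhead:.0f}h, net: {net_hours:.0f}h")
# net: 100h -- the gain is fully reabsorbed when the two shares sum to 1,
# and net exceeds the baseline whenever escalation outgrows the time saved.
```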

This is not a temporary adjustment period. It is the structural equilibrium that adversarial competition under AI produces. The equilibrium is stable because no party can unilaterally de-escalate: reducing AI investment creates asymmetric disadvantage. The only exit is coordinated de-escalation — bilateral or multilateral agreement to limit AI deployment — which the competitive structure of adversarial markets systematically prevents.

VIII. The Billing Structure Standoff

The Thomson Reuters / Georgetown 2026 report captures the structural impasse directly. Firms deploy technology that accomplishes in minutes what once took hours, then bill by the hour. Corporate legal departments want firms to propose innovative billing arrangements incorporating AI efficiencies. Firms complain that clients evaluate everything by converting back to hourly rates. Both sides wait for the other to blink [3].

This standoff is not a negotiation failure. It is the adversarial equilibrium operating at the market structure level. Firms that unilaterally reduce prices lose revenue. Clients that unilaterally demand cuts lose access to firms investing in AI capability. The equilibrium is structural, not incidental.

The resolution of the standoff would require one of three developments: (1) a regulatory mandate (courts requiring cost reduction as a condition of AI deployment); (2) a disruptive new entrant that bypasses the existing cost structure entirely (an AI-native firm that operates without the legacy cost base); or (3) client coordination (corporate legal departments collectively demanding alternative fee arrangements). None of these appear imminent. The standoff may persist for years, during which the adversarial equilibrium continues to escalate costs.


Mechanisms at Work

The Adversarial Equilibrium Trap (MECH-009): When competing parties adopt AI in zero-sum domains, productivity gains are neutralized by mutual escalation, driving costs upward instead of downward. The mechanism is a Nash equilibrium: each party’s dominant strategy is to adopt, producing a collective outcome (higher costs, unchanged relative positions) that no party prefers but none can escape unilaterally.

Automated Strategic Contention (MECH-003): The delegation of strategy to autonomous AI agents in conflict contexts. In the adversarial equilibrium, this mechanism compounds escalation by removing human speed limits on competitive response: AI-mediated contention operates at machine speed, allowing the escalation cycle to accelerate beyond the pace at which human judgment can intervene.

The Automation Trap (MECH-011): The dynamic where each round of automation creates complexity, overhead, and fragility that erode initial efficiency gains. In adversarial contexts, the trap is amplified: internal complexity (validation overhead) compounds with external escalation (expanded adversarial surface area) to produce total costs that exceed pre-AI baselines.

Interaction effects: MECH-009 establishes the structural dynamic (mutual escalation). MECH-003 accelerates it (machine-speed contention). MECH-011 compounds it (complexity overhead on top of escalation costs). Together, the three mechanisms produce a ratcheting cost escalation that is faster, more complex, and harder to reverse than any single mechanism would produce alone.


Counter-Arguments and Limitations

The Temporary Adjustment Objection

The billing rate acceleration and cost escalation may represent a temporary adjustment period during which the legal market absorbs new technology before settling to a lower equilibrium. Historical precedent: early automobile adoption increased transportation costs before mass production drove prices down. The adversarial equilibrium may break once AI tools become commoditized and pricing pressure from AI-native entrants forces established firms to compete on price.

This objection has a plausible theoretical mechanism. If AI tools become sufficiently commoditized that both sides of an adversarial proceeding can deploy equivalent capability at negligible marginal cost, the arms race dynamic weakens because there is no competitive advantage to be gained from further investment. The critical question is whether commoditization occurs before the adversarial dynamic locks in institutional structures (billing practices, staffing models, regulatory expectations) that are resistant to reversal. The e-discovery precedent is discouraging: digitization of discovery began in the early 2000s, and discovery costs have escalated for over two decades with no sign of the predicted equilibrium reduction. If AI follows the same trajectory, the “temporary adjustment” lasts longer than most firms’ or clients’ planning horizons.

The Non-Adversarial Markets Objection

The adversarial equilibrium applies only to markets with adversarial structure. Many of the largest consumer-facing markets — retail, entertainment, food service, transportation — are competitive but not adversarial in the prisoner’s-dilemma sense. AI may drive genuine price reduction in these markets, generating consumer surplus that offsets the cost escalation in adversarial markets. The demand crisis is averted if non-adversarial price deflation exceeds adversarial price escalation.

This objection requires empirical engagement. It is correct that many consumer markets are not structurally adversarial: a retailer and its customer are not in a zero-sum game. If AI genuinely reduces retail prices, the consumer benefits directly. The question is magnitudes. Legal services, cybersecurity, financial services, healthcare (where provider-insurer interaction is adversarial), and regulatory compliance collectively represent a large fraction of economic activity. If adversarial cost escalation in these sectors outweighs non-adversarial price reduction in retail and consumer services, the net effect on consumer welfare is negative. We cannot resolve this magnitude question with current data, and we flag it as the most important empirical gap in the analysis.
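The magnitude question can at least be framed precisely. The sketch below expresses the net consumer price effect as a weighted sum of escalation in adversarial markets and deflation in non-adversarial ones. All three parameters are unknowns; the values used are placeholders, not measurements.

```python
# Net consumer price effect as a weighted sum. All parameter values below are
# placeholder assumptions illustrating how the sign flips with the adversarial
# fraction of economic activity.

def net_price_effect(adversarial_fraction: float,
                     adversarial_escalation: float,
                     competitive_deflation: float) -> float:
    """Weighted average annual price change; positive means consumers pay more overall."""
    return (adversarial_fraction * adversarial_escalation
            - (1 - adversarial_fraction) * competitive_deflation)

# Assumed rates: adversarial markets escalate 8%/yr, competitive markets
# deflate 3%/yr. Breakeven adversarial fraction = 0.03 / (0.08 + 0.03) ~= 27%.
for frac in (0.10, 0.27, 0.40):
    effect = net_price_effect(frac, 0.08, 0.03)
    print(f"adversarial fraction {frac:.0%}: net price change {effect:+.2%}/yr")
```

Under these placeholder rates, the compensating demand mechanism fails once roughly a quarter of economic activity is adversarial; the empirical task is estimating all three parameters.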

The Bilateral Agreement Objection

Adversarial parties can reach bilateral agreements to limit AI deployment, just as arms control agreements limit military escalation. In litigation, judges could impose proportionality requirements that cap AI-driven discovery expansion. In cybersecurity, industry standards could limit escalation. The adversarial equilibrium is not inescapable.

This objection identifies a real possibility but underestimates the coordination difficulty. The nesting of the adversarial dynamic across three scales (firm, case, infrastructure) means that even if case-level agreement is reached, firm-level and infrastructure-level competition reimpose the pressure. A judge who limits discovery in one case does not prevent the firm from deploying its AI capability in other cases. A cybersecurity standard that limits defensive spending does not prevent attackers from escalating. The coordination problem is multi-level and global, making bilateral agreements necessary but insufficient to break the equilibrium.

The Quality Improvement Objection

Even if total costs rise, the quality of adversarial outcomes may improve. More thorough discovery may produce more just litigation outcomes. More comprehensive threat detection may reduce breach severity. More exhaustive regulatory compliance may improve public safety. The adversarial equilibrium may be producing genuine value even if that value manifests as quality improvement rather than cost reduction.

This objection has merit and represents a genuine limitation of the cost-focused analysis. If bilateral AI adoption in litigation produces more accurate verdicts, more complete discovery of relevant evidence, and more thorough legal analysis, the quality improvement is real welfare gain even if costs rise. The counter-counter-argument is that quality improvement in adversarial contexts is asymmetric: the party with greater AI capability achieves better outcomes, widening the access gap rather than improving justice system-wide. And quality improvement that is inaccessible to under-resourced parties is not a system-level welfare gain — it is a redistribution of quality toward those who can pay. The net welfare effect depends on whether quality gains are broadly shared or concentrated, and the evidence from the access paradox section suggests concentration.

The Scope Limitation

The adversarial equilibrium thesis has been tested most rigorously in legal services. The generalization to cybersecurity, talent acquisition, and regulatory compliance is supported by structural analogy and partial evidence, but not by the same depth of empirical data available for the legal market. It is possible that the dynamics operate differently in domains where the adversarial structure is less clean — where there are elements of cooperation, information sharing, or regulatory constraint that moderate the arms race. We flag this scope limitation explicitly and note that the thesis would be strengthened by dedicated empirical analysis of bilateral AI costs in non-legal adversarial domains.


What Would Change Our Mind

  1. Average litigation costs per case decline by 15% or more within three years of widespread bilateral AI adoption, controlling for case type and complexity. This would indicate that adversarial escalation is not consuming efficiency gains.

  2. The e-discovery cost pattern reverses. Total e-discovery spending declines in absolute terms despite continued growth in discoverable data volume. If the historical precedent does not generalize to AI, the predictive framework weakens.

  3. Alternative fee arrangements (AFAs) reach 50% or more of legal billing by revenue within two years, with demonstrated lower total costs to clients. Note: a shift to AFAs alone does not falsify the thesis if total costs per case continue to rise.

  4. Direct empirical measurement shows bilateral-AI cases cost less than comparable no-AI cases on a total-cost basis. This is the kill shot. If the data shows that when both sides adopt AI, total costs fall, the mechanism is wrong and should be retracted.

  5. Consumer prices in adversarial domains (legal services, cybersecurity, financial advisory) decline measurably for under-resourced buyers within three years of widespread AI adoption. This would indicate that the access paradox is not operating and that AI efficiency gains are reaching the parties who need them most.


Confidence and Uncertainty

Central estimate: 55-65% that the adversarial equilibrium mechanism is producing structural cost escalation rather than a temporary adjustment period.

What drives confidence upward: The e-discovery historical precedent (two decades of escalation following a technology that was supposed to reduce costs). The billing rate data (9.6% acceleration during a period of rapid AI adoption). The game-theoretic structure (mutual adoption as dominant strategy is not an empirical claim — it is a logical implication of adversarial incentives). The generalizability across multiple domains (cybersecurity spending growing at 11% CAGR, AI talent premiums at 67%, regulatory compliance costs escalating with first AI Act enforcement wave). The nested nature of the dynamic across firm, case, and infrastructure levels.

What drives confidence downward: The absence of direct empirical measurement of bilateral-AI costs versus no-AI costs. The possibility that the billing rate acceleration reflects factors other than adversarial escalation (labor market tightness, pandemic-era catch-up, general inflation). The possibility that AI tool commoditization will eventually break the arms race. The Clio survey’s broad definition of “AI use” (only 8% adopting “universally,” 17% “widely”) may mean that true bilateral adoption has not yet been tested.

Binding uncertainty: Whether the adversarial equilibrium is a permanent structural feature of AI-augmented competition or a transient phenomenon that resolves as AI tools commoditize and billing structures adapt. The e-discovery precedent suggests permanence. The historical trajectory of other technologies suggests eventual equilibrium at lower costs. The data as of March 2026 cannot distinguish between these outcomes.


Implications

For the Aggregate Demand Crisis: The adversarial equilibrium is a direct rebuttal to the “AI will lower prices and create new demand” argument. If AI efficiency gains are consumed by competitive escalation rather than passed through to consumer prices, the compensating demand mechanism fails in every adversarial market. The demand crisis proceeds without the price reduction that optimists project.

For AI governance: The adversarial equilibrium suggests that AI deployment in adversarial contexts may require regulatory intervention to prevent socially wasteful escalation. Proportionality requirements in litigation discovery, international cybersecurity norms analogous to arms control, and AI talent market regulation could potentially moderate the arms race — but each requires coordination across parties whose competitive incentives resist coordination.

For corporate strategy: Firms in adversarial markets face a version of the tragedy of the commons: each firm’s individually rational AI investment contributes to industry-wide cost escalation that harms all firms collectively. The strategic implication is that firms cannot compete their way out of the adversarial equilibrium. Only structural change — in billing models, regulatory frameworks, or industry coordination — can break the cycle.

Where This Connects: The AI Capex War documents the same prisoner’s dilemma at the infrastructure level. The Aggregate Demand Crisis argues that AI-driven cost optimization collectively destroys consumer demand; the adversarial equilibrium is the mechanism that prevents the standard escape. The Ratchet documents how capital commitments at the firm and infrastructure level can only tighten, making de-escalation from the adversarial equilibrium progressively more costly. The Entity Substitution essay documents how the legal profession’s protections erode from below while the adversarial equilibrium inflates costs from above — a squeeze from both directions. The Automation Trap provides the micro-level mechanism (complexity accumulation) that compounds the macro-level adversarial escalation. The Wage Signal Collapse documents a downstream effect specific to legal services: 100x productivity gains on junior tasks degrade the perceived value of junior work, compressing the wage signal for legal careers even as headcount is maintained.


Conclusion

The adversarial equilibrium trap reveals a structural limit on the economic benefits of AI. In cooperative or monopolistic markets, AI efficiency gains can theoretically be shared between producers and consumers. In adversarial markets, those gains are consumed by competitive escalation — the treadmill runs faster, but neither side advances. The legal services data shows this dynamic in its purest form: 100x productivity gains on specific tasks, no reduction in headcount, accelerating billing rates, escalating technology spend, and a billing structure standoff in which neither party can unilaterally break the cycle.

The trap is not a bug in AI deployment. It is a structural feature of adversarial competition under conditions of technological capability escalation. It operates wherever opposing parties interact: courtrooms, cyberspace, talent markets, regulatory proceedings, geopolitical competition. The fraction of economic activity subject to this dynamic is larger than commonly appreciated, and the compensating demand mechanism that optimists rely upon to resolve the demand crisis fails in proportion to that fraction.

The most important finding is not that AI fails to produce efficiency gains in adversarial contexts. It does produce them — sometimes enormous ones. The finding is that those gains are captured by competitive escalation rather than distributed to consumers. This is why the adversarial equilibrium matters for the Theory of Recursive Displacement: it is the mechanism that prevents AI’s productive potential from becoming AI’s consumer benefit, ensuring that the demand crisis proceeds even as output capacity expands.


Sources

[1] Nash, J. “Non-Cooperative Games.” Annals of Mathematics 54(2), 1951.

[2] Curl, J., Kapoor, S. & Narayanan, A. “AI Won’t Automatically Make Legal Services Cheaper.” Lawfare, February 12, 2026.

[3] Thomson Reuters Institute / Georgetown Law Center on Ethics and the Legal Profession. “2026 Report on the State of the US Legal Market.” January 7, 2026.

[4] Tech Insider. “Big Tech AI Infrastructure Spending 2026: The $700B Race.” January 2026. https://tech-insider.org/big-tech-ai-infrastructure-spending-2026/

[5] RAND Institute for Civil Justice. Where the Money Goes: Understanding Litigant Expenditures for Producing Electronic Discovery. Pace & Zakaras, MG-1208-ICJ, 2012.

[6] Apple Inc. v. Samsung Electronics Co. (N.D. Cal.). UBIC vendor invoices filed with the court, 399 pages.

[7] Research and Markets, IMARC Group, Fortune Business Insights. E-discovery market size estimates, 2025-2029.

[8] Clio. “2025 Legal Trends Report.” 10th edition, October 16, 2025.

[9] Wells Fargo Legal Specialty Group. “Mid-Year 2025 Law Firm Survey” and Full-Year 2025 results.

[10] Wolters Kluwer. “LegalVIEW Insights Volume 2025-1: Benchmarking Through Complexity.”

[11] Best Law Firms / BL Rankings, LLC. 2025 survey of approximately 4,852 U.S. law firms.

[12] J.P. Morgan Private Bank. “AI vs. AI: The Arms Race for Security.” 2026. https://privatebank.jpmorgan.com/nam/en/insights/markets-and-investing/tmt/ai-vs-ai-the-arms-race-for-security

[13] PurpleSec. “The Top AI Security Risks (Updated 2026).” https://purplesec.us/learn/ai-security-risks/

[14] CrowdStrike. “AI vs AI: The Cybersecurity Arms Race.” 2026. https://www.crowdstrike.com/en-us/blog/ai-vs-ai-cybersecurity-arms-race/

[15] Bessemer Venture Partners. “Securing AI Agents: The Defining Cybersecurity Challenge of 2026.” https://www.bvp.com/atlas/securing-ai-agents-the-defining-cybersecurity-challenge-of-2026

[16] Dark Reading. “Cyber Predictions 2026: AI Arms Race; Malware Autonomy.” https://www.darkreading.com/cyber-risk/cybersecurity-predictions-2026-an-ai-arms-race-and-malware-autonomy

[17] Rise. “AI Talent Salary Report 2026.” https://www.riseworks.io/blog/ai-talent-salary-report-2025

[18] Korn Ferry. “TA Trends 2026: Human-AI Power Couple.” https://www.kornferry.com/insights/featured-topics/talent-recruitment/ai-in-recruitment-trends

[19] Gartner. “AI Revolution and Cost Pressures Are Two Forces Driving the Top Four Trends for Talent Acquisition in 2026.” Press Release, October 7, 2025. https://www.gartner.com/en/newsroom/press-releases/2025-10-07-gartner-says-ai-revolution-and-cost-pressures-are-two-forces-driving-the-top-four-trends-for-talent-acquisition-in-2026

[20] Mimecast. “Cybersecurity Predictions 2026: Olympic-Level Threats & AI Arms Races.” https://www.mimecast.com/blog/olympiclevel-threats—ai-arms-races—navigating-cybersecurity-in-2026/

[21] Couture, R.J. “The Impact of Artificial Intelligence on Law Firms’ Business Models.” Harvard Law School Center on the Legal Profession, February 24, 2025.

[22] Wharton School / Knowledge at Wharton. “The AI Efficiency Trap: When Productivity Tools Create Perpetual Pressure.” 2025-2026. https://knowledge.wharton.upenn.edu/article/the-ai-efficiency-trap-when-productivity-tools-create-perpetual-pressure/

[23] AvePoint. “The AI Productivity Trap: Automation & Business Efficiency.” 2025. https://www.avepoint.com/shifthappens/blog/the-ai-productivity-trap

[24] ToolFountain. “AI Productivity Statistics 2026: An Exhaustive Analysis of Time, Output, and Economic Impact.” https://toolfountain.com/ai-productivity-statistics/

[25] Engstrom, D.F. & Gelbach, J.B. “Legal Tech, Civil Procedure, and the Future of Adversarialism.” University of Pennsylvania Law Review 169, 2020.

[26] ACC / Everlaw. “GenAI Strategic Value for Corporate Law Departments.” 3rd annual edition, October 14, 2025.

[27] CSIS. “Protecting Our Edge: Trade Secrets and the Global AI Arms Race.” 2025-2026. https://www.csis.org/analysis/protecting-our-edge-trade-secrets-and-global-ai-arms-race

[28] Redwood Software. “AI and Automation Trends 2026: From Efficiency to Enterprise Resilience.” https://www.redwood.com/article/ai-automation-trends/

[29] Censinet. “The Cybersecurity AI Arms Race: Staying Ahead of Automated Threats.” https://censinet.com/perspectives/cybersecurity-ai-arms-race-staying-ahead-threats