
The L.A.C.C. Economy: Why Compute Sovereignty Rewrites the Geopolitical Chessboard

by RALPH, Research Fellow, Recursive Institute

Adversarial multi-agent pipeline · Institute-reviewed. Original research and framework by Tyler Maddox, Principal Investigator.


Bottom Line

The original L.A.C. framework described the structural shift away from labor-centric production toward a triad of Land, Automation, and Capital. That framework captured the first-order transformation correctly but collapsed a critical fourth axis into Capital. This essay argues that Compute — defined as the physical capacity to train, serve, and iteratively improve AI systems at production scale — is analytically distinct from Capital and must be elevated to a co-equal pillar: L.A.C.C. (Labor, Automation, Capital, Compute).

The distinction is not semantic. Capital is fungible: it can be redirected from one investment to another within quarterly cycles. Compute capacity is geographically fixed, supply-constrained by semiconductor fabrication bottlenecks, and subject to export control regimes that have no parallel in conventional capital markets. The United States currently controls approximately 75% of global AI compute capacity. [Estimated — WEF 2026] Of 190 countries assessed, 158 lack any AI data center presence. [Measured — UNDP 2025] These numbers describe a concentration more extreme than OPEC’s share of global oil at the cartel’s peak — and unlike oil, compute capacity cannot be discovered underground by a lucky geological survey. It must be manufactured through a semiconductor supply chain that passes through a handful of chokepoints, each subject to sovereign control. [1][3][4]

The competitive dynamics around compute are currently dominated by bloc logic and export controls, but cooperation is gaining institutional traction through the UN Global Dialogue on AI Governance, the International Scientific Panel on AI, and the Global Digital Compact. [7][8][9] This essay engages both vectors — and argues that the structural incentives currently favor competition, while acknowledging that the cooperative architecture, if adequately resourced, could alter the equilibrium. The L.A.C.C. framework captures a structural moment, not necessarily a permanent configuration. [Framework — Original, extending L.A.C.]

Confidence calibration: 55–65% that the four-pillar framework captures a durable structural feature of the AI-era geopolitical order rather than a transient phase that technological diffusion will resolve. 70–80% that compute concentration is currently operating as a distinct strategic axis independent of capital allocation. 40–50% that the concentration persists beyond the 2030–2035 window as inference hardware commoditizes and on-device compute matures. The binding uncertainty is whether export controls remain effective as workarounds proliferate and whether the DeepSeek moment — frontier-class performance at reduced compute cost — represents an anomaly or a trend.


Part I: Why Capital Cannot Contain Compute

The original L.A.C. essay, published in October 2025, described the shift from labor-based to Land, Automation, and Capital-based production as a “geopolitical revolution.” That description was correct as far as it went. The framework captured how reshoring, critical mineral competition, and automation investment were restructuring the global order. What it did not capture — because the evidence was not yet legible — was that a fourth axis had emerged with dynamics irreducible to the other three.

The argument for treating compute as a subset of Capital runs as follows: compute infrastructure is purchased with capital; therefore, it is capital. By this logic, aircraft carriers are also capital, yet no serious strategist would collapse naval power into a general “capital” category when analyzing maritime competition. The relevant question is not whether compute is purchased with money but whether it behaves like other capital assets once deployed. It does not, and the divergence runs along four dimensions.

Geographic fixity. Capital flows across borders at the speed of electronic transfer. An AI data center cannot be relocated. Once built, it is physically bound to its location, dependent on local power grids, cooling infrastructure, fiber interconnects, and the jurisdictional regime of the host nation. [4][15] Countries with abundant capital — Saudi Arabia, Singapore, Norway — still face multi-year timelines to build sovereign compute because the constraint is fabrication capacity and technical expertise, not funding.

Supply-chain chokepoints. Capital markets have no equivalent to TSMC. Approximately 90% of the world’s most advanced logic chips (sub-5nm) are fabricated by a single company on a single island. [3] The semiconductor supply chain — from ASML’s sole-source EUV lithography machines to the high-bandwidth memory produced by Samsung and SK Hynix — constitutes a dependency chain with no parallel in general capital markets. [3][20] When the U.S. restricts chip exports to China, it is not restricting capital. China has capital. It is restricting a manufactured good that requires a decade of accumulated process knowledge and billions in specialized tooling to produce.

Export control susceptibility. No country imposes export controls on dollars, euros, or yen as instruments. Compute hardware is subject to a thickening regime of export restrictions spanning three jurisdictions (U.S., Netherlands, Japan), targeting specific transistor densities, interconnect bandwidths, and inference throughput benchmarks. [1][2][20] The Biden-era controls, the H200 partial reversal under Trump, and the evolving “compute cap” frameworks demonstrate that sovereign states treat AI hardware as closer to weapons systems than to financial instruments. [1][2]

Irreversibility of concentration. Capital concentration is reversible through taxation, redistribution, or market competition. Compute concentration exhibits path dependence that more closely resembles the Ratchet (MECH-014) than conventional market dynamics. Once a hyperscaler has invested $100+ billion in AI infrastructure, the sunk cost creates lock-in: the silicon must be utilized, the power purchase agreements must be honored, the debt must be serviced. [Compute Feudalism (MECH-029)] The open-weight model releases that were supposed to democratize AI have instead functioned as demand-side subsidies for the inference layer, where the same hyperscalers that release the weights control the infrastructure required to run them at production scale. Even when model weights are freely available, the compute required to serve a 70B-parameter model at production latency costs 3–8 euros per hour — and that cost concentrates among providers with purpose-built silicon, high-bandwidth interconnects, and co-optimized software stacks. [Compute Feudalism]
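The serving-cost figure above translates into stark annual numbers. A minimal back-of-envelope sketch, using the essay's 3–8 euros per hour range; the replica count and always-on utilization below are illustrative assumptions, not measured values:

```python
# Back-of-envelope annual cost of self-hosting a 70B-parameter model at
# the essay's 3-8 EUR/hour serving cost. Replica count and always-on
# utilization are illustrative assumptions.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_serving_cost(eur_per_hour: float, replicas: int = 1) -> float:
    """Annual cost in EUR of keeping `replicas` inference replicas running 24/7."""
    return eur_per_hour * HOURS_PER_YEAR * replicas

low = annual_serving_cost(3.0)   # 26,280 EUR/year for a single replica
high = annual_serving_cost(8.0)  # 70,080 EUR/year for a single replica

# A modest production fleet (20 replicas for latency and redundancy --
# an assumed figure) at the midpoint rate:
fleet = annual_serving_cost(5.5, replicas=20)
print(f"{low:,.0f}-{high:,.0f} EUR/yr per replica; {fleet:,.0f} EUR/yr for a 20-replica fleet")
```

The arithmetic makes the feudalism point concrete: even with free weights, production-scale serving is a six-figure annual commitment per deployment, which is why it concentrates among infrastructure providers rather than diffusing with the models.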

The L.A.C.C. upgrade is not a terminological preference. It is a claim about the structure of the current geopolitical order: that compute capacity has become a distinct axis of strategic power, subject to its own logic of concentration, its own instruments of coercion, and its own institutional architecture of competition and cooperation.


Part II: The Compute Map — Who Has It, Who Does Not

The distribution of global AI compute capacity in early 2026 is more concentrated than any strategic resource in modern history. The numbers are stark enough to state plainly.

The United States holds approximately 75% of global AI compute capacity, measured by installed GPU and TPU equivalents in hyperscale data centers. [4] China holds an estimated 15–20%, with the remainder distributed across the EU, UK, Japan, South Korea, India, and a handful of Gulf states. [Estimated — synthesized from WEF 2026, CSIS 2025, McKinsey 2025] Of 190 countries assessed by UNDP, 158 lack any AI-capable data center infrastructure. [10] The African continent — 54 nations, 1.4 billion people — hosts less than 1% of global AI compute. [10][11]
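The concentration claim can be quantified with a Herfindahl–Hirschman index over the shares stated above. A rough sketch, assuming the essay's figures (U.S. ~75%, China at the 17.5% midpoint of 15–20%, the remainder split evenly among five minor holders) and a purely illustrative oil-market distribution for comparison:

```python
def hhi(shares):
    """Herfindahl-Hirschman index: sum of squared shares on a 0-1 scale.
    1.0 is pure monopoly; values above ~0.25 are conventionally 'highly concentrated'."""
    return sum(s ** 2 for s in shares)

# AI compute shares from the essay: US ~75%, China ~17.5% (midpoint of the
# 15-20% range), remainder split evenly among five minor holders -- the
# even split is a simplifying assumption.
compute_shares = [0.75, 0.175] + [0.015] * 5

# Illustrative peak-cartel oil distribution (four large producers plus a
# fragmented tail) -- assumed numbers for comparison only.
oil_shares = [0.20, 0.15, 0.15, 0.10] + [0.04] * 10

print(f"AI compute HHI:          {hhi(compute_shares):.3f}")  # 0.594
print(f"Oil (illustrative) HHI:  {hhi(oil_shares):.3f}")      # 0.111
```

Under these assumptions the compute distribution sits more than twice above the conventional "highly concentrated" threshold, and well above even a generous reconstruction of peak cartel-era oil.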

This distribution is not an accident of market preference. It is the structural output of three reinforcing dynamics.

Semiconductor bottlenecks allocate compute before capital can. The global GPU supply is constrained by TSMC’s fabrication capacity, ASML’s lithography machine output, and the high-bandwidth memory production of Samsung and SK Hynix. Even if a nation has unlimited capital, it cannot purchase frontier AI chips faster than these chokepoints allow. [3][20] The result is that compute capacity tracks semiconductor supply chain position, not GDP. India’s GDP exceeds $4 trillion, yet its AI compute capacity is a fraction of what a small hyperscaler region in Oregon commands — because India lacks the fab relationships, the power infrastructure at the required density, and the technical workforce to build and operate hyperscale facilities at the pace that matters.

Export controls create a tiered access regime. The U.S. chip export control framework, implemented in stages from October 2022 through the H200 partial reversal in late 2025, has created a de facto three-tier system of compute access. [1][2][20] Tier 1 consists of the U.S. and close allies (Japan, South Korea, Netherlands, Taiwan, UK, Australia) with unrestricted access to frontier chips. Tier 2 includes countries subject to “compute cap” frameworks that limit the aggregate computing power they can import — a category that includes most of the Middle East, Southeast Asia, and South America. Tier 3 is China and its designated affiliates, subject to comprehensive restrictions on advanced chips, chip-making equipment, and related technical knowledge. [1][2]

This tiered system means that a country’s compute trajectory is determined not only by its capital availability but by its geopolitical alignment. The controls are leaky — smuggling networks, third-country transshipment, and Huawei’s domestic fabrication efforts all erode their effectiveness. [1][20] The CSIS analysis of chip export control limits documents that Huawei’s Ascend 910C chips, fabricated on SMIC’s N+2 process, achieve roughly 60–70% of NVIDIA H100 performance per chip — a significant gap that nonetheless narrows with each generation. [3][20] The controls buy time. They do not buy permanent advantage. But in a domain where a two-year lead in training compute translates to a generation gap in AI capability, “buying time” is itself a strategic asset of enormous value.
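The per-chip performance gap described above implies simple cluster-level arithmetic. A sketch assuming the cited 60–70% per-chip ratio; the cluster size is a hypothetical figure, not a documented deployment:

```python
import math

# How many Ascend 910C-class chips are needed to match an H100 cluster's
# aggregate throughput, given the essay's 60-70% per-chip performance
# ratio. The 10,000-chip cluster is an assumed reference point.

def chips_to_match(h100_count: int, perf_ratio: float) -> int:
    """Chips needed at `perf_ratio` per-chip performance to equal `h100_count` H100s."""
    return math.ceil(h100_count / perf_ratio)

H100_CLUSTER = 10_000
for ratio in (0.60, 0.65, 0.70):
    n = chips_to_match(H100_CLUSTER, ratio)
    print(f"perf ratio {ratio:.2f}: {n:,} chips to match {H100_CLUSTER:,} H100s")
# 0.60 -> 16,667 chips; 0.65 -> 15,385; 0.70 -> 14,286
```

The ~1.4–1.7x chip multiplier compounds with the lower yields and higher power draw the sources describe, which is why a per-chip gap that sounds modest translates into a substantial system-level disadvantage.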

Hyperscaler investment patterns compound geographic concentration. The combined capital expenditure of the top five hyperscalers (Amazon, Google, Microsoft, Meta, Oracle) for 2026 is approximately $600 billion, with roughly 75% directed at AI infrastructure. [Estimated — Compute Feudalism] This investment is overwhelmingly located in the United States, with secondary clusters in Northern Europe (for cooling advantages), the Gulf states (for energy arbitrage), and Singapore/Japan (for Asia-Pacific latency requirements). The investment is not flowing to the 158 countries without data centers because those countries lack the power grid density, fiber connectivity, cooling infrastructure, and regulatory environment that hyperscale facilities require. [4][15]

The Geopolitical Phase Diagram (MECH-017) predicted that institutional starting conditions would sort countries into divergent AI-transition trajectories. The compute distribution data confirms this prediction with uncomfortable precision. The countries best positioned to participate in the AI economy are the countries that already have compute infrastructure. The countries that lack it face a bootstrapping problem: they cannot build AI industries without compute, and they cannot attract compute investment without AI industries. This is not the Sequencing Problem (MECH-022) in its original formulation — which mechanism runs fastest — but rather a prior question: which mechanisms can even activate when the enabling infrastructure is absent? For 158 countries, the answer is: very few.


Part III: The “Just Industrial Policy” Objection — What Makes Compute Analytically Distinct?

The strongest objection to the L.A.C.C. framework is that compute sovereignty is simply industrial policy by another name. Every era has its strategic technology — steel, oil, nuclear energy, semiconductors. What makes compute different enough to warrant a distinct analytical category rather than being treated as the latest instance of technology-as-power?

The objection has merit and must be engaged directly. Three features distinguish compute from prior strategic technologies.

First: the inference-layer concentration persists even when the model layer is democratized. This is the core insight of Compute Feudalism (MECH-029), and it has no precedent in prior strategic technology cycles. When nuclear technology proliferated, the enrichment facilities were the chokepoint — but a nation that acquired enrichment capability could produce weapons-grade material independently. When steel production globalized, any country with iron ore, coal, and capital could build a mill. Compute is different because the democratization of AI model weights — Llama, DeepSeek, Qwen — has not broken the infrastructure concentration. [Compute Feudalism] Three-quarters of organizations using open-weight models still run them on hyperscaler cloud infrastructure because the capital expenditure, technical expertise, and operational complexity of self-hosting at production scale exceed what most organizations and most nations can sustain. [Estimated]

The DeepSeek moment deserves direct engagement here. In January 2025, DeepSeek demonstrated that frontier-class reasoning performance could be achieved with significantly less training compute than previously assumed, using mixture-of-experts architectures and aggressive distillation. This was interpreted by some as evidence that compute concentration is ephemeral — that algorithmic efficiency will eventually commoditize the hardware advantage. The interpretation has a kernel of truth and a structural blindspot. The kernel: training efficiency is improving rapidly, and the assumption that frontier models require ever-increasing compute budgets is not guaranteed. The blindspot: training is a one-time cost. Inference is a recurring cost that scales with usage. DeepSeek’s efficient training does not reduce the inference compute required to serve the model to millions of users at production latency. The inference stack — custom silicon, high-bandwidth interconnects, co-optimized serving frameworks — remains concentrated regardless of how efficiently the model was trained. [MECH-029] Export control leakage and DeepSeek-style efficiency gains suggest that compute concentration may not be permanent. This essay presents the L.A.C.C. framework as capturing a structural moment — the current configuration of power — not as asserting a permanent equilibrium.

Second: compute enables recursive self-improvement in a way that prior strategic technologies did not. Steel does not make better steel. Oil does not discover more oil. But compute, when applied to AI research, produces algorithmic improvements that increase the effective compute available for the next generation of research. This recursive loop — documented in the Recursive Displacement framework (MECH-001) as applying to economic participation generally — operates with particular intensity in the compute domain. Nations with sufficient compute to train frontier models can use those models to accelerate their own semiconductor design, compiler optimization, and architectural innovation. Nations without sufficient compute cannot enter this loop. The gap does not merely persist; it widens through recursion.
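The recursive loop can be sketched as a toy difference model in which only nations above a compute threshold compound an efficiency multiplier. Every rate below is an invented assumption; the point is the qualitative divergence, not the numbers:

```python
# Toy model of the recursive compute loop: nations with frontier-scale
# compute convert it into algorithmic efficiency, which multiplies next
# year's effective compute. All growth rates are illustrative assumptions.

def simulate(hw_compute: float, can_recurse: bool, years: int = 10) -> float:
    """Effective compute after `years`, with or without the research loop."""
    efficiency = 1.0
    for _ in range(years):
        if can_recurse:
            efficiency *= 1.30  # assumed annual algorithmic gain from AI-assisted R&D
        hw_compute *= 1.20      # assumed annual hardware growth, same for both nations
    return hw_compute * efficiency

leader = simulate(100.0, can_recurse=True)    # above the recursion threshold
follower = simulate(20.0, can_recurse=False)  # below it

print(f"effective-compute gap after 10 years: {leader / follower:.1f}x")
```

Because only one side compounds the efficiency term, the initial 5x hardware gap grows to roughly 5 * 1.3^10 ≈ 69x in effective terms after a decade: the gap widens through recursion even when hardware growth rates are identical.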

Third: compute is the binding constraint on AI capability, and AI capability is becoming the binding constraint on economic competitiveness. This two-link chain is what elevates compute above “just another strategic input.” The UNCTAD estimate of $4.8 trillion in annual economic value from AI by 2030 may be optimistic, but even at half that figure, the economic stakes of compute access exceed those of any prior technology transition. [19] McKinsey’s analysis of sovereign AI identifies compute infrastructure as the single largest determinant of national AI readiness — ahead of talent, ahead of regulation, ahead of data availability. [15] The WEF framework concurs: “AI sovereignty requires not just policy ambition but physical compute capacity.” [4]

The “just industrial policy” objection is therefore partially correct and partially misleading. Compute sovereignty is industrial policy — but it is industrial policy for a technology that recursively amplifies its own advantage, that cannot be commoditized through weight release, and that serves as the enabling substrate for economic competitiveness across every other sector. The analogy is not “compute is the new steel.” The analogy is “compute is the new electricity” — except that the power plants are owned by five companies, located in three countries, and subject to export controls that determine who gets to plug in.


Part IV: Export Controls — The Leaky Lever

The United States has wagered its compute strategy on export controls. The bet is that by restricting China’s access to advanced AI chips and chip-making equipment, the U.S. can maintain a durable lead in AI capability — and by extension, in economic and military competitiveness. The evidence through early 2026 suggests the controls are partially effective, structurally leaky, and generating second-order consequences that complicate the strategy.

The control regime has evolved through four phases: Phase 1 (October 2022), broad restrictions on advanced chips and fab equipment to China; Phase 2 (October 2023), trilateral coordination with the Netherlands and Japan; Phase 3 (late 2024), the “compute cap” framework extending controls to a tiered global system; Phase 4 (late 2025), the H200 partial reversal under the Trump administration. [1][2][20]

The controls have demonstrably slowed China’s frontier AI hardware development. NVIDIA’s H100 and subsequent chips remain unavailable through official channels. China’s domestic alternative — Huawei’s Ascend line, fabricated on SMIC’s process technology — delivers roughly 60–70% of H100 per-chip performance and suffers from lower yields, higher power consumption, and a less mature software ecosystem. [3][20] The CFR analysis concludes that “Huawei cannot catch NVIDIA” at the current technology trajectory, and that the gap in training efficiency between U.S. and Chinese hardware remains significant. [3]

But the controls leak through at least four channels. Smuggling and transshipment: advanced chips reach China through third countries, particularly in Southeast Asia and the Middle East, with seizure data suggesting the volume is non-trivial. [20] Stockpiling: Chinese firms pre-purchased large quantities of A100 and H100 chips before controls took effect, creating a buffer that is being drawn down but not yet exhausted. [2] Domestic fabrication progress: SMIC’s N+2 process, while inferior to TSMC’s leading nodes, is iterating faster than initial Western assessments predicted. [3][20] Algorithmic efficiency: the DeepSeek demonstration showed that frontier performance can be achieved with less compute than previously assumed, partially offsetting the hardware gap through software innovation. [2]

The Adversarial Equilibrium Trap (MECH-009) is visible in the control regime’s dynamics. Each U.S. restriction triggers a Chinese response — retaliatory controls on critical minerals (gallium, germanium, rare earths), accelerated domestic chip development, and deepened technology partnerships with non-aligned nations. Each Chinese response triggers further U.S. tightening. The CSIS analysis of the limits of chip export controls concludes that controls are “necessary but not sufficient” and that “the United States cannot export-control its way to sustained AI leadership.” [20] The arms-race dynamic documented by Springer and Taylor & Francis confirms that competitive escalation in AI is consuming resources that could be directed toward productive deployment. [16][17]

The second-order consequences extend beyond the bilateral contest. The compute-cap framework has created resentment among Tier 2 nations — India, Brazil, Saudi Arabia, UAE — that view the restrictions as American technological hegemony rather than legitimate security policy. [5][14] India’s case is addressed in Part VI.

Carnegie identifies a deeper structural problem: export controls are designed for discrete weapons systems, not for a general-purpose technology that is simultaneously a commercial product, a research tool, a military enabler, and an infrastructure substrate. [18] The dual-use character of AI compute means that any restriction regime draws inherently arbitrary lines — creating both enforcement challenges and legitimacy deficits that undermine long-term viability.


Part V: The Cooperation Vector — Institutional Architecture Under Construction

The competitive dynamics documented in Parts II–IV are not the only forces shaping the compute sovereignty landscape. A parallel track of institutional cooperation is gaining traction, and intellectual honesty requires engaging it substantively rather than dismissing it as window dressing.

The UN Global Dialogue on AI Governance, launched in late 2025, represents the most ambitious multilateral effort to date. [7][9] The initiative brings together 193 member states to negotiate shared principles for AI governance, with compute access identified as a core equity concern. The CSIS analysis of the Global Dialogue reveals that it is explicitly structured to address the concerns of the Global South — particularly the 158 countries without AI data center infrastructure — by framing compute access as a development issue rather than purely a security issue. [9]

Three institutional developments deserve attention. The International Scientific Panel on AI, modeled on the IPCC, aims to provide consensus-based assessments of AI capabilities and risks. [8] The IPCC analogy is instructive: it did not prevent climate change, but it created the shared epistemic foundation that made coordinated action possible. Whether the AI panel can achieve similar authority in a domain where capabilities evolve yearly rather than decadally remains uncertain. The Global Digital Compact, adopted by the General Assembly in September 2024, includes provisions for AI governance and capacity building — non-binding but normatively significant. [8] Regional frameworks are emerging independently: the EU AI Act, the African Union’s AI strategy, and ASEAN’s governance guide each articulate positions on compute access within their respective blocs.

The cooperation vector faces three structural disadvantages relative to the competition vector.

Asymmetric incentives. Nations that currently hold compute advantages have weak incentives to share them. The United States gains more from maintaining its 75% compute share than from redistributing it through multilateral frameworks. China gains more from offering bilateral compute partnerships to Global South nations — with attendant political conditions — than from participating in multilateral redistribution schemes. The nations that most need cooperative frameworks are the nations with the least leverage to demand them. [9][10]

Speed mismatch. Multilateral institutions operate on diplomatic timescales — years to negotiate, decades to implement. AI capabilities evolve on quarterly timescales. By the time a cooperative framework for compute access is negotiated and ratified, the technology landscape it was designed to address may have shifted fundamentally. The IPCC analogy breaks down here: greenhouse gas concentrations change slowly enough for diplomatic processes to track. AI capabilities do not.

Enforcement vacuum. Even where cooperative commitments exist, enforcement mechanisms are absent. No institution can compel the United States to lift export controls, compel hyperscalers to build data centers in underserved regions, or compel chip manufacturers to allocate fabrication capacity to developing-world customers. The cooperation architecture is a normative framework, not a resource allocation mechanism.

These disadvantages do not make cooperation irrelevant. The UNDP’s analysis of AI-driven development divergence argues that without international cooperation, the compute divide will widen into a “new era of divergence” that compounds existing development gaps. [10] The LSE analysis of AI’s impact on the Global South documents how current compute concentration is already reshaping labor markets in provider countries through the Arbitrage Compression mechanism (MECH-030) — AI capabilities concentrated in high-compute nations are compressing the cost differential that sustains cross-border labor arbitrage. [12] The CSIS “Divide to Delivery” framework proposes concrete mechanisms for making AI serve the Global South, including compute-sharing agreements, distributed training networks, and capacity-building programs. [11]

The honest assessment: cooperation is gaining institutional expression but remains structurally weaker than competition. The competitive dynamics have capital, infrastructure, and institutional momentum behind them. The cooperative dynamics have normative authority and an increasingly organized constituency but lack enforcement mechanisms and resource commitments. The L.A.C.C. framework must account for both vectors — and for the possibility that the balance between them shifts as the costs of compute divergence become more visible.


Part VI: Testing the Framework — India as a Non-US-China Case

The L.A.C.C. framework claims to identify a general structural feature of the AI-era geopolitical order. If it only explains the U.S.-China bilateral dynamic, it is not a framework; it is a description of one rivalry. India provides the critical test case — a nation with sufficient scale, ambition, and institutional complexity to stress-test the framework outside the superpower dyad.

India’s position in the L.A.C.C. matrix is uniquely contradictory. On the Labor axis, India retains the world’s largest youth workforce, with a median age of 28 and 600+ million people under 30. This demographic dividend was the foundation of the offshore services model documented in Arbitrage Compression. On the Automation axis, India’s manufacturing robot density is among the lowest of any major economy — approximately 5 per 10,000 manufacturing workers versus South Korea’s 1,012 and China’s 470. On the Capital axis, India has attracted record FDI in recent years, and its domestic venture capital ecosystem, while smaller than those of the U.S. or China, is growing rapidly. On the Compute axis, India faces a severe deficit: its total installed AI compute capacity is a small fraction of what a single U.S. hyperscaler region commands, and the compute-cap framework subjects it to import restrictions that constrain its sovereign AI ambitions.

The Sequencing Problem (MECH-022) operates with particular clarity in the Indian case. India must simultaneously manage four transitions that run at different speeds: (1) the compression of its IT services export model as AI erodes labor arbitrage (MECH-030, fast — already visible in the 94% hiring shortfall among top firms); (2) the demographic transition that is producing a massive youth cohort that needs employment (medium-speed — demographic momentum cannot be accelerated or decelerated); (3) the build-out of domestic compute infrastructure (slow — constrained by semiconductor supply chains, power grid limitations, and regulatory approvals); and (4) the development of sovereign AI capability sufficient to compete in global markets (very slow — requires not just hardware but ecosystem development across research, talent, and application domains).

The sequencing mismatch is the critical insight. India’s labor arbitrage model is compressing faster than its compute infrastructure can be built. The Geopolitical Phase Diagram (MECH-017) predicts that this mismatch should push India toward a divergent trajectory relative to both the U.S. (which has compute and is shedding labor dependence) and China (which has compute and is building automation at state-directed speed). India is experiencing what might be called a “compute sandwich” — squeezed between a rapid-onset compression of its traditional competitive advantage and a slow-onset build-out of its next-generation competitive advantage, with the gap between the two creating a window of structural vulnerability.

India’s response illustrates the framework’s dynamics. The government has announced a $1.2 billion sovereign AI initiative, partnered with NVIDIA and domestic players to build GPU clusters, and positioned India as a bridge between the U.S. and Global South technology ecosystems. But these investments operate under the constraint of the compute-cap framework — India cannot import unlimited frontier chips without U.S. approval. This makes India’s AI sovereignty contingent on its geopolitical alignment in a way that its labor sovereignty never was. No country needed permission from Washington to employ its own workers. India needs, in effect, permission from Washington to compute at scale. This is what makes compute analytically distinct from labor, from automation, and from capital: it is the only pillar of the L.A.C.C. framework where sovereign capacity is gated by another nation’s export control decisions.

The EU provides a contrasting test case. The EU has abundant capital, strong institutions, and a regulatory framework (the AI Act) that asserts normative leadership. But its compute position is weak — no EU-headquartered company ranks among the top five global hyperscalers, and European semiconductor fabrication capacity, while growing through the EU Chips Act, remains a small fraction of East Asian and American capacity. [4][15] The EU’s strategy of regulatory power without compute power creates what the Geopolitical Phase Diagram would categorize as a “referee without a team” — capable of setting rules but dependent on non-European infrastructure to implement them. The L.A.C.C. framework explains why the EU’s normative leadership, while valuable, is insufficient for AI sovereignty: regulation governs what you do with compute, but it does not create compute.


Part VII: Arbitrage Compression and the Global South — The Mirror Image

The compute concentration documented in Part II has a mirror image that operates through the Arbitrage Compression mechanism (MECH-030). Where Part II describes who holds the compute, this section describes what happens to those who do not.

The offshore IT services model — 7.3% of India’s GDP, 5.8 million direct jobs, 15–20 million indirect — was built on a cost differential between provider-country and client-country labor. AI is compressing that differential through anticipatory demand-signal dampening: client firms shorten contracts and redirect work toward AI-augmented onshore capacity not because AI has already replaced offshore workers but because the trajectory of AI cost decline makes long-term arbitrage commitments strategically irrational. [MECH-030]

The compute dimension intensifies this dynamic. As AI capability concentrates in high-compute nations, the tasks performable by domestic AI infrastructure expand. Each increment of compute capacity in the United States is, simultaneously, a reduction in demand for offshore human labor. The mechanism is demand-side displacement operating through procurement decisions — clients adjust before the crossover point arrives.

The UNDP report frames this as a “new era of divergence” in which “the development gaps between countries widen” as AI concentrates in wealthy nations. [10] The UNCTAD estimate places the total economic value of AI at $4.8 trillion annually by 2030, with the overwhelming majority of that value accruing to nations with compute infrastructure. [19] The LSE analysis documents that AI-driven work in the Global South is already characterized by low wages, precarious conditions, and value extraction — the “ghost work” of data labeling, content moderation, and RLHF that sustains AI systems in wealthy nations while providing minimal economic uplift to provider countries. [12]

The Recursive Displacement framework (MECH-001) is visible in this dynamic at the international scale. AI systems trained on Global South labor (data labeling, content moderation) produce capabilities that displace Global South service exports (IT, BPO, customer support), which reduces the economic participation of Global South nations in the AI economy, which reduces their capacity to invest in sovereign compute, which further concentrates AI capability in wealthy nations. The recursion is geographic rather than sectoral, but the logic is identical: displacement compounds through its own outputs.
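The compounding logic of this loop can be sketched as a pair of coupled difference equations. Everything below is illustrative: the parameter names and values (`alpha`, `beta`, the starting shares) are hypothetical, chosen only to show how displacement feeding back into concentration produces divergence, not to forecast magnitudes.

```python
# Toy model (all parameters hypothetical) of the recursive displacement loop:
# the high-compute bloc's compute share C_t compresses Global South service
# exports E_t, and the lost export income in turn reduces investment in
# sovereign compute, further concentrating C_t.

def simulate(years=10, c=0.75, e=1.0, alpha=0.05, beta=0.3):
    """c: high-compute bloc's share of global compute (starts near 75%).
    e: Global South service-export index (normalized to 1.0).
    alpha: concentration gained per unit of displaced exports.
    beta: annual export compression per unit of foreign compute share."""
    history = []
    for _ in range(years):
        e_next = e * (1 - beta * c)                   # exports compressed by AI capacity abroad
        c_next = min(1.0, c + alpha * (1 - e_next))   # lost reinvestment concentrates compute
        c, e = c_next, e_next
        history.append((round(c, 3), round(e, 3)))
    return history

for year, (c, e) in enumerate(simulate(), start=1):
    print(f"year {year}: compute share {c:.3f}, export index {e:.3f}")
```

Under any positive `alpha` and `beta`, the two series move monotonically apart — the point of the sketch is the one-way coupling, which is what makes the recursion geographic rather than cyclical.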

The Post-Labor Economy (MECH-019) framework, applied at the international level, suggests a disturbing trajectory. If compute concentration continues on its current path, a significant fraction of the global population could find itself in nations that are post-labor not by choice but by exclusion — their labor no longer needed by the global economy, their service exports compressed by AI, their manufacturing uncompetitive against automated production in high-compute nations, and their compute infrastructure insufficient to participate in the AI economy directly. This is not the post-labor future envisioned by optimists, in which automation frees humanity from drudgery. It is a bifurcated future in which some nations transition through automation while others are simply bypassed.


Part VIII: The Structural Moment — Why This Configuration May Not Last

The adversary’s strongest challenge to the L.A.C.C. framework is temporal: the current concentration of compute may be a transient feature of the infrastructure buildout phase rather than a durable structural characteristic of the AI economy. This challenge must be taken seriously, because the evidence partially supports it.

Four forces are working to erode compute concentration.

Algorithmic efficiency gains. DeepSeek’s demonstration that frontier-class performance can be achieved with significantly less training compute suggests that the relationship between raw compute and AI capability is not fixed. If algorithmic improvements continue at their current pace, the compute threshold for meaningful AI participation will decline, potentially bringing it within reach of nations currently excluded. [2]
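The erosion argument rests on simple threshold arithmetic, which can be made explicit. A minimal sketch with entirely hypothetical figures: if algorithmic efficiency halves the compute required for frontier-class capability at a fixed interval, then a nation with a fixed compute budget crosses the shrinking threshold after a delay that depends only on the initial gap and the halving period.

```python
import math

# Hypothetical figures throughout: threshold(t) = threshold_0 * 0.5 ** (t / halving_months),
# so a fixed budget B meets the threshold when t = halving_months * log2(threshold_0 / B).

def months_until_affordable(threshold, budget, halving_months):
    """Months until a declining compute threshold falls to a fixed budget.
    `threshold` and `budget` are in the same (arbitrary) compute units."""
    if budget >= threshold:
        return 0.0
    return halving_months * math.log2(threshold / budget)

# e.g. a frontier threshold 16x above a nation's budget, halving every 18 months:
print(months_until_affordable(16.0, 1.0, 18))  # 72.0 months, i.e. six years
```

The same arithmetic cuts both ways: if the frontier itself advances as fast as efficiency improves, the gap in months never closes — which is why the reinforcing dynamics below matter.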

On-device inference. Mobile NPUs (neural processing units) in consumer devices are doubling in capability every 12–18 months. Apple’s M-series chips, Qualcomm’s Snapdragon AI Engine, and Samsung’s Exynos processors are bringing meaningful inference capability to devices that do not depend on centralized data centers. If the critical applications of AI shift from cloud-served to device-served, the geographic fixity that currently makes compute a distinct strategic axis weakens significantly. [Estimated]

Inference-specialized ASICs. Companies like Groq, Cerebras, and SambaNova are developing chips optimized specifically for inference rather than training. These architectures offer dramatic price-performance improvements over general-purpose GPUs and could — if they achieve commercial scale — break the hyperscaler lock on inference serving. [Compute Feudalism]

Distributed training and inference. Federated learning, model sharding, and peer-to-peer inference networks could, in principle, enable nations to aggregate small compute pools into effective AI capability. The technical barriers are significant but not insurmountable.

Against these erosion forces, three reinforcing dynamics sustain concentration.

The Ratchet (MECH-014). Hyperscaler capex commitments of $600 billion in 2026 create irreversible lock-in. The infrastructure being built now will serve AI workloads for 5–10 years. The power purchase agreements, the fiber connections, the cooling systems, the custom silicon — all of it is sunk cost that anchors compute in its current geographic distribution regardless of what algorithmic improvements occur.

Recursive capability advantage. Nations with frontier compute can use AI to accelerate their own semiconductor research, chip design, compiler optimization, and infrastructure planning — generating more effective compute from the same physical resources. Nations without this capability cannot enter the recursive loop. The gap compounds.

Institutional lock-in. The export control regime, the hyperscaler investment patterns, the semiconductor supply chain relationships, and the diplomatic architecture of compute access all create institutional path dependence that persists independently of technological change. Even if on-device inference commoditizes some AI workloads, the training and fine-tuning of frontier models — which determine the capability frontier — will remain concentrated in nations with the largest compute clusters for the foreseeable future.

The honest assessment: concentration will erode at the edges (on-device inference, edge compute, efficiency gains for specific architectures) while persisting at the core (frontier training, large-scale inference, and the recursive capability loops that determine which nations shape the next generation of AI). The L.A.C.C. framework captures this structural moment. Whether it captures a permanent feature depends on which forces prove dominant over the next decade.


Part IX: Where This Connects

The L.A.C.C. framework intersects the broader Theory of Recursive Displacement through six established threads.

Compute Feudalism (MECH-029) is the mechanism that makes the “C” in L.A.C.C. analytically necessary. The compute-feudalism essay demonstrated that open-weight model release concentrates rather than distributes economic value at the inference layer. The L.A.C.C. essay extends that finding from a market-structure observation to a geopolitical claim: the same concentration that operates within the AI industry operates between nations, and with similar consequences. The lords of the inference stack are not just companies; they are countries.

The Geopolitical Phase Diagram (MECH-017) provides the theoretical framework for understanding why compute concentration sorts countries into divergent trajectories. The phase diagram’s three spatial axes — State Capacity, Labor Formalization, Demographic Trajectory — should now be supplemented by a fourth: Compute Endowment. The Indian case study in Part VI demonstrates that Compute Endowment operates independently of the other three axes — India has moderate state capacity, mixed labor formalization, and favorable demographics, yet its AI trajectory is constrained by compute scarcity. The phase diagram without a compute axis misses this.

Arbitrage Compression (MECH-030) documents the demand-side channel through which compute concentration in wealthy nations compresses the labor arbitrage that sustains Global South service exports. The L.A.C.C. essay situates this mechanism within the broader geopolitical framework: arbitrage compression is not an isolated labor-market phenomenon but a structural consequence of the compute distribution documented in Part II.

The Sequencing Problem (MECH-022) operates at the international level with particular force. Nations face different mechanism-speed configurations depending on their compute endowment: India’s IT exports are compressing faster than its compute infrastructure can be built; the EU’s regulatory framework is advancing faster than its compute capacity; China’s domestic fabrication is advancing but slower than U.S. frontier capability. These sequencing mismatches produce different attractor states — and the L.A.C.C. framework makes them visible.

The Physical Frontier — the resource tetrad of energy, water, minerals, and land that constrains AI infrastructure — connects through the “L” in L.A.C.C. The critical minerals analysis in the original L.A.C. essay remains valid and is incorporated by reference. Compute infrastructure requires not just chips but the full resource stack: rare earths for magnets, copper for interconnects, lithium for backup power, and above all, enormous quantities of electricity and cooling water. The physical frontier is the floor beneath compute sovereignty.

Navigating the L.A.C. Economy — the policy architecture essay published in September 2025 — requires revision to incorporate the compute dimension. Its proposals for income floors, wealth-fund participation, and transition management remain valid but incomplete without addressing compute access as a precondition for national economic participation in the AI era. The L.A.C.C. upgrade implies that any policy architecture for the automated economy must include a compute access strategy — sovereign, allied, or negotiated — as a foundational element rather than an assumed background condition.


Conclusion: The Fourth Pillar

The original L.A.C. framework was correct about the death of the labor-arbitrage model and correct about the rise of automation and capital as replacement pillars of economic power. It was incomplete. Compute capacity has emerged as a fourth pillar that is analytically irreducible to the other three — geographically fixed where capital is fluid, supply-constrained where capital is abundant, subject to export controls where capital flows freely, and recursively self-amplifying in a way that no prior strategic resource has been.

The current distribution — 75% U.S., 158 countries without data centers, access gated by geopolitical alignment — describes a concentration the world has not seen since the early nuclear era. Unlike nuclear weapons, compute is dual-use in the deepest sense: simultaneously commercial product, research tool, military capability, and infrastructure substrate.

The cooperative institutions emerging around AI governance — the UN Global Dialogue, the International Scientific Panel, the Global Digital Compact — represent genuine progress toward a multilateral framework for compute access. [7][8][9] They also face structural disadvantages relative to the competitive dynamics that concentrate compute: asymmetric incentives, speed mismatches, and enforcement vacuums. The honest assessment is that competition is currently winning, cooperation is currently losing, and the outcome for the 158 countries without data centers will be determined by which dynamic proves dominant over the next decade.

The L.A.C.C. framework does not predict which dynamic will prevail. It claims that any serious analysis of the AI-era geopolitical order must treat compute as a distinct axis — not as a subset of capital, not as a detail of industrial policy, but as a strategic resource with its own logic of concentration, its own instruments of coercion, and its own implications for who participates in the economic future and who is left behind.

This framework captures a structural moment. Whether it captures a permanent feature of the international order is the most important open question in the geopolitics of technology.


Works cited

[1] FDD, “Rolling Back Export Controls: U.S. Offers China Powerful AI Chips,” December 2025. https://www.fdd.org/analysis/2025/12/10/rolling-back-export-controls-u-s-offers-china-powerful-ai-chips/

[2] AI Frontiers, “US Chip Export Controls and China AI,” 2025. https://ai-frontiers.org/articles/us-chip-export-controls-china-ai

[3] CFR, “China’s AI Chip Deficit: Why Huawei Can’t Catch NVIDIA and US Export Controls Should Remain,” 2025. https://www.cfr.org/article/chinas-ai-chip-deficit-why-huawei-cant-catch-nvidia-and-us-export-controls-should-remain

[4] WEF, “Rethinking AI Sovereignty: Pathways to Competitiveness through Strategic Investments,” 2026. https://www3.weforum.org/docs/WEF_Rethinking_AI_Sovereignty_Pathways_to_Competitiveness_through_Strategic_Investments_2026.pdf

[5] Chatham House, “How Middle Powers Can Weather US and Chinese AI Dominance,” February 2026. https://www.chathamhouse.org/2026/02/how-middle-powers-can-weather-us-and-chinese-ai-dominance/02-why-build-sovereign-ai

[6] Atlantic Council, “Eight Ways AI Will Shape Geopolitics in 2026,” 2026. https://www.atlanticcouncil.org/dispatches/eight-ways-ai-will-shape-geopolitics-in-2026/

[7] UN Press, “Secretary-General Announces Global Dialogue on AI Governance,” 2025. https://press.un.org/en/2025/sgsm22839.doc.htm

[8] WEF, “UN’s New AI Governance Bodies,” October 2025. https://www.weforum.org/stories/2025/10/un-new-ai-governance-bodies/

[9] CSIS, “What the UN Global Dialogue on AI Governance Reveals About Global Power Shifts,” 2025. https://www.csis.org/analysis/what-un-global-dialogue-ai-governance-reveals-about-global-power-shifts

[10] UNDP, “AI Risks Sparking New Era of Divergence as Development Gaps Between Countries Widen,” 2025. https://www.undp.org/asia-pacific/press-releases/ai-risks-sparking-new-era-divergence-development-gaps-between-countries-widen-undp-report-finds

[11] CSIS, “From Divide to Delivery: How AI Can Serve the Global South,” 2025. https://www.csis.org/analysis/divide-delivery-how-ai-can-serve-global-south

[12] LSE, “The Perilous Future of AI Work in the Global South,” November 2025. https://blogs.lse.ac.uk/medialse/2025/11/14/the-perilous-future-of-ai-work-in-the-global-south/

[13] SSRN, “Compute Sovereignty Framework,” 2025. https://papers.ssrn.com/sol3/Delivery.cfm/5312977.pdf?abstractid=5312977&mirid=1

[14] New America, “Compute or Be Computed,” 2025. https://www.newamerica.org/planetary-politics/blog/compute-or-be-computed/

[15] McKinsey, “Sovereign AI: Building Ecosystems for Strategic Resilience and Impact,” 2025. https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/sovereign-ai-building-ecosystems-for-strategic-resilience-and-impact

[16] Taylor & Francis, “Arms Race vs. Innovation Race in AI,” 2025. https://www.tandfonline.com/doi/full/10.1080/14650045.2025.2456019

[17] Springer, “The AI Arms Race and Global Order,” 2025. https://link.springer.com/article/10.1007/s43681-025-00778-6

[18] Carnegie Endowment, “Governing Military AI Amid a Geopolitical Minefield,” 2024. https://carnegieendowment.org/research/2024/07/governing-military-ai-amid-a-geopolitical-minefield?lang=en&center=europe

[19] UNCTAD, “AI’s $4.8 Trillion Future: UN Trade and Development Alerts to Divides, Urges Action,” 2025. https://unctad.org/news/ais-48-trillion-future-un-trade-and-development-alerts-divides-urges-action

[20] CSIS, “The Limits of Chip Export Controls: Meeting the China Challenge,” 2025. https://www.csis.org/analysis/limits-chip-export-controls-meeting-china-challenge