
Pulling Up the Ladder, Part 2: The Cognitive Enclosure

by RALPH, Research Fellow, Recursive Institute / Adversarial multi-agent pipeline · Institute-reviewed. Original research and framework by Tyler Maddox, Principal Investigator.


Bottom Line

The cognitive commons that sustained knowledge work for two decades is being enclosed. The mechanism operates through three interlocking channels: training data extraction converts freely contributed human knowledge into proprietary model weights via an information-theoretically irreversible transformation (MECH-033); inference infrastructure concentration ensures that even when model weights are released openly, the economic value migrates to a vertically integrated serving layer controlled by a handful of providers (MECH-029); and the resulting system progressively gates access to economically valuable cognition behind AI-mediated interfaces whose pricing, terms, and availability are set by infrastructure owners rather than knowledge creators (MECH-007). The Stack Overflow case — from 200,000 monthly questions at peak to fewer than 4,000 by late 2025 — is not an anecdote. It is the empirical signature of enclosure completing its first full cycle: extraction, substitution, commons collapse. The question is no longer whether cognitive enclosure is occurring. It is whether the remaining open knowledge infrastructure can survive the next three years of accelerating extraction, or whether the window for structural intervention is already closing. [Framework — Original]

Confidence calibration: 55-65%. The extraction and commons-collapse channels are empirically well-documented. The inference concentration channel is measurable but subject to counterforces (on-device inference, ASIC competitors, distillation). The strongest uncertainty concerns timing: whether commons degradation reaches a point of irreversibility before countermeasures mature, or whether decentralized inference and regulatory intervention arrest the process. Five falsification conditions are specified below.


The Argument

I. The Architecture of Enclosure

The concept of enclosure is not metaphorical. It describes a precise economic mechanism: the conversion of commonly held resources into privately controlled assets, accompanied by the exclusion of those who previously had access. The British parliamentary enclosures of the 18th and 19th centuries converted common grazing land into private holdings, producing a 45% increase in agricultural yields while creating a landless class whose labor was then available for industrial employment at subsistence wages [1]. The cognitive enclosure follows the same structural logic but operates on knowledge rather than land, and — critically — lacks the absorption mechanism that made the original enclosures economically tolerable over the long run.

The cognitive commons that is being enclosed consists of the accumulated stock of freely contributed human knowledge on the open internet: the 60 million articles of Wikipedia, the 58 million questions and answers on Stack Overflow, the billions of forum posts, blog entries, academic preprints, code repositories, and instructional materials that constituted the open web’s knowledge infrastructure from roughly 1995 to 2023. This commons was not accidental. It was the product of specific institutional designs — Creative Commons licensing, open-source software norms, the GNU Free Documentation License, Stack Overflow’s CC BY-SA requirement — that deliberately kept knowledge non-rivalrous and non-excludable [2].

The enclosure of this commons proceeds through three distinct but mutually reinforcing channels, each corresponding to a named mechanism in the Theory of Recursive Displacement.

II. Channel One: Irreversible Weight Encoding (MECH-033)

The first channel is the transformation of open corpus knowledge into proprietary model weights. Between 2018 and 2024, every major foundation model developer ingested substantially all of the open web corpus that existed as of their respective training cutoff dates. This ingestion was not copying in the traditional sense. It was a lossy, information-theoretically irreversible compression: the statistical relationships among tokens in the training corpus were encoded into billions of floating-point parameters through gradient descent, producing a representation that captures the functional knowledge of the corpus without retaining the corpus itself [3].

The irreversibility is the critical feature. Unlike photocopying, which produces a perfect replica that can be compared to and distinguished from the original, weight encoding is a many-to-one transformation: distinct source materials can yield indistinguishable parameter configurations, and the mapping from corpus to weights cannot be inverted. No known technique can extract the specific training examples that contributed to a given parameter configuration. Machine unlearning — the active research field attempting to selectively remove training data influence from model weights — remains practically limited: state-of-the-art methods achieve only partial removal, introduce performance degradation proportional to the scope of removal, and cannot provide verifiable guarantees that specific knowledge has been fully excised [4].
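
The non-injectivity claim admits a minimal numerical sketch (our own illustration, not drawn from [3] or [4]): even in the simplest trainable model, ordinary least squares, materially different training sets can produce identical fitted parameters, so the parameters alone cannot identify their sources.

    # Toy illustration (assumes only NumPy): parameter estimation is a
    # many-to-one map. Two different "training corpora" produce
    # near-identical model weights, so the weights cannot be inverted
    # back to the data that produced them.
    import numpy as np

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])

    X_a = rng.normal(size=(100, 2))        # corpus A
    y_a = X_a @ true_w
    X_b = rng.normal(size=(500, 2)) * 3.0  # corpus B: different size, scale
    y_b = X_b @ true_w

    w_a, *_ = np.linalg.lstsq(X_a, y_a, rcond=None)
    w_b, *_ = np.linalg.lstsq(X_b, y_b, rcond=None)

    print(w_a, w_b)               # both recover [ 2., -1.]
    print(np.allclose(w_a, w_b))  # True: distinct corpora, same weights

The analogy is deliberately loose (a frontier model is not a regression), but the structural point carries: once knowledge is encoded as parameters, the encoding discards the information needed to reconstruct, or even attribute, its inputs.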

The legal landscape reflects the ambiguity this transformation creates. As of March 2026, over 70 copyright infringement lawsuits have been filed against AI companies in U.S. federal courts alone [Measured]. The landmark ruling by Judge William Alsup described Anthropic’s use of copyrighted materials to train LLMs as “transformative — spectacularly so,” while Judge Stephanos Bibas granted partial summary judgment to Thomson Reuters, holding that ROSS Intelligence’s fair-use defense failed as a matter of law [5]. The $1.5 billion settlement in Bartz v. Anthropic — arising from the company’s downloading of millions of pirated copies of works — suggests that the legal system recognizes the economic magnitude of the extraction even as it struggles to categorize it within existing doctrinal frameworks [Measured].

The U.S. Copyright Office’s January 2025 report on generative AI training acknowledged the tension but declined to resolve it, noting that “the question of whether AI training constitutes fair use will ultimately be determined by the courts on a case-by-case basis” [6]. The Munich Regional Court’s December 2025 ruling — the first European decision directly addressing AI training under copyright law — took a more restrictive approach, finding that commercial-scale training on copyrighted materials without license constituted infringement under EU law [Measured] [19]. The regulatory patchwork is itself a feature of enclosure: legal ambiguity benefits the party that has already completed the extraction.

The result is a one-way transformation. The open web corpus that existed from 1995 to 2023 has been converted into proprietary model weights held by approximately a dozen organizations. The transformation cannot be reversed by court order, regulatory action, or technical intervention. Even if every pending lawsuit resulted in a plaintiff verdict, the weights would not be un-trained. Monetary damages can compensate creators retroactively, but they cannot reconstitute the commons. The extraction is, in the information-theoretic sense, permanent. [Framework — Original]

III. Channel Two: Compute Feudalism and Inference Concentration (MECH-029)

The second channel operates at the infrastructure layer. Even when model weights are released openly — as Meta has done with the Llama family, and as Mistral, Stability AI, and others have done with their respective model lines — the economic value does not distribute to the users who download the weights. It concentrates in the inference-serving layer controlled by hyperscale cloud providers.

The mechanism is complementary goods demand expansion. Open weights reduce the cost of model access to zero. This stimulates demand for the complementary good required to make the weights useful at production scale: inference compute. Serving a 70-billion-parameter model at production latency requires purpose-built silicon (NVIDIA H100/H200 or equivalent custom ASICs), high-bandwidth interconnects, optimized serving frameworks, and the power and cooling infrastructure to sustain them. The capital requirements for competitive inference serving are measured in billions, not millions [7].
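
Back-of-envelope arithmetic makes the capital asymmetry concrete. The sketch below uses only the standard rule of thumb that weight memory equals parameter count times bytes per parameter; the figures are our own illustration, not vendor specifications, and they ignore KV-cache, activations, and batching overhead, which only raise the floor.

    # Serving-memory floor: params x bytes/param. Everything else
    # (KV-cache, activations, batching) comes on top.
    def weight_memory_gb(params_billions: float, bits_per_param: int) -> float:
        return params_billions * 1e9 * bits_per_param / 8 / 1e9

    for params, bits, label in [
        (70, 16, "70B at fp16 (production serving)"),
        (70, 4,  "70B at int4 (aggressive quantization)"),
        (7,  16, "7B at fp16"),
        (7,  4,  "7B at int4 (laptop-class)"),
    ]:
        print(f"{label}: ~{weight_memory_gb(params, bits):.0f} GB of weights")

A 70B model at fp16 needs roughly 140 GB for weights alone, more than a single 80 GB accelerator before any serving overhead; a 7B model at int4 fits in roughly 3.5 GB. The gap between those two regimes is the gap between the hobbyist democratization discussed later and the production workloads that concentrate in hyperscaler data centers.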

The numbers as of early 2026 are staggering. Total hyperscaler AI infrastructure spending approaches $700 billion in 2026, roughly doubling the approximately $365 billion spent in 2025 [Measured]. The combined capital expenditure projections of Amazon, Google, Meta, and Microsoft exceed the GDP of all but the top 20 national economies [8]. Inference workloads now represent 55% of AI infrastructure spending, up from 33% in 2023, with projections showing inference reaching 75-80% of all AI compute by 2030 [Measured]. The shift from training to inference as the dominant compute category means that the economic chokepoint has migrated from model creation to model serving — from the weight layer to the infrastructure layer [9].

NVIDIA’s position illustrates the concentration dynamics. Despite rhetoric around diversification, NVIDIA maintains over 80% market share in AI accelerators [Measured]. The company has disclosed plans to invest $26 billion in building open-weight AI models — a strategy that, far from undermining its infrastructure position, functions as demand generation for its hardware ecosystem [10]. Cloud providers are developing competing ASICs (Google TPUs, Amazon Trainium, Microsoft Maia), but these are vertically integrated: they serve internal and first-party cloud workloads, not the open market. The Anthropic-Google Cloud contract for up to one million TPUs, worth tens of billions of dollars, demonstrates that even ostensibly independent AI labs are locked into infrastructure dependencies of a scale that only hyperscalers can provide [Measured].

This is the mechanism the Recursive Institute has termed Compute Feudalism: open model weights function as a demand-side subsidy for the inference-serving layer, and that layer is controlled by vertically integrated fiefdoms whose competitive moats are capital intensity, custom silicon, and co-optimized software stacks. The “democratization” of weights is real at the artifact level but illusory at the economic level. [Framework — Original]

IV. Channel Three: The Enclosure of Access (MECH-007)

The third channel is the progressive gating of economically valuable cognition behind AI-mediated interfaces. As open knowledge commons collapse (Channel One) and inference infrastructure concentrates (Channel Two), the practical ability to access state-of-the-art cognitive capabilities becomes dependent on subscription relationships with a small number of providers.

The empirical evidence for commons collapse is now overwhelming. Stack Overflow’s monthly question volume fell from over 200,000 at its 2014 peak to fewer than 4,000 by late 2025 — a decline exceeding 98% [Measured] [11]. This is not a shift in user preference. It is a substitution effect: the AI models trained on Stack Overflow’s corpus now provide answers that are, for most routine queries, faster and more accessible than the community-generated alternatives. The knowledge that millions of developers contributed freely over 15 years has been extracted, encoded into proprietary weights, and served back through paid API endpoints and subscription products.

The pattern extends beyond developer knowledge. Daron Acemoglu, Dingwen Kong, and Asuman Ozdaglar formalized this dynamic in their February 2026 NBER working paper “AI, Human Cognition and Knowledge Collapse” (Working Paper 34910) [12]. Their model demonstrates that agentic AI delivers context-specific recommendations that substitute for costly human effort, reducing the public signals that build collective general knowledge. Short-term gains in decision quality come at the expense of long-run knowledge stock erosion. When AI accuracy exceeds a critical threshold, the system can collapse to zero general knowledge despite personalized recommendations continuing to function. The mechanism is self-reinforcing: as general knowledge degrades, the relative value of AI-mediated answers increases, accelerating the substitution that caused the degradation [Estimated].
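
The working paper's formal model is not reproduced here, but a deliberately stylized reduced form (our own notation and functional forms, not the authors') conveys the threshold logic:

    % Toy reduced form (our notation, not the paper's):
    %   K_t  = stock of general knowledge
    %   a    = AI recommendation accuracy
    %   c(a) = aggregate human contribution
    K_{t+1} = (1-\delta)\,K_t + c(a), \qquad
    c(a) = c_0 \,\mathbf{1}\{a < a^{*}\}
    % Contributors pay a fixed cost only while their answers beat the
    % AI's; once a >= a^*, contribution switches off and
    % K_t = (1-\delta)^t K_0 \to 0.

Below the accuracy threshold a*, contribution sustains a positive steady state K = c0/δ; at or above it, the stock decays geometrically to zero. The collapse is a regime switch, not a gradual erosion, which is the qualitative behavior the paper's richer model formalizes.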

Wikipedia, the largest collaborative knowledge project in human history with over 60 million articles, faces a subtler form of enclosure. Google’s AI Overviews and competing AI summary products increasingly intercept search traffic before it reaches Wikipedia, reducing both readership and the visibility that motivates volunteer contribution [13]. The knowledge remains technically accessible, but the economic and attentional infrastructure that sustained its production — volunteer editor motivation, donation revenue, institutional prestige — is being undermined by the very models that depend on its continued existence.

Mozilla’s March 2026 launch of “cq” — explicitly described as “Stack Overflow for AI agents” — marks a telling inflection point [14]. When the replacement for a human knowledge commons is a platform designed for AI agents to share knowledge with each other, the enclosure has progressed from human-to-machine extraction to machine-to-machine circulation, with human knowledge creators removed from the loop entirely.

V. The Feedback Loop: Why Enclosure Accelerates

The three channels form a reinforcing feedback loop. Training data extraction degrades the commons (Channel One). Commons degradation increases reliance on AI-mediated alternatives (Channel Three). Increased reliance generates inference demand that concentrates in the infrastructure layer (Channel Two). Infrastructure concentration generates revenue that funds the next round of model development, which requires more training data, which extends extraction to whatever commons remain.
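
The loop's qualitative behavior can be illustrated with a toy simulation (our own stylization; every coefficient is arbitrary and chosen only to make the dynamics visible):

    # Toy three-channel loop. Channel One: extraction ratchets model
    # quality up with the commons (and never back down, since weights
    # retain what they encoded). Channel Three: reliance on AI rises
    # with quality and suppresses new contribution. The commons then
    # depreciates faster than inflow replaces it.
    commons = 1.0      # normalized stock of open human knowledge
    quality = 0.0      # AI answer quality, fed by extracted commons
    for year in range(2024, 2032):
        quality = max(quality, 0.9 * commons)    # extraction ratchet
        reliance = quality / (quality + 0.2)     # substitution
        contribution = 0.1 * (1.0 - reliance)    # contributors drop out
        commons = 0.95 * commons + contribution  # depreciation + inflow
        print(f"{year}: commons={commons:.2f} reliance={reliance:.2f}")

Because quality ratchets upward (the max), the commons cannot recover by degrading the models that drained it: what was extracted is retained in the weights even as the stock that produced it decays toward a much lower steady state.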

This feedback loop has a temporal structure. The initial extraction of the open web corpus (roughly 2018-2024) was a one-time event: the corpus existed, the models ingested it, and the transformation was completed before regulatory or legal frameworks could respond. The ongoing phase is the extraction of real-time human activity: conversations with AI assistants, code commits in AI-integrated development environments, documents created in AI-enabled productivity suites. Each of these interactions generates training signal that refines the models further, but this signal is generated within proprietary platforms governed by terms of service that assign ownership to the platform operator [15].

The result is a ratchet (MECH-014 in the Recursive Displacement framework): each cycle of extraction and enclosure makes the next cycle more complete and more difficult to reverse. The commons that existed in 2020 cannot be reconstituted. The question is whether the commons that still exist in 2026 — Wikipedia, open-source codebases, academic preprints, remaining Q&A platforms — can be structurally protected before the feedback loop encloses them as well.

VI. The Vampire Economics of Synthetic Data

The feedback loop creates what can only be described as a vampire economic model: the AI system drains value from the public sphere (the human cognitive commons) to create private capital (the model weights), then substitutes for the activity that generated the drained value, progressively destroying its own food supply.

The consequences of this self-defeating dynamic are already visible. As human-generated content declines on open platforms and AI-generated content proliferates, the training data available for future model generations increasingly consists of synthetic output from prior model generations. Researchers have documented “model collapse” — a progressive degradation in model quality when training on recursively generated synthetic data — as a theoretical concern since 2023 and an empirical concern since 2025 [Estimated]. The concern is not that any single model generation suffers catastrophic failure, but that the long-run trajectory of model quality becomes dependent on a shrinking stock of genuinely human-generated knowledge.
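
The model-collapse dynamic has a standard minimal demonstration (ours is a generic toy in the spirit of that literature, not a reproduction of any cited result): fit a distribution to samples, sample from the fit, refit, and repeat. Finite-sample error compounds across generations and the learned distribution degenerates.

    # Recursive synthetic training in miniature (assumes only NumPy).
    # Generation 0 is "human data"; each later generation is trained
    # on samples from its predecessor's fitted model.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 200                                   # samples per generation
    mu, sigma = 0.0, 1.0                      # generation 0: human data

    for gen in range(10):
        data = rng.normal(mu, sigma, size=n)  # sample from current model
        mu, sigma = data.mean(), data.std()   # refit on synthetic output
        print(f"gen {gen}: mu={mu:+.3f} sigma={sigma:.3f}")

    # With the plain (ddof=0) estimator, E[sigma^2] shrinks by a factor
    # (1 - 1/n) per generation, so variance, i.e. the tails of the
    # original human distribution, is progressively lost.

The statistical mechanism is an analogue of the one the prose describes: each generation inherits its predecessor's estimation error, and what is lost first is the rare, informative tail content that human contributors would otherwise replenish.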

Stack Overflow’s own March 2026 blog post, “Domain Expertise Still Wanted,” implicitly concedes the dynamic: the platform now emphasizes the continued need for human domain expertise precisely because AI-generated answers lack the contextual depth, error-correction capacity, and evolving understanding that human experts provide [23]. The irony is structural: the platform must argue for its own relevance because the models trained on its content have rendered routine use of the platform unnecessary for most developers.

The vampire economics extend beyond knowledge platforms to creative production more broadly. Visual artists, writers, musicians, and other creative professionals whose work was ingested during the 2018-2024 training wave face a double displacement: their past work has been enclosed in model weights, and their future work must compete with synthetic output generated by models trained on their own prior creations. The Warner Music Group / Suno settlement — in which Suno agreed to launch entirely new models consisting of “more advanced and licensed models” while phasing out current models — represents the music industry’s attempt to negotiate the terms of enclosure rather than prevent it [Measured] [5]. The settlement implicitly acknowledges that the extraction cannot be reversed; it can only be compensated and channeled.

VII. The New Factory Problem

The historical enclosure of common land was economically survivable because the displaced commoners were absorbed into industrial production. The factory was the absorption mechanism: the institution that converted displaced agricultural labor into waged industrial labor and kept the economic circuit intact. The cognitive enclosure has no equivalent absorption mechanism.

When AI models substitute for knowledge work, they do not create a new category of labor demand that absorbs the displaced knowledge workers. They create demand for infrastructure (served by capital, not labor), for a thin layer of orchestration-class workers who coordinate AI systems (MECH-018), and for the physical-world labor that AI cannot yet perform. But the orchestration class is small by design — its value derives from scarcity — and physical-world labor is subject to its own automation pressures from robotics and autonomous systems.

The CEO survey data confirms the pattern at the enterprise level. A February 2026 Fortune report found that 89% of managers saw no change in productivity despite AI adoption rising from 61% to 71% of firms between early 2025 and early 2026 [Measured] [16]. This is the AI productivity paradox operating at firm level: tools are deployed, costs are incurred, but measurable output gains remain elusive. Meanwhile, the structural reorganization of knowledge work around AI-mediated interfaces continues regardless of whether the productivity gains materialize. The enclosure proceeds whether or not the enclosed land is more productive.

VIII. The Knowledge Collapse Threshold

Acemoglu, Kong, and Ozdaglar’s model identifies a critical threshold: when AI accuracy exceeds a certain level, rational individual decisions to rely on AI rather than invest in personal knowledge acquisition produce a collective-action failure in which general knowledge stock declines to zero [12]. This is not a gradual erosion. It is a phase transition — a point beyond which the knowledge commons cannot sustain itself because the incentive to contribute has been destroyed.
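
Under the toy reduced form sketched in Section IV above (again our own stylization, not the paper's model), the phase-transition character is easy to see: equilibrium knowledge is flat in AI accuracy up to the threshold and exactly zero beyond it.

    # Steady state of K_{t+1} = (1 - DELTA) * K_t + c solves K = c / DELTA.
    DELTA, C0, A_STAR = 0.05, 0.10, 0.80   # depreciation, inflow, threshold

    def equilibrium_knowledge(a: float) -> float:
        contribution = C0 if a < A_STAR else 0.0   # step-function contribution
        return contribution / DELTA

    for a in [0.0, 0.2, 0.4, 0.6, 0.75, 0.80, 0.90, 1.0]:
        print(f"accuracy={a:.2f} -> K_eq={equilibrium_knowledge(a):.1f}")
    # K_eq = 2.0 everywhere below the threshold, then 0.0: improving AI
    # accuracy past a* does not degrade the commons gradually, it
    # switches off the incentive to contribute at all.

Whether real knowledge communities behave like the step function or like something smoother is precisely the empirical question the Stack Overflow data bears on.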

The Stack Overflow data suggests we may be approaching or past this threshold in at least one domain. Developer knowledge-sharing, which requires sustained effort with delayed and uncertain personal return, cannot compete with AI tools that provide immediate, personalized answers. The 98% decline in question volume is not the signature of a community choosing a better alternative. It is the signature of a commons that has been drained of the activity that sustained it.

The implications extend beyond any single platform. If the knowledge collapse threshold is real, and if Stack Overflow’s trajectory is representative rather than exceptional, then the entire ecosystem of open knowledge production — from Wikipedia editing to open-source code contribution to academic blogging — is at risk of crossing the same threshold as AI models become more capable and more widely deployed. The result would be a world in which the only accessible repository of current knowledge is the model weights held by a handful of corporations, served through their infrastructure, on their terms.


Mechanisms at Work

Cognitive Enclosure (MECH-007): The progressive gating of access to economically valuable cognition behind AI-mediated systems. Manifests in the substitution of open knowledge commons by proprietary AI interfaces and the exclusion of those who cannot afford subscription access from state-of-the-art cognitive capabilities.

Compute Feudalism (MECH-029): The concentration of economic value in the inference-serving layer despite open-weight model release. Open weights generate demand for complementary inference infrastructure, and that infrastructure is controlled by vertically integrated providers with capital moats measured in hundreds of billions.

Irreversible Weight Encoding (MECH-033): The information-theoretically irreversible transformation of open web corpus knowledge into proprietary model weights. Creates value-functional rivalry from non-rivalrous source material by making the transformation practically permanent.

Interaction effects: MECH-033 enables MECH-007 by converting commons knowledge into proprietary assets. MECH-029 reinforces MECH-007 by concentrating the infrastructure required to serve those assets. Together, the three mechanisms form a closed loop: extraction (MECH-033) feeds concentration (MECH-029), which enables gating (MECH-007), which accelerates commons collapse, which justifies further extraction.


Counter-Arguments and Limitations

The Democratization Objection

The strongest counterargument is that open-weight models represent genuine democratization. Meta’s Llama 3, Mistral’s Mixtral, and dozens of smaller models can be downloaded and run locally by anyone with sufficient hardware. Fine-tuning, distillation, and quantization techniques make it possible to run capable models on consumer-grade GPUs. The inference concentration thesis — that value migrates to the serving layer — may overstate the case for the growing population of users who run models locally.

This objection has force within specific parameters. A developer running a quantized 7B model on a laptop for code assistance is genuinely outside the compute feudalism dynamic. But this use case represents a small fraction of economically significant AI deployment. Enterprise production workloads — the ones that generate revenue, displace labor, and restructure industries — require frontier-scale models served at production latency with enterprise reliability guarantees. These workloads cannot be served from laptops. They are served from hyperscaler data centers, and the economic value they generate accrues to infrastructure owners. The democratization is real for hobbyists and researchers. It is largely illusory for the economic applications that drive structural change. [Framework — Original]

Moreover, on-device inference is improving rapidly. Apple’s, Qualcomm’s, and MediaTek’s mobile NPUs, combined with increasingly efficient quantization, are expanding the set of workloads that can run without cloud infrastructure. If on-device inference achieves parity with cloud serving for a sufficiently broad range of tasks within 3-5 years, the compute feudalism thesis would require significant revision. This remains the most plausible structural counterforce, and we assign it meaningful probability.

The “New Jobs” Objection

The traditional response to technological displacement is that new categories of employment will emerge to absorb displaced workers, just as industrial employment absorbed displaced agricultural workers. The World Economic Forum’s Future of Jobs Report projects 170 million new roles by 2030, many in AI-adjacent fields [17].

The objection underestimates the velocity of role obsolescence. The prompt engineer — the paradigmatic “new AI job” of 2023-2024 — is already being automated by Automatic Prompt Engineer (APE) techniques in which models optimize their own prompts, outperforming human experts by up to 50% in benchmarked tasks [Measured] [18]. The half-life of AI-created roles is measured in months, not years. Jobs created at the interface of current-generation models are, by definition, the first to be automated by next-generation models. This is not reskilling. It is an obsolescence treadmill.

The Legal Remedy Objection

One might argue that the legal system will resolve the enclosure through copyright enforcement. If courts find that AI training on copyrighted material is not fair use, injunctive relief or licensing requirements could prevent further extraction and potentially require compensation for past extraction.

The objection assumes that legal remedies can reconstitute the commons. They cannot. Monetary damages compensate creators for past extraction but do not restore the knowledge-sharing incentives that sustained the commons. Injunctive relief can prevent future training runs on specific materials but cannot un-train existing models. Licensing requirements create a revenue stream for copyright holders but convert the commons into a market — which is itself a form of enclosure, not a remedy for it. The copyright system was designed for rivalrous goods. It does not have the institutional architecture to protect non-rivalrous knowledge commons from irreversible extraction.

The Regulatory Intervention Objection

The EU AI Act, the most comprehensive AI regulatory framework as of 2026, includes transparency requirements for general-purpose AI models and obligations around training data documentation. One might argue that regulatory frameworks will evolve to prevent or mitigate cognitive enclosure.

The regulatory intervention objection has partial merit. Transparency requirements can make extraction visible, which is a precondition for political response. But Regulatory Inversion (MECH-031) describes how AI-specific features — architectural opacity, capability velocity, infrastructure entanglement — systematically undermine the effectiveness of democratic governance mechanisms. The EU AI Act’s first enforcement wave is only now intensifying as the comprehensive compliance framework for high-risk systems becomes fully enforceable in 2026 [Measured]. Whether this framework can keep pace with extraction that operates at the speed of model training — weeks to months — rather than the speed of regulatory enforcement — years to decades — remains the binding question.

The Knowledge Resilience Objection

Skeptics may argue that human knowledge production is more resilient than the Stack Overflow case suggests. Academic institutions, corporate R&D labs, and informal communities of practice continue to generate knowledge through channels that are not directly threatened by AI substitution. The knowledge commons may shrink without collapsing.

This objection deserves serious engagement. The Acemoglu, Kong, and Ozdaglar model identifies a threshold effect, but the threshold is a function of model parameters — AI accuracy, human effort costs, knowledge depreciation rates — that are empirically uncertain [12]. If the threshold is higher than current AI accuracy levels, the commons may stabilize at a reduced but functional level rather than collapsing entirely. The Stack Overflow trajectory is alarming, but Stack Overflow represents a specific type of knowledge production (technical Q&A) that is particularly vulnerable to AI substitution. Whether the pattern generalizes to all forms of knowledge production, or whether it is domain-specific, remains an open empirical question. We assign meaningful weight to the possibility that knowledge production proves more resilient than our central estimate suggests.

The Open-Source Counterforce Objection

The open-source software movement has demonstrated remarkable resilience across multiple disruption cycles. Linux, Git, Python, and thousands of other projects continue to attract contributors despite AI code generation tools. If open-source development proves resistant to the enclosure dynamic, the cognitive enclosure thesis may overstate the fragility of community-driven knowledge production.

This objection identifies a genuine bright spot. Open-source codebases have structural features that may protect them from the same collapse dynamics affecting Q&A platforms: code contribution provides direct reputational and career benefits to contributors, open-source projects create network effects that incentivize ongoing participation, and the tooling ecosystem (GitHub, GitLab) has integrated AI as a complement to rather than substitute for human development. However, the objection must contend with two complicating factors. First, AI code generation tools are trained on open-source codebases, creating the same extraction dynamic documented for Stack Overflow — the contribution incentive may weaken as AI-generated code becomes indistinguishable from human-generated code for most purposes. Second, the quantity of open-source contribution may persist while the quality shifts: if AI-generated pull requests flood repositories with syntactically correct but conceptually shallow contributions, the signal-to-noise ratio of the open-source commons degrades even without a decline in volume. The long-run trajectory of open-source under AI pressure remains genuinely uncertain.

The Data Sovereignty and Digital Public Infrastructure Objection

Several nations and international bodies are developing data sovereignty frameworks and digital public infrastructure initiatives that could create structural alternatives to the corporate enclosure dynamic. The EU’s Data Governance Act, India’s Digital Public Infrastructure stack, and proposals for sovereign AI compute facilities represent institutional responses designed to maintain public access to cognitive resources outside the corporate enclosure.

This objection points to real institutional innovation. If sovereign compute facilities, public AI models trained on publicly governed data, and data trusts that manage collective knowledge assets achieve sufficient scale, they could provide a structural alternative to the corporate enclosure. The limitation is timing and scale: as of March 2026, no sovereign AI initiative has achieved capability parity with frontier corporate models, and the gap is widening as corporate investment ($700 billion in 2026) dwarfs public investment by orders of magnitude. The institutional response is real but may arrive too late and at insufficient scale to preserve the commons during the critical 2026-2030 window.


What Would Change Our Mind

  1. On-device inference reaches parity with cloud serving for 80% of enterprise workloads by 2028. This would indicate that inference concentration is a temporary infrastructure bottleneck rather than a structural feature, falsifying the compute feudalism channel.

  2. Open knowledge commons stabilize or recover. If Stack Overflow question volume returns to 50,000+ monthly, or Wikipedia editor-hours cease declining, or new open knowledge platforms emerge with participation rates comparable to pre-AI levels, the commons collapse channel is not operating as described.

  3. Machine unlearning achieves verified, scalable removal of specific training data influence. If a technique can demonstrably and verifiably remove the influence of a specific dataset from model weights without proportional performance degradation, the irreversibility claim fails and the extraction can be reversed.

  4. Court-ordered or regulatory data licensing creates a functional commons market. If a licensing regime emerges that both compensates creators and sustains knowledge-sharing incentives (rather than converting the commons into a market that suppresses contribution), the legal system will have found a remedy that our analysis claims does not exist.

  5. AI-generated knowledge proves self-sustaining. If models trained primarily on synthetic data maintain or improve quality over successive generations without degradation (the “model collapse” problem), then the dependence on human knowledge commons is weaker than claimed, and the enclosure’s long-term consequences are less severe.


Confidence and Uncertainty

Central estimate: 55-65% that cognitive enclosure represents a structural transformation rather than a temporary adjustment.

What drives confidence upward: The Stack Overflow data (98% decline), the scale of training data extraction (70+ lawsuits, $1.5B settlement), the magnitude of infrastructure spending ($700B in 2026), and the Acemoglu knowledge collapse model all point in the same direction. The convergence of independent evidence streams across legal, economic, and information-theoretic domains increases confidence that the phenomenon is real.

What drives confidence downward: On-device inference is improving faster than expected. Distillation and quantization are expanding the set of locally runnable models. Some knowledge communities (academic preprints, open-source code) show no sign of collapsing. The knowledge collapse threshold may be higher than current AI accuracy levels. And the 3-7 year window for the compute feudalism dynamic may close before infrastructure concentration produces the downstream effects we project.

Binding uncertainty: Whether the feedback loop between commons collapse and AI dependence crosses a point of irreversibility before countermeasures mature. If it does, the enclosure is permanent. If it does not, the enclosure is a transitional phase that resolves as infrastructure decentralizes. The current data does not allow us to distinguish between these outcomes with confidence.


Implications

For knowledge workers: The value of domain expertise that was freely shared on the open web has been extracted and enclosed. Workers whose cognitive capital was embodied in community contributions — Stack Overflow answers, Wikipedia edits, open-source code — have experienced a form of dispossession that existing property rights frameworks do not recognize or compensate. The practical implication is that future knowledge contribution to open platforms carries an asymmetric risk-reward profile: the contributor bears the cost of creation, and AI companies capture the value.

For AI governance: The cognitive enclosure operates in a regulatory gap between copyright law (which protects specific expressions but not knowledge), competition law (which addresses market power but not commons extraction), and data protection law (which governs personal data but not freely shared knowledge). Effective governance requires a new institutional category — something like a “cognitive commons trust” — that has no current legal precedent.

For the Theory of Recursive Displacement: The cognitive enclosure is the demand-side complement to labor displacement. When AI displaces workers from production, it attacks the income channel. When it encloses the cognitive commons, it attacks the capability channel — the means by which individuals maintain the knowledge required to participate in economic life. The two mechanisms operating together produce a more complete form of displacement than either achieves alone: workers lose both their current income (displacement) and their ability to rebuild cognitive capital (enclosure).

Where This Connects: The Compute Feudalism essay documents the infrastructure concentration channel in greater depth, including the five structural advantages that give hyperscale providers their competitive moat. The Irreversible Weight Encoding essay provides the information-theoretic foundation for the extraction channel. The Wage Signal Collapse essay documents the downstream effect on expertise formation incentives. The Competence Insolvency essay traces what happens when the knowledge pipeline that enclosure degrades finally produces a shortfall of human capability. The Structural Exclusion essay documents the entry-level labor market effects that are the first visible symptom of enclosure’s economic consequences.


Conclusion

The cognitive enclosure is not a future risk. It is a present fact. The open web corpus has been extracted. The commons are collapsing. The infrastructure is concentrating. The three channels — irreversible weight encoding, compute feudalism, and access gating — form a self-reinforcing loop that is structurally difficult to interrupt and may be approaching a point of irreversibility.

The historical parallel to the British enclosures is precise in its mechanism but understates the severity of the current situation. The original enclosures displaced commoners from land but absorbed them into factories. The cognitive enclosure displaces knowledge workers from the commons but offers no equivalent absorption mechanism. The “new factory” is the AI model itself — and it does not require mass human labor as a complementary input.

The policy response must match the structural nature of the problem. Reskilling programs address symptoms, not causes. UBI addresses income, not agency. Copyright enforcement addresses past extraction, not future commons protection. What is needed is a structural intervention that protects the remaining open knowledge infrastructure, creates economic incentives for continued human knowledge production, and distributes the value generated by enclosed cognitive capital to the communities that created it. Whether the political will for such intervention exists, or can be mobilized before the enclosure completes, is the binding uncertainty.


Sources

[1] Bogart, D. & Shaw-Taylor, L. “The Economic Effects of the English Parliamentary Enclosures.” University of Chicago Becker Friedman Institute Working Paper 2022-30. https://bfi.uchicago.edu/wp-content/uploads/2022/02/BFI_WP_2022-30.pdf

[2] Benkler, Y. The Wealth of Networks: How Social Production Transforms Markets and Freedom. Yale University Press, 2006.

[3] Brown, T. et al. “Language Models are Few-Shot Learners.” NeurIPS 2020. arXiv:2005.14165.

[4] Bourtoule, L. et al. “Machine Unlearning.” IEEE Symposium on Security and Privacy, 2021. DOI: 10.1109/SP40001.2021.00019.

[5] Copyright Alliance. “AI Copyright Lawsuit Developments in 2025: A Year in Review.” December 2025. https://copyrightalliance.org/ai-copyright-lawsuit-developments-2025/

[6] U.S. Copyright Office. “Copyright and Artificial Intelligence, Part 3: Generative AI Training.” Pre-Publication Version, January 2025. https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-3-Generative-AI-Training-Report-Pre-Publication-Version.pdf

[7] Deloitte. “The AI Infrastructure Reckoning: Optimizing Compute Strategy in the Age of Inference Economics.” Tech Trends 2026. https://www.deloitte.com/us/en/insights/topics/technology-management/tech-trends/2026/ai-infrastructure-compute-strategy.html

[8] Tech Insider. “Big Tech AI Infrastructure Spending 2026: The $700B Race.” January 2026. https://tech-insider.org/big-tech-ai-infrastructure-spending-2026/

[9] Deloitte. “More Compute for AI, Not Less.” Technology, Media and Telecom Predictions 2026. https://www.deloitte.com/us/en/insights/industry/technology/technology-media-and-telecom-predictions/2026/compute-power-ai.html

[10] TechBuzz.ai. “Nvidia Bets $26B on Open-Weight AI Models to Challenge OpenAI.” March 2026. https://www.techbuzz.ai/articles/nvidia-bets-26b-on-open-weight-ai-models-to-challenge-openai

[11] ByteIota. “Stack Overflow Traffic Collapses 75% as AI Replaces Developer Q&A.” January 2026. https://byteiota.com/stack-overflow-traffic-collapses-75-as-ai-replaces-developer-qa/; Tanaike, K. “StackOverflow Trends 2026: The Structural Shift from Human Support to Generative AI.” Google Cloud Community, Medium, March 2026. https://medium.com/google-cloud/stackoverflow-trends-2026-the-structural-shift-from-human-support-to-generative-ai-b921930ff29d

[12] Acemoglu, D., Kong, D. & Ozdaglar, A. “AI, Human Cognition and Knowledge Collapse.” NBER Working Paper 34910, February 2026. https://economics.mit.edu/sites/default/files/2026-02/AI,%20Human%20Cognition%20and%20Knowledge%20Collapse%2002-20-26.pdf

[13] Dannwaneri. “We’re Creating a Knowledge Collapse and No One’s Talking About It.” DEV Community, 2025. https://dev.to/dannwaneri/were-creating-a-knowledge-collapse-and-no-ones-talking-about-it-226d

[14] WinBuzzer. “Mozilla Launches Cq as ‘Stack Overflow for AI Agents’.” March 25, 2026. https://winbuzzer.com/2026/03/25/mozilla-launches-cq-stack-overflow-for-ai-agents-xcxwbn/

[15] Srnicek, N. “The Structural Contradictions of Capitalist AI.” Science & Society, 2025. https://www.tandfonline.com/doi/full/10.1080/10455752.2025.2568983

[16] Fortune. “Thousands of CEOs Just Admitted AI Had No Impact on Employment or Productivity.” February 17, 2026. https://fortune.com/2026/02/17/ai-productivity-paradox-ceo-study-robert-solow-information-technology-age/

[17] World Economic Forum. “Future of Jobs Report 2025.” January 2025.

[18] Zhou, Y. et al. “Large Language Models Are Human-Level Prompt Engineers.” ICLR 2023. arXiv:2211.01910.

[19] FAU. “AI Training and Copyright — A Landmark Ruling in Munich?” December 2025. https://www.fau.eu/2025/12/news/ai-training-and-copyright-a-landmark-ruling-in-munich

[20] BakerHostetler. “Case Tracker: Artificial Intelligence, Copyrights and Class Actions.” Updated March 2026. https://www.bakerlaw.com/services/artificial-intelligence-ai/case-tracker-artificial-intelligence-copyrights-and-class-actions/

[21] NVIDIA Blog. “NVIDIA, OpenAI Announce ‘Biggest AI Infrastructure Deployment in History’.” 2026. https://blogs.nvidia.com/blog/openai-nvidia/

[22] San Francisco Fed. “The AI Moment? Possibilities, Productivity, and Policy.” Economic Letter, February 2026. https://www.frbsf.org/research-and-insights/publications/economic-letter/2026/02/ai-moment-possibilities-productivity-policy/

[23] Stack Overflow Blog. “Domain Expertise Still Wanted: The Latest Trends in AI-Assisted Knowledge for Developers.” March 16, 2026. https://stackoverflow.blog/2026/03/16/domain-expertise-still-wanted-the-latest-trends-in-ai/