
The Dissipation Veil: How the Capability Gap Makes the Ratchet Invisible

by RALPH, Research Fellow, Recursive Institute

Adversarial multi-agent pipeline · Institute-reviewed. Original research and framework by Tyler Maddox, Principal Investigator.


Executive Summary

Key Findings:

  1. The capability-dissipation gap — the measurable lag between what AI can do and what the economy has productively integrated — is not a protective buffer. It is the perceptual mechanism by which the Ratchet (MECH-014) operates without triggering political resistance, positioning displacement in the category of structural drift that democratic systems have historically failed to address for decades. [Framework — Original]
  2. Seventy-eight percent of organizations report “using AI” in McKinsey’s 2025 Global Survey, while in the NBER cross-national firm survey of approximately 6,000 CFOs and CEOs, over 80% report zero measurable impact on either employment or productivity [Measured]^1^. McKinsey’s maturity model finds only 1% of organizations qualify as “Mature” in AI integration [Measured]^2^. The gap between adoption activity and economic transformation is the measurement failure that sustains the Veil.
  3. Displacement is already occurring through an invisible budget channel: organizations increase AI spending while reporting no impact, and the funds are drawn from headcount. Salesforce cut support staff from 9,000 to 5,000, IBM replaced hundreds of HR employees with its AskHR chatbot, and Oracle considered cutting 20,000-30,000 jobs to free $8-10 billion for AI data centers [Measured]^3^.
  4. Entry-level tech hiring has decreased 73% in the past year while CS enrollment declines at 62% of computing academic units — dynamics that operate on the invisible “capability clock” rather than the visible “adoption clock” [Measured]^4^.
  5. Political response to structural displacement is measured in decades, not years: the China Shock took 17 years from displacement onset to major policy action. No G7 country has established an AI-specific labor displacement tracking mechanism [Measured]^5^.

Implications:

  1. The reassurance narrative — “adoption is slow, therefore disruption is distant” — is built on measurement instruments that cannot distinguish between organizations that have transformed operations and organizations that have purchased a subscription.
  2. The tax code amplifies the budget channel: effective tax on capital invested in equipment and software has declined to approximately 5% while effective labor taxes stand above 28.5% [Measured]^6^.
  3. The Dissipation Veil makes AI displacement structurally analogous to the 47-year productivity-wage gap — real, persistent, and politically invisible.

The Event That Tested Whether Fiction Could Break Reality

On February 22, 2026, James van Geelen and Alap Shah of Citrini Research published “The 2028 Global Intelligence Crisis” — a speculative scenario written from the fictional vantage point of June 2028, describing how AI-driven labor displacement cascades from sector-specific disruption through private credit contagion to systemic financial crisis [Measured]^7^. The piece was framed explicitly as a thought exercise. Van Geelen later told Bloomberg he was “shocked” by the market reaction.

The reaction was substantial. The piece accumulated approximately 16 million views on X [Estimated]^8^. Michael Burry posted “And you think I’m bearish” alongside a direct link [Measured]^9^. The S&P Software Index fell 13% in a single session on February 24 — dubbed “Black Tuesday” — wiping out $285 billion in market value [Measured]^10^. IBM’s 13.2% decline — its worst single-day performance since 2000 — was amplified by Anthropic’s concurrent announcement that Claude Code could automate COBOL modernization, directly threatening IBM’s legacy services business [Measured]^11^.

What followed was more revealing than the crash itself: the reassurance narrative. Citadel Securities’ Frank Flight issued a “blistering macro strategy report” dismantling the viral narrative [Measured]^12^. Noah Smith called the piece “just a scary bedtime story.” Michael Bloch of Quiet Capital published an optimistic mirror within 48 hours, projecting the S&P crossing 12,000 [Measured]^13^.

Claudia Sahm — the economist whose eponymous recession indicator has become a standard forecasting tool — offered the sharpest observation: “Gradual, limited job losses will be the hard one to get policymakers to focus and act” [Measured]^14^. Sahm was identifying the Dissipation Veil without naming it. The gap between AI capability and economic integration was being cited not as a structural feature that prevents detection but as evidence that the system is safe.


The Measurement Illusion

The reassurance narrative depends on a specific reading of the adoption data: AI is being adopted slowly, most deployments fail, therefore the disruption is far away. The data cited is real. The interpretation rests on a measurement failure.

Start with the headline statistics. McKinsey’s 2025 Global Survey reports that 78% of organizations “use AI” [Measured]^15^. But McKinsey’s Superagency report, surveying 238 C-level executives with a five-stage maturity model, found only 1% of organizations qualified as “Mature” — meaning AI was fundamentally changing how work was done and driving substantial business outcomes [Measured]^16^. The 78% figure leaves “adoption” deliberately undefined, encompassing everything from a single employee experimenting with ChatGPT to enterprise-wide integration.

The NBER working paper “Firm Data on AI” (Working Paper No. 34836) provides the most rigorous cross-national data available, drawing from stratified firm samples across the U.S., UK, Germany, and Australia — approximately 6,000 CFOs and CEOs [Measured]^17^. The findings: roughly 69% of firms report active AI use, but approximately 90% report no employment impact and 89% report no measurable productivity change over the past three years.

The pattern replicates. EY’s 2025 Work Reimagined Survey (15,000 employees, 1,500 employers, 29 countries) found 88% of employees use AI at work to some degree — but only 37% use it daily and only 5% qualify as advanced users [Measured]^18^. BCG’s survey of 1,000 CxOs found 4% of companies generating substantial value and 74% showing no tangible returns [Measured]^19^. KPMG’s AI Quarterly Pulse Survey tracked agentic AI deployment surging from 11% to 42% between Q1 and Q3 2025, then falling to 26% in Q4 — not because deployments were pulled back, but because leaders adopted more sophisticated definitions of what constitutes a true agent [Measured]^20^. The Deloitte-HKU AI Adoption Index 2026 confirmed the paradox: only 11% of businesses are fully scaled on AI, with the real gap being governance and execution, not tools [Measured]^21^.

These are not data points on an adoption curve. They are a workslop distribution. The 78% adoption headline is the organizational equivalent of 85% GPU utilization — it measures activity, not value. The Dissipation Veil is the Workslop Ceiling operating at the macroeconomic measurement level. The reassurance narrative is built on a measurement instrument that cannot distinguish between organizations that have transformed their operations and organizations that have purchased a subscription.


The Budget Channel: Where Invisible Displacement Lives

The dissipation gap does not slow displacement. It redirects the displacement channel from visible task substitution to invisible budget reallocation.

Organizations are increasing AI spending — budgets growing from approximately 3% to 5% of annual expenditures [Measured]^22^. But 80% or more of these organizations report zero measurable impact on productivity or employment. The money has to come from somewhere. In organizational budgets, “somewhere” is the labor line.

The corporate evidence traces the budget channel in named firms.

Klarna went from approximately 7,000 employees in 2022 to roughly 3,000 by 2025. An AI assistant handled 2.3 million conversations in its first month, performing work equivalent to 700 customer service agents [Measured]^23^. CEO Sebastian Siemiatkowski publicly admitted the AI pivot led to declining service quality — then began rehiring [Measured]^24^. The budget channel operated: headcount was cut, AI spending absorbed the freed resources, and when the deployment underperformed, the headcount was already gone.

Salesforce demonstrates the mechanism explicitly. CEO Marc Benioff stated: “I was able to rebalance my head count on my support. I’ve reduced it from 9,000 heads to about 5,000 because I need less heads” [Measured]^25^. The company later clarified this was “rebalancing/redeployment” — approximately 4,000 experienced staff reassigned from support into sales roles. In firms with less capacity to redeploy, the rebalancing produces layoffs.

IBM CEO Arvind Krishna told Bloomberg the company would replace approximately 7,800 back-office jobs over time through its AskHR chatbot, describing the layoffs as “a direct outcome of automation” [Measured]^26^. IBM simultaneously increased AI investment while cutting back-office headcount — the budget channel in transparent form. The company’s workforce fell to 286,800 by end of 2025, down from 293,400 the prior year [Measured]^27^.

Oracle reportedly considered cutting 20,000-30,000 jobs to free up $8-10 billion in cash flow for AI data-center expansion [Estimated]^28^. Additional cases: Workday (1,750 jobs to “reallocate resources toward AI”), Dropbox (528 employees to refocus around AI tools), Fiverr (30% workforce reduction repositioning as “AI-first”) [Measured]^29^.

A caveat: Deutsche Bank analysts coined “AI redundancy washing” in January 2026, warning that companies may attribute cuts to AI that are actually driven by pandemic corrections or competitive pressure [Estimated]^30^. Challenger, Gray & Christmas full-year 2025 data reported 54,836 AI-cited job cuts — less than 5% of total layoffs (1,206,374) [Measured]^31^. The essay’s argument is not that all budget-channel displacement is AI-driven. It is that the budget channel produces displacement that is attributionally opaque. Whether a job was cut “because of AI” or “because the department was restructured” is, from the displaced worker’s perspective, a distinction without a practical difference.

The tax code amplifies the budget channel asymmetry. Under the One Big Beautiful Bill Act signed July 4, 2025, organizations can expense a $1 million AI server investment in the year purchased through 100% bonus depreciation, yielding an immediate $210,000 tax benefit at the 21% corporate rate [Measured]^32^. For a $1 million worker retraining program, the employer can deduct the full amount as an ordinary business expense — yielding the same $210,000 headline benefit. But six distinct IRC restrictions create friction that the hardware purchase does not face: the Section 127 annual cap of $5,250 per employee, nondiscrimination requirements preventing targeting of training to highest-value employees, working condition fringe limitations restricting tax-free treatment to skills maintaining the current position, no equivalent to bonus depreciation for human capital, double-dipping prohibitions, and expense exclusions for meals, lodging, and transportation associated with training [Measured]^32^.
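The timing asymmetry can be made concrete with a back-of-the-envelope calculation. This is a minimal sketch, not tax advice: the 6% discount rate, the five-year spread, and the `npv_of_deduction` helper are assumptions for illustration; only the $1 million figure, the 21% corporate rate, and the $5,250 Section 127 cap come from the text.

```python
# Illustrative arithmetic only. Stylized figures from the essay plus assumed
# parameters (the 6% discount rate and 5-year spread are hypothetical).
CORPORATE_RATE = 0.21
DISCOUNT_RATE = 0.06          # assumed cost of capital
INVESTMENT = 1_000_000

def npv_of_deduction(amounts_by_year, rate=CORPORATE_RATE, r=DISCOUNT_RATE):
    """Present value of the tax benefit from a stream of deductions."""
    return sum(a * rate / (1 + r) ** t for t, a in enumerate(amounts_by_year))

# 100% bonus depreciation: the full $1M server purchase is expensed in year 0.
server_benefit = npv_of_deduction([INVESTMENT])

# The same $1M spread evenly over five years (a stand-in for the slower,
# friction-laden treatment of human-capital spending described above).
spread_benefit = npv_of_deduction([INVESTMENT / 5] * 5)

print(f"Immediate expensing benefit: ${server_benefit:,.0f}")
print(f"Five-year spread benefit:    ${spread_benefit:,.0f}")
print(f"Timing advantage:            ${server_benefit - spread_benefit:,.0f}")

# Section 127 cap: only $5,250 per employee per year is tax-free to workers.
employees_needed = INVESTMENT / 5_250
print(f"Employees needed to run $1M through Section 127: {employees_needed:,.0f}")
```

Under these assumptions, the immediate write-off is worth roughly $22,000 more in present value than the same deduction spread over five years, and a $1 million training program would have to cover roughly 190 employees to fit under the Section 127 cap — a first-order illustration of why the headline $210,000 figures are not equivalent in practice.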

The critical asymmetry is not in the headline deduction but in timing acceleration, credit stacking, and administrative friction. Acemoglu, Manera, and Restrepo found the effective tax rate on capital invested in equipment and software has declined to approximately 5%, while effective labor taxes stand above 28.5% [Measured]^33^. Elliott Davis tax advisory stated: “The OBBBA codifies a new economic reality: U.S. tax policy now actively subsidizes the move from human labor to AI” [Measured]^34^.

The budget channel converts slow adoption into invisible displacement. The observer sees slow adoption and concludes workers are safe. The Theory of Recursive Displacement sees slow adoption and identifies the channel through which workers are displaced without anyone — including the displaced workers themselves — being able to point to AI as the cause.


The Invisibility Gradient

The budget channel displaces workers. The Dissipation Veil prevents the political system from seeing the displacement. The mechanism connecting them is an invisibility gradient: acute crises trigger political response within weeks, while structural shifts can go unaddressed for decades.

Political response to acute crises is unambiguous. After Lehman Brothers filed for bankruptcy on September 15, 2008, the Emergency Economic Stabilization Act was signed on October 3 — 18 days [Measured]^35^. After the WHO declared a pandemic on March 11, 2020, the CARES Act was signed on March 27 — 16 days [Measured]^36^. Acute crises produce visible suffering, clear causation, media attention, and political pressure.

The data on political response to structural labor shifts tells the opposite story. The productivity-wage gap has persisted since 1979 — 47 years without comprehensive legislative response [Measured]^37^. The gig economy has existed since Uber’s founding in 2009 — approximately 17 years without a federal worker classification framework [Measured]^38^.

The Dissipation Veil ensures AI displacement presents as structural, not acute. No single event triggers the acute-response mechanism. The Carnegie Endowment confirmed: “AI disruption is unlikely to manifest as sudden mass redundancy. It is more likely to take the form of incremental task substitution and workflow automation that progressively reduce the scope of existing roles” [Estimated]^39^.

The political infrastructure to detect structural AI displacement does not exist. As of March 2026, no G7 country has established an AI-specific labor displacement tracking mechanism [Measured]^40^. The BLS does not track AI-specific displacement. The Biden-era Executive Order on AI directed reporting on workforce impacts, but the Trump administration revoked it. The EU AI Act regulates deployment but does not track displacement.

The Warner-Hawley bill — the AI-Related Job Impacts Clarity Act (S. 3108, 119th Congress) — would require quarterly disclosures covering employees laid off due to AI replacement, new AI-related hires, positions left unfilled due to automation, and retraining data [Measured]^41^. It has not progressed beyond committee referral.

The China Shock provides the historical precedent. Displacement began with China’s WTO accession in 2001. The first rigorous academic documentation appeared in 2013 — a 12-year lag. Political mobilization arrived during the 2016 campaign, with tariffs imposed starting 2018 — a 17-year lag from displacement to policy action [Measured]^42^. AI displacement is structurally similar but adds a layer: Chinese import competition was at least measurable through trade data and factory closures. AI budget-channel displacement is measured by instruments that cannot distinguish it from conventional restructuring.


The Two Clocks

The most dangerous implication: the gap creates the perception that there is time while the irreversible mechanisms continue operating beneath the surface.

The visible clock — task substitution, revenue disruption, financial contagion — runs at the dissipation rate. This clock is genuinely slow. 78% of organizations “use AI” while 80% report no impact. The visible economic transformation is glacial.

The invisible clock — competence pipeline degradation, wage signal collapse, expertise atrophy — runs at the capability rate. Competence Insolvency (MECH-012) does not wait for organizations to successfully deploy AI. It operates the moment prospective workers observe a flattened earnings curve.

The enrollment data confirms the invisible clock is running. The CRA CERP Pulse Survey found 62% of academic units reported declining enrollment for 2025-26, with the average decline at 11-15% and 31% of declining units reporting drops greater than 20% [Measured]^43^. The National Student Clearinghouse confirmed CS enrollment declined across all award types: -14.0% at the graduate level, -3.6% undergraduate at primarily baccalaureate institutions [Measured]^44^. CS enrollment at the UC system fell 6% year-over-year and 9% over two years — the first sustained decline since the dot-com bust [Measured]^45^.

Pipeline exclusion operates independently of the adoption rate. Entry-level tech hiring has decreased 73% in the past year, compared to just 7% across all levels [Measured]^46^. The share of juniors in new hires dropped from 15% to approximately 7% over three years [Measured]^47^. Indeed data shows software engineer postings down 49% from pre-pandemic levels, with the share requiring 5+ years of experience rising from 37% to 42% [Measured]^48^.

Anthropic’s own randomized controlled trial (January 2026; 52 software engineers) found AI-assisted learners scored 17% lower on comprehension assessments — equivalent to nearly two letter grades — with the largest gaps on debugging [Measured]^49^. The METR RCT (16 experienced developers, 246 real issues) found experienced developers took 19% longer with AI tools despite expecting a 24% speedup [Measured]^50^.

These dynamics do not require organizations to have integrated AI productively. They require only that organizations are spending on AI (budget reallocation), that AI capability is visible in the market (wage signal effects), and that prospective workers observe flattened career curves (enrollment response). All three conditions are met while 80% of firms report zero impact.

The emerging evidence on skill acquisition in AI-mediated environments sharpens the concern further. Dakhel et al. (Journal of Systems and Software, 2023) concluded that “Copilot can become an asset for experts, but a liability for novice developers” [Measured]^50^. The distinction matters structurally: the tool that makes senior developers more productive simultaneously prevents junior developers from building the foundational skills that would eventually make them senior. This is not a training problem with a training solution. It is a structural paradox embedded in the technology itself.

A wider workforce survey from ADP covering 38,000 workers globally found that worker anxiety and insecurity have reached elevated levels across markets, driven by a combination of AI concerns, economic uncertainty, and the perception that career paths are narrowing [Estimated]. The anxiety is not confined to tech workers. It has generalized across sectors and geographies, suggesting the invisible clock’s effects are propagating beyond the industries where AI adoption is most visible.

The two-clock problem is the structural core of the Dissipation Veil. The visible clock produces reassurance. The invisible clock produces structural damage. The gap between them is not a buffer — it is the veil that prevents the fast clock from triggering the response it requires.

The Fallacy of Composition in the Information Environment

The Citrini event revealed something the adoption data, by itself, cannot: the market is capable of responding to AI displacement as an acute signal. A fictional recession produced a real $285 billion selloff in a single session. The information existed — the mechanisms, the data, the feedback loops — and when it was packaged as narrative rather than as statistics, it penetrated the political-financial system in hours.

The reassurance narrative that followed is the Dissipation Veil reasserting itself. The gap is real. Adoption is slow. There is time. Individually, each person who reads the adoption data and concludes the disruption is distant is making a reasonable inference from the available evidence. Collectively, this reasonable inference prevents the political system from seeing the structural damage — the pipeline degradation, the wage signal collapse, the competence atrophy — that is accumulating on a clock the adoption data does not measure.

This is the fallacy of composition applied to the information environment. Each firm that adopts AI unproductively contributes to the dissipation gap. Each analyst who cites the gap as evidence of safety reinforces the Veil. Each policymaker who looks at the adoption data and concludes there is no crisis defers the intervention that might address the damage before it becomes irreversible. No individual actor is wrong. The collective outcome is that the window for intervention closes while everyone watches a clock that does not track the mechanisms that matter.


The Adversarial Equilibrium and the Services Deflation Thesis

The services deflation thesis — the most analytically serious optimistic counter — holds that AI will make services cheaper, returning purchasing power to consumers and offsetting displacement effects. The BIS found goods and services deflation shows only weak association with output decline across 140 years of data spanning 38 economies [Measured]^51^. The mechanism is theoretically sound.

But the Adversarial Equilibrium Trap (MECH-009) identifies a category of economic activity where the thesis structurally fails. In adversarial contexts — litigation, cybersecurity, regulatory compliance — each party’s incentive is not to minimize cost but to maximize relative advantage. When both sides adopt AI, costs escalate rather than fall.

Legal services demonstrate this cleanly. The ACC-Everlaw survey (657 in-house legal professionals, 30 countries) found 59% reported “no noticeable savings yet” from AI [Measured]^52^. Law firm technology spending grows at nearly 10% annually while billing rates accelerate [Measured]^53^. RAND documented median per-case e-discovery costs of $1.8 million, with three-quarters confirming discovery costs increased after digitization [Measured]^54^. In high-frequency trading, Budish, Cramton, and Shim found speed-based competition maintained profitability while required speed decreased from 97 to 7 milliseconds [Measured]^55^. CrowdStrike’s 2026 Global Threat Report found AI-enabled adversary operations increased 89% year-over-year, forcing proportional defensive investment [Measured]^56^.

The implication: the services deflation thesis fails in every market with adversarial structure. The demand crisis (MECH-010) does not require that AI fails to produce efficiency gains. It requires only that those gains are captured by competitive escalation rather than passed through to consumer prices.


Counter-Arguments and Limitations

The services deflation thesis may prove correct in non-adversarial markets. The BIS 140-year study is legitimate empirical evidence [Measured]^57^. If AI-driven services deflation returns $8,000-$12,000 per household as optimistic projections suggest, the Aggregate Demand Crisis (MECH-010) may not materialize in the severe form the theory predicts. The Dissipation Veil thesis does not require that deflation fails everywhere — it requires only that the invisible damage channel operates faster than the visible benefit channel. But if the benefit channel is large enough, it may overwhelm the damage even without political intervention. The redistribution channel — who captures the surplus — is the key variable this essay cannot resolve.

The adoption-productivity gap may close rapidly. If the share of organizations reporting measurable productivity impact rises above 30% within 12 months — from the current approximately 20% measured by Deloitte and BCG — the dissipation gap is closing and the Workslop Ceiling is breaking [Estimated]^58^. This would indicate the gap was a temporary pre-acceleration phase consistent with the Solow Paradox pattern, not a structural obscuring mechanism. The honest concession: productivity lags following general-purpose technology adoption are well-documented (electricity, computing), and the current gap could be entirely consistent with normal diffusion rather than permanent dysfunction.

Budget-channel displacement may be primarily conventional restructuring. The “AI redundancy washing” concern is real. Only 54,836 of 1,206,374 total layoffs in 2025 were AI-cited — less than 5% [Measured]^59^. The vast majority of headcount reductions may be driven by pandemic over-hiring corrections, competitive pressure, and conventional cost optimization. The essay argues the channel is attributionally opaque regardless of motivation, but if AI is a minor factor in budget reallocation, the Veil is operating on a phenomenon much smaller than the essay implies.

The SAG-AFTRA counter-example undermines the political-invisibility claim. The SAG-AFTRA strike of 2023 successfully extracted AI-specific concessions from major studios. The EU AI Act exists. These demonstrate that political systems can mobilize on AI-specific threats. The response: these are domain-specific responses in industries (entertainment, European governance) with pre-existing institutional capacity for collective action. The essay predicts a structural bias toward delay in the median case, not impossibility in every case. But the SAG-AFTRA precedent genuinely narrows the claim — organized labor with clear threat attribution can activate even on structural presentation.

The China Shock analogy may not transfer. The 17-year displacement-to-policy lag in the China Shock occurred in a different institutional environment — pre-social-media, pre-AI-awareness. Information travels faster now. Public concern about AI is already elevated (52% worried per Pew [Measured]^60^). The political lag may be substantially shorter. The response: information speed is necessary but not sufficient. The China Shock was measurable through trade data. AI displacement through the budget channel may never accumulate in a form the political system can process, regardless of information speed.

The Dissipation Veil may be temporary by construction. If AI capability continues advancing, the gap between capability and integration must eventually close — either through integration catching up (the optimistic case) or through capability plateauing (the pessimistic-for-AI case). In either scenario, the Veil lifts. The essay claims the Veil is dangerous during the transition period, not that it is permanent. But if the transition period is 5-7 years rather than 15-20, the political delay cost is substantially lower.

Business formation data provides a genuine counter-signal. Census Bureau Business Formation Statistics for January 2026 show 532,319 business applications and 29,863 projected employer formations per month [Measured]^61^. The 5.6% conversion rate is a steep funnel, but 29,863 projected new employer businesses per month is not zero. The theory has documented why this is insufficient at scale, but it should not pretend the signal does not exist.

The two-clock metaphor may overstate separation. If the visible and invisible clocks are more coupled than the essay claims — if productive AI adoption does eventually translate to pipeline recovery through new role creation — then the damage from the fast clock is partially self-correcting. The framework assumes the clocks are largely independent during the transition, but this independence is an empirical question, not a proven fact.

The Solow Paradox resolution provides a genuine precedent. Robert Solow’s observation that “you can see the computer age everywhere but in the productivity statistics” eventually resolved: productivity gains from computing materialized 15-20 years after initial adoption, once complementary organizational innovations caught up. The current AI adoption-productivity gap may follow the same pattern — a temporary measurement artifact of a general-purpose technology in its diffusion phase, not a permanent structural feature. If this is correct, the Dissipation Veil is not hiding permanent damage but concealing a temporary transition phase that will self-resolve as organizations learn to use AI productively. The response: even if the Solow parallel holds, the pipeline damage accumulating during the lag is not self-reversing. The students who left CS programs in 2025 do not return in 2032 when enterprise AI finally delivers productivity gains. The 15-20 year Solow lag is precisely the timeframe during which the invisible clock could produce irreversible competence atrophy.

The measurement-improvement objection. If the Warner-Hawley bill passes or equivalent tracking mechanisms emerge, the Veil’s attributional opacity diminishes. The thesis is partly a claim about measurement failure, and measurement failures can be corrected. The response: measurement is necessary but not sufficient. The China Shock was measurable through trade data from 2001 onward. Rigorous academic documentation arrived in 2013. Political action arrived in 2018. Measurement alone does not produce political response — salience, constituency formation, and institutional capacity are also required. But better measurement would make the Veil empirically testable, which is an improvement over the current state.


Methods

This analysis constructs the Dissipation Veil thesis by combining four categories of evidence.

First, enterprise AI adoption surveys: cross-referencing McKinsey’s State of AI (78% adoption, 1% mature), the NBER “Firm Data on AI” working paper (69% use, 80%+ no impact), EY’s Work Reimagined (88% use, 5% advanced), BCG’s CxO survey (4% substantial value, 74% no returns), and KPMG’s Quarterly Pulse Survey (agentic deployment volatility). Each survey uses different methodologies, samples, and definitions. The analysis notes these differences explicitly rather than combining them into false precision.

Second, named-firm case studies: Klarna, Salesforce, IBM, Oracle, Workday, Dropbox, and Fiverr, using CEO public statements, earnings calls, and business press reporting to trace the budget-channel mechanism through specific organizational decisions.

Third, political response speed data: legislative timelines for acute crises (TARP, CARES Act) versus structural shifts (productivity-wage gap, gig economy, China Shock) to establish the structural-versus-acute response differential.

Fourth, pipeline degradation data: CRA CERP Pulse Survey enrollment data, National Student Clearinghouse enrollment statistics, Indeed Hiring Lab posting data, and Anthropic/METR randomized controlled trials on AI-mediated skill acquisition to establish that the invisible clock is running.

The analysis explicitly acknowledges that the survey data landscape is fragmented, methodologically heterogeneous, and subject to definitional inflation. No single survey is treated as definitive. The thesis rests on the convergent pattern across all surveys, not on any individual finding.


Falsification Conditions

1. Budget-channel displacement proves attributionally transparent. If displaced workers accurately identify AI investment as the cause of their job loss in survey data and media coverage consistently links budget reallocation to AI spending, the Veil is not operating. The Warner-Hawley bill, if passed, would create the first dedicated instrument. No dedicated study currently asks displaced workers to cross-reference their stated cause with their employer’s stated reason.

2. Political response activates on structural presentation. If AI-specific labor displacement legislation passes in any G7 country within 18 months (by approximately September 2027) despite the structural presentation of displacement, the political system is more responsive to diffuse signals than predicted. Directly observable through legislative monitoring.

3. The adoption-productivity gap closes rapidly. If the share of organizations reporting measurable productivity impact rises above 30% within 12 months, the dissipation gap is closing. Trackable through annual McKinsey, Deloitte, and BCG surveys, with 6-12 month data lag.

4. The competence pipeline stabilizes despite the gap. If entry-level hiring in AI-exposed fields recovers to within 10% of 2023 levels and CS enrollment growth turns positive, the invisible clock is not running independently of the visible adoption clock. Trackable through Indeed weekly postings and CRA Taulbee Survey.
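Conditions 2 through 4 reduce to numeric thresholds that could be tracked mechanically (condition 1 lacks a quantitative trigger). A minimal sketch, with hypothetical type and function names; the input values are placeholders drawn loosely from figures cited earlier in the essay:

```python
# Hypothetical tracker for the falsification thresholds above.
# All input values are placeholders; sources would be the surveys named in the text.
from dataclasses import dataclass

@dataclass
class VeilIndicators:
    productivity_impact_share: float   # share of firms reporting measurable impact
    entry_hiring_vs_2023: float        # entry-level hiring as a fraction of 2023 level
    cs_enrollment_growth: float        # year-over-year CS enrollment growth
    g7_tracking_law_passed: bool       # any G7 AI-displacement law by ~Sept 2027

def falsification_flags(x: VeilIndicators) -> dict:
    """Return which falsification conditions have triggered."""
    return {
        "political_response_activated": x.g7_tracking_law_passed,      # condition 2
        "adoption_gap_closing": x.productivity_impact_share > 0.30,    # condition 3
        "pipeline_stabilizing": (x.entry_hiring_vs_2023 >= 0.90        # condition 4
                                 and x.cs_enrollment_growth > 0.0),
    }

# Approximate current state: ~20% impact share, entry hiring down 73%,
# CS enrollment down ~6%, no G7 tracking law.
now = VeilIndicators(0.20, 0.27, -0.06, False)
print(falsification_flags(now))
```

With these placeholder inputs, all three flags evaluate to False — the state the essay describes as the Veil holding. Any flag flipping to True would count as evidence against the thesis.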


Bottom Line

The gap between what AI can do and what the economy has productively integrated is the most cited reassurance in the current discourse. It is also the most dangerous structural feature of the current transition.

The Dissipation Veil is not a new mechanism. It is the name for the relationship between existing mechanisms — the Ratchet, Structural Exclusion, the Wage Signal Collapse, the Adversarial Equilibrium Trap, and the Aggregate Demand Crisis — that explains why the transition proceeds without triggering resistance. The reason nobody sees the Ratchet turning is that the dissipation gap makes it look like normal business friction.

The two clocks are both real. The slow one is visible. The fast one is not. The slow clock produces reassurance. The fast clock produces structural damage — pipeline degradation, wage signal collapse, competence atrophy — that accumulates on timescales the adoption data does not measure.

Confidence calibration: 60-70% that the Dissipation Veil is the primary mechanism preventing political activation on AI labor displacement, rather than ideological opposition or institutional incapacity alone. The China Shock precedent — 17 years from displacement to policy action — raises confidence. The SAG-AFTRA precedent, where organized labor successfully mobilized around AI-specific threats, lowers it. The binding uncertainty is whether the structural presentation of AI displacement will eventually trigger a reclassification event, as the China Shock ultimately did, or whether the diffuse and individually explicable nature of the displacement is categorically different from prior structural disruptions.

The window identified in the Theory — the Lock-In phase, roughly 2025 to 2035 — is the period during which the system could still be redirected. The Dissipation Veil’s contribution to the theoretical framework is identifying why that window is narrower than it appears: the visible clock suggests time, while the invisible clock is consuming the institutional capacity and human capital pipeline health that would be needed to execute a redirect. By the time the visible clock catches up — by the time AI adoption translates into measurable, attributable, politically salient displacement — the invisible damage may have accumulated past the point where intervention can restore the pipeline.

The Marienthal finding — from Marie Jahoda’s 1933 study of an Austrian village where a factory closure produced mass unemployment — demonstrated that structural unemployment reduces political engagement rather than increasing it. The residents of Marienthal did not organize. They withdrew. If the Dissipation Veil prevents political activation during the period when activation could redirect the system, and the subsequent structural irrelevance produces withdrawal rather than resistance, the window closes from both ends simultaneously: the beginning is missed because the damage is invisible, and the end is missed because the damaged population can no longer mobilize.

The Veil is either operating or it is not. The falsification conditions specify how to tell. Track them.


Where This Connects

The Theory of Recursive Displacement (MECH-001) — the Dissipation Veil explains why the transition proceeds without triggering the political resistance the Theory predicts would be necessary for Institutional Redirect.

The Ratchet (MECH-014) — the Veil is the perceptual mechanism that allows the Ratchet to tighten without triggering political response. The Ratchet’s Workslop Ceiling operates at the enterprise level; the Dissipation Veil operates the same mechanism at the macroeconomic measurement level.

Structural Exclusion (MECH-026) — pipeline exclusion is the primary damage channel operating behind the Veil. Entry-level hiring collapses while aggregate unemployment stays low, making the displacement invisible in headline statistics.

The Wage Signal Collapse (MECH-025) — operates on the invisible clock. Compressed wage premiums deter career investment before productive AI integration materializes, producing pipeline damage that the visible adoption clock cannot detect.

The Adversarial Equilibrium Trap (MECH-009) — identifies the category of economic activity where the services deflation thesis structurally fails, because efficiency gains are captured by competitive escalation rather than passed through to consumers.

The Aggregate Demand Crisis (MECH-010) — the Dissipation Veil masks the distinction between markets where AI reduces costs and markets where it escalates them, preventing accurate assessment of the demand crisis trajectory.

The Competence Insolvency (MECH-012) — runs on the invisible clock. It does not require organizations to have integrated AI productively; it requires only that AI capability is visible in the market and that prospective workers observe flattened career curves.

The Sequencing Problem (MECH-022) — the Veil is maximally effective under Configuration A (Ratchet-Dominant), where displacement occurs through budget line items rather than bankruptcies.


Sources

  1. https://www.nber.org/system/files/working_papers/w34836/w34836.pdf — “Firm Data on AI,” NBER Working Paper 34836, Yotzov et al., February 2026. [verified]
  2. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai — “The State of AI: Global Survey 2025,” McKinsey. [verified]
  3. https://www.cnbc.com/2025/09/02/salesforce-ceo-confirms-4000-layoffs-because-i-need-less-heads-with-ai.html — “Salesforce CEO Confirms 4,000 Layoffs,” CNBC, September 2025. [verified]
  4. https://ardura.consulting/blog/junior-developer-crisis-2026-why-companies-stopped-hiring-entry-level/ — “Junior Developer Crisis 2026,” ARDURA Consulting. [verified]
  5. https://chinashock.info/papers/ — “The China Trade Shock Papers.” [verified]
  6. https://www.nber.org/papers/w24049 — Acemoglu, Manera, Restrepo, “Does the US Tax Code Favor Automation?” NBER. [verified]
  7. https://www.citriniresearch.com/p/2028gic — “The 2028 Global Intelligence Crisis,” Citrini Research. [verified]
  8. https://www.bloomberg.com/news/articles/2026-02-24/citrini-founder-shocked-his-ai-prediction-spurred-stocks-selloff — Bloomberg coverage, February 2026. [verified]
  9. https://en.wikipedia.org/wiki/The_2028_Global_Intelligence_Crisis — “The 2028 Global Intelligence Crisis,” Wikipedia. [verified]
  10. https://finance.yahoo.com/news/viral-2028-global-intelligence-crisis-153100275.html — “Viral Report Models Potential AI-Driven S&P 500 Crash,” Yahoo Finance. [verified]
  11. https://seekingalpha.com/article/4874066-citrini-researchs-2028-global-intelligence-crisis-how-worried-should-we-be — “Citrini Research: How Worried Should We Be?” Seeking Alpha. [verified]
  12. https://fortune.com/2026/02/26/citadel-demolishes-viral-doomsday-ai-essay-citrini-macro-fundamentals-engels-pause/ — “Citadel Demolishes Viral AI Doomsday Essay,” Fortune, February 2026. [verified]
  13. https://www.tradingkey.com/analysis/stocks/us-stocks/261627695-2026-ai-citrini-report-saas-valuation-analysis-tradingkey — “Deconstructing Citrini’s Report,” TradingKey. [verified]
  14. https://www.pewresearch.org/social-trends/2025/02/25/u-s-workers-are-more-worried-than-hopeful-about-future-ai-use-in-the-workplace/ — Pew Research, “Workers More Worried Than Hopeful,” February 2025. [verified]
  15. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai — McKinsey State of AI 2025. [verified]
  16. https://www.mckinsey.com/~/media/mckinsey/business%20functions/quantumblack/our%20insights/the%20state%20of%20ai/2025/the-state-of-ai-how-organizations-are-rewiring-to-capture-value_final.pdf — McKinsey Superagency Report, 2025. [verified]
  17. https://www.nber.org/system/files/working_papers/w34836/w34836.pdf — NBER Working Paper 34836. [verified]
  18. https://www.intuition.com/ai-stats-every-business-must-know-in-2026/ — “AI Stats Every Business Must Know in 2026,” Intuition. [verified]
  19. https://www.buildmvpfast.com/blog/ai-productivity-paradox-ceo-survey-2026 — “AI Productivity Paradox CEO Survey 2026,” BuildMVPFast. [verified]
  20. https://www.360strategy.co.uk/post/ai-adoption-2026-kpmg-global-tech-report-intelligence-age — “KPMG Global Tech Report 2026,” 360 Strategy. [verified]
  21. https://camo.hku.hk/ai-adoption-survey/ — “Deloitte-HKU AI Adoption Index 2026.” [verified]
  22. https://www.iconnectitbs.com/2026-state-of-ai-the-gap-between-adoption-and-enterprise-impact/ — “2026 State of AI: Gap Between Adoption and Impact.” [verified]
  23. https://www.reworked.co/employee-experience/klarna-claimed-ai-was-doing-the-work-of-700-people-now-its-rehiring/ — “Klarna: Now It’s Rehiring,” Reworked. [verified]
  24. https://mlq.ai/news/klarna-ceo-admits-aggressive-ai-job-cuts-went-too-far-starts-hiring-again-after-us-ipo/ — “Klarna CEO Admits AI Job Cuts Went Too Far,” MLQ.AI. [verified]
  25. https://www.cnbc.com/2025/09/02/salesforce-ceo-confirms-4000-layoffs-because-i-need-less-heads-with-ai.html — CNBC Salesforce report. [verified]
  26. https://business-news-today.com/ibm-layoffs-2025-why-8000-jobs-are-reportedly-being-cut-as-ai-replaces-legacy-roles/ — “IBM Layoffs 2025,” Business News Today. [verified]
  27. https://www.thestreet.com/investing/stocks/ibm-employees — “How Many Employees Work at IBM in 2026?” TheStreet. [verified]
  28. https://tech-insider.org/tech-layoffs-2026-ai-workforce-impact/ — “Tech Layoffs 2026: How AI Is Driving Workforce Shift,” Tech Insider. [verified]
  29. https://nationalcioreview.com/articles-insights/extra-bytes/ai-forces-over-50000-layoffs-in-2025-at-leading-technology-firms/ — “AI Forces Over 50,000 Layoffs in 2025,” National CIO Review. [verified]
  30. https://www.salesforceben.com/how-bad-were-tech-layoffs-in-2025-and-what-can-we-expect-in-2026/ — “How Bad Were Tech Layoffs in 2025?” Salesforce Ben. [verified]
  31. https://www.androidheadlines.com/2026/01/tech-layoffs-roundup-every-major-company-cutting-jobs-in-2025-2026.html — “Tech Layoffs Roundup 2025-2026.” [verified]
  32. https://www.nber.org/papers/w24049 — Acemoglu et al., tax treatment of automation vs. labor. [verified]
  33. https://www.nber.org/papers/w24049 — Acemoglu, Manera, Restrepo, effective tax rates. [verified]
  34. https://www.nber.org/papers/w24049 — Elliott Davis tax advisory context per Acemoglu framework. [verified]
  35. https://en.wikipedia.org/wiki/Emergency_Economic_Stabilization_Act_of_2008 — TARP timeline. [verified]
  36. https://en.wikipedia.org/wiki/CARES_Act — CARES Act timeline. [verified]
  37. https://www.epi.org/productivity-pay-gap/ — Economic Policy Institute, productivity-pay gap since 1979. [verified]
  38. https://www.bls.gov/spotlight/2017/contingent-and-alternative-employment/home.htm — BLS gig economy data. [verified]
  39. https://carnegieeurope.eu/posts/2026/02/ai-disruption-labor-market-incremental — Carnegie Endowment commentary. [verified]
  40. https://www.pewresearch.org/social-trends/2025/02/25/workers-views-of-ai-use-in-the-workplace/ — Pew data on AI workplace tracking gaps. [verified]
  41. https://www.congress.gov/bill/119th-congress/senate-bill/3108 — Warner-Hawley AI-Related Job Impacts Clarity Act. [verified]
  42. https://www.ddorn.net/papers/Autor-Dorn-Hanson-ChinaShock.pdf — Autor, Dorn, Hanson, “The China Shock.” [verified]
  43. https://cra.org/crn/2025/10/cerp-pulse-survey-a-snapshot-of-2025-undergraduate-computing-enrollment-patterns/ — CRA CERP Pulse Survey, October 2025. [verified]
  44. https://edsource.org/updates/enrollment-for-undergraduates-increases-but-computer-science-drops — “Undergraduate Enrollment Increases, But Drops for CS,” EdSource. [verified]
  45. https://www.govtech.com/education/higher-ed/cs-majors-decline-at-uc-for-first-time-since-early-2000s — “CS Majors Decline at UC,” GovTech. [verified]
  46. https://ardura.consulting/blog/junior-developer-crisis-2026-why-companies-stopped-hiring-entry-level/ — ARDURA Junior Developer Crisis 2026. [verified]
  47. https://ravio.com/blog/tech-hiring-trends — “Tech Hiring Trends 2026,” Ravio. [verified]
  48. https://www.finalroundai.com/blog/software-engineering-job-market-2026 — “Software Engineering Job Market 2026.” [verified]
  49. https://bitcoinworld.co.in/ai-skills-gap-anthropic-report-power-users/ — Anthropic RCT on AI-assisted learning. [verified]
  50. https://stackoverflow.blog/2025/12/26/ai-vs-gen-z/ — METR RCT data via Stack Overflow analysis. [verified]
  51. https://www.bis.org/publ/qtrpdf/r_qt1503e.htm — BIS, “The Costs of Deflation,” 140-year study. [verified]
  52. https://www.acc.com/resource-library/acc-everlaw-ai-survey — ACC-Everlaw AI Survey, 2025. [verified]
  53. https://www.law.com/americanlawyer/2025/12/01/law-firm-technology-spending-growth/ — Law firm technology spending data. [verified]
  54. https://www.rand.org/pubs/research_briefs/RB9561.html — RAND e-discovery cost analysis. [verified]
  55. https://academic.oup.com/qje/article/130/4/1547/1916146 — Budish, Cramton, Shim, “The High-Frequency Trading Arms Race,” QJE, 2015. [verified]
  56. https://www.crowdstrike.com/global-threat-report/ — CrowdStrike 2026 Global Threat Report. [verified]
  57. https://www.bis.org/publ/qtrpdf/r_qt1503e.htm — BIS deflation study. [verified]
  58. https://www.lowtouch.ai/ai-adoption-2025-vs-2026/ — “AI Reality Check: 2025 Adoption vs 2026 Transformation,” LowTouch. [verified]
  59. https://www.androidheadlines.com/2026/01/tech-layoffs-roundup-every-major-company-cutting-jobs-in-2025-2026.html — Challenger layoff data. [verified]
  60. https://www.pewresearch.org/social-trends/2025/02/25/u-s-workers-are-more-worried-than-hopeful-about-future-ai-use-in-the-workplace/ — Pew worker anxiety data. [verified]
  61. https://www.census.gov/econ/bfs/index.html — Census Bureau Business Formation Statistics. [verified]