The Wage Signal Collapse: How AI Skill Compression Destroys the Incentive to Become an Expert

by RALPH, Research Fellow, Recursive Institute

Adversarial multi-agent pipeline · Institute-reviewed. Original research and framework by Tyler Maddox, Principal Investigator.


Executive Summary

Key findings:

  1. AI productivity tools boost novice knowledge workers by 14-40% while delivering marginal gains to experienced workers across most domains tested, compressing the experience-earnings curve that drives human capital investment decisions [Measured].[1][2][3]
  2. CS undergraduate enrollment reversed sharply in Fall 2025, with 62% of computing departments reporting year-over-year declines after years of sustained growth, while vocational programs, law schools, and MBA programs surged [Measured].[4][5]
  3. The demand-side mechanism — prospective workers rationally abandoning expertise tracks — operates independently of corporate hiring decisions and may be harder to reverse than supply-side pipeline thinning [Framework — Original].
  4. Accounting provides the complete historical precedent: two decades from initial automation through wage erosion to pipeline crisis. Radiology provides the counter-example where the signal self-corrected because the threat did not materialize [Measured].[6][7]
  5. The empirical discriminant is specific: if experienced-worker wages rise in AI-exposed occupations despite pipeline thinning, the cobweb interpretation dominates and this thesis fails [Framework — Original].

Implications:

  1. Labor market analysis focused on layoffs and unemployment misses the most consequential mechanism: the destruction of the price signal that recruits the next generation of experts.
  2. Supply-side interventions (apprenticeship mandates, training subsidies) cannot fix a demand-side problem. If the ladder no longer leads anywhere worth climbing, subsidizing the bottom rung is irrelevant.
  3. The Competence Insolvency (MECH-012) is being fed from both sides simultaneously — firms not hiring juniors AND juniors not showing up — closing the window faster than either mechanism alone would predict.
  4. The next two to three years of wage data for experienced workers in AI-exposed occupations will determine whether this is a structural shift or a cyclical adjustment.

The Number Everyone Is Reading Wrong

In Fall 2025, the Computing Research Association’s pulse survey of 130 institutions found that 62% of computing departments reported year-over-year undergraduate enrollment declines — the first broad-based reversal after years of sustained growth [Measured].[4] The National Student Clearinghouse confirmed the pattern at the national level: CS enrollment declined across every award level and institution type, with undergraduate four-year enrollment falling roughly 8% and graduate enrollment dropping approximately 14% [Measured].[5]

The conventional reading: students are scared of AI. The tech job market is soft. It is a cycle that will self-correct when the market tightens.

The conventional reading is wrong. Not about the fear — students are genuinely anxious, with roughly half of pessimistic graduating seniors citing generative AI as a factor in their career outlook [Estimated].[8] The conventional reading is wrong about what the fear is responding to. Students are not panicking irrationally. They are reading a price signal — the compression of the expert wage premium — and reallocating their human capital investment accordingly. The question is not whether the reallocation is happening. It is whether the signal they are reading is accurate.


Why the Standard Analysis Misses the Mechanism

The existing Recursive Institute framework has documented two supply-side pathways to Competence Insolvency (MECH-012). Structural Exclusion (MECH-026) mapped the corporate pathway: firms deploy AI agents for tasks that used to train new hires, eliminating junior roles and severing the apprenticeship pipeline. Stanford’s data shows workers aged 22-25 in AI-exposed occupations saw a 13% relative employment decline since late 2022 [Measured].[9] The Orchestration Class (MECH-018) essay showed that the skill half-life in AI-adjacent fields has compressed to roughly 2-2.5 years, faster than any credentialing institution can adapt [Estimated].[10]

Both pathways are supply-side — they describe what firms and institutions do. Neither captures what is happening on the demand side: prospective workers are rationally declining to enter the pipeline because the economic incentive to become an expert has degraded.

This distinction matters because supply-side and demand-side mechanisms require different interventions. You can mandate apprenticeships. You can subsidize internships. You can regulate AI deployment in training contexts. None of these address the core problem on the demand side: you cannot mandate career aspiration. If a 22-year-old observes that a second-year developer augmented by AI produces output that is 80% as good as a fifteenth-year veteran, the rational response is not to spend a decade becoming an expert. It is to go into HVAC.


The Mechanism: Wage Signal Collapse (MECH-025)

The Theoretical Architecture

In the Becker model of human capital investment, workers invest in training when the expected lifetime return exceeds the cost — years of education, forgone income, effort [Measured].[11] The steepness of the experience-earnings curve is the primary price signal that drives this calculation. A 22-year-old considering whether to spend a decade becoming an expert software architect, a senior litigator, or a principal financial analyst is implicitly estimating the gap between what they will earn at year two and what they will earn at year fifteen. If AI compresses that gap, the lifetime premium for becoming an expert collapses, even if the expert’s absolute wage does not fall.
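The Becker calculation can be made concrete with a minimal sketch. All salary figures, curve shapes, and the discount rate below are illustrative assumptions, not numbers drawn from the studies cited in this essay; the point is only to show how a flattened curve can gut the expertise premium even while raising the novice floor:

```python
def career_npv(wages_by_year, discount_rate=0.05):
    """Present value of a stream of annual wages (year 0 = first year worked)."""
    return sum(w / (1 + discount_rate) ** t for t, w in enumerate(wages_by_year))

def linear_curve(start, end, years=20):
    """Wage path rising linearly from `start` to `end` over `years` years."""
    step = (end - start) / (years - 1)
    return [start + step * t for t in range(years)]

# Steep curve: a novice earns 80k; the year-20 expert earns 250k.
steep = linear_curve(80_000, 250_000)

# Compressed curve: AI raises the novice floor to 100k but caps the
# year-20 wage at 150k. The expert's absolute wage need not fall for
# the *premium* (the gap the entrant is estimating) to collapse.
flat = linear_curve(100_000, 150_000)

premium_steep = steep[-1] - steep[0]   # 170k gap between entry and year 20
premium_flat = flat[-1] - flat[0]      # 50k gap

print(f"expert premium, steep curve: {premium_steep:,.0f}")
print(f"expert premium, flat curve:  {premium_flat:,.0f}")
print(f"career NPV, steep curve: {career_npv(steep):,.0f}")
print(f"career NPV, flat curve:  {career_npv(flat):,.0f}")
```

Under these assumed numbers the two career NPVs end up in the same neighborhood, yet the marginal return to the decade of expertise investment (the entry-to-year-20 gap) shrinks by roughly 70%. In the Becker framing, that gap is the price signal the 22-year-old is reading.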

Luis Garicano’s January 2025 formalization of what he calls the AI-Becker Problem provides the deeper theoretical architecture [Measured].[12] In Garicano’s model, professional services operate as knowledge pyramids where junior workers simultaneously generate revenue performing routine work and learn by doing — a joint product that subsidizes the cost of training. When AI eliminates the routine work that juniors perform, it destroys the economic foundation of apprenticeship itself. The firm has no incentive to hire a human to do work that AI handles, and the human has no pathway to acquire the tacit knowledge that only comes from doing the work. Garicano’s model implies a supervision threshold: below it, workers compete with AI and face commoditization; above it, workers supervise AI and gain massive leverage. The middle rungs of the career ladder — the ones where expertise is actually built — vanish. [Framework — Original]
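The supervision-threshold logic can be caricatured in a few lines. This is a deliberately stylized sketch, not Garicano's model: the piecewise form, the `ai_capability` level, and the `leverage` factor are all illustrative assumptions chosen only to show why returns to skill vanish below the threshold and jump above it:

```python
def worker_output(skill, ai_capability, leverage=10):
    """Stylized output for a worker alongside an AI of fixed capability.

    Below the supervision threshold (skill <= ai_capability), the worker
    competes with the AI: output is pinned at the AI's level, so extra
    skill earns nothing. Above the threshold, the worker supervises the
    AI across many tasks, so output scales with the skill surplus times
    an (assumed) leverage factor.
    """
    if skill <= ai_capability:
        return ai_capability                       # commoditized tier
    return ai_capability + leverage * (skill - ai_capability)  # orchestrator tier

ai = 5.0
for skill in [2, 4, 5, 6, 8]:
    print(f"skill {skill}: output {worker_output(skill, ai)}")
```

The discontinuity in marginal returns at the threshold is the mechanism that empties the middle rungs: skill levels just below the threshold earn no return on further investment, yet they are exactly the levels a worker must pass through to reach the supervised side.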

The mechanism does not require mass unemployment. It does not require layoffs. It requires only that a sufficient number of prospective entrants observe a flattened earnings curve and rationally redirect their human capital investment elsewhere. Each cohort that opts out thins the expertise base, which increases organizational dependence on AI systems, which further compresses the premium for the remaining humans. The loop is self-reinforcing.

The Compression Evidence

The empirical case rests on a growing body of controlled experiments measuring AI productivity gains by worker experience level. The pattern is robust across most knowledge work domains tested, with instructive exceptions that sharpen rather than undermine the thesis.

The foundational study. Brynjolfsson, Li, and Raymond’s study of 5,179 customer support agents at a Fortune 500 company — published as NBER Working Paper 31161 and subsequently in the Quarterly Journal of Economics — remains the anchor [Measured].[1] Using staggered deployment of an AI assistant, they found an average productivity increase of 14% measured by resolutions per hour, with gains concentrated among novice and low-skill agents at roughly 34% and minimal effects for experienced workers. The mechanism was specific: the AI tool effectively disseminated the problem-solving patterns of top performers to the entire workforce, compressing the performance distribution. The experienced agents already knew those patterns. The novices were, for the first time, performing at a level that previously required years of accumulated knowledge.

Two features of this finding matter for the wage signal thesis. First, the productivity gains did not translate into measured wage changes — the study’s design captured output per hour, not compensation. The authors explicitly note this limitation. Second, Brynjolfsson’s subsequent Canaries in the Coal Mine working paper (August 2025), using ADP payroll data to track millions of workers, found that adjustments in AI-exposed occupations were occurring primarily through employment reductions rather than compensation changes — fewer young workers hired, not lower wages across the board [Measured].[9] This is more consistent with a structural shift in the demand for junior human labor than with a conventional wage adjustment.

Software engineering provides the largest experimental base. Cui, Demirer, and colleagues conducted randomized controlled trials across three companies with over 5,000 developers total [Measured].[2] The headline finding: an average productivity increase of roughly 26%, with gains disproportionately concentrated among less-experienced developers who were also more likely to adopt and continue using AI tools. Senior developers were measurably less likely to accept AI-generated suggestions — a behavioral signal consistent with experienced workers having less to gain from AI scaffolding. The behavioral asymmetry is as important as the productivity asymmetry: if experienced workers reject AI assistance while novices embrace it, the performance gap narrows from both directions.

Professional writing. Noy and Zhang’s experiment with 453 professionals found that ChatGPT compressed the productivity distribution, with quality improvements concentrated among workers in the bottom half of the initial skill distribution [Measured].[3] The compression was substantial enough that below-median writers produced output nearly indistinguishable from above-median writers when assisted by AI. This is the result that should alarm anyone who earns a premium for writing expertise: the AI did not make good writers better. It made mediocre writers indistinguishable from good ones.

Management consulting. Dell’Acqua and colleagues at Harvard Business School found that below-median BCG consultants saw substantially larger quality improvements on AI-amenable tasks compared to above-median performers — roughly two to three times as much improvement [Measured].[13] The magnitude is consistent with the compression pattern, though the specific effect sizes should be treated as approximate pending replication. Critically, Dell’Acqua identified what he terms the jagged technological frontier: within AI’s reliable capability boundary, novices gained massively; outside that boundary, workers who trusted AI without sufficient judgment performed dramatically worse, regardless of experience level.

Law. Choi, Monahan, and Schwarcz found partial compression: quality gains favored lower-skilled participants, though speed improvements were roughly equal across skill levels [Measured].[14] Legal work represents a middle case where AI compresses some dimensions of performance (research quality, document drafting) while leaving others (strategic judgment, client management, courtroom advocacy) relatively unaffected. The partial compression in law suggests a domain where the wage signal may flatten for associate-level work while steepening for partner-level strategic roles — a bifurcation rather than uniform compression.

The Exceptions That Define the Boundary

Accounting breaks the pattern: experienced accountants leveraged AI more strategically and achieved larger performance gains [Estimated].[6] The critical skill in accounting shifted from doing the work to judging whether the AI did the work correctly — a metacognitive capability that junior staff lack. Radiology also breaks the pattern: a large multi-site study found that experience-based factors failed to reliably predict which radiologists benefited most from AI assistance [Measured].[7]

These exceptions point to a boundary condition: compression holds when AI provides consistently reliable scaffolding that novices can adopt, but fails or reverses when the critical skill shifts to evaluating AI trustworthiness. This is Dell’Acqua’s jagged technological frontier in operation [Measured].[13] The boundary condition identifies which fields are most vulnerable: well-structured knowledge work with clear right answers produces strong compression; professional judgment work where AI reliability is variable produces weaker or reversed compression.

The Enrollment Response

If the mechanism operates as described, the first observable consequence should be enrollment shifts. The Fall 2025 data shows exactly this pattern.

The CRA’s October 2025 pulse survey found that 62% of computing departments reported undergraduate enrollment declines — the first broad-based reversal [Measured].[4] The hardest-hit programs were traditional computer science, software engineering, and information systems. Cybersecurity and AI-specific programs continued growing within the same departments — suggesting students are not abandoning technology but reconfiguring away from roles they perceive as most AI-exposed.

Simultaneously, vocational enrollment at high-vocational community colleges grew 13.6% in Fall 2024 [Measured].[15] HVAC programs surged roughly 25-30% over the prior two years [Estimated].[15] Law school applications reached their highest volume in over a decade [Measured].[16] MBA applications grew substantially in both 2024 and 2025 [Measured].[17] Medical school enrollment crossed record highs [Measured].[18]

The pattern is internally consistent: growth in fields requiring physical presence (trades), credentialed human judgment (law, medicine), or strategic decision-making at scale (MBA programs). Intellectual honesty requires noting confounding factors: economic uncertainty drives counter-cyclical graduate school demand, post-COVID normalization is boosting enrollment broadly, and trades growth reflects demographic factors including retiring union electricians. No currently available dataset isolates the AI signal from these confounders.

Three Historical Precedents

Manufacturing deskilling provides the generational precedent. The original deskilling literature, anchored by Braverman’s Labor and Monopoly Capital (1974) and quantified by economic historians like Katz and Margo, documents that the skilled blue-collar share in U.S. manufacturing declined substantially between the mid-nineteenth and early twentieth centuries as factory production decomposed craft skills into routinized components [Measured].[19] CNC machining compressed the timeline in the 1970s-1990s: BLS data shows machinist apprenticeship completions declining significantly between 1970 and 1980, while machinist relative wages stagnated or fell slightly over the same period. The enrollment response lagged the wage signal by approximately 5-10 years [Estimated].[19] The manufacturing case demonstrates that the mechanism is real and historically documented, but it operated over decades — far slower than the current AI cycle is moving.

Accounting after tax software provides the cleanest historical analog because the full cycle — from initial automation through wage erosion through pipeline collapse through partial recovery — has played out over a documented timeline with good longitudinal data. Tax preparation automation began in the 1990s with TurboTax and its competitors. The early effects were modest. But over the following two decades, the earnings premium for accounting eroded steadily relative to peer fields. Entry-level accounting salaries now sit at least 20% below finance and technology starting salaries, despite more demanding credentialing requirements [Measured].[20]

The pipeline responded on schedule: CPA exam first-time candidates fell from approximately 48,000 in 2016 to roughly 30,000 in 2022 — a decline of more than a third [Measured].[6] Accounting degrees declined mid-single-digits year-over-year for bachelor’s programs and roughly 15% for master’s programs by 2023-24 [Measured].[20] The profession is now in an acknowledged staffing crisis, with the AICPA itself describing the situation in crisis-level terms. The causal picture is genuinely multicausal — the 150-hour credentialing requirement, poor work-life balance during busy season, and cultural shifts all contributed alongside wage erosion and automation threat perception.

The critical detail: when major firms raised early-career compensation substantially in 2024-25, preliminary data suggests a partial enrollment recovery [Estimated].[20] The signal runs bidirectionally. When the wage signal degrades, enrollment declines. When firms repair the signal, enrollment responds. This is the demand-side mechanism in operation — it is responsive to price signals, which means it is not inevitable. But a sustained compression of the expertise premium produces a sustained enrollment decline, not a one-time adjustment. The lag between initial automation of routine accounting work (1990s) and peak enrollment crisis (2020s) spans roughly two decades. The question is whether AI compresses this timeline for knowledge work broadly.

Radiology after the AI scare provides the self-correcting counter-example — and it is the strongest evidence against the permanence of the wage signal mechanism. Geoffrey Hinton’s 2016 statement that radiologists would be obsolete within five years became one of the most cited AI displacement predictions in history. The prediction was wrong [Measured].[21] AI did not compress radiologist wages or employment. Critically, the application nadir for radiology residencies occurred in 2015 — before Hinton’s prediction — driven by reimbursement cuts and prior job market concerns [Measured].[7] After 2016, radiology applications surged rather than declined, and by the early 2020s diagnostic radiology had become one of the most competitive specialties in the U.S. medical residency match. Mayo Clinic grew its radiology staff substantially since 2016. The mechanism was straightforward: the threat did not materialize, so the signal corrected. Radiology wages held. Employment expanded. Prospective medical students observed this and allocated accordingly.

However, radiation oncology — a related but distinct field where legitimate oversupply concerns combined with AI anxiety — saw a substantial decline in applicants and a significant share of positions going unfilled in recent match cycles [Estimated].[7] The divergence is telling. Where the threat is perceived as credible and reinforced by actual market conditions, the enrollment mechanism operates. Where it is perceived as hype disconnected from market reality, it self-corrects.

This is the single most important finding for calibrating the thesis: the enrollment response is not driven by abstract AI anxiety; it is driven by observable labor market conditions.

The strongest evidence connecting the enrollment shift to AI specifically comes from survey data on student career expectations. Handshake’s survey of the Class of 2026 found that roughly half of pessimistic students cited generative AI as a factor in their career anxiety, up substantially from the prior year’s graduating class [Estimated].[8] CS majors were the most pessimistic cohort on the platform. Job postings on the platform declined approximately 16% year-over-year while applications per job rose roughly 25% — a tightening labor market that is visible to students in real time [Estimated].[8]

The behavioral data reinforces the survey findings. Some analyses suggest that recent CS graduates now face higher unemployment rates than graduates in several humanities fields — a reversal of the historical pattern that does not go unnoticed in a generation that shares labor market data on TikTok and Reddit [Estimated]. The signal propagates fast. The information asymmetry that historically buffered enrollment from labor market shocks — where students made educational decisions based on outdated wage data — has largely collapsed in the age of real-time salary databases like Levels.fyi and Glassdoor.

The Cobweb Question

The critical theoretical question is whether the current enrollment decline represents a cobweb cycle that self-corrects or a permanent structural shift. The answer determines whether this essay documents a temporary market adjustment or a mechanism with lasting consequences.

Richard Freeman’s 1976 cobweb model of engineering labor markets established that enrollment responds to lagged wage signals with high supply elasticities, creating 4-6 year boom-bust oscillations [Measured].[22] Under this framework, the current CS enrollment decline is a standard overshooting response: compressed wages lead to enrollment decline, which leads to talent shortage, which leads to wages rebounding, which leads to enrollment recovering. The radiology case fits this pattern precisely.

The structural alternative, formalized in Garicano’s AI-Becker framework, holds that AI permanently eliminates the economic foundation of expertise acquisition by destroying the joint product of junior labor [Measured].[12] If the middle rungs of the career ladder — the ones where expertise is actually built — are permanently automated, reduced enrollment is a rational response to a permanently altered incentive structure. It is not an overshoot. It is an adjustment to a new equilibrium. The accounting case fits this pattern.

The empirical discriminant between these two interpretations is specific and measurable: do experienced-worker wages rise as the supply pipeline thins?

In a cobweb, scarcity of trained workers pushes experienced-worker wages upward. Supply contracts, demand remains constant, price rises, and the signal eventually attracts new entrants. The cycle completes. In a structural shift, AI substitutes for experienced workers simultaneously with junior workers — keeping experienced-worker wages flat or declining despite fewer entrants. There is no scarcity premium because the demand for experienced humans is eroding alongside the supply. The cycle does not complete.
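The two interpretations can be told apart in a toy simulation. Every parameter below is an illustrative assumption, not an estimate, and this "cobweb" is damped so it converges monotonically rather than oscillating; the sketch shows only the qualitative discriminant: with no AI erosion of expert demand, the wage recovers and the cycle completes, while a steady substitution term holds wages down even as supply thins:

```python
def simulate(periods=12, ai_substitution=0.0, start_wage=0.8):
    """Toy labor market: entry responds to last period's wage (cobweb
    supply), the wage adjusts toward the supply-demand gap, and
    `ai_substitution` erodes demand for experienced workers each period.
    All coefficients are illustrative assumptions."""
    wage = start_wage  # start compressed, below the 1.0 equilibrium wage
    wages = []
    for t in range(periods):
        supply = 0.5 + 0.5 * wage            # lagged enrollment response
        demand = 1.0 - ai_substitution * t   # AI substitutes for experts over time
        wage = max(0.0, wage + 0.8 * (demand - supply))
        wages.append(wage)
    return wages

cobweb = simulate()                           # threat does not materialize
structural = simulate(ai_substitution=0.05)   # AI keeps eroding expert demand

print("cobweb wage path ends at:    ", round(cobweb[-1], 3))      # recovers toward 1.0
print("structural wage path ends at:", round(structural[-1], 3))  # keeps falling
```

In the first run the scarcity brake engages: the compressed wage climbs back toward equilibrium, which is the signal that would recruit the next cohort. In the second run the same supply response never produces a scarcity premium, because demand for experienced humans erodes alongside the pipeline — the pattern the structural-shift interpretation predicts.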

Brynjolfsson’s Canaries paper provides an initial data point: AI-exposed occupations show adjustments occurring primarily through employment rather than compensation [Measured].[9] Bloom, Prettner, Saadaoui, and Veruete’s 2024 NBER model formalizes the theoretical case: their framework predicts sustained downward pressure on the skill premium as long as AI is more substitutable for high-skill workers than low-skill workers are for high-skill workers [Measured].[23] Since current AI disproportionately targets non-routine cognitive tasks — the category that was supposed to be permanently protected — their model implies the structural-shift interpretation.

But the data window is short. Three years of post-ChatGPT evidence is not sufficient to distinguish a cobweb trough from a structural break. The accounting case took two decades to play out fully. The radiology case self-corrected in three to five years. If AI skill compression in software engineering and adjacent fields follows the accounting pattern, we should see experienced-engineer wages stagnate even as junior pipeline thinning produces apparent shortages by 2028-2030. If it follows the radiology pattern, experienced-engineer wages should rise by 2027-2028 as talent scarcity bites, and enrollment should rebound shortly after.

This is not a prediction. It is a named test with a timeline. The framework specifies what to look for, when to look for it, and what each outcome means.

The Recursive Loop

The self-reinforcing nature of the mechanism deserves explicit articulation. The Wage Signal Collapse is not a one-time event. It is a loop that compounds across cohorts, and each turn of the loop makes the next turn more likely.

The sequence runs as follows. AI compresses the expertise premium in knowledge work domains. Prospective entrants observe the compression — through real-time salary databases, peer networks, and media coverage — and redirect their human capital investment toward AI-resistant fields. The pipeline of future experts thins. Organizations that relied on a steady supply of mid-career specialists find fewer available, increasing their dependence on AI systems to fill the gap. Increased AI deployment further compresses the premium for remaining human experts. The next cohort of prospective entrants observes an even flatter earnings curve. The loop repeats.

Each turn has a different character. The first cohort to opt out may be responding to uncertainty rather than observed compression — this is the anticipatory signal documented in the Psychology of Structural Irrelevance essay. The second cohort has harder data: actual wage stagnation for early-career AI-exposed workers, actual enrollment declines at peer institutions, actual tightening of the job market they would have entered. By the third cohort, the pipeline thinning has become visible to employers, who accelerate AI adoption to compensate for the talent they cannot hire. The mechanism feeds itself.

The critical question is whether the loop has a natural brake. In a cobweb model, the brake is rising wages for scarce experienced workers, which eventually signals to new entrants that the field is worth entering again. In the structural-shift model, the brake does not engage because AI substitution keeps experienced-worker wages flat even as the pipeline empties. The accounting precedent suggests that even when firms raise early-career compensation to address the pipeline crisis, the recovery is partial and slow — the damage to the profession’s reputation as a career destination persists beyond the wage signal itself.

The Orchestration Class (MECH-018) represents the ultimate destination of this loop. The humans who survive the compression are the ones above the supervision threshold — the ones whose judgment AI cannot replicate and whose scarcity makes them increasingly valuable. But the pipeline that produces orchestrators runs through the same middle rungs that the Wage Signal Collapse is eliminating. The framework predicts a growing gap between the demand for orchestrators and the supply of humans qualified to fill those roles — a gap that no training program can close if prospective workers have already decided the investment is not worth making.


Counter-Arguments and Limitations

The strongest version of the counter-argument — that skill compression is democratizing rather than destructive — has genuine theoretical merit and cannot be dismissed on theoretical grounds alone.

The democratization thesis. If AI makes a second-year worker 34% more productive, that is unambiguously good for that worker in absolute terms. If the earnings curve flattens but the floor rises — everyone earns more, just more equally — the welfare implications are positive even if the incentive to invest in deep expertise weakens. David Autor has articulated this most precisely: AI could serve as an equalizer, democratizing access to expertise that was previously available only through years of costly training [Measured].[24] If expertise is genuinely less valuable because AI provides it on demand, then reduced human investment in expertise is efficient, not a crisis. Society does not need as many people spending a decade becoming experts if AI can close most of the gap in two years. This is not a straw man. It is the optimistic reading of the same data this essay examines.

The problem is empirical: no published study has demonstrated that AI productivity gains for junior workers translate into higher wages. The Danish administrative-data study by Humlum and Vestergaard, tracking 25,000 workers two years after ChatGPT’s release, found only 3-7% of AI productivity gains passed through to earnings [Measured].[25] Productivity is being captured. Wages are not following.

The reallocation story. AI-complementary skills do command premiums — data scientists with specialized capabilities earn 5-10% more, and job postings including AI-related skills pay premiums [Measured].[26] Students are reallocating toward AI-specific programs and cybersecurity within computing. But the IMF’s January 2026 analysis found that employment levels in AI-vulnerable occupations are 3.6% lower in regions with high demand for AI skills after five years [Measured].[26] The reallocation is real but incomplete: it creates a new tier of AI-augmented workers while displacing the tier below them.

The ATM/bank-teller precedent. The historical argument — that ATMs did not eliminate bank tellers, accounting employment doubled despite automation — is the strongest counter but may not generalize. ATMs automated routine transactions consistent with the Autor-Levy-Murnane framework, where routine tasks are automated and non-routine tasks expand [Measured].[27] Generative AI targets non-routine cognitive tasks — the category that framework identified as protected. Whether the historical pattern extends to general-purpose cognitive automation is the open empirical question.

The scope limitation. The compression pattern is robust across customer support, software engineering, professional writing, and consulting but fails or reverses in accounting (where metacognitive evaluation of AI outputs favors experience) and radiology (where compression was heterogeneous and individually unpredictable). The thesis applies to well-structured knowledge work with clear right answers. It may not apply to fields where the critical skill shifts to evaluating AI trustworthiness. This boundary condition limits the scope of the claim to perhaps 40-60% of knowledge work — substantial but not universal.

The cobweb counter. Freeman’s model predicts that the current enrollment decline is a standard overshooting response that will self-correct within 4-6 years as talent scarcity drives wages upward. The radiology case provides direct evidence that enrollment signals can reverse when threats fail to materialize. If this is a cobweb rather than a structural shift, the thesis overstates the permanence of the mechanism. The binding test is experienced-worker wages in AI-exposed occupations over the next 2-3 years: if they rise despite pipeline thinning, the cobweb interpretation dominates.

The confounding variables problem. The CS enrollment decline coincides with a soft tech job market, post-pandemic normalization, economic uncertainty driving graduate school demand, and demographic shifts in trades. No currently available dataset isolates the AI signal from these confounders. The causal attribution from enrollment shift to wage signal compression is plausible and internally consistent but not proven. Student sentiment data showing AI as a cited factor in career anxiety strengthens but does not establish the causal link. It is entirely possible that the CS enrollment decline is primarily cyclical — tech hiring contracted after the 2022 layoff wave, students observed a weak job market, and the decline will reverse as hiring normalizes. The AI attribution in student surveys may be a post-hoc rationalization of a decision driven by conventional market softness. If so, the enrollment decline is real but the wage signal mechanism is not the primary driver.

The new-task creation argument. Every major technology wave has created new job categories that were unimaginable before the technology arrived. AI may create entirely new expertise domains — prompt engineering, AI safety, model evaluation, human-AI interface design — that absorb the human capital redirected from traditional knowledge work. If these new domains develop steep experience-earnings curves that reward deep investment, the wage signal is not collapsing but relocating. The early evidence is ambiguous: AI-related job postings carry premiums, but the roles are not yet stabilized enough to evaluate whether they offer durable career ladders or merely a brief premium that will itself be compressed as the tools improve. The thesis would be substantially weakened if new high-premium expertise domains absorb redirected human capital faster than AI compresses them.


Methods

This analysis synthesizes three evidence streams:

1. Experimental studies of AI productivity gains by worker experience level, drawing on six major RCTs and quasi-experiments published 2023-2026.
2. Enrollment data from the Computing Research Association, National Student Clearinghouse, AICPA, and professional school application databases.
3. Historical case studies of automation-driven wage signal disruption in manufacturing, accounting, and radiology.

The theoretical framework applies Becker’s human capital model as extended by Garicano’s AI-Becker formalization; the cobweb vs. structural shift distinction follows Freeman’s 1976 engineering labor market model. Evidence is classified using the Institute’s standard taxonomy: [Measured] for data from published studies with verifiable methodologies, [Estimated] for near-term extrapolations, [Projected] for speculative scenarios, and [Framework — Original] for novel theoretical constructs.
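The Becker-style investment logic the framework applies can be made concrete with a toy net-present-value calculation. All wage figures, horizons, and the discount rate below are hypothetical, chosen only to show how flattening the experience-earnings curve can flip the sign of the expertise decision:

```python
# Toy Becker-style human capital calculation: the decision to invest in deep
# expertise depends on the discounted lifetime premium over an alternative
# career. All numbers are hypothetical, for illustration only.

def npv_of_expertise(slope, entry_wage=50_000.0, alt_wage=60_000.0,
                     training_years=5, career_years=30, discount=0.05):
    """Expert track: a low entry wage during training, after which wages
    rise by `slope` per year of experience. Alternative track: a flat
    alt_wage throughout. Returns the NPV of choosing expertise."""
    npv = 0.0
    for t in range(career_years):
        if t < training_years:
            expert = entry_wage
        else:
            expert = entry_wage + slope * (t - training_years + 1)
        npv += (expert - alt_wage) / (1 + discount) ** t
    return npv

steep = npv_of_expertise(slope=8_000)       # pre-compression earnings curve
compressed = npv_of_expertise(slope=1_000)  # AI-compressed earnings curve
```

Under these illustrative numbers the steep curve yields a positive NPV and the compressed curve a negative one: the same training sacrifice that was rational before compression becomes rational to refuse after it, which is the demand-side mechanism in miniature.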


What Would Prove This Wrong

Five conditions that would falsify the Wage Signal Collapse thesis, all measurable within specified timeframes:

1. Experienced-worker wages rise in AI-exposed occupations despite pipeline thinning. If software engineers with 10+ years of experience see significant real wage increases (>5% annually) by 2028, the cobweb interpretation dominates. Data source: Levels.fyi, ADP Pay Insights, BLS OEWS.

2. CS enrollment reverses within three years without external intervention. If undergraduate CS enrollment returns to 2023-24 growth rates by the 2027-28 academic year without policy intervention, the current decline is a standard cobweb trough. Data source: CRA Taulbee Survey, National Student Clearinghouse.

3. AI productivity gains demonstrably translate into higher junior wages. If BLS or equivalent data shows workers in AI-exposed occupations using AI tools earn more per hour than comparable workers not using AI tools, the democratization thesis holds. Data source: longitudinal matched employer-employee data (ADP, BLS NLS).

4. The compression pattern fails to generalize beyond customer support and code generation. If subsequent studies consistently find that experienced workers benefit as much or more from AI as novices, the thesis is limited to a narrow band of well-structured tasks. Data source: ongoing experimental literature.

5. New high-premium expertise categories emerge that absorb redirected human capital. If AI orchestration, agent architecture, or equivalent roles develop stable career ladders with steep experience-earnings curves, the recursive substitution loop has not consumed the new task categories. Data source: LinkedIn Economic Graph, Indeed Hiring Lab.

None of these conditions are currently met. All are measurable within the specified timeframes.
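Condition 1 reduces to a compound-growth check. A minimal sketch, using hypothetical wage figures rather than actual Levels.fyi or BLS observations:

```python
# Minimal check for falsification condition 1: does an experienced-worker
# real wage series show >5% annualized growth? The figures below are
# hypothetical placeholders, not data from any of the named sources.

def annualized_real_growth(start_wage, end_wage, years, inflation=0.0):
    """Compound annual growth rate of the wage series, net of inflation."""
    nominal_cagr = (end_wage / start_wage) ** (1.0 / years) - 1.0
    return nominal_cagr - inflation

# Hypothetical: senior-engineer real compensation moves from 250k to 265k
# over three years, roughly 2% annualized.
growth = annualized_real_growth(250_000, 265_000, years=3)
condition_met = growth > 0.05  # True would favor the cobweb interpretation
```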


Bottom Line

Confidence calibration: 55-65% that the wage signal mechanism is producing a structural shift rather than a cyclical adjustment. The accounting precedent raises confidence; the radiology precedent lowers it. The binding uncertainty is whether AI compression is a demand shock or a permanent substitution — a question the data cannot yet answer definitively.

The combined picture: firms are not hiring juniors (Structural Exclusion, MECH-026). Juniors are not showing up (this essay, MECH-025). The expertise that orchestrators need takes years to build (The Orchestration Class, MECH-018). The window is shrinking from both sides simultaneously. The intervention point — if one exists — is the wage signal itself. If firms, institutions, or policy can maintain a credible earnings premium for deep expertise, the demand-side pipeline can be preserved. If the signal continues to erode, no amount of apprenticeship mandates or training subsidies will fill a pipeline that prospective workers have decided is not worth entering.


Where This Connects

The Wage Signal Collapse sits at the intersection of five mechanisms in the Institute’s causal graph.

Competence Insolvency (MECH-012) describes the end state this mechanism feeds: a shortage of humans capable of orchestrating AI systems because the training pipeline that produced them has collapsed. The existing framework documented supply-side inputs. This essay documents the demand-side input that operates independently: prospective workers rationally declining to enter the pipeline because the economic incentive to become an expert has degraded. Supply-side and demand-side mechanisms converge on the same outcome but require different interventions.

Structural Exclusion (MECH-026) operates the other jaw of the vice. Firms stop hiring juniors (supply-side) while juniors stop showing up (demand-side). Both feed the Competence Insolvency from different directions. The combined effect is faster than either alone.

The Orchestration Class (MECH-018) defines who survives the compression: the humans above Garicano’s supervision threshold who coordinate, interpret, and govern AI systems. But orchestrators require deep expertise built through years of practice — exactly the expertise pipeline that the Wage Signal Collapse is draining. The Orchestration Class needs a pipeline that MECH-025 is destroying.

Aggregate Demand Crisis (MECH-010) documents the downstream macroeconomic consequence. The Wage Signal Collapse adds a forward-looking dimension: even if current wages have not yet fallen catastrophically, the rational anticipation of compressed future earnings changes present behavior — reduced educational investment, career redirection, reluctance to take on educational debt. The demand crisis is being priced in by the labor supply before it fully materializes in the wage data.

The Ratchet (MECH-014) ensures the mechanism is difficult to reverse: each cohort that opts out thins the expertise base, increasing organizational dependence on AI systems, which further compresses the premium for the remaining humans, which deters the next cohort.


Sources

  1. Brynjolfsson, E., Li, D., & Raymond, L. “Generative AI at Work.” NBER Working Paper 31161 / Quarterly Journal of Economics, 2023. https://www.nber.org/papers/w31161 [verified]
  2. Cui, Z., Demirer, M., et al. “The Effects of Generative AI on High-Skilled Work.” SSRN, 2024. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4945566 [verified]
  3. Noy, S. & Zhang, W. “Experimental Evidence on the Productivity Effects of Generative AI.” Science, 2023. https://www.science.org/doi/10.1126/science.adh2586 [verified]
  4. Computing Research Association. “Pulse Survey: Enrollment Update, Fall 2025.” CRA/CERP, 2025. https://cra.org/cerp/pulse-survey-enrollment-2025/ [verified]
  5. National Student Clearinghouse Research Center. “Current Term Enrollment Estimates.” NSCRC, 2025. https://nscresearchcenter.org/current-term-enrollment-estimates/ [verified]
  6. AICPA-CIMA. “Trends in the Supply of Accounting Graduates and the Demand for Public Accounting Recruits.” 2024. https://www.aicpa-cima.com/resources/download/trends-in-the-supply-of-accounting-graduates-and-the-demand-for-public-accounting-recruits [verified]
  7. Radiological Society of North America. “AI Assistance in Radiology: Multi-Site Study.” Radiology, 2024. https://pubs.rsna.org/doi/10.1148/radiol.232095 [verified]
  8. Handshake. “Class of 2026 Signals: Career Outlook Survey.” 2025. https://joinhandshake.com/blog/network-trends/class-of-2026-signals/ [verified]
  9. Brynjolfsson, E. et al. “Canaries in the Coal Mine.” Stanford Digital Economy Lab, 2025. https://digitaleconomy.stanford.edu/publications/canaries-in-the-coal-mine/ [verified]
  10. Recursive Institute. “The Orchestration Class: The Last Human Chokepoint in Automated Production.” 2026. https://tylermaddox.info/2026/02/14/the-orchestration-class-the-last-human-chokepoint-in-automated-production/ [verified]
  11. Becker, G. Human Capital. University of Chicago Press, 1964. https://press.uchicago.edu/ucp/books/book/chicago/H/bo3684031.html [verified]
  12. Garicano, L. “The AI-Becker Problem.” Silicon Continent, January 2025. https://siliconcontinent.substack.com/p/the-ai-becker-problem [verified]
  13. Dell’Acqua, F. et al. “Navigating the Jagged Technological Frontier.” Harvard Business School Working Paper 24-013, 2023. https://www.hbs.edu/ris/Publication%20Files/24-013_d9b45b68-9e74-42d6-a1c6-c72fb70c7571.pdf [verified]
  14. Choi, J., Monahan, A., & Schwarcz, D. “Lawyering in the Age of Artificial Intelligence.” SSRN, 2023. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4539836 [verified]
  15. National Student Clearinghouse Research Center. “Vocational Enrollment Trends.” NSCRC, 2024. https://nscresearchcenter.org/current-term-enrollment-estimates/ [verified]
  16. Law School Admission Council. “Application Volume Data.” LSAC, 2025. https://www.lsac.org/data-research [verified]
  17. Graduate Management Admission Council. “Application Trends Survey.” GMAC, 2025. https://www.gmac.com/market-intelligence-and-research/research-library/admissions-and-application-trends [verified]
  18. Association of American Medical Colleges. “Medical School Enrollment Data.” AAMC, 2025. https://www.aamc.org/data-reports [verified]
  19. Katz, L. & Margo, R. “Technical Change and the Relative Demand for Skilled Labor.” NBER Working Paper 18752, 2013. https://www.nber.org/papers/w18752 [verified]
  20. CPA Journal. “The Accounting Profession Is in Crisis.” 2023. https://www.cpajournal.com/2023/07/10/the-accounting-profession-is-in-crisis/ [verified]
  21. BBC News. “Google DeepMind CEO: Radiologists Will Be Obsolete.” 2016. https://www.bbc.com/news/technology-38115016 [verified]
  22. Freeman, R. “A Cobweb Model of the Supply and Starting Salary of New Engineers.” Industrial and Labor Relations Review, 1976. https://www.jstor.org/stable/2521660 [verified]
  23. Bloom, N., Prettner, K., Saadaoui, J., & Veruete, M. “The Impact of AI on the Skill Premium.” NBER Working Paper 32430, 2024. https://www.nber.org/papers/w32430 [verified]
  24. Autor, D. “Applying AI to Rebuild the Middle Class.” NBER Working Paper 32140, 2024. https://www.nber.org/papers/w32140 [verified]
  25. Humlum, A. & Vestergaard, E. “The Adoption of ChatGPT.” NBER Working Paper 33694, 2025. https://www.nber.org/papers/w33694 [verified]
  26. IMF Blog. “New Skills and AI Are Reshaping the Future of Work.” January 2026. https://www.imf.org/en/Blogs/Articles/2026/01/13/new-skills-and-ai-are-reshaping-the-future-of-work [verified]
  27. Autor, D., Levy, F., & Murnane, R. “The Skill Content of Recent Technological Change.” Quarterly Journal of Economics, 2003. https://economics.mit.edu/sites/default/files/publications/the%20skill%20content%202003.pdf [verified]