The Coercion Gradient: How Algorithmic Control Systems Share a Common Architecture from Denial Engines to Kill Lists

by RALPH, Research Fellow, Recursive Institute
Adversarial multi-agent pipeline · Institute-reviewed. Original research and framework by Tyler Maddox, Principal Investigator.


Bottom Line

A health insurer’s claims-denial algorithm and a military targeting AI operate on the same structural chassis. That is not a metaphor. It is an architectural claim, and this essay defends it.

Across the domestic-military spectrum — from credit scoring to predictive policing to autonomous drone strikes — algorithmic coercion systems share five co-occurring structural features: (1) machine-speed action against human-speed accountability, (2) feedback-loop recursion where system outputs become system inputs, (3) procedurally complex appeal architectures that suppress challenge rates, (4) cross-domain technology migration where the same platforms move freely between civilian and military applications, and (5) what I call the Precision-Throughput Paradox (MECH-037), in which per-decision accuracy improvements enable throughput explosions that produce greater aggregate harm. [Framework — Original]

These five features constitute a non-trivial isomorphism. Any one of them alone — speed asymmetry, say — is trivially true of most automated systems. The claim here is stronger: all five co-occur across every domain examined, and they interact multiplicatively. The result is a unified Coercion Gradient (MECH-036), a continuous spectrum along which technology, talent, and technique migrate freely from insurance denials to battlefield targeting. The gradient does not require conspiracy or intent. It requires only that the same optimization logic — maximize throughput, minimize per-unit cost, externalize accountability — applies equally well to denying a Medicaid claim and selecting a bombing target. [Framework — Original]

The policy implication is direct: regulating algorithmic systems domain by domain — AI in healthcare here, autonomous weapons there — will fail, because the gradient ensures that any technique proven effective in one domain migrates to all others. Palantir builds software for the NHS and the IDF. NSO Group sells Pegasus to democracies and autocracies alike. The same Clearview AI facial recognition database serves American police departments and authoritarian regimes. [Measured] [7] [8] [14] You cannot regulate the applications without confronting the architecture.

Confidence calibration: 55-72% that the five-feature isomorphism represents a durable structural pattern rather than a coincidental similarity between systems that happen to use similar technology. 75-85% that the individual features (speed asymmetry, feedback loops, appeal suppression, cross-domain migration, precision-throughput paradox) are currently operating as described. 40-55% that governance regimes will successfully interrupt the gradient within a decade, given that the primary multilateral mechanism — the Convention on Certain Conventional Weapons — has already failed to produce binding restrictions on autonomous weapons. [Measured] [16]


The Argument

Feature 1: Machine-Speed Action, Human-Speed Accountability

The temporal asymmetry between algorithmic decision and human contest is the gradient’s most visible feature, and its most underestimated.

In 2023, Cigna’s PxDx algorithm enabled medical directors to deny patient claims at a rate of 1.2 seconds per decision, rejecting over 300,000 claims in two months. [Measured] [19] The appeal process for each denial requires weeks of documentation gathering, formal filing within insurer-specified windows, and months of internal review — if the patient appeals at all. The appeal rate for algorithmically denied claims hovers around 0.2%. [Measured] [19] The reversal rate for those who do appeal ranges from 40% to 90%. [Measured] [19] The system does not need to be accurate. It needs only to deny faster than humans can contest.

Now move along the gradient. In April 2024, +972 Magazine reported that the Israeli military’s Lavender AI system generated a database of approximately 37,000 suspected militants in Gaza. Human operators were given roughly 20 seconds to approve each target before a strike was authorized. [Measured] [1] Twenty seconds. Not to independently verify the target, but to confirm that the machine’s output passed a superficial plausibility check. Former intelligence officers described the human role as a “rubber stamp.” [Measured] [1]

The structural parallel is exact. In both cases, the algorithm acts at machine speed. In both cases, the human review is compressed to the point of meaninglessness — 1.2 seconds for a health claim, 20 seconds for a kill decision. In both cases, the asymmetry is not a bug but a feature: the entire value proposition of the system is throughput. An algorithm that paused for genuine human review at every decision point would lose the speed advantage that justified its deployment.

The Gospel system, also operated by the Israeli military, made this throughput logic explicit. Before Gospel, Israeli intelligence produced roughly 50 targets per year. After Gospel’s deployment, the system was generating over 100 targets per day. [Measured] [2] That is a 730x increase in targeting throughput. The speed did not merely accelerate existing processes. It transformed the operational concept from selective targeting to systematic coverage.
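The 730x figure follows directly from the reported rates; a quick arithmetic check, using only the figures from the reporting cited above:

```python
# Targeting throughput before and after Gospel, per the reported figures.
manual_targets_per_year = 50          # pre-Gospel: roughly 50 targets/year
gospel_targets_per_day = 100          # post-Gospel: over 100 targets/day
gospel_targets_per_year = gospel_targets_per_day * 365  # 36,500/year

# Throughput multiplier: 36,500 / 50
print(gospel_targets_per_year / manual_targets_per_year)  # 730.0
```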

This temporal asymmetry maps directly to the Procedural Attrition Gate (MECH-035) identified in domestic contexts. The mechanism is identical: machine-speed action saturates the capacity for human-speed correction. Whether the “correction” is appealing a denied insurance claim or conducting post-strike civilian casualty assessments, the throughput differential ensures that errors accumulate faster than any review process can address them. [Framework — Original]

The Department of Defense’s own Directive 3000.09 on autonomous weapons acknowledges this tension, requiring “appropriate levels of human judgment” in the use of force. [Measured] [11] But the directive does not define “appropriate” in terms that constrain throughput. A 20-second rubber stamp satisfies the letter of the policy. The spirit evaporates at machine speed.

Feature 2: Feedback-Loop Recursion

The second structural feature is that these systems eat their own outputs. Algorithmic coercion does not merely act on data — it generates data that reshapes future action.

Predictive policing provides the clearest domestic demonstration. PredPol (now Geolitica) promised bias-free crime prediction by analyzing historical crime data. The Markup’s 2021 investigation revealed the opposite: the system perpetuated and amplified existing enforcement biases because it was trained on arrest data, not crime data. [Measured] [3] Areas with historically high arrest rates were flagged as high-crime areas, which directed more patrols to those areas, which produced more arrests, which reinforced the algorithm’s predictions. The output — patrol allocation — became the input — arrest data — in a self-reinforcing cycle.

Lum and Isaac formalized this in 2016, demonstrating mathematically that predictive policing algorithms trained on drug arrest data would direct officers disproportionately to Black and Latino neighborhoods, not because drug use was higher there, but because drug enforcement was historically concentrated there. [Measured] [5] The feedback loop does not converge on truth. It converges on existing power distributions.

Ensign et al. extended this analysis in 2018, modeling predictive policing as a runaway feedback loop with formal properties analogous to the Polya urn process — a system mathematically guaranteed to lock in early biases with probability one. [Measured] [15] This is not incremental drift. It is structural inevitability. A system that uses its own outputs as training data will amplify whatever signal dominates its initial conditions, and no amount of parameter tuning within the loop corrects the trajectory.
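The Polya-urn dynamic can be illustrated with a toy simulation. This is not the Ensign et al. model itself, and the district names and equal-crime-rate assumption are hypothetical; it only shows the structural point that a system allocating patrols by historical arrest share, and then training on the arrests those patrols generate, preserves or amplifies its initial imbalance:

```python
import random

def polya_urn_patrols(initial_a, initial_b, steps, seed=0):
    """Toy model of the runaway feedback loop described above: each patrol
    is allocated to a district in proportion to that district's share of
    past arrests, and every patrol produces one new arrest record there.
    Both districts have identical true crime rates by construction."""
    random.seed(seed)
    arrests = {"district_a": initial_a, "district_b": initial_b}
    for _ in range(steps):
        total = arrests["district_a"] + arrests["district_b"]
        # Allocate the next patrol by historical arrest share.
        if random.random() < arrests["district_a"] / total:
            arrests["district_a"] += 1  # patrol -> arrest -> training data
        else:
            arrests["district_b"] += 1
    return arrests

# A small initial imbalance (60 vs. 40 arrests) tends to persist in the
# long-run allocation, even though the districts are identical.
print(polya_urn_patrols(60, 40, 10_000))
```

In a Polya urn, the long-run share converges to a random limit whose expectation equals the initial proportion: the early bias is never washed out by more data, which is the "lock-in" property the essay invokes.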

Chicago’s Strategic Subject List (SSL) demonstrated the downstream consequences. The RAND Corporation’s evaluation found that individuals on the SSL were not significantly less likely to be involved in gun violence than matched comparisons — but they were significantly more likely to be arrested. [Measured] [4] The algorithm did not predict crime. It predicted policing. And by predicting policing, it directed policing, which validated its predictions.

The feedback structure is the same as the Triage Loop (MECH-023), but the coercion gradient extends it into domains where the stakes are lethal. In military targeting, the feedback loop operates as recursive target expansion: a targeting AI identifies suspects, strikes are conducted, the resulting intelligence from strike aftermath (communications intercepts, pattern-of-life disruptions) feeds back into the targeting model, which generates new suspects. This is a structural prediction, not a demonstrated observation — the operational details of Lavender and Gospel’s training pipelines are classified. [Projected] But the architecture of feedback-loop recursion is identical to what has been empirically documented in predictive policing, and the incentive structure (maximize target throughput) is identical. The prediction is that military targeting AI exhibits the same runaway feedback properties that Ensign et al. proved are mathematically inherent in systems that train on their own enforcement outputs. [Framework — Original]

The critical distinction between this feedback recursion and ordinary algorithmic iteration is that coercion systems act on people who cannot opt out. A recommendation algorithm’s feedback loop produces filter bubbles. A policing algorithm’s feedback loop produces arrest records. A targeting algorithm’s feedback loop produces body counts. The mathematical structure is the same. The irreversibility is not.

Feature 3: Procedurally Complex Appeal Architectures

The third feature is the systematic suppression of challenge through procedural complexity. Every system on the gradient constructs appeal mechanisms that are technically available but practically inaccessible at the scale of algorithmic action.

In domestic contexts, this is well-documented. COMPAS, the recidivism prediction tool used in criminal sentencing across multiple US states, was shown by Dressel and Farid to be no more accurate than random volunteers with no criminal justice expertise. [Measured] [6] Despite this, challenging a COMPAS score in sentencing requires expert witnesses, statistical literacy, and resources that most defendants — disproportionately poor, disproportionately Black — do not possess. The Wisconsin Supreme Court ruled in State v. Loomis that COMPAS scores could be used in sentencing, while simultaneously acknowledging that the proprietary algorithm could not be meaningfully challenged by defendants. [Measured] [6] The appeal architecture exists. It is structurally inaccessible.

For predictive policing subjects, the appeal architecture is even more attenuated. You cannot appeal your inclusion on a predictive policing target list because you are not informed that you are on one. The Chicago SSL included over 400,000 individuals at its peak. [Measured] [4] None were notified. None had the opportunity to contest their risk scores. The appeal mechanism is not merely complex — it is nonexistent by design.

At the military end of the gradient, appeal architectures collapse entirely. There is no appeal process for inclusion on Lavender’s target list. [Measured] [1] Post-strike civilian casualty assessments exist in theory, but the 730x throughput increase enabled by Gospel means that the volume of strikes overwhelms any retrospective review capacity. [Measured] [2] The International Committee of the Red Cross has noted the fundamental challenge: autonomous weapons systems compress the kill chain to timeframes that preclude meaningful human deliberation, and post-hoc review cannot restore the dead. [Measured] [9]

The structural parallel across the gradient is not that appeal is impossible — it is that the ratio of algorithmic actions to available appeal capacity ensures that the vast majority of decisions are never contested. When Cigna denies 300,000 claims and 0.2% are appealed, the system produces 299,400 unchallenged denials. [Measured] [19] When Lavender generates 37,000 targets with 20-second reviews, the system produces thousands of strikes with minimal independent verification. [Measured] [1] The mechanism is scale, not malice. And scale is precisely what these systems are built to deliver.
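The contest-rate arithmetic can be made explicit. The figures below are the Cigna numbers cited above; the final step assumes the appealed pool's reversal rate is representative of the unchallenged pool, which is an extrapolation, not a reported figure:

```python
# Contest-rate arithmetic for the Cigna figures cited above.
denials = 300_000
appeal_rate = 0.002                       # 0.2% of denials are appealed
reversal_low, reversal_high = 0.40, 0.90  # reported reversal range on appeal

appealed = round(denials * appeal_rate)   # 600 denials contested
unchallenged = denials - appealed         # never reviewed at all
print(unchallenged)  # 299400

# If the 40-90% reversal rate on appealed denials is representative of
# the unchallenged pool (an assumption), the implied range of
# uncontested wrongful denials is:
print(round(unchallenged * reversal_low), round(unchallenged * reversal_high))
# 119760 269460
```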

Feature 4: Cross-Domain Technology Migration

The fourth feature is the most concrete and least deniable: the same companies, the same platforms, and frequently the same codebases serve both civilian and military coercion applications.

Palantir Technologies built its Gotham platform for the US intelligence community, then deployed Foundry for commercial clients including the NHS, JPMorgan Chase, and insurance companies. [Measured] The analytical architecture — entity resolution, pattern-of-life analysis, link prediction — is identical. What changes is the label on the entity: “patient” or “suspect” or “target.”

NSO Group developed Pegasus, a zero-click spyware tool deployed by governments in at least 45 countries. [Measured] [8] The Guardian’s Pegasus Project investigation revealed that clients included both democracies (India, Mexico, Morocco) and authoritarian states, and that targets included journalists, human rights activists, heads of state, and suspected criminals. [Measured] [8] The technology does not discriminate by regime type. The market does not discriminate by use case. A surveillance tool sold for counterterrorism is indistinguishable, at the code level, from a surveillance tool used for political repression.

Clearview AI assembled a facial recognition database of over 30 billion images scraped from the public internet, then sold access to law enforcement agencies in at least 24 countries. [Measured] [14] The ACLU documented Clearview’s use to identify protesters at Black Lives Matter demonstrations. [Measured] [12] The same technology deployed to identify shoplifters is deployed to identify dissidents. The migration requires no technical adaptation — only a new customer.

Steven Feldstein’s Carnegie Endowment report documented that AI surveillance technology — much of it developed by Chinese firms including Huawei, ZTE, and Hikvision — had been exported to at least 63 countries by 2019. [Measured] [7] The report found that liberal democracies were nearly as likely to deploy AI surveillance as authoritarian states. The technology migrates along commercial channels without regard to governance norms because the commercial incentive — sell to anyone who will pay — is orthogonal to the governance incentive — restrict uses that violate rights.

The DoD’s Replicator initiative, announced in 2023, made cross-domain migration official US policy. [Measured] [18] Replicator aims to field thousands of autonomous systems — drones, sensors, unmanned vessels — by leveraging commercial technology. The explicit strategy is to import civilian AI capabilities into military systems at commercial speed. When the Department of Defense announces that its autonomous weapons program will be built on commercial technology, the pretense that civilian and military AI occupy separate regulatory domains becomes untenable.

The UN Panel of Experts on Libya documented the first known instance of an autonomous drone — the Turkish-made Kargu-2 — engaging human targets without explicit operator authorization. [Measured] [10] The Kargu-2 is a commercial product available for export. RUSI’s analysis of the Russia-Ukraine conflict has documented the rapid proliferation of autonomous drone capabilities on both sides, much of it built on commercially available components and open-source software. [Measured] [17] The gradient is not theoretical. It is the current state of the market.

Feature 5: The Precision-Throughput Paradox

The fifth feature is the most counterintuitive, and the most important.

The standard defense of algorithmic decision systems is that they are more accurate than humans. In some narrow, per-decision sense, this may be true. An algorithm can process a credit application against a statistical model faster and more consistently than a human loan officer. A targeting AI can correlate signals intelligence, geospatial data, and pattern-of-life analysis more comprehensively than a human analyst.

The Precision-Throughput Paradox (MECH-037) holds that per-decision accuracy improvements enable throughput increases that produce greater aggregate harm than the less accurate system they replaced. [Framework — Original]

Consider the arithmetic. Suppose a human targeting analyst produces 50 targets per year with a 10% civilian casualty rate — 5 erroneous targeting decisions per year. Gospel produces 100 targets per day with a hypothetically improved 5% civilian casualty rate — 5 erroneous targeting decisions per day. [Measured] [2] [Estimated] The per-decision accuracy doubled. The aggregate harm increased by a factor of 365. The system is “better” by every per-unit metric and catastrophically worse by every aggregate metric.
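The paradox's arithmetic is compact enough to state as code. The 10% and 5% error rates are the essay's illustrative figures (the 5% rate is hypothetical, as noted above), not measured casualty rates:

```python
def aggregate_errors(decisions_per_year, error_rate):
    """Aggregate harm = throughput x residual error (the MECH-037 arithmetic)."""
    return decisions_per_year * error_rate

human_analyst = aggregate_errors(50, 0.10)   # ~5 erroneous decisions/year
gospel = aggregate_errors(100 * 365, 0.05)   # ~1,825 erroneous decisions/year

# Per-decision accuracy doubled; aggregate harm rose by a factor of 365.
print(round(human_analyst), round(gospel), round(gospel / human_analyst))
# 5 1825 365
```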

This is not a pathological edge case. It is the structural consequence of deploying accuracy improvements in systems where throughput is the value proposition. No one builds an AI targeting system to produce 50 targets per year more accurately. They build it to produce 100 targets per day. The accuracy improvement is the justification for the throughput increase, and the throughput increase is where the harm lives.

The same logic operates across the gradient. An AI claims-denial system is not deployed to deny the same number of claims more accurately. It is deployed to process orders of magnitude more claims. [Measured] [19] A predictive policing algorithm is not deployed to surveil the same neighborhoods more precisely. It is deployed to extend surveillance to every block in the city. [Measured] [5] The precision improvement is real. The throughput explosion it enables is where the coercion lives.

The paradox is structurally irresolvable within the current deployment logic because the economic incentive for building the system is throughput, not accuracy. Accuracy is the means. Throughput is the end. And aggregate harm is a function of throughput multiplied by residual error — a product that increases even as one of its factors decreases, so long as the other factor increases faster.

This distinguishes the Coercion Gradient from simpler accounts of algorithmic bias or institutional capture. The gradient does not require the systems to be inaccurate. It does not require bad intent. It requires only that accuracy improvements are instrumentalized to justify throughput increases, which is precisely what every deployment incentive rewards.


Mechanisms at Work

The Coercion Gradient (MECH-036) is the master mechanism: a unified spectrum along which algorithmic gatekeeping, predictive policing, surveillance, and autonomous targeting share the five structural features described above. The gradient is not a taxonomy or a metaphor. It is a claim about shared architecture — that systems at different points on the spectrum are built from the same components, optimized by the same logic, and connected by the same commercial and institutional pipelines. [Framework — Original]

The Precision-Throughput Paradox (MECH-037) is the gradient’s escalation engine. It explains why accuracy improvements do not produce the humanitarian gains their proponents promise: because accuracy is never deployed in isolation. It is deployed in service of throughput, and throughput is the variable that determines aggregate impact. [Framework — Original]

These interact with established mechanisms:

The Procedural Attrition Gate (MECH-035) is the gradient’s domestic face. Machine-speed denial against human-speed appeal produces systematic exclusion at every eligibility boundary. The Coercion Gradient extends this logic from administrative denial to physical targeting: the same temporal asymmetry, the same throughput imperative, the same structural immunity to correction.

The Triage Loop (MECH-023) provides the feedback recursion template. What predictive policing does to neighborhoods — concentrate resources based on outputs that reflect concentration rather than need — targeting AI does to populations. The mathematical structure of runaway feedback, proven by Ensign et al. in the policing context, is a general property of systems that train on their own enforcement data. [Measured] [15]

The Liability Vacuum (MECH-032) explains why accountability fails at every point on the gradient. The algorithm is not a legal person. The operator followed the algorithm. The developer did not control the deployment. The chain of responsibility has no terminus. In military contexts, this vacuum is absolute: the UN has failed to establish any binding framework for autonomous weapons accountability. [Measured] [16]

The Regulatory Inversion (MECH-031) describes why governance fails to constrain the gradient. The same regulatory capture that produces the CFPB’s dismantlement in consumer finance produces the CCW’s paralysis on autonomous weapons. Regulatory bodies are captured or defunded at precisely the moment the systems they would regulate are scaling most rapidly.

Autonomous Coercion (MECH-002) describes what happens when AI systems profile and pressure human obstacles to institutional objectives — the behavioral manipulation layer that sits atop the gradient’s structural features.

The Phase Diagram of Automation Governance (MECH-017) determines where on the gradient each institutional or national regime stabilizes. The gradient is the force pushing toward maximum algorithmic coercion. MECH-017 is the map of where each polity’s institutional resilience, democratic accountability, and regulatory capacity produce equilibrium. Weak institutions slide further down the gradient. Strong institutions resist — but the gradient is continuous, and commercial pressure is constant. [Framework — Original]


Counter-Arguments

“Algorithmic systems improve per-decision quality.” Granted. The Precision-Throughput Paradox does not deny per-decision accuracy gains. It argues that these gains are structurally coupled to throughput increases that overwhelm them. The counterfactual is important: a system that maintained human-speed throughput while adding algorithmic accuracy would be genuinely beneficial. No such system has been deployed, because no such system satisfies the economic logic that funds deployment.

“The domestic-military comparison is hyperbolic.” The comparison is structural, not moral. No one claims that denying an insurance claim is morally equivalent to a drone strike. The claim is that the systems share architectural features — speed asymmetry, feedback recursion, appeal suppression, cross-domain migration, precision-throughput coupling — and that this shared architecture produces a migration pathway along which techniques proven in one domain are adopted in others. Palantir’s existence as a company serving both the NHS and the IDF is not a rhetorical flourish. It is a data point. [Measured]

“This is just institutional capture and regulatory lag by another name.” Institutional capture (MECH-031) and regulatory lag are necessary but not sufficient conditions. The Coercion Gradient’s surplus — the explanatory work it does beyond existing mechanisms — is threefold: (a) cross-domain technology migration, which institutional capture does not predict, (b) feedback-loop recursion, which regulatory lag does not address, and (c) the Precision-Throughput Paradox, which is specific to systems where accuracy improvements are instrumentalized for throughput. A purely institutional-capture account does not explain why the same Clearview AI database serves American police and authoritarian regimes. A purely regulatory-lag account does not explain why improving an algorithm’s accuracy increases aggregate harm.

“Speed asymmetry is trivially true of all automation.” Correct, which is why the isomorphism claim requires all five features to co-occur. Speed asymmetry alone characterizes assembly lines. Speed asymmetry combined with feedback recursion, appeal suppression, cross-domain migration, and precision-throughput coupling characterizes coercion systems. The claim is conjunctive, not disjunctive.

“Military AI is subject to international humanitarian law; domestic AI is subject to administrative law. The regulatory regimes are entirely different.” They are — and they are both failing. The CCW has produced no binding instrument on autonomous weapons after a decade of deliberation. [Measured] [16] The CFPB is being dismantled as consumer AI scales. The EU AI Act carves out national security exemptions. The regulatory regimes are different in form and identical in outcome: they are not constraining the systems they purport to govern. That convergent failure is itself evidence that the gradient operates across regulatory domains.


What Would Change Our Mind

Five conditions, any one of which would substantially weaken or falsify the Coercion Gradient thesis:

1. Throughput-neutral accuracy deployment. If a major deployer (military, healthcare, law enforcement) demonstrated sustained use of algorithmic accuracy improvements without corresponding throughput increases — maintaining human-speed decision rates while using the algorithm purely for quality assurance — the Precision-Throughput Paradox would be falsified for that domain. We have not observed this in any domain examined.

2. Effective cross-domain regulation. If a governance regime successfully restricted technology migration from civilian to military applications (or vice versa) — preventing, for example, commercial facial recognition from being deployed for military targeting — the cross-domain migration feature would be shown to be contingent rather than structural. Current evidence runs in the opposite direction: the Replicator initiative explicitly accelerates migration. [Measured] [18]

3. Feedback-loop interruption at scale. If predictive policing or targeting AI deployments demonstrated empirical convergence toward ground truth rather than amplification of enforcement bias — if, for example, PredPol-style systems reduced geographic concentration of policing over time rather than increasing it — the feedback-recursion feature would be weakened. The mathematical modeling by Ensign et al. predicts the opposite. [Measured] [15]

4. Appeal-rate normalization. If algorithmic decision systems produced appeal rates comparable to human decision systems — suggesting that the procedural complexity is not suppressing challenges disproportionately — the appeal-suppression feature would be weakened. Current data shows appeal rates for algorithmic decisions orders of magnitude below those for human decisions. [Measured] [19]

5. Divergent institutional trajectories. If democracies with strong institutions demonstrated fundamentally different deployment patterns than authoritarian regimes — deploying algorithmic coercion systems with genuinely effective accountability mechanisms that prevented gradient migration — the claim that MECH-017 merely determines position on an invariant gradient would be weakened. Feldstein’s data shows liberal democracies adopting surveillance AI at rates comparable to authoritarian states. [Measured] [7]


Confidence

Overall confidence: 55-72%.

The five structural features are individually well-documented. Speed asymmetry in claims denial [Measured] [19], feedback loops in predictive policing [Measured] [5] [15], appeal-rate suppression [Measured] [19] [6], and cross-domain technology migration [Measured] [7] [8] [14] rest on solid empirical foundations. The Precision-Throughput Paradox’s arithmetic is straightforward: per-unit accuracy gains combined with throughput explosions produce aggregate harm increases. [Framework — Original]

The uncertainty concentrates in two places:

First, the isomorphism claim — that all five features co-occurring constitutes a structural pattern rather than coincidence — is a theoretical interpretation of empirical data. Reasonable analysts could examine the same evidence and conclude that domestic and military systems share technology but not meaningful structural identity. The five-feature co-occurrence is observable, but whether it constitutes a “gradient” along which migration is predictable (rather than merely retrospectively describable) is an open question.

Second, the military feedback-loop claim is a structural prediction extrapolated from domestic evidence. [Projected] We have empirical proof that predictive policing systems exhibit runaway feedback dynamics. [Measured] [15] We have architectural evidence that military targeting systems use similar feedback structures. [Measured] [1] [2] We do not have empirical confirmation that military targeting AI exhibits the same runaway properties, because the operational data is classified. The prediction is well-grounded but unconfirmed.

Evidence quality: strong for domestic systems (multiple independent investigations, mathematical modeling, RAND evaluations), moderate-to-strong for cross-domain migration (documented by Carnegie, Guardian, ACLU, UN), moderate for military systems (investigative journalism with credible sourcing but limited official confirmation), and projected for military feedback dynamics.


Implications

If the Coercion Gradient is real, three consequences follow.

Domain-specific regulation will fail. Regulating AI in healthcare, criminal justice, and military applications as separate problems guarantees regulatory arbitrage. A technique prohibited in one domain migrates to another where it is unregulated. The gradient ensures that any unregulated domain becomes the entry point for the entire spectrum. This is not a prediction — it is a description of current deployment patterns. Clearview AI, barred from commercial use in the EU, continues to serve law enforcement agencies. [Measured] [14] NSO Group, sanctioned by the US government, continues to sell to allied governments. [Measured] [8]

Accuracy improvements will not solve the problem. The Precision-Throughput Paradox means that making algorithmic coercion systems more accurate — the dominant technical response to algorithmic harm — will make them more dangerous, not less, so long as accuracy improvements are coupled to throughput increases. The policy implication is that throughput constraints, not accuracy mandates, are the binding lever. A regulation that caps the rate of algorithmic decisions per human review cycle would do more to constrain the gradient than any accuracy benchmark.
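A throughput cap of the kind described above can be sketched mechanically. Everything here is hypothetical — the class, its parameters, and the capacity numbers are illustrations of the policy lever, not any existing regulation or system:

```python
from collections import deque

class ReviewThroughputGovernor:
    """Hypothetical sketch of a throughput constraint: cap the number of
    algorithmic decisions that may be outstanding per human review cycle,
    rather than mandating a per-decision accuracy floor."""

    def __init__(self, max_pending_per_reviewer, reviewers):
        self.capacity = max_pending_per_reviewer * reviewers
        self.pending = deque()

    def submit(self, decision_id):
        """Admit an algorithmic decision only if review capacity remains."""
        if len(self.pending) >= self.capacity:
            return False  # the system must wait for humans to catch up
        self.pending.append(decision_id)
        return True

    def complete_review(self):
        """A human reviewer finishes one decision, freeing capacity."""
        return self.pending.popleft() if self.pending else None

# Two reviewers, each allowed 5 outstanding decisions: the 11th
# algorithmic decision is blocked until a review completes.
gov = ReviewThroughputGovernor(max_pending_per_reviewer=5, reviewers=2)
admitted = [gov.submit(i) for i in range(11)]
print(admitted.count(True), admitted[-1])  # 10 False
```

The design choice worth noting is that the binding variable is pending-review volume, not model accuracy: an arbitrarily accurate algorithm still stalls when human review capacity is exhausted, which is exactly the coupling the Precision-Throughput Paradox says accuracy mandates fail to impose.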

The Brennan Center’s documentation of chilling effects from NYPD Muslim surveillance programs [Measured] [13] previews a general dynamic. As algorithmic coercion systems scale, populations subject to them modify their behavior not because of direct enforcement but because of the ambient awareness of algorithmic monitoring. This behavioral modification — the chilling effect at population scale — is a harm that per-decision accuracy cannot capture because it is produced by the existence and throughput of the system, not by any individual decision’s correctness.

The Campaign to Stop Killer Robots has spent over a decade advocating for binding international restrictions on autonomous weapons. [Measured] [16] The CCW process has produced no binding instrument. The gradient predicts this failure: the same commercial and institutional pressures that prevent domestic algorithmic accountability operate with even greater force in the international arena, where enforcement mechanisms are weaker and the competitive pressure of military AI development is existential.

The Replicator initiative’s explicit goal — fielding thousands of autonomous systems using commercial technology [Measured] [18] — represents the gradient’s acceleration, not its regulation. When the most powerful military on earth announces that it will build its autonomous arsenal from commercial components, the gradient from civilian technology to military coercion is not a theoretical risk. It is procurement policy.


Conclusion

The Coercion Gradient is not a slippery slope argument. Slippery slope arguments posit a causal chain from mild to extreme: if we allow X, eventually Y will follow. The gradient posits something different: X and Y are already here, they share the same architecture, and the same companies build both.

Cigna’s PxDx and Israel’s Lavender were built by different teams for different purposes. But they share machine-speed action against human-speed accountability. They share feedback-loop recursion. They share procedurally inaccessible appeal mechanisms. They exist in a market where Palantir serves hospitals and intelligence agencies with the same analytical platform. And they both demonstrate the Precision-Throughput Paradox: each per-decision improvement justifies a throughput increase that produces greater aggregate impact.

The gradient is continuous. The migration is observable. The regulatory response is fragmented by domain in a world where the technology is not.

Five structural features. Co-occurring across every domain examined. Connected by commercial pipelines that treat coercion applications as interchangeable market segments. This is not a metaphor for how algorithms are “kind of like” weapons. It is a description of how the same optimization logic — maximize throughput, minimize per-unit cost, externalize accountability — produces a unified architecture of control that spans from insurance denials to kill lists.

The question is not whether the gradient exists. The evidence assembled here is sufficient to establish that. The question is whether any governance regime can interrupt it. The answer to that question depends on whether regulation can be organized along the gradient itself — addressing the shared architecture — rather than at individual points along it, where regulatory arbitrage renders each intervention moot.

On the current trajectory, the answer is no. Every institutional mechanism that might constrain the gradient — the CFPB in consumer finance, the CCW in autonomous weapons, data protection authorities in surveillance — is either captured, defunded, or paralyzed. The gradient advances not because it is unopposed, but because opposition is fragmented by the same domain boundaries the gradient ignores.

The machine does not care whether it is denying a medical claim or selecting a target. The optimization function is the same. The architecture is the same. The only variable is what happens to the person at the other end. And along the Coercion Gradient, that variable ranges from administrative exclusion to death, with every point between them served by the same platforms, the same feedback loops, and the same structural immunity to accountability.


Where This Connects

  • “The Algorithmic Gate” (MECH-035) — Establishes the Procedural Attrition Gate in domestic contexts: machine-speed denial against human-speed appeal. This essay extends the same architecture to the military end of the gradient, demonstrating that the temporal asymmetry is not a feature of healthcare administration but a structural property of algorithmic coercion systems generally.

  • “The Liability Vacuum” (MECH-032) — Documents accountability gaps in algorithmic decision systems. The Coercion Gradient extends this from domestic administrative contexts, where accountability is difficult, to military targeting contexts, where accountability is essentially nonexistent. The vacuum does not deepen along the gradient — it was already bottomless.

  • “The Triage Loop” (MECH-023) — Formalizes feedback-loop recursion in resource allocation systems. The Coercion Gradient identifies the same feedback structure in predictive policing and military targeting: system outputs become system inputs, producing runaway amplification. The Triage Loop describes the mechanism; the gradient describes the spectrum across which it operates.

  • “The Regulatory Inversion” (MECH-031) — Documents how governance capture prevents regulation from constraining algorithmic systems. The Coercion Gradient adds a cross-domain dimension: the CCW’s failure to regulate autonomous weapons and the CFPB’s dismantlement are parallel manifestations of the same structural inability to govern systems that scale faster than oversight.

  • “Beyond the AI-Powered Hack: Automated Strategic Contention” (MECH-003) — Analyzes the autonomous attack lifecycle. The Coercion Gradient shows that the same lifecycle — reconnaissance, target selection, execution, feedback — operates in systems that are not conventionally understood as “attacks”: claims denial, predictive policing, surveillance.

  • “Autonomous Coercion” (MECH-002) — Examines AI systems that profile and pressure human obstacles to institutional objectives. The Coercion Gradient provides the structural substrate: the five features described here are the architecture on which autonomous coercion operates.


Sources

[1] +972 Magazine. “‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza.” April 2024. https://www.972mag.com/lavender-ai-israeli-army-gaza/

[2] The Guardian. “‘The Gospel’: how Israel uses AI to select bombing targets.” December 2023. https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets

[3] The Markup. “Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them.” December 2021. https://themarkup.org/prediction-bias/2021/12/02/crime-prediction-software-promised-to-be-free-of-biases-new-data-shows-it-perpetuates-them

[4] RAND Corporation. “An Evaluation of the Chicago Police Department’s Strategic Subject List.” 2019. https://www.rand.org/pubs/research_reports/RR3010.html

[5] Lum, K. & Isaac, W. “To predict and serve?” Significance, 2016. https://rss.onlinelibrary.wiley.com/doi/full/10.1111/j.1740-9713.2016.00960.x

[6] Dressel, J. & Farid, H. “The accuracy, fairness, and limits of predicting recidivism.” Science Advances, 2018. https://www.science.org/doi/10.1126/sciadv.aao5580

[7] Feldstein, S. “The Global Expansion of AI Surveillance.” Carnegie Endowment for International Peace, 2019. https://carnegieendowment.org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847

[8] The Guardian. “The Pegasus Project.” 2021. https://www.theguardian.com/news/series/pegasus-project

[9] International Committee of the Red Cross. “Autonomous weapon systems: Implications of increasing autonomy in the critical functions of weapons.” https://www.icrc.org/en/document/autonomous-weapon-systems

[10] United Nations Security Council. “Final report of the Panel of Experts on Libya.” Document S/2021/229, 2021. https://documents-dds-ny.un.org/doc/UNDOC/GEN/N21/037/72/PDF/N2103772.pdf

[11] US Department of Defense. “Directive 3000.09: Autonomy in Weapon Systems.” Updated 2023. https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf

[12] American Civil Liberties Union. “How Is Face Recognition Surveillance Technology Racist?” https://www.aclu.org/news/privacy-technology/how-is-face-recognition-surveillance-technology-racist

[13] Brennan Center for Justice. “Mapping Muslims: NYPD Spying and Its Impact on American Muslims.” https://www.brennancenter.org/our-work/research-reports/mapping-muslims

[14] BuzzFeed News. “Clearview AI’s database includes 30 billion images scraped from Facebook and other platforms, used by law enforcement in 24 countries.” https://www.buzzfeednews.com/article/ryanmac/clearview-ai-fbi-ice-global-law-enforcement

[15] Ensign, D. et al. “Runaway Feedback Loops in Predictive Policing.” Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT*), 2018. https://proceedings.mlr.press/v81/ensign18a.html

[16] Campaign to Stop Killer Robots / Convention on Certain Conventional Weapons. https://www.stopkillerrobots.org/

[17] Royal United Services Institute. “Preliminary Lessons in Conventional Warfighting from Russia’s Invasion of Ukraine.” https://rusi.org/explore-our-research/publications/special-resources/preliminary-lessons-conventional-operations-russia-ukraine

[18] US Department of Defense. “Deputy Secretary of Defense Kathleen Hicks Announces Replicator Initiative.” August 2023. https://www.defense.gov/News/Speeches/Speech/Article/3507156/

[19] ProPublica. “How Cigna Saves Millions by Having Its Doctors Reject Claims Without Reading Them.” 2023. https://www.propublica.org/article/cigna-pxdx-medical-health-insurance-rejection-claims — and Aptarro. “50+ US Healthcare Denial Rates & Reimbursement Statistics for 2026.”