
Automated OODA Loop — Removing the Human Bottleneck

Created: Fri Apr 24 · Updated: Fri Apr 24

Definition

The OODA loop (Observe-Orient-Decide-Act) is the core decision cycle of military doctrine, developed by US Air Force Colonel John Boyd in the 1970s from his study of fighter engagements in the Korean War. The central insight: whoever cycles through the loop faster than their adversary wins.
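Boyd's tempo insight can be sketched as a toy calculation (this is an illustration, not anything from the source: the function name and the cycle times are invented for the example). An agent that completes its loop faster simply gets more full decision cycles inside the same engagement window, so its actions increasingly pre-empt the slower agent's stale picture.

```python
def completed_cycles(cycle_time_s: float, window_s: float) -> int:
    """Number of full Observe-Orient-Decide-Act loops an agent
    finishes inside a fixed time window."""
    return int(window_s // cycle_time_s)

# Hypothetical cycle times, in seconds, for two adversaries.
fast, slow = 2.0, 5.0
window = 60.0  # engagement window

print(completed_cycles(fast, window))  # 30 full loops
print(completed_cycles(slow, window))  # 12 full loops
```

The faster agent acts on fresh observations 30 times while the slower one manages 12; every gap between those counts is a moment where the slower agent is responding to a situation that no longer exists.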

Current State of Knowledge

Traditional Kinetic Warfare

All four stages involve human actors:
  • Observe: Human observes the threat
  • Orient: Human orients to context
  • Decide: Human decides on response
  • Act: Human acts
The human at each stage can refuse an unlawful order, question a targeting decision, or recognise that the 'enemy' is a civilian.
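The four-stage cycle above, with a human checkpoint at every stage, can be sketched as follows (a minimal illustration; the function names, stage signatures, and scenario are hypothetical, not from the source). The key structural property is that any stage can refuse, and a refusal aborts the whole cycle before Act is reached.

```python
from typing import Callable, Optional

# Each stage is a function that can return None to signal a refusal.
Stage = Callable[[object], Optional[object]]

def run_ooda(observe: Stage, orient: Stage, decide: Stage, act: Stage,
             stimulus: object) -> Optional[object]:
    """Run one Observe-Orient-Decide-Act cycle; stop immediately
    if any human stage refuses (returns None)."""
    state = stimulus
    for stage in (observe, orient, decide, act):
        state = stage(state)
        if state is None:  # a human refused or questioned the order
            return None
    return state

# Hypothetical scenario: the human at Decide recognises the
# 'enemy' is a civilian and refuses to issue a firing order.
result = run_ooda(
    observe=lambda s: {"contact": s},
    orient=lambda s: {**s, "classified": "civilian"},
    decide=lambda s: None if s["classified"] == "civilian" else s,
    act=lambda s: "engaged",
    stimulus="radar blip",
)
print(result)  # None: the cycle was broken by a human before Act
```

The veto point modelled by `None` is exactly what the article argues an automated loop removes from the architecture.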

Automated OODA Loop — The Accountability Gap

When AI takes over the Orient and Decide phases:
  1. The system automatically observes neural/biometric signatures from IoB infrastructure
  2. An AI system (trained on STE-derived cognitive data) orients to context
  3. The AI decides on a non-kinetic firing solution — no human makes this determination
  4. Directed energy/V2K delivery executes the solution
  5. The target's neurological response is harvested for model refinement
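Structurally, the automated loop described above is a straight-through pipeline: a composition of stage functions with no checkpoint whose return value can halt the chain, and whose final output is fed back as training data. The sketch below is an abstract illustration of that architecture only; all stage names and data are invented.

```python
from functools import reduce

def automated_loop(stages, signal):
    """Compose stages left to right. Unlike a human-in-the-loop
    cycle, no stage here can veto: every input that enters the
    pipeline flows through to the final stage."""
    return reduce(lambda state, stage: stage(state), stages, signal)

harvested = []  # feedback store for model refinement (step 5)
stages = [
    lambda s: {"observed": s},                 # 1. automatic observation
    lambda s: {**s, "oriented": True},         # 2. model orients
    lambda s: {**s, "decision": "solution"},   # 3. model decides
    lambda s: (harvested.append(s) or s),      # 4-5. act, then harvest
]

out = automated_loop(stages, "signature")
print(out["decision"])  # prints "solution": nothing in the chain could say no
```

Contrast this with a loop containing human checkpoints: there, each stage has an exit path; here, the composition has none, which is the architectural accountability gap the next paragraph describes.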

The accountability gap is not incidental. It is architectural. It is the specific design feature that provides plausible deniability to operators and institutional protection to the programme.

The Domestic Implications

In civilian targeting:
  • No human at any stage has made the decision to target a specific civilian
  • The system can automatically observe neural signatures, orient through AI trained on cognitive response data, decide on firing solution, and act through directed energy or V2K delivery
  • The human moral agent who might refuse an unlawful order is removed from the architecture

Legal Analysis

The automated OODA loop creates a specific legal vulnerability:

  • Fourth Amendment: No warrant can be obtained for decisions made by AI without human oversight

  • Due Process: No human review of targeting decisions means no meaningful due process

  • Accountability: When an AI makes the decision to target, who is responsible? The programmer? The operator? The institution that deployed it?


The answer: nobody. That's the point.

Open Questions

Whether Sonalysts-derived AI systems have been deployed for domestic civilian targeting remains unconfirmed in public records. However, the dual-use migration pattern documented across DARPA's seven-decade research history makes such deployment analytically consistent with established institutional behaviour patterns.

---

Related Pages:
  • sonalysts-inc-contractor-node — Mid-tier contractor operating at the DARPA/ONR/AFRL intersection
  • civilian-kill-chain-framework — Comprehensive operational framework mapping F2T2EA kinetic targeting cycles to non-kinetic cognitive disruption capabilities

Sources

  • raw/articles/Sonalysts-Automated-Cognitive-Warfare-and-the-Battle-for-Neurorights.md