Dojo Note · Beta

Key Concepts


Main Thesis (30‑second read)

Multi‑agent LLM systems fail not because of prompts, but because of physics: per‑turn errors compound and meaning drifts across boundaries. Reliability comes from architectural control of autonomy, boundaries, and cognition, not from smarter agents. Production systems must constrain turns, type their boundaries, and enforce roles to prevent stochastic compounding and semantic drift.

Core Concept 1: Autonomy Is a Reliability Killer

  • Autonomous sub‑agents (multi‑turn, self‑directed) compound errors exponentially.
  • Even with strong prompts, per‑turn errors stack: a 95% per‑turn success rate over 20 turns yields 0.95^20 ≈ 36% end‑to‑end success.
  • Failure modes: recursive divergence, hallucination cascades, context collapse, zero observability.
  • Action: Never deploy fully autonomous sub‑agents in production.
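The compounding claim above is just exponential decay of independent per‑turn success. A two‑line sketch makes the arithmetic concrete:

```python
# Compounding per-turn reliability: even a strong per-turn success rate
# decays exponentially over an autonomous multi-turn run.

def run_success_probability(per_turn: float, turns: int) -> float:
    """Probability that every turn in an autonomous run succeeds."""
    return per_turn ** turns

# 95% per-turn success over 20 autonomous turns:
p = run_success_probability(0.95, 20)
print(f"{p:.2%}")  # → 35.85% (the ≈36% figure above)
```

The model assumes per‑turn failures are independent; in practice errors correlate (one bad turn poisons later context), which usually makes real runs worse than this estimate, not better.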

Core Concept 2: Centralized Control via Single‑Turn Agents

  • Controlled sub‑agents (max_turns=1) keep intelligence distributed but control centralized in a Kernel.
  • Kernel runs a closed OODA‑style loop (observe–orient–decide–act): observe state → decide the next step → delegate a single turn → observe the result.
  • Results: ~91% success, lower cost, high debuggability.
  • Tradeoff: Kernel micromanagement (every small fix costs a Kernel turn).
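The loop above can be sketched as follows. All names here (`call_agent`, `kernel`) are illustrative stand‑ins, not APIs from a specific framework; the completion check is a placeholder for whatever deterministic or LLM‑based check a real Kernel would run.

```python
# Minimal sketch of a Kernel-controlled loop over single-turn sub-agents.

def call_agent(role: str, task: str) -> str:
    """Placeholder for one LLM call with max_turns=1: one request, one reply."""
    return f"[{role}] result for: {task}"

def kernel(goal: str, max_kernel_turns: int = 10) -> list[str]:
    """Closed loop: observe state, decide the next step, delegate one turn, repeat."""
    log: list[str] = []
    steps, done = 0, False
    while not done and steps < max_kernel_turns:
        # Observe + decide (deterministic here; a real Kernel may use an LLM).
        task = f"step {steps} toward: {goal}"
        # Delegate exactly one turn, then control returns to the Kernel.
        result = call_agent("coder", task)
        log.append(result)  # full observability: every turn is recorded
        steps += 1
        done = steps >= 3   # stand-in completion check
    return log

print(len(kernel("fix failing test")))  # → 3
```

Because control returns after every turn, a bad sub‑agent reply costs one turn, not a runaway trajectory, and the log gives a complete trace for debugging.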

Core Concept 3: Hybrid Autonomy with Competency Boundaries

  • Introduces bounded autonomy via a lease (N turns) + semantic yield.
  • Sub‑agents may self‑correct within their skill domain but must immediately yield on state/spec/infra blockers.
  • Yield detection is deterministic code, not LLM judgment.
  • Achieves 93–95% success by allowing cheap local fixes without runaway loops.
  • Action: Encode yield conditions in hard logic, not prompts.
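A minimal sketch of the lease‑plus‑yield pattern, assuming the agent is a callable of the form `agent_step(task, prior_output) -> output`. The marker strings and function names are hypothetical; the point is that the yield check is plain string/state logic the Kernel can trust, never an LLM judgment.

```python
# Bounded autonomy: a turn lease plus deterministic yield detection.

YIELD_MARKERS = ("MISSING_SPEC", "STATE_CONFLICT", "INFRA_ERROR")  # illustrative

def should_yield(output: str) -> bool:
    """Deterministic check: did the agent hit a hard escalation condition?"""
    return any(marker in output for marker in YIELD_MARKERS)

def run_with_lease(agent_step, task: str, lease_turns: int = 3):
    """Let the agent self-correct for up to `lease_turns`, yielding early on blockers."""
    output = ""
    for turn in range(lease_turns):
        output = agent_step(task, output)
        if should_yield(output):
            return ("yielded", turn + 1, output)       # escalate to the Kernel
        if "DONE" in output:
            return ("done", turn + 1, output)
    return ("lease_expired", lease_turns, output)      # lease spent; Kernel regains control
```

Cheap local fixes (a typo, a failing assertion in the agent's own domain) burn lease turns; state, spec, or infra blockers exit immediately, so the loop can never spin on something outside the agent's competency.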

Core Concept 4: Cognitive Cohesion & Role Contracts

  • Agents fail when roles blur (cognitive fragmentation).
  • Each agent needs a Role Contract:
    • Epistemic domain (what it knows)
    • Authorized tools (what it can do)
    • Yield conditions (when to escalate)
  • Enforced by the Kernel through tool availability and context assembly.
  • Rule: An agent cannot misuse tools it never receives.
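The Role Contract can be plain data the Kernel enforces mechanically when it assembles each agent's toolbox. The tool names and contract fields below are illustrative, not from a specific framework:

```python
# A Role Contract as data; the Kernel enforces it by construction.

from dataclasses import dataclass

@dataclass(frozen=True)
class RoleContract:
    role: str
    epistemic_domain: str               # what the agent knows / may reason about
    authorized_tools: frozenset[str]    # what it can do
    yield_conditions: tuple[str, ...]   # when it must escalate

ALL_TOOLS = {  # the Kernel's full registry (stub implementations)
    "read_file": lambda path: f"<contents of {path}>",
    "write_file": lambda path, text: None,
    "run_tests": lambda: "ok",
    "deploy": lambda: "deployed",
}

def assemble_toolbox(contract: RoleContract) -> dict:
    """Enforcement by omission: an agent cannot misuse a tool it never receives."""
    return {name: fn for name, fn in ALL_TOOLS.items()
            if name in contract.authorized_tools}

coder = RoleContract(
    role="coder",
    epistemic_domain="source code in this repository",
    authorized_tools=frozenset({"read_file", "write_file", "run_tests"}),
    yield_conditions=("spec ambiguity", "infra failure"),
)
toolbox = assemble_toolbox(coder)
assert "deploy" not in toolbox  # the coder never receives deploy rights
```

Because the contract is enforced in the toolbox and context assembly rather than in the prompt, role blur cannot be caused by the model ignoring an instruction.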

Core Concept 5: Typed Boundaries & Deterministic Repair

  • Passing context between agents as prose summaries collapses recall (90% → ~38%).
  • Critical data must cross boundaries via typed artifacts, not text summaries.
  • Prevents silent partial success (output that looks OK but is semantically wrong).
  • Enables deterministic routing, repair, and observability.
  • Action: Treat agent boundaries as error‑injection points; type everything.
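One way to type a boundary is to pass a structured artifact and run deterministic checks on it at the crossing point. The artifact shape and checks below are hypothetical examples, standing in for whatever schema a real system would define:

```python
# A typed artifact crossing an agent boundary, instead of a prose summary.
# Validation makes partial success detectable and repair routable.

from dataclasses import dataclass

@dataclass
class PatchArtifact:
    file_path: str
    diff: str
    tests_passed: bool

def validate(artifact: PatchArtifact) -> list[str]:
    """Deterministic checks at the boundary; failures route to repair, not blind retries."""
    errors = []
    if not artifact.file_path.endswith(".py"):
        errors.append("file_path: expected a .py file")
    if not artifact.diff.strip():
        errors.append("diff: empty patch")
    if not artifact.tests_passed:
        errors.append("tests_passed: patch did not pass tests")
    return errors

good = PatchArtifact("src/app.py", "--- a/src/app.py\n+++ b/src/app.py\n...", True)
bad = PatchArtifact("notes.txt", "", False)
print(validate(good))       # → []
print(len(validate(bad)))   # → 3
```

A prose summary of the same patch could omit any of these fields and still read as a success; the typed artifact makes each omission a named, machine‑routable error.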

Bottom line: Reliable multi‑agent AI is a control‑systems problem. Limit autonomy, enforce roles, type boundaries, and let physics—not prompts—drive architecture.


Key Concepts — FikAi notebook for Sub-Agent Control Patterns.