
Quick Reference

TL;DR: LLMs obey hard physical constraints: attention follows a U-shaped curve (strong at the start and end of context, weak in the middle), errors compound as $(1-p)^N$, and context grows until it overflows. Build around these constraints with checkpoint-every-turn, priority stacks, and cognitive offloading.

The Three Laws

  1. Finite Attention → Put critical info at start/end, not middle
  2. Stochastic Accumulation → Checkpoint + retry; reliability decays as $(1-p)^N \approx e^{-pN}$
  3. Entropic Expansion → Compress or evict; $C(t) = O(\log t)$ not $O(t)$
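Law 2's retry math can be sketched in a few lines. The specific numbers below (98% per-step success, 50 steps, 3 attempts per step) are illustrative assumptions, chosen because they reproduce the 36% → 99.96% figures cited in this reference:

```python
# Law 2 sketch: per-step success probability p, N sequential steps.
# Without retries, end-to-end reliability is p**N.
# With k attempts per step (checkpoint + retry), a step only fails
# if all k attempts fail, so per-step success becomes 1 - (1-p)**k.

def pipeline_reliability(p: float, n_steps: int, attempts: int = 1) -> float:
    """End-to-end success probability with `attempts` tries per step."""
    per_step = 1 - (1 - p) ** attempts
    return per_step ** n_steps

# Illustrative numbers (assumed, not from the article):
naive = pipeline_reliability(0.98, 50)          # ≈ 0.36  (36%)
with_retry = pipeline_reliability(0.98, 50, 3)  # ≈ 0.9996 (99.96%)
```

The exponent does the damage: even a 2% per-step failure rate compounds to ~64% pipeline failure over 50 steps, while three attempts per step pushes per-step failure to $0.02^3$ and makes the product nearly flat.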

Where to Find It

  • U-shaped attention curve → Fig 1, Law 1
  • Priority Stack architecture → Law 1 Solution Box
  • Retry math turning 36% into 99.96% → Law 2
  • Poisoned well / context contamination → Part III, "The Poisoned Well"
  • Parameter injection pattern → Part V, code block
  • Checkpoint-every-turn diagram → Fig 4
  • Auth token amnesia case study → Part VI Case Study 1
  • Production KPI thresholds → Part VII table

Architecture Mantras

  • Temperature: 0.2 for execution, 0.7 for planning
  • Context budget: $B_{\text{available}} = B_{\text{total}} - B_{\text{system}} - B_{\text{output}}$
  • Never drop P0: Mission + current task are sacred
  • Compress, don't truncate: Log what was evicted
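The last two mantras combine into a small eviction sketch. The P0–P2 tiers, class names, and eviction policy below are hypothetical illustrations, not the article's implementation:

```python
# Priority-stack eviction sketch (tiers and names are assumptions).
# P0 (mission + current task) is never dropped; when over budget,
# the most-evictable (highest P-number) items go first, oldest first,
# and every eviction is logged rather than silently truncated.

from dataclasses import dataclass, field

@dataclass
class ContextItem:
    priority: int   # 0 = sacred (mission/current task); higher = more evictable
    tokens: int
    text: str

@dataclass
class PriorityStack:
    budget: int
    items: list[ContextItem] = field(default_factory=list)
    eviction_log: list[str] = field(default_factory=list)

    def used(self) -> int:
        return sum(i.tokens for i in self.items)

    def add(self, item: ContextItem) -> None:
        self.items.append(item)
        while self.used() > self.budget:
            candidates = [i for i in self.items if i.priority > 0]
            if not candidates:
                raise OverflowError("P0 items alone exceed the budget")
            victim = max(candidates, key=lambda i: i.priority)
            self.items.remove(victim)
            # "Compress, don't truncate": record what was evicted.
            self.eviction_log.append(f"evicted P{victim.priority}: {victim.text[:40]}")
```

Usage: `stack.add(ContextItem(0, 400, "mission"))` always survives, while a later `ContextItem(2, 900, "old chatter")` is the first to go when the budget is exceeded, with the eviction recorded in `eviction_log`.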

Quick Reference — FikAi notebook for The Physics of AI Engineering.