""
Five Architectural Principles
These principles, adapted from Christopher Alexander's work on living systems, guide how Tractatus evolves and maintains coherence. They're not documentation—they're design criteria enforced architecturally.
Not-Separateness: Governance in the Critical Path
Governance woven into deployment architecture, not bolted on
Tractatus governance services operate in the critical execution path—every action passes through validation before executing. This isn't monitoring after the fact; it's architectural enforcement that cannot be bypassed.
Example: PreToolUse Hook
When the AI attempts to edit a file, the PreToolUse hook intercepts before execution. BoundaryEnforcer, CrossReferenceValidator, and other services validate the action. If any service blocks, the edit does not proceed—the hook architecture prevents bypass without explicit override flags.
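The interception pattern described above can be sketched as a validation pipeline. This is a minimal illustration, not the production code: the service interfaces, rules, and paths here are assumptions; only the service names (BoundaryEnforcer, CrossReferenceValidator) come from the text.

```python
class BlockedAction(Exception):
    """Raised when a governance service vetoes an action."""

def boundary_enforcer(action):
    # Hypothetical rule: block writes outside the project root.
    if not action["path"].startswith("/project/"):
        return f"path {action['path']} is outside the project boundary"
    return None

def cross_reference_validator(action):
    # Hypothetical rule: block edits that contradict a stored instruction.
    if action.get("port") is not None and action.get("port") != action.get("required_port"):
        return "conflicts with a persisted port instruction"
    return None

VALIDATORS = [boundary_enforcer, cross_reference_validator]

def pre_tool_use(action):
    """Runs in the critical path: if any service blocks, the edit never executes."""
    for validate in VALIDATORS:
        reason = validate(action)
        if reason:
            raise BlockedAction(reason)
    return action  # only now may the tool run

pre_tool_use({"path": "/project/app.py"})  # passes validation
```

The key property is that `pre_tool_use` sits between the agent and the tool call: there is no code path to the edit that skips the loop over validators.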
Contrast: Bolt-on compliance systems monitor actions after they occur, creating separation between governance and execution. An AI agent could theoretically disable monitoring or exploit gaps. Tractatus eliminates that separation.
Deep Interlock: Services Reinforce Each Other
Coordinated governance, not isolated checks
The six governance services don't operate in silos—they coordinate through mutual validation. High context pressure intensifies boundary checking. Instruction persistence affects cross-reference validation. Service outputs feed into each other, creating resilience through redundancy.
Example: The 27027 Incident
The AI attempted to use the default database port despite a HIGH-persistence instruction specifying port 27027. InstructionPersistenceClassifier flagged the instruction. ContextPressureMonitor detected 53.5% pressure. CrossReferenceValidator caught the conflict. BoundaryEnforcer blocked the action. Four services working together prevented the error.
Why it matters: Single service bypass doesn't compromise governance. An attacker would need to circumvent multiple coordinated services simultaneously—exponentially harder than defeating isolated checks.
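One way to picture Deep Interlock is one service's output raising another's rigor, as in the incident above where 53.5% pressure intensified checking. The thresholds and check names below are illustrative assumptions, not the framework's actual values.

```python
def context_pressure(tokens_used, budget):
    """ContextPressureMonitor sketch: fraction of the context budget consumed."""
    return tokens_used / budget

def boundary_checks(pressure):
    """BoundaryEnforcer sketch: higher pressure enables additional checks.

    The check names and the 0.50 threshold are hypothetical."""
    checks = ["path_boundary"]
    if pressure >= 0.50:
        checks += ["instruction_conflict", "human_review"]
    return checks

# At 53.5% pressure, the enforcer runs the extra coordinated checks.
active = boundary_checks(context_pressure(535, 1000))
```

Because one service's signal feeds the next service's behavior, defeating a single service leaves the intensified checks of its neighbors in place.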
Gradients Not Binary: Nuanced Responses
Intensity levels, not yes/no switches
Governance operates on gradients: NORMAL → ELEVATED → HIGH → CRITICAL. Context pressure, security impact, and validation rigor all scale with intensity. This mirrors how living systems adapt—gradual responses, not mechanical on/off.
Example: Context Pressure Monitoring
At NORMAL pressure (0-25%), routine operations proceed smoothly. At ELEVATED (25-50%), validation becomes more thorough. At HIGH (50-75%), human review triggers more frequently. At CRITICAL (>75%), the framework recommends session closedown. Graduated response prevents both alert fatigue and catastrophic failures.
Contrast: Binary "allowed/blocked" systems create brittleness—either everything passes or nothing does. Gradients enable natural adaptation to varying risk levels.
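The four-level gradient reduces to a simple mapping from pressure percentage to governance level, using the thresholds stated above. A minimal sketch:

```python
def pressure_level(pct):
    """Map context pressure (0-100%) to a graduated governance level.

    Thresholds follow the documented gradient: NORMAL -> ELEVATED -> HIGH -> CRITICAL."""
    if pct > 75:
        return "CRITICAL"   # framework recommends session closedown
    if pct > 50:
        return "HIGH"       # human review triggers more frequently
    if pct > 25:
        return "ELEVATED"   # validation becomes more thorough
    return "NORMAL"         # routine operations proceed

pressure_level(53.5)  # the 27027-incident pressure falls in the HIGH band
```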
Structure-Preserving: Audit Continuity
Changes enhance without breaking
Framework changes must preserve wholeness—audit logs remain interpretable, decisions remain valid, institutional memory survives evolution. Version 4.2 logs are readable in version 4.4. Six-month-old audit decisions still make sense. Structure-preserving transformations maintain coherence across time.
Example: Adding Framework Fade Detection
When inst_064 (framework fade detection) was added, it monitored all six services without changing their core definitions. Pre-existing audit logs remained valid. Service behavior evolved, but historical decisions stayed interpretable. Enhancement without fracture.
Regulatory advantage: Regulators need stable audit trails. Structure-preserving evolution lets the framework adapt while maintaining compliance continuity—no need to re-interpret old decisions every version.
Living Process: Evidence-Based Evolution
Grows from real failures, not theory
Framework changes emerge from observed reality, not predetermined plans. When services went unused, we added fade detection. When selective verification reduced noise, we evolved the triggering criteria. Real operational experience drives evolution—we don't build solutions to theoretical problems.
Example: MetacognitiveVerifier Selective Mode
Audit logs showed MetacognitiveVerifier activating on trivial operations, creating noise. Rather than theorize about thresholds, we analyzed real trigger patterns. Selective mode emerged from data—verify only complex operations (3+ file modifications, 5+ sequential steps). Performance improved based on evidence, not guesswork.
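The selective-mode trigger condition amounts to a complexity gate using the two thresholds named above (3+ file modifications, 5+ sequential steps). A minimal sketch, assuming an OR between the criteria:

```python
def needs_verification(files_modified, sequential_steps):
    """MetacognitiveVerifier selective mode: verify only complex operations.

    Thresholds (3+ files, 5+ steps) come from the documented analysis of
    real trigger patterns; the OR combination is an assumption."""
    return files_modified >= 3 or sequential_steps >= 5

needs_verification(1, 2)  # trivial operation: skipped, avoiding noise
```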
Contrast: Over-engineered systems solve imagined problems. Living process builds only what reality proves necessary—lean, effective, grounded in operational truth.
How the Five Principles Work Together
These principles aren't independent—they form an interlocking pattern. Not-separateness requires deep interlock between services. Gradients enable natural adaptation. Living process drives changes that must be structure-preserving to maintain wholeness.
Not-Separateness (governance in critical path)
↓ requires
Deep Interlock (services coordinate)
↓ enables
Gradients (nuanced responses)
↓ guided by
Living Process (evidence-based evolution)
↓ constrained by
Structure-Preserving (audit continuity)
↓
System Wholeness
Tractatus works with any agentic AI system—Claude Code, LangChain, AutoGPT, CrewAI, or custom agents. The governance layer sits between your agent and its actions.
- Your AI agent (any platform) handles planning, reasoning, and tool use. Tractatus is agnostic to implementation.
- Six external governance services enforce boundaries, validate actions, and monitor pressure. Architecturally more difficult for the AI to bypass.
- Immutable audit logs, governance rules, and instruction history sit independent of the AI runtime and can't be altered by prompts.
Applied to Training: The Sovereign Language Model
These five principles were developed while governing AI agent sessions. The current research applies them to a harder problem: governing AI training itself.
Inference-Time Governance
Where the framework started: six services validate every AI action before execution.
- ✓ Response cannot reach user without governance validation
- ✓ Values decisions deferred to humans
- ✓ Audit trail for every decision
Status: in production
Training-Time Governance
Where the research is going: governance inside the training loop, not post-hoc filtering.
- ✓ BoundaryEnforcer validates every training batch before forward pass
- ✓ Cross-tenant data rejected at the training step, not after
- ✓ Consent verified per content item before inclusion
Status: designed, documented, hardware ordered
How the Five Principles Apply to Training
Not-Separateness: governance inside the training loop
Deep Interlock: BoundaryEnforcer + MetacognitiveVerifier coordinate during training
Gradients: training intensity scales with content sensitivity
Structure-Preserving: training preserves audit log interpretability
Living Process: training evolves from operational failures, not theory
Two-model architecture, three training tiers, thirteen wisdom traditions, indigenous data sovereignty
See the Framework in Action
Explore 171,800+ real governance decisions from production deployment. Filter by service, pressure level, and coordination patterns to understand how Deep Interlock operates in practice.
Apache 2.0 licensed • All data anonymized • No sign-up required
Six Governance Services
These services implement the five principles in practice. Each service embodies not-separateness (operating in the critical path), deep interlock (coordinating with others), and gradients (intensity-based responses).
Blocks AI from making values decisions (privacy, ethics, strategic direction). Requires human approval.
Stores instructions externally with persistence levels (HIGH/MEDIUM/LOW). Aims to reduce directive fade.
Interactive visualizations demonstrating how Tractatus governance services monitor and coordinate AI operations.
Two Implementations
Tractatus has been applied in two contexts: governing an AI development agent, and governing a sovereign locally-trained language model.
Development Agent Governance
The original implementation: six governance services operating in Claude Code's critical execution path. Every file edit, database query, and deployment action passes through validation.
Village AI: Sovereign Language Model
The current research direction: applying all five architectural principles to model training, not just inference. BoundaryEnforcer operates inside the training loop. Three training tiers (platform, tenant, individual) with governance at each level.
- Governance during training (Not-Separateness applied to optimization)
- Two-model architecture (3B fast + 8B reasoning) under unified governance
- Per-tenant LoRA adapters with consent-verified training data
- Thirteen wisdom traditions available for Layer 3 adoption
Status: inference in production; training pipeline designed, hardware ordered.