Values & Principles

The foundational values that guide the Tractatus Framework's development, governance, and community.

Core Values

These four values form the foundation of the Tractatus Framework. They are not aspirational—they are architectural. The framework is designed to enforce these values through structure, not training.

1. Sovereignty

Principle: Individuals and communities must maintain control over decisions affecting their data, privacy, values, and agency. AI systems must preserve human sovereignty, not erode it.

What This Means in Practice:

  • AI cannot make values trade-offs (e.g., privacy vs. convenience) without human approval
  • Users can always override AI decisions
  • No "dark patterns" or manipulative design that undermines agency
  • Communities control their own data and AI systems
  • No paternalistic "AI knows best" approaches

Framework Implementation:

  • BoundaryEnforcer blocks values decisions requiring human judgment
  • InstructionPersistenceClassifier respects STRATEGIC and HIGH persistence instructions
  • All decisions are reversible and auditable
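To make this concrete, the sketch below shows the enforcement pattern in miniature. The BoundaryEnforcer name comes from the framework, but the Action and Verdict types and the check method are illustrative assumptions, not the published API.

```python
# Minimal sketch (hypothetical API): an enforcer that refuses to resolve
# values trade-offs on its own and escalates them to a human instead.
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"
    ESCALATE = "escalate"  # requires explicit human approval


@dataclass
class Action:
    description: str
    values_at_stake: set[str]  # e.g. {"privacy", "convenience"}


class BoundaryEnforcer:
    """Blocks any action that silently trades one human value against another."""

    def check(self, action: Action) -> Verdict:
        # A trade-off exists when two or more values are in tension;
        # the framework never resolves that without human approval.
        if len(action.values_at_stake) >= 2:
            return Verdict.ESCALATE
        return Verdict.ALLOW


enforcer = BoundaryEnforcer()
assert enforcer.check(
    Action("share usage data to speed up sync", {"privacy", "convenience"})
) is Verdict.ESCALATE
```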

2. Transparency

Principle: All AI decisions must be explainable, auditable, and reversible. No black boxes. Users deserve to understand how and why systems make choices.

What This Means in Practice:

  • Every AI decision includes reasoning and evidence
  • Users can inspect instruction history and classification
  • All boundary checks and validations are logged
  • No hidden optimization goals or secret constraints
  • Source code is open and auditable

Framework Implementation:

  • CrossReferenceValidator shows which instruction conflicts with proposed action
  • MetacognitiveVerifier provides reasoning analysis and confidence scores
  • All framework decisions include explanatory output
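As an illustration of what "explanatory output" means in practice, the sketch below models a self-describing decision record. The Decision type and its field names are assumptions for illustration; what comes from the framework is the idea that reasoning, evidence, and a confidence score travel with every verdict.

```python
# Minimal sketch (hypothetical types): the full record, not just the
# verdict, is what gets logged and exposed for inspection.
from dataclasses import dataclass, field


@dataclass
class Decision:
    action: str
    verdict: str                     # "allow" / "block" / "escalate"
    reasoning: str                   # human-readable rationale
    evidence: list[str] = field(default_factory=list)  # instruction IDs, log refs
    confidence: float = 1.0          # MetacognitiveVerifier-style score in [0, 1]


record = Decision(
    action="rewrite config port",
    verdict="block",
    reasoning="Proposed edit contradicts a stored explicit instruction.",
    evidence=["instruction:42 'use port 27027'"],  # hypothetical reference
    confidence=0.93,
)
print(record)  # audit tooling can replay how and why the decision was made
```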

3. Harmlessness

Principle: AI systems must not cause harm through action or inaction. This includes preventing drift, detecting degradation, and enforcing boundaries against values erosion.

What This Means in Practice:

  • Prevent parameter contradictions (e.g., the "27027 incident" described under Living Process below)
  • Detect and halt values drift before deployment
  • Monitor context pressure to catch silent degradation
  • No irreversible actions without explicit human approval
  • Fail safely: when uncertain, ask rather than assume

Framework Implementation:

  • ContextPressureMonitor detects when error probability increases
  • BoundaryEnforcer prevents values drift
  • CrossReferenceValidator catches contradictions before execution
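The 27027 class of error (detailed under Living Process below) shows why cross-referencing must happen before execution. The sketch below is a simplified stand-in for CrossReferenceValidator, assuming a hypothetical key-value interface between proposed parameters and stored instructions.

```python
# Minimal sketch (hypothetical interface): compare proposed parameter
# values against stored explicit instructions before anything executes.
def cross_reference(proposed: dict[str, str],
                    instructions: dict[str, str]) -> list[str]:
    """Return a list of conflicts; an empty list means safe to proceed."""
    conflicts = []
    for key, required in instructions.items():
        if key in proposed and proposed[key] != required:
            conflicts.append(
                f"{key}: proposed {proposed[key]!r} contradicts "
                f"explicit instruction {required!r}"
            )
    return conflicts


# The user explicitly asked for port 27027; a pattern-matching model
# "helpfully" proposes the more common default 27017.
conflicts = cross_reference({"port": "27017"}, {"port": "27027"})
if conflicts:
    print("Blocked before execution:", *conflicts, sep="\n  ")
```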

4. Community

Principle: AI safety is a collective endeavor, not a corporate product. Communities must have tools, knowledge, and agency to shape AI systems affecting their lives.

What This Means in Practice:

  • Open source framework under permissive Apache License 2.0 (with patent protection)
  • Accessible documentation and educational resources
  • Support for academic research and validation studies
  • Community contributions to case studies and improvements
  • No paywalls, no vendor lock-in, no proprietary control

Framework Implementation:

  • All code publicly available on GitHub
  • Interactive demos for education and advocacy
  • Three audience paths: researchers, implementers, advocates

Architectural Principles

Our values—sovereignty, transparency, harmlessness, community—guide what we build. But values alone don't prevent systems from drifting. We need architectural principles that show how to preserve values through structure, not aspiration.

Drawing on Christopher Alexander's work in architectural pattern languages, we've identified five principles that translate living systems thinking into governance architecture. These aren't metaphors—they're operational requirements that shape every framework decision.

Deep Interlock

Six governance services coordinate through mutual validation rather than operating in silos. When BoundaryEnforcer detects a values conflict, CrossReferenceValidator checks whether the proposed action aligns with stored instructions, ContextPressureMonitor assesses session conditions, and PluralisticDeliberationOrchestrator coordinates stakeholder deliberation if needed.

Connects to Transparency: Service coordination creates audit trails showing how governance decisions emerge from multiple reinforcing checks, not single-point failures.
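A minimal sketch of the interlock pattern, with the real services replaced by stubs and an assumed string-based verdict interface; the point is that no single check decides alone and every finding lands in the audit trail.

```python
# Minimal sketch (stubs, assumed interface): a proposed action passes
# through every check, and one escalation from any service pauses it.
from typing import Callable

Check = Callable[[str], str]  # returns "ok" or an escalation reason


def deep_interlock(action: str, checks: list[Check]) -> str:
    findings = [check(action) for check in checks]    # every service runs
    escalations = [f for f in findings if f != "ok"]  # mutual validation
    return "escalate: " + "; ".join(escalations) if escalations else "proceed"


checks: list[Check] = [
    lambda a: "ok",                             # BoundaryEnforcer (stub)
    lambda a: "conflicts with instruction 42",  # CrossReferenceValidator (stub)
    lambda a: "ok",                             # ContextPressureMonitor (stub)
]
print(deep_interlock("rewrite config port", checks))
```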

Structure-Preserving

Framework changes enhance without breaking. When we add new governance rules or refine service logic, historical audit logs remain interpretable. A decision made under framework v4.2 can still be understood in v4.4—institutional memory preserved across evolution.

Connects to Accountability: Structure-preserving transformations mean governance continuity. Organizations can demonstrate regulatory compliance across framework versions because the audit trail remains coherent.
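One structure-preserving tactic, sketched below with a hypothetical versioned JSON audit record; the schema and field names are invented for illustration. Readers tolerate older schemas instead of rejecting them, which is what keeps v4.2 history interpretable under v4.4.

```python
# Minimal sketch (assumed schema): records carry the framework version
# that produced them, and newer readers fill in later-added fields
# rather than breaking on old logs.
import json

RECORD_V42 = json.dumps({"schema": "4.2", "verdict": "block", "reason": "conflict"})


def read_record(raw: str) -> dict:
    record = json.loads(raw)
    record.setdefault("confidence", None)  # hypothetical field added in a later version
    return record


print(read_record(RECORD_V42))  # old record, still fully interpretable
```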

Gradients Not Binary

Governance operates on intensity scales (NORMAL/ELEVATED/HIGH/CRITICAL), not yes/no switches. Context pressure monitoring adjusts behavior gradually as session conditions change—token usage climbs, message length increases, task complexity escalates. Nuanced response to risk, not mechanical on/off.

Connects to Harmlessness: Gradients prevent both under-response (missing risks) and over-response (alert fatigue). The system adapts governance intensity to match actual risk levels, like living systems responding to environmental stress.
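A minimal sketch of the gradient mapping; the numeric thresholds are invented for illustration, not the framework's actual calibration.

```python
# Minimal sketch (assumed thresholds): context pressure is a continuous
# score mapped onto a graded scale, not a yes/no switch.
from enum import Enum


class Intensity(Enum):
    NORMAL = 0
    ELEVATED = 1
    HIGH = 2
    CRITICAL = 3


def intensity(pressure: float) -> Intensity:
    """Map a pressure score in [0, 1] (derived from token usage, message
    length, task complexity, and similar signals) to a governance level."""
    if pressure < 0.4:
        return Intensity.NORMAL
    if pressure < 0.7:
        return Intensity.ELEVATED
    if pressure < 0.9:
        return Intensity.HIGH
    return Intensity.CRITICAL


assert intensity(0.55) is Intensity.ELEVATED  # graded response, not on/off
```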

Living Process

The framework evolves from operational failures, not predetermined roadmaps. When the "27027 incident" revealed pattern recognition bias (the AI autocorrected the user's explicit "port 27027" to the more familiar "port 27017"), we didn't just document the failure—we built CrossReferenceValidator to prevent that class of error architecturally.

Connects to Community: Living process means continuous learning from real-world use. The framework grows smarter through operational experience shared across the community, not consultant wisdom imposed from above.

Not-Separateness

Governance is woven into the deployment architecture, not bolted on as an afterthought. PreToolUse hooks intercept actions before execution. Services run in the critical path. Bypasses require explicit --no-verify flags and are logged. Enforcement is structural, not voluntary.

Connects to Sovereignty: Not-separateness ensures AI cannot bypass governance to override human agency. The architecture makes it structurally difficult to erode boundaries, preserving decision-making authority where it belongs—with affected humans.
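A minimal sketch of in-path enforcement, assuming a hypothetical hook signature; the real PreToolUse integration and logging format may differ. The key property is that the check sits between the request and the tool, and bypasses are loud.

```python
# Minimal sketch (hypothetical hook): governance sits in the critical
# path, so a tool call cannot run without passing the pre-use check.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("governance")


def run_governance_checks(tool: str, args: dict) -> str:
    return "proceed"  # stub standing in for the interlocked services


def pre_tool_use(tool: str, args: dict, no_verify: bool = False) -> bool:
    """Return True only if the action may proceed."""
    if no_verify:
        # Bypasses are possible but never silent: every one is logged.
        log.warning("governance bypassed via --no-verify: %s %s", tool, args)
        return True
    verdict = run_governance_checks(tool, args)
    log.info("pre-tool-use check for %s: %s", tool, verdict)
    return verdict == "proceed"


if pre_tool_use("edit_file", {"path": "config.yml"}):
    pass  # only now does the tool actually execute
```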

Note: These principles were integrated into the framework in October 2025. We're monitoring their effectiveness through audit log analysis and operational metrics. This is active research—we're learning whether architectural principles from the built environment translate meaningfully to AI governance.

Te Tiriti o Waitangi & Digital Sovereignty

Context: The Tractatus Framework is developed in Aotearoa New Zealand. We acknowledge Te Tiriti o Waitangi (the Treaty of Waitangi, 1840) as the founding document of this nation, and recognize the ongoing significance of tino rangatiratanga (self-determination) and kaitiakitanga (guardianship) in the digital realm.

This acknowledgment is not performative. Digital sovereignty—the principle that communities control their own data and technology—has deep roots in indigenous frameworks that predate Western technology by centuries.

Why This Matters for AI Safety

Te Tiriti o Waitangi establishes principles of partnership, protection, and participation. These principles directly inform the Tractatus Framework's approach to digital sovereignty:

  • Rangatiratanga (sovereignty): Communities must control decisions affecting their data and values
  • Kaitiakitanga (guardianship): AI systems must be stewards, not exploiters, of data and knowledge
  • Mana (authority & dignity): Technology must respect human dignity and cultural context
  • Whanaungatanga (relationships): AI safety is collective, not individual—relationships matter

Our Approach

We do not claim to speak for Māori or indigenous communities. Instead, we:

  • Follow established frameworks: We align with Te Mana Raraunga (the Māori Data Sovereignty Network) and the CARE Principles for Indigenous Data Governance
  • Respect without tokenism: Te Tiriti forms part of our strategic foundation, not a superficial overlay
  • Avoid premature engagement: We will not approach Māori organizations for endorsement until we have demonstrated value and impact
  • Document and learn: We study indigenous data sovereignty principles and incorporate them architecturally

Te Tiriti Principles in Practice

Partnership: AI systems should be developed in partnership with affected communities, not imposed upon them.
Protection: The framework protects against values erosion, ensuring cultural contexts are not overridden by AI assumptions.
Participation: Communities maintain agency over AI decisions affecting their data and values.

Indigenous Data Sovereignty

Indigenous data sovereignty is the principle that indigenous peoples have the right to control the collection, ownership, and application of their own data. This goes beyond privacy—it's about self-determination in the digital age.

CARE Principles for Indigenous Data Governance

The Tractatus Framework aligns with the CARE Principles, developed by indigenous data governance experts:

Collective Benefit

Data ecosystems shall be designed and function in ways that enable Indigenous Peoples to derive benefit from the data.

Authority to Control

Indigenous Peoples' rights and interests in Indigenous data must be recognized and their authority to control such data be empowered.

Responsibility

Those working with Indigenous data have a responsibility to share how data are used to support Indigenous Peoples' self-determination and collective benefit.

Ethics

Indigenous Peoples' rights and wellbeing should be the primary concern at all stages of the data life cycle and across the data ecosystem.

Governance & Accountability

Values without enforcement are aspirations. The Tractatus Framework implements these values through architectural governance:

Strategic Review Protocol

Quarterly reviews of framework alignment with stated values. Any drift from sovereignty, transparency, harmlessness, or community principles triggers mandatory correction.

Values Alignment Framework

All major decisions (architectural changes, partnerships, licensing) must pass a values alignment check. If a decision would compromise any core value, it is rejected.

Human Oversight Requirements

AI-generated content (documentation, code examples, case studies) requires human approval before publication. No AI makes values decisions without human judgment.

Community Accountability

Open source development means community oversight. If we fail to uphold these values, the community can fork, modify, or create alternatives. This is by design.

Our Commitment

These values are not negotiable. They form the architectural foundation of the Tractatus Framework. We commit to:

  • Preserving human sovereignty over values decisions
  • Maintaining radical transparency in all framework operations
  • Preventing harm through structural constraints, not promises
  • Building and empowering community, not extracting from it
  • Respecting Te Tiriti o Waitangi and indigenous data sovereignty

When in doubt, we choose human agency over AI capability. Always.
