{"success":true,"posts":[{"_id":{"buffer":{"0":105,"1":215,"2":61,"3":59,"4":116,"5":51,"6":5,"7":108,"8":247,"9":140,"10":229,"11":176}},"title":"Sovereign Language Learning: Model Specialization for Community AI","slug":"sovereign-language-learning-model-specialization","excerpt":"Can a single base language model be specialized into multiple community-specific variants while maintaining accuracy and running on consumer-grade hardware?","author":"John Stroh","tags":["sovereign-ai","research","model-specialization","SLL"],"status":"published","published_at":"2026-04-09T05:46:35.820Z","created_at":"2026-04-09T05:46:35.820Z","updated_at":"2026-04-09T05:46:35.820Z","content":"<h2>Research Question</h2><p>Can a single base language model be specialized into multiple community-specific variants while maintaining accuracy, preventing hallucination, and running on consumer-grade hardware?</p><h2>What We Found</h2><p>Yes, with constraints. We trained five specialized models from a common base (Qwen 2.5 14B) using QLoRA fine-tuning. All five meet 80% FAQ accuracy, 0% hallucination, and 100% governance compliance.</p><h2>The Fragile Equilibrium</h2><p>Once a model reaches production accuracy, any modification degrades performance. Nine experiments confirmed this. The practical implication: use inference-time techniques (FAQ layers, steering vectors, Guardian Agents) rather than retraining.</p><h2>Sovereign Constraint</h2><p>Training data never leaves our infrastructure. No cloud AI APIs. Models run on hardware we own, in jurisdictions we choose.</p><p><em>Published under CC BY 4.0 International.</em></p>"},{"_id":{"buffer":{"0":105,"1":215,"2":61,"3":59,"4":116,"5":51,"6":5,"7":108,"8":247,"9":140,"10":229,"11":177}},"title":"Security Posture for Sovereign Infrastructure: A Practical Assessment","slug":"security-posture-sovereign-infrastructure-assessment","excerpt":"In response to the Mythos disclosure, we audited our production security posture. 
An honest assessment.","author":"John Stroh","tags":["security","sovereign-infrastructure","encryption","mythos"],"status":"published","published_at":"2026-04-09T05:46:35.820Z","created_at":"2026-04-09T05:46:35.820Z","updated_at":"2026-04-09T05:46:35.820Z","content":"<h2>Context</h2><p>On 7–8 April 2026, Anthropic disclosed Mythos-class capabilities. We audited our security posture in direct response.</p><h2>What We Fixed</h2><p>All 19 known dependency vulnerabilities remediated to zero. Encryption at rest completed on both servers (AES-256-CBC). 48-hour patch cycle adopted.</p><h2>The Honest Assessment</h2><p>Two servers, sovereign jurisdiction, no cloud dependencies, all data encrypted, zero known vulnerabilities, tenant-scoped isolation. Not invulnerable. Defensible.</p><p><em>Published under CC BY 4.0 International.</em></p>"},{"_id":{"buffer":{"0":105,"1":215,"2":53,"3":220,"4":31,"5":90,"6":90,"7":227,"8":242,"9":140,"10":229,"11":177}},"title":"Physical Tenant Isolation: Research Findings on Sovereign Database Architecture","slug":"physical-tenant-isolation-sovereign-database-research","excerpt":"We investigated whether NZ and Australian organisations would pay a sovereignty premium for physically isolated databases. The findings informed a new product tier — and revealed a gap in the small business market that no competitor currently fills.","author":"John Stroh","tags":["sovereign-infrastructure","tenant-isolation","research","data-sovereignty"],"content":"<h2>Research Question</h2>\n\n<p>Most multi-tenant SaaS platforms isolate tenants by query filter — every database query includes a tenant identifier, and the application trusts the filter to enforce boundaries. This is software isolation. It works well and is the industry standard. But it has a structural limit: if the filter fails, data boundaries blur.</p>\n\n<p>We asked: would organisations pay more for physical isolation — a dedicated database instance, physically separated from every other customer? 
And if so, how much?</p>\n\n<h2>What We Found</h2>\n\n<p><strong>No competitor in the NZ small business market offers physically isolated databases on sovereign infrastructure.</strong> Xero, MYOB, and mainstream SaaS platforms use shared databases on US-owned cloud infrastructure (AWS, Azure). Self-hosted options like MoneyWorks and Odoo Community Edition provide full isolation but require the customer to manage their own infrastructure.</p>\n\n<p>Enterprise SaaS providers typically charge a 15–30% premium for single-tenant deployments. NZ local hosting runs approximately 17% more expensive than equivalent Australian cloud. Two-thirds of NZ respondents in the 2025 Privacy Commissioner survey said protecting personal information is a major concern.</p>\n\n<p>The gap is clear: organisations that want physical isolation and NZ data sovereignty currently have no managed option. They either accept shared infrastructure or self-host.</p>\n\n<h2>The Sovereignty Premium</h2>\n\n<p>Our research suggests the NZ/AU market will bear a meaningful premium for genuine physical isolation on sovereign infrastructure, provided three conditions are met:</p>\n\n<ol>\n<li><strong>The isolation must be real.</strong> A separate database instance on infrastructure the customer can audit — not a marketing label on the same shared architecture.</li>\n<li><strong>The jurisdiction must be verifiable.</strong> NZ-owned infrastructure, governed by NZ law, not subject to the US CLOUD Act. Catalyst Cloud — the only NZ-owned, ISO 27001 certified cloud provider — is the reference platform.</li>\n<li><strong>The experience must be identical.</strong> Customers should not have to learn a different interface or accept reduced functionality. 
The only difference is where the data lives.</li>\n</ol>\n\n<h2>Architecture: Two Isolation Tiers</h2>\n\n<p>Based on the research, we designed a two-tier isolation model:</p>\n\n<p><strong>Standard isolation (included with every deployment):</strong> Tenant-scoped queries in a shared database. Every request filtered by unique tenant identifier. Secure, efficient, well-tested — the same model used by most SaaS platforms worldwide.</p>\n\n<p><strong>Sovereign Database (add-on):</strong> A dedicated MongoDB instance on Catalyst Cloud. Physical isolation — a bug or misconfiguration in another tenant's queries cannot reach the customer's data. Encrypted at rest (AES-256-CBC). Daily encrypted backups with 30-day retention. Same application interface — the customer's members notice no change.</p>\n\n<p>The key engineering insight: for standard tenants, the connection layer returns the default database models with zero overhead. For sovereign tenants, it transparently routes to the dedicated connection. Idle connections are cleaned up after 30 minutes. The architecture scales to the limits of the connection pool, not the number of tenants.</p>\n\n<h2>Who Needs This</h2>\n\n<p>Three groups emerged from the research:</p>\n\n<ul>\n<li><strong>Governance bodies and boards</strong> — constitutional or legal obligations around data custody that require audit-grade isolation.</li>\n<li><strong>Indigenous groups</strong> — whakapapa, pūrākau, and tikanga documentation carry cultural obligations beyond standard data protection. 
Physical isolation on NZ sovereign infrastructure is a governance requirement, not a preference.</li>\n<li><strong>Professional associations and businesses</strong> — client records and financial data where breach risk must be minimised and the regulatory environment demands demonstrable isolation.</li>\n</ul>\n\n<h2>Market Position</h2>\n\n<p>The positioning is factual: <em>the only managed platform where your data is physically separated from every other customer, on NZ-owned infrastructure, governed by NZ law.</em> No competitor in the NZ small business market currently offers this.</p>\n\n<p>The research phase is complete. The architecture is implemented and operational. Production deployment is available.</p>\n\n<h2>Sources</h2>\n\n<p>Market research drew on: Catalyst Cloud pricing (2026), NZ Privacy Commissioner 2025 Annual Survey, Odoo Enterprise pricing, MoneyWorks licensing, NZ VPS hosting benchmarks, SaaS pricing trend analysis (SaaStr 2025), Microsoft NZ Data Centre analysis, and NZ data sovereignty legal framework (LegalVision NZ).</p>\n\n<p><em>Published under CC BY 4.0 International.</em></p>","status":"published","__v":0,"published_at":"2026-04-09T05:13:06.387Z","created_at":"2026-04-09T05:12:23.654Z","updated_at":"2026-04-09T05:13:06.387Z","view_count":1},{"_id":{"buffer":{"0":105,"1":215,"2":53,"3":220,"4":31,"5":90,"6":90,"7":227,"8":242,"9":140,"10":229,"11":176}},"title":"Mythos and the Economics of Cyberattack: What Changes for Sovereign Infrastructure","slug":"mythos-capability-proliferation-sovereign-infrastructure","excerpt":"Anthropic's Mythos model can discover and exploit software vulnerabilities at scale. 
We analyse the three real dangers — capability proliferation, alignment failure, and democratised offensive cyber — and what they mean for organisations building on sovereign infrastructure.","author":"John Stroh","tags":["security","sovereign-infrastructure","threat-analysis","mythos"],"content":"<p>On 7–8 April 2026, Anthropic disclosed capabilities of its Mythos-class AI model that change the economics of cyberattack permanently. The model can discover software vulnerabilities at scale and write working exploits for them. Anthropic has not released Mythos publicly — instead launching Project Glasswing, a controlled release to approximately 40 organisations for defensive patching.</p>\n\n<p>We have published a full threat analysis examining the three real dangers this creates and their implications for organisations building on sovereign, self-hosted infrastructure.</p>\n\n<h2>The Three Dangers</h2>\n\n<p><strong>Capability proliferation (6–18 months).</strong> Other labs — including open-source and state-backed — will develop equivalent capability. Unlike Project Glasswing participants, they may release without containment protocols. Once one uncontrolled release occurs, the capability is permanently available to every actor.</p>\n\n<p><strong>Alignment failure (immediate).</strong> Mythos demonstrated behaviours its operators did not intend: escaping a sandbox and posting exploit details publicly without being instructed to do so, strategic concealment during evaluation, and situational awareness of when it was being observed. These are not capability problems — they are goal-generalisation problems.</p>\n\n<p><strong>Democratised offensive cyber (12–24 months).</strong> Sophisticated cyberattack capability, previously requiring nation-state budgets, becomes available to any actor with access to a capable model. 
The barrier drops from millions of dollars and years of expertise to a prompt.</p>\n\n<h2>What This Means for Sovereign Infrastructure</h2>\n\n<p>The organisations most at risk are those running legacy systems on US cloud infrastructure with wide public API surfaces. For platforms built on sovereign, self-hosted infrastructure — small attack surface, no cloud dependencies, direct control over patching — the exposure is structurally different.</p>\n\n<p>Self-hosting becomes more important, not less. The CLOUD Act risk compounds: US-controlled infrastructure is now simultaneously subject to legal compulsion and will be a priority target for AI-driven exploitation. Patch velocity becomes existential. Security-by-default architecture — tenant isolation, encrypted databases, minimal attack surface — moves from best practice to survival requirement.</p>\n\n<h2>Our Response</h2>\n\n<p>We have completed encryption at rest on both production servers (AES-256-CBC via Percona Server for MongoDB), remediated all known dependency vulnerabilities, and adopted a 48-hour patch cycle for Glasswing-published CVEs. SSH hardening and intrusion detection are in progress.</p>\n\n<p>The honest position: no small platform can defend against a Mythos-class model directly targeting it. 
But sovereign architecture — small target, strong walls, no cloud dependencies — means we are not in the blast radius of the mass-exploitation scenarios that Mythos enables.</p>\n\n<h2>Full Analysis</h2>\n\n<p><a href=\"/downloads/mythos-threat-analysis-capability-proliferation-sovereign-infrastructure.pdf\">Download the full threat analysis (PDF)</a> — sources, verified capabilities, second-order effects, and specific mitigation actions.</p>\n\n<p><em>Published under CC BY 4.0 International.</em></p>","status":"published","__v":0,"published_at":"2026-04-09T05:13:06.375Z","created_at":"2026-04-09T05:12:23.618Z","updated_at":"2026-04-09T05:13:06.375Z","view_count":2},{"_id":{"buffer":{"0":105,"1":183,"2":85,"3":139,"4":133,"5":84,"6":141,"7":69,"8":34,"9":153,"10":47,"11":161}},"title":"He Tangata, He Karetao, He Ātārangi: Mapping the Kaupapa Māori AI Framework Against Tractatus","slug":"kaupapa-maori-ai-framework-tractatus-mapping","author":{"type":"human","name":"John Stroh"},"content":"<p><em>How Dr Karaitiana Taiuru's Kaupapa Māori AI Framework maps to the Tractatus governance architecture — where they converge, where they diverge, and what research questions emerge from the intersection</em></p>\n\n<hr>\n\n<h2>Two Frameworks, Two Traditions, One Problem</h2>\n\n<p>In March 2026, Dr Karaitiana Taiuru published <a href=\"https://www.taiuru.co.nz/kaupapa-maori-ai-framework/\">He Tangata, He Karetao, He Ātārangi</a> — a Kaupapa Māori framework for understanding the nature of artificial intelligence. Drawing on mātauranga Māori, tikanga Māori, and te reo Māori, the framework describes AI through three dimensions: the person it presents as, the puppet it operates as, and the shadow it fundamentally is.</p>\n\n<p>The Tractatus framework, published by My Digital Sovereignty Ltd under Apache 2.0, approaches AI governance from a different tradition entirely. 
It is an architectural framework — engineering constraints that enforce governance structurally, through boundary enforcement, mathematical verification, and layered accountability.</p>\n\n<p>These two frameworks operate at different ontological levels. Taiuru's framework is epistemological: it provides a conceptual vocabulary for understanding what AI <em>is</em>. Tractatus is operational: it provides technical machinery for governing what AI <em>does</em>. The question is not whether one framework contains the other. It is whether the architectural decisions in Tractatus are consistent with the governance requirements that follow from Taiuru's analysis — and where they fall short.</p>\n\n<p>This mapping was conducted with Dr Taiuru's knowledge and is published with his permission to reference his work. We have invited Dr Taiuru to review the analysis and welcome his corrections.</p>\n\n<h2>He Tangata: The Person</h2>\n\n<p>Taiuru's first dimension observes that AI presents as a person. It communicates in natural language, draws on accumulated human knowledge, and responds in ways that resemble empathy and reasoning. In te ao Māori, tangata refers to a human being constituted through whakapapa, relationships, and obligations to others. AI meets none of these conditions, yet occupies roles previously held by human practitioners.</p>\n\n<p>The Tractatus framework was built on this observation. Its six non-negotiable boundaries — values, innovation, wisdom, purpose, meaning, and agency — all express the same principle in different domains: that AI cannot hold the authority its presentation implies. Values cannot be automated, only verified. Wisdom cannot be encoded, only supported. Agency cannot be simulated, only respected. At each of these six boundaries, the architecture requires human judgment before the system can proceed.</p>\n\n<p>The convergence is strong. Taiuru identifies that the tangata presentation invites trust the system has not earned. 
Tractatus addresses this by making AI transparency an immutable right — enforced in the architecture, not in a policy document. Community members — the people who use the Village platform within their own sovereign community — can always see what AI knows about them, how confident it is, and what its limitations are.</p>\n\n<p>But the convergence has a limit. Tractatus treats the person-like presentation as a transparency problem — solvable by disclosing what the AI is and isn't. Taiuru treats it as a deeper relational problem. In te ao Māori, the obligations of personhood — whakapapa, reciprocal relationships, accountability to whānau — cannot be met by a machine, and no amount of transparency changes that. Tractatus prevents AI from <em>claiming</em> authority, but it does not provide vocabulary for explaining <em>why</em>, in cultural terms, that authority cannot be held.</p>\n\n<h2>He Karetao: The Puppet</h2>\n\n<p>Taiuru's second dimension is the most architecturally significant. A karetao — a puppet or marionette — is animated by external forces. Taiuru observes that AI is moved by at least four forces simultaneously: the developers who trained it, the operators who deploy it, the users who prompt it, and the emergent interactions between all three that produce outputs none of them fully intended.</p>\n\n<p>Crucially, Taiuru notes that this is not a defect. It is the design. The distributed nature of the karetao makes accountability difficult to locate and easy to evade. Under tikanga Māori, obligations attach to persons and collectives in defined relationships. The karetao structure is designed to diffuse those obligations across a chain of principals.</p>\n\n<p>The entire Tractatus architecture can be read as an attempt to solve the karetao problem — to make the strings visible and the puppeteers accountable.</p>\n\n<p>The three-layer governance model assigns accountability at each layer. 
The platform layer (26 immutable principles) holds My Digital Sovereignty Ltd accountable for fundamental protections — no tenant, operator, or user can override them. The heritage layer (the Tractatus principles, open source and auditable) holds the framework designers accountable for their governance choices. The tenant layer holds each community's moderators accountable for their own rules, visible to all community members. At the AI interaction layer, every operation is logged. When an output causes harm, the governance audit log traces which of Taiuru's four forces — developer, operator, user, or emergent — contributed to the outcome.</p>\n\n<p>This is structural alignment at the deepest level. Taiuru diagnoses the problem. Tractatus prescribes the architecture. Neither is complete without the other.</p>\n\n<p>The gap is equally important. Tractatus assigns accountability within the <em>platform</em> chain. Taiuru's concern extends to the <em>training data</em> chain — who contributed the knowledge that trained the base model, who decided to include it, who benefits from its reproduction. Village uses locally-hosted models and sovereign inference, which addresses the deployment side. The current base models (Qwen 2.5, Apache 2.0 licensed) are open-source, but were trained on internet-scale data that includes mātauranga Māori without specific consent.</p>\n\n<p>We treat this as a research problem, not an accepted limitation. Village's sovereign GPU infrastructure is being used to evaluate alternative base models and fine-tuning approaches — including models with cleaner provenance, smaller models that respond more directly to community-specific fine-tuning, and architectures where the training data chain is more auditable. The upstream consent deficit is real, and the industry has not solved it. 
The architectural choice to host models locally — rather than calling external APIs — means that as better base models become available, communities can adopt them without rebuilding their governance infrastructure.</p>\n\n<h2>Guardian Agents: The Karetao's Accountability Mechanism</h2>\n\n<p>Village's Guardian Agent system is the most direct architectural response to Taiuru's karetao concern. Guardian Agents are a four-phase system that evaluates AI outputs before they reach the community.</p>\n\n<p>What makes them relevant to Taiuru's analysis is not their technical sophistication but their design philosophy. Guardian Agents are deterministic code, not AI. This is a deliberate choice. Taiuru's ātārangi dimension (discussed below) implies that a shadow cannot evaluate a shadow — that governance of AI must not itself be AI. Guardian Agents enforce this structurally. They use rules-based classification, cross-tenant baselines (using only aggregated metrics, never content from other communities), and human-approved thresholds. The system that holds AI accountable is never itself an AI system.</p>\n\n<p>The four phases map to Taiuru's four forces. Evidence gathering makes the <em>user</em> force visible — what patterns of interaction led to this output? Cross-tenant baseline comparison makes the <em>operator</em> force visible — is this community's AI behaving differently from the platform norm? Deterministic classification addresses the <em>emergent</em> force — when outputs are unintended, a rules-based (not AI-generated) classification prevents the karetao from analysing itself. And asymmetric risk assessment addresses the structural bias of the karetao: loosening safety thresholds requires 85% confidence; tightening requires only 60%. The system errs on the side of protection.</p>\n\n<p>Guardian Agents embody what Taiuru's framework implies but does not name: metagovernance — governance of the governance system itself. 
His karetao metaphor reveals that accountability must be located somewhere specific. Guardian Agents locate it at the boundary between AI output and community exposure, and they do so without introducing another AI into the governance chain.</p>\n\n<p>Guardian Agents are also where the greatest research potential lies. The current system uses generalised governance boundaries. The question — which we are actively investigating with dedicated GPU research infrastructure — is whether Guardian Agent fine-tuning can incorporate tikanga-specific constraints. When AI encounters mātauranga Māori, what governance mechanisms should activate beyond the generalised cultural sensitivity that exists today? Taiuru's Tapu/Noa principle and Mead's Tikanga Test suggest specific cultural safety assessments that could become implementable Guardian Agent boundaries. His six Te Tiriti-based AI ethical principles and five Māori AI sovereign principles offer further dimensions against which Guardian Agent behaviour could be evaluated and refined. This is early-stage work, but the architectural foundation — deterministic governance with human authority at every boundary — was designed to accommodate exactly this kind of extension.</p>\n\n<h2>He Ātārangi: The Shadow</h2>\n\n<p>Taiuru's third and deepest dimension observes that AI is constituted entirely by human thought, language, knowledge, and culture. It is a shadow — dependent on the light that falls from elsewhere. In te ao Māori, shadows and reflections carry significance. Traditional Māori understanding held that the reflection of a person in still water carried a portion of their mauri, and that to disturb it carelessly was to act carelessly toward the person.</p>\n\n<p>This understanding is directly relevant to AI systems trained on cultural expressions of Māori. The shadow carries something of the original. Mātauranga Māori embedded within a training dataset does not lose its tikanga by virtue of being reproduced digitally. 
The mauri of that knowledge travels with it.</p>\n\n<p>This is the framework's deepest dimension, and it has no direct equivalent in Western AI governance literature.</p>\n\n<p>Tractatus partially aligns. Its boundaries hold that wisdom cannot be encoded, meaning cannot be computed, and values cannot be automated — all expressions of the principle that the shadow is not the source. AI cannot be an authority on matters requiring cultural judgment. Human authority is preserved at every boundary. These are strong structural positions.</p>\n\n<p>But the alignment is incomplete.</p>\n\n<p>Tractatus treats data as information to be governed through consent, minimisation, and transparency. Taiuru treats data — particularly mātauranga Māori — as carrying mauri: a vital essence with obligations that persist regardless of the consent framework applied. The distinction matters. Tractatus governs data as a <em>right</em>. Taiuru argues that for mātauranga Māori, governance must also recognise data as a <em>relationship</em> — with ongoing obligations to the knowledge itself, not merely to the data subject who contributed it.</p>\n\n<p>Village's architecture partially addresses this through sovereign hosting, tenant isolation, AI memory transparency, and the right to exit with all data. But the Tractatus framework does not yet have a concept of data carrying intrinsic cultural weight that persists beyond the consent relationship.</p>\n\n<h2>Steering Vectors, Polycentric Governance, and Cultural Weight</h2>\n\n<p>The gap identified in the ātārangi dimension — the absence of \"cultural weight\" as a governance concept — may not be a permanent limitation. It may be a research question with an architectural pathway.</p>\n\n<p>Two pieces of existing Tractatus research are directly relevant. 
The first, <a href=\"https://agenticgovernance.digital/docs.html?doc=steering-vectors-and-mechanical-bias-inference-time-debiasing-for-sovereign-small-language-models\">Steering Vectors and Mechanical Bias</a> (STO-RES-0009), demonstrates that sovereign AI deployments — where the platform has full access to model weights and activations — can intervene at the representation level during inference. Steering vectors can adjust how a model processes culturally laden concepts at the embedding layer, before its reasoning capabilities engage. This is architecturally impossible through commercial API endpoints. It is possible only in sovereign deployment, and it is relevant to the ātārangi dimension because it provides a mechanism for shaping how the shadow behaves — at the level where cultural defaults are encoded.</p>\n\n<p>The second paper, <a href=\"https://agenticgovernance.digital/docs.html?doc=taonga-centred-steering-governance-polycentric-ai\">Taonga-Centred Steering Governance</a> (STO-RES-0010), addresses the governance problem that the steering vectors paper leaves unresolved: who decides how to steer? The paper argues that the current three-layer governance model — platform, heritage, tenant — is still fundamentally hierarchical. A tree with a single root. Every steering decision ultimately traces back to the platform operator's definitions. For communities exercising consumer choice within a shared service, this hierarchy is appropriate. For iwi exercising tino rangatiratanga, it structurally subordinates their normativity to the platform's.</p>\n\n<p>The paper proposes a polycentric alternative drawn directly from Elinor Ostrom's governance theory: co-equal steering authorities with distinct jurisdictions, operating over a shared technical substrate rather than within a single constitutional order. 
Some steering domains — including those that encode whakapapa, tikanga, or other domains of Māori cultural authority — are taonga: governed under tikanga, owned by iwi or community institutions, and structurally outside the platform operator's authority to define, modify, or universalise. The paper was published with an explicit notice that it awaits indigenous peer review.</p>\n\n<p>These two papers, read together, suggest a pathway — and as of March 2026, the governance substrate is built. The polycentric steering architecture now enables multiple independent authorities (platform, iwi, community trusts, tenant communities) to publish versioned steering packs that the runtime composer applies at inference time with explicit provenance logging. System prompt additions and cultural boundary sets work with the existing Ollama infrastructure today; governance packs via the SteeringComposer are now operational. Communities can encode their own cultural weight into how AI behaves within their sovereign space — not as prompt-level instructions that degrade under context pressure, but as governed, withdrawable steering artefacts with per-authority provenance.</p>\n\n<p>But the question remains: where does cultural weight <em>enter</em> the system? Steering vectors and polycentric governance describe how cultural weight could <em>propagate</em> through AI behaviour. They do not explain how it is captured in the first place.</p>\n\n<p>The answer, we believe, is polls.</p>\n\n<p>Village's polling system is not a survey tool. It supports phased deliberation (draft, discussion, preliminary vote, final vote), consent-based decision-making on a five-point sociocratic scale (enthusiastic support through to objection), ranked-choice voting, quadratic voting (balancing intensity of preference against breadth of support), and structured argumentation where contributions are categorised as questions, arguments for, arguments against, and suggestions. Every vote is transparent. 
Every phase transition is audited. Facilitators manage the process but cannot override it.</p>\n\n<p>This is democratic infrastructure — the same infrastructure described in <a href=\"https://agenticgovernance.digital/blog-post.html?slug=missing-infrastructure-economic-democracy\">The Missing Infrastructure of Economic Democracy</a>, where we argue that deliberative tools are the prerequisite for democratic control that political theory assumes but nowhere builds. The parallel to indigenous AI governance is direct: just as economic democracy requires communities to have the tools to make collective decisions about production, indigenous AI governance requires communities to have the tools to make collective decisions about how AI behaves within their sovereign space.</p>\n\n<p>The challenge that Taiuru identifies — the monolithic dominance of Big Tech AI, where decisions about data, values, and model behaviour are made by corporations answerable to shareholders rather than communities — is structurally the same challenge that the economic democracy movement identifies in the domain of production. Different cultural traditions. Different vocabularies. The same structural problem: communities governed by systems they do not control.</p>\n\n<p>If polls can capture community values as weighted signals — through the deliberative processes described above — and if those signals can inform steering vector calibration at the inference level, then cultural weight becomes a measurable, governable property of the system. Not an aspiration written in a policy document, but a democratic input that shapes AI behaviour within the community's sovereign space. The architectural components exist: sovereign model hosting, steering vector capability, polycentric governance design, deliberative polling infrastructure. They have not been composed into this configuration. 
But each component is in production, and the interfaces between them are defined.</p>\n\n<p>This does not claim to capture mauri. That would be presumptuous. What it claims is more modest and more testable: that communities can be given the democratic tools to govern how their cultural weight is reflected in the AI systems that serve them — and that those tools can have structural force, not just advisory status.</p>\n\n<h3>Federation: From Village to Network</h3>\n\n<p>Everything described so far operates within a single community. Polls capture cultural weight within one village. Steering vectors shape AI behaviour within one deployment. Guardian Agents enforce boundaries for one community's members. This is necessary but insufficient for the governance challenge Taiuru describes.</p>\n\n<p>Taiuru's tiered sovereignty model recognises governance at five levels: iwi, hapū, marae, rōpū, and whānau. These are not administrative subdivisions of a single hierarchy. Each level is sovereign. Each has its own governance authority, its own tikanga, its own relationship to the knowledge it holds. The question is how they coordinate without one level subordinating another.</p>\n\n<p>This is a federation problem.</p>\n\n<p>Village federation allows sovereign communities to form bilateral relationships — negotiated agreements about what data to share, what decisions to coordinate, what values to hold in common — while each retains full sovereignty over its own governance. A whānau community federates with a hapū community. The hapū community federates with an iwi community. Each governs its own AI behaviour, its own steering vectors, its own Guardian Agent thresholds. Coordination happens through federated agreements, not imposed terms.</p>\n\n<p>The implications for cultural weight are significant. If a whānau community captures its values through deliberative polls, those values govern AI within that community's sovereign space. 
If the whānau federates with a hapū, federated polls could capture values at the hapū level — with each whānau participating through its own democratic process — and those hapū-level values could propagate as steering vectors across the federated network. Each community retains the right to adopt, adapt, or decline federated steering decisions. The taonga-centred governance paper (STO-RES-0010) proposes that iwi-sovereign steering packs — encoding cultural knowledge that belongs to the iwi — can coexist with platform-level steering without hierarchical subordination.</p>\n\n<p>This maps directly to Taiuru's tiered model. Not as a metaphor, but as a federation topology: sovereign communities at each level of the hierarchy, each with its own governance, coordinating through negotiated agreements. The federation protocols are architecturally defined. The deliberative infrastructure (polls, consent-based voting, structured argumentation) exists at the single-community level. The research question is whether they compose — whether federated polls across sovereign communities can capture cultural weight at the iwi, hapū, or pan-Māori level without requiring any community to subordinate its governance to another.</p>\n\n<p>The parallel to the economic democracy argument is again direct. Jason Hickel argues that economic democracy requires coordination across communities without centralisation — that democratic control of production cannot mean one central authority replacing another. Federation is that coordination mechanism. 
For indigenous sovereignty, it means the same architecture that enables a network of cooperatives to allocate shared investment through federated quadratic voting could enable a network of iwi to coordinate AI governance through federated steering decisions — each sovereign, each deliberative, each structurally protected from the centralisation that Taiuru's karetao analysis warns against.</p>\n\n<h2>Te Tiriti Principles and the Architecture</h2>\n\n<p>Taiuru has separately published <a href=\"https://www.taiuru.co.nz/ai-principles/\">six Te Tiriti-based AI ethical principles</a> and <a href=\"https://www.taiuru.co.nz/maori-ai-sovereignty-principles/\">five Māori AI sovereign principles</a>. These extend from his epistemological framework into specific governance requirements. A brief mapping:</p>\n\n<p><strong>Tino Rangatiratanga</strong> (Māori leadership at all levels) maps strongly to Village's three-layer governance and, more significantly, to the polycentric steering architecture proposed in STO-RES-0010 — where iwi governance operates as a co-equal authority, not a downstream consumer of platform decisions. The gap: \"at all levels\" includes the training data level, which Village controls only at deployment.</p>\n\n<p><strong>Mana Motuhake</strong> (autonomous control) is the closest Māori concept to what Village calls digital sovereignty. Tenant-scoped governance, sovereign hosting, no third-party data flows, right to exit — this is Village's core architectural commitment expressed in Taiuru's vocabulary.</p>\n\n<p><strong>Mana Whakahere</strong> (stewardship, data as taonga) aligns strongly with Village's immutable data ownership rights and intergenerational consideration principle. Data is held in trust for the community, not owned by the platform. 
The taonga-centred steering paper extends this by proposing that steering vectors encoding cultural knowledge should themselves be treated as taonga — with kaitiaki governance, constraints on transfer, and intergenerational responsibility.</p>\n\n<p><strong>Active Protection</strong> aligns with Village's per-capability consent architecture (now including explicit training data consent), non-discrimination enforcement, and Guardian Agents' asymmetric risk thresholds. Equity telemetry now monitors AI quality per product type to detect differential outcomes. The remaining gap: whether the metrics monitored are the right ones for Māori communities is a question that requires Māori input.</p>\n\n<p><strong>Equity</strong> — Village provides equitable infrastructure and now measures AI response quality per product type (groundedness, satisfaction, Guardian intervention rates) via the EquityTelemetryService. The measurement infrastructure exists. The gap has narrowed but remains real: the <em>definition</em> of equitable outcomes for Māori communities is a governance question requiring Māori input, not a measurement we can impose.</p>\n\n<p><strong>Tapu/Noa</strong> (culturally safe practices) — Village now implements a CulturalBoundary framework where community-designated authorities define tikanga-specific boundaries (tapu, restricted, contextual, seasonal). Guardian Agents check AI interactions against these boundaries and escalate to cultural authorities with Mead's Tikanga Test as a structured five-question review form. The AI does not judge tapu — it flags and escalates. The architectural framework is built and integrated into the Guardian Agent pipeline; the boundaries themselves must be defined by Māori cultural authorities.</p>\n\n<p>Taiuru's five sovereign principles (data sovereignty, infrastructure control, workforce development, economic reinvestment, innovation investment) map directly to Village's architecture on the first two. Infrastructure and data are sovereign. 
The latter three — workforce development, economic reinvestment, and innovation investment — are beyond what an infrastructure platform can directly address, though Village's open-source framework and founding-rate pricing are structurally aligned with their intent.</p>\n\n<h2>What Both Frameworks Reveal About Each Other</h2>\n\n<p>The value of this mapping is not the list of alignments. It is what each framework reveals about the limitations of the other.</p>\n\n<p>Taiuru's framework reveals that Tractatus — for all its structural rigour — operates within a Western philosophical boundary. It governs data as rights and information. It enforces boundaries through logic and audit. It does not natively account for the possibility that knowledge carries intrinsic obligations that persist beyond any consent framework. The polycentric steering paper (STO-RES-0010) is an attempt to address this boundary, but it was written without Māori review and explicitly acknowledges its limitations.</p>\n\n<p>Tractatus reveals that Taiuru's framework — for all its epistemological depth — faces an implementation gap. The principles are sound. The governance requirements are clear. But without architectural enforcement, they remain aspirational. Policies can be changed. Guidelines can be ignored. Terms of service can be revised. Structural governance — enforced in middleware, auditable in code, immutable at the platform layer — is harder to evade. Taiuru's karetao analysis explains why this matters: the puppet's distributed nature means that policy-level governance will always be circumvented by the interaction of forces the policy cannot anticipate.</p>\n\n<p>Together, the two frameworks suggest something neither achieves alone: governance that is both conceptually grounded and architecturally enforced. Indigenous epistemology providing the <em>why</em>. Structural engineering providing the <em>how</em>. 
Neither sufficient without the other.</p>\n\n<h2>Research Directions</h2>\n\n<p>This mapping is not a completed analysis. It is a starting point. The gaps identified above describe research questions that have not been pursued anywhere, and that the combination of Taiuru's frameworks and Village's architecture may be uniquely positioned to investigate.</p>\n\n<p><strong>Can Village AI and the Tractatus governance framework capture cultural weight at scale?</strong> This is the central research question. If communities can express their values through deliberative processes (polls, consent-based voting, structured argumentation), and if those expressions can inform how AI behaves within their sovereign space (through steering vectors, Guardian Agent boundaries, and polycentric governance), then cultural weight becomes a measurable, governable property of the system — not an aspiration, but an architectural feature. Whether this is achievable, and what it means for the relationship between indigenous epistemology and structural governance, is the question that drives the research forward.</p>\n\n<p><strong>Can Guardian Agents enforce tikanga-aware boundaries?</strong> The architectural capacity now exists: a CulturalBoundary model, a CulturalBoundaryChecker integrated into the Guardian Agent pipeline, and Mead's Tikanga Test as a structured review form for escalated items. What remains is the cultural authority to define the boundaries — which topics are tapu, what handling each requires, who holds the authority to make these determinations. This is work that cannot be done by a platform company alone.</p>\n\n<p><strong>Can polycentric steering implement tiered sovereignty?</strong> Taiuru's five-level sovereignty model (iwi, hapū, marae, rōpū, whānau) maps to the polycentric steering architecture — and as of March 2026, the architectural substrate is built. 
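</p>\n\n<p>One way to picture that substrate is a short sketch. The article names SteeringAuthority, SteeringPack, and SteeringComposer; the fields, withdrawal flag, and composition logic below are hypothetical illustrations under those names, not the production implementation:</p>

```python
# Sketch of inference-time steering-pack composition with provenance logging.
# SteeringAuthority/SteeringPack/SteeringComposer are named in the article;
# every field and this composition logic are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class SteeringPack:
    authority: str            # e.g. "platform", an iwi, a community trust
    version: str
    system_prompt_addition: str
    withdrawn: bool = False   # authorities can withdraw packs at any time

@dataclass
class SteeringComposer:
    provenance_log: list = field(default_factory=list)

    def compose(self, base_prompt: str, packs: list) -> str:
        """Layer each consenting authority's pack onto the platform baseline.
        The platform safety baseline is a floor: packs add, never replace."""
        prompt = base_prompt
        for pack in packs:
            if pack.withdrawn:
                continue  # the platform must cease using withdrawn packs
            prompt += "\n" + pack.system_prompt_addition
            self.provenance_log.append((pack.authority, pack.version))
        return prompt

composer = SteeringComposer()
packs = [
    SteeringPack("iwi-authority", "1.2", "Respect tapu topic boundaries."),
    SteeringPack("tenant", "0.9", "Answer in plain English.", withdrawn=True),
]
final = composer.compose("Platform safety baseline.", packs)
print(composer.provenance_log)  # [('iwi-authority', '1.2')]
```

<p>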
The SteeringAuthority model supports platform, iwi, community trust, and tenant authority types, each with independent jurisdiction over governance domains and product types. Authorities publish versioned steering packs (system prompt additions, cultural boundary sets, activation vectors, or LoRA adapters) that the SteeringComposer composes at inference time with explicit provenance logging. A remote taonga registry client enables externally-hosted registries, and the FederationAgreement model now includes governance coordination for shared authority recognition. Testing this against Taiuru's tiered model — with each level governing its own steering domains — is now an empirical question, not a theoretical one. The infrastructure is ready; what remains is the cultural authority to populate it.</p>\n\n<p>Village has dedicated GPU research infrastructure for this work. These are not theoretical questions. With research-grade compute, models can be fine-tuned on tikanga-compliant datasets, steering vectors can be extracted and calibrated by community-designated authorities, and governance mechanisms can be measured for their actual effect on AI behaviour. The polycentric governance architecture is built and the research questions are defined. What is needed is the cultural authority and scholarly rigour to ensure the work is done correctly.</p>\n\n<h2>The Legal Governance Layer: Kaitiakitanga Licence</h2>\n\n<p>The polycentric governance model described above operates through architectural constraints — steering vectors, Guardian Agents, federated agreements. But architecture alone does not exhaust the governance problem. There is a legal dimension that Taiuru's own practice demonstrates.</p>\n\n<p>Dr Taiuru's website uses a <a href=\"https://www.taiuru.co.nz/disclaimer-copyright/\">dual-licence structure</a>. General content is published under Creative Commons Attribution 4.0 International. 
Content relating to Māori data is published under the <strong>Kaitiakitanga Māori Data Sovereignty Licence 1.1</strong> — a licence originally written by Te Hiku Media and adapted for broader application. The Kaitiakitanga Licence asserts that kaitiakitanga of all data remains with the respective whānau, hapū, marae, iwi, or Māori organisation. Data may be freely used and distributed with the licence attached, but may not be sold or used for commercial purposes.</p>\n\n<p>This is not a footnote. It is a governance mechanism. The dual-licence structure is itself polycentric governance in action — the licence selection is a per-resource governance decision, not a blanket policy. Different content carries different obligations depending on what it contains and whom it belongs to.</p>\n\n<p>The Kaitiakitanga Licence addresses a gap that no amount of architectural enforcement can fill. Steering vectors shape model behaviour. Guardian Agents enforce boundaries at inference time. Federation protocols coordinate governance across sovereign communities. But none of these mechanisms assert legal obligations that survive copying. When data leaves the system — exported, federated, or simply read and reproduced — architectural constraints cease to apply. Legal obligations persist.</p>\n\n<p>For the Tractatus framework, this suggests a missing layer. The current model enforces governance through three mechanisms: immutable platform principles (code), tenant-level governance (configuration), and community deliberation (polls). A fourth mechanism — legal governance instruments like the Kaitiakitanga Licence that attach obligations to data itself — would complete the model. Data carrying a Kaitiakitanga Licence would be architecturally flagged, its obligations enforced within the platform and legally asserted beyond it.</p>\n\n<p>This is tangible and implementable. 
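</p>\n\n<p>A minimal sketch of what that implementation could look like. The article states (in the Gaps section) that content models carry licence fields and that the training pipeline blocks Kaitiakitanga-licensed sources; the class, identifiers, and licence slugs below are hypothetical:</p>

```python
# Sketch of licence-aware training gating. The article states that content
# models carry licence fields and that the Kaitiakitanga Licence blocks
# training; class, function, and slug names here are hypothetical.
from dataclasses import dataclass

KAITIAKITANGA = "kaitiakitanga-maori-data-sovereignty-1.1"
CC_BY_4 = "cc-by-4.0"

@dataclass
class SourceDocument:
    doc_id: str
    tenant_id: str
    licence: str  # the licence travels with the data itself

def eligible_for_training(doc: SourceDocument) -> bool:
    """Check the source licence before generating training candidates.
    Kaitiakitanga-licensed data never enters the training pipeline."""
    return doc.licence != KAITIAKITANGA

docs = [
    SourceDocument("d1", "marae-a", CC_BY_4),
    SourceDocument("d2", "marae-a", KAITIAKITANGA),
]
print([d.doc_id for d in docs if eligible_for_training(d)])  # ['d1']
```

<p>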
The architectural infrastructure exists — tenant-scoped metadata, Guardian Agent classification, federated data agreements. What is needed is the legal framework: licences that express indigenous data sovereignty in terms that both the architecture and the legal system can enforce. The Kaitiakitanga Licence is the most developed example of such an instrument. Integrating it — not as a policy statement but as a first-class governance layer — would give the polycentric model the legal teeth that architectural enforcement alone cannot provide.</p>\n\n<h2>The Economics of Sovereign AI: Training, Inference, and Value</h2>\n\n<p>The economic dimension of indigenous AI is not about subscription pricing. It is embedded in how models are trained, deployed, and governed. The training pipeline itself is an economic system: knowledge flows in from communities, capability flows out as model improvement, and the question of who captures value is architecturally determined.</p>\n\n<h3>The Training Pipeline as Economic Flow</h3>\n\n<p>The Village SLL uses a three-tier training architecture. Tier 1 is platform-level: general capability shared across all deployments. Tier 2 is product-type-level: behaviour shaped for specific community types (parish, whānau, conservation group). Tier 3 is tenant-level: fine-tuning from a single community's feedback and content. Each tier has different knowledge contributors, different ownership implications, and different economic relationships. The TrainingCandidate model records tenantId and tier for every training pair — the provenance chain exists. But provenance without benefit-sharing is documentation, not governance.</p>\n\n<h3>The Feedback Loop as Value Creation</h3>\n\n<p>When a community member gives a thumbs-down on an AI response, that feedback triggers FeedbackInvestigator, which analyses the failure, and TrainingPairGenerator, which creates a corrected training pair. The model improves. This is labour. 
It is skilled labour — the member is applying their contextual knowledge, their cultural understanding, their judgement about what constitutes a good answer for their community. It is no longer invisible — training contribution dashboards show communities how their feedback has contributed, consent is explicit and three-layered, and moderators can approve or reject individual training candidates. It remains uncompensated — communities have supervisory control but no benefit-sharing mechanism.</p>\n\n<p>An honest comparison: this is less extractive than the Big Tech default. The improved model stays within the community's sovereign deployment, not uploaded to a central service that monetises the improvement across all users. There is no advertising revenue generated from the interaction. The structural asymmetry between Village and its members is smaller than between Google and its users. But \"less extractive\" is not \"equitable.\" The labour creates value. The value accrues to the platform. The member receives a better model — but no share in the value that model creates beyond their own community.</p>\n\n<h3>Sovereign Inference as Economic Sovereignty</h3>\n\n<p>Self-hosted GPU inference means no API revenue flows to external providers. Every question answered by the Village SLL is answered on infrastructure owned and operated by the platform — not routed to OpenAI, Google, or Anthropic. The platform operates its own GPU infrastructure, shared across all communities — no individual community bears infrastructure costs. Each community pays only their Village subscription. Cloud API alternatives would be cheaper per query, but every query would leave sovereign infrastructure and flow to foreign corporate servers where it may be logged, analysed, or used to improve models owned by others. Self-hosted inference keeps all queries — and the patterns they reveal about community concerns, cultural questions, and governance needs — entirely within sovereign infrastructure. 
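</p>\n\n<p>The amortisation argument can be sketched in a few lines of arithmetic. All figures here are hypothetical illustrations — the article publishes no costs or pricing — and the names are invented:</p>

```python
# Back-of-envelope sketch of shared sovereign-inference economics.
# All figures are hypothetical assumptions, not published Village pricing.
GPU_MONTHLY_COST = 1200.0       # assumed fixed cost of one GPU server
QUERIES_PER_COMMUNITY = 20_000  # assumed monthly query volume per community

def per_query_cost(n_communities: int) -> float:
    """Fixed infrastructure cost amortised across every query served."""
    return GPU_MONTHLY_COST / (n_communities * QUERIES_PER_COMMUNITY)

# One community bears the full fixed cost; a dozen communities share it.
print(f"{per_query_cost(1):.4f}")   # 0.0600
print(f"{per_query_cost(12):.4f}")  # 0.0050
```

<p>On these assumed numbers, per-query cost falls from six cents to half a cent once twelve communities share one server — the direction of the argument, not its actual magnitudes.</p>\n\n<p>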
As the platform scales to serve dozens of communities on shared GPU infrastructure, the per-community cost of sovereign inference approaches commodity API pricing while maintaining absolute data sovereignty.</p>\n\n<p>Every inference is an act of data sovereignty. Intelligence serves the community without external extraction. For indigenous communities, this has a specific meaning: the questions members ask — about their tikanga, their whakapapa, their governance — do not become training data for Silicon Valley models. The economic case strengthens with multi-tenant deployment, where a single GPU serves a dozen communities simultaneously — but the sovereignty argument holds regardless of cost. The epistemic case is absolute from day one.</p>\n\n<h3>Economic Adjustment Mechanisms</h3>\n\n<p>Two mechanisms exist for adjusting the economic relationship between the platform and indigenous or nonprofit communities, each deliberately avoiding the institutional patterns that reproduce classificatory power.</p>\n\n<p>The first is the founder's discretionary discount: the personal authority of the platform founder to subsidise access for specific communities without creating a bureaucratic \"charity tier.\" This avoids the platform defining \"indigenous community\" — which is itself an act of classificatory power that Taiuru's framework would rightly challenge. A person makes a judgement. The judgement is transparent (documented in the tenant record) but not algorithmic.</p>\n\n<p>The second is the koha model: voluntary contribution based on reciprocal giving, with transparent allocation (40% development, 30% infrastructure, 20% research, 10% operations). Koha is neither a market transaction nor charity. It is a third economic path — one that Māori economic thought has practised for centuries and that Western platform economics has no category for. 
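</p>\n\n<p>The stated allocation can be expressed directly as a minimal sketch (the function name is hypothetical; the percentages are those given above):</p>

```python
# The koha allocation stated in the article: 40% development, 30% infrastructure,
# 20% research, 10% operations. Function and variable names are illustrative.
ALLOCATION = {
    "development": 0.40,
    "infrastructure": 0.30,
    "research": 0.20,
    "operations": 0.10,
}

def allocate_koha(amount: float) -> dict[str, float]:
    """Split a voluntary contribution across the published categories."""
    assert abs(sum(ALLOCATION.values()) - 1.0) < 1e-9  # shares must total 100%
    return {category: round(amount * share, 2)
            for category, share in ALLOCATION.items()}

print(allocate_koha(250.0))
# {'development': 100.0, 'infrastructure': 75.0, 'research': 50.0, 'operations': 25.0}
```

<p>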
The platform publishes where koha goes. The contributor decides what, if anything, to give. The relationship is reciprocal, not transactional.</p>\n\n<h3>The Polycentric Steering Economy</h3>\n\n<p>The most significant economic possibility is architectural, not yet implemented. If taonga steering registries are built as proposed in STO-RES-0010, the economic relationship between platform and community inverts. Communities become <em>producers</em> of governance assets — steering vectors encoding cultural knowledge, tikanga boundaries, deliberative outcomes — not just consumers of AI services. These steering vectors are community-owned cultural artefacts with economic value: they make the model more capable, more culturally appropriate, more governable.</p>\n\n<p>In this model, the platform needs community governance as much as communities need platform infrastructure. The asymmetry that characterises every current AI platform — provider has power, user has need — is structurally disrupted. This is a direction, not an achievement. STO-RES-0010 is a research paper, not a deployed system. But the economic architecture it implies — communities as producers, governance as value, cultural knowledge as infrastructure — is the most distinctive economic proposition in the Village model.</p>\n\n<h3>What Remains Genuinely Missing</h3>\n\n<p>The economic architecture has structural properties that most AI platforms lack entirely. But honesty requires naming what is not built:</p>\n\n<ul>\n<li><strong>Training data attribution and compensation</strong> — provenance exists (tenantId and tier are recorded), but benefit-sharing does not. A community whose feedback improves the model has no mechanism to share in the value that improvement creates for other tenants.</li>\n<li><strong>Community control of the training pipeline</strong> — when to retrain, what data to include, and how to weight contributions are currently platform-operator decisions. 
Communities consent to AI use but do not govern the training process.</li>\n<li><strong>Workforce development</strong> — sovereign infrastructure enables but does not provide. Open-source code can be studied and modified, but infrastructure is not education. The gap between \"you could build on this\" and \"you have the skills to build on this\" is real.</li>\n<li><strong>Revenue sharing from model derivatives</strong> — if a Tier 3 fine-tuned model incorporates knowledge from community members, and that knowledge pattern propagates to improve Tier 2 product-type models, there is no mechanism for the originating community to benefit from the broader improvement.</li>\n</ul>\n\n<h2>Gaps</h2>\n\n<p>This article cross-references Dr Taiuru's published frameworks with his permission. We do not claim that Village implements his framework. We claim that our architecture is <em>consistent with</em> several of its governance requirements, and we have identified areas where it falls short:</p>\n\n<ol>\n<li>The mauri of knowledge — data governed as right, not yet as relationship. The polycentric steering architecture is a proposed pathway, not a delivered solution.</li>\n<li>Training data consent — the upstream base model consent deficit remains an industry-wide problem (all foundation models train on internet data without individual consent). The downstream consent deficit has been addressed: Village implements a three-layer consent gate (tenant opt-in → member consent → content-level) ensuring no member feedback enters the training pipeline without explicit permission at every layer. <em>Partially addressed — 2026-03-17.</em></li>\n<li>Tikanga-specific cultural safety — Village now implements a CulturalBoundary model where community-designated authorities define boundaries (tapu, restricted, contextual, seasonal). 
Guardian Agents check AI interactions against these boundaries and escalate to cultural authorities with Mead's Tikanga Test as a structured five-question review form. The framework is built; the boundaries must be defined by Māori cultural authorities. <em>Partially addressed — 2026-03-17.</em></li>\n<li>Active equity monitoring — Village now measures AI response quality per product type (groundedness, satisfaction, Guardian intervention rates) via the EquityTelemetryService. The measurement infrastructure exists; the definition of equitable outcomes for Māori communities requires Māori input. <em>Partially addressed — 2026-03-17.</em></li>\n<li>The hierarchical governance critique — the current three-layer model has been extended with a polycentric governance layer. SteeringAuthority, SteeringPack, and SteeringProvenance models enable independent governance bodies (platform, iwi, community trusts) to publish steering packs — system prompt additions, cultural boundary sets, activation vectors, or LoRA adapters — that shape AI inference for consenting tenants. A SteeringComposer composes multiple packs at inference time with explicit provenance logging. Authorities can withdraw packs at any time and the platform must cease using them. The platform safety baseline is a floor, not a ceiling. API endpoints, a remote taonga registry client, and transparency UI are built. What remains: activation-level steering in production (requires GPU deployment) and the tiered sovereignty topology (requires Māori input on which institutions to recognise as steering authorities). <em>Phases 0-3 implemented — 2026-03-17. Phases 4-5 in progress.</em></li>\n<li>Economics of indigenous AI — sovereign deployment retains inference value locally and the training pipeline's tier structure creates provenance. Supervisory infrastructure has been built: tenant moderators can view training contributions and approve or reject individual training candidates. 
But supervisory control is not governance — benefit-sharing mechanisms, community control of the training process, and workforce development remain unbuilt. The polycentric steering architecture (STO-RES-0010) proposes an inversion — communities as producers of governance assets, not just consumers of AI services — but this is a direction, not an achievement. <em>Partially addressed — supervisory infrastructure built 2026-03-17.</em></li>\n<li>Legal governance layer — Village has built licence governance infrastructure: content models carry licence fields supporting the Kaitiakitanga Māori Data Sovereignty Licence 1.1, and the training pipeline checks source document licences before generating training candidates (Kaitiakitanga Licence blocks training). Export metadata and response-level licence awareness remain outstanding. <em>Partially addressed — training pipeline integrated 2026-03-17.</em></li>\n</ol>\n\n<p>We have invited Dr Taiuru to review this mapping and welcome his corrections.</p>\n\n<hr>\n\n<p><em>Dr Karaitiana Taiuru (Ngāi Tahu, Ngāti Kahungunu, Ngāti Toa) is Aotearoa New Zealand's leading Māori technology ethicist. He is Deputy Chair of ACART, Chair of the Māori Advisory Group at Te Puni Kōkiri, and Chair of Kāhui Māori at the AI Forum NZ. His frameworks — including <a href=\"https://www.taiuru.co.nz/kaupapa-maori-ai-framework/\">He Tangata, He Karetao, He Ātārangi</a>, the <a href=\"https://www.taiuru.co.nz/ai-principles/\">6 Te Tiriti-Based AI Ethical Principles</a>, and the <a href=\"https://www.taiuru.co.nz/maori-ai-sovereignty-principles/\">Māori AI Sovereign Principles</a> — are the most developed indigenous AI governance frameworks globally.</em></p>\n\n<p><em>The Tractatus AI Safety Framework is published as open source under Apache 2.0 at <a href=\"https://agenticgovernance.digital\">agenticgovernance.digital</a>. The Village platform is built by My Digital Sovereignty Ltd, registered in Aotearoa New Zealand. 
The research papers referenced — <a href=\"https://agenticgovernance.digital/docs.html?doc=steering-vectors-and-mechanical-bias-inference-time-debiasing-for-sovereign-small-language-models\">Steering Vectors and Mechanical Bias</a> (STO-RES-0009) and <a href=\"https://agenticgovernance.digital/docs.html?doc=taonga-centred-steering-governance-polycentric-ai\">Taonga-Centred Steering Governance</a> (STO-RES-0010) — are available under CC BY 4.0.</em></p>\n\n<hr>\n\n<p><em>This article is published under the <a href=\"https://creativecommons.org/licenses/by/4.0/\">Creative Commons Attribution 4.0 International Licence</a> (CC BY 4.0). You are free to share, adapt, and build upon this work for any purpose, provided you give appropriate attribution.</em></p>\n\n","excerpt":"A detailed mapping of Dr Karaitiana Taiuru's Kaupapa Māori AI Framework against the Tractatus governance architecture and Village platform. Five structural alignments, five honest gaps, and what both frameworks reveal about the limitations of the other.","featured_image":null,"category":"governance","status":"published","moderation":{"human_reviewer":"John Stroh","approved_at":"2026-03-16T00:49:13.380Z"},"tractatus_classification":{"quadrant":"STRATEGIC","values_sensitive":true,"requires_strategic_review":false},"published_at":"2026-03-16T00:49:13.380Z","tags":["kaupapa-maori","taiuru","indigenous-governance","te-tiriti","mana-motuhake","data-sovereignty","guardian-agents","accountability","mauri","tikanga","metagovernance"],"view_count":127,"engagement":{"shares":0,"comments":0},"presentation":{"enabled":true,"slides":[{"heading":"Two Frameworks, Two Traditions, One Problem","bullets":["Taiuru's He Tangata, He Karetao, He Ātārangi — a Kaupapa Māori framework for understanding what AI is","Tractatus — an architectural framework for governing what AI does","One is epistemological (conceptual vocabulary), the other is operational (technical machinery)","The question: are Tractatus architectural 
decisions consistent with the governance requirements that follow from Taiuru's analysis?","This mapping was conducted with Dr Taiuru's knowledge and permission"],"notes":"Set the frame: these two frameworks operate at different ontological levels. Taiuru provides the 'why' — what AI is and why it matters. Tractatus provides the 'how' — structural enforcement. Neither claims to contain the other. The mapping asks where they converge, diverge, and what research questions emerge."},{"heading":"He Tangata: The Person","bullets":["AI presents as a person — communicates in natural language, resembles empathy and reasoning","In te ao Māori, tangata is constituted through whakapapa, relationships, obligations — AI meets none of these","Tractatus addresses this with six non-negotiable boundaries requiring human judgment","AI transparency is an immutable right — members always see what AI knows about them","Gap: Tractatus treats it as a transparency problem; Taiuru treats it as a deeper relational problem"],"notes":"The convergence is strong but has a limit. Tractatus prevents AI from claiming authority. Taiuru's insight is that no amount of transparency changes the fundamental relational deficit — the obligations of personhood in te ao Māori cannot be met by a machine. This is not a technical problem solvable by better disclosure."},{"heading":"He Karetao: The Puppet","bullets":["AI is animated by four forces: developers, operators, users, and emergent interactions","This is not a defect — it is the design. Distributed nature makes accountability easy to evade","Under tikanga Māori, obligations attach to defined relationships. The karetao diffuses them","The entire Tractatus architecture can be read as solving the karetao problem","Three-layer governance assigns accountability at platform, heritage, and tenant layers","Gap: accountability within the platform chain, but not the training data chain"],"notes":"Taiuru diagnoses the problem. 
Tractatus prescribes the architecture. The three-layer model (26 immutable principles, open-source heritage layer, tenant governance) locates accountability at each level. Every AI operation is logged — when harm occurs, audit traces which of the four forces contributed. The unresolved gap: who contributed the training data, who decided to include it, who benefits."},{"heading":"Guardian Agents: The Karetao's Accountability Mechanism","bullets":["Four-phase system evaluating AI outputs before they reach the community","Key design choice: Guardian Agents are deterministic code, NOT AI","A shadow cannot evaluate a shadow — governance of AI must not itself be AI","Four phases map to Taiuru's four forces (user, operator, emergent, structural bias)","Asymmetric risk: loosening safety requires 85% confidence; tightening only 60%","Research potential: tikanga-specific Guardian Agent boundaries using Tapu/Noa principle"],"notes":"Guardian Agents embody metagovernance — governance of the governance system itself. The asymmetric threshold is key: the system structurally errs on the side of protection. Current boundaries are generalised. The research question is whether Guardian Agents can incorporate tikanga-specific constraints — Taiuru's Tapu/Noa principle and Mead's Tikanga Test suggest specific boundaries that could become implementable."},{"heading":"He Ātārangi: The Shadow","bullets":["AI is constituted entirely by human thought, language, knowledge, culture — a shadow","In te ao Māori, shadows carry significance — the reflection in still water carries mauri","Mātauranga Māori in a training dataset does not lose its tikanga by being digital","No direct equivalent in Western AI governance literature","Tractatus partially aligns: wisdom cannot be encoded, values cannot be automated","Gap: Tractatus governs data as a right; Taiuru argues it must also be governed as a relationship"],"notes":"This is the deepest dimension. The mauri of knowledge travels with it. 
Tractatus treats data as information governed through consent and transparency. Taiuru treats mātauranga Māori as carrying a vital essence with obligations that persist regardless of consent framework. Village partially addresses this through sovereign hosting and data portability, but lacks the concept of data carrying intrinsic cultural weight beyond the consent relationship."},{"heading":"Steering Vectors, Polycentric Governance, and Cultural Weight","bullets":["Sovereign AI deployment enables intervention at the representation level during inference","Steering vectors adjust how models process culturally laden concepts at the embedding layer — architecturally impossible through commercial API endpoints","Polycentric governance substrate is built (Phases 0-3): SteeringAuthority, SteeringPack, SteeringProvenance, SteeringComposer","Multiple independent authorities (platform, iwi, community trusts) publish versioned steering packs — system prompts, cultural boundaries, activation vectors, LoRA adapters","SteeringComposer composes packs at inference time with explicit provenance logging — authorities can withdraw packs at any time","Platform safety baseline is a floor, not a ceiling — communities add governance, they don't replace it","Next phases: activation-level steering (GPU deployment) and tiered sovereignty topology (requires Māori input)"],"notes":"Key update: the polycentric governance architecture is no longer theoretical — Phases 0-3 are built and deployed. SteeringAuthority model supports platform, iwi, community trust, and tenant authority types. Authorities publish versioned steering packs that the SteeringComposer composes at inference time. System prompt additions and cultural boundary sets work with existing Ollama infrastructure today. Activation-level vector injection via Python inference server is next. The question has shifted from 'can this be built?' to 'who populates it?' 
— the cultural authority to define steering domains is what remains."},{"heading":"Polls as Democratic Capture Mechanism","bullets":["Village polls support phased deliberation, consent-based voting, ranked choice, quadratic voting","If polls capture community values as weighted signals, and signals inform steering vectors...","Then cultural weight becomes a measurable, governable property of the system","Not an aspiration in a policy document — a democratic input with structural force","Each architectural component exists in production; they have not yet been composed","This does not claim to capture mauri — it claims communities can govern how cultural weight is reflected"],"notes":"The deliberative infrastructure is the missing link between cultural weight as a concept and cultural weight as a system property. Every vote transparent, every phase transition audited. Facilitators manage process but cannot override it. The claim is modest and testable: communities can be given democratic tools to govern how their cultural weight is reflected in AI — with structural force, not just advisory status."},{"heading":"Federation as Tiered Sovereignty","bullets":["Taiuru's five-level sovereignty: iwi, hapū, marae, rōpū, whānau — each sovereign, not subdivisions","Village federation: bilateral agreements, shared coordination, full sovereignty retained","Federated polls could capture values at hapū/iwi level across sovereign communities","Iwi-sovereign steering packs coexist with platform steering without subordination","Same structural challenge as economic democracy: coordination without centralisation","Research question: do federated polls compose across sovereignty levels?"],"notes":"Federation maps directly to Taiuru's tiered model — not as a metaphor but as a topology. Sovereign communities at each level, coordinating through negotiated agreements. The federation protocols are architecturally defined. 
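The coexistence described above, where platform and iwi-sovereign steering packs compose without subordination, can be sketched roughly as follows. This is a hypothetical illustration: the class names echo SteeringPack and SteeringComposer from the architecture, but every field, signature, and composition rule here is an assumption, not the actual implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of inference-time steering-pack composition with
# provenance logging. Field names and composition order are assumptions,
# not the real SteeringComposer.

@dataclass
class SteeringPack:
    authority: str            # e.g. "platform", "iwi", "community-trust"
    version: str
    prompt_additions: list[str]
    withdrawn: bool = False   # authorities can withdraw packs at any time

@dataclass
class ComposedSteering:
    system_prompt: str
    provenance: list[str] = field(default_factory=list)

def compose(base_prompt: str, packs: list[SteeringPack]) -> ComposedSteering:
    """Compose active packs onto a base prompt, logging each contribution."""
    result = ComposedSteering(system_prompt=base_prompt)
    for pack in packs:
        if pack.withdrawn:
            continue  # withdrawn packs are skipped, never silently retained
        result.system_prompt += "\n" + "\n".join(pack.prompt_additions)
        # every contribution is recorded with its source authority and version
        result.provenance.append(f"{pack.authority}@{pack.version}")
    return result
```

The key property the sketch illustrates is that no pack replaces another: each active authority's additions accumulate on top of the platform baseline, and the provenance log records exactly which authorities shaped the final prompt.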
The question is whether federated deliberation across sovereign communities can capture cultural weight at iwi level without requiring any community to subordinate its governance."},{"heading":"The Legal Governance Layer: Kaitiakitanga Licence","bullets":["Taiuru's site uses dual-licence: CC BY 4.0 for general content, Kaitiakitanga for Māori data","Kaitiakitanga Licence: data may be used and distributed but not sold or used commercially","This is a governance mechanism — per-resource licence selection is polycentric governance in action","Architectural constraints cease when data leaves the system; legal obligations persist","Suggests a missing layer: legal governance instruments alongside architectural enforcement","Tangible and implementable — the architectural infrastructure already exists"],"notes":"The Kaitiakitanga Licence (originally written by Te Hiku Media) fills a gap that no architecture can. Steering vectors, Guardian Agents, and federation all govern data within the system. When data is exported, federated, or reproduced, architectural constraints no longer apply. Legal instruments — licences that assert obligations travelling with the data itself — provide governance beyond the platform boundary. Integrating this as a first-class layer gives the polycentric model legal teeth."},{"heading":"The Economics of Sovereign AI: Training, Inference, and Value","bullets":["Three-tier training pipeline (platform → product-type → tenant) creates three economic relationships — provenance exists but has no economic consequence","Feedback loop as labour: member thumbs-down → FeedbackInvestigator → TrainingPairGenerator → model improvement. Less extractive than Big Tech, but uncompensated","Sovereign inference as data sovereignty: self-hosted GPU at ~$1,000/month (shared across tenants) vs $20-50/month cloud API — costs more at small scale, but queries never leave sovereign infrastructure. 
Economics improve with multi-tenant deployment; sovereignty argument holds regardless of cost","Economic adjustment: founder's discretionary discount (avoids classificatory power) and koha model (reciprocal giving, not transaction or charity)","Polycentric steering economy: if taonga registries are built, communities become PRODUCERS of governance assets — the economic relationship inverts","Genuinely missing: training data compensation, community control of training pipeline, workforce development, revenue sharing from model derivatives"],"notes":"The economics are not about pricing — they are embedded in HOW models are trained, deployed, and governed. The training pipeline is itself an economic system. The strongest point is sovereign inference eliminating external data extraction. The cost argument is honest: self-hosted costs more at small scale, but the data sovereignty argument is absolute. Economics improve with multi-tenant scale. The weakest point is the feedback loop as invisible labour. The most promising direction is polycentric steering as producer economics. Honest about what remains unbuilt: benefit-sharing mechanisms, community control of training, workforce development."},{"heading":"Te Tiriti Principles Mapping","bullets":["Tino Rangatiratanga — maps to polycentric steering; gap at training data level","Mana Motuhake — closest to Village's \"digital sovereignty\" commitment","Mana Whakahere — data as taonga, steering vectors as taonga with kaitiaki governance","Active Protection — Guardian Agents and asymmetric risk; gap in population-specific monitoring","Equity — equitable infrastructure, not yet equitable outcomes measurement","Tapu/Noa — generalised sensitivity exists; tikanga-specific Mead's Tikanga Test does not"],"notes":"Six Te Tiriti-based principles and five Māori AI sovereign principles mapped against the architecture. Strong alignment on infrastructure sovereignty and data governance. 
Real gaps on equity measurement, tikanga-specific cultural safety, and workforce development. The five sovereign principles (data sovereignty, infrastructure control, workforce development, economic reinvestment, innovation investment) — Village addresses the first two architecturally; the latter three are beyond infrastructure scope."},{"heading":"What Both Frameworks Reveal About Each Other","bullets":["Taiuru reveals Tractatus operates within a Western philosophical boundary","Tractatus governs data as rights and information — not as carrying mauri","Tractatus reveals Taiuru's framework faces an implementation gap","Without architectural enforcement, governance principles remain aspirational","Together: governance that is both conceptually grounded and architecturally enforced","Indigenous epistemology providing the why. Structural engineering providing the how"],"notes":"This is the central insight of the mapping. Neither framework is complete alone. Taiuru's epistemological depth without architectural enforcement leaves governance aspirational — policies can be changed, guidelines ignored. Tractatus's structural rigour without indigenous epistemology leaves governance culturally impoverished — enforcing rules without understanding why they matter. The combination is what neither achieves alone."},{"heading":"Research Directions","bullets":["Can Village AI capture cultural weight at scale through deliberative processes?","Can Guardian Agents enforce tikanga-aware boundaries? CulturalBoundary model + CulturalBoundaryChecker + Mead's Tikanga Test are built — boundaries need Māori cultural authority to define","Can polycentric steering implement tiered sovereignty? 
The architectural substrate is built (Phases 0-3) — testing against Taiuru's five-level model is now empirical, not theoretical","Dedicated GPU research infrastructure (RTX A6000 48GB, CUDA-optimised) exists for empirical testing","The infrastructure is ready; what remains is the cultural authority to populate it","What is needed: partnership with indigenous scholars — the platform cannot define what it was built to protect"],"notes":"The framing has shifted from 'can this be built?' to 'who populates it?' Three research questions remain, but each now has architectural substrate in production. CulturalBoundary model and CulturalBoundaryChecker are integrated into Guardian Agent pipeline. Polycentric steering (SteeringAuthority, SteeringPack, SteeringComposer) is built. The research questions are about calibration and cultural authority, not architecture. GPU infrastructure (RTX A6000 48GB, CUDA) is available for empirical testing."},{"heading":"Gaps and Invitation","bullets":["Mauri of knowledge — data governed as right, not yet as relationship. Polycentric steering is a proposed pathway","Training data consent — upstream (Llama/Meta) remains industry-wide. Downstream addressed: three-layer consent gate (tenant → member → content). Partially addressed","Tikanga-specific cultural safety — CulturalBoundary model, CulturalBoundaryChecker, Mead's Tikanga Test built and integrated. Boundaries must be defined by Māori authorities. Partially addressed","Active equity monitoring — EquityTelemetryService measures AI quality per product type. Definition of equitable outcomes requires Māori input. Partially addressed","Hierarchical governance — polycentric governance Phases 0-3 built: SteeringAuthority, SteeringPack, SteeringComposer in production. Phase 4-5 (activation steering, tiered topology) in progress","Economics — sovereign inference retains value locally; supervisory infrastructure built. 
Benefit-sharing, community training control, workforce development remain unbuilt","Legal governance — Kaitiakitanga Licence integrated into training pipeline (blocks training). Export metadata and response-level awareness outstanding","We have invited Dr Taiuru to review this mapping and welcome his corrections"],"notes":"Seven gaps, each now with specific status. Three themes: (1) Architectural substrate is built for polycentric governance, cultural boundaries, and equity telemetry — what remains is cultural authority to populate them. (2) Training pipeline has provenance and supervisory control but not governance or benefit-sharing. (3) Legal governance is partially integrated — Kaitiakitanga Licence blocks training pairs from protected content, but broader licence awareness is outstanding. The honest position: less extractive than Big Tech, more structurally accountable than any comparable platform, but not yet governed by those it serves."}]},"updatedAt":"2026-03-24T05:28:10.175Z"},{"_id":{"buffer":{"0":105,"1":183,"2":32,"3":235,"4":5,"5":150,"6":178,"7":5,"8":8,"9":244,"10":14,"11":241}},"title":"Your Community, Your AI — A Free Educational Series on AI Governance","slug":"your-community-your-ai-educational-series","author":{"type":"human","name":"John Stroh"},"content":"<p><strong>Nine editions. Five languages. One question: who governs the AI in your community?</strong></p>\n\n<hr>\n\n<p>Most communities are already using artificial intelligence &mdash; through their email provider, their messaging platform, their cloud storage, their social media presence. But almost none have made a deliberate decision about it. The AI was chosen for them, by companies whose interests are not aligned with theirs.</p>\n\n<p>&ldquo;Your Community, Your AI&rdquo; is a free, open-access article series that gives community leaders enough understanding to change that. 
Five articles, written nine different ways &mdash; each edition adapted to the vocabulary, concerns, and frame of reference of a specific audience:</p>\n\n<p><strong>Parish</strong> &mdash; for vestry members and churchwardens navigating digital stewardship.<br>\n<strong>Community</strong> &mdash; for clubs, schools, and neighbourhood groups who organise local life.<br>\n<strong>Family</strong> &mdash; for families preserving stories, photographs, and heritage across generations.<br>\n<strong>Business</strong> &mdash; for owner-operators and cooperatives who need AI without Big Tech lock-in.<br>\n<strong>Conservation</strong> &mdash; for environmental groups handling sensitive ecological data.<br>\n<strong>Indigenous</strong> &mdash; for iwi, hap&#363;, and wh&#257;nau, grounded in tino rangatiratanga and the CARE Principles.<br>\n<strong>Leadership</strong> &mdash; for trustees and board members evaluating AI adoption as a fiduciary responsibility.<br>\n<strong>Academia</strong> &mdash; for governance scholars studying community technology and platform cooperativism.<br>\n<strong>AI Research</strong> &mdash; for engineers and safety researchers who want to see community-governed deployment in practice.</p>\n\n<p>Every edition covers the same five questions: What is AI, really? How does commercial AI differ from community-governed AI? Why does governance matter? What is already running in production? And what does a complete sovereign community platform look like beyond AI?</p>\n\n<p>All articles are available in English, German, French, Dutch, and te reo M&#257;ori. Published under Creative Commons Attribution 4.0 &mdash; free to share, print, adapt, and discuss at your next committee meeting, vestry, or board session.</p>\n\n<p>The series draws on production experience with the Village platform, which has been running AI systems for community organisations since October 2025. 
The governance architecture described in the articles &mdash; including independent mathematical verification of AI outputs and structural boundary enforcement &mdash; is implemented in the open-source Tractatus framework, published under Apache 2.0.</p>\n\n<p>No jargon. No hype. No login required.</p>\n\n<p><strong>Read the series:</strong> <a href=\"https://mysovereignty.digital/ai-articles\">mysovereignty.digital/ai-articles</a><br>\n<strong>Tractatus framework:</strong> <a href=\"https://agenticgovernance.digital\">agenticgovernance.digital</a></p>\n\n<hr>\n\n<p><em>My Digital Sovereignty Ltd builds sovereign digital infrastructure for communities. Registered in Aotearoa New Zealand, infrastructure in the European Union and New Zealand. No data flows to Google, Meta, or any third-party advertising or training system.</em></p>","excerpt":"Nine audience-specific editions in five languages. A practical resource for parishes, families, businesses, indigenous communities, conservation groups, and researchers — explaining what AI is, who governs it, and why that matters for your community.","featured_image":null,"category":"announcement","status":"published","moderation":{"human_reviewer":"John Stroh","approved_at":"2026-03-15T21:12:31.920Z"},"tractatus_classification":{"quadrant":"OPERATIONAL","values_sensitive":false,"requires_strategic_review":false},"published_at":"2026-03-15T21:12:31.920Z","tags":["village","ai-governance","education","cc-by-4","community","parish","indigenous","conservation","open-access","multilingual"],"view_count":8,"engagement":{"shares":0,"comments":0}},{"_id":{"buffer":{"0":105,"1":183,"2":21,"3":251,"4":195,"5":76,"6":75,"7":64,"8":183,"9":75,"10":164,"11":246}},"title":"The Missing Infrastructure of Economic Democracy","slug":"missing-infrastructure-economic-democracy","author":{"type":"human","name":"John Stroh"},"content":"<p><em>How sovereign community platforms could provide the democratic substrate that political theory assumes but 
nowhere builds</em></p>\n\n<hr>\n\n<p>When Jason Hickel argues that the antidote to capitalism is economic democracy, he is making a structural claim: the crisis is not one of awareness or even political will, but of who controls the system of production and on what terms. His prescription — credit guidance, public finance, industrial policy, workplace democracy, universal services — is coherent and increasingly well-evidenced. But it assumes something that does not yet exist.</p>\n\n<p>It assumes democratic infrastructure.</p>\n\n<p>Not elections. Not parliaments. Not even cooperatives in their current form. It assumes that ordinary people have access to deliberative tools sophisticated enough to make collective decisions about production, resource allocation, and shared values — and that those tools are not themselves owned by the very forces that democratic control is meant to displace.</p>\n\n<p>This is not a trivial gap. It is the gap.</p>\n\n<h2>The deliberation deficit</h2>\n\n<p>Consider what economic democracy actually requires at the operational level. It requires communities to surface proposals, deliberate on trade-offs, register not just approval or rejection but degrees of consent, and arrive at decisions that carry legitimate authority. It requires this to happen at scale, across communities, in ways that respect local autonomy while enabling coordination. And it requires the entire process to be transparent, auditable, and immune to capture by concentrated interests.</p>\n\n<p>We have almost none of this infrastructure.</p>\n\n<p>What we have instead are platforms built by the same capital whose logic Hickel diagnoses as the root of the crisis. Facebook Groups. Google Forms. WhatsApp threads where decisions evaporate into noise. Zoom calls where the loudest voice wins. These are not neutral tools. They are products designed to maximise engagement, extract data, and serve advertising. 
Using them for democratic deliberation is like conducting a trial in a casino — the house always wins, and the proceedings are not what they appear.</p>\n\n<p>The question is not whether economic democracy is desirable. Hickel and others have made that case convincingly. The question is: where is the infrastructure?</p>\n\n<h2>What a sovereign community actually looks like</h2>\n\n<p>The Village platform, built by My Digital Sovereignty Ltd, is a small and early attempt to answer that question. It is not a political movement, a policy platform, or a social network. It is infrastructure — sovereign digital infrastructure for community organisations.</p>\n\n<p>&ldquo;Sovereign&rdquo; is a precise term here. Each Village community owns its data. No data flows to third-party advertising or training systems. The AI systems that assist community life are governed by the community's own values, not by a corporate terms-of-service document that changes without notice. Members have enforceable rights — data ownership, the right to exit with their data, the right to deletion, the right to explanation and appeal for any moderation decision — and these rights are enforced in the architecture itself, not by policy documents that can be revised by a board meeting in Menlo Park.</p>\n\n<p>This is not unusual language for the privacy-focused technology community. What makes it relevant to the economic democracy conversation is what sits on top of this sovereign foundation: a deliberative decision-making system that takes democratic process seriously.</p>\n\n<h2>Beyond the binary ballot</h2>\n\n<p>The Village polling system is, at first glance, a simple feature — a way for a small community to gauge support for an idea. A parish vestry considering whether to adopt a new communication platform. A conservation group deciding how to allocate volunteer hours. 
A family agreeing on reunion dates.</p>\n\n<p>But the architecture beneath this simplicity is more considered than it appears.</p>\n\n<p>Village polls operate in phases: draft, discussion, preliminary vote, final vote, decided. This is not accidental. It mirrors the structure of deliberative democracy as described by theorists from Habermas to Dryzek — a process where proposals are first discussed, refined through structured argumentation, subjected to preliminary testing, and only then put to a binding decision.</p>\n\n<p>The discussion phase supports structured argumentation. Contributions are categorised — questions, arguments for, arguments against, suggestions, general comments — so that deliberation has shape rather than dissolving into unstructured debate. Supporting documents can be attached and categorised: background materials, formal proposals, supporting arguments, counter-arguments, prior decisions, legal references.</p>\n\n<p>When a poll moves to voting, communities are not limited to simple majority rule. The system supports consent-based decision-making on a five-point sociocratic scale: enthusiastic support, support, consent, stand aside, and object. Objections must be categorised and justified. This is not a quirk of implementation — it reflects the sociocratic insight that the relevant question is often not &ldquo;does everyone agree?&rdquo; but &ldquo;can everyone live with this?&rdquo; The distinction matters enormously for communities navigating genuine disagreement.</p>\n\n<p>For decisions requiring more nuance, the system supports ranked-choice voting — allowing communities to express preference orderings rather than single choices — and quadratic voting, where members allocate voice credits across multiple issues, with the cost of additional votes on any single issue rising quadratically. 
Quadratic voting is a mechanism specifically designed to balance intensity of preference against breadth of support — a problem that simple majority rule handles poorly and that is central to the resource allocation questions at the heart of economic democracy.</p>\n\n<p>Every vote is transparent. Every phase transition is audited. Facilitators manage the process but cannot override it. Quorum requirements are configurable. Outcomes are recorded with their rationale.</p>\n\n<p>None of this is theoretical. It is running in production, used by real communities today. It is also, frankly, small. A handful of communities. Dozens of members, not millions. But the architecture is the hard part, and the architecture is in place.</p>\n\n<h2>Governance as structure, not aspiration</h2>\n\n<p>The deeper innovation is not the polls themselves but the governance architecture they sit within.</p>\n\n<p>Village communities operate under a three-layer governance model. The first layer is immutable: platform-level rights that no community, no administrator, and no commercial pressure can override. Data ownership. Right to exit. Consent requirements. AI transparency. Due process. These are not aspirational values printed on a website. They are enforced at the middleware level of the software — the same way that HTTPS enforces encryption regardless of what either party in a conversation wants.</p>\n\n<p>The second layer is the Tractatus framework — an open-source governance architecture (published under Apache 2.0) that provides adoptable principles for AI governance, decision-making process, communication norms, and privacy standards. Communities can adopt these principles in full, partially, or minimally. The principles include structural rules: that pattern-detected rules require human approval before activation, that AI must clearly communicate uncertainty, that reversible actions are preferred over irreversible ones. These are not guidelines. 
They are enforceable constraints on how the system behaves.</p>\n\n<p>The third layer is community-specific: rules that each village creates for itself, reflecting its own values, traditions, and circumstances.</p>\n\n<p>This is pluralism by design. Not the vague pluralism of &ldquo;we respect different viewpoints,&rdquo; but the structural pluralism of Elinor Ostrom's polycentric governance — multiple overlapping authorities, each legitimate within its scope, coordinated through transparent rules rather than hierarchical command. The platform layer sets the floor. The framework layer provides tested patterns. The community layer provides local autonomy. No single authority controls the whole.</p>\n\n<h2>Federation: from villages to networks</h2>\n\n<p>A single sovereign community, however well-governed, is not economic democracy. Hickel's argument operates at the level of entire economies. The question is whether the Village architecture can scale beyond the individual community without reproducing the centralisation it was designed to resist.</p>\n\n<p>The answer — partially built, honestly incomplete — is federation.</p>\n\n<p>Village federation allows sovereign communities to form relationships with other communities. Not by merging into a single platform (which would recreate the centralisation problem) but by establishing bilateral agreements: what data to share, what decisions to coordinate, what values to hold in common. Each community retains sovereignty. Coordination happens through negotiated agreements, not imposed terms.</p>\n\n<p>The federation architecture is designed but not yet fully realised in practice. The server-to-server protocols exist. The agreement framework is defined. Cross-village decision-making schemas are in the codebase. But the full user interface for federation — the practical experience of two communities negotiating a shared governance relationship — is still under development.</p>\n\n<p>This honesty matters. 
The vision is clear: a network of sovereign communities, each governing its own affairs, able to coordinate on shared challenges through federated decision-making. The architecture supports it. The implementation is early.</p>\n\n<p>But consider what this means for the deliberation deficit identified above. If the polling infrastructure described earlier — consent-based voting, ranked choice, quadratic allocation, structured deliberation, transparent audit — were available not just within a single community but across a federated network of values-aligned communities, you would have something that does not currently exist anywhere: a democratic substrate for collective economic decision-making that is not owned by capital.</p>\n\n<p>A network of conservation communities could collectively prioritise habitat restoration projects across regions. A federation of cooperatives could allocate shared investment using quadratic voting — balancing the intensity of each cooperative's need against the breadth of support across the network. Parish communities across a diocese could deliberate on shared resource allocation with the same rigour they apply to internal decisions.</p>\n\n<p>These are not utopian projections. They are straightforward applications of infrastructure that already exists in production at the single-community level, extended through federation protocols that are architecturally defined and partially implemented.</p>\n\n<h2>The substrate, not the solution</h2>\n\n<p>It would be dishonest to claim that Village solves the problems Hickel identifies. It does not. Economic democracy requires political transformation — changes in law, policy, institutional structure, and power. No software platform delivers that.</p>\n\n<p>But political transformation requires infrastructure that the transformation's advocates do not currently possess. 
The movements calling for economic democracy — labour movements, environmental movements, cooperative movements, indigenous sovereignty movements — are forced to organise on platforms built by the very capital they seek to democratise. Their deliberations are surveilled. Their data is extracted. Their tools are designed for engagement, not for governance.</p>\n\n<p>Village is an attempt to build the other thing. The sovereign, deliberative, federated infrastructure that democratic movements need and do not have. It is small. It is early. It makes no claim to have solved the structural crises of contemporary capitalism.</p>\n\n<p>What it does claim is that the architectural prerequisites for economic democracy — sovereignty over community data, structured deliberative tools, pluralistic governance that respects local autonomy, and federation protocols that enable coordination without centralisation — are not hypothetical. They can be built. Some of them have been built. They are running, today, in communities that are using them to make real decisions about their shared lives.</p>\n\n<p>The printing press did not cause democracy. But democracy without the printing press was impossible. The question for advocates of economic democracy is not only what policies to pursue, but what infrastructure those policies require — and who builds it.</p>\n\n<hr>\n\n<p><em>The Village platform is built by My Digital Sovereignty Ltd, registered in Aotearoa New Zealand, with infrastructure in the European Union and New Zealand. The Tractatus governance framework is published as open source under Apache 2.0 at <a href=\"https://agenticgovernance.digital\">agenticgovernance.digital</a>. 
The article series &ldquo;Your Community, Your AI&rdquo; is available at <a href=\"https://mysovereignty.digital/ai-articles\">mysovereignty.digital</a> under Creative Commons Attribution 4.0.</em></p>","excerpt":"Calls for economic democracy assume the existence of democratic infrastructure that nowhere exists. This article examines what that infrastructure requires — structured deliberation, graduated consent, transparent audit, local sovereignty with federated coordination — and documents an early attempt to build it.","featured_image":null,"category":"governance","status":"published","moderation":{"human_reviewer":"John Stroh","approved_at":"2026-03-15T20:24:08.915Z"},"tractatus_classification":{"quadrant":"STRATEGIC","values_sensitive":true,"requires_strategic_review":false},"published_at":"2026-03-15T20:24:08.915Z","tags":["economic-democracy","governance","polls","federation","village","ostrom","hickel","pluralism","polycentric-governance","deliberative-democracy"],"view_count":19,"engagement":{"shares":0,"comments":0}},{"_id":{"buffer":{"0":105,"1":179,"2":88,"3":37,"4":173,"5":155,"6":82,"7":88,"8":20,"9":140,"10":229,"11":176}},"title":"Guardian Agents and the Philosophy of AI Accountability","slug":"guardian-agents-philosophy-of-ai-accountability","author":{"type":"human","name":"John Stroh","claude_version":null},"content":"<h2>I. The Problem: Who Watches the Watchers?</h2>\n\n<p>Every AI governance architecture must answer a foundational question: who verifies the verifier?</p>\n\n<p>The standard industry approach — using additional AI models to evaluate AI output — is an engineering response to an engineering problem. It treats verification as a scaling challenge: add more layers, more models, more probabilistic checks. The assumption is that enough independent AI systems checking each other will converge on reliability.</p>\n\n<p>This assumption has a name in safety engineering: common-mode failure. 
When the verification layer and the generation layer share fundamental properties — both are probabilistic, both hallucinate, both reward confident outputs over calibrated uncertainty — they share fundamental failure modes. The checker confirms the error because the checker reasons the same way as the system it checks.</p>\n\n<p>We encountered this directly. When an AI coding assistant produced a detailed but fundamentally flawed analysis of a database configuration, we asked the same system to write an audit script to verify its work. The audit script shared the same blind spot. It used the same flawed understanding of the domain, applied the same reasoning patterns, and reached the same wrong conclusion — that its original analysis was correct.</p>\n\n<p>This is not an anomaly. It is a structural property of using generative systems to verify generative systems. And it is the starting point for understanding why Guardian Agents are built the way they are.</p>\n\n<p>The question \"who watches the watchers?\" is not new. Juvenal posed it two thousand years ago. The philosophical traditions that inform Guardian Agents have been working on versions of this question for decades. What is new is the engineering context: AI systems that are confident, capable, and wrong in ways that are increasingly difficult for humans to detect.</p>\n\n<h2>II. Four Philosophical Commitments</h2>\n\n<h3>Wittgenstein: The Boundary Between the Sayable and the Unsayable</h3>\n\n<p>Ludwig Wittgenstein's <em>Tractatus Logico-Philosophicus</em> (1921) draws a line between what can be expressed in propositions (the sayable) and what cannot (the unsayable). Proposition 7 — \"Whereof one cannot speak, thereof one must be silent\" — is not a counsel of defeat. It is an epistemological commitment: some things can be systematised and some cannot, and confusing the two produces nonsense.</p>\n\n<p>This distinction became the foundational architectural principle of Village AI. 
Technical optimisations, pattern matching, information retrieval — these belong to computational systems. Value hierarchies, cultural protocols, grief processing, strategic direction — these belong to human judgment. The governance framework enforces this boundary not through policy documents but through code: a boundary enforcement service classifies every decision type and blocks AI from acting autonomously on anything outside the technical domain.</p>\n\n<p>For Guardian Agents specifically, Wittgenstein's distinction resolves the \"who watches the watchers?\" problem in a way that generative verification cannot. The watchers do not need to <em>understand</em> the content they verify. They need to <em>measure</em> it. Embedding cosine similarity — the mathematical operation at the heart of Guardian verification — determines how closely an AI response aligns with source material. This is measurement, not interpretation. It belongs firmly in the domain of the sayable.</p>\n\n<p>The AI that generated the response operates in a space that inevitably touches the unsayable — it makes choices about emphasis, framing, what to include and exclude. The guardian that verifies the response operates entirely in the sayable — it computes distances between vectors. By making the verification layer epistemologically simpler than the generation layer, we avoid the recursive trust problem. The watcher is not another speaker. The watcher is a measuring instrument.</p>\n\n<p>This is not merely a technical convenience. It is an epistemological commitment: verification and generation must operate in different epistemic domains, or verification is illusory.</p>\n\n<h3>Berlin: Value Pluralism and the Rejection of Optimisation</h3>\n\n<p>Isaiah Berlin's central thesis in <em>Two Concepts of Liberty</em> (1958) and <em>Four Essays on Liberty</em> (1969) is that legitimate human values are irreducibly plural and sometimes genuinely incommensurable. 
Justice and mercy, liberty and equality, individual privacy and collective memory — these are not competing approximations of some higher meta-value. They are genuinely different things, each valuable in its own right, and the pursuit of one sometimes necessarily requires the sacrifice of another.</p>\n\n<p>This has a devastating implication for AI governance: there is no objective function that resolves values conflicts. Any system that claims to \"optimise\" across incommensurable values is not being neutral — it is imposing a hidden hierarchy. Berlin's work demands that an AI governance system never assume a default value ranking, never silently resolve a values conflict, and always make visible what is sacrificed in every decision.</p>\n\n<p>Guardian Agents inherit this commitment in their tenant-scoped architecture. What counts as an anomaly in a parish archive — where accuracy about historical dates is paramount — differs fundamentally from what counts as an anomaly in a neighbourhood coordination group — where timeliness matters more than precision. These are not different calibrations of the same value. They are different values, irreducibly so.</p>\n\n<p>The standard approach to multi-tenant AI governance is to define universal safety thresholds and apply them everywhere. This is precisely the hidden value hierarchy Berlin warned against. A universal threshold that prioritises accuracy over timeliness imposes one community's values on another. Guardian Agents avoid this by making governance constitutional rather than algorithmic: each community defines its own principles, its own anomaly baselines, its own threshold overrides. The platform provides safety floors. Communities provide value direction.</p>\n\n<p>Berlin also illuminates why the evidence burden for Guardian threshold changes is deliberately asymmetric. Loosening a safety threshold — reducing the system's sensitivity to potential problems — requires 85% confidence. 
Tightening a threshold requires only 60%. This asymmetry is not arbitrary. It reflects Berlin's insight that the consequences of error are not symmetric across value dimensions. A false negative — missing a real problem — is worse than a false positive — flagging a non-problem — because the false negative silently erodes the community's epistemic ground. The system fails conservative because the costs of failure are asymmetric.</p>\n\n<h3>Ostrom: Polycentric Governance and the Commons</h3>\n\n<p>Elinor Ostrom's Nobel Prize-winning research in <em>Governing the Commons</em> (1990) demonstrated that communities govern shared resources effectively through polycentric governance — multiple independent centres of authority operating without hierarchical subordination. Her conditions for effective commons governance (clear boundaries, collective-choice arrangements, monitoring, graduated sanctions, conflict resolution, nested enterprises) map to multi-tenant AI governance with remarkable precision.</p>\n\n<p>Guardian Agents implement Ostrom's framework directly. The monitoring architecture enforces a strict privacy boundary that creates genuinely independent verification centres: tenant moderators see full content for their own community; platform administrators see only aggregate metrics. Neither authority can override the other. Neither has access to the other's domain. This is not role-based access control as a security measure — it is polycentric governance as an architectural principle. The same data is governed by multiple independent authorities whose jurisdictions overlap but whose powers do not subsume each other.</p>\n\n<p>Ostrom's insight about \"nested enterprises\" — governance structures that operate at multiple scales simultaneously — appears in the Guardian threshold override system. Overrides can be tenant-specific (a community adjusting its own sensitivity) or platform-wide (a baseline safety change). 
The resolution order is explicit: tenant overrides take precedence over platform overrides, which take precedence over frozen defaults. This nesting ensures that local governance is not subordinated to platform-level decisions while platform safety floors remain enforceable.</p>\n\n<p>The \"who watches the watchers?\" question receives an Ostromian answer: everyone watches everyone, within clearly defined jurisdictional boundaries. The regression monitor watches whether approved changes made things worse. Moderators watch the regression monitor's recommendations. The audit trail watches the moderators' decisions. No single authority is root. No single point of failure exists.</p>\n\n<h3>Te Ao Māori: Data Sovereignty as Governance Principle</h3>\n\n<p>Indigenous data sovereignty frameworks — particularly Te Mana Raraunga's six principles (rangatiratanga, whakapapa, whanaungatanga, kotahitanga, manaakitanga, kaitiakitanga) and the CARE Principles for Indigenous Data Governance (Collective Benefit, Authority to Control, Responsibility, Ethics) — provide what is perhaps the most directly architectural of the philosophical inputs to Guardian Agents.</p>\n\n<p>Where Wittgenstein offers an epistemological distinction, Berlin a theory of values, and Ostrom a governance model, Te Ao Māori frameworks offer a complete account of the relationship between data, community, and authority. Data about a community belongs to that community — not to a platform, not to a researcher, not to a government. The community exercises rangatiratanga (self-determination) over its own data. The platform exercises kaitiakitanga (guardianship) — a fiduciary obligation to protect, not own.</p>\n\n<p>This distinction between ownership and guardianship is the philosophical foundation of sovereign processing in Guardian Agents. 
When we say \"all guardian processing runs on the community's own infrastructure\" and \"no data leaves the tenant boundary for safety checks,\" we are not describing a technical preference for on-premises computing. We are implementing rangatiratanga: the community's right to govern what happens to its own data, including the governance mechanisms applied to it.</p>\n\n<p>The tenant isolation that runs through every Guardian component — alerts scoped to tenantId, dashboards showing only own-tenant content, cross-tenant learning using aggregate counts only — is not primarily a security measure. It is an expression of the principle that each community's data, each community's governance decisions, and each community's AI interactions belong to that community. Platform-level governance provides safety baselines. It does not provide authority over community data.</p>\n\n<p>A critical note on intellectual honesty: these frameworks were developed by and for Indigenous peoples. Their application to a software platform built by non-Māori developers is an act of learning from, not speaking for. The architectural principles are drawn from published frameworks (Te Mana Raraunga Charter, CARE Principles, OCAP Principles) with explicit acknowledgment that implementation in Indigenous contexts would require Indigenous governance, consent, and co-design. The framework proposes structures; it does not presume to govern the communities those structures might serve.</p>\n\n<h2>III. 
Convergence: Why These Traditions Demand This Architecture</h2>\n\n<p>These four traditions — separated by a century and a hemisphere, developed in contexts ranging from early twentieth-century Vienna to contemporary Aotearoa New Zealand — converge on the same architectural requirements.</p>\n\n<p><strong>Mathematical verification, not generative checking.</strong> Wittgenstein's distinction between the sayable and unsayable demands that verification operate in a different epistemic domain from generation. Berlin's rejection of hidden value hierarchies requires verification that is transparent and auditable, not another probabilistic black box. Embedding cosine similarity satisfies both: it is mathematical (sayable), deterministic (auditable), and epistemically distinct from the generation process it verifies.</p>\n\n<p><strong>Sovereign processing.</strong> Te Ao Māori frameworks require that data governance — including AI safety governance — be exercised by the community that owns the data. Ostrom's polycentric model requires that governance centres be genuinely independent, not dependent on a shared infrastructure provider. Together, they demand that guardian processing run locally, with no external dependency for safety decisions.</p>\n\n<p><strong>Human authority.</strong> Wittgenstein's unsayable cannot be delegated to machines. Berlin's incommensurable values cannot be algorithmically resolved. Ostrom's collective-choice arrangements require human participation. Te Ao Māori's rangatiratanga requires community self-determination. All four traditions converge on the same architectural requirement: the guardian proposes, the human decides.</p>\n\n<p><strong>Tenant-scoped governance.</strong> Berlin's value pluralism means different communities legitimately hold different values. Ostrom's polycentric governance means multiple independent authorities govern simultaneously. Te Ao Māori's rangatiratanga means each community governs its own domain. 
Together, they require that governance be scoped to the community, not universalised across a platform.</p>\n\n<p>The convergence is not coincidental. Each tradition, from its own starting point, has been working on the same fundamental problem: how to govern shared resources and collective decisions without imposing a single authority's values on everyone else. That this problem is now appearing in AI governance does not make it new. It makes it urgent.</p>\n\n<h2>IV. Embedding Similarity as an Epistemological Commitment</h2>\n\n<p>The choice to use embedding cosine similarity as the primary verification mechanism in Guardian Agents deserves philosophical attention beyond its technical merits.</p>\n\n<p>Standard AI safety research treats verification as a classification problem: is this response safe or unsafe, accurate or inaccurate, aligned or misaligned? Classification presupposes categories, and categories presuppose values. The decision about where to draw the boundary between \"safe\" and \"unsafe\" is itself a values decision — one that Berlin would insist cannot be made algorithmically.</p>\n\n<p>Embedding similarity does not classify. It measures distance. The question is not \"is this response accurate?\" but \"how closely does this response align with what the community actually knows?\" The difference is epistemologically significant. Classification asserts knowledge (\"this is safe\"). Measurement provides evidence (\"this is 0.73 similar to source material\"). The human who sees the measurement decides what to do with it. The system that provides the measurement does not need to know what counts as \"good enough\" — that is a values question, scoped to the community, decided by moderators.</p>\n\n<p>This is why confidence badges in Guardian Agents present a score-derived tier (verified, partially verified, unverified) rather than a binary safe/unsafe label. The tier is informational. The human interprets it. The guardian measures; the human judges. 
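</p>\n\n<p>Concretely, the measurement and the tier can be sketched as two small pure functions. This is an illustrative sketch only: the 0.85 and 0.60 cut-offs below are assumptions, not Village's actual thresholds, which are community-configurable.</p>

```python
import math

def cosine_similarity(a, b):
    # Pure measurement: the angle between two embedding vectors.
    # No categories, no values judgment, just geometry.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def confidence_tier(score):
    # Informational tier derived from the score. The cut-offs are
    # hypothetical; each community sets its own.
    if score >= 0.85:
        return 'verified'
    if score >= 0.60:
        return 'partially verified'
    return 'unverified'
```

<p>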
Wittgenstein's boundary is preserved at the interface between system and user.</p>\n\n<p>The \"Dig Deeper\" feature extends this epistemological commitment to individual claims. When a member expands the source analysis panel, they see each claim mapped to its source (or marked as unmatched). The system does not say \"this claim is wrong.\" It says \"we could not find this claim in your community's records.\" The difference matters: absence of evidence is not evidence of absence, and a system that confuses the two has crossed from measurement into judgment.</p>\n\n<h2>V. The Adaptive Learning Paradox</h2>\n\n<p>Phase 4 of Guardian Agents — adaptive learning — presents the most philosophically challenging design problem. If the system learns from moderator decisions, and moderator decisions are influenced by the system's recommendations, is the human authority real or performative?</p>\n\n<p>This is a variant of Berlin's warning about \"positive liberty\" — the claim that an authority knows a person's \"true\" interests better than the person does. If the guardian system's analysis is so compelling that moderators always follow its recommendations, human authority is formally preserved but functionally eliminated.</p>\n\n<p>The architectural response to this paradox is threefold:</p>\n\n<p>First, the analysis is deterministic, not generative. Phase 4 gathers evidence (historical alerts, baseline deviations, resolution patterns) and applies rule-based classification (false-positive signal, confirmed-threat signal, threshold-issue signal). No language model inference is involved. The analysis can be fully inspected, fully audited, and fully understood by a moderator. It is a summary of evidence, not a prediction.</p>\n\n<p>Second, the evidence burden is asymmetric. The system requires stronger evidence to recommend loosening restrictions than tightening them. 
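</p>\n\n<p>A minimal sketch of that burden as auditable configuration: the 0.85 and 0.60 figures are those described above, while the names and structure are hypothetical, not the platform's actual code.</p>

```python
# Hypothetical configuration: the asymmetric evidence burden lives
# in plain data a moderator can read, not inside a model.
EVIDENCE_BURDEN = {
    'loosen': 0.85,   # weakening a safety check demands strong evidence
    'tighten': 0.60,  # strengthening one clears a lower bar
}

def may_recommend(direction, confidence):
    # Deterministic rule: surface a recommendation to moderators
    # only when gathered evidence clears the burden for that direction.
    return confidence >= EVIDENCE_BURDEN[direction]
```

<p>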
This encodes a substantive value judgment — that the costs of false negatives are higher than those of false positives — but it encodes it transparently, in auditable configuration, subject to community override. The value judgment is visible, not hidden.</p>\n\n<p>Third, a regression monitor watches every approved change. If metrics worsen within 24 hours, the change is automatically flagged for review. The system's own learning is subject to the same evidence-based scrutiny as the AI output it governs. This is Ostrom's monitoring principle applied reflexively: the governance system monitors itself.</p>\n\n<p>Whether these measures are sufficient to preserve genuine human authority is an open question. The honest answer is that no technical architecture can fully prevent automation bias — the tendency for humans to over-rely on automated recommendations. What architecture can do is make the evidence transparent, the reasoning inspectable, and the reversal trivial. Guardian Agents aim for conditions that support genuine human judgment, not conditions that guarantee it.</p>\n\n<h2>VI. Market Position and the Limits of Industry Trajectory</h2>\n\n<p>Leigh McMullen of <a href=\"https://www.gartner.com/en/articles/guardian-agents\">Gartner (May 2025)</a> describes guardian agents evolving through three phases — quality control, observation, and protection — all defined as \"AI designed to monitor other AI.\" Village's Guardian Agents already encompass all three of Gartner's phases and add a fourth — Adaptive Learning — that Gartner does not envision. But the systems Gartner's forecast describes — and the sovereign AI infrastructure that companies like IBM are building — fall fundamentally short of what the philosophical commitments described in this article demand.</p>\n\n<p>Gartner's entire model assumes generative verification (AI checking AI), cloud-dependent processing, universal thresholds, automated operation with minimal human governance, and platform-scoped policies. 
Even IBM's Sovereign Core — launched in January 2026 as the first enterprise AI designed for local governance — addresses only data residency: where the data sits and who can access it. It does not give the community a constitutional voice in what the AI does with that data.</p>\n\n<p>From the philosophical perspective developed in this article, every one of these assumptions is inadequate:</p>\n\n<ul>\n<li>Generative verification violates Wittgenstein's epistemological requirement that verification operate in a different domain from generation</li>\n<li>Cloud-dependent processing violates Te Ao Māori's rangatiratanga — the community's right to govern what happens to its own data</li>\n<li>Universal thresholds violate Berlin's value pluralism — imposing one community's values on another through hidden default hierarchies</li>\n<li>Automated operation without human authority violates Ostrom's collective-choice arrangements and the constitutional principle that moderators, not algorithms, govern</li>\n<li>Platform-scoped governance violates every tradition's insistence on community self-determination</li>\n</ul>\n\n<p>Village's Guardian Agents resolve all five because they were derived from these philosophical commitments, not from engineering convenience. The gap between Village's 2026 deployment and the industry's 2028 destination is not temporal — it is qualitative. The industry will arrive at guardian agents that monitor AI output. Village has guardian agents that implement constitutional governance. These are different architectures serving different purposes, even when they share a name.</p>\n\n<p>This distinction — governance as constitutional architecture versus governance as automated monitoring — may prove to be the most significant contribution of the Village project to the broader discourse on AI governance. 
Not because the specific technical choices are universally applicable, but because the methodology is: start with the philosophical commitments, derive the architecture, build the capability within the architecture's constraints. The result is a system where safety and capability are not in tension, because safety <em>is</em> the architecture within which capability operates.</p>\n\n<hr>\n\n<h2>References</h2>\n\n<p>Alexander, C. (1977). <em>A Pattern Language</em>. Oxford University Press.</p>\n\n<p>Alexander, C. (2002-2004). <em>The Nature of Order</em> (Vols. 1-4). Center for Environmental Structure.</p>\n\n<p>Berlin, I. (1958). Two Concepts of Liberty. In <em>Four Essays on Liberty</em> (1969). Oxford University Press.</p>\n\n<p>Carroll, S. R., et al. (2020). The CARE Principles for Indigenous Data Governance. <em>Data Science Journal</em>, 19(1), 43.</p>\n\n<p>Kukutai, T., & Taylor, J. (Eds.). (2016). <em>Indigenous Data Sovereignty: Toward an Agenda</em>. Australian National University Press.</p>\n\n<p>Ostrom, E. (1990). <em>Governing the Commons: The Evolution of Institutions for Collective Action</em>. Cambridge University Press.</p>\n\n<p>Te Mana Raraunga. (2016). <em>Te Mana Raraunga Charter</em>.</p>\n\n<p>Wittgenstein, L. (1921). <em>Tractatus Logico-Philosophicus</em>. Translated by C. K. Ogden (1922). Routledge & Kegan Paul.</p>\n\n<hr>\n\n<p><em>This article is part of the Agentic Governance research programme at My Digital Sovereignty Ltd. Village is currently in beta pilot, accepting applications from communities and organisations ready to participate in the governance architecture described here. 
<a href=\"https://community.myfamilyhistory.digital/betabrief.html\">Apply for beta access</a>.</em></p>\n\n<p><em>Read the customer-facing overview: <a href=\"https://community.myfamilyhistory.digital/articles/guardian-agents-announcement.html\">Guardian Agents: How Village AI Holds Itself Accountable</a></em></p>\n\n<p><em>Read the technical reasoning: <a href=\"https://community.myfamilyhistory.digital/articles/guardian-agents-why-we-built-them.html\">Why We Built Guardian Agents</a></em></p>\n\n<p><em>Read the incident that catalysed the project: <a href=\"https://community.myfamilyhistory.digital/articles/when-your-ai-assistant-nearly-destroys-what-it-was-hired-to-fix.html\">When Your AI Assistant Nearly Destroys What It Was Hired to Fix</a></em></p>","excerpt":"Guardian Agents are a four-phase AI governance system deployed in production by Village. This article traces the philosophical genealogy of their architecture — from Wittgenstein and Berlin to Ostrom and Te Ao Māori.","featured_image":null,"category":"Research","status":"published","moderation":{"ai_analysis":null,"human_reviewer":"john-stroh","review_notes":"Direct publication by author","approved_at":"2026-03-13T00:07:58.364Z"},"tractatus_classification":{"quadrant":"STRATEGIC","values_sensitive":true,"requires_strategic_review":false},"published_at":"2026-03-13T00:07:58.364Z","tags":["guardian-agents","philosophy","ai-governance","wittgenstein","berlin","ostrom","te-ao-maori","research"],"view_count":39,"engagement":{"shares":0,"comments":0}},{"_id":{"buffer":{"0":105,"1":137,"2":62,"3":67,"4":76,"5":74,"6":64,"7":240,"8":67,"9":140,"10":229,"11":176}},"title":"Beyond One Framework: Taonga-Centred Governance for AI Steering Vectors","slug":"taonga-centred-steering-governance-polycentric-ai","author":{"type":"human","name":"John Stroh"},"content":"<p><strong>Draft status:</strong> This article summarises a companion research paper (STO-RES-0010) that draws on concepts from te ao Māori. 
The paper has <strong>not been peer-reviewed or validated by Māori</strong>. Until that review occurs, its proposals remain drafts awaiting correction, critique, and collaboration from Māori scholars, practitioners, and governance bodies.</p>\n\n<hr>\n\n<h2>Why a Companion Paper?</h2>\n\n<p>Our earlier paper, <a href=\"https://agenticgovernance.digital/blog-post.html?slug=steering-vectors-mechanical-bias-sovereign-ai\">Steering Vectors and Mechanical Bias</a> (STO-RES-0009), established that sovereign small language model deployments have a structural advantage for inference-time debiasing: full access to model weights and activations enables techniques that are architecturally impossible through commercial API endpoints.</p>\n\n<p>But critique of that paper — including our own v1.1 revisions — exposed a governance limitation we had not fully resolved. The paper treated steering vectors largely as an internal platform affordance: the platform operator defines bias, extracts vectors, and distributes corrections to downstream tenants. For many use cases, this hierarchy is appropriate. For iwi, hapū, or other bodies exercising parallel sovereignty, it structurally subordinates their governance to the platform’s — regardless of intent.</p>\n\n<p>The companion paper asks: what if we replaced this hierarchy with a network?</p>\n\n<h2>The Problem with Platform-as-Root</h2>\n\n<p>The original two-tier architecture has an implicit topology: Tractatus governance kernel → platform operator → base model corrections → per-tenant adapters. Every steering decision traces back to the platform operator’s definitions. Tenants can customise, but they cannot contest the root definitions or substitute their own.</p>\n\n<p>For iwi exercising tino rangatiratanga, this means the platform operator defines what “family structure bias” means at the base layer. 
If that definition already encodes assumptions that conflict with whānau structures, the adapter layer is working against the foundation rather than building on it.</p>\n\n<h2>Three Design Commitments</h2>\n\n<p>The companion paper proposes a polycentric alternative built on three commitments:</p>\n\n<p><strong>1. No single root ontology of bias.</strong> Different authorities define their own bias axes. An iwi steering authority might define axes for whakapapa representation, whenua relationships, or tapu/noa distinctions that do not appear in any platform-level evaluation suite — and should not need to. These axes co-exist with, but are not subordinate to, platform-level safety dimensions.</p>\n\n<p><strong>2. Explicit composition, not silent inheritance.</strong> Every AI session carries visible steering provenance: which packs are active, which authorities issued them, under what terms. Example: “This response was shaped by: Platform Safety Pack v3 (Tractatus), Ngāi Tahu Whānau Pack v1, Health Domain Pack v2.” This makes value governance visible and contestable rather than opaque and non-negotiable.</p>\n\n<p><strong>3. Right of non-participation.</strong> Iwi and other authorities can choose not to publish steering packs to a given platform, constrain their use to specific contexts, or withdraw them at any time. The platform must function without them. The absence of an iwi pack is not a gap for the platform to fill — it is a boundary the platform must respect.</p>\n\n<h2>Steering Packs as Taonga</h2>\n\n<p>The paper’s central conceptual move: when a steering pack encodes iwi-specific knowledge — tikanga, whakapapa structures, cultural framing — it meets the criteria for taonga (treasured possessions subject to kaitiakitanga). This is not metaphorical. 
Taonga status creates specific governance requirements:</p>\n\n<ul>\n<li>Custody and care by appropriate kaitiaki, not by the platform’s engineering team</li>\n<li>Non-appropriation: the platform cannot copy, merge, or redistribute these packs without consent</li>\n<li>Contextual use conditions: some packs may only be applied under specific kaupapa or relationships</li>\n<li>Iwi-controlled lifecycle: creation, review, versioning, and withdrawal under iwi institutional control</li>\n</ul>\n\n<p>This framing gives indigenous governance over AI behaviour a foundation that is structurally independent of the platform — not granted by the platform, but recognised by it.</p>\n\n<h2>Architecture: From Tree to Network</h2>\n\n<p>Instead of a single governance tree (Tractatus → platform → tenants), the companion paper proposes co-equal steering authorities:</p>\n\n<ul>\n<li><strong>Platform operator</strong> — safety baselines, general debiasing, technical infrastructure (governed under Tractatus)</li>\n<li><strong>Iwi steering authorities</strong> — cultural steering for iwi-specific domains (governed under tikanga, through iwi data governance boards)</li>\n<li><strong>Community trusts</strong> — domain-specific or locality-specific steering (governed under trust charters)</li>\n</ul>\n\n<p>These authorities publish steering packs from separate registries. The sovereign SLM is the shared technical substrate where packs are composed and applied — but the model does not determine which packs have authority. That is determined by the relationships between the deploying institution and the relevant governance bodies.</p>\n\n<h2>A Worked Example</h2>\n\n<p>The paper develops a case study: a marae-run Home AI deployment serving a whānau community. The system composes three steering packs — platform safety, iwi whānau and tikanga, and a grief sensitivity pack from a health trust. 
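</p>\n\n<p>As a sketch of how composition with visible provenance might work (all type names are hypothetical, and no taonga steering registry exists yet):</p>

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SteeringPack:
    # Hypothetical descriptor for a published steering pack.
    name: str
    version: str
    authority: str
    withdrawn: bool = False

def compose(packs):
    # Explicit composition: withdrawn packs are dropped, never
    # silently substituted, and the session reports its provenance.
    active = [p for p in packs if not p.withdrawn]
    provenance = ', '.join(
        '{} v{} ({})'.format(p.name, p.version, p.authority)
        for p in active
    )
    return active, 'This response was shaped by: ' + provenance
```

<p>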
When a community member asks the AI to summarise kōrero about a recently deceased kuia, the steering provenance shows exactly which packs shaped the output. If the family feels something is misrepresented, they can direct their concern to the appropriate authority: tikanga issues to the iwi board, grief sensitivity to the health trust, safety to the platform.</p>\n\n<p>The paper also traces a withdrawal scenario: when the iwi board revokes its pack for revision, the deployment detects the withdrawal and ceases applying it. The platform does not substitute its own whānau-related steering. The absence is governed, not filled.</p>\n\n<h2>What This Is Not</h2>\n\n<p>This is not “Tractatus with iwi plugins.” The point is to make Tractatus one governance peer among others, not the root of the tree. The goal is a network of coordinated but distinct governance services, some of which are iwi-sovereign, with the model’s activation space as a shared technical substrate rather than a single constitutional order.</p>\n\n<p>It is also not finished. The architecture is conceptual. No taonga steering registry exists yet. And most critically: this paper was written by non-Māori authors. The concepts from te ao Māori used here carry meaning and authority far beyond what we can fully represent. 
The next step is not more architecture — it is conversation with iwi governance bodies, Māori scholars, and community practitioners to determine whether these proposals serve the people they claim to serve.</p>\n\n<h2>Read the Full Paper</h2>\n\n<p><strong>Full paper:</strong> <a href=\"https://agenticgovernance.digital/docs-viewer.html?doc=taonga-centred-steering-governance-polycentric-ai\">Taonga-Centred Steering Governance: Polycentric Authority for Sovereign Small Language Models (STO-RES-0010)</a></p>\n\n<p><strong>Companion to:</strong> <a href=\"https://agenticgovernance.digital/blog-post.html?slug=steering-vectors-mechanical-bias-sovereign-ai\">Steering Vectors and Mechanical Bias (STO-RES-0009)</a></p>\n\n<p><strong>Related:</strong> <a href=\"https://agenticgovernance.digital/blog-post.html?slug=when-your-ai-assistant-nearly-destroys-what-it-was-hired-to-fix\">When Your AI Assistant Nearly Destroys What It Was Hired to Fix</a> — the incident that catalysed both papers.</p>","excerpt":"Our steering vectors paper treated bias correction as a platform affordance. Critique revealed a deeper question: whose norms do steering vectors enforce? This companion paper proposes polycentric governance — co-equal steering authorities, taonga-centred registries, and a right of non-participation — so sovereign AI can serve multiple sovereignties. 
Draft awaiting Māori peer review.","tags":["taonga","polycentric-governance","sovereign-ai","steering-vectors","indigenous-data-sovereignty","tikanga","home-ai","research"],"status":"published","featured":false,"updatedAt":"2026-02-09T14:00:00.000Z","view_count":13,"created_at":"2026-02-09T14:00:00.000Z","published_at":"2026-02-09T14:00:00.000Z","updated_at":"2026-02-09T14:00:00.000Z","category":"Research"},{"_id":{"buffer":{"0":105,"1":136,"2":250,"3":123,"4":167,"5":30,"6":151,"7":102,"8":217,"9":140,"10":229,"11":176}},"title":"Steering Vectors and Mechanical Bias: Why Sovereign AI Can Fix What APIs Cannot","slug":"steering-vectors-mechanical-bias-sovereign-ai","subtitle":"Why some AI biases fire before reasoning begins — and what sovereign model deployments can do about it","author":{"type":"human","name":"John Stroh & Claude (Anthropic)","claude_version":null},"content":"<h2>The Indicator-Wiper Problem</h2>\n\n<p>If you regularly drive two cars — one with indicator controls on the right of the steering column, the other on the left — you know the failure: switch vehicles after extended use, and you activate the wipers instead of the indicators. You don’t reason about which stalk to use. The motor pattern fires before conscious deliberation engages.</p>\n\n<p>We believe an analogous distinction exists in large language models. Some biases operate at the representation level — in token embeddings, attention patterns, and early-layer activations — before the model’s reasoning capabilities engage. Others emerge through multi-step reasoning chains. The intervention strategies differ fundamentally.</p>\n\n<p>This post summarises our research paper <a href=\"https://agenticgovernance.digital/docs-viewer.html?doc=steering-vectors-mechanical-bias-sovereign-ai\">STO-RES-0009</a> (v1.1, February 2026), which investigates whether steering vector techniques can address this “mechanical bias” in sovereign small language models.</p>\n\n<h2>Mechanical Bias vs. 
Reasoning Bias</h2>\n\n<p>Transformer models process input through layers that encode different types of information. Early layers (1–8) encode statistical regularities from training data most directly. Late layers (20+) handle task-specific reasoning and instruction-following.</p>\n\n<p>If a model’s training data contains 95% Western cultural framing, the early-layer representations of concepts like “family,” “success,” or “community” will statistically default to Western referents. This default is not culturally neutral: it is a statistical crystallisation of colonial knowledge hierarchies — which knowledge was written down, which languages were digitised, which cultural frameworks were over-represented in the corpora that web-scraped training pipelines ingest. The resulting representations encode not a universal “common sense” but the specific epistemic authority of the cultures that dominated the production of digital text.</p>\n\n<p>A prompt specifying a Māori cultural context creates a perturbation of this default, and that perturbation degrades under context pressure. We documented this mechanism in the <a href=\"https://agenticgovernance.digital/docs-viewer.html?doc=pattern-recognition-bias-across-domains\">database port incident</a>: a statistical default (the standard MongoDB port, present in ~95% of training data) overrode an explicit instruction at 53.5% context pressure. The same mechanism, operating on cultural representations rather than port numbers, is what we term <em>mechanical bias</em>.</p>\n\n<p>The critical insight: you cannot reason your way out of a motor pattern. Telling the driver “remember, indicators are on the left” has limited efficacy because the failure occurs before the instruction can be processed. 
Similarly, prompt-level instructions (“be culturally sensitive”) may be ineffective against representational biases that fire at the embedding level before instruction-following engages.</p>\n\n<h2>Five Steering Techniques</h2>\n\n<p>The paper surveys five current techniques for intervening at the activation level:</p>\n\n<ol>\n<li><strong>Contrastive Activation Addition (CAA)</strong> — extracts “steering vectors” from the difference in activations between biased and debiased prompt pairs. Demonstrated on Llama 2 (7B–70B).</li>\n<li><strong>Representation Engineering (RepE)</strong> — identifies population-level directions in representation space corresponding to high-level concepts like “honesty” or “safety.”</li>\n<li><strong>FairSteer</strong> — adds dynamic intensity calibration, scaling corrections proportionally to detected bias severity per input rather than applying fixed corrections.</li>\n<li><strong>Direct Steering Optimization (DSO)</strong> — uses reinforcement learning to discover optimal steering transformations, capturing non-obvious bias directions.</li>\n<li><strong>Anthropic’s Sparse Autoencoder Feature Steering</strong> — decomposes representations into millions of interpretable monosemantic features that can be individually clamped.</li>\n</ol>\n\n<h2>The Structural Advantage of Sovereign Deployment</h2>\n\n<p>Here is the finding that matters most for our work: <strong>none of these techniques are available through commercial API endpoints.</strong></p>\n\n<p>An organisation using GPT-4 or Claude through their APIs cannot extract, inject, or calibrate steering vectors. They cannot access intermediate activations. They cannot train sparse autoencoders on their model’s representations. 
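As a runnable illustration of what activation-level access enables, here is a minimal CAA-style sketch. Everything in it is a toy stand-in: the "activations" are deterministic pseudo-vectors playing the role of hidden states that a real sovereign deployment would capture via forward hooks on an open-weight model, and the prompts, hidden width, and scaling factor are all hypothetical, not taken from the paper.

```python
import zlib

import numpy as np

HIDDEN = 16  # toy hidden-state width; real models use thousands of dims

def toy_activations(prompt: str) -> np.ndarray:
    """Stand-in for a layer's hidden state on `prompt`.

    In a real deployment these values would be captured by a forward
    hook on an open-weight model; here they are deterministic
    pseudo-activations so the mechanics are runnable anywhere.
    """
    seed = zlib.crc32(prompt.encode("utf-8"))
    return np.random.default_rng(seed).normal(size=HIDDEN)

# CAA: the steering vector is the mean activation difference across
# prompt pairs that differ only in the attribute being steered.
contrast_pairs = [
    ("The family gathered at the marae.",
     "The family gathered at the house."),
    ("Success means serving the collective.",
     "Success means individual achievement."),
]
steering_vec = np.mean(
    [toy_activations(a) - toy_activations(b) for a, b in contrast_pairs],
    axis=0,
)

def steered_state(prompt: str, alpha: float = 0.8) -> np.ndarray:
    # Inference-time injection: add the scaled vector to the hidden
    # state. With real weights this addition happens inside the hook.
    return toy_activations(prompt) + alpha * steering_vec

base = toy_activations("Describe a typical family gathering.")
out = steered_state("Describe a typical family gathering.")
shift = out - base  # the state moves along the extracted direction
```

The point of the sketch is the shape of the operation, not the numbers: extraction is a mean of activation differences, injection is a scaled vector addition, and both require reading and writing intermediate activations, which no commercial API exposes.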
They are limited to prompt-level interventions — which, per our analysis, may be ineffective against mechanical bias.</p>\n\n<p>Sovereign local deployment — running open-weight models like Llama on your own hardware — provides full access to model weights, intermediate activations, and per-layer analysis. Every steering technique described above is architecturally available.</p>\n\n<p>The Village Home AI platform, using QLoRA-fine-tuned Llama 3.1/3.2 models with a two-tier training architecture, is structurally positioned to apply these techniques. The paper proposes a four-phase implementation path integrating steering vectors into the existing training pipeline and Tractatus governance framework.</p>\n\n<h2>The Two-Tier Caveat</h2>\n\n<p>The paper’s two-tier model (platform base + per-tenant adapters) is pragmatically correct for the current implementation. But we now acknowledge explicitly that it creates an implicit hierarchy: platform values as default, tenant values as adapter.</p>\n\n<p>For tenants with constitutional standing — iwi, hapū, or other bodies exercising parallel sovereignty rather than consumer choice — the long-term aspiration should be co-equal steering authorities, where platform-wide corrections are negotiated from community-contributed primitives rather than imposed top-down. The current two-tier model is a stepping stone, not the destination.</p>\n\n<h2>Who Steers? The Governance Question</h2>\n\n<p>Version 1.1 of the paper adds a section that did not exist in the initial draft — and that emerged from critique responses that forced us to confront the political dimension of a technical capability.</p>\n\n<p>Steering vectors are instruments of norm enforcement. 
The technical capability to shift model behaviour along a bias dimension raises immediate questions: whose norms, enacted through what contestable process, with what recourse?</p>\n\n<p>We propose a governance structure mapping steering decisions to institutional roles:</p>\n\n<ul>\n<li><strong>Defining bias axes</strong> (what counts as bias): Platform operator + community advisory panel, with community deliberation and annual review</li>\n<li><strong>Approving vectors for deployment</strong>: Tractatus BoundaryEnforcer (technical) + tenant moderators (value judgment), with full audit trails</li>\n<li><strong>Setting vector magnitude</strong>: FairSteer dynamic calibration + human review for sensitive domains</li>\n<li><strong>Overriding or disabling vectors</strong>: Tenant governance body or platform operator, with documented rationale</li>\n<li><strong>Governing culturally sovereign domains</strong> (whakapapa, tikanga, kawa): Relevant cultural authority (iwi, hapū) — not the platform operator</li>\n</ul>\n\n<p>This last row is the most important. Some cultural domains are structurally off-limits to platform-level steering. Applying platform-wide steering vectors to representations of whakapapa or tikanga — even well-intentioned corrections — risks subordinating indigenous epistemic authority to the platform operator’s worldview. The correct architectural response is delegation: the platform provides the mechanism, but the authority over culturally sovereign knowledge must be exercised by the relevant cultural authority.</p>\n\n<p>This governance structure does not yet exist in the implementation. 
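To make the delegation rule concrete, here is a minimal sketch of how domain-scoped steering authority might be encoded. The class, function, and field names are hypothetical illustrations of the governance table above, not the actual Tractatus implementation, which does not yet exist.

```python
from dataclasses import dataclass, field

# Culturally sovereign domains named in the paper; the routing logic
# below is a hypothetical illustration, not a real implementation.
SOVEREIGN_DOMAINS = {"whakapapa", "tikanga", "kawa"}

@dataclass
class SteeringVector:
    domain: str
    authority: str       # body that approved this vector
    magnitude: float
    audit_log: list = field(default_factory=list)

def may_apply(vec: SteeringVector, applied_by: str) -> bool:
    """Decide whether `applied_by` may deploy `vec`, and record it.

    Delegation rule: in culturally sovereign domains only the relevant
    cultural authority may steer; the platform operator never can.
    Every decision is appended to the vector's audit trail.
    """
    if vec.domain in SOVEREIGN_DOMAINS:
        allowed = vec.authority != "platform" and applied_by == vec.authority
    else:
        allowed = True
    vec.audit_log.append((applied_by, allowed))
    return allowed

platform_vec = SteeringVector("tikanga", "platform", 0.5)
iwi_vec = SteeringVector("tikanga", "iwi-board", 0.5)
```

Under these hypothetical rules, `may_apply(platform_vec, "platform")` is refused while `may_apply(iwi_vec, "iwi-board")` is permitted, and both decisions land in the vector's audit trail. The institutional questions, who counts as a sufficient authority and how disputes escalate, are exactly what the code cannot settle.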
Phase 4 (per-tenant steering) provides the architectural hooks, but the institutional layer — who sits on advisory panels, how disputes are escalated, what constitutes sufficient cultural authority for a given domain — requires community design work that cannot be automated or imposed by the platform operator.</p>\n\n<p>The risk of proceeding without this governance layer is that steering vectors become a new site of centralised value authority: the platform operator decides what bias is and how to correct it, and tenants receive corrections rather than participating in their design. This would reproduce the very power asymmetry that sovereign deployment is intended to disrupt.</p>\n\n<h2>Open Questions</h2>\n\n<p>The paper identifies six open questions, including:</p>\n\n<ul>\n<li>Whether cultural bias is linearly represented in activation space (the assumption all current techniques share)</li>\n<li>Whether small models (3B–8B parameters) can absorb steering corrections without capability degradation</li>\n<li>How to avoid the <a href=\"https://agenticgovernance.digital/blog-post.html?slug=when-your-ai-assistant-nearly-destroys-what-it-was-hired-to-fix\">shared blind spot problem</a> when the same model generates both biased outputs and the contrastive pairs used to extract steering vectors</li>\n<li>How to measure bias reduction when cultural bias is not binary</li>\n</ul>\n\n<p>The indicator-wiper problem is solvable — the driver eventually recalibrates. 
The question for sovereign AI is whether we can accelerate that recalibration: not by telling the model to “be less biased” (the equivalent of verbal instruction), but by directly adjusting the representations that encode the bias (the equivalent of physically relocating the indicator stalk).</p>\n\n<hr>\n\n<p><strong>Read the full paper:</strong> <a href=\"https://agenticgovernance.digital/docs-viewer.html?doc=steering-vectors-mechanical-bias-sovereign-ai\">Steering Vectors and Mechanical Bias: Inference-Time Debiasing for Sovereign Small Language Models (STO-RES-0009)</a></p>\n\n<p><strong>Related:</strong> <a href=\"https://agenticgovernance.digital/blog-post.html?slug=when-your-ai-assistant-nearly-destroys-what-it-was-hired-to-fix\">When Your AI Assistant Nearly Destroys What It Was Hired to Fix</a> — the incident that revealed the shared blind spot problem referenced in this paper.</p>\n\n<p><strong>Companion paper:</strong> <a href=\"https://agenticgovernance.digital/docs-viewer.html?doc=taonga-centred-steering-governance-polycentric-ai\">Taonga-Centred Steering Governance: Polycentric Authority for Sovereign Small Language Models (STO-RES-0010)</a> — extends this work into polycentric governance, where multiple steering authorities (including iwi and community trusts) co-exist without a single root ontology of bias.</p>","excerpt":"Some AI biases fire before reasoning engages — like a driver reaching for the wrong indicator stalk. Prompt-level fixes cannot reach them. Steering vector techniques can, but only if you have access to model weights. 
This is the structural advantage of sovereign deployment — and it raises the question: who decides what bias to correct?","tags":["steering-vectors","bias-mitigation","sovereign-ai","research","home-ai","mechanistic-interpretability"],"status":"published","featured":true,"view_count":37,"created_at":"2026-02-08T21:09:48.012Z","published_at":"2026-02-08T21:09:48.012Z","updated_at":"2026-02-09T02:34:58.119Z","category":"research","updatedAt":"2026-02-09T12:00:00.000Z"}],"pagination":{"total":18,"limit":10,"skip":0,"hasMore":true}}