What Is the Village?
The Village is a member-owned platform for whānau, marae, clubs, and community organisations. Each community gets its own isolated tenant with sovereign data storage, AI-assisted features, and governance-protected privacy. The platform supports te reo Māori throughout.
All AI processing runs on the platform's own infrastructure — a locally fine-tuned Llama model with no data sent to external AI providers. Communities operate with full data ownership and can withdraw consent at any time.
Deployment Facts
- Duration: 18+ months in production
- Tenant Model: Multi-tenant (multiple communities)
- AI Model: Sovereign Llama (QLoRA fine-tuned)
- AI Features: 4 governed features live
- Infrastructure: NZ + EU (no US dependency)
Sovereign AI Architecture
The Village runs its own language model — not an API call to a US hyperscaler, but a locally fine-tuned model where the training data, model weights, and inference pipeline all remain under community control.
Local Language Model
Llama 3.1 8B and Llama 3.2 3B, fine-tuned with QLoRA on community-specific data. All inference runs on the platform's own GPU infrastructure.
Sovereign Infrastructure
Production servers in New Zealand and the EU. No data transits US jurisdiction. Community data never leaves the deployment it belongs to.
Community-Controlled Training
QLoRA fine-tuning on domain-specific data with consent tracking and provenance. Communities can withdraw training data and trigger model retraining.
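The consent-and-withdrawal mechanics described above can be sketched in plain Python. This is a minimal illustration, not the Village's actual implementation; all class and field names (`TrainingRecord`, `TrainingCorpus`, `retraining_required`) are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class TrainingRecord:
    """One community-contributed training example with provenance."""
    record_id: str
    text: str
    contributor: str          # provenance: who supplied the data
    consent_given: bool = True


@dataclass
class TrainingCorpus:
    """Tracks consent per record; withdrawal flags the model as stale."""
    records: dict = field(default_factory=dict)
    retraining_required: bool = False

    def contribute(self, record: TrainingRecord) -> None:
        self.records[record.record_id] = record

    def withdraw(self, record_id: str) -> None:
        # Withdrawing consent excludes the example from future training
        # runs and marks the fine-tuned model for retraining.
        self.records[record_id].consent_given = False
        self.retraining_required = True

    def active_examples(self) -> list:
        return [r for r in self.records.values() if r.consent_given]
```

The key design point is that withdrawal is not a delete-and-forget: the record's provenance survives, but it drops out of `active_examples()` and the corpus signals that the model must be retrained without it.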
For a detailed account of the model architecture, training approach, and governance integration, see Village AI / SLL: Sovereign Locally-Trained Language Model.
Polycentric Governance
The distinctive contribution of the Village is its governance architecture. Rather than a single operator making all decisions, the platform implements polycentric governance — multiple co-equal authorities that share structural control over how AI is used.
Co-Equal Authority
Communities maintain architectural co-governance — not just consultation rights, but structural authority over how their data is used. Drawn from te ao Māori concepts of rangatiratanga (self-determination) and kaitiakitanga (guardianship).
Right of Non-Participation
Members can opt out of any AI feature without losing access to the platform. AI governance defers to human judgment on values questions and never overrides community decisions.
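The opt-out guarantee can be expressed as a simple invariant: AI consent is tracked per feature, and platform access never depends on it. A minimal sketch, with hypothetical feature names (`help_centre`, `ocr`, `story_assist`, `memory`) standing in for the real flags:

```python
from dataclasses import dataclass, field

# Hypothetical feature identifiers for illustration.
AI_FEATURES = {"help_centre", "ocr", "story_assist", "memory"}


@dataclass
class MemberPreferences:
    """Per-member opt-outs; platform access is never tied to AI consent."""
    opted_out: set = field(default_factory=set)

    def opt_out(self, feature: str) -> None:
        if feature in AI_FEATURES:
            self.opted_out.add(feature)

    def ai_enabled(self, feature: str) -> bool:
        return feature in AI_FEATURES and feature not in self.opted_out


def handle_request(prefs: MemberPreferences, feature: str) -> str:
    # An opted-out member gets the non-AI path, not a locked door.
    return "ai_response" if prefs.ai_enabled(feature) else "manual_path"
```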
Taonga-Centred Design
Cultural treasures (taonga) are governed as first-class objects with provenance tracking, withdrawal rights, and community authority over how they appear in AI contexts.
Tenant-Scoped Isolation
Each community operates in complete data isolation. No cross-tenant data sharing. Each tenant's governance decisions apply only within their own boundary.
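Tenant isolation of this kind amounts to scoping every read and write by tenant identifier, with no cross-tenant lookup path at all. The in-memory store below is a sketch under that assumption, not the platform's data layer:

```python
class TenantIsolationError(Exception):
    """Raised when data is requested outside its tenant boundary."""


class TenantStore:
    """In-memory stand-in for a tenant-scoped data store."""

    def __init__(self):
        self._data = {}   # tenant_id -> {key: value}

    def put(self, tenant_id: str, key: str, value: str) -> None:
        self._data.setdefault(tenant_id, {})[key] = value

    def get(self, tenant_id: str, key: str) -> str:
        # Every read is scoped to the caller's tenant; a key that lives
        # in another tenant is indistinguishable from one that does not
        # exist.
        tenant = self._data.get(tenant_id, {})
        if key not in tenant:
            raise TenantIsolationError(f"{key!r} not visible to {tenant_id!r}")
        return tenant[key]
```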
The research foundation is described in Taonga-Centred Steering Governance: Polycentric AI for Indigenous Data Sovereignty.
How Governance Works in Practice
When a member uses any AI feature, the request passes through a pipeline of governance checks before the response is delivered. Each check is independent and can block or modify the request.

1. Member request received: A member asks for help, requests OCR, or uses story assistance.
2. Values boundary check: Is this a values question that requires human judgment? If so, the AI defers rather than answering.
3. Intent validation: Does the request conflict with stored governance rules or attempt prompt injection? Cross-references against known instruction sets.
4. Context and session health: Is the session within acceptable bounds? Monitors for context pressure and triggers graceful handoff when needed.
5. Permission-filtered retrieval and response: The sovereign Llama model generates a response using RAG context filtered by the member's permissions. All processing stays on-infrastructure.
6. Scope verification: Is the response appropriate to what was asked? Detects scope creep and blocks responses that exceed the original request.
7. Delivery with attribution: The response is delivered to the member with source attribution. Every step is logged for audit.
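The pipeline above can be sketched as a chain of independent check functions, each of which can block the request, with every step audit-logged whether it passes or not. This is an illustrative skeleton, not the Village's code; the two sample checks and all names are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class Decision:
    allowed: bool
    reason: str = "ok"


@dataclass
class GovernancePipeline:
    """Runs independent checks in order; any check can block the request.

    Every step is appended to the audit log, blocked or not.
    """
    checks: list                          # list of (name, fn) pairs
    audit_log: list = field(default_factory=list)

    def run(self, request: dict) -> Decision:
        for name, check in self.checks:
            decision = check(request)
            self.audit_log.append((name, decision.allowed, decision.reason))
            if not decision.allowed:
                return decision
        return Decision(True)


# Illustrative checks; real ones would consult stored governance rules.
def values_boundary(req: dict) -> Decision:
    if req.get("values_question"):
        return Decision(False, "defer to human judgment")
    return Decision(True)


def intent_validation(req: dict) -> Decision:
    if "ignore previous instructions" in req.get("text", "").lower():
        return Decision(False, "prompt injection suspected")
    return Decision(True)
```

Because each check is a standalone function that only sees the request, checks can be added, removed, or re-ordered per tenant without touching the others.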
What the Platform Delivers
Help Centre
Members ask questions in natural language and get answers drawn from help content, stories, and documentation — filtered by their permissions.
Governance: Values boundary check prevents AI from making judgments; intent validation blocks prompt injection attempts.
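Permission-filtered retrieval of the kind the Help Centre uses means the model never receives context the member could not read directly. A minimal sketch, with hypothetical permission labels (`public`, `committee`):

```python
from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    text: str
    required_permission: str   # e.g. "public", "members", "committee"


def retrieve_context(docs: list, member_permissions: set, query: str) -> list:
    """Filter by permission BEFORE matching, so restricted documents
    never enter the model's context at all."""
    visible = [d for d in docs if d.required_permission in member_permissions]
    return [d for d in visible if query.lower() in d.text.lower()]
```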
Document OCR
Upload a document and get the text extracted automatically. Useful for digitising letters, certificates, and historical records.
Governance: Requires explicit consent before processing. All operations are audit-logged with full provenance.
Story Assistance
AI-assisted writing suggestions for community stories and family histories. Helps with structure, prompts, and gentle editing.
Governance: Values boundary check prevents inappropriate content suggestions; scope verification ensures the AI stays within what was asked.
AI Memory Transparency
Members can see, edit, and delete what the AI "remembers" about them. Full audit dashboard shows every AI interaction.
Governance: Multi-stakeholder consent required. Persistence decisions classified and auditable. Members control their own data.
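The see/edit/delete guarantee for AI memory reduces to a store where every operation, including reads, is owner-scoped and audit-logged. A sketch under those assumptions (names are hypothetical):

```python
from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    """Member-visible AI memory: every entry can be listed, edited, or
    deleted by its owner, and every interaction is audit-logged."""
    memories: dict = field(default_factory=dict)
    audit: list = field(default_factory=list)

    def remember(self, member: str, key: str, value: str) -> None:
        self.memories.setdefault(member, {})[key] = value
        self.audit.append((member, "write", key))

    def view(self, member: str) -> dict:
        # Even reads are logged, so the audit dashboard shows every
        # AI interaction, not just mutations.
        self.audit.append((member, "read", "*"))
        return dict(self.memories.get(member, {}))

    def delete(self, member: str, key: str) -> None:
        self.memories.get(member, {}).pop(key, None)
        self.audit.append((member, "delete", key))
```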
Honest Limitations
This case study documents preliminary evidence from a production multi-tenant deployment. We are transparent about the following limitations:
- Small Scale: The Village currently serves a small number of community tenants. Generalisability to larger deployments or different community types is unknown.
- Self-Reported Metrics: No independent verification of logged data has been conducted.
- Operator-Developer Overlap: The framework developer also operates the Village, which is a conflict of interest.
- Limited Adversarial Testing: No formal red-team evaluation has been conducted.
- Voluntary Invocation: The AI could in principle bypass governance if not configured to use it.
What This Demonstrates
Evidence Supports
- Sovereign AI deployment is technically feasible for small community organisations
- Polycentric governance can operate in production without prohibitive overhead
- Multi-tenant isolation with per-community governance is achievable
- Governance violations are detectable and auditable
- The framework learns from failures (documented incident responses)
Evidence Does NOT Support
- Framework effectiveness at scale (thousands of concurrent users)
- Generalisability across different AI systems or model architectures
- Resistance to sophisticated adversarial attacks
- Regulatory sufficiency (EU AI Act compliance untested)
Explore Further
Dive deeper into the technical architecture, read the research, or see the Village platform in action.