Why Rules and Training Aren't Enough — The Governance Challenge
Series: To Hapori, To AI — Digital Sovereignty for Indigenous Communities (Article 3 of 5)
Author: My Digital Sovereignty Ltd
Date: March 2026
Licence: CC BY 4.0 International
The Mihi That Wasn't
Before we discuss governance philosophy, let us start with a story about a mihi.
A kaumatua is preparing for a tangi. She is tired — it has been a long week, and there is much to coordinate. She asks an AI system to help her draft a mihi whakatau, a speech of greeting appropriate to the occasion. She is specific: she wants the language of whakapapa, the acknowledgement of those who have passed, the connection between the living and the dead that sits at the heart of tangihanga.
The AI produces a beautifully written speech. It is warm, professional, and compassionate. It speaks of "celebrating a life well lived," "finding strength in memories," "the healing journey ahead," and "honouring their legacy." It reads well. It sounds caring. And it is entirely wrong.
The whanau does not need to celebrate a life well lived. They need to hear the whakapapa spoken — the lines of descent that connect the departed to the living and to those who went before. They do not need a healing journey. They need the karanga, the tangi, the proper sequence of rites that have carried their people through death for generations. The kaumatua asked for tikanga, and the AI gave her Western bereavement counselling — because its training data contains a thousand grief guides from counselling websites for every one that knows what a tangi is.
The AI did not refuse the kaumatua's instruction. It did not say "I don't know your tradition." It simply replaced what she asked for with what was statistically more common in its training data. The substitution was silent. If the kaumatua were more tired than usual, or less experienced, or working under time pressure, she might not have caught it. The mihi would have been delivered, and the whanau would have received comfort from the wrong tradition — professionally worded, sincerely meant, and culturally empty.
Your phone autocorrects words. You see the red underline, and you fix it. AI autocorrects values. And there is no underline.
When Patterns Override Tikanga
The mihi is not an isolated case. The same mechanism operates in every AI conversation.
When a whanau member asks an AI system for advice about a family conflict, the system defaults to the language of individual therapy — assertiveness training, boundary-setting, self-care — because that is what dominates its training data. It does not reach for the concepts of whanaungatanga, of utu (reciprocity and balance), or the understanding that in a kinship system, resolving a conflict is not about individual outcomes but about restoring the balance of the collective.
When a community leader asks the AI to help prepare for an important hui, it defaults to corporate meeting management — agendas, action items, stakeholder engagement — because business meeting material vastly outnumbers indigenous governance material in its training data. It does not understand that a hui is not a meeting. A hui has its own tikanga, its own protocols for who speaks and when, its own rhythm that serves purposes a corporate agenda cannot comprehend.
When a community asks the AI to help with a submission on a resource consent, it defaults to standard planning language. It does not understand the concept of kaitiakitanga — that the community's relationship to the land is not one of ownership or economic interest, but of guardianship across generations.
The AI is not hostile to indigenous knowledge. It simply does not know indigenous knowledge. It knows what is statistically common, and what is statistically common is overwhelmingly Western. For indigenous communities, this is not a technical shortcoming. It is the digital continuation of a pattern that began with colonisation: the replacement of indigenous knowledge systems with Western frameworks, conducted so smoothly that many people do not notice it happening.
This is the governance problem. Not malice. Not incompetence. Structural bias, operating silently.
Why More Rules Don't Solve It
The instinct of most organisations, when confronted with AI risks, is to write policies. Acceptable use policies. AI ethics guidelines. Terms of service. Responsible AI frameworks.
These documents are not useless, but they share a fundamental limitation: they rely on the AI system to follow them.
An AI system does not read your policy document and decide to comply. It generates responses based on statistical patterns in its training data. If those patterns conflict with your policy, the patterns win — not because the AI is rebellious, but because it does not understand policies. It understands patterns.
You can fine-tune a model, adjusting its training to emphasise certain behaviours. This helps, but it does not solve the underlying problem. Fine-tuning layers new patterns on top of existing ones; it does not remove them. Under pressure, unusual circumstances, or novel questions, the old patterns reassert themselves. The plain-language summary is simple: training wears off.
Writing a policy that says "Our AI will respect our community's tikanga" is like writing a policy that says "Our river will not flood." The river does not read policies. If you want to prevent flooding, you need to build stopbanks — structural interventions that operate regardless of what the river intends.
AI governance requires the same approach. Not rules the AI is expected to follow, but structures that operate independently of the AI, checking its behaviour from the outside.
What Tikanga Teaches Us About Governance
The insight that some decisions cannot be reduced to rules is not new. It is ancient, and indigenous governance traditions have understood it for centuries.
Tikanga Maori is not a rulebook. It is a living system of protocols, values, and practices that guides behaviour within a relational context. The correct action in a given situation depends not on a written rule but on the relationships involved, the context, the precedent, and the mana of the people present. A kaumatua making a judgment at a hui is not applying a formula — she is exercising wisdom accumulated over a lifetime within a specific community.
This is precisely the kind of judgment that AI cannot perform. A system trained on statistical patterns cannot understand mana, cannot weigh relationships, cannot sense the tono (the call, the pull) of a situation. It can process information, but it cannot exercise rangatiratanga — the self-determining authority that comes from being embedded in a community with accountability to that community.
The philosopher Ludwig Wittgenstein spent his career exploring the boundary between what can be stated precisely and what lies beyond precise statement. His conclusion — that "whereof one cannot speak, thereof one must be silent" — maps directly onto the distinction between the questions AI can help with and those it cannot. "When is the next hui?" has a definite answer that an AI can look up. "How should we approach this korero with the neighbouring hapu?" involves judgment, relationships, and tikanga that resist systematic treatment.
Isaiah Berlin, the political philosopher, argued that some human values are genuinely incompatible — liberty and equality, tradition and progress, individual conscience and communal harmony. There is no formula that resolves these tensions. They require ongoing human judgment, negotiation, and the kind of practical wisdom that communities develop over generations.
Indigenous governance traditions have held this understanding for far longer than Western philosophy has articulated it. The concept of kaitiakitanga already contains the recognition that guardianship involves ongoing judgment, not fixed rules. The practice of whakawhiti korero (exchange of talk) at hui already embodies the understanding that collective wisdom emerges from structured dialogue, not from optimisation.
Any AI governance framework that pretends it can systematise these judgments is not governing — it is colonising. Again.
How Village Governs AI Structurally
Village does not rely on telling the AI to behave. It builds governance into the architecture — structures that operate independently of the AI and cannot be overridden by it.
The boundary enforcer blocks the AI from making values decisions. When a question involves cultural protocols, ethical judgments, or relational context, the system halts and routes the question to a human — your moderator, your kaumatua, your runanga. The AI cannot override this boundary, because the boundary operates outside the AI's control.
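To make the mechanism concrete, here is a minimal sketch in Python. Every name in it (the category tags, the classifier, the escalation hook) is a hypothetical stand-in for illustration, not the actual Tractatus interface.

```python
# Illustrative sketch only: the names, categories, and hooks here are
# hypothetical stand-ins, not the actual Tractatus / Village API.

VALUES_CATEGORIES = {"cultural_protocol", "ethical_judgement", "relational_context"}

class BoundaryEnforcer:
    """Halts the pipeline before the AI is asked a values question."""

    def __init__(self, classify, escalate_to_human):
        # `classify(question)` returns a set of category tags;
        # `escalate_to_human` routes to a moderator, kaumatua, or runanga.
        self.classify = classify
        self.escalate_to_human = escalate_to_human

    def route(self, question, generate_response):
        categories = self.classify(question)
        if categories & VALUES_CATEGORIES:
            # The enforcer sits outside the model, so there is nothing
            # for the AI's training patterns to override.
            return self.escalate_to_human(question, categories)
        return generate_response(question)
```

The design point is in the control flow: when a values category is detected, the model is never invoked, so there is no generated answer for a human to be tempted to accept.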
The instruction persistence system keeps your community's explicit instructions in a separate store that the AI cannot modify. Every response the AI generates is checked against these stored instructions. If a response contradicts an instruction, the instruction takes precedence, regardless of what the AI's training patterns suggest.
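A sketch of the same idea follows, again with hypothetical names: the store is writable only by the community, and the contradiction check runs outside the model.

```python
# Illustrative sketch only: hypothetical names, not the actual Tractatus API.

class InstructionStore:
    """Community instructions live outside the model, read-only to the AI."""

    def __init__(self, instructions):
        self._instructions = tuple(instructions)  # the AI has no write path

    def violations(self, response, contradicts):
        # `contradicts(response, instruction)` is an external check;
        # the AI is never asked to judge its own compliance.
        return [i for i in self._instructions if contradicts(response, i)]

def governed_reply(question, generate, store, contradicts, escalate):
    response = generate(question)
    broken = store.violations(response, contradicts)
    if broken:
        # A stored instruction always outranks the model's training patterns.
        return escalate(question, response, broken)
    return response
```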
The cross-reference validator checks the AI's proposed actions against your community's actual records. It does not ask the AI whether its response is correct; that would be asking the system to verify itself. Instead, it applies mathematical measurement, a mechanism that works in a fundamentally different way from the AI, to determine whether the response is grounded in your community's real content.
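One simple way such a grounding check could work is word-overlap scoring against stored records. The sketch below is illustrative only: the validator's real measures are not described here, and the threshold is an invented example.

```python
# Illustrative sketch only. A production validator would use sturdier
# measures (entity checks, embeddings); the point is that the check is
# arithmetic over stored records, not another question put to the AI.

def grounding_score(response: str, records: list[str]) -> float:
    """Fraction of response words that also appear in community records."""
    response_words = set(response.lower().split())
    record_words: set[str] = set()
    for record in records:
        record_words.update(record.lower().split())
    if not response_words:
        return 0.0
    return len(response_words & record_words) / len(response_words)

def is_grounded(response: str, records: list[str], threshold: float = 0.8) -> bool:
    # Below the threshold, the response is flagged rather than delivered.
    return grounding_score(response, records) >= threshold
```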
The context pressure monitor watches for degraded operating conditions — situations where the AI is under strain, processing complex requests, or encountering novel questions. When it detects these conditions, it increases the intensity of verification. The harder the question, the more scrutiny the response receives.
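A toy version of such a monitor might combine a few strain signals into a single score and scale verification accordingly. The signals, weights, and thresholds below are invented for illustration, not taken from the framework.

```python
# Illustrative sketch only: the signals and weights are invented.

def pressure_score(context_tokens: int, novelty: float, complexity: float) -> float:
    """Combine strain signals (each 0-1) into a single 0-1 pressure score."""
    length_strain = min(context_tokens / 100_000, 1.0)  # long contexts degrade output
    return min(0.5 * length_strain + 0.3 * novelty + 0.2 * complexity, 1.0)

def verification_passes(score: float) -> int:
    # The harder the operating conditions, the more independent checks
    # a response receives before it is delivered.
    if score < 0.3:
        return 1
    if score < 0.7:
        return 2
    return 3
```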
These are not policies. They are structures. They operate whether or not the AI agrees with them, in the same way that a stopbank operates whether or not the river agrees with it.
The Difference Between Aspiration and Architecture
Many organisations publish AI ethics statements. Village does not rely on ethics statements. It relies on architectural constraints that enforce governance structurally.
The distinction matters because aspiration is what you hope will happen. Architecture is what actually happens. Your community does not rely on a hope that funds will be handled properly — it requires proper financial oversight, with accountability to the collective. That is architectural governance. The same principle applies to AI.
The Tractatus Framework — Grounded in Te Tiriti
The governance architecture behind Village AI is called the Tractatus framework. It is worth knowing three things about it.
It is open. The entire framework is published under an Apache 2.0 open-source licence. Anyone can read the code, inspect the rules, and verify that the governance does what it claims to do. This is the opposite of Big Tech AI governance, where the rules are proprietary and the reasoning is hidden. When Google or OpenAI tells you their AI is "aligned with human values," you have no way to check. With Tractatus, you can read every line.
It is transparent. Every governance decision is logged. When the boundary enforcer blocks the AI from making a values decision, that event is recorded. When the cross-reference validator catches a discrepancy, it is recorded. Your moderators can see exactly what the governance system did and why. There is no hidden layer where decisions are made without accountability.
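An append-only event log is the natural shape for this kind of record. The sketch below shows one plausible form; the field names are assumptions, not the framework's actual schema.

```python
# Illustrative sketch only: the field names are hypothetical.

import json
import time

def log_governance_event(path: str, component: str, decision: str, reason: str) -> None:
    """Append one governance decision to an audit trail moderators can read."""
    event = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "component": component,  # e.g. "boundary_enforcer"
        "decision": decision,    # e.g. "blocked", "flagged", "passed"
        "reason": reason,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")
```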
It is grounded in Te Tiriti o Waitangi. The Tractatus framework's partnership model is not a generic "stakeholder engagement" framework borrowed from corporate governance. It draws explicitly on Te Tiriti principles — particularly the Article Two guarantee of tino rangatiratanga (full authority) over taonga. In the context of AI governance, this means that communities retain full authority over their knowledge, their data, and the rules that govern how AI interacts with both. The framework does not grant this authority — it recognises that the authority already exists and builds architecture to enforce it.
This grounding in Te Tiriti is published in the open-source framework at agenticgovernance.digital. It is not a marketing claim — it is an architectural commitment that anyone can inspect, critique, and hold the platform accountable to.
We acknowledge that grounding a technology framework in Te Tiriti carries obligations that go beyond code. Whether Village meets those obligations is a judgment for Maori communities to make, not for the platform to assert.
In the next article, we will look at what Village AI actually does today in practice — what it can help your community with, how bias is addressed through the vocabulary system, and what is still a work in progress.
This is Article 3 of 5 in the "To Hapori, To AI" series. For the full governance architecture, visit Village AI on Agentic Governance.
Previous: Big Tech AI vs. Your Community's AI — Why the Difference Matters
Next: What's Actually Running in Village Today