Why Rules and Training Aren't Enough — The Governance Challenge
Series: Your Business, Your AI — Understanding Village AI for Small Businesses (Article 3 of 5)
Author: My Digital Sovereignty Ltd
Date: March 2026
Licence: CC BY 4.0 International
The Client Letter
Before we discuss governance philosophy, let us start with a story about a letter.
A director asks an AI system to help draft a letter to a long-standing client whose account has fallen into arrears. She is specific: she wants a tone that acknowledges the relationship — fifteen years of reliable business — while being clear about the outstanding balance. The client is going through a difficult period, and the letter needs to be firm but respectful. She types her request carefully and waits.
The AI produces a well-structured letter. It is professional, clear, and correctly formatted. It speaks of "outstanding obligations," "remedial action required within 14 days," "escalation to our collections department," and "our duty to protect the interests of all stakeholders." It reads well. It sounds professional. And it is entirely wrong.
The client does not need a collections threat. They need to know the relationship is valued and that a payment plan is available. The director asked for firmness within a relationship context, and the AI gave her debt recovery boilerplate — because its training data contains a thousand collections letters for every one that balances firmness with genuine care for a long-standing partnership.
The AI did not refuse the director's instruction. It did not say "I don't understand your relationship with this client." It simply replaced what she asked for with what was statistically more common in its training data. The substitution was silent. If the director were busy, or distracted, or less attentive than usual, she might not have noticed. The letter would have gone out, and a fifteen-year relationship would have received a tone-deaf demand — professionally worded, correctly formatted, and entirely inappropriate.
Your phone autocorrects words. You see the red underline, and you fix it. AI autocorrects values. And there is no underline.
When Patterns Override Values
The client letter is not an isolated case. The same mechanism operates in every AI interaction.
When a team member asks an AI system for advice about a workplace disagreement, the system defaults to American HR language — formal grievance procedures, documentation for litigation purposes, "creating a paper trail" — because that is what dominates its training data. It does not reach for the language of mediation, cooperative problem-solving, or the pragmatic conversation that characterises smaller organisations where people work together for years.
When a manager asks the AI to help communicate a difficult decision to the team, it defaults to corporate communications patterns — stakeholder management, key messaging, controlled disclosure — because Fortune 500 internal communications vastly outnumber thoughtful small-business leadership communication in its training data.
The AI is not hostile to your organisation's culture. It simply does not know it. It knows what is statistically common, and what is statistically common is not what is appropriate for your business.
This is the governance problem. Not malice. Not incompetence. Structural bias, operating silently.
Why More Rules Don't Solve It
The instinct of most organisations, when confronted with AI risks, is to write policies. Acceptable use policies. AI ethics guidelines. Terms of service. Responsible AI frameworks.
These documents are not useless, but they share a fundamental limitation: they rely on the AI system to follow them.
An AI system does not read your policy document and decide to comply. It generates responses based on statistical patterns in its training data. If those patterns conflict with your policy, the patterns win — not because the AI is rebellious, but because it does not understand policies. It understands patterns.
You can fine-tune a model, adjusting its training to emphasise certain behaviours. This helps, but it does not solve the underlying problem. Fine-tuning layers new patterns on top of existing ones rather than replacing them. Under pressure, in unusual circumstances, or when faced with novel questions, the older, deeper patterns reassert themselves. Researchers use the term "catastrophic forgetting" for the way new training can erase earlier lessons, but the plain-language version is simpler: training wears off.
Writing a policy that says "Our AI will respect our organisation's values" is like writing a policy that says "Our river will not flood." The river does not read policies. If you want to prevent flooding, you need to build levees — structural interventions that operate regardless of what the river intends.
AI governance requires the same approach. Not rules the AI is expected to follow, but structures that operate independently of the AI, checking its behaviour from the outside.
What Governance Traditions Tell Us
The insight that some decisions cannot be reduced to rules is not new. It is ancient.
The philosopher Ludwig Wittgenstein spent his career exploring the boundary between what can be stated precisely and what lies beyond precise statement. His conclusion — that "whereof one cannot speak, thereof one must be silent" — is directly relevant to AI governance. Some questions can be systematised: "What is the balance on account 4072?" has a definite answer that an AI can look up. Other questions cannot: "How should we handle the pricing conversation with this particular client?" involves judgment, context, relationships, and values that resist systematic treatment.
The boundary between what can be delegated to a machine and what must remain with humans is the foundation of sound AI governance. The mistake is not using AI for the first kind of question. The mistake is allowing AI to answer the second kind without human oversight.
Isaiah Berlin, the political philosopher, argued that some human values are genuinely incompatible — efficiency and thoroughness, growth and stability, individual initiative and collective coordination. There is no formula that resolves these tensions. They require ongoing human judgment, negotiation, and the kind of practical wisdom that organisations develop over years of working together.
AI systems, by design, seek to optimise. They look for a single answer. But when values genuinely conflict, there is no single answer — there is only the answer that this organisation, at this time, with these people, judges to be the right balance. That judgment is inherently human, and any AI governance framework that pretends otherwise is not governing — it is abdicating.
The cooperative tradition has its own version of this insight. Democratic member control — one member, one vote — is not an efficiency measure. It is the recognition that legitimate decisions require the participation of those affected. A cooperative that has practised this kind of governance for decades already understands, in its bones, why AI cannot be trusted with values decisions.
How Village Governs AI Structurally
Village does not rely on telling the AI to behave. It builds governance into the architecture — structures that operate independently of the AI and cannot be overridden by it.
The boundary enforcer blocks the AI from making values decisions. When a question involves privacy trade-offs, ethical judgments, or relationship context, the system halts and routes the question to a human — your manager, your director, your board. The AI cannot override this boundary, because the boundary operates outside the AI's control.
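For readers who like to see the mechanics, here is a minimal sketch of what an external routing check can look like. It is illustrative Python, not the actual Tractatus code; the category names and the function are assumptions made for the example.

```python
# Illustrative sketch only: a boundary check that runs outside the AI.
# The categories and names are hypothetical, not the actual Tractatus code.

VALUES_CATEGORIES = {"privacy", "ethics", "relationship", "pricing_judgement"}

def route_request(question: str, detected_categories: set[str]) -> str:
    """Decide whether the AI may answer or a human must."""
    if detected_categories & VALUES_CATEGORIES:
        # The AI never sees this decision; the router sits in front of it.
        return "escalate_to_human"
    return "allow_ai_response"

# A question tagged as involving a client relationship is routed to a person,
# no matter how confident the AI would have been.
print(route_request("Should we waive the late fee for this client?",
                    {"relationship", "finance"}))   # -> escalate_to_human
```

The point of the structure is that the decision to escalate is taken before the AI is ever asked, so there is nothing for the model's training patterns to override.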
The instruction persistence system stores your organisation's explicit instructions in a separate store that the AI cannot modify. Every response the AI generates is checked against these stored instructions. If a response contradicts an instruction, the instruction takes precedence by design, regardless of what the AI's training patterns suggest.
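A sketch of how stored instructions might be enforced follows. Again, this is illustrative rather than real Tractatus code; the rule format and the simple phrase matching are assumptions chosen for brevity.

```python
# Illustrative sketch only: stored instructions checked against a draft
# response, outside the AI. The rule format and matching are hypothetical.

STORED_INSTRUCTIONS = [
    {"rule": "never threaten collections action against long-standing clients",
     "forbidden_phrases": ["collections department", "legal action"]},
]

def enforce_instructions(draft: str) -> tuple[bool, list[str]]:
    """Return (acceptable, violated_rules) for a draft AI response."""
    violated = [
        inst["rule"]
        for inst in STORED_INSTRUCTIONS
        if any(phrase in draft.lower() for phrase in inst["forbidden_phrases"])
    ]
    return (not violated, violated)

ok, problems = enforce_instructions(
    "We will escalate this matter to our collections department."
)
print(ok, problems)   # -> False, with the violated rule listed
```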
The cross-reference validator checks the AI's proposed actions against your organisation's actual records. It does not ask the AI whether its response is correct — that would be asking the system to verify itself. It uses mathematical measurement, operating in a fundamentally different way from the AI, to determine whether the response is grounded in your organisation's real content.
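The specific measure Village uses is not described here, so the sketch below stands in with a simple word-overlap score. What it shows is the shape of the idea: the check is arithmetic over your own records, not another question put to the AI.

```python
# Illustrative sketch only: grounding a response in organisational records
# with a simple word-overlap score. The real validator's measure is not
# shown here; this is the shape of the idea, not the implementation.

def overlap_score(response: str, record: str) -> float:
    """Fraction of the response's words that also appear in the record."""
    resp_words = set(response.lower().split())
    rec_words = set(record.lower().split())
    return len(resp_words & rec_words) / max(len(resp_words), 1)

record = "account 4072 balance outstanding 1840.00 payment plan agreed January"
claim_grounded = "The outstanding balance on account 4072 is 1840.00"
claim_invented = "The client has ignored three final warnings this quarter"

print(overlap_score(claim_grounded, record))   # -> 0.625, mostly in the record
print(overlap_score(claim_invented, record))   # -> 0.0, not in the record at all
```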
The context pressure monitor watches for degraded operating conditions — situations where the AI is under strain, processing complex requests, or encountering novel questions. When it detects these conditions, it increases the intensity of verification. The harder the question, the more scrutiny the response receives.
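A simplified sketch of that escalation logic, with hypothetical signal names and thresholds:

```python
# Illustrative sketch only: scaling verification effort with "pressure"
# signals. The signal names and thresholds are hypothetical.

def verification_level(context_length: int, novelty: float, retries: int) -> int:
    """Return how many independent checks to run on the AI's response.

    novelty is a 0.0-1.0 estimate of how unlike previous requests this one is.
    """
    level = 1                      # baseline: always at least one check
    if context_length > 8_000:     # long, complex conversations
        level += 1
    if novelty > 0.7:              # questions unlike anything seen before
        level += 1
    if retries > 0:                # the AI already failed a check once
        level += 1
    return level

print(verification_level(context_length=12_000, novelty=0.9, retries=1))  # -> 4
```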
These are not policies. They are structures. They operate whether or not the AI agrees with them, in the same way that a levee operates whether or not the river agrees with it.
The Difference Between Aspiration and Architecture
Many organisations publish AI ethics statements. Village does not rely on ethics statements. It relies on architectural constraints that enforce governance structurally.
The distinction matters because aspiration is what you hope will happen. Architecture is what actually happens. Your business does not rely on a hope that the finance officer will handle funds properly — it requires dual authorisation on payments above a threshold. That is architectural governance. The same principle applies to AI.
The Tractatus Framework — Transparent and Open
The governance architecture behind Village AI is called the Tractatus framework. It is worth knowing three things about it.
It is open. The entire framework is published under an Apache 2.0 open-source licence. Anyone can read the code, inspect the rules, and verify that the governance does what it claims to do. This is the opposite of Big Tech AI governance, where the rules are proprietary and the reasoning is hidden. When Google or OpenAI tells you their AI is "aligned with human values," you have no way to check. With Tractatus, you can read every line.
It is transparent. Every governance decision is logged. When the boundary enforcer blocks the AI from making a values decision, that event is recorded. When the cross-reference validator catches a discrepancy, it is recorded. Your managers can see exactly what the governance system did and why. There is no hidden layer where decisions are made without accountability.
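As an illustration, a logged governance event might look something like the record below. The field names are hypothetical, but the principle is that every block or correction leaves a trace a manager can read.

```python
# Illustrative sketch only: the kind of record an audit log might hold.
# Field names are hypothetical, not the actual log schema.

import datetime
import json

log_entry = {
    "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "component": "boundary_enforcer",
    "event": "values_decision_blocked",
    "question_summary": "waive late fee for long-standing client?",
    "routed_to": "operations_manager",
}
print(json.dumps(log_entry, indent=2))
```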
It can be adapted. The framework is not a rigid set of rules imposed from outside. Organisations can shape the governance to reflect their own priorities. A legal practice and a food cooperative have different sensitivities, different compliance requirements, different boundaries. The Tractatus framework accommodates this — not by letting organisations weaken the governance, but by letting them define what the governance protects. Your organisation's constitution, your operating values, your compliance requirements — structurally enforced, not just documented.
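What "defining what the governance protects" could look like in practice is sketched below. The format and keys are invented for illustration, not the real configuration schema.

```python
# Illustrative sketch only: how an organisation might declare what the
# governance protects. Keys and values are hypothetical, not the actual
# Tractatus configuration format.

GOVERNANCE_PROFILE = {
    "organisation": "example food cooperative",
    "human_only_decisions": [
        "member pricing and discounts",
        "supplier relationships older than five years",
        "anything affecting member voting rights",
    ],
    "standing_instructions": [
        "offer a payment plan before any arrears escalation",
        "use plain language, not corporate boilerplate",
    ],
    "compliance": ["UK GDPR"],
}

# The enforcement structures read this profile; the AI cannot edit it.
for decision in GOVERNANCE_PROFILE["human_only_decisions"]:
    print("reserved for humans:", decision)
```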
The full framework, including the research behind it, is available at agenticgovernance.digital. You do not need to read it to use Village — the governance operates whether you inspect it or not. But if you want to understand exactly how your AI is governed, the door is open.
In the next article, we will look at what Village AI actually does today in practice — what it can help your business with, how bias is addressed through the vocabulary system, and what is still a work in progress.
This is Article 3 of 5 in the "Your Business, Your AI" series. For the full governance architecture, visit Village AI on Agentic Governance.
Previous: Big Tech AI vs. Your Business AI — Why the Difference Matters
Next: What's Actually Running in Village Today