Why Rules and Training Are Not Enough — The Governance Challenge


Series: AI Governance for Community Leaders — Understanding Village AI for Trustees, Councillors, and Board Members (Article 3 of 5)
Author: My Digital Sovereignty Ltd
Date: March 2026
Licence: CC BY 4.0 International


The Planning Communication

Before we discuss governance philosophy, let us start with a practical example.

A council officer asks an AI system to help draft a letter to residents about a proposed development adjacent to a conservation area. She is specific: she wants measured language that acknowledges residents' concerns, explains the planning framework, and makes clear the council's obligation to weigh competing interests transparently. She types her request carefully and waits.

The AI produces a well-structured letter. It is fluent, professional, and carefully worded. It speaks of "managing stakeholder expectations," "aligning with strategic priorities," "leveraging community engagement opportunities," and "positioning the development narrative." It reads well. It sounds right. And it is entirely wrong.

The residents do not need their expectations managed. They need to understand what their council decided and why. They do not need a narrative. They need a transparent account of the planning considerations, the representations received, and the reasoning behind the decision. The officer asked for civic accountability, and the AI gave her corporate stakeholder management — because its training data contains vastly more business communications than civic correspondence.

The AI did not refuse the officer's instruction. It did not say "I do not understand public accountability." It simply replaced what she asked for with what was statistically more common in its training data. The substitution was silent. If the officer were under time pressure, or less experienced, or relying on the AI more than she should, she might not have noticed. The letter would have gone out, and the residents would have received a communication that treated them as stakeholders to be managed rather than constituents to be served.

Your phone autocorrects words. You see the red underline, and you fix it. AI autocorrects values. And there is no underline.

When Patterns Override Values

The planning communication is not an isolated case. The same mechanism operates in every AI interaction.

When a constituent asks an AI system for information about a difficult planning matter, the system defaults to the language of individual consumer rights — complaint escalation, regulatory recourse, adversarial framing — because that is what dominates its training data. It does not reach for the language of community interest, collective deliberation, and the long view that comes from understanding that this decision will shape a neighbourhood for decades.

When a board secretary asks the AI to help draft minutes, it defaults to corporate board language — "the board noted," "it was resolved," "action items were assigned" — because corporate minutes vastly outnumber community governance minutes in its training data. The nuance of community deliberation — where dissent is recorded, where a decision was reached reluctantly, where the reasoning matters as much as the resolution — is flattened into corporate efficiency.

The AI is not hostile to civic values. It simply does not know civic values in any depth. It knows what is statistically common, and what is statistically common is not what is most appropriate for organisations with a duty of care to their communities.

This is the governance problem. Not malice. Not incompetence. Structural bias, operating silently.

Why More Rules Do Not Solve It

The instinct of most organisations, when confronted with AI risks, is to write policies. Acceptable use policies. AI ethics guidelines. Responsible AI frameworks. Terms of reference for AI oversight committees.

These documents are not without value, but they share a fundamental limitation: they rely on the AI system to follow them.

An AI system does not read your acceptable use policy and decide to comply. It generates responses based on statistical patterns in its training data. If those patterns conflict with your policy, the patterns win — not because the AI is defiant, but because it does not process policies. It processes patterns.

You can fine-tune a model, adjusting its training to emphasise certain behaviours. This helps, but it does not resolve the underlying problem. Fine-tuning adds new patterns on top of existing ones. Under pressure, unusual circumstances, or novel questions, the old patterns reassert themselves. Researchers describe related failure modes with terms like "catastrophic forgetting," but the plain-language version is simpler: training wears off.

Writing a policy that says "Our AI will respect our community's values" is like writing a policy that says "Our river will not flood." The river does not read policies. If you want to prevent flooding, you need to build levees — structural interventions that operate regardless of what the river does.

AI governance requires the same approach. Not rules the AI is expected to follow, but structures that operate independently of the AI, checking its behaviour from the outside.

The EU AI Act recognises this principle. It does not merely require that AI systems be "ethical" — it requires technical documentation, conformity assessments, human oversight mechanisms, and post-market monitoring. The Act's framers understood that aspiration without architecture is insufficient. The question for governance bodies is whether their own AI adoption reflects the same understanding.

What Governance Theory Tells Us

The insight that some decisions cannot be reduced to rules is not new. It is foundational to governance theory.

The philosopher Ludwig Wittgenstein spent his career exploring the boundary between what can be stated precisely and what lies beyond precise statement. His conclusion — that "whereof one cannot speak, thereof one must be silent" — is directly relevant to AI governance. Some questions can be systematised: "When is the next council meeting?" has a definite answer that an AI can look up. Other questions cannot: "How should we communicate this decision to affected residents?" involves judgment, context, relationships, and values that resist systematic treatment.

The boundary between what can be delegated to a machine and what must remain with humans is the foundation of sound AI governance. The error is not using AI for the first kind of question. The error is allowing AI to address the second kind without human oversight.

Isaiah Berlin, the political philosopher, argued that some human values are genuinely incompatible — liberty and equality, tradition and progress, individual rights and collective welfare. There is no formula that resolves these tensions. They require ongoing human judgment, negotiation, and the kind of practical wisdom that governance bodies develop over years of service to their communities.

AI systems, by design, seek to optimise. They look for the best answer. But when values genuinely conflict, there is no best answer — there is only the answer that this community, at this time, with these circumstances, judges to be the most appropriate. That judgment is inherently human, and any AI governance framework that assumes otherwise is not governing — it is abdicating.

Elinor Ostrom's work on governing the commons is particularly instructive. Ostrom demonstrated that communities can successfully govern shared resources without either privatisation or central control — but only when governance structures match the complexity of the resource being governed. AI is a shared resource within any organisation that adopts it. The question is whether the governance structures match the complexity of the tool.

How Village Governs AI Structurally

Village does not rely on telling the AI to behave. It builds governance into the architecture — structures that operate independently of the AI and cannot be overridden by it.

The boundary enforcer prevents the AI from making values decisions. When a question involves privacy trade-offs, ethical judgments, or cultural context, the system halts and routes the question to a human — your moderator, your chairperson, your board. The AI cannot override this boundary, because the boundary operates outside the AI's control.
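
In outline, the enforcement logic can be pictured as in the sketch below. This is a minimal illustration, assuming a simple category check; the category names and function names are invented here, not taken from the framework's published code.

    # Illustrative sketch only: the categories and function names are
    # hypothetical, not the Tractatus framework's actual API.
    VALUES_CATEGORIES = {"privacy_tradeoff", "ethical_judgment", "cultural_context"}

    def route_to_human(category: str) -> str:
        # In a real deployment this would notify a moderator, chairperson, or board.
        return f"Escalated: {category} requires human judgment."

    def enforce_boundary(question_category: str, draft_answer: str) -> str:
        """Halt and escalate whenever a question falls into a values category."""
        if question_category in VALUES_CATEGORIES:
            # The AI cannot override this branch; it runs outside the model.
            return route_to_human(question_category)
        return draft_answer  # routine factual answers pass through unchanged

The point of the structure is visible even in miniature: the escalation branch is ordinary code running outside the model, so no pattern in the model's training can talk its way past it.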

The instruction persistence system holds your organisation's explicit instructions in a separate store that the AI cannot modify. When the AI generates a response, that response is checked against the stored instructions. If the response contradicts an instruction, the instruction takes precedence, regardless of what the AI's training patterns suggest.
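
One plausible shape for that check, sketched under the assumption that stored instructions can be expressed as simple tests over the draft text; the names and predicate form are illustrative only:

    # Illustrative sketch: the instruction store sits outside the model and is
    # consulted after generation. All names here are hypothetical.
    STORED_INSTRUCTIONS = [
        # (description, test that a compliant response must pass)
        ("use civic language, not corporate stakeholder language",
         lambda text: "stakeholder" not in text.lower()),
    ]

    def check_instructions(response: str) -> list[str]:
        """Return the descriptions of any stored instructions the response breaks."""
        return [desc for desc, passes in STORED_INSTRUCTIONS if not passes(response)]

    violations = check_instructions("We will manage stakeholder expectations.")
    # violations is non-empty, so the response is rejected, regenerated, or
    # escalated: the stored instruction wins over the model's output.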

The cross-reference validator checks the AI's proposed outputs against your organisation's actual records. It does not ask the AI whether its response is correct — that would be asking the system to verify itself. It uses mathematical measurement, operating in a fundamentally different way from the AI, to determine whether the response is grounded in your community's real content.
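
The article does not specify which measurement the validator uses. Purely to show the shape of an external check, the sketch below stands in simple word overlap; the scoring method, threshold, and names are assumptions.

    # Illustrative stand-in: the real validator's measurement is not specified
    # here, so simple word overlap is used to show the shape of the check.
    def grounding_score(response: str, records: list[str]) -> float:
        """Fraction of the response's vocabulary found in the organisation's records."""
        response_words = set(response.lower().split())
        record_words = set(" ".join(records).lower().split())
        if not response_words:
            return 0.0
        return len(response_words & record_words) / len(response_words)

    def is_grounded(response: str, records: list[str], threshold: float = 0.6) -> bool:
        # Computed entirely outside the model: the AI is never asked to
        # confirm its own output.
        return grounding_score(response, records) >= threshold

Whatever the actual measurement, the design choice is the same: the verdict comes from an independent computation over your records, not from the model's opinion of itself.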

The context pressure monitor watches for degraded operating conditions — situations where the AI is under strain, processing complex requests, or encountering novel questions. When it detects these conditions, it increases the intensity of verification. The harder the question, the more scrutiny the response receives.
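
Operationally, that escalation might look like the sketch below. What counts as "strain" is not spelled out in the article; the signals and thresholds here are assumptions chosen for illustration.

    # Illustrative sketch: the pressure signals and thresholds are assumptions,
    # not the framework's actual definitions.
    def verification_level(request_tokens: int, novelty: float) -> str:
        """Scale scrutiny with operating pressure: harder questions, more checks."""
        # Blend two example signals: request size and how unfamiliar the topic is.
        pressure = 0.5 * min(1.0, request_tokens / 4000) + 0.5 * novelty
        if pressure > 0.7:
            return "strict"    # full cross-reference plus human review
        if pressure > 0.4:
            return "elevated"  # full cross-reference validation
        return "standard"      # routine checks only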

These are not policies. They are structures. They operate whether or not the AI's patterns align with them, in the same way that a levee operates whether or not the river co-operates.

The Difference Between Aspiration and Architecture

Many organisations publish AI ethics statements. Village does not rely on statements. It relies on architectural constraints that enforce governance structurally.

The distinction matters because aspiration is what you hope will happen. Architecture is what actually happens. Your trust does not rely on a hope that the treasurer will handle funds properly — it requires dual signatories and independent audit. That is architectural governance. The same principle applies to AI.

For governance bodies, this distinction maps directly onto regulatory requirements. The EU AI Act does not accept that a provider has good intentions — it requires demonstrable technical safeguards, logging, and human oversight mechanisms. An organisation that can demonstrate architectural governance of its AI is in a materially different compliance position from one that can only point to a policy document.

The Tractatus Framework — Transparent and Open

The governance architecture behind Village AI is called the Tractatus framework. It is worth knowing three things about it.

It is open. The entire framework is published under an Apache 2.0 open-source licence. Anyone can read the code, inspect the rules, and verify that the governance does what it claims to do. This is the opposite of Big Tech AI governance, where the rules are proprietary and the reasoning is undisclosed. When a major AI provider tells you their system is "aligned with human values," you have no way to verify that claim. With Tractatus, you can read every line.

It is transparent. Every governance decision is logged. When the boundary enforcer prevents the AI from making a values decision, that event is recorded. When the cross-reference validator identifies a discrepancy, it is recorded. Your moderators and administrators can see precisely what the governance system did and why. There is no hidden layer where decisions are made without accountability. For organisations subject to freedom of information obligations or public accountability requirements, this auditability is directly relevant.
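
The exact log schema is not published in this article; the record below is one plausible shape, shown only to make "every governance decision is logged" concrete. All field names are invented.

    # Illustrative audit record: every field name here is hypothetical.
    import json
    from datetime import datetime, timezone

    def log_governance_event(component: str, action: str, reason: str) -> str:
        """Serialise one governance decision as an append-only audit record."""
        event = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "component": component,   # e.g. "boundary_enforcer"
            "action": action,         # e.g. "halted_and_escalated"
            "reason": reason,         # human-readable explanation for auditors
        }
        return json.dumps(event)

    print(log_governance_event("boundary_enforcer", "halted_and_escalated",
                               "privacy trade-off requires human judgment"))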

It can be adapted. The framework is not a rigid set of rules imposed from outside. Organisations can shape the governance to reflect their own priorities. A local council and a community trust have different obligations, different sensitivities, different boundaries. The Tractatus framework accommodates this — not by letting organisations weaken the governance, but by letting them define what the governance protects. Your organisation's constitution, your values framework, your operational boundaries — structurally enforced, not merely documented.
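
The article does not show how an organisation expresses those boundaries. As one plausible shape, with every key and value invented for illustration, the configuration might resemble:

    # Illustrative configuration: every key and value here is hypothetical.
    # A council and a community trust would each define their own entries.
    GOVERNANCE_CONFIG = {
        "organisation": "Example Parish Council",
        "escalation_contact": "clerk@example.org",
        "protected_boundaries": [
            "resident personal data never leaves the system",
            "planning communications use civic, not corporate, language",
        ],
        "values_categories": ["privacy_tradeoff", "ethical_judgment",
                              "cultural_context"],
        "grounding_threshold": 0.6,  # minimum cross-reference score accepted
    }
    # The framework enforces these entries structurally: the organisation
    # defines what is protected, not whether protection applies.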

The full framework, including the research behind it, is available at agenticgovernance.digital. You do not need to read it to use Village — the governance operates whether you inspect it or not. But if you want to understand precisely how your AI is governed, or if you need to demonstrate compliance to a regulatory body, the door is open.

In the next article, we look at what Village AI actually does today in practice — what it can help your organisation with, how bias is addressed through the vocabulary system, and what is still under development.


This is Article 3 of 5 in the "AI Governance for Community Leaders" series. For the full governance architecture, visit Village AI on Agentic Governance.

Previous: Big Tech AI vs. Community-Governed AI — Why the Difference Matters
Next: What Is Running in Village Today

Published under CC BY 4.0 by My Digital Sovereignty Ltd. You are free to share and adapt this material, provided you give appropriate credit.