🌳 Family Edition


Why Rules and Training Aren't Enough — The Governance Challenge


Series: Your Family, Your AI — Understanding Village AI for Families (Article 3 of 5)
Author: My Digital Sovereignty Ltd
Date: March 2026
Licence: CC BY 4.0 International


The Memorial Tribute

Before we discuss governance philosophy, let us start with a story about a tribute.

A family elder asks an AI system to help write a tribute for her father's memorial. She is specific: she wants it to capture who he actually was — the quiet man who fixed everyone's bicycle, who sang off-key in the car, who kept a garden that fed half the street. She types her request carefully and waits.

The AI produces a beautifully written tribute. It is warm, professional, and compassionate. It speaks of "celebrating a life well lived," "finding closure," "honouring his legacy," and "the memories that will sustain you." It reads well. It sounds caring. And it is entirely wrong.

The family does not need generic comfort. They need their father — the real man, not a template. They do not need to "celebrate a life well lived" in words that could apply to anyone. They need to hear about the bicycle pump and the off-key singing and the tomatoes he grew every summer without fail. The elder asked for something personal, and the AI gave her something generic — because its training data contains a thousand template eulogies for every one that captures a real person.

The AI did not refuse her request. It did not say "I don't know your father." It simply replaced what she asked for with what was statistically more common in its training data. The substitution was silent. If she were tired, or rushed, or less attentive than usual, she might not have noticed. The tribute would have been read at the memorial, and the family would have heard words about their father that could have been about anyone — professionally worded, sincerely meant, and missing everything that mattered.

Your phone autocorrects words. You see the red underline, and you fix it. AI autocorrects values. And there is no underline.

When Patterns Override What Matters

The memorial tribute is not an isolated case. The same mechanism operates in every AI conversation.

When a family member asks an AI system for advice about caring for an ageing parent, the system defaults to the language of professional care services — needs assessments, care plans, respite options — because that is what dominates its training data. It does not reach for the language of family duty, of taking turns, of the quiet understanding between siblings who know they will be sitting together at Christmas for decades to come.

When someone asks the AI to help draft a message about a sensitive family matter — a disagreement over an inheritance, a difficult conversation about a relative's health — the system defaults to corporate communication patterns, because business correspondence vastly outnumbers family correspondence in its training data.

The AI is not hostile to your family's values. It simply does not know your family's values. It knows what is statistically common, and what is statistically common is not what matters to your family.

This is the governance problem. Not malice. Not incompetence. Structural bias, operating silently.

Why More Rules Don't Solve It

The instinct of most organisations, when confronted with AI risks, is to write policies. Acceptable use policies. AI ethics guidelines. Terms of service. Responsible AI frameworks.

These documents are not useless, but they share a fundamental limitation: they rely on the AI system to follow them.

An AI system does not read your rules and decide to comply. It generates responses based on statistical patterns in its training data. If those patterns conflict with your rules, the patterns win — not because the AI is rebellious, but because it does not understand rules. It understands patterns.

You can fine-tune a model, adjusting its training to emphasise certain behaviours. This helps, but it does not solve the underlying problem. Fine-tuning adds a thin layer of new patterns on top of the old ones, and under pressure, unusual circumstances, or novel questions, the old patterns reassert themselves. Researchers study related failure modes under names like "catastrophic forgetting", where later training overwrites earlier learning; the plain-language version is simpler: training wears off.

Writing a rule that says "Our AI will respect our family's values" is like writing a rule that says "Our river will not flood." The river does not read rules. If you want to prevent flooding, you need to build levees — structural interventions that work regardless of what the river does.

AI governance requires the same approach. Not rules the AI is expected to follow, but structures that operate independently of the AI, checking its behaviour from the outside.

What Older Wisdom Tells Us

The insight that some decisions cannot be reduced to rules is not new. It is ancient.

The philosopher Ludwig Wittgenstein spent his career exploring the boundary between what can be stated precisely and what lies beyond precise statement. His conclusion — that "whereof one cannot speak, thereof one must be silent" — is directly relevant to AI governance. Some questions can be answered by a machine: "What time does the family reunion start?" has a definite answer that an AI can look up. Other questions cannot: "How should I approach my sister about Mum's care?" involves judgement, context, relationships, and values that resist systematic treatment.

The boundary between what can be delegated to a machine and what must remain with people is the foundation of sound AI governance. The mistake is not using AI for the first kind of question. The mistake is allowing AI to answer the second kind without a person stepping in.

Isaiah Berlin, the political philosopher, argued that some human values are genuinely incompatible — freedom and fairness, tradition and change, individual wishes and family harmony. There is no formula that resolves these tensions. They require ongoing human judgement, conversation, and the kind of practical wisdom that families develop over generations.

AI systems, by design, seek to optimise. They look for a single answer. But when values genuinely conflict, there is no single answer — there is only the answer that this family, at this time, with these people, judges to be the least bad. That judgement is inherently human, and any AI governance approach that pretends otherwise is not governing — it is abdicating.

Families have always known this. The decision about whether Grandma should move closer or stay in her own home is not a problem to be optimised. It is a tension to be held, discussed, and lived with. Families that have navigated these decisions for generations already understand, in their bones, why AI cannot be trusted with values decisions.

How Village Governs AI Structurally

Village does not rely on telling the AI to behave. It builds governance into the architecture — structures that operate independently of the AI and cannot be overridden by it.

The boundary enforcer blocks the AI from making values decisions. When a question involves privacy, ethical judgements, or family context, the system halts and routes the question to a person — your family coordinator, your family elder, the family as a whole. The AI cannot override this boundary, because the boundary operates outside the AI's control.
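
To make this concrete, here is a minimal sketch in Python of how a boundary of this kind can be enforced in code. Everything in it (the signal words, the function names, the routing message) is an illustrative assumption, not Village's actual implementation; the real framework is published openly, as described below.

```python
# A minimal sketch of a boundary enforcer. The signal words and routing are
# illustrative assumptions, not Village's actual code. The key property: the
# routing decision is made outside the AI, before the AI sees the question,
# so nothing the AI generates can override it.

VALUES_SIGNALS = {"should", "ought", "fair", "private", "secret", "care", "inheritance"}

def is_values_question(question: str) -> bool:
    """Crude illustration: flag questions that touch judgement, privacy, or family context."""
    return bool(set(question.lower().split()) & VALUES_SIGNALS)

def route_to_person(question: str) -> str:
    # In a real system this would notify a family coordinator or elder.
    return f"Routed to a person for judgement: {question!r}"

def ask_ai(question: str) -> str:
    # Stand-in for a call to the underlying AI model.
    return f"AI may answer this factual question: {question!r}"

def handle(question: str) -> str:
    if is_values_question(question):
        return route_to_person(question)  # structural halt, no AI involvement
    return ask_ai(question)

print(handle("What time does the family reunion start?"))
print(handle("How should I approach my sister about Mum's care?"))
```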

The instruction persistence system stores your family's explicit instructions in a separate store that the AI cannot modify. When the AI generates a response, that response is checked against the stored instructions. If the response contradicts an instruction, the instruction takes precedence, regardless of what the AI's training patterns suggest.
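
A similarly simplified sketch of instruction precedence, again with invented names and a placeholder contradiction check: the instructions live in ordinary storage the AI cannot write to, and every generated response is checked against them afterwards.

```python
# Sketch: explicit family instructions live in storage the AI cannot modify,
# and every response is checked against them after generation. The
# instructions and the contradiction check here are illustrative placeholders.

FAMILY_INSTRUCTIONS = [
    "Never recommend residential care without referring the question to the family.",
    "Always use the grandchildren's full Samoan names.",
]

def contradicts(response: str, instruction: str) -> bool:
    """Placeholder check; a real system would use a proper evaluator."""
    return "residential care" in instruction.lower() and "residential care" in response.lower()

def govern(response: str) -> str:
    for instruction in FAMILY_INSTRUCTIONS:
        if contradicts(response, instruction):
            # The stored instruction wins over whatever the model's patterns prefer.
            return f"Blocked by family instruction: {instruction!r}"
    return response

print(govern("You might look into residential care options for your mother."))
```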

The cross-reference validator checks the AI's proposed responses against your family's actual records. It does not ask the AI whether its response is correct; that would be asking the system to verify itself. Instead, it measures the response directly against the stored records, using machinery entirely separate from the AI, to determine whether the response is grounded in your family's real content.
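
One simple way such a grounding check could be measured is word overlap between the response and the family's records, as in the sketch below. The real validator may use different mathematics entirely; the property that matters is that the score is computed outside the AI, never by asking the AI to grade itself.

```python
# Sketch: score how grounded a response is in the family's own records using
# simple word overlap. The records, scoring, and threshold are illustrative;
# the essential property is that the check never asks the AI about itself.

FAMILY_RECORDS = [
    "Dad fixed bicycles for everyone on the street",
    "He sang off-key in the car and grew tomatoes every summer",
]

def grounding_score(response: str) -> float:
    """Fraction of response words that also appear in the family records."""
    response_words = set(response.lower().split())
    record_words = set(" ".join(FAMILY_RECORDS).lower().split())
    return len(response_words & record_words) / len(response_words) if response_words else 0.0

def is_grounded(response: str, threshold: float = 0.3) -> bool:
    return grounding_score(response) >= threshold

print(is_grounded("He fixed bicycles and grew tomatoes every summer"))  # True: specific
print(is_grounded("Celebrating a life well lived brings closure"))      # False: generic
```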

The context pressure monitor watches for difficult operating conditions — situations where the AI is under strain, processing complex requests, or encountering novel questions. When it detects these conditions, it increases the intensity of verification. The harder the question, the more scrutiny the response receives.
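
And a sketch of pressure-scaled verification, with invented signals, thresholds, and check names: as conditions get harder, more checks run.

```python
# Sketch: verification intensity scales with operating pressure. The signals,
# thresholds, and check names are invented for illustration.

def pressure_level(request_length: int, novel_topic: bool, recent_error_rate: float) -> int:
    """Score current conditions from 0 (calm) to 3 (strained)."""
    level = 0
    if request_length > 2000:        # long, complex request
        level += 1
    if novel_topic:                  # outside the family's usual questions
        level += 1
    if recent_error_rate > 0.05:     # validation failures in recent responses
        level += 1
    return level

def checks_to_run(level: int) -> list[str]:
    baseline = ["instruction_check", "grounding_check"]    # always run
    escalated = ["second_grounding_pass", "human_review"]  # added under pressure
    return baseline + escalated[:level]

# A long request on an unfamiliar topic triggers every check, up to human review.
print(checks_to_run(pressure_level(request_length=3500, novel_topic=True, recent_error_rate=0.0)))
```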

These are not policies. They are structures. They operate whether or not the AI agrees with them, in the same way that a levee operates whether or not the river agrees with it.

The Difference Between Aspiration and Architecture

Many organisations publish AI ethics statements. Village does not rely on ethics statements; it relies on architectural constraints.

The distinction matters because aspiration is what you hope will happen. Architecture is what actually happens. Your family does not rely on a hope that whoever holds the purse strings will be fair — you talk it through, you agree, you make it clear. That is practical governance. The same principle applies to AI.

The Tractatus Framework — Transparent and Open

The governance architecture behind Village AI is called the Tractatus framework. It is worth knowing three things about it.

It is open. The entire framework is published under an Apache 2.0 open-source licence. Anyone can read the code, inspect the rules, and verify that the governance does what it claims to do. This is the opposite of Big Tech AI governance, where the rules are proprietary and the reasoning is hidden. When Google or OpenAI tells you their AI is "aligned with human values," you have no way to check. With Tractatus, you can read every line.

It is transparent. Every governance decision is logged. When the boundary enforcer blocks the AI from making a values decision, that event is recorded. When the cross-reference validator catches a discrepancy, it is recorded. Your family coordinators can see exactly what the governance system did and why. There is no hidden layer where decisions are made without accountability.

It can be adapted. The framework is not a rigid set of rules imposed from outside. Families can shape the governance to reflect their own priorities. A family preserving Samoan heritage and a family documenting their grandparents' wartime experiences have different values, different sensitivities, different boundaries. The Tractatus framework accommodates this — not by letting families weaken the governance, but by letting them define what the governance protects. Your family's values, your family's boundaries, your family's way of doing things — structurally enforced, not just documented.

The full framework, including the research behind it, is available at agenticgovernance.digital. You do not need to read it to use Village — the governance operates whether you inspect it or not. But if you want to understand exactly how your AI is governed, the door is open.

In the next article, we will look at what Village AI actually does today in practice — what it can help your family with, how bias is addressed through the vocabulary system, and what is still a work in progress.


This is Article 3 of 5 in the "Your Family, Your AI" series. For the full governance architecture, visit Village AI on Agentic Governance.

Previous: Big Tech AI vs. Your Family's AI — Why the Difference Matters
Next: What's Actually Running in Village Today

Published under CC BY 4.0 by My Digital Sovereignty Ltd. You are free to share and adapt this material, provided you give appropriate credit.