Part 1 of 4 — AI Governance for Communities

What Is AI, Really? A Guide for Community and Not-for-Profit Leaders

Why AI Governance Matters Now

March 2026 · My Digital Sovereignty

This is Article 1 of a 4-part series on AI Governance for Communities and Not-for-Profit Organisations, published by My Digital Sovereignty Limited under CC BY 4.0.

Introduction: Why AI Governance Matters Now

In the space of three years, artificial intelligence has gone from a niche technology topic to something that touches nearly every community organisation, not-for-profit (NFP), and charitable trust in Aotearoa New Zealand. Your local food bank is being offered AI-powered donor management. Your iwi health service is fielding proposals for AI-assisted triage. Your community housing trust is being told that AI can "revolutionise" tenant communications. School boards, sports clubs, churches, marae committees, refugee support networks — all are being approached by vendors promising transformation through AI.

Most of these organisations are not ready. Not because the people leading them lack intelligence or diligence — quite the opposite. They are not ready because the technology has arrived faster than the governance frameworks needed to manage it. And the stakes are not trivial.

When a community organisation adopts AI, it is making decisions about who controls the technology, whose values are embedded in the system, what happens when things go wrong, and who bears the consequences. These are governance questions, not technical ones. They belong in the boardroom, the committee meeting, and the community hui — not solely in the IT department.

This article is the first in a four-part series designed to equip community and NFP leaders with the knowledge they need to govern AI responsibly. We are not here to sell you on AI or to frighten you away from it. We are here to ensure that when you make decisions about this technology, you make them with genuine understanding rather than marketing brochures.

In this first article, we will build a shared vocabulary, demystify how the technology actually works, explore the governance challenges it creates, and explain why community and NFP governance faces unique considerations that generic corporate frameworks do not address.

Let us begin with the language.

Core Concepts and Vocabulary

One of the most effective ways to disempower people is to surround a topic with impenetrable jargon. The AI industry — whether intentionally or not — has done exactly this. Before we can govern AI, we need to understand what the words actually mean. Here is a plain-language guide to the terms you will encounter most often.

Artificial Intelligence (AI)

Artificial intelligence is an umbrella term for computer systems that perform tasks normally requiring human intelligence — recognising images, understanding language, making predictions, generating text. In practice, most of what is marketed as "AI" today is a specific subset called machine learning. The term is used very loosely in marketing; a simple rules-based filter on your email might be called "AI-powered" by a vendor trying to sound cutting-edge. When someone tells you their product "uses AI," your first question should be: what does it actually do, specifically?

Machine Learning (ML)

Machine learning is a method where computer systems learn patterns from data rather than following explicitly programmed rules. Instead of a programmer writing "if the email contains these words, mark it as spam," a machine learning system is shown millions of emails already labelled as spam or not-spam and learns to identify the patterns itself. The quality of the learning depends entirely on the quality and representativeness of the data. This is a crucial point for governance: if the training data is biased, the system's behaviour will be biased.
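For the technically curious, the spam example above can be sketched in a few lines of Python. This is a deliberately naive toy, not a real spam filter: it simply counts which words appeared in emails already labelled spam or legitimate, then scores new emails against those counts — learning from examples rather than following hand-written rules.

```python
# Toy illustration of learning from labelled data (not a real spam filter).
from collections import Counter

labelled_emails = [
    ("win free money now", "spam"),
    ("claim your free prize", "spam"),
    ("agenda for the committee meeting", "ham"),
    ("minutes from the trust board", "ham"),
]

# "Training": count how often each word appears under each label.
counts = {"spam": Counter(), "ham": Counter()}
for text, label in labelled_emails:
    counts[label].update(text.split())

def classify(text: str) -> str:
    # Score each label by how familiar the email's words are to it.
    scores = {
        label: sum(counter[word] for word in text.split())
        for label, counter in counts.items()
    }
    return max(scores, key=scores.get)

print(classify("free money prize"))      # resembles the spam examples
print(classify("board meeting agenda"))  # resembles the legitimate examples
```

Notice that no one wrote a rule saying "free money" is spam; the pattern was extracted from the labelled examples. If the labelled examples were skewed, the classifier's behaviour would be skewed in exactly the same way — which is the governance point.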

Large Language Models (LLMs)

Large language models are the technology behind tools like ChatGPT, Claude, and Gemini. They are machine learning systems trained on enormous quantities of text — books, websites, articles, code, conversations — to predict what word should come next in a sequence. Despite their remarkable fluency, they do not "understand" language the way humans do. They are extraordinarily sophisticated pattern-matching systems. When an LLM writes a paragraph that sounds authoritative, it is generating text that statistically resembles authoritative writing in its training data. This distinction between fluency and understanding is fundamental to governing these systems well.

Parameters

When you read that a model has "175 billion parameters," this refers to the adjustable numerical values inside the model that were tuned during training. Think of them as the knobs on an impossibly complex mixing desk: each one is set to a specific value, and together they determine how the model responds to any given input. More parameters generally mean the model can capture more complex patterns, but they also make the model more expensive to run, more energy-intensive, and harder to audit. The number of parameters has become a marketing metric, but bigger is not always better for a given task.
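What "tuned during training" means can be shown with the smallest possible model: a single parameter. The Python sketch below is an illustration only — the "model" is just y = w × x with one adjustable knob, and training repeatedly nudges w until predictions match the examples. An LLM does the same thing with billions of such values at once.

```python
# The smallest possible "model": one parameter w, tuned during training.
examples = [(1, 2), (2, 4), (3, 6)]  # inputs and desired outputs (y = 2x)

w = 0.0  # the single parameter, before training
learning_rate = 0.05

for _ in range(200):  # the training loop
    for x, target in examples:
        prediction = w * x
        error = prediction - target
        w -= learning_rate * error * x  # nudge the parameter to reduce error

print(round(w, 2))  # converges close to 2.0
```

After training, w holds a value no human chose directly — it emerged from the data. Multiply this by 175 billion knobs and you can see why auditing what any individual parameter "does" is effectively impossible.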

Training Data

Training data is the body of text, images, or other information that a machine learning system learns from. For large language models, this typically includes vast swathes of the internet, digitised books, academic papers, and other text sources. The composition of this training data has profound governance implications: if the data over-represents certain cultures, languages, or perspectives and under-represents others, the model will reflect those imbalances. Training data is where biases enter the system, and it is often the least transparent part of the technology. Many AI companies treat the composition of their training data as a trade secret.

Inference

Inference is the technical term for what happens when you actually use an AI system — when you type a question and it generates a response. Training is the process of building the model; inference is the process of running it. For governance purposes, the distinction matters because training happens once (or periodically), but inference happens every time someone interacts with the system. Each inference call has a cost — in computing resources, in energy, and potentially in privacy if the input data is sent to an external server.

Hallucination

Hallucination is the industry term for when an AI system generates information that is entirely fabricated but presented with complete confidence. An LLM might cite a court case that does not exist, invent statistics, or attribute a quotation to someone who never said it. This happens because the model is generating statistically plausible text, not retrieving verified facts. For community organisations, hallucination is not a minor inconvenience — it is a serious risk. Imagine an AI system advising a refugee support service with fabricated legal precedents, or a health-adjacent NFP receiving AI-generated "medical guidance" that sounds authoritative but is simply wrong.

Fine-tuning

Fine-tuning is the process of taking a pre-trained model and further training it on a smaller, specialised dataset to adapt its behaviour for a specific purpose. A general-purpose language model might be fine-tuned on legal documents to become better at legal tasks, or on te reo Māori text to improve its competence in that language. Fine-tuning can make a model more useful for a particular community, but it also raises governance questions: who controls the fine-tuned model, what data was used, and does the fine-tuning introduce new biases or remove important safeguards?

RAG (Retrieval-Augmented Generation)

Retrieval-Augmented Generation is a technique that grounds an AI's responses in actual documents rather than relying solely on what the model learned during training. When you ask a question, the system first searches a collection of real documents for relevant information, then uses the AI to compose a response based on what it found. This significantly reduces hallucination because the model is working from real source material. For community organisations, RAG offers a way to use AI technology while maintaining control over the knowledge base it draws from — your policies, your documents, your records.
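The retrieve-then-compose flow can be sketched in Python. This is a toy under stated assumptions: retrieval here is simple word overlap (real systems use semantic search), and the generate_answer function is a hypothetical stand-in for a call to an actual language model.

```python
# Minimal sketch of the RAG pattern: retrieve relevant passages first,
# then hand only those passages to the model as source material.

documents = [
    "Privacy policy: client records must not leave organisation systems.",
    "Leave policy: staff accrue four weeks of annual leave.",
    "AI policy: all AI-generated text requires human review before release.",
]

def retrieve(question: str, docs: list[str], top_n: int = 1) -> list[str]:
    """Rank documents by how many words they share with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:top_n]

def generate_answer(question: str, sources: list[str]) -> str:
    # Placeholder: a real system would prompt an LLM with the question and
    # the retrieved sources, instructing it to answer only from them.
    return f"Based on: {sources[0]}"

question = "Does AI-generated text need human review?"
sources = retrieve(question, documents)
print(generate_answer(question, sources))
```

The design choice that matters for governance is visible in the structure: the answer is composed from documents you control, so updating your policies updates the system's knowledge — no retraining, and no surrendering your knowledge base to a vendor's model.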

Agents and Tools

An AI agent is a system that can take actions in the real world, not just generate text. Where a basic chatbot can only write responses, an agent might search databases, send emails, update records, or trigger workflows. The governance implications are significant: a text-generating AI that gives bad advice is problematic, but an AI agent that takes incorrect actions on your behalf is a different order of risk entirely. When evaluating AI tools, understanding whether the system merely advises or actually acts is a critical governance distinction.

Closed vs Open-weight Models

Closed models (sometimes called proprietary models) are owned and operated by companies that do not release the model's internal workings. You interact with them through an API or interface, but you cannot inspect, modify, or host the model yourself. Open-weight models make the model's trained parameters available for download, meaning organisations can run them on their own infrastructure, inspect their behaviour, and modify them. The choice between closed and open-weight models has significant governance implications: closed models are typically more capable but create dependency on a single vendor; open-weight models offer greater transparency and control but require more technical capacity to operate.

Alignment and Safety

Alignment refers to the challenge of ensuring an AI system behaves in accordance with human values and intentions. Safety encompasses the broader effort to prevent AI systems from causing harm — whether through biased outputs, manipulation, privacy violations, or unintended consequences. These are not solved problems. Every major AI company is actively working on alignment, and significant disagreements exist within the field about how to approach it. For governance purposes, the key insight is that "safety" is not a fixed property of a model — it is an ongoing process that depends on how the system is deployed, monitored, and constrained.

How Today's AI Actually Works in Practice

With the vocabulary established, let us look at how these systems actually function. Understanding the mechanics — even at a high level — is essential for good governance. You would not govern financial systems without understanding the basics of accounting; you should not govern AI systems without understanding the basics of how they work.

Next-Token Prediction: AI as Sophisticated Autocomplete

At its core, a large language model does one thing: given a sequence of words, it predicts what word should come next. This is essentially the same principle as the autocomplete on your phone, but scaled up by orders of magnitude in both the size of the model and the sophistication of the patterns it captures.

When you type a prompt — say, "Write a summary of New Zealand's Treaty obligations" — the model does not retrieve a pre-written summary from a database. Instead, it generates the response one word (technically, one "token") at a time, each time choosing the most probable next token based on the patterns in its training data and the text generated so far.

This process is why LLMs can produce remarkably fluent, contextually appropriate text. They have been trained on so much human writing that they can reproduce the patterns of virtually any genre, style, or register. A model can write in the style of a legal brief, a children's story, or a board report because it has seen millions of examples of each.
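The next-word principle can be demonstrated with a toy "language model" that learns only from the single sentence below. This sketch counts which word follows which in its tiny training corpus, then generates text by always emitting the most common continuation — the same predict-the-next-token loop as an LLM, minus the billions of parameters (and real models sample among likely tokens rather than always taking the top one).

```python
# Toy next-token predictor: learn word-pair counts from a tiny corpus,
# then generate by repeatedly emitting the most frequent next word.
from collections import Counter, defaultdict

corpus = "the board approved the policy and the board reviewed the policy".split()

# "Training": count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start: str, length: int = 4) -> str:
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        # Always pick the most frequent continuation (real models sample).
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```

The output is fluent-looking but meaningless recombination of the training text — the model has no idea what a board or a policy is. That is the LLM mechanism in miniature.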

Pattern Matching, Not Understanding

Here is where governance requires clear thinking. The fluency of these systems creates a powerful — and dangerous — illusion of understanding. When an AI writes a nuanced paragraph about Treaty obligations, it appears to understand the topic. It does not, in any meaningful sense. It has identified statistical patterns in how humans write about Treaty obligations and is reproducing those patterns.

This distinction matters enormously for governance because it determines what you can and cannot trust the system to do. An AI can reliably produce text that resembles competent writing on a topic. It cannot reliably determine whether a specific factual claim is true. It cannot exercise genuine judgement. It cannot understand the cultural significance of what it is writing about. It does not know what it does not know.

For community leaders, this means AI can be a powerful drafting tool, a useful starting point, and a genuine productivity aid — but it cannot be a decision-maker, a fact-checker, or a substitute for human judgement on matters that require genuine understanding.

Statistical Patterns vs Genuine Knowledge

Consider a concrete example. If you ask an AI about the tikanga around a particular cultural practice, the model will generate text based on whatever was written about that topic in its training data. If the training data included respectful, accurate accounts written by knowledgeable practitioners, the response may be useful. If the training data included superficial, orientalising, or inaccurate accounts — which is statistically more likely, given the composition of the internet — the response will confidently reproduce those inaccuracies.

The model has no way to distinguish between an authoritative source and a poorly informed blog post. It has learned patterns in text, not the truth about the world. This is not a flaw that will be fixed in the next version; it is an inherent characteristic of how these systems work.

Why AI Can Be Fluent yet Wrong

This brings us to one of the most governance-relevant characteristics of modern AI: the combination of extreme fluency and unreliable accuracy. Humans naturally associate confident, well-structured communication with competence and knowledge. We have evolved to make this association because, among humans, it is usually a reasonable heuristic. Someone who can articulate a complex topic clearly usually does understand it.

AI breaks this heuristic completely. A language model can generate a perfectly structured, grammatically flawless, persuasively argued paragraph that is entirely wrong. It can cite sources that do not exist. It can present fabricated statistics with the same confidence as real ones. It does not hedge when it should, because hedging is a function of knowing the limits of your knowledge — and the model has no knowledge, only patterns.

For governance, this means that any use of AI in your organisation must include human verification. The more consequential the output, the more rigorous the verification must be. AI-generated grant applications, policy documents, legal summaries, health information, or cultural content all require careful human review by someone with actual expertise in the subject matter.

The Limits of Training Data: Biases Baked in at Scale

The training data for large language models is predominantly English-language, predominantly from North America and Europe, predominantly from the last twenty-five years of the internet, and predominantly reflective of the perspectives of people who write publicly online. This is not a representative sample of human knowledge, experience, or values.

The consequences are predictable and well-documented. AI systems perform better in English than in other languages. They reflect Western cultural assumptions as defaults. They under-represent indigenous knowledge systems, non-Western philosophical traditions, and the perspectives of communities that have historically had less access to digital publishing platforms.

For community organisations in Aotearoa New Zealand, this is not an abstract concern. AI systems may default to Anglocentric frameworks when discussing issues that require a te ao Māori perspective. They may not understand the significance of whakapapa, the protocols around taonga, or the relational nature of Māori knowledge systems. They may treat individualistic Western assumptions as universal truths.

These biases are not random errors to be patched; they are structural features of systems trained on structurally biased data. Governing AI well means understanding this and planning for it.

Real-World Governance Challenges

Understanding how AI works leads directly to the governance challenges it creates. These are not hypothetical future problems — they are present realities that community and NFP leaders must grapple with today.

Opaque Training Data and Embedded Biases

Most commercial AI systems do not disclose the full composition of their training data. You are asked to trust the output of a system without being able to examine the inputs. For any other decision-making tool, this opacity would be unacceptable. Imagine a financial adviser who refused to disclose the data behind their recommendations, or a medical test where the manufacturer would not reveal how it was calibrated.

The biases embedded in training data are well-documented. Gender biases: AI systems that associate nursing with women and engineering with men. Racial biases: image recognition systems that perform worse on darker skin tones. Cultural biases: systems that treat Western cultural norms as universal. Geographic biases: systems that know far more about New York than about Napier, more about London than about Levin.

For governance, this means asking uncomfortable questions of vendors. What is your model trained on? How do you test for bias? What happens when your system encounters a cultural context outside its training data? If the vendor cannot or will not answer these questions, that itself is a governance-relevant data point.

Privacy and Surveillance Capitalism

Many of the most prominent AI tools are offered for "free" — or at prices that seem remarkably low given the enormous cost of developing and running these systems. The economics do not add up unless you recognise what is actually being traded.

The dominant business model behind consumer-facing AI is the same model that drives social media: your data is the product. Every prompt you type, every document you upload, every question you ask may be used to improve the model, train future systems, or inform advertising. For community organisations handling sensitive information — health data, family histories, financial records, culturally sensitive material — this is not acceptable.

Even when companies offer data protection guarantees, the legal landscape is evolving rapidly and varies by jurisdiction. Data stored on servers in the United States is subject to US law, regardless of where the data originated or what promises were made. For organisations in Aotearoa with obligations under the Privacy Act 2020 and potentially Te Tiriti o Waitangi, the question of where data goes and who can access it is not a technical footnote — it is a governance priority.

Power Concentration

The development of frontier AI systems requires computational resources that only a handful of corporations can afford. This creates an unprecedented concentration of power. A small number of companies — predominantly based in the United States and, increasingly, China — control the foundational technology on which an increasing portion of the global economy depends.

For community organisations, this concentration manifests as dependency. If your operations rely on a specific AI provider's tools, you are subject to that company's pricing decisions, terms of service changes, feature deprecations, and — critically — their values and judgements about what the AI should and should not do. These companies make daily decisions about AI behaviour that affect millions of users, and those decisions are made in corporate boardrooms in San Francisco, not in community meetings in South Auckland.

Governance in this context means thinking carefully about dependency. What happens if the tool you rely on doubles in price? What happens if the provider changes its terms of service? What happens if the company is acquired, or fails? Do you have alternatives? Can you migrate your data? These are the same questions good governance has always asked about critical suppliers — they just happen to involve AI now.

Cross-Border and Colonial Dynamics

The flow of data in the AI ecosystem follows familiar colonial patterns: from communities to corporations, from the periphery to the centre, from the global South to Silicon Valley. Indigenous communities, rural communities, and communities in the developing world contribute data — often without consent or compensation — that is used to train systems controlled by wealthy corporations in wealthy countries.

For Māori communities and other indigenous peoples, this dynamic is particularly acute. Cultural knowledge shared online may be ingested into training data, stripped of its context, and reproduced without attribution, consent, or understanding of the protocols that govern its use. The concept of data sovereignty — the right of peoples to control data about themselves, their communities, and their taonga — is directly challenged by AI systems that treat all publicly available text as raw material for training.

The Māori Data Sovereignty movement, led by Te Mana Raraunga, has articulated clear principles about the governance of Māori data. These principles — which emphasise rangatiratanga (authority), whakapapa (relationships), and whanaungatanga (kinship) — are directly relevant to AI governance because they challenge the assumption that data, once digitised, belongs to whoever can scrape it from the internet.

Community and NFP leaders do not need to resolve these global power dynamics, but they do need to be aware of them when making governance decisions about AI adoption. Every decision to use a particular AI tool is also a decision about which power structures to support.

Environmental Costs

Training and running large AI models requires significant energy. Data centres consume electricity for computation and for cooling, and the rapid expansion of AI infrastructure is placing measurable pressure on energy grids around the world. While estimates vary and the industry is investing in renewable energy, the environmental footprint of AI is real and growing.

For organisations with environmental commitments — which includes many community and NFP organisations — the energy cost of AI is a governance consideration. This does not necessarily mean avoiding AI entirely, but it does mean making deliberate choices. Do you need a frontier model for every task, or would a smaller, more efficient model suffice? Can you process data locally rather than sending it to energy-intensive cloud infrastructure? Is the AI use case valuable enough to justify its environmental cost?

These questions rarely appear in AI vendor pitches, but they belong in governance discussions.

Why Community and NFP Governance Is Different

Generic corporate AI governance frameworks — even good ones — are not sufficient for community and NFP organisations. The differences are not just a matter of scale; they are differences in kind.

Custodians of Sensitive Data

Community and NFP organisations hold some of the most sensitive data in existence. Health records. Addiction and recovery histories. Family violence files. Immigration cases. Mental health assessments. Survivor testimonies. Cultural records and genealogies. Financial hardship documentation.

This data is not held for commercial purposes. It is held in trust, often provided by people in vulnerable situations who shared their information because they needed help and trusted the organisation to protect it. The governance obligations around this data are fundamentally different from those facing a corporation protecting customer purchase histories.

When an AI vendor proposes that your organisation upload documents to their system for "AI-powered insights," the governance question is not just "does this comply with the Privacy Act?" It is: "Do our clients and community members know their most sensitive information may be processed by an AI system hosted overseas? Did they consent to this specific use? Would they consent if asked?"

Accountability to Communities, Not Shareholders

Corporations adopt AI governance frameworks primarily to manage risk: regulatory risk, reputational risk, and financial risk. Community organisations have different accountability structures. You are accountable to your community — often to the most vulnerable members of that community.

If an AI system in a corporate context produces biased results, the consequence might be a PR problem and a regulatory fine. If an AI system in a community context produces biased results, the consequence might be a refugee family receiving incorrect information about their legal rights, or a Māori whānau having their cultural information processed without appropriate tikanga, or a domestic violence survivor's records being sent to a server in a jurisdiction with weak privacy protections.

The accountability is personal, relational, and often irreversible. This demands a higher standard of governance, not a lower one.

Already-Marginalised Communities Facing Amplified Risks

Many of the communities served by NFP organisations are already subject to disproportionate surveillance, discrimination, and systemic disadvantage. AI systems trained on historical data tend to reproduce and amplify these patterns. Predictive policing algorithms that disproportionately target Māori and Pasifika communities. Benefit fraud detection systems that generate more false positives for certain demographic groups. Hiring algorithms that penalise non-Western names.

When a community organisation introduces AI into its operations, it risks becoming a conduit for these amplified biases — directing them at precisely the people the organisation exists to serve. This is not a theoretical risk. It has happened repeatedly in jurisdictions around the world, from Australia's Robodebt scandal to discriminatory welfare algorithms in the Netherlands.

Governance for community organisations must start from the recognition that the communities they serve are often the communities most harmed by poorly governed AI.

Cultural Contexts: Whakapapa, Taonga, and Collective Memory

Western AI governance frameworks typically treat data as belonging to individuals or organisations. Many indigenous knowledge systems, including te ao Māori, understand knowledge differently. Whakapapa (genealogy and relational identity) is not "data" to be processed — it is a living system of relationships with its own protocols for who may access it, when, and for what purposes. Taonga (treasured possessions, including intangible cultural heritage) carry obligations that persist across generations.

An AI system has no understanding of these protocols. It cannot distinguish between information that is freely shareable and information that requires specific cultural authority to access. It cannot understand that some knowledge is seasonal, some is gender-specific, and some belongs to specific whānau or hapū.

Community organisations that work with Māori communities — or with any indigenous community — need AI governance frameworks that explicitly account for these cultural dimensions. The question is not just "is this legal?" but "is this tika?" — is this right, proper, and in accordance with the values and protocols of the communities we serve?

Heightened Ethical Duties

Community and NFP leaders carry fiduciary duties, cultural responsibilities, and relational obligations that go well beyond what corporate AI governance frameworks typically address.

Fiduciary duty means acting in the best interests of the organisation and those it serves. Adopting an AI system that creates vendor dependency, exposes sensitive data, or introduces bias into service delivery may breach this duty — even if the system delivers short-term efficiency gains.

Cultural duty means respecting and upholding the cultural values and protocols of the communities you serve. This includes ensuring that AI tools do not undermine cultural practices, misrepresent cultural knowledge, or violate data sovereignty principles.

Relational duty means maintaining the trust relationships that are the foundation of community work. People share their stories, their health information, their family histories with your organisation because they trust you. Introducing AI into that relationship without transparency, consent, and genuine safeguards is a betrayal of that trust — regardless of how efficient it makes your operations.

These duties cannot be delegated to an AI vendor. They cannot be satisfied by reading a terms-of-service document. They require active, informed, ongoing governance by people who understand both the technology and the community context.

A Note on Opportunity

This article has focused heavily on risks and challenges, and rightly so — governance is fundamentally about managing risk and ensuring accountability. But it would be incomplete without acknowledging that AI, governed well, offers genuine opportunities for community organisations.

AI can help small teams do more with limited resources. It can assist with grant writing, report generation, and administrative tasks that consume disproportionate time. It can help organisations make their services more accessible — through translation, through summarisation, through more responsive communication. It can help analyse patterns in service delivery data that would take humans weeks to identify.

The key word is "governed well." The opportunities are real, but they are only realised when the technology is adopted deliberately, with clear governance frameworks, genuine transparency, and ongoing accountability. The organisations that will benefit most from AI are not the ones that adopt it fastest — they are the ones that adopt it most thoughtfully.

Closing: From Buzzwords to Responsibilities

If you have read this far, you now have something many technology vendors would prefer you did not have: a genuine understanding of what AI is, how it works, and why it creates the governance challenges it does. You know that "AI" is not magic — it is pattern matching at scale. You know that fluency does not equal understanding. You know that training data carries biases. You know that the business models behind "free" AI are not free at all. And you know that your organisation's governance obligations — fiduciary, cultural, relational — demand more than a cursory review of a vendor's marketing materials.

This knowledge creates a responsibility. Community and NFP leaders can no longer afford to be passive consumers of AI technology, accepting whatever tools vendors offer and hoping for the best. You are governors. Your role is to make informed decisions about whether, when, how, and under what conditions AI is adopted in your organisation — and to ensure that the technology serves your community's interests rather than the other way around.

The transition from passive consumer to active governor of AI technology is the central challenge of this moment. It does not require you to become a data scientist. It does require you to ask the right questions, demand genuine answers, and make decisions grounded in your organisation's values rather than a vendor's sales targets.

In Article 2 of this series, we will move from understanding to action. We will examine existing AI governance frameworks — from the OECD AI Principles to the EU AI Act — and assess their relevance to community and NFP organisations. We will also introduce a practical governance framework designed specifically for the community sector: one that accounts for the cultural, relational, and fiduciary dimensions that generic corporate frameworks miss.

The buzzwords end here. The governance work begins.


My Digital Sovereignty Limited supports communities and not-for-profit organisations to govern technology on their own terms. For more information, visit mysovereignty.digital.

Village Community and Village Business both include sovereign AI with Guardian Agents in all subscriptions. We are accepting applications from communities and organisations. Beta founding partners receive locked-for-life founding rates.

Next in series: Governance Frameworks