Part 2 of 4

Governing AI in Community and Not-for-Profit Contexts

AI in the Service of Mission: Opportunities and Temptations

March 2026 · My Digital Sovereignty

This article is the second in a four-part series on AI governance for communities and not-for-profit organisations. Article 1 introduced the landscape and the case for governance.


Artificial intelligence holds genuine promise for community organisations and not-for-profits. The technology can translate content into multiple languages so that migrant communities can participate fully. It can transcribe oral histories from elders whose stories might otherwise be lost. It can make documents accessible to people with disabilities, summarise lengthy policy papers for time-poor volunteers, and automate the administrative burden that consumes so much of the sector's energy — the grant reports, the membership records, the event logistics that keep organisations running but drain the people who run them.

These are real capabilities, not speculative ones. The opportunities are genuine. But so are the temptations.

The first temptation is speed. AI tools are easy to adopt — often as simple as creating an account and uploading data. The gap between "we could use this" and "we are using this" can be measured in minutes, not months. But speed without governance is recklessness dressed as efficiency. Every piece of data uploaded to a commercial AI service becomes subject to that vendor's terms, that vendor's jurisdiction, and that vendor's commercial interests. Once uploaded, data cannot be un-uploaded in any meaningful sense.

The second temptation is vendor defaults. When an organisation adopts a commercial AI system, it inherits that vendor's assumptions about privacy, data retention, model training, and acceptable use. These defaults are designed for the vendor's benefit, not the organisation's mission. They represent the values of Silicon Valley shareholders, not the values of a community organisation in Christchurch or a parish in Birmingham. Accepting vendor defaults as governance is not governance at all — it is abdication.

The third temptation is conflating novelty with progress. If an AI system makes administration faster but compromises member privacy, that is not progress. If it automates decisions that should be made by humans exercising judgment and care, that is not progress — it is regression wrapped in a modern interface.

The fourth temptation is the most insidious: the assumption that because everyone else is doing it, we should too. Community organisations exist to serve their members and their missions, not to keep pace with technology trends. The question is never "should we adopt AI?" in the abstract. The question is always "does this specific use of AI, governed in this specific way, advance our specific mission while honouring our specific obligations to the people we serve?"

These temptations are not theoretical. Across the sector, organisations are adopting AI tools without governance frameworks, uploading sensitive data to commercial platforms without understanding the implications, and treating vendor marketing as due diligence. This article provides a practical foundation for doing better.

Specific Risk Profile for NFPs and Community Organisations

Community and not-for-profit organisations face a distinct risk profile when it comes to AI adoption. The risks that matter most to the sector are not the same risks that dominate corporate AI governance discussions. Understanding this distinction is essential for building governance frameworks that actually protect the people we serve.

Sensitive Data That Cannot Be Un-Leaked

Community organisations hold data of extraordinary sensitivity. Health charities hold medical information. Family violence services hold survivor stories. Churches hold confessional and pastoral records. Iwi organisations hold whakapapa — genealogical knowledge that connects people to land, to ancestors, and to identity. Community archives hold oral histories that were shared in trust, often by people who are no longer alive to give or withdraw consent.

This data is not comparable to customer transaction records or marketing databases. A leaked credit card number can be cancelled and replaced. A leaked survivor story cannot be un-told. A leaked whakapapa record cannot be un-known. The harm from exposure is not financial — it is personal, cultural, and often irreversible.

When this data enters an AI system — whether for analysis, summarisation, or any other purpose — it becomes subject to risks that the data's original custodians may never have contemplated. The person who shared their family history with a community archive in 1987 did not consent to that story being processed by a large language model in 2026. The whānau who entrusted their whakapapa to an iwi organisation did not consent to it being stored on servers in Virginia.

The governance implication is clear: some data must never enter AI systems at all, regardless of the potential benefits. This is not a technical limitation — it is an ethical boundary.

Already-Marginalised Communities Facing Amplified Bias

AI systems reflect the data they are trained on, and that data reflects the societies that produced it. When those societies have systematically marginalised certain groups — through racism, colonialism, ableism, sexism, or any other form of structural inequality — AI systems inherit and amplify those patterns.

For community organisations serving marginalised populations, this is not an abstract concern. A refugee resettlement organisation using AI-assisted case management may find that the system systematically underestimates the capabilities of people from certain countries, because the training data reflects historical patterns of discrimination. A disability advocacy organisation using AI to draft communications may find that the system defaults to deficit-based language, because that is the dominant framing in the data it learned from. An iwi health service using AI for risk assessment may find that the system reproduces the very disparities it was meant to address, because those disparities are encoded in the historical health data.

The organisations most likely to be harmed by AI bias are precisely the organisations least likely to have the resources to detect, understand, and mitigate it. This creates a vicious cycle: marginalised communities adopt AI tools to overcome resource constraints, those tools amplify existing biases, and the resulting harm falls on people who are already vulnerable.

Vendor Power Asymmetry

When a multinational corporation negotiates AI terms with Microsoft, it brings legal teams, procurement specialists, and the leverage of a large contract. When a community organisation with three staff members and a volunteer board adopts the same technology, it clicks "I agree" on terms it has neither the time nor the expertise to evaluate.

This asymmetry is fundamental and extends beyond initial terms. When a vendor changes its terms of service — as they regularly do — community organisations have no recourse. When a vendor is acquired by a larger company with different values, the organisation's data goes with it. When a vendor deprecates a product, the organisation loses access to its own workflows and, potentially, its own data.

For most community organisations, the choice is not between negotiating better terms and accepting defaults. The choice is between accepting defaults and not using the product. Governance frameworks must help organisations make that choice with clear eyes.

Resource Constraints Limiting Due Diligence

Proper AI governance requires time, expertise, and money — three things that community organisations perpetually lack. Conducting a data protection impact assessment requires legal knowledge. Evaluating vendor terms requires procurement expertise. Monitoring AI outputs for bias requires technical capability. Training staff on responsible AI use requires dedicated time and resources.

The cruel irony is that organisations adopt AI precisely because they lack resources, and governing AI responsibly requires the very resources they do not have. This is not an argument against governance — it is an argument for governance frameworks that are proportionate, practical, and designed for organisations without dedicated compliance teams.

It also suggests that the sector needs shared resources: template policies, collective evaluation of common tools, sector-specific guidance, and peer networks where organisations can share what they have learned. No individual community organisation can conduct a thorough evaluation of every AI tool on the market. But the sector collectively can, if it organises to do so.

Mission Drift Through Technology Adoption

Technology shapes behaviour. When an organisation adopts a tool, it subtly adjusts its practices to fit the tool's capabilities. Over time, these adjustments can shift the organisation away from its core mission without anyone making a conscious decision to change direction.

Consider a community storytelling organisation that adopts AI-assisted content creation. Initially, the AI helps with editing and formatting. But gradually, the ease of AI-generated content influences what stories are told and how. The raw, unpolished, deeply human stories that are the organisation's reason for existing become harder to justify when a more "professional" alternative is available at the click of a button. The mission has not changed on paper, but practice has drifted away from it.

This drift is dangerous precisely because it is invisible from the inside. Governance frameworks must include mechanisms for periodic realignment — asking not just "is this AI working?" but "is this AI serving our mission?"

Regulatory Lag

Laws and regulations governing AI are evolving rapidly but remain significantly behind the technology. The European Union's AI Act is the most comprehensive regulatory framework to date, but it is designed primarily for commercial AI systems and large-scale deployments. Aotearoa New Zealand's approach to AI governance remains largely principles-based, without the specific regulatory requirements that community organisations need for practical guidance.

This regulatory lag creates a governance vacuum. Organisations cannot rely on compliance with existing law as a sufficient standard of care, because existing law does not adequately address the risks they face. They must develop their own standards, informed by emerging regulation but going beyond it. This is demanding work, but it is also an opportunity: community organisations can establish governance practices that set the standard, rather than waiting for regulators to catch up.

Key Governance Questions for Any AI Use

Before adopting any AI system, every community and not-for-profit organisation should work through a structured set of governance questions. These questions are not a compliance checklist — they are a framework for honest deliberation. The goal is not to produce the "right" answers but to ensure that the right questions are asked and that the people affected by the decisions are part of the conversation.

Who Owns the Data We Are Putting Into This System?

Community organisations often hold data on behalf of their members, not as owners in their own right. A church holds pastoral records on behalf of its congregation. An iwi organisation holds whakapapa on behalf of whānau and hapū. When data is held in trust, uploading it to an AI system may exceed the authority under which it was originally collected. The question is not just "do we own this data?" but "do we have the authority to use this data in this way, given the conditions under which it was entrusted to us?"

Where Is It Stored, and Under What Jurisdiction?

Data stored in cloud-based AI systems is subject to the laws of the jurisdiction where it is physically stored and where the vendor is incorporated. For a New Zealand organisation using an American AI service, member data may be subject to US law, including government access under the CLOUD Act. For culturally sensitive data, jurisdictional questions extend beyond legal compliance — Māori data sovereignty principles assert that data about Māori should remain under Māori governance regardless of physical storage location.

What Happens if the Vendor Changes Terms, Is Acquired, or Shuts Down?

AI vendors are commercial entities in a volatile market. Can you export your data in a usable format? What happens to data used to train the vendor's models? If the vendor is acquired by a company whose values conflict with your mission, can you leave quickly? These are not hypothetical scenarios — multiple AI startups have been acquired, pivoted, or shut down in recent years, leaving dependent organisations scrambling.

What Harmful Uses Are Possible With This Data?

Governance must consider misuses, both deliberate and accidental. A data breach at a family violence service could endanger lives. Incorrect AI outputs in a health charity could lead to harmful advice. Misuse of cultural data could cause deep offence and lasting damage to relationships of trust. The harm scenarios for community organisations are often more severe than for commercial entities.

Has Our Community Been Consulted?

Genuine consultation means more than a notice in a newsletter. It means explaining the proposed AI use in accessible language, providing realistic information about both benefits and risks, and creating genuine opportunities for members to influence the decision — including deciding not to proceed. For many community organisations, consultation is not just good practice — it is a constitutional obligation under their governing documents.

Can We Explain How This System Makes Decisions?

This is not a demand for technical transparency at the level of model architecture. It is a demand for practical explainability: can you tell a member, in plain language, why the system produced a particular output? If you cannot explain it, you cannot govern it. And if you cannot govern it, you should not be using it for decisions that affect people.

What Is Our Exit Strategy?

Every AI adoption should begin with a plan for how to stop using it. An exit strategy forces the organisation to consider vendor dependency, data portability, and the true cost of adoption. An organisation that cannot articulate its exit strategy has not adopted a tool — it has acquired a dependency.

Baseline Governance Practices

The governance questions above inform a set of baseline practices that every community and not-for-profit organisation should establish before, or as soon as possible after, adopting AI tools. These practices are designed to be proportionate to the sector's resources while providing meaningful protection for the people the organisation serves.

AI Register: Documenting Every AI System in Use

An organisation cannot govern what it does not know about. An AI register is a simple document — a spreadsheet is sufficient — that records every AI system the organisation uses, what data it has access to, who authorised its use, and when it was last reviewed.

The register should include not just dedicated AI products but also AI features embedded in other tools. If the organisation uses Microsoft 365, it should note which AI features (such as Copilot) are enabled. If staff use ChatGPT for work purposes, that should be recorded. The discipline of maintaining a register forces intentionality — when a new AI tool is proposed, the register provides a natural checkpoint.
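To make this concrete, here is a minimal sketch of a register kept as a CSV file, written in Python. The file name, column names, and example entries are illustrative assumptions, not a required schema — a shared spreadsheet serves the same purpose.

```python
# A minimal sketch of an AI register backed by a CSV file.
# Column names and example entries are assumptions for illustration.
import csv
import os
from dataclasses import asdict, dataclass, fields

@dataclass
class RegisterEntry:
    system: str          # product name, including AI features embedded in other tools
    data_access: str     # what data the system can reach
    authorised_by: str   # who approved the use, and when
    last_reviewed: str   # ISO date of the most recent governance review

def append_entry(entry: RegisterEntry, path: str = "ai_register.csv") -> None:
    """Add a system to the register; proposing an entry is the governance checkpoint."""
    write_header = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(RegisterEntry)])
        if write_header:
            writer.writeheader()          # new file: write the column headings first
        writer.writerow(asdict(entry))

# Dedicated products and embedded AI features are both recorded.
append_entry(RegisterEntry("Microsoft 365 Copilot", "staff documents and email",
                           "Board, 2026-02-12", "2026-02-12"))
append_entry(RegisterEntry("ChatGPT (staff accounts)", "no member data permitted",
                           "Operations manager, 2026-02-12", "2026-02-12"))
```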

Data Classification and "No-Go" Zones

Not all data carries equal risk, and governance should reflect this. A practical data classification scheme for community organisations might include three tiers:

Open data — information that is already available to all members or is genuinely non-sensitive. Meeting dates, event announcements, published newsletters. AI processing of this data carries minimal risk.

Restricted data — information that is sensitive but may be appropriate for carefully governed AI use. Membership records, financial data, operational documents. AI processing of this data requires specific authorisation, appropriate safeguards, and ongoing monitoring.

Protected data — information that must never enter an AI system under any circumstances. Survivor stories, health records, cultural knowledge held in trust, whakapapa, confessional records, data from minors. The boundary around protected data is absolute and non-negotiable.

How specific categories of data are classified will vary between organisations and should be determined by the governance body with input from members. The critical point is that the classification exists, that it is documented, and that it is enforced — not as a suggestion but as a binding policy.
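Where classification is enforced in tooling rather than only in policy, the gate can be very simple. The following sketch assumes every dataset is labelled with a tier before any AI tool may touch it; the function name and the authorisation flag are illustrative.

```python
# A minimal sketch of enforcing the three-tier classification in code.
from enum import Enum

class DataTier(Enum):
    OPEN = "open"              # e.g. meeting dates, published newsletters
    RESTRICTED = "restricted"  # e.g. membership records; needs specific authorisation
    PROTECTED = "protected"    # e.g. whakapapa, survivor stories; never enters AI

def may_enter_ai_system(tier: DataTier, authorised: bool = False) -> bool:
    """Return True only when policy permits AI processing of this tier."""
    if tier is DataTier.PROTECTED:
        return False           # absolute, non-negotiable boundary
    if tier is DataTier.RESTRICTED:
        return authorised      # requires documented sign-off and safeguards
    return True                # open data: minimal risk

assert may_enter_ai_system(DataTier.OPEN)
assert not may_enter_ai_system(DataTier.PROTECTED, authorised=True)  # authorisation cannot override
```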

Informed Consent: Real Consent, Not Click-Through Consent

If an organisation's AI use involves member data, members must give informed consent. Informed consent is not a checkbox on a form. It requires that the person understands what they are consenting to, has a genuine choice about whether to consent, and can withdraw consent without penalty.

In practice, this means:

Explaining, in plain language, which AI systems will process member data and for what purposes.

Offering a genuine choice, so that members who decline are not excluded from services or participation.

Making withdrawal of consent as easy as giving it, with no penalty attached.

Recording consent explicitly, rather than inferring it from silence or burying it in membership forms.
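A minimal sketch of what recording consent might look like follows, assuming consent is tracked per member and per purpose. The field names are illustrative; the essential features are an explicit grant and a withdrawal that takes effect immediately.

```python
# A minimal sketch of a consent record that supports withdrawal without penalty.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ConsentRecord:
    member: str
    purpose: str                          # the specific AI use consented to
    granted_at: datetime
    withdrawn_at: datetime | None = None

    def withdraw(self) -> None:
        """Withdrawal takes effect immediately and carries no penalty."""
        self.withdrawn_at = datetime.now()

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None

record = ConsentRecord("A. Member", "AI summarisation of meeting minutes", datetime.now())
assert record.active
record.withdraw()
assert not record.active  # the member's data must now be excluded from that AI use
```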

Human-in-the-Loop Requirements

For certain categories of decision, human review must be non-negotiable. AI systems can assist with analysis, surface relevant information, and generate drafts — but there are decisions where a human being must exercise judgment, take responsibility, and be accountable.

As a baseline, human-in-the-loop requirements should apply whenever:

A decision affects an individual's access to services, support, funding, or membership.

An AI output assesses a person's eligibility, risk, need, or capability.

Restricted data is being processed, or an output will be acted on without independent verification.

A communication will be published in the organisation's name.

The human in the loop must be someone with appropriate authority, context, and expertise — not simply someone who clicks "approve" on every AI recommendation. Rubber-stamping is not review. The purpose of human-in-the-loop governance is to ensure that human judgment is genuinely exercised, not merely performed.
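One way to make genuine review enforceable is to require a recorded rationale before any AI recommendation can take effect. The sketch below is illustrative: the names, fields, and authority check are assumptions, but the principle — no rationale, no action — mirrors the requirement above.

```python
# A minimal sketch of a human-in-the-loop gate: an AI recommendation cannot take
# effect until a named, authorised person records a decision and a reason.
from dataclasses import dataclass

@dataclass
class Review:
    reviewer: str
    has_authority: bool   # confirmed against the organisation's delegations
    approved: bool
    rationale: str        # a recorded reason; "approve" alone is rubber-stamping

def apply_recommendation(ai_output: str, review: Review) -> str:
    """Only act on AI output after a genuine, accountable human review."""
    if not review.has_authority:
        raise PermissionError(f"{review.reviewer} lacks authority for this decision")
    if not review.rationale.strip():
        raise ValueError("A reviewer must record their reasoning, not just a click")
    if not review.approved:
        return "Recommendation declined by human reviewer"
    return ai_output  # approved with recorded judgment and accountability

decision = apply_recommendation(
    "Prioritise this grant application",
    Review("Committee chair", has_authority=True, approved=True,
           rationale="Consistent with funding criteria; source documents checked"),
)
```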

Internal Education

Everyone who touches AI systems needs a basic understanding of what they can and cannot do, what the governance framework requires, and what their individual responsibilities are. This does not mean turning every volunteer into a data scientist. It means ensuring people understand what data they should and should not put into AI systems, how to recognise outputs that may be incorrect or biased, and who to contact with concerns. Education should be ongoing, not a one-time induction — AI capabilities and governance requirements evolve too rapidly for static training.

Regular Review Cycles

AI governance is not "set and forget." Governance frameworks must include scheduled review cycles — at minimum annually, and more frequently for high-risk uses. Reviews should examine whether each AI system in the register still serves its purpose, whether data classifications remain appropriate, whether any incidents or concerns have arisen, whether the regulatory environment has changed, and whether the AI systems themselves have changed (vendors frequently update models, features, and terms without notice).

The review should be conducted by the governance body — the board, the committee, the rūnanga — not delegated to operational staff alone. AI governance is a governance responsibility, not an IT responsibility.
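The review schedule itself can be derived from the register. The sketch below assumes each registered system is tagged with a risk level; the intervals are illustrative, reflecting the baseline above of at least annual review and more frequent review for high-risk uses.

```python
# A minimal sketch of risk-tiered review scheduling.
from datetime import date, timedelta

REVIEW_INTERVAL = {
    "high": timedelta(days=90),       # e.g. systems touching restricted data
    "standard": timedelta(days=365),  # the annual minimum
}

def next_review(last_reviewed: date, risk: str) -> date:
    return last_reviewed + REVIEW_INTERVAL[risk]

def is_overdue(last_reviewed: date, risk: str, today: date | None = None) -> bool:
    return (today or date.today()) > next_review(last_reviewed, risk)

assert is_overdue(date(2025, 1, 10), "standard", today=date(2026, 3, 1))
assert is_overdue(date(2025, 12, 1), "high", today=date(2026, 3, 15))
```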

Cultural and Community-Specific Considerations

Baseline governance practices provide a foundation, but they are not sufficient on their own. Community and not-for-profit organisations operate within specific cultural contexts that shape their obligations, their values, and their understanding of what data means and who has authority over it. Governance frameworks must account for these contexts explicitly, not treat them as optional additions.

Indigenous Data Sovereignty: Te Mana Raraunga Principles

Indigenous data sovereignty is one of the most fully developed and clearly articulated frameworks for thinking about collective data rights, and its principles are relevant far beyond indigenous contexts. In Aotearoa New Zealand, Te Mana Raraunga — the Māori Data Sovereignty Network — has articulated a set of principles that provide a powerful foundation for AI governance in any community setting.

Rangatiratanga (authority and self-determination): Māori have an inherent right to exercise authority over Māori data. This principle asserts that governance of data is an expression of self-determination, not a technical convenience. The community decides how its data is used — not the vendor, not the platform, not the government.

Whakapapa (relationships and connections): Data exists within networks of relationships. It is not isolated facts but connected knowledge that links people to each other, to their ancestors, and to their land. Governance must respect and protect these connections.

Whanaungatanga (kinship and collective responsibility): Data governance is a collective responsibility, not an individual one. The community's interest in its data takes precedence over any individual's preference to share or withhold.

Kotahitanga (collective vision and unity): Data governance should reflect the collective aspirations of the community, not just the interests of those who happen to hold administrative authority at a given moment.

Manaakitanga (reciprocity, respect, and care): Those who hold data have obligations of care toward the people that data represents. Data custodianship carries responsibilities, not just rights.

Kaitiakitanga (guardianship and stewardship): Data is held in trust for future generations, not consumed by the present one. Governance decisions must consider long-term consequences, not just immediate utility.

These principles are not abstract values — they are practical governance requirements. An iwi organisation considering an AI system must ask not just "is this legal?" and "is this useful?" but "does this honour our obligations of kaitiakitanga? Does this respect the whakapapa embedded in this data? Have we exercised our rangatiratanga in making this decision, or have we ceded it to a vendor?"

Taonga and Whakapapa: Data as Treasure and Genealogical Knowledge

In te ao Māori, certain categories of knowledge are taonga — treasures of such significance that they carry specific cultural obligations and protections. Whakapapa is perhaps the most important category: the genealogical knowledge that connects individuals to their ancestors, their whānau, their hapū, their iwi, and ultimately to the natural world.

Whakapapa is not "data" in the way that a database administrator understands the term. It is living knowledge that carries mana (spiritual authority and prestige). Its disclosure is governed by cultural protocols, not privacy policies. Its custodians — the kaumātua and kuia (elders) who hold and transmit it — have responsibilities that transcend any organisation's data governance policy.

The implications for AI governance are profound. Whakapapa must never enter a commercial AI system. This is not a matter of data classification or risk assessment — it is a matter of cultural obligation. No governance framework, however sophisticated, can make it appropriate to process whakapapa through a system that the whānau does not control, that operates under foreign jurisdiction, and that may use the data for purposes the whānau has not contemplated.

This principle — that some knowledge is too significant and too sacred for algorithmic processing — is not unique to Māori culture. Every community has categories of knowledge that require special protection. Religious communities have confessional and sacramental records. Medical charities have patient narratives. Refugee organisations have stories of persecution and flight. The specific cultural context varies, but the governance principle is universal: some data is not a resource to be optimised — it is a trust to be honoured.

Collective Consent vs Individual Consent

Western data protection frameworks, including Aotearoa New Zealand's Privacy Act 2020 and the European Union's General Data Protection Regulation, are built on individual consent. But for many communities, individual consent is insufficient. When data represents a collective reality — a family history, a community's cultural knowledge, a congregation's shared experience — individual consent alone cannot authorise its use.

Consider a community archive holding the recorded oral history of a neighbourhood: twenty people sharing interconnected stories over an afternoon. Individual consent from each participant is necessary but not sufficient. The stories are collectively constructed — each person's contribution makes sense only in context. Using AI to analyse these stories requires the community's collective authorisation.

Similarly, whakapapa is by definition collective data. It belongs to the whānau and hapū whose identity it constitutes. An individual cannot consent to its processing on behalf of the collective.

Governance frameworks must accommodate collective consent alongside individual consent — consulting governance bodies before processing collective knowledge, establishing community-level policies that govern categories of data, and recognising that some decisions about data belong to the community, not the organisation.

The Role of Governance Bodies: Kaitiaki, Elders, Cultural Advisors

Effective AI governance in community contexts requires governance bodies that combine technical understanding with cultural authority. In many communities, this means engaging people who hold specific cultural roles:

Kaitiaki — guardians who carry responsibility for the stewardship of cultural knowledge and taonga.

Kaumātua and kuia — elders whose authority rests on lived experience, cultural knowledge, and the trust of their communities.

Cultural advisors — people who can translate a community's obligations into practical governance requirements.

These roles cannot be replaced by technical expertise alone. A data protection officer can assess legal compliance. A cultural advisor can assess whether a proposed AI use violates obligations that no law requires but that the community considers binding. Both perspectives are necessary; neither is sufficient on its own.

Non-Indigenous Communities Also Have Cultural Obligations

The principle that communities have specific cultural obligations regarding their data extends well beyond indigenous contexts.

Religious communities hold data governed by theological as well as legal principles — the seal of the confessional, the privacy of pastoral counselling, the sacredness of liturgical records. Migrant and refugee communities hold data that could, if disclosed, endanger people in their countries of origin — immigration status, political affiliations, family connections to persecuted groups. Professional bodies hold data about members' competence and conduct that carries obligations of fairness and due process. Health and disability organisations hold data about people's most intimate vulnerabilities, shared in contexts of trust.

In each case, the cultural context shapes the obligations that governance must honour. The baseline practices described earlier provide a common foundation, but each community must build on that foundation with governance structures that reflect its own obligations and values.

From Risk Lists to Design Principles

Risk identification is necessary but not sufficient. A list of risks can paralyse an organisation as easily as it can protect one. The purpose of understanding risks is not to generate anxiety but to inform design — to shape the principles that guide every decision about AI adoption, implementation, and governance.

The risks and considerations explored in this article point toward a set of design principles that community and not-for-profit organisations can use as a foundation for their AI governance frameworks.

Care Over Speed

The pressure to adopt AI quickly — before competitors, before the funding cycle ends, before the moment passes — is real but must be resisted. Community organisations exist to care for people, not to be first to market. Every AI decision should be made at the pace that allows for genuine deliberation, meaningful consultation, and honest assessment of consequences.

This does not mean paralysis. It means that the question "have we taken adequate care?" always takes precedence over "can we do this faster?"

Local Control Over Convenience

Commercial AI services offer convenience at the cost of control. The data leaves the organisation, the processing happens elsewhere, the terms are set by someone else. Local control — over data, over processing, over governance — requires more effort but preserves the organisation's ability to make its own decisions.

Where local control is not feasible (and for many organisations, it will not be feasible for all AI uses), governance must compensate: stricter data classification, more rigorous vendor assessment, clearer exit strategies, and explicit acceptance of the trade-offs involved.

Collective Voice Over Individual Preference

In community organisations, decisions about data governance should reflect the community's collective values, not the preferences of individual administrators, board members, or technology enthusiasts. This means consultation, deliberation, and consent mechanisms that give the community a genuine voice.

It also means accepting that the community may decide not to adopt a tool that individual members find useful, or to impose restrictions that individual members find inconvenient. The collective voice is not always the sum of individual preferences — it is the expression of shared values and mutual obligations.

Transparency as a Default

Every AI system the organisation uses, every piece of data that enters an AI system, every decision influenced by AI output — all of these should be visible to the governance body and, in appropriate forms, to the membership. Transparency is not a burden to be minimised — it is a discipline that keeps governance honest.

This does not mean publishing technical details that members cannot interpret. It means ensuring that members can understand, in plain language, what AI systems the organisation uses, why it uses them, and how their data is affected. It means that when AI influences a decision, the people affected by that decision know that AI was involved.

Sovereignty as Architecture, Not Aspiration

Data sovereignty must be built into the technical architecture of AI systems, not merely stated as a policy objective. If the architecture sends data to a foreign jurisdiction, no policy can make it sovereign. If the architecture allows a vendor to use community data for model training, no terms-of-service negotiation can fully mitigate that risk. Architecture is policy made real. Community organisations should favour AI architectures that enforce sovereignty by design — where data stays under community control because the system makes it technically impossible for it to leave.
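As one illustration of the principle, a client wrapper can refuse to send data anywhere except community-controlled infrastructure. This is a minimal sketch under stated assumptions — the endpoint URLs and allow-list are hypothetical — but it captures the idea that policy becomes real when code, not terms of service, enforces it.

```python
# A minimal sketch of "sovereignty as architecture": the client can only ever
# talk to an allow-listed, community-controlled model endpoint.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"localhost", "127.0.0.1", "llm.internal.example.org"}  # hypothetical

def checked_endpoint(url: str) -> str:
    """Refuse any inference endpoint outside community-controlled infrastructure."""
    host = urlparse(url).hostname
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"Refusing to send data to non-sovereign endpoint: {host}")
    return url

checked_endpoint("http://localhost:8080/v1/completions")      # permitted
# checked_endpoint("https://api.example-vendor.com/v1/chat")  # raises PermissionError
```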

Looking Ahead: From Principles to Governance Models

These design principles provide a compass, but a compass is not a map. In Article 3, we move from principles to governance models — examining how community and not-for-profit organisations can structure their governance arrangements to put these principles into practice, with governance committee structures, policy templates, decision-making frameworks, and accountability mechanisms proportionate to the sector's resources.

This article is the second in a four-part series on AI governance for communities and not-for-profit organisations. Article 1 introduced the landscape and the case for governance. Article 3 will address governance structures and decision-making frameworks. Article 4 will examine implementation, monitoring, and continuous improvement.

My Digital Sovereignty Limited builds community-owned digital infrastructure grounded in data sovereignty, cultural respect, and democratic governance. Learn more at mysovereignty.digital.

This work is licensed under a Creative Commons Attribution 4.0 International Licence.
