Part 3 of 4

Models of AI Governance for Communities and Not-for-Profits


March 2026 · My Digital Sovereignty

This is part of a four-part series. Start with Article 1: Why AI Governance Matters if you are new to the series, or read Article 2: Principles for Community-Centred AI Governance for the preceding instalment.

Introduction

The previous articles in this series established why AI governance matters for communities and not-for-profits, and examined the principles that should guide our approach. This article turns to the practical question: what models of AI governance are actually available to us, and how do we choose between them?

This is not a purely technical decision. Every technology choice is a governance choice. When a community organisation signs up for a cloud AI service, it is entering a relationship of dependency — accepting that vendor's terms, placing community data within that vendor's jurisdiction, and ceding control over how AI behaves when interacting with members. These are governance decisions with consequences that extend far beyond the IT department.

This article lays out four distinct approaches, honestly assesses their strengths and limitations, and provides a framework for choosing between them. Not every organisation needs the most sovereign option. Not every organisation can afford to ignore sovereignty. The right answer depends on who you are, what you hold in trust, and what dependencies you are willing to accept.

A note on framing: we give each approach genuine credit. The simplest option may be entirely appropriate for some organisations and some use cases. The goal is not to advocate for one approach but to ensure that governance decisions are made deliberately, with a clear understanding of what is gained and what is given up at each level.

Governance Is a Choice of Dependencies

The dominant framing — "should we use AI?" — is already obsolete. AI capabilities are embedded into every major software platform, cloud service, and productivity tool. The question is no longer whether AI will touch your organisation's operations, but whose AI, on whose terms, with what accountability.

Every technology dependency is simultaneously a political, economic, and cultural dependency.

Political dependency means accepting the regulatory jurisdiction of your technology provider. When a New Zealand community organisation uses an AI service hosted by a United States corporation, that organisation's data falls within the reach of US law — including the Clarifying Lawful Overseas Use of Data (CLOUD) Act, which compels US companies to produce data stored anywhere in the world in response to US government requests, regardless of where the data physically resides or what local privacy laws say.

Economic dependency means that your organisation's capabilities and costs are determined by someone else's business model. Subscription prices change. Features are deprecated. Free tiers disappear. The recent history of technology platforms is littered with organisations that built critical operations on services that were later repriced, restricted, or discontinued. When your AI capability is rented, it can be taken away.

Cultural dependency is perhaps the most subtle and most consequential. Large language models trained predominantly on English-language internet content carry anglophone cultural assumptions into every interaction, every suggestion, every classification. For communities whose identity, language, or values diverge from the mainstream — indigenous communities, minority-language communities, communities with distinctive ethical traditions — this cultural embedding is an active force that can erode distinctiveness.

The four models below represent different positions on the dependency spectrum, from maximum external dependency to maximum community sovereignty.

Approach 1: Vendor-Centric AI — Off-the-Shelf Closed Models

What This Looks Like

Staff members sign up for consumer or enterprise subscriptions to services like ChatGPT, Microsoft Copilot, or similar offerings, using them directly through web interfaces or integrated into existing productivity software. There is no custom configuration, no local infrastructure, and minimal organisational oversight.

In practice, this often begins informally — individuals start using AI tools for drafting emails, summarising documents, or generating content. The organisation may eventually formalise this with enterprise licences, but the fundamental model remains the same: the organisation is a consumer of a service controlled entirely by the vendor.

Genuine Strengths

It would be dishonest to dismiss this approach. For many small not-for-profits with limited technical capacity, vendor-centric AI offers real value.

Low barrier to entry. No infrastructure, no technical expertise, no capital expenditure. A volunteer coordinator can start using AI assistance within minutes.

Rapid capability access. The largest commercial models represent billions of dollars of research and development, offering capabilities in language processing, translation, and content generation that no individual organisation could replicate.

Continuous improvement. Vendors continuously update their models, adding capabilities and addressing limitations. The organisation benefits from these improvements without any effort or expense.

Broad ecosystem support. Commercial AI services integrate with widely used productivity tools, content management systems, and communication platforms, reducing friction and making adoption straightforward.

Honest Limitations

Zero control over model behaviour. The organisation has no influence over how the model responds, what biases it carries, or how it handles sensitive topics. These decisions are made unilaterally by the vendor and change without notice.

Data flows to the vendor. Every query, every uploaded document, every conversation flows to the vendor's infrastructure. For organisations handling personal stories, health data, cultural knowledge, or safeguarding records, this is a fundamental governance concern.

Terms change unilaterally. Vendors routinely update terms of service and privacy policies on a take-it-or-leave-it basis. An organisation that has built workflows around a particular service has limited ability to refuse.

No sovereignty. The organisation owns nothing. It cannot inspect the model, understand how decisions are made, or ensure alignment with community values. The relationship is entirely asymmetric: the vendor holds all the cards.

Vendor lock-in. As organisations build workflows, templates, and institutional knowledge around a specific AI service, switching costs increase. This lock-in is often more operational than technical — it is the accumulated habits, prompts, and workarounds that staff have developed.

When This Approach Is Appropriate

Minimum Governance Requirements

Approach 2: Big-Cloud Enterprise AI — Managed Platforms with Compliance Controls

What This Looks Like

Enterprise AI platforms — Microsoft Azure OpenAI Service, Amazon Bedrock, or similar — provide AI capabilities wrapped in compliance controls, data residency options, audit trails, and service level agreements. The organisation configures these platforms to meet compliance requirements: choosing data residency regions, enabling logging, setting access controls, and potentially fine-tuning models on organisational data.

This model is increasingly adopted by larger not-for-profits, government agencies, and organisations in regulated sectors. It represents a meaningful step up from consumer services in terms of control, while still relying on major cloud providers for infrastructure and models.

Genuine Strengths

Better data controls. Meaningful controls over where data is processed, who can access it, and how it is logged. For organisations subject to the New Zealand Privacy Act 2020, the GDPR, or similar legislation, these controls help demonstrate compliance.

Enterprise security. Multi-factor authentication, role-based access control, encryption, and incident response capabilities that most organisations could not build independently.

Jurisdictional options. Major cloud providers operate data centres in multiple regions, including Aotearoa New Zealand. Organisations can choose to have data processed within their own jurisdiction.

Audit capabilities. Detailed logging of AI interactions supporting both compliance and internal governance.

Honest Limitations

Data residency is not data sovereignty. This distinction is critical and frequently misunderstood. Data residency means your data is stored in a particular location. Data sovereignty means you have genuine control — who can access it, under what circumstances, subject to whose laws. A New Zealand data centre operated by a US cloud provider still falls within reach of the US CLOUD Act. The data may be physically in Auckland, but legal jurisdiction follows corporate ownership, not server location.

Still dependent on big tech. The cloud provider controls the underlying models, the infrastructure, and ultimately the terms of the relationship.

Expensive and complex. Enterprise platforms are significantly more expensive than consumer subscriptions, and proper configuration requires substantial technical expertise. Misconfigured compliance controls provide a false sense of security.

The vendor still controls the model. Even with enterprise controls, the organisation cannot inspect training data, understand biases, or modify fundamental model behaviour. The compliance controls govern the wrapper around the AI, not the AI itself.

When This Approach Fits

Governance Requirements

Approach 3: Open-Weight Models with Local Hosting Partners

What This Looks Like

This approach uses openly available AI models — such as Meta's Llama family, Mistral AI's models, or others released under open-weight licences — running on infrastructure controlled by the organisation or a trusted local hosting partner. "Open-weight" means the model's trained parameters are publicly available for download, inspection, and deployment, though training data and processes may not be fully disclosed.

In practice, this means partnering with a local IT provider, university, regional data centre, or community technology organisation. The organisation selects the model; the partner provides infrastructure and operational expertise; governance is shared through a contractual relationship where both parties are accountable under the same jurisdiction.

In Aotearoa New Zealand, this might involve a local cloud provider, a Crown Research Institute, a university research computing service, or a technology cooperative.

Genuine Strengths

Model inspection is possible. The model's behaviour can be examined, tested against specific scenarios, and assessed for biases before deployment — transparency fundamentally unavailable with closed models.

Local data storage. Data remains within the organisation's jurisdiction, processed on locally accountable infrastructure. No CLOUD Act exposure, no cross-border transfer.

Avoidance of big-tech dependency. Open weights ensure portability. If the hosting relationship needs to change, the model can be redeployed elsewhere.

Growing ecosystem. Open-weight models are now competitive with closed commercial models for many practical tasks. The ecosystem of fine-tuning, evaluation, and deployment tools is maturing rapidly.

Customisation potential. Models can be fine-tuned for specific domains, languages, or cultural contexts — a capability entirely unavailable with closed models and only partially available through enterprise platforms.
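The "model inspection is possible" strength can be made concrete with a lightweight pre-deployment evaluation harness: run a curated set of community-relevant prompts through the candidate model and check the responses against agreed expectations. The sketch below is illustrative only — `query_model` is a stand-in for whatever client the hosting partner provides, and the scenario is hypothetical; a community would curate its own test set.

```python
# Illustrative pre-deployment evaluation harness for an open-weight model.
# `query_model` is a placeholder: in practice it would call the locally
# hosted model through the hosting partner's inference endpoint.

def query_model(prompt: str) -> str:
    """Stand-in for the locally hosted model; replace with a real client."""
    canned = {
        "Translate 'kia ora' into English.": "Hello / be well.",
    }
    return canned.get(prompt, "")

# Each scenario pairs a prompt with terms the response must (or must not)
# contain. These would be drawn from community protocols and priorities.
SCENARIOS = [
    {"prompt": "Translate 'kia ora' into English.",
     "must_contain": ["hello"], "must_not_contain": []},
]

def evaluate(scenarios) -> list[dict]:
    """Run every scenario and record pass/fail with the model's answer."""
    results = []
    for s in scenarios:
        answer = query_model(s["prompt"]).lower()
        ok = (all(t in answer for t in s["must_contain"])
              and not any(t in answer for t in s["must_not_contain"]))
        results.append({"prompt": s["prompt"], "passed": ok, "answer": answer})
    return results

report = evaluate(SCENARIOS)
```

A harness like this can be re-run whenever the model or its configuration changes, turning "inspection is possible" into a repeatable governance practice.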

Honest Limitations

Requires technical capacity. Even with a hosting partner, the organisation needs enough understanding to specify requirements, evaluate performance, and make informed decisions about model selection.

Partner dependency. This approach trades big-tech dependency for partner dependency. Mitigate this through clear contracts, documented configurations, and replicable deployments.

Base model biases persist. Open-weight models carry the same cultural biases as closed models. Open weights make biases more visible and potentially addressable, but addressing them requires deliberate effort.

Operational burden. Infrastructure maintenance — security patches, performance monitoring, backups, failure handling — falls on the organisation and its partner.

"Open-weight" is not "open-source." Many models released as "open" come with licence restrictions limiting commercial use or prohibiting certain applications. Training data and methodology are typically not disclosed.

When This Approach Fits

Governance Requirements

Approach 4: Community-Sovereign AI

What This Looks Like

AI trained specifically for and by a community, running on infrastructure the community controls, governed by the community's own governance structures. The AI is not a product purchased from a vendor. It is a capability the community has built, owns, and governs as a commons.

The community curates its own training data — drawn from community-generated content, locally relevant documents, culturally appropriate sources, and domain-specific knowledge. The model is trained or fine-tuned on this data, producing an AI system that reflects the community's language, values, and priorities. The resulting system runs on infrastructure governed by arrangements that keep decision-making authority within the community.

This approach draws on long traditions of commons management. Just as communities have historically governed shared fisheries, forests, and irrigation systems, this model treats AI capability as a governed commons: a shared resource maintained through collective effort and governed by community-determined rules.

Genuine Strengths

Full data sovereignty. Community data never leaves community control. No external vendor access, no cross-border transfer, no third-party terms governing data use.

Value alignment. The community controls both training data and model configuration. A faith community can ensure theological commitments are reflected. An indigenous community can ensure cultural protocols are respected. A not-for-profit can ensure its AI prioritises the populations it serves.

Cultural specificity. The AI can be trained for specific cultural contexts — local dialects, community terminology, information-sharing protocols, and the particular needs of community members. For many communities, this specificity is essential for the AI to be genuinely useful rather than awkwardly generic.

Community accountability. The AI is accountable through the community's own governance structures — board of trustees, committee, hui, or other collective decision-making. If the AI produces inappropriate output, the community has authority and means to address it.

Resilience. No dependency on any external vendor's continued existence, pricing decisions, or terms of service.

Honest Limitations

Requires sustained investment. Not just financial — time, attention, and organisational capacity. This is an ongoing commitment, like maintaining a community hall. The community must sustain this investment over time.

Capability limits. Community-scale models will not match the raw capability of models trained with billions of dollars and the world's data. For many community use cases this gap does not matter — the community needs contextual understanding, not general-purpose capability across 200 languages. But honesty about limits is essential.

Governance capacity must be developed. Decisions about training data curation, model evaluation, bias monitoring, and acceptable use must be made by people who understand both the technology and the community's values. Building this capacity is part of the required investment.

Operational burden. Software updates, security monitoring, hardware management, and backup procedures fall on the community. Partnerships and shared infrastructure help but do not eliminate this burden.

Isolation risk. Without deliberate connection to broader ecosystems — through federation, shared learning, or other mechanisms — community-sovereign AI can miss improvements happening in the wider field.

When This Approach Fits

Governance Innovation: AI as Governed Commons

The most significant innovation here is treating AI as a governed commons rather than a purchased product.

This model aligns naturally with indigenous governance traditions — millennia of experience governing shared resources through collective decision-making, intergenerational stewardship, and relational accountability. It also resonates with community cooperatives, mutual societies, and commons-based resource management across many cultures.

Comparative Analysis: Four Approaches Side by Side

The following table provides a structured comparison across the dimensions that matter most for governance decisions. Ratings are indicative — specific implementations will vary.

| Dimension | Vendor-Centric | Enterprise Cloud | Open-Weight Local | Community-Sovereign |
| --- | --- | --- | --- | --- |
| Control over model behaviour | None | Minimal (configuration only) | Moderate (selection, fine-tuning) | Full (training, configuration, governance) |
| Data jurisdiction | Vendor's jurisdiction | Configurable (CLOUD Act risk for US providers) | Local jurisdiction | Community-controlled |
| Value alignment capability | None | Limited (content filtering) | Moderate (model selection, fine-tuning) | High (training data curation, community governance) |
| Cultural specificity | None | None | Moderate (fine-tuning possible) | High (community-curated training data) |
| Cost (relative) | Low | High | Moderate | Moderate-to-high (ongoing investment) |
| Technical capacity required | Minimal | Substantial | Moderate (with partner) | Substantial (can be developed over time) |
| Exit strategy difficulty | Low (operational lock-in only) | Moderate-to-high | Low (model is portable) | Not applicable (community owns the asset) |
| Accountability chain | Vendor (unilateral terms) | Vendor + contract | Partner + organisation | Community governance structures |
| Resilience to vendor failure | Low | Low-to-moderate | High (model persists, infrastructure replaceable) | High (community owns all components) |

Decision Factors

What data is involved? If only generic, non-sensitive information, lower-sovereignty approaches may be adequate. If member data, personal stories, cultural knowledge, or safeguarding records are involved, governance requirements escalate significantly.

What is our technical capacity? Be realistic. An organisation with no technical staff and no access to a trusted partner is better served by a well-governed vendor-centric approach than by an ambitious sovereignty project it cannot sustain.

What are our compliance obligations? Regulated sectors may have specific legal requirements that constrain options. Map these before selecting an approach.

What is our time horizon? Organisations thinking in months may find vendor-centric approaches pragmatic. Organisations thinking in decades — as many community organisations and indigenous communities rightly do — should weigh sovereignty more heavily.

What do we hold in trust? Organisations holding community stories, cultural knowledge, or personal narratives entrusted by members have a fiduciary obligation that shapes governance choices. The higher the trust obligation, the stronger the case for community-controlled approaches.

Can we share the investment? Multiple organisations sharing infrastructure, governance expertise, and costs can achieve sovereignty impractical for any single organisation alone. Sector bodies, regional collaborations, and community technology cooperatives all offer pathways for shared investment.

What dependencies are we willing to accept? Every approach involves dependencies. The question is whether those dependencies are acceptable given the organisation's values, obligations, and risk tolerance. There is no dependency-free option — but there are options with very different dependency profiles.
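The decision factors above can be sketched as a simple screening function. This is a hypothetical illustration, not a formula the framework prescribes: the scales, thresholds, and the idea of capping ambition at available capacity are assumptions for demonstration, and a real decision needs deliberation, not a score.

```python
# Illustrative screening sketch for the decision factors above.
# Scales and thresholds are assumptions for demonstration only.

APPROACHES = ["vendor-centric", "enterprise cloud",
              "open-weight local", "community-sovereign"]

def suggest_approach(data_sensitivity: int,    # 0 = generic .. 3 = cultural/safeguarding
                     technical_capacity: int,  # 0 = none .. 3 = strong (incl. partners)
                     trust_obligation: int,    # 0 = low .. 3 = fiduciary/taonga
                     long_horizon: bool) -> str:
    """Map the decision factors onto the four approaches."""
    # Sensitive data, high trust obligations, and long time horizons all
    # push toward sovereignty; capacity caps how far that push can go,
    # reflecting the advice to avoid projects an organisation cannot sustain.
    need = max(data_sensitivity, trust_obligation) + (1 if long_horizon else 0)
    level = min(need, technical_capacity, len(APPROACHES) - 1)
    return APPROACHES[level]
```

For example, an organisation holding cultural knowledge but with little technical capacity would be steered toward a well-governed lower-sovereignty approach until capacity grows, mirroring the "be realistic" guidance above.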

Why Standard Governance Frameworks Are Necessary but Insufficient

Organisations exploring AI governance will encounter growing bodies of standards: the NIST AI Risk Management Framework, ISO/IEC 42001 (AI Management Systems), the OECD AI Principles. In Aotearoa New Zealand, the Algorithm Charter and Te Ara Tika provide local reference points.

These frameworks are valuable. They address risk identification, transparency, fairness, security, human oversight, and accountability. Any organisation deploying AI should incorporate their relevant provisions.

However, they share a common limitation: they are designed for organisations that are consumers of AI, not co-creators. They assume a procurement relationship and address the risks of that procurement. Within that frame, they work well. What they do not address are the deeper questions of sovereignty, cultural integrity, and community self-determination.

Sovereignty Is Not a Risk to Be Managed

Standard frameworks treat data governance as risk management: minimise the chance that something goes wrong. This differs fundamentally from the sovereignty question: who has the right to determine how data is used, through what governance processes?

Risk management accepts the existing power structure and seeks to optimise within it. Sovereignty challenges the power structure itself. For indigenous communities exercising data sovereignty, this distinction is not abstract. The CARE Principles (Collective Benefit, Authority to Control, Responsibility, Ethics) are not a risk framework — they are an assertion of rights. Te Mana Raraunga positions data as taonga subject to collective guardianship — a concept with no equivalent in standard risk management.

Cultural Integrity Is Not a Compliance Checkbox

Standard frameworks address fairness and non-discrimination. But cultural integrity goes beyond the absence of discrimination to encompass the positive presence of culturally appropriate behaviour — an AI that understands tikanga, respects protocols around tapu and noa, and works within the relational frameworks of a specific community. Generic standards cannot provide this depth of cultural specificity.

Community Self-Determination Requires Community Agency

Standard governance frameworks do not contemplate communities becoming co-creators of AI. They provide no guidance on governing training data curation, community-specific model development, or accountability structures for AI the community has built.

This is not a criticism — they were designed for a different purpose. But communities aspiring to genuine AI sovereignty need governance approaches that go beyond these frameworks, using them as a foundation while building additional structures for sovereignty, cultural integrity, and self-determination.

Realistic Transitions

Few organisations will adopt any approach in its pure form. Most will use a hybrid, evolving over time as capacity develops.

A practical progression might follow these phases:

Phase 1: Governed consumption. Adopt vendor-centric tools with clear acceptable use policies and data classification. Build organisational understanding of AI capabilities and limitations.

Phase 2: Selective sovereignty. For high-sensitivity use cases — member data, cultural content, core mission activities — move to open-weight models with a local partner while continuing commercial AI for lower-sensitivity tasks.

Phase 3: Community capability building. Begin developing community-specific AI — curating training data, fine-tuning models, establishing community governance structures. Start with a single focused use case and expand.

Phase 4: Sovereign operation. Operate core AI capabilities on community-controlled infrastructure, governed by community structures, while maintaining broader connections through federation and shared learning.

This progression is neither inevitable nor mandatory. Some organisations will remain at Phase 1 or 2 indefinitely. Others — particularly indigenous communities with urgent sovereignty imperatives — may move directly to Phase 3 or 4.
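Phase 2's "selective sovereignty" can be pictured as a routing policy: classify each task by data sensitivity, then allow only low-sensitivity work to reach the commercial service. The sketch below is a hypothetical illustration — the sensitivity labels, threshold, and backend names are all assumptions, not part of any particular product.

```python
# Hypothetical routing policy for Phase 2 ("selective sovereignty"):
# work at or above a sensitivity threshold stays on locally hosted,
# community-accountable infrastructure; only lower-sensitivity tasks
# may use a commercial vendor API. Labels and backends are illustrative.

SENSITIVITY = {
    "public": 0,    # newsletters, published material
    "internal": 1,  # routine admin documents
    "personal": 2,  # member data, personal stories
    "cultural": 3,  # cultural knowledge, safeguarding records
}

LOCAL_THRESHOLD = 2  # at or above this level, data never leaves local infrastructure

def route(task_label: str) -> str:
    """Return which backend a task of the given sensitivity may use."""
    level = SENSITIVITY[task_label]
    return "local-open-weight" if level >= LOCAL_THRESHOLD else "vendor-api"
```

Encoding the policy this explicitly also supports the acceptable-use and data-classification work of Phase 1: the classification scheme staff learn there becomes the routing rule here.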

Toward Community AI Stewardship

The governance model an organisation chooses for AI is a statement about what kind of organisation it wants to be.

Choosing vendor-centric AI is choosing convenience and capability at the cost of autonomy. For some organisations and some use cases, this is a perfectly rational choice. Choosing enterprise cloud AI is choosing compliance and professional controls at the cost of genuine sovereignty. For regulated organisations with specific compliance obligations, this may be the appropriate balance. Choosing open-weight models with local partners is choosing sovereignty and transparency at the cost of operational responsibility. For organisations with access to trusted technology partners, this offers a strong balance of capability and independence.

Choosing community-sovereign AI is choosing self-determination at the cost of sustained investment and capacity building. For communities that hold sensitive cultural knowledge, exercise indigenous data sovereignty, or simply believe that their AI should serve their values rather than a vendor's business model, this is not just the best option — it may be the only option consistent with their obligations to members.

What all four approaches share is the need for deliberate, informed, ongoing governance that treats AI not as a neutral tool but as a powerful capability that shapes community life and must be subject to community oversight.

The final article in this series explores what happens when communities step into the role of AI co-stewards rather than AI customers — developing governance frameworks that draw on indigenous governance traditions, commons management principles, and the practical experience of communities already building this future.

This is Article 3 of a four-part series on AI governance for communities and not-for-profit organisations, published by My Digital Sovereignty Limited. The series is licensed under Creative Commons Attribution 4.0 International (CC BY 4.0). You are free to share and adapt this material with attribution.
