What AI Actually Is (and What It Isn't)


Series: To Hapori, To AI — Digital Sovereignty for Indigenous Communities (Article 1 of 5)
Author: My Digital Sovereignty Ltd
Date: March 2026
Licence: CC BY 4.0 International


A Machine That Finishes Your Sentences

You have probably heard people say that artificial intelligence is going to change everything. You may also have heard people say it is just a fad, or that it cannot do anything truly new. Both of these positions miss the point, and understanding why will help your community make better decisions about this technology.

Here is the plainest description of what AI does today: it predicts what word should come next.

When you type a message into ChatGPT or ask Siri a question, the system is not thinking about your question the way you or your kaumatua would think about it. It is doing something much more mechanical. It has been shown billions of pages of text — books, websites, conversations, legal documents, recipes, medical papers, arguments on social media — and from all of that reading, it has learned patterns. When you ask it a question, it generates an answer by predicting, one word at a time, what a plausible response looks like based on everything it has seen before.
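If you are curious what "predicting the next word" looks like in the simplest possible terms, here is a toy sketch in Python. It is nothing like a real system in scale; real models learn from billions of pages using neural networks, not a small word-count table. But the core move is the same: look at what came before, and pick the statistically likeliest next word.

```python
from collections import Counter, defaultdict

# A toy "training corpus" standing in for billions of pages of text.
corpus = (
    "the marae is the heart of the community . "
    "the community meets at the marae . "
    "the heart of the matter is the community ."
).split()

# Learn the patterns: count which word tends to follow each word.
following = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    following[word][next_word] += 1

def predict_next(word):
    """Return the most common word seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# Generate text one word at a time, always taking the likeliest next word.
word = "the"
sentence = [word]
for _ in range(6):
    word = predict_next(word)
    sentence.append(word)

print(" ".join(sentence))  # prints: the community . the community . the
```

Notice what the toy version does: it produces plausible-looking word sequences, and even falls into a repetitive loop, without any understanding of what a marae or a community is. Scale that up enormously and you have the basic character of today's systems.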

This is genuinely useful. A system that has absorbed the patterns of billions of pages of text can help you draft a letter, summarise a long document, answer a factual question, or suggest how to word a difficult announcement. These are real capabilities, and they save real time.

But it is not thinking. It is not understanding. It is pattern matching at an extraordinary scale.

"AI Can't Do Anything New" — It Depends What You Mean by New

People who dismiss AI by saying it cannot create anything original are making a claim that is narrowly true and broadly misleading.

A language model cannot originate anything from experience, because it has none. It has never sat at a tangi. It has never felt the weight of speaking on behalf of a whanau. It cannot understand why the karanga matters at the gate of a marae — it can only reproduce patterns that statistically resemble understanding. In that sense, everything it produces is a recombination of material it absorbed during training.

But consider what "recombination" actually means at this scale.

No single human being has read every piece of Treaty settlement documentation, every report from the Waitangi Tribunal, every piece of legislation on indigenous rights across the Commonwealth, every academic paper on indigenous data sovereignty, and every community newsletter from the last hundred years. The AI has been trained on a vast corpus that includes many of these sources. When it draws a connection between polycentric governance theory and traditional Maori decision-making structures, that connection is genuinely new to any individual human, even though both ideas existed separately.

So "AI can't do anything new" is true at the level of origination and false at the level of synthesis. Both things matter, and serious engagement with this technology requires holding both.

Can AI Actually Reason?

There is a deeper question that researchers are actively investigating, and the plain answer is: we do not yet know.

When early AI systems produced fluent text, it was reasonable to describe what they were doing as sophisticated pattern-matching. But as these systems have grown larger and more capable, something unexpected has happened. They have developed internal structures — circuits, if you like — that look surprisingly like reasoning. Not identical to human reasoning, but not simple retrieval either.

Researchers have found that large language models can solve problems they were never explicitly trained on. They can follow chains of logic across multiple steps. They can draw analogies between domains. Some researchers cautiously describe these capabilities as emergent — meaning they appeared at scale without being specifically designed in.

Whether this constitutes genuine reasoning or very sophisticated pattern-matching that mimics reasoning is an open question. The research is genuinely inconclusive. Anyone who tells you AI definitely can or definitely cannot reason is overstating what the evidence supports.

What we can say is this: the trajectory is steep. Five years ago, these systems could barely string a coherent paragraph together. Today, they can write essays, pass legal examinations, generate computer code, and hold conversations that many people find indistinguishable from talking to a human. Five years from now, their capabilities will be significantly greater again.

Why This Matters Now

No one knows with certainty what happens if an AI system ever develops something resembling its own intent — its own purposes and priorities that may not align with ours. We are likely still some distance from that threshold. But the architecture we build now, the habits of governance we establish today, will determine whether we are prepared when that moment arrives or whether we discover too late that we handed over control without noticing.

This is not science fiction. It is a straightforward observation about institutional preparedness. Your community has its own governance structures — its own tikanga for how decisions are made, how authority is exercised, how knowledge is shared. These exist not because every hui descends into chaos, but because governance structures need to be in place before they are needed, not after.

The same principle applies to AI.

The Real Issue: Whose Patterns?

Here is where it gets practical for your community.

When a large AI system is trained on the internet, it absorbs the internet's biases, assumptions, and cultural defaults. The internet is overwhelmingly English-language, Western, commercially oriented, and shaped by the values of Silicon Valley. This is not a conspiracy — it is simply what happens when you train a system on data that disproportionately represents one culture and one set of priorities.

For indigenous communities, this bias is not subtle. It is structural.

The internet over-represents written, Western, individualised knowledge. It under-represents oral traditions, collective decision-making, relational knowledge systems, and the forms of understanding that indigenous peoples have carried for generations. When an AI system is trained on this data, it does not merely lack indigenous knowledge — it is structurally weighted against it. The patterns it has learned treat Western frameworks as the default and everything else as the exception.

When a whanau member asks an AI system for advice about a difficult family situation, the system defaults to the language of individual therapy — assertiveness training, boundary-setting, self-care — because that is what dominates its training data. It does not reach for the concepts of whanaungatanga (kinship obligation), manaakitanga (care for others), or the long view that comes from knowing your obligations extend across generations. Not because these concepts are less valid, but because they are statistically rare in the data the system learned from.

When a community leader asks the AI to help with a sensitive communication, the system defaults to corporate language — stakeholder management, messaging frameworks, talking points — because business correspondence vastly outnumbers indigenous community correspondence in its training data.

The system is not hostile to your knowledge. It simply does not know your knowledge. It knows what is statistically common, and what is statistically common is overwhelmingly Western.
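To see why frequency alone sets the default, here is a toy continuation of the earlier sketch. The counts below are invented purely for illustration; the mechanism is not. Whichever framing dominates the training data is the framing the system reaches for first.

```python
from collections import Counter

# Invented counts standing in for how often each framing appears
# in a web-scale training corpus (illustrative numbers only).
advice_framings = Counter({
    "set firmer personal boundaries": 90_000,
    "practise self-care and assertiveness": 75_000,
    "honour whanaungatanga and talk as a whanau": 40,
})

# The system has no opinion about which framing is better or truer.
# It simply reaches for whatever is statistically most common.
default_advice = advice_framings.most_common(1)[0][0]
print(default_advice)  # the Western framing wins, purely on frequency
```

No one programmed the system to prefer individual therapy language over whanaungatanga. The preference falls out of the counts.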

This is the real issue with AI for indigenous communities. Not whether it can think. Not whether it will take over the world. The real issue is: whose patterns does it carry? And can your community choose its own?

Two Paths Forward

There are two ways a community can engage with AI.

The first path is to use Big Tech AI — systems like ChatGPT, Google Gemini, or Microsoft Copilot. These are powerful, convenient, and free or cheap to use. But they come with conditions. Your data flows to their servers. Your conversations become part of their systems. The AI's behaviour is governed by the company's policies, which can change without your consent. And the patterns the AI carries — its defaults, its assumptions, its cultural framing — are set by its training data, which you have no influence over. For indigenous communities that have spent generations fighting for sovereignty over their own knowledge, this is not a neutral trade-off.

The second path is to use AI that your community controls. A smaller system, less powerful in raw capability, but trained on your content, running on infrastructure you control, governed by rules your community sets. A system that knows the difference between a community announcement and a corporate blog post, because your community taught it. A system whose responses are checked against your actual records by mathematical watchers that operate independently of the AI itself.
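This article does not describe how those watchers work internally, so the sketch below is purely illustrative: the function name and the deliberately naive matching rule are invented here, not taken from Village AI. What it shows is the shape of the idea: a deterministic check, running outside the AI model itself, that flags any claim in a draft response it cannot find in the community's own records.

```python
def flag_unsupported(response: str, records: list[str]) -> list[str]:
    """Return sentences from `response` that cannot be matched to the
    community's own records. Illustrative only: a real check would use
    far more robust matching. The point is that it is a simple,
    deterministic rule that runs independently of the AI model."""
    record_text = " ".join(records).lower()
    unsupported = []
    for sentence in response.split("."):
        sentence = sentence.strip()
        if sentence and sentence.lower() not in record_text:
            unsupported.append(sentence)
    return unsupported

records = ["The hui is on Saturday at 10am at the marae."]
draft = ("The hui is on Saturday at 10am at the marae. "
         "Parking is free for everyone.")
print(flag_unsupported(draft, records))  # ['Parking is free for everyone']
```

The first sentence matches the community's records and passes. The second was invented by the model, so the check catches it, without needing to trust the model's own account of itself.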

This is what Village AI is. It is not the most powerful AI system available. It is designed to be faithful to your community — to your content, your values, and your governance. For indigenous communities, that faithfulness includes the ability to define your own vocabulary, your own governance boundaries, and your own rules about how knowledge is shared.

The next article in this series explains how Village AI is structurally different from Big Tech AI, and why that difference matters — particularly for communities whose knowledge systems have already survived one wave of colonisation and should not have to survive another.


This is Article 1 of 5 in the "To Hapori, To AI" series. For the full technical architecture, visit Village AI — Agentic Governance.

Next: Big Tech AI vs. Your Community's AI — Why the Difference Matters

Published under CC BY 4.0 by My Digital Sovereignty Ltd. You are free to share and adapt this material, provided you give appropriate credit.