Article 4 in the series — AI Governance for Communities

A Smaller Room Than You Think

On philosophy, philanthropy, sovereignty, and who is deciding what artificial intelligence becomes

April 2026 · My Digital Sovereignty

This is part of a five-part series. Start with Article 1: What Is AI, Really? if you are new to the series, or read Article 3: Models of AI Governance for the preceding instalment.

If you have ever finished reading a piece about artificial intelligence and felt more confused than before, you are in good company. Very few people — including many who build the systems — fully grasp how fast the terrain is shifting beneath us. This article is not an attempt to explain all of it. It is an invitation to think carefully about a smaller, older question that underlies everything else: who is shaping the future of artificial intelligence, and by what authority?

The question may sound abstract. It is not. The outline of how this technology will be governed — who owns it, who funds it, who steers it, who is asked, who is excluded — is being drawn right now, often before most of us even know the conversation is happening. When a technology is still young, debates about its "ethics" are usually loud but low-stakes. The decisions that matter most are made quietly, by people whose names rarely appear in the news, in rooms that are not legislatures.

With AI, that quiet period is now.

A Movement That Began in a Different Place

To understand why a small group of philosophers and philanthropists has come to sit near the centre of AI development, it helps to begin somewhere less dramatic than a laboratory. In the early 2010s, a loose movement called "effective altruism" began asking a deliberately simple question: given limited resources, how could one actually do the most good? The early answer was largely about global health. Money spent on malaria nets could be compared, in lives saved per dollar, against money spent on cash transfers or deworming. This was not glamorous work, but it was serious work, and it attracted serious people.

Over time, a strand of the movement pushed the question further. If the future could contain vastly more people than the present — perhaps many orders of magnitude more — then the moral weight of any action reaches forward in time as much as outward in space. This view, known as longtermism, treats safeguarding the survival and long-range flourishing of humanity as the most important thing a person can work on. Extended in one direction, it leads to pandemic preparedness and the prevention of other catastrophes that could cut the future short. Extended in another, it leads inexorably to the technology that seems most likely to shape — or end — the long future: artificial intelligence.

One does not have to endorse this framework to recognise how compellingly it rearranges priorities for those who accept it. And once a person believes, sincerely, that a miscalculated AI system could close the human story, the rest of the logic writes itself. You go to the places where those systems are being built. You try, with whatever influence you can muster, to shape what happens there.

From Idea to Institution

This movement did not remain confined to philosophy seminars. Its convictions took institutional form. OpenAI was founded in 2015 with the stated mission of ensuring that advanced AI "benefits all of humanity" — a phrase written in exactly the vocabulary of the movement just described. Anthropic, created by a group of former OpenAI researchers, made "safety and reliability" the explicit centre of its public identity. The people working inside these organisations were, in many cases, the same people who had spent years in adjacent philosophical circles, arguing about risk, moral weight, and what duties we owe to future generations.

The laboratories experimented with unusual corporate forms meant to bind their behaviour to their stated ethics. OpenAI began as a non-profit and later layered a "capped-profit" company on top of it, so that investors' returns were limited by design. More recently it has been pursuing a further restructuring: a move toward converting the commercial arm into a public benefit corporation. Whether the non-profit parent will retain ultimate authority, and what the final shape of the arrangement will be, has been actively contested and remains unsettled. Anthropic chose a different path: a public benefit corporation from the beginning, with a governance trust meant to sit alongside ordinary shareholders and speak for the public interest. These were sincere attempts to use corporate law as a container for moral commitment — to write ethics into the legal bones of the enterprise.

Whether such experiments can hold their shape under pressure is a question the next decade will answer. What is already clear is that the language of care and the language of corporate power have become entangled in ways that are difficult to fully untangle from outside.

The Money, and What Money Does

Beneath all of this sits the less visible but more consequential question of capital. Influence in any field ultimately depends on the means to exercise it, and in this case the means are highly concentrated. Open Philanthropy, funded largely by Dustin Moskovitz and Cari Tuna, has directed billions of dollars toward causes the movement judges most important — increasingly, AI safety and related fields, alongside continuing support for global health and animal welfare. A network of smaller foundations, regranting programmes, and individual donors extends this influence further, reaching research institutes, academic programmes, policy think tanks, and early-career fellowships.

For a period, this pool was amplified by donors from the cryptocurrency world, most visibly Sam Bankman-Fried, whose collapse in 2022 forced the movement to reckon publicly with its proximity to extraordinary risk and with money whose provenance it had not fully examined. Some of its certainty did not survive the encounter.

The architecture that remains has a distinctive quality. A small number of philanthropic actors, acting in good conscience but accountable to no public body, can rapidly reorient the research priorities of an entire field. They can fund the think tanks that advise governments. They can underwrite the academic programmes that train the next generation of regulators. They can place their alumni inside agencies that write the rules. In ordinary democratic politics, a shift of this magnitude in priorities would be debated openly. In philanthropy, it happens when a board meets.

Why "Safety" Is Not a Neutral Word

A growing chorus — some of it from within the movement itself — raises harder questions. When a small, wealthy, ideologically homogeneous network can shape the research agenda of the leading laboratories, fund the institutions that advise governments, and place its alumni in regulatory positions, something has happened that is difficult to describe as neutral. The word "safety" does a great deal of work in this context. It can mean careful, technical research into how machine learning systems behave under unusual conditions. It can also mean an argument for permitting only a few organisations to build the most powerful systems, on the grounds that letting others do so is too dangerous. Both meanings often appear in the same document.

Critics point out that this is not the first time a confident technocratic class has believed it should govern what ordinary people cannot. What is different now is the scale of what is being decided. Not the shape of a single industry, but the architecture of systems that may one day mediate much of human life — how people work, learn, vote, speak, form opinions, and are seen by the institutions around them. A debate that sounds like it is about computer science is, at its heart, a debate about whose values get encoded into the machinery of the next century.

A Philosophical Trap

At the core of the disagreement is a philosophical move that deserves careful attention. Longtermism, in its most rigorous form, is a serious argument for giving real weight to the welfare of future generations. In its cruder form, it invites a particular kind of mistake: treating very small probabilities of very distant outcomes as if they were reliable guides to present action.

Philosophers have a name for the trap. It is sometimes called Pascal's mugging, after a thought experiment in which someone offers you an impossibly large reward in exchange for your wallet. Because the imagined payoff is so large, even a minuscule chance that the offer is genuine can, on paper, justify handing over the money. The arithmetic works. The reasoning does not. Used carelessly, this pattern of thought allows present-day harms — surveillance, monopoly power, the quiet concentration of wealth, the exclusion of entire communities from decisions that bind them — to be discounted for the sake of catastrophes that may never arrive.
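To see the shape of the trap, it helps to run a toy version of the arithmetic, with every number invented purely for illustration. Suppose the mugger promises a reward worth 10^100 units of value, your wallet is worth 100 units, and you judge the chance that the promise is genuine to be one in 10^50. The expected value of complying is then

\[
\mathbb{E}[\text{comply}] = \underbrace{10^{-50}}_{\text{probability}} \times \underbrace{10^{100}}_{\text{promised payoff}} - \underbrace{100}_{\text{wallet}} \approx 10^{50} > 0,
\]

so the calculation says to hand over the wallet. Worse, it keeps saying so however sceptical you become: for every zero you add to the denominator of the probability, the mugger can simply add two more to the promised payoff. The multiplication is valid; what fails is the premise that numbers of this kind can be meaningfully multiplied at all.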

None of this proves that the catastrophes being imagined are unreal. Some of them may well be real. The point is narrower and more uncomfortable: we do not yet know which are which, and a framework that privileges confident extrapolation over humility about what we do not know can be a weapon as easily as a shield.

The Horizon Will Not Hold Still

There is a further difficulty worth naming before going on. Everything in this article treats "AI" as if it were one thing. It is not. The systems that currently dominate public conversation — large language models trained on vast corpora of text — are only one branch of a much older and wider tree. Quantum computing, still years from maturity, may change which problems machines can approach at all. New model architectures that do not rest on the same assumptions as today's systems are quietly beginning to emerge. When experienced researchers say they cannot fully picture what the next decade will bring, they are not performing humility. They mean it.

This has two consequences worth sitting with. The first is that any confident claim about the long-term risks of AI — including the most alarming — should be read with a certain reserve, because the people making the claims do not themselves know what they are extrapolating from. The second is that waiting until we understand everything before asking governance questions is itself a decision, and a consequential one. The choices being made in the rooms described in this article are being made under conditions of genuine unknowing. The right response to unknowing is not confidence. But it is also not silence.

A Different Tradition

The movement this article has been tracing is not the only tradition of serious thinking about how technology should be governed. It is only the loudest in a particular cluster of cities. Elsewhere — in communities that have lived with much older questions about how to share decisions fairly, across relationships that extend backward and forward through many generations — other frameworks have been developed quietly, over decades. They are now arriving in AI policy just as the longtermist tradition's internal contradictions begin to show.

In Aotearoa New Zealand, Māori scholars have developed a body of work on indigenous data sovereignty, the governance of knowledge, and the obligations that attach to systems trained on cultural material. The Māori Data Sovereignty Network, Te Mana Raraunga, has articulated a set of principles — rangatiratanga (self-determination), whakapapa (the relationships that give a thing its place in the world), whanaungatanga (kinship), kotahitanga (unity), manaakitanga (care), and kaitiakitanga (guardianship) — that treat data not as raw material but as taonga: something held on trust, with obligations that cannot be reduced to a licence agreement. The CARE Principles developed through the Global Indigenous Data Alliance — Collective benefit, Authority to control, Responsibility, Ethics — translate the same commitments into vocabulary that governments and international bodies can read.

More recently, Dr Karaitiana Taiuru has articulated a Kaupapa Māori way of thinking about artificial intelligence, working through three overlapping figures: he tangata (the person), he karetao (the puppet), and he ātārangi (the shadow). AI presents as a person, in his framing, but lacks the relational obligations that personhood carries in te ao Māori. It is moved by distributed forces — developers, operators, data providers, users, emergent interactions — and accountability diffuses through the chain in ways that no single participant can answer for. And it carries within its training data the mauri, the vital essence, of knowledge that belongs to communities who may never have been asked.

Why does any of this matter for the question this article has been pursuing? Because the two traditions — the longtermist, expert-centric framework developed in Oxford and Silicon Valley, and the sovereignty-based, community-centric framework developed in Aotearoa and in other indigenous contexts around the world — ask fundamentally different questions. The first asks: which experts should be empowered to govern AI safely? The second asks: how do communities govern systems that affect them? The first is a technocratic answer to a political question. The second begins with the political question and treats the technical questions as downstream of it.

This is not a contest between two philosophies so much as a question of which traditions a reader draws from. Taking both seriously makes it harder to believe that the only alternative to Silicon Valley's expert class is chaos. Other traditions have thought about these problems for longer, and with more humility about the limits of what can be calculated. They deserve to be in the room — and in some cases they have been thinking about the problem for longer than the room has existed.

What Action Can Look Like

A reader of an earlier draft offered a criticism I took to heart: the piece describes the problem well but is vague about what a reader should actually do. The criticism was fair. Let me try to answer, without pretending the answer is complete.

The first thing worth saying is that the distinction between making a difference and choosing who is in the room is a false one. Every time a community decides which tools to use, which platforms to host their conversations on, which companies to trust with their data, which frameworks to consult when they disagree — that is governance. It is not the glamorous kind, and it rarely makes the news, but it is where the real room is.

With that in mind, here are six directions available to any reader — not in priority order so much as a progression, from the first question you can ask in private to the deepest obligation you carry in public:

1. Treat the ownership and governance of the tools you use as a serious question, not a technical detail.

Where is your data stored? Who has the right to read it? Whose jurisdiction applies? What happens to it if the company is sold, changes its mind, or quietly rewrites its terms? The answers are often unsatisfying, but the act of asking is the first form of governance available to anyone.

2. Attend to the scholars and frameworks who have been thinking about this for longer.

In Aotearoa, that means taking seriously the work of Karaitiana Taiuru, Potaua Biasiny-Tule, and networks such as Te Mana Raraunga; it means reading about WAI 262 and about instruments like the Kaitiakitanga Licence not as specialist literature, but as a serious contribution to the governance question. Elsewhere, it means finding the communities in your own context — indigenous, religious, civic — who have held the long questions for longer than the technology has existed.

3. Reduce dependence on the systems you are most uneasy about.

If a handful of very large companies have come to mediate most of the digital infrastructure of your life, some of that is because individual choices made over years aggregated into a market position. Those choices can be made differently. Self-hosted, community-owned, or sovereign alternatives exist for almost every function that matters — email, storage, messaging, documents, search, payments — and they improve only when people actually use them.

4. Participate in deliberation rather than only in consumption.

Many of the newer governance experiments — including the Village project this writer is involved with — are built to treat the people who use them as participants with a voice, not as data sources to be mined. Using such systems, and giving considered, critical feedback when the system asks for it, is itself a governance act. This is not light work. Deliberation — thinking in public, holding a position under pressure, staying long enough to be changed by what you hear — asks for nerve and perseverance. "Rate the tool" stops being a marketing phrase when the ratings actually shape what the tool becomes.

5. Bring the question into conversations that already exist.

The most important move is often not to join a new organisation but to insert the governance question into the conversations you are already in. A parish council, a school board, a book club, a community trust, a trade union, a professional association, a whānau discussion about what their tamariki should be exposed to — all of these will soon, if not already, be making decisions about AI tools. They will make better decisions if someone in the room has read enough to ask who controls the thing they are being sold.

6. Honour the obligations that come with knowledge.

This is the quietest, and perhaps the deepest, form of action. Māori thought, at its most careful, insists that knowledge is held on trust and comes with duties — to the community it came from, to those not yet born, to the web of relationships within which it lives. If an algorithm is trained on a people's stories, songs, or sacred material without their permission, that is not simply a regulatory failure. It is a violation of the obligations that the knowledge carried in the first place. Recognising this, and refusing to participate in the violation, is a form of action available to anyone who is paying attention.

None of these moves alone changes the direction of the industry. They are not meant to. What they do is change the proportion of people who are in the conversation at all — and over time, the proportion is what matters. The rooms in which AI's future is being decided are smaller than most of us assume, but the walls are thin, and the decisions about whose voices come through them are still being made.

A Closing Note, Not a Conclusion

None of what is above is meant to dismiss the genuine thought and care that has gone into the effective altruism movement's work on AI. Many of the people involved are serious, thoughtful, and grappling with problems that would paralyse most of us. The critique is not of their sincerity. It is of the architecture they work within: of concentrated philanthropic capital quietly steering research agendas, of governance experiments that remain private even when their language is public, and of a philosophical framework whose limits its own advocates are only beginning to examine in daylight.

The future of artificial intelligence will not be decided by any one movement, nor by any one tradition; the frameworks described in this article have much to offer each other, if they are allowed to inform one another rather than stand apart. But the conversation about what AI should become is being shaped more by some voices than by others. The traditions that have thought longest and most carefully about collective, relational, intergenerational governance are still, in most of the rooms where these decisions are made, treated as a footnote or a courtesy rather than as colleagues. The modest proposal of this piece is simply that more voices should be in the conversation, and that the older traditions should be treated as the serious partners they are, rather than as the decoration they are sometimes mistaken for.

There is an older word for what is being asked. The word is participation. It has never been easy. It has never been optional either.

This article is part of an ongoing series on the governance of artificial intelligence. It draws on published commentary and critique of the effective altruism movement's role in AI policy and funding, on the governance structures of leading AI laboratories as documented through 2026, and on the body of Māori and indigenous scholarship on data sovereignty and plural governance that the author treats as a primary source rather than a footnote.

The author writes at mysovereignty.digital, where this article and the rest of the AI Governance series are published. Republication is welcomed and encouraged.

This work is licensed under a Creative Commons Attribution 4.0 International Licence (CC BY 4.0). You are free to share and adapt the material, in any medium or format, for any purpose, provided you give appropriate credit to the author and link back to the original.
