Cooperation by Construction – A Framework for Governing Frontier AI

What makes AI unsettling is not only the pace of the technology but the way our institutions respond to it. Governments cling to control. Companies build walls around their models. Standards splinter. Safety protocols multiply even though no one can verify them. Each regulatory bloc pushes its preferred approach and none of them can see what the others are doing. In a field built on speed and interaction, our systems behave as if steady, top-down authority were still enough to produce order. It's not.

The problem is easier to sense than to articulate. People say things like “No one is in charge, and everyone is in charge.” That confusion is not cultural; it is structural.

The uncomfortable truth is that the control-first mindset no longer fits the world AI creates. We are dealing with systems that move through their environment faster than people can react; a regulator working on legislative timescales simply doesn't stand a chance. Behaviour emerges from interactions we cannot supervise directly. Treating agents as objects that must be contained misses the deeper problem: the real source of misalignment is not ideological but architectural. The conditions that allow us to work together can't hold at the speed of frontier technology.

What makes AI challenging is not that agents have goals. It is that their goals shift through processes we cannot see, and their actions take effect in environments that lack any shared picture of what those actions are meant to achieve. We keep trying to regulate the inside of the agent, as if the solution lies in inspecting internal objectives. But the more autonomous these systems become, the less purchase that approach has. Internal alignment drifts; external consequences do not wait.

A more stable route is to shift the burden of trust away from the agent and into the environment it acts within. Instead of controlling what an agent is, we control the surface through which its actions meet the world. That surface—made of fixed activities, forkable utilities, and decentralised rulebooks—creates a boundary where only cooperative behaviour remains executable. If an action cannot be interpreted, verified, or linked back to a commitment that humans recognise, it simply does not run.
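
To make this concrete, here is a minimal Python sketch of such an execution surface. The names (Commitment, ActionGate) and the three checks are illustrative assumptions, not an existing system; the point is only the shape of the boundary: an action runs if and only if it can be interpreted, verified, and traced back to a recognised commitment.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass(frozen=True)
class Commitment:
    """A commitment humans can recognise and hold an agent to."""
    id: str
    holder: str
    description: str

@dataclass
class ActionGate:
    """The surface through which an agent's actions meet the world."""
    commitments: dict[str, Commitment] = field(default_factory=dict)

    def register(self, c: Commitment) -> None:
        self.commitments[c.id] = c

    def execute(
        self,
        action: dict[str, Any],
        verify: Callable[[dict[str, Any]], bool],
        run: Callable[[dict[str, Any]], Any],
    ) -> Any:
        # 1. Interpretable: the action must declare what it is for.
        if "intent" not in action or "commitment_id" not in action:
            raise PermissionError("uninterpretable action: not executed")
        # 2. Linked: it must trace back to a recognised commitment.
        if action["commitment_id"] not in self.commitments:
            raise PermissionError("no recognised commitment: not executed")
        # 3. Verified: it must pass an explicit external check.
        if not verify(action):
            raise PermissionError("verification failed: not executed")
        return run(action)
```

Nothing in this sketch inspects the agent's internals; the filter lives entirely in the environment's execution path.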

This sounds abstract, but it rests on an everyday lesson. Cooperation works only when people can share a picture of what they are doing, see one another’s behaviour, and make commitments that hold. Small groups manage this naturally. Large systems lose it quickly. AI accelerates that loss because it collapses not just information costs, but the costs of knowledge, action, and decision. A person can now trigger world-shaping processes without going through the deliberation that once held harmful outcomes in check. Institutions that rely on slow agreement or centralised oversight are structurally too sluggish to keep up.

That does not mean disorder is inevitable. It means the architecture of coordination has to be redesigned so that cooperation is the default and control the exception. In practice this means three things.

First, the environment must hold a shared picture of the activity. Instead of vague principles written at the start of a project, meaning must stay live and adjustable. Goals, norms, and interpretations are expressed through explicit commitments—visible on interfaces that link meaning to action. Trust sits in those commitments, not in the inscrutable behaviour of an agent.
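
A sketch of what “live and adjustable” might mean in practice, again with hypothetical names (SharedPicture, commit): every norm is an explicit, attributed statement, every revision stays on record, and the current picture is always readable.

```python
import time
from dataclasses import dataclass, field

@dataclass
class SharedPicture:
    """A live, amendable record of what an activity means."""
    norms: dict[str, str] = field(default_factory=dict)
    # Full revision history: (timestamp, author, norm key, statement).
    history: list[tuple[float, str, str, str]] = field(default_factory=list)

    def commit(self, author: str, key: str, statement: str) -> None:
        """Publish or revise a norm; earlier readings stay on record."""
        self.history.append((time.time(), author, key, statement))
        self.norms[key] = statement

    def view(self) -> dict[str, str]:
        """What any participant, human or machine, currently sees."""
        return dict(self.norms)
```

Trust attaches to the commit log, not to any agent's internal state.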

Second, the rules and utilities that organise the activity must remain contestable. If an agent or operator wants to propose a new way of doing something, they can; if others disagree, they can fork the relevant component without breaking the whole system. This keeps learning open. It also prevents capture by any single actor, because no one can close the system around themselves.
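
Forkability can be sketched the same way, with invented names: a Rulebook is just a component that anyone can copy and diverge from, so disagreement produces a visible alternative rather than a broken system.

```python
import copy
from dataclasses import dataclass, field

@dataclass
class Rulebook:
    """A contestable component of the coordination environment."""
    name: str
    rules: dict[str, str] = field(default_factory=dict)

    def propose(self, key: str, rule: str) -> None:
        """Anyone may propose a new way of doing something."""
        self.rules[key] = rule

    def fork(self, new_name: str) -> "Rulebook":
        """Dissenters copy the component and diverge; both versions
        keep running, and neither can close the system around itself."""
        clone = copy.deepcopy(self)
        clone.name = new_name
        return clone

# Illustrative use: a disagreement becomes a fork, not a breakage.
base = Rulebook("eval-rules-v1")
base.propose("red_team", "external red team before any deployment")
strict = base.fork("eval-rules-v1-strict")
```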

Third, the environment must generate feedback quickly enough to keep up with the systems operating within it. AI reshapes the coordination field by moving faster than interpretation can stabilise. The only defence is a structure where feedback can update the shared picture of the activity before drift accumulates. When meaning can renew, misalignment becomes repairable rather than catastrophic.
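
The feedback requirement can be shown as a toy loop, where observe_drift and reinterpret are assumed hooks rather than a real API: the structural point is that meaning is renewed as soon as drift crosses a tolerance, so repair is routine rather than exceptional.

```python
from typing import Callable

Picture = dict[str, str]

def renewal_loop(
    picture: Picture,
    observe_drift: Callable[[Picture], float],
    reinterpret: Callable[[Picture], Picture],
    threshold: float = 0.2,   # illustrative tolerance, in [0, 1]
    steps: int = 100,
) -> Picture:
    """Each cycle, measure drift between behaviour and the shared
    picture, and renew the picture before drift can accumulate."""
    for _ in range(steps):
        if observe_drift(picture) > threshold:
            # Misalignment is repaired by updating shared meaning,
            # not by halting the system.
            picture = reinterpret(picture)
    return picture
```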

This is the opposite of the world we have built. Today’s AI governance regimes reflect the habits of twentieth-century control. Regulatory blocs compete to set standards, but those standards cannot interoperate. Safety protocols are published without methods to check whether they are being followed. Model governance frameworks assume a level of shared meaning that no longer exists across jurisdictions or industries. Open and closed ecosystems have no common surface through which cooperative behaviour can stabilise.

The result is familiar from earlier coordination failures: everything becomes brittle, and every actor responds by tightening control. But in a high-velocity domain, control is precisely what breaks the system. The more we try to lock things down, the more we push behaviour into shadows where it becomes impossible to see, let alone govern.

Historically, each collapse in the cost of coordination has produced a governance crisis. The printing press reshaped political authority; industrialisation reshaped labour and law; networks reshaped media and markets. But those shocks unfolded on human timescales. People interpreted change as it arrived. Institutions learned slowly but not fatally. Frontier AI is different. It compresses action and reaction into windows too narrow for deliberation. The gap between what a system does and what we can understand widens with each generation of models. Without a cooperative architecture, accountability evaporates.

We should not romanticise past eras. Human restraint was often the buffer that prevented disaster: the hesitation before acting, the slow accumulation of experience, the moral friction that stopped a bad idea from becoming a global catastrophe. AI lowers that friction to near zero. A poorly specified agent can launch economic, political, or ecological effects without anyone needing to intend them. That alone is enough to show why control-first governance is untenable.

A cooperative approach does not solve everything, but it changes what is possible. When the environment makes commitments visible, when action cannot be executed without passing through shared meaning, when every pathway can be inspected, forked, or contested, the system becomes safer not by restriction but by design. Misaligned behaviour is filtered out by the structure itself. Aligned behaviour becomes easier, cheaper, and more predictable. Instead of tightening the rules around opaque agents, we make the world itself interpretable enough that the agents must adapt to it.

This is not utopian. It is the only strategy that matches the logic of the technology. We already know that control fails when meaning moves faster than rule-making. We already know that open feedback produces better coordination than command. The early Internet, the best open-source ecosystems, and the most resilient commons all follow this pattern. They work because they keep meaning, visibility, and accountability coupled—the triad that anchors the generative commons.

AI simply raises the stakes. It forces us to decide whether we build systems that learn with us, or systems that accelerate away from our capacity to interpret them. The standard model of governance, designed for slower worlds, cannot bridge that gap. A cooperative architecture can.

We often think of safety as a matter of stopping bad outcomes. But the real work is ensuring that good outcomes can still emerge. That requires an environment where actors—human and machine—gain from staying in view of one another. A world where coordination grows because the structure supports it, not because a regulator commands it. A world where misalignment is hard not because we punish it, but because the system itself leaves no room for it to take hold.

If we build environments that can renew their shared picture of the activity, keep action visible, and maintain commitments that genuinely matter, frontier technologies become governable in a way control could never achieve. The goal is not perfect safety. It is workable order: a system that stays coherent at the speed of the world it helps create.

