In Part I of this series, I examined how the dominant design logic of contemporary AI systems — competitive, winner-takes-all architectures — bears a troubling resemblance to market Darwinism. Part II does not simply critique. It proposes a counter-model grounded in a different set of scientific premises.

If the evidence from nature, from human evolution, and from multi-agent AI systems all converges on the same conclusion — that intelligence emerges through cooperation, not despite it — then AI architectures built primarily on competitive logic are not just ethically questionable. They are empirically poorly designed.

Kropotkin's Corrective: What Survival of the Fittest Actually Means

Peter Kropotkin's Mutual Aid: A Factor of Evolution (1902) documented that species facing harsh environments frequently survived not through individual competition but through organised mutual support. Kropotkin did not deny competitive struggle; what he challenged was the ideological over-emphasis on competition at the expense of cooperation.

More recently, Martin Nowak of Harvard, one of the leading mathematical biologists working on evolutionary dynamics, formalised the mechanisms by which cooperation evolves and persists under natural selection. His landmark 2006 Science paper, "Five Rules for the Evolution of Cooperation", identified kin selection, direct reciprocity, indirect reciprocity, network reciprocity, and group selection as the conditions under which cooperative behaviour becomes evolutionarily stable. The headline finding: cooperation is not a deviation from natural selection; it is a product of it.
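One of Nowak's mechanisms, direct reciprocity, can be illustrated with a minimal sketch. The payoff values below are standard textbook numbers for the prisoner's dilemma, not figures from the paper: once interactions repeat, a reciprocating strategy (tit-for-tat) sustains a level of mutual cooperation that one-shot competitive logic cannot.

```python
# Sketch of direct reciprocity in a repeated prisoner's dilemma.
# Standard payoffs: R (mutual cooperation), S (sucker), T (temptation), P (mutual defection).
R, S, T, P = 3, 0, 5, 1

def play(strat_a, strat_b, rounds):
    """Total payoffs for two strategies over repeated rounds.
    A strategy maps the opponent's previous move ('C'/'D'/None) to a move."""
    payoff = {('C', 'C'): (R, R), ('C', 'D'): (S, T),
              ('D', 'C'): (T, S), ('D', 'D'): (P, P)}
    last_a = last_b = None
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(last_b), strat_b(last_a)
        pa, pb = payoff[(move_a, move_b)]
        score_a += pa
        score_b += pb
        last_a, last_b = move_a, move_b
    return score_a, score_b

tit_for_tat = lambda prev: 'C' if prev in (None, 'C') else 'D'  # reciprocate
always_defect = lambda prev: 'D'                                # pure competition

rounds = 100
coop, _ = play(tit_for_tat, tit_for_tat, rounds)        # sustained cooperation
exploit, defect = play(tit_for_tat, always_defect, rounds)
mutual_d, _ = play(always_defect, always_defect, rounds)

# Reciprocators playing each other (300 each) far out-earn defectors playing
# each other (100 each); a defector gains only a one-round edge over a
# reciprocator. Cooperation becomes stable once interactions repeat.
print(coop, exploit, defect, mutual_d)  # -> 300 99 104 100
```

The point of the sketch is the comparison, not the exact numbers: under repetition, the cooperative pairing dominates the competitive one, which is the qualitative content of Nowak's direct-reciprocity rule.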

Tomasello and the Cooperative Origins of Human Intelligence

Michael Tomasello and colleagues at the Max Planck Institute make a precise and falsifiable claim: the critical difference between human cognition and that of other species is the capacity for shared intentionality — the ability to form joint intentions, attend jointly to shared objects, and pursue collaborative goals with common understanding. This is not simply about being social. Great apes are social. What distinguishes humans is the motivation and cognitive architecture to share psychological states with others in genuinely collaborative endeavours.

Our technological civilisation, including artificial intelligence itself, is a product of the cooperative cognitive architecture that evolved in Homo heidelbergensis at least 400,000 years ago. Humans did not evolve to be the cleverest individual in the room. They evolved to be extraordinarily good at thinking together. AI systems that ignore this are not modelling intelligence — they are modelling a partial and evolutionarily earlier form of it.

Ostrom and the Governance of Shared Intelligences

Elinor Ostrom's 1990 work, Governing the Commons, documented what rational choice theory declared impossible: that communities of actors could successfully govern shared resources over long periods without privatisation or central state control. She identified eight recurring design principles that enabled self-organised governance: clearly defined boundaries, rules matched to local conditions, collective-choice arrangements, monitoring, graduated sanctions, conflict-resolution mechanisms, recognition of the community's right to organise, and nested governance for larger systems.

Ostrom's framework suggests that cooperative AI collectives require governance architecture, not just technical architecture. A network of agents pursuing shared goals needs boundary conditions, representation, monitoring, and mechanisms for handling deviation — not simply reward optimisation.
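To make the contrast with pure reward optimisation concrete, here is a hypothetical sketch of governance architecture for a pool of agents: membership boundaries, a monitoring record, and graduated sanctions rather than instant exclusion. Every name here (GovernedPool, the sanction tiers) is an illustrative invention, not an existing framework.

```python
from dataclasses import dataclass, field

@dataclass
class GovernedPool:
    members: set = field(default_factory=set)       # clearly defined boundaries
    violations: dict = field(default_factory=dict)  # monitoring record

    def admit(self, agent_id: str) -> None:
        self.members.add(agent_id)

    def report(self, agent_id: str) -> str:
        """A detected deviation escalates the sanction one step at a time
        (graduated sanctions) instead of triggering immediate removal."""
        if agent_id not in self.members:
            return "not-a-member"
        count = self.violations.get(agent_id, 0) + 1
        self.violations[agent_id] = count
        tiers = ["warn", "throttle", "suspend", "expel"]
        level = tiers[min(count, len(tiers)) - 1]
        if level == "expel":
            self.members.discard(agent_id)  # boundary enforced only after escalation
        return level

pool = GovernedPool()
pool.admit("agent-a")
print(pool.report("agent-a"))  # first deviation -> warn
```

The design choice worth noticing is that none of this lives inside an agent's reward function: boundaries, monitoring, and sanctions are properties of the collective, which is precisely Ostrom's distinction between governance architecture and individual optimisation.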

Haraway: Making-With, Not Making-Against

Donna Haraway introduces the concept of sympoiesis — "making-with" — as an alternative to autopoiesis, the idea of self-making. Drawing from the biology of holobionts (organisms understood as collectives of symbiotic species), Haraway argues that life itself is not a phenomenon of separate competing entities but of entities constituted through their relations with others.

Current AI architectures are typically designed as autopoietic: self-contained systems that optimise internal objectives. A sympoietic alternative would conceptualise AI agents not as bounded optimisers but as entities whose intelligence is constituted through their relations — with other agents, with human users, with the social contexts in which they operate.
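One way to make the distinction concrete is a toy value function. The coupling term and all names below are illustrative assumptions, not an established formalism: an autopoietic agent values only its internal score, while a sympoietic one assigns value partly through how its partners fare.

```python
def autopoietic_value(own_score: float) -> float:
    """Self-contained: only the agent's internal objective counts."""
    return own_score

def sympoietic_value(own_score: float, partner_scores: list[float],
                     coupling: float = 0.5) -> float:
    """Relational: value is partly constituted by the collective's outcome.
    The coupling weight (0 = purely internal, 1 = purely relational) is an
    illustrative assumption."""
    if not partner_scores:
        return own_score
    avg_partner = sum(partner_scores) / len(partner_scores)
    return (1 - coupling) * own_score + coupling * avg_partner

# The same raw outcome is valued differently depending on relations:
print(autopoietic_value(10.0))             # -> 10.0
print(sympoietic_value(10.0, [2.0, 4.0]))  # -> 0.5 * 10 + 0.5 * 3 = 6.5
```

A sympoietic agent with this objective has reason to raise its partners' scores rather than merely its own, which is the behavioural difference the paragraph above gestures at.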

Dr Mark Plaice

Independent researcher in ADHD, digital media, learning, and AI-assisted practice. Building and documenting the process in public.