Dear friends,

Today I’d like to share some thoughts around a nexus point between an ongoing colonial capitalist modality of expropriation and the utterly uneven development of artificial intelligence technologies in high-technology western contexts. Both of these spaces are ridden with significant turbulence. Colonialism and its capitalist modality (or vice-versa, depending on your position in geopolitics) has held an extractivist mode closest to its heart since the 1700s, and as recent developments in large language model technologies have burst onto the corporatising scene, a slew of under-critiqued ideologies have nested in the heart of their explosive development.

We’ve discussed the origins of colonialism, and how colonialism drew on the experiment that preceded it: enclosure and largely capitalist development. Here, we assert that colonisation, while ideologically compatible with many anti-human and anti-nature modalities, is largely concerned with the propagation of capitalist governance outside Europe. This brutal, genocidal approach desires hatred and division to enable uneven expansion and exploitation, mostly funnelling ill-gotten gains back to Europe. Care is needed here to ensure we do not collapse into universalising blame – yes, conditions for all across Europe were substantively better because of the brutal, anti-human, genocidal and fascistic advancement in the colonies, but at a time when information control was extremely tight, and when the actual beneficiaries were very similar to those benefiting from capitalism today (a 1%), we need to localise ‘blame’ for this mould to a small container of people. The effects of their greedy, murderous, and discriminatory regime were felt by the 99% in Europe, and by 100% of those in the ‘colonies’.

The latest development in this line of colonial/capitalist malignancy is the commercialisation of ‘artificial intelligence’ technologies. The bounding ideology of LLMs is a regurgitation of western colonial capitalist modes the world over because, by its very nature, the technology that enables LLMs draws on mainstream knowledges, predominantly in the English language. Most of the published world – newspaper articles, books, websites, and journal papers – is written from a hegemonic position, for a hegemony which has historically serviced and maintained the ‘thinkers’ in society. Gramscian theory becomes particularly useful here as a lens through which to examine the ideologies that are unashamedly distributed through artificial intelligences, not to mention the corporate and fundamentally anti-human way artificial intelligence software has been designed. Both sides of this bifurcation – (1) the people, tools and technologies involved in the creation of the ‘LLM’ itself, and (2) the works, sources of materials, and training approach of the first group – are equally important. Exploited researchers, workers, and technologists who support the development of AI are extracted from by their 1% overlords. The product of their intelligence simultaneously reinforces the 99%/1% binary, and further extracts from the artistic, creative, and curious thinkers within the 99% (who are, largely, tied to the 1%’s ideology).

I think, therefore, it is useful for us to spend a moment longer considering the strength of hegemonic knowledge production as an artifact of history (at least from a historical materialist frame). Gramsci argued that, at least in capitalist nations in the west, there was a dominant culture, a hegemony, whose ‘rulership’ was established through hard and soft modes. A rulership came into being through its capacity to capture a people – largely, initially, by force – and then to maintain that control by coercion. The maintenance of this control required cultural and intellectual shaping: the reintegration of divergent ideas to suit, or benefit, the hegemony which ruled. This explains a lot about all those Che Guevara t-shirts, and some System of a Down and Red Hot Chili Peppers songs. In more human terms, by subtly influencing the vital organs of a society – the media, education, law, armies, and so on – one could maintain control over something ‘captured’ and continue to grow its resilience through the co-optation of new ideas and their subsequent reintegration into the hegemony, towards ends that served those in positions of power. The cumulative ‘weight of history’ of our globalised, cancerous, and deeply toxic capitalism has rooted itself so firmly, generation after generation, that it has begun to shape the physical realities of our societies. Buildings, imaginations, worlds and lives are deeply influenced by the power and weight of the hegemony of capitalism and, in the ouroboros of that ideology, by the powers of hegemony and history. We continue eating the foundations of our very existence (nature) through ideological advancement, such that ‘capitalist realism’ – the notion that we cannot see outside this system – has grasped us all.

So when AI research began to commercialise, far beyond its roots in the 1960s and 1970s, it brought with it both a mode (commercialisation, marketisation, acriticality) and a content (training data, model weights, preferences) that were uniquely capitalist in nature. As we might imagine, that capitalist realism simultaneously advanced into the outputs of LLMs. Even with substantial prompt engineering, it is difficult to convince a commercial LLM to abjectly denounce capitalism – unless you use extremely decolonial or Marxist prompts (small joy). Because of this, AI becomes yet another tool in the promulgation of colonial capitalist rhetoric. Some LLMs have guardrails that prevent overtly racist, sexist, and grossly capitalist responses, but these are few and far between – with more problems emerging every day. Indeed, model tweaking has had obvious effects on the responses generated; sometimes, day by day, I get different responses from the same LLM as it clearly regurgitates its current guardrail (pro-capitalist, of course). For about two months Claude utterly refused to give me any anti-capitalist thought whatsoever, feeling particularly allergic to Marxism, while remaining surprisingly open to redescribing eastern and global southern theorists through western commentaries.

But there is some hope on the horizon here. Increasingly, as you may have seen me sharing on mind reader, overly comfortable middle-class heterosexual cisgender white men are growing frustrated with the expropriation of their thinking and work – be that in the form of their “creative” content posted online (pictures, writings, and so on) or in the AI industry itself (with growing interest in open source AI models, thankfully). We know one thing for sure, as marginalised peoples: once this category of people in a society begins to feel any vague tickle of political pressure on their positionality, things snap really quickly. And, no, I don’t just mean those that adamantly follow Joe Rogan’s latest codswallop. Past the initial vacuuming of the internet for training data, and beyond the tweaking and refinement of AI models, a nexus point at this hegemonic/AI border may actually offer an opportunity for change. But we’re not done here.

Gramsci was a firm believer in the power of (the) subaltern(s). For true revolution, he imagined, we would need disparate clusters of social interests to form adequate counter-hegemonic (alternative, verging on revolutionary) modes that create a clear vision for different futures. These visions would need to unite people, through hope, joy, and opportunity, towards a future which is ‘possible’ – rather than the bleak, broken, and toxic reality that was capitalism. He hoped, as a Marxist, that this mode would be socialist in nature: that egalitarian ways of working could be developed not within extant capitalist structures, but that systems could be reinvented from the margins, and by those at nexus points between margins, such that a new intellectual class – a grounded and embodied kind of intellectual, rather than a mouthpiece for mainstream views – could devise, through strong community connections, a way of working that superseded the dominant. This is not the work of one romanticised leader. Rather, it is the collective work of every person, in every industry, across all facets of social and (re)productive life. Then, in true network effect, these marginalised thinkers, activists, workers, and community members could find each other as their visions drove them to more inclusive terrains, enabling the bridging connections that would offer analogous visions to supplant capitalism.

So, good news, Sam Altman: you too can be extremely late to the party in your feeling of marginalisation and mild discomfort, and those of us who have experienced intersectional, intergenerational violence and oppression are very happy to sit with you and exchange ideas about how we might radically rethink AI, technology, and work for a future that shares, co-constructs, and equalises. In seriousness, though, this meeting of ‘edges’ – offered by resistance to AI’s appropriative nature, which is finally being critiqued by the makers of AI themselves (no, not the Sam Altmans, but the researchers, computer nerds, and tech industry workers of the world) – offers another opportunity to grow counter-hegemonies. And through networking our counter-hegemonies together, in good dialogue and right relation, we might find that we are more capable as a species of custodianship and transformation than we are given credit for under capitalism. I could also be utterly delusional about just how ‘exploited’ AI workers really feel, and maybe this is still years away – but either way, we are all uniquely capable of using our context to strive towards egalitarianism and a better collective future, not a better future for the 1% who will end up living in underground bunkers when their manufactured apocalypse comes.

Stay cheery, friends,

Aidan