

‘Lean in hard’ is anti-scientific, anti-worker, and bad for our future

Dear friends,

This morning’s headlines included an article titled: “Australia has ‘no alternative’ but to embrace AI and seek to be a world leader in the field, industry and science minister says” [1]. The chief scientist has never been an appointment of visionaries; indeed, the office’s entire raison d’être has been capitalist reproduction, but nationally we’re being guided to skate to where the puck was. Who needs leadership when you have both the chief scientist and the LLM regurgitating an algorithmic reconfiguration of hegemonic narratives? Is there a thread here?

If we were to embrace, nationally, an agenda which promoted (re)training to better understand and develop applications of artificial ‘intelligence’, we might offer some students an opportunity to become very wealthy. But, by and large, to expect any kind of systematic or publicly beneficial economic growth as a result of Australians taking to the AI “science” space would be lunacy. Though this is far from new – the chief scientist has always been a mouthpiece for the hegemony: eurocentric, bourgeois, and anti-ecological. We know the dangers of unfettered, unregulated, and ‘wild’ AI [2]. And we know that neoliberal market policies love those very things. So, here, we have a dangerous combination of factors: promotion of engagement with AI, a neoliberal anti-human “market”, and a bourgeois mouthpiece suggesting further engagement with ever more anti-humanist praxes. Let’s backtrack for a second.

Artificial intelligence technologies are not inherently evil, bad, or problematic. However, as we’ve discussed here on mind reader, their application and their use are currently extremely dangerous. From water usage that accelerates us even faster toward ecological collapse, to regurgitation of appropriated knowledges and hegemonic narratives, through the undermining of human artistic talent, the current AI technology set does far more harm than good. More recent exposés have shown a decline in worker productivity (if you believe such things) and growing concerns over impaired cognitive functions. Naturally, all of these things could be countered by an ecological-forward ontology: valuing the role of nature, environment, animals and people in ecological harmony. But those are not the values of the political economy under which we live.

You might see an opportunity here to subvert the hegemonic narrative – in which case, well done. As an educator, I know these kinds of narratives about what ‘should’ be done have dominated the field in western thought for as long as there has been ‘education’ (in a hegemony’s civil society mode). As a supporter of building counter-hegemonies, I might suggest that we use this new narrative to teach young people and students about how LLMs and other AI technologies work – including coverage of the ecological dangers inherent in their current formulation. We might also use this as an opportunity to challenge hegemonic forms, relating with students over sources of training data and asking them to (re)imagine these toward more equitable outputs. But that’s not how this will be done broadly. Indeed, there is unlikely to be any serious education in the AI space done in public education, due to the economic landscape which created the current raft of popular technologies.

AI scientists on the ‘bleeding edge’ have, for many decades now, been employed privately, even secretively. Once they ‘make a name for themselves’ (are wealthy, male and somewhere in the ballpark of knowing one or two things) they are paid lucratively, and their outputs are nondisclosure’d and locked up by corporate giants. While arXiv papers [3] from corpos pepper the scene of AI and Data Science, these frequently paint only a partial picture, describing abstract techniques or ‘proving’ what we already knew about these ‘sciences’ [4]. Of course, there are some who are involved with the development of AI technologies who have left the corporate scene, with an even smaller handful of these committing to public discourse about AI technologies and participation in teaching through higher education institutions (even if such an audience remains an extremely narrow slice of ‘publics’). However, the vast majority of AI technology remains stunningly locked up [5].

Regardless of the landscape of current AI technologies, there are growing calls to rethink how AI is currently working [6], and ever more papers about the environmental, social, and human cost of AI. As corporations increasingly dominate narratives about embracing AI futurism, the public (worker) excitement about these technologies dies. This is not directly subaltern cultural repression; rather, it is a hegemonic subsumption of technology and a more public embrace of the despotic “leadership” ever on display under capitalism. Any initial excitement from workers about the possibilities of AI technologies in their personal or work lives has surely been replaced by disdain, disinterest or complacency. As more hegemons gesture towards futures of replacing workers with algorithms, worker disinterest or resistance grows. This is an indicator of the trajectory of our culture more broadly, not about the specific technologies used in AI.

What might a future which re-centres ecology and humanism actually look like? If we were to continue honing the underlying technologies such that environmental destruction was not requisite to technological growth, we might have a start. Unfortunately, while some AI and Data Scientists work towards this kind of thinking, the majority of the corporate world has jumped on technologies which consume gigalitres of water and draw hundreds of kilowatts of (unclean) power around the clock. The race to embrace AI as a core part of the modern workplace has meant that, rather than spending time on perfecting the underlying technology (i.e., the approach to AI, not the models), we have seen exponential growth in hardware (and, therefore, water, power, etc.) requirements. The call from the chief scientist is one to supply corporations and despotic leadership with ever more resource-intensive models – not to innovate for the future. Moreover, while some ‘change management’ professionals with an AI focus may be employed, increasingly we are seeing workers literally phased out in favour of quantitatively worse AI deployments.

While it is not uncommon to see neoliberal corporate subservience in STEM areas [7], blatant calls to engage further with skating to where the puck was will only set back our learners and young people – not to mention gut knowledge workers. If we had a vision for the future that demanded ecological and human justice, we might find an application of novel forms of AI technologies which are fundamentally different from the current destructive forms. We lack, in this country, a unifying vision for the future. Instead, we’re seeing corporate bootlicking and hegemonic capitulation across every sector.

I’m not particularly interested in detouring through all the evidence of the despotic shift in our governments, governance, corporate arena, and anti-everybody sentiment. But look no further than Labor’s continuing approval of new coal and gas projects whose impact reports suggest massive contributions toward >3º warming; the stark Trump fascism in California; or Greta’s deportation after attempting to bring aid to those being genocidally murdered by Israel in Gaza. I’ve seen a saying bandied about: “if you ever wondered what you would have done during Hitler’s genocide, think about your actions during globally rising fascism and genocides today”. It’s a bleak picture. LLMs have been trained on an ontological corpus which normalises this anti-human and anti-environment sentiment, and all they can do is regurgitate the same narratives they have been fed. There is no creativity under current AI technologies, only rabid fanatical hallucinations.

In solidarity,

Aidan


  1. https://www.theguardian.com/australia-news/2025/jun/12/australia-ai-no-alternative-industry-and-science-minister-tim-ayres ↩︎

  2. https://mndrdr.org/2025/at-the-nexus-of-knowledge-appropriation-and-ai; https://mndrdr.org/2024/on-forestalled-innovation; https://mndrdr.org/2024/the-power-of-ai ↩︎

  3. https://arxiv.org/ ↩︎

  4. cf. https://arxiv.org/pdf/2407.21075v1 ↩︎

  5. https://doi.org/10.32855/fcapital.2024.007 ↩︎

  6. https://www.unep.org/news-and-stories/story/ai-has-environmental-problem-heres-what-world-can-do-about ↩︎

  7. especially because of the widespread belief in “science” as a religious substitute; and adherence to empiricist positivism which disregards human ethics and values for the fallacy of “objective truth”. ↩︎

At the nexus of knowledge appropriation and AI

Dear friends,

Today I’d like to share some thoughts around a nexus point between an ongoing colonial capitalist modality of expropriation and the utterly uneven development of artificial intelligence technologies in high-technology western contexts. Both of these spaces are riddled with significant turbulence: colonialism and its capitalist modality (or vice versa, depending on your position in geopolitics) have held an extractivist mode closest to their heart since the 1700s, and as recent developments in large language model technologies have burst onto the corporatising scene, a slew of under-critiqued ideologies has nested at the heart of their explosive development.

We’ve discussed the origins of colonialism, and how colonialism drew on the experiment that preceded it: enclosure and largely capitalist development. Here, we assert that colonisation, while ideologically compatible with many anti-human and anti-nature modalities, is largely concerned with the propagation of capitalist governance outside Europe. This brutal, genocidal approach desires hatred and division to enable uneven expansion and exploitation, mostly funnelling ill-gotten gains back to Europe. Care, here, is needed to ensure we do not collapse into universalising blame – yes, conditions for all across Europe were substantively better because of the brutal, anti-human, genocidal and fascistic advancement in the colonies, but at a time when information control was extremely tight, and when the actual beneficiaries were very similar to those benefiting from capitalism today (a 1%), we need to localise ‘blame’ for this mould to a small container of people. The effects of their greedy, murderous, and discriminatory regime were felt by the 99% in Europe, and 100% in the ‘colonies’.

The latest, in the line of colonial/capitalist malignancy, is the development of commercial ‘artificial intelligence’ technologies. The bounding ideology of LLMs is a regurgitation of western colonial capitalist modes the world over because, by its very nature, the technology that enables LLMs draws on mainstream knowledges, predominantly in the English language. Most of the published world, especially in the form of newspaper articles, books, websites, and journal papers, is written from a hegemonic position, for a hegemony which historically serviced and maintained the ‘thinkers’ in society. Gramscian theory, here, becomes particularly useful as a lens through which to examine the ideologies that are unashamedly distributed through artificial intelligences, not to mention the corporate and fundamentally anti-human way artificial intelligence software has been designed. Both halves of this bifurcation – (1) the people, tools and technologies involved in the creation of the ‘LLM’ itself, and (2) the works, sources of materials, and training approach of the first group – are equally important. Exploited researchers, workers, and technologists who support the development of AI are extracted from by their 1% overlords. The product of their intelligence simultaneously reinforces the 99%/1% binary, and further extracts from the artistic, creative, and curious thinkers within the 99% (who are, largely, tied to the 1%’s ideology).

I think, therefore, it is useful for us to spend a moment longer considering the strength of hegemonic knowledge production as an artifact of history (at least from a historical materialist frame). Gramsci advanced that, at least in capitalist nations in the west, there was a dominant culture, a hegemony, whose ‘rulership’ was established through hard and soft modes. A rulership came into being through its capacity to capture a people, largely initially by force, and then to maintain that control by coercion. The maintenance of this control required cultural and intellectual shaping – reintegration of divergent ideas to suit, or benefit, the hegemony which ruled. This explains a lot about all those Che Guevara t-shirts, and some System of a Down and Red Hot Chili Peppers songs. In a more human explanation: by subtly influencing the vital organs of a society – the media, education, law, armies, and so on – one could maintain control over something ‘captured’ and continue to grow its resilience through the co-optation of new ideas and their subsequent reintegration with the hegemony, towards the ends that served those in positions of power. The cumulative ‘weight of history’ of our globalised, cancerous, and deeply toxic capitalism has so firmly rooted itself generationally that it has begun to shape the physical realities of our societies. Buildings, imaginations, worlds and lives are deeply influenced by the power and weight of the hegemony of capitalism and, in the ouroboros of that ideology, by the powers of hegemony and history. We continue eating the foundations of our very existence (nature) through ideological advancement, such that ‘capitalist realism’ – the notion that we cannot see outside this system – has grasped us all.

So when AI research began to commercialise, far beyond its roots in the 1960s and 1970s, it brought with it both a mode (commercialisation, marketisation, acriticality) and a content (training data, model weights, preferences) that were uniquely capitalist in nature. As part of this, as we might imagine, that capitalist realism simultaneously advanced into the outputs of LLMs. Even with substantial prompt engineering, it is difficult to convince a commercial LLM to abjectly denounce capitalism – unless you use extremely decolonial or Marxist prompts (small joy). Because of this, AI becomes yet another tool in the promulgation of colonial capitalist rhetoric. Some LLMs have guardrails that prevent overtly racist, sexist, and grossly capitalist responses, but these are few and far between – with more problems emerging every day. Indeed, the model tweaking has had obvious effects on responses generated; sometimes, day by day, I get different responses from the same LLM, each clearly regurgitating its current guardrails (pro-capitalist, of course). For about two months Claude utterly refused to give me any anti-capitalist thought whatsoever, feeling particularly allergic to Marxism, while remaining surprisingly open to redescribing eastern and global southern theorists through western commentaries.

But there is some hope on the horizon here. Increasingly, as you may have seen me sharing on mind reader, overly comfortable middle class heterosexual cisgender white men are growing frustrated with the expropriation of their thinking and work. Be that in the form of their “creative” content posted online (pictures, writings, and so on) or in the AI industry itself (with growing interest in open source AI models, thankfully). We know one thing for sure, as marginalised peoples: once this category of people in a society begin to feel any vague tickle of political pressure on their positionality, things snap really quickly. And, no, I don’t just mean those that adamantly follow Joe Rogan’s latest codswallop. Past the initial vacuuming of the internet for training data, and beyond the tweaking and refinement of AI models, a nexus point at this hegemonic/AI border may actually offer an opportunity for change. But we’re not done here.

Gramsci was a firm believer in the power of (the) subaltern(s). For true revolution, he imagined, we would need disparate clusters of social interests to form adequate counter-hegemonic (alternative, verging on revolutionary) modes that create a clear vision for different futures. These visions would need to unite people, through hope, joy, and opportunity, towards a future which is ‘possible’ – rather than the bleak, broken, and toxic reality that was capitalism. He hoped, as a Marxist, that this mode would be socialist in nature: that egalitarian ways of working could be developed not within extant capitalist structures, but that systems could be reinvented from the margins, and by those at nexus points between margins, such that a new intellectual class – a grounded and embodied kind of intellectual, rather than a mouthpiece for mainstream views – could devise, through strong community connections, a way of working that superseded the dominant. This work is not the work of one romanticised leader. Rather, it is the collective work of every person, in every industry, across all facets of social and (re)productive life. Then, in true network effect, these marginalised thinkers, activists, workers, and community members could find each other as their visions drove them to more inclusive terrains, bridging connections between analogous visions that would supplant capitalism.

So, good news, Sam Altman: you too can be extremely late to the party in your feeling of marginalisation and mild discomfort, and those of us who have experienced intersectional, intergenerational violence and oppression are very happy to sit with you and exchange ideas about how we might radically rethink AI, technology, and work for a future that shares, co-constructs, and equalises. In seriousness, though, this meeting of ‘edges’ – offered by resistance to AI’s appropriative nature, which is finally being critiqued by the makers of AI themselves (no, not the Sam Altmans, but the researchers, computer nerds, and tech industry workers of the world) – offers another opportunity to grow counter-hegemonies. And through networking our counter-hegemonies together, in good dialogue and right relation, we might find that we are more capable as a species of custodianship and transformation than we are given credit for under capitalism. I could also be utterly delusional about just how ‘exploited’ AI workers really feel, and maybe this is still years away – but either way, we are all uniquely capable of using our context to strive towards egalitarianism and a better collective future, not a better future for the 1% who will end up living in underground bunkers when their manufactured apocalypse comes.

Stay cheery, friends,

Aidan

The “power” of AI

Dear friends,

Today I have some off-the-cuff thoughts about global heat death – revisiting an early theme (actually, the earliest in this particular incarnation of these dispatches).

Yesterday I felt happy about the expanding number of women-identifying YouTube creators – yes, I still watch YouTube, I know – who are interested in the intersection of technology and creativity; not because this is (or should be) rare, but because “in my day” the dominance of sexist men in that particular niche was incredibly overwhelming. But one of these creative sorts – you know how the algorithm goes, particularly with YouTube, a story for another post – popped up talking about creating a bespoke AI, fit for purpose if you like. Part of this video discussed the role of creativity and AI (re: “AI stealing all the creative work”) and, further, the rising electricity demands of the AI industry. This got me thinking of things to really, truly test in my own environment.

Just recently I’ve been running a combination of tools on my Linux desktop machine – unfortunately “hamstrung” in AI land, at least, by an AMD CPU/GPU combo (Ryzen 9 7900X / RX 6750 XT / 32 GB DDR5) – to run local Large Language Models. I’m still a novice in this space, but I was most interested in the comparative time to response from even a modest-sized local model (i.e., 70b [1]) compared to commercial AI systems. I know this is a very unscientific test, but time to response on very short (“write me a poem about AI”) prompts is decent, probably around 1s. The revelatory moment, though, was the massive spin-up of fans and power draw from the wall (which I won’t pretend to have properly scientific figures for).
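
If you want to poke at this yourself, here’s a minimal sketch of the timing side of my test – assuming ollama is serving its default REST API on localhost and the model has already been pulled (ollama pull llama3:70b). Nothing rigorous; it just measures wall-clock time for a single prompt:

```python
# Unscientific wall-clock timing of a local model served by ollama.
# Assumes `ollama serve` is running and the model is already pulled.
import time

import requests

OLLAMA_URL = "http://localhost:11434/api/generate"


def time_prompt(model: str, prompt: str) -> float:
    """Send one non-streaming generation request; return elapsed seconds."""
    start = time.perf_counter()
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=600,  # a 70b model on modest hardware can take a while
    )
    resp.raise_for_status()
    elapsed = time.perf_counter() - start
    print(resp.json()["response"][:120], "…")
    return elapsed


if __name__ == "__main__":
    seconds = time_prompt("llama3:70b", "write me a poem about AI")
    print(f"wall-clock time: {seconds:.1f}s")
```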

Generating a 1500-word story – basically complete nonsense, because this particular model is nowhere near competitive even with the free tier of ChatGPT, for instance – made my 3sqm office hot; like I’d been playing Tiny Glade for three-plus hours hot. Again, anyone who knows about measuring energy efficiency, comparing apples to apples, and has an interest in genuinely benchmarking technologies against one another is flat out scrunched into a ball of cringe right now, but the purpose of this very unscientific test stands. I wanted to get a feel for time, and energy, on a machine which I control, using a data set, model, and algorithm I control. And the results of this, ignoring everything I know about streamlining, caching, using more appropriate hardware, and so on, still make me incredibly “worried” about commercial AI solutions.
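
And on the energy side: if you want a number slightly less vibes-based than “my office got hot”, Linux exposes a cumulative CPU-package energy counter through the powercap (RAPL) sysfs interface – recent AMD Zen chips included. A rough sketch, with the assumptions loudly flagged (you’ll likely need root to read the counter, and the GPU – most of the draw here – is not counted at all):

```python
# Rough CPU-package energy measurement around a generation call, via the
# Linux powercap (RAPL) interface. Assumptions: a kernel exposing
# /sys/class/powercap/intel-rapl:0/energy_uj (recent AMD Zen chips included),
# root privileges to read it, and CPU package only – GPU draw is NOT counted.
import time
from pathlib import Path

ENERGY_FILE = Path("/sys/class/powercap/intel-rapl:0/energy_uj")


def read_energy_uj() -> int:
    """Cumulative package energy in microjoules (the counter wraps eventually)."""
    return int(ENERGY_FILE.read_text())


start_uj = read_energy_uj()
t0 = time.perf_counter()

time_prompt("llama3:70b", "write me a 1500 word story")  # from the sketch above

elapsed = time.perf_counter() - t0
joules = (read_energy_uj() - start_uj) / 1e6  # ignoring wrap for short runs
print(f"{joules:.0f} J over {elapsed:.0f}s ≈ {joules / elapsed:.0f} W average (CPU only)")
```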

I’ve shared a litany of news stories on the extreme cost commercial AI imposes on power networks – to the point where Microsoft is recommissioning a nuclear power reactor for the sole purpose of powering just some of its AI infrastructure. But until you feel the heat coming off a computer generating a three-line poem about itself, it doesn’t quite feel “real”. With the combined use of AI, we are seriously looking at a global power consumption footprint larger than most nations’ – as tech bros increasingly wet themselves with excitement, and the line-go-up capitalists get their jollies by suggesting automating workers’ jobs.

This accelerationism, lauded – and genuinely so – by capitalism and its vanguards (middle managers, for instance), is accelerating global heat death. Not to mention the continuing deep inequity in AI use: not only at an infrastructural level, where resources and materials are being diverted from nations to power bourgeois CEOs’ email writing, but also at the use-interface. As the proletarian hype for AI dies down – something we are right in the middle of, with increasingly “bored” responses to the latest AI hype, particularly from coal-face workers who have seen the hallucinations completely derail business as usual – the bourgeification (making up words) of AI rolls on.

Instead of using LLMs as a tool for crafting social change, we’re seeing the working class turn away from these tools. And perhaps, given their inefficiencies and inequity, rightfully so – but that won’t stop capitalists replacing you with an LLM the minute they can get it performing just barely passably at your “standard”. Hand in hand with the deliberate mystification of the systems and tools that make, power, and generate AI, this abstraction of workers away from the means of production is a tale as old as time in our capitalist hell.

There are genuine solutions to these problems. Running local LLMs and seeing for yourself the limitations, power use, and possibility is a start. Investing in green(er) power sources, getting involved in community projects to bring AI tools to communities, and seriously, in an activist mode, debating with capitalists about the use of AI to replace humans are all starts. My fear is not only accelerated heat death but accelerated worker replacement into increasingly deskilled roles, while a mediocre, half-baked, environmentally destructive AI takes over the creative and intellectual work of the proletariat – rapidly increasing inequity in the first world, while AI continues to disadvantage expropriated and poorer countries right now.

I am excited about the possibilities and capabilities of LLMs as an augmentation tool. I benefit as much as anyone from the use of ML in analysing photo libraries, telling me what plants and birds are in photos, and so on. I’m certainly not a luddite. But I think that – in conjunction with a growing awareness of how much energy these tools use, the malice of capitalists in turning machinery of production against the workers, and the unequal and problematic distribution of global resources to keep a small minority comfortable – the context is “a lot” to process. Obviously disclaimers abound about no ethical consumption under capitalism, but I think that this kind of thinking about these problems needs to happen more, and I applaud those who are having this conversation with an audience [2].

So what do you reckon? Where are we headed with these technologies? Will we be further abstracted from knowledge of systems and tools than we are now? Will schools start teaching kids how to design their own AI? Or will we keep doing stupid shit like banning phones? I’m not hopeful that we’ll see a radical shift in the way technology is taught and used, because after all it is anti-capitalist to believe in access, knowledge, and understanding – and damn that’s sad.

With trepidation,

Aidan


  1. https://ollama.com/library/llama3:70b ↩︎

  2. https://www.youtube.com/watch?v=ytpA1wV7e3A ↩︎

AI accessibility, or global heat death?

Dear friends,

Yesterday I was listening to one technology podcast amidst a litany. They were asserting, like the cacophony, that Apple needed to use WWDC to skate to where the puck is, rather than their preferred position of “leading the industry” – not entirely sure where the misguided idea that Apple leads the industry came from in the first place – and that now ‘features’ such as Recall would need to be shoehorned into macOS and iOS (and derivatives).

There are several blogs out there about what a potentially terrible and privacy-invasive idea Recall is. While there are obvious implications in the privacy space, I think there is something missing in these analyses, which are inherently pro-big-tech and comfortable with offloading what used to be “personal” computing.

Recall is an interesting and, if deployed properly, potentially powerful tool for the way our memory works. Particularly as a neurodivergent human, the ability to ask an LLM for help quite literally ‘recalling’ things I’ve seen or done on my computer would likely come in handy. Drawing a long bow, you could compare this to PayPal launching an ad network today on the back of years of purchase history which was previously thought to be private.

Remember that this is Microsoft we’re talking about here – any misguided notion of “in the public interest” is for show and part of their well-known embrace, extend, extinguish model. How long before, on corporate Windows machines, we start to see Recall used to analyse employee behaviour? We’ve already seen habit tracking, status tracking, document engagement, and other nasty stats used in corporate dashboards to “understand” office workers’ productivity.

What is missing, then, genius? Well, if we follow the reasoning of various Apple podcasters, we see a need for a privacy-focussed, AI-first model which still “skates to the puck” while simultaneously preserving (their own) notions of privacy.

So, cool: Apple in its public propaganda has a laser focus on consumer devices and “privacy”. As a result, any product therein may have a slightly more respectful approach to the integration of machine learning models and tools (at WWDC). Though you can bet that, even if it is local, there will be pingbacks to Apple servers. This is the piece that many “privacy” enthusiasts miss – with corporate technology your platform is owned by FAANG, not you... So what? Ignore AI, don’t upgrade? What is the play here?

While I was trying not to be a total spoilsport on the AI ballgame, I thought about the recent releases, by Apple no less, of additional accessibility features. When you start to think about the potential of LLMs – just LLMs – to change “access” to technology, there are already myriad possibilities. But then think about computer vision, TTS, STT, and other technologies advancing in leaps and bounds.

Imagine being the company (or better yet, the FOSS community) that jumps to accessibility features first, buzzwords second. And then I saw a Hacker News article on using “AI Powered Headphones” [1] to single out a voice in a crowded room, and my 11-year-old Auditory Processing Disorder self just about passed out.

I know there are a few logical leaps here; that’s just how my brain works. However, we do know that when you focus on accessibility, everyone benefits. We also know what an inaccessible mess most computing platforms are. Just think about how long it has taken to get properly described images on websites. Oh no, wait, we’re still getting there. If a “big tech” company put accessibility at the forefront – even if that wasn’t what it said on the marketing tin – couldn’t we see some incredible advancement for everybody?

What would I know? I do, however, acknowledge there is power in them there woods, sorry, “AI models” – and by power, I mean the same stuff that caused a 52 ºC day in Pakistan just the other day. So let’s seriously consider how we use these “models”, because it is straight up heat death we’re careening towards. Maybe accessibility is a better future driver than “it sees all your computing use and records it”. In related news, and yes I am aware that I am a meme of myself, I have moved my daily computing to a Linux machine in a vain attempt to continue to control my own machinery. We are slip-sliding ever more rapidly into an AI-controlled, platform-grabbing, global heat death, and that is not a future I wish to subscribe to.

Hope you have a great day,

Aidan.


  1. https://www.washington.edu/news/2024/05/23/ai-headphones-noise-cancelling-target-speech-hearing/ ↩︎