Dear friends,
This morning’s headlines included an article titled “Australia has ‘no alternative’ but to embrace AI and seek to be a world leader in the field, industry and science minister says” [1]. The chief scientist has never been an appointment of visionaries; indeed, the office’s entire raison d’être has been capitalist reproduction, but nationally we’re being guided to skate to where the puck was. Who needs leadership when you have both the chief scientist and the LLM regurgitating an algorithmic reconfiguration of hegemonic narratives? Is there a thread here?
If we were to embrace, nationally, an agenda which promoted (re)training to better understand and develop applications of artificial ‘intelligence’, we might offer some students an opportunity to become very wealthy. But, by and large, to expect any kind of systematic or publicly beneficial economic growth as a result of Australians taking to the AI “science” space would be lunacy. Though this is far from new: the chief scientist has always been a mouthpiece for the hegemony – eurocentric, bourgeois, and anti-ecological. We know the dangers of unfettered, unregulated, and ‘wild’ AI [2]. And we know that neoliberal market policies love those very things. So, here, we have a dangerous combination of factors: promotion of engagement with AI, a neoliberal anti-human “market”, and a bourgeois mouthpiece suggesting further engagement with ever more anti-humanist praxes. Let’s backtrack for a second.
Artificial intelligence technologies are not inherently evil, bad, or problematic. However, as we’ve discussed here on mind reader, their application and their use are currently extremely dangerous. From water usage that accelerates us even faster toward ecological collapse, to the regurgitation of appropriated knowledges and hegemonic narratives, to the undermining of human artistic talent, the current AI technology set does far more harm than good. More recent exposés have shown a decline in worker productivity (if you believe such things) and growing concerns over impaired cognitive functions. Naturally, all of these things could be countered by an ecology-forward ontology: valuing the role of nature, environment, animals and people in ecological harmony. But those are not the values of the political economy under which we live.
You might see an opportunity here to subvert the hegemonic narrative – in which case, well done. As an educator, I have watched these kinds of narratives about what ‘should’ be done dominate the field in western thought for as long as there has been ‘education’ (in a hegemony’s civil society mode). As a supporter of building counter-hegemonies, I might suggest that we use this new narrative to teach young people and students about how LLMs and other AI technologies work – including coverage of the ecological dangers inherent in their current formulation. We might also use this as an opportunity to challenge hegemonic forms, relating with students over sources of training data and asking them to (re)imagine these toward more equitable outputs. But that’s not how this will be done broadly. Indeed, there is unlikely to be any serious education about AI in the public system, given the economic landscape which created the current raft of popular technologies.
AI scientists on the ‘bleeding edge’ have, for many decades now, been employed privately, even secretively. Once they ‘make a name for themselves’ (are wealthy, male and somewhere in the ballpark of knowing one or two things) they are paid lucratively, and their outputs are nondisclosure’d and locked up by corporate giants. While arXiv papers [3] from corpos pepper the AI and Data Science scene, these frequently offer only a partial picture, describing abstract techniques or ‘proving’ what we already knew about these ‘sciences’ [4]. Of course, there are some involved with the development of AI technologies who have left the corporate scene, with an even smaller handful of these committing to public discourse about AI technologies and participation in teaching through higher education institutions (even if such an audience remains an extremely narrow slice of ‘publics’). However, the vast majority of AI technology remains stunningly locked up [5].
Regardless of the landscape of current AI technologies, there are growing calls to rethink how AI is currently working [6], and ever more papers about the environmental, social, and human cost of AI. As corporations increasingly dominate narratives about embracing AI futurism, the public (worker) excitement about these technologies dies. This is not subaltern cultural repression directly, but rather a hegemonic subsumption of technology and a more public embrace of the despotic “leadership” ever on display under capitalism. Any initial excitement from workers about the possibilities of AI technologies in their personal or work lives has surely been replaced by disdain, disinterest or complacency. As more hegemons gesture towards futures of replacing workers with algorithms, worker disinterest or resistance grows. This is an indicator of the trajectory of our culture more broadly, not about the specific technologies used in AI.
What might a future which re-centres ecology and humanism actually look like? If we were to continue honing the underlying technologies such that environmental destruction was not requisite to technological growth, we might have a start. Unfortunately, while some AI and Data Scientists work towards this kind of thinking, the majority of the corporate world has jumped on technologies which consume gigalitres of water and vast quantities of (unclean) energy daily. The race to embrace AI as a core part of the modern workplace has meant that rather than spending time on perfecting the underlying technology (i.e., the approach to AI, not the models) we have seen exponential growth in hardware (and, therefore, water, power, etc.) requirements. The call from the chief scientist is one to supply corporations and despotic leadership with ever more resource-intensive models – not to innovate for the future. Moreover, while some ‘change management’ professionals with an AI focus may be employed, increasingly we are seeing workers literally phased out in favour of measurably worse AI deployment.
While it is not uncommon to see neoliberal corporate subservience in STEM areas [7], blatant calls to keep skating to where the puck was will only set back our learners and young people – not to mention gut knowledge workers. If we had a vision for the future that demanded ecological and human justice, we might find an application of novel forms of AI technologies which are fundamentally different from the current destructive forms. We lack, in this country, a unifying vision for the future. Instead, we’re seeing corporate bootlicking and hegemonic capitulation across every sector.
I’m not particularly interested in detouring through all the evidence of the despotic shift in our governments, governance, corporate arena, and anti-everybody sentiment. But look no further than: Labor’s continuing approval of new coal and gas projects whose impact reports suggest massive contributions toward >3 °C of warming; the stark Trump fascism in California; or Greta Thunberg’s deportation after attempting to bring aid to those being genocidally murdered by Israel in Gaza. I’ve seen a saying bandied about: “if you ever wondered what you would have done during Hitler’s genocide, think about your actions during globally rising fascism and genocides today”. It’s a bleak picture. LLMs have been trained on an ontological corpus which normalises this anti-human and anti-environment sentiment, and all they can do is regurgitate the same narratives they have been fed. There is no creativity under current AI technologies, only rabid fanatical hallucinations.
In solidarity,
Aidan
[2] https://mndrdr.org/2025/at-the-nexus-of-knowledge-appropriation-and-ai; https://mndrdr.org/2024/on-forestalled-innovation; https://mndrdr.org/2024/the-power-of-ai
[4] cf. https://arxiv.org/pdf/2407.21075v1
[5] https://doi.org/10.32855/fcapital.2024.007
[7] especially because of the widespread belief in “science” as a religious substitute; and adherence to empiricist positivism which disregards human ethics and values for the fallacy of “objective truth”.