

CEOs and death

Dear friends,

Overnight in the US a person killed a private health care company’s CEO [1]. The suspicion, of course, is that this company denied a health care claim made by the person (or their family or friends). I commented on mind reader that this could well be the start of rolling out the guillotines to end billionaires. Let’s see how good our odds of an anti-capitalist revolution are looking through our theoretical lenses, before we start partying on dead CEOs’ graves. Hang about, though, because there is some cause for a party right out of the gate: healthcare companies in the US have been allowing claims at a much higher rate today, and they’ve removed information about their boards and directors and are obscuring details about their CEOs. Okay, so one of those is a good thing. But it is interesting how scared the capitalist class is today. This is a deeply theoretically interesting time – if a morally challenging one.

While, of course, one cannot advocate for violence, there are some interesting nuances to consider in both the reaction to these events and the fallout of showing “it’s possible” to bring an end to violence, suffering, and death – if only for a moment. To be extremely clear, I mean quite literally that the removal of a CEO brings a net positive in the world. Today, hundreds if not thousands of US citizens fortunate enough to have health cover are more likely to have their claims accepted. The direct causal effect – a CEO murdered over the perception that their company denied too many claims and therefore became a target – has led to mass positives. This tells us a lot about the nature of capitalism.

Normally, our “economy” – discussed ad nauseam, this is a fallacy that masks human suffering – channels all production towards capitalists (investors, shareholders, directors, CEOs, billionaires, and so on). But what if companies were operated for humanity instead? We see a brief glimpse of this as direct action forces the hand of corporate scumbags. Of course, sadly, this won’t last. If the US people rally enough that they kill a CEO a week, perhaps for a short time corporations will turn to serving the people – a move they can easily afford, and the morally correct thing to do, but one that inconveniences the Musk types. More likely, though, is that Trump’s oligarchy succeeds [2].

There are a few implications here for Gramscian theorisation, and amongst these are: the role of the police as class-treacherous enforcers of capital (reacting only when CEOs are killed, not when thousands are denied owed healthcare claims), the media’s complicity in ethically sanitising billionaires and other oligarchs, and the role of politics and hegemonic enforcement in ensuring a status quo that oppresses 99% of people. As always, the reactions of various institutions reveal much about how hegemony operates. The media’s immediate rush to condemn individual action while normalising the systemic violence of denied healthcare claims demonstrates the manufacturing of consent that Chomsky identified. Corporate media portrays the daily deaths from denied claims as unfortunate but natural “market outcomes”, while framing any resistance as illegitimate violence. This selective morality serves capital’s interests by making the violence of the system appear invisible while spotlighting any challenge to it.

But particularly interesting, to me, is the role of “enforcement”.

The role of class traitors becomes particularly visible in these moments. Police mobilise resources verging on the massive to protect corporate leadership while showing little interest in investigating deaths from denied claims. Middle managers in healthcare companies enforce policies they know harm people, having internalised capital’s logic that profits matter more than lives. The system’s gatekeepers – from HR departments to media commentators – work to maintain a status quo that ultimately harms them too, demonstrating how thoroughly hegemonic control shapes consciousness. Isn’t it weird? Don’t you find it extremely weird just how amoral and unethical society is?

We teach kids to care for each other, to show respect and compassion, and to work collaboratively. We talk about centring values we describe as human – “kindness,” “care,” “love,” “affection” and so on – as natural, desirable, and important characteristics… At least in young people. As we age, this completely reverses. Cutthroat middle managers – gaslighting and lying to employees – are celebrated, CEOs are lauded for their profiteering, and in Trump’s America, billionaires – the ones most responsible for the catastrophic environmental destruction which is sure to kill us all within a handful of years – are installed as dictators of government departments. The values held by Vice Chancellors, CEOs, directors, managers, and many, many more belligerent, meaningless, and ultimately inhuman creatures are the direct opposite of “kindness”, “respect”, or “decency”. And yet our system is geared for their protection – and enabled in such a way that even noticing the cruelty and inhumanity of the system to which all 8 billion of us have consented requires a violent act? Ughhhh.

I think particularly revealing here is how quickly companies changed their behaviour when faced with direct consequences. This exposes the lie that denied claims are unfortunate necessities rather than choices made to maximise profit. The instant shift toward approving more claims proves these companies could always afford to provide care – they simply chose not to while the costs of their violence remained externalised onto the working class. At every possible moment, these corporate giants seek only to extract the maximum profit from us. All of us – yes, you, dear reader, even your “wannabe millionaire friends” – are screwed over by billionaires and corporate giants. We created these machines of toxic destruction, and we empower their lackeys – the sycophantic narcissists that populate management in our institutions, corporations, and governments. Like a cancer they have grown and subsumed everything good, wholesome, healthy, and positive about the world – to the extent that our planet is dying.

The ruling class’s reaction also illuminates how democracy under capitalism is conditional. When electoral politics and permitted forms of protest fail to protect human life, and people feel driven to direct action, we see how quickly the system drops its democratic pretence [3]. The same voices who justify the violence of poverty, houselessness, and denied healthcare suddenly become deeply concerned with “law and order” when the 1% face consequences.

This moment forces us to grapple with uncomfortable questions about how change happens in a system designed to prevent it. While we cannot advocate violence, we must acknowledge how the system’s inherent violence – from denied healthcare to ecological collapse – creates conditions where people feel they have no other recourse. The fact that a single action produced more concrete positive change than decades of permitted resistance reveals the bankruptcy of working only within the system’s approved channels. And that is perhaps the most terrible part of all – in order to defeat this violent, disgusting system, the response that works seems to be more violence?

And yet, perhaps most importantly, this reveals the fiction of market inevitability. When faced with sufficient pressure, companies can choose to prioritise human wellbeing over maximum profit extraction. So how, then, do we build movements powerful enough to force this choice consistently, rather than temporarily? The answer, as always, lies in rebuilding class consciousness and solidarity while developing tactics that impose real costs on capital’s violence, without resorting to our own. Or at least that is my hope, because violence (physical and otherwise) does not bring good things – ever, not in the long run; it is incompatible with compassion, respect, and decency.

The path forward requires understanding these dynamics while working to create alternatives to both individual actions of desperation and the system that produces them. This means building dual power – developing democratic institutions to meet human needs while delegitimising the structures that prioritise profit over life.

I feel like today I needed the “or something” more than in the last post. This is a complex space to navigate, and it’s hard sometimes not to jump for joy when cracks in capital’s facade appear – even if they are brought by murder. I’m hopeful this is the start of some revolutionary activity that centres humanity, but I’m also fearful that we’re just seeing a further step up the curve towards extreme anti-human violence, and that this isn’t really anti-capitalist at all, but rather a convenient scapegoat for further global authoritarianism…

In solidarity,

Aidan


  1. https://nymag.com/intelligencer/article/unitedhealthcare-ceo-shooting-celebrations.html

  2. https://www.theguardian.com/us-news/2024/dec/06/trump-us-cabinet-billionaires

  3. https://www.propublica.org/article/missouri-abortion-amendment-republican-bill-proposals

The “power” of AI

Dear friends,

Today I have some off-the-cuff thoughts about global heat death – revisiting an early theme (actually, the earliest in this particular incarnation of dispatches).

Yesterday I felt happy about the expanding number of women-identifying YouTube creators – yes, I still watch YouTube, I know – who are interested in the intersection of technology and creativity; not because this is (or should be) rare, but because “in my day” the dominance of sexist men in that particular niche was incredibly overwhelming. But one of these creative sorts – you know how the algorithm goes, particularly with YouTube, a story for another post – popped up talking about creating a bespoke AI, fit for purpose if you like. Part of this video was a discussion of the role of creativity and AI (re: “AI stealing all the creative work”) and, further, the rising electricity demands of the AI industry. This got me thinking about things to really, truly test in my own environment.

Just recently I’ve been running a combination of tools on my Linux desktop machine – unfortunately “hamstrung” in AI land, at least, by an AMD CPU/GPU combo (Ryzen 9 7900X / RX 6750 XT / 32 GB DDR4) – to run local Large Language Models. I’m still a novice in this space, but I was more interested in the comparative time to response from even a modestly sized local model (i.e. a 70B model [1]) compared to commercial AI systems. I know this is a very unscientific test, but time to response on very short (“write me a poem about AI”) prompts is decent, probably around 1s. But the revelatory moment was the massive spin-up of fans and power draw from the wall (which I won’t pretend to have properly scientific figures for).
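For those curious just how crude this stopwatch test was, here’s a minimal sketch of the sort of thing I mean, assuming Ollama is serving its default local API on port 11434 – the model name and prompt are placeholders for whatever you have pulled locally:

```python
import time
import requests

# Assumes a local Ollama server on its default port; model name is illustrative.
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "llama3:70b"

def time_prompt(prompt: str) -> None:
    start = time.perf_counter()
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=600,
    )
    resp.raise_for_status()
    elapsed = time.perf_counter() - start
    data = resp.json()
    # Ollama reports eval_count (tokens generated) and eval_duration (nanoseconds),
    # which together give a rough tokens-per-second figure.
    tokens = data.get("eval_count", 0)
    eval_seconds = data.get("eval_duration", 0) / 1e9
    if eval_seconds:
        print(f"wall time {elapsed:.1f}s, {tokens} tokens, ~{tokens / eval_seconds:.1f} tok/s")
    else:
        print(f"wall time {elapsed:.1f}s")

time_prompt("write me a poem about AI")
```

None of this controls for caching, quantisation, or anything a proper benchmark would – it’s just a stopwatch on one box.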

Generating a 1500-word story – basically complete nonsense, because this particular model is nowhere near competitive even with the free tier of ChatGPT, for instance – made my 3 sqm office hot. Like I’d been playing Tiny Glade for three-plus hours hot. Again, anyone who knows about measuring energy efficiency and comparing apples to apples, and who has an interest in genuinely benchmarking technologies against one another, is flat-out scrunched into a ball of cringe right now, but the purpose of this very unscientific test stands. I wanted to get a feel for time, and energy, on a machine which I control, using a data set, model, and algorithm I control. And the results of this, ignoring everything I know about streamlining, caching, using more appropriate hardware, and so on, still make me incredibly “worried” about commercial AI solutions.
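If you want to turn that feeling into even a rough number, the back-of-envelope version looks something like this – every figure below is an illustrative guess, not a measurement from my setup:

```python
# Back-of-envelope energy estimate; every number here is an illustrative guess.
gpu_draw_watts = 200          # hypothetical sustained draw under load
generation_seconds = 300      # hypothetical time to produce a ~1500-word story
energy_wh = gpu_draw_watts * generation_seconds / 3600
print(f"~{energy_wh:.0f} Wh for one story")  # ~17 Wh

# Naively scaled to a million such generations per day:
daily_kwh = energy_wh * 1_000_000 / 1000
print(f"~{daily_kwh:,.0f} kWh per day")  # ~16,667 kWh/day
```

The point isn’t the specific numbers – it’s that the arithmetic gets alarming very quickly once you multiply a single desktop’s draw by commercial-scale usage.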

I’ve shared a litany of news stories on the extreme cost that commercial AI imposes on power networks – to the point where Microsoft is recommissioning a nuclear power reactor for the sole purpose of powering just some of its AI infrastructure. But until you feel the heat coming off a computer generating a three-line poem about itself, it doesn’t quite feel “real”. With the combined use of AI we are seriously looking at a global power consumption footprint larger than most nations’, as tech bros increasingly wet themselves with excitement – and the line-go-up capitalists get their jollies by suggesting automating workers’ jobs.

This accelerationism, lauded – and genuinely so – by capitalism and its vanguards (middle managers, for instance), is accelerating global heat death. Not to mention the continuing deep inequity in AI use: not only at an infrastructural level, where resources and materials are being diverted from nations to power bourgeois CEOs’ email writing, but also at the level of use. As the proletarian hype for AI dies down – something we are right in the middle of, with increasingly “bored” responses to the latest AI hype, particularly from coal-face workers who have seen the hallucinations completely derail BAU – the increasing bourgeification (making up words) of AI rolls on.

Instead of using LLMs as a tool for crafting social change, we’re seeing the working class turn away from these tools. And perhaps, given their inefficiencies and inequity, rightfully so – but that won’t stop capitalists replacing you with an LLM the minute they can get it just barely passable at your “standard”. Hand in hand with the deliberate mystification of the systems and tools that make, power, and generate AI, this abstraction of workers away from the means of production is a tale as old as time in our capitalist hell.

There are genuine solutions to these problems. Running local LLMs and seeing for yourself the limitations, power use, and possibility is a start. So are investing in green(er) power sources, getting involved in community projects to bring AI tools to communities, and seriously, in an activist mode, debating with capitalists about the use of AI to replace humans. My fear is not only accelerated heat death but accelerated worker displacement into increasingly deskilled roles while a mediocre, half-baked, environmentally destructive AI takes over the creative and intellectual work of the proletariat – rapidly increasing inequity in the first world, while AI continues to disadvantage expropriated and poorer countries right now.

I am excited about the possibilities and capabilities of LLMs as an augmentation tool. I benefit as much as anyone from the use of ML in analysing photo libraries, telling me what plants and birds are in photos, and so on. I’m certainly not a luddite. But I think that – in conjunction with a growing awareness of how much energy these tools use, the malice of capitalists in turning machinery of production against the workers, and the unequal and problematic distribution of global resources to keep a small minority comfortable – the context is “a lot” to process. Obviously disclaimers abound about no ethical consumption under capitalism, but I think that this kind of thinking about these problems needs to happen more, and I applaud those who are having this conversation with an audience [2].

So what do you reckon? Where are we headed with these technologies? Will we be further abstracted from knowledge of systems and tools than we are now? Will schools start teaching kids how to design their own AI? Or will we keep doing stupid shit like banning phones? I’m not hopeful that we’ll see a radical shift in the way technology is taught and used, because after all it is anti-capitalist to believe in access, knowledge, and understanding – and damn, that’s sad.

With trepidation,

Aidan


  1. https://ollama.com/library/llama3:70b

  2. https://www.youtube.com/watch?v=ytpA1wV7e3A