When Datagrid’s 78,000 square metre data centre in Makarewa, north of Invercargill, is built, it will use 280 MW of power, making it New Zealand’s second-largest user of electricity after the Tiwai Point aluminium smelter near Bluff. To what end? Local mayors and business representatives are talking up the jobs the project will create, but details about the end uses of all that AI, whether by business or government, remain vague. AI’s boosters – OpenAI most famously, along with the start-ups and venture capitalists trying to push the technology into the public sector, education, health, and other big areas with lucrative and regular contracts – have good reasons to hype their own product. It’s inevitable, they say; it will change the world for the better. OpenAI’s “About Us” page announces that their mission is to “ensure that artificial general intelligence – AI systems that are generally smarter than humans – benefits all of humanity”. Scroll past this rather airy goal, however, and a more familiar company pitch appears. What are they selling?
Datagrid in Southland is part of a worldwide rush to build data centres to keep up with the insatiable energy needs of AI technologies. “The massive growth rate in data centre power demand reflects more than a surge in the number of data centers in the pipeline”, BloombergNEF reported in December last year; “it also highlights the new centers’ size. Of the nearly 150 new data center projects BNEF added to its tracker in the last year, nearly a quarter exceed 500 megawatts. That’s more than double last year’s share.” The water needed to cool these facilities is another huge strain. A recent report from the United Kingdom’s Government Digital Sustainability Alliance predicts that AI will increase global water usage from 1.1 billion cubic metres to 6.6 billion cubic metres by 2027 – a sixfold increase – Al Jazeera reported in January, documenting myriad health and environmental problems connected with the rush to divert drinking water to these facilities around the world.
Accompanying these boggling numbers are equally astonishing figures in the global economy. Many mainstream economists are now talking of an AI “bubble”, and it’s not hard to see why. Rana Foroohar, writing in the Financial Times, compares AI with fossil fuels: “while the oil and gas industry spent $570bn on exploration and production last year, five American tech majors are set to make $700bn in capital expenditure around AI by the end of this year. If the trends continue, these companies will represent half of the ten largest corporate borrowers in the bond market.”
All of this investment is based on capitalists’ calculation that, eventually, it will return a profit. But, outside of the building and construction industries putting up the plants, how certain is this? AI boosters talk a big game, often with science-fiction overtones: “artificial general intelligence” is just around the corner; AI will lead to massive gains in productivity; AI will transform the workforce. The race around AI is a classic example of the rationality of capitalists’ individual decisions combining into the wider irrationality of their system. If the wild promises of AI are true, then those who invest most heavily and earliest are likely to reap the biggest rewards, and so it makes sense, individually, to throw as much as you can at the next big technological thing. If the promises aren’t true, however, or if the gains are more uneven, then all these billions of dollars may be setting us up for a bubble that bursts. And, as always, it will be workers and the environment who pay the human and social price afterwards.
Artificial General Intelligence?
AI is a term used to cover all sorts of technologies. This vague, all-purpose use by its promoters serves ideological ends, mixing together the undeniably positive (advances in medical research) with the dubious and tacky (ChatGPT) and the morally abhorrent (Elon Musk’s misogynistic Grok). The technical questions of the technology’s development can be very hard to understand, and this, too, is used to make people feel as if their only option is to accept the supposedly inevitable.
But, even as non-technical or outsider thinkers, we have good reasons to be sceptical of the AI industry’s core claims. What, for example, would “artificial general intelligence” be? OpenAI’s Sam Altman writes as if it were just a question of scale: build a big enough factory (amass enough “compute”) and eventually, scaling up, AI will be able to cure cancer, tutor every child on earth, and more. But what is intelligence? Philosophers, psychologists – everyone, really, who thinks with, as, and about human beings – still debate what the term means. Human creativity, labour, intentionality, and memory work in ways more complicated (and still less understood) than the scaling model driving OpenAI allows. That sets limits on the kinds of “intelligence” that can be generated without human intervention.
AI as Class War
Altman is closer to the money when he notes that “it does seem like the balance of power between capital and labor could easily get messed up” by the drive to AI. This is a feature, not an oversight. Widespread automation and AI use (or the threat of both) can be used to keep workers disciplined, make skilled workers fearful for their jobs, and drive up exploitation and insecurity. It is, in the language of an earlier era and the factory floor, a speed-up on a global scale.
The question, for many capitalists, is not whether an AI product can be as good as a human-made one but, rather, whether the technology can be used to force a “good enough” process that deskills their workers and drives down wages and conditions. The ugly aesthetics of AI slop and its political economy combine here. It’s no surprise that coding, indexing, translation, graphic design, and editing jobs have been threatened first: getting workers, and consumers, used to a sloppy, inferior product is a way of de-professionalising and degrading groups of workers who hold skills and bargaining power in advanced capitalism. Palantir’s CEO Alex Karp is explicit about the political part of this class war, telling MSNBC that “this technology disrupts humanities-trained—largely Democratic—voters, and makes their economic power less […] And to make this work, we have to come to an agreement of what it is we’re going to do with the technology; how are we gonna explain to people who are likely gonna have less good, and less interesting jobs.”
It’s an indictment of capitalism as a system that a technology praised by its own promoters as labour-saving should generate, for most of us, fears of joblessness and obsolescence. That is the nature of a system organised for private profit rather than social need, and this is the logic of capital: a technology that allows bosses to drive workers harder and further will be used for that end, and not for the nebulous “humanity” of OpenAI’s promotional material. A Harvard Business Review study this year “discovered that AI tools didn’t reduce work, they consistently intensified it. In an eight-month study of how generative AI changed work habits at a US-based technology company with about 200 employees, we found that employees worked at a faster pace, took on a broader scope of tasks, and extended work into more hours of the day, often without being asked to do so.” Marx wrote about this very phenomenon in Capital in 1867:
Therefore, since machinery in itself shortens the hours of labour, but when employed by capital it lengthens them; since in itself it lightens labour, but when employed by capital it heightens its intensity; since in itself it is a victory of man over the forces of nature but in the hands of capital it makes man the slave of these forces; since in itself it increases the wealth of the producers, but in the hands of capital it makes them paupers…
Marx’s words should remind us, too, that this is not really a question of whether a technology on its own is “good” or “bad”. Technological development occurs whether we approve of it or not, but it does not occur outside of politics and economics. The environmental damage I mentioned at the start of this article, and the speed-ups and deskilling threatening workers described at its end, are signs of how production is ordered under capitalism, and the anarchic drive to develop AI for further profit, rather than out of consideration of human need, means that no one, really, is in control of where we go from here. To challenge the “enshittification” of our work, our digital lives, and our environment, we need to be building resistance to capitalism itself.
Empires
Karen Hao, in her book Empire of AI (2025), shows how OpenAI has been built like an empire: colonial, extractive, profit-maximising, and externalising its damage onto poorer communities and the Global South. But empires fall. The entire tech supply chain depends on energy and chemical imports from the Middle East, and East Asian capitalism is heavily reliant on goods passing through the Strait of Hormuz. If the Trump administration is pushing the AI industry on the one hand, its geopolitical ambitions and its war with Iran may scupper it on the other. This shows the vulnerability, and instability, of the current AI boom, and it warns us that ordinary people – their lives, livelihoods, and environments – will suffer if the bubble bursts, and as AI becomes ever more enmeshed in war and imperialism abroad. None of this is stable, and none of it is good for “humanity”.
We deserve better than a future of slop, growing digital alienation, and shoddy deskilling. The cost to our planet of producing all of this makes the system’s abolition, and our liberation, all the more urgent. The AI bubble is the product of a very new technology and raises new questions, but it still deserves these old answers.