Deflating “Hype” Won’t Save Us
By Hagen Blix & Ingeborg Glimmer
We find ourselves in a moment where big tech has allied with the far right, where AI slop mass-produces fascist aesthetics, where visas are getting revoked because AI tools took issue with someone’s speech, and where an AI job crisis is unfolding. Clearly, we need critical perspectives that help us organize our thinking and our activities. Currently, AI-critical perspectives are often centered on identifying and deflating hype (or on calling AI snake oil, declaring AI doomerism advertising, suggesting that it’s all a bubble, a con, a scam, a hoax, etc.). Unfortunately, we worry that calling things hype (even calling hype hype) falls woefully short, and is dangerously inadequate for understanding what is actually going on. So we’d like to offer a critique of primarily deflationary AI critique.
Before we get into the weeds, some important caveats. First, we’re not saying that there isn’t plenty of hype. Marketing departments and even journalists spread constant misinformation. There is ubiquitous talk of current-day language models thinking, holding attitudes, having feelings and desires, plotting and scheming, etc., as if they could do any of that. Clearly, pushing back against false claims is and remains important. Alas, we also think that pushing back against hype is often largely orthogonal to the most pressing AI-related issues. Second, this is a critique of a particular frame: a particular way of naming and conceptualizing the issue. In our experience, most people who use this frame are actually working hard to overcome many of its pitfalls. They don’t actually believe that getting techbros to stop lying will solve our problems; they are well aware of the connections between the far right and big tech; they worry about political, social, or environmental impacts; and so on. In fact, we admire the work of many people who use this frame. Nonetheless, we think that calling things hype is increasingly getting in the way. We think the hype frame zooms in on the wrong spot, and conflates technical with social/political/economic issues in a way that risks obscuring things rather than clarifying them.
With those caveats out of the way, let’s start with a very quick definition of AI hype: it’s a promise that some AI tool can (or will soon be able to) do some X, when it really can’t (or won’t). It’s a lie of exaggeration. In that sense, calling something hype is, at its heart, a technical claim (about feasibility or factuality). But as Ali Alkhatib has laid out so clearly in Defining AI, AI is not (and arguably never has been) merely a technical tool that we could evaluate solely on technical grounds. It’s always been an unfolding series of intertwined technical and political projects: Technologies, after all, have affordances and uses, and are developed in conjunction with evolving needs and goals.
Once we see AI as a series of political projects, it becomes clear that hype is not the real problem – the projects are. If we waved our magic wand and all the hype was gone, hardly any problem would be solved. So, let’s lay out a few issues that AI interfaces with in our current society, and explore why the hype frame is an obstacle to clarifying what we’re facing.
First – AI, Bullshit, and Fascist Propaganda
At this point, it seems rather indisputable that AI can automate the production of bullshit – that is, text and images that do not commit their producers to any kind of truth. Where these are intended to act as political speech, we can see the resonance between AI and far right politics, where texts and images are intended to manipulate the emotional valence of a particular topic and amplify its affective resonances. The ultimate aim is to foreclose any possibility of good faith deliberation (a mode of politics despised by right wing populism and fascism). AI-generated political bullshit is everywhere. Take just images: From the ghiblification of ICE deportations to cult-of-personality glorifications of Trump and Trumpist policies (Trump with a red lightsaber, Trump as a golden statue in Gaza, Trump as the next pope, deepfakes of Trump with Black voters, etc.), AI plays a major role in the propaganda side of what far right political strategist Steve Bannon calls “flooding the zone”.
AI is used to flood the zone with more than propaganda, too. There’s DOGE using AI for everything from intimidating federal workers to rewriting federal housing regulations. Tariff rates were supposedly calculated by language models too. And so on and so on. Clearly, AI is crucial to the daily execution of Bannon’s strategy, aimed at relentlessly stoking scandal and outrage to the point where nobody can keep track of the attacks on democracy, science, immigrants, Black people, women, queer people, or workers anymore. The aim: Make people lose their sense of what is even going on, of what is real altogether, so that they become politically paralyzed and feel unable to act.
Is any of this an issue of hype, of some discrepancy between promised capabilities and real capabilities? Or is the issue that the very real affordances of very real AI potentiate fascist political aims and methods? Is the issue not that AI models operate akin to the online troll, who just asks questions and just says (bullshit) things for the lulz? AI tools have no intentions. They cannot care for the lulz. But they are, by design, incapable of not bullshitting. They cannot recognize that “their reality” (bytes, letters, pixels, etc.) is distinct from reality. To put it in a Wittgensteinian manner: Truth is not a property of the world, nor simply a property of statements. Truth pertains to a relation between the two: a statement can only be true about something. But models lack that duality, that distinction. There is no difference, here, between measurement and the measured, between a representation of the world and the world, between the map and the land that it describes, between the text and the con-text – there is only ever the measurement, the representation-no-longer-representing, the map, the pixel, the text. And hence the concept of “truth” (even as a desideratum) disappears. AI is not merely a machine for the production of bullshit; it is inherently unsuited to ever distinguishing bullshit from non-bullshit. Fascism, meanwhile, is committed to a play of power and aesthetics that regards a desire for truthfulness as an admission of weakness. It loves a bullshit generator, because it cannot conceive of a debate as anything but a fight for power, a means to win an audience and a following – never a social process aimed at deliberation, emancipation, or progress towards truth.
Fascists do, of course, try to exploit the very prerequisites for discourse (a willingness to assume good faith, to treat equality if not as a condition then at least as a laudable aim of social progress, etc.). Take, for example, the free speech debates as an attempt to blindside the enemy (that is, us). Fascists are continually proclaiming their defense of and love for free speech. They are also arresting people for speech, banning books, and attacking drag story hours. To take this as an inconsistency, or as an intellectual mistake, is to misunderstand the very project – fascists are not in it for consistency, nor for making a rational, reasonable world with rules that free and equal people give themselves. The apparent contradiction is, instead, part of the same family of strategies as flooding the zone: use whatever tool is necessary to accrue power, strengthen hierarchies, and entrench privilege.
Is this political attack on the possibility of rational (let alone critical and emancipatory) discourse enabled by hype? Would things be better if the models were improved, or the advertisement toned down? Is it not rather the case that the models’ inability to distinguish words from the world, coupled with their finely tuned sycophancy (always aimed at winning their own audiences, at sounding convincingly certain, etc.), makes them so useful to fascist aims? If so, is calling AI hype not just like thinking the fascists are simply stupid, or making a mistake by being inconsistent? But the fascists are confused neither about the nature of free speech, nor about the nature of the “speech” that an LLM produces. They know what they want, they know what they’re doing, and they know where it leads.
Second – AI as a Tool for Political Terror
Let’s consider a related but distinct use of AI: as a political tool for surveillance, control, and even terror. Right now, the US government is using AI to scan social media posts and gather “intelligence” at scale. The State Department’s “Catch and Revoke” initiative, for example, uses AI to cancel the visas of non-citizens whose speech is insufficiently aligned with the government’s foreign policy goals. Is the problem here hype?
At the risk of sounding unnecessarily repetitive: Calling AI “hype” highlights a gap between a sales pitch and the model’s real performance. When a hyped model is put to real world use, it will make “mistakes” (or make them at an unreasonable frequency, etc.). What about the State Department’s AI tools? Their student surveillance AI will certainly make “mistakes” – it will flag people who aren’t holders of student visas, or it will flag people for unrelated speech. Hype? Perhaps. It’s entirely possible that someone at the State Department fell for it, and really believes that these models are better at surveillance than they actually are. We don’t really know. And we don’t think it matters – because whether or not hype is involved seems rather radically beside the point.
Sophia Goodfriend calls this whole thing an AI dragnet, and observes quite acutely: “Where AI falters technically, it delivers ideologically”. Indubitably, people are getting falsely classified as having engaged in unsanctioned speech. But those misclassifications are mistakes only in a narrow technical sense. After all, we can be quite certain about the real political aims of the State Department. They’re invested in a) an increase in deportations, and b) the suppression of particular kinds of speech – and neither of those depends on amazing accuracy (hitting the “right” people). They depend mostly on scale (hitting a lot of people). For the first goal: more false positives means more cancelled visas, means more deportations. Check. As for the second aim, the suppression of speech depends on a sufficient reach to make everyone feel like “it could be me next (unless I censor myself/make myself quiet and small/self-deport/etc.)”. And here, too, the tool certainly delivers. In Why We Fear AI, we argue that it is in fact precisely the error-prone and brittle nature of these systems that makes them work so well for political repression. It’s the unpredictability of who gets caught in the dragnet, and the unfathomability of the AI black box’s reasons why, that make such tools so effective at producing anxiety, and so useful for suppressing speech.
Once we ask what kind of project is being undertaken, then, the (technical) mistakes no longer look like (political) mistakes at all. This is most obvious when the government shows an active interest in not rectifying alleged “mistakes”. We can see this in the case of Mahmoud Khalil, whom they sought to deport over a supposedly canceled student visa. When the ICE agents out to arrest him learned that he was actually a green card holder, a permanent resident, they did not care about that mistake, and it did not deter them. They simply changed their narrative, decided on no particular legal grounds that his other status would soon be revoked as well, and arrested him anyway. We can also see it in the case of Kilmar Abrego Garcia, whom they deported illegally, and whom they refuse to bring back to the country. Clearly, the desire not to rectify supposed mistakes demonstrates that the “mistake” was part of the agenda all along.
We can make the same point with respect to, say, the police’s use of facial recognition technologies. Certainly, the harm done by AI algorithms misidentifying people is very real. Let’s acknowledge first that even if the algorithms were perfect and never misidentified anyone (if there weren’t any hype, because things actually worked as advertised), they would still aid a racist system that is frequently aimed at violent dehumanization, at supplying humans as billables and forced labor for the prison-industrial complex. But today, the algorithms certainly do misidentify people; there certainly is a claim that the models can do X (identify faces accurately) when they really can’t. Does calling this hype help us understand the problem?
We certainly know that the “mistakes”, the misidentifications, aren’t randomly distributed. In fact, we’ve known at least since Joy Buolamwini and Timnit Gebru’s 2018 Gender Shades paper that facial recognition algorithms are, among other things, more likely to misidentify Black people than white people. When the AI errors are so clearly distributed unequally, and the errors are a source of harm – of false arrests and possible police violence – then it is obviously unhelpful to simply call them “errors”, or to theorize this through a lens of “hype”. What this system produces isn’t errors, it’s terror. And this terror has a history, with clear throughlines of racist pseudoscience and racist political structures: from phrenology to facial recognition, and from slavery to Jim Crow, to today’s New Jim Crow (Michelle Alexander) and New Jim Code (Ruha Benjamin). Once we draw those throughlines, the semi-randomness, the “errors”, the “false” arrests appear not as accidents, but as part of AI as a political project.
These not-quite-random errors live precisely in the gap between the sales pitch and reality. The gap provides plausible deniability: “It’s the algorithm that fucked up”, they will surely tell us, and therefore no person is really responsible for racist false arrests. But the very fact that the misidentifications are predictable at the level of populations (we know which groups will be misidentified more often) and unpredictable at the level of individuals (nobody knows in advance who in particular will be misidentified) also enhances their usefulness to the particular political project of producing political terror: Again, it’s about producing the widespread feeling that “it could happen to any of us, it could happen to me”. This is as true of AI as it is of other, older tools of political terror and police intimidation. Yet, for good reason, nobody ever suggested that the sales pitches for “non-lethal” weapons like rubber bullets were hype. A rubber bullet will sometimes blind people, or even kill them, and that, too, is a semi-randomly distributed “error”, a gap between the sales pitch and reality. Rubber bullets, like the surveillance AI and the facial recognition system, function as tools of political control precisely because that gap does things; it is functional, not incidental, to the system. In the words of cybernetician Stafford Beer, there is “no point in claiming that the purpose of a system is to do what it constantly fails to do”. And hence to focus primarily on what the system can’t actually do (as the hype frame does) risks distracting from what it is actually doing.
Third – AI as a Tool for Crushing Wages
Last, but certainly not least, let’s talk economics. From Microsoft to universities, from OpenAI to the government, we are told all kinds of lies of exaggeration by all kinds of powerful institutions. Take whatever task you like: as long as it has something to do with intellectual activity (and/or is at least somewhat reasonably paid), you will find someone claiming that the task is about to be automated, that it will soon be done by an AI “employee”, or at the very least that AI-assisted labor will soon outcompete anyone not using AI.
Now, critics like Emily Bender and Alex Hanna have, quite correctly, pointed out that such claims (and their incessant repetition by uncritical media) are essentially forms of advertising. But we think it’s crucial to foreground who exactly the sales pitch is directed at: investors and corporate customers, for whom replacing workers with tools might be good for the bottom line. To workers, these words don’t sound like advertising, but like a threat. They look less like glossy pages full of exaggerated promises, and more like the slides from a mandatory workplace meeting on the supposed “dangers” of collective bargaining: If you don’t do this (refrain from unionizing/start using AI), then the competition will crush us, and you’ll lose your job. “Shut up and lower your expectations, for you are replaceable!” is the constant refrain of union busters and AI peddlers alike.
It’s the -able suffix in “replace-able” that is crucial here. For the threat to work, it’s the possibility of replacement that counts. This is what the laws of supply and demand imply when it comes to the supply of people (with particular skills): an increased supply of a possible substitute (AI) will lower the price of the original commodity (workers’ wages).
The union busting pitch gives the real economic purpose of AI away: it’s a tool to deskill jobs and depress wages. And for that goal, whether the tool can actually replace the work done by people at the same level of quality is often largely irrelevant. After all, replaceability is not about simple equivalence, but more often than not about price-quality tradeoffs. People like to buy whatever they think is a good bang for the buck. Companies do too, often substituting cheaper inputs (skills, stuff, etc.) to drive down their costs, even if it reduces quality. That’s the obvious horizon for AI – not full automation, but the model of fast fashion or IKEA: Offer somewhat lower quality at significantly lower prices and capture the market from the bottom up.
Again, note that this is perfectly compatible with there being hype – with the models remaining at a merely so-so level of quality, with sales pitches exaggerating the quality of model outputs and behaviors. Lots of IKEA furniture doesn’t survive more than one move (if that), and plenty of fast fashion items dissolve after a few months. That didn’t stop their spread – and there’s no reason to believe it will be any different with sub-par AI.
From coders to paralegals, from tutors to therapists, investments to produce cheap AI-assisted versions are well underway, and they will likely depress wages. The particular strategies will surely vary (perhaps an actual licensed therapist will have to oversee 10 different chat windows simultaneously, or perhaps the spread of “vibe coding” will mean that junior programmers are increasingly people with a few weeks’ training in a prompting academy, who can thus be paid as unskilled labor). As happened in fashion and furniture, the middle market – mid-level both in terms of the quality of the goods/services produced, and in terms of the jobs available for making them – will increasingly be crowded out.
Particular instances of hype are perhaps a problem for particular investors (without a doubt, plenty of AI companies will fail and go bankrupt along the way). But the real economic problem isn’t hype. The attack on workers, on the quality of jobs, and on the quality of the things we make and consume is the problem – and that problem exists quite regardless of the hype. Unless you’re a venture capitalist, you aren’t the target of the AI advertisement – you’re the target of the threat. We have no use for terms that warn investors that they might be making bad investments; we need terms that are useful for fighting back.
Admittedly, we personally don’t know what terms, frames, and phrases will be most useful either. But we do know that hype and similar deflationary frames aren’t it. They don’t capture anything about the political and economic AI projects we just described: they do not clarify what’s happening when AI bullshit is flooding the zone, when errors are used to spread political terror, or why even a shit-quality LLM could still be used to depress wages. Ultimately, the hype frame is too stuck in a narrowly technical perspective, too focused on identifying lies rather than identifying political projects. We should not make the mistake of thinking that just because a statement is a lie, it can be disregarded. Contrary to what the hype frame may suggest, once you figure out that a lie is a lie, the work is only just beginning. Let’s take the lies and the ads seriously – as something that can be telling even if it isn’t true.
So let’s focus on the political projects, and look beyond deflating their blown-up lies and egos. Let’s call AI what it is: a weapon in the hands of the powerful. Take the third project – let’s call it class war through enshittification, automated union busting, a machine for deskilling, or Techno-Taylorism. Let’s take some inspiration from the Luddites, who called the big tech innovation of their day, the steam engine, “a demon god of factory and loom”, or “a tyrant power and a curse to those who work in conjunction with it.” Let’s make up better words, better phrases, and better frames that clarify the political stakes. Let’s de-shittify the world!