
An AI-generated image of a Neo-Luddite rejecting the AI-based Neo-Narrative
In prior posts, I’ve pointed out that those who develop and manage narratives, the Narrativators, are highly motivated, presumably very skilled, and likely hard at work on the neo-narrative in preparation for (among other things) the 2026 midterms. This crew will doubtless have the very latest AI multi-media tools at their disposal, which may well enable them to develop something far more powerful than any propaganda ever launched on a population. The stakes for the continuation of neoliberalism are existentially high, the funding sources infinitely deep, and the lessons of 2024 painfully vivid — how could they not succeed this time?
Well, they might not. The chance of that outcome may not be high — who knows? — but it strikes me that the possible origins of such a failure are worth brief consideration, if only as a marker.
The biggest and simplest factor could be that my assumptions about the Narrativators are wrong, specifically regarding their skills and ability to learn from mistakes. Narrative propaganda has, I think, been a rapidly innovating field since around 2008, and even ‘experienced’ Narrativators are likely experimenting a lot of the time. ‘Old’ skills might not be especially relevant; amazing new tools like AI will unleash feverish dreams of innovative approaches that are ‘sure to work’. Likewise, the fact of past failure might be obvious, but the lessons for going forward aren’t so clear, especially because political mistakes aren’t the same thing as mistakes that threaten the neoliberal status quo. The Democrats’ mistakes last year were political by definition, and they resulted in the loss of the party’s institutional power. Those specific mistakes, currently being much analyzed and discussed, are presumably what will be avoided in future. In contrast, neoliberalism lost its previously effective camouflage in 2024. Rebuilding political power and reestablishing neoliberal camouflage, which so neatly (and fortuitously?) overlapped in 2008, might now involve very different and possibly inconsistent narratives — what do the Narrativators do then?
In short, the neo-narrative could fail in an old-fashioned way: the competence brought to bear was less, and/or the challenge greater, than was thought. More interesting and right up to date, however, are the ways that the involvement of AI itself in the development of the neo-narrative might contribute, not to success, but to the failure of the approach. Off the bat, I can think of three ways — doubtless there are others:
Sudden and widespread revulsion at AI super-slop: From its ancient beginnings, propaganda ‘narrative’ has always in a way been an intentional form of ‘slop’ — curated hallucinations spinning a plausible, but not actually true, story. With AI tools, the hallucinations can be more realistic, more personalized, more pervasive — just more of everything. The Narrativators will be sorely tempted to push the dial to ‘11’, whereupon the resulting ‘super-slop’ may cause overload, annoyance and ultimately revulsion. People will by then have enough multi-media AI in their lives anyway and won’t be easily amused or distracted by yet more of it coming through their screens. For personal economic and social matters, they’ll turn to their own uncamouflaged lived experience, which will be increasingly disconnected from AI virtual worlds. They’ll make political decisions accordingly, uninfluenced and mostly uninfluenceable by the neo-narrative. [1]
Neo-Luddite reaction and resentment: Ah, the Luddites — that ever-handy meme of early 19th century ignorant deplorables smashing the very machines that were to give their descendants (that’s us!) so much prosperity. The meme’s cautionary tale will of course be instantly applied to any modern-day folks who express fear and loathing about AI’s job-killing potential. Presumably, the ones who don’t wish to be considered ignorant or deplorable will then back off — right? Maybe not — this time might be different, for two reasons. First, even in the early stages of the Industrial Revolution it was clear that growing mechanization would require more, not less, labor. Why? Because up until recently machines were powerful but dumb, and averagely smart people were needed to run them. Now machines can be both powerful and at least averagely smart (usually far more); it isn’t clear at all what will happen, and very well-informed pessimism can be justified. [2]
Second, and more importantly for those folks who would otherwise fear the Luddite label, AI is coming for white collar jobs, big time. They may on one level be persuaded that their personal job loss is just part of society’s transition to a glorious future, but on a visceral level, there’ll be old-fashioned, where’s-my-hammer-type anger and resentment. Perhaps that resentment will frequently be secretive and partially suppressed, but it’ll likely be a lurking influence on privately made decisions.
Neo-Luddism will undermine any neo-narrative that includes the ‘wonderfully positive transformative’ impact that AI will have on society and our collective material well-being. (Yes, looking at you, ‘Abundance’ proponents.) Less directly, to the extent that the use of AI multi-media is detected in the selling of any neo-narrative (whether pitching AI futurism or not), lurking neo-Luddite anger will help trigger or at least add to the revulsion described in the first point — there’ll be a lot of ‘touchy’ people out there.
AI tools used to expose the truth: It is conceivable that the use of AI tools becomes sufficiently widespread that any propaganda, much less a full narrative, that depends on a poorly informed population becomes ineffective. Yes, I know — it’s unrealistic to think that there’ll be a sudden thirst for truth and knowledge, especially given the state of US education. But AI might enable more individuals to package and distribute more ‘truth-adjacent’ or ideological material than before, and that might be influential on the margin. Perhaps the role of independent media (enabled by new social media tools) in the 2024 incineration of the old woke narrative is indicative — more like that, but with AI tools?
This possible path of AI usage development has specific relevance to federal infrastructure finance reform, as discussed in prior posts. It could even be a central factor in deep reform of such a technical area, so I’ll be coming back to that topic often in future posts.
__________________________________________________________________________________________
Notes
[1] In historical terms, I think this type of rejection and revulsion has precedents for arriving almost overnight — from ‘divine right’ to the guillotine in a few years? Or the pre-Reformation ‘indulgence narrative’ (complete with advertising jingles!) triggering a reaction that soon led to widespread (and actively destructive) iconoclasm. As a more specific example: for neoliberals who feel that they’ll be covered by a narrative, one way or another, learning about the fate of pre-Revolutionary French tax farmers would be…edifying. Just saying.
[2] One big population did lose out quickly and permanently in the Industrial Revolution — horses. Powerful, flexible and (in terms of their tasks) smart animals — but not smart enough. That population would have had ample reason to be pessimistic in the 19th century. Yes, overall wealth would clearly increase. But other than a small minority (racers, show, pets, etc.), they weren’t going to get any of it.