Billionaires for a Day
Or, On the value of "data" and the endgame of advertising
The other day, Facebook shared the above “content” with me, presumably hoping to boost my “engagement.” I have not had a Mountain Dew since I was twelve years old. I would be surprised if I drink soda of any kind as often as once a month. This is not even the most recent of the hilariously wrong things it has brought to my attention—yesterday it was trying to bait me with a post from a group for owners of used Mazdas (I have not owned any car for over 20 years). It can’t even seem to discern that my relationship to North Central College is that of a long-time faculty member, not a potential student in one of their health-oriented graduate programs.
This phenomenon of bizarre algorithmic mismatch is something we have all experienced, so much so that it fades into the background. In my case, however, it is especially absurd. Though I have quit Twitter-style social media, I have continued my longstanding habit of using Facebook as a daily diary, recording even the most mundane events in my life. By any measure, then, I am giving Facebook a huge, even incautious, amount of genuine information about my life. A human reader would rightly feel that they know a lot about me. But Facebook itself seems to have deduced only that I like going to the symphony and watching Star Trek. None of the other obvious angles of attack—what about cleaning products? what about nice jackets? what about op-eds related specifically to education?—seem to have occurred to the algorithm. I am a closed book to the algorithm because I choose to contribute only humanly-readable information, not the kind of “data” and “engagement” they can parse. If only I would subject myself to the stream and begin contributing the appropriate likes, rather than tediously detailed descriptions of my actual life, they would be able to discern the inner core of my being.
It feels almost unfair to use Facebook in this example, because it’s well-known to be the most dysfunctional of social media sites. (It’s so bad, in fact, that there’s a common conspiracy theory that they broke it on purpose to force us onto Instagram.) But even the supposedly hyper-competent titan of the digital age, Amazon, is just as bad. I am routinely enjoined to buy another copy of the shoes I just bought—not exaggerating. I often buy Star Trek ebooks during their monthly 99¢ specials. They have not picked up on that pattern, but instead serve me recommendations for literally every Star Trek book ever written. The algorithm cannot even seem to understand basic things like the difference between a durable good and a consumable, as it showed when it tried to sell me irons for weeks after I replaced mine. People usually only buy irons every ten years or so, Amazon!
Ten years ago, the big story about algorithmic data-sifting was that it was spooky accurate. In Creepiness, I cite a popular anecdote about a time that Target’s online shopping algorithm realized a woman was pregnant before she herself found out and started serving her ads for maternity clothes and prenatal vitamins. As recently as a few years ago, Amazon was rolling out plans to start preemptively shipping items to us based on its readings of our needs and desires. That was supposed to be what the Big Data revolution was going to give us—fulfilling our desires before we even knew we had them. Instead, our lives are controlled by trillion-dollar companies that don’t understand that people seldom buy new irons and can’t even tell whether or not I own a car.
This bizarre saga is arguably the culmination of the long story of advertising. The modern age is unimaginable without advertising. It is omnipresent in our lives and serves as the primary funding mechanism for cultural production. And it also very obviously doesn’t work—certainly not as well as the massive sums devoted to it would imply. Prior to the internet age, the question of whether it worked was, for all practical purposes, unanswerable, and companies had fallen into an equilibrium where they spent huge sums on advertising because everyone else does and you never know. When the ad-funded model shifted online, however, we finally had empirical data about how much advertising influenced people’s spending, and the answer was: way, way less than anyone might have hoped.
In response to this definitive proof that advertising didn’t work, the titans of industry decided to double down on the quest to make it work. The problem was interpreted as one of relevance, and the solution was something that the internet age had in abundance: “data.” Through the analysis of clicks and likes and patterns of purchases, users would ultimately get advertisements custom-tailored to their unique needs and desires—advertisements so immediately relevant, so in line with their deepest selves, that they wouldn’t even register as such. The algorithm would even know us better than we know ourselves, becoming the Father who sees in secret and rewards us with unprecedented savings on all our favorite brands.
The path to that glorious culmination was paved with “data.” Our “data” would ensure that the advertising that was placed in front of us was plausibly relevant to us in specific—hence calming corporate fears of its inefficacy—and our very “engagement” with the websites featuring the advertisements would in turn generate more “data.” Google and Facebook are the pioneers of this model. When we run a Google search, we are very directly telling Google what we’re interested in, enabling it to serve us appropriate ads—and then to track our behavior as we interact with the results, so that it can further tailor them. By the same token, when we interact with people on Facebook, we are presumably giving them information about our interests and preferences that enable them to target advertisements at us and track how we respond.
The virtuous circle of making advertising finally work was therefore a matter of generating “data” that could be used to beget ever more “data.” But since no one uses a website for the personalized ads, these platforms needed more than just personalization to guarantee that their users would see and potentially click their customers’ sales pitches. They needed to make sure that users spent as much time as possible on their sites, and it turned out that the “data” they were gathering was much more efficacious for guessing what kind of not-overtly-advertising “content” would keep users “engaged.” The sites themselves became a weird kind of advertisement for advertisement—not for the end user, whose experience I have outlined above, but for the advertisers themselves, who must be induced to believe that such profound levels of user “engagement” will guarantee “engagement” with the relevant ads.
At this point, I think we need to step back and ask what advertising was for, back in its classical era. What did companies think they were getting for all their spending? I don’t think any of them were naive enough to believe that it was as simple as “person looks at ad, person automatically buys product.” Here I think that the obviously idealized and stylized portrait of classical advertising that we find in Mad Men is helpful. Almost never in Mad Men do we see them trying to discern which pitch will be most convincing to customers. The few times we do—Peggy’s attempt to brand beauty products as a form of ritual, or her efforts to understand the harried moms who guiltily buy dinner from Burger Chef—the effort always, always fails. They are not promising to sell the product to the customer. They are selling the company to itself, selling it an image of its influence, its importance, its crucial role in everyone’s life.
Part of that image is indeed a sense of control over others. No one buys advertisements solely for glory, obviously. They do anticipate that they will get people to buy their product who otherwise would not. And when the early internet era punctured the illusion that advertising was legibly driving behavior, it threatened the entire myth.
Hence the illusion of control is precisely what the “data”-driven model promises to restore. At first—and here I realize I’m echoing a version of Cory Doctorow’s “enshittification” thesis—the idea was that we would meet the customer halfway and win them over with ads directed personally at them. But over time, the focus became control for its own sake. As Chris Hayes famously argues, online platforms have commodified our attention as such. Their claim on the obscene wealth and power they command is based on their ability to command our attention—not for the sake of selling advertisements so much as for its own sake. The influence they have over our economy and political system is based on their very direct and gut-level control over our day-to-day experience—that is, on the extent to which they can induce us to fritter away our lives on meaningless bullshit.
Sometimes they use this control for the sake of particular, stated ends. Facebook and Twitter both intervened to thwart pandemic misinformation and election denialism, for instance, which led some progressives to view platform capitalism as a potential ally. Sometimes, and increasingly often, the interventions have been malicious—as with Musk’s very public manipulation of Twitter’s algorithm to favor right-wing content and get him, personally, more engagement. But what’s truly scary is that the manipulations that have been most consequential seem to have been more or less random. The most haunting passage for me in Marion Fourcade and Kieran Healy’s The Ordinal Society is where they reveal that black-box algorithms tend to push people toward right-wing radicalization not out of any overt political motive, but simply because getting people hooked on that kind of material makes them easier to please in the future. It’s not that it wants people to be right-wing—it just wants them to be stupider so that the algorithm’s job of manipulating them will become more straightforward.
The fact that this form of radicalization is an emergent property of black-box algorithmic platforms points to a deeper truth, which is that their business model is intrinsically evil. They want to induce us to waste our lives on the platforms, simply to show that they can. And it is in that context that we must understand LLMs. Whatever their value “as a tool” (and to everyone who “uses AI as a tool,” I can only respond: shut the fuck up), the economic and political function of LLMs is to intensify this dynamic immeasurably. What the chatbot gives you is not merely inert “content” that meets your needs—it gives you the illusion of a person who wants nothing more than to serve you. Like everyone, I am worried about what happens to human cultural production and to certain lines of work in the wake of LLMs, but I think the promise of laying off every white collar worker in the world is less attractive to our overlords than the promise of turning everyone on earth into the inert passive consumers from Wall-E.
In one of my frequent discussions of these matters with My Esteemed Partner, she averred that the billionaires would at least be punished by being forced to live in the world they have created. And in that moment I had a realization: they already do. They live in a bubble where they can have whatever they want and no one ever tells them no. LLMs give everyone, even the homeless bum at the public library, access to an endlessly enthusiastic servant and yes-man. Chatbot psychosis is thus a kind of generalization, even democratization, of the experience of being a billionaire.
The experience of a billionaire is, on one level, something I wouldn’t wish on my worst enemy—a comprehensive type of dehumanization that renders one utterly alone in a world full of non-player characters. But on another level, it does speak to genuine human desires and needs. All of us want to be cared for and taken seriously. All of us delight in experiences that feel “just right” and “just for us”—and taking account of the delight that the online world promises and sometimes even delivers is one of the great strengths of Fourcade and Healy’s book. The problem is when those basically legitimate desires become the only thing. That kind of monomania creates monsters, who in turn attempt to create a monstrous world.
At the end of the day, I don’t think that their vision of Wall-E drones presided over by billionaire gods is possible, on any level. I don’t think the technology is or ever will be “there,” because I believe that artificial general intelligence is conceptually impossible. More fundamentally, I don’t believe that humans are fully controllable or manipulable—they will always resist, always refuse domination. Indeed, part of the reason why the “data”-driven model fails is that we insist on using these platforms in ways they don’t want us to and finding affordances that cut against our billionaire overlords’ nefarious intentions (such as my perverse insistence on using Facebook to connect with friends and colleagues by sharing things about my life). But even if attempting the impossible cannot succeed, it can still do profound and at times irreversible damage. It will be small consolation to be proven right on the impossibility of artificial general intelligence, for instance, if we blow through humanity’s entire future carbon budget in the process. Nor will the fact that chatbot psychosis is not finally an appealing way of life console those whose educations have been stolen from them in exchange for its false promise.
These people are bad, they want bad things, and they have massive power and resources behind them. It is not at all clear how we can defeat them and build a more livable future. But the first step is to realize that they and their technologies are, in fact, our enemies. We must reject them, and all their works, and all their empty promises—not pretend they are good simply because we can imagine a good and legitimate way to use them “as a tool.”



This article comes at the perfect time. Your observation on algorithmic mismatch fading into the background is so sharp. Truly insightful.
I have known and know people at most of the big tech companies (I studied math in undergrad), and the thing that sets Facebook apart is that it is the place with the most status-conscious social strivers. We do not live in a world where quality engineering redounds to social status, so Facebook sucks for users.
I think also that ads do work in a certain way. When forced to choose between a large number of basically identical products (cars, cookies, etc.), any heuristic makes life simpler than no heuristic. But this means that advertising is a rat race where no actual advantage is cultivated, so advertising is like table stakes.