Like many ambitious tech companies before it, OpenAI introduced itself to the culture at large with big claims about how its technology would improve the world, from boosting productivity to enabling scientific discovery. Even the caveats and warnings were de facto advertisements for the existential potential of artificial intelligence: We had to be careful with this stuff, or it might literally wipe out humanity.
Fast-forward to the present day, and OpenAI is still driving culture-wide conversations, but its headline-grabbing offerings aren't quite so lofty. Its Sora 2 video platform, which makes it easy to generate and share AI-derived fictions, was greeted as a TikTok for deepfakes. That is, a mash-up of two of the most heavily criticized developments in recent memory: addictive algorithms and misinformation.
As that launch was settling in (and being tweaked to address intellectual property complaints), OpenAI promised a forthcoming change to its flagship ChatGPT product, enabling "erotica for verified adults." These products are not exactly curing cancer, as CEO Sam Altman has suggested artificial intelligence might someday do. Instead, the moves have struck many as weirdly off-key: Why is a company that took its mission (and itself) so seriously doing . . . this?
An obvious risk here is that OpenAI is watering down a previously high-minded brand. There are several major players in AI at this point, including Anthropic, the maker of ChatGPT rival Claude, as well as Meta, Microsoft, Elon Musk's Grok, and more. As they seek to attract an audience, they need to differentiate themselves through how their technologies are deployed and what they make possible, or easy. In short, what the technology stands for. That's why slop, memes, and sex seem like such a comedown from OpenAI's carefully cultivated reputation as an ambitious but responsible pioneer.
To underscore the point, rival Anthropic recently enjoyed a surprising amount of positive attention, an estimated 5,000 visitors and 10 million social media impressions, for a pop-up event in New York's West Village, dubbed a "no slop zone," that emphasized analog creativity tools. That is part of a "Keep Thinking" branding campaign aimed at burnishing the reputation of its Claude chatbot. The company has positioned itself as taking a cautious approach to developing and deploying the technology (one that has attracted some criticism from the Trump administration). That stance has also made Anthropic stand out in what can be a move-fast-and-break-things competitive field.
AI is a field that is spending, and losing, huge sums, and lately casting about for revenue streams in the here and now while working toward that promised lofty future. According to The Information, OpenAI lost $7.8 billion on revenue of $4.5 billion in the first half of 2025, and expects to spend $115 billion by 2029. ChatGPT has 800 million monthly users, but paid accounts are closer to 20 million, and these recent moves suggest that it needs to build and leverage engagement. As Digiday recently noted, OpenAI increasingly appears to be at least considering ad-driven models (once dubbed a "last resort" by Altman).
Writer and podcaster Cal Newport has made the case that developments like viral-video tools and erotica chat are emblematic of a deeper shift away from grandiose economic impacts and toward "betting [the] company on its ability to sell ads against AI slop and computer-generated pornography." It is almost like a sped-up version of Cory Doctorow's infamous enshittification process, pivoting from a quality user experience to an increasingly degraded one designed for near-term profit.
This isn't entirely fair to OpenAI, whose every move is scrutinized partly because it is the best-known brand in a singularly hyped category. All of its competitors will also have to deliver real value in exchange for their massive costs to investors and society at large. But precisely because it is a leading brand, it is particularly vulnerable to dilution if it is seen as straying from its idealistic promise, and rhetoric. A cutting-edge AI pioneer doesn't want to be perceived as an existential threat, but it also doesn't want to be branded as just another source of crass distraction.

