There’s something off about this year’s “fall vibes”

This is what came out when I prompted Canva’s AI tool to create a “view from cafe in autumn, quaint street, foliage, coffee, aesthetic, small town.” Interesting how it placed said foliage indoors! | Image generated by Rebecca Jennings using Canva

A rain-soaked street at dusk, pictured through the window of a coffee shop. String lights hang between old brick buildings, a church steeple in the distance. In the foreground, a candlelit table with mugs of coffee, tea, and … a corked glass jug of beige liquid? Next to a floating hunk of sourdough? And also the table is covered in water?

This is the platonic ideal of “autumn,” according to one photo that’s gone viral both on X, where it’s been seen almost 12 million times, and Pinterest, where it’s the very first picture that comes up when you search “fall inspo.” At first glance, you’d be forgiven for thinking it’s a tiny street in Edinburgh or the part of Boston that looks like Gilmore Girls. But like so many other viral autumnal vibes photos this year, the image, with its nonsensical details and uncanny aura, appears to be AI-generated. 

AI “autumn vibes” imagery makes up a ton of the most popular fall photos on Pinterest right now, from a moody outdoor book display on yet another rain-soaked street to a sunlit farmers market to several instances of coffee cups perched on tousled bedspreads. All of them appear normal until you zoom in and realize the books don’t contain actual letters and the pillows appear to be made of bath mat material. 

Dreaming of a wet table with three types of bread and a broccoli latte ☺️?? https://t.co/uIqRCRlqcC

— mj slenderman (@othermiike) September 25, 2024

It’s not just limited to Pinterest or “vibes”: AI-generated content is now infiltrating social media in ways that have a meaningful impact on people’s lives. Knitters and crocheters hoping to craft fall sweaters are being inundated with nonsensical AI patterns and inspo images on Reddit. An entirely fake restaurant has gained 75,000 followers on Instagram by claiming to be “number one in Austin” and posting over-the-top seasonal food items like a croissant shaped like Moo Deng. Meanwhile, folks hoping to curl up with a cozy fantasy novel or a bedtime story for their kids are confronted with a library of ChatGPT-generated nonsense “written” by nonexistent authors on the Kindle bookstore, while their YouTube algorithms serve them bot-generated fall ambiance videos. Autumn, it seems, is being eaten by AI. 

Not everyone is — and please excuse the following pun — falling for it. When the fake café photo went viral on X, it caused a deluge of quote-tweets asking why the hell anyone would use AI when they could just as easily post one of the many actual photos taken in real cities that do, in fact, look like this. 

All of this garbage is colloquially known as “slop,” a term for the spammy AI-generated images, text, and videos that clog up internet platforms and make it more difficult and unpleasant than ever to be online. In reality, this moment of peak slop is the natural culmination of platforms that incentivize virality and engagement at all costs — no matter how low-quality the content happens to be. But the crux of the issue now is the sheer scale of it: Scammers and spammers can unleash a barrage of text and images with the click of a button, so that searches for legitimate information or a casual scroll through social media require even more time and effort to bypass the junk. Misinformation about crucial news events and election coverage is spreading on platforms. Academic and literary publications are being spammed with low-quality submissions, making it harder to suss out genuine creative or scholarly work. 

Of course, there are more urgent concerns regarding the rise of generative AI: its enormous energy consumption, for one, or the rampant creation of deepfake porn used to harass and abuse women. Considering all that, it’s easy to look at cute AI-generated fall pictures on Pinterest as a relative non-issue, a side effect of a technology that could (arguably) greatly benefit humanity. 

But as Jason Koebler, co-founder of 404 Media, a publication covering tech, explains, these images normalize AI slop and dull our ability to discern what’s real and what’s fake. “The clogging of feeds and of search results is not just a side effect, but a main effect of all of this,” he told me. “It’s harder for a journalist writing an article to break through, or an artist painting a picture, or a musician making a song when they’re competing with not just a bunch of other humans making stuff, but humans who are using this automatic creation machine to make things at a scale that is impossible otherwise.” The problem is so bad that tools used to track the human usage of certain words online are no longer effective due to the prevalence of large language models. 

AI slop typically comes from people trying to make money by going viral on social media. Platforms like Facebook, Instagram, X, and TikTok all have programs that pay creators directly based on how much engagement their content receives, and AI makes it easier than ever to produce and test that content. That’s led to an entire cottage industry of people all over the world who teach paid courses on how to produce highly engaging AI slop, sharing information on the best prompts to generate the most attention-getting posts.

On Facebook, AI posters say they make around $100 per 1,000 likes, and some TikTokers are making $5,000 per month from these side hustles. It’s a decent amount for anyone but especially lucrative in countries where many AI content hustlers are based, such as India, Vietnam, and the Philippines. One Kenya-based creator told New York magazine that his process involves asking ChatGPT something like, “WRITE ME 10 PROMPT picture OF JESUS WHICH WILLING BRING HIGH ENGAGEMENT ON FACEBOOK” and then plugging those prompts into an image generator like Leonardo.ai or Midjourney. 

Far from being worried about their platforms being overrun with low-quality engagement bait, Meta and X seem entirely unconcerned and even supportive of it. “There seems to be very little interest from any platform in taking action about this stuff,” says Koebler. “They are actively, in some cases, generating it themselves, and being like, ‘Look how cool this technology is!’” 

Both Meta and X have invested heavily in AI, offering tools for users to add to the ever-increasing deluge of slop on their platforms. Meta’s official account even posted an AI photo of the northern lights over San Francisco on Threads — the top reply is from a NASA engineer explaining why images like this spread false information and “muddy the waters of reality.”

AI slop will continue to exist as long as people are finding ways to make money from it, just like any practice on social media, from the merely irksome to the actually dangerous. Koebler guesses that AI spam got so popular because some of the platforms cracked down on content stolen from real creators, making AI the second-easiest option for spammers. 

“People are just posting whatever they’re getting out of these AI generator machines, regardless of the level of quality,” he says, “because of the ways that virality and social media algorithms work, even if you have the world’s greatest piece of content, it might not go viral, whereas something that is not very good might, just because you won the luck of the draw.”

There’s a particular irony with AI-generated images of fall vibes, considering fall is disappearing from many parts of the US and the energy demands of AI have become a significant source of the emissions driving climate change. 

And it’s not as if the internet is starving for aesthetically pleasing fall inspo: Every September, social media is flooded with images of pumpkin-strewn stoops, cozy blankets on comfy sofas, or small towns covered in yellow and orange leaves. For the past few years, we’ve christened these mini-moments online with names: Meg Ryan fall, which had its own outfit and playlist recommendations, or Christian girl autumn, where people unleashed their inner white woman by putting on wide-brimmed hats and knee-high boots to grab pumpkin spice lattes. (This year, it seems like the Gilmores’ fictional Connecticut town of Stars Hollow is providing much of the inspiration.) 

It’s all very cutesy and wholesome, but even this type of human-made content is increasingly little more than a ploy to get viewers to click on affiliate links to Amazon storefronts or ultra-cheap TikTok Shop junk. In this way, it’s not all that dissimilar from AI slop: Platforms are encouraging their users to be professional salespeople, whether they’re hawking unethically made clothing and home goods or spamming audiences with AI-generated inspo photos. Both are low-quality, quick-to-produce types of content that drive engagement and, therefore, revenue, even when regular users say they hate it. 

Part of the joy in scrolling through fall photos, after all, is knowing that these places exist and that you could theoretically visit them, that the world fundamentally changes in autumn, and that there’s only a small window of time to marvel at how beautiful it all is. An AI-generated image represents the precise opposite: It’s just one of the infinite possible arrangements of pixels for machines to keep churning out indefinitely. 
