#based on the Facebook ai slop
carnelianfoxx · 3 months ago
Why is it that pictures like this never trend?
Tumblr media
371 notes
mariacallous · 15 days ago
AI slop is flowing onto every major platform where people post online—and Medium is no exception.
The 12-year-old publishing platform has undertaken a dizzying number of pivots over the years. It’s finally on a financial upswing, having turned a monthly profit for the first time this summer. Medium CEO Tony Stubblebine and other executives at the company have described the platform as “a home for human writing.” But there is evidence that robot bloggers are increasingly flocking to the platform, too.
Earlier this year, WIRED asked AI detection startup Pangram Labs to analyze Medium. It took a sampling of 274,466 recent posts over a six-week period and estimated that over 47 percent were likely AI-generated. “This is a couple orders of magnitude more than what I see on the rest of the internet,” says Pangram CEO Max Spero. (The company’s analysis of one day of global news sites this summer found 7 percent as likely AI-generated.)
The strain of slop on Medium tends toward the banal, especially compared with the dadaist flotsam clogging Facebook. Instead of Shrimp Jesus, one is more apt to see vacant dispatches about cryptocurrency. The tags with the most likely AI-generated content included “NFT”—out of 5,712 articles tagged with this phrase over the last several months, Pangram found that 4,492, or around 78 percent, came back as likely AI-generated—as well as “web3,” “ethereum,” “AI,” and, for whatever reason, “pets.”
WIRED asked a second AI detection startup, Originality AI, to run its own analysis. It examined a sampling of Medium posts from 2018 and compared it with a sampling from this year. In 2018, 3.4 percent were estimated as likely AI-generated. CEO Jon Gillham says that percentage corresponds to the company’s false-positive rate, as AI tools were not widely used at that point. For 2024, with a sampling of 473 articles published this year, it suspected that just over 40 percent were likely AI-generated. With no knowledge of each other’s analyses, both Originality and Pangram came to similar conclusions about the scope of AI content.
When contacted by WIRED for this article and notified of the results of the AI detection analyses, Stubblebine rejected the premise that Medium has an AI issue. “I am disputing the importance of the results and also the idea that these companies discovered anything,” he says.
Stubblebine does not deny that Medium has seen a major uptick in AI-generated articles. “We think, probably, AI-generated content that gets posted to Medium is probably up tenfold from the beginning of the year,” he says. He also adopts a generally adversarial approach to AI slop appearing on the platform: “We’re strongly against AI content.” But he objects to the use of AI detectors in assessing the scope of the issue, in part because he alleges they cannot differentiate between posts that are wholly AI-generated and posts in which AI is used more lightly. (“That’s not accurate,” Spero says; he claims Pangram can indeed differentiate between a ChatGPT post generated from a prompt and a post based on an AI outline but fleshed out with human writing.)
According to Stubblebine, Medium tested several AI detectors and decided they were not effective. (Stubblebine also accused Pangram Labs of attempting to extort him “by press” because Spero, Pangram’s CEO, sent an email detailing the results of the analysis WIRED had requested and then offered its services to Medium. “I just thought we could help them,” Spero says.)
AI detection tools are, indeed, flawed. They work by analyzing text and making predictions, and they can produce false positives and false negatives. Caution is warranted when using them to judge individual pieces of writing or artwork, especially given a new wave of tools designed to trick them. Still, they have utility as barometers for gauging changes in how much AI-generated content exists on certain platforms and websites, and they can help researchers, journalists, and the public spot patterns.
“Since AI detectors are accurate but not perfect, it is impossible to say with certainty whether any single piece of content is AI-generated or not,” says Gillham. “However, they are great at seeing the trend of AI writing taking over platforms like Medium.”
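To make the barometer idea concrete: if a detector’s false-positive and true-positive rates are roughly known, the raw flag rate over a large sample can be corrected into an estimate of the underlying share of AI-generated posts, even though any single verdict stays uncertain. Below is a minimal sketch of that standard prevalence adjustment (the Rogan–Gladen estimator) in Python; the 3.4 percent false-positive rate echoes Originality’s 2018 baseline, while the 95 percent true-positive rate and the function itself are illustrative assumptions, not anything the detection companies have published.

```python
def estimate_ai_share(flag_rate, false_positive_rate, true_positive_rate):
    """Turn a detector's raw flag rate over a large sample into an
    estimated share of AI-generated posts (Rogan-Gladen adjustment).

    observed flag rate = p * TPR + (1 - p) * FPR, solved for p.
    """
    if true_positive_rate <= false_positive_rate:
        raise ValueError("detector must flag AI text more often than human text")
    p = (flag_rate - false_positive_rate) / (true_positive_rate - false_positive_rate)
    return min(max(p, 0.0), 1.0)  # clamp: sampling noise can push p outside [0, 1]

# Illustrative numbers only: FPR taken from Originality's 2018 baseline, TPR assumed.
print(estimate_ai_share(0.034, 0.034, 0.95))  # 2018-style sample -> ~0.00
print(estimate_ai_share(0.40, 0.034, 0.95))   # 2024-style sample -> ~0.40
```

Per-post calls remain noisy, but aggregated over hundreds of thousands of posts the adjusted flag rate tracks the trend, which is all the barometer argument needs.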
Others have spotted this trend. “During my regular scans for new AI-generated news sites, I regularly come across AI-generated content on Medium on a weekly basis,” says McKenzie Sadeghi, an editor at online misinformation tracking company NewsGuard. “I've found that most of it is often about crypto, marketing, SEO.”
Stubblebine is adamant that these numbers do not accurately capture what Medium readers experience. “It doesn't matter,” he says. “Having access to the raw feed of what gets posted to Medium doesn't represent the actual activity of what gets recommended and viewed. The vast majority of detectable AI-generated stories in the raw feeds for these topics already have zero views. Zero views is the goal and we already have a system that accomplishes [that].” He believes Medium is effectively containing its AI slop with the combination of its general-purpose spam filtering system and its human moderation.
Many accounts that post high volumes of likely AI-generated material do, indeed, appear to have puny or nonexistent readerships. One account flagged by Pangram Labs as the author of likely AI-generated posts about crypto, for example, posted six times in one day, with no interactions on any of the posts, suggesting a negligible impact. Other flagged posts appear to have been recently pulled down; while some may have been voluntarily removed, others may have been taken down by Medium days or weeks after publication. Sometimes, Medium deliberately delays removing spam, according to Stubblebine, if it has identified “spam rings” attempting to game the system.
Zero views was not the case across the board, though. WIRED found that other articles flagged as likely AI-generated by Pangram, Originality, and the AI detection company Reality Defender had hundreds of “claps,” which are similar to “likes” on other platforms, suggesting at the very least a readership substantially higher than zero.
Stubblebine sees people as the cornerstone of Medium’s approach to quality control. “Medium basically runs on human curation now,” he says. He cites the 9,000 editors of Medium’s publications, as well as additional human evaluation for stories that can be “boosted” or more widely distributed. “I think you could, if you're being pedantic, say we're filtering out AI—but there's a goal above that, which is, we're just trying to filter out the stuff that's not very good.”
Medium has taken steps this year to curb the presence of robotic bloggers, updating its AI policy. Its stance is a notable contrast to other platforms, like LinkedIn and Facebook, that explicitly encourage people to use AI. Instead, Medium no longer allows AI writing to be paywalled in its Partner program, to receive wider human-curated distribution from its Boost program, or to promote affiliate links. Disclosed AI writing can get general distribution, but undisclosed AI writing is given only “network” distribution, which means it is meant to appear only on the feeds of people who follow the writer. Medium defines AI-generated writing as “writing where the majority of the content has been created by an AI-writing program with little or no edits, improvements, fact-checking, or changes.” Medium does not have any AI-specific enforcement tools for these new rules. “We've found that our existing curation system has the side effect of filtering out AI generated writing simply because AI generated writing is also bad writing,” says Stubblebine.
Some Medium writers and editors do applaud the platform’s approach to AI. Eric Pierce, who founded Fanfare, Medium’s largest pop culture publication, says he doesn’t have to fend off many AI-generated submissions and believes the human curators of Medium’s Boost program help highlight the best of the platform’s human writing. “I can’t think of a single piece I’ve read on Medium in the past few months that even hinted at being AI-created,” he says. “Increasingly, Medium feels like a bastion of sanity amid an internet desperate to eat itself alive.”
However, other writers and editors say they still see a plethora of AI-generated writing on the platform. Content marketing writer Marcus Musick, who edits several publications, wrote a post lamenting how an article he suspects was AI-generated went viral. (Reality Defender ran an analysis on the article in question and estimated it was 99 percent “likely manipulated.”) The story appears to have been widely read, with over 13,500 “claps.”
In addition to spotting possible AI content as a reader, Musick believes he encounters it frequently as an editor. He says he rejects around 80 percent of potential contributors a month because he suspects they’re using AI. He does not use AI detectors, which he calls “useless,” relying instead on his own judgment.
While the volume of likely AI-generated content on Medium is notable, the moderation challenge the platform faces—how to surface good work and keep junk banished—is one that has always plagued the wider web. The AI boom has simply supercharged the problem. While click farms have long been an issue, for example, AI has handed SEO-obsessed entrepreneurs a way to swiftly resurrect zombie media outlets by filling them with AI slop. There’s a whole subgenre of YouTube hustle-culture entrepreneurs creating get-rich-quick tutorials that encourage others to churn out AI slop on platforms like Facebook, Amazon Kindle, and, yes, Medium. (Sample headline: “1-Click AI SEO Medium Empire đŸ€Ż.”)
“Medium is in the same place as the internet as a whole right now. Because AI content is so quick to generate that it is everywhere,” says plagiarism consultant Jonathan Bailey. “Spam filters, the human moderators, et cetera—those are probably the best tools they have.”
Stubblebine’s argument—that it doesn’t necessarily matter whether a platform contains a large amount of garbage, as long as it successfully amplifies good writing and limits the reach of said garbage—is perhaps more pragmatic than any attempt to wholly banish AI slop. His moderation strategy may very well be the most savvy approach.
It also suggests a future in which the Dead Internet theory comes to fruition. The theory, once the domain of extremely online conspiratorial thinkers, argues that the vast majority of the internet is devoid of real people and human-created posts, clogged instead with AI-generated slop and bots. As generative AI tools grow more commonplace, platforms that give up on trying to blot out bots will incubate an online world in which work created by humans becomes increasingly hard to find amid feeds swamped by AI.
11 notes
ibboard · 2 months ago
Bonus section from the interview that this summary article is based on:
I think there’s been this trend over time where the feeds started off as primarily and exclusively content for people you followed, your friends. I guess it was friends early on, then it kind of broadened out to, “Okay, you followed a set of friends and creators.” And then it got to a point where the algorithm was good enough where we’re actually showing you a lot of stuff that you’re not following directly because, in some ways, that’s a better way to show you more interesting stuff than only constraining it to things that you’ve chosen to follow. I think the next logical jump on that is like, “Okay, we’re showing you content from your friends and creators that you’re following and creators that you’re not following that are generating interesting things.” And you just add on to that a layer of, “Okay, and we’re also going to show you content that’s generated by an AI system that might be something that you’re interested in.”
They're planning to GENERATE THEIR OWN SLOP AND FORCE IT INTO YOUR FEED! Not just junk accounts generating slop in the hopes of getting ad revenue, but Facebook itself generating soulless gibberish filled with lies to make you read more and more (and to push out real people making real content) 😒
(I'm just glad I've deleted my Facebook and Twitter accounts, and basically never used my Facebook account even when I had it!)
Tumblr media
(eyeroll) Wow, that's us put in our place, huh.
(See also: "If you're not rich enough to sue us, go screw yoursel[f/ves].")
820 notes