#one of the most significant spreaders of misinformation on X
Explore tagged Tumblr posts
Photo
(via Elon Musk's AI turns on him, labels him 'one of the most significant spreaders of misinformation on X' | Fortune)
Text
Americans need to log off. Unplug. Shoot the TV. It seems impossible. Less than five days from Election Day in the US, most people can't help but check the news (or TikTok, or X) at least once a day. Swipe, refresh, repeat. By Tuesday, the connectedness will be constant. Political stress takes a huge mental toll, and given that anxiety can be exacerbated by uncertainty, the 2024 election feels worse than any that came before it. There's a reason for that.
I don’t just mean the general sky-is-falling stuff—the militias on Facebook organizing ballot-box stakeouts, the conspiracy theory spreaders, the cybercriminals potentially waiting in the wings. Some version of those nerve-janglers has been around for years. Now, though, there’s a new factor upping users’ blood pressure as they doomscroll: AI misinformation.
Clearly US voters worry about how misinformation might impact who wins the election, but Sander van der Linden, author of Foolproof: Why Misinformation Infects Our Minds and How to Build Immunity, notes that the anxiety around AI might be more existential. “If you look at the problem from a more indirect perspective, such as sowing doubt and chaos, confusion, undermining democratic discourse, lowering trust in the electoral process, and confusing swing voters,” he says. “I think we’re looking at a bigger risk”—one that fuels polarization and erodes the quality of debate.
According to an American Psychological Association survey released last week, 77 percent of US adults feel some level of stress over the future of the country. It gets worse. Sixty-nine percent of adults surveyed said the race between Vice President Kamala Harris and Donald Trump was a cause of “significant stress”—a figure that’s up from 52 percent in 2016, when Trump beat Hillary Clinton. Nearly three-quarters of respondents thought the election could spur violence; more than half worried it could be “the end of democracy in the US.”
Christ.
On top of all of this sits the threat of AI-generated falsehoods. For more than a year researchers have warned of election misinformation from artificial intelligence. Beyond the polls, such misinformation has played a role in the Israel-Hamas war and the war in Ukraine. 404 Media called the aftermath of Hurricane Helene “the ‘fuck it’ era of AI-generated slop.” (Actually) fake news lurks around every corner. Earlier this year, the World Economic Forum released a report claiming AI misinformation is one of the biggest short-term threats the world faces. Bad election information and fake images can also bring in serious money for X users, according to a BBC report this week.
This was the first year the APA asked about AI and election anxiety, and one of the things it found was that seven in 10 people experienced stress over how believable fake information can seem. One-third of social media users said they don't know what to believe on those platforms. "It extends beyond just information and social media," says Vaile Wright, the APA's senior director of health care innovation. "A majority of Americans said they don't trust the US government. So there's sort of this whole lack of trust in what used to be very trusted institutions—the media, government—and that, I'm sure, is not helping with people's stress as it relates to this election this year."
When the US election season ramped up, there were AI-generated robocalls (the Federal Communications Commission outlawed them), and now election officials are preparing staff to deal with any number of deepfakes they may encounter. X's AI model Grok is reportedly boosting conspiracy theories. (It's also, according to Musk, working on its MRI-reading skills.)
After months of fretting about AI taking jobs, now everyone has to worry about it taking faith in the democratic process?
For nearly two decades, one social media platform or another has ended up dominating a US election. Back in 2008, it was a still-young Twitter. During most of the twenty-teens, it was Facebook (and a bit of Instagram) and Twitter. More recently, TikTok has become a news-spreading tool. In each election cycle, people have swiped to keep up, and confronted new levels of toxicity. Former Trump advisor Steve Bannon, who got out of prison this week, once told reporter Michael Lewis that Democrats didn't matter: "The real opposition is the media. And the way to deal with them is to flood the zone with shit." That shit went online.
Now, that shit doesn’t even have to come from political operatives. Machines can make it. When people scroll around on their smartphones for a flicker of hope about whether or not their candidate will win, whatever discouragement or reassurance they find may not even be real.
The APA's survey found that 82 percent of US adults were worried people may base their values on inaccurate information, and more than one-fifth said they'd believed something they read online or on social media when it wasn't true. Another poll conducted in early September found that only about a quarter of voters feel confident that they can tell the difference between real and AI-generated visuals, like the fake images Trump shared claiming Taylor Swift fans are supporting him. "That's not a good sign," van der Linden says.
If your fears about the election seem even worse than they did in 2020, this may be why. Misinformation takes a mental toll. "Political anxiety" exists, and research indicates it can affect even people who aren't otherwise anxious. Couple that with a media landscape where newspapers are coming under fire for not endorsing a political candidate, and the picture of a nervous electorate becomes very clear. Trust no one; just wait to see what happens, then decide if you believe it.
Text
#Elon Musk #even his own AI says he's full of shit #I wish he'd go away #btw Tesla issues its sixth Cybertruck recall this year
Text
elon's slaves are getting restless
Elon Musk's AI turns on him, labels him as 'one of the most significant spreaders of misinformation on X'
Text
"Elon Musk might be in charge of the business of Grok, but the artificial intelligence has seemingly gone into business for itself, labeling Musk as one of the worst offenders when it comes to spreading misinformation online."
🤣
Text
"Based on various analyses, social media sentiment, and reports, Elon Musk has been identified as one of the most significant spreaders of misinformation on X since he acquired the platform,” Grok wrote later adding “Musk has made numerous posts that have been criticized for promoting or endorsing misinformation, especially related to political events, elections, health issues like COVID-19, and conspiracy theories. His endorsements or interactions with content from controversial figures or accounts with a history of spreading misinformation have also contributed to this perception.
Text
Oh the irony, Elon: You can run, but you can’t hide.
Text
Elon Musk's AI chatbot, Grok, has seemingly turned on its creator after it told an X user that the source of most misinformation on the platform is its owner.
"Based on various analyses, social media sentiment, and reports, Elon Musk has been identified as one of the most significant spreaders of misinformation on X since he acquired the platform," Grok told X user Gary Koepnick, which he shared in a post to X.
Unexpected?
Link
Elon Musk's AI turns against him and labels him 'one of the most significant spreaders of misinformation on X'
https://fortune.com/2024/11/14/grok-musk-misinformation-spreader/
Text
Elon Musk's AI turns on him, labels him 'one of the most significant spreaders of misinformation on X' | Fortune
DELETE TWITTER X
STOP NAZI VANILLA ISIS
Text
Elon Musk's AI turns on him, labels him 'one of the most significant spreaders of misinformation on X' | Fortune
We didn't need AI to know this