#Meta Content Moderation
childrenofthedigitalage · 30 days ago
Meta's Content Moderation Changes: Why Ireland Must Act Now
The recent decision by Meta to end third-party fact-checking programs on platforms like Facebook, Instagram, and Threads has sent shockwaves through online safety circles. For a country like Ireland, home to Meta’s European headquarters, this is more than just a tech policy shift—it’s a wake-up call. It highlights the urgent need for…
saywhat-politics · 29 days ago
Meta rolled out a number of changes to its “Hateful Conduct” policy Tuesday as part of a sweeping overhaul of its approach toward content moderation.
Meta announced a series of major updates to its content moderation policies Tuesday, including ending its fact-checking partnerships and “getting rid” of restrictions on speech about “topics like immigration, gender identity and gender” that the company describes as frequent subjects of political discourse and debate. “It’s not right that things can be said on TV or the floor of Congress, but not on our platforms,” Meta’s newly appointed chief global affairs officer, Joel Kaplan, wrote in a blog post outlining the changes.
In an accompanying video, Meta CEO Mark Zuckerberg described the company’s current rules in these areas as “just out of touch with mainstream discourse.”
mostlysignssomeportents · 9 months ago
CDA 230 bans Facebook from blocking interoperable tools
I'm touring my new, nationally bestselling novel The Bezzle! Catch me TONIGHT (May 2) in WINNIPEG, then TOMORROW (May 3) in CALGARY, then SATURDAY (May 4) in VANCOUVER, then onto Tartu, Estonia, and beyond!
Section 230 of the Communications Decency Act is the most widely misunderstood technology law in the world, which is wild, given that it's only 26 words long!
https://www.techdirt.com/2020/06/23/hello-youve-been-referred-here-because-youre-wrong-about-section-230-communications-decency-act/
CDA 230 isn't a gift to big tech. It's literally the only reason that tech companies don't censor anything we write that might offend some litigious creep. Without CDA 230, there'd be no #MeToo. Hell, without CDA 230, just hosting a private message board where two friends get into serious beef could expose you to an avalanche of legal liability.
CDA 230 is the only part of a much broader, wildly unconstitutional law to survive the Supreme Court's 1997 review of the CDA. We don't spend a lot of time talking about all those other parts of the CDA, but there's actually some really cool stuff left in the bill that no one's really paid attention to:
https://www.aclu.org/legal-document/supreme-court-decision-striking-down-cda
One of those little-regarded sections of CDA 230 is part (c)(2)(B), which broadly immunizes anyone who makes a tool that helps internet users block content they don't want to see.
Enter the Knight First Amendment Institute at Columbia University and their client, Ethan Zuckerman, an internet pioneer turned academic at UMass Amherst. Knight has filed a lawsuit on Zuckerman's behalf, seeking assurance that Zuckerman (and others) can use browser automation tools to block, unfollow, and otherwise modify the feeds Facebook delivers to its users:
https://knightcolumbia.org/documents/gu63ujqj8o
If Zuckerman is successful, he will set a precedent that allows toolsmiths to provide internet users with a wide variety of automation tools that customize the information they see online. That's something that Facebook bitterly opposes.
Facebook has a long history of attacking startups and individual developers who release tools that let users customize their feed. They shut down Friendly Browser, a third-party Facebook client that blocked trackers and customized your feed:
https://www.eff.org/deeplinks/2020/11/once-again-facebook-using-privacy-sword-kill-independent-innovation
Then, in 2021, Facebook's lawyers terrorized a software developer named Louis Barclay in retaliation for his tool "Unfollow Everything," which autopiloted your browser to click through all the laborious steps needed to unfollow all the accounts you were subscribed to, and permanently banned Barclay from the platform:
https://slate.com/technology/2021/10/facebook-unfollow-everything-cease-desist.html
Now, Zuckerman is developing "Unfollow Everything 2.0," an even richer version of Barclay's tool.
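For readers wondering what "browser automation" means in practice, here's a minimal, purely hypothetical sketch in Playwright for Python of the kind of loop such a tool runs. Every URL and selector below is an invented placeholder (Facebook's real markup is different and changes constantly); the point is just that the tool drives the user's own logged-in browser through the same clicks a human would make, only faster:

```python
# Hypothetical sketch of a browser-automation unfollow tool.
# The URL and the "Unfollow" selector are invented placeholders,
# not Facebook's actual markup.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)
    page = browser.new_page()
    page.goto("https://www.facebook.com/")
    input("Log in in the browser window, then press Enter here...")
    page.goto("https://www.facebook.com/me/following")  # placeholder path
    # Keep clicking the first visible "Unfollow" control until none remain.
    while page.locator("text=Unfollow").count() > 0:
        page.locator("text=Unfollow").first.click()
        page.wait_for_timeout(1000)  # pace the clicks like a human would
    browser.close()
```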
This rich record of legal bullying gives Zuckerman and his lawyers at Knight something important: "standing" – the right to bring a case. They argue that a browser automation tool that helps you control your feeds is covered by CDA 230(c)(2)(B), and that Facebook can't legally threaten the developer of such a tool with liability for violating the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, or the other legal weapons it wields against this kind of "adversarial interoperability."
Wired spoke to a variety of experts about the case – including my EFF colleague Sophia Cope – who broadly endorse the very clever legal tactic Zuckerman and Knight are bringing to the court.
I'm very excited about this myself. "Adversarial interop" – modding a product or service without permission from its maker – is hugely important to disenshittifying the internet and forestalling future attempts to reenshittify it. From third-party ink cartridges to compatible replacement parts for mobile devices to alternative clients and firmware to ad- and tracker-blockers, adversarial interop is how internet users defend themselves against unilateral changes to services and products they rely on:
https://www.eff.org/deeplinks/2019/10/adversarial-interoperability
Now, all that said, a court victory here won't necessarily mean that Facebook can't block interoperability tools. Facebook still has the unilateral right to terminate its users' accounts. They could kick off Zuckerman. They could kick off his lawyers from the Knight Institute. They could permanently ban any user who uses Unfollow Everything 2.0.
Obviously, that kind of nuclear option could prove very unpopular for a company that is the very definition of "too big to care." But Unfollow Everything 2.0 and the lawsuit don't exist in a vacuum. The fight against Big Tech has a lot of tactical diversity: EU regulations, antitrust investigations, state laws, tinkerers and toolsmiths like Zuckerman, and impact litigation lawyers coming up with cool legal theories.
Together, they represent a multi-front war on the very idea that four billion people should have their digital lives controlled by an unaccountable billionaire man-child whose major technological achievement was making a website where he and his creepy friends could nonconsensually rate the fuckability of their fellow Harvard undergrads.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/05/02/kaiju-v-kaiju/#cda-230-c-2-b
Image: D-Kuru (modified): https://commons.wikimedia.org/wiki/File:MSI_Bravo_17_(0017FK-007)-USB-C_port_large_PNr%C2%B00761.jpg
Minette Lontsie (modified): https://commons.wikimedia.org/wiki/File:Facebook_Headquarters.jpg
CC BY-SA 4.0: https://creativecommons.org/licenses/by-sa/4.0/deed.en
tomorrowusa · 1 month ago
Being a content moderator on Facebook can give you severe PTSD.
Let's take time from our holiday festivities to commiserate with those who have to moderate social media. They witness some of the absolute worst of humanity.
More than 140 Facebook content moderators have been diagnosed with severe post-traumatic stress disorder caused by exposure to graphic social media content including murders, suicides, child sexual abuse and terrorism.

The moderators worked eight- to 10-hour days at a facility in Kenya for a company contracted by the social media firm and were found to have PTSD, generalised anxiety disorder (GAD) and major depressive disorder (MDD), by Dr Ian Kanyanya, the head of mental health services at Kenyatta National hospital in Nairobi.

The mass diagnoses have been made as part of a lawsuit being brought against Facebook’s parent company, Meta, and Samasource Kenya, an outsourcing company that carried out content moderation for Meta using workers from across Africa.

The images and videos including necrophilia, bestiality and self-harm caused some moderators to faint, vomit, scream and run away from their desks, the filings allege.
You can imagine what now gets circulated on Elon Musk's Twitter/X which has ditched most of its moderation.
According to the filings in the Nairobi case, Kanyanya concluded that the primary cause of the mental health conditions among the 144 people was their work as Facebook content moderators as they “encountered extremely graphic content on a daily basis, which included videos of gruesome murders, self-harm, suicides, attempted suicides, sexual violence, explicit sexual content, child physical and sexual abuse, horrific violent actions just to name a few”. Four of the moderators suffered trypophobia, an aversion to or fear of repetitive patterns of small holes or bumps that can cause intense anxiety. For some, the condition developed from seeing holes on decomposing bodies while working on Facebook content.
Being a social media moderator may sound easy, but you will never be able to unsee the horrors which the dregs of society wish to share with others.
To make matters worse, the moderators in Kenya were paid just one-eighth what moderators in the US are paid.
Social media platform owners have vast wealth similar to the GDPs of some countries. They are among the greediest leeches in the history of money.
[Thread screenshots] A thread on Zuckerberg's bullshit with Joe Rogan, America's douchebro.
justinspoliticalcorner · 26 days ago
Christopher Wiggins at The Advocate:
Meta, the parent company of Instagram, Facebook, and Threads, under the leadership of CEO Mark Zuckerberg, has overhauled its content moderation policies, sparking outrage among LGBTQ+ advocacy groups, employees, and users. The company now permits slurs and dehumanizing rhetoric targeting LGBTQ+ people, a shift critics say is a deliberate alignment with far-right agendas and a signal of its disregard for marginalized communities’ safety. Leaked training materials reviewed by Platformer and The Intercept reveal that moderators are now instructed to allow posts calling LGBTQ+ people “mentally ill” and denying the existence of transgender individuals. Posts like “A trans person isn’t a he or she, it’s an it” and “There’s no such thing as trans children” are deemed non-violating under the new policies. Use of a term considered a slur to refer to transgender people is also now permissible, The Intercept reports. The changes, which include removing independent fact-checking and loosening hate speech restrictions, closely resemble Elon Musk’s controversial overhaul of Twitter, now X. Zuckerberg framed the updates as a return to Meta’s “roots” in free expression, but advocacy groups argue the move sacrifices safety for engagement.
Meta has thrown away any and all of its remaining goodwill this week by pandering to anti-LGBTQ+ and anti-DEI jagoffs, such as permitting defamatory slurs towards LGBTQ+ people.
See Also:
LGBTQ Nation: Meta employees speak out against new anti-LGBTQ+ & anti-DEI policies
monetizeme · 28 days ago
“The announcement from Mark is him basically saying: ‘Hey I heard the message, we will not intervene in the United States,’” said Haugen.
Announcing the changes on Tuesday, Zuckerberg said he would “work with President Trump” on pushing back against governments seeking to “censor more”, pointing to Latin America, China and Europe, where the UK and EU have introduced online safety legislation.
Haugen also raised concern over the effect on Facebook’s safety standards in the global south. In 2018 the United Nations said Facebook had played a “determining role” in spreading hate speech against Rohingya Muslims, who were the victims of a genocide in Myanmar.
“What happens if another Myanmar starts spiralling up again?” Haugen said. “Is the Trump state department going to call Facebook? Does Facebook have to fear any consequences from doing a bad job?”
kingofmyborrowedheart · 1 month ago
Seeing all of these big tech companies bending the knee to Trump is really something.
one-and-a-half-yikes · 1 year ago
little weird joining the bakudeku subreddit and then it just being completely vacant of any activity going on and then I look and see IzuOcha sub with more activity (not meant to be taken as a negative tbc, glad they're having fun over there it just sucks lmao) like what the fuck happened???? Did everybody over there fucking die or something?????💀💀😭😭😭😭
macmanx · 1 year ago
[Embedded YouTube video]
In 2017, the Rohingya people in Myanmar faced a genocide brought on by Buddhist extremists and the military. But something else was fueling the violence, inciting people, and spreading hatred: Facebook and its algorithm were helping tear the country apart. In some ways this story is specific to Myanmar, a country that had been closed off to the world for a long time; it opened up, it was newly connected to the internet, and soon longstanding tensions between ethnic groups exploded. But in other ways, this is a story about humans everywhere, about how we've chosen to build our society around technology and social media platforms that are supposed to connect us. And they do. Yet in the process we've learned just how much they also create division, spread disinformation, and limit our ability to live in a shared reality. It's a story about how that process, that trade-off, can have deadly consequences.
shoutgraphics · 9 days ago
Instagram Has Gone 4:5
That’s not really the point here, is it?
Earlier this week, social media managers woke up to a decision that would fundamentally alter how they approach their work. Let’s not sugarcoat what it means to be a social media manager in 2025—these professionals are graphic designers, photographers, account managers, and digital marketers all at once. This week, Instagram decided the grid would go 4:5.
This change lands in a fractured social media landscape. It has been years since the men seated in the front row at January 20th’s inauguration prioritized genuine connection over profit. We log on not because we want to, but because we feel compelled to—and our moods deteriorate as a result. We’ve collectively voiced a desire to leave platforms like Instagram, but the alternatives still feel inadequate. Now, amid this disillusionment, Instagram has quietly discarded over a decade of design work by users with one sweeping change.
Most blogs addressing this update will stick to the practicalities. I’ll save you the time: make your images at 1080 x 1350 or 1536 x 1920. The process, however, is far from intuitive. After selecting your images, click "Next," find the dashed square icon at the bottom left of the image preview, and set your post to "Portrait." If you make a mistake, don’t worry—Instagram has also made deleting a post far less accessible. But let’s not kid ourselves—that’s not really the point here, is it?
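If you're batch-reworking an existing square library rather than re-exporting from source files, the arithmetic is straightforward: 4:5 at 1080 px wide is 1080 x 1350, so a 1:1 image needs 270 px of padding split between top and bottom (or a crop). Here is a minimal sketch using the Pillow library; the padding color and file names are my placeholders, not anything Instagram prescribes:

```python
from PIL import Image

TARGET = (1080, 1350)  # 4:5 portrait; (1536, 1920) works the same way

def pad_to_4x5(src_path: str, dst_path: str, fill=(255, 255, 255)) -> None:
    """Fit an image inside a 4:5 canvas, padding the leftover space."""
    img = Image.open(src_path).convert("RGB")
    # Scale so the whole image fits within the 4:5 canvas.
    scale = min(TARGET[0] / img.width, TARGET[1] / img.height)
    resized = img.resize((round(img.width * scale), round(img.height * scale)))
    canvas = Image.new("RGB", TARGET, fill)
    # Center it; a square source gets equal 135 px bars top and bottom.
    offset = ((TARGET[0] - resized.width) // 2, (TARGET[1] - resized.height) // 2)
    canvas.paste(resized, offset)
    canvas.save(dst_path, quality=95)

pad_to_4x5("old_square_post.jpg", "regrid_4x5.jpg")  # placeholder file names
```

Padding keeps the original composition intact; cropping to 4:5 instead is a one-line change but sacrifices the edges of every square image.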
Creatives have had to accept that our portfolios cannot exist solely on our websites. This is why Shout! Graphics has an official social media policy. Platforms like Instagram, Facebook, and Twitter/X aren’t just social spaces—they’re search engines. They dictate visibility, and if your work is seen, your next contract might depend on the immediate look of your grid. With one unilateral change, a carefully curated portfolio no longer reflects the creator’s intent. Sure, I can adapt future assets, but the time, effort, and resources I’ve already poured into optimizing for 1:1? Gone. The effectiveness of the grid I’ve built has been significantly diminished.
Instagram’s move from 1:1 to 4:5 is an abrasive reminder of how little control we have in these spaces. We don’t set the rules; we’re forced to play by them. They can change the rules at any moment. There is nothing we can do about it. The only place where we maintain true creative control is our own websites.
This isn’t just an inconvenience—it’s a message. Instagram’s decision underscores a lack of respect for its users’ labor and investment. As creatives, we must recognize this for what it is: a move that reinforces the need for independence. Social platforms can serve as tools, but they should never dictate how we define or present our work. The shift to 4:5 isn’t just about aspect ratios; it’s about power dynamics. This is yet another decision made on our behalf, but not for us.
🔗 Read more here: shoutgraphics.design/blog/industry/instagram-aspect-ratio/
deathbyscreens · 11 months ago
The leaders of those tech companies did all they could to keep the discussion within the “numb stance of the technological idiot.”  You can even see McLuhan’s point in Mark Zuckerberg’s famous quasi-apology to the parents of those dead kids: 
I'm sorry. Everything that you all gone through, it's terrible. No one should have to go through the things that your families have suffered and this is why we invest so much and are going to continue doing industry-leading efforts to make sure that no one has to go through the types of things that your families have had to suffer.
In other words: We’re already the best in the business at content moderation, so I can’t promise you that we’ll do better in the future, but we’ll continue doing what we’re doing to remove harmful content from the 5 hours that your children now spend each day on social media. 
Let me be clear: there is no way to make social media safe for children by just making the content less toxic. It’s the phone-based childhood that is harming them, regardless of what they watch. 
Jonathan Haidt, After Babel
justinspoliticalcorner · 30 days ago
Graeme Demianyk at HuffPost:
Mark Zuckerberg announced Meta is abandoning its fact-checking program as he criticized the system for becoming “too politically biased.” The tech billionaire unveiled the changes in a video on Tuesday. Zuckerberg said his companies — which include Facebook and Instagram — would instead implement a “community notes” model similar to the one used on X, which is owned by Elon Musk. The policy shift comes as tech companies attempt to curry favor with President-elect Donald Trump following the Republican’s election triumph in November. “After Trump first got elected in 2016, the legacy media wrote non-stop about how misinformation was a threat to democracy,” Meta’s chief executive said. “We tried, in good faith, to address those concerns without becoming the arbiters of truth, but the fact-checkers have just been too politically biased and have destroyed more trust than they’ve created, especially in the U.S.” Starting in the U.S., Meta will end its fact-checking program with independent third parties and pivot to “community notes,” a system that relies on users adding notes or corrections to posts that may contain false or misleading information.
Zuckerberg also indicated a new direction on speech, announcing Meta will also “remove restrictions on topics like immigration and gender that are out of touch with mainstream discourse.” “What started as a movement to be more inclusive has increasingly been used to shut down opinions and shut out people with different ideas,” Zuckerberg said. “And it’s gone too far. So I want to make sure that people can share their beliefs and experiences on our platforms.” He conceded that there would be more “bad stuff” on the platform as a result of the decisions. “The reality is that this is a trade-off,” he said.
Meta owner Mark Zuckerberg grossly caves in to the right-wing faux outrage campaign about conservatives being “silenced” by dumping fact-checkers for X-esque Community Notes and removing restrictions on anti-immigrant and anti-LGBTQ+/anti-trans speech. Adding right-wing UFC CEO Dana White to Meta’s board furthers the appeasement of Trump and radical right-wing Tech Bros.
See Also:
Daily Kos: Zuckerberg's Meta follows Musk's X into misinformation
The Guardian: Meta to get rid of factcheckers and recommend more political content
MMFA: Zuckerberg and Meta are done pretending to care about mitigating the harms their platforms cause
probablyasocialecologist · 1 year ago
Meta has engaged in a “systemic and global” censorship of pro-Palestinian content since the outbreak of the Israel-Gaza war on 7 October, according to a new report from Human Rights Watch (HRW). In a scathing 51-page report, the organization documented and reviewed more than a thousand reported instances of Meta removing content and suspending or permanently banning accounts on Facebook and Instagram. The company exhibited “six key patterns of undue censorship” of content in support of Palestine and Palestinians, including the taking down of posts, stories and comments; disabling accounts; restricting users’ ability to interact with others’ posts; and “shadow banning”, where the visibility and reach of a person’s material is significantly reduced, according to HRW. Examples it cites include content originating from more than 60 countries, mostly in English, and all in “peaceful support of Palestine, expressed in diverse ways”. Even HRW’s own posts seeking examples of online censorship were flagged as spam, the report said. “Censorship of content related to Palestine on Instagram and Facebook is systemic and global [and] Meta’s inconsistent enforcement of its own policies led to the erroneous removal of content about Palestine,” the group said in the report, citing “erroneous implementation, overreliance on automated tools to moderate content, and undue government influence over content removals” as the roots of the problem.
[...]
Users of Meta’s products have documented what they say is technological bias in favor of pro-Israel content and against pro-Palestinian posts. Instagram’s translation software rendered “Palestinian” followed by the Arabic phrase “Praise be to Allah” as “Palestinian terrorists” in English. WhatsApp’s AI, when asked to generate images of Palestinian boys and girls, created cartoon children with guns, whereas its images of Israeli children did not include firearms.
presidentalpaca · 2 years ago
meta, bytedance, and openai (facebook, tiktok, and chat gpt) have been paying workers in africa $1.50 an hour to moderate their ai. those workers are now unionizing, which has gotten a lot of press. however the really important part will come when those workers begin negotiations. keep an eye out in the future for any support they may need. this is going to be a big fucking battle against these massive, super protected entities just to not be paid starving wages