#social media content moderation
Text
How Agile Content Moderation Process Improves a Brand's Online Visibility
Agile content moderation enhances a brand's online presence by swiftly addressing and adapting to evolving content challenges. This dynamic approach ensures a safer and more positive digital environment, boosting visibility and trust. Read the blog: https://www.sitepronews.com/2022/12/20/how-agile-content-moderation-process-improves-a-brands-online-visibility/
#content moderation#content moderation solution#Outsource Content Moderation#Outsource content moderation services#social media content moderation
0 notes
Text
Being a content moderator on Facebook can give you severe PTSD.
Let's take time from our holiday festivities to commiserate with those who have to moderate social media. They witness some of the absolute worst of humanity.
More than 140 Facebook content moderators have been diagnosed with severe post-traumatic stress disorder caused by exposure to graphic social media content including murders, suicides, child sexual abuse and terrorism. The moderators worked eight- to 10-hour days at a facility in Kenya for a company contracted by the social media firm and were found to have PTSD, generalised anxiety disorder (GAD) and major depressive disorder (MDD) by Dr Ian Kanyanya, the head of mental health services at Kenyatta National Hospital in Nairobi. The mass diagnoses have been made as part of a lawsuit being brought against Facebook's parent company, Meta, and Samasource Kenya, an outsourcing company that carried out content moderation for Meta using workers from across Africa. The images and videos, including necrophilia, bestiality and self-harm, caused some moderators to faint, vomit, scream and run away from their desks, the filings allege.
You can imagine what now gets circulated on Elon Musk's Twitter/X which has ditched most of its moderation.
According to the filings in the Nairobi case, Kanyanya concluded that the primary cause of the mental health conditions among the 144 people was their work as Facebook content moderators, as they "encountered extremely graphic content on a daily basis, which included videos of gruesome murders, self-harm, suicides, attempted suicides, sexual violence, explicit sexual content, child physical and sexual abuse, horrific violent actions just to name a few". Four of the moderators suffered trypophobia, an aversion to or fear of repetitive patterns of small holes or bumps that can cause intense anxiety. For some, the condition developed from seeing holes on decomposing bodies while working on Facebook content.
Being a social media moderator may sound easy, but you will never be able to unsee the horrors which the dregs of society wish to share with others.
To make matters worse, the moderators in Kenya were paid just one-eighth what moderators in the US are paid.
Social media platform owners have vast wealth similar to the GDPs of some countries. They are among the greediest leeches in the history of money.
#social media#social media owners#greed#facebook#meta#twitter/x#content moderation#moderators#ptsd#gad#mdd#kenya#samasource kenya#low wages#foxglove#get off of facebook#boycott meta#get off of twitter
33 notes
Text
Christopher Wiggins at The Advocate:
Meta, the parent company of Instagram, Facebook, and Threads, under the leadership of CEO Mark Zuckerberg, has overhauled its content moderation policies, sparking outrage among LGBTQ+ advocacy groups, employees, and users. The company now permits slurs and dehumanizing rhetoric targeting LGBTQ+ people, a shift critics say is a deliberate alignment with far-right agendas and a signal of its disregard for marginalized communities' safety. Leaked training materials reviewed by Platformer and The Intercept reveal that moderators are now instructed to allow posts calling LGBTQ+ people "mentally ill" and denying the existence of transgender individuals. Posts like "A trans person isn't a he or she, it's an it" and "There's no such thing as trans children" are deemed non-violating under the new policies. Use of a term considered a slur to refer to transgender people is also now permissible, The Intercept reports. The changes, which include removing independent fact-checking and loosening hate speech restrictions, closely resemble Elon Musk's controversial overhaul of Twitter, now X. Zuckerberg framed the updates as a return to Meta's "roots" in free expression, but advocacy groups argue the move sacrifices safety for engagement.
Meta has thrown away any and all of its remaining goodwill this week by pandering to anti-LGBTQ+ and anti-DEI jagoffs, such as permitting defamatory slurs towards LGBTQ+ people.
See Also:
LGBTQ Nation: Meta employees speak out against new anti-LGBTQ+ & anti-DEI policies
#LGBTQ+#Anti LGBTQ+ Extremism#Content Moderation#Social Media#Meta#Facebook#Instagram#Threads#Mark Zuckerberg
11 notes
Text
Social Media is Nice in Theory, but...
Something I cannot help but think about is how awesome social media can be - theoretically - and how much it sucks for the most part.
I am a twitter refugee. I came to tumblr after Elmo bought twitter and made the platform a rightwing haven. But, I mean... There is in general an issue with pretty much all social media, right? Like, most people will hate on one platform specifically and stuff, while upholding another platform. But let's be honest... They all suck. Just in different ways.
And the main reasons for them sucking are all the same, right? For one, there is advertisement, and with that the need to make the platform advertiser-friendly. But then there is also just the impossibility of properly moderating a platform used by millions of people.
I mean, the advertisement stuff is already a big issue. Because... Sure, big platforms need big money, because they are hosting just so much stuff in videos, images and whatnot. Hence, duh, they do need to get the money somewhere. And right now the only way to really make enough money is advertisement. Because we live under capitalism and it sucks.
And with this comes the need to make everything advertiser-friendly. On one hand this can be good, because it creates incentives for the platform to not host stuff like... I don't know. Holocaust denial and shit. The kinda stuff that makes most advertisers pull out. But on the other hand...
Well, we all know the issue: Porn bans. And not only porn bans, but also policing of anything connected to nude bodies. Especially nude bodies that are perceived to be female. Because society still holds onto those ideas that female bodies need to be regulated and controlled.
We only recently had a big crackdown on NSFW content even on sites that are not primarily advertiser-driven - like Gumroad and Patreon. Because... Well, folks are very interested in outlawing any form of porn. Often because they claim to want to protect children. The truth is of course that they often do quite the opposite. Because driving everyone away from properly vetted websites means, on one hand, that kids are more likely to come across the real bad stuff. And on the other hand, well... The more dingy the websites are that folks consume their porn on, the more likely it is to find some stuff like CP and snuff on those sites. Which will get more attention like this.
But there is also the less capitalist issue of moderating the content. Which is... kinda hard on a lot of websites. Of course, to save money, a lot of the big social media platforms are not really trying. Because they do not want to pay for proper moderators. But that makes it more likely for really bad stuff to happen. Like doxxing and whatnot.
I mean, like with everything: I do think that social media could be a much better place if only we did not have capitalism. But I also think that a lot about the way social media is constructed (the anonymity and stuff, but also things like the dopamine rush when people like your stuff) will not just change if you stop capitalism.
#solarpunk#lunarpunk#social media#social networks#facebook#twitter#tiktok#anti capitalism#content moderation#fuck capitalism
15 notes
Text
Is Social Media Content Moderation SILENCING FREE SPEECH?
Or Protecting Civil Rights
By: Ki Lov3 Editor: Toni Gelardi © Feb 6, 2025
While some argue that content moderation restricts free speech, it actually serves as a safeguard, reinforcing civil rights and equality, just as hate crime laws protect individuals in physical spaces. In society, laws are in place to prevent hate crimes and protect vulnerable groups from discrimination because they recognize that certain speech and actions cause harm. If we protect people from hate-fueled attacks in real life, why should we allow them online, where harm can be even more severe? Social media has become a central part of public discourse, shaping conversations, movements, and opinions, but it has also made it possible for hate speech, harassment, and misinformation to spread.
Cyberbullying and hate speech are linked to increased depression, anxiety, and self-harm. A study from the Journal of Medical Internet Research found that victims of online harassment are more than twice as likely to engage in self-harm or suicidal behaviors compared to those who are not targeted. Unlike in-person bullying, online harassment is persistent, anonymous, and amplified, often making it inescapable.
Why Moderation is Not Censorship
Just as businesses can deny service to abusive customers, social media companies have the right, and the obligation, to moderate harmful content. This does not silence opinions; rather, it ensures that discussions remain inclusive. Without it, hate speech and harassment push marginalized voices out of public discourse. True free speech necessitates an environment where people can engage without fear of abuse. Free speech laws prevent the government from suppressing speech, not private platforms from enforcing policies to create a safer environment.
Effective moderation should:
Apply rules consistently to all users.
Focus on harm reduction, not suppressing debate.
Be developed with input from civil rights experts.
Include an appeals process to ensure fairness.
In conclusion, by preventing online spaces from becoming venues for harm, social media content moderation safeguards civil liberties. We shouldn't tolerate hateful attacks online any more than we would in person. By guaranteeing that all voices are heard, the creation of safe online environments enhances free speech rather than diminishing it.
#content moderators#social media#free speech#online harassment#cyberbullying#content moderator#meta#filters#seo#shadow banning
3 notes
Text
"The announcement from Mark is him basically saying: 'Hey I heard the message, we will not intervene in the United States,'" said Haugen.
Announcing the changes on Tuesday, Zuckerberg said he would "work with President Trump" on pushing back against governments seeking to "censor more", pointing to Latin America, China and Europe, where the UK and EU have introduced online safety legislation.
Haugen also raised concern over the effect on Facebook's safety standards in the global south. In 2018 the United Nations said Facebook had played a "determining role" in spreading hate speech against Rohingya Muslims, who were the victims of a genocide in Myanmar.
"What happens if another Myanmar starts spiralling up again?" Haugen said. "Is the Trump state department going to call Facebook? Does Facebook have to fear any consequences from doing a bad job?"
#meta#frances haugen#human rights#us politics#content moderation#social media#united nations#facebook#instagram#whatsapp#mark zuckerberg#myanmar#genocide#royhinga#the guardian
3 notes
Text
'Subway Surfing' in NYC: The Perilous Quest for Digital Fame
![Tumblr media](https://64.media.tumblr.com/9967ddd893faf092029c185ef1dfb6d4/ad513f31d528cbf2-e9/s540x810/2c31d9640dbab392a87c3996ea6a2c387f2df128.webp)
#Subway Surfing#NYC#Social Media Risks#Dangerous Stunts#Youth Culture#Video Sharing#TikTok Trends#Social Media Impact#Peer Tube#Digital Fame#Shock Sites#Skywalkers#Extreme Sports#Pro Life Spiderman#George King#Wu Yongning#Roof topping#Urban Exploration#Digital Age#Content Moderation#Internet Trends#Social Responsibility#Video Sharing Sites
2 notes
Quote
The end state of a platform that allows hate speech but gives you the option to "hide" it is that hate speech proliferates. Mobs form. Assholes coordinate their abuse and use sock-puppets to boost their presence, and eventually this spills over into real-world harm.
Some (Probably) Worthless Thoughts About Content Moderation
36 notes
Text
lmao well I hope I have some leads on some jobs since my friend is referring me but like I'm genuinely scared what it's going to do to me
#it's um. a qa analyst role (yay) for a social media platform (eh) within their content moderation dept (ugh)#maybe they'll pay for my therapy ?? at least it's mostly remote and it's better than nothing#also tell me why I got denied for my PTO and then my boss immediately went on a 5 day vacation to some fucking island beach. tell me#wurm.txt
3 notes
Text
By: Anna Davis
Published: Oct 23, 2024
Internet users should be able to choose what content they see instead of tech companies censoring on their behalf, a report said on Wednesday.
Author Michael Shellenberger said the power to filter content should be given to social media and internet users rather than tech firms or governments in a bid to ensure freedom of speech.
In his report Free Speech and Big Tech: The Case for User Control, Mr Shellenberger warned that if governments or large tech companies had power over the legal speech of citizens they would have unprecedented influence over thoughts, opinions and "accepted views" in society.
He warned that attempts to censor the internet to protect the public from disinformation could be abused and end up limiting free speech.
He wrote: "Regulation of speech on social media platforms such as Facebook, X (Twitter), Instagram, and YouTube has increasingly escalated in attempts to prevent 'hate speech', 'misinformation' and online 'harm'."
This represents a "fundamental shift in our approach to freedom of speech and censorship", he said, warning that legal content was being policed. He added: "The message is clear: potential 'harm' from words alone is beginning to take precedence over free speech."
Mr Shellenberger's paper is published today ahead of next week's Alliance for Responsible Citizenship conference in Greenwich, where he will speak along with MPs Kemi Badenoch and Michael Gove. Free speech is one of the themes the conference will cover.
Explaining how his system of "user control" would work, Mr Shellenberger wrote: "When you use Google, YouTube, Facebook, X/Twitter, or any other social media platform, the company would be required to ask you how you want to moderate your content. Would you like to see the posts of the people you follow in chronological or reverse chronological order? Or would you like to use a filter by a group or person you trust?
"On what basis would you like your search results to be ranked? Users would have infinite content moderation filters to choose from. This would require social media companies to allow them the freedom to choose."
--
By: Michael Shellenberger
"Speech is fundamental to our humanity because of its inextricable link to thought. One cannot think freely without being able to freely express those thoughts and ideas through speech. Today, this fundamental right is under attack."
In this paper, Michael Shellenberger exposes the extent of speech censorship online and proposes a "Bill of Rights" for online freedom of speech that would restore content moderation to the hands of users.
Summary of Research Paper
In this paper:
The War on Free Speech
Giving users control of moderation
A Bill of Rights for Online Freedom of Speech
The Right to Speak Freely Online
The advent of the internet gave us a double-edged sword: the greatest opportunity for freedom of speech and information ever known to humanity, but also the greatest danger of mass censorship ever seen.
Thirty years on, the world has largely experienced the former. However, in recent years the tide has been turning as governments and tech companies become increasingly fearful of what the internet has unleashed. In particular, regulation of speech on social media platforms such as Facebook, X (formerly Twitter), Instagram, and YouTube has increased in attempts to prevent "hate speech", "misinformation", and online "harm".
Pieces of pending legislation across the West represent a fundamental shift in our approach to freedom of speech and censorship. The proposed laws explicitly provide a basis for the policing of legal content, and in some cases the deplatforming or even prosecution of its producers. Beneath this shift is an overt decision to eradicate speech deemed to have the potential to "escalate" into something dangerous, essentially prosecuting in anticipation of a crime rather than its occurrence. The message is clear: potential "harm" from words alone is beginning to take precedence over free speech, neglecting the foundational importance of the latter to our humanity.
This shift has also profoundly altered the power of the state and Big Tech companies in society. If both are able to moderate which views are seen as "acceptable" and have the power to censor legal expressions of speech and opinion, their ability to shape the thought and political freedom of citizens will threaten the liberal, democratic norms we have come to take for granted.
Citizens should have the right to speak freely, within the bounds of the law, whether in person or online. Therefore, it is imperative that we find another way, and halt the advance of government and tech regulation of our speech.
Network Effects and User Control
There is a clear path forward which protects freedom of speech, allows users to moderate what they engage with, and limits the power of the state and tech companies to impose their own agendas. It is, in essence, very simple: if we believe in personal freedom with responsibility, then we must return content moderation to the hands of users.
Social media platforms such as Facebook, X, Instagram, and YouTube could offer users a wide range of filters regarding content they would like to see, or not see. Users can then evaluate for themselves which of these filters they would like to deploy, if any. These simple steps can allow individuals to curate the content they wish to engage with, without infringing on anotherâs right to free speech.
User moderation also provides tech companies with new metrics with which to target advertisements and should increase user satisfaction with their respective platforms. Governments can also return to the subsidiary role of fostering an environment in which people flourish and can freely exchange ideas.
These proposals turn on the principle of regulating social media platforms according to their "network effects", which are generated when a product or service delivers greater economic and social value the more people use it. Many network effects, including those realised by the internet and social media platforms, are public goods: products or services that are non-excludable, where everyone benefits equally and enjoys access. As social media platforms are indispensable for communication, the framework that regulates online discourse must take into account the way in which these private platforms deliver a public good in the form of network effects.
A Bill of Rights for Online Freedom of Speech
This paper provides a digital "Bill of Rights", outlining the key principles needed to safeguard freedom of speech online:
Users decide their own content moderation by choosing "no filter", or content moderation filters offered by platforms.
All content moderation must occur through filters whose classifications are transparent to users.
No secret content moderation.
Companies must keep a transparent log of content moderation requests and decisions.
Users own their data and can leave the platform with it.
No deplatforming for legal content.
Private right of action is provided for users who believe companies are violating these legal provisions.
If such a charter were embraced, the internet could once again fulfil its potential to become the democratiser of ideas, speech, and information of our generation, while giving individuals the freedom to choose the content they engage with, free from government or tech imposition.
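The user-control model described above can be sketched in a few lines of code. This is a hypothetical illustration, not a real platform API: the filter names and data shapes are invented for the example. The key idea is that the platform merely supplies transparent, predicate-style filters (including the "no filter" option the charter requires), and the user decides which ones actually run on their feed.

```python
# Illustrative sketch of user-selected content moderation filters.
# All names here are hypothetical, not any real platform's API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Post:
    author: str
    text: str
    tags: set[str] = field(default_factory=set)

# A filter is just a predicate over a post: True means "show it".
Filter = Callable[[Post], bool]

def no_filter(post: Post) -> bool:
    """The 'no filter' option the charter requires platforms to offer."""
    return True

def hide_tags(blocked: set[str]) -> Filter:
    """A transparent filter: hide posts carrying any blocked tag."""
    def f(post: Post) -> bool:
        return not (post.tags & blocked)
    return f

def moderate_feed(posts: list[Post], chosen: list[Filter]) -> list[Post]:
    # Only filters the user explicitly chose are applied; every chosen
    # filter must approve a post for it to appear in the feed.
    return [p for p in posts if all(f(p) for f in chosen)]

feed = [
    Post("a", "hello", {"news"}),
    Post("b", "buy now!!", {"spam"}),
]
visible = moderate_feed(feed, [hide_tags({"spam"})])
# visible contains only the "news" post
```

A real system would also need the transparency log and appeals path the charter lists, but the core inversion, moving the choice of filters from the company to the user, is as simple as this.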
--
youtube
When people ask me, you know, to make the case for free speech, I sort of smile because, of course, I'm a little disoriented by the experience of needing to defend free speech.
Literally, just a few years ago, I would have thought that anybody who had to defend free speech was sort of cringe, like that's silly. Like, who would need to defend free speech? But here we find ourselves having to defend all of the pillars of civilization: free speech, law and order, meritocracy, cheap and abundant energy, food and housing. These things all seem obviously to be the pillars of a free, liberal, democratic society, but they're clearly not accepted as that. And so you get the sense that there has been a takeover by a particularly totalitarian worldview which is suggesting that something comes before free speech.
Well, what is that thing? That thing they say is "reducing harm." And so what we're seeing is safety-ism emerging out of the culture and being demanded and enforced by powerful institutions in society.
I think ARC has so much promise to reaffirm the pillars of civilization. It's not complicated. It's freedom of speech. It's cheap and abundant energy, food and housing, law and order. It's meritocracy.
If we don't have those things, we do not have a functioning liberal, democratic civilization. And that means that it needs to do the work to make the case intellectually, communicate it well in a pro-human, pro-civilization, pro-Western way.
The world wants these things. We are dealing with crises, you know, chaos on our borders, both in Europe and the United States, because people around the world want to live in a free society. They want abundance and prosperity and freedom. They don't want to live in authoritarian or totalitarian regimes.
So we know what the secret ingredients are of success. We need to reaffirm them because the greatest threats are really coming from within. And that means that the reaction and the affirmation of civilizational values also needs to come from within.
#Michael Shellenberger#free speech#freedom of speech#bill of rights#content moderation#censorship#social media#hate speech#misinformation#online harm#religion is a mental illness
3 notes
Text
#there is a way. social media is not like live tv which has predetermined commercial schedules#so social media can simply implement moderating buffer time to screen all media contents.#social media simply has to employ an army of screeners to kill off live streaming of violence.#and the internet domain controller can also shut down a site that's streaming directly. they just have to implement and act#taiwantalk#Israel#Hamas#live streaming of execution
3 notes
Text
.
#sorry i do find the current fic being fed into ai thing is getting a bit overblown sorry#like i understand why people are uncomfortable#and i would prefer that we all get a bit more control over our online data and how it's stored and processed#and there probably should be some talk about consenting to having your writing used or whatever#but it's getting a bit moral panicky now ngl#like i really dont think this is the great issue of our time#also im 90% sure that chatGPT doesn't learn from conversations/submissions#at least not in a straightforward way#it's relying on its own old database not what you give it#anyway my real issue with ai is the issue of content moderation and the exploitation of moderators#which is an issue that goes far beyond just ai and is inherent to all social media#i would much rather scroll through endless posts about that issue than fearmongering abt random teenagers#who use ai to write boring bad endings for fics they like#as if chatGPT will produce anything worth reading anyway
2 notes
Text
Graeme Demianyk at HuffPost:
Mark Zuckerberg announced Meta is abandoning its fact-checking program as he criticized the system for becoming "too politically biased." The tech billionaire unveiled the changes in a video on Tuesday. Zuckerberg said his companies, which include Facebook and Instagram, would instead implement a "community notes" model similar to the one used on X, which is owned by Elon Musk. The policy shift comes as tech companies attempt to curry favor with President-elect Donald Trump following the Republican's election triumph in November. "After Trump first got elected in 2016, the legacy media wrote non-stop about how misinformation was a threat to democracy," Meta's chief executive said. "We tried, in good faith, to address those concerns without becoming the arbiters of truth, but the fact-checkers have just been too politically biased and have destroyed more trust than they've created, especially in the U.S." Starting in the U.S., Meta will end its fact-checking program with independent third parties and pivot to "community notes," a system that relies on users adding notes or corrections to posts that may contain false or misleading information.
Zuckerberg also indicated a new direction on speech, announcing Meta will also "remove restrictions on topics like immigration and gender that are out of touch with mainstream discourse." "What started as a movement to be more inclusive has increasingly been used to shut down opinions and shut out people with different ideas," Zuckerberg said. "And it's gone too far. So I want to make sure that people can share their beliefs and experiences on our platforms." He conceded that there would be more "bad stuff" on the platform as a result of the decisions. "The reality is that this is a trade-off," he said.
Meta owner Mark Zuckerberg grossly caves in to the right-wing faux outrage campaign about conservatives being "silenced" by dumping fact-checkers for X-esque Community Notes and removing restrictions on anti-immigrant and anti-LGBTQ+/anti-trans speech. Adding right-wing UFC CEO Dana White to Meta's board furthers the appeasement of Trump and radical right-wing Tech Bros.
See Also:
Daily Kos: Zuckerberg's Meta follows Musk's X into misinformation
The Guardian: Meta to get rid of factcheckers and recommend more political content
MMFA: Zuckerberg and Meta are done pretending to care about mitigating the harms their platforms cause
#Meta#Facebook#Threads#Instagram#WhatsApp#Mark Zuckerberg#Fact Checking#Community Notes#Dana White#Joel Kaplan#Social Media#Content Moderation
8 notes
Text
As a person who has done moderation on another social media site (video content) in the last year or so, this game is very cool! It definitely gets the vibe right.
The thing that makes moderation really difficult though is the constant changing of the rules. In my experience, guidelines would change close to weekly, and would be several pages for every category. Something like harassment would have things like slurs, bullying, private info, etc. There are also exceptions for every rule, and grey areas in things like aging individuals. I'm not going to get too into detail about "trends" or guidelines on current events, but guidelines would be updated daily on how to handle these things.
Many videos have more than one tag, and not all tags do the same things. Some will get the police called, and others would prevent content from being as easily promoted. And the amount of spam is obscene. In a work day you would sometimes see less than 10 examples of original content, and instead just thousands of 5 or 6 constantly reposted videos. Content that was reported was often completely fine, but you still have to sort through them.
A lot of moderators often truly don't care. When I was being trained, we were shown something on the platform that was clearly against sexual content guidelines. It was honestly jarring how clear it was. The trainer reported it, and the next day it was flagged as fine and not removed.
Moderation is a huge struggle, both that it is hard to moderate and some people genuinely don't care and will allow things to slip past them, for so many different reasons. I don't think staff is completely innocent and more could be done, but that isn't up to the people that are moderating.
The number of people on this site just solidly convinced that the Nazis and bad people are around because staff loves them, and not because moderation of millions of accounts is very very hard and expensive, is. Staggering.
"oh just use automation." You mean the one currently mismarking trans accounts as mature? You think they just need to flip a no Nazis switch on the moderation-o-matic?
"oh so how did they manage to get rid of that post about staff being Harry Potter fans" because there was a lot of targeted harassment and it was a single account, not the millions that make up Tumblr. And actually it's a very straightforward labor protection to make sure your staff don't get harassed. It's actually very bad to insist that staff should get harassed as long as there are other moderation issues.
16K notes
Text
![Tumblr media](https://64.media.tumblr.com/eaaac13ca7198a8873618234a7efe505/d9653e8daabcff7c-72/s540x810/dc7d356dc1f602b1fe79b0712ea1949aab61455e.jpg)
Do you know that social media platforms shape what we see, whose voices are amplified, and who or what gets buried?
Think about who is missing from your feed. How does algorithmic bias make you feel?
1 note
Text
not leaving any platforms or anything (as if i'm posting anything, lmao) but i am on bluesky now
#enough people have jumped ship that im like fine whatever i'll make another thing#seems like a moderately usable site let's go#i've given up on social media to a degree but whatever#big "poob has it for you" energy with the gauntlet of rising and falling online communities#please manage 10-15 profiles across interfaces that aren't compatible or consistent but theoretically fill the same ecological niche#each one has a maximum of 80% of the content you want to have conveniently in one place#bring back rss feeds i'm tired#i never even used them but i get it now
1 note