#social media content moderation
Explore tagged Tumblr posts
Text
[Embedded YouTube video: LegalEagle explainer]
A U.S. Supreme Court case for 2023 may change the internet. LegalEagle (above video) explains.
Gonzalez v. Google LLC is about whether YouTube can be held liable for the content its algorithm recommends to viewers. It stems from a terrorist attack by ISIS and the fact that YouTube was recommending ISIS videos to its users.
#u.s. supreme court#u.s. law#2023 u.s. supreme court cases#youtube algorithm#communications decency act#section 230#social media content moderation#isis#youtube recommendations#online radicalization#this dystopian nightmare of a country#duty not to support terrorists (ATA)#Youtube#Gonzalez v. Google LLC#legaleagle#antiterrorism act (ATA)#3rd party content moderation#FOSTA#SESTA
14 notes
Text
👉 How Agile Content Moderation Process Improves a Brand’s Online Visibility
🤷♀️ Agile content moderation enhances a brand’s online presence by swiftly addressing and adapting to evolving content challenges. This dynamic approach ensures a safer and more positive digital environment, boosting visibility and trust. 🔊 Read the blog: https://www.sitepronews.com/2022/12/20/how-agile-content-moderation-process-improves-a-brands-online-visibility/
#content moderation#content moderation solution#Outsource Content Moderation#Outsource content moderation services#social media content moderation
0 notes
Link
#Social Media Content Moderation#United States#social networking sites#objectionable content#Content moderation services#content moderators#social media platforms#Pre-moderation#post-moderation#reactive moderation#spam contents#Unwanted Contents#facebook#linkedin#twitter#Instagram#YouTube#Tumblr#TikTok
1 note
Text
Social Media is Nice in Theory, but...
Something I cannot help but think about is how awesome social media can be - theoretically - and how much it sucks for the most part.
I am a twitter refugee. I came to tumblr after Elmo bought twitter and made the platform a right-wing haven. But, I mean... There is in general an issue with pretty much all social media, right? Like, most people will hate on one platform specifically and stuff, while upholding another platform. But let's be honest... They all suck. Just in different ways.
And the main reasons for them sucking are all the same, right? For one, there is advertisement and with that the need to make the platform advertiser-friendly. But then there is also just the impossibility of properly moderating a platform used by millions of people.
I mean, the advertisement stuff is already a big issue. Because... Sure, big platforms need big money because they are hosting just so much stuff in videos, images and what not. Hence, duh, they do need to get the money somewhere. And right now the only way to really make enough money is advertisement. Because we live under capitalism and it sucks.
And with this comes the need to make everything advertiser-friendly. On one hand this can be good, because it creates incentives for the platform to not host stuff like... I don't know. Holocaust denial and shit. The kinda stuff that makes most advertisers pull out. But on the other hand...
Well, we all know the issue: Porn bans. And not only porn bans, but also policing of anything connected to nude bodies. Especially nude bodies that are perceived to be female. Because society still holds onto those ideas that female bodies need to be regulated and controlled.
We only recently had a big crackdown on NSFW content even on sites that are not primarily advertiser-driven - like Gumroad and Patreon. Because... Well, folks are very interested in outlawing any form of porn. Often because they claim to want to protect children. The truth is of course that they often do quite the opposite. Because driving everyone away from properly vetted websites means that, on one hand, kids are more likely to come across the real bad stuff. And on the other hand, well... The dingier the websites that folks consume their porn on, the more likely it is to find stuff like CP and snuff on those sites. Which then gets more attention that way.
But there is also the less capitalist issue of moderating the content. Which is... kinda hard on a lot of websites. Of course, to save money, a lot of the big social media platforms are not really trying. Because they do not want to pay for proper moderators. But it makes it more likely for really bad stuff to happen. Like doxxing and whatnot.
I mean, like with everything: I do think that social media could be a much better place if only we did not have capitalism. But I also think that a lot about the way social media is constructed (the anonymity and stuff, but also things like the dopamine rush when people like your stuff) will not just change if you stop capitalism.
#solarpunk#lunarpunk#social media#social networks#facebook#twitter#tiktok#anti capitalism#content moderation#fuck capitalism
15 notes
Text
Jennifer Rankin at The Guardian:
French judicial authorities on Sunday extended the detention of the Russian-born founder of Telegram, Pavel Durov, after his arrest at a Paris airport over alleged offences related to the messaging app. His arrest at the Le Bourget airport outside Paris on Saturday was the latest extraordinary twist in the career of one of the world’s most influential tech icons. The detention of Durov, 39, was extended beyond Sunday night by the investigating magistrate who is handling the case, according to a source close to the investigation. This initial period of detention for questioning can last up to a maximum of 96 hours. When this phase of detention ends, the judge can decide to free him or press charges and remand in further custody. French investigators had issued a warrant for Durov’s arrest as part of an inquiry into allegations of fraud, drug trafficking, organised crime, promotion of terrorism and cyberbullying.
Durov is accused of failing to take action to curb the criminal use of his platform and was stopped after arriving in Paris from Baku on his private jet on Saturday night. “Enough of Telegram’s impunity,” said one investigator who expressed surprise that Durov flew to Paris knowing he was a wanted man. In a statement on Sunday evening, Telegram said: “Telegram abides by EU laws, including the Digital Services Act – its moderation is within industry standards and constantly improving.
[...] Durov lives in Dubai, where Telegram is based, and holds citizenship of France and the United Arab Emirates (UAE). He recently said he had tried to settle in Berlin, London, Singapore and San Francisco before choosing Dubai, which he praised for its business environment and “neutrality”. In the UAE, Telegram faces little pressure to moderate its content, while western governments are trying to crack down on hate speech, disinformation, sharing of images of child abuse and other illegal content.
Telegram offers end-to-end encrypted messaging and allows users to create channels to disseminate information to followers. Especially popular in the former Soviet Union, the app is widely used by the Ukrainian president, Volodymyr Zelenskiy, and his circle, as well as politicians throughout Ukraine, to release information about the war. It is also one of the few places where Russians can get unfiltered information about the conflict, after the Kremlin tightened media controls in the wake of the full-scale invasion.

Its apparently unbreakable encryption has made Telegram a haven for extremists and conspiracy theorists. Investigative journalists at the central European news site VSquare said it had become the “‘go-to’ tool for Russian propagandists, both leftwing and rightwing radicals, American QAnon and conspiracy theorists,” concluding it was an “ecosystem for the radicalisation of opinion”.

The app was also used widely by far-right agitators plotting anti-immigration rallies in England and Northern Ireland in the wake of the stabbing of three children at a Southport dance class last month. The anti-racism campaign group Hope Not Hate concluded that Telegram had become the “app of choice” for racists and violent extremists and “a cesspit of antisemitic content” with minimal moderation or effort from the app to curb extremist content.
Telegram founder Pavel Durov was arrested in France over the weekend as part of an inquiry into allegations of fraud, drug trafficking, organised crime, promotion of terrorism, cyberbullying, and child sex abuse material (aka child pornography) on the social media app.
Telegram is popular in Russia and most of the former Soviet countries, and in the West it has become a hub for far-right conspiracy theorists.
The arrest of Durov has ZERO to do with “free speech”, despite right-wing spin claiming otherwise.
See Also:
The Guardian: What is Telegram and why has its founder Pavel Durov been arrested?
CNN: A Russian Elon Musk with 100 biological children: Meet Pavel Durov
NBC News: Telegram founder Durov's arrest is part of a larger investigation into alleged 'complicity' in child exploitation and drug trafficking
#Pavel Durov#Telegram#Social Media#VKontakte#Child Sex Abuse#Crime#World News#France#Content Moderation
6 notes
Quote
The end state of a platform that allows hate speech but gives you the option to "hide" it is that hate speech proliferates. Mobs form. Assholes coordinate their abuse and use sock-puppets to boost their presence, and eventually this spills over into real-world harm.
Some (Probably) Worthless Thoughts About Content Moderation
36 notes
Text
lmao well I hope I have some leads on some jobs since my friend is referring me but like I’m genuinely scared what it’s going to do to me
#it’s um. a qa analyst role (yay) for a social media platform (eh) within their content moderation dept (ugh)#maybe they’ll pay for my therapy ?? at least it’s mostly remote and it’s better than nothing#also tell me why I got denied for my PTO and then my boss immediately went on a 5 day vacation to some fucking island beach. tell me#wurm.txt
3 notes
Text
By: Anna Davis
Published: Oct 23, 2024
Internet users should be able to choose what content they see instead of tech companies censoring on their behalf, a report said on Wednesday.
Author Michael Shellenberger said the power to filter content should be given to social media and internet users rather than tech firms or governments in a bid to ensure freedom of speech.
In his report Free Speech and Big Tech: The Case for User Control, Mr Shellenberger warned that if governments or large tech companies had power over the legal speech of citizens they would have unprecedented influence over thoughts, opinions and “accepted views” in society.
He warned that attempts to censor the internet to protect the public from disinformation could be abused and end up limiting free speech.
He wrote: “Regulation of speech on social media platforms such as Facebook, X (Twitter), Instagram, and YouTube has increasingly escalated in attempts to prevent ‘hate speech’, ‘misinformation’ and online ‘harm’.”
This represents a “fundamental shift in our approach to freedom of speech and censorship” he said, warning that legal content was being policed. He added: “The message is clear: potential ‘harm’ from words alone is beginning to take precedence over free speech.”
Mr Shellenberger’s paper is published today ahead of next week’s Alliance for Responsible Citizenship conference in Greenwich, where he will speak along with MPs Kemi Badenoch and Michael Gove. Free speech is one of the themes the conference will cover.
Explaining how his system of “user control” would work, Mr Shellenberger wrote: “When you use Google, YouTube, Facebook, X/Twitter, or any other social media platform, the company would be required to ask you how you want to moderate your content. Would you like to see the posts of the people you follow in chronological or reverse chronological order? Or would you like to use a filter by a group or person you trust?
“On what basis would you like your search results to be ranked? Users would have infinite content moderation filters to choose from. This would require social media companies to allow them the freedom to choose.”
--
By: Michael Shellenberger
“Speech is fundamental to our humanity because of its inextricable link to thought. One cannot think freely without being able to freely express those thoughts and ideas through speech. Today, this fundamental right is under attack.”
In this paper, Michael Shellenberger exposes the extent of speech censorship online and proposes a ‘Bill of Rights' for online freedom of speech that would restore content moderation to the hands of users.
Summary of Research Paper
In this paper:
The War on Free Speech
Giving users control of moderation
A Bill of Rights for Online Freedom of Speech
The Right to Speak Freely Online
The advent of the internet gave us a double-edged sword: the greatest opportunity for freedom of speech and information ever known to humanity, but also the greatest danger of mass censorship ever seen.
Thirty years on, the world has largely experienced the former. However, in recent years, the tide has been turning as governments and tech companies become increasingly fearful of what the internet has unleashed. In particular, regulation of speech on social media platforms such as Facebook, X (formerly Twitter), Instagram, and YouTube has increased in attempts to prevent “hate speech”, “misinformation”, and online “harm”.
Pieces of pending legislation across the West represent a fundamental shift in our approach to freedom of speech and censorship. The proposed laws explicitly provide a basis for the policing of legal content, and in some cases, the deplatforming or even prosecution of its producers. Beneath this shift is an overt decision to eradicate speech deemed to have the potential to “escalate” into something dangerous—essentially prosecuting in anticipation of crime, rather than its occurrence. The message is clear: potential “harm” from words alone is beginning to take precedence over free speech—neglecting the foundational importance of the latter to our humanity.
This shift has also profoundly altered the power of the state and Big Tech companies in society. If both are able to moderate which views are seen as “acceptable” and have the power to censor legal expressions of speech and opinion, their ability to shape the thought and political freedom of citizens will threaten the liberal, democratic norms we have come to take for granted.
Citizens should have the right to speak freely—within the bounds of the law—whether in person or online. Therefore, it is imperative that we find another way, and halt the advance of government and tech regulation of our speech.
Network Effects and User Control
There is a clear path forward which protects freedom of speech, allows users to moderate what they engage with, and limits state and tech power to impose their own agendas. It is, in essence, very simple: if we believe in personal freedom with responsibility, then we must return content moderation to the hands of users.
Social media platforms such as Facebook, X, Instagram, and YouTube could offer users a wide range of filters regarding content they would like to see, or not see. Users can then evaluate for themselves which of these filters they would like to deploy, if any. These simple steps can allow individuals to curate the content they wish to engage with, without infringing on another’s right to free speech.
User moderation also provides tech companies with new metrics with which to target advertisements and should increase user satisfaction with their respective platforms. Governments can also return to the subsidiary role of fostering an environment in which people flourish and can freely exchange ideas.
These proposals turn on the principle of regulating social media platforms according to their “network effects”, which are generated when a product or service delivers greater economic and social value the more people use it. Many network effects, including those realised by the internet and social media platforms, are public goods—that is, a product or service that is non-excludable, where everyone benefits equally and enjoys access to it. As social media platforms are indispensable for communication, the framework that regulates online discourse must take into account the way in which these private platforms deliver a public good in the form of network effects.
A Bill of Rights for Online Freedom of Speech
This paper provides a digital “Bill of Rights”, outlining the key principles needed to safeguard freedom of speech online:
Users decide their own content moderation by choosing “no filter”, or content moderation filters offered by platforms.
All content moderation must occur through filters whose classifications are transparent to users.
No secret content moderation.
Companies must keep a transparent log of content moderation requests and decisions.
Users own their data and can leave the platform with it.
No deplatforming for legal content.
Private right of action is provided for users who believe companies are violating these legal provisions.
If such a charter were embraced, the internet could once again fulfil its potential to become the democratiser of ideas, speech, and information of our generation, while giving individuals the freedom to choose the content they engage with, free from government or tech imposition.
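To make the first principle above concrete, here is a minimal, hypothetical sketch, in Python, of what user-selected, transparent moderation filters could look like. None of this comes from Shellenberger's paper; the filter names and rules are invented for illustration. The idea is simply that the platform publishes the filter definitions and each user decides which ones, if any, apply to their own feed.

```python
from dataclasses import dataclass
from typing import Callable, List

# A filter is just a named, transparent predicate: True means "hide this post for me".
@dataclass
class Filter:
    name: str
    should_hide: Callable[[str], bool]

# Hypothetical filters a platform (or a trusted third party) might publish.
AVAILABLE_FILTERS = {
    "no_filter": Filter("no_filter", lambda post: False),
    "hide_spam": Filter("hide_spam", lambda post: "click here to win" in post.lower()),
    "hide_profanity": Filter("hide_profanity", lambda post: "damn" in post.lower()),
}

def render_feed(posts: List[str], chosen_filters: List[str]) -> List[str]:
    """Return only the posts that pass every filter this user opted into.

    The platform hosts everything legal; each user decides what gets hidden,
    and the filter definitions stay visible and inspectable.
    """
    active = [AVAILABLE_FILTERS[name] for name in chosen_filters]
    return [p for p in posts if not any(f.should_hide(p) for f in active)]

# One user opts into spam filtering; another opts out of filtering entirely.
posts = ["Great thread on network effects", "Click HERE to win $$$ now"]
print(render_feed(posts, ["hide_spam"]))   # spam hidden
print(render_feed(posts, ["no_filter"]))   # everything shown
```

The point of the design is that the hiding logic is opt-in and inspectable by the user, rather than applied silently by the platform.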
--
[Embedded YouTube video; transcript below]
When people ask me, you know, to make the case for free speech, I sort of smile because, of course, I'm a little disoriented by the experience of needing to defend free speech.
Literally, just a few years ago, I would have thought that anybody who had to defend free speech was sort of cringe, like that's silly. Like, who would need to defend free speech? But here we find ourselves having to defend all of the pillars of civilization: free speech, law and order, meritocracy, cheap and abundant energy, food and housing. These things all seem like obvious pillars of a free, liberal, democratic society, but they're clearly not accepted as that. And so you get the sense that there has been a takeover by a particularly totalitarian worldview which is suggesting that something comes before free speech.
Well, what is that thing? That thing they say is "reducing harm." And so what we're seeing is safety-ism emerging out of the culture and being demanded and enforced by powerful institutions in society.
I think ARC has so much promise to reaffirm the pillars of civilization. It's not complicated. It's freedom of speech. It's cheap and abundant energy, food and housing, law and order. It's meritocracy.
If we don't have those things, we do not have a functioning liberal, democratic civilization. And that means that it needs to do the work to make the case intellectually, communicate it well in a pro-human, pro-civilization, pro-Western way.
The world wants these things. We are dealing with crises, you know, chaos on our borders, both in Europe and the United States, because people around the world want to live in a free society. They want abundance and prosperity and freedom. They don't want to live in authoritarian or totalitarian regimes.
So we know what the secret ingredients are of success. We need to reaffirm them because the greatest threats are really coming from within. And that means that the reaction and the affirmation of civilizational values also needs to come from within.
#Michael Shellenberger#free speech#freedom of speech#bill of rights#content moderation#censorship#social media#hate speech#misinformation#online harm#religion is a mental illness
3 notes
Text
#there is a way. social media is not like live tv which has predetermined commercial schedules#so social media can simply implement moderating buffer time to screen all media contents.#social media simply has to employ an army of screeners to kill off live streaming of violence.#and the internet domain controller can also shut down a site that’s streaming directly. they just have to implement and act#taiwantalk#Israel#Hamas#live streaming of execution
3 notes
Text
.
#sorry i do find the current fic being fed into ai thing is getting a bit overblown sorry#like i understand why people are uncomfortable#and i would prefer that we all get a bit more control over our online data and how it is stored and processed#and there probably should be some talk about consenting to having your writing used or whatever#but it's getting a bit moral panicky now ngl#like i really don't think this is the great issue of our time#also im 90% sure that chatGPT doesn't learn from conversations/submissions#at least not in a straightforward way#it's relying on its own old database not what you give it#anyway my real issue with ai is the issue of content moderation and the exploitation of moderators#which is an issue that goes far beyond just ai and is inherent to all social media#i would much rather scroll through endless posts about that issue than fearmongering abt random teenagers#who use ai to write boring bad endings for fics they like#as if chatGPT will produce anything worth reading anyway
2 notes
Text
As a person who has done moderation on another social media site (video content) in the last year or so, this game is very cool! It definitely gets the vibe right.
The thing that makes moderation really difficult, though, is the constant changing of the rules. In my experience, guidelines would change close to weekly, and would be several pages for every category. Something like harassment would have things like slurs, bullying, private info, etc. There are also exceptions for every rule, and grey areas in things like estimating individuals' ages. I'm not going to get too into detail about "trends" or guidelines on current events, but guidelines would be updated daily on how to handle these things.
Many videos have more than one tag, and not all tags do the same things. Some will get the police called, and others will just prevent content from being as easily promoted. And the amount of spam is obscene. In a work day you would sometimes see fewer than 10 examples of original content, and instead just thousands of copies of the same 5 or 6 constantly reposted videos. Content that was reported was often completely fine, but you still had to sort through it.
A lot of moderators often truly don't care. When I was being trained, we were shown something on the platform that was clearly against sexual content guidelines. It was honestly jarring how clear it was. The trainer reported it, and the next day it was flagged as fine and not removed.
Moderation is a huge struggle, both because it is hard to moderate and because some people genuinely don't care and will allow things to slip past them, for so many different reasons. I don't think staff is completely innocent and more could be done, but that isn't up to the people who are moderating.
The number of people on this site just solidly convinced that the Nazis and bad people are around because staff loves them, and not because moderation of millions of accounts is very very hard and expensive, is. Staggering.
"oh just use automation." You mean the one currently mismarking trans accounts as mature? You think they just need to flip a no Nazis switch on the moderation-o-matic?
"oh, so how did they manage to get rid of that post about staff being Harry Potter fans?" Because there was a lot of targeted harassment and it was a single account, not the millions that make up Tumblr. And actually it's a very straightforward labor protection to make sure your staff don't get harassed. It's actually very bad to insist that staff should just accept harassment as long as there are other moderation issues.
16K notes
Text
Culture: Stephen King Departs X, Cites Platform’s 'Dark Turn' on Threads
Renowned author Stephen King announced his departure from X (formerly Twitter) after 11 years, reflecting on the platform’s drastic transformation. King shared on Threads, “I quit Twitter. Eleven years, man. It really changed. Grew dark,” capturing a sentiment felt by many as X faces criticism over its shifting environment. King’s exit follows a series of high-profile departures from X, as…
#'Dark Turn'#Cites Platform’s 'Dark Turn'#content moderation#culture#Culture: Stephen King Departs X#elon musk#hate speech#online safety#platform migration#social media#Stephen King#toxic environment#Twitter#X platform
0 notes
Text
Mark Sweney at The Guardian:
The Meta boss, Mark Zuckerberg, has said he regrets bowing to what he claims was pressure from the US government to censor posts about Covid on Facebook and Instagram during the pandemic. Zuckerberg said senior White House officials in Joe Biden’s administration “repeatedly pressured” Meta, the parent company of Facebook and Instagram, to “censor certain Covid-19 content” during the pandemic.
“In 2021, senior officials from the Biden administration, including the White House, repeatedly pressured our teams for months to censor certain Covid-19 content, including humour and satire, and expressed a lot of frustration with our teams when we didn’t agree,” he said in a letter to Jim Jordan, the head of the US House of Representatives judiciary committee. “I believe the government pressure was wrong.” During the pandemic, Facebook added misinformation alerts to users when they commented on or liked posts that were judged to contain false information about Covid.
The company also deleted posts criticising Covid vaccines, and suggestions the virus was developed in a Chinese laboratory. In the 2020 US presidential election campaign, Biden accused social media platforms such as Facebook of “killing people” by allowing disinformation about coronavirus vaccines to be posted on its platform. [...] Zuckerberg also said that Facebook “temporarily demoted” a story about the contents of a laptop owned by Hunter Biden, the president’s son, after a warning from the FBI that Russia was preparing a disinformation campaign against the Bidens. Zuckerberg wrote that it has since become clear that the story was not disinformation, and “in retrospect, we shouldn’t have demoted the story”.
Meta boss Mark Zuckerberg says that he regrets bowing to White House pressure to censor posts about COVID-19 on Instagram and Facebook during the pandemic, including posts that contained misinformation about the efficacy of COVID-19 vaccines and conspiracy theories about COVID. Zuckerberg also touched on Facebook's temporary demotion of the Hunter Biden laptop story after an FBI warning about possible Russian disinformation, a decision he now says was a mistake.
See Also:
Vox: Mark Zuckerberg’s letter about Facebook censorship is not what it seems
#Mark Zuckerberg#Meta#Facebook#Coronavirus#Content Moderation#Coronavirus Conspiracies#Biden Administration#Social Media#Hunter Biden Laptop#Misinformation#Disinformation
6 notes
Text
I kinda hate how donating to AO3 gets used as shorthand for supporting bad evil stupid things, especially since it's usually in comparison to like, assuming an AO3 supporter never donates to any other kind of charity or fundraiser.
Really just feels like willful misunderstanding of what they are and do, y'know? Like where does the idea come from that there's just NO moderation on that site.
#and for that matter can people stop trivializing the complexity of moderation at scale#the reason so much 'problematic' content is on AO3 is because they're actually dedicated to their principles and like#you cannot define the bad stuff well enough to catch it quickly and consistently#without also accidentally sweeping up huge swathes of valid creative expression#Same reason social media will never be free of nazis and terfs!#Social media could do *a lot better* don't get me wrong but you can't automate that shit cause guess what?#a lot of terms that could ping someone as a bigot would also ding a lot of the people they're bigoted against!
0 notes
Text
Weekly output: 1Password's Jeff Shiner, NextGen TV (x2), Verizon's myHome deals, Supreme Court on content moderation, whither Twitter, White House social media
Due to the non-magic of my filing two longer pieces earlier and then having two stories break hours apart in the same morning, Wednesday saw more than 3,200 words appear under my byline at two different sites. Patreon readers got one extra post this week, my recap of what I learned at VidCon from YouTubers, streamers and other content creators without boldface names about making a living and…
#1Password#Anaheim#ATSC 3.0#Biden social media#Collision#content moderation#incentive auction#Jeff Shiner#Murthy v. Missouri#Netflix deal#NextGen TV#SCOTUS#Supreme Court#Twitter influencers#Verizon myHome#VidCon#White House
0 notes
Text
BEGINNER'S GUIDE TO BLUESKY
Hiya! Curious about joining bluesky but intimidated by all the features? Already on bluesky but want to learn more? Then welcome to my quick guide on getting started and navigating bluesky!~
What is Bluesky?
It's a social media site that's owned by no single person or company. Its aim is to bring back the early days of twitter, before bots, elon musk or algorithms took over. Personally I find the site really cozy, wholesome, and engaging (my Bluesky account, for example).
What’s unique about Bluesky?
→ CUSTOMIZATION: your timeline is very easy to control. There’s tons of options, so be sure to go through each tab in your settings. some options include: turning off autoplay, changing the order in which threaded replies show, changing DM settings, content preferences and lots of visual app settings.
→ MODERATION LISTS: human made, mass blocklists. These are public lists of accounts that when you subscribe to you automatically block or mute everyone in that specific blocklist. A great way to avoid unwanted content, and interactions. ✦ Moderation lists I recommend will be below the cut
→ STARTER PACKS: recommendation lists on who to follow, made by users. You can even curate your own starter pack of recommendations! ✦ Starter pack recommendations will be below the cut
→ FEEDS: public timelines, basically. There are a lot of feeds you can join, or you can even create your own. I made a feed featuring just my pixel art so it doesn’t get cluttered with text posts or other photos in my media tab. ✦ I’ll post feeds I recommend below and link you to a tutorial on how to create your own feed
→ BLOCKING/MUTING: bluesky has a great blocking system. When you block someone they can no longer see, or interact with, you. They also have a feature to make your profile inaccessible unless logged in. You can also mute specific people, hide replies on your posts, and even detach your post from a quote post. You can also mute specific words, phrases, tags etc.
→ NSFW: bluesky allows NSFW content, including artwork, porn, lewds etc. They also have a great moderation page to avoid the content completely, censor the content, or show it if you’d wish. ✦ just go to settings > moderation > toggle on NSFW settings and it’ll let you heavily moderate.
→ LABELS: this is a really cool feature on the site, you can subscribe to certain pages that enable a lot of fun/useful labels that help you in different ways! (like pronoun tags, artist tags etc) ✦ Labels to browse will be posted below
→ COMMUNITIES: the vastly diverse communities really feel like the best parts of tumblr. since you can so heavily curate your experience, it can really feel like a calming oasis. Mine is mostly artists, and other creatives.
there’s also a large community of professional artists, art directors, authors, celebrities, and even the best shitposters from twitter. the app really is what you make of it but it’s thriving right now.
RECOMMENDATIONS & LINKS BELOW ⬎
→ MODERATION LISTS:
HATE SPEECH: NAZIS | MAGA | MAGAv2 | MAGAv3 | TRANSPHOBES & HOMOPHOBES | FAR RIGHT | FAR RIGHTv2 | FAR RIGHTv3 | ELON MUSK FANBOYS | ANTI-BLACK | ANTI-VAX
NFT/AI/CRYPTO: MASTERLIST | AI/NFT | AI/NFTv2 | AI FANBOYS | CRYPTO | NFTs
SPAM/SCAMMERS: SPAMBOTS | BOTS | CONTENT SCRAPERS | CONTENT FARMING
✦ to block or mute everyone in the blocklist at once, click subscribe in the top right corner of the list's page.
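If you'd rather do this programmatically (for example, to subscribe to several moderation lists in one go), here is a rough sketch using Bluesky's public XRPC API from Python. The handle, app password, and at:// list URI are placeholders to swap for your own; this shows the "mute" option of the Subscribe button, since it only needs a single authenticated call.

```python
import requests

PDS = "https://bsky.social"

# Log in with your handle and an app password (Settings > App Passwords).
session = requests.post(
    f"{PDS}/xrpc/com.atproto.server.createSession",
    json={"identifier": "you.bsky.social", "password": "xxxx-xxxx-xxxx-xxxx"},
)
session.raise_for_status()
token = session.json()["accessJwt"]

# Mute every account on a moderation list in one call.
# The at:// URI below is a placeholder; copy the real one from the list's page.
resp = requests.post(
    f"{PDS}/xrpc/app.bsky.graph.muteActorList",
    headers={"Authorization": f"Bearer {token}"},
    json={"list": "at://did:plc:example/app.bsky.graph.list/placeholder"},
)
resp.raise_for_status()
print("List muted.")
```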
→ STARTER PACKS:
ART: PIXEL ART | PIXEL ARTv2 | WOMEN OF PIXEL ART | BADASS DIGITAL ARTISTS | MAGIC THE GATHERING ARTIST | PAINTERS OF BLUESKY | INDIE COMIC CREATORS | LGBTQIA+ COMIC CREATORS | WEBCOMICS ULTIMATE COLLECTION
GENERAL: WOMEN OF BSKY | AUTHORS | LGBTQ NEWS
SHITPOSTERS: JUNIPER | JUNIPERv2 | MASTERLIST | SCIENCE SHITPOSTERS
✦ for more niche starter packs, use the search function. search your specific interest and ‘starter pack’ and you’ll find some!
→ FEEDS:
DISCOVER | WHATS TRENDING | MENTIONS | ART | TRENDING ART
THE GRAM: a timeline of exclusively image posts from those you follow. No text posts etc.
ONLYPOST: similar to The Gram, it shows a timeline of only those you follow. No reposts, just original posts.
📌: a way to bookmark posts. Just reply with the pin emoji.
✦ there’s tons of others feeds as well! just use the feed tab and you can browse feeds or search for specific ones.
✦ TUTORIAL ON HOW TO CREATE A CUSTOM FEED FOR YOUR ART/POSTS
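As a taste of what that tutorial involves: under the hood, a custom feed is just a small web service that answers the app.bsky.feed.getFeedSkeleton XRPC query with a list of post URIs, which the Bluesky app then hydrates into full posts. Below is a deliberately bare-bones, hypothetical sketch (Python/Flask, hard-coded placeholder URIs, no auth and no feed-record publishing step), not the linked tutorial's actual code.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Placeholder post URIs; a real feed generator would query its own index
# of posts (e.g. everything you tagged as pixel art).
FEED_POSTS = [
    "at://did:plc:example/app.bsky.feed.post/3kexample1",
    "at://did:plc:example/app.bsky.feed.post/3kexample2",
]

@app.get("/xrpc/app.bsky.feed.getFeedSkeleton")
def get_feed_skeleton():
    limit = int(request.args.get("limit", 50))
    # Bluesky's AppView fetches this "skeleton" of URIs and hydrates it
    # into full posts for the user's client.
    return jsonify({"feed": [{"post": uri} for uri in FEED_POSTS[:limit]]})

if __name__ == "__main__":
    app.run(port=8080)
```

Tools like SkyFeed, recommended further down, handle all of this for you without writing any code.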
→ LABELS:
SKYWATCH: most popular label. Lots of useful labels!
AI Labels: identifies AI users, can also enable hiding the posters.
Pronouns: self explanatory but useful. can add a badge with your pronouns!
✦ you can search for additional label bots on bluesky!
OTHER RECOMMENDATIONS:
✦ EXPERIENCE ENHANCING TOOLS RECS:
✦ CLEARSKY: TRACK BLOCKS AND BLOCKLISTS
✦ SKYFEED: CREATE CUSTOM FEEDS EASILY
✦ use the block function often. do not entertain trolls or hate speech.
✦ as well as starter packs, there's also lists! lists can be used in the same way to create curated lists of accounts. it's a good way to keep track of specific genres of posters you're interested in, and to find new ones!
✦ hashtags: use them! they're beneficial in boosting your post. you can even link hashtags in your bio, making you easier to find. another way of making yourself more visible is to post an 'interest' post! basically just type things you're interested in and it'll help people find you / vice versa!
✦ update your profile first thing, like bio, avi etc. make a small post so people know you're real. interact and engage! the communities there are so welcoming!
I think that covers abt everything i wanted to cover! Hope this was helpful and thanks for reading lol
#bluesky#bluesky starter pack#bluesky social#bsky.app#bsky#bsky social#bluesky tutorial#bluesky walkthrough#bluesky app#ooooooooook that took forever lol hope its useful!!!!!!!!#long post#text post
6K notes