# Social Media Automation Is Bad
I kinda hate how donating to AO3 gets used as shorthand for supporting bad evil stupid things, especially since it usually comes with the assumption that an AO3 supporter never donates to any other kind of charity or fundraiser.
Really just feels like willful misunderstanding of what they are and do, y'know? Like where does the idea come from that there's just NO moderation on that site.
#and for that matter can people stop trivializing the complexity of moderation at scale#the reason so much 'problematic' content is on AO3 is because they're actually dedicated to their principles and like#you cannot define the bad stuff well enough to catch it quickly and consistently#without also accidentally sweeping up huge swathes of valid creative expression#Same reason social media will never be free of nazis and terfs!#Social media could do *a lot better* don't get me wrong but you can't automate that shit cause guess what?#a lot of terms that could ping someone as a bigot would also ding a lot of the people they're bigoted against!
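The last two tags make a concrete claim about automated moderation: any term list broad enough to catch bigots also catches the people they target. A minimal sketch of that failure mode, with a hypothetical word list and sample posts (not from any real system):

```python
# Toy illustration (hypothetical term list and posts): a naive keyword
# filter flags any post containing a watched term, with no sense of
# who is speaking or why.
FLAGGED_TERMS = {"terf", "groomer"}

def naive_filter(post: str) -> bool:
    """Return True if the post contains any watched term."""
    words = post.lower().split()
    return any(term in words for term in FLAGGED_TERMS)

posts = [
    "blocked yet another terf in my mentions today",  # a target venting
    "proud terf and not sorry about it",              # actual bigotry
]

# Both posts trip the filter: the rule can't tell the person being
# targeted from the person doing the targeting.
print([naive_filter(p) for p in posts])  # -> [True, True]
```

Smarter systems add context models, but the underlying tradeoff the tags describe (precision vs. sweeping up valid expression) doesn't go away.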
0 notes
sorry if you've talked about it already, but what is it that makes KOSA's idea of online safety wrong? I don't know much about the bill, what does it intend to do?
What do you think is a good way to protect kids from things like online predators or just seeing things that they shouldn't be seeing? (By which I mean sex and graphic violence, things which you'd need to be 16+ to see in a movie theater so I think it makes sense to not want pre-teens to see it)
From stopkosa.com:
Why is KOSA a bad bill?

KOSA uses two methods to “protect” kids, and both of them are awful. First, KOSA would incentivize social media platforms to erase content that could be deemed “inappropriate” for minors. The problem is: there is no consensus on what is inappropriate for minors. All across the country we are seeing how lawmakers are attacking young people’s access to gender affirming healthcare, sex education, birth control, and abortion. Online communities and resources that queer and trans youth depend on as lifelines should not be subject to the whims of the most rightwing extremist powers and we shouldn’t give them another tool to harm marginalized communities.

Second, KOSA would ramp up the online surveillance of all internet users by expanding the use of age verification and parental monitoring tools. Not only are these tools needlessly invasive, they’re a massive safety risk for young people who could be trying to escape domestic violence and abuse.
I’ve heard there’s a new version of KOSA. What’s the deal?

The new version of KOSA makes some good changes: narrowing the ability of rightwing attorneys general to weaponize KOSA to target content they don’t like and limiting the problematic “duty of care.” However, because the bill is still not content neutral, KOSA still invites the harms that civil rights advocates have warned about. As LGBTQ and reproductive rights groups have said for months, the fundamental problem with KOSA is that its “duty of care” covers content-specific aspects of content recommendation systems, and the new changes fail to address that. In fact, personalized recommendation systems are explicitly listed under the definition of a design feature covered by the duty of care in the new version.

This means that a future Federal Trade Commission (FTC) could still use KOSA to pressure platforms into automated filtering of important but controversial topics like LGBTQ issues and abortion, by claiming that algorithmically recommending such content “causes” mental health outcomes that are covered by the duty of care, like anxiety and depression. Bans on inclusive books, abortion, and gender affirming healthcare have been passed on exactly that kind of rhetoric in many states recently. And we know that already-existing content filtering systems impact content from marginalized creators exponentially more, resulting in discrimination and censorship.

It’s also important to remember that algorithmic recommendation includes, for example, showing a user a post from a friend that they follow, since most platforms do not show all users all posts, but curate them in some way. As long as KOSA’s duty of care isn’t content neutral, platforms will be likely to react the same way that they did to the broad liability imposed by SESTA/FOSTA: by engaging in aggressive filtering and suppression of important, and in some cases lifesaving, content.
Why it's bad:
The way it's written (even after being changed, which the website also goes over), it is still possible for this law to be used to restrict things like queer content, discussion of reproductive rights and resources, and sexual education.
It will restrict youths' ability to use the Internet independently, essentially cutting off life support for the many vulnerable people who rely on the Internet to learn that they are queer, disabled, or being abused.
Better alternatives:
Stop relying on ageist ideas of purity and innocence. When we focus on protecting the "purity" of youth, we dehumanize them and it becomes more about soothing adult anxieties than actually improving the lives of children.
Make sure content (sexual, violent, etc.) is marked/tagged and made avoidable for anyone who doesn't want to engage with it.
Teach children why certain things may be upsetting and how best to avoid those things.
Teach children how to recognize grooming and abuse and empower them to stop it themselves.
Teach children how to recognize fear, discomfort, trauma, and how to cope with those experiences.
The Internet makes a great boogeyman. But the idea that it is uniquely corrupting the Pure Innocent Youth relies on the idea that all children are middle-class suburban White kids from otherwise happy homes. What about the children who see police brutality on their front lawns, against their family members? How are we protecting them from being traumatized? Or children who are seeing and experiencing physical and sexual violence in their own homes, by the parents who prevent them from realizing what's happening by restricting their Internet usage? How does strengthening parents' rights stop those kids from being groomed? Or the kids who grow up in evangelical Christian homes and are given graphic descriptions of the horrors of the Apocalypse and told if they ever question their parents, they'll be left behind?
Children live in the same world we do. There are children who are already intimately aware of violence and "adult" topics because of their lived experiences. Actually protecting children means being concerned about THEIR human rights, it means empowering them to save themselves, it means giving them the tools to understand their own feelings and traumas. KOSA is just another in a long line of attempts to "save the children!" by dehumanizing them and giving more power to the people most likely to abuse them. We need to stop trying to protect children's "innocence" and appreciate that children are already growing, changing people, learning to deal with discomfort and pain and the weight of the world the same as everyone else. What people often think keeps kids safe really just keeps them ignorant and quiet.
Another explanation as to why it's bad:
179 notes
If KOSA doesn't pass, something else will.
Ugh, this whole KOSA thing makes me roll my eyes. I'm sorry, I KNOW I'm just a sims blog, but I need to say something and it's going to be long. Skip if you want to-
I get it, I do, call your reps if you want to. I honestly could see it getting struck down (yet again), but honestly? It's probably gonna get through eventually in our current political and cultural climate. Do you know why? Not because of wanting to protect kids, obviously, but because they can't easily shape the narrative online. And children, being blank slates, are obviously not as scared of upturning power structures as their X/Boomer parents. Not that I super needed to tell you any of this, I mean it's obvious.
And I mean, don't be naive, this was cute when it was like 2015 or whatever and we all banded together to stop SOPA, but obviously this isn't going to stop. This isn't just a whiny lament about how we can do nothing. (Which, total sidebar, isn't it weird when these sorts of things come up and people show up in the comments all "Oh no, there's nothing we can do!! I guess we'll just die!!!"? Like, get a grip.)
ANYWAY, when was the last time you watched something illegally? Probably pretty recently. When was the last time you got ahold of something you probably weren't supposed to have? Do you know how easy it will probably be to bypass these measures? You really expect me to believe that they're capable of censoring the WHOLE internet?
Our government, which cannot do anything competently besides war crimes (and even then...), is really going to plug *every hole* in that regard? The trillion-dollar Hollywood machine has been dumping endless amounts of money and time into stopping piracy and they STILL haven't done it. The closest they got was just trying to give us a better option, and they even fucked *that* up. And let me tell you, stopping people from finding very specific files you can create bots to look for is WAY easier than trying to automate a system that searches for nebulous concepts like "dangerous content".
Like I said, do what you feel like you need to do, but it's obvious that those in charge are more and more willing to make increasingly Machiavellian decisions to try to control a public whose opinions are quickly spiraling out of their control. And I REALLY doubt that calling your rep all "UwU swir, can you pwease not impede my abiwity to rwead supwernatural porwn onwine??" is going to sway them.
And the attempt to stop the thing they're really worried about, a changing worldview among youth driven by online discourse, is bound to fail, because it's going to be hard to put *that* particular genie back in the bottle. If they wanted to curb the amount of sway the internet has over young people's opinions, they needed to kill social media in its cradle in the mid-2000s. It's WAY too late for that.
You can be mad and disagree all you want, but how about a plan B? Just in case this, or any future law, gets pushed through by the stone-age baby boomers. Try things like not using only the 5 largest social media sites for all of your needs. Learn how to use Tor. Protect yourself online. Use platforms that can't be easily tracked. Back up shit you like so you have copies.
Alls I'm saying is MAYBE, instead of playing the dumb game of "maybe if we ask really nicely they'll do the right thing," we make a plan to use decentralized platforms that are far too large and varied to effectively police in any meaningful way. In hindsight, maybe we shouldn't have come to use large platforms to criticize power structures when the heads of those power structures also use those platforms. It just seems like bad planning.
Stop expecting that you can fight EVERY bill and start planning to do some illegal shit online.
21 notes
I forget sometimes, because I have my own friends and tumblr bubble where people are more reasonable, but yesterday I made the mistake of going on twitter, and its algorithm noticed I had an interest in AI, and man, it's bleak out there.
I'm actually fairly despondent about the way that AI has firmly and probably irreversibly been cast as an ontological evil being inflicted on society. I hate that the industry I've studied to enter, that I truly believe can do a lot of good, is going to get me socially blacklisted in a lot of contexts.
I have got into fights with my own family over this. The other night I was out at an LGBT social meet and a couple of people there were illustrators or in media and the whole time I was like, "please don't ask me what I studied".
It's so grim. I could never have predicted this in 2018. I just want to build useful intelligent systems, reduce human labour burden, and explore the potential of cognitive computing for creative expression (no, not "AI art", at the end of the day I don't really give too much of a shit about that as a specific application. But building an agent that can express itself is the purest form of artistic creation there is).
Fuck society for not even giving us a chance to get this right before starting in on shutting it all down. Most people didn't even try to engage with deploying AI systems and cognitive automation in maximally ethical ways, weren't even paying attention while those discussions were first happening, and now the people researching core capabilities are the bad guys while y'all happily work to keep the rat race running in place where it was circa 1999.
(Fucking grey as hell future people online seem to want. Nothing revolutionary about doing nothing for decades and saying "wouldn't it be nice if things were different, but of course we can't actually risk trying to change them". Capitalist realism is a black hole for hope tbh.)
68 notes
I've been checking out the OC tags for a little while, and I can say the state of OC sharing on tumblr is absolutely miserable.
We've all discussed how bad the ratio of reblogs has become, how their numbers have been dwindling these last couple of years, but I think extra emphasis has to be placed on original creation. Though everything is hit by the lack of sharing, fanarts at least have a tag people will look for, improving their visibility. Fandom OCs are sadly shared less than fanarts in general, but they still enjoy that same visibility.
But what of the purely original? People who have OCs belonging solely to their own world, with a tag nobody will look for? I've been seeing awesome OC art that has been sitting for days and weeks with 0 or 1 notes, sometimes 5 or 6 with luck (though most of the time only likes)... And though there are exceptions, overall, it's a frankly saddening sight.
The way for someone to get attention on their OCs is to already be a well established blog or to produce fanart on the side to build a following. Blogs solely creating original content sit at the bottom of the note pool with no escape in sight.
As much as we praise tumblr for its tagging system and the fact it doesn't kill old posts the way other social media does, it still fails at uplifting creation that isn't fandom-based.
I don't have a solution to this. It is merely a sad observation. It's only natural that people would look for what they already know and love. But in a world where all of our interactions are linked to consumerism, in a world where automation replaces human imagination, I'd love to see a community of people willing to actively search for, and uplift, the creatives that are trying to peek out of the water.
One reblog may lead to another may lead to a follow, may lead to a creator feeling like their work matters.
So I'm doing it, one reblog at a time.
#ocs#oc#original character#original creation#original story#reblogs#tumblr#mine#i've had this build up in my heart the more i've been exploring the oc tags#clicking on blogs and seeing all their lovely ocs#and seeing all of em with what. 1 note? a Like most often#idk.#i love fandoms! i love so many of them#but. i wish i could see more cool imaginative stuff from us normal users too#and i wish there was love and support for it#so yeah. i'm gonna keep exploring the tag randomly and reblog a bunch of cool ocs i find#i'll be happy if you even just consider doing the same
344 notes
It’s now well understood that generative AI will increase the spread of disinformation on the internet. From deepfakes to fake news articles to bots, AI will generate not only more disinformation, but more convincing disinformation. But what people are only starting to understand is how disinformation will become more targeted and better able to engage with people and sway their opinions.
When Russia tried to influence the 2016 US presidential election via the now disbanded Internet Research Agency, the operation was run by humans who often had little cultural fluency or even fluency in the English language and so were not always able to relate to the groups they were targeting. With generative AI tools, those waging disinformation campaigns will be able to finely tune their approach by profiling individuals and groups. These operatives can produce content that seems legitimate and relatable to the people on the other end and even target individuals with personalized disinformation based on data they’ve collected. Generative AI will also make it much easier to produce disinformation and will thus increase the amount of disinformation that’s freely flowing on the internet, experts say.
“Generative AI lowers the financial barrier for creating content that’s tailored to certain audiences,” says Kate Starbird, an associate professor in the Department of Human Centered Design & Engineering at the University of Washington. “You can tailor it to audiences and make sure the narrative hits on the values and beliefs of those audiences, as well as the strategic part of the narrative.”
Rather than producing just a handful of articles a day, Starbird adds, “You can actually write one article and tailor it to 12 different audiences. It takes five minutes for each one of them.”
Considering how much content people post to social media and other platforms, it’s very easy to collect data to build a disinformation campaign. Once operatives are able to profile different groups of people throughout a country, they can teach the generative AI system they’re using to create content that manipulates those targets in highly sophisticated ways.
“You’re going to see that capacity to fine-tune. You’re going to see that precision increase. You’re going to see the relevancy increase,” says Renée DiResta, the technical research manager at the Stanford Internet Observatory.
Hany Farid, a professor of computer science at the University of California, Berkeley, says this kind of customized disinformation is going to be “everywhere.” Though bad actors will probably target people by groups when waging a large-scale disinformation campaign, they could also use generative AI to target individuals.
“You could say something like, ‘Here’s a bunch of tweets from this user. Please write me something that will be engaging to them.’ That’ll get automated. I think that’s probably coming,” Farid says.
Purveyors of disinformation will try all sorts of tactics until they find what works best, Farid says, and much of what’s happening with these disinformation campaigns likely won’t be fully understood until after they’ve been in operation for some time. Plus, they only need to be somewhat effective to achieve their aims.
“If I want to launch a disinformation campaign, I can fail 99 percent of the time. You fail all the time, but it doesn’t matter,” Farid says. “Every once in a while, the QAnon gets through. Most of your campaigns can fail, but the ones that don’t can wreak havoc.”
Farid says we saw during the 2016 election cycle how the recommendation algorithms on platforms like Facebook radicalized people and helped spread disinformation and conspiracy theories. In the lead-up to the 2024 US election, Facebook’s algorithm—itself a form of AI—will likely be recommending some AI-generated posts instead of only pushing content created entirely by human actors. We’ve reached the point where AI will be used to create disinformation that another AI then recommends to you.
“We’ve been pretty well tricked by very low-quality content. We are entering a period where we’re going to get higher-quality disinformation and propaganda,” Starbird says. “It’s going to be much easier to produce content that’s tailored for specific audiences than it ever was before. I think we’re just going to have to be aware that that’s here now.”
What can be done about this problem? Unfortunately, only so much. Diresta says people need to be made aware of these potential threats and be more careful about what content they engage with. She says you’ll want to check whether your source is a website or social media profile that was created very recently, for example. Farid says AI companies also need to be pressured to implement safeguards so there’s less disinformation being created overall.
The Biden administration recently struck a deal with some of the largest AI companies—ChatGPT maker OpenAI, Google, Amazon, Microsoft, and Meta—that encourages them to create specific guardrails for their AI tools, including external testing of AI tools and watermarking of content created by AI. These AI companies have also created a group focused on developing safety standards for AI tools, and Congress is debating how to regulate AI.
Despite such efforts, AI is accelerating faster than it’s being reined in, and Silicon Valley often fails to keep promises to only release safe, tested products. And even if some companies behave responsibly, that doesn’t mean all of the players in this space will act accordingly.
“This is the classic story of the last 20 years: Unleash technology, invade everybody’s privacy, wreak havoc, become trillion-dollar-valuation companies, and then say, ‘Well, yeah, some bad stuff happened,’” Farid says. “We’re sort of repeating the same mistakes, but now it’s supercharged because we’re releasing this stuff on the back of mobile devices, social media, and a mess that already exists.”
109 notes
Sonadowtober Prompt 17: Alter Ego
For one of my AUs, Atlas! Decided to bring the old manga back to life with a post-Frontiers self-discovery arc for Sonic
Shadow's been looking for a certain someone... and finds that someone has taken on a new face
Sorry it's late lol (Context in AO3 endnotes)
Read Below🔽
Shadow doesn’t know what compelled him to walk into that bookstore.
It wasn’t particularly eye-catching, nor was he in need of a book of any sort. Yet something told him to stop beneath the sign proudly labeling the quaint place as “THE ATLAS.”
A lady with kind eyes and a soft smile opened the door hugging a thick clothbound book, calling a thank-you back over her shoulder to someone named Nicky. She stopped upon seeing him. “Why, hello! Visiting?”
“...Yes.” Shadow supposed he wasn’t the most discreet person on this side of the planet, with easily memorable red stripes declaring to everyone unfamiliar that they hadn’t seen him before. It wasn’t as big a deal now as when he was on missions, but it was still annoying that he couldn’t blend in with folks, who often asked him more questions than he had the social battery for.
“What brings you to our little town? We don’t get much tourism around here,” she beamed. Shadow had an automated response prepared for questions like hers.
“Personal reasons.”
“Oh, I hope it’s nothing bad,” the lady replies. “The Atlas is a good place to go if you need advice, though! Nicky’s got something for everyone. I swear that boy knows everything there is to know.”
Quite a boast. Shadow pastes a smile on his face and tells her that he’ll look into it, completely certain that this Nicky wouldn’t be able to help. With as remote of a town as this, he doubts they’d know anything about the whereabouts of the hero he’s looking for.
He doesn’t know how wrong he is. Yet simultaneously, he finds out just how right his assumption of obliviousness was.
Perhaps he goes into the shop knowing that his objective isn’t urgent, that the lost didn’t need nor want to be found. The world wasn’t ending, and he knew that Sonic would show his face if anything of the sort happened. Shadow supposed he just wanted to check in on him, seeing as the last time they met, he wasn’t doing the best.
But if Shadow’s advice had encouraged him to find the comfort to disappear off the face of Earth for such a while, he was probably okay.
Whatever the case, Shadow walks into The Atlas, and finds everything he thought he wouldn’t.
Among the shelves stands a hedgehog, stretching to reach a book just out of his grasp. When he turns around to greet his new customer, emerald eyes widen behind black-framed glasses, if only for a second.
“H-hey! Welcome to The Atlas!” Sonic squeaks, pitched in a bad attempt at disguising his voice. He tugs at his red hoodie’s strings nervously as he gets stared down.
“Sonic.” Shadow suddenly finds that he has no idea what to say. He didn’t really expect to come across the hero so soon, despite having looked for him for months now. Turns out, he didn’t have to say anything, at least not yet.
Sonic has grabbed his arm and pulled him into a different room before he can even register the movement. He’d gotten so used to being faster than the rest of the world that he forgot there was someone faster. “What are you doing here?”
“I… I should be asking you that,” Shadow states, taken aback by the sudden question. “Can I not be here?”
“Heh. I suppose you’re right.” Sonic runs a hand through his bangs, something he apparently took the time to grow out. “Uh…” He glances at Shadow, then around what seems to be a storage room, struggling to find words despite the numerous books.
A faint ringing of bells sounds from beyond the closed door and the blue hedgehog perks up. Giving a customer-service worthy smile, he gestures outside and goes to greet whoever came in.
Shadow watches the interaction unfold in secret.
The customer seems young, bright. By their demeanor, they’ve definitely been here before. “Hi Nicky! Has that collection arrived yet?”
“Ollie!” Sonic’s so at ease as he hops behind the checkout counter and hoists a box up to them. “Here! I hope you enjoy it as much as I do.” He pushes his glasses up the bridge of his nose thoughtfully. “And if you require any more books, I have plenty of recommendations like this one.”
“Alright, bookworm,” they laugh, trading a sack of rings for the neatly bound box. “Thanks! I’ll be back!”
“See you later!” As soon as they’re out of sight, Sonic zips back into the storage room, smiling shyly once he realizes Shadow’s been watching. “Sorry.”
“Don’t be,” Shadow says immediately. Seeing him interact with Ollie… he can see that the hero’s found a place here, one worth staying for. He’d feel bad if he was anything other than happy for Sonic.
Sonic seems to disagree. “So… on a scale of one to mass hysteria, how bad is it? At home, I mean.” At Shadow’s confused look, he cringes, prying his glasses off his face and anxiously cleaning them with his shirt. “Gosh, I should go back, shouldn’t I?”
“What? No. Why? They’re fine,” he blurts, unable to understand what planted that idea in Sonic’s head in the first place. It was as if Shadow’s mere presence caused the hero to regress into the near-paranoid behavior he’d only recently worked himself out of.
“Are you sure?” He sounds so… scared. Shadow takes his hands and plops those glasses back onto Sonic’s face.
“I’m sure. I didn’t come with the intention of forcing you to go anywhere. I looked for you to make sure you were okay. There’s no need to return if you’re enjoying yourself here.”
That coaxes a smile out of him. One of those genuine smiles that caused the corners of his eyes to crinkle with joy, not the practiced kind the hero wore for the masses. “I am. It’s nice to be just a guy again.”
Shadow nods in sympathy. He can’t say he knew what it was like, but if Sonic found it so nice… “Would you like me to call you Nicky?” The blue hedgehog shrugs. Indifference. Well, if he was to be a normal guy, a hero’s name wouldn’t do. “Nicky, then. It fits you. You return when you’re ready, Nicky. The world can handle itself.”
Emerald eyes simmer with unsaid thanks, and Shadow finds himself pulled into a hug, tight but comfortable. He can’t help but smile as he reciprocates the gesture, gently running a hand through blue quills.
Just two hedgehogs, for the moment.
#sonadowtober#sonadowtober 2024#sonic#sonic the hedgehog#shadow the hedgehog#sonadow#nicky parlouzer#sonic manga#bookstore#sonic au#Atlas AU#oneshot#writing#writers on tumblr#writeblr#fanfic#fanfiction#ao3#cross posted on ao3#CatieCatWorks
15 notes
𝐼𝓉 𝓌𝒶𝓈 𝒻𝒶𝓀𝑒 𝓊𝓃𝓉𝒾𝓁 𝒾𝓉 𝓌𝒶𝓈𝓃'𝓉.
note: this is set in modern times. I'm in a Joseph mood, so here I am. I'm gonna have a part 2 for this either tomorrow or the day after.
Fandom(s): SWWSDJ
Character(s): Joseph
TW: none.
"Boyfriend... for hire?"
The short redhead in front of you nodded. She leaned in close, resting her chin on her hand as she smirked. "Yep. That right there will solve all your problems."
You looked up from the paper, clearly unamused by your friend's antics. "What are you talking about, Vex?"
Her eyes narrowed, but her grin never wavered. "Well, you were just bitching about your upcoming family dinner and how your parents keep trying to set you up... You know, some will even sleep with you after time's up."
You gasped before glaring at her. "Vex!"
She leaned back, putting her hands up in defense. "I'm just saying~ I mean, what could it hurt?"
You looked back down at the paper, still unsure. "It feels... wrong."
Vex shrugged before crossing her arms over her torso. "Give it some thought. You've got a week before the dinner. Just, uh... let me know how it goes?"
.
You sighed, looking down at the same paper from a few days ago. You had been debating whether you should call or not. You understood your parents were worried, but you weren't ready to hear it again. With shaky hands, you reached for your cellphone and slowly typed in the number. With a deep breath, you pressed call and quickly brought the phone to your ear.
It rang a few times before a female automated voice came through. "Thank you for calling Callin' for Love~. Please wait for a representative to help you." Soothing jazz began to play through the speaker, only making you more anxious. You just wanted to get it over with; waiting only made you overthink.
God, what if this was a mistake? What if they turned out to be a murderer? What if they - "Thank you for calling Callin' for Love, my name is Jasper. How can I help you?" You paused for a minute. Someone was there. "Hello?"
"H-hello?"
"Hello! How can I help you...?"
"Y/n, and um, I'm looking to hire a boyfriend f-for a family dinner?" God, you hated this so much! It felt so embarrassing!
"Sounds great, y/n!" You could hear the typing of her keyboard as she spoke. "Now, in order to find the man perfect for the job, do you mind if I ask you a couple of questions?"
You nodded, forgetting she couldn't see you, before responding, "Yes."
"Great!"
.
The questions took a good couple of minutes, with her cracking a few jokes, making you loosen up a bit. "Well, good news, darling. We have the perfect match for you! Give me a minute to transfer you to him."
"What's his name?"
"His name is Joseph, and he is an actor."
You gasped. "An actor? Why is he doing this job?"
Jasper giggled. "Why not ask him yourself?" Before you could respond, the jazz from before came back on, making you slightly nervous again.
.
Joseph's feet rested on his coffee table as he scrolled through social media. He thought it was funny how his children's show captured more than children's attention. Scrolling through the latest fanart became his new favorite hobby.
An alarm siren rang through his phone's speakers as the contact for his side job showed on the screen. Sighing, he picked up the call, his head falling back on the cushion. "Joseph speaking."
"Hey Joe! It's Jasper. I got a job for you if you're interested."
He hummed, mulling it over. Having the extra money wouldn't be so bad, and it would be nice to go out. "Okay, Jasper. Fill me in."
.
The jazz music continued to play as your anxiety grew. What if she was having more trouble than she let on? Oh God, this was a bad idea! Maybe you sho-
"Hello? y/n?"
You paused again, not ready for whoever was on the other line. It certainly wasn't Jasper anymore; the voice was deep and male. "H-hello?" Your voice was quiet and meek in return, making you cringe slightly.
You heard an airy chuckle before he continued. "It's nice to meet you, y/n. I'm Joseph. Now, I'm assuming you're calling this line for specific reasons?"
"Y-yes. I have a family dinner coming up... I really don't want to hear it from them..." You rubbed the back of your neck, glad you weren't making eye contact with the guy.
"I hear ya, doll. My coworkers are the same way. So what day is this family get-together?"
"Um, this Saturday?"
You heard some rustling followed by papers being flipped before he settled down. "That's perfect, actually. I'm off that day, so we can arrive together to make it more believable."
Oh God, this is really happening! You were gonna introduce a total stranger to your family just to get them off your back. "C-can we meet the day before? I'll pay for it if I need to!"
He hummed in thought. "Mmm, how does dinner sound, then? We have a day of filming that Friday, but I can meet you for dinner?"
Once again, you nodded before giving a verbal response. "That sounds fine. Um, do you know that mom-and-pop cafe on First?"
"Cara's Cafe?"
"Yes! how does that sound."
he chuckled again, putting down whatever paper he was holding. "Sounds great, doll. Here, let me give you my number so we can keep in contact."
.
It was a good thing you took his number when you did. Due to some technical issues, filming took longer than he thought, so he texted you to let you know.
So now you sat at the booth staring out the window, trying to calm your nerves. It wasn't a real date. Yet you were still meeting a stranger like it was. "I should've had Vex ghost our date."
As soon as you sighed, the bell above the door rang. Looking up and away from the window, your eyes met the most beautiful brown eyes. The man attached to them wasn't hard to look at either. If you didn't believe he was an actor before, you certainly did now.
He talked with the waitress before making his way to your table. "y/n?" You went to stand up, but he stopped you, simply holding his hand out for you to shake, smiling down at you. "It's fine, really. It's nice to finally meet you."
You took his hand, giving him a nervous smile. "Y-you too, Joseph."
"Joe is fine, doll." He sat down across from you, the smile never leaving his face.
"Are you sure?"
"Well, seeing as we have to pretend we didn't just meet each other recently, I would say so." He shrugs
"Okay... Joe," it felt weird saying his name in such a friendly manner despite just knowing him, but at the same time, it felt....nice?
"So I'm assuming you wanted to early to get to know me better." You nodded,"Smart. Then let's get started, doll."
.
The "date" went pretty well, you felt. You leaned the basics of his life and even got to hear what him and his costars. He seemed somewhat laid back, which put you at ease. You might even have to thank Vex later for the suggestion.
A knock bounced off the walls of your quiet living room as you placed the finishing touches on your outfit. "coming!" You called gathering your phone and keys.
When you opened the door, you had to stop gasping. He dressed really nicely. it wasn't a suit and tie, but damn did he look good.
He eyes you up and down, giving a low whistle. "Damn, you clean up nicely, doll." He held out his elbow for you to take. "Are you ready?"
You shut and lock the door behind you before wrapping your arm around his own. "You know I could say the same thing about you."
He chuckled, watching your movements. "What can I say? I wanted to leave an impression."
The two of you were making your way to your car when you scoffed. "On my parents? why? you will only see them once."
"Who says I wanted to impress them?"
When the two of you got close, you unlocked your car. However, before you could open the door to the driver's seat, Joesph beat you to it, opening the door for you. You turned to him face slightly red with a raised eyebrow. "Man, you really want to be hired again, huh?"
He smirked and simply shrugged. when you got into the car, he shut the door before making his way to the other side and climbing into the car. "Show me your music taste doll."
.
Joseph would be lying if he said it wasn't awkward at first. He was surrounded by strangers, but he played the role of boyfriend pretty well. Their father even seemed to take a liking to him, asking for his help with the grill and even bringing him a cold one when he got one himself.
The can opened with a pop, followed by sizzling from the carbonation, before he took a long swig of the drink. "So how did you and my kid meet?"
Joseph had just opened his own can when he looked up at your father. "Online dating. They just...stood out to me."
Your dad looked him up and down as if trying to determine if he was telling the truth. "Did you find out what that something was yet?"
"No, but I plan on sticking around to find out, and after that, too."
Your dad smiled, patting him on the back. "Good man, Joe. Now come on! These burgers aren't gonna flip themselves!"
.
You watched from afar as Joseph interacted with your family. He was certainly putting his acting skills to use as he happily talked with your father and uncle. Not only did he get along with the adults, but the kids seemed to love him as well, always showing him the things they colored or made in Minecraft. God, how has he not settled down by now?
"You made a good choice, hun." You jumped slightly as your mom stood beside you, watching the crowd with you. "Heck, even your dad likes him, and that's a feat of its own."
You hummed, taking a sip of the drink in your hand. "Yeah. It's kinda surprising, honestly."
Your mom chuckled. "Me too, but he must see something in him. It must be your sign."
You scoffed, rolling your eyes. If only she really knew. "Come on, mom, let's finish these sides before dad's finished."
She watched you, uncertain, for a few seconds before nodding. "You're right. You know how your father doesn't like to eat cold burgers."
.
As the night came to a close, you couldn't help but feel anxious. Due to your family's curiosity, you didn't get to spend much time with Joe. At first, you didn't mind, but the more he interacted with your family, or the more they said something sweet about him, the more you couldn't help but feel your heart ache.
At first, you didn't understand why, but the more you thought about it, the more you couldn't deny that he was perfect. He got along with your family, was polite to you, and even seemed to care, constantly checking in with you throughout the evening. It felt almost cruel.
You knew from the start that this was all fake. You knew after tonight that you may never see him again. Yet your heart yearned for more. For something you felt you couldn't have.
"Is everything okay dear?"
You nodded "yeah sorry mom, ....just tired. Work and all."
You could feel her eyes on you before turning back to crowd "Your just like your father in that aspect. Lean on Joe a bit more. let him lighten your load."
You sighed "I think it's time for us to go mom. I still have work in the morning.
she nods in understanding, opening her arms to you. You wrapped your arms around her in a warm hug. "I understand, and please, you guys, come see us soon."
"We will, mom."
.
The car ride home felt weird. You both sat quietly as the soft sounds of the music you were playing flowed through the speakers. You wanted to say something, anything really, but you didn't know what.
Joseph seemed to be thinking the same thing, since he was the first to speak up. "You have a nice family, doll... So friendly and loving."
You smiled, your eyes still trained on the road ahead. "Thank you...We try."
He hummed in response as the music began to fill the gap again. "You know, this went so well it makes me feel like I should meet your family next," you blurted. As soon as the words left your mouth, you felt yourself cringe.
Joseph was silent before letting out an unamused huff. "I would rather you not." You opened your mouth to apologize for making the comment, but Joseph continued before you could. "They are awful. I would rather you meet my coworkers."
You closed your mouth, unsure how to take that comment. As you pulled into your driveway, putting the car in park, you sat there for a minute. You thought Joseph would have gotten out of the car as soon as it was parked, with an awkward goodbye, but he didn't.
Instead, he stayed in the car with you, his eyes trained on you. "I....had a really good time, Joe. Maybe I can find another excuse to hire you again."
"Hmmm, counter offer. What if instead we go on an actual date? No company involved. Just us." You turned to him, surprised, only to be met with a slightly nervous look on Joseph's face. "I had a good time too, and I know a good thing when I see it. If you're down for it, I want to keep this good thing going."
You watched him for a couple of minutes, waiting for the "gotcha" moment, but it never came. He looked at you earnestly yet nervously, waiting for your response. "Okay, let's try it."
#sunny day jack#swwsdj#something's wrong with sunny day jack#x reader#yandere x reader#something's wrong with sunny day jack x reader#swwsdj x reader#joesph x reader#sdj joseph x reader#sunny day jack joseph#sdj joseph#joseph x reader#joseph cullman#joesph cullman x reader#joesph cullman
196 notes
·
View notes
Text
Moderation is a Sucker's Game
Longpost time - tl;dr: the concept of moderation is totally beefed on a fundamental level everywhere and recent anti-trans bans indicate Tumblr has only made the problem harder for itself by making bad staff choices. No solution, not absolving Tumblr of responsibility, but also I think it's an interesting systemic issue on top of genuine incompetence.
Tumblr has a running history of screwing up moderation hard enough to either drive entire communities off the site or allow rule-breaking harassment to persist and drive them off.
As such, I think Tumblr will definitely cease to exist at some point, because it is handling the problem of moderation much worse than most other big platforms, and this is a major barrier to its financial sustainability - they cannot say "we put our users first and refuse to use relatively profitable Unethical Data-Harvesting Tricks" and expect to pivot to a user-supported financing model if they're widely perceived as repeatedly spurning said userbase.
The prior 'Porn Ban' (and subsequent smug tone of Staff communications) and the 'we had a moderator on staff accepting payments for making anti-trans moderation decisions' reveal stand out, as well as the (iirc) 2016-era peak of racist harassment (not that it ever *stopped*) which went largely unmoderated; instead, black users responding to, pointing out, or sometimes literally just screenshotting the deluge of harassment were permabanned.
There has also, of course, been the whole "over-moderation of queer- and specifically trans-related tags and terms in Search" - something that has also, repeatedly, affected Palestinian and pro-Palestine blogs.
Right now, of course, we have the current wave of anti-transfem "everything you do, selfies and textposts alike, can and will be marked as mature", compounded by instant permabans handed out without notice or appeal, all based on automod decisions from bad-faith reports and bizarrely cursory/biased human reviews.
This is all contrasted by semi-regular waves of fresh kinds of porn-related advertisements and spam blogs, which often go entirely unmoderated, automated or otherwise, for months upon months. Also the explicitly ToS-breaking harassment that gets reported and returned as "fine, actually".
Why is this happening? Beyond the inherent problem of "many Tumblr staff have had and currently have biases and open bigotry" (@photomatt springs to mind), you'd think that boring business sense would come first - diversity is Tumblr's brand, fandom is Tumblr's brand, so "not specifically driving off those groups" should have been an *essential* part of monetization efforts. Right?
Trouble is, even a lawsuit settled not-in-Tumblr's-favour can't solve the core problem, which seems to be the same one every user-generated-content platform faces: reasonable moderation isn't feasible for real-time, user-generated content at scale.
Straight-up, that is the largest problem Tumblr faces. Nobody knows how to do it fairly or reasonably. Content moderation has long been the writhing tar-pit horror sitting at the core of all large-scale social media. Increasingly, this unsolvable problem looks like it might be the reason the entire format is structurally doomed - or at least, doomed to a cycle of new platform -> rise in popularity -> failures in moderation and financing -> user exodus and platform collapse.
Meta (Facebook and Instagram) tackle moderation by being totally opaque and overzealous - often you won't even be told your reach has been limited. Or, if you're told, you might not know *what* post triggered it, or why. If you do, you won't be told what effect being 'limited' has, or how long it will last. There is no reliable appeal process, but that doesn't matter. They are too big to be affected by people being unhappy about moderation on an individual or community level.
Twitter 'solved' the problem by leaning more and more on pure automation - which wasn't working great, sure, but once it was bought and most of those measures were scrapped for 'limiting free speech', Twitter got *much, much worse*. It is now a cesspool of unavoidable spam and spam-for-scams. Also, harassment.
Tiktok also does a lot of automated moderation - not as much as people seem to think, but also not as efficiently as other platforms, given that it's video content. They also make heavier use of de-prioritizing content algorithmically rather than just banning or deleting videos. Twitch and YouTube follow along in this bucket, being very willing to use automated systems to suspend, de-rank, and de-monetize hard, early, and arbitrarily.
Mastodon and similar 'decentralised' networks offload the problem onto whoever runs each local server/instance. You set up social.horse.mastodon or whatever? Great - moderation of posts on there is your problem. Some instances are great! Some instances are full of petty tyrants over-moderating their little fiefdoms. Some instances are godawful. Usually, nobody is being paid, which isn't great.
Unfortunately, instance-to-instance communication sometimes means that you can be harassed by a group of people from those godawful servers who are functionally unreportable and who cannot be stopped from spinning up dozens of sockpuppets on said servers to evade your blocks of individual accounts. This is also a problem with the concept of "email", so, you know, not strictly a new problem.
Google can't moderate its search results, and is overtaken by SEO spam and generative misinformation (even prior to their "AI answers" integration).
Amazon, as a storefront, is overrun by scams. Some of them are, functionally, directly run and facilitated by Amazon's own staff, facilities, and even manufacturing processes.
We seethe at Adobe insisting they have the right to moderate (automated or otherwise) the content we put on their cloud services, but chances are they would largely *rather not* - but legal obligations, advertiser/partner dollars, payment processors, and technical requirements are involved, so they're screwed and so are users.
Nobody can "do" content moderation of any kind at scale without being too lax or too overzealous, and probably both at the same time. If the billions of dollars of these corporate giants can't hack the problem, the rinkydink tens of millions of Automattic ain't gonna cut it.
None of this is "working" or "fair" or even "reasonable".
And that's fine by these companies! Their main moderation concern is "not being found liable for horrific and illegal shit users do", followed by "being pleasant *enough* to be used profitably, regardless of actual user experience or sentiment".
Good moderation is hard. Think about the obscenely small teacher–student ratio you need for a good, safe, productive classroom experience. You're not going to push more than a hundred students to one or two lecturers before you lose the ability to meaningfully grade their exams and give feedback, let alone have insight into their real-time behaviour for a dozen hours a week.
Now, imagine that but 24/7. A perpetual whorl of short-form essays being handed in at random times of day, wildly multimedia projects of totally inconsistent sizes from dozens of countries. What sort of ratio of moderators to users would even *plausibly* keep things under control? How do you *pay* for that? How do you have meaningful *oversight* over the mods? Fuck, how do you even *begin* to compensate for the fact that they'll inevitably be exposed to a subset of your users posting criminally heinous content for laughs?
The answer is that you don't manage to balance it reasonably. You use keywords to auto-filter certain posts so they'll be seen less, lowering the chance of anyone reporting them. You use basic network models to auto-approve or auto-deny some reported content based on what's *probably* in the images or text, and call a 70% success rate an exemplary success, because that's 70% of those reported posts your human moderators will correctly never see, plus a further 25% that are incorrectly ruled on but never appealed! Huge reduction in workload - fantastic news!
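To make the triage logic described above concrete, here's a toy sketch. The score stands in for a trained classifier's probability that reported content violates ToS; the thresholds and names are invented for illustration, not any real platform's system:

```python
# Toy auto-triage for reported posts. A score stands in for a classifier's
# estimated probability that the content violates ToS; thresholds are made up.
APPROVE_BELOW = 0.2   # probably fine: auto-dismiss the report
REMOVE_ABOVE = 0.9    # probably violating: auto-remove

def triage(report_score: float) -> str:
    if report_score < APPROVE_BELOW:
        return "auto-dismiss"
    if report_score > REMOVE_ABOVE:
        return "auto-remove"
    return "human-review"  # only the ambiguous middle reaches a person

queue = [0.05, 0.5, 0.95, 0.15, 0.99]
decisions = [triage(s) for s in queue]
human_workload = decisions.count("human-review")
print(decisions, f"-> {human_workload}/{len(queue)} need a human")
```

The whole appeal of this approach is visible in the last line: most of the queue never reaches a person, and every misfire in the auto-handled buckets is invisible unless someone appeals.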
You try your damndest to make sure that advertisers feel like their content is never posted next to or in association with "bad" content, even if it's not ToS-breaking, because that's where the dollars are and without those all you've got are good intentions and that's not a currency you can pay your moderators in. You hope to hell that you fall on the side of "overzealous", because right-wing single-issue ideologues have the ears of payment processors and lawmakers the world over, and they'll cut you the hell off if you get a reputation, fair or otherwise, for being the sort of platform that might "facilitate harm" to kids, or women, or Jesus. Mostly Jesus.
Hence, the uncomfortable tension stretching taut the façade of every major platform - on the one hand, 'shifting moderation burdens to your users' is universally regarded as a shitty and unethical cost-cutting move ripe for exploitation by bad actors. On the other, despite having a surplus of capital and benefitting from the efficiencies of scale (and, arguably, having an unshiftable responsibility to moderate their own platforms), companies aren't managing to wield moderation in a way that works for their users.
In Tumblr's case, it's not profitable. In *Twitter's* case, it's not even profitable.
Obviously, I don't have a solution to this. Tumblr has chosen to fight the dual battles of "moderation is hard" and *ALSO* "some of our staff, including moderators, are inarguably biased/bigoted against core user groups". That's on them. Not going to pretend it isn't, not going to make excuses for it.
The best answer I have is to archive your shit and hop onto smaller networks with staff, communities, and rules that you can vibe with, and hope you will be in a position to help directly and monetarily contribute to their continued existence in a sustainable way.
We're here for the community and a broad set of fairly straightforward features (and lack of other, worse features). Those can, will, and often *do* exist elsewhere. If you stick around and one of these 'elsewhere' platforms finds a size that's sustainable and a moderation approach that actually works for the vast majority of users, then you've hit the jackpot.
If not? Well, archive everything you can and hop ships to new networks. These aren't public institutions designed to last lifetimes - these are passion projects (or cash grabs) bloated beyond initial scope and inevitably riddled with the biases, oversights, and straight-up skill issues of their creators. They were never going to last, and their insistence on pretending they're immortal and behaving in accordance is part of the problem.
Also, you should support laws that would mandate user access to their own data in an exportable and preferably cross-platform-compatible format. Part of what keeps people on networks is lock-in and effort. Making it legally mandatory to make those transitions between networks easy is probably one of the only bits of social media-related law that would actually curb malfeasance (from users and platforms themselves).
#predstrogen#charlottan#covidsafehotties#hammer car explosion#content moderation#bans#trans bans#permabans#longpost
18 notes
·
View notes
Note
Hii dear! Do you have any suggestions or examples of ways/places where I can invest in myself as a small service-based business owner? 😘🌸
Always work on improving both with your craft and what you offer clients. Find new ways to make the experience easy and convenient for them. If you handle a lot of things manually, now is the time to find ways to automate and provide customers with a better experience. I am not saying the experience they have with you is bad, but the name of the game is convenience.
Are there operational things that take away from your time? Automate.
Study your competitors and see if there is anything they are doing that you could implement to continue to scale your business.
Work on making your business look good on social media to be able to drive more traffic back to you.
Probably top important is to make sure you are taking care of yourself. If you are not 100% neither will be your business.
Read/listen to business books that could help you be a better leader and make better business decisions.
Remember that you do not want to stay small, so you need to treat your business as if it is bigger than it is in order for it to grow.
I would also network with complimentary businesses, not direct competitors but people with businesses where their client would probably need your services as well. This creates more avenues for you and affords you the opportunity to create strategic partnerships. This is honestly always a good idea.
Most important, remember that you work at your business, not for your business. Viewing it this way can help not only with productivity but also mindset and what you do with the actual business in terms of its potential success.
IDK what niche you are in, but if you give me some sort of scope I can probably suggest a few tools and ideas you can implement that could help.
36 notes
·
View notes
Text
Just wanted to address this post from @thevibewizard on a separate thread because the post is getting too long, and moderation is a different topic from new site features. But it's worth addressing.
No one, especially me, would suggest that Tumblr Staff are to be held completely unaccountable and without scrutiny. My post was specifically about who to blame whenever Tumblr rolls out a feature that is generally unappealing.
When it comes to blazed posts with shit in them like TERF nonsense and suicide notes, there's nothing to defend there. Someone screwed up. It's not evidence of a massive in-house TERF conspiracy or that Staff's Trust and Safety team is actually lazy. The most likely explanation is someone had a lapse of judgement or didn't review the content well enough.
I know it's upsetting to see that shit, especially as a blazed post. I get upset too. The best course of action is to report the post and file an explanation of what's wrong with it, even if it's obvious. The Trust & Safety team will review it, and blazed posts that get taken down are probably a big red flag for them to review how they ended up blazed in the first place. The T&S team will not see it if you just screenshot the offending post and make an angry post about it; they don't trawl Tumblr's posts looking for hate speech.
Remember that there's hundreds of millions of users and only a few dozen people reading reports (on top of reviewing blazes and other T&S work of enforcement, development, etc). As a quick exercise in picturing the scale: there are roughly 12 million new posts made every day on Tumblr. Say 0.5% of them contain content against terms of service. That's 60,000 shitty posts. There's a lot to go through daily and not all of it is reported, and automated tools for catching bad content effectively are not that good.
The volume of reports that Tumblr's T&S team deals with is astronomical compared to the social media sites that farm out their moderation teams to third party services. And, again, this is not to excuse mistakes, this is just to provide context for how those mistakes happen. Customer service is brutal no matter the org.
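As a back-of-the-envelope sketch of the scale described above (the 0.5% violation rate is the post's own assumption; the team size and shift length here are illustrative guesses for "a few dozen people"):

```python
# Rough moderation-workload arithmetic using the post's numbers.
DAILY_POSTS = 12_000_000
VIOLATION_RATE = 0.005   # assumed: 0.5% of posts break ToS
MODERATORS = 36          # assumed: "a few dozen" people reading reports

bad_posts = DAILY_POSTS * VIOLATION_RATE   # posts needing review per day
per_moderator = bad_posts / MODERATORS     # daily queue per person
seconds_each = 8 * 3600 / per_moderator    # time budget per decision, 8h shift

print(f"{bad_posts:,.0f} bad posts/day, {per_moderator:,.0f} per moderator, "
      f"~{seconds_each:.0f}s per decision")
```

Even under these generous assumptions (every bad post found, nothing else on anyone's plate), each decision gets well under half a minute.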
Tumblr's Staff is a bunch of individuals working together on a larger product. I guess the question comes down to whether you personally think it's worth getting angry at Staff as a whole when an individual makes a mistake, and how, exactly, can you productively direct that anger. If it's worth it to you to express that anger, go for it. It's valid. Just ask yourself, are you swinging at the right target?
If Tumblr didn't have a reputation for being the internet's imperfect but most well-known safe space for LGBTQIA+ youth and self-discovery, I would be more suspicious of why TERF posts make it through blaze review. The people I worked with at Tumblr were the most vocal supporters of marginalized people I've worked with in tech. The corporate entity of Tumblr (and Automattic) can never be truly trusted. Never trust a corporation or the executives. But I can say that the individuals in the apparatus that do the work are largely good and are trying their best with the tools they have.
339 notes
·
View notes
Text
The FF14 community has a sarcastic saying along the lines of "great community btw", which gets used whenever some awful person or group pops up doing something shitty, and which I find annoying.
I think it is really obnoxious because having a great community does not mean there will never be any awful people in that community.
Especially when you get an award for stuff like having a great community – something that in many spaces is usually based on popularity or earned via nebulous committee vote.
That's just not how people work. And putting that strange standard on every single person in that community is really unrealistic.
In fact, I think it would be creepier if occasional assholes didn't pop up. Actual cult behaviour entails a situation where nobody is frustrated, critical or says anything bad about each other ever.
And this is doubly so for stuff that is popular.
We have 30 million accounts in the game now. Do you honestly expect not one of them to be made by an awful person?
The second frustration I have is tying social media to the community. Yoshi-P and his team don't (and in fact straight-up can't) moderate what happens on social media: Twitter, Reddit, Discord, etc.
And as far as I know their official accounts keep it pretty basic and chill.
So, a lot of the bad stuff is just the nature of social media and the internet.
Now, this actually does not mean the dev team bears no responsibility because what they can do is minimise the poison within the game itself.
FF14 is probably the cleanest multiplayer community I've been a part of, ever. The most I've gotten is generic insults in French.
The lack of gamerwords in my 3 months of time surprised me the most, and so did the occasional actual compliments. We just either talk about the game or what issues there are with us doing the mechs or disband when stuff isn't working out and time is getting wasted.
But I also know many haven't been as lucky and some of the security features were dated even by the time the game re-released back in 2013.
The friendlist is a mess because it doesn't block properly, and I know stories of people who had to go to another region to escape their stalker.
The GMs, while preferable to any automated moderation, still don't catch everything, sometimes for years, especially dog-whistlers who signal stuff in specific codewords and technically aren't breaking the rules.
Unjust mass-reportings can happen.
Reporting can be clunky and, from what I've explored, is hidden behind some menuing instead of just being an open option like in many other online games.
So they kinda can't control Twitter or Reddit at all and they can't change human nature, but I think they can modernise a bunch of the social tab.
I also haven't experienced the game as a non-sprout/at endgame yet, which I'm told can make a difference. Part of the reason why I find the game so rewarding is because you find so many different experiences and opinions in just Duty Finder and you just sometimes get to talking with them about the game and then various other things.
Recently, many gathered to honor Akira Toriyama in many of the data centers and people were just kind of talking to each other about DB and other series.
And I've never seen anything like this in any other MMO or any multiplayer games. I know it happened for Kentaro Miura, too, when he died.
So clearly, the social element is a huge part of the identity of an MMO, so if they mean they're improving the MMO in the MMO, I hope this also includes the social tab features.
21 notes
·
View notes
Text
If I had any coding knowledge, I think a useful tool to build would be a social media platform that was JUST for pics and video of performers. The problem I keep hearing from dancers is that they'll be at a thing and a dozen people will be taking pictures, but you have to scour the internet to find even one of them.
So like in concept it would be like:
Select location, input event, input act. Input act would tag performer if performer is listed. Performer can then look through the act tag for that event and see themselves.
Like... the reason I use Facebook for high-volume events is because I can do the whole album at once, tag a few highlights, and that lets the performer know that there's an album of them.
But automating the event tagging system would make the legwork easier as a photographer. Or an event runner could pre-make an album and let people dump their photos there.
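The location → event → act flow sketched above could be modeled with something like this. Every name here is hypothetical, just to show the shape of the data: photos carry act tags, and a performer searches their act's tag within an event:

```python
from dataclasses import dataclass, field

@dataclass
class Photo:
    url: str
    act_tags: list[str] = field(default_factory=list)  # acts pictured

@dataclass
class Event:
    location: str
    name: str
    photos: list[Photo] = field(default_factory=list)

    def upload(self, photo: Photo) -> None:
        self.photos.append(photo)

    def photos_of(self, act: str) -> list[Photo]:
        # A performer browses their own act tag to find every photo of them.
        return [p for p in self.photos if act in p.act_tags]

# Usage: a photographer tags uploads by act; the performer looks themselves up.
event = Event("Downtown Theater", "Spring Showcase")
event.upload(Photo("img_001.jpg", act_tags=["firebird"]))
event.upload(Photo("img_002.jpg", act_tags=["juggler"]))
print([p.url for p in event.photos_of("firebird")])
```

The key design point is that tagging happens per act rather than per person, so the photographer's legwork is one tag per upload instead of hunting down every performer individually.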
My problems would be getting people to use it, and having a moderation team to limit bad actors.
But I'm not the person to make this, I think. Like it's a service I would definitely use, but I'm not great at starting things.
89 notes
·
View notes
Text
9/12 Blog Post Week #3
Why and how can the internet be or feel like a safe space for women outside of political organizations?
For many women not involved in political groups, the internet can feel like a safe space where they can challenge the gender inequality they deal with in their everyday lives. It gives them a place to explore who they are, connect with others, and talk about feminism and gender issues without the restrictions they face offline. Nouraie-Simone (2005) explains that for young Iranian women, the internet becomes "a liberating territory of one’s own—a place to resist a traditionally imposed subordinate identity" (p. 61). It offers them a break from the limitations of public life, allowing them to express themselves freely. In this way, the internet is like the “room of one’s own” that Virginia Woolf described—offering women a personal, empowering space to speak up and take control of their identity.
Reference: Nouraie-Simone, F. (2005). On Shifting Ground: Muslim Women in the Global Era. The Feminist Press at CUNY.
2. Would it be considered right or wrong when people seek out online spaces that affirm and solidify their own social identities?
Looking for online spaces that support and strengthen one's own social identity is usually seen as a positive thing. People often use the internet to connect with others who share their racial, gender, or sexual identities, which can be empowering. For instance, young people might use social media to express themselves and connect with friends (Boyd, 2004). People of color and LGBTQ+ individuals also use specific websites to affirm their identities and find like-minded people (Bryson, 2004). Nouraie-Simone (2005) notes that for those in restrictive environments, the internet can provide a freeing space to explore and express their identities (p. 61-62). Moreover, research shows that people with health issues use online platforms to talk about their experiences openly, rather than to escape them (Pitts, 2004). So, using the internet to support one’s identity is generally a meaningful and helpful practice.
Boyd, D. (2004). Friendster and Facebook: Social networking site strategies.
Bryson, M. (2004). QueerSisters: Learning to be queer online.
Nouraie-Simone, F. (2005). On Shifting Ground: Muslim Women in the Global Era. The Feminist Press at CUNY.
Pitts, V. (2004). Illness and the body: Online narratives of cancer.
3. How can high-tech tools negatively affect poor and working-class communities when they are supposed to “help those in need”?
Even with the best of intentions, high-tech tools can harm underprivileged communities. Governor LePage of Maine falsely claimed that TANF recipients were abusing their benefits based on EBT data, despite the fact that only 0.03% of transactions were questionable. By reinforcing unfavorable stereotypes, receiving public aid came to be seen as "lazy" or "criminal" (Eubanks, 2018, p. 19). Stricter regulations were consequently implemented, burdening families with additional stress (e.g., requiring them to retain receipts for a year). In this instance, technology didn't help; it made things harder for the very people who need assistance.
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press.
4. AI is known to struggle to accurately recognize Black and Asian faces. Why does law enforcement rely on it to identify people, knowing the risk of putting someone innocent behind bars?
Facial recognition tech is notoriously bad at identifying Black and Asian faces, but law enforcement still uses it. Research shows that these systems are much more likely to misidentify people of color because they’re often trained on biased data (Buolamwini & Gebru, 2018). For example, Nijeer Parks, a Black man from New Jersey, was wrongfully arrested after being misidentified by facial recognition—he's the third known Black man to face this kind of mistake (Hill, 2020). Even though the risks are clear, police keep using this flawed tech, likely because it seems like an easy solution, but it ends up hurting innocent people. There needs to be more caution and oversight to prevent these errors.
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 77-91.
Hill, K. (2020). Another arrest, and jail time, due to a bad facial recognition match. The New York Times. Retrieved from https://www.nytimes.com
Text
ppl don't understand moderation at scale and it shows
a lot of ppl on this website don't seem to understand tumblr is a pretty big website and big websites are hard to moderate.
like yeah it's obvious to you when there's a bad post that violates AUP or there's a perfectly good post that got incorrectly flagged. like duh. just ban terfs and don't ban transwomen.
but how many posts do you see a day, a thousand or so?
well it's a little harder when there are 13 million posts published per day, approximately 3-5% of them require moderation* (4% = 520k posts), and your automated tooling is anything less than 99.5% accurate (i.e. more than 1 misclassification every 200 posts). that accuracy would produce 2600 posts per day that require human review. if there are 4 human reviewers working 8h/day doing nothing but moderation, they'd have a budget of 44 seconds** to spend on reviewing a given post. and that's likely an underestimate of the workload***.
there are gonna be some mistakes. if you make your automated stuff less trigger happy, more bad things like terf shit falls through the cracks. if you make it more trigger happy, marginalized people start getting flagged for calling themselves faggots or posting boygirltits. if you rely less on automation, then you need humans; if you use humans, they cost a lot more, they're way slower, you're exposing more people to traumatic shit every day, and they're still gonna make mistakes.
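the "trigger happy" dial above is basically just a classification threshold. here's a toy sketch with completely made-up scores (this reflects nothing about tumblr's actual tooling, it's just to show the shape of the tradeoff):

```python
# toy model: an automated classifier scores each post 0..1 and flags
# anything at or above a threshold. all scores below are invented.
violating_scores = [0.55, 0.70, 0.80, 0.92]  # posts that really violate AUP
benign_scores = [0.10, 0.30, 0.60, 0.75]     # fine posts that look "risky"

def tradeoff(threshold):
    """Returns (violations missed, fine posts wrongly flagged)."""
    missed = sum(s < threshold for s in violating_scores)
    wrongly_flagged = sum(s >= threshold for s in benign_scores)
    return missed, wrongly_flagged

print(tradeoff(0.5))  # trigger happy: misses 0 violations, flags 2 fine posts
print(tradeoff(0.9))  # cautious: flags 0 fine posts, misses 3 violations
```

lowering the threshold catches more of the bad stuff but sweeps up more reclaimed-slur posts and boygirltits; raising it does the opposite. when the score distributions overlap, no threshold gets both numbers to zero.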
to be clear: i think it's true that on aggregate, marginalized people are disproportionately affected by moderation mistakes. but that's not a tumblr-specific thing, and i don't think it's reasonable to expect tumblr, a 200-person company with 300 million monthly active users, to somehow solve a problem that none of twitter/facebook/reddit/youtube have managed to solve with literally thousands of engineers. should they do better? yes. absolutely. but that's not my point.
my point is: when you see the mistakes, i'm sitting here begging and pleading for you to consider it might be due to the logistical reality of social media moderation at scale rather than conspiracy or malice.
thanks 4 coming 2 my ted talk. footnotes under the cut.
*AFAIK Tumblr doesn't publicly report this statistic, so this is an informed under-guesstimate. On Reddit, 6% of all content was reported/flagged in 2021. I assume Tumblr and Reddit have similar enough content profiles for me to say "ehhh 3% lower bound, probably."
**Calculated by (60 / (P / (M * W * 60))) where P is number of posts to review, M is number of moderators, and W is hours worked per moderator per day. 60 / (2600 / (4 * 8 * 60)) ≈ 44.
***This is a reductive picture for the purpose of demonstrating scale. In real life, the calculus for how long can a moderator spend on a given post is more complicated because of things like prioritization for specific kinds of AUP violations (eg CSAM is higher priority than porn), classification accuracy that isn't uniform across categories (eg hit rate for gore is probably different from porn or hate speech), regulatory requirements (like mandatory reporting for CSAM and government takedown requests), different pipelines for user reports versus tool-based reports, yadda yadda. My goal is to show that even the underestimate is quite burdensome.
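for the numerically inclined, footnote ** as a lil python sketch. the 13M/4%/99.5%/4-moderator figures are the post's assumptions from above, not real tumblr stats:

```python
# back-of-envelope moderation workload, using the post's assumed numbers
POSTS_PER_DAY = 13_000_000
FLAG_RATE = 0.04        # ~4% of posts require moderation (guesstimate)
ACCURACY = 0.995        # assumed automated-tooling accuracy
MODERATORS = 4
HOURS_PER_SHIFT = 8

flagged = POSTS_PER_DAY * FLAG_RATE             # ~520,000 posts/day
needs_review = flagged * (1 - ACCURACY)         # ~2,600 misclassified/day
moderator_minutes = MODERATORS * HOURS_PER_SHIFT * 60
seconds_per_post = 60 / (needs_review / moderator_minutes)

print(f"{needs_review:,.0f} posts/day -> {seconds_per_post:.0f}s per post")
```

bump the accuracy to 99.9% and you still only get ~220 seconds per post; drop it to 99% and moderators are down to ~22 seconds each.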
PS: I don't work for tumblr and I never have. I just work at a place that does things at scale and faces similar issues and I'm very passionate about online communities.
Text
The headlines sounded dire: “China Will Use AI to Disrupt Elections in the US, South Korea and India, Microsoft Warns.” Another claimed, “China Is Using AI to Sow Disinformation and Stoke Discord Across Asia and the US.”
They were based on a report published earlier this month by Microsoft’s Threat Analysis Center which outlined how a Chinese disinformation campaign was now utilizing artificial intelligence to inflame divisions and disrupt elections in the US and around the world. The campaign, which has already targeted Taiwan’s elections, uses AI-generated audio and memes designed to grab user attention and boost engagement.
But what these headlines and Microsoft itself failed to adequately convey is that the Chinese-government-linked disinformation campaign, known as Spamouflage Dragon or Dragonbridge, has so far been virtually ineffective.
“I would describe China's disinformation campaigns as Russia 2014. As in, they're 10 years behind,” says Clint Watts, the general manager of Microsoft’s Threat Analysis Center. “They're trying lots of different things but their sophistication is still very weak.”
Over the past 24 months, the campaign has switched from pushing predominantly pro-China content to more aggressively targeting US politics. While these efforts have been large-scale and spread across dozens of platforms, they have largely failed to have any real-world impact. Still, experts warn that it can take just a single post being amplified by an influential account to change all of that.
“Spamouflage is like throwing spaghetti at the wall, and they are throwing a lot of spaghetti,” says Jack Stubbs, chief information officer at Graphika, a social media analysis company that was among the first to identify the Spamouflage campaign. “The volume and scale of this thing is huge. They're putting out multiple videos and cartoons every day, amplified across different platforms at a global scale. The vast majority of it, for the time being, appears to be something that doesn't stick, but that doesn't mean it won't stick in the future.”
Since at least 2017, Spamouflage has been ceaselessly spewing out content designed to disrupt major global events, including topics as diverse as the Hong Kong pro-democracy protests, the US presidential elections, and Israel and Gaza. Part of a wider multibillion-dollar influence campaign by the Chinese government, the campaign has used millions of accounts on dozens of internet platforms ranging from X and YouTube to more fringe platforms like Gab, where the campaign has been trying to push pro-China content. It’s also been among the first to adopt cutting-edge techniques such as AI-generated profile pictures.
Even with all of these investments, experts say the campaign has largely failed due to a number of factors including issues of cultural context, China’s online partition from the outside world via the Great Firewall, a lack of joined-up thinking between state media and the disinformation campaign, and the use of tactics designed for China’s own heavily controlled online environment.
“That's been the story of Spamouflage since 2017: They're massive, they're everywhere, and nobody looks at them except for researchers,” says Elise Thomas, a senior open source analyst at the Institute for Strategic Dialogue who has tracked the Spamouflage campaign for years.
“Most tweets receive either no engagement and very low numbers of views, or are only engaged with by other accounts which appear to be a part of the Spamouflage network,” Thomas wrote in a report for the Institute of Strategic Dialogue about the failed campaign in February.
Over the past five years, the researchers who have been tracking the campaign have watched as it attempted to change tactics, using video, automated voiceovers, and most recently the adoption of AI to create profile images and content designed to inflame existing divisions.
The adoption of AI technologies is also not necessarily an indicator that the campaign is becoming more sophisticated—just more efficient.
“The primary affordance of these Gen AI products is about efficiency and scaling,” says Stubbs. “It allows more of the same thing with fewer resources. It's cheaper and quicker, but we don't see it as a mark of sophistication. These products are actually incredibly easy to access. Anyone can do so with $5 on a credit card.”
The campaign has also taken place on virtually every social media platform, including Facebook, Reddit, TikTok, and YouTube. Over the years, major platforms have purged their systems of hundreds of thousands of accounts linked to the campaign, including last year when Meta took down what it called “the largest known cross-platform covert influence operation in the world.”
The US government has also sought to curb the effort. A year ago, the Department of Justice charged 34 officers of the Chinese Ministry of Public Security’s “912 Special Project Working Group” for their involvement in an influence campaign. While the DOJ did not explicitly link the arrests to Spamouflage, a source with knowledge of the event told WIRED that the campaign was “100 percent” Chinese state-sponsored. The source spoke on the condition of anonymity as they were not authorized to speak publicly about the information.
“A commercial actor would not be doing this,” says Thomas, who also believes the campaign is run by the Chinese government. “They are more innovative. They would have changed tactics, whereas it's not unusual for a government communications campaign to persist for a really long time despite being useless.”
For the past seven years, however, the content pushed by the Spamouflage campaign has lacked the nuance and audience-specific tailoring that successful nation-state disinformation campaigns from countries like Russia, Iran, and Turkey have displayed.
“They get the cultural context confused, which is why you'll see them make mistakes,” says Watts. “They're in the audience talking about things that don't make sense and the audience knows that, so they don't engage with the content. They leave Chinese characters sometimes in their posts.”
Part of this is the result of Chinese citizens being virtually blocked off from the outside world as a result of the Great Firewall, which allows the Chinese government to strictly control what its citizens see and share on the internet. This, experts say, makes it incredibly difficult for those running an influence operation to really grasp how to successfully manipulate audiences outside of China.
“They're having to adapt strategies that they might have used in closed and tightly controlled platforms like WeChat and Weibo, to operating on the open internet,” says Thomas. “So you can flood WeChat and Weibo with content if you want to if you are the Chinese government, whereas you can't really flood the open internet. It's kind of like trying to flood the sea.”
Stubbs agrees. “Their domestic information environment is not one that is real or authentic,” he says. “They are now being tasked with achieving influence and affecting operational strategic impact in a free and authentic information environment, which is just fundamentally a different place.”
Russian influence campaigns have also tended to coordinate across multiple layers of government spokespeople, state-run media, influencers, and bot accounts on social media. They all push the same message at the same time—something the Spamouflage operators don’t do. This was seen recently when the Russian disinformation apparatus was activated to sow division in the US around the Texas border crisis, boosting the extremist-led border convoy and calls for “civil war” on state media, influencer Telegram channels, and social media bots all at the same time.
“I think the biggest problem is [the Chinese campaign] doesn’t synchronize their efforts,” Watts said. “They’re just very linear on whatever their task is, whether it’s overt media or some sort of covert media. They’re doing it and they’re doing it at scale, but it’s not synchronized around their objectives because it’s a very top down effort.”
Some of the content produced by the campaign appeared to have a high number of likes and replies, but closer inspection revealed that those engagements came from other accounts in the Spamouflage network. “It was a network that was very insular, it was only engaging with itself,” says Thomas.
Watts does not believe China’s disinformation campaigns will have a material impact on the US election, but added that the situation “can change nearly instantaneously. If the right account stumbles onto [a post by a Chinese bot account] and gives it a voice, suddenly their volume will grow.”
This, Thomas says, has already happened.
One post was written on X by an account Thomas had been tracking, since suspended, which referenced “MAGA 2024” in its bio. It shared a video from Russian state-run channel RT that alleged President Joe Biden and the CIA had sent a neo-Nazi to fight in Ukraine—a claim that has been debunked by investigative group Bellingcat. Like most Spamouflage posts, the video received little attention initially, but when it was shared by the account of school shooting conspiracist Alex Jones, who has more than 2.2 million followers on the platform, it quickly racked up hundreds of thousands of views.
“What is different about these MAGAflage accounts is that real people are looking at them, including Alex Jones. It’s the most bizarre tweet I’ve ever seen,” Thomas said.
Thomas says the account that was shared by Jones is different from typical Spamouflage accounts, because it was not spewing out automated content, but seeking to organically engage with other users in a way that made them appear to be a real person—reminiscent of what Russian accounts did in the lead-up to the 2016 election.
So far, Thomas says she has found just four of these accounts, which she has dubbed “MAGAflage,” but worries there may be a lot more operating under the radar that will be incredibly difficult to find without access to X’s backend.
“My concern is that they will start doing this, or potentially are already doing this, at a really significant scale,” Thomas said. “And if that is happening, then I think it will be very difficult to detect, particularly for external researchers. If they start doing it with new accounts that don't have those interesting connections to the Spamouflage network and if you then hypothetically lay on top of that, if they start using large language models to generate text with AI, I think we're in a lot of trouble.”
Stubbs says that Graphika has been tracking Spamouflage accounts that have been attempting to impersonate US voters since before the 2022 midterms, and hasn’t yet witnessed real success. And while he believes reporting on these efforts is important, he’s concerned that these high-profile campaigns could obscure the smaller ones.
"We are going to see increasing amounts of public discussion and reporting on campaigns like Spamouflage and Doppelganger from Russia, precisely because we already know about them,” says Stubbs. “Both those campaigns are examples of activity that is incredibly high scale, but also very easy to detect. [But] I am more concerned and more worried about the things we don't know."