crazy-pages · 2 days ago
Text
Sure, let's talk about it!
First, you will be happy to know the following (citations below): any images (real or otherwise) of such material are banned on Ao3; directing the attention of a real person to a story containing written sexual material about them can be considered harassment and is against Ao3 policy; and doing so is furthermore against US sexual harassment law - the details of which are essentially that Ao3 is not legally liable for what it hosts, but is obligated to maintain channels for users to report such material, and to forward reports to the government if actionable.
While Ao3 does permit the hosting of content describing sexual acts involving real people, including people who are underage, so long as it is not being used in a harassing capacity, this is in line with common archival policies regarding obscenity, including the US Library of Congress' own. This is because real people do sometimes have sex while underage, and there are non-prurient reasons why one might document such events or write of fictionalized or hypothetical events involving underage sex, just as there are non-prurient reasons one might write about a child's death, maiming, or other disturbing event occurring to them. This makes drawing a clean censorship line difficult and subject to personal judgement, which is exactly what Ao3 is designed to prevent.
As an example, let's say someone is writing a historical novel about the rise of the USSR (which qualifies under Ao3's hosting criteria). This includes the murder of the Romanov royal children, a beyond-the-pale crime against children. Should such a work be banned? Obviously not, you might say - surely it would be an exception to a policy of not letting people write stories about bad things happening to real children!
Except history shows time and time again that these exceptions are not made, or are made specifically to censor unrelated content. A government could easily selectively enforce such a principle only against content about the Russian Revolution to censor discussion of it. So censoring stories about bad things happening to real children can't be the policy. Nor can you narrow it down to sexual material - that still removes real historical events. Nor can you narrow it down to fictional sexual material - not all historical events are well attested and discussion of hypotheticals can be non-prurient.
And that's assuming the censors even care! Remember, censorship safety isn't just about how well good actors can employ it, but how effectively bad actors can too. Because ultimately the capacity to censor stories is just a checklist and a button! Check the list, press the button. Nothing actually ensures the checked boxes correspond to reality! This is how historical fandom purges of queer content have worked. Get a policy permitting community or moderator purging of pedophilic material and then either collectively report queer stories for pedophilia, or get anti-queer activists into the moderator ranks and have them do it. It is extremely difficult to create a censorship criterion that isn't judgement-prone, and even more difficult to create a judgement-based censorship system that isn't prone to exploitation.
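The "checklist and a button" point can be made concrete with a toy model. This is a hypothetical threshold-based automod, not any real platform's logic - the system counts reports but never verifies that the reported rule was actually broken, which is exactly what makes coordinated false reporting work:

```python
# Toy model of a threshold-based report system (hypothetical, not any
# real platform's code). The automod only counts reports; nothing checks
# whether the reports are truthful.

REPORT_THRESHOLD = 5

def process_reports(post, reports):
    """Hide the post if enough reports accumulate, regardless of accuracy."""
    if len(reports) >= REPORT_THRESHOLD:
        post["hidden"] = True
    return post

# An innocuous post...
post = {"id": 1, "text": "a queer coming-out story", "hidden": False}

# ...brigaded by a coordinated group filing false reports.
false_reports = [{"reporter": f"sockpuppet_{i}", "reason": "pedophilia"}
                 for i in range(6)]

process_reports(post, false_reports)
print(post["hidden"])  # True - the boxes were checked, the button pressed
```

Nothing in this sketch distinguishes six honest reports from six sockpuppets, which is the whole exploit.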
So ultimately this becomes a question of harm vs. harm. Which has the potential for more harm? Permitting the hosting of stories involving child sexual content of real people, some of which yes, will be prurient. Or permitting a censorship mechanism for the archive?
Well, if we're discussing the harms of what you would want to censor, fully and obviously fictional stories containing sexual material of real children ... well that's such a specific scenario we can ask: What exactly is the real world harm of those stories, who is being hurt? To use the strawberry metaphor above, whose allergies are being triggered, which laborers are being exploited? And the answer is, well ... none and no one. Unless that material is being brought to the attention of and used to harass someone...
And that is against both Ao3 policy and US law.
How fortunate that Ao3 policy already addresses your concerns.
Unless your concern isn't harm being done to real people, but simply about personal revulsion.
II.K. Illegal and Inappropriate Content You may not upload Content that appears or purports to contain, link to, or provide instructions for obtaining sexually explicit or suggestive photographic or photorealistic images of real children; malware, warez, cracks, hacks, or other executable files and their associated utilities; or trade secrets, restricted technologies, or other classified information. ... If you encounter Content that you believe violates a specific law of the United States, you can report it to us.
Real-Person Fiction (RPF) Creating RPF never constitutes harassment in and of itself. Posting works where someone dies, is subjected to slurs, or is otherwise harmed as part of the plot is usually not a violation of the Harassment Policy. However, deliberately posting such Content in a manner designed to be seen by the subject of the work, such as by gifting them the work, may result in a judgment of harassment.
P.S. This is an incredibly sensitive topic. And this is not a call made lightly. Ao3's content policy was created with the contributions of volunteer lawyers, civil rights advocates, and censorship experts, people whose actual professional jobs involve understanding all the minutiae of these things and thinking long and hard on them.
This is a decision which a lot of thought got poured into. The least any of us can do before jerking our knees in horror over the depiction of something revolting, is to put some thought and consideration into what the mechanics of removing that grossness would look like, and if it would be exploitable by bad actors.
saying ao3 needs to censor certain content is like saying a museum can't have still life art that includes strawberries because you don't like them.
these are not real strawberries. you do not have to, and in fact cannot, eat them. no one with a strawberry allergy will be harmed by looking at them. no migrant workers were exploited in the picking of these strawberries. there were no questionable farming practices or negative environmental impacts from growing or transporting them.
because - and i cannot stress this enough - they are not real strawberries.
if you don't like strawberries, you don't have to look at the paintings. in fact, you can get a map of the museum that lists what works are in what rooms and just. not go in there. if you see one by mistake, you can look away. just keep walking. there's plenty of other stuff to see.
yes, real strawberries can cause real quantifiable harm to real people.
but again. these are not real strawberries.
you may have whatever feelings you like about strawberries, and so can i. you can draw and write about whatever fruit floats your boat, and so can i, even if that happens to be strawberries. and we can hang our art side by side in the same gallery, provided you understand that my strawberries are not about you (and your kumquats are, shocker, not about me) and that - and this is true - neither are real.
and when the fascists break down the doors and grab all the strawberry paintings and heap them in the street and set them on fire, please know that they are coming for your kumquats next.
so if you want a place where you can show off your beautiful kumquat art safely, you're gonna have to tolerate having some strawberries in the next room.
and that's okay. because the strawberries aren't real.
7K notes · View notes
shinraelectricpowercom · 2 years ago
Text
so there's a new campaign going round hoping to Make Fandom Less Racist by... pushing for moderation on ao3. no way this could Possibly go wrong :)
10 notes · View notes
theoretically-questionable · 6 months ago
Text
Moderation is a Sucker's Game
Longpost time - tl;dr: the concept of moderation is totally beefed on a fundamental level everywhere and recent anti-trans bans indicate Tumblr has only made the problem harder for itself by making bad staff choices. No solution, not absolving Tumblr of responsibility, but also I think it's an interesting systemic issue on top of genuine incompetence.
Tumblr has a running history of screwing up moderation hard enough to either drive entire communities off the site or allow rule-breaking harassment to persist and drive them off.
As such, I think Tumblr will definitely cease to exist at some point, because it is handling the problem of moderation much worse than most other big platforms and this is a major barrier to its financial sustainability - they cannot say "we put our users first and refuse to use relatively profitable Unethical Data-Harvesting Tricks" and expect to pivot to a user-supported financing model if they're widely perceived as repeatedly spurning said userbase.
The prior 'Porn Ban' (and subsequent smug tone of Staff communications) and the 'we had a moderator on staff accepting payments for making anti-trans moderation decisions' reveal stand out, as well as the (iirc) 2016-era peak of racist harassment (not that it ever *stopped*) which went largely unmoderated; instead, black users responding to, pointing out, or sometimes literally just screenshotting the deluge of harassment were permabanned.
There has also, of course, been the whole "over-moderation of queer- and specifically trans-related tags and terms in Search" - something that has also, repeatedly, affected Palestinian and pro-Palestine blogs.
Right now, of course, we have the current wave of anti-transfem "everything you do, selfies and textposts alike, can and will be marked as mature", compounded by instant permabans handed out without notice or appeal, all based on automod decisions from bad-faith reports and bizarrely cursory/biased human reviews.
This is all contrasted by semi-regular waves of fresh kinds of porn-related advertisements and spam blogs, which often go entirely unmoderated, automated or otherwise, for months upon months. Also the explicitly ToS-breaking harassment that gets reported and returned as "fine, actually".
Why is this happening? Beyond the inherent problem of "many Tumblr staff have had and currently have biases and open bigotry" (@photomatt springs to mind), you'd think that boring business sense would come first - diversity is Tumblr's brand, fandom is Tumblr's brand, so "not specifically driving off those groups" should have been an *essential* part of monetization efforts. Right?
Trouble is, even a lawsuit settled not-in-Tumblr's-favour can't solve the core problem, which seems to be the same one every user-generated-content platform faces: reasonable moderation isn't feasible for real-time, user-generated content at scale.
Straight-up, that is the largest problem Tumblr faces. Nobody knows how to do it fairly or reasonably. Content moderation has long been the writhing tar-pit horror sitting at the core of all large-scale social media. Increasingly, this unsolvable problem looks like it might be the reason the entire format is structurally doomed - or at least, doomed to a cycle of new platform -> rise in popularity -> failures in moderation and financing -> user exodus and platform collapse.
Meta (Facebook and Instagram) tackle moderation by being totally opaque and overzealous - often you won't even be told your reach has been limited. Or, if you're told, you might not know *what* post triggered it, or why. If you do, you won't be told what effect being 'limited' has, or how long it will last. There is no reliable appeal process, but that doesn't matter. They are too big to be affected by people being unhappy about moderation on an individual or community level.
Twitter 'solved' the problem by leaning more and more on pure automation - which wasn't working great, sure, but once it was bought and most of those measures scrapped for 'limiting free speech', Twitter got *much, much worse*. It is now a cesspool of unavoidable spam and spam-for-scams. Also, harassment.
Tiktok also does a lot of automated moderation - not as much as people seem to think, but also not as efficiently as other platforms, given that it's video content. They also make heavier use of de-prioritizing content algorithmically rather than just banning or deleting videos. Twitch and YouTube follow along in this bucket, being very willing to use automated systems to suspend, de-rank, and de-monetize hard, early, and arbitrarily.
Mastodon and similar 'decentralised' networks offload the problem onto whoever runs each local server/instance. You set up social.horse.mastodon or whatever? Great - moderation of posts on there is your problem. Some instances are great! Some instances are full of petty tyrants over-moderating their little fiefdoms. Some instances are godawful. Usually, nobody is being paid, which isn't great.
Unfortunately, instance-to-instance communication sometimes means that you can be harassed by a group of people from those godawful servers who are functionally unreportable and who cannot be stopped from spinning up dozens of sockpuppets on said servers to evade your blocks of individual accounts. This is also a problem with the concept of "email", so, you know, not strictly a new problem.
Google can't moderate its search results, and is overtaken by SEO spam and generative misinformation (even prior to their "AI answers" integration).
Amazon, as a storefront, is overrun by scams. Some of them are, functionally, directly run and facilitated by Amazon's own staff, facilities, and even manufacturing processes.
We seethe at Adobe insisting they have the right to moderate (automated or otherwise) the content we put on their cloud services, but chances are they would largely *rather not* - but legal obligations, advertiser/partner dollars, payment processors, and technical requirements are involved, so they're screwed and so are users.
Nobody can "do" content moderation of any kind at scale without being too lax or too overzealous, and probably both at the same time. If the billions of dollars of these corporate giants can't hack the problem, the rinkydink tens of millions of Automattic ain't gonna cut it.
None of this is "working" or "fair" or even "reasonable".
And that's fine by these companies! Their main moderation concern is "not being found liable for horrific and illegal shit users do", followed by "being pleasant *enough* to be used profitably, regardless of actual user experience or sentiment".
Good moderation is hard. Think about the obscenely low student-to-teacher ratio you need for a good, safe, productive classroom experience. You're not going to push more than a hundred students to one or two lecturers before you lose the ability to meaningfully grade their exams and give feedback, let alone have insight into their real-time behaviour for a dozen hours a week.
Now, imagine that but 24/7. A perpetual whorl of short-form essays handed in at random times of day, and wildly multimedia projects of totally inconsistent sizes from dozens of countries. What sort of ratio of moderators to users would even *plausibly* keep things under control? How do you *pay* for that? How do you have meaningful *oversight* over the mods? Fuck, how do you even *begin* to compensate for the fact that they'll inevitably be exposed to a subset of your users posting criminally heinous content for laughs?
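To put rough numbers on the staffing question, here's a back-of-envelope sketch. Every figure is an illustrative assumption, not real platform data - and even a ratio thousands of times worse than a classroom's gets expensive fast:

```python
# Back-of-envelope moderation staffing estimate. All numbers are
# assumptions chosen for illustration, not any real platform's figures.

users = 100_000_000           # hypothetical active user count
users_per_mod = 10_000        # already vastly worse than any classroom ratio
annual_cost_per_mod = 50_000  # rough guess: salary plus overhead, USD

mods_needed = users // users_per_mod
annual_cost = mods_needed * annual_cost_per_mod

print(mods_needed)  # 10000 moderators
print(annual_cost)  # 500000000 - half a billion dollars a year
```

Which is roughly why no platform staffs human moderation at anything resembling a classroom ratio.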
The answer is that you don't manage to balance it reasonably. You use keywords to auto-filter certain posts so they'll be seen less, lowering the chance of anyone reporting them. You use basic network models to auto-approve or auto-deny some reported content based on what's *probably* in the images or text, and call a 70% success rate an exemplary success, because that's 70% of those reported posts your human moderators will correctly never see - and most of the remaining 30% are ruled on incorrectly but never get appealed! Huge reduction in workload - fantastic news!
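The workload arithmetic there can be made concrete. A sketch under the stated assumptions (a classifier that auto-rules every report, is right 70% of the time, and an assumed appeal rate of 5% on the misruled remainder - that last figure is invented for illustration):

```python
# Concrete version of the triage arithmetic above. The 70% figure comes
# from the text; the 5% appeal rate is an assumption for illustration.

reports = 1_000_000    # reported posts in some period
correct_rate = 0.70    # 70% of auto-rulings are correct
appeal_rate = 0.05     # assume only 5% of misruled posts ever get appealed

correctly_handled = reports * correct_rate  # humans never see these
misruled = reports * (1 - correct_rate)
appealed = misruled * appeal_rate           # only these reach a human

print(int(appealed))                               # 15000
print(f"{appealed / reports:.1%} need a human")    # 1.5% need a human
```

From the platform's perspective, a million reports just became fifteen thousand human decisions - which is why the incentive to call 70% "exemplary" is so strong.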
You try your damnedest to make sure that advertisers feel like their content is never posted next to or in association with "bad" content, even if it's not ToS-breaking, because that's where the dollars are and without those all you've got are good intentions and that's not a currency you can pay your moderators in. You hope to hell that you fall on the side of "overzealous", because right-wing single-issue ideologues have the ears of payment processors and lawmakers the world over, and they'll cut you the hell off if you get a reputation, fair or otherwise, for being the sort of platform that might "facilitate harm" to kids, or women, or Jesus. Mostly Jesus.
Hence, the uncomfortable tension stretching taut the façade of every major platform - on the one hand, 'shifting moderation burdens to your users' is universally regarded as a shitty and unethical cost-cutting move ripe for exploitation by bad actors. On the other, despite having a surplus of capital and benefitting from the efficiencies of scale (and, arguably, having an unshiftable responsibility to moderate their own platforms), companies aren't managing to wield moderation in a way that works for their users.
In Tumblr's case, it's not profitable. In *Twitter's* case, it's not even profitable.
Obviously, I don't have a solution to this. Tumblr has chosen to fight the dual battles of "moderation is hard" and *ALSO* "some of our staff, including moderators, are inarguably biased/bigoted against core user groups". That's on them. Not going to pretend it isn't, not going to make excuses for it.
The best answer I have is to archive your shit and hop onto smaller networks with staff, communities, and rules that you can vibe with, and hope you will be in a position to help directly and monetarily contribute to their continued existence in a sustainable way.
We're here for the community and a broad set of fairly straightforward features (and lack of other, worse features). Those can, will, and often *do* exist elsewhere. If you stick around and one of these 'elsewhere' platforms finds a size that's sustainable and a moderation approach that actually works for the vast majority of users, then you've hit the jackpot.
If not? Well, archive everything you can and hop ships to new networks. These aren't public institutions designed to last lifetimes - these are passion projects (or cash grabs) bloated beyond initial scope and inevitably riddled with the biases, oversights, and straight-up skill issues of their creators. They were never going to last, and their insistence on pretending they're immortal and behaving in accordance is part of the problem.
Also, you should support laws that would mandate user access to their own data in an exportable and preferably cross-platform-compatible format. Part of what keeps people on networks is lock-in and effort. Making it legally mandatory to make those transitions between networks easy is probably one of the only bits of social media-related law that would actually curb malfeasance (from users and platforms themselves).
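As a sketch of what "exportable and preferably cross-platform-compatible" might mean in practice - the format and field names here are invented for illustration; no actual standard is implied:

```python
import json

# Hypothetical portable export: plain JSON with self-describing fields.
# The format name and fields are invented for illustration only.

def export_posts(posts):
    """Serialize a user's posts into a portable, human-readable blob."""
    return json.dumps(
        {"format": "portable-social-export", "version": 1, "posts": posts},
        indent=2,
    )

posts = [
    {"created": "2024-05-01T12:00:00Z", "body": "hello world", "tags": ["test"]},
]

blob = export_posts(posts)
restored = json.loads(blob)   # any platform can round-trip this
print(restored["posts"][0]["body"])  # hello world
```

The point of mandating something this boring is that lock-in mostly lives in the *absence* of such a format, not in any technical difficulty.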
18 notes · View notes
hbhcgkycuovouoytftof · 3 days ago
Text
I’d like to post about something important and special to me in the hopes that perhaps someone might see it and derive meaning from it too: the Something Awful Forums. I want to talk about it because I think it’s the best place on the internet.
This is one of the oldest communities on the internet. Once upon a time it was the secondary component of a website mainly dedicated to ridiculing other stuff on the internet. That’s a vein of content that has led to a lot of terrible behavior over the years, and the Something Awful community has certainly had its bad moments. However, within it there has always been a certain strain of, well, I don’t want to say “niceness” per se because at the beginning it was really all about being mean, and we still take any opportunity to lightheartedly razz on each other when it comes up. Instead I would say that there’s always been a strain of accountability. “Don’t touch the poop” is a mantra nearly as old as the site itself, and what it means is "you can use this website as a platform to make fun of things that are silly or dumb, but you absolutely cannot use it as a harassment platform." This hasn’t always been executed perfectly (and in particular was not held to by the site’s original owner, good riddance) but it has guided the SA forums on a journey of evolution that has led to it becoming a less mean-spirited, less volatile, less divisive, less reactionary place while the rest of the internet has become (imo) more of those things. In the current iteration the number one rule at play is DON’T BE A DICK.
Nowadays the front page, which was once the main attraction dedicated to skewering other parts of the web, is gone except as an archive. The forums are now the main attraction, and they are now dedicated to simply discussing things you like. Posts go into threads sorted by topic, in chronological order. There is no algorithm. There are no bots. The rules are based on what will result in the best website for users rather than advertisers, so you don’t have to say unalive or sewer-slide. The site is attended to by an active, engaged moderation staff made of community members who are genuinely interested in making it the best community possible. If you see bigotry or harassment, you don’t have to fire off a report to a vaguely defined group of anonymous reviewers who may or may not do something about it. When a report is acted on, the moderator action automatically goes on a big list alongside the moderator’s name and their reasoning for the punishment. If someone is a nazi, they get fucking banned. If someone is a transphobe, they get fucking banned. If someone is an asshole, they get probated for a few days so they can cool off. The result is a forum full of people who are, generally speaking, chill and nice.
My favorite example of the evolutions of the site’s culture and userbase is that we used to say it was a “dead gay forum” to complain about how it was losing popularity, using gay as a pejorative because that’s edgy. Now? We still call it a dead gay forum, but positively because we love how dead it is (low pop makes for a tighter community) and we love how gay it is (the site has a significant queer population, myself included!)
... and now, I would like to cordially invite you to make the site a little less dead and a little more gay (or not) by joining us!
One of the ways that the SA Forums maintain quality is a high barrier to entry. Well, not that high but certainly high relative to the big sites. It costs ten dollars to make a forums account, which is required to read and post on the forums. However, to celebrate the site’s 25th anniversary, the management is doing something the forums have not done in a long time, a free weekend! This weekend only, you can register a forums account, read the forums, and post in a limited number of them, absolutely free!
If you are nostalgic for the Internet of the late 90s and early 2000s, or you never got to experience them and are curious what posting in them was like, this is your chance to do so. To celebrate the 25th anniversary the forums now have a new section designed to look and feel like the early days, but without all the casual homophobia and such. Same posting vibes, new posting sensibilities. If you are a new member, it is advised that you take the time to not just read the rules but also read existing threads and get a feel for what the forums are like before you post. Newbies beware, you may get flamed!!!
You’ve already read my pitch, but if you want some more convincing, here is a thread where SA posters talk about what the site means to them and why they like it. Some of these people have been posting on this website for longer than I’ve been alive.
But the forums aren’t just for posting about the forums, they’re also for posting about things you love! Here are a few threads dedicated to things I’m interested in, in case you share them.
Balatro:
Jerma:
Excellent birds:
Fanart (56k WARNING NOT DIALUP FRIENDLY):
That’s just a few of my favorite threads. If none of that interests you, check out the main page and start browsing! I guarantee you’ll find something you like. Take note that FYAD is kind of the “anything goes” subforum (still no nazis but you might see a picture of a penis or something) while every other subforum is more or less SFW. My favorite forums are Post your Favorite, Video Games, Ask/Tell, and Rapidly Going Deaf. I tend to steer clear of FYAD and the politics subforums since they’re really not to my taste. Thanks so much for reading, and I hope you’ll find a place on the only forums that acknowledge the truth that modern social media will not: The Internet Makes You Stupid!
2 notes · View notes
shartingan · 5 months ago
Note
apologies if this is kinda out of left field but what is your current opinion on ao3? me personally i do like to read and write fic on it due to the tagging system and other aspects but the amount of whack shit makes me uncomfortable and I do wish you could report harmful content easily (I think you can now but it's a bit of a process)
sorry this took so long to answer! i had to finish graduating college 😔 i tend not to think about ao3 too much or too in depth, and i don't really get involved in the discourse on this site about it. please take everything i say with a grain of salt because im uninformed and not really looking to involve myself more in this discourse! but i will gladly answer this ask with the information i do have because i think youre starting an interesting conversation and i like to talk ( •̀ .̫ •́ )✧! feel free to hmu in dms, anon.
under read more because i started to ramble a bit. tldr please stop donating to ao3 and donate to one of these vetted palestinian fundraisers instead, especially for eid
personally i agree with all that youve said about ao3 in your ask already. i wont get on a soapbox and moralize about fanfiction and stuff like that because, again, i really dont care, but also i think it is really neat to have these spaces where people can safely interact with IPs in a transformative way and share their works with others without the threat of legal action. that's not ao3s doing, but it does provide a hub for people to publish without ever running into the threat of legal action. and, like you said, the tagging system makes it really easy to sort through fics and find what you like vs. dislike.
my main problems with ao3 come from its mismanagement, and the fact that it really has no competitors. ive been on ao3 since 2015, and i think the only significant change i can remember in the 9 years ive used it was the fact that they added the ability to filter out tags along with filtering in some. that's literally it. there could definitely have been some significant behind the scenes additions im unaware of, but from a user's point of view, there has only been 1 update in 9 years that has majorly impacted the way i use the site. that seems like extreme mismanagement to me, especially since ao3 manages to meet and then significantly surpass all of the donation goals that they run pretty frequently. it seems like ao3 is making a lot of money which isn't being put into improving the site in any significant way. ao3 has been a site since 2008, its making a lot of money, and has a huge base of donors and volunteers, but it's still in beta? that seems like major mismanagement to me. again, this is just me speaking as a layperson not involved in the site itself.
also, it's pretty much become the only site that you can read fanfic on. there are sparse communities on tumblr, ff.net, and wattpad, but they're definitely a lot smaller and spread out. ao3 is the biggest and most centralized place for posting and reading fanfiction on the internet, meaning it virtually has no competition, and there's no alternatives that offer the same scale or quality. despite all that, it has almost no internal user moderation, and ao3 has actively been resistant to implementing those kinds of changes. that has resulted primarily in users of color getting targeted and attacked by racists. harassment is easy on ao3, and people can easily abuse the tagging system to do whatever they like. there's not even a perma-block tag feature, which is the first thing i would've implemented with the filter out feature. like you said, anon, if they're even implementing these changes, they're implementing them slowly, and behind a convoluted multi-step process. there's still little to no internal moderation or blocking system, which is just incredibly irresponsible.
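For what it's worth, the kind of tag filtering being described - include some tags, exclude others, with a persistent blocklist on top - is conceptually simple. A minimal sketch (hypothetical, not AO3's actual implementation; the function and field names are invented):

```python
# Hypothetical tag filter, not AO3's actual code. A work passes if it has
# every included tag, none of the excluded tags, and no permablocked tag.

def matches(work_tags, include=(), exclude=(), permablock=()):
    tags = set(work_tags)
    return (
        set(include) <= tags           # all include-tags present
        and not (set(exclude) & tags)  # no exclude-tags present
        and not (set(permablock) & tags)
    )

works = [
    {"title": "A", "tags": ["fluff", "angst"]},
    {"title": "B", "tags": ["fluff"]},
]

visible = [w["title"] for w in works
           if matches(w["tags"], include=["fluff"], exclude=["angst"])]
print(visible)  # ['B']
```

The permablock parameter is just a second exclude list that the site would persist across searches, which is part of why its long absence reads as a management problem rather than a technical one.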
ao3 almost has a monopoly on fanfiction on the internet, which means that it can kind of just do whatever, and no matter the gripes it knows people will keep coming back if they want fanfic because there's no alternative. and people keep chucking money at it because, again, its a hub for people to post and read fanfic without ever running into the threat of legal action because the website provides a safe, transformative works shield for authors. it's just insane how mismanaged the site is, but people keep giving it money despite that, so there's no real incentive for OTW to change anything or put in any effort. HOW IS IT STILL IN BETA. ITS BEEN 16 YEARS AND THEY'VE MADE MILLIONS OF DOLLARS. MINECRAFT CLASSIC WAS LAUNCHED IN 2009 AND THEY MOVED THROUGH ALPHA TO BETA TO A FULL LAUNCH AND THEN HUNDREDS OF UPDATES IN THE SAME TIME FRAME AND WITH A SMALLER TEAM AND FUNDS, INITIALLY😭😭😭 that's just insane to me, there's definitely something weird behind the scenes, how can they be this bad at site management, it's gotta be deliberate or something.
anyway, please stop donating to ao3 and donate to one of these vetted palestinian fundraisers instead, especially for eid
4 notes · View notes
dawnfelagund · 1 year ago
Text
Independent Archive Survey
What concerns about OTW/AO3 do you have?
Check all that apply.
the organization is slow to respond to fandom concerns: 59%
consolidation of most fandoms and fanworks onto AO3 increases the risk of a mass loss of fanworks: 57%
volunteer safety is not taken seriously enough: 45%
concerns about racism within the organization and AO3 are not being adequately addressed: 38%
the organization is slow to respond to individual fans who need their help: 28%
moderation of potentially harmful content is inadequate: 27%
the organization is not transparent enough about decisions: 22%
AO3 users' safety is not taken seriously enough: 18%
the AO3 code is not properly documented and maintained: 18%
organization leadership (e.g., Board members, Legal, committee chairs) wields too much power: 17%
I don't have any concerns about OTW/AO3 archives: 12% (note: 2 of the 10 respondents who chose this did select concerns from the list; eliminating these responses, 10% of respondents had no concerns)
I don't know: 2%
Responses in the "other" field:
Other projects besides AO3 seem to fall by the wayside (e.g. fanlore); AO3 is hostile to outside fixes for code problems; volunteers are burned through quickly; volunteers must go through an intensive onboarding process that weeds out people who actually want to help; functions of AO3 don't work as intended/advertised (the exchange interface, the prompt meme, tagsets)
I have concerns as noted but I also hope and want Ao3 to improve and succeed (while also supporting the existence of more archives!)
Moderation of illegal content is inadequate
My main concern with OTW is that it has grown too large as an organization/project to continue operating solely on volunteer labor. To be honest, most of their issues stem out from that main problem or are exacerbated by it, in my opinion. But it isn't some simple thing to start bringing on paid staff either. Anyway, in short, the org has outgrown its model, but switching to a new model will also take time and there will be more growing pains as a result before things improve.
Not enough moderation in general. Hard to remove/report harassing comments, spam fics, etc.
for how long it's been around, the feature set is surprisingly immature (e.g., blocking/muting is just now being added, the time-based posting bug)
No sense of community
The size makes for a lack of community; the weight placed on quantitative measures (work stats)
I use it too little to personally experience the negative effects, however I'll support people I know and trust who do.
administration of the site feels to far from the individual user
Responses: 82
Analysis
I hesitated to include this item at all. I really do not want this to become a small archive vs. AO3 issue or to be presented as an either-or. We can and should have both, and for the 999th time, I want the OTW and AO3 to succeed for a variety of reasons. However, getting a sense of concerns seemed important as we move forward into crafting next-generation small archives that meet the needs of their creators, visitors, and fandoms. So the question went in.
Not surprisingly, fewer people overall are concerned about OTW/AO3 than about small archives. About one in ten respondents had no concerns at all, and no single concern was selected as often as the top ones in the corresponding dataset for small archives. Again, this is not a surprise. Despite the past few months, many of the concerns on the OTW/AO3 list remain hypotheticals, whereas the concerns about small archives have happened at one time or another (if only because there have been thousands of small archives and just one AO3!). Furthermore, many of the concerns on this list were in response to some of the whistleblowing of recent months, and it's possible not all respondents were even aware of what was going on.
What were the concerns? Two dominated. The organization's slow response to fandom concerns was top—also not a surprise. It's nearly cliché to point out that the wheels of large bureaucracies grind slowly, and one needn't be versed in the latest discussions around the OTW to have seen this at some point in its almost fifteen-year history. I will note that this is an area where smaller archives can succeed ... though success isn't guaranteed, of course. On the SWG, it has always been policy to take no longer than twenty-four hours to respond to a task, question, or issue, and most of the time we are significantly quicker than that. (Sometimes actually fixing the issue takes longer, but even that is rare.) However, you have to commit to doing this. The potential is there (where I'd argue it's really never going to be for an organization the size of the OTW), but it needs to be realized.
Second was the worry about consolidation and the possibility of the mass loss of fanworks. I have been yelling about this for years, so I'll admit that it felt pretty good to see that those words haven't gone entirely unheeded. Is this unlikely? Yep. Is it possible? It is. Sorry, sweet summer children, it really is, and if it does happen, it will be devastating in a way that the closure of a small archive never will be. In the last dataset, about small archive concerns, I made the case that the data around archive closures possibly reflected the Tolkien fandom's "collective trauma" around the unannounced transfer of ownership or closure of small archives. (And I imagine most respondents participate in the Tolkien fandom; my signal boost wasn't passed that widely around.) Of course, this happens against a backdrop of Fandom's collective trauma around unannounced content purges. Point being, these possibilities are on our minds.
A couple of responses pair naturally between the small archive and OTW/AO3 datasets. There is much more worry about the technical stability of small archives than about AO3. Again, we've seen small archives fail and degrade due to tech issues, so this isn't hypothetical in the way it is for AO3, for all that's been said about spaghetti code. On leadership and the power given to a site's leaders, the two sets are remarkably even. This does surprise me! For all that's been revealed about the OTW's governance in recent months, they do have a process of governance that is more transparent than most archives', and they do offer points of democratic input, whereas many small sites do not.
The "Other" option was also used more for the OTW/AO3 dataset than for the small archive dataset and includes some interesting responses that elaborate on the concerns from the list and identify some new ones. A couple of mentions of "community" jump out at me here—and again, this is what small archives have to offer (potentially! again, "potential" and "actual" can be quite starkly divided) and what AO3 really cannot in most circumstances (and, I'd further add, was never intended to; I've argued before that a universal archive by definition cannot offer the community features many people want and need).
What is the independent archive survey?
The independent archive survey ran from 23 June through 7 July 2023. Eighty-two respondents took the survey during that time. The survey asked about interest in independent archives and included a section for participants interested in building or volunteering for an independent archive. The survey was open to all creators and readers/viewers of fanworks.
What is an independent archive?
The survey defined an independent archive as "a website where creators can share their fanworks. What makes it 'independent' is that it is run by fans but unaffiliated with any for-profit or nonprofit corporations or organizations. Historically, independent archives have grown out of fan communities that create fanworks."
Follow the tag #independent archives for more survey results and ongoing work to restore independent archives to fandoms that want them.
Independent Archives Survey Masterpost
16 notes
swinstudent · 2 years ago
Text
Week 10 - Digital Citizenship and Conflict: Social Media Governance
The ‘Snowflake generation’ (Haslop, O’Rourke & Southern 2021)—that's what we are. A generation offended by, let's say, offensive things. But hey! It's nothing close to what the generations before us went through, right?
The term ‘Snowflake generation’ can be described as a somewhat derogatory way of labelling “young people” perceived for their “intolerance and over-sensitivity” (Haslop, O’Rourke & Southern 2021).
The term is often used by older generations who can’t understand the concern surrounding online harassment—and who, hypocritically, are often the very individuals who refuse to engage with these social media applications, so have no first-hand experience of the topic.
Digital conflict arises as different online communities demand a sense of power and, in doing so, diminish certain groups or individuals through online harassment.
Examples of online harassment include sexual harassment, impersonation, physical threats, and rumour spreading. These thrive in the digital realm because of the lack of accountability for individuals hidden behind screens seeking to impose power.
One of the most common examples of online harassment is sexual abuse in the form of unsolicited ‘dick pics’.
A sad common denominator amongst my female peers is that we have all either received a non-solicited dick pic or know someone close to us who has…gross!
Unfortunately, this is not the only example of female-targeted abuse online: “young women aged 18–19 years old were more likely to report degrading comments on their gender, sexual harassment and unwanted sexual requests” (Vitis & Gilmour 2017). However, being one of the most common forms, dick pics can be argued to be a power move in the hands of men, so there is no questioning “women and girls’ participation in online spaces…marked by concerns for their safety” (Vitis & Gilmour 2017).
The power thirst from men online reflects that of real life, where women similarly face higher levels of harassment—which raises the question, as Vitis and Gilmour (2017) put it, of “whether such harassment simply repeats real world gendered inequalities and tensions or whether it is a product of the Internet.”
Presently, attempts to minimise online harassment can be seen in social media governance, which on applications such as Instagram and TikTok takes a form you might be familiar with: ‘community guidelines.’
Personally, the times I have stumbled across community guideline restrictions, I haven't been their biggest fan—I've had very PG-rated TikTok videos removed for ‘explicit content’ that supposedly fell into one of the many brackets of ‘unsafe viewing’.
Although this is a step in the right direction and helps prevent explicit content, the governance can arguably be too strong—pushing the boundary of removing digital communities' freedom of expression. On the other hand, content moderation techniques imposed by social media applications such as Instagram highlight the need for corporate social responsibility: accountability has limited space to fall onto the individuals doing the bullying, so platforms need to assist in preventing it.
Another example of attempts to minimise digital conflict and harassment is legislation put in place to encourage safety online, such as the Online Safety Act 2021.
Yet harassment remains prevalent and expected on all forms of social media as we speak, proving a difficult problem to combat—so what can be done?
References - 
Haslop, C., O’Rourke, F., & Southern, R. (2021). #NoSnowflakes: The toleration of harassment and an emergent gender-related digital divide, in a UK student online culture. Convergence, 27(5), 1418–1438.
Vitis, L. & Gilmour, F. (2017). ‘Dick pics on blast: A woman’s resistance to online sexual harassment using humour, art and Instagram’, Crime, Media, Culture, 13(3), 335–355.
6 notes
enidtheghost-sims · 2 years ago
Text
it seems that since the last update there’s been a lot of hostility going around especially toward modders trying to update their mods and moderators of sims communities, especially in discord, who are trying to help people who are having issues with the new update and pack. in response some larger discord servers have decided to “close” for a few days in hopes that people will calm down and some cc creators and modders have considered quitting and no longer uploading and updating their content publicly.
since the situation is so bad that others have tapped out and because i’m kind of a veteran sims player i thought i could give some advice if you’re having problems with your game and mods/cc, so here’s some steps that you might want to take instead of harassing people who are trying to help make your gaming experience better for free:
back up your saves and tray files. sometimes patches are broken and can fuck up your saves and even your tray files (your lots and sims from your library). before loading your game, back those up.
remove your entire mods folder from your game before you open it for the first time after the patch.
play the vanilla game to make sure that there are no problems with your game w/o the mods.
check to see if the mods you use have been updated since the patch was released. check if the modder has made a statement about when they might be able to update. check if others in the community have reported that the mods you use are broken with the patch and need to be updated. if you’re part of a larger discord server or forum that often helps people with these problems, check announcements, pinned posts, and other information already made available on where or how to find out if your mods need to be updated.
if available, update the mods you use for the patch. if not available, wait until the modders are able to update their mods. don’t harass them as they are people who have to take out time in their day to do (usually) unpaid work for the love of making the game a fun experience for others in the community. keep it a fun experience for them so they’ll want to keep sharing their work with you.
if your game is still broken even though it ran fine when you tested the vanilla game and you’ve updated all your mods, use the 50/50 method to see which mod still isn’t working. remove half of your mods at a time to see which group the problem mod is in, then split that group 50/50, and continue doing that until you’ve narrowed down the mod. once you’re sure that’s the mod, then you can contact the modder to ask if there’s a mod conflict (assuming you’ve already read any and all info about known conflicts before using the mod and that’s not the problem) or to let them know of a persisting issue. be polite and wait for a solution.
if you’re struggling with doing any of these steps, ask for help in a sims 4 based discord, subreddit, or other forum. be polite because these people are volunteering their time to help you. remember that they’re not being paid and they don’t owe anyone their time. they’re usually helping because they love the community and want to help others have fun with their game. keep it a good experience for them and they’ll continue to want to help you in the future.
if you’ve decided no, you don’t want to check your vanilla game, you’re not going to check for updates or broken mods or mod conflicts, and you’re not going to remove your mods folder to play the game, don’t pester others with whatever problems you’ve chosen to create for yourself. and especially, don’t be a prick because you created a bad situation for yourself.
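for the programmers out there, the 50/50 method above is just a manual binary search. here's a rough sketch of the idea in python — `game_breaks` is hypothetical and stands in for you launching the game with only that subset of mods in your folder and checking for the problem:

```python
def find_broken_mod(mods, game_breaks):
    """binary-search for the single broken mod.

    `game_breaks(subset)` is hypothetical: it stands in for manually
    testing the game with only `subset` enabled. assumes one broken
    mod rather than a conflict between two mods.
    """
    candidates = list(mods)
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        if game_breaks(half):
            candidates = half  # the culprit is in this half
        else:
            # problem didn't appear, so it must be in the other half
            candidates = candidates[len(candidates) // 2 :]
    return candidates[0]
```

with 100 mods this narrows the culprit down in about seven test launches instead of a hundred, which is why the 50/50 method is worth the hassle.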
hope that helps! stay safe out there, be kind to one another, give the modders and moderators some room to breathe, and once they’re in a good space, don’t forget to show them love for the work they do for free to make the community a better place and to help others have a fun time with the game.
2 notes
juliadotsia · 6 months ago
Text
Week 11: Digital Citizenship and Conflict: Social Media Governance
Hello guys welcome back! Have you ever thought about how social media influences the way we act online and how it deals with drama, especially harassment and cyberbullying? Let’s explore digital citizenship, social media governance, and how they relate to online harassment!
First of all, let’s talk about what is digital citizenship. Digital citizenship involves using technology responsibly, safely, and respectfully. It includes protecting private information online, reducing risks from cyber threats, and using information and media in a respectful, informed, and legal manner (Digital Citizenship: What it is & What it Includes | Learning.com 2023). While we live and interact online much like we do offline, we aren't always mindful of our online actions. Sometimes we act without considering the impact on our reputation, safety, and growth as digital citizens. Meanwhile, everything we do online shapes our digital world and identity. Social Media Governance refers to the guidelines, methods, and steps that social media platforms use to control and oversee how users behave and what they post on their platforms (Murthy 2024). The purpose of social media governance is to keep the internet a safe, respectful, and legal place to be while also protecting user rights and free speech.
Did you know that using social media unwisely can lead to harassment and cyberbullying? These are serious issues that affect many people online. Harassment involves repeated, unwanted behavior that makes someone feel intimidated or threatened. Cyberbullying is a type of harassment that happens specifically online. It can include spreading rumors, sending threatening messages, or publicly humiliating someone. Both can have devastating effects on victims, leading to anxiety, depression, and even suicidal thoughts. Based on my research, over 3,000 cyberbullying cases were reported in Malaysia in 2023 (Rashidi 2024). MCMC also found that Facebook was the most common platform for cyberbullying, with 1,401 reports; WhatsApp came in second with 667, followed by Instagram with 388, TikTok with 258 and X with 159. Both adults and teenagers reported being harassed within the past 12 months, up from 23% in 2022 to 33% in 2023 for adults, and from 36% to 51% for teenagers (Wahab 2023). Social media platforms like Facebook, Twitter, and Instagram have their own rules to keep things in check. These rules are supposed to prevent stuff like cyberbullying, fake news, and hate speech. But let's be real—these platforms don't always get it right. They have to balance free speech with the need to protect users from harmful content.
One big part of social media governance is content moderation. This is where platforms use a mix of algorithms and human moderators to check what’s being posted (Content Moderation Justice and Fairness on Social Media: Comparisons Across Different Contexts and Platforms 2024). While AI can catch some bad stuff, it’s not perfect. Human moderators have the tough job of deciding what stays and what goes, which can be super stressful. One example of where conflicts often pop up is misinformation and fake news, which lead to misunderstandings and disputes. Users may share unverified information, which can cause disagreements over what is true (The Role of Social Media in Modern Conflicts n.d.).
Let’s talk about the legal framework for cyberbullying in Malaysia. Currently, Malaysia has no legal provision that specifically tackles cyberbullying. However, we can refer to Section 233 of the Communications and Multimedia Act 1998 (improper use of network services), which covers making any comment that is considered offensive, abusive, and intended to harass another person; if convicted, an offender can be fined not more than RM50,000, imprisoned for up to one year, or both (Iskandar 2023). Malaysia’s government should take cyberbullying seriously and enact a specific law against cyberbullying.
So, what can we do as digital citizens? We can start by being mindful of what we post and how we interact with others. Think twice before sharing something controversial or unverified. Be respectful in your comments and conversations. And if you see something harmful, report it. Together, we can make social media a better place.
Reference
Content Moderation Justice and Fairness on Social Media: Comparisons Across Different Contexts and Platforms, 2024, arXiv.org e-Print archive, viewed 27 May 2024. Available at: https://arxiv.org/html/2403.06034v1#:~:text=To%20fight%20harmful%20content%20and,screening%20of%20user-generated%20content%20
Digital Citizenship: What it is & What it Includes | Learning.com, 2023, Learning.com, viewed 27 May 2024. Available at: https://www.learning.com/blog/what-is-digital-citizenship/
Iskandar, I. M., 2023, ‘Activists want ambiguity in Communications and Multimedia Act cleared up’, NST Online, viewed 27 May 2024. Available at: https://www.nst.com.my/news/nation/2023/03/885208/activists-want-ambiguity-communications-and-multimedia-act-cleared
Murthy, S., 2024, ‘Social Media Governance: 9 Essential Components’, Sprinklr, viewed 27 May 2024. Available at: https://www.sprinklr.com/blog/social-media-governance/#:~:text=A%20social%20media%20governance%20plan%20is%20a%20structured%20framework%20for,risks%20associated%20with%20social%20media
Rashidi, Q. N. M., 2024, ‘Over 3,000 cyberbullying complaints recorded in 2023’, thesun.my, viewed 27 May 2024. Available at: https://thesun.my/local_news/over-3000-cyberbullying-complaints-recorded-in-2023-AK12097214
The Role of Social Media in Modern Conflicts, n.d., PCRF, viewed 27 May 2024. Available at: https://www.pcrf.net/information-you-should-know/item-1707234928.html
Wahab, F., 2023, ‘Cyberbullying laws need more bite’, The Star, viewed 27 May 2024. Available at: https://www.thestar.com.my/metro/metro-news/2023/07/01/cyberbullying-laws-need-more-bite
0 notes
mikkeneko · 1 year ago
Text
I have largely not been engaging with the latest go-around of End OTW Racism, in part because I don't feel it's really my lane, but mostly because I do not think the things they are asking for are going to solve the problem they want to solve, and they have great potential to cause more problems.
The above post has a lot of very detailed breakdown on why content moderation on the scale of the Archive is not feasible with their current budget and infrastructure, but I really want to highlight and delve more into the issue they briefly touched on of malicious reporting.
I have been in fandom in one form or another since the turn of the century, and the one thing I have seen repeatedly -- time and time again, with evolving vocabulary falling into the exact same structures -- is that fans will weaponize any available tool to attack other fans. They have done, they still do, they will continue to do this.
I can see a position that says, well that's an acceptable collateral for the greater good of reducing racial harm. But I also believe, pretty firmly, that POC creators are going to be the ones hurt most by this. This will happen for three reasons:
POC creators are more likely to be targeted for harassment in the first place, because racism (there's just 'something they can't put their finger on' about this person that they don't like, that drives people to dig through their work and biography for excuses;)
Non-American creators are more likely to be POC creators, and non-American fans are the most likely to be not English speakers, not steeped in American purity culture, and not up on the shibboleths of whatever is considered Right And Good on (american) social media this week;
POC creators are more likely to be engaging with POC characters, works, OCs and themes in the first place, and are already being policed for performing their ethnicity to the audience's satisfaction
Here are a few things I have seen happen in the past few years, and these are only things that I have directly seen happen in real time, not getting into stuff that may happen in other channels I'm not witness to:
Asian artists being targeted and harassed for "whitewashing" Asian characters (eg, drawing them Not Asian Enough; there's an entire conversation to be had on marked vs unmarked styles of depiction that I don't have space for here;)
SEA artists being targeted and harassed for drawing characters at the same level of color as the character's official art, when the community has decided that they should be darker (see also the American hangup of judging a character's ethnic representation according to color of skin and color of skin only;)
Black creators being targeted and harassed for making "fetish" content of OCs and characters based on their own selves.
In some of these cases, that things got so out of hand could be attributed to well-meant but misguided zealotry. But in the majority of harassment cases I've seen over two decades in this fandom, the single driving root of every conflict is that a fan decided they did not like the way another creator shipped, drew, or wrote their favorite character, so they looked around for a suitably hefty stick to whack them with; and that stick might be wrapped in a concealing cellophane of social justice but it's still going to be used as a weapon.
Fans are not going to stop doing this. POC creators are going to continue to be in the crosshairs. I do not see any measurable benefit in giving them bigger and heftier sticks, or in trying to pull AO3 moderators into these arguments as referees.
For those who might happen across this, I'm an administrator for the forum 'Sufficient Velocity', a large old-school forum oriented around Creative Writing. I originally posted this on there (and any reference to 'here' will mean the forum), but I felt I might as well throw it up here, as well, even if I don't actually have any followers.
This week, I've been reading fanfiction on Archive of Our Own (AO3), a site run by the Organization for Transformative Works (OTW), a non-profit. This isn't particularly exceptional, in and of itself — like many others on the site, I read a lot of fanfiction, both on Sufficient Velocity (SV) and elsewhere — however what was bizarre to me was encountering a new prefix on certain works, that of 'End OTW Racism'. While I'm sure a number of people were already familiar with this, I was not, so I looked into it.
What I found... wasn't great. And I don't think anyone involved realises that.
To summarise the details: the #EndOTWRacism campaign, whose manifesto you may find here, is a campaign oriented towards seeing hateful or discriminatory works removed from AO3 — and believe me, there is a lot of it. To wit, they want the OTW to moderate them. A laudable goal, on the face of it — certainly, we do something similar on Sufficient Velocity with Rule 2 and, to be clear, nothing I say here is a critique of Rule 2 (or, indeed, Rule 6) on SV.
But it's not that simple, not when you're the size of Archive of Our Own. So, let's talk about the vagaries and little-known pitfalls of content moderation, particularly as it applies to digital fiction and at scale. Let's dig into some of the details — as far as credentials go, I have, unfortunately, been in moderation and/or administration on SV for about six years and this is something we have to grapple with regularly, so I would like to say I can speak with some degree of expertise on the subject.
So, what are the problems with moderating bad works from a site? Let's start with discovery— that is to say, how you find rule-breaching works in the first place. There are more-or-less two different ways to approach manual content moderation of open submissions on a digital platform: review-based and report-based (you could also call them curation-based and flag-based), with various combinations of the two. Automated content moderation isn't something I'm going to cover here — I feel I can safely assume I'm preaching to the choir when I say it's a bad idea, and if I'm not, I'll just note that the least absurd outcome we had when simulating AI moderation (mostly for the sake of an academic exercise) on SV was banning all the staff.
In a review-based system, you check someone's work and approve it to the site upon verifying that it doesn't breach your content rules. Generally pretty simple, we used to do something like it on request. Unfortunately, if you do that, it can void your safe harbour protections in the US per Myeress vs. Buzzfeed Inc. This case, if you weren't aware, is why we stopped offering content review on SV. Suffice to say, it's not really a realistic option for anyone large enough for the courts to notice, and extremely clunky and unpleasant for the users, to boot.
Report-based systems, on the other hand, are something we use today — users find works they think are in breach and alert the moderation team to their presence with a report. On SV, this works pretty well — a user or users flag a work as potentially troublesome, moderation investigate it and either action it or reject the report. Unfortunately, AO3 is not SV. I'll get into the details of that dreadful beast known as scaling later, but thankfully we do have a much better comparison point — fanfiction.net (FFN).
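To make the flow concrete, here is a minimal sketch of the report-then-review cycle described above. The names and structure are my own invention for illustration — not SV's or AO3's actual code:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Status(Enum):
    OPEN = auto()       # awaiting moderator review
    ACTIONED = auto()   # moderator agreed: the work breaches the rules
    REJECTED = auto()   # moderator disagreed: report dismissed

@dataclass
class Report:
    work_id: int
    reporter: str
    reason: str
    status: Status = Status.OPEN

class ReportQueue:
    def __init__(self):
        self.reports: list[Report] = []

    def flag(self, work_id: int, reporter: str, reason: str) -> Report:
        # a user flags a work they believe breaches the content rules
        report = Report(work_id, reporter, reason)
        self.reports.append(report)
        return report

    def resolve(self, report: Report, breaches_rules: bool) -> None:
        # a moderator investigates, then either actions the work
        # or rejects the report
        report.status = Status.ACTIONED if breaches_rules else Status.REJECTED
```

The important property is that nothing enters the queue until a user flags it — which is exactly why discovery, and the rubric moderators apply in `resolve`, carry all the weight.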
FFN has had two great purges over the years, with a... mixed amount of content moderation applied in between: one in 2002 when the NC-17 rating was removed, and one in 2012. Both, ostensibly, were targeted at adult content. In practice, many fics that wouldn't raise an eyebrow on Spacebattles today or Sufficient Velocity prior to 2018 were also removed; a number of reports suggest that something as simple as having a swearword in your title or summary was enough to get you hit, even if you were a 'T' rated work. Most disturbingly of all, there are a number of — impossible to substantiate — accounts of groups such as the infamous Critics United 'mass reporting' works to trigger a strike and get them removed. I would suggest reading further on places like Fanlore if you are unfamiliar and want to know more.
Despite its flaws however, report-based moderation is more-or-less the only option, and this segues neatly into the next piece of the puzzle that is content moderation, that is to say, the rubric. How do you decide what is, and what isn't against the rules of your site?
Anyone who's complained to the staff about how vague the rules are on SV may have had this explained to them, but as that is likely not many of you, I'll summarise: the more precise and clear-cut your chosen rubric is, the more it will inevitably need to resemble a legal document — and the less readable it is to the layman. We'll return to SV for an example here: many newer users will not be aware of this, but SV used to have a much more 'line by line, clearly delineated' set of rules and... people kind of hated it! An infraction would reference 'Community Compact III.15.5' rather than Rule 3, because it was more or less written in the same manner as the Terms of Service (sans the legal terms of art). While it was a more legible rubric from a certain perspective, from the perspective of communicating expectations to the users it was inferior to our current set of rules — even fewer of them read it, and we don't have great uptake right now.
And it still wasn't really an improvement over our current set-up when it comes to 'moderation consistency'. Even without getting into the nuts and bolts of "how do you define a racist work in a way that does not, at any point, say words to the effect of 'I know it when I see it'" — which is itself very, very difficult don't get me wrong I'm not dismissing this — you are stuck with finding an appropriate footing between a spectrum of 'the US penal code' and 'don't be a dick' as your rubric. Going for the penal code side doesn't help nearly as much as you might expect with moderation consistency, either — no matter what, you will never have a 100% correct call rate. You have the impossible task of writing a rubric that is easy for users to comprehend, extremely clear for moderation and capable of cleanly defining what is and what isn't racist without relying on moderator judgement, something which you cannot trust when operating at scale.
Speaking of scale, it's time to move on to the third prong — and the last covered in this ramble, which is more of a brief overview than anything truly in-depth — which is resources. Moderation is not a magic wand, you can't conjure it out of nowhere: you need to spend an enormous amount of time, effort and money on building, training and equipping a moderation staff, even a volunteer one, and it is far, far from an instant process. Our most recent tranche of moderators spent several months in training and it will likely be some months more before they're fully comfortable in the role — and that's with a relatively robust bureaucracy and a number of highly experienced mentors supporting them, something that is not going to be available to a new moderation branch with little to no experience. Beyond that, there's the matter of sheer numbers.
Combining both moderation and arbitration — because for volunteer staff, pure moderation is in actuality less efficient in my eyes, for a variety of reasons beyond the scope of this post, but we'll treat it as if they're both just 'moderators' — SV presently has 34 dedicated moderation volunteers. SV hosts ~785 million words of creative writing.
AO3 hosts ~32 billion.
These are some very rough and simplified figures, but if you completely ignore all the usual problems of scaling manpower in a business (or pseudo-business), such as (but not limited to) geometrically increasing bureaucratic complexity and administrative burden, along with all the particular issues of volunteer moderation... AO3 would still need well over one thousand volunteer moderators to be able to match SV's moderator-to-creative-wordcount ratio.
With paid moderation, of course, you can get away with fewer people — my estimate is that you could fully moderate SV with as few as ~8 full-time moderators, still ignoring administrative burden above the level of team leader. That leaves AO3 needing a much more modest ~350 moderators. At the US minimum wage of ~$15k p.a. — which is, in my eyes, deeply unethical to pay moderators, as full-time moderation is an intensely gruelling role with extremely high rates of PTSD and other stress-related conditions — that comes to roughly $5.25m p.a. in moderator wages alone. AO3's average annual budget is a bit over $500k.
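The scaling arithmetic above is easy to check. A quick sketch using the post's own rough figures — 785 million and 32 billion words, 34 volunteers, ~8 paid moderators, ~$15k p.a. are all this post's estimates, not verified data:

```python
# Reproducing the post's back-of-the-envelope staffing estimates.
# Every input figure is the post's own rough estimate, not verified data.

sv_words = 785e6          # SV's hosted creative wordcount (~785 million)
ao3_words = 32e9          # AO3's hosted wordcount (~32 billion)
sv_volunteers = 34        # SV's dedicated moderation volunteers
sv_paid_ft = 8            # estimated full-time paid moderators to cover SV
wage_per_year = 15_000    # approximate US minimum wage, full time, p.a.

scale = ao3_words / sv_words                # ~40.8x SV's corpus
volunteers_needed = sv_volunteers * scale   # ~1386 -> "well over one thousand"
paid_needed = sv_paid_ft * scale            # ~326 -> rounded up to ~350
annual_wage_bill = 350 * wage_per_year      # ~$5.25m p.a., vs a ~$500k budget

print(f"{scale:.1f}x corpus, {volunteers_needed:.0f} volunteers, "
      f"{paid_needed:.0f} paid moderators, ${annual_wage_bill:,} p.a.")
```

Note that this deliberately ignores everything the post flags as ignored — administrative overhead, language coverage, and the backlog of already-posted words — so these are floors, not estimates of the real cost.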
So, that's obviously not on the table, and we return to volunteer staffing. Which... let's examine that scenario and the questions it leaves us with, as our conclusion.
Let's say, through some miracle, AO3 succeeds in finding those hundreds and hundreds and hundreds of volunteer moderators. We'll even say none of them are malicious actors or sufficiently incompetent as to be indistinguishable, and that they manage to replicate something on the level of or superior to our moderation tooling near-instantly at no cost. We still have several questions to be answered:
How are you maintaining consistency? Have you managed to define racism to the point that moderator judgment no longer enters the equation? And to be clear, you cannot allow moderator judgment to be a significant decision-maker at this scale, or you will end up with absurd results.
How are you handling staff mental health? Some reading on the matter, to save me a lengthy and unrelated explanation of some of the steps involved in ensuring mental health for commercial-scale content moderators.
How are you handling your failures? No moderation team in the world has ever achieved a 100% accuracy rate — what are you doing about that?
Using report-based discovery, how are you preventing 'report brigading', such as the theories surrounding Critics United mentioned above? It is a natural human response to take into account the amount and severity of feedback. While SV moderators are well trained on the matter, the rare occasions when something receives enough reports to potentially be classified as a 'brigade' are nearly always escalated to administration — something completely infeasible at (you're learning to hate this word, I'm sure) scale.
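To make the brigading problem concrete, here is one hypothetical heuristic for surfacing possible brigades for human review. This is invented for illustration — not SV's or AO3's actual tooling — and the class name, thresholds and defaults are all made up:

```python
# Hypothetical brigade-detection heuristic: flag a work for human review
# when its recent report rate far exceeds its long-run baseline.
# Structure and thresholds are illustrative, not any site's real tooling.
from collections import deque
import time

class ReportTracker:
    def __init__(self, window_secs=86_400, spike_factor=10, min_reports=20):
        self.window_secs = window_secs    # size of the "recent" window (1 day)
        self.spike_factor = spike_factor  # multiple of baseline that counts as a spike
        self.min_reports = min_reports    # ignore tiny absolute counts
        self.reports = {}                 # work_id -> deque of report timestamps
        self.baseline = {}                # work_id -> long-run reports per day

    def add_report(self, work_id, now=None):
        now = now if now is not None else time.time()
        q = self.reports.setdefault(work_id, deque())
        q.append(now)
        # drop reports that have aged out of the recent window
        while q and now - q[0] > self.window_secs:
            q.popleft()

    def possible_brigade(self, work_id):
        recent = len(self.reports.get(work_id, ()))
        base = self.baseline.get(work_id, 1.0)  # assume ~1 report/day if unknown
        return recent >= self.min_reports and recent >= self.spike_factor * base
```

Even then, a flag like this could only feed a human escalation queue rather than trigger automatic action — which lands you right back in the judgment-and-staffing bottleneck being described.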
How are you communicating expectations to your user base? If you're relying on a flag-based system, your users' understanding of the rules is a critical facet of your moderation system — how have you managed to make them legible to a layman while still managing to somehow 'truly' define racism?
How are you managing over one thousand moderators? Like even beyond all the concerns with consistency, how are you keeping track of that many moving parts as a volunteer organisation without dozens or even hundreds of professional managers? I've ignored the scaling administrative burden up until now, but it has to be addressed in reality.
What are you doing to sweep through your archives? SV is more or less on top of 'old' works as far as rule-breaking goes, with the occasional forgotten tidbit popping up every 18 months or so — and that's what we're extrapolating from. These thousand-plus moderators are mostly going to be addressing current or near-current content; are you going to spin up that many again to comb through the 32 billion words already posted?
I could go on for a fair bit here, but this has already stretched out to over two thousand words.
I think the people behind this movement have their hearts in the right place and the sentiment is laudable, but in practice it is simply 'won't someone think of the children' in a funny hat. It cannot be done.
Even if you could somehow meet the bare minimum thresholds, you are simply not going to manage a ruleset of sufficient clarity to prevent a much worse repeat of the 2012 FF.net massacre, you are not going to be able to manage a moderation staff of that size, and you are not going to be able to ensure a coherent understanding among all your users (we haven't managed that after nearly ten years with a much smaller and more engaged userbase). There's a serious number of other issues I haven't covered here as well, as this really is just an attempt at giving some insight into the sheer number of moving parts behind content moderation: the movement wants off-site content to be policed, which isn't so much its own barrel of fish as its own barrel of Cthulhu; AO3 is far from English-only and would in actuality need moderators for almost every language it supports; and — most damning of all — if Section 230 is wiped out by the Supreme Court, it is not unlikely that engaging in content moderation at all could simply see AO3 shut down.
As sucky as it seems, the current status quo really is the best situation possible. Sorry about that.
3K notes · View notes
absynthe--minded · 4 years ago
Text
wattpad vs. ao3
so this is an examination of Wattpad as an alternative to Archive of our Own, largely in response to the ongoing criticisms of AO3 when it comes to their content policy and what’s permitted onsite in terms of tropes and ratings. I’m not going to be talking about anything in the context of the completely separate and justified debate about how Archive staff handles racism and racist harassment. First off, I agree that AO3 needs to take more action against racist commenters and stories intended to harass fans of color (I’ve received a few comments like that myself) and second off, I don’t know how Wattpad handles racism.
I’m pro-AO3, but I do believe that if people have problems with AO3, they should be free to leave the platform and find something that suits their needs and wants better, and no one has brought up Wattpad in these conversations, which I think is a shame.
Wattpad:
commercial site with ads and a premium membership option
general fiction focus with fanfiction section (not a dedicated fic archive)
mobile-friendly with a dedicated app on App Store and Play Store
basic user tagging (think Tumblr, Instagram) with some native filtering
allows for user blocking
community forums on-site with direct messaging feature
RTF-only text input (no HTML editing)
native image support, including gifs and video files
ability to upload custom art in-story and as a cover for your fic
no native self-archiving/story download feature unless you’re the author
extremely large userbase, with popular fics getting hundreds of thousands of hits regularly
primarily M/F, including large amounts of selfshipping, reader insert, and canon/OC romance
site demographic skews young, with many adolescents “aging out” and moving to FFN or AO3
comprehensive, well-enforced content policy restricting and banning many story concepts and thematic elements, including erotica, all underage stories where participants are younger than sixteen, and glorification of suffering such as self-harm or sexual violence. encourages users to report stories that violate TOS.
basic content rating system, with the requirement to tag stories as mature to warn of adult content that is permitted in the TOS, including sex scenes that are part of the plot, sexual violence or dark themes that aren’t written about from a perspective of horror or condemnation, etc. no option to opt out of ratings.
can and will delete stories that are found to be in violation of the TOS, or will render them private and viewable only to the author.
Archive of our Own:
nonprofit organization with no ads or premium options for site members
dedicated fanfiction archive, though original works and nonfiction about fannish things are permitted
mobile friendly to an extent, no apps of any kind
comprehensive, thorough tagging system custom-built for maximum user customization and labeling. enables native filtering for all tags, always present and usable regardless of searches or preferences
no current options for user blocking, though change may come
no forums, direct messages, or social element except comments on fics, which can be moderated and deleted or turned off by the author
supports RTF and HTML text input for stories
limited image embedding, requiring offsite hosting and HTML editing for mobile viewers
no native image upload feature or ability to create “covers” for stories
allows the option to download all fics in multiple formats
large userbase but fics with hundreds of thousands of hits are relatively rare, and subcommunities/fandoms have different standards for a “popular” fic
primarily M/M on a sitewide basis but most popular ships and story styles vary based on fandom.
site demographic skews older than Wattpad, with many users considering themselves “fandom olds” or being present since the site’s launch
allows anything to be written and published in their stories, with content policies banning user harassment and photographs of illegal pornography. users are expected to accept that they might see fics in the listings that upset or disgust or squick them on some level, and tag filtering/external browser extensions are expected to be implemented by the user to block out upsetting content
comprehensive rating system, with fics expected to be tagged and rated and warned for accordingly. option given to opt out of warnings and ratings entirely with “Unrated” and “Choose Not To Warn” categories
will rarely delete stories, and will never do so without warning and emailing the author a copy of their fic along with an explanation for why it was deleted
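The "tag it thoroughly, let readers filter" model described in the list above can be sketched in a few lines. Titles, tags and ratings here are invented examples, and this is an illustration of the filtering concept, not AO3's actual search code:

```python
# Minimal sketch of reader-side tag filtering: the archive hosts everything,
# and the reader excludes the tags and ratings they don't want to see.
# Works, tags and ratings below are invented examples.

works = [
    {"title": "Fic A", "rating": "Teen", "tags": {"Fluff", "Alternate Universe"}},
    {"title": "Fic B", "rating": "Explicit", "tags": {"Graphic Violence"}},
    {"title": "Fic C", "rating": "Mature", "tags": {"Angst", "Major Character Death"}},
]

def filter_works(works, blocked_tags=frozenset(), blocked_ratings=frozenset()):
    """Return only works carrying none of the blocked tags or ratings."""
    return [
        w for w in works
        if w["rating"] not in blocked_ratings and not (w["tags"] & blocked_tags)
    ]

visible = filter_works(works, blocked_tags={"Major Character Death"},
                       blocked_ratings={"Explicit"})
print([w["title"] for w in visible])   # -> ['Fic A']
```

The design point is that exclusion happens at read time, per reader, so the same archive can serve users with very different tolerances — which is exactly the trade-off the comparison above is weighing against Wattpad's host-side bans.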
Wattpad’s Content Policy:
The full policy is linked above, but Wattpad explicitly bans purely pornographic content, graphic self-harm, suicide, hate speech, underage sex where one party is younger than sixteen (the age of consent in Canada), sex with animals, revenge porn, sexual solicitation/roleplaying, and harassment of other site users, among other things. Stories cannot focus on sexual violence in a positive way, and sex scenes must meet content standards even in mature-rated stories. This is in contrast to AO3, which (as stated above) doesn’t have bans against any of this. Their TOS FAQ is linked here, and contains extensive discussion of their content policy, while affirming that they believe in the user opting out of content they dislike rather than banning that content on principle. I can confirm anecdotally that they do take action against embedded photographic images of illegal pornography, but that’s the only ban they seem to have.
My final conclusion is that abandoning AO3 for Wattpad sacrifices user-friendliness and an extremely comprehensive tagging system that will get you exactly the results you want, in exchange for a heavily moderated, much less risky experience with sitewide standards designed to protect users from graphic or controversial content. Both have fun interfaces, and both are easy to use, but I personally would recommend Wattpad to anyone who felt AO3 was too free and open with the kind of stories it permits on its site.
78 notes · View notes
dodstoldpackage · 4 years ago
Text
Campaign Against Reddit’s Current TOS
Right now, Reddit's TOS does not outline any protections from harassment on their website; most of it is content- and copyright-based. While you can report comments as harassment without an account, and posts with an account, you cannot report subreddits, and all comment and post reports go through subreddit moderators — which has allowed subreddits such as r/DIDCringe to continue to thrive. This is not cool, and there shouldn't be communities centred around harassment, calling people cringey, and fakeclaiming others. The only mention of attacking others in their TOS is directed at moderators, telling them not to attack their own members. This does not stop them from running communities that thrive on the harassment of others.
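The structural problem described above — reports about a community being routed to that same community's own moderators — can be modeled in a few lines. This is a simplified illustration of the routing described in the post, not Reddit's actual architecture:

```python
# Simplified model of the report-routing problem: a report on a post goes
# to the moderators of the subreddit hosting it, so a community organized
# around harassment reviews its own reports. Illustration only, not
# Reddit's real system; names and structure are invented.

def route_report(report, subreddits):
    """Deliver a post report to the moderators of its host subreddit."""
    sub = subreddits[report["subreddit"]]
    sub["mod_queue"].append(report)
    return sub["moderators"]          # the only reviewers who see it

subreddits = {
    "r/example": {"moderators": ["mod_a", "mod_b"], "mod_queue": []},
}
reviewers = route_report(
    {"post_id": 1, "reason": "harassment", "subreddit": "r/example"},
    subreddits,
)
print(reviewers)   # -> ['mod_a', 'mod_b']
```

In this model there is no site-level escalation path at all; whether a report ever reaches the platform depends entirely on the goodwill of the moderators it is complaining about.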
Because of this, I have made my own change.org petition to combat this.
There is already a petition to get r/DIDCringe taken down, but I am looking for a more permanent solution to the thriving of this and other similar subreddits.
Below are some screenshots of Reddit’s TOS that are important, which you can read in full here.
Below the cut are the Image IDs for those with screen readers.
[Image 1 ID: 6. Things You Cannot Do. When using or accessing Reddit, you must comply with these Terms and all applicable laws, rules, and regulations. Please review the Content Policy (and for RPAN, the Broadcasting Content Policy), which are part of these Terms and contain Reddit’s rules about prohibited content and conduct. In addition to what is prohibited in the Content Policy, you may not do any of the following: Use the Services in any manner that could interfere with, disable, disrupt, overburden, or otherwise impair the Services. Gain access to (or attempt to gain access to) another user’s Account or any non-public portions of the Services, including the computer systems or networks connected to or used together with the Services. Upload, transmit, or distribute to or through the Services any viruses, worms, malicious code, or other software intended to interfere with the Services, including its security-related features.]
[Image 2 ID: Use the Services to violate applicable law or infringe any person’s or entity’s intellectual property rights or any other proprietary rights. Access, search, or collect data from the Services by any means (automated or otherwise) except as permitted in these Terms or in a separate agreement with Reddit. We conditionally grant permission to crawl the Services in accordance with the parameters set forth in our robots.txt file, but scraping the Services without Reddit’s prior consent is prohibited. Use the Services in any manner that we reasonably believe to be an abuse of or fraud on Reddit or any payment system. We encourage you to report content or conduct that you believe violates these Terms or our Content Policy. We also support the responsible reporting of security vulnerabilities. To report a security issue, please email [email protected].]
[Image 3 ID: 7. Moderators. Moderating a subreddit is an unofficial, voluntary position that may be available to users of the Services. We are not responsible for actions taken by the moderators. We reserve the right to revoke or limit a user’s ability to moderate at any time and for any reason or no reason, including for a breach of these Terms. If you choose to moderate a subreddit: You agree to follow the Moderator Guidelines for Healthy Communities; You agree that when you receive reports related to a subreddit you moderate, you will take appropriate action, which may include removing content that violates policy and/or promptly escalating to Reddit for review; You are not, and may not represent that you are, authorized to act on behalf of Reddit; You may not enter into any agreement with a third party on behalf of Reddit, or any subreddits that you moderate, without our written approval; You may not perform moderation actions in return for any form of compensation, consideration, gift, or favor from third parties;] [Outlined is: You agree that when you receive reports related to a subreddit you moderate, you will take appropriate action, which may include removing content that violates policy and/or promptly escalating to Reddit for review;]
[Image 4 ID: If you have access to non-public information as a result of moderating a subreddit, you will use such information only in connection with your performance as a moderator; and You may create and enforce rules for the subreddits you moderate, provided that such rules do not conflict with these Terms, the Content Policy, or the Moderator Guidelines for Healthy Communities. Reddit reserves the right, but has no obligation, to overturn any action or decision of a moderator if Reddit, in its sole discretion, believes that such action or decision is not in the interest of Reddit or the Reddit community.]
[Image 5 ID: Moderator Guidelines for Healthy Communities. Effective April 17, 2017. Engage in Good Faith. Healthy communities are those where participants engage in good faith, and with an assumption of good faith for their co-collaborators. It’s not appropriate to attack your own users. Communities are active, in relation to their size and purpose, and where they are not, they are open to ideas and leadership that may make them more active.] [Outlined is: It’s not appropriate to attack your own users.]
[Image 6 ID: cringe people on the internet involved with DID in some way. faking or not. [00:57] 1. Posts must be on topic — The post itself should be cringey and the person must claim to have DID. 2. No calls for violence or harassment of people posted. 3. No brigading or witch hunting. 4. No reposts. 5. No self-posts. 6. Must be CRINGE content — Posting someone you suspect is faking by itself is not cringe and will be removed. They have to actually be doing something cringe. While faking DID can be seen as cringe, these cases are a dime a dozen and not worth posting.]
6 notes · View notes
cardedge7 · 4 years ago
Text
Workplace Mediation Solution.
Workplace Mediation, Manchester, Cheshire & North West.
Content
Service.
Hear From Those Who Have Used The Service.
It can be seen as an expensive process if an outcome cannot be reached; it is therefore only worthwhile if both parties are prepared to compromise.
These issues are discussed, and if you reach an agreement the mediator will write it down for you and make sure it says what you both want it to say. Everyone signs the agreement and you decide who else, if anyone, should see a copy. If you are uncomfortable with sharing the joint agreement with people who were not in the room, then a decision is made about what, if anything, to show the person or people who referred you to mediation. The main drawback of mediation is that there is no guarantee of a resolution.
Company.
That usually leaves a situation where both people involved in the grievance have to carry on working together. Mediation can help there by repairing the relationship so that the two can find a way to work together effectively. The mediation enabled both parties to explore where their working relationship was going wrong, and to discuss what they each expected from the other. With their new understanding of the other party's viewpoint, two agreements were drawn up. The second agreement was for circulation to their manager, and it set out changes in working practices that they both wanted to see for the future. They were in the same division and reported to the same department manager. Fiona had felt under pressure from Jenni since she joined the company.
What can I expect at mediation?
The mediator does not take sides, make decisions, or give legal advice; their only role is to facilitate respectful conversation. The parties' lawyer may participate in the process and attend mediation meetings. Before the mediation process commences, parties may draft and sign a mediation agreement.
Some people wish to 'have their day in court' and feel a sense of injustice if the process is not seen through to the end. That said, more and more of our customers are making it clear that they expect their employees to act reasonably to secure a positive resolution to a grievance or complaint. If an agreement is reached through the mediation process, a binding document can be drawn up for both parties to enter into. The best-case scenario in mediation is that all parties come to a mutually agreed solution to resolve the conflict, allowing a good working relationship to be restored.
Hear From Those Who Have Used The Service.
Generally, we would allow one day for each mediation session, and there is also additional contact with all parties in the lead-up to, and following, mediation. It is vital that all participants agree to join the mediation process in order for it to take place. The overriding purpose of workplace mediation is to restore and maintain good and effective working relationships. The dispute centred on promotion opportunities and arrangements between a manager and her supervisor.
The mediator will bring the meetings to a close, give a copy of the agreed statement to those involved, and explain their responsibilities for its implementation. If no agreement is reached, other procedures may later be used to try to resolve the conflict.
Work Regulation.
To start with, the mediator meets with each party separately to understand their experience of the conflict, their position and interests, and what they want to happen next. During these meetings, the mediator will also seek agreement from the parties to a facilitated joint meeting. A qualified mediator's role is to act as an impartial third party who facilitates a meeting between two or more people in dispute to help them reach an agreement. Although the mediator is in charge of the process, any agreement comes from those in dispute.
Everyone will have had a chance to be heard, which can help improve the understanding of both sides moving forward. Workplace mediation is an increasingly popular approach adopted by many organisations as an alternative means of resolving workplace disputes.
The mediator holds both of you to the ground rules and makes sure you have equal time to speak and to listen to each other. You will be advised by the mediator about the process and how much time to book out of your diary. Typically, for a two-person mediation you will be asked to set aside a full day. Individual meetings usually begin at 9.30am or 11am, and the joint meeting usually starts at 1pm and continues until 4pm; however, timings can be flexible on the day. At the start of the mediation you are asked to sign a Confidentiality and Responsibility Agreement. This document also reminds you that the mediator cannot provide legal advice, and that the content of the mediation discussion is confidential and cannot be used in any future processes or proceedings that you may be involved in.
How do you talk during mediation?
How to Talk and Listen Effectively in Mediation:
1. Strive to understand through active listening. (In trial, litigants address juries in their opening statements and final arguments.)
2. Avoid communication barriers.
3. Watch your nonverbal communication.
4. Be ready to deal with emotions at mediation.
5. Focus on the facts.
6. Use your mediator and limit caucuses.
7. Conclusion.
She raised the problem with her department manager; however, she felt that nothing had changed. By the time both parties agreed to mediation, each was accusing the other of bullying and harassment. Employment Law Updates for early [year]: it is set to be an interesting year, not least with Brexit day fast approaching. Despite the uncertainty Brexit has created, HR professionals and business owners still have to ensure they are up to date with the employment law changes that we do know will happen.
Following the exchange of statements, the mediator facilitates private conversations between the parties. There are various styles or approaches to mediation that may be suitable depending on the context and organisational culture, ranging from informal peer-based mediation to a more formal process with an independent mediator. If you would like more information on workplace mediation, or to discuss a situation that you feel mediation might help with, please contact Åsa Waring. The future has never been more unpredictable, demanding or challenging.
HR professionals, employment lawyers, trade union representatives, senior and middle managers, team leaders, and people managing difficult and sensitive issues. Workplace mediation is the process in which a mediator facilitates conversations between an employer, management or employees in order to reach a solution that works for everyone. Mediation helps people resolve differences to their mutual satisfaction, and to the satisfaction of the business, ensuring effective teamwork going forward, raising productivity and reducing management time. Mediation is cost- and time-effective and produces lasting solutions to conflict. Workplace conflict can have a significant impact on your business, both in terms of management time and productivity. Being equipped with the right approach to tackle these high-emotion and often multi-party cases can give mediators an edge, and the right way to support organisations in their times of conflict. Workplace mediation can help to resolve the situation, for example where a grievance has been raised.
The conflict had escalated to the point where the department was no longer able to operate effectively, and the manager was being paid to remain at home pending further investigation into the matter. Mediation is a process designed to put the parties back in charge of the situation and to make decisions based on the full circumstances. A professional mediator's primary motivation will be to ensure that the process works, by upholding the principles of impartiality and confidentiality, and that the parties are given every opportunity to get the most out of it. We will be on your side through every step of a workplace dispute to help you resolve the problem as quickly and amicably as possible — contact us today for more information.
The court-provided mediation service is free, and you do not need to carry out a great deal of preparation before the mediation takes place.
Generally the parties split the cost of the mediator, and this joint investment in seeking a resolution adds to each party's commitment to the process.
Mediation has an excellent success rate, meaning that any party choosing to mediate has a good chance of the dispute being settled there and then.
If the case settles, you can avoid the stress and time of the trial and of preparing for it.
Never before has there been so much information to absorb, so many social and business networks to navigate, and so many economic, political and social issues to confront. Any records of the conversations that take place during mediation are destroyed and remain confidential between those taking part, unless they mutually agree to share the action plan with their manager. [The organisation] wants to provide an opportunity to resolve any issues that arise through mediation. The second part of the certification requirements consists of written assignments. Delegates are required to submit a detailed portfolio in response to set questions covering a range of concepts from the course. Areas covered include conflict theory in mediation, transformative mediation, people skills in the mediation process, and facilitating mediation. Delegates have 14 days from the end of the fifth day of the course to complete the written assignments.
1 note · View note
memoryshame5 · 4 years ago
Text
Wandsworth Mediation Service
Straight Mediation Solutions
Content
National Family Mediation Service.
Northampton Quaker Meeting.
A commercial view also needs to be taken: a business should only really consider bearing the cost of appointing a mediator if all parties are willing to participate in the process. Qualified mediators will usually consider practical and innovative solutions to complex problems, and suggest outcomes suited to the particular people and/or business involved. Mediation allows much more flexibility than the courts have when considering remedies. "Mediation is a non-adversarial way of resolving difficult situations. At TCM, we describe mediation as a mind-set, a structure, and a competency." The FAIR Mediation Model™ is unique because it addresses the underlying root cause of a conflict whilst bringing a practical and highly effective problem-solving approach.
Disputes at work can be extremely time-consuming and frustrating to deal with — not to mention costly if they lead to absence or potential litigation.
National Family Mediation Service.
Management time is spent dealing with the conflict rather than on running the business. Our 2020 Managing conflict in the modern workplace survey finds that almost nine in ten employees report good working relationships with colleagues in their team and with other colleagues at work.
While mediation does not always need to be conducted face to face, it is more often arranged to take place off-site at a neutral venue, with the mediator and all the parties present. The overriding purpose of workplace mediation is to restore and preserve good and effective working relationships wherever possible. See also WIBBERLEY, G., Inside the mediation room: efficiency, voice and equity in workplace mediation. A variety of organisations run accredited training programmes for internal mediators. Mediation can be used to address a range of workplace issues including relationship breakdown, personality clashes, communication problems, and bullying and harassment.
Northampton Quaker Meeting.
Almost four in five rate the overall working environment and culture as 'excellent' or 'good'. You could also use mediation to rebuild relationships after a disciplinary or grievance process.
Do mediators cost money?
Fees for private mediation may be charged on an hourly basis or as a flat rate and are generally shared by the parties as they may agree. But in mediation, parties avoid the costs of preparing a court case, which can quickly escalate into the tens of thousands of dollars or more.
We work with senior executives, CEOs, founders and board members to manage their personal employment law needs.
Why Choose Mediation?
The parties are encouraged to begin problem-solving and to develop their resolution action plan. Mediation places responsibility for the resolution of a conflict directly with the parties. Burnetts produces a range of articles, employment law e-bulletins and factsheets; this free legal resource is useful for both organisations and individuals.
It also includes a considerable amount of post-mediation support for the parties for a full year after mediation concludes. It is designed for line managers, supervisors, grievance coaches, union representatives, and HR and ER consultants who may benefit from using mediation skills as part of their day-to-day roles. The skills that we teach are universally valuable and include empathy, applied positive psychology, communication skills, assertiveness, problem-solving and negotiation skills. This is a great way for delegates to learn from some of the world's top mediators. Like Practical Mediation Skills™, it works best as an in-house course, which means we can tailor it to your specific needs and context. However, we also run it as an open programme with The TCM Academy.
The conversations focus on exploring the issues so that a positive agreement can be reached which addresses future communication and working practices. You set your own convenient time, date and venue for resolution meetings. Workplace mediation will help all parties to better manage stress, their own wellbeing and their working relationships with colleagues, as well as to feel more positive about their role in the business. The success of mediation relies on both parties being open and willing to participate. The aim of mediation is that the parties rebuild their working relationship by mutually agreeing a favourable outcome that meets both their needs. Importantly, mediation focuses on restoring and preserving effective working relationships so that similar problems do not arise again in the future. Through mediation we provide an opportunity to tackle such situations early and less formally, saving time, stress and hurt feelings.
Mediation between cities, county over alleged breach of contract to begin in January - Richmond County Daily Journal
Posted: Fri, 18 Dec 2020 08:00:00 GMT [source]
If the issue cannot be resolved informally, you can use mediation. Mediation can be used at any stage in a dispute, but it's best to start it as soon as possible. The earlier the dispute is dealt with, the less chance there is of things getting worse. The mediator will agree with both sides what information can be shared outside the mediation, and how. If you do not reach an agreement, anything said during the mediation must be kept confidential and cannot be used in future proceedings.
Relationship breakdown is the issue most frequently cited by employers as suitable for mediation. There are other informal conflict resolution approaches that can be helpful, such as 'facilitated conversations' by HR, which can be seen as a management-led version of mediation. Our research found that a quarter of employers used facilitated conversations or 'trouble-shooting' by HR.
We advise on the full spectrum of contentious and non-contentious employment law, with experience in handling complex, high-profile and high-risk situations. A programme open to early-stage and growth technology startups whose products or services apply to the legal industry. We have years of experience in the industry, are well versed in all aspects of conflict mediation, and are up to date with all the latest mediation techniques. If you're based in or around the Newcastle-upon-Tyne area and want to arrange a mediation session, be sure to call us today. Workplace mediation is based on the principles of encouraging positive communication in a safe and confidential setting, identifying mutual solutions and agreements, and restoring respectful, professional working relationships. Globis Mediation Group has built an impressive record of delivering high-quality mediation training, which has led to recognition and national accreditation. Globis Mediation Group's mediation training is accredited by OCN London and is also a registered training programme with the Civil Mediation Council.
Anything said during mediation must remain confidential to those taking part, unless all parties agree to share particular points, such as agreed actions or arrangements, with their colleagues, managers, or HR. This means that a mediator may report to HR that a meeting has successfully taken place but not disclose the detail of what was discussed or agreed. The only exceptions to confidentiality are where, for example, a potential crime has been committed or there is a serious risk to health and safety. Our Managing Conflict research report also describes employees' experiences of interpersonal conflict at the workplace. It shows how conflict (both isolated clashes and ongoing difficult relationships, as well as bullying and harassment) can develop, affecting individuals' health and wellbeing and their work. When conflict isn't addressed and resolved at an early stage, the situation tends to smoulder.
In 2018 Stuart was presented with a National Mediation Award, and in 2019 he was appointed to the board of directors at the College of Mediators.
Even if they have had to endure a grievance or investigation, mediation also gives people a chance to get their working relationship back on track.
Our highly experienced mediators can get people talking and finding a resolution to their dispute before it becomes a problem.
Stuart is a Family Mediator accredited by the Family Mediation Council in all matters and is a member of the College of Mediators.
To find a local mediation service, or a mediator who serves your area, use Find A Mediator or call the Scottish Mediation Helpline. Google uses conversion cookies, whose main purpose is to help advertisers determine how many people who click on their ads end up purchasing their products or services. These cookies allow Google and the advertiser to establish that you clicked the ad and later visited the advertiser's website.
1 note · View note
solacekames · 6 years ago
Photo
(This is a great serious study with a lot of data, which is the only way to fight this online disinformation shit. 
I’m already accustomed to us Japanese-Americans being almost completely unable to have online public conversations because we get trolled to the point of nausea by creepy fetishists, fakers and just plain racist trolls. But it could get worse. It IS getting worse. 
While a lot of the article may sound very “water is wet” please don’t be dismissive of it. It’s very important.)
Members of vulnerable groups such as the Latino, Muslim, and Jewish communities are being disproportionately targeted online with disinformation, harassment, and computational propaganda — and they don’t trust big social platforms to help them, according to new research by the Palo Alto–based Institute for the Future’s Digital Intelligence Lab shared exclusively with BuzzFeed News.
Researchers found that online messages and images on platforms such as Twitter that originate in the Latino, Muslim, and Jewish communities are co-opted by extremists to spread division and disinformation, often resulting in more social media engagement for the extremists. This causes members of social groups to pull away from online conversations and platforms, and to stop using them to engage and organize, further ceding ground to trolls and extremists.
“We think that the general goal of this [activity] is to create a spiral of silence to prevent people from participating in politics online, or to prevent them from using these platforms to organize or communicate,” said Samuel Woolley, the director of the Digital Intelligence Lab. The platforms, meanwhile, have mainly met these complaints with inaction, according to the research.
Woolley said he expects strategies like fomenting division, spreading disinformation, and co-opting narratives that were used by bad actors in the 2016 election to be employed in the upcoming 2020 election. “In 2020 what we hypothesize is that social groups, religious groups, and issue voting groups will be the primary target of” this kind of activity, he said.
The lab commissioned eight case studies from academics and think tank researchers to look at how different social and issues groups in the US are affected by what researchers call “computational propaganda” (“the assemblage of social media platforms, autonomous agents, and big data tasked with the manipulation of public opinion” — i.e., digital propaganda). The groups studied were Muslim Americans, Latino Americans, moderate Republicans, immigration activists, black women gun owners, environmental activists, anti-abortion and abortion rights activists, and Jewish Americans.
In one example, immigration activists told researchers that a “know your rights” flyer instructing people what to do when stopped by ICE was photoshopped to include false information, and then spread on social media. A member of the Council on American-Islamic Relations said the hashtag related to the organization’s name (#CAIR) has been “taken over by haters” and used to harass Muslims. Researchers who looked at anti-Latino messaging on Reddit also found that extremist voices discussing Latino topics “appear to be louder than their supporters.”
Jewish Americans interviewed by researchers said online conversations about Israel have reached a new level of toxicity. They spoke of “non-bot Twitter mobs” targeting people, and “coordinated misinformation campaigns conducted by Jewish organizations, trying to propagandize Jews.”
“What we've come to understand is that it's oftentimes the most vulnerable social groups and minority communities that are the targets of computational propaganda,” Woolley told BuzzFeed News.
These findings align with other data that reinforces how these social groups bear the brunt of online harassment. According to a 2019 report from the ADL, 27% of black Americans, 30% of Latinos, 35% of Muslims, and 63% of the LGBTQ+ communities in the United States have been harassed online because of their identity.
BOTS
While bots were generally not a dominant presence in the Twitter conversations analyzed by researchers, automated accounts were used to spread hateful or harassing messages to different communities.
Tweets gathered about the Arizona Republican primary to replace John McCain in the Senate and his funeral last year showed that bots tried to direct moderate Republicans to america-hijacked.com, an anti-Semitic conspiracy website. (It has not published new material since 2017.) Researchers also found that Twitter discussions about reproductive rights saw anti-abortion bots spread harassing language, while pro–abortion rights bots spread politically divisive messages.
Researchers used the Botometer tool to identify likely automated accounts, and gathered millions of tweets based on hashtags for analysis. They combined this data analysis with interviews conducted with members of the communities being studied. The goal was to identify and quantify the human consequences of computational propaganda, according to Woolley.
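The filtering step described above can be sketched as a simple threshold check. This is a minimal illustration, not the study's actual pipeline: the account names and scores below are invented, and real Botometer scores would be fetched over the network with API credentials rather than hard-coded.

```python
# Flag likely-automated accounts from pre-fetched bot scores.
# Scores here are hypothetical stand-ins for Botometer output (0 = likely
# human, 1 = likely bot); the 0.8 cutoff is an arbitrary assumption.

def flag_likely_bots(account_scores, threshold=0.8):
    """Return account names whose bot score meets or exceeds the threshold."""
    return sorted(name for name, score in account_scores.items()
                  if score >= threshold)

scores = {
    "@news_mirror_4471": 0.93,  # hypothetical accounts and scores
    "@maria_votes": 0.12,
    "@daily_repost_bot": 0.88,
}
print(flag_likely_bots(scores))  # → ['@daily_repost_bot', '@news_mirror_4471']
```

In practice the threshold matters a great deal: too low and real activists get mislabeled as bots, too high and coordinated automation slips through.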
“The results range from chilling effects and disenfranchisement to psychological and physical harm,” reads an executive summary from Woolley and Katie Joseff, the lab’s research director.
Joseff said people in the studied communities feel they’re being targeted and outmaneuvered by extremist groups and that they don’t “have the allyship of the platforms.”
“They didn't trust the platforms to help them,” she said.
In response to a request for comment, a Twitter spokesperson pointed to the company's review of its efforts to protect election integrity during the 2018 midterms elections.
"With elections taking place around the globe leading up to 2020, we continue to build on our efforts to address the threats posed by hostile foreign and domestic actors. We're working to foster an environment conducive to healthy, meaningful conversations on our service," said an emailed statement from the spokesperson. (Reddit, the other social platform studied in the research, did not immediately reply to a request for comment.)
Joseff and Woolley said more extreme and insular social media platforms like Gab and 8Chan are where harassment campaigns and messaging about certain social groups is incubated. Ideas that begin on these platforms later dictate the conversation that takes place on more mainstream social media platforms. “The niche platforms like Gab or 8Chan are spaces where the culture around this kind of language becomes fermented and is built,” Woolley said. “That’s why you’re seeing the cross-pollination of attacks across more mainstream social media platforms … directed at multiple different types of groups.”
Co-opting
Researchers found that several of the communities studied are dealing with hashtag and content co-opting, a process by which something used by a group to promote a message or cause gets turned on its head and exploited by opponents.
For example, immigration activists interviewed for one case study said they’ve seen anti-immigration campaigns “video-taping activists and portraying them as ICE officers online, and reframing images to represent immigrant organizations as white supremacist supporters.”
Those interviewed said the perpetrators are tech savvy, “use social media to track and disrupt activism events, and have created memes of minorities looting after a natural disaster.”
The researchers found that messages initially pushed out by immigration activists were consistently co-opted by their opponents — and that these counter-narrative messages generate more engagement than the original, as shown in this graphic representing one example:
“In all cases but one a narrative was consistently drowned out by a counter narrative,” the researchers wrote.
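The comparison the researchers describe, an original narrative being drowned out by a counter-narrative, can be sketched as a simple engagement aggregation. The tweet records and counts below are invented for illustration; the study's real data came from millions of hashtag-collected tweets.

```python
# Compare total engagement for an original narrative versus its co-opted
# counter-narrative. Records and numbers are hypothetical.

def total_engagement(tweets):
    """Sum likes and retweets across a list of tweet records."""
    return sum(t["likes"] + t["retweets"] for t in tweets)

original = [{"likes": 40, "retweets": 12}, {"likes": 25, "retweets": 8}]
counter = [{"likes": 90, "retweets": 60}, {"likes": 130, "retweets": 75}]

print(total_engagement(original))  # → 85
print(total_engagement(counter))   # → 355
print(total_engagement(counter) > total_engagement(original))  # → True
```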
Another case study about Latino Americans gathered data from Reddit. It found that members of r/The_Donald, a major pro-Trump subreddit where racist and extremist content often surfaces, were hugely influential in organizing and promoting discussions related to the Latino community. By filling Reddit with their content, as well as organizing megathreads and other group discussions, they drowned out Latino voices. Researchers also wrote that trolls have at times impersonated experts “in attempts to sow discord and false narratives” related to Latino issues.
Old Tropes
The specific disinformation identified by researchers was often connected to long-running conspiracies or false claims. The case studies about online conversations about women’s reproductive rights and climate science found that old tropes and falsehoods continue to drive divisive conversations.
In the case of women’s reproductive rights, researchers studied 1.7 million tweets posted between Aug. 27 and Sept. 7 last year to coincide with the timing of the Kavanaugh confirmation hearing. The two most prominent disinformation campaigns identified were false claims about Planned Parenthood. One false claim was that the founder of the organization started it to target black people for abortions. This is based on a deliberate misquote of what Margaret Sanger actually said, which was in fact to warn against people thinking the organization was targeting black Americans.
“Recurrence of age-old conspiracies or tropes occurred across many of the case studies,” Joseff said.
Key to the spread of hate, division, and disinformation online is inaction from social media companies. Many of those interviewed for the studies said that when a harassment campaign is underway they have nowhere to turn, and the tech giants don’t take any action.
“There is just so much, it can't be a full-time job,” the director of a chapter of CAIR told researchers when asked about muting or blocking those who send hateful messages.
When platforms do take action, they sometimes end up banning the wrong people. One interview subject who participates in online activism related to immigration issues said that Twitter removed the account of a key Black Lives Matter march organizer last June.
“Subsequently the march was sent into disarray and could have been avoided would major voices of social rights activist organizers have been present in the conversation,” the researchers wrote.
The case studies also identified the fact that algorithms and other key elements of how social media platforms work are easily co-opted by bad actors.
“Their algorithms are gameable and can result in biased or hateful trends,” the executive summary said. “What spreads on their platforms can result in very real offline political violence, let alone the psychological impact from online trolling and harassment.” 
574 notes · View notes
sciencespies · 5 years ago
Text
A Helpful Online Safety Guide for People With Autism Spectrum Disorders
https://sciencespies.com/uncategorized/a-helpful-online-safety-guide-for-people-with-autism-spectrum-disorders/
People from all walks of life and all kinds of backgrounds fall victim to online bullying and cybercrime, but studies have shown that those with an autism spectrum disorder (ASD) are more susceptible to online threats than others.
ASD is a developmental disorder that affects behavior and communication. People on the autism spectrum tend to live relatively normal lives but can need supervision and may lack judgement, a trait that can be dangerous when they are left to their own devices in cyberspace.
Not only are ASD children and adults at risk from others, but they can also develop compulsive online habits and internet addictions, and can be more deeply affected by exposure to inappropriate content.
Everyone should feel safe online. It’s therefore extremely important to make sure you have adequate online security and remain internet vigilant.
To help you surf with ease and reduce your vulnerability to attack, take a look at our Internet Safety Guide for people with ASD.
Common Online Issues
There is always an array of threats surfing along the waves of cyberspace. Familiarize yourself with what they are and be extra vigilant – a plan of action is most definitely in order. Below, we have selected the most common online issues faced by those with ASD and provide tips on how to take control of the situation.
1 Cyberbullying
Cyberbullying has become a more common trend across the internet, especially affecting children and those with ASD. The bullies use digital platforms, like social media or internet chat forums, to harass and intimidate their victims. Sometimes, this harassment can escalate into real-world threats and bullying. Anyone can become the target of a cyberbully, regardless of their age, background or lifestyle.
According to the Journal of Mental Health Research in Intellectual Disabilities, those with intellectual and developmental disabilities are more likely to become the victim of cyberbullying. The Anti-Bullying Alliance has also found that those with disabilities are more susceptible to cyberbullies.
More is being done to understand the phenomenon and help build a safer environment online. However, cyberbullying can sometimes be difficult to recognize.
Text-based communication sometimes struggles to convey the same level of meaning and context as face-to-face conversations. Because of this, it can sometimes be hard to tell if someone is intentionally trying to bully, or if it’s a misunderstanding. But, if a person sends you abusive messages, or tries to intimidate or embarrass you online, this is most definitely cyberbullying.
The Long Term Effects of Cyberbullying
Bullying can damage your self-worth and/or affect your mental health. Ongoing harassment could lead you to withdraw from society, making it difficult for you to interact with friends and family. If left unaddressed, the impact of cyberbullying can run deep and last a long time.
Even though this sounds scary, don’t let it deter you from exploring online and forming meaningful friendships with people over the internet.
Different Types of Cyberbullying
Cyberbullying is a beast with many faces, the most common of which is abusive messages received through email, text message, and instant chat. However, it’s not only through these means that online bullies can get to you.
Cyberbullying might also manifest itself in the following guises:
A person spreading gossip and rumors about you online, to your friends or even strangers.
Someone who posts statuses and comments intending to humiliate you, or altering the way in which other people perceive you.
Threats being made to you through social media and other avenues of online communication.
Someone who uses their online profiles to share information, videos, or photos of you without your consent, or after you have asked them to stop.
A person who uses your online profiles and information to stalk you online and/or in real life.
Someone who hacks into your online accounts or impersonates you with the intention of using your name and reputation to spread inappropriate or harmful content. This is most commonly known as fraping.
How to Prevent Being Bullied Online
Recent research suggests that cyberbullying tends to occur when certain risk factors aren’t mitigated. Although cyberbullying is hard to stop, you can take steps to prevent yourself from becoming a victim.
The first step is to change the settings on your social media accounts so that your profiles can only be seen by people you know and trust. Cyberbullies are opportunistic by nature, so you’re at greater risk of experiencing online harassment if strangers can easily contact you.
Similarly, you should always avoid opening messages or accepting friend requests from people who you don’t know. The ability to hide behind a computer screen while attacking someone often removes a cyberbully from the real-world consequences of their actions, and so they often pick on someone who isn’t in their social circle or someone they don’t know.
6 Tips to Avoid Cyberbullying
Secure your social media accounts. Set your security levels to ‘friends only’ so that strangers can’t see your profile or send you messages.
Don’t post personal information online. Never post information such as your location, address, phone number, school, or workplace online. This will help to prevent cyberstalking, and also means bullies won’t be able to contact you face-to-face or on the phone.
If someone sends you abusive messages, don’t take the bait. Most bullies’ primary goal is to elicit a reaction from their target. If you respond, it might encourage them to continue, so it’s best to refrain from giving them what they’re looking for. Most bullies will simply give up and leave you alone if you don’t reply.
Report them. If someone is bullying your or someone you know, report their post to the platform’s support team. A member of staff will review the content and make a decision to either delete it or allow it to remain. In more serious cases, they may even take action against the bully by blocking or banning them.
Block the bully. Blocking someone will prevent them from accessing your profile and contacting you in the future.
Talk about it. Let a trusted friend or family member know what’s going on. They might be able to help you or give you some handy advice.
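Behind the scenes, "blocking" (tip 5 above) is essentially a filter applied before messages ever reach you. A minimal sketch, with an invented message format that stands in for whatever a real platform uses internally:

```python
# Drop messages from blocked senders before they reach the recipient's inbox.
# The record format and names here are hypothetical.

def filter_inbox(messages, blocked):
    """Keep only messages whose sender is not on the block list."""
    return [m for m in messages if m["sender"] not in blocked]

inbox = [
    {"sender": "friend_amy", "text": "See you Saturday!"},
    {"sender": "troll_99", "text": "..."},
]
blocked = {"troll_99"}
print(filter_inbox(inbox, blocked))
# → [{'sender': 'friend_amy', 'text': 'See you Saturday!'}]
```

This is also why blocking is more effective than muting: the blocked sender's messages are discarded outright rather than merely hidden.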
2 Understanding the Context of Online Messages
Online Misunderstandings
It is possible to misunderstand a situation when communicating with someone over the internet. It’s easy to miss the context or meaning of someone’s comment in the absence of social cues, and this can cause online discourse to go off-track, or even turn into a heated argument.
Here are the best practices for avoiding misunderstandings online:
Keep in mind that not everything you read online is true and not everyone with whom you speak will be honest.
If something is unclear, ask the person to clarify what they mean before sharing your opinion.
Use reliable sources to double check facts and information so you don’t take on or share something that is inaccurate.
Remember to be polite and calm even when you are sure that somebody is wrong or if you feel they are being rude.
Look out for admins and moderators in groups and forums to mediate online discussions if they become uncomfortable or argumentative.
Online forums, like Talk About Autism, are built specifically for people with ASD to socialize and make friends. Most of these forums have moderators who monitor the discussions and are trained to offer mediation if they spot a misunderstanding.
12 Ways to Improve Social Media and Online Communication
There are a lot of benefits to using social media, especially for people on the autism spectrum who may have trouble interacting with people. However, there are some drawbacks to putting all of your information on social networks. Here are 12 tips that can improve online communication and minimize the risk of being misunderstood.
Never add your boss, teacher, or supervisor on social media. If you’re friends online, they’ll be able to see the content on your profile, which may lead to misinterpretations of your character. If their opinion of you is altered by what they see, it could hinder your ability to get a promotion.
Never comment about your workplace online, especially if you’re complaining about your job. It might seem innocent to you, but it could cost you your job if it gets back to your bosses or colleagues. Additionally, most workplaces now have rules against posting about your work on social media.
Refrain from posting content that might skew other people’s opinion of you, such as angry rants. Potential employers will usually look you up online and may base their opinion of you on what they see, even if it doesn’t actually represent who you are.
Always meet a new online friend during the day and in a public place. Always tell someone where you are going, who you are meeting, and about any change of location. To be extra careful, you could take a trusted friend or family member with you. Don’t go anywhere secluded or follow them back to their house. If something feels ‘off,’ leave.
Keep your passwords safe and don’t hack into other people’s accounts or websites, even if you can. People with ASD often find themselves the victim of a manipulative person who will ask them to break the law or hack a computer, but it is illegal to do so.
Don’t believe everything that you read online – particularly on social media. A lot of users spread misinformation over the internet and even exaggerate their lives to look good.
Don’t compare your life with someone else’s on social media – you’re only seeing the highlights of their life, not the regular everyday experiences.
Always be polite in your online discourse and avoid arguments, even when you feel that the other person is wrong.
Remember that most internet users regard typing in capitals as the digital equivalent of yelling, and in this context, it can be viewed as rude to type in ALLCAPS.
You can use emojis or emoticons to better express the context and meaning of your words. For example, adding a smiley face to the end of a sentence will show that you are happy, or being friendly.
If someone is making you feel uncomfortable or unsafe, leave the situation and block them.
Never share photos or videos of another person without their consent. If you share photos without consent, or photos of a minor, you are breaking the law and may face legal consequences.
3 Becoming a Victim of a Scam, Manipulation or Hacking
Scammers and hackers are unfortunately a part of everyday online life. To put it simply, some people have ill intentions and wish to manipulate others for their own gain.
They will present themselves as someone who wants to become your friend, or even a potential romantic partner. They will work hard to build a relationship to gain your trust, then rip you off!
Scammers, hackers and cybercriminals will do this for a variety of reasons. For example, they may try to con you into sending them money or committing a crime on their behalf.
They may also be phishing for your personal information – like your passport details – to steal your identity or pose as you online.
Best Practices for Avoiding Scams and Manipulation
Don’t give anyone personal information, such as your address, phone number, or ID number.
Never divulge your banking or credit card information online – remember that some scammers may contact you pretending to be your bank. Your bank will never contact you asking for personal and private information.
Don’t tell anyone where you or your friends and family work or go to school.
Consider using a pseudonym instead of your real name – lots of people use their first and middle names, or create an entirely new name for themselves.
Be careful when agreeing to meet up with people you’ve met online.
Don’t send money to anybody you meet online – if somebody asks you to send them cash, it’s likely they are trying to scam you.
Never click any links to websites that you don’t recognize as they may take you to a website that will compromise your computer’s security.
If you think that you may have been the victim of a scam, it is important that you contact your bank and local law enforcement agency immediately.
Play it Safe
The Center on Secondary Education for Students with Autism Spectrum Disorder has created a memorable acronym for staying safe online: Play it Safe.
Personal information – never share your personal information online.
Let a friend or family member know if someone has asked you for this information, or if you don’t feel safe.
Attachments – remember that email attachments might contain malware that can damage your computer and harvest your private information. Don’t open them unless it’s a file that you have been expecting from someone that you trust.
Your feelings are important. If something makes you feel uncomfortable or unsafe, stop and let somebody know.
Information – remember that not everything you read online is true.
Take breaks from your computer often to socialize, stretch, and give your eyes a rest.
Spend your money safely. Don’t buy things from unfamiliar stores or links, and don’t send people money.
Act politely and don’t say things online that you wouldn’t say in real life.
Friends online should stay online – if someone asks to meet up, tell them no.
Enjoy yourself and have fun!
4 Exposure to Inappropriate Content
For all the wonderful and informative content you can find online, there is an equal amount of inappropriate and harmful content hidden away. Sometimes, you might stumble across depictions of violence, pornography, and illegal content that most people would prefer to avoid. Accessing things like child pornography, even by accident, can have disastrous legal consequences, so it’s important to safeguard yourself against this.
Tools to Block Inappropriate Content
1. SafeSearch:
Google’s SafeSearch blocks explicit content from your Google search results. Although it isn’t always 100% accurate, it allows you to filter out things like pornography and explicit images when you’re googling on your tablet, phone, or computer.
How to set up SafeSearch:
Go to the ‘settings’ button on your Google homepage, then navigate to search settings. Under SafeSearch filters, select the box next to the ‘turn on SafeSearch’ option, and be sure to click save before you navigate away.
You can check out Google’s SafeSearch guide to learn how to enable it on your Android or iOS device.
2. Internet filters:
Web filters, like Net Nanny, monitor the websites you access in order to block inappropriate content. You can customize the things your filter looks for, and even whitelist websites you deem as safe. This is a great tool for adults who want to filter out content that’s not safe for work as well as parents looking to keep their kids safe online.
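The customizable blocklist-plus-whitelist behaviour described above can be sketched as a simple domain check. This is a hypothetical illustration, not Net Nanny's actual (proprietary) logic, and the domains and category sets below are invented.

```python
# Minimal sketch of a web filter: whitelisted domains always pass,
# blocklisted domains never do, and everything else is allowed by default.

from urllib.parse import urlparse

BLOCKED_DOMAINS = {"example-adult-site.test", "example-gore-site.test"}
ALLOWED_DOMAINS = {"en.wikipedia.org", "talkaboutautism.org.uk"}

def is_allowed(url):
    """Decide whether a URL may be loaded, based on its domain."""
    domain = urlparse(url).netloc.lower()
    if domain in ALLOWED_DOMAINS:
        return True  # explicit whitelist entry overrides everything
    return domain not in BLOCKED_DOMAINS

print(is_allowed("https://en.wikipedia.org/wiki/Autism"))   # → True
print(is_allowed("https://example-adult-site.test/video"))  # → False
```

Real filters typically default to blocking unknown sites by category rather than allowing them, but the whitelist-overrides-blocklist structure is the same.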
3. Advert and pop up blockers:
We’ve all heard stories of friends who’ve had people walk up behind them when they’re using their computer, only for an unexpected explicit pop-up to appear on the screen at that very moment. You can protect yourself from these potentially disastrous incidents by installing a pop-up and ad blocker on your browser.
4. Anti-virus and anti-malware protection:
Some viruses and malware will cause explicit pop ups to grace your screen at inopportune moments. A good, up-to-date anti-virus will not only protect your computer from damaging infections, but it will also keep you shielded from inappropriate content.
5. Links:
Avoid clicking on links you don’t recognize. Even if the message was sent to you by a friend, don’t click a link you don’t recognize or aren’t expecting. You will often receive spam via text message and email asking you to click a link to access a website or claim a prize, but doing so leaves you at risk of a virus or scam.
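A few of the red flags in a suspicious link can even be checked mechanically. This hypothetical sketch flags some common warning signs (known URL shorteners, raw IP addresses instead of domain names, and missing HTTPS); it is a teaching illustration of what to look for, not a substitute for caution:

```python
import re
from urllib.parse import urlparse

# Common red flags -- an illustrative list, not an exhaustive one.
SHORTENERS = {"bit.ly", "tinyurl.com", "goo.gl", "t.co"}

def link_warnings(url: str) -> list[str]:
    """Return a list of reasons a link looks risky (empty if none found)."""
    warnings = []
    parsed = urlparse(url)
    host = parsed.netloc.lower()
    if parsed.scheme != "https":
        warnings.append("not using HTTPS")
    if host in SHORTENERS:
        warnings.append("shortened URL hides the real destination")
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        warnings.append("raw IP address instead of a domain name")
    return warnings
```

A clean result does not prove a link is safe, of course; it only means none of these particular warning signs were found.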
5 Sensory Overload on the Internet
For those of us who experience sensory sensitivity, electronic devices and the internet can trigger an all-round overload. Loud noises, bright backlights, unexpected music, and auto-playing videos are just a handful of the irritants that can overwhelm.
Thankfully, there are steps you can take to minimize the sensory impact.
Adjust your screen’s brightness levels and invest in an app that blocks blue light on your device. Although it will make your screen appear with a slightly orange tint, blocking out blue light is a must for decreasing the strain that backlights put on our senses. Apps to block blue light are available on most devices, and they’ll even help you get to sleep faster and minimize the impact of light-sensitive migraines.
Switch video and audio autoplay off on your social media platforms.
Invest in a ‘quiet’ keyboard and mouse to reduce the clicking noises as you type.
White noise is a great tool to soothe the senses. It can also drown out irritating sounds, like the hum of your computer, or your noisy neighbors! White noise videos are available for free on YouTube, or, alternatively, you can purchase a white noise machine.
6 Internet Addiction
The allure and ease of socializing online can negatively impact your drive to socialize in the real world. Online addiction is a serious issue, and it affects many people. Studies suggest that people who are prone to obsessive behaviors are at greater risk of developing an internet addiction. People with ASD and anxiety disorders are at particularly high risk.
It’s easy to see why – the internet offers sanctuary and an easy way to connect and communicate with peers. When most of your friends are internet-based, that’s where you will want to spend most of your time.
It’s crucial for your mental and physical health to develop and maintain relationships in the real world. The internet is a wonderful tool, but if it interferes with your ability to spend time with friends and family, it might be time to take a break.
Tips to Counter Internet Addiction
Set yourself a time limit when you’re on the computer. You might like to set a timer for an hour or two and log out when the time is up.
Create a roster or make plans to spend a certain amount of time with friends and family, or enjoying hobbies and exercise, each day. Include your online time in your roster, but plan other activities for your free time as well.
Make sure you have completed all the other tasks you need to do, like chores, before you go online each day.
Use specially designed apps to remind you to take a break. Programs like Offtime monitor your usage and show you how much time you’ve been spending on social media. You can even set them to block certain sites, like Facebook, during certain times of the day.
Set your social media push notifications to silent on your phone or tablet. This way, you’ll receive them when you log in and not when you’re busy with other activities.
If you feel that you might be falling victim to an internet addiction, you can ask your doctor for a referral to an experienced therapist who will be able to give you more advice.
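The break-timer idea behind the tips above can be illustrated with a few lines of code. This is a toy sketch of a session timer (the function name and session length are arbitrary examples, and it is not how apps like Offtime are actually implemented):

```python
import time

def run_session(session_minutes: float, check_interval: float = 60.0) -> str:
    """Wait out an online session, then return a reminder to log off.

    A real app runs in the background and can block sites; this toy
    version just sleeps until the deadline, then hands back a message.
    """
    deadline = time.monotonic() + session_minutes * 60
    while time.monotonic() < deadline:
        # Sleep in small steps so the loop ends close to the deadline.
        time.sleep(min(check_interval, max(0.0, deadline - time.monotonic())))
    return f"Time's up! You've been online for {session_minutes:g} minutes -- take a break."
```

The reminder only works if you act on it, which is why pairing a timer with a planned offline activity is the more effective habit.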
7 How to Protect Yourself on Popular Social Networks
Below, we have compiled a short guide to keeping yourself safe on some of the most popular social media networks. We delve into their risks, and how to change your account settings to avoid explicit content, scammers, fake profiles, and cyberbullies.
Facebook
What are some of the main risks on Facebook?
It is easy for scammers to befriend and trick you by using fake profiles.
There is a medium to high risk of being exposed to links that will take you to scam websites that phish for your personal information.
Cyberbullies often use Facebook to harass their victims.
Although it is against Facebook’s policies, you may be exposed to explicit posts that their content filters haven’t detected.
Features like video autoplay can trigger sensory overload.
Social media, as a whole, can be addictive.
Ways to protect yourself on Facebook
Leave out personal information: 
Although Facebook asks for your first and last name, avoid giving them if you can. Instead, many people use a pseudonym or create a fake last name. This makes it harder for people to track you down on other platforms or in real life.
Avoid customizing your ‘about me’ section too much. You should never tell Facebook where you live, work or study.
If you’re using a device with GPS, don’t allow Facebook to post your location. The easiest way to do this is to block Facebook from accessing your device’s location information. You can usually find this setting on your device under Settings > Privacy > Location Services.
Make your account private:
Make sure that your profile is private so that only your friends can see your statuses and send you messages. This reduces your risk of encountering cyberbullying by putting you in charge of who can contact you. Keep in mind that strangers will still be able to read any comments you make on your friends’ posts and on public pages.
How to set your posts to friends only:
Once you’ve opened the status dialog box, click the privacy setting drop-down menu in the lower bar. It will say either ‘friends’ or ‘public.’ If it says ‘friends,’ that means only the friends you have accepted will see this post. If it’s set to ‘public,’ click on it and select ‘friends’ before you hit the post button.
How to set your profile to private:
Login to Facebook and select the arrow at the top of your page in the home bar. From here, select ‘settings’.
When your settings page loads, select ‘privacy’ in the sidebar. This will load two categories of privacy settings for you to alter.
There are two privacy options under ‘your activity’. For the best privacy possible, set them as follows:
Who can see your future posts?
This should be set to ‘friends only,’ so that strangers can’t see your private status updates.
Limit the audience for posts you’ve shared with friends of friends or public?
If you opt to change this to ‘friends only’, it will increase the privacy of all your past posts so that strangers can no longer view them.
Next, decide how people can find and contact you.
Who can send you friend requests?
If you’re not interested in receiving friend requests from strangers, set this option to ‘friends of friends.’ Unfortunately, there is no way to completely stop people from sending you requests, but this will reduce them considerably.
Who can see your friends list?
For optimal security, set this one to ‘friends’ or ‘only me’.
Who can look you up using the email address/phone number you provided?
If you’re worried about strangers or bullies tracking you down using your email address or phone number, set this one to ‘friends’ only. Your friends can already contact you through your account, so they’d have no reason to look you up by any other means.
Do you want search engines outside Facebook to link to your profile?
If you select yes for this option, it makes it possible for people to find your Facebook page by searching for your name on Google or any other search engine. For optimal security, select no.
Now, head on over to your ‘timeline and tagging’ settings to finalize the process.
Who can post on your timeline?
To prevent strangers (and bullies) from posting on your timeline, you can set this to ‘only me’. However, this will also prevent your friends from posting on your timeline.
Who can see what others post on your timeline?
Again, set this one to ‘friends’ or ‘only me’ so that strangers can’t see the posts other people leave on your timeline.
Avoid cyberbullies:
If someone is using Facebook to harass you, you can block them from seeing your profile or contacting you. All you need to do is navigate to their profile and select the drop-down menu that’s represented by three little dots at the top of their page. Then, select ‘block’. They won’t be able to find or view your profile, and they won’t be notified that you have blocked them.
Avoid inappropriate content:
For the most part, Facebook’s censorship software filters inappropriate and harmful content out of your feed. However, you can also set Facebook up to filter comments containing specific words out of your timeline.
Head back to your ‘timeline and tagging’ settings, and under the timeline category, select the ‘hide comments that contain certain words’ option. From here, you can create a list of words, phrases, and even emojis that you don’t want to see on your timeline, and Facebook will block them for you.
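At its simplest, this kind of word filter is just keyword matching against each comment. A minimal illustration of the concept in Python (the blocked-term list is a placeholder example; Facebook’s real filtering is more sophisticated):

```python
# Example blocked terms, including an emoji -- placeholders only.
BLOCKED_TERMS = {"spoiler", "giveaway", "😡"}

def should_hide(comment: str, blocked_terms: set[str] = BLOCKED_TERMS) -> bool:
    """Hide a comment if it contains any blocked word, phrase, or emoji.

    Matching is case-insensitive so "SPOILER" is caught as well as "spoiler".
    """
    lowered = comment.lower()
    return any(term.lower() in lowered for term in blocked_terms)
```

Because it is plain substring matching, a filter like this catches variants in casing but not misspellings, which is why you may still occasionally see content you meant to block.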
Take a break:
You can log out of Facebook at any time, but for a more prolonged leave of absence, you can temporarily deactivate your account. All of your friends, posts, and photos will remain on your profile while you’re away, but nobody else will be able to see your account or send you messages until the next time you log in. This is a great solution if you need to take a step away from social media, but don’t want to lose all of your content and memories.
To deactivate your account, go to settings > general > manage account > deactivate your account.
Twitter
What are some of the main risks on Twitter?
Twitter is a hub for social and political activism, and sometimes, this can become overwhelming and distressing.
Due to Twitter’s diverse content, you may come across some explicit or triggering tweets from time to time.
Passionate arguments often break out over Twitter discussions, and users may be at risk of experiencing cyberbullying.
Like all social media, Twitter has the potential to become addictive and interfere with your everyday life.
Ways to protect yourself on Twitter
Make your tweets private:
When you set your Twitter profile and posts to private, they will only be visible to your followers. When someone new follows you, Twitter will send you a notification and you will be asked to approve their request, or deny it. However, accounts that followed you before you protected your tweets will still be able to view and interact with your profile unless you block them.
To protect your tweets, head to the Tweet privacy section in your privacy and safety settings and check the box next to ‘protect my tweets.’ Click the save button, enter your password to confirm, and you’re done!
Additionally, you can make it so that people who have your contact details aren’t able to find you on Twitter unless you follow them first. From the privacy and safety settings page, uncheck both discoverability options.
Prevent Twitter posting your location information:
Every time you create a tweet, you will be able to choose whether Twitter should post your location with it or not. By default, Twitter won’t share your location unless you have already opted in to the service.
Avoid cyberbullies:
Blocking someone on Twitter is similar to blocking someone on Facebook. From their profile, click the ‘see more’ icon (three vertical dots) and select ‘block’ from the menu. Then, click ‘block’ again to confirm. People you have blocked can’t follow you or see your Twitter profile. Twitter won’t send them a notification when you block them, but if they visit your profile, they will see a message informing them that they have been blocked.
Avoid inappropriate content:
The best way to avoid content you don’t want to see on Twitter is to only follow people who are already your friends, and to only view content on your main Twitter feed. Once you delve into Twitter’s search feature or investigate hashtags, you leave yourself vulnerable to inappropriate content. By default, Twitter will show a warning before you view content it deems not safe for work, but this isn’t 100% accurate, as some content can slip through Twitter’s filter.
Take a break:
Account deactivation on Twitter is a more permanent step, so if you need to step away for a short while, it’s better to log out. You could deactivate your account completely, but you risk losing your profile and past tweets in the process.
Instagram
Make your account private:
When you’re posting personal pictures on Instagram, privacy is important. You don’t want strangers to be able to access your personal information or use your photos to impersonate you online.
Luckily, you can make it so that all of your posts are private and only your friends can see them. To do this, head to your settings, then select account privacy and turn ‘private account’ on.
Now, people will need to send you a follow request, and you will need to approve it before they can see your posts, followers, and following lists. If someone was following you before you set your account to private and you don’t want them to be able to see your posts anymore, you will need to block them.
Avoid cyberbullies:
Just like most social media platforms, Instagram makes it easy for you to block someone. All you need to do is go to their profile, hit the ‘see more’ button (represented by the little dots) and select ‘block’.
Once you block someone, they can no longer find your profile, posts, or stories. Instagram won’t notify people when you block them.
Avoid inappropriate content:
Although posting explicit content is against Instagram’s policies, unfortunately, some users still share it. Similar to Twitter, the best way to avoid stumbling across explicit content is to stick with viewing the profiles of people who you trust and avoid exploring hashtags.
Time out:
If you’re in need of a break from your Instagram account, log in from a desktop or mobile internet browser, navigate to your profile, and click ‘edit profile’. Select ‘temporarily disable my account’ and follow the prompts. All of your followers and content will remain until you’re ready to log back in.
8 Online Dating and ASD
Online dating is a great avenue to meet new friends and potential romantic partners, but it brings with it some pretty serious dangers. People who you meet via online dating sites may not always be who they seem, and catfishing is rampant.
A ‘catfish’ is a person who creates an online dating profile through which they pretend to be somebody else. They might use a fake name, fake pictures, and a fake life story, among other things, to paint you a mental picture of the person they aren’t.
It can be difficult to tell if someone is catfishing you, and so we delve further into how to check if someone is telling you the truth about their identity below.
If you’re using the internet to date, remember:
Always have a conversation with someone and get to know them before you agree to meet in person.
Ask to speak with them over video chat, or on the phone to verify that they are the person in their pictures. Someone who is being honest about their identity will rarely have an issue with this and will take comfort in knowing you are, too.
Also, ask if it is okay to add them on Facebook if you have an account. This way, you can check out their profile, pictures, and friends to get a clearer picture of who they are.
Always agree to meet in a busy public space, like a cafe, during the day. Make sure there are people around who can help you if you get into trouble, and consider asking a friend or family member to stay nearby.
Never tell them personal information like your address even if they are offering to pick you up.
Make sure that you can get to and from the meeting place independently and safely. You don’t want to be reliant on them for a ride home if you don’t like them.
9 How to Tell if Someone is Who They Say They Are
Most of the people who you meet online will be genuine, but some will use fake profiles designed to draw you in and manipulate you. Luckily, you can usually verify if someone is telling the truth about their identity by using a few simple tricks.
Verify their picture:
Check to see if their profile picture shows a real person.
If other photographs on their account show the same person, they may be telling the truth. You can save one of these photos to your computer and use Google’s reverse image search to check if it appears anywhere else online.
If it appears in a lot of places, they may be using a stolen profile photo. But, if it only appears on their profile, chances are it is a photo of them.
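The idea behind matching a photo against other copies online can be sketched in code. Real reverse image search engines use perceptual hashes that survive resizing and recompression; this simplified, hypothetical version only detects byte-for-byte copies of the same file:

```python
import hashlib

def file_fingerprint(image_bytes: bytes) -> str:
    """Return a SHA-256 fingerprint of an image file's bytes.

    Search engines use perceptual hashing so edited copies still match;
    this simplified version only matches exact copies of the same file.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def seen_elsewhere(image_bytes: bytes, known_fingerprints: set[str]) -> bool:
    """Check whether an identical copy of this image is already known."""
    return file_fingerprint(image_bytes) in known_fingerprints
```

This is why Google’s reverse image search is the practical tool to use: it finds resized and cropped copies that a simple fingerprint comparison would miss.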
Check their friend count:
Do they have any other friends on their account? If you are the only friend they have, they might be using a fake profile to target you.
If they have other friends, do the friends ever post anything to the person’s timeline that might indicate they know each other in real life? If not, they could be using a fake profile to attract several targets who have never met them before.
Check their status updates and posts:
Are their status updates regular, everyday posts about their life? Or, are they mostly posting links and advertisements? If they are mostly posting links and ads, it is likely that they are using a fake profile to scam people or make sales.
Secrecy:
Have they told you not to tell anyone about them? If so, this indicates that they could have ill intentions and that they are not a genuine friend.
Money:
Have they asked you for money, or told you they are in a bad situation and need help with money? If so, they are likely posing as a friend in order to scam you.
If you suspect that your online friend isn’t who they claim to be, you should stop talking to them and block their account.
10 Signs that Something Might be Wrong Online
If you feel upset, uncomfortable, or unsafe, something might be seriously wrong with your online situation. It’s important that you listen to this inner feeling and leave the situation before it escalates. You may need to block the person who is making you feel unsafe, or seek help from a third party, like a family member or the police.
If one of your online friends is saying something one day and then contradicting it the next, it’s possible that they aren’t being truthful about their identity. You could use the steps listed above to see if everything checks out, and if it doesn’t, you may need to make the decision to remove them from your online circle.
Cyberbullying is common on social media, and if someone is being cruel to you or other people, they aren’t worth your time. You should report their cruel comments to the website’s administrator and then block them to prevent them from contacting you in the future.
If something seems too good to be true, then it probably is. You should always be cautious of scams. Remember that if a stranger or friend is offering you something that sounds a little fishy, like a prize for clicking on their link, you should avoid it at all costs. If you’re unsure, you can search on Google or even Snopes to find out if it is a scam.
11 Ways to Improve Your Child’s Internet Safety
Protecting children from the dangers of the internet should be a concern for all parents, even more so if their children are on the autism spectrum. We’ve put together the following tips as a guide to keeping your children safe online and helping them avoid the dangers they may encounter.
Keep your family’s computer in a communal space, like a lounge room or in the kitchen. This way, you can check in regularly and keep an eye on what’s happening in your child’s online social circles.
Create some visual reminders and posters of internet safety tips and hang them up in the room around your computer. This can be a great opportunity to sit with your child and discuss internet safety while coming up with some rules together.
Educate your child about online safety, clarify that they understand, and renew their knowledge regularly.
Roleplay different scenarios with your child to teach them how to react to online dangers in a safe setting. You could create an account on the platform they’re using, and use this to send them messages as part of the roleplay to make it seem more realistic.
Write and enforce a strict roster around internet usage times to avoid complications with internet addiction. You may even divide time spent online into separate categories, like play or study, and work that into the roster as well.
Put all electronics away about two hours before bed to help improve your child’s sleep.
Use internet content filters, like Net Nanny, to monitor and restrict your child’s browsing activity. These programs will also restrict their access to inappropriate content, and any other websites you block.
Install child-friendly internet browsers like KidSplorer – they are visually appealing to children, and they make it safer for them to use the internet. Similar to a content filter, these browsers only allow access to the websites you have specified, and they can even block access to the internet at predetermined times.
Establish a plan with them on what they should do if they encounter a cyberbully, how they should react, and who they should tell.
Casually ask them about their online friends and what they’ve been talking about, similar to the way in which you’d ask them how their school day went.
Provide them with a checklist of the information that they are not allowed to give out over the internet, such as their full name, birth date, address, and school name.
12 Conclusion
In summary, the key is to stay safe on the internet: incorporate tighter online security measures and heightened safety precautions. Educate yourself, be vigilant, and be aware.
Those with ASD are more susceptible to online threats than others, so it’s even more important to follow our advice for remaining safe online.
Cyberbullies and online scammers will unfortunately always have a place online, so it’s your job to stand up and take the necessary steps to protect yourself from attack.
Follow our steps so you can spot when something is not right and take action to protect yourself when you feel threatened.
This Internet Safety Guide has flagged the key areas you should watch out for, and how to tackle the threats faced. Enjoy your time online, but remember to be internet safe and careful at the same time!