#fosta
mostlysignssomeportents · 5 months ago
Copyright takedowns are a cautionary tale that few are heeding
On July 14, I'm giving the closing keynote for the fifteenth HACKERS ON PLANET EARTH, in QUEENS, NY. Happy Bastille Day! On July 20, I'm appearing in CHICAGO at Exile in Bookville.
We're living through one of those moments when millions of people become suddenly and overwhelmingly interested in fair use, one of the subtlest and worst-understood aspects of copyright law. It's not a subject you can master by skimming a Wikipedia article!
I've been talking about fair use with laypeople for more than 20 years. I've met so many people who possess the unshakable, serene confidence of the truly wrong, like the people who think fair use means you can take x words from a book, or y seconds from a song and it will always be fair, while anything more will never be.
Or the people who think that if you violate any of the four factors, your use can't be fair – or the people who think that if you fail all of the four factors, you must be infringing (people, the Supreme Court is calling and they want to tell you about the Betamax!).
You might think that you can never quote a song lyric in a book without infringing copyright, or that you must clear every musical sample. You might be rock solid certain that scraping the web to train an AI is infringing. If you hold those beliefs, you do not understand the "fact intensive" nature of fair use.
But you can learn! It's actually a really cool and interesting and gnarly subject, and it's a favorite of copyright scholars, who have really fascinating disagreements and discussions about the subject. These discussions often key off of the controversies of the moment, but inevitably they implicate earlier fights about everything from the piano roll to 2 Live Crew to antiracist retellings of Gone With the Wind.
One of the most interesting discussions of fair use you can ask for took place in 2019, when the NYU Engelberg Center on Innovation Law & Policy held a symposium called "Proving IP." One of the panels featured dueling musicologists debating the merits of the Blurred Lines case. That case marked a turning point in music copyright, with the Marvin Gaye estate successfully suing Robin Thicke and Pharrell Williams for copying the "vibe" of Gaye's "Got to Give it Up."
Naturally, this discussion featured clips from both songs as the experts – joined by some of America's top copyright scholars – delved into the legal reasoning and future consequences of the case. It would be literally impossible to discuss this case without those clips.
And that's where the problems start: as soon as the symposium was uploaded to Youtube, it was flagged and removed by Content ID, Google's $100,000,000 copyright enforcement system. This initial takedown was fully automated, which is how Content ID works: rightsholders upload audio to claim it, and then Content ID removes other videos where that audio appears (rightsholders can also specify that videos with matching clips be demonetized, or that the ad revenue from those videos be diverted to the rightsholders).
But Content ID has a safety valve: an uploader whose video has been incorrectly flagged can challenge the takedown. The case is then punted to the rightsholder, who has to manually renew or drop their claim. In the case of this symposium, the rightsholder was Universal Music Group, the largest record company in the world. UMG's personnel reviewed the video and did not drop the claim.
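Content ID's internals are proprietary, but the flow described above – automated matching, rightsholder-chosen policies, and a dispute that gets punted back to the claimant – can be sketched in a few lines. Everything here (names, registry shape) is a hypothetical simplification, not Google's actual API:

```python
# Toy sketch of the Content ID flow described above. All names and data
# structures are hypothetical -- the real system is proprietary and vastly
# more complex.

REGISTRY = {
    # rightsholders upload audio to claim it, choosing an enforcement policy
    "got-to-give-it-up": {"owner": "UMG", "policy": "block"},
}

def scan_upload(clips):
    """Automated pass: no human looks at the video before action is taken."""
    for clip in clips:
        claim = REGISTRY.get(clip)
        if claim:
            # policy could also be "demonetize" or "divert ad revenue"
            return claim["policy"], claim["owner"]
    return "allow", None

def dispute(rightsholder_upholds):
    """The safety valve: the uploader's challenge goes to the claimant,
    who judges the dispute against itself."""
    return "takedown stands" if rightsholder_upholds else "video restored"

action, owner = scan_upload(["panel-discussion", "got-to-give-it-up"])
print(action, owner)                       # block UMG -- the symposium vanishes
print(dispute(rightsholder_upholds=True))  # takedown stands
```

Note where the sketch puts the asymmetry: the only human review in the loop belongs to the party that filed the claim.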
99.99% of the time, that's where the story would end, for many reasons. First of all, most people don't understand fair use well enough to contest the judgment of a cosmically vast, unimaginably rich monopolist who wants to censor their video. Just as important, though: Content ID is a Byzantine system that is nearly as complex as fair use, but it's an entirely private affair, created and adjudicated by another galactic-scale monopolist (Google).
Google's copyright enforcement system is a cod-legal regime with all the downsides of the law, and a few wrinkles of its own (for example, it's a system without lawyers – just corporate experts doing battle with laypeople). And a single mis-step can result in your video being removed or your account being permanently deleted, along with every video you've ever posted. For people who make their living on audiovisual content, losing your Youtube account is an extinction-level event:
https://www.eff.org/wp/unfiltered-how-youtubes-content-id-discourages-fair-use-and-dictates-what-we-see-online
So for the average Youtuber, Content ID is a kind of Kafka-as-a-Service system that is always avoided and never investigated. But the Engelberg Center isn't your average Youtuber: they boast some of the country's top copyright experts, specializing in exactly the questions Youtube's Content ID is supposed to be adjudicating.
So naturally, they challenged the takedown – only to have UMG double down. This is par for the course with UMG: they are infamous for refusing to consider fair use in takedown requests. Their stance is so unreasonable that a court actually held them liable for violating the DMCA's provision against fraudulent takedowns:
https://www.eff.org/cases/lenz-v-universal
But the DMCA's takedown system is part of the real law, while Content ID is a fake law, created and overseen by a tech monopolist, not a court. So the fate of the Blurred Lines discussion turned on the Engelberg Center's ability to navigate both the law and the n-dimensional topology of Content ID's takedown flowchart.
It took more than a year, but eventually, Engelberg prevailed.
Until they didn't.
If Content ID was a person, it would be a baby – specifically, a baby under 18 months old, which is before the development of "object permanence." Until our 18th month (or so), we lack the ability to reason about things we can't see – this is the period when small babies find peek-a-boo amazing. Object permanence is the ability to understand that things continue to exist even when they aren't in your immediate field of vision.
Content ID has no object permanence. Despite the fact that the Engelberg Blurred Lines panel was the most involved fair use question the system was ever called upon to parse, it managed to repeatedly forget that it had decided that the panel could stay up. Over and over since that initial determination, Content ID has taken down the video of the panel, forcing Engelberg to go through the whole process again.
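That failure mode reads like a missing cache: nothing in the pipeline records that a video's claim was already disputed and resolved, so each re-scan starts from zero. A hypothetical sketch of the difference a memory would make:

```python
# Sketch of Content ID's missing "object permanence": a matcher that keeps
# no record of disputes it already lost re-flags the same video on every
# scan. Hypothetical simplification, not Youtube's actual pipeline.

resolved_in_uploaders_favor = set()   # what a system with memory would consult
takedowns_issued = 0

def rescan(video_id, matches_claimed_audio, remember=False):
    global takedowns_issued
    if not matches_claimed_audio:
        return "allow"
    if remember and video_id in resolved_in_uploaders_favor:
        return "allow"                # the earlier ruling survives the re-scan
    takedowns_issued += 1             # ...otherwise the whole fight restarts
    resolved_in_uploaders_favor.add(video_id)   # uploader wins the dispute (again)
    return "takedown"

# Without memory, every periodic re-scan repeats the takedown:
for _ in range(3):
    rescan("engelberg-panel", matches_claimed_audio=True, remember=False)
print(takedowns_issued)   # 3 -- one takedown per scan, despite the prior rulings
```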
But that's just for starters, because Youtube isn't the only place where a copyright enforcement bot is making billions of unsupervised, unaccountable decisions about what audiovisual material you're allowed to access.
Spotify is yet another monopolist, with a justifiable reputation for being extremely hostile to artists' interests, thanks in large part to the role that UMG and the other major record labels played in designing its business rules:
https://pluralistic.net/2022/09/12/streaming-doesnt-pay/#stunt-publishing
Spotify has spent hundreds of millions of dollars trying to capture the podcasting market, in the hopes of converting one of the last truly open digital publishing systems into a product under its control:
https://pluralistic.net/2023/01/27/enshittification-resistance/#ummauerter-garten-nein
Thankfully, that campaign has failed – but millions of people have (unwisely) ditched their open podcatchers in favor of Spotify's pre-enshittified app, so everyone with a podcast now must target Spotify for distribution if they hope to reach those captive users.
Guess who has a podcast? The Engelberg Center.
Naturally, Engelberg's podcast includes the audio of that Blurred Lines panel, and that audio includes samples from both "Blurred Lines" and "Got To Give It Up."
So – naturally – UMG keeps taking down the podcast.
Spotify has its own answer to Content ID, and incredibly, it's even worse and harder to navigate than Google's pretend legal system. As Engelberg describes in its latest post, UMG and Spotify have colluded to ensure that this now-classic discussion of fair use will never be able to take advantage of fair use itself:
https://www.nyuengelberg.org/news/how-explaining-copyright-broke-the-spotify-copyright-system/
Remember, this is the best case scenario for arguing about fair use with a monopolist like UMG, Google, or Spotify. As Engelberg puts it:
The Engelberg Center had an extraordinarily high level of interest in pursuing this issue, and legal confidence in our position that would have cost an average podcaster tens of thousands of dollars to develop. That cannot be what is required to challenge the removal of a podcast episode.
Automated takedown systems are the tech industry's answer to the "notice-and-takedown" system that was invented to broker a peace between copyright law and the internet, starting with the US's 1998 Digital Millennium Copyright Act. The DMCA implements (and exceeds) a pair of 1996 UN treaties, the WIPO Copyright Treaty and the Performances and Phonograms Treaty, and most countries in the world have some version of notice-and-takedown.
Big corporate rightsholders claim that notice-and-takedown is a gift to the tech sector, one that allows tech companies to get away with copyright infringement. They want a "strict liability" regime, where any platform that allows a user to post something infringing is liable for that infringement, to the tune of $150,000 in statutory damages.
Of course, there's no way for a platform to know a priori whether something a user posts infringes on someone's copyright. There is no registry of everything that is copyrighted, and of course, fair use means that there are lots of ways to legally reproduce someone's work without their permission (or even when they object). Even if every person who ever has trained or ever will train as a copyright lawyer worked 24/7 for just one online platform to evaluate every tweet, video, audio clip and image for copyright infringement, they wouldn't be able to touch even 1% of what gets posted to that platform.
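As back-of-envelope arithmetic, the review-at-scale problem looks like this. The ~500 hours of video uploaded to Youtube per minute is an oft-cited public estimate; the lawyer headcount and review speed are deliberately generous illustrative assumptions – and they cover video only, leaving the same platform's posts, images and audio clips untouched:

```python
# Back-of-envelope sketch of the scale argument above. The upload rate is a
# commonly cited rough figure; the other numbers are generous assumptions,
# and even they leave the overwhelming majority of uploads unreviewed.

upload_hours_per_day = 500 * 60 * 24     # ~500 hours of video per minute
num_lawyers = 40_000                     # suppose every copyright lawyer works for one platform
hours_judged_per_lawyer_per_day = 1      # a real four-factor analysis takes research, not just viewing

reviewable = num_lawyers * hours_judged_per_lawyer_per_day
coverage = reviewable / upload_hours_per_day
print(f"{coverage:.1%} of one day's video reviewed")   # 5.6% -- video alone, best case
```

With realistic assumptions (lawyers who sleep, claims that take days to research, every other media type included) the fraction collapses toward the "not even 1%" in the text above.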
The "compromise" that the entertainment industry wants is automated takedown – a system like Content ID, where rightsholders register their copyrights and platforms block anything that matches the registry. This "filternet" proposal became law in the EU in 2019 with Article 17 of the Digital Single Market Directive:
https://www.eff.org/deeplinks/2018/09/today-europe-lost-internet-now-we-fight-back
This was the most controversial directive in EU history, and – as experts warned at the time – there is no way to implement it without violating the GDPR, Europe's privacy law, so now it's stuck in limbo:
https://www.eff.org/deeplinks/2022/05/eus-copyright-directive-still-about-filters-eus-top-court-limits-its-use
As critics pointed out during the EU debate, there are so many problems with filternets. For one thing, these copyright filters are very expensive: remember that Google has spent $100m on Content ID alone, and that only does a fraction of what filternet advocates demand. Building the filternet would cost so much that only the biggest tech monopolists could afford it, which is to say, filternets are a legal requirement to keep the tech monopolists in business and prevent smaller, better platforms from ever coming into existence.
Filternets are also incapable of telling the difference between similar files. This is especially problematic for classical musicians, who routinely find their work blocked or demonetized by Sony Music, which claims performances of all the most important classical music compositions:
https://pluralistic.net/2021/05/08/copyfraud/#beethoven-just-wrote-music
Content ID can't tell the difference between your performance of "The Goldberg Variations" and Glenn Gould's. For classical musicians, the best case scenario is to have their online wages stolen by Sony, who fraudulently claim copyright to their recordings. The worst case scenario is that their video is blocked, their channel deleted, and their names blacklisted from ever opening another account on one of the monopoly platforms.
But when it comes to free expression, the role that notice-and-takedown and filternets play in the creative industries is really a sideshow. In creating a system of no-evidence-required takedowns, with no real consequences for fraudulent takedowns, these systems are a huge gift to the world's worst criminals. For example, "reputation management" companies help convicted rapists, murderers, and even war criminals purge the internet of true accounts of their crimes by claiming copyright over them:
https://pluralistic.net/2021/04/23/reputation-laundry/#dark-ops
Remember how during the covid lockdowns, scumbags marketed junk devices by claiming that they'd protect you from the virus? Their products remained online, while the detailed scientific articles warning people about the fraud were speedily removed through false copyright claims:
https://pluralistic.net/2021/10/18/labor-shortage-discourse-time/#copyfraud
Copyfraud – making false copyright claims – is an extremely safe crime to commit, and it's not just quack covid remedy peddlers and war criminals who avail themselves of it. Tech giants like Adobe do not hesitate to abuse the takedown system, even when that means exposing millions of people to spyware:
https://pluralistic.net/2021/10/13/theres-an-app-for-that/#gnash
Dirty cops play loud, copyrighted music during confrontations with the public, in the hopes that this will trigger copyright filters on services like Youtube and Instagram and block videos of their misbehavior:
https://pluralistic.net/2021/02/10/duke-sucks/#bhpd
But even if you solved all these problems with filternets and takedown, this system would still choke on fair use and other copyright exceptions. These are "fact intensive" questions that the world's top experts struggle with (as anyone who watches the Blurred Lines panel can see). There's no way we can get software to accurately determine when a use is or isn't fair.
That's a question that the entertainment industry itself is increasingly conflicted about. The Blurred Lines judgment opened the floodgates to a new kind of copyright troll – grifters who sued the record labels and their biggest stars for taking the "vibe" of songs that no one ever heard of. Musicians like Ed Sheeran have been sued for millions of dollars over these alleged infringements. These suits caused the record industry to (ahem) change its tune on fair use, insisting that fair use should be broadly interpreted to protect people who made things that were similar to existing works. The labels understood that if "vibe rights" became accepted law, they'd end up in the kind of hell that the rest of us enter when we try to post things online – where anything they produce can trigger takedowns, long legal battles, and millions in liability:
https://pluralistic.net/2022/04/08/oh-why/#two-notes-and-running
But the music industry remains deeply conflicted over fair use. Take the curious case of Katy Perry's song "Dark Horse," which attracted a multimillion-dollar suit from an obscure Christian rapper who claimed that a brief phrase in "Dark Horse" was impermissibly similar to his song "Joyful Noise."
Perry and her publisher, Warner Chappell, lost the suit and were ordered to pay $2.8m. While they subsequently won an appeal, this definitely put the cold grue up Warner Chappell's back. They could see a long future of similar suits launched by treasure hunters hoping for a quick settlement.
But here's where it gets unbelievably weird and darkly funny. A Youtuber named Adam Neely made a wildly successful viral video about the suit, taking Perry's side and defending her song. As part of that video, Neely included a few seconds' worth of "Joyful Noise," the song that Perry was accused of copying.
In court, Warner Chappell had argued that "Joyful Noise" was not similar to Perry's "Dark Horse." But when Warner had Google remove Neely's video, they claimed that the sample from "Joyful Noise" was actually taken from "Dark Horse." Incredibly, they maintained this position through multiple appeals through the Content ID system:
https://pluralistic.net/2020/03/05/warner-chappell-copyfraud/#warnerchappell
In other words, they maintained that the song that they'd told the court was totally dissimilar to their own was so indistinguishable from their own song that they couldn't tell the difference!
Now, this question of vibes, similarity and fair use has only gotten more intense since the takedown of Neely's video. Just this week, the RIAA sued several AI companies, claiming that the songs the AI shits out are infringingly similar to tracks in their catalog:
https://www.rollingstone.com/music/music-news/record-labels-sue-music-generators-suno-and-udio-1235042056/
Even before "Blurred Lines," this was a difficult fair use question to answer, with lots of chewy nuances. Just ask George Harrison:
https://en.wikipedia.org/wiki/My_Sweet_Lord
But as the Engelberg panel's cohort of dueling musicologists and renowned copyright experts proved, this question only gets harder as time goes by. If you listen to that panel (if you can listen to that panel), you'll be hard pressed to come away with any certainty about the questions in this latest lawsuit.
The notice-and-takedown system is what's known as an "intermediary liability" rule. Platforms are "intermediaries" in that they connect end users with each other and with businesses. Ebay and Etsy and Amazon connect buyers and sellers; Facebook and Google and Tiktok connect performers, advertisers and publishers with audiences and so on.
For copyright, notice-and-takedown gives platforms a "safe harbor." A platform doesn't have to remove material after an allegation of infringement, but if they don't, they lose the safe harbor and are jointly liable for any eventual judgment. In other words, Youtube isn't required to take down the Engelberg Blurred Lines panel, but if UMG sues Engelberg and wins a judgment, Google will also have to pay out.
During the adoption of the 1996 WIPO treaties and the 1998 US DMCA, this safe harbor rule was characterized as a balance between the rights of the public to publish online and the interest of rightsholders whose material might be infringed upon. The idea was that things that were likely to be infringing would be immediately removed once the platform received a notification, but that platforms would ignore spurious or obviously fraudulent takedowns.
That's not how it worked out. Whether it's Sony Music claiming to own your performance of "Fur Elise" or a war criminal claiming authorship over a newspaper story about his crimes, platforms nuke first and ask questions never. Why not? If they ignore a takedown and get it wrong, they suffer dire consequences ($150,000 per claim). But if they take action on a dodgy claim, there are no consequences. Of course they're just going to delete anything they're asked to delete.
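That "nuke first" logic is just an expected-cost comparison, and it can be written down directly. The $150,000 figure is the statutory-damages number mentioned earlier; the claim-validity probability is an illustrative assumption:

```python
# The platform's incentive, as a toy expected-cost model: ignoring a valid
# claim risks statutory damages, while honoring a bogus one costs nothing.

def expected_cost(action, p_claim_valid, damages=150_000):
    if action == "ignore":
        return p_claim_valid * damages   # risk of joint liability if the claim holds up
    return 0                             # removal has no penalty, even for fraudulent claims

# Even if only 1 claim in 10,000 would survive a courtroom...
print(expected_cost("ignore", p_claim_valid=0.0001))   # 15.0
print(expected_cost("remove", p_claim_valid=0.0001))   # 0
# ...deletion strictly dominates, so platforms delete everything they're asked to.
```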
This is how platforms always handle liability, and that's a lesson that we really should have internalized by now. After all, the DMCA is the second-most famous intermediary liability system for the internet – the most (in)famous is Section 230 of the Communications Decency Act.
This is a 26-word law that says that platforms are not liable for civil damages arising from their users' speech. Now, this is a US law, and in the US, there aren't many civil damages from speech to begin with. The First Amendment makes it very hard to get a libel judgment, and even when these judgments are secured, damages are typically limited to "actual damages" – generally a low sum. Most of the worst online speech is actually not illegal: hate speech, misinformation and disinformation are all covered by the First Amendment.
Notwithstanding the First Amendment, there are categories of speech that US law criminalizes: actual threats of violence, criminal harassment, and committing certain kinds of legal, medical, election or financial fraud. These are all exempted from Section 230, which only provides immunity for civil suits, not criminal acts.
What Section 230 really protects platforms from is being named to unwinnable nuisance suits by unscrupulous parties who are betting that the platforms would rather remove legal speech that they object to than go to court. A generation of copyfraudsters have proved that this is a very safe bet:
https://www.techdirt.com/2020/06/23/hello-youve-been-referred-here-because-youre-wrong-about-section-230-communications-decency-act/
In other words, if you made a #MeToo accusation, or if you were a gig worker using an online forum to organize a union, or if you were blowing the whistle on your employer's toxic waste leaks, or if you were any other under-resourced person being bullied by a wealthy, powerful person or organization, that organization could shut you up by threatening to sue the platform that hosted your speech. The platform would immediately cave. But those same rich and powerful people would have access to the lawyers and back-channels that would prevent you from doing the same to them – that's why Sony can get your Brahms recital taken down, but you can't turn around and do the same to them.
This is true of every intermediary liability system, and it's been true since the earliest days of the internet, and it keeps getting proven to be true. Six years ago, Trump signed SESTA/FOSTA, a law that allowed platforms to be held civilly liable by survivors of sex trafficking. At the time, advocates claimed that this would only affect "sexual slavery" and would not impact consensual sex-work.
But from the start, and ever since, SESTA/FOSTA has primarily targeted consensual sex-work, to the immediate, lasting, and profound detriment of sex workers:
https://hackinghustling.org/what-is-sesta-fosta/
SESTA/FOSTA killed the "bad date" forums where sex workers circulated the details of violent and unstable clients, killed the online booking sites that allowed sex workers to screen their clients, and killed the payment processors that let sex workers avoid holding unsafe amounts of cash:
https://www.eff.org/deeplinks/2022/09/fight-overturn-fosta-unconstitutional-internet-censorship-law-continues
SESTA/FOSTA made voluntary sex work more dangerous – and also made life harder for law enforcement efforts to target sex trafficking:
https://hackinghustling.org/erased-the-impact-of-fosta-sesta-2020/
Despite half a decade of SESTA/FOSTA, despite 15 years of filternets, despite a quarter century of notice-and-takedown, people continue to insist that getting rid of safe harbors will punish Big Tech and make life better for everyday internet users.
As of now, it seems likely that Section 230 will be dead by the end of 2025, even if there is nothing in place to replace it:
https://energycommerce.house.gov/posts/bipartisan-energy-and-commerce-leaders-announce-legislative-hearing-on-sunsetting-section-230
This isn't the win that some people think it is. By making platforms responsible for screening the content their users post, we create a system that only the largest tech monopolies can survive, and only then by removing or blocking anything that threatens or displeases the wealthy and powerful.
Filternets are not precision-guided takedown machines; they're indiscriminate cluster-bombs that destroy anything in the vicinity of illegal speech – including (and especially) the best-informed, most informative discussions of how these systems go wrong, and how that blocks the complaints of the powerless, the marginalized, and the abused.
Support me this summer on the Clarion Write-A-Thon and help raise money for the Clarion Science Fiction and Fantasy Writers' Workshop!
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/06/27/nuke-first/#ask-questions-never
Image: EFF https://www.eff.org/files/banner_library/yt-fu-1b.png
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
audieoddity · 9 months ago
I wish more people knew about Fosta/sesta bc it goes so far in explaining the modern state of the Internet
thoughtportal · 2 years ago
[video embed]
This is SESTA FOSTA all over again and so many people are falling for it.
r0semultiverse · 1 year ago
this fosta shit is making me fosta a lot of hatred for the old dusty ass skeletons running this shit
vaguely-problematic · 2 years ago
[youtube video embed]
a u.s. supreme court case for 2023 may change the internet. legal eagle (above video) explains.
Gonzalez v. Google LLC is about whether youtube can be held liable for the content that its algorithm recommends to viewers. it involves a terrorist attack by isis and the fact that youtube was recommending isis videos to its users.
didtheyunbanporn · 2 years ago
so the law that was responsible for porn being removed on tumblr (FOSTA-SESTA) looks like it's gonna get overturned, as the courts found it violates the first amendment – more specifically the "use of computers and internet" part of it. could be big!
thoughtportal · 2 years ago
[video embed]
FOSTA-SESTA was the start of this
trans-axolotl · 3 months ago
"Much ink has already been spilled on Harris’s prosecutorial background. What is significant about the topic of sex work is how recently the vice president–elect’s actions contradicted her alleged views. During her tenure as AG, she led a campaign to shut down Backpage, a classified advertising website frequently used by sex workers, calling it “the world’s top online brothel” in 2016 and claiming that the site made “millions of dollars from trafficking.” While Backpage did make millions off of sex work ads, its “adult services” listings offered a safer and more transparent platform for sex workers and their clients to conduct consensual transactions than had historically been available. Harris’s grandiose mischaracterization led to a Senate investigation, and the shuttering of the site by the FBI in 2018.
“Backpage being gone has devastated our community,” said Andrews. The platform allowed sex workers to work more safely: They were able to vet clients and promote their services online. “It’s very heartbreaking to see the fallout,” said dominatrix Yevgeniya Ivanyutenko. “A lot of people lost their ability to safely make a living. A lot of people were forced to go on the street or do other things that they wouldn’t have otherwise considered.” M.F. Akynos, the founder and executive director of the Black Sex Worker Collective, thinks Harris should “apologize to the community. She needs to admit that she really fucked up with Backpage, and really ruined a lot of people’s lives.”
After Harris became a senator, she cosponsored the now-infamous Stop Enabling Sex Traffickers Act (SESTA), which—along with the House’s Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA)—was signed into law by President Trump in 2018. FOSTA-SESTA created a loophole in Section 230 of the Communications Decency Act, the so-called “safe harbor” provision that allows websites to be free from liability for user-generated content (e.g., Amazon reviews, Craigslist ads). The Electronic Frontier Foundation argues that Section 230 is the backbone of the Internet, calling it “the most important law protecting internet free speech.” Now, website publishers are liable if third parties post sex-work ads on their platforms.
That spelled the end of any number of platforms—most famously Craigslist's “personal encounters” section—that sex workers used to vet prospective clients, leaving an already vulnerable workforce even more exposed. (The Woodhull Freedom Foundation has filed a lawsuit challenging FOSTA on First Amendment grounds; in January 2020, it won an appeal in D.C.’s district court).
“I sent a bunch of stats [to Harris and Senator Diane Feinstein] about decriminalization and how much SESTA-FOSTA would hurt American sex workers and open them up to violence,” said Cara (a pseudonym), who was working as a sex worker in San Francisco and was a member of SWOP when the bill passed. Both senators ignored her.
The bill both demonstrably harmed sex workers and failed to reduce sex trafficking. “Within one month of FOSTA’s enactment, 13 sex workers were reported missing, and two were dead from suicide,” wrote Lura Chamberlain in her Fordham Law Review article “FOSTA: A Hostile Law with a Human Cost.” “Sex workers operating independently faced a tremendous and immediate uptick in unwanted solicitation from individuals offering or demanding to traffic them. Numerous others were raped, assaulted, and rendered homeless or unable to feed their children.” A 2020 survey of the effects of FOSTA-SESTA found that “99% of online respondents reported that this law does not make them feel safer” and 80.61 percent “say they are now facing difficulties advertising their services.”"
-What Sex Workers Want Kamala Harris to Know by Hallie Liberman
not-poignant · 1 year ago
Is it just me or is censorship all across social media getting really bad lately? Pinterest just removed a whole bunch of my pins for sexual content - they were all fully clothed queer men and women showing vague intimacy, nothing overtly sexual at all. Like one they removed was literally just two men's heads leaning on each other. That's wacko right??? Have I missed something big going on?
Tbh it's been getting increasingly terrible for a long time, anon, it's not a lately thing, it's been since before the Tumblr purge, and it is at least in some part due to the SESTA/FOSTA law that got passed in the USA, and the increasing passing of laws in many countries that are specifically concerned with removing net neutrality and treating all of us like 3 year olds.
It will get worse, not better. And you've probably been missing a few big things! It sounds like you were most directly impacted by what happened with Pinterest, so you've just noticed it. Many of us noticed it around 2018 with the Tumblr purge. Some of us have been impacted by elements of it way earlier, due to Livejournal's Strikethrough which necessitated the invention of Dreamwidth and helped to really get AO3 off the ground. And this was back even before we now have many laws that scare a lot of big companies into removing adult content.
Steve Jobs famously hated / loathed pornography and was on a mission to literally try and remove it from the internet, and part of that mission was to - as much as possible - make it nearly impossible for apps that have it to get listed in the Apple store. This is partly why AO3 doesn't have an app. This is why Dreamwidth doesn't have an app. This is why the Tumblr Purge happened - so they could continue to have an app. And while some sites don't get targeted, as soon as you do get targeted by the Apple store, it's either 'provide your legal identity to prove that you're the age you say you are' to access adult content or it's 'goodbye adult content.'
We've also had an increasing rise of morally panicked, puritanical TERF-informed anti-shippers who believe that their emotional reactions to fictional content they find troubling are firstly valid moral judgements, and secondly, a valid reason to abuse, bully and send death threats to real people. And these people basically work hand-in-hand (often without realising) with extremely powerful Evangelical Christians who have government influence and a lot of money in the USA and literally work to change laws to make it reflect an extremely puritanical vision they have of the future. You know, the homophobic, transphobic, misogynistic, racist, kinkphobic, bigoted, antisemitic etc. etc. etc. one. (It's highly ironic and tragic that most antis are young and queer and just extremely uneducated).
I'd say people notice based on what impacted them directly. So some of us realised in 2007. Some of us realised again in 2018. And since then there have been a lot of blows from a lot of sites. In a way, Pinterest is joining an already very bloated bandwagon of sites cornered in this manner. The reason people say 'unalive' these days instead of suicide, or 'r@pe' instead of rape, is because of Tiktok censors. The reason so many folks moved their adult fanart and art accounts off Instagram, or those accounts have gone dead, is because of Instagram censors. The reason so many adult writers on Patreon are very careful about what explicit words they write directly onto the site is because of Patreon censorship.
After all this, it's possible that Pinterest is implementing a new AI algorithm for detecting adult content and it's simply broken. In that case, reporting and appealing actually often does help. When Tumblr first implemented their algorithm, it wasn't very well trained yet, and like, pictures of fruit etc. were being banned because the AI was still figuring out what to do. Tumblr was in a rush to keep the app in the App Store (over 70% of their income is from app users; the site would have literally died if they didn't act quickly), and so they ended up with an extremely overzealous and initially broken (and still sometimes broken x.x) algorithm.
If Pinterest is going through something similar, either with the app store or with having to address a sudden legal change, they may be having algo problems, and reporting will help them train the algorithm better.
Trust me, there will be people behind the scenes - staff at all of these websites - who hate the changes as much as you do, even if they can't say so for professional reasons. But even the new owner of Tumblr got pretty close to saying 'it fucking sucks but we have to do it if you want the site to exist' (which honestly made it a lot more...possible to handle the change, because it's not usually the sites you have to hate/resent, but the laws getting changed around you. Also if anyone here is an adult and can do so - please vote!!!)
46 notes · View notes
mostlysignssomeportents · 2 years ago
Link
Sohn was eminently qualified to serve on the FCC, and there was no mystery as to who she would serve in that role: the American people, especially those who have been abused, forgotten or underserved by Big Telco and Big Cable, from digitally redlined inner cities to rural broadband deserts.
So the monopolists went to work. For sixteen months, they successfully lobbied the Senate to block her confirmation hearing. Not her confirmation — just the hearing. Over $23 million in telco money flowed into the Senate over this period, and that was just the start.
The ISPs also went to work on the frothing culture warriors of the American right, smearing Sohn as a “groomer” and an “anti-police radical.” They ran a homophobic smear campaign against Sohn, who is gay, and condemned her for her work as a volunteer board member with the Electronic Frontier Foundation, on the grounds that EFF opposes unconstitutional digital police surveillance and campaigned against SESTA/FOSTA, a law that has put sex-workers in grave physical danger while doing nothing to accomplish its nominal goal of preventing sex-trafficking (disclosure: I am a Special Advisor to EFF and am proud to have worked with them for over 21 years).
-Culture War Bullshit Stole Your Broadband: Your internet sucks because telco monopolists kept Gigi Sohn off the FCC
252 notes · View notes
mxjackparker · 1 year ago
Text
Censoring terms like "sex work" and making them unsearchable doesn't stop anyone from finding porn, which can be tagged a million other things, but it does make it harder to hear sex workers talk about their own experiences.
56 notes · View notes
inthefallofasparrow · 1 year ago
Text
How did the internet become so puritanical? On social media, outspoken anti-sex advocates increasingly cry “gross” at everything from R-rated rom-coms to fictional characters and queer people having sex to consenting adults with slight age gaps to dating short people. They see oversexualization in just about everything. They often accuse the things they dislike of being coded fronts for pedophilia, and the people who enjoy those things of being sexual predators. These social media users frequently form enclaves that turn as nightmarish and troubling as the things they’re ostensibly trying to police.
This dovetails with what we’re being told right now about Gen Z and sex: They’re having less casual sex, they hate dating, they’re more reserved about relationships in general. It’s easy to pigeonhole online anti-sex police as being teens and young adults, a.k.a. “puriteens.” Because so much of this comes down to carnal horror, you might assume that everyone who’s horrified is a teen who just hasn’t arrived at a mature view of sex and other adult activity. Such anti-sex zeal increasingly forces sex-positive communities back into the internet’s underground. It also aids and abets the larger cultural shift toward regressive attitudes and censorship of sexual minorities and sex-positive content.
Yet overwhelmingly, the common thread among this new generation of “antis” — a broad label for people who are opposed to sexual content in media — isn’t that they are minors who are scared of sex. It’s that none of them distinguish between fictional harm and real-world harm. That is, regardless of their ages, they believe fiction not only can have a real-world impact, but that it always has a real-world impact ...
139 notes · View notes
hungerpunch · 1 year ago
Text
i would kill a man and skin him if i never had to witness another tumblr user whimpering "will you unban porn now?" please jesus god fuck god christ PLEASE if tumblr didn't have to ban porn they never would have! you're asking and blaming the wrong people. look up FOSTA-SESTA to start with and then how that impacted app store policies.
22 notes · View notes
campgender · 8 months ago
Text
excerpts from Amber Dawn’s “Touch ≠ Touch Screen” that i thought particularly resonated with the most recent iteration of conversations on transmisogynistic + whorephobic censorship on tumblr & other social media
image description: three screenshots of a poem from the collection My Art Is Killing Me (2020) with stylized spacing.
excerpt 1:
Just survivors, I’m talking only to you now (literally you).
Did your abuse fever teach you to solder belonging and harm?
Were you seen and were you shamed in the same
original place? Did you inherit
a coercive dichotomy?
Anxious arousal hand
me downs?
Does your public network see you and hate you in looping rounds?
Does logging on harm you? Does all this somehow feel familiar?
excerpt 2:
Let’s talk about 2018 when
FOSTA-SESTA (Fight Online Sex Trafficking Act and Stop
Enabling Sex Traffickers Act) was passed as law by US Congress
on April 11, marking the first ever exception to Section 230.
So after twenty-two years, yes, twenty-two years, social media platforms
were made responsible for user generated content if that content may be
intended for sex work yes
all sex work, yes, consensual sex work and, yes
anything like a butthole or a female-presenting-assumed nipple, and yes
responsible for or in authority of images, words and phrases
that mend desire together
with age, race, size, orientation, disability, labour
economics and any bodies subject to other-ness.
And with other-ness, I’m talking about
fat babes in neon green lingerie, about two brown men
kissing, about trans women being radiant and using
their real fucking names. I’m talking about
masculine-presenting-assumed folk with baby bumps. I’m talking sexual
assault survivors showing off the scars on our inner thighs. I’m talking about
women posting screenshots of the violent Tinder messages we receive
every damn day. I’m talking about
speaking up. I’m talking language
reclamation. I’m talking decolonizing
sexualities. I’m talking gagged faggots
about dyke march photos
torn down. I’m talking
about locked accounts.
excerpt 3:
viii.
I’m talking about this—
power holding backlash.    How dare we get those likes
those shares, take up virtual space
speak truths, share strategy, love our ash and phoenix
bodies, rise up or dig deep, whichever way or all
directions at once.
We can be nimble AF
but how dare we?
Make the internet
white relentlessly white again
str8 cis thin and norm again
Redesign the sightline
of hating women.         I’m talking about this
power holding backlash.
I think about this a lot—
what it means to spend upwards
of two hours per day
on platforms that believe
we should not legally exist
un      see      able
end image description.
15 notes · View notes
calicostorms · 26 days ago
Text
Etsy banning adult toymakers is genuinely so devastating. So many amazing businesses were there making cool stuff, and it's so much harder to find out about them without a centralized website to start on. I'm really distraught and disgusted by how few of the creators I follow on Etsy are still allowed to sell their wares there now
2 notes · View notes