#substack nazis
Text
Tech libertarianism is, fundamentally, an ideology for people who are both cheap and lazy. That is the great advantage that attracts businesspeople to adopt a libertarian perspective on speech regulation. If your first instinct about content moderation is “I would rather not think about this, it shouldn’t be my problem, and I definitely don’t want to spend any resources on it,” then libertarianism is the ideology for you. ... It is bad and weird that Google, Facebook, Apple, and the rest of big tech have been left to play the role of regulator-of-last-resort. Their executives at times complain, at times correctly, that even if they have the right as private businesses to make these decisions, we would all be better off with some other entity making them. (The hitch here, of course, is that one reason we have reduced government regulatory capacity to make and enforce these decisions is that these same companies have worked tirelessly to whittle down the size and scale of the administrative state. It has been a project of attaining great power while foreswearing any responsibility. Which is, y’know, really not great!) ... This is why every tech CEO loves the libertarian approach to speech issues. Tech libertarianism holds that someone else (or no one at all) should expend resources on setting and enforcing boundaries for how your product is used. The essence of the position is “I shouldn’t have to spend money on any of this. And I shouldn’t ever face negative consequences for not spending money on this.” (It’s a bit like someone who refuses to tip at a restaurant and insists it’s because they believe philosophically that the whole system is unjust and restaurants ought to pay fair wages to their workers. Sure! Fair point! But in the meantime, here and now, you’re still being a cheapskate asshole.)
On Substack Nazis, laissez-faire tech regulation, and mouse-poop-in-cereal-boxes
Dave Karpf
Dec 14, 2023
3 notes
Text
From The Atlantic: Substack Has a Nazi Problem
[that link is to an archived version, so no paywall]
Bottom Line: Substack's CEOs and leadership aren't just being laissez-faire about the fascists and open white supremacists on the platform; they actively boost them by having them on the company podcast, featuring them, mentioning them, and promoting their work. Because those newsletters bring in LOADS of money, and they love money. Even newsletters that repeatedly violate Substack's basic, useless guidelines go unpunished.
This isn't a huge surprise for anyone who has been following the major issues with Substack that have come up in the past few years. There was the whole scandal where the public discovered that Substack had been secretly paying people to be on the service while advertising that anyone can make it on their own here! Plus, they were paying bigots directly to put their newsletters on the service.
Good breakdowns of that from Annalee Newitz and Grace Lavery.
Then there was the disastrous interview one of the CEOs (Chris Best) did with Nilay Patel of The Verge when Substack's Twitter clone launched. Nilay -- who is, if you hadn't guessed, of Indian descent -- asked him pointed questions about content moderation and... well...
Nilay: I just want to be clear, if somebody shows up on Substack and says “all brown people are animals and they shouldn’t be allowed in America,” you’re going to censor that. That’s just flatly against your terms of service.
Best: So, we do have a terms of service that have narrowly prescribed things that are not allowed.
Nilay: That one I’m pretty sure is just flatly against your terms of service. You would not allow that one. That’s why I picked it.
Best: So there are extreme cases, and I’m not going to get into the–
Nilay: Wait. Hold on. In America in 2023, that is not so extreme, right? “We should not allow as many brown people in the country.” Not so extreme. Do you allow that on Substack? Would you allow that on Substack Notes?
Best: I think the way that we think about this is we want to put the writers and the readers in charge–
Nilay: No, I really want you to answer that question. Is that allowed on Substack Notes? “We should not allow brown people in the country.”
Best: I’m not going to get into gotcha content moderation.
Nilay: This is not a gotcha... I’m a brown person. Do you think people on Substack should say I should get kicked out of the country?
Best: I’m not going to engage in content moderation, “Would you or won’t you this or that?”
Nilay: That one is black and white, and I just want to be clear: I’ve talked to a lot of social network CEOs, and they would have no hesitation telling me that that was against their moderation rules.
Best: Yeah. We’re not going to get into specific “would you or won’t you” content moderation questions.
Nilay: Why?
Best: I don’t think it’s a useful way to talk about this stuff.
Best wasn't willing to get into these "gotchas" around their new social network, which is a pretty clear indication that they won't engage with them around content moderation on the original platform either. (Their statement after the fact did nothing to make things better.)
It's also really clear from the Atlantic article that the Substack CEOs/Owners are, at best, more interested in making money than in keeping white supremacists and Nazis (literal ones) off their platform. At worst, the Substack CEOs/Owners are supremacist/Nazi sympathizers. Either way:
Substack Directly Supports the Alt-Right, Nazis, and White Supremacists
Openly, brazenly, and without remorse.
#tw: nazis#nazis#fascism#tw: fascists#tw: white supremacists#white supremacists#Substack#newsletters
485 notes
Text
new newsletter 🎧
#estimated reading time says 24 minutes..... sorry angels i got a bit excited. but i hope u have fun reading this !!! <3#also yayyy first post on medium bc fuck substacks nazi ass leadership#l#newsletters
35 notes
Quote
But turning a blind eye to recommended content almost always comes back to bite a platform. It was recommendations on Twitter, Facebook, and YouTube that helped turn Alex Jones from a fringe conspiracy theorist into a juggernaut that could terrorize families out of their homes. It was recommendations that turned QAnon from loopy trolling on 4Chan into a violent national movement. It was recommendations that helped to build the modern anti-vaccine movement. The moment a platform begins to recommend content is the moment it can no longer claim to be simple software.
Why Substack is at a crossroads
29 notes
Text
i'm writing again! not a letter this week, but something i wrote in the midst of a sleepless night. much love.
#ive moved to medium because of the whole nazis on substack thing so#idk if you need a medium account to read this??#anyway#partridgeremains#much love if you read it <3
14 notes
Text
Israel has lost the plot
Mouin Rabbani's latest assessment of Israel's Unfulfilled Objective: Eliminating Hamas
#screenshots from Norm Finkelstein's substack#read the whole thing it's great!#palestine#nazi mention
18 notes
Text
The owner of Substack is hosting Nazis because “free speech”
58 notes
Text
Before preceding [sic] we should mention what “Nazi ideas” are, since McKenzie didn’t. “Nazi ideas” are: first, that people Nazis consider part of their ethnic and philosophical in-group represent pure paragons of humanity; second, that all other human beings are corrupted and corrupting threats poisoning their bloodstream both literally and metaphorically; and third, that all other human beings therefore should be subjugated, then expelled, then exterminated, so that true humanity can finally thrive. They have other ideas as well, but those make up the core.
The notion of a marketplace of ideas selecting the best idea and rejecting the worse is an interesting one. It suggests that marketplaces always select quality, especially the more unregulated they are, which is not something I’ve noticed to be true about how any actual marketplaces operate. The idea that Nazi “ideas” need to be defeated in open debate, which will cause them to lose power, is also interesting. It presupposes that debates are always won by the most correct idea, which I’ve noticed is often the opposite of how debate works. It also suggests that the Nazis’ plan is to participate in bloodless debate over their ideas, and accept the outcome if their ideas are rejected, which is not a plan I think Nazis have ever pursued, or the sort of arena in which they have ever admitted—much less accepted—defeat. It also suggests that what Nazis have are “ideas,” when we know that what they actually have are intentions, and those intentions always create real-life violence toward marginalized communities along racial, ethnic, religious, and other lines of bigotry—and they do so the more effectively Nazis are able to gather and organize and promote their “ideas” into the mainstream. Most damning of all, it also smuggles in the idea that—in the mind of the person making the suggestion, at least—Nazi ideas haven’t been effectively defeated in the marketplace of ideas yet. Apparently Nazi ideas are still valid and worthy of consideration, and need to be debated some more before they are to be considered defeated.
13 notes
Text
The US National Institutes of Health describes scientific racism as “an organised system of misusing science to promote false scientific beliefs in which dominant racial and ethnic groups are perceived as being superior”.
The ideology rests on the false belief that “races” are separate and distinct. “Racial purity is a fantasy concept,” said Dr Adam Rutherford, a lecturer in genetics at University College London. “It does not and has not and never will exist, but it is inherent to the scientific racism programme.”
Prof Alexander Gusev, a quantitative geneticist at Harvard University, said that “broadly speaking there is essentially no scientific evidence” for scientific racism’s core tenets.
The writer Angela Saini, author of a book on the return of race science, has described how it traces its roots to arguments originally used to defend colonialism and later Nazi eugenics, and today can often be deployed to “shore up” political views.
In multiple conversations, HDF’s organisers suggested their interests were also political. Frost appeared to express support for what he called “remigration”, which Ahrens had told him would be the AfD’s key policy should the party win power.
...
The principal benefactor
Andrew Conru founded his first internet business while studying mechanical engineering at Stanford. In 2007, he hit the jackpot, selling his dating website Adult FriendFinder to the pornography company Penthouse for $500m.
In recent years, the entrepreneur has turned his attention to giving away his money, declaring on his personal website: “My ultimate goal is not to accumulate wealth or accolades, but to leave a lasting, positive impact on the world.”
His foundation has given millions to a wide and sometimes contrasting range of causes, including a Seattle dramatic society, a climate thinktank and a pet rehoming facility, as well as less progressive recipients: an anti-immigration group called the Center for Immigration Studies, and Turning Point USA, which runs a watchlist of university professors it claims advance leftist propaganda.
0 notes
Text
Bouncing off my last reblog about how Substack has become a haven for Nazis, here’s an interview the EIC of The Verge did on his podcast, Decoder, where he asked Chris Best, the CEO of Substack, how they would moderate hate speech.
His response was atrocious and took me from “oh, maybe I’ll make a Substack” in the first few minutes of the interview to “holy shit, I’m never touching that place” in the opening of the second half, right after the commercial break.
#Substack#The Verge#the news the other day about Substack being openly okay with Nazis didn’t surprise me because of this#did it make me sad and angry? of course#but man what a fool Chris Best is#you guys should all listen to Decoder and The Vergecast if you have any interest in tech or what’s going on in the industry#because shit like this is more common than you think and The Verge will always call them out
8 notes
Text
Content Moderation Isn't As Hard As They Say
Another issue from the Atlantic article on Substack that bears discussing is this bit:
Moderating online content is notoriously tricky. Amid the ongoing crisis in Israel and Gaza, Amnesty International recently condemned social-media companies’ failure to curb a burst of anti-Semitic and Islamophobic speech, at the same time that it criticized those companies for “over-broad censorship” of content from Palestinian and pro-Palestinian accounts—which has made sharing information and views from inside Gaza more difficult. When tech platforms are quick to banish posters, partisans of all stripes have an incentive to accuse their opponents of being extremists in an effort to silence them. But when platforms are too permissive, they risk being overrun by bigots, harassers, and other bad-faith actors who drive away other users, as evidenced by the rapid erosion of Twitter, now X, under Musk. In a post earlier this year, a Substack co-founder, Hamish McKenzie, implied that his company’s business model would largely obviate the need for content moderation. “We give communities on Substack the tools to establish their own norms and set their own terms of engagement rather than have all that handed down to them by a central authority,” he wrote. But even a platform that takes an expansive view of free speech will inevitably find itself making judgments about what to take down and what to keep up—as Substack’s own terms of service attest. ... Ultimately, the First Amendment gives publications and platforms in the United States the right to publish almost anything they want. But the same First Amendment also gives them the right to refuse to allow their platform to be used for anything they don’t want to publish or host.
I don't agree that moderating online content is "tricky" in the way that the article writer posits it. Even that first example is presented as if it's somehow talking out of both sides of one's mouth to condemn social media companies for allowing anti-Semitic and Islamophobic speech while suppressing pro-Palestinian posts and accounts. What?
And that bit about partisans using a network's propensity to use the banhammer as a tool to silence their opponents is indeed a thing, but it's only effective if the network's banning "policies" (used very loosely here) are vague and mostly run by bots. It can even be a problem when humans get involved in the moderation, if those humans don't truly understand what they're looking at or have been trained improperly.
Back in 2017 ProPublica published a deep dive into what people who are tasked with reviewing flagged content are trained to see as appropriate or not. It wasn't a pretty picture.
There's also the part about language and cultural understanding. If a platform outsources its content moderation to a country where it can get that labor for "cheap", the individuals reviewing the content may not know English well enough to spot a problem, or know the culture a post originated in well enough to recognize dog whistles, or even outright bigotry, if it's not on the list they've been given of what's unacceptable.
For issues at the scale of Facebook, Instagram, TikTok, and other very large networks, the main solution is and has always been money. Money to pay people inside of a country or culture to review the materials. Money to train them properly. Money to support the mental health toll this work takes on people. You know what companies hate to do? Spend money on stuff that isn't CEO pay.
But let's be real here: the ultimate problem in content moderation isn't that it's tricky, it's that corporate-owned networks aren't willing to take an ethical stand on things like what constitutes racism, sexism, homophobia, or any other true bigotry. They're also not willing to take a stand against ideas like "reverse racism" or "reverse sexism" and the like. You won't see them saying: Those reverse isms aren't a real thing and we won't tolerate that crap around here.
You can't create a moderation policy that covers every tiny detail of what is and isn't okay and what words are and aren't okay and such granular stuff as that. You can have a code of ethics and a morality that prioritizes harm reduction, especially for marginalized groups. Not so ironically, I've seen these kinds of policies most when looking at various Mastodon instances suggested to me and others. Here's a good example.
Yes, I know that scale is a huge factor here and I don't discount it. Scale doesn't mean this kind of moderation is impossible, just more difficult or costly as things grow. Yet it's not difficult to take a stand and say: We don't want white supremacists or Nazis on our platform, period. As The Atlantic points out, platforms and social networks have a First Amendment right to do that.
The Substack CEOs? Aren't willing.
#Substack#content moderation#terms of service#ethics#morality#tw: Nazis#tw: white supremacists#long post
4 notes
Quote
Before preceding we should mention what “Nazi ideas” are, since McKenzie didn’t. “Nazi ideas” are: first, that people Nazis consider part of their ethnic and philosophical in-group represent pure paragons of humanity; second, that all other human beings are corrupted and corrupting threats poisoning their bloodstream both literally and metaphorically; and third, that all other human beings therefore should be subjugated, then expelled, then exterminated, so that true humanity can finally thrive. They have other ideas as well, but those make up the core. If you want to hear examples of Nazi ideas, you should listen to the leader of the Republican Party, whose name is Donald Trump. He’s saying stuff like this all the time, and his crowds cheer and cheer, when they aren’t claiming to be opposed to antisemitism and other forms of racism and bigotry and supremacy and suppression, that is.
A.R. Moxon
28 notes