Freedom of reach IS freedom of speech
The online debate over free speech suuuuucks, and, amazingly, it’s getting worse. This week, it’s the false dichotomy between “freedom of speech” and “freedom of reach,” that is, the debate over whether a platform should override your explicit choices about what you want to see:
https://seekingalpha.com/news/3849331-musk-meets-twitter-staff-freedom-of-reach-new-ideas-on-human-verification
It’s wild that we’re still having this fight. It is literally the first internet fight! The modern internet was born out of an epic struggle between “Bellheads” (who believed centralized powers should decide how you used networks) and “Netheads” (who believed that services should be provided and consumed “at the edge”):
https://www.wired.com/1996/10/atm-3/
The Bellheads grew out of the legacy telco system, which was committed to two principles: universal service and monetization. The large telcos were obliged to provide service to everyone (for some value of “everyone”), and in exchange, they enjoyed a monopoly over the people they connected to the phone system.
That meant that they could decide which services and features you had, and could ask the government to intervene to block competitors who added services and features they didn’t like. They wielded this power without restraint or mercy, targeting, for example, the Hush-A-Phone, a cup you stuck to your phone receiver to muffle your speech and prevent eavesdropping:
https://en.wikipedia.org/wiki/Hush-A-Phone
They didn’t block new features for shits and giggles, though — the method to this madness was rent-extraction. The iron-clad rule of the Bell System was that anything that improved on the basic service had to have a price-tag attached. Every phone “feature” was a recurring source of monthly revenue for the phone company — even the phone itself, which you couldn’t buy, and had to rent, month after month, year after year, until you’d paid for it hundreds of times over.
This is an early and important example of “predatory inclusion”: the monopoly carriers delivered universal service to all of us, but that was a prelude to an ugly, parasitic, rent-seeking way of doing business:
https://lpeproject.org/blog/predatory-inclusion-a-long-view-of-the-race-for-profit/
It wasn’t just the phone that came with a never-ending price-tag: everything you did with the phone was also a la carte, like the bananas-high long-distance charges, or even per-minute charges for local calls. Features like call waiting were monetized through recurring monthly charges, too.
Remember when Caller ID came in and you had to pay $2.50/month to find out who was calling you before you answered the phone? That’s a pure Bellhead play. If we applied this principle to the internet, then you’d have to pay $2.50/month to see the “from” line on an email before you opened it.
Bellheads believed in “smart” networks. Netheads believed in what David Isenberg called “The Stupid Network,” a “dumb pipe” whose only job was to let some people send signals to other people, who asked to get them:
https://www.isen.com/papers/Dawnstupid.html
This is called the End-to-End (E2E) principle: a network is E2E if it lets anyone receive any message from anyone else, without a third party intervening. It’s a straightforward idea, though the spam wars brought in an important modification: the message should be consensual (DoS attacks, spam, etc don’t count).
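In code, the consent rule is almost embarrassingly simple. Here's a minimal sketch (all names hypothetical): a message is delivered if and only if the recipient asked to hear from the sender and hasn't blocked them. No third party gets a veto.

```python
def should_deliver(sender, recipient, subscriptions, blocks):
    # E2E with consent: the only parties consulted are the endpoints.
    # `subscriptions` and `blocks` map each recipient to a set of senders.
    return (sender in subscriptions.get(recipient, set())
            and sender not in blocks.get(recipient, set()))
```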
The degradation of the internet into “five giant websites, each filled with screenshots of text from the other four” (h/t Tom Eastman) meant the end of end-to-end. If you’re a Youtuber, Tiktoker, tweeter, or Facebooker, the fact that someone explicitly subscribed to your feed does not mean that they will, in fact, see your feed.
The platforms treat your unambiguous request to receive messages from others as mere suggestions, a “signal” to be mixed into other signals in the content moderation algorithm that orders your feed, mixing in items from strangers whose material you never asked to see.
There’s nothing wrong in principle with the idea of a system that recommends items from strangers. Indeed, that’s a great way to find people to follow! But “stuff we think you’ll like” is not the same category as “stuff you’ve asked to see.”
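That category difference can be kept straight in software, too. A sketch (the item structure is invented) of a feed that shows your subscriptions newest-first and appends recommendations as labeled extras, instead of silently mixing them in:

```python
def build_feed(subscribed, recommended):
    # Subscriptions first, newest first: exactly what the user asked for.
    feed = sorted(subscribed, key=lambda item: item["ts"], reverse=True)
    # Recommendations are appended and explicitly labeled, so "stuff we
    # think you'll like" never masquerades as "stuff you asked to see".
    feed += [dict(item, recommended=True) for item in recommended]
    return feed
```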
Why do companies balk at showing you what you’ve asked to be shown? Sometimes it’s because they’re trying to be helpful. Maybe their research, or the inferences from their user surveillance, suggests that you actually prefer it that way.
But there’s another side to this: a feed composed of things from people is fungible. Theoretically, you could uproot that feed from one platform and settle it in another one — if everyone you follow on Twitter set up an account on Mastodon, you could use a tool like Movetodon to refollow them there and get the same feed:
https://www.movetodon.org/
A feed that is controlled by a company using secret algorithms is much harder for a rival to replicate. That’s why Spotify is so hellbent on getting you to listen to playlists, rather than albums. Your favorite albums are the same no matter where you are, but playlists are integrated into services.
But there’s another side to this playlistification of feeds: playlists and other recommendation algorithms are chokepoints: they are a way to durably interpose a company between a creator and their audience. Where you have chokepoints, you get chokepoint capitalism:
https://chokepointcapitalism.com/
That’s when a company captures an audience inside a walled garden and then extracts value from creators as a condition of reaching them, even when the audience requests the creator’s work. With Spotify, that manifests as payola, where creators have to pay for inclusion on playlists. Spotify uses playlists to manipulate audiences into listening to sound-alikes, silently replacing the ambient artists that listeners tune in to hear with work-for-hire musicians who aren’t entitled to royalties.
Facebook’s payola works much the same: when you publish a post on Facebook, you have to pay to boost it if you want it to reach the people who follow you — that is, the people who signed up to see what you post. Facebook may claim that it does this to keep its users’ feeds “uncluttered” but that’s a very thin pretense. Though you follow friends and family on Facebook, your feed is weighted to accounts willing to cough up the payola to reach you.
The “uncluttering” excuse wears even thinner when you realize that there’s no way to tell a platform: “This isn’t clutter, show it to me every time.” Think of how the cartel of giant email providers uses the excuse of spam to block mailing lists and newsletters that their users have explicitly signed up for. Those users can fish those messages out of their spam folders, they can add the senders to their address books, they can write an email rule that says, “If sender is X, then mark message as ‘not spam’” and the messages still go to spam:
https://doctorow.medium.com/dead-letters-73924aa19f9d
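What users are asking for here is nothing exotic: their explicit rules should take precedence over the automated classifier. A sketch, with a made-up rule format and spam-score threshold:

```python
def classify(message, user_rules, spam_score, threshold=0.8):
    # The user's explicit rules win, whatever the spam filter thinks.
    for matches, verdict in user_rules:
        if matches(message):
            return verdict
    # Only messages the user has expressed no opinion about fall
    # through to the automated classifier.
    return "spam" if spam_score >= threshold else "inbox"
```

With a rule like `(lambda m: m["from"] == "list@example.com", "inbox")` in place, a newsletter lands in the inbox no matter how spammy the filter scores it, which is precisely the override today's providers refuse to honor.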
One sign that the online free expression debate is irredeemably stupid: we’re arguing over stupid shit like whether unsolicited fundraising emails from politicians should be marked as spam, rather than whether solicited, double-opt-in newsletters and mailing lists should be:
https://www.cbsnews.com/news/republican-committee-sues-google-over-email-spam-filters/
When it comes to email, the stuff we don’t argue about is so much more important than the stuff we do. Think of how email list providers blithely advertise that they can tell you the “open rate” of the messages that you send — which means that they embed surveillance beacons (tracking pixels) in every message they send:
https://www.wired.com/story/how-email-open-tracking-quietly-took-over-the-web/
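The mechanics are trivial: a tracking “pixel” is just a unique, per-message image URL embedded in the HTML body. When your mail client fetches that invisible image, the sender’s server logs who opened the message, and when. A sketch (the tracker domain and URL scheme are invented):

```python
import uuid

def add_open_tracker(html_body, campaign_id):
    # One token per message; the sender's server stores token -> recipient,
    # so each image fetch reveals exactly who opened the mail, and when.
    token = uuid.uuid4().hex
    pixel = (f'<img src="https://tracker.example.com/o/{campaign_id}/{token}"'
             f' width="1" height="1" alt="">')
    return html_body + pixel, token
```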
Sending emails that spy on users is gross, but the fucking disgusting part is that our email clients don’t block spying by default. Blocking tracking pixels is easy as hell, and almost no one wants to be spied on when they read their email! The onboarding process for webmail accounts should have a dialog box that reads, “Would you like me to tell creepy randos which emails you read?” with the default being “Fuck no!” and the alternative being “Hurt me, Daddy!”
If email providers wanted to “declutter” your inbox, they could offer you a dashboard of senders whose messages you delete unread most of the time and offer to send those messages straight to spam in future. Instead, they nonconsensually intervene to block messages and offer no way to override those blocks.
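That dashboard would be a weekend project. A sketch of the heuristic (the activity-log format is invented): flag senders whose messages you delete unread at least 80% of the time, then ask you what to do about them.

```python
from collections import Counter

def declutter_candidates(events, min_msgs=5, rate=0.8):
    # `events` is a hypothetical activity log: (sender, action) pairs,
    # where action is "deleted_unread" or "read".
    total = Counter(sender for sender, _ in events)
    unread = Counter(sender for sender, action in events
                     if action == "deleted_unread")
    # Only flag senders with enough history to judge, and let the user
    # decide; don't silently spam-folder anything.
    return [s for s in total
            if total[s] >= min_msgs and unread[s] / total[s] >= rate]
```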
When it comes to recommendations, companies have an unresolvable conflict of interest: maybe they’re interfering with your communications to make your life better, or maybe they’re doing it to make money for their shareholders. Sorting one from the other is nigh impossible, because it turns on the company’s intent, and it’s impossible to read product managers’ minds.
This is intrinsic to platform capitalism. When platforms are getting started, their imperative is to increase their user-base. To do that, they shift surpluses to their users — think of how Amazon started off by subsidizing products and deliveries.
That lured in businesses, and shifted some of that surplus to sellers — giving fat compensation to Kindle authors and incredible reach to hard goods sellers in Marketplace. More sellers brought in more customers, who brought in more sellers.
Once sellers couldn’t afford to leave Amazon because of customers, and customers couldn’t afford to leave Amazon because of sellers, the company shifted the surplus to itself. It imposed impossible fees on sellers — Amazon’s $31b/year “advertising” business is just payola — and when sellers raised prices to cover those fees, Amazon used “Most Favored Nation” contracts to force sellers to raise prices everywhere else.
The enshittification of Amazon — where you search for a specific product and get six screens of ads for different, worse ones — is the natural end-state of chokepoint capitalism:
https://pluralistic.net/2022/11/28/enshittification/#relentless-payola
That same enshittification is on every platform, and “freedom of speech is not freedom of reach” is just a way of saying, “Now that you’re stuck here, we’re going to enshittify your experience.”
Because while it’s hard to tell if recommendations are fair or not, it’s very easy to tell whether blocking end-to-end is unfair. When a person asks for another person to send them messages, and a third party intervenes to block those messages, that is censorship. Even if you call it “freedom of reach,” it’s still censorship.
For creators, interfering with E2E is also wage-theft. If you’re making stuff for Youtube or Tiktok or another platform and that platform’s algorithm decides you’ve broken a rule and therefore your subscribers won’t see your video, that means you don’t get paid.
It’s as if your boss handed you a paycheck with only half your pay in it, and when you asked what happened to the other half, your boss said, “You broke some rules so I docked your pay, but I won’t tell you which rules because if I did, you might figure out how to break them without my noticing.”
Content moderation is the only part of information security where security-through-obscurity is considered good practice:
https://doctorow.medium.com/como-is-infosec-307f87004563
That’s why content moderation algorithms are a labor issue, and why projects like Tracking Exposed, which reverse-engineer those algorithms to give creative workers and their audiences control over what they see, are fighting for labor rights:
https://www.eff.org/deeplinks/2022/05/tracking-exposed-demanding-gods-explain-themselves
We’re at the tail end of a ghastly, 15-year experiment in neo-Bellheadism, with the big platforms treating end-to-end as a relic of a simpler time, rather than as “an elegant weapon from a more civilized age.”
The post-Twitter platforms like Mastodon and Tumblr are E2E platforms, designed around the idea that if someone asks to hear what you have to say, they should hear it. Rather than developing algorithms to override your decisions, these platforms have extensive tooling to let you fine-tune what you see.
https://pluralistic.net/2022/08/08/locus-of-individuation/#publish-then-filter
This tooling was once the subject of intense development and innovation, but all that research fell by the wayside with the rise of platforms, which are actively hostile to third-party mods that give users more control over their feeds:
https://techcrunch.com/2022/09/27/og-app-promises-you-an-ad-free-instagram-feed/
Alas, lawmakers are way behind the curve on this, demanding new “online safety” rules that require firms to break E2E and block third-party de-enshittification tools:
https://www.openrightsgroup.org/blog/online-safety-made-dangerous/
The online free speech debate is stupid because it has all the wrong focuses:
Focusing on improving algorithms, not whether you can even get a feed of things you asked to see;
Focusing on whether unsolicited messages are delivered, not whether solicited messages reach their readers;
Focusing on algorithmic transparency, not whether you can opt out of the behavioral tracking that produces training data for algorithms;
Focusing on whether platforms are policing their users well enough, not whether we can leave a platform without losing our important social, professional and personal ties;
Focusing on whether the limits on our speech violate the First Amendment, rather than whether they are unfair:
https://doctorow.medium.com/yes-its-censorship-2026c9edc0fd
The wholly artificial distinction between “freedom of speech” and “freedom of reach” is just more self-serving nonsense and the only reason we’re talking about it is that a billionaire dilettante would like to create chokepoints so he can extract payola from his users and meet his debt obligations to the Saudi royal family.
Billionaire dilettantes have their own stupid definitions of all kinds of important words like “freedom” and “discrimination” and “free speech.” Remember: these definitions have nothing to do with how the world’s 7,999,997,332 non-billionaires experience these concepts.
Image:
Cryteria (modified)
https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0
https://creativecommons.org/licenses/by/3.0/deed.en
William Shaw Antliff (modified)
https://www.macleans.ca/history/this-canadian-private-wrote-and-saved-hundreds-of-letters-during-the-first-world-war/
Public domain
https://en.wikipedia.org/wiki/Copyright_law_of_Canada#Posthumous_works
[Image ID: A handwritten letter from a WWI soldier that has been redacted by military censors; the malevolent red eye of HAL9000 from 2001: A Space Odyssey has burned through the yellowing paper.]
The one where Bruce is the asshole (again)
So! We have a typical story where the JLA finds out about the Situation in Amity.
Whichever way they find out doesn't matter, but either way they end up sending Batman to do a threat analysis and review of whether this requires their attention.
And while there, he runs into a Kid who obviously needs to be saved from his Abusive Home. Look at him, he's far too thin, his grades are horrible, he has many unexcused absences, and he has bruises hidden under his clothes.
Even after figuring out that Danny is Phantom the local Hero, he thinks Danny needs to be saved from his Parents.
I mean, it's plain to see! They Hate Ghosts with a Passion, neglect their son very often, shoot at him nearly every day, and are probably the ones who killed him in the first place!
So, with no input from Danny himself, Bruce calls CPS on the Fentons and uses his Wealth to expedite the process and avoid the actual Investigation. (I mean, why would you even need one? It's so obviously a bad home!)
The Fentons are arrested, and Bruce reveals that Danny is Phantom to convince the Courts that they are horrible people for shooting at their own son, and that they should be locked up (ignoring the horrified looks on their faces; he figures that's just the shock of realizing they'd been living with a Ghost for so long).
He immediately offers to adopt Danny, even when Danny vehemently refuses his offer. He knows that Danny will come around to it; he's doing this for his own good. Danny still thinks his Parents were good people, and not the Villains they really were.
Meanwhile Danny's life has been completely uprooted thanks to the self-righteous machinations of an Adoption Crazed Fruitloop! And not even the usual one!
Sure his parents were often busy with their work, but they Always set aside time to hang out with their kids and make sure they were okay. They never abused him, the neglect only lasted for like a month or two after the portal accident, before they got their act together and apologized for it, and (most importantly) THEY DIDN'T KNOW he was a Halfa when they shot at him! They only found out when the ASSHOLE revealed his Identity in Court!
And Danny is Extra enraged by that part. The Adoption Crazed Fruitloop had revealed his secret identity for the ENTIRE WORLD TO HEAR!
He would never be able to live a normal life anymore, even if he managed to get away from the Moron who caused all this!
Bruce Wayne was a Villain in his eyes.
He ripped him from his home and from his family (basically kidnapped), revealed his identity to the world so he was forced to stay with him for fear of the GIW, and spun the whole story so that it looked like he was the Good Guy in this!?
It was official. Danny Hates Bruce Wayne, possibly more than anyone else in the World.
And that's a High Bar.