Shifting $677m from the banks to the people, every year, forever
I'll be in TUCSON, AZ from November 8-10: I'm the GUEST OF HONOR at the TUSCON SCIENCE FICTION CONVENTION.
"Switching costs" are one of the great underappreciated evils in our world: the more it costs you to change from one product or service to another, the worse the vendor, provider, or service you're using today can treat you without risking your business.
Businesses set out to keep switching costs as high as possible. Literally. Mark Zuckerberg's capos send him memos chortling about how Facebook's new photos feature will punish anyone who leaves for a rival service with the loss of all their family photos – meaning Zuck can torment those users for profit and they'll still stick around so long as the abuse is less bad than the loss of all their cherished memories:
https://www.eff.org/deeplinks/2021/08/facebooks-secret-war-switching-costs
It's often hard to quantify switching costs, but we can tell when they're high: if your landlord ties your internet service to your lease (splitting the profits with a shitty ISP that overcharges and underdelivers), the switching cost of getting a new internet provider is the cost of moving house. We can tell when they're low, too: you can switch from one podcatcher program to another just by exporting your list of subscriptions from the old one and importing it into the new one:
https://pluralistic.net/2024/10/16/keep-it-really-simple-stupid/#read-receipts-are-you-kidding-me-seriously-fuck-that-noise
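That export/import round-trip works because podcast subscriptions travel in a simple, open XML format called OPML. Here's a minimal sketch of the receiving side of a podcatcher import, with made-up feed URLs:

```python
import xml.etree.ElementTree as ET

def import_subscriptions(opml_text: str) -> list[str]:
    """Extract podcast feed URLs from an exported OPML subscription list."""
    root = ET.fromstring(opml_text)
    # Each subscription is an <outline> element with an xmlUrl attribute.
    return [node.attrib["xmlUrl"]
            for node in root.iter("outline")
            if "xmlUrl" in node.attrib]

# A toy export from the old podcatcher (the feed URLs are invented):
opml = """<opml version="2.0">
  <body>
    <outline text="Show A" type="rss" xmlUrl="https://example.com/a.rss"/>
    <outline text="Show B" type="rss" xmlUrl="https://example.com/b.rss"/>
  </body>
</opml>"""

feeds = import_subscriptions(opml)
print(feeds)  # → ['https://example.com/a.rss', 'https://example.com/b.rss']
```

Because the format is open and trivial to parse, no podcatcher can hold your subscriptions hostage – which is exactly why switching costs stay low.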
But sometimes, economists can get a rough idea of the dollar value of high switching costs. For example, a group of economists working for the Consumer Finance Protection Bureau calculated that the hassle of changing banks is costing Americans at least $677m per year (see page 526):
https://files.consumerfinance.gov/f/documents/cfpb_personal-financial-data-rights-final-rule_2024-10.pdf
The CFPB economists used a very conservative methodology, so the number is likely higher, but let's stick with that figure for now. The switching costs of changing banks – determining which bank has the best deal for you, then transferring over your account histories, cards, payees, and automated bill payments – are costing everyday Americans more than half a billion dollars, every year.
Now, the CFPB wasn't gathering this data just to make you mad. They wanted to do something about all this money – to find a way to lower switching costs, and, in so doing, transfer all that money from bank shareholders and executives to the American public.
And that's just what they did. A newly finalized Personal Financial Data Rights rule will allow you to authorize third parties – other banks, comparison shopping sites, brokers, anyone who offers you a better deal, or help you find one – to request your account data from your bank. Your bank will be required to provide that data.
I loved this rule when they first proposed it:
https://pluralistic.net/2024/06/10/getting-things-done/#deliverism
And I like the final rule even better. They've really nailed this one, even down to the fine-grained details where interop wonks like me get very deep into the weeds. For example, a thorny problem with interop rules like this one is "who gets to decide how the interoperability works?" Where will the data-formats come from? How will we know they're fit for purpose?
This is a super-hard problem. If we put the monopolies whose power we're trying to undermine in charge of this, they can easily cheat by delivering data in uselessly obfuscated formats. For example, when I used California's privacy law to force Mailchimp to provide a list of all the mailing lists I've been signed up for without my permission, they sent me thousands of folders containing more than 5,900 spreadsheets listing their internal serial numbers for the lists I'm on, with no way to find out what these lists are called or how to get off of them:
https://pluralistic.net/2024/07/22/degoogled/#kafka-as-a-service
So if we're not going to let the companies decide on data formats, who should be in charge of this? One possibility is to require the use of a standard, but again, which standard? We can ask a standards body to make a new standard, which they're often very good at, but not when the stakes are high like this. Standards bodies are very weak institutions that large companies are very good at capturing:
https://pluralistic.net/2023/04/30/weak-institutions/
Here's how the CFPB solved this: they listed out the characteristics of a good standards body, listed out the data types that the standard would have to encompass, and then told banks that so long as they used a standard from a good standards body that covered all the data-types, they'd be in the clear.
Once the rule is in effect, you'll be able to go to a comparison shopping site and authorize it to go to your bank for your transaction history, and then tell you which bank – out of all the banks in America – will pay you the most for your deposits and charge you the least for your debts. Then, after you open a new account, you can authorize the new bank to go back to your old bank and get all your data: payees, scheduled payments, payment history, all of it. Switching banks will be as easy as switching mobile phone carriers – just a few clicks and a few minutes' work to get your old number working on a phone with a new provider.
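The comparison step is mechanically simple once the data is portable. Here's a toy sketch of the shopping-site logic – the banks, rates, and fees below are all invented for illustration:

```python
def best_bank(avg_balance: float, banks: dict[str, dict]) -> str:
    """Pick the bank offering the highest net annual value for this customer:
    interest earned on their average balance, minus annual fees."""
    def net_value(terms: dict) -> float:
        return avg_balance * terms["apy"] - terms["annual_fee"]
    return max(banks, key=lambda name: net_value(banks[name]))

# Invented example: your transaction history shows ~$5,000 on deposit.
banks = {
    "First Example Bank":   {"apy": 0.0010, "annual_fee": 60.0},
    "Example Credit Union": {"apy": 0.0400, "annual_fee": 0.0},
}
print(best_bank(5_000, banks))  # → 'Example Credit Union'
```

The hard part was never the math – it was getting your own transaction history out of your bank in a usable format, which is what the rule mandates.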
This will save Americans at least $677 million, every year. Which is to say, it will cost the banks at least $677 million every year.
Naturally, America's largest banks are suing to block the rule:
https://www.americanbanker.com/news/cfpbs-open-banking-rule-faces-suit-from-bank-policy-institute
Of course, the banks claim that they're only suing to protect you, and the $677m annual transfer from their investors to the public has nothing to do with it. The banks claim to be worried about bank-fraud, which is a real thing that we should be worried about. They say that an interoperability rule could make it easier for scammers to get at your data and even transfer your account to a sleazy fly-by-night operation without your consent. This is also true!
It is obviously true that a bad interop rule would be bad. But it doesn't follow that every interop rule is bad, or that it's impossible to make a good one. The CFPB has made a very good one.
For starters, you can't just authorize anyone to get your data. Eligible third parties have to meet stringent criteria and undergo vetting. These third parties are only allowed to ask for the narrowest slice of your data needed to perform the task you've set for them. They aren't allowed to use that data for anything else, and as soon as they've finished, they must delete your data. You can also revoke their access to your data at any time, for any reason, with one click – none of this "call a customer service rep and wait on hold" nonsense.
What's more, if your bank has any doubts about a request for your data, they are empowered to (temporarily) refuse to provide it, until they confirm with you that everything is on the up-and-up.
I wrote about the lawsuit this week for @[email protected]'s Deeplinks blog:
https://www.eff.org/deeplinks/2024/10/no-matter-what-bank-says-its-your-money-your-data-and-your-choice
In that article, I point out the tedious, obvious ruses of securitywashing and privacywashing, where a company insists that its most abusive, exploitative, invasive conduct can't be challenged because that would expose their customers to security and privacy risks. This is such bullshit.
It's bullshit when printer companies say they can't let you use third party ink – for your own good:
https://arstechnica.com/gadgets/2024/01/hp-ceo-blocking-third-party-ink-from-printers-fights-viruses/
It's bullshit when car companies say they can't let you use third party mechanics – for your own good:
https://pluralistic.net/2020/09/03/rip-david-graeber/#rolling-surveillance-platforms
It's bullshit when Apple says they can't let you use third party app stores – for your own good:
https://www.eff.org/document/letter-bruce-schneier-senate-judiciary-regarding-app-store-security
It's bullshit when Facebook says you can't independently monitor the paid disinformation in your feed – for your own good:
https://pluralistic.net/2021/08/05/comprehensive-sex-ed/#quis-custodiet-ipsos-zuck
And it's bullshit when the banks say you can't change to a bank that charges you less, and pays you more – for your own good.
CFPB boss Rohit Chopra is part of a cohort of Biden enforcers who've hit upon a devastatingly effective tactic for fighting corporate power: they read the law and found out what they're allowed to do, and then did it:
https://pluralistic.net/2023/10/23/getting-stuff-done/#praxis
The CFPB was created in 2010 with the passage of the Consumer Financial Protection Act, which specifically empowers the CFPB to make this kind of data-sharing rule. Back when the CFPA was in Congress, the banks howled about this rule, whining that they were being forced to share their data with their competitors.
But your account data isn't your bank's data. It's your data. And the CFPB is gonna let you have it, and they're gonna save you and your fellow Americans at least $677m/year – forever.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/11/01/bankshot/#personal-financial-data-rights
The disenshittified internet starts with loyal "user agents"
I'm in TARTU, ESTONIA! Overcoming the Enshittocene (TOMORROW, May 8, 6PM, Prima Vista Literary Festival keynote, University of Tartu Library, Struve 1). AI, copyright and creative workers' labor rights (May 10, 8AM: Science Fiction Research Association talk, Institute of Foreign Languages and Cultures building, Lossi 3, lobby). A talk for hackers on seizing the means of computation (May 10, 3PM, University of Tartu Delta Centre, Narva 18, room 1037).
There's one overwhelmingly common mistake that people make about enshittification: assuming that the contagion is the result of the Great Forces of History, or that it is the inevitable end-point of any kind of for-profit online world.
In other words, they class enshittification as an ideological phenomenon, rather than as a material phenomenon. Corporate leaders have always felt the impulse to enshittify their offerings, shifting value from end users, business customers and their own workers to their shareholders. The decades of largely enshittification-free online services were not the product of corporate leaders with better ideas or purer hearts. Those years were the result of constraints on the mediocre sociopaths who would trade our wellbeing and happiness for their own, constraints that forced them to act better than they do today, even if they were not any better:
https://pluralistic.net/2024/04/24/naming-names/#prabhakar-raghavan
Corporate leaders' moments of good leadership didn't come from morals, they came from fear. Fear that a competitor would take away a disgruntled customer or worker. Fear that a regulator would punish the company so severely that all gains from cheating would be wiped out. Fear that a rival technology – alternative clients, tracker blockers, third-party mods and plugins – would emerge that permanently severed the company's relationship with their customers. Fears that key workers in their impossible-to-replace workforce would leave for a job somewhere else rather than participate in the enshittification of the services they worked so hard to build:
https://pluralistic.net/2024/04/22/kargo-kult-kaptialism/#dont-buy-it
When those constraints melted away – thanks to decades of official tolerance for monopolies, which led to regulatory capture and victory over the tech workforce – the same mediocre sociopaths found themselves able to pursue their most enshittificatory impulses without fear.
The effects of this are all around us. In This Is Your Phone On Feminism, the great Maria Farrell describes how audiences at her lectures profess both love for their smartphones and mistrust for them. Farrell says, "We love our phones, but we do not trust them. And love without trust is the definition of an abusive relationship":
https://conversationalist.org/2019/09/13/feminism-explains-our-toxic-relationships-with-our-smartphones/
I (re)discovered this Farrell quote in a paper by Robin Berjon, who recently co-authored a magnificent paper with Farrell entitled "We Need to Rewild the Internet":
https://www.noemamag.com/we-need-to-rewild-the-internet/
The new Berjon paper is narrower in scope, but still packed with material examples of the way the internet goes wrong and how it can be put right. It's called "The Fiduciary Duties of User Agents":
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3827421
In "Fiduciary Duties," Berjon focuses on the technical term "user agent," which is how web browsers are described in formal standards documents. This notion of a "user agent" is a holdover from a more civilized age, when technologists tried to figure out how to build a new digital space where technology served users.
A web browser that's a "user agent" is a comforting thought. An agent's job is to serve you and your interests. When you tell it to fetch a web-page, your agent should figure out how to get that page, make sense of the code embedded in it, and render the page in a way that represents its best guess of how you'd like the page seen.
For example, the user agent might judge that you'd like it to block ads. More than half of all web users have installed ad-blockers, constituting the largest consumer boycott in human history:
https://doc.searls.com/2023/11/11/how-is-the-worlds-biggest-boycott-doing/
Your user agent might judge that the colors on the page are outside your visual range. Maybe you're colorblind, in which case, the user agent could shift the gamut of the colors away from the colors chosen by the page's creator and into a set that suits you better:
https://dankaminsky.com/dankam/
Or maybe you (like me) have a low-vision disability that makes low-contrast type difficult to impossible to read, and maybe the page's creator is a thoughtless dolt who's chosen light grey-on-white type, or maybe they've fallen prey to the absurd urban legend that not-quite-black type is somehow more legible than actual black type:
https://uxplanet.org/basicdesign-never-use-pure-black-in-typography-36138a3327a6
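A user agent can make that legibility judgment mechanically. The WCAG 2 accessibility guidelines define a contrast ratio between two colors in terms of their relative luminance; body text below 4.5:1 fails the AA threshold. A sketch of the check a faithful browser could run on every page:

```python
def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """WCAG 2 relative luminance of an sRGB color (0-255 per channel)."""
    def channel(c: int) -> float:
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    """WCAG 2 contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

black_on_white = contrast_ratio((0, 0, 0), (255, 255, 255))       # 21.0:1
grey_on_white = contrast_ratio((170, 170, 170), (255, 255, 255))  # ~2.3:1 — fails AA
print(f"{black_on_white:.1f}, {grey_on_white:.1f}")
```

When the thoughtless dolt's light-grey-on-white fails that test, an agent loyal to you darkens the type – no matter what the page's creator intended.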
The user agent is loyal to you. Even when you want something the page's creator didn't consider – even when you want something the page's creator violently objects to – your user agent acts on your behalf and delivers your desires, as best as it can.
Now – as Berjon points out – you might not know exactly what you want. Like, you know that you want the privacy guarantees of TLS (the difference between "http" and "https") but not really understand the internal cryptographic mysteries involved. Your user agent might detect evidence of shenanigans indicating that your session isn't secure, and choose not to show you the web-page you requested.
This is only superficially paradoxical. Yes, you asked your browser for a web-page. Yes, the browser defied your request and declined to show you that page. But you also asked your browser to protect you from security defects, and your browser made a judgment call and decided that security trumped delivery of the page. No paradox needed.
But of course, the person who designed your user agent/browser can't anticipate all the ways this contradiction might arise. Like, maybe you're trying to access your own website, and you know that the security problem the browser has detected is the result of your own forgetful failure to renew your site's cryptographic certificate. At that point, you can tell your browser, "Thanks for having my back, pal, but actually this time it's fine. Stand down and show me that webpage."
That's your user agent serving you, too.
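The decision logic here is simple, even though real certificate validation isn't. A toy sketch of an agent that defaults to protecting you but stands down on your explicit say-so (the policy details are invented for illustration):

```python
from datetime import date, timedelta

def should_show_page(cert_expiry: date, today: date,
                     user_override: bool) -> bool:
    """A faithful agent blocks pages with expired certificates by default,
    but honors an explicit 'I know, show it anyway' from the user."""
    cert_valid = cert_expiry >= today
    return cert_valid or user_override

today = date(2024, 11, 1)
expired = today - timedelta(days=3)  # you forgot to renew your own cert

print(should_show_page(expired, today, user_override=False))  # → False: blocked
print(should_show_page(expired, today, user_override=True))   # → True: your call
```

The key design choice is where the override lives: with you, not with the page's creator, and not with the browser vendor.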
User agents can be well-designed or they can be poorly made. The fact that a user agent is designed to act in accord with your desires doesn't mean that it always will. A software agent, like a human agent, is not infallible.
However – and this is the key – if a user agent thwarts your desire due to a fault, that is fundamentally different from a user agent that thwarts your desires because it is designed to serve the interests of someone else, even when that is detrimental to your own interests.
A "faithless" user agent is utterly different from a "clumsy" user agent, and faithless user agents have become the norm. Indeed, as crude early internet clients progressed in sophistication, they grew increasingly treacherous. Most non-browser tools are designed for treachery.
A smart speaker or voice assistant routes all your requests through its manufacturer's servers and uses this to build a nonconsensual surveillance dossier on you. Smart speakers and voice assistants even secretly record your speech and route it to the manufacturer's subcontractors, whether or not you're explicitly interacting with them:
https://www.sciencealert.com/creepy-new-amazon-patent-would-mean-alexa-records-everything-you-say-from-now-on
By design, apps and in-app browsers seek to thwart your preferences regarding surveillance and tracking. An app will even try to figure out if you're using a VPN to obscure your location from its maker, and snitch you out with its guess about your true location.
Mobile phones assign persistent tracking IDs to their owners and transmit them without permission (to its credit, Apple recently switched to an opt-in system for transmitting these IDs) (but to its detriment, Apple offers no opt-out from its own tracking, and actively lies about the very existence of this tracking):
https://pluralistic.net/2022/11/14/luxury-surveillance/#liar-liar
An Android device running Chrome and sitting inert, with no user interaction, transmits location data to Google every five minutes. This is the "resting heartbeat" of surveillance for an Android device. Ask that device to do any work for you and its pulse quickens, until it is emitting a nearly continuous stream of information about your activities to Google:
https://digitalcontentnext.org/blog/2018/08/21/google-data-collection-research/
These faithless user agents both reflect and enable enshittification. The locked-down nature of the hardware and operating systems for Android and iOS devices means that manufacturers – and their business partners – have an arsenal of legal weapons they can use to block anyone who gives you a tool to modify the device's behavior. These weapons are generically referred to as "IP rights" which are, broadly speaking, the right to control the conduct of a company's critics, customers and competitors:
https://locusmag.com/2020/09/cory-doctorow-ip/
A canny tech company can design their products so that any modification that puts the user's interests above its shareholders is illegal, a violation of its copyright, patent, trademark, trade secrets, contracts, terms of service, nondisclosure, noncompete, most favored nation, or anticircumvention rights. Wrap your product in the right mix of IP, and its faithless betrayals acquire the force of law.
This is – in Jay Freeman's memorable phrase – "felony contempt of business model." While more than half of all web users have installed an ad-blocker, thus overriding the manufacturer's defaults to make their browser a more loyal agent, no app users have modified their apps with ad-blockers.
The first step of making such a blocker, reverse-engineering the app, creates criminal liability under Section 1201 of the Digital Millennium Copyright Act, with a maximum penalty of five years in prison and a $500,000 fine. An app is just a web-page skinned in sufficient IP to make it a felony to add an ad-blocker to it (no wonder every company wants to coerce you into using its app, rather than its website).
If you know that increasing the invasiveness of the ads on your web-page could trigger mass installations of ad-blockers by your users, it becomes irrational and self-defeating to ramp up your ads' invasiveness. The possibility of interoperability acts as a constraint on tech bosses' impulse to enshittify their products.
The shift to platforms dominated by treacherous user agents – apps, mobile ecosystems, walled gardens – weakens or removes that constraint. As your ability to discipline your agent so that it serves you wanes, the temptation to turn your user agent against you grows, and enshittification follows.
This has been tacitly understood by technologists since the web's earliest days and has been reaffirmed even as enshittification increased. Berjon quotes extensively from "The Internet Is For End-Users," AKA Internet Architecture Board RFC 8890:
Defining the user agent role in standards also creates a virtuous cycle; it allows multiple implementations, allowing end users to switch between them with relatively low costs (…). This creates an incentive for implementers to consider the users' needs carefully, which are often reflected into the defining standards. The resulting ecosystem has many remaining problems, but a distinguished user agent role provides an opportunity to improve it.
And the W3C's Technical Architecture Group echoes these sentiments in "Web Platform Design Principles," which articulates a "Priority of Constituencies" that is supposed to be central to the W3C's mission:
User needs come before the needs of web page authors, which come before the needs of user agent implementors, which come before the needs of specification writers, which come before theoretical purity.
https://w3ctag.github.io/design-principles/
But the W3C's commitment to faithful agents is contingent on its own members' commitment to these principles. In 2017, the W3C finalized "EME," a standard for blocking mods that interact with streaming videos. Nominally aimed at preventing copyright infringement, EME also prevents users from adding accessibility add-ons beyond the ones the streaming service permits. These services may support closed captioning and additional narration of visual elements, but they block tools that adapt video for color-blind users or prevent strobe effects that trigger seizures in users with photosensitive epilepsy.
The fight over EME was the most contentious struggle in the W3C's history, in which the organization's leadership had to decide whether to honor the "priority of constituencies" and make a standard that allowed users to override manufacturers, or whether to facilitate the creation of faithless agents specifically designed to thwart users' desires on behalf of manufacturers:
https://www.eff.org/deeplinks/2017/09/open-letter-w3c-director-ceo-team-and-membership
This fight was settled in favor of a handful of extremely large and powerful companies, over the objections of a broad collection of smaller firms, nonprofits representing users, academics and other parties agitating for a web built on faithful agents. This coincided with the W3C's operating budget becoming entirely dependent on the very large sums its largest corporate members paid.
W3C membership is on a sliding scale, based on a member's size. Nominally, the W3C is a one-member, one-vote organization, but when a highly concentrated collection of very high-value members flex their muscles, W3C leadership seemingly perceived an existential risk to the organization, and opted to sacrifice the faithfulness of user agents in service to the anti-user priorities of its largest members.
For W3C's largest corporate members, the fight was absolutely worth it. The W3C's EME standard transformed the web, making it impossible to ship a fully featured web-browser without securing permission – and a paid license – from one of the cartel of companies that dominate the internet. In effect, Big Tech used the W3C to secure the right to decide who would compete with them in future, and how:
https://blog.samuelmaddock.com/posts/the-end-of-indie-web-browsers/
Enshittification arises when the everyday mediocre sociopaths who run tech companies are freed from the constraints that act against them. When the web – and its browsers – were a big, contented, diverse, competitive space, it was harder for tech companies to collude to capture standards bodies like the W3C to secure even more dominance. As the web turned into Tom Eastman's "five giant websites filled with screenshots of text from the other four," that kind of collusion became much easier:
https://pluralistic.net/2023/04/18/cursed-are-the-sausagemakers/#how-the-parties-get-to-yes
In arguing for faithful agents, Berjon associates himself with the group of scholars, regulators and activists who call for user agents to serve as "information fiduciaries." Mostly, information fiduciaries come up in the context of user privacy, with the idea that entities that hold a user's data would have the obligation to put the user's interests ahead of their own. Think of a lawyer's fiduciary duty in respect of their clients, to give advice that reflects the client's best interests, even when that conflicts with the lawyer's own self-interest. For example, a lawyer who believes that settling a case is the best course of action for a client is required to tell them so, even if keeping the case going would generate more billings for the lawyer and their firm.
For a user agent to be faithful, it must be your fiduciary. It must put your interests ahead of the interests of the entity that made it or operates it. Browsers, email clients, and other internet software that served as a fiduciary would do things like automatically blocking tracking (which most email clients don't do, especially webmail clients made by companies like Google, who also sell advertising and tracking).
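For example, an email client acting as your fiduciary could strip the invisible 1×1 "tracking pixel" images that report when, where, and on what device you opened a message. A crude standard-library sketch – real clients need far more robust heuristics than this, since trackers rarely announce themselves so plainly:

```python
import re

# Matches <img> tags that declare themselves one pixel wide and one pixel
# high — the classic read-receipt beacon. Real trackers are sneakier.
TRACKING_PIXEL = re.compile(
    r'<img\b[^>]*\bwidth="1"[^>]*\bheight="1"[^>]*>', re.IGNORECASE)

def strip_tracking_pixels(html: str) -> str:
    """Remove self-declared 1x1 beacon images from an HTML email body."""
    return TRACKING_PIXEL.sub("", html)

email_body = (
    '<p>Dear reader, a special offer!</p>'
    '<img src="https://tracker.example.com/open?id=123" width="1" height="1">'
)
print(strip_tracking_pixels(email_body))  # → '<p>Dear reader, a special offer!</p>'
```

A webmail client whose maker also sells advertising has every incentive not to ship that function – which is exactly the conflict of interest a fiduciary duty is meant to resolve.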
Berjon contemplates a legally mandated fiduciary duty, citing Lindsey Barrett's "Confiding in Con Men":
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3354129
He describes a fiduciary duty as a remedy for the enforcement failures of the EU's GDPR, a solidly written, and dismally enforced, privacy law. A legally backstopped duty for agents to be fiduciaries would also help us distinguish good and bad forms of "innovation" – innovation in ways of thwarting a user's will is always bad.
Now, the tech giants insist that they are already fiduciaries, and that when they thwart a user's request, that's more like blocking access to a page where the encryption has been compromised than like HAL9000's "I can't let you do that, Dave." For example, when Louis Barclay created "Unfollow Everything," he (and his enthusiastic users) found that automating the process of unfollowing every account on Facebook made their use of the service significantly better:
https://slate.com/technology/2021/10/facebook-unfollow-everything-cease-desist.html
When Facebook shut the service down with blood-curdling legal threats, they insisted that they were simply protecting users from themselves. Sure, this browser automation tool – which just automatically clicked links on Facebook's own settings pages – seemed to do what the users wanted. But what if the user interface changed? What if so many users added this feature to Facebook without Facebook's permission that they overwhelmed Facebook's (presumably tiny and fragile) servers and crashed the system?
These arguments have lately resurfaced with Ethan Zuckerman and Knight First Amendment Institute's lawsuit to clarify that "Unfollow Everything 2.0" is legal and doesn't violate any of those "felony contempt of business model" laws:
https://pluralistic.net/2024/05/02/kaiju-v-kaiju/
Sure, Zuckerman seems like a good guy, but what if he makes a mistake and his automation tool does something you don't want? You, the Facebook user, are also a nice guy, but let's face it, you're also a naive dolt and you can't be trusted to make decisions for yourself. Those decisions can only be made by Facebook, whom we can rely upon to exercise its authority wisely.
Other versions of this argument surfaced in the debate over the EU's decision to mandate interoperability for end-to-end encrypted (E2EE) messaging through the Digital Markets Act (DMA), which would let you switch from, say, WhatsApp to Signal and still send messages to your WhatsApp contacts.
There are some good arguments that this could go horribly awry. If it is rushed, or internally sabotaged by the EU's state security services who loathe the privacy that comes from encrypted messaging, it could expose billions of people to serious risks.
But that's not the only argument that DMA opponents made: they also argued that even if interoperable messaging worked perfectly and had no security breaches, it would still be bad for users, because this would make it impossible for tech giants like Meta, Google and Apple to spy on message traffic (if not its content) and identify likely coordinated harassment campaigns. This is literally the identical argument the NSA made in support of its "metadata" mass-surveillance program: "Reading your messages might violate your privacy, but watching your messages doesn't."
This is obvious nonsense, so its proponents need an equally dishonest way to defend it. When called on the absurdity of "protecting" users by spying on them against their will, they simply shake their heads and say, "You just can't understand the burdens of running a service with hundreds of millions or billions of users, and if I even tried to explain these issues to you, I would divulge secrets that I'm legally and ethically bound to keep. And even if I could tell you, you wouldn't understand, because anyone who doesn't work for a Big Tech company is a naive dolt who can't be trusted to understand how the world works (much like our users)."
Not coincidentally, this is also literally the same argument the NSA makes in support of mass surveillance, and there's a very useful name for it: scalesplaining.
Now, it's totally true that every one of us is capable of lapses in judgment that put us, and the people connected to us, at risk (my own parents gave their genome to the pseudoscience genetic surveillance company 23andme, which means they have my genome, too). A true information fiduciary shouldn't automatically deliver everything the user asks for. When the agent perceives that the user is about to put themselves in harm's way, it should throw up a roadblock and explain the risks to the user.
But the system should also let the user override it.
This is a contentious statement in information security circles. Users can be "socially engineered" (tricked), and even the most sophisticated users are vulnerable to this:
https://pluralistic.net/2024/02/05/cyber-dunning-kruger/#swiss-cheese-security
The only way to be certain a user won't be tricked into taking a course of action is to forbid that course of action under any circumstances. If there is any means by which a user can flip the "are you very sure?" circuit-breaker back on, then the user can be tricked into using that means.
This is absolutely true. As you read these words, all over the world, vulnerable people are being tricked into speaking the very specific set of directives that cause a suspicious bank-teller to authorize a transfer or cash withdrawal that will result in their life's savings being stolen by a scammer:
https://www.thecut.com/article/amazon-scam-call-ftc-arrest-warrants.html
We keep making it harder for bank customers to make large transfers, but so long as it is possible to make such a transfer, the scammers have the means, motive and opportunity to discover how the process works, and they will go on to trick their victims into invoking that process.
Beyond a certain point, making it harder for bank depositors to harm themselves creates a world in which people who aren't being scammed find it nearly impossible to draw out a lot of cash for an emergency and where scam artists know exactly how to manage the trick. After all, non-scammers only rarely experience emergencies and thus have no opportunity to become practiced in navigating all the anti-fraud checks, while the fraudster gets to run through them several times per day, until they know them even better than the bank staff do.
This is broadly true of any system intended to control users at scale – beyond a certain point, additional security measures are trivially surmounted hurdles for dedicated bad actors and nearly insurmountable hurdles for their victims:
https://pluralistic.net/2022/08/07/como-is-infosec/
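The fiduciary-with-override pattern described above is simple enough to sketch (a toy illustration only: the action names and function are invented for this example, not any real agent's API):

```python
# Toy fiduciary agent: throw up a roadblock on risky actions by default,
# but leave the user a deliberate, explicit way to override it.
# The action names here are invented for illustration.
RISKY_ACTIONS = {"share_genome", "wire_life_savings"}

def perform(action: str, *, user_confirmed_override: bool = False) -> str:
    """Carry out an action, blocking risky ones unless the user overrides."""
    if action in RISKY_ACTIONS and not user_confirmed_override:
        return f"blocked: {action} looks risky; override explicitly to proceed"
    return f"done: {action}"
```

The point of the sketch is the keyword argument: the roadblock is real, and it can't be tripped by accident, but the final say rests with the user, not the vendor.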
At this point, we've had a couple of decades' worth of experience with technological "walled gardens" in which corporate executives get to override their users' decisions about how the system should work, even when that means reaching into the users' own computer and compelling it to thwart the user's desire. The record is inarguable: while companies often use those walls to lock bad guys out of the system, they also use the walls to lock their users in, so that they'll be easy pickings for the tech company that owns the system:
https://pluralistic.net/2023/02/05/battery-vampire/#drained
This is neatly predicted by enshittification's theory of constraints: when a company can override your choices, it will be irresistibly tempted to do so for its own benefit, and to your detriment.
What's more, the mere possibility that you can override the way the system works acts as a disciplining force on corporate executives, forcing them to reckon with your priorities even when these are counter to their shareholders' interests. If Facebook is genuinely worried that an "Unfollow Everything" script will break its servers, it can solve that by giving users an unfollow-everything button of its own design. But so long as Facebook can sue anyone who makes an "Unfollow Everything" tool, it has no reason to give users such a button, because it would give them more control over their Facebook experience, including the controls needed to use Facebook less.
It's been more than 20 years since Seth Schoen and I got a demo of Microsoft's first "trusted computing" system, with its "remote attestations," which would let remote servers demand and receive accurate information about what kind of computer you were using and what software was running on it.
This could be beneficial to the user – you could send a "remote attestation" to a third party you trusted and ask, "Hey, do you think my computer is infected with malicious software?" Since the trusted computing system produced its report on your computer using a sealed, separate processor that the user couldn't directly interact with, any malicious code you were infected with would not be able to forge this attestation.
But this remote attestation feature could also be used to allow Microsoft to block you from opening a Word document with Libreoffice, Apple Pages, or Google Docs, or it could be used to allow a website to refuse to send you pages if you were running an ad-blocker. In other words, it could transform your information fiduciary into a faithless agent.
Seth proposed an answer to this: "owner override," a hardware switch that would allow you to force your computer to lie on your behalf, when that was beneficial to you, for example, by insisting that you were using Microsoft Word to open a document when you were really using Apple Pages:
https://web.archive.org/web/20021004125515/http://vitanuova.loyalty.org/2002-07-05.html
Seth wasn't naive. He knew that such a system could be exploited by scammers and used to harm users. But Seth calculated – correctly! – that the risks of having a key to let yourself out of the walled garden were less than being stuck in a walled garden where some corporate executive got to decide whether and when you could leave.
Tech executives never stopped questing after a way to turn your user agent from a fiduciary into a traitor. Last year, Google toyed with the idea of adding remote attestation to web browsers, which would let services refuse to interact with you if they thought you were using an ad blocker:
https://pluralistic.net/2023/08/02/self-incrimination/#wei-bai-bai
The reasoning for this was incredible: by adding remote attestation to browsers, they'd be creating "feature parity" with apps – that is, they'd be making it as practical for your browser to betray you as it is for your apps to do so (note that this is the same justification that the W3C gave for creating EME, the treacherous user agent in your browser – "streaming services won't allow you to access movies with your browser unless your browser is as enshittifiable and authoritarian as an app").
Technologists who work for giant tech companies can come up with endless scalesplaining explanations for why their bosses, and not you, should decide how your computer works. They're wrong. Your computer should do what you tell it to do:
https://www.eff.org/deeplinks/2023/08/your-computer-should-say-what-you-tell-it-say-1
These people can kid themselves that they're only taking away your power and handing it to their boss because they have your best interests at heart. As Upton Sinclair told us, it's impossible to get someone to understand something when their paycheck depends on them not understanding it.
The only way to get a tech boss to consistently treat you well is to ensure that if they stop, you can quit. Anything less is a one-way ticket to enshittification.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/05/07/treacherous-computing/#rewilding-the-internet
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic#maria farrell#scalesplaining#user agents#eme#w3c#sdos#scholarship#information fiduciary#the internet is for end users#ietf#delegation#bootlickers#unfollow everything#remote attestation#browsers#treacherous computing#enshittification#snitch chips#Robin Berjon#rewilding the internet
Riddle goes to Idia's room for manga recommendations (Cater's advice). Idia shows off his collection, featuring the more questionable manga he owns. Riddle gets, understandably, curious. The most awkward misunderstanding of their lives ensues.
#art#comic#twisted wonderland#idia shroud#riddle rosehearts#had a thought and i was like 'I should draw this lol' here it is#and actually finished a sketch comic for once?! crazy actually#Anyways i have been thinking about them way to much the parralelism is CRAZY#chronically online x chronically offline do you guys get it#a part of my brain is like 'girl this is so cringe people will kill you for this' but like. i dont care enough to not do that. ssry#to my friends that will see this um uuuuuuu#whatever see you in like 3 months next time i do a cringe ass comic i still wanna post bcs its not sdo bad actually
Sun Releases 2 of its Strongest Flares yet on May 11, 2024 | 24h time-lapse (AIA 0304 Å) Courtesy of NASA/SDO, AIA, EVE, & HMI science teams.
The Sun emitted two of its strongest solar flares yet from an active sunspot region called AR3664, peaking at 01:23am UTC on May 11, 2024, and 11:44am UTC on May 11, 2024. NASA’s Solar Dynamics Observatory, which watches the Sun constantly, captured images of the events. Solar flares are powerful bursts of energy. Flares and solar eruptions can impact radio communications, electric power grids, navigation signals, and pose risks to spacecraft and astronauts. The flares are classified as X5.8 and X1.5-class flares, respectively. X-class denotes the most intense flares, while the number provides more information about its strength.
Excerpt from NASA Solar Cycle 25 blog post
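The class labels in the excerpt follow the standard GOES flare scale (the baselines below are the conventional peak soft X-ray flux values, not stated in the excerpt itself), and the arithmetic is simple:

```python
# Sketch of the GOES flare-classification arithmetic: each letter marks
# a tenfold step in peak X-ray flux, and the trailing number multiplies
# that baseline, so X5.8 means 5.8 x 10^-4 W/m^2.
BASELINE_FLUX = {"A": 1e-8, "B": 1e-7, "C": 1e-6, "M": 1e-5, "X": 1e-4}  # W/m^2

def peak_flux(flare_class: str) -> float:
    """Convert a class string like 'X5.8' to peak X-ray flux in W/m^2."""
    letter, number = flare_class[0].upper(), float(flare_class[1:])
    return BASELINE_FLUX[letter] * number
```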
#NASA#space#solar storm#geomagnetic storms#solar flare#northern lights#aurora borealis#video#SDO#sun
Space Weather News https://spaceweather.com THE STRONGEST FLARE YET:
Sunspot AR3842 exploded again today, producing the strongest solar flare of Solar Cycle 25 so far.
The X9-category blast hurled a CME directly toward Earth.
This makes two CMEs now en route to our planet.
The forecast calls for auroras this weekend.
#science#space#astronomy#physics#news#nasa#astrophysics#spacetimewithstuartgary#starstuff#spacetime#sdo#space weather#geomagnetic storm
Huge tornados of plasma on the Sun