gracien-system
31K posts
Oh, hi there. Welcome, we hope you enjoy your stay. We're a system of unknown count, and go by it/they collectively. We are an adult. Send us asks.
Text
They’re calling me every slur under the sun over on twitter for this post

19K notes
Text
Pokemon headcanon that once Absol are studied and people realize they prevent disasters instead of causing them, particularly dangerous workplaces get themselves a workplace Absol, which also decreases accidents.
Construction sites and fishing ships and factories will have one that pretty much just lazes about until it just gets up howling one day and knocks a dude down. They almost never figure out what would have happened but they're always like "yes absol thank you absol I am so grateful to be on the floor right now. Can I offer you a treat in this trying time"
113K notes
Text
If you've got a friend who you know can't remember shit, and you feel like it'd be rude to remind them about something that's coming up (just in case they did remember the thing they signed up for, and now you've implied that you don't trust their memory), and you know that there's a 90% chance that they won't remember the thing unless you remind them, here's a tip from someone with a Can't Remember Shit Disease:
Instead of simply reminding them about the event, ask them about a specific detail involved in it. If you know that The Thing is on next week's friday, and the last moment you need confirmation on whether they're coming or not is this thursday, instead of texting
"Hey you remember we have the thing on next week's friday, right?"
you can text some specific question - regardless of whether the info itself is important to you or not - that clarifies when the event is, like
"Hey are you going to be driving to the thing next week's friday, or is someone giving you a ride? We'll need to plan parking beforehand."
Because if they did remember the thing, they can just answer the question you asked. And if they didn't remember and go "OH SHIT IT'S NEXT WEEK I COMPLETELY FORGOT", you still gave them the reminder they needed just the same.
I don't personally get insulted when people gently remind me that they know that I can't remember shit, and most self-aware memory problem people don't either, but if you're worried that it would feel rude to remind people about things you're worried they might've forgotten, this is a good way to circumvent that.
7K notes
Text

I've started getting into Lancer recently...
3K notes
Text
I’m a recovered hater who needs to see the good in people so I don’t kill myself and my brother is a recovered care-too-much-er who needs to dislike people so he doesn’t accidentally carry the world on his shoulders and kill himself so as you can imagine hanging out in public is sometimes an ordeal
3K notes
Text
standing up and blacking out for a few seconds is just transitioning from a cutscene to the actual gameplay
280K notes
Text
Promulgating a new variant of the Docetist ("hologram Jesus") heresy whereby the body of Christ which his followers perceived was an illusion, but there was a smaller material body inside that illusion.
865 notes
Text
By the way, the prosecution just violated Luigi Mangione's HIPAA rights and I have not seen people talking much about it
40K notes
Text
Fucking hate watching children go "um Actually UwU" about AO3. Saw someone say that fixing a bug with bookmarks isn't a good reason to close a site down for a couple of hours, and that they're all lying about what they spend money on.
meanwhile this very week my actual day job shut down the internal programmes for idk how many hours to fix a minor bug that popped up out of nowhere. I mean??? I don’t know shit about IT but “shut down all functions while we fix a problem” is so damn common. And “oh this took longer than we said” as well.
22K notes
Text
AI software assistants make the hardest kinds of bugs to spot

Hey, German-speakers! Through a very weird set of circumstances, I ended up owning the rights to the German audiobook of my bestselling 2022 cryptocurrency heist technothriller Red Team Blues and now I'm selling DRM-free audio and ebooks, along with the paperback (all in German and English) on a Kickstarter that runs until August 11.
It's easy to understand why some programmers love their AI assistants and others loathe them: the former get to decide how and when they use AI tools, while the latter have AI forced upon them by bosses who hope to fire their colleagues and increase the workload of whoever remains.
Formally, the first group are "centaurs" (people assisted by machines) and the latter are "reverse-centaurs" (people conscripted into assisting machines):
https://pluralistic.net/2025/05/27/rancid-vibe-coding/#class-war
Most workers have parts of their jobs they would happily automate away. I know of a programmer who uses AI to take a first pass at CSS code for formatted output. This is a notoriously tedious chore, and it's not hard to determine whether the AI got it right – just eyeball the output in a variety of browsers. If this was a chore you hated doing and someone gave you an effective tool to automate it, that would be cause for celebration. What's more, if you learned that this was only reliable for a subset of cases, you could confine your use of the AI to those cases.
Likewise, many workers dream of automating something that is so expensive or labor-intensive that they can't possibly do it today. I'm thinking here of the film editor who extolled to me the virtues of deepfaking the eyelines of every extra in a crowd scene, which lets them change the focus of the whole scene without reassembling a couple hundred extras, rebuilding the set, etc. This is a brand-new capability that increases the creative flexibility of that worker, and no wonder they love it. It's good to be a centaur!
Then there are the poor reverse-centaurs. These are workers whose bosses have saddled them with a literally impossible workload and handed them an AI tool. Maybe they've been ordered to use the tool, or maybe they've been ordered to complete the job (or else) by a boss who was suggestively waggling their eyebrows at the AI tool while giving the order. Think of the freelance writer whom Hearst tasked with singlehandedly producing an entire 64-page "best-of" summer supplement, including multiple best-of lists, who was globally humiliated when his "best books of the summer" list was chock full of imaginary books that the AI "hallucinated":
https://www.404media.co/viral-ai-generated-summer-guide-printed-by-chicago-sun-times-was-made-by-magazine-giant-hearst/
No one seriously believes that this guy could have written and fact-checked all that material by himself. Nominally, he was tasked with serving as the "human in the loop" who validated the AI's output. In reality, he was the AI's fall-guy, what Dan Davies calls an "accountability sink," who absorbed the blame for the inevitable errors that arise when an employer demands that a single human sign off on the products of an error-prone automated system that operates at machine speeds.
It's never fun to be a reverse-centaur, but it's especially taxing to be a reverse-centaur for an AI. AIs, after all, are statistical guessing programs that infer the most plausible next word based on the words that came before. Sometimes this goes obviously and badly awry, like when the AI tells you to put glue on your pizza or eat rocks. But more often, an AI's errors are precisely, expensively calculated to blend in perfectly with the scenery.
AIs are conservative. They can only output a version of the future that is predicted by the past, proceeding on a smooth, unbroken line from the way things were to the way they are presumed to be. But reality isn't smooth, it's lumpy and discontinuous.
Take the names of common code libraries: these follow naming conventions that make it easy to predict what a library for a given function will be called, and to guess what a given library does based on its name. But humans are messy and reality is lumpy, so these conventions are imperfectly followed. All the text-parsing libraries for a programming language may look like this: docx.text.parsing, txt.text.parsing, md.text.parsing, except for one, which defies convention by being named text.odt.parsing. Maybe someone had a brainfart and misnamed the library. Maybe the library was developed independently of everyone else's libraries and later merged. Maybe Mercury is in retrograde. Whatever the reason, the world contains many of these imperfections.
Ask an LLM to write you some software and it will "hallucinate" (that is, extrapolate) libraries that don't exist, because it will assume that all text-parsing libraries follow the convention. It will assume that the library for parsing odt files is called "odt.text.parsing," and it will put a link to that nonexistent library in your code.
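To make that concrete, here's a minimal sketch in Python using the essay's hypothetical module names (none of these are real packages): the first three follow the convention, and the fourth is the plausible-but-nonexistent name an LLM would extrapolate.

```python
# The module names below are the essay's hypothetical examples, not real
# packages. An LLM that has internalized the "<format>.text.parsing"
# convention will confidently emit the last one, even though the real
# library in this story is the convention-defying text.odt.parsing.
for name in ("docx.text.parsing", "txt.text.parsing",
             "md.text.parsing", "odt.text.parsing"):
    try:
        __import__(name)
        print(f"{name}: importable")
    except ImportError:
        print(f"{name}: does not exist")
```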
This creates a vulnerability for AI-assisted code, called "slopsquatting," whereby an attacker predicts the names of libraries AIs are apt to hallucinate and creates libraries with those names, libraries that do what you would expect they'd do, but also inject malicious code into every program that incorporates them:
https://www.theregister.com/2025/04/12/ai_code_suggestions_sabotage_supply_chain/
This is the hardest type of error to spot, because the AI is guessing the statistically most plausible name for the imaginary library. It's like the AI is constructing one of those spot-the-difference image puzzles on super-hard mode, swapping the fork and knife in a diner's hands from left to right and vice-versa. You couldn't generate a harder-to-spot bug if you tried.
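One cheap defense, sketched below (my illustration, not anything from the linked article; the file names and format are assumptions): refuse to install any dependency that no human has reviewed, by diffing the AI-suggested requirements against a hand-maintained allowlist.

```python
"""Minimal sketch: gate AI-suggested dependencies behind a human-reviewed
allowlist. File names and line format are assumptions for illustration."""
from pathlib import Path

def read_names(path: str) -> set[str]:
    # One package per line; ignore blanks, comments, and "==" version pins.
    names = set()
    for line in Path(path).read_text().splitlines():
        line = line.split("#")[0].strip()
        if line:
            names.add(line.split("==")[0].strip().lower())
    return names

def unreviewed(requirements: str, allowlist: str) -> set[str]:
    # Whatever the AI (or anyone else) added that no human has vetted yet.
    return read_names(requirements) - read_names(allowlist)

if __name__ == "__main__":
    for name in sorted(unreviewed("requirements.txt", "reviewed-packages.txt")):
        print(f"UNREVIEWED DEPENDENCY: {name} -- confirm it exists upstream before installing")
```

Nothing here proves a package is benign; it just forces a human decision at the exact point where a slopsquatted name would otherwise slide silently into the build.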
It's not like people are very good at supervising machines to begin with. "Automation blindness" is what happens when you're asked to repeatedly examine the output of a generally correct machine for a long time, and somehow remain vigilant for its errors. Humans aren't really capable of remaining vigilant for things that don't ever happen – whatever attention and neuronal capacity you initially devote to this never-come eventuality is hijacked by the things that happen all the time. This is why the TSA is so fucking amazing at spotting water bottles on X-rays, but consistently fails to spot the bombs and guns that red team testers smuggle into checkpoints. The median TSA screener spots a hundred water bottles a day, and is (statistically) never called upon to spot something genuinely dangerous to a flight. They have put in their 10,000 hours, and then some, on spotting water bottles, and approximately zero hours on spotting stuff that we really, really don't want to see on planes.
So automation blindness is already going to be a problem for any "human in the loop," from a radiologist asked to sign off on an AI's interpretation of your chest X-ray to a low-paid overseas worker remote-monitoring your Waymo…to a programmer doing endless, high-speed code-review for a chatbot.
But that coder has it worse than all the other in-looped humans. That coder doesn't just have to fight automation blindness – they have to fight automation blindness and spot the subtlest of errors in this statistically indistinguishable-from-correct code. AIs are basically doing bug steganography, smuggling code defects in by carefully blending them in with correct code.
At code shops around the world, the reverse-centaurs are suffering. A survey of Stack Overflow users found that AI coding tools are creating history's most difficult-to-discharge technical debt in the form of "almost right" code full of these fiendishly subtle bugs:
https://venturebeat.com/ai/stack-overflow-data-reveals-the-hidden-productivity-tax-of-almost-right-ai-code/
As VentureBeat reports, while usage of AI coding assistants is up (from 76% last year to 84% this year), trust in these tools is plummeting, down to 33% with no bottom in sight. 45% of coders say that debugging AI code takes longer than writing the code without AI at all. Only 29% of coders believe that AI tools can solve complex code problems.
VentureBeat concludes that there are code shops that "solve the 'almost right' problem" and see real dividends from AI tools. What they don't say is that the coders for whom "almost right" isn't a problem are centaurs, not reverse-centaurs. They are in charge of their own production and tooling, and no one is using AI tools as a pretext for a relentless hurry-up amidst swingeing cuts to headcount.
The AI bubble is driven by the promise of firing workers and replacing them with automation. Investors and AI companies are tacitly (and sometimes explicitly) betting that bosses who can fire a worker and replace them with a chatbot will pay the chatbot's maker an appreciable slice of that former worker's salary for an AI that takes them off the payroll.
The people who find AI fun or useful or surprising are centaurs. They're making automation choices based on their own assessment of their needs and the AIs' capabilities.
They are not the customers for AI. AI exists to replace workers, not empower them. Even if AI can make you more productive, there is no business model in increasing your pay and decreasing your hours.
AI is about disciplining labor to decrease its share of an AI-using company's profits. AI exists to lower a company's wage-bill, at your expense, with the savings split between your boss and an AI company. When Getty or the NYT or another media company sues an AI company for copyright infringement, that doesn't mean they are opposed to using AI to replace creative workers – they just want a larger slice of the creative workers' salaries in the form of a copyright license from the AI company that sells them the worker-displacing tool.
They'll even tell you so. When the movie studios sued Midjourney, the RIAA (whose most powerful members are subsidiaries of the same companies that own the studios) sent out this press statement, attributed to RIAA CEO Mitch Glazier:
There is a clear path forward through partnerships that both further AI innovation and foster human artistry. Unfortunately, some bad actors – like Midjourney – see only a zero-sum, winner-take-all game.
Get that? The problem isn't that Midjourney wants to replace all the animation artists – it's that they didn't pay the movie studios license fees for the training data. They didn't create "partnerships."
Incidentally: Mitch Glazier's last job was as a Congressional staffer who slipped an amendment into a must-pass bill that killed musicians' ability to claim the rights to their work back after 35 years through "termination of transfer." This was so outrageous that Congress held a special session to reverse it, and Glazier lost his job.
Whereupon the RIAA hired him to run the show.
AI companies are not pitching a future of AI-enabled centaurs. They're colluding with bosses to build a world of AI-shackled reverse centaurs. Some people are using AI tools (often standalone tools derived from open models, running on their own computers) to do some fun and exciting centaur stuff. But for the AI companies, these centaurs are a bug, not a feature – and they're the kind of bug that's far easier to spot and crush than the bugs that AI code-bots churn out in volumes no human can catalog, let alone understand.
Support me this summer in the Clarion Write-A-Thon and help raise money for the Clarion Science Fiction and Fantasy Writers' Workshop! This summer, I'm writing The Reverse-Centaur's Guide to AI, a short book for Farrar, Straus and Giroux that explains how to be an effective AI critic.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2025/08/04/bad-vibe-coding/#maximally-codelike-bugs
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
275 notes
Text
Every day of my life I learn I've been pronouncing a word wrong my entire life because I've only ever read it in my head.
Today's word is valet.
22 notes
Text
phineas and ferb episode that is rated pg13 so they are allowed one f-bomb. candace keeps trying to find the perfect situation to use it but every time she tries she gets interrupted or drowned out by a comically loud horn. meanwhile doofenshmirtz has made the censor-inator because now that they're pg13 he's convinced that vanessa doesnt know any swear words (she knows all of them) and that she'll explode if she hears someone say "shit" so he wants to make the entire tri-state area child-friendly. he flies it over danville in his official blimp and a one-off joke is that his laser hits a rated r movie that's a clear parody of the human centipede or someshit and it turns into a barney-the-dinosaur-ass psa. at the last second perry destroys the inator and knocks it out of the blimp and it fires off one more laser that censors out phineas and ferb's invention before mom can get home. candace tries to finally drop her one-allowed cuss but her voice has given out. perry comes back and makes a little platypus noise (all of those have been swear words the whole time but because nobody speaks platypus nobody notices) and then part of doof's machine crashes into the house (candace is in charge) and ferb says "what the fuck"
44K notes