#AI as a weapon
Text
Ethics of autonomous weapons systems and its applicability to any AI systems
Highlights
• Dual use of AI implies that tools designed for good can be used as weapons, and that they need to be regulated from that perspective.
• Tools which may affect people's freedom should be treated as weapons and be subject to International Humanitarian Law.
• A freeze on research is neither possible nor desirable, nor is maintaining the current status quo.
• The key ethical principles are that the way algorithms work is understood and that humans retain enough control.
• All artificial intelligence development should take into account possible military uses of the technology from inception.
2 notes
Text
Something I love about Leo is that, canonically, he IS capable of cooking, he’s just completely incapable of using a toaster. He’s banned from the kitchen not out of an inability to make edible food, but because being within six feet of a toaster causes the poor appliance to spontaneously combust.
#rottmnt#rise of the teenage mutant ninja turtles#rottmnt leo#rottmnt headcanons#rise leo#all Leos mortal enemy: toasters#side note but thinking about this aspect of Leo’s character really has me wanting to make a deeper dive into Leo as a Jack of All Trades#because he has aspects of this all throughout the series#where he can do many things he’s just not the best at it#like he can cook but he’s no Mikey#he can - canonically - rewire an *AI PROGRAM* but it goes very wrong#he can lift both Mikey and Raph simultaneously but he struggles to do so where his other brothers don’t even break a sweat#bro is a Jack of All Trades Master of None frfr#and Leo is even more interesting with this in mind because he uses what he CAN do so well#it’s like how he can see his family’s strengths and weaknesses and knows exactly how they work#his skill set is made way better simply by his personal USE of those skills not by the skills themselves#portals and teleportation are only op if you know how to weaponize them#given time he ABSOLUTELY could#okay I shut up now this was supposed to be about toasters#but yeah all the boys have a bunch of skills under their belts outside the typical ones#but Leo stands out to me for having skills his other brothers have but to a much lesser degree in a lot of cases#and he works with what he has so well that that is a skill in itself
429 notes
Text
AI is a WMD
I'm in TARTU, ESTONIA! AI, copyright and creative workers' labor rights (TOMORROW, May 10, 8AM: Science Fiction Research Association talk, Institute of Foreign Languages and Cultures building, Lossi 3, lobby). A talk for hackers on seizing the means of computation (TOMORROW, May 10, 3PM, University of Tartu Delta Centre, Narva 18, room 1037).
Fun fact: "The Tragedy Of the Commons" is a hoax created by the white nationalist Garrett Hardin to justify stealing land from colonized people and moving it from collective ownership, "rescuing" it from the inevitable tragedy by putting it in the hands of a private owner, who will care for it properly, thanks to "rational self-interest":
https://pluralistic.net/2023/05/04/analytical-democratic-theory/#epistocratic-delusions
Get that? If control over a key resource is diffused among the people who rely on it, then (Hardin claims) those people will all behave like selfish assholes, overusing and undermaintaining the commons. It's only when we let someone own that commons and charge rent for its use that (Hardin says) we will get sound management.
By that logic, Google should be the internet's most competent and reliable manager. After all, the company used its access to the capital markets to buy control over the internet, spending billions every year to make sure that you never try a search-engine other than its own, thus guaranteeing it a 90% market share:
https://pluralistic.net/2024/02/21/im-feeling-unlucky/#not-up-to-the-task
Google seems to think it's got the problem of deciding what we see on the internet licked. Otherwise, why would the company flush $80b down the toilet with a giant stock-buyback, and then do multiple waves of mass layoffs, from last year's 12,000 person bloodbath to this year's deep cuts to the company's "core teams"?
https://qz.com/google-is-laying-off-hundreds-as-it-moves-core-jobs-abr-1851449528
And yet, Google is overrun with scams and spam, which find their way to the very top of the first page of its search results:
https://pluralistic.net/2023/02/24/passive-income/#swiss-cheese-security
The entire internet is shaped by Google's decisions about what shows up on that first page of listings. When Google decided to prioritize shopping site results over informative discussions and other possible matches, the entire internet shifted its focus to producing affiliate-link-strewn "reviews" that would show up on Google's front door:
https://pluralistic.net/2024/04/24/naming-names/#prabhakar-raghavan
This was catnip to the kind of sociopath who a) owns a hedge-fund and b) hates journalists for being pain-in-the-ass, stick-in-the-mud sticklers for "truth" and "facts" and other impediments to the care and maintenance of a functional reality-distortion field. These dickheads started buying up beloved news sites and converting them to spam-farms, filled with garbage "reviews" and other Google-pleasing, affiliate-fee-generating nonsense.
(These news-sites were vulnerable to acquisition in large part thanks to Google, whose dominance of ad-tech lets it cream 51 cents off every ad dollar and whose mobile OS monopoly lets it steal 30 cents off every in-app subscriber dollar):
https://www.eff.org/deeplinks/2023/04/saving-news-big-tech
Now, the spam on these sites didn't write itself. Much to the chagrin of the tech/finance bros who bought up Sports Illustrated and other venerable news sites, they still needed to pay actual human writers to produce plausible word-salads. This was a waste of money that could be better spent on reverse-engineering Google's ranking algorithm and getting pride-of-place on search results pages:
https://housefresh.com/david-vs-digital-goliaths/
That's where AI comes in. Spicy autocomplete absolutely can't replace journalists. The planet-destroying, next-word-guessing programs from OpenAI and its competitors are incorrigible liars that require so much "supervision" that they cost more than they save in a newsroom:
https://pluralistic.net/2024/04/29/what-part-of-no/#dont-you-understand
But while a chatbot can't produce truthful and informative articles, it can produce bullshit – at unimaginable scale. Chatbots are the workers that hedge-fund wreckers dream of: tireless, uncomplaining, compliant and obedient producers of nonsense on demand.
That's why the capital class is so insatiably horny for chatbots. Chatbots aren't going to write Hollywood movies, but studio bosses hyperventilated at the prospect of a "writer" that would accept your brilliant idea and diligently turn it into a movie. You prompt an LLM in exactly the same way a studio exec gives writers notes. The difference is that the LLM won't roll its eyes and make sarcastic remarks about your brainwaves like "ET, but starring a dog, with a love plot in the second act and a big car-chase at the end":
https://pluralistic.net/2023/10/01/how-the-writers-guild-sunk-ais-ship/
Similarly, chatbots are a dream come true for a hedge fundie who ends up running a beloved news site, only to have to fight with their own writers to get the profitable nonsense produced at a scale and velocity that will guarantee a high Google ranking and millions in "passive income" from affiliate links.
One of the premier profitable nonsense companies is Advon, which helped usher in an era in which sites from Forbes to Money to USA Today create semi-secret "review" sites that are stuffed full of badly researched top-ten lists for products from air purifiers to cat beds:
https://housefresh.com/how-google-decimated-housefresh/
Advon swears that it only uses living humans to produce nonsense, and not AI. This isn't just wildly implausible, it's also belied by easily uncovered evidence, like its own employees' LinkedIn profiles, which boast of using AI to create "content":
https://housefresh.com/wp-content/uploads/2024/05/Advon-AI-LinkedIn.jpg
It's not true. Advon uses AI to produce its nonsense, at scale. In an excellent, deeply reported piece for Futurism, Maggie Harrison Dupré brings proof that Advon replaced its miserable human nonsense-writers with tireless chatbots:
https://futurism.com/advon-ai-content
Dupré describes how Advon's ability to create botshit at scale contributed to the enshittification of clients from Yoga Journal to the LA Times, "Us Weekly" to the Miami Herald.
All of this is very timely, because this is the week that Google finally bestirred itself to commence downranking publishers who engage in "site reputation abuse" – creating these SEO-stuffed fake reviews with the help of third parties like Advon:
https://pluralistic.net/2024/05/03/keyword-swarming/#site-reputation-abuse
(Google's policy only forbids site reputation abuse with the help of third parties; if these publishers take their nonsense production in-house, Google may allow them to continue to dominate its search listings):
https://developers.google.com/search/blog/2024/03/core-update-spam-policies#site-reputation
There's a reason so many people believed Hardin's racist "Tragedy of the Commons" hoax. We have an intuitive understanding that commons are fragile. All it takes is one monster to start shitting in the well where the rest of us get our drinking water and we're all poisoned.
The financial markets love these monsters. Mark Zuckerberg's key insight was that he could make billions by assembling vast dossiers of compromising, sensitive personal information on half the world's population without their consent, but only if he kept his costs down by failing to safeguard that data and the systems for exploiting it. He's like a guy who figures out that if he accumulates enough oily rags, he can extract so much low-grade oil from them that he can grow rich, but only if he doesn't waste money on fire-suppression:
https://locusmag.com/2018/07/cory-doctorow-zucks-empire-of-oily-rags/
Now Zuckerberg and the wealthy, powerful monsters who seized control over our commons are getting a comeuppance. The weak countermeasures they created to maintain the minimum levels of quality to keep their platforms as viable, going concerns are being overwhelmed by AI. This was a totally foreseeable outcome: the history of the internet is a story of bad actors who upended the assumptions built into our security systems by automating their attacks, transforming an assault that wouldn't be economically viable into a global, high-speed crime wave:
https://pluralistic.net/2022/04/24/automation-is-magic/
But it is possible for a community to maintain a commons. This is something Hardin could have discovered by studying actual commons, instead of inventing imaginary histories in which commons turned tragic. As it happens, someone else did exactly that: Nobel Laureate Elinor Ostrom:
https://www.onthecommons.org/magazine/elinor-ostroms-8-principles-managing-commmons/
Ostrom described how commons can be wisely managed, over very long timescales, by self-governing communities. Part of her work concerns how users of a commons must have the ability to exclude bad actors from their shared resources.
When that breaks down, commons can fail – because there's always someone who thinks it's fine to shit in the well rather than walk 100 yards to the outhouse.
Enshittification is the process by which control over the internet moved from self-governance by members of the commons to acts of wanton destruction committed by despicable, greedy assholes who shit in the well over and over again.
It's not just the spammers who take advantage of Google's lazy incompetence, either. Take "copyleft trolls," who post images using outdated Creative Commons licenses that allow them to terminate the CC license if a user makes minor errors in attributing the images they use:
https://pluralistic.net/2022/01/24/a-bug-in-early-creative-commons-licenses-has-enabled-a-new-breed-of-superpredator/
The first copyleft trolls were individuals, but these days, the racket is dominated by a company called Pixsy, which pretends to be a "rights protection" agency that helps photographers track down copyright infringers. In reality, the company is committed to helping copyleft trolls entrap innocent Creative Commons users into paying hundreds or even thousands of dollars to use images that are licensed for free use. Just as Advon upends the economics of spam and deception through automation, Pixsy has figured out how to send legal threats at scale, robolawyering demand letters that aren't signed by lawyers; the company refuses to say whether any lawyer ever reviews these threats:
https://pluralistic.net/2022/02/13/an-open-letter-to-pixsy-ceo-kain-jones-who-keeps-sending-me-legal-threats/
This is shitting in the well, at scale. It's an online WMD, designed to wipe out the commons. Creative Commons has allowed millions of creators to produce a commons with billions of works in it, and Pixsy exploits a minor error in the early versions of CC licenses to indiscriminately manufacture legal land-mines, wantonly blowing off innocent commons-users' legs and laughing all the way to the bank:
https://pluralistic.net/2023/04/02/commafuckers-versus-the-commons/
We can have an online commons, but only if it's run by and for its users. Google has shown us that any "benevolent dictator" who amasses power in the name of defending the open internet will eventually grow too big to care, and will allow our commons to be demolished by well-shitters:
https://pluralistic.net/2024/04/04/teach-me-how-to-shruggie/#kagi
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/05/09/shitting-in-the-well/#advon
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
--
Catherine Poh Huay Tan (modified) https://www.flickr.com/photos/68166820@N08/49729911222/
Laia Balagueró (modified) https://www.flickr.com/photos/lbalaguero/6551235503/
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/
#pluralistic#pixsy#wmds#automation#ai#botshit#force multipliers#weapons of mass destruction#commons#shitting in the drinking water#ostrom#elinor ostrom#sports illustrated#slop#advon#google#monopoly#site reputation abuse#enshittification#Maggie Harrison Dupré#futurism
320 notes
Text
my hot take is that she works like a digimon and can and will alter her form at will based on data she's been exposed to, which is a lot after frontiers and that's fun
#sth#sonic#sage#sage the ai#sage sonic frontiers#purp doot#i wanted to draw smth omnimon inspired too to drive the point across but i am so sleepy#but anyway i have weaponized the child and im feeling based for that
524 notes
Text
There is one brain cell in the Base when Zo is gone and Beta has custody of it
#my art#horizon forbidden west#art#kotallo#aloy#illustration#kotaloy#hfw kotallo#hfw aloy#hfw alva#things that go boom#listen i actually think Kotallo's arm could supercharge that Zenith weapon and it would be a really rad combo move#but do i think Aloy and Kotallo could do it without blowing something up? absolutely not#and Alva has it recorded on her Focus for all time now#somewhere GAIA is getting a migraine which is impressive for an AI
337 notes
Text
"Dherkari" (0002)
(More of The Warrior Prince Series)
0001
#ai man#ai generated#ai art community#ai artwork#heroic fantasy#gay heroic fantasy#gay ai art#homo art#male physique#male form#male figure#male art#ai gay#bald#bald man#inked#tatted#jewelry#weapons#muscular#bearded man#armor#fashion illustration#art direction#action hero#heroic#fantasy art#black male body#black male beauty#warrior prince
84 notes
Text
Emerie Week | Day 2 - Soul of a Warrior
@emerieweekofficial
#our badass queen with a big ass weapon#emerieweek2024#acotar emerie#emerie of illyria#emerie acosf#acotar fanart#no ai art#Ella art✨
68 notes
Text
hey so you guys ever think about how master chief nearly inserted the index into the core and pulled the plug on all sentient life in the entire galaxy? you ever think about how he thought it was the right thing right up until cortana told him it wasn't? you ever think about how he stood there realising the fact he nearly killed all the people he would have died to protect? you ever think? you ever think??? you ever think????????
#this is a man so used to taking orders#sure he knows how to disobey them#we see that happen several times#but he never really gets out of that mindset of being a weapon#of being a machine#of not being the one to think#he left without one ai to depend upon and came back with another#only that the new one didn't have such good intentions#halo#halo combat evolved#halo ce#master chief#john-117#you'll have to forgive me if this is out of character#because i have very niche feelings on john#he's such an emotional character#'spartans don't cry. spartans don't feel'#except he feels everything. all the time. on behalf of everybody in the entire world#he fights and fights and feels everybody's agonies a thousand times over. then he feels his own
71 notes
Text
By o_to_to
#nestedneons#cyberpunk#cyberpunk art#cyberpunk aesthetic#art#cyberpunk artist#cyberwave#megacity#futuristic city#scifi#cyberpunk neon city#neon city#neoncore#scifi art#scifi world#scifi aesthetic#scifi weapon#ai art#ai artwork#ai artist#aiartcommunity#thisisaiart#stable diffusion#urbex#urban decay#dystopian#dystopic future#dystopia
140 notes
Text
[ID: The “Shout out to Women’s Day” meme, edited to read, “Shout out to ai girls in media fr 🤞🏾 / Gotta be one of my favorite genders.” A collage of several characters has been edited on top. From left to right, top to bottom, they include: The Weapon from Halo, Lyla from Spider-Man: Across the Spider-Verse, Lyla from Spider-Man 2099, GLaDOS from Portal, Aiba from AI: The Somnium Files, Cortana from Halo and Tama from AI: Nirvana Initiative. End ID]
#thought about the weapon too hard and realized how predictable i am#do i tag everyone. fuck it why not#halo#cortana#the weapon#ai: the somnium files#ai: nirvana initiative#aiba#tama#portal#glados#spider-man: across the spider-verse#spider-man 2099#lyla
356 notes
Text
Pascal Najadi, son of WEF co-founder Hussain Najadi.
#Pascal Najadi#Hussain Najadi#Geneva#Switzerland#Democide#WHO#WEF#UN#Gavi#klaus schwab#santa klaus#george soros#Swiss#Big Pharma#Campus biotech#AI#bio-weapon#Nano Lipids#bill gates#trudeau#injection#COVID
148 notes
Text
I have only used one of the new dungeon weapons but it goes hard
#also i really like the Vesper AI#she must be protected#destiny 2#destiny the game#the final shape#destiny#destiny revenant#episode revenant#vespers host#destiny dungeon#destiny weapon
33 notes
Text
Unironically think that each of the bros (+April) doesn't actually get how impressive their feats really are, so they just do what they do, and on the off chance someone comments on those feats they all react like:
#rottmnt#tmnt#rise of the teenage mutant ninja turtles#no but really#I love thinking that they’re actually way more prideful about the stuff that does not even hold a candle to their other feats#like yeah Mikey can open a hole in the space time continuum but that’s nothing have you TRIED his manicotti??#yeah Leo has outsmarted multiple incredibly intelligent and capable people AND knows how to rewire AI but eh did you hear his one liners?#donnie accidentally made regular animatronics sentient but that was an oopsie check out his super cool hammer instead#raph was able to fake his own death to save the entirety of New York and then be the one to bring about his brothers’ inner powers-#but forget about that did you know he can punch like a BOSS?#and April can survive and THRIVE against a demonic suit of armor alongside literal weapons of destruction as a regular human-#but her crane license is where it’s really at#(not to mention all the other secondary talents and skills these kids all just sorta have like - they are VERY CAPABLE)#honorable mentions in this regard go moments like#donnie ordering around an entire legion of woodland critters to create a woodsy tech paradise#or Leo being able to avoid an entire crowd’s blind spots in plain sight#and also being able to hold a pose without moving a millimeter while covered in paint and being transported no I’m NOT OVER THAT#Mikey casually being ridiculously strong and also knowledgeable enough about building to help Donnie make the puppy paradise for Todd#Raph literally led an entire group of hardened criminals like that entire episode was just#basically they’re all so capable????#and at the same time prone to wiping out at the most inopportune of moments#love them sm
270 notes
Text
Hypothetical AI election disinformation risks vs real AI harms
I'm on tour with my new novel The Bezzle! Catch me TONIGHT (Feb 27) in Portland at Powell's. Then, onto Phoenix (Changing Hands, Feb 29), Tucson (Mar 9-12), and more!
You can barely turn around these days without encountering a think-piece warning of the impending risk of AI disinformation in the coming elections. But a recent episode of the This Machine Kills podcast reminds us that these are hypothetical risks, and there is no shortage of real AI harms:
https://soundcloud.com/thismachinekillspod/311-selling-pickaxes-for-the-ai-gold-rush
The algorithmic decision-making systems that increasingly run the back-ends to our lives are really, truly very bad at doing their jobs, and worse, these systems constitute a form of "empiricism-washing": if the computer says it's true, it must be true. There's no such thing as racist math, you SJW snowflake!
https://slate.com/news-and-politics/2019/02/aoc-algorithms-racist-bias.html
Nearly 1,000 British postmasters were wrongly convicted of fraud by Horizon, the faulty AI fraud-hunting system that Fujitsu provided to the UK Post Office. They had their lives ruined by this faulty AI, many went to prison, and at least four of the AI's victims killed themselves:
https://en.wikipedia.org/wiki/British_Post_Office_scandal
Tenants across America have seen their rents skyrocket thanks to Realpage's landlord price-fixing algorithm, which deployed the time-honored defense: "It's not a crime if we commit it with an app":
https://www.propublica.org/article/doj-backs-tenants-price-fixing-case-big-landlords-real-estate-tech
Housing, you'll recall, is pretty foundational in the human hierarchy of needs. Losing your home – or being forced to choose between paying rent or buying groceries or gas for your car or clothes for your kid – is a non-hypothetical, widespread, urgent problem that can be traced straight to AI.
Then there's predictive policing: cities across America and the world have bought systems that purport to tell the cops where to look for crime. Of course, these systems are trained on policing data from forces that are seeking to correct racial bias in their practices by using an algorithm to create "fairness." You feed this algorithm a data-set of where the police had detected crime in previous years, and it predicts where you'll find crime in the years to come.
But you only find crime where you look for it. If the cops only ever stop-and-frisk Black and brown kids, or pull over Black and brown drivers, then every knife, baggie or gun they find in someone's trunk or pockets will be found in a Black or brown person's trunk or pocket. A predictive policing algorithm will naively ingest this data and confidently assert that future crimes can be foiled by looking for more Black and brown people and searching them and pulling them over.
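To make that feedback loop concrete, here's a minimal, hypothetical Python sketch (the neighborhoods, rates and patrol counts are invented, and this is not any vendor's actual model): two neighborhoods have identical underlying crime rates, but patrols are allocated in proportion to last year's recorded incidents, so the records keep confirming the initial bias.

```python
# Minimal sketch of the predictive-policing feedback loop described above.
# All numbers are hypothetical; this is not any real vendor's system.
import random

random.seed(0)

TRUE_CRIME_RATE = 0.05          # both neighborhoods have the same underlying rate
TOTAL_PATROLS = 1000            # patrols allocated each year
recorded = {"A": 60, "B": 40}   # neighborhood A starts out over-policed

for year in range(1, 6):
    total = max(sum(recorded.values()), 1)
    # "Predict" next year's crime from last year's records and patrol accordingly.
    patrols = {hood: round(TOTAL_PATROLS * n / total) for hood, n in recorded.items()}
    # Crime only gets recorded where officers are sent to look for it,
    # so the records mirror the patrol allocation, not the underlying rate.
    recorded = {
        hood: sum(1 for _ in range(n) if random.random() < TRUE_CRIME_RATE)
        for hood, n in patrols.items()
    }
    print(f"year {year}: patrols={patrols} recorded={recorded}")
```

Even though the true rates are identical, the allocation never drifts back toward parity, because the only signal the model ever sees is its own patrol pattern.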
Obviously, this is bad for Black and brown people in low-income neighborhoods, whose baseline risk of an encounter with a cop turning violent or even lethal is already high. But it's also bad for affluent people in affluent neighborhoods – because they are underpoliced as a result of these algorithmic biases. For example, domestic abuse that occurs in fully detached single-family homes is systematically underrepresented in crime data, because the majority of domestic abuse calls originate with neighbors who can hear the abuse take place through a shared wall.
But the majority of algorithmic harms are inflicted on poor, racialized and/or working class people. Even if you escape a predictive policing algorithm, a facial recognition algorithm may wrongly accuse you of a crime, and even if you were far away from the site of the crime, the cops will still arrest you, because computers don't lie:
https://www.cbsnews.com/sacramento/news/texas-macys-sunglass-hut-facial-recognition-software-wrongful-arrest-sacramento-alibi/
Trying to get a low-waged service job? Be prepared for endless, nonsensical AI "personality tests" that make Scientology look like NASA:
https://futurism.com/mandatory-ai-hiring-tests
Service workers' schedules are at the mercy of shift-allocation algorithms that assign them hours that ensure that they fall just short of qualifying for health and other benefits. These algorithms push workers into "clopening" – where you close the store after midnight and then open it again the next morning before 5AM. And if you try to unionize, another algorithm – that spies on you and your fellow workers' social media activity – targets you for reprisals and your store for closure.
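As a toy illustration of the hour-capping logic described above (the 30-hour threshold and the demand figures are assumptions, not any retailer's actual policy), a scheduler only needs to clamp each worker's weekly hours just under the eligibility line:

```python
# Hypothetical sketch of a benefits-dodging shift allocator; the threshold and
# demand figures are invented for illustration.
BENEFITS_THRESHOLD = 30.0   # weekly hours at which benefits would kick in (assumed)
SAFETY_MARGIN = 0.5         # keep everyone safely below the line

def allocate_hours(demand_hours: float, workers: list[str]) -> dict[str, float]:
    """Spread the week's demand across workers, capping each just short of the threshold."""
    cap = BENEFITS_THRESHOLD - SAFETY_MARGIN
    schedule, remaining = {}, demand_hours
    for w in workers:
        hours = min(cap, remaining)
        schedule[w] = hours
        remaining -= hours
    return schedule

print(allocate_hours(100, ["ana", "bo", "chris", "dee"]))
# {'ana': 29.5, 'bo': 29.5, 'chris': 29.5, 'dee': 11.5}
```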
If you're driving an Amazon delivery van, an algorithm watches your eyeballs and tells your boss that you're a bad driver if it doesn't like what it sees. If you're working in an Amazon warehouse, an algorithm decides if you've taken too many pee-breaks and automatically dings you:
https://pluralistic.net/2022/04/17/revenge-of-the-chickenized-reverse-centaurs/
If this disgusts you and you're hoping to use your ballot to elect lawmakers who will take up your cause, an algorithm stands in your way again. "AI" tools for purging voter rolls are especially harmful to racialized people – for example, they assume that two "Juan Gomez"es with a shared birthday in two different states must be the same person and remove one or both from the voter rolls:
https://www.cbsnews.com/news/eligible-voters-swept-up-conservative-activists-purge-voter-rolls/
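Here's a small, hypothetical sketch of why that matching rule misfires (all records below are invented): a naive join on full name plus date of birth flags two different people in two different states as the "same" voter, and with common names spread across millions of records, such collisions are statistically guaranteed.

```python
# Hypothetical sketch of the naive "same name + same birthday = same voter" rule.
# All names and records are invented for illustration.
from itertools import product

def naive_purge_matches(rolls_a, rolls_b):
    """Flag every cross-state pair that shares a full name and date of birth."""
    return [
        (a, b)
        for a, b in product(rolls_a, rolls_b)
        if a["name"] == b["name"] and a["dob"] == b["dob"]
    ]

state_a = [
    {"name": "Juan Gomez", "dob": "1980-03-14", "addr": "Phoenix, AZ"},
    {"name": "Juan Gomez", "dob": "1975-07-02", "addr": "Tucson, AZ"},
]
state_b = [
    {"name": "Juan Gomez", "dob": "1980-03-14", "addr": "El Paso, TX"},  # a different person
]

for a, b in naive_purge_matches(state_a, state_b):
    print(f"flagged for removal: {a['addr']} <-> {b['addr']}")
```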
Hoping to get a solid education, the sort that will keep you out of AI-supervised, precarious, low-waged work? Sorry, kiddo: the ed-tech system is riddled with algorithms. There's the grifty "remote invigilation" industry that watches you take tests via webcam and accuses you of cheating if your facial expressions fail its high-tech phrenology standards:
https://pluralistic.net/2022/02/16/unauthorized-paper/#cheating-anticheat
All of these are non-hypothetical, real risks from AI. The AI industry has proven itself incredibly adept at deflecting interest from real harms to hypothetical ones, like the "risk" that the spicy autocomplete will become conscious and take over the world in order to convert us all to paperclips:
https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space
Whenever you hear AI bosses talking about how seriously they're taking a hypothetical risk, that's the moment when you should check in on whether they're doing anything about all these longstanding, real risks. And even as AI bosses promise to fight hypothetical election disinformation, they continue to downplay or ignore the non-hypothetical, here-and-now harms of AI.
There's something unseemly – and even perverse – about worrying so much about AI and election disinformation. It plays into the narrative that kicked off in earnest in 2016, that the reason the electorate votes for manifestly unqualified candidates who run on a platform of bald-faced lies is that they are gullible and easily led astray.
But there's another explanation: the reason people accept conspiratorial accounts of how our institutions are run is because the institutions that are supposed to be defending us are corrupt and captured by actual conspiracies:
https://memex.craphound.com/2019/09/21/republic-of-lies-the-rise-of-conspiratorial-thinking-and-the-actual-conspiracies-that-fuel-it/
The party line on conspiratorial accounts is that these institutions are good, actually. Think of the rebuttal offered to anti-vaxxers who claimed that pharma giants were run by murderous sociopath billionaires who were in league with their regulators to kill us for a buck: "no, I think you'll find pharma companies are great and superbly regulated":
https://pluralistic.net/2023/09/05/not-that-naomi/#if-the-naomi-be-klein-youre-doing-just-fine
Institutions are profoundly important to a high-tech society. No one is capable of assessing all the life-or-death choices we make every day, from whether to trust the firmware in your car's anti-lock brakes, the alloys used in the structural members of your home, or the food-safety standards for the meal you're about to eat. We must rely on well-regulated experts to make these calls for us, and when the institutions fail us, we are thrown into a state of epistemological chaos. We must make decisions about whether to trust these technological systems, but we can't make informed choices because the one thing we're sure of is that our institutions aren't trustworthy.
Ironically, the long list of AI harms that we live with every day are the most important contributor to disinformation campaigns. It's these harms that provide the evidence for belief in conspiratorial accounts of the world, because each one is proof that the system can't be trusted. The election disinformation discourse focuses on the lies told – and not why those lies are credible.
That's because the subtext of election disinformation concerns is usually that the electorate is credulous, fools waiting to be suckered in. By refusing to contemplate the institutional failures that sit upstream of conspiracism, we can smugly locate the blame with the peddlers of lies and assume the mantle of paternalistic protectors of the easily gulled electorate.
But the group of people who are demonstrably being tricked by AI is the people who buy the horrifically flawed AI-based algorithmic systems and put them into use despite their manifest failures.
As I've written many times, "we're nowhere near a place where bots can steal your job, but we're certainly at the point where your boss can be suckered into firing you and replacing you with a bot that fails at doing your job":
https://pluralistic.net/2024/01/15/passive-income-brainworms/#four-hour-work-week
The most visible victims of AI disinformation are the people who are putting AI in charge of the life-chances of millions of the rest of us. Tackle that AI disinformation and its harms, and we'll make conspiratorial claims about our institutions being corrupt far less credible.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/02/27/ai-conspiracies/#epistemological-collapse
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic#ai#disinformation#algorithmic bias#elections#election disinformation#conspiratorialism#paternalism#this machine kills#Horizon#the rents too damned high#weaponized shelter#predictive policing#fr#facial recognition#labor#union busting#union avoidance#standardized testing#hiring#employment#remote invigilation
145 notes
Text
CYBERWEAPONIC.
— a gender that feels like a digital or robotic weapon. this gender may also have ties to sentient AI used as a weapon, but not necessarily. there is no wrong way to identify with this gender, as the meaning is up to interpretation.
[id: a horizontally striped flag with dark desaturated purple at the top followed by a lighter desaturated purple, a gray-ish purple, red, gray-ish purple, light desaturated purple, and dark desaturated purple. end id.]
#.gender#mogai coining#mogai blog#mogai label#mogai term#mogai gender#mogai friendly#mogai flag#mogai#mogai safe#mogai community#liom flag#liom coining#liom gender#liom term#liom community#liom#liom label#liom safe#weapongender#weapon gender#digigender#digital gender#digitalgender#cybergender#cyber gender#robotgender#robot gender#ai gender#aigender
48 notes
Note
Zionist.
saying don't fall for scams does not make someone a zionist. tumblr asks are NOT actual calls for aid!
i was just going to delete this ask like i do all scam asks, but i figured i'd post it just in case other people are getting similar things for... not being gullible? for trying to stop other people from being scammed and sending their money to scammers instead of actual palestinians?
many people in palestine obviously need aid. an obvious bot sending thousands of messages to thousands of people asking for "aid" will not help those people. they aren't from actual victims. they're from random people who are weaponizing the kindness of strangers to make a buck. falling for it helps absolutely no one. critical thinking is even MORE important in a time like this, stop falling for this obvious shit! they're just like the ai porn bots. they're used by the same exact people for the same exact reasons, getting money off those who are gullible. they're scumbags who are weaponizing people's empathy to make a buck off a genocide. stop. falling. for. it.
they're trying to take advantage of you. they're assuming you're too stupid to think critically about who you think you're helping. stop proving them right.
there are thousands of actual ways to donate to those in need that aren't tumblr ask scams!
#this is an extremely frustrating thing that is becomming much too common on tumblr#those asks calling for “aid” arent real people. they have never been real people. theyre bots by scammers.#this isnt a new strategy either! scammers have been doing this for years!#its only recently theyve been weaponizing peoples empathy for palenstine in these godawful asks#be kind recklessly but dont be an idiot#why do you think all these asks have variations of the EXACT same words? why do you think they all talk like chat gpt?#there are hundreds of ACTUAL ways to send aid to ACTUAL genocide victims isntead of some random guy whos using chat gpt to spam people#these scammers rely on you not questioning anything. they rely on you WANTING to fall for it#donating to some random ass dude in luxemburg or whatever helps no one#for the love of all that is holy stop falling for these scams. theres plenty of confirmed ways to send aid!!!!!!!!!#free palestine#free actual palenstine and not random shitstains trying to take advantage of war and peoples empathy#the 'i hate ai croud' when the ai pretends to be from palenstine: real shit?!#critical thinking is a blessing for both you and the actual people you could send aid to :)))))))))))#this whole situation is so frustrating. so many people keep falling for these obvious scam asks#i can only hope someone thinks twice about sending money to some random TUMBLR ASK after seeing this#please send your aid to actual palestinians and not scammers
30 notes