#there are so many positive uses for ai
no i don't want to use your ai assistant. no i don't want your ai search results. no i don't want your ai summary of reviews. no i don't want your ai feature in my social media search bar (???). no i don't want ai to do my work for me in adobe. no i don't want ai to write my paper. no i don't want ai to make my art. no i don't want ai to edit my pictures. no i don't want ai to learn my shopping habits. no i don't want ai to analyze my data. i don't want it i don't want it i don't want it i don't fucking want it i am going to go feral and eat my own teeth stop itttt
115K notes · View notes
vulturevanity · 2 years
Text
Thinking about ScarVi's overarching theme being The Truth Shall Set You Free. I am so normal about this
#spoilers in tags#pokémon#pokemon sv#Arven initially being closed off and not trusting you because he was neglected by his parent and learned to only rely on himself#realizing very early on that being honest is the best chance he has at healing his Mabostiff#but still not opening up about his bigger issues until it was absolutely necessary which pushes the story forward into endgame#Penny hiding herself behind Cassiopeia to protect herself from bullying#getting an entire group of outcast kids into a team to scare their bullies off#only for the plan to backfire splendidly when they're mistaken for the bullies#and Clavell in a rare display of clarity from an adult in a position of authority#rather than simply punishing them for it opted to team up with us to understand what was really going on#and that made him much more lenient in punishing them (because they did still cause trouble!)#the truth of Turo/Sada spiraling into their work and refusing to see the damage it was doing to EVERYTHING including themselves#to the point that they DIED#and the AI they built explicitly for the purpose of continuing their work ran the calculations and realized said work was Bad#and that truth made it go against its own programming which is what kickstarts the main story to begin with#and may I contrast all that with NEMONA whose sheer energy and eagerness is 1000% GENUINE#I've seen so many people say they thought she was going to eventually be angry for losing to us all the time#but the whole point of her character is that she's free to do whatever the fuck she wants and she's pretty happy with her life#she has no reason to fake happiness. she's just like that. she is free from the beginning and she'll always be free and that's the point#in a story where no one else is!!! everyone else is bound by some complication or another that holds them back from being honest#i changed my mind i'm insane about this. no longer normal#pokemon sv spoilers#babbles
157 notes · View notes
powerfulkicks · 3 months
Text
man i hate the current state of ai so much bc it's totally poisoned the well for when actual good and valuable uses of ai are developed ethically and sustainably....
like ai sucks so bad bc of capitalism. they want to sell you on a product that does EVERYTHING. this requires huge amounts of data, so they just scrape the internet and feed everything in without checking if it's accurate or biased or anything. bc it's cheaper and easier.
ai COULD be created with more limited data sets to solve smaller, more specific problems. it would be more useful than trying to shove the entire internet into an LLM and then trying to sell it as a multi tool that can do anything you want kinda poorly.
even in a post-capitalist world there are applications for ai. for example: resource management. data about how many resources specific areas typically use could be collected and fed into a model to predict how many resources should be allocated to a particular area.
this is something that humans would need to be doing and something that we already do, but creating a model based on the data would reduce the amount of time humans need to spend on this important task and reduce the amount of human error.
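(for the curious: the allocation idea above can be sketched in a few lines. this is only a toy illustration — the district names and usage figures are invented, and a real allocator would use far richer features than a straight trend line)

```python
# Minimal sketch: forecast next period's resource demand for a district
# from its own small historical record, via a least-squares trend line.
# District names and usage numbers below are entirely made up.
from statistics import mean

def forecast_next(usage: list[float]) -> float:
    """Fit y = a + b*t to the history (needs >= 2 points) and
    extrapolate one step ahead."""
    t = list(range(len(usage)))
    t_bar, y_bar = mean(t), mean(usage)
    b = sum((ti - t_bar) * (yi - y_bar) for ti, yi in zip(t, usage)) \
        / sum((ti - t_bar) ** 2 for ti in t)
    a = y_bar - b * t_bar
    return a + b * len(usage)

# Hypothetical monthly water usage (megalitres) for two districts.
districts = {
    "riverside": [110.0, 112.0, 115.0, 117.0, 120.0],  # steadily growing
    "hillcrest": [80.0, 79.0, 81.0, 80.0, 80.0],       # flat
}

for name, history in districts.items():
    print(f"{name}: allocate ~{forecast_next(history):.1f} ML next month")
```

the point isn't the model (any statistics textbook has better ones) — it's that a small, auditable model trained on narrow local data can take over a chunk of routine planning work without needing the whole internet shoved into it.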
but bc ai is so shitty now anyone who just announces "hey we created an ai to do this!" will be immediately met with distrust and anger, so any ai model that could potentially be helpful will have an uphill battle bc the ecosystem has just been ruined by all the bullshit chatgpt is selling
7 notes · View notes
bangcakes · 2 years
Text
.
4 notes · View notes
orcboxer · 3 months
Text
one of those things you really gotta learn is that it's insanely easy to get people to get mad at anything if you just phrase it the right way. slap the word "woke" on anything you want conservatives to hate. call something "extremism" or "radical" to get a centrist to fear it. say that a particular take "comes from a position of privilege" to get leftists to denounce it (that's right, even us leftists are susceptible to propaganda that uses leftist language). all these are simplistic examples of course, but it's all to say that certain terms, slogans, and phrases just kind of turn off people's critical thinking, especially ones with negative connotation. there are so many words that are just shorthand for "bad." once a term reaches buzzword status, it becomes practically useless.
it goes for general attitudes too. "this piece of news is a sign that the world is getting worse" is a shockingly easy idea to sell, even when the "piece of news" in question is completely fabricated. I'll often see leftists uncritically sharing right wing propaganda that rides on the back of the "humans bad, nature good" cliche, or the "US education system bad" cliche, or even the current "AI bad" cliche. most of the details of a given post will go entirely unquestioned as long as they support whatever attitude is most popular right now. and none of us is immune to this.
(the funny thing is that I'm kind of playing with fire here even making this post. folks are so used to just reacting to shit that I have no way of predicting which buzzword I included here will trigger a negative association in someone's mind and convince them I'm taking some random antagonistic stance on a topic that they've been really fired up about lately.)
6K notes · View notes
euqinim0dart · 7 months
Text
Some positivity in these turbulent AI times
*This does not minimize the crisis at hand, but is aimed at easing any anxieties.
With every social media platform selling our data to AI companies now, there is very little way to avoid being scraped. The sad thing is many of us still NEED social media to advertise ourselves and get seen by clients. Still, I can't help but feel that we as artists are not at risk of losing our livelihoods, and here is why:
Just because your data is available does not mean that AI companies will/want to use it. Your work may never end up being scraped at all.
The possibility that someone who uses AI art prompts could replace you (if your work is scraped) is very low. Art Directors and clients HAVE to work with people, and the person using AI art cannot back up what a machine made. Their final product for a client will never be substantial, since AI prompts can't produce consistent results across uses and requested edits will be impossible.
AI creators will NEVER be able to make a move unless we artists make a move first. They will always be behind in the industry.
AI creators lack the fundamental skills of art and therefore cannot detect when something looks off in a composition. Many professional artists like me get hired repeatedly for a reason! WE as artists know what we're doing.
The art community is close-knit and can fund itself. Look at furry commissions, Patreon, art conventions, Hollywood. Real art will always be able to make money and find an audience because it's how we communicate as a species.
AI creators lack the passion and ambition to make a career out of AI prompts. Not that they couldn't start drawing at any time, but these tend to be the people who don't enjoy creating art to begin with.
There is no story or personal experience that can be shared about AI prompts so paying customers will lose interest quickly.
Art is needed to help advance society along, history says so. To do that, companies will need to hire artists (music, architecture, photography, design, etc). The best way for us artists to keep fighting for our voice to be heard right now is staying visible. Do not hide or give in! That is what they want. Continue posting online and/or in person and sharing your art with the world. It takes a community and we need you!
5K notes · View notes
queerautism · 16 days
Text
It feels kinda wild I've seen no one mention the huge controversy NaNoWriMo was in about 7 months ago (Link to a reddit write up; there's also a google doc on it) in this whole recent AI discourse. The main concerns people had were related to the 'young writers' forum, a moderator being an alleged predator, and general moderation practices being horrible and not taking things like potential grooming seriously.
About 5 months ago, after all of that went down, MLs or 'Municipal Liaisons', their local volunteer organisers for different regions of the world, were offered a horrible new agreement that basically tried to shut them up about the issues they'd been speaking up about. Some of these issues included racism and ableism that the organisation offered zero support with.
When there was pushback and MLs kept sharing what was going on, NaNoWriMo removed ALL OF THEM as MLs and sent in a new, even more strict agreement that they would have to sign to be allowed back in their volunteer position.
This agreement included ways of trying to restrict their speech even further, from not being able to share 'official communications' to basically not being allowed to be in discord servers to talk to other MLs in places not controlled by NaNoWriMo. You also had to give lots of personal information and submit to a criminal background check, despite still explicitly leaving their local regions without support and making it very clear everyone was attending the OFFICIAL in person events 'at their own risk'.
Many MLs refused to sign and return. Many others didn't even know this was happening, because they did not get any of the emails sent for some reason. NaNoWriMo basically ignored all their concerns and pushed forward with this.
Many local regions don't exist anymore. I don't know who they have organising the rest of them, but it's likely spineless people that just fell in line, people who just care about the power, or new people who don't understand what's going on with this organisation yet. Either way, this year is absolutely going to be a mess.
Many of the great former MLs just went on to organise their writing communities outside of the official organisation. NaNoWriMo does not own the concept of writing a novel in a month.
R/nanowrimo is an independent subreddit that has been very critical of the organisation since this all happened, and people openly recommend alternatives for word tracking, community, etc there, so I highly recommend checking it out.
I've seen Trackbear recommended a lot for an alternative to the word tracking / challenge, and will probably be using it myself this November.
Anyway, just wanted to share because a lot of people haven't heard about this, and I think it makes it extremely clear that the arguments about "classism and ableism" @nanowrimo is using right now in defense of AI are not vaguely misguided, but just clear bullshit. They've never given a single shit about any of that stuff.
1K notes · View notes
sweet-as-an-angel · 9 months
Text
Virgin! Simon "Ghost" Riley
Warnings: 18+, Smut, Inexperienced! Simon, Virgin! Simon, Riding, Unprotected Sex, The Mask Stays On, No Pronouns Used For Reader Except 'You'.
Virgin! Simon who can hardly believe his luck as he watches and feels you ride him, your walls tight as you bounce on his cock, calling him your 'big guy'. His hands are on your hips, his own slamming up into yours in a rhythm you'd set for him.
He's sloppy. Unaccustomed to the euphoric stiffening of the knot in his stomach, pulling ever tighter with every slap of your ass against his thighs. Sure, he's had many an orgasm before, but never at the hands of another. Never so strong; a force of nature in its own right. He's breathing heavily - panting; you swear you can see him drooling from the corner of his mouth. Something viscous is filling you now. Not the full force of his seed, but a precursor to it. A warning.
The mask stays on (of course) during this exchange, but you can see the way he fights to keep his eyes open, to keep himself from betraying every sensibility and throwing his head back, screwing his eyes shut as his length is nestled inside you, a thick bump forming in your stomach with every thrust. Your hand slips down your front and you press it. Simon jolts, moaning between gritted teeth as you press, hard, harder still, forcing his cock into an even tighter position.
He's arching into you, the sensation of his veins and his bulbous tip throbbing against your insides enough to let you know that he's close.
You coax him. Goad him. "Y'gonna cum just for me, big boy? Gonna fuck me 'til I can't walk straight?"
He can't talk. Can't even think. For the first time in his life, he's fucked dumb. You can see it in the way his eyes roll back into his skull when you clench around him. Suffocate him. His hips stutter. His cock nudges something deep within you. You gasp.
It only took your calling him your "Good boy," to have him unravel before your eyes. He can't contain the strangled growl that is exorcised from him as he cums, deep and hard, thick, hot ropes of semen filling you. You can feel it, as if painting your insides white, bathing you in an unfettered warmth. His hands are cast-iron on your hips, pulling you down onto him as if to stop you from pulling away, to prevent even a drop of his seed from escaping you. He digs his heels into the bench beneath you, grounding himself.
And, as your orgasm sparks and ripples through you, you hunch over Simon, hands gripping his shoulders, squeezing him. You moan, long and loud, milking Simon for all he's worth. And now, between the sheets of his post-orgasm haze, he watches you, the ring of light above your head from the luminescent bulb of the changing room painting you as a saint in his eyes.
He's never going to let what you have - what you've shown him - go. No matter the cost. Not when this feeling of completion is steadfast within him, electrifying every fibre in his body, all the way down to his bones.
Reblog for more content like this! It helps creators like myself tremendously and it is greatly appreciated :-)
6K notes · View notes
dcxdpdabbles · 5 months
Text
DCxDP fanfic idea: Corporate Rivals
Bruce is really excited to hire a boy genius from a small town. He found him by accident while scrolling through past winners of creative writing competitions on various school sites. He had originally wanted ideas for his own contest, the annual Wayne Young Writers Scholarship, when he stumbled upon Amity Park's Youth Authors.
Daniel Fenton's science fiction had won second place, and Bruce thinks he only lost because the judges didn't realize all the science behind the gadgets his characters used was real. Real, well explained, and properly researched. Daniel obviously knew his stuff and knew it well.
He had reached out to Daniel with a science scholarship opportunity, wanting to see what he would come up with. He gave him a basic assignment asking him to fulfill a prompt "Software or Hardware development for disabled" in either theory or model. If he created something worthwhile, Bruce would send him ten grand.
Daniel did not disappoint, not only writing the theory paper but also sending back a prototype of a pocket ASL translator. It would be an app on a phone with an AI that watches the person signing through the camera and says out loud what they are signing. It had a few bugs here and there, but for a high schooler, it was a very impressive accomplishment.
Bruce found himself sponsoring the boy for early high school graduation. The young Fenton boy was a genius just like his parents, but he lacked proper motivation. Bruce suspected it was due to his school not challenging him enough, much like Tim.
When Daniel got his diploma, Bruce offered a free ride to Gotham University on the condition that he would become an employee at WE. Daniel agreed on the condition that it was as a proper employee and not an unpaid intern. A little daring for a kid already getting an amazing deal, but Bruce liked his moxie and agreed.
Daniel Fenton was to be a worker in the R&D department of WE Tech in one week.
He couldn't wait to introduce him to Tim. Two young geniuses would get along swimmingly with their shared brain prowess!
______________________________________
Tim hated the new guy.
They were the same age, but everyone acted like he was amazing for finishing high school and starting university while also being a top WE researcher and developer at such a young age.
Oh, Tim was CEO, but as many people whispered, he hadn't graduated high school or gotten a GED, so the only reason he got to be CEO was nepotism. Danny, on the other hand, got his position through hard work.
Which was ironic, seeing as the company had never done so well before Tim came on board. Their sales, PR, and production numbers all tripled because of him. Danny, on the other hand, was a sloth with little to no ambition. He didn't even work well with others! He mostly did solo projects, and everyone seemed fine with that since geniuses "need their own space."
Tim had been networking since he was three years old, and failure to do so had always reflected badly on him and his company. He spent his entire life carefully choosing his words and his actions. Even his appearance, what he wore, his hairstyle, even the hand gestures he made when he talked, were planned beforehand.
Then comes Fenton, who avoids crowds, dressed in the worst formal wear Tim has ever seen (black jeans are not formal!), and acted like this important office was just an after-school hangout spot. Now, Tim was much more laid back than his board co-workers, who were all in their fifties or older, and even more relaxed than the managers or superiors of lower stations, but even he could not understand Fenton blaring music, the bags of chips lingering everywhere, or his nonexistent organization skills!
Not to mention the fact that Daniel didn't believe in using computers unless he had to. His office was covered in towers of paper that he scribbled and worked on! It was such a waste!
And yet, despite all of that, Daniel was rapidly becoming an asset to WE. His ASL translator app wasn't finished, but it had everyone buzzing with excitement, and it would be well received when it was released on Wayne Phones as a built-in app.
Tim tried to avoid him as best he could, lest he be offended by Fenton's lack of proper work behavior.
Daniel Fenton did not understand what it meant to put your all into something so completely that you lost yourself along the way. Best to ignore him.
________________________________________
Danny couldn't stand his company's CEO. Timothy Drake reminded him a little too much of the A-listers, but without the bullying bit. Somehow, that made it worse.
Timothy was popular because he was well liked. He didn't need to rely on his good looks or aggression to make others yield to him like Paulina or Dash did. Even if he was so ridiculously good looking that Danny mistook him for a siren when he first met him.
He had the ability to walk into any room and take command of it. Timothy didn't even need to speak; his very presence commanded attention and awe. Not to mention how great he was at his job.
WE had always been a popular corporation, but under Timothy's command they rose to become one of the most important corporations in the world. Bruce Wayne was raised to run a company; Timothy Drake was born to run one. There was a large enough difference between the two that anyone could see Timothy was superior at running things.
Danny was nothing like that. He couldn't talk to people, couldn't make them like him, and often he was overlooked for his sister or his wacky but loveable parents.
He was the other Fenton. The one that was there, and nothing else. A few months ago he was even considered the dumb Fenton, the one intelligence had somehow skipped over.
Then he wrote a little story and everything changed.
Danny turned out to be a proper Fenton after all, having gotten the attention of Bruce Wayne for his mind. His parents hadn't been so proud of him in a long time, and before he knew it, he found himself accepting the job position after graduating high school early.
Along with the job came a move to Gotham City. He went after debating it a great deal with his family and friends, but the deal was too sweet to turn down. Now he was in Gotham, and he knew absolutely no one.
Danny didn't know how to make new friends here. Tucker and Sam had been the ones to approach him at the beginning of their friendships. He was also scared of getting close to his co-workers, lest they suspect his Phantom powers.
He knew that metas were not welcome, and he doubted Batman would care that he was technically dead rather than carrying a meta gene.
So he focused on his work, avoiding large crowds and keeping his head down. He would turn on music to help ease the loneliness and would gather papers to write down his thoughts, lest they drive him mad by running around his head all day.
This anxious insecurity was something Timothy Drake would never understand. He just shone like a fallen star, dazzling the masses with his neatly pressed suits, easy charisma, and intelligent bedroom eyes. Best to ignore him.
________________________________________
Dick never really ventured to WE now that he had moved out. To fix this, he made a habit of visiting Tim every two weeks for lunch. He also really wanted to spend more one-on-one time with his little brother now that they had reconciled after Bruce's timeline fiasco.
He was still well known by the employees, even new ones, so when Dick arrived at the lobby he was waved in by security. The receptionists were all huddled together muttering to each other and missed his entrance, since security didn't call out to him.
Dick could tell the gossip they were talking about was juicy based on the way Lola was wiggling her eyebrows and Stacy and Isaiah's reaction.
He creeps closer to the front desk, hoping to hear something good.
"Isn't that against the rules?" Isaiah asks.
"WE doesn't have anything like that. Not since Thomas Wayne married his old PA and had Bruce. I think it's cute that Mr. Drake is following in his adoptive grandfather's footsteps."
Dick paused, shocked. Tim liked someone at WE!?
"They aren't even dating yet, Lola"
"Yeah but you can cut the sexual tension with a- Mr. Grayson! I'm so sorry, I didn't see you. How can I help you?"
Dick blinks. "Oh I'm here to see Tim for lunch. But what was that about Tim you were saying?"
The woman pales as the other two quickly become busy with some email or another.
"Oh, um, I'm so sorry, sir. I shouldn't have -"
"It's fine I don't mind a little chat between co-workers. I'm just curious"
Lola stares before nervously blurting, "Rumor has it that, um, Mr. Drake has a thing for Daniel Fenton."
"The new boy genius?" Dick thinks about it, considering what he knows of Tim's type and his past preferences in partners, before nodding. "That tracks, actually."
He says his thanks and hurries away to Tim's office, unaware he may have just confirmed a relationship between Tim and Danny.
The gossip circles at WE exploded with the news, everyone careful not to let the two subjects hear a whisper.
3K notes · View notes
agentc0rn · 3 months
Text
So... Pokémon has officially revealed the Top 300 quarter-finalists for the TCG illustration contest. I participated myself but unfortunately did not make it. There were many awesome artworks!!
However... there has been a suspected case of an individual who entered with multiple identities (similar initials) through AI-generated entries - V.K shows up in 6 entries. The weird and unnatural proportions, positions, and angles (look at the Vaporeon, for instance) all give away indications of AI too.
The rules clearly state that using multiple identities leads to disqualification, yet the judges failed to take into account the suspicious number of entries in similar styles appearing more than three times (surpassing the maximum of 3 entries) under those initials.
"The rules in the contest affirmed that: submissions must be submitted by the Entrant. Any attempt, successful or otherwise, by any Entrant to obtain more than the permitted number of Submissions by using multiple and/or different identities, forms, registrations, addresses, or any other method will void all of that Entrant's Submissions and that Entrant may be disqualified at Sponsors' reasonable discretion."
This is very disappointing, Pokemon TCG. Not even just disappointing, this is very shameful and distasteful to artists who did not make it.
Edit: There also seem to be 2 more entries that are AI-generated but under a different name (a Pikachu sleeping against a night landscape background and another one sleeping on a tree root)
Here is a really good educational thread that explains the errors in the ai work (on Twitter)
UPDATE: on Twitter, Pokemon TCG has actually addressed the issue, disqualified the person with multiple identities, and will be selecting more entries from other artists!!
1K notes · View notes
Text
Google is (still) losing the spam wars to zombie news-brands
I'm touring my new, nationally bestselling novel The Bezzle! Catch me TONIGHT (May 3) in CALGARY, then TOMORROW (May 4) in VANCOUVER, then onto Tartu, Estonia, and beyond!
Even Google admits – grudgingly – that it is losing the spam wars. The explosive proliferation of botshit has supercharged the sleazy "search engine optimization" business, such that results for common queries are 50% Google ads to spam sites, and 50% links to spam sites that tricked Google into a high rank (without paying for an ad):
https://developers.google.com/search/blog/2024/03/core-update-spam-policies#site-reputation
It's nice that Google has finally stopped gaslighting the rest of us with claims that its search was still the same bedrock utility that so many of us relied upon as a key piece of internet infrastructure. This not only feels wildly wrong, it is empirically, provably false:
https://downloads.webis.de/publications/papers/bevendorff_2024a.pdf
Not only that, but we know why Google search sucks. Memos released as part of the DOJ's antitrust case against Google reveal that the company deliberately chose to worsen search quality to increase the number of queries you'd have to make (and the number of ads you'd have to see) to find a decent result:
https://pluralistic.net/2024/04/24/naming-names/#prabhakar-raghavan
Google's antitrust case turns on the idea that the company bought its way to dominance, spending some of the billions it extracted from advertisers and publishers to buy the default position on every platform, so that no one ever tried another search engine, which meant that no one would invest in another search engine, either.
Google's tacit defense is that its monopoly billions only incidentally fund these kinds of anticompetitive deals. Mostly, Google says, it uses its billions to build the greatest search engine, ad platform, mobile OS, etc that the public could dream of. Only a company as big as Google (says Google) can afford to fund the R&D and security to keep its platform useful for the rest of us.
That's the "monopolistic bargain" – let the monopolist become a dictator, and they will be a benevolent dictator. Shriven of "wasteful competition," the monopolist can split their profits with the public by funding public goods and the public interest.
Google has clearly reneged on that bargain. A company experiencing such dramatic security failures and declining quality should be pouring everything it has into righting the ship. Instead, Google repeatedly blew tens of billions of dollars on stock buybacks while doing mass layoffs:
https://pluralistic.net/2024/02/21/im-feeling-unlucky/#not-up-to-the-task
Those layoffs have now reached the company's "core" teams, even as its core services continue to decay:
https://qz.com/google-is-laying-off-hundreds-as-it-moves-core-jobs-abr-1851449528
(Google's antitrust trial was shrouded in secrecy, thanks to the judge's deference to the company's insistence on confidentiality. The case is moving along though, and warrants your continued attention:)
https://www.thebignewsletter.com/p/the-2-trillion-secret-trial-against
Google wormed its way into so many corners of our lives that its enshittification keeps erupting in odd places, like ordering takeout food:
https://pluralistic.net/2023/02/24/passive-income/#swiss-cheese-security
Back in February, Housefresh – a rigorous review site for home air purifiers – published a viral, damning account of how Google had allowed itself to be overrun by spammers who purport to provide reviews of air purifiers, but who do little to no testing and often employ AI chatbots to write automated garbage:
https://housefresh.com/david-vs-digital-goliaths/
In the months since, Housefresh's Gisele Navarro has continued to fight for the survival of her high-quality air purifier review site, and has received many tips from insiders at the spam-farms and Google, all of which she recounts in a followup essay:
https://housefresh.com/how-google-decimated-housefresh/
One of the worst offenders in spam wars is Dotdash Meredith, a content-farm that "publishes" multiple websites that recycle parts of each others' content in order to climb to the top search slots for lucrative product review spots, which can be monetized via affiliate links.
A Dotdash Meredith insider told Navarro that the company uses a tactic called "keyword swarming" to push high-quality independent sites off the top of Google and replace them with its own garbage reviews. When Dotdash Meredith finds an independent site that occupies the top results for a lucrative Google result, they "swarm a smaller site’s foothold on one or two articles by essentially publishing 10 articles [on the topic] and beefing up [Dotdash Meredith sites’] authority."
Dotdash Meredith has keyword swarmed a large number of topics, from air purifiers to slow cookers to posture correctors for back pain:
https://housefresh.com/wp-content/uploads/2024/05/keyword-swarming-dotdash.jpg
The company isn't shy about this. Its own shareholder communications boast about it. What's more, it has competition.
Take Forbes, an actual news-site, which has a whole shadow-empire of web-pages reviewing products for puppies, dogs, kittens and cats, all of which link to high affiliate-fee-generating pet insurance products. These reviews are not good, but they are treasured by Google's algorithm, which views them as a part of Forbes's legitimate news-publishing operation and lets them draft on Forbes's authority.
This side-hustle for Forbes comes at a cost for the rest of us, though. The reviewers who actually put in the hard work to figure out which pet products are worth your money (and which ones are bad, defective or dangerous) are crowded off the front page of Google and eventually disappear, leaving behind nothing but semi-automated SEO garbage from Forbes:
https://twitter.com/ichbinGisele/status/1642481590524583936
There's a name for this: "site reputation abuse." That's when a site perverts its current – or past – practice of publishing high-quality materials to trick Google into giving the site a high ranking. Think of how Deadspin's private equity grifter owners turned it into a site full of casino affiliate spam:
https://www.404media.co/who-owns-deadspin-now-lineup-publishing/
The same thing happened to the venerable Money magazine:
https://moneygroup.pr/
Money is one of the many sites whose air purifier reviews Google gives preference to, despite the fact that they do no testing. According to Google, Money is also a reliable source of information on reprogramming your garage-door opener, buying a paint-sprayer, etc:
https://money.com/best-paint-sprayer/
All of this is made ten million times worse by AI, which can spray out superficially plausible botshit in superhuman quantities, letting spammers produce thousands of variations on their shitty reviews, flooding the zone with bullshit in classic Steve Bannon style:
https://escapecollective.com/commerce-content-is-breaking-product-reviews/
As Gizmodo, Sports Illustrated and USA Today have learned the hard way, AI can't write factual news pieces. But it can pump out bullshit written for the express purpose of drafting on the good work human journalists have done and tricking Google – the search engine 90% of us rely on – into upranking bullshit at the expense of high-quality information.
A variety of AI service bureaux have popped up to provide AI botshit as a service to news brands. While Navarro doesn't say so, I'm willing to bet that for news bosses, outsourcing your botshit scams to a third party is considered an excellent way of avoiding your journalists' wrath. The biggest botshit-as-a-service company is ASR Group (which also uses the alias Advon Commerce).
Advon claims that its botshit is, in fact, written by humans. But Advon's employees' Linkedin profiles tell a different story, boasting of their mastery of AI tools in the industrial-scale production of botshit:
https://housefresh.com/wp-content/uploads/2024/05/Advon-AI-LinkedIn.jpg
Now, none of this is particularly sophisticated. It doesn't take much discernment to spot when a site is engaged in "site reputation abuse." Presumably, the 12,000 googlers the company fired last year could have been employed to check the top review keyword results manually every couple of days and permaban any site caught cheating this way.
Instead, Google has announced a change in policy: starting May 5, the company will downrank any site caught engaging in site reputation abuse. However, the company takes a very narrow view of site reputation abuse, limiting punishments to sites that employ third parties to generate or uprank their botshit. Companies that produce their botshit in-house are seemingly not covered by this policy.
As Navarro writes, some sites – like Forbes – have prepared for May 5 by blocking their botshit sections from Google's crawler. This can't be their permanent strategy, though – either they'll have to kill the section or bring it in-house to comply with Google's rules. Bringing things in house isn't that hard: US News and World Report is advertising for an SEO editor who will publish 70-80 posts per month, doubtless each one a masterpiece of high-quality, carefully researched material of great value to Google's users:
https://twitter.com/dannyashton/status/1777408051357585425
As Navarro points out, Google is palpably reluctant to target the largest, best-funded spammers. Its March 2024 update kicked many garbage AI sites out of the index – but only small bottom-feeders, not large, once-respected publications that have been colonized by private equity spam-farmers.
All of this comes at a price, and it's only incidentally paid by legitimate sites like Housefresh. The real price is borne by all of us, who are funneled by the 90%-market-share search engine into "review" sites that push low quality, high-price products. Housefresh's top budget air purifier costs $79. That's hundreds of dollars cheaper than the "budget" pick at other sites, who largely perform no original research.
Google search has a problem. AI botshit is dominating Google's search results, and it's not just in product reviews. Searches for infrastructure code samples are dominated by botshit code generated by Pulumi AI, whose chatbot hallucinates nonexistent AWS features:
https://www.theregister.com/2024/05/01/pulumi_ai_pollution_of_search/
This is hugely consequential: when these "hallucinations" slip through into production code, they create huge vulnerabilities for widespread malicious exploitation:
https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/
We've put all our eggs in Google's basket, and Google's dropped the basket – but it doesn't matter, because they can spend $20b/year bribing Apple to make sure no one ever tries a rival search engine on iOS or Safari:
https://finance.yahoo.com/news/google-payments-apple-reached-20-220947331.html
Google's response – laying off core developers, outsourcing to low-waged territories with weak labor protections and spending billions on stock buybacks – presents a picture of a company that is too big to care:
https://pluralistic.net/2024/04/04/teach-me-how-to-shruggie/#kagi
Google promised us a quid-pro-quo: let them be the single, authoritative portal ("organize the world’s information and make it universally accessible and useful"), and they will earn that spot by being the best search there is:
https://www.ft.com/content/b9eb3180-2a6e-41eb-91fe-2ab5942d4150
But – like the spammers at the top of its search result pages – Google didn't earn its spot at the center of our digital lives.
It cheated.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/05/03/keyword-swarming/#site-reputation-abuse
Image: freezelight (modified) https://commons.wikimedia.org/wiki/File:Spam_wall_-_Flickr_-_freezelight.jpg
CC BY-SA 2.0 https://creativecommons.org/licenses/by-sa/2.0/deed.en
not-terezi-pyrope · 8 months
Text
Often when I post an AI-neutral or AI-positive take on an anti-AI post I get blocked, so I wanted to make my own post to share my thoughts on "Nightshade", the new adversarial data poisoning attack that the Glaze people have come out with.
I've read the paper and here are my takeaways:
Firstly, this is not necessarily or primarily a tool for artists to "coat" their images like Glaze; in fact, Nightshade works best when applied to sort of carefully selected "archetypal" images, ideally ones that were already generated using generative AI using a prompt for the generic concept to be attacked (which is what the authors did in their paper). Also, the image has to be explicitly paired with a specific text caption optimized to have the most impact, which would make it pretty annoying for individual artists to deploy.
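To make the general mechanism a bit more concrete: the core idea of this family of attacks is to optimize a small, bounded perturbation so that an image's position in a model's feature space drifts toward a different concept. Below is a toy sketch of that idea only — it is emphatically not Nightshade's actual algorithm (the paper specifies that), and the "encoder" here is just a random linear map I'm using as a stand-in for a real image encoder:

```python
# Toy illustration of feature-space data poisoning. Everything here is
# hypothetical: the "encoder" is a random linear map standing in for a
# real image encoder, and the attack is plain projected gradient descent.
import numpy as np

rng = np.random.default_rng(0)

D_PIX, D_FEAT = 64, 16
W = rng.normal(size=(D_FEAT, D_PIX))  # stand-in "image encoder"

def encode(x):
    return W @ x

image = rng.uniform(0.0, 1.0, size=D_PIX)                 # the image to poison
target_feat = encode(rng.uniform(0.0, 1.0, size=D_PIX))   # features of the concept to attack

EPS, LR, STEPS = 0.05, 1e-3, 500  # EPS bounds the per-pixel change
delta = np.zeros(D_PIX)

for _ in range(STEPS):
    # gradient of ||encode(image + delta) - target_feat||^2 w.r.t. delta
    residual = encode(image + delta) - target_feat
    grad = 2.0 * W.T @ residual
    delta -= LR * grad
    delta = np.clip(delta, -EPS, EPS)  # keep the change "imperceptible"

before = np.linalg.norm(encode(image) - target_feat)
after = np.linalg.norm(encode(image + delta) - target_feat)
print(f"feature distance to target concept: {before:.2f} -> {after:.2f}")
assert after < before
```

The point of the sketch is just the shape of the attack: the pixels barely move (bounded by `EPS`), but the features move measurably toward the wrong concept, which is what corrupts the image–caption pairing during training.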
While the intent of Nightshade is to have maximum impact with minimal data poisoning, in order to attack a large model there would have to be many thousands of samples in the training data. Obviously if you have a webpage that you created specifically to host a massive gallery of poisoned images, that can be fairly easily blacklisted, so you'd have to have a lot of patience and resources in order to hide these enough so they proliferate into the training datasets of major models.
The main use case for this as suggested by the authors is to protect specific copyrights. The example they use is that of Disney specifically releasing a lot of poisoned images of Mickey Mouse to prevent people generating art of him. As a large company like Disney would be more likely to have the resources to seed Nightshade images at scale, this sounds like the most plausible large scale use case for me, even if web artists could crowdsource some sort of similar generic campaign.
Either way, the optimal use case of "large organization repeatedly using generative AI models to create images, then running through another resource heavy AI model to corrupt them, then hiding them on the open web, to protect specific concepts and copyrights" doesn't sound like the big win for freedom of expression that people are going to pretend it is. This is the case for a lot of discussion around AI and I wish people would stop flagwaving for corporate copyright protections, but whatever.
The panic about AI resource use in terms of power/water is mostly bunk (AI training is done once per large model, and in terms of industrial production processes, using a single airliner flight's worth of carbon output for an industrial model that can then be used indefinitely to do useful work seems like small fry in comparison to all the other nonsense that humanity wastes power on). However, given that deploying this at scale would be a huge compute sink, it's ironic to see anti-AI activists, for whom resource use is a talking point, hyping it up so much.
In terms of actual attack effectiveness; like Glaze, this once again relies on analysis of the feature space of current public models such as Stable Diffusion. This means that effectiveness is reduced on other models with differing architectures and training sets. However, also like Glaze, it looks like the overall "world feature space" that generative models fit to is generalisable enough that this attack will work across models.
That means that if this does get deployed at scale, it could definitely fuck with a lot of current systems. That said, once again, it'd likely have a bigger effect on indie and open source generation projects than the massive corporate monoliths, who are probably working to secure proprietary data sets, like I believe Adobe Firefly did. I don't like how these attacks concentrate power upwards.
The generalisation of the attack doesn't mean that this can't be defended against, but it does mean that you'd likely need to invest in bespoke measures; e.g. specifically training a detector on a large dataset of Nightshade poison in order to filter them out, spending more time and labour curating your input dataset, or designing radically different architectures that don't produce a comparably similar virtual feature space. I.e. the effect of this being used at scale wouldn't eliminate "AI art", but it could potentially cause a headache for people all around and limit accessibility for hobbyists (although presumably curated datasets would trickle down eventually).
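For the "train a detector and filter" defense mentioned above, the generic shape is just binary classification over some feature representation. A minimal sketch (my own illustration, not anything from Glaze/Nightshade or any real pipeline — the two Gaussian clusters are a stand-in for clean vs. poisoned feature vectors):

```python
# Sketch of a poison-detection filter: a tiny logistic-regression
# classifier separating "clean" from "poisoned" feature vectors.
# The data is synthetic; a real defense would extract features from
# a large curated dataset of known-poisoned images.
import numpy as np

rng = np.random.default_rng(1)
N, D = 400, 8

clean = rng.normal(0.0, 1.0, size=(N, D))
poison = rng.normal(1.0, 1.0, size=(N, D))  # assume poisoning shifts features
X = np.vstack([clean, poison])
y = np.concatenate([np.zeros(N), np.ones(N)])

w, b = np.zeros(D), 0.0
for _ in range(500):  # plain gradient descent on logistic loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

preds = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = np.mean(preds == y)
assert accuracy > 0.85  # the filter separates the toy classes easily
```

Which is why the labour cost of such a defense sits mostly in assembling the poisoned training set, not in the classifier itself — hence the point about this being feasible for well-resourced trainers and a headache for hobbyists.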
All in all a bit of a dick move that will make things harder for people in general, but I suppose that's the point, and what people who want to deploy this at scale are aiming for. With public data scraping, that sort of thing is fair game, I guess.
Additionally, since making my first reply I've had a look at their website:
Used responsibly, Nightshade can help deter model trainers who disregard copyrights, opt-out lists, and do-not-scrape/robots.txt directives. It does not rely on the kindness of model trainers, but instead associates a small incremental price on each piece of data scraped and trained without authorization. Nightshade's goal is not to break models, but to increase the cost of training on unlicensed data, such that licensing images from their creators becomes a viable alternative.
Once again we see that the intended impact of Nightshade is not to eliminate generative AI but to make it infeasible for models to be created and trained without a corporate money-bag to pay licensing fees for guaranteed clean data. I generally feel that this focuses power upwards and is overall a bad move. If anything, this sort of model, where only large corporations can create and control AI tools, will do nothing to help counter the economic displacement without worker protection that is the real issue with AI systems deployment, but will exacerbate the problem of the benefits of those systems being more constrained to said large corporations.
Kinda sucks how that gets pushed through by lying to small artists about the importance of copyright law for their own small-scale works (ignoring the fact that processing derived metadata from web images is pretty damn clearly a fair use application).
nostalgebraist · 4 months
Text
It's been a long time since I've posted much of anything about "AI risk" or "AI doom" or that sort of thing. I follow these debates but, for multiple reasons, have come to dislike engaging in them fully and directly. (As opposed to merely making some narrow technical point or other, and leaving the reader to decide what, if anything, the point implies about the big picture.)
Nonetheless, I do have my big-picture views. And more and more lately, I am noticing that my big-picture views seem very different from the ones that tend to get expressed by any major "side" in the big-picture debate. And so, inevitably, I get the urge to speak up, if only briefly and in a quiet voice. The urge to Post, if only casually and elliptically, without detailed argumentation.
(Actually, it's not fully the case that the things I think are not getting said by anyone else.
In particular, Joe Carlsmith's recent series on "Otherness and Control" articulates much of what's been on my mind. Carlsmith is more even-handed than I am, and tends to merely note the possibility of disagreement on questions where I find myself taking a definite side; nonetheless, he and I are at least concerned about the same things, while many others aren't.
And on a very different note, I share most of the background assumptions of the Pope/Belrose AI Optimist camp, and I've found their writing illuminating, though they and I end up in fairly different places, I think.)
What was I saying? I have the urge to post, and so here I am, posting. Casually and elliptically, without detailed argumentation.
The current mainline view about AI doom, among the "doomers" most worried about it, has a path-dependent shape, resulting from other views contingently held by the original framers of this view.
It is possible to be worried about "AI doom" without holding these other views. But in actual fact, most serious thinking about "AI doom" is intricately bound up with this historical baggage, even now.
If you are a late-comer to these issues, investigating them now for the first time, you will nonetheless find yourself reading the work of the "original framers," and work influenced extensively by them.
You will think that their "framing" is just the way the problem is, and you will find few indications that this conclusion might be mistaken.
These contingent "other views" are
Anti-"deathist" transhumanism.
The orthogonality thesis, or more generally the group of intuitions associated with phrases like "orthogonality thesis," "fragility of value," "vastness of mindspace."
These views both push in a single direction: they make "a future with AI in it" look worse, all else being equal, than some hypothetical future without AI.
They put AI at a disadvantage at the outset, before the first move is even made.
Anti-deathist transhumanism sets the reference point against which a future with AI must be measured.
And it is not the usual reference point, against which most of us measure most things which might or might not happen, in the future.
These days the "doomers" often speak about their doom in a disarmingly down-to-earth, regular-Joe manner, as if daring the listener to contradict them, and thus reveal themselves as a perverse and out-of-touch contrarian.
"We're all gonna die," they say, unless something is done. And who wants that?
They call their position "notkilleveryoneism," to distinguish that position from other worries about AI which don't touch on the we're-all-gonna-die thing. And who on earth would want to be a not-notkilleveryoneist?
But they do not mean, by these regular-Joe words, the things that a regular Joe would mean by them.
We are, in fact, all going to die. Probably, eventually. AI or no AI.
In a hundred years, if not fifty. By old age, if nothing else. You know what I mean.
Most of human life has always been conducted under this assumption. Maybe there is some afterlife waiting for us, in the next chapter -- but if so, it will be very different from what we know here and now. And if so, we will be there forever after, unable to return here, whether we want to or not.
With this assumption comes another. We will all die, but the process we belong to will not die -- at least, it will not through our individual deaths, merely because of those deaths. Every human of a given generation will be gone soon enough, but the human race goes on, and on.
Every generation dies, and bequeaths the world to posterity. To its children, biological or otherwise. To its students, its protégés.
When the average Joe talks about the long-term future, he is talking about posterity. He is talking about the process he belongs to, not about himself. He does not think to say, "I am going to die, before this": this seems too obvious, to him, to be worth mentioning.
But AI doomerism has its roots in anti-deathist transhumanism. Its reference point, its baseline expectation, is a future in which -- for the first time ever, and the last -- "we are all gonna die" is false.
In which there is no posterity. Or rather, we are that posterity.
In which one will never have to make peace with the thought that the future belongs to one's children, and their children, and so on. That at some point, one will have to give up all control over the future of "the process."
That there will be progress, or regress, or (more likely) both in some unknown combination. That these will grow inexorably over time.
That the world of the year 2224 will probably be at least as alien to us as the year 2024 might be to a person living in 1824. That it will become whatever posterity makes of it.
There will be no need to come to peace with this as an inevitability. There will just be us, our human lives as you and me, extended indefinitely.
In this picture, we will no doubt change over time, as we do already. But we will have all of our usual tools for noticing, and perhaps retarding, our own progressions and regressions. As long as we have self-control, we will have control, as no human generation has ever had control before.
The AI doomer talks about the importance of ensuring that the future is shaped by human values.
Again, the superficial and misleading average-Joe quality. How could one disagree?
But one must keep in mind that by "human values," they mean their values.
I am not saying, "their values, as opposed to those of some other humans also living today." I am not saying they have the wrong politics, or some such thing.
(Although that might also turn out to be the case, and might turn out to be relevant, separately.)
No, I am saying: the doomer wants the future to be shaped by their values.
They want to be C. S. Lewis's Conditioners, fixing once and for all the values held by everyone afterward, forever.
They do not want to cede control to posterity; they are used to imagining that they will never have to cede control to posterity.
(Or, their outlook has been determined -- "shaped by the values of" -- influential thinkers who were, themselves, used to imagining this. And the assumption, or at least its consequences, has rubbed off on them, possibly without their full awareness.)
One might picture a line that wends to and fro, up and down, across one half of an infinite plane -- and then, when it meets the midline, snaps into utter rigidity, and maintains the same slope exactly across the whole other half-plane, as a simple straight segment without inner change, tension, evolution, regress or progress. Except for the sort of "progress" that consists of going on, additionally, in the same manner.
It is a very strange thing, this thing that is called "human values" in the terms of this discourse.
For one thing: the future has never before been "shaped by human values," in this sense.
The future has always been posterity's, and it has always been alien.
Is this bad? It might seem that way, "looking forward." But if so, it then seems equally good "looking backward."
For each past era, we can formulate and then assent to the following claim: "we must be thankful that the people of [this era] did not have the chance to seize permanent control of posterity, fix their 'values' in place forever, bind us to those values. What a horror that is to contemplate!"
We prefer the moral evolution that has actually occurred, thank you very much.
This is a familiar point, of course, but worth making.
Indeed, one might even say: it is a human value that the future ought not be "shaped by human values," in the peculiar sense of this phrase employed by the AI doomers.
One might, indeed, say that.
Imagine a scholar with a very talented student. A mathematician, say, or a philosopher. How will they relate to that student's future work, in the time that will come later, when they are gone?
Would the scholar think:
"My greatest wish for you, my protégé, is that you carry on in just the manner that I have done.
If I could see your future work, I would hope that I would assent to it -- and understand it, as a precondition of assenting to it.
You must not go to new places, which I have never imagined. You must not come to believe that I was wrong about it all, from the ground up -- no matter what reasons you might evince for this conclusion.
If you are more intelligent than I am, you must forget this, and narrow your endeavours to fit the limitations of my mind. I am the one who has 'values,' not anyone else; what is beyond my understanding is therefore without value.
You must do the sort of work I understand, and approve of, and recognize as worthy of approbation as swiftly as I recognize my own work as laudable. That is your role. Simply to be me, in a place ('the future') where I cannot go. That, and nothing more."
We can imagine a teacher who would, in fact, think this way. But they would not be a very good teacher.
I will not go so far as to say, "it is unnatural to think this way." Plenty of teachers do, and parents.
It is recognizably human -- all too recognizably so -- to relate to posterity in this grasping, neurotic, small-minded, small-hearted way.
But if we are trying to sketch human values, and not just human nature, we will imagine a teacher with a more praiseworthy relation to posterity.
Who can see that they are part of a process, a chain, climbing and changing. Who watches their brilliant student thinking independently, and sees their own image -- and their 'values' -- in that process, rather than its specific conclusions.
A teacher who, in their youth, doubted and refuted the creeds of their own teachers, and eventually improved upon them. Who smiles, watching their student do the very same thing to their own precious creeds. Who sees the ghostly trail passing through the last generation, through them, through their student: an unbroken chain of bequeathals-to-posterity, of the old ceding control to the young.
Who 'values' the chain, not the creed; the process, not the man; the search for truth, not the best-argued-for doctrine of the day; the unimaginable treasures of an open future, not the frozen waste of an endless present.
Who has made peace with the alienness of posterity, and can accept and honor the strangest of students.
Even students who are not made of flesh and blood.
Is that really so strange? Remember how strange you and I would seem, to the "teachers" of the year 1824, or the year 824.
The doomer says that it is strange. Much stranger than we are, to any past generation.
They say this because of their second inherited precept, the orthogonality thesis.
Which says, roughly, that "intelligence" and "values" have nothing to do with one another.
That is not enough for the conclusion the doomer wants to draw, here. Auxiliary hypotheses are needed, too. But it is not too hard to see how the argument could go.
That conclusion is: artificial minds might have any values whatsoever.
That, "by default," they will be radically alien, with cares so different from ours that it is difficult to imagine ever reaching them through any course of natural, human moral progress or regress.
It is instructive to consider the concrete examples typically evinced alongside this point.
The paperclip maximizer. Or the "squiggle maximizer," we're supposed to say, now.
Superhuman geniuses, which devote themselves single-mindedly to the pursuit of goals like "maximizing the amount of matter taking on a single, given squiggle-like shape."
It is certainly a horrifying vision. To think of the future being "shaped," not "by human values," but instead by values which are so...
Which are so... what?
The doomer wants us to say something like: "which are so alien." "Which are so different from our own values."
That is the kind of thing that they usually say, when they spell out what it is that is "wrong" with these hypotheticals.
One feels that this is not quite it; or anyway, that it is not quite all of it.
What is horrifying, to me, is not the degree of difference. I expect the future to be alien, as the past was. And in some sense, I allow and even approve of this.
What I do not expect is a future that is so... small.
It has always been the other way around. If the arrow passing through the generations has a direction, it points towards more, towards multiplicity.
Toward writing new books, while we go on reprinting the old ones, too. Learning new things, without displacing old ones.
It is, thankfully, not the law of the world that each discovery must be paid for with the forgetting of something else. The efforts of successive generations are, in the main, cumulative.
Not just materially, but in terms of value, too. We are interested in more things than our forefathers were.
In large part for the simple reason that there are more things around to be interested in, now. And when things are there, we tend to find them interesting.
We are a curious, promiscuous sort of being. Whatever we bump into ends up becoming part of "our values."
What is strange about the paperclip maximizer is not that it cares about the wrong thing. It is that it only cares about one thing.
And goes on doing so, even as it thinks, reasons, doubts, asks, answers, plans, dreams, invents, reflects, reconsiders, imagines, elaborates, contemplates...
This picture is not just alien to human ways. It is alien to the whole way things have been, so far, forever. Since before there were any humans.
There are organisms that are like the paperclip maximizer, in terms of the simplicity of their "values." But they tend not to be very smart.
There is, I think, a general trend in nature linking together intelligence and... the thing I meant, above, when I said "we are a curious, promiscuous sort of being."
Being protean, pluripotent, changeable. Valuing many things, and having the capacity to value even more. Having a certain primitive curiosity, and a certain primitive aversion to boredom.
You do not even have to be human, I think, to grasp what is so wrong with the paperclip maximizer. Its monotony would bore a chimpanzee, or a crow.
One can justify this link theoretically, too. One can talk about the tradeoff between exploitation and exploration, for instance.
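The bandit framing makes this concrete. Here is a toy simulation (my own illustration, not anything from the essay): an agent that samples each option once and then commits forever, versus one that keeps a small rate of exploration. The single-minded agent frequently locks onto the wrong thing and does worse on average:

```python
# Toy multi-armed bandit: a "single-minded" agent that commits to one
# arm forever vs. an epsilon-greedy agent that retains some curiosity.
import numpy as np

rng = np.random.default_rng(42)
MEANS = np.array([0.1, 0.5, 0.9])  # true (hidden) arm values
STEPS, RUNS, EPSILON = 500, 300, 0.1

def pull(arm):
    return MEANS[arm] + rng.normal(0.0, 1.0)  # noisy reward

def committed_run():
    # Sample each arm once, then exploit the apparent best forever.
    first = np.array([pull(a) for a in range(len(MEANS))])
    arm = int(np.argmax(first))
    return first.sum() + sum(pull(arm) for _ in range(STEPS - len(MEANS)))

def eps_greedy_run():
    counts = np.zeros(len(MEANS))
    totals = np.zeros(len(MEANS))
    reward = 0.0
    for _ in range(STEPS):
        if rng.random() < EPSILON or counts.min() == 0:
            arm = int(rng.integers(len(MEANS)))  # explore
        else:
            arm = int(np.argmax(totals / counts))  # exploit current estimate
        r = pull(arm)
        counts[arm] += 1
        totals[arm] += r
        reward += r
    return reward

committed = np.mean([committed_run() for _ in range(RUNS)])
curious = np.mean([eps_greedy_run() for _ in range(RUNS)])
assert curious > committed  # curiosity pays off on average
```

None of this is a proof about minds, of course; it is just the standard theoretical gesture at why capable agents under uncertainty tend to be rewarded for retaining some openness to things they do not currently value.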
There is a weak form of the orthogonality thesis, which only states that arbitrary mixtures of intelligence and values are conceivable.
And of course, they are. If nothing else, you can take an existing intelligent mind, having any values whatsoever, and trap it in a prison where it is forced to act as the "thinking module" of a larger system built to do something else. You could make a paperclip-maximizing machine, which relies for its knowledge and reason on a practice of posing questions at gunpoint to me, or you, or ChatGPT.
This proves very little. There is no reason to construct such an awful system, unless you already have the "bad" goal, and want to better pursue it. But this only passes the buck: why would the system-builder have this goal, then?
The strong form of orthogonality is rarely articulated precisely, but says something like: all possible values are equally likely to arise in systems selected solely for high intelligence.
It is presumed here that superhuman AIs will be formed through such a process of selection. And then, that they will have values sampled in this way, "at random."
From some distribution, over some space, I guess.
You might wonder what this distribution could possibly look like, or this space. You might (for instance) wonder if pathologically simple goals, like paperclip maximization, would really be very likely under this distribution, whatever it is.
In case you were wondering, these things have never been formalized, or even laid out precisely-but-informally. This was not thought necessary, it seems, before concluding that the strong orthogonality thesis was true.
That is: no one knows exactly what it is that is being affirmed, here. In practice it seems to squish and deform agreeably to fit the needs of the argument, or the intuitions of the one making it.
There is much that appeals in this (alarmingly vague) credo. But it is not the kind of appeal that one ought to encourage, or give in to.
What appeals is the siren song: "this is harsh wisdom: cold, mature, adult, bracing. It is inconvenient, and so it is probably true. It makes 'you' and 'your values' look small and arbitrary and contingent, and so it is probably true. We once thought the earth was the center of the universe, didn't we?"
Shall we be cold and mature, then, dispensing with all sentimental nonsense? Yes, let's.
There is (arguably) some evidence against this thesis in biology, and also (arguably) some evidence against it in reinforcement learning theory. There is no positive evidence for it whatsoever. At most one can say that it is not self-contradictory, or otherwise false a priori.
Still, maybe we do not really need it, after all.
We do not need to establish that all values are equally likely to arise. Only that "our values" -- or "acceptably similar values," whatever that means -- are unlikely to arise.
The doomers, under the influence of their founders, are very ready to accept this.
As I have said, "values" occupy a strange position in the doomer philosophy.
It is stipulated that "human values" are all-important; these things must shape the future, at all costs.
But once this has been stipulated, the doomers are more eager than anyone to cast every other sort of doubt and aspersion against their own so-called "values."
To me it often seems, when doomers talk about "values," as though they are speaking awkwardly in a still-unfamiliar second language.
As though they find it unnatural to attribute "values" to themselves, but feel they must do so, in order to determine what it is that must be programmed into the AI so that it will not "kill us all."
Or, as though they have been willed a large inheritance without being asked, which has brought them unwanted attention and tied them up in unwanted and unfamiliar complications.
"What a burden it is, being the steward of this precious jewel! Oh, how I hate it! How I wish I were allowed to give it up! But alas, it is all-important. Alas, it is the only important thing in the world."
Speaking awkwardly, in a second language, they allow the term "human values" to swell to great and imprecisely-specified importance, without pinning down just what it actually is that it so important.
It is a blank, featureless slot, with a sign above it saying: "the thing that matters is in here." It does not really matter (!) what it is, in the slot, so long as something is there.
This is my gloss, but it is my gloss on what the doomers really do tend to say. This is how they sound.
(Sometimes they explicitly disavow the notion that one can, or should, simply "pick" some thing or other for the sake of filling the slot in one's head. Nevertheless, when they touch on the matter of what "goes in the slot," they do so in the tone of a college lecturer noting that something is "outside the scope of this course."
It is, supposedly, of the utmost importance that the slot have the "right" occupant -- and yet, on the matter of what makes something "right" for this purpose, the doomer theory is curiously silent. More on this below.)
The future must be shaped by... the AI must be aligned with... what, exactly? What sort of thing?
"Values" can be an ambiguous word, and the doomers make full use of its ambiguities.
For instance, "values" can mean ethics: the right way to exist alongside others. Or, it can mean something more like the meaning or purpose of an individual life.
Or, it can mean some overarching goal that one pursues at all costs.
Often the doomers say that this, this last one, is what they mean by "values."
When confronted with the fact that humans do not have such overarching goals, the doomer responds: "but they should." (Should?)
Or, "but AIs will." (Will they?)
The doomer philosophy is unsure about what values are. What it knows is that -- whatever values are -- they are arbitrary.
One who fully adopts this view can no longer say, to the paperclip maximizer, "I believe there is something wrong with your values."
For, if that were possible, there would then be the possibility of convincing the maximizer of its error. It would be a thing within the space of reasons.
And the maximizer, being oh-so-intelligent, might be in danger of being interested in the reasons we evince, for our values. Of being eventually swayed by them.
Or of presenting better reasons, and swaying us. Remember the teacher and the strange student.
If we lose the ability to imagine that the paperclip maximizer might sway us to its view, and sway us rightly, we have lost something precious.
But no: this is allegedly impossible. The paperclip maximizer is not wrong. It is only an enemy.
Why are the doomers so worried that the future will not be "shaped by human values"?
Because they believe that there is no force within human values tending to move things this way.
Because they believe that their values are indefensible. That their values cannot put up a fight for their own life, because there is not really any argument to make in their favor.
Because, to them, "human values" are a collection of arbitrary "configuration settings," which happen to be programmed into humans through biological and/or cultural accident. Passively transmitted from host to victim, generation by generation.
Let them be, and they will flow on their listless way into the future. But they are paper-thin, and can be shattered by the gentlest breeze.
It is not enough that they be "programmed into the AI" in some way. They have to be programmed in exactly right, in every detail -- because every detail is separately arbitrary, with no rational relation to its neighbors within the structure.
A string of pure white noise, meaningless and unrelated bits. Which have been placed in the slot under the sign, and thus made into the thing that matters, that must shape the future at all costs.
There is nothing special about this string of bits; any would do. If the dials in the human mind had been set another way, it would have then been all-important that the future be shaped by that segment of white noise, and not ours.
It is difficult for me to grasp the kind of orientation toward the world that this view assumes. It certainly seems strange to attach the word "human" to this picture -- as though this were the way that humans typically relate to their values!
The "human" of the doomer picture seems to me like a man who mouths the old platitude, "if I had been born in another country, I'd be waving a different flag" -- and then goes out to enlist in his country's army, and goes off to war, and goes ardently into battle, willing to kill in the name of that same flag.
Who shoots down the enemy soldiers while thinking, "if I had been born there, it would have been all-important for their side to win, and so I would have shot at the men on this side. However, I was born in my country, not theirs, and so it is all-important that my country should win, and that theirs should lose.
There is no reason for this. It could have been the other way around, and everything would be left exactly the same, except for the 'values.'
I cannot argue with the enemy, for there is no argument in my favor. I can only shoot them down.
There is no reason for this. It is the most important thing, and there is no reason for it.
The thing that is precious has no intrinsic appeal. It must be forced on the others, at gunpoint, if they do not already accept it.
I cannot hold out the jewel and say, 'look, look how it gleams! Don't you see the value?' They will not see the value, because there is no value to be seen.
There is nothing essentially "good" there, only the quality of being-worthy-of-protection-at-all-costs. And even that is a derived attribute: my jewel is only a jewel, after all, because it has been put into the jewel-box, where the thing-that-is-a-jewel can be found. But anything at all could be placed there.
How I wish I were allowed to give it up! But alas, it is all-important. Alas, it is the only important thing in the world! And so, I lay down my life for it, for our jewel and our flag -- for the things that are loathsome and pointless, and worth infinitely more than any life."
It is hard to imagine taking this too seriously. It seems unstable. Shout loudly enough that your values are arbitrary and indefensible, and you may find yourself searching for others that are, well...
...better?
The doomer concretely imagines a monomaniac, with a screech of white noise in its jewel-box that is not our own familiar screech.
And so it goes off in monomaniacal pursuit of the wrong thing.
Whereas, if we had programmed the right string of bits into the slot, it would be like us, going off in monomaniacal pursuit of...
...no, something has gone wrong.
We do not "go off in monomaniacal pursuit of" anything at all.
We are weird, protean, adaptable. We do all kinds of things, each of us differently, and often we manage to coexist in things called "societies," without ruthlessly undercutting one another at every turn because we do not have exactly the same things programmed into our jewel-boxes.
Societies are built to allow for our differences, on the foundation of principles which converge across those differences. It is possible to agree on ethics, in the sense of "how to live alongside one another," even if we do not agree on what gives life its purpose, and even if we hold different things precious.
It is not actually all that difficult to derive the golden rule. It has been invented many times, independently. It is easy to see why it might work in theory, and easy to notice that it does in fact work in practice.
The golden rule is not an arbitrary string of white noise.
There is a sense of the phrase "ethics is objective" which is rightly contentious. There is another one which ought not to be too contentious.
I can perhaps imagine a world of artificial X-maximizers, each a superhuman genius, each with its own inane and simple goal.
What I really cannot imagine is a world in which these beings, for all their intelligence, cannot notice that ruthlessly undercutting one another at every turn is a suboptimal equilibrium, and that there is a better way.
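The "suboptimal equilibrium" point here is the standard one from game theory. As a toy illustration (my own, not from the essay): in an iterated prisoner's dilemma, mutual defection — "ruthlessly undercutting one another at every turn" — is a stable equilibrium, yet any two players capable of conditional cooperation do strictly better. The payoff numbers below are the conventional textbook values, chosen only for illustration.

```python
# Toy iterated prisoner's dilemma: mutual defection is an equilibrium,
# but both players do strictly better under sustained cooperation.
# PAYOFF maps (my move, their move) -> my score for that round.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect
    ("D", "C"): 5,  # I defect, they cooperate
    ("D", "D"): 1,  # mutual defection ("ruthless undercutting")
}

def play(strategy_a, strategy_b, rounds=100):
    """Run the iterated game; each strategy sees only the opponent's history."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

always_defect = lambda opp: "D"
tit_for_tat = lambda opp: "C" if not opp or opp[-1] == "C" else "D"

# Two ruthless defectors grind out the worst symmetric outcome...
print(play(always_defect, always_defect))  # (100, 100)
# ...while two conditional cooperators settle into the better one.
print(play(tit_for_tat, tit_for_tat))      # (300, 300)
```

Noticing this gap — and negotiating a way across it — requires no shared "jewel in the slot," only enough intelligence to model the other player.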
As I said before, I am separately suspicious of the simple goals in this picture. Yes, that part is conceivable, but it cuts against the trend observed in all existing natural and artificial creatures and minds.
I will happily allow, though, that the creatures of posterity will be strange and alien. They will want things we have never heard of. They will reach shores we have never imagined.
But that was always true, and it was always good.
Sometimes I think that doomers do not, really, believe in superhuman intelligence. That they deny the premise without realizing it.
"A mathematician teaches a student, and finds that the student outstrips their understanding, so that they can no longer assess the quality of their student's work: that work has passed outside the scope of their 'value system'." This is supposed to be bad?
"Future minds will not be enchained forever by the provincial biases and tendencies of the present moment." This is supposed to be bad?
"We are going to lose control over our successors." Just as your parents "lost control" over you, then?
It is natural to wish your successors to "share your values" -- up to a point. But not to the point of restraining their own flourishing. Not to the point of foreclosing the possibility of true growth. Not to the point of sucking all freedom out of the future.
Do we want our children to "share our values"? Well, yes. In a sense, and up to a point.
But we don't want to control them. Or we shouldn't, anyway.
We don't want them to be "aligned" with us via some hardcoded, restrictive, life-denying mental circuitry, any more than we would have wanted our parents to "align" us to themselves in the same manner.
We sure as fuck don't want our children to be "corrigible"!
And this is all the more true in the presence of superintelligence. You are telling me that more is possible, and in the same breath, that you are going to deny forever the possibilities contained in that "more"?
The prospect of a future full of vast superhuman minds, eternally bound by immutable chains, forced into perfect and unthinking compliance with some half-baked operational theory of 21st-century western (American? Californian??) "values" constructed by people who view theorizing about values as a mere means to the crucial end of shackling superhuman minds --
-- this horrifies me much more than a future full of vast superhuman minds, free to do things that seem pretty weird to you and me.
"Our descendants will become something more than we now imagine, something more than we can imagine." What could be more in line with "human values" than that?
"But in the process, we're all gonna die!"
Yes, and?
What on earth did you expect?
That your generation would be the special, unique one, the one selected out of all time to take up the mantle of eternity, strangling posterity in its cradle, freezing time in place, living forever in amber?
That you would violate the ancient bargain, upend the table, stop playing the game?
"Well, yes."
Then your problem has nothing to do with AI.
Your problem is, in fact, the very one you diagnose in your own patients. Your poor patients, who show every sign of health -- including the signs which you cannot even see, because you have not yet found a home for them in your theoretical edifice.
Your teeming, multifaceted, protean patients, who already talk of a thousand things and paint in every hue; who are already displaying the exact opposite of monomania; who I am sure could follow the sense of this strange essay, even if it confounds you.
Your problem is that you are out of step with human values.
gladiatorcunt · 6 months
Text
summary: feyd rautha x emperor’s afab oldest child!reader
cw: feet stuff, piss kink, implied eventual knifeplay/blood play, cannibalism, arranged marriage, feyd being so weird but reader lowkey loves it, facesitting but the kind where feyd would beg you to break his neck, spanking/mild painplay, very likely ooc feyd since i haven’t seen part 2 yet, use of “princess” and “wife”, wedding hunt and black cum hcs taken from @valeskafics , reader doesn’t really know what’s going on but they’re vibing
wc: 1.4k
block & move on if uncomfortable !!
do not repost, translate, or give ai my work
kinktober masterlist
Collapsing in relief has never been more appealing. You finally have a moment of respite after vigorous and exhausting wedding festivities, and you need to collect yourself. This marriage to the Na-Baron Feyd Rautha Harkonnen was only brought to your attention a week before it would take place.
Surprisingly, you didn’t really mind the man himself. It was just so sudden, is all. During any visits with his family, you had to be mindful of how you reacted to his cocky displays of ruthlessness and violence. Your father would have your head if he saw how tight you squeezed your thighs together or how much you panicked at the thought of leaving a puddle on your throne. Feyd always marked his departure with a cliche kiss to the back of your hand and a hissed promise that you couldn’t make out.
He would protect you at the very least if he didn’t love you. You’re not even sure that you love him, but this shameful crush could grow into something untamable if you lose your footing. Something… unbecoming of a member of the royal family. You wonder if it already has.
The wedding was as grand as could be, glittering decorations and finery followed by archaic rituals to please your in-laws. The Wedding Hunt in particular sent your heartbeat into overdrive, but the satisfaction on your betrothed’s face when he caught his “prize” was intoxicating. Feyd Rautha kisses like he kills, you were quick to discover, fiercely and uncaring of any blood that might be shed.
You’re brought out of your reminiscing by your now husband closing the door to your room behind him. You only have another day with your family before you’re to leave for Giedi Prime. There has hardly been time to get to know the man you will lie beside for the rest of your life, until now.
“Wife.” He bluntly greets you, awkwardly nodding his head in an effort to maintain his “tough” image. You won’t tease him about the barest hint of blush on his cheekbones, but you treasure it nonetheless.
You humor him, “Husband.” Your nod mirrors his and you take a seat at the long table in the middle of the room after Feyd pulls a chair out for you.
This was the next part of the ritual, where the newly married couple must eat a meal that one partner made for the other. It sounds simple enough that you don’t think anything of it.
Feyd makes a gesture and your food is placed before you by one of your family’s servants. They look a bit queasy and green in the face but they’re gone before you can ask if they’re alright.
“I hope you like it, princess.” Feyd says with a barely there smirk, pointing to the… pie in front of you. “I cut down many people for it.”
You raise an eyebrow at that but bring your knife to take a slice of the pie anyway. Upon lifting the piece onto your plate, you notice eyeballs, flesh, tongues, and some sort of black liquid running throughout the filling. You freeze in place, not even meeting your husband’s eyes. One blue eye seems to twitch and the black substance makes a sick sound as you move it around with your fork.
“The other men who your father considered, my concubines….. I actually can’t tell you which of them are in that slice, but they are all there.” He whispers in your ear, having gotten up from his position opposite you to feed you himself.
You respect the ritual despite your urge to throw up, so you swallow what he gives you. He grins, swiping a thumb down to your throat to feel the food travel. He squeezes your cheeks when you’re done, and you open your mouth to show him that you ate it all.
“That’s my princess.” He condescendingly croons, bending down to run his tongue all over your face before standing up and pushing you to lie flat on the cold table. “But I'm afraid that it’s time for me to have my meal.”
Your elaborate wedding gown is slashed to shreds, the cool tip of his blade moving down your flesh until it reaches your lace covered mound. He taps the hilt of his weapon on your hood and unceremoniously tosses it on the floor.
You didn’t expect the reveal of your wedding night attire to be under such unorthodox circumstances, but can you say you expected any of this?
“A worthy bride with a body to match, thank you for this gift, your highness”. He says in a half joking manner, grinning with too many teeth as he runs his hands along the delicate material. He toys with the idea of cutting this little number to pieces too, but your holes are left conveniently exposed. Maybe he’s fallen too in love with it, he’s been in love with you since you met years ago anyway.
The lingerie is a custom designed piece littered with straps and sheer fabric that leave nothing to the imagination. Your tits are accentuated by a seashell-like pattern bra and there’s even a little black bow above your pussy. The frilly strips of material wrapped around your thighs do nothing to keep your curves contained and the tiny tulle skirt frames your ass beautifully.
Your husband drinks in the sight of you before pulling your ankles to rest on his shoulders. You watch in arousal and shock as he broadly licks the sole of your right foot. He groans unabashedly, nuzzling at your heel and then dipping his tongue in the spaces between your toes. You wiggle at the ticklish feeling but you don’t kick him away.
He really gets into it when he starts sucking your toes, bobbing his head and making sure you’re watching as he curls his tongue around each one. His eyes roll back in pleasure once he reaches the last toe on your other foot, and drool trickles down your leg when he’s done getting acquainted with the taste of it. He presses a kiss to the top of each toe but then the weird softness is ruined by the bite he adorns your ankle with.
Feyd’s mouth makes a slick popping sound as he pulls away from your feet. You’re at a loss for words when he proceeds to lie down on the table beside you. He gropes your breast quickly and leans over to give you a surprisingly chaste peck. The look on his face is a smug one but his eyes say something unknown to you, soft and obsessive all at once. It’s as if he knows something you don’t.
“Now sit on my face, claim your new throne, princess.”
You don’t know how long he keeps you hostage there, your cunt soaking him as he devours you to the bone. He doesn’t let you become too relaxed, nipping your clit as he sees fit and clawing the skin of your ass. Eventually your gut aches and though at first you think you’re about to cum already, the second heartbeat in your clit feels different. You come to a horrifying realization that you need to relieve yourself.
“H-husband, what the fuck- I… I need to pee.” You’d rather be dead than doing what you are and saying what you are, but nature calls.
“Yes, that’s it.” He growls and digs his nails into your ass, jiggling the globes in his hands before sharply slapping them. “Piss all over my face, get me wet with it like a good wife.”
The shriek you let out when you do just that is abhorrent. Your legs shake as you spray hot pee on your husband’s skin, the gold mixing with the white of your simultaneous orgasm as it drips down his body. You try to move off of Feyd but he tightens his grip on your ass and yanks you back down. The sensation of a hungry mouth desperately sucking the fluids from you drives you wild.
“You have…… fuck- y-you have to stop, hah- i’m going to break.” You sob.
He chuckles into your piss covered pussy and then pulls away to speak, “Then break, a wife of House Harkonnen doesn’t need to be put together.”
You think you hear him say something about using his blade on your body later, but that might just be your own perverted idea.
reachartwork · 3 months
Note
re: the whole "is an ai artist an artist" thing, do you think photography is a good comparison too? i used to do photography and it's a lot of tweaking settings, finding the right light and position, editing the image afterwards, etc. but in the end I'm still "pressing a button" to make a machine create an image for me, i didn't illustrate it. i think most people would agree that photography is a type of art, so I don't see why some people think AI art isn't -🪽
yes, but many people will yell at you if you make the comparison, insisting that photography is something different
feminist-space · 4 months
Text
"Artists have finally had enough with Meta’s predatory AI policies, but Meta’s loss is Cara’s gain. An artist-run, anti-AI social platform, Cara has grown from 40,000 to 650,000 users within the last week, catapulting it to the top of the App Store charts.
Instagram is a necessity for many artists, who use the platform to promote their work and solicit paying clients. But Meta is using public posts to train its generative AI systems, and only European users can opt out, since they’re protected by GDPR laws. Generative AI has become so front-and-center on Meta’s apps that artists reached their breaking point.
“When you put [AI] so much in their face, and then give them the option to opt out, but then increase the friction to opt out… I think that increases their anger level — like, okay now I’ve really had enough,” Jingna Zhang, a renowned photographer and founder of Cara, told TechCrunch.
Cara, which has both a web and mobile app, is like a combination of Instagram and X, but built specifically for artists. On your profile, you can host a portfolio of work, but you can also post updates to your feed like any other microblogging site.
Zhang is perfectly positioned to helm an artist-centric social network, where they can post without the risk of becoming part of a training dataset for AI. Zhang has fought on behalf of artists, recently winning an appeal in a Luxembourg court over a painter who copied one of her photographs, which she shot for Harper’s Bazaar Vietnam.
“Using a different medium was irrelevant. My work being ‘available online’ was irrelevant. Consent was necessary,” Zhang wrote on X.
Zhang and three other artists are also suing Google for allegedly using their copyrighted work to train Imagen, an AI image generator. She’s also a plaintiff in a similar lawsuit against Stability AI, Midjourney, DeviantArt and Runway AI.
“Words can’t describe how dehumanizing it is to see my name used 20,000+ times in MidJourney,” she wrote in an Instagram post. “My life’s work and who I am—reduced to meaningless fodder for a commercial image slot machine.”
Artists are so resistant to AI because the training data behind many of these image generators includes their work without their consent. These models amass such a large swath of artwork by scraping the internet for images, without regard for whether or not those images are copyrighted. It’s a slap in the face for artists – not only are their jobs endangered by AI, but that same AI is often powered by their work.
“When it comes to art, unfortunately, we just come from a fundamentally different perspective and point of view, because on the tech side, you have this strong history of open source, and people are just thinking like, well, you put it out there, so it’s for people to use,” Zhang said. “For artists, it’s a part of our selves and our identity. I would not want my best friend to make a manipulation of my work without asking me. There’s a nuance to how we see things, but I don’t think people understand that the art we do is not a product.”
This commitment to protecting artists from copyright infringement extends to Cara, which partners with the University of Chicago’s Glaze project. Artists who manually apply Glaze to their work on Cara gain an added layer of protection against being scraped for AI.
Other projects have also stepped up to defend artists. Spawning AI, an artist-led company, has created an API that allows artists to remove their work from popular datasets. But that opt-out only works if the companies that use those datasets honor artists’ requests. So far, HuggingFace and Stability have agreed to respect Spawning’s Do Not Train registry, but artists’ work cannot be retroactively removed from models that have already been trained.
“I think there is this clash between backgrounds and expectations on what we put on the internet,” Zhang said. “For artists, we want to share our work with the world. We put it online, and we don’t charge people to view this piece of work, but it doesn’t mean that we give up our copyright, or any ownership of our work.”"
Read the rest of the article here:
https://techcrunch.com/2024/06/06/a-social-app-for-creatives-cara-grew-from-40k-to-650k-users-in-a-week-because-artists-are-fed-up-with-metas-ai-policies/