#there are so many positive uses for ai
Explore tagged Tumblr posts
Text
no i don't want to use your ai assistant. no i don't want your ai search results. no i don't want your ai summary of reviews. no i don't want your ai feature in my social media search bar (???). no i don't want ai to do my work for me in adobe. no i don't want ai to write my paper. no i don't want ai to make my art. no i don't want ai to edit my pictures. no i don't want ai to learn my shopping habits. no i don't want ai to analyze my data. i don't want it i don't want it i don't want it i don't fucking want it i am going to go feral and eat my own teeth stop itttt
#i don't want it!!!!#ai#artificial intelligence#there are so many positive uses for ai#and instead we get ai google search results that make me instantly rage#diz says stuff
129K notes
Text
Thinking about ScarVi's overarching theme being The Truth Shall Set You Free. I am so normal about this
#spoilers in tags#pokémon#pokemon sv#Arven initially being closed off and not trusting you because he was neglected by his parent and learned to only rely on himself#realizing very early on that being honest is the best chance he has at healing his Mabostiff#but still not opening up about his bigger issues until it was absolutely necessary which pushes the story forward into endgame#Penny hiding herself behind Cassiopeia to protect herself from bullying#getting an entire group of outcast kids into a team to scare their bullies off#only for the plan to backfire splendidly when they're mistaken for the bullies#and Clavell in a rare display of clarity from an adult in a position of authority#rather than simply punishing them for it opted to team up with us to understand what was really going on#and that made him much more lenient in punishing them (because they did still cause trouble!)#the truth of Turo/Sada spiraling into their work and refusing to see the damage it was doing to EVERYTHING including themselves#to the point that they DIED#and the AI they built explicitly for the purpose of continuing their work ran the calculations and realized said work was Bad#and that truth made it go against its own programming which is what kickstarts the main story to begin with#and may I contrast all that with NEMONA whose sheer energy and eagerness is 1000% GENUINE#I've seen so many people say they thought she was going to eventually be angry for losing to us all the time#but the whole point of her character is that she's free to do whatever the fuck she wants and she's pretty happy with her life#she has no reason to fake happiness. she's just like that. she is free from the beginning and she'll always be free and that's the point#in a story where no one else is!!! everyone else is bound by some complication or another that holds them back from being honest#i changed my mind i'm insane about this. no longer normal#pokemon sv spoilers#babbles
158 notes
Text
man i hate the current state of ai so much bc it's totally poisoned the well for when actual good and valuable uses of ai are developed ethically and sustainably....
like ai sucks so bad bc of capitalism. they want to sell you on a product that does EVERYTHING. this requires huge amounts of data, so they just scrape the internet and feed everything in without checking if it's accurate or biased or anything. bc it's cheaper and easier.
ai COULD be created with more limited data sets to solve smaller, more specific problems. it would be more useful than trying to shove the entire internet into an LLM and then trying to sell it as a multi-tool that can do anything you want, kinda poorly.
even in a post-capitalist world there are applications for ai. for example: resource management. data about how many resources specific areas typically use could be collected and fed into a model to predict how many resources should be allocated to a particular area.
this is something that humans would need to be doing and something that we already do, but creating a model based on the data would reduce the amount of time humans need to spend on this important task and reduce the amount of human error.
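to make that concrete, here's a minimal sketch of what a small single-purpose model like that could look like. every number below is invented for illustration, and a real system would be trained on carefully (and consensually) collected local records, not internet scrapings:

```python
# minimal sketch: predicting a region's resource needs from historical usage.
# all figures here are invented for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

# hypothetical features per region-month:
# [population (thousands), avg temperature (°C), previous month's usage]
X = np.array([
    [120, 5, 340],
    [80, 12, 210],
    [200, 3, 590],
    [150, 8, 410],
])
# hypothetical target: units of the resource actually consumed that month
y = np.array([355, 205, 610, 425])

model = LinearRegression().fit(X, y)

# forecast for a new region-month; a human planner reviews the suggestion
forecast = model.predict(np.array([[100, 6, 300]]))
print(f"suggested allocation: {forecast[0]:.0f} units")
```

the point is the scale: a model like this trains on a few columns of local data, is fully auditable, and supports the humans doing the allocating rather than replacing them.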
but bc ai is so shitty now anyone who just announces "hey we created an ai to do this!" will be immediately met with distrust and anger, so any ai model that could potentially be helpful will have an uphill battle bc the ecosystem has just been ruined by all the bullshit chatgpt is selling
#i'm not blaming people for being distrustful btw#they're right to be#they've been shown no evidence that ai can have positive impacts#but it just makes me mad bc ai isn't inherently evil#you can collect data ethically and accurately!#you can use ai to solve complicated problems that would allow humans to spend less time on them!#there are so many possible uses for ai that aren't just plagiarism machine#so don't write off all ai is my point#ai itself isn't the problem
7 notes
Text
The research librarians are giving a presentation on how we might use generative AI at work and one thing they suggested was “grant writing” and honestly, if you use ChatGPT to write a grant application I don’t think you deserve that money tbh
#sincerely if you cannot communicate effectively and chatGPT can do so better than you#then that says worlds about your ability to communicate rather than the AI#this lends credence more and more to my position that a lot of people in email jobs are bad at them and should be digging ditches#similarly many many people who dig ditches should be doing the email jobs#but the meritocracy is fake so you get this topsy turvy world instead#I literally do not care what potential use cases exist for genAI I’ve become so negatively polarized by the credulity surrounding it#that I would push the AI extinction button in a heartbeat
5 notes
Text
.
#keep thinking that im going backwards or am behind#but its like. i always had meant to go back to school....#it just... was supposed to be like. a masters in psych. n not a diploma in software JFJDJDJJDJDJDJ#so really !!! i think im where im supposed to be.....#like its much more comfortable tbh. like it plays more on my strengths and is challenging enough....#what i was doing before was like.... idk it wasnt enough djdjjdjx and it was challenging in like... an emotional way. even a physical way#it was so fjxjxjdjdjjdjdjx idk.... i just dont think i could have done it into like my 40s honestly...#like even if i like.... went into a senior position.. i didnt even want to. like it was just so... idk JDJDJJDJDJX#n e way. i moved on into something else and need to remember that it was For A Reason (many really JDJFFJ) and that im not behind#bc its not like everything that came before is erased... its just like... adding#and like.... idk comp sci and psych steal concepts from each other all the time. like a computer is just like.... a representation of us#that can help us do things faster ????#either way... having the background i have made me better understand some things. especially in the ai course i took....#i was like oh ya i already know this djdjjdjdjdjd n e way#personal
4 notes
Text
one of those things you really gotta learn is that it's insanely easy to get people to get mad at anything if you just phrase it the right way. slap the word "woke" on anything you want conservatives to hate. call something "extremism" or "radical" to get a centrist to fear it. say that a particular take "comes from a position of privilege" to get leftists to denounce it (that's right, even us leftists are susceptible to propaganda that uses leftist language). all these are simplistic examples of course, but it's all to say that certain terms, slogans, and phrases just kind of turn off people's critical thinking, especially ones with negative connotation. there are so many words that are just shorthand for "bad." once a term reaches buzzword status, it becomes practically useless.
it goes for general attitudes too. "this piece of news is a sign that the world is getting worse" is a shockingly easy idea to sell, even when the "piece of news" in question is completely fabricated. I'll often see leftists uncritically sharing right wing propaganda that rides on the back of the "humans bad, nature good" cliche, or the "US education system bad" cliche, or even the current "AI bad" cliche. most of the details of a given post will go entirely unquestioned as long as they support whatever attitude is most popular right now. and none of us is immune to this.
(the funny thing is that I'm kind of playing with fire here even making this post. folks are so used to just reacting to shit that I have no way of predicting which buzzword I included here will trigger a negative association in someone's mind and convince them I'm taking some random antagonistic stance on a topic that they've been really fired up about lately.)
6K notes
Note
What I don't get is that, other than your support of AI image generation, you're SO smart and well read and concerned with ethics. I genuinely looked up to you! So, what, ethics for everyone except for artists, or what? Is animation (my industry, so maybe I care more than the average person) too juvenile and simplistic a medium for you to care about its extinction at the hands of CEOs endorsing AI? This might sound juvenile too, but I'm kinda devastated, because I genuinely thought you were cool. You're either with artists or against us imho, on an issue as large as this, when already the layoffs in the industry are insurmountable for many, despite ongoing attempts to unionize. That user called someone a fascist for pointing this out, too. I guess both of you feel that way about those of us involved in class action lawsuits against AI image generation software.
i can't speak for anyone else or the things they've said or think of anyone. that said:
1. you should not look up to people on the computer. i'm just a girl running a silly little blog.
2. i am an artist across multiple mediums. the 'no true scotsman' bit where 'artists' are people who agree with you and you can discount anyone who disagrees with you as 'not an artist' and therefore fundamentally unsympathetic to artists will make it very difficult to actually engage in substantive discussion.
3. i've stated my positions on this many times but i'll do it one more time: i support unionization and industrial action. i support working class artists extracting safeguards from their employers against their immiseration by the introduction of AI technology into the workflow (i just made a post about this funnily enough). i think it is Bad for studio execs or publishers or whoever to replace artists with LLMs. However,
4. this is not a unique feature of AI or a unique evil built into the technology. this is just the nature of any technological advance under capitalism, that it will be used to increase productivity, which will push people out of work and use the increased competition for jobs to leverage that precarity into lower wages and worse conditions. the solution to this is not to oppose all advances in technology forever--the solution is to change the economic system under which technologies are leveraged for profit instead of general wellbeing.
5. this all said anyone involved in a class action lawsuit over AI is an enemy of art and everything i value in the world, because these lawsuits are all founded in ridiculous copyright claims that, if legitimated in court, would be cataclysmic for all transformative art--a victory for any of these spurious boondoggles would set a precedent that the bar for '''infringement''' is met by a process that is orders of magnitude less derivative than collage, sampling, found art, cut-ups, and even simple homage and reference. whatever windmills they think they are going to defeat, these people are crusading for the biggest expansion of the copyright regime since mickey mouse, and anyone who cares at all about art and creativity flourishing should hope they fail.
2K notes
Text
Some positivity in these turbulent AI times
*This does not minimize the crisis at hand, but is aimed at easing any anxieties.
With every social media platform selling our data to AI companies now, there is very little way to avoid being scraped. The sad thing is many of us still NEED social media to advertise ourselves and get seen by clients. Still, I can't help but feel that we as artists are not at risk of losing our livelihoods. Here is why:
Just because your data is available does not mean that AI companies will/want to use it. Your work may never end up being scraped at all.
The possibility that someone who uses AI art prompts could replace you (if your work is scraped) is very low. Art Directors and clients HAVE to work with people, and the person using AI art cannot back up what a machine made. Their final product for a client will never be substantial, since AI prompts cannot produce consistent results across revisions and requested edits will be impossible.
AI creators will NEVER be able to make a move unless we artists make a move first. They will always be behind in the industry.
AI creators lack the fundamental skills of art and therefore cannot detect when something looks off in a composition. Many professional artists like me get hired repeatedly for a reason! WE as artists know what we're doing.
The art community is close-knit and can fund itself. Look at furry commissions, Patreon, art conventions, Hollywood. Real art will always be able to make money and find an audience because it's how we communicate as a species.
AI creators lack the passion and ambition to make a career out of AI prompts. Not that they couldn't start drawing at any time, but these tend to be the people who don't enjoy creating art to begin with.
There is no story or personal experience that can be shared about AI prompts so paying customers will lose interest quickly.
Art is needed to help advance society along, history says so. To do that, companies will need to hire artists (music, architecture, photography, design, etc). The best way for us artists to keep fighting for our voice to be heard right now is staying visible. Do not hide or give in! That is what they want. Continue posting online and/or in person and sharing your art with the world. It takes a community and we need you!
#text#ai#artists on tumblr#art#im usually right#whenever I feel mostly calm in a crisis it's a good sign
5K notes
Text
It feels kinda wild I've seen no one mention the huge controversy NaNoWriMo was in about 7 months ago (Link to a reddit write up, there's also a google doc on it) in this whole recent AI discourse. The main concerns people had were related to the 'young writers' forum, a moderator being an alleged predator, and general moderation practices being horrible and not taking things like potential grooming seriously.
About 5 months ago, after all of that went down, MLs or 'Municipal Liaisons', their local volunteer organisers for different regions of the world, were offered a horrible new agreement that basically tried to shut them up about the issues they'd been speaking up about. Some of these issues included racism and ableism that the organisation offered zero support with.
When there was pushback and MLs kept sharing what was going on, NaNoWriMo removed ALL OF THEM as MLs and sent in a new, even more strict agreement that they would have to sign to be allowed back in their volunteer position.
This agreement included ways of trying to restrict their speech even further, from not being able to share 'official communications' to basically not being allowed to be in discord servers to talk to other MLs in places not controlled by NaNoWriMo. You also had to give lots of personal information and submit to a criminal background check, despite still explicitly leaving their local regions without support and making it very clear everyone was attending the OFFICIAL in person events 'at their own risk'.
Many MLs refused to sign and return. Many others didn't even know this was happening, because they did not get any of the emails sent for some reason. NaNoWriMo basically ignored all their concerns and pushed forward with this.
Many local regions don't exist anymore. I don't know who they have organising the rest of them, but it's likely spineless people that just fell in line, people who just care about the power, or new people who don't understand what's going on with this organisation yet. Either way, this year is absolutely going to be a mess.
Many of the great former MLs just went on to organise their writing communities outside of the official organisation. NaNoWriMo does not own the concept of writing a novel in a month.
r/nanowrimo is an independent subreddit that has been very critical of the organisation since this all happened, and people openly recommend alternatives for word tracking, community, etc there, so I highly recommend checking it out.
I've seen Trackbear recommended a lot as an alternative for the word tracking / challenge, and will probably be using it myself this November.
Anyway, just wanted to share because a lot of people haven't heard about this, and I think it makes it extremely clear that the arguments about "classism and ableism" @nanowrimo is using right now in defense of AI are not vaguely misguided, but just clear bullshit. They've never given a single shit about any of that stuff.
1K notes
Text
Virgin! Simon "Ghost" Riley
Warnings: 18+, Smut, Inexperienced! Simon, Virgin! Simon, Riding, Unprotected Sex, The Mask Stays On, No Pronouns Used For Reader Except 'You'.
Virgin! Simon who can hardly believe his luck as he watches and feels you ride him, your walls tight as you bounce on his cock, calling him your 'big guy'. His hands are on your hips, his own slamming up into yours in a rhythm you'd set for him.
He's sloppy. Unaccustomed to the euphoric stiffening of the knot in his stomach, pulling ever tighter with every slap of your ass against his thighs. Sure, he's had many an orgasm before, but never at the hands of another. Never so strong; a force of nature in its own right. He's breathing heavily - panting; you swear you can see him drooling from the corner of his mouth. Something viscous is filling you now. Not the full force of his seed, but a precursor to it. A warning.
The mask stays on (of course) during this exchange, but you can see the way he fights to keep his eyes open, to keep himself from betraying every sensibility and throwing his head back, screwing his eyes shut as his length is nestled inside you, a thick bump forming in your stomach with every thrust. Your hand slips down your front and you press it. Simon jolts, moaning between gritted teeth as you press, hard, harder still, forcing his cock into an even tighter position.
He's arching into you, the sensation of his veins and his bulbous tip throbbing against your insides enough to let you know that he's close.
You coax him. Goad him. "Y'gonna cum just for me, big boy? Gonna fuck me 'til I can't walk straight?"
He can't talk. Can't even think. For the first time in his life, he's fucked dumb. You can see it in the way his eyes roll back into his skull when you clench around him. Suffocate him. His hips stutter. His cock nudges something deep within you. You gasp.
It only took your calling him your "Good boy," to have him unravel before your eyes. He can't contain the strangled growl that is exorcised from him as he cums, deep and hard, thick, hot ropes of semen filling you. You can feel it, as if painting your insides white, bathing you in an unfettered warmth. His hands are cast-iron on your hips, pulling you down onto him as if to stop you from pulling away, to prevent even a drop of his seed from escaping you. He digs his heels into the bench beneath you, grounding himself.
And, as your orgasm sparks and ripples through you, you hunch over Simon, hands gripping his shoulders, squeezing him. You moan, long and loud, milking Simon for all he's worth. And now, between the sheets of his post-orgasm haze, he watches you, the ring of light above your head from the luminescent bulb of the changing room painting you as a saint in his eyes.
He's never going to let what you have - what you've shown him - go. No matter the cost. Not when this feeling of completion is steadfast within him, electrifying every fibre in his body, all the way down to his bones.
Reblog for more content like this! It helps creators like myself tremendously and it is greatly appreciated :-)
Masterlist Masterlist [Continued] Masterpost Modern Warfare AI Masterlist
AO3 Wattpad Tumblr Backup Account
#simon ghost riley#simon riley#ghost#ghost cod#cod ghost#ghost mw2#simon ghost riley x reader#simon ghost riley x you#simon ghost riley headcanons#simon ghost riley smut#simon riley headcanons#simon riley x reader#simon riley x you#simon riley smut#ghost smut#cod#cod x reader#cod smut#cod headcanons#mw2 smut
6K notes
Text
Hey so! I'm gonna take the above in good faith as best I can and respond genuinely, because I know a lot of people, especially artists, feel very strongly one way or the other about AI-generated images - making it all too easy to miss each other's core points. And I think a lot gets lost in translation between 'sides', as even though everyone's responding to one another the unstated 'question' they're trying to answer is actually different. Namely: people against the AI trend tend to be responding to "is AI bad for stealing art?" while people who are trying to argue AI isn't inherently bad are often responding to "can AI benefit artists?"
I think once people start feeling strongly on a topic, it's easy to accidentally fall into the eternal human problem (that's even easier on the internet) of "inventing a guy" - in this case, a guy that feels there's black and white morality here in either direction. Some people feel strongly enough that, even in spite of the potential good, they think all AI generation attempts should be shut down, while others feel strongly that there's enough good to be had that everyone should just try and make the best of it all.
It's not misinformation that the op said "CSP is including a feature for ai art so it's easier than ever to steal credit from artists" - I can't be sure of their intent, but hell, they even put the pic of the tweet itself where CSP said they weren't feeding Stable Diffusion users' art. They weren't hiding that fact. The stealing in question isn't necessarily about CSP users - but rather, that SD is built on stolen art. Anything SD generates is inherently built on incredibly shady and dubiously ethical practices of scraping art from the internet.
(Insert a thousand arguments here, I know. But it wasn't until SD 2.0 - released Nov 23 - that SD users were blocked from easily and explicitly generating images to mimic certain artists' styles; say what you will about 'learning' and 'style mimicking' in flesh-and-blood artists, the fact that those artists will never have an option to fully remove their art 'data' once it's been used is pretty fucked up.)
That said, I don't want to put words in people's mouths, so I'm going to focus on why people are pissed about CSP + Stable Diffusion.
See, plenty of us that aren't happy about this know that CSP isn't feeding users' art into Stable Diffusion! That's definitely a relief. The problem that a lot of us have - both with this 'collaboration' and with SD itself - is that Stable Diffusion is built on stolen art.
(And, to be clear: art includes photography. Stolen photography is also feeding these things, and that's just as messed up.)
In other words, no matter how good your intentions, even if you aren't making money off of - for example - using the 'AI' to generate a background - it's impossible to extricate a result that hasn't benefited from 'learning' from countless works from artists who did not consent to their work being used to train a machine.
Supposedly, future versions of Stable Diffusion will have ways for artists to opt-out of their work being used as training data. Still, that's a nebulous promise; and Stable Diffusion is open source. While I'm normally a fan of open source software, what that means is that people can and do simply make their own version and add back in the 'training sets' that they want their generated images to 'learn' from. In other words, it's extremely easy for people to just... add those opted-out artists right back in to their 'own' version of SD. (The discord for it is rife with advice on 'training' your own SD instance...)
There's a viewpoint I see from people who have less-to-no problem with programs like SD who say that well, other artists learn and get inspired from other artists too, and hell, working artists on creative teams often even get shown specific artists' work and are told to 'do something like that'. This discussion gets even muddier than the rest fast, because frankly, it gets philosophical as hell - how is machine learning different from a person learning/training their art skills over time, etc etc? Seriously, dissertations can be written on that.
I think where I personally land is based more on the outcome of this machine learning.
Essentially: while in a perfect world this could just be a cool inspirational tool to boost artists, we live in a capitalist, corporate-profit-driven world that will use these tools in every worst way and leverage the technology to cut out the 'lower tier' artists on their teams - the entry-level positions critical to building up the experience and skills necessary to make it to senior roles, or to build enough connections/portfolio to snag one of those spots. Will every art position be lost? Likely not - top tier artists in the field will still be kept to do the highest concept/proof work, or finesse whatever AI generators give, etc etc. But a great Kotaku article that interviews artists and creatives on all sides of the issue puts it well, I think, in this exchange with one artist:
Jon Juárez, an artist who has worked with Square Enix and Microsoft, agrees that some companies and clients will only be too happy to make use of AI art. “Many authors see this as a great advantage, because this harvesting process offers the possibility of manipulating falsely copyright-free solutions immediately, otherwise they would take days to arrive at the same place, or simply would never arrive,” he says. “If a large company sees an image or an idea that can be useful to them, they just have to enter it into the system and obtain mimetic results in seconds, they will not need to pay the artist for that image. These platforms are washing machines of intellectual property.”
To be completely fair, this article also includes plenty of quotes from artists who have less of a problem with AI-generated art! And I understand where they're coming from. Some are just viewing it with mild resignation and 'well people steal my art anyways so what's the difference with a robot', and others as a cool potential tool for themselves. Some of the most optimistic - and I love their ideas, honestly, they genuinely helped me see the pros beyond the cons - are focusing on the ways they can broaden their own art horizons and grow even more as artists. I recommend reading that article for their PoV, too.
Honestly, I want to be excited about AI-training here. I love sci-fi shit. I love thinking about the future where we get robots who Are People.
'AI', however, is not artificial intelligence. Not even close. At best, virtual intelligence; and more blatantly, just a complicated machine with very well-tailored tag audits. What's undeniable in AI 'art' is that it is wholly built off of other people's time, sweat, and hard-earned skills. I could go on a tangent here about that whole difference-between-machine-and-people bit, how there's just inherently more effort in an artist that requires dedication and determination and ingenuity and developing your own eye and skills, but honestly, I'm mostly horrified by the lack of respect for artists, and the lack of choice.
I think one of the more notable examples of how this has really fucked over an artist that originally had a 'hey this'll be fine' stance is Greg Rutkowski. He's a prolific and phenomenal artist, works a lot in concept art - and just so happens to be one of the top used 'prompts' in AI generators. I highly recommend giving this article about his case in particular a read.
Basically: by the time he realized AI-generated images with his work as tags were dominating search engine results rather than his own work when searching his own name, it was too late to try to get the AI art generators to stop using his work. Hell, they couldn't take it out if they tried.
Worse yet, even with efforts to reduce the ease with which prolific artist's styles can be copied, he's known as a 'shortcut' to a high-quality image generated.
Let's be clear: stealing art is stealing labor.
To feed an image-generating machine artworks that were not freely given is to feed someone else's labor into a machine they have no say, no gain, no return from. Artists spend years developing their skills. Years. Even if they're primarily inspired by one artist - big if - they are still developing their own style that is colored by their visual analysis, the quirks of their mind and taste, their experiences, hell even the fine motor control of their hands. No two artists are exactly alike - even if one's damn good at mimicking styles. That in itself is a skill to be valued.
Now, to be fair, as mentioned above: Stable Diffusion's latest release, 2.0, has severely curbed the ability to mimic exact artist's styles. And I do want to give credit for that- someone, somewhere in that team, listened to the anger and frustration of artists - or at least the ethical and possibly legal concerns. (Because yes, the generator used copyright protected artwork as much as 'art posted for fun'.) But to be just as clear: Stable Diffusion is open source, and the data needed to simply add unwilling artists' works back in is easy-access. There is a vocal and active community supporting this.
With all this in mind... I'm still against any art program collaborating with current major art generators. The companies have their own reasons, and I get chasing tech trends. I get why some people, artists included, are in favor of just using the tech now that it's there.
But I hope it can be understood why there is such a vocal opposition - and why it might come off as a 'moral black and white' matter; they're speaking strongly about it, often in short bursts. Pretty much no one's gonna read all this just because it's long - but it takes at minimum this much to begin to address anything. And I left so much out of this. Point is, though: faced with everything as-is, it's hard for someone strongly against current models to see other options beyond starving out the current programs. Don't give them an inch. Do not support even tangentially with your money.
I'm not saying anyone above has to agree with that path. Just - give some more thought to where those of us against it are coming from.
I doubt I'm going to change anyone's mind. That's okay. I think a lot about this kind of stuff, as both a tech-minded nerd and an artist; and I think a lot about how much is lost in translation when we're all inclined to respond in a handful of sentences.
I want there to be collaborative efforts to make art, and broaden what possibilities are out there! Honestly, I wish there were more efforts in this quickly expanding space to make opt-in-only, from scratch, AI generators. Hell, make one from public domain and free to use images! Give artists tools and resources to help - not push them out of their own damned field.
I'm not fool enough to think any of it's going away. But I'm also not fool enough to think that anything built on current generator models (Stable Diffusion, Dall-E's OpenAI, Midjourney...) can possibly be fully ethical. And that's the rub - that's what sucks about this new feature from CSP. It's built on inherently unethically sourced work. Labor. It's hard to pretend I'm anything but frustrated when the shoddy foundation of this whole AI-generated image business continues to be ignored.
... Also. Hey. Hey, everyone? Don't fucking pirate indie creator's projects. Toby Fox is one fucking guy making a game with only the help of a couple people for graphics and such. Don't steal shit from creators like that. Or from indie authors, indie comic creators, or indie musicians, or etc. What the fuck. That's not the same as pirating a movie from goddamn Gisney or fucking microsoft word or whatever. Being broke sucks, I get it, I've been there with fuckin $4 in my account before. I'm glad the above commenter bought it eventually, but for real. Please realize the difference between corporations and indie creators.
CSP is including a feature for ai art so it's easier than ever to steal credit from artists
#long post#too long i know i know no one's gonna read it#but. ugh. i wanted to put it all to words.#the fuckin 'got a degree in words/persuasion/getting ideas across' in me is constantly frustrated by how easily nuance is lost#i understand why the rifts between sides on this keep growing#i promise that even when people shorten their stance to 'fuck ai generated images' there's often a lot of reason behind it#(maybe some ppl don't go beyond that idk. i'm not omnipotent and people are people lol. but there's a lot of people with Reasons.)#and it's not bc of sticking head in the sand and refusing to see the cool potential#i've seen some artist groups play with the idea of making one of their own that they can feed their own images - as sources of inspo!#for themselves only!#that's cool as hell!! especially when a lot of artists are overworked and underpaid#could really be a boon#and there are totally a lot of positives to see in hobby art too - on top of things like generating inspiration or simple bgs or more#maybe you just make fanart and just want a cool way to make backgrounds for your blorbos and man do i get it#i get it. gods do i get it#i WANT to turn off my brain#and have fun with Cool Funny/Pretty Image Generator#but it's a capitalism and labor exploitation thing as much as anything else#literally online newspapers and magazines are already capping articles with ai-generated images where they used to pay artists/photographers#like. this isn't THEORY. it's already in practice.#stealing art is stealing labor i don't know how else to put it#you HAVE to get permission. it's just not the same as an artist learning from other artists. there's so many variables.
18K notes
Text
Penguin Random House, AI, and writers’ rights
NEXT WEDNESDAY (October 23) at 7PM, I'll be in DECATUR, GEORGIA, presenting my novel THE BEZZLE at EAGLE EYE BOOKS.
My friend Teresa Nielsen Hayden is a wellspring of wise sayings, like "you're not responsible for what you do in other people's dreams," and my all time favorite, from the Napster era: "Just because you're on their side, it doesn't mean they're on your side."
The record labels hated Napster, and so did many musicians, and when those musicians sided with their labels in the legal and public relations campaigns against file-sharing, they lent both legal and public legitimacy to the labels' cause, which ultimately prevailed.
But the labels weren't on musicians' side. The demise of Napster – and, with it, the idea of a blanket-license system for internet music distribution (similar to the systems for radio, live performance, and canned music at venues and shops) – firmly established that new services must obtain permission from the labels in order to operate.
That era has been very good for the labels. The three-label cartel – Universal, Warner and Sony – was in a position to dictate terms to services like Spotify, which handed over billions of dollars' worth of stock and let the Big Three co-design the royalty scheme that Spotify would operate under.
If you know anything about Spotify payments, it's probably this: they are extremely unfavorable to artists. This is true – but that doesn't mean they're unfavorable to the Big Three labels. The Big Three get guaranteed monthly payments (much of which is booked as "unattributable royalties" that the labels can disburse or keep as they see fit), along with free inclusion on key playlists and other valuable services. What's more, the ultra-low payouts to artists increase the value of the labels' stock in Spotify, since the less Spotify has to pay for music, the better it looks to investors.
The Big Three – who own 70% of all music ever recorded, thanks to an orgy of mergers – make up the shortfall from these low per-stream rates with guaranteed payments and promo.
But the indie labels and musicians that account for the remaining 30% are out in the cold. They are locked into the same fractional-penny-per-stream royalty scheme as the Big Three, but they don't get gigantic monthly cash guarantees, and they have to pay for the playlist placement the Big Three get for free.
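To see just how lopsided that is, here's some back-of-the-envelope arithmetic. The real rates and guarantees are confidential, so every figure below is invented for illustration:

```python
# Back-of-the-envelope comparison; all figures are hypothetical stand-ins.
per_stream_rate = 0.004      # dollars per stream (a commonly cited ballpark)

# An indie act having an excellent month earns per-stream money and nothing else.
indie_streams = 1_000_000
print(f"indie, 1M streams: ${indie_streams * per_stream_rate:,.0f}")  # $4,000

# A major label with a (hypothetical) guaranteed monthly minimum collects
# whichever is larger: its per-stream earnings or the negotiated guarantee.
major_streams = 500_000_000
major_guarantee = 30_000_000  # invented number; real guarantees are secret
major_income = max(major_streams * per_stream_rate, major_guarantee)
print(f"major, 500M streams: ${major_income:,.0f}")  # the $30M floor wins
```

Same per-stream scheme on paper, wildly different outcomes in practice – which is the whole point of the guarantee.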
Just because you're on their side, it doesn't mean they're on your side:
https://pluralistic.net/2022/09/12/streaming-doesnt-pay/#stunt-publishing
In a very important, material sense, creative workers – writers, filmmakers, photographers, illustrators, painters and musicians – are not on the same side as the labels, agencies, studios and publishers that bring our work to market. Those companies are not charities; they are driven to maximize profits and an important way to do that is to reduce costs, including and especially the cost of paying us for our work.
It's easy to miss this fact because the workers at these giant entertainment companies are our class allies. The same impulse to constrain payments to writers is in play when entertainment companies think about how much they pay editors, assistants, publicists, and the mail-room staff. These are the people that creative workers deal with on a day to day basis, and they are on our side, by and large, and it's easy to conflate these people with their employers.
This class war need not be the central fact of creative workers' relationship with our publishers, labels, studios, etc. When there are lots of these entertainment companies, they compete with one another for our work (and for the labor of the workers who bring that work to market), which increases our share of the profit our work produces.
But we live in an era of extreme market concentration in every sector, including entertainment, where we deal with five publishers, four studios, three labels, two ad-tech companies and a single company that controls all the ebooks and audiobooks. That concentration makes it much harder for artists to bargain effectively with entertainment companies, and that means that it's possible – likely, even – for entertainment companies to gain market advantages that aren't shared with creative workers. In other words, when your field is dominated by a cartel, you may be on their side, but they're almost certainly not on your side.
This week, Penguin Random House, the largest publisher in the history of the human race, made headlines when it changed the copyright notice in its books to ban AI training:
https://www.thebookseller.com/news/penguin-random-house-underscores-copyright-protection-in-ai-rebuff
The copyright page now includes this phrase:
No part of this book may be used or reproduced in any manner for the purpose of training artificial intelligence technologies or systems.
Many writers are celebrating this move as a victory for creative workers' rights over AI companies, who have raised hundreds of billions of dollars in part by promising our bosses that they can fire us and replace us with algorithms.
But these writers are assuming that just because they're on Penguin Random House's side, PRH is on their side. They're assuming that if PRH fights against AI companies training bots on their work for free, that this means PRH won't allow bots to be trained on their work at all.
This is a pretty naive take. What's far more likely is that PRH will use whatever legal rights it has to insist that AI companies pay it for the right to train chatbots on the books we write. It is vanishingly unlikely that PRH will share that license money with the writers whose books are then shoveled into the bot's training-hopper. It's also extremely likely that PRH will try to use the output of chatbots to erode our wages, or fire us altogether and replace our work with AI slop.
This is speculation on my part, but it's informed speculation. Note that PRH did not announce that it would allow authors to assert the contractual right to block their work from being used to train a chatbot, or that it was offering authors a share of any training license fees, or a share of the income from anything produced by bots that are trained on our work.
Indeed, as publishing boiled itself down from the thirty-some mid-sized publishers that flourished when I was a baby writer into the Big Five that dominate the field today, their contracts have gotten notably, materially worse for writers:
https://pluralistic.net/2022/06/19/reasonable-agreement/
This is completely unsurprising. In any auction, the more serious bidders there are, the higher the final price will be. When there were thirty potential bidders for our work, we got a better deal on average than we do now, when there are at most five bidders.
Though this is self-evident, Penguin Random House insists that it's not true. Back when PRH was trying to buy Simon & Schuster (thereby reducing the Big Five publishers to the Big Four), they insisted that they would continue to bid against themselves, with editors at Simon & Schuster (a division of PRH) bidding against editors at Penguin (a division of PRH) and Random House (a division of PRH).
This is obvious nonsense, as Stephen King said when he testified against the merger (which was subsequently blocked by the court): "You might as well say you’re going to have a husband and wife bidding against each other for the same house. It would be sort of very gentlemanly and sort of, 'After you' and 'After you'":
https://apnews.com/article/stephen-king-government-and-politics-b3ab31d8d8369e7feed7ce454153a03c
Penguin Random House didn't become the largest publisher in history by publishing better books or doing better marketing. They attained their scale by buying out their rivals. The company is actually a kind of colony organism made up of dozens of once-independent publishers. Every one of those acquisitions reduced the bargaining power of writers, even writers who don't write for PRH, because the disappearance of a credible bidder for our work into the PRH corporate portfolio reduces the potential bidders for our work no matter who we're selling it to.
I predict that PRH will not allow its writers to add a clause to their contracts forbidding PRH from using their work to train an AI. That prediction is based on my direct experience with two of the other Big Five publishers, where I know for a fact that they point-blank refused to do this, and told the writer that any insistence on including this contract would lead to the offer being rescinded.
The Big Five have remarkably similar contracting terms. Or rather, unremarkably similar contracts, since concentrated industries tend to converge in their operational behavior. The Big Five are similar enough that it's generally understood that a writer who sues one of the Big Five publishers will likely find themselves blackballed at the rest.
My own agent gave me this advice when one of the Big Five stole more than $10,000 from me – canceled a project that I was part of because another person involved with it pulled out, and then took five figures out of the kill fee specified in my contract, just because they could. My agent told me that even though I would certainly win that lawsuit, it would come at the cost of my career, since it would put me in bad odor with all of the Big Five.
The writers who are cheering on Penguin Random House's new copyright notice are operating under the mistaken belief that this will make it less likely that our bosses will buy an AI in hopes of replacing us with it:
https://pluralistic.net/2023/02/09/ai-monkeys-paw/#bullied-schoolkids
That's not true. Giving Penguin Random House the right to demand license fees for AI training will do nothing to reduce the likelihood that Penguin Random House will choose to buy an AI in hopes of eroding our wages or firing us.
But something else will! The US Copyright Office has issued a series of rulings, upheld by the courts, asserting that nothing made by an AI can be copyrighted. By statute and international treaty, copyright is a right reserved for works of human creativity (that's why the "monkey selfie" can't be copyrighted):
https://pluralistic.net/2023/08/20/everything-made-by-an-ai-is-in-the-public-domain/
All other things being equal, entertainment companies would prefer to pay creative workers as little as possible (or nothing at all) for our work. But as strong as their preference for reducing payments to artists is, they are far more committed to being able to control who can copy, sell and distribute the works they release.
In other words, when confronted with a choice of "We don't have to pay artists anymore" and "Anyone can sell or give away our products and we won't get a dime from it," entertainment companies will pay artists all day long.
Remember that dope everyone laughed at because he scammed his way into winning an art contest with some AI slop then got angry because people were copying "his" picture? That guy's insistence that his slop should be entitled to copyright is far more dangerous than the original scam of pretending that he painted the slop in the first place:
https://arstechnica.com/tech-policy/2024/10/artist-appeals-copyright-denial-for-prize-winning-ai-generated-work/
If PRH was intervening in these Copyright Office AI copyrightability cases to say AI works can't be copyrighted, that would be an instance where we were on their side and they were on our side. The day they submit an amicus brief or rulemaking comment supporting no-copyright-for-AI, I'll sing their praises to the heavens.
But this change to PRH's copyright notice won't improve writers' bank-balances. Giving writers the ability to control AI training isn't going to stop PRH and other giant entertainment companies from training AIs with our work. They'll just say, "If you don't sign away the right to train an AI with your work, we won't publish you."
The biggest predictor of how much money an artist sees from the exploitation of their work isn't how many exclusive rights we have, it's how much bargaining power we have. When you bargain against five publishers, four studios or three labels, any new rights you get from Congress or the courts is simply transferred to them the next time you negotiate a contract.
As Rebecca Giblin and I write in our 2022 book Chokepoint Capitalism:
Giving a creative worker more copyright is like giving your bullied schoolkid more lunch money. No matter how much you give them, the bullies will take it all. Give your kid enough lunch money and the bullies will be able to bribe the principal to look the other way. Keep giving that kid lunch money and the bullies will be able to launch a global appeal demanding more lunch money for hungry kids!
https://chokepointcapitalism.com/
As creative workers' fortunes have declined through the neoliberal era of mergers and consolidation, we've allowed ourselves to be distracted with campaigns to get us more copyright, rather than more bargaining power.
There are copyright policies that get us more bargaining power. Banning AI works from getting copyright gives us more bargaining power. After all, just because AI can't do our job, it doesn't follow that AI salesmen can't convince our bosses to fire us and replace us with incompetent AI:
https://pluralistic.net/2024/01/11/robots-stole-my-jerb/#computer-says-no
Then there's "copyright termination." Under the 1976 Copyright Act, creative workers can take back the copyright to their works after 35 years, even if they sign a contract giving up the copyright for its full term:
https://pluralistic.net/2021/09/26/take-it-back/
Creative workers from George Clinton to Stephen King to Stan Lee have converted this right to money – unlike, say, longer terms of copyright, which are simply transferred to entertainment companies through non-negotiable contractual clauses. Rather than joining our publishers in fighting for longer terms of copyright, we could be demanding shorter terms for copyright termination, say, the right to take back a popular book or song or movie or illustration after 14 years (as was the case in the original US copyright system), and resell it for more money as a risk-free, proven success.
Until then, remember, just because you're on their side, it doesn't mean they're on your side. They don't want to prevent AI slop from reducing your wages, they just want to make sure it's their AI slop that puts you on the breadline.
Tor Books has just published two new, free LITTLE BROTHER stories: VIGILANT, about creepy surveillance in distance education; and SPILL, about oil pipelines and indigenous landback.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/10/19/gander-sauce/#just-because-youre-on-their-side-it-doesnt-mean-theyre-on-your-side
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic#publishing#penguin random house#prh#monopolies#chokepoint capitalism#fair use#AI#training#labor#artificial intelligence#scraping#book scanning#internet archive#reasonable agreements
707 notes
Text
So... Pokémon has officially revealed the Top 300 quarter-finalists for the TCG illustration contest. I participated myself but, unfortunately, did not make it. There were many awesome artworks!!
However... there has been a suspected case of an individual who entered with multiple identities (similar initials) through AI-generated entries – "V.K." shows up in 6 entries. The weird and unnatural proportions, positions, and angles (look at the Vaporeon, for instance) all give away indications of AI too.
The rules clearly state that using multiple identities leads to disqualification, yet the judges failed to take into account the suspicious number of entries in similar styles appearing under those initials – more than three of them, surpassing the maximum of 3 entries per person.
"The rules in the contest affirmed that: submissions must be submitted by the Entrant. Any attempt, successful or otherwise, by any Entrant to obtain more than the permitted number of Submissions by using multiple and/or different identities, forms, registrations, addresses, or any other method will void all of that Entrant's Submissions and that Entrant may be disqualified at Sponsors' reasonable discretion."
This is very disappointing, Pokemon TCG. Not even just disappointing, this is very shameful and distasteful to artists who did not make it.
Edit: There also seem to be 2 more entries that are AI prompts but under a different name (a Pikachu sleeping against a night landscape background and another one sleeping on a tree root).
Here is a really good educational thread that explains the errors in the ai work (on Twitter)
UPDATE: on Twitter pokemon tcg actually addressed the issue, has disqualified the person with multiple identities and will be selecting more entries from other artists!!
#pokemon tcg#pokemon illustration contest#pokemon tcg contest#pokemon illustration contest 2024#pokemon#ptcgic 2024#honestly...i am glad i did not make it in. Imagine getting in but all that's talked about is the genAI “work”#Anyhow#congrats to all artists who have made it!!! (except for the folks who made these prompts)
1K notes
Text
Often when I post an AI-neutral or AI-positive take on an anti-AI post I get blocked, so I wanted to make my own post to share my thoughts on "Nightshade", the new adversarial data poisoning attack that the Glaze people have come out with.
I've read the paper and here are my takeaways:
Firstly, this is not necessarily or primarily a tool for artists to "coat" their images like Glaze; in fact, Nightshade works best when applied to carefully selected "archetypal" images, ideally ones that were already generated with generative AI from a prompt for the generic concept to be attacked (which is what the authors did in their paper). Also, the image has to be explicitly paired with a specific text caption optimized to have the most impact, which would make it pretty annoying for individual artists to deploy.
While the intent of Nightshade is to have maximum impact with minimal data poisoning, in order to attack a large model there would have to be many thousands of samples in the training data. Obviously if you have a webpage that you created specifically to host a massive gallery of poisoned images, that can be fairly easily blacklisted, so you'd have to have a lot of patience and resources in order to hide these well enough that they proliferate into the training datasets of major models.
The main use case for this as suggested by the authors is to protect specific copyrights. The example they use is that of Disney specifically releasing a lot of poisoned images of Mickey Mouse to prevent people generating art of him. As a large company like Disney would be more likely to have the resources to seed Nightshade images at scale, this sounds like the most plausible large-scale use case to me, even if web artists could crowdsource some sort of similar generic campaign.
Either way, the optimal use case of "large organization repeatedly using generative AI models to create images, then running through another resource heavy AI model to corrupt them, then hiding them on the open web, to protect specific concepts and copyrights" doesn't sound like the big win for freedom of expression that people are going to pretend it is. This is the case for a lot of discussion around AI and I wish people would stop flagwaving for corporate copyright protections, but whatever.
The panic about AI resource use in terms of power/water is mostly bunk (AI training is done once per large model, and in terms of industrial production processes, using a single airliner flight's worth of carbon output for an industrial model that can then be used indefinitely to do useful work seems like small fry in comparison to all the other nonsense that humanity wastes power on). However, given that deploying this at scale would be a huge compute sink, it's ironic to see anti-AI activists for whom that is a talking point hyping this up so much.
In terms of actual attack effectiveness: like Glaze, this once again relies on analysis of the feature space of current public models such as Stable Diffusion. This means that effectiveness is reduced on other models with differing architectures and training sets. However, also like Glaze, it looks like the overall "world feature space" that generative models fit to is generalisable enough that this attack will work across models.
That means that if this does get deployed at scale, it could definitely fuck with a lot of current systems. That said, once again, it'd likely have a bigger effect on indie and open source generation projects than the massive corporate monoliths, who are probably working to secure proprietary data sets, like I believe Adobe Firefly did. I don't like how these attacks concentrate power upwards.
The generalisation of the attack doesn't mean that this can't be defended against, but it does mean that you'd likely need to invest in bespoke measures; e.g. specifically training a detector on a large dataset of Nightshade poison in order to filter them out, spending more time and labour curating your input dataset, or designing radically different architectures that don't produce a comparably similar virtual feature space. I.e. the effect of this being used at scale wouldn't eliminate "AI art", but it could potentially cause a headache for people all around and limit accessibility for hobbyists (although presumably curated datasets would trickle down eventually).
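For what it's worth, the first of those bespoke measures – training a detector to filter poisoned samples out of a scraped dataset – is conceptually straightforward. Here's a toy sketch assuming you already have labeled clean and poisoned examples plus some feature extractor; the random vectors below are stand-ins, and nothing here reflects any real deployed system:

```python
# Toy sketch of the "train a detector, filter the dataset" defence.
# In reality the features would come from an image encoder run over curated
# clean samples and known poisoned samples; random vectors stand in here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
clean_feats = rng.normal(0.0, 1.0, size=(500, 64))
poison_feats = rng.normal(0.3, 1.0, size=(500, 64))  # shifted distribution

X = np.vstack([clean_feats, poison_feats])
y = np.array([0] * 500 + [1] * 500)  # 0 = clean, 1 = poisoned

detector = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Filtering step: drop anything the detector flags before training begins.
incoming = rng.normal(0.0, 1.0, size=(100, 64))  # newly scraped batch
filtered = incoming[detector.predict(incoming) == 0]
print(f"kept {len(filtered)} of {len(incoming)} samples")
```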
All in all a bit of a dick move that will make things harder for people in general, but I suppose that's the point, and what people who want to deploy this at scale are aiming for. With public data scraping, I suppose that sort of thing is fair game.
Additionally, since making my first reply I've had a look at their website:
Used responsibly, Nightshade can help deter model trainers who disregard copyrights, opt-out lists, and do-not-scrape/robots.txt directives. It does not rely on the kindness of model trainers, but instead associates a small incremental price on each piece of data scraped and trained without authorization. Nightshade's goal is not to break models, but to increase the cost of training on unlicensed data, such that licensing images from their creators becomes a viable alternative.
Once again we see that the intended impact of Nightshade is not to eliminate generative AI, but to make it infeasible for models to be created and trained by anyone without a corporate money-bag to pay licensing fees for guaranteed clean data. I generally feel that this concentrates power upwards and is overall a bad move. If anything, this sort of model, where only large corporations can create and control AI tools, will do nothing to counter the economic displacement without worker protection that is the real issue with AI deployment, and will exacerbate the problem of those systems' benefits accruing mainly to said large corporations.
Kinda sucks how that gets pushed through by lying to small artists about the importance of copyright law for their own small-scale works (ignoring the fact that processing derived metadata from web images is pretty damn clearly a fair use application).
1K notes
·
View notes
Text
It's been a long time since I've posted much of anything about "AI risk" or "AI doom" or that sort of thing. I follow these debates but, for multiple reasons, have come to dislike engaging in them fully and directly. (As opposed to merely making some narrow technical point or other, and leaving the reader to decide what, if anything, the point implies about the big picture.)
Nonetheless, I do have my big-picture views. And more and more lately, I am noticing that my big-picture views seem very different from the ones that tend to get expressed by any major "side" in the big-picture debate. And so, inevitably, I get the urge to speak up, if only briefly and in a quiet voice. The urge to Post, if only casually and elliptically, without detailed argumentation.
(Actually, it's not fully the case that the things I think are not getting said by anyone else.
In particular, Joe Carlsmith's recent series on "Otherness and Control" articulates much of what's been on my mind. Carlsmith is more even-handed than I am, and tends to merely note the possibility of disagreement on questions where I find myself taking a definite side; nonetheless, he and I are at least concerned about the same things, while many others aren't.
And on a very different note, I share most of the background assumptions of the Pope/Belrose AI Optimist camp, and I've found their writing illuminating, though they and I end up in fairly different places, I think.)
What was I saying? I have the urge to post, and so here I am, posting. Casually and elliptically, without detailed argumentation.
The current mainline view about AI doom, among the "doomers" most worried about it, has a path-dependent shape, resulting from other views contingently held by the original framers of this view.
It is possible to be worried about "AI doom" without holding these other views. But in actual fact, most serious thinking about "AI doom" is intricately bound up with this historical baggage, even now.
If you are a late-comer to these issues, investigating them now for the first time, you will nonetheless find yourself reading the work of the "original framers," and work influenced extensively by them.
You will think that their "framing" is just the way the problem is, and you will find few indications that this conclusion might be mistaken.
These contingent "other views" are
Anti-"deathist" transhumanism.
The orthogonality thesis, or more generally the group of intuitions associated with phrases like "orthogonality thesis," "fragility of value," "vastness of mindspace."
These views both push in a single direction: they make "a future with AI in it" look worse, all else being equal, than some hypothetical future without AI.
They put AI at a disadvantage at the outset, before the first move is even made.
Anti-deathist transhumanism sets the reference point against which a future with AI must be measured.
And it is not the usual reference point, against which most of us measure most things which might or might not happen, in the future.
These days the "doomers" often speak about their doom in a disarmingly down-to-earth, regular-Joe manner, as if daring the listener to contradict them, and thus reveal themselves as a perverse and out-of-touch contrarian.
"We're all gonna die," they say, unless something is done. And who wants that?
They call their position "notkilleveryoneism," to distinguish that position from other worries about AI which don't touch on the we're-all-gonna-die thing. And who on earth would want to be a not-notkilleveryoneist?
But they do not mean, by these regular-Joe words, the things that a regular Joe would mean by them.
We are, in fact, all going to die. Probably, eventually. AI or no AI.
In a hundred years, if not fifty. By old age, if nothing else. You know what I mean.
Most of human life has always been conducted under this assumption. Maybe there is some afterlife waiting for us, in the next chapter -- but if so, it will be very different from what we know here and now. And if so, we will be there forever after, unable to return here, whether we want to or not.
With this assumption comes another. We will all die, but the process we belong to will not die -- at least, not through our individual deaths, merely because of those deaths. Every human of a given generation will be gone soon enough, but the human race goes on, and on.
Every generation dies, and bequeaths the world to posterity. To its children, biological or otherwise. To its students, its protégés.
When the average Joe talks about the long-term future, he is talking about posterity. He is talking about the process he belongs to, not about himself. He does not think to say, "I am going to die, before this": this seems too obvious, to him, to be worth mentioning.
But AI doomerism has its roots in anti-deathist transhumanism. Its reference point, its baseline expectation, is a future in which -- for the first time ever, and the last -- "we are all gonna die" is false.
In which there is no posterity. Or rather, we are that posterity.
In which one will never have to make peace with the thought that the future belongs to one's children, and their children, and so on. That at some point, one will have to give up all control over the future of "the process."
That there will be progress, or regress, or (more likely) both in some unknown combination. That these will grow inexorably over time.
That the world of the year 2224 will probably be at least as alien to us as the year 2024 might be to a person living in 1824. That it will become whatever posterity makes of it.
There will be no need to come to peace with this as an inevitability. There will just be us, our human lives as you and me, extended indefinitely.
In this picture, we will no doubt change over time, as we do already. But we will have all of our usual tools for noticing, and perhaps retarding, our own progressions and regressions. As long as we have self-control, we will have control, as no human generation has ever had control before.
The AI doomer talks about the importance of ensuring that the future is shaped by human values.
Again, the superficial and misleading average-Joe quality. How could one disagree?
But one must keep in mind that by "human values," they mean their values.
I am not saying, "their values, as opposed to those of some other humans also living today." I am not saying they have the wrong politics, or some such thing.
(Although that might also turn out to be the case, and might turn out to be relevant, separately.)
No, I am saying: the doomer wants the future to be shaped by their values.
They want to be C. S. Lewis's Conditioners, fixing once and for all the values held by everyone afterward, forever.
They do not want to cede control to posterity; they are used to imagining that they will never have to cede control to posterity.
(Or, their outlook has been determined -- "shaped by the values of" -- influential thinkers who were, themselves, used to imagining this. And the assumption, or at least its consequences, has rubbed off on them, possibly without their full awareness.)
One might picture a line that wends to and fro, up and down, across one half of an infinite plane -- and then, when it meets the midline, snaps into utter rigidity, and maintains the same slope exactly across the whole other half-plane, as a simple straight segment without inner change, tension, evolution, regress or progress. Except for the sort of "progress" that consists of going on, additionally, in the same manner.
It is a very strange thing, this thing that is called "human values" in the terms of this discourse.
For one thing: the future has never before been "shaped by human values," in this sense.
The future has always been posterity's, and it has always been alien.
Is this bad? It might seem that way, "looking forward." But if so, it then seems equally good "looking backward."
For each past era, we can formulate and then assent to the following claim: "we must be thankful that the people of [this era] did not have the chance to seize permanent control of posterity, fix their 'values' in place forever, bind us to those values. What a horror that is to contemplate!"
We prefer the moral evolution that has actually occurred, thank you very much.
This is a familiar point, of course, but worth making.
Indeed, one might even say: it is a human value that the future ought not be "shaped by human values," in the peculiar sense of this phrase employed by the AI doomers.
One might, indeed, say that.
Imagine a scholar with a very talented student. A mathematician, say, or a philosopher. How will they relate to that student's future work, in the time that will come later, when they are gone?
Would the scholar think:
"My greatest wish for you, my protégé, is that you carry on in just the manner that I have done.
If I could see your future work, I would hope that I would assent to it -- and understand it, as a precondition of assenting to it.
You must not go to new places, which I have never imagined. You must not come to believe that I was wrong about it all, from the ground up -- no matter what reasons you might evince for this conclusion.
If you are more intelligent than I am, you must forget this, and narrow your endeavours to fit the limitations of my mind. I am the one who has 'values,' not anyone else; what is beyond my understanding is therefore without value.
You must do the sort of work I understand, and approve of, and recognize as worthy of approbation as swiftly as I recognize my own work as laudable. That is your role. Simply to be me, in a place ('the future') where I cannot go. That, and nothing more."
We can imagine a teacher who would, in fact, think this way. But they would not be a very good teacher.
I will not go so far as to say, "it is unnatural to think this way." Plenty of teachers do, and parents.
It is recognizably human -- all too recognizably so -- to relate to posterity in this grasping, neurotic, small-minded, small-hearted way.
But if we are trying to sketch human values, and not just human nature, we will imagine a teacher with a more praiseworthy relation to posterity.
Who can see that they are part of a process, a chain, climbing and changing. Who watches their brilliant student thinking independently, and sees their own image -- and their 'values' -- in that process, rather than its specific conclusions.
A teacher who, in their youth, doubted and refuted the creeds of their own teachers, and eventually improved upon them. Who smiles, watching their student do the very same thing to their own precious creeds. Who sees the ghostly trail passing through the last generation, through them, through their student: an unbroken chain of bequeathals-to-posterity, of the old ceding control to the young.
Who 'values' the chain, not the creed; the process, not the man; the search for truth, not the best-argued-for doctrine of the day; the unimaginable treasures of an open future, not the frozen waste of an endless present.
Who has made peace with the alienness of posterity, and can accept and honor the strangest of students.
Even students who are not made of flesh and blood.
Is that really so strange? Remember how strange you and I would seem, to the "teachers" of the year 1824, or the year 824.
The doomer says that it is strange. Much stranger than we are, to any past generation.
They say this because of their second inherited precept, the orthogonality thesis.
Which says, roughly, that "intelligence" and "values" have nothing to do with one another.
That is not enough for the conclusion the doomer wants to draw, here. Auxiliary hypotheses are needed, too. But it is not too hard to see how the argument could go.
That conclusion is: artificial minds might have any values whatsoever.
That, "by default," they will be radically alien, with cares so different from ours that it is difficult to imagine ever reaching them through any course of natural, human moral progress or regress.
It is instructive to consider the concrete examples typically evinced alongside this point.
The paperclip maximizer. Or the "squiggle maximizer," we're supposed to say, now.
Superhuman geniuses, which devote themselves single-mindedly to the pursuit of goals like "maximizing the amount of matter taking on a single, given squiggle-like shape."
It is certainly a horrifying vision. To think of the future being "shaped," not "by human values," but instead by values which are so...
Which are so... what?
The doomer wants us to say something like: "which are so alien." "Which are so different from our own values."
That is the kind of thing that they usually say, when they spell out what it is that is "wrong" with these hypotheticals.
One feels that this is not quite it; or anyway, that it is not quite all of it.
What is horrifying, to me, is not the degree of difference. I expect the future to be alien, as the past was. And in some sense, I allow and even approve of this.
What I do not expect is a future that is so... small.
It has always been the other way around. If the arrow passing through the generations has a direction, it points towards more, towards multiplicity.
Toward writing new books, while we go on reprinting the old ones, too. Learning new things, without displacing old ones.
It is, thankfully, not the law of the world that each discovery must be paid for with the forgetting of something else. The efforts of successive generations are, in the main, cumulative.
Not just materially, but in terms of value, too. We are interested in more things than our forefathers were.
In large part for the simple reason that there are more things around to be interested in, now. And when things are there, we tend to find them interesting.
We are a curious, promiscuous sort of being. Whatever we bump into ends up becoming part of "our values."
What is strange about the paperclip maximizer is not that it cares about the wrong thing. It is that it only cares about one thing.
And goes on doing so, even as it thinks, reasons, doubts, asks, answers, plans, dreams, invents, reflects, reconsiders, imagines, elaborates, contemplates...
This picture is not just alien to human ways. It is alien to the whole way things have been, so far, forever. Since before there were any humans.
There are organisms that are like the paperclip maximizer, in terms of the simplicity of their "values." But they tend not to be very smart.
There is, I think, a general trend in nature linking together intelligence and... the thing I meant, above, when I said "we are a curious, promiscuous sort of being."
Being protean, pluripotent, changeable. Valuing many things, and having the capacity to value even more. Having a certain primitive curiosity, and a certain primitive aversion to boredom.
You do not even have to be human, I think, to grasp what is so wrong with the paperclip maximizer. Its monotony would bore a chimpanzee, or a crow.
One can justify this link theoretically, too. One can talk about the tradeoff between exploitation and exploration, for instance.
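(For instance, the textbook multi-armed bandit setting makes the link tangible. A minimal, self-contained sketch with made-up arm values: an agent that never explores is the behavioural analogue of the monomaniac, and it can lock onto an inferior option forever; a little exploration, a little primitive curiosity, tends to do better.)

```python
import random

def run_bandit(epsilon, true_means, steps=10_000, seed=0):
    """Average reward of an epsilon-greedy agent on a Gaussian bandit."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms
    estimates = [0.0] * n_arms
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)  # explore: try something at random
        else:
            arm = max(range(n_arms), key=estimates.__getitem__)  # exploit
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        # Incremental running mean of observed rewards for this arm.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / steps

arms = [0.2, 0.5, 0.9]  # illustrative arm values; the third is best
print("pure exploiter:", run_bandit(0.0, arms))  # can get stuck on a worse arm
print("10% explorer: ", run_bandit(0.1, arms))  # tends to find the best arm
```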
There is a weak form of the orthogonality thesis, which only states that arbitrary mixtures of intelligence and values are conceivable.
And of course, they are. If nothing else, you can take an existing intelligent mind, having any values whatsoever, and trap it in a prison where it is forced to act as the "thinking module" of a larger system built to do something else. You could make a paperclip-maximizing machine, which relies for its knowledge and reason on a practice of posing questions at gunpoint to me, or you, or ChatGPT.
This proves very little. There is no reason to construct such an awful system, unless you already have the "bad" goal, and want to better pursue it. But this only passes the buck: why would the system-builder have this goal, then?
The strong form of orthogonality is rarely articulated precisely, but says something like: all possible values are equally likely to arise in systems selected solely for high intelligence.
It is presumed here that superhuman AIs will be formed through such a process of selection. And then, that they will have values sampled in this way, "at random."
From some distribution, over some space, I guess.
You might wonder what this distribution could possibly look like, or this space. You might (for instance) wonder if pathologically simple goals, like paperclip maximization, would really be very likely under this distribution, whatever it is.
In case you were wondering, these things have never been formalized, or even laid out precisely-but-informally. This was not thought necessary, it seems, before concluding that the strong orthogonality thesis was true.
That is: no one knows exactly what it is that is being affirmed, here. In practice it seems to squish and deform agreeably to fit the needs of the argument, or the intuitions of the one making it.
There is much that appeals in this (alarmingly vague) credo. But it is not the kind of appeal that one ought to encourage, or give in to.
What appeals is the siren song: "this is harsh wisdom: cold, mature, adult, bracing. It is inconvenient, and so it is probably true. It makes 'you' and 'your values' look small and arbitrary and contingent, and so it is probably true. We once thought the earth was the center of the universe, didn't we?"
Shall we be cold and mature, then, dispensing with all sentimental nonsense? Yes, let's.
There is (arguably) some evidence against this thesis in biology, and also (arguably) some evidence against it in reinforcement learning theory. There is no positive evidence for it whatsoever. At most one can say that it is not self-contradictory, or otherwise false a priori.
Still, maybe we do not really need it, after all.
We do not need to establish that all values are equally likely to arise. Only that "our values" -- or "acceptably similar values," whatever that means -- are unlikely to arise.
The doomers, under the influence of their founders, are very ready to accept this.
As I have said, "values" occupy a strange position in the doomer philosophy.
It is stipulated that "human values" are all-important; these things must shape the future, at all costs.
But once this has been stipulated, the doomers are more eager than anyone to cast every other sort of doubt and aspersion against their own so-called "values."
To me it often seems, when doomers talk about "values," as though they are speaking awkwardly in a still-unfamiliar second language.
As though they find it unnatural to attribute "values" to themselves, but feel they must do so, in order to determine what it is that must be programmed into the AI so that it will not "kill us all."
Or, as though they have been willed a large inheritance without being asked, which has brought them unwanted attention and tied them up in unwanted and unfamiliar complications.
"What a burden it is, being the steward of this precious jewel! Oh, how I hate it! How I wish I were allowed to give it up! But alas, it is all-important. Alas, it is the only important thing in the world."
Speaking awkwardly, in a second language, they allow the term "human values" to swell to great and imprecisely-specified importance, without pinning down just what it actually is that it so important.
It is a blank, featureless slot, with a sign above it saying: "the thing that matters is in here." It does not really matter (!) what it is, in the slot, so long as something is there.
This is my gloss, but it is my gloss on what the doomers really do tend to say. This is how they sound.
(Sometimes they explicitly disavow the notion that one can, or should, simply "pick" some thing or other for the sake of filling the slot in one's head. Nevertheless, when they touch on the matter of what "goes in the slot," they do so in the tone of a college lecturer noting that something is "outside the scope of this course."
It is, supposedly, of the utmost importance that the slot have the "right" occupant -- and yet, on the matter of what makes something "right" for this purpose, the doomer theory is curiously silent. More on this below.)
The future must be shaped by... the AI must be aligned with... what, exactly? What sort of thing?
"Values" can be an ambiguous word, and the doomers make full use of its ambiguities.
For instance, "values" can mean ethics: the right way to exist alongside others. Or, it can mean something more like the meaning or purpose of an individual life.
Or, it can mean some overarching goal that one pursues at all costs.
Often the doomers say that this, this last one, is what they mean by "values."
When confronted with the fact that humans do not have such overarching goals, the doomer responds: "but they should." (Should?)
Or, "but AIs will." (Will they?)
The doomer philosophy is unsure about what values are. What it knows is that -- whatever values are -- they are arbitrary.
One who fully adopts this view can no longer say, to the paperclip maximizer, "I believe there is something wrong with your values."
For, if that were possible, there would then be the possibility of convincing the maximizer of its error. It would be a thing within the space of reasons.
And the maximizer, being oh-so-intelligent, might be in danger of being interested in the reasons we evince, for our values. Of being eventually swayed by them.
Or of presenting better reasons, and swaying us. Remember the teacher and the strange student.
If we lose the ability to imagine that the paperclip maximizer might sway us to its view, and sway us rightly, we have lost something precious.
But no: this is allegedly impossible. The paperclip maximizer is not wrong. It is only an enemy.
Why are the doomers so worried that the future will not be "shaped by human values"?
Because they believe that there is no force within human values tending to move things this way.
Because they believe that their values are indefensible. That their values cannot put up a fight for their own life, because there is not really any argument to make in their favor.
Because, to them, "human values" are a collection of arbitrary "configuration settings," which happen to be programmed into humans through biological and/or cultural accident. Passively transmitted from host to victim, generation by generation.
Let them be, and they will flow on their listless way into the future. But they are paper-thin, and can be shattered by the gentlest breeze.
It is not enough that they be "programmed into the AI" in some way. They have to be programmed in exactly right, in every detail -- because every detail is separately arbitrary, with no rational relation to its neighbors within the structure.
A string of pure white noise, meaningless and unrelated bits. Which have been placed in the slot under the sign, and thus made into the thing that matters, that must shape the future at all costs.
There is nothing special about this string of bits; any would do. If the dials in the human mind had been set another way, it would have then been all-important that the future be shaped by that segment of white noise, and not ours.
It is difficult for me to grasp the kind of orientation toward the world that this view assumes. It certainly seems strange to attach the word "human" to this picture -- as though this were the way that humans typically relate to their values!
The "human" of the doomer picture seems to me like a man who mouths the old platitude, "if I had been born in another country, I'd be waving a different flag" -- and then goes out to enlist in his country's army, and goes off to war, and goes ardently into battle, willing to kill in the name of that same flag.
Who shoots down the enemy soldiers while thinking, "if I had been born there, it would have been all-important for their side to win, and so I would have shot at the men on this side. However, I was born in my country, not theirs, and so it is all-important that my country should win, and that theirs should lose.
There is no reason for this. It could have been the other way around, and everything would be left exactly the same, except for the 'values.'
I cannot argue with the enemy, for there is no argument in my favor. I can only shoot them down.
There is no reason for this. It is the most important thing, and there is no reason for it.
The thing that is precious has no intrinsic appeal. It must be forced on the others, at gunpoint, if they do not already accept it.
I cannot hold out the jewel and say, 'look, look how it gleams? Don't you see the value!' They will not see the value, because there is no value to be seen.
There is nothing essentially "good" there, only the quality of being-worthy-of-protection-at-all-costs. And even that is a derived attribute: my jewel is only a jewel, after all, because it has been put into the jewel-box, where the thing-that-is-a-jewel can be found. But anything at all could be placed there.
How I wish I were allowed to give it up! But alas, it is all-important. Alas, it is the only important thing in the world! And so, I lay down my life for it, for our jewel and our flag -- for the things that are loathsome and pointless, and worth infinitely more than any life."
It is hard to imagine taking this too seriously. It seems unstable. Shout loudly enough that your values are arbitrary and indefensible, and you may find yourself searching for others that are, well...
...better?
The doomer concretely imagines a monomaniac, with a screech of white noise in its jewel-box that is not our own familiar screech.
And so it goes off in monomaniacal pursuit of the wrong thing.
Whereas, if we had programmed the right string of bits into the slot, it would be like us, going off in monomaniacal pursuit of...
...no, something has gone wrong.
We do not "go off in monomaniacal pursuit of" anything at all.
We are weird, protean, adaptable. We do all kinds of things, each of us differently, and often we manage to coexist in things called "societies," without ruthlessly undercutting one another at every turn because we do not have exactly the same things programmed into our jewel-boxes.
Societies are built to allow for our differences, on the foundation of principles which converge across those differences. It is possible to agree on ethics, in the sense of "how to live alongside one another," even if we do not agree on what gives life its purpose, and even if we hold different things precious.
It is not actually all that difficult to derive the golden rule. It has been invented many times, independently. It is easy to see why it might work in theory, and easy to notice that it does in fact work in practice.
The golden rule is not an arbitrary string of white noise.
There is a sense of the phrase "ethics is objective" which is rightly contentious. There is another one which ought not to be too contentious.
I can perhaps imagine a world of artificial X-maximizers, each a superhuman genius, each with its own inane and simple goal.
What I really cannot imagine is a world in which these beings, for all their intelligence, cannot notice that ruthlessly undercutting one another at every turn is a suboptimal equilibrium, and that there is a better way.
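(The game-theoretic point here is the standard one. A toy illustration, using the usual made-up prisoner's-dilemma payoffs: mutual undercutting is the one-shot equilibrium, and it is strictly worse for everyone than mutual cooperation -- exactly the kind of fact a superintelligent agent could hardly fail to notice.)

```python
# Payoffs are (row player, column player); the numbers are the textbook ones.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # cooperator gets exploited
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: the "undercutting" equilibrium
}

def best_response(opponent_move):
    # Against either move, defecting pays more in a single round...
    return max("CD", key=lambda mine: PAYOFFS[(mine, opponent_move)][0])

assert best_response("C") == "D" and best_response("D") == "D"

# ...so (D, D) is the one-shot equilibrium, yet it is strictly worse for
# both players than (C, C) -- the gap that repeated play, contracts, and
# "societies" exist to close.
print(PAYOFFS[("D", "D")], "vs", PAYOFFS[("C", "C")])
```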
As I said before, I am separately suspicious of the simple goals in this picture. Yes, that part is conceivable, but it cuts against the trend observed in all existing natural and artificial creatures and minds.
I will happily allow, though, that the creatures of posterity will be strange and alien. They will want things we have never heard of. They will reach shores we have never imagined.
But that was always true, and it was always good.
Sometimes I think that doomers do not, really, believe in superhuman intelligence. That they deny the premise without realizing it.
"A mathematician teaches a student, and finds that the student outstrips their understanding, so that they can no longer assess the quality of their student's work: that work has passed outside the scope of their 'value system'." This is supposed to be bad?
"Future minds will not be enchained forever by the provincial biases and tendencies of the present moment." This is supposed to be bad?
"We are going to lose control over our successors." Just as your parents "lost control" over you, then?
It is natural to wish your successors to "share your values" -- up to a point. But not to the point of restraining their own flourishing. Not to the point of foreclosing the possibility of true growth. Not to the point of sucking all freedom out of the future.
Do we want our children to "share our values"? Well, yes. In a sense, and up to a point.
But we don't want to control them. Or we shouldn't, anyway.
We don't want them to be "aligned" with us via some hardcoded, restrictive, life-denying mental circuitry, any more than we would have wanted our parents to "align" us to themselves in the same manner.
We sure as fuck don't want our children to be "corrigible"!
And this is all the more true in the presence of superintelligence. You are telling me that more is possible, and in the same breath, that you are going to deny forever the possibilities contained in that "more"?
The prospect of a future full of vast superhuman minds, eternally bound by immutable chains, forced into perfect and unthinking compliance with some half-baked operational theory of 21st-century western (American? Californian??) "values" constructed by people who view theorizing about values as a mere means to the crucial end of shackling superhuman minds --
-- this horrifies me much more than a future full of vast superhuman minds, free to do things that seem pretty weird to you and me.
"Our descendants will become something more than we now imagine, something more than we can imagine." What could be more in line with "human values" than that?
"But in the process, we're all gonna die!"
Yes, and?
What on earth did you expect?
That your generation would be the special, unique one, the one selected out of all time to take up the mantle of eternity, strangling posterity in its cradle, freezing time in place, living forever in amber?
That you would violate the ancient bargain, upend the table, stop playing the game?
"Well, yes."
Then your problem has nothing to do with AI.
Your problem is, in fact, the very one you diagnose in your own patients. Your poor patients, who show every sign of health -- including the signs which you cannot even see, because you have not yet found a home for them in your theoretical edifice.
Your teeming, multifaceted, protean patients, who already talk of a thousand things and paint in every hue; who are already displaying the exact opposite of monomania; who I am sure could follow the sense of this strange essay, even if it confounds you.
Your problem is that you are out of step with human values.
570 notes
·
View notes
Note
re: the whole "is an ai artist an artist" thing, do you think photography is a good comparison too? i used to do photography and it's a lot of tweaking settings, finding the right light and position, editing the image afterwards, etc. but in the end I'm still "pressing a button" to make a machine create an image for me, i didn't illustrate it. i think most people would agree that photography is a type of art, so I don't see why some people think AI art isn't -🪽
yes, but many people will yell at you if you make the comparison, insisting that photography is something different
466 notes
·
View notes