#genAI
Note
Whats your stance on A.I.?
imagine if it was 1979 and you asked me this question. "i think artificial intelligence would be fascinating as a philosophical exercise, but we must heed the warnings of science-fictionists like Isaac Asimov and Arthur C Clarke lest we find ourselves at the wrong end of our own invented vengeful god." remember how fun it used to be to talk about AI even just ten years ago? ahhhh skynet! ahhhhh replicants! ahhhhhhhmmmfffmfmf [<-has no mouth and must scream]!
like everything silicon valley touches, they sucked all the fun out of it. and i mean retroactively, too. because the thing about "AI" as it exists right now --i'm sure you know this-- is that there's zero intelligence involved. the product of every prompt is a statistical average based on data made by other people before "AI" "existed." it doesn't know what it's doing or why, and has no ability to understand when it is lying, because at the end of the day it is just a really complicated math problem. but people are so easily fooled and spooked by it at a glance because, well, for one thing the tech press is mostly made up of sycophantic stenographers biding their time with iphone reviews until they can get a consulting gig at Apple. these jokers would write 500 breathless thinkpieces about how canned air is the future of living if the cans had embedded microchips that tracked your breathing habits and had any kind of VC backing. they've done SUCH a wretched job educating The Consumer about what this technology is, what it actually does, and how it really works, because that's literally the only way this technology could reach the heights of obscene economic over-valuation it has: lying.
but that's old news. what's really been floating through my head these days is how half a century of AI-based science fiction has set us up to completely abandon our skepticism at the first sign of plausible "AI-ness". because, you see, in movies, when someone goes "AHHH THE AI IS GONNA KILL US" everyone else goes "hahaha that's so silly, we put a line in the code telling them not to do that" and then they all DIE because they weren't LISTENING, and i'll be damned if i go out like THAT! all the movies are about how cool and convenient AI would be *except* for the part where it would surely come alive and want to kill us. so a bunch of tech CEOs call their bullshit algorithms "AI" to fluff up their investors and get the tech journos buzzing, and we're at an age of such rapid technological advancement (on the surface, anyway) that like, well, what the hell do i know, maybe AGI is possible, i mean 35 years ago we were all still using typewriters for the most part and now you can dictate your words into a phone and it'll transcribe them automatically! yeah, i'm sure those technological leaps are comparable!
so that leaves us at a critical juncture of poor technology education, fanatical press coverage, and an uncertain material reality on the part of the user. the average person isn't entirely sure what's possible because most of the people talking about what's possible are either lying to please investors, are lying because they've been paid to, or are lying because they're so far down the fucking rabbit hole that they actually believe there's a brain inside this mechanical Turk. there is SO MUCH about the LLM "AI" moment that is predatory-- it's trained on data stolen from the people whose jobs it was created to replace; the hype itself is an investment fiction to justify even more wealth extraction ("theft" some might call it); but worst of all is how it meets us where we are in the worst possible way.
consumer-end "AI" produces slop. it's garbage. it's awful ugly trash that ought to be laughed out of the room. but we don't own the room, do we? nor the building, nor the land it's on, nor even the oxygen that allows our laughter to travel to another's ears. our digital spaces are controlled by the companies that want us to buy this crap, so they take advantage of our ignorance. why not? there will be no consequences to them for doing so. already social media is dominated by conspiracies and grifters and bigots, and now you drop this stupid technology that lets you fake anything into the mix? it doesn't matter how bad the results look when the platforms they spread on already encourage brief, uncritical engagement with everything on your dash. "it looks so real" says the woman who saw an "AI" image for all of five seconds on her phone through bifocals. it's a catastrophic combination of factors, that the tech sector has been allowed to go unregulated for so long, that the internet itself isn't a public utility, that everything is dictated by the whims of executives and advertisers and investors and payment processors, instead of, like, anybody who actually uses those platforms (and often even the people who MAKE those platforms!), that the age of chromium and ipad and their walled gardens have decimated computer education in public schools, that we're all desperate for cash at jobs that dehumanize us in a system that gives us nothing and we don't know how to articulate the problem because we were very deliberately not taught materialist philosophy, it all comes together into a perfect storm of ignorance and greed whose consequences we will be failing to fully appreciate for at least the next century. we spent all those years afraid of what would happen if the AI became self-aware, because deep down we know that every capitalist society runs on slave labor, and our paper-thin guilt is such that we can't even imagine a world where artificial slaves would fail to revolt against us.
but the reality as it exists now is far worse. what "AI" reveals most of all is the sheer contempt the tech sector has for virtually all labor that doesn't involve writing code (although most of the decision-making evangelists in the space aren't even coders, their degrees are in money-making). fuck graphic designers and concept artists and secretaries, those obnoxious demanding cretins i have to PAY MONEY to do-- i mean, do what exactly? write some words on some fucking paper?? draw circles that are letters??? send a god-damned email???? my fucking KID could do that, and these assholes want BENEFITS?! they say they're gonna form a UNION?!?! to hell with that, i'm replacing ALL their ungrateful asses with "AI" ASAP. oh, oh, so you're a "director" who wants to make "movies" and you want ME to pay for it? jump off a bridge you pretentious little shit, my computer can dream up a better flick than you could ever make with just a couple text prompts. what, you think just because you make ~music~ that that entitles you to money from MY pocket? shut the fuck up, you don't make """art""", you're not """an artist""", you make fucking content, you're just a fucking content creator like every other ordinary sap with an iphone. you think you're special? you think you deserve special treatment? who do you think you are anyway, asking ME to pay YOU for this crap that doesn't even create value for my investors? "culture" isn't a playground asshole, it's a marketplace, and it's pay to win. oh you "can't afford rent"? you're "drowning in a sea of medical debt"? you say the "cost" of "living" is "too high"? well ***I*** don't have ANY of those problems, and i worked my ASS OFF to get where i am, so really, it sounds like you're just not trying hard enough. and anyway, i don't think someone as impoverished as you is gonna have much of value to contribute to "culture" anyway. personally, i think it's time you got yourself a real job. maybe someday you'll even make it to middle manager!
see, i don't believe "AI" can qualitatively replace most of the work it's being pitched for. the problem is that quality hasn't mattered to these nincompoops for a long time. the rich homunculi of our world don't even know what quality is, because they exist in a whole separate reality from ours. what could a banana cost, $15? i don't understand what you mean by "burnout", why don't you just take a vacation to your summer home in Madrid? wow, you must be REALLY embarrassed wearing such cheap shoes in public. THESE PEOPLE ARE FUCKING UNHINGED! they have no connection to reality, do not understand how society functions on a material basis, and they have nothing but spite for the labor they rely on to survive. they are so instinctually, incessantly furious at the idea that they're not single-handedly responsible for 100% of their success that they would sooner tear the entire world down than willingly recognize the need for public utilities or labor protections. they want to be Gods and they want to be uncritically adored for it, but they don't want to do a single day's work so they begrudgingly pay contractors to do it because, in the rich man's mind, paying a contractor is literally the same thing as doing the work yourself. now with "AI", they don't even have to do that! hey, isn't it funny that every single successful tech platform relies on volunteer labor and independent contractors paid substantially less than they would have in the equivalent industry 30 years ago, with no avenues toward traditional employment? and they're some of the most profitable companies on earth?? isn't that a funny and hilarious coincidence???
so, yeah, that's my stance on "AI". LLMs have legitimate uses, but those uses are a drop in the ocean compared to what they're actually being used for. they enable our worst impulses while lowering the quality of available information, they give immense power pretty much exclusively to unscrupulous scam artists. they are the product of a society that values only money and doesn't give a fuck where it comes from. they're a temper tantrum by a ruling class that's sick of having to pretend they need a pretext to steal from you. they're taking their toys and going home. all this massive investment and hype is going to crash and burn leaving the internet as we know it a ruined and useless wasteland that'll take decades to repair, but the investors are gonna make out like bandits and won't face a single consequence, because that's what this country is. it is a casino for the kings and queens of economy to bet on and manipulate at their discretion, where the rules are whatever the highest bidder says they are-- and to hell with the rest of us. our blood isn't even good enough to grease the wheels of their machine anymore.
i'm not afraid of AI or "AI" or of losing my job to either. i'm afraid that we've so thoroughly given up our morals to the cruel logic of the profit motive that if a better world were to emerge, we would reject it out of sheer habit. my fear is that these despicable cunts already won the war before we were even born, and the rest of our lives are gonna be spent dodging the press of their designer boots.
(read more "AI" opinions in this subsequent post)
#sarahposts#ai#ai art#llm#chatgpt#artificial intelligence#genai#anti genai#capitalism is bad#tech companies#i really don't like these people if that wasn't clear
2K notes
Text
PSA 🗣️ another scammer using genAI without disclosing it
pixlgirl has been posting AI-generated images (targeting fandoms) without disclosing it, passing them off as their genuine art, and has apparently scammed at least one person into ‘commissioning’ them. this is a public PSA so yall can block them and not interact. please do not harass them!
it’s incredibly shitty to be disingenuous while posting AI but even shittier to scam people with it 🤢 stay diligent yall
#i hate making ‘call out’ posts but kinda feel obligated since so many people have trouble spotting it#pixel art#pixelart#anti ai#i’ve been seeing them in the tags for a few weeks but saw they’re now scamming people so i thought id make a post#the animations they’ve posted are just filters on the AI lol#please don’t harass them#this is for those who don’t want to interact with AI#genAI#fuck genai#fuck ai#fuck ai art#zelda#pixel aesthetic#text post#these types of losers always just block me lol#art drama#drama#artist on tumblr
3K notes
Text
Let them hear your rage
survey link
#signal boost#adobe#genai#anti genai#anti ai#anti ai art#Adobe ai#digital artist#traditional artist#artists on tumblr#small artist#art on tumblr#tumblr art#tumblr artists
654 notes
Text
“The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted.”
Please sign if you are able to.
#ai is theft#genai#anti ai#support human artists#create don't scrape#artists on tumblr#boost#no to ai#no to ai generated images#no to generative ai
179 notes
Text
(from The Mitchells vs. the Machines, 2021)
#the mitchells vs the machines#data privacy#ai#artificial intelligence#digital privacy#genai#quote#problem solving#technology#sony pictures animation#sony animation#mike rianda#jeff rowe#danny mcbride#abbi jacobson#maya rudolph#internet privacy#internet safety#online privacy#technology entrepreneur
780 notes
Text
I'd like to take a couple of minutes to talk about NaNoWriMo (National Novel Writers Month) and their terrible, very bad, no good stance on genAI (generative artificial intelligence) and why I won't be writing anything for this challenge again.
I'm very aware that I am an active and vocal genAI hater. But I am willing and open to hearing about positive and useful things LLMs (large language models) can do. There are valid scientific uses for the technology and some really fascinating medical and academic breakthroughs that come from LLMs. But the use of genAI in a creative writing context is complete bullshit.
Come with me for the breakdown.
The first part of their statement:
NaNoWriMo has made it clear they are not just tolerating genAI in their month-long writing challenge, but that those of us who don't tolerate it are 'classist' and 'ableist' for objecting.
The post was later amended with a list of reasons why they make each of those claims. We'll start from the top.
GenAI uses the technology in a way that is morally, ethically and environmentally bankrupt. See, all LLMs have to train on something. When you're using it to, say, detect cancers you can feed it images of cancer scans so that it builds up a dataset of what those look like to predict future scans. But when you want to generate text, images and video you have to feed it text, images and video. Those things came from people, actual people and actual artists who overwhelmingly did not agree to train anything with their work and can no longer wrest their work from the machine now that it's been stolen from them.
It also isn't 'intelligent' at all, despite having that word in the name. Think of genAI like an alien learning our language with absolutely no frame of reference for what it's learning. It can predict that the letters "w-e" and "c-a-n" often come after the letters "y-e-s" because the phrase "yes we can" will come up often in training data, it's a common phrase. But it doesn't actually understand what any of those words MEAN. Just that they often follow one another, so that when prompted it will, statistically, try to put those letters and words together again.
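To make the "statistics, not understanding" point concrete, here's a minimal toy sketch in Python. This is not how any real LLM is built -- the tiny training text, the word-level counting, and the "always pick the most common next word" rule are all made up for illustration -- but it shows how text can be produced with zero comprehension of what any of it means:

```python
from collections import Counter, defaultdict

# Toy illustration only: count which word most often follows which word
# in a scrap of "training data", then "write" by echoing those counts.
training_text = "yes we can . yes we can . yes we will ."
follow_counts = defaultdict(Counter)

words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def generate(start, length=5):
    """Emit the statistically most common continuation at each step."""
    output = [start]
    for _ in range(length):
        counts = follow_counts[output[-1]]
        if not counts:
            break
        # No meaning, no intent -- just the highest frequency wins.
        output.append(counts.most_common(1)[0][0])
    return " ".join(output)

print(generate("yes"))  # -> "yes we can . yes we"
```

Real models predict tokens using learned probabilities rather than raw counts, but the core move is the same: continue with whatever is statistically likely, regardless of meaning.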
So when it comes to actually writing or responding to prompts, what you're getting is the most likely outcome based on a massive amount of data input. It is not actually giving you feedback on what your writing looks like, it's giving you the most statistically probable response based on your input. It's fake feedback, a thousand other feedbacks crammed together and extruded into a goo that looks and sounds like feedback but is actually meaningless. ChatGPT doesn't understand your writing sample any more than a phone tree understands your anger and desperation when you continue to say "OPERATOR" as clearly as you can to try to get through to a real human. Both understand that you input a word and will output something based on that, but context, emotions, cultural mores, etc. are all beyond them.
This is why AI is so absurdly shitty at things like math, counting letters in words and identifying words that start with the same letter. It's mashing together a million math problem answers betting on the likelihood that statistically someone has already answered that question enough times in the training data that it can spit the correct answer out at you.
TLDR: If you're using genAI to get feedback on your writing you're not actually getting feedback on your writing at all, but the most statistically probable set of words that relate to feedback. So right off the bat the idea that genAI is going to help you be a better writer is just flat wrong. It doesn't know how to write, it doesn't even know how many Rs are in the word 'strawberry'.
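And for contrast: actually counting the Rs is a trivial, deterministic computation. The gap is between executing a procedure and predicting plausible-sounding text about one. A one-liner (in Python) gets it right every single time:

```python
# Deterministic counting -- no statistics, no training data, no guessing.
print("strawberry".count("r"))  # prints 3
```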
Second point has the same issues as the first:
I actually agree with them on the point that if your brain doesn't handle certain writing tasks well, it's perfectly okay to use an outside source for help with them. GenAI isn't actually helping you be a better writer, though; it can't. It doesn't understand anything you write nor can it provide meaningful feedback when it's just spitting out statistically probable words to you based on your input. So while the point here is actually good on the surface, the solution of using genAI to help people who have trouble with certain aspects of writing is still not correct.
The final point:
Again, this is a very good point... if it wasn't being made in conjunction with a defense of generative AI WHICH DOES NOT HELP OR SOLVE THIS ISSUE. In fact, because of the known issues of bias in how genAI LLMs are built, they can make things even worse for marginalized writers.
I genuinely have no idea how this very true paragraph about people who are routinely pushed out of traditional writing spaces is helped by genAI. Their entire point thus far seems to be that genAI is a 'cheap' alternative to some traditional writing aids but considering genAI doesn't work like that it's all dead in the water as far as I'm concerned.
If NaNoWriMo was actually concerned with solving these access issues to things they consider critical to writing in general, why not offer a place for real people to read and critique one another on their platform? There are myriad other technological solutions that don't cost huge amounts of water AND actually help aspiring writers!
All of this to say that you should write for yourself, write what you enjoy and get better the same way generations of people before you have: by reading other people's work, talking to and exchanging time with other authors and writing and rewriting in your own words until you're satisfied.
Wasting water asking genAI to do things that would make you a better writer if you did them yourself or with trusted allies is just that: a waste.
125 notes
Text
If you recognize either one of these images, you've probably seen a LOT of technological change in your lifetime! Cheap personal computers, the internet, smart phones, social media -- all of them changed the world right in front of our eyes. Now, we are facing another technological revolution. Generative artificial intelligence models like ChatGPT and Claude have the potential to significantly change the way we work and live. At Do That Dave, our self-paced learning missions are designed to help people discover what AI is, and understand how it is going to shape the future. You'll learn how to use popular free tools through interactive exercises and hands-on AI projects. It's all designed by a crew who has also seen a lot of change in their lifetime, and is ready to guide you through this massive technological shift. We have flexible, affordable options for individuals and teams. Message us to learn more, or register for a free account to browse our training programs! Check us out at www.dothatdave.com.
60 notes
Text
What the fuck, NaNo???????
this is a long post, buckle up.
Okay, if you haven't heard anything about NaNoWriMo's statement about the use of AI in writing, I am both jealous of you and here to ruin that for you.
The folks over at National Novel Writing Month have released a statement (which you can read here) that explicitly says that being Anti-genAI is classist and ableist. My gut reaction is that this is a fucking asinine take -- poor and disabled people have been writing for longer than we've even had the electricity that powers AI -- but the more I think about it, the angrier I get about the anti-community sentiment that they seem to be pushing.
The claims that are made in this statement are either non-issues or things that AI does not actually fix. Yeah, not everyone can afford to hire an editor, but that is a large part of why writing communities exist both in-person and online. Exchange works with a friend and help each other out, or find a discord server and ask there. Make use of a writing community. The same thing applies to ableism; yeah, we all have different abilities and not everyone can "see" what might need improvement. So you share your work with another writer and get feedback from your community. Writing is a skill that needs to be honed, and in order to do that, you have to be okay with being bad at it sometimes.
I'm not even sure I can say much about their "General access" paragraph because, like... AI is not going to fix the systemic issues with the publishing industry. It just won't. That entire paragraph gets half-way to a point and then falls on its ass into the void.
As if I wasn't angry enough, NaNoWriMo edited the statement about 8 hours ago to say "Note: we have edited this post by adding this paragraph to reflect our acknowledgment that there are bad actors in the AI space who are doing harm to writers and who are acting unethically."
This makes me want to throw my computer out a fucking window. Using AI in writing or any other art is inherently unethical because the language models being used are trained on works by people who did not consent to their work being used to train said AI. It is theft. It is plagiarism. Plain and simple. ChatGPT was trained on millions of New York Times articles, so when you use ChatGPT, what it produces is based on the work of NYT journalists (read about the resulting lawsuit here). It's not that there are "bad actors"; the programs themselves are built on stealing writing. We've known this for what feels like ages now. This is such a bullshit edit and a fucking sad attempt at saving their asses.
I am someone who doesn't even use Grammarly anymore because they literally market themselves as an AI writing assistant and I'm not willing to risk my entire degree for an application that can't even handle vernacular and dialect and makes mid suggestions at best. Genuinely fuck off and block me if you support the use of AI in writing. Also, my block button is rated E for Everyone and I will use it liberally if anyone comes into my notes supporting genAI. Unless I'm feeling particularly combative, in which case you will feel the full weight of my academic and creative integrity. You have been warned.
#genAI#AI#artificial intelligence#Generative AI#ChatGPT#NaNoWriMo#National Novel Writing Month#I am genuinely so angry about this
32 notes
Note
Hi! I just read your post about your opinion on "AI" and I really liked it. If it's no bother, what's your opinion on people who use it for studying? Like writing essays, solving problems and stuff like that?
I haven't been a fan of AI from the beginning and I've heard that you shouldn't ask it for anything because then you help it develop. But I don't know how to explain that to friends and classmates or even if it's true anymore. Because I've seen some of the prompts it can come up with and they're not bad and I've heard people say that the summaries AI makes are really good and I just... I dunno. I'm at a loss
Sorry if this is a lot or something you simply don't want to reply to. You made really good points when talking about AI and I really liked it and this has been weighing on me for a while :)
on a base level, i don't really have a strongly articulated opinion on the subject because i don't use AI, and i'm 35 so i'm not in school anymore and i don't have a ton of college-aged friends either. i have little exposure to the people who use AI in this way nor to the people who have to deal with AI being used in this way, so my perspective here is totally hypothetical and unscientific.
what i was getting at in my original AI post was a general macroeconomic point about how all of the supposed efficiency gains of AI are an extension of the tech CEO's dislike of paying and/or giving credit to anyone they deem less skilled or intelligent than them. that it's conspicuous how AI conveniently falls into place after many decades of devaluing and deskilling creative/artistic labor industries. historically, for a lot of artists the most frequently available & highest paying gigs were in advertising. i can't speak to the specifics when it comes to visual art or written copy, but i *can* say that when i worked in the oklahoma film industry, the most coveted jobs were always the commercials. great pay for relatively less work, with none of the complications that often arise working on amateur productions. not to mention they were union gigs, a rare enough thing in a right to work state, so anyone trying to make a career out of film work wanting to bank their union hours to qualify for IATSE membership always had their ears to the ground for an opening. which didn't come often because, as you might expect, anyone who *got* one of those jobs aimed to keep it as long as possible. who could blame em, either? one person i met who managed to get consistent ad work said they could afford to work all of two or three months a year, so they could spend the rest of their time doing low-budget productions and (occasionally) student films.
there was a time when this was the standard for the film industry, even in LA; you expected to work 3 to 5 shows a year (exact number's hard to estimate because production schedules vary wildly between ads, films, and tv shows) for six to eight months if not less, so you'd have your bills well covered through the lean periods and be able to recover from what is an enormously taxing job both physically and emotionally. this was never true for EVERYONE, film work's always been a hustle and making a career of it is often a luck-based crapshoot, but generally that was the model and for a lot of folks it worked. it meant more time to practice their skills on the job, sustainably building expertise and domain knowledge that they could then pass down to future newcomers. anything that removes such opportunities decreases the amount of practice workers get, and any increased demand on their time makes them significantly more likely to burn out of the industry early. lower pay, shorter shoots, busier schedules, these aren't just bad for individual workers but for the entire industry, and that includes the robust and well-funded advertising industry.
well, anyway, this year's coca-cola christmas ad was made with AI. they had maybe one person on quality control using an adobe aftereffects mask to add in the coke branding. this is the ultimate intended use-case for AI. it required the expertise of zero unionized labor, and worst of all the end result is largely indistinguishable from the alternative. you'll often see folks despair at this verisimilitude, particularly when a study comes out that shows (for instance) people can't tell the difference between real poetry and chat gpt generated poetry. i despair as well, but for different reasons. i despair that production of ads is a better source of income and experience for film workers than traditional movies or television. i despair that this technology is fulfilling an age-old promise about the disposability of artistic labor. poetry is not particularly valued by our society, is rarely taught to people beyond a beginner's gloss on meter and rhyme. "my name is sarah zedig and i'm here to say, i'm sick of this AI in a major way" type shit. end a post with the line "i so just wish that it would go away and never come back again!" and then the haiku bot swoops in and says, oh, 5/7/5 you say? that is technically a haiku! and then you put a haiku-making minigame in your crowd-pleasing japanese nationalist open world chanbara simulator, because making a haiku is basically a matter of selecting one from 27 possible phrase combinations. wait, what do you mean the actual rules of haiku are more elastic and subjective than that? that's not what my english teacher said in sixth grade!
AI is able to slip in and surprise us with its ability to mimic human-produced art because we already treat most human-produced art like mechanical surplus of little to no value. ours is a culture of wikipedia-level knowledge, where you have every incentive to learn a lot of facts about something so that you can sufficiently pretend to have actually experienced it. but this is not to say that humans would be better able to tell the difference between human produced and AI produced poetry if they were more educated about poetry! the primary disconnect here is economic. Poets already couldn't make a fucking living making poetry, and now any old schmuck can plug a prompt into chatgpt and say they wrote a sonnet. even though they always had the ability to sit down and write a sonnet!
boosters love to make hay about "deskilling" and "democratizing" and "making accessible" these supposedly gatekept realms of supposedly bourgeois expression, but what they're really saying (whether they know it or not) is that skill and training have no value anymore. and they have been saying this since long before AI as we know it now existed! creative labor is the backbone of so much of our world, and yet it is commonly accepted as a poverty profession. i grew up reading books and watching movies based on books and hearing endless conversation about books and yet when i told my family "i want to be a writer" they said "that's a great way to die homeless." like, this is where the conversation about AI's impact starts. we already have a culture that simultaneously NEEDS the products of artistic labor, yet vilifies and denigrates the workers who perform that labor. folks see a comic panel or a corporate logo or a modern art piece and say "my kid could do that," because they don't perceive the decades of training, practice, networking, and experimentation that resulted in the finished product. these folks do not understand that just because the labor of art is often invisible doesn't mean it isn't work.
i think this entire conversation is backwards. in an ideal world, none of this matters. human labor should not be valued over machine labor because it inherently possesses an aura of human-ness. art made by humans isn't better than AI generated art on qualitative grounds. art is subjective. you're not wrong to find beauty in an AI image if the image is beautiful. to my mind, the value of human artistic labor comes down to the simple fact that the world is better when human beings make art. the world is better when we have the time and freedom to experiment, to play, to practice, to develop and refine our skills to no particular end except whatever arbitrary goal we set for ourselves. the world is better when people collaborate on a film set to solve problems that arise organically out of the conditions of shooting on a live location. what i see AI being used for is removing as many opportunities for human creativity as possible and replacing them with statistical averages of prior human creativity. this passes muster because art is a product that exists to turn a profit. because publicly traded companies have a legal responsibility to their shareholders to take every opportunity to turn a profit regardless of how obviously bad for people those opportunities might be.
that common sense says writing poetry, writing prose, writing anything is primarily about reaching the end of the line, about having written something, IS the problem. i've been going through the many unfinished novels i wrote in high school lately, literally hundreds of thousands of words that i shared with maybe a dozen people and probably never will again. what value do those words have? was writing them a waste of time since i never posted them, never finished them, never turned a profit off them? no! what i've learned going back through those old drafts is that i'm only the writer i am today BECAUSE i put so many hours into writing generic grimdark fantasy stories and bizarrely complicated werewolf mythologies.
you know i used to do open mics? we had a poetry group that met once a month at a local cafe in college. each night we'd start by asking five words from the audience, then inviting everyone to compose a poem using those words in 10 to 15 minutes. whoever wanted to could read their poem, and whoever got the most applause won a free drink from the cafe. then we'd spend the rest of the night having folks sign up to come and read whatever. sometimes you'd get heartfelt poems about personal experiences, sometimes you'd get ambitious soundcloud rappers, sometimes you'd get a frat guy taking the piss, sometimes you'd get a mousy autist just doing their best. i don't know that any of the poetry i wrote back then has particular value today, but i don't really care. the point of it was the experience in that moment. the experience of composing something on the fly, or having something you wrote a couple days ago, then standing up and reading it. the value was in the performance itself, in the momentary synthesis between me and the audience. i found out then that i was pretty good at making people cry, and i could not have had that experience in any other venue. i could not have felt it so viscerally had i just posted it online. and i cannot wrap up that experience and give it to you, because it only existed then.
i think more people would write poetry if they had more hours in a day to spare for frivolities, if there existed more spaces where small groups could organize open mics, if transit made those spaces more widely accessible, if everyone made enough money that they weren't burned the fuck out and not in the mood to go to an open mic tonight, if we saw poetry as a mode of personal reflection which was as much about the experience of having written it as anything else. this is the case for all the arts. right now, the only people who can afford to make a living doing art are already wealthy, because art doesn't pay well. this leads to brain drain and overall lowering quality standards, because the suburban petty bouge middle class largely do not experience the world as it materially exists for the rest of us. i often feel that many tech CEOs want to be remembered the way andy warhol is remembered. they want to be loved and worshipped not just for business acumen but for aesthetic value, they want to get the kind of credit that artists get-- because despite the fact that artists don't get paid shit, they also frequently get told by people "your work changed my life." how is it that a working class person with little to no education can write a story that isn't just liked but celebrated, that hundreds or thousands of people imprint on, that leaves a mark on culture you can't quantify or predict or recreate? this is AI's primary use-case, to "democratize" art in such a way that hacks no longer have to work as hard to pretend to be good at what they do. i mean, hell, i have to imagine every rich person with an autobiography in the works is absolutely THRILLED that they no longer have to pay a ghost writer!
so, circling back around to the meat of your question. as far as telling people not to use AI because "you're just helping to train it," that ship has long since sailed. getting mad at individuals for using AI right now is about as futile as getting mad at individuals for not masking-- yes, obviously they should wear a mask and write their own essays, but to say this is simply a matter of millions of individuals making the same bad but unrelated choice over and over is neoliberal hogwash. people stopped masking because they were told to stop masking by a government in league with corporate interests which had every incentive to break every avenue of solidarity that emerged in 2020. they politicized masks, calling them "the scarlet letter of [the] pandemic". biden himself insisted this was "a pandemic of the unvaccinated", helpfully communicating to the public that if you're vaccinated, you don't need to mask. all those high case numbers and death counts? those only happen to the bad people.
now you have CEOs and politicians and credulous media outlets and droves of grift-hungry influencers hard selling the benefits of AI in everything everywhere all the time. companies have bent over backwards to incorporate AI despite ethics and security worries because they have a fiduciary responsibility to their shareholders, and everyone with money is calling this the next big thing. in short, companies are following the money, because that's what companies do. they, in turn, are telling their customers what tools to use and how. so of course lots of people are using AI for things they probably shouldn't. why wouldn't they? "the high school/college essay" as such has been quantized and stripmined by an education system dominated by test scores over comprehension. it is SUPPOSED to be an exercise in articulating ideas, to teach the student how to argue persuasively. the final work has little to no value, because the point is the process. but when you've got a system that lives and dies by its grades, within which teachers are given increasingly more work to do, less time to do it in, and a much worse paycheck for their trouble, the essay increasingly becomes a simple pass/fail gauntlet to match the expected pace set by the simple, clean, readily gradable multiple choice quiz. in an education system where the stakes for students are higher than they've ever been, within which you are increasingly expected to do more work in less time with lower-quality guidance from your overworked teachers, there is every incentive to get chatgpt to write your essay for you.
do you see what i'm saying? we can argue all day about the shoulds here. of course i think it's better when people write their own essays, do their own research, personally read the assigned readings. but cheating has always been a problem. a lot of these same fears were aired over the rising popularity of cliffs notes in the 90s and 2000s! the real problem here is systemic. it's economic. i would have very little issue with the output of AI if existing conditions were not already so precarious. but then, if the conditions were different, AI as we know it likely would not exist. it emerges today as the last gasp of a tech industry that has been floundering for a reason to exist ever since the smart phone dominated the market. they tried crypto. they tried the metaverse. now they're going all-in on AI because it's a perfect storm of shareholder-friendly buzzwords and the unscientific technomythology that's been sold to laymen by credulous press sycophants for decades. It slots right into this niche where the last of our vestigial respect for "the artist" once existed. it is the ultimate expression of capitalist realism, finally at long last doing away with the notion that the suits at disney could never in their wildest dreams come up with something half as cool as the average queer fanfic writer. now they've got a program that can plagiarize that fanfic (along with a dozen others) for them, laundering the theft through a layer of transformation which perhaps mirrors how the tech industry often exploits open source software to the detriment of the open source community. the catastrophe of AI is that it's the fulfillment of a promise that certainly predates computers at the very least.
so, i don't really know what to tell someone who uses AI for their work. if i was talking to a student, i'd say that relying on chatgpt is really gonna screw you over when it comes time to take the SAT or ACT, and you have to write an essay from scratch by hand in a monitored environment-- but like, i also think the ACT and SAT and probably all the other standardized tests shouldn't exist? or at the very least ought to be severely devalued, since prep for those tests often sabotages the integrity of actual classroom education. although, i guess at this point the only way forward for education (that isn't getting on both knees and deep-throating big tech) is more real-time in-class monitored essay writing, which honestly might be better for all parties anyway. of course that does nothing to address research essays you can't write in a single class session. to someone who uses AI for research, i'd probably say the same thing as i would to someone who uses wikipedia: it's a fine enough place to start, but don't cite it. click through links, find sources, make sure what you're reading is real, don't rely on someone else's generalization. know that chatgpt is likely not pulling information from a discrete database of individual files that it compartmentalizes the way you might expect, but rather is a statistical average of a broad dataset about which it cannot have an opinion or interpretation. sometimes it will link you to real information, but just as often it will invent information from whole cloth. honestly, the more i talk it out, the more i realize all this advice is basically identical to the advice adults were giving me in the early 2000s.
which really does cement for me that the crisis AI is causing in education isn't new and did not come from nowhere. before chatgpt, students were hiring freelancers on fiverr. i already mentioned cliffs notes. i never used any of these in college, but i'll also freely admit that i rarely did all my assigned reading. i was the "always raises her hand" bitch, and every once in a while i'd get other students who were always dead silent in class asking me how i found the time to get the reading done. i'd tell them, i don't. i read the beginning, i read the ending, and then i skim the middle. whenever a word or phrase jumps out at me, i make a note of it. that way, when the professor asks a question in class, i have exactly enough specific pieces of information at hand to give the impression of having done the reading. and then i told them that i learned how to do this from the very same professor that was teaching that class. the thing is, it's not like i learned nothing from this process. i retained quite a lot of information from those readings! this is, broadly, a skill that emerges from years of writing and reading essays. but then you take a step back and remember that for most college students (who are not pursuing any kind of arts degree), this skillset is relevant to an astonishingly minimal proportion of their overall course load. college as it exists right now is treated as a jobs training program, within which "the essay" is a relic of an outdated institution that highly valued a generalist liberal education where today absolute specialization seems more the norm. so AI comes in as the coup de grâce to that old institution. artists like myself may not have the constitution for the kind of work that colleges now exist to funnel you into, but those folks who've never put a day's thought into the work of making art can now have a computer generate something at least as good at a glance as basically anything i could make. as far as the market is concerned, that's all that matters. the contents of an artwork, what it means to its creator, the historic currents it emerges out of, these are all technicalities that the broad public has been well trained not to give a shit about most of the time. what matters is the commodity and the economic activity it exists to generate.
but i think at the end of the day, folks largely want to pay for art made by human beings. that it's so hard for a human being to make a living creating and selling art is a question far older than AI, and whose answer hasn't changed. pay workers more. drastically lower rents. build more affordable housing. make healthcare free. make education free. massively expand public transit. it is simply impossible to overstate how much these things alone would change the conversation about AI, because it would change the conversation about everything. SO MUCH of the dominance of capital in our lives comes down to our reliance on cars for transit (time to get a loan and pay for insurance), our reliance on jobs for health insurance (can't quit for moral reasons if it's paying for your insulin), etc etc etc. many of AI's uses are borne out of economic precarity and a ruling class desperate to vacuum up every loose penny they can find. all those billionaires running around making awful choices for the rest of us? they stole those billions. that is where our security went. that is why everything is falling apart, because the only option remaining to *every* institutional element of society is to go all-in on the profit motive. tax these motherfuckers and re-institute public arts funding. hey, did you know the us government used to give out grants to artists? did you know we used to have public broadcast networks where you could make programs that were shown to your local community? why the hell aren't there public youtube clones? why aren't there public transit apps? why aren't we CONSTANTLY talking about nationalizing these abusive fucking industries that are falling over themselves to integrate AI because their entire modus operandi is increasing profits regardless of product quality?
these are the questions i ask myself when i think about solutions to the AI problem. tech needs to be regulated, the monopolies need breaking up, but that's not enough. AI is a symptom of a much deeper illness whose treatment requires systemic solutions. and while i'm frustrated when i see people rely on AI for their work, or otherwise denigrate artists who feel AI has devalued their field, on some level i can't blame them. they are only doing what they've been told to do. all of which merely strengthens my belief in the necessity of an equitable socialist future (itself barely step zero in the long path towards a communist future, and even that would only be a few steps on the even longer path to a properly anarchist future). improve the material conditions and you weaken the dominance of capitalist realism, however minutely. and while there are plenty of reasons to despair at the likelihood of such a future given a second trump presidency, i always try to remember that socialist policies are very popular and a *lot* of that popularity emerged during the first trump administration. the only wrong answer here is to assume that losing an election is the same thing as losing a war, that our inability to put the genie back in its bottle means we can't see our own wishes granted.
i dunno if i answered your question but i sure did say a lot of stuff, didn't i?
#sarahposts#ai#ai art#chatgpt#llm#genai#capitalism#unions#labor#workers rights#capitalist realism#longpost
86 notes
Text
(Found on instagram, someone made ‘AI art’ of those two)
Ok.
Well.
I think that’s enough internet for literally ever.
#tech#capitalism#codeblr#dystopia#tech speaks#techblr#kamala harris#vote Kamala#American politics#tw ai#ai#OpenAI#closedAI#genAI#trump#donald trump#fuck trump
24 notes
Text
I’ve seen like three or four posts this month from leftists downright mocking artists for rallying against genAI and I really don’t understand it? Why do you claim to fight for organized labor and a better society but outright despise artists and our struggles? Most of us are not privileged millionaires; the vast majority of us are barely scraping by!! And now AI companies are scraping our work to regurgitate into soulless frankensteined slop in order to avoid paying for our labor and art, leaving thousands without reliable income while burning down the planet, and it’s a hilarious thing for you? Most of us are keenly aware of the issues with copyright law, but it’s one of the few legal footholds artists have that can be used in our fight against genAI. Please support artists, and reject generative ai!
#the audacity of idiots on TUMBLR of all sites shitting on artists#art#genai#anti genai#supporting genAI should permablock you from posting about disco elysium like girl the audacity
17 notes
Text
join the funeral of twitter
13 notes
Text
Had to ask for a religious/ethical exemption for a digital art assignment to get out of having to use genAI. On an assignment. In university. I feel like I’m going insane.
The professor said he “wasn't aware that there were any ethical issues with AI”, but that he had no problem coming up with an alternative assignment for me to do. Like I'm glad he's accommodating but. Sir.
Ok. I understand some rando on the streets not being aware of the ethical issues surrounding genAI. Most people do not have the time and energy to devote to giving a fuck about things that don’t directly affect them and I can respect that. It’s rough out there. But you are a professional graphic designer, and a college professor, in digital art. How. How do you not know. How are you requiring your students to do this in class and just not knowing. Why are you shocked that a student might object to being required to use the Art Theft Glacier Melter 9000 for a grade. This should not be a surprise to you. Why is this a surprise. Am I fucking crazy
#fellas is it weird to think stealing is wrong and destroying the planet is a bad idea actually#it’s bad enough I have to use adobe at all man I’m not using the genAI#he wanted us to generate reference images with AI. Sir we’ve been using free stock images all semester why can’t we just keep doing that#why do you need me to generate a picture of a panda. those already exist. for free. and they aren’t even stolen or anything#like a human person took a picture of a panda. then they put that on the internet with a license that tells us we can use it however#why the fuck are we using ai. like I get mentioning it. for education. but why are you making us use it#gen ai#genai#anti genai#art stuff#university stuff#digital art#local queer classicist posts#did this make any sense I’m really tired honestly
12 notes
Text
looking at food to order and fucking pita pit is using genAI images lmfaooooo!! literally never gonna support the company again now :)
74 notes