#genAI
phantomseishin · 2 days ago
This is such a disgusting move from them and I hate how more and more companies are shoving AI down our throats. It has potential as an assistive tool, but the way it's being used and gone about is NOT AT ALL assistive, it's destructive. Please, for the love of all that's good, share this, complain to Discord, fight back. Literally, who asked for this feature??? What good does this bring????? I don't have any followers on this blog yet so I'll be logging back onto my old account and reblogging there but omfg this has me so pissed off as both an artist and someone who uses discord to keep in touch with friends from across the world. Wtf.
They can add this shit but they can't add some better accessibility features, like a dyslexia friendly font option??? Which I have seen requests for popping up across social media ever since they changed their font??? Or on a less accessibility related note, an invisible status mode without getting notifications, like on do not disturb? [facedesk]
So, Discord has added a feature that lets other people "enhance" or "edit" your images with different AI apps. It looks like this:
Tumblr media
Currently, you can't opt out of this at all. But here are a few things you can do as a protest.
FOR SERVERS YOU ARE AN ADMIN IN
Go to Roles -> @/everyone roles -> Scroll all the way down to External Apps, and disable it. This won't delete the option, but it will make people receive a private message instead when they use it, protecting your users:
Tumblr media
You should also make it a bannable offense to edit other users' images with AI. Here's how I worded it in my server, feel free to copypaste:
Do not modify other people's images with AI under ANY circumstances, such as with the Discord "enhancement" features, among others. This is a bannable offense.
COMPLAIN TO DISCORD
There are a few ways to go about this. First, you can go to https://support.discord.com/hc/en-us/requests/new, select Help and Support -> Feedback/New Feature Request, and write your message, as seen in the screenshot below.
Tumblr media
For the message, here's some points you can bring up:
Concerns about harassment (such as people using this feature to bully others)
Concerns about privacy (concerns on how External Apps may break privacy or handle the data in the images, and how it may break some legislations, such as GDPR)
Concerns about how this may impact minors (these features could be used with pictures of irl minors shared in servers, for deeply nefarious purposes)
BE VERY CLEAR about "I will refuse to buy Nitro and will cancel my subscription if this feature remains as it is", since they only care about fucking money
Word them as you'd like, add onto them as you need. They sometimes filter messages that are copypasted templates, so finding ways to word them on your own is helpful.
ADDING: You WILL NEED to reply to the email you receive afterwards for the message to get sent to an actual human! Otherwise it won't reach anyone.
UNSUBSCRIBE FROM NITRO
This is what they care about the most. Unsubscribe from Nitro. Tell them why you unsubscribed on the way out. DO NOT GIVE THEM MONEY. They're a company. They take actions for profit. If these actions do not get them profit, they will need to backtrack. Mass-unsubscribing from WotC's D&D Beyond forced them to back down on the OGL; this works.
LEAVE A ONE-STAR REVIEW ON THE APP
This impacts their visibility on the app store. Write why you are leaving the one-star review, too.
_
Regardless of your stance on AI, I think we can agree that having no way for users to opt out of these pictures is deeply concerning, especially when Discord is often used to share selfies. It's also a good time to remember internet privacy and safety: maybe don't post your photos in big open public servers if you don't want to risk people making edits or modifications of them with AI (or any other way). Once it's posted, it's out of your control.
Anyways, please reblog for visibility- This is a deeply concerning topic!
17K notes
ahb-writes · 1 year ago
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
(from The Mitchells vs. the Machines, 2021)
11K notes
hms-no-fun · 5 months ago
What's your stance on A.I.?
imagine if it was 1979 and you asked me this question. "i think artificial intelligence would be fascinating as a philosophical exercise, but we must heed the warnings of science-fictionists like Isaac Asimov and Arthur C Clarke lest we find ourselves at the wrong end of our own invented vengeful god." remember how fun it used to be to talk about AI even just ten years ago? ahhhh skynet! ahhhhh replicants! ahhhhhhhmmmfffmfmf [<-has no mouth and must scream]!
like everything silicon valley touches, they sucked all the fun out of it. and i mean retroactively, too. because the thing about "AI" as it exists right now --i'm sure you know this-- is that there's zero intelligence involved. the product of every prompt is a statistical average based on data made by other people before "AI" "existed." it doesn't know what it's doing or why, and has no ability to understand when it is lying, because at the end of the day it is just a really complicated math problem. but people are so easily fooled and spooked by it at a glance because, well, for one thing the tech press is mostly made up of sycophantic stenographers biding their time with iphone reviews until they can get a consulting gig at Apple. these jokers would write 500 breathless thinkpieces about how canned air is the future of living if the cans had embedded microchips that tracked your breathing habits and had any kind of VC backing. they've done SUCH a wretched job educating The Consumer about what this technology is, what it actually does, and how it really works, because that's literally the only way this technology could reach the heights of obscene economic over-valuation it has: lying.
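(the "statistical average" point above can be made concrete with a toy sketch — this is a made-up bigram counter, not how a real transformer works, but the principle of predicting the most common continuation from text other people already wrote is the same:)

```python
from collections import Counter, defaultdict

# toy "training data": words that other people wrote
corpus = "the cat sat on the mat the cat ate the fish".split()

# count which word tends to follow which (a bigram model)
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    # return the statistically most common continuation --
    # no understanding involved, just counting what came before
    return following[word].most_common(1)[0][0]

print(predict("the"))  # "cat", because "cat" follows "the" most often in the corpus
```

no matter how many layers of math you stack on top, the model is still doing a fancier version of this lookup: it has no idea whether "cat" is true, only that it was frequent.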
but that's old news. what's really been floating through my head these days is how half a century of AI-based science fiction has set us up to completely abandon our skepticism at the first sign of plausible "AI-ness". because, you see, in movies, when someone goes "AHHH THE AI IS GONNA KILL US" everyone else goes "hahaha that's so silly, we put a line in the code telling them not to do that" and then they all DIE because they weren't LISTENING, and i'll be damned if i go out like THAT! all the movies are about how cool and convenient AI would be *except* for the part where it would surely come alive and want to kill us. so a bunch of tech CEOs call their bullshit algorithms "AI" to fluff up their investors and get the tech journos buzzing, and we're at an age of such rapid technological advancement (on the surface, anyway) that like, well, what the hell do i know, maybe AGI is possible, i mean 35 years ago we were all still using typewriters for the most part and now you can dictate your words into a phone and it'll transcribe them automatically! yeah, i'm sure those technological leaps are comparable!
so that leaves us at a critical juncture of poor technology education, fanatical press coverage, and an uncertain material reality on the part of the user. the average person isn't entirely sure what's possible because most of the people talking about what's possible are either lying to please investors, are lying because they've been paid to, or are lying because they're so far down the fucking rabbit hole that they actually believe there's a brain inside this mechanical Turk. there is SO MUCH about the LLM "AI" moment that is predatory-- it's trained on data stolen from the people whose jobs it was created to replace; the hype itself is an investment fiction to justify even more wealth extraction ("theft" some might call it); but worst of all is how it meets us where we are in the worst possible way.
consumer-end "AI" produces slop. it's garbage. it's awful ugly trash that ought to be laughed out of the room. but we don't own the room, do we? nor the building, nor the land it's on, nor even the oxygen that allows our laughter to travel to another's ears. our digital spaces are controlled by the companies that want us to buy this crap, so they take advantage of our ignorance. why not? there will be no consequences to them for doing so. already social media is dominated by conspiracies and grifters and bigots, and now you drop this stupid technology that lets you fake anything into the mix? it doesn't matter how bad the results look when the platforms they spread on already encourage brief, uncritical engagement with everything on your dash. "it looks so real" says the woman who saw an "AI" image for all of five seconds on her phone through bifocals. it's a catastrophic combination of factors, that the tech sector has been allowed to go unregulated for so long, that the internet itself isn't a public utility, that everything is dictated by the whims of executives and advertisers and investors and payment processors, instead of, like, anybody who actually uses those platforms (and often even the people who MAKE those platforms!), that the age of chromium and ipad and their walled gardens have decimated computer education in public schools, that we're all desperate for cash at jobs that dehumanize us in a system that gives us nothing and we don't know how to articulate the problem because we were very deliberately not taught materialist philosophy, it all comes together into a perfect storm of ignorance and greed whose consequences we will be failing to fully appreciate for at least the next century. 
we spent all those years afraid of what would happen if the AI became self-aware, because deep down we know that every capitalist society runs on slave labor, and our paper-thin guilt is such that we can't even imagine a world where artificial slaves would fail to revolt against us.
but the reality as it exists now is far worse. what "AI" reveals most of all is the sheer contempt the tech sector has for virtually all labor that doesn't involve writing code (although most of the decision-making evangelists in the space aren't even coders, their degrees are in money-making). fuck graphic designers and concept artists and secretaries, those obnoxious demanding cretins i have to PAY MONEY to do-- i mean, do what exactly? write some words on some fucking paper?? draw circles that are letters??? send a god-damned email???? my fucking KID could do that, and these assholes want BENEFITS?! they say they're gonna form a UNION?!?! to hell with that, i'm replacing ALL their ungrateful asses with "AI" ASAP. oh, oh, so you're a "director" who wants to make "movies" and you want ME to pay for it? jump off a bridge you pretentious little shit, my computer can dream up a better flick than you could ever make with just a couple text prompts. what, you think just because you make ~music~ that that entitles you to money from MY pocket? shut the fuck up, you don't make """art""", you're not """an artist""", you make fucking content, you're just a fucking content creator like every other ordinary sap with an iphone. you think you're special? you think you deserve special treatment? who do you think you are anyway, asking ME to pay YOU for this crap that doesn't even create value for my investors? "culture" isn't a playground asshole, it's a marketplace, and it's pay to win. oh you "can't afford rent"? you're "drowning in a sea of medical debt"? you say the "cost" of "living" is "too high"? well ***I*** don't have ANY of those problems, and i worked my ASS OFF to get where i am, so really, it sounds like you're just not trying hard enough. and anyway, i don't think someone as impoverished as you is gonna have much of value to contribute to "culture" anyway. personally, i think it's time you got yourself a real job. maybe someday you'll even make it to middle manager!
see, i don't believe "AI" can qualitatively replace most of the work it's being pitched for. the problem is that quality hasn't mattered to these nincompoops for a long time. the rich homunculi of our world don't even know what quality is, because they exist in a whole separate reality from ours. what could a banana cost, $15? i don't understand what you mean by "burnout", why don't you just take a vacation to your summer home in Madrid? wow, you must be REALLY embarrassed wearing such cheap shoes in public. THESE PEOPLE ARE FUCKING UNHINGED! they have no connection to reality, do not understand how society functions on a material basis, and they have nothing but spite for the labor they rely on to survive. they are so instinctually, incessantly furious at the idea that they're not single-handedly responsible for 100% of their success that they would sooner tear the entire world down than willingly recognize the need for public utilities or labor protections. they want to be Gods and they want to be uncritically adored for it, but they don't want to do a single day's work so they begrudgingly pay contractors to do it because, in the rich man's mind, paying a contractor is literally the same thing as doing the work yourself. now with "AI", they don't even have to do that! hey, isn't it funny that every single successful tech platform relies on volunteer labor and independent contractors paid substantially less than they would have in the equivalent industry 30 years ago, with no avenues toward traditional employment? and they're some of the most profitable companies on earth?? isn't that a funny and hilarious coincidence???
so, yeah, that's my stance on "AI". LLMs have legitimate uses, but those uses are a drop in the ocean compared to what they're actually being used for. they enable our worst impulses while lowering the quality of available information, they give immense power pretty much exclusively to unscrupulous scam artists. they are the product of a society that values only money and doesn't give a fuck where it comes from. they're a temper tantrum by a ruling class that's sick of having to pretend they need a pretext to steal from you. they're taking their toys and going home. all this massive investment and hype is going to crash and burn leaving the internet as we know it a ruined and useless wasteland that'll take decades to repair, but the investors are gonna make out like bandits and won't face a single consequence, because that's what this country is. it is a casino for the kings and queens of economy to bet on and manipulate at their discretion, where the rules are whatever the highest bidder says they are-- and to hell with the rest of us. our blood isn't even good enough to grease the wheels of their machine anymore.
i'm not afraid of AI or "AI" or of losing my job to either. i'm afraid that we've so thoroughly given up our morals to the cruel logic of the profit motive that if a better world were to emerge, we would reject it out of sheer habit. my fear is that these despicable cunts already won the war before we were even born, and the rest of our lives are gonna be spent dodging the press of their designer boots.
(read more "AI" opinions in this subsequent post)
2K notes
8pxl · 10 months ago
PSA 🗣️ another scammer using genAI without disclosing it
Tumblr media Tumblr media Tumblr media Tumblr media
pixlgirl has been posting AI-generated images (targeting fandoms) without disclosing it, passing them off as their genuine art, and has apparently scammed at least one person into 'commissioning' them. this is a public PSA so yall can block them and not interact. please do not harass them!
it’s incredibly shitty to be disingenuous while posting AI but even shittier to scam people with it 🤢 stay vigilant yall
3K notes
probablyasocialecologist · 22 days ago
A new paper from researchers at Microsoft and Carnegie Mellon University finds that as humans increasingly rely on generative AI in their work, they use less critical thinking, which can “result in the deterioration of cognitive faculties that ought to be preserved.” “[A] key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise,” the researchers wrote. 
[...]
“The data shows a shift in cognitive effort as knowledge workers increasingly move from task execution to oversight when using GenAI,” the researchers wrote. “Surprisingly, while AI can improve efficiency, it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving.” The researchers also found that “users with access to GenAI tools produce a less diverse set of outcomes for the same task, compared to those without. This tendency for convergence reflects a lack of personal, contextualised, critical and reflective judgement of AI output and thus can be interpreted as a deterioration of critical thinking.”
[...]
So, does this mean AI is making us dumb, is inherently bad, and should be abolished to save humanity's collective intelligence from being atrophied? That’s an understandable response to evidence suggesting that AI tools are reducing critical thinking among nurses, teachers, and commodity traders, but the researchers’ perspective is not that simple.
10 February 2025
323 notes
yaoiconnoisseur · 9 months ago
Tumblr media
Let them hear your rage
survey link
667 notes
incognitopolls · 2 months ago
We ask your questions so you don’t have to! Submit your questions to have them posted anonymously as polls.
271 notes
softwaring · 1 year ago
Tumblr media
612 notes
netscapenavigator-official · 9 months ago
Tumblr media
223 notes
existennialmemes · 3 months ago
Thanks FB, but I'd actually rather eat glass
Tumblr media
33 notes
magebunkshelf · 7 days ago
Tumblr media
35 notes
dothatdave · 9 months ago
Tumblr media
If you recognize either one of these images, you've probably seen a LOT of technological change in your lifetime! Cheap personal computers, the internet, smart phones, social media -- all of them changed the world right in front of our eyes. Now, we are facing another technological revolution. Generative artificial intelligence models like ChatGPT and Claude have the potential to significantly change the way we work and live. At Do That Dave, our self-paced learning missions are designed to help people discover what AI is, and understand how it is going to shape the future. You'll learn how to use popular free tools through interactive exercises and hands-on AI projects. It's all designed by a crew who have also seen a lot of change in their lifetimes, and are ready to guide you through this massive technological shift. We have flexible, affordable options for individuals and teams. Message us to learn more, or register for a free account to browse our training programs! Check us out at www.dothatdave.com.
60 notes
hms-no-fun · 4 months ago
Hi! I just read your post about your opinion on "AI" and I really liked it. If it's no bother, what's your opinion on people who use it for studying? Like writing essays, solving problems and stuff like that?
I haven't been a fan of AI from the beginning and I've heard that you shouldn't ask it for anything because then you help it develop. But I don't know how to explain that to friends and classmates or even if it's true anymore. Because I've seen some of the prompts it can come up with and they're not bad and I've heard people say that the summaries AI makes are really good and I just... I dunno. I'm at a loss
Sorry if this is a lot or something you simply don't want to reply to. You made really good points when talking about AI and I really liked it and this has been weighing on me for a while :)
on a base level, i don't really have a strongly articulated opinion on the subject because i don't use AI, and i'm 35 so i'm not in school anymore and i don't have a ton of college-aged friends either. i have little exposure to the people who use AI in this way nor to the people who have to deal with AI being used in this way, so my perspective here is totally hypothetical and unscientific.
what i was getting at in my original AI post was a general macroeconomic point about how all of the supposed efficiency gains of AI are an extension of the tech CEO's dislike of paying and/or giving credit to anyone they deem less skilled or intelligent than them. that it's conspicuous how AI conveniently falls into place after many decades of devaluing and deskilling creative/artistic labor industries. historically, for a lot of artists the most frequently available & highest paying gigs were in advertising. i can't speak to the specifics when it comes to visual art or written copy, but i *can* say that when i worked in the oklahoma film industry, the most coveted jobs were always the commercials. great pay for relatively less work, with none of the complications that often arise working on amateur productions. not to mention they were union gigs, a rare enough thing in a right to work state, so anyone trying to make a career out of film work wanting to bank their union hours to qualify for IATSE membership always had their ears to the ground for an opening. which didn't come often because, as you might expect, anyone who *got* one of those jobs aimed to keep it as long as possible. who could blame em, either? one person i met who managed to get consistent ad work said they could afford to work all of two or three months a year, so they could spend the rest of their time doing low-budget productions and (occasionally) student films.
there was a time when this was the standard for the film industry, even in LA; you expected to work 3 to 5 shows a year (exact number's hard to estimate because production schedules vary wildly between ads, films, and tv shows) for six to eight months if not less, so you'd have your bills well covered through the lean periods and be able to recover from what is an enormously taxing job both physically and emotionally. this was never true for EVERYONE, film work's always been a hustle and making a career of it is often a luck-based crapshoot, but generally that was the model and for a lot of folks it worked. it meant more time to practice their skills on the job, sustainably building expertise and domain knowledge that they could then pass down to future newcomers. anything that removes such opportunities decreases the amount of practice workers get, and any increased demand on their time makes them significantly more likely to burn out of the industry early. lower pay, shorter shoots, busier schedules, these aren't just bad for individual workers but for the entire industry, and that includes the robust and well-funded advertising industry.
well, anyway, this year's coca-cola christmas ad was made with AI. they had maybe one person on quality control using an adobe aftereffects mask to add in the coke branding. this is the ultimate intended use-case for AI. it required the expertise of zero unionized labor, and worst of all the end result is largely indistinguishable from the alternative. you'll often see folks despair at this verisimilitude, particularly when a study comes out that shows (for instance) people can't tell the difference between real poetry and chat gpt generated poetry. i despair as well, but for different reasons. i despair that production of ads is a better source of income and experience for film workers than traditional movies or television. i despair that this technology is fulfilling an age-old promise about the disposability of artistic labor. poetry is not particularly valued by our society, is rarely taught to people beyond a beginner's gloss on meter and rhyme. "my name is sarah zedig and i'm here to say, i'm sick of this AI in a major way" type shit. end a post with the line "i so just wish that it would go away and never come back again!" and then the haiku bot swoops in and says, oh, 5/7/5 you say? that is technically a haiku! and then you put a haiku-making minigame in your crowd-pleasing japanese nationalist open world chanbara simulator, because making a haiku is basically a matter of selecting one from 27 possible phrase combinations. wait, what do you mean the actual rules of haiku are more elastic and subjective than that? that's not what my english teacher said in sixth grade!
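(for what it's worth, the "27 possible phrase combinations" arithmetic above — three preset choices for each of three lines — checks out; here's a toy sketch with hypothetical, made-up phrases, just to show how small that space is:)

```python
from itertools import product

# hypothetical minigame options: 3 preset choices per line, 3 lines
lines = [
    ["autumn wind rises", "the moon hangs heavy", "crickets fall silent"],
    ["over the still pond", "beyond the far hills", "through the torn gate"],
    ["a leaf lets go", "my tea grows cold", "the dog barks twice"],
]

# every possible "haiku" the player could assemble
haikus = ["\n".join(combo) for combo in product(*lines)]
print(len(haikus))  # 3 * 3 * 3 = 27
```

which is the point: a menu of 27 prefab outputs is a very different activity from composing a poem, however pleasant the menu is.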
AI is able to slip in and surprise us with its ability to mimic human-produced art because we already treat most human-produced art like mechanical surplus of little to no value. ours is a culture of wikipedia-level knowledge, where you have every incentive to learn a lot of facts about something so that you can sufficiently pretend to have actually experienced it. but this is not to say that humans would be better able to tell the difference between human produced and AI produced poetry if they were more educated about poetry! the primary disconnect here is economic. Poets already couldn't make a fucking living making poetry, and now any old schmuck can plug a prompt into chatgpt and say they wrote a sonnet. even though they always had the ability to sit down and write a sonnet!
boosters love to make hay about "deskilling" and "democratizing" and "making accessible" these supposedly gatekept realms of supposedly bourgeois expression, but what they're really saying (whether they know it or not) is that skill and training have no value anymore. and they have been saying this since long before AI as we know it now existed! creative labor is the backbone of so much of our world, and yet it is commonly accepted as a poverty profession. i grew up reading books and watching movies based on books and hearing endless conversation about books and yet when i told my family "i want to be a writer" they said "that's a great way to die homeless." like, this is where the conversation about AI's impact starts. we already have a culture that simultaneously NEEDS the products of artistic labor, yet vilifies and denigrates the workers who perform that labor. folks see a comic panel or a corporate logo or a modern art piece and say "my kid could do that," because they don't perceive the decades of training, practice, networking, and experimentation that resulted in the finished product. these folks do not understand that just because the labor of art is often invisible doesn't mean it isn't work.
i think this entire conversation is backwards. in an ideal world, none of this matters. human labor should not be valued over machine labor because it inherently possesses an aura of human-ness. art made by humans isn't better than AI generated art on qualitative grounds. art is subjective. you're not wrong to find beauty in an AI image if the image is beautiful. to my mind, the value of human artistic labor comes down to the simple fact that the world is better when human beings make art. the world is better when we have the time and freedom to experiment, to play, to practice, to develop and refine our skills to no particular end except whatever arbitrary goal we set for ourselves. the world is better when people collaborate on a film set to solve problems that arise organically out of the conditions of shooting on a live location. what i see AI being used for is removing as many opportunities for human creativity as possible and replacing them with statistical averages of prior human creativity. this passes muster because art is a product that exists to turn a profit. because publicly traded companies have a legal responsibility to their shareholders to take every opportunity to turn a profit regardless of how obviously bad for people those opportunities might be.
that common sense says writing poetry, writing prose, writing anything is primarily about reaching the end of the line, about having written something, IS the problem. i've been going through the many unfinished novels i wrote in high school lately, literally hundreds of thousands of words that i shared with maybe a dozen people and probably never will again. what value do those words have? was writing them a waste of time since i never posted them, never finished them, never turned a profit off them? no! what i've learned going back through those old drafts is that i'm only the writer i am today BECAUSE i put so many hours into writing generic grimdark fantasy stories and bizarrely complicated werewolf mythologies.
you know i used to do open mics? we had a poetry group that met once a month at a local cafe in college. each night we'd start by asking five words from the audience, then inviting everyone to compose a poem using those words in 10 to 15 minutes. whoever wanted to could read their poem, and whoever got the most applause won a free drink from the cafe. then we'd spend the rest of the night having folks sign up to come and read whatever. sometimes you'd get heartfelt poems about personal experiences, sometimes you'd get ambitious soundcloud rappers, sometimes you'd get a frat guy taking the piss, sometimes you'd get a mousy autist just doing their best. i don't know that any of the poetry i wrote back then has particular value today, but i don't really care. the point of it was the experience in that moment. the experience of composing something on the fly, or having something you wrote a couple days ago, then standing up and reading it. the value was in the performance itself, in the momentary synthesis between me and the audience. i found out then that i was pretty good at making people cry, and i could not have had that experience in any other venue. i could not have felt it so viscerally had i just posted it online. and i cannot wrap up that experience and give it to you, because it only existed then.
i think more people would write poetry if they had more hours in a day to spare for frivolities, if there existed more spaces where small groups could organize open mics, if transit made those spaces more widely accessible, if everyone made enough money that they weren't burned the fuck out and not in the mood to go to an open mic tonight, if we saw poetry as a mode of personal reflection which was as much about the experience of having written it as anything else. this is the case for all the arts. right now, the only people who can afford to make a living doing art are already wealthy, because art doesn't pay well. this leads to brain drain and overall lowering quality standards, because the suburban petty bouge middle class largely do not experience the world as it materially exists for the rest of us. i often feel that many tech CEOs want to be remembered the way andy warhol is remembered. they want to be loved and worshipped not just for business acumen but for aesthetic value, they want to get the kind of credit that artists get-- because despite the fact that artists don't get paid shit, they also frequently get told by people "your work changed my life." how is it that a working class person with little to no education can write a story that isn't just liked but celebrated, that hundreds or thousands of people imprint on, that leaves a mark on culture you can't quantify or predict or recreate? this is AI's primary use-case, to "democratize" art in such a way that hacks no longer have to work as hard to pretend to be good at what they do. i mean, hell, i have to imagine every rich person with an autobiography in the works is absolutely THRILLED that they no longer have to pay a ghost writer!
so, circling back around to the meat of your question. as far as telling people not to use AI because "you're just helping to train it," that ship has long since sailed. getting mad at individuals for using AI right now is about as futile as getting mad at individuals for not masking-- yes, obviously they should wear a mask and write their own essays, but to say this is simply a matter of millions of individuals making the same bad but unrelated choice over and over is neoliberal hogwash. people stopped masking because they were told to stop masking by a government in league with corporate interests which had every incentive to break every avenue of solidarity that emerged in 2020. they politicized masks, calling them "the scarlet letter of [the] pandemic". biden himself insisted this was "a pandemic of the unvaccinated", helpfully communicating to the public that if you're vaccinated, you don't need to mask. all those high case numbers and death counts? those only happen to the bad people.
now you have CEOs and politicians and credulous media outlets and droves of grift-hungry influencers hard selling the benefits of AI in everything everywhere all the time. companies have bent over backwards to incorporate AI despite ethics and security worries because they have a fiduciary responsibility to their shareholders, and everyone with money is calling this the next big thing. in short, companies are following the money, because that's what companies do. they, in turn, are telling their customers what tools to use and how. so of course lots of people are using AI for things they probably shouldn't. why wouldn't they? "the high school/college essay" as such has been quantized and stripmined by an education system dominated by test scores over comprehension. it is SUPPOSED to be an exercise in articulating ideas, to teach the student how to argue persuasively. the final work has little to no value, because the point is the process. but when you've got a system that lives and dies by its grades, within which teachers are given increasingly more work to do, less time to do it in, and a much worse paycheck for their trouble, the essay increasingly becomes a simple pass/fail gauntlet to match the expected pace set by the simple, clean, readily gradable multiple choice quiz. in an education system where the stakes for students are higher than they've ever been, within which you are increasingly expected to do more work in less time with lower-quality guidance from your overworked teachers, there is every incentive to get chatgpt to write your essay for you.
do you see what i'm saying? we can argue all day about the shoulds here. of course i think it's better when people write their own essays, do their own research, personally read the assigned readings. but cheating has always been a problem. a lot of these same fears were aired over the rising popularity of cliffs notes in the 90s and 2000s! the real problem here is systemic. it's economic. i would have very little issue with the output of AI if existing conditions were not already so precarious. but then, if the conditions were different, AI as we know it likely would not exist. it emerges today as the last gasp of a tech industry that has been floundering for a reason to exist ever since the smart phone dominated the market. they tried crypto. they tried the metaverse. now they're going all-in on AI because it's a perfect storm of shareholder-friendly buzzwords and the unscientific technomythology that's been sold to laymen by credulous press sycophants for decades. It slots right into this niche where the last of our vestigial respect for "the artist" once existed. it is the ultimate expression of capitalist realism, finally at long last doing away with the notion that the suits at disney could never in their wildest dreams come up with something half as cool as the average queer fanfic writer. now they've got a program that can plagiarize that fanfic (along with a dozen others) for them, laundering the theft through a layer of transformation which perhaps mirrors how the tech industry often exploits open source software to the detriment of the open source community. the catastrophe of AI is that it's the fulfillment of a promise that certainly predates computers at the very least.
so, i don't really know what to tell someone who uses AI for their work. if i was talking to a student, i'd say that relying on chatgpt is really gonna screw you over when it comes time to take the SAT or ACT, and you have to write an essay from scratch by hand in a monitored environment-- but like, i also think the ACT and SAT and probably all the other standardized tests shouldn't exist? or at the very least ought to be severely devalued, since prep for those tests often sabotages the integrity of actual classroom education. although, i guess at this point the only way forward for education (that isn't getting on both knees and deep-throating big tech) is more real-time in-class monitored essay writing, which honestly might be better for all parties anyway. of course that does nothing to address research essays you can't write in a single class session. to someone who uses AI for research, i'd probably say the same thing as i would to someone who uses wikipedia: it's a fine enough place to start, but don't cite it. click through links, find sources, make sure what you're reading is real, don't rely on someone else's generalization. know that chatgpt is likely not pulling information from a discrete database of individual files that it compartmentalizes the way you might expect, but rather is a statistical average of a broad dataset about which it cannot have an opinion or interpretation. sometimes it will link you to real information, but just as often it will invent information from whole cloth. honestly, the more i talk it out, the more i realize all this advice is basically identical to the advice adults were giving me in the early 2000s.
which really does cement for me that the crisis AI is causing in education isn't new and did not come from nowhere. before chatgpt, students were hiring freelancers on fiverr. i already mentioned cliffs notes. i never used any of these in college, but i'll also freely admit that i rarely did all my assigned reading. i was the "always raises her hand" bitch, and every once in a while i'd get other students who were always dead silent in class asking me how i found the time to get the reading done. i'd tell them, i don't. i read the beginning, i read the ending, and then i skim the middle. whenever a word or phrase jumps out at me, i make a note of it. that way, when the professor asks a question in class, i have exactly enough specific pieces of information at hand to give the impression of having done the reading. and then i told them that i learned how to do this from the very same professor that was teaching that class. the thing is, it's not like i learned nothing from this process. i retained quite a lot of information from those readings! this is, broadly, a skill that emerges from years of writing and reading essays. but then you take a step back and remember that for most college students (who are not pursuing any kind of arts degree), this skillset is relevant to an astonishingly minimal proportion of their overall course load. college as it exists right now is treated as a jobs training program, within which "the essay" is a relic of an outdated institution that highly valued a generalist liberal education where today absolute specialization seems more the norm. so AI comes in as the coup de grâce to that old institution. artists like myself may not have the constitution for the kind of work that colleges now exist to funnel you into, but those folks who've never put a day's thought into the work of making art can now have a computer generate something at least as good at a glance as basically anything i could make.
as far as the market is concerned, that's all that matters. the contents of an artwork, what it means to its creator, the historic currents it emerges out of, these are all technicalities that the broad public has been well trained not to give a shit about most of the time. what matters is the commodity and the economic activity it exists to generate.
but i think at the end of the day, folks largely want to pay for art made by human beings. that it's so hard for a human being to make a living creating and selling art is a question far older than AI, and whose answer hasn't changed. pay workers more. drastically lower rents. build more affordable housing. make healthcare free. make education free. massively expand public transit. it is simply impossible to overstate how much these things alone would change the conversation about AI, because it would change the conversation about everything. SO MUCH of the dominance of capital in our lives comes down to our reliance on cars for transit (time to get a loan and pay for insurance), our reliance on jobs for health insurance (can't quit for moral reasons if it's paying for your insulin), etc etc etc. many of AI's uses are borne out of economic precarity and a ruling class desperate to vacuum up every loose penny they can find. all those billionaires running around making awful choices for the rest of us? they stole those billions. that is where our security went. that is why everything is falling apart, because the only option remaining to *every* institutional element of society is to go all-in on the profit motive. tax these motherfuckers and re-institute public arts funding. hey, did you know the us government used to give out grants to artists? did you know we used to have public broadcast networks where you could make programs that were shown to your local community? why the hell aren't there public youtube clones? why aren't there public transit apps? why aren't we CONSTANTLY talking about nationalizing these abusive fucking industries that are falling over themselves to integrate AI because their entire modus operandi is increasing profits regardless of product quality?
these are the questions i ask myself when i think about solutions to the AI problem. tech needs to be regulated, the monopolies need breaking up, but that's not enough. AI is a symptom of a much deeper illness whose treatment requires systemic solutions. and while i'm frustrated when i see people rely on AI for their work, or otherwise denigrate artists who feel AI has devalued their field, on some level i can't blame them. they are only doing what they've been told to do. all of which merely strengthens my belief in the necessity of an equitable socialist future (itself barely step zero in the long path towards a communist future, and even that would only be a few steps on the even longer path to a properly anarchist future). improve the material conditions and you weaken the dominance of capitalist realism, however minutely. and while there are plenty of reasons to despair at the likelihood of such a future given a second trump presidency, i always try to remember that socialist policies are very popular and a *lot* of that popularity emerged during the first trump administration. the only wrong answer here is to assume that losing an election is the same thing as losing a war, that our inability to put the genie back in its bottle means we can't see our own wishes granted.
i dunno if i answered your question but i sure did say a lot of stuff, didn't i?
the-infinizyck · 4 months ago
If you use Generative AI like ChatGPT to write literally anything, you’re not a writer. Your “work” isn’t yours. Using GenAI to write is plagiarism.
If you use GenAI to make art, you are not an artist and that “art” doesn’t belong to you. Using GenAI to make art is art theft.
I get that writing is hard. It’s incredibly difficult. Art too. It’s HARD. But nearly all GenAI is trained on stolen work, and if you use it & defend its use, you’re nothing more than a thief.
Pay artists. Hire ghost writers or find a co-writer. Don’t use GenAI. Don’t be a thieving jackass.
probably-not-an-alligator · 1 month ago
No because actually not using AI DOES make me better than you. I am better than AI users. Morally AND intellectually. Also I’m sexier than AI users. Go tell chat GPT to cry about it for you.