#ai tools vs ai art
valla-chan · 2 years
Text
Something I can't stand:
The proliferation of grift culture and "venture capitalist" techbros using AI as a way to steal and resell other people's art has completely ruined the awe and appreciation we had of early AI experiments that didn't hurt anyone, and has inevitably kneecapped those types of projects' continued existence
What am I talking about..? Stuff like background removers, upscalers, vocal removers, frame interpolators, tools to make art seamlessly tile, fucked up nightmare distortion images, AI based vocaloids (made from paid and consenting voice actors), comedic deepfakes, singing faces (think Dame da ne)
The fun stuff, that is either too poor quality to pass for a human creation, or is merely a free tool designed to fill a very specific niche in one step of art creation, or doesn't function in the way we currently think of AI and is made by a closed group of people all being paid for their contributions (eg vocaloid, of which the AI voicebanks are created and used almost the exact same way as they always have, with manual tuning and songwriting, employed voice actors, and the AI part being used to blend phonemes rather than create something from scratch)
Unfortunately I think it was kinda inevitable that AI turned into what it is, because for a lot of high profile people, the grift never ends. We are finding out more and more that those fun, mostly harmless tools they gave us at the beginning as our introduction to AI are being swiftly deprecated and paywalled, because they always intended to fuck us over once it became viable to do so.
And I think, based on the conversations I've had, at first a lot of people didn't really have opinions towards their art going into the AI mush machine, because what it generated was silly and unmarketable. It was deformed rooms with dog faces and scrungly trees. It was a sloppy mess most people considered harmless. And that's how they eased their way into turning theft from artists into something profitable for them, to be sold back to people who want to be artists without the effort. Or maybe not even that. Some don't give a shit about art, and just know that other people like art enough to give the lowest seller lots of money, and use AI as a grifting tool to undercut artists for Lamborghini money.
And now that we know that the people who made our background removers, our image upscalers, our wacky gray goo karl-marx-getting-slimed-at-the-kids-choice-awards generators, our funny singing face tech, etc have taken those data sets that many initially didn't even think twice about, and turned them on us for profit with other tools, there is no way in hell anyone will ever trust a movement like this again.
They played us, and are now crying and wiping their eyes with money about how we feel betrayed and stolen from. Capitalism and grift culture have ensured that the only way that AI could go is towards a point of automating out the emotion, work, quality, and ownership of art itself.
Have you noticed that while these techbro AI image generators have surged in quality, background removers and upscalers have stagnated in quality upgrades? Silly tools like making faces sing have been abandoned in the dust. The actual TOOLS that can be justified— those made from controlled datasets of a specific variety that are designed to help actual artists— have fallen completely out of favor in exchange for a focus on a low effort, generalized art-replacement that can be sold to anyone lazy or conniving enough to buy into it. And maybe the worst part is they're being made by the same companies, sometimes even using the same datasets they hid in the initially harmless tools that we once trusted to remain as such.
It's been so sad seeing the future of hyperspecific tools to help artists turn into this ratrace towards art replacement. I watched many of our opinions shift over time as the culture and tech did. I wrote this because I had a brief recall of "the fun early days" and how people were generally okay with things like that, and how a fair amount of people still are with low-level machine learning tools that aid specific processes rather than replacing them. I don't really have a point to make here. We were once again fucked over by the people who promised to make our work and lives easier, and then raged at for hating them for using that sentiment to abandon those projects in favor of stealing from us on a wide scale.
66 notes
gender-trash · 2 years
Text
incredibly funny how a bunch of people interpreted “ao3 was almost certainly scraped as part of the gpt training dataset because it’s a big easily accessible body of english language text, so you can prompt gpt with surprisingly vague stuff and it will autocomplete with snarry underage or wangxian a/b/o” as “elon musk Personally is Currently scraping ao3 and training an ai to plagiarize fic, going to go lock ALL my works on ao3 IMMEDIATELY”
its. its already in the dataset. how do you think these things work. “locking my works to registered users only until after the scraping stops!” my dude the ao3 team just needs to like add a robots.txt and check the useragent and stuff to prevent this from happening in the future*, and theyre already on it, but not only is the existing body of work presumably In the Dataset, the model has ALREADY BEEN TRAINED. that omelet isnt going to get unscrambled
(*im assuming that everyone gathering datasets for large language models is being reasonably Polite about it bc these are both very simple to circumvent — if this assumption is false then ao3 might need to graduate to Offensive Measures but also we would definitely need to bully the culprits off of hacker news)
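(for reference, the robots.txt half of that really is that simple — here's a sketch of the kind of rules a site could serve. GPTBot and CCBot are the crawler names that OpenAI and Common Crawl actually document; the rest of the ruleset is illustrative, and per the above it only works on crawlers Polite enough to obey it:)

```
# robots.txt at the site root — a polite request, not an enforcement mechanism
User-agent: GPTBot      # OpenAI's documented training-data crawler
Disallow: /

User-agent: CCBot       # Common Crawl, a frequent source of LLM training data
Disallow: /

User-agent: *           # everyone else (search engines etc.) still welcome
Allow: /
```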
anyway im not taking any Stance one way or the other on the “ai art debate” (other than maybe “none of you know what the hell you’re talking about”) but we’re definitely going to see a whole new world of copyright claims against the big art models and ml researchers developing new tools for “removing” stuff from a trained model, and i for one think that it will be SO entertaining to watch
718 notes
ladyyomiart · 25 days
Text
Here's my version of the OCs vs Artist meme! ✍💖
🔸Favorite OC: My poor freckled daughter with self-esteem issues whose life I love to mess up with memory issues and vampiric apocalypses, Furukawa Chie (Hakuouki OC) and her mischievous Shiba Inu stray dog, Yokai.
🔸First OC: April Saito (Digimon OC), who I created when I was only 10 years old! April will grow up to become a Digimon rights activist, so I thought it'd be fun to draw her protesting with a "✅Yes to Digital Intelligence / ❌No to Artificial Intelligence" sign along with Gigimon, who has a matching anti-AI bandana.
🔸Fans' Favorite OC: Rice Ritz, head of the "Kame Warriors" (my troop of Time Patroller OCs from Dragon Ball Xenoverse), who used to be wildly popular back in 2018 when we ran amazing "OC vs OC" tournaments on DeviantArt.
🔸Easiest to Draw OC: Furukawa Kohana (Hakuouki OC), which caused my sister to go "What do you mean Kohana is the easiest one for you to draw?!". 😂 Maiko/Geiko are one of my special interests, so she's one of the few OCs I can draw from memory without having to consult a character sheet, lol.
🔸Hardest to Draw OC: Tani Sanjuro (Hakuouki OC), the arrogant yet skillful captain of the 7th Division of the Shinsengumi that both me and my fic's readers love to hate… and who has features as difficult to draw as his whole wardrobe!
🔸The Artist: Lady Yomi herself! 👀 Referenced from a real photo so that's why I'm wearing my noise cancelling headphones, haha.
11 notes
mrpuzzle · 7 months
Text
📄⚾️🧢✨
7 notes
Text
What kind of bubble is AI?
My latest column for Locus Magazine is "What Kind of Bubble is AI?" All economic bubbles are hugely destructive, but some of them leave behind wreckage that can be salvaged for useful purposes, while others leave nothing behind but ashes:
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
Think about some 21st century bubbles. The dotcom bubble was a terrible tragedy, one that drained the coffers of pension funds and other institutional investors and wiped out retail investors who were gulled by Super Bowl ads. But there was a lot left behind after the dotcoms were wiped out: cheap servers, office furniture and space, but far more importantly, a generation of young people who'd been trained as web makers, leaving nontechnical degree programs to learn HTML, Perl, and Python. This created a whole cohort of technologists from non-technical backgrounds, a first in technological history. Many of these people became the vanguard of a more inclusive and humane tech development movement, and they were able to make interesting and useful services and products in an environment where raw materials – compute, bandwidth, space and talent – were available at firesale prices.
Contrast this with the crypto bubble. It, too, destroyed the fortunes of institutional and individual investors through fraud and Super Bowl ads. It, too, lured in nontechnical people to learn esoteric disciplines at investor expense. But apart from a smattering of Rust programmers, the main residue of crypto is bad digital art and worse Austrian economics.
Or think of Worldcom vs Enron. Both bubbles were built on pure fraud, but Enron's fraud left nothing behind but a string of suspicious deaths. By contrast, Worldcom's fraud was a Big Store con that required laying a ton of fiber that is still in the ground to this day, and is being bought and used at pennies on the dollar.
AI is definitely a bubble. As I write in the column, if you fly into SFO and rent a car and drive north to San Francisco or south to Silicon Valley, every single billboard is advertising an "AI" startup, many of which are not even using anything that can be remotely characterized as AI. That's amazing, considering what a meaningless buzzword AI already is.
So which kind of bubble is AI? When it pops, will something useful be left behind, or will it go away altogether? To be sure, there's a legion of technologists who are learning TensorFlow and PyTorch. These nominally open source tools are bound, respectively, to Google and Facebook's AI environments:
https://pluralistic.net/2023/08/18/openwashing/#you-keep-using-that-word-i-do-not-think-it-means-what-you-think-it-means
But if those environments go away, those programming skills become a lot less useful. Live, large-scale Big Tech AI projects are shockingly expensive to run. Some of their costs are fixed – collecting, labeling and processing training data – but the running costs for each query are prodigious. There's a massive primary energy bill for the servers, a nearly as large energy bill for the chillers, and a titanic wage bill for the specialized technical staff involved.
Once investor subsidies dry up, will the real-world, non-hyperbolic applications for AI be enough to cover these running costs? AI applications can be plotted on a 2×2 grid whose axes are "value" (how much customers will pay for them) and "risk tolerance" (how perfect the product needs to be).
Charging teenaged D&D players $10/month for an image generator that creates epic illustrations of their characters fighting monsters is low-value and very risk-tolerant (teenagers aren't overly worried about six-fingered swordspeople with three pupils in each eye). Charging scammy spamfarms $500/month for a text generator that spits out dull, search-algorithm-pleasing narratives to appear over recipes is likewise low-value and highly risk-tolerant (your customer doesn't care if the text is nonsense). Charging visually impaired people $100/month for an app that plays a text-to-speech description of anything they point their cameras at is low-value and moderately risk-tolerant ("that's your blue shirt" when it's green is not a big deal, while "the street is safe to cross" when it's not is a much bigger one).
Morgan Stanley doesn't talk about the trillions the AI industry will be worth some day because of these applications. These are just spinoffs from the main event, a collection of extremely high-value applications. Think of self-driving cars or radiology bots that analyze chest x-rays and characterize masses as cancerous or noncancerous.
These are high value – but only if they are also risk-tolerant. The pitch for self-driving cars is "fire most drivers and replace them with 'humans in the loop' who intervene at critical junctures." That's the risk-tolerant version of self-driving cars, and it's a failure. More than $100b has been incinerated chasing self-driving cars, and cars are nowhere near driving themselves:
https://pluralistic.net/2022/10/09/herbies-revenge/#100-billion-here-100-billion-there-pretty-soon-youre-talking-real-money
Quite the reverse, in fact. Cruise was just forced to quit the field after one of their cars maimed a woman – a pedestrian who had not opted into being part of a high-risk AI experiment – and dragged her body 20 feet through the streets of San Francisco. Afterwards, it emerged that Cruise had replaced the single low-waged driver who would normally be paid to operate a taxi with 1.5 high-waged skilled technicians who remotely oversaw each of its vehicles:
https://www.nytimes.com/2023/11/03/technology/cruise-general-motors-self-driving-cars.html
The self-driving pitch isn't that your car will correct your own human errors (like an alarm that sounds when you activate your turn signal while someone is in your blind-spot). Self-driving isn't about using automation to augment human skill – it's about replacing humans. There's no business case for spending hundreds of billions on better safety systems for cars (there's a human case for it, though!). The only way the price-tag justifies itself is if paid drivers can be fired and replaced with software that costs less than their wages.
What about radiologists? Radiologists certainly make mistakes from time to time, and if there's a computer vision system that makes different mistakes than the sort that humans make, they could be a cheap way of generating second opinions that trigger re-examination by a human radiologist. But no AI investor thinks their return will come from selling hospitals a tool that reduces the number of X-rays each radiologist processes every day, as a second-opinion-generating system would. Rather, the value of AI radiologists comes from firing most of your human radiologists and replacing them with software whose judgments are cursorily double-checked by a human whose "automation blindness" will turn them into an OK-button-mashing automaton:
https://pluralistic.net/2023/08/23/automation-blindness/#humans-in-the-loop
The profit-generating pitch for high-value AI applications lies in creating "reverse centaurs": humans who serve as appendages for automation that operates at a speed and scale that is unrelated to the capacity or needs of the worker:
https://pluralistic.net/2022/04/17/revenge-of-the-chickenized-reverse-centaurs/
But unless these high-value applications are intrinsically risk-tolerant, they are poor candidates for automation. Cruise was able to nonconsensually enlist the population of San Francisco in an experimental murderbot development program thanks to the vast sums of money sloshing around the industry. Some of this money funds the inevitabilist narrative that self-driving cars are coming, it's only a matter of when, not if, and so SF had better get in the autonomous vehicle or get run over by the forces of history.
Once the bubble pops (all bubbles pop), AI applications will have to rise or fall on their actual merits, not their promise. The odds are stacked against the long-term survival of high-value, risk-intolerant AI applications.
The problem for AI is that while there are a lot of risk-tolerant applications, they're almost all low-value; while nearly all the high-value applications are risk-intolerant. Once AI has to be profitable – once investors withdraw their subsidies from money-losing ventures – the risk-tolerant applications need to be sufficient to run those tremendously expensive servers in those brutally expensive data-centers tended by exceptionally expensive technical workers.
If they aren't, then the business case for running those servers goes away, and so do the servers – and so do all those risk-tolerant, low-value applications. It doesn't matter if helping blind people make sense of their surroundings is socially beneficial. It doesn't matter if teenaged gamers love their epic character art. It doesn't even matter how horny scammers are for generating AI nonsense SEO websites:
https://twitter.com/jakezward/status/1728032634037567509
These applications are all riding on the coattails of the big AI models that are being built and operated at a loss in the hope of someday becoming profitable. If they remain unprofitable long enough, the private sector will no longer pay to operate them.
Now, there are smaller models, models that stand alone and run on commodity hardware. These would persist even after the AI bubble bursts, because most of their costs are setup costs that have already been borne by the well-funded companies who created them. These models are limited, of course, though the communities that have formed around them have pushed those limits in surprising ways, far beyond their original manufacturers' beliefs about their capacity. These communities will continue to push those limits for as long as they find the models useful.
These standalone, "toy" models are derived from the big models, though. When the AI bubble bursts and the private sector no longer subsidizes mass-scale model creation, it will cease to spin out more sophisticated models that run on commodity hardware (it's possible that federated learning and other techniques for spreading out the work of making large-scale models will fill the gap).
So what kind of bubble is the AI bubble? What will we salvage from its wreckage? Perhaps the communities who've invested in becoming experts in PyTorch and TensorFlow will wrestle them away from their corporate masters and make them generally useful. Certainly, a lot of people will have gained skills in applying statistical techniques.
But there will also be a lot of unsalvageable wreckage. As big AI models get integrated into the processes of the productive economy, AI becomes a source of systemic risk. The only thing worse than having an automated process that is rendered dangerous or erratic based on AI integration is to have that process fail entirely because the AI suddenly disappeared, a collapse that is too precipitous for former AI customers to engineer a soft landing for their systems.
This is a blind spot in our policymakers' debates about AI. The smart policymakers are asking questions about fairness, algorithmic bias, and fraud. The foolish policymakers are ensnared in fantasies about "AI safety," AKA "Will the chatbot become a superintelligence that turns the whole human race into paperclips?"
https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space
But no one is asking, "What will we do if" – when – "the AI bubble pops and most of this stuff disappears overnight?"
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/12/19/bubblenomics/#pop
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
--
tom_bullock (modified) https://www.flickr.com/photos/tombullock/25173469495/
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/
4K notes
pillowfort-social · 1 year
Text
ChatGPT Bot Block
Hey Pillowfolks!
We know many of you are still waiting on our official stance regarding AI-Generated Images (also referred to as “AI Art”) being posted to Pillowfort.  We are still deliberating internally on the best approach for our community as well as how to properly moderate AI-Generated Images based on the stance we ultimately decide on. We’re also in contact with our Legal Team for guidance regarding additions to the Terms of Service we will need to include regarding AI-Generated Images. This is a highly divisive issue that continues to evolve at a rapid pace, so while we know many of you are anxious to receive a decision, we want to make sure we carefully consider the options before deciding. Thank you for your patience as we work on this. 
As of today, 9/5/2023, we have blocked the ChatGPT bot from scraping Pillowfort. This means any writings you post to Pillowfort cannot be retrieved for use in ChatGPT’s Dataset. 
Our team is still looking for ways to provide the same protection for images uploaded to the site, but keeping scrapers from accessing images seems to be less straightforward than for text content. The biggest AI generators such as Stable Diffusion use datasets such as LAION, and as far as our team has been able to discern, it is not known what means those datasets use to scrape images or how to prevent them from doing so. Some sources say that websites can add metadata tags to images to prevent the img2dataset bot (which is apparently used by many generative image tools) from scraping images, but it is unclear which AI image generators use this bot vs. a different bot or technology. The bot can also be configured to simply disregard these directives, so it is unknown which scrapers would obey the restriction if it was added.
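For what it's worth, the metadata directives in question are sent as an HTTP response header rather than embedded in the image files themselves. A server-level sketch (nginx syntax; the directive names are the ones img2dataset's documentation lists as its defaults — and, as noted, a scraper configured to ignore them simply will):

```
# nginx: attach opt-out directives to image responses
location ~* \.(png|jpe?g|gif|webp)$ {
    add_header X-Robots-Tag "noai, noimageai, noindex, noimageindex";
}
```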
For artists looking to protect their art from AI image scrapers, you may want to look into Glaze, a tool designed by the University of Chicago to protect human artworks from being processed by generative AI.
We are continuing to monitor this topic and encourage our users to let us know if you have any information that can help our team decide the best approach to managing AI-Generated Images and Generative AI going forward. Again, we appreciate your patience, and we are working to have a decision on the issue of moderating AI-Generated Images soon.
Best, Pillowfort.social Staff
873 notes
sunderwight · 8 months
Text
y'know what, I think it's kind of interesting to bring up Data from Star Trek in the context of the current debates about AI. like especially if you actually are familiar with the subplot about Data investigating art and creativity.
see, Data can definitely do what the AI programs going around these days can. better than, but that's beside the point, obviously. he's a sci-fi/fantasy android. but anyway, in the story, Data can perfectly replicate any painting or stitch a beautiful quilt or write a poem. he can write programs for himself that introduce variables that make things more "flawed", that imitate the particular style of an artist, he can choose to either perfectly replicate a particular sort of music or to try and create a more "human" sounding imitation that has irregular errors and mimics effort or strain. the latter is harder for him than just copying, the same way it's more complicated to have an algorithm that creates believable "original" art vs something that just duplicates whatever you give it.
but this is not the issue with Data. when Data imitates art, he himself knows that he's not really creating, he's just using his computer brain to copy things that humans have done. it's actually a source of deep personal introspection for the character, that he believes being able to create art would bring him closer to humanity, but he's not sure if he actually can.
of course, Data is a person. he's a person who is not biological, but he's still a person, and this is really obvious from go. there's no one thing that can be pointed to as the smoking gun for Data's personhood, but that's normal and also true of everyone else. Data's the culmination of a multitude of elements required to make a guy. Asking if this or that one thing is what makes Data a person is like asking if it's the flour or the eggs that make a cake.
the question of whether or not Data can create art is intrinsically tied to the question of whether or not Data can qualify as an artist. can he, like a human, take on inspiration and cultivate desirable influences in order to produce something that reflects his view on the world?
yes, he can. because he has a view on the world.
but that's the thing about the generative AI we are dealing with in the real world. that's not like Data. despite being referred to as "AI", these are algorithms that have been trained to recognize and imitate patterns. they have no perspective. the people who DO have a perspective, the humans inputting prompts, are trying to circumvent the whole part of the artistic process where they actually develop skills and create things themselves. they're not doing what Data did, in fact they're doing the opposite -- instead of exploring their own ability to create art despite their personal limitations, they are abandoning it. the data sets aren't like someone looking at a painting and taking inspiration from it, because the machine can't be inspired and the prompter isn't filtering inspiration through the necessary medium of their perspective.
Data would be very confused as to the motives and desires involved, especially since most people are not inhibited from developing at least SOME sort of artistic skill for the sake of self-expression. he'd probably start researching the history of plagiarism and different cultural, historical, and legal standards for differentiating it from acceptable levels of artistic imitation, and how the use of various tools factored into it. he would cite examples of cultures where computer programming itself was considered a form of art, and court cases where rulings were made for or against examples of generative plagiarism, and cases of forgeries and imitations which required skill as good if not better than the artists who created the originals. then Geordi would suggest that maybe Data was a little bit annoyed that people who could make art in a way he can't would discount that ability. Data would be like "as a machine I do not experience annoyance" but he would allow that he was perplexed or struggling to gain internal consensus on the matter. so Geordi would sum it up with "sometimes people want to make things easy, and they aren't always good at recognizing when doing that defeats the whole idea" and Data would quirk his head thoughtfully and agree.
then they'd get back to modifying the warp core so they could escape some sentient space anomaly that had sucked the ship into intermediate space and was slowly destabilizing the hull, or whatever.
anyways, point is -- I don't think Data from Star Trek would be a big fan of AI art.
313 notes
txttletale · 9 months
Note
Your discussions on AI art have been really interesting and changed my mind on it quite a bit, so thank you for that! I don’t think I’m interested in using it, but I feel much less threatened by it in the same way. That being said, I was wondering how you felt about AI generated creative writing: not, like, AI writing in the context of garbage listicles or academic essays, but like, people who generate short stories and then submit them to contests. Do you think it’s the same sort of situation as AI art? Do you think there’s a difference in ChatGPT vs Midjourney? Legitimate curiosity here! I don’t quite have an opinion on this in the same way, and I’ve seen v little from folks about creative writing in particular vs generated academic essays/articles
i think that ai generated writing is also indisputably writing but it is mostly really really fucking awful writing for the same reason that most ai art is not good art -- that the large training sets and low 'temperature' of commercially available/mass market models mean that anything produced will be the most generic version of itself. i also think that narrative writing is very very poorly suited to LLM generation because it generally requires very basic internal logic which LLMs are famously bad at (i imagine you'd have similar problems trying to create something visual like a comic that requires consistent character or location design rather than the singular images that AI art is mostly used for). i think it's going to be a very long time before we see anything good long-form from an LLM, especially because it's just not a priority for the people making them.
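(sidebar, because 'temperature' is a real knob and not a metaphor: it divides the model's next-token scores before they're turned into probabilities, so a low temperature collapses the output onto the single most likely — i.e. most generic — continuation. a toy sketch, with made-up numbers:)

```python
import math

def next_word_probs(logits, temperature):
    """softmax over (score / temperature): low temperature sharpens the
    distribution toward the most likely token, high temperature flattens it."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# toy scores for the next word after "she" -- "said" is the boring favorite
logits = [3.0, 1.0, 0.5]             # "said", "whispered", "thundered"

low = next_word_probs(logits, 0.2)   # near-greedy: "said" swallows everything
high = next_word_probs(logits, 2.0)  # wilder: the long tail gets real odds
```

(commercial chat models run cool by default, which is part of why their prose all comes out sounding the same.)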
ultimately though i think you could absolutely do some really cool stuff with AI generated text if you had a tighter training set and let it get a bit wild with it. i've really enjoyed a lot of AI writing for being funny, especially when it was being done with tools like botnik that involve more human curation but still have the ability to completely blindside you with choices -- i unironically think the botnik collegehumour sketch is funnier than anything human-written on the channel. & i think that means it could reliably be used, with similar levels of curation, to make some stuff that feels alien, or unsettling, or etheral, or horrifying, because those are somewhat adjacent to the surreal humour i think it excels at. i could absolutely see it being used in workflows -- one of my friends told me recently, essentially, "if i'm stuck with writer's block, i ask chatgpt what should happen next, it gives me a horrible idea, and i immediately think 'that's shit, and i can do much better' and start writing again" -- which is both very funny but i think presents a great use case as a 'rubber duck'.
but yea i think that if there's anything good to be found in AI-written fiction or poetry it's not going to come from chatGPT specifically, it's going to come from some locally hosted GPT model trained on a curated set of influences -- and will have to either be kind of incoherent or heavily curated into coherence.
that said the submission of AI-written stories to short story mags & such fucking blows -- not because it's "not writing" but because it's just bad writing that's very very easy to produce (as in, 'just tell chatGPT 'write a short story'-easy) -- which ofc isn't bad in and of itself but means that the already existing phenomenon of people cynically submitting awful garbage to literary mags that doesn't even meet the submission guidelines has been magnified immensely and editors are finding it hard to keep up. i think part of believing that generative writing and art are legitimate mediums is also believing they are and should be treated as though they are separate mediums -- i don't think that there's no skill in these disciplines (like, if someone managed to make writing with chatGPT that wasnt unreadably bad, i would be very fucking impressed!) but they're deeply different skills to the traditional artforms and so imo should be in general judged, presented, published etc. separately.
211 notes
bamsara · 1 year
Note
Oooo, for the dialogue prompts "you should have thought about that before you got into a fight" and "I only wanted to help"
I love your works! Your art looks like itd taste like sour patch kids, v nice!! ^^
Sun (Mostly) Centric | Wordcount: 1,147 | AO3 Version
The world has not yet adjusted to the flood of robots merging with day-to-day society.
At least, not in the form they had taken prior. To say that there was some backlash would be underselling it; the arguments cut to the core of humanity vs. machine, despite the clarity that those walking alongside them weren't just AI made to mimic human traits and personality, but sentient beings that develop their own. There's a difference between a chatbot app and your next-door neighbor who just so happens to be made out of metal.
Still, there is progress as much as there are incidents. A recent ruling states that all robots don't need to look human in order to receive the same amount of respect and rights (which is fantastic for all of Fazbear's line up of robots, considering they were animals in nature and all, in all franchises and pizza plexes across the country) but there were...incidents too, some of them making the news.
So when you're out doing some quick shopping for groceries one day and a stranger with a taut face and a sour attitude starts heckling Sun, and that heckling turns to harassment, and thus turns into him reaching for the back of the animatronic's head and pulling at the vulnerable wires there, you clock him.
Hard, actually. Your knuckles hurt like a bitch, but you don't have time to shake the feeling out from your hand because the guy sends one right back and oh, there you go, tumbling in the aisle and knocking baking soda and sugar and other cake ingredients off the shelf as the two of you yell profanities and arguments while Sun has a metaphorical loading symbol over his head while he processes the last five seconds.
Now you're both banned from that store. The other guy is too, thankfully. Still sucks though. You didn't get to check out the ingredients for the cake.
"You're a real mess." Sun scolds you, dipping the rag back into the warm water, and bringing it back up to your face. He dabs at the dried blood under your eye, careful not to rub too harshly so as to not irritate the darkening skin beneath it. "Honestly. That could have gone so much worse-"
"Like pulling wires out of your head?" You interrupt. You're not too keen about the bathroom being turned into a lecture hall, and the lid of the toilet seat being your 'time-out' spot as he tends to you. "Yeah, sure. I'll just let the stranger rip out what is essentially your brain cords out of your flat skull and be fine with it."
Sun shoots you a look. The default smile is strained.
"What?" You hiss in the silent pause, and not because of the sting of your eye. "All I'm saying is that this-" A point to your face, "-is preferable than the other outcome."
"Our wires are welded in with steel, so I highly doubt a human could rip them out without some sort of power tool." Sun tuts. "You remember Parts n Service."
He had a point. The machine in Parts n Service did weld his arm back into place at the time, and all the other repairs since then didn't go without some sort of heat tool to make sure everything was properly molded in place. Still, you frown. "It's still fucked up that he did that, though."
"Language."
"We didn't even get the cake mix." A light dab on the eye, you bite your tongue as Sun clears the last of the dried blood from the area. "Shouldn't have banned us. Now we have to go across town to get groceries."
Sun pulls back the rag, stained pink and light brown with old blood, dropping it in the sink to be washed later. "You should have thought about that before getting into a fight."
"I was only trying to help!" You defend, continuing as Sun pulls out the disinfectant in a rather knowing manner. The cut underneath your eye from the guy's ring was about to sting like hell. "And it's not like I was the one who started it!"
He pours a dab of alcohol onto a cotton ball retrieved from the first aid kit, a small puff of white in between large silicone fingers; it's almost comical how he pinches it into place before crouching back down, the cotton ball hovering over your face. "Hush. This is going to sting."
Your mouth thins at the underlying tone of Moon's voice in his scolding, leaning away from the offending ball. "You're such a hypocrite."
A hand comes underneath your chin to hold you in place, thumb pressed into your jawline. "Stop whining."
"How would you feel, huh?" You wrinkle your nose as the disinfectant ball comes closer. "What would you do if someone attacked me like that?"
The cotton ball presses against the cut and you flinch, hard enough that your shoulders hike up and your neck tenses. It stings like hell, searing for a moment before dulling to an aching throb, a hiss in the back of your dry throat.
The Daycare Attendant's thumb keeps in place for a second, then pulls it away, expression unreadable. "The same thing we did the last time someone tried."
You grit your teeth, pressing your lips into a thin line as the stinging starts to fade.
"Though," He continues, pulling the cotton ball away and tossing it into the trash. "While your help is appreciated, it would be very much appreciated if we were to avoid something like that in the future!" He waves his hands, the bright smile returning, and Sun's fingers go behind your ear, pulling back out a colorful bandage. "I think it goes without saying that it makes me very sad to see you all hurt. Not fun at all!"
You blow hot air out of your nose in a huff as he applies the sticky bandage. "Hypocrite."
"There you are! Right as rain, dandy and peachy." Sun pulls back to observe his handiwork, and there's a slight pause. "Well, not quite. You've still got a bit of a shiner. I don't think I have a medicine for that one."
"It makes me look cool." You jest. "I look badass."
The animatronic sighs, heavy and loaded for a robot with no lungs, though his exasperation is evident in his voicebox. "Pulling my wires, our wires, please, you're constantly on them-" He's mumbling, quickly. Still talking even as he cradles your head gently by your jawline, and presses his faceplate to the skin above the black eye. "Afraid that's all I can give."
You wrinkle your nose, smiling. "I think a cake would be great too."
"Thanks to someone-" He starts, rising from a crouched position and taking your hand to help you stand. "It looks like we'll be ordering one from the bakery instead."
878 notes · View notes
reachartwork · 3 months
Note
Hey, really like your art and I'm sorry you're getting harassed over it. I'm pretty sure following you and seeing you use AI for serious artistic expression back when everyone was still using it to make pictures of Sonic and Shrek playing basketball is what inoculated me against the weird hatred everyone has of it now.
I wanted to ask about something I saw you say over on Twitter, in the context of not wanting to use AI to write because you can write without it: "most ai writing is slop because modern ai tools are just bad authors, but if you like slop then go nuts." Is this different from your opinion on AI-based visual art? I think most AI art is bad because most art in any medium is bad and AI has a particularly low barrier to entry, but I would have thought that someone who put the same effort into making AI-based writing as you put into AI-based visuals would also be able to make something worthwhile. Do you disagree? Is there something about modern LLM tools that makes them less suitable to actual artistic expression than current image generation tech?
i think if someone put real elbow grease into making an ai that writes good it would be even harder to distinguish between that and just normal writing - we're more sensitive to weird images vs weird language. but also, like... most modern ai art tools are competent artists, but most modern ai writing tools are not competent fiction writers, so the "bare minimum" slop that someone gets with zero effort or engagement is like... a lesser... slop... quality? the minimum slop is worse.
does that make sense?
74 notes · View notes
the-hydroxian-artblog · 6 months
Note
I love your animatronic toy OC guys so much, they have so much personality to them and their colours are really good (especially umbra)
Thank you! The funny thing about Umbra's design was that while I was developing it about two years ago and had some colors in mind, I described in text what I already came up with to an image generator for fun (shitty unconvincing old kind, vs now where it looks like shit but in a somewhat more convincing way) and it produced something so silly that I made her design better than what I would've settled with out of spite.
More details of my process and anti-AI ranting below the cut, so the examples given won't show up on search results. Google Images is getting polluted too much with slop to begin with.
Let's begin.
Tumblr media
In 2022 I was drafting up Umbra's design with mostly concrete details. At this time image generators were newer and much less convincing, and I was a bit less aware of just how unethical they were, so I fed one a text description of what I had drafted for her design out of curiosity. Something along the lines of, "doll of an anthropomorphic owl librarian in glasses, blazer/suit jacket, skirt, corset, high heels, sitting on a bookshelf" and probably a few more terms. Really specific, lengthy prompt.
I try to be open-minded and give new things a shot, but the results were Not Great. Ideally, I'd want to not share the AI pictures at all on principle, but I feel like it's useful, transparent, and necessary to show them. Both as a means of not hiding anything, but also just to appreciate where the design is at in spite of it.
Outside of this particular collage of Weird Owls, no other pictures on this blog are AI-generated. AI Image Generation is harmful, and I am against its usage.
Tumblr media
But hey, two of the generated pictures look close, right? The top left is the closest, and bottom right is second.
That's because they started out worse, and I had to actually erase chunks of them and have the generator fill in the blanks to get anything remotely close to what I wanted. Misshapen limbs, unrecognizable anatomy, fever-dream clothing details, etc. They didn't even have a corset or proper legs until I slapped the generator in the face enough times to make it produce them. I was just using it to photobash, which was such an annoying process that I just went "this is dumb" and stopped. They're literally posed like that because I kept erasing and regenerating their limbs until they looked vaguely in-character. It literally only looks passable thanks to STRANGLING it with human input.
Before I used the image generator, I already drafted her to be night-themed with yellow eyes and something like purple, dark blue, or sky-blue as her main color; the generator making one owl yellow-eyed and purple was a happy coincidence, and the only thing the generative AI "came up with" that I didn't already have in mind or included in the prompt was the light blue shirt, which I did adapt into her cyan shirt and stockings/socks as well. That was a good call. You get One Point, Mr. AI.
...Which still meant that at its absolute best, it was a largely redundant step in the creative process if its contribution was worse than what a randomized palette generator or character creator could come up with.
That's already putting the ethics of it aside, like carbon emissions, data pollution, using artists' and photographers' work without credit or permission, the incentive to plagiarize, flooding sites like deviantart with slop, Willy Wonka Shit, etc etc etc. When people say "you can use AI as a tool though", this ordeal was enough to convince me that it's more trouble than it's worth, even in its most ethical usage. I feel gross for having even tried. I wish I knew what sources went into the creation of those Weird Owls. It'd be better for research if the right people could be credited.
Nothing else on this blog is AI-generated or ever will be. The art below is purely my own (2022 vs a few weeks ago):
Tumblr media Tumblr media
Actually drawing Umbra and solidifying her design was far more rewarding than having an image generator vaguely approximate my own ideas. I wanted her to look really special, so I used a black cape and pants, gold highlights and buttons, and blue undertones to make something more distinct. Also, neck floof. Very important. I wanted the head in particular to look distinct and original, going with bold black streaks to really help her look distinguished.
I also have certain inevitable Hydroisms for Fancy characters like her; most apparent in these designs for Chasey and Kaita from even longer ago, which were more of an influence than anything else. (Old art of mine from like 2021, Kaita ref looks wonky but Chasey still holds up nicely):
Tumblr media Tumblr media
Most of Umbra's other design elements were already commonly used with established ocs like Kaita, like her shape language, corset, skirt, heels, etc. It was my previous work with Chasey that inspired the use of gold buttons and highlights.
Umbra is also now a bluer shade of purple partly to distance the current design from that ordeal. All things considered, I'll probably make her more indigo next time. I already wanted her to have a wide color range from the get-go (Featured below is, again, purely my art from 2022:)
Tumblr media
I may use a different colored shirt and stockings in the future. I like to think she has many different shirts and clothes based on the different stages of the night sky, from dusk to dawn, and the painting I made in the top right there was an exploration of her range in different lighting.
All in all, it's frustrating. I'm proud of her design, but explaining all of this is annoying, because it's technically all relevant to showing how her colors were picked and how the design was made. I still technically have AI to """Thank""", in the way you thank a bad experience for encouraging you to make things better out of spite.
111 notes · View notes
chubs-deuce · 20 days
Note
I understand the appeal of AI and maybe ai can be used as a proper tool for animators but personally, I prefer an artist hard work and dedication then something made by a greedy corporation. Love your art, by the way, keep it up.
Partially disagreeing with the first half of this bc to me art made by humans vs art made by an AI generator is not a matter of personal preference but entirely one of ethics, since the entire appeal of it is that it's faster and cheaper than hiring actual artists (who they steal from blatantly)
And imo unless AI gets some really damn strict regulations on where it's legally allowed to source its training data from and place limitations on how it may be applied in a commercial project, I will continue to consider it a blight on society that serves nobody except greedy corporate cheapskates who don't want to pay for human labor and lazy techbros that want to feel special by forcibly inserting themselves into artists' spaces without even understanding the purpose of art...
Pardon the strong language, I do overall appreciate the sentiment you're proposing here, but this happens to be a topic that's close to my heart and as such I have some pretty strong opinions on the matter that I felt needed expressing 😅
Have a nice day and take care <3
25 notes · View notes
kradeelav · 1 month
Text
Tumblr media
the tl;dr
IRON CROWN as a free comic is now off of wordpress and can be viewed via a neat, robust HTML/CSS/JS comic template called rarebit! effectively nothing has changed for the reader, beyond expecting a little more reliability of uptime over the years.
all comic pages and previously paywalled patreon posts can also be downloaded in this art dump for free, as mentioned in the new author's notes.
the long story:
When talking shop about site/platform moves under this handle, I think it's useful to realize that we (taboo) kink artists live in an actively adversarial internet now, compared to five years ago.
meaning that we have to live with an expectation that 99% of platforms (including registrars and hosting, let alone sns sites) will ban/kick us without warning. this might explain the overly cautious/defensive way we discuss technologies - weighing how likely (and easily) the tool can be used against us vs the perks.
for example: has a harassment mob bullied the platform owners into quietly dropping lolisho artists? trans artists? does the platform/technology have a clear, no-bullshit policy on drawn kink art (specifically third rail kinks like noncon)? does the platform have a long history of hosting r18 doujin artists/hentai publishers with no issue? does the company operate in a nation unfriendly to specific kinks (eg fashkink artists fundamentally incompatible with companies based in germany, when other kinks might be OK?). i talk with a few different groups of artists daily about the above.
but that gets tiring after a while! frankly, the only path that's becoming optimal long-term is (a) putting kink art on your personal site, and if possible, (b) self hosting the whole thing entirely, while (c) complementing your site with physical merch since it's much harder to destroy in one go.
with that said - I've been slowly re-designing all of my pages/sub-domains as compact 'bug out bags'. lean, efficiently packed with the essentials, and very easy to save and re-upload to a new host/registrar near instantly (and eventually, be friendly to self-hosting bandwidth costs since that's now a distant goal).
how does this look in theory, you ask?
zero dependencies. the whole IRON CROWN comic subdomain is three JS files, a few HTML files, one CSS file, and images. that's it.
no updates that can be trojan horse'd. I'm not even talking about malware, though that's included; I'm talking about wordpress (owned by the same owners as tumblr, cough) slipping in AI opt-outs in a plug-in that's turned on by default. I used to think wordpress was safe from these shenanigans because wordpress-as-a-CMS could be separate from wordpress-as-a-domain; I was wrong. they'll get you through updates.
robust reliability through the KISS principle. keep it simple stupid. malware/DDOS'ing has an infinitely harder time affecting something that doesn't have a login page/interactive forms. You can't be affected by an open source platform suddenly folding, because your "starter" template is contained files saved on your desktop (and hopefully multiple backups...). etc.
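to make the "three JS files" point concrete, here's a sketch of the kind of dependency-free navigation logic a rarebit-style template boils down to — the filenames and function name below are hypothetical, for illustration, not rarebit's actual code:

```javascript
// a flat list of page images is the whole "database".
// no CMS, no build step, no login — just a plain <script> on each page.
const pages = ["page01.png", "page02.png", "page03.png"]; // hypothetical filenames

// given the current page index, work out where "prev" and "next" should point.
// null means "hide that button" (first page has no prev, last page has no next).
function pageNav(pages, index) {
  return {
    prev: index > 0 ? pages[index - 1] : null,
    next: index < pages.length - 1 ? pages[index + 1] : null,
  };
}

console.log(pageNav(pages, 1)); // a middle page links both ways
```

because the whole thing is static data plus one pure function, there's nothing to update, nothing to log into, and nothing for a platform to quietly break — you can save the files and re-upload them to a new host near instantly.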
so how does this look in practice?
To be fair, you're often trading convenient new shiny UI/tools for a clunkier back-end experience. but i think it's a mistake to think your art site has to look like an MIT professor's page from 1999.
with IRON CROWN, I've effectively replicated it from a (quite good) comic template in wordpress to 98% of the same layout in pure HTML/CSS/JS via rarebit. Should rarebit's website go "poof", I've got the initial zip download of the template to re-use for other sites.
I frankly have a hard time recommending rarebit for an actively updating webcomic since you personally might be trading too many advantages like SEO tools, RSS feeds, etc away - but for a finished webcomic that you want to put in "cold storage" - it's amazing. and exactly what I needed here.
43 notes · View notes
birdship · 3 months
Text
This project is unfinished and will remain that way. There are bugs. Not all endings are implemented. The ending tracker doesn't work. Images are broken. Nothing will be fixed. There's still quite a bit of content, though, so I am releasing what's here as is.
Tilted Sands is a project I started back when AI Dungeon first came out--the very early version you had to run in a Google Colab notebook. Sometime in late 2018, I think? I was a contributor at Botnik Studios at the time and I was delighted by AI Dungeon, but I knew it would never be a truly satisfying choose your own adventure generator on its own. I would argue that the modern AI Dungeon 2 and NovelAI don't fully function as such even now. That's not how AI works. It has to be guided heavily, the product has to be sculpted by human hands.
Anyway, it inspired me to use Transformer--a GPT2 predictive text writing tool--to craft a more coherent and polished but still silly and definitely AI-flavored CYOA experience. It was an ambitious project, but I was experienced with writing what I like to call "cyborg" pieces--meaning the finished product is, in a way, made by both an AI/algorithm/other bot AND a human writer. Something strange and wonderful that could not have been made by the bot alone, nor by the human writer alone. Algorithms can surprise us and trigger our creative human minds to move in directions we never would've thought to go in otherwise. To me, that's what actual AI art is: a human engaging in a creative activity like writing in a way that also includes utilizing an algorithm of some sort. The results are always fascinating, strangely insightful, and sometimes beautiful.
I worked on Tilted Sands off-and-on for a couple years, and then the entire AI landscape changed practically overnight with DALL-E and ChatGPT. And I soon realized that I cannot continue working on this project. Mainstream, corporate AI is disgustingly unethical and I don't want the predictive text writing I used to enjoy so much to be associated with "AI art". It's not. Before DALL-E and ChatGPT, there were artists and writers who made art by utilizing algorithms, neural networks, etc. Some things were perhaps in an ethical or legal grey area, but people actually did care about that. I remember discussing "would it be ethical to scrape [x]?" with other writers, and sharing databases of things like commercial advertising scripts and public domain content. I liked using mismatched databases to write things, like a corpus of tech product reviews that I used to write a song. The line between transformative art and fair use vs theft was constantly on all of our minds, because we were artists ourselves.
All of the artists and writers I knew in those days who made "cyborg art" have stopped by now. Including me.
But I poured a lot of love and thought and energy into this silly little project, and the thought of leaving it to rot on my hard drive hurt too much. It's not done, but there's a lot there--over 14,000 words, multiple endings and game over scenarios. I had so much fun with it and I wanted to complete it, but I can't. I don't want it to be associated in any way with the current "AI art" scene. It's not.
Please consider this my love letter to what technology-augmented art used to be, and what AI art could have been.
I know I'm not the only one mourning this brief but intense period from about 2014-2019 in which human creativity and developing AI technology combined organically to create an array of beautiful, stupid, silly, terrible, wonderful works of art. If you're also feeling sad and nostalgic about it, I hope you find this silly game enjoyable even in its unfinished state.
In conclusion:
Fuck capitalism, fuck what is currently called AI art, fuck ChatGPT, fuck every company taking advantage of artists and writers and other creative types by using AI.
24 notes · View notes
sharada-n · 1 year
Text
It's so frustrating that people are still going like "Why is AI-less whumptober a thing when the original whumptober people have said they discourage AI and won't reblog it 🤨" when in actuality:
Discouraging something is not the same as disallowing it. I'm not comfortable with an event that only 'discourages' the use of scraping AI instead of taking a stance against it
The reason whumptober said they want to discourage AI is because it 'feels like cheating', comically missing the point on why scraping AI should actually be disliked (ie that it's an inherently unethical tool made from cannibalized works of other creators, whose works are nonconsensually scraped in an act that should be considered art theft, and that has actively harmed the industry and other creators)
When this was pointed out, they went the ableist route of saying this sort of AI is fine because it's an accessibility tool. When educated on the difference between actual accessibility tools that use machine learning vs an AI built on scraping like ChatGPT, they accused me and several others of 'not actually being disabled'
They refused to admit that AI that scrapes content against creators' consent is art theft. They said that even if it was, using the tool made of art theft is not the same as committing art theft yourself because the content was scraped anyway so might as well use the AI then. They said AI art was only a debate in fandom and said it has never harmed actual, real creators' income. When given links to prove the contrary, this went ignored
When asked what creators should do if they don't want their content scraped by an AI, whumptober said 'then just don't post it online. If it's online, people are going to steal it'. Which is a common art theft stance
They blocked people speaking out against them and said we were harassing them, all while deleting all evidence of the above, so they could spin the story as "we can't check every entry for AI and that makes people mad :(" and not "we don't actually discourage AI at all, we just think it's cheating but not art theft, and we're being ableist"
At least one of the whumptober mods - possibly multiple - actively reblog AI art on their main a lot. So 'discouraging' my ass, they just want to make their event look good but clearly think AI art is okay
If you're against scraping AI, you should be against whumptober. Period.
Support @ailesswhumptober
55 notes · View notes
demonzoro · 7 months
Text
the more i look into discussions around ai art the more i feel like any model that justifies scraping art from the web because 'you consented when you posted it' is so disingenuous. i'm going to keep mulling this over and doing my research but what i keep circling back to is that ai art is a new medium that can make it more accessible for people to express themselves through art or a tool to help ideate/produce art quicker/with less resources (i.e. physical strain, time, cost of tools, + more).
in itself, it's acknowledging that art takes no small amount of time and effort to make, which is why generative ai is so attractive. as an illustrator, instinctively i don't want someone to use my art to create something that would've taken me years to learn how to do without me consenting/being compensated in some way. i understand that there's still a person coding, testing, and refining their prompts — but from what i understand, the goal of generative ai is to hone these models to get better and better at generating art with better accuracy in relation to the prompt, so as this technology progresses (which in itself isn't a bad thing, more on that later), the gap of time and error between a human taking inspiration from art they see and a generative model would become smaller and smaller. it feels like the advancement of ai counts on the labor of traditional artists, who will in the future have to compete with these same generative models, all while never being compensated or even asked whether their art could be used for the progression of generative models
i do believe that generative ai can be used as a tool to help ideate faster/better, or explore artistic styles that is unique to ai. i also think it would be interesting for an artist collective (that is properly briefed and compensated) to pool their art in order to train a model, in hopes of making ai art a more accessible medium in comparison to other mediums (like digital art vs. oil painting). but i don't agree with how many generative ais are trained and the purpose of why they are trained, which is to exploit and undermine traditional artists
25 notes · View notes