#ai tools vs ai art
satanfemme · 2 months ago
Text
just saw a post which was very insistent, in an authoritative way, that interacting with AI-assisted art in any context even just to mentally find meaning in it before moving on, or even (especially!) if you don't actually realize that it's AI-assisted before interacting, is a moral failing on ur part. and so according to op the only morally correct thing to do from now on for the rest of ur life is to thoroughly research every piece of art you come across, and to thoroughly research the artist of any piece of art you come across, and to - prior to appreciating any art - ensure just in case through thorough research that no AI was used at any point in the process at all, or else it's morally tainted the art forever. and how if you don't do this you, personally, will be partially responsible for the impending death of art itself. RIP art. born: the dawn of humankind, probably earlier. died: now I guess, because You specifically forgot to ruminate compulsions ruminate compulsions moral purity rumination over it prior to daring to resonate with something someone spent the time and effort to make. aren't you tired. aren't you literally tired.
30 notes
ladyyomiart · 10 months ago
Text
Tumblr media
Here's my version of the OCs vs Artist meme! ✍💖
🔸Favorite OC: My poor freckled daughter with self-esteem issues whose life I love to mess up with memory issues and vampiric apocalypses, Furukawa Chie (Hakuouki OC) and her mischievous Shiba Inu stray dog, Yokai.
🔸First OC: April Saito (Digimon OC), who I created when I was only 10 years old! April will grow up to become a Digimon rights activist, so I thought it'd be fun to draw her protesting with a "✅Yes to Digital Intelligence / ❌No to Artificial Intelligence" sign along with Gigimon, who has a matching anti-AI bandana.
🔸Fans' Favorite OC: Rice Ritz, head of the "Kame Warriors" (my troop of Time Patroller OCs from Dragon Ball Xenoverse), who used to be wildly popular back in 2018 when we ran amazing "OC vs OC" tournaments on DeviantArt.
🔸Easiest to Draw OC: Furukawa Kohana (Hakuouki OC), which caused my sister to go "What do you mean Kohana is the easiest one for you to draw?!". 😂 Maiko/Geiko are a special interest of mine, so she's one of the few OCs I can draw from memory without having to consult a character sheet, lol.
🔸Hardest to Draw OC: Tani Sanjuro (Hakuouki OC), the arrogant yet skillful captain of the 7th Division of the Shinsengumi that both my fic's readers and I love to hate… and who has features as difficult to draw as his whole wardrobe!
🔸The Artist: Lady Yomi herself! 👀 Referenced from a real photo so that's why I'm wearing my noise cancelling headphones, haha.
14 notes
arcticasylumrelic · 10 days ago
Text
cat saves a boy
1 note
shivaaiblog · 7 days ago
Text
🎥 MrBeast’s AI Thumbnail Tool Backfired — Here’s What Really Happened
🚀 The Idea That Sparked a Firestorm
In June 2025, MrBeast, one of YouTube’s biggest names, dropped a new tool through his analytics platform ViewStats — an AI-powered thumbnail generator. It was designed to help creators make “viral” thumbnails fast, without needing expensive design skills or software. Sounds helpful, right? Well… not everyone thought so. Instead of applause, it triggered a…
0 notes
dominaexmachina · 14 days ago
Text
Heroes of whining
It’s amazing to watch how AI suddenly turned ordinary artists into holy sufferers.
“You’re doing so well, still drawing… despite AI…”
Well. Maybe it’s time to create similar appreciation posts for engineers, doctors, teachers, accountants, designers, biologists, lawyers, logistics specialists, marketers — all of whom learn new tools, frameworks, new medicines, protocols, research reports, laws, requirements, and interfaces every year.
You’re doing great too. But no one tells you:
“Despite spreadsheets looming over us, you’re doing amazing, sweetie.”
Here’s the reality: If you use AI as a tool — not a fetish of fear — you save time on drafts, base structure, and composition. And you can invest that time into quality, detail, and your own artistic magic.
Let’s break it down:
How does an artist work?
(One complex art piece — from idea to completion)
1. Concept. What’s the idea? Mood? Scene? Who’s in it, where’s the light coming from?
2. Sketches. 5–15 mini roughs or thumbnails. On paper or digital. A few hours to a whole day.
3. Composition. Black-and-white value layout. Masses, rhythm, focus, readability.
4. References. Photos, 3D base poses, perspective, textures, clothing — often hours of search.
5. Linework. Base outlines, underpainting, color blocking. Only then the “real” part starts.
6. Light, color, depth. Building atmosphere, spatial layers, focal contrast.
7. Character detailing. Eyes, hands, fabrics, expressions, skin, drapery, texture.
8. Background. Architecture, environment, plants, particles, reflections.
9. Micro-details. Dust in the light, droplets, stitching, jewelry, tiny surface effects.
10. Final polish. Merging, cleanup, final light tweaks, subtle accents.
🕓 Total: 30–50 hours. And still — compromises. Artists get exhausted by step 3. Backgrounds get flatter. Composition gets simplified. Details trimmed down. Because time and energy are limited.
Now imagine:
Steps 1–5 are done by AI. The artist gives direction, prompts, edits, references. But now they have all 30–50 hours for steps 6–10. And they’re not tired.
What’s the result?
— All effort goes into depth and quality, not grinding through sketchwork
— More time for variations, storytelling, wild lighting decisions
— Space to polish every surface: metal, skin, reflections, mood
— Room for narrative in every corner of the scene
This isn’t laziness. It’s strategic effort management. The AI-assisted artist doesn’t skip the hard part — they just skip the routine part.
And yes — they both spent 40 hours. But one built from zero. The other built magic on top of a structure.
Now tell me: Which piece will be deeper? More detailed? More surprising? Which one will make people zoom in, explore, return to it again and again?
0 notes
niggadiffusion · 3 months ago
Text
The Soul in the Circuit: How Generative AI is Flipping the Script on Art
In the quiet corners of digital imagination, something wild is happening. Machines are sketching scenes that never were, spinning beats no one’s ever danced to, and weaving pixels into poetry. This is generative AI art—where creativity isn’t a solo act anymore. It’s a conversation between human intuition and machine intelligence, a new kind of collaboration unfolding at the edge of what we…
0 notes
ai-innova7ions · 9 months ago
Text
Is Meta's New AI Art Tool Revolutionizing Creativity?
Have you ever wondered if artificial intelligence could create art?
This intriguing question leads us to explore Meta's new AI art generator, a tool that promises to revolutionize our understanding of creativity. As technology advances rapidly, this innovative platform is making waves in the digital art world, captivating both artists and tech enthusiasts alike.
The process of using Meta's AI art generator is user-friendly and accessible to everyone. By simply inputting preferences like color schemes and themes, we can witness how quickly it generates stunning artwork. While traditional methods have their charm, AI-generated creations offer fresh perspectives on artistry.
Join us as we delve into the potential impact of this groundbreaking tool!
Tumblr media
#AIArt #MetaArtGenerator
Level up your art and your creativity with some of the leading Generative AI Platforms or with Meta AI.
Tumblr media
0 notes
yampuff · 9 months ago
Text
Tumblr media
I've been working on this one for some time! My thoughts on AI and AI-generated content!
0 notes
swampjawn · 4 months ago
Text
Look Back VS AI Art
This is a real frame from Look Back (2024).
Tumblr media
You might assume this made it into the final movie because of its director Kiyotaka Oshiyama (押山清高) doing HALF the key animation for the film and only fully finishing it A WEEK before its festival debut.
Tumblr media
And well, you might be partially right about that. But more importantly, this is the movie embodying its themes through its unconventional production process and the very lines on the screen!
In an age of digital tools, CGI, AI, and other combinations of letters ending in I, Look Back is an ode to art and the labor that goes into it, no matter how tedious or imperfect.
Tumblr media
Every thought, every little decision, every stroke made by a person puts a little piece of that person onto the screen, and the imperfections that come from that process can be beautiful in the sense that they're evidence of the thoughts and process that went into creating an image. So in keeping with the plot of the movie itself, Oshiyama made a point of leaving those remnants - lines that are scratchy, overlapping, or half-erased, and normally would have been cleaned up in 2nd key animation (第二原画).
Tumblr media Tumblr media
Ayumu Fujino has a tight grip on how she expresses herself, having this image to uphold as the perfect prodigy girl. She's afraid to let people see too much of her, lest that perfect image be shattered.
Tumblr media
But at times the mask does slip, like this moment of sheer panic after she accidentally drops what is really an extremely rude manga strip under her rival's door.
Tumblr media Tumblr media
And it's these moments when that rough imperfection shines through the most! So this breakdown of polish in the art functions simultaneously as both a connection to the human labor that went into creating it, AND an impressionistic representation of Fujino's mental state within the world of the movie.
Tumblr media
Not only are the edges of her backpack visible through her arm, her face even disappears completely, replaced by just the roughly sketched dividing lines that indicate the position of her eyes. At least personally, I never would have noticed this fully unfinished frame at full speed because the shot is just so well-executed! The framing is dramatic, with Fujino surrounded by these mountains of sketchbooks in the foreground, and the motion is so believable: her posture, hunched over to the side to support the weight of the bag while maneuvering around the books, and the way her legs twirl around each other frantically, rotating this way and that.
Tumblr media
But more importantly, this is a frame that an AI program would never draw, because it has no REASON to. There's no thought process, no decisions being made about how to express a feeling. Even if you did train an AI specifically to mimic these human imperfections, in Oshiyama's words, "It would just be a design. It would be a fake. The lines have meaning because they were drawn by humans. […] There's value in that." (MANTANWEB)
This is an adapted excerpt from this video! Go watch it or I'll dox you.
youtube
3K notes
ohnoitstbskyen · 1 month ago
Note
You’ve probably been asked this before, but do you have a specific view on ai-generated art? I’m doing a school project on artificial intelligence and, if it’s okay, I would like to cite you
I mean, you're welcome to cite me if you like. I recently wrote a post under a reblog about AI, and I did a video about it a while back, before the full scale of AI hype had really started rolling over the Internet - I don't 100% agree with all my arguments from that video anymore, but you can cite it if you please.
In short, I think generative AI art
Is art, real art, and it's silly to argue otherwise; the question is what KIND of art it is and what that art DOES in the world. Generally, it is boring and bland art which makes the world a more stressful, unpleasant and miserable place to be.
AI generated art is structurally and inherently limited by its nature. It is by necessity averages generated from data-sets, and so it inherits EVERY bias of its training data and EVERY bias of its training data validators and creators. It naturally tends towards the lowest common denominator in all areas, and it is structurally biased towards reinforcing and reaffirming the status quo of everything it is turned to.
It tends to be all surface, no substance. As in, it carries the superficial aesthetic of very high-quality rendering, but only insofar as it reproduces whatever signifiers of "quality" are most prized in its weighted training data. It cannot understand the structures and principles of what it is creating. Ask it for a horse and it does not know what a "horse" is; all it knows is what parts of its training data are tagged as "horse" and which general data patterns are likely to lead an observer to identify its output also as "horse." People sometimes describe this limitation as "a lack of soul" but it's perhaps more useful to think of it as a lack of comprehension.
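(A deliberately crude sketch of that tag-association point — nothing like a real diffusion model's architecture, just the "averages over labeled data" idea in toy form, with a made-up dataset:)

```python
import numpy as np

# hypothetical toy dataset: (image, tags) pairs standing in for a
# scraped-and-labeled training corpus
dataset = [
    (np.random.rand(64, 64, 3), {"horse", "field"}),
    (np.random.rand(64, 64, 3), {"horse", "white"}),
    (np.random.rand(64, 64, 3), {"dog", "park"}),
]

def generate(tag):
    # the "model" has no concept of what a horse IS; it only knows
    # which training images happened to carry the label "horse"
    matches = [img for img, tags in dataset if tag in tags]
    if not matches:
        raise ValueError(f"nothing in the training data is tagged {tag!r}")
    # blend everything carrying the label: quite literally the lowest
    # common denominator of the dataset
    return np.mean(matches, axis=0)

horse = generate("horse")  # an average over labeled data, not a "horse"
```

A real generator is vastly more sophisticated than a pixel average, but the dependency is the same: no tagged data, no "horse," and every bias in the labels flows straight through to the output — pattern matching without comprehension.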
Due to this lack of comprehension, AI art cannot communicate anything - or rather, the output tends to attempt to communicate everything, at random, all at once, and it's the visual equivalent of a kind of white noise. It lacks focus.
Human operators of AI generative tools can imbue communicative meaning into the outputs, and whip the models towards some sort of focus, because humans can do that with literally anything they turn their directed attention towards. Human beings can make art with paint spatters and bits of gum stuck under tennis shoes, of course a dedicated human putting tons of time into a process of trial and error can produce something meaningful with genAI tools.
The nature of genAI as a tool of creation is uniquely limited and uniquely constrained, a genAI tool can only ever output some mixture of whatever is in its training data (and what's in its training data is biased by the data that its creators valued enough to include), and it can only ever output that mixture according to the weights and biases of its programming and data set, which is fully within the control of whoever created the tool in the first place. Consequently, genAI is a tool whose full creative capacity is always, always, always going to be owned by corporations, the only entities with the resources and capacity to produce the most powerful models. And those models, thus, will always only create according to corporate interest. An individual human can use a pencil to draw whatever the hell they want, but an individual human can never use Midjourney to create anything except that which Midjourney allows them to create. GenAI art is thus limited not only by its mathematical tendency to bias the lowest common denominator, but also by an ideological bias inherited from whoever holds the leash on its creation. The necessary decision of which data gets included in a training set vs which data gets left out will, always and forever, impose de facto censorship on what a model is capable of expressing, and the power to make that decision is never in the hands of the artist attempting to use the tool.
tl;dr genAI art has a tendency to produce ideologically limited and intrinsically censored outputs, while defaulting to lowest common denominators that reproduce and reinforce status quos.
... on top of which its promulgation is an explicit plot by oligarchic industry to drive millions of people deeper into poverty and collapse wages in order to further concentrate wealth in the hands of the 0.01%. But that's just a bonus reason to dislike it.
2K notes
mostlysignssomeportents · 2 years ago
Text
What kind of bubble is AI?
Tumblr media
My latest column for Locus Magazine is "What Kind of Bubble is AI?" All economic bubbles are hugely destructive, but some of them leave behind wreckage that can be salvaged for useful purposes, while others leave nothing behind but ashes:
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
Think about some 21st century bubbles. The dotcom bubble was a terrible tragedy, one that drained the coffers of pension funds and other institutional investors and wiped out retail investors who were gulled by Superbowl Ads. But there was a lot left behind after the dotcoms were wiped out: cheap servers, office furniture and space, but far more importantly, a generation of young people who'd been trained as web makers, leaving nontechnical degree programs to learn HTML, perl and python. This created a whole cohort of technologists from non-technical backgrounds, a first in technological history. Many of these people became the vanguard of a more inclusive and humane tech development movement, and they were able to make interesting and useful services and products in an environment where raw materials – compute, bandwidth, space and talent – were available at firesale prices.
Contrast this with the crypto bubble. It, too, destroyed the fortunes of institutional and individual investors through fraud and Superbowl Ads. It, too, lured in nontechnical people to learn esoteric disciplines at investor expense. But apart from a smattering of Rust programmers, the main residue of crypto is bad digital art and worse Austrian economics.
Or think of Worldcom vs Enron. Both bubbles were built on pure fraud, but Enron's fraud left nothing behind but a string of suspicious deaths. By contrast, Worldcom's fraud was a Big Store con that required laying a ton of fiber that is still in the ground to this day, and is being bought and used at pennies on the dollar.
AI is definitely a bubble. As I write in the column, if you fly into SFO and rent a car and drive north to San Francisco or south to Silicon Valley, every single billboard is advertising an "AI" startup, many of which are not even using anything that can be remotely characterized as AI. That's amazing, considering what a meaningless buzzword AI already is.
So which kind of bubble is AI? When it pops, will something useful be left behind, or will it go away altogether? To be sure, there's a legion of technologists who are learning Tensorflow and Pytorch. These nominally open source tools are bound, respectively, to Google and Facebook's AI environments:
https://pluralistic.net/2023/08/18/openwashing/#you-keep-using-that-word-i-do-not-think-it-means-what-you-think-it-means
But if those environments go away, those programming skills become a lot less useful. Live, large-scale Big Tech AI projects are shockingly expensive to run. Some of their costs are fixed – collecting, labeling and processing training data – but the running costs for each query are prodigious. There's a massive primary energy bill for the servers, a nearly as large energy bill for the chillers, and a titanic wage bill for the specialized technical staff involved.
Once investor subsidies dry up, will the real-world, non-hyperbolic applications for AI be enough to cover these running costs? AI applications can be plotted on a 2X2 grid whose axes are "value" (how much customers will pay for them) and "risk tolerance" (how perfect the product needs to be).
Charging teenaged D&D players $10/month for an image generator that creates epic illustrations of their characters fighting monsters is low value and very risk tolerant (teenagers aren't overly worried about six-fingered swordspeople with three pupils in each eye). Charging scammy spamfarms $500/month for a text generator that spits out dull, search-algorithm-pleasing narratives to appear over recipes is likewise low-value and highly risk tolerant (your customer doesn't care if the text is nonsense). Charging visually impaired people $100/month for an app that plays a text-to-speech description of anything they point their cameras at is low-value and moderately risk tolerant ("that's your blue shirt" when it's green is not a big deal, while "the street is safe to cross" when it's not is a much bigger one).
Morgan Stanley doesn't talk about the trillions the AI industry will be worth some day because of these applications. These are just spinoffs from the main event, a collection of extremely high-value applications. Think of self-driving cars or radiology bots that analyze chest x-rays and characterize masses as cancerous or noncancerous.
These are high value – but only if they are also risk-tolerant. The pitch for self-driving cars is "fire most drivers and replace them with 'humans in the loop' who intervene at critical junctures." That's the risk-tolerant version of self-driving cars, and it's a failure. More than $100b has been incinerated chasing self-driving cars, and cars are nowhere near driving themselves:
https://pluralistic.net/2022/10/09/herbies-revenge/#100-billion-here-100-billion-there-pretty-soon-youre-talking-real-money
Quite the reverse, in fact. Cruise was just forced to quit the field after one of their cars maimed a woman – a pedestrian who had not opted into being part of a high-risk AI experiment – and dragged her body 20 feet through the streets of San Francisco. Afterwards, it emerged that Cruise had replaced the single low-waged driver who would normally be paid to operate a taxi with 1.5 high-waged skilled technicians who remotely oversaw each of its vehicles:
https://www.nytimes.com/2023/11/03/technology/cruise-general-motors-self-driving-cars.html
The self-driving pitch isn't that your car will correct your own human errors (like an alarm that sounds when you activate your turn signal while someone is in your blind-spot). Self-driving isn't about using automation to augment human skill – it's about replacing humans. There's no business case for spending hundreds of billions on better safety systems for cars (there's a human case for it, though!). The only way the price-tag justifies itself is if paid drivers can be fired and replaced with software that costs less than their wages.
What about radiologists? Radiologists certainly make mistakes from time to time, and if there's a computer vision system that makes different mistakes than the sort that humans make, they could be a cheap way of generating second opinions that trigger re-examination by a human radiologist. But no AI investor thinks their return will come from selling hospitals a system that reduces the number of X-rays each radiologist processes every day, as a second-opinion-generating system would. Rather, the value of AI radiologists comes from firing most of your human radiologists and replacing them with software whose judgments are cursorily double-checked by a human whose "automation blindness" will turn them into an OK-button-mashing automaton:
https://pluralistic.net/2023/08/23/automation-blindness/#humans-in-the-loop
The profit-generating pitch for high-value AI applications lies in creating "reverse centaurs": humans who serve as appendages for automation that operates at a speed and scale that is unrelated to the capacity or needs of the worker:
https://pluralistic.net/2022/04/17/revenge-of-the-chickenized-reverse-centaurs/
But unless these high-value applications are intrinsically risk-tolerant, they are poor candidates for automation. Cruise was able to nonconsensually enlist the population of San Francisco in an experimental murderbot development program thanks to the vast sums of money sloshing around the industry. Some of this money funds the inevitabilist narrative that self-driving cars are coming, it's only a matter of when, not if, and so SF had better get in the autonomous vehicle or get run over by the forces of history.
Once the bubble pops (all bubbles pop), AI applications will have to rise or fall on their actual merits, not their promise. The odds are stacked against the long-term survival of high-value, risk-intolerant AI applications.
The problem for AI is that while there are a lot of risk-tolerant applications, they're almost all low-value; while nearly all the high-value applications are risk-intolerant. Once AI has to be profitable – once investors withdraw their subsidies from money-losing ventures – the risk-tolerant applications need to be sufficient to run those tremendously expensive servers in those brutally expensive data-centers tended by exceptionally expensive technical workers.
If they aren't, then the business case for running those servers goes away, and so do the servers – and so do all those risk-tolerant, low-value applications. It doesn't matter if helping blind people make sense of their surroundings is socially beneficial. It doesn't matter if teenaged gamers love their epic character art. It doesn't even matter how horny scammers are for generating AI nonsense SEO websites:
https://twitter.com/jakezward/status/1728032634037567509
These applications are all riding on the coattails of the big AI models that are being built and operated at a loss in the hope of someday becoming profitable. If they remain unprofitable long enough, the private sector will no longer pay to operate them.
Now, there are smaller models, models that stand alone and run on commodity hardware. These would persist even after the AI bubble bursts, because most of their costs are setup costs that have already been borne by the well-funded companies who created them. These models are limited, of course, though the communities that have formed around them have pushed those limits in surprising ways, far beyond their original manufacturers' beliefs about their capacity. These communities will continue to push those limits for as long as they find the models useful.
These standalone, "toy" models are derived from the big models, though. When the AI bubble bursts and the private sector no longer subsidizes mass-scale model creation, it will cease to spin out more sophisticated models that run on commodity hardware (it's possible that Federated learning and other techniques for spreading out the work of making large-scale models will fill the gap).
So what kind of bubble is the AI bubble? What will we salvage from its wreckage? Perhaps the communities who've invested in becoming experts in Pytorch and Tensorflow will wrestle them away from their corporate masters and make them generally useful. Certainly, a lot of people will have gained skills in applying statistical techniques.
But there will also be a lot of unsalvageable wreckage. As big AI models get integrated into the processes of the productive economy, AI becomes a source of systemic risk. The only thing worse than having an automated process that is rendered dangerous or erratic based on AI integration is to have that process fail entirely because the AI suddenly disappeared, a collapse that is too precipitous for former AI customers to engineer a soft landing for their systems.
This is a blind spot in our policymakers' debates about AI. The smart policymakers are asking questions about fairness, algorithmic bias, and fraud. The foolish policymakers are ensnared in fantasies about "AI safety," AKA "Will the chatbot become a superintelligence that turns the whole human race into paperclips?"
https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space
But no one is asking, "What will we do if" – when – "the AI bubble pops and most of this stuff disappears overnight?"
Tumblr media
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/12/19/bubblenomics/#pop
Tumblr media
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
--
tom_bullock (modified) https://www.flickr.com/photos/tombullock/25173469495/
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/
4K notes
incognitopolls · 2 months ago
Text
Anon's explanation:
I’m curious because I see a lot of people claiming to be anti-AI, and in the same post advocating for the use of Glaze and Artshield, which use DiffusionBee and Stable Diffusion, respectively. Glaze creates a noise filter using DiffusionBee; Artshield runs your image through Stable Diffusion and edits it so that it reads as AI-generated. You don’t have to take my word for it. Search for DiffusionBee and Glaze yourself if you have doubts.

I’m also curious about machine translation, since Google Translate is trained on the same kinds of data as ChatGPT (social media, etc) and translation work is also skilled creative labor, but people seem to have no qualms about using it. The same goes for text to speech—a lot of the voices people use for it were trained on professional audiobook narration, and voice acting/narration is also skilled creative labor.

Basically, I’m curious because people seem to regard these types of gen AI differently than text gen and image gen. Is it because they don’t know? Is it because they don’t think the work it replaces is creative? Is it because of accessibility? (and, if so, why are other types of gen AI not also regarded as accessibility? And even then, it wouldn’t explain the use of Glaze/Artshield)
Additional comments from anon:
I did some digging by infiltrating (lurking in) pro-AI spaces to see how much damage Glaze and other such programs were doing. Unfortunately, it turns out none of those programs deter people from using the ‘protected’ art. In fact, because of how AI training works, they may actually result in better output? Something about adversarial training. It was super disappointing. Nobody in those spaces considers them even a mild deterrent anywhere I looked.

Hopefully people can shed some light on the contradictions for me. Even just knowing how widespread their use is would be informative.

(I’m not asking about environmental impact as a factor because I read the study everybody cited, and it wasn’t even anti-AI? It was about figuring out the best time of day to train a model to balance solar power vs water use and consumption. And the way they estimated the impact of AI was super weird? They just went with 2020’s data center growth rate as the ‘normal’ growth rate and then any ‘extra’ growth was considered AI. Maybe that’s why it didn’t pass peer review... But since people are still quoting it, that’s another reason for me to wonder why they would use Glaze and Artshield and everything. That’s why running them locally has such heavy GPU requirements and why it takes so long to process an image if you don’t meet the requirements. It’s the same electricity/water cost as generating any other AI image.)
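For the curious, here is a minimal sketch of the general adversarial-perturbation idea these cloaking tools are built on — emphatically not Glaze's actual algorithm, just the family of technique anon is describing, using a stand-in encoder with random weights:

```python
# Toy "cloaking": nudge pixels so the image looks unchanged to a human
# but maps to different features inside a model. NOT Glaze's algorithm.
import torch
from torchvision.models import resnet18

extractor = resnet18(weights=None).eval()  # stand-in encoder, random weights
for p in extractor.parameters():
    p.requires_grad_(False)  # we optimize the perturbation, not the model

def cloak(image, target_feats, steps=50, epsilon=0.03, lr=0.01):
    """Perturb `image` (1x3xHxW, values in [0,1]) so its features drift
    toward `target_feats`, under an L-infinity pixel budget of epsilon."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        feats = extractor(torch.clamp(image + delta, 0.0, 1.0))
        loss = torch.nn.functional.mse_loss(feats, target_feats)
        opt.zero_grad()
        loss.backward()
        opt.step()
        delta.data.clamp_(-epsilon, epsilon)  # keep the change imperceptible
    return torch.clamp(image + delta, 0.0, 1.0).detach()

art = torch.rand(1, 3, 224, 224)                        # your artwork
decoy = extractor(torch.rand(1, 3, 224, 224)).detach()  # features to mimic
protected = cloak(art, decoy)
```

Note the loop: dozens of full forward-and-backward passes through a large model per image, which is why running these tools locally is GPU-hungry, exactly as anon describes.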
We ask your questions anonymously so you don’t have to! Submissions are open on the 1st and 15th of the month.
328 notes
therobotmonster · 16 days ago
Note
On that recent Disney Vs Midjourney court thing wrt AI, how strong do you think their case is in a purely legal sense, what do you think MJ's best defenses are, how likely is Disney to win, and how bad would the outcome be if they do win?
Oh sure, ask an easy one.
In a purely legal sense, this case is very questionable.
Scraping as fair use has already been established when it comes to text in legal cases, and infringement is based on publication, not inspiration. There's also the question of whether Midjourney would be responsible for their users' creations under safe harbor provisions, or even under a basic understanding of what an art tool is. Adobe isn't responsible for the many, many illegal images its software is used to make, after all.
The best defense, I would say, is the fair use nature of dataset training, plus the very nature of transformative work — which is protected, and which by definition requires that the work-to-be-transformed be involved. Disney's basic approach of 'your AI knows who our characters are, so that proves you stole from us' would render fair use impossible.
I don't think it's likely for Disney to win, but the problem with civil action is that proof isn't needed, just convincing. Bad civil cases happen all the time, and produce case law. Which is what Disney is trying to do here.
If Disney wins, they'll have pulled off a coup of regulatory capture, basically ensuring that large media corporations can replace their staff with robots but that small creators will be limited to underpowered models to compete with them.
Worse, everything that is a 'smoking gun' when it comes to copyright infringement on Midjourney? That's fan art. All that "look how many copyrighted characters they're using-" applies to the frontpage of Deviantart or any given person's Tumblr feed more than to the featured page of Midjourney.
Every single website with user-generated content is chock full of copyright infringement because of fan art and fanfic, and fair use arguments are far harder to pull out for fan-works. The law won't distinguish between a human with a digital art package and a human with an AI art package, and any win Disney makes against MJ is a win against Artstation, Deviantart, Rule34.xxx, AO3, and basically everyone else.
"We get a slice of your cheese if enough of your users post our mouse" is not a rule you want in law.
And the rules won't be enforced by a court 9/10 times. Even if your individual work is plainly fair use, it's not going to matter to whatever image-based version of youtube's copyreich bots gets applied to Artstation and RedBubble to keep the site owners safe.
Even if you're right, you won't have the money to fight.
Heck, Adobe already spies on what you make to report you to the feds if you're doing a naughty; imagine its internal watchdogs throwing up warnings when it detects you drawing Princess Jasmine and Ariel making out. That may sound nuts, but it's entirely viable.
And that's just one level of possible nightmare. If the judgement is broad enough, it could provide a legal pretext for pursuing copyright lawsuits over style and inspiration. Given how consolidated IP is, this means you're going to have several large cabals that can crush any new work that seems threatening, as there's bound to be something they can draw a connection to.
If you want to see how utterly stupid inspiration=theft is, check out when Harlan Ellison sued James Cameron over Terminator because Cameron was dumb enough to say he was inspired by Demon with a Glass Hand and Soldier from the Outer Limits.
Harlan was wrong on the merits, wrong ethically, and the case shouldn't have been entertained in the first place, but like I said, civil law isn't about facts. Cameron was honest about how two episodes of a show he saw as a kid gave him this completely different idea (the similarities are 'robot that looks like a guy with hand reveal' and 'time traveling soldier goes into a gun store and tries to buy future guns'), and he got unjustly sued for it.
If you ever wonder why writers only talk about their inspirations that are dead, that's why. Anything that strengthens the "what goes in" rather than the "what goes out" approach to IP is good for corps, bad for culture.
167 notes
sirfrogsworth · 2 months ago
Text
Falling into the AI vortex.
Before I deeply criticize something, I try to understand it more than surface level.
With guns, I went into deep research mode and learned as much as I could about the actual guns so I could be more effective in my gun control advocacy.
I learned things like... silencers are not silent. They are mainly for hearing protection and not assassinations. It's actually small caliber subsonic ammo that is a concern for covert shooting. A suppressor can aid with that goal, but its benefits as hearing protection outweigh that very rare circumstance.
AR15s... not that powerful. They use a tiny bullet. Originally it could not even be used against thick animal hides. It was classified as a "varmint hunting" gun. There are other factors that make it more dangerous like lightweight ammo, magazine capacity, medium range accuracy, and being able to penetrate things because the tiny bullets go faster. But in most mass shooting situations where the shooting distance is less than 20 feet, they really aren't more effective than a handgun. They are just popular for that purpose. Dare I say... a mass shooting fad or cliche. But there are several handguns that could be more powerful and deadly—capable of one bullet kills if shot anywhere near the chest. And easier to conceal and operate in close quarters like a school hallway.
This deeper understanding tells me that banning one type of gun may not be the solution people are hoping for. And that if you don't approach gun control holistically (all guns vs one gun), you may only get marginal benefits from great effort and resources.
Now I'm starting the same process with AI tools.
Everyone is stuck in "AI is bad" mode. And I understand why. But I worry there is nuance we are missing with this reactionary approach. Plus, "AI is bad" isn't a solution to the problem. It may be bad, but it is here and we need to figure out realistic approaches to mitigate the damage.
So I have been using AI tools. I am trying to understand how they work, what they are good for, and what problems we should be most worried about.
I've been at this for nearly a month and this may not be what everyone wants to hear, but I have had some surprising interactions with AI. Good interactions. Helpful interactions. I was even able to use it to help me keep from an anxiety thought spiral. It was genuinely therapeutic. And I am still processing that experience and am not sure what to say about it yet.
If I am able to write an essay on my findings and thoughts, I hope people will understand why I went into the belly of the beast. I hope they won't see me as an AI traitor.
A big part of my motivation to do this was because of a friend of mine. He was hit by a drunk driver many years ago. He is a quadriplegic. He has limited use of his arms and hands and his head movement is constrained.
When people say, "just pick up a pencil and learn to draw" I always cringe at his expense. He was an artist. He already learned how to pick up a pencil and draw. That was taken away from him. (And please don't say he can stick a pencil in his mouth. Some quads have that ability—he does not. It is not a thing all of them can do.) But now he has a tool that allows him to be creative again. And it has noticeably changed his life. It is a kind of art therapy that has had massive positive effects on his depression.
We have had a couple of tense arguments about the ethics of AI. He is all-in because of his circumstances. And it is difficult to express my opinions when faced with that. But he asked and I answered. He tried to defend it and did a poor job. Which, considering how smart he is, was hard to watch.
But I love my friend and I feel I'd like to at least know what I'm talking about. I want to try and experience the benefits he is seeing. And I'd like to see if there is a way for this technology to exist where it doesn't hurt more than it helps.
I don't know when I will be done with my experiment. My health is improving but I am still struggling and I will need to cut my dose again soon. But for now I am just collecting information and learning.
I guess I just wanted to prepare people for what I'm doing.
And ask they keep an open mind with my findings. Not all of them will be "AI is bad."
184 notes
nattikay · 3 months ago
Text
y’know, it’s funny how the AI “art” crowd claims that using AI instead of drawing the image yourself is no different in principle from drawing on digital canvas instead of a physical piece of paper. Because I remember having digital vs traditional art debates back in the day. And do you know what the pro-digital art argument was?
It was that there are no filters you can apply to instantly give your work correct anatomy. There is no button that magically applies perfect lighting. There is no “now make it look pretty” tool. Artists drawing on a digital tablet have to develop the same type of hand-eye coordination that artists drawing on physical paper do. They have to develop the same understanding of fundamentals like anatomy, perspective, composition, lighting, contrast, color theory, etc. and the ability to apply that knowledge to their work.
In other words: the modern pro-AI argument that compares using AI to using a drawing tablet and the anti-digital art arguments of old share the exact same misconception of how digital art works. AI actually is exactly what the anti-digital art folks incorrectly thought digital art was.
Whatever your personal stance on AI images, to claim that typing in a prompt and letting the computer spit out an image for you instead of drawing it yourself is remotely equivalent to drawing it yourself but doing it on a screen instead of a piece of paper is absolutely wild.
143 notes
plaidos · 1 month ago
Note
im still trying to get myself to the point where i fully can accept/agree w your stance on ai art vs art made by people but lmao how everyone arguing with you is using art they dont own like babes the ground youre standing on is crumbling with every attempt at a funny snarky tag you add to the post
btw ai art is made by people. like it doesnt just Spawn Out Of The Earth does it lol its made by tools that were made by people. the people saying "a computer generated this with zero human input" would probably not argue that, say, a random unplayed minecraft world was generated with "zero human input" because they would recognise (A) the human input that goes into the actions of literally setting up the parameters of the world (name, size, type, seed, single player vs multi player etc) and (B) the human input that already went into like.... programming minecraft to be able to do that ?
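To make that concrete — a toy, hypothetical generator (not Minecraft's actual code): the "world" is a pure function of the parameters a human picked, so the human input is the seed and settings.

```python
# toy seeded "world generator": deterministic output from human-chosen input
import random

def generate_world(seed, size=8):
    rng = random.Random(seed)  # every block below flows from this choice
    return [[rng.randint(0, 10) for _ in range(size)] for _ in range(size)]

assert generate_world(42) == generate_world(42)  # same human input, same world
assert generate_world(42) != generate_world(7)   # change the input, change the world
```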
87 notes