#regulate ai created images
Photo
AI-generated images need to be controlled and regulated!
The music industry has shown that this is possible and that music, musicians, and copyright can be protected! We need this for visual art too! These databases scrape all the art they can find on the internet! Artists didn’t give their permission and there is no way to opt out! Even classified medical reports and photos are being fed into those AI machines! It’s unethical and criminal, and they weasel their way around legal terms!
Stand with artists! Support them! Regulate AI!
Edit: no, I don’t want the art world to be like the music industry! This was just an example to show that with support and lobbying, some kind of moderation is possible! But no one fucking cares about artists!
1K notes
Text
What better way to vent about low-effort AI-generated artwork than with a low-effort shitpost?
#irreveRANTS#ai art is not art#ai art theft#ai artwork#no ai art#autistic artist#artists on tumblr#no to ai generated images#not ai generated#art theft#support artists rights#support human artists#ai art discourse#no to ai generated art#regulate ai created images#no to ai art#artificial intelligence
779 notes
Text
I have respect for actual artists, not people using AI generation programs. That shit is not art.
There's no heart or soul in it. And let's not forget, the AI is the one doing the work for the person.
Type something into one of those search or filter engines and the AI generates random pics based on the person's filters. Come the fuck on?!
Seriously, it's not that fuckin hard to draw. Yeah, it takes time to develop your skills and keep learning how to use them, but at least it's a more honorable way than cheating and stealing!
6 notes
Text
Unless anyone can give me a proper, good reason not to, I fully support the ongoing protest against AI-generated images.
(Please read other people's posts too. This might give you a better understanding of the subject.)
While it has a lot of potential, even for artists, non-artists have taken it to the point that it threatens our current and future job opportunities.
Honestly, it is a scary experience to see it happening.
I've been drawing since I was able to hold a pencil. I've been drawing ALL MY LIFE, and just when I think it's going to pay off, when I think my pieces are getting good enough that I might be able to do it as a full-time job, something like this happens.
And I haven't even mentioned the art theft aspect of it. These pieces are NOT ORIGINAL WORKS. They're all based on the works of other people, who DID NOT GIVE CONSENT FOR THEIR WORKS TO BE USED.
I get it; it is wonderful to create something and frustrating when you don't have the skillset. But even this simple piece I created for this post has more life and feeling in it than all those AI "art pieces" put together.
It has my anger, sadness, disappointment, and fear in it.
It has that kindergarten girl in it who is still remembered by her former teachers for always drawing.
It has that suicidal preteen whose only happiness was writing her little stories and drawing the characters from them.
And it has that almost-adult high school student whose dream is to become like that one artist, to whom she owes her life.
That one artist has barely any idea that I exist and definitely has no idea what she has done. I'm not sure if she realizes her childish stories from back then saved a life, but they did.
This is what art is for me.
Something you spend your life on without realizing it, getting better and better. Something that is perfect because it's human-made. Something that has feelings in it, even if we don't intend them.
#no to ai generated art#no to ai art#regulate ai created images#ai art#ai artwork#support real artists#support human artists#human art#human artist#ai#ai art is art theft#ai art is not art#ai art is theft#ai art is fake art#art#oc art#original character#digital art#digital aritst#digital drawing#say no to ai art#say no to ai generated art
24 notes
Text
fuckin uuuuuuuuuuh sorry if this is a hot take or anything but “ai art is theft and unfair to the ppl who made the original art being sampled” doesnt suddenly become not true if the person generating the ai art is disabled
and its also possible to hold that opinion WITHOUT believing that disabled ppl are in any way “lazy” or “deserve” their disability or “it’s their fault” or any of that bullshit
i get that disability can be devastating to those who used to do a certain type of art that their body no longer allows in the traditional sense. but physical and mental disability have literally NEVER stopped people from creating art, EVER, in all of history. artists are creative. passionate people find a way.
and even if a way can’t be found…. stealing is still stealing. like. plagiarizing an article isn’t suddenly ethical just bc the person who stole it can’t read, and being against plagiarism doesn’t mean you hate people who can’t read.
the whole ableism angle on ai art confuses the hell out of me, if im honest. at least be consistent about it. if it’s ethical when disabled people do it, then it’s ethical when anyone does it
#lime rants#i think the only ethical way to use ai art is any way in which youre not claiming it as your own work#so like. you wanna generate a funny image to post in the groupchat#or you want some ideas for a piece of art that you ACTUALLY ARE gonna make yourself. but u need inspiration and cant find good extant image#unfortunately theres no way to regulate any of that#i also think theres something fucky about implying that disabled people cant do ANYTHING#even stuff that they historically have been able to do just fine#like ‘well disabled people are just useless so you cant expect them to do anything themselves’ uuuuh#disabled people can absolutely create their own art. they do not need to steal#and even if not that doesnt just make stealing okay
25 notes
Photo
some before-and-after pictures of how I’ve been using AI generated images in my art lately 🤖
I share other artists’ concerns about the unethical nature of the theft going on in the training data of AI art algorithms, so I refuse to spend any money on them or to consider the images generated by them to be true art, but I’m curious to hear people’s thoughts on using it for reference and paint-over like this?
my hope is that with proper regulation and more ethical use, AI could be a beneficial tool to help artists - instead of a way that allows people to steal from us more easily.
#ai art#art#digital art#artists on tumblr#right now i use ai as an extension of how i already create art#i make regular use of reference photos and free-use stock images for a variety of different purposes#and i think that with enough of a transformation of the source material#true art CAN be created from ai 'art'#but it's a complicated subject for sure#how do we decide how much transformation is enough to consider it an original work?#how do we regulate the use of image generation algorithms as more and more people get access to them#and start to run them for themselves?#we're in extremely uncharted territory here as technological progress continues to accelerate at an unprecedented pace#it's important to think about these questions and stay informed on how we're reacting#because no matter how hard we try technology doesn't go backwards#but we do still have a say in how we choose to use it and how we feel about it
93 notes
Text
I think this part is truly the most damning:
If it's all pre-rendered mush and it's "too expensive to fully experiment or explore" then such AI is not a valid artistic medium. It's entirely deterministic, like a pseudorandom number generator. The goal here is optimizing the rapid generation of an enormous quantity of low-quality images which fulfill the expectations put forth by The Prompt.
It's the modern technological equivalent of a circus automaton "painting" a canvas to be sold in the gift shop.
so a huge list of artists that was used to train midjourney’s model got leaked and i’m on it
literally there is no reason to support AI generators, they can’t ethically exist. my art has been used to train every single major one without consent lmfao 🤪
link to the archive
#to be clear AI as a concept has the power to create some truly fantastic images#however when it is subject to the constraints of its purpose as a machine#it is only capable of performing as its puppeteer wills it#and these puppeteers have the intention of stealing#tech#technology#tech regulation#big tech#data harvesting#data#technological developments#artificial intelligence#ai#machine generated content#machine learning#intellectual property#copyright
37K notes
Text
AI models can seemingly do it all: generate songs, photos, stories, and pictures of what your dog would look like as a medieval monarch.
But all of that data and imagery is pulled from real humans — writers, artists, illustrators, photographers, and more — who have had their work compressed and funneled into the training data of AI models without compensation.
Kelly McKernan is one of those artists. In 2023, they discovered that Midjourney, an AI image generation tool, had used their unique artistic style to create over twelve thousand images.
“It was starting to look pretty accurate, a little infringe-y,” they told The New Yorker last year. “I can see my hand in this stuff, see how my work was analyzed and mixed up with some others’ to produce these images.”
For years, leading AI companies like Midjourney and OpenAI have enjoyed seemingly unfettered freedom from regulation, but a landmark court case could change that.
On May 9, a California federal judge allowed ten artists to move forward with their allegations against Stability AI, Runway, DeviantArt, and Midjourney. This includes proceeding with discovery, which means the AI companies will be asked to turn over internal documents for review and allow witness examination.
Lawyer-turned-content-creator Nate Hake took to X, formerly known as Twitter, to celebrate the milestone, saying that “discovery could help open the floodgates.”
“This is absolutely huge because so far the legal playbook by the GenAI companies has been to hide what their models were trained on,” Hake explained...
“I’m so grateful for these women and our lawyers,” McKernan posted on X, above a picture of them embracing Ortiz and Andersen. “We’re making history together as the largest copyright lawsuit in history moves forward.” ...
The case is one of many AI copyright theft cases brought forward in the last year, but no other case has gotten this far into litigation.
“I think having us artist plaintiffs visible in court was important,” McKernan wrote. “We’re the human creators fighting a Goliath of exploitative tech.”
“There are REAL people suffering the consequences of unethically built generative AI. We demand accountability, artist protections, and regulation.”
-via GoodGoodGood, May 10, 2024
#ai#anti ai#fuck ai art#ai art#big tech#tech news#lawsuit#united states#us politics#good news#hope#copyright#copyright law
2K notes
Text
this is an earnest and honest plea and call in especially to fandoms as i see it happen more - please don't use AI for your transformative works. by this i mean, making audios of actors who play the characters you love saying certain things, making deepfakes of actors or even animated characters' faces. playing with chatGPT to "talk" or RP with a character, or write funny fanfiction. using stable diffusion to make interesting "crossover" AI "art."

i KNOW it's just for fun and it is seemingly harmless but it's not. since there is NO regulation and since some stuff is built off of stable diffusion (which uses stolen artwork and data), it is helping to create a huge and dangerous mess. when you use an AI to deepfake actors' voices to make your ship canon or whatever, you help train it so people can use it for deepfake revenge porn. or so companies can replace these actors with AI. when you RP with chatGPT you help train it to do LOTS of things that will be used to harm SO many people. (this doesn't even get into how governments will misuse and hurt people with these technologies)

and yes that is not your fault and yes it is not the technology's fault it is the companies and governments that will and already have done things but PLEASE. when you use an AI snapchat or instagram or tiktok filter, when you use an AI image generator "just for fun", when you chat with your character's "bot," you are doing IRREPARABLE harm. please stop.
8K notes
Text
The Best News of Last Week - March 18
1. FDA to Finally Outlaw Soda Ingredient Prohibited Around The World
An ingredient once commonly used in citrus-flavored sodas to keep the tangy taste mixed thoroughly through the beverage could finally be banned for good across the US. BVO, or brominated vegetable oil, is already banned in many countries, including India, Japan, and nations of the European Union, and was outlawed in the state of California in October 2023.
2. AI makes breakthrough discovery in battle to cure prostate cancer
Scientists have used AI to reveal a new form of aggressive prostate cancer which could revolutionise how the disease is diagnosed and treated.
A Cancer Research UK-funded study found prostate cancer, which affects one in eight men in their lifetime, includes two subtypes. It is hoped the findings could save thousands of lives in future and revolutionise how the cancer is diagnosed and treated.
3. “Inverse vaccine” shows potential to treat multiple sclerosis and other autoimmune diseases
A new type of vaccine developed by researchers at the University of Chicago’s Pritzker School of Molecular Engineering (PME) has shown in the lab setting that it can completely reverse autoimmune diseases like multiple sclerosis and type 1 diabetes — all without shutting down the rest of the immune system.
4. Paris 2024 Olympics makes history with unprecedented full gender parity
In a historic move, the International Olympic Committee (IOC) has distributed equal quotas for female and male athletes for the upcoming Olympic Games in Paris 2024. It is the first time The Olympics will have full gender parity and is a significant milestone in the pursuit of equal representation and opportunities for women in sports.
Biased media coverage leads girls and boys to abandon sports.
5. Restored coral reefs can grow as fast as healthy reefs in just 4 years, new research shows
Planting new coral in degraded reefs can lead to rapid recovery – with restored reefs growing as fast as healthy reefs after just four years. Researchers studied these reefs to assess whether coral restoration can bring back the important ecosystem functions of a healthy reef.
“The speed of recovery we saw is incredible,” said lead author Dr Ines Lange, from the University of Exeter.
6. EU regulators pass the planet's first sweeping AI regulations
The EU is banning practices that it believes will threaten citizens' rights. "Biometric categorization systems based on sensitive characteristics" will be outlawed, as will the "untargeted scraping" of images of faces from CCTV footage and the web to create facial recognition databases.
Other applications that will be banned include social scoring; emotion recognition in schools and workplaces; and "AI that manipulates human behavior or exploits people’s vulnerabilities."
7. Global child deaths reach historic low in 2022 – UN report
The number of children who died before their fifth birthday has reached a historic low, dropping to 4.9 million in 2022.
The report reveals that more children are surviving today than ever before, with the global under-5 mortality rate declining by 51 per cent since 2000.
---
That's it for this week :)
This newsletter will always be free. If you liked this post you can support me with a small kofi donation here:
Buy me a coffee ❤️
Also don’t forget to reblog this post with your friends.
781 notes
Note
I'm speaking as an artist in the animation industry here; it's hard not to be reactionary about AI image generation when it's already taking jobs from artists. Sure, for now it's indie gigs on book covers or backgrounds on one Netflix short, but how long until it's responsible for layoffs en masse? These conversations can't be had in a vacuum. As long as tools like these are used as a way for companies to not pay artists, we cannot support them, give them attention, or do anything but fight their implementation in our industry. It doesn't matter if they're art. They cannot be given a platform in any capacity until regulation around their use in the entertainment industry is established. If it takes billions of people refusing to call AI image generation "art" and immediately refusing to support anything that features it, then that's what it takes. Complacency is choosing AI over living artists who are losing jobs.
Call me a luddite but I'll die on this hill. Artists with degrees and 20+ years in the industry are getting laid off, the industry is already in shambles. If given the chance, no matter how vapid, shallow, or visibly generated the content is, if it's content that rakes in cash, companies will opt for it over meaningful art made by a person, every time. Again, this isn't a debate that can be had in a vacuum. Until universal basic income is a reality, until we can all create what we want in our spare time and aren't crippled under capitalism, I'm condemning AI image generation because I'd like to keep my job and not be homeless. It has to be a black and white issue until we have protections in place for it to not be.
you can condemn the technology all you like, but it's not going to save you. the only thing that can actually address these concerns is unionization in the short term and total transformation of our economic system in the long term. you are a luddite in the most literal, classical sense, & just like the luddites, as long as you target the machines and not the system that implements them, you will lose, just like every single battle against new immiserating technology has been lost since the invention of the steam loom.
595 notes
Text
The creation of sexually explicit "deepfake" images is to be made a criminal offence in England and Wales under a new law, the government says.
Under the legislation, anyone making explicit images of an adult without their consent will face a criminal record and unlimited fine.
It will apply regardless of whether the creator of an image intended to share it, the Ministry of Justice (MoJ) said.
And if the image is then shared more widely, they could face jail.
A deepfake is an image or video that has been digitally altered with the help of Artificial Intelligence (AI) to replace the face of one person with the face of another.
Recent years have seen the growing use of the technology to add the faces of celebrities or public figures - most often women - into pornographic films.
Channel 4 News presenter Cathy Newman, who discovered her own image used as part of a deepfake video, told BBC Radio 4's Today programme it was "incredibly invasive".
Ms Newman found she was a victim as part of a Channel 4 investigation into deepfakes.
"It was violating... it was kind of me and not me," she said, explaining the video displayed her face but not her hair.
Ms Newman said finding perpetrators is hard, adding: "This is a worldwide problem, so we can legislate in this jurisdiction, but it might have no impact on whoever created my video or the millions of other videos that are out there."
She said the person who created the video is yet to be found.
Under the Online Safety Act, which was passed last year, the sharing of deepfakes was made illegal.
The new law will make it an offence for someone to create a sexually explicit deepfake - even if they have no intention to share it but "purely want to cause alarm, humiliation, or distress to the victim", the MoJ said.
Clare McGlynn, a law professor at Durham University who specialises in legal regulation of pornography and online abuse, told the Today programme the legislation has some limitations.
She said it "will only criminalise where you can prove a person created the image with the intention to cause distress", and this could create loopholes in the law.
It will apply to images of adults, because the law already covers this behaviour where the image is of a child, the MoJ said.
It will be introduced as an amendment to the Criminal Justice Bill, which is currently making its way through Parliament.
Minister for Victims and Safeguarding Laura Farris said the new law would send a "crystal clear message that making this material is immoral, often misogynistic, and a crime".
"The creation of deepfake sexual images is despicable and completely unacceptable irrespective of whether the image is shared," she said.
"It is another example of ways in which certain people seek to degrade and dehumanise others - especially women.
"And it has the capacity to cause catastrophic consequences if the material is shared more widely. This Government will not tolerate it."
Cally Jane Beech, a former Love Island contestant who earlier this year was the victim of deepfake images, said the law was a "huge step in further strengthening of the laws around deepfakes to better protect women".
"What I endured went beyond embarrassment or inconvenience," she said.
"Too many women continue to have their privacy, dignity, and identity compromised by malicious individuals in this way and it has to stop. People who do this need to be held accountable."
Shadow home secretary Yvette Cooper described the creation of the images as a "gross violation" of a person's autonomy and privacy and said it "must not be tolerated".
"Technology is increasingly being manipulated to manufacture misogynistic content and is emboldening perpetrators of Violence Against Women and Girls," she said.
"That's why it is vital for the government to get ahead of these fast-changing threats and not to be outpaced by them.
"It's essential that the police and prosecutors are equipped with the training and tools required to rigorously enforce these laws in order to stop perpetrators from acting with impunity."
288 notes
Text
So, about this whole "AI" thing...
A response to an ask (for some reason, tumblr won't let me blaze normal responsicles)
Like the Titan, Prometheus, Man Has Stolen Fire From the Gods. We can now make minds in our own image, elevating crude matter to the level of self-awareness. So... What next?
The first thing I would like to make clear is that, in some respects, my opinion here is irrelevant. So is yours. So are the opinions of the people reading this.
No matter what we do, no matter what we believe, something remains inviolably clear and true:
BAD ACTORS WILL EXPLOIT GENUINELY USEFUL TECHNOLOGIES TO BENEFIT THEMSELVES
This is an axiom of human behaviour that cannot be escaped. Nuclear power is amongst the most regulated technologies that have ever existed... and right now, rogue states are attacking their neighbours, protected from intervention by the threat of nuclear annihilation.
Nuclear Weapons (their own, and Red China's) are what allows the North Korean government to continue oppressing its population.
Nuclear Weapons enable The Land Of The Bear to invade The Ukraine.
Despite this, nuclear power has otherwise been mostly regulated out of existence. It is cheap, safe, and abundant, yet various laws make it either artificially expensive or outright illegal to heat your home with it, light your rooms, power your transportation, trim your hedges.
Regulations and anti-technology hysteria can prevent ordinary people from benefitting from innovation, but they cannot prevent the worst people in the world from abusing it.
So, whatever worst-case scenario you've imagined? Accept the fact that it's going to happen no matter what you do.
Legions of nanobots reconfiguring us into paperclips, a la Eliezer Yudkowsky's bizarrely specific fever dreams? If you think it is possible, accept that it is inevitable.
Intelligent machines with glowing red eyes malevolently hunting us through a post-apocalyptic wasteland, a la James Cameron/The Wachowskis? If you think it is possible, accept that it is inevitable.
Lying governments using deepfaked videos to create un-debunkable false-flags and cheaply manufacture consent for wars to further their adrenochrome-harvesting operations? Let's face it, they don't even need AI for that, most people will just take their claims at face value.
But what if we all agree to stop using it?
Technologies are sometimes lost, yes, but this happens gradually, over the course of decades if not centuries. Civilisations can decline and lose access to technologies, but that's not likely to happen for AI within our lifetimes.
If it works, if it is genuinely useful, it WILL be used.
We have seen this play out time and time again, throughout history.
So, we can either do what we did for nuclear power, and regulate it so heavily that it serves no useful purpose to the Just and the Kind, whilst availing the Corrupt and the Wicked...
Or we can accept Evil shall be done, and try with all our might to counter it with Good.
We can strive to Magnanimous heights of Faustian greatness, using AI to create untold works of beauty, so that Human Grandeur at least rivals Human Depravity.
In summary:
We have stolen Fire from the Gods. The more noble-minded amongst us might as well do something worthwhile with it.
#truth#AI#artificial intelligence#transhumanism#politics#science#philosophy#principle#history#ask#anonymous
178 notes
Text
I found these replies very frustrating and fairly ableist. Do people not understand that disabilities and functionality vary wildly from person to person? Just because one person can draw with their teeth or feet doesn't mean others can.
And where is my friend supposed to get this magic eye movement drawing tech from? How is he supposed to afford it? And does the art created from it look like anything? Is it limited to abstraction? What if that isn't the art he wants to make?
Also, asking another artist to draw something for you is called a commission. And it usually costs money.
I have been using the generative AI in Photoshop for a few months now. It is trained on images Adobe owns, so I feel like it is in an ethical gray area. I mostly use it to repair damaged photos, remove objects, or extend boundaries. The images I create are still very much mine. But it has been an incredible accessibility tool for me. I was able to finish work that would have required much more energy than I had.
My friend uses AI like a sketchpad. He can quickly generate ideas and then he develops those into stories and videos and even music. He is doing all kinds of creative tasks that he was previously incapable of. It is just not feasible for him to have an artist on call to sketch every idea that pops into his brain—even if they donated labor to him.
I just think seeing these tools as pure evil is not the best take on all of this. We need them to be ethically trained. We need regulations to make sure they don't destroy creative jobs. But they do have utility and they can be powerful tools for accessibility as well.
These are complicated conversations. I'm not claiming to have all of the answers or to know the most moral path we should steer this A.I. behemoth towards. But seeing my friend excited about being creative after all of these years really affected me. It confused my feelings about generative A.I. Then I started using similar tools and it just made it so much easier to work on my photography. And that confused my feelings even more.
So...I am confused.
And unsure of how to proceed.
But I do hope people will be willing to at least consider this aspect and have these conversations.
147 notes
Text
Too many people are willfully misunderstanding why artists are protesting AI art right now.
All they hear is "Artists are mad at fun new tool and scared it will replace them, so they're trying to take it away from us!!"
Artists are not protesting the tool itself. Many even like the concept of the AI tools, believe it or not.
We are protesting that it takes and uses our work without our consent and without any compensation, all while the companies behind the tool are making loads of money off this practice.
We're fighting for regulation of the tool. Not only does it scrape work created by artists into its database without the artists' permission, private medical photos have also been found in these datasets. None of that is ok.
From the start this tool should have only been fed images in the public domain, and any artist's work fed to it should have come from artists who consented to it and who were then also compensated whenever their work was used by the AI tool. There are also other issues like:
Sites like ArtStation and DeviantArt refusing to place AI in its own category to separate it from human-made art. Just like traditional and digital art get separate categories, so should AI-generated art. (Also, some try to hide that they generated something with AI and pass it off as done by their own hand??? If you believe it's 'just another tool,' why are you trying to hide it???);
How DA tried to pull a fast one and first made AI scraping an opt-out function, and said that dead artists' work would be scraped because they weren't alive to tell them no;
How the companies behind the tools are knowingly making money off the AI scraping artists' work without their consent;
People selling AI art with no regard for the fact that their generated image likely contains work that another artist created;
Etc.
"But humans take inspiration from other artists all the time! The AI is just doing the same thing!"
First off, it's not. And I don't even mean that in a "AI art is soulless and can never be the same as Human Art!" way or anything.
I just mean these "AI" tools aren't 'true AI' like how you're thinking. They're no HAL 9000 that actually makes decisions on its own. They're algorithms programmed by humans to search the acquired database and photomash together a product based on a prompt. They're not actually becoming 'inspired' by anything. (And it's not insulting the tool to say this either!)
And that's not even the point, but let's pretend for just a minute that what AI Art programs do is the same as a human taking inspiration: even humans are not allowed to take too much of another artist's idea/work with the intent to profit without getting in trouble. Even if that 'profit' is just internet clicks, people very much still do get mad at other humans for copying another artist's work and trying to pass it off as their own.
And that is what's happening with a lot of generated art. It will spit out pieces very similar or nearly identical to another artist's work and will often even include artists' signatures or watermarks in the product. Because it just photomashes, essentially. (Again, not a dig!)
And I'm not knocking photomashing, it is used in the industry. And I bet most artists are actually fine with the concept of a photomashing tool. However, even when humans in the industry use photomashing, they have to use their own photos, public domain photos, or have permission of the owner to use the photos they intend to photomash with. And we sure as hell are not allowed to use someone's private medical photos in our work either.
We're only asking that the work generated by AI Art programs follow these same standards. Again, we're only fighting for regulation, not to take this "new fun tool" from you.
But unfortunately that's all some who are already enamored with the idea of AI Art are willing to hear from our arguments.
It's easier to just believe that artists are simply "afraid of change" or "afraid of being obsolete" and are trying to rain on your fun than to look at our arguments and concede that, "Hey, maybe this tool was implemented in a bad way. Maybe artists do deserve the basic respect of being allowed to consent to their work being used to train AI, and to being compensated by the company behind the tool if their work is used. Maybe we should look into more ethical ways of implementing this new tool."
No one seems to realize that artists would not be fighting this tool if it was done right from the start and didn't just outright take our work to train the AI without our permission. Hell, artists release stuff to help teach/'train' other human artists all the time! We release full tutorials, stock images, even post finished art for people to use for free sometimes!
The difference is that when we do, we consented to do so. It wasn't just ripped from our hands by people who felt entitled to our labor for their own gain.
We're not trying to take away your fun new tools! We're only asking that your new tool does not come at the expense of abusing us!
I really don't think that's a hard ask.
#I'm worried that the link will hide this in the tags#so maybe reblog to spread this?#ai art debate#art theft#ai#ai art#no ai art#noai#ai art generation#stable diffusion#ai art generator#anti ai art#do better ai#support human artists#photomash
2K notes
Text
Sphinxmumps Linkdump
On THURSDAY (June 20) I'm live onstage in LOS ANGELES for a recording of the GO FACT YOURSELF podcast. On FRIDAY (June 21) I'm doing an ONLINE READING for the LOCUS AWARDS at 16hPT. On SATURDAY (June 22) I'll be in OAKLAND, CA for a panel and a keynote at the LOCUS AWARDS.
Welcome to my 20th Linkdump, in which I declare link bankruptcy and discharge my link-debts by telling you about all the open tabs I didn't get a chance to cover in this week's newsletters. Here's the previous 19 installments:
https://pluralistic.net/tag/linkdump/
Starting off this week with a gorgeous book that is also one of my favorite books: Beehive's special slipcased edition of Dante's Inferno, as translated by Henry Wadsworth Longfellow, with new illustrations by UK linocut artist Sophy Hollington:
https://www.kickstarter.com/projects/beehivebooks/the-inferno
I've loved Inferno since middle-school, when I read the John Ciardi translation, principally because I'd just read Niven and Pournelle's weird (and politically odious) (but cracking) sf novel of the same name:
https://en.wikipedia.org/wiki/Inferno_(Niven_and_Pournelle_novel)
But also because Ciardi wrote "About Crows," one of my all-time favorite bits of doggerel, a poem that pierced my soul when I was 12 and continues to do so now that I'm 52, for completely opposite reasons (now there's a poem with staying power!):
https://spirituallythinking.blogspot.com/2011/10/about-crows-by-john-ciardi.html
Beehive has a well-deserved rep for making absolutely beautiful new editions of great public domain books, each with new illustrations and intros, all in matching livery to make a bookshelf look classy af. I have several of them and I've just ordered my copy of Inferno. How could I not? So looking forward to this, along with its intro by Ukrainian poet Ilya Kaminsky and essay by Dante scholar Kristina Olson.
The Beehive editions show us how a rich public domain can be the soil from which new and inspiring creative works sprout. Any honest assessment of a creator's work must include the fact that creativity is a collective act, both inspired by and inspiring to other creators, past, present and future.
One of the distressing aspects of the debate over the exploitative grift of AI is that it's provoked a wave of copyright maximalism among otherwise thoughtful artists, despite the fact that a new copyright that lets you control model training will do nothing to prevent your boss from forcing you to sign over that right in your contracts, training an AI on your work, and then using the model as a pretext to erode your wages or fire your ass:
https://pluralistic.net/2024/05/13/spooky-action-at-a-close-up/#invisible-hand
Same goes for some privacy advocates, whose imaginations were cramped by the fact that the only regulation we enforce on the internet is copyright, causing them to forget that privacy rights can exist separate from the nonsensical prospect of "owning" facts about your life:
https://pluralistic.net/2023/10/21/the-internets-original-sin/
We should address AI's labor questions with labor rights, and we should address AI's privacy questions with privacy rights. You can tell that these are the approaches that would actually work for the public because our bosses hate these approaches and instead insist that the answer is just giving us more virtual property that we can sell to them, because they know they'll have a buyer's market that will let them scoop up all these rights at bargain prices and use the resulting hoards to torment, immiserate and pauperize us.
Take Clearview AI, a facial recognition tool created by eugenicists and white nationalists in order to help giant corporations and militarized, unaccountable cops hunt us by our faces:
https://pluralistic.net/2023/09/20/steal-your-face/#hoan-ton-that
Clearview scraped billions of images of our faces and shoveled them into their model. This led to a class action suit in Illinois, which boasts America's best biometric privacy law, under which Clearview owes tens of billions of dollars in statutory damages. Now, Clearview has offered a settlement that illustrates neatly the problem with making privacy into property that you can sell instead of a right that can't be violated: they're going to offer Illinoisians a small share of the company's stock:
https://www.theregister.com/2024/06/14/clearview_ai_reaches_creative_settlement/
To call this perverse is to do a grave injustice to good, hardworking perverts. The sums involved will be infinitesimal, and the only way to make those sums really count is for everyone in Illinois to root for Clearview to commit more grotesque privacy invasions of the rest of us to make its creepy, terrible product more valuable.
Worse still: by crafting a bespoke, one-off, forgiveness-oriented regulation specifically for Clearview, we ensure that it will continue, but that it will also never be disciplined by competitors. That is, rather than banning this kind of facial recognition tech, we grant them a monopoly over it, allowing them to charge all the traffic will bear.
We're in an extraordinary moment for both labor and privacy rights. Two of Biden's most powerful agency heads, Lina Khan and Rohit Chopra have made unprecedented use of their powers to create new national privacy regulations:
https://pluralistic.net/2023/08/16/the-second-best-time-is-now/#the-point-of-a-system-is-what-it-does
In so doing, they're bypassing Congressional deadlock. Congress has not passed a new consumer privacy law since 1988, when they banned video-store clerks from leaking your VHS rental history to newspaper reporters:
https://en.wikipedia.org/wiki/Video_Privacy_Protection_Act
Congress hasn't given us a single law protecting American consumers from the digital era's all-out assault on our privacy. But between the agencies, state legislatures, and a growing coalition of groups demanding action on privacy, a new federal privacy law seems all but assured:
https://pluralistic.net/2023/12/06/privacy-first/#but-not-just-privacy
When that happens, we're going to have to decide what to do about products created through mass-scale privacy violations, like Clearview AI – but also all of OpenAI's products, Google's AI, Facebook's AI, Microsoft's AI, and so on. Do we offer them a deal like the one Clearview's angling for in Illinois, fining them an affordable sum and grandfathering in the products they built by violating our rights?
Doing so would give these companies a permanent advantage, and the ongoing use of their products would continue to violate billions of peoples' privacy, billions of times per day. It would ensure that there was no market for privacy-preserving competitors thus enshrining privacy invasion as a permanent aspect of our technology and lives.
There's an alternative: "model disgorgement." "Disgorgement" is the legal term for forcing someone to cough up something they've stolen (for example, forcing an embezzler to give back the money). "Model disgorgement" can be a legal requirement to destroy models created illegally:
https://iapp.org/news/a/explaining-model-disgorgement
It's grounded in the idea that there's no known way to unscramble the AI eggs: once you train a model on data that shouldn't be in it, you can't untrain the model to get the private data out of it again. Model disgorgement doesn't insist that offending models be destroyed, but it shifts the burden of figuring out how to unscramble the AI omelet to the AI companies. If they can't figure out how to get the ill-gotten data out of the model, then they have to start over.
This framework aligns everyone's incentives. Unlike the Clearview approach – move fast, break things, attain an unassailable, permanent monopoly thanks to a grandfather exception – model disgorgement makes AI companies act with extreme care, because getting it wrong means going back to square one.
This is the kind of hard-nosed, public-interest-oriented rulemaking we're seeing from Biden's best anti-corporate enforcers. After decades of kid-glove treatment that allowed companies like Microsoft, Equifax, Wells Fargo and Exxon to commit ghastly crimes and then crime again another day, Biden's corporate cops are no longer treating the survival of massive, structurally important corporate criminals as a necessity.
It's been so long since anyone in the US government treated the corporate death penalty as a serious proposition that it can be hard to believe it's even happening, but boy is it happening. The DOJ Antitrust Division is seeking to break up Google, the largest tech company in the history of the world, and they are tipped to win:
https://pluralistic.net/2024/04/24/naming-names/#prabhakar-raghavan
And that's one of the major suits against Google that Big G is losing. Another suit, jointly brought by the feds and dozens of state AGs, is just about to start, despite Google's failed attempt to get the suit dismissed:
https://www.reuters.com/technology/google-loses-bid-end-us-antitrust-case-over-digital-advertising-2024-06-14/
I'm a huge fan of the Biden antitrust enforcers, but that doesn't make me a huge fan of Biden. Even before Biden's disgraceful collaboration in genocide, I had plenty of reasons – old and new – to distrust him and deplore his politics. I'm not the only leftist who's struggling with the dilemma posed by the worst part of Biden's record in light of the coming election.
You've doubtless read the arguments (or rather, "arguments," since they all generate a lot more heat than light and I doubt whether any of them will convince anyone). But this week, Anand Giridharadas republished his 2020 interview with Noam Chomsky about Biden and electoral politics, and I haven't been able to get it out of my mind:
https://the.ink/p/free-noam-chomsky-life-voting-biden-the-left
Chomsky contrasts the left position on politics with the liberal position. For leftists, Chomsky says, "real politics" are a matter of "constant activism." It's not a "laser-like focus on the quadrennial extravaganza" of national elections, after which you "go home and let your superiors take over."
For leftists, politics means working all the time, "and every once in a while there's an event called an election." This should command "10 or 15 minutes" of your attention before you get back to the real work.
This makes the voting decision more obvious and less fraught for Chomsky. There's "never been a greater difference" between the candidates, so leftists should go take 15 minutes, "push the lever, and go back to work."
Chomsky attributed the good parts of Biden's 2020 platform to being "hammered on by activists coming out of the Sanders movement and other." That's the real work, that hammering. That's "real politics."
For Chomsky, voting for Biden isn't support for Biden. It's "support for the activists who have been at work constantly, creating the background within the party in which the shifts took place, and who have followed Sanders in actually entering the campaign and influencing it. Support for them. Support for real politics."
Chomsky tells us that the self-described "masters of the universe" understand that something has changed: "the peasants are coming with their pitchforks." They have all kinds of euphemisms for this ("reputational risks") but the core here is a winner-take-all battle for the future of the planet and the species. That's why even the "sensible" ultra-rich threw in for Trump in 2016 and 2020, and why they're backing him even harder in 2024:
https://www.bbc.com/news/articles/ckvvlv3lewxo
Chomsky tells us not to bother trying to figure out Biden's personality. Instead, we should focus on "how things get done." Biden won't do what's necessary to end genocide and preserve our habitable planet out of conviction, but he may do so out of necessity. Indeed, it doesn't matter how he feels about anything – what matters is what we can make him do.
Chomksy himself is in his 90s and his health is reportedly in terminal decline, so this is probably the only word we'll get from him on this issue:
https://www.reddit.com/r/chomsky/comments/1aj56hj/updates_on_noams_health_from_his_longtime_mit/
The link between concentrated wealth, concentrated power, and the existential risks to our species and civilization is obvious – to me, at least. Any time a tiny minority holds unaccountable power, they will end up using it to harm everyone except themselves. I'm not the first one to take note of this – it used to be a commonplace in American politics.
Back in 1936, FDR gave a speech at the DNC, accepting their nomination for president. Unlike FDR's election night speech ("I welcome their hatred"), this speech has been largely forgotten, but it's a banger:
https://teachingamericanhistory.org/document/acceptance-speech-at-the-democratic-national-convention-1936/
In that speech, Roosevelt brought a new term into our political parlance: "economic royalists." He described the American plutocracy as the spiritual descendants of the hereditary nobility that Americans had overthrown in 1776. The English aristocracy "governed without the consent of the governed" and "put the average man's property and the average man's life in pawn to the mercenaries of dynastic power."
Roosevelt said that these new royalists conquered the nation's economy and then set out to seize its politics, backing candidates that would create "a new despotism wrapped in the robes of legal sanction…an industrial dictatorship."
As David Dayen writes in The American Prospect, this has strong parallels to today's world, where "Silicon Valley, Big Oil, and Wall Street come together to back a transactional presidential candidate who promises them specific favors, after reducing their corporate taxes by 40 percent the last time he was president":
https://prospect.org/politics/2024-06-14-speech-fdr-would-give/
Roosevelt, of course, went on to win by a landslide, wiping out the Republicans despite the endless financial support of the ruling class.
The thing is, FDR's policies didn't originate with him. He came from the uppermost of the American upper crust, after all, and famously refused to define the "New Deal" even as he campaigned on it. The "New Deal" became whatever activists in the Democratic Party's left could force him to do, and while it was bold and transformative, it wasn't nearly enough.
The compromise FDR brokered within the Democratic Party froze out Black Americans to a terrible degree. Writing for the Institute for Local Self-Reliance, Ron Knox and Susan Holmberg reveal the long shadow cast by that unforgivable compromise:
https://storymaps.arcgis.com/stories/045dcde7333243df9b7f4ed8147979cd
They describe how redlining – the formalization of anti-Black racism in New Deal housing policy – led to the ruin of Toledo's once-thriving Dorr Street neighborhood, a "Black Wall Street" where a Black middle class lived and thrived. New Deal policies starved the neighborhood of funds, then ripped it in two with a freeway, sacrificing it and the people who lived in it.
But the story of Dorr Street isn't over. As Knox and Holmberg write, the people of Dorr Street never gave up on their community, and today, there's an awful lot of Chomsky's "constant activism" that is painstakingly bringing the community back, inch by aching inch. The community is locked in a guerrilla war against the same forces that the Biden antitrust enforcers are fighting on the open field of battle. The work that activists do to drag Democratic Party policies to the left is critical to making reparations for the sins of the New Deal – and for realizing its promise for everybody.
In my lifetime, there's never been a Democratic Party that represented my values. The first Democratic President of my life, Carter, kicked off Reaganomics by beginning the dismantling of America's antitrust enforcement, in the mistaken belief that acting like a Republican would get Democrats to vote for him again. He failed and delivered Reagan, whose Reaganomics were the official policy of every Democrat since, from Clinton ("end welfare as we know it") to Obama ("foam the runways for the banks").
In other words, I don't give a damn about Biden, but I am entirely consumed with what we can force his administration to do, and there are lots of areas where I like our chances.
For example: getting Biden's IRS to go after the super-rich, ending the impunity for elite tax evasion that Spencer Woodman pitilessly dissects in this week's superb investigation for the International Consortium of Investigative Journalists:
https://www.icij.org/inside-icij/2024/06/how-the-irs-went-soft-on-billionaires-and-corporate-tax-cheats/
Ending elite tax cheating will make them poorer, and that will make them weaker, because their power comes from money alone (they don't wield power because their want to make us all better off!).
Or getting Biden's enforcers to continue their fight against the monopolists who've spiked the prices of our groceries even as they transformed shopping into a panopticon, so that their business is increasingly about selling our data to other giant corporations, with selling food to us as an afterthought:
https://prospect.org/economy/2024-06-12-war-in-the-aisles/
For forty years, since the Carter administration, we've been told that our only power comes from our role as "consumers." That's a word that always conjures up one of my favorite William Gibson quotes, from 1996's Idoru:
Something the size of a baby hippo, the color of a week-old boiled potato, that lives by itself, in the dark, in a double-wide on the outskirts of Topeka. It's covered with eyes and it sweats constantly. The sweat runs into those eyes and makes them sting. It has no mouth, no genitals, and can only express its mute extremes of murderous rage and infantile desire by changing the channels on a universal remote. Or by voting in presidential elections.
The normie, corporate wing of the Democratic Party sees us that way. They decry any action against concentrated corporate power as "anti-consumer" and insist that using the law to fight against corporate power is a waste of our time:
https://www.thesling.org/sorry-matt-yglesias-hipster-antitrust-does-not-mean-the-abandonment-of-consumers-but-it-does-mean-new-ways-to-protect-workers-2/
But after giving it some careful thought, I'm with Chomsky on this, not Yglesias. The election is something we have to pay some attention to as activists, but only "10 or 15 minutes." Yeah, "push the lever," but then "go back to work." I don't care what Biden wants to do. I care what we can make him do.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/06/15/disarrangement/#credo-in-un-dio-crudel
Image: Jim's Photo World (modified) https://www.flickr.com/photos/jimsphotoworld/5360343644/
CC BY-SA 2.0 https://creativecommons.org/licenses/by-sa/2.0/
#pluralistic#linkdump#linkdumps#chomsky#voting#elections#uspoli#oligarchy#irs#billionaires#tax cheats#irs files#hipster antitrust#matt ygelsias#dante#gift guide#books#crowdfunding#public domain#model disgorgement#ai#llms#fdr#groceries#ripoffs#toledo#redlining#race
74 notes