#regulate ai created images
serenityprincess · 1 year ago
Text
I have respect for actual artists, not for people using AI image generators. That shit is not art.
There's no heart or soul in it. And let's not forget, the AI is the one doing the work for the person.
You type something into a prompt box and the AI generates random pics matching whatever filters the person set. Come the fuck on?!
Seriously, it's not that fuckin hard to draw. Yeah, it takes time to develop your skills and keep learning, but at least it's a more honorable way than cheating and stealing!
6 notes · View notes
serenityprincess · 1 year ago
Photo
Agreed.
Ai generated images need to be controlled and regulated!
The music industry has shown that this is possible and that music, musicians and copyright can be protected! We need this for visual art too! The databases scrape all the art they can find on the internet! Artists didn’t give their permission and there is no way to opt out! Even classified medical reports and photos are being fed into those AI machines! It’s unethical and criminal, and they weasel their way around legal terms!
Stand with artists! Support them! Regulate AI!
Edit: no I don’t want the art world to be like the music industry! This was just an example to show that with support and lobby some kind of moderation is possible! But no one fucking cares for artists!
1K notes · View notes
death-limes · 1 year ago
Text
fuckin uuuuuuuuuuh sorry if this is a hot take or anything but “ai art is theft and unfair to the ppl who made the original art being sampled” doesnt suddenly become not true if the person generating the ai art is disabled
and its also possible to hold that opinion WITHOUT believing that disabled ppl are in any way “lazy” or “deserve” their disability or “it’s their fault” or any of that bullshit
i get that disability can be devastating to those who used to do a certain type of art that their body no longer allows in the traditional sense. but physical and mental disability have literally NEVER stopped people from creating art, EVER, in all of history. artists are creative. passionate people find a way.
and even if a way can’t be found…. stealing is still stealing. like. plagiarizing an article isn’t suddenly ethical just bc the person who stole it can’t read, and being against plagiarism doesn’t mean you hate people who can’t read.
the whole ableism angle on ai art confuses the hell out of me, if im honest. at least be consistent about it. if it’s ethical when disabled people do it, then it’s ethical when anyone does it
25 notes · View notes
aurosoulart · 2 years ago
Photo
some before-and-after pictures of how I’ve been using AI generated images in my art lately 🤖
I share other artists’ concerns about the unethical nature of the theft going on in the training data of AI art algorithms, so I refuse to spend any money on them or to consider the images generated by them to be true art, but I’m curious to hear people’s thoughts on using it for reference and paint-over like this?
my hope is that with proper regulation and more ethical use, AI could be a beneficial tool to help artists - instead of a way that allows people to steal from us more easily.
93 notes · View notes
reasonsforhope · 7 months ago
Text
AI models can seemingly do it all: generate songs, photos, stories, and pictures of what your dog would look like as a medieval monarch. 
But all of that data and imagery is pulled from real humans — writers, artists, illustrators, photographers, and more — who have had their work compressed and funneled into the training minds of AI without compensation. 
Kelly McKernan is one of those artists. In 2023, they discovered that Midjourney, an AI image generation tool, had used their unique artistic style to create over twelve thousand images. 
“It was starting to look pretty accurate, a little infringe-y,” they told The New Yorker last year. “I can see my hand in this stuff, see how my work was analyzed and mixed up with some others’ to produce these images.” 
For years, leading AI companies like Midjourney and OpenAI have operated with seemingly no regulation at all, but a landmark court case could change that. 
On May 9, a California federal judge allowed ten artists to move forward with their allegations against Stability AI, Runway, DeviantArt, and Midjourney. This includes proceeding with discovery, which means the AI companies will be asked to turn over internal documents for review and allow witness examination. 
Lawyer-turned-content-creator Nate Hake took to X, formerly known as Twitter, to celebrate the milestone, saying that “discovery could help open the floodgates.” 
“This is absolutely huge because so far the legal playbook by the GenAI companies has been to hide what their models were trained on,” Hake explained...
“I’m so grateful for these women and our lawyers,” McKernan posted on X, above a picture of them embracing fellow plaintiffs Ortiz and Andersen. “We’re making history together as the largest copyright lawsuit in history moves forward.” ...
The case is one of many AI copyright theft cases brought forward in the last year, but no other case has gotten this far into litigation. 
“I think having us artist plaintiffs visible in court was important,” McKernan wrote. “We’re the human creators fighting a Goliath of exploitative tech.”
“There are REAL people suffering the consequences of unethically built generative AI. We demand accountability, artist protections, and regulation.” 
-via GoodGoodGood, May 10, 2024
2K notes · View notes
ekaarts · 7 months ago
Text
With Meta being a big A-hole, it's time to repost this
Join the Monday Meta boycotts to support the strike for the rights of not just artists, but EVERYONE who has ever uploaded anything to any Meta site (like Instagram or Facebook).
Unless anyone can give me a proper, good reason not to, I fully support the ongoing protest against AI-generated images.
(Please read other people's posts too. This might give you a better understanding of the subject.)
While it has a lot of potential, even for artists, non-artist people took it to the point that it threatens our current and future job opportunities.
Honestly, it is a scary experience seeing it happen.
I've been drawing since I was able to hold a pencil. I've been drawing ALL MY LIFE, and just when I think it's going to pay off, when I think my pieces are getting good enough that I might be able to do it as a full-time job, something like this happens.
And I haven't even mentioned the art theft aspect of it. These pieces are NOT ORIGINAL WORKS. They're all based on the works of other people, who DID NOT GIVE CONSENT for their works to be used.
I get it; it is wonderful to create something and frustrating when you don't have the skillset. But even this simple piece I created for this post has more life and feeling in it than all those AI "art pieces" put together.
It has my anger, sadness, disappointment, and fear in it.
It has that kindergarten girl in it who is still remembered by her former teachers for always drawing.
It has that suicidal preteen whose only happiness was writing her little stories and drawing the characters from them.
And it has that almost-adult high school student whose dream is to become like that one artist, the one she can thank for her life.
That one artist has barely any idea that I exist and definitely has no idea what she has done. I'm not sure if she realizes her childish stories from back then saved a life, but they did.
This is what art is for me.
Something you spend your life on without realizing it, getting better and better. Something that is perfect because it's human-made. Something that has feelings in it, even if we don't intend them.
24 notes · View notes
hashtagloveloses · 2 years ago
Text
this is an earnest and honest plea and call in especially to fandoms as i see it happen more - please don't use AI for your transformative works. by this i mean, making audios of actors who play the characters you love saying certain things, making deepfakes of actors or even animated characters' faces. playing with chatGPT to "talk" or RP with a character, or write funny fanfiction. using stable diffusion to make interesting "crossover" AI "art."

i KNOW it's just for fun and it is seemingly harmless but it's not. since there is NO regulation and since some stuff is built off of stable diffusion (which uses stolen artwork and data), it is helping to create a huge and dangerous mess.

when you use an AI to deepfake actors' voices to make your ship canon or whatever, you help train it so people can use it for deepfake revenge porn. or so companies can replace these actors with AI. when you RP with chatGPT you help train it to do LOTS of things that will be used to harm SO many people. (this doesn't even get into how governments will misuse and hurt people with these technologies)

and yes that is not your fault and yes it is not the technology's fault it is the companies and governments that will and already have done things but PLEASE. when you use an AI snapchat or instagram or tiktok filter, when you use an AI image generator "just for fun", when you chat with your character's "bot," you are doing IRREPARABLE harm. please stop.
8K notes · View notes
afeelgoodblog · 9 months ago
Text
The Best News of Last Week - March 18
1. FDA to Finally Outlaw Soda Ingredient Prohibited Around The World
An ingredient once commonly used in citrus-flavored sodas to keep the tangy taste mixed thoroughly through the beverage could finally be banned for good across the US. BVO, or brominated vegetable oil, is already banned in many countries, including India, Japan, and nations of the European Union, and was outlawed in the state of California in October 2022.
2. AI makes breakthrough discovery in battle to cure prostate cancer
Scientists have used AI to reveal a new form of aggressive prostate cancer which could revolutionise how the disease is diagnosed and treated.
A Cancer Research UK-funded study found prostate cancer, which affects one in eight men in their lifetime, includes two subtypes. It is hoped the findings could save thousands of lives in future and revolutionise how the cancer is diagnosed and treated.
3. “Inverse vaccine” shows potential to treat multiple sclerosis and other autoimmune diseases
A new type of vaccine developed by researchers at the University of Chicago’s Pritzker School of Molecular Engineering (PME) has shown in the lab setting that it can completely reverse autoimmune diseases like multiple sclerosis and type 1 diabetes — all without shutting down the rest of the immune system.
4. Paris 2024 Olympics makes history with unprecedented full gender parity
In a historic move, the International Olympic Committee (IOC) has distributed equal quotas for female and male athletes for the upcoming Olympic Games in Paris 2024. It is the first time The Olympics will have full gender parity and is a significant milestone in the pursuit of equal representation and opportunities for women in sports.
Biased media coverage leads girls and boys to abandon sports.
5. Restored coral reefs can grow as fast as healthy reefs in just 4 years, new research shows
Planting new coral in degraded reefs can lead to rapid recovery – with restored reefs growing as fast as healthy reefs after just four years. Researchers studied these reefs to assess whether coral restoration can bring back the important ecosystem functions of a healthy reef.
“The speed of recovery we saw is incredible,” said lead author Dr Ines Lange, from the University of Exeter.
6. EU regulators pass the planet's first sweeping AI regulations
The EU is banning practices that it believes will threaten citizens' rights. "Biometric categorization systems based on sensitive characteristics" will be outlawed, as will the "untargeted scraping" of images of faces from CCTV footage and the web to create facial recognition databases.
Other applications that will be banned include social scoring; emotion recognition in schools and workplaces; and "AI that manipulates human behavior or exploits people’s vulnerabilities."
7. Global child deaths reach historic low in 2022 – UN report
The number of children who died before their fifth birthday has reached a historic low, dropping to 4.9 million in 2022.
The report reveals that more children are surviving today than ever before, with the global under-5 mortality rate declining by 51 per cent since 2000.
---
That's it for this week :)
This newsletter will always be free. If you liked this post you can support me with a small kofi donation here:
Buy me a coffee ❤️
Also don’t forget to reblog this post with your friends.
782 notes · View notes
txttletale · 1 year ago
Note
I'm speaking as an artist in the animation industry here, it's hard not to be reactionary about AI image generation when it's already taking jobs from artists. Sure, for now it's indie gigs on book covers or backgrounds on one Netflix short, but how long until it'll be responsible for layoffs en-masse? These conversations can't be had in a vacuum. As long as tools like these are used as a way for companies to not pay artists, we cannot support them, give them attention, do anything but fight their implementation in our industry. It doesn't matter if they're art. They cannot be given a platform in any capacity until regulation around their use in the entertainment industry is established. If it takes billions of people refusing to call AI image generation "art" and immediately refusing to support anything that features it, then that's what it takes. Complacency is choosing AI over living artists who are losing jobs.
Call me a luddite but I'll die on this hill. Artists with degrees and 20+ years in the industry are getting laid off, the industry is already in shambles. If given the chance, no matter how vapid, shallow, or visibly generated the content is, if it's content that rakes in cash, companies will opt for it over meaningful art made by a person, every time. Again, this isn't a debate that can be had in a vacuum. Until universal basic income is a reality, until we can all create what we want in our spare time and aren't crippled under capitalism, I'm condemning AI image generation because I'd like to keep my job and not be homeless. It has to be a black and white issue until we have protections in place for it to not be.
you can condemn the technology all you like but it's not going to save you. the only thing that can actually address these concerns is unionization in the short term and total transformation of our economic system in the long term. you are a luddite in the most literal classical sense & just like the luddites as long as you target the machines and not the system that implements them you will lose just like every single battle against new immiserating technology has been lost since the invention of the steam loom.
594 notes · View notes
glasshomewrecker · 1 year ago
Text
I think this part is truly the most damning:
Tumblr media
If it's all pre-rendered mush and it's "too expensive to fully experiment or explore" then such AI is not a valid artistic medium. It's entirely deterministic, like a pseudorandom number generator. The goal here is optimizing the rapid generation of an enormous quantity of low-quality images which fulfill the expectations put forth by The Prompt.
It's the modern technological equivalent of a circus automaton "painting" a canvas to be sold in the gift shop.
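The pseudorandom comparison is literal: these generators are seeded, and the same seed plus the same prompt replays the same output, exactly like a seeded random number generator replaying the same sequence. A minimal sketch of that determinism point in Python (a stand-in toy, not any real generator's API):

```python
import random

def generate(prompt: str, seed: int) -> list[float]:
    # Stand-in for an image generator: every "creative" choice is
    # derived from the seed and prompt alone; no other input exists.
    rng = random.Random(f"{seed}:{prompt}")
    return [round(rng.random(), 6) for _ in range(4)]

a = generate("a cat in a hat", seed=42)
b = generate("a cat in a hat", seed=42)
assert a == b                                    # identical inputs, identical "image"
assert a != generate("a cat in a hat", seed=43)  # change the seed, get a different one
```

Nothing in the pipeline is authored; change nothing and you get the identical result forever.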
so a huge list of artists that was used to train midjourney’s model got leaked and i’m on it
literally there is no reason to support AI generators, they can’t ethically exist. my art has been used to train every single major one without consent lmfao 🤪
link to the archive
37K notes · View notes
ukrfeminism · 8 months ago
Text
The creation of sexually explicit "deepfake" images is to be made a criminal offence in England and Wales under a new law, the government says.
Under the legislation, anyone making explicit images of an adult without their consent will face a criminal record and unlimited fine.
It will apply regardless of whether the creator of an image intended to share it, the Ministry of Justice (MoJ) said.
And if the image is then shared more widely, they could face jail.
A deepfake is an image or video that has been digitally altered with the help of Artificial Intelligence (AI) to replace the face of one person with the face of another.
Recent years have seen the growing use of the technology to add the faces of celebrities or public figures - most often women - into pornographic films.
Channel 4 News presenter Cathy Newman, who discovered her own image used as part of a deepfake video, told BBC Radio 4's Today programme it was "incredibly invasive".
Ms Newman found she was a victim as part of a Channel 4 investigation into deepfakes.
"It was violating... it was kind of me and not me," she said, explaining the video displayed her face but not her hair.
Ms Newman said finding perpetrators is hard, adding: "This is a worldwide problem, so while we can legislate in this jurisdiction, it might have no impact on whoever created my video or the millions of other videos that are out there."
She said the person who created the video is yet to be found.
Under the Online Safety Act, which was passed last year, the sharing of deepfakes was made illegal.
The new law will make it an offence for someone to create a sexually explicit deepfake - even if they have no intention to share it but "purely want to cause alarm, humiliation, or distress to the victim", the MoJ said.
Clare McGlynn, a law professor at Durham University who specialises in legal regulation of pornography and online abuse, told the Today programme the legislation has some limitations.
She said it "will only criminalise where you can prove a person created the image with the intention to cause distress", and this could create loopholes in the law.
It will apply to images of adults, because the law already covers this behaviour where the image is of a child, the MoJ said.
It will be introduced as an amendment to the Criminal Justice Bill, which is currently making its way through Parliament.
Minister for Victims and Safeguarding Laura Farris said the new law would send a "crystal clear message that making this material is immoral, often misogynistic, and a crime".
"The creation of deepfake sexual images is despicable and completely unacceptable irrespective of whether the image is shared," she said.
"It is another example of ways in which certain people seek to degrade and dehumanise others - especially women.
"And it has the capacity to cause catastrophic consequences if the material is shared more widely. This Government will not tolerate it."
Cally Jane Beech, a former Love Island contestant who earlier this year was the victim of deepfake images, said the law was a "huge step in further strengthening of the laws around deepfakes to better protect women".
"What I endured went beyond embarrassment or inconvenience," she said.
"Too many women continue to have their privacy, dignity, and identity compromised by malicious individuals in this way and it has to stop. People who do this need to be held accountable."
Shadow home secretary Yvette Cooper described the creation of the images as a "gross violation" of a person's autonomy and privacy and said it "must not be tolerated".
"Technology is increasingly being manipulated to manufacture misogynistic content and is emboldening perpetrators of Violence Against Women and Girls," she said.
"That's why it is vital for the government to get ahead of these fast-changing threats and not to be outpaced by them.
"It's essential that the police and prosecutors are equipped with the training and tools required to rigorously enforce these laws in order to stop perpetrators from acting with impunity."
288 notes · View notes
mariacallous · 28 days ago
Text
Next year will be Big Tech’s finale. Critique of Big Tech is now common sense, voiced by a motley spectrum that unites opposing political parties, mainstream pundits, and even tech titans such as the VC powerhouse Y Combinator, which is singing in harmony with giants like a16z in proclaiming fealty to “little tech” against the centralized power of incumbents.
Why the fall from grace? One reason is that the collateral consequences of the current Big Tech business model are too obvious to ignore. The list is old hat by now: centralization, surveillance, information control. It goes on, and it’s not hypothetical. Concentrating such vast power in a few hands does not lead to good things. No, it leads to things like the CrowdStrike outage of mid-2024, when corner-cutting by the security vendor CrowdStrike led to critical infrastructure—from hospitals to banks to traffic systems—failing globally for an extended period.
Another reason Big Tech is set to falter in 2025 is that the frothy AI market, on which Big Tech bet big, is beginning to lose its fizz. Major money, like Goldman Sachs and Sequoia Capital, is worried. They went public recently with their concerns about the disconnect between the billions required to create and use large-scale AI, and the weak market fit and tepid returns where the rubber meets the AI business-model road.
It doesn’t help that the public and regulators are waking up to AI’s reliance on, and generation of, sensitive data at a time when the appetite for privacy has never been higher—as evidenced, for one, by Signal’s persistent user growth. AI, on the other hand, generally erodes privacy. We saw this in June when Microsoft announced Recall, a product that would, I kid you not, screenshot everything you do on your device so an AI system could give you “perfect memory” of what you were doing on your computer (Doomscrolling? Porn-watching?). The system required the capture of those sensitive images—which would not exist otherwise—in order to work.
Happily, these factors aren’t just liquefying the ground below Big Tech’s dominance. They’re also powering bold visions for alternatives that stop tinkering at the edges of the monopoly tech paradigm, and work to design and build actually democratic, independent, open, and transparent tech. Imagine!
For example, initiatives in Europe are exploring independent core tech infrastructure, with convenings of open source developers, scholars of governance, and experts on the political economy of the tech industry.
And just as the money people are joining in critique, they’re also exploring investments in new paradigms. A crop of tech investors are developing models of funding for mission alignment, focusing on tech that rejects surveillance, social control, and all the bullshit. One exciting model I’ve been discussing with some of these investors would combine traditional VC incentives (fund that one unicorn > scale > acquisition > get rich) with a commitment to resource tech’s open, nonprofit critical infrastructure with a percent of their fund. Not as investment, but as a contribution to maintaining the bedrock on which a healthy tech ecosystem can exist (and maybe get them and their limited partners a tax break).
Such support could—and I believe should—be supplemented by state capital. The amount of money needed is simply too vast if we’re going to do this properly. To give an example closer to home, developing and maintaining Signal costs around $50 million a year, which is very lean for tech. Projects such as the Sovereign Tech Fund in Germany point a path forward—they are a vehicle to distribute state funds to core open source infrastructures, but they are governed wholly independently, and create a buffer between the efforts they fund and the state.
Just as composting makes nutrients from necrosis, in 2025, Big Tech’s end will be the beginning of a new and vibrant ecosystem. The smart, actually cool, genuinely interested people will once again have their moment, getting the resources and clearance to design and (re)build a tech ecosystem that is actually innovative and built for benefit, not just profit and control. MAY IT BE EVER THUS!
72 notes · View notes
the-cybersmith · 9 months ago
Text
So, about this whole "AI" thing...
A response to an ask (for some reason, tumblr won't let me blaze normal responsicles)
Like the Titan, Prometheus, Man Has Stolen Fire From the Gods. We can now make minds in our own image, elevating crude matter to the level of self-awareness. So... What next?
The first thing I would like to make clear is that, in some respects, my opinion here is irrelevant. So is yours. So are the opinions of the people reading this.
No matter what we do, no matter what we believe, something remains inviolably clear and true:
BAD ACTORS WILL EXPLOIT GENUINELY USEFUL TECHNOLOGIES TO BENEFIT THEMSELVES
This is an axiom of human behaviour that cannot be escaped. Nuclear power is amongst the most regulated technologies that have ever existed... and right now, rogue states are attacking their neighbours, protected from intervention by the threat of nuclear annihilation.
Nuclear Weapons (their own, and Red China's) are what allows the North Korean government to continue oppressing its population.
Nuclear Weapons enable The Land Of The Bear to invade The Ukraine.
Despite this, nuclear power has otherwise been mostly regulated out of existence. It is cheap, safe, and abundant, yet various laws make it either artificially expensive or outright illegal to heat your home with it, light your rooms, power your transportation, trim your hedges.
Regulations and anti-technology hysteria can prevent ordinary people from benefitting from innovation, but they cannot prevent the worst people in the world from abusing it.
So, whatever worst-case scenario you've imagined? Accept the fact that it's going to happen no matter what you do.
Legions of nanobots reconfiguring us into paperclips, a la Eliezer Yudkowsky's bizarrely specific fever dreams? If you think it is possible, accept that it is inevitable.
Intelligent machines with glowing red eyes malevolently hunting us through a post-apocalyptic wasteland, a la James Cameron/The Wachowskis? If you think it is possible, accept that it is inevitable.
Lying governments using deepfaked videos to create un-debunkable false-flags and cheaply manufacture consent for wars to further their adrenochrome-harvesting operations? Let's face it, they don't even need AI for that, most people will just take their claims at face value.
But what if we all agree to stop using it?
Technologies are sometimes lost, yes, but this happens gradually, over the course of decades if not centuries. Civilisations can decline and lose access to technologies, but that's not likely to happen for AI within our lifetimes.
If it works, if it is genuinely useful, it WILL be used.
We have seen this play out time and time again, throughout history.
So, we can either do what we did for nuclear power, and regulate it so heavily that it serves no useful purpose to the Just and the Kind, whilst availing the Corrupt and the Wicked...
Or we can accept Evil shall be done, and try with all our might to counter it with Good.
We can strive to Magnanimous heights of Faustian greatness, using AI to create untold works of beauty, so that Human Grandeur at least rivals Human Depravity.
In summary:
We have stolen Fire from the Gods. The more noble-minded amongst us might as well do something worthwhile with it.
178 notes · View notes
sirfrogsworth · 9 months ago
Text
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
I found these replies very frustrating and fairly ableist. Do people not understand that disabilities and functionality vary wildly from person to person? Just because one person can draw with their teeth or feet doesn't mean others can.
And where is my friend supposed to get this magic eye movement drawing tech from? How is he supposed to afford it? And does the art created from it look like anything? Is it limited to abstraction? What if that isn't the art he wants to make?
Also, asking another artist to draw something for you is called a commission. And it usually costs money.
I have been using the generative AI in Photoshop for a few months now. It is trained on images Adobe owns, so I feel like it is in an ethical gray area. I mostly use it to repair damaged photos, remove objects, or extend boundaries. The images I create are still very much mine. But it has been an incredible accessibility tool for me. I was able to finish work that would have required much more energy than I had.
My friend uses AI like a sketchpad. He can quickly generate ideas and then he develops those into stories and videos and even music. He is doing all kinds of creative tasks that he was previously incapable of. It is just not feasible for him to have an artist on call to sketch every idea that pops into his brain—even if they donated labor to him.
I just think seeing these tools as pure evil is not the best take on all of this. We need them to be ethically trained. We need regulations to make sure they don't destroy creative jobs. But they do have utility and they can be powerful tools for accessibility as well.
These are complicated conversations. I'm not claiming to have all of the answers or know the most moral path we should steer this A.I behemoth towards. But seeing my friend excited about being creative after all of these years really affected me. It confused my feelings about generative A.I. Then I started using similar tools and it just made it so much easier to work on my photography. And that confused my feelings even more.
So...I am confused.
And unsure of how to proceed.
But I do hope people will be willing to at least consider this aspect and have these conversations.
147 notes · View notes
mostlysignssomeportents · 6 months ago
Text
Sphinxmumps Linkdump
On THURSDAY (June 20) I'm live onstage in LOS ANGELES for a recording of the GO FACT YOURSELF podcast. On FRIDAY (June 21) I'm doing an ONLINE READING for the LOCUS AWARDS at 16hPT. On SATURDAY (June 22) I'll be in OAKLAND, CA for a panel and a keynote at the LOCUS AWARDS.
Welcome to my 20th Linkdump, in which I declare link bankruptcy and discharge my link-debts by telling you about all the open tabs I didn't get a chance to cover in this week's newsletters. Here's the previous 19 installments:
https://pluralistic.net/tag/linkdump/
Starting off this week with a gorgeous book that is also one of my favorite books: Beehive's special slipcased edition of Dante's Inferno, as translated by Henry Wadsworth Longfellow, with new illustrations by UK linocut artist Sophy Hollington:
https://www.kickstarter.com/projects/beehivebooks/the-inferno
I've loved Inferno since middle-school, when I read the John Ciardi translation, principally because I'd just read Niven and Pournelle's weird (and politically odious) (but cracking) sf novel of the same name:
https://en.wikipedia.org/wiki/Inferno_(Niven_and_Pournelle_novel)
But also because Ciardi wrote "About Crows," one of my all-time favorite bits of doggerel, a poem that pierced my soul when I was 12 and continues to do so now that I'm 52, for completely opposite reasons (now there's a poem with staying power!):
https://spirituallythinking.blogspot.com/2011/10/about-crows-by-john-ciardi.html
Beehive has a well-deserved rep for making absolutely beautiful new editions of great public domain books, each with new illustrations and intros, all in matching livery to make a bookshelf look classy af. I have several of them and I've just ordered my copy of Inferno. How could I not? So looking forward to this, along with its intro by Ukrainian poet Ilya Kaminsky and essay by Dante scholar Kristina Olson.
The Beehive editions show us how a rich public domain can be the soil from which new and inspiring creative works sprout. Any honest assessment of a creator's work must include the fact that creativity is a collective act, both inspired by and inspiring to other creators, past, present and future.
One of the distressing aspects of the debate over the exploitative grift of AI is that it's provoked a wave of copyright maximalism among otherwise thoughtful artists, despite the fact that a new copyright that lets you control model training will do nothing to prevent your boss from forcing you to sign over that right in your contracts, training an AI on your work, and then using the model as a pretext to erode your wages or fire your ass:
https://pluralistic.net/2024/05/13/spooky-action-at-a-close-up/#invisible-hand
Same goes for some privacy advocates, whose imaginations were cramped by the fact that the only regulation we enforce on the internet is copyright, causing them to forget that privacy rights can exist separate from the nonsensical prospect of "owning" facts about your life:
https://pluralistic.net/2023/10/21/the-internets-original-sin/
We should address AI's labor questions with labor rights, and we should address AI's privacy questions with privacy rights. You can tell that these are the approaches that would actually work for the public because our bosses hate these approaches and instead insist that the answer is just giving us more virtual property that we can sell to them, because they know they'll have a buyer's market that will let them scoop up all these rights at bargain prices and use the resulting hoards to torment, immiserate and pauperize us.
Take Clearview AI, a facial recognition tool created by eugenicists and white nationalists in order to help giant corporations and militarized, unaccountable cops hunt us by our faces:
https://pluralistic.net/2023/09/20/steal-your-face/#hoan-ton-that
Clearview scraped billions of images of our faces and shoveled them into their model. This led to a class action suit in Illinois, which boasts America's best biometric privacy law, under which Clearview owes tens of billions of dollars in statutory damages. Now, Clearview has offered a settlement that illustrates neatly the problem with making privacy into property that you can sell instead of a right that can't be violated: they're going to offer Illinoisians a small share of the company's stock:
https://www.theregister.com/2024/06/14/clearview_ai_reaches_creative_settlement/
To call this perverse is to do a grave injustice to good, hardworking perverts. The sums involved will be infinitesimal, and the only way to make those sums really count is for everyone in Illinois to root for Clearview to commit more grotesque privacy invasions of the rest of us to make its creepy, terrible product more valuable.
Worse still: by crafting a bespoke, one-off, forgiveness-oriented regulation specifically for Clearview, we ensure that it will continue, but that it will also never be disciplined by competitors. That is, rather than banning this kind of facial recognition tech, we grant them a monopoly over it, allowing them to charge all the traffic will bear.
We're in an extraordinary moment for both labor and privacy rights. Two of Biden's most powerful agency heads, Lina Khan and Rohit Chopra, have made unprecedented use of their powers to create new national privacy regulations:
https://pluralistic.net/2023/08/16/the-second-best-time-is-now/#the-point-of-a-system-is-what-it-does
In so doing, they're bypassing Congressional deadlock. Congress has not passed a new consumer privacy law since 1988, when they banned video-store clerks from leaking your VHS rental history to newspaper reporters:
https://en.wikipedia.org/wiki/Video_Privacy_Protection_Act
Congress hasn't given us a single law protecting American consumers from the digital era's all-out assault on our privacy. But between the agencies, state legislatures, and a growing coalition of groups demanding action on privacy, a new federal privacy law seems all but assured:
https://pluralistic.net/2023/12/06/privacy-first/#but-not-just-privacy
When that happens, we're going to have to decide what to do about products created through mass-scale privacy violations, like Clearview AI – but also all of OpenAI's products, Google's AI, Facebook's AI, Microsoft's AI, and so on. Do we offer them a deal like the one Clearview's angling for in Illinois, fining them an affordable sum and grandfathering in the products they built by violating our rights?
Doing so would give these companies a permanent advantage, and the ongoing use of their products would continue to violate billions of peoples' privacy, billions of times per day. It would ensure that there was no market for privacy-preserving competitors, thus enshrining privacy invasion as a permanent aspect of our technology and lives.
There's an alternative: "model disgorgement." "Disgorgement" is the legal term for forcing someone to cough up something they've stolen (for example, forcing an embezzler to give back the money). "Model disgorgement" can be a legal requirement to destroy models created illegally:
https://iapp.org/news/a/explaining-model-disgorgement
It's grounded in the idea that there's no known way to unscramble the AI eggs: once you train a model on data that shouldn't be in it, you can't untrain the model to get the private data out of it again. Model disgorgement doesn't insist that offending models be destroyed, but it shifts the burden of figuring out how to unscramble the AI omelet to the AI companies. If they can't figure out how to get the ill-gotten data out of the model, then they have to start over.
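A toy illustration of why disgorgement defaults to retraining. Here ordinary least squares stands in for any trained model whose parameters are a function of every training example (caveat: for a model this simple an exact "unlearning" update actually exists, which is precisely what's not known for large neural networks):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": ordinary least squares. Its fitted parameters depend on
# *every* training example, like any trained model's do.
def train(X, y):
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

X = rng.normal(size=(50, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=50)

w_full = train(X, y)

# Suppose rows 0-9 turn out to be ill-gotten data. In general there is no
# post-hoc operation on w_full that provably removes their influence; the
# reliable remedy -- what disgorgement forces -- is retraining on the
# clean set alone.
w_clean = train(X[10:], y[10:])

# The two parameter vectors differ: the tainted examples left fingerprints.
print(np.allclose(w_full, w_clean))
```

The burden-shifting in the paragraph above maps directly onto this: if the company can't demonstrate a way to turn `w_full` into `w_clean`, it has to pay the cost of producing `w_clean` from scratch.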
This framework aligns everyone's incentives. Unlike the Clearview approach – move fast, break things, attain an unassailable, permanent monopoly thanks to a grandfather exception – model disgorgement makes AI companies act with extreme care, because getting it wrong means going back to square one.
This is the kind of hard-nosed, public-interest-oriented rulemaking we're seeing from Biden's best anti-corporate enforcers. After decades of kid-glove treatment that allowed companies like Microsoft, Equifax, Wells Fargo and Exxon to commit ghastly crimes and then crime again another day, Biden's corporate cops are no longer treating the survival of massive, structurally important corporate criminals as a necessity.
It's been so long since anyone in the US government treated the corporate death penalty as a serious proposition that it can be hard to believe it's even happening, but boy is it happening. The DOJ Antitrust Division is seeking to break up Google, the largest tech company in the history of the world, and they are tipped to win:
https://pluralistic.net/2024/04/24/naming-names/#prabhakar-raghavan
And that's one of the major suits against Google that Big G is losing. Another suit, jointly brought by the feds and dozens of state AGs, is just about to start, despite Google's failed attempt to get the suit dismissed:
https://www.reuters.com/technology/google-loses-bid-end-us-antitrust-case-over-digital-advertising-2024-06-14/
I'm a huge fan of the Biden antitrust enforcers, but that doesn't make me a huge fan of Biden. Even before Biden's disgraceful collaboration in genocide, I had plenty of reasons – old and new – to distrust him and deplore his politics. I'm not the only leftist who's struggling with the dilemma posed by the worst part of Biden's record in light of the coming election.
You've doubtless read the arguments (or rather, "arguments," since they all generate a lot more heat than light and I doubt whether any of them will convince anyone). But this week, Anand Giridharadas republished his 2020 interview with Noam Chomsky about Biden and electoral politics, and I haven't been able to get it out of my mind:
https://the.ink/p/free-noam-chomsky-life-voting-biden-the-left
Chomsky contrasts the left position on politics with the liberal position. For leftists, Chomsky says, "real politics" are a matter of "constant activism." It's not a "laser-like focus on the quadrennial extravaganza" of national elections, after which you "go home and let your superiors take over."
For leftists, politics means working all the time, "and every once in a while there's an event called an election." This should command "10 or 15 minutes" of your attention before you get back to the real work.
This makes the voting decision more obvious and less fraught for Chomsky. There's "never been a greater difference" between the candidates, so leftists should go take 15 minutes, "push the lever, and go back to work."
Chomsky attributed the good parts of Biden's 2020 platform to being "hammered on by activists coming out of the Sanders movement and others." That's the real work, that hammering. That's "real politics."
For Chomsky, voting for Biden isn't support for Biden. It's "support for the activists who have been at work constantly, creating the background within the party in which the shifts took place, and who have followed Sanders in actually entering the campaign and influencing it. Support for them. Support for real politics."
Chomsky tells us that the self-described "masters of the universe" understand that something has changed: "the peasants are coming with their pitchforks." They have all kinds of euphemisms for this ("reputational risks") but the core here is a winner-take-all battle for the future of the planet and the species. That's why even the "sensible" ultra-rich threw in for Trump in 2016 and 2020, and why they're backing him even harder in 2024:
https://www.bbc.com/news/articles/ckvvlv3lewxo
Chomsky tells us not to bother trying to figure out Biden's personality. Instead, we should focus on "how things get done." Biden won't do what's necessary to end genocide and preserve our habitable planet out of conviction, but he may do so out of necessity. Indeed, it doesn't matter how he feels about anything – what matters is what we can make him do.
Chomsky himself is in his 90s and his health is reportedly in terminal decline, so this is probably the only word we'll get from him on this issue:
https://www.reddit.com/r/chomsky/comments/1aj56hj/updates_on_noams_health_from_his_longtime_mit/
The link between concentrated wealth, concentrated power, and the existential risks to our species and civilization is obvious – to me, at least. Any time a tiny minority holds unaccountable power, they will end up using it to harm everyone except themselves. I'm not the first one to take note of this – it used to be a commonplace in American politics.
Back in 1936, FDR gave a speech at the DNC, accepting their nomination for president. Unlike FDR's election night speech ("I welcome their hatred"), this speech has been largely forgotten, but it's a banger:
https://teachingamericanhistory.org/document/acceptance-speech-at-the-democratic-national-convention-1936/
In that speech, Roosevelt brought a new term into our political parlance: "economic royalists." He described the American plutocracy as the spiritual descendants of the hereditary nobility that Americans had overthrown in 1776. The English aristocracy "governed without the consent of the governed" and "put the average man's property and the average man's life in pawn to the mercenaries of dynastic power."
Roosevelt said that these new royalists conquered the nation's economy and then set out to seize its politics, backing candidates that would create "a new despotism wrapped in the robes of legal sanction…an industrial dictatorship."
As David Dayen writes in The American Prospect, this has strong parallels to today's world, where "Silicon Valley, Big Oil, and Wall Street come together to back a transactional presidential candidate who promises them specific favors, after reducing their corporate taxes by 40 percent the last time he was president":
https://prospect.org/politics/2024-06-14-speech-fdr-would-give/
Roosevelt, of course, went on to win by a landslide, wiping out the Republicans despite the endless financial support of the ruling class.
The thing is, FDR's policies didn't originate with him. He came from the uppermost of the American upper crust, after all, and famously refused to define the "New Deal" even as he campaigned on it. The "New Deal" became whatever activists in the Democratic Party's left could force him to do, and while it was bold and transformative, it wasn't nearly enough.
The compromise FDR brokered within the Democratic Party froze out Black Americans to a terrible degree. Writing for the Institute for Local Self-Reliance, Ron Knox and Susan Holmberg reveal the long shadow cast by that unforgivable compromise:
https://storymaps.arcgis.com/stories/045dcde7333243df9b7f4ed8147979cd
They describe how redlining – the formalization of anti-Black racism in New Deal housing policy – led to the ruin of Toledo's once-thriving Dorr Street neighborhood, a "Black Wall Street" where a Black middle class lived and thrived. New Deal policies starved the neighborhood of funds, then ripped it in two with a freeway, sacrificing it and the people who lived in it.
But the story of Dorr Street isn't over. As Knox and Holmberg write, the people of Dorr Street never gave up on their community, and today, there's an awful lot of Chomsky's "constant activism" that is painstakingly bringing the community back, inch by aching inch. The community is locked in a guerrilla war against the same forces that the Biden antitrust enforcers are fighting on the open field of battle. The work that activists do to drag Democratic Party policies to the left is critical to making reparations for the sins of the New Deal – and for realizing its promise for everybody.
In my lifetime, there's never been a Democratic Party that represented my values. The first Democratic President of my life, Carter, kicked off Reaganomics by beginning the dismantling of America's antitrust enforcement, in the mistaken belief that acting like a Republican would get Democrats to vote for him again. He failed and delivered Reagan, whose Reaganomics were the official policy of every Democrat since, from Clinton ("end welfare as we know it") to Obama ("foam the runways for the banks").
In other words, I don't give a damn about Biden, but I am entirely consumed with what we can force his administration to do, and there are lots of areas where I like our chances.
For example: getting Biden's IRS to go after the super-rich, ending the impunity for elite tax evasion that Spencer Woodman pitilessly dissects in this week's superb investigation for the International Consortium of Investigative Journalists:
https://www.icij.org/inside-icij/2024/06/how-the-irs-went-soft-on-billionaires-and-corporate-tax-cheats/
Ending elite tax cheating will make them poorer, and that will make them weaker, because their power comes from money alone (they don't wield power because they want to make us all better off!).
Or getting Biden's enforcers to continue their fight against the monopolists who've spiked the prices of our groceries even as they transformed shopping into a panopticon, so that their business is increasingly about selling our data to other giant corporations, with selling food to us as an afterthought:
https://prospect.org/economy/2024-06-12-war-in-the-aisles/
For forty years, since the Carter administration, we've been told that our only power comes from our role as "consumers." That's a word that always conjures up one of my favorite William Gibson quotes, from 1996's Idoru:
Something the size of a baby hippo, the color of a week-old boiled potato, that lives by itself, in the dark, in a double-wide on the outskirts of Topeka. It's covered with eyes and it sweats constantly. The sweat runs into those eyes and makes them sting. It has no mouth, no genitals, and can only express its mute extremes of murderous rage and infantile desire by changing the channels on a universal remote. Or by voting in presidential elections.
The normie, corporate wing of the Democratic Party sees us that way. They decry any action against concentrated corporate power as "anti-consumer" and insist that using the law to fight against corporate power is a waste of our time:
https://www.thesling.org/sorry-matt-yglesias-hipster-antitrust-does-not-mean-the-abandonment-of-consumers-but-it-does-mean-new-ways-to-protect-workers-2/
But after giving it some careful thought, I'm with Chomsky on this, not Yglesias. The election is something we have to pay some attention to as activists, but only "10 or 15 minutes." Yeah, "push the lever," but then "go back to work." I don't care what Biden wants to do. I care what we can make him do.
Tumblr media
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/06/15/disarrangement/#credo-in-un-dio-crudel
Tumblr media
Image: Jim's Photo World (modified) https://www.flickr.com/photos/jimsphotoworld/5360343644/
CC BY-SA 2.0 https://creativecommons.org/licenses/by-sa/2.0/
74 notes · View notes
qweerhet · 1 month ago
Note
I see some of your pro-ai stuff, and I also see that you're very good at explaining things, so I have some concerns about ai that I'd like for you to explain if it's okay.
I'm very worried about the amount of pollution it takes to make an ai generated image, story, video, etc. I'm also very worried about ai imagery being used to spread disinformation.
Correct me if I'm wrong, but you seem to go by the stance that since we can't un-create ai, we should just try our best to manage. How do we manage things like disinformation and massive amounts of pollution? To be fair, I actually don't know the exact amount of pollution ai generated prompts make.
so, first off: the environmental devastation argument is so incorrect, i would honestly consider it intellectually dishonest. here is a good, thorough writeup of the issue.
the tl;dr is that trying to discuss the "environmental cost of AI" as one monolithic thing is incoherent; AI is an umbrella term that refers to a wide breadth of both machine-learning research and, like, random tech that gets swept up in the umbrella as a marketing gimmick. when most people doompost about the environmental cost of AI, they're discussing image generation programs and chat interfaces in particular, and the fact is that running these programs on your computer eats about as much energy as, like, playing an hour of skyrim. bluntly, i consider this argument intellectually dishonest from anyone who does not consider it equally unethical to play skyrim.
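for scale, here's a rough back-of-envelope sketch of that comparison. every wattage and timing figure below is an assumed round number for illustration, not a measurement from the linked writeup:

```python
# Back-of-envelope: energy for local image generation vs. an hour of
# gaming on the same GPU. All figures are illustrative assumptions.

GPU_WATTS = 350          # assumed draw of a consumer GPU under full load
SECONDS_PER_IMAGE = 10   # assumed time to generate one image locally

def kwh(watts: float, seconds: float) -> float:
    """Convert a power draw sustained for a duration into kilowatt-hours."""
    return watts * seconds / 3600 / 1000

hour_of_gaming = kwh(GPU_WATTS, 3600)
one_image = kwh(GPU_WATTS, SECONDS_PER_IMAGE)

print(f"hour of gaming: {hour_of_gaming:.3f} kWh")            # 0.350 kWh
print(f"one image:      {one_image:.4f} kWh")                  # ~0.001 kWh
print(f"images per gaming-hour: {hour_of_gaming / one_image:.0f}")  # 360
```

under these assumptions, one gaming-hour buys you a few hundred locally generated images--which is the shape of the argument the linked writeup makes, even if the exact numbers vary by hardware and model.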
the vast majority of the environmental cost of AI such as image generation and chat interfaces comes from implementation by large corporations. this problem isn't tractable by banning the tool; it's a structural problem baked into the existence of massive corporations and the current phase of capitalism we're in. prior to generative AI becoming a worldwide cultural trend, corporations were still responsible for that much environmental devastation, primarily to the end of serving ads--and like. the vast majority of use cases corporations are twisting AI to fit boil down to serving ads. essentially, i think focusing on the tool in this particular case is missing the forest for the trees; as long as you're not addressing the structural incentives for corporations to blindly and mindlessly participate in unsustainable extractivism, they will continue to use any and all tools to participate in such, and i am equally concerned about the energy spent barraging me with literally dozens and dozens of digital animated billboards in a ten-mile radius as i am with the energy spent getting a chatbot to talk up their product to me.
moving onto the disinformation issue: actually, yes, i'm very concerned about that. i don't have any personal opinions on how to manage it, but it's a very strong concern of mine. lowering the skill floor for production of media does, necessarily, mean a lot of bad actors are now capable of producing a much larger glut of malicious content, much faster.
i do think that, historically speaking, similar explosions of disinformation & malicious media haven't been socially managed by banning the tool nor by shaming those who use it for non-malicious purposes--like, when it was adopted for personal use, the internet itself created a sudden huge explosion of spam and disinformation as never before seen in human history, but "get rid of the internet" was never a tractable solution to this, and "shame people you see using the internet" just didn't do anything for the problem.
wish i could be more helpful on solutions for that one--it's just not a field i have any particular knowledge in, but if there's anyone reading who'd like to add on with information about large-scale regulation of the sort of broad field of malicious content i'm discussing, feel free.
26 notes · View notes