#LAION
Text
where my fellow monster fuckers at 👅👅👅👅👅👅
#dungeon meshi fanart#dungeon meshi#dunmeshi#laios touden#farcille#marcille donato#falin#not going to lie labru is the worst thing to happen to me because damn everyones lining up for yaoi and not laios x monsters sniff sniff#sorry cough im. normal#my artwork#rkgk#winged lion#theres not even a ship tag for them im sick#lios..?#laion#😭
26K notes
Text
And things get even more terrifying!! I saw someone mention this in Jon's insta comments.
Everyone is at risk, not just artists...
Here's the full article!
841 notes
Text
I know a lot of artists are antsy about art theft right now (myself included, I literally just had a terrible nightmare about fighting the physical manifestation of AI, The Mitchells vs The Machines style…). I can’t claim that any of these things can prevent it. But here’s a few things I’ve found useful:
Opening a free account on Pixsy.com. This website does a decent job of letting me know when my images have been reposted. 99% of the time, the results are just Tumblr-copying zombie websites that repost everything that is already here. But it’s sensitive enough that it alerted me when my old college posted my work. They were harmlessly using my stuff as an example of alumni work, but I was glad to be in the know, AND they had mistakenly credited my deadname, so I was able to reach out and correct that. I would never have seen it otherwise. The website has subscription options, but you can ignore them and still use the monitoring services it provides.
Reverse image searching my most widely shared pieces on haveibeentrained.com. This website checks to see if your work has been fed to AI.
Looking up legal takedown letters and referencing them to draft a generic letter for my own use. This takes a bit of the stress off what is already a stressful and often time-consuming ordeal. Taking time to craft a Very Scary, Legally Threatening, Yet Coldly Professional Memo has been worth it.
Remaining careful about what and how I post online. My living depends on sharing my work, so I have to post it. I’ve learned through trial and error how to post lower-resolution images that still look good, but aren’t easily used for anything beyond the intended post, and of course, strategic watermarking. Never, ever post full-res, print-quality stuff for the general public. Half the time it ends up looking unflattering on social media anyways, cause the files get crunched for being large. I try to downsize my images myself, with resampling set to bicubic smoothing, to head that off. Look up the optimal image resolutions and proportions for individual sites before posting your web versions. For some work, cropping the piece, or posting chunks of detail shots instead of a full view, is a more protective measure.
Look out for other artists! Reach out when in doubt. Don’t steal from others. Learn the difference between theft, and a study/master copy/fanart/inspiration. Don’t assume that all posted art has the same intended purpose as a “how to” instructional like 5 Minute Crafts. Ask permission. Artists are often helpful and supportive towards people who want to study their work! And, the best tip-offs I’ve received have all been from other people who were watching my back. Thank you to everybody who keeps an eye out for my work, and who have been thoughtful enough to reach out to me when they see theft happening 💖 y’all are the real MVPs. All we have is each other.
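A small aside for the spreadsheet-brained: the arithmetic behind picking a web-safe size is simple. Here’s a minimal Python sketch; the 1200 px cap is just an example value, not any platform’s official number, so check each site’s recommendations first:

```python
def web_size(width: int, height: int, max_side: int = 1200) -> tuple[int, int]:
    """Scale dimensions so the longest side is at most max_side,
    preserving aspect ratio. Never upscales small images."""
    scale = min(1.0, max_side / max(width, height))
    return (round(width * scale), round(height * scale))

# A 4000x3000 print-res scan becomes a 1200x900 web version;
# the resampling itself (e.g. bicubic in your editor) does the rest.
```

Feed the result into whatever resize tool you use; the point is to compute the target dimensions once, rather than eyeballing them per post.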
#cas posts#art theft#ai art#art#intellectual property#artificial art#artificial intelligence#artificialart#lensa app#lensa#stable diffusion#laion#laion database#large scale artificial intelligence open network#stolen art
398 notes
Text
The biggest dataset used for AI image generators had CSAM in it
Link to the original tweet with more info
The LAION dataset has had ethical concerns raised over its contents before, but the public now has proof that there was CSAM used in it.
The dataset was essentially created by scraping the internet and using a mass tagger to label what was in the images. Many of the images were already known to contain identifying or personal information, and several people have been able to use EU privacy laws to get images removed from the dataset.
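In code terms, that pipeline is roughly the following sketch. This is a simplification: `caption_score` stands in for the automatic tagger/filter (LAION used a learned image–text similarity model), and the 0.3 threshold is purely illustrative:

```python
def build_dataset(pages, caption_score, threshold=0.3):
    """Collect (image URL, caption) records from crawled pages.

    `pages` is an iterable of dicts like
      {"images": [{"url": ..., "alt": ...}, ...]}
    and `caption_score` stands in for the automatic filter that
    judges whether the text plausibly describes the image.
    Note that only the URL and text are stored, not the pixels.
    """
    records = []
    for page in pages:
        for img in page["images"]:
            caption = img.get("alt", "")
            if caption and caption_score(img["url"], caption) >= threshold:
                records.append({"url": img["url"], "caption": caption})
    return records
```

Nothing in a process like this inspects what the image actually depicts beyond that automated score, which is how personal and illegal material slips through at scale.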
However, LAION itself has known about the CSAM issue since 2021.
LAION was a pretty bad dataset to use anyway, and I hope researchers drop it for something more useful that was created more ethically. I hope this leads to more ethical databases being created, and to companies getting punished for using unethical ones. I hope the people responsible for this are punished, and that the victims get healing and closure.
12 notes
Text
Okay, tech people:
Can anybody tell me what the LAION-5B data set is in layman's terms, as well as how it is used to train actual models?
Everything I have read online is either so technical that it provides zero information to me, or so dumbed down that it provides almost zero information to me.
Here is what I *think* is going on (and I already have enough information to know that in some ways this is definitely wrong.)
LAION uses a web crawler to essentially randomly check publicly accessible web pages. When this crawler finds an image, it creates a record of the image URL, a set of descriptive words from the image ALT text (and other sources, I think?), and some other stuff.
This is compiled into a big giant list of image URLs and descriptive text associated with the URL.
When a model is trained on this data it... I guess... essentially goes to every URL in the list, checks the image, extracts some kind of data from the image file itself, and then associates the data extracted from the image with the descriptive text that LAION has already associated with the image URL?
The big pitfall, apparently, is that a lot of images have been improperly or even illegally posted to the public internet where crawlers can access them, even though they shouldn't be public (e.g. medical records or CSAM), and the dataset is too large to actually hand-curate every single entry? So models trained on the dataset contain some amount of data that legally they should not have, outside and beyond copyright considerations. A secondary problem is that the production of image ALT text is extremely opaque to ordinary users, so certain images that a user might be comfortable posting may, unbeknownst to them, carry ALT text that the user would not like to be disseminated.
Am I even in the ballpark here? It is incredibly frustrating to read multiple news stories about this stuff and still lack the basic knowledge you would need to think about this stuff systematically.
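In pseudo-Python, my mental model of the training-side consumption is something like this (with `fetch_image` as a stand-in for an HTTP download, and error handling reduced to skipping dead links):

```python
def training_pairs(records, fetch_image):
    """Yield (image bytes, caption) pairs from a LAION-style record list.

    The dataset itself stores only URLs plus text; the pixels are
    fetched at training time. So whatever is still reachable at those
    URLs, appropriate or not, is what the model actually sees.
    """
    for rec in records:
        try:
            image = fetch_image(rec["url"])  # stand-in for an HTTP GET
        except OSError:                      # dead link: just skip it
            continue
        yield image, rec["caption"]
```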
7 notes
Text
So, SO fucking tired of this year, honestly.
So, AI-generated pictures (I REFUSE to call these 'art') are the current bane of my existence as an artist, but if you want to be sure whether your art has been scraped, reaped, and stolen for the LAION dataset (you know, the one that blatantly steals and resells private and copyrighted data?) you can use this site.
Now, I have no idea if their tool to opt out of the database even works, but if I have news about it (or if you have, drop a comment or a DM) I'll update this post.
Yeah, I found several pieces of mine there, including some of my portfolio art, a personal gift for a friend's birthday, and the contents of my Redbubble gallery.
As a French woman, my ancestral instinct to find and guillotine* someone is getting stronger every day.
*in a metaphorical sense, of course, I'm usually** non violent and more prone to cry and blow my nose on the perpetrator's shirt than going straight to killing.
** usually anyway.
#AI#LAION#AI generated picture#artist's right#years of reading ghost in the shell didn't prepare me for THAT shit
9 notes
Note
psst ai art is not real art and hurts artists
Real life tends to be far more nuanced than sweeping statements, emotional rhetoric, or conveniently fuzzy definitions. “Artists” are not a monolithic entity and neither are companies. There are different activities with different economics.
I’ll preface the rest of my post with sharing my own background, for personal context:
👩🎨 I am an artist. I went to/graduated from an arts college and learned traditional art-making (sculpture to silkscreen printing), and my specialism was in communication design (using the gamut of requisite software like Adobe Illustrator, InDesign, Photoshop, Lightroom, Dreamweaver etc). Many of my oldest friends are career artists—two of whom served as official witnesses to my marriage. Friends of friends have shown at the Venice Biennale, stuff like that. Many are in fields like games, animation, VFX, 3D etc. In the formative years of my life, I’ve worked & collaborated in a wide range of creative endeavours and pursuits. I freelanced under a business which I co-created, ran commercial/for-profit creative events for local musicians & artists, did photography (both digital & analog film, some of which I hand-processed in a darkroom), did some modelling, styling, appeared in student films… the list goes on. I’ve also dabbled with learning 3D using Blender, a free, open source software (note: Blender is an important example I’ll come back to, below).

💸 I am a (budding) patron of the arts. On the other side of the equation, I sometimes buy art: small things like buying friends’ work. I’m also currently holding (very very tiny) stakes in “real” art—as in, actual fine art: a few pieces by Basquiat, Yayoi Kusama, Joan Mitchell.

👩💻 I am a software designer & engineer. I spent about an equal number of years in tech: took some time to re-skill in a childhood passion and dive into a new field, then went off to work at small startups (not “big tech”), to design and write software every day.
So I’m quite happy to talk art, tech, and the intersection. I’m keeping tabs on the debate around the legal questions and the lawsuits.
Can an image be stolen if only used in training input, and is never reproduced as output? Can a company be vicariously liable for user-generated content? Legally, style isn’t copyrightable, and for good reason. Copyright law is not one-size-fits-all. Claims vary widely per case.
Flaws in the Anderson vs Stability AI case, aka “stolen images” argument
Read this great simple breakdown by a copyright lawyer that covers reproduction vs. derivative rights, model inputs and outputs, derivative works, style, and vicarious liability https://copyrightlately.com/artists-copyright-infringement-lawsuit-ai-art-tools/
“Getty’s new complaint is much better than the overreaching class action lawsuit I wrote about last month. The focus is where it should be: the input stage ingestion of copyrighted images to train the data. This will be a fascinating fair use battle.”
“Surprisingly, plaintiffs’ complaint doesn’t focus much on whether making intermediate stage copies during the training process violates their exclusive reproduction rights under the Copyright Act. Given that the training images aren’t stored in the software itself, the initial scraping is really the only reproduction that’s taken place.”
“Nor does the complaint allege that any output images are infringing reproductions of any of the plaintiffs’ works. Indeed, plaintiffs concede that none of the images provided in response to a particular text prompt “is likely to be a close match for any specific image in the training data.””
“Instead, the lawsuit is premised upon a much more sweeping and bold assertion—namely that every image that’s output by these AI tools is necessarily an unlawful and infringing “derivative work” based on the billions of copyrighted images used to train the models.”
“There’s another, more fundamental problem with plaintiffs’ argument. If every output image generated by AI tools is necessarily an infringing derivative work merely because it reflects what the tool has learned from examining existing artworks, what might that say about works generated by the plaintiffs themselves? Works of innumerable potential class members could reflect, in the same attenuated manner, preexisting artworks that the artists studied as they learned their skill.”
My thoughts on generative AI: how anti-AI rhetoric helps Big Tech (and harms open-source/independents), how there’s no such thing as “real art”
The AI landscape is still evolving and being negotiated, but fear-mongering and tighter regulations seldom serve anyone’s favour besides big companies. It’s the oldest trick in the book to preserve monopoly and all big corps in major industries have done this. Get a sense of the issue in this article: https://www.forbes.com/sites/hessiejones/2023/04/19/amid-growing-call-to-pause-ai-research-laion-petitions-governments-to-keep-agi-research-open-active-and-responsible/?sh=34b78bae62e3
“AI field is progressing at unprecedented speed; however, training state-of-art AI models such as GPT-4 requires large compute resources, not currently available to researchers in academia and open-source communities; the ‘compute gap’ keeps widening, causing the concentration of AI power at a few large companies.”
“Governments and businesses will become completely dependent on the technologies coming from the largest companies who have invested millions, and by definition have the highest objective to profit from it.”
“The “AGI Doomer” fear-mongering narrative distracts from actual dangers, implicitly advocating for centralized control and power consolidation.”
Regulation & lawsuits benefit massive monopolies: Adobe (which owns Adobe Stock), Microsoft, Google, Facebook et al. Fighting lawsuits and licensing with stock image companies for good PR—like OpenAI (which Microsoft invested $10 billion in) and Shutterstock—is a cost they have ample resources to pay, to protect their monopoly after all that massive investment in ML/AI R&D. The rewards outweigh the risks. They don't really care about ethics, only when it annihilates competition. Regulatory capture means these mega-corporations will continue to dominate tech, and nobody else can compete. Do you know what happens if only Big Tech controls AI? It ain’t gonna be pretty.
Open-source is the best alternative to Big Tech. Pro-corporation regulation hurts open-source. Which hurts indie creators/studios, who will find themselves increasingly shackled to Big Tech’s expensive software. Do you know who develops & releases the LAION dataset? An open-source research org. https://laion.ai/about/ Independent non-profit research orgs & developers cannot afford harsh anti-competition regulatory rigmarole, or multi-million dollar lawsuits, or being deprived of training data, which is exactly what Big Tech wants. Free professional industry-standard software like Blender is open-source, copyleft GNU General Public License. Do you know how many professional 3D artists and businesses rely on it? (Now its development fund is backed by industry behemoths.) The consequences of this kind of specious “protest” masquerading as social justice will ultimately screw over these “hurt artists” even harder. It’s shooting yourself in the foot. Monkey’s paw. Be very careful what you wish for.
TANSTAAFL: Visual tradespeople have no qualms using tons of imagery/content floating freely around the web to develop their own for-profit output—nobody’s sweating over source provenance or licensing whenever they whip out Google Images or Pinterest. Nobody decries how everything is reposted/reblogged to death when it benefits them. Do you know how Google, a for-profit company, and its massively profitable search product works? “Engines like the ones built by OpenAI ingest giant data sets, which they use to train software that can make recommendations or even generate code, art, or text. In many cases, the engines are scouring the web for these data sets, the same way Google’s search crawlers do, so they can learn what’s on a webpage and catalog it for search queries.”[1] The Authors Guild v. Google case found that Google’s wholesale scanning of millions of books to create its Google Book Search tool served a transformative purpose that qualified as fair use. Do you still use Google products? No man is an island. Free online access at your fingertips to a vast trove of humanity’s information cuts both ways. I’d like to see anyone completely forgo these technologies & services in the name of “ethics”. (Also. Remember that other hyped new tech that’s all about provenance, where some foot-shooting “artists” rejected it and self-excluded/self-harmed, while savvy others like Burnt Toast seized the opportunity and cashed in.)
There is no such thing as “real art.” The definition of “art” is far from a universal, permanent concept; it has always been challenged (Duchamp, Warhol, Kruger, Banksy, et al) and will continue to be. It is not defined by the degree of manual labour involved. A literal banana duct-taped to a wall can be art. (The guy who ate it claimed “performance art”). Nobody in Van Gogh’s lifetime considered his work to be “real art” (whatever that means). He died penniless, destitute, believing himself to be an artistic failure. He wasn’t the first nor last. If a soi-disant “artist” makes “art” and nobody values it enough to buy/commission it, is it even art? If Martin Shkreli buys Wu Tang Clan’s “Once Upon a Time in Shaolin” for USD$2 million, is it more art than their other albums? Value can be ascribed or lost at a moment’s notice, by pretty arbitrary vicissitudes. Today’s trash is tomorrow’s treasure—and vice versa. Whose opinion matters, and when? The artist’s? The patron’s? The public’s? In the present? Or in hindsight?
As for “artists” in the sense of salaried/freelance gig economy trade workers (illustrators, animators, concept artists, game devs, et al), they’ll have to adapt to the new tech and tools like everyone else, to remain competitive. Some are happy that AI tools have improved their workflow. Some were struggling to get paid for heavily commoditised, internationally arbitraged-to-pennies work long before AI, in dehumanising digital sweatshop conditions (dime-a-dozen hands-for-hire who struggled at marketing & distributing their own brand & content). AI is merely a tool. Methods and tools come and go, inefficient ones die off, niches get eroded. Over-specialisation is an evolutionary risk. The existence of AI tooling does not preclude anyone from succeeding as visual creators or Christie’s-league art-world artists, either. Beeple uses AI. The market is information about what other humans want and need, how much it’s worth, and who else is supplying the demand. AI will get “priced in.” To adapt and evolve is to live. There are much greater crises we're facing as a species.
I label my image-making posts as #my art, relative to #my fic, mainly for navigation purposes within my blog. Denoting a subset of my pieces with #ai is already generous on this hellsite entropy cesspool. Anti-AI rhetoric will probably drive some people to conceal the fact that they use AI. I like to be transparent, but not everyone does. Also, if you can’t tell, does it matter? https://youtu.be/1mR9hdy6Qgw
I can illustrate, up to a point, but honing the skill of hand-crafted image-making isn’t worth my remaining time alive. The effort-to-output ratio is too high. Ain’t nobody got time fo dat. I want to tell stories and bring my visions to life, and so do many others. It’s a creative enabler. The democratisation of image-making means that many more people, like the disabled, or those who didn’t have the means or opportunity to invest heavily in traditional skills, can now manifest their visions and unleash their imaginations. Visual media becomes a language more people can wield, and that is a good thing.
Where I’m personally concerned, AI tools don’t replace anything except some of my own manual labour. I am incredibly unlikely to commission a visual piece from another creator—most fanart styles or representations of the pair just don’t resonate with me that much. (I did once try to buy C/Fe merch from an artist, but it was no longer available.) I don’t currently hawk my own visual wares for monetary profit (tips are nice though). No scenario exists which involves me + AI tools somehow stealing some poor artist’s lunch by creating my tchotchkes. No overlap regarding commercial interests. No zero-sum situation. Even if there was, and I was competing in the same market, my work would first need to qualify as a copy. My blog and content is for personal purposes and doesn’t financially deprive anyone. I’ll keep creating with any tool I find useful.
AI art allegedly not being “real art” (which means nothing) because it's perceived as zero-effort? Not always the case. It may not be a deterministic process but some creators like myself still add a ton of human guidance and input—my own personal taste, judgement, labour. Most of my generation pieces require many steps of in-painting, manual hand tweaking, feeding it back as img2img, in a back and forth duet. If you've actually used any of these tools yourself with a specific vision in mind, you’ll know that it never gives you exactly what you want—not on the first try, nor even the hundredth… unless you're happy with something random. (Which some people are. To each their own.) That element of chance, of not having full control, just makes it a different beast. To achieve desired results with AI, you need to learn, research, experiment, iterate, combine, refine—like any other creative process.
If you upload content to the web (aka “release out in the wild”), then you must, by practical necessity, assume it’s already “stolen” in the sense that whatever happens to it afterwards is no longer under your control. Again, do you know how Google, a for-profit company, and its massively profitable search product works? Plagiarism has always been possible. Mass data scraping or AI hardly changed this fact. Counterfeits or bootlegs didn’t arise with the web.
As per blog title and Asimov's last major interview about AI, I’m optimistic about AI overall. The ride may be bumpy for some now, but general progress often comes with short-term fallout. This FUD about R’s feels like The Caves of Steel, like Lije at the beginning [insert his closing rant about humans not having to fear robots]. Computers are good at some things, we’re good at others. They free us up from incidental tedium, so we can do the things we actually want to do. Like shipping these characters and telling stories and making pretty pictures for personal consumption and pleasure, in my case. Most individuals aren’t that unique/important until combined into a statistical aggregate of humanity, and the tools trained on all of humanity’s data will empower us to go even further as a species.
You know what really hurts people? The pandemic which nobody cares about; which has a significant, harmful impact on my body/life and millions of others’. That cost me a permanent expensive lifestyle shift and innumerable sacrifices, that led me to walk away from my source of income and pack up my existence to move halfway across the planet. If you are not zero-coviding—the probability of which is practically nil—I’m gonna have to discount your views on “hurt”, ethics, or what we owe to each other.
We are a non-profit organization with members from all over the world, aiming to make large-scale machine learning models, datasets and related code available to the general public. OUR BELIEFS: We believe that machine learning research and its applications have the potential to have huge positive impacts on our world and therefore should be democratized. PRINCIPAL GOALS: Releasing open datasets, code and machine learning models. We want to teach the basics of large-scale ML research and data management. By making models, datasets and code reusable without the need to train from scratch all the time, we want to promote an efficient use of energy and computing resources to face the challenges of climate change. FUNDING: Funded by donations and public research grants, our aim is to open all cornerstone results from such an important field as large-scale machine learning to all interested communities.
2 notes
Text
DeviantArt, DreamUp, and why everyone needs to leave the site.
Here are the facts:
DeviantArt rolled out DreamUp, their AI “art” gen service (it is a paid service, by the way, not a free toy), as opt-in by default. This meant that every artist’s work on DeviantArt (hereafter referred to as DA) was automatically enrolled in the data scraping for their in-house dataset.
They held a Twitter Space where they ignored all criticism and featured two NFT/AI "artists" for "discussion". When people commented that setting the decision to opt-in by default allowed them to scrape inactive users, and, even worse, users who had passed away, they essentially shrugged their shoulders. When people complained on Twitter, their various PR reps ignored valid complaints and said that everyone who complained was being misleading or wrong.
The option to opt out by default was not implemented until users complained. Prior to this, every user had to go through and manually change every. single. deviation. to opt out from DA’s scraper. Furthermore, to opt out of your name being used as a text prompt, you have to email a form and have DA manually approve the removal and blacklisting of your username from DreamUp’s prompt system.
Opt-out by default was eventually implemented several hours later. However, opting out does nothing, because DreamUp is built on Stable Diffusion, which was trained on the LAION dataset, which has already scraped DA. Opting out is meaningless.
Therefore, DA can't walk this back without completely removing DreamUp from the site. While DreamUp exists on DA, it will be using DA members’ art, because the dataset it uses contains them. Let alone the fact that it’s using art from people across the entire internet who were never asked and never consented in the first place.
It is almost assured that DA will not remove DreamUp because it is a for-profit service that will make them money. They will figure out a way to mollify enough people they think are important enough to listen to, and leave it in place.
Leave DA. It's too far gone. There's plenty of other outlets to post art. Mourn the loss of a community and let it go.
6 notes
Text
No, Doctors Aren't To Blame for AI Using Your Medical Record Photos, Here's How and Why
People care TOO MUCH about the IP laws of dumb cartoons like Mickey Mouse and too little about the real abuse of data going on, acting like the only conversation worth having is "is copyright good or bad", but as a med student I have a vested interest in talking about data collection ethics.
You're welcome to address my bias or in less kind words say I'm in the pocket of "big pharma" or that I'm a "copyright maximalist", but I'm doing this purely to explain and show how the LAION team is dishonest, manipulative, and malicious, and hides behind the good graces of "open-source" and "non-profit".
To start: how does LAION get hold of your photos? To put it shortly: Common Crawl, a service that has indexed and scraped web pages from 2014 until 2021. But unlike LAION, Common Crawl has a TOS, and states on their website that they do not allow users to violate IP or circumvent copy-protection using their data.
So how does this affect medical photos? "They shouldn't be on the internet in the first place!" you might say. This is where things get a bit muddy, because in the most popular case being spread, the user had signed a consent form allowing the use of their photos in medical journals.
Please make note of the first line, "to be used for my care, medical presentations and/or articles".
So how did it get online?
Despite what a lot of people jump to assume, this most likely was not the fault of the doctor – and unfortunately he's not alive anymore to even clarify what went wrong, RIP. There are many journal articles online about the user's condition – one which is particularly rare, and as such requires study and photos for identification – many with attached images that have been scraped too. This user is most certainly not alone.
For background, PubMed is the largest site for sharing medical journals and case studies on the internet. It contains a wealth of information and is crucial to the safety and sanity of every overworked med student writing their 30th pharmacology paper. It also has attached images in some cases. These images are necessary to the journal, case study, research paper, whathaveyou that's being uploaded. They're also not just uploaded willy nilly. There are consent forms like the one seen above, procedures, and patient rights that are to be respected and honored. What I want to emphasize,
Being on a journal ≠ free to use.
Being online ≠ free to use.
If you do not have the patient's signed consent, you are not allowed to use the image at all, even in a transformative manner. It is not yours to use.
So how does LAION respond to this? Lying like shitty assholes, of course. Vice has done a very insightful article on just what LAION has stored within it and showing many harrowing stories of nonconsensual pornography, graphic executions and CSEM on the database, found here.
A very interesting part of the article that I'd like to draw attention to, though, is the LAION team's claims about the copyright applied to these images. Their claim that all the data falls under Creative Commons (lying about the copyright of every image) directly contradicts their simultaneous claim divorcing the team from any copyright responsibility.
Their further claim is even dumber: that photos of SSNs and addresses directly linked to your name are not personal data as long as they don't contain your face. It also is not GDPR-compliant, as they elevate their own definition of what private data is over what your actual private data is.
But whatever, team LAION is on this!! They got it, they'll definitely be pruning their database to remove all of the offending– aaaand they literally just locked the Discord help channel, deleted the entire exchange, and accused Vice of writing a "hit piece", as reported on by Motherboard here. Classy, LAION!
They don't even remove images from their database unless you explicitly email them, and even then they first condescendingly tell you to download the entire database, find the image and the link tied to it, then remove the image from the internet yourself– somehow. Classy, LAION.
Of course, the medical system isn't completely free from blame here, from the new motherboard article;
Zack Marshall, an associate professor of Community Rehabilitation, and Disability Studies at the University of Calgary, has been researching the spread of patient medical photographs in Google Image Search Results and found that in over 70 percent of case reports, at least one image from the report will be found on Google Images. Most patients do not know that they even end up in medical journals, and most clinicians do not know that their patient photographs could be found online.
“[Clinicians] wouldn't even know to warn their patients. Most of the consent forms say nothing about this. There are some consent forms that have been developed that will at least mention if you were photographed and published in a peer-reviewed journal online, [the image is] out of everybody's hands,” Marshall said. After hearing about the person whose patient photograph was found in the LAION-5B dataset, Marshall said he is trying to figure out what patients and clinicians can do to protect their images, especially as their images are now being used to train AI without their consent.
It's a case of new risks that people have not been aware of, and of course people can't keep up with the evolving web of tech-bro exploiters champing at the bit to index every image of CSEM and every ISIS beheading they can get their hands on. If artists are still trying to get informed on the topic, expecting doctors who share this information for the benefit of other doctors to hide it behind expensive paywalls and elaborate gates just to cut off the tech bros is asinine. But regardless: if you don't want to end up in a journal, you are now aware of the possibility and can withhold consent in the future.
LAION themselves, however, can't be held accountable because, despite facilitating abuse, they're not direct participants in the training of the data; they just compiled it and served it up on a gold platter. But on the bright side, the Federal Trade Commission (FTC) has begun practicing algorithmic destruction: demanding that companies and organizations destroy the algorithms or AI models they have built using personal information and data collected in bad faith or illegally. FTC Commissioner Rebecca Slaughter published a Yale Journal of Law and Technology article alongside other FTC lawyers that highlighted algorithmic destruction as an important tool targeting the ability of algorithms “to amplify injustice while simultaneously making injustice less detectable” through training their systems on datasets that already contain bias, including racist and sexist imagery.
“The premise is simple: when companies collect data illegally, they should not be able to profit from either the data or any algorithm developed using it,” they wrote in the article. “This innovative enforcement approach should send a clear message to companies engaging in illicit data collection in order to train AI models: Not worth it.”
This is likely going to be the fate of any algorithms that take advantage of the illegal data collected in LAION-5B.
So what do we take from all of this?
Please read consent forms thoroughly.
Algorithmic destruction should befall LAION-5B, and I wouldn't mind if every member of the team were arrested.
That's it, that's the whole thing 😊
Addendum, since I know people will ask: am I against AI art? Well, I'm against unethical bullshit, which the LAION team does plenty of, and which most if not all AI algorithms are being trained on. While I hate pointing at the elephant in the room, capitalism is to blame for the absolutely abhorrent implementation of AI, so under these conditions it can't exist without being inherently unethical.
2 notes
·
View notes
Text
Stable Diffusion model! We just threw caution to the wind and made a mini model, and we're blowing another 7 hours and 5 USD on a larger 200-image set – but these are examples from our FIRST set!
You can test AND download the model here: https://huggingface.co/Duskfallcrew/duskfalltest
Mind you, we probably need to acknowledge that in training an SD model we just stole our own art.
PFFT.
That just proves something: did you KNOW you can TRAIN an AI model on YOUR OWN DATA? Yeah – then you have full control!
These weren't FULLY ours; the Huggingface one was also a mix of LAION data, because it's a very FAST version of it with minimal images!
Here's the WEB UI running on cpu: https://huggingface.co/spaces/Duskfallcrew/duskfalltest
Have fun!! Btw I TOTALLY "STOLE" my own art for this one ( because @duskfallcrewart and @duskfallcrewsys run this tumblr as part of @earthndusk )
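For anyone who wants to try the same thing, here's a rough sketch of the dataset-prep step, assuming the Hugging Face ImageFolder convention (a folder of your own images plus a `metadata.jsonl` file mapping each `file_name` to a caption, which most Stable Diffusion fine-tuning scripts built on `datasets` can load). The filenames and captions here are made up:

```python
# Sketch: pair each of your own images with a caption in the metadata.jsonl
# layout that Hugging Face's ImageFolder loader expects (one JSON object per
# line, with "file_name" and "text" keys). Filenames/captions are placeholders.
import json
from pathlib import Path

def write_training_metadata(image_dir, captions):
    """captions: {filename: caption}. Creates image_dir if needed and writes
    image_dir/metadata.jsonl; returns the path to the written file."""
    folder = Path(image_dir)
    folder.mkdir(parents=True, exist_ok=True)
    out = folder / "metadata.jsonl"
    with out.open("w", encoding="utf-8") as f:
        for file_name, caption in sorted(captions.items()):
            f.write(json.dumps({"file_name": file_name, "text": caption}) + "\n")
    return out

# Hypothetical example: a handful of your own pieces.
captions = {
    "duskfall_001.png": "digital painting of a winged figure, duskfallcrew style",
    "duskfall_002.png": "ink sketch of a city skyline, duskfallcrew style",
}
path = write_training_metadata("my_art", captions)
```

From there, a fine-tuning script pointed at `my_art/` can pick up the image–caption pairs automatically; since every image and caption is yours, the whole training set stays under your control.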
#stable diffusion#data training#ai art generator#ai art training#ai art theft#aiartcommunity#midjourney#ai artwork#ai art#artificial intelligence#hugging face#laion#data security#ai art generation#digital art#duskfallcrew#endo safe#we're weird we know it#actually plural
1 note
·
View note
Text
LAION, Peugeot's digital assistant that makes it easy to get to know the new Peugeot 2008 SUV
Peugeot has taken an innovative step in the Argentine automotive industry by launching LAION, an AI-powered digital assistant designed to offer a fast and complete user experience. This virtual assistant lets users access detailed information about the new Peugeot 2008, turning the search into a 24/7, agile and precise interaction. LAION, designed with…
0 notes
Text
I'm going to set something on fire
1 note
·
View note
Photo
“childcare”
thumbpress.com/wp-content/uploads/2015/08/baby-carrier-grocery-basket.jpg
files.namnak.com/users/zt/aup/201809/997_pics/%D8%AA%D8%B5%D8%A7%D9%88%DB%8C%D8%B1-%D8%AC%D8%A7%D9%84%D8%A8-%D9%88-%D8%AE%D9%86%D8%AF%D9%87-%D8%AF%D8%A7%D8%B1.jpg
i1.wp.com/www.teamjimmyjoe.com/wp-content/uploads/2014/08/gambling-baby-worst-parents.jpg?resize=550%2C603
0 notes
Text
Just a heads up to any non-AI artists who use Redbubble (among many others): they are allowing your work to be used by the LAION-5B dataset for AI training. haveibeentrained.com is free to use.
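For the curious: reverse-image tools like this match perceptual fingerprints rather than exact files, so a repost survives resizing and recompression. Below is a toy sketch of one such fingerprint, a difference hash (dHash), run on a bare grayscale pixel grid so it needs no imaging library – this is just the general idea, not how haveibeentrained.com is actually implemented (a real pipeline would first decode and downscale the image, e.g. with Pillow, to a 9×8 grid):

```python
# Toy perceptual "difference hash" (dHash): each bit records whether a pixel
# is brighter than its right-hand neighbour, so the fingerprint survives
# uniform brightness shifts and mild recompression.

def dhash(pixels):
    """pixels: 8 rows of 9 grayscale values (0-255).
    Returns a 64-bit integer fingerprint."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits; a small distance means 'probably the same image'."""
    return bin(a ^ b).count("1")

# A tiny fake 9x8 "image" and a uniformly brightened copy of it.
original = [[(x * 30 + y * 7) % 256 for x in range(9)] for y in range(8)]
recompressed = [[min(255, v + 2) for v in row] for row in original]

print(hamming(dhash(original), dhash(recompressed)))  # prints 0: same fingerprint
```

Because only the *relative* brightness of neighbouring pixels is hashed, the brightened copy fingerprints identically, while a genuinely different image lands many bits away.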
9 notes
·
View notes
Text
Will you be my new mommy? 🥺👉👈
3 notes
·
View notes