# What are the 4 types of AI technology?
studies-notes · 2 years ago
Text
What are the Advantages and Disadvantages of AI Technology?
Tumblr media
Table of Contents
Introduction
What is AI technology?
Types of AI technology
Advantages of AI technology
Increased efficiency
Cost savings
Improved decision-making
Personalization
Disadvantages of AI technology
Lack of creativity
Job displacement
Dependence on technology
Data privacy concerns
Ethical considerations in AI technology
Conclusion
FAQs
What is the future of AI technology?
Can AI technology replace human intelligence?
How can AI technology be used in healthcare?
Is AI technology safe?
How can businesses adopt AI technology?
What is AI technology?
Artificial intelligence technology refers to the development of machines that can perform human-like tasks such as learning, reasoning, decision-making, and problem-solving. AI technology involves creating algorithms and programming computers to simulate human intelligence, behavior, and decision-making.
AI technology has revolutionized various industries such as healthcare, finance, education, and transportation. It has been used to develop virtual assistants, chatbots, recommendation systems, autonomous vehicles, and smart homes, among others. Read more…
newstrendline · 10 months ago
Ai technology
What Is AI Technology? AI can automate repetitive tasks and free up human capital for more important work. It also reduces errors, which can be costly to businesses. Examples of AI applications include virtual assistants like Siri and Alexa, social media recommendation algorithms and cross-selling tools in e-commerce. AI can also help companies better understand their data by identifying trends…
Tumblr media
thefirstknife · 1 month ago
Got through all of the secrets for Vesper's Host and got all of the additional lore messages. I'll transcribe them all here, since I don't know when they'll start getting uploaded; collecting them requires doing some extra puzzles and at least 3-4 clears. I'll put them all under the read more and label them by number.
Before I do that, just to make it clear: there's not much concrete lore. A lot about the dungeon still remains a mystery, most likely a tease for something in the future, so don't expect a massive reveal; the messages do add a little bit of flavour and history about the station. There might be something more, but it's unknown: there's still one more secret triumph left. The messages are actually dialogues between the station AI and the Spider. Transcripts under the read more:
First message:
Vesper Central: I suppose I have you to thank for bringing me out of standby, visitor.
The Spider: I sent the Guardian out to save your station. So, what denomination does your thanks come in? Glimmer, herealways, information...?
Vesper Central: Anomaly's powered down. That means I've already given you your survival. But... the message that went through wiped itself before my cache process could save a copy. And it's not the initial ping through the Anomaly I'm worried about. It's the response.
A message when you activate the second secret:
Vesper Central: Exterior scans rebooting... Is that a chunk of the Morning Star in my station's hull? With luck, you were on board at the time, Dr. Bray.
Second message:
Vesper Central: I'm guessing I've been in standby for a long time. Is Dr. Clovis Bray alive?
The Spider: On my oath, I vow there's no mortal Human named Bray left alive.
Vesper Central: I swore I'd outlive him. That I'd break the chains he laid on me.
The Spider: Please, trust me for anything you need. The Guardian's a useful hand on the scene, but Spider's got the goods.
Vesper Central: Vesper Station was Dr. Bray's lab, meant to house the experiments that might... interact poorly with other BrayTech work. Isolated and quarantined. From the debris field, I would guess the Morning Star taking a dive cracked that quarantine wide open.
A message when you activate the third secret:
Vesper Central: Sector seventeen powered down. Rerouting energy to core processing. Integrating archives.
Third message:
The Spider: Loading images of the station. That's not Eliksni engineering. [scoffs] A Dreg past their first molt has better cable management.
Vesper Central: Dr. Bray intended to integrate his technology into a Vex Mind. He hypothesized the fusion would give him an interface he understood. A control panel on a programmable Vex mind. If the programming jumped species once... I need time to run through the data sets you powered back up. Reassembling corrupted archives takes a great deal of processing.
Text when you go back to the Spider the first time:
Tumblr media
A message when you activate the fourth secret:
Vesper Central: Helios sector long-term research archives powered up. Activating search.
Fourth message:
Vesper Central: Dr. Bray's command keys have to be in here somewhere. Expanding research parameters...
The Spider: My agents are turning up some interesting morsels of data on their own. Why not give them access to your search function and collaborate?
Vesper Central: Nobody is getting into my core programming.
The Spider: Oh! Perish the thought! An innocent offer, my dear. Technology is a matter of faith to my people. And I'm the faithful sort.
Fifth message:
Vesper Central: Dr. Bray, I could kill you myself. This is why our work focused on the unbodied Mind. Dr. Bray thought there were types of Vex unseen on Europa. Powerful Vex he could learn from. The plan was that the Mind would build him a controlled window for observation. Tidy. Tight. Safe. He thought he could control a Vex mind so perfectly it would do everything he wanted.
The Spider: Like an AI of his own creation. Like you.
Vesper Central: Turns out you can't control everything forever.
Sixth message:
Vesper Central: There's a block keeping me from the inner partitions. I barely have authority to see the partitions exist. In standby, I couldn't have done more than run automated threat assessments... with flawed data. No way to know how many injuries and deaths I could have prevented, with core access. Enough. A dead man won't keep me from protecting what's mine.
Text when you return to the Spider at the end of the quest:
Tumblr media
The state of the dungeon triumphs once you complete the messages: the completed "Buried Secrets" triumph covers the six messages. This one is still left; it's unclear how to complete it yet, and whether it gives any lore or is just a gameplay thing. One more secret triumph also remains (possibly something to do with a quest for the exotic catalyst; unclear if there will be lore):
Tumblr media
The Spider is being his absolutely horrendous self and trying to somehow acquire the station and its remains (and its AI) for himself, all the while lying and scheming. The usual. The AI is incredibly upset with Clovis (shocker); there's the following line just before starting the second encounter:
Tumblr media
She also details what he was doing on the station: apparently attempting to control a Vex mind and use it as some sort of "observation deck" to study the Vex and uncover their secrets. Possibly something more? There are really no Vex on the station, besides dead, empty frames in boxes. There are also 2 Vex cubes in containers in the transition section, one of which was shown broken, as if the cube, presumably, escaped. It's entirely unclear how the Vex play into the story of the station besides this.
The portal (?) doesn't have many similarities with Vex portals, nor are the Vex there to defend it or interact with it in any way. The architecture is ... somewhat similar, but not fully. The portal (?) was built by the "Puppeteer" aka "Atraks" who is actually some sort of an Eliksni Hive mind. "Atraks" got onto the station and essentially haunted it before picking off scavenging Eliksni one by one and integrating them into herself. She then built the "anomaly" and sent a message into it. The message was not recorded, as per the station AI, and the destination of the message was labelled "incomprehensible." The orange energy we see coming from it is apparently Arc, but with a wrong colour. Unclear why.
I don't think the Vex have anything to do with the portal (?), at least not directly. "Atraks" may have built something related to the Vex or using the available Vex tech at the station, but it does not seem to be directed by the Vex and they're not there and there's no sign of them otherwise. The anomaly was also built recently, it's not been there since the Golden Age or something. Whatever it is, "Atraks" seemed to have been somehow compelled and was seen standing in front of it at the end. Some people think she was "worshipping it." It's possible but it's also possible she was just sending that message. Where and to whom? Nobody knows yet.
Weird shenanigans are afoot. Really interested to see if there's more lore in the station once people figure out how to do these puzzles and uncover them, and also when (if) this will become relevant. It has a really big "future content" feel to it.
Also I need Vesper to meet Failsafe RIGHT NOW and then they should be in yuri together.
yukipri · 6 months ago
Some thoughts on Cara
So some of you may have heard about Cara, the new platform that a lot of artists are trying out. It's been around for a while, but there's been a recent huge surge of new users, myself among them. Thought I'd type up a lil thing on my initial thoughts.
First, what is Cara?
From their About Cara page:
Cara is a social media and portfolio platform for artists. With the widespread use of generative AI, we decided to build a place that filters out generative AI images so that people who want to find authentic creatives and artwork can do so easily. Many platforms currently accept AI art when it’s not ethical, while others have promised “no AI forever” policies without consideration for the scenario where adoption of such technologies may happen at the workplace in the coming years. The future of creative industries requires nuanced understanding and support to help artists and companies connect and work together. We want to bridge the gap and build a platform that we would enjoy using as creatives ourselves.
Our stance on AI:
・We do not agree with generative AI tools in their current unethical form, and we won’t host AI-generated portfolios unless the rampant ethical and data privacy issues around datasets are resolved via regulation.
・In the event that legislation is passed to clearly protect artists, we believe that AI-generated content should always be clearly labeled, because the public should always be able to search for human-made art and media easily.
Should note that Cara is independently funded, and is made by a core group of artists and engineers and is even collaborating with the Glaze project. It's very much a platform by artists, for artists!
Should also mention that in being a platform for artists, it's more a gallery first, with social media functionalities on the side. The info below will hopefully explain how that works.
Next, my actual initial thoughts using it, and things that set it apart from other platforms I've used:
Tumblr media
1) When you post, you can choose to check the portfolio option, or to NOT check it. This is fantastic because it means I can have just my art organized in my gallery, but I can still post random stuff like photos of my cats and it won't clutter things. You can also just ramble/text post and it won't affect the gallery view!
2) You can adjust your crop preview for your images. Such a simple thing, yet so darn nice.
3) When you check that "Add to portfolio," you get a bunch of additional optional fields: Title, Field/Medium, Project Type, Category Tags, and Software Used. It's nice that you can put all this info into organized fields that don't take up text space.
4) Speaking of text, 5000 character limit is niiiiice. If you want to talk, you can.
5) Two separate feeds, a "For You" algorithmic one, and "Following." The "Following" actually appears to be full chronological timeline of just folks you follow (like Tumblr). Amazing.
6) Now usually, "For You" being set to home/default kinda pisses me off because generally I like curating my own experience, but not here, for this handy reason: if you tap the gear symbol, you can ADJUST your algorithm feed!
Tumblr media
So you can choose what you see still!!! AMAZING. And, again, you still have your Following timeline too.
7) To repeat the stuff at the top of this post, its creation and intent as a place by artists, for artists. Hopefully you can also see from the points above that it's been designed with artists in mind.
8) No GenAI images!!!! There's a pop up that says it's not allowed, and apparently there's some sort of detector thing too. Not sure how reliable the latter is, but so far, it's just been a breath of fresh air, being able to scroll and see human art, art, and art!
To be clear, Cara's not perfect and is currently pretty laggy, and you can get errors while posting (so far, I've had more success on desktop than the mobile app), but that's understandable, given the small team. They'll need time to scale. For me though, it's a fair tradeoff for a platform that actually cares about artists.
Currently it also doesn't allow NSFW, not sure if that'll change given app store rules.
As mentioned above, they're independently funded, which means the team is currently paying for Cara itself. They have a kofi set up for folks who want to chip in, but it's optional. Here's the link to the tweet from one of the founders:
Tumblr media
And a reminder that even though the platform itself isn't selling our data to GenAI, it can still be scraped by third parties. Protect your work with Glaze and Nightshade!
Anyway, I'm still figuring stuff out and have only been on Cara a few days, but I feel hopeful, and I think they're off to a good start.
I hope this post has been informative!
Lastly, here's my own Cara if you want to come say hi! Not sure at all if I'll be active on there, but if you're an artist like me who is keeping an eye out for hopefully nice communities, check it out!
mysteryshoptls · 2 years ago
SSR Ortho Shroud - Cerberus Gear Voice Lines
Just a small reminder that Cerberus Ortho does not have a vignette.
Tumblr media
When Summoned: Leave it to S.T.Y.X to handle magical calamities. We'll show that we have the world's greatest technological abilities!
Summon Line: Nii-san... Everyone, just you wait. I'll definitely come and save you all.
Groooovy!!: This is something that only I can do. That's why I have to go. This is the strength of my... "our" own determination!
Home: AI Data, migration completed.
Swap Looks: Resuming mission.
Home Idle 1: You can hide behind me. Don't worry, this body has high durability. No matter what happens, I'll protect you.
Home Idle 2: As long as I have this gear, I should be able to break through any strong magical barriers. I'll cut a path through.
Home Idle 3: The drive system and energy consumption are the pinnacle of efficiency. Mom's engineering skills are definitely top notch...
Home Idle - Login: Retrofitting complete. Commencing specialized anti-magical calamity functions with the 【Cerberus Gear】 attachment.
Home Idle - Groovy: We will definitely eliminate all disasters that emanate from blotting. That is both the mission and purpose of S.T.Y.X.
Home Tap 1: KB-RS01 gets its sustenance from electricity. But, for some reason, it also kind of likes pastries. Not that it can eat it, though.
Home Tap 2: The engineering division of S.T.Y.X is packed full of elite engineers. They all seem to be super interested in all the cutting-edge technology in my gear, too.
Home Tap 3: They both might be the pessimistic, downer types, but they're really reliable in a pinch. My dad and brother are really similar.
Home Tap 4: What's my brother like when we're back home? He's pretty much the same. He'll use chat functions to talk to the employees, but doesn't really speak to them out loud, for the most part.
Home Tap 5: Careful! It's dangerous to reach out so suddenly like that. KB-RS02 might switch on his battle mode.
Home Tap - Groovy: The strongest person in our family has got to be my mom. She's usually nice, but... When she gets angry, she's really scary and no one's a match for her.
Duo: [ORTHO]: Let's clear this lickety-split, Nii-san! [IDIA]: Leave it to your big bro, Ortho!
Tumblr media
Requested by @rotattooill.
brotrustmeicanwrite · 4 months ago
I fucking hate AI but heavens would it be useful if it wasn't such an unethical shit show
First, just to be clear, I'm talking about actually using AI as a tool to support your writing process, not to generate soulless texts made from stolen data instead of writing yourself.
Back when ChatGPT first became available it was still pretty useless so I had a lot of time to learn about how it's made, how it works and the ethics of it before ever touching the technology. I decided pretty quickly to never use it to generate text (or images) for actual writing and art but I still wanted to experiment with what else it could do (because I'm a nosy bitch that needs to know and poke everything).
And HEAVENS was it a blessing for writing with adhd
The last time I wrote more than 200 words in a day (outside of school work obviously) was 7th grade. I wrote over 8k just in notes the day Google's "Gemini" (formerly "Bard") became available to the public.
In order to not jeopardize my existing work I decided to make a completely new story with Bard's help that wasn't linked in any way to anything I had made before. So I started with a prompt along the lines of "I need help writing a story". At first, it immediately started generating a completely random story about a green tiger but after some trial and error, I got it to instead start asking questions.
What do you want the theme of your story to be?
What genre do you want to write in?
What time period do you want your story to take place in?
Is there magic?
Are there other sentient creatures besides humans?
And so on and so forth, until the questions became extremely specific after covering all the bases. I could tell that all I was doing was essentially talking to an amalgamation of every "how to write" blog and website you've ever seen and telling it which part I wanted to work on next. But it still felt great, because the AI didn't actually contribute anything besides a few suggestions of common tropes and themes here and some synonyms and related words there; I was doing all the work.
And that's the point.
Nothing in that exchange was something I couldn't easily do on my own. But what happened was that I had turned what is usually a chaotic mess of a railway network of thoughts into a clear and most importantly recorded conversation. I can sit down and answer all those questions on my own but what usually happens when I do, is that every thought I have branches out into 4-7 new ones which I then attempt to record all at once (which obviously doesn't work, yay adhd) only to end up lost in thought with maybe 20 lines of notes in total after 6 hours at the table. Alternatively, either because I get bored or just because, I get distracted by something or my own thoughts about a different unrelated topic and end up with even less.
Working within the boundaries of a conversation forces you to focus on one specific question at a time and answer it to progress. And the engagement from the back and forth is just enough entertainment to not get bored. The six hours I mentioned before is the time I spent chatting with what is essentially a glorified chatbot that day, way less time than what I spent on any other project, and yet I have more notes and a clearer image of the story than I do about any of my real work. I have a recorded train of thought.
In theory, this would also work with a real human in a real conversation but realistically only very few people have someone who would be willing to do that; I certainly don't have a someone like that. Not to mention that someone doesn't always have time. Besides that, a real human conversation involves two minds with their own ideas, both of which are trying to contribute their own thoughts and opinions equally. The type of AI chat that I experimented with, on the other hand, is essentially just the conversation you have with yourself when answering those questions, only with part of it outsourced to a computer and no one else butting into your train of thought.
On that note, I also tried to get it to critique my writing but besides fixing grammatical errors all that thing did was sing praises as if I was God. That's where you'll 100000% need humans.
tl;dr writing with AI as an assistant has basically the same effect as body doubling, but it’s an unethical shit show so I’m not doing it again. Also, I forgot to mention I repeated the experiment for accuracy with different amounts of spoons, and it makes me extra bitter that it was very consistent
izicodes · 10 months ago
How I Approach Getting Stuck In My Code
*this happened last night on a friend group project
Tumblr media
> Happy time playing music as I code.
> The website is slowly coming together, this is great!
> Oof, I get stuck on how to make this button open a path in the website.
> Okay, let’s ask ChatGPT.
> Oooo a solution! Tries the solution.
> Wrong. Doesn’t work.
> Okay let’s try Bard AI! (I have a belief that Bard is ChatGPT but the smarter cousin type).
> Still doesn’t work!
> But now I have an idea what might be wrong.
> Okay, let’s try YouTube!
> I don’t know how to exactly turn my problem into a search query…
> Okay let’s try Google
> Finds an article that goes into detail about how to solve the problem! Great!
> Updates the code
> Worse than before - more errors
> Panics - “WHY AREN’T YOU WORKING?!”
> Finds out the method is outdated in the newer version of the technology. Great.
> Deletes all of the new code
> Tries YouTube again
> Watches a video that is close to the solution, on 2x speed
> Updates code
> Still doesn’t work
> Almost punches laptop but realise this is a work laptop so I can’t
> Punches pillow instead
> “I need a distraction��� I need a break”
> Goes downstairs
> Watches a random anime show + eating chocolate
> After 4 episodes, brain comes up with solution
> Runs upstairs to try it out
> Code works.
> “I’m so smart, ugh, I’m too much 😩”
personwithatophat · 4 months ago
Founder Theory
do you like generation loss? do you see a neat person with vague background implications? well! good news! That Person Right There Could Be The Founder!
Tumblr media
in honor of this id like to introduce a game TOP FIVE GEN LOSS CHARACTERS WHO ARE SECRETLY THE FOUNDER
#5 SecuriTV
This Is A Collection Of Sentience "Maybe Flesh" And Technology!! also known as literally everything that we relate to the founder! the SecuriTV could be the first attempts at making the founder a digital person and this is what remains!
#4 Charlie
The Slime demon! The Patient! The Villain! but could any of these titles have more significant meanings?? charlie is the only other character outside of our hero who has been theorized in the past to be related more internally! charlies character as the slime demon hails from the 1800s, possibly during The Lostfield incident??? and has also been rumored to be a test tube baby! what a wacky, goopy, sludgy guy that could just be the root of narrative evil, always right behind the hero and out of suspicion!
#3 Squiggles
The loveable showfall mascot! Squiggles being the founder in and of itself is generally ridiculous, however, its VERY noteworthy that our favorite faceless figurehead of darkness has taken a personal interest in this project! and could potentially be using squiggles as a type of surrogate for communications, or otherwise has a piece of themself as squiggles if youre a supporter of the AI theories. that being said heres some things that squiggles has said live on show: "I love rats" "Nightmares of Bart The Vorer intensify" "nyyyyyyaaaaaaaaaaaarrrrrrrrwwww *crash"
#2 Zero
What a Mysterious Lady we have! Recently Zero has been getting more attention with the bonus addition of the name "Miss Roads" and many like to speculate the connection of Gen 0, Cron 0, and the founder! Could Chronicle Zero, our local A/V shop retail stooge, secretly be using those dusty 20% off tape recorders to start her media manipulation empire?? do we have THE founder's diary posted live on twitter?? Or is Miss Roads a Truman-style character with a placed therapist? the answer might surprise us!
#1 Ranboo
Our Very Own Hero! Ranboo being the founder is truly a crowd favorite of these times! spanning through the casual viewers, to the general theorists. The concept would be a case of memory manipulation and a full-circle beginning. Although the Ranboo-as-Founder theory took a significant drop in popularity after matpat made a theory not even picking up on the main plot of the show, there are things that make it still have significance!! This spans from a love of the memory-lost protagonist being the main villain, to a comforting circle-of-life deal they have going on where the founder ends up killed by the horrors of his own creation in a cruel circle of fate! This will likely never cease. ANYWAY, here's a fun challenge! who do YOU think is the founder? or who do you think is even a little bit suspicious? they can be the founder too!
/lh -Tophat
fantasyfantasygames · 6 months ago
Replicant Memories
Replicant Memories, Orusha Grangette, 2021
Cyberpunk games typically focus on violence, power, corporate greed, and cynicism. Cyberpunk literature, on the other hand, deals with themes of alienation, virtual vs. analogue reality, the effects of technology on society, and the meaning and worth (or lack thereof) of humanity as a concept. Replicant Memories isn't so much a cyberpunk game as a cyberpunk literature game.
Your character in Replicant Memories (RM, or a monospaced lowercase rm as the game always writes it) is defined by their actual memories. It's a Fate Core variant, using the memories as Aspects or as justification for Stunts and Skill ratings. Since characters in Fate Core have 10 Skills each and a few more Stunts, that's a fair number of memories to write, but so far this is relatively normal stuff.
However, this being cyberpunk, those memories might be implanted. During game play you will discover your "real" memories, which themselves might just be a cover over deeper truths. Any effectiveness of a previous high-ranked Skill was just luck; a Stunt was just a moment of adrenaline or a flash of insight. You can switch one memory out for another every time you take Stress (for memories connected to level 1-2 skills) or Consequences (1-4 and stunts). You need a brief (brief) flashback each time.
The usual Fate mechanics take up about half the book; the other half is setting. Specifically, it's organizations, each with a two-page spread, each with descriptions of how your characters can hook into them and what you might have done for them. Sometimes it's a tight-knit neighborhood described in loving detail; sometimes it's a franchised nation done in broad strokes. The goal is pick-and-choose, but they're arranged alphabetically rather than by type, so the game's not doing itself any favors there.
Were it not for the timing of the book's release I would accuse the art of being AI, but all the work was done just before that really became feasible. It's eerie. It's creepy and disconcerting. It has all the flash and grit you normally expect in near-future city scenes, but it's off, and not in the way that The Actual was off. I have trouble believing that it's intentional, and also trouble believing it's unintentional.
There's a single supplement, entitled "rm -rf", which provides a trio of scenarios for the GM to use. One is high-class, one is low-class, and one is "runner"-level, with the potential for any of them to switch between levels as your group discovers more about their "real" selves. They're all intended to be 1-3 session games, and the plot flies by fairly quickly. Given the game's lack of character advancement and the potential for everyone to switch their skills to the same thing at the same time, that's probably for the best.
Orusha Grangette (not real name) maintains Replicant Memories as a wiki. They keep changing providers, so I have no idea where it is now, and Google searches mostly get you older sites. At least you can toss them into the Wayback Machine.
colekinnie-4life · 2 months ago
Ranking Ninjago Seasons Pt 1 (F and D Tier)
Part 2…
Yeah, I’m doing this.
Why? I’m bored as shit and also I just reread a post that was ranking Ninjago seasons, so why not?
And the title of “Worst Ninjago Season in my Opinion” goes to….
Rebooted!
Tumblr media
I hate this season.
It’s. So. Fucking. Boring.
Barely anything happens at all in its 8-episode runtime, which makes this the second shortest Ninjago season from the Wilfilm era!
Ok first, positives.
Pixane is cute and Zane’s sacrifice was done excellently. His character is also made a bit better, which I appreciate.
Uh- we get 3 minutes of Lava interactions.
Welp time for my negatives!
This season takes forever to go anywhere, like I said. Then once they deal with the Overlord in the Digiverse, erm actually that did jack shit and he’s still alive by the way! Pythor just ate him!
(Now that I’m typing this out I sound crazy)
Lloyd and Garmadon’s story is done decently, sure, but Lloyd is a complete egotistical moron throughout the whole thing.
“Oh I’m the Golden Ninja I can do anything! I’m God himself!”
Sure he had the powers of God and he “destroyed” the Overlord in season 2, which kinda justifies his behavior, but he’s so annoying and unbearable to watch.
Sensei Garmadon is good. Nothing else to say about that plot line.
Time for the worse plot line, aka the stuff the main ninja are doing.
Honestly I barely remember what they were trying to do because of how forgettable this season is. I’m pretty sure that they’re just screwing around trying to get the Overlord out of the computers and keeping Lloyd away from him, but that’s all I remember besides…
THE LOVE TRIANGLE.
OH MY JOHN SPINJITZU.
I DESPISE THIS PLOTLINE WITH EVERY FIBER OF MY BEING.
THIS. THIS IS THE MAIN REASON I HATE REBOOTED.
It ruins Nya’s character, makes Jay unlikeable for this season, season 4 and the beginning of season 6, and Cole- I mean Cole’s not all that innocent but he’s just kinda standing in the middle of all this. His character doesn’t go through the meatgrinder of character assassination.
Nya’s whole character in the past 2 seasons was being an independent girlboss who did whatever the fuck she wanted.
But the second that a computer AI tells her “you should like Cole instead of Jay!”, she decides that “I’m gonna listen to a computer despite the fact I’ve been dating Jay for 2 seasons now!”
She and Cole have no chemistry whatsoever. In season 1, Cole said he wished he had a sister like her, clearly only seeing her as a friend.
Luckily in canon he was just confused by all the attention and didn’t actually like her. But still, the whole “don’t tell Jay” thing makes me mad, and Cole rarely makes me mad.
Jay, oh my gosh.
He’s not as bad as Nya but he’s still thrown into the meatgrinder of character assassination.
The second that Pixal says that Nya's perfect match is Cole, he immediately pins the blame on him and tackles him to the ground for no reason. Cole was only fighting back in self-defense.
As Tom Critic put it:
“WHY IS THIS COLE’S FAULT?!!”
He and Cole’s beef is so dumb. And Nya ain’t making it any better.
“This macho stuff is making you both look like fools!”
EVEN THOUGH IT'S YOUR FAULT THEY'RE FIGHTING IN THE FIRST PLACE BECAUSE YOU LISTENED TO A FUCKING COMPUTER YOU BITCH-
*ahem ahem*
What else is there to hate about this plot line?
How about the fact that it goes absolutely nowhere? There was no point for adding this. Absolutely no reason besides the fact that Jay, Cole, and Nya don’t get anything to do this season.
EVEN THOUGH KAI DOESN'T GET ANYTHING EITHER!
He’s shown to hate futuristic technology and that’s literally the only thing he gets this season! So what’s the point of the love triangle to get them to do something?
Oh yeah! There is no point!
Back to Kai. All he gets to do is flirt with a random girl at a gas station and get tied to a rocket.
Oh yeah he got tied to a rocket ship. That doesn't sound traumatizing whatsoever/sarc.
Waking up after being knocked out for who knows how long and finding out you're tied up underneath a rocket ship that could set off at any time, burning you to a crisp in one big fiery explosion.
That totally wouldn't scar you for life.
Ok back to a positive, aka Zane's sacrifice.
Some people say the saddest part of Zane's death was the fact that Nya gave Cole a hug afterwards and Jay looks sad, but that's not the saddest part to me.
The saddest part was right before he died, when everyone is crying out at him to stop.
More specifically, when Kai cries out:
"LET GO OF HIM ZANE! WHAT IS HE DOING?!"
The way that Vincent Tong voiced that scene literally brings tears to my eyes and shivers down my spine every time. He poured so much energy into that oh my gosh.
Alright, no more complaining about this season before this becomes a 27 page essay. Next!
Crystallized…
Tumblr media
WAIT WRONG POSTER-
Tumblr media
Ok, there we go!
Yeah, Crystallized sucks in my opinion. Bite me.
I’m not like Crusty783 where I hate this season with every fiber of my being, but I still don’t like it that much.
First part was decent, ig. The Skybound callback with the whole fugitive storyline was a nice idea, and the ninja actually committing a crime instead of being framed was cool.
Nya could’ve come back a little later than 6 episodes after turning into water, but oh well. At least she’ll add something to the plot later, right?
…right?
(We’ll get back to that…)
Fugidove is the bane of my existence. He’s. So. Annoying. Just. Shut. The. Fuck. Up.
I mean he’s supposed to be annoying and I get that but still. He’s a worse character than Dareth and I hate Dareth with all of my being so that shows how bad of a character Fugi-Dick is.
Dareth is fine this season ig. He’s actually not an idiot during the court case and doesn’t try to bust the ninja out of jail using a cake.
What else is there in part 1?
Mr. Kabuki Mask was a neat idea. If he wasn’t Harumi then I’d actually like him. But unfortunately, it was Harumi.
I didn’t like her before she died, what makes you think I’ll like her now, ninjago writers?
Alright, since barely anything happens in part 1, time for part 2.
Oh my gosh this is the main reason I don’t like Crystallized.
This season is the embodiment of the meatgrinder of character assassination I talked about. Everyone gets butchered.
Ok not everyone. Kai, Jay, and Cole mostly stay the same. To be fair they don’t really do anything though-
Lloyd is no longer Lloyd. He is La-Lloyd, as Knightly called him after s8 (I don’t see what his problem is with Lloyd in season 8). He has the “Harumi you don’t have to do this”-itis, meanwhile when it comes to Garmadon who is TRYING TO BE A BETTER PERSON AND BECAME A GOOD GUY, he shuts him down because “you’re an Oni, and Oni are incapable of caring”.
(Even though Mystaké existed and she fucking died to save your life but ok La-Lloyd, sure.)
Like he thinks that Harumi, the girl that gave him PTSD for 4 seasons in a row before this, has a better chance of becoming a better person than his father, someone who is clearly becoming a better person and is actually trying to be nice to you now? Puh-lease.
Oh yeah, Harumi!
She got shoved into the character assassination blender before being put into the meat grinder. Her character is 100% ruined because of the Overlord saying she has feelings for La-Lloyd.
*inhale*
NO SHE FUCKING DOESN’T!
THAT WAS THE PART OF HER CHARACTER THAT MADE HER SO GOOD IN SEASONS 8-9! SHE DIDN'T GIVE TWO SHITS ABOUT LLOYD SHE WAS JUST USING HIM GAHHHHHHHH-
Alright, who else is there?
Oh yeah, Nya! What’s she up to after being water for a full year?
Absolutely nothing. Like she does jackshit besides be a Samurai X themed taxi.
Yes I stole that line from Crusty783.
She and Kai, her BROTHER, have one single interaction this whole season after she was gone for a year. The show only focuses on Jaya angst because we need more of that shoved down our throats! Seasons 6, 9, and 14 (Seabound, don’t @ me) weren’t enough, we need MORE!
Zane.
Return of the Ice Emperor makes me want to shove my head into a meatgrinder.
I fucking loathe this episode.
With a title like that, you’d think they’d give him some trauma/PTSD when it comes to the fact that he, y’know, COMMITTED GENOCIDE FOR 60 YEARS??
But no. Zane gets taken apart for the 10485792742874387492749273947482773747374387447th time and he acts like the Ice Emperor because this show treats trauma like a joke most of the time.
The whole “emotionless arc” from part 1 was decent ig. He was pretty funny when he was talking like a toaster, and the moment when he starts screeching when he turns on his emotions for 2 seconds was funny.
The Benefit of Grief was a good episode. Sally was a bit annoying but eh whatever. Dareth didn’t get on my nerves for once and Hot Dog McFiddlesticks or whatever his name was was really entertaining.
Ok, one more point to make about Crystallized, that’s the villain.
Say hello to the Crystal King!
Oh? What’s that? He’s not a new villain? What do you mean-
It’s the fucking Overlord. Again. This is the 3rd time he’s reappeared, just let him die already!
I hate the Overlord. He’s such a nothing burger of a villain. His entire motivation is just “I’m evil and dark so I make everyone else evil and dark.”
I get he’s the embodiment of evil but give him some personality, oh my gosh.
Make it so he likes seeing people in pain. Have him laugh whenever someone got turned into a crystal zombie or something, idk!
That’s how he is in my Golden Hope au. Sure that’s not canon, obviously, but I wanted to give him *some* personality besides “me evil.”
Ok, is there anything about this season that I like besides Benefit of Grief?
Actually, yes!
SAFE HAVEN IS THE BEST EPISODE OF THE ENTIRE SEASON.
LAVA NATION UNITE!!
The whole thing is just Kai being a bisexual disaster. I love it so much.
It gives us this screenshot, which gives enough context tbh:
Tumblr media
It’s just “GAH I LOVE THEM SO MUCH” for 11 minutes, and that’s basically it.
The episode gives Skylor some character, hallelujah. Last couple of seasons she’s been in did not do her any justice.
Pythor was funny. And he gave us the scene of:
“Ninja~ where are you?”
“We’re over here!”
So yeah, Safe Haven is the best episode ever. Ok, not the best episode of all time, but it’s definitely a personal favorite of mine, solely for all the Lava brainrot I get to indulge in.
(Fun fact: Cole was originally going to say that Kai was handsome when he was acting all loopy. I hate homophobes.)
What else is there in this season-?
Ronin’s return was nice. Glad he went back to his previous personality rather than being a full time criminal like he was in The Island. I get that he’s like “I do whatever the fuck I want” most of the time, but he doesn’t give off the vibes of using the criminals he’s hired to catch just to swindle a bunch of islanders.
Alright, that’s it for Crystallized.
Next!
Secrets of Forbidden Spinjitzu: Fire Chapter
Tumblr media
Yeah I’m considering this a separate season for this tier list. Bite me.
This is the main reason why (mostly) everyone hates season 11, you cannot convince me otherwise.
Nothing. Happens.
The main reasons for why I hate certain Ninjago seasons is because they’re boring/assassinate characters.
The latter doesn’t happen, thank the FSM, but the first point still stands.
Assphera is a dumb villain. Only redeeming quality about her is the joke where she screams “REVENGE” every two seconds in the most ear-splitting voice ever.
Props to her voice actress, RIP.
Anyways, back to the actual season.
Like I said, nothing happens in the beginning. Just 4 or 5 episodes of the ninja dicking around looking for something to do, then they free Assphera from her pyramid and then finally, they start doing shit.
Ok but this line from Lloyd made me laugh:
“Who opens a possibly cursed tomb without checking it out first?”
…you-?
The paperboy episode was- pointless. But it was still dumb fun ig 🤷🏼‍♀️
Snaketastrophe was a really funny episode. I actually like that one.
But that doesn’t mean I like the first couple of episodes.
Too many burp and fart jokes in the first episode, the second episode is just boring, same with all the other episodes about being stuck on a rock with Barney the Dinosaur Beetle.
Kai’s powerless arc was just a repeat of Lloyd’s powerless arc from season 9, but done worse imo.
(I mean Firemaker is one of the best ninjago episodes ever created besides Safe Haven but that’s not important rn)
Overall this half of the season is boring as shit and I just want to get to the ice chapter when ever I watch it.
Alright, that’s it for today!
Here’s the tier list so far:
Tumblr media
Next part will be C and B tiers.
See you in 10 years when I make that/j
10 notes · View notes
achillestickler · 1 year ago
Text
Tumblr media
So after a lot of back and forth with myself and a poll of my members I decided to play around with AI, both as a tool for my traditional drawings and to create actual finished pieces. Every day in December I will be posting one of my AI creations on my Patreon for my members as a special bonus. Here's what I wrote about it there:
Well, the poll was overwhelmingly for showing what I've been creating with AI tools, so I've decided that for the month of December my Patreon fans will be getting daily updates of what I've been up to with this new tool. Consider them a Christmas gift. These will not replace my typical 4 traditional drawings per month, this is just a bonus.
I want to make it clear I intend to use AI in the future to help me with my traditional drawing. If there's a challenging pose I'm having trouble with or a piece of equipment I need at a specific angle, it's a great way to get reference material. But I was curious about what I could get it to create for a finished piece, using the very limited parameters at hand. I also didn't want to create "hot muscled guy in room with robotic arms" over and over again, which you see so much of. I wanted to create images with all ages and sizes of men. I also am going to avoid using celebrity likenesses and am only going to make generic people and not specific ones. I'll do my best to make interesting and unique scenarios. 
I know there is a faction of people that won't be okay with this. Honestly it feels to me like a "if you can't beat 'em, join 'em" kind of moment. At several times in my artistic career I was left behind by missing the boat on new technology (web design completely passed me by). Part of me feels that to keep current even in my real-world day job I need to know what AI is capable of. So consider all of this an experiment and you're along for the fun ride. 
In making some of my first AI pieces I came to the realization that the classic "circus strongman" is probably my ultimate type: bald, muscled, hairy, mustached. They push ALL of my buttons. I also love the old trope of "strong man tickled while trying to hold up something heavy". And let's face it: evil clowns make the perfect nemesis for a strongman. 
I made a lot of these but this was one of the best in terms of expression and composition. On a technical note, I will tell you that it is extraordinarily difficult to get character A to actually touch character B. The word "tickling" has been blocked as a prompt, so you have to describe a different way to get fingers to actually come in contact with a body. It only works about 1 out of 10 tries. 
My Patreon is HERE
32 notes · View notes
melyzard · 10 months ago
Note
I was wondering if you have resources on how to explain (in good faith) to someone why AI created images are bad. I'm struggling how to explain to someone who uses AI to create fandom images, b/c I feel I can't justify my own use of photoshop to create manips also for fandom purposes, & they've implied to me they're hoping to use AI to take a photoshopped manip I made to create their own "version". I know one of the issues is stealing original artwork to make imitations fast and easy.
Hey anon. There are a lot of reasons that AI as it is used right now can be a huge problem - but the easiest ones to lean on are:
1) that it finds, reinforces, and in some cases even enforces biases and stereotypes that can cause actual harm to real people. (For example: a black character in fandom will consistently be depicted by AI as lighter and lighter skinned until they become white, or a character described as Jewish will...well, in most generators, gain some 'villain' characteristics, and so on. Consider someone putting a canonically transgender character through an AI bias, or a woman who is not perhaps well loved by fandom....)
2) it creates misinformation and passes it off as real (it can make blatant lies seem credible, because people believe what they see; in fandom terms, this can mean people trying to 'prove' that the creator stole their content from elsewhere, or someone creating and selling their own 'version' of content that is functionally indistinguishable from canon)
3) it's theft. The algorithm didn't come up with anything that it "makes," it just steals some real person's work and then mangles it a bit before regurgitating it with usually no credit to the original, actual creator. (In fandom terms: you have just done the equivalent of cropping out someone else's watermark and calling yourself the original artist. After all, the AI tool you used got that content from somewhere; it did not draw you a picture, it copy pasted a picture)
4) In some places, selling or distributing AI art is or may soon be illegal - and if it's not illegal, there are plenty of artists launching class action lawsuits against those who write the algorithm, and those who use it. Turns out artists don't like having their art stolen, mangled, and passed off as someone else's. Go figure.
Here are some articles that might help lay out more clear examples and arguments, from people more knowledgeable than me (I tried to imbed the links with anti-paywall and anti-tracker add ons, but if tumblr ate my formatting, just type "12ft.io/" in front of the url, or type the article name into your search engine and run it through your own ad-blocking, anti tracking set up):
These fake images reveal how AI amplifies our worst stereotypes [Source: Washington Post, Nov 2023]
Humans Are Biased; AI is even worse (Here's Why That Matters) [Source: Bloomburg, Dec 2023]
Why Artists Hate AI Art [Source: Tech Republic, Nov 2023]
Why Illustrators Are Furious About AI 'Art' [Source: The Guardian, Jan 2023]
Artists Are Losing The War Against AI [Source: The Atlantic, Oct 2023]
This tool lets you see for yourself how biased an AI can be [Source: MIT Technology Review, March 2023]
Midjourney's Class-Action lawsuit and what it could mean for future AI Image Generators [Source: Fortune Magazine, Jan 2024]
What the latest US Court rulings mean for AI Generated Copyright Status [Source: The Art Newspaper, Sep 2023]
AI-Generated Content and Copyright Law [Source: Built-in Magazine, Aug 2023 - take note that this is already outdated, it was just the most comprehensive recent article I could find quickly]
AI is making mincemeat out of art (not to mention intellectual property) [Source: The LA Times, Jan 2024]
Midjourney Allegedly Scraped Magic: The Gathering art for algorithm [Source: Kotaku, Jan 2024]
Leaked: the names of more than 16,000 non-consenting artists allegedly used to train Midjourney’s AI [Source: The Art Newspaper, Jan 2024]
21 notes · View notes
never-was-has-been · 5 months ago
Text
Facebook "New Rules"
A few years ago, I decided to sign up for an encrypted email service (Protonmail) to protect my emails from internet traffic Snoops. Later, I also signed up for a virtual private network service (ProtonVPN), which keeps those same internet traffic Snoops from tracking my physical location and trying to mess with my browser's security.

Since then several social media apps (for phones & laptops) have become "concerned" with security on 'their end' (so they say). But I've determined that they're LESS concerned with MY security than they are their own… So much so that they have surreptitious software installed to monitor what ALL of us do on social media.

Lately, because I use a VPN service that can by my choice connect me to ANY server in the world, locking out 99% of snoops of all kinds, Facebook in its low-minded wisdom has become distrustful of MY logon attempts and of MY server addresses, even though ALL my credentials are in order.

You may ask, "Rick, are you having any mental problems?" No, not anything other than what I've always had..LOL! I just want to point out that the MORE security a user like any of us wants for their internet uses, the MORE the social media corps (and maybe the Govt) seem to "interrogate" our (my) intentions to be secure on our (my) end.

Attached are 6 images that show Facebook's logon procedure because of MY security apps that are keeping me safer & more secure. And BTW..there is a false statement within the first one.. "We sent a notification to your Linux.." Linux is my operating system, the same type of technology that Windows is, but it's Not Windows and it's Not email. So it's impossible to "send a notification" to my operating system!! Trust me on this.. Anyway, here are the images of the messages I get from Facebook Every Time I simply want to login to my Facebook account:
1. This is bullshit. Linux is an operating system, not an email address or a social media app:
Tumblr media
2. It's not Google Authenticator..LOL!
Tumblr media
3. This should've been the 2nd question after my Password was entered! I don't need an education on authenticators or how to use them. I'm old but not stupid or naive, FFS!
Tumblr media
4. If I'm logged in, I've already trusted this device. How moronic can you assholes be?
Tumblr media
5. Okay Okay. I've entered all my info that's protected and now you're asking me if it's REALLY ME? Are you phucking serious right now??
Tumblr media
6. I approved a login from Zurich, Switzerland and you're letting me know that I approved it, but your final statement is "Finish"? The people in Switzerland are not "Finish", they're Swiss!! Did your AI algorithm get world geography confused with nationalities?
Tumblr media
Facebook has a legion of phucking idiots.. Or..they have phucking idiots programming their "AI" algorithm. .... .... .... Welcome to the Machine.... tic-toc-tic-toc-tic-toc-tic-toc-tic-toc.
9 notes · View notes
purinrinrin · 10 months ago
Text
A guide to AI art for artists
When AI art first hit the web I was amazed by the technology. Then later, when it came out that these image generators were trained on images by living artists scraped from the public web with no consent or compensation, my opinion of it was soured. It took a lot of effort for me to push past that distaste in order to properly research the technology so that I could help myself and others to understand it. This is why I’m compiling all the information I’ve found here. I hope you find it helpful.
Terminology
To start off, there are a lot of different terms out there when it comes to AI nowadays so I’m going to try to define some of them so you can understand what people mean when they use them (and so you can tell when they’re full of shit).
AI
Artificial Intelligence. AI is a big buzzword right now in the tech sector and at times feels like it’s being thrown at anything and everything just to attract investors. Cambridge Dictionary defines it as:
the use or study of computer systems or machines that have some of the qualities that the human brain has, such as the ability to interpret and produce language in a way that seems human, recognize or create images, solve problems, and learn from data supplied to them
It’s kind of what it says on the tin - an artificial, that is, human-created system that has abilities similar to those of intelligent life forms. (I’d argue comparing the abilities of AI solely to those of humans does a disservice to the intelligence of many non-human animals but I digress.)
At the moment when you read things online or in the news, AI is likely being used to refer to machine learning which is a type of AI.
Algorithm
The word algorithm describes a process based on a set of instructions or rules used to find a solution to a problem. The term is used in maths as well as computing. For example, the process used to convert a temperature from Fahrenheit to Celsius is a kind of algorithm:
subtract 32
divide by 9
multiply by 5
These instructions must be performed in this specific order.
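Written as code, the same conversion is just those three steps applied in order (a quick Python sketch):

```python
def fahrenheit_to_celsius(temp_f: float) -> float:
    """Convert Fahrenheit to Celsius: subtract 32, divide by 9, multiply by 5."""
    return (temp_f - 32) / 9 * 5

print(fahrenheit_to_celsius(212))  # boiling point of water -> 100.0
print(fahrenheit_to_celsius(32))   # freezing point -> 0.0
```

Swap the steps around (say, divide before subtracting) and you get a different, wrong answer, which is exactly the point about the order mattering.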
Nowadays on social media “the algorithm” is used to refer to a specific kind of algorithm - a recommendation algorithm - which is a kind of machine learning algorithm.
Machine Learning
Machine learning is a term used to refer to the use of a computer algorithm to perform statistical analysis of data (often large amounts of it) to produce outputs, whether these are images, text or other kinds of data. Social media recommendation algorithms collect data on the kind of content a user has looked at or interacted with before and use this to predict what other content they might like.
I’ll explain it in very simple terms with an analogy. Consider a maths problem where you have to work out the next number in a sequence. If you have the sequence 2, 4, 6, 8, 10 you can predict that the next number would be 12 based on the preceding numbers each having a difference of 2. When you analyse the data (the sequence of numbers) you can identify a pattern (add 2 each time) then apply that pattern to work out the next number (add 2 to 10 to get 12).
In practice, the kind of analysis machine learning algorithms do is much more complex (social media posts aren’t numbers and don’t have simple relationships with each other like adding or subtracting) but the principle is the same. Work out the pattern in the data and you can then extrapolate from it.
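A toy version of that "find the pattern, then extrapolate" idea can be sketched in a few lines of Python (real recommendation models are vastly more complex, but the principle is the same):

```python
def predict_next(sequence):
    """Infer a constant step from the data, then extrapolate one value ahead."""
    steps = [b - a for a, b in zip(sequence, sequence[1:])]
    step = sum(steps) / len(steps)  # the "learned" pattern: the average difference
    return sequence[-1] + step

print(predict_next([2, 4, 6, 8, 10]))  # -> 12.0
```

The "training" here is computing the average step; the "prediction" is applying it to new input.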
The big downside to these algorithms is that since the rules behind their decision making are not explicitly programmed and are instead based on data it can be difficult to figure out why they produce the outputs they do, making them a kind of “black box” system. When machine learning algorithms are given more and more data, it becomes exponentially harder for humans to reason about their outputs.
Training Data and Models
Another term you’ll come across is “training” or talking about how an AI is “trained”. Training data refers to the data that is used to train the model. The process of training is the statistical analysis and pattern recognition I talked about above. It enables the algorithm to transform a dataset (collections of images and text) into a statistical model that works like a computer program to take inputs (a text prompt) to produce outputs (images).
As a general rule, the bigger the dataset used for training, the more accurate the outputs of the resulting trained model. Once a model is created, the format of the data is completely different to that of the training data. The model is also many orders of magnitude smaller than the original training data.
Text-to-image model AKA AI image generator, generative AI
Text-to-image model is the technical term for these AI image generators:
DALL-E (OpenAI)
Midjourney
Adobe Firefly
Google Imagen
Stable Diffusion (Stability AI)
The technology uses a type of machine learning called deep learning (I won’t go into this here. If you’d like to read more; good luck. It’s very technical). The term text-to-image is simple enough. Given a text prompt, the model will generate an image to match the description.
Stable Diffusion
Stable diffusion is different from other image generators in that its source code is publicly available. Anyone with the right skills and hardware can run it. I don’t think I’d be incorrect in saying that this is the main reason why AI art has become so widespread online since stable diffusion’s release in 2022. For better or worse, open-sourcing this code has democratised AI image generation.
I won’t go deep into how stable diffusion actually works because I don’t really understand it myself but I will talk about the process of acquiring training data and training the models it uses to generate images.
What data is used?
I already talked about training data but what actually is it? And where does it come from? In order to answer this I’m going to take you down several rabbit holes.
LAION-5B
Taking stable diffusion as an example, it uses models trained on various datasets made available by German non-profit research group LAION (Large-scale Artificial Intelligence Open Network). The biggest of these datasets is LAION-5B which is refined down to several smaller datasets (~2 billion images) based on language. They describe LAION-5B as “a dataset of 5,85 billion CLIP-filtered image-text pairs”. Okay. What does “CLIP-filtered image-text pairs” mean?
CLIP
OpenAI’s CLIP (Contrastive Language-Image Pre-training) is (you guessed it) another machine learning algorithm that has been trained to label images with the correct text. Given an image of a dog, it should label that image with the word “dog”. It does a little bit more than this as well. When an image is analysed with CLIP it can output a file called an embedding. This embedding contains a list of words or phrases and a confidence score from 0 to 1 based on how confident CLIP is that the text describes the image. An image of a park that happens to show a dog in the background would have a lower confidence score for the text “dog” than a close-up image of a dog. When you get to the section on prompting, it will become clear how this ends up working in image generators.
As I mentioned before, the more images you have in the training data, the better the model will work. The researchers at OpenAI make that clear in their paper on CLIP. They explain how previous research into computer vision didn’t produce very accurate results due to the small datasets used for training, and the datasets were so small because of the huge amount of manual labour involved in curating and labelling them. (The previous dataset they compare CLIP’s performance to, ImageNet, contains a mere 14 million images.) Their solution was to use data from the internet instead. It already exists, there’s a huge amount of it and it’s already labelled thanks to image alt text. The only thing they’d need to do is download it.
It’s not stated in the research paper exactly which dataset CLIP was trained on. All it says is that “CLIP learns from text–image pairs that are already publicly available on the internet.” Though according to LAION, CLIP was trained on an unreleased version of LAION-400M, an earlier text-image pair dataset.
Common Crawl
The data in LAION-5B itself comes from another large dataset made available by the non-profit Common Crawl which “contains raw web page data, metadata extracts, and text extracts” from the publicly accessible web. In order to pull out just the images, LAION scanned through the HTML (the code that makes up each web page) in the Common Crawl dataset to find the bits of the code that represent images (<img> tags) and pulled out the URL (the address where the image is hosted online and therefore downloadable from) and any associated alternative text, or “alt text”.
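That extraction step can be sketched with the Python standard library's HTML parser (an illustration of the idea, not LAION's actual code):

```python
from html.parser import HTMLParser

class ImageScraper(HTMLParser):
    """Collect (image URL, alt text) pairs from <img> tags in a page."""
    def __init__(self):
        super().__init__()
        self.pairs = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # Only keep images that actually have alt text to pair with
            if attrs.get("src") and attrs.get("alt"):
                self.pairs.append((attrs["src"], attrs["alt"]))

page = (
    '<p>My art</p>'
    '<img src="https://example.com/cat.png" alt="a painting of a cat">'
    '<img src="decoration.png">'  # no alt text: skipped
)
scraper = ImageScraper()
scraper.feed(page)
print(scraper.pairs)  # [('https://example.com/cat.png', 'a painting of a cat')]
```

Note that the second image, having no alt text, contributes nothing, which previews the alt text problem below.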
A tangent on the importance of image alt text
Alt text is often misused on the web. Its intended purpose is to describe images for visually impaired users or if the image is unable to be loaded. Let’s look at an example.
Tumblr media
This image could have the alt text: “A still image from the film Back to the Future III depicting Doc Brown and Marty McFly. They are stood outside facing each other on a very bright sunny day. Doc Brown is trying to reassure a sceptical looking Marty by patting him on the shoulder. Marty is wearing a garish patterned fringed jacket, a red scarf and a white stetson hat. The DeLorean time machine can be seen behind them.” Good. This is descriptive.
But it could also have the alt text: “Christopher Lloyd and Michael J Fox in Back to the Future III” Okay but not very specific.
Or even: “Back to the Future III: A fantastic review by John Smith. Check out my blog!” Bad. This doesn’t describe the image. This text would be better used as a title for a web page.
Alt text can be extremely variable in detail and quality, or not exist at all, which I’m sure will already be apparent to anyone who regularly uses a screen reader to browse the web. This casts some doubt on the accuracy of CLIP analysis and the labelling of images in LAION datasets.
CLIP-filtered image-text pairs
So now, coming back to LAION-5B, we know that “CLIP-filtered image-text pairs” means two things. The images were analysed with CLIP and the embeddings created from this analysis were included in the dataset. Then these embeddings were used to check that the image caption matched what CLIP identified the image as. If there was no match, the image was dropped from the dataset.
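In code, that filtering step amounts to discarding any pair whose similarity score falls below a cutoff. A minimal Python sketch (the 0.28 threshold matches the one LAION reports for its English subset, but treat the exact number as a detail):

```python
def clip_filter(pairs, threshold=0.28):
    """Keep only image-text pairs whose caption matches the image well enough.

    Each pair is (image_url, alt_text, similarity), where similarity is the
    CLIP score between the image and its caption.
    """
    return [(url, text) for url, text, sim in pairs if sim >= threshold]

candidates = [
    ("cat.png", "a painting of a cat", 0.41),  # caption matches: kept
    ("cat.png", "Check out my blog!", 0.05),   # caption is spam: dropped
]
print(clip_filter(candidates))  # [('cat.png', 'a painting of a cat')]
```

This is how the badly captioned images from the alt text examples above get weeded out of the dataset.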
But LAION datasets themselves do not contain any images
So how does this work? LAION states on their website:
LAION datasets are simply indexes to the internet, i.e. lists of URLs to the original images together with the ALT texts found linked to those images. While we downloaded and calculated CLIP embeddings of the pictures to compute similarity scores between pictures and texts, we subsequently discarded all the photos. Any researcher using the datasets must reconstruct the images data by downloading the subset they are interested in. For this purpose, we suggest the img2dataset tool.
In order to train a model for use with stable diffusion, you would need to go through a LAION dataset with img2dataset and download all the images. All 240 terabytes of them.
LAION have used this argument to wiggle out of a recent copyright lawsuit. The Batch reported in June 2023:
LAION may be insulated from claims of copyright violation because it doesn’t host its datasets directly. Instead it supplies web links to images rather than the images themselves. When a photographer who contributes to stock image libraries filed a cease-and-desist request that LAION delete his images from its datasets, LAION responded that it has nothing to delete. Its lawyers sent the photographer an invoice for €979 for filing an unjustified copyright claim.
Deduplication
In a dataset it’s usually not desirable to have duplicate entries of the same data, but how do you ensure this when the data you’re processing is as huge as the entire internet? Well… LAION admits you kinda don’t.
There is a certain degree of duplication because we used URL+text as deduplication criteria. The same image with the same caption may sit at different URLs, causing duplicates. The same image with other captions is not, however, considered duplicated.
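Sketched in Python, that deduplication rule looks like this (an illustration of the stated criterion, not LAION's actual code):

```python
def deduplicate(pairs):
    """Drop repeats of the same (url, alt text) pair - LAION's stated criterion."""
    seen = set()
    unique = []
    for url, text in pairs:
        key = (url, text)
        if key not in seen:
            seen.add(key)
            unique.append((url, text))
    return unique

pairs = [
    ("site-a.com/dog.png", "a dog"),
    ("site-a.com/dog.png", "a dog"),     # same URL + caption: removed
    ("site-b.com/repost.png", "a dog"),  # same image reposted at a new URL survives
]
print(deduplicate(pairs))
```

Because the key is the URL plus the caption, the same image hosted at a different address sails straight through.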
Another reason why reposting art sucks
If you’ve been an artist online for a while you’ll know all about reposts and why so many artists hate them. From what I’ve seen in my time online, the number of times an artist’s work is reposted on different sites is proportional to their online reach or influence (social media followers, presence on multiple sites etc). The more well known an artist becomes, the more their art is shared and reposted without permission. It may also be reposted legitimately, say if an online news outlet ran a story on them and included examples of their art. Whether consensual or not, this all results in more copies of their art out there on the web and therefore, in the training data. As stated above, if the URL of the image is different (the same image reposted on a different website will have a different URL), to LAION it’s not considered duplicated.
Now it becomes clear how well known digital artists such as Sam Yang and Loish have their styles easily imitated with these models - their art is overrepresented in the training data.
How do I stop my art being used in training data?
Unfortunately for models that have already been trained on historic data from LAION/Common Crawl, there is no way to remove your art and no way to even find out if your art has been used in the training.
Unfortunately again, simply deleting your art from social media sites might not delete the actual image from their servers. It will still be accessible at the same URL as when you originally posted it. You can test this by making an image post on the social media site you want to test. When the image is posted, right click the image and select “open image in new tab”. This will show you the URL of the image in the address bar. Keep this tab open or otherwise keep a record of this URL. Then go back and delete the post. After the post is deleted, often you will still be able to view the image at the URL that you saved.
If you have your own website where you host your art you can delete your images, or update their URLs so that they are no longer accessible from the URLs that were previously in web crawl data.
HTTP Headers
On your own website you can also use the X-Robots-Tag HTTP header to prevent bots from crawling your website for training data. These values can be used:
X-Robots-Tag: noai
X-Robots-Tag: noimageai
X-Robots-Tag: noimageindex
The img2dataset tool is used to download images from datasets made available by LAION. Its README states that by default img2dataset will respect the above headers and skip downloading from websites that use them. Note, however, that this behaviour can be overridden, so if an unscrupulous actor wants to scrape your images without your consent, there is no technical reason they cannot.
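As an illustration of how a well-behaved crawler might honour these headers (a sketch of the idea; img2dataset's real implementation may differ, and the function name is made up):

```python
# Sketch: check X-Robots-Tag before downloading an image for training data.
# Header values match those listed above; everything else is illustrative.

OPT_OUT_VALUES = {"noai", "noimageai", "noimageindex"}

def may_download_for_training(headers):
    """Return False if the response headers opt out of AI/image indexing."""
    tag = headers.get("X-Robots-Tag", "")
    values = {v.strip().lower() for v in tag.split(",")}
    return not (values & OPT_OUT_VALUES)

print(may_download_for_training({"X-Robots-Tag": "noai, noimageindex"}))  # → False
print(may_download_for_training({"Content-Type": "image/jpeg"}))          # → True
```

The check is trivial to implement, which is exactly why it only protects you against crawlers that choose to run it.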
Glaze
If you can’t prevent your images from being crawled, you can prevent all new art that you post from being useful in future models that are trained from scratch by using Glaze. Glaze is a software tool that you can run your art through to protect it from being copied by image generators. It does this by “poisoning” the data in the image that is read by machine learning code while keeping the art looking the same to human eyes.
Watermarks
This defence is a bit of a long shot but worth a try. You may be able to get your art filtered out of training data by adding an obvious watermark. One column included in the LAION dataset is pwatermark which is the probability that the image contains a watermark, calculated by a CLIP model trained on a small subset of clean and watermarked images. Images were then filtered out of subsequent datasets using a threshold for pwatermark of 0.8, which compared to the threshold for NSFW (0.3) and non-matching captions (also 0.3) is pretty high. This means that only images with the most obvious watermarks will be filtered out.
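Based on the thresholds mentioned above, a filtering pass over dataset rows might look like the sketch below. The column names (pwatermark, punsafe, similarity) follow LAION's published metadata, but the exact comparison logic here is my assumption for illustration:

```python
# Hypothetical filtering pass using LAION-style metadata columns.
# Thresholds follow the text above: pwatermark 0.8, NSFW 0.3, caption match 0.3.
# Comparison directions are assumed for illustration.

def keep_image(row):
    return (
        row["pwatermark"] < 0.8       # drop only very obvious watermarks
        and row["punsafe"] < 0.3      # much stricter NSFW filter
        and row["similarity"] >= 0.3  # drop captions that don't match the image
    )

rows = [
    {"pwatermark": 0.95, "punsafe": 0.1, "similarity": 0.5},  # obvious watermark: dropped
    {"pwatermark": 0.70, "punsafe": 0.1, "similarity": 0.5},  # subtle watermark: kept!
    {"pwatermark": 0.10, "punsafe": 0.5, "similarity": 0.5},  # flagged NSFW: dropped
]

print([keep_image(r) for r in rows])  # → [False, True, False]
```

The gap between 0.8 and 0.3 is why a subtle watermark is unlikely to save your image, but a big obvious one might.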
Prompt engineering and how to spot AI art
We’ve covered how AI image generators are trained so now let’s take all that and look at how they work in practice.
Artifacts
You’ve probably gotten annoyed by JPEG compression artifacts or seen other artists whine about them but what is an artifact? A visual artifact is often something unwanted that appears in an image due to technologies used to create it. JPEG compression artifacts appear as solid colour squares or rectangles where there should be a smooth transition from one colour to another. They can also look like fuzziness around high contrast areas of an image.
I’d describe common mistakes in AI image generations as artifacts - they are an unwanted side effect of the technology used to produce the image. Some of these are obvious and pretty easy to spot:
extra or missing fingers or otherwise malformed hands
distorted facial features
asymmetry in clothing design, buttons or zips in odd places
hair turning into clothing and vice versa
nonsense background details or clothing patterning
disconnected horizon line, floor or walls. This often happens when opposite sides are separated by an object in the foreground
Some other artifacts are not strange-looking, but become obvious tells for AI if you have some experience with prompting.
Keyword bleeding
Often if a colour is used in the text prompt, that colour will end up being present throughout the image. If it depicts a character and a background, both elements will contain the colour.
The reason for this should be obvious now that we know how the training data works. This image from LAION demonstrates it nicely:
[Image: clip-retrieval search results for the query “blue cat”]
This screenshot shows the search page for clip-retrieval which is a search tool that utilises an image-text pair dataset created using CLIP. You will see the search term that was entered is “blue cat” but the images in the results contain not just cats that are blue, but also images of cats that are not blue but there is blue elsewhere in the image eg a blue background, a cat with blue eyes, or a cat wearing a blue hat.
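Under the hood, a tool like clip-retrieval ranks images by how similar their embeddings are to the text embedding, typically using cosine similarity. Here is a toy version with invented 3-dimensional vectors (real CLIP embeddings have hundreds of dimensions):

```python
import math

# Toy semantic search: rank images by cosine similarity to a text embedding.
# The 3-d vectors are invented for illustration only.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query = [0.9, 0.8, 0.1]  # pretend embedding of the text "blue cat"

images = {
    "blue cat":          [0.9, 0.9, 0.1],
    "grey cat, blue bg": [0.7, 0.9, 0.2],
    "red car":           [0.1, 0.0, 0.9],
}

ranked = sorted(images, key=lambda name: cosine(query, images[name]), reverse=True)
print(ranked[0])   # → blue cat
print(ranked[-1])  # → red car
```

Both cat images score highly here, which is exactly why a grey cat on a blue background shows up in a search for "blue cat".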
To go on a linguistics tangent for a second, part of the above effect could be due to English not having different adjective forms depending on the noun it’s referring to. For example in German when describing a noun the form of the adjective must match the gender of the noun it’s describing. In German, blue is blau, cat is Katze. “Blue cat” would be “blaue Katze”. Since Katze is feminine, the adjective blau must use the feminine ending e. The word for dog is masculine so blau takes the ending er, making it “blauer Hund”. You get the idea.
When a colour is not mentioned in a prompt, and no keyword in the prompt implies a specific colour or combination of colours, the generated images all come out looking very brown or monochrome overall.
Keyword bleeding can have strange effects depending on the prompt. When using adjectives to describe specific parts of the image in the prompt, both words may bleed into other parts of the image. When I tried including “pointed ears” in a prompt, all the images depicted a character with typical elf ears but the character often also had horns or even animal ears as well.
All this seems obvious when you consider the training data. A character with normal-looking ears wouldn’t usually be described with the word “ears” (unless it was a closeup image showing just the person’s ears) because it’s a normal feature for someone to have. But you probably would mention ears in an image description if the character had unusual ears like an elf or catgirl.
Correcting artifacts
AI artifacts can be corrected however, with a process called inpainting (also known as generative fill). This is done by taking a previously generated image, masking out the area to be replaced, then running the generation process again with the same or slightly modified prompt. It can also be used on non AI generated images. Google Pixel phones use a kind of generative fill to remove objects from photographs. Inpainting is a little more involved than just prompting as it requires editing of the input image and it’s not offered by most free online image generators. It’s what I expect Adobe Firefly will really excel at as it’s already integrated into image editing software (if they can iron out their copyright issues…)
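At its core, inpainting regenerates only the masked pixels and keeps the rest of the original untouched. Stripped of the diffusion step entirely, the final compositing can be sketched like this (a deliberately simplified stand-in for the usual tensor operation):

```python
# Sketch of the compositing step at the end of inpainting: keep original pixels
# where the mask is 0, take newly generated pixels where it is 1.
# Real pipelines do this on image tensors; flat lists keep the idea visible.

def composite(original, generated, mask):
    return [g if m == 1 else o for o, g, m in zip(original, generated, mask)]

original  = [10, 20, 30, 40]   # pretend pixel values of the flawed image
generated = [99, 98, 97, 96]   # pretend pixels from a fresh generation pass
mask      = [0, 0, 1, 1]       # regenerate only the right half

print(composite(original, generated, mask))  # → [10, 20, 97, 96]
```

This is why inpainting can fix a malformed hand without touching the rest of an otherwise good image.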
Why AI kinda sucks
Since AI image generation is built on large scale statistical analysis, if you’re looking to generate something specific but uncommon you’re not going to have much luck. For example using “green skin” in a prompt will often generate a character with pale skin but there will be green in other parts of the image such as eye colour and clothing due to keyword bleeding.
No matter how specific you are the generator will never be able to create an image of your original character. You may be able to get something that has the same general vibe, but it will never be consistent between prompts and won’t be able to get fine details right.
There is a type of fine-tuning for stable diffusion models called LoRA (Low-Rank Adaptation) that can be used to generate images of a specific character, but of course to create this, you need preexisting images to use for the training data. This is fine if you want a model to shit out endless images of your favourite anime waifu but less than useless if you’re trying to use AI to create something truly original.
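The "low-rank" in LoRA refers to learning a small update instead of retraining the full weights: a frozen weight matrix W is adjusted by the product of two thin matrices B and A. A back-of-the-envelope parameter count (a sketch, assuming a single square weight matrix; d and r are illustrative numbers, not taken from any specific model):

```python
# Parameter count: full fine-tuning vs a LoRA update of rank r.
# d and r are illustrative, not from any particular model.

d = 768   # width of one weight matrix in the network
r = 8     # LoRA rank

full_params = d * d            # fine-tune W directly
lora_params = d * r + r * d    # learn B (d x r) and A (r x d); W stays frozen

print(full_params)  # → 589824
print(lora_params)  # → 12288
```

This is why a LoRA for one character can be a few megabytes while the base model is gigabytes, but it still needs real training images of that character to learn from.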
Some final thoughts
The more I play around with stable diffusion the more I realise that the people who use it to pretend to be a human artist with a distinctive style are using it in the most boring way possible. The most fun I’ve personally had with image generation is mixing and matching different “vibes” to churn out ideas I may not have considered for my own art. It can be a really useful tool for brainstorming. Maybe you have a few different things you’re inspired by (eg a clothing style or designer, a specific artist, an architectural style) but don’t know how to combine them. An image generator can do this with ease. I think it’s an excellent tool for artistic research and generating references.
All that being said, I strongly believe use of AI image generation for profit or social media clout is unethical until the use of copyrighted images in training data is ceased.
I understand how this situation has come about. Speaking specifically about LAION-5B the authors say (emphasis theirs):
Our recommendation is … to use the dataset for research purposes. … Providing our dataset openly, we however do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.
Use of copyrighted material for research falls under fair use. The problem comes from third parties making use of this research data for commercial purposes, which should be a violation of copyright. So far, litigation against AI companies has not made much progress in settling this.
I believe living artists whose work is used to train AI models must be fairly compensated and the law must be updated to enforce this in a way that protects independent artists (rather than building more armour for huge media companies).
The technology is still new and developing rapidly. Changes to legislation tend to be slow. But I have hope that a solution will be found.
References
“Adobe Firefly - Free Generative AI for Creatives.” Adobe. Accessed 28 Jan 2024.
https://www.adobe.com/uk/products/firefly.html
Andrew. “Stable Diffusion prompt: a definitive guide.” Stable Diffusion Art. 4 Jan 2024.
https://stable-diffusion-art.com/prompt-guide/#Anatomy_of_a_good_prompt
Andrew. “Beginner’s guide to inpainting (step-by-step examples).” Stable Diffusion Art. 24 September 2023.
https://stable-diffusion-art.com/inpainting_basics/
AUTOMATIC1111. “Stable Diffusion web UI. A browser interface based on Gradio library for Stable Diffusion.” GitHub. Accessed 15 Jan 2024.
https://github.com/AUTOMATIC1111/stable-diffusion-webui
“LAION roars.” The Batch newsletter. 7 Jun 2023.
https://www.deeplearning.ai/the-batch/the-story-of-laion-the-dataset-behind-stable-diffusion/
Beaumont, Romain. “Semantic search at billions scale.” Medium. 31 Mar 2022.
https://rom1504.medium.com/semantic-search-at-billions-scale-95f21695689a
Beaumont, Romain. “LAION-5B: A new era of open large-scale multi-modal datasets.” LAION website. 31 Mar 2022.
https://laion.ai/blog/laion-5b/
Beaumont, Romain. “Semantic search with embeddings: index anything.” Medium. 1 Dec 2020.
https://rom1504.medium.com/semantic-search-with-embeddings-index-anything-8fb18556443c
Beaumont, Romain. “img2dataset.” GitHub. Accessed 27 Jan 2024.
https://github.com/rom1504/img2dataset
Beaumont, Romain. “Preparing data for training.” GitHub. Accessed 27 Jan 2024.
https://github.com/rom1504/laion-prepro/blob/main/laion5B/usage_guide/preparing_data_for_training.md
“CLIP: Connecting text and images.” OpenAI. 5 Jan 2021.
https://openai.com/research/clip
“AI.” Cambridge Dictionary. Accessed 27 Jan 2024.
https://dictionary.cambridge.org/dictionary/english/ai?q=AI
“Common Crawl - Overview.” Common Crawl. Accessed 27 Jan 2024.
https://commoncrawl.org/overview
CompVis. “Stable Diffusion. A latent text-to-image diffusion model.” GitHub. Accessed 15 Jan 2024.
https://github.com/CompVis/stable-diffusion
duskydreams. “Basic Inpainting Guide.” Civitai. 25 Aug 2023.
https://civitai.com/articles/161/basic-inpainting-guide
Gallagher, James. “What is an Image Embedding?.” Roboflow Blog. 16 Nov 2023.
https://blog.roboflow.com/what-is-an-image-embedding/
"What Is Glaze? Samples, Why Does It Work, and Limitations." Glaze. Accessed 27 Jan 2024.
https://glaze.cs.uchicago.edu/what-is-glaze.html
“Pixel 8 Pro: Advanced Pro Camera with Tensor G3 AI.” Google Store. Accessed 28 Jan 2024.
https://store.google.com/product/pixel_8_pro
Schuhmann, Christoph. “LAION-400-MILLION OPEN DATASET.” 20 Aug 2021.
https://laion.ai/blog/laion-400-open-dataset/
Stability AI. “Stable Diffusion Version 2. High-Resolution Image Synthesis with Latent Diffusion Models.” GitHub. Accessed 15 Jan 2024.
https://github.com/Stability-AI/stablediffusion
open-hearth-rpg · 2 months ago
Faction Builder & City Builder Toolkit
I've put up two new print-and-play card deck toolkits on itch.io. Both include a printable deck as well as text files.
FACTION BUILDER TOOLKIT
This lets you collaboratively build factions, primarily for classic fantasy games (Godbound, 13th Age, Blades in the Dark) but also for modern urban fantasy games (Urban Shadows, Changeling the Lost, Dresden Files Accelerated).
Factions
Side A of each card has a pair of types: vampires, merchants, assassins. The GM, player, or table as a whole can draw 3-5 cards and pick one. Place the chosen card with the chosen side facing upright.
Side B of each card has three descriptors: wary, outsider, calculating. Again, draw three to five cards. Then pick one or two descriptors, each on different cards, to add to the type. The card’s format allows you to slide the Side B cards under the type card, leaving your choices exposed at the top and/or sides.
You can do this several times to build up a set of new factions for your campaign.
Faction Actions
Side A also includes a tool for determining what factions do in between phases of play. When “time passes” or a game has formal downtime, the GM can draw one (or more and pick one) to show what plots and operations the faction has set in motion.
If you need more details, the GM can randomize if the action targets another faction (1-4) or is aimed internally (5-6). If it affects other factions, roll randomly from your list of factions to see who it targets.
The action list has 30 options, each repeated once in the set.

Notes: This material builds on the ideas of factions and their use in play, particularly from 13th Age, Blades in the Dark, Dresden Files Accelerated, Godbound, Green Law of Varkith, Urban Shadows, and World of Darkness.
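The downtime procedure described above (draw an action, then roll a d6 for its target) could be automated like this. The action and faction names below are placeholders for illustration, not the actual card text:

```python
import random

# Sketch of the faction downtime draw: pick an action, roll a d6 for its target.
# Action and faction names are placeholders, not from the printed deck.

ACTIONS = ["expand territory", "recruit members", "sabotage a rival"]
FACTIONS = ["vampires", "merchants", "assassins"]

def downtime(faction, rng):
    action = rng.choice(ACTIONS)
    roll = rng.randint(1, 6)
    if roll <= 4:  # 1-4: targets another faction, chosen at random
        target = rng.choice([f for f in FACTIONS if f != faction])
    else:          # 5-6: aimed internally
        target = faction
    return action, roll, target

rng = random.Random(0)
print(downtime("vampires", rng))
```

Handy if you want to resolve several factions' downtime moves at once between sessions.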
CITY BUILDER TOOLKIT
This has tools for collaboratively building a fantasy city setting, primarily for classic fantasy games (Godbound, 13th Age, Blades in the Dark). Using this you can create your own fantasy city for games which come with a default location (Swords of the Serpentine, Dishonored). It works in conjunction with other world building tools like Microscope or The Deck of Worlds. 
Side A of the Cards has three blocks:
The first 40 cards include
City Environment (Dimensional Crossroads, Glacier, Skyrealm)
City Geography (Arboreal, Labyrinthine, Walled Districts)
Threats & Disasters (Authoritarian Control, Mass Amnesia, Undead)
The next 30 cards include
Magic Sources (Dreams, Fossilized Faeries, Paradoxes) 
Technology (Alien, Electropunk, Time of Revolutions)
The last 50 cards include a set of details about magic, wizards, and arcane society which you can mix & match.
Side B of the Cards has the backer, along with 120 different Unique Locations for your city. 
GMs can use this to generate a new city or entirely new setting. Even better, with the printed cards groups can use it to collaboratively build the world they want to play in.
For both, the text is shared under Creative Commons Attribution-ShareAlike 4.0. The material here is not AI generated and permission is explicitly not given for generative AI use.
Both are also available on DTRPG, and will eventually be available in a printed version on DrivethruCards.