#diffusion workflow
katdbee · 1 year ago
[Images: generated character concept charts]
Workflow for generating 25 images of a character concept using Automatic1111 and the ControlNet extension with txt2img:
Turn the ControlNet Enable, Low VRAM, and Allow Preview checkboxes on.
Select the OpenPose control type and choose the openpose_hand preprocessor. Feed it a good clean source image, such as this render of a figure I made in Design Doll. Click the explodey button to preprocess the image, and you'll get a spooky rave skeleton like this.
[Images: Design Doll source render and OpenPose skeleton preview]
Low VRAM user (me, I am low VRAM) tip: Save that preprocessed image and then replace the source image with it. Change the preprocessor to none, and it saves a bit of time.
Lower the steps from the default 20 and choose the DPM++ SDE Karras sampler, if you like.
Choose X/Y/Z plot from the script dropdown and pick the settings you like for the character chart about to be generated. In the top one posted I used X: Nothing, Y: CFG Scale 3-7, Z: Clip skip 1, 2, 3, 7, 12.
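For anyone who'd rather drive this from a script, here is a rough equivalent through the webui API (launch Automatic1111 with --api). This is a sketch, not the exact setup above: the ControlNet payload follows the sd-webui-controlnet extension's documented JSON fields, and the prompt and model filename are placeholder assumptions. It loops the CFG/Clip-skip grid manually instead of using the X/Y/Z plot script; 5 CFG values x 5 Clip skip values gives the 25 images.

```python
import base64, itertools, requests

API = "http://127.0.0.1:7860"

# Reuse the saved OpenPose skeleton so no preprocessor runs per image
with open("openpose_skeleton.png", "rb") as f:
    pose = base64.b64encode(f.read()).decode()

for cfg, clip_skip in itertools.product(range(3, 8), [1, 2, 3, 7, 12]):
    payload = {
        "prompt": "character concept, full body",  # placeholder prompt
        "steps": 20,
        "sampler_name": "DPM++ SDE Karras",
        "cfg_scale": cfg,
        "override_settings": {"CLIP_stop_at_last_layers": clip_skip},
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "input_image": pose,
                    "module": "none",  # skeleton is already preprocessed
                    "model": "control_v11p_sd15_openpose",  # assumed filename
                    "lowvram": True,
                }]
            }
        },
    }
    r = requests.post(f"{API}/sdapi/v1/txt2img", json=payload).json()
    with open(f"cfg{cfg}_clip{clip_skip}.png", "wb") as out:
        out.write(base64.b64decode(r["images"][0]))
```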
[Image gallery: X/Y/Z plot character charts]
Thanks for reading.
slick-devon · 1 year ago
[Images: the raw SD output and the Photoshop-edited final]
Some process notes: What I am showing here is what SD gave me in the first image, and my edits with Photoshop in the second. The prompt for the AI was very simple: "shirtless guy in a garden swing with multicolored robes." I "roop"ed SNL player Andrew Dismukes to influence the face. I picked the best of 40 generated images from the prompt and upsized it.

In Photoshop, I cropped out whatever the heck was going on below his waist. Next, I attacked the obviously missing rope holding his swing up by duplicating a bit of it, then moving, blending, and blurring that bit over to the subject's right shoulder. The next challenge was his left hand holding onto the rope. THREE FINGERS! Typical of AI-generated images, of course. I selected that section of the image, digitally added another finger with my limited Photoshop skills, and sent it back to SD img2img to refine. I picked the best out of 30 iterations and pasted it back into place.

Using Photoshop's AI-powered and insanely awesome new Remove Tool, I cleaned up a lot of blemishes and smoothed out the overly defined musculature and vascularity of the original AI-rendered image. I also selected small sections of the work in progress and nudged things around with PS's Liquify tool. Finally, Photoshop's incredibly powerful Camera RAW tool cannot be overstated. For my final rendering of this image, I made several adjustments to color, sharpness, tinting, noise, fog, etc., and I used a lot of the presets, including Adobe's own AI adjustments such as "popping the subject."

Overall, I upsize from Stable Diffusion an outrageous amount, work with that in Photoshop until I am satisfied, and then downsize for sharing on social media. If you read this far and want to learn more about my process, drop me a PM. I am happy to correspond with you whether you're doing gay pin-up imagery as I am, or any kind of generative art. I know the traditional analog media and digital artists who've worked hard on their craft are conflicted about this. I believe they will continue to persist. I want to be part of an emerging segment of digital art, and there is plenty of room!
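That 30-iteration img2img pass on the patched hand is also easy to script instead of clicking through the UI. A minimal diffusers sketch; the checkpoint, prompt, and denoising strength here are my assumptions, not slick-devon's actual settings:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The Photoshop-patched crop with the fourth finger painted in
crop = Image.open("hand_crop_edited.png").convert("RGB")

# Low strength keeps the painted-in finger; the model just cleans it up
for i in range(30):
    result = pipe(
        prompt="male hand gripping a rope, natural anatomy, detailed skin",
        image=crop,
        strength=0.35,
        guidance_scale=7.0,
    ).images[0]
    result.save(f"hand_refine_{i:02d}.png")
```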
ottopilot-ai · 8 days ago
Anatomy of a Scene: Photobashing in ControlNet for Visual Storytelling and Image Composition
This is a cross-posting of an article I published on Civitai.
Initially, the entire purpose for me to learn generative AI via Stable Diffusion was to create reproducible, royalty-free images for stories without worrying about reputation harm or consent (turns out not everyone wants their likeness associated with fetish smut!).
In the beginning, it was me just hacking through prompting iterations with a shotgun approach, and hoping to get lucky.
I did start the Pygmalion project and the Coven story in 2023 before I got banned (deservedly) for a ToS violation on an old post. Lost all my work without a proper backup, and was too upset to work on it for a while.
I did eventually put in work on planning and doing it, if not right, better this time. I was still having some issues with things like consistent settings and clothing. I could try to train LoRAs for that, but it seemed like a lot of work, and there are really still no guarantees. The other issue is that the action-oriented images I wanted were a nightmare to prompt for in SD 1.5.
I have always looked at ControlNet as, frankly, a bit like cheating, but I decided to go to Google University and see what people were doing with image composition. I stumbled on this very interesting video, and while that's not exactly what I was looking to do, it got me thinking.
You need to download the ControlNet model you want; I use SoftEdge, like in the video. It goes in extensions/sd-webui-controlnet/models.
I got a little obsessed with Lily and Jamie's apartment because so much of the first chapter takes place there. Hopefully, you will not go back and look at the images side-by-side, because you will realize none of the interior matches at all. But the layout and the spacing work - because the apartment scenes are all based on an actual apartment.
[Image: the apartment interior]
The first thing I did was look at real estate listings in the area where I wanted my fictional university set. I picked Cambridge, Massachusetts.
[Image: real estate listing photo]
I didn't want that mattress in my shot, where I wanted Lily by the window during the thunderstorm. So I cropped it, keeping a 16:9 aspect ratio.
[Image: cropped 16:9 reference]
Take your reference photo and put it in the txt2img ControlNet panel. Choose the SoftEdge control type and generate the preview. Check other preprocessors for more or less detail. Save the preview image.
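If you'd rather script that preview step, the standalone controlnet_aux package exposes the same HED/PiDiNet detectors the SoftEdge preprocessors are built on. A sketch, assuming the package's published API:

```python
from controlnet_aux import HEDdetector
from PIL import Image

# Same annotator weights the webui extension downloads
hed = HEDdetector.from_pretrained("lllyasviel/Annotators")

reference = Image.open("apartment_window.jpg")
edge_map = hed(reference)  # white-on-black soft edge map, like the webui preview
edge_map.save("apartment_softedge.png")
```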
[Image: SoftEdge preview map]
Lily/Priya isn't real, and this isn't an especially difficult pose that SD 1.5 would struggle to draw. So I generated a standard portrait-oriented image of her in the teal dress, standing and looking over her shoulder.
[Image: generated portrait of Lily in the teal dress]
I also get the softedge frame for this image.
[Image: SoftEdge map of the portrait]
I opened up both black-and-white images in Photoshop and erased any details I didn't want from each. You can also draw some in if you like. I pasted Lily in front of the window and tried to eyeball the perspective so she didn't look tiny or like a giant. I used her to block the lamp sconces and erased the scenery, so the AI would draw everything outside.
Take your preview and put it back into ControlNet as the source. Click Enable, change the preprocessor to None, and choose the downloaded model.
You can choose to interrogate the reference pic in a tagger, or just write a prompt.
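In diffusers terms, the whole generation step looks roughly like this. A sketch only: the checkpoint and prompt are placeholders, and the edited edge map is passed straight through with no preprocessor, matching the "None" setting above:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_softedge", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The photobashed map: window edges kept, lamp erased, figure pasted in front
composite_map = Image.open("window_plus_figure_softedge.png")

image = pipe(
    prompt="young woman in a teal dress by an apartment window, "
           "thunderstorm outside, night, cinematic lighting",
    image=composite_map,
    num_inference_steps=25,
).images[0]
image.save("window_scene.png")
```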
Notice I photoshopped out the trees and landscape and the lamp in the corner and let the AI totally draw the outside.
[Image: composited generation]
This is pretty sweet, I think. But then I generated a later scene, and realized this didn't make any sense from a continuity perspective. This is supposed to be a sleepy college community, not Metropolis. So I redid this, putting BACK the trees and buildings on just the bottom window panes. The entire point was to have more consistent settings and backgrounds.
[Image: Photoshop edit restoring trees and a modest skyline]
Here I am putting the trees and more modest skyline back on the generated image in Photoshop. Then I'm going to repeat the steps above to get a new softedge map.
[Image: new SoftEdge map]
I used a much more detailed preprocessor this time.
[Image: final scene with college-town skyline]
Now here is a more modest, college town skyline. I believe with this one I used img2img on the "city skyline" image.
boredtechnologist · 7 months ago
[Image: workflow diagram]
Workflow for content adjustment using Stable Diffusion and other tools
sumikatt · 9 months ago
The darling Glaze “anti-AI” watermarking system is a grift that stole code and violated the GPL (which the creator admits to). It uses the exact same technology as Stable Diffusion. It’s not going to protect you from LoRAs (smaller models that imitate a certain style, character, or concept).
An invisible watermark is never going to work. “De-glazing” training images is as easy as running them through a denoising upscaler. If someone really wanted to make a LoRA of your art, Glaze and Nightshade are not going to stop them.
If you really want to protect your art from being used as positive training data, use a proper, obnoxious watermark, with your username/website, with “do not use” plastered everywhere. Then, at the very least, it’ll be used as a negative training image instead (telling the model “don’t imitate this”).
There is never a guarantee your art hasn’t been scraped and used to train a model. Training sets aren’t commonly public. Once you share your art online, you don’t know every person who has seen it, saved it, or drawn inspiration from it. Similarly, you can’t name every influence and inspiration that has affected your art.
I suggest that anti-AI art people get used to the fact that sharing art means letting go of the fear of being copied. Nothing is truly original. Artists have always copied each other, and now programmers copy artists.
Capitalists, meanwhile, are excited that they can pay less for “less labor”. Automation and technology are an excuse to undermine and cheapen human labor—if you work in the entertainment industry, it’s adopt AI and quicken your workflow, or lose your job because you’re less productive. This is not a new phenomenon.
You should be mad at management. You should unionize and demand that your labor is compensated fairly.
pathologicalreid · 6 months ago
separation anxiety | S.R.
spencer's first case back from paternity leave involves children, so a concerned party reaches out to you for help
who? spencer reid x fem!reader
category: fluff
content warnings: mom!reader, dad!spencer, vaguely described breastfeeding
word count: 1.28k
a/n: this is technically the reid family from cryptic, but you don't have to read cryptic in order to understand this fic.
Your book rested in your lap as you pinched the thin paper of the novel between your index finger and your thumb. You had one foot on the ground, and the other was on the bottom of your daughter’s stroller, effectively rocking the stroller in two-four time so the infant would stay asleep.
Just because the A-Team wasn’t around didn’t mean there weren’t people working in the BAU. A crying baby would certainly disrupt the workflow in the bullpen – even if the baby belonged to a member of the BAU. Although, you had already fed her – mostly covered – at Spencer’s desk, so maybe you were past the point of no return.
You and baby Nellie had just been staring at each other at home – she was doing tummy time – when your phone went off. A mysterious text from Derek Morgan had popped up on your phone screen.
Derek Morgan: Got a sec?
It wasn’t that you and Derek never texted; it’s just that it was usually in the realm of “on my way” messages and, more recently, baby pictures. Mostly, you communicated indirectly using a massive group chat that was created by none other than Penelope Garcia.
So, when you answered and he asked if you’d be able to meet the team when they arrived at Quantico, you hesitantly said yes. He explained more once they were on the jet, the case that they had been on involved young children, and there was a little girl that had struck a particular chord with your boyfriend – who was on his first case back from paternity leave.
Eleanor was three months old, and you weren’t sure who’d have a harder time being away from one another – her or Spencer. You hadn’t considered how Spencer would feel when confronted with a case involving children now that he was a father. Quite frankly, you had hoped that he would’ve had more time before he needed to face a situation like that.
You waited, still using your foot to rock Nell’s stroller as the cover diffused the fluorescent light. You could hear her moving now, likely having woken up from her nap, but since she wasn’t crying, you saw no reason to stop her from playing with the colorful toys that dangled above her.
Sighing, you peered up from your book to see the elevator opening on the sixth floor, revealing the team behind the steel doors. Morgan clocked you first, winking as he passed through the glass doors to the bullpen.
Spencer hadn’t noticed the two of you yet, so you slowly opened the cover of the stroller and picked your daughter up, holding her gently to your chest. The infant fussed a bit while she was being moved, effectively gaining the attention of her father, whose face lit up at the sight of his family waiting for him at his desk.
Pushing past the rest of the team, who had also noticed the small being in the room by this point, Spencer approached his desk, haphazardly dropping his bag on the metal surface before pressing a soft kiss to your lips. Before even bothering to separate your lips, he was taking the baby from your arms.
“Hey,” he murmured, pulling away from you slowly as he secured the baby in his arms, bending his neck to place his lips on the crown of Nell’s head. “I missed you, angel girl.” His voice was gentle as you looked on fondly; she reached out a small hand and gripped the collar of his shirt. “How are you?” he asked, turning his attention back onto you.
You smiled at the two of them, using a cloth to wipe the drool from her chin before Spencer took it from you, deftly draping it over his shoulder in case he needed it shortly. “Good,” you answered, “tired,” you added.
Across the bullpen, Emily waved at Eleanor, grinning broadly as she walked over to her desk with JJ. To her enjoyment, the baby responded by letting out a coo and smiling before turning her attention to her dad, nuzzling her face in his chest. “Did I miss anything?”
Raising your eyebrows, you shrugged, leaning back and sitting on Spencer’s desk, “She pushed herself up on her arms yesterday.” It wasn’t a massive milestone – you were still grateful that Spencer had been present for her first real smile.
“Oh, yeah?” He responded, proudly looking down at his daughter, who had moved on from nuzzling and was now trying to see just how much of her hand she could fit in her mouth. “Did you know that babies usually go through a sleep regression right before they learn a new skill?” He asked, directing the question at Nell, “That must be why your mama looks so tired.”
You waved him off, crossing your arms in front of your stomach, “She’s lucky she’s so cute.”
The familiar click-clack of heels notified you that Penelope Garcia had made it to the party, likely signaled by another member of the team, “The cutest little girl in the world!”
Even though every member of the team had held your daughter at one point or another, you weren’t entirely comfortable with her being handed off like a hot potato. This, combined with Spencer’s aversion to germs, led to an unspoken rule: wait until one of her parents offered to let you hold her.
“Did you want to take her for a bit?” You offered, looking over at Spencer as you did. He needed time with her, it wasn’t your intention to deprive him of that, but you needed to check in with him without the distraction of the baby. Handing her off, you spoke up, “Watch your earrings,” you tapped on your earlobe, “She will grab them.”
As Garcia held the baby, she made her way around the bullpen, allowing Eleanor to make grabby hands at everyone and everything.
Keeping an arm around his waist, you looked up at your boyfriend, “Are you alright?” You asked, keeping your voice low as there was no sense in airing your concerns to the now bustling office.
Spencer’s smile faltered ever so slightly, “They were just kids. There have been kids before, but now…”
“Now you’re a dad,” you finished for him. “It’s not just something that you could see happening to someone else; it’s something you could see happening to yourself.” Pinching his side slightly, you smirked at him knowingly, “You know, your levels of empathy and sensitivity increase when you become a parent. Your brain adjusts to make yourself a better parent.”
Rolling his eyes slightly, Spencer raised his eyebrows at you, “You know, I vaguely remember telling you something very similar last week when you were crying at an ASPCA commercial.”
You reached up to ruffle his hair, “Nice try at sarcasm, babe, but you and I both know you never vaguely remember anything.”
“How did you know to come here? That I’d need to see her?” Spencer asked, watching as Penelope continued to parade around the BAU, now taking her up the stairs and through the roundtable room. “Was it a mother’s intuition?” He suggested, taking up a lighter tone.
Turning around, your eyes followed Garcia as she walked with Eleanor, “I was contacted by a concerned party.”
Spencer followed your gaze, “I’ll thank Garcia when she gives our baby back.”
You hummed, “Actually, it was Derek, he-” Your voice cut off abruptly. “Oh, Penny, I told you she’d grab them!” you called from Spencer’s desk, but Garcia was already on her way to return Eleanor, holding one hand to her ear as she handed the baby back to Spencer.
nestedneons · 1 year ago
[Image gallery: cyberpunk AI art]
By jilt with stable diffusion
Cyberpunk art commissions
Ko-Fi
My ai workflows
boundinparchment · 6 months ago
Kinetic Harvest
“I ain’t got the money. Not now. But with your…assistance, I can make it worth your while. Consider me a lifetime customer.”
You put the bullet back on your desk, a peace offering. He took it back and tucked it away, gun still trained on you.
“I don’t work on those who threaten me.”
Boothill/Gender Neutral Reader oneshot. Can be read as a pairing or not. Dottore reference if you squint. Not beta read.
Leaks used as a base, read at your own discretion. On AO3 here.
Reblogs are appreciated.
Desperation drove most to your doorstep, trembling as their bellies stoked fires so strong they made suns pale in comparison. Their eyes darted, assessing the clean office and workshop, as if they were questioning the validity of the rumors. A back-alley mechanic who took the money of criminals, crooks, and high society alike certainly had to have signs of that wealth. Or perhaps they thought morality was tied to cleanliness.
You cared not.
And they only cared whether you could fix their problem.
It made for a very convenient workflow.
But the man who sat before you was a deviation from that norm. He was surefooted, a little curious in the way his head turned to gaze about the darkened space. His eyes lingered not on you but on the prosthetic arm you kept behind your desk, the finger joints extended and the gun attachment on the wrist popped out, unloaded.
Never gave his name but you liked his drawl. You’d heard it from folks in a distant system. Aeragan-Epharshel was an ancient land, home to a language as old as the green plains and permafrosted mountains and dusty canyons; you were certain your mentor would have loved it there. So much to explore and learn from those who came before.
The stranger told you a story of a boy who grew up taming horses and identifying plants. Caring for everything around him. Isolated though the planet was, it was not without a law of entropy and a reciprocity that few ever even knew existed anymore. Of a child whose smile lit up a room like the sun itself.
There wasn’t an ounce of hesitation in his eyes when he stood a bullet up on your desk. In the glint of the lamplight, you caught three letters: IPC.
The one party you never took funding or clients from. The Interstellar Peace Corporation, quite ironically, stood for the exact opposite, in your opinion.
“You specialize in cybernetics,” the man tilted his head as he leaned back in his seat. The wood squeaked. “And rumor has it, you go beyond the usual…modifications. I ain’t done in this universe ‘til that bullet is buried in the skull of the leech that sucks planets dry.”
His words were pinched tight by his teeth, jaw on edge. This man, this stranger off the streets, knew what he wanted, and you wondered how many others in your profession had turned him away. Plenty would. There was a liability in taking the human form too far, both ethically and bureaucratically. Too much red tape, too much diffusing of pre-conceived notions.
No wonder your mentor chose the path of eternal funding and embraced his legacy.
“Before you tell me, ‘No’,” the man drawled. “Know that I have endured harsher summers and brutal winters than most o’ your so-called patients, doc. I can handle what needs to be done.”
“I don’t doubt that,” you replied, fingers reaching for the bullet and holding it up to the light.
Those who were so glued to their convictions made for difficult clients, though. They were stubborn.
Worse, really, you reminded yourself as you looked up and noticed the barrel of a gun staring back at you. No one would stand between a hunter and his prey.
“I ain’t got the money. Not now. But with your…assistance, I can make it worth your while. Consider me a lifetime customer.”
You put the bullet back on your desk, a peace offering. He took it back and tucked it away, gun still trained on you.
“I don’t work on those who threaten me.”
A second, and then two, before he clicked his teeth and holstered the weapon. He gestured with open hands to demonstrate he was unarmed and then folded them in his lap.
“You’ll have a difficult road ahead,” you advised. “Years of assembly.”
“A full cybernetic body that preserves my noggin and my perfect eyesight is hardly unreasonable. It’s been done. Everyone knows you studied hidden away from the Aeons, under the Heretic. He’s dead, o’ course, but if I were a gamblin’ man…”
“You don’t strike me the type.”
“I ain’t,” the words came out strained, frustrated with a huff of breath. “A waste o’ money and time. Frivolous. All I’m sayin’ is…if I wanted the easy way out, I wouldn’t be here. I know what I’m signin’ up for.”
Your eyes traced his haggard face, white hair with tinges of black that had seen better days, a muscular frame trimmed a little too lean in places due to malnutrition. A hat more pristine than his dusty pants.
“Lay down over on the table,” you jerked your head in the direction of the vivisection table off to the side of your workshop. “We’ll start with your measurements.”
The man let out a slow exhale, one you didn’t dare attribute to relief. He rose with a steadiness you recognized only in those who trusted in their abilities and convictions, who would succeed not just through skill but by the cognitive bias that they embraced with every fiber of their being.
“Just promise me one thing, cowboy,” you said, collecting a tablet from your desk.
He turned, weight shifted to cock his hip impatiently.
“I don’t want your money. But when we’re done, you’ll tell me your name. I want to know what to call the one who succeeds in gutting the IPC.”
He smiled, crooked and charming, and you wondered if you ever saw eyes sparkle like that in this office before.
“It’s a deal, doc.”
pillowfort-social · 9 months ago
Site Update - 2/9/2024
Hi Pillowfolks!
Today is the day. Post Queueing & Scheduling is finally here for everyone. Hooray! As always we will be monitoring closely for any unexpected bugs so please let us know if you run into any.
New Features/Improvements
✨ *NEW* Queue & Schedule - One of the most highly requested features has finally arrived at Pillowfort. Users can now effortlessly Queue or Schedule a post for a future time.  
Queue helps keep your Pillowfort active by staggering posts over a period of hours or days. Just go to your Settings page to set your queue interval and time period.
How to add a post to your queue: 
While creating a new post or editing a draft, click on the clock icon to the right of the “Publish” button and choose “Queue.” Then click “Queue” when you’re ready to submit the post.
Schedule assigns a post a specific publishing time in the future (based on the timezone you’ve selected in Account Settings). How to schedule a post: 
While creating a new post or editing a draft, click on the clock icon to the right of “Publish” and choose “Schedule.” Enter the time you wish to publish your post, click on “Submit” and then click “Schedule.” 
How to review your queued & scheduled posts: 
On the web, your Queue is available in the user sidebar located on the left side of the screen underneath “Posts.” (On mobile devices, click on the three line icon located on the upper left of your screen to access your user sidebar.)
Note: the “Queue” button will only display if you have one or more queued or scheduled posts.
A CAVEAT: It is not currently possible to queue or schedule posts to Communities. We do intend to add this feature in the future, but during development we determined that enabling queueing & scheduling for Communities would require additional workflow and use case support that would extend development time on a project that has already been delayed, so we decided to release queueing & scheduling for blogs only at the present time. We will add the ability to queue & schedule to Communities soon after the Pillowfort PWA (our next major development project) is complete.
✨ End of Year Fundraiser Reward Badges: End of Year Fundraiser Reward Badges will begin to be distributed today. We'll update everyone when distribution is done.  
✨ End of Year Fundraiser Reward Frames: As a special thank you to our community for helping keep Pillowfort online we have released two very special (and cozy!) Avatar Frames for all users. 
As for the remaining End of Year Fundraiser Rewards - we will be asking the Community for feedback on the upcoming Light Mode soon. 
✨ Valentine’s Day Avatar Frame: A new Valentine’s Day inspired frame is now available!
✨ Valentine’s Day Premium Frames: Alternate colors of the Valentine’s Day frame are available to Pillowfort Premium subscribers. 
✨ Site FAQ Update - Our Site FAQ has received a revamp.  
Terms of Service Update
As of today (February 9th), we are updating our Terms of Service to prohibit the following content:
Images created through the use of generative AI programs such as Stable Diffusion, Midjourney, and Dall-E.
An explanation of how this policy will be enforced and what exactly that means for you is available here: https://www.pillowfort.social/posts/4317673
Thank you again for your continued support. Other previously mentioned updates (such as the Pillowfort Premium Price increase, Multi Account Management, PWA, and more) will be coming down the pipeline soon. As always, stay tuned for updates. 
Best, Staff
luminavt · 2 months ago
Level-5, Fantasy Life: i, and the Generative AI Stable Diffusion
Level-5, the developer of Fantasy Life: i, just announced a lot of delays for their upcoming games at Level-5 Vision 2024: To the World's Children.
In this presentation, a lot of the games showed off BEAUTIFUL and unique-looking art styles and character designs. They stand out from what a lot of current anime games are offering.
I watched it live on stream and my stream community enjoyed seeing it all. However the very next day?
We learned through the article posted above that the developer had started to embrace using Stable Diffusion, a form of generative AI, for the art assets in three of its games. Megaton Musashi, Yokai Watch, and Inazuma Eleven are shown in the official government presentation.
[Images: presentation slides showing Megaton Musashi, Yokai Watch, and Inazuma Eleven]
As someone who is very passionate about Fantasy Life i?
Seeing the company you grew up loving embrace a form of generative AI that collects the data of original works without explicit consent is HEARTBREAKING.
However, I want to be as clear and accurate as possible.
There is very clear evidence that Level 5 is embracing Generative AI for the games listed in the video. There is no clear evidence that these techniques were used in the development of Fantasy Life: i. This post is being shared with you for your awareness.
Fantasy Life for the 3ds is one of the most magical games I've ever played.
The game had so much charm that I showed a minimum of 6 different friends, and upon just watching me play it? They immediately went to buy the game themselves.
It was so charming, so simple yet aesthetically pleasing that anyone could appreciate it for what it was.
This game was developed by Level-5.
The fact that Level-5 was the developers is what got my eye on this game in the first place. Ever since Dark Cloud 2 for the Playstation 2 I fell in love with what these developers can do.
Dark Cloud, Ni no Kuni, Rogue Galaxy, I fell in love with the developers ages ago and what they do meant a lot to me.
It feels awful that I cannot feel comfortable supporting the developer as a whole anymore.
I don't fault anyone if they choose to purchase the game still, because ultimately I know the game means a lot. Part of me still wants to experience the game.
However, it's clear that Level-5 is one of the developers who plan to fully integrate gen AI into their development cycle going forward, and I wouldn't be surprised if that's why they have so many delays across all their games, as they may be adapting to a new workflow.
As someone who heavily endorsed this game as a streaming vtuber, I felt it was only fair I spread this information. Thank you.
Link to the article will be on my following tumblr post for full context.
glitchyrobo · 3 months ago
Hey! If you wouldn't mind, what is your 3D animation process? I am super curious; it's a wonderful style.
Keeping it brief cuz otherwise this post would be waaaayyy too long
I model, animate, and render in Blender, with textures made with Aseprite or Krita.
I model polygonally without subdiv (except for specific effects, but that's the exception), then UV unwrap, make the texture, rig the model if needed, and then animate!
In terms of the shading/lighting, which I'm assuming is what you're most interested in, I use a set of Toon BSDF nodes mixed together, with an Emission node added in afterwards to recolor the shadows from black (which is impossible otherwise in Cycles, at least in Blender 2.93!). I use Branched Path Tracing with just 1 sample, as well as 0 diffuse bounces. This, in combination with using 0-size sun lamps and disabling diffuse bounces on all objects (under Object Properties → Visibility), keeps the shading flat and the shadow edges sharp. It also allows for very fast renders (on the order of a few seconds per frame).
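As a rough illustration only (not glitchyrobo's exact node tree), setting up that kind of Toon-BSDF-plus-Emission material and flat render config from a script could look like this in Blender 2.93's Python API:

```python
import bpy

# Toon BSDF added to an emission shader so shadows render as a color, not black
mat = bpy.data.materials.new("ToonShade")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

toon = nodes.new("ShaderNodeBsdfToon")
shadow_tint = nodes.new("ShaderNodeEmission")
shadow_tint.inputs["Color"].default_value = (0.15, 0.10, 0.30, 1.0)  # recolored shadows
add = nodes.new("ShaderNodeAddShader")
out = nodes.new("ShaderNodeOutputMaterial")
links.new(toon.outputs["BSDF"], add.inputs[0])
links.new(shadow_tint.outputs["Emission"], add.inputs[1])
links.new(add.outputs["Shader"], out.inputs["Surface"])

scene = bpy.context.scene
scene.cycles.progressive = "BRANCHED_PATH"  # Branched Path Tracing (2.93)
scene.cycles.aa_samples = 1                 # a single sample keeps toon edges hard
scene.cycles.diffuse_bounces = 0

sun = bpy.data.lights.new("KeyLight", type="SUN")
sun.angle = 0.0  # zero-size sun gives razor-sharp shadow terminators
```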
I also often make custom shader node trees for various effects. Sometimes these are as simple as some noise mixed together in some way, and otherwise I end up with sprawling node groups in order to get effects like an Affini eye, or the accretion disk of a black hole, or even something especially involved like using ray marching & SDFs to have some 3d shape 'projected' into the scene without using additional geometry! Shaders are really phenomenal and rad and I'd encourage anyone who's interested in making 3d art to experiment with some. The Book of Shaders has some good introductory material, though it appears to be unfinished as of right now.
Texturing is just as important a skill as writing shaders, and the two can work together! If you've seen my animation of the light feral control ship with the big eyes, the smaller cycling lights on its hull are controlled by manually drawn shapes of varying value that are then used as a greyscale mask to control which are lit up at any given time. ("Create some greyscale mask and then use it for some animation purpose" is a super common part of my workflow tbh!)
Stylistically speaking, I have a personal rule to avoid 'smooth gradients' in the final frames. (Hence the toon shaders!)
I said I'd keep this brief, so I'll just quickly wrap up by saying that once I've rendered a sequence of PNGs, I encode it into an h.264 mp4 video with ffmpeg using the '-tune animation' (among other) options, which encodes better for my style of large blocks of contiguous color than the defaults.
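For reference, that encode step might look like the following; everything beyond -tune animation is my guess at sensible defaults, not necessarily the author's flags:

```python
import subprocess

# PNG frame sequence -> h.264 mp4 tuned for flat blocks of contiguous color
subprocess.run([
    "ffmpeg",
    "-framerate", "24",
    "-i", "frames/frame_%04d.png",
    "-c:v", "libx264",
    "-tune", "animation",   # biases the encoder toward flat-color animation
    "-pix_fmt", "yuv420p",  # widest player compatibility
    "-crf", "18",
    "out.mp4",
], check=True)
```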
katdbee · 2 years ago
[Image gallery: character portraits]
Character portraits, free for use as always.
More of the same from me, it's character portraits as practice on generating good hands.
Gonna talk process for a minute. There are a few ways to do it, but I get the best results out of setting the openpose ControlNet to 'ControlNet is more important' plus a canny or lineart model that defines the figure further (always make sure it's on for at least the first half of the steps, if not the whole way). It absolutely will struggle with open shapes, such as where the top of the naginata is cropped out of the input image, so the machine does its best to make something up. I got more weird weapons than I did mutant hands, and the mutant hands I did get were from turning on the lineart at the last half of the steps. The depth model is great for creating a lush painterly effect in the generation. The more nets you stack, the longer the generation takes, so keep that in mind.
Also turn your sampling steps DOWN. What you get in 10 steps will be enough to have a good idea of the generation and demonstrate hand consistency.
A good way to save time while figuring this out is to preprocess the image, save it with the little download icon in the preview window (click the Allow Preview checkbox if you don't see it), and then put that preprocessed image into the ControlNet with the preprocessor set to none. That way it won't be re-preprocessing that image over and over, which cuts generation time down considerably.
This is the doll I put into the preprocessor and the openpose map that comes out. By preprocessing at 2048 resolution and generating at 512x768, the details are kept much better than if Pixel Perfect is used; however, if it needed to preprocess and generate this openpose image every single time, it would still be cooking right now.
[Images: Design Doll input and the resulting OpenPose map]
Then this is aaaaaa I don't remember, one of the lineart or canny or whatever preps that makes a rough invert sketch of the input. That got saved and then plugged in again and processed one more time to keep the most relevant edges.
This is also how I came to realize the machines struggle with open shapes. I'm not sure how I'll try to get around that yet.
[Images: lineart/canny edge maps]
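A sketch of what the stacked-unit setup described above looks like through the webui API; the field names follow the sd-webui-controlnet extension's documented payload, and the prompt and model filenames are assumptions:

```python
import base64, requests

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "prompt": "character portrait with naginata, painterly",  # placeholder
    "steps": 10,  # 10 steps is enough to judge hand consistency
    "width": 512,
    "height": 768,
    "alwayson_scripts": {
        "controlnet": {
            "args": [
                {   # unit 1: pose, given priority over the prompt
                    "input_image": b64("doll_openpose_2048.png"),
                    "module": "none",  # already preprocessed and saved
                    "model": "control_v11p_sd15_openpose",
                    "control_mode": "ControlNet is more important",
                },
                {   # unit 2: lineart pinning down the figure for the first half
                    "input_image": b64("doll_lineart_cleaned.png"),
                    "module": "none",
                    "model": "control_v11p_sd15_lineart",
                    "guidance_start": 0.0,
                    "guidance_end": 0.5,
                },
            ]
        }
    },
}
r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
```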
slick-devon · 1 year ago
Insanely good-looking hunks you make! Must take hours. Do you create them all from the start?
It's a mix (and I should point out my starting points)... I either start with pure text of an idea, my own sketch, or a random photo off the web that's not necessarily the look I'm going for but more about the staging and pose. Most of my images are pin-ups or portraits. Anything involving action or more than one person gets difficult. I'll bring it into Photoshop and nudge things around and correct fingers and limbs, and run it through the AI another time or two before finally polishing it, Lightroom-ish, for the final.
horzagobuchul · 1 year ago
I love the work you're doing with AI!!! Can you tell us more about your process?
Hello!
Thank you so much for the kind words~
My process is very iterative and never the same for any concept that I generate, to be honest...
I do most of my work in Stable Diffusion running locally on my PC, with the Automatic1111 webui, with minor touching up and such in Adobe Photoshop~ The actual checkpoint I use differs a lot depending on what look I'm going for and also kind of just how I feel at the time; but I have a few that are selectively trained on the kind of material I'm interested in.
Most projects begin with trying out a few prompts that generally describe the concept I'm going for, with bulk generation of images using random seeds just to find something that I like.
When I find something workable, I try it out at different weights to see how the seed and model behave with various body shapes. If it works for enough iterations, I can generate a couple of hundred frames that I then put together into an animation~
If the seed in question behaves very differently at low and high weights, I might have to dynamically change the prompt as the iterations progress~ Generally speaking this varies between every concept.
For single images I usually find a concept that I like using text to image, which I then refine using image to image generation. There's always some small part of the original image that turns out wonky~
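As a rough sketch of that first bulk pass (the checkpoint and prompt here are placeholders, not horzagobuchul's):

```python
import random, torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "general description of the concept"

# Shotgun pass: many random seeds, then keep the ones worth iterating on
for _ in range(100):
    seed = random.randrange(2**32)
    gen = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, generator=gen, num_inference_steps=20).images[0]
    image.save(f"explore_{seed}.png")
```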
Hope that's kind of what you were asking for haha, I'm not very good at describing my workflow, and I've got multiple people asking me this~
mrcatfishing · 5 months ago
I've managed to build up an entire workflow of AI image generation.
I start with a large proprietary model like DALL-E or Midjourney, which are good at a lot of concepts but tend toward PG content boundaries and offer fewer knobs to control the specifics. This can get me to a good starting point, like the following image.
[Image: initial generation from the proprietary model]
I then take that image and play around with it in Stable Diffusion's img2img generator, playing with the style LoRAs I've made until I get something I like. Often that includes pushing it into a properly mature image. In this case, I've kept it Tumblr-safe though.
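A minimal img2img-plus-LoRA sketch of that second step; the paths, prompt, and strength are stand-ins for whatever style LoRA you have trained:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("./loras", weight_name="my_style.safetensors")

start = Image.open("proprietary_model_draft.png").convert("RGB")

restyled = pipe(
    prompt="same subject, rendered in my painted style",
    image=start,
    strength=0.55,        # high enough to restyle, low enough to keep composition
    guidance_scale=7.5,
).images[0]
restyled.save("restyled.png")
```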
[Image: the img2img restyled result]
Whether or not this counts as an improvement is debatable, but I enjoy putting a spin onto the output that makes it more clearly my own.
nestedneons · 11 months ago
[Image gallery: cyberpunk AI art]
By jilt with her stable diffusion workflow
Cyberpunk art commissions
Ko-Fi