#diffusion workflow
katdbee · 1 year ago
Text
Workflow for generating 25 images of a character concept using Automatic1111 and the ControlNet image diffusion method with txt2img:
Turn the Enable, Low VRAM, and Preview checkboxes on in ControlNet.
Select the Open Pose setting and choose the openpose_hand preprocessor. Feed it a good clean source image such as this render of a figure I made in Design Doll. Click the explodey button to preprocess the image, and you'll get a spooky rave skeleton like this.
Low VRAM user (me, I am low VRAM) tip: Save that preprocessed image and then replace the source image with it. Change the preprocessor to none, and it saves a bit of time.
Lower the steps from 20 if you like. Choose the DPM++ SDE Karras sampler if you like.
Choose X/Y/Z plot from the Script drop-down and pick the settings you like for the character chart about to be generated. In the top one posted I used X: Nothing, Y: CFG Scale 3-7, Z: Clip skip 1, 2, 3, 7, 12.
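For anyone who would rather script this than click through the UI, here is a minimal sketch of the same idea — OpenPose-conditioned txt2img with a CFG sweep standing in for the X/Y/Z plot — using the diffusers library rather than Automatic1111. The model IDs, prompt, and file names are illustrative assumptions, not the exact setup above.

```python
# Minimal sketch: OpenPose-conditioned txt2img with a CFG-scale sweep, using
# diffusers instead of the A1111 UI. Model ids, prompt, and file names are
# assumptions for illustration only.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# The already-preprocessed pose map (the "spooky rave skeleton"), reused so no
# preprocessor has to run again.
pose_map = load_image("openpose_skeleton.png")

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any SD 1.5 checkpoint works here
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()  # rough analogue of the Low VRAM checkbox

# Sweep CFG scale the way the X/Y/Z plot does, one image per value.
for cfg in (3, 4, 5, 6, 7):
    image = pipe(
        "full-body character concept, studio lighting",
        image=pose_map,
        num_inference_steps=15,
        guidance_scale=cfg,
    ).images[0]
    image.save(f"concept_cfg{cfg}.png")
```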
Thanks for reading.
slick-devon · 1 year ago
Text
Some process notes: what I am showing here is what SD gave me in the first image, and my edits in Photoshop in the second. The prompt for the AI was very simple: "shirtless guy in a garden swing with multicolored robes." I "roop'ed" SNL player Andrew Dismukes to influence the face. I picked the best of 40 generated images from the prompt and upsized it.

In Photoshop, I cropped out whatever the heck was going on below his waist. Next, I attacked the obviously missing rope holding his swing up by duplicating a bit of it, then moving, blending, and blurring that bit over the subject's right shoulder. The next challenge was his left hand holding onto the rope. THREE FINGERS! Typical of AI-generated images, of course. I selected that section of the image, digitally added another finger with my limited Photoshop skills, and sent it back to SD img2img to refine. I picked the best out of 30 iterations and pasted it back into place.

Using Photoshop's AI-powered and insanely awesome new Remove Tool, I cleaned up a lot of blemishes and smoothed out the overly defined musculature and vascularity of the original AI-rendered image. I also selected small sections of the progress so far and nudged things around with PS's Liquify tool. Finally, the incredibly powerful Camera Raw tool in Photoshop cannot be overstated. For my final rendering of this image, I made several adjustments to color, sharpness, tinting, noise, fog, etc. And I use a lot of the presets, including Adobe's own AI adjustments such as "popping the subject."

Overall, I upsize from Stable Diffusion an outrageous amount, work with that in Photoshop until I am satisfied, and then downsize for sharing on social media.

If you read this far and want to learn more of my process, then drop me a PM. I am happy to correspond with you whether you're doing gay pin-up imagery as I am, or any kind of generative art. I know the traditional analog media and digital artists who've worked hard on their craft are conflicted on this. I believe they will continue to persist. I want to be part of an emerging segment of digital art, and there is plenty of room!
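The "patch it by hand, then send it back through img2img" step described above is easy to reproduce outside the UI. Below is a minimal sketch of that round trip with the diffusers library; the file names, model ID, prompt, and strength value are placeholders rather than the exact settings used for this piece.

```python
# Sketch of the img2img refinement pass over a hand-edited crop. Everything
# named here (files, model, prompt, strength) is an illustrative assumption.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The cropped hand region after the manual Photoshop fix (extra finger drawn in).
rough_fix = load_image("hand_crop_edited.png").resize((512, 512))

# A lower strength keeps the composition while cleaning up the hand-drawn patch.
results = pipe(
    prompt="a man's hand gripping a rope, natural skin, detailed fingers",
    image=rough_fix,
    strength=0.45,
    guidance_scale=7.0,
    num_images_per_prompt=4,
).images

for i, img in enumerate(results):
    img.save(f"hand_refined_{i}.png")
```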
ottopilot-ai · 2 months ago
Text
Anatomy of a Scene: Photobashing in ControlNet for Visual Storytelling and Image Composition
This is a cross-posting of an article I published on Civitai.
Initially, the entire purpose for me to learn generative AI via Stable Diffusion was to create reproducible, royalty-free images for stories without worrying about reputation harm or consent (turns out not everyone wants their likeness associated with fetish smut!).
In the beginning, it was me just hacking through prompting iterations with a shotgun approach, and hoping to get lucky.
I did start the Pygmalion project and the Coven story in 2023 before I got banned (deservedly) for a ToS violation on an old post. Lost all my work without a proper backup, and was too upset to work on it for a while.
I did eventually put in the work on planning and doing it, if not right, at least better this time. I was still having some issues with things like consistent settings and clothing. I could try to train LoRAs for that, but it seemed like a lot of work with really no guarantees. The other issue was that the action-oriented images I wanted were a nightmare to prompt for in 1.5.
I have always looked at ControlNet as, frankly, a bit like cheating, but I decided to go to Google University and see what people were doing with image composition. I stumbled on this very interesting video, and while that's not exactly what I was looking to do, it got me thinking.
You need to download the ControlNet model you want; I use softedge, like in the video. It goes in extensions/sd-webui-controlnet/models.
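If you prefer to script the download, a sketch like the one below works; the repo ID and filename assume the standard ControlNet 1.1 softedge checkpoint, which may not be the exact model used in the video.

```python
# Fetch a softedge ControlNet checkpoint into the A1111 extension's models
# folder. Repo id and filename are assumptions (the ControlNet 1.1 release).
from pathlib import Path
from huggingface_hub import hf_hub_download

dest = Path("extensions/sd-webui-controlnet/models")
dest.mkdir(parents=True, exist_ok=True)

hf_hub_download(
    repo_id="lllyasviel/ControlNet-v1-1",
    filename="control_v11p_sd15_softedge.pth",
    local_dir=dest,
)
```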
I got a little obsessed with Lily and Jamie's apartment because so much of the first chapter takes place there. Hopefully, you will not go back and look at the images side-by-side, because you will realize none of the interior matches at all. But the layout and the spacing work - because the apartment scenes are all based on an actual apartment.
The first thing I did was look at real estate listings in the area where I wanted my fictional university set. I picked Cambridge, Massachusetts.
I didn't want that mattress in my shot, where I wanted Lily by the window during the thunderstorm. So I cropped it, keeping a 16:9 aspect ratio.
You take your reference photo and put it in the txt2img ControlNet panel. Choose the softedge control type and generate the preview. Check other preprocessors for more or less detail. Save the preview image.
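Outside the UI, that preview step amounts to running a soft-edge detector over the reference photo. Here is a rough sketch with the controlnet_aux package; the annotator repo and file names are assumptions.

```python
# Generate and save a soft-edge (HED) map of the reference photo, equivalent to
# the ControlNet preview in the UI. File names are placeholders.
from controlnet_aux import HEDdetector
from diffusers.utils import load_image

hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
reference = load_image("apartment_listing_crop.jpg")

softedge_map = hed(reference)  # white-on-black edge drawing of the room
softedge_map.save("apartment_softedge.png")
```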
Lily/Priya isn't real, and this isn't an especially difficult pose that SD 1.5 would have trouble drawing. So I generated a standard portrait-oriented image of her in the teal dress, standing and looking over her shoulder.
I also get the softedge frame for this image.
I opened up both black-and-white images in Photoshop and erased any details I didn't want from each. You can also draw some in if you like. I pasted Lily in front of the window and tried to eyeball the perspective so she wouldn't look tiny or like a giant. I used her to block the lamp sconces and erased the scenery so the AI would draw everything outside.
Take your preview and put it back into ControlNet as the source. Click Enable, change the preprocessor to None, and choose the downloaded model.
You can choose to interrogate the reference pic in a tagger, or just write a prompt.
Notice I photoshopped out the trees and landscape and the lamp in the corner and let the AI totally draw the outside.
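For reference, the generation step with the hand-edited map fed straight in as the conditioning image (preprocessor set to None) looks roughly like this when expressed with the diffusers library instead of the webui; the model IDs, file names, and prompt are assumptions.

```python
# Sketch: txt2img conditioned on the photobashed soft-edge map. Model ids,
# file names, and the prompt are illustrative assumptions.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# The edited map: Lily pasted in front of the window, lamp and scenery erased.
edited_map = load_image("apartment_softedge_edited.png")

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_softedge", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "woman in a teal dress standing at a tall window at night, thunderstorm "
    "outside, apartment interior, cinematic lighting",
    image=edited_map,
    num_inference_steps=25,
    guidance_scale=7.0,
).images[0]
image.save("lily_window_scene.png")
```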
This is pretty sweet, I think. But then I generated a later scene, and realized this didn't make any sense from a continuity perspective. This is supposed to be a sleepy college community, not Metropolis. So I redid this, putting BACK the trees and buildings on just the bottom window panes. The entire point was to have more consistent settings and backgrounds.
Here I am putting the trees and a more modest skyline back on the generated image in Photoshop. Then I'm going to repeat the steps above to get a new softedge map.
I used a much more detailed preprocessor this time.
Now here is a more modest, college town skyline. I believe with this one I used img2img on the "city skyline" image.
boredtechnologist · 8 months ago
Text
Workflow for content adjustment using Stable Diffusion and other tools
pathologicalreid · 7 months ago
Text
separation anxiety | S.R.
spencer's first case back from paternity leave involves children, so a concerned party reaches out to you for help
who? spencer reid x fem!reader
category: fluff
content warnings: mom!reader, dad!spencer, vaguely described breastfeeding
word count: 1.28k
a/n: this is technically the reid family from cryptic, but you don't have to read cryptic in order to understand this fic.
Your book rested in your lap as you pinched the thin paper of the novel between your index finger and your thumb. You had one foot on the ground, and the other was on the bottom of your daughter’s stroller, effectively rocking the stroller in two-four time so the infant would stay asleep.
Just because the A-Team wasn’t around didn’t mean there weren’t people working in the BAU. A crying baby would certainly disrupt the workflow in the bullpen – even if the baby belonged to a member of the BAU. Although, you had already fed her – mostly covered – at Spencer’s desk, so maybe you were past the point of no return.
You and baby Nellie had just been staring at each other at home – she was doing tummy time – when your phone went off. A mysterious text from Derek Morgan had popped up on your phone screen.
Derek Morgan: Got a sec?
It wasn’t that you and Derek never texted, it’s just that it was usually under the realm of “on my way” messages and, more recently, baby pictures, but you usually communicated indirectly using a massive group chat that was created by none other than Penelope Garcia.
So, when you answered and he asked if you’d be able to meet the team when they arrived at Quantico, you hesitantly said yes. He explained more once they were on the jet, the case that they had been on involved young children, and there was a little girl that had struck a particular chord with your boyfriend – who was on his first case back from paternity leave.
Eleanor was three months old, and you weren’t sure who’d have a harder time being away from one another – her or Spencer. You hadn’t considered how Spencer would feel when confronted with a case involving children now that he was a father. Quite frankly, you had hoped that he would’ve had more time before he needed to face a situation like that.
You waited, still using your foot to rock Nell’s stroller as the cover diffused the fluorescent light. You could hear her moving now, likely having woken up from her nap, but if she wasn’t crying, you saw no reason to stop her from playing with the colorful toys that dangled above her.
Sighing, you peered up from your book to see the elevator opening on the sixth floor, revealing the team behind the steel doors. Morgan clocked you first, winking as he passed through the glass doors to the bullpen.
Spencer hadn’t noticed the two of you yet, so you slowly opened the cover of the stroller and picked your daughter up, holding her gently to your chest. The infant fussed a bit while she was being moved, effectively gaining the attention of her father, whose face lit up at the sight of his family waiting for him at his desk.
Pushing past the rest of the team, who had also noticed the small being in the room by this point, Spencer approached his desk, haphazardly dropping his bag on the metal surface before pressing a soft kiss to your lips. Before even bothering to separate your lips, he was taking the baby from your arms.
“Hey,” he murmured, pulling away from you slowly as he secured the baby in his arms, bending his neck to place his lips on the crown of Nell’s head, “I missed you, angel girl.” His voice was gentle as you looked on fondly, she reached out a small hand and gripped the collar of his shirt. “How are you?” He asked, turning his attention back onto you.
You smiled at the two of them, using a cloth to wipe the drool from her chin before Spencer took it from you, deftly draping it over his shoulder in case he needed it shortly. “Good,” you answered, “tired,” you added.
Across the bullpen, Emily waved at Eleanor, grinning broadly as she walked over to her desk with JJ. To her enjoyment, the baby responded by letting out a coo and smiling before turning her attention to her dad, nuzzling her face in his chest. “Did I miss anything?”
Raising your eyebrows, you shrugged, leaning back and sitting on Spencer’s desk, “She pushed herself up on her arms yesterday.” It wasn’t a massive milestone – you were still grateful that Spencer had been present for her first real smile.
“Oh, yeah?” He responded, proudly looking down at his daughter, who had moved on from nuzzling and was now trying to see just how much of her hand she could fit in her mouth. “Did you know that babies usually go through a sleep regression right before they learn a new skill?” He asked, directing the question at Nell, “That must be why your mama looks so tired.”
You waved him off, crossing your arms in front of your stomach, “She’s lucky she’s so cute.”
The familiar click-clack of heels notified you that Penelope Garcia had made it to the party, likely signaled by another member of the team, “The cutest little girl in the world!”
Even though every member of the team had held your daughter at one point or another, you weren’t entirely comfortable with her being handed off like a hot potato. This, combined with Spencer’s aversion to germs, led to an unspoken rule: wait until one of her parents offered to let you hold her.
“Did you want to take her for a bit?” You offered, looking over at Spencer as you did. He needed time with her, it wasn’t your intention to deprive him of that, but you needed to check in with him without the distraction of the baby. Handing her off, you spoke up, “Watch your earrings,” you tapped on your earlobe, “She will grab them.”
As Garcia held the baby, she made her way around the bullpen, allowing Eleanor to make grabby hands at everyone and everything.
Keeping an arm around his waist, you looked up at your boyfriend, “Are you alright?” You asked, keeping your voice low as there was no sense in airing your concerns to the now bustling office.
Spencer’s smile faltered ever so slightly, “They were just kids. There have been kids before, but now…”
“Now you’re a dad,” you finished for him. “It’s not just something that you could see happening to someone else; it’s something you could see happening to yourself.” Pinching his side slightly, you smirked at him knowingly, “You know, your levels of empathy and sensitivity increase when you become a parent. Your brain adjusts to make yourself a better parent.”
Rolling his eyes slightly, Spencer raised his eyebrows at you, “You know, I vaguely remember telling you something very similar last week when you were crying at an ASPCA commercial.”
You reached up to ruffle his hair, “Nice try at sarcasm, babe, but you and I both know you never vaguely remember anything.”
“How did you know to come here? That I’d need to see her?” Spencer asked, watching as Penelope continued to parade around the BAU, now taking her up the stairs and through the roundtable room. “Was it a mother’s intuition?” He suggested, taking up a lighter tone.
Turning around, your eyes followed Garcia as she walked with Eleanor, “I was contacted by a concerned party.”
Spencer followed your gaze, “I’ll thank Garcia when she gives our baby back.”
You hummed, “Actually, it was Derek, he-“ Your voice cut off abruptly, “Oh, Penny, I told you she’d grab them!” You called from Spencer’s desk, but Garcia was already on her way to return Eleanor, holding one hand to her ear as she handed the baby back to Spencer.
nestedneons · 1 year ago
Text
By jilt with stable diffusion
Cyberpunk art commissions
Ko-Fi
My ai workflows
rennybu · 13 days ago
Note
Hi Andy! out of curiosity would you be able to give any tips on, and this is the best way I can think to describe this 💀, rendering the elf glow ears? Like the glow of a light source where skin is thin i guess?
It's something that I've been trying to figure out how to do for ages in procreate, and is something I've always really liked in yours as well as others work when it comes to elves! If not thank you anyway, take it easy
hi!! i'm so sorry this took a while to get to.... I drew up a little step by step including what layer modes i usually use, but it's not foolproof and will need some colour tweaks on a per-drawing basis until u find your stride with it.
the term for this is subsurface scattering!!! its how light penetrates translucent surfaces and bounces around inside/diffuses back out into a 'glow'.... that orange radiance is Blood and Cartilage.
some videos i've enjoyed on the topic (1) (2) (and there r many more to do with 3D rendering, if you're interested in going down the rabbit hole... i think i took off running with glowy ears after learning abt subsurface scattering in my 3D animation course back in 2018. ITS SO FUN)
i'm using my own shorthand for. everything i drew here. but i hope it makes sense for visualizing a quick workflow in procreate!!!!
i personally use a mix of layer modes, depending on the piece, usually overlay, screen, or colour dodge... It is honestly something i should devote some proper study time to (paint from observation without layer modes - its Good to be able to better understand colour interaction this way) - but this is how I've been doing it for the last few years!
transcript under the cut:
from left to right:
Ear and base colour :)
+ New layer Consider colour of blood and flesh beneath the skin. I start with pinks, light orange, yellow Less illumination <--> More
Set layer mode to overlay (or ur fav add mode...), adjust your colours based on skin, light source, stylization, etc. Merge down once happy!
Duplicate line art layer, lock + colour it in warm. + Light emphasis if u want (pop!) I like to use overlay mode but its not necessary. Adds a sense of depth and vibrancy.
base + soft brush + red line art Same idea of light/skin interaction, without using layer modes. Red for blood/flesh, orange for transition, yellow for thinnest part of cartilage.
When it (light) hits the surface... it scatters idk
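For anyone curious what the layer modes named above are actually doing to the pixels, these are the textbook blend-mode formulas for screen, overlay, and colour dodge — standard math, not a claim about Procreate's exact implementation — sketched in Python with channels as floats in 0..1.

```python
# Textbook blend-mode formulas (screen, overlay, colour dodge) written out with
# numpy. Channels are floats in 0..1; sample pixel values are placeholders.
import numpy as np

def screen(base, layer):
    # Brightens: the result is never darker than either input.
    return 1.0 - (1.0 - base) * (1.0 - layer)

def overlay(base, layer):
    # Darkens shadows, brightens highlights, keeps midtone contrast.
    return np.where(base < 0.5,
                    2.0 * base * layer,
                    1.0 - 2.0 * (1.0 - base) * (1.0 - layer))

def colour_dodge(base, layer):
    # Pushes lit areas toward white fast -- the strongest "glow" of the three.
    return np.clip(base / np.clip(1.0 - layer, 1e-6, None), 0.0, 1.0)

# e.g. a warm orange glow layer over a skin-tone base pixel:
base_px = np.array([0.78, 0.55, 0.48])
glow_px = np.array([0.90, 0.45, 0.10])
print(screen(base_px, glow_px), overlay(base_px, glow_px), colour_dodge(base_px, glow_px))
```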
pillowfort-social · 11 months ago
Text
Site Update - 2/9/2024
Hi Pillowfolks!
Today is the day. Post Queueing & Scheduling is finally here for everyone. Hooray! As always we will be monitoring closely for any unexpected bugs so please let us know if you run into any.
New Features/Improvements
✨ *NEW* Queue & Schedule - One of the most highly requested features has finally arrived at Pillowfort. Users can now effortlessly Queue or Schedule a post for a future time.  
Queue helps keep your Pillowfort active by staggering posts over a period of hours or days. Just go to your Settings page to set your queue interval and time period.
How to add a post to your queue: 
While creating a new post or editing a draft, click on the clock icon to the right of the “Publish” button and choose “Queue.” Then click “Queue” when you’re ready to submit the post.
Schedule assigns a post a specific publishing time in the future (based on your timezone you’ve selected in Account Settings). How to schedule a post: 
While creating a new post or editing a draft, click on the clock icon to the right of “Publish” and choose “Schedule.” Enter the time you wish to publish your post, click on “Submit” and then click “Schedule.” 
How to review your queued & scheduled posts: 
On the web, your Queue is available in the user sidebar located on the left side of the screen underneath “Posts.” (On mobile devices, click on the three line icon located on the upper left of your screen to access your user sidebar.)
Note: the “Queue” button will only display if you have one or more queued or scheduled posts.
A CAVEAT: It is not currently possible to queue or schedule posts to Communities. We do intend to add this feature in the future, but during development we determined that queueing & scheduling for Communities would require additional workflow and use-case work that would extend development time on a project that has already been delayed, so we decided to release queueing & scheduling for blogs only for now. We will add the ability to queue & schedule to Communities soon after the Pillowfort PWA (our next major development project) is complete.
✨ End of Year Fundraiser Reward Badges: End of Year Fundraiser Rewards Badges will begin to be distributed today. We'll update everyone when distribution is done.  
✨ End of Year Fundraiser Reward Frames: As a special thank you to our community for helping keep Pillowfort online we have released two very special (and cozy!) Avatar Frames for all users. 
As for the remaining End of Year Fundraiser Rewards - we will be asking the Community for feedback on the upcoming Light Mode soon. 
✨ Valentine’s Day Avatar Frame: A new Valentine’s Day inspired frame is now available!
✨ Valentine’s Day Premium Frames: Alternate colors of the Valentine’s Day frame are available to Pillowfort Premium subscribers. 
✨ Site FAQ Update - Our Site FAQ has received a revamp.  
Terms of Service Update
As of today (February 9th), we are updating our Terms of Service to prohibit the following content:
Images created through the use of generative AI programs such as Stable Diffusion, Midjourney, and Dall-E.
An explanation of how this policy will be enforced and what exactly that means for you is available here: https://www.pillowfort.social/posts/4317673
Thank you again for your continued support. Other previously mentioned updates (such as the Pillowfort Premium Price increase, Multi Account Management, PWA, and more) will be coming down the pipeline soon. As always, stay tuned for updates. 
Best, Staff
luminavt · 3 months ago
Text
Level-5, Fantasy Life:i and Generative AI Stable Diffusion.
Level-5, the developer of Fantasy Life: i, just announced a lot of delays for their upcoming games at Level 5 Vision 2024: To the World's Children.
In this presentation, a lot of the games showed off BEAUTIFUL and unique-looking art styles and character designs. They stand out from what a lot of current anime games are offering.
I watched it live on stream and my stream community enjoyed seeing it all. However the very next day?
We learned through this article posted above that the developer had started to embrace using Stable Diffusion, a form of generative AI, for the art assets in three of its games. Megaton Musashi, Yokai Watch, and Inazuma Eleven are shown in the official government presentation.
As someone who is very passionate about Fantasy Life i?
Seeing the company you grew up loving embrace a form of generative AI that collects data from original works without explicit consent is HEARTBREAKING.
However, I want to be as clear and accurate as possible.
There is very clear evidence that Level 5 is embracing Generative AI for the games listed in the video. There is no clear evidence that these techniques were used in the development of Fantasy Life: i. This post is being shared with you for your awareness.
Fantasy Life for the 3ds is one of the most magical games I've ever played.
The game had so much charm that I showed a minimum of 6 different friends, and upon just watching me play it? They immediately went to buy the game themselves.
It was so charming, so simple yet aesthetically pleasing that anyone could appreciate it for what it was.
This game was developed by Level-5.
The fact that Level-5 was the developers is what got my eye on this game in the first place. Ever since Dark Cloud 2 for the Playstation 2 I fell in love with what these developers can do.
Dark Cloud, Ni no Kuni, Rogue Galaxy, I fell in love with the developers ages ago and what they do meant a lot to me.
It feels awful that I cannot feel comfortable supporting the developer as a whole anymore.
I don't fault anyone if they choose to purchase the game still, because ultimately I know the game means a lot. Part of me still wants to experience the game.
However, it's clear that Level 5 is one of the developers who plan to fully integrate generative AI into their development cycle going forward, and I wouldn't be surprised if it's why they have so many delays in all their games, as they may be adapting to a new workflow.
As someone who heavily endorsed this game as a streaming vtuber, I felt it was only fair I spread this information. Thank you.
Link to the article will be on my following tumblr post for full context.
horzagobuchul · 1 year ago
Note
I love the work your doing with Ai!!! Can you tell us more about your process
Hello!
Thank you so much for the kind words~
My process is very iterative and never the same for any concept that I generate, to be honest...
I do most of my work in Stable Diffusion running locally on my PC, with the Automatic1111 webui, with minor touching up and such in Adobe Photoshop~ The actual checkpoints I use differ a lot depending on what look I'm going for and also kind of just how I feel at the time; but I have a few that are selectively trained on the kind of material I'm interested in.
Most projects begin with trying out a few prompts that generally describe the concept I'm going for, with bulk generation of images from random seeds just to find something that I like.
When I find something workable I try it out at different weights to see how the seed and model behave with various body shapes. If it works for enough iterations I can generate a couple of hundred frames that I then put together into an animation~
If the seed in question behaves very differently at low and high weights, I might have to dynamically change the prompt as the iterations progress~ Generally speaking this varies between every concept.
For single images I usually find a concept that I like using text to image, which I then refine using image to image generation. There's always some small part of the original image that turns out wonky~
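As a concrete illustration of that "bulk generate with random seeds, then iterate on a keeper" stage, here is a minimal local sketch with the diffusers library; the model name, prompt, counts, and the swept parameter are placeholders rather than the actual setup described above.

```python
# Sketch of the two stages: a shotgun pass over random seeds, then re-running a
# keeper seed while sweeping one parameter to build animation frames. Model id,
# prompt, counts, and the swept parameter are placeholders.
import random
from pathlib import Path

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "general description of the concept"

# Stage 1: bulk generation with random seeds, just to find something workable.
for _ in range(50):
    seed = random.randrange(2**32)
    image = pipe(
        prompt,
        generator=torch.Generator("cuda").manual_seed(seed),
        num_inference_steps=20,
    ).images[0]
    image.save(f"explore_seed_{seed}.png")

# Stage 2: keep one seed and sweep a parameter (CFG scale here) across frames.
Path("frames").mkdir(exist_ok=True)
keeper_seed = 123456789  # whichever seed survived stage 1
for frame, cfg in enumerate(x / 10 for x in range(30, 121, 3)):
    image = pipe(
        prompt,
        generator=torch.Generator("cuda").manual_seed(keeper_seed),
        guidance_scale=cfg,
        num_inference_steps=20,
    ).images[0]
    image.save(f"frames/frame_{frame:03d}.png")
```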
Hope that's kind of what you were asking for haha, I'm not very good at describing my workflow, and I've got multiple people asking me this~
katdbee · 2 years ago
Text
Character portraits, free for use as always.
More of the same from me, it's character portraits as practice on generating good hands.
Gonna talk process for a minute. There's a few ways to do it, but I get the best results out of setting the openpose controlnet to 'controlnet is more important' and adding a canny or lineart model that defines the figure further (always make sure it's on for at least the first half of the steps, if not the whole way). It absolutely will struggle with open shapes, such as where the top of the naginata is cropped from the input image, so the machine does its best to make something up. I got more weird weapons than I did mutant hands, and the mutant hands I did get were from turning on the lineart for only the last half of the steps. The depth model is great for creating a lush painterly effect in the generation. The more nets you stack the longer the generation takes, so keep that in mind.
Also turn your sampling steps DOWN. What you get in 10 steps will be enough to have a good idea of the generation and demonstrate hand consistency.
A good way to save time while figuring this out is to preprocess the image, save it with the little download icon in the preview window (click the Allow Preview checkbox if you don't see it), and then put that preprocessed image into the controlnet and set the preprocessor to none. That way it won't be re-preprocessing that image over and over, which cuts generation time down considerably.
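For comparison, stacking nets can also be expressed with the diffusers library's multi-ControlNet support; the sketch below assumes openpose plus lineart, with the already-saved maps fed in directly so nothing is re-preprocessed. The model IDs, weights, and guidance windows are assumptions, not the exact webui settings described above.

```python
# Sketch: two stacked ControlNets (openpose + lineart) over pre-made maps,
# using diffusers. Model ids, conditioning scales, file names, and the
# guidance windows are illustrative assumptions.
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

pose_map = load_image("doll_openpose.png")      # preprocessed once, saved, reused
lineart_map = load_image("doll_lineart.png")

controlnets = [
    ControlNetModel.from_pretrained(
        "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16
    ),
    ControlNetModel.from_pretrained(
        "lllyasviel/control_v11p_sd15_lineart", torch_dtype=torch.float16
    ),
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnets, torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "character portrait holding a naginata, painterly",
    image=[pose_map, lineart_map],
    num_inference_steps=10,                    # 10 steps is enough to judge the hands
    controlnet_conditioning_scale=[1.0, 0.8],  # pose weighted above lineart
    control_guidance_end=[1.0, 0.5],           # lineart active for the first half only
).images[0]
image.save("portrait_test.png")
```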
This is the doll I put into the preprocessor and the openpose that comes out. By preprocessing at a 2048 resolution and generating at 512x768, the details are kept much better than if Pixel Perfect is used; however, if it needed to preprocess and generate this openpose image every single time, it would still be cooking right now.
Then this is aaaaaa I don't remember, one of the lineart or canny or whatever preps that makes a rough invert sketch of the input. That got saved and then plugged in again and processed one more time to keep the most relevant edges.
This is also how I came to realize the machines struggle with open shapes. I'm not sure how I'll try to get around that yet.
slick-devon · 1 year ago
Note
Insanley good looking hunks you make! Must take hours. do you create them all from start?
It's a mix (and I should point out my starting points)... I either start with pure text of an idea, my own sketch, or a random photo off the web that's not necessarily the look I'm going for but is more about the staging and pose. Most of my images are pin-ups or portraits. Anything involving action or more than one person gets difficult. I'll bring it into Photoshop and nudge things around and correct fingers and limbs, and run it through the AI another time or two before finally polishing it, Lightroom-ish, for the final.
mrcatfishing · 6 months ago
Text
I've managed to build up an entire workflow of AI image generation.
I start with a large proprietary model like DALL-E or Midjourney, which are good at a lot of concepts but tend toward PG content boundaries and offer fewer knobs to control the specifics. This can get me to a good starting point, like the following image.
I then take that image and play around with it in Stable Diffusion's img2img generator, playing with the style LoRAs I've made until I get something I like. Often that includes pushing it into a properly mature image. In this case, though, I've kept it Tumblr safe.
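For concreteness, that second stage — img2img over the big-model output with a personal style LoRA applied — looks roughly like this with the diffusers library; the LoRA file, model ID, prompt, and strength are placeholders.

```python
# Sketch: local SD img2img pass over a DALL-E/Midjourney starting image with a
# style LoRA loaded. File names, model id, prompt, and strength are placeholders.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("loras", weight_name="my_style.safetensors")  # a locally trained style LoRA

start = load_image("midjourney_start.png").resize((512, 768))

image = pipe(
    prompt="same scene, described with the LoRA's own style keywords",
    image=start,
    strength=0.6,          # how far the result may drift from the starting image
    guidance_scale=7.0,
).images[0]
image.save("restyled.png")
```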
Whether or not this counts as an improvement is debatable, but I enjoy putting a spin onto the output that makes it more clearly my own.
hubr1s69 · 4 months ago
Note
i have so many questions like how did you do the hair cards ? how do you apply them? did you do the UVs in Zbrush? how was the retopology for the pants, especially around the folds? what program do you use for retopo? was sculpting the mesh of the sword and texturing it super hard??? i'm impressed with your work i need to learn so much more
Hi! this is a great tutorial going over the type of hair cards I used for this project: https://www.artstation.com/artwork/xD0bPm
To simplify the process universally:
1. Analyse your references and determine which type of strands make up the hairstyle you want to do.
2. Generate the textures in a program of your choice. I simulate the hair strands in Maya using XGen and bake the opacity and normals onto cards in Substance Painter, where I also do a simple diffuse and roughness map (think normal high-to-low-poly workflow).
3. Apply the textures to your cards in your 3D program and start placing them on your character in layers, starting from the lowest.
4. Set up a shader in your rendering program of choice and frequently test your groom. I'm using Marmoset Toolbag 4!
I did the retopo/UVs for everything in Maya since that's the program I was taught and am most comfortable in. I don't think ZBrush is great for UVs, but with plugins Blender comes close to the utilities Maya has!
Most of the retopology was based on the topology of the underlying body mesh since it's mostly tight-fitting items that need to deform exactly the same way to avoid clipping. The folds took a while to retopo and it's again mostly the same topology as the body underneath but adding detail/faces by using the cut tool along the flow of the folds without disturbing the overall edgeflow! :)
The sword was less sculpting than you would assume, I've started making my own IMM brushes to use for ornaments and similar things so it's mostly just placing things around and making it look good together! I found that doing ornaments that way leads to a cleaner result and it's easier to iterate, compared to attempting to sculpt that level of detail
The textures of the sword are still sort of early in the process, the bake is doing a lot at the moment and I want to add more signs of wear and damage to the metal as well as the hilt
Thanks for the questions! I love talking 3D so feel free to hit me up if you want more explanations, just keep in mind that I am a recent grad so there's a lot of things I myself am still learning!
redslug · 1 year ago
Note
Hi, do you know of any good up-to-date guides on getting stable diffusion/inpainting installed and working on a pc? I don't even know what that would *look* like. Your results look so cool and I really want to play with these toys but it keeps being way more complicated than I expect. It's been remarkably difficult to find a guide that's up to date, not overwhelming, and doesn't have "go join this discord server" as one of the steps. (and gosh github seems incredibly unapproachable if you're not already intimately familiar with github >< )
The UI I use for running Stable Diffusion is Invoke AI. The instructions in its repo are basically all you'll need to install and use it. Just make sure your rig meets the technical requirements: https://github.com/invoke-ai/InvokeAI
GitHub is something you'll need to suffer through, unfortunately, but it gets better the longer you stew in it. Invoke comes with SD already in it; you won't need to install it separately. I do inpainting through its Unified Canvas feature. The tutorial I watched to learn it is this: https://www.youtube.com/watch?v=aU0jGZpDIVc
The difference in my workflow is that I use the default inpainting model that goes with Invoke.
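Invoke's Unified Canvas handles the masking interactively, but under the hood inpainting is the same operation shown in this rough diffusers sketch: repaint only the white region of a mask. The model ID, file names, and prompt here are placeholders, not Invoke's internals.

```python
# Sketch of plain Stable Diffusion inpainting: the model repaints the white
# area of the mask and leaves the rest alone. Names here are placeholders.
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = load_image("scene.png").resize((512, 512))
mask = load_image("scene_mask.png").resize((512, 512))  # white = repaint, black = keep

result = pipe(
    prompt="what should appear in the masked region",
    image=image,
    mask_image=mask,
    num_inference_steps=30,
).images[0]
result.save("scene_inpainted.png")
```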
You can grab something fancier off of civit.ai if you feel like it. For learning how to train your own models you'll need to read this: https://github.com/bmaltais/kohya_ss Don't try messing with it before you get acquainted with Invoke and prompting in general, because this is the definition of overwhelming.
Hope this'll be a good jumping off point for you, I wish you luck. And patience, you'll need a lot of that.
nestedneons · 1 year ago
Text
By jilt with her stable diffusion workflow
Cyberpunk art commissions
Ko-Fi