Portrait generation in Stable Diffusion

I’ve been tinkering with Stable Diffusion for a couple of days and realized it’s a great tool for generating portraits for characters in NWN. The game’s 3D engine hasn’t aged well and the character models don’t look as beautiful as they used to, but they can still serve as great input images for image-to-image portrait generation. Just a couple of examples:



Another one:

So, I’m still learning, and the images above are just quick drafts to demonstrate the method. If you have good close-up screenshots of PC/NPC models and an idea of what kind of characters they should be (dark, smug, sarcastic, etc.) - send them my way and I’ll try to generate some portraits.




Really impressive!

I wonder if I could get Stable Diffusion running on this old PC w/ a GTX1050?

[Visits project’s website GitHub - CompVis/stable-diffusion: A latent text-to-image diffusion model]

“…the model is relatively lightweight and runs on a GPU with at least 10GB VRAM.”

Umm, that would be “no.”

Same for my GTX 1650 with just 4 GB.


Is there a list of supported GPUs for this project? The screenshots look awesome!

Well, I think you could try anyway. It supports older video cards if you use parameters like --lowvram and --always-batch-cond-uncond, though it needs a lot of tweaking and generation times will be much longer.
I’m not using the best rig myself - I had to leave my country because of the war, and my only computer for the last 6 months has been a corporate laptop with an RTX 3060 and 6 GB of VRAM. It’s not suitable for generating large images or batches and needed a lot of tweaking, but from what I’ve gathered from community discussions, some people are running even older cards with 2 GB of VRAM on board. It still needs some practical magic applied, though.
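For reference, those flags are from the popular AUTOMATIC1111 webui fork (an assumption on my part - the original CompVis scripts don’t take them), and they go on the launch command, something like:

```shell
# Hypothetical launch line for a 2-4 GB card. These AUTOMATIC1111 webui
# flags trade generation speed for lower VRAM usage; --medvram is the
# milder alternative to --lowvram for 4-6 GB cards.
python launch.py --lowvram --always-batch-cond-uncond
```

Which combination works best depends on the card, so expect some trial and error.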


From a complete beginner and non-techie: how hard is it to go from photos like these to creating NPCs from them in Blender, using the photos as textures? Would all you need be front, side and back pictures - and a lot more skill than I’ll ever have? :grinning: :cry:

Or is it another world of pain above that, involving ridiculous numbers of polygons, etc.?

While not impossible, it would be difficult. Just imagine doing it in real life: you’d just be making a stretched mask, when in reality you would need to remove all the skin from the head to put it on your sculpture.


Thanks for pointing out the details, @ribbed . I’ll go take another look w/ reference to the community forums to see how folks are configuring it for low VRAM.

I have another 4 GB video card in a partially refurbished state, as well as the rest of the salvaged system it came with. Once I finish refurbishing it, I’ll have a system I can dedicate to Stable Diffusion and let run undisturbed by other tasks.

Thanks also for sharing cool stuff with us during hard times!


Well, pretty hard. :slight_smile:
I’m not a 3D artist, but I work in the game industry and am pretty familiar with their methods and pipelines. Photos might be used only as a reference for geometry, and concept artists quite often draw characters in something similar to a T-pose to make life easier for their 3D peers. As for textures, they are almost never taken from the initial reference - at least not in our studio. Something similar is usually taken from a library or drawn from scratch.


Thanks. That’s interesting to hear. I was wondering how far away we are from programs that take in your picture and turn out a working 3D model for a PC in a game. Must be possible?

In fact does facial recognition not just identify thousands of points on your face which are a bit like the vertices in Blender?

Folks, I actually wanted to practice and generate some portraits for you in this thread. :slight_smile:
Maybe you are creating a module that needs some NPC artworks, or want a custom portrait for your PC - just post a cropped screenshot of your character, and I’ll try to generate an artwork for it. Characters with relaxed poses and without weapons in their hands are generally easier to work with.
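If it helps, here’s roughly what I do with the screenshots before feeding them to img2img - a minimal sketch, assuming the Pillow library, of center-cropping a shot to the square input SD expects (the function name and path are just illustrative):

```python
from PIL import Image


def prepare_for_img2img(path, size=512):
    """Center-crop a screenshot to a square, then resize it to size x size."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    side = min(w, h)                      # largest square that fits
    left = (w - side) // 2
    top = (h - side) // 2
    img = img.crop((left, top, left + side, top + side))
    return img.resize((size, size), Image.LANCZOS)
```

The tighter the crop around the character’s head and shoulders, the more detail SD has to work with.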

I spent some more time with SD and merged a neat model for generating NWN portraits. Here are some new results:



Really wish I knew how to do this myself, but GitHub always confuses the heck out of me. I often need a lot of portraits, and being able to tinker away like you do would be fantastic. Sometimes I just wish I was a lot smarter.

Hi @ribbed I had been thinking to take you up on this from your first post. Let’s make it interesting/challenging (maybe) - a PC w/ her familiar! Here is a pic of one of my “swiss-army knife” characters that I will often use when first checking out a low-level, “for good characters” module from the vault.

This is Scotoma the Scintillating, female gnome Cleric 1/Illusionist 1 and her faithful side kick (familiar) the pixie Dustine d’Wind.

For characteristics, I’d describe Scotoma as “cute, prankster, charismatic.” Maybe add “nice” or “good” if the character ends up a bit “dark,” attitude/presentation wise? For Dustine I’d describe her as “determined, roguish, perceptive” and perhaps if it matters, “pixie.”

I threw this together really quickly, image-wise, so I’ll try harder if you like, particularly with a better image or a better understanding of how to “instruct” Stable Diffusion.


My thanks to @Tarot_Redhand who has answered my query about how close we are to pictures to 3D object in the “Let’s Drool” thread above. I’m really impressed by the quality of the textured elephant !

Wait a minute. It won’t be above any more as I’ve replied here moving it to the top . . . :grinning:

Here you go, hopefully that’s what you had in mind.

I moved them to a forest setting because a pixie made more sense among the trees, and it allowed more interesting lighting.

Frankly, generating an artwork for two predefined characters is a pain: it involves some work in external image editors, and you basically generate the two characters on their own and then combine them into one scene. My laptop is far from high-end and I constantly run out of VRAM during image generation if I have Photoshop open in the background, so the whole process involves constantly relaunching SD or PS, rebooting and praying. Still, it was a great challenge. :slight_smile:
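For the curious, the “combine them into one scene” step doesn’t strictly need Photoshop - a rough sketch, assuming Pillow, of pasting a transparent character cutout onto a background (the function name is just illustrative):

```python
from PIL import Image


def combine_characters(background, cutout, position):
    """Paste a character cutout (with alpha transparency) onto a background.

    The cutout's own alpha channel is used as the paste mask, so only its
    opaque pixels end up on the scene.
    """
    scene = background.convert("RGBA")
    layer = cutout.convert("RGBA")
    scene.paste(layer, position, mask=layer)
    return scene
```

The fiddly part in practice is cutting the second character out cleanly and matching the lighting, not the paste itself.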


Awesome and thank you! This works really well and the forest setting was a good choice.

I know next to nothing about graphics myself, but your description of the workflow does match my own experience with doing basic things. I often bounce back and forth between two programs like GIMP & Inkscape, for example.

Sorry … to be direct … but you set up an Anaconda environment, download the GitHub project to your laptop and use img2img.py to create all these images?

I’m using the Stable Diffusion webUI, so no direct script launches. Other than that - yes, just fetch the most recent version from GitHub and download or merge some SD models.


Folks, I’m really looking forward to generating some more portraits based on your characters. :slight_smile:
It’s something akin to cooking - kind of okay if you do it just for yourself, but way more satisfying if you cook for someone else. So just drop screenshots of your characters with a short description (class, age, alignment, personality traits, etc.), and I’ll try to cook something for you. Preferably without familiars. :smiley:
