I’m using Stable Diffusion to generate portraits, but I consistently get distorted hands in my results. Here’s an example:
[Image: A professional-looking portrait of a woman in business attire, but her left hand has six fingers and the right hand appears twisted at an unnatural angle]
My setup:

- Model: Stable Diffusion 1.5
- Sampling steps: 30
- CFG scale: 7
- Negative prompts: “bad anatomy, malformed hands, extra fingers”
- Size: 512x512 pixels
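For reference, here is roughly what my generation call looks like, using the Hugging Face diffusers library. The model id and exact arguments are my best reconstruction of the setup above, not necessarily my exact script:

```python
# Settings matching my setup above.
PARAMS = {
    "num_inference_steps": 30,  # sampling steps
    "guidance_scale": 7.0,      # CFG scale
    "width": 512,
    "height": 512,
}

NEGATIVE_PROMPT = "bad anatomy, malformed hands, extra fingers"


def generate(prompt: str):
    """Run one SD 1.5 generation; needs a GPU and the model weights."""
    # Heavy dependencies imported lazily so the sketch can be read
    # without torch/diffusers installed.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    return pipe(prompt, negative_prompt=NEGATIVE_PROMPT, **PARAMS).images[0]
```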
What I’ve tried:

- Adding “perfect hands, five fingers” to the positive prompt
- Increasing sampling steps to 50
- Using hand-related negative prompts
The face and body proportions come out well, but the hands remain problematic. Is this a known limitation of the model, or am I missing something in my approach? Which specific techniques or model versions work best for anatomically correct hands?
Tags: ai-image-generation, stable-diffusion, computer-vision, deep-learning, image-quality
