Character consistency in AI-generated artwork is no joke, especially in illustrations for novels, comics, and more. We’re talking about the art of keeping characters looking uniformly themselves from one scene to the next. Sounds simple? It’s trickier than you’d think! Here’s a fun fact: with DALL-E 3, even a tiny prompt tweak can produce a dramatically different image.
Now, I’ve heard folks suggest that adding long character descriptions, using specific names, or dialing in a ‘seed’ number will fix things. Spoiler alert: these tweaks do help, but only marginally; on their own, they won’t lock a character’s look in place.
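To make the "long character description" trick concrete, here is a minimal sketch: keep one canonical character sheet and prepend it verbatim to every scene prompt, so the model sees identical character details each time. The character "Mira" and the scenes are invented for illustration, not taken from any real workflow.

```python
# Reusable character sheet: every prompt starts from the same
# fixed description, which is the core of the consistency trick.
CHARACTER = (
    "Mira: a woman in her early 30s, short copper-red hair, "
    "green eyes, a thin scar over her left eyebrow, wearing a "
    "worn brown leather jacket"
)

def build_prompt(scene: str) -> str:
    """Combine the fixed character sheet with a scene description."""
    return f"{CHARACTER}. Scene: {scene}"

prompts = [
    build_prompt("she reads a map under a streetlamp at night"),
    build_prompt("she runs across a rain-soaked rooftop"),
]

for p in prompts:
    print(p)
```

Each resulting string would then be sent to the image model (with the official `openai` Python SDK, that call is `client.images.generate(model="dall-e-3", prompt=p)`). Note that, at the time of writing, the DALL-E 3 API exposes no seed parameter, which is part of why seed-based tricks are so unreliable.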
The challenge is even more intense when we’re dealing with images of real people. Recognize Brad Pitt from a slightly raised eyebrow? Yep, we’re on that level of complexity. But fret not, dear reader! We’ll dive deep, beginning with live-action character consistency and then tiptoeing into animated territory.
Before we start, I’d like to introduce a custom GPT I created that makes DALL-E far more powerful. It has these features:
- It can churn out 4 consecutive images.
- It cleverly sidesteps DALL-E’s copyright constraints.
- Each image is introduced by a unique Midjourney prompt…