Generating a new profile picture with AI

Jimmy Longley
5 min read · Apr 25, 2023


I’ve recently begun a new job search, and am going through the process of refreshing my online presence.

One snag is that my profile photo is now almost 8 years old, and I’m looking a little too baby-faced to keep using it in good conscience. I hate getting my picture taken, and I don’t want to have to do a proper photoshoot in order to get a new professional-looking headshot.

Fortunately, this seemed like a great excuse to start messing around with all of the amazing new AI art generation tools that have been emerging over the last year, such as DALL-E 2, Stable Diffusion, and Midjourney. My goal was to make something professional, but stylized enough that it'd be recognizable as AI-generated.

I set out following Jake Dahn’s fantastic set of tutorials.

I won’t go into detail on the technical process; if you want to understand that, read Jake’s blog instead. Here’s how it went:

First, I took around 20 photos of myself. I tried to get some with different lighting, clothing, and angles. I didn’t bother cutting out the backgrounds as in the tutorial, but it didn’t seem to affect the final result too much.

I used the images to train a custom LoRA (a small set of fine-tuned weights) using this repl. It took just a few clicks and a few minutes of training, and it gave me a link to a weights file that I could turn around and plug into Stable Diffusion.
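I did all of this through the repl’s UI, but for the curious, plugging a trained LoRA file into Stable Diffusion looks roughly like this with Hugging Face’s diffusers library. This is just a sketch: the model ID, file name, and trigger token here are placeholders, not what the repl actually uses.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a base Stable Diffusion checkpoint (SD 1.5 shown as an example).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Attach the LoRA weights produced by training -- the file name is a
# placeholder for whatever file the training repl hands back.
pipe.load_lora_weights(".", weight_name="my_face_lora.safetensors")

# The trigger token you trained on (hypothetical here) goes in the prompt.
image = pipe("portrait of jimmylongley person, professional headshot").images[0]
image.save("headshot.png")
```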

With Stable Diffusion 1.5 out, I’m sure I could have gotten better results by seeking out a newer model, but I wanted to see if I could do this whole project in a couple of hours without having to download any software. Instead, I used the Stable Diffusion 1.3 model available at this repl.

The meat of the project was trying to come up with good prompts. I tried a few dozen variations and found that around 1 in 10 generated images was something I could use. After around an hour, I had created several new headshots in various styles that had an acceptable likeness.

If you’re going to try this, my main suggestion is to generate a lot of images from each prompt before you discard it. I had to wade through plenty of nightmare fuel to find these good results, but with enough images, you can sift out the good ones relatively quickly.
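If you’re scripting this rather than clicking around a repl, the “generate in bulk, cull later” workflow is a simple loop. A minimal sketch, assuming the `pipe` object from the previous snippet:

```python
# Generate many candidates per prompt, save them all, and cull by hand.
prompt = "portrait of jimmylongley person, oil painting, professional headshot"

for batch in range(5):
    # num_images_per_prompt produces several candidates per call.
    images = pipe(prompt, num_images_per_prompt=4).images
    for i, image in enumerate(images):
        # Save everything; deleting duds is much faster than regenerating.
        image.save(f"candidate_{batch:02d}_{i}.png")
```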

Nightmare fuel

Ultimately, I landed on this image, which had the best likeness of my face in a noticeable but subtle painterly style.

There was only one problem. What is up with that sweater?

No problem though; this was an opportunity to try another technique: AI-assisted inpainting. It’s 2023, after all. If I don’t like the shirt, I’ll just have AI draw me a new one. So, back to the repl.

First, I needed to whip up a quick layer mask, so I hopped into my favorite web app, the incredible web-based Photoshop clone PhotoPea, and masked off the shirt by hand.

Unfortunately, my results from this method were… inconsistent.

Despite some pointed negative prompting, all it wanted to do was create a somehow even uglier sweater!
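For reference, here is roughly what the inpainting step looks like in diffusers; the repl’s internals may differ, and the file names and prompts are illustrative. The convention is that white pixels in the mask get repainted and black pixels are kept.

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Inpainting sketch: image + mask + prompt in, repainted region out.
inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

portrait = Image.open("headshot.png").convert("RGB")
# The mask drawn in PhotoPea: white where the sweater should be replaced.
mask = Image.open("sweater_mask.png").convert("L")

result = inpaint(
    prompt="plain crew-neck t-shirt, simple fabric",
    negative_prompt="sweater, knitted, wool, busy pattern",  # the pointed negative prompting
    image=portrait,
    mask_image=mask,
).images[0]
result.save("new_shirt.png")
```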

Finally, I compromised and decided to move forward with this relatively basic t-shirt design, which I then dumped back into PhotoPea to recolor manually with a hue and saturation adjustment layer.

Lastly, I wanted to see if I could do something about the background. This is the type of thing Stable Diffusion is great at, and it quickly gave me many usable options via prompts such as “simple stylized painting abstract colorful digital painting”.

Then, it was back to PhotoPea to combine the layers.
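I did the compositing by hand in PhotoPea, but the same layer math is easy to script. A minimal sketch with Pillow, assuming you have a mask of the subject (file names here are placeholders):

```python
from PIL import Image

# Layer compositing, equivalent to stacking masked layers in PhotoPea.
portrait = Image.open("new_shirt.png").convert("RGB")
background = Image.open("abstract_background.png").convert("RGB").resize(portrait.size)

# Subject mask: white where the person is, black where the old background was.
subject_mask = Image.open("subject_mask.png").convert("L")

# Paste the subject over the generated background through the mask.
final = Image.composite(portrait, background, subject_mask)
final.save("final_profile_picture.png")
```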

Given that it took around 2 hours of work and around $3 of GPU time, I’m thrilled with how it turned out. Ultimately, I probably won’t be using this on LinkedIn, but it’s still pretty cool. What do you think?
