Artificially Generated Tattoo

Vasily Betin
Mar 12, 2020

This is going to be a simple description of the process of training a StyleGAN2 model on tattoo images. Not much about the science behind it, not much detailed description: just the main steps and the reasoning.

Why?

I love tattoos. I like to use my body as a canvas for the artists I meet in my life. I usually have just one request: I don't want to see the image before it is made. If I choose an artist, I trust their work and style, and I want to give them full freedom to do whatever they want without any supervision from my side. This way I even got my back tattooed in 9 hours by two artists. No sketch, everything directly on me, freestyle.

As I work a lot with technology and generative art, why not put a generative image on me? Why not give this freedom to an artificial artist? In most cases they will follow an algorithm programmed by a person. That is how I got my first generative tattoo, based on one of my first generative works from 2016, when I started to move in this direction.

So, the next step was to get a tattoo from an artificial artist that does not really follow an algorithm, but has some freedom of action. Here come ML (machine learning) image generation models. One of them is StyleGAN, which allows you to generate quite detailed, high-resolution images that are sometimes hard to distinguish from real ones.

To do it, we need to deal with a few problems:

  • How do we train the model?
  • Where do we get data for the initial training?
  • What platform do we train on without huge cost and time?
  • How do we keep the result different from what a human artist would do?

Where to get data?

First of all, to train an ML model we need some initial data. For StyleGAN2, which I chose, we need a reasonably big data set. If it is too small, the model will converge to one specific style, can be overtrained on that style, and can get stuck without improving. So, ideally I needed a few thousand tattoo images. But where to get them?

Data searching and parsing

Where is the biggest collection of images now? Right, Instagram can provide you with an almost infinite data set of images. All you need is to find the right tag for the data you are looking for.

Jumping a bit ahead: Instagram proved to be the worst as a clean data set. No matter what tag you search for, you will get thousands of selfies, advertising and other irrelevant images. Thanks, bots and humans. So, my second source of data was Pinterest. Pinterest, I can say, has good-quality images without all this social-media garbage.

So, we have a data source now. But how do we get the images?

Prepare data

For this purpose I wrote two simple image scrapers:

They can scrape images by hashtag. That is how I got my 20k tattoo images.
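The two scrapers themselves are not shown here, but the core of any such scraper is pulling `<img>` URLs out of fetched pages. Below is a minimal stdlib sketch (the names are my own, not from the original scrapers; real Instagram and Pinterest pages load images via JavaScript, so in practice you would drive a headless browser or an API rather than parse raw HTML):

```python
from html.parser import HTMLParser

class ImgSrcParser(HTMLParser):
    """Collects the src attribute of every <img> tag on a page."""
    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.urls.append(src)

def extract_image_urls(html: str) -> list:
    """Return all image URLs found in an HTML document, in order."""
    parser = ImgSrcParser()
    parser.feed(html)
    return parser.urls
```

Each returned URL would then be downloaded and saved under a running index to build the raw data set.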

But, as I said, there is a lot of noise in this data. Cleaning was the longest part of this work: it took me two days to clean the data set down to around 11k images.

I removed:

  • selfies
  • images not relevant to tattoos
  • advertising
  • tattoo flashes (multiple tattoos in one image, which can confuse the network and slow down both the training and the result I wanted to get)
  • hyper-realistic and overly detailed tattoos (again, they would confuse the network and could hurt learning; also, I was not looking for hyper-realistic tattoo images as output)
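The cleanup itself was manual, but one small part of it can be automated: hashtag scrapes tend to produce exact duplicate files, which are easy to detect by hashing. A minimal sketch (the function name is my own, not from the original workflow):

```python
import hashlib
from pathlib import Path

def find_duplicates(folder: str) -> list:
    """Return paths of files whose bytes exactly match an
    earlier file in the folder (sorted by name)."""
    seen, dupes = {}, []
    for path in sorted(Path(folder).glob("*")):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in seen:
            dupes.append(path)
        else:
            seen[digest] = path
    return dupes
```

Near-duplicates (re-posts with different compression) would still need the manual pass, since their bytes differ.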

StyleGAN2 and network training

What is StyleGAN2?

Here we come to the actual training part. It is not so hard; it requires some basic understanding of Python, some reading of code, and a bit of "Google it". As a platform I chose Google Colab, which gives access to a quite good GPU instance for training at a cost of $10 per month (Pro version) or even for free (with limited time and lower-level GPUs).

I chose the StyleGAN2 network, as it has shown amazing results in different generative image tasks (you can google lots of examples), and this git repository already has code optimized for Colab. Thanks, community!

Next step: just follow the Colab file, with minor changes for my model, and prepare the data set. I am not going to write instructions here; it is well documented.
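One data-preparation step worth sketching: StyleGAN2 expects square images of a single power-of-two resolution, so scraped images of mixed sizes need a center crop and resize first. A minimal sketch assuming Pillow is installed (the function name is my own):

```python
from PIL import Image  # assumes Pillow is installed

def center_crop_resize(path: str, size: int = 256) -> Image.Image:
    """Square center-crop, then resize. StyleGAN2 needs square
    power-of-two images at a single resolution."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    img = img.crop((left, top, left + side, top + side))
    return img.resize((size, size), Image.LANCZOS)
```

The resulting folder of uniform images is then packed into the repository's training format with its dataset tool.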

256 model training

The next step was the actual training. I started with a 256x256px network, first, to test out the result in the shortest possible time. I thought that if it showed results, I would train at a higher resolution.

These are some results of the 256px model after around one week of training.

I was surprised by the level of detail and the interesting surrealistic results. The video below shows the whole training process for this part.

During this time I made some changes to the data set (added more images and augmented them with rotations and flips to get a better result).
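A flip augmentation like the one mentioned above can be sketched in a few lines, assuming Pillow is installed (the function name and file layout are my own, not from the original pipeline):

```python
from pathlib import Path
from PIL import Image  # assumes Pillow is installed

def augment_folder(src: str, dst: str) -> int:
    """Write a horizontally flipped copy of every .jpg in src
    into dst; returns the number of copies written. Mirroring
    roughly doubles the data set without changing its style."""
    out = Path(dst)
    out.mkdir(parents=True, exist_ok=True)
    n = 0
    for path in Path(src).glob("*.jpg"):
        img = Image.open(path)
        img.transpose(Image.FLIP_LEFT_RIGHT).save(out / f"flip_{path.name}")
        n += 1
    return n
```

Rotations work the same way via `Image.transpose` or `Image.rotate`, though for tattoos only small angles make sense, since upside-down designs are rare in the real data.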

Upscale and upscale

Time for a higher-resolution model. But should I train my model from the beginning again, after a week of progress, and with the higher-resolution model training more slowly (about 1.5 times slower)? Or is there some other trick?

Google helps. I found a script for up-scaling an existing network. Again, thank you, community. Minor changes to adapt it to StyleGAN2, and it works.
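The actual up-scaling script is not reproduced here, but the core idea is weight surgery: copy every tensor from the old checkpoint whose name and shape still match into the new, larger model, and leave the newly added high-resolution layers at their fresh initialization. A toy sketch with plain dicts and nested lists standing in for checkpoints and tensors (all names are my own):

```python
def shape(t):
    """Shape of a nested-list 'tensor', e.g. [[1,2],[3,4]] -> (2, 2)."""
    s = []
    while isinstance(t, list):
        s.append(len(t))
        t = t[0]
    return tuple(s)

def transfer_weights(old: dict, new: dict) -> dict:
    """Copy tensors from the old checkpoint wherever the layer name
    exists in the new model and the shapes agree; everything else
    (the new high-resolution layers) keeps its fresh initialization."""
    merged = dict(new)
    for name, w in old.items():
        if name in new and shape(new[name]) == shape(w):
            merged[name] = w
    return merged
```

This explains the behavior described below: the transferred layers already know the data, while the fresh layers start from noise and need time to catch up.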

As you can see, at the beginning the colors went crazy, and after a few hours of training it looked like the network was getting worse and worse: pink turning to green, and a totally noisy image.

Just after a night of training, the network got back to quite good results. Later I figured out that with the green it was trying to get rid of the pink color but went too far, so it needed to unlearn the green again. The same happened with details: it descended into noise while trying to catch up with the new level of detail of the high resolution, but did not pay attention to the more abstract structures. After a while it got back to normal.

Again, very interesting results and level of detail. We see lots of faces and flowers (people in the original data like to get faces, flowers and skulls). By combining them, the machine got to a nice result.

So, up-scaling works. One week of training, interesting results, and the decision to go higher.

One more week of training for 1024 resolution. Pretty much the same process.

Result

So, now we are here, looking at the results. I generated around 2k images and picked some of them. Too many flowers? Delete. Too natural? Delete. I wanted something original, something no human artist would do. Here is my small pick.
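When generating candidates like this from a StyleGAN-family model, one knob directly controls the originality/quality trade-off: the truncation trick, which pulls each sampled latent toward the average latent. A psi below 1 gives cleaner but more typical images; a psi near or above 1 gives stranger, more "original" ones. A minimal sketch of the operation itself (the function name is my own):

```python
def truncate(z, z_mean, psi):
    """StyleGAN 'truncation trick': interpolate a sampled latent
    toward the average latent. psi=1 leaves z unchanged; psi=0
    collapses every sample to the average image."""
    return [m + psi * (zi - m) for zi, m in zip(z, z_mean)]
```

Hunting for images "no human artist would do" corresponds to sampling with a relatively high psi and then curating by hand, which is exactly the generate-2k-and-pick workflow described above.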

And finally, thanks to my girlfriend (by the way, she is a tattoo artist with a pretty unique style), I made it happen.

It is probably the first tattoo painted by an artificial agent without any human input or supervision.

Runway model

And thanks to the RunwayML community, and to Gene Kogan especially for the tutorials, I have now released this model for you on RunwayML, so you can try it and get your next tattoo.

If you actually tattoo an image generated by the model, please show me the result; I am very curious about it.

If you are curious about more of my works and experiments (I do not write much), you can find them on Facebook, Twitter or Instagram.
