Make money using NFT + AI | GAN image generation

Shyam BV · Published in Code Sprout · 7 min read · May 9, 2021

Introduction

If you are new to NFTs, please read the articles below to understand them.

  1. What is NFT? | Make money using NFT + AI
  2. Getting started with Opensea. Reduce the gas cost
  3. Create art images using AI. Feature Optimization
  4. Create different images within AI. Use GAN’s for image creation
Image generated by author using Stylegan2-ADA

In this article, we will see how to create new images using a GAN. If you are new to GANs, you can read more about them here. We will mainly discuss how to generate art using Stylegan2-ADA. The goal is to create contemporary art, mint it as an NFT, and sell it on Opensea.

Stylegan2-ADA Quick Intro

Stylegan2-ADA (SGA) is the latest and greatest version of StyleGAN from NVIDIA. It generates fake images that are very hard to distinguish from real ones. You can check out the implementation at https://github.com/NVlabs/stylegan2-ada-pytorch.

It takes a lot of computational resources to train SGA. If you have powerful GPUs, you can try to train it from scratch. Otherwise, you can perform transfer learning and fine-tune a custom model to generate art according to your needs. Another option is to train the model in the cloud or use Google Colab. Google Colab is the easier option to get started; however, I would recommend the Pro version due to its longer session time.

Getting Data

One of the most significant starting problems is getting the input data for training. As we will mint our art as NFTs, it should be unique and indistinguishable from real images. Below are the options:

  1. Flickr — Download and use
  2. Unsplash — Download and use
  3. Scrape the images — Be kind and cautious
  4. Use your own images — Of a particular type

Flickr:

You can download Flickr images through the Python API using an API key. You can select a specific category of images and download them. You can check this link for downloading images using Python.
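As a rough sketch of this flow, the snippet below uses the `flickrapi` package to search for photos and build their static download URLs. The API key, secret, and search query are placeholders you would replace with your own; the static-URL pattern is Flickr's documented `live.staticflickr.com` format.

```python
def photo_url(photo):
    """Build the static image URL from a Flickr photo element's attributes
    (works for anything with a .get() accessor, e.g. an ElementTree element)."""
    return "https://live.staticflickr.com/{server}/{id}_{secret}_b.jpg".format(
        server=photo.get("server"), id=photo.get("id"), secret=photo.get("secret"))

def search_urls(query, limit=100):
    """Collect download URLs for freely licensed photos matching a query."""
    import flickrapi  # pip install flickrapi

    FLICKR_KEY = "your-api-key"        # placeholder: your Flickr API key
    FLICKR_SECRET = "your-api-secret"  # placeholder: your Flickr API secret

    flickr = flickrapi.FlickrAPI(FLICKR_KEY, FLICKR_SECRET)
    urls = []
    # license filter keeps only reusable (CC-style) images
    for photo in flickr.walk(text=query, license="4,5,9,10", per_page=100):
        urls.append(photo_url(photo))
        if len(urls) >= limit:
            break
    return urls
```

You can then download each URL with `requests` and save the bytes to disk, as shown in the scraping section below.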

Unsplash:

Unsplash is a website similar to Flickr, with a broader range of images. Below is a small code snippet to get images from Unsplash.
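A minimal sketch of such a snippet, assuming you have registered for a developer access key at unsplash.com/developers; the search endpoint and the `results`/`urls` response shape follow the public Unsplash API.

```python
import requests

ACCESS_KEY = "your-unsplash-access-key"  # placeholder: your Unsplash access key

def extract_urls(payload, size="regular"):
    """Pull image URLs of one size out of an Unsplash search response."""
    return [item["urls"][size] for item in payload.get("results", [])]

def unsplash_urls(query, pages=3, per_page=30):
    """Collect image URLs from the Unsplash search API, page by page."""
    urls = []
    for page in range(1, pages + 1):
        resp = requests.get(
            "https://api.unsplash.com/search/photos",
            params={"query": query, "page": page,
                    "per_page": per_page, "client_id": ACCESS_KEY},
            timeout=30)
        resp.raise_for_status()
        urls += extract_urls(resp.json())
    return urls
```

Each returned URL can then be fetched with `requests.get` and written to disk for the training set.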

Scrape the images

Often, most images will not be available as an easy download, so you might need to scrape them. I would highly recommend not overloading the website. Also, download only images that you can use freely (although you are only going to use them to train your model).

There are two types of scraping. One uses Beautiful Soup, which is easy and well known. However, most modern websites are built with React and are not easy to scrape that way, so you might need to use a Chrome driver to download the images. Here is a small snippet of the code, which might change depending on the website you scrape.

Code for scraping
Code for saving the images
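A minimal sketch of both steps, assuming a static page; the selectors and base URL are placeholders that will change per site. For a React-heavy site you would render the page first (for example via Selenium's `driver.page_source`) and feed the resulting HTML to the same parser.

```python
import os
import requests
from bs4 import BeautifulSoup

def extract_image_urls(html, base="https://example.com"):
    """Scraping step: pull absolute <img src> URLs out of a page's HTML."""
    soup = BeautifulSoup(html, "html.parser")
    urls = []
    for img in soup.find_all("img"):
        src = img.get("src")
        if not src:
            continue  # skip <img> tags without a src attribute
        if src.startswith("//"):
            src = "https:" + src       # protocol-relative URL
        elif src.startswith("/"):
            src = base + src           # site-relative URL
        urls.append(src)
    return urls

def save_images(urls, out_dir="images"):
    """Saving step: download each URL into out_dir with a numbered filename."""
    os.makedirs(out_dir, exist_ok=True)
    for i, url in enumerate(urls):
        resp = requests.get(url, timeout=30)
        if resp.ok:
            with open(os.path.join(out_dir, f"img_{i:05d}.jpg"), "wb") as f:
                f.write(resp.content)
```

Adding a short `time.sleep` between requests is a simple way to stay polite to the site you are scraping.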

Use your own images

This is another way to get training images. If you are a photographer or have a collection of images, you can use them freely. One caveat is the type of images in your collection. If you are going to generate nature-type images, you should have around 1,000 of them when using a pre-trained model. SGA needs far fewer images than the previous versions, but to train well I would still use at least 600–1,000 images.

Training

Pre-processing

Before training, you need to pre-process the images. SGA accepts 1024x1024 images as input and generates similarly sized images as output, so you can resize the images using the code below.

Resize images
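A minimal resize sketch using Pillow: it center-crops each image to a square and resizes it to 1024x1024, the input size SGA expects. The folder paths are placeholders.

```python
import os
from PIL import Image

def resize_square(src_path, dst_path, size=1024):
    """Center-crop to a square, then resize to size x size."""
    img = Image.open(src_path).convert("RGB")
    w, h = img.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    img = img.crop((left, top, left + side, top + side))
    img = img.resize((size, size), Image.LANCZOS)
    img.save(dst_path, quality=95)

def resize_folder(src_dir, dst_dir, size=1024):
    """Resize every image in src_dir into dst_dir."""
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        if name.lower().endswith((".jpg", ".jpeg", ".png")):
            resize_square(os.path.join(src_dir, name),
                          os.path.join(dst_dir, name), size)
```

The resized folder can then be zipped (e.g. `./datasets/resized_art_images.zip`) and pointed to by the `dataset_path` variable used in training.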

To start training from scratch, you can run the training without any pre-trained model. Check the newsletter to get the code for it. Once you prepare your input dataset, point to it in the dataset_path variable. The resume_from parameter will resume fine-tuning from a previous checkpoint.

#required: Input dataset images
dataset_path = './datasets/resized_art_images.zip'
#resume_from = 'ffhq1024'
aug_strength = 0.0
train_count = 0
mirror_x = True

#optional: you might not need to edit these
gamma_value = 50.0
augs = 'bg'
config = '11gb-gpu'
snapshot_count = 8

#Scratch training
!python train.py --gpus=1 --cfg=$config --metrics=None --outdir=./results --data=$dataset_path --snap=$snapshot_count --augpipe=$augs --initstrength=$aug_strength --gamma=$gamma_value --mirror=$mirror_x --mirrory=False --nkimg=$train_count

#Resume from previous checkpoint
!python train.py --gpus=1 --cfg=$config --metrics=None --outdir=./results --data=$dataset_path --snap=$snapshot_count --resume=$resume_from --augpipe=$augs --initstrength=$aug_strength --gamma=$gamma_value --mirror=$mirror_x --mirrory=False --nkimg=$train_count

There are different checkpoints that we can use. Below are some of them:

  1. ffhq1024 — Provided by NVIDIA. Use it when starting with a new dataset, so you do not start from scratch. Usage: 'ffhq1024'
  2. Wikiart — A custom model trained by a community user. Converts anything to art. Usage: './pretrained/wikiart.pkl'
  3. metfaces — Created by NVIDIA to generate faces. Use it if you want to generate new faces. Usage: https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metfaces.pkl
  4. Cifar — An NVIDIA model for generating general images from the CIFAR dataset.

There are many more models which we can use. I have also trained my custom model for art generation. Please leave a comment if you require the model.

Training needs to run for a decent amount of time. On a Tesla V100 GPU, it took me around 3 days to train a satisfactory art model, so depending on the dataset and GPU, 3–5 days of training is required. If you use Colab, the session will disconnect depending on your plan; you then need to update the resume_from parameter and start again.

Generate Images for NFT

After a satisfactory level of training, you are now ready to create your masterpiece like Picasso! Here are some points to consider:

  1. Generated images can turn out completely different from what you need. If so, check the error points and train again.
  2. Image generation quality depends purely on training: junk in, junk out.
  3. You can generate images from the last checkpoint of your training.
  4. Generated images may be purely random, and you might not be able to recreate the same image.
  5. The seed parameter will somewhat help in recreating an image that was generated before.
  6. Image generation has different parameters; I will list some of them.

The below code is used to generate 500 images using the seed value.

# Plain generate images. Update the seed number if required
!python generate.py --outdir=/content/drive/MyDrive/generate_images/art_images_trained  --trunc=1  --seeds=1-500 --network=/content/drive/MyDrive/base_colab-sg2-ada-pytorch/stylegan2-ada-pytorch/results/00003-resized_art_images-mirror-11gb-gpu-gamma50-bg-resumecustom/network-snapshot-000080.pkl

Below are a couple of the images which the model created.

Images created from the model
Images created from the model

SGA also allows you to play around with the model during image creation and create a video out of it. Below are one-liner definitions for a quick understanding. If you require more details, please let me know in the comments.

Truncation traversal — How much the w vector can change compared to the average. The vector w is generated from the z latent space.

Interpolations — Perform smaller changes to the z or w vector and create images.

There are other types, such as projection, noise loop, and circular loop. Those are provided to create different videos and images; however, I feel truncation and interpolation should provide enough images.
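Conceptually, interpolation is just a linear blend between two points in latent space; the generator renders one image per blended vector, and the images become video frames. A minimal numpy sketch (the 512-dimensional z space and the two seeds are illustrative; the actual rendering is done by generate.py):

```python
import numpy as np

def interpolate_latents(z_start, z_end, frames=48):
    """Linearly blend between two latent vectors to get a smooth path."""
    ts = np.linspace(0.0, 1.0, frames)
    return np.stack([(1 - t) * z_start + t * z_end for t in ts])

# two seeds -> two points in a 512-dim z space (StyleGAN2's default width)
rng1, rng2 = np.random.RandomState(463), np.random.RandomState(470)
path = interpolate_latents(rng1.randn(512), rng2.randn(512), frames=48)
# each row of `path` would be fed to the generator to render one video frame
```

The first and last rows reproduce the two seed vectors exactly, which is why the resulting video starts and ends on the two seed images.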

# To perform truncation traversal. Here a range is specified for the video
!python generate.py --process="truncation" --outdir=/content/out/trunc-trav-3/ --start=-0.8 --stop=3 --increment=0.02 --seeds=1000 --network=/content/drive/MyDrive/base_colab-sg2-ada-pytorch/stylegan2-ada-pytorch/results/00003-resized_art_images-mirror-11gb-gpu-gamma50-bg-resumecustom/network-snapshot-000080.pkl

# Perform interpolation and generate a video of images in z space
!python generate.py --outdir=/content/out/video1-w-0.5/ --space="z" --trunc=0.5 --process="interpolation" --seeds=463,470 --network=/content/drive/MyDrive/colab-sg2-ada-pytorch/stylegan2-ada-pytorch/results/00000-face_dataset-mirror-11gb-gpu-gamma50-bg-resumecustom/network-snapshot-000000.pkl --frames=48

Make money

Now it is time to (probably) make money using NFTs. You have completed all the tough sections. Now generate a series of images by giving a range of seed numbers to generate.py and download them. Once downloaded, you can enhance the images by sharpening them with Python OpenCV or another AI model, or use your Photoshop skills. Please let me know in the comments if you need another article on sharpening image quality.

Open your Opensea account; if you do not have one, please check my previous articles, listed in the introduction and the conclusion, to create one.

My minted images — https://bit.ly/3hcaoE4

Conclusion

This concludes the series of articles on making money with AI and NFTs. If you have good skills and a good training dataset, you can make a ton of money with NFTs. These images can also be minted on different platforms.

Disclaimer: I am showing the approach purely for informational purposes. It is not guaranteed that you will make money, and you may not be able to produce unique art via AI. It really depends on the input dataset and your skills.

Series of related articles to read

  1. What is NFT? | Make money using NFT + AI
  2. Getting started with Opensea. Reduce the gas cost
  3. Create art images using AI. Feature Optimization
  4. Create different images within AI. Use GAN’s for image creation

Reference links

  1. https://github.com/dvschultz/stylegan2-ada-pytorch

Get Code

To get the working code for my articles and for other updates, please subscribe to my newsletter.
