DATA STORIES | GENERATIVE ADVERSARIAL NETWORKS | KNIME ANALYTICS PLATFORM

GAN Data Apps

Deploying generative models for synthetic image data

Ivan Prigarin
Low Code for Data Science

--

Co-author: Emilio Silvestri

This article is split into two parts. This is Part II. Part I, “How to create GANs with KNIME Analytics Platform,” was published on December 15, 2021.

Recap from the previous article

Figure 1. This workflow contains all the steps necessary to implement a GAN in KNIME.

In the previous article, we walked through the steps required to implement a GAN in KNIME Analytics Platform that can produce artificial human faces. From preprocessing the dataset, to defining the model, to training and finally producing results, the workflow implementing the entire process can be seen in Figure 1 and found on the KNIME Hub here. An example of a face generated by the trained model is shown in Figure 2.

Figure 2. A GAN-generated face produced by the trained model from the previous article.

Besides the face-generating model, we also obtained a number of other models while experimenting with different datasets: they can produce images of cats, dogs, wild animals, and even Simpsons characters. Instead of letting them go to waste, we would like to share them with you in the form of a couple of interactive data apps. By utilizing the recently released Refresh Button Widget node, we will be able to generate new images on demand with a simple press of a button.

Image Generator Data App

After the lengthy training process, you will have a generator model that can produce images mimicking those in the training dataset, and a discriminator which, hopefully, can no longer distinguish fake images from real ones. Evidently, the more useful part here is the generator. Using the Keras Network Writer node, we can export the model so that it can be employed separately further down the line.

For instance, we can read the model using the Keras Network Reader node and apply it to a random latent space vector inside a Keras Network Executor node in order to obtain a synthetic image. Recall that the latent space is typically an n-dimensional hypersphere, with each variable drawn randomly from a Gaussian distribution with a mean of zero and a standard deviation of one.

The Keras Network Executor node must be configured to convert the output of the model using the “To Image (auto-mapping)” option (Figure 3).

Figure 3. The Keras Network Executor node is configured to convert its output to image. Note that the output layer also has to be manually selected appropriately.
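For readers curious about what these two nodes correspond to in plain Keras code, here is a minimal sketch. The file name generator.h5 and the 100-dimensional latent space are illustrative assumptions, not values taken from the workflow.

```python
# A minimal sketch of reading an exported generator and applying it to a
# random latent vector. "generator.h5" and latent_dim=100 are assumptions.
import numpy as np
from tensorflow import keras

latent_dim = 100  # must match the latent size used during training

# Equivalent of the Keras Network Reader node
generator = keras.models.load_model("generator.h5")

# Sample one latent vector from a standard Gaussian (mean 0, std 1)
z = np.random.normal(loc=0.0, scale=1.0, size=(1, latent_dim))

# Equivalent of the Keras Network Executor node
fake_image = generator.predict(z)
print(fake_image.shape)  # e.g. (1, 64, 64, 3) for 64x64 RGB images
```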

As mentioned in the previous article, the output images require some postprocessing before they can be visualized (Figure 4). Specifically, we provide names for the image dimensions, reorder them to match the KNIME image representation (x, y, Channel), and finally rescale the pixel values to the 0–255 range. All these steps are carried out with nodes provided by the KNIME Image Processing extension.

Figure 4. The postprocessing pipeline for the model output.
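In plain NumPy, this postprocessing boils down to something like the sketch below. The assumed input layout (batch, height, width, channel) and the [-1, 1] output range (typical for a tanh-activated generator) are assumptions based on common GAN setups, not details confirmed by the workflow.

```python
# A rough NumPy equivalent of the postprocessing nodes: reorder the
# dimensions to (x, y, Channel) and rescale pixel values to 0-255.
import numpy as np

def postprocess(fake_image: np.ndarray) -> np.ndarray:
    img = fake_image[0]                 # drop the batch dimension
    img = np.transpose(img, (1, 0, 2))  # (y, x, Channel) -> (x, y, Channel)
    img = (img + 1.0) / 2.0 * 255.0     # assumed [-1, 1] -> [0, 255]
    return img.astype(np.uint8)
```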

Combining the above, we have the complete pipeline to generate synthetic images. To transform it into an interactive application, we have to:

  • Encapsulate the workflow into a component in order to enable the interactive view.
  • Add an Image Output Widget to visualize the produced images.
  • Add a Refresh Button Widget, which triggers the re-execution of the whole workflow. Clicking the Refresh Button Widget in the interactive view produces a new random latent space vector, to which the model is applied to generate a brand-new image.

The workflow implementing these steps, as well as the data app itself, can be seen in Figure 5.

Figure 5. The component behind the data app that generates new images on the fly. The Refresh Button in the data app triggers the re-execution of the workflow, resulting in a new generated image.

This data app could be further expanded to include more models to generate images from. You can place the trained models into a single directory, read them with the Keras Network Reader node, name them appropriately, and apply a filter widget (for example, the Nominal Row Filter Widget node) to your visualization to allow the user to select a single model. We can also add a few finishing touches, like a title and a short description, to make the data app a little nicer. This enhanced version of the data app is shown in Figure 6. You can download the workflow from the KNIME Hub and try it yourself.

Figure 6. Data app to generate images from different trained models.

Image to image interpolation: a trip to the Latent Space

Remember the remark about how a latent space vector can be thought of as a more compact representation of the corresponding generated image? It turns out that slightly altering the generator's input vector yields a slightly different output image. This means that, by generating two random vectors, interpolating between their values, and using the intermediate vectors as inputs for the generator, we can produce a sequence of images that gradually change from one point to the other. Figure 7 visualizes this using a pair of two-dimensional vectors; in reality, the vectors have much higher dimensionality.

Figure 7. Visual representation of vector interpolation. Starting from a pair of vectors, the intermediate points are interpolated to produce a sequence of images morphing from the starting to the ending point.

This vector generation is carried out by the workflow shown in Figure 8: it generates a number of random vectors, inserts empty placeholder rows between them, and fills those placeholders using the Missing Value node set to Linear interpolation.

Figure 8. Generate random vectors from latent space and fill the intermediate values with interpolated values. These slightly different vectors will result in slightly changing images produced by the generator.
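For reference, the same interpolation can be expressed in a few lines of NumPy; the latent dimensionality and the number of interpolation steps below are illustrative assumptions.

```python
# Linear interpolation between two random latent vectors. Each row of
# z_sequence is one input vector for the generator.
import numpy as np

latent_dim = 100  # assumed latent size
steps = 10        # number of images in the morphing sequence

z_start = np.random.normal(size=latent_dim)
z_end = np.random.normal(size=latent_dim)

# Interpolation weights from 0 to 1, one per intermediate image
alphas = np.linspace(0.0, 1.0, steps).reshape(-1, 1)
z_sequence = (1.0 - alphas) * z_start + alphas * z_end  # (steps, latent_dim)

# images = generator.predict(z_sequence)  # one image per interpolated vector
```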

To apply this, we built another interactive data app, shown in Figure 9. You can specify various generation parameters, choose from the generator models mentioned in the previous section, and even save the interpolated images as an .avi video. The workflow behind this data app can be found here on the KNIME Hub.

Figure 9. Data app demonstrating Image to Image interpolation.

Image Vector Combiner

Having established that a latent space vector is a more compact representation of an image, can we play around with vectors to obtain a combined representation? Let's say we have the two vectors used to generate the faces on the left in Figure 10, and we want to obtain a new face that is somewhere in between the two. We can take the mean of the two vectors and use the resulting vector to generate a new image.

Figure 10. Two latent space vectors can be combined to get new images.

This concept can be extended to other vector operations, such as the sum and the element-wise product. To showcase this, we built a final data app that calculates different combinations of two original images. Once again, the workflow is available here on the KNIME Hub.
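In NumPy terms, these combinations are one-liners. The sketch below assumes a 100-dimensional latent space, as in the earlier examples; the variable names are purely illustrative.

```python
# Combining two latent vectors with different element-wise operations.
import numpy as np

latent_dim = 100  # assumed latent size
z1 = np.random.normal(size=(1, latent_dim))
z2 = np.random.normal(size=(1, latent_dim))

z_mean = (z1 + z2) / 2.0  # average: an image "in between" the originals
z_sum = z1 + z2           # element-wise sum
z_prod = z1 * z2          # element-wise product

# new_image = generator.predict(z_mean)
```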

Final remarks

In the previous article, we demonstrated that it is possible to implement and train a GAN in KNIME Analytics Platform. Now, we have shown that it takes only a few additional nodes to turn your GAN models into interactive data apps that can be deployed on the web.

While we focused on images of a relatively small size in this and the previous article, the same principles and implementation techniques can be applied to other kinds of data, resulting in powerful domain-specific generative models. These can be invaluable when it comes to producing synthetic training data for other models when data collection is difficult or expensive; they can furthermore be deployed, as we have shown, in the form of interactive data apps, or as part of a larger pipeline.
