How d’ArtFlex is Using AI

dArtFlex

AI technology is a real disruptor that can change our lifestyle in the coming years and decades, and in many ways it already has. Facial recognition, for example, is actively used around the world today, including in China, Europe, and the United States. Big Tech companies such as Amazon, Google, and Facebook rely on AI algorithms and machine learning to recognize patterns in data. d’ArtFlex has also implemented AI algorithms on our platform.

Any d’ArtFlex user can create new images with the platform’s dedicated AI tools. These tools are built on generative adversarial networks (GANs), which can produce an effectively unlimited amount of random “child art” from parent images by combining several artworks together. The platform lets you mix anything you can imagine: the AI engine works with a range of parameters, including different types of art, to create a sequence that is unique every time.

This process creates a completely unique gallery, a collection of rare and genuine art.

Collectors will also be able to interact with the AI: provided the original artists agree to their work being used, buyers can create new art from the pieces in their own collections.

GANs were first introduced in a 2014 paper by Ian Goodfellow and other researchers, including Yoshua Bengio, at the University of Montreal. Facebook’s director of artificial intelligence research, Yann LeCun, called adversarial network training “the most interesting idea in machine learning in the last 10 years.”

The potential of GANs is enormous because they can learn to mimic any distribution of data. GANs are trained to create structures that look eerily similar to things from our world, whether images, music, speech, or prose. Generative adversarial networks are, in a sense, robotic artists, and the results are impressive.

One neural network, called a generator, generates new data instances, and another, a discriminator, evaluates them for authenticity; that is, the discriminator decides whether or not each data instance it examines belongs to a training data set.

Suppose we are trying to do something simpler than replicating a portrait of the Mona Lisa: generating handwritten digits similar to those in the MNIST dataset. The discriminator’s purpose is to recognize which instances are authentic members of that set.

The generator, meanwhile, produces new images and passes them to the discriminator in the hope that they will be accepted as genuine, even though they are fake. In short, the generator’s purpose is to produce handwritten digits that slip past the discriminator, and the discriminator’s purpose is to determine whether each image is genuine.

The GAN goes through these steps:

  • The generator receives a vector of random numbers and returns an image.
  • This generated image is fed to the discriminator along with a stream of images taken from the actual dataset.
  • The discriminator accepts both real and fake images and returns probabilities, in a range from 0 to 1, with 1 representing a genuine image and 0 representing a fake image.

Thus, we have a double feedback loop:

  • The discriminator is in a loop with the genuine images.
  • The generator is in a loop with the discriminator.
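To make this double feedback loop concrete, here is a minimal training-loop sketch for the MNIST example above. It is an illustration assuming PyTorch and torchvision are available; the network sizes, learning rates, and single pass over the data are placeholder choices, not the configuration d’ArtFlex actually uses.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

latent_dim = 64

# Generator: turns a random noise vector into a flattened 28x28 "digit".
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)

# Discriminator: takes a flattened image, returns the probability it is real.
discriminator = nn.Sequential(
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

mnist = datasets.MNIST(
    "mnist", train=True, download=True,
    transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.5,), (0.5,)),  # scale pixels to [-1, 1] to match Tanh
    ]),
)
loader = DataLoader(mnist, batch_size=128, shuffle=True)

for real, _ in loader:                      # one pass over the data; repeat for more epochs
    real = real.view(real.size(0), -1)      # flatten the 28x28 images
    ones = torch.ones(real.size(0), 1)      # label 1: genuine
    zeros = torch.zeros(real.size(0), 1)    # label 0: fake

    # Discriminator step: learn to tell real images from the generator's fakes.
    noise = torch.randn(real.size(0), latent_dim)
    fake = generator(noise)
    d_loss = (loss_fn(discriminator(real), ones)
              + loss_fn(discriminator(fake.detach()), zeros))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to produce fakes the discriminator accepts as real.
    g_loss = loss_fn(discriminator(fake), ones)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```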

You can think of a GAN as a counterfeiter and a policeman playing cat-and-mouse, where the counterfeiter learns to make false bills and the policeman learns to detect them. Both are dynamic; that is, the policeman also trains (perhaps the central bank marks missed bills), and each side comes to learn the other’s methods in constant escalation.

The discriminator network is a standard convolutional network with a binary classifier on top, labeling each image fed to it as real or fake. The generator is, in some sense, an inverse convolutional network: while the standard convolutional classifier takes an image and progressively reduces its resolution to arrive at a probability, the generator takes a vector of random noise and upsamples it into an image. The former sifts the data using downsampling techniques such as max pooling; the latter generates new data.
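That contrast is easiest to see in the layer types. The sketch below, again assuming PyTorch and using illustrative shapes for 28x28 images, pairs a downsampling convolutional discriminator with an upsampling, transposed-convolution generator.

```python
import torch
import torch.nn as nn

# Discriminator: image -> probability of being real (downsampling path).
discriminator = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=4, stride=2, padding=1),   # 28x28 -> 14x14
    nn.LeakyReLU(0.2),
    nn.Conv2d(16, 32, kernel_size=4, stride=2, padding=1),  # 14x14 -> 7x7
    nn.LeakyReLU(0.2),
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 1),
    nn.Sigmoid(),
)

# Generator: noise vector -> image (upsampling path, the "inverse" direction).
generator = nn.Sequential(
    nn.Linear(64, 32 * 7 * 7),
    nn.ReLU(),
    nn.Unflatten(1, (32, 7, 7)),
    nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),  # 7x7 -> 14x14
    nn.ReLU(),
    nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),   # 14x14 -> 28x28
    nn.Tanh(),
)

noise = torch.randn(8, 64)        # a batch of 8 random noise vectors
images = generator(noise)         # shape (8, 1, 28, 28)
scores = discriminator(images)    # shape (8, 1), values between 0 and 1
print(images.shape, scores.shape)
```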

The d’ArtFlex platform uses GANs for picture brushing. With this technology, we modify pictures submitted by our users to make them absolutely unique. For photos or pictures of living people, the networks alter facial expression, skin color, or eye color; for objects, they change the color or color shades; for interiors, they change the colors as well.

Flexibility, in terms of the possibilities provided to users, is the basis of our platform. Users will be able to create unique works themselves in the constructor with the help of trained GANs: the user chooses two images, and with the participation of our GAN network a single new image is created from them, unique every time. The result is digital art, a collaboration between human creators and AI.
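d’ArtFlex has not published the internals of this constructor, but one common way to blend two works with a trained GAN is to interpolate between their latent codes and decode the mixture. The sketch below is purely hypothetical: the tiny placeholder generator and the codes z_a and z_b stand in for a trained model and for latent codes recovered from the two chosen images (for example, via GAN inversion).

```python
import torch
import torch.nn as nn

latent_dim = 64

# Placeholder standing in for a trained generator; not d'ArtFlex's actual model.
generator = nn.Sequential(nn.Linear(latent_dim, 28 * 28), nn.Tanh())

z_a = torch.randn(1, latent_dim)   # stand-in latent code for artwork A
z_b = torch.randn(1, latent_dim)   # stand-in latent code for artwork B

def blend(generator, z_a, z_b, alpha=0.5):
    """Decode a weighted mix of two latent codes into a single new image."""
    z_mix = alpha * z_a + (1 - alpha) * z_b
    with torch.no_grad():
        return generator(z_mix)

new_image = blend(generator, z_a, z_b, alpha=0.5)   # a unique blend of A and B
```

Varying alpha between 0 and 1 slides the result between the two source works, which is one simple way a constructor like this could expose a creative control to the user.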
