Counterfeiters, beware! These new neural networks will catch you
Before we dive deep into the world of "machine learning and all the terms I've never heard of (and even if I have, I may or may not know what they mean, just like Schrödinger's cat)", we want to ask you a short question: do you think AI will prevail in every domain, even counterfeiting? Let's picture this for a moment.
Imagine that you are a painter. Not just any kind of painter, but one who specializes in, how to put it gently, not creating new things but forging them. Let's call you G. Now imagine that opposed to you there is this guy, D, an art critic, and a very talented one, who can spot a fake painting from miles away. Now, let's imagine you want to show him some of your "Monets".
At the beginning, G's whole purpose is to create fake Monets. Sometimes D falls for them, sometimes he doesn't. But as time passes and D sees more and more original examples, he becomes better at detecting fakes. Since G starts having a harder time fooling D, he has to improve too, so he slowly starts to produce better forgeries. In short, this is the idea behind Generative Adversarial Networks, or GANs.
What are Generative Adversarial Networks
GANs are a new type of generative model, a branch of unsupervised learning techniques in machine learning.
A GAN contains two networks that live in constant conflict (hence the "adversarial" in the name): a generator (G) and a discriminator (D). As with most things in machine learning, GANs are trained on examples, such as images, and there is an underlying data distribution (from which the real samples x are drawn) that governs them. G generates outputs, that is, it creates new stuff, and D decides whether they come from the same distribution as the training set or are, you guessed it, fake.
How do GANs work, technically?
Here is how the pieces fit together: G starts from random noise (z), and the images it generates are G(z). D, on the other hand, takes both the real images (x) and the fake ones from G, and scores them as D(x) and D(G(z)), its estimates of how likely each input is to be real.
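To make the notation concrete, here is a minimal sketch in PyTorch of the two networks and the three quantities above. The layer sizes, noise dimension, and the random stand-in for real data are illustrative assumptions, not details from the article:

```python
import torch
import torch.nn as nn

noise_dim, data_dim, hidden = 16, 2, 32

# G: maps a noise vector z to a fake sample G(z)
G = nn.Sequential(
    nn.Linear(noise_dim, hidden), nn.ReLU(),
    nn.Linear(hidden, data_dim),
)

# D: maps a sample to a probability that it is real
D = nn.Sequential(
    nn.Linear(data_dim, hidden), nn.ReLU(),
    nn.Linear(hidden, 1), nn.Sigmoid(),
)

z = torch.randn(8, noise_dim)   # a batch of noise vectors
x = torch.randn(8, data_dim)    # stand-in for a batch of real samples

fake = G(z)          # G(z): generated samples
d_real = D(x)        # D(x): score for real inputs
d_fake = D(fake)     # D(G(z)): score for generated inputs
print(fake.shape, d_real.shape, d_fake.shape)
```

Because D ends in a sigmoid, its outputs land in [0, 1] and can be read directly as "probability this input is real".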
What's interesting is that both networks learn at the same time. Once you train G on enough input, it learns enough about the distribution to generate new samples that share very similar properties. And as you train D, it learns to tell whether the images it sees are real or fake.
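This simultaneous learning is usually implemented as an alternating loop: one gradient step for D, then one for G. The sketch below trains a tiny GAN on a toy "real" distribution (a Gaussian centered at 3.0); the architecture, learning rates, and step count are illustrative assumptions, not a prescription from the article:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(500):
    real = torch.randn(32, 1) + 3.0   # samples from the "true" distribution
    z = torch.randn(32, 4)            # noise batch for G

    # Discriminator step: push D(x) toward 1 (real) and D(G(z)) toward 0 (fake).
    # detach() keeps this step from updating G's weights.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(32, 1)) + \
             bce(D(G(z).detach()), torch.zeros(32, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: push D(G(z)) toward 1, i.e. try to fool the critic.
    opt_g.zero_grad()
    loss_g = bce(D(G(z)), torch.ones(32, 1))
    loss_g.backward()
    opt_g.step()

# Inspect the mean of generated samples; with enough training it
# approaches the mean of the real distribution.
print(G(torch.randn(1000, 4)).mean().item())
```

Note the asymmetry in the two losses: D minimizes classification error on both batches, while G minimizes only the term that depends on its own fakes, which is exactly the forger-versus-critic game from the opening analogy.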
Some real applications of GANs
- Image generation from input samples
- High-resolution image generation from low-resolution inputs
- Interactive image generation (iGAN)
- Diagrammatic Abstract Reasoning
- Image inpainting
- Semantic segmentation
- Video generation
- Text to image generation
Until the release of our next article, which will be about image synthesis, you can check our Facebook and Twitter accounts, where we will post other great things about AI. In case we have missed some important piece of information about GANs, do not hesitate to leave us a comment.