DeepFaceDrawing — A Neural Network That Turns Sketches Into Realistic Face Images

Mikhail Raevskiy
Deep Learning Digest
2 min read · Jul 22, 2020

DeepFaceDrawing is a neural network model that generates a photorealistic image of a person's face from a hand-drawn sketch. The network was developed by researchers from the Chinese Academy of Sciences and City University of Hong Kong.

Source: http://geometrylearning.com/DeepFaceDrawing/

Live Demonstration

Source: DeepFaceDrawing

The problem with existing approaches

Existing neural network approaches to generating images from sketches can synthesize facial images quickly. However, such models often overfit to the input sketches and therefore require professional-quality sketches or edge maps as input. To get around this limitation, the researchers propose to implicitly model the shape space of plausible face images and synthesize an image in this space that approximates the input sketch. The model follows a local-to-global approach and treats the sketch as a soft constraint rather than a hard one. This allows it to generate believable facial images even from rough sketches.
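The shape-space idea can be illustrated with a toy sketch: instead of trusting an imperfect sketch feature directly, pull it toward nearby features computed from real training faces. The function name, the feature bank, and the inverse-distance weighting below are illustrative simplifications; the actual model refines learned autoencoder features of each facial component with locally linear weights over its nearest neighbors.

```python
import numpy as np

def refine_on_manifold(f, bank, k=10):
    """Pull a sketch feature vector f toward the manifold of plausible
    faces, approximated here by a bank of feature vectors from real
    training images (toy stand-in for the paper's learned embeddings)."""
    # distances from f to every feature in the bank
    d = np.linalg.norm(bank - f, axis=1)
    idx = np.argsort(d)[:k]            # k nearest neighbors
    w = 1.0 / (d[idx] + 1e-8)          # inverse-distance weights
    w = w / w.sum()                    # normalize to sum to 1
    # refined feature: weighted combination of plausible neighbors
    return w @ bank[idx]
```

Because the output is a convex combination of features from real faces, a noisy or implausible sketch feature is replaced by something the downstream generator has effectively seen before — which is what makes the method forgiving of amateur sketches.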

What’s inside the model

The proposed approach consists of three submodules:

  1. A component-embedding module, in which embeddings of key facial features are learned by separate autoencoders;
  2. A feature-mapping network that decodes the feature vectors into corresponding multi-channel feature maps;
  3. An image-synthesis network that generates the final face image.
Visualization of the DeepFaceDrawing Approach. Source: DeepFaceDrawing
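The data flow through the three submodules can be sketched as follows. Every network here is replaced by a trivial stand-in function, and the dimensions are arbitrary; only the structure (per-component encoding → feature maps → fused synthesis) mirrors the described pipeline.

```python
import numpy as np

# The paper decomposes the face into separate regions, each with its own
# autoencoder; the exact region list here is an assumption for illustration.
COMPONENTS = ["left-eye", "right-eye", "nose", "mouth", "remainder"]

def encode(patch, dim=32, seed=0):
    """Stand-in for a per-component autoencoder encoder:
    a fixed random linear projection of the sketch patch."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((patch.size, dim))
    return patch.ravel() @ W                      # (dim,) embedding

def to_feature_map(code, hw=(64, 64), channels=4):
    """Stand-in for the feature-mapping network: broadcast the first
    few entries of the embedding into a multi-channel spatial map."""
    return np.tile(code[:channels, None, None], (1, *hw))

def synthesize(maps):
    """Stand-in for the image-synthesis network: fuse all component
    maps channel-wise and collapse them to one 'image'."""
    fused = np.concatenate(maps, axis=0)          # (5 * channels, H, W)
    return fused.mean(axis=0)                     # (H, W)

sketch = np.ones((64, 64))                        # dummy input sketch
patches = {c: sketch for c in COMPONENTS}         # real model crops regions
codes = {c: encode(p) for c, p in patches.items()}
maps = [to_feature_map(codes[c]) for c in COMPONENTS]
image = synthesize(maps)                          # (64, 64) toy output
```

The key design point this preserves: components are embedded independently (so each can be refined against its own feature space), while the generator sees all component feature maps jointly, which keeps the synthesized face globally coherent.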

Model performance evaluation

The researchers compared the proposed approach with existing alternative architectures: Pix2pix, Lines2FacePhoto, Pix2pixHD, and iSketchNFill. In the examples below, you can see that the proposed approach generates noticeably more photorealistic images.

Comparison of the proposed method with state-of-the-art approaches. Source: DeepFaceDrawing
