Synthetic Design Assistant

Automating the design process

Images automatically generated by the Design Assistant.


Here, the Autonomous Design Agent (ADA) takes the form of a Generative Adversarial Network (GAN) trained on a manually curated dataset of 21st-century American cars (2005–2016).

The generated output could eventually serve as a starting point for a human designer to develop more elaborate concepts.


As a subset of AI, Machine Learning (ML) is the process by which a machine learns from experience, parsing data and making predictions from it.

Deep Learning (DL) is a subset of ML inspired by the structure and function of the brain. It is based on algorithms known as Neural Networks (NNs), layered models loosely modeled on networks of biological neurons.

DL improves the performance of a system by constructing internal representations of the data at multiple levels of abstraction. Within Deep Learning there is an ever-growing zoo of architectures and algorithms, each focused on solving specific problems.
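Those "levels of abstraction" can be sketched with a toy forward pass: each layer re-encodes the previous layer's output into a smaller, more abstract representation. This is a minimal numpy illustration with random (untrained) weights, not part of the project's actual model.

```python
import numpy as np

def relu(x):
    # standard non-linearity: keeps positive activations, zeroes the rest
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# A toy 3-layer network. Weights are random here; training would
# adjust them so each layer captures useful features of the data
# (raw pixels -> edges -> parts -> whole-object concepts).
x = rng.normal(size=(1, 64))  # e.g. a flattened 8x8 image patch
W1 = rng.normal(size=(64, 32)) * 0.1
W2 = rng.normal(size=(32, 16)) * 0.1
W3 = rng.normal(size=(16, 8)) * 0.1

h1 = relu(x @ W1)   # low-level features
h2 = relu(h1 @ W2)  # mid-level features
h3 = relu(h2 @ W3)  # high-level representation

print(h3.shape)  # (1, 8)
```

Each matrix multiplication followed by a non-linearity is one "layer"; stacking them is what makes the learning "deep".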


The training dataset was composed of (1) 963 examples of front-end views and (2) 1,022 examples of side views, all 128×128, web-scraped from many online sites. The selection criteria aimed to keep variation between examples to a minimum, mainly in the size and location of the car within the image, so as not to dramatically degrade the final output.
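The article does not show its preprocessing code, but normalizing size and position typically comes down to a center-crop followed by a resize to the target resolution. This is a hypothetical numpy-only sketch (the function name and nearest-neighbor resampling are my assumptions):

```python
import numpy as np

def center_crop_resize(img: np.ndarray, size: int = 128) -> np.ndarray:
    """Center-crop an HxWxC image to a square, then nearest-neighbor
    resize it to (size, size) so every training example is aligned."""
    h, w = img.shape[:2]
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    square = img[top:top + side, left:left + side]
    # nearest-neighbor index map from target pixels to source pixels
    idx = np.arange(size) * side // size
    return square[idx][:, idx]

# example: a dummy 200x300 RGB image stands in for a scraped photo
img = np.zeros((200, 300, 3), dtype=np.uint8)
out = center_crop_resize(img)
print(out.shape)  # (128, 128, 3)
```

In practice a library resizer (e.g. Pillow) with a smoother filter would be preferable; the point is only that every example ends up 128×128 with the subject roughly centered.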

A few samples from the original dataset used to feed the network.

The network used was a Deep Convolutional Generative Adversarial Network (DCGAN), which operates purely on visual data and therefore focuses only on design features. Using a GAN to obtain further insights allows us to handle, analyze, and process more information than we can consciously be aware of, surfacing previously unknown features. The goal was not to generate a functional car but rather to explore the most basic characteristic design features (encoded in so-called "latent vectors") that have successfully characterized certain car designs. In effect, what the neural network tries to create is a general abstraction of the common features in the training data.
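A concrete way to see latent vectors at work is interpolation: the DCGAN paper shows that walking between two points in latent space and decoding each step yields a smooth morph between designs. A minimal sketch of building such a path (the generator itself is omitted; `latent_dim = 100` matches the DCGAN paper's default, and the step count is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
latent_dim = 100  # latent dimensionality used in the DCGAN paper

# two random latent vectors; a trained generator would decode each
# into a different car design
z_a = rng.normal(size=latent_dim)
z_b = rng.normal(size=latent_dim)

# linear interpolation between the two codes: feeding each step to
# the generator would produce a gradual transition between designs
steps = 8
path = [(1 - t) * z_a + t * z_b for t in np.linspace(0.0, 1.0, steps)]

print(len(path), path[0].shape)  # 8 (100,)
```

Smooth transitions along such paths are one sign that the network has learned a meaningful abstraction rather than memorizing training images.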

After training the model, the network generated several proposals, some of them unreadable, some not functional at all. After a final examination, only a few of them matched the starting criteria.

A selection of front-end proposals, some of them very inspiring, some of them very intriguing.


The automation of creative processes and the adoption of computational solutions to improve the production flow in a design framework will be a necessary step toward understanding modern-world dynamics. The analysis of empirical data will allow us to filter unseen information and make better decisions, making our design assumptions more precise and, in the end, our products more accurate, efficient, and successful.

The goal of the project was to explore the roles of human and machine in the creative process; however, more questions than answers have arisen:

  1. How can we manipulate just one specific design feature via its latent vector?
  2. How can AI models augment or automate human creative tasks?
  3. Should we now talk about a new Human-Machine Centered Design?
  4. How can AI help designers better understand the context and needs of end users?
  5. How can synthetic learning co-exist with current design methodologies?
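On the first question, the DCGAN paper suggests one possible answer: vector arithmetic in latent space. If we could label generated samples by whether they show a given feature, the average difference of their latent codes gives a direction that roughly controls that feature. A purely hypothetical numpy sketch (the grouping, group sizes, and scaling factor are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
latent_dim = 100

# Hypothetical: latent codes of generated samples hand-sorted into
# two groups, e.g. "rounded grille" vs. "angular grille". Random
# vectors stand in for them here.
z_with = rng.normal(size=(32, latent_dim))     # samples showing the feature
z_without = rng.normal(size=(32, latent_dim))  # samples lacking it

# mean difference ~ direction in latent space controlling the feature
feature_dir = z_with.mean(axis=0) - z_without.mean(axis=0)

z = rng.normal(size=latent_dim)    # latent code of some new design
z_edited = z + 1.5 * feature_dir   # push the design toward the feature

print(z_edited.shape)  # (100,)
```

Decoding `z` and `z_edited` with the trained generator would then show the same design with and without the targeted feature, to the extent the direction is well isolated.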

This is an ongoing project, with plenty of feature engineering still to be tuned and new architectures appearing daily to train on different data. The aim of presenting the research at this early stage of development is to give a first glimpse of the potential of Design Assistants.

The DCGAN paper on which this research was based can be found here. TensorFlow was the library used to implement the model.

The Synthetic Design Assistant code was first released in November 2018.

Jose R. Lopez @ 2019