Synthetic Design Assistant

estudio objeto
Apr 25, 2019 · 4 min read

Automating the design process

Images automatically generated by the Design Assistant.


The scope of this research is the creation of an Autonomous Design Agent able to simulate human creativity and generate industrial (automotive) design proposals on its own.

The Autonomous Design Agent (ADA) here takes the form of a Generative Adversarial Network (GAN) trained on a manually curated dataset of 21st-century American cars (2005–2016).

The generated output could eventually serve as a starting point for a human designer to develop more elaborate concepts.


Artificial Intelligence (AI) can be considered the ability of a machine to perform tasks commonly associated with human intelligence.

As a subset of AI, Machine Learning (ML) is a process in which a machine learns from experience by parsing data and making predictions about it.

Deep Learning (DL) is a subset of ML inspired by the structure and function of the brain. It is based on algorithms known as Neural Networks (NN), whose layered architecture loosely mimics networks of biological neurons.

DL improves the performance of the system by constructing internal representations of the data at diverse degrees of abstraction. Within Deep Learning we can find an ever-growing zoo of architectures and algorithms, each focused on solving specific problems.
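The idea of stacked internal representations can be sketched in a few lines of NumPy. This is a toy two-layer forward pass (the weights are random, purely for illustration): each layer re-represents its input at a new level of abstraction.

```python
import numpy as np

rng = np.random.default_rng(7)

def relu(x):
    """Common nonlinearity: zero out negative activations."""
    return np.maximum(0.0, x)

x  = rng.standard_normal(8)          # raw input (e.g. pixel values)
W1 = rng.standard_normal((16, 8))    # first layer weights
W2 = rng.standard_normal((4, 16))    # second layer weights

h = relu(W1 @ x)   # intermediate representation (low-level features)
y = relu(W2 @ h)   # more abstract representation built on top of h

print(h.shape, y.shape)
```

Training consists of adjusting `W1` and `W2` so that these representations become useful for the task at hand.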



Training a Deep Neural Network

The training dataset was composed of (1) 963 examples of front-end views and (2) 1,022 examples of side views (all 128×128 pixels), web-scraped from many online sites. The selection criteria tried to keep variation between examples to a minimum (mainly the size and location of the object in the image) so as not to dramatically alter the final output.

A few samples from the original dataset used to feed the network.
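Scraped photos come in arbitrary sizes, so a preprocessing step must bring everything to the 128×128 format the network expects. The article does not describe the exact pipeline; a minimal sketch, assuming nearest-neighbour resizing and normalization to [-1, 1] (the output range of a tanh generator), could look like this:

```python
import numpy as np

def resize_nearest(img: np.ndarray, size: int = 128) -> np.ndarray:
    """Nearest-neighbour resize of an H x W x C image to size x size."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # source row index for each output row
    cols = np.arange(size) * w // size   # source column index for each output column
    return img[rows][:, cols]

def normalize(img: np.ndarray) -> np.ndarray:
    """Map uint8 pixels [0, 255] to floats in [-1, 1]."""
    return img.astype(np.float32) / 127.5 - 1.0

# Stand-in for a scraped photo of arbitrary size
photo = np.random.randint(0, 256, size=(240, 320, 3), dtype=np.uint8)
sample = normalize(resize_nearest(photo))
print(sample.shape)
```

In practice a library such as Pillow would handle the resizing, but the shape and value-range conventions are the important part.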

The network used was a Deep Convolutional Generative Adversarial Network (DCGAN), which operates purely on visual data and therefore focuses only on design features. Using a GAN to obtain further insights allows us to handle, analyze and process more information than we can consciously be aware of, extracting previously unknown features from it. The goal was not to generate a functional car but rather to explore the most basic characteristic design features (encoded as “latent vectors”) that have successfully characterized some car designs. In effect, what the neural network tries to create is a general abstraction of the features common to the training data.
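At the heart of any GAN is the adversarial objective: the discriminator D is rewarded for telling real images from generated ones, while the generator G is rewarded for fooling it. A small NumPy sketch of the standard (non-saturating) losses, computed here on toy discriminator probabilities rather than a real trained model:

```python
import numpy as np

def gan_losses(d_real: np.ndarray, d_fake: np.ndarray):
    """Standard GAN losses from discriminator probabilities.

    d_real: D(x) on real images, d_fake: D(G(z)) on generated images,
    both in (0, 1).
    """
    eps = 1e-7  # guard against log(0)
    d_loss = -np.mean(np.log(d_real + eps)) - np.mean(np.log(1.0 - d_fake + eps))
    g_loss = -np.mean(np.log(d_fake + eps))  # generator wants D(G(z)) -> 1
    return d_loss, g_loss

# Toy scores: the discriminator is currently winning on both sides.
d_loss, g_loss = gan_losses(np.array([0.9, 0.8]), np.array([0.1, 0.2]))
print(d_loss, g_loss)
```

During training, gradient steps on `d_loss` and `g_loss` alternate until the generated images become hard to distinguish from the dataset.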

After training the model, the network generated several proposals, some unreadable, some not functional at all. On final examination, only a few of them matched the starting criteria.

A bunch of front-end proposals, some of them very inspiring, some of them very intriguing.


The automation of creative processes and the adoption of computational solutions to improve the production flow in a design framework will be a necessary step toward understanding modern world dynamics. The analysis of empirical data will allow us to filter unseen information and make better decisions, becoming more precise in our design assumptions and, in the end, producing more accurate, efficient and successful products.

The goal of the project was to explore the roles of human and machine in the creative process; however, more questions than answers have arisen:

  1. How can we manipulate one specific design feature, and thus its latent vector?
  2. How can AI models augment or automate human creative tasks?
  3. Should we talk now about a new Human-Machine Centered Design?
  4. How can AI help designers better understand the context and needs of end users?
  5. How can synthetic learning co-exist with current design methodologies?
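The first question is commonly approached through latent-space arithmetic. A minimal sketch, assuming a 100-dimensional latent space (a typical DCGAN choice, not stated in the article): linearly interpolating between two latent vectors and feeding each intermediate point to the generator morphs one design into the other, which gives a handle on how individual features vary.

```python
import numpy as np

rng = np.random.default_rng(42)
LATENT_DIM = 100  # assumed latent size, typical for DCGAN

def interpolate(z_a: np.ndarray, z_b: np.ndarray, steps: int = 5) -> np.ndarray:
    """Linear interpolation between two latent vectors.

    Returns a (steps, LATENT_DIM) array; each row would be fed to the
    generator to render one intermediate design.
    """
    alphas = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - alphas) * z_a + alphas * z_b

z_a = rng.standard_normal(LATENT_DIM)
z_b = rng.standard_normal(LATENT_DIM)
path = interpolate(z_a, z_b)
print(path.shape)
```

Isolating a *single* named feature (say, headlight shape) is harder: it requires finding a direction in latent space correlated with that feature, which remains an open part of this research.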

This is an ongoing project, with lots of feature engineering still to be tuned and new architectures appearing daily to train on different data. The aim of presenting the research at this early stage of development is to give a first glimpse of the potential of Design Assistants.

The DCGAN paper on which this research was based can be found here. TensorFlow was the library used to implement the model.

The Synthetic Design Assistant code was first released in November 2018.

Jose R. Lopez @ 2019
