Introducing the AI Blueprint Engine: A Code Generator for Deep Learning

Written by Denis Krompass and Sigurd Spieckermann, founders of creaidAI

Today, we are announcing the public beta of our AI Blueprint Engine, a code generator for Deep-Learning-based machine learning. Previously, we motivated code generation for Deep Learning, and we invite you to read that article first before continuing with this one.

This article covers the current capabilities of the AI Blueprint Engine, outlines future improvements, and previews upcoming features. We exemplify its utility on the simple example model that we previously implemented using Theano, TensorFlow, and Keras, by designing the model with our graphical user interface (GUI) and generating human-readable, editable source code.

Simple example model from our previous article.

The Graphical User Interface

Our purpose-built graphical user interface (GUI) provides an intuitive level of abstraction to implementing Deep-Learning-based machine learning models and workflows, inspired by the way data scientists and machine learning experts conceptualize such solutions prior to implementation. It facilitates the design of highly customized neural network architectures but also provides building blocks for data loading and preprocessing of numerous data types and file formats including CSV, HDF5, serialized NumPy arrays, TFRecord, and color and grayscale images.

Overview on basic building blocks and concepts of the graphical user interface.

Our guided design concept facilitates a smooth user experience, providing a self-explanatory workflow through the neural architecture design process. At each step, only the set of compatible options or operations is presented to the user. All UI components are annotated with informative tooltips; more complicated options, such as step rules or complex layer types, link to selected third-party resources that offer more detailed information.

The set of available layers differs between 2D data (left) and 3D data (right). Every option is annotated with additional information (left).

We believe that these annotations will be especially useful for newcomers, providing hints and pointers to resources for diving deeper into the subject. To give you a quick start and let you explore the capabilities of our GUI, we’ve pre-designed several common neural architectures, which are available via the following links:

The Code Generator

The AI Blueprint Engine generates not just individual scripts but a complete project that currently contains:

Overview of content generated by the AI Blueprint Engine
  • Python source code of the graphically designed neural architecture, data preprocessing pipelines, training and inference functions plus command-line interfaces with several exposed configuration options such as hyper-parameters or paths to data sources
The generated command-line interface of the training script.
  • Python requirements files with package dependencies of the project for CPU and GPU-accelerated execution
  • Dockerfiles for building CPU and GPU-enabled Docker images with installed package dependencies and the generated project code
  • A README file with comprehensive project documentation as well as setup and runtime instructions
Two excerpts of the generated file showing the setup instructions (left) and the model overview (right) in GitLab.
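To give a feel for the kind of command-line interface described above, here is a rough argparse sketch. Every flag name and default value below is an assumption chosen for illustration, not the engine's actual generated output:

```python
import argparse


def build_arg_parser():
    """Build a training-script command-line interface.

    All argument names and defaults are illustrative assumptions,
    not the AI Blueprint Engine's actual generated interface.
    """
    parser = argparse.ArgumentParser(description="Train the model.")
    parser.add_argument("--train-data", type=str, required=True,
                        help="Path to the training data source.")
    parser.add_argument("--batch-size", type=int, default=32,
                        help="Number of samples per training batch.")
    parser.add_argument("--learning-rate", type=float, default=1e-3,
                        help="Optimizer learning rate.")
    parser.add_argument("--epochs", type=int, default=10,
                        help="Number of passes over the training data.")
    return parser


# Demo: parse an explicit argument list instead of sys.argv.
args = build_arg_parser().parse_args(
    ["--train-data", "data/train.csv", "--epochs", "5"]
)
```

Exposing hyper-parameters this way keeps experiment configuration out of the source code, which is what makes generated scripts easy to drive from experiment-management tools.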

We have developed the code generator with subsequent human code editing in mind. For this reason, the generated source code is modular, fully documented with NumPy-style function, class, and module docstrings as well as inline comments, and complies with the PEP 8 style guide.
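As a generic illustration of what NumPy-style docstrings and PEP 8 compliance look like in practice (this is a hand-written example, not output from the engine), a documented function might read:

```python
def normalize(values, epsilon=1e-8):
    """Scale a sequence of numbers to zero mean and unit variance.

    Parameters
    ----------
    values : list of float
        The input values to normalize.
    epsilon : float, optional
        Small constant added to the standard deviation to avoid
        division by zero. Defaults to 1e-8.

    Returns
    -------
    list of float
        The normalized values.
    """
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    std = variance ** 0.5
    return [(v - mean) / (std + epsilon) for v in values]
```

The structured `Parameters`/`Returns` sections are what documentation tools such as Sphinx (with the napoleon extension) parse into rendered API docs.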

The main function that has been generated by the code generator.

Pre-generated sample projects are available for several common neural architectures:


In summary, we introduced the AI Blueprint Engine, a code generator for Deep-Learning-based machine learning, featuring:

  • a GUI for building highly customized and flexible neural network architectures including data preprocessing workflows, suitable for (multi-task) supervised and unsupervised learning
  • a guided graphical design process for building neural architectures by exposing only context-related options
  • extensive UI component annotation including tooltips and references to third-party resources supporting the design process
  • generation of modular, fully documented, and style-compliant Python/TensorFlow source code for training and inference, including command-line interfaces that expose several configuration options such as hyper-parameters or paths to data sources, making the generated code compatible with third-party tools for managing experiments
  • generation of auxiliary files including a README file, package dependency reference files, and Dockerfiles for containerized CPU and GPU-accelerated execution

What’s next?

Upcoming releases will focus on further improving the GUI user experience and on enhanced code generation based on tf.keras and tf.estimator, producing code that scales to larger problems and includes additional monitoring and evaluation features that integrate with TensorBoard. With these enhancements, the generated code will be capable of handling large amounts of data that don’t fit into memory. In addition, we will introduce higher-level building blocks representing entire architectures, offer pre-trained weights for those architectures, add preprocessing blocks for data augmentation, and support variable-length data.
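The core idea behind handling data that doesn't fit into memory is lazy, batched streaming, as provided by TensorFlow's input pipelines. A minimal plain-Python sketch of that idea (TensorFlow itself would be used in the generated code; this generator is only an illustration):

```python
def batch_stream(record_iter, batch_size):
    """Group a (possibly unbounded) iterator of records into batches.

    Records are pulled lazily, one at a time, so the full dataset
    never needs to reside in memory at once -- the same principle
    behind streaming input pipelines such as tf.data.
    """
    batch = []
    for record in record_iter:
        batch.append(record)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # yield the final, possibly smaller batch
        yield batch


# Example: stream ten records in batches of four.
batches = list(batch_stream(range(10), 4))
```

Because the source iterator can wrap files on disk, a database cursor, or a network stream, memory usage stays bounded by the batch size rather than the dataset size.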

We invite you to try our public beta, and we very much appreciate feedback, suggestions, and feature requests, which you may submit via our GitHub issue tracker. You may also leave a comment below, whether you like or dislike what we are doing; constructive criticism is appreciated as well.

In the next article, we will demonstrate the utility of the AI Blueprint Engine on a real dataset, where we build a custom model using the GUI, generate code, and use it to train and apply the model.