Lowering the Entrance Barrier of Deep Learning with High-Quality Source Code Generation

Written by Denis Krompass and Sigurd Spieckermann, founders of creaidAI

Back in the days, …

… when we were using Theano to build machine learning models, we often faced implementation issues that were tedious and time-consuming to resolve. Even with a clear idea of the implementation goal, symbolic programming and the cryptic error messages of an incorrectly constructed computation graph significantly slowed down our progress. Tons of documentation, and as a last resort Theano’s source code itself, likely contained all the knowledge needed to get things working, but this mode of development was just ridiculously inefficient.

Ooops, something went wrong while running some Theano code. Happy debugging …

StackOverflow to the rescue! Unfortunately, the exact same problem rarely occurs twice, or it’s nontrivial to boil the problem down to its essence and get to the bottom of it. As a result, we, and many others, spent far too much time on trial and error and on piecing together information from various sources to finally turn an idea into reality.

Let’s not even try to imagine how many great ideas were tossed into the trash because the (anticipated) effort of getting things to work was just too high.

Nevertheless, persistence often paid off, and after a while it became easier to avoid common mistakes. But don’t be fooled: there’s still plenty of googling and querying of StackOverflow involved even these days. When TensorFlow matured and Theano’s development was slowing down, transitioning to TensorFlow felt almost like déjà vu, even though the two frameworks have much in common. Many subtle differences between TensorFlow and Theano, such as function names, function parameterization, placeholders and shape inference, variable initialization, and executing the computation graph, took some getting used to.

A simple dummy model implemented with Theano (left) and TensorFlow (right).
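As an illustrative sketch (assuming nothing fancier than a single dense softmax layer, which may differ from the exact model in the figure), the Theano version might look like this:

    import numpy as np
    import theano
    import theano.tensor as T

    # Symbolic input and model parameters (shapes chosen arbitrarily for illustration).
    x = T.matrix("x")
    W = theano.shared(np.zeros((4, 2), dtype=theano.config.floatX), name="W")
    b = theano.shared(np.zeros(2, dtype=theano.config.floatX), name="b")

    # A single dense layer with softmax output.
    y = T.nnet.softmax(T.dot(x, W) + b)

    # Compile the computation graph into a callable function.
    predict = theano.function(inputs=[x], outputs=y)
    print(predict(np.random.rand(3, 4).astype(theano.config.floatX)))

And a roughly equivalent TensorFlow 1.x graph-mode version, with placeholders, explicit variable initialization, and session-based execution taking the place of their Theano counterparts:

    import numpy as np
    import tensorflow as tf

    # Placeholder for the input; shapes must (at least partially) be declared up front.
    x = tf.placeholder(tf.float32, shape=[None, 4], name="x")
    W = tf.Variable(tf.zeros([4, 2]), name="W")
    b = tf.Variable(tf.zeros([2]), name="b")

    # The same dense layer with softmax output.
    y = tf.nn.softmax(tf.matmul(x, W) + b)

    # Variables are initialized and the graph is executed inside a session.
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(y, feed_dict={x: np.random.rand(3, 4).astype(np.float32)}))

The two snippets compute the same thing, yet nearly every line differs in small ways, which is exactly what made the transition feel like déjà vu.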

In addition, there were tons of new features to explore, such as scopes, summaries, and distributed training. To make this transition happen, we spent an enormous amount of time reading docs, reading source code, querying the usual resources, and going through trial and error, all over again.

Entering the field of Deep Learning is not easy

The recent hype around and rise of Artificial Intelligence (AI) has drawn a lot of attention to Deep Learning, the primary technology behind the most notable advances in AI. Not only AI experts with strong backgrounds in machine learning or related fields, but also many newcomers and AI enthusiasts are part of the current AI revolution. Even though there have been tremendous advancements in the machine learning software ecosystem, these newcomers will experience struggles similar to ours on their journey towards applying Deep Learning. Some of them will prevail, but others, especially novices, will be intimidated by the high entrance barriers and eventually fail.

In order to lower the entrance barriers for Deep Learning, many excellent higher-level frameworks (e.g. Keras) and machine learning platforms (e.g. H2O.ai) have emerged, exposing fewer low-level details and providing a jump start for training your first Deep-Learning-based model.

Same dummy model as above, but this time implemented with TensorFlow’s Keras API tf.keras.
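Assuming the same minimal single-layer model as in the sketch above, the tf.keras version might reduce to a few declarative lines:

    import tensorflow as tf

    # The same single-layer softmax model, expressed with tf.keras.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(2, activation="softmax", input_shape=(4,)),
    ])
    model.compile(optimizer="sgd", loss="categorical_crossentropy", metrics=["accuracy"])
    model.summary()

Placeholders, explicit variable initialization, and session handling all disappear behind the compile/fit API.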

However, many of these frameworks trade flexibility for ease of use, although flexibility is often required for implementing more advanced ideas or tackling non-standard problems. In addition, all of these frameworks still require initial familiarization with implementation details, so reading the docs, reading source code, exhaustive web searches for example code, and lots of trial and error remain unavoidable, which is especially frustrating for many newcomers.

Lowering the entrance barrier with a graphical higher-level abstraction

While abstraction is generally the right way to lower entrance barriers in AI development, the kind of abstraction should be reassessed to match the needs of experts and novices alike. We argue that the design of Deep-Learning-based models is a graphical process and should be treated as such, using a graphical user interface (GUI).

Graph representation of the dummy model implemented with Theano, TensorFlow, and Keras above.

Clearly, graphical representations of computation graphs are much easier for a broad audience to understand than plain source code. For this reason, many presentations, blogs, and publications use them to visualize complex machine learning workflows such as deep neural network architectures. For example, François Chollet, the creator of the Keras Deep Learning framework, used a graph to illustrate a model architecture before showing the TensorFlow/Keras implementation at the TensorFlow Dev Summit in 2017 (see the video).

What about flexibility?

Unfortunately, a higher level of abstraction tends to come at the cost of flexibility that might be required for developing real-world AI systems. In our professional experience, solutions to meaningful real-world problems using Deep Learning (and machine learning in general) are still purpose-built. There are no universal guidelines or proven master algorithms yet that solve arbitrary problems and generalize across them. Theoretical solutions like ONE have recently been proposed, but to the best of our knowledge neither an implementation nor any empirical evidence exists. Instead, developers rely on best practices, extensive experience, and lots of trial and error.

At creaidAI we believe the solution to making AI technology accessible to a broader market is a tool that offers an easy-to-use graphical user interface while preserving flexibility by facilitating modifications at the source code level.

These requirements can be met by a GUI-based code generator.

Why a code generator?

First of all,

editing high-quality, documented source code is easier than writing everything from scratch.

Even if the code generator cannot provide the final solution to a problem, it likely gets you 90% of the way there by generating most of the boilerplate code, allowing you to focus on the parts of the source code that matter most for your specific task. In addition, it can serve as a dynamic cookbook for implementation details, providing answers to specific “how-to” questions by generating working, tested code accordingly (e.g. “How is a deep neural network with Batch Normalization built and trained using TensorFlow?”). We are convinced that a code generator will accelerate the development process for everyone, experts and novices alike, and will facilitate exploration during development, since developers are not held up or discouraged by implementation details or building blocks they are unfamiliar with. Furthermore, developers are not locked into a specific software ecosystem but are free to combine the generated code with any other suitable piece of software that handles other aspects of the job, such as managing experiments or deploying models (e.g. FloydHub, Valohai, or RiseML).
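To give a flavor of what an answer to that “how-to” question could look like, here is a minimal tf.keras sketch with Batch Normalization; the layer sizes and input shape are arbitrary assumptions for illustration, not the output of any particular generator:

    import tensorflow as tf

    # A small fully connected network with Batch Normalization placed between
    # the linear layer and its activation, plus a standard training setup.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, use_bias=False, input_shape=(10,)),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Activation("relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    # model.fit(x_train, y_train, epochs=10, batch_size=32)  # supply your own data here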


In the next article we will introduce the AI Blueprint Engine — a GUI for building Deep-Learning-based AI systems that generates human-readable, editable source code.