Towards an ML-powered code visualiser

Bruno Marnette
prodo.dev
Dec 11, 2018 · 5 min read

We recently released a little side project of ours called Alfie, an online JavaScript playground displaying all the “helpful” values observed during the execution of your scripts. Here is how it looks:

Alfie’s look, as of today. Disclaimer: we’re still shipping a new version every day.

We received a lot of feedback, and people have found our visualisation helpful in different ways. Some users have used Alfie to solve algorithmic puzzles (e.g. the Advent of Code or Project Euler challenges), and it has helped them understand and debug their code quickly. Others have used Alfie to share and explain code, for instance when teaching JavaScript to someone. However, no single visualisation fits all code snippets and all use cases perfectly.

So how could we automatically customise what you see in different contexts? In short: we plan to use Machine Learning.

Learning what not to show

Releasing Alfie helped us to confirm an obvious assumption: we can’t show too much information at once.

First, it wouldn’t fit on screen. More importantly, it wouldn’t fit in the user’s brain. Showing examples of runtime values for every single variable and expression in your code is a non-starter.

It could also feel silly at times. In fact, the very first version of Alfie didn’t feel very smart at all. For instance, next to a variable declaration such as var a = b, it would show the observed values for both a and b. This felt like noise to our users, since the two values are always equal after each assignment.

For now, we’re still handling these sorts of edge cases by writing heuristics. One of those heuristics intuitively says something like “in the case of a trivial assignment, only show the value of the left-hand side”. But it quickly gets more complicated in practice. Here is a snippet illustrating the kind of logic we ended up dealing with:

Example of the criteria currently hardcoded to help decide what to show to users.
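
To give a flavour of what such a heuristic looks like, here is a minimal sketch in TypeScript. The AST shape and the shouldShowValue function below are purely illustrative assumptions, not Alfie’s actual internals:

```typescript
// Simplified, illustrative AST node type (not Alfie's real representation).
type Node =
  | { kind: "identifier"; name: string }
  | { kind: "literal"; value: string }
  | { kind: "assignment"; left: Node; right: Node }
  | { kind: "call"; callee: Node; args: Node[] };

// Should we display the observed runtime value next to this node?
function shouldShowValue(node: Node, parent?: Node): boolean {
  // Skip literals: their value is already visible in the source.
  if (node.kind === "literal") return false;

  // For a trivial assignment like `var a = b`, only annotate the
  // left-hand side; annotating the right-hand side would just repeat
  // the same value.
  if (
    parent?.kind === "assignment" &&
    parent.right === node &&
    node.kind === "identifier"
  ) {
    return false;
  }

  // Default: show the value.
  return true;
}
```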

As you can imagine, hardcoding a long list of such heuristics is error-prone and expensive, and there will always be rules that simply won’t occur to us. This is why we think an ML approach is the right option to make Alfie truly smart.

Communication principles

Avoiding redundant information is only one aspect of what a smart visualiser should do. Long before us, philosophers and linguists studied what makes human communication effective and came up with other good rules of thumb, all of which seem to apply to this use case.

Paul Grice (1913–1988)

Among others, one can consider the four Gricean maxims:

  • [quantity] give as much information as is needed, and no more
  • [quality] do not give information that is not supported by evidence
  • [relation] only say things that are pertinent to the discussion
  • [manner] be as clear, as brief, and as orderly as possible

The above rules may sound obvious because humans are relatively good at sticking to them. Rule-based machines, however, can be quite terrible at showing the right quality and quantity of information in a clear and relevant way.

This is where we believe a deep learning approach can eventually make all the difference. Symbolic AI would typically struggle to encode and apply good high-level principles. But Machine Learning models, when trained on the right data, might very well learn to predict what information will or won’t matter to the user in different contexts. And Deep Learning models sound particularly promising to us, because we’ve seen them perform well on other code-related tasks.

Baby steps towards ML automation

If you’ve read this far, you’re hopefully starting to get an idea of why we find ML relevant to the task at hand. Now here is how we’re aiming to turn Alfie v1 into a smarter Alfie.

A baby giraffe. Or maybe it’s a baby zebra. Not sure.

Step 0 [done] — Ship a simple (ML-free) visualiser that essentially tries to show as much information as it can fit on a screen.

Step 1 [ongoing] — Listen carefully to user feedback and iterate every day on the underlying heuristics to reduce the level of noise.

Step 2 — Keep developing better and better intuition about what information to show in which context.

Step 3 — Use this intuition to start some feature engineering. Identify the parameters that matter. Find the decision points where an AI would help most. An example of such a decision point would be: when to fold or unfold the visualisation of a complex object.
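
To make the fold/unfold example more concrete, here is the kind of hand-crafted feature vector we have in mind. The FoldFeatures interface and extractFoldFeatures function are hypothetical illustrations, not the features Alfie computes today:

```typescript
// Illustrative features for the "fold or unfold this object?" decision.
interface FoldFeatures {
  depth: number;            // nesting depth of the object in the value tree
  numKeys: number;          // how many properties it has
  serializedLength: number; // rough size of its printed representation
  isInLoop: boolean;        // was it observed inside a loop body?
  distanceToCursor: number; // how far the code is from the user's cursor
}

function extractFoldFeatures(
  value: Record<string, unknown>,
  context: { depth: number; inLoop: boolean; distanceToCursor: number }
): FoldFeatures {
  return {
    depth: context.depth,
    numKeys: Object.keys(value).length,
    serializedLength: JSON.stringify(value).length, // rough printed size
    isInLoop: context.inLoop,
    distanceToCursor: context.distanceToCursor,
  };
}
```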

Step 4 — Update the UI to collect the right data (e.g. letting users fold/unfold what they want) and train simple models on this data. By “simple” models, we’re typically talking about statistical models or flat neural models that are cheap and fast to train. If they perform better than expected, we might just stop here. If not, we’ll use them as a baseline.
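
As a purely hypothetical illustration, the collected data could be as simple as one record per fold/unfold interaction, and the baseline could be a hand-rolled logistic regression over numeric features like the ones sketched above:

```typescript
// Hypothetical logged interaction: a numeric feature vector
// (e.g. the FoldFeatures above, flattened) plus the user's choice.
interface FoldExample {
  features: number[];    // e.g. [depth, numKeys, serializedLength, ...]
  userUnfolded: boolean; // did the user unfold this object?
}

// A tiny logistic-regression baseline, trained with stochastic gradient descent.
function trainFoldModel(
  examples: FoldExample[],
  dim: number,
  epochs = 100,
  learningRate = 0.01
): number[] {
  const weights = new Array(dim).fill(0);
  for (let epoch = 0; epoch < epochs; epoch++) {
    for (const { features, userUnfolded } of examples) {
      const z = features.reduce((sum, x, i) => sum + x * weights[i], 0);
      const predicted = 1 / (1 + Math.exp(-z));         // sigmoid
      const error = (userUnfolded ? 1 : 0) - predicted; // gradient of log-likelihood
      for (let i = 0; i < dim; i++) {
        weights[i] += learningRate * error * features[i];
      }
    }
  }
  return weights; // predict "unfold" when sigmoid(w · x) > 0.5
}
```

A model this small is cheap to train and easy to inspect, which is exactly what we want from a baseline before reaching for anything deeper.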

Step 5 — Do less feature engineering and investigate Deep Learning models. Among others, look at graph-based neural net architectures (we’ve had some success with those in the past — see for instance this technical talk). Look at the problem as an end-to-end task. Allow the deep learning system to figure out its own features.
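
To give a rough idea of what “graph-based” means here (a sketch of the general idea, not our actual pipeline), the program would be fed to the model as a graph of AST nodes and relations rather than a flat token sequence:

```typescript
// Illustrative graph encoding of a program for a graph neural network.
// Each AST node becomes a graph node; edges capture different relations.
interface ProgramGraph {
  nodeLabels: string[]; // e.g. "VariableDeclaration", "Identifier"
  edges: Array<{
    from: number; // index into nodeLabels
    to: number;
    kind: "ast-child" | "next-token" | "same-variable";
  }>;
}

// `var a = b` as a tiny graph:
const example: ProgramGraph = {
  nodeLabels: ["VariableDeclaration", "Identifier:a", "Identifier:b"],
  edges: [
    { from: 0, to: 1, kind: "ast-child" },
    { from: 0, to: 2, kind: "ast-child" },
    { from: 1, to: 2, kind: "next-token" },
  ],
};
```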

Step 6 — Improve the training process. Consider transfer learning. Keep improving the data collection. Try and build an active learning loop. Refine the metrics and the targets.

We’re only in the middle of this roadmap and the remaining steps are still speculative at this stage, but your early feedback is essential for us to keep pushing in the right direction. So if you haven’t yet, please do try https://alfie.prodo.ai/ and make sure to hit the “Give us feedback” button!
