AI Automation of Software Development

Why CodeBuilders are such a compelling enterprise architecture

This is part 5 of a series. I encourage the reader to start at the beginning or jump here.

Summary

A lot of folks these days are developing software that uses AI. I’m talking about something more meta: using AI to help develop the software itself.

While Silicon Valley was chasing social media, NYC was picking itself up after the 2008 crash and overhauling how enterprise software is built. Unlike many bubble-fueled tech ideas, CodeBuilder platforms evolved through crisis and necessity.

To recap the series, some of the benefits of a CodeBuilder architecture are:

  • Modern management of enterprise code (no more flat files!)
  • Switchboard for unbundling enterprise capabilities
  • Immersive Agile collaboration
  • Live Agile turnaround
  • A medium to separate ‘internal’ code from ‘external’ code
  • On-the-fly debugging and patching
  • Reduced need for traditional DevOps
  • Homogeneous medium for data management
  • Clear separation of configuration from function

Etc etc; the list goes on and on (depending on the implementation), which is to be expected if you are catching up on 50 years of stagnation. But since you are now in the building, why not take it to the top floor?

The Age of Weaponized Software

Earlier I described this genre as the ‘nuclear weapon’ of software. I’m not trying to sell anyone on CodeBuilders so much as provide a heads-up. There are a lot of pundits vaunting the glories of automation who have not faced the other side of this work:

The other side of automation

I started writing this series after sending a lot of highly paid software developers packing and realizing I couldn’t even articulate what they were up against. But my philosophy is this: rather than complain about Goldman Sachs and JP Morgan’s global domination after 2008, why not learn how they did it? I imagine in the old days of Carnegie Mellon or MIT, DARPA would have preferred to keep CodeBuilders under ITAR, but alas Wall Street beat them to the punch. Of course, this genre is still emerging and there will be setbacks and other disruptive technologies to come along, but I see this as a milestone, where the brilliant work of Alan Kay finally comes of age.

There is nothing inherently “AI-ish” about CodeBuilders. Rather, I believe they are a critical enabler of automation. Getting good at AI requires a fundamental retooling of how software is built, and Satya Nadella lamented as much at DLD in Munich earlier this year.

Intent

Automation is a big reason why we use computers in the first place. However, we need a way to tell a machine what we want it to do: our “intention”. Industry pioneer Charles Simonyi envisioned something called “intentional programming.” But how do I express ‘want’ in the software world? This is not unlike the debate over imperative vs declarative programming. We need to separate what we want from micro-managing how to do it.
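To make the distinction concrete, here is a toy Python sketch of the same small task written both ways: the imperative version micro-manages the ‘how’, while the declarative version states the ‘what’ and leaves the mechanics to the language.

    # Imperative: spell out, step by step, *how* to collect the even squares.
    even_squares = []
    for n in range(10):
        if n % 2 == 0:
            even_squares.append(n * n)

    # Declarative(-ish): state *what* we want and let the machinery decide how.
    even_squares = [n * n for n in range(10) if n % 2 == 0]

SQL sits even further toward the declarative end: you describe the result set and the query planner chooses the execution strategy.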

Easier said than done

Suppose I want to display a circle on a computer screen. Rather than write it myself, I use something nifty like pix2code to generate it for me. I take a picture and out pops a program. Let’s say it pumped out 5000 lines of code to do this. Then a newer tool comes along that does it in only 500 lines of code. Is that better or worse? Then another tool does it in 50 lines. Soon I am asking myself: do I really want the computer to write code, or to produce a circle? That is, my original intention might be wrong. How many times have you looked at a piece of code and seen what it is doing but had no clue why? How is a machine supposed to do any better?

Getting Past Science Fiction

The question of whether machines can think is about as relevant as the question of whether submarines can swim — Edsger Dijkstra

Before anyone gets too enamored with hand-waving technologies that magically produce programs, remember that is exactly the sales strategy employed by the Big Four for decades. When something just “pops out”, you run the danger of resurrecting the “Big Bang” waterfall approach to software: producing something that meets contractual requirements but is brittle and otherwise unusable, i.e. Agile is nowhere to be found. Might as well have the machine Google up an image of a circle and display it on the screen. Now what?

As IBM Watson has shown, AI is a tad less sexy once you peek under the hood. Microsoft has AI that actually writes code... but the old questions never go away: Is it crappy code? Which libraries should it use? What design decisions were made? Is the code properly factored? You see, this stuff doesn’t suddenly drop out of the sky; there are clear evolutionary steps. This gets back to programming versus computing. Computing is about problem solving. Intent goes almost all the way down to the hardware level. If AI can draw a circle, it should first know how to draw a pixel. Ouch.
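To make the ‘pixel before circle’ point tangible, here is a minimal Python sketch (the toy framebuffer and function names are mine, purely for illustration): the only primitive intent is set_pixel, and drawing a circle is expressed entirely in terms of it.

    import math

    WIDTH, HEIGHT = 21, 21
    framebuffer = [[' ' for _ in range(WIDTH)] for _ in range(HEIGHT)]

    def set_pixel(x, y, mark='#'):
        # The lowest-level intent: light up one pixel.
        if 0 <= x < WIDTH and 0 <= y < HEIGHT:
            framebuffer[y][x] = mark

    def draw_circle(cx, cy, r):
        # A higher-level intent, expressed purely in terms of set_pixel.
        for angle in range(360):
            rad = math.radians(angle)
            set_pixel(round(cx + r * math.cos(rad)), round(cy + r * math.sin(rad)))

    draw_circle(10, 10, 8)
    print('\n'.join(''.join(row) for row in framebuffer))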

There are practical considerations too. As much as Silicon Valley loves AI hype, a real system might still need to talk to legacy code or do boring stuff like run SQL on a database. We might need a more incremental approach that lends itself to composition.
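As a hedged sketch of what an incremental, composable approach could look like, here is a tiny Python pipeline in which one step is exactly that boring SQL call (the compose helper, the in-memory table and the step names are all hypothetical):

    import sqlite3
    from functools import reduce

    def compose(*steps):
        # compose(f, g)(x) == g(f(x)): chain small steps into one pipeline.
        return lambda x: reduce(lambda acc, step: step(acc), steps, x)

    def load_order_totals(_):
        # The boring legacy-style step: run SQL against a throwaway database.
        with sqlite3.connect(':memory:') as db:
            db.execute('CREATE TABLE orders (id INTEGER, total REAL)')
            db.executemany('INSERT INTO orders VALUES (?, ?)', [(1, 10.0), (2, 25.5)])
            return [total for (total,) in db.execute('SELECT total FROM orders')]

    sum_orders = compose(load_order_totals, sum)
    print(sum_orders(None))   # 35.5

Each step stays small and general; the specialization happens only at the point where they are wired together.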

Back to Agile

Remember Toyota LEAN and Gemba? A similar problem exists in the manufacturing world. Suppose our finished product is a “circle” and machine instructions are the ‘raw materials’ needed to produce that circle. There is something called the cone of specialization:

Software’s factory floor

Machine instructions (on the left) are the raw materials of software. Assembly code can create just about anything you can imagine, from circles to Goldman Sachs quant platforms to robots playing Go. Slinging 1’s and 0’s has so much fresh promise! But it is perhaps too low-level to be of much value, so we combine things into higher-level functions, libraries etc. (such as NPM above) depending on the situation. Unfortunately, each time we do this, we also slightly limit that code’s ability to be used elsewhere. Hence, our range of flexibility (the two thick black lines) narrows as our code increases in specialization (moves to the right). Eventually we arrive at a nice little function that is fantastic at creating circles… but not much else.

The manufacturing world mitigates some of these problems through a number of well-known production optimizations such as Make-To-Order (MTO) and Design-For-Assembly (DFA). How does this translate to the labor-intensive software world? IT managers are keenly aware of the sunk cost of code, inasmuch as it begins to go stale almost as soon as it is delivered. The CIO can only brag about millions of lines of legacy code for so long before shareholders suspect a latent liability. What if software functionality were managed like inventory and delivered Just-In-Time (JIT)? Software professionals have long suspected that much of what developers call ‘code’ is inherently transient in nature: cached for performance reasons, but that’s about it.
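A toy Python sketch of that JIT idea (the class and its names are mine, for illustration only): a capability is not stocked up front; it is materialized the first time someone asks for it and cached like inventory afterwards.

    import importlib

    class JustInTimeToolbox:
        # Capabilities are materialized on first use and cached afterwards,
        # treating code as transient inventory rather than a fixed asset.
        def __init__(self):
            self._cache = {}

        def get(self, module_name, func_name):
            key = (module_name, func_name)
            if key not in self._cache:          # not in stock: produce it now
                module = importlib.import_module(module_name)
                self._cache[key] = getattr(module, func_name)
            return self._cache[key]

    toolbox = JustInTimeToolbox()
    sqrt = toolbox.get('math', 'sqrt')          # pulled in only when demanded
    print(sqrt(2.0))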

AI at Point of Use

I will suggest that AI is far more interested in seeing how code is used than in how it is written. This is why a CodeBuilder is a live system. Once a program is loaded into memory and running, it no longer looks like text. It looks like a graph.
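You can see a weak version of this with nothing more than the Python standard library: parse a snippet of source and what comes back is a graph of nodes, not lines of text. (A CodeBuilder presumably goes much further, but the shift in representation is already visible here.)

    import ast

    source = 'def area(r):\n    return 3.14159 * r * r\n'
    tree = ast.parse(source)

    # Walk the in-memory graph: every node knows its children; none of it is text.
    for node in ast.walk(tree):
        children = [type(child).__name__ for child in ast.iter_child_nodes(node)]
        print(type(node).__name__, '->', children)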

Graphs and Hierarchical Systems

Graph structures have long captured the imagination of Silicon Valley, despite a history of false starts. Humans intuitively understand and communicate via hierarchies. Written communication, management structures, the web, GUI screens, source code, financial charts of accounts etc. all follow hierarchical patterns because that’s how the brain is wired. Paradoxically, however, hierarchies are a suboptimal way to actually ‘manage’ information because they are littered with redundancies and context-specific sub-structures. In the 1960s, IBM IMS started as a wildly popular hierarchical database but proved to be a mess as networks grew. Software code is no different. Relational technology like Oracle swept IMS aside by providing an efficient methodology for data compression (normalization is really just a compression methodology) to make search and update easier. The catch is that relational data must be ‘uncompressed’ back into some sort of hierarchical form to be human-comprehensible (e.g. on forms, reports etc.), and this has often been called the ‘impedance mismatch’ between the RDBMS and imperative software. As any ORDBMS or GraphQL programmer can tell you, these graph structures are mostly temporary in nature. Indeed, research suggests the brain only builds graphs long enough to run a ‘thought’ and then discards them. Maybe software should work the same way.
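A quick illustration of that ‘uncompression’ step (the tables here are hypothetical): the normalized rows are compact and easy to update, but a human-facing form wants the hierarchy back.

    # Normalized ('compressed') relational rows...
    customers = [(1, 'Acme'), (2, 'Globex')]
    orders = [(10, 1, 99.0), (11, 1, 12.5), (12, 2, 7.0)]   # (order_id, customer_id, total)

    # ...must be 'uncompressed' into a hierarchy before a form or report can show them.
    tree = {cid: {'name': name, 'orders': []} for cid, name in customers}
    for oid, cid, total in orders:
        tree[cid]['orders'].append({'id': oid, 'total': total})

    print(tree[1])   # {'name': 'Acme', 'orders': [{'id': 10, ...}, {'id': 11, ...}]}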

Incidentally, a particular form of graph execution is called flow-based programming (FBP), invented in the 1970s by J. Paul Morrison. This is why Google’s popular AI package for processing n-dimensional arrays (tensors) is called TensorFlow (TF).
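A minimal FBP-flavored sketch in Python, far short of Morrison’s full model but enough to show the shape: independent nodes connected by queues, with packets flowing through the graph rather than a single thread of control calling functions.

    import threading
    from queue import Queue

    def node(transform, inbox, outbox):
        # A tiny FBP-style process: read a packet, transform it, pass it downstream.
        while True:
            packet = inbox.get()
            if packet is None:              # sentinel: end of stream
                outbox.put(None)
                return
            outbox.put(transform(packet))

    a, b, c = Queue(), Queue(), Queue()     # the 'wires' of the graph
    threading.Thread(target=node, args=(lambda x: x * x, a, b)).start()
    threading.Thread(target=node, args=(lambda x: x + 1, b, c)).start()

    for packet in [1, 2, 3, None]:
        a.put(packet)
    while (result := c.get()) is not None:
        print(result)                       # 2, 5, 10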

Of Graphs and Computing

Remember I said CodeBuilders are wired using ‘computing’, not programming, because you almost have to when new code can be dropped in at any time. Programming models are starting to outgrow static control-flow paths and increasingly let machines do the driving. But frameworks like TF are largely black boxes. Languages lag badly: monadic structures to retain and manipulate computing state remain nebulous and are a long way from becoming easy to use or first-class structures (at least outside Haskell). NYC was an early adopter of a functional optimization technique called memoization that stores and tracks interim model state. The analogy in general computing (and getting back to the cone of specialization) is something called ‘function currying’. Wall Street found an interesting way to persist and manage graphs of curried functions.
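In plain Python the two ingredients look something like this (functools.partial gives partial application rather than currying proper, but it is the everyday approximation; the pricing function is hypothetical):

    from functools import lru_cache, partial

    @lru_cache(maxsize=None)        # memoization: interim results are stored, not recomputed
    def price(book, scenario):
        print('running model for', book, scenario)     # stand-in for an expensive model run
        return hash((book, scenario)) % 1000

    # Specialize the general function by fixing its first argument...
    price_book = partial(price, 'rates-book-42')
    print(price_book('base'), price_book('stress'))
    print(price_book('base'))       # cache hit: the model does not rerun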

To visualize the end result, it might be helpful to get out of the weeds and look at some specific examples of CodeBuilder-like technology outside the world of finance. Otherwise, I run the risk of getting this series stuck in an endless loop, comparing CodeBuilders to tables.

DevOps

DevOps packages like Ansible will detect code changes and push them out via a system of “recipes” and “crafting” (e.g. Playbooks). In this respect, we see a clear separation of configuration from code. Note that Ansible itself uses Python (like Athena or Quartz), so we see how a late-binding language might facilitate the software assembly process. As hotloaders improve, the notions of development and runtime will continue to converge into a single environment. Note that Ansible is not very ‘collaborative’ because it is downstream from the core development process (save for the usual ChatOps), but you could argue the same is true when working with any existing library, in that the original authors finished their work long before you came along.
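As a small illustration of late binding in that spirit (the call helper is mine, not an Ansible feature): the ‘wiring’ is just a dotted string that could live in configuration, and it is resolved into code at the moment of use rather than at build time.

    import importlib

    def call(dotted_name, *args):
        # Late binding: resolve 'module.function' at call time, not at build time,
        # so newly dropped-in code is picked up the next time it is invoked.
        module_name, func_name = dotted_name.rsplit('.', 1)
        module = importlib.import_module(module_name)
        return getattr(module, func_name)(*args)

    print(call('math.sqrt', 2))                       # the wiring is just data
    print(call('os.path.join', 'logs', 'app.log'))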

Gaming

Gaming systems are inherently collaborative, event-driven and non-von Neumann in nature. And these days even the most basic gaming engine supports some sort of AI. Which brings us to engines like Unity that bring code, configuration and runtime under one roof. Although Unity has only limited hot-patching, it does have a compelling ‘Collaborate’ feature. You could argue that Unity is an emerging “CodeBuilder” architecture for gaming.

Arma 3 — Bohemia Interactive

Minecraft

Getting back to Ansible crafting, it is worth looking closer at builder systems like Minecraft. Here you have a working implementation of the Agile Lego ‘dream’… completely ignored by the software community. It is equally baffling that the hardware community has not produced a similar solution: a basic set of building blocks with virtually unlimited composability. In fact, you can build a computer in Minecraft:

Hello World

I would suggest this is a mathematical proof that we have computer architecture backwards. Minecraft is a textbook example of how a system should manage complexity as it grows. Moreover, it is non-von Neumann. There is a clock in Minecraft, but no compelling need to attach things to a linear instruction set or control flow. I realize academic researchers find all this decidedly banal, but the best ideas are always the simple ones. Remember the lesson of Excel.

Control Flow, Assembly and Automation

I believe the biggest unexploited secret to software reusability and AI automation is the ability to divert and manipulate control flow. How do you hot patch a running program with new code and have it automatically incorporated? How often do you see a piece of code but are unable to “get to it”? These dilemmas happen all the time in the flat-file world, but in machine memory you know exactly where things are, and with the right sort of CodeBuilder architecture you can jump all over the place. Almost like the human brain.
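A trivially small Python sketch of the idea (the dispatch table is hypothetical): because calls are diverted through a lookup rather than hard-wired, new code dropped into the live process is picked up on the very next pass.

    def greet():
        return 'hello from v1'

    handlers = {'greet': greet}     # indirection: calls go through a lookup table

    def dispatch(name):
        return handlers[name]()     # whatever the table points at *now* is what runs

    print(dispatch('greet'))        # hello from v1

    def greet_v2():                 # hot patch: drop new code into the live process
        return 'hello from v2'
    handlers['greet'] = greet_v2

    print(dispatch('greet'))        # hello from v2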

Looking Ahead

Until now, coding technology has stagnated. The days of Bell Labs and Xerox PARC are long behind us (although CodeBuilders bear an uncanny resemblance to Smalltalk-80), and those sorts of nebulous research playgrounds would never have been funded in today’s TTM-driven VC climate. While there is a proliferation of new programming languages, and virtually a fresh front-end stack each week, most of them essentially rehash work done decades ago and don’t address the underlying nature of coding.

Given the nice margins enjoyed by various Agile consulting shops, there is little business incentive to innovate much of anything beyond the marginal gains necessary to keep competition at bay. Automation is largely viewed as a threat by providers and IT shops alike. The game theory simply doesn’t work: what rational programmer would automate themselves or their boss out of a job? Even where software development has been outsourced, you still see hundreds of developers manually building systems. It is not hard to find body shops that deliberately eschew automation despite obvious IRR potential, partially because they lack the skills and partially because it complicates the accounting. There is also the ethical dilemma of replacing programmers with machines, which is a bit ironic for an industry that has been doing exactly that to everyone else for a long time.

When do enterprise CodeBuilders become mainstream? History suggests it usually takes a crisis. Recall that Goldman Sachs and JP Morgan were really the only two visionary players in this space prior to the 2008 crash, and their technology edge gave them global domination because they were the only ones who could react fast enough to changing business conditions. Only as their world was collapsing did NYC peer down that dark Gemba “hole” and belatedly question conventional Silicon Valley dogma. Perhaps it will take an increasing threat from platforms like Amazon to force companies to retrench and consider themselves software-first. Particularly in the age of activist central banking, economic conditions can shift radically in just a few months and the enterprise has to be able to retool quickly. I see this as a weakness of established players like Oracle and SAP and why they are so vulnerable to automation (at least in my experience). The era of multi-year projects is over. M&A alone will force the issue.

Part 6 is here.


Ed Note: A lot of my career was spent in the strange world of aerospace SGML and its variants… yes, the same shadowy SGML that gave birth to HTML, XML and the World Wide Web. That’s why I’m so obsessed with moving beyond flat files for programming.