“Live Versioning” — a stable product whilst innovating

One of the problems with rapid innovation on a product is that it can destabilise the code and introduce bugs. This article describes a software engineering technique I call “Live Versioning” which gives you the best of both worlds: a stable product and rapid innovation.

Some software engineering techniques are steeped in history. They grew up at a time when it was important to keep programs small, simply because computers could not run large ones. That is no longer true: memory, disk space and CPU cycles are all cheap; it’s software developers who are expensive. And yet many techniques have not evolved. They need to.

The idea is to divide your code into two component types (illustrated in the sketch after this list):

· Versionable code, which implements the product’s main functionality and changes over time as new functionality is added.

· Non-Versionable Dependent code, which changes less often; this will often be common code used by the main code base (e.g. libraries).
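As a concrete illustration, here is a minimal sketch in TypeScript; every name in it (`normaliseText`, `handleMessage`, the version labels) is hypothetical, invented for this example:

```typescript
// Non-Versionable Dependent code: a shared library that is append-only.
// Rule: existing functions are never edited; new functions may be added.
function normaliseText(input: string): string {
  return input.trim().toLowerCase();
}
// Appended later for V2; the function above stays untouched, so V1 cannot break.
function normaliseTextKeepCase(input: string): string {
  return input.trim();
}

// Versionable code: each version is a self-contained handler.
// V1 is frozen once V2 ships; V2 is the newer, possibly less stable version.
const v1 = {
  handleMessage(msg: string): string {
    return `You said: ${normaliseText(msg)}`;
  },
};
const v2 = {
  handleMessage(msg: string): string {
    const text = normaliseTextKeepCase(msg);
    return text.endsWith("?") ? `Good question: ${text}` : `You said: ${text}`;
  },
};

console.log(v1.handleMessage("Hello?")); // You said: hello?
console.log(v2.handleMessage("Hello?")); // Good question: Hello?
```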

The idea is to run multiple versions of the Versionable code at the same time, with different users getting different versions of the product (if you have a mechanism for recognising users, of course; otherwise a different URL is used for each version). Most users get the stable, older version, whereas beta testers, or anyone who opts in, can choose to run the latest version, which may be less stable. Additionally, a sysadmin should be able to choose which version each user gets: if a bad bug is found in the latest version, for example, the sysadmin can downgrade all users to an earlier one. A sketch of this version selection appears below.

Traditionally this sort of version testing has been done with multiple servers, where beta testers use a different server on which all the software may be different. With Live Versioning, only the Versionable code is different. There are rules on how the Non-Versionable code can be changed: existing code cannot be edited, but additional functions can be added. In this way there is no possibility of the older code being broken by changes made for the new version, as happens regularly with current techniques. The result is a very flexible and very stable system where innovation can be added and controlled at will.
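Here is a minimal sketch of that version selection, continuing the hypothetical TypeScript example above; the user IDs, version labels and `downgradeAll` function are all invented for illustration:

```typescript
type Handler = { handleMessage(msg: string): string };

// The live versions, oldest to newest (stubs standing in for v1/v2 above).
const versions = new Map<string, Handler>([
  ["v1", { handleMessage: (m) => `v1: ${m}` }],
  ["v2", { handleMessage: (m) => `v2: ${m}` }],
]);

let stableVersion = "v1";                      // what ordinary users get
const userVersion = new Map<string, string>(); // explicit per-user choices

// A beta tester opts in to the latest, possibly less stable, version.
userVersion.set("beta-tester-42", "v2");

// Sysadmin override: downgrade everyone if a bad bug shows up in the latest.
function downgradeAll(to: string): void {
  for (const user of userVersion.keys()) userVersion.set(user, to);
  stableVersion = to;
}

function handlerFor(userId: string): Handler {
  const chosen = userVersion.get(userId) ?? stableVersion;
  return versions.get(chosen) ?? versions.get(stableVersion)!;
}

// Two users, two versions of the product, one running process.
console.log(handlerFor("alice").handleMessage("Hello?"));          // v1 (stable)
console.log(handlerFor("beta-tester-42").handleMessage("Hello?")); // v2 (latest)
```

The same lookup could just as easily be keyed off a URL path (e.g. a hypothetical `/v2/chat`) for users you cannot identify.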

I’ve used this approach on a chatbot system where some users can choose which version they run, and I can try out new user interfaces on a small number of users before releasing them to the entire user base.

The only issue I’ve had with this approach is with more traditional software engineers who don’t understand what it is trying to achieve. One principle of software engineering is well-structured code, and this is often measured using tools that look for code duplication. With this technique there is lots of code duplication: most of the code in versions V1, V2 and V3 will be duplicated. For me this is not an issue; the code is well structured. Occasionally a bug is discovered and the same fix has to be applied to every version, but with modern IDEs this is neither difficult nor time consuming. Tools that measure code duplication (e.g. Sonar) are a menace to this technique and should not be used on this type of project, or, if they are used, they should be used by programmers who understand what is happening and not for management quality reporting.

One final thing: depending on how rapid the innovation is, there will come a point where the oldest code is thrown away because it’s no longer needed. Here’s a case where duplicate code eventually dies anyway.
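Continuing the hypothetical sketch from earlier, retiring the oldest version is just a matter of removing it from the registry and moving any users still pinned to it onto a surviving version:

```typescript
// Retire v1 once v2 has matured into the stable version.
stableVersion = "v2";
versions.delete("v1");
for (const [user, ver] of userVersion) {
  if (ver === "v1") userVersion.set(user, stableVersion);
}
```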

I have successfully used this technique on a large project and ended up with a very stable solution whilst doing rapid innovation. The only issue I’ve had is with people who don’t understand the technique.

Comments please!!!