Docker’s keynote story is great… until the software auditors knock on the door.

I was reading this Docker blog earlier about the Day 2 Keynote at DockerCon 2017. It highlights the April 2017 announcement that Oracle would be one of the content providers for the new ‘Docker Store’.

Indeed, Oracle’s stuff is there already.

This is, I’m sure, a welcome development for anyone wishing to deploy Oracle stuff in a Docker container. These are official Oracle Docker images, and they will probably save people a lot of manual effort (or persuade them not to go down the more questionable path of using an unofficial image from the public Docker Hub; such images do exist, by the way, even though they feel akin to buying a CD-R with “ORACLE” scrawled on it in marker pen).

The keynote included a hypothetical demonstration scenario, in which one of those Oracle images was used. The story goes something like this:

  • Two developers are grabbed by the CEO (whose neck they have already saved in a previous phase of the demo, of course), because the company has just acquired another business.
  • The company they’ve bought uses a ‘traditional on-premise application’, and there is concern that the cost of migrating it into modernity will impact the next day’s merger announcement.
  • Once again, the hapless CEO is saved by the clever developers, who use Docker’s tooling to enact a rapid transformation of the clunky old application. Hooray, now it can run in Docker containers and be deployed to the cloud!
  • Oh, and one last thing… the application uses Oracle, so in a flash, they pull an Oracle database server image off the Docker Store, and deploy it to run with the application.

Okay, this is an impressive story (notwithstanding the argument that HypotheticalCorp might want to ask a few big searching questions of their merger due-diligence people). But there’s a serious gotcha here: as any Software Asset Manager could point out, these actions could have just cost the company a pretty staggering amount of money. How? By falling foul of Oracle’s notoriously complex licensing system.

Let’s look at a few key points here:

Firstly, the company has just merged with another company. You might as well paint a big bullseye on your HQ and make every member of staff tape a piece of paper reading “AUDIT ME” to their back, because a merger is one of the primary triggers of vendor compliance checks.

Secondly, this application is now moving from a dusty old on-premise installation to whatever “the cloud” looks like in this case. That might be a public cloud provider, in which case Oracle’s cloud licensing policy (PDF) has you covered, provided you are using Amazon or Microsoft Azure (if not, then congratulations, you’ve entered the well-known software licensing realm of “who the hell knows?”). You’re still going to be paying money, but at least it’s only for what you use, so just watch that sprawl, okay?
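For what it’s worth, here’s a rough sketch of how that “only what you use” counting worked for AWS and Azure at the time of writing, as I understand Oracle’s authorised-cloud-environment policy (two vCPUs per Processor license when hyper-threading is enabled, one vCPU per license when it isn’t). The instance size is a made-up example, and the rules may well have shifted since:

```python
import math

# Sketch of Oracle's vCPU counting for "authorised" public clouds (AWS/Azure),
# as I understand the policy at the time of writing. The instance size is made
# up; check the current policy document before relying on any of this.

def cloud_licenses(vcpus: int, hyperthreaded: bool = True) -> int:
    """Processor licenses needed for one cloud instance."""
    # Two vCPUs per license with hyper-threading, one vCPU per license without.
    return math.ceil(vcpus / 2) if hyperthreaded else vcpus

print(cloud_licenses(16))          # hypothetical 16-vCPU instance -> 8 licenses
print(cloud_licenses(16, False))   # same size without hyper-threading -> 16
```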

The real issue comes if the stuff is being deployed to a cloud hosted on some identifiable physical infrastructure; in other words, a private cloud or an in-house virtualised estate. Now, things can get very, very expensive.

(A quick history lesson: until some time last decade, it was very easy for big middleware software vendors to figure out how much to charge their customers. They’d find all the metal-boxes-with-blinking-lights on which their stuff was deployed, count the CPU cores, and charge accordingly. But then VMware and others came along and muddied the waters. Because their customers could now wrestle a whole lot more effective utilisation out of those metal boxes, many of those vendors had to change their formulae to avoid being paid a lot less money. To run those products on VMs, customers now needed to purchase expensive licenses corresponding to the capacity of the physical machines underpinning the virtual environment.

As virtual technologies evolved, it just kept getting worse. Many companies simply failed to manage it, often through carelessness or ignorance. Software auditors filled their boots, creating a whole new software sub-industry worth literally billions of dollars. It’s a cycle of complexity that just gets worse with each new development in infrastructure. I wrote about this stuff five years ago, and that article still holds true: IBM now has something like 50 different virtualisation capacity counting methods.

Anyway, we’ll just pause for a moment to let the Open Source people stop laughing).

The specific issue with Oracle is that they have a pretty hardline approach to what needs to be licensed, as defined in their ‘Partitioning Policy’. If you’re using one of the technologies that document lists as “hard partitioning” (capped Solaris Zones or IBM LPARs, for example), then you’re in luck: you only need to pay license fees based on the CPUs underpinning those partitions.

However, pretty much any other virtualisation technology falls into Oracle’s definition of “soft partitioning”, and in that case Oracle want their software to be licensed for every CPU in the physical server farm on which it could run.
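To put some (entirely made-up) numbers on that gap, here’s a back-of-the-envelope sketch in Python. The cluster sizes and the list price are illustrative assumptions, and the 0.5 core factor is the commonly quoted x86 figure from Oracle’s core factor table; check the current table and price list before trusting any of it:

```python
# Back-of-the-envelope comparison of hard vs soft partitioning license counts.
# All numbers below are illustrative assumptions, not Oracle's actual pricing.

CORE_FACTOR = 0.5                 # commonly quoted x86 core factor
LIST_PRICE_PER_LICENSE = 47_500   # illustrative per-processor list price, USD

def licenses_needed(countable_cores: int) -> float:
    """Processor licenses = countable physical cores x core factor."""
    return countable_cores * CORE_FACTOR

# The containerised database itself might be capped to 2 cores on one host...
hard_partition_cores = 2
# ...but the Docker/VMware farm it *could* run on might be 6 hosts x 32 cores.
soft_partition_cores = 6 * 32

for label, cores in [("hard partitioning (capped)", hard_partition_cores),
                     ("soft partitioning (whole farm)", soft_partition_cores)]:
    n = licenses_needed(cores)
    print(f"{label}: {cores} cores -> {n:g} licenses "
          f"(~${n * LIST_PRICE_PER_LICENSE:,.0f} at list)")

# hard partitioning (capped): 2 cores -> 1 licenses (~$47,500 at list)
# soft partitioning (whole farm): 192 cores -> 96 licenses (~$4,560,000 at list)
```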

The plucky developers in Docker’s example might, therefore, have just exposed the company to a huge licensing demand. This has happened before: one notable example is the 2016 spat between Mars and Oracle over the former’s use of VMware.

I’ve presented on numerous occasions to Software Asset Managers and ITSM people about the implications of the rise of containerisation as a de facto virtualisation standard in enterprises. SAM practitioners know only too well how complex Oracle’s licensing gets in virtual and cloud environments, and people in the Docker world are now starting to notice too.

I have to make a disclaimer here: Oracle licensing is bloody complex, and it’s entirely possible that a goalpost or two might have moved by the time you read this.

But this scenario is a really good example of why it’s worth aligning the rapid, wonderful, software-is-eating-the-world brilliance of agile, DevOps and Digital Transformation with some experienced heads from the “old worlds” of ITSM and IT Asset Management, at least once in a while. In Docker’s story, developers saved the day, but a Software Asset Manager might save the company.