Perspectives on Enterprise Software — an ActionIQ miniseries, part 1

Steve McColl · Published in ActionIQ Tech Blog · 8 min read · Jan 24, 2019

I wanted to come back to a topic I referenced in passing when I launched our blog: my love of enterprise software. In writing this article, I realized I was trying to cover too much ground and could do with a little help, so we’re going to publish this as a mini-series with multiple authors:

  • This article will give some insight into the evolution of Enterprise Software as I have experienced it in my career
  • The second article will describe ActionIQ’s approach to developing our product and platform from an Engineering perspective
  • The third article will describe ActionIQ’s approach to developing our product and platform from a Design perspective; written by Raschin Fatemi
  • The fourth article will describe ActionIQ’s approach to developing our product and platform from a Product perspective; written by Justin Debrabant

So let’s start with some simple definitions and build from there.

What is Enterprise Software?

OK, this is quite a broad category, so I’ll throw out a few definitions, with attribution, below:

“Enterprise software, also known as enterprise application software (EAS), is computer software used to satisfy the needs of an organization rather than individual users. Such organizations include businesses, schools, interest-based user groups, clubs, charities, and governments. Enterprise software is an integral part of a (computer-based) information system.” [1]

“Enterprise application software includes content, communication, and collaboration software; CRM software; digital and content creation software; ERP software; office suites; project and portfolio management; and SCM software.” [2]

“Enterprise applications are about the display, manipulation, and storage of large amounts of often complex data and the support and/or automation of business processes that rely on this data.” [3]

My take, and obviously I’m biased and prefer simplicity, is that Enterprise Software is software built to solve the needs of one or more parts of an organization, where there exists a target market of companies willing to purchase it.

Evolution

As the hardware and software landscape has evolved, that evolution has radically impacted enterprise software development. Time for some personal anecdotes that hopefully help tell part of that story.

My father is a civil and structural engineer. Relatively early in his career he decided two things:

1) that working for someone else was never going to make him truly happy, and
2) that CAD was about to revolutionize civil engineering and displace manual drafting

…so, he started his own business and immediately invested a lot of capital in a Roland pen plotter that handled A0 paper (roughly 33 by 47 inches; really quite large and awesome to watch), a couple of PCs, and copies of Autodesk’s AutoCAD software. By that point I was already into all things related to computers, so I became the person who had to install software for him, which primarily involved many, many 3½” floppy disks.

I vividly remember spending many tedious hours slowly loading software and making sure the disks went in in the correct order. I also remember thinking what an amazing undertaking it was for software to be built in California and end up physically in my hands in Scotland. Building a product so stable and portable that it could ship in the 1980s without any ability to patch it in the field was quite a feat for that team of software engineers.

Roll forward many years, and early in my professional career I found myself maintaining a desktop application used by ~1,500 people across 20+ offices around the world. Desktop software was booming: almost every employee had a powerful workstation that was expected to provide the functionality and resources they needed to do their jobs effectively.

Building the software was as simple as pulling the appropriate branch from our version control system, starting the compile, and going to lunch. When I got back from lunch I would run the software locally to confirm nothing unexpected had happened (note to self and readers: this should never, ever be described as “effective testing”!) and then, later in the afternoon, start the deploy to the distributed filesystems from which our users’ workstations sourced our software. When we had a major release, I would also synchronize a login script update with our central IT admin teams so that when people logged in on a Monday morning, their Windows registry settings were updated to point to the new binaries.
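Conceptually, each workstation just read a pointer out of the registry and loaded binaries from wherever it pointed. Here is a minimal Python sketch of that repointing step, using the standard winreg module; the key name, value name, and UNC path are all hypothetical, and the real update was performed by an IT-managed login script rather than anything like this.

```python
# A minimal, hypothetical sketch of the idea: repoint a per-user registry
# value at a new release directory on the distributed filesystem.
# Windows only; uses Python's standard winreg module.
import winreg

APP_KEY = r"Software\ExampleCorp\DesktopApp"          # hypothetical key
NEW_BINARIES = r"\\fileserver\apps\desktopapp\v2.4"   # hypothetical release path

def repoint_to_new_release(new_path: str) -> None:
    """Update the registry value the application reads at startup."""
    key = winreg.CreateKeyEx(
        winreg.HKEY_CURRENT_USER, APP_KEY, 0, winreg.KEY_SET_VALUE
    )
    try:
        winreg.SetValueEx(key, "BinaryPath", 0, winreg.REG_SZ, new_path)
    finally:
        winreg.CloseKey(key)

if __name__ == "__main__":
    repoint_to_new_release(NEW_BINARIES)
```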

The transition from local installation via physical media on a single workstation to remote distribution to a large number of workstations ‘simultaneously’ meant that we could ship software relatively quickly: in hours as opposed to weeks or months. However, it also meant that a ‘bad push’ or any sort of corruption was catastrophic. The tooling for delivering the software itself was clunky, and beyond that there were almost no support tools. This meant that when things went wrong and our application would not start or operate correctly, triaging issues thousands of miles away without remote control or diagnostic tooling was challenging.

Roll forward a few more years and it became apparent that, as the size and scope of data processing increased, the capability-to-cost ratio of local workstations was becoming less and less attractive. Virtual desktop technology solved part of this for the more lightweight needs, but where complex distributed systems were required, the data center and centralized, expensive compute resources began to rule. Around this time, cloud providers offering virtual compute and storage as a service were beginning to compete with company-owned datacenters. However, the cloud providers were so new that trust had not yet been established, and taking advantage of the nascent cloud computing opportunities was out of the question for many established companies.

Leading engineers to build enterprise software whose target market consists of large, established companies is a challenging endeavor, particularly because each company typically has its own ‘preferred solution’ installation requirements. In one of my more recent roles, our software had to be deployed on AWS, VMWare, HyperV, and on ‘bare metal’, depending on the deployment policies of our clients. This platform spectrum brings many challenges for software development, deployment, and support: to sell software across these different environments, the team must be able to test and certify each release on every deployment platform it chooses to support.
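To illustrate the certification burden (a toy sketch, not our actual release tooling), the gate amounts to running the same suite against every supported platform and shipping only if all of them pass. The platform list and the provision_and_test.sh hook below are hypothetical placeholders.

```python
# Toy sketch: certify a release candidate across every supported platform.
# The platform names and the provisioning script are hypothetical placeholders.
import subprocess

PLATFORMS = ["aws", "vmware", "hyperv", "bare-metal"]

def passes_smoke_suite(platform: str, release: str) -> bool:
    """Provision an environment on `platform` and run the smoke suite there."""
    result = subprocess.run(["./provision_and_test.sh", platform, release])
    return result.returncode == 0

def certify(release: str) -> dict:
    """Map each platform to a pass/fail result for this release candidate."""
    return {p: passes_smoke_suite(p, release) for p in PLATFORMS}

if __name__ == "__main__":
    results = certify("2.4.0")
    print(results)
    # The release ships only if it passes everywhere.
    assert all(results.values()), "release blocked: a platform failed certification"
```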

Additionally, when ‘bare metal’ is involved, three new challenges often exist:

  1. the hardware class differs between clients, and therefore performance characteristics vary
  2. the client often has to spend capital to purchase the new machines on which the software will be deployed
  3. the purchased hardware will typically be expected to remain in use for at least three years. During that time, the software company developing the product will likely want to make architecture and processing changes that must either be supported on the older hardware (one common mitigation is sketched after this list), or the software must be branched and moved into an end-of-life cycle
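To make that third challenge concrete, here is a minimal sketch of one common mitigation: gating newer, more demanding code paths on detected hardware capability so that older installs transparently keep the original behavior. This is an illustration only, not any particular product’s approach; the threshold, strategy names, and detection method are all assumptions.

```python
# Illustrative only: gate a memory-hungry processing path on detected hardware.
# The threshold and strategy names are hypothetical.
import os

MIN_RAM_GB_FOR_IN_MEMORY_JOIN = 64  # hypothetical capability threshold

def total_ram_gb() -> float:
    """Best-effort physical RAM detection (Linux)."""
    pages = os.sysconf("SC_PHYS_PAGES")
    page_size = os.sysconf("SC_PAGE_SIZE")
    return pages * page_size / (1024 ** 3)

def choose_join_strategy() -> str:
    # Newer, faster path only when the client's hardware can take it;
    # older installs transparently fall back to the original path.
    if total_ram_gb() >= MIN_RAM_GB_FOR_IN_MEMORY_JOIN:
        return "in_memory_hash_join"
    return "spill_to_disk_merge_join"

if __name__ == "__main__":
    print(choose_join_strategy())
```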

The first and third challenges exist for the lifetime of the software contract and typically generate friction in most future releases.

The second challenge, whilst short-term, can be very acute: pricing, purchasing, and installation need to be worked through, which can impact deal time and, fundamentally, time to value.

The most memorable moment in recent years was a new installation where we sent an engineer to install our software on the hardware that had been purchased by our new client. When he arrived at the client’s datacenter and asked to be directed to the hardware, he was pointed to the loading dock, where the unit was still sitting in its shipping packaging: not ideal, and definitely not ‘install ready’ :-)

Kudos to the engineer in this situation: he got the unit placed in the data center, networked, and powered, and installed our software before heading home, but it was not the simple “plug in the USB” moment he had planned for!

Roll forward to the present day, and many of the security and risk concerns that companies previously had about cloud providers have changed significantly. There are many drivers for these changes:

  1. Improvements in the security and compliance posture of cloud providers:
    https://cloud.google.com/security/compliance/#/
    https://aws.amazon.com/compliance/programs/
    https://azure.microsoft.com/en-us/overview/trusted-cloud/
  2. The continued development of cloud security (e.g. AWS Security) and Cloud Access Security Broker (CASB) solutions that provide increased control and visibility into what is happening within the cloud environment(s)
  3. Risk analysis and positioning changing over time. Here are a couple of examples from large organizations with historically conservative InfoSec postures:
    FINRA: https://www.youtube.com/watch?v=tONmArf07QI
    Goldman Sachs: https://www.youtube.com/watch?v=GtAtairWGTY
    Merck: https://www.youtube.com/watch?v=_jzn9edFyYs

…and so we are now at the part of the story where software buyers face a much lower hurdle, or none at all, when it comes to buying SaaS-based solutions. The benefits are significant for both the client and the software provider:

  • Network, compute, and storage can be viewed as flexible, transient commodities. For the buyer, this means that as the software and their usage change, the infrastructure needs and demands of their workload are factored into the license agreement without the need for major renegotiation or purchasing moments. For the seller, it means greater flexibility and opportunity to take advantage of capability and pricing improvements when cloud providers offer them. It also reduces the impact of future architectural change, since the underlying system and hardware requirements are abstracted away from the client
  • It is easier for the software provider to provide a resilient, highly-available service without a significant capital investment
  • The security of cloud-hosted solutions, when built correctly, is better than that of datacenter-hosted solutions. Beyond cloud providers’ investment in physical data center and supply-chain security, software-defined networking and transient compute assumptions lead to a ‘cattle not pets’ approach to infrastructure, which typically favors the ‘latest and greatest’, and with that come up-to-date operating systems and software versions

Present Day

…and so now I find myself at ActionIQ — building a product tailored to the needs of marketing teams in large organizations with many customers.

Why do I love it so much? Part of the answer is the constant challenge that comes with leading engineering teams in a domain as dynamic as enterprise software. The other part I will leave hanging for part 2!

If this type of challenge excites you and you are looking for new opportunities, please do reach out to jobs@actioniq.com. If you have an enterprise software anecdote to share or a comment on the story, please do contribute; we’d love to hear your perspective!

Steve leads Engineering at ActionIQ and loves building excellent enterprise software products and excellent engineering teams. He’s been doing this for a while across a number of different industries, and has built software ranging from the “good old days” of building websites in Perl and writing business logic in stored procedures, to large distributed data-driven systems, mobile and desktop apps, and reporting tools.
Outside of the office, he spends most of his time with his wife, three daughters, and two cats in Brooklyn, and a little time riding motorcycles or making music.
