DevOps, Speed Kills

Lawrence Manickam
Published in Kuberiter · 6 min read · Apr 8, 2018
Alouette Lake, Maple Ridge BC — Lawrence Manickam


It’s a cloudy period for the IT delivery function at any company. Things are in between the new and the old. The philosophies of Cloud Computing and DevOps are rapidly changing the landscape and the traditional understanding of the IT delivery model.

I worked at GE in Schenectady, NY between 2001 and 2003, implementing their first UNIX/WebLogic infrastructure in the IT Ops department. The leadership was gifted with a clean understanding of both applications and hardware.

There was often frustration with the procurement process, and the leadership at the time suggested having a third-party vendor provide provisioning services on an on-demand basis (Cloud Computing). We used Solaris packages that bundled software for installation across various UNIX servers (DevOps). I have kept in touch with the executive team since leaving GE, and my recent interactions brought back memories of this conversation.

It is astonishing to see the growth of IT provisioning services after all these years. The hardware and software delivery models are now decoupled: cloud components, container technologies and orchestration products deploy applications rapidly without flipping DNS, load balancers and so on. There is no need for a midnight deployment; a Jenkins trigger is enough to pull the code from a cloud code repository, build it in transit and deploy it to the target environment. Multi-cloud orchestration tools such as Kubernetes let you deploy a Jenkins slave pod on the production system to build and deploy the new release, so the middle layer can be eliminated. Speed!!!
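The trigger-pull-build-deploy flow described above can be sketched as a minimal declarative Jenkinsfile. This is only an illustration: the repository URL, build command and deployment name are placeholders, not the setup from the article.

```groovy
pipeline {
    agent any
    triggers {
        // Poll the cloud code repository; a push webhook is the faster option
        pollSCM('H/5 * * * *')
    }
    stages {
        stage('Checkout') {
            steps {
                // Hypothetical repository URL
                git url: 'https://github.com/example/app.git', branch: 'main'
            }
        }
        stage('Build') {
            steps {
                // Build in transit; assumes a Maven project
                sh 'mvn -B clean package'
            }
        }
        stage('Deploy') {
            steps {
                // Roll the new image out to the target environment
                sh 'kubectl set image deployment/app app=example/app:${BUILD_NUMBER}'
            }
        }
    }
}
```

No midnight window, no manual handoff: a commit to the repository is the deployment event.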

DevOps is a tremendous opportunity of our lifetime.

Speed Kills? No, actually these are growing pains. DevOps is here to stay for years to come.

Here are some delivery model areas that are affected.

Software Procurement

No more lengthy meetings or piles of paper-based RFP procurement work. Yes, we download the software from the Internet and use it. The complete DevOps toolchain is made up of open-source software with a few subscription-based cloud services.

At least until 2012, the IT executives of Fortune 500 companies preferred not to use open source; they needed a 1-800 number to call for technical support when required. Even when they used the open-source Apache HTTP Server, they wanted it labeled as IBM HTTP Server.

Though DevOps has saved a lot of time and money from a software procurement perspective, implementing the pipeline requires expensive resources and creates a dependency on them. Attrition, a recession, massive layoffs or, God forbid, another financial collapse could leave the DevOps toolchain non-functional, and the struggle would ensue.

Enterprise Architecture

I’m all in for Enterprise Architecture practices, especially TOGAF. Even though it sometimes felt like a waterfall method because of the long ACB, ARB and other stakeholder meetings, never forget that it helped many organizations escape siloed development methodologies and create a centralized Architecture Repository and IT governance. Several top-level IT executives understand its importance, even though it took a long time to prove the value proposition.

The excitement and short-term happiness of fixing things directly in the DevOps pipeline erodes the discipline we built over the last decade. No more logical, conceptual and physical architecture documents and diagrams.

On-the-fly implementations and configurations.

Before a stakeholder can call for an ARB meeting, the change has already been implemented in DevOps. Though it may look like a cowboy environment, what is happening may introduce a new EA framework or methodology to the DevOps industry. Line managers are forced to research and implement a new set of disciplinary procedures to get the job done.

Project Management

Project management has started to disappear, or at least to function with few resources. Remember those days when the PM wrote down everything, from the procurement of hardware servers, shipping, data center installs, cabling, IP address allocation and file system configuration to storage allocation? I have seen 3,000-line project plans with 10 revisions.

Today’s Cloud Computing trimmed those 3,000 lines to 1,500, and massive DevOps adoption trimmed them further down to 500.

The traditional project plan model, chasing technical resources to estimate hours for each task, no longer works with DevOps. There is no need to write down the code deployment time and assign a resource with hours against it; Jenkins does that job with scheduling and triggering. Several project managers have a hard time understanding the rapid changes and adapting themselves to the ‘Speed’.

The PMO is here to stay, but the reduction in resources will continue. Conflicts will keep arising between the PMO and technology resources until the dust settles.

IT Service Management

Remember those days when everything was about ITIL processes: a few CCB meetings a week, emergency CCB meetings, midnight application rollouts, rollback screams and problem management. The industry went crazy sending employees to get ITIL certifications in order to instill discipline, and the phenomenon still continues in traditional setups.

In DevOps there is no time, and no need, to open a change ticket, approve it, read the approval email and then implement the change. The review process sits inside the code repository. A developer makes a change in the code, another developer approves the code merge, and then the manager approves the pull request to build and deploy the code through Jenkins. Where is the change ticket number? It’s long gone.

Where is the change ticket for the Linux system administrator to install those dev libraries, libstdc++ and mod files on 20 different servers? They are already sitting inside the Docker container. That ticket disappeared too.
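In Docker terms, those per-server install tickets collapse into a few lines of a Dockerfile. The sketch below is illustrative; the base image, package names and application paths are assumptions, not the article’s actual setup.

```dockerfile
# Illustrative base image; pick whatever your application targets
FROM ubuntu:22.04

# The libraries that once required a change ticket per server
# are baked into the image exactly once
RUN apt-get update && apt-get install -y \
        libstdc++6 \
        build-essential \
    && rm -rf /var/lib/apt/lists/*

# Hypothetical application payload and entry point
COPY app /opt/app
CMD ["/opt/app/run.sh"]
```

Every container started from this image carries the same libraries, on 20 servers or 2,000.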

Where are those P1 tickets to fix production failures, and the subsequent RCA mayhem? Cloud elasticity, availability zones, multiple running replicas of application containers, self-healing and intelligent automation have reduced the number of P1 tickets.

IT Operations

I was a WebLogic administrator for 11 years in a row and deployed more than 4,000 iterations of J2EE applications during that period.

Production deployments were lengthy. They mostly happened at midnight; the objective was not to disturb the business, and it also gave us enough time to roll back if something went wrong. Sleepless nights!

We planned production deployments a week early, looked at the vacation calendar to verify resource availability, checked network connections beforehand and conducted pilot testing a few times in the staging environment.

During a production application deployment, the network engineer diverted traffic to one set of servers in the cluster while the deployment was executed and tested on the servers taken offline. The same steps were then executed after flipping traffic back to the servers that had the new version of the application. Back-and-forth deployments. It was not fun, given the database connections, CRM integration, batch servers and other legacy integrations such as the IBM mainframe.

I don’t want to mention the pains of application deployments at the DR environment here.

There is no need to flip the load balancer and DNS anymore.

DevOps and Cloud Computing provide Blue-Green deployments. A complete availability zone can be taken offline, the application deployed, then brought up and tested. The process removes virtual machines and creates new ones for each version of the application. Add Docker with Kubernetes, and the new version of the application can be pushed into production, tested and cut over right away by removing the containers that run the old application.
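One common way to sketch blue-green on Kubernetes (all names below are hypothetical) is to run two Deployments, one labelled blue for the old version and one green for the new, and point the Service selector at whichever is live:

```yaml
# Assumes two Deployments exist: app-blue (old) and app-green (new),
# whose pods carry the matching "version" label.
# Production traffic follows this Service's selector; editing
# "version" cuts over instantly, and editing it back rolls back.
apiVersion: v1
kind: Service
metadata:
  name: app            # hypothetical service name
spec:
  selector:
    app: myapp
    version: green     # change to "blue" to roll back
  ports:
    - port: 80
      targetPort: 8080
```

Once the green version is verified, the blue Deployment and its containers can simply be deleted, which is the removal step described above.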

Not enough storage? Add that volume right away by clicking a few tabs, without discussions about storage forecasting and LUNs, and without a change ticket.
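In Kubernetes, those few clicks reduce to a small PersistentVolumeClaim manifest; the claim name, size and storage class here are placeholders.

```yaml
# Requests 50Gi of dynamically provisioned storage from the
# cluster's storage class; no LUN planning or change ticket involved
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data               # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: standard   # provider-specific class name
```

Applying this manifest is the entire storage request; the cloud provider carves out and attaches the volume on demand.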

Massive changes in IT Operations. I’m thrilled!

Cost Management

Infrastructure cost was always a nightmare for IT executives in a traditional datacenter setup. The datacenter was already piled up with so many tools, and the cost monitoring tools were not very effective.

I remember the days when we bought an appliance for USD $200k, then got rid of it after a few weeks once we realized it didn’t serve our purpose.

Cloud Computing is a blessing. It made infrastructure cost visible. There is metering, validation and a support case for every penny.

It has its disadvantages.

Stakeholders struggle to distinguish between cloud implementation cost and cloud usage cost. For example, keeping a few volumes running may cost a few hundred dollars, whereas removing them and adding them back later may cost a few thousand. A temporary cost saving makes the stakeholder happy, but in the long run it hurts the total budget.

This type of issue will introduce new standards and practices in the coming days.

“The Only Thing That Is Constant Is Change” ― Heraclitus

It’s time to adopt DevOps and keep improving. We fail in the morning, then pass in the afternoon. Fail fast. That’s the beauty of DevOps.

It will certainly create new jobs and innovations. The future looks brighter than before.

Lawrence Manickam is the Technical Founder of Kuberiter Inc, a Seattle-based bootstrapped start-up that provides JDK Services (Jenkins as a Service, Docker as a Service and Kubernetes as a Service). Please visit www.kuberiter.com, subscribe, and try our Docker as a Service module, which is currently available.
