#mainframe chat question 3
What are some of the common misconceptions about DevOps, especially considering legacy infrastructure?
I’ve touched on some misconceptions already in my previous answers, so I’ll focus on a couple that I haven’t yet.

- DevOps isn’t a set of tools.
- DevOps isn’t a new team or organizational structure.
- DevOps isn’t just for greenfield projects.
- DevOps isn’t even about technology.
The number one misconception, deeply embedded in nearly everyone’s consciousness, is that DevOps is about tools and technology. Let’s just dispense with that one right now, shall we? DevOps is fundamentally about business value. We deal with the questions of what provides that value, what stands in the way of providing it, and what does not contribute to it. We maximize value by employing fast feedback loops (continuous delivery) and by identifying the low-, no-, and negative-value parts of the process to eliminate or streamline (eliminating waste, à la Lean). Technology is a tertiary concern: we use it to deliver that value and to eliminate or streamline those low-value aspects of delivery, and we use or build tools as necessary to do that.
Aside from the idea that DevOps is primarily about tools, there is the notion that DevOps means new teams and organizational structures. While it certainly *can* mean new teams and org structures, it doesn’t have to. In fact, standing up a new “DevOps” team that becomes yet another silo, introducing additional wait time, wheel spinning, and friction around hand-offs, is one of the biggest anti-patterns people are falling prey to right now.
We have a lot of amazing tools and implementation patterns, and we’re all investing a lot of time and effort into making them easy for people to adopt. Since tools are the easiest part to understand, we want to start rolling them out as quickly as possible, and charging a new team with that task seems natural to us because we are already stuck thinking in terms of silos and handoffs.
DevOps is fundamentally about realigning around cross-functional teams that own the entire process and stack for some portion of the value proposition our organization exists to deliver. A new team can integrate into those alignments, but it will only get in the way unless we put a lot of effort into changing the pre-existing expectations and beliefs that are the reason we aren’t aligned around value in the first place. The hardest part of this transformation is getting at what exists only inside our people’s heads: their experience and beliefs about what their jobs actually are and who they have to please.
The second misconception I’ll highlight is the idea that you must have DevOps tools available before you can implement DevOps. The first time I built a Platform as a Service, the only tools I had available were bash scripts and the physical hardware I unboxed into our own data center. I built a system in which a new server was plugged into the rack, and the only manual step was recording the server’s MAC address and entering it into our Kickstart system. From there the server would configure itself and take on whatever role we had assigned it, including being inserted into the target environment (dev, test, stage, prod, etc.).
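A rough sketch of that enrollment step, with hypothetical paths, hostnames, and role names standing in for the real ones, looks something like this:

```bash
#!/usr/bin/env bash
# enroll-server.sh -- illustrative sketch only; the paths, the ks.example.internal
# hostname, and the role/environment scheme are hypothetical, not the original scripts.
set -euo pipefail

MAC="$1"    # e.g. 00:1a:2b:3c:4d:5e, recorded from the newly racked server
ROLE="$2"   # e.g. web, db, build
ENV="$3"    # e.g. dev, test, stage, prod

# PXELINUX looks for a config file named after the MAC address,
# prefixed with "01-" and with dashes instead of colons.
PXE_NAME="01-$(echo "$MAC" | tr ':' '-' | tr '[:upper:]' '[:lower:]')"

# Point the new machine at a Kickstart file for its role and environment.
cat > "/tftpboot/pxelinux.cfg/${PXE_NAME}" <<EOF
default install
label install
  kernel vmlinuz
  append initrd=initrd.img ks=http://ks.example.internal/ks/${ENV}/${ROLE}.cfg
EOF

echo "Enrolled ${MAC} as ${ROLE} in ${ENV}; it will configure itself on next boot."
```

In a setup like this, the Kickstart file’s %post section pulls down whatever configuration the assigned role needs, so nothing beyond recording the MAC address has to be done by hand.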
It’s worth mentioning that I have a client doing 400 deploys a day on the mainframe. They don’t do this by slavishly utilizing tools; they do it by flexibly applying the principles and building their own tools when necessary. The mindset is more important than the tools.
There are non-trivial challenges around COTS software, which sometimes requires brute-force hacks to force it into compliance. It is tempting to think the situation is hopeless, exclude our legacy environments from modernization efforts, and focus on grandfathering off these tools, but I think that is a mistake.
Certainly we should be considering whether a long-term dependency on vendors who aren’t keeping up with the times is in our best interest, but I also believe we need a two-pronged strategy here: 1. demand that vendors fix any obscure behavior that prevents automation and environment creation, and 2. innovate our way out, hacking up a solution when necessary. For example, I once had to use OCR to read a JPEG capture of a vendor’s screen in order to set up a mount point. I never did get away from a manual check to make sure it worked properly, because we had gotten rid of the vendor environment before I developed a good test, but it is an example of the out-of-the-box thinking you can apply here.
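To give a flavor of that kind of hack, here is a minimal sketch in the same spirit; the capture filename, the device-path pattern, and the mount point are all illustrative, not the originals:

```bash
#!/usr/bin/env bash
# ocr-mount-check.sh -- illustrative sketch of the OCR hack, not the original.
# Assumes tesseract is installed and the vendor screen has been captured to a JPEG.
set -euo pipefail

CAPTURE="vendor-screen.jpg"   # screenshot captured from the vendor's UI

# Run OCR; tesseract writes the recognized text to vendor-screen.txt
tesseract "$CAPTURE" vendor-screen

# Pull out the device path the vendor UI displayed (the pattern is hypothetical).
DEVICE="$(grep -oE '/dev/[a-z0-9/]+' vendor-screen.txt | head -n 1 || true)"

if [ -z "$DEVICE" ]; then
  echo "Could not read a device path from the capture; fall back to a manual check." >&2
  exit 1
fi

# Use the recovered value to set up the mount point.
mkdir -p /mnt/vendor-volume
mount "$DEVICE" /mnt/vendor-volume
echo "Mounted ${DEVICE} at /mnt/vendor-volume (still verify by hand, as noted above)."
```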
All answers here: https://medium.com/@geek_king/answers-from-idgtechtalk-mainframe-chat-on-devops-d853cf3d786e
