Why Collaboration is Key to DevOps’ Success in a Serverless World
I started my career as a “developer.” When I joined a startup, I was the 12th person on the team, and during that phase I did development, testing, deployment, customer service — pretty much the entire SDLC of a product. To my surprise, of all these tasks, the one I found really challenging was deployment. There is so much to worry about, like zero downtime, rolling out updates, a rollback plan, and monitoring, not to mention recovery and all the responsibility that comes with it (after all, a startup can’t afford to lose paying customers). This started my “Dev”Ops journey: I discovered I wanted to develop my projects with an operational mindset, and to wear the developer hat while working on operations. That’s why, from my perspective, “dev” and “ops” are not so different; in fact, they share the same DNA.
In a traditional developer-and-operations environment, what we generally see is that developers write a piece of code, hand it over to QE for testing, and then leave it to operations for deployment. There is minimal handshaking in this lifecycle. With this approach, the boundaries and roles are well defined, but what it often leads to is chaos. For example, developers may claim that operations missed an XYZ instruction, while the operations team may say they never received complete information, all of it resulting in finger-pointing and unhappy teams.
Historically, operations teams were staffed by network engineers or system administrators who lacked the development perspective. Developers, on the other hand, lack an understanding of operations, as this is not a role they’re accustomed to. The antidote is for developers and operations to collaborate more; that is what it takes to change the status quo.
Bridging the gap between the ‘dev’ and ‘ops’ with automation
It’s because of automation and technologies like serverless that collaboration among developers, operations, and quality engineers is becoming so important. When operations provides development with the necessary CI/CD and configuration management tools, treating deployment as a constant development activity, that’s when we’re able to bridge the gap.
Operations needs to collaborate with developers to understand the requirements, then implement them as code, automation, and scripts, so that each deployment can be done with a single click. Operations teams also need to make their own lives easier by scripting recurring quarterly activities like key rotation, user management, backups, recovery, security implementations, and monitoring. Design your system with self-recovery in mind, and give your developers confidence in what they are building by plugging system testing into each deployment. This makes collaboration with quality engineers much more impactful and reduces their burden.
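The single-click-deploy-with-built-in-testing idea can be sketched roughly as follows. This is a minimal illustration, not any particular team’s pipeline: the `deploy`, `rollback`, and smoke-test hooks are hypothetical placeholders where a real pipeline would call its CI/CD tooling.

```python
# Sketch of a "single-click" deploy driver: deploy a version, run the
# plugged-in smoke tests, and roll back automatically if any test fails.
# All hooks are placeholders for real CI/CD and monitoring calls.
from typing import Callable, List

def deploy_with_checks(
    deploy: Callable[[str], None],
    rollback: Callable[[str], None],
    smoke_tests: List[Callable[[], bool]],
    version: str,
) -> bool:
    """Deploy `version`; if any smoke test fails, roll it back."""
    deploy(version)
    if all(test() for test in smoke_tests):
        return True          # deployment verified by system tests
    rollback(version)        # self-recovery: undo the bad deploy
    return False

if __name__ == "__main__":
    log = []
    ok = deploy_with_checks(
        deploy=lambda v: log.append(f"deployed {v}"),
        rollback=lambda v: log.append(f"rolled back {v}"),
        smoke_tests=[lambda: True, lambda: False],  # one failing check
        version="v1.2.0",
    )
    print(ok, log)  # False ['deployed v1.2.0', 'rolled back v1.2.0']
```

Wiring QE’s test automation in as the `smoke_tests` hooks is what makes every deployment exercise the system tests, so a bad release recovers itself instead of waiting for a human to notice.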
Breaking down the walls between developers, operations, and QE
When I work with various “operations” teams, I often get pushback from operations team members. They frequently remind me that something is a “development” job or a “QE” job, and that operations does not need to contribute. This is a big pain point for me. Since I come from a development background, I can often contribute to development or testing, and I certainly don’t hesitate to do so. I look at it as a product: if I can do something to make the overall development and deployment less chaotic, I go ahead and contribute without worrying about which group it belongs to.
I faced similar struggles at the beginning of my Adobe career. The same boundaries were drawn there, and I found it disheartening. To address this, I discussed my concerns with my managers, and we worked together to create a cultural change within the company. Developers, QE, and operations now work together as cross-functional teams. As a team, we all contribute to automation, and we’ve seen many benefits.
By empowering developers and QE with automation, we are able to perform QSR activities, single-click deployment, monitoring, testing, and disposal of clusters, all of which reduces operational overhead and helps avoid human error. Our deployment also triggers QE’s test automation scripts, which reduces the burden on QE. Another big benefit is that everyone takes responsibility, as a team, for what’s deployed in production. This is a big departure from operations solely taking ownership, and it ends the blame game that’s all too common. Teams that work this way are much happier and more satisfied with their work overall.
With serverless, DevOps needs to be built right into your strategy from day one
As we move to serverless technologies, the operations world is managed by vendors like AWS, Azure, and GCP, with only the functions remaining the responsibility of developers. With the serverless approach, it’s virtually impossible to write any code without at least considering how it will be executed and what other resources it requires to function.
While the DevOps movement promotes collaboration between development and operations teams, with serverless it’s simply not possible to separate the two. With serverless, security has to be considered as soon as a function is deployed, and each function has to have a security policy associated with it. It’s becoming very clear that the serverless model makes operability part of the normal development cycle. With serverless, you can’t just “adopt” a DevOps mentality; it needs to be built in.
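One way to picture “each function has to have a security policy associated with it” is a deploy step that simply refuses functions without one. This is an illustrative model, not the AWS, Azure, or GCP API: the registry and the IAM-style policy shape are invented for the sketch.

```python
# Illustrative model of operability built into deployment: a serverless
# function cannot be deployed without an attached security policy.
# The policy shape is hypothetical, loosely modeled on IAM-style statements.
class MissingPolicyError(Exception):
    """Raised when a function is deployed without a security policy."""

def deploy_function(name, handler, policy):
    """Register `handler` as function `name`, requiring a non-empty policy."""
    if not policy or not policy.get("statements"):
        raise MissingPolicyError(f"{name}: no security policy attached")
    return {"name": name, "handler": handler, "policy": policy}

if __name__ == "__main__":
    fn = deploy_function(
        "resize-image",
        handler=lambda event: event,
        policy={"statements": [{"effect": "Allow",
                                "action": "s3:GetObject",
                                "resource": "arn:aws:s3:::example-bucket/*"}]},
    )
    print(fn["name"])  # resize-image
```

Making the policy a required argument of the deploy step, rather than an afterthought, is the point: security review happens in the development cycle, not after handoff.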
Recently, I attended the Serverless Conference in San Francisco, and I saw that this is the direction the industry is moving in. It was a clear reminder that the industry is changing, and that we need to change our culture and mindset to move with it into a more collaborative world of development, operations, and quality engineering.