Robservations from DevOps World | Jenkins World 2019

Rob Cuddy · Published in AppScan
Aug 28, 2019 · 5 min read

Recently (Aug. 2019), I had the privilege of attending my first DevOps World | Jenkins World conference in San Francisco, CA. This event began in 2017 and brings together Jenkins users from all over the world. It has grown and expanded to include all aspects of DevOps, and I wanted to share a few of my “Robservations” from my time there.

Value Stream Mapping Has Real Potential

Value Stream Mapping itself is not a new idea; applying it to DevOps and software delivery pipelines, however, is. Value Stream Mapping comes from Lean Manufacturing and is the practice of analyzing how work moves through a process to produce something useful for a customer. If you have read The Phoenix Project, think of the part of the story where Eric takes Bill to the catwalk overlooking a manufacturing plant floor and uses it to illustrate how work flows from one side of the plant to the other. Two key principles emerge from that encounter: know how work enters and leaves each part of the process, and know where the bottlenecks are so that the flow of work can be optimized. Those same principles are being applied to DevOps pipelines today, and the challenge for teams is being able to visualize how work flows through the pipeline.
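To make that concrete, here is a minimal sketch, in Python with entirely made-up stage timestamps, of the core calculation behind a value stream map: measure how long work sits between stages and surface the largest gap as the bottleneck. A real tool would pull these events from issue trackers, source control, and CI servers rather than a hard-coded list.

```python
from datetime import datetime

# Hypothetical stage timestamps for one work item moving through a pipeline.
stage_events = [
    ("requirement approved", "2019-08-01T09:00"),
    ("design complete",      "2019-08-05T17:00"),
    ("code committed",       "2019-08-07T11:00"),
    ("build passed",         "2019-08-07T11:20"),
    ("security scan passed", "2019-08-09T15:00"),
    ("deployed to prod",     "2019-08-12T10:00"),
]

times = [(name, datetime.fromisoformat(ts)) for name, ts in stage_events]

# Elapsed hours between each consecutive pair of stages.
durations = [
    (f"{a[0]} -> {b[0]}", (b[1] - a[1]).total_seconds() / 3600)
    for a, b in zip(times, times[1:])
]
for stage, hours in durations:
    print(f"{stage}: {hours:.1f} h")

# The longest wait is the first bottleneck to investigate.
bottleneck = max(durations, key=lambda d: d[1])
print(f"Bottleneck: {bottleneck[0]} ({bottleneck[1]:.1f} h)")
```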

There were four sessions, a workshop, and at least two vendors showing solutions related to value stream mapping and pipelines. Because this conference was primarily made up of Jenkins users, most of the value stream discussions I saw centered on integrations with Jenkins, with the value stream beginning at the moment code was committed. That is a big step in the right direction; however, I believe it is important to be able to see all the parts of the software development process, including elements like requirements and design, in the value stream. For that reason, UrbanCode Velocity stood out to me: its running demo went end-to-end and seamlessly included security scans as part of the value stream. This had the added bonus of visualizing security tests just like other quality metrics.

As organizations continue to look for ways to improve customer satisfaction and deliver faster, identifying and dealing with process bottlenecks is vital. Expect to see an increase in the number of tools and conversations around value stream mapping and management in the near future.

DevSecOps Is Real and Nearing Mainstream for This Audience

Security is quickly becoming a very hot topic in the DevOps space. There were several sessions covering ideas ranging from dealing with open source vulnerabilities to an example reference architecture for DevSecOps, and everything in between. As I wrote about in a different article, trying to turn developers into security experts does not work; getting them to participate and partner more effectively with security teams does. The more we can make security testing a natural part of what they are already doing, the better.

In addition, several vendors were present touting solutions and expertise. That was encouraging to see, because it reflects a growing recognition of how much security contributes to overall DevOps value. Most of the security-related discussions I witnessed centered on three main topic areas:

1. Integrating various security tests into a DevOps pipeline.

2. Best practices around securing containers.

3. Providing feedback to teams in ways that are not overwhelming to them.

The first should be obvious to most: the tools developers work with need to run inside their IDEs and feel seamless in order to avoid disruption and delays. Every time a developer has to leave the IDE for a different tool, more time is spent. It also means reconciling what is entered in that tool with the development work actually being done, which makes visibility and traceability across the entire SDLC harder.
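In practice, integrating a security test into a pipeline often boils down to a gate step like the sketch below. The scan-cli command and its flags are hypothetical stand-ins for whatever scanner a team actually uses; the point is simply that the scan runs automatically and can fail the build.

```python
import subprocess
import sys

def security_gate(target_dir: str) -> None:
    """Run a (hypothetical) scanner CLI and fail the build on high-severity findings."""
    result = subprocess.run(
        ["scan-cli", "--source", target_dir, "--fail-on", "high"],  # illustrative flags
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        # A non-zero exit code from the scanner stops the pipeline stage.
        sys.exit("Build failed: high-severity findings detected")

if __name__ == "__main__":
    security_gate(".")
```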

The discussion around containers is more interesting because, up until now, only a few vendors have been able to scan the contents of a container image. In most places the story has largely been “secure what goes into the container” and “restrict who can access the container and where it can be deployed” as the primary ways to secure it. As the use of open source continues to grow, the need to validate an image itself is paramount, and it is good to see more happening in this space.
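To illustrate what “scanning the contents of an image” means at the simplest level, here is a Python sketch that opens an archive produced by docker save and checks each layer’s files against a tiny, made-up denylist. A real scanner would instead match package metadata inside the layers against vulnerability feeds.

```python
import json
import tarfile

IMAGE_TAR = "myapp.tar"  # hypothetical; produced by: docker save myapp:latest -o myapp.tar

# Illustrative denylist of file suffixes; real tools match packages against CVE data.
SUSPECT_SUFFIXES = ("libssl.so.1.0.0", "bin/nc")

with tarfile.open(IMAGE_TAR) as image:
    # The docker save format includes a manifest listing each layer archive.
    manifest = json.load(image.extractfile("manifest.json"))
    for layer_path in manifest[0]["Layers"]:
        layer = tarfile.open(fileobj=image.extractfile(layer_path))
        for member in layer.getmembers():
            if member.name.endswith(SUSPECT_SUFFIXES):
                print(f"{layer_path}: flagged {member.name}")
```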

And finally, the third idea is really about providing the right information, in the right context, at the right time, so that it helps development teams rather than hinders them. For example, running a static security test and simply handing the raw results to developers, without taking the time to filter and triage for false positives, is likely to produce more information than teams can consume. Over time, this conditions teams to avoid or even ignore the results. For DevSecOps to work well, feedback has to have a direct, positive impact on the work in progress and add value to the pipeline; otherwise it is just noise.
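A minimal sketch of that kind of triage might look like the following. The finding fields, severity levels, and suppression list are hypothetical, since every scanner has its own report format.

```python
# Findings as they might come out of a raw scan (made-up structure).
raw_findings = [
    {"id": 1, "severity": "high", "rule": "sql-injection",   "suppressed": False},
    {"id": 2, "severity": "low",  "rule": "verbose-logging", "suppressed": False},
    {"id": 3, "severity": "high", "rule": "xss-reflected",   "suppressed": True},  # known false positive
]

# Rules a team has already judged to be noise for this codebase.
KNOWN_NOISY_RULES = {"verbose-logging"}

def triage(findings, keep_severities=("high", "critical")):
    """Keep only actionable findings: severe, not suppressed, not known noise."""
    return [
        f for f in findings
        if f["severity"] in keep_severities
        and not f["suppressed"]
        and f["rule"] not in KNOWN_NOISY_RULES
    ]

for finding in triage(raw_findings):
    print(f"Actionable: #{finding['id']} {finding['rule']}")
```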

Security is a Great Place for AI to Intersect with DevOps

After attending both Black Hat 2019 and this event, I found that good work is being done in exploring how AI can be used to identify real issues faster. AI can assist with other kinds of testing as well, such as determining which tests and test cases to run for a given application and scenario. A 2018 IBM survey found that, for the first time, consumers preferred security over convenience, especially with applications dealing with finances. In a world where delivering capabilities at speed is the norm, this means that companies that can maintain pace AND still ensure highly reliable, safe and secure applications will differentiate themselves and ultimately win in the marketplace.

The most obvious place this difference appears is in security testing, and particularly in the handling of false positives. Today, 88% of cybersecurity and InfoSec teams spend 25 hours or more a week investigating and detecting application vulnerabilities. AI and machine learning can make a massive difference here, especially in triaging test results before they reach developers. Doing this well reduces the amount of data developers must deal with and helps prioritize remediation efforts. And when AI is further leveraged to focus test cases, policies and test runs on the best combination of coverage, vulnerability assessment and risk exposure, it streamlines getting feedback to teams earlier in the DevOps pipeline, while allowing Security and QA teams to devote their expertise to the more challenging issues that arise. I look forward to seeing these kinds of capabilities grow in the future.
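As a sketch of how machine learning can assist with triage, assume findings from past scans were labeled true or false positive during manual review; a simple classifier can then score new findings before developers ever see them. The features below are made up for illustration.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Historical findings with hypothetical features, labeled 1 (true positive) or 0 (false positive).
history = [
    ({"rule": "sql-injection", "sink_in_user_code": True,  "trace_length": 3}, 1),
    ({"rule": "xss-reflected", "sink_in_user_code": False, "trace_length": 9}, 0),
    ({"rule": "sql-injection", "sink_in_user_code": True,  "trace_length": 2}, 1),
    ({"rule": "open-redirect", "sink_in_user_code": False, "trace_length": 8}, 0),
]

features, labels = zip(*history)
vectorizer = DictVectorizer()           # one-hot encodes the rule name, passes numbers through
X = vectorizer.fit_transform(features)
model = LogisticRegression().fit(X, labels)

# Score a brand-new finding before it ever reaches a developer.
new_finding = {"rule": "sql-injection", "sink_in_user_code": True, "trace_length": 4}
probability = model.predict_proba(vectorizer.transform([new_finding]))[0][1]
print(f"Estimated probability of a true positive: {probability:.2f}")
```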

For additional insight on these or other application security topics, feel free to look for my other blogs on Medium, or any of my previous blogs on SecurityIntelligence.com. If you are interested in learning more about adding application security to your DevOps pipeline, check out our new Application Security Testing site. As always, I encourage you to connect with me on social media: on Twitter at the Robservatory and/or on LinkedIn.
