Robservations on Black Hat 2019

Rob Cuddy
AppScan

--

Earlier this month (August), several thousand of our closest security friends converged in Las Vegas for the annual Black Hat conference. This was my second time attending the event, and it was great to see the growth and continued interest in cybersecurity, and application security in particular, as well as the numerous options and opinions surrounding it. With that in mind, I wanted to share a few of my “Robservations” from the time there.

DevSecOps Is Gaining Momentum and Has Hit the Mainstream

This year, nearly everyone was again using the term “DevSecOps” in some fashion to describe the need to treat security as a first-class citizen in the DevOps world. Attendees and vendors alike clearly understood that, when it comes to applications, security is part of the user experience. It makes no difference if your “next great app” has the best features and capabilities if no one can trust it to handle their data and information securely. The challenge in this space is the ever-increasing threat landscape: applications are being decoupled into interdependent services and leveraging container technology to increase delivery flexibility, which simply means more places we have to secure. Add to that the competitive need to deliver at speed, and you have your work cut out for you.

People Want to “See” Security in the DevSecOps Pipeline

There is no doubt that addressing and accounting for security needs earlier in the application development lifecycle is paramount to business success. The question now is how to show it. Several vendors are utilizing some kind of risk-based scoring assessment to illustrate relative vulnerability. Each vendor had its own unique spin, but the general idea was to align a specific vulnerability category with the likelihood it could be exploited and the depth of impact if it were, and then measure that against the importance of the application to the business. The higher each of these factors, the higher the resulting risk score. Having a sense of risk provides insight into which issues need to be addressed first.
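To make the idea concrete, here is a minimal, hypothetical sketch of that kind of scoring. The function name, the 1–5 scales and the simple multiplication are illustrative assumptions of mine, not taken from any particular vendor’s product:

```python
def risk_score(likelihood: int, impact: int, business_importance: int) -> int:
    """Combine exploit likelihood, impact depth, and business importance.

    Each factor is assumed to be rated on a simple 1 (low) to 5 (high) scale;
    the product gives a relative score for prioritization, not an absolute risk.
    """
    for factor in (likelihood, impact, business_importance):
        if not 1 <= factor <= 5:
            raise ValueError("each factor must be rated from 1 to 5")
    return likelihood * impact * business_importance

# The same easily exploited, high-impact vulnerability scores very differently
# depending on how important the affected application is to the business.
critical_app = risk_score(likelihood=5, impact=4, business_importance=5)  # 100
internal_app = risk_score(likelihood=5, impact=4, business_importance=1)  # 20
```

The point of a scheme like this is only the ordering it produces: the finding in the business-critical application floats to the top of the queue.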

For a quick example, consider the case where a scan of an application in an organization finds CVE-2019-1010268. This vulnerability can allow considerable information disclosure, potential modification of some system files and information, and potentially reduced performance. Would you expect the organization to treat it urgently if it were found in the customer payment processing application? Would you expect a different approach if it were found in the internal application that displays the company’s remote office cafeteria menu? Many are looking for this kind of information; however, getting it is often contingent on the business accurately assessing their applications’ importance.

A smaller number of vendors have taken an approach that combines security with other kinds of traditional testing (e.g. functional, performance, regression) and aggregates the results into a more holistic dashboard view. This is really the notion of treating security as a quality metric. The key challenge in this space is filtering out false positives so that the issues surfaced to teams are real. Without this filtering, the potential for teams to downplay, or even dismiss, results is high, because they lack trust in the information they get.
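A rough sketch of that aggregation-plus-filtering idea might look like the following. Everything here is hypothetical — the field names, the confidence threshold and the sample findings are my own illustrations, not any vendor’s schema:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    source: str        # which test type produced it, e.g. "security-scan"
    title: str         # short description of the issue
    confidence: float  # tool-reported confidence it is a true positive, 0.0-1.0

def dashboard_view(findings, min_confidence=0.8):
    """Group findings by test type, dropping low-confidence (likely false) ones."""
    view = {}
    for f in findings:
        if f.confidence >= min_confidence:
            view.setdefault(f.source, []).append(f.title)
    return view

results = [
    Finding("security-scan", "SQL injection in /login", 0.95),
    Finding("security-scan", "Possible XSS (unverified)", 0.40),
    Finding("performance", "p99 latency regression", 0.90),
]
print(dashboard_view(results))
# {'security-scan': ['SQL injection in /login'], 'performance': ['p99 latency regression']}
```

The unverified XSS finding never reaches the team’s view, which is exactly the trust-preserving behavior described above — at the cost, of course, of choosing the threshold carefully so real issues are not filtered away too.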

It will be interesting to see where the market goes from here over the next year in terms of showing security information in DevOps pipelines, but one thing is clear: having an accurate, realistic view of the state of security at any given point matters.

We Still Want to “Shift-Left” but the Narrative Has Changed

At Black Hat last year, one of my main “Robservations” was how many people talked about getting developers involved in security testing. The idea is that if we can get vulnerability information to developers faster and in context, they will be able to find and fix vulnerabilities faster.

It’s a great idea in theory but, in practice, it doesn’t work.

Asking a developer to become a security expert is a lot like asking a frequent flyer to become a pilot. It can be done, but it’s certainly not trivial, it will take time, and there are sure to be some bumpy rides along the way. Just because a person flies a lot and has a basic understanding of how a plane works does not mean they know how to actually fly it. And just because a developer writes great code for new capabilities does not automatically mean they know how to secure it against every vulnerability out there. The unfortunate reality is that secure coding is difficult, especially when a fix is unknown or not well understood. And don’t forget that when you ask development teams to triage security test results alongside all the other tests, requests and defects coming in, there is enormous potential for teams to be overwhelmed with noise.

One way to combat these issues is to introduce security right at the outset of a project. Security team members need to be included in design and requirements sessions, and teams need to think through how they will implement security and what that means. For instance, if you are building a mobile app, will you implement two-factor or multi-factor authentication? Once you decide, how will you do it? SMS? An authenticator app? Something else? The answers to questions like these will have an impact on lead time, cycle time and delivery time. If we want to accurately predict these things, we need to understand how security affects them.
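As one example of why that design decision has real implementation weight, here is a minimal sketch of the time-based one-time password (TOTP) algorithm from RFC 6238 — the mechanism behind most authenticator-app codes. This is my own illustrative implementation, not code from any product discussed at the conference:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestep: int = 30, digits: int = 6, now=None) -> str:
    """Generate a time-based one-time password per RFC 6238 (HMAC-SHA1).

    `secret` is the shared key provisioned to the authenticator app;
    `now` defaults to the current time and is overridable for testing.
    """
    # Count 30-second intervals since the Unix epoch.
    counter = int((time.time() if now is None else now) // timestep)
    # HMAC the big-endian counter, then apply RFC 4226 dynamic truncation.
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Even this toy version raises the scheduling questions from the paragraph above: key provisioning, clock drift between client and server, and code verification all have to be designed, built and tested, which is exactly why the choice affects lead and cycle time.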

AI and Machine Learning Are Still in Their Infancy in this Space

As was the case at the Ai4 Cybersecurity conference last December, it’s clear the industry recognizes how important Artificial Intelligence and Machine Learning are, but it is not yet sure how best to use them. Personally, I believe there are two places where these can make a real difference.

The first is around assessing risk. As mentioned earlier, many organizations today must make an educated guess as to the relative importance of an application to the business and rank it on a simple “high-medium-low” scale if they want to score it. It would be better to analyze the actual frequency of use, along with associated capabilities, to gain a more robust understanding of an application’s importance. In the same vein, being able to ascertain the types of vulnerabilities that appear frequently in applications could provide better insight into improving coding practices.

The second area is in supporting the use of natural language for better testing, and in bringing deep learning and best practices into tomorrow’s application security requirements. If you would like to read more about the potential impact of cognitive capabilities and Artificial Intelligence on your development efforts, download this complimentary Ponemon Institute study.

For additional insight on these or other application security topics, feel free to look for my other blogs on Medium, or any of my previous blogs on SecurityIntelligence.com. If you are interested in learning more about adding application security to your DevOps pipeline, check out our new Application Security Testing site. As always, I encourage you to connect with me on social media, on Twitter at the Robservatory and/or on LinkedIn.

Rob Cuddy, Global Application Security Evangelist for AppScan, HCL Technologies

Photo courtesy of Flickr, Trending Topics 2019, https://www.flickr.com/photos/146269332@N03/48506989377/
