2023 Cloud Threats and Vulnerabilities Summit Recap

Brian Gastwirth
DayBlink Consulting
Apr 21, 2023

The Cloud Security Alliance (CSA) recently held its Cloud Threats and Vulnerabilities Summit, featuring prominent speakers across 20 sessions and panels. Speakers highlighted the main systemic risks within the cloud ecosystem and provided recommendations on how to best defend against malicious actors.

The DayBlink Consulting Cybersecurity Group gathered the following key takeaways and their implications for cloud and security practitioners.

Open source has taken over, presenting both risk and opportunity

Most people don’t truly appreciate how widespread open source is. It is incomprehensibly big, and Log4j was the event that demonstrated it. Once word got out to the public, almost everywhere people looked, they found Log4j. Open source software was everywhere and in everything. The paradigm shift was on. In reality, the transition to open source happened gradually and quietly, and it took a uniquely rare moment like Log4j to wake everyone up. Roughly 70–90% of modern software is built from open source components, and that fact is finally starting to sink in for non-developers.

Tech media outlets and marketers may emphasize supply chain risk, but grasping the ubiquity of open source is the real issue. Open source code is not inherently insecure. But because open source has become so widespread, a vulnerability as pervasive as Log4j creates a second-order, systemic risk that we are still learning how to handle. While open source has been transformative, it has also created a new problem space in vulnerability management, and it is an everywhere, everyone problem.

The Securing Open Source Software Act of 2022 came in response to the widespread Log4Shell vulnerability.

The chaos of Log4j revealed other deficiencies. It took six days for the CVE website to be updated with information about the vulnerability. Twitter and Reddit carried real-time information and data about Log4j, but it was scattered across platforms when it needed to come from a centralized source of truth.

Log4j exposed a real need for an open source community built around security data. Kurt Seifried, who served on the CVE Board through January 2021, explained how the CVE database is insufficient in both its scope and depth of coverage. Additionally, there is no way for users to score the completeness and correctness of data on the CVE website: what you see is what you get.

Aside from CVE, there are many other ecosystem-specific security databases. Each ecosystem has its own participation rules, its own definitions of what a vulnerability is, and its own selected data format. While each database is valuable in its specificity, Information Security practitioners need coverage of everything in use, including operating systems, network gear, applications, languages, and libraries. Practitioners have enough vulnerabilities, but what they really need is better, more actionable data that is easier to consume. A truly valuable open source security community should make community contributions simple and let everyone use the data easily, in a wiki-like format.
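To make “easier to consume” concrete, the sketch below queries OSV.dev, one existing aggregated, machine-readable vulnerability database with a public JSON API (used here purely as an illustration; it is not the CSA’s GSD discussed next). The endpoint and the Log4j package coordinates are real, but treat the snippet as a minimal sketch rather than production tooling.

```python
# Minimal sketch: query an aggregated vulnerability database (OSV.dev)
# for known issues affecting one open source package version.
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"  # public OSV query endpoint


def query_vulnerabilities(name: str, ecosystem: str, version: str) -> list:
    """Return the known vulnerabilities for a single package version."""
    payload = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode("utf-8")
    request = urllib.request.Request(
        OSV_QUERY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response).get("vulns", [])


if __name__ == "__main__":
    # Log4j is the obvious test case: log4j-core 2.14.1 predates the
    # Log4Shell fixes, so the query should return several advisories.
    for vuln in query_vulnerabilities(
            "org.apache.logging.log4j:log4j-core", "Maven", "2.14.1"):
        print(vuln["id"], vuln.get("summary", ""))
```

The point is less this particular API than the shape of the data: one request, one well-defined schema, and per-ecosystem package naming that tooling can rely on.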

The CSA is trying to centralize and democratize vulnerability information with its nascent Global Security Database (GSD). Seifried previewed an MVP of the platform and emphasized how GSD will need robust data, helpful tools, and an active community to succeed. Its data needs to be consumable, standardized, and clearly labeled to be usable. Tools must be easily available so users can make contributions and correct data. Lastly, GSD requires a community to interact with its data and tools and then provide feedback on what they need. The presence of both a solid foundation and an active feedback loop with users will be critical to GSD’s future.

Automation in the Cloud is critical, not a nice-to-have

The days of single cloud are long gone. Multi-cloud has become the norm, with 76% of organizations worldwide using a multi-cloud infrastructure. Among large enterprises the majority is even more pronounced: 90% have adopted multi-cloud. The trend is poised to continue as niche clouds appear for industries like healthcare and finance. Multi-cloud is appealing (or necessary) because it allows organizations to assemble the best blend of cloud computing solutions across the various cloud service providers. More specifically, being multi-cloud enables scalability, cost optimization, faster time-to-market, and improved business continuity through diversification. However, each new cloud demands different skillsets, making things more complicated from an organizational standpoint, and it is harder to scale a security program across multiple clouds. All of this means that higher cloud complexity expands an organization’s attack surface.

The cautionary tale here goes beyond multi-cloud making life harder for security organizations. Premature complexity can lead to “negative value”: an organization spends more on managing its multi-cloud infrastructure than the benefit it returns to the business. Multi-cloud yields benefits when managed properly, but there is a ripple effect on staff. Organizations can keep adding cloud environments while security budgets remain static, which means complexity doubles or triples while firms cannot hire the additional resources they need to deal with it. Organizations do not move to multi-cloud to “save money”; onboarding more vendors increases operating expenses. What they are actually trying to do is spend money more efficiently.

Successful abstraction and automation efforts help firms maximize the benefits of multi-cloud.

Automation is how organizations can manage cloud complexity and the attack surface effectively. With automation, for example, operating system and application patching happens more frequently and more rapidly, allowing defenders to match the speed of threat actors. To reinforce this point, Travis Smith presented Qualys research finding that patches that can be automated are deployed 45% more often and 36% faster than those that must be applied manually (a rough sketch of that kind of host-level automation follows below). Automated tooling also helps organizations address multi-cloud challenges related to people, processes, and technologies: the different clouds are not interoperable due to intense competition in the industry, and there is a significant skills gap when it comes to the cloud.
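The patch-automation sketch referenced above assumes a single Debian/Ubuntu host with apt available, root privileges, and a scheduler such as a nightly cron job or systemd timer. Fleet-scale, multi-cloud patching would drive the same loop through a configuration-management or patch-orchestration tool rather than an ad hoc script.

```python
# Minimal sketch of unattended patching on one Debian/Ubuntu host.
# Assumes apt is installed and the script runs as root on a schedule.
import datetime
import os
import subprocess


def pending_upgrades() -> list[str]:
    """Refresh package metadata and list packages with an upgrade available."""
    subprocess.run(["apt-get", "update", "-qq"], check=True)
    result = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    )
    # First line of output is a "Listing..." header; the rest name packages.
    return [line.split("/")[0] for line in result.stdout.splitlines()[1:] if line]


def apply_upgrades() -> None:
    """Apply all available upgrades without interactive prompts."""
    env = dict(os.environ, DEBIAN_FRONTEND="noninteractive")
    subprocess.run(["apt-get", "-y", "upgrade"], env=env, check=True)


if __name__ == "__main__":
    packages = pending_upgrades()
    print(f"{datetime.datetime.now().isoformat()}: {len(packages)} packages pending")
    if packages:
        apply_upgrades()
```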

How can the interoperability and skills-gap challenges be addressed? Via cross-cloud capabilities that provide one interface for the multiple clouds in use. The objective is to eliminate redundancy in functions across different public cloud platforms: operations, security, observability, and governance services should not be run separately for each cloud. A single, cross-cloud interface lets an organization operate in a layer above each of the clouds it uses and perform functions like IAM and MFA in common ways.
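One way to picture that layer is a thin adapter pattern: a single interface for a common control, with per-provider implementations behind it. The class and method names below are hypothetical, and the provider methods are stubs where real SDK calls (for example, boto3 for AWS) would go; this is a sketch of the idea, not a description of any particular product.

```python
# Hypothetical cross-cloud IAM layer: one call fans out to every cloud in use.
from abc import ABC, abstractmethod


class CloudIdentityProvider(ABC):
    """Contract each cloud-specific adapter must satisfy."""

    @abstractmethod
    def enforce_mfa(self, principal: str) -> None: ...

    @abstractmethod
    def revoke_access(self, principal: str) -> None: ...


class AwsIdentityProvider(CloudIdentityProvider):
    def enforce_mfa(self, principal: str) -> None:
        # A real adapter would attach an MFA-required policy via the AWS SDK.
        print(f"[aws] MFA enforced for {principal}")

    def revoke_access(self, principal: str) -> None:
        print(f"[aws] access revoked for {principal}")


class AzureIdentityProvider(CloudIdentityProvider):
    def enforce_mfa(self, principal: str) -> None:
        # A real adapter would apply a Conditional Access policy via the Azure SDK.
        print(f"[azure] MFA enforced for {principal}")

    def revoke_access(self, principal: str) -> None:
        print(f"[azure] access revoked for {principal}")


class CrossCloudIAM:
    """The single pane of glass the security team actually interacts with."""

    def __init__(self, providers: list[CloudIdentityProvider]):
        self.providers = providers

    def enforce_mfa_everywhere(self, principal: str) -> None:
        for provider in self.providers:
            provider.enforce_mfa(principal)


if __name__ == "__main__":
    iam = CrossCloudIAM([AwsIdentityProvider(), AzureIdentityProvider()])
    iam.enforce_mfa_everywhere("jane.doe@example.com")
```

The security team then reasons about one call, enforce_mfa_everywhere, rather than a separate console and workflow per cloud.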

For any questions or comments on the analysis above, please contact:

Brian Gastwirth, Consultant
brian.gastwirth@dayblinkconsulting.com

Jacob Armijo, CISM, Manager
jacob.armijo@dayblinkconsulting.com

Michael Morgenstern, Partner
michael.morgenstern@dayblinkconsulting.com
