OWASP A02 — Cryptographic Failures: What they are and why they are important

Jamie Beckland
Published in Traceable and True
Jul 8, 2022

The second most common issue in web application security is cryptographic failures. Cryptography has always been important, but we have seen a renewed focus on hardening it as we expose more and more web applications to each other. When applications were self-contained, a cryptographic failure was difficult to discover and exploit; now that applications talk to each other over APIs, exploiting those failures has become very simple.

Why are cryptographic failures so dangerous?

Cryptographic failures expose sensitive data. In fact, in the previous version of OWASP’s Top Ten, this risk was described as “Sensitive Data Exposure.”

In the 2021 version, the language was updated because sensitive data can be exposed for a variety of reasons, including misconfigurations; cryptographic failures are simply the most prevalent cause at the moment.

Sensitive data is often personal in nature and can include contact details, demographic information, data about protected classes, financial data, and health data, among other categories. These categories are often regulated, as in the case of GDPR for personal data and HIPAA for health data. So the risk of breach or loss extends beyond technical and business risk to legal and compliance risk, as well as customer trust and brand credibility.

Why does cryptography so often fail?

OWASP identified cryptographic failures in more than 44% of its data analysis reviews. These can include broken or weak algorithms that can be cracked easily or quickly, outdated or hardcoded passwords, and a lack of protection around data assets in motion.
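To make the first category concrete, here is a minimal sketch in Python contrasting an unsalted MD5 password hash with a salted, iterated key-derivation function from the standard library. The password and iteration count are illustrative assumptions, not recommended values for your environment.

```python
import hashlib
import os

password = b"correct horse battery staple"  # illustrative value only

# Weak: unsalted MD5 is fast to brute-force and has known collisions.
weak_digest = hashlib.md5(password).hexdigest()

# Stronger: a salted, iterated key-derivation function (PBKDF2-HMAC-SHA256).
# The iteration count here is an assumption; tune it to your latency budget.
salt = os.urandom(16)
strong_digest = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

print("MD5 (avoid):   ", weak_digest)
print("PBKDF2-SHA256: ", strong_digest.hex())
```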

In practice, these vulnerabilities are often difficult to detect in operational systems. Using a default password to protect access to a system will show up as a password-protected endpoint in testing. However, the default password may be easily discoverable in vendor documentation. Therefore, simple testing for missing password protection would miss the risk.
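One pragmatic check is to probe your own systems with the vendor’s published default credentials, rather than only confirming that a password prompt exists. The sketch below assumes a hypothetical device URL and credential list; substitute values from your own asset inventory and the vendor’s documentation.

```python
import requests

# Hypothetical endpoint and vendor-default credentials; replace with values
# from your own inventory and the vendor's documentation.
ENDPOINT = "https://device.example.com/admin"
DEFAULT_CREDENTIALS = [("admin", "admin"), ("admin", "password"), ("root", "root")]

for username, password in DEFAULT_CREDENTIALS:
    response = requests.get(ENDPOINT, auth=(username, password), timeout=5)
    if response.status_code == 200:
        print(f"Default credentials still accepted: {username}/{password}")
        break
else:
    print("No vendor-default credentials were accepted.")
```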

Even worse, password protection is often not applied at all, and data is transmitted or available in clear text, open to the entire web. Data assets at rest are a more obvious risk, and they can be tested fairly easily. For this reason, data in transit has become a more attractive target.
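A simple guardrail for data in transit is to refuse clear-text transport outright in client code. Here is a small sketch, assuming a hypothetical service URL, that rejects non-HTTPS URLs and keeps certificate verification on.

```python
from urllib.parse import urlparse

import requests

def fetch(url: str) -> requests.Response:
    """Refuse clear-text transport and keep certificate verification on."""
    if urlparse(url).scheme != "https":
        raise ValueError(f"Refusing non-TLS transport for {url}")
    return requests.get(url, timeout=5)  # verify=True is the default

# Hypothetical usage:
# fetch("https://api.example.com/v1/users")   # allowed
# fetch("http://api.example.com/v1/users")    # raises ValueError
```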

In addition, the cryptography space continues to mature, with new, safer protocols being developed, tested, and recommended as best practice. But updating existing applications with new hashing algorithms is often seen as remediating technical debt and is not prioritized within development teams. In the past decade alone, we have seen popular hash functions like MD5 and SHA-1 deprecated in favor of more modern hashes like SHA-2 and SHA-3. This was driven by the improvement in raw compute and network power that can be dedicated to breaking older hash functions.
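For reference, the deprecated and recommended families mentioned above sit side by side in Python’s standard library, which makes migration experiments easy to prototype:

```python
import hashlib

data = b"example payload"

# Deprecated for security-sensitive use: practical collisions exist for both.
print("MD5:     ", hashlib.md5(data).hexdigest())
print("SHA-1:   ", hashlib.sha1(data).hexdigest())

# Current recommendations from the SHA-2 and SHA-3 families.
print("SHA-256: ", hashlib.sha256(data).hexdigest())
print("SHA3-256:", hashlib.sha3_256(data).hexdigest())
```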

Machine-to-machine data transfer over APIs compounds the issue

As personal data moves from one system to another, the chain of custody becomes increasingly complicated, and securing the entire chain of data access and provenance becomes harder. With the rise of exposed APIs and perimeterless applications, data can be compromised at a variety of different transfer points. Certificates, validation, and access authorization need to be tested at each step, and any failure upstream can introduce compromised data that may not be noticed, or even visible, downstream. This is one reason the industry has pushed for a software bill of materials: to identify more quickly where dependencies exist, so that when a vulnerability is discovered, it can be traced and remediated throughout the system.
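As a small illustration of checking one link in that chain, the sketch below connects to each hop over TLS with full certificate and hostname verification and reports what was negotiated. The hostnames are hypothetical placeholders.

```python
import socket
import ssl

def inspect_hop(host: str, port: int = 443) -> None:
    """Connect to one hop and confirm its certificate chain and hostname verify."""
    context = ssl.create_default_context()  # verification is on by default
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            print(host, "negotiated", tls.version(), "cert expires", cert["notAfter"])

# Hypothetical hops in a machine-to-machine transfer chain.
for hop in ("api.example.com", "partner-gateway.example.net"):
    inspect_hop(hop)
```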

How to minimize cryptographic failures

Always remember that some protection is better than no protection. Prioritize your exposed, unprotected endpoints first; then remediate outdated hashes and transport layer security next. After that, review the data that is actually being transferred for necessity, appropriate use, and contextual integrity. Minimize data over-replication and monitor for shadow APIs and configuration errors from the development process.
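One lightweight way to monitor for shadow APIs is to diff the endpoints you have documented against the endpoints actually observed in traffic. The path sets below are hypothetical stand-ins for an API spec and gateway access logs; in practice both would be pulled programmatically.

```python
# Hypothetical inputs: paths declared in your API spec versus paths actually
# observed at the gateway.
documented_paths = {"/v1/users", "/v1/orders", "/v1/orders/{id}"}
observed_paths = {"/v1/users", "/v1/orders", "/v1/orders/export", "/debug/dump"}

shadow_apis = observed_paths - documented_paths
for path in sorted(shadow_apis):
    print("Undocumented (shadow) endpoint observed:", path)
```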

The best approach is to acknowledge that no development process can be perfect, but we can detect and remediate issues much more quickly once the application is live. Oftentimes, we can even redact and minimize data transfer without changing the core application at all.
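As a sketch of what that can look like, here is a hypothetical WSGI middleware in Python that masks a handful of assumed sensitive fields in JSON responses by wrapping an existing application rather than modifying it. The key names and behavior are illustrative, not a drop-in product.

```python
import json

SENSITIVE_KEYS = {"ssn", "date_of_birth", "phone"}  # assumption: your sensitive fields

def redact(value):
    """Recursively mask sensitive keys in a JSON-like structure."""
    if isinstance(value, dict):
        return {k: "[REDACTED]" if k in SENSITIVE_KEYS else redact(v)
                for k, v in value.items()}
    if isinstance(value, list):
        return [redact(v) for v in value]
    return value

class RedactionMiddleware:
    """Wraps an existing WSGI app and masks sensitive fields in JSON responses."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        captured = {}

        def _start_response(status, headers, exc_info=None):
            captured["status"], captured["headers"] = status, list(headers)
            return lambda _data: None  # write() callable is unused in this sketch

        body = b"".join(self.app(environ, _start_response))
        try:
            body = json.dumps(redact(json.loads(body))).encode("utf-8")
        except ValueError:
            pass  # not JSON: pass the response through untouched
        headers = [(k, v) for k, v in captured["headers"]
                   if k.lower() != "content-length"]
        headers.append(("Content-Length", str(len(body))))
        start_response(captured["status"], headers)
        return [body]

# Usage: wrap the existing app at deploy time, e.g.
# application = RedactionMiddleware(existing_wsgi_app)
```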

Originally published at https://www.bycontxt.com on July 8, 2022.

Jamie Beckland

President & Co-Founder at Contxt. Security & Privacy Everywhere, All At Once. Erstwhile Dancer, Armchair Economist, Traveler…and above all, Technology Optimist.