Lessons learned from publishing a Content Security Policy

Matteo Mazzarolo
InVision Engineering
6 min read · Dec 14, 2021

At InVision, we recently published a Content Security Policy (CSP) for our web apps.

Content Security Policy (CSP) is a security layer that helps detect and mitigate certain types of attacks, including Cross-Site Scripting (XSS) and data injection attacks.

Configuring a Content Security Policy involves creating a list of resources the user agent is allowed to connect to or to load for a page, such as JavaScript, CSS, fonts, and iframes. This list must be added to either the Content-Security-Policy HTTP header or <meta http-equiv="Content-Security-Policy">.
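
For illustration, a policy along these lines (the sources here are made up, not our actual policy) can be delivered in either form:

    Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com; img-src 'self' https:

    <meta http-equiv="Content-Security-Policy" content="default-src 'self'; script-src 'self' https://cdn.example.com; img-src 'self' https:">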

The browser will then consult the policy each time it needs to request a resource to determine whether it's allowed to load it. If a resource is blocked by the policy, the browser fires a SecurityPolicyViolationEvent that you can collect and send to a violations aggregator (such as Sentry or Report-URI) for further analysis.
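
As a rough sketch of how such a collector can look in the browser (the /csp-violations endpoint below is hypothetical; substitute your aggregator's ingestion URL):

    // Forward the interesting fields of every CSP violation to a reporting endpoint.
    // The endpoint is hypothetical; point it at Sentry, Report-URI, or an internal service.
    document.addEventListener('securitypolicyviolation', (event: SecurityPolicyViolationEvent) => {
      const report = {
        blockedURI: event.blockedURI,
        violatedDirective: event.violatedDirective,
        effectiveDirective: event.effectiveDirective,
        originalPolicy: event.originalPolicy,
        sourceFile: event.sourceFile,
        lineNumber: event.lineNumber,
      };
      // sendBeacon delivers the report even if the page is in the middle of unloading.
      navigator.sendBeacon('/csp-violations', JSON.stringify(report));
    });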

Adding a Content Security Policy to a webpage is a great way to enhance its security, but it’s not all sunshine and rainbows.

Between cryptic violation reports and cross-browser inconsistencies, we’ve found that understanding what is being blocked on your website (and why) might not be as easy as it sounds.

In this post, we want to highlight a few issues that have slowed down our CSP rollout.

Cross-browser inconsistencies

To start, we learned the hard way that there are several differences in how each browser implements the CSP spec, causing the same violation to produce a SecurityPolicyViolationEvent with different values depending on the browser that caught it.

Most of these differences are insignificant, but some can make it hard to understand why a specific violation is happening.

To reproduce these inconsistencies in your browser, we created an example HTML file with the following CSP:

script-src 'unsafe-inline';

This CSP allows scripts only from inline sources, such as inline <script> elements and inline event handlers, and blocks every external script.
In the example, we're loading an external script (jQuery) to force a SecurityPolicyViolationEvent.
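
The example boils down to something like this (a sketch, assuming the page is served with the script-src 'unsafe-inline' policy above; we insert the script dynamically here only to keep the snippet self-contained, a plain <script src> tag is blocked the same way):

    // This inline script is allowed by script-src 'unsafe-inline'.
    document.addEventListener('securitypolicyviolation', (event) => {
      // These are the three properties that differ across browsers (see the list below).
      console.log(event.effectiveDirective, event.blockedURI, event.violatedDirective);
    });

    // An external script is not an allowed source, so loading it triggers a violation.
    const script = document.createElement('script');
    script.src = 'https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js';
    document.head.appendChild(script);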

The main differences we noticed in the violation event are the following (a small sketch that normalizes them comes right after this list):

  • violationEvent.effectiveDirective: is the directive whose enforcement uncovered the violation.
    In Firefox and Safari, this is the policy directive that was violated. In the example, this is set to script-src.
    In Chrome, it will be the most “specific” directive that was violated. In the example, this is script-src-elem, even though we haven't declared this directive, because that is where the violation would have occurred if such a directive were present in the policy.
  • violationEvent.blockedURI: is the URI of the resource that was blocked because it violates a policy.
    In Chrome and Firefox, it’s the full URI of the resource. In the example, this is https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js.
    In Safari, it’s just the origin of the resource. In the example, this is https://ajax.googleapis.com.
  • violationEvent.violatedDirective: is the directive whose enforcement uncovered the violation. Following the CSP3 spec, this is “A copy of the effective-directive property, kept for historical reasons”.
    In Chrome and Firefox, this is the policy directive that was violated. In the example, this is script-src.
    In Safari, this is the directive and the value that was violated. In the example, this is script-src 'unsafe-inline'.
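
To make these reports easier to group, something like the following normalization helps. This is only an illustrative sketch (not the exact code we run) covering the cases described above:

    // Smooth over the cross-browser differences in SecurityPolicyViolationEvent.
    function normalizeViolation(event: SecurityPolicyViolationEvent) {
      // Chrome reports the more specific script-src-elem / script-src-attr;
      // fold them back into script-src so reports from all browsers group together.
      const directive = event.effectiveDirective.startsWith('script-src')
        ? 'script-src'
        : event.effectiveDirective;

      // Safari only reports the origin in blockedURI; reduce every browser's
      // value to the origin so they are comparable.
      let blockedOrigin = event.blockedURI;
      try {
        blockedOrigin = new URL(event.blockedURI).origin;
      } catch {
        // blockedURI can also be a keyword such as 'inline' or 'eval'; keep it as-is.
      }

      return { directive, blockedOrigin };
    }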

Safari and the base-uri directive

Another minor browser inconsistency we discovered is that, in Safari, removing a <base> element from a page with a CSP base-uri directive causes a CSP violation — regardless of the value set in base-uri.

The base-uri directive restricts the URLs that can be used to specify the document base URL.

We created a small example that you can use to reproduce the issue in your browser.

From our understanding of the CSP spec and of the algorithm defined in the HTML5 spec to obtain a document's base URL, the issue should be on Safari's end: if no <base> element is available on the page, the document's base should fall back to the document URL instead of triggering a violation.
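
The example boils down to something like this (a sketch, assuming a policy that contains base-uri 'self' and a page whose <head> includes a <base href="/"> element):

    document.addEventListener('securitypolicyviolation', (event) => {
      // In Safari, this fires with a base-uri violation as soon as the <base>
      // element is removed, even though base-uri 'self' is never contradicted.
      console.log(event.effectiveDirective, event.blockedURI);
    });

    // Removing the <base> element should simply make the document's base URL
    // fall back to the document URL, not trigger a violation.
    document.querySelector('base')?.remove();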

The wasm-eval violation

We noticed that Chrome reports a confusing error on pages that use WebAssembly with a CSP enabled.

As of today, the only way to make WebAssembly work with CSP is to add 'unsafe-eval' to the script-src directive, at least until a better, wasm-specific option (wasm-eval) is available.
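
For reference, a call along these lines is enough to trigger the warning (a sketch; the policy in the comment is illustrative, and the byte array is the smallest valid, empty WebAssembly module):

    // Assumes the page is served with something like:
    //   Content-Security-Policy: script-src 'self' 'unsafe-eval'
    const emptyModule = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);

    WebAssembly.compile(emptyModule)
      .then(() => console.log('WebAssembly compiled fine, despite the wasm-eval warning'))
      .catch((error) => console.error('WebAssembly blocked by the CSP', error));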

Still, even with the unsafe-eval clause enabled, Chrome reports the following error:

[Report Only] Refused to compile or instantiate WebAssembly module because 'wasm-eval' is not an allowed source of script in the following Content Security Policy directive: "script-src 'self' 'unsafe-inline' 'unsafe-eval'".

This error was very confusing because it does not mean Chrome refused to compile or instantiate the WebAssembly module — it’s just a warning triggered because the wasm-eval clause is not supported by Chrome yet.

Violations for resources that are correctly set in the policy

This was by far the most time-consuming issue we had to debug.

By checking the CSP violation reports, we noticed many of them were on resources that we actually allowed in our CSP.

For example, we had multiple violations for the https://px.ads.linkedin.com resource (used by the LinkedIn pixel tracker), even though it was explicitly allowed in our list.

Unfortunately, we couldn’t reproduce any of these violations, and the additional violation metadata (user-agent, browser, location, etc.) wasn’t hinting at any common cause.

At first, we thought these violations were triggered by browser extensions (like ad-blockers) that apply a stricter CSP to block some resources at runtime. We manually tested countless browser extensions, but we were never able to reproduce these violations even once: all the extensions we tried blocked resources at the network connection level rather than through a stricter CSP.

We then decided to keep monitoring these violations and see if any of them happened on the browser of an InVision employee. If an internal team member had this issue, it would be much easier to work with them to understand the conditions under which it was happening.

After a few days, our patience paid off!

We got a report of a violation coming from the browser of an InVision engineer, so we paired up and finally discovered the root cause of the issue: redirects!

In the case of violations caused by redirects, browsers report in the blocked-uri field the initial URI of the redirect chain, not the URI the request was redirected to.

On this InVision engineer’s browser, the https://px.ads.linkedin.com URL was being redirected to https://px4.ads.linkedin.com (by LinkedIn servers).
Since our policy allows https://px.ads.linkedin.com (but not px4), the generated violation had https://px.ads.linkedin.com as the blocked URI, even though the URI that actually violated the policy was https://px4.ads.linkedin.com.
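
In other words, a violation listener on that browser behaves roughly like this (a sketch; which directive governs the pixel request is an assumption on our part):

    document.addEventListener('securitypolicyviolation', (event) => {
      // The request to https://px.ads.linkedin.com was redirected by LinkedIn's
      // servers to https://px4.ads.linkedin.com, which is not in our allowlist.
      // blockedURI still reports the start of the redirect chain:
      console.log(event.blockedURI); // the pre-redirect, allowlisted URL
    });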

This behavior makes debugging violations even more challenging because we don't get any detail about which URI in the redirect chain actually violated the CSP; we only get the original URI, which is already allowed. To complicate matters further, these redirects are usually introduced by the resource's servers, edge handlers, proxies, and so on, for a variety of reasons (geolocation, server issues, deprecated APIs), which makes them very cumbersome to predict and reproduce without knowing the reason for the redirect.

In addition to issues with redirecting to different origins, we also noticed a violation could be triggered by a content-type mismatch between a resource’s expected type and its received type.
For example, if a network request for a script returns a 404/500 HTML result, the browser triggers a violation without any information hinting at a content-type mismatch.

As of now, there seems to be little to nothing we can do to make the debugging experience easier, and figuring these violations out takes a lot of deep diving. We couldn't find much help online for these issues, so we hope this post helps other teams on their own CSP journey!

If you’re interested in helping solve these exciting challenges, we’re always hiring!
