Never trust a client, not even your own!

Bernie Durfee
7 min read · Jan 12, 2018


This article is about a principle that ought to be followed in every system, everywhere, all the time. Yet, I see time and time again how applications leave gaping holes in their attack surface, just waiting for some nefarious actor to exploit them.

There are many, many different boundaries within an application. We create these boundaries each time we modularize our software. Most software, even simple software, is created by composition. We take a bunch of libraries and write a little business logic to glue them together in order to orchestrate some behavior based on inputs and data.

In olden times, we typically had all of our code, modules, libraries, services and other components running on one machine, in one process. Given that the boundaries between all of those components sat on one machine, our machine, in our data center to which only we knew the secret access codes, we didn’t give those boundaries much thought. We could implicitly trust that we had total control over each module and didn’t need to worry much about nefarious actors taking control of a module in our giant process.

Now, over the last couple of decades, we’ve spent an awful lot of time breaking up our giant processes so that we can run them across many different machines. We compose our applications from smaller modules and use the network to enable communication between those modules, so we can take advantage of horizontal scaling instead of continuously buying a bigger server.

We’ve even gone back and forth and back again in using devices and machines not owned and controlled by us to run our instructions. We often have large chunks of our code running in browsers and mobile apps, relegating the code on our servers to simply crunching numbers and serving data.

We now have boundaries that cross not only from machine to machine and data center to data center, but from data center to sketchy internet cafe in some unknown part of your country or even another country in another part of the world. It’s amazing that we’re able to take a single process and allow it to span from our server to any one of billions of anonymous devices around the globe. Very cool stuff!

Yet, we’re very often so focused on the difficulty of making this happen, we don’t think through the consequences. We’re so happy we can get our code running correctly in a browser, that we don’t consider the fact that we don’t actually own the browser our code will be running in.

Consider the following three camps at play in a large software project. The goal of Camp A, the web developers, is to just get the code working reasonably consistently in 83 different browser/version variants that they need to support. The goal of Camp B, the security team, is to make sure that stuff doesn’t get hacked and compromised. The goal of Camp C, the hackers, is to exploit every gap they can to steal anything of value or cause grief in any way possible.

Hackers know that web developers spend all their time just getting stuff to work and that security teams often have very little influence on how things get implemented because it’s low priority compared to releasing functionality.

Here’s where things get dicey. To save time, web developers often defer the activity of securing the boundaries in their application, assume the network will take care of security and simply trust all their code, regardless of where it’s running.

If two modules are communicating, we typically refer to one as the client and the other as the server or service. When the client and the service are both executing on the same machine, there’s generally not much to worry about; there are tons of controls in place to keep everything safe. Though, in our new massively multi-tenant cloud world, even some of those boundaries should be considered suspect. But in all cases, when the client and the service are separated by a network, even a ‘private’ or ‘local’ network, there is great risk.

Networks are notoriously insecure. By design, there is almost no inherent or intrinsic security in the network stack. Networks simply ensure that traffic gets routed from place to place quickly, reliably and efficiently. Most network security controls are easily compromised using simple techniques.

In fact, nearly every great hack, compromise, virus, worm, malware attack, ransomware attack and other security tragedy was facilitated by network infrastructure. The network happily allowed all these things to happen and does so every day.

The network is responsible for moving packets; the software that sends and receives those packets is responsible for ensuring the integrity, consistency and privacy of the data those packets carry. No network is secure. Your network is not secure.

Never trust the network.

Now you have two modules talking to each other over a network that can’t be trusted. Even worse, our client code is running on someone else’s device. Now we can’t trust the device, the client code or the network in between. Danger!

Again, web development teams are almost exclusively focused on delivering functionality first. The fastest way to get this done is to assume that the client device, client code and intervening network will behave as expected. Danger!

This assumption is generally at the root of all the great hacks, compromises, viruses, worms, malware attacks, ransomware attacks and other security tragedies.

The reality is, when a network is involved, you need to code as if there will be a thousand nefarious attackers sitting on that network waiting to pounce. Because there are. That’s simply the reality of software development today.

If the data being housed and delivered by your application wasn’t valuable enough for someone to steal, you wouldn’t be getting paid to write the software that manages the data. Of course, with IoT, it’s not just data anymore, your software manages motors, valves, lasers and other real physical stuff that can do real physical harm.

Since you can’t trust the network, you have to assume all clients on the other side of those network connections are untrustworthy… even that client that you wrote with your own fingers on your own keyboard. No, you cannot trust the JavaScript in your own application if it’s running on someone else’s device across a network.

You need to ensure that your services check, double-check and recheck every input coming across a network from a foreign device. Networks, browsers and devices are incredibly easy to compromise… in fact, networks are software-configurable by design, and browsers and devices ship with developer modes that make manipulation trivially easy.

tl;dr

Here are some anti-patterns that I’ve seen implemented all too often in real systems:

A service should never trust any data that originates from the client side. For example, say a client is going to send you a timestamp when an action is taken and that timestamp affects some behavior in the service. Don’t trust that timestamp. Always assume someone figured out how to send an invalid timestamp just to mess with you and manipulate the behavior of your service. Instead, figure out a way to generate the timestamp on the server or figure out how to not rely on it or how to deal with a bad timestamp.
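A minimal sketch of that advice in Python (any server language works the same way); the field names here are purely illustrative. The server accepts the client's payload but stamps the action with its own clock, so a forged timestamp can't influence behavior:

```python
from datetime import datetime, timezone

def record_action(payload: dict) -> dict:
    """Record a client action without trusting any client-supplied timestamp.

    The client may well send a 'timestamp' field, but we never read it for
    anything that affects behavior; the server clock is authoritative.
    """
    return {
        "action": payload.get("action"),
        # Generated server-side, so a manipulated client can't backdate events.
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
```

If the client's view of time genuinely matters (say, for offline capture), treat it as untrusted metadata: record it separately, sanity-check it against the server clock, and never let it drive business logic on its own.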

A service should never trust state data stored on a client. For example, say you want to know who the user is that you’re interacting with, you might be tempted to send a cookie to the client with the username. Then, on subsequent requests, you can read the cookie and voila you don’t need to store data locally. You also just set yourself up for the client to just send you whatever username it wants to and you blindly trust it. Never trust data coming from the client, even if you sent the data originally.
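One common mitigation, sketched here with Python's standard `hmac` module, is to sign any state you hand to the client so tampering is detectable on the way back in. The secret key shown is a placeholder; a real service would load it from configuration and never send it to clients:

```python
import hashlib
import hmac

# Placeholder secret, held only on the server; load from config in practice.
SECRET = b"replace-with-a-real-server-side-secret"

def sign_cookie(username: str) -> str:
    """Return 'username.signature' so we can detect tampering later."""
    sig = hmac.new(SECRET, username.encode(), hashlib.sha256).hexdigest()
    return f"{username}.{sig}"

def verify_cookie(cookie: str):
    """Return the username only if the signature is valid, else None."""
    username, _, sig = cookie.rpartition(".")
    expected = hmac.new(SECRET, username.encode(), hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences.
    return username if hmac.compare_digest(sig, expected) else None
```

A client can still read the cookie, but it can no longer claim to be `admin` without producing a signature only the server can compute. Storing a random session ID server-side and keeping the username out of the cookie entirely is an even stronger option.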

A service should never trust input constraints imposed in a client. You will absolutely need to double-check all values coming from the client, even though you already checked them in your JavaScript. This is the most common route for SQL injection and other injection attacks. It can even lead to other exploits, such as when 1,000 characters are submitted in a field that only allows 100 characters. That might just throw an error in an insert statement, but it might also destabilize a data store if 10,000 entries of 100,000 characters each are submitted in rapid succession.
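The server-side defenses are straightforward: re-validate every constraint yourself and use parameterized queries so user data is never spliced into SQL. A sketch using Python's built-in `sqlite3` (the table and limit are illustrative):

```python
import sqlite3

MAX_NAME_LEN = 100  # the same limit the client form claims to enforce

def save_name(conn: sqlite3.Connection, name: str) -> None:
    # Re-check the constraint server-side; the client's maxlength is advisory.
    if not name or len(name) > MAX_NAME_LEN:
        raise ValueError("name must be 1-100 characters")
    # Parameterized query: the driver binds the value safely,
    # so "Robert'); DROP TABLE users;--" is stored as inert text.
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
```

Note that validation and parameterization solve different problems: the length check protects your storage and downstream code, while the bound parameter defeats injection. You want both.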

As a service, you should never trust authorization decisions made by a client. Sadly, I see this far too often. For example, say only administrators are allowed to press the ‘Delete User’ button, so the button is hidden in the UI for those users that aren’t administrators. Yet, if a non-admin user adds ‘action=delete’ to the URL and presses enter, the server side will gladly assume the button was pressed by an admin user and bad things happen. You need to treat a client running in a foreign browser or device as the enemy. Once you send your code to a foreign browser or device, assume it will be manipulated or replaced.
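The fix is to make the authorization decision on the server from server-held state, ignoring whatever the request claims about the user. A minimal sketch; the in-memory role table stands in for whatever session store or user database a real service would consult:

```python
# Illustrative role store; a real service would look this up in its own
# session store or database, never trusting roles sent by the client.
ROLES = {"alice": "admin", "bob": "user"}

def delete_user(acting_user: str, target: str, users: dict) -> bool:
    """Delete 'target' only if the *server* says the actor is an admin."""
    # Any 'action=delete' or 'isAdmin=true' in the request is ignored;
    # hiding the button in the UI was never the real control.
    if ROLES.get(acting_user) != "admin":
        raise PermissionError(f"{acting_user} may not delete users")
    return users.pop(target, None) is not None
```

The hidden ‘Delete User’ button is a usability nicety, not a security control. Every state-changing endpoint needs its own server-side check like this, because an attacker talks to your endpoints directly and never sees your UI at all.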

The above are all anti-patterns I’ve seen repeated far too often. Most times these are simply mistakes that are implemented due to time pressures or simply because of a lack of understanding as to how dangerous they are.

Secure coding is very difficult. It adds time and complexity. You always need to consider the integrity of the inputs you receive from your clients. That means lots of extra code, lots of extra testing, and a clear, deep understanding of security concepts. This isn’t just a good idea; it’s a requirement.

We’re in the midst of the ‘Golden Age of Hacking’. The overwhelming global demand for software drives prioritization of features and functionality over safety and security. The reality is that your software is full of holes and hasn’t been hacked yet only because there are other more tempting targets nearby. If there were more hackers in the world, more stuff would be hacked.

To defer your first big compromise, you need to at least get the fundamentals right. Watch for the above anti-patterns and be sure to consider how easily someone can send funky inputs to your service. Cover the basics and that’ll keep the majority of inexperienced or unmotivated hackers out!

Creepy Fingers


Bernie Durfee

IT Professional; Software Developer; Software Architect; Habitual Musician