Upgrading Approaches to Secure Mobile Architectures
This is a follow-up to the talk I first gave at the #appbuilders16 conference in Zurich, Switzerland.
We’ll talk a bit about the most undervalued part of mobile security: ideas and concepts. Another name for this talk could be “Everything Will Be Broken.” But what should we do about it?
Intro: this is the picture
Let’s take a look at the problem domain. What’s on the landscape? The picture shows our typical infrastructure — an iOS app talking over some network connection to a server where we have some custom logic serving our tasks.
So, what do we care about while we’re making apps? User experience, fast & continuous delivery, and getting things done. And Swift, of course. Swift is very exciting!
What don’t we care about? Server crap. Everything not iOS is magical and unknown :)
Imagine you put on the Security Wizard Hat. What will you see?
This is what should be important for mobile developers if they care about security:
– iOS runtime and user-held secrets are sources of trust. Apple is good, mostly. Trust Apple (…mostly).
– Beware on the Net, the dragons are near. Limit trust to the server: there’s plenty of unknown dangers.
– Be attentive to server logic: it’s easy to fuck up, and there is no sandbox around.
Why should you care about that?
Because you’re not alone.
In the app, the FBI is knocking at your door. On the net, the NSA and Russian Hackers are here to get your data. On the server… oh well, everyone will try to attack your home base.
The question is, what can we do about it?
What is in our control? The brains.
In other words, the amount of mental effort we put into our app security and the amount of knowledge about protecting things we have at our disposal.
Why does the problem exist?
What exactly is wrong with the way we see app security now? Well, a lot. Security is hard; it is an engineering discipline unlike the familiar subjects we know.
“It is secure” is not a valid statement; instead, we can just say, “It has not been broken yet.”
Not very reassuring, right? There are three main reasons why the problem exists: speed, openness, and ignorance.
We need to get things done. Fast. We live in a design-driven world, where short iterations are bound to the functions visible to the user. In the world of the constant MVP, security is outside the bare minimum we do to meet deadlines. It accumulates technical debt, which is a synonym for bugs and failures.
Using open-source libraries and third-party APIs is easy and convenient, but you can’t be sure that they’re secure enough.
Usually mobile developers know little about security. And — unfortunately — we don’t think about these “security things” a lot because it requires another type of thinking we rarely encounter.
Everyone makes security mistakes
You may not notice it, but lots of security vulnerabilities are found every month. The more applications we create and the more applications we use every day, the higher the chances that we’ll use vulnerable software.
Apps for car remote controls are amazingly popular now. Unfortunately, lots of them skip the authentication stage, so the attacker can access private data and even unlock the car. You can find some examples & links in my slides.
Everything will be broken. Hacked cars are just more scary than hacked photo services, right?
But even Apple makes security mistakes. In the last few months, huge vulnerabilities were found twice in iMessage, a core infrastructural app.
The second one concerned the attachment transfer protocol: it allowed an attacker to grab your photos by enumerating cryptographic keys via a silent flood of malformed messages, precisely a quarter million of them. That’s a really long and tedious hack, but breaking a system via key enumeration in 2016 looks rather striking.
(Now the desktop version of iMessage has screen sharing with remote control. Wanna guess what problems come next?)
Taking a look at the bigger picture
There are no official statistics for iOS applications, but there are some for iOS itself. According to the National Vulnerability Database, a record-breaking number of vulnerabilities was registered in 2015.
Unfortunately, mobile developers make things even worse!
Last year’s notorious AFNetworking bugs immediately turned hundreds of apps insecure. But what’s even more exciting: everybody knows that without SSL pinning your app might not tell friend from foe, yet only one tenth of popular apps use SSL pinning.
Why does this even happen?
Is this miserable state of things our fault? Not really; it’s just the way things are. But there are some problems we can solve if we understand what makes our mobile platforms unique.
We think mobile and backend are in a classical client-server relationship. But they’re not. Mobile apps don’t have many features we are used to having in classical apps. And mobile apps require very specific server behaviors, not all of which are good security-wise.
In the end, the problem is that if it works — it does not mean it’s secure.
There are many things we don’t notice: Apple or ecosystem took care of them already. But looking closer at your app and device, we see the disparity between client and server in many things. The mobile app is not a ‘classical’ client, but a thin client, talking over several layers of abstractions. This, together with the aforementioned human problems, gets us into a sad state of things…
Mobile security is hard and still undeveloped. Security relies on the shared wisdom of previous generations; in our world, there is none yet. And we find it hard to import common knowledge, for several reasons: our ecosystem is really very different, and our risk models are blurred, even for us.
What exactly are we risking?
What bad could happen to our mobile apps? Potential attackers are looking for three things: data, identity, and control.
Security people will notice I’ve skipped something in the next slides: local access. iPhone’s architecture makes jailbreaking hard enough for a remote attacker, and for a local one it’s no better than smashing your fingers.
Data is anything sensitive for the user. Either data that is useful (and the user wants to keep it safe) or data that is private (and the user wants to hide it from other people’s eyes).
Why do we keep it on the phone in the first place? Because the user needs it, and, if done right, it is safer than on backend.
Identity is anything that could be used to impersonate the user, like tokens or passwords (a password to your system is actually not only your security risk — users reuse the same passwords across many systems, so any password is a valuable asset). The server is not talking to your phone or your app, it recognizes you by your identification tokens and that’s what attackers want.
Why do we even want to store that on the phone? Your phone is a good place to store data access credentials if you plan to access the data from your app: you can execute code around the credentials in a sandbox!
Control — the abilities that are bound to your phone: for example, being able to redirect application flow to make it authorize some backend action. Remember those cars, right?
What should we do?
We face serious threats, and this means more work: trying to figure out sophisticated, boring technologies invented by people who don’t know anything about mobile tech. However, we have many benefits; we can achieve a decent level of security just by understanding what is important.
Understand the strong sides
Things Apple got right for you in the first place.
User trust relationships: you have direct contact with users over (almost) bare metal. What the user types, we can trust most of the time. But don’t trust fingerprints too much: these keys are not easily rotated yet are easily stolen, so they are not good keys.
Trust the device: the iPhone is quite well protected; you know that story with the FBI. What is stored in local storage is safe enough from anything around it. So you only have to take care of your own process, which is good news.
Narrow scope: iOS apps don’t frequently open network ports for listening, because push notifications let the outside world ping us. We don’t run third-party code embedded at runtime. And it’s really complicated to enter a buffer overflow payload using an iPhone keyboard.
Low collateral risk: unlike the server, you don’t run a gazillion processes with shared resources. Processes are sandboxed, so privilege escalation and external control flow hijacking are less likely.
Trust: the result of authentication
Now, let’s talk more about trust — the core thing in security.
Trust server less: A server has a lot of moving parts, dependencies and people you don’t know running it. Your app, once compiled, is considerably safer than your NodeJS backend. The server is your backend, but it is subject to more traditional attacks, and should have as little trust as possible by default.
Explicit trust. Trust is given as the result of authentication, not as some default decision. App flows should require constant verification of trust.
Involve users: users are good carriers of secrecy, which is hard to steal. Users operate with physical devices, which are a good auth factor. We’ll talk about using multiple factors to authenticate users later.
Echelonization: add more layers of defense
Echelonized risk management is an idea from ancient warfare: if the system has only one protected perimeter, no matter how strong it is, it will eventually fall, even if it promises all the security guarantees in the world.
Okay, what is the solution? Every layer has its own defense, defenses are connected, trust and threats are calculated for each system layer, and for the whole system, too.
It means for mobile apps:
– authenticate important things manually, even within the app’s flow;
– verify credentials and certificates more than once and compare results;
– use multi-factor authentication, use different factors in different combinations;
– protect data with keys, which are stored elsewhere.
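To make the last point concrete, here is a minimal sketch of the “keys stored elsewhere” idea, using Apple’s CryptoKit (the function names are mine, and the in-memory master key stands in for one that would really live in the Keychain or Secure Enclave): data is sealed with a per-record data key, and that key is itself wrapped with a master key kept in a separate layer.

```swift
import CryptoKit
import Foundation

// Encrypt the payload with a fresh data key, then wrap (encrypt) the data key
// with a master key. Stealing the ciphertext store alone reveals nothing:
// the attacker also needs the master key, which lives in another layer.
func sealLayered(_ plaintext: Data, masterKey: SymmetricKey) throws -> (ciphertext: Data, wrappedKey: Data) {
    let dataKey = SymmetricKey(size: .bits256)
    let sealedData = try AES.GCM.seal(plaintext, using: dataKey)
    let rawDataKey = dataKey.withUnsafeBytes { Data($0) }
    let wrappedKey = try AES.GCM.seal(rawDataKey, using: masterKey)
    return (sealedData.combined!, wrappedKey.combined!)
}

// Unwrap the data key with the master key, then decrypt the payload with it.
func openLayered(ciphertext: Data, wrappedKey: Data, masterKey: SymmetricKey) throws -> Data {
    let rawDataKey = try AES.GCM.open(AES.GCM.SealedBox(combined: wrappedKey), using: masterKey)
    let dataKey = SymmetricKey(data: rawDataKey)
    return try AES.GCM.open(AES.GCM.SealedBox(combined: ciphertext), using: dataKey)
}
```

The point of the two-step design is echelonization itself: breaking one layer (the storage) still leaves the attacker facing another one (the key store).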
Compartmentalization: limit access to everything
The third important principle you should know. The policy of limiting access to the information, making things private by default and allowing access only to entities that need this info to perform their tasks.
Access only the data you need. Where you need. When you need. Never work with full records, only fields you need. Don’t transmit everything over the network. If anything goes wrong, you’ll leak only a part of the data, a minor credential or an insignificant ID.
Make sure trust tokens and protected data are stored separately and are not easily stolen together. Read the corresponding wiki page for deeper understanding.
Practical techniques for securing mobile apps
It may look like very complicated theoretical things. However, most of this hard theory makes a lot of sense in practice.
Classic security techniques
First of all, do all these traditional things:
– protect transport well, pin certificates;
– authenticate everything: both user and server;
– encrypt data in motion and at rest;
– protect keys well.
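To illustrate the pinning point, here is a rough sketch of certificate pinning via a `URLSession` delegate (the structure is standard; loading of the bundled DER certificates and the older `SecTrustGetCertificateAtIndex` call, deprecated in recent iOS versions, are simplifications for illustration):

```swift
import Foundation
import Security

final class PinnedSessionDelegate: NSObject, URLSessionDelegate {
    // DER-encoded certificate(s) shipped inside the app bundle.
    let pinnedCertificates: [Data]

    init(pinnedCertificates: [Data]) {
        self.pinnedCertificates = pinnedCertificates
    }

    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard let trust = challenge.protectionSpace.serverTrust,
              SecTrustGetCertificateCount(trust) > 0,
              let serverCert = SecTrustGetCertificateAtIndex(trust, 0) else {
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }
        let serverCertData = SecCertificateCopyData(serverCert) as Data
        // Proceed only if the server's leaf certificate matches a pinned one.
        if pinnedCertificates.contains(serverCertData) {
            completionHandler(.useCredential, URLCredential(trust: trust))
        } else {
            completionHandler(.cancelAuthenticationChallenge, nil)
        }
    }
}
```

With this delegate attached to your `URLSession`, a man in the middle presenting a different (even validly signed) certificate gets the connection cancelled.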
These techniques will stop most of the attackers. But not all. Let’s talk a bit about advanced ideas.
End-to-end encryption allows you to echelonize risk, and it compartmentalizes sensitive data.
If you don’t trust your backend to store unprotected data, it can’t be stolen from there.
Real end-to-end has only users and their credentials as a source of trust. Servers and network act as the medium only, so their state does not matter much. Their malfunction/compromise is only as bad as Denial of Service attack.
Good end-to-end exists for both data in motion and data at rest. Take a look at the scheme above and read this useful guide if you have doubts about which iOS crypto library to use in your app. For data in motion, there are plenty of specialized protocols with ephemeral keys and strong crypto. For data at rest, you can pick any easy AES wrapper and just manage the keys correctly. But, being quite biased, I advise using Themis Secure Cell anyway :)
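For a feeling of what such an “easy AES wrapper” looks like, here is a minimal sketch using Apple’s CryptoKit (the helper names are mine; the key must come from somewhere safe such as the Keychain, never be hard-coded or stored next to the data):

```swift
import CryptoKit
import Foundation

// AES-GCM gives both confidentiality and integrity: tampered ciphertext
// fails to decrypt instead of silently producing garbage.
func encryptAtRest(_ plaintext: Data, with key: SymmetricKey) throws -> Data {
    try AES.GCM.seal(plaintext, using: key).combined!
}

func decryptAtRest(_ stored: Data, with key: SymmetricKey) throws -> Data {
    try AES.GCM.open(AES.GCM.SealedBox(combined: stored), using: key)
}
```

The wrapper is trivial by design; as the text says, the real work is managing the key correctly.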
Multi-factor authentication is a very important idea: authenticate users only when they show several proofs of who they are. Talking in a more scientific way, there are three large classes: “thing you know” (password), “thing you have” (mobile phone), “thing you are” (fingerprint). The important idea is that all credentials are independent of one another and cannot be derived from one another.
MFA follows compartmentalization: it involves checking isolated, unrelated things you have to ensure it’s you. If they’re properly isolated, chances are they won’t be stolen at the same time.
MFA means that it’s not enough to use different authentication methods from one class. You should combine methods from different classes: like a fingerprint (thing you are) and password (thing you know), or password plus Google Authenticator, or voice confirmation of your password over a phone (all three).
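To make the “thing you have” factor concrete, here is a sketch of the TOTP algorithm that apps like Google Authenticator implement (RFC 6238 on top of RFC 4226’s HOTP), written with CryptoKit’s HMAC; the function name and defaults are mine:

```swift
import CryptoKit
import Foundation

// One-time code from a shared secret and the current time: HMAC-SHA1 over
// the 30-second interval counter, then "dynamic truncation" to 6 digits.
func totp(secret: Data, date: Date = Date(), period: TimeInterval = 30, digits: Int = 6) -> String {
    var counter = UInt64(date.timeIntervalSince1970 / period).bigEndian
    let message = withUnsafeBytes(of: &counter) { Data($0) }
    let mac = Data(HMAC<Insecure.SHA1>.authenticationCode(for: message,
                                                          using: SymmetricKey(data: secret)))
    let offset = Int(mac[mac.count - 1] & 0x0f)
    let bin = (UInt32(mac[offset]) & 0x7f) << 24
            | UInt32(mac[offset + 1]) << 16
            | UInt32(mac[offset + 2]) << 8
            | UInt32(mac[offset + 3])
    let code = bin % UInt32(pow(10.0, Double(digits)))
    return String(format: "%0\(digits)u", code)
}
```

Note that the code proves possession of the shared secret (the “thing you have”), which is exactly why it must belong to a different class than the user’s password.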
Zero knowledge proof
Sometimes, even a single request is a data leak. Imagine you need to enter your passport ID to log into some governmental system: if the connection or the remote party is compromised, attackers now know your passport ID. Sometimes you need to transmit sensitive authentication data but don’t have a trusted channel of communication at all. In this situation, establishing trust is a complicated task, isn’t it?
Zero Knowledge Proof is a protocol that allows two parties to compare a secret without sending (and thus leaking) it. It has real mathematical proofs behind it and was invented by scientists a few decades ago. You may find this small example useful for a better understanding of the protocol itself.
If the remote party doesn’t have the same credential as you, the request will fail without leaking the request data. It works for passwords, document ids, and any private credentials with the public identifier. By the way, it combines really well with MFA. Unfortunately, I’m not aware of any iOS implementations other than the library I’m contributing to, so it’s a shameless plug :)
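To get a feeling for how such a protocol works, here is a toy Schnorr-style identification sketch with deliberately tiny, insecure numbers (real implementations use large groups, random nonces, and a vetted library): the prover convinces the verifier that it knows the secret x behind the public value y = g^x mod p, without ever sending x.

```swift
// Toy Schnorr identification over a tiny group (p = 23, g = 5).
// WARNING: illustrative only; real protocols use ~256-bit groups.

// Square-and-multiply modular exponentiation (safe for these small numbers).
func modPow(_ base: UInt64, _ exp: UInt64, _ mod: UInt64) -> UInt64 {
    var result: UInt64 = 1, b = base % mod, e = exp
    while e > 0 {
        if e & 1 == 1 { result = result * b % mod }
        b = b * b % mod
        e >>= 1
    }
    return result
}

let p: UInt64 = 23, g: UInt64 = 5
let x: UInt64 = 6              // prover's secret, never transmitted
let y = modPow(g, x, p)        // public value, known to the verifier

// Prover: commit to a nonce r (must be fresh and random in real life).
let r: UInt64 = 3
let t = modPow(g, r, p)        // commitment, sent to the verifier

// Verifier: reply with a random challenge c.
let c: UInt64 = 4

// Prover: respond with s = r + c*x mod (p - 1); x itself never leaves.
let s = (r + c * x) % (p - 1)

// Verifier: accept iff g^s == t * y^c (mod p).
let accepted = modPow(g, s, p) == t * modPow(y, c, p) % p
```

The response s looks random without knowing r, yet it only verifies if the prover really knew x; a compromised channel observing t, c, and s learns nothing about the secret.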
Well, remember echelonization? No single tool is good enough. Let’s combine them.
A typical secure app layout has SSL over a network, and it somehow encrypts data on both ends. But, well, it’s barely enough. Such a system may be still vulnerable to MitM and data leakage.
Step 2 makes the trust model consistent and compartmentalizes things from one another. Let’s keep most of the trust in the user’s hands: encrypt all stored data with keys derived from the user’s password, so the application can decrypt data only upon password input. As for traffic, encrypt everything with ephemeral keys, negotiated from the parties’ public keys. This way, the transport and the data itself are separated and have their own layers of protection.
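The “keys derived from the user’s password” part can be sketched with PBKDF2 from CommonCrypto (the function name and round count are illustrative assumptions; the salt must be random per user and stored alongside the ciphertext):

```swift
import CommonCrypto
import Foundation

// Derive a 256-bit encryption key from a user password. A slow KDF with
// many rounds makes brute-forcing stolen ciphertexts expensive.
func deriveKey(password: String, salt: Data, rounds: UInt32 = 100_000) -> Data? {
    var derived = Data(count: 32)
    let status = derived.withUnsafeMutableBytes { derivedPtr in
        salt.withUnsafeBytes { saltPtr in
            CCKeyDerivationPBKDF(
                CCPBKDFAlgorithm(kCCPBKDF2),
                password, password.utf8.count,
                saltPtr.bindMemory(to: UInt8.self).baseAddress, salt.count,
                CCPseudoRandomAlgorithm(kCCPRFHmacAlgSHA256),
                rounds,
                derivedPtr.bindMemory(to: UInt8.self).baseAddress, 32)
        }
    }
    return status == Int32(kCCSuccess) ? derived : nil
}
```

The derived key then feeds whatever AES wrapper you chose earlier, so the data can only be opened when the user actually types the password.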
Now, let’s add more novel techniques to enable even better security guarantees: by introducing several authentication factors, we make identity forgery even more complicated and minimize the risk of stealing even encrypted data. Moreover, when the parties only negotiate to exchange the data, adding a zero knowledge proof protocol helps us stay strong in case the whole system gets compromised and the remote party gets replaced by an evil twin.
Feeling more secure now? :)
I would be happy if you tap the links from the text/slides and read them. Obviously, one post (even a long read) is not enough to describe everything. You may also be interested in my other slides, where I discuss some approaches in more depth: SSL pinning, ephemeral keys, and so on. Then put that Security Wizard Hat on and analyze your system from a security point of view. Find possible threats and implement a thing or two.
– Text, slides and video from my previous talk “Data protection for mobile client server architecture”
– Choose crypto iOS library that works for you
– OWASP: iOS application security testing cheat sheet
– Zero knowledge proof illustrated primer
Coming soon. I’ll post it here as soon as it is ready :)
If you like my post and sketches, tap on 💚!
Originally published on Stanfy’s blog in May 2016.