STACK-X Webinar — An Introduction to Building Secure Mobile Apps

Michael Tan
CSG @ GovTech
Aug 7, 2020

The following article was adapted from a STACK-X Webinar conducted by the mobile application penetration testing team from GovTech’s Cyber Security Group (CSG), on 16 July 2020.

So you want to build a (secure) mobile app

That’s great! Tapping into the vast, ever-expanding user base of smartphone owners is always a smart move, especially since most users today prefer to handle their day-to-day tasks from these handy devices.

However, it’s vital that you don’t overlook security during the development of your app. While it might seem like modern mobile operating systems are already packed with strong security features (such as application signing and sandboxing), it would not be wise to skip those security tests!

Many apps today are filled with gaping vulnerabilities, especially those originating from data storage and network communications.

Perhaps this is preaching to the choir — since you’re reading this, you probably already understand that, in today’s context, it’s extremely important to ensure that your apps are secure. Providing users with the peace of mind that their data and information are safe can go a long way, after all.

But before you begin trying to prevent these mistakes from being made, it’s important to understand what they are in the first place!

Five common vulnerabilities and how to prevent them

A good place to start would be with commonly-observed vulnerabilities in mobile apps.

We’re going to be running through some case studies to demonstrate these, but let’s first run through the testing methodology that we’ll be using (hopefully you’ll run these tests too when building your app):

Static Analysis — Extracting the mobile application’s binary and reverse engineering it to examine its code and resources. Things we might look for here include hard-coded credentials or leftover development files in the binary.

Dynamic Analysis — Testing the mobile application’s functionalities in real-time to examine any potential vulnerabilities. Testing here might involve trying to circumvent security measures and detect logic errors within the app.

With that in mind, let’s move on to the first of our five vulnerabilities (in no particular order):

1. Insecure data storage

As mentioned before, it’s easy to fall into the trap of thinking that modern operating systems are secure enough that you don’t have to implement additional security measures.

Insecure data storage occurs when developers operate on such beliefs and store sensitive data in plaintext on client-side devices, allowing attackers to very easily access sensitive data. Such data is often found in:

  • SQLite Databases
  • Log Files
  • Plist Files
  • Shared Preferences
  • XML Data stores or Manifest Files
  • Binary Data Stores
  • Cookie Stores

To illustrate this problem, we found a relatively popular, highly rated application on the Google Play Store that claims to be a secure note-taking application where you can store your passwords and sensitive details.

While testing this app, we were able to retrieve the user’s login information for the application. It had been stored locally on the device in the Shared Preferences file (which is easily recoverable), allowing us to access every piece of sensitive data that the user had entered into the app.

But it gets worse!

As we continued to test this app, it turned out that we never needed the user’s login credentials at all. The secret information had also been stored, completely unencrypted, in an SQLite database file; anyone with access to the user’s phone could very easily have obtained it.

Not so secure after all, it turns out.

How do I prevent this?

Well, information should always be stored in the backend if possible, and if the information has to be stored locally due to business requirements or the like, it should always be encrypted — never stored in plaintext!
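As a sketch of the ‘never plaintext’ rule, here’s a minimal, platform-agnostic Python example of storing a salted password verifier instead of the password itself. The function names are illustrative only; on Android you would typically reach for the Keystore or EncryptedSharedPreferences, and on iOS the Keychain.

```python
import hashlib
import hmac
import os


def make_verifier(password: str):
    """Derive a salted verifier so the plaintext password never touches disk."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest


def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive and compare in constant time; no plaintext is ever stored."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)
```

Only the salt and digest would be written to local storage; even if an attacker recovers both, there is no plaintext password or note content to read.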

2. Insecure Direct Object Reference

Insecure Direct Object Reference (IDOR) is a type of access control vulnerability where a backend service trusts user-supplied inputs without requiring additional validation of what the user is authorised to do.

That’s kind of like allowing anyone with an ID pass to access every part of your building…without checking if they’re authorised in the first place!

This allows attackers to gain unrestricted, unauthorised access to a user’s data. In the worst of cases, this could happen for apps dealing with critical information, leading to data loss and illegitimate transactions.

Today, most mobile applications communicate heavily with backend servers. When API calls are made during state transitions and user data is retrieved from a backend server, users usually don’t see or interact with this process.

It’s no wonder, then, that some developers overlook this and neglect to harden their API, effectively creating a server-side vulnerability.

Let’s have a look at how this works in practice.

We found an application that fetches appointment information based on a user’s NRIC number. Below, you can see that the app uses a post-authentication fetch to retrieve this information automatically (upon the user’s login), meaning that the user can’t alter the API call at all.

So far so good, right?

It soon became clear, however, that we could easily retrieve any other user’s appointment information just by spoofing the NRIC field.

Unfortunately, an attacker could easily enumerate NRIC numbers to obtain the appointment details of any user they wanted in this manner.

How do I prevent this?

As a developer, you should have your endpoints perform access control checks on any requests for sensitive information. This way, you can be sure that the user requesting that data is authorised to do so.
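The core of the fix is a one-line ownership check on the server. Here’s a hedged Python sketch of the pattern — the data, NRIC values and function names are all hypothetical:

```python
# Hypothetical appointment records keyed by NRIC (illustrative data only).
APPOINTMENTS = {
    "S1234567A": "Dental, 3 Aug",
    "S7654321B": "X-ray, 5 Aug",
}


def get_appointment(session_nric: str, requested_nric: str) -> str:
    # Access control: the identifier in the request must match the identity
    # bound to the authenticated session -- never trust the field as-is.
    if session_nric != requested_nric:
        raise PermissionError("not authorised to view this record")
    return APPOINTMENTS[requested_nric]
```

With this check in place, enumerating NRIC numbers in the request gets an attacker nothing but authorisation errors.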

3. Extraneous functionalities

When building complex apps and racing against harsh deadlines, it’s not uncommon to leave in a few things and forget about them in the chaos of it all.

That can sometimes be the fatal flaw of your app, however. When development or testing functionalities and details (that aren’t intended for users) are deployed to production, they can sometimes provide useful information for attackers to reverse engineer your app and find further vulnerabilities.

Take this next app, for example — a popular password manager app on the iOS app store.

While examining the code that performs user authentication, we found the following vulnerability:

The authentication logic can be summarised as ‘if the input matches the stored password, log in; otherwise, deny’. Interestingly, we found some suspicious behaviour in the equivalent line of code within the deny logic of the password manager app.

Entering the hardcoded string ‘*#06#*’ grants unauthorised access to the user’s stored credentials.

But why is that even there?

We can only assume that this was a convenience feature used during development and forgotten before the app’s release. Thanks to this oversight, anyone with knowledge of this backdoor can retrieve users’ stored passwords.

How do I prevent this?

It’s simple, although it takes slightly more effort: do your due diligence and clean up all debug and testing code before shipping the app to production. This includes console logs, unused code and testing functionalities.
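To make the pattern concrete, here’s a hypothetical Python sketch contrasting the backdoor described above with a flag-guarded debug path — and even the latter should be stripped from release builds, not merely switched off:

```python
DEBUG = False  # must be False (ideally absent entirely) in any release build


def authenticate(user_input: str, stored_password: str) -> bool:
    # The vulnerable pattern left in the password manager app:
    # if user_input == "*#06#*":   # hardcoded backdoor -- remove before release!
    #     return True

    if DEBUG and user_input == "dev-bypass":
        # Flag-guarded bypasses are still risky: strip them from release
        # builds entirely rather than relying on a runtime flag.
        return True

    return user_input == stored_password
```

Anything reachable in the shipped binary is reachable by an attacker with a decompiler, so the safest debug code is debug code that was deleted.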

4. Insecure client-side authentication

Insecure client-side authentication happens when authentication is performed insecurely on the client-side app, such as with missing or poorly implemented authentication protocols.

Attackers can exploit this by tampering with the application state to bypass authentication and gain access to the app.

For this vulnerability, we identified an Android app that claims to hide a user’s photos securely in a ‘photo locker’. It requires users to enter a PIN code that they must first set in order to access their stored photos.

Here’s a look at the app’s lockscreen:

We obviously wouldn’t have access to our user’s PIN, so we first tried looking for the pictures within the device:

But as it happens, they’d all been encrypted.

Our next recourse, then, was to bypass the authentication itself. To understand how we could do this, we first examined how the app’s authentication works.

We did this by decompiling the app’s APK file and found that the app relies on a function to check whether the PIN entered is the same as the PIN stored in the device. If the PIN is the same as the one stored, the function returns true. Otherwise, it returns false.

We relied on runtime tampering to ensure that the return value of the function is always true.

Just in case you don’t know, runtime tampering is a technique that modifies code in memory at runtime to manipulate the logical flow of the app. Most of the time — and in this case as well — we modify the arguments passed into a certain function (or the function’s return value) to trick the app into behaving the way we want it to.

As you can see below, we successfully changed the return value of the PIN checker to ensure that it’s always true, regardless of whether the PIN entered matches the PIN stored or not.

We were then able to enter the app and retrieve all the private photos stored by our user. Yikes.
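The runtime tampering described above can be loosely illustrated in plain Python by patching the checker at runtime so it always returns true — much as an instrumentation framework like Frida would do to a live Android process. The class here is a stand-in, not the app’s actual code:

```python
class PinChecker:
    """Stand-in for the app's decompiled PIN-checking logic."""

    def __init__(self, stored_pin: str):
        self._stored_pin = stored_pin

    def check(self, entered_pin: str) -> bool:
        return entered_pin == self._stored_pin


locker = PinChecker("4821")
print(locker.check("0000"))  # False: the wrong PIN is rejected

# Runtime tampering, in spirit: replace the method so it always returns True.
locker.check = lambda entered_pin: True
print(locker.check("0000"))  # True: authentication bypassed
```

Because the verdict is computed entirely on the client, one patched return value unlocks everything — which is exactly why the verdict shouldn’t live on the client at all.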

How do I prevent this?

You should first thoroughly understand the local authentication scheme you’re using and look for loopholes before releasing the app. Avoid storing passwords or tokens locally on the device itself as well, as this might allow attackers to retrieve that information and bypass the app’s authentication.

Alternatively and ideally, the best method you can use to prevent this is to simply have authentication take place through a backend server instead of trying to put it within the app.
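A minimal sketch of that server-side approach, with hypothetical names throughout: the server alone holds the credential verifier and makes the decision, handing the client only an opaque session token, so flipping a boolean in the client buys an attacker nothing.

```python
import hashlib
import hmac
import secrets
from typing import Optional

# Hypothetical server-side store: username -> salted hash of the PIN.
# (In practice, use a per-user random salt and a slow KDF such as PBKDF2.)
_SALT = b"per-user-salt-in-practice"
USERS = {"alice": hashlib.sha256(_SALT + b"2468").digest()}
SESSIONS = {}  # token -> username


def server_login(username: str, pin: str) -> Optional[str]:
    """Verify credentials server-side; return a session token on success."""
    expected = USERS.get(username)
    candidate = hashlib.sha256(_SALT + pin.encode()).digest()
    if expected is not None and hmac.compare_digest(candidate, expected):
        token = secrets.token_hex(16)
        SESSIONS[token] = username
        return token
    return None
```

A tampered client can lie to its own UI, but it cannot mint a valid server session, so the protected data stays out of reach.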

5. Code tampering

Malicious actors often tamper with an app’s critical business logic to change the application flow, bypass security mechanisms and gain unrestricted access to the app.

This time, we looked at a Basic Theory Test app (familiar to many young drivers). The app is free but displays ads, which can be disabled by paying for the full version. We noticed that two options were provided for access to the full version:

Of course, when we attempted to click on the Restore Full Version button, we were greeted with an error message (since we never did pay for it).

How does the app check whether a user has purchased the app before?

We extracted the dex file from the app’s APK, disassembled it into Smali intermediate code and then decompiled it to Java, studying how the app works from the Java source code (this last step isn’t strictly necessary if you’re well-versed in Smali). Below, you can see the “alreadyPurchased” flag we found in the source code, which the app uses to determine whether it has been purchased before.

Then, we identified the same flag in the Smali code and modified it to the value true. This tells the app that we’ve legitimately purchased the full version.

Finally, we recompiled and signed the package so we could install it on our device. And true enough:

How do I prevent this?

To counter code tampering, use checksums to determine whether an app has been tampered with, both in the package and in memory. You should also robustly check whether the app is running in an insecure environment (such as a jailbroken or rooted device) and deny access if it is.
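As an illustration of the checksum idea — the names and reference bytes here are hypothetical; in practice the reference digest would be computed at build time and ideally verified server-side, since a purely client-side check can itself be patched out:

```python
import hashlib

# Hypothetical reference digest, recorded at build time for the shipped package.
PACKAGE_V1 = b"pretend-these-are-the-apk-bytes"
EXPECTED_SHA256 = hashlib.sha256(PACKAGE_V1).hexdigest()


def integrity_ok(package_bytes: bytes) -> bool:
    """True only if the package matches the build-time reference digest."""
    return hashlib.sha256(package_bytes).hexdigest() == EXPECTED_SHA256
```

Any modification to the package — such as flipping an “alreadyPurchased” flag in Smali and recompiling — changes the digest and fails the check.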

So far, we’ve looked at some common vulnerabilities and how to address them. Many of the points covered can be summarised in these two key takeaways:

  • Generally speaking, always assume the device your app runs on is an insecure environment, and employ Defence in Depth strategies so your app stays secure even then.
  • Where possible, sensitive information should be stored, and sensitive operations conducted, on backend servers.

Building secure mobile applications with OWASP

Now that we’ve gotten a better understanding of how to prevent these errors, we can go a step further and introduce established standards with which to assess the security hygiene of our mobile applications.

That’s where OWASP comes in.

Image credit: https://owasp.org

OWASP, or the Open Web Application Security Project, is a global non-profit organisation that focuses on improving the security of software.

In case you’ve never heard of them, the organisation holds conferences and conducts training on a worldwide scale to educate users on security-related matters. Some of the notable projects that they’ve undertaken are:

  • OWASP Web Security Testing Guide (WSTG)
  • OWASP Mobile Security Testing Guide (MSTG)
  • OWASP Mobile Top 10
  • OWASP Mobile Application Security Verification Standard (MASVS)

Incidentally, we’ll be covering MASVS and MSTG today!

Security standards with MASVS

So, what is MASVS?

Quite simply, MASVS is a standard that defines the security requirements applicable to mobile apps and their OS platforms.

These requirements offer a baseline for mobile application security hygiene and help developers ensure that defence-in-depth measures and resiliency protections are in place against client-side threats such as rooting, jailbreaking and code tampering.

As you can see above, the requirements are tiered into levels, each suited to an app’s overall security needs.

  • L1 covers basic requirements for code quality, handling of sensitive data and interactions with mobile applications. This applies to all mobile apps.
  • L2 introduces more advanced security controls that are relevant for any apps dealing with PII — personally identifiable information.
  • R stands for resiliency and outlines the steps apps should take to protect against mobile client-side attacks such as app tampering and reverse engineering. Game apps should adhere to L1 and R requirements (so users can’t tamper with game scores, for instance), while apps with sensitive data stored locally on devices should adopt L2 and R.

A checklist of one of MASVS’s requirements

Curious about which requirements your app should adhere to? Well, MASVS’s requirement checklists (like the one above) provide a useful guide to see which levels of requirements your app should adopt.

Moving on, though — how do we use MASVS? Three main ways you can adopt these standards are:

  • As a metric — used by security researchers to evaluate the security standards of your mobile application
  • As guidance — for developers during the development and testing phases
  • As a procurement metric — for system owners to provide a baseline for security verification when engaging vendors to test your app

Remember that not all requirements listed in MASVS are necessarily relevant to your app! Evaluate each requirement individually so no effort is wasted implementing unnecessary security features.

Thorough testing with MSTG

MSTG is a testing guide and checklist that helps to outline baseline security requirements for your app.

But wait! Isn’t that exactly the same as MASVS?

Not quite. MASVS serves a different need, as the requirements outlined in it are used for an app’s planning and architecture design stages.

MSTG, however, describes the technical processes for developers and penetration testers on how exactly to test components of an app. It’s a technical manual that provides verification steps for each MASVS requirement for a given mobile application. This includes technical evaluation guides for app security via static and dynamic analyses.

Incidentally, every security weakness we found in the first section on common vulnerabilities is tagged within MSTG under specific categories!

Speaking of tagging, the manual actually groups test cases into different broad categories (such as network, resilience or authentication) under MSTG-ID tags. Perhaps you can try to locate some of the aforementioned vulnerabilities within these categories the next time you read it!

Now, how do we use MSTG, exactly?

Within each MSTG-ID tag’s respective page is an overview that runs you through what the test is and what the expected outcomes are. This is useful for your reference if you need a broad understanding of the issues discussed before you dive deeper into the technical aspects.

Further in is usually a static analysis section containing code snippets that show how the relevant checks are implemented and what best practices should be adopted.

Finally, there’s also a dynamic testing portion that outlines the exact steps for conducting these tests using various open source tools and techniques, as well as the expected outcomes of such tests.

You can see this reflected in the MSTG iOS Anti-Reversing Defenses page (MSTG-RESILIENCE-1). In it, we have our overview, followed by a static analysis portion with the relevant code snippets of various jailbreak detection techniques, and finally a dynamic analysis portion with a step-by-step guide on how to bypass this jailbreak detection.

A look at the interoperability of MSTG and MASVS

As you can see from the diagram, MSTG and MASVS are very much interoperable. Adhering to MSTG, the relevant portion of our software development cycle could look like this:

MASVS — We can first use MASVS as a list of security requirements to ensure our app is secure by design.

RASP — Runtime Application Self-Protection tools can then be used to detect and prevent real-time attacks (part of the MSTG Resilience requirements).

SAST — Static Application Security Testing tools can be employed to scan our app’s source code for security vulnerabilities at build time.

Mobile app security tools — These are automated, mobile-oriented tools that employ both static and dynamic tests to scan the app binary and source code for mobile-specific security vulnerabilities. Use them!

Manual PT — Finally, we can conduct manual penetration tests — using the tools and techniques an actual attacker might use — to assess the app holistically.

And that about sums up this section on MASVS and MSTG!

But before you go…

Some additional help

We know that having more resources is always beneficial.

Our Medium blog publishes weekly technical articles on cyber security that might just contain the insights you need for your latest project, so be sure to check out the other articles when you’re done.

Finally, if you’d like to catch the video version of this webinar, you can do so here.

On that note, that’s all we have for you today. Thanks for reading!

By GovTech’s Cyber Security Group (William Tan, Timothy Lee, Thomas Lim and Teo Kok Sang) and Technology Management Office (Michael Tan)
