This is part 2 of our views on APIs. If you haven't read part 1, why are you still here? Go take a look :)
APIs generally have access to loads of user data; they are the middlemen in the transactions between the client interface and the back-end databases. Features such as search or filtering will operate on data that the API receives from a back-end database query; however, this should be done server-side, not locally on the client against stored or received data, as the latter risks exposing a lot of information.
With this application, a GET request to api.example.com/call/data returned the entire listing of companies, emails, usernames, user IDs and real names held by the business: perfect for direct social engineering attacks. This data was then stored locally in a file and used by the search bar to narrow down results based on pattern matching. …
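The contrast between these two patterns can be sketched in a few lines. Everything here is invented for illustration (the dataset, `getAllUsers` and `searchUsers` are not from the app under test); the point is simply where the filtering happens.

```javascript
// Illustrative sketch only: the dataset and function names are made up.
const users = [
  { id: 1, username: 'alice', email: 'alice@example.com' },
  { id: 2, username: 'bob',   email: 'bob@example.com' },
];

// Risky pattern described above: hand the whole dataset to every
// client and let it filter a locally stored copy.
function getAllUsers() {
  return users; // exposes every record, including emails
}

// Safer pattern: the query runs server-side and only the matching,
// minimal records ever leave the server.
function searchUsers(query) {
  const q = query.toLowerCase();
  return users
    .filter((u) => u.username.toLowerCase().includes(q))
    .map((u) => ({ id: u.id, username: u.username })); // omit emails
}
```

In the second version a client can only ever learn what it explicitly searched for, and sensitive fields are stripped before the response is built.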
Application programming interfaces, or APIs, are a large part of today's world; they are sets of software functions that define the way multiple applications communicate with each other. Whenever you use an Internet-aware application, whether a software package, web app, or mobile app, it will send data to a back-end server as a request. The server then acts on this information, interpreting it, taking some action, and sending the result back to the client as a response; this is received by the application and translated into a human-readable format.
Some APIs are so large that they are more like complete products of their own, not just an intermediary. …
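The request/response cycle described above can be reduced to a tiny sketch. The endpoint, payload and handler below are entirely made up; they just show the server interpreting a request, acting on it, and returning a response the client can render.

```javascript
// Minimal, invented sketch of the request -> action -> response cycle.
function handleRequest(request) {
  // The server interprets the request and takes some action on it.
  if (request.method === 'GET' && request.path === '/status') {
    return { status: 200, body: { ok: true } };
  }
  return { status: 404, body: { error: 'not found' } };
}

// The client sends a request and turns the response into something
// readable by humans.
const response = handleRequest({ method: 'GET', path: '/status' });
console.log(`Server replied ${response.status}:`, JSON.stringify(response.body));
```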
Mobile applications are part of our daily lives, from banking, messaging, and health to finance and social networking; but what do most of these apps have in common?
They all hold some sort of private user data.
Whether that data is account numbers, private messages, credit card statements, or the location where you last checked in… that nice cafe round the corner from your house. Our devices store an awful lot of information that could help someone steal an identity.
We rely on the app developers to adequately protect the data we put into their apps.
If we, as users, decide we want to use an app, we have to rely on its developers to build it securely and protect our stored data; that is the implicit trust we place in them. We have usernames, passwords, and biometrics, but what about the custom security features added to applications in an attempt to safeguard this private information from malicious intent; protections such as PINs? …
Almost all mobile applications communicate with a back-end server. Whether it is app data, backups, or analytics being transmitted, there is some sort of back and forth of data over the wireless communication channels of mobile devices.
Most developers know that they need to use HTTPS (it is 2020, after all), but in mobile applications this doesn't go far enough. If you don't check that the server you're communicating with is the one you're expecting, how can you trust that the communication is secure and not being intercepted?
If you don’t verify the server is the legitimate one, how do you know user communication is secure? …
Root detection in Android apps has always been a cat-and-mouse game. Developers come up with new checks, or a new library comes out; attackers then bypass those checks or hide root from the filesystem. Despite all the work that goes into new techniques on either side, root detection remains one of the first hurdles in a defence-in-depth solution for mobile applications, and, as security researchers, one we see all the time.
This post isn't meant to educate about rooting or the act of obtaining root, but it is useful to understand the concepts surrounding it. To root a device means to gain access to the super-user account of the operating system; on Android, which is based on Linux, this user is called ‘root’. …
When testing Android mobile apps, quite often you can find yourself facing a security mechanism that you wish to bypass, either because the app won't run (e.g. root detection) or because there is something else you want to investigate further (e.g. SSL pinning).
Usually a tester has two options to bypass these mechanisms:
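Whichever route is taken, the runtime version usually ends up as a small Frida hook. The sketch below forces a hypothetical root check to report “not rooted”; the class and method names (`com.example.app.SecurityChecks`, `isDeviceRooted`) are invented for illustration, and on a real engagement you would recover them by decompiling the app first.

```javascript
// Hypothetical Frida script: override an (invented) root-check
// method so it always returns false.
function bypassRootCheck() {
  Java.perform(() => {
    const Checks = Java.use('com.example.app.SecurityChecks');
    Checks.isDeviceRooted.implementation = function () {
      console.log('isDeviceRooted() called -- returning false');
      return false; // original result is never computed
    };
  });
}

// Only run inside Frida, where the Java bridge exists.
if (typeof Java !== 'undefined') {
  bypassRootCheck();
}
```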
We've observed how modules are loaded within the app, both at app execution and throughout the app lifecycle; but why would this need to be done on a pentest? What can we actually do with all these scripts?
The aim of this final post is to solve the third consideration: to push the scripts even further and manipulate the input arguments and return values of native functions, so that we can modify the true workflow of the app and its designed behaviour.
Let us describe our scenario and our goal. We are targeting the OpenSSL function SSL_CTX_set_cipher_list(), which is essentially used to specify the default ciphers to be used during the SSL negotiation. …
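A hook for this scenario can be sketched with the Interceptor API: rewrite the second argument (the cipher string) on entry, and observe the return value on exit. The replacement cipher string below is just an example value to show the mechanism, not a recommendation; real SSL_CTX_set_cipher_list() takes (SSL_CTX *ctx, const char *str) and returns an int.

```javascript
// Sketch: intercept SSL_CTX_set_cipher_list() and swap out the
// cipher string argument. Example replacement value only.
const REPLACEMENT_CIPHERS = 'NULL-MD5';

function hookSetCipherList() {
  const addr = Module.findExportByName(null, 'SSL_CTX_set_cipher_list');
  if (addr === null) return; // function not present / not loaded yet

  Interceptor.attach(addr, {
    onEnter(args) {
      // args[1] is the `const char *str` cipher list
      console.log('original ciphers:', args[1].readUtf8String());
      // Keep the replacement buffer alive for the call's duration
      // by hanging it off `this`.
      this.newList = Memory.allocUtf8String(REPLACEMENT_CIPHERS);
      args[1] = this.newList;
    },
    onLeave(retval) {
      console.log('SSL_CTX_set_cipher_list returned', retval.toInt32());
    },
  });
}

// Only run inside Frida, where Interceptor/Module/Memory exist.
if (typeof Interceptor !== 'undefined') {
  hookSetCipherList();
}
```

Storing the allocated string on `this` matters: if it were garbage-collected before the native call completed, the function would read freed memory.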
In part 2 we moved to a dynamic approach to investigating native libraries, using frida-trace and the Frida CLI. We leveraged the power of the API to construct our own scripts to extract useful information from these native functions; however, we were left with two considerations:
The script can only enumerate the modules loaded at its execution — not during the app lifecycle.
Using the memory base address and the size of the library, monitor the memory to extract useful values.
We will explore the Frida API further and, using code examples, try to solve these issues.
The first point can be resolved using the Interceptor API, which, as the name suggests, lets us intercept a target function. We are interested in any library that is opened at any time during the app lifecycle, not just at app execution, so that we can enumerate its exports; certain libraries are not loaded until a later stage, when particular functionality is triggered by the user. …
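One way to sketch this is to hook the loader functions themselves and log each library path as it is opened: `android_dlopen_ext` is the loader used on modern Android, with plain `dlopen` kept as a fallback. Assume a Frida runtime; outside it the guard at the bottom makes this a no-op.

```javascript
// Sketch: catch libraries opened at ANY point in the app lifecycle
// by intercepting the dynamic-loader entry points.
function watchLibraryLoads() {
  ['android_dlopen_ext', 'dlopen'].forEach((name) => {
    const addr = Module.findExportByName(null, name);
    if (addr === null) return;

    Interceptor.attach(addr, {
      onEnter(args) {
        // First argument is the path of the library being opened
        this.path = args[0].readUtf8String();
      },
      onLeave(retval) {
        // A non-NULL handle means the load succeeded
        if (!retval.isNull()) {
          console.log(`${this.path} loaded`);
          // ...a good place to enumerate the new module's exports
        }
      },
    });
  });
}

if (typeof Interceptor !== 'undefined') {
  watchLibraryLoads();
}
```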
Having covered what can be done with a static approach¹ to native libraries as an information-gathering technique, we can move to a more dynamic approach: running the app and leveraging runtime tools.
Ideally, we would like to achieve the following:
To answer the first point, we could initially use frida-trace². As the official definition from its tutorial page explains, frida-trace is a command-line tool for “dynamically tracing function calls”, and is part of the Frida…
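When frida-trace attaches to a function, it auto-generates an editable JavaScript handler per traced function. A handler for something like `open()` looks roughly like the object below; the exact wrapper shape varies between Frida versions, and the argument formatting here is our own addition rather than the generated default.

```javascript
// Rough shape of a frida-trace handler (normally lives in the
// __handlers__ directory that frida-trace creates).
const openHandler = {
  // Called on entry; `args` are the raw native arguments as pointers.
  onEnter(log, args, state) {
    log(`open(path="${args[0].readUtf8String()}")`);
  },
  // Called on return with the raw return value.
  onLeave(log, retval, state) {
    log(`open() => ${retval}`);
  },
};
```

Editing these generated handlers is the quickest way to go from “which functions are called?” to “with which arguments, and what comes back?”.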
Mobile security testing of Android applications involves code review in order to understand how the app logic and flow work, as well as to identify any potential security vulnerabilities. If the app was developed in Java, decompiling the app means reversing the compilation process in order to recover the Java source code from the compiled binary. To accomplish this, testers can use freely available tools (such as jadx and enjarify) which take the APK file and attempt to retrieve the Java source code.
Sometimes decompilation of the code back to Java class files is not enough.
However, this might not be enough. Sometimes developers use the so-called Android NDK (Native Development Kit), which allows them to write parts of the app in the native C and C++ languages. In this case we are talking about native functions, rather than Java methods, and native libraries, referred to as .so …
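Once such a library is loaded, Frida can list the native functions it exposes, which is the usual first step before hooking any of them. A sketch, assuming a Frida runtime; the library name `libnative-lib.so` is just the NDK sample default, used here as an example.

```javascript
// Sketch: enumerate the function exports of a loaded .so.
// 'libnative-lib.so' is an example name.
function listNativeExports(libraryName) {
  const mod = Process.findModuleByName(libraryName);
  if (mod === null) {
    console.log(`${libraryName} is not loaded yet`);
    return [];
  }
  return mod.enumerateExports()
    .filter((e) => e.type === 'function') // skip exported variables
    .map((e) => `${e.name} @ ${e.address}`);
}

// Only meaningful inside Frida, where Process exists.
if (typeof Process !== 'undefined') {
  listNativeExports('libnative-lib.so').forEach((line) => console.log(line));
}
```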