Kicking Off App Security
by Photis Patriotis
Truthfully, few of Prolific’s early days as a mobile product agency were spent worrying about security. We felt relatively safe viewing ourselves as doing only front-end engineering: asset ownership is such an important part of information security, and we don’t own users’ devices. It should be needless to say, but that thinking was flawed; being careless about what happens on users’ devices is bad practice. Since those days, smartphone manufacturers have also made major pushes to lock down their devices, which gave us confidence that we could make the apps we build for our partners secure.
So, in the past two years, we have been working on ever-improving security processes for our apps. Security is a constant fight and a continuous practice, not a singular event, so this post is about what we’ve done to get started and what we considered highest priority and largest impact. We decided to approach this by going across the lifecycle of an app and securing each major phase:
- Development: Adding security to the product development flow itself.
- Compilation: When code compiles into builds for both testing and production use.
- Execution: Operations of the app on the device.
Execution
Execution is the most crucial part of securing an app: it is usually the most discussed phase, and the rest of the lifecycle revolves around what happens at runtime. For this phase, we decided to focus on the basic vulnerabilities of mobile software. Vulnerable points in app execution mostly have to do with the flow of data, both into the app (Networking) and within the app (Memory & Storage).
Networking
The best way to make sure data gets to the app without tampering or interception is to use HTTPS for EVERYTHING. iOS currently allows App Transport Security exceptions that override this, but Apple has said HTTPS will soon be a requirement, and Android has easy ways to enforce it, so we decided to get ahead of the requirement and mandate it for all APIs our apps use.
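On Android, the mandate above can be enforced app-wide with a network security configuration. A minimal sketch (iOS gets the equivalent behavior by default via App Transport Security, as long as no blanket `NSAllowsArbitraryLoads` exception is added to the Info.plist):

```xml
<!-- res/xml/network_security_config.xml: refuse all cleartext HTTP
     traffic so every request must go over HTTPS. -->
<network-security-config>
    <base-config cleartextTrafficPermitted="false" />
</network-security-config>
```

The file is wired up by pointing `android:networkSecurityConfig="@xml/network_security_config"` at it from the `<application>` element of the manifest.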
Once data gets to a device owned by a user, even we, as the app developer, have little control over it. So, the foremost technique for protecting sensitive user and business data is to send the absolute minimum amount of it over the network to the app in the first place.
We added steps to our feature development process to analyze data requirements and work with API teams to make sure only the minimum required for operation is sent. The following are some examples of sensitive data that should be evaluated, with a strategy developed for managing each properly:
- Personally Identifiable Information: Email, First Name, Last Name, Address, etc.
- Data covered by Payment Card Industry compliance requirements: Credit Card Number, Expiration Date, etc.
- Other app-specific data: Passwords, or anything the business would consider sensitive. How to analyze this is out of scope for this post, but it should absolutely be done.
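The "send the minimum" idea can be made concrete with a field whitelist: a given screen declares exactly which fields it needs, and everything else is stripped before the payload ever leaves the backend. This is a hypothetical sketch; the field names are illustrative, not from any real API.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch of data minimization: reduce a full server-side user record
// to the minimal field set a specific feature actually needs.
public class DataMinimizer {
    // Hypothetical example: a checkout screen only needs these fields.
    public static final List<String> CHECKOUT_FIELDS =
            List.of("firstName", "cardLastFour");

    public static Map<String, String> minimize(Map<String, String> fullRecord,
                                               List<String> allowedFields) {
        return fullRecord.entrySet().stream()
                .filter(e -> allowedFields.contains(e.getKey()))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }
}
```

The point of putting this in code is that the whitelist becomes reviewable: a Product Manager or security reviewer can see, per feature, exactly which sensitive fields cross the network.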
Memory & Storage
If we find that any sensitive data is required for the operation of the app, we have to ensure that, once on the device, it’s not easily accessible outside of the app’s intended use. The first priority is removing it from memory immediately after use; if storage is required, we use encryption.
Encryption at rest
Both iOS and Android have made a strong push to increase security on their devices, even at the hardware level. Using these native OS functionalities lets us piggyback on systems built by some of the top encryption experts in the world at companies like Apple and Google. Keychain (iOS) and Keystore (Android) are examples of this and are easy to implement (iOS / Android), so we use them as often as possible.
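To make "encryption at rest" concrete, here is a minimal AES-GCM sketch. On a real device the key would be generated inside, and never leave, the iOS Keychain or Android Keystore; this sketch generates one in software purely for illustration.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;
import java.util.Arrays;

// Illustration of authenticated encryption at rest with AES-GCM.
public class AtRestCrypto {
    private static final int IV_BYTES = 12;   // GCM-recommended IV size
    private static final int TAG_BITS = 128;  // authentication tag length

    public static SecretKey newKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        return kg.generateKey();
    }

    // Returns iv || ciphertext so the IV travels with the stored blob.
    public static byte[] encrypt(SecretKey key, byte[] plain) throws Exception {
        byte[] iv = new byte[IV_BYTES];
        new SecureRandom().nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        byte[] ct = c.doFinal(plain);
        byte[] out = new byte[IV_BYTES + ct.length];
        System.arraycopy(iv, 0, out, 0, IV_BYTES);
        System.arraycopy(ct, 0, out, IV_BYTES, ct.length);
        return out;
    }

    public static byte[] decrypt(SecretKey key, byte[] blob) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key,
               new GCMParameterSpec(TAG_BITS, Arrays.copyOf(blob, IV_BYTES)));
        return c.doFinal(blob, IV_BYTES, blob.length - IV_BYTES);
    }
}
```

GCM also authenticates the data, so a tampered blob fails to decrypt rather than silently returning garbage, which is exactly the property you want for data at rest on a device you don’t control.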
Penetration Testing
A common way to probe for vulnerabilities in an app is penetration testing. Penetration tests are essentially attempts to break into the app. Ray Wenderlich has a great two-part tutorial on how to do this on your own, but we prefer to use third parties so we can get an objective, cross-platform review of our apps. There are plenty of vendors to choose from, but we like to work with ones that can:
- Run tests frequently
For incremental releases, we run tests right before release, but for larger, long-term development, we make sure penetration tests are run around once every 2 months.
- Give clear mitigation actions
Our penetration testing vendors are able to clearly explain what vulnerability they detected, why it is an issue, and steps to address it.
- Prioritize based on severity
We also bucket vulnerability discoveries into actions that need to be taken immediately, or within 30, 60, or 90 days, since some issues can take considerable effort to address.
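The bucketing above is easy to encode as policy so findings get deadlines mechanically instead of by debate. A minimal sketch; the thresholds are our own policy as described above, not an industry standard.

```java
// Map a penetration-test finding's severity to a remediation
// deadline in days (0 = fix immediately).
public class RemediationPolicy {
    public enum Severity { CRITICAL, HIGH, MEDIUM, LOW }

    public static int deadlineDays(Severity s) {
        switch (s) {
            case CRITICAL: return 0;   // fix immediately, block release
            case HIGH:     return 30;
            case MEDIUM:   return 60;
            default:       return 90;  // LOW
        }
    }
}
```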
Development
The second most important part of securing the app lifecycle is making the product development flow itself take security into account. We also put safeguards in place for engineers, product managers, designers, and others while they are developing.
Security In Product Management
At Prolific, we have embedded security analysis into our product development workflow, starting from when teams are deciding how features will work and Product Managers are writing user stories and requirements. For many apps, security can be another dimension of the development lifecycle, a peer to user experience design and research.
The processes detailed in this post really describe our Security Software Development Lifecycle (S-SDLC). S-SDLCs have been adopted by many different companies, and each is different, but the diagram below and this introductory post are a great place to get an overview. And even though it’s a bit dated, there’s a Microsoft book called The Security Development Lifecycle which is definitely on my reading list.
At Prolific, our product management process revolves around writing user stories, detailing their corresponding acceptance criteria, and developing and testing them before pushing out a build. Our S-SDLC is interwoven into these processes. Product Managers think creatively about how user stories could affect security and write them accordingly. For very security-sensitive projects, we’ve even considered adding a role on the project team to review this aspect of the story writing.
In parallel to the standard product development process we also use our internal mobile audit to maintain security best practices across the organization. Our audit allows us to add, remove and/or modify techniques, processes, and tools across the whole company to keep up with new learnings.
Besides execution-based vulnerabilities, the feature-set itself should be evaluated at development time, as it can also have unintended consequences and vulnerabilities. For example, mobile operating systems have built-in features that ease experiences for users but may or may not be secure for a particular app.
One case is the background screenshot in iOS. The Chase app, for example, overrides it so that a user scrolling through their apps in front of someone else doesn’t need to worry about that person accidentally viewing their account information. For how to do this technically: iOS and Android.
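On Android, the equivalent protection is a one-liner: `FLAG_SECURE` blanks the window in the app switcher and also blocks screenshots. A sketch (this is platform code, so it only compiles inside an Android project; on iOS, the usual approach is to cover the window with a placeholder view when the app resigns active):

```java
// Android sketch: exclude this screen from screenshots and
// from the recents/app-switcher snapshot.
import android.os.Bundle;
import android.view.WindowManager;
import androidx.appcompat.app.AppCompatActivity;

public class AccountActivity extends AppCompatActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        getWindow().setFlags(WindowManager.LayoutParams.FLAG_SECURE,
                             WindowManager.LayoutParams.FLAG_SECURE);
    }
}
```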
App icons and titles can also be an issue if the app is related to something sensitive. They live on the user’s home screen and can be easily seen by others when scrolling through or via quick search.
Another example feature, app-specific rather than OS-based, is automatically logging out the user session after a certain amount of time, both on the frontend and backend. This is a common security measure, but it comes at a high user-experience cost. If it is required, we try to avoid extra friction by advising partners to authenticate strategically throughout the app. For example, Amazon asks users to re-enter their credit card when a new address is entered, to prevent fraud; Bank of America uses Touch ID to smooth out the constant logouts while keeping entry simple and secure.
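A client-side idle timeout like the one described above boils down to a small piece of pure logic (the backend must enforce its own expiry independently, since the client can be tampered with). A sketch with an illustrative threshold:

```java
import java.time.Duration;
import java.time.Instant;

// Client-side idle-timeout check: the session expires if no user
// activity has been seen within the configured window.
public class SessionPolicy {
    private final Duration idleTimeout;
    private Instant lastActivity;

    public SessionPolicy(Duration idleTimeout, Instant start) {
        this.idleTimeout = idleTimeout;
        this.lastActivity = start;
    }

    // Call on every meaningful user interaction.
    public void touch(Instant now) { lastActivity = now; }

    public boolean isExpired(Instant now) {
        return Duration.between(lastActivity, now).compareTo(idleTimeout) > 0;
    }
}
```

Passing `Instant` in explicitly (instead of calling `Instant.now()` internally) keeps the policy deterministic and easy to unit-test.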
The other major security process we put in place to protect the development team is a sandbox. Our environment was designed around three core objectives:
- Keep team members safe from themselves
Without even trying to be malicious, engineers and other team members could easily fat-finger something during a test and affect a production component. So, we completely removed access to production keys and environments for everyone in the company besides designated, centralized production release personnel.
- All code should be open source-able
This objective creates a mindset when building software that all eyes could be on your code. Whether code is public or private should be a business decision, but the quality and security of your code should be just as strong either way. Mandatory peer code reviews are a way of bringing this to the forefront, and again, keeping private information segregated from code repositories is crucial.
- Anyone with access should easily be able to build the project
This continues the second point: whether someone can build a project should be a simple business decision. We value giving as many team members as possible access to development builds for QA, testing, and similar purposes, and we don’t want to feel nervous about doing so.
We built this sandbox using a few simple tools and processes. cocoapods-keys for iOS and Gradle properties for Android keep keys out of our git repositories and integrate with our Bitrise CI solution. We use Dropbox as a makeshift Key Management System (note: we’re switching to a full solution in the near future): Dropbox holds the development keyfiles for each project, so we can share a specific project folder with a team member to give them access to build a development version of the project.
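On Android, the Gradle-properties half of this looks roughly like the sketch below: the secret lives in `~/.gradle/gradle.properties` on each developer’s machine (or in a CI environment variable), never in the repository, and the build injects it. The property name is hypothetical; cocoapods-keys plays the equivalent role on iOS.

```groovy
// build.gradle (module) — sketch. ACME_API_KEY comes from
// ~/.gradle/gradle.properties or a CI variable, not from git.
def apiKey = project.findProperty("ACME_API_KEY") ?: ""

android {
    defaultConfig {
        // Exposed to app code as BuildConfig.ACME_API_KEY
        buildConfigField "String", "ACME_API_KEY", "\"${apiKey}\""
    }
}
```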
Compilation
The final phase, which ties development and execution together, is when source code compiles into the IPA or APK that runs on devices. We took a few measures to ensure this process is protected against vulnerabilities or malicious code being added once the app leaves the engineer’s control: managing the third-party dependencies our code uses, and assigning a specific team to own the production app compilation process.
At Prolific, we make extensive use of open source and third-party libraries. We are currently working on better, standardized ways to review these for security before they’re added, but for now, we try to limit ourselves to popular libraries, which have a higher chance of having their vulnerabilities identified by the community. Beyond that, we use the pod outdated command in CocoaPods, VersionEye, and the dependency messages in Android Studio to make sure we’re notified about updates across the entire company.
Build a Release Team For Release Builds
Our sandbox keeps the development team from handling production keys, but these are precisely what’s needed when building an app for release. So, we designated a team with proper authorization to carefully and securely manage sensitive information: the Release Team. This team has had background checks done, spans the entire company, and is manageably small: one key person plus two backups. The responsibilities of the Release Team cover:
- The team directly handles secure key exchanges between us and our partners, without involving the rest of the development team.
- Release personnel are responsible for cycling passwords and keys at intervals in case of leaks. If the sandbox is set up properly, this can be done without affecting, or even being noticed by, the development team.
- Even though our security standards are part of our regular development audit, the Release Team executes an independent audit specifically for security on releases.
Take human touch out of the process
A big part of making the Release Team secure is keeping it small: the fewer people touching sensitive information, the easier it is to manage. We also implement as many tools as possible for our Release Team to keep their work streamlined. Two examples are Bitrise and 1Password.
We configured Bitrise to use webhooks so that when code is merged, builds kick off and pull down keys automatically during the build process. Different types of builds automatically grab the correct keys, giving us a no-touch system with keys kept safe from casual access.
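The branch-to-workflow mapping behind this can be expressed in the project’s bitrise.yml trigger map. A heavily simplified sketch (workflow names and branch conventions here are hypothetical):

```yaml
# bitrise.yml fragment: a push webhook selects the workflow, and each
# workflow pulls only the keys (secrets) it is entitled to.
trigger_map:
- push_branch: develop
  workflow: development     # signed with sandbox/development keys
- push_branch: release/*
  workflow: release         # Release Team keys, injected as CI secrets
```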
1Password helps us manage app credentials, especially when sharing them back and forth with partners. Our Release Team cycles passwords when possible and works with our partners to recommend proper access to things like App Store accounts.
Of course we still have plenty to do to improve our processes, but tackling the issue from these three areas has given us a lot of confidence in the integrity of the apps we release. As an ever-evolving process, much of the above will continue to be amended or eliminated as platforms evolve and we learn more about the apps we build. One of our next major initiatives is to make sure that all of the above measures are consistently enforced throughout our many products.