Mobile testing with AWS: Using AWS Device Farm
AWS Device Farm is a testing service for mobile app developers building Android, iOS, or Amazon Fire OS apps. Announced at the regional AWS Summit in New York City in July 2015, AWS Device Farm allows developers to simultaneously test applications on a wide range of devices, including smartphones and tablets, in the cloud. Developers can simply upload their apps and test them against a variety of the latest device/OS combinations.
In this post, we’re going to walk you through the process of setting up and using AWS Device Farm.
Before we get into the how-to, let’s take a closer look at the problems that AWS Device Farm solves.
Because many developers rely on manual testing for their applications, this phase can be extremely time consuming and complex, especially given the large variety of devices on the Android market, not to mention the many variations of models, OS versions, carrier customizations, and more. This makes it difficult to access, or even successfully simulate, everything that's available.
AWS Device Farm addresses this challenge by providing a test environment that covers a broad set of the available combinations of devices and operating systems. It does not use simulators, but real, non-rooted devices.
This access to the latest hardware, operating systems, and platforms allows the testing process to be more streamlined, less expensive, and more innovative. According to the official press release announcing the service, “Developers can use AWS Device Farm to test real-world customer scenarios, fine-tuning test environments with a broad set of device configurations, including language, location, app data, and app dependencies.” It can identify errors across multiple devices and presents you with the data to analyze events across even hundreds of tests.
In terms of pricing, AWS Device Farm offers a free plan for the first 250 device minutes. Thereafter, you pay $0.17 per device minute. You can also pay a flat $250 per device per month for unlimited testing time.
For the full list of devices, pricing, documentation, and more visit the official AWS Device Farm page.
Before creating our project in AWS Device Farm, you will need to make sure that you have an AWS account. AWS recommends that instead of using your AWS root account to access the service, you create a new IAM (Identity and Access Management) user, unless you already have one. You will need to grant the IAM user permission to access Device Farm by creating a new access policy in IAM.
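As a sketch of what that access policy might look like, the snippet below builds a minimal IAM policy document granting Device Farm access. The wildcard `devicefarm:*` action is an illustrative assumption; in practice you may want to allow only the specific actions your workflow needs.

```python
import json

# A minimal sketch of an IAM policy document for Device Farm access.
# "devicefarm:*" is used here for illustration; narrow it in production.
device_farm_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["devicefarm:*"],
            "Resource": ["*"],
        }
    ],
}

# Serialize for use with the IAM console or `aws iam create-policy`.
policy_json = json.dumps(device_farm_policy, indent=2)
print(policy_json)
```

You would attach the resulting policy to the IAM user created for Device Farm.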
Now, let’s get started with our project.
The first step is to create a new project, which is a workspace inside AWS Device Farm. A project collects all of the Runs executed inside it. According to AWS, in this context, a run “represents a specific build of your app, with a specific set of tests, to be run on a specific set of devices.” A project provides a variety of ways to track the results of our tests. Projects are not differentiated by operating system, application, or test type; a Run is, so we can execute a test case for Android and one for iOS inside the same project, or even test two different apps. That said, since there is no limit on the number of projects we can create, it is better to keep projects separated by operating system, by application, or both.
Once the project is created, we can start with our first suite of tests executed on a pool of devices; in other words, our first Run. First of all, we need to choose the type of application that we want to test. Because we’ll focus exclusively on iOS and Android applications in this post, we will need to choose native (iOS or Android).
Next, we need to upload the application that we want to test. For Android, we need to export a .apk file from the debug configuration (in the release configuration, all tests are stripped). For iOS, we need a .ipa file.
To produce the .ipa, we need to archive our project (be sure to run the archive command with a real device or “Generic iOS Device” selected) and then export it, choosing “Save for development deployment”. Once that's done, we can upload it.
Configure a Test
Now we can configure our tests. There are many options available, but we can choose only one for each Run. External frameworks like Appium and Calabash are supported, along with XCTest, XCUITest, Instrumentation, and a built-in Fuzz test.
Let’s focus on the last one. The goal of a fuzz test (or fuzzing) is to stress the app with large amounts of random input that might make it crash, in order to discover errors or loopholes. No additional files are required: simply select Built-in: Fuzz and we are ready to go.
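To make the idea concrete, here is fuzzing in miniature: generate random inputs, feed them to a target, and record which inputs crash it. This is a generic sketch of the technique, not Device Farm's actual implementation; the `parse` target and its deliberate bug are invented for illustration.

```python
import random
import string

random.seed(0)  # fixed seed so the run is reproducible

def random_input(max_len=50):
    """Generate a random string of printable characters."""
    length = random.randint(0, max_len)
    return "".join(random.choice(string.printable) for _ in range(length))

def fuzz(target, iterations=1000):
    """Feed random inputs to `target` and collect the ones that crash it."""
    crashes = []
    for _ in range(iterations):
        data = random_input()
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, exc))
    return crashes

# A hypothetical target with a deliberate bug: it crashes on empty input.
def parse(data):
    return data[0]

failures = fuzz(parse)
print(f"{len(failures)} crashing inputs found")
```

A real fuzzer adds instrumentation and smarter input generation, but the core loop is the same: random input in, crashes out.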
Note: If you plan to execute a Fuzz test on your iOS app, make sure that you have iOS 9.0 as a minimum target, since Fuzz tests do not work on iOS versions 10.0.1 and later. (At this time, there are no devices with iOS 10.0.0 or 10.0.1; they start from iOS 10.0.2.)
The device pool is the set of devices our tests will run on. In Device Farm, a device pool “represents a collection of devices that typically share similar characteristics such as platform, manufacturer, or model.” A curated “Top Devices” pool is available out of the box, or we can build our own. The list of available devices is quite large, covering all iOS devices (from iOS 8.0 onward) plus a good selection of Android devices. You can find the most popular devices, like the Samsung Galaxy and Nexus lines, along with some Amazon Fire devices. You can also suggest a device to be added to the pool of available ones.
You can also define simple rules to automatically include certain devices, like “All Android tablets” or “All Samsung devices”.
There is no limit on how many devices you can include in a pool, but tests will run simultaneously on at most five devices.
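Pool rules like those above are essentially filters over device attributes. The sketch below shows the idea on a small hand-made device list; the records and the `build_pool` helper are illustrative assumptions, not Device Farm's data model or API.

```python
# Hypothetical device records, for illustration only.
devices = [
    {"name": "Samsung Galaxy S7", "manufacturer": "Samsung",
     "platform": "Android", "form_factor": "phone"},
    {"name": "Nexus 9", "manufacturer": "Google",
     "platform": "Android", "form_factor": "tablet"},
    {"name": "iPhone 7", "manufacturer": "Apple",
     "platform": "iOS", "form_factor": "phone"},
    {"name": "Amazon Fire HD 8", "manufacturer": "Amazon",
     "platform": "Android", "form_factor": "tablet"},
]

def build_pool(devices, **criteria):
    """Select every device whose attributes match all given criteria."""
    return [d for d in devices
            if all(d[key] == value for key, value in criteria.items())]

# "All Samsung devices" and "All Android tablets" as attribute filters:
samsung_pool = build_pool(devices, manufacturer="Samsung")
android_tablets = build_pool(devices, platform="Android", form_factor="tablet")
print([d["name"] for d in android_tablets])
```

Device Farm evaluates its rules server-side, but conceptually each rule narrows the catalog down to the devices your Run will use.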
Run our own tests
As stated above, Device Farm supports a large number of test suites. How tests are uploaded differs from platform to platform. On Android, it is sufficient to upload a test .apk (which can easily be generated with Android Studio); on iOS, the procedure depends on whether we want to run XCTests or XCUITests. For XCTests, we need to upload a zip containing our .xctest bundle; for XCUITests, we need a .ipa file.
The files we need can be found in ~/Library/Developer/Xcode/DerivedData/My-App/Build/Products/Debug-iphoneos. XCTest files are located inside the my-app-name package: open the package (Show Package Contents) and find the .xctest bundle in the PlugIns directory. For XCUITest, you need to compress the my-app-name-Runner package and change the extension from .zip to .ipa.
Also, remember that in the debug configuration, Xcode builds only for the current hardware architecture. This can lead to test errors if you try to run a test on a device with a different architecture from yours.
We can now run our test. The test will run in parallel on every device in the pool. The total time of a test is calculated as the sum of the time spent by each device; this is the time Amazon bills you for, so the bigger the device pool, the more time you are going to pay for.
When the tests are completed, we are presented with a nice recap view. For each device, we can see which tests succeeded and which failed, plus detailed information about the device. Everything is recorded: we can access performance graphs showing FPS, CPU, memory, and threads, plus all device logs and a video of the device screen. This is especially useful during UI or Fuzz testing to better identify the problem.
The base pricing plan is $0.17 per device minute. That is, if you run a test on two devices in parallel and each one takes 5 minutes to complete, you will spend (5 + 5) * 0.17 = $1.70. If your tests take a long time or involve a great number of devices, there is also a flat plan at $250 per device per month, which lets you run as many tests as you want.
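The billing arithmetic above can be sketched in a few lines. The break-even calculation at the end is our own back-of-the-envelope estimate, not an official AWS figure.

```python
PER_MINUTE_RATE = 0.17  # USD per device minute on the metered plan

def metered_cost(device_minutes):
    """Total cost is the sum of minutes across all devices, billed per minute."""
    return sum(device_minutes) * PER_MINUTE_RATE

# Two devices running in parallel, 5 minutes each: billed as 10 device minutes.
cost = metered_cost([5, 5])
print(f"${cost:.2f}")  # the article's example: (5 + 5) * 0.17 = $1.70

# Rough break-even for one device against the $250/month unlimited plan:
# beyond this many metered minutes per month, the flat plan is cheaper.
break_even_minutes = 250 / PER_MINUTE_RATE
print(f"{break_even_minutes:.0f} device minutes per month")
```

In other words, roughly 24 hours of testing per device per month is where the flat plan starts to pay off.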
An interesting feature is the ability to connect remotely to a specific device model via a remote screen and install our app. This is particularly useful if you need to test on a particular device (to reproduce a bug, for example), although the experience is not as smooth as it should be, with notable lag on the device screen.
Device Farm is surely a service worth trying, especially if you have your own test suite. It is easy to use and offers a very detailed recap view, providing all the information needed to reproduce a bug. Its strong point is the ability to execute tests on a wide range of devices, allowing us to discover problems that could not be seen on our own test devices. This is very handy on Android in particular, where it is difficult to cover many devices due to the platform's fragmentation. The documentation is a bit lacking on how to export tests for iOS (especially where to find them) and on some non-code-related errors (such as when you try to upload an .ipa compiled for a different architecture). The remote device feature is cool, but currently its usability is only average due to the high lag between input and device response.
Originally published at cloudacademy.com on April 4, 2017.