Building a reproducible AWS serverless Android DevOps flow — 2
We’ve seen some YAML scripts showing how to define an AWS Lambda function via CloudFormation. This post will go through the rest of the setup.
Here is the sample project. Execute the deploy_pipeline.sh script to deploy the default setup.
It’s a default Android project created by Android Studio, two folders of Python files, and some scripts. Before deploying the templates to CloudFormation, also check that the AWS CLI, Python 3.6, and pyenv are installed.
We’ll mainly focus on the two YAML files under the project root; they define all the resources that AWS CloudFormation will generate for us.
The first one is custom_resources.yaml. The resources defined here are implemented as AWS CloudFormation custom resources; we’ll discuss this part later.
The second file, ci.yaml, describes the CI pipeline itself, as its name suggests. You’ll see an S3 bucket where our code-related files are stored, the CodeBuild project that assembles our Android APKs and uploads them to Device Farm, the Lambda functions that post pipeline status messages to our Slack channel, and all the roles and policies that grant and restrict access to these resources.
Let’s start with the Parameters section:
Resources will reference these Parameters; the Type keyword defines each parameter’s type. Besides the plain String type, we can also use AWS-specific parameter types, like AndroidDevOpsGithubToken above, whose value is a token string stored in SSM.
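As a minimal sketch of such a Parameters section (the owner/repo names are illustrative; only AndroidDevOpsGithubToken is named in this post), the SSM-backed type is AWS::SSM::Parameter::Value<String>:
Parameters:
  AndroidDevOpsGithubOwner:        # illustrative plain String parameter
    Type: String
  AndroidDevOpsGithubRepo:         # illustrative plain String parameter
    Type: String
  AndroidDevOpsGithubToken:
    # resolved at deploy time to the value stored in SSM Parameter Store under this key
    Type: AWS::SSM::Parameter::Value<String>
    Default: AndroidDevOpsGithubToken
With this parameter type, CloudFormation looks up the SSM key for you and passes the stored value into the template.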
If you remember, AWS CodePipeline will automatically detect when our code has changed, so to pull your source code, CodePipeline must authenticate itself with your GitHub OAuth token. You can manage these tokens in your GitHub settings.
AWS SSM is a service where you can securely store the token under a key you provide, then retrieve the value by that key. To put your token into SSM using the AWS CLI:
aws ssm put-parameter --name YOUR_KEY --value "SOME_AWESOME_VALUE" --type String
and you can fetch the value of YOUR_KEY from SSM using:
aws ssm get-parameter --name YOUR_KEY
If all goes well, put your Slack client and bot tokens into AWS SSM as well; these tokens will be passed to the Lambda functions as environment variables. Our Python code uses them to make HTTP calls to the Slack API to fetch information like channels, messages, etc. You also need to set up your bot and its scopes in your Slack OAuth & Permissions tab (please see below); we need channels:history and channels:read, plus groups:read if your Slack channel is private.
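For instance, a notifier function can receive those SSM-backed values as environment variables. This is only a sketch under assumed names: the handler, code location, role, variable names, and the AndroidDevOpsSlack* parameters are illustrative, while AndroidDevOpsNotifier is the function mentioned later in ci.yaml.
AndroidDevOpsNotifier:
  Type: AWS::Lambda::Function
  Properties:
    Runtime: python3.6
    Handler: notifier.handler                       # illustrative module and handler name
    Role: !GetAtt AndroidDevOpsNotifierRole.Arn     # illustrative execution role
    Code:
      S3Bucket: !Ref AndroidDevOpsLambdaBucket      # illustrative location of the packaged Python code
      S3Key: notifier.zip
    Environment:
      Variables:
        # both parameters are SSM-backed, just like the GitHub token above
        SLACK_CLIENT_TOKEN: !Ref AndroidDevOpsSlackClientToken
        SLACK_BOT_TOKEN: !Ref AndroidDevOpsSlackBotToken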
Next up is our CodePipeline:
As with the Parameters, we define a Type, which here is AWS::CodePipeline::Pipeline, and the pipeline uses DependsOn to depend on AndroidDevOpsInstrumentation, which means it is created only after the Device Farm project has been generated.
Every resource should have a corresponding role and policy that restrict its permissions to the services it actually uses; it’s safer, and you don’t want to see anything unexpected on the bill.
The RoleArn property references AndroidDevOpsCodePipelineServiceRole. I also extract AndroidDevOpsCodePipelineServicePolicy into its own resource and use a Ref in its Roles property, which attaches this policy to the role we define. This is where we determine who can access which AWS services. You can narrow the scope further in the Resource properties under each Statement if you wish.
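A rough sketch of that role/policy pairing, with an abbreviated Action list (the real template grants a different, more precise set of permissions):
AndroidDevOpsCodePipelineServiceRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service: codepipeline.amazonaws.com     # only CodePipeline may assume this role
          Action: sts:AssumeRole

AndroidDevOpsCodePipelineServicePolicy:
  Type: AWS::IAM::Policy
  Properties:
    PolicyName: AndroidDevOpsCodePipelineServicePolicy
    Roles:
      - !Ref AndroidDevOpsCodePipelineServiceRole   # attaches the policy to the role above
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action:                                   # illustrative, trimmed list of actions
            - s3:GetObject
            - s3:PutObject
            - codebuild:StartBuild
            - codebuild:BatchGetBuilds
            - devicefarm:*
          Resource: "*"                             # narrow this down if you wish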
In Stages, we have Source, Build, and Test; each stage’s output becomes the next stage’s input. For example, the OutputArtifacts of the Source stage appear again as InputArtifacts of the Build stage. Inside the Configuration property you can see references to the Parameters we’ve defined, like the repo name, and PollForSourceChanges handles the change detection.
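Put together, the pipeline skeleton looks roughly like this sketch; the pipeline, bucket, and artifact names are illustrative, the parameter names reuse the earlier sketch, and the Test stage is shown further below:
AndroidDevOpsPipeline:                              # illustrative resource name
  Type: AWS::CodePipeline::Pipeline
  DependsOn: AndroidDevOpsInstrumentation           # wait for the Device Farm project
  Properties:
    RoleArn: !GetAtt AndroidDevOpsCodePipelineServiceRole.Arn
    ArtifactStore:
      Type: S3
      Location: !Ref AndroidDevOpsArtifactBucket    # illustrative bucket name
    Stages:
      - Name: Source
        Actions:
          - Name: GithubSource
            ActionTypeId:
              Category: Source
              Owner: ThirdParty
              Provider: GitHub
              Version: "1"
            Configuration:
              Owner: !Ref AndroidDevOpsGithubOwner
              Repo: !Ref AndroidDevOpsGithubRepo
              Branch: master
              OAuthToken: !Ref AndroidDevOpsGithubToken
              PollForSourceChanges: true            # detects new commits
            OutputArtifacts:
              - Name: SourceOutput
      - Name: Build
        Actions:
          - Name: UnitTestAndAssemble
            ActionTypeId:
              Category: Build
              Owner: AWS
              Provider: CodeBuild
              Version: "1"
            Configuration:
              ProjectName: !Ref AndroidDevOpsUnitTestAndAssemble
            InputArtifacts:
              - Name: SourceOutput                  # the Source stage's output becomes the Build input
            OutputArtifacts:
              - Name: BuildOutput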
I also extract the Build stage’s CodeBuild project into AndroidDevOpsUnitTestAndAssemble, to keep the script from getting too complicated:
Notice the BuildSpec property at line 10: it’s an inline script, but you could also point it to the location of a buildspec.yaml file. We handle the Android SDK manager in the pre_build phase; then, in the build phase, we use the Gradle wrapper to run the unit tests and assemble both the app and the AndroidTest APKs, and those APKs become the OutputArtifacts.
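The inline buildspec boils down to something like the following sketch (the SDK package versions, Gradle tasks, and output paths are illustrative):
version: 0.2
phases:
  pre_build:
    commands:
      # accept licenses and install the Android SDK packages the build needs
      - yes | sdkmanager --licenses
      - sdkmanager "platforms;android-28" "build-tools;28.0.3"
  build:
    commands:
      # run unit tests, then assemble the app APK and the instrumentation (AndroidTest) APK
      - ./gradlew testDebugUnitTest
      - ./gradlew assembleDebug assembleDebugAndroidTest
artifacts:
  files:
    - app/build/outputs/apk/debug/app-debug.apk
    - app/build/outputs/apk/androidTest/debug/app-debug-androidTest.apk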
The final stage is Test, which is provided by Device Farm. Device Farm looks up the ArtifactStore property of the CodePipeline at line 5. The ArtifactStore is just an S3 bucket that stores the files the CodePipeline uses. If the Build stage goes well, the APKs get uploaded to this bucket, and the pipeline switches to the Test stage.
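Continuing the Stages list from the skeleton above, the Test stage hands those artifacts to Device Farm. Treat the configuration keys and attribute names here as a best-guess sketch; in particular, the values returned by the custom resources (project id, device pool ARN) depend on how the custom-resource Lambdas are written.
      - Name: Test
        Actions:
          - Name: DeviceFarmTest
            ActionTypeId:
              Category: Test
              Owner: AWS
              Provider: DeviceFarm
              Version: "1"
            Configuration:
              # project and device pool come from the custom resources described below
              ProjectId: !GetAtt AndroidDevOpsInstrumentation.ProjectId   # illustrative attribute
              DevicePoolArn: !GetAtt AndroidDevOpsDevicePool.Arn          # illustrative resource/attribute
              AppType: Android
              App: app/build/outputs/apk/debug/app-debug.apk
              TestType: Instrumentation
              Test: app/build/outputs/apk/androidTest/debug/app-debug-androidTest.apk
            InputArtifacts:
              - Name: BuildOutput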
Before July 2018, you would have needed to write a Lambda to upload these APKs from S3 and keep polling the Device Farm test run status. Thankfully, now that AWS has better support, we can simply point the pipeline at the Device Farm project we want to use.
The rest of the CodePipeline properties available via CloudFormation can be looked up here.
The last two sections look a bit odd. Their Types are Custom::DeviceFarm and Custom::DeviceFarmDevicePool, plus some properties we’ve never seen.
That’s because not all service types are available in CloudFormation, but AWS provides a way for developers to implement their own, through a mechanism called a custom resource.
It just uses Lambda functions to create the missing pieces; the ServiceToken itself is the ARN of the Lambda, and it’s not that scary.
CloudFormation uses the ServiceToken to look up the function and invoke it. Inside the Lambda, we basically need to import the SDK of the service we want, check whether the resource already exists, create it if it doesn’t, and keep sending the status back to CloudFormation. There are plenty of examples on GitHub you can find, including the sample I use 🤣.
The ServiceToken property takes the output value exported from custom_resources.yaml, so Fn::ImportValue will only resolve if our first YAML file has deployed successfully in CloudFormation.
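The two custom resources could look roughly like this; the export names, the AndroidDevOpsDevicePool resource name, and the extra properties are assumptions for illustration (only AndroidDevOpsInstrumentation and the two Custom:: types are named in this post):
AndroidDevOpsInstrumentation:
  Type: Custom::DeviceFarm
  Properties:
    # ARN of the custom-resource Lambda, exported by custom_resources.yaml
    ServiceToken: !ImportValue AndroidDevOpsDeviceFarmFunctionArn    # illustrative export name
    ProjectName: AndroidDevOps                                       # illustrative property

AndroidDevOpsDevicePool:                                             # illustrative resource name
  Type: Custom::DeviceFarmDevicePool
  Properties:
    ServiceToken: !ImportValue AndroidDevOpsDevicePoolFunctionArn    # illustrative export name
    ProjectArn: !GetAtt AndroidDevOpsInstrumentation.Arn             # illustrative attribute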
Now let’s go back to custom_resources.yaml. It contains the functions that will be invoked when the custom resources need to be created, namely the Device Farm project and the device pool. We also need a role and a policy to attach to them. At the bottom of the file, you can see the output values we mentioned before.
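Those exports would look something like this sketch (the function and export names are illustrative); ci.yaml then picks them up via Fn::ImportValue as its ServiceToken values:
Outputs:
  DeviceFarmFunctionArn:
    # exported so ci.yaml can import it as the ServiceToken of Custom::DeviceFarm
    Value: !GetAtt AndroidDevOpsDeviceFarmFunction.Arn      # illustrative function name
    Export:
      Name: AndroidDevOpsDeviceFarmFunctionArn              # illustrative export name
  DevicePoolFunctionArn:
    Value: !GetAtt AndroidDevOpsDevicePoolFunction.Arn      # illustrative function name
    Export:
      Name: AndroidDevOpsDevicePoolFunctionArn              # illustrative export name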
The last part, our Slack notifier, is composed of several Lambda functions. In ci.yaml, the magic sauce is the Events properties:
The Events properties will create AWS CloudWatch Events rules that propagate the status of our CodePipeline and CodeBuild, so that the AndroidDevOpsNotifier function can post those build-specific event payloads through the Slack bot we set up earlier.
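Whether this is written with a serverless Events shortcut or as plain resources, it boils down to a CloudWatch Events rule targeting the notifier, roughly like this sketch (the rule and permission names are illustrative):
AndroidDevOpsPipelineEventsRule:
  Type: AWS::Events::Rule
  Properties:
    # fire whenever the pipeline or a build changes state
    EventPattern:
      source:
        - aws.codepipeline
        - aws.codebuild
      detail-type:
        - CodePipeline Pipeline Execution State Change
        - CodeBuild Build State Change
    Targets:
      - Id: Notifier
        Arn: !GetAtt AndroidDevOpsNotifier.Arn

AndroidDevOpsNotifierPermission:
  Type: AWS::Lambda::Permission
  Properties:
    # allow CloudWatch Events to invoke the notifier function
    Action: lambda:InvokeFunction
    FunctionName: !Ref AndroidDevOpsNotifier
    Principal: events.amazonaws.com
    SourceArn: !GetAtt AndroidDevOpsPipelineEventsRule.Arn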
That’s it. Once you get more familiar with how one AWS service connects to another, all the resources on AWS can be defined via CloudFormation.
There’s still a lot to improve, like the scripts inside CodeBuild, the APK file name, the version code, the Python code that posts the messages (I’ve modified some of it to fit the current API version), etc.
And as a developer, you know this is just the beginning!