Our Silver Lining: Building Mobile Apps Using Amazon Web Services
We recently faced a new challenge: how could we provide our enterprise customers with a fast, reliable and secure service for building native mobile apps? Well, we took a ride with Amazon Web Services (AWS)!
How It All Started
Eventually the day will come when you’ll have to deliver some sort of mobile application. On that day (and during the months that follow) you’ll swear a lot while you deal with whole new IDEs, APIs, programming languages, known issues and a myriad of other annoying things related to mobile development.
We get that, and we want to stop it from happening to you. At OutSystems, we aim to make life simpler for our customers. So we set out on a journey to make building mobile applications easy for everyone.
We started with the plan to enable our customers to build native mobile applications for Android and iOS without having to worry about setting up every piece of required software. We created a native builder as a proof of concept and it was clear very quickly that we had to move to the cloud. It was really the only choice for a solution that would receive a simple request, handle all the building processes and deliver a ready-to-install application to our customers.
But it couldn’t be just any cloud. Another challenge was that the team lacked cloud expertise and had limited time to implement the solution. How would our developers feel about joining the “cloud?” How fast would they learn all these new technologies?
Choosing Our “Cloud” Provider
A few years ago “cloud” was just a buzzword. It just meant “someone else’s computer.” To this day, some cloud providers are still just computing power, storage capacity and networking infrastructure. Others, like Amazon Web Services (AWS), have moved way past that point and brought us a gigantic catalog of highly available, easily scalable, geographically distributed products. These products enable companies of any size to deliver robust and innovative solutions to their customers without having to buy a whole new datacenter, hire dozens of software specialists, and rack up massive expenses.
Our enterprise customers are distributed across the globe and, among other things, expect good response times from our services. They are used to our “1-Click Publish” feature for web applications, which generates, compiles and deploys their solutions in a matter of seconds. Waiting longer to have a mobile app wasn’t an option. After all, OutSystems is the leader in low-code rapid application development platforms.
Riding the “Serverless” Wave
Disclaimer: “serverless” is, again, just a buzzword. This one translates to “we are not managing or worrying about any servers; someone else is doing that for us.”
I don’t know about your IT department, your devops team or your system administrators, but from my experience, there are very few people, teams or companies talented enough to do infrastructure management right. It’s hard. And that’s one of the main reasons cloud providers are thriving right now. They do most of the hard work for you. Of course you still have room to screw up. Among other things, you are still in charge of your code, and that’s where “serverless” comes in. Cloud providers are allowing you to only focus on your code so you can get it right.
This abstraction from the underlying technologies is also what OutSystems has been doing for its customers since its very beginning. We empower people to deliver their ideas without worrying about writing any code.
With all this in mind, we made the decision to go “serverless” with AWS. As always with new IT hype, there are lovers and haters. Moderate people never show up for debate. Having said that, we had a pretty good time going serverless for most parts of our project.
And so came the time to move our proof of concept from the computer under the desk to the cloud. This is what the implementation looked like:
One of the most crucial requirements was to keep the development process easy; this move shouldn’t drastically change the way our developers were working or hinder their debugging procedures. But, at the same time we couldn’t compromise on the scalability and high availability of the solution. The right architecture was also key.
So, what kind of architecture did we use? Let’s take a look:
Our service is composed of four major components:
- REST API for public interaction with the service
- Storage for requests, application resources and generated applications
- The OutSystems Building Service for generating Android and iOS mobile apps
- Testing service to run automated tests on generated applications (optional)
As the front door for our service, we picked API Gateway. It would give us scalability, throttling, and good latency across the globe due to the integration with CloudFront, Amazon’s content delivery network. It also lets us implement request handling either in the AWS ecosystem or proxy it elsewhere.
At the time we started our project, API Gateway didn’t have support for importing Swagger files to bootstrap our REST API. Fortunately, AWS provided the aws-apigateway-importer tool to overcome this limitation. It came in handy: we just imported the existing Swagger definition from our proof-of-concept API, which saved the development team some time. Swagger import is now a native feature of the service, so there’s no need for external tools anymore.
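For illustration, a minimal Swagger 2.0 fragment of the kind the importer consumes might look like this; the endpoint, region, account ID and function name below are placeholders, not the actual OutSystems API:

```yaml
# Hypothetical fragment; the path, region, account and Lambda name
# are placeholders for illustration only.
swagger: "2.0"
info:
  title: mobile-build-service
  version: "1.0"
paths:
  /builds:
    post:
      consumes: [application/json]
      produces: [application/json]
      responses:
        "202":
          description: Build request accepted
      # API Gateway extension wiring this method to a Lambda function
      x-amazon-apigateway-integration:
        type: aws
        httpMethod: POST
        uri: arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:123456789012:function:handleBuildRequest/invocations
```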
One of the major annoyances we faced during the development of our REST API was the mapping templates feature. Documentation was a bit vague back then, and we had to go through a “trial and error” approach in the beginning. The documentation of the API Gateway service has improved a lot since then, and we appreciate that!
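As an example of what tripped us up: a request body mapping template is written in the Velocity Template Language. This sketch (field names are illustrative, not our production template) wraps the client’s JSON body with a couple of context values before it reaches the back end:

```
## Illustrative request mapping template, not our production one.
{
  "body": $input.json('$'),
  "stage": "$context.stage",
  "sourceIp": "$context.identity.sourceIp"
}
```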
Processing requests was left to AWS Lambda. The integration with the API Gateway, our “front door,” was seamless, scalability was not an issue once again and our developers could just focus on code… Or could they?
One of the questions I get asked most when introducing Lambda at OutSystems is, “What language do you recommend?” My answer is always the same: the one you are most comfortable with! You’re already learning a new service; don’t add learning a new language to the mix, because that doubles the effort. And so our developers picked Node.js, and apart from some problems with asynchronous code, everything went smoothly.
When picking Lambda, you should keep some things in mind:
- Lambda functions time out after 5 minutes.
- One of the billing factors of the service is your function’s running time, measured in milliseconds. Both limits are clear indicators that your functions should be simple and quick to execute.
- At the time of this writing, only three languages are officially supported; however, you can run anything in there, as long as your package doesn’t exceed the size limits. The same is true for loading libraries.
Our Lambda functions ended up interacting with three other services: DynamoDB, S3 and SQS.
Capacity prediction in DynamoDB is tricky. You should plan carefully and pay attention to your CloudWatch metrics when you go into production. Also, don’t forget to plan your backup strategy. There’s no native “snapshot” procedure like in other solutions directly backed by typical volumes for storage. There’s clearly room for improvement there.
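The throughput knobs live right in the table definition. This sketch (table and attribute names are assumptions) shows where provisioned capacity is declared with the AWS SDK for JavaScript:

```javascript
// Sketch of a table definition; names are illustrative. The
// ProvisionedThroughput numbers are the values you have to predict
// up front and then watch in CloudWatch once you go to production.
var tableParams = {
  TableName: 'BuildRequests',
  KeySchema: [{ AttributeName: 'requestId', KeyType: 'HASH' }],
  AttributeDefinitions: [{ AttributeName: 'requestId', AttributeType: 'S' }],
  ProvisionedThroughput: {
    ReadCapacityUnits: 5,   // watch ConsumedReadCapacityUnits
    WriteCapacityUnits: 5   // watch ConsumedWriteCapacityUnits
  }
};
// Sent with: new AWS.DynamoDB().createTable(tableParams, callback);
```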
To store larger objects, either those uploaded by our customers during the initial request or our generated applications, we went with S3. The notifications feature easily feeds our event-driven architecture. Lifecycle policies are set up once so you don’t have to ever worry about costs and cleanup tasks again.
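As a sketch of such a lifecycle policy, expiring generated packages after 30 days looks like this with the JavaScript SDK; the bucket name, prefix and retention period are assumptions:

```javascript
// Illustrative lifecycle rule: generated application packages are
// deleted automatically 30 days after creation. Bucket and prefix
// are placeholders.
var lifecycleParams = {
  Bucket: 'mobile-build-artifacts',
  LifecycleConfiguration: {
    Rules: [{
      ID: 'expire-generated-apps',
      Prefix: 'generated/',
      Status: 'Enabled',
      Expiration: { Days: 30 }  // cleanup happens without any cron job
    }]
  }
};
// Applied once with:
//   new AWS.S3().putBucketLifecycleConfiguration(lifecycleParams, callback);
```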
Thanks to the PoC, we had our in-house builder that could receive a simple request, handle all the building processes and deliver a ready-to-install application to our customers. We also already had a working prototype of “the worker,” the place where the magic happens, where customers’ requests turn into nice shiny native mobile apps. The worker process was already polling for jobs in a database; adapting it to use SQS was painless.
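The adapted loop boils down to something like this sketch, with the queue client injected so the loop stays testable. The names are illustrative; with the AWS SDK, `receive` and `delete` map to `receiveMessage` (ideally with long polling via `WaitTimeSeconds`) and `deleteMessage`:

```javascript
// One polling pass of a hypothetical worker. The queue object mirrors
// SQS semantics: receive() returns an array of messages (possibly
// empty), and messages must be deleted explicitly after handling.
function pollOnce(queue, handleJob) {
  var messages = queue.receive();
  messages.forEach(function (msg) {
    handleJob(msg.body);   // turn the request into a mobile app
    queue.delete(msg.id);  // only delete after successful handling
  });
  return messages.length;
}
```

Deleting only after the job succeeds means a crashed worker simply lets the message become visible again for another worker to pick up.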
Elastic Beanstalk takes care of infrastructure deployment and autoscaling, and it has various options for code deployment. On top of that it has support for Docker containers. Although Elastic Beanstalk is aimed more at web applications, it also has a “worker” mode for back-end processing, and our use case was a perfect match for this service.
Need to add any dependency? Change the Dockerfile.
Need to update the worker? Just change and upload your code.
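An illustrative Dockerfile for such a worker shows why adding a dependency is a one-line change; the base image, package and entry point are assumptions, not our actual setup:

```dockerfile
# Hypothetical Beanstalk worker image; everything here is illustrative.
FROM node:4

# Adding a system dependency is a one-line change here.
RUN apt-get update && apt-get install -y zip && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY package.json /app/
RUN npm install
COPY . /app

CMD ["node", "worker.js"]
```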
At first, debugging was a little harder. Developers had to check which instance the worker was running on, connect to the instance, get inside the container and finally debug their issue. But the worker code ended up being improved, and now almost everything can be found in CloudWatch Logs.
During this quest, we stumbled on AWS Device Farm. For those who don’t know this service yet, AWS created a “farm” of real, physical mobile devices that you can use for testing, either with your favorite testing framework or in real time with remote access. It proved very easy to integrate into our architecture, and the question “Where’s my device?” didn’t pop up as often in the mobile team. Once again, we could focus on writing our tests instead of managing the testing software, looking for lost devices, or taking forever to reproduce customer issues.
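Scheduling a run through the SDK boils down to a single `scheduleRun` call. In this sketch, every ARN is a placeholder and the test type is an assumption; you would pick the type matching the framework you already use:

```javascript
// Illustrative parameters for a Device Farm run; all ARNs are
// placeholders. The app and test packages are uploaded beforehand
// via createUpload.
var runParams = {
  projectArn: 'arn:aws:devicefarm:us-west-2:123456789012:project:EXAMPLE',
  appArn: 'arn:aws:devicefarm:us-west-2:123456789012:upload:EXAMPLE-APP',
  devicePoolArn: 'arn:aws:devicefarm:us-west-2:123456789012:devicepool:EXAMPLE-POOL',
  name: 'smoke-tests',
  test: {
    type: 'APPIUM_JAVA_JUNIT',  // assumed framework, for illustration
    testPackageArn: 'arn:aws:devicefarm:us-west-2:123456789012:upload:EXAMPLE-TESTS'
  }
};
// Started with: new AWS.DeviceFarm().scheduleRun(runParams, callback);
```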
Ah, security. This means access keys, policies, roles and a myriad of other stuff.
It’s your first day using AWS. You are trying to create a Lambda function and stumble on this “role” stuff. You decide to explore a bit further and find yourself in this “IAM” thing. What?
Identity and Access Management (IAM) is the service responsible for, well, identity and access management in AWS. It’s present in every service, so it’s no wonder that sooner or later you will bump into it. Now, you must take the time to understand it (period).
Of all the concepts present in IAM, the one that I find people struggle with most is the “role.” A role is just a trust relationship that, once assumed, becomes a (temporary) identity in your account, like a user. Once you get a grasp of roles, you’ll have to write your first policy, so it’s time to get acquainted with the very handy policy generator.
Now, you are probably wondering, “Why, oh why do I have to go through all this hassle just to create my first Lambda function?” An annoying voice in your head will whisper, “Security first.” You will soon realize that this annoyance is your unified way to manage permissions on all resources from all AWS services. Suddenly, you’re thankful for not having to configure each service and piece of software in a different convoluted way. Writing IAM policies isn’t difficult, especially if you’re the one doing the development. After all, you already know precisely which methods you should allow and which resources you’ll be accessing with those methods. Don’t be lazy here, or it’ll come back to bite you, just like any other software security issue either at the infrastructure or code level.
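For example, a worker that only reads and deletes messages from one queue needs nothing more than this; the queue ARN, account ID and statement ID are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowWorkerQueueAccess",
      "Effect": "Allow",
      "Action": [
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage"
      ],
      "Resource": "arn:aws:sqs:us-east-1:123456789012:build-jobs"
    }
  ]
}
```

Scoping the policy to the exact actions and resource is precisely the “you already know which methods you should allow” advice above.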
Taking the time to write a CloudFormation template from the first day paid off quickly. True, it takes precious time at the beginning of your project, when you are eager to have things up and running. But, as your project grows it gives you great peace of mind and the assurance that staging your service will always be predictable. You can forget about custom configurations. If they are not specified in your templates, they don’t exist, and cheaters are quickly caught as soon as you start staging your changes to the quality assurance environment.
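Even a minimal template pins down the resources explicitly. This fragment is illustrative (resource and queue names are assumptions), declaring the job queue and the artifacts bucket:

```yaml
# Illustrative CloudFormation fragment; names are placeholders.
AWSTemplateFormatVersion: "2010-09-09"
Description: Mobile build service infrastructure (sketch)
Resources:
  BuildJobsQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: build-jobs
  ArtifactsBucket:
    Type: AWS::S3::Bucket
```

Anything not declared here simply doesn’t exist in a freshly staged environment, which is what keeps staging predictable.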
The first phase of this project took about a week and a half; we are still surprised at how smoothly things went from zero to a fully working service. The development team had no experience with AWS or any other cloud provider. In a nutshell, here’s what helped us accomplish this feat:
- Good documentation played a key role.
- Having the SDK available and up-to-date in a multitude of languages was also helpful.
- The adopted services allowed our developers to stay focused on our service, not on installing software and managing configurations.
- Staging was made easy by CloudFormation.
Editor’s Note: Manuel Pata is lead of the Cloud Automation team at OutSystems, and he spends his waking hours optimizing cloud ops to its limits.