Simple Drone Service: From Idea to re:Invent
Simple Drone Service (SDS) is a cloud-connected drone that performs a simple maneuver: fly up, hover, take a picture, and land. Instantly, the SDS dashboard lights up with the data the drone has collected. For example, if the drone flies over a crowd of people, the dashboard provides the following information:
- Number (count) of people
- Picture of the crowd
- Telemetry data about the flight
In this post, we explain how SDS evolved from an early idea to its debut in the Dev Lounge at AWS re:Invent 2015. We share what we learned so that you can see how simple it is to connect a drone to the cloud, and so that you understand the architectural patterns that apply when designing such a system. More generally, we discuss the benefits of designing solutions from tiny “Nanoservices,” which helped us dramatically shrink the build-measure-learn loop.
You can have a quick look at the SDS in action here. There’s more footage of SDS in action at the end of the post.
Why Did We Connect a Drone to the Cloud?
The Solutions Architects team at AWS tends to have a “maker” mindset; our roster of creations, such as Simple Beer Service and Simple Robot Service, is proof of this. When the opportunity to build something for the Dev Lounge at re:Invent 2015 came along, our team brainstormed many interesting ideas. Ultimately, we decided to connect a drone to the cloud. We chose this project primarily for the interactive experience it would provide re:Invent participants. Additionally, we wanted a project that participants could replicate with ease, so we chose an off-the-shelf drone (Parrot AR.Drone 2.0) and used open-source software.
What Can a Cloud-Powered Drone Do?
Connecting a drone to the cloud would be novel enough on its own, but what real-world problem would it solve? Drones are becoming increasingly popular across a wide range of use cases. A quick survey to get the lay of the land revealed three key areas where the cloud could help the drone:
1. Image processing: Drones collect a lot of image/video data, which can be processed to gather useful information.
2. Flight and drone monitoring: There is a need for a drone management plane (data and control), especially as there is an evolution from managing a few drones to many fleets of drones.
3. Power management: The battery is still a pain point for drones. Tracking battery consumption, and being able to take action based on battery status, would be useful.
The Iterative Path to Minimum Viable Product (MVP)
The design principles that guided our implementation were scalability and decomposition into small, independent, decoupled modules (Nanoservices, if you will). We wanted an architecture that works for one drone or millions of drones, and that allows new modules to be added as we evolve the functionality of SDS. Finally, we decided to build a “serverless backend” using AWS managed services and AWS Lambda because we wanted to solve real-world drone problems and avoid the undifferentiated heavy lifting of maintaining backend infrastructure.
When you’re building a product, you need to rapidly create new prototypes. Thanks to available sample code, it was quite simple to prototype different parts of the solution. For example, this post on the AWS Compute blog explains how to use the OpenCV (Open Source Computer Vision) Library with AWS Lambda. From that, we added code to detect faces in an image (more on this later). The blueprints available in the AWS Lambda console and the Amazon Kinesis code samples available here were good starting points as well.
The choice of the storage layer was straightforward for the images: we chose Amazon S3 because of its high availability and fault tolerance. We also needed to store JSON blobs (telemetry) and key-value pairs (people count) in a NoSQL database. We chose Amazon DynamoDB for the database because it’s a managed NoSQL database with consistent performance at scale, which is important for SDS, since it needs to be able to handle data from thousands of drones. Additionally, in the future we could potentially leverage DynamoDB Streams plus AWS Lambda to do aggregate analysis (for example, to compute the average flight time for a fleet of drones). Finally, we needed a solution to ingest large amounts of streaming data without having to maintain any stream data-processing system. Amazon Kinesis fit that requirement.
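To illustrate that ingestion path, here is a minimal Python sketch (using boto3) of how a drone-side client might put a telemetry sample into a Kinesis stream. The stream name, field names, and units are assumptions for this sketch, not the actual SDS schema:

```python
import json
import time

def make_sample(drone_id, battery, altitude, speed):
    """Build a telemetry sample; the field names here are illustrative."""
    return {
        "drone_id": drone_id,
        "timestamp": time.time(),
        "battery": battery,    # percent remaining (assumed unit)
        "altitude": altitude,  # meters (assumed unit)
        "speed": speed,        # m/s (assumed unit)
    }

def put_telemetry(sample, stream_name="sds-telemetry"):
    """Send one telemetry sample (a dict) to a Kinesis stream.

    The stream name is a placeholder. boto3 is imported lazily so
    make_sample() can be used without AWS dependencies installed.
    """
    import boto3
    client = boto3.client("kinesis")
    return client.put_record(
        StreamName=stream_name,
        Data=json.dumps(sample).encode("utf-8"),
        # Using the drone ID as the partition key keeps each drone's
        # samples ordered within a shard.
        PartitionKey=sample["drone_id"],
    )
```

Calling `put_telemetry(make_sample("drone-1", 87, 2.5, 0.4))` five times per second would match the sampling rate described below.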
Here is the overall architecture of SDS:
A good way to describe it is in terms of workflows. The first workflow is image processing. The images taken by the drone are uploaded to S3. A Lambda function is triggered for every new object (in this case, an image) in the specified S3 bucket. We use OpenCV, an open source computer vision library, to detect faces in the image. By counting the number of faces, we approximate the number of people the drone is currently seeing (people with their backs turned are not counted). This count is put in a DynamoDB table, and a new image with an ellipse around each detected face is stored in a different S3 bucket.
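The post does not include the Lambda source, but a minimal Python sketch of this image-processing function might look like the following. The table name (`sds-people-count`), output bucket (`sds-annotated-images`), and the specific Haar cascade are placeholders for illustration; OpenCV itself would need to be packaged with the Lambda deployment:

```python
import os
import urllib.parse

def object_from_event(event):
    """Extract (bucket, key) from the first record of an S3 put event."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])
    return bucket, key

def handler(event, context):
    # boto3 and cv2 are imported here so the pure helper above can be
    # exercised without the Lambda runtime's dependencies installed.
    import boto3
    import cv2

    s3 = boto3.client("s3")
    bucket, key = object_from_event(event)

    # Download the image the drone uploaded.
    local_path = "/tmp/" + os.path.basename(key)
    s3.download_file(bucket, key, local_path)

    # Detect frontal faces with a Haar cascade; people facing away
    # from the camera are not detected, hence not counted.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    image = cv2.imread(local_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Draw an ellipse around each detected face.
    for (x, y, w, h) in faces:
        cv2.ellipse(image, (x + w // 2, y + h // 2), (w // 2, h // 2),
                    0, 0, 360, (0, 255, 0), 2)
    cv2.imwrite(local_path, image)

    # Store the count in DynamoDB and the annotated image in a second bucket.
    dynamodb = boto3.resource("dynamodb")
    dynamodb.Table("sds-people-count").put_item(
        Item={"image_key": key, "people_count": len(faces)})
    s3.upload_file(local_path, "sds-annotated-images", key)
    return {"people_count": len(faces)}
```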
The drone sends telemetry and navigational data (five samples per second) in JSON format to an Amazon Kinesis stream. This ranges from basic data points, such as real-time battery consumption, altitude, and speed, to more detailed metrics, such as the x, y, z coordinates of the camera. There are three different consumers (Lambda functions) of this Amazon Kinesis stream. The first simply stores the blob in S3; we wanted to create a “blackbox” in the cloud in order to analyze data in the event of a crash. The second consumer writes this data to a DynamoDB table, which is the table used by the front-end dashboard. A third Lambda function filters out the real-time battery consumption metric and publishes it as a custom Amazon CloudWatch metric. An alarm is set on this metric so that a notification is sent when the available battery falls below a defined threshold. More generally, this gives us the ability to define an action when the battery is low; for example, a Lambda function could find the closest charging station and direct the drone to it.
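As a sketch of that third consumer, the following Lambda function decodes the base64-encoded Kinesis records, pulls out the battery readings, and publishes them as a custom CloudWatch metric. The field names (`drone_id`, `battery`) and the `SDS` namespace are assumptions for illustration; the low-battery alarm itself would be configured separately in CloudWatch:

```python
import base64
import json

def battery_readings(event):
    """Decode the base64 Kinesis records and pull out battery readings.

    Returns a list of (drone_id, battery) tuples; records without a
    battery field are skipped.
    """
    readings = []
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        if "battery" in payload:
            readings.append((payload["drone_id"], payload["battery"]))
    return readings

def handler(event, context):
    # boto3 is imported lazily so battery_readings() is testable without AWS.
    import boto3
    cloudwatch = boto3.client("cloudwatch")
    for drone_id, battery in battery_readings(event):
        cloudwatch.put_metric_data(
            Namespace="SDS",
            MetricData=[{
                "MetricName": "BatteryRemaining",
                "Dimensions": [{"Name": "DroneId", "Value": drone_id}],
                "Value": battery,
                "Unit": "Percent",
            }],
        )
```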
UX: Just as Important as the Backend
Once the MVP was ready, we started user testing, essentially giving demos of SDS to our colleagues and gathering their feedback. One of the most important lessons from this exercise was that users were looking for an engaging dashboard experience. Our effort in building a feature-rich, scalable backend seemed to have minimal value without a dashboard to showcase it. Another important lesson was the value of consistency in the user experience. As mentioned earlier, the Solutions Architects team has delivered other popular products/services, such as Simple Beer Service. Users automatically expected a similar user experience, given that SDS shares the same branding (“Simple… Service”).
To sum up, the SDS dashboard displays the image that the drone takes; graphs of the people count and battery consumption; and an “operator console” where the real-time data streams in. One of the themes we observed at re:Invent 2015 was the use of QR codes, the bar codes that are readable by mobile phones, so we quickly added a QR code for the dashboard image on day two of re:Invent. Further, everybody wanted to know how many people the drone had counted, so we added that metric to the drone dashboard as well.
Getting Your Product Past Real-World Challenges
The real world always poses a unique set of challenges for a product. In our example, AWS IT security imposed several requirements before they would give us approval to fly the drone at re:Invent:
- The drone must take off, fly, and land in the same place, without intervention
- The drone must have appropriate security measures in place to prevent random individuals from connecting to and controlling the drone
- The drone was restricted in how high it could fly at the conference
Typically, the drone is very stable during flight. Creating an automated flight for the drone to take off and land ended up being the easy part of this project. An automated takeoff and landing in the exact same place was a much bigger challenge. Most of the time the drone would take off fairly straight, but regularly it would take off at an angle and end up across the room, like a zoo animal let out of its cage!
To meet the preceding requirements, we had to create some type of guide that would restrict and regulate the drone’s flight path. We looked at things such as setting the trim prior to every flight, which helped somewhat. Ultimately, we needed a cage to restrain the drone.
We went through several iterations, including a tether used for dog grooming, before we settled on a safety tower built out of PVC that attached to the drone from two sides yet still allowed it to take off and land freely. We were also able to modify iptables rules on the drone’s onboard chip, which runs BusyBox Linux, to allow connections only from devices with specific MAC addresses.
To summarize, here are a few key takeaways from the SDS project:
1. Getting started with any service on AWS is easier than ever, given the plethora of resources. For us, the primary resources were the blueprints available on the AWS Lambda console.
2. It’s important to choose the right architectural paradigm for the design considerations and domain. For us, it was serverless architecture.
3. Agility is essential to the success of any product. Composing SDS from many tiny Nanoservices kept our build-measure-learn loop short.
4. Technology is always evolving, and so is your product. For instance, among the newer AWS services, we are looking at using AWS IoT and Amazon Kinesis Firehose in SDS.v2.
We plan to open source the SDS code in early 2016, so that you can customize and extend it for your needs.
Contributors: Shankar Ramachandran, Jeff Sweet, Sunil Mallya