In this post, I'll give you a sneak peek of the cloud components involved in building our product, FaceQuest, and some insights into the infrastructure.
Functionality of the part of the product hosted in AWS
FaceQuest is a face recognition service. Its core functionality is to:
- detect the faces in the picture given to the API
- to accomplish this detection, we need reference faces, and hence the reference photos have to be stored somewhere
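The flow above can be sketched in a few lines; this is a minimal illustration only — the embeddings, names, and threshold below are made up, and the real service uses an actual face recognition model to produce the embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def find_matches(probe, references, threshold=0.8):
    """Return the names of reference faces whose embedding is
    similar enough to the probe face's embedding."""
    return [name for name, emb in references.items()
            if cosine_similarity(probe, emb) >= threshold]

# Toy vectors standing in for real face embeddings.
references = {"alice": [1.0, 0.0, 0.0], "bob": [0.0, 1.0, 0.0]}
probe = [0.9, 0.1, 0.0]
print(find_matches(probe, references))  # → ['alice']
```

The stored reference photos are what the probe face gets compared against, which is why the image store matters so much in the sections below.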
Compute power to run the containers
- While we could have used EC2, it requires a great deal of tedious manual configuration and oversight. Managing all of that takes resources and effort away from what matters most: deploying applications.
- To spend minimal effort managing the infra and to focus on the application itself, we chose Fargate — a serverless compute engine managed by AWS.
- Less complex — you don't have to worry about where your containers are deployed, or how to manage and scale them. Instead, you can focus on defining the right parameters for your containers (e.g. compute, storage, and networking) for a successful deployment.
- Cost effective (I believe) — Fargate Spot charges less than half the price of on-demand Fargate by leveraging spare compute capacity for fault-tolerant applications. By design, FaceQuest supports fault-tolerant workloads.
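As a rough sketch, running on Fargate Spot comes down to a task definition plus a capacity provider strategy. The shapes below mirror what the ECS APIs expect, but every name, image URI, and size is a hypothetical placeholder, not FaceQuest's actual configuration:

```python
# Hypothetical ECS task definition for a Fargate worker.
# All names and values here are made-up placeholders.
task_definition = {
    "family": "facequest-worker",           # hypothetical family name
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",                # required for Fargate tasks
    "cpu": "512",                           # 0.5 vCPU
    "memory": "1024",                       # 1 GB
    "containerDefinitions": [{
        "name": "worker",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/worker:latest",
        "essential": True,
    }],
}

# Run the service on spare capacity at a discount -- suitable for
# fault-tolerant workloads that can be interrupted and retried.
capacity_provider_strategy = [
    {"capacityProvider": "FARGATE_SPOT", "weight": 1},
]
```

Spot tasks can be reclaimed by AWS with a short warning, which is why the fault tolerance mentioned above is a precondition for the discount, not just a nice-to-have.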
Initially we decided to use Cloudinary as our image store, but upon analysis we found it was not the right choice for our use case, which involves a lot of downloads from the image storage service. The resulting bandwidth charges would spike our costs, which would in turn force us to charge our customers obscenely. So we ended up using S3 as our image store instead.
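A back-of-the-envelope comparison shows why download-heavy traffic tipped the decision. The rates and volume below are illustrative placeholders, not actual Cloudinary or S3 pricing:

```python
# Illustrative per-GB download (bandwidth) rates -- placeholders only.
CLOUDINARY_PER_GB = 0.10      # hypothetical bandwidth rate
S3_SAME_REGION_PER_GB = 0.0   # S3 -> compute in the same AWS region is free
                              # (per-request charges still apply)

downloads_gb_per_month = 500  # hypothetical download volume

cloudinary_cost = downloads_gb_per_month * CLOUDINARY_PER_GB
s3_cost = downloads_gb_per_month * S3_SAME_REGION_PER_GB

print(f"external bandwidth:      ${cloudinary_cost:.2f}/month")
print(f"S3 same-region downloads: ${s3_cost:.2f}/month")
```

The bandwidth line scales linearly with downloads, so the gap only widens as usage grows — which is exactly the spike described above.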
To learn more about this use case and how we approached it, visit here
- Cost effective — the costs stayed under control because the worker processes that download and process the images run in AWS too, so the downloads never leave the AWS network.