Every day there is plenty to talk about in the cloud, and today I have chosen to have a little chat about serverless architecture and the corresponding horsemen that could change the way engineers approach the cloud. When I first came into the cloud computing space in 2016, I heard the word "serverless" very often from senior engineers, and I always imagined serverless meant there were actually no servers. As time went on, I got to understand the real story.
First, let us define the term "serverless". Well, for starters, there are actually physical servers in the serverless computing world, but engineers and developers do not need to be aware of their existence. Serverless architecture is the ability for engineers to deploy and run code in the cloud without worrying about the underlying infrastructure and processes. It is largely regarded as a pay-for-what-you-use service in the cloud.
A large part of modern cloud computing operations is built around serverless architectures.
In AWS, a serverless service has to obey four major rules:
- Fault tolerance and high availability are built into the system.
- Pay-as-you-go billing for running code (nothing unused is billed).
- No servers for developers/users to manage.
- The system scales automatically with the usage curve.
Moving forward, serverless architecture allows engineers to put more effort into building the product while the cloud operations are handled off the books. The architecture is powered by a set of services often regarded as the horsemen of AWS serverless architecture, though I prefer to call them pillars (you could use horsemen if you prefer that). We will discuss these horsemen/pillars in detail as we drive along, so please refill your cup of coffee.
When working with events in the backend of your application, you would normally need to manage the infrastructure, check the application status, run lots of servers and perform other tedious activities. With AWS Lambda, which is serverless computing, you pay only for the compute time you consume and can otherwise focus on your code as you desire.
AWS Lambda is event-driven compute, functions as a service (mapping events to specific application code) and, most importantly, serverless.
AWS Lambda runs your code in response to events such as object uploads/downloads, bucket management operations, database updates and even in-app activities. It publishes real-time logs to CloudWatch, so you can track how your uploaded code behaves. You can get started with it very easily, and you can use languages such as Node.js, Python and Go, among many others; recently, custom native runtimes have been supported as well. Three parts make a function's structure complete: the handler() function, the event object and the context object. There are ready-made blueprints set up for common applications, so if you are a beginner you can explore those, and in a case where you already have your Lambda code, you can upload it as a zip file or write it directly in the Lambda console editor.
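As a sketch of those three parts, here is a minimal Python handler. The event below mimics the documented S3 object-upload notification shape, but the function body, field choices and sample values are my own illustrative assumptions, not a fixed recipe:

```python
import json

def handler(event, context):
    """Entry point Lambda invokes with an event object and a context object."""
    # An S3 upload notification carries one record per uploaded object.
    keys = [record["s3"]["object"]["key"] for record in event.get("Records", [])]
    # In Lambda, context exposes metadata such as the request id;
    # it may simply be None when testing locally.
    request_id = getattr(context, "aws_request_id", "local-test")
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": keys, "request_id": request_id}),
    }

# Local invocation with a hand-written sample event and no real context:
sample_event = {"Records": [{"s3": {"object": {"key": "photos/cat.png"}}}]}
result = handler(sample_event, None)
print(result["statusCode"])  # 200
```

Being able to call the handler locally like this, before zipping and uploading it, is one of the things that makes Lambda development pleasant.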
As shown in the image, you can integrate S3 buckets and DynamoDB with Lambda functions using the AWS SDK, and boom, you are automating processes while taking care of your dog or even playing chess. Finally, speaking of payments, you pay according to the compute your running code consumes, but in my experience, sometimes more memory can reduce the price. Shocking, right? Let me explain: Lambda exposes only a memory setting, and the share of CPU core and network capacity assigned to the function scales with that memory. So if your code is CPU- or network-bound, there might be a case for using more memory to get a lower bill.
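To see why more memory can sometimes cost less, here is a rough back-of-envelope sketch. Lambda bills roughly memory-size times duration; the rate constant and the timings below are illustrative placeholders I made up, not current AWS prices:

```python
# Lambda bills roughly (memory in GB) x (duration in seconds) x rate.
# RATE_PER_GB_SECOND is an illustrative placeholder, not AWS's actual price.
RATE_PER_GB_SECOND = 0.0000167

def invocation_cost(memory_mb, duration_s, rate=RATE_PER_GB_SECOND):
    return (memory_mb / 1024) * duration_s * rate

# A CPU-bound function: doubling memory doubles the CPU share, so
# suppose the run time drops from 4 s to 1.5 s.
small = invocation_cost(memory_mb=512, duration_s=4.0)   # 2.0 GB-seconds
large = invocation_cost(memory_mb=1024, duration_s=1.5)  # 1.5 GB-seconds
print(large < small)  # the bigger memory setting is cheaper here
```

The saving only appears when the extra CPU/network actually shortens the run; for idle, I/O-wait-dominated code, more memory just costs more.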
Anytime I think about IAM, all that comes to mind is security and privileges, and ultimately keyed access to services. AWS IAM stands for AWS Identity and Access Management. With this feature, you can assign roles to users so they can perform just what they need to perform. By definition, writing IAM policies allows you to grant access to your users/engineers; AWS then evaluates these policies and enforces access control based on the predefined rules that have been set up.
The policy structure/syntax has four major parts that I have "acronymed" PARC: Principal, Action, Resource and Condition. In most cases they must all be identified in the policy, but in some cases the Principal may not be required; however, if you are referencing a certain bucket, from experience, I think you should ensure the principal is assigned.
AWS defines the Principal as the entity that is allowed or denied access, the Action as the operation that is allowed or denied, the Resource as the AWS resource the actions act on, and finally the Condition as the circumstances under which the access is valid (this can come in handy when building VPCs and infrastructure). Lastly, IAM is very relevant in serverless architecture; you should look up the docs here.
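To make PARC concrete, here is what a minimal resource-based bucket policy might look like; the account ID, user name, bucket name and IP range are made-up placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:user/example-user" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": { "IpAddress": { "aws:SourceIp": "203.0.113.0/24" } }
    }
  ]
}
```

All four PARC parts appear here: who (Principal), what (Action), on what (Resource) and under which circumstances (Condition).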
Before we round up on IAM, we should understand the policy types and their possible use cases. Some of the most common policies are shown:
- Service Control Policies: These are designed to restrict service access for the principals of an account, so they are relatively useful when used alongside bucket policies.
- Permissions Policies and Permission Boundaries: These are simple and just describe how boundaries are set for certain users. For instance, a user can be limited in the privileges they can set up within the organization.
- Scoped-down Policies: These further reduce permissions that are broadly shared in the system.
- Resource-based Policies: Just like the name implies, these are policies attached to resources, and they control cross-account access to certain resources in the system/organization.
- Endpoint Policies: Using a VPC endpoint, you can control users' access to a service within the organization.
Finally, I think I should end it here for IAM policies, but you should understand that the policy types work together through a kind of trust system between service control policies and IAM policies/resource-based policies: everything has to be allowed at each level for communication to go through. You can read more about policies here and their effects on serverless architecture.
Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit-millisecond latency at any scale. Remember our use case for serverless, where the events and activities cannot be handled easily, so we need to hire these horsemen to make the job smooth. Well, Amazon DynamoDB is one such horseman, and it is used in many scalable applications. It is fully managed, handles encryption, comes with an SLA and can handle more than 1 trillion requests daily.
When using DynamoDB, there is no need to create or configure servers; you can just open the console and start creating tables, unlike the other databases in the complex world. Some of its features include transactions, dynamic capacity management, high availability, durability, on-demand capacity mode, encryption at rest, table parts (i.e. partition key, sort key, attributes and others), a user-friendly console and DAX support, among much other cool stuff. Moving further, you can try out creating tables via this guide.
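As a sketch of those table parts, here is the kind of parameter set one might hand to the SDK when creating a table. The table name, attribute names and key choices are hypothetical examples of mine; no AWS call is made here:

```python
# Hypothetical table: 'orders', keyed by customer (partition) and time (sort).
# This dict mirrors what you would pass to boto3's
# dynamodb.create_table(**table_definition) if you had credentials configured.
table_definition = {
    "TableName": "orders",
    "KeySchema": [
        {"AttributeName": "customer_id", "KeyType": "HASH"},   # partition key
        {"AttributeName": "order_time", "KeyType": "RANGE"},   # sort key
    ],
    "AttributeDefinitions": [
        {"AttributeName": "customer_id", "AttributeType": "S"},
        {"AttributeName": "order_time", "AttributeType": "S"},
    ],
    "BillingMode": "PAY_PER_REQUEST",  # on-demand capacity mode
}
print(table_definition["TableName"])  # orders
```

Note that only the key attributes are declared up front; any other attributes on an item are schemaless, which is part of what makes DynamoDB so flexible.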
Amazon Simple Storage Service is an object storage service that offers industry-leading scalability, data availability, security and performance. Amazon S3 is easy to use, with a simple web service interface to store and retrieve any amount of data from anywhere on the web. The kind of data used in businesses and products, especially products with a large user base, comes with some serious complexities, and that is where the storage service has to be introduced. In some contexts, it can be used to collect logs from CloudWatch for study and extraction.
Here there is unmatched durability, security management and a massive ecosystem of diverse third-party partners from all over the world. It also supports business and analytics processing entirely within AWS. Data can be stored across multiple Availability Zones for data standards and protection, but you can do otherwise for a lower cost, which is definitely a risk. For security and compliance, encryption is performed following security standards and best practices. Also, we can use S3 with Lambda, and it is an interesting combination for the storage of data. There is even a chance to use Redshift Spectrum to analyse the S3 bucket data, plus other cool stuff in the cloud. We could go on and on, but let me leave you here to digest, and we can talk some other time about other AWS S3 details.
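Since tools like Redshift Spectrum scan bucket data by prefix, it pays to lay out log objects with date-partitioned keys. Here is a small sketch of that common convention; the `logs/service=.../year=...` layout and the helper name are my own assumptions, not an AWS requirement:

```python
from datetime import datetime, timezone

def log_object_key(service, event_id, when=None):
    """Build a date-partitioned S3 key so scans can prune by prefix."""
    when = when or datetime.now(timezone.utc)
    return (
        f"logs/service={service}/year={when:%Y}/month={when:%m}/day={when:%d}/"
        f"{event_id}.json"
    )

key = log_object_key("api", "evt-001", datetime(2024, 6, 1, tzinfo=timezone.utc))
print(key)  # logs/service=api/year=2024/month=06/day=01/evt-001.json
```

A query engine pointed at `logs/service=api/year=2024/month=06/` then reads only one month of one service's data instead of the whole bucket.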
In final words, every AWS engineer/user who understands how to build and automate the services discussed above has the ability to build a new world and ride the serverless horses end to end, close that world, create a new one, ride again and repeat. In clear terms: know the nitty-gritty of these horsemen and you are in control of the world.
Thanks for reading ❤️
Please leave a comment if you have any thoughts about the topic — I am open to learning and knowledge explorations.