AWS Cloud Essentials: A Comprehensive Beginner’s Guide to Cloud Technology and Services
So, what is Amazon Web Services (AWS)? Simply put, AWS is a cloud computing platform that provides access to various computer resources when needed. It takes care of all the hardware and software for you. A key feature of AWS is that it operates on a pay-as-you-go model, meaning you pay only for what you use.
AWS offers various services, such as data storage, data analytics, machine learning, compute, networking, and databases.
****************************************************************************
CHAPTER 1
AWS Infrastructure and Deployment
Infrastructure refers to the equipment and systems needed to run your application (servers, storage, and networks), while deployment means setting up and running your software/application so that others can use it.
AWS offers three deployment options: cloud, hybrid, and on-premises.
Cloud Deployment
Cloud deployment involves running applications and services solely on AWS’s cloud infrastructure. This method leverages the internet to access and manage AWS services, allowing you to handle resources such as storage and computing entirely online. It’s particularly beneficial for those who prefer to focus on their applications without the burden of maintaining physical hardware.
Hybrid Deployment
This method combines your on-site equipment with AWS cloud services to create a flexible and scalable environment. It is ideal when you want to use your existing hardware and AWS resources together.
On-Premises Deployment
On-premises deployment refers to using private cloud infrastructure, where all resources are hosted and managed within your own facilities. This approach is optimal for scenarios requiring full control over data and infrastructure, often due to stringent security and regulatory requirements. It allows organizations to maintain direct oversight and management of their technological assets.
AWS Global Infrastructure
Now that we understand the basics of infrastructure and deployment, we can explore AWS’s global infrastructure. First, let’s familiarize ourselves with some key terms.
Data Centers are physical facilities that house computer systems and associated components.
AWS global infrastructure is the network of data centers that AWS uses to deliver cloud services worldwide.
Network Latency refers to the delay in network communication.
Cached Data refers to data that is temporarily stored close to the user to speed up retrieval times.
Redundancy refers to duplicating hardware or software components to protect against single points of failure.
AWS global infrastructure is designed to provide worldwide accessibility with low-latency connections through its network of data centers. You can think of the AWS Global Infrastructure as a bucket; within that bucket, we have a hierarchy of components. These components include: Availability Zones, Regions, Edge Locations, Regional Edge Caches, Local Zones, Wavelength Zones, and Outposts.
1.) AWS Global Infrastructure: Regions
Regions are physical geographical locations where AWS operates isolated clusters of data centers. Each region comprises a collection of availability zones that are located in close proximity to one another. Each region also operates independently from the others and includes at least two Availability Zones.
2.) AWS Global Infrastructure: Availability Zones (AZs)
Within regions, we have AZs that are made up of one or more data centers. AZs are designed to provide high availability and fault tolerance, which helps keep services running even if one data center experiences issues. Availability Zones are interconnected using high-speed private links, and many AWS services use these low-latency links between AZs to replicate data for high availability and resilience.
3.) AWS Global Infrastructure: Local Zones
Local zones are extensions of AWS regions designed to bring compute, storage, and database services closer to users. Their purpose is to ensure low-latency access in specific geographic areas, thereby enhancing end-user performance.
Diagram illustrating the us-east-1 AWS region with an Availability Zone (us-east-1a) and its connection to AWS Local Zones in New York for reduced latency and improved local performance.
4.) AWS Global Infrastructure: Edge Locations
Edge locations are global data centers that store cached copies of data. Their purpose is to enhance content delivery by reducing latency and enabling faster information retrieval closer to users.
5.) AWS Global Infrastructure: Wavelength Zones
These are specialized AWS locations situated within the data centers of telecommunication companies, positioned near users at the edge of the 5G network. You might wonder, “Why are they located in telecom companies’ data centers?” The reason is that telecom companies have widespread data center networks. By integrating AWS services into these facilities, AWS can utilize existing infrastructure to provide faster and more reliable services.
6.) AWS Global Infrastructure: Outposts
AWS Outposts brings the capabilities of the AWS cloud to your on-premises data center. They utilize the same hardware found in AWS data centers, enabling you to employ native AWS services, tools, and APIs as if you were operating directly within AWS infrastructure. As a fully managed solution, AWS Outposts relieves you of the responsibilities of patch management and software updates. AWS ensures that your Outposts are maintained, patched, and updated as necessary.
Global Accelerator
AWS Global Accelerator is a service designed to optimize the path your users take to your applications, improving the availability and speed of your applications by leveraging the AWS global network.
Sample Global Architectural Patterns in AWS
In this section, we’ll explore visual representations of various architectures from the infrastructures we’ve discussed.
The region (us-east-1, Northern Virginia) is a large geographical area containing multiple Availability Zones (AZs). AZs are subsections of the AWS region designed to be isolated from failures in other AZs. Each AZ can include one or more data centers. Three AZs are depicted in this region: us-east-1a, us-east-1b, and us-east-1c.
The illustration above shows a Virtual Private Cloud (VPC) within the AWS region us-east-1 (Northern Virginia). The VPC spans multiple Availability Zones (AZs) and includes a Local Zone. Local Zones bring AWS services closer to users to support latency-sensitive applications.
An Amazon Virtual Private Cloud (VPC) is your own isolated section of AWS where you can launch AWS resources.
A Content Delivery Network (CDN) is a system of distributed servers that delivers web content and applications to users based on their geographic location. The main goal is to provide high availability and performance by distributing the service closer to end-users’ locations.
Amazon Elastic Compute Cloud (EC2) is a web service that provides resizable compute capacity in the cloud. Think of it as powerful computers in the cloud.
Amazon Simple Storage Service (S3) is a service for storing files and data in the cloud, similar to an online storage drive.
Regional Edge Data Cache is a middle layer where popular content is stored closer to edge locations to make it quicker to access. It acts as a temporary storage for frequently requested data.
Edge Locations are small data centers located near users. These locations store cached content to deliver it to users quickly, reducing the time it takes to load websites or access applications.
Users are people who are using the internet to access websites or applications. Their requests are routed to the nearest edge location to get the content faster.
This diagram illustrates how data is replicated across multiple AWS regions to ensure high availability, low latency, and redundancy for users located worldwide. Users access the data from the area closest to them, which reduces latency and improves access speed.
This diagram illustrates how AWS Global Accelerator directs end-user requests to the optimal AWS region, using different types of load balancers to ensure efficient traffic distribution and high availability.
The Application Load Balancer handles and distributes HTTP/HTTPS traffic and offers advanced routing features, while the Network Load Balancer handles and distributes TCP/UDP traffic and is optimized for high performance and low latency.
In this illustration, the AWS Global Accelerator uses the AWS global network to route traffic to the nearest and healthiest endpoints.
****************************************************************************
CHAPTER 2
AWS Connectivity and Infrastructure
Now that we’ve covered deployment and global infrastructure, let’s explore how to connect to AWS services and provision resources within AWS.
There are primarily three ways to connect to AWS services: Public internet, AWS Direct Connect, and AWS VPN.
Public Internet
Public Internet allows connection to the AWS user interface from any internet-connected device. It’s an easy and cost-effective method. This method is ideal for quick and straightforward access to AWS services and is best suited for individual developers, small businesses, and educational institutions where advanced security is not a primary concern.
AWS Direct Connect
AWS Direct Connect offers a direct link to AWS data centers, providing a dedicated network connection that bypasses the public internet. This method is more secure and reliable, making it ideal for enterprises that require consistent network performance. It ensures lower latency, enhanced security, and dependable network performance. AWS Direct Connect is particularly suitable for enterprises and organizations that need a stable and secure connection to support their operations.
AWS VPN
AWS VPN provides a secure method of connecting to AWS resources using a Virtual Private Network. This option creates an encrypted connection between on-premises environments and AWS, enabling remote access from anywhere. It combines security with flexibility, making it an ideal choice for remote work and distributed teams. AWS VPN is cost-effective and well-suited for organizations seeking secure, flexible access to AWS resources from multiple locations.
Now that we’ve discussed how to connect to AWS, let’s explore what you can do once you are inside the AWS environment.
Infrastructure as Code (IaC)
A common reason for connecting to AWS is to deploy services in the cloud. In AWS, you can deploy services using an approach called Infrastructure as Code (IaC). IaC is a method for provisioning and managing AWS deployments using code templates.
Why Use IaC?
Infrastructure as Code (IaC) enables developers to deploy new resources using code, supports version control to maintain a history of configurations, and facilitates consistent and repeatable deployments.
AWS CloudFormation
AWS CloudFormation is a service that allows you to implement infrastructure as code capabilities in AWS. It supports JSON and YAML templates for creating code-based configurations, allows for version control of the templates, and supports Continuous Integration and Continuous Deployment (CI/CD) capabilities.
To use CloudFormation, you start by creating JSON or YAML templates that declare the resources you need and their configurations. You then upload the template to an S3 bucket that is accessible to CloudFormation, which reads the template code from the bucket and provisions all the resources in their respective configurations as instructed in the template.
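To make this concrete, here is a minimal sketch (using the boto3 Python SDK) of deploying a small template with CloudFormation. The stack name, template, and resource names are illustrative assumptions, not values from this guide. For small templates the body can be passed inline as shown; larger templates are uploaded to S3 and referenced instead.

```python
import boto3

# A minimal CloudFormation template (YAML) declaring a single S3 bucket.
# The resource and stack names are hypothetical examples.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  ExampleBucket:
    Type: AWS::S3::Bucket
"""

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

# Create the stack; CloudFormation reads the template and provisions the resources.
response = cloudformation.create_stack(
    StackName="example-stack",
    TemplateBody=TEMPLATE,
)

# Wait until the stack finishes creating before using its resources.
cloudformation.get_waiter("stack_create_complete").wait(StackName="example-stack")
print("Stack created:", response["StackId"])
```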
****************************************************************************
CHAPTER 3
AWS Storage Solutions
Here, we’ll explore storage solutions provided by AWS, storage lifecycle policies, and backup services.
AWS storage solutions: Data Storage, Storage Lifecycle Policies, and Backup Services to manage and protect data efficiently.
Cloud computing relies heavily on data storage and AWS provides various storage options suitable for various needs and use cases. These options ensure secure data management and retrieval in the cloud.
Storage types in AWS
We have four types of storage in AWS: Object Storage, Block Storage, File Storage, and Cache Storage.
1.) Object Storage
Object storage is a storage architecture that manages and organizes data as discrete units called objects. Key features of object storage include:
a) Horizontal scaling
Horizontal scaling means adding more servers (storage nodes) to handle more data and requests. In object storage, a storage node is a server or device that stores data and contributes to the overall storage infrastructure.
Vertical scaling, on the other hand, is where you increase the capacity of an existing machine by adding more resources (CPU, RAM, storage).
b) Metadata Management
Metadata is data about data. Each object is created with its own metadata, which can include details such as date, content type, access permissions, and custom tags defined by the user.
Rich metadata capabilities make it easier to organize, search, and manage large datasets. It also allows for efficient categorization and retrieval of data based on various attributes, enhancing data management and analysis.
c) Storing unstructured data
Unstructured data is data that has not yet been ordered in a predefined way. Some common examples are text files, video files, reports, email, and images. Unstructured data is often large, complex, and difficult to manage using traditional databases. Object storage is well-suited for storing unstructured data because it can handle large volumes of data with diverse formats without requiring a rigid schema.
In contrast, structured data has a standardized format for efficient access by software and humans alike. It is typically tabular with rows and columns clearly defining data attributes.
Amazon Simple Storage Service (Amazon S3)
Amazon S3 is an object storage service provided by AWS that offers high scalability, data availability, security, and performance. It is available in all regions.
Scalability: You can store any amount of data in S3. It is fully elastic, which means there is no need to provision storage, it automatically grows and shrinks as you add and remove data.
Durability and availability: S3 is designed to provide 99.999999999% data durability and 99.99% availability by default, backed by the most robust Service Level Agreements (SLAs) in the cloud. An SLA is a formal contract between a service provider and the customer that specifies the expected level of service.
Security and data protection: S3 is secure, private, and encrypted by default.
Lowest price and highest performance: S3 provides multiple storage classes for different use cases, the best price-performance for any workload, and automated data lifecycle management, so you can store massive amounts of frequently, infrequently, or rarely accessed data cost-efficiently.
How Amazon S3 Works
Amazon S3 stores data as objects within buckets. A bucket is a container for objects, while an object is a file and any metadata that describes the file. Each bucket hosted in an AWS account contains multiple objects. Each object in an Amazon S3 bucket consists of data, a unique key (identifier), and metadata. The data represents the actual content, the unique key ensures each object can be uniquely identified and retrieved, and the metadata provides additional information about the object, such as its properties and management details.
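As a quick illustration of the bucket/object/key/metadata model described above, here is a minimal boto3 sketch; the bucket name, key, and metadata values are made up for the example.

```python
import boto3

s3 = boto3.client("s3")

bucket = "example-guide-bucket"   # hypothetical bucket name (bucket names are globally unique)
key = "reports/2024/summary.txt"  # the object's unique key within the bucket

# Create the bucket (outside us-east-1 a LocationConstraint would also be required).
s3.create_bucket(Bucket=bucket)

# Upload an object: the data, its key, and some user-defined metadata.
s3.put_object(
    Bucket=bucket,
    Key=key,
    Body=b"Quarterly summary goes here.",
    Metadata={"department": "finance", "content-category": "report"},
)

# Retrieve the object by its key; the response carries both the data and the metadata.
obj = s3.get_object(Bucket=bucket, Key=key)
print(obj["Body"].read().decode())
print(obj["Metadata"])
```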
Storage Classes in S3
Amazon S3 offers six storage classes designed to cater to varying latency and data access requirements: S3 Standard, Intelligent-Tiering, One Zone-Infrequent Access (One Zone-IA), Glacier, Glacier Deep Archive, and Outposts.
i) Amazon S3 Standard
This is the most used class suitable for data that needs to be accessed frequently. It is durable, scalable, and available in all AWS regions.
ii) Amazon S3 Intelligent Tiering
Optimizes cost by moving objects automatically between tiers (classes) based on data access patterns.
iii) One Zone-Infrequent Access (IA)
Operates in a single Availability Zone and is therefore cost-effective and suitable for data that does not require redundancy. It is ideal for infrequently accessed data that can be easily reproduced.
iv) Amazon S3 Glacier
Glacier is a low-cost, archival storage option for infrequently accessed data that can withstand long retrieval times ranging from minutes up to a few hours.
v) Amazon S3 Glacier Deep Archive
Glacier Deep Archive has the lowest storage cost and the longest retrieval times. It is most suitable for infrequently accessed data backups.
vi) Amazon S3 Outposts
Outposts extend storage to on-premises data, enabling a hybrid architecture for seamless on-premises and cloud data integration. Essentially, it brings the cloud storage capabilities of S3 to your local environment (physical hardware located in your own data center or on your premises).
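As a sketch of how these classes are used in practice, the snippet below (boto3, with hypothetical bucket and key names) uploads an object directly into a cheaper class and adds a simple lifecycle rule that transitions older objects to Glacier and then Glacier Deep Archive; it is illustrative rather than a complete lifecycle policy.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-guide-bucket"  # hypothetical bucket name

# Upload an infrequently accessed object straight into the One Zone-IA class.
s3.put_object(
    Bucket=bucket,
    Key="archives/old-report.csv",
    Body=b"older, rarely accessed data",
    StorageClass="ONEZONE_IA",
)

# Lifecycle rule: after 90 days, transition objects under "archives/" to Glacier,
# and after 365 days to Glacier Deep Archive.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-reports",
                "Status": "Enabled",
                "Filter": {"Prefix": "archives/"},
                "Transitions": [
                    {"Days": 90, "StorageClass": "GLACIER"},
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```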
2.) Block Storage
Block storage divides storage into fixed-size blocks, each with a unique address. Uploaded data is segmented (divided into smaller, manageable pieces or blocks) to create a lookup table for block identification during retrieval. It’s commonly used for transactional stores and applications with intensive read-write requirements.
Amazon Elastic Block Store (EBS)
Amazon Elastic Block Store (EBS) is a block storage service designed for compute services. It provides block-level storage volumes, which act like hard drives, organizing data into fixed-size blocks. It is designed to be used with AWS compute services such as Amazon EC2 (Elastic Compute Cloud). It offers both HDD and SSD options. EBS volumes are directly attached to EC2 instances, providing localized storage. This means that the storage is closely tied to the compute instance, ensuring 99.999% application availability.
Amazon EBS provides block storage volumes in SSD or HDD formats. These volumes are attached to AWS compute instances (such as EC2) to provide localized storage. This setup supports applications with 99.999% availability, ensuring high performance and reliability.
3.) File Storage
File storage allows for hierarchical data storage in directories and subdirectories, offering simultaneous read-write access to multiple users and applications. It stores metadata for faster retrieval.
Amazon Elastic File System (Amazon EFS)
Amazon EFS is a file-level, fully managed storage service that can be accessed by multiple EC2 instances concurrently. It is designed for both on-premises and cloud data. It’s used in DevOps for code sharing, in content management systems, and for parallel processing in data science experiments.
4.) Cache Storage
Cache storage stores frequently accessed data in a quickly retrievable location, speeding up application response time and reducing server load by minimizing data retrieval from the original source.
Amazon ElastiCache
ElastiCache provides cache storage in AWS, operating atop data stores to deliver cached data to applications. It is commonly used for storing web app session data and accelerating real-time analytics.
ElastiCache caches data from storage services to speed up access for applications. It interacts with services like Amazon S3, Amazon CloudWatch, AWS IAM Identity Center, and Amazon Kinesis to enhance performance and scalability.
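The typical usage pattern here is cache-aside: check the cache first and fall back to the original data store on a miss. Below is a minimal sketch using the open-source redis Python client against a hypothetical ElastiCache for Redis endpoint; the endpoint, key naming, and the database-lookup helper are assumptions for illustration.

```python
import json
import redis

# Hypothetical ElastiCache for Redis endpoint.
cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def load_user_profile_from_db(user_id: str) -> dict:
    # Placeholder for a real database query (e.g., against RDS or DynamoDB).
    return {"id": user_id, "name": "Example User"}

def get_user_profile(user_id: str) -> dict:
    key = f"user-profile:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        # Cache hit: serve the data without touching the database.
        return json.loads(cached)
    # Cache miss: read from the source, then cache the result for 5 minutes.
    profile = load_user_profile_from_db(user_id)
    cache.setex(key, 300, json.dumps(profile))
    return profile
```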
****************************************************************************
CHAPTER 4
Compute in AWS
Compute services, delivered as Infrastructure as a Service (IaaS), supply cloud-based servers that provide computational resources, such as virtual machines (VMs), on demand. They allow users to run applications and manage workloads without investing in physical hardware or power backup.
Amazon Elastic Compute Cloud (EC2)
Amazon EC2 provides compute capabilities in AWS. Each individual EC2 machine is called an instance and can be scaled up or down based on needs. Different use cases necessitate different types of instances. EC2 instances seamlessly integrate with other AWS services like storage and databases.
EC2 Instance Type Categories
There are six categories of EC2 instances for specialized use-cases: General Purpose, Compute optimized, Memory optimized, Storage optimized, Accelerated computing, and High Performance Computing (HPC) optimized.
i) General Purpose
General-purpose instances are designed to handle a balanced mix of computing power, memory, and networking resources. They are commonly used for things like running websites and managing code repositories.
ii) Storage optimized instances
Storage optimized instances are built for handling lots of data quickly, especially for tasks that involve reading and writing large amounts of data. They are ideal for uses like storing big datasets in a data warehouse or reorganizing large databases.
iii) Compute optimized instances
Compute optimized instances are made for tasks that need a lot of processing power, such as running scientific simulations or financial models.
iv) Memory optimized instance
Memory optimized instances are built for tasks that need a lot of memory but not necessarily a lot of storage, such as analyzing real-time data streams or generating closed captions.
v) Accelerated computing instances
Accelerated computing instances include special hardware like Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs), making them ideal for tasks such as deep learning and graphics rendering.
vi) High Performance Computing (HPC) optimized instances
HPC optimized instances are designed for tasks that always need a lot of computing power, like weather forecasting or crash simulations, and help to optimize costs for these high-demand applications.
Load balancing in AWS
An important concept in designing effective compute resources is load balancing. Load balancing evenly distributes traffic among multiple EC2 instances, preventing server overloads and ensuring high availability and efficient horizontal scaling. AWS supports four types of load balancers: Classic, Network, Application, and Gateway load balancers.
How does load balancing work?
1.) The process starts with users sending processing requests.
User Requests: Customers visit your online store and start browsing different product pages. Each action they take, such as clicking on a product or adding an item to their cart, sends a request to your server.
2.) The requests hit the load balancer.
These requests first go to the load balancer instead of directly hitting your servers. The load balancer acts as a gatekeeper, receiving all incoming traffic.
3.) The load balancer activates the primary target group to fulfill the user request.
Primary Group Activation: The load balancer forwards these requests to the primary group of EC2 instances (servers). This group is set up to handle a certain amount of traffic efficiently. These instances start processing the requests, delivering the product pages and managing shopping carts.
4.) If the traffic exceeds the capacity of the primary target, the load balancer activates the secondary group and distributes traffic evenly among all instances.
Handling High Traffic: “Traffic” here refers to the volume of user requests coming to your website. If the number of requests (traffic) increases and the primary group of instances reaches its capacity (i.e., they can’t handle more requests efficiently), the load balancer will activate the secondary group of EC2 instances.
Secondary Group Activation: The secondary group of EC2 instances starts up, and the load balancer begins to distribute the additional requests to these new instances. This helps in managing the high traffic by spreading the load across more servers, ensuring that no single server becomes overloaded.
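To show what configuring this looks like, here is a hedged boto3 sketch that creates a target group and registers two EC2 instances behind an existing Application Load Balancer; the VPC ID, instance IDs, and load balancer ARN are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Create a target group for HTTP traffic in a hypothetical VPC.
tg = elbv2.create_target_group(
    Name="web-primary-group",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",   # placeholder VPC ID
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Register two EC2 instances (placeholder IDs) as targets.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0aaaaaaaaaaaaaaaa"}, {"Id": "i-0bbbbbbbbbbbbbbbb"}],
)

# Attach the target group to an existing load balancer via a listener (placeholder ARN).
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/example-alb/50dc6c495c0c9188",
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```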
Compute elasticity
Elasticity is a crucial concept in computing that provides systems with the flexibility to automatically scale resources up or down according to demand. This adaptive scaling ensures that your system expands its capacity during peak periods and conserves resources when demand wanes, thereby optimizing both costs and performance. Amazon EC2 instances leverage this elasticity through the use of EC2 Auto Scaling.
EC2 Auto Scaling
This is an AWS service that automatically adjusts the number of active EC2 instances based on real-time usage. It helps reduce costs by preventing over-provisioning (having more resources than necessary) and ensures that your application always has the right amount of resources to handle current demand.
How Does Auto-Scaling Work?
i) Users send requests
ii) These requests are received by the EC2 auto-scaling service
iii) The auto-scaling service routes these requests to the available active EC2 instances.
iv) If the demand exceeds the capacity of the existing instances, the auto-scaling service creates new EC2 instances to handle the increased demand.
v) If the demand decreases, the auto-scaling service terminates the extra instances to save costs.
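Below is a minimal boto3 sketch of such a setup: an Auto Scaling group built from a hypothetical existing launch template, plus a target-tracking policy that adds or removes instances to keep average CPU around 50%. All identifiers are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Create an Auto Scaling group from an existing (hypothetical) launch template.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateId": "lt-0123456789abcdef0"},  # placeholder
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0aaaaaaaa,subnet-0bbbbbbbb",  # placeholder subnets
)

# Target-tracking policy: add or remove instances to hold average CPU near 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="keep-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)
```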
Load Balancing vs. Auto-Scaling
Load Balancing distributes traffic evenly across an existing set of EC2 instances, ensuring no single instance is overloaded. By balancing the load among all available instances, it helps maintain high availability and performance.
Auto-Scaling automatically adjusts the number of EC2 instances based on the demand. It can create new instances when demand increases and terminate them when demand decreases, ensuring that your application always has the necessary resources.
Containerized Deployments and Serverless Compute
In this section, we will go deeper into compute, exploring containers and serverless options in AWS.
Containerized Deployments
What are Containers? Containers package applications and their dependencies into a single lightweight, portable unit that can run anywhere, isolating them from the underlying system.
Dependencies refer to the external libraries, frameworks, tools, or services an application or system relies on to function correctly.
Why containerize? Containers offer efficient resource utilization by sharing the host OS and can be easily moved across different environments. This isolation ensures consistent performance regardless of where they are deployed.
Containers in AWS
AWS offers container deployments through Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS).
Both of these services have robust scalability allowing containerized applications to scale up or down based on demand, and integrate seamlessly with other AWS services like databases and storage.
Amazon Elastic Container Service (ECS)
Amazon ECS is a managed orchestration service that simplifies the deployment, management, and scaling of containerized applications. It is particularly effective for microservices-based applications and excels in processing large datasets through batch processing workloads across multiple AWS services.
Amazon Elastic Kubernetes Service (EKS)
Amazon EKS enhances ECS functionalities for Kubernetes-powered applications. EKS is ideal for running compute-intensive machine learning tasks and can operate in hybrid cloud environments, offering robust support for complex, scalable applications.
Comparing Containers with Serverless Compute for Dynamic Workloads
Containers are ideal for persistent and predictable computing needs, handling resource-intensive operations like large databases or continuous data processing efficiently. However, some specialized use-cases require more dynamic solutions. For instance, applications that only need compute resources when specific events occur or those with sporadic sessions of heavy workloads need a flexible approach. This is where serverless compute comes in as a solution. In a serverless architecture, AWS automatically manages and allocates resources only when needed, making it perfect for handling irregular, event-driven workloads without the need for manual intervention.
Serverless Compute
What is Serverless Architecture? In a serverless architecture, developers do not need to manage servers, they focus on writing code that is executed in response to events. AWS handles the provisioning, scaling, and maintenance of the infrastructure.
Functions inside serverless applications are triggered by events in real time, and since the usage is dynamic, you pay only for the time your application is running.
Serverless compute is ideal for scenarios where compute resources are triggered by events, applications that require real-time file processing, environments with rapidly changing compute demands due to sporadic bursts of data, and for applications like chatbots and voice assistants that handle complex data structures.
AWS offers serverless compute capabilities through AWS Lambda and AWS Fargate.
AWS LAMBDA
AWS Lambda is a service that automatically runs your code in response to events and scales the compute resources as needed. Here’s a simple example to illustrate how it works:
Imagine you have a photo-sharing website that requires all uploaded images to be resized to a specific dimension. Every time a user uploads a new photo, an AWS Lambda function can be triggered to resize the image.
So, when a photo is uploaded:
- An event triggers the Lambda function.
- The Lambda function runs a resizing algorithm on the photo.
- The resized photo is then stored and displayed on the website.
This way, AWS Lambda automatically ensures that all photos meet the required dimensions, without any manual intervention.
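A sketch of what such a Lambda function might look like is shown below. It reads the S3 upload event and writes a resized copy to a second bucket; the destination bucket and the resize helper are assumptions, and a real function would bundle an image library (for example Pillow) to do the actual resizing.

```python
import boto3

s3 = boto3.client("s3")
RESIZED_BUCKET = "photos-resized"  # hypothetical destination bucket

def resize_image(data: bytes) -> bytes:
    # Placeholder: a real implementation would use an image library (e.g., Pillow)
    # packaged with the Lambda deployment to resize to the target dimensions.
    return data

def handler(event, context):
    # The S3 event contains one record per uploaded object.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Download the newly uploaded photo, resize it, and store the result.
        original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        s3.put_object(Bucket=RESIZED_BUCKET, Key=key, Body=resize_image(original))
```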
AWS Fargate
AWS Fargate allows you to run containers without managing servers. It’s like combining the benefits of containers and serverless computing. Fargate automatically takes care of resource allocation, making it efficient and cost-effective. It’s great for running AI and machine learning applications because it handles big data and parallel processing easily, without the need for you to manage the underlying infrastructure.
Transitioning from Traditional EC2 Compute to Modern Containerized and Serverless Architectures
Microservice architecture is a distinctive method of developing software systems that focuses on building single-function modules with well-defined interfaces and operations. Unlike microservices, a monolith application is built as a single, autonomous unit. So, any changes to the application are slow, as they affect the entire system. Microservices solve the challenges of monolithic systems by being as modular as possible. In the simplest form, they help build an application as a suite of small services, each running in its own process and being independently deployable.
Modularity refers to designing a system in separate, interchangeable components or modules. Each module or component can function independently and can be developed, tested, and deployed separately. This approach simplifies maintenance and enables scalability, as individual modules can be updated or replaced without affecting the entire system.
Amazon Elastic Compute Cloud (EC2) represents traditional compute by offering resizable virtual servers in the cloud, providing flexibility and control over computing resources. EC2 instances can be tailored for various workloads, ensuring applications have the necessary compute power.
However, as technology evolves, there’s a shift towards more modular and efficient architectures like containerized and serverless computing. Containerized deployments, managed through services like Amazon ECS and Amazon EKS, package applications and their dependencies into portable units, enabling consistent performance across environments.
Serverless computing, exemplified by AWS Lambda and AWS Fargate, allows developers to run code and manage containers without handling the underlying infrastructure, automatically scaling based on demand and optimizing costs by charging only for actual usage.
This shift from traditional EC2 instances to containerized and serverless solutions reflects the need for more scalable, efficient, and manageable application deployment strategies in modern cloud environments.
****************************************************************************
Chapter 5
Exploring AWS Database Resources
Cloud databases, or Database as a Service (DBaaS), are managed database engines that allow access to databases without configuring physical infrastructure or installing software. AWS provides different types of databases for various use cases:
Relational databases store data in structured tables and support SQL queries, making them ideal for transactional applications. NoSQL databases handle unstructured data like JSON documents, providing flexibility for evolving data models. Memory-based databases use in-memory storage for fast data retrieval, suitable for caching and real-time applications. Compute-hosted databases deploy databases on virtual machines for custom configurations and control over the environment.
1.) Relational Databases
Relational databases are structured storage systems that organize data into tables and establish relationships between tables using keys. Examples include MySQL and PostgreSQL. Relational databases offer scalability, consistency, and data integrity.
AWS provides relational database management through two services, Amazon RDS and Amazon Aurora.
Amazon Relational Database Service (RDS)
RDS is a fully managed relational database service that handles setup, operation, and scaling for you. Amazon RDS supports multiple database engines, including Amazon Aurora, PostgreSQL, MySQL, MariaDB, and others. A database engine is the core software component that a database management system (DBMS) uses to create, read, update, and delete (CRUD) data from a database. It handles the data storage, retrieval, and management tasks. RDS performs well in both cloud and on-premises environments.
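For illustration, the sketch below launches a small managed MySQL instance on RDS with boto3; the identifier, credentials, and sizes are placeholders, and in practice the credentials would come from a secrets store rather than being hard-coded.

```python
import boto3

rds = boto3.client("rds")

# Launch a small managed MySQL instance (all values are illustrative placeholders).
rds.create_db_instance(
    DBInstanceIdentifier="example-mysql-db",
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,                     # storage in GiB
    MasterUsername="admin",
    MasterUserPassword="change-me-please",   # use a secrets store in real setups
    MultiAZ=True,                            # keep a standby copy in another AZ
)

# Block until the instance is ready to accept connections.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="example-mysql-db")
```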
Amazon Aurora
Amazon Aurora is a relational database service optimized for MySQL and PostgreSQL engines. It provides high performance at a much lower cost compared to traditional on-premises databases. Aurora automatically performs continuous backups and supports multi-region deployments, ensuring your data is always available and protected.
RDS Vs. Aurora
Amazon RDS
General Purpose: Supports a wide range of database engines including MySQL, PostgreSQL, Oracle, SQL Server, and MariaDB.
Use Case: Suitable when you need flexibility to choose from different database engines.
Amazon Aurora
Specialized Service: Designed specifically for MySQL and PostgreSQL.
High Performance: Fine-tuned for high performance and cost efficiency for MySQL and PostgreSQL databases.
Use Case: Ideal when you specifically need a MySQL or PostgreSQL database with optimized performance and features.
Summary
Use RDS for a variety of database engines.
Choose Aurora for high-performance MySQL or PostgreSQL databases.
2.) NoSQL Databases
NoSQL databases go beyond the traditional table-based relational database model to support flexible data models like JSON and raw documents. They offer dynamic schema flexibility, allowing them to adapt to changing data structures and scale horizontally to handle increasing amounts of data. They are perfect for unstructured or semi-structured data. NoSQL databases do not use SQL as their primary query language and can store data in various formats such as documents, key-value pairs, wide-column stores, or graphs.
AWS offers two NoSQL database services: DocumentDB and DynamoDB.
Amazon DynamoDB
Amazon DynamoDB is a managed NoSQL database built on a serverless architecture that provides high performance with limitless throughput and storage.
It guarantees 99.999% availability and scales seamlessly. DynamoDB handles unstructured data, such as real-time video streaming or media content. It’s also well-suited for tracking inventory, managing shopping carts and customer profiles, and maintaining session history and leaderboards on gaming platforms.
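Here is a small sketch of that key-value access pattern with boto3; the table name and attributes are hypothetical, and the table is assumed to already exist with customer_id as its partition key.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ShoppingCarts")  # hypothetical table, partition key: customer_id

# Write an item describing a customer's cart.
table.put_item(
    Item={
        "customer_id": "cust-42",
        "items": ["sku-123", "sku-456"],
        "updated_at": "2024-05-01T12:00:00Z",
    }
)

# Read the same item back by its key.
response = table.get_item(Key={"customer_id": "cust-42"})
print(response.get("Item"))
```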
Amazon DocumentDB
MongoDB is a popular NoSQL database designed to store and retrieve data in JSON-like documents, making it flexible and scalable. It’s often used for handling large volumes of unstructured data.
Amazon DocumentDB is AWS’s managed NoSQL service that offers compatibility with MongoDB. It is designed for working with large-scale document workloads without the need for infrastructure management.
This service is commonly used in:
- Content Management Systems: Managing data like text reviews and images.
- Recommendation Engines: Handling millions of user profiles.
- Generative AI Use Cases: Managing large semantic, language-based data.
3.) Memory-Based Databases
Another category of databases offered by AWS are memory-based databases. These are designed for high-performance storage and utilize RAM for super-fast retrieval. They are used in applications that require caching, need real-time data processing, or track high-throughput high-speed transactions.
MemoryDB for Redis
MemoryDB for Redis is AWS’s offering for memory-based databases, delivering microsecond read and millisecond write latencies. It is hosted with 99.99% global availability and is equipped with fault tolerance to recover instantaneously from outages or issues.
4.) Compute-Hosted Databases
EC2-hosted databases, also known as compute-hosted databases, are custom databases deployed on Amazon EC2 instances. They simplify data access during compute tasks and offer granular control over configuration and management. However, users are responsible for backups.
Compute databases vs static databases
When deciding between EC2-hosted and AWS-managed databases (static), consider the level of control and automation you need. Choose EC2-hosted databases for full control over configuration, but opt for AWS-managed databases if you prefer an automated, fully managed service.
****************************************************************************
CHAPTER 6
Database Migration Services in AWS
When we talk about storing data in the cloud, an important question arises: how does data get into the cloud in the first place? This is where database migration comes in. AWS provides robust tools and services to facilitate the smooth transfer of data from on-premises databases to cloud-based solutions, ensuring minimal downtime and seamless integration.
Database migration is the process of moving your data from one environment to a new one.
Why Do We Need Data Migration?
Legacy systems are outdated computer systems, software, or technologies. Legacy systems often struggle to meet the current demands for scalability, efficiency, and integration with newer technologies and applications.
Data migration is essential because legacy systems often fail to meet today’s rapidly evolving scalability and efficiency requirements. Modern applications, such as those using generative AI, demand advanced compute and storage capabilities. Furthermore, businesses now gather data from numerous sources, necessitating a unified management layer. Migrating data to advanced environments helps achieve these goals, ensuring that businesses can efficiently manage and leverage their data in a centralized, robust platform.
Data Migration in Practice
A typical data migration project consists of five steps. First, an assessment of existing data structures, formats, and dependencies of the source data. Second, preparation, where a clear migration plan is developed and source data is organized. Third, execution, where necessary tools and operations are deployed to perform the migration without inconsistencies. Fourth, validation to verify data integrity post-migration and conduct thorough testing. Finally, optimization, to fine-tune the performance of applications in the new environment.
AWS offers four services to support database migration: the Database Migration Service, the AWS Snow Family, DataSync, and the Schema Conversion Tool.
AWS Database Migration Service (DMS)
Database Migration Service (DMS) is a tool that facilitates the seamless transfer of databases and analytics engines to AWS. It operates by replicating data across multiple availability zones, which ensures minimal downtime during the migration process. DMS supports various source and target databases, making it versatile for different migration needs. Additionally, it includes validation checks and task monitoring features to maintain data integrity throughout and after the migration. A typical DMS migration involves three phases:
- Assess: Evaluating the existing database to understand its structure, dependencies, and requirements.
- Convert: Using the AWS Schema Conversion Tool to transform the database schema and code to be compatible with Amazon Aurora.
- Migrate: Moving the data from the on-premises database to Amazon Aurora, ensuring minimal downtime and data integrity.
AWS DMS facilitates this process by replicating data across multiple availability zones, ensuring high availability and reliability during the migration. The end result is a database hosted on Amazon Aurora, benefiting from its high performance and managed service capabilities.
AWS Snow Family
The AWS Snow Family consists of physical devices that help move very large amounts of data (petabytes) offline. These devices are used in remote locations or places close to where the data is stored. They have strong security features like encryption and seals that show if someone tried to tamper with the device, ensuring the data remains safe.
AWS DataSync
AWS DataSync is a service that helps transfer large amounts of data from your local storage to the cloud quickly and efficiently. It uses parallel processing to speed up the transfer and integrates smoothly with AWS storage services like S3. DataSync can be automated using the AWS management console or APIs, making the process even easier.
AWS Schema Conversion Tool
The AWS Schema Conversion Tool automates the conversion of database schemas from one format to another. It maps objects from the source database to the target database, reducing the need for manual work. This tool also allows for custom conversion rules, checks the converted schema for errors before migration, and provides live feedback to address any issues that arise during the process.
When to use which service?
We have four AWS database migration services, each best suited for specific scenarios:
- Database Migration Service: Best for large-scale and complex data migrations across different types of databases.
- AWS Snow Family: Ideal for large physical data volumes, limited internet bandwidth, or when data needs to be processed locally before moving to AWS.
- AWS DataSync: Perfect for frequent and automated large-scale data transfers between on-premises storage and AWS.
- AWS Schema Conversion Tool: Useful when migrating databases with different structures, helping to reduce manual work and ensure a smooth transition.
****************************************************************************
CHAPTER 7
AWS Networking & Content Delivery
This section highlights how AWS supports the connectivity of cloud resources, allowing them to effectively communicate across far-flung locations. It also delves into how AWS networking services ensure secure and reliable connections and content delivery. It covers five crucial services: Amazon VPC (for creating private networks within AWS), Amazon VPN (for secure connections to AWS), AWS Direct Connect (for dedicated network connections to AWS), Amazon Route 53 (for domain name services), and Amazon CloudFront (for distributed content delivery).
Amazon Virtual Private Cloud (VPC)
Amazon VPC establishes networking in AWS by providing a private, logically isolated space in the global AWS infrastructure for defining and launching resources. VPCs are regionally hosted, residing in one AWS region.
Amazon VPC allows for a dedicated network environment within the AWS cloud that supports both IPv4 and IPv6 protocols, and allows users to customize IP address ranges according to their specific needs.
Building a Logically Isolated Virtual Network
- Security Layers: Amazon VPC enhances security through the use of security groups and network access control lists (ACLs), which act as firewalls to control traffic at the instance and subnet level, respectively.
- Control Over Network Components: It provides extensive control over network configuration by allowing the creation and customization of subnets, route tables, and network gateways. This flexibility helps in designing a network that closely fits an organization’s IT infrastructure needs.
Let’s delve deeper into these essential features to understand how they bolster network isolation and security within the Amazon VPC.
Key VPC concepts
Custom configurable IP address range
IP addresses act as your designated spaces in the digital realm, much like having a specific office address in the physical world. Amazon VPC (Virtual Private Cloud) allows you to set up this virtual address space within the AWS ecosystem according to your specific requirements.
Subnets (sub-networks) are subdivisions of your VPC that divide its IP address range into smaller, manageable segments. They function similarly to breaking down a large office into smaller, more manageable departments, each with its own designated roles and responsibilities, yet all operating under the same organizational address. This division helps in organizing, managing, and securing the network’s resources efficiently.
Route tables
Route tables in a Virtual Private Cloud (VPC) contain rules that determine the pathways for network traffic, either from your subnets or through gateways, directing how data moves within the VPC or to external networks.
Network gateways, on the other hand, are key components that manage connections between your VPC and the wider internet, or between different VPCs. They handle both incoming and outgoing traffic, ensuring that data exchanges are secure and efficient.
Default vs. custom Amazon VPC
Virtual Private Clouds (VPCs) in AWS are available in two types, each serving different levels of user needs and customization:
i) Default VPCs: Automatically created for every new AWS account, default VPCs are pre-configured to simplify initial setup. They come with a subnet in every availability zone within the region tied to the account, allowing resources placed within them to communicate with the internet right out of the box.
ii) Custom VPCs: For users needing specific configurations, custom VPCs provide the flexibility to define and customize network settings according to precise requirements. These include choosing custom IP address ranges, setting up specific route tables, and more. Internet access for custom VPCs isn’t automatic; it requires explicit configuration, giving users more control over their network security and traffic management.
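As a sketch of what defining a custom VPC looks like, the boto3 snippet below creates a VPC with a chosen IP address range and carves out one subnet in a single Availability Zone; the CIDR blocks and tags are illustrative assumptions.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a custom VPC with an IP address range of your choosing.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Carve a smaller subnet out of the VPC's address range in one Availability Zone.
subnet = ec2.create_subnet(
    VpcId=vpc_id,
    CidrBlock="10.0.1.0/24",
    AvailabilityZone="us-east-1a",
)

# Tag the resources so they are easy to identify later.
ec2.create_tags(
    Resources=[vpc_id, subnet["Subnet"]["SubnetId"]],
    Tags=[{"Key": "Name", "Value": "example-custom-vpc"}],
)
```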
Network Security
Network security is crucial in establishing robust and secure networking within cloud environments. AWS enhances the security of its networks by managing both inbound and outbound traffic, which are essential for regulating data entering and leaving the network.
- Inbound Traffic: Refers to external data packets that are received by a network. In the context of AWS, it typically means any requests or data sent to your resources in the cloud from other networks, including the internet.
- Outbound Traffic: Involves data packets that are sent from your network to an external network. This could include responses from your applications hosted on AWS to users or other external systems.
To ensure the security of these traffic flows, AWS provides two key services:
Network Access Control Lists (ACLs): These act as a virtual firewall at the subnet level within an AWS VPC. Network ACLs are stateless; they evaluate the incoming and outgoing packets separately and control traffic to and from subnets based on specified rules, such as allowed IP ranges, protocols, or ports. This means each packet is inspected independently, without considering any previous packets.
Security Groups: These provide a similar function but are associated directly with AWS resources, such as EC2 instances. Unlike Network ACLs, security groups are stateful, meaning they automatically allow return traffic to flow out regardless of outbound rules, as long as the traffic matches the inbound rules.
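A brief sketch of the stateful security group model: the snippet below creates a security group in a hypothetical VPC and opens inbound HTTPS; return traffic for those connections is then allowed automatically, with no matching outbound rule required.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a security group in a hypothetical VPC.
sg = ec2.create_security_group(
    GroupName="web-server-sg",
    Description="Allow inbound HTTPS from anywhere",
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
)

# Inbound rule: allow HTTPS (TCP 443) from any IPv4 address.
# Because security groups are stateful, responses to these connections
# are allowed out automatically without a matching outbound rule.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }
    ],
)
```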
VPC endpoints
Beyond security, resource communication is important for networking. VPC endpoints enable private, secure connections between AWS services without public IP addresses.
AWS PrivateLink
AWS PrivateLink is a specialized type of VPC endpoint that facilitates private connections between different Virtual Private Clouds (VPCs), various AWS services, and on-premises networks without requiring exposure to the public internet. This service simplifies network management by reducing the need for complex firewall rules and network configurations, thereby enhancing security and streamlining the secure exchange of data across different environments.
Two other ways of connecting to AWS that we have previously seen are:
1.) AWS VPN: This service enables secure connections between an on-premises network and AWS over the internet. It’s particularly useful for smaller or temporary projects where quick setup and cost-effectiveness are priorities. AWS VPN uses standard encryption techniques to ensure that data transmitted over the internet is secure, making it a viable option for businesses that need a reliable but flexible connection.
2.) AWS Direct Connect: AWS Direct Connect provides a more robust solution by offering a dedicated network connection between your on-premises infrastructure and AWS. This connection is direct and does not traverse the public internet, thereby enhancing security and reliability. AWS Direct Connect is ideal for high-bandwidth or mission-critical workloads, as it facilitates consistent network performance and lower latency compared to internet-based connections.
DNS — Internet’s address book
DNS, short for Domain Name System, functions like the internet’s address book. It translates easy-to-remember domain names (like www.example.com) into IP addresses (like 192.0.2.1) that computers use to identify each other on the network. This system is crucial for helping users access websites using familiar names instead of complex numeric addresses.
Amazon Route 53
Amazon Route 53 is AWS’s DNS service that translates domain names into IP addresses. It also manages domain registration and traffic routing for web applications. It ensures that user requests are efficiently routed to the appropriate servers, even distributing traffic across multiple resources to improve load times and increase fault tolerance. It’s widely used for hosting web applications and distributing incoming traffic across resources.
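To see the address-book idea in code, here is a hedged boto3 sketch that points a domain name at an IP address in an existing hosted zone; the zone ID, domain, and IP address are placeholders.

```python
import boto3

route53 = boto3.client("route53")

# Upsert an A record so www.example.com resolves to 192.0.2.1.
# The hosted zone ID below is a placeholder for an existing zone.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",
    ChangeBatch={
        "Comment": "Point the website at its server",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": "192.0.2.1"}],
                },
            }
        ],
    },
)
```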
Information movement in the cloud
Moving information across the cloud can be slow and challenging due to distance-related delays (geographical latency) and limited bandwidth, especially when transferring data across continents.
Content Delivery Networks (CDNs): AWS addresses these challenges through CDNs, strategically placing servers worldwide to cache content closer to users. This setup speeds up data delivery, including web pages and multimedia content, by reducing the distance information needs to travel.
Amazon CloudFront
Amazon CloudFront is AWS’s own CDN. It excels at quickly distributing content by leveraging a global network of servers. It efficiently handles sudden traffic spikes and scales as needed, ensuring content is delivered rapidly and reliably. It also accelerates web content, APIs, and streaming. Additionally, CloudFront enhances security by providing DDoS protection and HTTPS support, which helps secure the data transmitted between users and the network.
How CloudFront is used
Amazon CloudFront is a content delivery network that enhances user interactions with websites by speeding up the delivery of website content, leading to an improved user experience. Here’s how CloudFront is utilized effectively:
- Optimizing Streaming: CloudFront minimizes buffering times during streaming, ensuring viewers can watch videos smoothly without interruption. This is crucial for maintaining engagement and satisfaction with media-heavy content.
- Scalable Updates: CloudFront also excels in delivering patches and over-the-air updates to devices efficiently. It automatically scales its resources to handle these distributions, ensuring that updates are delivered promptly without overloading the system, regardless of the number of users or devices being updated simultaneously.
Through these features, CloudFront provides a robust platform for fast content delivery and effective digital media streaming, while also managing large-scale updates efficiently.
****************************************************************************
CHAPTER 8
AWS AI & ML Services and Machine Learning Process
AWS is at the forefront of providing advanced AI and ML services, which are transforming how we interact with technology by automating complex processes and enabling intelligent decision-making.
What is AI and ML?
Artificial Intelligence (AI): AI involves creating machines that can perform tasks that typically require human intelligence. This includes capabilities like problem-solving, speech recognition, and adaptive learning.
Machine Learning (ML): ML is a specific branch of AI focused on developing systems that learn and improve from experience without being explicitly programmed. It uses algorithms to parse data, learn from that data, and make informed decisions based on what it has learned.
AWS AI and ML offerings
AWS offers a comprehensive range of AI and ML services designed to cater to various needs:
i) AI Services: These are pre-built tools for common AI tasks, such as language translation, voice recognition, and visual understanding, which can be easily integrated into applications.
ii) ML Services: AWS provides services that help developers and data scientists build, train, and deploy ML models quickly.
iii) ML Frameworks and Infrastructure: AWS supports all major ML frameworks like TensorFlow and PyTorch, offering flexible and scalable computing resources for training and inference.
iv) Machine Learning Workflow: AWS streamlines the ML workflow, from data preprocessing and model training to deployment and management, making it more accessible for users to implement ML solutions.
AWS AI services
AWS provides a suite of AI-focused services designed to simplify the implementation of artificial intelligence into various applications without the need for deep machine learning expertise. These services use pre-trained or auto-trained models that allow developers to integrate advanced AI capabilities quickly and efficiently. This simplicity encourages individuals interested in AI to explore and implement these services with confidence.
Here’s an overview of some prominent AWS AI services:
1.) Amazon Rekognition
This service provides powerful image and video analysis capabilities. It can identify objects, people, text, scenes, and activities in images and videos, and it also offers facial recognition technology.
2.) Amazon Polly
Converts text into lifelike, natural-sounding speech.
3.) Amazon Lex
Lex is a managed service that handles the development and deployment of conversational interfaces like chatbots.
4.) Amazon Comprehend
A natural language processing (NLP) service that uses machine learning to uncover insights and relationships in text. It can identify the language, extract key phrases, places, people, brands, or events, understand how positive or negative the text is, and automatically organize a collection of text files by topic.
5.) Amazon Translate
Translates text between languages.
6.) Amazon Forecast
An AI service that delivers highly accurate forecasts using the same technology as Amazon.com. This service automatically discovers how variables such as product features, seasonality, and store locations affect each other, making it easier to prepare more accurate forecasts.
7.) Amazon CodeGuru
CodeGuru is a developer-friendly service that is used to automate code reviews. It specializes in generating intelligent recommendations for improving code quality.
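To show how easily these pre-trained services can be called, here is a short boto3 sketch that runs Amazon Comprehend and Amazon Translate on a sample sentence; the text is made up for the example.

```python
import boto3

comprehend = boto3.client("comprehend")
translate = boto3.client("translate")

review = "The checkout was fast and the support team was wonderful."

# Ask Comprehend how positive or negative the text is.
sentiment = comprehend.detect_sentiment(Text=review, LanguageCode="en")
print(sentiment["Sentiment"], sentiment["SentimentScore"])

# Translate the same text into Spanish with Amazon Translate.
translated = translate.translate_text(
    Text=review,
    SourceLanguageCode="en",
    TargetLanguageCode="es",
)
print(translated["TranslatedText"])
```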
ML services in AWS
AWS offers distinct approaches to machine learning through its AI and ML services. While AI services leverage pre-built models for immediate use in applications, ML services provide tools for developers and data scientists to create, train, and deploy their own custom machine learning models. In this section we’ll delve into two significant AWS ML services: Amazon SageMaker and Amazon CodeWhisperer.
1.) Amazon SageMaker
Functionality: Amazon SageMaker is a fully managed service designed to facilitate the entire machine learning (ML) lifecycle, from building and training models to deploying them into production. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high-quality models. It is integrated with Jupyter notebooks and is built to handle swift training and deployment of ML models. It can be used for designing predictive analytics, computer vision, and natural language processing applications.
2.) Amazon CodeWhisperer
Amazon CodeWhisperer is a relatively newer service aimed at improving developer productivity with machine learning. It is a code generation service that helps developers by providing code recommendations based on their comments in real-time.
ML frameworks
AWS supports a wide range of machine learning (ML) and artificial intelligence (AI) initiatives by offering access to several key open-source frameworks. These frameworks are essential for customizing and managing ML workflows, making it easier for developers and data scientists to deploy robust, scalable, and efficient ML models.
AWS ML Frameworks
- TensorFlow: Developed by Google, TensorFlow is a versatile open-source library for numerical computation that makes machine learning faster and easier. It’s particularly strong in facilitating the development, training, and deployment of machine learning models across a variety of computing platforms.
- PyTorch: Created by Meta (formerly Facebook), PyTorch is popular for its ease of use and flexibility, especially in academic and research settings. It excels in automatic differentiation, making it useful for applications that require on-the-fly adjustments to neural networks.
- MXNet: Managed by the Apache Software Foundation, MXNet is renowned for its efficiency in training deep neural networks across distributed systems. This makes it ideal for environments where scalability and speed are crucial.
In the context of machine learning on AWS, a “pipeline” refers to a series of automated processes and workflows designed to manage the complete lifecycle of a machine learning model. This includes data collection, data preprocessing, model training, model evaluation, and deployment to production. The purpose of an ML pipeline is to ensure that the entire machine learning process is repeatable, scalable, and maintainable.
Sample ML Pipeline Using AWS
- Data Preparation: The process begins with the preparation of source data, which is stored in AWS S3, a scalable storage solution.
- Model Development and Training: Using Amazon SageMaker, developers can directly read data from S3 to develop and train ML models. SageMaker provides a powerful platform that simplifies the entire lifecycle of machine learning from model building and training to deployment.
- Deployment: Once the model is trained, the next step is packaging the trained model and its inference code into a container image. This container can then be deployed into a production environment using Amazon Elastic Kubernetes Service (EKS), which manages containerized applications.
- Continuous Integration: AWS Lambda can be configured to trigger re-training and redeployment whenever new data becomes available, ensuring the ML model stays up to date and performs optimally (a minimal sketch of such a trigger follows this list).
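Here is a minimal sketch of that Lambda trigger in Python with boto3; the SageMaker pipeline name and the S3 event shape assumed here are hypothetical.

import boto3

sagemaker = boto3.client("sagemaker")

# Sketch of a Lambda handler that kicks off re-training when new data lands in S3.
# "mobile-usage-retraining" is a hypothetical SageMaker pipeline name.
def lambda_handler(event, context):
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        print(f"New training data arrived: {key}")

    response = sagemaker.start_pipeline_execution(
        PipelineName="mobile-usage-retraining"
    )
    return {"pipelineExecutionArn": response["PipelineExecutionArn"]}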
***************************************************************************
CHAPTER 9
AWS Analytics & BI Services
AWS offers a comprehensive suite of Analytics and Business Intelligence (BI) services designed to help you make data-driven decisions efficiently.
Introduction to data analytics
Data analytics is the process of collecting, processing, and transforming large amounts of data to derive valuable insights. It’s an iterative process supporting continuous improvement.
In practice, AWS analytics services transform large volumes of raw data into actionable insights: data flows from collection and storage (databases and spreadsheets), through processing with AWS analytics tools, to outputs such as charts and graphs that support strategic decision-making.
In this section, we'll look at six AWS services that support data analytics workflows: Athena, QuickSight, Kinesis, Redshift, Macie, and Glue.
Amazon Athena
Athena is a serverless query service that makes it easy to analyze data directly in Amazon S3 using standard SQL. It is cost-effective because you pay only for the queries you run, and it seamlessly integrates with other AWS services for both machine learning and data visualization, such as Amazon QuickSight.
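For example, a query might be submitted like this with boto3; the database, table, and results bucket are placeholder names.

import boto3

athena = boto3.client("athena")

# Run a standard SQL query against data in S3.
response = athena.start_query_execution(
    QueryString="SELECT country, COUNT(*) FROM app_events GROUP BY country",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print(response["QueryExecutionId"])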
Amazon QuickSight
This is a scalable, serverless, embeddable, machine learning-powered BI service built for the cloud. QuickSight lets you create and publish interactive BI dashboards that include ML Insights. Its serverless architecture automatically scales to handle large datasets and high concurrency without any infrastructure management. It also offers generative BI capabilities: you can ask questions in natural language, and QuickSight responds by building visualizations as answers.
Amazon Kinesis
Kinesis makes it easy to collect, process, and analyze real-time streaming data, enabling developers to build applications that continuously ingest and process large streams of data records, which is ideal for time-sensitive information that must be processed quickly. Kinesis is serverless and is best suited for real-time use cases such as live leaderboards and IoT sensor feeds, and it can also speed up workloads traditionally handled as batch jobs.
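A minimal producer sketch with boto3; the stream name and event fields are placeholder assumptions.

import json
import boto3

kinesis = boto3.client("kinesis")

# Push a single event onto a stream; "app-usage-stream" is a placeholder name.
event = {"user_id": "42", "action": "screen_view", "screen": "home"}
kinesis.put_record(
    StreamName="app-usage-stream",
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["user_id"],
)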
Amazon Redshift
Amazon Redshift is a fully managed data warehousing service with massively parallel processing capabilities. AWS positions it as delivering strong performance at significantly lower cost than traditional data warehouses; it supports zero-ETL (Extract, Transform, Load) integrations for unified data access and enables secure data sharing and collaboration across teams. Redshift excels at data warehousing and advanced analytics, especially with large datasets, high query concurrency, and monetizing data-as-a-service products.
Amazon Macie
Macie is an AI-powered security service that automatically recognizes sensitive data such as personally identifiable information (PII) and provides dashboards and alerts that help you understand how this data is being accessed or moved.
AWS Glue
Glue is a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development. It provides both visual and code-based interfaces to make data integration and ETL processes seamless and efficient.
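As a sketch, an existing Glue ETL job could be started and checked like this with boto3; the job name is a placeholder.

import boto3

glue = boto3.client("glue")

# Kick off an existing Glue ETL job and check its status.
run = glue.start_job_run(JobName="prepare-usage-data")
status = glue.get_job_run(JobName="prepare-usage-data", RunId=run["JobRunId"])
print(status["JobRun"]["JobRunState"])  # e.g. "RUNNING"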
Creating an end-to-end data workflow
Now, let’s see how these analytics services work in action through a sample data analytics project in AWS. Imagine we are analyzing usage data from a mobile application.
- Data is ingested in real-time via Amazon Kinesis from sources like mobile applications and stored in S3.
- AWS Glue is used to transform and prepare data for analytics, moving it into services like Amazon Redshift and Athena for structured querying.
- Amazon Macie monitors data across S3, Redshift, and Athena to ensure sensitive information is protected.
- Analyzed data in Redshift might feed into machine learning models for advanced analytics.
- Results are visualized through Amazon QuickSight for interactive reporting and decision-making.
****************************************************************************
CHAPTER 10
Secondary AWS Service Categories
In this segment we’ll explore services that simplify your overall experience with AWS.
So far we have covered AWS services in global infrastructure, compute, storage, databases, networking, machine learning, and data analytics. These are the core offerings forming the foundation of cloud computing.
In addition to these, AWS offers a range of secondary services that complement and enhance the core functionalities, grouped here into application integration, business applications, developer tools, and advanced intelligence.
Each category includes specific services that target different aspects of enterprise and application needs.
Application Integration Services
Key services in this category include Amazon EventBridge, Amazon Simple Queue Service (SQS), and Amazon Simple Notification Service (SNS). These services act as orchestrators, ensuring seamless communication and data movement between applications, which is crucial for maintaining scalable, loosely coupled architectures.
Amazon EventBridge
Amazon EventBridge is a serverless event bus that receives events from AWS services, your own applications, and external providers and routes them to the targets you choose. This lets components react to what happens elsewhere without explicit dependencies on one another, and it gives you a central place to observe and act on activity across your AWS environment.
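A minimal sketch of publishing a custom event with boto3; the source, detail-type, and payload are arbitrary examples, and the default event bus is assumed.

import json
import boto3

events = boto3.client("events")

# Publish a custom event to the default event bus.
events.put_events(
    Entries=[
        {
            "Source": "myapp.orders",
            "DetailType": "OrderPlaced",
            "Detail": json.dumps({"orderId": "1234", "amount": 42.50}),
        }
    ]
)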
Amazon Simple Queue Service (SQS)
Amazon SQS provides reliable messaging between software components and scales to virtually any throughput. It is ideal for communication between microservices, for example buffering GPS sensor readings that update a live map, and for decoupling components so that background work can be processed independently.
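A minimal producer/consumer sketch with boto3; the queue URL and message payload are placeholders.

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/gps-updates"  # placeholder

# Producer: enqueue a message.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"lat": 1.29, "lon": 36.82}')

# Consumer: poll for messages and delete each one once processed.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=5)
for msg in messages.get("Messages", []):
    print(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])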
Amazon Simple Notification Service (SNS)
SNS is a fully managed messaging service for both application-to-application (A2A) and application-to-person (A2P) communication. The service facilitates the sending of notifications to end-users using mobile push, SMS, and email, ensuring effective communication across different platforms.
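A minimal sketch of publishing to an SNS topic with boto3; the topic ARN and message are placeholders. Subscribers to the topic (email addresses, SMS numbers, queues, or Lambda functions) would receive the notification.

import boto3

sns = boto3.client("sns")

# Publish a notification to a topic.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:order-alerts",
    Subject="Order shipped",
    Message="Your order #1234 has shipped.",
)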
Business Application Services
Business application services streamline operations, increase efficiency, and support automation by integrating with other AWS services. Services like Amazon Connect and Amazon Simple Email Service (SES) improve customer engagement through scalable contact center solutions and efficient email sending capabilities.
Amazon Connect is a cloud-based contact center offering easy setup and management. It scales customer support operations during peak times, ensuring efficient support with AI-driven features for personalized interactions and improved customer experiences.
Amazon SES is a scalable, cost-effective email service for marketing and transactional messages. It handles bulk email efficiently and integrates easily with applications for transactional messages such as password resets or order confirmations, providing reliable delivery and tracking capabilities.
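A minimal sketch of sending a transactional email with boto3; the addresses are placeholders and would need to be verified in SES (or the account moved out of the SES sandbox) before the call succeeds.

import boto3

ses = boto3.client("ses")

# Send a simple transactional email.
ses.send_email(
    Source="no-reply@example.com",
    Destination={"ToAddresses": ["customer@example.com"]},
    Message={
        "Subject": {"Data": "Password reset"},
        "Body": {"Text": {"Data": "Click the link below to reset your password."}},
    },
)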
Developer Services
AWS enhances developer productivity with services like AWS CodePipeline and AWS CodeCommit. These services support continuous integration and continuous delivery (CI/CD) practices, streamline code deployments, and facilitate collaboration across development teams.
AWS CodePipeline is a continuous delivery service that automates the software release process, enabling developers to rapidly and reliably deliver features and updates. By using JSON templates, CodePipeline allows for flexible and easy modifications of the delivery pipelines, accommodating changes in the infrastructure or application workflow seamlessly. This service is designed to integrate with other AWS services, enhancing its capability to handle complex development workflows from code commit to deployment.
AWS CodeCommit is a fully managed source control service that provides private Git repositories within the AWS cloud. It’s designed to enhance the collaboration experience for developers by providing a secure and highly scalable ecosystem for code management.
Advanced Intelligence Services
AWS is pushing beyond traditional computing with services like AWS IoT Core and Amazon Braket, which connect physical devices directly to the cloud through the Internet of Things or open the door to quantum computing applications.
IoT Core enables the connection of IoT devices to the cloud for better management and integration; it is widely used for predictive maintenance and quality monitoring in industry, and it also automates home IoT devices such as thermostats and smart TVs. Braket provides tools and managed environments for experimenting with quantum algorithms and for managing quantum projects from code building through running and analysis, and it also supports automated testing for quantum applications.
A special thank you to Ujuzifursa, Kevin Tuei, and DataCamp for their invaluable support and resources.