How to Properly Architect Your AWS Cloud Environment

Petar Marinković
Jatheon Technologies Inc
6 min read · Nov 18, 2019

If you have been following the Jatheon blog over the last couple of months, you may have noticed several blog posts explaining AWS core concepts and its most important services, such as IAM, VPC, S3 and EC2.

These introductory blog posts are an excellent first step for any company that’s starting its cloud journey, as well as for those with a light cloud presence that are looking to do more and really embrace the power AWS can offer.

Now it’s time to go a step further. In today’s blog post, you will get acquainted with the AWS Well-Architected Framework and learn best practices for designing and maintaining a reliable, scalable, secure and cost-effective cloud environment for your company’s needs.

What Is AWS Well-Architected Framework?

The AWS Well-Architected Framework is a set of five pillars that serve as the foundation of your AWS cloud environment. You can think of them as a blueprint for your workloads on Amazon’s public cloud. These five pillars are:

Operational Excellence

The goal of the Operational Excellence pillar is to give you guidelines and best practices on how to run and maintain your AWS workloads so that they support your business needs while you keep improving continuously. There are six main design principles within these guidelines:

  • Perform operations as code (see the sketch after this list).
  • Annotate documentation.
  • Make frequent, small, reversible changes.
  • Refine operations procedures frequently.
  • Anticipate failure.
  • Learn from all operational failures.
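
To make “perform operations as code” a bit more concrete, here is a minimal sketch in Python with boto3 that provisions infrastructure from a version-controlled CloudFormation template instead of through manual console changes. The stack name, template file and tag values are placeholders for illustration, not anything the framework prescribes.

```python
# Minimal sketch: "perform operations as code" by provisioning infrastructure
# from a version-controlled CloudFormation template instead of console clicks.
# Assumes AWS credentials are configured and "network-baseline.yaml" is a valid
# template kept in your repository (both are placeholders for this example).
import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

with open("network-baseline.yaml") as template_file:
    template_body = template_file.read()

response = cloudformation.create_stack(
    StackName="network-baseline",            # hypothetical stack name
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],    # only needed if the template creates IAM resources
    Tags=[{"Key": "environment", "Value": "staging"}],
)

# Wait until the stack finishes, so the change is applied (and auditable) end to end.
waiter = cloudformation.get_waiter("stack_create_complete")
waiter.wait(StackName=response["StackId"])
print("Stack created:", response["StackId"])
```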

The six design principles we’ve presented here are just an overview of this pillar. To really get acquainted with it, we suggest you study this whitepaper in detail.

Security

In our two-part blog series on cloud security, we covered most of the concepts from the AWS Well-Architected Framework’s Security pillar. Here we will mention only what AWS emphasizes as the six main design principles:

  • Implement a strong identity foundation. Rely on IAM and on external identity providers such as Active Directory or LDAP, and follow IAM best practices.
  • Enable traceability. Configure and use services such as CloudTrail, AWS Config, GuardDuty and Artifact.
  • Apply security at all layers. Follow VPC best practices, enforce network ACLs and security groups, and enable VPC Flow Logs (network protection at OSI layers 3 and 4). Protect the application layer with AWS WAF and with DAST, SAST and similar scanning tools.
  • Automate security best practices. This principle relies heavily on operational excellence and on infrastructure-as-code tools, so that all changes are automated, audited and easily reversible.
  • Protect data both in transit and at rest. Encrypt everything! (See the sketch after this list for a minimal example.)
  • Prepare for security events. Establish a testing plan for all security mechanisms, perform regular audits, and rotate your encryption keys and IAM credentials.
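
As a small, hedged illustration of the “protect data in transit and at rest” principle, the sketch below uses Python with boto3 to turn on default SSE-KMS encryption for an S3 bucket and to attach a bucket policy that rejects non-TLS requests. The bucket name and KMS key ARN are placeholders you would replace with your own.

```python
# Minimal sketch: encrypt at rest (default SSE-KMS on a bucket) and in transit
# (deny any request that doesn't arrive over TLS). Bucket name and key ARN are
# placeholders for this example.
import json
import boto3

s3 = boto3.client("s3")

BUCKET = "example-archive-bucket"                              # hypothetical bucket
KMS_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE" # hypothetical key

# Encryption at rest: every new object is encrypted with the given KMS key by default.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": KMS_KEY_ARN,
            }}
        ]
    },
)

# Encryption in transit: deny any access that is not made over HTTPS.
tls_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(tls_only_policy))
```

The same idea extends to EBS volumes, RDS instances and other data stores, each of which has its own encryption switch.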

Remember, before you deploy any workload on AWS, even those in testing or staging environments, you must first ensure security, and set up proper auditing and reporting mechanisms.

And it’s not enough just to have the tools up and running: a DevOps or security team should regularly review their findings. A good piece of reading we recommend here is the Security pillar whitepaper.

Reliability

AWS describes the Reliability pillar as the ability of a cloud system to recover from failures and disruptions. In a nutshell, it’s a set of five main design principles for making your services fault-tolerant to disasters caused either by your own mistakes or by downtime in the AWS global infrastructure. These are:

  • Test recovery procedures.
  • Automatically recover from failure (see the sketch after this list).
  • Scale horizontally to increase aggregate system availability.
  • Stop guessing capacity.
  • Manage change in automation.
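
To show what “automatically recover from failure” and “scale horizontally” can look like in practice, here is a minimal sketch (Python with boto3) that creates an Auto Scaling group spread across two Availability Zones and lets the load balancer’s health checks trigger instance replacement. The launch template ID, subnet IDs and target group ARN are placeholders.

```python
# Minimal sketch: a multi-AZ Auto Scaling group that replaces unhealthy instances
# automatically. Launch template, subnets and target group ARN are placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-tier",                      # hypothetical name
    LaunchTemplate={
        "LaunchTemplateId": "lt-0123456789abcdef0",       # placeholder
        "Version": "$Latest",
    },
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    # Two subnets in different Availability Zones -> the group survives an AZ outage.
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # placeholders
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/0123456789abcdef"
    ],
    # Use the load balancer's health checks so failed instances are replaced, not just rebooted.
    HealthCheckType="ELB",
    HealthCheckGracePeriod=120,
)
```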

The Reliability pillar is interconnected on many levels with the Operational Excellence pillar and you can master most of its principles by studying the latter. Still, we recommend spending some time studying the Reliability pillar whitepaper as well.

Performance Efficiency

If you follow the pillars of the Well-Architected Framework in the order given, it’s easy to see that you first need to design your system to be reliable, secure and operational at all times before deploying your workloads.

But this is not enough.

If your service doesn’t have enough resources to provide the required functionality to its users, the fact that it’s fault-tolerant and secure from breaches won’t help. That’s why Amazon introduced the Performance Efficiency pillar into the framework.

Choosing the right services and configurations isn’t easy, but if you carry out a proper internal review of your company’s needs, you can better understand what you actually require from AWS services and then use them properly while consulting the AWS documentation. The pillar comes with the following five design principles:

  • Democratize advanced technologies. Consume complex technologies such as NoSQL databases, media transcoding or machine learning as managed services instead of building and running them yourself.
  • Go global in minutes. Use regions and Availability Zones around the world to provide a global service with fast responses and low latency wherever your customers need it. Also remember to leverage managed services such as CloudFront, Route 53 and many others, which let you go global in a couple of minutes with just a few API calls (or clicks in the web console, even though that’s not something AWS recommends). A minimal example follows this list.
  • Use serverless architectures. If you’re not using serverless services yet, consider doing so. Serverless isn’t just Lambda compute; there are also serverless databases such as Aurora Serverless and DynamoDB, and serverless container hosting such as AWS Fargate, which eliminates the need to host and maintain your own Docker clusters.
  • Experiment more often. But when you do experiment with new services and technologies, don’t do it in your production environment 🙂 Have a separate AWS account, or at least a dedicated VPC, for testing purposes.
  • Mechanical sympathy. This guideline is tied to the previous point: experiment as much as you can, but stick to technologies that are in line with your business. If your business relies on an old, monolithic application, don’t try to break it apart by forcing microservices and containers on it, as that simply won’t work. It’s better to find a way to host it on EC2 with auto scaling and load balancing in a multi-AZ setup.
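
As a small illustration of “go global in minutes”, the sketch below (Python with boto3) creates latency-based Route 53 records so that each user is routed to the regional endpoint that answers fastest. The hosted zone ID, domain name and IP addresses are placeholders.

```python
# Minimal sketch: latency-based routing in Route 53, so each user is served from
# the closest (lowest-latency) regional endpoint. Zone ID, domain and IPs are placeholders.
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0123456789EXAMPLE"   # hypothetical hosted zone
DOMAIN = "app.example.com"              # hypothetical domain

endpoints = {
    "us-east-1": "203.0.113.10",        # documentation-range IPs used as placeholders
    "eu-west-1": "203.0.113.20",
}

changes = [
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": DOMAIN,
            "Type": "A",
            "SetIdentifier": region,    # one record per region
            "Region": region,           # enables latency-based routing
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }
    for region, ip in endpoints.items()
]

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Comment": "Latency-based routing example", "Changes": changes},
)
```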

For an overview of compute, storage, network and database AWS services, and a take on performance testing and fault tolerance, check out the AWS Performance Efficiency whitepaper.

Cost Optimization

The Cost Optimization pillar focuses on avoiding unnecessary costs in the cloud. While this pillar won’t affect how your cloud workloads function on a day-to-day basis, it’s imperative for any business to run optimized operations in order to stay sustainable. Amazon defines five design principles for this pillar:

  • Adopt a consumption model and pay only for the resources you actually use.
  • Measure overall efficiency.
  • Stop spending money on data center operations.
  • Analyze and attribute expenditure (a minimal example follows this list).
  • Use managed services to reduce the cost of ownership.
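
To make “analyze and attribute expenditure” a little more tangible, here is a minimal sketch (Python with boto3) that pulls one month of unblended cost from Cost Explorer, grouped by a cost-allocation tag. Cost Explorer has to be enabled in the account, and the “team” tag key and date range are assumptions for this example.

```python
# Minimal sketch: attribute a month's spend to teams via a cost-allocation tag.
# Assumes Cost Explorer is enabled; the "team" tag key and dates are placeholders.
import boto3

cost_explorer = boto3.client("ce", region_name="us-east-1")

response = cost_explorer.get_cost_and_usage(
    TimePeriod={"Start": "2019-10-01", "End": "2019-11-01"},   # placeholder month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "team"}],                  # hypothetical tag key
)

for period in response["ResultsByTime"]:
    for group in period["Groups"]:
        tag_value = group["Keys"][0]            # e.g. "team$archiving"
        amount = group["Metrics"]["UnblendedCost"]["Amount"]
        print(f"{tag_value}: ${float(amount):.2f}")
```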

Besides security, costs are the topic that raises the most concerns when it comes to migrating to the cloud, so we will dedicate a separate article to cloud costs in a couple of weeks. In the meantime, be sure to check out the Cost Optimization whitepaper.

Which Tools Accompany the AWS Well-Architected Framework?

You’ve probably realized while reading the last few paragraphs that implementing all of these guidelines, best practices and recommendations is quite a daunting task. It will take you quite some time to put all of your cloud blueprints down on paper and follow all five pillars.

And that’s true: designing a cloud environment while considering all the guidelines will probably take you and your cloud team a couple of months. But how do you know when you’re ready, and how do you verify your designs? Luckily, Amazon can help you with that as well.

The AWS Well-Architected Tool is available as a service inside your account. With just a couple of clicks, you can submit your workloads to the tool, answer a set of questions and get a report on your environment. Before using it, check the region availability, since the tool is not available in all regions, but it should be present in the most popular ones in the US and Europe. Besides the end result, the tool will also point you to videos and documentation on how to strengthen your weak points and improve your overall results across all five pillars.

Besides the Well-Architected Tool, AWS customers can also rely on the AWS Solutions Portfolio, a knowledge base of vetted AWS reference diagrams and implementations used by many technology leaders across the globe.

Each configuration in the AWS Solutions Portfolio has been verified by Amazon-employed AWS solutions architects, and the implementations and diagrams are updated on a regular basis. You can browse the solutions either by technology category (such as databases, ML/AI or containers/microservices) or by industry.

Last but not least, if you’re really in doubt, you can always reach out to companies in your area from the AWS Partner Network, managed service providers that can help you on your cloud journey and act as practical advisors while you lay your foundations following the principles of the Well-Architected pillars.

In our next blog post, we’ll be looking at the costs associated with moving to the cloud. Stay tuned over the following weeks.

Jatheon Cloud is a fourth-generation, cloud-based email archiving platform that runs entirely on AWS. To learn more about Jatheon’s cloud archive, contact us.

Originally published at https://jatheon.com on November 18, 2019.
