Published in Geek Culture

How many AWS Accounts do I need? — Part 2

In the previous post, I talked at a high level about how many AWS accounts you should start with. In this post, I will go into the next level of detail and show a possible evolution of your AWS Organization, where you add more accounts as your architecture and infrastructure evolve with new business requirements around growth, availability, data residency, and disaster recovery.

TL;DR — the following diagram sums it up.

Baseline Setup

First, create the AWS management account, and use AWS Control Tower to set up new accounts for security, logging, and shared infrastructure. AWS Control Tower also gives you a great starting point by configuring AWS SSO, Service Control Policies, and guardrails that restrict certain changes, and by setting up the Security and Logging accounts following best practices. Then add a Sandbox account, which can be used for proofs of concept and for trying out new AWS services and features.
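Under the hood, a guardrail of the "restrict certain changes" kind is a Service Control Policy. As a sketch (the region list and the set of exempted global services are illustrative, not a recommendation), a region-restriction SCP can be built as a plain document like this:

```python
import json

# Sketch of a Service Control Policy (SCP) that denies requests outside
# approved regions, similar in spirit to a Control Tower region guardrail.
# APPROVED_REGIONS is a placeholder; global services (IAM, STS, etc.) are
# exempted because they are not region-scoped in the same way.
APPROVED_REGIONS = ["eu-west-1", "eu-central-1"]

def region_deny_scp(approved_regions):
    """Build an SCP document that denies requests outside approved regions."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyOutsideApprovedRegions",
                "Effect": "Deny",
                "NotAction": [
                    # Global services that must stay reachable.
                    "iam:*",
                    "organizations:*",
                    "sts:*",
                    "cloudfront:*",
                    "route53:*",
                    "support:*",
                ],
                "Resource": "*",
                "Condition": {
                    "StringNotEquals": {"aws:RequestedRegion": approved_regions}
                },
            }
        ],
    }

print(json.dumps(region_deny_scp(APPROVED_REGIONS), indent=2))
```

Attached at the organization root (or an OU), a policy like this stops member accounts from creating resources in unapproved regions while leaving global services usable.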

Application Accounts

Once you have the baseline setup with a Sandbox account, and you have evolved the design and initial implementation of a solution, it is time to deploy it to different environments in a separate set of accounts for that application. If you have many application environments to deploy, you can combine a few of them into one account; however, it is best to group them by use case into “Development”, “Testing/Preproduction” (any environment that is neither development nor production), and “Production”.
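The grouping rule above can be sketched as a small helper; the environment names here are illustrative, not prescriptive:

```python
# Sketch of the environment-to-account-group rule described above:
# anything that is neither development nor production lands in the
# "testing/preproduction" group. Environment names are illustrative.
ACCOUNT_GROUPS = {
    "development": {"dev", "sandbox-dev"},
    "production": {"prod"},
}

def account_group(environment: str) -> str:
    """Map an environment name to one of the three account groups."""
    env = environment.lower()
    for group, members in ACCOUNT_GROUPS.items():
        if env in members:
            return group
    # Everything else (QA, staging, UAT, perf, ...) shares the
    # testing/preproduction group of accounts.
    return "testing/preproduction"

for env in ["dev", "qa", "staging", "prod"]:
    print(env, "->", account_group(env))
```

The point of the catch-all branch is exactly the one made above: you do not need one account per environment, only one account per group of environments with the same blast radius and access profile.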

Here, a group of application accounts, one per environment (or group of environments), is created for each application or service, or for a group of applications/services/domains that need to work cohesively. Be cautious: do not use your company’s organizational boundaries as the only factor in deciding which workload (application or service) is deployed in which account. The best approach is to look at your overall system architecture, identify the components (workloads) and how cohesive they are, and use that to decide how many application/domain/service accounts you need.

A Network account plays the role of connecting multiple application accounts (and other accounts) together, using services like AWS Transit Gateway (to connect multiple VPCs centrally), AWS Cloud WAN, and Site-to-Site VPN (to connect the AWS Cloud with your corporate network), and it is the natural place to set centralized firewall rules.
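The hub-and-spoke wiring a Network account provides can be pictured with a toy model (VPC names and CIDRs are invented): every spoke VPC attached to the central Transit Gateway needs a route to every other spoke's CIDR via that attachment.

```python
from itertools import permutations

# Toy model of hub-and-spoke routing through a central Transit Gateway
# owned by the Network account. VPC names and CIDRs are invented.
SPOKE_VPCS = {
    "app-dev": "10.0.0.0/16",
    "app-prod": "10.1.0.0/16",
    "data-central": "10.2.0.0/16",
}

def tgw_routes(spokes):
    """Return (source_vpc, destination_cidr) pairs: each spoke routes to
    every other spoke's CIDR via its Transit Gateway attachment."""
    return [(src, spokes[dst]) for src, dst in permutations(spokes, 2)]

for src, cidr in tgw_routes(SPOKE_VPCS):
    print(f"{src}: route {cidr} -> tgw attachment")
```

With N spoke VPCs this is N×(N-1) routes managed in one place, instead of N×(N-1)/2 VPC peering connections negotiated pairwise between account owners, which is the main operational win of centralizing networking in its own account.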

As your business, requirements, and architecture evolve and grow, you will need more accounts.

Data Lakehouse

A Lakehouse is not simply about integrating a data lake with a data warehouse; it is about integrating a data lake, a data warehouse, and purpose-built stores, enabling unified governance and easy data movement.

With the latest capabilities and the tight integration of storage and serverless processing (AWS Glue jobs, Athena, Redshift Serverless) over data stored in object storage (Amazon S3), plus shared metadata (Glue tables and schemas), you can set up your data, formats, and access so that the data lake and data warehouse integrate seamlessly, resulting in a Lakehouse.

Data is vital for any company: to understand usage, to provide features like personalization and recommendations, and to make data-driven decisions for the product and the business as a whole. Introducing separate accounts for ingesting, processing, organizing, and managing this data centrally is a good idea. Each application will have a smaller version of the Lakehouse in its own account, but it is important to expose that data and make it accessible in the central accounts as well.
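One way to expose an application account's data centrally is a resource policy on its Glue Data Catalog granting the central data account read access. A hedged sketch follows; the account IDs and region are placeholders, and a real setup would likely scope the resource ARNs to specific databases and tables (or use Lake Formation) rather than the catalog-wide wildcard used here:

```python
import json

# Sketch of a Glue Data Catalog resource policy letting a central
# data/analytics account read an application account's catalog, so the
# application's "smaller Lakehouse" is visible centrally.
# Account IDs and region are placeholders.
APP_ACCOUNT = "111111111111"
CENTRAL_DATA_ACCOUNT = "222222222222"
REGION = "eu-west-1"

def catalog_share_policy(app_account, central_account, region):
    """Grant the central data account read-only access to this
    application account's Glue Data Catalog."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{central_account}:root"},
                "Action": ["glue:Get*", "glue:BatchGet*"],
                "Resource": f"arn:aws:glue:{region}:{app_account}:*",
            }
        ],
    }

print(json.dumps(
    catalog_share_policy(APP_ACCOUNT, CENTRAL_DATA_ACCOUNT, REGION), indent=2
))
```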

International Market

Expanding into international markets will require you to expand your workload to multiple regions of your cloud provider. AWS has regions on six continents (as of this writing, all except Antarctica, which makes no business sense given its population), and in some cases multiple regions per country, separated by sizable distances and serving large populations.

As you grow into international markets and start offering your software and services there, you will have to comply with data residency requirements, and you will face increased latency between clients/users and servers/services as distance grows. Both force you to deploy, store, and process your infrastructure (both applications/services and data) in multiple regions. You can leverage the same set of accounts, selecting a different region when deploying your application/services software and infrastructure, and extend your Continuous Deployment pipelines to roll out changes across multiple regions.
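Extending the same pipeline across regions can be modeled as fanning each environment stage out over a region list; the stage names and regions below are illustrative:

```python
# Toy expansion of a deployment pipeline across regions: the same set
# of accounts is reused, and each environment stage fans out into one
# deployment per region. Stage names and regions are illustrative.
STAGES = ["development", "preproduction", "production"]
REGIONS_BY_STAGE = {
    "development": ["eu-west-1"],    # non-prod stays in one home region
    "preproduction": ["eu-west-1"],
    "production": ["eu-west-1", "ap-south-1", "us-east-1"],  # user-facing
}

def deployment_targets(stages, regions_by_stage):
    """Return the ordered (stage, region) pairs the pipeline deploys to."""
    return [
        (stage, region)
        for stage in stages
        for region in regions_by_stage[stage]
    ]

for stage, region in deployment_targets(STAGES, REGIONS_BY_STAGE):
    print(f"deploy {stage} -> {region}")
```

Note that the account boundary does not change: each (stage, region) pair lands in the stage's existing account, just in a different region, which is exactly the "same accounts, more regions" approach described above.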

Disaster Recovery

Depending on how you leverage multi-region deployments, where in most cases users are assigned to regions within the borders of their country (and, within the country, closer to their location), you need to prepare for a disaster hitting a single region, and for possible data loss if the region you are using goes down or experiences a long-lasting outage. Your business requirements for Recovery Time Objective (RTO) and Recovery Point Objective (RPO) will play a significant role in designing, implementing, and executing failover to a different region. At a minimum, you should take regular backups of your data to a different region (where possible and needed, keeping data residency compliance in mind). It is better still to keep those backups in a separate account with very restricted access (an automation process and a minimum set of users), so that your backup data and account stay safe even if a specific account is hacked or taken over, allowing you to use the data from the separate account to execute failover.
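The cross-region, cross-account backup rule above can be pictured as an AWS Backup-style plan document: a daily rule in the home region with a copy action targeting a vault in the locked-down backup account in another region. This is a sketch, not a recommendation: the vault names, account ID, schedule, and retention days are all placeholders.

```python
import json

# Sketch of an AWS Backup-style plan: daily backups in the home region,
# with each recovery point copied to a vault in a separate, locked-down
# backup account in another region. Account ID, vault names, schedule,
# and retention are placeholders.
BACKUP_ACCOUNT = "333333333333"
DR_REGION = "eu-central-1"

def backup_plan(backup_account, dr_region):
    """Build a backup-plan document with a cross-account, cross-region copy."""
    dr_vault_arn = (
        f"arn:aws:backup:{dr_region}:{backup_account}:backup-vault:dr-vault"
    )
    return {
        "BackupPlanName": "daily-with-dr-copy",
        "Rules": [
            {
                "RuleName": "daily",
                "TargetBackupVaultName": "primary-vault",
                "ScheduleExpression": "cron(0 3 * * ? *)",  # 03:00 UTC daily
                "Lifecycle": {"DeleteAfterDays": 35},
                "CopyActions": [
                    {
                        "DestinationBackupVaultArn": dr_vault_arn,
                        "Lifecycle": {"DeleteAfterDays": 35},
                    }
                ],
            }
        ],
    }

print(json.dumps(backup_plan(BACKUP_ACCOUNT, DR_REGION), indent=2))
```

The key design point is that the destination vault lives in a different account and region than the workload: a compromise of the application account cannot delete the copies, and a regional outage cannot take both the workload and its backups offline.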

As always, please let me know if I am missing anything worth adding to the points above that you have found useful in your own setup (for isolation, scale, organization/management, flexibility, or cost).



TA Eng

Open to Consulting — Software Engineer, Architect, AWS Cloud, Digital Transformation, DevOps, Big Data, Machine Learning