AWS Control Tower Customization with CloudFormation and CodePipeline

Oleksii Bebych
11 min read · Jul 17, 2024


In the previous four posts, I explained the AWS Landing Zone in general, the structure of an AWS organization, and basic security configurations:

Building a Landing zone with AWS Control Tower (part 1)
Building a Landing zone with AWS Control Tower (part 2)
Building a Landing zone with AWS Control Tower (part 3)
Control Tower Guardrails overview (Preventive, Detective, and Proactive)

Now it’s time to show more advanced things. The basics are pretty much the same for all companies, but when it comes to company-specific requirements, we need a good tool to customize the landing zone to the customer’s needs.

Customizations for AWS Control Tower (CfCT) helps you customize your AWS Control Tower landing zone and stay aligned with AWS best practices. Customizations are implemented with AWS CloudFormation templates and service control policies (SCPs).

This CfCT capability is integrated with AWS Control Tower lifecycle events, so that your resource deployments remain synchronized with your landing zone. For example, when a new account is created through account factory, all resources attached to the account are deployed automatically. You can deploy the custom templates and policies to individual accounts and organizational units (OUs) within your organization.

Deploying CfCT builds the following environment in the AWS Cloud.

Solution deployment

Fortunately, AWS has already developed this solution and actively supports it, so we can simply take it and use it.

Deployment is simple. Using this template, create a CloudFormation stack in the Control Tower Management account (in the home AWS Region).

In this example, I use the following parameters:

  • Pipeline Approval Stage — Yes

I don’t want to make changes to the Production infrastructure without review and manual approval.

  • Pipeline Approval Email Address — email where I will receive notification about pending approval
  • AWS CodePipeline Source — can be S3 or CodeCommit. I chose CodeCommit and will use Git for code changes.
  • Existing CodeCommit Repository? — No; a new CodeCommit repository will be created in this case.
  • CodeCommit Repository Name — whatever you like
  • CodeCommit Branch Name — whatever you like, the default is “main”
  • Region Concurrency Type — PARALLEL, the faster option for us. The alternative, SEQUENTIAL, gives more control.
  • Max Concurrent Percentage — 100 (default value); the maximum percentage of accounts in which to perform this operation at one time.
  • Failure Tolerance Percentage — 10 (default value); the percentage of accounts, per Region, for which this stack operation can fail before AWS CloudFormation stops the operation in that Region.

All the above parameters will be much clearer when we start using the pipeline. Now, let’s complete the stack deployment.
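If you prefer scripting the deployment, here is a minimal AWS CLI sketch of the same stack creation. The template URL and parameter keys are assumptions based on the public CfCT distribution, so verify them against the current documentation before running:

# Create the CfCT stack in the management account (home Region).
# Template URL and parameter keys are assumptions; verify before use.
aws cloudformation create-stack \
  --stack-name customizations-for-aws-control-tower \
  --template-url https://s3.amazonaws.com/solutions-reference/customizations-for-aws-control-tower/latest/custom-control-tower-initiation.template \
  --parameters \
    ParameterKey=PipelineApprovalStage,ParameterValue=Yes \
    ParameterKey=PipelineApprovalEmail,ParameterValue=you@example.com \
    "ParameterKey=CodePipelineSource,ParameterValue=AWS CodeCommit" \
  --capabilities CAPABILITY_NAMED_IAM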

It takes about 3 minutes to complete:

And you will get a Subscription Confirmation email from the SNS topic, which will be used for the Manual Approval step in the pipeline:

Let’s check what we have. First of all, it’s a new CodeCommit repository.

The Git repository contains the following structure:

  • manifest.yaml describes your AWS resources: which SCPs and CloudFormation templates to deploy, and the deployment targets (accounts, organizational units).
  • The templates folder contains the CloudFormation templates.
  • The policies folder contains the JSON files with SCPs.
  • The parameters folder contains inputs for the CloudFormation templates (optional).

- manifest.yaml
- policies/
  - service control policy files (*.json)
- templates/
  - template files for AWS CloudFormation resources

And a CodePipeline pipeline, which started immediately. But don’t worry: the Git repo does not contain any real code yet, so the pipeline will not deploy anything significant.

The pipeline contains 5 stages:

  1. Source stage: checks out code changes.
  2. Build stage: uses AWS CodeBuild to validate the contents of the configuration package. These checks include testing the manifest.yaml file syntax and schema, along with all AWS CloudFormation templates included in the package or remotely hosted, using AWS CloudFormation validate-template and cfn_nag. If the manifest file and the templates pass the tests, the pipeline continues to the next stage. If the tests fail, you can review the CodeBuild logs to identify the issue and edit the configuration source files as needed.
  3. Manual approval stage (optional): we enabled this stage in the initial deployment above. It provides additional control over the configuration pipeline, pausing the deployment until an approval is given.
  4. ServiceControlPolicy stage: invokes the service control policy state machine, which calls AWS Organizations APIs to create service control policies (SCPs).
  5. CloudformationResource stage: invokes the stack set state machine to deploy the resources to the accounts or organizational units (OUs) you provided in the manifest file. The state machine creates the AWS CloudFormation resources in the order they are specified in the manifest file, unless a resource dependency is specified.
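If you prefer the CLI to the console, you can watch a run’s stages with a command like the one below. The pipeline name is an assumption (CfCT typically names it Custom-Control-Tower-CodePipeline); verify the actual name in the CodePipeline console first.

# Show the latest status of every stage in the CfCT pipeline.
# The pipeline name below is an assumption; check yours in the console.
aws codepipeline get-pipeline-state \
  --name Custom-Control-Tower-CodePipeline \
  --query 'stageStates[].{stage:stageName,status:latestExecution.status}'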

How to use

Let’s deploy the following:

  1. An S3 bucket for VPC Flow Logs into the Log Archive account (outputs the bucket name)
  2. An IPAM pool into the Networking account (outputs a shared pool ID)
  3. A VPC into the Prod account (uses the previously deployed IPAM pool for CIDR allocation and the S3 bucket as a target for VPC Flow Logs)
  4. A Service Control Policy that restricts disabling VPC Flow Logs, applied to the whole AWS Organization

The following code was committed:

manifest.yaml

---
region: eu-west-2
version: 2021-03-15

resources:
  ## SCP: protect VPC Flow Logs ##
  - name: restrict-modification-vpc-flow-logs
    description: To prevent modification of VPC Flow Logs
    resource_file: policies/protect-flow-logs.json
    deploy_method: scp
    deployment_targets:
      organizational_units:
        - Root

  ## S3 for VPC Flow Logs ##
  - name: s3-flow-logs
    resource_file: templates/logging/s3-flow-logs.yaml
    deploy_method: stack_set
    deployment_targets:
      accounts:
        - 444091872385 # Log Archive account
    export_outputs:
      - name: /logs/vpc-flow-logs/s3-bucket-arn
        value: $[output_BucketArn]
      - name: /logs/vpc-flow-logs/s3-bucket-name
        value: $[output_BucketName]
    regions:
      - us-east-1
    parameters:
      - parameter_key: BucketNamePrefix
        parameter_value: aws-vpc-flow-logs

  ## IPAM ##
  - name: ipam
    resource_file: templates/networking/ipam.yaml
    deploy_method: stack_set
    deployment_targets:
      accounts:
        - 240724570208 # Network account
    regions:
      - us-east-1
    parameters:
      - parameter_key: Region
        parameter_value: us-east-1
      - parameter_key: TopLevelCidr
        parameter_value: 10.0.0.0/9
      - parameter_key: RegionLevelCidr
        parameter_value: 10.0.0.0/10
      - parameter_key: ProdLevelCidr
        parameter_value: 10.0.0.0/14
      - parameter_key: ProdOuArn
        parameter_value: arn:aws:organizations::444629336067:ou/o-mmrpss74fy/ou-2ure-kyl7gx7p
    export_outputs:
      - name: /network/ipam/prod-pool-id
        value: $[output_ProdPoolID]

  ## Prod VPC ##
  - name: demo-vpc
    resource_file: templates/networking/vpc.yaml
    deploy_method: stack_set
    deployment_targets:
      accounts:
        - 590183808992 # Prod account
    regions:
      - us-east-1
    parameters:
      - parameter_key: IMAPPoolID
        parameter_value: $[alfred_ssm_/network/ipam/prod-pool-id]
      - parameter_key: VpcCidrSize
        parameter_value: 18
      - parameter_key: ExternalLogBucket
        parameter_value: $[alfred_ssm_/logs/vpc-flow-logs/s3-bucket-name]

protect-flow-logs.json

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "ec2:DeleteFlowLogs"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotLike": {
          "aws:PrincipalARN": [
            "arn:aws:iam::*:role/AWSControlTowerExecution"
          ]
        }
      }
    }
  ]
}

The IAM role AWSControlTowerExecution is intentionally excluded because it is used by the Control Tower automation to deploy and remove resources.

s3-flow-logs.yaml

AWSTemplateFormatVersion: '2010-09-09'
Description: 'Create S3 bucket for VPC flow logs'

Parameters:
  BucketNamePrefix:
    Description: 'Name prefix of the S3 bucket for VPC flow logs delivery'
    Type: String
    Default: ''

Resources:
  LogBucket:
    Type: 'AWS::S3::Bucket'
    DeletionPolicy: Retain
    Properties:
      BucketName: !Sub "${BucketNamePrefix}-${AWS::AccountId}-${AWS::Region}"
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
      VersioningConfiguration:
        Status: Enabled
      LifecycleConfiguration:
        Rules:
          - Id: RetentionRule
            Status: Enabled
            ExpirationInDays: 365
            NoncurrentVersionExpirationInDays: 365
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true

  LogBucketPolicy:
    Type: 'AWS::S3::BucketPolicy'
    Properties:
      Bucket: !Ref LogBucket
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: DenyNonHttpsAccess
            Effect: Deny
            Principal: '*'
            Action: 's3:*'
            Resource: !Sub "${LogBucket.Arn}/*"
            Condition:
              Bool:
                'aws:SecureTransport': 'false'
          - Sid: AWSLogDeliveryWrite
            Effect: Allow
            Principal:
              Service: 'delivery.logs.amazonaws.com'
            Action:
              - 's3:PutObject'
            Resource: !Sub "${LogBucket.Arn}/*"
            Condition:
              StringEquals:
                's3:x-amz-acl': 'bucket-owner-full-control'
          - Sid: AWSLogDeliveryAclCheck
            Effect: Allow
            Principal:
              Service: 'delivery.logs.amazonaws.com'
            Action:
              - 's3:GetBucketAcl'
            Resource: !GetAtt LogBucket.Arn

Outputs:
  BucketArn:
    Description: ARN of the S3 bucket for VPC flow logs delivery
    Value: !GetAtt LogBucket.Arn
    Export:
      Name: !Sub "${AWS::StackName}-BucketArn"
  BucketName:
    Description: Name of the S3 bucket for VPC flow logs delivery
    Value: !Ref LogBucket
    Export:
      Name: !Sub "${AWS::StackName}-BucketName"

ipam.yaml

AWSTemplateFormatVersion: '2010-09-09'
Description: IPAM

Parameters:
  Region:
    Description: 'Operating regions'
    Type: String
  TopLevelCidr:
    Description: 'CIDR for Top Level Pool'
    Type: String
  RegionLevelCidr:
    Description: 'CIDR for Region Level Pool'
    Type: String
  ProdLevelCidr:
    Description: 'CIDR for Production Pool'
    Type: String
  ProdOuArn:
    Description: 'Prod OU ARN'
    Type: String

Resources:
  IPAM:
    Type: AWS::EC2::IPAM
    Properties:
      OperatingRegions:
        - RegionName: !Sub "${Region}"

  topLevelPool:
    Type: AWS::EC2::IPAMPool
    Properties:
      AddressFamily: ipv4
      IpamScopeId: !GetAtt IPAM.PrivateDefaultScopeId
      ProvisionedCidrs:
        - Cidr: !Sub "${TopLevelCidr}"

  regionLevelPool:
    Type: AWS::EC2::IPAMPool
    Properties:
      AddressFamily: ipv4
      IpamScopeId: !GetAtt IPAM.PrivateDefaultScopeId
      ProvisionedCidrs:
        - Cidr: !Sub "${RegionLevelCidr}"
      SourceIpamPoolId: !GetAtt topLevelPool.IpamPoolId
      Tags:
        - Key: Name
          Value: !Sub "${Region}-Level-Pool"

  ProdLevelPool:
    Type: AWS::EC2::IPAMPool
    Properties:
      AddressFamily: ipv4
      IpamScopeId: !GetAtt IPAM.PrivateDefaultScopeId
      AllocationMaxNetmaskLength: 28
      AllocationMinNetmaskLength: 16
      ProvisionedCidrs:
        - Cidr: !Sub "${ProdLevelCidr}"
      SourceIpamPoolId: !GetAtt regionLevelPool.IpamPoolId
      Locale: !Sub "${Region}"
      AutoImport: true
      Tags:
        - Key: Name
          Value: Production

  ProdPoolShare:
    Type: AWS::RAM::ResourceShare
    Properties:
      Name: IPAM-Prod-pool-share
      ResourceArns:
        - !GetAtt ProdLevelPool.Arn
      Principals:
        - !Sub "${ProdOuArn}"

Outputs:
  ProdPoolID:
    Description: Prod Pool ID
    Value: !GetAtt ProdLevelPool.IpamPoolId
    Export:
      Name: IPAM-Prod-PoolID

vpc.yaml

AWSTemplateFormatVersion: '2010-09-09'
Description: Creates simple demo VPC

Parameters:
  IMAPPoolID:
    Type: String
    Default: ''
  VpcCidrSize:
    Type: Number
    Default: 16
  ExternalLogBucket:
    Description: '(Optional) Name of an S3 bucket where you want to store flow logs. If you leave this empty, Flow Logs will not be configured.'
    Type: String
    Default: ''
  TrafficType:
    Description: 'The type of traffic to log.'
    Type: String
    Default: ALL
    AllowedValues:
      - ACCEPT
      - REJECT
      - ALL

# Create the flow log only when a destination bucket is provided,
# so the ExternalLogBucket parameter really is optional as described.
Conditions:
  HasExternalLogBucket: !Not [!Equals [!Ref ExternalLogBucket, '']]

Resources:
  ### VPC ###
  VPC:
    Type: 'AWS::EC2::VPC'
    Properties:
      EnableDnsSupport: true
      EnableDnsHostnames: true
      Ipv4IpamPoolId: !Ref IMAPPoolID
      Ipv4NetmaskLength: !Ref VpcCidrSize

  ### VPC Flow Logs ###
  FlowLogExternalBucket:
    Type: 'AWS::EC2::FlowLog'
    Condition: HasExternalLogBucket
    Properties:
      LogDestination: !Sub 'arn:aws:s3:::${ExternalLogBucket}'
      LogDestinationType: s3
      ResourceId: !Ref VPC
      ResourceType: 'VPC'
      TrafficType: !Ref TrafficType

The pipeline starts after a “git commit” and “git push”.
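A typical commit flow looks like this; substitute your home Region and the repository name you chose at stack creation (CodeCommit HTTPS access also needs the Git credential helper configured):

# Clone the configuration repository created by CfCT.
git clone https://git-codecommit.<home-region>.amazonaws.com/v1/repos/<repo-name>
cd <repo-name>

# Add the manifest, SCP, and templates, then push to trigger the pipeline.
git add manifest.yaml policies/ templates/
git commit -m "Add flow-logs SCP, S3 bucket, IPAM pool, and demo VPC"
git push origin main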

The “Build” step validates the CloudFormation templates’ syntax, checks all required input parameters, and runs security scans:

This is a CodeBuild project, and you can check logs in case of failures:

Once the validation is completed, you will get an email notification about manual approval:

This is the third step in the pipeline:

You can review the commit, reject or approve:

The next step is applying the Service Control Policy:

The 4th and 5th steps of the pipeline work via Step Functions:

We can see that “CustomControlTowerServiceControlPolicyMachine” succeeded:

and the SCP has been applied to the target OU, as set in manifest.yaml:

The next step is deploying CloudFormation stacks:

The Step function “CustomControlTowerStackSetStateMachine” creates a separate execution of every resource I defined in the manifest.yaml file:

State machine execution creates a CloudFormation StackSet here on the Management AWS account and deploys resources to the target AWS accounts and regions:

All three mentioned stacks (S3 bucket, IPAM pool, and VPC) have been deployed, and CodePipeline execution has finished:

One important thing is how we can pass parameters between different stacks. We can set “export_outputs” in the manifest.yaml: “name” is how we name a parameter in the SSM Parameter Store, and “value” is an output from the CloudFormation stack.

Here are the created parameters in the Management Account SSM Parameter Store:

For example, this S3 bucket name will be used later by the VPC stack as a target for VPC FlowLogs:

How do we define the import of those parameters in the manifest file? There is a helper called “alfred_ssm”: wrapping an SSM parameter name in $[alfred_ssm_…] as a parameter_value tells the pipeline to resolve the value from the SSM Parameter Store at deployment time, as the demo-vpc resource above does.
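You can verify the exported values with a quick lookup in the management account; the parameter names come straight from the export_outputs sections of the manifest above.

# Read the values the pipeline exported to the SSM Parameter Store.
aws ssm get-parameter --name /network/ipam/prod-pool-id \
  --query 'Parameter.Value' --output text
aws ssm get-parameter --name /logs/vpc-flow-logs/s3-bucket-name \
  --query 'Parameter.Value' --output text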

As a result, in the CloudFormation console we can see how those inputs were applied to the VPC stack:

I did not define a specific CIDR block in the VPC stack. Instead, I used the previously created and shared IPAM pool; IPAM returned a free CIDR block and recorded it on the Allocations tab. With IPAM, we don’t need to worry about network overlap in the future:
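To inspect what IPAM handed out, you can list the pool’s allocations from the Networking account; the pool ID below is a placeholder, so use the value stored in /network/ipam/prod-pool-id.

# List CIDR allocations in the production pool. Run in the Networking
# account, in the pool's locale Region; the pool ID is a placeholder.
aws ec2 get-ipam-pool-allocations \
  --ipam-pool-id ipam-pool-0123456789abcdef0 \
  --query 'IpamPoolAllocations[].{cidr:Cidr,type:ResourceType,owner:ResourceOwner}'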

VPC has been created, and VPC FlowLogs are enabled with the previously created S3 bucket as a target:

Let’s test our Service Control Policy and try to delete the VPC FlowLogs configuration:

Denied, and it’s expected:
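From the CLI, the test looks roughly like this; the flow log ID is a placeholder, and the exact error text may differ slightly.

# Find the flow log ID in the Prod account...
aws ec2 describe-flow-logs --query 'FlowLogs[].FlowLogId'

# ...and try to delete it as a regular principal (placeholder ID).
aws ec2 delete-flow-logs --flow-log-ids fl-0123456789abcdef0
# Expected result: an UnauthorizedOperation error caused by the
# explicit deny in the service control policy.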

The demonstrated automation can be triggered by a code change, as I have shown, or when a new account is created in the Landing Zone:

Conclusion

In this article, I demonstrated how to use the Control Tower Customization pipeline to apply Service Control Policies, deploy CloudFormation templates to different AWS Accounts, and create a small part of the infrastructure that is relevant for most Landing Zones. Let’s recap what was happening:

  1. Configuration code was pushed to the Git repository.
  2. CodePipeline started.
  3. The configuration code was validated, and a security scan was performed.
  4. Pipeline paused and waited for Manual Approval.
  5. After the approval, the next step started, which used a Step Functions state machine to apply the Service Control Policy.
  6. The last step of the pipeline started. Another Step Functions state machine was used to deploy the CloudFormation stacks one by one. The first stack was “S3 bucket for VPC FlowLogs”. The bucket was deployed to the Log Archive account, and the bucket name was saved in the SSM Parameter Store for further use.
  7. The next stack was “IPAM,” which was deployed to the Networking Account. The IPAM pool ID was saved in the SSM Parameter Store for further usage.
  8. The last stack was VPC, which was deployed to the Prod Account. Outputs from the previous two stacks were used as inputs here.

In the next post, I will demonstrate how to build a Landing Zone with other customizations.


Oleksii Bebych

IT professional with more than 10 years of experience. Dozens of successful projects with AWS. AWS Ambassador and Community Builder.