The purpose of this article is to provide a clear and concise overview of AWS cloud concepts and services. This guide aims to enhance understanding and serve as a valuable resource for users. I will continually update and improve this article to ensure accurate and up-to-date content.
Amazon Macie
Amazon Macie is a fully managed data security service that uses machine learning and pattern matching to help you discover, monitor, and protect your sensitive data in Amazon S3. With Macie, you can:
- Discover sensitive data. Macie uses machine learning to identify sensitive data in your Amazon S3 buckets, including personally identifiable information (PII), financial data, and intellectual property.
- Monitor your data security posture. Macie continuously monitors your Amazon S3 buckets for security and access control issues, and generates findings to notify you of potential risks.
- Protect your sensitive data. Macie generates detailed findings about sensitive data and potential policy issues; you can send these findings to Amazon EventBridge or AWS Security Hub to alert your team or trigger automated remediation, such as tightening bucket access.
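As a minimal sketch of getting started (the bucket name and account ID are placeholders, and the exact job options depend on your needs), you might enable Macie and run a one-time sensitive data discovery job with the AWS CLI:
aws macie2 enable-macie
aws macie2 create-classification-job \
  --job-type ONE_TIME \
  --name find-sensitive-data \
  --s3-job-definition '{"bucketDefinitions":[{"accountId":"111122223333","buckets":["my-bucket"]}]}'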
AWS CloudTrail Processing Library
The AWS CloudTrail Processing Library is a Java library that makes it easy to build an application that reads and processes CloudTrail log files in a fault-tolerant and highly scalable manner. The library is provided as an Apache-licensed open-source project, available on GitHub: https://github.com/aws/aws-cloudtrail-processing-library.
The CloudTrail Processing Library provides a number of features that make it easy to process CloudTrail log files, including:
- Polling of Amazon SQS queues. The library can poll your Amazon SQS queues to retrieve CloudTrail log files.
- Parsing of queue messages. The library can parse queue messages to extract CloudTrail events.
- Downloading of CloudTrail log files. The library can download CloudTrail log files from Amazon S3.
- Parsing of CloudTrail log files. The library can parse CloudTrail log files to extract events.
- Passing of events to code. The library can pass events to your code as Java objects.
Service-linked Roles
AWS Service-linked roles (SLRs) are a type of IAM role that is linked directly to an AWS service. The service can assume the role to perform actions on your behalf. SLRs appear in your AWS account and are owned by the service. An IAM administrator can view, but not edit, the permissions for SLRs.
Here are some of the benefits of using SLRs:
- Simplified management. SLRs are automatically created and managed by the service, so you don’t have to worry about creating or updating them.
- Increased security. SLRs are only accessible to the service that created them, so you can be confident that your permissions are secure.
- Improved auditing. All actions performed by an SLR are logged in CloudTrail, so you can easily track and audit the service’s activity.
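For example (a quick sketch, assuming the AWS CLI is configured), you can list the service-linked roles in your account by filtering on their reserved path:
aws iam list-roles --path-prefix /aws-service-role/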
Understanding “Managed Policies vs Inline Policies” in AWS IAM
AWS IAM (Identity and Access Management) provides two types of policies to assign permissions to IAM entities (users, groups, and roles): managed policies and inline policies. These policies are sets of permissions that determine what actions are allowed or denied by the IAM entity on specific AWS resources.
Here’s a breakdown of the differences:
Managed Policies
Managed policies are standalone policies that you can attach to multiple users, groups, and roles in your AWS account. They are best when you want to reuse permissions across multiple entities. Managed policies also have a larger size limit than inline policies: a managed policy can be up to 6,144 characters (not counting whitespace). There are two types of managed policies:
- AWS Managed Policies: These are managed policies created and managed by AWS. They are designed to provide permissions for many common use cases, like “read-only access to S3” or “full access to EC2.” These are good to use when they align with your permission requirements.
- Customer Managed Policies: These are managed policies that you create in your AWS account. You have full control over these policies. They offer more precise control over policy permissions compared to AWS managed policies.
Inline Policies
Inline policies are policies that you create and manage and that are embedded directly into a single user, group, or role. These are best when you want to maintain a strict one-to-one relationship between a policy and the entity to which it applies. Inline policies also have smaller size limits: the combined inline policies for a single user, group, or role cannot exceed 2,048, 5,120, or 10,240 characters respectively.
Here’s a summary of when you might choose each one:
- Use AWS Managed Policies when they align with your permission requirements. They’re already created for you, and AWS maintains them.
- Use Customer Managed Policies when you need to customize permissions beyond what AWS managed policies offer, and these permissions will be used by multiple IAM entities.
- Use Inline Policies when you need to assign permissions to a single IAM entity, and you want to be sure that the permissions aren’t inadvertently altered or deleted.
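As a quick illustration with the AWS CLI (the user name alice and the policy file are placeholders), attaching a managed policy and embedding an inline policy look like this:
# Attach an AWS managed policy (reusable across users, groups, and roles)
aws iam attach-user-policy \
  --user-name alice \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

# Embed an inline policy directly in the user (strict one-to-one relationship)
aws iam put-user-policy \
  --user-name alice \
  --policy-name s3-bucket-access \
  --policy-document file://policy.json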
More details on AWS inline policy
An AWS inline policy is a policy that is embedded in an IAM identity, such as a user, group, or role. Inline policies are a way to grant permissions to an identity without having to create a separate policy document.
Inline policies are useful for granting permissions to a single identity or for granting permissions that are specific to that identity. For example, you could create an inline policy to allow a user to access a specific S3 bucket.
Inline policies are created using the IAM Policy Document language. The policy document defines the permissions that are granted to the identity.
Here is an example of an inline policy that allows a user to access a specific S3 bucket:
JSON
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-bucket"
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
Inline policies are a convenient way to grant permissions to IAM identities. However, they can be difficult to manage if you have a large number of identities. If you have a large number of identities, you may want to consider using managed policies instead.
Here are some of the pros and cons of using AWS inline policies:
Pros:
- Inline policies are easy to create and manage.
- Inline policies can be specific to an identity.
Cons:
- Inline policies can be difficult to manage if you have a large number of identities.
- Inline policies are not portable. If you move an identity to another AWS account, the inline policy will not be transferred.
EBS Fast Snapshot Restore (FSR)
Amazon EBS Fast Snapshot Restore (FSR) is a feature that lets you create volumes from snapshots that are fully initialized at creation. Normally, a volume restored from a snapshot loads its blocks lazily from Amazon S3 the first time each block is accessed, which adds latency. With FSR enabled, the snapshot is pre-initialized in the Availability Zones you choose, so volumes created from it deliver their full provisioned performance immediately.
FSR is a great way to improve the performance of volumes you restore from snapshots. If you need newly restored volumes to perform at full speed right away, FSR can help you achieve that.
To use FSR, you enable it for the snapshot in each Availability Zone where you plan to create volumes. You can do this in the Amazon EC2 console (EBS section) or by using the AWS Command Line Interface (CLI), as shown below. Once FSR reaches the enabled state for a snapshot, volumes you create from it are fully initialized from the start.
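As a minimal sketch (the snapshot ID and Availability Zone are placeholders), enabling FSR with the AWS CLI looks like this:
aws ec2 enable-fast-snapshot-restores \
  --availability-zones us-east-1a \
  --source-snapshot-ids snap-0123456789abcdef0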
Here are some of the benefits of using EBS Fast Snapshot Restore:
- Faster restore times. FSR can significantly reduce the time it takes to restore an EBS snapshot.
- Full performance immediately. Volumes created from an FSR-enabled snapshot deliver their provisioned performance as soon as they are created, with no first-access initialization latency.
- Predictable restores. Because no lazy loading from S3 is needed, restore behavior is consistent, which helps for time-sensitive workflows such as disaster recovery.
Note that FSR itself is not free: you are charged for each snapshot and Availability Zone combination for which it is enabled.
IAM permission boundaries
IAM permission boundaries are an advanced feature in AWS Identity and Access Management (IAM) that allows you to set the maximum permissions that an identity-based policy can grant to an IAM entity (which could be a user, group, or role). IAM entities can then have policies that grant any permissions that are a subset of the permission boundary.
In other words, you can consider permission boundaries as a way to limit the maximum permissions a particular IAM entity can have, regardless of what permissions are stated in the entity’s policies.
To give an example, let’s say you have a junior administrator in your organization. You want them to be able to manage IAM roles for specific tasks, but you don’t want them to grant full administrative privileges to these roles. You could set a permission boundary for the junior admin that excludes certain administrative privileges. The junior admin could then create and manage roles with various permissions, but none that exceed the boundary.
The process to use permission boundaries typically involves the following steps:
- Create a policy that includes the maximum permissions that you want to allow.
- Set that policy as a permissions boundary for the IAM entities (users or roles).
- Create or modify the IAM policies attached to those IAM entities. The effective permissions are the intersection of entity’s policies and the permission boundary.
Please note that if you don’t explicitly set a permissions boundary, no boundary applies, and the IAM entity (user or role) can have any permissions that are granted in their policies.
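As a minimal sketch (the user name and the boundary policy ARN are placeholders for a policy you would define yourself), setting a permissions boundary with the AWS CLI looks like this:
# Set a customer managed policy as the permissions boundary for a user
aws iam put-user-permissions-boundary \
  --user-name junior-admin \
  --permissions-boundary arn:aws:iam::111122223333:policy/JuniorAdminBoundary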
IAM roles Vs resource-based policies
IAM roles and resource-based policies are both used to control access to AWS resources. However, they have different strengths and weaknesses, and they are best suited for different use cases.
IAM roles are an identity-based access control mechanism that allows you to delegate permissions to other AWS accounts or services. Roles are not attached to a specific user or group; instead, they are assumed by a user or service when it needs to access a resource. This makes it easy to grant temporary or limited access to resources.
Resource-based policies are attached to specific AWS resources, such as S3 buckets or EC2 instances. These policies define who can access the resource and what actions they can perform. Resource-based policies are a good choice for granting access to resources that are not owned by the account that is creating the policy.
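For example, here is a minimal sketch of a resource-based policy (an S3 bucket policy) that grants another account read access to a bucket; the account ID and bucket name are placeholders:
JSON
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}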
Here is a summary of the key differences between IAM roles and resource-based policies:
- Attachment: a role is an identity that a trusted principal assumes; a resource-based policy is attached directly to a resource such as an S3 bucket.
- Permissions during access: when you assume a role, you temporarily give up your original permissions and use the role's permissions; with a resource-based policy, the caller keeps its own identity and permissions in its account.
- Typical fit: roles for temporary, delegated, or cross-account access; resource-based policies for granting specific principals or accounts access to one particular resource.
When to use IAM roles
IAM roles are a good choice for the following use cases:
- Granting temporary or limited access to resources.
- Delegating permissions to other AWS accounts or services.
- Enabling cross-account access.
When to use resource-based policies
Resource-based policies are a good choice for the following use cases:
- Granting access to resources that are not owned by the account that is creating the policy.
- Controlling access to resources that are not frequently accessed.
- Enabling fine-grained access control.
Ultimately, the best way to decide which type of policy to use is to consider the specific use case and the specific requirements.
The OrganizationAccountAccessRole
When you create a new AWS account within an organization using AWS Organizations, a role named OrganizationAccountAccessRole is automatically created in the new account. This role gives users in the management account (the account that manages the organization, also called the master account) a way to access and manage the newly created member account.
The role is designed to have full administrative permissions: its default permissions policy allows all actions on all resources in the member account, and its trust policy allows principals in the management account to assume it.
This role simplifies management of member accounts from the management account. An IAM user or role in the management account can assume the OrganizationAccountAccessRole in a member account to perform administrative tasks, without having to create individual IAM users and roles in each member account.
Here’s how you might use it:
- As an IAM user in the management account, you can switch into the member account's console by choosing Switch Role in the IAM console. You'll need to provide the account ID of the member account and the name of the role (OrganizationAccountAccessRole).
- You can also use the AWS CLI or SDKs to assume the role programmatically. You'd use the sts:AssumeRole API operation and provide the ARN (Amazon Resource Name) of the OrganizationAccountAccessRole in the member account.
Please note that to use OrganizationAccountAccessRole, your IAM user or role in the management account needs permission to call sts:AssumeRole on it. The specifics of this permission can be defined in your IAM policies.
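For example (a sketch; the account ID and session name are placeholders), assuming the role from the management account with the AWS CLI looks like this:
aws sts assume-role \
  --role-arn arn:aws:iam::111122223333:role/OrganizationAccountAccessRole \
  --role-session-name admin-session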
AWS Assume Role
AWS Security Token Service (STS) allows you to assume a role, which temporarily grants you the permissions of another user or service. This can be useful for a variety of purposes, such as:
- Providing access to AWS resources to IAM users in different accounts, even if those users do not have direct permissions to those resources.
- Providing access to AWS resources to services offered by AWS, such as Amazon S3 and Amazon EC2.
- Providing access to AWS resources to externally authenticated users, such as users who have logged in to your application using an identity provider such as Amazon Cognito.
When you assume a role, you give up your original permissions and take on the permissions of the role. This means that you can only perform actions that are allowed by the role’s permissions policy.
Cross-Account Access to S3 Buckets with STS
Here is an example of how to assume a role with STS to access an S3 bucket:
- Account A: This is the account that wants to access the S3 bucket. It does not have direct permissions to the bucket, so it needs to assume a role in order to access it.
- Account B: This is the account that contains the S3 bucket. It has created a role called S3AccessRole that allows users to access the bucket.
To assume the role, the user in Account A needs to do the following:
- Attach a policy to the user that grants the sts:AssumeRole permission. The policy must specify the ARN of the S3AccessRole role in Account B.
- Use the sts:AssumeRole API call to assume the role. This will give the user a set of temporary credentials that they can use to access the S3 bucket.
Once the user has the temporary credentials, they can use them to access the S3 bucket in Account B. For example, they could use the AWS CLI to list the objects in the bucket.
Here is an example of the policy that you would attach to the user in Account A:
Code snippet
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::123456789012:role/S3AccessRole"
    }
  ]
}
Here is an example of the command that you would use to assume the role:
aws sts assume-role --role-arn arn:aws:iam::123456789012:role/S3AccessRole --role-session-name my-session
This will return a set of temporary credentials that you can use to access the S3 bucket.
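For the assumption to succeed, the S3AccessRole in Account B must also trust Account A. Here is a minimal sketch of that trust policy (the account ID 111122223333 stands in for Account A):
JSON
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}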
Assuming a Role with AWS STS, Step by Step
The cross-account flow works as follows:
- Account A sends a request to AWS Security Token Service (STS) to assume a role that has the necessary permissions to access the S3 bucket in Account B.
- STS validates the request with Account B.
- If the request is valid, Account B returns the role credentials to STS.
- STS then provides these role credentials to Account A.
- Finally, Account A uses these assumed role credentials to access the S3 bucket in Account B.
Providing Access to AWS Accounts Owned by Third Parties
When you need to grant access to AWS resources to third parties, you can use the AWS Security Token Service (STS) to create a role that the third party can assume. This allows the third party to access your AWS resources without having to share your credentials.
To create a role for a third party, you will need to provide the following information:
- The third party’s AWS account ID.
- An external ID. This is a secret that you will share with the third party. It is used to uniquely associate the role with the third party.
- The permissions that you want to grant to the third party. These permissions will be defined in the role’s IAM policy.
Once you have created the role, you can provide the external ID to the third party. They can then use the sts:AssumeRole API call to assume the role and gain access to your AWS resources.
Here are some additional points to consider:
- The third party must use valid credentials from their own AWS account (an IAM user or role with permission to call sts:AssumeRole) in order to assume the role.
- The third party can only assume a role that they have been granted permission to assume.
- The third party’s permissions are limited to the permissions of the role that they assume.
- You can use IAM Access Analyzer to find out which resources are exposed to third parties.
The Confused Deputy Problem and Solution
The confused deputy problem is a security issue that can occur when an entity that does not have permission to perform an action can coerce a more-privileged entity to perform the action. In the context of assuming roles, this could happen if a third-party service that has been granted permission to assume a role is compromised and the attacker is able to use the role to access resources that they should not have access to.
To solve the confused deputy problem, you can use the sts:ExternalId condition key in the role's trust policy. This key allows you to specify a unique value that the third-party service must provide when it assumes the role. If the value does not match, the assume-role request is denied.
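Here is a minimal sketch of such a trust policy (the third party's account ID and the external ID value are placeholders):
JSON
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::999988887777:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "unique-external-id-12345" }
      }
    }
  ]
}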
The Confused Deputy Problem and Solution with External ID
The flow works as follows:
- The User (a customer) asks the Deputy (a third-party application) to perform an action on a Resource in the User's account.
- If the Deputy simply used a role it can assume in the User's account, another party could trick the Deputy into using that role against resources it was never meant to touch. This is the "confused deputy" problem: the Deputy acts with its own broad permissions rather than strictly on behalf of the User who made the request.
- To solve this, the User configures the role's trust policy with an External ID that is unique to their relationship with the Deputy, and shares that value with the Deputy.
- When the Deputy assumes the role, it must supply the matching External ID to receive temporary credentials. Requests made on behalf of anyone else, with a different or missing External ID, are refused, so the Deputy only performs actions that the User has explicitly allowed.
Session Tags in AWS Security Token Service (STS)
Session Tags in AWS Security Token Service (STS) are tags that you pass when you assume an IAM role or federate a user in STS. The aws:PrincipalTag condition key compares the tags attached to the principal making the request with the tags you specify in the policy. For example, you can allow a principal to pass session tags only if the principal making the request has the specified tags.
The flow works as follows:
- The User assumes an IAM role with STS, passing the session tag Department=HR.
- STS returns temporary security credentials to the User.
- The User then attempts to access an S3 bucket (hr-docs) using these temporary credentials.
- Access is granted if the aws:PrincipalTag condition in the bucket's policy matches the session tag passed by the User.
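As a sketch of the bucket side (the account ID is a placeholder), the hr-docs bucket policy might use the aws:PrincipalTag/Department condition key like this; the session tag itself can be passed with the --tags option of aws sts assume-role:
JSON
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::hr-docs/*",
      "Condition": {
        "StringEquals": { "aws:PrincipalTag/Department": "HR" }
      }
    }
  ]
}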
STS Important APIs
AWS Security Token Service (STS) provides a set of APIs that you can use to assume roles, get session tokens, and federate users. These APIs are used to grant temporary access to AWS resources.
The following are some of the most important STS APIs:
- AssumeRole: This API allows you to assume a role in your own account or in another account.
- AssumeRoleWithSAML: This API allows you to assume a role for a user who has logged in with SAML.
- AssumeRoleWithWebIdentity: This API allows you to assume a role for a user who has logged in with an identity provider (IdP). Amazon Cognito, Login with Amazon, Facebook, Google, and any OpenID Connect-compatible IdP are all supported.
- GetSessionToken: This API allows you to get a session token for a user or the AWS account root user. This token can be used to access AWS resources that require MFA.
- GetFederationToken: This API allows you to get temporary credentials for a federated user. These credentials can be used by a proxy application to grant access to a distributed application inside a corporate network.
AWS recommends using Amazon Cognito instead of AssumeRoleWithWebIdentity. Cognito provides a more secure and scalable way to federate users.
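Returning to GetSessionToken: as a minimal sketch (the MFA device ARN and token code are placeholders), requesting MFA-backed temporary credentials from the CLI looks like this:
aws sts get-session-token \
  --serial-number arn:aws:iam::111122223333:mfa/my-user \
  --token-code 123456 \
  --duration-seconds 3600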
AWS Organization — Feature Modes
AWS Organizations has two available feature sets or modes: “consolidated billing” and “all features”.
- Consolidated Billing: This feature set allows you to consolidate payment for multiple AWS accounts. All the accounts in your organization are billed as one entity, which simplifies the payment process, and can potentially lead to cost savings, as AWS combines usage from all accounts to qualify you for volume pricing tiers.
- All Features: This feature set includes everything in consolidated billing, and adds a number of advanced features. For example, it allows you to apply policies to accounts in your organization that can control their actions.
The “all features” mode enables you to:
- Centralize management of service control policies (SCPs) across your AWS accounts. SCPs allow you to fine-tune the permissions for the IAM users and roles in your organization’s accounts.
- Use advanced account management features like inviting other accounts to join your organization or removing accounts from your organization.
- Access all AWS services that integrate with AWS Organizations.
When you create an organization, it is initially in the consolidated billing mode. If you enable all features, you can’t switch back to consolidated billing. Therefore, AWS recommends that you enable all features only after you’ve tested and understood the effects of SCPs and other “all features” elements.
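As a minimal sketch with the AWS CLI (run from the management account), you can check which feature set is active and, when you are ready, enable all features:
# Shows the organization's FeatureSet (CONSOLIDATED_BILLING or ALL)
aws organizations describe-organization

# One-way change: enables the "all features" mode
aws organizations enable-all-features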
Understanding and Implementing AWS Direct Connect: A Focus on Private Virtual Interface (VIF)
Definition of a Private VIF
A private Virtual Interface (VIF) refers to a BGP-enabled link offering a dedicated, secure conduit between your on-premises infrastructure and a singular Virtual Private Cloud (VPC). Private VIFs bridge the gap between your local network and AWS Direct Connect, thereby facilitating a safe and reliable data transfer path from your local network to your VPC.
How Do Private VIFs Operate?
Upon initiating a private VIF, AWS earmarks a dedicated VLAN (Virtual Local Area Network) for your Direct Connect link. This VLAN serves as the platform for establishing a BGP (Border Gateway Protocol) peering session between your on-site router and your VPC. Consequently, the BGP peering session enables your local router and your VPC to exchange routing details, fostering smooth traffic flow from your local network to your VPC.
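As a rough sketch only (the connection ID, VLAN, ASN, and virtual private gateway ID are placeholders, and the exact options depend on your setup), creating a private VIF with the AWS CLI might look like this:
aws directconnect create-private-virtual-interface \
  --connection-id dxcon-fg1234ab \
  --new-private-virtual-interface virtualInterfaceName=my-private-vif,vlan=101,asn=65000,virtualGatewayId=vgw-0123456789abcdef0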
Why Should You Opt for Private VIFs?
Private VIFs offer multiple advantages:
- Exclusive Link: A private VIF guarantees a reserved connection from your local network to your VPC. It assures a consistent bandwidth capacity for your traffic, ensuring no shared connection with other AWS users.
- Personalized Routing: The creation of a private VIF facilitates a BGP peering session, permitting the application of customized routing policies for traffic management between your local network and your VPC. This flexibility enhances the control over your traffic routing and potentially boosts application performance.
- Scalability: Private VIFs are scalable, allowing seamless bandwidth augmentation as per evolving needs.
Limitations of Using Private VIFs
The use of private VIFs presents a few constraints:
- Single VPC Connection: A private VIF attached to a virtual private gateway links to one VPC. If you need to reach multiple VPCs over Direct Connect, you’d need additional private VIFs, a Direct Connect gateway, or a transit virtual interface with AWS Transit Gateway.
- Expense: Private VIFs might incur higher costs compared to other types of Direct Connect connections, such as public VIFs.
Ideal Circumstances for Using a Private VIF
Consider opting for a private VIF when:
- A dedicated connection to a single VPC is required.
- You need to apply personalized routing policies for managing traffic between your local network and your VPC.
- There’s a need to scale your connection in line with your growing demands.
If these features aren’t your priority, you might want to explore other types of Direct Connect connections, such as a public VIF.
Navigating AWS Resource Management Access: Key Concepts, Benefits, and Best Practices
Understanding AWS Resource Management Access
AWS resource management access refers to the mechanism of managing accessibility and operational permissions pertaining to your AWS resources. The implementation of this control is facilitated by IAM (Identity and Access Management) policies, which are a collection of permission rules that delineate the access and operation level of various resources.
Functioning of AWS Resource Management Access
Upon creating an IAM user or role, one or multiple IAM policies can be assigned to it, dictating the permissions associated with the user or role. For instance, a policy could allow a user to launch EC2 instances, but restrict them from deleting the same.
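For instance, here is a minimal sketch of such a policy (the broad Resource "*" is only for illustration) that lets a user launch and describe EC2 instances but never terminate them:
JSON
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:RunInstances", "ec2:DescribeInstances"],
      "Resource": "*"
    },
    {
      "Effect": "Deny",
      "Action": "ec2:TerminateInstances",
      "Resource": "*"
    }
  ]
}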
Whenever a request is made by a user or role for accessing an AWS resource, AWS assesses the corresponding IAM policies to verify if the required permissions are granted. If the requisite permissions are absent, the request is rejected.
Benefits of Implementing AWS Resource Management Access
Utilizing AWS resource management access offers several advantages:
- Augmented Security: By regulating the access to your AWS resources, you can safeguard your data and applications from unauthorized interventions.
- Enhanced Flexibility: IAM policies enable you to provide varying access levels to different users and roles based on their necessities, offering you increased flexibility in AWS resource management.
- Streamlined Administration: The administration of your AWS resources can be simplified with IAM policies. By consolidating permissions in IAM policies, the need to handle permissions for individual resources is eliminated.
Limitations Associated with AWS Resource Management Access
There are certain constraints associated with AWS resource management access:
- Complexity: IAM policies can be intricate, making their functioning challenging to comprehend.
- Errors: Incorrect IAM policy creation can hinder users or roles from accessing necessary resources.
- Security: Losing control of your IAM keys can result in unauthorized individuals gaining access to your AWS resources.
Appropriate Usage of AWS Resource Management Access
You should consider employing AWS resource management access if you aim to:
- Regulate access to your AWS resources.
- Assign varied access levels to different users and roles.
- Streamline the administration of your AWS resources.
If managing access to your AWS resources is not a priority, IAM policies may not be necessary. However, if access control is crucial, IAM policies serve as a potent tool to secure your data and applications.
Failover Times for Different AWS Databases
Here are the typical failover times for various AWS databases based on your provided information:
- Amazon Aurora: Aurora has an advanced failover mechanism thanks to its distributed, fault-tolerant, and self-healing storage system, which can auto-scale up to 128 TiB per database cluster. It can handle the loss of multiple data copies without disrupting database write or read availability. The typical failover time for Aurora is usually under 35 seconds.
- Amazon RDS for MySQL (Multi-AZ deployments): Amazon RDS for MySQL employs an automatic failover to the standby replica in case of database problems, which maintains up-to-date data. The process involves a DNS record modification of the DB instance to point to the standby. The typical failover time for RDS MySQL is typically between 60–120 seconds.
- Amazon RDS for PostgreSQL (Multi-AZ deployments): Amazon RDS for PostgreSQL uses a similar failover mechanism to MySQL. Failover times typically range between 60–120 seconds, although the actual duration may vary depending on specific circumstances.
- Amazon RDS for Oracle (Multi-AZ deployments): Amazon RDS for Oracle, like other Amazon RDS databases, performs automatic failover to a standby instance during database issues. The typical failover time generally falls within the range of 60–120 seconds.
- Amazon RDS for SQL Server (Multi-AZ deployments): Amazon RDS for SQL Server utilizes SQL Server Database Mirroring (DBM) or Always On Availability Groups (AGs) to provide high availability and failover support. The typical failover time for RDS SQL Server is usually between 60–120 seconds.
AWS Organizations — Reserved Instances
AWS Reserved Instances (RIs) are a cost-savings option for Amazon Elastic Compute Cloud (EC2) instances. When you purchase an RI, you commit to using a specific instance type and region for a set amount of time, in exchange for a discounted hourly rate.
Within AWS Organizations, the hourly discount from RIs purchased in one account can be shared with the other accounts in your organization through consolidated billing. This can help you save even more money, because AWS applies RI discounts and volume pricing tiers against the combined usage of all accounts.
To share RI discounts across your organization, you need the Consolidated Billing feature mode enabled. Any account can then purchase RIs from the Billing and Cost Management console or the AWS CLI.
You also control which accounts participate: the management account can turn RI and Savings Plans discount sharing on or off for individual accounts in the Billing and Cost Management preferences.
Once RIs have been purchased in your organization, you can track their utilization, coverage, and costs in the Billing and Cost Management console, for example with Cost Explorer reports.
Here are some of the benefits of using RIs in AWS Organizations:
- You can save money on EC2 instance costs.
- You can consolidate your RIs and take advantage of volume discounts.
- You can track the usage and costs of your RIs in the Billing and Cost Management console.
- You can combine RIs with Savings Plans, whose discounts are also shared across accounts in the organization.
Here are some of the challenges of using RIs in AWS Organizations:
- You need to have the Consolidated Billing features mode enabled.
- You need to decide which accounts participate in RI discount sharing.
- You need to track the usage and costs of your RIs.
Overall, RIs can be a great way to save money on EC2 instance costs. If you are using AWS Organizations, you can purchase RIs for all of the accounts in your organization and take advantage of even greater savings.
AWS Organizations — Moving Accounts
AWS Organizations allows you to group your accounts into organizational units (OUs), which helps to manage the accounts more easily and to apply service control policies (SCPs) more effectively.
If you need to move an AWS account from one organizational unit (OU) to another within your organization, you can do so by following these steps:
- Sign in to the AWS Management Console as a user who has the organizations:MoveAccount permission. Usually, this will be a user in the master account.
- Open the AWS Organizations console.
- In the navigation pane, choose “Organize accounts”.
- On the “Organize accounts” page, select the account that you want to move.
- Choose “Move”.
- For “Destination”, select the OU that you want to move the account to. If you want to move the account to the root of your organization instead of into another OU, you can select the root.
- Choose “Move account”.
Keep in mind that when you move an account to a new OU, the policies of the destination OU will apply to that account. Make sure to review the policies of the destination OU and ensure they are appropriate for the account you are moving.
Additionally, be aware that you can’t move an account that is currently being invited or created, or an account that is suspended because of a problem with its payment method. You also can’t move the master account of your organization.
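The same move can be done with the AWS CLI (a sketch; the account, source, and destination IDs are placeholders):
aws organizations move-account \
  --account-id 111122223333 \
  --source-parent-id ou-abcd-11111111 \
  --destination-parent-id ou-abcd-22222222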
AWS DataSync and AWS Storage Gateway (File Gateway mode)
AWS DataSync and AWS Storage Gateway (File Gateway mode) are two services that help users move large amounts of data into and out of the AWS Cloud. While they might seem similar, they are used for different scenarios and have different characteristics:
AWS DataSync
AWS DataSync is a data transfer service that makes it easy for you to automate moving data between on-premises storage and Amazon S3, Amazon Elastic File System (EFS), or Amazon FSx for Windows File Server.
Use cases for AWS DataSync:
- Data migration to AWS for analysis or archival.
- Transferring data to the cloud for regular backup and restore activities.
- Moving data from one region to another for geographic distribution.
Pros:
- Fast and efficient transfer.
- Can handle both one-time migrations and recurring data transfer jobs.
- Built-in data validation to ensure data transferred over the network matches the source data.
Cons:
- Not suitable for maintaining a hot cache of data on-premises.
AWS Storage Gateway (File Gateway mode)
AWS Storage Gateway in File Gateway mode is a hybrid cloud storage service that provides a local file interface (NFS or SMB) to Amazon S3. This allows on-premises applications to access S3 data as if it were stored locally, with frequently accessed data cached on premises. File Gateway is also a good option for disaster recovery, as the local cache backed by S3 can be used to restore applications in the event of a disaster.
By contrast, AWS DataSync is a fully managed transfer service that uses an agent to copy data between on-premises storage and AWS. It is designed for high-performance data transfer and can copy large datasets quickly and efficiently. DataSync supports a range of sources and destinations, including NFS, SMB, Amazon S3, Amazon EFS, and Amazon FSx.
Here is a summary of the key differences between AWS DataSync and AWS Storage Gateway (File Gateway mode):
- Purpose: DataSync is a transfer service for one-time migrations and recurring transfer jobs; File Gateway is hybrid storage that presents S3 through a local file interface.
- Local cache: DataSync does not maintain a hot cache of data on-premises; File Gateway keeps frequently used data cached locally.
- Typical fit: DataSync for migrations, backup/restore transfers, and cross-region moves; File Gateway for applications that need ongoing file access to S3 and for disaster recovery.
In general, AWS DataSync is a better choice for high-performance data transfer, while AWS Storage Gateway (File Gateway mode) is a better choice for applications that need a local file interface to Amazon S3.
Here are some additional considerations when choosing between AWS DataSync and AWS Storage Gateway (File Gateway mode):
- Data transfer size: If you need to transfer a large amount of data, AWS DataSync is a better choice.
- Data transfer frequency: If you need to transfer data on a regular basis, AWS DataSync is a better choice.
- Data transfer security: AWS DataSync uses encryption to protect your data during transfer.
- Local file access: If you need to provide local file access to Amazon S3 data, AWS Storage Gateway (File Gateway mode) is a better choice.
- Disaster recovery: If you need to protect your data from disaster, AWS Storage Gateway (File Gateway mode) is a better choice.
Connect to on-premises Active Directory
AWS Managed Microsoft AD can be connected to your on-premises Active Directory (AD) forest. This allows users and computers from your on-premises AD domain to authenticate to resources in AWS Managed Microsoft AD, and vice versa.
To connect your on-premises AD forest to AWS Managed Microsoft AD, you must establish a Direct Connect (DX) or VPN connection between your on-premises network and AWS. Once the connection is established, you can create a forest trust between the two forests.
There are three types of forest trusts that you can create:
- One-way trust: This type of trust allows users and computers from one forest to authenticate to resources in the other forest, but not vice versa.
- Two-way trust: This type of trust allows users and computers from both forests to authenticate to resources in the other forest.
- Forest trust with selective authentication: This type of trust allows you to control which users and computers from one forest can authenticate to resources in the other forest.
Forest trust is different from synchronization. Forest trust is a security relationship that allows users and computers from different forests to authenticate to each other. Synchronization is the process of replicating objects between different forests.
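As a rough sketch (the directory ID, domain name, password, and DNS address are placeholders), creating a two-way forest trust from the AWS Managed Microsoft AD side with the AWS CLI might look like this; the matching trust must also be configured on the on-premises domain controllers:
aws ds create-trust \
  --directory-id d-1234567890 \
  --remote-domain-name corp.example.com \
  --trust-password 'Str0ngTrustP@ss' \
  --trust-direction 'Two-Way' \
  --trust-type Forest \
  --conditional-forwarder-ip-addrs 10.0.0.10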
Here are some of the benefits of connecting your on-premises AD forest to AWS Managed Microsoft AD:
- Simplified user management: You can manage users and computers in a single location, regardless of whether they are located on-premises or in the cloud.
- Increased security: By connecting the two forests, you can ensure that users have the same permissions in both environments. This can help to improve security across the entire environment.
- Enhanced flexibility: Users can access resources in both on-premises and cloud environments, regardless of their location. This can help to improve productivity and make it easier for users to work from anywhere.
AWS Organizations — OrganizationAccountAccessRole
The OrganizationAccountAccessRole is an IAM role that grants full administrator permissions in a member account to the management account. This role can be used to perform administrative tasks in the member account, such as creating IAM users, managing policies, and configuring settings.
The OrganizationAccountAccessRole can be assumed by IAM users in the management account. This means that users in the management account can use the role to access and manage resources in the member account.
The OrganizationAccountAccessRole is automatically added to all new member accounts created with AWS Organizations. However, if you invite an existing member account to join your organization, you must create the role manually.
Consolidated billing and Reserved Instances
The consolidated billing feature of AWS Organizations treats all the accounts in the organization as one account for billing purposes. This means that all accounts in the organization can receive the hourly cost benefit of Reserved Instances (RIs) that are purchased by any other account.
The payer account (management account) of an organization can turn off RI discount and Savings Plans discount sharing for any accounts in that organization, including the payer account. This means that RIs and Savings Plans discounts aren’t shared between any accounts that have sharing turned off. To share an RI or Savings Plans discount with an account, both accounts must have sharing turned on.
How it works
When RI discount sharing is turned on for an account in an organization, the following happens:
- The RIs purchased by the account are visible to all other accounts in the organization.
- The hourly cost benefit of the RIs is applied to the combined usage of all accounts in the organization that are running instances of the same type and size as the RIs.
- The RI discount is applied even if the account that purchased the RI is closed.
Benefits
There are several benefits to enabling RI discount sharing in AWS Organizations:
- Cost savings: Accounts in an organization can share the cost benefits of RIs that are purchased by other accounts. This can lead to significant cost savings, especially for organizations with a large number of accounts.
- Simplified management: RI discount sharing can simplify the management of RIs across an organization. Accounts only need to purchase RIs for the instance types and sizes that they need. The hourly cost benefit of the RIs is then applied to the combined usage of all accounts in the organization.
- Increased flexibility: RI discount sharing can increase the flexibility of an organization’s cloud deployments. Accounts can move instances between different accounts in the organization without having to worry about losing the RI discount.
Conclusion
RI discount sharing is a powerful feature of AWS Organizations that can help organizations save money on their cloud costs. By enabling RI discount sharing, organizations can simplify the management of RIs and increase the flexibility of their cloud deployments.
AWS Organizations — Moving Accounts
To move an account from one AWS Organization to another, you must follow these steps:
- Remove the member account from the current organization:
  - Go to the AWS Organizations console.
  - On the Accounts page, select the member account that you want to move.
  - Click Remove.
  - In the confirmation dialog box, click Remove.
- Send an invite to the member account from the new organization:
  - Go to the AWS Organizations console of the new organization.
  - On the Accounts page, click Invite.
  - In the Invite an account dialog box, enter the email address of the account administrator for the member account.
  - Click Send invitation.
- Accept the invite to the new organization from the member account:
  - The account administrator for the member account should receive an email with an invitation to join the new organization.
  - Click the link in the email to accept the invitation.
  - In the confirmation dialog box, click Accept.
The member account will then be moved to the new organization.
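Roughly the same flow with the AWS CLI (account and handshake IDs are placeholders; the first command runs in the old organization's management account, the second in the new one, and the last two in the member account):
# In the current organization's management account
aws organizations remove-account-from-organization --account-id 111122223333

# In the new organization's management account
aws organizations invite-account-to-organization --target Id=111122223333,Type=ACCOUNT

# In the member account: find and accept the pending invitation
aws organizations list-handshakes-for-account
aws organizations accept-handshake --handshake-id h-examplehandshake111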
Here are some additional things to keep in mind when moving accounts between AWS Organizations:
- The member account must have a valid payment method before it can be moved.
- The member account must be in the Active state before it can be moved.
- The member account must not have any resources that are shared with other accounts in the current organization.
Service Control Policies (SCPs)
Service control policies (SCPs) are a type of organization policy that you can use to control access to AWS services in your organization. SCPs are similar to IAM permission policies, but they are applied at the organizational unit (OU) or account level. SCPs do not apply to the management account.
SCPs are applied to all the users and roles in the account, including the root user. The SCP does not affect service-linked roles. Service-linked roles enable other AWS services to integrate with AWS Organizations and cannot be restricted by SCPs.
SCPs must have an explicit Allow statement. This means that SCPs do not allow anything by default; you must explicitly specify the services and actions that you want to allow.
SCPs can be used to restrict access to certain services, such as EMR. They can also be used to enforce compliance requirements, such as PCI compliance.
Here are some examples of how SCPs can be used:
- Restrict access to EMR: You can create an SCP that denies access to the EMR service. This would prevent users and roles in the account from creating or managing EMR clusters.
- Enforce PCI compliance: You can create an SCP that denies access to all services that are not compliant with PCI standards. This would prevent users and roles in the account from using services that could put your organization at risk of a data breach.
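For example, here is a minimal sketch of an SCP that implements the EMR restriction described above (attach it to the target OU or account):
JSON
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyEMR",
      "Effect": "Deny",
      "Action": "elasticmapreduce:*",
      "Resource": "*"
    }
  ]
}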
SCPs are a powerful tool that can be used to control access to AWS services in your organization. By using SCPs, you can help to ensure that your organization is compliant with your security and compliance requirements.
Restricting Tags with IAM Policies
You can use IAM policies to restrict which tags may be applied to AWS resources. This can be done with the aws:TagKeys condition key, which compares the tag keys in a request against the tag keys listed in the IAM policy.
For example, you could create an IAM policy that allows IAM users to create EBS volumes only when the tag keys in the request are limited to Env and CostCenter. Combined with conditions that require those tags to be present (such as aws:RequestTag), this helps ensure that your resources are properly tagged.
Because aws:TagKeys is a multivalued condition key, it is used with the set operators ForAllValues and ForAnyValue. ForAllValues requires that every tag key in the request matches one of the tag keys listed in the policy. ForAnyValue requires that at least one tag key in the request matches a tag key listed in the policy.
Here is an example of an IAM policy that uses the aws:TagKeys condition key with the ForAllValues:StringEquals operator:
Code snippet
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:CreateVolume",
      "Resource": "*",
      "Condition": {
        "ForAllValues:StringEquals": {
          "aws:TagKeys": ["Env", "CostCenter"]
        }
      }
    }
  ]
}
This policy allows IAM users to create EBS volumes only when every tag key in the request is either Env or CostCenter; a request that includes any other tag key is denied. To additionally require that both tags are actually present, combine this condition with aws:RequestTag conditions for each key.
AWS IAM Identity Center
- Single sign-on for all your AWS accounts in AWS Organizations, business cloud applications, SAML 2.0-enabled applications, and EC2 Windows Instances.
- Identity providers:
  - Built-in identity store in IAM Identity Center
  - Third-party identity providers: Active Directory (AD), OneLogin, Okta, etc.
Benefits
- Centralized management of user access to AWS accounts and applications.
- Increased security through single sign-on and role-based access control (RBAC).
- Simplified user provisioning and deprovisioning.
- Improved auditing and reporting.
Requirements
- AWS Organizations.
- AWS IAM Identity Provider (IdP) connector.
- Supported SAML 2.0 applications.
Pricing
- IAM Identity Center is offered at no additional charge. You pay only for the underlying AWS resources you use in your accounts.
Conclusion
AWS IAM Identity Center is a secure and scalable solution for managing user access to AWS accounts and applications. It provides a single sign-on experience for users, centralized management of user access, and increased security through RBAC. IAM Identity Center is a good choice for organizations of all sizes that need to manage user access to AWS.
Here are some additional points to consider:
- Ease of use: IAM Identity Center is easy to use for both administrators and end users.
- Scalability: IAM Identity Center can scale to meet the needs of large organizations.
- Security: IAM Identity Center is a secure solution that uses industry-standard security protocols.
- Support: IAM Identity Center is covered by your AWS Support plan.
Organizational Trail
An Organizational Trail is a CloudTrail trail that logs all events for all AWS accounts in an organization. This allows you to track all activity in your organization, regardless of which account it originates from.
Organizational Trails are created and managed in the management account of the organization. Users in member accounts do not have permissions to modify or delete an Organizational Trail.
By default, an Organizational Trail logs all management events for all accounts in the organization. You can also choose to log data events, which provide information about the resource operations performed on or in a resource.
The log files for an Organizational Trail are stored in an Amazon S3 bucket that you specify. You can also configure CloudTrail to deliver the log files to Amazon CloudWatch Logs.
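As a minimal sketch (the trail and bucket names are placeholders, and the S3 bucket needs a policy that allows CloudTrail to write to it), creating and starting an organization trail from the management account with the AWS CLI looks like this:
aws cloudtrail create-trail \
  --name org-trail \
  --s3-bucket-name my-org-trail-bucket \
  --is-organization-trail \
  --is-multi-region-trail
aws cloudtrail start-logging --name org-trail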
Organizational Trails can be a valuable tool for auditing activity in your organization and investigating security incidents. They can also help you to comply with regulatory requirements.
Here are some of the benefits of using Organizational Trails:
- Centralized auditing: Organizational Trails allow you to track all activity in your organization from a single location. This can make it easier to identify suspicious activity and investigate security incidents.
- Compliance: Organizational Trails can help you to comply with regulatory requirements that mandate the auditing of AWS activity.
- Cost savings: Organizational Trails can help you to save money by reducing the need to create and manage separate trails for each account in your organization.