Not everyone is an AWS expert. The CLI is good for finding something specific, but spotting misconfigurations or vulnerabilities with it is really hard.
The web console is probably more intuitive, but it is unrealistic to build automation on top of it.
AWS does provide some help to manage our accounts, for example AWS Trusted Advisor.
AWS Trusted Advisor
AWS Trusted Advisor’s purpose is to help customers follow general AWS best practices.
It performs many checks across categories such as:
- Cost Optimization
- Performance
- Security
- Fault Tolerance
- Service Limits
Among them are 17 security checks, such as:
- Unrestricted access (0.0.0.0/0) to a specific port, a port range, or all ports
- Amazon S3 bucket permissions
- MFA on root account
- Exposed access keys
All these features are pretty nice for a system administrator, but they will not be enough for a pentester, security auditor, and so on. Indeed, you will not be able to add, edit or delete rules.
For example, you may want to check if a security group allows access to a specific CIDR.
Another issue with AWS Trusted Advisor is that it will mark unrestricted access on an exotic port (like 4200) as “OK” (green).
Even if AWS Trusted Advisor is pretty easy to understand, it is still an online tool: you need to sign in to console.aws.amazon.com, search for AWS Trusted Advisor, then go through the security issues. You will have access to a lot of information; arguably, too much information.
Why AWS Tower?
AWS Tower has been developed by security engineers, for security engineers. Even without AWS knowledge you can easily see the security issues, along with other information like DNS records, allowed IP addresses or open ports.
AWS Tower can be used in two ways:
- Directly from your terminal, to easily scan or discover information
- Plugged into another tool (like Patrowl Manager); indeed, we designed AWS Tower to be easily integrated into other applications thanks to its JSON output.
AWS Tower uses your AWS configuration (~/.aws/config on Linux and macOS, %USERPROFILE%\.aws\config on Windows) to scan and discover. You don’t need to specify any credentials or anything else, except the profile you want to use.
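For reference, here is a minimal sketch of what such a configuration file can look like; the profile name and region are placeholders, not anything AWS Tower requires:

```ini
# ~/.aws/config — hypothetical example profile
[profile audit-account]
region = eu-west-1
output = json
```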
How does it solve our issues?
At leboncoin we use AWS for our infrastructure, and it can be hard to stay alerted about new instances, newly opened ports or new DNS records. Indeed, it is tedious to use the AWS Console to check new instances, security groups, and so on…
That is why we use AWS Tower with an orchestrator to warn us when a new issue is found.
Furthermore, when we discover a new profile, we can easily check it with the CLI by trying two commands:
AWS Tower uses the boto3 library, which is perfect for querying the AWS API and fetching resources. Furthermore, it uses the same profiles and credentials as the AWS CLI.
AWS Tower has two modes: discovery and scan.
Discovery mode is the first step when you want to list and understand the AWS account you are heading to. In a quick glance, you can identify which assets are exposed on the internet, their custom DNS record, and so on.
For this example, this is our demo AWS account:
In this example, our AWS account has two EC2 instances that are not exposed on the internet. An ELBv2 sits in the main-vpc-private-lb subnet, with the DNS record “patrowl.my-private-domain.com”, and is not exposed either.
On the other side, “custom-nginx” is an EC2 instance that is publicly accessible.
Some options are available: --public-only narrows the search to public assets only, and --verbose displays more information, in our case the security groups.
The advantage of AWS Tower is the ability to synthesize multiple security groups and render condensed output.
The EC2 “custom-nginx” has the SSH port reachable from “192.168.0.254/32” and “192.168.1.0/24”, which are in the private address range (RFC1918). However, “220.127.116.11/32” is a public IP: the security analyst should find out who this IP belongs to and possibly remove this exception.
Also, both ports 80 and 9000 are open to the public Internet. In this example, it’s important to check which resources are exposed there.
To help the security analyst in their audit, there is the Scan mode, which outputs vulnerabilities.
For a condensed view, we recommend using the --brief mode at first, together with --min-severity medium to narrow the scan output:
The idea is to reduce the complexity of an AWS account down to only the vulnerable resources. Obviously, these rules are configurable and every severity can be adjusted.
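To illustrate the idea, here is a minimal Python sketch of severity filtering. The severity scale and the finding structure are assumptions for the example, not AWS Tower’s actual internals:

```python
SEVERITIES = ["info", "low", "medium", "high", "critical"]

def filter_findings(findings, min_severity="medium"):
    """Keep only findings at or above min_severity (hypothetical structure)."""
    floor = SEVERITIES.index(min_severity)
    return [f for f in findings if SEVERITIES.index(f["severity"]) >= floor]

findings = [
    {"title": "SSH open to 0.0.0.0/0", "severity": "high"},
    {"title": "Instance has no Name tag", "severity": "info"},
]
print(filter_findings(findings))  # only the "high" finding remains
```

This mirrors what the --min-severity flag does for the scan output.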
About the rules, they are separated in two data sources: security-group or metadata.
This rule detects whether all ports of an instance are reachable from a public network, which is bad.
The output message is templated with variables from the security group, such as “sg_name”, “ports” and “source”.
Two rules apply:
- Check whether “all” is in the “ports” variable, with the “in” rule: is the “all” constant inside the “ports” variable?
- Check whether it’s a private network, with the “is_private_cidr” rule: does is_private_cidr(source) equal “false”?
This rule detects a deprecated version of an RDS MySQL instance.
The output message is templated with variables such as “current_version”, taken from the metadata field.
Two rules apply:
- Check whether a version is available, with the “in” rule: is the “Engine” constant inside the “metadata” variable?
- Check whether the version is below our threshold of 5.2.0, with the “engine_deprecated_version” rule: is metadata["engine"]’s version below 5.2.0 for engine “mysql”?
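A minimal Python re-implementation of this rule could look like the following; the metadata key names are assumptions based on the description above, not the tool’s real schema:

```python
def engine_deprecated_version(metadata: dict, engine: str, threshold: str) -> bool:
    # Hypothetical stand-in for the rule; real metadata keys may differ
    if metadata.get("Engine") != engine:
        return False
    # Compare versions numerically, component by component
    current = tuple(int(part) for part in metadata["EngineVersion"].split("."))
    return current < tuple(int(part) for part in threshold.split("."))

rds_metadata = {"Engine": "mysql", "EngineVersion": "5.1.73"}
print(engine_deprecated_version(rds_metadata, "mysql", "5.2.0"))  # True: below 5.2.0
```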
It can take time to understand and use our rule system, but it’s not a core feature, mostly a custom setting for tweakers.
The default rule set is ready for a production environment, with a low rate of false positives.
Also, in the case of a private subnet you use in your production, it’s possible to add entries to the subnet_allow_list.txt file so that they are considered a “private network”.
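The allow-list mechanism can be sketched as follows; the file format (one CIDR per line) and the helper names are assumptions for the example:

```python
import ipaddress

def load_allow_list(text: str):
    # One CIDR per line, blank lines ignored (subnet_allow_list.txt style)
    return [ipaddress.ip_network(line.strip())
            for line in text.splitlines() if line.strip()]

def treated_as_private(source: str, allow_list) -> bool:
    net = ipaddress.ip_network(source, strict=False)
    # Private per RFC1918-style ranges, or explicitly allow-listed
    return net.is_private or any(net.subnet_of(allowed) for allowed in allow_list)

allow = load_allow_list("100.64.0.0/10\n")
print(treated_as_private("100.64.12.0/24", allow))  # True: covered by the allow list
print(treated_as_private("8.8.8.0/24", allow))      # False
```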
At leboncoin, we have several AWS accounts, and it’s hard to switch between all of them to find vulnerabilities. The answer is automation.
With AWS Tower, it’s possible to run it as a Lambda function on a regular schedule, to alert us about new vulnerabilities.
We run it every 15 minutes and the findings are stored in our Patrowl Manager. We rely on Patrowl alerting to combine Slack and email alerts, depending on the severity of the vulnerability.
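As a sketch, such a 15-minute schedule can be expressed with an EventBridge rule. This CloudFormation fragment is purely illustrative: the resource names are made up, and the Lambda function itself is assumed to be defined elsewhere in the stack:

```yaml
AwsTowerSchedule:
  Type: AWS::Events::Rule
  Properties:
    ScheduleExpression: rate(15 minutes)
    State: ENABLED
    Targets:
      - Id: aws-tower-lambda
        Arn: !GetAtt AwsTowerFunction.Arn   # hypothetical Lambda resource
```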
This is our Patrowl dashboard: it runs multiple scans all day long, and could easily run additional ones on our exposed assets, for instance a dirbuster scan on every asset with an open HTTP or HTTPS port.
Here, it displays the information of our previously vulnerable asset, “custom-nginx”.
The high findings are acknowledged. For us, it means that our alerting process (for medium and high only) has successfully warned us on Slack.
Despite the abstraction of the tool, some AWS knowledge is still needed to understand some assessments.
Also, without filters the output can be really large on a big AWS account. I advise starting with --min-severity medium to eliminate info and low findings.
Colors are missing, which makes it harder to spot real issues, and the lack of advanced filters doesn’t help much. Output in JSON or CSV would be appreciated in the future, especially for automation.
Also, only a limited number of asset types are handled: EC2, ELBv2 (which covers ALB, ELB and NLB), RDS and Route53. More will arrive soon, like S3, ApiGateway, EC2 or ElastiCache.
This was a very interesting project. We are now able to automate AWS security checks and help our security analysts deal with unknown AWS accounts.
The tool is still under development, but you can contribute to or watch our public GitHub repository: https://github.com/leboncoin/aws-tower