Securing your database instance on the cloud from unauthorized access or modifications

Keith Tay · Published in Geek Culture · Apr 15, 2022 · 7 min read

If you follow cybersecurity news, you will notice that many database leaks and data-tampering incidents result from misconfigurations or weak configurations. Just this month, a Fox News database with 13 million records was exposed online through an unprotected database. In the cyber warfare space, the hacking group “Anonymous” exposed scores of databases belonging to the Russian Federation. Based on the referenced article, the research firm found that 92 out of 100 databases it sampled had been compromised via non-password-protected cloud storage repositories. It was observed that, apart from the database contents being accessed, files were deleted and a cyber-vandalism campaign was launched to send a message.

Image 1: Database publicly accessible.

The call to action for all readers is: never leave your cloud database publicly accessible. By analogy, if you place a safe in the middle of a street, people will simply attempt to open it. Even if the safe requires a combination, it is only a matter of time before an adversary circumvents that control. Likewise, the Internet is an ‘open’ space: adversaries can scan the network and attempt to break into your database if it is publicly accessible.

For all developers, DevOps engineers, IT and security practitioners, there must be a mindset that once your database is made publicly available, intentionally or not, attacks on it can begin within the first hour. As documented by Imperva’s honeypot research, attackers will attempt to connect to the database and brute-force the authentication (if any).

*This article uses Amazon Web Services (AWS) as an illustration. Remember, security is a shared responsibility when using a cloud service provider.

The common root causes for a cloud database being made publicly accessible to everyone are:

Lack of secure design considerations: During the design phase of a system or application, it is paramount to evaluate the design and architecture holistically through threat modelling, secure design patterns and reference architectures. AWS documentation recommends running public-facing systems (e.g., web servers) in a public subnet and back-end services (e.g., application and database servers) in a private subnet. This improves the security posture because adversaries cannot directly reach your internal resources. The secure design phase is often omitted, or corners are cut, to shorten time to market.

Where there is a business requirement for the database to be publicly available, the organization should determine the potential threats, apply the necessary security controls and evaluate the residual risk.

*Note: In 2021, OWASP created a new category, “A04:2021 Insecure Design”, as one of the top 10 most critical application security risks.

Weak or misconfigured network policy: Several network controls regulate connections to the database, for example security groups, network ACLs, and network or host-based firewalls. Developers can be careless (intentionally or unintentionally) by allowing all connections (0.0.0.0/0), allowing connections from an overly broad CIDR block, and/or using the default network ACL (which allows all inbound and outbound traffic) or a public subnet (where the main route table sends the subnet’s Internet-bound traffic to the internet gateway).
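To make the contrast concrete, here is a minimal boto3 sketch: the commented-out rule is the anti-pattern (database port open to 0.0.0.0/0), and the active rule only admits traffic from the application tier’s security group. The security group IDs and port are placeholders, not values from any real environment.

```python
import boto3

ec2 = boto3.client("ec2")

DB_SG_ID = "sg-0db1234567890abcd"   # placeholder: security group attached to the DB instance
APP_SG_ID = "sg-0app123456789abcd"  # placeholder: security group of the application tier

# Anti-pattern: MySQL port open to the entire Internet.
# ec2.authorize_security_group_ingress(
#     GroupId=DB_SG_ID,
#     IpPermissions=[{
#         "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
#         "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
#     }],
# )

# Better: only the application tier's security group may reach the database port.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": APP_SG_ID,
                              "Description": "App tier to MySQL only"}],
    }],
)
```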

Lack of visibility and policy enforcement: As an organization scales, it becomes hard to keep track of every developer’s actions on the cloud (possibly across multiple AWS accounts). Organizations often give developers full autonomy on the cloud without architecting security baselines for cloud deployments. Even if an internal process (a non-technological control) has been articulated to developers, human error and carelessness can still result in a misconfigured database.

I would like to share three steps to better secure your database instance on the cloud from unauthorized access and modification (they apply to other cloud use cases and services as well).

Step 1: Always plan the design (including security) of your solution prior to development and list the residual risks

Image 2: An example of a system/network diagram on the cloud

The saying goes, “If you fail to plan, you plan to fail.” Before building any solution on the cloud, it is essential to plan the design and to highlight any potential risks to the stakeholders. The architecture has to be aligned with the organization's business goals and strategies, including security.

Let us use a simple three-tier architecture as an example: a web server, an application server and a database server. Should we deploy all three servers in the public subnet, or should we follow security best practice and deploy the web server in the public subnet and the application and database servers in a private subnet? Frankly, there is no straight answer to this scenario; it depends on your business requirements. Is there a use case that requires the application and database servers to be reachable from the public Internet? Subsequently, perform threat modelling and evaluate which security controls can eliminate or mitigate the identified risks; allowing public access opens up a larger attack surface. Depending on the budget and risk appetite of the organization, the risk may be accepted, mitigated or avoided. If it is avoided, an alternative design has to be considered until the stakeholders are comfortable with the risk.
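To make the “database in a private subnet” option concrete, here is a minimal boto3 sketch. It assumes two private subnets already exist; the identifiers, credentials and instance sizes are placeholders, not a production-ready configuration.

```python
import boto3

rds = boto3.client("rds")

# Group the private subnets so RDS only places the instance there (subnet IDs are placeholders).
rds.create_db_subnet_group(
    DBSubnetGroupName="private-db-subnets",
    DBSubnetGroupDescription="Private subnets reserved for database instances",
    SubnetIds=["subnet-0priv1example", "subnet-0priv2example"],
)

# Launch the database with no public reachability, encryption at rest and deletion protection.
rds.create_db_instance(
    DBInstanceIdentifier="orders-db",
    Engine="mysql",
    DBInstanceClass="db.t3.micro",
    AllocatedStorage=20,
    MasterUsername="admin",
    MasterUserPassword="change-me",  # placeholder; prefer AWS Secrets Manager in practice
    DBSubnetGroupName="private-db-subnets",
    VpcSecurityGroupIds=["sg-0db1234567890abcd"],  # placeholder DB security group
    PubliclyAccessible=False,
    StorageEncrypted=True,
    DeletionProtection=True,
)
```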

Security is not meant to be a blocker for organizations; instead, the risks associated with each design have to be articulated and understood.

Step 2: Leverage technological controls to enforce a security baseline

Image 3: Effective permission Venn diagram

If the organization has clear security or regulatory requirements, it is possible to translate them into policy documents on the cloud. In AWS, we can leverage Service Control Policies (SCPs) or IAM policies (with permission boundaries) to dictate which actions are allowed or denied. For example, a rule can be configured to deny the launch of any database instance in a public subnet. In addition, it is possible to enforce which security groups can be attached to a deployed database instance.
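As a minimal sketch of one such guardrail (assuming an AWS Organizations setup), the SCP below denies ad-hoc database creation to every principal except an approved deployment role, whose own IAM policy or permission boundary can then constrain subnets and security groups. The role name, policy name and organizational unit ID are placeholders, and this is only one way to approximate the kind of rule described above.

```python
import json
import boto3

orgs = boto3.client("organizations")

# Deny creating RDS instances/clusters unless the request comes from the approved deployment role.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAdHocDatabaseCreation",
        "Effect": "Deny",
        "Action": ["rds:CreateDBInstance", "rds:CreateDBCluster"],
        "Resource": "*",
        "Condition": {
            "ArnNotLike": {
                "aws:PrincipalArn": "arn:aws:iam::*:role/db-deployment-role"  # placeholder role
            }
        },
    }],
}

policy = orgs.create_policy(
    Name="restrict-db-provisioning",
    Description="Only the deployment pipeline role may create database instances",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach the SCP to an organizational unit (placeholder OU ID).
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-exampleid",
)
```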

Leveraging technological controls on the cloud helps eliminate human error and keeps developers aligned with the organization's business and security strategy.

Note: If the database needs to be made public, consider enforcing granular whitelisting of IP addresses and review the policy from time to time. Otherwise, the database instance should only allow connections from the application server.

Step 3: Perform proactive audits and scans

Image 4: Use AWS Security Hub as a detection means

An important security principle is to ensure there is sufficient visibility into the cloud. At a minimum, I would recommend AWS Security Hub, a cloud security posture management service that performs security best-practice checks, aggregates alerts, and enables automated remediation. AWS Security Hub supports multiple recognized industry standards such as CIS, PCI DSS and the AWS Foundational Security Best Practices. With regard to database security, it will raise issues such as database instances being publicly accessible, encryption at rest not being enabled, deletion protection not being enabled, database logging not being enabled, and many more. The organization can use the output from Security Hub for detection and for articulating risk where it deviates from security best practices. Furthermore, Security Hub checks are executed every 12 hours, so it gives an up-to-date posture of your cloud resources.
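Security Hub findings can also be consumed programmatically. The hedged sketch below pulls active, failed findings that relate to RDS instances; the filter values use Security Hub’s standard resource type name, and the result limit is arbitrary.

```python
import boto3

securityhub = boto3.client("securityhub")

# Fetch active findings for RDS instances that failed a security standard check.
findings = securityhub.get_findings(
    Filters={
        "ResourceType": [{"Value": "AwsRdsDbInstance", "Comparison": "EQUALS"}],
        "ComplianceStatus": [{"Value": "FAILED", "Comparison": "EQUALS"}],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    },
    MaxResults=100,
)

for finding in findings["Findings"]:
    resources = ", ".join(r["Id"] for r in finding["Resources"])
    print(f"{finding['Title']} -> {resources}")
```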

If your organization's resources are not scarce, consider having a team build automated scripts that proactively scan your deployed cloud resources. As one security researcher's write-up showed, thousands of open Elasticsearch databases were uncovered on AWS; these databases revealed customers' personal information and even production logs that could divulge the internals of an organization's network.
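A simple version of such a proactive scan can be built with boto3. The sketch below walks every RDS instance in the current account and region and flags any marked as publicly accessible.

```python
import boto3

rds = boto3.client("rds")

# List every RDS instance and flag the ones exposed to the Internet.
paginator = rds.get_paginator("describe_db_instances")
for page in paginator.paginate():
    for db in page["DBInstances"]:
        if db.get("PubliclyAccessible"):
            endpoint = db.get("Endpoint", {}).get("Address", "endpoint not yet assigned")
            print(f"[!] {db['DBInstanceIdentifier']} is publicly accessible ({endpoint})")
```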

Note: It is possible to leverage AWS Config to monitor for changes (e.g., spinning up a DB instance in a public subnet) and act upon them. In addition, AWS Config rules allow for automatic remediation of non-compliant resources. Amazon GuardDuty can also be enabled to detect anomalies or potential brute-force attempts against your DB instance.
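For instance, AWS Config ships a managed rule that flags publicly accessible RDS instances. The sketch below enables it; the rule name on the left is arbitrary, while the SourceIdentifier is the managed rule's identifier.

```python
import boto3

config = boto3.client("config")

# Enable the AWS-managed rule that checks whether RDS instances are publicly accessible.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "rds-instance-public-access-check",
        "Description": "Flags RDS instances whose PubliclyAccessible flag is true",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "RDS_INSTANCE_PUBLIC_ACCESS_CHECK",
        },
    }
)
```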

Conclusion

Migrating to the cloud has myriad benefits. But before deploying any system or application to the cloud, it is essential to ensure the design is well thought out, including alignment with the business and security goals and strategy. One small mistake, such as a misconfigured database instance or inadequate security controls around it, could result in attackers gaining unauthorized access to, or modifying, your database. We must have the mentality that whatever is exposed on the public Internet sits in an open space where anyone can attempt to break in.

As part of risk identification, it is essential to apply threat modelling to evaluate potential attacks against your draft design. Subsequently, the organization should implement the relevant security controls to eliminate or mitigate each risk and determine whether the residual risk level is something the organization can accept. Otherwise, a redesign and re-evaluation are required.

Furthermore, organizations can enforce policies through SCPs or permission boundaries to align developers' configurations with the business and security strategy. Lastly, always take proactive measures to audit and scan cloud configurations. Any deviation from the organization's baseline should be rectified and remediated immediately (this can be automated too).

Finally, security in the cloud is a shared responsibility. The cloud customer has a huge part to play in ensuring that secure configurations are enforced for the organization's deployed systems and applications.
