How to Make Your Google Cloud Platform project more secure: GCE Network Security

Aliz
6 min read · May 3, 2018


In this, the fourth article in the series, we will go through several network-level protection tools available for your Google Compute Engine instances.

Use Google Cloud Load Balancer for Inbound HTTPS Traffic

It is always good practice to reduce your attack surface. This method is an example of doing just that with incoming HTTP and HTTPS traffic. If your instance serves web requests, then you should use the Google Cloud Load Balancer to route those requests to your instance and avoid opening the HTTP and HTTPS ports directly to the Internet. This is a good idea even if you only use a single machine.

This approach has two advantages. One is that you do not disclose the individual public IP addresses of your web servers, making them harder to attack. The other is that the Google Cloud Load Balancer applies some filtering to incoming HTTP requests, so many malicious requests are dropped before they ever reach your instances.

The load balancer also lets you scale or replace your web servers while maintaining full availability of your services: you can change the backend instances behind it at any time, and multiple web instances can serve your sites simultaneously.
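
To make this concrete, the sketch below shows one possible setup with the gcloud CLI: the web instances accept HTTP only from Google's load balancer and health-check ranges (130.211.0.0/22 and 35.191.0.0/16) instead of the whole Internet, and a basic HTTPS load balancer is put in front of them. The instance group name (web-group), zone, network tag (web) and domain are placeholders; adjust them to your own project.

    # Accept HTTP only from Google's load balancer / health-check ranges
    gcloud compute firewall-rules create allow-lb-and-health-checks \
        --direction=INGRESS --action=ALLOW --rules=tcp:80 \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 --target-tags=web

    # Minimal HTTPS load balancer in front of an existing instance group "web-group"
    gcloud compute instance-groups set-named-ports web-group \
        --zone=us-central1-a --named-ports=http:80
    gcloud compute health-checks create http web-check --port=80
    gcloud compute backend-services create web-backend --global \
        --protocol=HTTP --port-name=http --health-checks=web-check
    gcloud compute backend-services add-backend web-backend --global \
        --instance-group=web-group --instance-group-zone=us-central1-a
    gcloud compute url-maps create web-map --default-service=web-backend
    gcloud compute ssl-certificates create web-cert --domains=www.example.com
    gcloud compute target-https-proxies create web-proxy \
        --url-map=web-map --ssl-certificates=web-cert
    gcloud compute forwarding-rules create web-https --global \
        --target-https-proxy=web-proxy --ports=443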

Restrict the Outbound Firewall Rules

Even if you disable every incoming request other than web requests, and even route those requests through the Google Cloud Load Balancer, you might still have a bug in the software you use in your web stack (e.g. Tomcat, NGINX) or maybe even in your application code.

If an attacker somehow manages to take control of the web processes and run arbitrary code on your web server, it is much harder for them to keep controlling that machine in the long term without communicating with it from a control node. For this reason, most attacks try to sidestep your firewall rules by opening an outgoing connection (a reverse shell) to an outside command-and-control host managed by the attacker. This way the connection is not incoming (where the firewall would block it) but outgoing (where the rules are usually much less restrictive).

You can make this type of attack much harder by disabling new outgoing connections toward the Internet in your firewall rules (either at the OS level on the virtual machines or using the firewall provided by GCE). There are two ways to go about this. One is to block only newly established outgoing connections, so every reply packet for an incoming connection can still leave the instance. A second, stricter way is to allow outgoing connections only to the IP addresses from which you want to access the machine. This way, only those machines can talk to the instance and no other outside access is possible.
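
As a rough sketch, assuming the network is called my-vpc and the trusted range is 203.0.113.0/24 (both placeholders), the two variants could look like this with GCE firewall rules. Because GCE firewall rules are stateful, reply packets of allowed incoming connections still leave the instance even with the deny rule in place.

    # Variant 1: block all new outgoing connections to the Internet
    gcloud compute firewall-rules create deny-all-egress \
        --network=my-vpc --direction=EGRESS --action=DENY \
        --rules=all --destination-ranges=0.0.0.0/0 --priority=65534

    # Variant 2 (stricter): additionally re-allow egress only to the hosts
    # that are supposed to talk to the machine
    gcloud compute firewall-rules create allow-egress-to-office \
        --network=my-vpc --direction=EGRESS --action=ALLOW \
        --rules=tcp:443 --destination-ranges=203.0.113.0/24 --priority=1000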

Set Up an HTTPS Proxy for Outbound Access

If you implement the rules from the previous section, then you might inadvertently limit some legitimate functionality of your applications. For example, if your application code accesses a third-party API during normal operations, then either you have to enable outgoing traffic in the firewall for the IP addresses of that third-party service or the code cannot access it.

If your applications only make outgoing connections over HTTP or HTTPS, then it is better to route the requests through an HTTPS (web) proxy. This way, you do not have to list every third-party API service's IP address in your firewall rules.

Install a proxy server on a separate instance and, in the firewall, allow only that instance to open new connections to the whole Internet on the HTTP and HTTPS ports. Then configure that instance's internal IP address as the proxy server on your web servers. This way you still have access to any API over HTTP or HTTPS, while an attacker (without knowledge of your network layout) cannot open new outgoing web connections from your machine by default.
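
A minimal sketch of this setup, assuming Squid as the proxy software and placeholder addresses (the internal subnet 10.128.0.0/20, the proxy's internal IP 10.128.0.5, the network my-vpc and the tag proxy are all examples):

    # On the proxy instance (tagged "proxy"): install Squid, which listens on 3128
    sudo apt-get install -y squid
    # /etc/squid/squid.conf: allow only the internal subnet to use the proxy
    #   acl internal src 10.128.0.0/20
    #   http_access allow internal
    #   http_access deny all

    # Only the proxy instance may open new web connections to the Internet
    gcloud compute firewall-rules create allow-proxy-egress \
        --network=my-vpc --direction=EGRESS --action=ALLOW \
        --rules=tcp:80,tcp:443 --destination-ranges=0.0.0.0/0 \
        --target-tags=proxy --priority=1000

    # On the web servers: send outbound web traffic through the proxy's internal IP
    export http_proxy=http://10.128.0.5:3128
    export https_proxy=http://10.128.0.5:3128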

Refine All Inbound Firewall Rules That Have 0.0.0.0/0 as a Source

If you have any inbound firewall rules where the allowed connection source is set up as the whole Internet, then you might want to reconsider those rules.

For example, if you allow SSH access from the whole Internet, you expose a very large attack surface: you have to rely on every user keeping their SSH private keys safe, and on your SSH server itself being free of security flaws.

There are multiple methods to remove, or at least reduce the number of, inbound rules with wide sources. One applies when the specific port is only used for management activities by a limited number of employees or partners; in this case, it is much better to restrict the source to the IP addresses of the offices the service is used from. If the access is for customers and the service is a web page or an API, then you should use the Google Cloud Load Balancer in front of your instances, as described earlier.
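
A quick way to audit this is to list your rules together with their source ranges and then tighten the offenders; the rule name allow-ssh and the office range 203.0.113.0/24 below are placeholders:

    # List every firewall rule with its source ranges and look for 0.0.0.0/0
    gcloud compute firewall-rules list \
        --format="table(name,network,direction,sourceRanges.list())"

    # Narrow a rule that is currently open to the whole Internet to the office range
    gcloud compute firewall-rules update allow-ssh --source-ranges=203.0.113.0/24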

If it is a special case where you have a limited audience, the requests are not web related, and you cannot know the source IP address range of the audience in advance, then you should try the methods described in the next section.

Use a VPN or a Jump Host for SSH and Other Special Port Access

If you have a specific port with a service which is not intended for public use, but rather for a limited audience, then it is good practice to not open that service up to the whole Internet.

If the audience's IP addresses are not known in advance, then there are two solutions you can try other than a firewall rule allowing the incoming access.

One solution is to create a jump host. This is a dedicated instance which enables some kind of remote access (e.g. SSH, RDP) for the whole Internet. From this instance, the special service is accessible using the internal network. As a result, your audience first connects to the jump host, then uses the service from that machine.

This approach has some prerequisites to be secure. One is that the jump host's IP address is not publicly known. Another is that the jump host has to be the most up-to-date and best-secured machine in your whole infrastructure, because it is the only one accessible from the Internet on a management port.

If you want to raise the security of this approach further, you can apply additional measures, such as moving the remote access from the well-known default port to a random one known only to your audience, or applying advanced techniques like port knocking.
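
On the client side, OpenSSH supports this pattern directly via ProxyJump. A sketch of a workstation's ~/.ssh/config, with hypothetical names, addresses and port:

    Host jump
        HostName 198.51.100.10          # public IP of the jump host
        Port 2849                       # non-default SSH port known only to your audience
        User admin
        IdentityFile ~/.ssh/id_ed25519

    Host internal-db
        HostName 10.128.0.7             # internal IP, reachable only via the jump host
        User admin
        ProxyJump jump

    # connecting is then simply: ssh internal-db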

If this method is not right for you, then you might try a VPN setup instead. In this case, you install a dedicated VPN server instance, open only the VPN server ports to the whole Internet, and use key-based authentication. This way you can reach the services on the internal network directly from your own machine. While the packets travel across the Internet they are encapsulated and encrypted, so this approach is also secure.
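
For example, with OpenVPN as the VPN server (an assumption; any VPN software works the same way), only its default port needs to be reachable from the Internet, and only on instances tagged vpn-server (a placeholder tag):

    gcloud compute firewall-rules create allow-vpn \
        --direction=INGRESS --action=ALLOW --rules=udp:1194 \
        --source-ranges=0.0.0.0/0 --target-tags=vpn-server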

Filter Traffic between Instances on the Same Network

Isolation is a very good strategy for making sure that even if everything else fails, there is still a last line of defense in your system. That last line means that only some groups of machines will be compromised, not all of them at the same time. So even if an attacker gets hold of all your SSH private keys and knows the IP addresses of some machines where the SSH port is open, they can only access those machines and no others on the private network, because an internal network rule blocks access between the machines.

It is advisable to separate your internal network into different subnets with no access between them. In the default network, all internal traffic is allowed between the instances, but you can change this according to your specific needs.
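
A sketch of such a separation with a custom-mode VPC (names, regions and ranges are placeholders): a network created this way has no implicit allow rules between instances, so nothing can talk internally until you add explicit rules for the traffic you actually need.

    gcloud compute networks create prod-vpc --subnet-mode=custom
    gcloud compute networks subnets create web-subnet \
        --network=prod-vpc --region=us-central1 --range=10.10.1.0/24
    gcloud compute networks subnets create db-subnet \
        --network=prod-vpc --region=us-central1 --range=10.10.2.0/24

    # Permit only the specific internal traffic you need,
    # e.g. web servers talking to a PostgreSQL database
    gcloud compute firewall-rules create allow-web-to-db \
        --network=prod-vpc --direction=INGRESS --action=ALLOW \
        --rules=tcp:5432 --source-tags=web --target-tags=db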

Remove the Public IP Address of an Instance if Not Used

If an instance does not provide a service outside the private network (e.g. it is a management node), then it is a good idea to remove its public IP address. This way it cannot send traffic to, or receive traffic directly from, the Internet, which greatly reduces the attack surface of that instance.
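
With the gcloud CLI this is a single step; the instance name and zone below are placeholders, and it is worth checking the actual access-config name first (gcloud-created instances usually call it external-nat, Console-created ones External NAT):

    # Look up the name of the instance's external access config
    gcloud compute instances describe mgmt-node --zone=us-central1-a \
        --format="value(networkInterfaces[0].accessConfigs[0].name)"

    # Remove it, leaving the instance with only its internal IP
    gcloud compute instances delete-access-config mgmt-node \
        --zone=us-central1-a --access-config-name="external-nat"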


Aliz

A bunch of cloud developers committed to creating something cool for you! hello@doctusoft.com