Internet Access for Private/Sensitive Workloads with Squid Proxy and Private Service Connect on Google Cloud Platform (GCP)

Hassene BELGACEM
Google Cloud - Community
7 min read · Jul 26, 2023

This article was inspired by an engaging discussion I had recently about the different methods to architect a hub and spoke network topology in the Google Cloud Platform (GCP), tailored to the unique security requirements of your workload. My focus in this piece will be on private/sensitive workloads, i.e., those that do not necessitate internet exposure but may still require internet access.

You might question why I made this particular choice. The reasoning is rooted in the fact that a significant share of enterprise workloads are inherently private. Managed internally and accessed via the office LAN or a client-to-site VPN, these workloads are kept away from public access. Some of them carry high security requirements that demand robust isolation. It's vitally important to approach this isolation in a way that is not only effective and efficient but also aligned with the gold standards of network security.

To add a touch of real-world application to the design, we will be leveraging a hub-and-spoke network topology. Esteemed for its scalability and robustness, this model is preferred by countless businesses. We will journey into the specifics of securing these private workloads within this network framework, aiming to provide a holistic understanding of network design and security implementation.

What is Private Service Connect?

Private Service Connect (PSC) is a feature of Google Cloud that enables private communication between Virtual Private Cloud (VPC) networks and managed services, without needing to expose these services to the public internet. It lets users connect to services over internal IP addresses while keeping traffic entirely within Google Cloud. It supports access to Google published services, third-party services, intra-organization services, and Google APIs. Private Service Connect is designed around service-oriented architecture, explicit authorization, no shared dependencies, and line-rate performance. It provides multiple connection types, such as endpoints and backends, offering flexible access to managed services. It's an excellent tool for enhancing security, scalability, and performance in cloud networking.
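
To make the endpoint connection type concrete, here is a minimal sketch showing how a PSC endpoint can expose the bundle of Google APIs inside a VPC. The names psc-google-apis and pscgapis, the IP address, and the network name are illustrative placeholders, not part of the design built below:
# Reserve an internal IP address for the endpoint (placeholder values)
gcloud compute addresses create psc-google-apis \
--global \
--purpose=PRIVATE_SERVICE_CONNECT \
--addresses=10.255.0.2 \
--network=your-vpc-network
# Create the endpoint targeting the all-apis bundle
gcloud compute forwarding-rules create pscgapis \
--global \
--network=your-vpc-network \
--address=psc-google-apis \
--target-google-apis-bundle=all-apis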

Network design

Now, it's time to explore a practical application where we implement a hub-and-spoke network topology on Google Cloud Platform (GCP), based on the solution and security best practices described above.

Egress-only Hub and Spoke Design

Starting from the left, we’ve designed a hub network that acts as a segregating force between different projects, with each one represented by a standard VPC Network. To preserve this isolation while enabling only egress traffic, we’ll establish a conventional hub in the “Org Hub” project, represented as “VPC Hybrid Hub” in our diagram. This hub will be connected to each spoke (or VPC) using Private Service Connect (PSC).

Regarding the management of ingress and egress traffic from the internet, we propose using a proxy appliance. Depending on your specific needs, you can choose Google Cloud’s Secure Web Proxy managed service or opt for the Squid proxy. Regardless of your choice, we recommend having a dedicated network for managing traffic from and to the internet, as illustrated in the provided diagram.

Since our design targets private and high-security workloads that require a high level of isolation, we won’t use Shared VPCs, which are typically used for hosting multiple applications on the same network. Instead, we choose the classic VPC Network model, which best suits our scenario.

How to build this design?

In this section, we are going to deep dive into the procedure for constructing a simplified version of our design with only one spoke. For the web proxy, we will use Squid, as it is a mature and production-proven solution; you can also replace it with Cloud Secure Web Proxy as described here. We will now unfold this process step by step, elaborating on each aspect in detail to provide a comprehensive understanding of how to achieve our target design.

  • Step 0: We will start by setting the necessary environment variables; this will simplify the installation steps.
export PROJECT_ID="your-project-id"
export REGION="your-region" # ex: europe-west3
export HUB_NETWORK_NAME="hub-network"
export HUB_SUBNET_NAME="hub-subnet"
export HUB_PSC_SUBNET_NAME="psc-subnet"
export SPOKE_NETWORK_NAME="spoke1-network"
export SPOKE_SUBNET_NAME="spoke1-subnet"
export SPOKE_PSC_ENDPOINT_IP="10.0.1.5"
export TEMPLATE_NAME="l7-web-proxy-tmpl"
export MIG_NAME="l7-web-proxy-mig"
export LOAD_BALANCER_NAME="l7-web-proxy-lb"
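
Before running the commands below, make sure gcloud points at the right project and the Compute Engine API is enabled (a small preparatory check, not part of the original walkthrough):
# Point gcloud at the target project
gcloud config set project $PROJECT_ID
# Enable the Compute Engine API if it is not already active
gcloud services enable compute.googleapis.com --project=$PROJECT_ID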
  • Step 1: To begin, it is necessary to establish the prescribed network structure, consisting of a central Hub network and at least one connected Spoke network.
# Create a Hub custom Network and its subnet
gcloud compute networks create $HUB_NETWORK_NAME \
--project=$PROJECT_ID \
--subnet-mode=custom
gcloud compute networks subnets create $HUB_SUBNET_NAME \
--project=$PROJECT_ID \
--network=$HUB_NETWORK_NAME \
--range=192.168.0.0/24 --region=$REGION

# Create Health check firewall rule
gcloud compute firewall-rules create hub-allow-health-checks \
--network=$HUB_NETWORK_NAME \
--action=ALLOW \
--direction=INGRESS \
--source-ranges=35.191.0.0/16,130.211.0.0/22 \
--target-tags=l7-web-proxy \
--rules=tcp:3128
# Create private firewall rule
gcloud compute firewall-rules create hub-allow-private-ingress \
--network=$HUB_NETWORK_NAME \
--action=ALLOW \
--direction=INGRESS \
--source-ranges=192.168.0.0/16,10.0.0.0/8 \
--target-tags=l7-web-proxy \
--rules=tcp:3128,tcp:3126,tcp:3127
# Allow egress traffic
gcloud compute firewall-rules create hub-allow-egress \
--network=$HUB_NETWORK_NAME \
--action=allow \
--direction=EGRESS \
--rules="tcp:0-65535,udp:0-65535" \
--destination-ranges="0.0.0.0/0"
# Create a Spoke custom Network and its subnet
gcloud compute networks create $SPOKE_NETWORK_NAME \
--project=$PROJECT_ID \
--subnet-mode=custom
gcloud compute networks subnets create $SPOKE_SUBNET_NAME \
--project=$PROJECT_ID \
--network=$SPOKE_NETWORK_NAME \
--range=10.0.1.0/24 --region=$REGION
# Delete default internet gateway Route for spoke Network
ROUTE_NAME=$(gcloud compute routes list --filter="network: $SPOKE_NETWORK_NAME AND nextHopGateway:default-internet-gateway" --format="value(name)")
gcloud compute routes delete $ROUTE_NAME --quiet
# Allow egress traffic
gcloud compute firewall-rules create spoke-allow-egress \
--network=$SPOKE_NETWORK_NAME \
--action=allow \
--direction=EGRESS \
--rules="tcp:0-65535,udp:0-65535" \
--destination-ranges="0.0.0.0/0"
# Create private firewall rule
gcloud compute firewall-rules create spoke-allow-private-ingress \
--network=$SPOKE_NETWORK_NAME \
--direction="INGRESS" \
--action="ALLOW" --rules=all \
--source-ranges="0.0.0.0/0"
Step 1 result
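
Before moving on, you can sanity-check this step: both networks should be visible, and the spoke should no longer have a default route to the internet gateway (a quick verification sketch, not part of the original walkthrough):
# Both networks should appear, each with its subnet
gcloud compute networks list --project=$PROJECT_ID
# This should return nothing, confirming the spoke has no internet route
gcloud compute routes list --project=$PROJECT_ID \
--filter="network:$SPOKE_NETWORK_NAME AND nextHopGateway:default-internet-gateway"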
  • Step 2: Next, it's time to set up the Squid proxy appliance, positioning it behind an Internal TCP/UDP Load Balancer. For the sake of brevity, and considering that I've already provided a comprehensive step-by-step guide in a previous article, I won't delve deep into the details here. Instead, I'll present the necessary script to accomplish this task.
#Cloud init startup file
cat > startup.yml <<EOF
#cloud-config
runcmd:
- add-apt-repository universe
- apt update
# Install Squid 5 with HTTPS Decryption
- curl -L https://raw.githubusercontent.com/belgacem-io/gcp-secure-web-proxy/main/modules/gcp_squid_proxy/files/squid.sh | bash
# Install clamav and clamav squid Adapter
- curl -L https://raw.githubusercontent.com/belgacem-io/gcp-secure-web-proxy/main/modules/gcp_squid_proxy/files/clamav.sh | bash
- systemctl restart squid
- sysctl -p
EOF
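
The actual proxy configuration is installed by the squid.sh script referenced above. To give a rough idea of what such a configuration contains, here is a minimal, hypothetical squid.conf sketch (not the real file from the repository):
cat > /etc/squid/squid.conf <<'EOF'
# Listen on the port targeted by the load balancer and firewall rules
http_port 3128
# Only accept clients from our private ranges
acl localnet src 10.0.0.0/8 192.168.0.0/16
http_access allow localnet
# Deny everything else
http_access deny all
EOF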

# Create the instance template
gcloud compute instance-templates create $TEMPLATE_NAME \
--project=$PROJECT_ID \
--region=$REGION \
--machine-type=e2-medium \
--network=$HUB_NETWORK_NAME \
--subnet=$HUB_SUBNET_NAME \
--image-project=ubuntu-os-cloud \
--image-family=ubuntu-minimal-2004-lts \
--tags l7-web-proxy \
--metadata user-data="$(cat startup.yml)",enable-oslogin=TRUE
#Create a managed instance group using the template
gcloud compute instance-groups managed create $MIG_NAME \
--project=$PROJECT_ID \
--region=$REGION \
--base-instance-name=$MIG_NAME \
--template=$TEMPLATE_NAME \
--size=2
# Create Health check
gcloud compute health-checks create tcp $LOAD_BALANCER_NAME \
--project=$PROJECT_ID \
--region=$REGION \
--port 3128
# Create load balancer
gcloud compute backend-services create $LOAD_BALANCER_NAME \
--project=$PROJECT_ID \
--protocol=TCP \
--region=$REGION \
--load-balancing-scheme=INTERNAL \
--health-checks-region=$REGION \
--health-checks=$LOAD_BALANCER_NAME
gcloud compute backend-services add-backend $LOAD_BALANCER_NAME \
--project=$PROJECT_ID \
--region=$REGION \
--instance-group=$MIG_NAME \
--instance-group-region=$REGION
gcloud compute forwarding-rules create $LOAD_BALANCER_NAME \
--project=$PROJECT_ID \
--region=$REGION \
--load-balancing-scheme=internal \
--subnet=$HUB_SUBNET_NAME \
--backend-service=$LOAD_BALANCER_NAME \
--ip-protocol=TCP \
--ports=3128
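
Once the instances have finished booting (the cloud-init run takes a few minutes), you can check that the load balancer sees healthy backends; a quick verification, assuming the names defined in Step 0:
# Both proxy instances should report HEALTHY
gcloud compute backend-services get-health $LOAD_BALANCER_NAME \
--project=$PROJECT_ID \
--region=$REGION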
  • Step 3: Having successfully established the hub and spoke networks, and set up the Squid proxy, we now shift our focus towards configuring Private Service Connect. We’ll guide you through the steps of publishing the proxy’s internal load balancer as a service and creating its associated endpoint in the spoke. Let’s dive into the specifics of this pivotal stage.
# Create PSC Subnet
gcloud compute networks subnets create $HUB_PSC_SUBNET_NAME \
--project=$PROJECT_ID \
--network=$HUB_NETWORK_NAME \
--purpose="PRIVATE_SERVICE_CONNECT" \
--range=192.168.2.0/23 --region=$REGION

# Create the Service Attachment for the Proxy Load Balancer
gcloud compute service-attachments create ${LOAD_BALANCER_NAME}-svc-attachment \
--region=$REGION \
--project=$PROJECT_ID \
--producer-forwarding-rule="$LOAD_BALANCER_NAME" \
--connection-preference="ACCEPT_AUTOMATIC" \
--nat-subnets="$HUB_PSC_SUBNET_NAME"
# Reserve endpoint Ip Address
gcloud compute addresses create spoke-psc-ip-address \
--region=$REGION \
--project=$PROJECT_ID \
--subnet=$SPOKE_SUBNET_NAME \
--addresses="$SPOKE_PSC_ENDPOINT_IP"
# Create the Private Service Connect Endpoint for the Spoke Network
gcloud compute forwarding-rules create ${LOAD_BALANCER_NAME}-psc-endpoint \
--project=$PROJECT_ID \
--region=$REGION \
--network=$SPOKE_NETWORK_NAME \
--target-service-attachment=$LOAD_BALANCER_NAME-svc-attachment \
--address="spoke-psc-ip-address"
Step 3 results
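
To confirm that the producer side accepted the endpoint, you can inspect the connection status of the forwarding rule; a quick check, assuming the names used above:
# Expect ACCEPTED once the service attachment admits the connection
gcloud compute forwarding-rules describe ${LOAD_BALANCER_NAME}-psc-endpoint \
--project=$PROJECT_ID \
--region=$REGION \
--format="value(pscConnectionStatus)"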

Test and validate this design?

Testing and validating this design is straightforward. To do so, a virtual machine (VM) can be created within the spoke network, acting as a representative client for the network's endpoints. Using the curl command, it is possible to simulate internet access and evaluate the network's ability to establish connections, route traffic, and resolve DNS queries.

  • Step 1: Create a client virtual machine within the spoke network
gcloud compute instances create client-vm \
--project=$PROJECT_ID \
--zone=${REGION}-a \
--machine-type=e2-medium \
--network=$SPOKE_NETWORK_NAME \
--subnet=$SPOKE_SUBNET_NAME \
--tags client-vm --metadata enable-oslogin=TRUE

# Allow ssh ingress traffic
gcloud compute firewall-rules create spoke-allow-ssh-ingress \
--project=$PROJECT_ID \
--network=$SPOKE_NETWORK_NAME \
--action=allow \
--direction=INGRESS \
--rules=tcp:22 \
--target-tags=client-vm
  • Step 2: At this stage, establish an SSH connection to the virtual machine you just created; the simplest way is to use the "SSH" button in the console. Then verify internet connectivity through the web proxy you've set up:
curl --proxy http://$SPOKE_PSC_ENDPOINT_IP:3128 google.com
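
As an optional extra check, you can verify that HTTPS traffic also flows through the proxy and that direct internet access fails, demonstrating that the isolation actually holds (a small sketch; expand or type the endpoint IP manually on the VM):
# HTTPS through the proxy should return response headers
curl --proxy http://$SPOKE_PSC_ENDPOINT_IP:3128 -I https://www.google.com
# A direct request should time out, since the spoke has no internet route
curl --max-time 5 https://www.google.com || echo "direct access blocked, as expected"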

Conclusion

In closing, the lessons learned here serve as a blueprint for anyone aiming to optimize their network security and operational efficiency. This article has taken us through an exploration of private workloads within the Google Cloud Platform (GCP) environment. We've seen that handling these workloads effectively and securely demands a robust level of isolation, and we've applied this principle within a hub-and-spoke network topology.
