Securing External Resource Access for AWS EKS Nodes in Private Subnets Using Fixed Egress IPs and NAT Gateways

Mehmet Kanus
Published in Hedgus · 8 min read · Feb 15, 2024

In Amazon EKS deployments, placing worker node groups in private subnets improves security, but those nodes do not receive public IPs by default, so they cannot reach resources or services outside the cluster. A practical solution is to route the private subnets through a NAT Gateway that has an Elastic IP associated with it: every node in the private subnet then egresses through a single, fixed public IP that external services can whitelist. In this article, we walk through that setup step by step, from creating the VPC and NAT Gateway to configuring the EKS cluster and its node groups.

  • So where, how, and for what purposes do we use such a setup?
  1. Let’s say our database is installed in another cluster or on a different resource, meaning it is set up externally. To access the external database from within the application through a fixed Elastic IP/static IP, and to ensure our address is whitelisted rather than ending up blacklisted, we can pin the egress IP and whitelist it.
  2. Persistent Connectivity: Elastic IPs in cloud environments, like AWS, offer persistent public IPs that can be associated with and disassociated from instances, ensuring consistent external access.
  3. Ease of Access and Management: Elastic IPs make it easier to manage cloud resources by allowing instances to be associated with a fixed IP, streamlining access and configurations.
  4. Avoiding IP Changes: Elastic IPs prevent disruptions caused by instance restarts or reconfigurations, maintaining a consistent entry point for external connections.
  5. Whitelisting and Security: Similar to static IPs, elastic IPs can be whitelisted to enhance security by allowing access only to specified IP addresses.
  6. External Service Integration: Cloud-based elastic IPs are particularly useful for integrating with external services in a scalable and flexible manner.
  7. Failover and Redundancy: Cloud providers often offer features like Elastic IP migration to enable failover and redundancy strategies, enhancing the availability of services.

Step-1: Creating the VPC: When we click on the “create VPC” tab in the VPC service, we can start creating a VPC by choosing between the options “VPC only” or “VPC and more.” For convenience and speed, I will create the VPC using the “VPC and more” option.

Creating the VPC this way automatically generates the Internet Gateway and the route tables. The only remaining step in the later stages is to attach the NAT Gateway route to the private route table.
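For reference, here is a minimal AWS CLI sketch of the same topology; the CIDR blocks, Availability Zone, and variable names are illustrative, and the “VPC and more” wizard also creates the route tables and associations for you.

# Illustrative CIDRs and AZ; the console wizard does all of this automatically.
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
  --query 'Vpc.VpcId' --output text)
PUB_SUBNET=$(aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.1.0/24 \
  --availability-zone us-east-1a --query 'Subnet.SubnetId' --output text)
PRIV_SUBNET=$(aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.2.0/24 \
  --availability-zone us-east-1a --query 'Subnet.SubnetId' --output text)
IGW_ID=$(aws ec2 create-internet-gateway \
  --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --vpc-id "$VPC_ID" --internet-gateway-id "$IGW_ID"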

Step-2: Allocate Elastic IP address: I will create the worker nodes of AWS EKS in a private subnet. To achieve this, I first allocate an Elastic IP, then create a NAT Gateway, and finally point the private route table at the NAT Gateway.
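The CLI equivalent is a single call; the allocation ID it returns is what the NAT Gateway will reference in the next step.

# Allocate an Elastic IP in the VPC scope and capture its allocation ID.
ALLOC_ID=$(aws ec2 allocate-address --domain vpc \
  --query 'AllocationId' --output text)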

Step-3: Create NAT Gateway: We now create the NAT Gateway. The crucial point is that the NAT Gateway must be created within the public subnet, because it needs a route to the Internet Gateway in order to reach the outside world, even though it serves the private subnet.
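A CLI sketch of the same step, reusing the illustrative public subnet and allocation IDs from above:

# The NAT Gateway lives in the PUBLIC subnet and uses the Elastic IP.
NAT_ID=$(aws ec2 create-nat-gateway --subnet-id "$PUB_SUBNET" \
  --allocation-id "$ALLOC_ID" --query 'NatGateway.NatGatewayId' --output text)
aws ec2 wait nat-gateway-available --nat-gateway-ids "$NAT_ID"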

Step-4: Configuring NAT Gateway for Outbound Access: We add a default route to the private route table that targets the NAT Gateway, so the worker nodes in the private subnet can reach the outside world.
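In route-table terms, this means adding a 0.0.0.0/0 route that points at the NAT Gateway. A CLI sketch, with a placeholder route table ID:

# rtb-xxxxxxxx is the PRIVATE route table created by the VPC wizard.
aws ec2 create-route --route-table-id rtb-xxxxxxxx \
  --destination-cidr-block 0.0.0.0/0 --nat-gateway-id "$NAT_ID"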

Step-5: Create Cluster: We create a Kubernetes cluster by clicking on the “create cluster” tab in Elastic Kubernetes Service.

  • We create the cluster service role in IAM and attach the required policy to it. (A CLI sketch of this role follows this list.)
  • After making the necessary settings on the first page, we click “next.” On the second screen that appears, as shown, we select the created VPC and all associated subnets.
  • Since there is no need for any additional settings on the other pages, we simply click “create” on the last page and complete the creation of the cluster.
  • The cluster is now ready and active. We can proceed to create worker node groups.
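For reference, a minimal sketch of the cluster service role created above, assuming an illustrative role name of eksClusterRole; AmazonEKSClusterPolicy is the managed policy EKS requires on this role.

# Trust policy that lets the EKS control plane assume the role.
cat > eks-cluster-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "eks.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF
aws iam create-role --role-name eksClusterRole \
  --assume-role-policy-document file://eks-cluster-trust.json
aws iam attach-role-policy --role-name eksClusterRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy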

Step-6: First, we will create a nodegroup in a private subnet, and then a second nodegroup in a public subnet.

  • Let’s create a nodegroup in a private subnet.
  • We give the node group a name and attach a node IAM role to it. We create this role in IAM with the standard worker node policies (AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, AmazonEC2ContainerRegistryReadOnly).
  • We add a label and a taint to the node group, since we will schedule all application deployments/pods onto the nodes in the private subnet.
  • Similarly, on the next page, we provide node details and specify auto-scaling features.
  • On the next page, we specify subnet details. Here, we designate our node group to be in only one private subnet. Our goal is to keep the egress IP of all nodes created in this subnet constant, ensuring that it is the Elastic IP assigned to the NAT Gateway.
  • Finally, we proceed to the next page and click on the “create” tab. (A CLI equivalent of this node group is sketched right after this list.)
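As mentioned above, here is a hedged CLI equivalent of the private node group; the cluster name, node role ARN, and scaling numbers are placeholders, while the label and taint match what we configured in the console.

# A single private subnet, so every node egresses through the same NAT Gateway.
aws eks create-nodegroup \
  --cluster-name my-cluster \
  --nodegroup-name private-ng \
  --node-role arn:aws:iam::<account-id>:role/eksNodeRole \
  --subnets "$PRIV_SUBNET" \
  --scaling-config minSize=1,maxSize=3,desiredSize=1 \
  --labels node=private \
  --taints key=node,value=private,effect=NO_SCHEDULE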

Step-7: Similarly, we will create a second nodegroup; however, we will deploy this nodegroup in the public subnet within the same Availability Zone (AZ) without introducing label and taint features.

  • As seen in the image below, we have created two nodegroups within the cluster, one public and one private.
  • Our cluster has been completely created with all nodes. We connected to the cluster from the terminal and verified the presence of the nodes.
  • As seen, two nodes have been created. The node formed in the public subnet receives an EXTERNAL-IP, while the node formed in the private subnet does not receive an EXTERNAL-IP.
  • In the same way, by examining the public IPs of instances from the EC2 service, we can determine whether the nodes in both public and private subnets receive public IPs.
  • However, when connecting to the cluster with Lens and executing a command like curl canhazip.com on the nodes in the private subnet, we observe that their egress IP is the Elastic IP associated with the NAT Gateway. (A Lens-free way to verify this is sketched below.)
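If you are not using Lens, one way to run the same check is from a throwaway pod pinned to the private node group; this sketch assumes the node=private label and taint from Step-6.

kubectl get nodes -o wide   # EXTERNAL-IP shows <none> for the private node
# Run curl from a pod scheduled onto a private node; the printed address
# should be the Elastic IP attached to the NAT Gateway.
kubectl run egress-test --rm -it --restart=Never --image=curlimages/curl \
  --overrides='{"spec":{"nodeSelector":{"node":"private"},"tolerations":[{"key":"node","operator":"Equal","value":"private","effect":"NoSchedule"}]}}' \
  --command -- curl -s canhazip.com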

Step-8: Now, let’s install a sample application on the cluster along with the Ingress-Nginx controller.

  • Let’s install Ingress-Nginx using Helm with the following values file for the controller.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx --create-namespace --namespace nginx -f helm-nginx.yaml
# helm-nginx.yaml
controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-0e0c91e9baca4b441 # public subnet ID
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
      service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
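Once the chart is installed, the controller Service should receive an internet-facing NLB in the specified public subnet; ingress-nginx-controller is the default Service name created by this chart.

kubectl get svc -n nginx ingress-nginx-controller
# EXTERNAL-IP shows the NLB DNS name once AWS finishes provisioning it.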
  • Previously, when creating the nodegroup in the private subnet, we specified label and taint features. Now, by adding the following nodeSelector and tolerations to the deployment/pods spec block of the applications we are going to create, the application pods will be deployed on the nodes in the private nodegroup.
    # Pod template spec inside the Deployment
    spec:
      nodeSelector:
        node: private
      tolerations:
        - key: "node"
          operator: "Equal"
          value: "private"
          effect: "NoSchedule"
      containers:
        - image: # your application image
  • When I deploy the applications I created, all application pods land on the private nodegroup, which we can confirm with the command below.
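A quick check of pod placement:

kubectl get pods -o wide   # the NODE column lists only private-subnet nodes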
  • In the same manner, I am also creating an Ingress to access these applications.
# ingress.yaml (the aws-load-balancer annotations belong on the controller
# Service and were already set via the Helm values above, so they are not
# repeated here)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pb-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: phonebook-service
                port:
                  number: 80
          - path: /result
            pathType: Prefix
            backend:
              service:
                name: result-service
                port:
                  number: 80
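We apply the manifest and check the assigned address; the ADDRESS column shows the DNS name of the controller’s NLB.

kubectl apply -f ingress.yaml
kubectl get ingress pb-ingress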

Step-9: Let’s open the application in the browser and check that it is working.

  • Now we can delete all of our resources.
eksctl delete cluster --name cluster-name --region us-east-1
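If the cluster was created through the console rather than with eksctl, the node groups and cluster can be removed with the AWS CLI instead, as sketched below with the placeholder names used earlier. Either way, remember that the NAT Gateway and Elastic IP were created outside the cluster and bill hourly until removed.

# Delete node groups first and wait for them before deleting the cluster.
aws eks delete-nodegroup --cluster-name my-cluster --nodegroup-name private-ng
aws eks delete-nodegroup --cluster-name my-cluster --nodegroup-name public-ng
aws eks delete-cluster --name my-cluster
# Delete the NAT Gateway before releasing its Elastic IP.
aws ec2 delete-nat-gateway --nat-gateway-id "$NAT_ID"
aws ec2 release-address --allocation-id "$ALLOC_ID"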

Thank you for adding my article to your reading list! If you enjoyed it and found it helpful, please consider following me and giving the article a clap. Your support means a lot and helps me continue creating content that you love.

Thanks again, and happy reading!
