Extending a service using Private Link from Azure and securing it with Cilium’s Network Policy

Amit Gupta
Mar 31, 2024


☸️ Introduction

Azure Private Link enables you to access Azure PaaS Services (for example, Azure Storage and SQL Database) and Azure hosted customer-owned/partner services over a private endpoint in your virtual network.

Traffic between your virtual network and the service travels over the Microsoft backbone network, so exposing your service to the public internet is no longer necessary.

🎯Goals & Objectives

In this article, you will learn how to:
- expose an AKS application to users outside of your VNet through a Private Link Service.
- allow users from a specific CIDR block to access the AKS application using Cilium’s network policy.

Prerequisites

  • You should have an Azure Subscription.
  • Install kubectl.
  • Install Helm.
  • VNet peering is in place across two VNets.
  • An image in an ACR repository (optional). You can also use any existing image that is available to you.
  • Ensure you have enough quota to create an AKS cluster. Go to the Subscription blade, navigate to “Usage + Quotas”, and make sure you have enough quota for the following resources:
    - Regional vCPUs
    - Standard Dv4 Family vCPUs
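
If you prefer checking quota from the CLI, here is a quick sketch (the region is a placeholder):

az vm list-usage --location <region> --output table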

Let’s get going

Create an AKS cluster

You can create an AKS cluster using BYOCNI or Azure CNI powered by Cilium.
Note: You can also create a BYOCNI AKS cluster in a kube-proxy-free environment.

  • Set the Subscription

If you have multiple Azure subscriptions, choose the subscription you want to use.
- Replace SubscriptionName with your subscription name.
- You can also use your subscription ID instead of your subscription name.

az account set --subscription SubscriptionName
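
This walkthrough assumes the AKS cluster already exists. If you still need to create one for BYOCNI, a minimal sketch looks like this (the resource group, cluster name, and region are placeholders):

az aks create --resource-group <resource-group-name> \
  --name <cluster-name> \
  --location <region> \
  --network-plugin none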
  • Set the Kubernetes Context

Log in to the Azure portal, browse to Kubernetes services, select the Kubernetes service that was created (the AKS cluster), and click Connect. This will help you connect to your AKS cluster and set the respective Kubernetes context.

az aks get-credentials --resource-group <resource-group-name> --name <cluster-name>

Note: The AKS cluster is created in a distinct network range so that it does not overlap with the other VNets used in this walkthrough.

Install Cilium in BYOCNI mode using Helm (add the Cilium Helm repository first if you have not already done so):

helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.14.9 \
--namespace kube-system \
--set aksbyocni.enabled=true \
--set nodeinit.enabled=true \
--set hubble.relay.enabled=true \
--set hubble.ui.enabled=true
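
Before moving on, confirm that the Cilium agent pods are up and running:

kubectl -n kube-system get pods -l k8s-app=cilium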

AKS and Private Link service integration

To expose applications and services outside of an AKS cluster, several options are available:

  • Ingress controllers
  • Application Gateways using AGIC
  • Directly through load balancers (LBs can be private or public)

We will expose our application through a private load balancer and then extend it with a Private Link Service. Ensure that the service principal (or managed identity) used by the AKS cluster has permissions to read/write the subnet and create an internal load balancer, as listed below:

Microsoft.Network/virtualNetworks/subnets/read
Microsoft.Network/loadBalancers/write
Microsoft.Network/virtualNetworks/subnets/join/action
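
If the cluster identity is missing these permissions, one common approach is to assign the built-in Network Contributor role on the node subnet; a sketch, with the principal ID and subnet resource ID as placeholders for your environment:

az role assignment create --assignee <cluster-identity-principal-id> \
  --role "Network Contributor" \
  --scope <subnet-resource-id>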

Expose the application

  • To create a load balancer from within an AKS cluster, you need to create a Service object of type LoadBalancer, and AKS will create the appropriate load balancer and load balancing rules.
    - A sample pod and service are shown below:
apiVersion: v1
kind: Pod
metadata:
  name: pls-app
  labels:
    app: pls-app
spec:
  containers:
  - image: "testacrrepo.azurecr.io/mycustomimage/nginxamit"
    name: pls-app
    ports:
    - containerPort: 80
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: pls-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: pls-app
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
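
Assuming both manifests are saved to a single file (pls-app.yaml is a placeholder name), apply them and then check the pod and service:

kubectl apply -f pls-app.yaml
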
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-nginx-684dd4dcd4-hhkcz 1/1 Running 0 170m 10.0.1.124 aks-nodepool1-11106896-vmss000001 <none> <none>
pls-app 1/1 Running 0 4d17h 10.0.0.143 aks-nodepool1-11106896-vmss000002 <none> <none>

kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 4d23h
pls-app LoadBalancer 10.0.167.20 10.224.0.7 80:32673/TCP 4d17h
  • This service is now accessible from the VNet in which the AKS cluster resides. You can reach the service over its cluster IP as well as its external IP, which is an internal IP from the same subnet.
root@my-nginx-684dd4dcd4-hhkcz:/# curl http://10.0.167.20
<!DOCTYPE html>
<html>
<body style="background-color:rgb(220, 240, 234);">
<h1>Welcome to NGINX by Amit</h1>
<h2>Azure Demo Repo for ACR and AKS</h2>
<h2>Application Demo</h2>
</body>
</html>

Access the application from a peered network

  • Create another VNet and call it pls-vnet.
  • Create a VM in pls-vnet and establish VNet peering between the two networks (the network where the AKS cluster was created and the pls-vnet network); a CLI sketch follows this list.
  • You should be able to access the service from this peered VNet as well, over the load balancer’s external IP.
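
For reference, the peering could be created with the Azure CLI along these lines (a sketch; the resource group and VNet names are placeholders, and a mirrored peering is needed in the opposite direction):

az network vnet peering create --name aks-to-pls \
  --resource-group <resource-group-name> \
  --vnet-name <aks-vnet-name> \
  --remote-vnet <pls-vnet-resource-id> \
  --allow-vnet-access
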
root@pls-vm:/home/plsvm# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0d:3a:86:72:51 brd ff:ff:ff:ff:ff:ff
inet 10.221.1.4/26 brd 10.221.1.63 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20d:3aff:fe86:7251/64 scope link
valid_lft forever preferred_lft forever

root@pls-vm:/home/plsvm# curl http://10.224.0.7
<!DOCTYPE html>
<html>
<body style="background-color:rgb(220, 240, 234);">
<h1>Welcome to NGINX by Amit</h1>
<h2>Azure Demo Repo for ACR and AKS</h2>
<h2>Application Demo</h2>
</body>
</html>

Expose the application through Private Link Service

  • Create another VNet (mtenantpls) and try to access the application from that VNet. This will fail, because there is no VNet peering or any other connectivity to the VNet where the application resides.
  • To make the application available through Private Link Service, annotate the Service with specific annotations. The only required annotation is azure-pls-create, which indicates that this Service should be exposed using Private Link Service.

Note: Notice the annotations in use:

service.beta.kubernetes.io/azure-load-balancer-internal: "true"
service.beta.kubernetes.io/azure-pls-create: "true"
service.beta.kubernetes.io/azure-pls-name: pls-app
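
Put together, the annotated Service could look like this (a sketch reusing the pls-app selector from earlier):

apiVersion: v1
kind: Service
metadata:
  name: pls-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-pls-create: "true"
    service.beta.kubernetes.io/azure-pls-name: pls-app
spec:
  type: LoadBalancer
  selector:
    app: pls-app
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80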
  • Once the setup is complete, you can find your Private Link Service and an associated NIC within the AKS managed resource group.
  • Create the private endpoint (a CLI sketch follows this list).
    - On the mtenantpls VNet, create a private endpoint pointing to the Private Link Service to bring the private connectivity to the service live.
  • Take note of the private IP of the private endpoint connection.
  • You should now be able to access the service that was created with the PLS annotations, although it will be accessible on the private IP that was created for the private endpoint connection.
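
The private endpoint can be created from the portal or with the Azure CLI; a sketch (the resource group, subnet, and Private Link Service resource ID are placeholders):

az network private-endpoint create --name pls-app-pe \
  --resource-group <resource-group-name> \
  --vnet-name mtenantpls \
  --subnet <subnet-name> \
  --private-connection-resource-id <private-link-service-resource-id> \
  --connection-name pls-app-connection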
root@mtenantplsvm:/home/plsvm# curl http://10.0.0.5
<!DOCTYPE html>
<html>
<body style="background-color:rgb(220, 240, 234);">
<h1>Welcome to NGINX by Amit</h1>
<h2>Azure Demo Repo for ACR and AKS</h2>
<h2>Application Demo</h2>
</body>
</html>

Cilium Network Policy

When using Cilium, endpoint IP addresses are irrelevant when defining security policies. Instead, you can use the labels assigned to the pods to define security policies. The policies will be applied to the right pods based on the labels, irrespective of where or when they run within the cluster.

The layer 3 policy establishes the base connectivity rules regarding which endpoints can talk to each other.

  • Create a policy that doesn’t allow anyone apart from the allowed network (10.221.1.0/24, the peered pls-vnet range) to access the service.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-ipblock
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.221.1.0/24
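
Because Cilium policies are label-based, the same intent can also be expressed as a CiliumNetworkPolicy that selects the pls-app pod by its label and only allows ingress from the allowed CIDR; a sketch, assuming the label and CIDR used above:

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-pls-clients
spec:
  endpointSelector:
    matchLabels:
      app: pls-app
  ingress:
  - fromCIDR:
    - 10.221.1.0/24
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP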
  • Create a policy that doesn’t allow any access to the service.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-ipblock
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress: []
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
kubectl apply -f netpol_pls_drop.yaml
networkpolicy.networking.k8s.io/example-ipblock created
  • Requests towards the service should now start failing, and you can check the dropped flows in Hubble.
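
With the Hubble CLI installed and a port-forward to Hubble Relay in place (for example, via cilium hubble port-forward), the dropped flows can be inspected along these lines (a sketch):

hubble observe --verdict DROPPED --namespace default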

Optional Metrics

You can view Private Link Service and private endpoint statistics using the standard workbooks provided under Insights for the respective Private Link Service.

Try out Cilium

  • Try out Cilium and get first-hand experience of how it solves real problems and use cases around networking, security, and observability in your cloud-native or on-prem environments.

🌟Conclusion 🌟

Hopefully, this post gave you a good overview of how to expose a service privately using the Private Link Service integration and how to lock it down with a network policy. Thank you for reading!! 🙌🏻😁📃 See you in the next blog.

🚀 Feel free to connect with or follow me on:

LinkedIn: linkedin.com/in/agamitgupta
