kube-netc: simplified network observability for Kubernetes

Drew Ripberger
Jul 30, 2020


Kube-netc was the focus of my summer internship at Nirmata. It’s an open source project that uses a Kubernetes DaemonSet to expose simple networking statistics for each of your nodes. You can find it on GitHub: https://github.com/nirmata/kube-netc

I want to use this post to take you through kube-netc, a bit of my experience developing it, and how it can be useful to your project.

The Objectives of the Project

As it was originally presented to me, there were a few main goals that we wanted kube-netc to accomplish:

  • to understand which Kubernetes resources or external services are communicating with each other
  • to track the rate of bytes in/out of these connections

From the start, we wanted to use eBPF to interface with the Linux kernel and collect networking stats, and Prometheus to export those stats after we process them and attach useful information about the corresponding resources in the Kubernetes cluster.

Project Design

For the most up-to-date documentation on kube-netc’s design, check out DESIGN.md in the repository.

There are three main packages that make up kube-netc’s functionality:

  • tracker: uses eBPF to collect stats at the kernel level and provides channels for them to be piped to our other packages.
  • cluster: keeps an internal mapping of IPs to their respective Kubernetes resource names (for addresses internal to the cluster) and adds information about the resource’s state to the stats.
  • collector: exports the finalized data that has been produced by the tracker and cluster packages as Prometheus metrics.

These packages work together to allow a user to quickly get a sense of a node’s network.
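To make that data flow a little more concrete, here is a rough Go sketch of the collector-side pattern, not kube-netc’s actual code: connection updates arrive on a channel from the tracker, get matched to Kubernetes resource names by the cluster package, and are exported as Prometheus metrics. The ConnStat type, the updates channel, and lookupName are hypothetical stand-ins I made up for illustration; only the Prometheus client usage and the metric/port names mirror what you’ll see later in this post.

package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// ConnStat is a hypothetical stand-in for a per-connection update
// produced by the tracker package.
type ConnStat struct {
	SrcAddr            string
	DstAddr            string
	BytesRecvPerSecond float64
}

// lookupName is a hypothetical stand-in for the cluster package's
// IP-to-resource mapping.
func lookupName(addr string) string { return "unknown" }

var bytesRecv = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "bytes_recv_per_second",
		Help: "Rate of bytes received on a connection.",
	},
	[]string{"source_address", "source_name", "destination_address", "destination_name"},
)

func main() {
	prometheus.MustRegister(bytesRecv)

	// In kube-netc this channel would be fed by the eBPF-based tracker.
	updates := make(chan ConnStat)

	// Enrich each update with Kubernetes resource names and record it.
	go func() {
		for s := range updates {
			bytesRecv.WithLabelValues(
				s.SrcAddr, lookupName(s.SrcAddr),
				s.DstAddr, lookupName(s.DstAddr),
			).Set(s.BytesRecvPerSecond)
		}
	}()

	// Serve the metrics on the same port kube-netc uses.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9655", nil))
}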

Try It Out

Now let me give you a quick way to try kube-netc out yourself and visualize the stats with Grafana.

See the README for the most up-to-date walkthrough on getting kube-netc deployed in Kubernetes.

First things first, let’s apply the install.yaml that will create the DaemonSet and appropriate permissions:

~$ kubectl apply -f https://github.com/nirmata/kube-netc/raw/master/config/install.yaml

If this is successful you should see:

clusterrolebinding.rbac.authorization.k8s.io/netc-rbac created 
daemonset.apps/kube-netc created

Now that the kube-netc DaemonSet is running and networking is being tracked, we will go over two different examples of how you can use the resulting Prometheus metrics.

To get started, if you are running your Kubernetes cluster locally or in a test environment, you will probably have to port-forward a kube-netc pod for either of these examples. First, find the pod to forward:

~$ kubectl get pods --all-namespaces | grep "kube-netc"

This tells us the pod or pods that are running kube-netc:

kube-system          kube-netc-9lfr6                                     1/1     Running   0          26h

As kube-netc is a DaemonSet, you should of course have as many kube-netc pods as you do nodes. For these examples, just pick one of the pods; it will work the same with any kube-netc pod. Now we can take the pod name kube-netc-9lfr6 and port-forward it so that it is exposed on your localhost and not just inside your testing environment.

~$ sudo kubectl port-forward -n kube-system kube-netc-9lfr6 9655:9655

Make sure to use port 9655 in this command; the Prometheus exporter is set to serve on this port.

Example: Curl

With that done, we can now use curl to fetch a raw text output of the current network stats of the node the pod is running on:

~$ curl localhost:9655/metrics | grep "bytes_recv_per_second"

This returns a list of Prometheus metrics and their values. A truncated example:

...
bytes_recv_per_second{component="kube-controller-manager",destination_address="10.244.0.5:48590",destination_kind="pod",destination_name="kube-netc-9lfr6",destination_namespace="kube-system",destination_node="drewcluster-control-plane",instance="",managed_by="",name="",part_of="",source_address="172.18.0.2",source_kind="pod",source_name="kube-controller-manager-drewcluster-control-plane",source_namespace="kube-system",source_node="drewcluster-control-plane",version=""} 5
...
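If you’d rather consume the endpoint programmatically than with curl, the same thing can be done in a few lines of Go. This is just a convenience sketch that assumes the port-forward from earlier is still running; it fetches the metrics page and prints the bytes_recv_per_second series, mirroring the grep above.

package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Fetch the raw Prometheus metrics from the port-forwarded kube-netc pod.
	resp, err := http.Get("http://localhost:9655/metrics")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Print only the bytes_recv_per_second series, like the grep above.
	scanner := bufio.NewScanner(resp.Body)
	scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024) // label-heavy metric lines can be long
	for scanner.Scan() {
		if strings.HasPrefix(scanner.Text(), "bytes_recv_per_second") {
			fmt.Println(scanner.Text())
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}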

Example: Grafana

There is also a demo Grafana dashboard in the GitHub repository that you can use to visualize these metrics.

Just point Prometheus to the kube-netc exporter in your configuration by appending a new job to your scrape_configs in your prometheus.yml:

- job_name: 'kube-netc'
  static_configs:
    - targets: ['localhost:9655']

Now, after opening the provided dashboard and pointing Grafana at your Prometheus data source, we can see all of the aggregated networking stats.

How kube-netc Can Help

While kube-netc reports quantitative metrics, one of the greatest additions the project can provide is a qualitative look at the cluster’s environment. It gives a high-level, all-encompassing look at who is communicating with whom and how often they are doing so.

That being said, kube-netc of course isn’t an end-all, be-all solution for analyzing your Kubernetes network. It’s an open source project that we hope to grow and continue to develop with the support and contributions of the community.

Thanks

I would like to personally thank everyone who helped put this project together, whether directly or by writing fantastic articles and documentation on eBPF.

Specific thanks to Alban at Kinvolk for his assistance with eBPF, and to the folks at DataDog for their ebpf library, which we leverage in kube-netc’s tracker package.
