Heap dump in Go using pprof

Luan Figueredo
4 min read · Oct 11, 2021

A few weeks ago I faced a memory leak in one of our Go applications. The app is an external scaler that provides metrics to KEDA so it can scale other applications inside our K8s cluster.

After some time the app simply got OOMKilled. It doesn't require much memory to run; it normally used less than 100MB. Then it started to claim a lot more memory, passing 1GB, at which point it hit its limit and was killed.

I am very used to memory-analysis tools for Java, like MAT, JConsole, VisualVM, GCEasy, some APMs, and others. But I had never needed anything like them for Go, not until this problem.

Searching for a tool to investigate this issue, I ended up finding pprof. It serves profiling data via an HTTP server inside your application. As far as I know, it is currently the best option for getting profiling data in Go. Let me know if you have other options.

How to use pprof in your application

If your application is a web app that already has an HTTP server running, you can just import the pprof package:

import _ "net/http/pprof"

If the application that you need to analyze doesn't have an HTTP server (that's my case), you need to import the http package along with pprof:

import (
	"log"
	"net/http"
	_ "net/http/pprof"
)

If you don’t have an HTTP server running, you need to start it too, so you can add this to your code:

go func() {
	log.Println(http.ListenAndServe("localhost:8081", nil))
}()

Doing this automatically makes pprof available through your HTTP server.

If you are not using the DefaultServeMux, there is one more step. You need to register the pprof routes on your router, or delegate them to be handled by http.DefaultServeMux. The code below shows how to delegate them to the default mux:

myRouter.PathPrefix("/debug/pprof/").Handler(http.DefaultServeMux)

If you prefer, you can register each route manually (this requires importing net/http/pprof by name rather than with the blank identifier, since it references pprof.Index and friends):

myRouter.HandleFunc("/debug/pprof/", pprof.Index)
myRouter.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
myRouter.HandleFunc("/debug/pprof/profile", pprof.Profile)
myRouter.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
myRouter.Handle("/debug/pprof/goroutine", pprof.Handler("goroutine"))
myRouter.Handle("/debug/pprof/heap", pprof.Handler("heap"))
myRouter.Handle("/debug/pprof/threadcreate", pprof.Handler("threadcreate"))
myRouter.Handle("/debug/pprof/block", pprof.Handler("block"))

Now you can extract dumps for your app.

Extracting a dump

Now you have different endpoints that serve different profiles. To see the available profiles, make a GET request to http://localhost:8081/debug/pprof/

Example of the pprof result

You can download the result of the profiles to analyze later. Now, let’s get the heap.

curl http://localhost:8081/debug/pprof/heap > heap.out

The result will be stored in a file named heap.out. You can get other profiles as well. For example, let's get the stack traces of all current goroutines:

curl http://localhost:8081/debug/pprof/goroutine > goroutine.out

Analyzing the result

There are a few ways to analyze the dump. The best I found is to run the pprof tool as a web server; then you can navigate through the different views and sample types, and filter the data.

To start this server, use the following command:

go tool pprof -http=:8082 heap.out
pprof web tool

Now you can access the tool from your browser. Simply choose a port and pass the file you downloaded. You can use it with other profiles too, just changing the file name:

go tool pprof -http=:8082 goroutine.out

Now you have a view of how your heap is being used, which makes it easier to identify whether your application has a memory leak and where it comes from.

Extracting a dump from K8S

Here I will show you how I normally extract these files from our K8s cluster. First, if your app has an API, you should already have a Service exposing it, and then it is easier to just request the data through that Service.

In this case, my pod was not exposing the HTTP service, so I had two options.

  1. Expose the HTTP service to get the pprof files
  2. Connect to the pod to extract the files

Since I don’t need the HTTP service to be exposed for any other reason, I went with the second option. To do this, I first open a shell in the pod:

kubectl exec -it POD_NAME -- sh

Then I make a request to the localhost to extract the file I need:

curl http://localhost:8081/debug/pprof/heap > heap.out

After this you can simply copy the file to your machine:

kubectl cp POD_NAME:/app/heap.out .

Now you can analyze the file using pprof, using the same process as earlier:

go tool pprof -http=:8082 heap.out

The solution

As I mentioned earlier, I was having memory leak problems in one of our applications. It is an app that serves a gRPC server. Analyzing the heap in depth, I found that more than 90% of the memory was being used to handle connections.

The problem was that the application consuming this service wasn’t managing its connections properly, opening more connections than needed. After some time, that was more connections than our service could handle. We didn’t guard against this on the server side (we should have) because the consumers are known and reliable.

The solution was to fix the consumers to open connections once and reuse them, avoiding keeping a lot of connections open.

The code used in the demonstration can be found here.
