Speed up your APIs

Sai Teja
3 min read · Feb 20, 2023


Creating new objects can be a relatively expensive operation, especially when it involves allocating memory on the heap, so reusing objects can help reduce that overhead and improve performance. By keeping a pool of pre-allocated objects and handing them out as needed, we avoid repeatedly allocating memory and then waiting for the garbage collector to free it.

This can be especially useful for APIs that need to handle a large number of requests, where creating and destroying objects on each request can be a significant bottleneck. By reusing objects, we can reduce the amount of CPU time and memory required, which can lead to better performance and the ability to handle more requests.

To achieve this, we can use sync.Pool to implement object pooling. sync.Pool is part of Go's standard sync package and provides a thread-safe mechanism for storing objects in, and retrieving them from, a pool.
Let's jump into the code directly..!! 😋
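Before the full handlers, here is a minimal, self-contained sketch of the sync.Pool Get/Put cycle (using a bytes.Buffer purely for illustration, not part of the API code below):

package main

import (
	"bytes"
	"fmt"
	"sync"
)

func main() {
	// New is called only when the pool has no free object to hand out.
	bufPool := &sync.Pool{
		New: func() interface{} { return new(bytes.Buffer) },
	}

	buf := bufPool.Get().(*bytes.Buffer) // take an object out of the pool
	buf.Reset()                          // always reset a reused object
	buf.WriteString("hello")
	fmt.Println(buf.String())

	bufPool.Put(buf) // hand it back so the next Get can reuse it
}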

Handlers:-


package handlers

import (
	"encoding/json"
	"net/http"
	"sync"
)

type Request struct {
	Name    string `json:"name"`
	Id      string `json:"id"`
	Country string `json:"country"`
	Age     int    `json:"age"`
}

type ObjectPool struct {
	ReqPool *sync.Pool
}

var objPool ObjectPool

func init() {
	// Initialize the object pool; New runs only when the pool is empty.
	objPool.ReqPool = &sync.Pool{
		New: func() interface{} {
			return new(Request)
		},
	}
}


func HandlerWithoutSyncPool() http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// A fresh Request is allocated on every call and left for the GC.
		req := new(Request)
		err := json.NewDecoder(r.Body).Decode(req)
		if err != nil {
			w.WriteHeader(http.StatusBadRequest)
			return
		}
		w.WriteHeader(http.StatusOK)
		err = json.NewEncoder(w).Encode(req)
		if err != nil {
			w.WriteHeader(http.StatusInternalServerError)
			return
		}
	})
}

func HandlerWithSyncPool() http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Take a Request from the pool instead of allocating a new one.
		req := objPool.ReqPool.Get().(*Request)

		// Reset the reused object so fields from a previous request don't
		// leak in, and make sure it goes back to the pool when we return.
		*req = Request{}
		defer objPool.ReqPool.Put(req)

		err := json.NewDecoder(r.Body).Decode(req)
		if err != nil {
			w.WriteHeader(http.StatusBadRequest)
			return
		}

		w.WriteHeader(http.StatusOK)
	})
}
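One design note before the benchmarks: it is easy to forget either the reset or the Put, so it can help to hide the pool behind small helpers. A possible sketch in the same package (the GetRequest/PutRequest names are mine, not part of the handler code above):

// GetRequest hands out a cleared *Request from the pool.
func GetRequest() *Request {
	req := objPool.ReqPool.Get().(*Request)
	*req = Request{} // wipe any fields left over from a previous use
	return req
}

// PutRequest returns a *Request to the pool so it can be reused.
func PutRequest(req *Request) {
	if req != nil {
		objPool.ReqPool.Put(req)
	}
}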

Benchmarks:-
I benchmarked with a benchmark time of 5 seconds and 1 CPU against a running server, rather than calling the handlers directly (an in-process alternative using httptest is sketched after the benchmark results).

Note:- results vary with the type of object being pooled.

package handlers

import (
	"bytes"
	"encoding/json"
	"net/http"
	"testing"
)

var tests = []struct {
	req Request
}{
	{
		req: Request{Name: "test", Country: "testCountry", Id: "id1", Age: 10},
	},
}

func BenchmarkHandlerWithSyncPool(b *testing.B) {
	for _, test := range tests {
		// Marshal test.req itself; the wrapper struct's field is unexported
		// and would not serialize.
		data, _ := json.Marshal(test.req)

		b.RunParallel(func(pb *testing.PB) {
			for pb.Next() {
				resp, err := http.Post("http://127.0.0.1:8888/withpool", "application/json", bytes.NewBuffer(data))
				if err == nil {
					// Close the body to avoid leaking connections.
					resp.Body.Close()
				}
			}
		})
	}
}

func BenchmarkHandlerWithoutSyncPool(b *testing.B) {
	for _, test := range tests {
		data, _ := json.Marshal(test.req)

		b.RunParallel(func(pb *testing.PB) {
			for pb.Next() {
				resp, err := http.Post("http://127.0.0.1:8888/withoutpool", "application/json", bytes.NewBuffer(data))
				if err == nil {
					resp.Body.Close()
				}
			}
		})
	}
}

Without pooling:-

go test -bench=BenchmarkHandlerWithoutSyncPool -benchtime=5s -cpu=1
goos: darwin
goarch: amd64
pkg: testsyncpool/handlers
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkHandlerWithoutSyncPool 23067 1004690 ns/op

With pooling:-

 go test -bench=BenchmarkHandlerWithSyncPool -benchtime=5s -cpu=1 
goos: darwin
goarch: amd64
pkg: testsyncpool/handlers
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkHandlerWithSyncPool 77290 76521 ns/op
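If you'd rather not keep a server running, roughly the same comparison can be done in-process with httptest, and b.ReportAllocs() will also report allocations per operation. A hedged sketch in the same package as the benchmarks above (it needs "net/http/httptest" in the imports, and its numbers won't match the HTTP results):

func BenchmarkHandlerWithSyncPoolInProcess(b *testing.B) {
	handler := HandlerWithSyncPool()
	data, _ := json.Marshal(tests[0].req)

	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		// Call the handler directly instead of going over the network.
		req := httptest.NewRequest(http.MethodPost, "/withpool", bytes.NewReader(data))
		rec := httptest.NewRecorder()
		handler.ServeHTTP(rec, req)
	}
}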

main file:-

package main

import (
	"log"
	"net/http"
	"net/http/pprof"

	"testsyncpool/handlers"
)

func main() {
	r := http.NewServeMux()

	// Register both endpoints so each benchmark has a route to hit.
	r.Handle("/withoutpool", handlers.HandlerWithoutSyncPool())
	r.Handle("/withpool", handlers.HandlerWithSyncPool())

	// Register pprof handlers
	r.HandleFunc("/debug/pprof/", pprof.Index)
	r.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
	r.HandleFunc("/debug/pprof/profile", pprof.Profile)
	r.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
	r.HandleFunc("/debug/pprof/trace", pprof.Trace)

	err := http.ListenAndServe(":8888", r)
	if err != nil {
		log.Fatal(err)
	}
}
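Since the pprof handlers are registered, you can also compare allocation behaviour while the benchmarks are running. For example, these standard pprof commands (assuming the server is still listening on :8888) pull a heap profile and a 30-second CPU profile:

go tool pprof http://127.0.0.1:8888/debug/pprof/heap
go tool pprof http://127.0.0.1:8888/debug/pprof/profile?seconds=30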

It's important to note that object pooling isn't suitable for every use case: it adds code complexity and the overhead of managing the pool itself, and pooled objects have to be reset carefully before reuse. Still, it's a useful technique to consider when optimizing the performance of APIs.

Please feel free to add any comments/suggestions/corrections etc…👍

For Part -2 https://medium.com/@saiteja180/speed-up-your-apis-part-2-eb16eb333df5
