Go: Is the encoding/json Package Really Slow?

Vincent Blanchon
May 20 · 7 min read

ℹ️ This article is based on Go 1.12.

Questions about the performance of the encoding/json package are a recurring topic, and multiple libraries like easyjson, jsoniter, or ffjson are trying to address this issue. But is it really slow? Has it been improved?

Evolution of the package

Let’s first look at the performance evolution of the library. I made a small makefile with a benchmark file in order to run it against all versions of Go:
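The makefile and the benchmark file themselves are not reproduced here, but the benchmark boils down to something like the following self-contained sketch (the struct and names are illustrative, not the article’s exact code):

```go
package main

import (
	"encoding/json"
	"fmt"
	"testing"
)

// basicStruct stands in for the simple struct used by the benchmark.
type basicStruct struct {
	ID   int    `json:"id"`
	Name string `json:"name"`
}

// benchMarshal measures json.Marshal on the struct and records allocations.
func benchMarshal() testing.BenchmarkResult {
	return testing.Benchmark(func(b *testing.B) {
		v := basicStruct{ID: 1, Name: "foo"}
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			if _, err := json.Marshal(v); err != nil {
				b.Fatal(err)
			}
		}
	})
}

func main() {
	r := benchMarshal()
	// Prints ns/op plus B/op and allocs/op, the numbers compared below.
	fmt.Println(r.String(), r.MemString())
}
```

Running this same program under each Go version’s Docker image is what the makefile automates.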

The makefile creates a folder for each version of Go, creates a container based on that version’s Docker image, and runs the benchmark. The results are compared in two ways:

  • each version vs. the latest version, Go 1.12
  • each version vs. the next version

The first comparison lets us check how a specific version has evolved compared to the latest one, while the second tells us which release brought the most improvements to the encoding/json package.

Here are the most significant results:

  • from 1.2.0 to 1.3.0, the time per operation dropped by ~25 to 35%
  • from 1.6.0 to 1.7.0, the time per operation dropped by ~25 to 40%
  • from 1.10.0 to 1.11.0, memory allocations dropped by ~25 to 60%
  • from 1.11.0 to 1.12.0, the time per operation dropped by ~5 to 15%

The full report is available on GitHub for Marshal and Unmarshal.

If we compare 1.2.0 to 1.12.0, performance has improved significantly:

The benchmark has been done with a simple struct. The deltas could differ with a different value to encode/decode, such as a map, an array, or a bigger struct.

Dive into the code

The best way to understand why it can seem slow is to dive into the code. Here is the flow of the Marshal method in Go 1.12: an encoder is first retrieved from a cache, the encoding function for the value’s type is then looked up (and built through reflection on a cache miss), the value is written to the encoder’s buffer, and the result is finally copied out.

Now that we know the flow, let’s compare the code of versions 1.10 and 1.12, since we have seen there was a huge improvement in memory usage during the Marshal process. The first modification we see is related to the first step of the flow, when the encoder is retrieved from the cache:

A sync.Pool has been added here in order to share encoders and reduce the number of allocations. The method newEncodeState() already existed in 1.10 but was not used. To confirm the impact, we can just backport this piece of code to Go 1.10 and check the new result.
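The pattern looks like the following self-contained sketch; the real encodeState carries more state than just a buffer, so this is only an approximation of the 1.12 code:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// encodeState mimics the encoder used by encoding/json: essentially a
// reusable buffer that collects the encoded output.
type encodeState struct {
	bytes.Buffer
}

var encodeStatePool sync.Pool

// newEncodeState returns a pooled encoder when one is available and
// only allocates a fresh one on a pool miss.
func newEncodeState() *encodeState {
	if v := encodeStatePool.Get(); v != nil {
		e := v.(*encodeState)
		e.Reset()
		return e
	}
	return new(encodeState)
}

func main() {
	e := newEncodeState()
	e.WriteString(`{"id":1}`)
	fmt.Println(e.String())
	// Returning the encoder to the pool lets the next caller reuse it.
	encodeStatePool.Put(e)
}
```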

In order to run the benchmark within the Go repository, just go to the folder of the library and run the standard benchmark command, `go test -bench=. -benchmem`.

As we can see, the impact of the sync package is huge, and it is worth considering in your own project when you allocate the same struct intensively.

Regarding the Unmarshal method, here is the flow in Go 1.12:

Each flow is pretty well optimized with a caching strategy, thanks to the sync package, and we can see that the part involving reflection and the iteration over each field is the bottleneck of the package.
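To make that bottleneck concrete, this is the kind of per-type reflection work involved: walking a struct’s fields and reading their json tags (the real package caches the result of this walk per type):

```go
package main

import (
	"fmt"
	"reflect"
)

type user struct {
	ID   int    `json:"id"`
	Name string `json:"name"`
}

// fieldNames walks the struct type with reflection and collects the
// JSON name of each field, as the encoder has to do on a cache miss.
func fieldNames(v interface{}) []string {
	t := reflect.TypeOf(v)
	names := make([]string, 0, t.NumField())
	for i := 0; i < t.NumField(); i++ {
		names = append(names, t.Field(i).Tag.Get("json"))
	}
	return names
}

func main() {
	fmt.Println(fieldNames(user{})) // → [id name]
}
```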

Alternatives and performances

As mentioned, there are many alternatives. While most of them are compatible with the native package, only using the full library (with its own Marshaler and Unmarshaler interfaces) will really bring the advantage you are looking for.

ffjson is one of them: it generates static MarshalJSON and UnmarshalJSON methods and offers a similar API, ffjson.Marshal and ffjson.Unmarshal. The generated methods look like this:
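The generated code is not reproduced here, but its shape can be sketched by hand: a hypothetical User struct whose MarshalJSON writes each field straight into a buffer, with no reflection involved (illustrative only, not ffjson’s actual output):

```go
package main

import (
	"bytes"
	"fmt"
	"strconv"
)

// User is a hypothetical struct a generator like ffjson would target.
type User struct {
	ID   int
	Name string
}

// MarshalJSON emits the fields directly, the way generated code does:
// fixed keys are written as constants and values are appended inline.
func (u *User) MarshalJSON() ([]byte, error) {
	var buf bytes.Buffer
	buf.WriteString(`{"id":`)
	buf.WriteString(strconv.Itoa(u.ID))
	buf.WriteString(`,"name":`)
	buf.WriteString(strconv.Quote(u.Name))
	buf.WriteByte('}')
	return buf.Bytes(), nil
}

func main() {
	b, _ := (&User{ID: 1, Name: "foo"}).MarshalJSON()
	fmt.Println(string(b)) // → {"id":1,"name":"foo"}
}
```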

If we compare the benchmarks of the native library and ffjson (run with ffjson.Pool() enabled):

For both marshaling and unmarshaling, it looks like the native library is more efficient.

Regarding the higher memory usage, running the compiler with `go run -gcflags="-m"` shows that some variables will be allocated on the heap:
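As an illustration of what escape analysis reports (a hypothetical example, not ffjson’s actual code): a value whose lifetime outlives its function cannot stay on the stack, and `-gcflags="-m"` will flag it as escaping to the heap.

```go
package main

import "fmt"

// newBuffer returns the slice it creates, so the allocation must
// outlive the call; building with -gcflags="-m" reports that the
// make call escapes to the heap.
func newBuffer(n int) []byte {
	return make([]byte, 0, n)
}

func main() {
	b := newBuffer(64)
	fmt.Println(cap(b)) // → 64
}
```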

Let’s have a look at another one: easyjson. It uses the same strategy. Here is the benchmark:

This time, it seems that easyjson is much faster: roughly 30% faster for marshaling and almost twice as fast for unmarshaling. Everything makes sense if we look at the easyjson.Marshal method provided by the library:

The method MarshalEasyJSON is generated by the library in order to write out the JSON:
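A self-contained sketch of the shape of that generated code, using a minimal stand-in for easyjson’s jwriter.Writer (names and types are illustrative, not easyjson’s actual output):

```go
package main

import (
	"bytes"
	"fmt"
	"strconv"
)

// Writer is a minimal stand-in for easyjson's jwriter.Writer.
type Writer struct {
	Buffer bytes.Buffer
}

type User struct {
	ID   int
	Name string
}

// MarshalEasyJSON appends the fields to the writer directly, with no
// reflection, which is the shape of the generated fast path.
func (u User) MarshalEasyJSON(w *Writer) {
	w.Buffer.WriteString(`{"id":`)
	w.Buffer.WriteString(strconv.Itoa(u.ID))
	w.Buffer.WriteString(`,"name":`)
	w.Buffer.WriteString(strconv.Quote(u.Name))
	w.Buffer.WriteByte('}')
}

// MarshalJSON keeps compatibility with encoding/json by delegating
// to the fast path, mirroring what the generator emits.
func (u User) MarshalJSON() ([]byte, error) {
	w := &Writer{}
	u.MarshalEasyJSON(w)
	return w.Buffer.Bytes(), nil
}

func main() {
	b, _ := User{ID: 2, Name: "bar"}.MarshalJSON()
	fmt.Println(string(b)) // → {"id":2,"name":"bar"}
}
```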

As we can see, there is no reflection anymore; the flow is pretty straightforward. The library also provides compatibility with the native JSON library by generating MarshalJSON and UnmarshalJSON methods that delegate to the fast ones.

However, the performance here will be worse than the native library, since the whole native flow will still be applied and only this small piece of generated code will run during marshaling.

Conclusion

Even if a lot of effort has gone into the standard library, it can never be as fast as a library that statically generates the JSON encoding and decoding code. The negative points are that you will have to maintain this code generation and remain dependent on an external library.

Prior to making any decision about switching away from the standard library, you should measure how JSON marshaling/unmarshaling impacts your application and whether a performance gain there could drastically improve the performance of your whole application. If it represents only a small percentage, it is probably not worth it: the standard library is now efficient enough in most cases.

Vincent Blanchon

Gopher — Dubai
