How should the ideal Golang CRUD backend be designed?

YannickB
9 min read · Sep 10, 2019


Golang is a very promising language; if you are here, you probably believe that too. Still, building a service with it remains challenging and time-consuming.

I come from the Python world, where I used several frameworks and ERPs to build successful backends for my customers. In those tools, I especially valued how stupidly simple it was to implement boring CRUD, which, whatever the project, is often the first step to an early prototype and to success.

You just define the name of a new model and its fields, and the tool takes care of everything: configuring the database, providing hooks to define what happens on each operation, the API, access rights, and sometimes a graphical interface. In a matter of hours, if not minutes, you have the foundation of your project.

But there is still a problem for me: most of these incredible tools are monoliths. They are not designed for the microservice world we are all moving toward, and personally I am tired of these big, unmaintainable codebases.

I believe Golang is the perfect language for building microservice backends, but I still struggle to reach the same productivity I had with my old tools.

I want to present here what I believe would be the perfect “framework” for a microservice backend. I know framework is a bad word in the Golang community, but please bear with me.

I’m also aware of many other projects in this area. I don’t intend this post to be a critique of them; maybe some of them already implement everything I’ll expose here, let me know! I just want to share what I believe and why.

About frameworks and libraries

We all know it: frameworks have important drawbacks. When you use a framework, you learn the framework, not the language.

Libraries are just functions you call from the main component of your application; you know exactly when and why you are calling them. There is no magic, and that makes them far more pleasant to work with.

In Golang it is easy to avoid frameworks: ban the use of reflect. Reflection is a way to write a generic function that works with any model type you define in your main application.

Reflection is widely used in many frameworks because it is essential to make them easy to use, but it comes at a high cost in execution time. It is therefore discouraged in Golang, and you have to explicitly import the reflect package to use it. Let’s try to avoid it.
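
To make the trade-off concrete, here is a minimal sketch of the kind of generic helper a reflection-based framework relies on. The Customer model and its fields are placeholders of my own, not taken from any existing tool.

```go
package main

import (
	"fmt"
	"reflect"
)

// Customer is an example model; the fields are placeholders.
type Customer struct {
	Name  string
	Email string
}

// printFields accepts any struct and walks its fields through the reflect
// package. It works for every model without extra code, but every access
// goes through runtime type inspection instead of plain compiled code.
func printFields(model interface{}) {
	v := reflect.ValueOf(model)
	t := v.Type()
	for i := 0; i < t.NumField(); i++ {
		fmt.Printf("%s = %v\n", t.Field(i).Name, v.Field(i).Interface())
	}
}

func main() {
	printFields(Customer{Name: "Alice", Email: "alice@example.com"})
}
```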

But if we don’t use reflection, it means that for each model in your application you’ll have to define not only the main struct but also all the functions to work with it (create, update, delete, and many, many more), even if those functions themselves just call the background libraries that bring the real value.

Without reflect, there is only one clean way to achieve the incredible development speed of a framework: code generation. It is counter-intuitive for many developers, but it is really powerful and is already gaining a lot of traction in the Golang community.

My proposal is: you define a model schema, and from it all the structs and functions you need are generated. These generated functions only do conversions; they call the background libraries, which do the real work.

This is how I see a framework in Golang: write everything you can in the background libraries, and the “framework” is the code generation you write around them to avoid reflection. This way, developers can choose whether they want to use the generated code or call the libraries directly.
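
As an illustration of what such generated code could look like, here is a hand-written sketch. The model schema syntax, the generic crud package, and the CustomerStore type are hypothetical names I made up for this example, not an existing generator.

```go
// Code that a generator could emit from a model schema such as:
//   model Customer { name: string, email: string }
// Everything here is a thin, type-safe wrapper; the hypothetical generic
// "crud" package in the background does the real work.
package customer

import (
	"context"

	"example.com/project/crud" // hypothetical background library
)

type Customer struct {
	ID    string
	Name  string
	Email string
}

type CustomerStore struct {
	store *crud.Store
}

// Create converts the typed model into the generic representation the
// background library understands, then hands over the actual work.
func (s *CustomerStore) Create(ctx context.Context, c *Customer) (string, error) {
	return s.store.Create(ctx, "customer", map[string]interface{}{
		"name":  c.Name,
		"email": c.Email,
	})
}
```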

You also get the best of all three important times: fast compilation time (a given with Golang), fast execution time (no reflection), and best of all, fast development time (thanks to framework-like ease of use).

Protobuf-based core structures

Protobuf is a mechanism to easily serialize the data held in a struct, which, as we will see, is incredibly important, especially for transferring data.

When you want to transfer data from one service to another, you need to serialize the data in your object into a format the other service can understand. Usually, with widely adopted API protocols like REST and GraphQL, this means converting the data to JSON and sending the message.

This conversion is time-consuming, and the message is bigger than necessary. That’s fine for a publicly accessible API, since the JSON format makes the API easy to use for any external developer, but on the other hand it is far from optimal for internal communication between your services.

gRPC is another protocol for exchanging data and is especially recommended for internal communication. It is based on Protobuf, which works by sharing an object description between your two services and converting the object to binary at blazing speed. gRPC is widely known for its speed, and that is primarily due to its use of Protobuf.

When you receive the protobuf object, it may be tempting to convert the data into the main object you use in your application, but think about it: what if the main object in your application was itself a protobuf object? It would mean that anywhere in your application, you could instantly serialize/deserialize your object to binary, which is huge for a lot of usages (one especially important example comes later in this post).

This is why I recommend using Protobuf for the core objects of your application. This way you gain powerful serialization capabilities, and you can also send your protobuf object directly in your gRPC messages without converting it first.
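
As a minimal sketch, suppose a Customer message has already been compiled from a .proto file into a hypothetical pb package (the import path and field names are my own assumptions). The same generated struct can serve as the core application object and be turned into bytes with a single call:

```go
package main

import (
	"fmt"
	"log"

	"google.golang.org/protobuf/proto"

	pb "example.com/project/gen/customerpb" // hypothetical generated package
)

func main() {
	// The core application object is the generated protobuf struct itself.
	c := &pb.Customer{Name: "Alice", Email: "alice@example.com"}

	// Serialize to a compact binary form, anywhere, at any time.
	data, err := proto.Marshal(c)
	if err != nil {
		log.Fatal(err)
	}

	// And back again, for example on the receiving service.
	var decoded pb.Customer
	if err := proto.Unmarshal(data, &decoded); err != nil {
		log.Fatal(err)
	}
	fmt.Println(decoded.GetName(), len(data), "bytes")
}
```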

GraphQL gateway

I’m a firm believer in GraphQL for the public part of your API. I won’t describe here the added value of GraphQL compared to REST, you can find plenty of comparisons on the Internet, but let’s just say that you can request plenty of data in a single request, API versioning is easier, and you only fetch and compute what you ask for.
This type of API makes the work far easier for client developers, who just request the data they want, and it also avoids a lot of useless computation on the server side.

Now let’s say you have several services, for instance a customer service and an invoice service. You want a single GraphQL endpoint for your public API, so you have to create a gateway service that will break the query apart and call the correct background service over gRPC. The same approach is used for REST with projects like https://github.com/grpc-ecosystem/grpc-gateway, and it is gaining more and more adoption.

In my opinion, this is another place where code generation comes in handy. Thanks to the schema we talked about earlier, we can generate the gateway functions alongside the background service and import them in the gateway.
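
As an illustration, here is what such a resolver could look like once generated or hand-written. The types and the client interface below are assumptions of my own; in a real project they would come from the generated protobuf/gRPC code and a GraphQL library such as gqlgen.

```go
package gateway

import (
	"context"
)

// Hypothetical types: in a real project these would be generated from the
// model schema and the .proto files of the customer service.
type Customer struct {
	Id   string
	Name string
}

type GetCustomerRequest struct {
	Id string
}

// CustomerClient stands in for the gRPC client stub of the customer service.
type CustomerClient interface {
	Get(ctx context.Context, req *GetCustomerRequest) (*Customer, error)
}

// queryResolver is what the GraphQL library would call for a
// "customer(id: ID!)" field. It only translates the GraphQL query into a
// gRPC call; the background service does the real work.
type queryResolver struct {
	customers CustomerClient
}

func (r *queryResolver) Customer(ctx context.Context, id string) (*Customer, error) {
	return r.customers.Get(ctx, &GetCustomerRequest{Id: id})
}
```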

Event-driven architecture

This is where things start moving away from what we know from our monolith frameworks. In a microservice architecture, you have to make sure data and events stay consistent across services, and this can be really hard to do well since we no longer have the reliability of database transactions.

This is where an event-driven architecture can help. In our CRUD libraries, we can simply specify, for each model, whether the event should be propagated to an asynchronous messaging system (like NATS) each time a creation/update/deletion/custom event happens. This way, other services can independently subscribe to the messaging system to learn when an event occurs and act accordingly, without any tight coupling.
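
Here is a sketch of the subscriber side, assuming events are published on subjects like "customer.created" with a serialized protobuf payload (both the subject naming and the payload format are my own assumptions):

```go
package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	// Connect to the NATS server (default localhost URL).
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// The invoice service subscribes to customer events and reacts to them,
	// without any direct coupling to the customer service.
	_, err = nc.Subscribe("customer.created", func(msg *nats.Msg) {
		// msg.Data would be the serialized protobuf event; decode it here
		// and update whatever local state the invoice service needs.
		log.Printf("received customer.created event, %d bytes", len(msg.Data))
	})
	if err != nil {
		log.Fatal(err)
	}

	select {} // keep the worker running
}
```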

An event-driven architecture means each record has its whole life saved in the database, from creation to deletion, and the information is still there after deletion. It is important that this data remains, so a late client service can easily catch up to the latest version of the data and get all the events it missed.
There is another big benefit: an event-driven architecture provides a free audit trail, natively included in the system, since we keep a copy of everything that happened. This can be very interesting for highly regulated activities like banking and finance.

Still, and on this point I may disagree with the pure definition of an event-driven architecture, which wants to keep the list of events as the single source of truth, I think it’s important that we keep a regular, relational database.

In a relational database, you have constraints, foreign keys, and transactions, which are incredibly important to provide consistency and avoid corrupt data. There is just no way we can throw that out the window.

This is why I believe the ideal system would, on a mutation (create/update/delete), first query the relational database to check the constraints, register the event in the same database, do whatever else it has to do in the same transaction, and finally commit. Only if the commit succeeds are the events sent to the messaging system so the other services can be alerted.
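
A minimal sketch of that flow, using database/sql and NATS; the table layout, subject name, and function signature are assumptions of my own:

```go
package customers

import (
	"context"
	"database/sql"

	"github.com/nats-io/nats.go"
)

// createCustomer applies the mutation and records the event inside one
// transaction; the event is only published once the commit has succeeded.
func createCustomer(ctx context.Context, db *sql.DB, nc *nats.Conn, name string, payload []byte) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op once the transaction has been committed

	// 1. The actual write, validated by the schema constraints.
	if _, err := tx.ExecContext(ctx,
		`INSERT INTO customer (name) VALUES ($1)`, name); err != nil {
		return err
	}

	// 2. Keep a copy of the event in the same database, same transaction.
	if _, err := tx.ExecContext(ctx,
		`INSERT INTO event (topic, payload) VALUES ($1, $2)`,
		"customer.created", payload); err != nil {
		return err
	}

	if err := tx.Commit(); err != nil {
		return err
	}

	// 3. Only after a successful commit, alert the other services.
	return nc.Publish("customer.created", payload)
}
```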

This way, we get the best of both worlds: the consistency of relational data, and the loosely coupled communication provided by the event-driven architecture.

Cache: back in local memory?

Combining all these new concepts can open up some great opportunities. Please note that this part is only a theory, and I may be completely wrong, but it’s a good illustration of what could become possible when you combine the best concepts around.

In a CRUD framework you usually query the database a lot, whether it’s read requests from the clients, or checking constraints and doing various things in your custom functions. Caching the most used records is very important to scale your application.

The problem is, you can’t just use local memory. If several application workers serve your application, one worker may update the data, and the others will end up with a stale cache.
This is why we usually use Redis. We convert the data to JSON and send it to this key-value store, which is horizontally scalable and will always be up to date. We lose execution time because of the network and because we have to convert to JSON, but in the end this is still way faster than querying the database.

But again, in a CRUD framework you hit the cache all the time; can’t we really do better? Well, I believe we can, thanks to the messaging system.

What if each application worker not only sent its events to the messaging system, but also subscribed to it? This way, when a worker publishes an event, all the others can immediately invalidate that record in their local cache.

And that’s not all. In local memory, you store your object serialized to binary, and since your object is already a Protobuf object, you can do that at unrivaled speed. This way, you get a near-instantaneous and scalable cache system.
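
As a sketch, assuming marshaled protobuf payloads and an invalidation subject like "customer.updated" carrying the record id (all assumptions of my own), a worker could keep the bytes in a map and drop an entry whenever another worker publishes an update:

```go
package cache

import (
	"sync"

	"github.com/nats-io/nats.go"
)

// localCache keeps serialized protobuf records in process memory and is
// kept honest by invalidation events coming from the messaging system.
type localCache struct {
	mu      sync.RWMutex
	records map[string][]byte // record id -> marshaled protobuf bytes
}

func newLocalCache(nc *nats.Conn) (*localCache, error) {
	c := &localCache{records: make(map[string][]byte)}

	// Assumed convention: every worker publishes "customer.updated" with the
	// record id as payload after a successful commit. On reception, each
	// worker simply drops its local copy of that record.
	_, err := nc.Subscribe("customer.updated", func(msg *nats.Msg) {
		c.mu.Lock()
		delete(c.records, string(msg.Data))
		c.mu.Unlock()
	})
	if err != nil {
		return nil, err
	}
	return c, nil
}

func (c *localCache) get(id string) ([]byte, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	data, ok := c.records[id]
	return data, ok
}

func (c *localCache) set(id string, data []byte) {
	c.mu.Lock()
	c.records[id] = data
	c.mu.Unlock()
}
```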

Conclusion

A prototype already exists and is published on GitLab: https://gitlab.com/empowerlab/example. Have a look at it for more information. It is all open source under the Apache license, with no company behind it, just some lonely exploration from me.

This is only the first part of a more significant project. Next will come the DevOps architecture around it and the frontend libraries to interact with it, but this backend is by far the most crucial step.
If you want to know more, have a look at http://empower.sh, which presents everything I want to have in the stack.

Thank you for your attention, I hope you enjoyed it.


YannickB

French developer, backend/ERP software. Open-source advocate, passionate and always learning. http://empower.sh