How to apply clean architecture principles to Golang micro-services

A step by step example of how applying clean architecture in Golang can lead to more maintainable and testable code

Matteo Giaccone
Apr 24, 2018 · 10 min read

As a freelancer, I spend a lot of my time between projects learning and experimenting with new technologies and concepts. A few years ago I got into Golang and it has since become the go-to language for a lot of my experiments.

Lately, I have been attempting to apply principles of clean architecture to Golang and I thought I would share how I’ve been doing it in my current personal project.

We’ll be building a simple micro-service based application. The first service (producer) will expose its API both via HTTP and RPC, while the second (consumer) will be a simple service that only listens for events on a message bus.

The sample code for this article is available on GitHub. Let’s get started…

The problem

Developing a micro-service architecture is always a challenge: there is a lot of boilerplate code that has to be written and repeated for every single service (parsing CLI flags, connecting to a DB, handling transport, etc.), and it is easy to end up with all the logic crammed into a single file.

Keeping that in mind, I set the following goals for the project:

  • Don’t use an external micro-service framework
  • Apply clean architecture principles
  • Reduce boilerplate by implementing reusable, self-contained components that allow for service composition
  • Don’t try to be too clever by standardising similar-sounding components (for instance, avoid creating a unified interface that can connect to any possible DB in existence)

Clean Architecture

The clean architecture conceptual model is usually drawn as a set of concentric layers: entities at the centre, surrounded by use cases, then interface adapters (presenters, controllers, gateways), and finally frameworks and drivers on the outside.

In simple terms, each inner layer doesn’t know anything about the outer layers. So, entities don’t know anything about the use cases, use cases don’t know anything about how presenters will use them and so on…

For an in-depth explanation, refer to Uncle Bob’s original post on The Clean Architecture.
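To make the dependency rule concrete, here is a tiny sketch with made-up names (it is not code from the project): the inner layer defines both the entity and the interface it needs, and an outer layer supplies the implementation, so source-code dependencies always point inwards.

package example

// Entity: innermost layer, knows nothing about the outside world.
type Greeting struct {
    Name string
}

// The use case declares the port it needs as an interface...
type GreetingStore interface {
    Save(g Greeting) error
}

// ...and depends only on that interface, never on a concrete DB or transport.
type SaveGreetingUseCase struct {
    store GreetingStore
}

func (uc *SaveGreetingUseCase) Execute(name string) error {
    return uc.store.Save(Greeting{Name: name})
}

// An outer layer (for example a Postgres or in-memory repository) implements
// GreetingStore and gets injected, so the use case never imports it.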

Service and components

All the services are implemented around two simple abstractions.

type Component interface {
    ID() string
    Configure(service Service, cliCtx *cli.Context) error
    DependsOn() []string
    Flags() []cli.Flag
    Initialize(wg *sync.WaitGroup, startedCh chan<- struct{}, shutdownCh <-chan struct{}, errCh chan<- error)
    Logger() *zerolog.Logger
}

A component is a specialised unit responsible for:

  • Processing the command line/environment arguments for itself
  • Applying its own configuration
  • Initialising and managing its own lifecycle
  • Exposing functionality (e.g. a DB client), if required

The second abstraction is the service itself:

type Service interface {
    AddComponent(component Component, deps ...string) error
    Bootstrap()
    Component(id string) (Component, error)
    DebugMode() bool
    Handler() Component
    Logger() *zerolog.Logger
    Name() string
    Shutdown()
}

A service is a simple implementation of the above interface. It wraps most of the boilerplate code and is responsible for:

  • Holding components and resolving the dependencies between them (see the sketch below)
  • Bootstrapping the micro-service
  • Controlling the lifecycle of the application
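
The framework package in the repository takes care of this wiring. Purely as an illustration (this is not the project’s actual implementation), resolving components in dependency order can be sketched as a depth-first topological sort over component IDs:

package main

import "fmt"

// resolveOrder returns component IDs so that every component appears
// after the components it depends on (a plain depth-first topological sort).
func resolveOrder(deps map[string][]string) ([]string, error) {
    var order []string
    visited := map[string]bool{}
    visiting := map[string]bool{}

    var visit func(id string) error
    visit = func(id string) error {
        if visited[id] {
            return nil
        }
        if visiting[id] {
            return fmt.Errorf("circular dependency involving %q", id)
        }
        visiting[id] = true
        for _, dep := range deps[id] {
            if err := visit(dep); err != nil {
                return err
            }
        }
        visiting[id] = false
        visited[id] = true
        order = append(order, id)
        return nil
    }

    for id := range deps {
        if err := visit(id); err != nil {
            return nil, err
        }
    }
    return order, nil
}

func main() {
    // The handler depends on the broker, so the broker must be initialised first.
    order, _ := resolveOrder(map[string][]string{
        "nats-broker": nil,
        "handler":     {"nats-broker"},
    })
    fmt.Println(order) // [nats-broker handler]
}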

You might have noticed the Handler() method in the service interface. The handler is the component at the heart of the service itself and it acts as a presenter in the clean architecture model. Its responsibility is to wire all the other components together (a poor man’s dependency injection) and to expose the business logic of the service. Notice I said “expose” and not “contain”; I will come back to this later in the article.

Each service follows the package structure convention below, but not all packages are required for every service.

[serviceName]/
    domain/
    handler/
    repository/
    usecase/
    main.go

Implementing a component

As we will be using NATS as a message broker between the services, we need to implement a component to connect to it. Here’s a snippet of the component implementation:

package natsbroker

import ( ... )

const (
    Component = "nats-broker"
    ...
)

var cliFlags = []cli.Flag{ ... }

func Create(options ...nats.Option) framework.Component {
    return &natsBroker{natsOptions: options}
}

func Get(service framework.Service) (Broker, error) { ... }

type Broker interface {
    Client() *nats.Conn
    Publish(topic string, msg proto.Message) error
}

type natsBroker struct {
    client      *nats.Conn
    logger      *zerolog.Logger
    natsUrl     string
    natsOptions []nats.Option
}

...

func (b *natsBroker) Configure(service framework.Service, cliCtx *cli.Context) error {
    logger := service.Logger().With().Str("component", Component).Logger()
    b.logger = &logger

    natsUrl := cliCtx.String(flagNatsUrl)
    if natsUrl == "" {
        return fmt.Errorf("missing nats url")
    }
    b.natsUrl = natsUrl

    return nil
}

func (b *natsBroker) Initialize(wg *sync.WaitGroup, startedCh chan<- struct{}, shutdownCh <-chan struct{}, errCh chan<- error) {
    // make sure we notify when we are done
    defer wg.Done()

    b.logger.Debug().Msg("connecting...")
    client, err := nats.Connect(b.natsUrl, b.natsOptions...)
    if err != nil {
        errCh <- err
        return
    }
    b.client = client

    b.logger.Info().Msg("connected")
    // Notify service of initialisation completed
    close(startedCh)

    doneCh := make(chan struct{}, 1)
    stopFunc := func() {
        b.logger.Debug().Msg("disconnecting...")
        b.client.Close()
        close(doneCh)
    }
    // Wait for shutdown signal
    <-shutdownCh
    b.logger.Debug().Msg("shutdown signal received...")
    stopFunc()

    // Wait until stopFunc() completes
    <-doneCh
    b.logger.Info().Msg("disconnected")
}

As you can see, the component does one thing only: it holds the connection to NATS and exposes the client and a publish method.

The most important methods in the component are Configure() and Initialize(). The former handles the configuration flags from the command line and populates the configuration, while the latter initialises the NATS client, notifies the service of success (or error), and waits for the shutdown notification from the service to close the client.

Implementing the consumer service

Now that we have our first component, let’s see how it all comes together by implementing the consumer service. This service only listens for events coming from the message broker and doesn’t expose any public API.

package main

import ( ... )

const Name = "consumer"

var (
    GitCommit string
    Version   string
)

func main() {
    service, err := framework.Create(Name, Version, GitCommit, handler.ServiceHandler())
    if err != nil {
        panic(err)
    }

    service.AddComponent(natsbroker.Create())
    service.Bootstrap()
}

And that’s it, this is how we bootstrap a new service!

Let’s now have a look at the service handler and how it interacts with the component. As I mentioned earlier, the handler has to wire components and expose the business logic.

This is where the use case concept in the clean architecture model comes in handy. A use case handles entities and wraps a unit of business logic, keeping it separate from the rest of the code. This separation makes the code cleaner, much easier to replace, and easy to test.

For the sake of simplicity, we assume we are only printing a log line when we receive a new message.

package usecase

import ( ... )

func NewHandleMessageUseCase() *handleMessageUseCase {
    return &handleMessageUseCase{}
}

type HandleMessageFunc func(logger *zerolog.Logger, rawMsg *nats.Msg) error

type handleMessageUseCase struct {
}

func (uc *handleMessageUseCase) Execute(logger *zerolog.Logger, rawMsg *nats.Msg) error {
    if rawMsg == nil {
        return fmt.Errorf("the message cannot be NIL")
    }

    var message messaging.EventUserHello
    if err := proto.Unmarshal(rawMsg.Data, &message); err != nil {
        return fmt.Errorf("error deserializing user hello event")
    }

    logger.Info().Msgf("[%s] says: %s", message.GetName(), message.GetMessage())
    return nil
}

In the real world, a use case might have references to repositories or other services. The important thing is wrapping the business logic to solve a specific problem in a single place.
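
Because a use case is just a plain struct with an Execute method, it can be unit-tested without a running NATS server. Here is a minimal sketch of such a test (imports elided as in the other snippets; the EventUserHello type comes from the repository’s messaging package):

package usecase

import ( ... ) // testing, proto, nats, zerolog and the messaging package

func TestHandleMessageUseCase(t *testing.T) {
    logger := zerolog.Nop()
    uc := NewHandleMessageUseCase()

    // A nil message should be rejected.
    if err := uc.Execute(&logger, nil); err == nil {
        t.Error("expected an error for a NIL message")
    }

    // A valid EventUserHello payload should be handled without errors.
    payload, err := proto.Marshal(&messaging.EventUserHello{Name: "Matteo", Message: "hello"})
    if err != nil {
        t.Fatalf("failed to marshal event: %v", err)
    }
    if err := uc.Execute(&logger, &nats.Msg{Data: payload}); err != nil {
        t.Errorf("expected no error, got %v", err)
    }
}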

Right… but how do we use it? Let’s go back to the service handler.

package handler

import ( ... )

func ServiceHandler() framework.Component {
    return &serviceHandler{}
}

type serviceHandler struct {
    service       framework.Service
    logger        *zerolog.Logger
    handleMessage usecase.HandleMessageFunc
}

...

func (h *serviceHandler) DependsOn() []string {
    return []string{natsbroker.Component}
}

func (h *serviceHandler) Initialize(...) {
    // Notify shutdown complete on exit
    defer wg.Done()

    broker, err := natsbroker.Get(h.service)
    if err != nil {
        errCh <- err
        return
    }

    h.handleMessage = usecase.NewHandleMessageUseCase().Execute

    go h.monitorEvents(broker, shutdownCh)
    close(startedCh)

    // Wait for shutdown
    <-shutdownCh
}

func (h *serviceHandler) monitorEvents(broker natsbroker.Broker, shutdownCh <-chan struct{}) {
    // Listen for messages on the broker and dispatch
    ...

    for {
        select {
        case msg := <-userHelloCh:
            go h.handleMessage(h.logger, msg)
        case <-shutdownCh:
            h.logger.Info().Msg("stopped monitoring topics")
            return
        }
    }
}

Similarly to the NATS component, the service handler configures and initialises itself and it can have its own CLI flags. As the handler depends on the broker, it has to explicitly declare that dependency:

func (h *serviceHandler) DependsOn() []string {
    return []string{natsbroker.Component}
}

And this is how we use the use case to expose its logic in the handler:

type serviceHandler struct {
    ...
    handleMessage usecase.HandleMessageFunc
}

...

func (h *serviceHandler) Initialize(...) {
    ...
    h.handleMessage = usecase.NewHandleMessageUseCase().Execute
    ...
}

func (h *serviceHandler) monitorEvents(...) {
    ...
    for {
        select {
        case msg := <-userHelloCh:
            go h.handleMessage(h.logger, msg)
        case <-shutdownCh:
            ...
        }
    }
}

Neat, isn’t it? Let’s move on and implement a fully featured service.

Implementing the producer service

The producer service is more complicated and it will have the following features:

  • Expose an HTTP API
  • Expose an RPC API
  • Store the message in a DB
  • Send the message to the consumer service through the message broker

Let’s assume we have already implemented the HTTP, RPC and DB storage components (you can find the code in the GitHub repository); our producer service will look like this:

package main

import ( ... )

const Name = "producer"

var (
    GitCommit string
    Version   string
)

func main() {
    service, err := framework.Create(Name, Version, GitCommit, handler.ServiceHandler())
    if err != nil {
        panic(err)
    }

    service.AddComponent(cloudstore.Create())
    service.AddComponent(natsbroker.Create())
    service.AddComponent(microhttp.Create(handler.InitHttpFunc), framework.HandlerComponent)
    service.AddComponent(microrpc.Create(handler.InitRpcFunc), framework.HandlerComponent)
    service.Bootstrap()
}

We are now composing the service with more components and adding an explicit dependency on the handler for the HTTP and RPC components.

Our use case to handle the request will require access to the message entity.

package domain

import ( ... )

const MessageKind = "Message"

type Message struct {
    ID        *datastore.Key `datastore:"__key__"`
    Name      string
    Message   string
    CreatedAt time.Time
}

func NewMessage(name string, message string) *Message {
    key := datastore.IDKey(MessageKind, 0, nil)
    return &Message{
        ID:      key,
        Name:    name,
        Message: message,
    }
}

and the repository:

package repository

import ( ... )

func NewMessageRepository(logger *zerolog.Logger, cloudstore cloudstore.Store) MessageRepository {
    return &messageRepository{
        logger:     logger,
        cloudstore: cloudstore,
    }
}

type MessageRepository interface {
    GetAll() ([]domain.Message, error)
    Store(*domain.Message) error
}

type messageRepository struct {
    cloudstore cloudstore.Store
    logger     *zerolog.Logger
}

func (r *messageRepository) GetAll() ([]domain.Message, error) {
    client := r.cloudstore.Client()

    query := datastore.NewQuery(domain.MessageKind)
    var result []domain.Message
    if _, err := client.GetAll(context.Background(), query, &result); err != nil {
        return nil, err
    }
    return result, nil
}

func (r *messageRepository) Store(entity *domain.Message) error {
    client := r.cloudstore.Client()

    if _, err := client.Put(context.Background(), entity.ID, entity); err != nil {
        return err
    }
    return nil
}

We could avoid using a repository and let the use case talk directly to the datastore, but that would duplicate code and it would be prone to errors. A repository can be tested independently and mocked to test the use case without requiring a database instance.
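
For instance, a hypothetical in-memory implementation of the MessageRepository interface (not part of the repository on GitHub, just a sketch, with imports elided as in the other snippets) is enough to stand in for Cloud Datastore in unit tests:

package repository

import ( ... ) // fmt and the domain package

// InMemoryMessageRepository is a hypothetical stand-in for the datastore-backed
// repository: it satisfies MessageRepository but keeps everything in memory.
type InMemoryMessageRepository struct {
    Messages []domain.Message
    FailNext bool // set to true to simulate a storage error
}

func (r *InMemoryMessageRepository) GetAll() ([]domain.Message, error) {
    if r.FailNext {
        return nil, fmt.Errorf("simulated repository failure")
    }
    return r.Messages, nil
}

func (r *InMemoryMessageRepository) Store(msg *domain.Message) error {
    if r.FailNext {
        return fmt.Errorf("simulated repository failure")
    }
    r.Messages = append(r.Messages, *msg)
    return nil
}

Because the use cases only ever see the MessageRepository interface, they cannot tell the difference between this and the real thing.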

For the first use case, we want to store the message and publish an event:

package usecase

import ( ... )

func NewStoreAndPublishMessageUseCase(broker natsbroker.Broker, repo repository.MessageRepository) *storeAndPublishMessageUseCase {
    return &storeAndPublishMessageUseCase{
        broker: broker,
        repo:   repo,
    }
}

type StoreAndPublishMessageFunc func(logger *zerolog.Logger, requestId string, msg *domain.Message) error

type storeAndPublishMessageUseCase struct {
    broker natsbroker.Broker
    repo   repository.MessageRepository
}

func (uc *storeAndPublishMessageUseCase) Execute(logger *zerolog.Logger, requestId string, msg *domain.Message) error {
    if msg == nil {
        return fmt.Errorf("message cannot be NIL")
    }

    // Store message in datastore
    if err := uc.repo.Store(msg); err != nil {
        return fmt.Errorf("error storing message: %v", err)
    }

    // Publish message event to broker
    event := &messaging.EventUserMessage{
        RequestId: requestId,
        Name:      msg.Name,
        Message:   msg.Message,
    }
    if err := uc.broker.Publish(messaging.ChannelUserMessage, event); err != nil {
        logger.Error().Err(err).Msg("error publishing event")
        // The message was stored, so we still return success even if publishing the event failed
    }

    return nil
}

The second use case retrieves all the messages from the DB:

package usecase

import ( ... )

func NewGetMessagesUseCase(repo repository.MessageRepository) *getMessagesUseCase {
    return &getMessagesUseCase{
        repo: repo,
    }
}

type GetMessagesUseCaseFunc func(logger *zerolog.Logger) ([]domain.Message, error)

type getMessagesUseCase struct {
    repo repository.MessageRepository
}

func (uc *getMessagesUseCase) Execute(logger *zerolog.Logger) ([]domain.Message, error) {
    logger.Debug().Msg("loading messages from repository")

    messages, err := uc.repo.GetAll()
    if err != nil {
        return nil, fmt.Errorf("error loading messages from repository: %v", err)
    }
    return messages, nil
}
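
And this is how the use cases pay off in tests: with the hypothetical in-memory repository sketched above, getMessagesUseCase can be exercised without touching Cloud Datastore (again a sketch, with imports elided; the repository on GitHub may structure its tests differently):

package usecase

import ( ... ) // testing, zerolog and the project packages

func TestGetMessagesUseCase(t *testing.T) {
    logger := zerolog.Nop()

    // Use the in-memory repository sketched earlier instead of Cloud Datastore.
    repo := &repository.InMemoryMessageRepository{
        Messages: []domain.Message{*domain.NewMessage("Matteo", "hello")},
    }

    uc := NewGetMessagesUseCase(repo)
    messages, err := uc.Execute(&logger)
    if err != nil {
        t.Fatalf("expected no error, got %v", err)
    }
    if len(messages) != 1 || messages[0].Message != "hello" {
        t.Errorf("unexpected messages: %+v", messages)
    }
}

storeAndPublishMessageUseCase can be covered in the same way by pairing the repository with a small stub implementation of natsbroker.Broker.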

As you can see, both use cases depend on the repository and/or the message broker component. The service handler is responsible for making sure those dependencies are available; this is the relevant snippet from the service handler:

func (h *serviceHandler) Initialize(...) {
    defer wg.Done()

    broker, err := natsbroker.Get(h.service)
    if err != nil {
        errCh <- err
        return
    }

    datastore, err := cloudstore.Get(h.service)
    if err != nil {
        errCh <- err
        return
    }

    messageRepo := repository.NewMessageRepository(h.logger, datastore)

    h.getMessages = usecase.NewGetMessagesUseCase(messageRepo).Execute
    h.storeAndPublishMessage = usecase.NewStoreAndPublishMessageUseCase(broker, messageRepo).Execute

    close(startedCh)

    // We don't need to wait for the shutdown signal in this case.
    // The HTTP and RPC components will do that for us
}

Now that everything is wired up, we can access the use cases from the handler’s HTTP interface:

package handler

import ( ... )

type httpPublishRequest struct {
    Name    string `json:"name,omitempty"`
    Message string `json:"message,omitempty"`
}

type httpMessage struct {
    Name      string    `json:"name,omitempty"`
    Message   string    `json:"message,omitempty"`
    CreatedAt time.Time `json:"createdAt,omitempty"`
}

func InitHttpFunc(service framework.Service, _ framework.Component, router *gin.Engine) error {
    handler := service.Handler().(*serviceHandler)

    router.GET("/messages", getMessagesHandler(handler))
    router.POST("/publish", publishHandler(handler))

    return nil
}

func getMessagesHandler(handler *serviceHandler) gin.HandlerFunc {
    return func(c *gin.Context) {
        logger := microhttp.RequestLogger(c)

        messageEntities, err := handler.getMessages(logger)
        if err != nil {
            c.AbortWithError(http.StatusInternalServerError, err)
            return
        }

        messages := make([]*httpMessage, 0)

        for _, m := range messageEntities {
            msg := &httpMessage{
                Name:      m.Name,
                Message:   m.Message,
                CreatedAt: m.CreatedAt,
            }
            messages = append(messages, msg)
        }
        c.JSON(http.StatusOK, messages)
    }
}

func publishHandler(handler *serviceHandler) gin.HandlerFunc {
    return func(c *gin.Context) {
        logger := microhttp.RequestLogger(c)
        requestId := microhttp.RequestId(c)

        var body httpPublishRequest
        if err := c.ShouldBindWith(&body, binding.JSON); err != nil {
            c.AbortWithStatus(http.StatusBadRequest)
            return
        }

        message := domain.NewMessage(body.Name, body.Message)
        if err := handler.storeAndPublishMessage(logger, requestId, message); err != nil {
            logger.Error().Err(err).Msg("server error")
            c.AbortWithStatus(http.StatusInternalServerError)
            return
        }
        c.Status(http.StatusCreated)
    }
}

and from the handler’s RPC interface:

package handler

import ( ... )

func InitRpcFunc(service framework.Service, component framework.Component, server *grpc.Server) error {
    RegisterProducerRPCServer(server, &rpcServer{
        logger:  component.Logger(),
        handler: service.Handler().(*serviceHandler),
    })
    return nil
}

type rpcServer struct {
    logger  *zerolog.Logger
    handler *serviceHandler
}

func (s *rpcServer) GetMessages(context.Context, *Empty) (*GetMessagesReply, error) {
    messageEntities, err := s.handler.getMessages(s.logger)
    if err != nil {
        return nil, err
    }

    messages := make([]*Message, 0)

    for _, m := range messageEntities {
        createdAt, _ := ptypes.TimestampProto(m.CreatedAt)
        msg := &Message{
            Name:      m.Name,
            Message:   m.Message,
            CreatedAt: createdAt,
        }
        messages = append(messages, msg)
    }
    return &GetMessagesReply{
        Messages: messages,
    }, nil
}

func (s *rpcServer) PublishMessage(ctx context.Context, req *PublishMessageRequest) (*Empty, error) {
    requestId := util.NewUUID()

    message := domain.NewMessage(req.GetMessage().Name, req.GetMessage().Message)
    if err := s.handler.storeAndPublishMessage(s.logger, requestId, message); err != nil {
        s.logger.Error().Err(err).Msg("server error")
        return nil, err
    }
    return &Empty{}, nil
}

Wrapping up

Thank you for reading this far! I didn’t think it was going to be this long. We have seen how easy it is to apply clean architecture principles to a Golang application. Despite having to write a little more code for the use cases, I believe splitting the code into smaller units pays off by making the application easier to test and more maintainable in the long run.

Have a look at the full code on GitHub and try it out!
