Structuring Go Applications: A Practical Approach

Darwishdev
8 min read · Sep 6, 2024


Introduction

When building APIs, one common question is, “How should I structure my Go application?” While there are plenty of theoretical responses to this question, practical examples are often lacking. In this article, we’ll explore the lifecycle of a typical API feature, demonstrate the naive way of structuring Go applications, and propose a more maintainable and performant approach using domains, use cases, repositories, and adapters.

API Feature Lifecycle

Let’s start by understanding the API feature lifecycle. Imagine an API call, like /userCreate. Once a proper server is set up, this call maps to a function in your code. Typically, this function will:

  1. Validate the incoming request.
  2. Convert the request into the required database query parameters.
  3. Interact with the database to get or put data.
  4. Parse the database response into an API response model.

If this describes your process, then this article will guide you through a simple, maintainable way to structure your Go code.

The Naive Approach

In many initial implementations, the API handler does everything: validation, database interactions, and response formatting. This approach often results in a folder structure like:

common/
├── api/
│   └── user.go
├── models/
│   └── user.go
└── db/
    └── db.go

While straightforward, this structure has clear downsides, including tight coupling, code duplication, and difficulty in testing and scaling.
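To make the problem concrete, here is a minimal sketch of such an all-in-one handler, numbered to match the lifecycle steps above. All names (the request model, the handler, the insertUser helper) are illustrative, not taken from a real project:

```go
package main

import (
	"encoding/json"
	"errors"
	"net/http"
)

// CreateUserRequest is a hypothetical request model for /userCreate.
type CreateUserRequest struct {
	Email string `json:"email"`
	Name  string `json:"name"`
}

// insertUser stands in for a real INSERT against the database.
func insertUser(email, name string) (int64, error) {
	if email == "" {
		return 0, errors.New("missing email")
	}
	return 1, nil
}

// userCreateHandler does everything in one place: validation, database
// interaction, and response formatting.
func userCreateHandler(w http.ResponseWriter, r *http.Request) {
	// 1. Validate the incoming request.
	var req CreateUserRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil || req.Email == "" {
		http.Error(w, "invalid request", http.StatusBadRequest)
		return
	}
	// 2. Convert the request into database query parameters (trivial here).
	email, name := req.Email, req.Name
	// 3. Interact with the database.
	id, err := insertUser(email, name)
	if err != nil {
		http.Error(w, "internal error", http.StatusInternalServerError)
		return
	}
	// 4. Parse the database result into an API response model.
	json.NewEncoder(w).Encode(map[string]int64{"userId": id})
}

func main() {
	http.HandleFunc("/userCreate", userCreateHandler)
	// http.ListenAndServe(":8080", nil) // left commented: sketch only
}
```

Notice how the handler knows about the wire format, the validation rules, and the database all at once; that entanglement is exactly what the sections below untangle.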

Downsides of the Naive Approach

The naive approach of placing all logic within the API handler has several drawbacks that can hinder maintainability, testability, and scalability:

1. Tight Coupling:

  • Dependency on Database: The API handler is directly coupled to the database, making it difficult to test without a running database or to mock database interactions.
  • Hardcoded Logic: Business logic is tightly integrated with the API handler, making it challenging to reuse or modify without affecting the API’s interface.

2. Code Duplication:

  • Repeated Validation: Validation logic might be duplicated across multiple API handlers, leading to inconsistencies and potential errors.
  • Similar Database Interactions: If multiple API handlers interact with the database in similar ways, the code can become redundant and harder to maintain.

3. Difficulty in Testing:

  • Complex Setup: Testing the API handler requires setting up a complete environment, including a database and potentially other dependencies.
  • Limited Isolation: It’s difficult to test individual components of the API handler in isolation, making it harder to identify and fix bugs.

4. Scalability Challenges:

  • Performance Bottlenecks: If the API handler handles both validation and database interactions, it can become a performance bottleneck, especially under heavy load.
  • Maintenance Overhead: As the application grows, managing a single, large API handler can become increasingly complex and time-consuming.

5. Reduced Reusability:

  • Limited Component Usage: The tightly coupled code makes it difficult to reuse components in other parts of the application or in different projects.

To address these issues, a more structured approach is needed that separates concerns and promotes modularity.

Towards a Better Structure

Proposing Domains and Use Cases

To improve, we propose dividing the application into domains. Each domain should represent a logical grouping of related features, mapped closely to your database schemas. For example, the “Accounts” domain would include everything related to user management, roles, etc.

Here’s a revised folder structure after introducing domains and use cases:

app/
└── accounts/
    └── usecase/
        ├── usecase.go       // Defines interfaces for account operations
        └── user.go          // Implementation of user-related use cases
common/
├── api/
│   ├── accounts_rpc.go      // API endpoints for accounts
│   └── server.go            // Server setup and use case injection
└── proto/
    └── accounts.proto       // Protobuf definitions
main.go

The Repository Layer and its Benefits

The repository layer acts as an abstraction between your use case logic and the underlying persistence mechanism (such as a database). It defines interfaces for data-access operations, hiding the specifics of how data is stored and retrieved. This separation offers several advantages:

  • Improved Testability: You can write unit tests for your use cases without needing a real database by mocking the repository interface.
  • Flexibility: Easily switch persistence mechanisms (e.g., from SQL to NoSQL) by implementing a new repository that interacts with the desired database.
  • Decoupling: Use cases don’t have to worry about the specifics of database interactions, making the code more maintainable and reusable.

How it Benefits Our Use Case:

In our example of user management, a repository could provide interfaces for:

  • Creating a new user
  • Getting a user by ID
  • Updating user information

This allows the user.go file within the accounts/usecase directory to focus solely on user-related business logic without needing direct database access code.
The updated folder structure would look like this:

app/
└── accounts/
    ├── usecase/
    │   ├── usecase.go       // Defines interfaces for account operations
    │   └── user.go          // Implementation of user-related use cases
    └── repo/
        ├── repo.go          // Defines interfaces for the accounts repository
        └── user.go          // Implementation of user-related SQL functions
common/
├── api/
│   ├── accounts_rpc.go      // API endpoints for accounts
│   └── server.go            // Server setup and use case injection
└── proto/
    └── accounts.proto       // Protobuf definitions
main.go

The Problem with In-Domain Repositories and Direct SQL Execution

While having repositories within domains can provide a level of encapsulation, directly executing SQL code within them introduces several potential drawbacks:

  • Tight Coupling: The repository becomes tightly coupled to the specific SQL and database implementation, limiting flexibility and making it harder to switch databases or adapt to changes in SQL syntax.
  • Maintenance Overhead: Managing SQL queries within the repository can increase maintenance complexity, especially as the application grows and queries become more complex.
  • Security Risks: Embedding SQL directly within the code can introduce security vulnerabilities, such as SQL injection attacks, if not handled carefully.
  • Reduced Reusability: It’s less likely that the repository can be reused in other contexts or with different data sources if it’s deeply tied to specific SQL code.

Separating SQL Code and Introducing a Store Interface

To address these issues, consider separating the actual SQL code from the domain-specific repositories. This can be achieved by:

  1. Creating a Dedicated Query Layer: Establish a separate layer or package to house all SQL queries. This could be named queries, sql, or a similar convention.
  2. Defining a Store Interface: Declare an interface that exposes the query operations to the domain repositories, so they depend on a contract rather than on a concrete database.
  3. Implementing the Store: Create a concrete implementation of the store interface that interacts with the database using the SQL queries from the dedicated layer. This implementation can leverage SQLC or other tools for code generation and query management.
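A minimal sketch of what this store layer might look like, assuming a Postgres-style database and SQLC-like hand-mirrored types; the query, struct, and interface names are illustrative, not generated output:

```go
package main

import (
	"context"
	"database/sql"
)

// CreateUserParams mirrors the kind of struct SQLC would generate from a
// query file.
type CreateUserParams struct {
	Email string
	Name  string
}

// Store is the contract the domain repositories depend on. Only this
// package knows about database/sql and the actual query strings.
type Store interface {
	CreateUser(ctx context.Context, arg CreateUserParams) (int64, error)
}

// sqlStore is a concrete Store backed by *sql.DB.
type sqlStore struct {
	db *sql.DB
}

// The query is parameterized: values are bound, never concatenated into SQL.
const createUserQuery = `INSERT INTO users (email, name) VALUES ($1, $2) RETURNING user_id`

func (s *sqlStore) CreateUser(ctx context.Context, arg CreateUserParams) (int64, error) {
	var id int64
	err := s.db.QueryRowContext(ctx, createUserQuery, arg.Email, arg.Name).Scan(&id)
	return id, err
}

// Compile-time check that sqlStore satisfies the Store interface.
var _ Store = (*sqlStore)(nil)

func main() {
	// Wiring sketch only; a real main would do something like:
	//   db, _ := sql.Open("pgx", dsn)
	//   store := &sqlStore{db: db}
	// and pass the store into the domain repositories.
}
```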

Benefits of This Approach

  • Improved Flexibility: By separating SQL code, you can more easily switch databases or modify queries without affecting the domain logic.
  • Enhanced Reusability: The store interface can be reused in different parts of the application or even in other projects, promoting code modularity.
  • Reduced Security Risks: Centralizing SQL queries allows for better security practices, such as parameterized queries and input validation.
  • Simplified Testing: It’s easier to test the store interface independently, without standing up the rest of the application.

The updated folder structure would look like this:

app/
└── accounts/
    ├── usecase/
    │   ├── usecase.go       // Defines interfaces for account operations
    │   └── user.go          // Implementation of user-related use cases
    └── repo/
        ├── repo.go          // Defines interfaces for the accounts repository
        └── user.go          // Implementation of user-related SQL functions
common/
├── api/
│   ├── accounts_rpc.go      // API endpoints for accounts
│   └── server.go            // Server setup and use case injection
├── proto/
│   └── accounts.proto       // Protobuf definitions
└── sql/
    ├── store.go             // Store interface and implementation
    └── user.sql.go          // User-related SQL queries
main.go

Introducing the Adapter Layer for In-Domain Conversions

Understanding the Current Endpoint Lifecycle

With the changes introduced so far, the typical endpoint lifecycle involves the following steps:

  1. Endpoint Call: An incoming API request triggers the execution of the corresponding endpoint handler.
  2. Request Validation: The handler validates the incoming request to ensure it adheres to the expected format and contains necessary data.
  3. Use Case Invocation: The validated request is passed to the relevant use case within the appropriate domain.
  4. Request Transformation: The use case transforms the request into the parameters required by the underlying repository layer.
  5. Repository Interaction: The use case calls the repository to perform the necessary data access operations.
  6. Response Parsing: The repository’s response is parsed and transformed into the desired API response format.

Separating Request and Response Transformations with Adapters

While this structure provides a good separation of concerns, there’s still an opportunity for further improvement. The logic of transforming API requests into repository parameters and vice versa can be extracted into a separate layer called adapters.
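As a sketch, an adapter pairs the two conversions, one per direction. The API and DB types below stand in for protobuf- and SQLC-generated models; every name is illustrative:

```go
package main

import "fmt"

// ApiUserCreateRequest and ApiUserResponse stand in for the
// protobuf-generated API models.
type ApiUserCreateRequest struct {
	Email string
	Name  string
}

type ApiUserResponse struct {
	ID    int64
	Email string
	Name  string
}

// DbCreateUserParams and DbUserRow stand in for SQLC-generated types.
type DbCreateUserParams struct {
	Email string
	Name  string
}

type DbUserRow struct {
	UserID int64
	Email  string
	Name   string
}

// UserAdapter owns both directions of the conversion, so the use case
// never performs field mapping itself.
type UserAdapter struct{}

// UserCreateSqlFromGrpc converts an API request into repository params.
func (UserAdapter) UserCreateSqlFromGrpc(req *ApiUserCreateRequest) DbCreateUserParams {
	return DbCreateUserParams{Email: req.Email, Name: req.Name}
}

// UserGrpcFromSql converts a database row back into an API response.
func (UserAdapter) UserGrpcFromSql(row *DbUserRow) *ApiUserResponse {
	return &ApiUserResponse{ID: row.UserID, Email: row.Email, Name: row.Name}
}

func main() {
	a := UserAdapter{}
	params := a.UserCreateSqlFromGrpc(&ApiUserCreateRequest{Email: "a@b.c", Name: "Alice"})
	resp := a.UserGrpcFromSql(&DbUserRow{UserID: 7, Email: params.Email, Name: params.Name})
	fmt.Println(resp.ID, resp.Email) // 7 a@b.c
}
```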

The updated folder structure would look like this:

app/
└── accounts/
    ├── usecase/
    │   ├── usecase.go       // Defines interfaces for account operations
    │   └── user.go          // Implementation of user-related use cases
    ├── adapter/
    │   ├── adapter.go       // Defines interfaces for account adapters
    │   └── user.go          // Implementation of user-related adapters
    └── repo/
        ├── repo.go          // Defines interfaces for the accounts repository
        └── user.go          // Implementation of user-related SQL functions
common/
├── api/
│   ├── accounts_rpc.go      // API endpoints for accounts
│   └── server.go            // Server setup and use case injection
├── proto/
│   └── accounts.proto       // Protobuf definitions
└── sql/
    ├── store.go             // Store interface and implementation
    └── user.sql.go          // User-related SQL queries
main.go

Benefits of Using Adapters:

  • Enhanced Flexibility: Adapters can encapsulate the details of request and response transformations, making it easier to adapt to changes in API or database schemas.
  • Improved Reusability: Adapters can be reused across different use cases within a domain, reducing code duplication and improving maintainability.
  • Simplified Testing: Adapters can be tested independently, making it easier to verify the correctness of request and response transformations.

Conclusion and What’s Next

Throughout this article, we’ve navigated through some complex concepts often associated with the clean architecture approach, such as domains, repositories, and use cases. However, rather than diving straight into the jargon, my aim was to first highlight the problems we encounter when structuring Go applications naively. By gradually presenting these challenges, I introduced the necessary architectural terms and solutions in a practical context, making it easier to grasp their importance and application.

The structure we discussed can be visualized in a layered diagram, representing a clean separation of concerns:

1. API Layer (Port): This layer serves as the entry point of the application, where API requests are handled.

2. SQL Layer (Infrastructure / Store): This layer deals with the data access logic, executing SQL queries and interacting with the database.

3. Domain Layer: This is where the core business logic resides, divided into sub-layers:

  1. Use Case Layer: The entry point for domain-specific operations, designed to match the API interface and handle the core logic of each feature.
  2. Repository Layer: This layer acts as an interface between the domain and the SQL layer, providing data access methods that the use case interacts with. It abstracts the data retrieval and storage logic.
  3. Adapter Layer: Responsible for converting API request structures into database function parameters and vice versa, adapting the database responses into API responses. This layer is typically called twice in each function by the use case: once for input adaptation and once for output adaptation.

By structuring your Go application in this way, you achieve a clear, maintainable, and scalable architecture that separates concerns and allows for easy testing and future enhancements.

However, reading about this structure and the underlying design patterns is not enough to fully understand them. In the next article, we will put these theories into practice by building a fully functional API from scratch. We’ll implement role-based permissions and comprehensive user management features using the layered structure discussed here. I’ll also introduce my preferred toolset, and guide you step-by-step through the process, ensuring you gain a hands-on understanding of these concepts.

As we move forward, we’ll see how this structured approach simplifies complexity and enhances the maintainability of your Go applications. So stay tuned, and get ready to transform theory into practice as we continue this journey of building robust, scalable APIs in Go!
