Building a Clean Rust Backend with Axum, Diesel, PostgreSQL and DDD: From Concept to Deployment

Quentin Piot
Sep 12, 2023 · 21 min read


In the ever-evolving landscape of web development, Rust has emerged as a compelling choice for building robust and high-performance backend systems. Leveraging Rust’s memory safety guarantees and expressive type system, this article takes you on a journey to create a real-world backend application using cutting-edge technologies like Axum, Diesel, and Domain-Driven Design (DDD).

But we’re not stopping there! We’ll also explore how to supercharge your application with Redis, using it as both a cache and a locking mechanism, to optimize performance and enhance scalability.

Along the way, we’ll delve into the fundamentals of structuring a modular, maintainable codebase and guide you through the process of containerizing your application with Docker for easy deployment. Additionally, to fortify your application’s security, we’ll explore the intricacies of adding OAuth authentication.

So, fasten your seatbelts, as we embark on a comprehensive exploration of modern Rust backend development, from inception to deployment and beyond. Let’s get started!

Link to the GitHub repository: https://github.com/Quentin-Piot/axum-diesel-real-world

Link to my LinkedIn profile: https://www.linkedin.com/in/quentin-piot/

The primary aim of this blog post is not to provide an in-depth tutorial on Rust programming language or the intricacies of querying databases with Diesel ORM. While we’ll touch on essential Rust and Diesel concepts, the primary goal is to illustrate the end-to-end development process and best practices for building a robust backend solution.

Part 1: Initializing the Codebase

To kickstart our journey, you’ll need to have Rust installed on your machine. If you haven’t already, you can install Rust by following the instructions on the official website. Once Rust is up and running, we can create our project.

Open your terminal and use the following command to create a new Rust project:

cargo new axum-diesel-real-world

This command will generate a new directory called axum-diesel-real-world with the basic structure for a Rust project.

Adopting a Modular Structure

To effectively apply Domain-Driven Design (DDD) principles, we’ll structure our project in a modular fashion, mirroring the key domains and capabilities of our application. Here’s the hierarchy we’ll be following:

axum-diesel-real-world/
├── src/
│ ├── domain/
│ │ ├── models/
│ │ ├── mod.rs
│ ├── handlers/
│ ├── infra/
│ │ ├── db/
│ │ ├── repositories/
│ ├── utils/
│ │ ├── custom_extractors/
│ ├── main.rs
│ ├── routes.rs
│ ├── config.rs
│ ├── errors.rs
├── migrations/
├── Cargo.toml
└── README.md

In this structure:

  • domain/ houses your domain logic following DDD principles, with a subdirectory models/ for defining your domain models.
  • handlers/ is where you define your API handlers.
  • infra/ encompasses your infrastructure logic, further divided into db/ for database-related logic and repositories/ for defining repositories.
  • utils/ is where you define utility functions, including custom extractors for Axum.

This modular approach not only adheres to DDD principles but also ensures that your codebase remains organized and maintainable as your project grows.
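To wire these directories into the crate, each one also needs the usual mod.rs declarations. As a rough sketch (the exact module lists are an assumption here; adjust them to the files you actually create as the article proceeds), the declarations could look like:

```rust
// src/domain/mod.rs
pub mod models;

// src/domain/models/mod.rs
pub mod post;

// src/infra/mod.rs
pub mod db;
pub mod errors;
pub mod repositories;

// src/infra/db/mod.rs
pub mod schema;
```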

Setting Up Initial Dependencies

As our project evolves, we’ll rely on external libraries and frameworks to streamline development. Let’s begin by setting up the initial dependencies in our Cargo.toml file:

[dependencies]
axum = { version = "0.6", features = ["macros"] }
axum-macros = "0.3"
chrono = { version = "0.4.26", features = ["serde"] }
diesel = { version = "2.1", features = ["postgres", "uuid", "serde_json"] }
diesel_migrations = "2"
deadpool-diesel = { version = "0.4", features = ["postgres"] }
dotenvy = "0.15"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
tokio = { version = "1.0", features = ["sync", "macros", "rt-multi-thread"] }
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
uuid = { version = "1.4", features = ["fast-rng", "v4", "serde"] }

Configure PostgreSQL database and environment file

To install PostgreSQL on your machine, please follow the official installation instructions provided in the PostgreSQL documentation.

Next, create a database, then save the database URL in a .env file. You’ll also need to specify the host and the port you want your server to listen on.

Note that 127.0.0.1 is the default address corresponding to localhost, while you’ll need to use 0.0.0.0 if the server is running in a Docker container:

DATABASE_URL=postgres://postgres:postgres@localhost/<DATABASE_NAME>
PORT=3000
HOST=127.0.0.1
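To see why the HOST value matters, here is a small stdlib-only sketch of how the host and port are later combined and parsed into a socket address (the values are hard-coded here in place of the .env lookup):

```rust
use std::net::SocketAddr;

fn main() {
    // Stand-ins for the HOST and PORT environment variables
    let host = "127.0.0.1";
    let port = "3000";

    // The server binds to "HOST:PORT", so the pair must parse as a SocketAddr
    let addr: SocketAddr = format!("{}:{}", host, port).parse().unwrap();
    assert!(addr.ip().is_loopback()); // 127.0.0.1: reachable from this machine only

    // Inside a Docker container you would bind to all interfaces instead
    let docker: SocketAddr = "0.0.0.0:3000".parse().unwrap();
    assert!(docker.ip().is_unspecified()); // 0.0.0.0: accepts connections on any interface

    println!("local: {}, docker: {}", addr, docker);
}
```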

Creating the server in your main.rs file

In your main.rs file, you'll define the entry point for your Rust backend application. This is where you'll set up and configure the server using the Axum framework. Axum provides a powerful and ergonomic way to build asynchronous web applications. You'll define your application's routes, middleware, and server configuration in this file, allowing your backend to listen for incoming HTTP requests and respond accordingly. We'll explore this in more detail as we progress through the development of our Rust backend solution.

The code will be explained below; you can also find some comments in it:

src/main.rs


use std::net::SocketAddr;

use deadpool_diesel::postgres::{Manager, Pool};
use diesel_migrations::{embed_migrations, EmbeddedMigrations, MigrationHarness};
use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt};

use crate::config::config;
use crate::errors::internal_error;
use crate::routes::app_router;

// Import modules
mod config;
mod domain;
mod errors;
mod handlers;
mod infra;
mod routes;

// Define embedded database migrations
pub const MIGRATIONS: EmbeddedMigrations = embed_migrations!("migrations/");

// Struct to hold the application state
#[derive(Clone)]
pub struct AppState {
    pool: Pool,
}

// Main function, the entry point of the application
#[tokio::main]
async fn main() {
    // Initialize tracing for logging
    init_tracing();

    // Load configuration settings
    let config = config().await;

    // Create a connection pool to the PostgreSQL database
    let manager = Manager::new(
        config.db_url().to_string(),
        deadpool_diesel::Runtime::Tokio1,
    );
    let pool = Pool::builder(manager).build().unwrap();

    // Apply pending database migrations
    run_migrations(&pool).await;

    // Create an instance of the application state
    let state = AppState { pool };

    // Configure the application router
    let app = app_router(state.clone()).with_state(state);

    // Define the host and port for the server
    let host = config.server_host();
    let port = config.server_port();

    let address = format!("{}:{}", host, port);

    // Parse the socket address
    let socket_addr: SocketAddr = address.parse().unwrap();

    // Log the server's listening address
    tracing::info!("listening on http://{}", socket_addr);

    // Start the Axum server
    axum::Server::bind(&socket_addr)
        .serve(app.into_make_service())
        .await
        .map_err(internal_error)
        .unwrap()
}

// Function to initialize tracing for logging
fn init_tracing() {
    tracing_subscriber::registry()
        .with(
            tracing_subscriber::EnvFilter::try_from_default_env()
                .unwrap_or_else(|_| "example_tokio_postgres=debug".into()),
        )
        .with(tracing_subscriber::fmt::layer())
        .init();
}

// Function to run database migrations
async fn run_migrations(pool: &Pool) {
    let conn = pool.get().await.unwrap();
    conn.interact(|conn| conn.run_pending_migrations(MIGRATIONS).map(|_| ()))
        .await
        .unwrap()
        .unwrap();
}

So what is happening here?

In this code, we are setting up the entry point for our Rust backend application. The main function, marked with #[tokio::main], serves as the starting point of our server. Here's a breakdown of the key actions:

  1. We initialize the logging and tracing infrastructure to capture application events and provide valuable debugging information.
  2. Configuration settings are loaded from the environment, including the database connection details and server settings.
  3. A connection pool to the PostgreSQL database is created using the deadpool-diesel and diesel libraries, allowing efficient and asynchronous database access.
  4. Pending database migrations are applied to ensure that the database schema is up-to-date with our codebase. We utilize the diesel-migrations crate to manage these migrations.
  5. An instance of the application state is created to hold the database connection pool, making it accessible to various parts of our application using dependency injection.
  6. We configure our application’s routes, middleware, and server settings using the Axum framework.
  7. The server is configured to listen on a specified host and port, and the server’s listening address is logged for reference.
  8. Finally, the Axum server is started, serving the configured application and handling incoming HTTP requests.
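If everything compiles, you can sanity-check the server from another terminal (assuming the default HOST and PORT values from the .env file above):

```shell
# Start the server in one terminal
cargo run

# In another terminal, the root route should answer "Server is running!"
curl http://127.0.0.1:3000/

# Any unknown path falls through to the 404 fallback handler
curl -i http://127.0.0.1:3000/does-not-exist
```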

Global configuration

With Tokio, it is simple to have a global configuration using OnceCell, a type from tokio::sync that allows you to initialize and store a value once, and then access that value efficiently across your application.

Let’s create a config.rs file at the root level that will expose a OnceCell containing the configuration, with data loaded from the environment file:

src/config.rs

// Import necessary modules
use std::env;

use dotenvy::dotenv;
use tokio::sync::OnceCell;

// Define a struct to represent server configuration
#[derive(Debug)]
struct ServerConfig {
    host: String,
    port: u16,
}

// Define a struct to represent database configuration
#[derive(Debug)]
struct DatabaseConfig {
    url: String,
}

// Define a struct that aggregates server and database configuration
#[derive(Debug)]
pub struct Config {
    server: ServerConfig,
    db: DatabaseConfig,
}

// Implement methods for the Config struct to access configuration values
impl Config {
    // Getter method for the database URL
    pub fn db_url(&self) -> &str {
        &self.db.url
    }

    // Getter method for the server host
    pub fn server_host(&self) -> &str {
        &self.server.host
    }

    // Getter method for the server port
    pub fn server_port(&self) -> u16 {
        self.server.port
    }
}

// Create a static OnceCell to store the application configuration
pub static CONFIG: OnceCell<Config> = OnceCell::const_new();

// Asynchronously initialize the configuration
async fn init_config() -> Config {
    // Load environment variables from a .env file if present
    dotenv().ok();

    // Create a ServerConfig instance with default values or values from environment variables
    let server_config = ServerConfig {
        host: env::var("HOST").unwrap_or_else(|_| String::from("127.0.0.1")),
        port: env::var("PORT")
            .unwrap_or_else(|_| String::from("3000"))
            .parse::<u16>()
            .unwrap(),
    };

    // Create a DatabaseConfig instance with a required DATABASE_URL environment variable
    let database_config = DatabaseConfig {
        url: env::var("DATABASE_URL").expect("DATABASE_URL must be set"),
    };

    // Create a Config instance by combining server and database configurations
    Config {
        server: server_config,
        db: database_config,
    }
}

// Asynchronously retrieve the application configuration, initializing it if necessary
pub async fn config() -> &'static Config {
    // Get the configuration from the OnceCell or initialize it if it hasn't been set yet
    CONFIG.get_or_init(init_config).await
}

This code defines a configuration structure that aggregates server and database configuration settings. It uses the dotenvy crate to load environment variables from a .env file and sets default values for server configuration. The OnceCell ensures that the configuration is initialized only once and can be accessed globally throughout the application. The config function allows other parts of the application to asynchronously retrieve the configuration, initializing it if necessary.

Write router

In order to keep the code cleaner, I decided to put the router in a separate file, routes.rs, which for now handles only a GET call to the root of the API, plus a fallback returning a 404 response.

src/routes.rs

use axum::http::StatusCode;
use axum::response::IntoResponse;
use axum::routing::{get, post};
use axum::Router;

use crate::AppState;

pub fn app_router(state: AppState) -> Router<AppState> {
    Router::new()
        .route("/", get(root))
        .fallback(handler_404)
}

async fn root() -> &'static str {
    "Server is running!"
}

async fn handler_404() -> impl IntoResponse {
    (
        StatusCode::NOT_FOUND,
        "The requested resource was not found",
    )
}

Now that we have our codebase, let’s work on our first context: Posts!

Part 2: Create our first context: Posts

Set up Diesel and migrations

In order to make using Diesel easier, you can install the CLI tool using Cargo:

cargo install diesel_cli --no-default-features --features "postgres"

Then you need to specify your database URL in your .env file:

DATABASE_URL=postgres://postgres:postgres@localhost/<DATABASE_NAME>

We now just have to set everything up using the following command:

diesel setup

It will create your database, generate a diesel.toml file at the root level, and create a migrations folder with an example.

Let’s write our first migration. We can use the command line tool to generate it automatically:

diesel migration generate create_posts

We can now create our first object, Post, by writing both the up.sql and down.sql files.

I have decided to use UUIDs to reduce the chance of collisions or conflicts, even when data is distributed across different databases or instances.

up.sql

-- uuid_generate_v4() is provided by the uuid-ossp extension;
-- if it is not already enabled on your database, enable it first
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

CREATE TABLE posts
(
    id        uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
    title     VARCHAR NOT NULL,
    body      TEXT NOT NULL,
    published BOOLEAN NOT NULL DEFAULT FALSE
);

down.sql

DROP TABLE posts;

Now that our migration is ready, we can run it through the CLI tool to generate a schema. But first, let’s change the default schema destination to match our project structure:

diesel.toml


[print_schema]
file = "src/infra/db/schema.rs"
custom_type_derives = ["diesel::query_builder::QueryId"]

[migrations_directory]
dir = "migrations"

We can now apply our migration :

diesel migration run

A new file has been created in our src/infra/db folder:

schema.rs

diesel::table! {
    posts (id) {
        id -> Uuid,
        title -> Varchar,
        body -> Text,
        published -> Bool,
    }
}

This table! macro creates a new public module with the same name as the table. In this module, you will find a unit struct named table, and a unit struct for each of the columns. They will be used to interact with the database later on.

Well done, we now have everything ready to finally start coding!

Create our post model

Let’s move to our domain and create our post model.

src/domain/models/post.rs

use uuid::Uuid;

#[derive(Clone, Debug, PartialEq)]
pub struct PostModel {
    pub id: Uuid,
    pub title: String,
    pub body: String,
    pub published: bool,
}

We can also add errors linked to this domain to make error handling more reliable and easier to understand.

use crate::infra::errors::InfraError;

#[derive(Debug)]
pub enum PostError {
    InternalServerError,
    NotFound(Uuid),
    InfraError(InfraError),
}

In Axum, every type returned by a handler must implement IntoResponse. Let’s implement it for our new PostError enum:

use axum::http::StatusCode;
use axum::response::IntoResponse;
use axum::Json;
use serde_json::json;

impl IntoResponse for PostError {
    fn into_response(self) -> axum::response::Response {
        let (status, err_msg) = match self {
            Self::NotFound(id) => (
                StatusCode::NOT_FOUND,
                format!("PostModel with id {} has not been found", id),
            ),
            Self::InfraError(db_error) => (
                StatusCode::INTERNAL_SERVER_ERROR,
                format!("Internal server error: {}", db_error),
            ),
            _ => (
                StatusCode::INTERNAL_SERVER_ERROR,
                String::from("Internal server error"),
            ),
        };
        (
            status,
            Json(
                json!({"resource": "PostModel", "message": err_msg, "happened_at": chrono::Utc::now() }),
            ),
        )
            .into_response()
    }
}

As it’s a very simple use case, that’s all we need in our model for now. Let’s move to our repository and implement the basic CRUD operations; we will see how to interact with our database and return our model.

Global errors handling

Let’s create a file at the root level, errors.rs, to handle global-level errors. It will help later on in your application.

src/errors.rs

// Import necessary modules and types
use axum::http::StatusCode;
use axum::response::IntoResponse;
use axum::Json;
use serde_json::json;

// Define an enumeration for custom application errors
#[derive(Debug)]
pub enum AppError {
    InternalServerError,      // Represents an internal server error
    BodyParsingError(String), // Represents an error related to request body parsing
}

// Define a util to create an internal server error
pub fn internal_error<E>(_err: E) -> AppError {
    AppError::InternalServerError
}

// Implement the `IntoResponse` trait for the `AppError` enumeration
impl IntoResponse for AppError {
    // Define the conversion to an Axum response
    fn into_response(self) -> axum::response::Response {
        // Define status and error message based on the error variant
        let (status, err_msg) = match self {
            Self::InternalServerError => (
                StatusCode::INTERNAL_SERVER_ERROR,
                String::from("Internal Server Error"),
            ),
            Self::BodyParsingError(message) => (
                StatusCode::BAD_REQUEST,
                format!("Bad request error: {}", message),
            ),
        };

        // Create a JSON response containing the error message
        (status, Json(json!({ "message": err_msg }))).into_response()
    }
}

Here is what this code does:

  • It defines an AppError enumeration with two variants: InternalServerError, representing an internal server error, and BodyParsingError, representing an error related to request body parsing, carrying a custom error message.
  • The internal_error function is defined to create instances of the InternalServerError variant.
  • The IntoResponse trait is implemented for the AppError enumeration, allowing instances of AppError to be converted into Axum responses.
  • Inside the into_response method implementation, the code matches the error variant to determine the HTTP status code and error message to be returned in the response.
  • Finally, it creates a JSON response containing the error message and returns it as an Axum response.

This code allows you to handle and respond to various types of errors in your Axum application, including internal server errors and errors related to request body parsing. It ensures that appropriate HTTP status codes and error messages are included in the responses to provide meaningful feedback to clients
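As a quick, hypothetical illustration (this handler is not part of the repository), returning Result<_, AppError> from any handler lets Axum render errors through the IntoResponse implementation above:

```rust
use axum::Json;
use serde_json::{json, Value};

use crate::errors::AppError;

// Hypothetical example handler: echoes a JSON body back,
// or answers 400 Bad Request via AppError when the body is missing
async fn echo(body: Option<Json<Value>>) -> Result<Json<Value>, AppError> {
    match body {
        Some(Json(value)) => Ok(Json(json!({ "received": value }))),
        None => Err(AppError::BodyParsingError(
            "expected a JSON body".to_string(),
        )),
    }
}
```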

Create repository

The goal of the repository is to deal with interactions between your application’s domain models (such as PostModel) and the database using the Diesel ORM.

In this repository for our posts, we will define the structs representing the database entities, the different functions to perform operations but also the adapters to transform our database entities into domain models.

We’ll also perform error handling using the PostError enum we have created in the previous section.

But first, let’s create our error handler for infra errors. It will be used to convert different types of errors into a generic InfraError. We then need to create a trait that will be implemented by the different error types encountered in our infra.

I’ll also add a simple utility to adapt the errors directly.

src/infra/errors.rs

use std::fmt;

use deadpool_diesel::InteractError;

// Define a custom error type for infrastructure-related errors
#[derive(Debug)]
pub enum InfraError {
    InternalServerError, // Represents an internal server error
    NotFound,            // Represents a resource not found error
}

// Utility function to adapt errors of generic type T into InfraError
pub fn adapt_infra_error<T: Error>(error: T) -> InfraError {
    error.as_infra_error()
}

// Implement the Display trait to customize how InfraError is displayed
impl fmt::Display for InfraError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            InfraError::NotFound => write!(f, "Not found"),
            InfraError::InternalServerError => write!(f, "Internal server error"),
        }
    }
}

// Define a custom Error trait for types that can be converted to InfraError
pub trait Error {
    fn as_infra_error(&self) -> InfraError;
}

// Implement the Error trait for diesel::result::Error
impl Error for diesel::result::Error {
    fn as_infra_error(&self) -> InfraError {
        match self {
            diesel::result::Error::NotFound => InfraError::NotFound, // Map NotFound to InfraError::NotFound
            _ => InfraError::InternalServerError, // Map other errors to InfraError::InternalServerError
        }
    }
}

// Implement the Error trait for deadpool_diesel::PoolError
impl Error for deadpool_diesel::PoolError {
    fn as_infra_error(&self) -> InfraError {
        InfraError::InternalServerError // Map all PoolError instances to InfraError::InternalServerError
    }
}

// Implement the Error trait for InteractError
impl Error for InteractError {
    fn as_infra_error(&self) -> InfraError {
        InfraError::InternalServerError // Map all InteractError instances to InfraError::InternalServerError
    }
}

src/infra/repositories/post_repository.rs


use diesel::{
    ExpressionMethods, Insertable, PgTextExpressionMethods, QueryDsl, Queryable, RunQueryDsl,
    Selectable, SelectableHelper,
};
use serde::{Deserialize, Serialize};
use uuid::Uuid;

use crate::domain::models::post::PostModel;
use crate::infra::db::schema::posts;
use crate::infra::errors::{adapt_infra_error, InfraError};

// Define a struct representing the database schema for posts
#[derive(Serialize, Queryable, Selectable)]
#[diesel(table_name = posts)] // Use the 'posts' table
#[diesel(check_for_backend(diesel::pg::Pg))] // Check compatibility with PostgreSQL
pub struct PostDb {
    pub id: Uuid,
    pub title: String,
    pub body: String,
    pub published: bool,
}

// Define a struct for inserting new posts into the database
#[derive(Deserialize, Insertable)]
#[diesel(table_name = posts)] // Use the 'posts' table
pub struct NewPostDb {
    pub title: String,
    pub body: String,
    pub published: bool,
}

// Define a struct for filtering posts
#[derive(Deserialize)]
pub struct PostsFilter {
    published: Option<bool>,
    title_contains: Option<String>,
}

// Function to insert a new post into the database
pub async fn insert(
    pool: &deadpool_diesel::postgres::Pool,
    new_post: NewPostDb,
) -> Result<PostModel, InfraError> {
    // Get a database connection from the pool and handle any potential errors
    let conn = pool.get().await.map_err(adapt_infra_error)?;

    // Insert the new post into the 'posts' table, returning the inserted post
    let res = conn
        .interact(|conn| {
            diesel::insert_into(posts::table)
                .values(new_post)
                .returning(PostDb::as_returning()) // Return the inserted post
                .get_result(conn)
        })
        .await
        .map_err(adapt_infra_error)?
        .map_err(adapt_infra_error)?;

    // Adapt the database representation to the application's domain model
    Ok(adapt_post_db_to_post(res))
}

// Function to retrieve a post from the database by its ID
pub async fn get(
    pool: &deadpool_diesel::postgres::Pool,
    id: Uuid,
) -> Result<PostModel, InfraError> {
    // Get a database connection from the pool and handle any potential errors
    let conn = pool.get().await.map_err(adapt_infra_error)?;

    // Query the 'posts' table to retrieve the post by its ID
    let res = conn
        .interact(move |conn| {
            posts::table
                .filter(posts::id.eq(id))
                .select(PostDb::as_select()) // Select the post
                .get_result(conn)
        })
        .await
        .map_err(adapt_infra_error)?
        .map_err(adapt_infra_error)?;

    // Adapt the database representation to the application's domain model
    Ok(adapt_post_db_to_post(res))
}

// Function to retrieve a list of posts from the database with optional filtering
pub async fn get_all(
    pool: &deadpool_diesel::postgres::Pool,
    filter: PostsFilter,
) -> Result<Vec<PostModel>, InfraError> {
    // Get a database connection from the pool and handle any potential errors
    let conn = pool.get().await.map_err(adapt_infra_error)?;

    // Build a dynamic query for retrieving posts
    let res = conn
        .interact(move |conn| {
            let mut query = posts::table.into_boxed::<diesel::pg::Pg>();

            // Apply filtering conditions if provided
            if let Some(published) = filter.published {
                query = query.filter(posts::published.eq(published));
            }

            if let Some(title_contains) = filter.title_contains {
                query = query.filter(posts::title.ilike(format!("%{}%", title_contains)));
            }

            // Select the posts matching the query
            query.select(PostDb::as_select()).load::<PostDb>(conn)
        })
        .await
        .map_err(adapt_infra_error)?
        .map_err(adapt_infra_error)?;

    // Adapt the database representations to the application's domain models
    let posts: Vec<PostModel> = res.into_iter().map(adapt_post_db_to_post).collect();

    Ok(posts)
}

// Function to adapt a database representation of a post to the application's domain model
fn adapt_post_db_to_post(post_db: PostDb) -> PostModel {
    PostModel {
        id: post_db.id,
        title: post_db.title,
        body: post_db.body,
        published: post_db.published,
    }
}

Now that we’ve got our database interactions sorted with the repository, it’s time to dive into the fun stuff: handling incoming HTTP requests and crafting responses. In the next sections, we’ll explore the handlers that’ll make our API shine. This is where things really start to get exciting, as we connect our data layer with the Axum framework to create a fully functional and secure API.

Create our handlers

Handlers play a crucial role in web applications: their primary goal is to handle incoming HTTP requests and generate appropriate HTTP responses. Each handler below translates an HTTP request into calls on the repository, then turns the result back into a response.

create_post.rs

// Import necessary modules and types
use axum::extract::State;
use axum::Json;

// Import internal modules and types
use crate::domain::models::post::PostError;
use crate::handlers::posts::{CreatePostRequest, PostResponse};
use crate::infra::repositories::post_repository;
// This is a placeholder to extract JSON data from the request body.
use crate::utils::JsonExtractor;
use crate::AppState;

// Define the handler function for creating a new post
pub async fn create_post(
    State(state): State<AppState>, // Extract the application state from the request
    JsonExtractor(new_post): JsonExtractor<CreatePostRequest>, // Extract JSON data from the request body
) -> Result<Json<PostResponse>, PostError> {
    // Create a NewPostDb instance with data from the JSON request
    let new_post_db = post_repository::NewPostDb {
        title: new_post.title,
        body: new_post.body,
        published: false, // Set the initial 'published' status to false
    };

    // Insert the new post into the database using the repository
    let created_post = post_repository::insert(&state.pool, new_post_db)
        .await
        .map_err(PostError::InfraError)?; // Handle potential infrastructure errors

    // Create a PostResponse instance from the newly created post
    let post_response = PostResponse {
        id: created_post.id,
        title: created_post.title,
        body: created_post.body,
        published: created_post.published,
    };

    // Return the response as JSON with a success status
    Ok(Json(post_response))
}
  • State(state): This line extracts the AppState from the application's state, which contains the database connection pool and other shared application data.
  • JsonExtractor(new_post): This line extracts the JSON data from the request body and deserializes it into a CreatePostRequest struct. The extractor will be explained in a following section
  • The code then creates a NewPostDb instance based on the extracted request data, setting the initial 'published' status to false.
  • It uses the post_repository::insert function to insert the new post into the database, handling potential infrastructure errors.
  • After successfully inserting the post, it constructs a PostResponse from the created post's data.
  • Finally, it returns the PostResponse as JSON with a success status in an Ok result, making it ready to be sent as a response to the client's request.

get_post.rs

// Import necessary modules and types
use axum::extract::State;
use axum::Json;
use uuid::Uuid;

// Import internal modules and types
use crate::domain::models::post::{PostError, PostModel};
use crate::handlers::posts::PostResponse;
use crate::infra::errors::InfraError;
use crate::infra::repositories::post_repository;
// Import PathExtractor for extracting the post_id from the request path
use crate::utils::PathExtractor;
use crate::AppState;

// Define the handler function for retrieving a specific post by its ID
pub async fn get_post(
    State(state): State<AppState>, // Extract the application state from the request
    PathExtractor(post_id): PathExtractor<Uuid>, // Extract the post_id from the request path
) -> Result<Json<PostResponse>, PostError> {
    // Use the post_repository to fetch the post based on its ID
    let post = post_repository::get(&state.pool, post_id)
        .await
        .map_err(|db_error| match db_error {
            // Map infrastructure errors to custom PostError types
            InfraError::InternalServerError => PostError::InternalServerError,
            InfraError::NotFound => PostError::NotFound(post_id),
        })?;

    // Convert the retrieved PostModel to a PostResponse
    Ok(Json(adapt_post_to_post_response(post)))
}

// Helper function to adapt a PostModel to a PostResponse
fn adapt_post_to_post_response(post: PostModel) -> PostResponse {
    PostResponse {
        id: post.id,
        title: post.title,
        body: post.body,
        published: post.published,
    }
}
  • State(state): This line extracts the AppState from the application's state, which contains the database connection pool and other shared application data.
  • PathExtractor(post_id): This line extracts the post_id from the request's path. The PathExtractor is used to extract data from the request path, in this case, the unique identifier of the post.
  • The code then uses the post_repository::get function to fetch the post from the database based on the provided post_id. It also handles potential database errors and maps them to custom PostError types.
  • The adapt_post_to_post_response function converts the retrieved PostModel to a PostResponse to prepare it for the JSON response.
  • Finally, it returns the PostResponse as JSON in an Ok result, making it ready to be sent as a response to the client's request.

list_posts.rs

// Import necessary modules and types
use axum::extract::{Query, State};
use axum::Json;

// Import internal modules and types
use crate::domain::models::post::{PostError, PostModel};
use crate::handlers::posts::{ListPostsResponse, PostResponse};
use crate::infra::repositories::post_repository::{get_all, PostsFilter};
use crate::AppState;

// Define the handler function for listing posts with optional query parameters
pub async fn list_posts(
    State(state): State<AppState>, // Extract the application state from the request
    Query(params): Query<PostsFilter>, // Extract query parameters for filtering posts
) -> Result<Json<ListPostsResponse>, PostError> {
    // Use the `get_all` function to retrieve a list of posts based on the provided query parameters
    let posts = get_all(&state.pool, params)
        .await
        .map_err(|_| PostError::InternalServerError)?;

    // Convert the retrieved list of PostModel instances to a ListPostsResponse
    Ok(Json(adapt_posts_to_list_posts_response(posts)))
}

// Helper function to adapt a single PostModel to a PostResponse
fn adapt_post_to_post_response(post: PostModel) -> PostResponse {
    PostResponse {
        id: post.id,
        title: post.title,
        body: post.body,
        published: post.published,
    }
}

// Helper function to adapt a list of PostModel instances to a ListPostsResponse
fn adapt_posts_to_list_posts_response(posts: Vec<PostModel>) -> ListPostsResponse {
    // Map each PostModel to a PostResponse and collect them into a Vec<PostResponse>
    let posts_response: Vec<PostResponse> =
        posts.into_iter().map(adapt_post_to_post_response).collect();

    // Create a ListPostsResponse containing the list of PostResponses
    ListPostsResponse {
        posts: posts_response,
    }
}
  • State(state): This line extracts the AppState from the application's state, which contains the database connection pool and other shared application data.
  • Query(params): This line extracts query parameters from the request. In this case, it extracts the PostsFilter struct, which can contain parameters like published and title_contains for filtering posts.
  • The code then uses the get_all function to retrieve a list of posts from the database based on the provided query parameters. It also handles potential database errors and maps them to a custom PostError type.
  • The adapt_post_to_post_response function is used to convert a single PostModel instance to a PostResponse.
  • The adapt_posts_to_list_posts_response function converts a list of PostModel instances to a ListPostsResponse. It does this by mapping each PostModel to a PostResponse and collecting them into a vector.
  • Finally, it returns the ListPostsResponse as JSON in an Ok result, making it ready to be sent as a response to the client's request. This handler allows clients to list posts and potentially filter them based on query parameters.
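The PostsFilter struct referenced above lives in the repository layer and isn't reproduced in this section. As a rough, std-only sketch of its shape and its filtering semantics (field names follow the bullets above; in the real code the struct derives serde::Deserialize so that Query can populate it, and the filtering is expressed in SQL through Diesel rather than in memory):

```rust
// Hypothetical sketch of PostsFilter; the real struct derives serde::Deserialize
// so axum's Query extractor can build it from the URL query string.
#[derive(Debug, Default)]
pub struct PostsFilter {
    pub published: Option<bool>,        // e.g. /v1/posts?published=true
    pub title_contains: Option<String>, // e.g. /v1/posts?title_contains=rust
}

// In-memory illustration of the semantics the repository expresses in SQL:
// a missing parameter means "don't filter on this field".
pub fn matches(filter: &PostsFilter, published: bool, title: &str) -> bool {
    filter.published.map_or(true, |p| p == published)
        && filter
            .title_contains
            .as_deref()
            .map_or(true, |needle| title.contains(needle))
}

fn main() {
    let filter = PostsFilter {
        published: Some(true),
        title_contains: Some("rust".to_string()),
    };
    assert!(matches(&filter, true, "learning rust"));
    assert!(!matches(&filter, false, "learning rust"));
    // An empty filter matches everything.
    assert!(matches(&PostsFilter::default(), false, "anything"));
    println!("ok");
}
```

The Option fields are what make every query parameter optional: clients can combine filters freely or send none at all.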
Handle post creation and querying from our router

Update the router with our new handlers

Now that we have created our handlers, our domain models, and our infra repositories, we can finally update our router to link every part of the application together.

Let’s add our three different routes in a new nested router: posts_routes


use axum::http::StatusCode;
use axum::response::IntoResponse;
use axum::routing::{get, post};
use axum::Router;

// Import internal handlers and the AppState type
use crate::handlers::posts::{create_post, get_post, list_posts};
use crate::AppState;

// Define the main application router
pub fn app_router(state: AppState) -> Router<AppState> {
    // Create a new Router for the application
    Router::new()
        // Define a route for the root path "/"
        .route("/", get(root))
        // Nest a sub-router under the path "/v1/posts"
        .nest("/v1/posts", posts_routes(state.clone()))
        // Define a fallback handler for 404 Not Found errors
        .fallback(handler_404)
}

// Handler for the root path "/"
async fn root() -> &'static str {
    "Server is running!" // Return a simple message indicating the server is running
}

// Fallback handler for 404 Not Found errors
async fn handler_404() -> impl IntoResponse {
    (
        StatusCode::NOT_FOUND,                  // Set the HTTP status code to 404 Not Found
        "The requested resource was not found", // Provide an error message
    )
}

// Define a sub-router for handling posts-related routes
fn posts_routes(state: AppState) -> Router<AppState> {
    // Create a new Router for posts-related routes
    Router::new()
        // Define a route for creating a new post using the HTTP POST method
        .route("/", post(create_post))
        // Define a route for listing posts using the HTTP GET method
        .route("/", get(list_posts))
        // Define a route for retrieving a specific post by ID using the HTTP GET method
        .route("/:id", get(get_post))
        // Provide the application state to this sub-router
        .with_state(state)
}
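With this wiring, the application exposes POST /v1/posts, GET /v1/posts, and GET /v1/posts/:id. A tiny std-only sketch of how nest() conceptually combines the prefix with the sub-router’s paths (axum performs this composition internally; the helper below is purely illustrative):

```rust
// Illustrative only: conceptual path composition performed by Router::nest.
fn nested_path(prefix: &str, route: &str) -> String {
    if route == "/" {
        prefix.to_string()
    } else {
        format!("{prefix}{route}")
    }
}

fn main() {
    let prefix = "/v1/posts";
    // The routes registered in posts_routes(), as seen from the outside:
    assert_eq!(nested_path(prefix, "/"), "/v1/posts"); // POST create_post, GET list_posts
    assert_eq!(nested_path(prefix, "/:id"), "/v1/posts/:id"); // GET get_post
    println!("ok");
}
```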

Custom axum extractors to improve error handling

Custom extractors allow you to define structured input data extraction, validation, and transformation logic, making it easier to handle and report errors effectively.

By creating custom extractors, you can:

  1. Encapsulate Input Processing: Custom extractors encapsulate the process of extracting data from incoming requests. This helps keep your handler functions clean and focused on application logic rather than data validation.
  2. Centralize Error Handling: You can implement error handling and validation logic within the custom extractor itself. This means that any errors related to input data can be captured and reported in a standardized way.
  3. Improve Code Reusability: Once defined, custom extractors can be reused across multiple routes and handlers. This promotes code reusability and reduces duplication of error handling logic.
  4. Enhance Readability: Custom extractors make your route and handler functions more readable by abstracting away low-level details of data extraction and validation.
  5. Provide Clear Error Responses: When an error occurs during data extraction, custom extractors can generate clear error responses with appropriate HTTP status codes and error messages. This helps clients understand and react to errors effectively.

In the previous sections, we have used two custom extractors — why is that?

It is because we want to adapt the errors coming from JSON deserialization and path-parameter parsing so that they are transformed into AppErrors.

In our case, we are building on already existing extractors from Axum: Json() and Path()

Within the official axum-macros crate, you can find a very useful derive macro that lets us define custom extractors deriving from existing ones to improve error handling: FromRequest.

Let’s write our two custom extractors:

json_extractor.rs

// Import necessary modules and types
use axum::extract::rejection::JsonRejection;
use axum_macros::FromRequest;

// Import internal AppError type
use crate::errors::AppError;

// Define a custom extractor for JSON data
#[derive(FromRequest)]
#[from_request(via(axum::Json), rejection(AppError))] // Derive the FromRequest trait with specific configuration
pub struct JsonExtractor<T>(pub T);

// Implement the conversion from JsonRejection to AppError
impl From<JsonRejection> for AppError {
    fn from(rejection: JsonRejection) -> Self {
        // Convert the JsonRejection into a BodyParsingError with the rejection message
        AppError::BodyParsingError(rejection.to_string())
    }
}
  • The code defines a custom extractor JsonExtractor<T> for JSON data. This extractor is designed to extract data of type T from JSON request bodies.
  • The #[derive(FromRequest)] attribute macro is used to automatically implement the FromRequest trait for the JsonExtractor<T>. This trait allows Axum to use this custom extractor in route handlers.
  • In the #[from_request(via(axum::Json), rejection(AppError))] attribute, the extractor is configured to work with JSON data using axum::Json. Additionally, it specifies that rejections should be handled using the AppError type.
  • The impl From<JsonRejection> for AppError block implements the conversion from JsonRejection (which can occur when JSON parsing fails) to the custom AppError type.
  • Inside the implementation, it creates an AppError::BodyParsingError variant with the rejection message obtained from the JsonRejection. This allows for consistent error handling and reporting when JSON parsing fails.
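The derive relies entirely on that From implementation: when the inner axum::Json extractor rejects a request, the generated code calls .into() to turn the rejection into our AppError. Here is a self-contained sketch of the mechanism, with a stand-in type playing the role of JsonRejection (the error message is invented for illustration):

```rust
// Stand-in for axum's JsonRejection, used only to demonstrate the conversion.
struct FakeRejection;

impl std::fmt::Display for FakeRejection {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "Failed to parse the request body as JSON")
    }
}

// Simplified stand-in for the application's error type.
#[derive(Debug, PartialEq)]
enum AppError {
    BodyParsingError(String),
}

// The same conversion pattern as in json_extractor.rs: any rejection
// becomes a BodyParsingError carrying the rejection's message.
impl From<FakeRejection> for AppError {
    fn from(rejection: FakeRejection) -> Self {
        AppError::BodyParsingError(rejection.to_string())
    }
}

fn main() {
    let err: AppError = FakeRejection.into();
    assert_eq!(
        err,
        AppError::BodyParsingError("Failed to parse the request body as JSON".to_string())
    );
    println!("{err:?}");
}
```

Because AppError already implements IntoResponse elsewhere in the codebase, this single conversion is enough for malformed JSON to surface as a structured HTTP error instead of Axum's default rejection.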

path_extractor.rs

use axum::extract::rejection::PathRejection;
use axum_macros::FromRequestParts;

use crate::errors::AppError;

#[derive(FromRequestParts, Debug)]
#[from_request(via(axum::extract::Path), rejection(AppError))]
pub struct PathExtractor<T>(pub T);

impl From<PathRejection> for AppError {
    fn from(rejection: PathRejection) -> Self {
        AppError::BodyParsingError(rejection.to_string())
    }
}
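Both extractors are tuple structs (newtypes) wrapping the extracted value, so handlers destructure them directly in the argument position, e.g. PathExtractor(id). A std-only illustration of that destructuring pattern (the u32 id and function name are stand-ins, not from the repository):

```rust
// Stand-in newtype mirroring PathExtractor<T>.
struct PathExtractor<T>(pub T);

// Destructuring in the function signature gives direct access to the inner
// value, exactly as a handler writes `PathExtractor(id): PathExtractor<...>`.
fn describe_post(PathExtractor(id): PathExtractor<u32>) -> String {
    format!("fetching post {id}")
}

fn main() {
    assert_eq!(describe_post(PathExtractor(42)), "fetching post 42");
    println!("ok");
}
```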

What is coming next?

Thanks to everyone for taking the time to read this tutorial! In this post, we’ve covered the foundations of building a real-world backend application in Rust using Axum and Diesel, following Domain-Driven Design principles. We’ve explored the creation of modular code, database configuration, and the implementation of handlers for various API endpoints. But we’re not done yet!

In the upcoming sections, we’ll dive into two critical aspects of our application. First, we’ll explore how to make testing in Axum, to deploy our Rust backend using Docker, ensuring that it’s ready for production use. Then, we’ll take a closer look at adding OAuth authentication to enhance the security and user management of our application.

So, stay tuned for the next parts of this series, where we’ll continue to build and enhance our backend application. Happy coding!
