Introducing PrrrStack

This is the first part in a two-part introductory series on the PRRR stack (Postgres, Rust, Rocket, React). You’re probably wondering whether yet another stack is necessary, and the answer is of course not. Skepticism towards any one-technology-fits-all approach to web development is healthy. I learned web development with the MEAN stack and had no idea why I was using each piece. The same is probably true for many of the previous generation who started with LAMP. Technologies have their place and purpose, and learning a stack can be a good introduction to new things, but other than that, they’re just marketing (and I think I came up with a pretty good name to market).

Why use PRRR Stack then?

  • functionalish programming — Neither Rust nor JavaScript is a functional language, but both allow for a somewhat functional style, leading to cleaner code that’s easier to write than, say, Haskell or Elm.
  • learning Rust — Maybe you’ve heard about the language that’s been Stack Overflow’s most loved language for the past three years, but you’re coming from a web background with little-to-no systems-level experience.
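As a small taste of that functionalish style, here’s a sketch of an iterator chain in Rust (the function and data are made up for illustration):

```rust
// A hypothetical example of the "functionalish" style Rust encourages:
// transforming data with iterator adapters instead of explicit loops.
fn positive_kill_counts(cats: &[(&str, i32)]) -> Vec<i32> {
    cats.iter()
        .filter(|&&(_, kills)| kills > 0) // keep cats with at least one kill
        .map(|&(_, kills)| kills)         // drop the names, keep the counts
        .collect()
}

fn main() {
    let cats = [("Whiskers", 0), ("Dingle Poo", 13)];
    println!("{:?}", positive_kill_counts(&cats)); // [13]
}
```

No mutation, no index bookkeeping — the data flows through a pipeline, much like map/filter chains in JavaScript.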

We’ll be using Rocket because it makes writing a web server in Rust ridiculously easy and has a fair amount of safety built in. Diesel is an ORM and query builder that works with a variety of databases. I’ve chosen Postgres, not only because it helps with the acronym, but because, as a recent convert, I think it’s the most powerful choice. Also, it has the benefit of being open source. We’ll also be using R2D2 to create a database pool that maintains open connections between our database and app.

The frontend will be written in React. Part of the reason is the convenience of JSX and stateless functional components, but my biggest reason for choosing React over Angular or Vue is (surprisingly not the acronym but) that React uses one-way data binding whereas the latter two use two-way data binding. Personally, I feel that two-way data binding significantly and unnecessarily increases complexity and mental load.

Before getting started, make sure you have Rust installed and are using the nightly version. You’ll also want to have Postgres, the Diesel CLI, Node/npm, and Webpack.


I guess the standard intro to new frameworks these days is a To Do list or a Tour of Heroes. Well, since this is PrrrStack, we’ll do a Tour of Cats with lovely things like the cats’ names, bios, pictures, and their kill counts.

We’ll start by creating a new project, letting Rust know we’d like a binary: cargo new --bin prrr_demo. We’ll also want to make sure we’re using the nightly version of Rust by running rustup override set nightly in the project’s directory.

Let’s go ahead and add our dependencies to our Cargo.toml

[dependencies]
diesel = { version = "1.2.2", features = ["postgres"] }
dotenv = "0.11.0"
r2d2 = "0.8.2"
r2d2-diesel = "1.0.0"
serde = "1.0.43"
serde_derive = "1.0.43"
serde_json = "1.0.16"
rocket = "0.3.9"
rocket_codegen = "0.3.9"
rocket_contrib = { version = "0.3.9", default-features = false, features = ["json"] }
rocket_cors = "0.2.3"

Let’s also create the different files we’ll need in our /src directory, which should look like this:

/src
|---db.rs
|---main.rs
|---models.rs
|---routes.rs
|---schema.rs

Now that we’ve got that set up, we’ll go ahead and build out a basic Rocket application and test it.

Here’s what our main.rs looks like:

#![feature(plugin, custom_derive, const_fn, decl_macro, extern_prelude)]
#![plugin(rocket_codegen)]

extern crate rocket;
extern crate rocket_contrib;

fn rocket() -> rocket::Rocket {
    rocket::ignite()
        .mount("/", routes![index])
}

#[get("/")]
fn index<'a>() -> &'a str {
    "Hello!"
}

fn main() {
    rocket().launch();
}

A little on what it’s doing. First, we’re letting Rust know we want to use plugins. We’re then pulling in the external Rocket library. Next, we’re creating a function rocket that returns an instance of rocket::Rocket. Inside that function, we’re mounting our routes. Our only route at the moment is the get route defined by the Rocket macro below. Here, we see just one way that Rocket makes declaring (and protecting) routes easy. Finally, we’re calling the launch method on the returned instance of our rocket in our main function. Now all we have to do is type cargo run in the command line to compile and run our application. We should get something like this:

🔧  Configured for development.
=> address: localhost
=> port: 8000
=> log: normal
=> workers: 8
=> secret key: generated
=> limits: forms = 32KiB
=> tls: disabled
🛰 Mounting '/':
=> GET /
🚀 Rocket has launched from http://localhost:8000

and if we check out localhost:8000 we should see “Hello!”. Gotta love those emojis!


Next, we’ll be setting up our database pool, building our models, and connecting our app to Postgres.

First, we’ll import the external modules we need into db.rs

use dotenv::dotenv;
use diesel::pg::PgConnection;
use r2d2;
use r2d2_diesel::ConnectionManager;
use rocket::http::Status;
use rocket::request::{self, FromRequest};
use rocket::{Outcome, Request, State};
use std::env;
use std::ops::Deref;

Next, we’ll create a type alias Pool for an R2D2 pool of Postgres connections

pub type Pool = r2d2::Pool<ConnectionManager<PgConnection>>;

And create a function that returns a Pool.

pub fn create_db_pool() -> Pool {
    dotenv().ok();
    let database_url = env::var("DATABASE_URL").expect("DATABASE_URL must exist");
    let manager = ConnectionManager::<PgConnection>::new(database_url);
    r2d2::Pool::new(manager).expect("db pool failure")
}

Here, we’re first making sure that we can access our environment variables. Then, we’re assigning our DATABASE_URL environment variable to a variable. Calling expect() will cause the program to panic with the message we provide if the variable is missing. This is lazy as far as error handling goes, but it will work for our demo. Next, we create a new ConnectionManager that establishes a connection to a Postgres database with the URL we’ll soon supply as an environment variable. Finally, we return an R2D2 pool with our newly-created connection manager.
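If you’d rather not panic, a less lazy sketch (std-only, the function name is mine) could surface the missing variable as a Result instead:

```rust
use std::env;

// A hypothetical alternative to expect(): report a missing DATABASE_URL
// as a Result so the caller decides how to handle it.
fn database_url() -> Result<String, String> {
    env::var("DATABASE_URL").map_err(|_| "DATABASE_URL must exist".to_string())
}

fn main() {
    env::set_var("DATABASE_URL", "postgres://postgres:postgres@localhost/prrr_demo");
    match database_url() {
        Ok(url) => println!("connecting to {}", url),
        Err(msg) => eprintln!("{}", msg),
    }
}
```

We’ll stick with expect() in the demo for brevity.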

We’ll go ahead and create a Rocket request guard while we’re here too that verifies we’ve connected to the database. You can think of Request Guards as providing extra validation for routes. A common example (one that we won’t go into here) would be verifying that a user is logged in before being able to access a specific route.
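That logged-in-user example can be sketched, framework-free, like this (all names here are hypothetical, not Rocket’s API — the real guard for our database connection follows below):

```rust
// A toy, framework-free sketch of the request-guard idea: a check that runs
// before a handler and either yields a value or fails the request.
enum Outcome<T> {
    Success(T),
    Failure(u16), // an HTTP-ish status code
}

struct LoggedInUser(String);

// Hypothetical guard: succeed only if the request carries a session header.
fn guard_logged_in(headers: &[(&str, &str)]) -> Outcome<LoggedInUser> {
    match headers.iter().find(|&&(key, _)| key == "X-Session") {
        Some(&(_, user)) => Outcome::Success(LoggedInUser(user.to_string())),
        None => Outcome::Failure(401),
    }
}

fn main() {
    match guard_logged_in(&[("X-Session", "alice")]) {
        Outcome::Success(LoggedInUser(name)) => println!("welcome, {}", name),
        Outcome::Failure(status) => println!("rejected with {}", status),
    }
}
```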

pub struct DbConn(pub r2d2::PooledConnection<ConnectionManager<PgConnection>>);

impl<'a, 'r> FromRequest<'a, 'r> for DbConn {
    type Error = ();

    fn from_request(request: &'a Request<'r>) -> request::Outcome<DbConn, ()> {
        let pool = request.guard::<State<Pool>>()?;
        match pool.get() {
            Ok(conn) => Outcome::Success(DbConn(conn)),
            Err(_) => Outcome::Failure((Status::ServiceUnavailable, ())),
        }
    }
}

impl Deref for DbConn {
    type Target = PgConnection;

    #[inline(always)]
    fn deref(&self) -> &Self::Target {
        &self.0
    }
}

First, we’re creating another public struct, DbConn. Next, we implement FromRequest, which matches against our application’s state. If there is a pool, we’ll return a successful connection; if not, we’ll return a failure. Notice the 'a and 'r? Those are Rust lifetimes, which are a whole ‘nother can of worms we won’t open here, but the short of it is that they guarantee references stay valid for as long as we need them.
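For a tiny, Rocket-free taste of what a lifetime annotation does, here’s a toy sketch (the function is made up for illustration):

```rust
// The 'a annotation promises that the returned reference borrows from the
// input string, so the compiler keeps the input alive as long as the
// output is in use — no dangling references possible.
fn first_word<'a>(s: &'a str) -> &'a str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    let greeting = String::from("Hello Rocket");
    println!("{}", first_word(&greeting)); // prints "Hello"
}
```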

Before I forget about it, let’s go ahead and create our .env file too, which only needs our database’s URL

DATABASE_URL=postgres://postgres:postgres@localhost/prrr_demo

and we’ll create the database by running diesel setup. After that, we can go ahead and generate our migrations by running diesel migration generate create_cats. You should notice a new /migrations directory at the root of your project and, within it, another directory with today’s date containing up.sql and down.sql files. We’ll fill in our down migration

DROP TABLE cats;

followed by our up migration

CREATE TABLE cats (
    id SERIAL PRIMARY KEY,
    name VARCHAR NOT NULL,
    bio TEXT NOT NULL,
    kills INTEGER NOT NULL,
    image_url VARCHAR NOT NULL
);

and after that, we can go ahead and run them with diesel migration run. Now, our database should be ready with Diesel having done the work for us.

But how do we connect the database to our app? First, we create our schema in schema.rs. The Diesel docs say this is done for us, but it’s easy enough to do on our own.

table! {
    cats (id) {
        id -> Int4,
        name -> Varchar,
        bio -> Text,
        kills -> Int4,
        image_url -> Varchar,
    }
}

After that, we create a model for our database in models.rs

use diesel;
use diesel::prelude::*;
use diesel::pg::PgConnection;
use schema::cats;
use schema::cats::dsl::cats as all_cats;

#[derive(Serialize, Queryable, Debug, Clone)]
pub struct Cat {
    pub id: i32,
    pub name: String,
    pub bio: String,
    pub kills: i32,
    pub image_url: String,
}

#[derive(Serialize, Deserialize, Insertable)]
#[table_name = "cats"]
pub struct NewCat {
    pub name: String,
    pub bio: String,
    pub kills: i32,
    pub image_url: String,
}

Notice we have two structs here. The Cat struct is for retrieving data from the database, while NewCat (notice it’s lacking an ID) is for inserting a new cat into the database. While we’re here, let’s go ahead and add the CRUD methods we’ll be building into our REST API.

impl Cat {
    pub fn show(id: i32, conn: &PgConnection) -> Vec<Cat> {
        all_cats.find(id)
            .load::<Cat>(conn)
            .expect("Sometimes cats don't come when you call them")
    }

    pub fn all(conn: &PgConnection) -> Vec<Cat> {
        all_cats.order(cats::id.desc())
            .load::<Cat>(conn)
            .expect("Error herding cats")
    }

    pub fn create(cat: NewCat, conn: &PgConnection) -> bool {
        diesel::insert_into(cats::table)
            .values(&cat)
            .execute(conn)
            .is_ok()
    }

    pub fn update_by_id(id: i32, cat: NewCat, conn: &PgConnection) -> bool {
        use schema::cats::dsl::{
            name as n,
            bio as b,
            kills as k,
            image_url as img,
        };
        let NewCat { name, bio, kills, image_url } = cat;
        diesel::update(all_cats.find(id))
            .set((n.eq(name), b.eq(bio), k.eq(kills), img.eq(image_url)))
            .get_result::<Cat>(conn)
            .is_ok()
    }

    pub fn delete_by_id(id: i32, conn: &PgConnection) -> bool {
        if Cat::show(id, conn).is_empty() {
            return false;
        }
        diesel::delete(all_cats.find(id))
            .execute(conn)
            .is_ok()
    }
}

I feel like these are pretty self-explanatory, so I won’t go into too much detail. A few things to note:

  • Each of our functions takes a reference to a database connection drawn from our pool
  • show and all are returning a Vector of cats, while the other methods simply tell us whether they were successful
  • We’re using NewCat not only for adding a new cat to our database, but we’re also using it to destructure the request object in our update_by_id method
  • We make these methods public so that we can access them elsewhere.
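The destructuring mentioned above can be sketched in plain std-only Rust (the describe function is mine, for illustration):

```rust
// A std-only sketch of the struct destructuring update_by_id relies on:
// one pattern binds every field to a local variable at once.
struct NewCat {
    name: String,
    bio: String,
    kills: i32,
    image_url: String,
}

fn describe(cat: NewCat) -> String {
    // All four fields become locals in a single pattern.
    let NewCat { name, bio, kills, image_url } = cat;
    format!("{} ({} kills): {} [{}]", name, kills, bio, image_url)
}

fn main() {
    let cat = NewCat {
        name: "Dingle Poo".to_string(),
        bio: "found in a trash can".to_string(),
        kills: 0,
        image_url: "http://example.com/cat.jpg".to_string(),
    };
    println!("{}", describe(cat));
}
```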

We’re almost there. The next step is to set up an endpoint for each of the methods we just created in routes.rs

use rocket_contrib::Json;
use serde_json::Value;
use db::DbConn;
use models::{Cat, NewCat};

We’ll be working with JSON, so we’ll import Json from the rocket_contrib library; we’re also interested in extracting the value out of it, so we’ll use serde_json to help us with that. Next, we have our routes themselves:

#[get("/cats", format = "application/json")]
fn all_cats(conn: DbConn) -> Json<Value> {
    let cats = Cat::all(&conn);
    Json(json!({
        "status": 200,
        "result": cats,
    }))
}

#[post("/cats", format = "application/json", data = "<new_cat>")]
fn new_cat(new_cat: Json<NewCat>, conn: DbConn) -> Json<Value> {
    Json(json!({
        "status": Cat::create(new_cat.into_inner(), &conn),
        "result": Cat::all(&conn),
    }))
}

#[put("/cats/<id>", format = "application/json", data = "<new_cat>")]
fn update_cat(id: i32, new_cat: Json<NewCat>, conn: DbConn) -> Json<Value> {
    let status = if Cat::update_by_id(id, new_cat.into_inner(), &conn) { 200 } else { 404 };
    Json(json!({
        "status": status,
        "result": Cat::all(&conn),
    }))
}

#[delete("/cats/<id>")]
fn delete_cat(id: i32, conn: DbConn) -> Json<Value> {
    let status = if Cat::delete_by_id(id, &conn) { 200 } else { 404 };
    Json(json!({
        "status": status,
        "result": null,
    }))
}

Rocket’s macros make routes clear and easy to declare. We name the method and the URL; format and data are optional. We’re also able to parse our ID from the URL by using <id>. Since DbConn implements FromRequest, each route verifies that our database connection is still open. Rust’s type system also validates our incoming JSON requests against our Cat and NewCat structs. Only when the request is validated does Rocket execute the corresponding methods we’ve attached to our models.


Now that the routes are working, all we have to do is connect our routes and database to our Rocket application in our main.rs.

#![feature(plugin, custom_derive, const_fn, decl_macro, extern_prelude)]
#![plugin(rocket_codegen)]

#[macro_use] extern crate diesel;
extern crate dotenv;
extern crate r2d2;
extern crate r2d2_diesel;
extern crate rocket;
extern crate rocket_contrib;
extern crate rocket_cors;
#[macro_use] extern crate serde_derive;
#[macro_use] extern crate serde_json;

use rocket::http::Method;
use rocket_cors::{AllowedOrigins, AllowedHeaders};
use routes::*;

mod db;
mod models;
mod routes;
mod schema;

fn rocket() -> rocket::Rocket {
    let pool = db::create_db_pool();
    let (allowed_origins, failed_origins) = AllowedOrigins::some(&["http://localhost:3000"]);
    let options = rocket_cors::Cors {
        allowed_origins: allowed_origins,
        allowed_methods: vec![Method::Get, Method::Put, Method::Post, Method::Delete]
            .into_iter()
            .map(From::from)
            .collect(),
        allowed_headers: AllowedHeaders::all(),
        allow_credentials: true,
        ..Default::default()
    };
    rocket::ignite()
        .manage(pool)
        .mount("/api", routes![all_cats, new_cat, update_cat, delete_cat])
        .mount("/", routes![index])
        .attach(options)
}

#[get("/")]
fn index<'a>() -> &'a str {
    "Hello!"
}

fn main() {
    rocket().launch();
}

I’ve gone ahead and added the whole thing here, so not all of it is new. Let’s look one piece at a time at what is. First, we’ve added all of the dependencies we’ve been using throughout the project so it will (hopefully) compile again. Next, we’ve told it to use all of our routes, as well as brought our modules into scope. We’re also pulling in some things from rocket_cors that aren’t really necessary now, but will prevent us from needing to revisit this in part 2.

Our rocket() function has grown quite a bit. We’re now declaring our database pool at the top of it and connecting our application to it with .manage(pool). We’ve also allowed for Cross-Origin Resource Sharing, configured it for our soon-to-be frontend, and connected it to our application with .attach(options). Rocket, as it is now, is better set up for server-side rendering, but it is still pretty straightforward for building an API.

Finally, we’ve mounted our routes and prefixed them with /api to help us keep them separate from our static pages. Go ahead and try cargo run and everything should be working. If not, check your project against the repo.


Our REST API appears to be working (it compiles, at least), but let’s break Postman out for a few tests. Try posting a new cat to localhost:8000/api/cats:

{
    "bio": "i was found in a trash can",
    "image_url": "http://zeelifestylecebu.com/wp-content/uploads/2015/03/cat3.jpg",
    "kills": 0,
    "name": "Dingle Poo"
}

Looks good so far, but lil’ Dingle Poo got out and had a blast at the bird feeder so we ought to update that kill count with a PUT request:

{
    "id": 1,
    "bio": "i was found in a trash can",
    "image_url": "http://zeelifestylecebu.com/wp-content/uploads/2015/03/cat3.jpg",
    "kills": 13,
    "name": "Dingle Poo"
}

Neat. Play around, add more cats, and try out the methods. Be sure to check out Part 2 next week where we build our React frontend.