Published in Coding Artist
Full-Stack React With Phoenix (Chapter 5 | Working With PostgreSQL)

Table of Contents

Chapter 1 | Why Bother?
Chapter 2 | Learning the Basics of Elixir
Chapter 3 | Introduction to Phoenix
Chapter 4 | Implementing React

Scope of This Chapter

To really feel confident with a Phoenix and React stack, we want to create a project that is close to a real-world scenario. In a real-world project, we would programmatically create an API service that interacts with a database, allowing our React frontend to work with dynamic data. We will be doing exactly that in the following chapters using a database called PostgreSQL.

In this chapter, I want to lay the foundation for understanding what PostgreSQL is and do a bootstrapped example of creating data in our very own PostgreSQL database.

SQL vs. NoSQL

As you were learning React, you probably heard of two technologies for creating a database for your application: Firebase and MongoDB.

Both Firebase and MongoDB are known as NoSQL databases. Both of these databases store data as JSON objects internally. The data can also be shipped out via a request as a JSON object.

Here’s an internal look at a MongoDB database:

Because data is stored and shipped out as a JSON object, Firebase and MongoDB provide a lower learning curve for JavaScript developers. This also means that there is a great community and wealth of information behind each technology. For a lot of projects, these databases are great options.
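For instance, a single record in a MongoDB collection might be stored as a document like this (the field values here are made up purely for illustration):

```json
{
  "_id": "507f1f77bcf86cd799439011",
  "title": "Full-Stack React With Phoenix",
  "author": "Mike Mangialardi",
  "tags": ["react", "phoenix"]
}
```

Since this is the same shape as the objects JavaScript developers work with every day, very little translation is needed between the database and the frontend.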

However, NoSQL databases (arguably) don't provide the best way to store and query data, because they are not relational.

SQL (Structured Query Language) databases are relational. Instead of storing data as JSON objects, they store it in tables. A table consists of columns and rows: a row contains a single record, and a column contains the values for a certain attribute.

Let’s look at this example:

Again, we can see a record of information in each row and the values of a certain attribute in each column.
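In SQL, a table like this could be defined and filled as follows (the table and column names are hypothetical, just to illustrate the row/column idea):

```sql
-- A table of students: each row is one record,
-- each column holds the values of one attribute.
CREATE TABLE students (
  student_id SERIAL PRIMARY KEY,  -- unique identifier for each record
  first_name VARCHAR(50),
  last_name  VARCHAR(50)
);

-- Each INSERT adds one row (one record) to the table.
INSERT INTO students (first_name, last_name)
VALUES ('Ada', 'Lovelace'), ('Alan', 'Turing');
```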

So, how are SQL databases relational? Let’s look at another example:

In the image above, we have a schema which shows the different tables in a database and the fields within them. There are also connected lines which show relations.

For example, in the Students table, there is a primary key called StudentID. A primary key is a field which uniquely identifies a record in a table. In other words, StudentID is the most appropriate (primary) unique identifier of the Students table. Now, StudentID might be needed to organize data in another table such as StudentTransactions. However, StudentID is not the best unique identifier for student transactions, even if it is needed in that table as a field. The use of a primary key in another table, where it is needed but not primary, makes it a foreign key. Looking at this example, we could say there is a relation between the Students table and the StudentTransactions table via StudentID. Having these relations makes it easier to organize and query our tables.
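Sketched in SQL, the relation between the two tables might look like this (the names are illustrative, following the example above):

```sql
-- StudentID is the primary key of Students and appears in
-- StudentTransactions as a foreign key.
CREATE TABLE student_transactions (
  transaction_id SERIAL PRIMARY KEY,                        -- this table's own primary key
  student_id     INTEGER REFERENCES students (student_id),  -- foreign key
  amount         NUMERIC(10, 2)
);

-- The shared StudentID lets us query across both tables at once:
SELECT s.first_name, t.amount
FROM students AS s
JOIN student_transactions AS t ON t.student_id = s.student_id;
```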

I find SQL queries to be very readable and easy to follow. SQL has also been around for a long time and has proven to be successful. For that reason, I will simply recommend that you look over this SQL cheat sheet to get a sense of how it works. I will still take the time to explain all the querying that we do in this book. For now, I just want you to understand the differences between SQL and NoSQL and the possible benefits of each.

Introduction to PostgreSQL

Why PostgreSQL?

In this book, we are going to be working with the SQL database technology called PostgreSQL.

PostgreSQL is configured as the default in Phoenix. It's open-source and totally free, which is a strong reason why it has gathered an incredibly strong community and implemented some really powerful features. There are a ton of handy third-party, open-source tools available for it. PostgreSQL is very extensible, scales well, and is a beast for complex design and data.

While NoSQL databases have their place for certain projects, I’m really excited to show you how to implement PostgreSQL in Phoenix.

Getting Started

When working with Node.js and MongoDB (as I have written about here), you can use a tool like Mongoose to interact with a MongoDB database. Phoenix comes with a tool within the Elixir ecosystem called Ecto which allows for interaction with PostgreSQL, MySQL, SQLite3, and MongoDB. In other words, it will allow us to talk to different kinds of databases so that we can do create, read, update, and delete (CRUD) operations. Phoenix comes configured with PostgreSQL by default.

The official Phoenix documentation also provides some helpful definitions of Ecto terminology which we will be unpacking in more detail later on:

Repo — A repository represents a connection to an individual database. Every database operation is done via the repository.

Model — Models are our data definitions. They define table names and fields as well as each field’s type. Models also define associations — the relationships between models.

Query — Queries tie both models and repositories together, allowing us to elegantly retrieve data from the repository and cast it into the models themselves.

Changeset — Changesets declare transformations we need to perform on our model data before our application can use it. These include type casting, validations, and more.
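To make these terms concrete, here is a rough sketch of how a model, changeset, query, and repo fit together. The module and field names are hypothetical, and the exact functions vary by Ecto version:

```elixir
defmodule MyApp.Post do
  use Ecto.Schema
  import Ecto.Changeset

  # Model: the data definition for a "posts" table
  schema "posts" do
    field :title, :string
  end

  # Changeset: transformations and validations applied to
  # incoming data before the application uses it
  def changeset(post, params \\ %{}) do
    post
    |> cast(params, [:title])
    |> validate_required([:title])
  end
end

# Query + Repo: retrieve data from the repository and
# cast it into the model
import Ecto.Query
MyApp.Repo.all(from p in MyApp.Post, where: like(p.title, "%Phoenix%"))
```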

Before we begin a new Phoenix project, you will need to install PostgreSQL. If you are using a Mac, I highly recommend using this installer. It will allow you to install, configure, and fire up a PostgreSQL server with ease through a GUI.

For managing PostgreSQL databases through a GUI, I recommend pgAdmin or Postico. In this book, we will use pgAdmin as it is cross-platform. Go ahead and download it.

However, for this chapter, we won’t need pgAdmin. Phoenix actually has a nice generator that will define a new table in our database and populate templates (as well as a view and controller as the means to achieve this) that will show us the contents of the table and allow us to do CRUD operations to it. We will switch up our approach in the next chapter and use pgAdmin.

Let’s go ahead and create a new project called phoenix_curated_list:

mix phoenix.new phoenix_curated_list
cd phoenix_curated_list

This project will just be used to render a curated list of blog posts using data from a PostgreSQL database.

Open the project in your code editor of choice.

If we go to config/dev.exs, we can see configurations for PostgreSQL at the bottom:

config :phoenix_curated_list, PhoenixCuratedList.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: "postgres",
  password: "postgres",
  database: "phoenix_curated_list_dev",
  hostname: "localhost",
  pool_size: 10

Note: dev.exs is configuration for a dev environment and prod.exs is configuration for production. Overall configuration happens in config.exs.

By default, PostgreSQL sets up a superuser with username and password of ‘postgres’. If you don’t have it configured this way, you can do so by running:

psql postgres
CREATE USER postgres;
ALTER USER postgres PASSWORD 'postgres';
ALTER USER postgres WITH SUPERUSER;

By default, PostgreSQL listens on port 5432, which is why it is not listed in the configuration. Personally, I'm using port 5431, so my configuration looks like this:

# Configure your database
config :phoenix_curated_list, PhoenixCuratedList.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: "postgres",
  password: "postgres",
  database: "phoenix_curated_list_dev",
  hostname: "localhost",
  pool_size: 10,
  port: 5431

In the code above, you will see PhoenixCuratedList.Repo. It was mentioned earlier that a repo is a connection to a database through which we can do operations on that database. Phoenix has already configured a repo by simply using an existing module called Ecto.Repo, as can be seen in the following code (lib/phoenix_curated_list/repo.ex):

defmodule PhoenixCuratedList.Repo do
  use Ecto.Repo, otp_app: :phoenix_curated_list
end

This repo is also included in the configuration found in config/config.exs and config/dev.exs.

It’s important to interject here and explain that I’m just showing you this stuff about our repo so you know what it is and where it came from. Again, it is simply a connection to our database through which we can do database operations. If you are interested in the nerdy details, you can check the Ecto.Repo documentation.
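For a taste of what "operations via the repository" means, once we have a model defined we could write calls like the following. The SomeModel name is a placeholder, but these functions are part of the Ecto.Repo API:

```elixir
alias PhoenixCuratedList.Repo

Repo.all(SomeModel)        # read every row of the model's table
Repo.get(SomeModel, 1)     # read the row with primary key 1
Repo.insert(%SomeModel{})  # create a new row
```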

We have our database and repo configuration settled by default. Now let’s go ahead and create the database by running:

mix ecto.create

This will create a database called phoenix_curated_list_dev as we have specified in our configuration in config/dev.exs:

# Configure your database
config :phoenix_curated_list, PhoenixCuratedList.Repo,
  adapter: Ecto.Adapters.Postgres,
  username: "postgres",
  password: "postgres",
  database: "phoenix_curated_list_dev",
  hostname: "localhost",
  pool_size: 10,
  port: 5431

If we fire up our Phoenix server using mix phoenix.server, we will see that the annoying error messages in the command line from earlier have ceased.

Ok. We have a database configured for our Phoenix project. Now what?

We need to create a model, which can be defined as a data definition of a table, its fields (columns), and field types. This is equivalent to defining a collection, its fields, and field types in MongoDB.

Phoenix has a handy model generator that does the heavy lifting of creating a model for us. With only a few additional actions, we can use the data definitions in our model to create a table in our database. The generator will also provide templates through which we can do create, read, update, and delete operations on the table.

First, let’s consider what table we ultimately want.

We want a table called Blogs with the following fields:

Blogs Table
____________

title | subtitle | image | link | author
123...

We also need to know the data types which can be expressed using atoms (note that the id will be generated automatically so it is omitted):

title :string 
subtitle :string
image :string
link :string
author :string

Let’s use the phoenix.gen.html generator and specify that we want a Blogs model backed by a blogs table, accessible on the /blogs path, containing the fields shown above:

mix phoenix.gen.html Blogs blogs title:string subtitle:string image:string link:string author:string

As prompted in the command line, we need to place the following code in our router.ex file:

scope "/", PhoenixCuratedList do
  pipe_through :browser # Use the default browser stack

  get "/", PageController, :index
  resources "/blogs", BlogsController
end

This will allow us to see our table and do CRUD operations via the generated templates on the /blogs path.

If we open the generated model code in web/models/blogs.ex, we can see the following schema was generated, defining the table, fields, and field types:

schema "blogs" do
  field :title, :string
  field :subtitle, :string
  field :image, :string
  field :link, :string
  field :author, :string

  timestamps()
end
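The same file also contains a changeset function for validating incoming data before it touches the database. With this generator it looks roughly like the following, though the exact form may differ in your generated file depending on your Phoenix and Ecto versions:

```elixir
def changeset(struct, params \\ %{}) do
  struct
  |> cast(params, [:title, :subtitle, :image, :link, :author])
  |> validate_required([:title, :subtitle, :image, :link, :author])
end
```

The generated templates call this changeset when you create or edit a row, which is how invalid form submissions get rejected with error messages.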

For instructional purposes, here’s a snapshot via Postico of our current database:

Normally, tables in our database would appear here. However, there’s nothing.

This means we have to do an additional step to create our table. We need to create a table following the schema defined in our model.

In Phoenix, we do this via a migration. You can think of the word migration as expressing the action of moving a table into our database using the field definitions outlined in the schema within our model.

So, how do we do a migration?

As you may have guessed given the trend, Phoenix automatically generates a migration file which, upon execution, will create our table following our schema. The code that creates the table is found in the file located at priv/repo/migrations/*timestamp*_create_blogs.exs:

def change do
  create table(:blogs) do
    add :title, :string
    add :subtitle, :string
    add :image, :string
    add :link, :string
    add :author, :string

    timestamps()
  end
end

Recall that we mentioned interactions with a database happen through a repo. This is why the migrations live underneath the repo folder and the module definition containing the function shown above looks like this:

defmodule PhoenixCuratedList.Repo.Migrations.CreateBlogs

To run this, we can use the following command:

mix ecto.migrate

Via Postico, I’ll show you what our database looks like now:

Two tables have been created. As expected, we have the blogs table and another table called schema_migrations which just logs metadata about our migrations:

The blogs table looks just like we had specified:

Now, if you go to http://localhost:4000/blogs, you can see the template which shows us the blogs table:

We can click the “New blogs” link which brings us to another template with a form for adding a row:

Insert a blog yourself and hit submit.

We are brought back to a complete view of our table with the new row added:

We can click the buttons on the right to show, edit, or delete this row.

For educational purposes, I’ll show this snapshot via Postico to confirm that a row truly was added in our table:

Woot woot! We have created our first PostgreSQL database, created a table, and inserted a row using Phoenix.

Final Code

Available on GitHub.

Concluding Thoughts

Using the Phoenix generator was a bootstrapped way to create a model, execute a migration, and perform CRUD operations on the new table within our database. It’s not what we are going to use going forward, which is why I didn’t explain a lot of the files that were generated.

In a real-world scenario, we want to have an API service so we can do CRUD operations through HTTP requests from our React application. One of the generated files that I didn’t unpack actually did this; however, we are going to do it programmatically. This will allow us not only to be introduced to these new topics about PostgreSQL interactions with Phoenix (as we were in this chapter) but to retain them. While this means dropping the generator, and consequently the templates for viewing tables and doing CRUD operations, we can do those things via pgAdmin as I alluded to earlier.

Get ready. This next chapter will be the meat and potatoes of full-stack React development with Phoenix.

Chapter 6

Chapter 6 is now available.

Sign Up for Notifications

Get notified when each chapter is released.

Cheers,
Mike Mangialardi
Founder of Coding Artist
