Build Up a Simple Backend with Express.js, Scrapy, PostgreSQL, and Heroku — Heroku

There are many kinds of cloud services, such as Google Cloud, Microsoft Azure, Amazon Web Services, IBM Cloud, etc. But most of them start charging once the free trial ends. So this part is going to introduce Heroku and deploy the projects I created in the previous sections.
Long story short, Heroku is a free service with complete documentation. Moreover, it supports several languages and frameworks. Their dev center is very clear and friendly. Just follow their guides and you will be able to deploy something on the Heroku cloud.
Note: Here is the pricing page of Heroku. And there is more detail, history, and story on Wikipedia.
Deploy Node.js Project
Heroku's documentation is pretty straightforward. Pick the language you are going to code in. In this part, Node.js is the one you want to pick. You can either follow and read their documents carefully or read this note.
1. Install the Heroku CLI (Command-Line Interface)
$ brew install heroku/brew/heroku
2. Log in to Heroku via the CLI
Use the command heroku login to log in to the Heroku CLI. The browser may open after pressing any key.
$ heroku login
3. Clone the Project
Clone the project that you intend to deploy from GitHub.
$ git clone https://github.com/heroku/{your-project-name}.git
$ cd {your-project-name}
4. Deploy
$ heroku create
$ git push heroku master
5. Check
The application is now deployed. Use the commands below to ensure that at least one instance of the app is running, then open it in the browser. The web=1 scales the web process declared in the Procfile to a single dyno.
$ heroku ps:scale web=1
$ heroku open
Create PostgreSQL for Projects
1. Enable Heroku Postgres
Go to the project on the Heroku website and look for the “Data” section at the top right corner as below.

It will direct you to this site; choose Heroku Postgres and install it to a certain application. Choose the Hobby Dev — Free plan to access the free service, or you could choose other plans to get more capacity and features, such as clustering, etc.
Note: These actions can also be performed via the command line. They are all written down in the document in the dev center.
2. Connect to Heroku Postgres
Just like before, check the dev center first, and you will find a document about Heroku Postgres. Scroll directly to “Using the CLI” and find the command below to connect to the database.
$ heroku pg:psql
Now you can access Heroku Postgres and run CRUD commands.
3. Modify the Node.js Code to access Heroku Postgres
Again, the dev center. Scroll directly down to the “Connecting in Node.js” section. Just follow the steps and make the modifications as below, and you will be able to connect your project to Heroku Postgres.
Deploy Scrapy Project
So far, you are already able to manipulate the database and have the Node.js project communicate with it directly. If your project is intended to let users input the data, then you could stop here and start studying how to make POST requests to update the database. Otherwise, the next step is to deploy a web crawler and fetch the data into the database you just created.
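For the POST-request route, one possible shape is sketched below: a small helper that builds a parameterized INSERT from a request body, so user input is never concatenated into SQL. The table and route names are hypothetical, and the Express wiring is shown only in comments.

```javascript
// Hypothetical sketch: build a parameterized INSERT from a POST body.
// `table` must be a trusted server-side constant, because placeholders
// cannot bind identifiers, only values.
function buildInsert(table, record) {
  const cols = Object.keys(record);
  if (cols.length === 0) throw new Error('empty record');
  const placeholders = cols.map((_, i) => `$${i + 1}`);
  return {
    text: `INSERT INTO ${table} (${cols.join(', ')}) VALUES (${placeholders.join(', ')}) RETURNING *`,
    values: cols.map((c) => record[c])
  };
}

// In an Express route it might be used like this (illustrative only):
//   app.use(express.json());
//   app.post('/records', async (req, res) => {
//     const q = buildInsert('records', req.body);
//     const { rows } = await pool.query(q.text, q.values);
//     res.status(201).json(rows[0]);
//   });
```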
1. Read the document in the dev center
Same as deploying the Node.js project: read the document and follow the instructions step by step, so I will skip those commands in this section.
Note: There is an important thing that is easy to forget at the third step, “Prepare the app”. Always remember to add runtime.txt and requirements.txt as below.
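For reference, the two files could look something like this. The exact versions are illustrative assumptions; pin whatever your project actually uses.

```
# runtime.txt — pins the Python version Heroku builds with
python-3.8.10

# requirements.txt — everything pip must install on the dyno
Scrapy==2.5.0
psycopg2-binary==2.8.6
```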
2. Install add-ons for the new Python project
The project you just created contains nothing besides the code. Remember to connect it to the Heroku Postgres you created before. It will look like below if you succeed in setting up the add-ons.

3. Update the pipeline connection to Heroku
Switch to the Heroku Postgres part, and you will see the document also has an introduction about how to connect from a Python project. So run the command and modify the code in pipelines.py as below.
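A hedged sketch of what such a pipelines.py might look like, using psycopg2 (installed via pip install psycopg2-binary) and the DATABASE_URL that Heroku injects. The table and field names here ("quotes", "text", "author") are assumptions for illustration, not from the original project.

```python
# Hypothetical Scrapy pipeline that writes items into Heroku Postgres.
import os

class HerokuPostgresPipeline:
    insert_sql = "INSERT INTO quotes (text, author) VALUES (%s, %s)"

    def open_spider(self, spider):
        import psycopg2  # installed via `pip install psycopg2-binary`
        # Heroku injects DATABASE_URL; SSL is required on Heroku Postgres.
        self.conn = psycopg2.connect(os.environ["DATABASE_URL"], sslmode="require")
        self.cur = self.conn.cursor()

    def process_item(self, item, spider):
        # Parameterized query: item values are never interpolated into SQL.
        self.cur.execute(self.insert_sql, (item["text"], item["author"]))
        self.conn.commit()
        return item

    def close_spider(self, spider):
        self.cur.close()
        self.conn.close()
```

To activate it, register the class in settings.py under the standard ITEM_PIPELINES setting.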
$ pip install psycopg2-binary
4. Run the web crawler on Heroku
Everything is set up. Just run the command below and all the data will be imported into your database on Heroku.
$ heroku run scrapy crawl <crawler_name>
5. Test the API
Once the web crawler stops working, all the data has been fetched and is ready to be served by the RESTful API. You will get a page in JSON format by clicking “open app” and adding the corresponding path to the URL.

Conclusion
This whole series is about how to build up a “simple” backend, which means it does not consider much about architecture and scalability. I believe that everyone who knows how to read documentation can do this stuff. This kind of web service is enough to provide a small service used by a small number of users.
If you mean to create a huge service with more than a hundred thousand users, you may want to learn how microservices work or how to do load balancing to make the system more stable. In addition, it helps to know how clustered databases work, etc.
There is still a long way to go. Learning is always a good thing.
- Build Up a Simple Backend with Express.js, Scrapy, PostgreSQL, and Heroku — Introduction
- Build Up a Simple Backend with Express.js, Scrapy, PostgreSQL, and Heroku — PostgreSQL
- Build Up a Simple Backend with Express.js, Scrapy, PostgreSQL, and Heroku — Scrapy
- Build Up a Simple Backend with Express.js, Scrapy, PostgreSQL, and Heroku — Express.js
- Build Up a Simple Backend with Express.js, Scrapy, PostgreSQL, and Heroku — Heroku (you’re here)
