Connecting APIs: A Web App Project Using Node.js and React

The SaverLife Project

Michael Bidnyk
7 min read · Aug 28, 2020

I’m currently working with a nonprofit called SaverLife. SaverLife’s mission is to inspire, inform, and reward the millions of Americans who need help saving money. They give working people the methods and motivation to take control of their financial future through engaging technologies and strategic partnerships.

SaverLife wants to help its users better predict their budgets and upcoming expenses. My team and I are tasked with creating a web application that displays past financial data in the form of visualizations, mostly graphs. The graphs clearly show both past spending and transaction history. There is also a feature that helps users budget their money using predictive models.

The web application is built entirely in JavaScript, using the React.js library on the front-end and the Node.js platform on the back-end. There are two teams working on this project: a team of web developers and a team of data scientists. I am working as a back-end web developer on the web team.

My main concern going into this project was how well the two teams would work together cross-functionally since the web application is reliant on receiving the user’s transactional data from the data science API.

The specific tasks for the web application were first divided between the front-end and back-end. I focused on particular user stories and how the back-end would fulfill them.

Trello cards of user stories for the SaverLife web app.

For example, one user story is “I want to be able to see my past spending habits.” For the back-end, that means creating a “/spending” endpoint that sends a POST request to the data science API’s “/spending” endpoint. The data science API responds with the spending data, which the back-end then returns to the front-end that made the original request.

Setting Up the Back-End Endpoint

Rendering a visualization on the front-end was one of the web team’s minimum requirements, so as a member of the back-end team I prioritized setting up a back-end endpoint that the front-end could call to request visualization data. Specifically, this visualization data would come in as a JSON string that would be parsed on the front-end and plugged into a Plotly.js component.

Example React.js visualization component that renders graph data using react-plotly.js.
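The original post showed this component as a screenshot. As a rough sketch, such a component might look like the following; the “/spending” endpoint path, prop names, and response shape are illustrative assumptions rather than the actual SaverLife code.

// Sketch of a visualization component using react-plotly.js.
// The "/spending" path and response shape are assumptions for illustration.
import React, { useEffect, useState } from 'react';
import Plot from 'react-plotly.js';
import axios from 'axios';

function SpendingGraph({ userId }) {
  const [graph, setGraph] = useState(null);

  useEffect(() => {
    // Request the Plotly JSON string from the back-end, then parse it.
    axios.post('/spending', { user_id: userId }).then((res) => {
      setGraph(JSON.parse(res.data));
    });
  }, [userId]);

  if (!graph) return <p>Loading...</p>;

  // Plotly graph JSON contains "data" (traces) and "layout" keys.
  return <Plot data={graph.data} layout={graph.layout} />;
}

export default SpendingGraph;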

The back-end endpoint requests the Plotly.js graph data from the data science API. There are two main reasons for having the back-end, rather than the front-end, request the data from the data science API. The first is that authentication and authorization are already handled through the back-end’s routes. The second is that the back-end can cache the graph data, giving the front-end faster access and reducing the load on the data science API’s server.

The main issue I ran into early in the project was not having a data science API endpoint to work with. The data science team needed time to gain access to the user data required to develop their API endpoints. My solution was to create a dummy data science endpoint to test my own back-end endpoints. The dummy endpoint simply responded with mock graph data in the form of a JSON string when called.
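As a sketch of that idea, a dummy endpoint can be as little as one Express route returning canned Plotly-style JSON; the route path, port, and mock values below are placeholders, not the real data science contract.

// Sketch of a dummy data science endpoint used for local testing.
// The path, port, and mock graph data are placeholders.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/spending', (req, res) => {
  // Respond with mock Plotly graph data as a JSON string,
  // mimicking what the real data science API would return.
  const mockGraph = {
    data: [{ type: 'bar', x: ['Food', 'Rent', 'Travel'], y: [250, 1200, 90] }],
    layout: { title: 'Mock Spending by Category' },
  };
  res.send(JSON.stringify(mockGraph));
});

app.listen(8000, () => console.log('Dummy DS API listening on port 8000'));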

Most of the back-end endpoints make a request to the data science API and then send the response from that request to the front-end. Each endpoint does this through a function that makes an HTTP request to the data science API. Once the function returns with a response, that response is sent back to the front-end that made the initial request to the back-end endpoint.

Two POST endpoints that call functions to request DS data and then respond with that data.
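The screenshot isn’t reproduced here, but a simplified sketch of that pattern using Express and axios might look like this; the DS_API_URL value, route path, and request body are assumptions for illustration.

// Sketch of a back-end POST endpoint that forwards a request to the
// data science API and relays the response to the front-end.
const express = require('express');
const axios = require('axios');

const router = express.Router();
const DS_API_URL = process.env.DS_API_URL || 'http://localhost:8000';

// Helper that makes the HTTP request to the data science API.
async function getSpendingData(userId) {
  const response = await axios.post(`${DS_API_URL}/spending`, { user_id: userId });
  return response.data;
}

router.post('/spending', async (req, res) => {
  try {
    const graphData = await getSpendingData(req.body.user_id);
    // Relay the data science response to the front-end caller.
    res.status(200).json(graphData);
  } catch (err) {
    res.status(500).json({ message: 'Failed to fetch data from the data science API' });
  }
});

module.exports = router;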

The Next Step: Back-End Data Persistence

For this application I used PostgreSQL as my back-end database, coupled with pgAdmin 4, a PostgreSQL database management tool. I used Knex.js as my query builder. I chose PostgreSQL and the aforementioned tools because of my familiarity with them and how well they work with the Node.js back-end runtime environment.

Using Knex.js, I created a profiles table with columns for ‘id’, ‘email’, ‘name’, ‘bank_account_id’, ‘monthly_savings_goal’, ‘categories’, and a timestamp for when the profile was created.

Profiles table with 7 columns created using Knex.js
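A Knex.js migration along those lines might look roughly like the following; only the column names come from the write-up, while the column types and the created_at name are my own guesses.

// Sketch of a Knex.js migration for the profiles table described above.
// Column types are reasonable guesses; only the column names come from the post.
exports.up = function (knex) {
  return knex.schema.createTable('profiles', (table) => {
    table.string('id').primary();
    table.string('email').unique();
    table.string('name');
    table.string('bank_account_id');
    table.integer('monthly_savings_goal');
    table.json('categories');
    table.timestamp('created_at').defaultTo(knex.fn.now());
  });
};

exports.down = function (knex) {
  return knex.schema.dropTableIfExists('profiles');
};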

I also created seed data for three test profiles. The ‘id’ values were used by the back-end to identify which profile was logged in on the front-end. The ‘bank_account_id’ was mainly used by the data science API for similar identification purposes.

Seed data for the profiles table, including id, email, name, and bank_account_id values for three test profiles.
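A seed file for that table could look roughly like this; the specific ids, names, emails, and bank account ids are invented placeholders, not the actual test data.

// Sketch of a Knex.js seed file for the profiles table.
// All values here are invented placeholders.
exports.seed = async function (knex) {
  // Clear existing rows, then insert three test profiles.
  await knex('profiles').del();
  await knex('profiles').insert([
    { id: '1', email: 'test1@example.com', name: 'Test User One', bank_account_id: 'acct-001' },
    { id: '2', email: 'test2@example.com', name: 'Test User Two', bank_account_id: 'acct-002' },
    { id: '3', email: 'test3@example.com', name: 'Test User Three', bank_account_id: 'acct-003' },
  ]);
};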

Redis In-Memory Cache

Another way I stored important data was through an in-memory cache called Redis. Unlike PostgreSQL, a relational database that usually stores data on a remote server, Redis relies on the main memory of the local machine to store and cache data. This means it can pull cached data from memory very quickly instead of making a slower request to a remote database or API. The drawback of an in-memory cache like Redis is that it uses the local machine’s RAM, which is far more limited than the storage available to a PostgreSQL database server.

To work within this limitation, I only cached important and frequently used data received from the data science API. When the back-end server received a request from the front-end app, it would call the data science API once and cache the response. If the front-end made the same call to the back-end endpoint again, the response would be served from the cached data in Redis instead of from the data science API. Cached entries expired after four hours, after which the back-end would call the data science API again to refresh the data.

Using Redis benefited not only the front-end but the data science team as well. The data science API received fewer requests from the back-end, which would otherwise strain the server and degrade its performance.

Function that saves data to the Redis cache with a four-hour expiration on the cached data.
Middleware that checks the Redis cache for the specific request in the back-end endpoint.
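Sketched with the node-redis client, the caching function and middleware might look something like this; the key naming scheme and the client version are assumptions, not the exact code from the screenshots.

// Sketch of Redis caching for data science responses, using the node-redis v4 client.
// Key naming and route shape are assumptions for illustration.
const { createClient } = require('redis');

const client = createClient();
client.connect().catch(console.error);

const FOUR_HOURS = 60 * 60 * 4; // expiration in seconds

// Save a data science response under a key, expiring after four hours.
async function cacheData(key, data) {
  await client.setEx(key, FOUR_HOURS, JSON.stringify(data));
}

// Express middleware that checks the cache before hitting the data science API.
async function checkCache(req, res, next) {
  const key = `spending:${req.body.user_id}`;
  const cached = await client.get(key);
  if (cached) {
    // Cache hit: respond immediately without calling the data science API.
    return res.status(200).json(JSON.parse(cached));
  }
  // Cache miss: continue to the endpoint handler, which calls the DS API.
  next();
}

module.exports = { cacheData, checkCache };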

Structuring Data in Back-End Endpoints and Finalizing the App

One of the difficulties I ran into while building the back-end endpoints for the SaverLife app was restructuring the data I received from the data science API. I realized it would benefit the front-end if the back-end restructured the data received from the data science API before sending it on.

Restructuring the data meant taking two different response objects requested from the data science API and merging them into one object that cleanly presented the requested information, in this case a user’s maximum spending and current spending by category. There were a few things to consider, such as what structure the data should take: an array or an object, possibly with nested objects and arrays. Time and space complexity were also considerations, with time being the more important of the two. The structure that fit these criteria was an object keyed by category, where each category maps to an object with ‘maxSpending’ and ‘currSpending’ keys. This allowed the front-end to look up a category and pull out the spending values directly, and because category lookups on an object are constant time, it performed well.

Snippet of a back-end GET endpoint that returns data requested from two data science API endpoints, restructuring the responses in a ‘for’ loop before sending the merged data to the front-end.
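A condensed sketch of that merging step might look like the following; the data science endpoint names and their response shapes are assumptions made for illustration.

// Sketch of a GET endpoint that merges two data science responses into one
// object keyed by spending category. Endpoint names and shapes are assumed.
const express = require('express');
const axios = require('axios');

const router = express.Router();
const DS_API_URL = process.env.DS_API_URL || 'http://localhost:8000';

router.get('/spending/comparison/:userId', async (req, res) => {
  try {
    const { userId } = req.params;
    // Two separate requests to the data science API: recommended max
    // spending per category, and current spending per category.
    const maxRes = await axios.post(`${DS_API_URL}/budget_recommendation`, { user_id: userId });
    const currRes = await axios.post(`${DS_API_URL}/current_month_spending`, { user_id: userId });

    // Merge the two responses into { category: { maxSpending, currSpending } }.
    const merged = {};
    for (const category of Object.keys(maxRes.data)) {
      merged[category] = {
        maxSpending: maxRes.data[category],
        currSpending: currRes.data[category] || 0,
      };
    }

    res.status(200).json(merged);
  } catch (err) {
    res.status(500).json({ message: 'Failed to build spending comparison' });
  }
});

module.exports = router;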

Reflecting on the Project

Looking back at the now-completed SaverLife app, I realize there is a lot I still need to work on. However, I appreciate the technical and interpersonal experience I gained from working on the application. Learning to use tools like Redis and to better structure data within back-end endpoints has helped me build an application that works more efficiently. Working cross-functionally with the data science team was a great way to learn more about their role on a team-oriented project. Communicating with the front-end and data science teams was a valuable experience that I plan to build on in future projects.
