Bookshelf Explorer: Serverless Web API

Bhaarat Krishnan
Google Cloud - Community
10 min read · Aug 16, 2023


This blog and implementation are a derivation of the Serverless Web APIs Codelab by Guillaume Laforge, with some forks in the flow approved by the author.

Overview

The goal of this implementation is to learn about serverless services offered by Google.

  1. Cloud Functions, to deploy small units of business logic in the shape of functions that react to various events.
  2. Cloud Run, to deploy and scale containers that can use any language, runtime, or library.

What are we building?

  1. This workshop guides you in building a bookshelf explorer with Google Cloud services.
  2. A Cloud Function is used for data import, and a Cloud Run container serves a REST API with a web user interface.
Bookshelf explorer web user interface

What is Serverless?

Serverless is a cloud computing paradigm that abstracts away the complexity of server management and infrastructure provisioning.

Setup and Requirements

  1. Sign up for or sign into your Google Cloud account and create a new project. After creating your project, note your project ID for further use.
  2. Start Cloud Shell, which gives you a command-line environment in Google Cloud.

Enabling APIs

Google Cloud APIs are programmatic interfaces that allow users to access Google Cloud Platform services. We can enable Google Cloud APIs required for our project using this command:

gcloud services enable \
  cloudbuild.googleapis.com \
  artifactregistry.googleapis.com \
  cloudfunctions.googleapis.com \
  firestore.googleapis.com \
  run.googleapis.com

Creating Environment Variables

We will use two environment variables: the first is the region, and the second is the project ID.

export REGION=asia-south1
export PROJECT_ID={your_project_id}

Creating Firestore Database

Firestore is a flexible and scalable NoSQL cloud database provided by Google Cloud. It’s designed to store and manage data in a way that’s easy to access and efficient to scale. We will use a Firestore database to store our books data.

gcloud firestore databases create --location=${REGION}

Cloning Repository

git clone https://github.com/glaforge/serverless-web-apis 

The repository contains the source code for the application, written in Node.js.

The directory structure:

  1. data: Holds a JSON sample of 100 books.
  2. functions-import: Contains a function for importing data via an endpoint.
  3. run-crud: Contains the Web API exposing Firestore-stored books.
  4. appengine-frontend: Offers a basic read-only interface for displaying book lists.

Sample Books Data

The data folder contains the books.json file, which is a list of book objects.

[
  {
    "isbn": "9780435272463",
    "author": "Chinua Achebe",
    "language": "English",
    "pages": 209,
    "title": "Things Fall Apart",
    "year": 1958
  },
  {
    "isbn": "9781414251196",
    "author": "Hans Christian Andersen",
    "language": "Danish",
    "pages": 784,
    "title": "Fairy tales",
    "year": 1836
  },
  ...
]
  1. isbn: The 13-digit code identifying the book.
  2. author: The name of the author of the book.
  3. language: The language in which the book is written.
  4. pages: The total number of pages in the book.
  5. title: The title of the book.
  6. year: The year the book was published.
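To make this shape concrete, here is a small validation sketch in plain JavaScript. The isValidBook helper is hypothetical (not part of the repository); it simply checks a record against the fields described above before import.

```javascript
// Hypothetical helper: checks a record against the book shape described above.
function isValidBook(book) {
  return typeof book === 'object' && book !== null
    && /^\d{13}$/.test(book.isbn)            // 13-digit ISBN, stored as a string
    && typeof book.author === 'string' && book.author.length > 0
    && typeof book.language === 'string' && book.language.length > 0
    && Number.isInteger(book.pages) && book.pages > 0
    && typeof book.title === 'string' && book.title.length > 0
    && Number.isInteger(book.year);          // may be negative for BCE works
}

const sample = {
  isbn: "9780435272463",
  author: "Chinua Achebe",
  language: "English",
  pages: 209,
  title: "Things Fall Apart",
  year: 1958
};

console.log(isValidBook(sample));          // true
console.log(isValidBook({ isbn: "123" })); // false
```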

Deploying Cloud Functions

Let us work through a few questions to understand how Google Cloud Functions are used here.

What use case does Google Cloud Functions have here?

We will use a Google Cloud Function to create and insert books into our database.

So why Google Cloud Functions? How are they different from our usual functions?

Great question! Let us look at this simple JS function, which takes a list of books as a parameter and inserts them into the Firestore database.

...
async function insertBooks(books) {
  const writeBatch = firestore.batch();

  for (const book of books) {
    const doc = bookStore.doc(book.isbn);
    writeBatch.set(doc, {
      title: book.title,
      author: book.author,
      language: book.language,
      pages: book.pages,
      year: book.year,
      updated: Firestore.Timestamp.now()
    });
  }

  try {
    await writeBatch.commit();
    console.log("Saved books in Firestore");
    return "OK";
  } catch (e) {
    console.error("Error saving books:", e);
    throw new Error("Error saving books");
  }
}
...
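One practical caveat: a single Firestore batched write is limited to 500 operations, so importing a much larger dataset would require splitting it into chunks and committing one batch per chunk. A plain-JavaScript sketch of the chunking step (the chunkArray helper is hypothetical, not in the repository):

```javascript
// Split an array into chunks of at most `size` elements, so each chunk
// fits within Firestore's 500-operations-per-batch limit.
function chunkArray(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// Example: 1200 books split into batches of 500, 500, and 200.
const fakeBooks = Array.from({ length: 1200 }, (_, i) => ({ isbn: String(i) }));
const batches = chunkArray(fakeBooks, 500);
console.log(batches.map(b => b.length)); // [ 500, 500, 200 ]
```

Each chunk would then be written with its own firestore.batch() and commit() call.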

We have a great function that adds books to our database. How can we make it work seamlessly across different tech setups?

Google Cloud Functions are a good solution. They are event-driven, scalable, and serverless. They also work seamlessly with many Google services, including Firestore.

Let’s convert this function to a Google Cloud Function 🔥:

Search for Cloud Functions in the Google Cloud Console.

Click on the create function button to open the configuration screen.

Feel free to give your function a name, and make sure to enable the allow unauthenticated invocations option under HTTP trigger. Click the next button to land on the inline editor and start creating the cloud function.

Change the Entry point to “parseBooks”.

Replace the package.json contents:

{
  "dependencies": {
    "@google-cloud/firestore": "^4.9.9",
    "@google-cloud/functions-framework": "^3.1.0"
  }
}

Replace the index.js contents with our function wrapped in the Google Cloud Functions Framework:

const Firestore = require('@google-cloud/firestore');
const functions = require('@google-cloud/functions-framework');
const firestore = new Firestore();
const bookStore = firestore.collection('books');

functions.http('parseBooks', async (req, resp) => {
  if (req.method !== "POST") {
    resp.status(405).send({error: "Only method POST allowed"});
    return;
  }
  if (req.headers['content-type'] !== "application/json") {
    resp.status(406).send({error: "Only application/json accepted"});
    return;
  }

  const books = req.body;
  // console.debug(books);

  const writeBatch = firestore.batch();

  for (const book of books) {
    const doc = bookStore.doc(book.isbn);
    writeBatch.set(doc, {
      title: book.title,
      author: book.author,
      language: book.language,
      pages: book.pages,
      year: book.year,
      updated: Firestore.Timestamp.now()
    });
  }

  try {
    await writeBatch.commit();
    console.log("Saved books in Firestore");
  } catch (e) {
    console.error("Error saving books:", e);
    resp.status(400).send({error: "Error saving books"});
    return;
  }

  resp.status(202).send({status: "OK"});
});

Now click on deploy to start the function deployment, and that’s it. We have our amazing Google Cloud Function up and running.

We can test it by sending a POST request to the function URL.

Open Cloud Shell, go to the serverless-web-apis directory, and run this command:

curl -d "@./data/books.json" \
  -H "Content-Type: application/json" \
  $YOUR_FUNCTION_URL

We can check whether the books have been uploaded to our database using the GCP Console: search for Firestore and select the result with "Data" as the title and "Firestore" as the subtitle.

A books collection will be created with all our books inserted into the database.

Deploy Containers to Cloud Run

Too many technical terms, right? Let’s break it down!

Containers

Containers are a way to package and run applications. They isolate the application from its environment, so it can be deployed and scaled easily. Docker is a popular container platform.

Why Docker and Containers?

Docker and containers make apps portable, scalable, and efficient. They streamline development, integrate with CI/CD, and are key to microservices, cloud, and modern deployment.

What use case does Google Cloud Run have here?

We crafted a Web API with Express.js and Node.js for book CRUD. We’ll package this API in a Docker container and deploy it with Google Cloud Run, which offers serverless deployment, scaling, and cost efficiency. Our bookshelf will then be globally accessible.

Let’s get started🔥

Open Google Cloud Shell and navigate to the serverless-web-apis/run-crud/ directory, which contains the Web API and a Dockerfile.

The package.json file contains the required dependencies for running our web API server:

  1. express is a minimal and flexible Node.js web application framework.
  2. @google-cloud/firestore is the client SDK offered by Google Cloud for easy access to our database.
  3. cors is a Node.js package providing Express middleware that can be used to enable CORS with various options.

The index.js file is the entry point to our web API. It initializes the express server and listens to all requests on port 8080. We will focus on the implementation of GET and POST requests, but feel free to explore the source code for the PUT and DELETE methods as well.

The GET method retrieves books from our database. Implementation-wise, we listen for GET requests on the /books path. The handler fetches a page of 10 books, using query parameters to filter by author and language.

app.get('/books', async (req, res) => {
  try {
    let query = new Firestore().collection('books');

    if (!!req.query.author) {
      console.log(`Filtering by author: ${req.query.author}`);
      query = query.where("author", "==", req.query.author);
    }
    if (!!req.query.language) {
      console.log(`Filtering by language: ${req.query.language}`);
      query = query.where("language", "==", req.query.language);
    }

    const page = parseInt(req.query.page) || 0;

    const snapshot = await query
      .orderBy('updated', 'desc')
      .limit(PAGE_SIZE)
      .offset(PAGE_SIZE * page)
      .get();

    const books = [];

    if (snapshot.empty) {
      console.log('No book found');
    } else {
      snapshot.forEach(doc => {
        const {title, author, pages, year, language, ...otherFields} = doc.data();
        const book = {isbn: doc.id, title, author, pages, year, language};
        books.push(book);
      });
    }

    const links = {};
    if (page > 0) {
      const prevQuery = querystring.stringify({...req.query, page: page - 1});
      links.prev = `${req.path}${prevQuery != '' ? `?${prevQuery}` : ''}`;
    }
    if (snapshot.docs.length === PAGE_SIZE) {
      const nextQuery = querystring.stringify({...req.query, page: page + 1});
      links.next = `${req.path}${nextQuery != '' ? `?${nextQuery}` : ''}`;
    }
    if (Object.keys(links).length > 0) {
      res.links(links);
    }

    res.status(200).send(books);
  } catch (e) {
    console.error('Failed to fetch books', e);
    res.status(400)
      .send({error: `Impossible to fetch books: ${e.message}`});
  }
});

The POST method is used to create books in the database. We listen for POST requests on the same /books path. The request body contains the book data as JSON with the required attributes. The createBook function then writes this data to our database.

app.post('/books', async (req, res) => {
  const isbn = req.body.isbn;
  createBook(isbn, req, res);
});

Dockerfile

A Dockerfile is a script defining steps to build a container image. It contains commands to create an application-ready environment.

Our Dockerfile looks like this:

FROM node:20-slim
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY . ./
CMD [ "node", "index.js" ]
  1. FROM node:20-slim: The base image "node:20-slim" provides the Node.js runtime and essentials.
  2. WORKDIR /usr/src/app: Sets the working directory inside the container to /usr/src/app.
  3. COPY package*.json ./: Copies package.json and package-lock.json to the container’s working directory.
  4. RUN npm install --only=production: Runs "npm install" inside the container, installing only production dependencies.
  5. COPY . ./: Copies the remaining source files to the container.
  6. CMD ["node", "index.js"]: Entry point that starts the container, launching the web API with node index.js. Uses an array of strings as the command.

Whoa! That’s a lot of work. All we need to do now is build the image and deploy it to Cloud Run, and our web API will be up and running.

Build the Image

Create a repository in Artifact Registry

gcloud artifacts repositories create cv3-repo \
  --repository-format=docker \
  --location=${REGION}

Build the Docker image locally, tagged for the Artifact Registry repository created above (note the ${REGION}-docker.pkg.dev registry host, which matches that repository).

docker build -t ${REGION}-docker.pkg.dev/${PROJECT_ID}/cv3-repo/run-crud-api .

Push this image to Artifact Registry.

docker push ${REGION}-docker.pkg.dev/${PROJECT_ID}/cv3-repo/run-crud-api

Deploy the image to Cloud Run

gcloud run deploy run-crud \
  --image=${REGION}-docker.pkg.dev/${PROJECT_ID}/cv3-repo/run-crud-api \
  --allow-unauthenticated \
  --region=${REGION} \
  --platform=managed

It’s done! Our web API container is deployed to Cloud Run, and we can use the service URL to access the API. Go to YOUR_API_URL/books in your browser; this sends a GET request that queries the first 10 books in the database.

[
  {
    "isbn": "9782070369218",
    "title": "Memoirs of Hadrian",
    "author": "Marguerite Yourcenar",
    "pages": 408,
    "year": 1951,
    "language": "French"
  },
  {
    "isbn": "9781585102259",
    "title": "The Aeneid",
    "author": "Virgil",
    "pages": 442,
    "year": -23,
    "language": "Classical Latin"
  },
  ...
]

Let’s try using query parameters to get the top 10 books in French: go to YOUR_API_URL/books?language=French.

You will receive an error:

{
  "error": "Impossible to fetch books: 9 FAILED_PRECONDITION:
  The query requires an index. You can create it here: ${URL_TO_CREATE_INDEX}"
}

The query requires an index! Why?

Firestore needs indexes for fast data querying. Without them, searches become sluggish as the entire database is scanned. Indexes create a data map, enabling Firestore to quickly locate relevant documents.

To create an index, visit the URL, log in, grant “language” field index permission, click “Create,” and the index will be ready in 2–10 minutes.
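Alternatively, the same kind of composite index can be created from the command line. A sketch using gcloud (the field names follow the query in run-crud, which filters on language and orders by updated; double-check the generated index definition in the console):

```
gcloud firestore indexes composite create \
  --collection-group=books \
  --field-config=field-path=language,order=ascending \
  --field-config=field-path=updated,order=descending
```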

After the index is created, we can try again to get the top 10 French books from our database: YOUR_API_URL/books?language=French

[
  {
    "isbn": "9782070369218",
    "title": "Memoirs of Hadrian",
    "author": "Marguerite Yourcenar",
    "pages": 408,
    "year": 1951,
    "language": "French"
  },
  {
    "isbn": "9788437616926",
    "title": "Gargantua and Pantagruel",
    "author": "François Rabelais",
    "pages": 623,
    "year": 1533,
    "language": "French"
  },
  ...
]

Attempt querying by the author’s name, and you’ll encounter the same error. Proceed to the provided URL to create an index, and then retry the author-based query. Feel free to test all the HTTP methods.

Deploying Books Frontend to Cloud Run (Optional)

By this point we all know Cloud Run, Docker, containers, and Artifact Registry, so we will make this quick. Navigate to the serverless-web-apis/appengine-frontend directory.

First, create a Dockerfile

touch Dockerfile

Now open the Cloud Shell Editor and paste the following configuration into the Dockerfile:

FROM node:20-slim
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY . ./
ENV RUN_CRUD_SERVICE_URL=${YOUR_WEB_API_URL}
CMD [ "node", "index.js" ]

Build the image locally with the Artifact Registry tag.

docker build -t ${REGION}-docker.pkg.dev/${PROJECT_ID}/cv3-repo/books-frontend .

Push it to Artifact Registry.

docker push ${REGION}-docker.pkg.dev/${PROJECT_ID}/cv3-repo/books-frontend

Deploy this image to Cloud Run.

gcloud run deploy books-frontend \
  --image=${REGION}-docker.pkg.dev/${PROJECT_ID}/cv3-repo/books-frontend \
  --allow-unauthenticated \
  --region=${REGION} \
  --platform=managed

Whoa again! But that was a lot faster this time. Now go to this container’s URL to access the frontend.

Summary

We created a set of serverless services, thanks to Cloud Functions and Cloud Run, exposing Web API endpoints and a web frontend to view all our books.

Services we have covered:

  1. Google Cloud Run
  2. Google Cloud Functions
  3. Google Cloud Firestore
  4. Google Artifact Registry

Through a seamless integration of these services, developers can build scalable, secure, and highly available APIs that adapt to varying workloads, all while enjoying the streamlined development and cost efficiency that serverless architectures bring.
