Switching to Serverless with Google Cloud Platform

Varun Abhi
Feb 22

Serverless is a cloud computing execution model where the cloud provider dynamically manages the allocation and provisioning of servers. A serverless application runs in stateless compute containers that are event-triggered, ephemeral (may last for one invocation), and fully managed by the cloud provider. Pricing is based on the number of executions rather than pre-purchased compute capacity.

One of the major advantages of serverless is reduced cost: the expense of provisioning servers, and of the 24x7 operations team that used to blow a hole in your pocket, is gone. The cost model of serverless is execution-based: you are charged per invocation and for the compute time each invocation consumes. You are allotted a certain number of seconds of use that varies with the amount of memory you require; likewise, the price per unit of execution time varies with the amount of memory you require. Naturally, shorter-running functions fit this model best, given the maximum execution time of around 300 seconds imposed by most cloud vendors.
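To make the model concrete, here is a back-of-the-envelope estimate in Node.js. The rates below are hypothetical placeholders, not any vendor's actual price list; the point is simply that you pay per invocation plus per unit of memory-time consumed:

// Rough serverless cost estimate. All rates are made-up placeholders;
// substitute your provider's published prices.
const PRICE_PER_MILLION_INVOCATIONS = 0.40; // hypothetical rate (USD)
const PRICE_PER_GB_SECOND = 0.0000025;      // hypothetical rate (USD)

function estimateMonthlyCost(invocations, avgSeconds, memoryGb) {
  const invocationCost = (invocations / 1e6) * PRICE_PER_MILLION_INVOCATIONS;
  const computeCost = invocations * avgSeconds * memoryGb * PRICE_PER_GB_SECOND;
  return invocationCost + computeCost;
}

// 5 million requests a month, 200 ms each, at 256 MB of memory:
console.log(estimateMonthlyCost(5e6, 0.2, 0.25).toFixed(2)); // ≈ 2.63 (USD)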

Benefits of Serverless Architecture

From a business perspective

  1. The cost incurred by a serverless application is based on the number of function executions and their duration, metered in milliseconds instead of hours.
  2. Process agility: Smaller deployable units result in faster delivery of features to the market, increasing the ability to adapt to change.
  3. Cost of hiring backend infrastructure engineers goes down.
  4. Reduced operational costs.

From a developer perspective

  1. Reduced liability, no backend infrastructure to be responsible for.
  2. Zero system administration.
  3. Easier operational management.
  4. Fosters adoption of Nanoservices, Microservices, SOA Principles.
  5. Faster set up.
  6. Scalable, no need to worry about the number of concurrent requests.
  7. Monitoring out of the box.
  8. Fosters innovation.

From a user perspective

  1. If businesses are using that competitive edge to ship features faster, then customers are receiving new features quicker than before.
  2. It is possible for users to more easily provide their own storage backend (e.g. Dropbox or Google Drive).
  3. It’s more likely that these kinds of apps may offer client-side caching, which provides a better offline experience.

Cloud Run

Cloud Run is a fully managed compute platform that automatically scales your stateless containers. Cloud Run is serverless: it abstracts away all infrastructure management, so you can focus on what matters most — building great applications.

Benefits of using Cloud Run:

  • Write code your way using your favorite languages (Go, Python, Java, C#, PHP, Ruby, Node.js, Shell, and others)
  • Abstract away all infrastructure management for a simple developer experience
  • Only pay when your code is running

In this blog we will build a PDF converter web app on Cloud Run that automatically converts files uploaded to Cloud Storage into PDFs stored in a separate bucket.

Prerequisites

  • Knowledge of the Google Cloud SDK (gcloud tool)
  • GitHub
  • Shell scripting
  • Node.js
  • Containerization (Docker)

Understanding the task

Pet Theory would like to convert their invoices into PDFs so that customers can open them reliably. The team wants to accomplish this conversion automatically to minimize the workload for their office manager.

Here is the architecture of what exactly we will be building: files uploaded to a Cloud Storage bucket generate a Pub/Sub notification, which triggers a Cloud Run service that converts each file to a PDF stored in a second bucket.

Enable the Cloud Run API

  1. Open the navigation menu and select APIs & Services > Library. Then in the search bar, enter in “Cloud Run” and select the Cloud Run API from the results list.
  2. Click Enable, then hit the back button in your browser twice to return to the Console.

Deploy a simple Cloud Run service

  1. Open a new Cloud Shell session and run the following command to clone the Pet Theory repository:
git clone https://github.com/rosera/pet-theory.git

Then change your current working directory to lab03:

cd pet-theory/lab03

2. Edit package.json with Cloud Shell Code Editor or your preferred text editor. In the "scripts" section, add "start": "node index.js", as shown below:

..."scripts": {
"start": "node index.js",
"test": "echo \"Error: no test specified\" && exit 1"
},
...

3. Now run the following commands in Cloud Shell to install the packages that your conversion script will be using:

npm install express
npm install body-parser
npm install child_process
npm install @google-cloud/storage
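After these installs, the dependencies section of your package.json should look roughly like the following (version numbers will vary; note that child_process is actually a Node.js core module, so its npm entry is only a placeholder):

"dependencies": {
  "@google-cloud/storage": "^4.1.3",
  "body-parser": "^1.19.0",
  "child_process": "^1.0.2",
  "express": "^4.17.1"
}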

4. Now open the lab03/index.js file and review the code.

The application will be deployed as a Cloud Run service that accepts HTTP POSTs. If the POST request is a Pub/Sub notification about an uploaded file, the service writes the file details to the log. If not, the service simply returns the string “OK”.
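The repository has the authoritative version, but a minimal sketch of the behavior described above might look like this (assuming Express and body-parser, which you just installed):

const express = require('express');
const bodyParser = require('body-parser');

const app = express();
app.use(bodyParser.json());

app.post('/', (req, res) => {
  // A Pub/Sub push request wraps its payload in a base64-encoded
  // "message.data" field; log the file details if they are present.
  if (req.body && req.body.message && req.body.message.data) {
    const file = JSON.parse(
      Buffer.from(req.body.message.data, 'base64').toString());
    console.log(`file: ${JSON.stringify(file)}`);
  }
  res.send('OK');
});

app.listen(process.env.PORT || 8080);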

5. Review the file named lab03/Dockerfile.

The above file is called a manifest and provides a recipe for the docker command to build an image. Each line begins with a command that tells Docker how to process the information that follows it (the full file is sketched after this list):

  • The first line indicates that the base image should use Node v12 as the template for the image to be created.
  • The last line indicates the command to be performed, which in this instance refers to "npm start".
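For reference, a sketch of the whole file, assuming it matches the final Dockerfile shown later in this post minus the LibreOffice install step:

FROM node:12
WORKDIR /usr/src/app
COPY package.json package*.json ./
RUN npm install --only=production
COPY . .
CMD [ "npm", "start" ]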

6. To build and deploy the REST API, use Google Cloud Build. Run this command to start the build process:

gcloud builds submit \
--tag gcr.io/$GOOGLE_CLOUD_PROJECT/pdf-converter

The command builds a container with your code and puts it in the Container Registry of your project.
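If you prefer the command line, you can also list the images in your project's registry:

gcloud container images list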

7. Return to the GCP Console, open the navigation menu, and select Container Registry > Images. You should see your container hosted there.

8. Return to your code editor tab and in Cloud Shell run the following command to deploy your application:

gcloud beta run deploy pdf-converter \
--image gcr.io/$GOOGLE_CLOUD_PROJECT/pdf-converter \
--platform managed \
--region us-central1 \
--no-allow-unauthenticated

9. When the deployment is complete, you will see a message like this:

Service [pdf-converter] revision [pdf-converter-00001] has been deployed and is serving 100 percent of traffic at https://pdf-converter-[hash].a.run.app

10. Create the environment variable $SERVICE_URL for the app so you can easily access it:

SERVICE_URL=$(gcloud beta run services describe pdf-converter --platform managed --region us-central1 --format="value(status.url)")
echo $SERVICE_URL

11. Make an anonymous POST request to your new service:

curl -X POST $SERVICE_URL

This will result in an error message saying "Your client does not have permission to get the URL". This is good; you don't want the service to be callable by anonymous users.

Now try invoking the service as an authorized user:

curl -X POST -H "Authorization: Bearer $(gcloud auth print-identity-token)" $SERVICE_URL

If you get the response "OK" you have successfully deployed a Cloud Run service. Well done!

Trigger your Cloud Run service when a new file is uploaded

Now that the Cloud Run service has been successfully deployed, the Cloud Storage bucket will use an event trigger (a Pub/Sub notification) to tell the application when a file has been uploaded and needs to be processed.

  1. Run the following command to create a bucket in Cloud Storage for the uploaded docs:
gsutil mb gs://$GOOGLE_CLOUD_PROJECT-upload

2. And another bucket for the processed PDFs:

gsutil mb gs://$GOOGLE_CLOUD_PROJECT-processed

Now return to your GCP Console tab, open the Navigation menu and select Storage. Verify that the buckets have been created (there will be other buckets there as well that are used by the platform.)
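You can also verify from Cloud Shell by listing the buckets in your project:

gsutil ls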

3. In Cloud Shell run the following command to tell Cloud Storage to send a Pub/Sub notification whenever a new file has finished uploading to the docs bucket:

gsutil notification create -t new-doc -f json -e OBJECT_FINALIZE gs://$GOOGLE_CLOUD_PROJECT-upload

The notifications will be labeled with the topic “new-doc”.
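When a file is uploaded, the push request that Pub/Sub will eventually send to your service looks roughly like this (abridged; this assumes the standard Cloud Storage notification format, where message.data carries a base64-encoded storage#object JSON document):

{
  "message": {
    "attributes": {
      "eventType": "OBJECT_FINALIZE",
      "bucketId": "my-project-upload",
      "objectId": "invoice.docx"
    },
    "data": "<base64-encoded JSON with bucket, name, and other object fields>",
    "messageId": "1234567890"
  },
  "subscription": "projects/my-project/subscriptions/pdf-conv-sub"
}

The decodeBase64Json helper you will add later simply unpacks that data field.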

4. Then create a new service account which Pub/Sub will use to trigger the Cloud Run services:

gcloud iam service-accounts create pubsub-cloud-run-invoker --display-name "PubSub Cloud Run Invoker"

5. Give the new service account permission to invoke the PDF converter service:

gcloud beta run services add-iam-policy-binding pdf-converter --member=serviceAccount:pubsub-cloud-run-invoker@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com --role=roles/run.invoker --platform managed --region us-central1

6. Find your project number by running this command:

gcloud projects list

Look for the current default project as we will be using the value of the Project Number in the next command.

7. Create a PROJECT_NUMBER environment variable, replacing [project number] with the Project Number from the last command:

PROJECT_NUMBER=[project number]
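Alternatively, you can populate the variable in one step:

PROJECT_NUMBER=$(gcloud projects describe $GOOGLE_CLOUD_PROJECT --format="value(projectNumber)")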

8. Then enable your project to create Cloud Pub/Sub authentication tokens:

gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT --member=serviceAccount:service-$PROJECT_NUMBER@gcp-sa-pubsub.iam.gserviceaccount.com --role=roles/iam.serviceAccountTokenCreator

9. Finally, create a Pub/Sub subscription so that the PDF converter can run whenever a message is published on the topic “new-doc”.

gcloud beta pubsub subscriptions create pdf-conv-sub --topic new-doc --push-endpoint=$SERVICE_URL --push-auth-service-account=pubsub-cloud-run-invoker@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com

See if the Cloud Run service is triggered when files are uploaded to Cloud Storage

To verify the application is working as expected, Ruby asks Patrick to upload some test data to the named storage bucket and then check Stackdriver Logging.

  1. Copy some test files into your upload bucket:
gsutil -m cp gs://spls/gsp644/* gs://$GOOGLE_CLOUD_PROJECT-upload 

2. Once the upload is done, return to your GCP Console tab, open the navigation menu, and select Logging from under the Stackdriver section.

In the first dropdown, filter your results to Cloud Run Revision.

In the log results, look for a log entry that starts with file: and click it. It shows a dump of the file data that Pub/Sub sends to your Cloud Run service when a new file is uploaded.

Can you find the name of the file you uploaded in this object?

Note: If you do not see any log entries that begin with “file”, try clicking on the “load newer logs” button near the bottom of the page.

Now return to the code editor tab and run the following command in Cloud Shell to clean up your upload directory by deleting the files in it:

gsutil -m rm gs://$GOOGLE_CLOUD_PROJECT-upload/*

Update the Docker container

With all the files identified, the Dockerfile can now be created. Help Ruby set up and deploy the container.

The package for LibreOffice was not included in the container before, which means it now needs to be added. Patrick has previously provided the commands he uses to build his application, and Ruby will add these as a RUN command within the Dockerfile.

  1. Open the Dockerfile manifest and add the line RUN apt-get update -y && apt-get install -y libreoffice && apt-get clean, as shown below:
FROM node:12
RUN apt-get update -y \
&& apt-get install -y libreoffice \
&& apt-get clean
WORKDIR /usr/src/app
COPY package.json package*.json ./
RUN npm install --only=production
COPY . .
CMD [ "npm", "start" ]

Deploy the new version of the pdf-conversion service

  1. Open the index.js file and add the following package requirements at the top of the file:
const {promisify} = require('util');
const {Storage} = require('@google-cloud/storage');
const exec = promisify(require('child_process').exec);
const storage = new Storage();

2. Replace the app.post('/', ...) handler with the following code. Note that it replies with an HTTP 200 "OK" even when conversion fails; acknowledging the message this way keeps Pub/Sub from redelivering it endlessly:

app.post('/', async (req, res) => {
  try {
    const file = decodeBase64Json(req.body.message.data);
    await downloadFile(file.bucket, file.name);
    const pdfFileName = await convertFile(file.name);
    await uploadFile(process.env.PDF_BUCKET, pdfFileName);
    await deleteFile(file.bucket, file.name);
  }
  catch (ex) {
    console.log(`Error: ${ex}`);
  }
  res.set('Content-Type', 'text/plain');
  res.send('\n\nOK\n\n');
})

3. Now add the following code that processes LibreOffice documents to the bottom of the file:

// Fetch the uploaded file from Cloud Storage into /tmp.
async function downloadFile(bucketName, fileName) {
  const options = {destination: `/tmp/${fileName}`};
  await storage.bucket(bucketName).file(fileName).download(options);
}

// Run LibreOffice headlessly to produce a PDF next to the original.
async function convertFile(fileName) {
  const cmd = 'libreoffice --headless --convert-to pdf --outdir /tmp ' +
    `"/tmp/${fileName}"`;
  console.log(cmd);
  const { stdout, stderr } = await exec(cmd);
  if (stderr) {
    throw stderr;
  }
  console.log(stdout);
  const pdfFileName = fileName.replace(/\.\w+$/, '.pdf');
  return pdfFileName;
}

async function deleteFile(bucketName, fileName) {
  await storage.bucket(bucketName).file(fileName).delete();
}

async function uploadFile(bucketName, fileName) {
  await storage.bucket(bucketName).upload(`/tmp/${fileName}`);
}

4. Ensure your index.js file looks like the following:

const {promisify} = require('util');
const {Storage} = require('@google-cloud/storage');
const exec = promisify(require('child_process').exec);
const storage = new Storage();
const express = require('express');
const bodyParser = require('body-parser');
const app = express();

app.use(bodyParser.json());
const port = process.env.PORT || 8080;

app.listen(port, () => {
  console.log('Listening on port', port);
});

app.post('/', async (req, res) => {
  try {
    const file = decodeBase64Json(req.body.message.data);
    await downloadFile(file.bucket, file.name);
    const pdfFileName = await convertFile(file.name);
    await uploadFile(process.env.PDF_BUCKET, pdfFileName);
    await deleteFile(file.bucket, file.name);
  }
  catch (ex) {
    console.log(`Error: ${ex}`);
  }
  res.set('Content-Type', 'text/plain');
  res.send('\n\nOK\n\n');
})

function decodeBase64Json(data) {
  return JSON.parse(Buffer.from(data, 'base64').toString());
}

async function downloadFile(bucketName, fileName) {
  const options = {destination: `/tmp/${fileName}`};
  await storage.bucket(bucketName).file(fileName).download(options);
}

async function convertFile(fileName) {
  const cmd = 'libreoffice --headless --convert-to pdf --outdir /tmp ' +
    `"/tmp/${fileName}"`;
  console.log(cmd);
  const { stdout, stderr } = await exec(cmd);
  if (stderr) {
    throw stderr;
  }
  console.log(stdout);
  const pdfFileName = fileName.replace(/\.\w+$/, '.pdf');
  return pdfFileName;
}

async function deleteFile(bucketName, fileName) {
  await storage.bucket(bucketName).file(fileName).delete();
}

async function uploadFile(bucketName, fileName) {
  await storage.bucket(bucketName).upload(`/tmp/${fileName}`);
}

5. The main logic is housed in these functions:

const file = decodeBase64Json(req.body.message.data);
await downloadFile(file.bucket, file.name);
const pdfFileName = await convertFile(file.name);
await uploadFile(process.env.PDF_BUCKET, pdfFileName);
await deleteFile(file.bucket, file.name);

Whenever a file has been uploaded, this service gets triggered. It performs these tasks, one per line above:

  • Extracts the file details from the Pub/Sub notification.
  • Downloads the file from Cloud Storage to the local hard drive. This is actually not a physical disk, but a section of virtual memory that behaves like a disk.
  • Converts the downloaded file to PDF.
  • Uploads the PDF file to Cloud Storage. The environment variable process.env.PDF_BUCKET contains the name of the Cloud Storage bucket to write PDFs to. You will assign a value to this variable when you deploy the service below.
  • Deletes the original file from Cloud Storage.

The rest of index.js implements the functions called by this top-level code.

It’s time to deploy the service, and to set the PDF_BUCKET environment variable. It's also a good idea to give LibreOffice 2 GB of RAM to work with (see the line with the --memory option).

6. Run the following command to build the container:

gcloud builds submit \
--tag gcr.io/$GOOGLE_CLOUD_PROJECT/pdf-converter

7. Now deploy the latest version of your application:

gcloud beta run deploy pdf-converter \
--image gcr.io/$GOOGLE_CLOUD_PROJECT/pdf-converter \
--platform managed \
--region us-central1 \
--memory=2Gi \
--no-allow-unauthenticated \
--set-env-vars PDF_BUCKET=$GOOGLE_CLOUD_PROJECT-processed

With LibreOffice part of the container, this build will take longer than the previous one. This is a good time to get up and stretch for a few minutes.

Testing the pdf-conversion service

  1. Once the deployment commands finish, make sure that the service was deployed correctly by running:
curl -X POST -H "Authorization: Bearer $(gcloud auth print-identity-token)" $SERVICE_URL

If you get the response "OK" you have successfully deployed the updated Cloud Run service. LibreOffice can convert many file types to PDF: DOCX, XLSX, JPG, PNG, GIF, etc.

2. Run the following command to upload some example files:

gsutil -m cp gs://spls/gsp644/* gs://$GOOGLE_CLOUD_PROJECT-upload

Return to the GCP Console, open the Navigation menu and select Storage. Open the bucket whose name ends in "-upload" and click the Refresh bucket button a couple of times to watch the files be deleted, one by one, as they are converted to PDFs.

Then click Browser from the left menu, and click on the bucket whose name ends in "-processed". It should contain PDF versions of all your files. Feel free to open the PDFs to make sure they were properly converted.
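You can also list the converted files from Cloud Shell:

gsutil ls gs://$GOOGLE_CLOUD_PROJECT-processed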

Congratulations!

You are now familiar with Cloud Run and serverless architecture. In the same way, you can build your own business logic on Cloud Run without worrying about the overhead of infrastructure management and upfront pricing.
