GCP pipeline: pub/sub-lookup-storage (part 2/2)


A previous blog post described the implementation of a data pipeline with simple lookup logic implemented using a Cloud Function. Google Cloud, in addition to Cloud Functions, offers Cloud Run as a serverless option. This post briefly describes how to create a Cloud Run service and showcases different cases for both serverless options, followed by performance test results.

Introduction

The goal is to determine the preferable option for the pubsub-lookup-storage pipeline (Image 1: pubsub-lookup): Cloud Function or Cloud Run.

Also, to answer the following: is there a case in which having separate components (for lookup in Redis, receiving messages from topics, and data storage) is more desirable?

Image 1: pubsub-lookup

The initial idea was to have three Cloud Functions: one collecting messages from topics, a second looking up the additional value in Memorystore for Redis, and a third storing the merged data collected from the topics and Memorystore in Google Cloud Storage and Firestore.

There are two advantages to this approach. Firstly, you can tune the performance of an individual Cloud Function as much as you need by changing parameters such as timeout and memory allocation. Secondly, you can reuse an existing Cloud Function (such as the one reading from Redis) in another use case.

However, there is also a downside. With this approach, time is wasted on HTTP calls, which also means you are charged more for using the service. If a Subscriber instance sends an HTTP call to the Writer instance, it must block its execution and wait for the Writer's response. Let's assume the Writer component takes 250 milliseconds to execute. The Subscriber function's execution time then also becomes 250 milliseconds longer. And since GCP charges by execution time in milliseconds, we are effectively paying for 500 milliseconds, even though the whole operation took 250 milliseconds and the Writer function was the one doing all the work.

In the post Serverless Option, something interesting was mentioned; coincidentally, it is also the reason for implementing a version with Cloud Run: "By design, Cloud Functions only handles one request per instance, ensuring each request has the full amount of compute and memory allocated to it. This may make rapid scaling slower with Cloud Functions, but App Engine standard environment, Cloud Run, and Cloud Run for Anthos can handle multiple concurrent requests per instance. This means these services can scale faster by handling more traffic per instance, but all requests in an instance must share resources."

Cloud Run Implementation

Cloud Run is a managed compute platform that enables you to run stateless containers that are invocable via web requests or Pub/Sub events.

The following describes the process of creating a Cloud Run Service that receives messages from topics.

The code is written in Python, and Docker was used to build and deploy the code as containers.

Flask is used for handling incoming requests. Flask is a lightweight WSGI web application framework; it is classified as a microframework because it does not require particular tools or libraries. The following code shows snippets of the source code:
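
The original snippets are not reproduced here; below is a minimal sketch of what the subscriber's request handler might look like, assuming a plain Flask app that receives Pub/Sub push requests (the helpers lookup_in_redis and store are placeholders, not the actual functions used):

import base64
import json
import os

from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["POST"])
def index():
    # Pub/Sub push delivers a JSON envelope with the message inside.
    envelope = request.get_json(silent=True)
    if not envelope or "message" not in envelope:
        return "Bad Request: invalid Pub/Sub message format", 400

    message = envelope["message"]
    data = base64.b64decode(message.get("data", "")).decode("utf-8")
    payload = json.loads(data) if data else {}

    # Placeholder steps: enrich the message from Memorystore and store it.
    # enriched = lookup_in_redis(payload)
    # store(enriched)

    # A 2xx response acknowledges the message to Pub/Sub.
    return "", 204

if __name__ == "__main__":
    # Cloud Run injects the listening port via the PORT environment variable.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))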

DOCKER IMAGE

For creating images, you are going to need a Dockerfile. Docker can build images automatically by reading its instructions. A Dockerfile is a text document that contains all the commands a user can call on the command line to assemble an image.

In our case the Dockerfile looks like this:
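
The original Dockerfile is not shown here; a minimal sketch for a Python/Flask service like this one could look as follows (the file names main.py and requirements.txt, and the use of gunicorn, are assumptions):

# Use a slim official Python base image.
FROM python:3.9-slim

WORKDIR /app

# Install dependencies first so Docker can cache this layer.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application source.
COPY . .

# Start the web server on the port provided by Cloud Run.
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 main:app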

After setting up the Dockerfile, you can build the image with this command:

docker build [OPTIONS] PATH | URL | -

To push the created image to the GCP Container Registry, the tag must include the registry name (tagging the image with a registry name configures the docker push command to push the image to a specific location). This can be achieved with the following command:

docker tag [SOURCE_IMAGE] [HOSTNAME]/[PROJECT-ID]/[IMAGE]:[TAG]

To push the image to the Container Registry, use the following command:

docker push [HOSTNAME]/[PROJECT-ID]/[IMAGE]:[TAG]
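
For example, with an illustrative project ID my-project and image name subscriber (not the actual names used), the full sequence would be:

docker build -t subscriber .
docker tag subscriber gcr.io/my-project/subscriber:v1
docker push gcr.io/my-project/subscriber:v1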

The image can now be found in the Container Registry service in the GCP web console.

Run the following command to deploy your app:

gcloud run deploy NAME --image gcr.io/PROJECT_ID/IMAGE_NAME

In our case:
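
The exact command is not reproduced here; with the illustrative names from above (the region is also an assumption), it would look roughly like this:

gcloud run deploy subscriber --image gcr.io/my-project/subscriber:v1 --platform managed --region europe-west1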

If you want to deploy a code update to the service, repeat the previous steps. Each deployment to a service creates a new revision and automatically starts serving traffic when ready.

Integrating with Pub/Sub

First, it is necessary to create a topic for Pub/Sub.
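
For example, with gcloud (the topic name is illustrative):

gcloud pubsub topics create subscriber-topic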

The following steps are required to integrate the Cloud Run service with Pub/Sub:

Enable Pub/Sub to create authentication tokens in your project.

Create or select a service account to represent the Pub/Sub subscription identity.

Give the invoker service account permission to invoke your Cloud Run service:
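
With gcloud, this can be done roughly as follows (the service and service account names are illustrative):

gcloud run services add-iam-policy-binding subscriber \
  --member=serviceAccount:pubsub-invoker@my-project.iam.gserviceaccount.com \
  --role=roles/run.invoker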

Create a Pub/Sub subscription with the service account:
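
A push subscription pointing at the service could be created like this (names are illustrative; SERVICE-URL is the URL of the deployed Cloud Run service):

gcloud pubsub subscriptions create subscriber-sub --topic subscriber-topic \
  --push-endpoint=SERVICE-URL/ \
  --push-auth-service-account=pubsub-invoker@my-project.iam.gserviceaccount.com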

You can find your SERVICE-URL on the service details page in the Cloud Run console.
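
It can also be read from the command line, for example (assuming the illustrative service name and region from above):

gcloud run services describe subscriber --platform managed --region europe-west1 --format="value(status.url)"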

Connections for Cloud Run

In order to be able to connect to the Cloud SQL database and Memorystore, it is necessary to add connections in the service's advanced settings.

It is possible to access the cache from Cloud Run by connecting the service to a Serverless VPC Access connector in the same authorized network as the Memorystore instance.

Like any configuration change, setting a new configuration for the Cloud SQL connection leads to the creation of a new Cloud Run revision.
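
When deploying from the command line, the same connections can be set with flags; a sketch with illustrative connector and instance names:

gcloud run deploy subscriber --image gcr.io/my-project/subscriber:v1 \
  --vpc-connector my-connector \
  --add-cloudsql-instances my-project:europe-west1:my-sql-instance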

TESTS

The following pipelines are used for testing:

  • Case 1: The writer and lookup in Redis are no longer separate components; they are modules of the Subscriber.
  • Case 2: All components (subscriber, writer and Redis Proxy) are located in separate Cloud Run services.
  • Case 3: All components are located in one function.

For all tests, the lookup data was stored in Redis and all messages that were sent were valid. The results were analyzed in the same way for every version, using the Metrics explorer to track two metrics: the subscriber function's execution time, and the delay between publishing and processing a message, i.e. the difference between the time the instance is invoked (triggered by Pub/Sub) and the publish time found in context.timestamp obtained from the message (function parameters).
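
As an illustration, the delay could be computed inside the subscriber roughly like this (a sketch assuming the background Cloud Function signature, not the exact code used):

from datetime import datetime, timezone

def subscribe(event, context):
    # context.timestamp holds the publish time of the Pub/Sub message in RFC 3339 format.
    published = datetime.fromisoformat(context.timestamp.replace("Z", "+00:00"))
    invoked = datetime.now(timezone.utc)
    delay = (invoked - published).total_seconds()
    print(f"Delay between publish and invocation: {delay:.3f} s")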

Due to the large number of requests to Firestore, Storage and Memorystore, the number of attempts was set to three, as that was enough for the tests below. All quotas that were set by default on the project were left unchanged; quotas such as the maximum and minimum number of Cloud Function instances were set so that new instances are created automatically as needed. For Cloud Run, the maximum number of instances was set to 1000 (which is the default maximum, although it is possible to request a larger number).

The cases were tested by sending:

  1. 240,000 messages at a rate of 200 messages/second for 20 minutes, one publisher.
  2. 240,000 messages at the same time, one publisher.
  3. 140,000 messages, three publishers (in total 420,000 messages).

The following images show the graphs of average execution time for Case 1, Case 2 and Case 3 for Test 1 (240,000 messages at a rate of 200 messages/second for 20 minutes, one publisher):

Case 1
Average execution time is 0.130 s
No components raised errors during the test.
All of the 240,000 messages were stored as valid.

Case 2
Average execution time is 0.218 s
No components raised errors during the test.
All of the 240,000 messages were stored as valid.

Case 3
Average execution time is 213.17 ms.
No components raised errors during the test.
All of the 240,000 messages were stored as valid.

The next set of images shows the graphs of average execution time for Case 1, Case 2 and Case 3 for Test 2 (240,000 messages at the same time, one publisher):

Case 1
Average execution time is 0.147 s
No components raised errors during the test.
All of the 240,000 messages were stored as valid.

Case 2
Average execution time is 0.277 s
No components raised errors during the test.
All of the 240,000 messages were stored as valid.

Case 3
Average execution time is 364.01 ms.
No components raised errors during the test.
All of the 240,000 messages were stored as valid.

The last set of images shows the graphs of average execution time for Case 1, Case 2 and Case 3 for Test 3 (140,000 messages, three publishers):

Case 1
Average execution time is 0.155 s.
Not all data was saved in Cloud Storage and Cloud Firestore: 341,728 messages were saved, and the remaining 78,272 were not.
We can see that the following warning appears in the logs:

With textPayload: "The request was aborted because there was no available instance."

According to Google documentation, in our case, the issue can be caused by a sudden immense increase in traffic or by the service reaching its maximum container instance limit. By default, container instances can scale up to 1000 instances; therefore, we can try to increase the number of instances for the next test.

Case 3
Average execution time is 316.56 ms.
No components raised errors during the test.
All of the 420,000 messages were stored as valid.

The table for Test 1 and Test 2 shows that Case 1 has the lowest average execution time per instance and that the process of collecting, updating and storing data is done with the shortest delay (from the moment the message is sent). In Test 3, Case 3 and Case 2 managed to store all data (an error occurred in the remaining case, resulting in the loss of messages). So, from the conducted tests we can conclude that Case 1 is the better option for our use case when there is no large flow of data in a short period of time. If a case required a large amount of data to be sent in a short time, Cloud Function-v2 would have been a better option.

It is important to take note of the reason why some tests were not successful. For Case 4, the quota limit was reached: Quota exceeded for quota group 'CPUMilliSeconds-europe-west1' and limit 'CPU allocation in function invocations for europe-west1 per 100 seconds'. So, the quota is shared between all Cloud Functions that are in the same project and in the same region. Furthermore, for Case 1 we reached the quota for the maximum number of instances. If needed, it is possible to increase both limits by contacting support.

We believe that a sample of three tests for each version is not enough to draw definite conclusions; however, it allowed us to gain considerable insight into serverless options for streaming data.

References:

PART 1/2

Originally published at https://www.syntio.net, June 6, 2020.
