Securing Flowise on Cloud Run

Yuki Iwanari
Published in google-cloud-jp
11 min read · Sep 21, 2023

In this article, I will introduce Flowise, an open-source visual UI tool for building customized LLM flows, and show how to run it securely on Cloud Run.

Notice
This is the English version of my blog post on zenn.dev (originally written in Japanese).

What is Flowise?

Flowise is an open-source visual UI tool for building customized LLM flows. The Vertex AI PaLM API for Text / Chat and Embeddings is supported as of v1.3.3.

With Flowise, you can create simple LLM chains as well as complex flows, such as a QnA Retrieval Chain backed by a vector store, entirely in the GUI.

Simple LLM Chain

In addition, you can easily get code snippets for calling the API of a customized LLM flow you have defined.

API access to customized LLM flow

The Marketplace is a good starting point with typical LLM flows. You can create and customize your own LLM flow from these templates.

Marketplace of Flowise

If you want to know how to use the Vertex AI PaLM API with Flowise, please read the following post on the Beatrust tech blog.

Getting Started

Let’s use Flowise!

Run Flowise locally

You can run Flowise as below:

npm install -g flowise
npx flowise start
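
Flowise listens on port 3000 by default, so once it starts you can open the UI at http://localhost:3000. A quick way to check that it is up (a simple sketch, assuming curl is installed):

# Check that the Flowise UI is reachable (default port is 3000)
curl -I http://localhost:3000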

There are three patterns for accessing Vertex AI via Flowise.

  1. Specify path to credentials in “Connect credential”
  2. Input credentials in “Connect credential”
  3. Use Application Default Credential (ADC)

With pattern 1 or 2, you can give each flow different permissions by using the credentials of several service accounts.

With pattern 3, access is controlled according to the permissions of your Google account. As you can see below, you don’t need to specify a credential when you use ADC (the client library reads the credential automatically).

Vertex AI authentication

You can create a local credential file with the following command. Please check that your Google account has the appropriate permissions for Vertex AI.

gcloud auth application-default login
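
If you want to confirm that ADC is configured, you can print an access token from the stored credentials, for example:

# Verify that ADC works by printing an access token
gcloud auth application-default print-access-token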

Run Flowise on Cloud Run simply

Next, let’s run Flowise on Cloud Run in a simple way, using the following steps.

Notice
These steps are just to get Flowise running, so please read the later part to understand how to run it securely. In this article, preview features are used in some cases for simplicity, but GA features are highly recommended if you want to use them in production.
In addition, creating a new project is highly recommended, as many cloud resources will be created in these steps and deleting them individually takes time.

First, set up environment variables and enable APIs.

PROJECT_ID=sample-project           # Need to change
FLOWISE_USERNAME=admin
FLOWISE_PASSWORD="skxZm&SE2Wa35W0E" # Need to change
REGION=asia-northeast1
ZONE=asia-northeast1-a
DATABASE_INSTANCE=flowise-instance
DATABASE_USER=postgres
DATABASE_PASSWORD="Aqt9Db64w8" # Need to change
gcloud services enable \
servicenetworking.googleapis.com cloudbuild.googleapis.com \
compute.googleapis.com artifactregistry.googleapis.com \
sql-component.googleapis.com run.googleapis.com \
aiplatform.googleapis.com secretmanager.googleapis.com \
--project=${PROJECT_ID}

Now the preparation is done!

Let’s build the Flowise container image using Cloud Build. The last command (gcloud builds submit) takes 15–20 minutes, so you can either wait for the job or proceed to the DB instance creation by adding the "--async" option (shown as the optional command below).

# Get flowise code
git clone https://github.com/FlowiseAI/Flowise.git
cd Flowise

# Create a repository in Artifact Registry
gcloud artifacts repositories create flowiseai --location=${REGION} \
--repository-format=docker --project=${PROJECT_ID}

# Build a container image of Flowise
gcloud builds submit \
--tag=${REGION}-docker.pkg.dev/${PROJECT_ID}/flowiseai/flowiseai . --project=${PROJECT_ID}

# (optional) Build a container image of Flowise with '--async' option
gcloud builds submit \
--tag=${REGION}-docker.pkg.dev/${PROJECT_ID}/flowiseai/flowiseai . --project=${PROJECT_ID} --async
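
If you used the "--async" option, you can check the build status and confirm the pushed image later, for example:

# Check recent Cloud Build jobs
gcloud builds list --limit=5 --project=${PROJECT_ID}

# Confirm the image exists in Artifact Registry
gcloud artifacts docker images list ${REGION}-docker.pkg.dev/${PROJECT_ID}/flowiseai --project=${PROJECT_ID}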

Next, let’s set up the database. The default option is SQLite, but I use Cloud SQL for PostgreSQL to keep data persistent when Cloud Run scales in and out.

# Configure VPC peering
gcloud compute addresses create google-managed-services-default \
--global \
--purpose=VPC_PEERING \
--prefix-length=16 \
--network=projects/${PROJECT_ID}/global/networks/default --project=${PROJECT_ID}
gcloud services vpc-peerings connect \
--service=servicenetworking.googleapis.com \
--ranges=google-managed-services-default \
--network=default \
--project=${PROJECT_ID}

# Create Cloud SQL for PostgreSQL instance
gcloud sql instances create ${DATABASE_INSTANCE} \
--database-version=POSTGRES_15 \
--tier db-custom-1-3840 \
--network default --no-assign-ip \
--zone=${ZONE} --project=${PROJECT_ID}

# Set password
gcloud sql users set-password ${DATABASE_USER} \
--instance=${DATABASE_INSTANCE} \
--password=${DATABASE_PASSWORD} --project=${PROJECT_ID}

# Create database
gcloud sql databases create flowise \
--instance=${DATABASE_INSTANCE} --project=${PROJECT_ID}
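
The instance only has a private IP, which is used as hostaddr for psql below and as DATABASE_HOST later. One way to look it up:

# Look up the private IP of the Cloud SQL instance
gcloud sql instances describe ${DATABASE_INSTANCE} \
--format="value(ipAddresses[0].ipAddress)" --project=${PROJECT_ID}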

The following shows the Flowise data structure.
- chat_flow: definitions of customized flows
- chat_message: history of prompts and outputs. Deleted once the "Clear" action is run in the Flowise app.

Notice
In this sample, the Cloud SQL instance does not have a public IP. If you want to connect to it, you need to use Compute Engine or Cloud Workstations, which can reach Cloud SQL via the private IP.

psql "sslmode=disable dbname=flowise user=postgres hostaddr=172.26.0.3"

flowise=> \dt
           List of relations
 Schema |     Name     | Type  |  Owner
--------+--------------+-------+----------
 public | chat_flow    | table | postgres
 public | chat_message | table | postgres
 public | credential   | table | postgres
 public | tool         | table | postgres
(4 rows)

Finally, let’s deploy to Cloud Run!

Cloud Run needs to reach Cloud SQL via the VPC. Here, Direct VPC egress is configured, but it is still in preview as of 2023/09. For production, Serverless VPC Access is more appropriate as it is GA.

DATABASE_HOST=172.26.0.3 # Need to change

# Deploy Flowise service to Cloud Run
gcloud beta run deploy flowise --port 3000 --region ${REGION} \
--image ${REGION}-docker.pkg.dev/${PROJECT_ID}/flowiseai/flowiseai \
--set-env-vars "FLOWISE_USERNAME=${FLOWISE_USERNAME},FLOWISE_PASSWORD=${FLOWISE_PASSWORD},DATABASE_TYPE=postgres,DATABASE_PORT=5432,DATABASE_HOST=${DATABASE_HOST},DATABASE_NAME=flowise,DATABASE_USER=${DATABASE_USER},DATABASE_PASSWORD=${DATABASE_PASSWORD}" \
--network=default --subnet=default --vpc-egress=private-ranges-only --project=${PROJECT_ID} \
--allow-unauthenticated # Public access
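
After the deployment finishes, you can look up the service URL and log in to the Flowise GUI with FLOWISE_USERNAME / FLOWISE_PASSWORD, for example:

# Get the URL of the deployed Cloud Run service
gcloud run services describe flowise --region ${REGION} \
--format="value(status.url)" --project=${PROJECT_ID}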

Security of Flowise

As of 2023/09, Flowise supports two levels of authentication.

  1. App level
    App-level authentication protects your Flowise instance with a username and password
  2. Chatflow level
    Chatflow-level protection protects the APIs of your customized LLM flows with API keys

Notice
As of 2023/09, API keys for Chatflow-level protection are stored locally. Chatflow-level authentication might not work correctly when scaling in / out, because the API keys are stored inside the containers.

If you want to protect the API with API keys, please consider the following approaches:
- Store the API keys on shared storage such as NFS (Cloud Storage FUSE, Filestore, etc.)
- Use CPU always allocated with min. / max. instances = 1 in Cloud Run (see the sketch below)
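
For the second approach, a minimal sketch could look like the following (it keeps exactly one instance running with CPU always allocated, so the locally stored API keys are not lost on scale in / out):

# Keep a single instance with CPU always allocated (sketch of the 2nd approach)
gcloud run services update flowise --region ${REGION} \
--no-cpu-throttling --min-instances=1 --max-instances=1 --project=${PROJECT_ID}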

You don’t have to consider this when you use Flowise only via the GUI or do not use Chatflow-level protection.
Personally, when I run Flowise on Cloud Run, I would rely on the security features of Cloud Run / Google Cloud instead of Chatflow-level protection (of course, it depends on your requirements).

Then, let’s enhance security by utilizing security features of Cloud Run / Google Cloud!

Run Flowise securely

This is the main topic of this article!

Run Flowise on Cloud Run securely
  1. Appropriate permission using service accounts
  2. Use credentials in Secret Manager
  3. Restrict ingress to Cloud Run
  4. IP restriction with Cloud Armor & protecting access with ID and context

Let’s deep dive into each topic :)

1. Appropriate permission using service accounts

Cloud Run uses the Compute Engine default service account when you do not specify one.

Let’s secure the Cloud Run deployment with least-privilege access by using a dedicated service account for the service.

First, create a service account.

# Create Service Account
gcloud iam service-accounts create sa-flowise \
--description="Service account for Flowise" \
--display-name="Flowise on Cloud Run service account" --project=${PROJECT_ID}

# Add permission to access Vertex AI PaLM API
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
--member="serviceAccount:sa-flowise@${PROJECT_ID}.iam.gserviceaccount.com" \
--role="roles/aiplatform.user" --project=${PROJECT_ID}

Notice
If you do not have the appropriate permission to deploy with this service account, please check whether you have the role "roles/iam.serviceAccountUser".

Next, configure Cloud Run using the service account you created.

# Delete Cloud Run service
gcloud run services delete flowise --region ${REGION} --project=${PROJECT_ID}

# Deploy the Cloud Run service again, using the service account
gcloud beta run deploy flowise --port 3000 --region ${REGION} \
--image ${REGION}-docker.pkg.dev/${PROJECT_ID}/flowiseai/flowiseai \
--set-env-vars "FLOWISE_USERNAME=${FLOWISE_USERNAME},FLOWISE_PASSWORD=${FLOWISE_PASSWORD},DATABASE_TYPE=postgres,DATABASE_PORT=5432,DATABASE_HOST=${DATABASE_HOST},DATABASE_NAME=flowise,DATABASE_USER=${DATABASE_USER},DATABASE_PASSWORD=${DATABASE_PASSWORD}" \
--network=default --subnet=default --vpc-egress=private-ranges-only --project=${PROJECT_ID} \
--allow-unauthenticated \
--service-account sa-flowise@${PROJECT_ID}.iam.gserviceaccount.com # added

In this step, the Vertex AI User role (roles/aiplatform.user) is bound to the service account.

Please check the following document about access control for Generative AI on Vertex AI.

Here, the application running on Cloud Run automatically accesses Google Cloud services with the permissions of the service account, so you don’t have to configure a service account key manually.

When accessing Vertex AI with Application Default Credentials, simply leave "Connect Credential" blank, as shown below.

Vertex AI authentication

If you want to use a different service account for each chatflow, you can use several credentials backed by several service accounts.
(In that case, you need to create the service accounts, download their JSON key files, and input them into "Connect Credential"; see the example below.)
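
For example, a key file for a dedicated service account can be created like this (sa-chatflow-a is a hypothetical service account name; handle downloaded keys carefully):

# Create a JSON key for a per-chatflow service account (hypothetical name)
gcloud iam service-accounts keys create sa-chatflow-a-key.json \
--iam-account=sa-chatflow-a@${PROJECT_ID}.iam.gserviceaccount.com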

2. Use credentials in Secret Manager

Next, let’s check environment variables.

In the previous step, the database credentials were specified as environment variables of Cloud Run, but those values are visible in the Cloud Console, as shown below.

Specify credentials as environment variables of Cloud Run

It would be better to hide these credentials just in case, so let’s use Secret Manager.

# Delete Cloud Run service
gcloud run services delete flowise --region ${REGION} --project=${PROJECT_ID}

# Create secrets
echo -n ${FLOWISE_PASSWORD} | gcloud secrets create flowise-password --project=${PROJECT_ID} --data-file=-
echo -n ${DATABASE_PASSWORD} | gcloud secrets create database-password --project=${PROJECT_ID} --data-file=-

# Grant the Flowise service account access to the secrets
gcloud secrets add-iam-policy-binding flowise-password \
--member="serviceAccount:sa-flowise@${PROJECT_ID}.iam.gserviceaccount.com" \
--role="roles/secretmanager.secretAccessor" --project=${PROJECT_ID}
gcloud secrets add-iam-policy-binding database-password \
--member="serviceAccount:sa-flowise@${PROJECT_ID}.iam.gserviceaccount.com" \
--role="roles/secretmanager.secretAccessor" --project=${PROJECT_ID}

# Deploy Cloud Run Service using Secret Manager
gcloud beta run deploy flowise --port 3000 --region ${REGION} \
--image ${REGION}-docker.pkg.dev/${PROJECT_ID}/flowiseai/flowiseai \
--set-env-vars "FLOWISE_USERNAME=${FLOWISE_USERNAME},DATABASE_TYPE=postgres,DATABASE_PORT=5432,DATABASE_HOST=${DATABASE_HOST},DATABASE_NAME=flowise,DATABASE_USER=${DATABASE_USER}" \
--network=default --subnet=default --vpc-egress=private-ranges-only --project=${PROJECT_ID} \
--allow-unauthenticated \
--service-account sa-flowise@${PROJECT_ID}.iam.gserviceaccount.com \
--update-secrets=FLOWISE_PASSWORD=flowise-password:latest,DATABASE_PASSWORD=database-password:latest # added

By using Secret Manager, these credentials are no longer visible in the Cloud Console, as shown below (only those who have permission on the secrets can access the values).

Use Secret Manager
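
If you want to double-check a stored value yourself (with an account that has the Secret Manager Secret Accessor role), you can read it directly from Secret Manager, for example:

# Read the secret value (requires roles/secretmanager.secretAccessor)
gcloud secrets versions access latest --secret=flowise-password --project=${PROJECT_ID}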

3. Restrict ingress to Cloud Run

Since we will deploy an external Application Load Balancer in front of Cloud Run, the ingress setting should be "Internal and Cloud Load Balancing".

# Delete Cloud Run service
gcloud run services delete flowise --region ${REGION} --project=${PROJECT_ID}

# Deploy the Cloud Run service, restricting ingress to internal and load balancer traffic
gcloud beta run deploy flowise --port 3000 --region ${REGION} \
--image ${REGION}-docker.pkg.dev/${PROJECT_ID}/flowiseai/flowiseai \
--set-env-vars "FLOWISE_USERNAME=${FLOWISE_USERNAME},DATABASE_TYPE=postgres,DATABASE_PORT=5432,DATABASE_HOST=${DATABASE_HOST},DATABASE_NAME=flowise,DATABASE_USER=${DATABASE_USER}" \
--network=default --subnet=default --vpc-egress=private-ranges-only --project=${PROJECT_ID} \
--allow-unauthenticated \
--service-account sa-flowise@${PROJECT_ID}.iam.gserviceaccount.com \
--update-secrets=FLOWISE_PASSWORD=flowise-password:latest,DATABASE_PASSWORD=database-password:latest \
--ingress=internal-and-cloud-load-balancing # added

After configuring this, you cannot access Cloud Run directly from the internet, even though the "--allow-unauthenticated" option is specified.
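
You can verify this, for example, by sending a request to the default run.app URL; requests that do not satisfy the ingress setting are rejected by Cloud Run:

# The default run.app URL should no longer be reachable from the internet
curl -sI "$(gcloud run services describe flowise --region ${REGION} \
--format='value(status.url)' --project=${PROJECT_ID})"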

Next, let’s configure an external Application Load Balancer with HTTPS. First, reserve a static IP address.

# Reserve static ip
gcloud compute addresses create flowise-ip \
--network-tier=PREMIUM \
--ip-version=IPV4 \
--global --project=${PROJECT_ID}

# Confirm IP address
gcloud compute addresses describe flowise-ip \
--format="get(address)" \
--global --project=${PROJECT_ID}

Then, create a certificate resource. To create an HTTPS load balancer, you need an SSL certificate resource. This time, let’s use a Google-managed SSL certificate.

# Create ssl certificate
gcloud compute ssl-certificates create flowise-ssl-certificate \
--description="SSL certificate for flowise" \
--domains="flowise.example.com" \
--global --project=${PROJECT_ID}

Then, let’s create the load balancer. For more details, please check the following document.

gcloud compute network-endpoint-groups create flowise-serverless-neg \
--region=${REGION} \
--network-endpoint-type=serverless \
--cloud-run-service=flowise --project=${PROJECT_ID}

gcloud compute backend-services create flowise-backend-service \
--load-balancing-scheme=EXTERNAL_MANAGED \
--global --project=${PROJECT_ID}

gcloud compute backend-services add-backend flowise-backend-service \
--global \
--network-endpoint-group=flowise-serverless-neg \
--network-endpoint-group-region=${REGION} --project=${PROJECT_ID}

gcloud compute url-maps create flowise-url-map \
--default-service flowise-backend-service --project=${PROJECT_ID}

gcloud compute target-https-proxies create flowise-target-https-proxy \
--http-keep-alive-timeout-sec=610 \
--ssl-certificates=flowise-ssl-certificate \
--url-map=flowise-url-map --project=${PROJECT_ID}

gcloud compute forwarding-rules create flowise-https-forwarding-rule \
--load-balancing-scheme=EXTERNAL_MANAGED \
--network-tier=PREMIUM \
--address=flowise-ip \
--target-https-proxy=flowise-target-https-proxy \
--global \
--ports=443 --project=${PROJECT_ID}

Finally, connect your domain to the load balancer by pointing its DNS A record at the reserved IP address.
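
If your domain is managed in Cloud DNS, one way to do this is to add an A record pointing at the reserved IP, for example (the managed zone name "example-zone" is an assumption):

# Point flowise.example.com at the load balancer IP (zone name is an example)
gcloud dns record-sets create flowise.example.com. \
--zone=example-zone --type=A --ttl=300 \
--rrdatas="$(gcloud compute addresses describe flowise-ip --format='get(address)' --global --project=${PROJECT_ID})" \
--project=${PROJECT_ID}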

After some time (up to 72 hours, several minutes in my case), the certificate’s status will become "ACTIVE".

gcloud compute ssl-certificates describe flowise-ssl-certificate \
--format="get(managed.domainStatus)" --project=${PROJECT_ID}

Once the status becomes “ACTIVE”, you can access Flowise on Cloud Run via “https://<domain name>”.

4. IP restriction with Cloud Armor & protecting access with ID and context

Google Cloud Armor helps you protect your Google Cloud deployments from multiple types of threats, including distributed denial-of-service (DDoS) attacks and application attacks like cross-site scripting (XSS) and SQL injection (SQLi).

Let’s enable Cloud Armor on the load balancer. Please keep in mind that it might take some time for the rules to be applied.

# Create security policy
gcloud compute security-policies create internal-users-policy \
--description "policy for internal test users" --project=${PROJECT_ID}

# Configure default "Deny"
gcloud compute security-policies rules update 2147483647 \
--security-policy internal-users-policy \
--action "deny-502" --project=${PROJECT_ID}

# Configure "Alloy" policy
gcloud compute security-policies rules create 1000 \
--security-policy internal-users-policy \
--description "allow traffic from 198.51.100.0/24" \
--src-ip-ranges "198.51.100.0/24" \
--action "allow" --project=${PROJECT_ID}

# Update backend service
gcloud compute backend-services update flowise-backend-service \
--security-policy internal-users-policy --global --project=${PROJECT_ID}

With Cloud Armor, you can restrict access to Cloud Run by source IP.
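
You can confirm that the policy is attached to the backend service, for example:

# Confirm the security policy attached to the backend service
gcloud compute backend-services describe flowise-backend-service \
--global --format="value(securityPolicy)" --project=${PROJECT_ID}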

To further enhance security, Google Cloud provides IAP (Identity-Aware Proxy), which protects access to your application based on identity and context. You can configure IAP for Cloud Run, so you can allow only specific Google accounts to access your Cloud Run service, as sketched below.
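
A minimal sketch of enabling IAP on the load balancer backend could look like the following (it assumes an OAuth client has already been created; OAUTH_CLIENT_ID / OAUTH_CLIENT_SECRET and the allowed account are placeholders):

# Enable IAP on the backend service (OAuth client is assumed to exist)
gcloud compute backend-services update flowise-backend-service \
--global --project=${PROJECT_ID} \
--iap=enabled,oauth2-client-id=${OAUTH_CLIENT_ID},oauth2-client-secret=${OAUTH_CLIENT_SECRET}

# Allow a specific Google account to access the app through IAP
gcloud iap web add-iam-policy-binding \
--resource-type=backend-services --service=flowise-backend-service \
--member="user:someone@example.com" \
--role="roles/iap.httpsResourceAccessor" --project=${PROJECT_ID}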

For more details, please check the following documents.

In addition to IAP, with BeyondCorp Enterprise you can configure policies based on user identity, device health, and other contextual factors to enforce granular access controls on applications.

Enhance security further…

In this article, I introduced my recommendations for securing Flowise on Cloud Run, but Google Cloud provides several other security features and products. Please check them out and try them!

Clean up

The following commands clean up the main (cost-incurring) components created in this article. However, not all resources are covered, so please delete the whole project if you created a new Google Cloud project for this walkthrough.

gcloud artifacts repositories delete flowiseai --location=${REGION} --project=${PROJECT_ID}
gcloud run services delete flowise --region ${REGION} --project=${PROJECT_ID}
gcloud sql instances delete flowise-instance --project=${PROJECT_ID}
gcloud iam service-accounts delete sa-flowise@${PROJECT_ID}.iam.gserviceaccount.com --project=${PROJECT_ID}
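
If you do not delete the whole project, the load balancer components, static IP, Cloud Armor policy, and secrets created above also remain; they can be removed, for example, like this:

gcloud compute forwarding-rules delete flowise-https-forwarding-rule --global --project=${PROJECT_ID}
gcloud compute target-https-proxies delete flowise-target-https-proxy --project=${PROJECT_ID}
gcloud compute url-maps delete flowise-url-map --project=${PROJECT_ID}
gcloud compute backend-services delete flowise-backend-service --global --project=${PROJECT_ID}
gcloud compute network-endpoint-groups delete flowise-serverless-neg --region=${REGION} --project=${PROJECT_ID}
gcloud compute ssl-certificates delete flowise-ssl-certificate --project=${PROJECT_ID}
gcloud compute addresses delete flowise-ip --global --project=${PROJECT_ID}
gcloud compute security-policies delete internal-users-policy --project=${PROJECT_ID}
gcloud secrets delete flowise-password --project=${PROJECT_ID}
gcloud secrets delete database-password --project=${PROJECT_ID}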


Yuki Iwanari
google-cloud-jp

Customer Engineer in Google Cloud. All views and opinions are my own.