Cloud Functions Combed 🪮
An architectural approach for growing cloud function projects
How everyone starts
A fresh cloud functions project is like fresh baby hair.
No need to comb, looks cute and grows without you realizing it.
Everything goes into index.ts.
How you continue
After some months you realize the hair has grown too long. You separate it into different files, in our case controller files by feature. Those you export in index.ts.
How you end up
- Controller files with more than 1000 lines of code.
- Repeated code, and uncertainty about whether something has already been written elsewhere.
- Tendencies to create a bunch of util classes for each feature.
To cut a long story short: It’s getting messy as the project gets older.
This is also what happened to us in the uRyde tech team the first time we started a cloud functions project. That was 3 years ago, and the journey above is ours.
Also, we didn’t have tests:
- we started small,
- we kept adding this and that small piece of functionality, and
- testing locally was impossible. Our testing approach was debugging with logs: deploy, trigger the function, read the logs, find and fix bugs, repeat.
The cloud functions were originally only meant as a Firestore extension. Now they have become our backend and host far more logic than we initially thought.
So this is what the code looked like after 2.5 years: unhappy and uncombed.
We clearly needed another approach. Something simple but clearly separated.
How we solved the problem
Requirements
We finally came up with the following requirements for our code:
- The business code shouldn’t be dependent on the cloud functions framework
- The business code shouldn’t contain direct references to Firebase in general
- The code should be locally testable
- The cloud functions framework should be completely replaceable
- The Firestore database should be completely replaceable
Separate trigger - business logic - data origin
To achieve that we added 2 levels of abstraction around our business logic:
- the data origin is not important inside the business logic
- the code execution is not important inside the business logic
The cloud functions as a framework become nothing more than triggers. They are now hosted in controllers.
Each controller triggers code execution either
- by time (schedulers)
- by call (onRequest / onCall HTTP functions)
- by callback (onCreate, onUpdate, onDelete, onWrite)
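As a sketch of all three trigger kinds in one controller (assuming the firebase-functions v1 API; the service and function names here are made up for illustration, not from our codebase):

```typescript
import functions = require('firebase-functions');
import { CleanupService } from '../cleanup/cleanup_service'; // hypothetical
import { RideService } from '../ride/ride_service';          // hypothetical

let cleanupService: CleanupService;
let rideService: RideService;

// Dependencies are injected from the outside, as described below.
export function init(cleanup: CleanupService, ride: RideService) {
  cleanupService = cleanup;
  rideService = ride;
}

// by time (scheduler)
export const nightlyCleanup = functions.pubsub
  .schedule('0 3 * * *')
  .onRun(() => cleanupService.cleanup());

// by call (onCall HTTP function)
export const requestRide = functions.https
  .onCall((data, _context) => rideService.requestRide(data));

// by callback (Firestore onCreate)
export const onRideCreated = functions.firestore
  .document('rides/{rideId}')
  .onCreate((snapshot) => rideService.onRideCreated(snapshot.data()));
```

The controller stays a thin wiring layer: no business logic lives here.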
The triggered code is hosted in isolated services.
A service has to be idempotent: it should produce the same outcome no matter how often or when it is triggered, so a repeated run causes no unwanted side-effects.
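A toy sketch of what idempotent means here (all names are made up): re-running the service leaves the system in the same state.

```typescript
// Toy sketch of idempotency: completing a ride *sets* a status rather than,
// say, appending a log entry, so triggering the service again changes nothing.
interface Ride {
  id: string;
  status: string;
}

class CompleteRideService {
  constructor(private rides: Map<string, Ride>) {}

  // Idempotent: setting the status twice yields the same state as once.
  completeRide(id: string): void {
    const ride = this.rides.get(id);
    if (ride) {
      ride.status = 'completed';
    }
  }
}

const rides = new Map<string, Ride>([['r1', { id: 'r1', status: 'requested' }]]);
const service = new CompleteRideService(rides);
service.completeRide('r1');
service.completeRide('r1'); // a retriggered run changes nothing
console.log(rides.get('r1')!.status); // prints "completed"
```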
A service defines the business logic and is free of any knowledge of how it is triggered. It gets its data from a datasource, so it is also free of any knowledge of where the data comes from.
A service can contain multiple datasources but never another service.
The datasource's main advantage is that Firestore is not part of the service.
Another advantage is that the data can be mocked, which immediately gives us testable code in the service, as it is pure business logic without any framework or network dependencies.
Example
Instead of this code
```typescript
const functions = require('firebase-functions');
// tslint:disable-next-line:no-implicit-dependencies
const firestore = require('@google-cloud/firestore');

const client = new firestore.v1.FirestoreAdminClient();
const bucket = 'gs://backup-bucket';

export const dailyDatabaseBackup = functions
  .runWith({ timeoutSeconds: 540 })
  .region('europe-west3')
  .pubsub.schedule('56 2 * * *')
  .timeZone('Europe/Berlin')
  .onRun((context) => {
    const projectId = process.env.GCP_PROJECT || process.env.GCLOUD_PROJECT;
    const databaseName = client.databasePath(projectId, '(default)');

    return client.exportDocuments({
      name: databaseName,
      outputUriPrefix: bucket,
      collectionIds: []
    })
      .then(responses => {
        const response = responses[0];
        console.log(`Operation Name: ${response['name']}`);
      })
      .catch(err => {
        console.error(err);
        throw new Error('Export operation failed');
      });
  });
```
the code is now separated into a controller
```typescript
import { BackupService } from "../backup/backup_service";
import functions = require('firebase-functions');

let backupService: BackupService;

export function init(service: BackupService) {
  backupService = service;
}

export const dailyDatabaseBackup = functions
  .runWith({ timeoutSeconds: 540 })
  .region('europe-west3')
  .pubsub.schedule('56 2 * * *')
  .timeZone('Europe/Berlin')
  .onRun((_: any) => backupService.backupDatabase());
```
a service
```typescript
import { BackupDataSource } from "../backup/backup_datasource";

export interface BackupService {
  backupDatabase(): Promise<void>;
}

export class BackupServiceImpl implements BackupService {
  private backupDataSource: BackupDataSource;

  constructor(backupDataSource: BackupDataSource) {
    this.backupDataSource = backupDataSource;
  }

  async backupDatabase(): Promise<void> {
    const databaseName = this.backupDataSource.getDatabaseName();
    return this.backupDataSource.backupDatabase(databaseName);
  }
}
```
and a datasource
```typescript
export interface BackupDataSource {
  getDatabaseName(): string;
  backupDatabase(databaseName: string): Promise<void>;
}

const firestore = require('@google-cloud/firestore');
const client = new firestore.v1.FirestoreAdminClient();

export class BackupDataSourceFirestoreImpl implements BackupDataSource {
  bucket = 'gs://backup-bucket';

  getDatabaseName(): string {
    const projectId = process.env.GCP_PROJECT || process.env.GCLOUD_PROJECT;
    return client.databasePath(projectId, '(default)');
  }

  async backupDatabase(databaseName: string): Promise<void> {
    return client.exportDocuments({
      name: databaseName,
      outputUriPrefix: this.bucket,
      collectionIds: []
    })
      .then((responses: any[]) => {
        const response = responses[0];
        console.log(`Operation Name: ${response['name']}`);
      })
      .catch((err: any) => {
        console.error(err);
        throw new Error('Export operation failed');
      });
  }
}
```
As you can see, the responsibilities are clearly separated. In this specific case the service does not contain a lot of logic because the operation is simple, but it illustrates the idea and how the code is now structured.
Dependency Injection
The concept follows a simple dependency injection approach: the implementations of services are passed to the controllers from the outside.
In the main index.ts file those dependencies are prepared and injected. See the following example:
```typescript
import backupController = require('./backup/backup_controller');
import { BackupServiceImpl } from './backup/backup_service';
import { BackupDataSourceFirestoreImpl } from './backup/backup_datasource';

// Create services
const backupService = new BackupServiceImpl(new BackupDataSourceFirestoreImpl());

// Init controllers
backupController.init(backupService);

// Exports
// Backup
export const backup = backupController;
```
Using a dependency injection package does not make sense for us right now, but might be another option.
Exporting functions
Instead of exporting all functions individually, only the controllers are exported. This, as a side note, also solves a deployment issue: deploying too many functions at once can run into quota limits. Grouping the cloud functions means they can be deployed in smaller chunks, like

```
firebase deploy --only functions:backup
```

The resulting function would be named backup-dailyDatabaseBackup, and all other functions inside the backupController would be prefixed with backup- as well.
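For illustration, with a second controller (the ride controller here is hypothetical) the grouped exports in index.ts could look like this; each exported name becomes a function group prefix:

```typescript
import backupController = require('./backup/backup_controller');
import rideController = require('./ride/ride_controller'); // hypothetical second controller

// Each exported controller becomes a deployable function group:
// backup-dailyDatabaseBackup, ride-..., and so on.
export const backup = backupController;
export const ride = rideController;
```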
Shared Logic
There will be shared logic. As previously mentioned, one of the architecture’s requirements is

> A service can contain multiple datasources but never another service.
Shared logic could be added to a service without giving it a datasource, but that would violate this requirement.
Therefore, shared logic in this approach goes into so-called handlers. Handlers also have an interface and different implementations, so their functionality can be mocked for testing purposes.
An example is the NotificationHandler, which currently has a NotificationHandlerFcmImpl and is needed in multiple services to send push notifications.
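The NotificationHandler itself isn’t shown in this article, so here is a minimal sketch of how such a handler interface, a mock implementation, and a consuming service could look (the method names and the RideService are assumptions for illustration):

```typescript
// Hypothetical handler interface for shared push-notification logic.
export interface NotificationHandler {
  sendPush(userId: string, title: string, body: string): Promise<void>;
}

// A mock implementation for tests records calls instead of talking to FCM.
export class NotificationHandlerMockImpl implements NotificationHandler {
  sent: { userId: string; title: string; body: string }[] = [];

  async sendPush(userId: string, title: string, body: string): Promise<void> {
    this.sent.push({ userId, title, body });
  }
}

// A service receives the handler from the outside, just like a datasource.
export class RideService {
  constructor(private notifications: NotificationHandler) {}

  async confirmRide(userId: string): Promise<void> {
    await this.notifications.sendPush(userId, 'Ride confirmed', 'Your driver is on the way');
  }
}
```

In production, index.ts would inject a NotificationHandlerFcmImpl instead of the mock.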
I don’t know what you think - I love the naming handler
. But I might be a bit biased with my own name.
Testing
Testing is now also as easy as running `npm test` or using a run config in VS Code to debug and run single tests.
These tests can be run locally. Choose your testing framework, then go and get that great feeling that everything must be bullet-proof from now on. I just love it. We, by the way, use jest with TypeScript support, but that is another topic and completely up to you.
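As a sketch (using plain assertions rather than a test framework’s API, and restating the interfaces from the example above so it is self-contained), a local test of the backup service with a mocked datasource could look like this:

```typescript
// Minimal copies of the interfaces from the backup example above.
interface BackupDataSource {
  getDatabaseName(): string;
  backupDatabase(databaseName: string): Promise<void>;
}

class BackupServiceImpl {
  constructor(private backupDataSource: BackupDataSource) {}

  async backupDatabase(): Promise<void> {
    const databaseName = this.backupDataSource.getDatabaseName();
    return this.backupDataSource.backupDatabase(databaseName);
  }
}

// Mock datasource: no Firestore, no network, just records what was requested.
class BackupDataSourceMockImpl implements BackupDataSource {
  backedUp: string[] = [];

  getDatabaseName(): string {
    return 'projects/test/databases/(default)';
  }

  async backupDatabase(databaseName: string): Promise<void> {
    this.backedUp.push(databaseName);
  }
}

async function testBackupUsesTheDatasourceName() {
  const mock = new BackupDataSourceMockImpl();
  await new BackupServiceImpl(mock).backupDatabase();
  console.assert(mock.backedUp[0] === 'projects/test/databases/(default)');
}

testBackupUsesTheDatasourceName();
```

Because the service is pure business logic, nothing here needs an emulator or a deployment.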
Now we got it ordered and can enjoy coding again — and sleep well at night.
Summary
Our code is separated into
- controllers → trigger business logic via cloud functions
- services → contain testable business logic
- handlers → contain shared functionality used in multiple services
- datasources → define where the data comes from and how it is stored (CRUD operations)
Our services are now tested and tests can be run locally and within our CI pipeline.
I hope you like this approach. In one year I’ll get back to you and tell you why it is not as perfect as we thought ;-).