A Toolkit to Speed Up and Optimise Firebase Cloud Functions – Part 1

George · Published in The Startup · May 9, 2020 · 21 min read

Reduce Cold-Start Time Using better-firebase-functions — How I built this package

This is the first post in a series that follows my journey publishing an open-source package on npm called better-firebase-functions that allows app developers to speed up and organise their backend code. These code patterns are applicable to small and big apps alike. If you just want a guide on how to use the tool and optimise your functions deployment, read this instead.

Screenshot of the better-firebase-functions npm repo

I initially started working on this project a few months ago. A wrist break due to an accidental fall during Jiu-Jitsu sadly stalled my progress while I was in a cast (there’s only so much typing you can get done with one hand) but I’m working on it again now, and decided to document the process along the way, in case it helps others, or even lands me a good job :)

I started writing this story after I had already published version 3 of my repo. It's not really version 3: versions 1 through 3 are the same code. The number jumped to 3 because of some difficulties I was having implementing semantic-release in my TravisCI setup.

This post starts with me working on the next version, with some exciting new features. So what does the existing functionality achieve? How can we improve it?
A quote from the documentation:

1. Automatically dynamically export all of your function triggers found in the specified source code directory. The function scans your specified directory and exports all of your function triggers for you.

2. Function triggers are exported in a special way that increases performance and reduces the cold-boot / cold-start time of your cloud functions.

I will probably make a separate post explaining the best way to organise your source code directories, to avoid having to keep track of cumbersome barrel files and imports/exports, as well as automating the lazy-loading of certain dependencies. For now, here’s a good guide. Note: The author of that article published his own similar repo after I published mine, feel free to check out both — however, I would still recommend better-firebase-functions as the faster, smaller, more optimised, more featureful package you should use.

Here is another good resource with the OG himself, Doug Stevenson:

The important thing to understand is that our two main concerns end up having a shared solution.
These two concerns:

  • Organise Source Code Files in Directories, easily keep track and export function triggers
  • Optimise memory usage and cold-start time of each function trigger

End up having a “combined” solution:

  • Use a small package/code-snippet to automatically traverse source code directories, find exported function triggers, re-export them for consumption by Firebase Cloud Functions and do it in a way that is optimised and only loads dependencies that particular function needs!

The reason these are shared concerns is that applying the optimisations manually to each exported function would be too cumbersome for practical use.

Understanding Optimisation

The way Firebase Functions works is that it reads your package.json to figure out which dependencies it needs to install on the server, which node version to run, and also which file is the entry point of your app.

Your entry point will have a number of named exports. The documentation instructs developers to use the firebase-functions package, which provides wrapper methods around your function triggers; these wrapped triggers then become the named exports, as follows:

import * as functions from 'firebase-functions'
export const functionOne = functions.auth.user().onCreate(YOUR_FUNCTION)
export const functionTwo = functions.auth.user().onDelete(YOUR_FUNCTION)

This means that from the vantage point of the entry point file, you end up with an exports object that contains the function triggers as its properties. It looks something like this:

exports: { functionOne, functionTwo }

When you run firebase deploy --only functions it will deploy both of these functions.

Now it’s also possible to have nested properties, and this allows for faster / selective deployment. For example:

exports: {
  functionOne,
  functionTwo,
  auth: {
    authOne,
    authTwo,
  }
}

So what we have here is a function group auth. The function group is denoted by a nested export. Auth itself is not a function, but an object which holds two function triggers, authOne and authTwo. I will explain this more in the next paragraph, but for now, you just need to understand that it’s possible to deploy only the auth group with the following command:

firebase deploy --only functions:auth
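For illustration, a hand-written entry point that produces this nested shape might look like the following sketch (the imported trigger modules and handlers are hypothetical):

import * as functions from 'firebase-functions';
// authOne and authTwo are hypothetical triggers defined elsewhere
import { authOne, authTwo } from './auth';

export const functionOne = functions.auth.user().onCreate((user) => console.log(user.uid));
export const functionTwo = functions.auth.user().onDelete((user) => console.log(user.uid));

// A nested export becomes the function group "auth",
// deployable on its own with: firebase deploy --only functions:auth
export const auth = { authOne, authTwo };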

On deployment, Firebase Cloud Functions loads your entire codebase once for each exported function. It reads the exports object emitted from your entry point file and generates your function instances, and the keys of that exports object become the names of your deployed functions. The names matter for callable/HTTP functions because you need them to invoke the function. This is how the variable name in export const callableHttp = ends up determining the name you use to call the function from your frontend.
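As a rough illustration (assuming the v1 functions SDK on the backend and the v8 web SDK on the client):

import * as functions from 'firebase-functions';

// The export name 'callableHttp' is the name the client has to use,
// e.g. firebase.functions().httpsCallable('callableHttp') in the v8 web SDK.
export const callableHttp = functions.https.onCall((data) => ({ echo: data }));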

So, what happens to a function name when it is in a submodule? Firebase automatically adds a dash to the name. So for example, the name of the first function in the example above is functionOne, whereas the auth group functions would be named auth-authOne and auth-authTwo respectively.

Knowing how the function names correlate to the structure of your exports object is important when optimising the deployment/cold-start times of the functions.

But where’s the problem in all this?

There are two main problems that appear as your project grows. As stated, every function instance loads everything imported into the entry point's global scope, regardless of which single trigger it actually serves. As your project grows, you will likely import more and more external libraries and classes, and instantiate more objects in memory. Each function will probably have its own specific list of dependencies, and when you have a myriad of functions in one file, the global scope becomes polluted. For example:

import * as admin from 'firebase-admin';
import * as functions from 'firebase-functions';
import * as funcOneDep from 'heavy-library';
import * as funcTwoDep from 'heavy-library';

export const funcOne = functions.auth.user().onCreate(() => {
  // This instance loads both funcOneDep and funcTwoDep into memory
  funcOneDep.doStuff();
});

export const funcTwo = functions.auth.user().onCreate(() => {
  // This instance loads both funcOneDep and funcTwoDep into memory
  funcTwoDep.doStuff();
});

Initially, you may try to mitigate this via lazy-loading:

import * as functions from 'firebase-functions';

let funcTwoDep;
let funcOneDep;

export const funcTwo = functions.auth.user().onCreate(() => {
  // This will only load heavy-library when the function is triggered
  funcTwoDep = funcTwoDep ?? require('heavy-library');
  funcTwoDep.doStuff();
});

But again, as your project grows, this becomes more cumbersome and complicated. You have to keep track of multiple `let` variables, remember to lazy-load every dependency that an individual function might need, and make sure NOT to lazy-load dependencies that ALL of your functions need, because variables declared outside the function scope are kept in memory between invocations of that function instance.

The solution

We prevent the global scope from becoming polluted and slowing down the cold-boot time of every function by ensuring that each function is contained within its own file (so each function is a submodule). The special way our solution works (shown in the next section) then ensures that each function instance only loads the dependencies that particular function needs. More on this later.

The obvious immediate benefit is that this also allows us to organise our code better along our source code directories. Each function has its own file, rather than just using one monolithic index.ts file with all of our functions.

For example, you can do this:

// src/auth/onCreate.ts
import * as functions from 'firebase-functions';
import * as dep1 from 'heavy-library';

// No need to lazy load!
export default functions.auth.user().onCreate(() => {
  dep1.doStuff(); // consume all imports freely
});

The global scope of the previous function will be automatically separated from the global scope of the next function. The dependencies WILL NOT be loaded from the function above when executing the second function below:

// src/auth/onDelete.ts
import * as functions from 'firebase-functions';
import * as dep2 from 'heavy-library';

// No need to lazy load!
export default functions.auth.user().onDelete(() => {
  dep2.doStuff(); // consume all imports freely
});

This means that your global scope for each function is essentially “automagically managed” for you. You simply import exactly what each function needs at the top of its file, and this will never impact other function triggers. These imports are kept in-memory between function invocations (when not cold-booting).

The second benefit is easier file/directory management. Since this function dynamically imports your function triggers, it’s also capable of traversing your source code directory and automatically finding, naming and exporting your function triggers.

This is what the (old) solution looks like:

// entry point index.js/ts
import exportCloudFunctions from 'better-firebase-functions';

exportCloudFunctions(__dirname, __filename, exports, './', './**/*.js');

And that’s it! All of your function triggers will be automatically exported on deployment.

The current, ~okay~ implementation

So let’s take our first look at the existing v3.0.0 code, and understand what it does algorithmically. This snippet is taken (altered to be readable) from versions 1–3:

import * as glob from 'glob';
import * as _ from 'lodash';
import { resolve } from 'path';
// funcNameFromRelPath is defined elsewhere in the repo (see below)

export const exportCloudFunctions = (
  __dirname: string,
  __filename: string,
  exports: any,
  dir?: string,
  globPattern?: string
) => {
  const funcDir = dir || './';
  const pat = globPattern || './**/*.js';
  const files = glob.sync(pat, { cwd: resolve(__dirname, funcDir) });
  for (const file of files) {
    const absPath = resolve(__dirname, funcDir, file);
    // Prevent exporting self
    if (absPath.slice(0, -2) === __filename.slice(0, -2)) continue;
    const absFuncDir = resolve(__dirname, funcDir);
    const relPath = absPath.substr(absFuncDir.length + 1);
    const funcName = funcNameFromRelPath(relPath);
    const propPath = funcName.replace(/-/g, '.');
    if (!process.env.FUNCTION_NAME
      || process.env.FUNCTION_NAME === funcName
    ) {
      const module = require(resolve(__dirname, funcDir, relPath));
      if (!module.default) continue;
      _.set(exports, propPath, module.default);
    }
  }
};

If you want to see how funcNameFromRelPath works, you can check out the repo.
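To give a rough idea of what such a mapping does, here is a simplified sketch (not the repo's actual implementation):

// e.g. 'auth/on-create.js' -> 'auth-onCreate'
const funcNameFromRelPath = (relPath: string): string =>
  relPath
    .replace(/\.js$/, '')               // drop the extension
    .split('/')                         // each directory becomes a group
    .map((segment) =>
      // kebab-case segments become camelCase
      segment.replace(/-([a-z])/g, (_match, char) => char.toUpperCase()))
    .join('-');                         // groups are joined with dashes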

So, what’s going on here?

The first thing to understand is that this function is designed to be used in the main entry point file of your cloud functions package. So, your index.ts/main.ts, which is consumed by Firebase Functions, will call this function.

This means that it’s precisely the exports object on this entry point file that we need to manage programmatically. If we are packaging our code in a redistributable repository, it will most likely be running out of a node_modules directory in someone else’s codebase.

Which is why we need to pass in the following parameters from the entry point module:

export const exportCloudFunctions = (__dirname: string, __filename: string,
  exports: any, dir?: string, globPattern?: string) => { ...

Taking each parameter in turn:

  • exports (also known as module.exports): so that the function can set the correct exports on the entry point file.
  • __dirname: so the function knows which root directory to start searching for function trigger modules in.
  • __filename: so the function knows what the main entry point file is called, and can skip re-exporting itself.
  • dir: an optional string telling the function whether the source code search should take place in a subdirectory relative to the main entry point file.
  • globPattern: an optional string providing glob-match capabilities, in case you only want to match certain files, such as **/*.function.js.

Those are the function parameters.

const funcDir = dir || './';
const pat = globPattern || './**/*.js';
const files = glob.sync(pat, {cwd: resolve(__dirname, funcDir)});

Then we do some work setting defaults. Remember, you want to match *.js rather than *.ts because this code will be running against your compiled output, so even if you’re using TypeScript, you need to match js.
The last line uses the glob package to search the chosen directory for js files, which should contain your function triggers.

We then have this mess here:

for (const file of files) {
  const absPath = resolve(__dirname, funcDir, file);
  // Prevent exporting self
  if (absPath.slice(0, -2) === __filename.slice(0, -2)) continue;
  const absFuncDir = resolve(__dirname, funcDir);
  const relPath = absPath.substr(absFuncDir.length + 1);
  const funcName = funcNameFromRelPath(relPath);
  const propPath = funcName.replace(/-/g, '.');
  ...

The loop iterates over all of the matched files. For each file, it resolves the absolute path, and if that path turns out to be the entry point file itself, the file is skipped.

If a file is in a subdirectory, two things need to be derived. First, the function name: it is based on the file path, with each directory acting as a nested group or submodule. Second, the property path supplied to lodash.set, which is dot-separated, so propPath is generated from the function name.
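A quick worked example of that derivation (the file path is hypothetical):

// relPath:  'auth/onCreate.js'
// funcName: 'auth-onCreate'   (the directory becomes a dash-separated group)
// propPath: 'auth.onCreate'   (dashes become dots for lodash.set)
_.set(exports, 'auth.onCreate', module.default);
// => exports.auth.onCreate now holds the function trigger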

Now it gets interesting.

...
    if (!process.env.FUNCTION_NAME
      || process.env.FUNCTION_NAME === funcName
    ) {
      const module = require(resolve(__dirname, funcDir, relPath));
      if (!module.default) continue;
      _.set(exports, propPath, module.default);
    }
  }
};

This is the money-shot right here.

  • If process.env.FUNCTION_NAME is undefined, this means we are deploying functions, in which case, we want to attach all function triggers to their respective location in the exports object, in order to tell Firebase about all of our functions during deployment and function instance provisioning.
  • If process.env.FUNCTION_NAME is set, it will be set to the name of the currently running function instance in a cold-start scenario. This means we can be smart, and avoid require'ing any of the submodules that we do not need!
  • In the above case, for that function invocation, only the required module is added to the exports. This is why it’s important to accurately and consistently generate function names for our detected function triggers.

In the first case, we read our source code and tell Firebase what our functions are called, by populating the exports object passed down to us from the entry point module.

When those functions get executed, Firebase runs the same code, except this time, telling us it wants to execute a function trigger by supplying us the function name that we generated in the beginning. If the function name matches the generated one, that function is attached to the exports object and returned. Firebase is then able to execute just the called function. None of the other submodules are loaded — only the function. Which is how we avoid having to lazy load dependencies.

It may still make sense to lazy-load a dependency for your function, if that function may not need that dependency on every single invocation. However, dependencies from a separate file/module will never be loaded unnecessarily.
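If you do still want to lazy-load an occasionally-used dependency inside a single-trigger module, it can look something like this sketch (the guard condition is hypothetical):

// src/auth/onCreate.ts
import * as functions from 'firebase-functions';

export default functions.auth.user().onCreate((user) => {
  if (!user.email) return null;             // hypothetical guard: skip the heavy path
  const heavy = require('heavy-library');   // loaded only on invocations that need it
  return heavy.doStuff(user);
});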

You’ll notice the use of module.default. This simply means that in each file, we are not using a named export. There is no need, since each module/file will only have one export and can use the export default ... syntax. The default export can be dynamically accessed via the ['default'] property on a module object.

So how can we make this better for v4.0.0?

So far, I’ve already done a bit of work cleaning up the code. Here is the updated version:

v4.0.0

In the above image, I’ve applied a number of optimisations: fewer calls to resolve, fewer lines overall, and fewer arguments needed to run the export function:

import { exportFunctions } from 'better-firebase-functions'
exportFunctions({__filename, exports})

I’ve also made it possible for developers to supply their own funcNameFromRelPath() function. I’ve used object destructuring in the function parameters to provide defaults for arguments, whilst simultaneously converting the function’s list of input parameters into a single options object, making the function more convenient to use. The total number of required properties on the input object has been reduced, and each property has its own IntelliSense to aid the developer.
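As a sketch of what supplying a custom name function could look like (the option names follow the test factory shown later in this post; the exact published API may differ):

import { exportFunctions } from 'better-firebase-functions';

exportFunctions({
  __filename,
  exports,
  searchGlob: '**/*.func.js',
  // e.g. 'auth/on-create.func.js' -> 'auth-onCreate'
  funcNameFromRelPath: (relPath: string) =>
    relPath
      .replace(/\.func\.js$/, '')
      .split('/')
      .map((seg) => seg.replace(/-([a-z])/g, (_m, c) => c.toUpperCase()))
      .join('-'),
});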

Could we make it any better? Let’s outline the algorithm:

  • Get all our input settings from function arguments
  • Resolve our working directory

So far, so good.

  • Iterate through all of the detected matched files, doing the following steps:
  • — get file path, then derive corresponding function name
  • — if the currently running function (process.env.FUNCTION_NAME) matches our derived name, load that module and attach it to the correct location on the export object.
  • — if process.env.FUNCTION_NAME is undefined, add all detected modules to exports.

Problems

I originally thought of a few potential improvements we could make. Here is my thought process.

Let’s say I’m allowing developers to optionally define their own method for generating function names. How can I be sure that they’re not going to break the entire process? For example, they need to follow the convention where submodules are named with a dash -. Should I write a function that tests funcNameFromRelPath at runtime?

It also seems wasteful to always iterate through every single matched file, even on function invocations. We’re essentially searching the filesystem, and then looping through the results to find the correct function.

Although, on second thought, it’s not that wasteful because this process only happens on cold boot. Once a function instance is warm, the dependencies, and the module singleton are kept in memory.

This means that everything outside of that single function scope is kept in memory between function invocations when the function instance is warm.

It may be possible, for example, to read the current process.env.FUNCTION_NAME, infer the correct file path for that module directly, and require it. That would avoid looping through the results and traversing the directories with a glob search, removing the need for disk access and an O(n) computation, where n is the total number of modules.

In order to achieve this, we would need to define two functions:

  • funcNameFromRelPath()
  • relPathFromFuncName()

One for deployment, and one for function invocation. If a consumer of this package wanted to customise this behaviour, they would also have to pass in both functions, and we might have to find a way to check that the functions map each other’s inputs and outputs 1:1 to prevent errors, as sketched below.
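A rough sketch of that pair and the 1:1 check (these are simplified placeholders that ignore the camel-casing a real implementation would need):

// deployment: path -> name
const funcNameFromRelPath = (relPath: string): string =>
  relPath.replace(/\.js$/, '').split('/').join('-');

// invocation: name -> path
const relPathFromFuncName = (funcName: string): string =>
  `${funcName.split('-').join('/')}.js`;

// a cheap runtime sanity check that the two functions invert each other
const mappingIsConsistent = (relPath: string): boolean =>
  relPathFromFuncName(funcNameFromRelPath(relPath)) === relPath;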

How many milliseconds could this save? And it would only be saving this on cold-start, since the module will be kept in memory in between invocations.

Would it matter if you had a project with hundreds of files and functions? And your cloud functions provided SSR to a web page in production? In these cases, even shaving off 50–200ms could make a difference.

Another thing I wanted to explore was allowing developers to define their own method of sourcing function triggers from files.

Is it possible to have more than one function trigger export per file? If this was the case, we would lose some degree of optimisation.

The dependencies of all functions defined within that module scope would be loaded into memory instead of just one function’s, since the functions share the same module scope. Could there be any benefit for a developer in naming their functions as named exports, instead of the original method of using export default in each module and naming the function after the pathname? Is there any way I could allow developers to customise this behaviour?

Probably not, given the risk of losing speed. Any approach that doesn't keep exactly one trigger per submodule will most likely lose speed.

Furthermore, naming your API layer after its function triggers is an ideal way of organising your source code.

One of the key benefits and insights of better-firebase-functions was the enforcement of a single module scope for each function trigger, which can then call as many dependencies as it needs in its own module, without affecting the performance of any other function trigger.

Testing

I want to show how I went about writing tests to ensure correct functionality from this particular function.

Jest testing the exportFunctions() function

I created a folder called __mocks__ in which I scaffolded out a “pretend” functions directory, complete with modules that had a default export of… 5. This isn’t the best way to test, nor is this the correct use of a __mocks__ directory, however, it did the job. I was able to reliably test most of the features of the module.

A factory function runs our code and returns the “export object” that would otherwise be consumed by Firebase Functions. The export object that is passed in is simply an empty object and the directories it scans are the local ones I made above. The factory function also allows us to pass in any custom configuration values we might need to use in each test case.

Timing

I realised that if we’re going to try to improve the performance of the existing function, we should measure it. So I added the ability to enable/disable a logger by setting enableLogger = true in the config object, which also provides useful performance logging to anyone who uses the repo.
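As a sketch of the sort of measurement involved (enableLogger is the real config option mentioned above; the helper itself is hypothetical):

const timed = <T>(label: string, enableLogger: boolean, fn: () => T): T => {
  const start = process.hrtime.bigint();
  const result = fn();
  if (enableLogger) {
    const ms = Number(process.hrtime.bigint() - start) / 1e6;
    console.log(`[better-firebase-functions] ${label} took ${ms.toFixed(2)}ms`);
  }
  return result;
};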

Hard Lessons

Even with a large number of dummy exports, the time we’d be saving with further optimisations was negligible:

Negligible time to loop against all files

Although it isn’t strictly necessary to loop through every file until one matches the current function name (as opposed to just loading the module directly based on the function name), there is no point optimising this any further: in the very worst case, a cold start would lose less than 50ms, and only on the cold start.

You’ll notice that in subsequent tests, the duration is even lower. This may be due to runtime compiler optimisations or disk-read-caching on my development machine. In any case, I doubt that my machine would be faster than the Google servers that Firebase runs on.

More Improvements

There are a few more improvements I decided to make after a few days of hard work on the package. I wanted to make the code more readable and maintainable by rewriting it in a more functional style, and also make the tests more robust. Ideally, the tests should generate the files/folders themselves, and only then test against them. This would help keep my repo cleaner for when I split it up into multiple repositories (coming in the next part of this series). I also decided to build in a way for developers to define their own method of extracting the function trigger from each module.

Testing

Since I’m going to be programmatically generating the mock functions directory, I needed a way to figure out where to put the directory.
Where would we put the files? In a temp folder? In a folder relative to the tests themselves? That’s probably a bad idea since the tests would become coupled with their location on disk, which could cause issues depending on where the tests are being run and what file permissions look like on that machine.

After a quick Google, the tmp package provides a clean way to do this, and 15m downloads per week can’t be wrong, right? I would also end up needing the fs-extra package to be able to write needed directories on the fly. Lastly, import { resolve } from 'path'.

Now, to programmatically generate the directory we’ll be using for testing. First, we need the path of a temporary directory:

const {name: tempFuncDir} = tmp.dirSync()

Done. Now to generate the function files:

import * as tmp from 'tmp';
import * as fs from 'fs-extra';
import { resolve } from 'path';

// Module scope - function takes a path, an array of files and a single
// string to write to each file
function generateTestDir(
  dirPath: string,
  filePaths: string[],
  fileContents: string
) {
  const fileBuffer = Buffer.from(fileContents);
  filePaths.forEach((path) =>
    fs.outputFileSync(resolve(dirPath, path), fileBuffer));
}

// Inside the Jest test suite
const { name: tempFuncDir, removeCallback } = tmp.dirSync();
const randOutput = Math.floor(Math.random() * 10);
const testFiles = [
  'sample.func.ts',
  'camel-case-func.func.ts',
  './empty-folder/',
  'folder/new.func.ts',
  'folder/not-a-func.ts',
];

beforeAll(() => {
  generateTestDir(tempFuncDir, testFiles, `export default ${randOutput};`);
});

Now, I need to update my tests to actually use these programmatically generated modules. I can test against the file array testFiles and the corresponding output randOutput. The perfect way to do it would be to make each test fully independent, but I’m short on time. Ideally, the tests wouldn’t have to depend on any external state. They would set up everything needed for the tests themselves. Generally:

  • Tests should not be affected by which order they are run in
  • Tests should be idempotent and not depend on external state
  • Using lifecycle hooks (beforeAll, afterAll) is sometimes a code smell which indicates that you are sharing state between tests
  • You should add tests whenever you fix a bug to prevent regressions, and whenever you add a new feature

When following the above guidelines, you’ll often set up factories to generate the state you need independently for each test. This was the factory I initially used for my tests:

const exportTestFactory = (configObj?: any) => {
  const exportObj = {};
  bff.exportFunctions({
    __dirname: tempFuncDir,
    __filename,
    exports: exportObj,
    functionDirectoryPath: './',
    searchGlob: '**/*.func.ts',
    ...configObj,
  });
  return exportObj;
};

The benefit of doing this becomes very obvious in this situation. Since I had used a factory, now it will be much easier to update all of my tests — and any customisations that the tests needed are still applied via the configObj. The spread operator “combines” the two objects. I was able to improve the factory to this beautiful shorthand:

const exportTestFactory = (configObj?: any) =>
  bff.exportFunctions({
    __dirname: tempFuncDir,
    __filename: `${tempFuncDir}/pretend-index.ts`,
    exports: {},
    searchGlob: '**/*.func.ts',
    ...configObj,
  });

And finally, I updated the tests themselves. Here’s an example:

it('should export from the default export of each submodule', () => {
  expect(exportTestFactory())
    .toHaveProperty(
      filePathToPropertyPath(testFiles[1]),
      randOutput,
    );
});

Ideally, the test would generate the file structure per test to guarantee total independence from other tests, and therefore include all of the information for the test case within the scope of the test.
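A sketch of what that per-test setup could look like, reusing the tmp and generateTestDir helpers from above:

const makeTestCase = (files: string[], contents: string) => {
  const { name: dir, removeCallback } = tmp.dirSync({ unsafeCleanup: true });
  generateTestDir(dir, files, contents);
  return { dir, cleanup: removeCallback };
};

// inside a single test:
// const { dir, cleanup } = makeTestCase(['a.func.ts'], 'export default 1;');
// ...run exportFunctions against dir, assert, then cleanup();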

Bundling & Performance

It occurred to me that both the function trigger modules and exportFunctions() itself could be bundled and minified to improve performance. The goal was to eliminate as many require() calls as possible, yet exportFunctions() introduced its own dependencies, such as glob and lodash, which would be loaded on every single cold start.

I used @zeit/ncc, a tool by Zeit which provides zero-configuration module bundling, using Webpack as its underlying bundler. I also plan to look into rollup and uglify-js in future.

The world of bundlers is extensive and complex. I have begun to understand why DevOps engineering is an actual job. Simply managing the code, the repo, the build process, bundlers, Git hooks, CI systems, testing… is a lot of work, and requires specific knowledge of JS devtools. The beauty of JavaScript, however, is that you can read your bundled code and understand what’s going on.

Bundlers were initially designed for the web: to load JS for the browser in a more efficient way. This made sense, as each time you loaded a webpage, the browser would have to request, download, parse and execute your code, often the entire codebase. Bundlers take your source code, which should be organised logically for an optimal programming experience, and compress, minify and pack it (hence the ‘pack’ in ‘Webpack’) for production use in the browser. This reduces the overall size of the bundle, and also reduces the number of requests a browser has to make to download your JS (less relevant with HTTP/2). The goal, ultimately, was to reduce the time between a user requesting a page and the code running and making the page interactive. Web pages with JavaScript are essentially just on-demand code execution.

On-demand code execution. Hmm, sounds familiar…

See, with traditional Node apps, you start the server with a command like node ./index.js and off you go. Node reads your code, require()s all of your dependencies, and loads them into memory. It then serves the users of your application, usually with fast response times. Optimising Node’s startup time made little sense, since that only happened when you deployed your code to production.

Firebase Functions, and generally lambda type hosted-cloud-function platforms, on the other hand, all share a similarity with a web browser. The function instances go to sleep, so when a request is made, the server is quickly booted up, your code read, dependencies loaded and a response is sent back to the user. Kind of like executing JS on-demand by loading a website, but on the backend. Kind of like having your backend function be a webpage that’s loading from “behind”. So it makes sense to optimise backend functions much like you would optimise and bundle JS for the browser. Not all apps, startups or services have enough users or traffic to keep their instances running 24/7… and an 8 second response time for a backend function is unacceptable.

And so, we use the same strategies to bundle and minify your backend functions code (and reduce or bundle dependencies) as we would for frontend code.

Backend JS/TS code

Repo:
src/
- index.ts
- export-functions.ts
- default.ts
node_modules/

Command:
ncc build src/index.ts -m

Output:
lib/index.js

After bundling, the whole repo now has 0 dependencies and has seen a significant performance improvement. This means that exportFunctions() is as lightweight as possible. But — it should also be possible, now that the function modules are separated, to bundle each one separately to get each function instance to startup as quickly as possible!

And so, I decided to write a new tool, which you’ll read about in the next part of this series.

I included type definitions with the bundle, to provide TS support.

I also had to use eval('require') as a hack to force Webpack to ignore the particular require call BFF uses to load your function triggers. Webpack, via static analysis, reads and replaces all of your require statements with its own optimised module loader at runtime, because unlike your source code, a Webpack bundle usually includes the dependencies (e.g. node_modules) within the bundle, so require calls have to be re-routed to the bundle’s own included modules. It does this by generating a module dependency tree from the entry point of your application. This is exactly what we wanted for exportFunctions()’s own dependencies, but there was no way Webpack could do it for the actual function triggers that BFF loads at runtime. Obviously, the function triggers are not known at the time of bundling BFF (hence a dynamic require which can’t be followed via static analysis).
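A minimal sketch of that trick (not the exact bff source):

import { resolve } from 'path';

// eval('require') hands back Node's real require at runtime, so Webpack's
// static analysis neither sees nor rewrites this call when bundling.
const dynamicRequire: NodeRequire = eval('require');

const loadFunctionTrigger = (funcDir: string, relPath: string) =>
  dynamicRequire(resolve(funcDir, relPath));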

So the overall strategy is to bundle exportFunctions() with its own minimal dependencies, reserving its one and only require call for loading your function triggers at runtime. But I realised that, in the world of bundlers and Webpack, there’s a lot more we can achieve. Each function trigger is also technically an “entry point” into the backend application via that particular trigger. Rather than manually exporting all of your function triggers and creating one big bundle, we use exportFunctions() to load them dynamically and ensure the triggers themselves are each bundled independently as their own entry points. Since we don’t care about the overall size of the code (it’s not being downloaded), but we do care about the speed it runs at (it also takes time to parse large JS files, so size does matter after all), it makes sense to bundle each function separately and include each function’s dependencies in its own bundle, rather than trying to share chunks or reuse shared dependencies.

This would effectively reduce the number of require calls to just one.

In the next part, I’ll show you how I created a fully optimised solution in a NRWL repo, using better-firebase-functions as part of the build process and configuring Webpack to bundle and minify each function automatically.

This concludes this post, as I’m happy with the exportFunctions() function at this stage. If anyone finds any reason to further improve or optimise it, just leave a comment.

Part 2 — coming soon! (Webpack, full bundle optimisation, Service layer classes and a custom Firestore ORM!)
