Write, Launch, Forget: Building a Chatbot on Cloud Functions
My first brush with the serverless architecture
Why Another Chatbot?
Chatbots make an excellent programming exercise because of their direct, cut-to-the-chase quality.
To those of us who picked up the trade in the MS-DOS era, chatbots are oddly reminiscent of command prompts. Forget boilerplate MFC code and HTML scaffolds — your user’s input and your business logic are once again separated by a mere return key.
This time, you don’t read from STDIN, but from a string argument that arrives after 92 layers of abstraction, transportation, and transcoding protocols, sent from half a world away. The promise of chatbot protocols is that you will never need to worry about those details; it shall be the duty of the platforms (Messenger, Telegram, LINE@, etc.) to connect their users’ mobile apps to your Internet server.
The serverless architecture is simply a logical extension of this promise. Now you only write a function — the atomic unit of code execution — and you no longer have to worry about how it runs.
For this exercise, I have chosen Google Cloud Functions because I have been meaning to try it out. It should apply equally well to AWS Lambda.
Google Cloud Functions takes care of provisioning the underlying machines, scheduling your code, and routing incoming requests to it. It’s safe to assume that your function will scale up and down infinitely without your involvement. Speaking as someone who likes to whip up side projects but hates saving them from the inevitable platform rot three years down the road, this is almost too good to be true.
If you haven’t run it in a while, or you just deployed a new version, it will suffer a slow cold start. Understandable, because the app package needs time to propagate throughout the farm, and the instance needs time to spin up. If your app gets a lot of traffic, this should not be an issue.
Oh, and you have to write in Node.js.
Node.js is a terrific — and by terrific I mean terrible — choice for serverless programming, because the language and its entire ecosystem were designed for an asynchronous, non-blocking world. Which, incidentally, is the last property you could give a rat’s ass about, in a production environment where nothing blocks on one another and thread and process are concepts entirely missing from the vocabulary.
Setting It Up
Getting started with Google Cloud Functions is refreshingly straightforward. The interface has a slick Material Design flavor and is liberally peppered with hints, so you can usually get by without consulting their well-done documentation.
GCF expects your dispatch function to implement the Express API, and that’s it. It doesn’t pass judgement on your choice of modules or database engine. And since the bare minimum you need to get started with the Facebook Messenger API is answering its challenge correctly, I got the logs to start flowing with just the following code:
I switch back to the logs console and start spamming the test Facebook page. The log starts to fill up with messages, and it will continue to do so long after I’ve forgotten about this little exercise.
All in all it takes me about two hours to set this up, most of which goes into research.
Scaling It Out
Eventually my one-file snippet grows into a proper repository with node_modules, a .gitignore, and whatnot. The repository is also hosted on GCP. What I especially enjoy is not being forced into making a new repository from the start, but being allowed to do so at my own pace.
It isn’t until I’m well into my third refactoring that the training wheels wear down, and I start to fumble over my code organization. Do I run many functions from one repository or spread them out? Do I keep the views in the same file or as separate files in one folder, or is it even a thing here?
It feels like one place where a framework could really come in and suggest a general best practice. But frameworks emerge when there’s a consensus about how a large number of apps ought to be built, and it’s possible there are not that many of them out there yet. There is one established framework called serverless, and I plan to study their approach next, although they seem light on documentation.
People I know who adopt Firebase into their arsenal seem to swear by it. Next to them is a larger group of people who swear to never have anything to do with it for fear of creating a stack whose core business functions are entirely dependent on one company’s whim.
To me, the serverless architecture represents a very sensible compromise between both worlds.
I’ll be regularly sharing approaches and processes that I come across in my tenure as the CTO of a nimble startup. If this interests you, Follow and/or say hi.