Slack Bot in Scala and 12 Ways to Run It

A Story about Slack, Scala, Javascript, AWS Lambda and Fly.io

Voytek Pituła
17 min read · Aug 17, 2023

Okay, 12 is clickbait. In reality, we will try only 6 ways to run it. But first, a bit of a story…

TL;DR: The article below is an inflated, overdone, and unnecessarily long version of this README with a completely superfluous mix-in of my personal opinions. If you’re interested in code and snippets, then go there directly. If you prefer to enjoy my caricature of writing, stay here.

Table of Contents

This article is pretty long, so here is a bit of a helper:

  • A Perfect Fit!
  • A Small Disclaimer
  • A Gentle Start
  • Local First
  • Indecent Exposure
  • Debugging: The Good, the Bad, and the Ugly
  • Enter Scala.JS
  • La La Lambda
  • Deploy All The Things!
  • Lambda + Slack, Finally
  • Do We Really Need That JS?
  • JVM Lambda? Be My Guest!
  • Do we really need that Lambda?
  • In the Pursuit of Simplicity
  • Final Thoughts
  • Further Work
  • Summary

A Perfect Fit!

I wanted to write a very simple Slack bot. And what is a Slack bot? It’s an over-engineered webhook handler: you configure a URL, and Slack calls it under certain circumstances. That’s it.

The particular bot I wanted to write wouldn’t need any database or state in general. Moreover, it would be called very sparingly, at most a few times a day. Doesn’t it sound like a perfect fit for a lambda?

The problem is that I couldn’t find a good and comprehensive example of how to do that.

  • There are plenty of resources explaining how to write a Slack bot with a lambda.
  • There are also many resources explaining how to write a lambda in Scala.

But, it turns out, this isn’t enough to write such a bot in a single afternoon. Especially if you haven’t ventured into those areas before. Hence, at some point, I created scala-slack-bot to help anyone else interested in solving that particular problem.

This repository contains a minimal example of a Slack bot written in Scala and deployed in various ways. Some of these methods indeed involve AWS Lambda. But during this journey, I also realized that the fit wasn’t as perfect as I initially believed.

The story below is about Scala, Slack, and Lambdas, and how all these things (don’t) work together. It can be treated as a tutorial for writing a Slack bot, writing a lambda, or both. It comes with a strong focus on the deployment of those.

A Small Disclaimer

I love UIs for many things, but I absolutely hate them for anything related to code deployments. That’s why, throughout this article, I’ll lean towards using scripts and APIs unless UIs are absolutely necessary.

A Gentle Start

What’s our goal? A Slack bot, written in Scala, responding to the /hello command.

To do that, we need two things: a Slack Application and a URL to handle the logic.

To simplify the process, we’ll use webhook.site at first. Visit the site, and you’ll be given a unique URL. Copy it and export it to a bash variable. This site will capture and display all the HTTP calls made to it.

export WEBHOOK_URL='https://webhook.site/204044be-020b-4e93-8064-a8be9682ab6e'

You can even test if it works:

curl $WEBHOOK_URL

Tada! 🎉🎉🎉

Okay, that wasn’t super impressive. But stick with me; it’ll get better. Now we can create the Slack application. Typically, you’d do that through the UI, but why choose the easy route? We’re better than that.

Instead, we’ll use the new and sleek Slack Manifest API. To use it, we need the “App Configuration Token”. To get one, visit the Application List Page and click the enticing “Generate Token” button.

Once done, export the token.

export APP_CONF_TOKEN='xoxe.xoxp-1-Mi0yLTU1MzQyMTQ1MDEyMjMtNTU3MjQ2ODAxMDE0NC01NjkzODg2OTYzNzMzLTU3NjIyODI2MTg2NDAtZTdlZWNmODc5NDdkNjY1YzljZGQ4YzIyNThlZTEwZmY4YzBkOGFhNjk4M2U0MjU4NWU0NGFkMjY2MzliZDMyMw'

Now I should show you here how to create the app, but this snippet would be way too long and boring. So, instead we’ll execute this script that will submit this manifest to this endpoint.

> scripts/create-slack-app.sh
...
Go to https://api.slack.com/apps/.../oauth and install the app

The script will parse out the application ID from the response and give you the URL under which you can install the app into your workspace. Proceed with that.

Perfect! We’re almost there. Head to your Slack workspace, find the “Scala Slack Bot”, and type `/hello` to it. Although you won’t receive a response, don’t despair. Navigate to webhook.site and brace yourself for a surprise. Your request will be right there!

Now, if you’re really eager to see something more before we move to more serious stuff, try editing the default response.

Save, send /hello again to our bot, and voila! You should see a response like this:

Are you tired? Now on to the more interesting stuff.

Local First

I have a significant personal bias towards running everything locally. I believe that every piece of code we write should be runnable without any deployment procedure. With this mindset, we can begin running our bot.

We can execute the code in two ways: as a lambda or a service. For simplicity, we’ll choose the service for now:

> sbt 
...
sbt:scala-slack-bot> serviceJVM/reStart
...
serviceJVM ... Ember-Server service bound to address: [::]:9876

Great! Our server is now listening on port 9876. Let’s see if it works

> curl localhost:9876/hello
world%

> curl -XPOST localhost:9876/lambda
Malformed request

It does work! The server we’ve launched exposes two endpoints: GET /hello, which does nothing interesting, and POST /lambda, which calls a piece of logic crafted for our Slack integration. We won’t delve into that logic just yet; instead, we’ll point our Slack bot at this new service right away.
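For orientation, the two routes boil down to something like the sketch below, assuming http4s (the Ember-Server log line above hints at it); the repo’s actual routing, effect type, and handler wiring may differ.

import cats.effect.IO
import org.http4s._
import org.http4s.dsl.io._

// `handle` stands in for the Slack logic behind POST /lambda.
def routes(handle: String => IO[String]): HttpRoutes[IO] =
  HttpRoutes.of[IO] {
    case GET -> Root / "hello" =>
      Ok("world")
    case req @ POST -> Root / "lambda" =>
      // Slack sends the slash-command payload as a form-encoded body.
      req.as[String].flatMap(handle).flatMap(Ok(_))
  }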

Indecent Exposure

Unfortunately there are two more hurdles:

  1. Slack needs a publicly reachable endpoint
  2. Slack requires https support

While 1) can be bypassed by properly configuring your router, 2) proves a bit trickier. If you’re among the 0.00001% of people who can (and want to) get a legitimate HTTPS certificate for your local machine, then I wonder what you’re doing here. For everyone else, we’ll resort to ngrok:

> brew install --cask ngrok
> ngrok http 9876
...
Web Interface http://127.0.0.1:4040
Forwarding https://46eb-83-21-167-217.ngrok-free.app -> http://localhost:9876

Ngrok provides a public, HTTPS-compatible address that we can use in Slack! Let’s make use of it:

# Mind the /lambda after the ngrok url
export WEBHOOK_URL='https://46eb-83-21-167-217.ngrok-free.app/lambda'
export APP_ID='A05MQEU8JF5' # from UI or app creation response
scripts/update-slack-app.sh

Now, when you send the /hello command, you should see a response!

We’re getting closer. And since we have some time to spare, let’s repeat part of the procedure to provide the bot token to the app. This token is needed because our bot aims to fetch your name for a personalized greeting. Let’s cater to its needs.
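For the curious, the personalized greeting boils down to a single call to Slack’s users.info method. Below is a rough sketch, assuming an sttp-style HTTP client and naive JSON handling purely for illustration; the repo’s actual client and parsing may differ.

import sttp.client3._

def greet(userId: String, botToken: String): String = {
  val backend = HttpURLConnectionBackend()
  val response = basicRequest
    .post(uri"https://slack.com/api/users.info")
    .header("Authorization", s"Bearer $botToken")
    .body(Map("user" -> userId)) // form-encoded, as the Slack Web API expects
    .send(backend)
  val json = response.body.fold(identity, identity)
  // Naive extraction for illustration only; use a proper JSON library in practice.
  "\"real_name\":\"([^\"]+)\"".r.findFirstMatchIn(json).map(_.group(1)) match {
    case Some(name) => s"Hello, $name!"
    case None       => s"Failure in communicating slack: $json"
  }
}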

Head to your app settings, retrieve the token, export it, restart the service, and resend the command. You can find the token at https://api.slack.com/apps/${APP_ID}/oauth or just go to the apps list and navigate the UI like an animal.

> export SLACK_BOT_TOKEN='xoxb-5534214501223-5751280315521-Di8ljRpI2re0mYJg5MwWFnKx'
> sbt
sbt:scala-slack-bot> serviceJVM/reStart

Yay! This is our bot in full. If you expected more, I’m sorry to disappoint you. But we still have a lot to cover, so let’s continue.

Debugging: The Good, the Bad, and the Ugly

What’s the underlying benefit of running things locally, apart from simplicity? Debugging, naturally. Although the methods I detail below also function over the network, executing them locally is considerably more straightforward and convenient.

Luckily everything’s set up thanks to Revolver.enableDebugging() present in build.sbt. All that remains is to connect to the JVM through the debugger.
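For reference, that usually amounts to a one-liner in build.sbt; a generic sketch (port and flags here are illustrative, not necessarily what the repo uses):

lazy val serviceJVM = project
  .settings(
    // sbt-revolver starts the forked JVM with JDWP enabled, so a remote
    // debugger can attach on the given port.
    Revolver.enableDebugging(port = 5005, suspend = false)
  )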

Cool, with that superpower you can now see the data coming from Slack without changing any code or checking logs. Just set a breakpoint, send your command, and dive into the wonders of debugging.

Okay, I know, this is nothing impressive for anyone who has spent more than 3 months in the JVM world. But I have a point here, so bear with me for a little bit longer.

Enter Scala.JS

Say your prayers, little one
Don’t forget, my son
To include everyone
I tuck you in, warm within
Keep you free from sin
’Til the scala.js, it comes

We’ve stayed in the warm and safe embrace of the JVM for far too long. Now it’s time for a bit of an adventure.

The project we use has full cross-compilation support, allowing us to run the same code on both JVM and JS. Why? “Because we can” might be a good answer, but a more practical reason is that NodeJS has a much lower startup time than JVM, making it more suitable for lambda runtime.

I will spare you all the exhausting, painful, traumatic, agonizing, soul-crushing, morale-draining, and gut-wrenching details of setting up cross-compilation. If you want them, you can look into build.sbt, as long as you’re prepared for it to look back into you.
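That said, the overall shape is the standard sbt-crossproject pattern; a generic sketch (module and setting names here are illustrative, not copied from the repo):

lazy val service = crossProject(JVMPlatform, JSPlatform)
  .in(file("service"))
  .settings(
    // settings and dependencies shared between the JVM and JS builds
  )
  .jvmSettings(
    // JVM-only dependencies, e.g. the HTTP server
  )
  .jsSettings(
    // emit a runnable main module for Node
    scalaJSUseMainModuleInitializer := true
  )

lazy val serviceJVM = service.jvm
lazy val serviceJS  = service.js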

But it’s not all bad. Node offers similar debugging capabilities, which is crucial if we opt to deploy our lambda via a JS environment. Let’s test it.

# remember to reStop the previous instance
> npm install source-map-support # required to make it anywhere close to usable
> sbt serviceJS/run
...
Debugger listening on ws://127.0.0.1:9229/e86ca008-f977-45c0-bf29-47dd2ef1065b

See? The debugger is there! Let’s connect to it and re-send the command.

I’m taking a break to bring my psyche back to normal, and in the meantime, you can look there, scroll up to the JVM output, and then scroll back down. Tell me if it’s even remotely close in terms of UX. And this is just a debugger in a very, very simple project on a happy path. I don’t even want to think about how this setup would behave if we hit any kind of edge case. But…

La La Lambda

Usability aside, it’s time to finally focus on the promised lambdas. As you might guess, we’ll begin by testing it locally.

sbt lambdaJS/fastOptJS::webpack
./scripts/run-local-lambda-container-js.sh

This will start up an AWS-provided image that mimics the AWS Lambda runtime, loaded with our bundled code.

Normally, this is when I’d guide you through testing it. But not this time! This time I will show how it DOESN’T work. Go and try calling our lambda.

> curl -XPOST "http://localhost:9876/2015-03-31/functions/function/invocations" -d '{}'

If you do, you will see an error. A quite cryptic one.

{
  "errorType": "org.scalajs.linker.runtime.UndefinedBehaviorError",
  "errorMessage": "java.lang.ClassCastException: undefined cannot be cast to java.lang.Boolean",
  "trace": [
    "org.scalajs.linker.runtime.UndefinedBehaviorError: java.lang.ClassCastException: undefined cannot be cast to java.lang.Boolean",
    " at $throwClassCastException (/var/task/lambda-fastopt-bundle.js:61:9)",
    " at $uZ (/var/task/lambda-fastopt-bundle.js:553:77)",
    " at Runtime.$t_LslackBotLambda_JsHandler$__handler [as handler] (/var/task/lambda-fastopt-bundle.js:14888:9)",
    " at Runtime.handleOnceNonStreaming (file:///var/runtime/index.mjs:1083:29)"
  ]
}

Everything is clear, right? And mind the fact that I already removed part of the crypticness and formatted the json for you.

undefined cannot be cast to java.lang.Boolean

If you’re feeling naughty, shout that at a JS conference and watch the reactions. Anyway, the message doesn’t help much so let’s look at the code. Runtime.$t_LslackBotLambda_JsHandler$__handler [as handler] (/var/task/lambda-fastopt-bundle.js:14888:9) will be our friend!

14888: if ($uZ(event.isBase64Encoded)) {

Ah, indeed, there is a line of code like that in our project. Due to the format of lambda messages, we have to check whether the body is base64-encoded and decode it if it is. The problem is that parsing incoming data into a well-defined shape is utterly nonexistent in the JS world! The standard way is to just access the field and hope for the best, which results in the beautiful error above. It’s made even prettier by all the minification (what a beautiful function name $uZ is! Feels a bit like Haskell) and the scala.js compiler-generated names (I always call my functions t_LslackBotLambda_JsHandler$__handler; I even considered naming my son like this). So, why did the error happen? We simply didn’t provide a properly shaped body.
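For reference, the check that blew up is conceptually just this (a minimal sketch, not the repo’s exact code):

import java.nio.charset.StandardCharsets
import java.util.Base64

// The API Gateway payload carries `body` plus an `isBase64Encoded` flag.
// In Scala.js, reading a missing `isBase64Encoded` as a Boolean is exactly
// the "undefined cannot be cast to java.lang.Boolean" failure above.
def decodeBody(body: String, isBase64Encoded: Boolean): String =
  if (isBase64Encoded) new String(Base64.getDecoder.decode(body), StandardCharsets.UTF_8)
  else body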

Okay, was this experiment productive? Not much. Was it needed? Not particularly. Was it helpful? Probably not. But did it allow me to rant about JS? Hell yeah!

Now that I’m satisfied, we can make a proper call and check that our lambda and container work properly in the absence of silly developers who can’t prepare a correct payload.

> curl -X POST --location "http://localhost:9876/2015-03-31/functions/function/invocations" \
-d '{
"body":"user_id=111",
"isBase64Encoded": false
}'

{"statusCode":200,"body":"Failure in communicating slack: {\"ok\":false,\"error\":\"user_not_found\"}","headers":{"Content-Type":"text/plain"}}%

Yay! It does work, to the expected extent: the user with id 111 is, well, not expected to exist. Unfortunately, this container can’t be used directly with Slack (we can’t point Slack to https://46eb-83-21-167-217.ngrok-free.app/2015-03-31/functions/function/invocations) because there’s nothing in front of it to form a proper API Gateway payload. So, we have a tool to invoke our code in a lambda-like environment, but not E2E. Bummer.

Deploy All The Things!

Finally, we’ve reached the point where it makes sense to deploy something to the cloud. And, since we’re already “enjoying” the lambda setup, the only natural thing is to stick with it.

We’ll take our beautiful code and deploy it to AWS alongside a few other things. The script executed below will deploy this CloudFormation stack (which I shamelessly stole from the internet) to create the following resources:

  • Lambda Function
  • IAM Role — so that our code can perform dangerous actions, like producing logs.
  • Lambda Permission — so that API Gateway can use our function.
  • API Gateway API — so that we can call the function through an HTTP call.

That’s just 4 concepts (+ CloudFormation itself) to learn and understand. In the AWS world, this is what we call lean and easy! Unfortunately, CF doesn’t allow us to upload the function in one step, so we’ll need to do it separately with another script.

> scripts/deploy-cf-stack.sh

# This will take some time; re-execute the command below until it shows complete
> scripts/get-cf-stack-status.sh
Status: "CREATE_COMPLETE"
Lambda url: "https://ofz57d1915.execute-api.eu-central-1.amazonaws.com"

> sbt lambdaJS/universal:packageBin
> scripts/update-lambda-js.sh

If everything went fine, you should be able to see the created resources in the AWS Console.

Let’s test it. 😈

curl -XPOST https://ofz57d1915.execute-api.eu-central-1.amazonaws.com
{"message":"Internal Server Error"}%

Oh no! Time to investigate!
Just kidding, that would be pure sadism at this point. Instead, I’ll provide a summary:

  • You go to the AWS console.
  • You go to your lambda.
  • You go to CloudWatch logs.
  • You see an error.
{
  "errorType": "TypeError",
  "errorMessage": "Cannot read properties of undefined (reading 'length')",
  "stack": [
    "TypeError: Cannot read properties of undefined (reading 'length')",
    " at Wy (/var/task/lambda.js:1:183637)",
    " at qM (/var/task/lambda.js:1:449114)",
    " at /var/task/lambda.js:1:108609",
    " at _h (/var/task/lambda.js:1:108949)",
    " at Runtime.SV [as handler] (/var/task/lambda.js:1:107713)",
    " at Runtime.handleOnceNonStreaming (file:///var/runtime/index.mjs:1083:29)"
  ]
}
  • You look for length in the bot codebase.
  • It’s not used anywhere explicitly.
  • You check all the locations from the stack trace.
  • None of them are helpful.
  • You redeploy the lambda with fast-opt to make the generated code more understandable.
  • You repeat the entire investigation process.
  • You might or might not find the cause.

To be honest, I stopped midway through that list and simply tried adding a body to the request. It worked, but that was pure luck. If you weren’t so fortunate, you’d have had a very frustrating hour or two. So, let’s send a proper request to ensure everything functions as expected.

> curl -XPOST https://ofz57d1915.execute-api.eu-central-1.amazonaws.com -d 'user_id=111'
Failure in communicating slack: {"ok":false,"error":"user_not_found"}%

Good, this is the expected result, and everything is working as it should.

Lambda + Slack, Finally

We waited long enough. Let’s connect Slack to our lambda.

export WEBHOOK_URL='https://ofz57d1915.execute-api.eu-central-1.amazonaws.com'
scripts/update-slack-app.sh

Hurray! This is what we wanted to achieve. We have a Slack Bot that triggers an AWS lambda, checks the username and responds with a greeting. Is that all? Not really.

Do We Really Need That JS?

Spoiler: maybe?

Why did we go with scala.js in the first place? For two reasons:

  • It’s supposed to start faster.
  • You can put scala.js on your CV.

Let’s see if the first assumption actually holds.

JVM Lambda? Be My Guest!

Let’s start by building and running a JVM version of our lambda locally. We’ll once again leverage the AWS-provided image to mimic the behavior of a real lambda environment.
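Before we run it, a quick note on what a JVM lambda actually is: just a class implementing the AWS RequestHandler interface. A generic sketch is below (the repo’s actual JVMHandler and payload types may differ):

import com.amazonaws.services.lambda.runtime.{Context, RequestHandler}
import com.amazonaws.services.lambda.runtime.events.{APIGatewayV2HTTPEvent, APIGatewayV2HTTPResponse}

// Illustrative handler only; see JVMHandler.scala in the repo for the real one.
class EchoHandler extends RequestHandler[APIGatewayV2HTTPEvent, APIGatewayV2HTTPResponse] {
  override def handleRequest(event: APIGatewayV2HTTPEvent, context: Context): APIGatewayV2HTTPResponse = {
    val body = Option(event.getBody).getOrElse("") // null when the caller sends no body
    val response = new APIGatewayV2HTTPResponse()
    response.setStatusCode(200)
    response.setHeaders(java.util.Map.of("Content-Type", "text/plain"))
    response.setBody(s"echo: $body")
    response
  }
}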

> sbt lambdaJVM/universal:stage
> ./scripts/run-local-lambda-container-jvm.sh

And while we’re at it, let’s repeat the experiment we initially did with the JS version.

> curl -XPOST "http://localhost:9876/2015-03-31/functions/function/invocations" -d '{}'

{
"errorMessage": "Cannot invoke \"String.split(String)\" because \"body\" is null",
"errorType": "java.lang.NullPointerException",
"stackTrace": [
"slackBotLambda.LambdaHandler$.parseUrlParams(LambdaHandler.scala:65)",
"slackBotLambda.LambdaHandler$.getUserId(LambdaHandler.scala:39)",
"slackBotLambda.LambdaHandler$.run(LambdaHandler.scala:23)",
"slackBotLambda.JVMHandler.handleRequest(JVMHandler.scala:22)",
"java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)",
"java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)",
"java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)",
"java.base/java.lang.reflect.Method.invoke(Unknown Source)"
]
}

Oh no. This error clearly shows what went wrong and where. This is totally unacceptable. (If you feel the sarcasm is undeserved, then scroll up a bit for the same error from the JS environment. It’s very much deserved.)
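For context, the method at the top of that stack trace does little more than split the form-encoded slash-command body, roughly like the sketch below (not the repo’s exact code), which is also why a null body goes straight to a NullPointerException:

// Slash-command payloads arrive form-encoded, e.g. "user_id=111&command=%2Fhello".
def parseUrlParams(body: String): Map[String, String] =
  body
    .split("&")
    .toList
    .map(_.split("=", 2))
    .collect { case Array(key, value) => key -> value }
    .toMap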

So, let’s try once more with a valid payload:

> curl -X POST --location "http://localhost:9876/2015-03-31/functions/function/invocations" \
-d '{
"body":"user_id=111",
"isBase64Encoded": false
}'

{"statusCode":200,"headers":{"Content-Type":"text/plain"},"body":"Failure in communicating slack: {\"ok\":false,\"error\":\"user_not_found\"}","isBase64Encoded":false}%

Success! Off to the cloud(s) we go!

> sbt lambdaJVM/universal:packageBin
# this turns our JS lambda into a JVM lambda
> ./scripts/update-lambda-jvm.sh

> curl -XPOST https://ofz57d1915.execute-api.eu-central-1.amazonaws.com -d 'user_id=111'
Failure in communicating slack: {"ok":false,"error":"user_not_found"}%

And there it is. Assuming you didn’t change the webhook URL, our Slack bot should now work end-to-end on the JVM on AWS. 🎉 🎉 🎉

However, not everything is rosy. At least for me, the cold starts are absolutely disastrous, reaching double-digit seconds. I’m not convinced this is a problem of the JVM itself, as its startup time has improved significantly over the last few versions. Rather, it might be the setup and how Lambda uses the JVM. There are a couple of ways to improve startup time, as described here or here.

So, can we replace scala.js with JVM for lambdas? Probably, depending on how critical cold-start times are and how often they will occur in practice. But there’s one more question to address.

Do we really need that Lambda?

Returning to our primary goal: we wanted to write a Slack bot in Scala, one that will be used a few times a day at most. Lambda was tempting because of its low cost. But do you know what else is nearly as cheap? A basic VPS or another type of persistent hosting.

All the hiccups during this journey cost me probably around 5–10 hours. For this, I could have bought a simple server for a couple of years. So the cost perspective, in my particular use case, is negligible. If not for the cost, then what? Let’s explore two main alternatives: lambda and service architecture. We’ll compare them based on:

  • Complexity — number of moving parts.
  • Popularity — how often people use a given approach.
  • Proficiency — my personal experience and how often I interact with a given architecture at work.

Can you see where I’m heading? Lambdas are vastly more complex, especially in the Scala.js scenario. Lambdas are also less popular. Probably any backend developer can write and deploy a REST API, but not necessarily a lambda. I work with HTTP APIs daily, but I had to learn Lambda from scratch. So, let’s dig a bit into service deployment.

In the Pursuit of Simplicity

Not all offerings are created equal. I prefer using a single tool or provider when feasible. Sadly, that’s not the case here. Some say AWS is the assembly language of cloud computing, and this is very much true in this case. Deploying a single container there is way too complex. Depending on the approach you take (raw ECS, Fargate, Lightsail, AppRunner, Beanstalk), your experience might differ. However, all of them feel too low-level to me. Plus, having five products to run a docker container is a clear sign that something went amiss during product development.

If not AWS, then what? The two main alternatives are GCP and Digital Ocean. I’d probably consider DO for a VPS. Google Cloud Run seems fitting, but I have even less experience with GCP than AWS. Plus, AWS left a bitter taste in my mouth, making me wary of big cloud vendors. The fact that GCP’s tutorials seem UI-centric doesn’t help either.

So, why not a VPS actually? This would be my fallback option if I can’t find a decent managed offering. While setting up a simple server isn’t too difficult (installing Docker and updating the system isn’t exactly challenging), it comes with other concerns, like security and resilience, that I’d prefer to avoid if possible.

If not AWS, GCP, or VPS, then what? The last option that is frequently recommended is fly.io. I must confess I wasn’t familiar with this provider, but I decided to try it out. The first positive sign was the shell-first tutorial on the main page. I won’t bore you with all the details of learning fly.io, but suffice it to say it checks all the boxes I needed:

  • super easy to deploy a docker container
  • super easy to configure an http(s) endpoint
  • super easy to do all of that from the console
  • cost effective

So let’s see what it takes to deploy our bot there:

# one time auth
> fly auth login
> fly auth docker

# build
> sbt serviceJVM/docker:publish
...
[info] Published image registry.fly.io/scala-slack-bot:latest

# deploy
> fly deploy -c resources/fly.toml
> fly secrets set -c resources/fly.toml SLACK_BOT_TOKEN=$SLACK_BOT_TOKEN

# get hostname
> fly status -c resources/fly.toml --json | jq -r .Hostname
scala-slack-bot.fly.dev

Oh, look, I didn’t even need to write a script for that! And if you’re curious about the resources/fly.toml file, it’s literally 10 lines of TOML:

app = "scala-slack-bot"
primary_region = "ams"

[build]
image = "registry.fly.io/scala-slack-bot:latest"

[http_service]
internal_port = 9876
force_https = true
min_machines_running = 0

This is the bare minimum you should care about when deploying an HTTP API. And you know what else is cool? This essentially comes at no cost. While I wouldn’t mind paying for the machine, my test deployment fits into the free tier. So, I don’t even need to configure payment details.

To demonstrate that it works:

> curl https://scala-slack-bot.fly.dev/hello
world

> export WEBHOOK_URL='https://scala-slack-bot.fly.dev/lambda'
> scripts/update-slack-app.sh

Final Thoughts

That’s the end. I don’t have anything more for you. We’ve run our Slack bot in quite a few ways:

  1. Locally as a JVM service
  2. Locally as a JS service
  3. Locally as a JS Lambda inside an AWS container
  4. On AWS as a JS Lambda
  5. On AWS as a JVM Lambda
  6. On Fly.io as a JVM service

All of that should provide you with a solid foundation for developing Slack bots in Scala and help you decide on the architecture to use. Personally, I won’t concern myself with serverless functions unless there’s another seemingly perfect fit. The inherent complexity is a significant downside. One lesson I’ve learned over all those years is the value of simplicity.

Scala.js is an impressive engineering achievement, but it should be used with thoughtful consideration. While it might be a necessary tool for the frontend, it’s not necessarily the best choice for backend development. The JS ecosystem just doesn’t compare to what we have with JVM for backend tasks.

Further Work

We haven’t explored a few runtime alternatives, primarily Scala-Native and GraalVM native image. Both might address the issue of Lambda cold starts, but they probably aren’t the go-to solutions for crafting a production-ready Slack Bot. I admire these initiatives (especially Scala-Native), but they come with their own set of challenges. I’m eager to try them out for CLI applications, but for now, I’ll stick with a straightforward JVM service for Slack bots.

Having said that, it would be great to incorporate them into the repository for both completeness and educational value. PRs are highly encouraged and welcomed!

Summary

Scala is fantastic. Slack is pretty good. Scala.js is awesome but challenging. AWS Lambda is complex. Fly.io is great.
