How We Boosted AI Usage Across Our Organization by Building a GPT Chatbot in Slack

How you can make Slack smarter — incorporating GPT into your everyday workflow.

Matthias Fischer
Lingvano
6 min read · Oct 12, 2023

--

Do you remember when ChatGPT was released? Our engineering team was hooked right away: we immediately saw the vast potential of this new era of AI-driven development.

After incorporating GPT into our development workflow, our next goal was to build an AI assistant for our entire team. We asked ourselves: how can we make sure non-technical people reap the amazing benefits of AI in a seamless, accessible way?

We were already using dozens of tools in our day-to-day workflow. So buying a ChatGPT premium subscription to give everyone access to GPT-4 was not an option, for two reasons:

  1. Cost: You pay a subscription fee for every team member, even those who rarely use it, which is very cost-inefficient.
  2. Overhead: Usage stays low because people don’t want to adopt yet another tool that doesn’t blend into their daily workflow.

So, we explored an alternative solution — creating our own GPT-powered chatbot on Slack. By leveraging the OpenAI API and the fact that everyone was already using Slack heavily, this proved to be the most scalable, user-friendly, and cost-effective solution for us.

In this article, you’ll learn how to build your own GPT-powered chatbot in 4 steps.

How does it work?

Our solution is fully integrated into Slack. You can start conversations with the bot, and it will reply inside a thread. Replies are streamed, so you don’t have to wait until the whole response is finished. The context of each thread is saved, and Slack already offers all standard chat features such as history, search, and syntax formatting.

To better illustrate how it works, check out this quick demo of interacting with our Slack Bot:

Visual Demo of Slack OpenAI Bot

Next, let’s dive into the specifics of creating a GPT Bot for Slack.

1. Prerequisites and Tech Stack

Before you get started, make sure you have:

  1. The ability to deploy apps on your Slack workspace
  2. Access to OpenAI’s API

You can build a GPT-powered Slack Bot with many different stacks, so here is just an example of what we use:

  1. A Node.js backend that ingests the user prompt from the Slack app, sends a request to OpenAI’s API, and relays the response back to Slack
  2. A database (we use Redis) for persisting the chat context, so that the bot can hold a multi-turn conversation

2. Create a Slack App

Now let’s go ahead and create our custom App within Slack:

  1. Open the Slack Apps Panel (https://api.slack.com/apps/)
  2. Click on “Create a new App”
  3. Select “Create from a Manifest”
  4. You can use the template below 🔽
  5. You will need to adapt the field “request_url” on line 28 and set it to the URL of your API endpoint that will handle the Slack requests
  6. Review the setup and click “Create”
  7. Click on “Install App to Workspace”
  8. After you have done that, you should be able to search for the name of the app you just created in your Slack Workspace (“GPT-Bot” as per our template).
Template for the Slack app manifest
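For orientation, a minimal manifest might look roughly like the following. This is a sketch, not our exact template: the scopes and settings shown are illustrative assumptions, and you should adapt `request_url` to your own endpoint:

```yaml
display_information:
  name: GPT-Bot
features:
  bot_user:
    display_name: GPT-Bot
    always_online: true
  app_home:
    messages_tab_enabled: true
    messages_tab_read_only_enabled: false
oauth_config:
  scopes:
    bot:
      - chat:write
      - im:history
      - im:write
settings:
  event_subscriptions:
    request_url: https://your-domain.example/slack
    bot_events:
      - message.im
  socket_mode_enabled: false
```

The key parts are the `chat:write` and `im:*` scopes (so the bot can read and answer direct messages) and the `message.im` event subscription, which is what makes Slack POST messages to your endpoint.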

After opening “GPT-Bot”, notice how it says “Sending messages to this app has been turned off”. To fix that, open the App you created in your Slack Apps Panel again and click on Features > App Home in the sidebar.

You need to activate the toggle “Allow users to send Slash commands and messages from the messages tab”.

Once you refresh your Slack Workspace tab, you should now be able to send messages. Sending a message inside “GPT-Bot” should now fire requests to the `request_url` you defined in step 5 of this walkthrough.
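At this point it helps to know roughly what arrives at your endpoint. A direct message to the bot triggers an Events API callback that looks approximately like this (trimmed to the fields we use later; the IDs are made up):

```json
{
  "type": "event_callback",
  "event": {
    "type": "message",
    "channel_type": "im",
    "channel": "D0123456789",
    "user": "U0123456789",
    "text": "Hello GPT!",
    "ts": "1697100000.000100",
    "event_ts": "1697100000.000100"
  }
}
```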

3. Connect your backend

You will need an API endpoint that will:

  1. Ingest the prompt from the user
  2. Verify that the request is valid and comes from Slack
  3. Send the prompt to the OpenAI API chat completion in streaming mode
  4. Use the Slack SDK chat.update method to send the updated result from OpenAI back to Slack every second

This way the AI answer will appear continuously, as if the bot were typing, just like in ChatGPT.

Let’s look at some demo code to see how we can achieve this in Express.js (TypeScript):

➡️ You can find the full code here ⬅️

Let’s break down the implementation step by step with some slightly simplified code!

First, you will set up a service for interacting with the Slack and OpenAI APIs:

import { WebClient } from '@slack/web-api';
import OpenAI from 'openai';

// Minimal shape of the Slack message event we consume
type SlackEvent = {
  channel: string;
  user: string;
  text: string;
  event_ts: string;
  thread_ts?: string;
  subtype?: string;
};

class SlackService {
  private _messageId: string;
  private _threadId: string;
  private _userId: string;

  constructor(
    private readonly _client: WebClient,
    private readonly _openai: OpenAI,
  ) {
    this._messageId = '';
    this._threadId = '';
    this._userId = '';
  }

  public async receiveBotMessage(event: SlackEvent) {
    // Post a placeholder reply in the thread; we will keep updating it
    // with the streamed completion (Slack rejects messages with no text).
    const res = await this._client.chat.postMessage({
      channel: event.channel,
      thread_ts: event.event_ts,
      text: '…',
    });

    this._threadId = event.thread_ts || event.event_ts;
    this._userId = event.user;
    this._messageId = res.ts ?? '';

    await this._getChatCompletion(event);
  }

  private async _getChatCompletion(event: SlackEvent): Promise<OpenAI.Chat.ChatCompletionMessageParam[]> {
    let slackMessage = '';

    const stream = await this._openai.chat.completions.create({
      model: 'gpt-4',
      messages: [
        {role: 'system', content: 'You are a helpful assistant.'},
        {role: 'user', content: event.text},
      ],
      stream: true,
    });

    for await (const chunk of stream) {
      slackMessage += chunk.choices[0].delta.content ?? '';

      // Replace the placeholder message with the text streamed so far
      await this._client.chat.update({
        channel: event.channel,
        text: slackMessage,
        ts: this._messageId,
      });
    }

    return [
      {role: 'user', content: event.text},
      {role: 'assistant', content: slackMessage},
    ];
  }
}
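Note that the sketch above calls chat.update on every streamed chunk. Slack rate-limits chat.update, so in practice you may want to flush at most about once per second, as step 4 above suggests. A minimal throttle helper could look like this (`createThrottler` is our own hypothetical helper, not part of the Slack SDK):

```typescript
// Hypothetical helper: returns true at most once per `intervalMs`.
// Call it inside the streaming loop to decide when to fire chat.update.
function createThrottler(intervalMs: number) {
  let lastFlush = -Infinity;
  return (nowMs: number = Date.now()): boolean => {
    if (nowMs - lastFlush >= intervalMs) {
      lastFlush = nowMs;
      return true;
    }
    return false;
  };
}
```

Inside the `for await` loop you would only call `chat.update` when the throttler returns true, plus one final update after the stream ends so the last chunks are not lost.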

For setting up the REST API endpoint, you’ll need a function to verify that the request comes from Slack:

import express from 'express';
import { createHmac } from 'crypto';
import tsscmp from 'tsscmp';

/** https://github.com/slackapi/bolt-js/blob/main/src/receivers/verify-request.ts */
export function verifySlackRequestMiddleware(req: express.Request, _res: express.Response, next: express.NextFunction) {
  const requestTimestampSec = req.headers['x-slack-request-timestamp'] as unknown as number;
  const signature = req.headers['x-slack-signature'] as string;

  if (!requestTimestampSec) {
    throw new Error(`x-slack-request-timestamp header is missing`);
  } else if (isNaN(requestTimestampSec)) {
    throw new Error(`x-slack-request-timestamp header is not a number`);
  }

  const [signatureVersion, signatureHash] = signature?.split('=') ?? ['', ''];
  if (signatureVersion !== 'v0') {
    throw new Error('unknown signature version');
  }

  const hmac = createHmac('sha256', process.env.SLACK_SIGNING_SECRET!);
  // eslint-disable-next-line @typescript-eslint/no-explicit-any
  hmac.update(`${signatureVersion}:${requestTimestampSec}:${(req as any).rawBody}`);
  const ourSignatureHash = hmac.digest('hex');
  if (!signatureHash || !tsscmp(signatureHash, ourSignatureHash)) {
    throw new Error(`Slack: signature mismatch`);
  }

  return next();
}
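One thing worth adding on top of the signature check: Bolt’s reference verifier also rejects requests whose timestamp is more than about five minutes old, to guard against replay attacks. A small sketch of such a check (the helper name is ours):

```typescript
// Reject requests whose x-slack-request-timestamp is too far from "now",
// so a captured request cannot be replayed later (Bolt uses a 5-minute window).
const MAX_AGE_SEC = 5 * 60;

function isStaleTimestamp(requestTimestampSec: number, nowMs: number = Date.now()): boolean {
  return Math.abs(nowMs / 1000 - requestTimestampSec) > MAX_AGE_SEC;
}
```

In the middleware above, you would throw before computing the HMAC whenever `isStaleTimestamp(requestTimestampSec)` returns true.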

Finally, you wire it all together to receive API requests including the middleware from above:

const app = express();

app.post('/slack', verifySlackRequestMiddleware, async (req: SlackRequest, res: express.Response) => {
  // Verify to Slack that you own this URL
  if (req.body.challenge) {
    return res.status(200).json({success: true, challenge: req.body.challenge});
  }

  const event = req.body.event;
  if (event.user === process.env.SLACK_BOT_USER_ID || event.subtype === 'message_changed') {
    // This is a message from our bot, which we ignore to avoid loops
    return res.status(200).json({success: true});
  }

  /**
   * No `await` here as the API needs to respond
   * fast, otherwise Slack will think the request
   * failed and will retry it.
   */
  new SlackService(
    new WebClient(process.env.SLACK_BOT_TOKEN),
    new OpenAI({apiKey: process.env.OPENAI_API_KEY}),
  ).receiveBotMessage(event);

  return res.status(200).json({success: true});
});

const port = 3000;
app.listen(port, () => {
  console.log(`Example app listening on port ${port}`);
});

4. Add persistence

The great thing about Slack is that the history of your AI chats is included out of the box. To make sure the bot remembers the context of your prompts at the thread level, you will have to add a persistence layer in your backend where you store the messages for a given period of time. The simplest solution would be to store the context in memory for a while.

At Lingvano, we use a Redis database to store the chats for 48 hours. This way, a user can come back and continue writing in a particular thread, and the AI will still have the full context, which is super convenient!
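To make this concrete, here is a sketch of such a thread-context store. The `ThreadStore` class and its key scheme are our own illustration, not a published API; it is written against a minimal key-value interface so you can back it with Redis (a SET with an EX expiry of 48 hours) in production, or with a plain in-memory map for local testing:

```typescript
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

// Minimal key-value contract; a Redis client (SET key value EX ttl)
// or an in-memory map can both satisfy it.
interface KeyValueStore {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
}

// In-memory implementation for local testing (the TTL is ignored here).
class InMemoryStore implements KeyValueStore {
  private data = new Map<string, string>();
  async get(key: string) { return this.data.get(key) ?? null; }
  async set(key: string, value: string, _ttlSeconds: number) { this.data.set(key, value); }
}

// Stores the message history of each Slack thread, expiring after 48 hours.
class ThreadStore {
  constructor(private store: KeyValueStore, private ttlSeconds = 48 * 60 * 60) {}

  private key(threadId: string) { return `gpt-bot:thread:${threadId}`; }

  async load(threadId: string): Promise<ChatMessage[]> {
    const raw = await this.store.get(this.key(threadId));
    return raw ? (JSON.parse(raw) as ChatMessage[]) : [];
  }

  async append(threadId: string, messages: ChatMessage[]): Promise<void> {
    const history = await this.load(threadId);
    await this.store.set(this.key(threadId), JSON.stringify([...history, ...messages]), this.ttlSeconds);
  }
}
```

Wired into the service from step 3, you would load the history for `event.thread_ts` before calling the OpenAI API, prepend it to the `messages` array, and append the new user/assistant pair once the stream finishes.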

Your thoughts matter

This is how we managed to boost AI usage across our entire organization! In this article, you learned:

  1. What you need to create a GPT-powered chatbot
  2. How you set up a custom Slack app
  3. How you connect the messages from your Slack app to your backend
  4. How you can add persistence to your AI Bot to preserve the context

Are you eager to build a GPT-powered chatbot yourself? Let us know your thoughts! We are considering bundling this logic into an NPM package if there is enough interest from the community.

Curious about our app? You can check out our production version on Web, iOS, and Android.

Feel free to take a look at our public repos and other projects, where we share some details of the technical setup and infrastructure of our web, iOS, and Android React Native Expo app.

Do you have questions about our setup or ideas for improvement? Just leave a comment! Make sure to follow us to get more insights into our projects and our day-to-day developer life.

We’re happy to connect on Twitter, Instagram or LinkedIn. Also, we are always hiring. Join us! 🚀
