Local WebHook Service Using Go, MySQL and Docker

Mihai Balica
Cognizant Softvision Insights
Jul 1, 2021 · 7 min read

What is a WebHook?

According to Wikipedia, a WebHook is a user-defined HTTP callback, usually triggered by an event, such as a code push to a repository or a comment posted on a blog. The idea is that when that event occurs, the source site makes an HTTP request to the URL configured for the WebHook (this is the WebHook service or server). This way, WebHooks can be used to trigger builds on Continuous Integration systems or to create tickets on Issue Tracking systems.

When working with WebHooks, it is considered good practice to test and debug the callbacks. A WebHook service that consumes callbacks and makes them available exactly as they were received can prove very useful. The following details how to build one for local use.

Why a local WebHook server?

Why local, when there are so many services available, many of them at no cost? Because sometimes you need something really fast, always available and under your complete control: something capable of returning the content of a callback in an instant, without changes.

In this scenario, there is a large automated test suite that takes a couple of hours to execute. Having a fast and reliable WebHook service decreased the running time of this suite and reduced to zero the number of false failures caused by WebHook service malfunctions.

Proposed architecture

Docker containers were used here for a couple of reasons. Docker is easy and fast to deploy on any machine and it enables deployment to the Cloud, but beyond that, it offers an isolated and consistent environment. Docker images are free of environmental limitations: the application and its configuration inside a container image are the same on every instance.

Without Docker, one would have to install MySQL, fix any missing or mismatched library versions, resolve possible conflicts with other libraries and so on. Instead of all that painstaking work, it is simple to download a MySQL container image and launch it. Without Docker, it would also be much harder to run several instances of the same application side by side. Docker offers tools like docker-compose that help create and run multi-container applications, such as in this scenario: a MySQL database, several instances of a Go application and an Nginx proxy server load balancing the network traffic, all of these containers running in an isolated environment with its own internal network.

A top-down description of the Docker environment

There are six containers running in their isolated environment, on their own network (see Appendix 1). The outside world has access to it only through HTTP requests on port 8080 of the host operating system, which is redirected to the internal network of this stack and hits the Nginx reverse proxy on port 80. The traffic is then redirected to one of the four containers (on port 8080) that run the Go app, which saves or retrieves the data in the MySQL database container using port 3306 (the default for MySQL).

Docker Compose

The docker-compose definition is stored in the docker-compose.yml file. Written against the version 3.7 schema, it describes the services that are part of the stack, the network and the volumes. Three services are described in this file:

Proxy

Where we instruct Docker to open port 80 in the running container and to map the operating system's port 8080 to it. We also provide a configuration file where we instruct the proxy server to route traffic coming from outside to whichever of the four application containers has the least open connections:
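As a rough sketch (the service names, file paths and image tag below are assumptions; the exact files are in the project's repository), the proxy service and its Nginx configuration could look like this:

```yaml
# docker-compose.yml excerpt -- illustrative sketch of the proxy service
services:
  proxy:
    image: nginx:alpine
    ports:
      - "8080:80"       # host port 8080 -> container port 80
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - app
```

```nginx
# nginx.conf -- send traffic to the app instance with the fewest open connections
events {}
http {
  upstream webhook_app {
    least_conn;         # "least open connections" balancing method
    server app:8080;    # Docker's DNS resolves the scaled app service at startup;
                        # the four instances could also be listed explicitly
  }
  server {
    listen 80;
    location / {
      proxy_pass http://webhook_app;
    }
  }
}
```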

DB

This container is, in fact, a MySQL database. It is easy to pick the exact version we want by using the corresponding image tag. This container needs a volume to store the data and make it persistent; otherwise, we would lose everything if the container restarted.

The database name, username and passwords are stored locally in a .env file.
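An illustrative sketch of both pieces, with placeholder names and values standing in for the real ones:

```yaml
# docker-compose.yml excerpt -- illustrative sketch of the db service
services:
  db:
    image: mysql:8.0            # pin the exact MySQL version via the image tag
    env_file: .env              # credentials are kept out of the compose file
    volumes:
      - db_data:/var/lib/mysql  # named volume so the data survives restarts

volumes:
  db_data:
```

```shell
# .env -- example values only
MYSQL_DATABASE=webhooks
MYSQL_USER=webhook_user
MYSQL_PASSWORD=change_me
MYSQL_ROOT_PASSWORD=change_me_too
```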

App

This application container depends on the database, so all of its instances are going to wait for the DB container to start first. Here, we do not specify an image, but a build. This means that before starting up the stack, we first need to build all the requested images. So, having the docker-compose.yml content described above, the user has to execute these two commands, in this exact order:
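A plausible sketch of the app service definition and of those two commands, assuming the service is simply called app (the exact names and flags may differ in the repository):

```yaml
# docker-compose.yml excerpt -- illustrative sketch of the app service
services:
  app:
    build: .            # built from the Dockerfile shown below
    depends_on:
      - db              # wait for the database container to start first
    expose:
      - "8080"          # only reachable from the stack's internal network
```

```shell
# 1. build the container image for the app service
docker-compose build

# 2. start the stack, scaling the app service to four instances
docker-compose up --scale app=4
```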

The second command will instruct compose to start four instances of the app service.

The first command will build the container image for the app, using instructions in the provided Dockerfile:
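A minimal multi-stage Dockerfile along these lines would do the job; treat it as a sketch rather than the exact file from the repository (it assumes a go.mod is present in the project root):

```dockerfile
# Build stage: compile a static Go binary
FROM golang:1.16 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /webhook .

# Run stage: small image containing only the binary
FROM alpine:3.13
COPY --from=builder /webhook /webhook
EXPOSE 8080
ENTRYPOINT ["/webhook"]
```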

The GoLang app

The application is in fact an HTTP server which listens on port 8080 for HTTP requests. It exposes two endpoints: one is used for initialisation (creating the database tables if they do not exist) and the other one is used for pushing and pulling information to and from the database.

All the requests coming on the path “/” are handled by the function processRequest() and all requests coming on the path “/initializeDataBase” are handled by the function initialize().

The initialize() function executes a “create table if not exists” statement on the selected database, prints any error messages to standard output, and then closes the connection to the database. See Appendix 2 for details.

The processRequest() function handles all the other requests, except the database initialization. It handles two types of HTTP requests: POST requests, where the data (the request body) is saved in the database, and GET requests, where the saved data is returned. See Appendix 3.

For POST requests, the application expects the content type to be JSON and looks for a specific “trId” field, which is stored in a separate database column for later identification. A GET request has to provide this same trId in order to correctly identify the information in the database.
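To tie the pieces together, here is a condensed, hypothetical sketch of the whole application, using the go-sql-driver/mysql driver. The DSN, the table layout and the way the identifier is read from the query string are assumptions (note also that the identifier field is called trId here and id in the examples below); the real handlers are listed in Appendix 2 and Appendix 3.

```go
package main

import (
	"database/sql"
	"encoding/json"
	"io"
	"log"
	"net/http"

	_ "github.com/go-sql-driver/mysql"
)

// dsn points at the "db" service on the stack's internal network; the
// credentials would normally come from the same .env file used by compose.
const dsn = "webhook_user:change_me@tcp(db:3306)/webhooks"

// initialize creates the storage table if it does not already exist.
func initialize(w http.ResponseWriter, r *http.Request) {
	db, err := sql.Open("mysql", dsn)
	if err != nil {
		log.Println(err)
		return
	}
	defer db.Close() // close the connection with the database when done

	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS webhooks (
		trId    VARCHAR(255),
		headers TEXT,
		body    TEXT)`); err != nil {
		log.Println(err)
	}
}

// processRequest stores POSTed callbacks and returns them on GET.
func processRequest(w http.ResponseWriter, r *http.Request) {
	db, err := sql.Open("mysql", dsn)
	if err != nil {
		log.Println(err)
		return
	}
	defer db.Close()

	switch r.Method {
	case http.MethodPost:
		body, _ := io.ReadAll(r.Body)
		// Without the identifier field the callback is not stored.
		var payload map[string]interface{}
		if json.Unmarshal(body, &payload) != nil || payload["id"] == nil {
			http.Error(w, "missing id field", http.StatusBadRequest)
			return
		}
		headers, _ := json.Marshal(r.Header)
		if _, err := db.Exec(
			"INSERT INTO webhooks (trId, headers, body) VALUES (?, ?, ?)",
			payload["id"], string(headers), string(body)); err != nil {
			log.Println(err)
		}

	case http.MethodGet:
		id := r.URL.RawQuery // the id is passed right after the '?' delimiter
		rows, err := db.Query("SELECT headers, body FROM webhooks WHERE trId = ?", id)
		if err != nil {
			log.Println(err)
			return
		}
		defer rows.Close()

		type entry struct {
			Headers string `json:"headers"`
			Body    string `json:"body"`
		}
		result := map[string][]entry{"webhooks": {}}
		for rows.Next() {
			var e entry
			if err := rows.Scan(&e.Headers, &e.Body); err == nil {
				result["webhooks"] = append(result["webhooks"], e)
			}
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(result)
	}
}

func main() {
	// "/" -> processRequest(), "/initializeDataBase" -> initialize()
	http.HandleFunc("/", processRequest)
	http.HandleFunc("/initializeDataBase", initialize)
	log.Fatal(http.ListenAndServe(":8080", nil)) // listen on port 8080
}
```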

Examples

For initialisation of the database

Either use curl from the command line or a tool like Postman, Insomnia, etc.
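With curl, that could be as simple as:

```shell
curl http://localhost:8080/initializeDataBase
```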

For pushing up and pulling your information — POST and GET requests on “/”

Let’s have a JSON file like this (input.json):
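The original file is not reproduced here; an illustrative input.json, with the mandatory id field and an empty client_id field that the demo will fill in later, might be:

```json
{
  "id": "tr-0001",
  "client_id": ""
}
```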

The “id” field is mandatory; without it, the JSON content will not be saved in the database.

Before pushing our JSON Body to the WebHook server, we can check to see if a JSON Body with the same ID was pushed before:
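With curl and the hypothetical id above, the check might look like this:

```shell
# expected: an empty webhooks array, since nothing was stored for this id yet
curl "http://localhost:8080/?tr-0001"
```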

Not a single call was recorded for this ID. Let’s have a POST request with our JSON in the body:
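Something along these lines:

```shell
curl -X POST -H "Content-Type: application/json" \
     --data @input.json http://localhost:8080/
```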

Now let’s see what happened. We perform a GET request and pass the id from our JSON as a parameter, placed after the ‘?’ character that delimits the query string:
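Assuming the id is passed right after the ‘?’, as described above:

```shell
curl "http://localhost:8080/?tr-0001"
```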

As we can see, we have the headers and the body from our POST request. The reason webhooks is a JSON array in the GET response is that in real life you might get several POST requests for the same ID, and when that happens we can see all of them. For demo purposes, let's copy the input.json file to input_2.json and add a value to the client_id field, keeping the id the same.
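Continuing with the hypothetical files, input_2.json could be:

```json
{
  "id": "tr-0001",
  "client_id": "client-42"
}
```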

We will perform the POST request:
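For example:

```shell
curl -X POST -H "Content-Type: application/json" \
     --data @input_2.json http://localhost:8080/
```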

Now let’s see the results:
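Running the same GET request again, the response should now contain two entries; with the hypothetical data above, its shape would be roughly:

```json
{
  "webhooks": [
    {
      "headers": "{ ...same headers... }",
      "body": "{\"id\": \"tr-0001\", \"client_id\": \"\"}"
    },
    {
      "headers": "{ ...same headers... }",
      "body": "{\"id\": \"tr-0001\", \"client_id\": \"client-42\"}"
    }
  ]
}
```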

Two entries in the webhooks JSON array, with the same headers, as expected, but the bodies are different: the client_id values differ.

GitHub

The project with source code and configurations, including docker files, can be found here: https://github.com/MihaiBalica/webhook

What’s next

The next steps involve moving everything to AWS and making it HTTPS capable, which will be covered in a future article.

