Building A Minimal Authorized Kibana Image Yourself For Peace Of Mind In 2 Steps.

Art Deco
Dec 7, 2018 · 6 min read

In the previous story, Art Deco showed how to use the new artdeco/kibana Docker image with an authorisation proxy and minimised dependencies, which was 200 MB smaller than the official OSS image (with Webpack and Babel removed). However, as we were really keen to show the world how we'd done it, the implementation and the build logic were somewhat rough. This second version builds on the experience and progress made during this week of hacking around Kibana: the new version of the Dockerfile is maximally transparent, and the image weighs only 211 MB.

tl;dr: How To Build A Small Kibana Image With Authorization And Know It's Legit.

The steps:

git clone https://github.com/dockspage/kibana-docker

docker build . -t kibana

The Dockerfile v2

The final outcome is a Dockerfile that builds the image in 2 stages to optimise its size. Each step is easy to understand, and the additional files such as src/cli can be looked up on GitHub.

The GitHub Repository

We share the code at https://github.com/dockspage/kibana-docker.

It is easy to inspect the project directory now, as the only additions are the proxy server and the CLI script.

We have run the steps necessary to create the kibana/package.json and kibana/yarn.lock files, which we also committed to the repo; however, they can be recreated using the install-deps tool that we contribute. Docker uses these files for faster installation of the dependencies.

A screenshot of the new tool at work: our contribution and innovation in solving the problem of Kibana having too many production dependencies for a production image.

Table Of Contents

The Dockerfile v2
The GitHub Repository
Previous Opaque Method
The Improved Approach
Trust But Make Your Own
The Install-Deps Tool
Source Code Acquisition And Modifications
Using Alamode To Transpile Project Code
Patching src/server/kbn_server
Summary

Previous Opaque Method

Each release was distributed via GitHub as a tar.gz archive, which was then downloaded into a Docker container via the Dockerfile during the Dokku app deploy.
1. Download the snapshot, modify the code in the kibana/src/cli/index.js and kibana/src/server/kbn_server.js files, push the changes to GitHub, and create a release with a tag like v1.3 and a message like Proxy via the GitHub web interface.

2. Every release got downloaded in a temporary Dokku container created on git push to the Dokku host. The image was made using the Dockerfile that was pushed to the Dokku host running the kibana app. After manual testing, the created image was tagged as artdeco/kibana and pushed to Docker Hub. As a result, newer apps could simply spawn a container from the artdeco/kibana image and only had to pass the Elasticsearch host URL in the ELASTIC_SEARCH environment variable.

The old Dockerfile downloaded the release from GitHub, so each change to the source code had to go through the release process. At one point, we made a release without pushing the local changes to GitHub, which resulted in the same build as before, so that is a thing to look out for in similar situations. GitHub releases are created instantly and can be used to transport the code.

Initially, before the proxy, we ran a Kibana open to the world with our data, which was not good, therefore we had to shut down the Dokku app. However, when it is killed with dokku ps:stop kibana, the container will restart (to one's surprise), therefore it is important to stop it with docker stop CONTAINER_ID.

The Improved Approach

The appropriate development of the initial version of the solution is to make the image preparation process transparent and to keep the number of newly introduced scripts to a minimum so that they can all be inspected: anyone should understand what they run on their containers. We use a multi-stage build to first create the needed kibana directory, and then copy it in the second stage of the build, which helps to keep the image size small.

Instead of downloading the snapshot, removing node from it, and committing the code (the src directory and @kbn/packages) to GitHub, we add the wget -qO- https://snapshots.elastic.co/downloads/kibana/kibana-oss-6.5.2-SNAPSHOT-linux-x86_64.tar.gz | tar xz line to the Dockerfile so that it is clear to everyone where the snapshot comes from (it will create the kibana directory upon extraction). After that, we add the locally generated package.json and yarn.lock files (see the tool section) to Kibana and run the yarn command to install Kibana's dependencies.

We then remove the Optimize mixin from kbn_server by copying this file from the source tree, making the changes, and transpiling it again with Àlamode to change the import calls to require. To complete the first stage, the __REPLACE_WITH_PUBLIC_PATH__ marker is removed from the optimised bundles using find and sed.
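That clean-up can be sketched as a self-contained shell snippet (the bundle path and file content below are stand-ins rather than the real snapshot layout; the actual Dockerfile runs the same find/sed combination over the snapshot's optimised bundles):

```shell
# Create a stand-in optimised bundle that contains the marker.
mkdir -p kibana/optimize/bundles
echo 'var publicPath = "__REPLACE_WITH_PUBLIC_PATH__" + "bundles/";' \
  > kibana/optimize/bundles/kibana.bundle.js
# Strip the marker from every .js file under optimize, in place (GNU sed).
find kibana/optimize -name '*.js' \
  -exec sed -i 's/__REPLACE_WITH_PUBLIC_PATH__//g' {} +
```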

In the second stage, we copy the prepared Kibana, install @idio/core and koa-better-http-proxy with yarn, and add the cli.js and proxy_server.js scripts. The CLI script starts the proxy server and calls kibana/src/server/cli/cli.js without loading setup-node-env normally required by Kibana. This helps to avoid running babel-register unnecessarily when all files from the snapshot are already compiled from JSX and TypeScript. The login screen found in static is also added to the project dir.
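Put together, the two stages described above can be outlined roughly like this (a simplified sketch rather than the exact Dockerfile from the repository; the base image, file names, and the layout of the copied scripts are assumptions based on the description):

```Dockerfile
# --- Stage 1: prepare Kibana -------------------------------------------------
FROM node:8-alpine AS builder
WORKDIR /app
# Fetch the official OSS snapshot; extraction creates the kibana/ directory.
RUN wget -qO- https://snapshots.elastic.co/downloads/kibana/kibana-oss-6.5.2-SNAPSHOT-linux-x86_64.tar.gz | tar xz
# Use the locally generated, committed manifests to install only what is needed.
COPY package.json yarn.lock kibana/
RUN cd kibana && yarn
# Patched kbn_server (Optimize mixin removed, transpiled with Àlamode).
COPY kbn_server.js kibana/src/server/kbn_server.js
# Strip the public-path marker from the optimised bundles.
RUN find kibana/optimize -name '*.js' -exec sed -i 's/__REPLACE_WITH_PUBLIC_PATH__//g' {} +

# --- Stage 2: assemble the final image ---------------------------------------
FROM node:8-alpine
WORKDIR /app
COPY --from=builder /app/kibana kibana
RUN yarn add @idio/core koa-better-http-proxy
COPY cli.js proxy_server.js ./
COPY static static
CMD ["node", "cli.js"]
```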

Trust But Make Your Own

The purpose of the initial image was to show that it was indeed possible to set up a secure Kibana; however, the opacity of the method did not allow one to fully rely on it. Despite that, people will always come up with innovations that improve the lives of others, and it is their reputation that really allows us to trust external images. In the case of Elastic with their docker.elastic.co, we trust them completely, but because ElasticSearch servers can contain personal information, it would be madness to run the artdeco image against a production ElasticSearch.

Therefore, we want to show how easy it is to make the image by repeating all the steps that we took to build it, using the tool that we provide to install only the necessary dependencies.

The Install-Deps Tool

This tool is fun to use and is based on the same principle as before: spawning the server and waiting for an error. We now extract the filename where the error happened. We also learned that the Kibana server uses a mixin system, in which each mixin is loaded in series, so that an error can happen in a plugin after the log mixin is loaded, in which case it is printed to stdout as JSON rather than to stderr. In this case, we grab the latest line of stdout and parse it to extract the filename; before the log mixin is loaded, we just use the stderr output.

The install-deps tool quickly finds all dependencies using the start-fail-install-start strategy, with a nice UI that reports the locations of files relative to the cwd (in the GIF case, relative to the kibana directory).
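The strategy can be modelled in a few lines of shell (a deliberately simplified sketch of the tool; the real install-deps also parses Kibana's JSON log lines, and a stub script stands in for the Kibana server here so the sketch is self-contained):

```shell
# A stub "server" stands in for Kibana: it fails with a Node-style error
# until every module it needs is listed in installed.txt.
cat > server.sh <<'EOF'
#!/bin/sh
for dep in left-pad lodash; do
  grep -qx "$dep" installed.txt 2>/dev/null || {
    echo "Error: Cannot find module '$dep'" >&2
    exit 1
  }
done
echo "Server started"
EOF
chmod +x server.sh
: > installed.txt

# Start, fail, install the missing module, start again, until it boots.
while true; do
  if out=$(./server.sh 2>&1); then
    echo "$out"   # prints "Server started"
    break
  fi
  missing=$(echo "$out" | sed -n "s/.*Cannot find module '\([^']*\)'.*/\1/p")
  echo "$missing" >> installed.txt   # the real tool runs: yarn add "$missing"
done
```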

Using this script, we generated the package.json and yarn.lock files and committed them into the repository; they are ADDed during the first build stage. Anyone can repeat the process themselves to ensure the validity of those files.

Source Code Acquisition And Modifications

We grab some source code from a URL like https://github.com/elastic/kibana/tree/v6.5.2/ (the last path part depends on the version; v6.5.3 is not tagged) and make changes. We grab the kbn_server source code, which contains import statements, and transpile it using Àlamode (see the explanation below).

Using Alamode To Transpile Project Code

Àlamode makes use of regexes to reliably change import statements into require calls, such that even the destructuring of named imports is kept intact: for example, we get const { namedA, namedB } = require('module') from import { namedA, namedB } from 'module'. All developers can read the compiled code, from which the export statements are removed (and module.exports = Module added at the end of the file, see FIGURE 1 below). The whitespace is kept intact, though, to ensure that it is possible to debug these files.
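As an illustration of the principle (a toy one-line regex, not Àlamode's actual rule set, which handles many more forms):

```shell
# A toy version of the transform: rewrite a named import into a require call.
echo "import { namedA, namedB } from 'module'" \
  | sed 's/^import \(.*\) from \(.*\)$/const \1 = require(\2)/'
# → const { namedA, namedB } = require('module')
```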

By using this modern tool, we wave goodbye to Babel and avoid building the AST of the whole of the code, as well as breaking our JSDoc, because Babel does not care for JSDoc, and packages built with it will not display IDE documentation correctly in other packages in some cases. This is rarely noticed because package authors mostly test their packages in the project directory using the source files. But the bottom line is that Babel for import/export transpilation is evil and Àlamode is the best. It can also be used as a register hook.

Patching src/server/kbn_server

The src/server/kbn_server is only updated to remove the following line: import optimizeMixin from '../optimize'; and the line where the mixin reference is passed to the pipeline.
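That patch can be reproduced mechanically with sed; here is a sketch against a stand-in file (the file's contents are an assumption for illustration, not the real kbn_server.js, where the mixin call actually spans several lines):

```shell
# Stand-in for src/server/kbn_server.js with the two spots to patch.
cat > kbn_server.js <<'EOF'
import configCompleteMixin from './config/complete';
import optimizeMixin from '../optimize';
await this.mixin(configCompleteMixin, optimizeMixin);
EOF
# Delete the import line and drop the mixin from the pipeline call (GNU sed).
sed -i "/import optimizeMixin/d; s/, optimizeMixin//" kbn_server.js
cat kbn_server.js
```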

FIGURE 1: The new transpiled code is not ugly like Babel's, which adds _interopRequireWildcard, accesses properties via names like _http2.default and _usage.usageMixin, and makes the code unreadable.

Summary

In this story, we improved our method and created a 2-stage build process that is easily verifiable. We also updated the install-deps tool to detect when the process of deducing dependencies has completed (the Kibana server started), and gave it a better UI. Now it is really easy to have a secure Kibana with a login interface.


Written by

Art Deco

Node.js tools for the best developer experience. https://artd.eco https://nodetools.co
