NodeJS Console Logs in Docker Containers: Hidden No More

Baruch Speiser
Published in Geek Culture
7 min read · Sep 6, 2021

Background

Lots of developers are used to writing NodeJS programs that rely on console.log(), for better or for worse. There’s nothing quite like the simplicity of running your Express server and simply being able to log whatever you want to the terminal. It is, after all, the oldest debugging trick in the book.

Many a developer has fallen into a pit of pain, however, once their application has been containerized and deployed somewhere. You can’t just redirect the standard output to a file and open up your log files anymore. The server no longer exists as an independent machine that you can access; maybe the container is run through Docker Compose or Kubernetes, or deployed as some other container-based cloud instance that doesn’t give you SSH or RDP access. Who knows if you will ever have access to the running container itself.

If you’re lucky, someone configured the Docker-based deployment to capture the standard output from its containers and put it somewhere you can see it — but let’s be honest; no one ever gets that right when you really need it.

So let’s deal with it on our own and add log retrieval to our containers. This article will walk you through it. If you’re new to NodeJS or Docker, don’t worry — everything is spelled out below. And of course, you might find this technique useful for other platforms, too.

A Simple NodeJS Server Application

First, let’s start with a really simple NodeJS/Express server with an API that returns the time. We’ll take it one file at a time, but for reference, by the time we’re done we’ll have a file & folder structure that looks like this:

logging-docker-example
├── docker-compose.yaml
├── Dockerfile
├── logs
├── node_modules
├── package.json
├── package-lock.json
├── server.js
└── utils
    └── logger.js

Let’s start with the package.json file:

{
  "name": "logging-docker-example",
  "version": "1.0.0",
  "description": "Showcasing a logger API",
  "main": "server.js",
  "author": "Baruch Speiser, Cambium",
  "license": "ISC",
  "dependencies": {
    "dotenv": "^10.0.0",
    "express": "^4.17.1",
    "log4js": "^6.3.0"
  }
}

You’ll see in our dependencies that I’ve added Log4JS (more on that later). Install all of these dependencies with npm install to generate your node_modules folder, and we can then start adding code.

Now we’ll add a server.js file that will serve up our API:

require("dotenv").config();
const express = require('express');
const app = express();

app.get("/api/time", (request, response) => {
  const today = new Date();
  const now = today.toTimeString().substring(0, 8);
  //chop off the timezone information
  const result = {
    time: now
  };
  console.info(`Time requested at ${now}`);
  response.set('Content-Type', 'application/json');
  return response.send(result);
});

const PORT = process.env.PORT || 8080;
app.listen(PORT, () => {
  console.info(`Started on port ${PORT}`);
});

Run the application using node server.js and (assuming your port isn’t blocked), you should see that it started up. Open a browser to http://localhost:8080/api/time, and you should see the current time. In your terminal, where you started the application, you should also see where it logged the request.
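As an aside, the substring(0, 8) trick works because of how toTimeString() formats its output. Isolated from the server, the formatting step is just this (a quick sketch, not one of the project files):

```javascript
// toTimeString() returns something like "14:30:05 GMT+0200 (Central
// European Summer Time)"; the first 8 characters are the HH:MM:SS portion.
const today = new Date();
const now = today.toTimeString().substring(0, 8);
console.log(now); // e.g. "14:30:05"
```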

Simple and straightforward so far. Now let’s add a typical Dockerfile for a NodeJS/Express app that we can build into a container:

FROM node:14-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci --only=production
COPY server.js ./
USER node
ENV PORT=80
EXPOSE 80
CMD ["/bin/ash", "-c", "node server.js"]

For those of you who aren’t familiar with Docker, here’s a basic explanation of what this file is setting up:

  1. Start with an Alpine Linux container that has NodeJS version 14 preinstalled.
  2. Set the location in the container where we’ll put our files.
  3. Copy the package.json and package-lock.json files into the container (to its /usr/src/app folder).
  4. Install only the relevant dependencies needed for running the app (i.e. don’t include any development tools).
  5. Copy over our server.js code.
  6. When the container starts, it should run as the user “node” instead of the root user (this is a security best practice).
  7. Pre-configure the port in our application to the standard HTTP port 80, but allow it to be overridden as a container environment variable.
  8. Let outside requests access port 80 of this container by default.
  9. When the container starts, run our code.
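Steps 7 and 8 hinge on Node’s usual environment-variable fallback pattern. A minimal standalone sketch of how the PORT override behaves (not part of server.js):

```javascript
// The Dockerfile sets ENV PORT=80, and Docker Compose (or docker run -e)
// can override it; locally, with no PORT set, the app falls back to 8080.
process.env.PORT = "80";                        // simulate the container's ENV
const containerPort = process.env.PORT || 8080;
delete process.env.PORT;                        // simulate running locally
const localPort = process.env.PORT || 8080;
console.log(containerPort, localPort); // prints: 80 8080
```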

Now we’ll use a docker-compose.yaml file to build and run it:

version: "3.3"
services:
  logger-example:
    build: .
    ports:
      - "8080:80"

Make sure you’ve stopped the local execution of the app, so that we can run it through Docker Compose instead. Use the command:

docker-compose up --build

(If you’re a Linux user, you may need to do this as sudo.)

Again, use your browser to check the time, and you should see Docker Compose report that the time was requested in your terminal.

So far, so good. Through all of the above, we’ve set up a basic containerized NodeJS/Express application. Now let’s kick it up a notch and add in our special logging mechanics. This will allow us to view application logs from outside the container.

Upgrade the Logging

Our next step is to add a new file to our project, utils/logger.js:

const log4js = require("log4js");
const fs = require('fs');

log4js.configure({
  appenders: {
    "stdout": { type: "stdout" },
    "file": { type: "file", filename: "logs/out.log" }
  },
  categories: {
    default: { appenders: ['stdout', 'file'], level: 'info' }
  }
});

exports.logger = log4js.getLogger();

exports.readLog = () => {
  //readFileSync is synchronous: it returns the file contents directly
  //or throws, so there is no callback; errors are handled by the caller
  return fs.readFileSync('logs/out.log', 'utf8');
};

What we’ve done here is two things: first, we’ve set up a logger from Log4JS to replace our console logging. This logger will simultaneously log the same text to both the console and to a file, logs/out.log. In logging terminology, we refer to this as using multiple appenders — effectively different output channels — for your log. Second, we’ve added a method that will read that log file.

Now let’s go back to our server.js file and modify it — both to use the new logger, and to add a new method that will return the logs:

require("dotenv").config();
const express = require('express');
const { logger, readLog } = require('./utils/logger');
const app = express();

app.get("/api/time", (request, response) => {
  const today = new Date();
  const now = today.toTimeString().substring(0, 8);
  //chop off the timezone information
  const result = {
    time: now
  };
  logger.info(`Time requested at ${now}`);
  response.set('Content-Type', 'application/json');
  return response.send(result);
});

app.get("/api/logs", (request, response) => {
  try {
    const result = readLog();
    response.set('Content-Type', 'text/plain');
    return response.send(result);
  } catch (e) {
    return response.sendStatus(500);
  }
});

const PORT = process.env.PORT || 8080;
app.listen(PORT, () => {
  logger.info(`Started on port ${PORT}`);
});

Just to be sure, create the logs folder first, then run the program. Check the time using your browser, and you should see the request logged to the console as before, though each entry now carries extra details (such as a timestamp, log level, and category) that Log4JS adds by default.

You can now navigate to http://localhost:8080/api/logs, and you should see all of the server logs right in your browser.

The next step is to update our container build. In particular, we need to make sure that it has the proper permissions to create and read the file, so that we don’t run into any file permission issues. Make the following changes to the Dockerfile:

FROM node:14-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci --only=production
RUN mkdir utils \
 && mkdir logs \
 && touch logs/out.log \
 && chmod 777 logs/out.log
COPY server.js ./
COPY utils/logger.js ./utils/
USER node
ENV PORT=80
EXPOSE 80
CMD ["/bin/ash", "-c", "node server.js"]

Run Docker Compose again with the build option, and check your container.

Hold on — Security!

You can’t possibly think it’s a good idea to expose your container’s logs to the world, right? Good, good — so let’s add some security. We’ll use a simple access key that can be configured during container runtime, so that you can change it whenever you need.

First, let’s modify the Log API so that it is effectively password-protected:

app.get("/api/logs", (request, response) => {
  try {
    const auth = "Bearer " + process.env.ACCESS_KEY;
    if (request.headers.authorization === auth) {
      const result = readLog();
      response.set('Content-Type', 'text/plain');
      return response.send(result);
    } else {
      return response.sendStatus(401);
    }
  } catch (e) {
    return response.sendStatus(500);
  }
});

Let’s define the access key in our docker-compose.yaml file:

version: "3.3"
services:
  logger-example:
    build: .
    ports:
      - "8080:80"
    environment:
      ACCESS_KEY: 'VjR8vu5p3QY5cvwP'

Now if you run it and try to open it with your browser, you should get a 401 Unauthorized error. If you use a tool like Postman, however, you can add a bearer token with the access key you configured in Docker Compose and then you can read the logs.
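The authorization check itself boils down to a single string comparison. Pulled out into a standalone function (isAuthorized is a hypothetical name, not in the article’s code), it would look like this:

```javascript
// Rebuild the expected header value and compare it to what the client sent.
// accessKey stands in for process.env.ACCESS_KEY.
function isAuthorized(authorizationHeader, accessKey) {
  return authorizationHeader === "Bearer " + accessKey;
}

console.log(isAuthorized("Bearer VjR8vu5p3QY5cvwP", "VjR8vu5p3QY5cvwP")); // true
console.log(isAuthorized("Bearer wrong-key", "VjR8vu5p3QY5cvwP"));        // false
```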

There you have it — now you can deploy your container in whatever black-box environment you want, and with the correct access key, you can access the logs remotely.

Is this the best way to go about things? Probably not. If you’re using containers on a regular basis, you should probably have a good plan in place to read the standard output of your containers, and otherwise give you all the insight you need into how your applications are actually running. In a pinch, though, sometimes it’s handy to plan ahead and have tricks like this in your back pocket.

CTO of Cambium, Applicable Innovation (https://cambium.co.il). Long-time Java Architect and hobbyist woodworker. https://github.com/blspeiser