SSH Jumphosting to intranet through a VPN container

Eren Manış
9 min read · Apr 1, 2019


Using a VPN container as a jumphost provides convenient, reliable access to a protected application or server.

Background Knowledge

A jumphost is a server which we use as an intermediate/gateway server for reaching a protected server. A simple example would be a protected SSH server: first, we connect to that specific SSH server and reach the intranet; then, we connect to whichever server/application we want in the intranet.

Basic syntax would be

$ ssh -J jumphost:22 protected-server

The SSH manual page describes it as follows:

-J destination: Connect to the target host by first making a ssh connection to the jump host described by destination and then establishing a TCP forwarding to the ultimate destination from there.

Another way of jumphosting is to use the -L option. This option provides port forwarding via an intermediate host. Again, the syntax is as below:

$ ssh -L local_socket:destination:destination_port intermediate_host

-L local_socket:host:hostport: Specifies that connections to the given TCP port or Unix socket on the local (client) host are to be forwarded to the given host and port, or Unix socket, on the remote side.

Let's say we have an Elasticsearch instance running on our destination. Through the jumphost, using the -L option, we forward it to our localhost and we can connect to Elasticsearch by curling localhost:local_port.
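For example, a minimal sketch with placeholder names, assuming Elasticsearch listens on port 9200 on a host called protected-es that is only reachable from the jumphost:

$ ssh -L 9200:protected-es:9200 jumphost
# In another local terminal, this request now goes through the jumphost
# and reaches the protected Elasticsearch
$ curl localhost:9200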

So far so good.

Idea: Using VPN container as a jumphost

It is common to use a VPN for accessing a protected network. VPN provides encryption and anonymity along with other security benefits. In a typical engineer's night, there may be lots of reasons to connect to the company's network through VPN: to check that everything is running, to commit code, to review code, to read some documentation, etc…

Most of the time, what we do is open a VPN client (or simply a terminal), connect with our config, add IP routes, check if it connects, and proceed to the intranet. And it is not uncommon to see it fail…

Let's get to the point. What if we containerized the VPN client and ran an SSH server inside? Run the container with docker run, use it as a jumphost, and connect to wherever we want in the intranet without messing up our local configuration and connection. Nothing is affected, everything is up in 5 seconds. Sounds good!

For such purposes, containers are beautiful.

  • It doesn't mess up the local configuration. For example, I need to reset my Wi-Fi every time I disconnect from VPN because the client can't revert some of my routing configuration.
  • We can be sure that once it works, it works every time. Especially with OpenVPN, I sometimes need to try multiple times to get a successful connection.
  • Last but not least, it becomes ready in only a few seconds.

Let's jump to the implementation of our idea.

No need to reinvent the wheel…

What we need is a container that runs a VPN client and an SSH server. There are already a number of repositories on GitHub providing what we need, at least partially, as containers. We don't need to reinvent the wheel. I will use two of these repositories: one for containerizing the VPN client and another for the OpenSSH server. So, what I will do is merge these two repositories to achieve what we want. Big thanks to dperson and danielguerra69 for providing these repositories.

Running multiple processes in a single container

As we know, a container runs a single process. If we want to run multiple applications, we split them into multiple containers; Docker suggests that running multiple services in a single container should be avoided.

For our use case, we need a plain, straightforward SSH server and another process that connects to the VPN. Splitting these into two containers adds complexity: our container will connect to the VPN, so its network will be different from the local network. If we attach our SSH server container to that network, how are we going to make an SSH request to it? Maybe we should connect two network interfaces, one for binding the SSH port and another for the intranet connection, and so on… Too complex. I want our solution to be simpler than what we already have (without a container). For the sake of simplicity, I will run multiple processes in a single container, and I think this way of using it is reasonable, hence acceptable.

Supervisord is a simple application which runs child processes and supervises them (checks their health, logs their output, etc.). We name this config supervisord.conf with the content:

[supervisord]
nodaemon=true
childlogdir=/log
logfile=/log/supervisord.log
[program:sshd]
command=/entrypoint.sh
stdout_logfile=/log/sshd_out.log
redirect_stderr=true
[program:openvpn]
command=openvpn --config /vpn/config/config.ovpn
stdout_logfile=/log/openvpn_out.log
redirect_stderr=true

The file is self-explanatory. We have two program blocks, plus the supervisord block itself.

  • [program:sshd] is the SSH daemon, defining which command will be run and where the log file will be located. We don't want separate error and stdout files, so we redirect stderr to the stdout file.
  • [program:openvpn] is the OpenVPN client, defining the command and the same logging options.
  • The parent block, [supervisord], configures supervisord itself. We don't want it to run as a daemon because we will attach this process to the container in the Dockerfile, so we set nodaemon=true.

Let's quickly take a look at entrypoint.sh where SSH server starts.

#!/bin/ash
# generate host keys if not present
ssh-keygen -A
# do not detach (-D), log to stderr (-e), passthrough other arguments
exec /usr/sbin/sshd -D -e "$@"

We generate the host keys required by the SSH server and start the SSH server process. I saved this file into a new directory called rootfs, so if I want to add another file to the root of the image, I can simply drop it there and rebuild the container without touching the Dockerfile.
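With that layout, the build context might look roughly like this (the config and log directories are created later, at run time):

.
├── Dockerfile
├── supervisord.conf
└── rootfs
    └── entrypoint.sh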

Dockerfile

From there, everything should be easily manageable. What we need to do is copy the necessary files, install the necessary packages, do some configuration, and run supervisord. Here is the Dockerfile.
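A minimal sketch of what it contains, assuming Alpine package names and default paths (the exact file may differ slightly):

FROM alpine:3.7

# Copy entrypoint.sh (and anything else under rootfs/) to the image root
COPY rootfs/ /

# SSH server, OpenVPN client and supervisord
RUN apk add --no-cache openssh openvpn supervisor && \
    chmod +x /entrypoint.sh

# Supervisord won't create its log directory by itself, so create it up front
RUN mkdir -p /log /vpn/config

# Default root password and root login over SSH (password assumed to be 'root')
RUN echo "root:root" | chpasswd && \
    sed -i 's/^#*PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config

# Supervisord configuration at its default path
COPY supervisord.conf /etc/supervisord.conf

EXPOSE 22
VOLUME ["/vpn/config", "/log"]

CMD ["supervisord", "-c", "/etc/supervisord.conf"]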

We pull the alpine:3.7 image, copy the rootfs contents to the root directory (where entrypoint.sh ends up), and install the necessary packages.

Next, we create the default directories. Supervisord doesn't create the log directory itself and throws an error that there is no such log file.

Then we set a default password for the root user and set PermitRootLogin to yes, so we'll have access to the root user through SSH.

Again, we copy our supervisord configuration to its default path. We expose port 22 for SSH. We declare the volumes we'll be attaching from our local machine. And finally, we start our process by running supervisord.

Let's build our image.

$ docker build -t databoss/ssh-vpn-access:v1.0.1 .

Done. The next step is to run our container. Don't forget, we've defined two volumes and a port to expose. Also, for OpenVPN we need to attach a virtual network device and tell the container that it is allowed to modify network settings.

$ mkdir log; export VPN_LOG=$(pwd)/log
$ mkdir config; export VPN_CONFIG=$(pwd)/config # Includes the .ovpn file
$ docker run --rm -d -p 2222:22 --cap-add NET_ADMIN --device /dev/net/tun --volume $VPN_CONFIG:/vpn/config --volume $VPN_LOG:/log --name vpn databoss/ssh-vpn-access:v1.0.1

Let's closely look at what we're doing here.

  • Set env variable named VPN_LOG where log files are placed.
  • Set env variable named VPN_CONFIG where our .ovpn files reside.

And for the docker run part.

  • --rm says that when this container dies, it should simply be removed. (Optional)
  • -d makes it run in detached mode. We already have the logs as a volume.
  • -p 2222:22 forwards port 22 in the container, which is the SSH server, to port 2222 on our localhost. So we'll be able to establish an SSH session with ssh -p 2222 root@localhost
  • --cap-add NET_ADMIN gives this container the capability to do network-related operations.
  • --device /dev/net/tun specifies the devices accessible by the container. I configured my OpenVPN server to use a network tunnel, so we need a virtual tunnel device to configure and connect. (See TUN/TAP differences)
  • --volume $VPN_CONFIG:/vpn/config attaches our volume (the directory where the .ovpn file resides) inside the container. Our container looks for the config.ovpn file in the /vpn/config directory.
  • --volume $VPN_LOG:/log attaches the logs.
  • --name vpn names our container.
  • databoss/ssh-vpn-access:v1.0.1 is the image we built above.

Tip for the .ovpn credentials: In my case, my .ovpn asks for my private key password. Since we run it in a container, we can redirect this prompt to a file. You can add an askpass /vpn/config/pass.txt line to your .ovpn file and put pass.txt alongside the .ovpn file, containing your private key password. If you're using the username and password authentication method, you need to add the line auth-user-pass /vpn/config/pass.txt to your .ovpn file instead, and pass.txt should contain two lines: username and password, respectively.
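For example, with the username/password method the relevant pieces might look like this (the path matches the container volume above; the credentials are placeholders):

# line added to config.ovpn
auth-user-pass /vpn/config/pass.txt

# pass.txt: exactly two lines, username then password
my-vpn-username
my-vpn-password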

Let's look at the logs

$ docker logs vpn # Same with 'tail $VPN_LOG/supervisord.log'
INFO supervisord started with pid 1
INFO spawned: 'sshd' with pid 11
INFO spawned: 'openvpn' with pid 12
INFO success: sshd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
INFO success: openvpn entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)

Great! Our main process is supervisord, running with pid 1, and it also started the SSH daemon and the OpenVPN client. Let's look at the other log files. Since we mounted the log folder, we can read them from our host.

$ tail $VPN_LOG/sshd_out.log
ssh-keygen: generating new host keys: RSA DSA ECDSA ED25519
Server listening on 0.0.0.0 port 22.
Server listening on :: port 22.

Our SSH server is up too. We should be able to connect to our container through ssh -p 2222 root@localhost with the password root. Before doing that, let's look at the OpenVPN logs too.

$ tail $VPN_LOG/openvpn_out.log
...
TUN/TAP device tun0 opened
do_ifconfig, tt->did_ifconfig_ipv6_setup=0
/sbin/ip link set dev tun0 up mtu 1500
/sbin/ip addr add dev tun0 local 192.168.255.14 peer 192.168.255.13
Initialization Sequence Completed

Our virtual tunnel device is up and running and the OpenVPN initialization completed successfully. We should be able to ping our servers in the intranet after we establish an SSH session to the container.
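As a quick sanity check (192.168.1.42 is one of the intranet hosts we'll forward to below):

$ ssh -p 2222 root@localhost   # password: root
container$ ping -c 1 192.168.1.42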

Tip: The OpenVPN log file is the most important one. If you didn't mistype anything, SSH and supervisord will work as expected, but depending on your configuration and credentials, OpenVPN may fail to complete, and this log file will lead you to the problem.

Nice! We built our container and successfully connected it to our intranet through VPN. It's time to forward a couple of ports to actually make use of it. For my use case, I'd like to connect to Elasticsearch to monitor our cluster. I'd also like to access Jira to review some issues.

$ ssh -L 8080:192.168.1.43:8080 -L 9200:192.168.1.42:9200 root@localhost -p 2222 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null

Let's go over the command step by step

  • -L 8080:192.168.1.43:8080 means that we forward our localhost:8080 to 192.168.1.43:8080 (our protected Jira server). So whenever I open localhost:8080 in my browser, it is redirected to our protected Jira server through our VPN container.
  • -L 9200:192.168.1.42:9200 means that we forward our localhost:9200 to 192.168.1.42:9200 (our protected Elasticsearch server). So whenever I curl localhost:9200, it is redirected to our protected Elasticsearch server through our VPN container.
  • root@localhost -p 2222 is our jumphost, which is our container. We published port 2222 on our local machine and mapped it to the container's port 22. The -L port forwarding options above jump to this container and then jump one more time to whichever server we want (in our case, two servers with two different port forwardings).
  • -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null are optional but strongly recommended. As you saw above, our container starts by running the entrypoint.sh script, which generates host keys for the SSH server. When we establish an SSH session, the ssh command checks that host key by default and warns us (you may remember the part where you type 'yes' or 'no' when you connect to a new server). If you proceed and type 'yes', it appends the host key to the local .ssh/known_hosts file. The point is, if someone secretly replaces the server for harmful reasons and assigns it the same IP, the ssh command warns you that the key does not match the stored host key and refuses to connect. In our case, we generate the host key every time supervisord starts the SSH server, so the container's host key will be different every time we start the supervisord process, and we do not want ssh to reject the connection because this is our expected behavior. More information about host key checking can be found in the ssh_config manual page.

Now I open a terminal on my local machine.

Eren-Local$ curl localhost:9200
{
"name" : "es71",
"cluster_name" : "databoss",
"cluster_uuid" : "xA5d74XTEPLlDZdb",
"version" : {
...
},
"tagline" : "You Know, for Search"
}

Now, as you can see, when you connect to the forwarded local ports, you reach the protected services successfully. In my case, one is Elasticsearch and one is Jira; in your case, it may be an application port or a database port.

Happy forwarding!
