Viewing Docker container HTTPS traffic

Using ngrep and nginx to debug HTTPS like a pro!

Imagine you’re investigating a strange issue with a web service that talks to other services using HTTP over TLS. You want to inspect the HTTP calls it makes and the responses it gets back.

…and it’s 2017. That’s why you’re running the service in a shiny container!

The first obvious approach would be to increase logging. This works great if the service you’re debugging lets you do it. But what if it doesn’t? What if you’re not allowed to modify the source code and add logging yourself? In this article, we’ll see how to view network traffic between containers, even when it’s encrypted with TLS, using ngrep and nginx, without touching the code!

Scenario

We have two services talking to each other via HTTP over an encrypted TLS connection. We assume that Service A is running locally, whereas Service B is deployed in the cloud and accessible over HTTPS.

NOTE: Steps in this article also apply when Service A is not containerized.

Our goal is to see raw HTTP requests made by Service A along with responses from Service B.

Tools

To achieve our goal, we’ll use ngrep and nginx.

ngrep

ngrep (network grep) is a tool similar to tcpdump, used for sniffing network traffic. We’ll use ngrep in a limited scope here, just to see what HTTP requests our service makes and what responses it receives. Capturing HTTPS traffic directly with ngrep makes no sense, since we’d see nothing useful besides encrypted garbage.

nginx

That’s why we’ll use nginx! We’ll run it as a proxy: Service A will connect to the nginx server over plain HTTP, and nginx will forward the request to Service B over HTTPS. In fact, instead of nginx you can use any other server that can act as a proxy.
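The actual configuration ships inside the image we’ll use later, but conceptually it boils down to something like the sketch below. The server name is a placeholder, and the proxy_set_header and proxy_ssl_server_name lines are assumptions about what a sensible config would include, not a copy of the image’s config:

```shell
# Minimal sketch of the nginx config behind this idea: listen for plain
# HTTP on 8090, re-encrypt, and forward everything to Service B.
# "Service-B-in-the-cloud.com" is a placeholder, not a real upstream.
cat > nginx-sketch.conf <<'EOF'
server {
    listen 8090;

    location / {
        proxy_pass https://Service-B-in-the-cloud.com/;
        proxy_set_header Host Service-B-in-the-cloud.com;
        proxy_ssl_server_name on;  # send SNI during the upstream TLS handshake
    }
}
EOF
grep proxy_pass nginx-sketch.conf
```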

Let’s see it in action!

Steps

  1. We’ll use an image with nginx and ngrep installed to run our so-called proxy container (the green one in the diagram above).
  2. We’ll configure nginx in the proxy container to proxy all incoming requests using HTTPS to Service B deployed in the cloud.
  3. We’ll point Service A to the proxy container.
  4. We’ll execute an interactive bash shell on the proxy container (docker exec -it ... bash) and use ngrep to see the requests.

Step 1. Run the proxy container

To run the proxy container, we use an image with nginx and ngrep and expose port 8090, because nginx in this image is configured to listen on that port by default. The container is parametrized with a URL passed as an environment variable, which tells nginx where to proxy the incoming traffic. In our scenario, the URL should point to Service B and must contain the trailing slash!

docker run --name proxy-container -d -p 8090:8090 \
-e URL=https://Service-B-in-the-cloud.com/ \
0000bartek/nginx-ngrep
IMPORTANT: Don’t forget the trailing slash at the end of the URL!

Step 2. Run your service

OK, the proxy container is up and running. You can now start your service (Service A) and point it at the proxy container.

If you’re running Service A outside a container, the proxy container will be accessible at localhost:8090.

If you’re running Service A in a container, you can run it in the host network (for example, by adding the --net=host parameter to the docker run command). This way, the proxy container is also accessible at localhost:8090.
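If you’d rather not share the host network, a user-defined bridge network works too. This is a sketch; the container name proxy-container and the curl image come from this article’s examples, while the network name debug-net is arbitrary:

```shell
# Create a network and attach the already-running proxy container to it.
docker network create debug-net
docker network connect debug-net proxy-container

# Service A (here: plain curl, standing in for the real service) joins the
# same network and reaches the proxy by container name instead of localhost.
docker run --rm --net=debug-net appropriate/curl proxy-container:8090
```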

Step 3. Execute an interactive bash shell on the proxy container

It’s the final step! We’ll now access the proxy container and run ngrep to see the requests.

docker exec -it proxy-container bash
bash-4.3#

Now we’re inside the container, and we’ll use ngrep to listen for traffic on port 8090. The -q flag suppresses ngrep’s “#” progress output, and -W byline prints each line of the payload on its own line, which keeps HTTP headers readable.

bash-4.3# ngrep -q -W byline port 8090
interface: eth0 (172.17.0.0/255.255.0.0)
filter: (ip) and ( port 8090 )

Now you can play with Service A and see what calls are being made! For the sake of simplicity, in this article Service A will be just a container with curl. We’ll use the appropriate/curl Docker image to “fake” Service A sending an HTTP request:

docker run --rm -it --net=host --name curl \
appropriate/curl localhost:8090

After this call, the output from ngrep will be the following:

T 172.17.0.1:40602 -> 172.17.0.2:8090 [AP]
GET / HTTP/1.1.
Host: localhost:8090.
User-Agent: curl/7.47.0.
Accept: */*.
.
T 172.17.0.2:8090 -> 172.17.0.1:40602 [AP]
HTTP/1.1 200 OK.
Server: nginx/1.13.3.
Date: Wed, 06 Sep 2017 19:06:53 GMT.
Content-Type: application/json; charset=utf-8.
Content-Length: 134.
Connection: keep-alive.
Access-Control-Allow-Credentials: .
Access-Control-Allow-Headers: .
Access-Control-Allow-Methods: .
Access-Control-Allow-Origin: .
Access-Control-Expose-Headers: .
Vary: Accept-Encoding.
.
{"headers":{"host":"postman-echo.com","accept":"*/*","user-agent":"curl/7.47.0","x-forwarded-port":"443","x-forwarded-proto":"https"}}

We can clearly see the whole request and response. In this example, nginx was pointing to https://postman-echo.com/headers. Note the x-forwarded-proto: https header echoed back in the body: it is added on postman-echo’s side and shows that the proxied request actually arrived there over HTTPS.

Summary

In this article, we’ve seen how to use a Docker container acting as a proxy to see the conversation Service A has with Service B, which is deployed in the cloud and accessible via HTTPS.

The presented solution can be useful for remote debugging. Just deploy the proxy container on your ECS/Kubernetes/Marathon/… and you can view the traffic instantly.
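For example, on Kubernetes the last step might look like this (the pod name proxy-pod is hypothetical; substitute your actual proxy pod):

```shell
# Attach to the proxy pod and watch the plaintext side of the traffic.
kubectl exec -it proxy-pod -- ngrep -q -W byline port 8090
```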

If you’re curious how the proxy image is built or how nginx is configured, go to this GitHub repo (btw. pull requests with improvements are very welcome! :)).

That would be it! Thanks for reading and happy debugging :)!
