Mocking domain names in a maintainable and scalable way

Artem Titkov
7 min read · Mar 27, 2019


Usually when we develop applications or services that have a web UI, we need to preview and inspect their publicly exposed functionality while development is ongoing. And the first task anyone is given when they join a new development team is to make some crazy custom adjustments to their host machine in order to preview and test features of that application locally. This can be achieved in multiple ways of varying complexity, but it is rarely done in a maintainable way that can scale across a development team.

This guide goes through some of those techniques, briefly explains why they are not suitable, and resolves their issues with the use of mitm-nginx-proxy-companion.

In this guide we create dockerized web services and make them accessible through the browser. And although the steps assume Linux, where Docker exposes ports on 127.0.0.1, exactly the same steps work on OSX and Windows, where Docker exposes ports on a virtual machine.

Create some web services

As a starting point we will create two simple Nginx instances that will act as two different web services. We will use a docker-compose.yml file to build our stack and two different html files in order to be able to differentiate between those Nginx instances.
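A minimal sketch of such a docker-compose.yml could look like this (the image tag and html file names are assumptions):

```yaml
version: "3"

services:
  example-one:
    image: nginx:alpine
    volumes:
      # serve a distinct page so we can tell the two instances apart
      - ./example-one.html:/usr/share/nginx/html/index.html:ro

  example-two:
    image: nginx:alpine
    volumes:
      - ./example-two.html:/usr/share/nginx/html/index.html:ro
```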

With docker-compose up we can now create and start our dockerized Nginx instances. Now that our “web services” are running, it would be nice if we could access them through a browser to see their content.

How can we access our Nginx instances?

In order to do so, we somehow need to expose the containers that run our Nginx instances and “route” our browser to those Docker containers. We can achieve this in multiple ways, but I think the most common ones, or the first that come to mind, would be:

1. Expose ports on each container. As a result the services would be available on some random or dedicated ports on our host machine, like 127.0.0.1:8000 and 127.0.0.1:9000 (a sketch follows after this list). It works, but it’s not good enough because

  • It doesn’t scale to multiple services, as after a few days we will not remember which port belongs to which container
  • We don’t usually use ports to access web services, we use domain names, and we want our development environment to replicate our production environment as closely as possible

2. Go old-school and apply bad techniques from “web2000”: docker inspect each container to get its IP and hardcode it in /etc/hosts

  • Super hacky, and it breaks after a recreation of any of those containers, as they will get new IPs

3. Back in the day I used webmin to configure locally installed services like DNS through a web UI. It works quite nicely, but

  • We are living in a dockerized era and usually we don’t want to pollute our host machine with server setups like webmin and DNS
  • It’s tedious to replicate on multiple machines across a development team
  • More manual configuration = more things to go wrong
  • One might not have access to install those apps on the host machine

4. Use an all-included proxy like Telerik Fiddler. It also works quite nicely, but

  • It’s a standalone app, so once again it’s extraneous stuff to permanently install on the host machine
  • The Linux and OSX versions are still in beta
  • One might not have access to install it on the host machine
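For reference, option 1 boils down to port mappings like these in the compose file (the host ports are arbitrary):

```yaml
services:
  example-one:
    image: nginx:alpine
    ports:
      - "8000:80"   # http://127.0.0.1:8000
  example-two:
    image: nginx:alpine
    ports:
      - "9000:80"   # http://127.0.0.1:9000

# Option 2 amounts to looking up container IPs by hand, e.g.
#   docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container>
# and hardcoding the result in /etc/hosts, which breaks on every recreation.
```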

Can a reverse proxy save the day?

One tool that can help us in this situation is a reverse proxy, as it can “route” our requests to different services based on some predefined rules. jwilder/nginx-proxy is a dockerized, efficient reverse proxy with a small footprint that can also be used in production.

Let’s add jwilder/nginx-proxy to our docker-compose.yml stack.
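A sketch of that addition, assuming the standard nginx-proxy setup (VIRTUAL_HOST environment variables and the Docker socket mounted read-only):

```yaml
version: "3"

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      # nginx-proxy watches the Docker socket and generates its config
      # from the VIRTUAL_HOST variables of running containers
      - /var/run/docker.sock:/tmp/docker.sock:ro

  example-one:
    image: nginx:alpine
    environment:
      - VIRTUAL_HOST=example-one.com
    volumes:
      - ./example-one.html:/usr/share/nginx/html/index.html:ro

  example-two:
    image: nginx:alpine
    environment:
      - VIRTUAL_HOST=example-two.net
    volumes:
      - ./example-two.html:/usr/share/nginx/html/index.html:ro
```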

After that step, and after pointing the domains at our host in /etc/hosts (for example 127.0.0.1 example-one.com example-two.net), we should be able to visit http://example-one.com/ and http://example-two.net/.

If we plan to integrate our web services with 3rd parties or other “wrappers” like webviews, most probably we will need to add SSL and make our web services available over HTTPS. One great perk of using jwilder/nginx-proxy in production is that we can easily add Let’s Encrypt — Free SSL/TLS Certificates to our services by using letsencrypt-nginx-proxy-companion.

Although we are still in “local development mode”, in order to develop our integrations with 3rd parties we must already be able to access our services through HTTPS. This functionality can be achieved with the use of self-signed certificates; in our case with the help of paulczar/omgwtfssl.
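A sketch of that step: omgwtfssl generates a self-signed certificate into a shared volume, from which nginx-proxy picks up certificates named after each VIRTUAL_HOST (the service name ssl-example-one is illustrative):

```yaml
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - cert:/etc/nginx/certs:ro   # nginx-proxy looks up <VIRTUAL_HOST>.crt/.key here

  # one-shot container that generates a self-signed certificate and exits
  ssl-example-one:
    image: paulczar/omgwtfssl
    environment:
      - SSL_SUBJECT=example-one.com
      - SSL_KEY=/certs/example-one.com.key
      - SSL_CERT=/certs/example-one.com.crt
    volumes:
      - cert:/certs

volumes:
  cert:
```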

Now we can also visit https://example-one.com/ and https://example-two.net/, but the browser will not let us proceed and instead will show a warning about the self-signed certificate. We can add an exception for that certificate, or copy the certificate from the cert Docker volume and add it permanently into our browser’s certificate storage.

Cool, but this setup is still not good enough because:

1. We are touching /etc/hosts, a system configuration file on the host machine, which means that

  • We can break the networking in our system
  • One might not have access to edit this file at all

2. After every new web service addition to our stack we will have to update /etc/hosts again

3. If we want to have multiple self-signed certificates for different web services, we would need to add them one by one into the browser’s certificate storage

Docker-based lookup for hostnames

The previous situation can be resolved with the use of mitm-nginx-proxy-companion, which contains jderusse/docker-dns-gen (an auto-configured DNS server) and mitmproxy (a man-in-the-middle proxy for HTTP and HTTPS). The only extra part that is needed is a “browser proxy extension” which will send requests through that proxy. After the setup, the whole “route” of our web requests will look like this:

1. We try to access a local development domain in a browser

2. The proxy extension forwards that request to mitmproxy instead of the “real” internet

3. mitmproxy tries to resolve the domain name through the DNS server in the same container

  • If the domain is not a “local” one, it will forward the request to the “real” internet
  • But if the domain is a “local” one, it will forward the request to the reverse proxy

4. The reverse proxy in turn forwards the request to the appropriate container that includes the service we want to access

Let’s add mitm-nginx-proxy-companion to our docker-compose.yml stack and clean up /etc/hosts from the previously added entries.
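A sketch of that addition; the image name below is an assumption, so check the project’s README for the published one:

```yaml
services:
  # ... nginx-proxy, the web services and the certificate generation stay as before ...

  mitm-proxy:
    image: artemkloko/mitm-nginx-proxy-companion   # image name assumed; see the project README
    ports:
      - "127.0.0.1:8080:8080"   # mitmproxy endpoint for the browser proxy extension
    volumes:
      # the embedded dns-gen watches running containers to answer "local" domain lookups
      - /var/run/docker.sock:/var/run/docker.sock:ro
```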

We must also install a browser proxy extension, with the proxy address set to 127.0.0.1:8080. Usually I create a separate browser profile and install the extension there, so it does not interfere with my normal internet browsing. After that step, we should be able to visit http://example-one.com/ and http://example-two.net/.

But we still cannot visit any proxied HTTPS web services. To fix that, in our proxied browser we should visit http://mitm.it/ and click on “Other” in order to get the certificate authority (CA) certificate that mitmproxy uses to re-encrypt our requests. The web page we just visited is not a “real” one, but is served by mitmproxy itself, and the certificate we downloaded can be found in the mitm-nginx-proxy-companion container, inside the /home/mitmproxy/.mitmproxy directory.

Finally, we must add that certificate to our browser’s certificate storage as a CA certificate and allow it to be used for identifying websites. Now we should be able to visit proxied HTTPS web services, as well as https://example-one.com/ and https://example-two.net/.

Browser requests inspection

Another capability of mitmproxy is the inspection of the requests that go through the proxy, both HTTP and HTTPS. In order to enable it, we need to slightly modify docker-compose.yml.
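A sketch of that modification, assuming the companion image runs mitmproxy’s web interface (mitmweb) on its default port 8081, so that all we need is to publish that port as well:

```yaml
services:
  mitm-proxy:
    image: artemkloko/mitm-nginx-proxy-companion   # image name assumed, as before
    ports:
      - "127.0.0.1:8080:8080"   # proxy endpoint
      - "127.0.0.1:8081:8081"   # mitmweb inspection UI
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
```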

Now if we visit http://127.0.0.1:8081/ we should see a web UI, which gives us the ability to inspect the requests that are going through mitmproxy. This is especially useful for debugging; some common use-cases are:

  • Multiple redirects through web services
  • Background or “hidden” requests

What did we achieve?

To sum it up, this setup gives us multiple perks:

  • Single-file configuration
  • No adjustments or installs on the host OS
  • It’s git-compatible and can easily scale across a development team
  • HTTP/S request inspection for debugging
  • A whole setup that is much closer to “production mode” than any of the alternatives

Of course it’s still not perfect: it requires a browser proxy extension, not all browsers support extensions, and the web UI of mitmproxy is not 100% complete.

There is a small chance that those trade-offs will be a blocker for your team, but they are still not as bad as having to replicate and maintain DNS and /etc/hosts settings across a fast-paced development team.

Closing notes

The other options available on http://mitm.it/ are suitable for using mitm-nginx-proxy-companion as a system proxy. This way we would be able to inspect all the traffic of our host machine, although I don’t recommend it, as mitmproxy is not optimized to handle the enormous amount of requests that a host system generates.

For more details on how mitmproxy operates, visit the official mitmproxy documentation.

If you are interested in selectively mocking web resources for web development and fun, check out this example.
