Simplifying Local Development in Kubernetes with Telepresence
Telepresence is a versatile CNCF sandbox project that aims to provide “Fast, local development for Kubernetes and OpenShift microservices.”
I personally believe that Telepresence is more than that — it simplifies configuration management for local development, allowing developers to work as though the application were running in the remote environment.
This article contains two sections: an introduction to the Telepresence shell and swapping remote deployments. The goal is to, from a functional standpoint, outline when you would use the Telepresence shell and how it can be used to bridge the gap between remote and local development configuration.
Making use of the Telepresence shell
You just joined a team working on a shopping website. There are three non-public services running in Kubernetes that you want to use: Orders, Products and Users. Since the services are only available inside the cluster, you open three kubectl port-forwards to make the services accessible for local development.
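Such a setup might look like this (the service names and ports are assumptions for illustration):

```shell
# Hypothetical service names and local ports; each forward is a separate
# process that has to be kept alive manually
kubectl port-forward svc/orders 8081:80 &
kubectl port-forward svc/products 8082:80 &
kubectl port-forward svc/users 8083:80 &
```

Each tunnel has to be restarted by hand whenever it dies or times out.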
The first and most obvious problem with the setup above is the maintenance of the tunnels. Depending on Node configuration, the connection will time out and close if it’s not actively being used (on AKS, timeout happens after 5 minutes).
A more serious problem is that of environment configuration drift — you now have a local configuration and a remote configuration. The remote configuration will use the DNS of the services, and the local configuration will use the localhost URLs.
With Telepresence, there is no need to make this local/remote distinction.
Starting a shell
To start a new Telepresence shell, run:
telepresence --run-shell
And… it seems like nothing happened! The first time I ran Telepresence, this really threw me off — I didn’t really know what was going on.
Under the surface, there is now a proxy running inside the cluster that makes the local shell act as if it were a Pod inside the Kubernetes cluster.
If you run printenv in the local shell, you will notice environment variables such as KUBERNETES_SERVICE_HOST that are normally injected into Kubernetes pods.
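A quick way to confirm this from inside the Telepresence shell:

```shell
# Inside the Telepresence shell: list the variables that Kubernetes
# normally injects into pods
printenv | grep KUBERNETES_
```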
To see the new deployment and pod created by Telepresence, run:
kubectl get all -l "telepresence"
The pod that proxies the environment runs the Docker image datawire/telepresence-k8s. This deployment is cleaned up as soon as you exit the Telepresence shell.
Going back to the shopping example, there would no longer be a need to use localhost addresses for the port-forwarded services. Instead, the URLs could be the same ones you’d use inside the Kubernetes environment.
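As a sketch (service names, ports and paths are assumptions), the difference looks like this:

```shell
# Without Telepresence: port-forwarded localhost addresses
curl http://localhost:8081/orders

# Inside a Telepresence shell: the same service DNS name as in the cluster
curl http://orders/orders
```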
Note: there are some limitations to using --method vpn-tcp (the default proxy method). For more info, visit the docs.
Running the shell inside a Docker container
At some point before the first deployment, you’ll want to test the Docker image of your application. Just like telepresence --run-shell starts a new shell, --docker-run gives you a local Docker container whose environment is proxied into the Kubernetes cluster:
docker build . -t shopweb
telepresence --docker-run --rm -it \
  -p 3000:3000 \
  -v $(pwd):/path/to/workdir \
  shopweb
Note how there is no need to push the image to a remote registry. As long as the local Telepresence container starts successfully, you’ll be able to try out the image as if it was in the cluster.
Running a container changes the proxy method from vpn-tcp to container.
Testing things in your cluster
With Telepresence in your toolbelt, there is no need to run commands such as this one:
kubectl run -it --rm --restart=Never --image=pstauffer/curl test
Instead, you may just as well start a new Telepresence shell and enjoy the availability of all the tools you have on your local machine:
telepresence --run bash
You can also run Postman to query your services directly using their internal domain names:
telepresence --run postman
# on macOS
telepresence --run open /Applications/Postman.app
Swapping remote deployments
The shopping application has grown and now consists of three components: the web client responsible for serving web pages, a GraphQL API acting as the interface against the back-end services, and a Redis session store.
To manage deployment and configuration of the application, releases are now done via Helm. The Helm chart manages configuration relating to connectivity between services within the chart.
For example, when Alice installs a Helm release with the name alice-shopweb, its Redis store will, by convention, be given the name alice-shopweb-redis. The Redis hostname is put in the env section of both the Web and GraphQL pod specifications.
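As a sketch (the chart path is an assumption, and the naming convention follows the example above):

```shell
# Install a release; the chart derives resource names from the release name,
# e.g. alice-shopweb-redis for the Redis store
helm install alice-shopweb ./charts/shopweb
```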
Let’s consider local development with the setup above. A traditional approach would be to run Docker Compose on the host network, containing the GraphQL Server, Web Server and Redis Cache. Then, either the dependencies towards back-end services would be mocked, or the containers would use the host network to gain access to port-forwards into Kubernetes.
The local development setup with Docker Compose would mirror the remote one, with its own copy of the service wiring.
Whenever configuration changes occur, both the Docker Compose configuration and the values file used by Helm would need to be updated. Also, host names would differ depending on whether the application was running in Kubernetes or locally.
This is where the feature of deployment swapping comes into play.
Telepresence has the ability to temporarily replace a remote deployment with a local shell, container or process, inheriting the environment of the remote deployment.
For example, assuming that the Web Server deployment is called shopweb and its pods listen on port 3000, we can swap the remote deployment with:
telepresence --swap-deployment shopweb \
  --expose 3000 \
  --run-shell
Hitting the web service now actually hits your local Telepresence shell. This means that anything you run locally will be available to whoever uses the service in the cluster. If the service is public, then you’re currently serving whoever hits the ingress from your local machine.
Deployment swapping under the hood
Inside Kubernetes, the Web setup has now changed.
In essence, Telepresence copied the existing deployment, replaced the image with datawire/telepresence-k8s, and set the number of replicas in the original deployment to 0. The Telepresence shell then awaits an exit signal, at which point it cleans up its own deployment and restores the replicas of the original one.
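You can observe this while the swap is active (the deployment name follows the earlier example):

```shell
# While `telepresence --swap-deployment shopweb` is running, the original
# deployment sits at 0 replicas next to the Telepresence-created copy
kubectl get deployments
kubectl get deployment shopweb -o jsonpath='{.spec.replicas}'
```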
Replacing the deployment with a Docker container
Just as in the previous section, you may replace the --run flag with --docker-run and pass in any arguments you’d normally pass to docker run. Just be aware that you will no longer use vpn-tcp, but rather the container proxy method, which naturally incurs some overhead. You can read more about proxying methods here.
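Combining the two (image name and port are assumptions carried over from the earlier examples):

```shell
# Swap the remote shopweb deployment for a locally built container
docker build . -t shopweb
telepresence --swap-deployment shopweb \
  --expose 3000 \
  --docker-run --rm -it shopweb
```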
Updating the environment
Because developers can inherit the remote environment of the deployment and make small adjustments when running local development, you should aim to get a deployment into Kubernetes as early as possible. The deployment then becomes the source of truth for the environment, both local and remote.
In order to make permanent changes to the local environment, the developer actually changes the remote configuration of the deployment.
For example, if there is a need to access credentials for a service, the DevOps engineer can put those credentials into the deployment configuration itself and voilà: the developer now has access to the service.
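For example (the deployment and variable names are hypothetical), adding a credential to the remote deployment makes it available to every developer who proxies or swaps against it:

```shell
# Add a credential to the remote deployment; developers inherit it
# the next time they start a Telepresence shell against it
kubectl set env deployment/shopweb PAYMENT_API_KEY="<value>"
```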
In this article you’ve seen from a high level how Telepresence can be used to run local development as if you were inside Kubernetes — reducing the need to configure the local environment and service connectivity setup. If you are interested in knowing more, I highly recommend checking out the Telepresence documentation.
A huge thanks to Datawire for the initial development of Telepresence, and to all the great contributors working on the project! It truly is an amazing tool.