Some Quick Notes On Telepresence

Basics

In my current project we got the chance to evaluate Telepresence and eventually started using it as a tool to simplify local development for Kubernetes or OpenShift clusters.

From my experience it’s a very useful tool that dramatically shortens the development loop and lowers the resource requirements of local development for a Kubernetes or OpenShift cluster.

Telepresence was originally created by Ambassador Labs and is a sandbox project of the Cloud Native Computing Foundation.

Telepresence allows devs to verify changes almost immediately after rebuilding the service under development, without having to redeploy and without having to run multiple service instances locally (e.g. in a local cluster).

Example    Devs can use databases and already-deployed services on a development cluster if required, and do not have to run a local cluster or Docker environment.

Telepresence can be installed on multiple platforms, e.g. using Homebrew on macOS:

brew install datawire/blackbird/telepresence

Telepresence Architecture and Workflow

Extremely simplified, Telepresence creates a tunnel to a (remote) Kubernetes cluster which makes connections to deployed services transparent to local processes.

Telepresence architecture

Telepresence’s architecture has four main components:

* Telepresence CLI,
* Telepresence Daemon,
* Traffic Manager, and
* Traffic Agent.

The CLI supplies us with the commands necessary for the development lifecycle, such as telepresence connect and telepresence quit. We can also debug the connection with telepresence status.

Typically, we log in to the development cluster with kubectl and then initialize the Telepresence Daemon and Traffic Manager by running telepresence connect. (Using AWS here for no specific reason other than illustration; I’m not getting any money :D.)

aws eks --region eu-central-1 update-kubeconfig --name my-cluster
telepresence connect
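
Once connected, it is worth verifying the session before going further. A minimal sketch, assuming a namespace my-ns (telepresence list and its namespace flag may vary slightly between versions):

# check that the local daemons and the Traffic Manager are reachable
telepresence status

# list workloads in the namespace that could be intercepted
telepresence list --namespace my-ns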

The magic happens in the interaction of the Telepresence Daemon and the Traffic Manager. The Daemons are installed on the local machine, whereas the Manager is deployed in the cluster. Requests from local processes to the cluster are forwarded to the Traffic Manager, which then routes the traffic to the services deployed in the cluster.

A really neat part is that the local DNS is configured to resolve actual service names (with the mandatory namespace suffix) and direct traffic to the Telepresence Daemon. This keeps configuration aligned, since the deployed service uses a nearly identical host name.

Example    If we have a service my-service running on the remote cluster in namespace my-ns, requests from the local machine to http://my-service.my-ns will be routed to the service my-service in the cluster.
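
To make that concrete, here is a minimal sketch; the port 8080 and the /health path are assumptions for illustration only:

# resolved by the Telepresence DNS and routed into the cluster
curl http://my-service.my-ns:8080/health

# the full in-cluster name typically resolves as well
curl http://my-service.my-ns.svc.cluster.local:8080/health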

Using only the Telepresence Daemon and Traffic Manager is sufficient if you are only concerned with traffic from the local machine to the remote services. Telepresence, however, also supports redirecting traffic from within the cluster to the service you develop. This is called an intercept and is enabled by deploying the Traffic Agent sidecar to the deployed instance of the service under development, which intercepts traffic and redirects it to the local machine.
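
On the command line this looks roughly as follows; the workload name my-service, the local port 8080 and the service port name http are assumptions, so check telepresence intercept --help for the exact flags of your version:

# run your service locally on port 8080, then redirect the cluster
# traffic for my-service to that local instance
telepresence intercept my-service --port 8080:http

# remove the intercept again when you are done
telepresence leave my-service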

Adding an intercept to the standard traffic forwarding behavior basically ports your local service into the remote cluster. This is a killer feature, considering the time spent waiting for (re)deployments or running the whole thing locally.

A known caveat is that all traffic to the deployed service is redirected to your local machine, meaning that parallel intercepts by multiple Telepresence clients require more setup. Telepresence, however, also makes it possible to intercept only traffic carrying a custom header via the --http-match=CUSTOM_HEADER_NAME=CUSTOM_HEADER_VALUE option, which allows multiple intercepts on a single service.
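
A minimal sketch of such a header-matched intercept; the header name and value are placeholders you would agree on per developer:

# only requests carrying this header are redirected to the local machine,
# all other traffic still reaches the deployed service
telepresence intercept my-service --port 8080:http --http-match=x-dev-user=alice

Callers (for example a test client or an API gateway) then have to set that header on the requests that should reach the intercepting developer’s machine.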

Shutting down Telepresence on a local machine is as easy as running

telepresence quit

More on Telepresence

Have a look at this introductory video for more information.