Envoy as an API Gateway: Part III


By the end of this part, we will have a local Kubernetes cluster, our gRPC service deployed behind Envoy, and automatic redeployment of both whenever the code changes.

What is Kubernetes?

Kubernetes, or k8s, is a container orchestration system built to automate the deployment, scaling, and management of containerized applications.

We containerized our gRPC service to deploy it into the k8s cluster, scale it, manage its resources, and so on. From a developer's point of view, k8s is a universal environment for applications: it doesn't matter where the cluster itself is deployed, be it GCP, AWS, Azure, or localhost. We will use the last option, but the example will, with minimal changes, work in the clouds too.

k8s on localhost

The easiest way, at least for myself, to run k8s locally is to use minikube.

It runs k8s inside a virtual machine or container. There are several similar tools, but I chose this one because it was effortless to start.

To prepare minikube for work, we need to install it (see the instructions at the link above) and run it with:

% minikube start
😄  minikube v1.23.2 on Arch
✨  Automatically selected the docker driver
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.22.2 preload ...
    > preloaded-images-k8s-v13-v1...: 511.84 MiB / 511.84 MiB  100.00% 15.07 Mi
    > gcr.io/k8s-minikube/kicbase: 355.39 MiB / 355.40 MiB  100.00% 10.15 MiB p
🔥  Creating docker container (CPUs=2, Memory=5900MB) ...

🐳  Preparing Kubernetes v1.22.2 on Docker 20.10.8 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

It downloads, installs, and runs the latest k8s version (it's also possible to specify a particular version). Our cluster is ready:

% kubectl get nodes -o wide
minikube   Ready    control-plane,master   7m45s   v1.22.2   <none>        Ubuntu 20.04.2 LTS   5.14.8-arch1-1   docker://20.10.8
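
As mentioned, you don't have to take the latest version; minikube can pin the cluster to a specific Kubernetes release via a flag, for example:

% minikube start --kubernetes-version=v1.22.2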

What is Tilt?

Tilt is a tool that helps developers synchronize their code into a k8s cluster and manage k8s objects. It watches files and applies changes to applications already running in k8s. It also has great UIs, both terminal-based, for people like me who spend most of their time in a terminal, and web-based; the latter is the more powerful of the two.

Tilt uses the Starlark language for configuration, as Bazel does, but the two Starlark dialects are slightly different.

Bazel again

We build everything with Bazel, except k8s and minikube. We assume those two are already installed and configured.

We need a couple of things for each component of our system to deploy it into the k8s cluster, namely:

  • Docker image
  • k8s deployment definition
  • k8s service definition

Bazel produces these with k8s_* targets. We'll list them in a moment; first, let me describe the build flow.

Tilt and Bazel: the flow

Tilt is a great tool. It can run go build, build Docker images, deploy them into the k8s cluster, and so on. But we use Bazel, which does the same, so we will not use that part of Tilt. Instead, we call Bazel from inside Tilt. This gives us a clear separation of duties: Bazel builds binaries, creates images, and produces k8s deployments, while Tilt synchronizes them into the k8s cluster.

You can find the integration between Tilt and Bazel in bazel.Tiltfile; Tilt's main config is the Tiltfile in the project root.
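
The repository's actual Tiltfile isn't reproduced here, but the shape of the integration can be sketched. Tilt's k8s_yaml() registers Kubernetes manifests, and local() runs a shell command and captures its stdout; the Bazel target names below are the ones used later in this article:

# Tiltfile (sketch) -- delegate YAML generation to Bazel, let Tilt apply it
k8s_yaml(local('bazel run //k8s:namespace-yaml'))
k8s_yaml(local('bazel run //service-one:yaml'))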

Run Tilt with Bazel

Tilt is written in Go, which means Bazel can download and run it. It's also possible to build the tool from source, as Bazel does with protobuf, but that requires writing additional rules. On the other hand, Tilt has a set of pre-built binaries on its releases page, so we simply download the necessary one. There are binaries for different operating systems; we pick two, for Linux and macOS, and download the correct one depending on the OS.

So we get one more command in our toolset, managed by Bazel. The reasons to build and run tools this way are the same as before:

  • to have a unified interface for everything
  • to lock the tools' versions
  • to avoid installing tools separately


WORKSPACE is a special file. It determines the project root for Bazel and contains commands for downloading rules and dependencies, some settings for toolchains, and so on. See documentation for details.

In this file we have the following things:


load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

TILT_VERSION = "0.22.9"

TILT_ARCH = "x86_64"

TILT_URL = "https://github.com/windmilleng/tilt/releases/download/v{VER}/tilt.{VER}.{OS}.{ARCH}.tar.gz"

http_archive(
    name = "tilt_linux_x86_64",
    build_file_content = "exports_files(['tilt'])",
    sha256 = "5ede1bd6bfdf7ad46984166f7d651696616ec2c7b3c7a3fed2a0b9cc8e3d6d6e",
    urls = [TILT_URL.format(
        OS = "linux",
        ARCH = TILT_ARCH,
        VER = TILT_VERSION,
    )],
)

http_archive(
    name = "tilt_darwin_x86_64",
    build_file_content = "exports_files(['tilt'])",
    sha256 = "77a3848233e07e715d1f2f73d7ef10c8164c7457f7a6c8a2dc1d68808bd29fdd",
    urls = [TILT_URL.format(
        OS = "mac",
        ARCH = TILT_ARCH,
        VER = TILT_VERSION,
    )],
)

We lock Tilt's version and checksums, and define two targets, one for each OS.
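Starlark's string.format behaves like Python's str.format, so we can reproduce the URL expansion from the WORKSPACE snippet in plain Python (values copied from above):

```python
# Constants from the WORKSPACE file
TILT_VERSION = "0.22.9"
TILT_ARCH = "x86_64"
TILT_URL = "https://github.com/windmilleng/tilt/releases/download/v{VER}/tilt.{VER}.{OS}.{ARCH}.tar.gz"

# format() substitutes each named placeholder; {VER} appears twice
url = TILT_URL.format(VER=TILT_VERSION, OS="linux", ARCH=TILT_ARCH)
print(url)
# https://github.com/windmilleng/tilt/releases/download/v0.22.9/tilt.0.22.9.linux.x86_64.tar.gz
```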

tools/BUILD file

In the tools directory, we have a BUILD file with the following targets:

sh_binary(
    name = "tilt-up",
    srcs = ["wrapper.sh"],
    args = ["tilt", "up"],  # tool name plus subcommand, passed to wrapper.sh
    data = select({
        "@bazel_tools//src/conditions:darwin": ["@tilt_darwin_x86_64//:tilt"],
        "//conditions:default": ["@tilt_linux_x86_64//:tilt"],
    }),
)

sh_binary(
    name = "tilt-down",
    srcs = ["wrapper.sh"],
    args = ["tilt", "down"],
    data = select({
        "@bazel_tools//src/conditions:darwin": ["@tilt_darwin_x86_64//:tilt"],
        "//conditions:default": ["@tilt_linux_x86_64//:tilt"],
    }),
)

Each of them calls the bash script wrapper.sh (we'll look at it in a moment) with the actual binary and its parameters. The data parameter determines which OS the binary is downloaded for: its select switch picks the right target depending on the OS.


It's a simple script. Based on the $OSTYPE value, it resolves the actual path to the downloaded binary within the Bazel cache and runs that binary.


#!/usr/bin/env bash
set -euo pipefail

# The first argument is the tool name (e.g. "tilt"); the rest is passed through
TOOL="$1"
shift

if [[ "$OSTYPE" == "darwin"* ]]; then
  # macOS has no realpath(1) by default, so define a minimal replacement
  realpath() {
      [[ $1 = /* ]] && echo "$1" || echo "$PWD/${1#./}"
  }
  TOOL_PATH="external/${TOOL}_darwin_x86_64/${TOOL}"
else
  TOOL_PATH="external/${TOOL}_linux_x86_64/${TOOL}"
fi

TOOL_PATH="$(realpath "${TOOL_PATH}")"

exec "${TOOL_PATH}" "$@"

K8s objects: namespace, deployments, and services

To deploy our gRPC service and Envoy into the k8s cluster, we need several YAML files; k8s uses them to create its cluster objects. We use Bazel, calling it from inside the Tilt configuration script; in other words, we have to produce those YAML files with Bazel.



We have such targets in several places to produce YAML files for k8s resources. Let's look at a couple of them:

# k8s/BUILD

load("@io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")
load("//:helpers.bzl", "default_namespace")

package(default_visibility = ["//visibility:public"])

filegroup(
    name = "api-deployment-yaml",
    srcs = ["api-deployment.yaml"],  # template file next to this BUILD file (name assumed)
)

filegroup(
    name = "namespace",
    srcs = ["namespace.yaml"],
)

k8s_object(
    name = "namespace-yaml",
    kind = "deployment",
    substitutions = default_namespace(),
    template = ":namespace",
)
Here we have a couple of filegroup targets. filegroup is one of Bazel's general rules; it wraps files on disk, making a target out of particular files. The actual YAML templates live next to the BUILD file.

k8s_object uses the filegroup as a template (template = ":namespace") and fills placeholders like %{namespace}; the substitutions = default_namespace() parameter provides the actual values.

default_namespace() is a function that returns a dict of %{variable}: value pairs.
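helpers.bzl itself isn't shown here, but conceptually the substitution step is simple string replacement over the template. A hypothetical Python sketch (function and file names are illustrative, not the rule's actual implementation):

```python
def apply_substitutions(template, substitutions):
    """Replace each %{variable} placeholder with its concrete value."""
    for placeholder, value in substitutions.items():
        template = template.replace(placeholder, value)
    return template

# A namespace template like the one next to k8s/BUILD
template = "apiVersion: v1\nkind: Namespace\nmetadata:\n  name: %{namespace}\n"

# What default_namespace() conceptually returns: placeholder -> value
rendered = apply_substitutions(template, {"%{namespace}": "bazel-k8s-envoy"})
print(rendered)
```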

The target is runnable, so let’s do it:

% bazel run //k8s:namespace-yaml

INFO: Analyzed target //k8s:namespace-yaml (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //k8s:namespace-yaml up-to-date:
INFO: Elapsed time: 0.106s, Critical Path: 0.00s
INFO: 1 process: 1 internal.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action

apiVersion: v1
kind: Namespace
metadata:
  name: bazel-k8s-envoy

There is another filegroup target, called api-deployment-yaml, but the k8s_object target that uses it lives in a different place, namely in service-one/BUILD:

# service-one/BUILD

load("@io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")
# skipped
k8s_object(
    name = "yaml",
    kind = "deployment",
    substitutions = {
        "%{apiname}": "service-one",
        "%{namespace}": namespace(),
    },
    template = "//k8s:api-deployment-yaml",
)

We use this target to deploy the gRPC service. As you can see, the template parameter takes a label as its value; that's why we defined the filegroup targets earlier.

That’s all we need to deploy our service into the k8s cluster.

% bazel run //tools:tilt-up

This command runs Tilt, which creates the namespace, deployment, and service, and starts watching files. We deploy everything into the dedicated namespace bazel-k8s-envoy.

Let's check:

% kubectl get ns
NAME              STATUS   AGE
bazel-k8s-envoy   Active   98s
default           Active   39h
kube-node-lease   Active   39h
kube-public       Active   39h
kube-system       Active   39h

% kubectl get -n bazel-k8s-envoy svc
NAME          TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
envoy         ClusterIP   None         <none>        8080/TCP,8081/TCP   103s
service-one   ClusterIP   None         <none>        5000/TCP            103s

% kubectl get -n bazel-k8s-envoy po
NAME                           READY   STATUS    RESTARTS   AGE
envoy-686dcc77d6-dxrsx         1/1     Running   0          105s
service-one-75d49549d9-nw9qg   1/1     Running   0          105s

And this is how the Tilt console looks:


By pressing Enter, you can open the Tilt web console at http://localhost:10350, which has more features, like restarting pods, changing trigger modes, viewing and filtering logs, and so on.

When you don’t need Tilt anymore, press Ctrl-C to stop the console and run:

% bazel run //tools:tilt-down

It deletes all the deployed objects from the cluster, including the namespace itself.

That’s it. Have fun with k8s now!