Envoy Gateway: a new Gateway API for Kubernetes
Envoy Proxy is a well-known proxy and load balancer offering HTTP, gRPC, and TCP support, customizable with Lua, Go, WASM… It is often the core component used to build gateways and proxies for Kubernetes. For example, Istio uses it as the proxy/load balancer for ingresses, but also as a sidecar to create a “mesh network”.
Envoy Gateway is a more recent project that manages Envoy proxies inside Kubernetes. It is meant to be a common foundation to build on top of, but it can also be used directly as it is.
It has a great advantage: it can be managed through the recent Kubernetes Gateway API, which solves a decade of non-standardized ways to provision “Ingresses”.
Envoy Gateway's first GA version was released on November 1st, 2023. You may think it's very early, but remember that the traffic actually flows through Envoy Proxy; Envoy Gateway only provisions the proxies using the xDS APIs.
The official documentation has some great examples, but details for real-world scenarios are sometimes missing; the following examples worked for me.
We’ll explore two scenarios: one deploying it as a public LoadBalancer, and one using it as an internal ClusterIP load balancer.
Installation
The installation is straightforward: apply the release manifest (the CRDs plus the controller) to the cluster.
kubectl apply -f https://github.com/envoyproxy/gateway/releases/download/latest/install.yaml
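You can then wait for the controller deployment (envoy-gateway, visible in the pod listing below) to become available:
kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available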
TLS
Add the TLS certificate and key to Kubernetes as a secret:
kubectl create -n envoy-gateway-system secret tls tls-certificate --key=tls.key --cert=tls.crt
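If you don't have a certificate at hand, a self-signed one will do for a lab. A minimal sketch producing the tls.key and tls.crt used above (the hostnames match the routes used later; -addext requires OpenSSL 1.1.1+):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=mydomain.tld" \
  -addext "subjectAltName=DNS:api.mydomain.tld,DNS:grpc-api.mydomain.tld"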
Creating A Public Proxy
From there, provision a GatewayClass and a Gateway:
apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
name: eg
spec:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
parametersRef:
group: gateway.envoyproxy.io
kind: EnvoyProxy
name: custom-proxy-config
namespace: envoy-gateway-system
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: eg
namespace: envoy-gateway-system
spec:
gatewayClassName: eg
listeners:
- allowedRoutes:
namespaces:
from: All
name: https
port: 443
protocol: HTTPS
tls:
certificateRefs:
- group: ""
kind: Secret
name: tls-certificate
mode: Terminate
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: custom-proxy-config
namespace: envoy-gateway-system
spec:
provider:
type: Kubernetes
kubernetes:
envoyDeployment:
replicas: 2
We will use the EnvoyProxy custom configuration multiple times in these examples to modify the default behavior; here we are asking for 2 replicas of the Envoy Proxy.
Note the extra allowedRoutes.namespaces.from: All, or you won't be able to create routes from other namespaces.
In the envoy-gateway-system namespace, the pods should be visible by now: envoy-gateway (the gateway controller) and envoy-envoy-gateway-system-eg (the proxies, twice):
kubectl -n envoy-gateway-system get po
NAME READY STATUS RESTARTS AGE
envoy-gateway-55cd86c564-bxvcd 1/1 Running 0 2m47s
envoy-envoy-gateway-system-eg-5391c79d-c65b975fc-wftkc 1/1 Running 0 14s
envoy-envoy-gateway-system-eg-5391c79d-c65b975fc-vm89d 1/1 Running 0 14s
It created a LoadBalancer Service; if this cluster runs on a public cloud, this is the external public entry point to your services.
kubectl -n envoy-gateway-system get svc
envoy-envoy-gateway-system-eg-5391c79d LoadBalancer 10.43.198.60 10.0.0.37 443:30761/TCP 3d23h
Deploy a test app
apiVersion: v1
kind: Service
metadata:
name: backend
namespace: myproject
labels:
app: backend
service: backend
spec:
ports:
- name: http
port: 3000
targetPort: 3000
selector:
app: backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: backend
namespace: myproject
spec:
replicas: 1
selector:
matchLabels:
app: backend
template:
metadata:
labels:
app: backend
spec:
containers:
- image: gcr.io/k8s-staging-ingressconformance/echoserver:v20221109-7ee2f3e
imagePullPolicy: IfNotPresent
name: backend
ports:
- containerPort: 3000
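Assuming the myproject namespace already exists (kubectl create ns myproject), check that the pod is running:
kubectl -n myproject get po -l app=backend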
Add some HTTP routes to the app
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: backend-routes
namespace: myproject
spec:
hostnames:
- "api.mydomain.tld"
parentRefs:
- group: gateway.networking.k8s.io
kind: Gateway
name: eg
namespace: envoy-gateway-system
rules:
- backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /
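You can then test the route. If DNS for api.mydomain.tld does not point to the gateway yet, curl's --resolve flag can fake it; 10.0.0.37 is the LoadBalancer address from earlier, and -k skips verification for a self-signed certificate:
curl -k --resolve api.mydomain.tld:443:10.0.0.37 https://api.mydomain.tld/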
Regex Path Matching
Envoy Proxy (so, upstream) has multiple ways to match paths, but there are no examples in the Envoy Gateway documentation.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: api-routes
namespace: myproject
spec:
hostnames:
- "api.mydomain.tld"
parentRefs:
- group: gateway.networking.k8s.io
kind: Gateway
name: eg
namespace: envoy-gateway-system
rules:
- backendRefs:
- group: ""
kind: Service
name: user-send
port: 8080
weight: 1
matches:
- path:
type: RegularExpression
value: \/user\/[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}\/send
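For example, a request to a path like this one matches (the middle segment is a v4 UUID) and is routed to the user-send service:
curl -k https://api.mydomain.tld/user/123e4567-e89b-42d3-a456-426614174000/send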
Solving Issues
You can tell something is wrong if the gatewayclasses are not accepted:
$ k -n envoy-gateway-system get gatewayclasses.gateway.networking.k8s.io
NAME CONTROLLER ACCEPTED AGE
eg gateway.envoyproxy.io/gatewayclass-controller True 41m
eg-internal gateway.envoyproxy.io/gatewayclass-controller False 6m57s
Then use describe on the gatewayclass:
k -n envoy-gateway-system describe gatewayclasses.gateway.networking.k8s.io eg-internal
Status:
Conditions:
Last Transition Time: 2023-11-07T14:51:01Z
Message: Invalid GatewayClass: another older GatewayClass with the same Spec.Controller exists
Observed Generation: 2
Reason: OlderGatewayClassExists
Status: False
Type: Accepted
gRPC
Creating a GRPCRoute is very similar to HTTPRoute:
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GRPCRoute
metadata:
name: grpc-api
namespace: myproject
spec:
parentRefs:
- group: gateway.networking.k8s.io
kind: Gateway
name: eg
namespace: envoy-gateway-system
hostnames:
- "grpc-api.mydomain.tld"
rules:
- backendRefs:
- group: ""
kind: Service
name: grpc-api
port: 9000
weight: 1
Note that you can match the gRPC server reflection service, so that reflection requests are routed too:
rules:
- matches:
- method:
method: ServerReflectionInfo
service: grpc.reflection.v1alpha.ServerReflection
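With that route in place, grpcurl (which relies on reflection) can discover services through the gateway; -insecure is only there for a self-signed certificate:
grpcurl -insecure grpc-api.mydomain.tld:443 list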
Or use a regular expression on the method or service, as sketched below.
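A sketch, with hypothetical service and method names:
rules:
- matches:
  - method:
      type: RegularExpression
      service: '.*\.UserService'
      method: 'Get.*'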
JWT
If you are using Auth0, the JWKS URL is of the form https://myorg.us.auth0.com/.well-known/jwks.json.
Let’s say we want to restrict the api-routes to requests carrying a valid JWT:
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
name: backend-api-closed
  namespace: myproject
spec:
targetRef:
group: "gateway.networking.k8s.io"
kind: HTTPRoute
name: api-routes
jwt:
providers:
- name: auth0
remoteJWKS:
uri: https://myorg.us.auth0.com/.well-known/jwks.json
audiences:
- https://myorg.us.auth0.com/userinfo
issuer: https://id.mydomain.com/
claimToHeaders:
- claim: azp
header: client_id
Note the use of claimToHeaders to pass some claim values to request headers.
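A quick check: without a token the request should now be rejected with a 401, while a valid Auth0-issued token lets it through:
curl -k https://api.mydomain.tld/user/123e4567-e89b-42d3-a456-426614174000/send \
  -H "Authorization: Bearer $TOKEN"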
External Authorization
To use external authorization (ext_authz), an xDS patch is needed (hopefully this will change in the future):
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyPatchPolicy
metadata:
name: ext-authz-patch-policy
namespace: envoy-gateway-system
spec:
targetRef:
group: gateway.networking.k8s.io
kind: Gateway
name: eg
namespace: envoy-gateway-system
type: JSONPatch
jsonPatches:
- type: "type.googleapis.com/envoy.config.listener.v3.Listener"
name: "envoy-gateway-system/eg/http"
operation:
op: add
path: "/default_filter_chain/filters/0/typed_config/http_filters/0"
value:
name: "envoy.filters.http.ext_authz"
typed_config:
"@type": "type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz"
grpc_service:
envoy_grpc:
cluster_name: envoy_ext_jwt_auth
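Note that EnvoyPatchPolicy is disabled by default and must be enabled in the EnvoyGateway startup configuration (the envoy-gateway-config ConfigMap in the envoy-gateway-system namespace). A minimal sketch, following the v0.6 configuration layout:
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyGateway
gateway:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
provider:
  type: Kubernetes
extensionApis:
  enableEnvoyPatchPolicy: true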
Patching Default Bootstrap
To patch the default Envoy Proxy configuration, use the EnvoyProxy custom config again:
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: custom-proxy-config
namespace: envoy-gateway-system
spec:
bootstrap:
type: Replace
value: |
admin:
accessLog:
...
Beware: the 0.6.0 documentation has a typo here.
Accessing Envoy Proxy Debug Interface
Access the web interface by port-forwarding to one of the envoy-envoy-gateway-system-eg pods on port 19000:
kubectl -n envoy-gateway-system port-forward envoy-envoy-gateway-system-eg-xxxxxx-xxxxxx-xxxxx 19000
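Then open http://localhost:19000 in a browser: the admin interface exposes the config dump, clusters, and stats.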
Creating An Internal Proxy
A gateway accessible only from within the cluster can also be provisioned, to act for example as an L7 load balancer in front of gRPC services (this is especially needed for gRPC, since its TCP connections are long-lived; see my older post):
apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
name: eg-internal
spec:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
parametersRef:
group: gateway.envoyproxy.io
kind: EnvoyProxy
name: proxy-config-internal
namespace: envoy-gateway-system
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: eg-internal
namespace: envoy-gateway-system
spec:
addresses:
- type: IPAddress
value: 10.43.146.30
gatewayClassName: eg-internal
listeners:
- allowedRoutes:
namespaces:
from: All
name: http
port: 80
protocol: HTTP
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: proxy-config-internal
namespace: envoy-gateway-system
spec:
provider:
type: Kubernetes
kubernetes:
envoyService:
type: ClusterIP
Verify what address the proxy is using:
kubectl -n envoy-gateway-system get gateway
NAME CLASS ADDRESS PROGRAMMED AGE
eg-internal   eg-internal   10.43.146.30   True   17m
Using a cloud provider, you also have the option to establish an internal gateway routable from your private networks, using Service annotations. GKE:
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: custom-proxy-config
namespace: envoy-gateway-system
spec:
provider:
kubernetes:
envoyService:
annotations:
cloud.google.com/neg: '{"exposed_ports": {"80":{}}}'
type: ClusterIP
type: Kubernetes
Azure:
annotations:
kubernetes.io/ingress.class: azure/application-gateway
appgw.ingress.kubernetes.io/use-private-ip: "true"
By provisioning the internal proxy with a known IP in advance, we can then provision DNS service entries that point to the proxy.
apiVersion: v1
kind: Service
metadata:
name: myapi
namespace: myproject
spec:
type: ExternalName
externalName: 10.43.146.30
ports:
- port: 80
protocol: TCP
nslookup myapi.myproject.svc.cluster.local
Address: 10.43.146.30
And finally provision an HTTPRoute or GRPCRoute matching this hostname:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: backend-internal-routes
namespace: myproject
spec:
hostnames:
- "myapi.myproject.svc.cluster.local"
parentRefs:
- group: gateway.networking.k8s.io
kind: Gateway
  name: eg-internal
namespace: envoy-gateway-system
rules:
- backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /
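From any pod in the cluster, the backend is now reachable through the internal proxy; a quick check with a throwaway pod:
kubectl -n myproject run -it --rm tmp --image=busybox --restart=Never -- \
  wget -qO- http://myapi.myproject.svc.cluster.local/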
Note that at the time of writing this post, there is a bug in 0.6.0 when using ClusterIP and setting addresses: it sets externalIPs rather than clusterIPs.
Multiple Gateways
If you are planning on a public and an internal gateway at the same time, you will end up with a conflict (the OlderGatewayClassExists error seen above). The solution is to create a second installation watching a different controller name, e.g. gateway.envoyproxy.io/internal-gatewayclass-controller, and to set that controllerName on the eg-internal GatewayClass:
helm template --set config.envoyGateway.gateway.controllerName=gateway.envoyproxy.io/internal-gatewayclass-controller eg-internal oci://docker.io/envoyproxy/gateway-helm --version v0.6.0 -n envoy-gateway-internal > install-internal.yaml
kubectl create ns envoy-gateway-internal
kubectl -n envoy-gateway-internal apply -f install-internal.yaml
Notes On k3s
Here are some notes for exploring Envoy Gateway in a lab using k3s.
Traefik ships with k3s and needs to be disabled; tweak the systemd unit to add an argument that starts k3s without Traefik:
ExecStart=
ExecStart=/usr/local/bin/k3s \
server \
--disable traefik
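Then reload systemd and restart k3s:
sudo systemctl daemon-reload
sudo systemctl restart k3s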
LoadBalancer
In k3s, a component named ServiceLB acts as a cloud provider to expose ports to the outside world (your host); it should be visible in the kube-system namespace:
svclb-envoy-envoy-gateway-system-eg-5391c79d-54a88387-fz4wj 1/1 Running 0 3d23h
Mac & Colima
If you are on a Mac, Colima can be used as your Docker host, but also to run a local Kubernetes cluster using k3s. You still need to disable the bundled Traefik:
colima start -m 4 -c 4 --kubernetes --k3s-arg "--disable=traefik"
Colima does the work of exposing the cluster's load-balanced ports for you, so you can talk to localhost on port 80 or 443 to reach the cluster:
curl --verbose --header "Host: blog.mydomain.tld" http://localhost/get
Conclusion
Like every fresh project, you may hit some unknown issues, but the overall project quality is high. It can be used as it is: I've already migrated some services to it, and we are planning a full migration.