Have you ever wondered how traffic actually gets from the outside world into your Kubernetes cluster and finds the right application? That’s exactly what a Gateway controller handles — and in this guide, we’re going to set one up using Envoy Gateway, one of the most powerful options available today.
By the end of this tutorial, you will know how to:

  • Install the Envoy Gateway controller on a Kubernetes cluster
  • Configure all the resources it needs: EnvoyProxy, GatewayClass, Gateway, and HTTPRoute
  • Test that real traffic flows correctly to a demo application

Prerequisites

Before we start, make sure you have the following ready:

  • A running Kubernetes cluster (local like Minikube, or cloud-based like EKS/AKS)
  • kubectl installed on your workstation
  • Helm installed on your workstation

Don’t have a cluster yet? You can quickly spin one up on AWS using eksctl or use a local tool like Kind or Minikube for testing.
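If you go the Kind route, a small config file makes the NodePort we create later reachable from your machine. This is a sketch under assumptions: the `containerPort` below must match the NodePort that Kubernetes actually assigns in Step 6 (31299 here is just a placeholder), and the host port 8080 is arbitrary.

```yaml
# kind-config.yaml — minimal local cluster for this tutorial.
# extraPortMappings forwards a port on your machine to the worker node,
# so a NodePort service is reachable at localhost:8080.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 31299   # placeholder — set this to your actual NodePort
    hostPort: 8080
    protocol: TCP
```

Create the cluster with `kind create cluster --config kind-config.yaml`.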

Step 1: Install Envoy Gateway Using Helm

Helm makes the installation super straightforward. We’ll install both the Gateway API CRDs (Custom Resource Definitions) and the Envoy Gateway controller in one shot.
Run the following command:

helm install eg oci://docker.io/envoyproxy/gateway-helm \
--version v1.6.2 \
-n envoy-gateway-system \
--create-namespace

This will take a moment. While it runs, Helm is deploying the controller and all the custom resource definitions that Envoy Gateway needs to function.

⚠️ Enterprise users: If your company uses a private container registry, you’ll need to mirror the Envoy images there and update the Helm values accordingly before deploying.
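A values override for a mirrored registry might look like the sketch below. The key names are an assumption about the chart's layout — confirm them against `helm show values oci://docker.io/envoyproxy/gateway-helm --version v1.6.2` before deploying, and note that `registry.mycompany.example` is a placeholder.

```yaml
# values-private.yaml — hedged sketch; verify the key paths with
# `helm show values` for your chart version before using.
deployment:
  envoyGateway:
    image:
      repository: registry.mycompany.example/envoyproxy/gateway  # your mirror
      tag: v1.6.2
```

Then pass it to the install with `-f values-private.yaml`.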

Step 2: Verify the Installation

Once the Helm install completes, let’s confirm everything is running correctly:

kubectl -n envoy-gateway-system get all

You should see output similar to this:

NAME                                READY   STATUS    RESTARTS   AGE
pod/envoy-gateway-6dd8f9b8f-r5w47   1/1     Running   0          2m18s

NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/envoy-gateway   ClusterIP   10.110.16.99   <none>        ...       2m18s

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/envoy-gateway   1/1     1            1           2m18s

A Running status means the Control Plane is up and watching for Gateway API resources.

You can also verify that all the required CRDs were installed:

kubectl get crds | grep -iE "gateway"

You’ll see a list that includes both the standard Kubernetes Gateway API CRDs (gateways.gateway.networking.k8s.io, httproutes.gateway.networking.k8s.io, etc.) and the Envoy-specific ones (envoyproxies.gateway.envoyproxy.io, etc.).


Step 3: Deploy a Sample Application

Now let’s have something for our gateway to actually route traffic to. We’ll deploy a simple Nginx web server:

# Create the deployment
kubectl create deployment web-deploy --image nginx --port 80 --replicas 2

# Expose it with a ClusterIP service
kubectl expose deployment web-deploy --name web-svc --port 80 --target-port 80 --type ClusterIP

Verify both the deployment and service are up:

kubectl get deploy,svc

You should see web-deploy with 2/2 pods ready and web-svc as a ClusterIP service. Our demo app is ready — now let’s wire up the gateway in front of it.
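If you prefer declarative manifests over imperative commands, the two commands above correspond roughly to the following (a sketch; the `app: web-deploy` labels mirror what `kubectl create deployment` generates):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-deploy
  template:
    metadata:
      labels:
        app: web-deploy
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: ClusterIP
  selector:
    app: web-deploy
  ports:
  - port: 80
    targetPort: 80
```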


Step 4: Create the EnvoyProxy Resource

Skip this step if you’re on a cloud cluster (EKS, AKS, GKE) and are happy to use a cloud Load Balancer — that’s the default behavior and it will work automatically.

By default, Envoy Gateway creates a LoadBalancer type service for the Data Plane, which in cloud environments triggers an actual external load balancer to be provisioned. For a local or test setup, we don’t want that — instead, we’ll use a NodePort service.

We do this by creating an EnvoyProxy custom resource that overrides the default behavior:

cat <<EOF > envoyproxy.yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: envoy-proxy
  namespace: default
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyService:
        type: NodePort
EOF

Apply the manifest:

kubectl apply -f envoyproxy.yaml

Confirm it was created:

kubectl get envoyproxy

NAME          AGE
envoy-proxy   22m

Step 5: Create the GatewayClass Resource

A Kubernetes cluster can have multiple Gateway controllers running at the same time. The GatewayClass resource is how you tell Kubernetes which controller to use.

In this manifest, we point to Envoy Gateway’s controller ID and attach the EnvoyProxy config we just created:

cat <<EOF > gatewayclass.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: envoy-gateway-class
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: envoy-proxy
    namespace: default
EOF

kubectl apply -f gatewayclass.yaml

📝 Note: If you skipped Step 4 (using a cloud Load Balancer), leave out the parametersRef section entirely.

Check that it was created successfully:

kubectl get gatewayclass

NAME                  CONTROLLER
envoy-gateway-class   gateway.envoyproxy.io/gatewayclass-controller

Step 6: Create the Gateway Resource

The Gateway resource is like the front door of your cluster. It defines which port and protocol incoming traffic should use, and when you create it, the Envoy Proxy Data Plane pods get automatically deployed.

cat <<EOF > gateway.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: web-gateway
  namespace: default
spec:
  gatewayClassName: envoy-gateway-class
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All
EOF

kubectl apply -f gateway.yaml

Verify the Gateway is programmed and has an address:

kubectl -n default get gateway

NAME          CLASS                 ADDRESS      PROGRAMMED   AGE
web-gateway   envoy-gateway-class   172.30.1.2   True         2m10s

Now check the Envoy Gateway system namespace — you should see a new proxy pod and a NodePort service:

kubectl -n envoy-gateway-system get deploy,svc

Important: Note down the NodePort number from the output (something like 31299). You’ll need it in the final step to test the app.
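If you'd rather not copy values by hand, you can capture both with `kubectl` jsonpath queries. A hedged sketch: the label selector below is an assumption about how Envoy Gateway tags the service it generates — if it returns nothing, just read the service name and port from the output above instead.

```shell
# Grab a node's internal IP and the generated service's NodePort.
# The owning-gateway-name label is an assumption; fall back to copying
# the values manually if the selector matches nothing.
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
NODE_PORT=$(kubectl -n envoy-gateway-system get svc \
  -l gateway.envoyproxy.io/owning-gateway-name=web-gateway \
  -o jsonpath='{.items[0].spec.ports[0].nodePort}')
echo "Test URL: http://$NODE_IP:$NODE_PORT"
```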


Step 7: Create the HTTPRoute Resource

We have a front door (the Gateway), but we need to tell it which room traffic should go to (our web-svc service). That’s what HTTPRoute does.

cat <<EOF > httproute.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-httproute
spec:
  parentRefs:
  - name: web-gateway
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: web-svc
      port: 80
      weight: 1
    matches:
    - path:
        type: PathPrefix
        value: /
EOF

kubectl apply -f httproute.yaml

Confirm the HTTPRoute exists:

kubectl -n default get httproute

NAME            HOSTNAMES   AGE
web-httproute               7m23s

All the pieces are now in place. Time to test!


Step 8: Test the Traffic Routing

Use curl to send a request through the gateway to your Nginx app. You’ll need one of your worker node’s IP addresses and the NodePort number you noted earlier:

curl [NODE_IP]:[NODE_PORT]

If everything is wired up correctly, you’ll see the familiar Nginx welcome page HTML response. That means traffic successfully traveled through:

Your Machine → NodePort → Envoy Proxy → web-svc → Nginx Pod

It works! 🎉

💡 For production setups: Instead of NodePort, you’d use a cloud Load Balancer and map its DNS to a proper domain using a service like AWS Route 53.


Quick Recap: What Did We Just Build?

Here’s a summary of every resource we created and what it does:

Resource       Purpose
EnvoyProxy     Configures the Data Plane (e.g., NodePort vs LoadBalancer)
GatewayClass   Tells Kubernetes which controller (Envoy) to use
Gateway        Defines the entry point — port, protocol, and listeners
HTTPRoute      Routes traffic from the Gateway to your backend service

Why Choose Envoy Gateway?

There are many Gateway API controllers out there. Here’s why Envoy Gateway stands out:

  • Zero-downtime config changes — Thanks to the xDS API, routing rules update in real time without restarting the proxy.
  • Extensible with WebAssembly — You can write custom filters using WASM for rate limiting, authentication, and more.
  • Istio-compatible — Istio’s data plane is also built on Envoy, so teams already running Istio can adopt Envoy Gateway without learning a new proxy.

What’s Next?

You’ve got a working Envoy Gateway setup — but that’s just the beginning. From here, you can explore:

  • TLS termination — Secure your routes with HTTPS using certificates and secrets
  • Canary deployments — Route a percentage of traffic to a new version of your app using weighted routing
  • Method-based routing — Route GET and POST requests to different backends
  • Rate limiting — Protect your services from being overwhelmed

Are you planning to use Envoy Gateway in a real project? Or are you currently using a different controller like NGINX or Traefik? Drop a comment — I’d love to hear how you’re approaching this!

Happy Coding