How to Set Up Envoy Gateway with the Kubernetes Gateway API
Have you ever wondered how traffic actually gets from the outside world into your Kubernetes cluster and finds the right application? That’s exactly what a Gateway controller handles — and in this guide, we’re going to set one up using Envoy Gateway, an Envoy-proxy-based implementation of the Kubernetes Gateway API.
By the end of this tutorial, you will know how to:
- Install the Envoy Gateway controller on a Kubernetes cluster
- Configure all the resources it needs: EnvoyProxy, GatewayClass, Gateway, and HTTPRoute
- Test that real traffic flows correctly to a demo application
Prerequisites
Before we start, make sure you have the following ready:
- A running Kubernetes cluster (local like Minikube, or cloud-based like EKS/AKS)
- kubectl installed on your workstation
- Helm installed on your workstation
Don’t have a cluster yet? You can quickly spin one up on AWS using eksctl or use a local tool like Kind or Minikube for testing.
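If you go the Kind route, for example, a minimal single-node cluster config is enough for this tutorial (the file name and cluster name here are just illustrations):

```yaml
# kind-config.yaml — one control-plane node is plenty for a local test
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
```

Create the cluster with `kind create cluster --name eg-demo --config kind-config.yaml`.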
Step 1: Install Envoy Gateway Using Helm
Helm makes the installation super straightforward. We’ll install both the Gateway API CRDs (Custom Resource Definitions) and the Envoy Gateway controller in one shot.
Run the following command:
```shell
helm install eg oci://docker.io/envoyproxy/gateway-helm \
  --version <VERSION> \
  -n envoy-gateway-system \
  --create-namespace
```

(Replace `<VERSION>` with the release you want, e.g. the latest tag from the Envoy Gateway releases page.)
This will take a moment. While it runs, Helm is deploying the controller and all the custom resource definitions that Envoy Gateway needs to function.
⚠️ Enterprise users: If your company uses a private container registry, you’ll need to mirror the Envoy images there and update the Helm values accordingly before deploying.
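As a sketch of what that override might look like (the exact keys vary by chart version, so check `helm show values oci://docker.io/envoyproxy/gateway-helm` for your release; `registry.example.com` is a placeholder for your mirror):

```yaml
# values-override.yaml — hypothetical private-registry override
deployment:
  envoyGateway:
    image:
      repository: registry.example.com/envoyproxy/gateway
      tag: <VERSION>
```

Then pass it to the install with `helm install ... -f values-override.yaml`.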
Step 2: Verify the Installation
Once the Helm install completes, let’s confirm everything is running correctly:
```shell
kubectl -n envoy-gateway-system get all
```
You should see output similar to this:
```
NAME                                 READY   STATUS    RESTARTS   AGE
pod/envoy-gateway-xxxxxxxxxx-yyyyy   1/1     Running   0          60s
...
```
A Running status means the Control Plane is up and watching for Gateway API resources.
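If you’d rather block until the controller is ready instead of eyeballing pod status, `kubectl wait` can do it (the deployment name `envoy-gateway` is what the Helm chart creates by default):

```shell
# Wait up to two minutes for the controller deployment to become Available
kubectl wait --timeout=2m -n envoy-gateway-system \
  deployment/envoy-gateway --for=condition=Available
```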
You can also verify that all the required CRDs were installed:
```shell
kubectl get crds | grep -iE "gateway"
```
You’ll see a list that includes both the standard Kubernetes Gateway API CRDs (gateways.gateway.networking.k8s.io, httproutes.gateway.networking.k8s.io, etc.) and the Envoy-specific ones (envoyproxies.gateway.envoyproxy.io, etc.).
Step 3: Deploy a Sample Application
Now let’s have something for our gateway to actually route traffic to. We’ll deploy a simple Nginx web server:
```shell
# Create the deployment (2 replicas of plain Nginx)
kubectl create deployment web-deploy --image=nginx --replicas=2

# Expose it inside the cluster as a ClusterIP service
kubectl expose deployment web-deploy --name=web-svc --port=80 --target-port=80
```
Verify both the deployment and service are up:
```shell
kubectl get deploy,svc
```
You should see web-deploy with 2/2 pods ready and web-svc as a ClusterIP service. Our demo app is ready — now let’s wire up the gateway in front of it.
Step 4: Create the EnvoyProxy Resource
Skip this step if you’re on a cloud cluster (EKS, AKS, GKE) and are happy to use a cloud Load Balancer — that’s the default behavior and it will work automatically.
By default, Envoy Gateway creates a LoadBalancer type service for the Data Plane, which in cloud environments triggers an actual external load balancer to be provisioned. For a local or test setup, we don’t want that — instead, we’ll use a NodePort service.
We do this by creating an EnvoyProxy custom resource that overrides the default behavior:
```shell
cat <<EOF > envoyproxy.yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: envoy-gateway-system
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyService:
        type: NodePort
EOF
```
Apply the manifest:
```shell
kubectl apply -f envoyproxy.yaml
```
Confirm it was created:
```shell
kubectl -n envoy-gateway-system get envoyproxy
```
```
NAME                  AGE
custom-proxy-config   10s
```
Step 5: Create the GatewayClass Resource
A Kubernetes cluster can have multiple Gateway controllers running at the same time. The GatewayClass resource is how you tell Kubernetes which controller to use.
In this manifest, we point to Envoy Gateway’s controller ID and attach the EnvoyProxy config we just created:
```shell
cat <<EOF > gatewayclass.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: custom-proxy-config
    namespace: envoy-gateway-system
EOF
kubectl apply -f gatewayclass.yaml
```
📝 Note: If you skipped Step 4 (using a cloud Load Balancer), leave out the `parametersRef` section entirely.
Check that it was created successfully:
```shell
kubectl get gatewayclass
```
```
NAME   CONTROLLER                                      ACCEPTED   AGE
eg     gateway.envoyproxy.io/gatewayclass-controller   True       20s
```
Step 6: Create the Gateway Resource
The Gateway resource is like the front door of your cluster. It defines which port and protocol incoming traffic should use, and when you create it, the Envoy Proxy Data Plane pods get automatically deployed.
```shell
cat <<EOF > gateway.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: eg-gateway
  namespace: default
spec:
  gatewayClassName: eg
  listeners:
  - name: http
    protocol: HTTP
    port: 80
EOF
kubectl apply -f gateway.yaml
```
Verify the Gateway is programmed and has an address:
```shell
kubectl -n default get gateway
```
```
NAME         CLASS   ADDRESS   PROGRAMMED   AGE
eg-gateway   eg                True         30s
```
Now check the Envoy Gateway system namespace — you should see a new proxy pod and a NodePort service:
```shell
kubectl -n envoy-gateway-system get deploy,svc
```
Important: Note down the NodePort number from the output (something like 31299). You’ll need it in the final step to test the app.
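Rather than copying the port by hand, you can also pull it out with jsonpath. The label selector below is how Envoy Gateway tags the generated Data Plane service (adjust the Gateway name if yours differs):

```shell
# Print the NodePort of the generated Envoy service for our Gateway
kubectl -n envoy-gateway-system get svc \
  -l gateway.envoyproxy.io/owning-gateway-name=eg-gateway \
  -o jsonpath='{.items[0].spec.ports[0].nodePort}'
```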
Step 7: Create the HTTPRoute Resource
We have a front door (the Gateway), but we need to tell it which room traffic should go to (our web-svc service). That’s what HTTPRoute does.
```shell
cat <<EOF > httproute.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-route
  namespace: default
spec:
  parentRefs:
  - name: eg-gateway
  rules:
  - backendRefs:
    - name: web-svc
      port: 80
EOF
kubectl apply -f httproute.yaml
```
Confirm the HTTPRoute exists:
```shell
kubectl -n default get httproute
```
```
NAME        HOSTNAMES   AGE
web-route               15s
```
All the pieces are now in place. Time to test!
Step 8: Test the Traffic Routing
Use curl to send a request through the gateway to your Nginx app. You’ll need one of your worker node’s IP addresses and the NodePort number you noted earlier:
```shell
curl [NODE_IP]:[NODE_PORT]
```
If everything is wired up correctly, you’ll see the familiar Nginx welcome page HTML response. That means traffic successfully traveled through:
```
Your Machine → NodePort → Envoy Proxy → web-svc → Nginx Pod
```
It works! 🎉
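If your node IPs aren’t reachable from your machine (common with some Kind and Docker Desktop setups), port-forwarding the generated Envoy service is a handy fallback. The service name below is a placeholder; copy the real, generated name from the Step 6 output:

```shell
# Forward local port 8888 to the Envoy service's port 80 (replace the placeholder name)
kubectl -n envoy-gateway-system port-forward svc/<envoy-service-name> 8888:80 &
curl http://localhost:8888
```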
💡 For production setups: Instead of NodePort, you’d use a cloud Load Balancer and map its DNS to a proper domain using a service like AWS Route 53.
Quick Recap: What Did We Just Build?
Here’s a summary of every resource we created and what it does:
| Resource | Purpose |
|---|---|
| EnvoyProxy | Configures the Data Plane (e.g., NodePort vs LoadBalancer) |
| GatewayClass | Tells Kubernetes which controller (Envoy) to use |
| Gateway | Defines the entry point: port, protocol, and listeners |
| HTTPRoute | Routes traffic from the Gateway to your backend service |
Why Choose Envoy Gateway?
There are many Gateway API controllers out there. Here’s why Envoy Gateway stands out:
- Zero-downtime config changes — Thanks to the xDS API, routing rules update in real time without restarting the proxy.
- Extensible with WebAssembly — You can write custom filters using WASM for rate limiting, authentication, and more.
- Istio-compatible — Since Istio also uses Envoy, integrating the two is seamless.
What’s Next?
You’ve got a working Envoy Gateway setup — but that’s just the beginning. From here, you can explore:
- TLS termination — Secure your routes with HTTPS using certificates and secrets
- Canary deployments — Route a percentage of traffic to a new version of your app using weighted routing
- Method-based routing — Route `GET` and `POST` requests to different backends
- Rate limiting — Protect your services from being overwhelmed
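To give a taste of one of those, here is a sketch of weighted (canary) routing; `web-svc-v2` is a hypothetical second service you’d deploy alongside the original:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-route-canary
  namespace: default
spec:
  parentRefs:
  - name: eg-gateway
  rules:
  - backendRefs:
    - name: web-svc      # 90% of requests stay on the current version
      port: 80
      weight: 90
    - name: web-svc-v2   # 10% go to the canary
      port: 80
      weight: 10
```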
Are you planning to use Envoy Gateway in a real project? Or are you currently using a different controller like NGINX or Traefik? Drop a comment — I’d love to hear how you’re approaching this!