In this comprehensive guide, you will learn how to set up and configure ingress traffic for an Istio service mesh using the Kubernetes Gateway API, with practical examples that you can apply in a real cluster. By the end of this article, you will understand:

  • How the Gateway API works with Istio and why it’s replacing older ingress methods
  • How to expose services inside an Istio mesh externally with proper security
  • How to split traffic for canary deployments using weighted routing
  • How to include advanced features such as circuit breaking using Istio’s DestinationRule
  • How to test and validate incoming traffic behavior end to end
  • How to troubleshoot common issues and resolve them quickly
  • How the Gateway API extends beyond ingress to internal mesh traffic with GAMMA

Why Use Gateway API with Istio

Istio is a service mesh that primarily manages internal communication between services (east-west traffic). Most real-world applications also need to accept traffic from outside the cluster (north-south traffic) in a controlled and secure way.

Evolution of Ingress in Istio

Earlier versions of Istio handled ingress traffic using its own Gateway and VirtualService resources. While functional, this approach was Istio-specific and created vendor lock-in. The Kubernetes Gateway API is a newer, standardized API backed by the Kubernetes community for defining how traffic enters and exits a cluster. Istio now supports this API to manage external traffic reliably while providing portability across different service mesh implementations.

Gateway API vs Traditional Ingress

If you’re familiar with Kubernetes Ingress resources, here’s how Gateway API differs:

Traditional Ingress:

  • Single resource type (Ingress)
  • Limited traffic routing capabilities
  • Controller-specific annotations for advanced features
  • Less separation of concerns between infrastructure and application teams

Gateway API:

  • Multiple resource types (GatewayClass, Gateway, HTTPRoute, etc.)
  • Rich, native support for advanced routing (weighted traffic, header matching, etc.)
  • Role-oriented design: infrastructure teams manage Gateways, app teams manage Routes
  • Extensible and portable across implementations

While Gateway API does not yet cover all features of Istio, it is increasingly becoming the standard for managing ingress traffic and is recommended for new deployments.
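To make the contrast concrete, here is roughly what the routing in this guide looks like as a traditional Ingress. The canary-weight annotation shown is NGINX-controller-specific and hypothetical for this cluster; with Gateway API, weighting becomes a first-class field in the HTTPRoute spec instead:

```yaml
# Classic Ingress: a single resource, with advanced behavior (canary
# weighting) pushed into controller-specific annotations.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend
  annotations:
    nginx.ingress.kubernetes.io/canary-weight: "50"  # NGINX-specific, not portable
spec:
  rules:
  - host: test.codingtricks.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backend-v1
            port:
              number: 80
```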

Prerequisites

Before starting, ensure you have:

  • kubectl installed and configured to access your Kubernetes cluster
  • Istio installed in your Kubernetes cluster (Istio 1.22 or later recommended; this guide uses the v1 Gateway API resources, which require recent Gateway API support in Istio)
  • A sample application deployed within the Istio service mesh
  • Basic understanding of Kubernetes networking concepts

If Istio is not installed or applications are not deployed, follow an Istio installation tutorial first.

Step 1: Install Gateway API CRDs

The Kubernetes Gateway API is not part of Kubernetes by default, so you need to install its Custom Resource Definitions (CRDs). These define the new API objects that Kubernetes and Istio will use.

kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml

Verify the installation:

kubectl get crds | grep gateway

You should see output similar to:

gatewayclasses.gateway.networking.k8s.io
gateways.gateway.networking.k8s.io
httproutes.gateway.networking.k8s.io
referencegrants.gateway.networking.k8s.io

Istio’s built-in controller watches and manages Gateway API objects automatically. No additional configuration is needed.

Step 2: Verify the Default Istio GatewayClass

A GatewayClass tells Kubernetes which controller should implement gateway behavior. Istio creates a GatewayClass named istio and another called istio-remote.

List GatewayClasses:

kubectl get gatewayclasses

You should see:

NAME           CONTROLLER                    ACCEPTED   AGE
istio          istio.io/gateway-controller   True       5m
istio-remote   istio.io/gateway-controller   True       5m

  • istio - Managed by Istio’s gateway controller for standard ingress
  • istio-remote - Used for multi-cluster setups where gateway control plane is remote

For this guide, use the istio GatewayClass for your ingress gateway.

Step 3: Deploy a Demo Application

For this example, we will deploy a simple HTTP application with two versions (v1 and v2) to demonstrate traffic splitting and canary deployments.

Create a namespace with Istio sidecar injection enabled:

apiVersion: v1
kind: Namespace
metadata:
  labels:
    istio-injection: enabled
  name: istio-test

Apply the namespace:

kubectl apply -f namespace.yaml

Deploy two versions of a backend service:

backend-v1-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-v1
  namespace: istio-test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
      version: v1
  template:
    metadata:
      labels:
        app: backend
        version: v1
    spec:
      containers:
      - name: echo
        image: hashicorp/http-echo
        args: ["-text=hello from backend v1"]
        ports:
        - containerPort: 5678

backend-v2-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-v2
  namespace: istio-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
      version: v2
  template:
    metadata:
      labels:
        app: backend
        version: v2
    spec:
      containers:
      - name: echo
        image: hashicorp/http-echo
        args: ["-text=hello from backend v2"]
        ports:
        - containerPort: 5678

Expose each deployment with Kubernetes services:

backend-services.yaml:

apiVersion: v1
kind: Service
metadata:
  name: backend-v1
  namespace: istio-test
spec:
  selector:
    app: backend
    version: v1
  ports:
  - name: http
    port: 80
    targetPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: backend-v2
  namespace: istio-test
spec:
  selector:
    app: backend
    version: v2
  ports:
  - name: http
    port: 80
    targetPort: 5678

Apply all resources:

kubectl apply -f backend-v1-deployment.yaml
kubectl apply -f backend-v2-deployment.yaml
kubectl apply -f backend-services.yaml

Verify deployments are running:

kubectl get pods -n istio-test

You should see output like:

NAME                 READY   STATUS    RESTARTS   AGE
backend-v1-xxx-yyy   2/2     Running   0          30s
backend-v1-xxx-zzz   2/2     Running   0          30s
backend-v1-xxx-aaa   2/2     Running   0          30s
backend-v2-xxx-bbb   2/2     Running   0          30s
backend-v2-xxx-ccc   2/2     Running   0          30s

Note that each pod shows 2/2 containers ready - this indicates the Istio sidecar proxy is running alongside your application container.

Step 4: Create a Gateway

The Gateway defines how external traffic enters the cluster. This resource is typically managed by infrastructure/platform teams.

istio-gateway.yaml:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: istio-gateway
  namespace: istio-test
  annotations:
    # Cloud-specific annotations (optional, adjust for your provider)
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  gatewayClassName: istio
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: Same

Important configuration notes:

  • gatewayClassName: istio - Uses Istio’s implementation
  • allowedRoutes.namespaces.from: Same - Only allows routes from the same namespace (more secure)
  • The annotation is AWS-specific; adjust for your cloud provider or remove for on-premises
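If application teams keep their HTTPRoutes in other namespaces, the listener can instead admit routes from labeled namespaces. A sketch of that alternative (the label key shared-gateway-access is an arbitrary name you would choose yourself):

```yaml
    allowedRoutes:
      namespaces:
        from: Selector
        selector:
          matchLabels:
            shared-gateway-access: "true"
```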

Apply the gateway:

kubectl apply -f istio-gateway.yaml

Verify the gateway is ready:

kubectl get gateway -n istio-test

Expected output:

NAME            CLASS   ADDRESS        PROGRAMMED   AGE
istio-gateway   istio   34.123.45.67   True         1m

Check the gateway pod and service:

kubectl get pods,services -n istio-test -l gateway.networking.k8s.io/gateway-name=istio-gateway

You should see a gateway pod and a LoadBalancer service created by Istio. The external IP will be used to access your services.

Troubleshooting tip: If ADDRESS remains <pending> for more than a few minutes, check your cloud provider’s load balancer quotas and logs.

Step 5: Create an HTTPRoute

The HTTPRoute maps incoming traffic to backend services. This resource is typically managed by application teams.

backend-route.yaml:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend-route
  namespace: istio-test
spec:
  parentRefs:
  - name: istio-gateway
    namespace: istio-test
  hostnames:
  - "test.codingtricks.io"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: backend-v1
      port: 80
      weight: 50
    - name: backend-v2
      port: 80
      weight: 50

Configuration breakdown:

  • parentRefs - Links this route to the gateway
  • hostnames - Domain name(s) this route responds to
  • backendRefs with weight - Splits traffic 50/50 between v1 and v2

Apply the route:

kubectl apply -f backend-route.yaml

Verify:

kubectl get httproutes -n istio-test

Expected output:

NAME            HOSTNAMES                    AGE
backend-route   ["test.codingtricks.io"]     30s

Traffic is now split evenly between backend-v1 and backend-v2.

Adjusting Traffic Weights for Canary Deployments

To gradually roll out backend-v2, adjust the weights:

backendRefs:
- name: backend-v1
  port: 80
  weight: 90
- name: backend-v2
  port: 80
  weight: 10

This sends 90% of traffic to v1 and 10% to v2, allowing you to monitor the new version with minimal risk.

Step 6: Add Istio DestinationRules for Production Resilience

Gateway API does not yet cover all Istio features. For production environments, you’ll want circuit breaking, connection pooling, and advanced load balancing. Create a DestinationRule to add these capabilities:

backend-v2-destinationrule.yaml:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: backend-v2-destination-rule
  namespace: istio-test
spec:
  host: backend-v2.istio-test.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 50
        http2MaxRequests: 100
        maxRequestsPerConnection: 2
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 30s
      maxEjectionPercent: 50

What this configuration provides:

  • Load balancing: Round-robin distribution across pods
  • Connection pooling: Limits concurrent connections to prevent resource exhaustion
  • Circuit breaking: Automatically removes unhealthy pods from the pool after 5 consecutive errors
  • Recovery: Re-tests ejected pods after 30 seconds

Apply it:

kubectl apply -f backend-v2-destinationrule.yaml

This configuration enhances traffic to backend-v2 with production-grade resilience patterns. You can create similar rules for backend-v1 or apply them to both versions with a single rule targeting the parent service.
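A sketch of that single-rule approach, assuming you also create a parent backend Service selecting app: backend (that Service is not part of this guide's manifests). Subsets let one DestinationRule cover both versions while the trafficPolicy applies to each:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: backend-destination-rule
  namespace: istio-test
spec:
  # Hypothetical parent Service spanning both versions
  host: backend.istio-test.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```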

Step 7: Configure DNS for Testing

To test your gateway, you need to route the hostname to your gateway’s IP address.

Option 1: Local /etc/hosts (for development)

Get the gateway IP:

GATEWAY_IP=$(kubectl get gateway istio-gateway -n istio-test -o jsonpath='{.status.addresses[0].value}')
echo "$GATEWAY_IP test.codingtricks.io"

Add this line to your /etc/hosts file:

34.123.45.67 test.codingtricks.io

Option 2: Port forwarding (no DNS needed)

For quick local testing without DNS configuration:

kubectl port-forward -n istio-test service/istio-gateway-istio 8080:80

Then test via http://localhost:8080, passing the expected hostname explicitly (for example, curl -H "Host: test.codingtricks.io" http://localhost:8080/), since the HTTPRoute only matches requests for that hostname.

Option 3: Real DNS (for production)

Create an A record in your DNS provider pointing test.codingtricks.io to the gateway’s external IP address.

Step 8: Validate Ingress Traffic

Test the traffic flow to verify routing and traffic splitting:

for i in {1..10}; do
  curl -H "Host: test.codingtricks.io" http://<GATEWAY_IP>/
  echo ""
done

Or if using /etc/hosts:

for i in {1..10}; do
  curl http://test.codingtricks.io
  echo ""
done

Expected output alternating between:

hello from backend v1
hello from backend v2
hello from backend v1
hello from backend v2
...

The responses should alternate roughly according to your weight configuration (50/50 in this example).

Testing with Different Traffic Patterns

Test header-based routing (if configured):

curl -H "Host: test.codingtricks.io" -H "X-Version: v2" http://<GATEWAY_IP>/
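Header routing is not configured in the HTTPRoute from Step 5, so to make the command above meaningful you would add a rule like the following (a hypothetical example, placed before the catch-all rule so the header match wins):

```yaml
  rules:
  - matches:
    - headers:
      - name: X-Version
        value: v2
      path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: backend-v2
      port: 80
```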

Test circuit breaker:

Simulate backend errors and verify Istio removes failing pods:

# Generate load
for i in {1..100}; do curl http://test.codingtricks.io; done

Monitor ejected hosts:

kubectl exec -n istio-test <gateway-pod> -- curl localhost:15000/stats | grep outlier

Step 9: Adding HTTPS/TLS (Production Requirement)

The example above uses HTTP for simplicity, but production environments require TLS encryption. Here’s how to add HTTPS support:

Create a TLS Certificate

For testing, create a self-signed certificate:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=test.codingtricks.io/O=test"

For production, obtain certificates from Let’s Encrypt or your certificate authority.
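For example, with cert-manager installed and a ClusterIssuer configured (the issuer name letsencrypt-prod below is an assumption; substitute your own), a Certificate resource keeps the TLS Secret issued and renewed automatically instead of managing it by hand:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: test-codingtricks-tls
  namespace: istio-test
spec:
  # cert-manager writes the signed cert/key into this Secret
  secretName: test-codingtricks-tls
  dnsNames:
  - test.codingtricks.io
  issuerRef:
    name: letsencrypt-prod   # hypothetical issuer name
    kind: ClusterIssuer
```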

Create a Kubernetes Secret

kubectl create secret tls test-codingtricks-tls \
  --cert=tls.crt \
  --key=tls.key \
  -n istio-test

Update the Gateway for HTTPS

istio-gateway-tls.yaml:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: istio-gateway
  namespace: istio-test
spec:
  gatewayClassName: istio
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    hostname: "test.codingtricks.io"
    tls:
      mode: Terminate
      certificateRefs:
      - name: test-codingtricks-tls
        kind: Secret
    allowedRoutes:
      namespaces:
        from: Same
  - name: http-redirect
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: Same

http-redirect-route.yaml:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-redirect
  namespace: istio-test
spec:
  parentRefs:
  - name: istio-gateway
    namespace: istio-test
    sectionName: http-redirect
  hostnames:
  - "test.codingtricks.io"
  rules:
  - filters:
    - type: RequestRedirect
      requestRedirect:
        scheme: https
        statusCode: 301

Apply both resources and test with HTTPS:

curl -k https://test.codingtricks.io
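One refinement worth considering: the backend-route from Step 5 attaches to every compatible listener by default, including the plain-HTTP one. To keep application traffic on the encrypted listener only, you can pin its parentRef with sectionName (a suggested tweak, not strictly required):

```yaml
spec:
  parentRefs:
  - name: istio-gateway
    namespace: istio-test
    sectionName: https   # attach only to the HTTPS listener
```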

Troubleshooting Common Issues

Gateway Not Getting External IP

Symptoms: Gateway status shows ADDRESS as <pending>

Possible causes and solutions:

  1. Load balancer quota exceeded: Check your cloud provider’s quota limits
  2. Service type not supported: Ensure your cluster supports LoadBalancer services
  3. Cloud controller not installed: Verify cloud controller manager is running

Debug commands:

# Check gateway events
kubectl describe gateway istio-gateway -n istio-test

# Check service events
kubectl get svc -n istio-test -l gateway.networking.k8s.io/gateway-name=istio-gateway
kubectl describe svc <gateway-service-name> -n istio-test

Traffic Not Reaching Backend Services

Symptoms: Curl requests timeout or return connection refused

Possible causes and solutions:

  1. Hostname mismatch: Ensure the Host header matches the HTTPRoute hostname
  2. Route not attached: Verify parentRefs correctly reference the gateway
  3. Namespace mismatch: Check that allowedRoutes permits the route’s namespace
  4. Backend pods not ready: Verify backend pods are running with 2/2 containers

Debug commands:

# Check route status
kubectl get httproute backend-route -n istio-test -o yaml

# Verify gateway allows this route
kubectl describe gateway istio-gateway -n istio-test

# Check backend pod status
kubectl get pods -n istio-test -l app=backend

# Test from inside the mesh
kubectl run -it --rm debug --image=curlimages/curl --restart=Never -n istio-test \
  -- curl backend-v1.istio-test.svc.cluster.local

Traffic Not Splitting as Expected

Symptoms: All traffic goes to one backend despite weight configuration

Possible causes and solutions:

  1. Session affinity enabled: Check if sticky sessions are configured
  2. Connection pooling: HTTP/1.1 keep-alive may reuse connections
  3. Insufficient requests: Run more requests to see distribution

Verification:

# Run many requests and count responses
for i in {1..100}; do curl -s http://test.codingtricks.io; done | sort | uniq -c

You should see roughly the expected distribution based on weights.

Circuit Breaker Not Ejecting Unhealthy Pods

Symptoms: Traffic continues to failing pods

Possible causes and solutions:

  1. Health check configuration: Verify consecutive5xxErrors threshold is appropriate
  2. Not enough failures: Ensure enough consecutive errors occur to trigger ejection
  3. Wrong DestinationRule target: Verify host matches the service FQDN

Debug commands:

# Check outlier detection stats
kubectl exec -n istio-test deploy/istio-gateway-istio -- \
  curl localhost:15000/stats | grep "outlier_detection"

# View DestinationRule status
kubectl get destinationrule -n istio-test
kubectl describe destinationrule backend-v2-destination-rule -n istio-test

Understanding GAMMA: The Future of Service Mesh Networking

GAMMA (Gateway API for Mesh Management and Administration) is an initiative to extend Gateway API beyond ingress to manage internal mesh traffic (east-west). This would provide a unified API for both north-south and east-west traffic.

Current state (as of early 2025):

  • Gateway API handles ingress (north-south) traffic
  • Istio’s VirtualService and DestinationRule handle internal (east-west) traffic
  • GAMMA aims to replace VirtualService with HTTPRoute for internal traffic too

Benefits when fully implemented:

  • Single API for all traffic management
  • Reduced Istio-specific configuration
  • Better portability across service mesh implementations
  • Unified RBAC and policy model

What to watch:

  • GRPCRoute for internal gRPC traffic management
  • HTTPRoute attached directly to a Service (via parentRefs) for east-west traffic
  • Enhanced ReferenceGrant for cross-namespace communication

For production use, continue using DestinationRules alongside Gateway API until GAMMA features mature.
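To illustrate the GAMMA pattern: east-west routing reuses HTTPRoute, with a Service as the parentRef instead of a Gateway. A sketch of a mesh-internal canary split, assuming a parent backend Service exists (it is not part of this guide's manifests) and that your Istio version supports GAMMA:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend-mesh-split
  namespace: istio-test
spec:
  parentRefs:
  - group: ""          # core API group, i.e. a Service
    kind: Service
    name: backend      # hypothetical parent Service
  rules:
  - backendRefs:
    - name: backend-v1
      port: 80
      weight: 90
    - name: backend-v2
      port: 80
      weight: 10
```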

Cleaning Up Resources

When you’re done testing, remove all resources:

# Delete the HTTPRoute
kubectl delete httproute backend-route -n istio-test

# Delete the Gateway
kubectl delete gateway istio-gateway -n istio-test

# Delete DestinationRules
kubectl delete destinationrule backend-v2-destination-rule -n istio-test

# Delete backend services and deployments
kubectl delete -f backend-services.yaml
kubectl delete -f backend-v1-deployment.yaml
kubectl delete -f backend-v2-deployment.yaml

# Delete the namespace
kubectl delete namespace istio-test

# Optional: Remove Gateway API CRDs (only if no other gateways exist)
kubectl delete -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml

Best Practices and Recommendations

Security

  1. Always use TLS in production - Never expose HTTP endpoints externally
  2. Limit namespace access - Use allowedRoutes.namespaces.from: Same to prevent unauthorized routes
  3. Implement authentication - Add RequestAuthentication and AuthorizationPolicy for secure access
  4. Regular certificate rotation - Automate certificate renewal with cert-manager
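As an example of item 3, the pair of resources below requires a valid JWT on all traffic in the namespace (the issuer and JWKS URI are placeholders you must replace with your identity provider's values):

```yaml
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-auth
  namespace: istio-test
spec:
  jwtRules:
  - issuer: "https://issuer.example.com"                          # placeholder
    jwksUri: "https://issuer.example.com/.well-known/jwks.json"   # placeholder
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: istio-test
spec:
  rules:
  - from:
    - source:
        # Only requests carrying a validated JWT principal are allowed
        requestPrincipals: ["*"]
```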

Traffic Management

  1. Start with conservative canary rollouts - Begin with 5-10% traffic to new versions
  2. Monitor before scaling - Watch error rates and latency before increasing traffic weights
  3. Use circuit breakers - Protect services from cascading failures with DestinationRules
  4. Set appropriate timeouts - Configure request timeouts to prevent hung requests
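For item 4, recent Gateway API releases support per-rule request timeouts directly on HTTPRoute (verify that your installed CRDs and Istio version support the timeouts field before relying on it):

```yaml
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    timeouts:
      request: 10s   # fail the request if no response within 10 seconds
    backendRefs:
    - name: backend-v1
      port: 80
```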

Observability

  1. Enable access logging - Configure gateway access logs for debugging
  2. Use distributed tracing - Integrate with Jaeger or Zipkin for request flow visibility
  3. Monitor gateway metrics - Track request rates, error rates, and latency in Prometheus/Grafana
  4. Set up alerts - Create alerts for high error rates or traffic anomalies

Operational

  1. Use GitOps - Manage gateway configurations in version control
  2. Test in staging - Always validate routing changes in non-production environments first
  3. Document hostnames - Maintain a registry of all external hostnames and their purposes
  4. Plan for multi-cluster - Design with future multi-cluster deployments in mind

Conclusion

You have successfully set up Istio ingress using the Kubernetes Gateway API. You learned how to:

  • Install and configure Gateway API resources with Istio
  • Create Gateways and HTTPRoutes to manage external traffic
  • Implement traffic splitting for canary deployments
  • Enhance routing with Istio DestinationRules for production resilience
  • Secure traffic with TLS/HTTPS encryption
  • Troubleshoot common issues effectively
  • Follow best practices for production deployments

The Gateway API represents a significant step forward in Kubernetes networking, providing a vendor-neutral, role-oriented approach to traffic management. As the API evolves and GAMMA matures, expect even more powerful features for both ingress and internal service mesh traffic management.

For more advanced configurations, explore the Gateway API documentation and Istio’s Gateway API guide.

Happy ❤️ Coding 💻