Managing MySQL databases in production Kubernetes environments can quickly become overwhelming. You’re dealing with persistent volumes, StatefulSets, secrets management, replication setup, and failover mechanisms—all while ensuring your data remains consistent and highly available.

What if there was a way to automate all of this complexity with just a few commands?
Enter the MySQL Operator for Kubernetes—a game-changing tool developed by Oracle that transforms complex database management into simple, declarative configurations. In this comprehensive guide, we’ll walk you through everything you need to know about installing and managing MySQL clusters on Kubernetes.

What Is the MySQL Operator, and Why Do You Need It?

The Challenge of Manual MySQL Management

Picture this: You’re running a critical e-commerce application on Kubernetes, and you need a highly available MySQL database. Without an operator, you’d need to:

  • Manually configure StatefulSets for each MySQL instance
  • Set up persistent volume claims and manage storage
  • Handle secrets management for database credentials
  • Configure replication between primary and secondary nodes
  • Implement failover mechanisms when the primary node goes down
  • Manage backup and recovery procedures
  • Handle rolling updates without data loss

This complexity multiplies exponentially as your application scales.

Enter MySQL Operator: Your Database Automation Superhero

The MySQL Operator (developed by Oracle) eliminates this complexity by providing:

  • Automated cluster provisioning with a single YAML file
  • Built-in high availability with automatic failover
  • Intelligent traffic routing via MySQL Router
  • Automated backup and recovery workflows
  • Rolling updates with zero downtime
  • Multi-zone deployment support

Key Benefits for Production Workloads

| Traditional Approach | MySQL Operator |
| --- | --- |
| Manual StatefulSet configuration | Declarative cluster definitions |
| Complex replication setup | Automatic replication management |
| Manual failover procedures | Built-in automatic failover |
| Custom backup scripts | Scheduled backup resources |
| Manual scaling operations | Declarative scaling |

How MySQL Operator Works Under the Hood

Understanding the architecture is crucial for effective troubleshooting and optimization. Here’s how the MySQL Operator orchestrates your database infrastructure:

graph TB
    %% ===============================
    %% Subgraphs
    %% ===============================
    subgraph "Application Layer"
        APP[Application Pods]
    end
    
    subgraph "MySQL Router Layer"
        ROUTER1[MySQL Router Pod 1]
        ROUTER2[MySQL Router Pod 2]
    end
    
    subgraph "MySQL Cluster Layer"
        PRIMARY[MySQL Primary Pod]
        SECONDARY1[MySQL Secondary Pod 1]
        SECONDARY2[MySQL Secondary Pod 2]
    end
    
    subgraph "Storage Layer"
        PV1[Persistent Volume 1]
        PV2[Persistent Volume 2]
        PV3[Persistent Volume 3]
    end
    
    subgraph "Control Plane"
        OPERATOR[MySQL Operator]
        CRD[Custom Resources]
    end

    %% ===============================
    %% Connections
    %% ===============================
    APP --> ROUTER1
    APP --> ROUTER2
    
    ROUTER1 --> PRIMARY
    ROUTER1 --> SECONDARY1
    ROUTER1 --> SECONDARY2
    ROUTER2 --> PRIMARY
    ROUTER2 --> SECONDARY1
    ROUTER2 --> SECONDARY2
    
    PRIMARY --> PV1
    SECONDARY1 --> PV2
    SECONDARY2 --> PV3
    
    OPERATOR --> CRD
    CRD --> PRIMARY
    CRD --> SECONDARY1
    CRD --> SECONDARY2
    CRD --> ROUTER1
    CRD --> ROUTER2
    
    PRIMARY -.->|Replication| SECONDARY1
    PRIMARY -.->|Replication| SECONDARY2
    
    %% ===============================
    %% Styling
    %% ===============================
    classDef app fill:#4CAF50,stroke:#2E7D32,stroke-width:2px,color:#fff;
    classDef router fill:#FF9800,stroke:#E65100,stroke-width:2px,color:#fff;
    classDef clusterPrimary fill:#2196F3,stroke:#0D47A1,stroke-width:2px,color:#fff;
    classDef clusterSecondary fill:#64B5F6,stroke:#1565C0,stroke-width:2px,color:#fff;
    classDef storage fill:#9C27B0,stroke:#4A148C,stroke-width:2px,color:#fff;
    classDef control fill:#607D8B,stroke:#263238,stroke-width:2px,color:#fff;

    %% Assigning styles
    class APP app;
    class ROUTER1,ROUTER2 router;
    class PRIMARY clusterPrimary;
    class SECONDARY1,SECONDARY2 clusterSecondary;
    class PV1,PV2,PV3 storage;
    class OPERATOR,CRD control;

Traffic Flow Explanation

  1. Application Request: Your application pods send read/write requests to the database
  2. Smart Routing: MySQL Router pods intelligently route traffic:
    • Write requests → Always to the Primary pod (prevents conflicts)
    • Read requests → Load-balanced across the Secondary pods
  3. Data Replication: The Primary pod replicates data to the Secondary pods, ensuring consistency
  4. Automatic Failover: If the Primary fails, a Secondary is automatically promoted
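The routing rules above can be sketched in a few lines of Python. This is a toy model for illustration only (the class and pod names are hypothetical, not operator code): writes always land on the primary, while reads are spread round-robin across the secondaries.

```python
from itertools import cycle

class RouterSketch:
    """Toy model of MySQL Router's read/write split (illustrative only)."""

    def __init__(self, primary, secondaries):
        self.primary = primary
        self._reader = cycle(secondaries)  # round-robin over read replicas

    def route(self, query):
        # Writes must go to the primary to avoid conflicting updates.
        if query.strip().split()[0].upper() in ("INSERT", "UPDATE", "DELETE"):
            return self.primary
        # Reads are load-balanced across the secondaries.
        return next(self._reader)

router = RouterSketch("devcluster-0", ["devcluster-1", "devcluster-2"])
print(router.route("INSERT INTO users ..."))  # devcluster-0
print(router.route("SELECT * FROM users"))    # devcluster-1
print(router.route("SELECT * FROM users"))    # devcluster-2
```

In the real cluster, the split is done by port: applications send writes to the Router's read/write port and reads to its read-only port, and Router picks the destination pod.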

Prerequisites and Setup Requirements

Before diving into the installation, ensure you have the following components ready:

Essential Requirements

| Component | Minimum Version | Purpose |
| --- | --- | --- |
| Kubernetes Cluster | v1.19+ | Host environment for the MySQL Operator |
| kubectl | v1.19+ | Command-line tool for cluster interaction |
| Helm | v3.0+ | Package manager for Kubernetes applications |
| Storage Class | Any dynamic provisioner | Persistent storage for database data |

Cluster Resource Requirements

# Minimum resource requirements per MySQL instance
resources:
  requests:
    memory: "1Gi"
    cpu: "500m"
    storage: "10Gi"
  limits:
    memory: "2Gi"
    cpu: "1000m"

Quick Environment Check

Run these commands to verify your environment:

# Check Kubernetes cluster access
kubectl cluster-info

# Verify kubectl version
kubectl version --client

# Check available storage classes
kubectl get storageclass

# Verify Helm installation
helm version

Expected output should show:

  • ✅ Kubernetes cluster connectivity
  • ✅ kubectl version 1.19 or higher
  • ✅ At least one available storage class
  • ✅ Helm version 3.0 or higher
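If you want to script these checks, a small helper can compare version strings against the floors in the requirements table. This is a sketch (the helper name and the `v1.19` minimum are taken from this guide, not from any kubectl tooling):

```python
import re

def meets_minimum(version: str, minimum=(1, 19)) -> bool:
    """Return True if a version string like 'v1.27.3' satisfies the minimum."""
    match = re.match(r"v?(\d+)\.(\d+)", version)
    if not match:
        raise ValueError(f"unrecognized version string: {version!r}")
    return (int(match.group(1)), int(match.group(2))) >= minimum

print(meets_minimum("v1.28.2"))  # True
print(meets_minimum("v1.18.9"))  # False
```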

Installing MySQL Operator with Helm

Now let’s get hands-on with the installation process. We’ll use Helm for a streamlined installation experience.

Step 1: Add the Official MySQL Operator Repository

# Add the MySQL operator Helm repository
helm repo add mysql-operator https://mysql.github.io/mysql-operator/

# Update your local Helm repository cache
helm repo update

Step 2: Install the MySQL Operator

# Install the MySQL operator in its own namespace
helm install my-mysql-operator mysql-operator/mysql-operator \
  --namespace mysql-operator \
  --create-namespace

Step 3: Verify the Installation

# Check operator deployment status
kubectl -n mysql-operator get deployments,services

# Expected output:
# NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
# deployment.apps/mysql-operator   1/1     1            1           2m

# NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
# service/mysql-operator   ClusterIP   10.96.156.219   <none>        9443/TCP   2m

Step 4: Validate Custom Resource Definitions

# List MySQL operator CRDs
kubectl get crds | grep mysql

# Expected output:
# innodbclusters.mysql.oracle.com 2025-01-15T10:30:15Z
# mysqlbackups.mysql.oracle.com 2025-01-15T10:30:15Z

Customizing the Installation (Optional)

For production environments, you might want to customize the operator configuration:

# Download the Helm chart for customization
helm pull mysql-operator/mysql-operator --untar

# View the chart structure
tree mysql-operator/

Key files to customize:

  • values.yaml - Main configuration parameters
  • templates/deployment.yaml - Operator deployment settings

Understanding InnoDB Clusters

Before deploying your first cluster, let’s understand what an InnoDB Cluster provides:

InnoDB Cluster Components

graph LR
    subgraph "InnoDB Cluster"
        PRIMARY[Primary Instance<br/>Read + Write]
        SECONDARY1[Secondary Instance 1<br/>Read Only]
        SECONDARY2[Secondary Instance 2<br/>Read Only]
    end
    
    subgraph "High Availability Features"
        FAILOVER[Automatic Failover]
        REPLICATION[Group Replication]
        CONSISTENCY[Data Consistency]
    end
    
    PRIMARY --> SECONDARY1
    PRIMARY --> SECONDARY2
    PRIMARY --> FAILOVER
    PRIMARY --> REPLICATION
    PRIMARY --> CONSISTENCY

Key Benefits

  • Group Replication: Ensures data consistency across all instances
  • Automatic Failover: Promotes a secondary to primary if the primary fails
  • Conflict Detection: Prevents data conflicts in multi-master scenarios
  • Built-in Monitoring: Health checks and status reporting
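Automatic failover depends on quorum: Group Replication stays writable only while a majority of members is reachable, which is why odd cluster sizes are recommended. The arithmetic is simple (a quick illustration, not operator code):

```python
def fault_tolerance(instances: int) -> int:
    """Members that can fail while a Group Replication majority survives."""
    return (instances - 1) // 2

for n in (3, 4, 5):
    print(f"{n} instances -> quorum {n // 2 + 1}, tolerates {fault_tolerance(n)} failure(s)")
```

Note that 4 instances tolerate no more failures than 3, which is why an even-sized cluster wastes a node.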

Deploying Your First MySQL InnoDB Cluster

Time to create your first MySQL cluster! We’ll start with a simple 3-node setup.

Step 1: Deploy the InnoDB Cluster

# Deploy a 3-instance MySQL cluster with 1 router
helm install devcluster mysql-operator/mysql-innodbcluster \
  --set credentials.root.user='root' \
  --set credentials.root.password='SecurePassword123!' \
  --set credentials.root.host='%' \
  --set serverInstances=3 \
  --set routerInstances=1 \
  --set tls.useSelfSigned=true

Step 2: Monitor the Deployment Progress

# Watch the pods come online
kubectl get pods -l app.kubernetes.io/name=mysql-innodbcluster -w

# Check deployment, StatefulSet, and PVCs
kubectl get deploy,statefulset,pvc

Expected output after successful deployment:

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/devcluster-router   1/1     1            1           5m

NAME                          READY   AGE
statefulset.apps/devcluster   3/3     5m

NAME                                         STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS
persistentvolumeclaim/datadir-devcluster-0   Bound    pvc-abc123...   2Gi        RWO            gp2
persistentvolumeclaim/datadir-devcluster-1   Bound    pvc-def456...   2Gi        RWO            gp2
persistentvolumeclaim/datadir-devcluster-2   Bound    pvc-ghi789...   2Gi        RWO            gp2

Step 3: Verify Cluster Status

# Check the InnoDB cluster custom resource
kubectl get innodbclusters

# Expected output:
# NAME         STATUS   ONLINE   INSTANCES   ROUTERS   AGE
# devcluster   ONLINE   3        3           1         5m

Testing Database Connectivity and Replication

Let’s verify that our MySQL cluster is working correctly by testing connectivity and replication.

Step 1: Connect to the MySQL Cluster

# Create a temporary MySQL client pod
kubectl run --rm -it mysql-client \
  --image=container-registry.oracle.com/mysql/community-operator:8.0.35-2.1.0 \
  --restart=Never \
  -- mysqlsh root@devcluster --sql

When prompted, enter the password: SecurePassword123!

Step 2: Basic Database Operations

Once connected to the MySQL shell, run these commands:

-- Check current hostname (shows which pod you're connected to)
SELECT @@hostname;

-- List existing databases
SHOW DATABASES;

-- Create a test database
CREATE DATABASE ecommerce_app;

-- Use the new database
USE ecommerce_app;

-- Create a test table
CREATE TABLE users (
    id INT AUTO_INCREMENT PRIMARY KEY,
    username VARCHAR(50) NOT NULL,
    email VARCHAR(100) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Insert test data
INSERT INTO users (username, email) VALUES
    ('john_doe', '[email protected]'),
    ('jane_smith', '[email protected]'),
    ('bob_wilson', '[email protected]');

-- Verify data insertion
SELECT * FROM users;

Step 3: Test Replication

To verify data replication across all MySQL instances:

# Connect to each pod individually and check data
for pod in devcluster-0 devcluster-1 devcluster-2; do
  echo "=== Checking pod: $pod ==="
  kubectl exec -it $pod -- mysql -uroot -pSecurePassword123! \
    -e "USE ecommerce_app; SELECT COUNT(*) as user_count FROM users;"
done

All pods should return the same user count, confirming successful replication.
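Once you have collected the numbers from the loop above, verifying consistency programmatically is just an all-equal check (a hypothetical helper; the counts dictionary stands in for whatever you captured):

```python
def replication_consistent(counts: dict) -> bool:
    """True when every pod reports the same row count."""
    return len(set(counts.values())) == 1

counts = {"devcluster-0": 3, "devcluster-1": 3, "devcluster-2": 3}
print(replication_consistent(counts))  # True
```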

Setting Up Automated Backups

Data protection is crucial for production workloads. Let’s configure automated backups using the MySQL Operator’s built-in backup capabilities.

Understanding MySQL Operator Backup Architecture

graph TB
    subgraph "Backup Process"
        SCHEDULE[Backup Schedule<br/>CronJob Pattern]
        PROFILE[Backup Profile<br/>Configuration]
        STORAGE[Persistent Volume<br/>Backup Storage]
    end
    
    subgraph "MySQL Cluster"
        PRIMARY[Primary MySQL Pod]
        SECONDARY[Secondary Pods]
    end
    
    subgraph "Backup Artifacts"
        DUMP[MySQL Dump Files]
        METADATA[Backup Metadata]
        LOGS[Backup Logs]
    end
    
    SCHEDULE --> PROFILE
    PROFILE --> PRIMARY
    PRIMARY --> DUMP
    DUMP --> STORAGE
    METADATA --> STORAGE
    LOGS --> STORAGE

Step 1: Create Backup Storage

First, we need persistent storage for our backups:

# backup-storage.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-backup-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/mysql-backups"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-backup-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Apply the storage configuration:

kubectl apply -f backup-storage.yaml

Step 2: Configure Backup-Enabled Cluster

Create a comprehensive cluster configuration with backup capabilities:

# mysql-cluster-with-backups.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: devcluster-sa
  namespace: default
---
apiVersion: v1
kind: Secret
metadata:
  name: devcluster-secret
  namespace: default
type: Opaque
stringData:
  rootUser: "root"
  rootHost: "%"
  rootPassword: "SecurePassword123!"
---
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: devcluster
  namespace: default
spec:
  instances: 3
  router:
    instances: 1
  secretName: devcluster-secret
  tlsUseSelfSigned: true

  # Backup configuration
  backupProfiles:
    - name: daily-backup-profile
      dumpInstance:
        dumpOptions:
          excludeSchemas: ["performance_schema", "information_schema"]
        storage:
          persistentVolumeClaim:
            claimName: mysql-backup-pvc

  backupSchedules:
    - name: daily-backup
      schedule: "0 2 * * *"   # Daily at 2 AM
      backupProfileName: daily-backup-profile
      enabled: true

    - name: four-hourly-backup
      schedule: "0 */4 * * *" # Every 4 hours
      backupProfileName: daily-backup-profile
      enabled: true
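The `schedule` fields use standard five-field cron syntax: minute, hour, day-of-month, month, day-of-week. A tiny parser makes the two schedules explicit (illustrative only; the operator does the real parsing):

```python
def parse_cron(expr: str) -> dict:
    """Split a five-field cron expression into named fields."""
    fields = expr.split()
    if len(fields) != 5:
        raise ValueError(f"expected 5 cron fields, got {len(fields)}")
    names = ("minute", "hour", "day_of_month", "month", "day_of_week")
    return dict(zip(names, fields))

print(parse_cron("0 2 * * *"))    # daily at 02:00
print(parse_cron("0 */4 * * *"))  # at minute 0 of every 4th hour
```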

Step 3: Apply Backup Configuration

# Apply the backup-enabled cluster configuration
kubectl apply -f mysql-cluster-with-backups.yaml

# Monitor backup creation
kubectl get mysqlbackups -w

Step 4: Verify Backup Operations

# List all backup resources
kubectl get mysqlbackups

# Check backup details
kubectl describe mysqlbackup <backup-name>

# View backup files in storage
kubectl exec -it <mysql-pod> -- ls -la /mnt/mysql-backups/

Multi-Zone Deployment Considerations

When deploying MySQL clusters across multiple availability zones, several important factors come into play.

Understanding Multi-Zone Challenges

graph TB
    subgraph "Zone A"
        MYSQL_A[MySQL Pod A<br/>Primary]
        PV_A[PV in Zone A]
        MYSQL_A --> PV_A
    end
    
    subgraph "Zone B"
        MYSQL_B[MySQL Pod B<br/>Secondary]
        PV_B[PV in Zone B]
        MYSQL_B --> PV_B
    end
    
    subgraph "Zone C"
        MYSQL_C[MySQL Pod C<br/>Secondary]
        PV_C[PV in Zone C]
        MYSQL_C --> PV_C
    end
    
    MYSQL_A -.->|Network Replication| MYSQL_B
    MYSQL_A -.->|Network Replication| MYSQL_C
    
    subgraph "Key Considerations"
        LATENCY[Cross-Zone Latency]
        COST[Data Transfer Costs]
        AFFINITY[Volume Affinity]
    end

Best Practices for Multi-Zone Deployments

1. Use Topology Spread Constraints

# Multi-zone cluster configuration
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: multi-zone-cluster
spec:
  instances: 3
  podSpec:
    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            mysql.oracle.com/cluster: multi-zone-cluster

2. Configure Storage Class for Zone Awareness

# Zone-aware storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mysql-zone-aware
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer  # Critical for zone affinity
allowVolumeExpansion: true
parameters:
  type: gp3
  fsType: ext4

Performance and Cost Optimization

| Consideration | Impact | Mitigation Strategy |
| --- | --- | --- |
| Cross-zone latency | 1-2ms additional latency | Use read replicas in the same zone as applications |
| Data transfer costs | $0.01-0.02/GB in AWS | Minimize cross-zone queries; use local read replicas |
| Split-brain risk | Potential data inconsistency | Always use an odd number of instances (3, 5, 7) |
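For the cost row, a back-of-the-envelope estimate helps when sizing: with two secondaries in other zones, every written gigabyte crosses zone boundaries twice. The sketch below assumes the $0.01/GB rate from the table; substitute your provider's actual pricing.

```python
def monthly_replication_cost(gb_written_per_day: float,
                             cross_zone_replicas: int = 2,
                             usd_per_gb: float = 0.01,
                             days: int = 30) -> float:
    """Rough monthly cross-zone transfer cost for replication traffic."""
    return gb_written_per_day * cross_zone_replicas * usd_per_gb * days

# 50 GB of writes per day, replicated to two other zones:
print(f"${monthly_replication_cost(50):.2f}/month")  # $30.00/month
```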

Troubleshooting Common Issues

Even with the MySQL Operator’s automation, issues can occur. Here are the most common problems and their solutions.

Issue 1: Cluster Domain Resolution Errors

Symptoms:

ERROR: Failed to connect to MySQL server at devcluster:3306

Solution:

# Set the correct cluster domain
kubectl -n mysql-operator set env deploy/mysql-operator \
  MYSQL_OPERATOR_K8S_CLUSTER_DOMAIN=cluster.local

# Restart the operator
kubectl -n mysql-operator rollout restart deploy/mysql-operator

Issue 2: Pods Stuck in Pending State

Diagnosis:

# Check pod status and events
kubectl describe pod <pod-name>

Common causes and solutions:

| Issue | Cause | Solution |
| --- | --- | --- |
| Insufficient resources | Not enough CPU/memory | Scale nodes or adjust resource requests |
| Storage unavailable | PV provisioning failed | Check storage class and CSI driver |
| Image pull errors | Network or registry issues | Verify image registry access |

Issue 3: Backup Failures

Diagnosis and Resolution:

# Check backup status
kubectl get mysqlbackups

# Examine backup logs
kubectl describe mysqlbackup <backup-name>

# Common fixes:
# 1. Ensure backup PVC has sufficient space
kubectl get pvc mysql-backup-pvc

# 2. Verify backup profile configuration
kubectl get innodbclusters devcluster -o yaml | grep -A 10 backupProfiles

Issue 4: Replication Lag

Monitoring Replication Health:

# Connect to MySQL and check replication status
kubectl exec -it devcluster-0 -- mysql -uroot -p -e "
  SELECT
    MEMBER_HOST,
    MEMBER_STATE,
    MEMBER_ROLE
  FROM performance_schema.replication_group_members;"
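The rows returned by that query can be checked mechanically: the cluster keeps quorum while a majority of members reports ONLINE. A small hypothetical checker (the member dictionaries mirror the query's columns):

```python
def has_quorum(members: list) -> bool:
    """True when a majority of Group Replication members is ONLINE."""
    online = sum(1 for m in members if m["MEMBER_STATE"] == "ONLINE")
    return online > len(members) // 2

members = [
    {"MEMBER_HOST": "devcluster-0", "MEMBER_STATE": "ONLINE",      "MEMBER_ROLE": "PRIMARY"},
    {"MEMBER_HOST": "devcluster-1", "MEMBER_STATE": "ONLINE",      "MEMBER_ROLE": "SECONDARY"},
    {"MEMBER_HOST": "devcluster-2", "MEMBER_STATE": "UNREACHABLE", "MEMBER_ROLE": "SECONDARY"},
]
print(has_quorum(members))  # True (2 of 3 online)
```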

Best Practices and Production Tips

Security Hardening

1. Use Strong Passwords and Secrets Management

# Use external secret management
apiVersion: v1
kind: Secret
metadata:
  name: mysql-credentials
type: Opaque
data:
  rootPassword: <base64-encoded-strong-password>
  # Generate with: echo -n 'YourStrongPassword123!' | base64
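The `data` field of a Kubernetes Secret must be base64-encoded (the `stringData` field accepts plain text instead). If you prefer scripting the encoding, Python's standard library round-trips it; note that base64 is encoding, not encryption:

```python
import base64

password = "YourStrongPassword123!"
encoded = base64.b64encode(password.encode()).decode()
print(encoded)

# Decoding recovers the original, confirming the value is only encoded.
assert base64.b64decode(encoded).decode() == password
```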

2. Enable TLS Encryption

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: secure-cluster
spec:
  tlsUseSelfSigned: false  # Use proper certificates in production
  tlsCASecretName: mysql-ca-cert
  tlsSecretName: mysql-tls-cert

Resource Management

1. Set Resource Requests and Limits

spec:
  podSpec:
    containers:
      - name: mysql
        resources:
          requests:
            memory: "2Gi"
            cpu: "1000m"
          limits:
            memory: "4Gi"
            cpu: "2000m"

2. Configure Persistent Volume Settings

spec:
  datadirVolumeClaimTemplate:
    accessModes: ["ReadWriteOnce"]
    storageClassName: "fast-ssd"
    resources:
      requests:
        storage: "100Gi"

Monitoring and Observability

1. Enable Metrics Collection

# Add monitoring annotations
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9104"
    prometheus.io/path: "/metrics"

2. Set Up Health Checks

spec:
  podSpec:
    containers:
      - name: mysql
        livenessProbe:
          exec:
            command:
              - mysqladmin
              - ping
              - --user=root
              - --password=$(MYSQL_ROOT_PASSWORD)
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3

Scaling and Performance

1. Horizontal Scaling Guidelines

# Scale cluster instances (always use odd numbers)
kubectl patch innodbcluster devcluster --type='merge' -p='
{
  "spec": {
    "instances": 5
  }
}'

2. Performance Tuning

spec:
  mycnf: |
    [mysqld]
    innodb_buffer_pool_size=2G
    innodb_log_file_size=256M
    max_connections=1000
    # Note: query_cache_size must not be set; the query cache was removed in MySQL 8.0
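A common starting point for `innodb_buffer_pool_size` on a dedicated MySQL pod is roughly 70-75% of the container's memory limit. This is a rule of thumb, not an operator default; the helper below just makes the arithmetic explicit:

```python
def suggested_buffer_pool_gib(memory_limit_gib: float, fraction: float = 0.75) -> float:
    """Rule-of-thumb InnoDB buffer pool size for a dedicated MySQL container."""
    return round(memory_limit_gib * fraction, 1)

# With a 4Gi limit, ~3G for the buffer pool leaves headroom
# for connection buffers and other per-session memory.
print(suggested_buffer_pool_gib(4))  # 3.0
```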

Advanced Configuration Examples

Production-Ready Cluster Configuration

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: production-mysql
  namespace: production
  labels:
    app: ecommerce
    environment: production
spec:
  instances: 5  # Odd number for quorum

  # Router configuration
  router:
    instances: 2
    podSpec:
      resources:
        requests:
          memory: "512Mi"
          cpu: "250m"
        limits:
          memory: "1Gi"
          cpu: "500m"

  # MySQL server configuration
  mycnf: |
    [mysqld]
    # Performance tuning
    innodb_buffer_pool_size=4G
    innodb_log_file_size=512M
    max_connections=2000

    # Security hardening
    local_infile=0
    skip_show_database=1

    # Replication optimization
    binlog_format=ROW
    gtid_mode=ON
    enforce_gtid_consistency=1

  # Pod specifications
  podSpec:
    resources:
      requests:
        memory: "4Gi"
        cpu: "2000m"
      limits:
        memory: "8Gi"
        cpu: "4000m"

    # Security context
    securityContext:
      runAsUser: 27
      runAsGroup: 27
      fsGroup: 27

    # Node affinity for performance
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: node-type
                  operator: In
                  values: ["database-optimized"]

  # Storage configuration
  datadirVolumeClaimTemplate:
    accessModes: ["ReadWriteOnce"]
    storageClassName: "fast-nvme-ssd"
    resources:
      requests:
        storage: "1Ti"

  # Backup configuration
  backupProfiles:
    - name: full-backup
      dumpInstance:
        dumpOptions:
          excludeSchemas: ["performance_schema", "information_schema", "sys"]
          routines: true   # MySQL Shell dump option
          triggers: true   # MySQL Shell dump option
        storage:
          persistentVolumeClaim:
            claimName: mysql-backup-storage

  backupSchedules:
    - name: nightly-full-backup
      schedule: "0 1 * * *"   # 1 AM daily
      backupProfileName: full-backup
      enabled: true

    - name: two-hourly-dump
      schedule: "0 */2 * * *" # Every 2 hours; each run is a full dump, not incremental
      backupProfileName: full-backup
      enabled: true

  # Security configuration
  secretName: mysql-production-credentials
  tlsUseSelfSigned: false
  tlsCASecretName: mysql-ca-certificate
  tlsSecretName: mysql-tls-certificate

Cleanup and Maintenance

Proper Cleanup Procedures

# 1. Delete the MySQL cluster first
helm uninstall devcluster

# 2. Remove the operator
helm uninstall my-mysql-operator -n mysql-operator

# 3. Clean up the namespace
kubectl delete namespace mysql-operator

# 4. Remove persistent volumes (if needed)
kubectl delete pv mysql-backup-pv

# 5. Remove any remaining custom resources
kubectl delete crd innodbclusters.mysql.oracle.com
kubectl delete crd mysqlbackups.mysql.oracle.com

Regular Maintenance Tasks

  1. Monitor backup completion - Check backup logs weekly
  2. Review resource usage - Adjust requests/limits based on actual usage
  3. Update operator version - Keep operator up-to-date with latest features
  4. Test disaster recovery - Regularly test backup restoration procedures
  5. Security audits - Review access logs and update credentials

Conclusion

The MySQL Operator transforms complex database management into simple, declarative configurations. By following this guide, you've learned to install and configure the MySQL Operator, deploy highly available InnoDB clusters, set up automated backup workflows, handle multi-zone deployments, troubleshoot common issues, and implement production-ready configurations.

The MySQL Operator is actively maintained by Oracle with regular updates and new features. Stay connected with the official documentation for the latest developments.

Happy Coding