Overview:
This post walks through the general process of backing up and restoring the etcd backend data in Kubernetes.
Back up etcd data
- Set the etcd API version and check that etcdctl can communicate with the etcd server by requesting the cluster name:

    ETCDCTL_API=3 etcdctl get cluster.name \
      --endpoints=https://[private_ip_address]:2379 \
      --cacert=~/[path_to_the_.pem_file] \
      --cert=~/[path_to_the_.crt_file] \
      --key=~/[path_to_the_.key_file]
- Perform the backup:

    ETCDCTL_API=3 etcdctl snapshot save [path_where_to_save_backup.db_file] \
      --endpoints=https://[private_ip_address]:2379 \
      --cacert=~/[path_to_the_.pem_file] \
      --cert=~/[path_to_the_.crt_file] \
      --key=~/[path_to_the_.key_file]
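The backup step above can be sketched as a small wrapper script that assembles the command with a timestamped snapshot path, so repeated backups do not overwrite each other. The endpoint and certificate paths below are placeholder assumptions, not values from any real cluster:

```shell
#!/bin/sh
# Sketch of a backup wrapper; the endpoint and all paths are hypothetical
# placeholders - substitute the values for your own cluster.
ENDPOINT="https://10.0.0.5:2379"
CACERT="$HOME/certs/ca.pem"
CERT="$HOME/certs/client.crt"
KEY="$HOME/certs/client.key"

# Timestamped snapshot path so repeated backups do not overwrite each other.
SNAPSHOT="/var/backups/etcd/etcd-$(date +%Y%m%d-%H%M%S).db"

# Assemble the etcdctl invocation; it is echoed here so the sketch runs
# anywhere. On a real control-plane node, run the command instead.
CMD="ETCDCTL_API=3 etcdctl snapshot save $SNAPSHOT \
  --endpoints=$ENDPOINT --cacert=$CACERT --cert=$CERT --key=$KEY"
echo "$CMD"
```

On the node itself you would replace the final `echo` with an actual invocation (or `eval "$CMD"`).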
Restore etcd data
- To restore the etcd data, use:

    sudo ETCDCTL_API=3 etcdctl snapshot restore [path_to_the_backup_file] \
      --initial-cluster etcd-restore=https://[private_ip_of_the_etcd_server]:2380 \
      --initial-advertise-peer-urls https://[private_ip_of_the_etcd_server]:2380 \
      --name etcd-restore \
      --data-dir /var/lib/etcd
- This operation bootstraps a new single-node cluster member (named etcd-restore here) and writes the restored data into the directory given by --data-dir
- Check that the data is restored:

    sudo ls /var/lib/etcd
- After the restore, all data is owned by root. Fix this with:

    sudo chown -R etcd:etcd /var/lib/etcd
- Start etcd:

    sudo systemctl start etcd
- Check that the etcd server is up and running by requesting the cluster name:

    ETCDCTL_API=3 etcdctl get cluster.name \
      --endpoints=https://[private_ip_address]:2379 \
      --cacert=~/[path_to_the_.pem_file] \
      --cert=~/[path_to_the_.crt_file] \
      --key=~/[path_to_the_.key_file]
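The ownership fix in the steps above can be double-checked with a short sketch. The data directory here is a temporary stand-in so the example runs anywhere, and the expected owner is the current user rather than etcd; both are assumptions for illustration (on a real node you would check /var/lib/etcd against the etcd user):

```shell
#!/bin/sh
# Sketch: confirm nothing under the data directory is still owned by root
# (or by anyone other than the expected user). DATA_DIR and EXPECTED_USER
# are stand-ins so this runs outside a control-plane node.
DATA_DIR=$(mktemp -d)
touch "$DATA_DIR/member"
EXPECTED_USER=$(id -un)

# `find ! -user` lists every entry NOT owned by the expected user.
MISOWNED=$(find "$DATA_DIR" ! -user "$EXPECTED_USER")
if [ -z "$MISOWNED" ]; then
  echo "ownership ok"
else
  echo "still misowned:"
  echo "$MISOWNED"
fi
```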
[Optional - dangerous]:
For testing purposes you can rehearse the restore process on the same cluster by deleting the etcd data first:
- Stop the etcd backend:

    sudo systemctl stop etcd

- Delete the etcd data:

    sudo rm -rf /var/lib/etcd
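Before running the destructive steps above, it is worth confirming that a snapshot actually exists and is non-empty. A minimal sketch follows; the snapshot path is a placeholder, and a dummy file is created so the example is self-contained (in practice the file comes from etcdctl snapshot save):

```shell
#!/bin/sh
# Sketch: refuse to proceed with deletion unless a non-empty snapshot exists.
# SNAPSHOT is a hypothetical path; a dummy file stands in for the real
# snapshot here so the sketch is runnable anywhere.
SNAPSHOT=$(mktemp)
head -c 64 /dev/urandom > "$SNAPSHOT"

# -s tests that the file exists AND has a size greater than zero.
if [ -s "$SNAPSHOT" ]; then
  echo "snapshot present ($(wc -c < "$SNAPSHOT") bytes) - safe to proceed"
else
  echo "no snapshot found - aborting" >&2
  exit 1
fi
```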
Notes
- etcdctl - a command-line client for interacting with etcd, the Kubernetes backend storage
- --endpoints - specifies the URL of the etcd server
- --cacert - specifies the public certificate of the certificate authority
- --cert - specifies the client certificate
- --key - specifies the private key of the client certificate