Resize etcd volumes on kops
kops does not support changing the size or type of etcd volumes after cluster creation, which is inconvenient if you created volumes that are too large.
If your etcd volumes are gp2 or io1 type, you can extend them easily in the AWS management console.
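If you prefer the CLI to the console, the same extension can be scripted with the AWS CLI. This is a sketch; the volume ID and target size below are placeholders for your environment:

```shell
# Extend an etcd volume in place (gp2/io1 volumes support online resize).
# vol-0123456789abcdef0 is a placeholder; look up the real ID in EC2.
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 20

# Watch the modification progress until it leaves the "modifying" state.
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0
```

After the volume modification completes, grow the filesystem on the master (for ext4, `sudo resize2fs` on the corresponding device).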
You can shrink the volumes with the following steps:
- Stop the Kubernetes master services.
- Create the new volumes.
- Copy data from old to new.
- Add tags to the new volumes.
Steps to shrink volumes
These steps assume your cluster has a single master. If your cluster has multiple masters, scale down to a single master in advance.
(1) Stop Kubernetes master
Connect to the master instance:
ssh -i your_ssh_key admin@x.x.x.x
Stop the Kubernetes services:
sudo systemctl stop docker-healthcheck.timer
sudo systemctl stop docker
sudo systemctl stop kubelet
sudo systemctl stop protokube
# Make sure no Kubernetes related processes are left
ps axfu
(2) Create new volumes
Create two new volumes:
- A volume with Name tag a.etcd-main.cluster.domain
- A volume with Name tag a.etcd-events.cluster.domain
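The new volumes can also be created from the CLI. This is a sketch; the size, availability zone, and cluster domain are placeholders for your environment:

```shell
# Create a smaller replacement volume for etcd-main, tagged by Name.
# Size, zone, and a.etcd-main.cluster.domain are placeholders.
aws ec2 create-volume \
  --availability-zone us-east-1a \
  --size 20 \
  --volume-type gp2 \
  --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=a.etcd-main.cluster.domain}]'
```

Repeat with the a.etcd-events.cluster.domain Name tag for the events volume.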
Attach the new volumes to the master instance. Then format and mount the new volumes:
sudo mkfs.ext4 /dev/xvdf
sudo mkdir /mnt/etcd-main
sudo mount /dev/xvdf /mnt/etcd-main
sudo mkfs.ext4 /dev/xvdg
sudo mkdir /mnt/etcd-events
sudo mount /dev/xvdg /mnt/etcd-events
Now the master instance has the following volumes:
- Old volume a.etcd-main.cluster.domain at /mnt/master-vol-xxx
- Old volume a.etcd-events.cluster.domain at /mnt/master-vol-yyy
- New volume a.etcd-main.cluster.domain at /mnt/etcd-main
- New volume a.etcd-events.cluster.domain at /mnt/etcd-events
You can see them with the mount command:
/dev/xvdu on /mnt/master-vol-xxx type ext4 (rw,relatime,data=ordered)
/dev/xvdv on /mnt/master-vol-yyy type ext4 (rw,relatime,data=ordered)
/dev/xvdf on /mnt/etcd-main type ext4 (rw,relatime,data=ordered)
/dev/xvdg on /mnt/etcd-events type ext4 (rw,relatime,data=ordered)
(3) Copy from old to new
Copy data from old to new:
cd /mnt/master-vol-xxx
sudo cp -av k8s.io/ var/ /mnt/etcd-main
cd /mnt/master-vol-yyy
sudo cp -av k8s.io/ var/ /mnt/etcd-events
Unmount the volumes:
sudo umount /mnt/etcd-main
sudo umount /mnt/etcd-events
(4) Add tags to new volumes
Add the following tags to the new main volume:
- KubernetesCluster=cluster.name (your cluster name)
- k8s.io/etcd/main=a/a (availability zone)
- k8s.io/role/master=1
- kubernetes.io/cluster/cluster.name=owned
Add the following tags to the new events volume:
- KubernetesCluster=cluster.name (your cluster name)
- k8s.io/etcd/events=a/a (availability zone)
- k8s.io/role/master=1
- kubernetes.io/cluster/cluster.name=owned
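The tagging can be scripted with the AWS CLI as well. This is a sketch; the volume IDs, cluster name, and availability zone are placeholders:

```shell
# Tag the new main volume so protokube recognizes it (IDs and names are placeholders).
aws ec2 create-tags --resources vol-0123456789abcdef0 --tags \
  Key=KubernetesCluster,Value=cluster.name \
  Key=k8s.io/etcd/main,Value=a/a \
  Key=k8s.io/role/master,Value=1 \
  Key=kubernetes.io/cluster/cluster.name,Value=owned

# Remove the same tags from the old main volume.
aws ec2 delete-tags --resources vol-0fedcba9876543210 --tags \
  Key=KubernetesCluster Key=k8s.io/etcd/main \
  Key=k8s.io/role/master Key=kubernetes.io/cluster/cluster.name
```

Repeat with the k8s.io/etcd/events key for the events volumes.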
Remove the same tags from the old volumes so that protokube can find the new volumes at boot.
Finally terminate the master instance:
sudo poweroff
Wait a moment, and the auto scaling group will spawn a new instance.
Conclusion
Even after cluster creation, you can extend or shrink etcd volumes.