How to mount the root filesystem of an OpenStack instance with Ceph/RBD backend

Recently I managed to get locked out of my own OpenStack/nova instance. While provisioning a new instance I forgot to add my public SSH key to the authorized_keys file of the user account. When the instance rebooted, cloud-init disabled the root login because the disable_root setting in /etc/cloud/cloud.cfg defaults to true. Long story short, I was locked out 😰.
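For reference, this is the setting in question (excerpt; the exact defaults vary per cloud image). With disable_root set to true, cloud-init rewrites root's authorized_keys entry into a "Please login as the default user" stub:

```yaml
# /etc/cloud/cloud.cfg (excerpt)
disable_root: true
```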

So, how to mount the root filesystem of the instance manually to regain access?

First, make sure the instance is shut off. Otherwise bad things may happen 😉. Now we need to get the UUID of the cinder volume:

root@compute-1:~# nova show my-instance | grep volumes_attached
| os-extended-volumes:volumes_attached | [{"id": "d2aa4814-87b7-473f-8b28-56dd67ffb8fa", "delete_on_termination": false}] |
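If you want the UUID non-interactively, a regex pull over that line works. This is just a sketch using the sample output from above; substitute your own instance's line:

```shell
# Extract the volume UUID from the volumes_attached line shown above.
line='| os-extended-volumes:volumes_attached | [{"id": "d2aa4814-87b7-473f-8b28-56dd67ffb8fa", "delete_on_termination": false}] |'
uuid=$(echo "$line" | grep -oE '[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}')
echo "$uuid"   # -> d2aa4814-87b7-473f-8b28-56dd67ffb8fa
```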

Get the volume name in your Ceph pool using the rbd command:

root@compute-1:~# rbd --pool volumes ls | grep d2aa4814-87b7-473f-8b28-56dd67ffb8fa

If multipath-tools is installed on your system, make sure it is stopped before mapping the volume: there is a known bug where multipath grabs the RBD device and prevents it from being unmapped later.

root@compute-1:~# service multipath-tools stop

Map the RBD volume:

root@compute-1:~# rbd map --pool volumes volume-d2aa4814-87b7-473f-8b28-56dd67ffb8fa

Let's view the partitions of the device…

root@compute-1:~# fdisk -l /dev/rbd0

… and finally mount it!

root@compute-1:~# mount /dev/rbd0p1 /mnt

Success! 👌

root@compute-1:~# ls /mnt
bin boot dev etc home lib lib64 lost+found media mnt opt proc root run sbin selinux srv sys tmp usr var vmlinuz

We can now access the filesystem and make the necessary changes, for example adding an SSH key to /mnt/home/admin/.ssh/authorized_keys.
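Concretely, that step could look like the sketch below. The username admin and the key are placeholders; use your image's actual login user and your real public key. Also note that ownership must match the numeric UID/GID of the user inside the instance, not a user on the hypervisor:

```shell
# Sketch: inject a public key into the mounted root filesystem.
# "admin" and the key below are placeholders.
user=admin
pubkey="ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA_placeholder you@workstation"

# Create ~/.ssh with the right mode and append the key.
install -d -m 700 /mnt/home/$user/.ssh
echo "$pubkey" >> /mnt/home/$user/.ssh/authorized_keys
chmod 600 /mnt/home/$user/.ssh/authorized_keys

# Ownership must be the instance user's numeric UID/GID; look it up in
# the mounted passwd file rather than guessing:
#   grep "^$user:" /mnt/etc/passwd
#   chown -R UID:GID /mnt/home/$user/.ssh
```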

When done, unmount the filesystem and unmap the volume:

root@compute-1:~# umount /mnt
root@compute-1:~# rbd unmap /dev/rbd0

And start multipath-tools again, if you stopped it earlier.
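Assuming the same sysvinit-style service wrapper used to stop it above, that is simply:

```shell
service multipath-tools start
```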