How to Restructure an OCI Oracle Linux 8 LVM Platform Image

James George
Oracle Developers
Dec 16, 2022

Restructure an OCI Oracle Linux 8 LVM platform image for separate /var, /tmp, /home, etc.

⚠️ This process is destructive — running this process on a copy, a clone or an easily replaceable boot volume is strongly recommended.

Background

OCI platform images are delivered with a simple, usable partitioning scheme: the root (/) file system contains everything. However, some customers prefer (or require) a more defensive structure that isolates volatile portions of the file system, making it less likely that adverse events on the file system will bring down the whole instance. For example, filling the root (/) file system can cause all sorts of unexpected behaviour. When locations like /tmp, /var and /home are part of the root file system, the whole system is more exposed to rogue processes, excessive logging, and human error. While it is possible to mitigate some of these risks with proactive monitoring and maintenance, there is still scope for encountering issues.

It is possible, though uncommon, that a completely full file system can leave administrators unable to log in, or prevent an instance from booting successfully, resulting in a more complex and time-consuming recovery process.

The OCI Oracle Linux 8 platform images have a disk structure like this:


$ lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
├─sda1 vfat 20D9-FA96 /boot/efi
├─sda2 xfs 70b15dc2-5ada-4399-9df5-b48c348cd4f1 /boot
└─sda3 LVM2_member VxgtY1-3gar-4ACy-JeJI-YAJf-A7MK-gFdAOT
  ├─ocivolume-root xfs 25dae1d1-c678-45b2-bd0c-c308e8c950cc /
  └─ocivolume-oled xfs a8c9a3f3-5052-445e-8868-5c1249ae071e /var/oled

$ df -hP
Filesystem Size Used Avail Use% Mounted on
devtmpfs 302M 0 302M 0% /dev
tmpfs 343M 0 343M 0% /dev/shm
tmpfs 343M 14M 329M 4% /run
tmpfs 343M 0 343M 0% /sys/fs/cgroup
/dev/mapper/ocivolume-root 36G 6.6G 29G 19% /
/dev/mapper/ocivolume-oled 10G 119M 9.9G 2% /var/oled
/dev/sda2 1014M 777M 238M 77% /boot
/dev/sda1 100M 5.0M 95M 5% /boot/efi
tmpfs 69M 0 69M 0% /run/user/0
tmpfs 69M 0 69M 0% /run/user/987
tmpfs 69M 0 69M 0% /run/user/1000

The sda1 and sda2 partitions are required to boot the instance, but the remainder of the operating system is installed in sda3, which is in turn an LVM physical volume belonging to a volume group (ocivolume) with multiple logical volumes (root and oled). The operating system lives in root, while oled is used for Oracle oswatcher and crash logs; the same kind of separation applied to oled is what we want for /tmp, /home and the rest of /var. To modify this structure, we face a number of challenges:

  • The whole volume group is allocated (no free space; see the vgs check after this list)
  • The XFS file system cannot be shrunk (this is not the case for EXT and some other file systems)
  • Due to the above points, restructuring the file system will be, at least partially, destructive
  • SELinux (enforcing by default) will not be happy with the result
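
You can confirm the free-space situation with vgs on any instance built from the image (a minimal sketch; ocivolume is the volume group name used by the platform image):

$ sudo vgs ocivolume -o vg_name,vg_size,vg_free
# On an unmodified platform image, VFree reports 0: the entire volume
# group is already allocated to the root and oled logical volumes.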

Approach

There is a sample script in the Appendix that automates steps 3–12. It is very basic and should be used with caution.

For the new structure we want to separate out /tmp, /var, /var/log and /home into unique logical volumes and file systems. We do not want to recreate the LVM physical volume or the existing logical volumes; this should remove the need to reconfigure grub after the refactoring.

  • /tmp will be a separate 4Gb file system, created in a logical volume called tmp
  • /var will be a separate 1Gb file system, created in a logical volume called var
  • /var/log will be a separate 1Gb file system, created in a logical volume called varlog
  • /home will be a separate 1Gb file system, created in a logical volume called home

The default boot volume is ~50Gb and the contents of the root (/) file system will probably be in the 7–10Gb range. If the additional logical volumes you add will need to be sized (or later grown) beyond what the default ~50Gb boot volume can hold, consider provisioning the source instance with a larger boot volume.

To undertake this work, we’ll use two compute instances; the first will be a source OL8 instance and the second will be a temporary worker instance that we will use to modify the boot volume of our source OL8 instance.

It is worth noting that OCI compute instances are cloned from their platform image; as such, instances created from the same platform image will have the same disk UUIDs, disk labels and LVM labels (if using an LVM scheme). This can become very confusing for both the user and software. As a result, it is recommended that the worker be based on an entirely different platform image. In this example, I will be using the Ubuntu 20.04 platform image.

The high-level steps are:

  1. Create source OL8 instance, stop source instance and detach source instance’s boot volume
  2. Create worker instance and ensure all the necessary tools are installed and attach source instance boot volume as a block (not boot) volume
  3. Check and backup the source instance root (/) file system from the worker instance
  4. Resize the source instance root logical volume and re-create the root logical volume file system
  5. Create the additional logical volumes (tmp, var, varlog and home) in the source instance volume group (ocivolume) and create file systems on them
  6. Restore the source instance root (/) file system to the resized root logical volume
  7. Create temporary mount points for the new tmp, var, varlog and home logical volumes and mount them
  8. Copy /tmp, /var, /var/log and /home from the source root (/) file system to the new file systems on the temporary mount points
  9. Rename original /tmp, /var, and /home directories and replace with new mount points
  10. Update source instance fstab to mount new file systems on boot
  11. Ensure SELinux will be happy
  12. Unmount and detach the source instance boot volume from the worker and attach to the source instance
  13. Test

If SELinux is not required to be enforcing after the restructuring, it can be disabled before undertaking these steps, though this is not recommended.

Steps

1. Create and prepare source OL8 instance

Use the OCI console, CLI or other mechanism to create a new OL8 compute instance. The shape and CPU architecture are not relevant (these steps should also work for ARM instances) as we are just working with the boot volume. Once the instance is created, you may want to log in and update the instance or make any additional configuration changes prior to continuing; however, this is not necessary.

Stop the source OL8 instance and detach its boot volume using the OCI console or tool of your choice.
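
If you prefer the CLI, the stop and detach look something like this (a sketch; the OCIDs are placeholders you would substitute for your own tenancy):

$ oci compute instance action --instance-id <source-instance-ocid> --action STOP
# Find the boot volume attachment so it can be detached
$ oci compute boot-volume-attachment list --availability-domain <ad> --compartment-id <compartment-ocid> --instance-id <source-instance-ocid>
$ oci compute boot-volume-attachment detach --boot-volume-attachment-id <attachment-ocid>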

2. Create and prepare worker instance

For this exercise, an Ubuntu 20.04 instance was used, running on an E4-Flex shape with 1 OCPU and 16Gb RAM. A different shape can be used too. Remember to add your SSH keys!

Once the worker instance has started log in and ensure that the XFS tools are installed.

$ sudo apt install -y xfsprogs xfsdump
Reading package lists... Done
Building dependency tree
Reading state information... Done
xfsprogs is already the newest version (5.3.0-1ubuntu2).
Suggested packages:
acl attr quota
The following NEW packages will be installed:
xfsdump
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 183 kB of archives.
After this operation, 692 kB of additional disk space will be used.
Get:1 http://ap-sydney-1-ad-1.clouds.ports.ubuntu.com/ubuntu-ports focal/main arm64 xfsdump arm64 3.1.6+nmu2build1 [183 kB]
Fetched 183 kB in 2s (91.5 kB/s)
Selecting previously unselected package xfsdump.
(Reading database ... 148592 files and directories currently installed.)
Preparing to unpack .../xfsdump_3.1.6+nmu2build1_arm64.deb ...
Unpacking xfsdump (3.1.6+nmu2build1) ...
Setting up xfsdump (3.1.6+nmu2build1) ...
Processing triggers for man-db (2.9.1-1) ...

Attach the source instance’s boot volume as a paravirtualized block volume to the worker instance using the OCI console or your tool of choice.
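
Via the CLI, attaching the boot volume as a data volume is a regular volume attachment that simply passes the boot volume’s OCID (a sketch with placeholder OCIDs):

$ oci compute volume-attachment attach --instance-id <worker-instance-ocid> --type paravirtualized --volume-id <source-boot-volume-ocid>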

3. Check and backup the source instance root (/) file system

Once the source instance’s boot volume is attached, it is a good idea to ensure that the worker instance sees it properly:

$ sudo vgchange -a y
2 logical volume(s) in volume group "ocivolume" now active

$ sudo lvdisplay
--- Logical volume ---
LV Path /dev/ocivolume/oled
LV Name oled
VG Name ocivolume
LV UUID QYLO3I-YbXn-d92M-Jhaa-xJoo-hujB-A70hHe
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2022-05-18 23:33:06 +0000
LV Status available
# open 0
LV Size 10.00 GiB
Current LE 2560
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 4096
Block device 253:0

--- Logical volume ---
LV Path /dev/ocivolume/root
LV Name root
VG Name ocivolume
LV UUID asKqqn-xF8Z-T3me-XYSi-ZUSN-17Rm-dUahdB
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2022-05-18 23:33:07 +0000
LV Status available
# open 0
LV Size 35.47 GiB
Current LE 9081
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 4096
Block device 253:1

$ sudo lsblk -f
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
loop0 squashfs 4.0 0 100% /snap/core18/2409
loop1 squashfs 4.0 0 100% /snap/core20/1434
loop2 squashfs 4.0 0 100% /snap/lxd/22923
loop3 squashfs 4.0 0 100% /snap/oracle-cloud-agent/36
loop4 squashfs 4.0 0 100% /snap/snapd/15534
sda
├─sda1 ext4 1.0 cloudimg-rootfs 96695e07-a270-493b-b66c-ce28d94e1409 42.9G 5% /
├─sda14
└─sda15 vfat FAT32 UEFI 9DB3-1174 99.1M 5% /boot/efi
sdb
├─sdb1 vfat FAT16 2314-8847
├─sdb2 xfs a619a666-d067-48a1-84c0-597037623f97
└─sdb3 LVM2_member LVM2 001 5QtWJg-49vg-4Ces-2Iv3-MRP5-dNdG-nLQwgd
├─ocivolume-oled xfs 0950a10f-ab7a-4ade-81df-abfe0b973620
└─ocivolume-root xfs b74c7d6e-a842-4ab7-a47e-2ac332edaa3d

Now that the source instance block volume is attached, it is a good idea to validate the file system before we start to change things:

$ sudo xfs_repair /dev/mapper/ocivolume-root
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- zero log...
- scan filesystem freespace and inode maps...
- found root inode chunk
Phase 3 - for each AG...
- scan and clear agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
clearing reflink flag on inodes when possible
Phase 5 - rebuild AG headers and trees...
- reset superblock...
Phase 6 - check inode connectivity...
- resetting contents of realtime bitmap and summary inodes
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done

Now that we know we have a clean file system, let’s mount it:

$ sudo mount /dev/mapper/ocivolume-root /mnt/

Note that the Linux device mapper creates entries for LVM volumes as <volume_group>-<logical_volume>; thus ocivolume (the volume group) and root (the logical volume) become ocivolume-root.
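
You can confirm the mapper entries on the worker before proceeding:

$ ls /dev/mapper/
# Expect entries such as ocivolume-root and ocivolume-oled alongside the control node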

We now need to backup the entire root (/) volume using xfsdump:

Note that in this example, the source instance has not been customised, so the file system dump is not large and can safely be placed in /tmp of the worker instance. If your source instance has been customised or has had additional software installed, consider placing the dump somewhere other than /tmp; for example, add another block volume to the worker instance or attach some File Storage Service (FSS) storage to the worker instance to temporarily hold the backup.
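
A rough way to sanity-check that the dump will fit in the chosen destination (du approximates the dump size):

$ sudo du -shx /mnt    # approximate size of the data to be dumped
$ df -h /tmp           # free space at the intended dump destination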

$ sudo xfsdump -L "" -M "" -0uf /tmp/ol8_root_backup /mnt
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 3.1.9 (dump format 3.0) - type ^C for status and control
xfsdump: WARNING: no session label specified
xfsdump: WARNING: most recent level 0 dump was interrupted, but not resuming that dump since resume (-R) option not specified
xfsdump: level 0 dump of worker:/mnt
xfsdump: dump date: Mon Jun 13 06:51:40 2022
xfsdump: session id: c078bec3-1852-463b-ab5a-4e94943eb219
xfsdump: session label: ""
xfsdump: ino map phase 1: constructing initial dump list
xfsdump: ino map phase 2: skipping (no pruning necessary)
xfsdump: ino map phase 3: skipping (only one dump stream)
xfsdump: ino map construction complete
xfsdump: estimated dump size: 8501529728 bytes
xfsdump: WARNING: no media label specified
xfsdump: creating dump session media file 0 (media 0, file 0)
xfsdump: dumping ino map
xfsdump: dumping directories
xfsdump: dumping non-directory files
xfsdump: ending media file
xfsdump: media file size 8263474064 bytes
xfsdump: dump size (non-dir files) : 8158542168 bytes
xfsdump: dump complete: 232 seconds elapsed
xfsdump: Dump Summary:
xfsdump: stream 0 /tmp/ol8_root_backup OK (success)
xfsdump: Dump Status: SUCCESS

Depending on the size of the source instance’s root (/) file system, the backup could take quite a few minutes.

Once the backup has completed successfully, unmount the source instance root (/) file system from /mnt:

$ sudo umount /mnt

4. Resize the source instance `root` logical volume and re-create the `root` logical volume file system

Now we need to resize the source instance’s root logical volume. The shrink operation is performed using lvreduce, reducing it by 15Gb (-L -15G). 15Gb was chosen as 4+1+1+1 Gb for the new file systems plus some room (8Gb) left over for future file systems (such as other agent software). The root logical volume could be shrunk by more than 15Gb in this example, but remember that enough space needs to be preserved to restore the backed-up root (/) file system.

⚠️ The following steps are destructive!

Firstly, check that the resize should work successfully using test mode (-t):

$ sudo lvreduce -ft -L -15G /dev/ocivolume/root
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
WARNING: Reducing active logical volume to 20.47 GiB.
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Size of logical volume ocivolume/root changed from 35.47 GiB (9081 extents) to 20.47 GiB (5241 extents).
Logical volume ocivolume/root successfully resized.

If the resize operation tests OK, resize for real:

$ sudo lvreduce -f -L -15G /dev/ocivolume/root
WARNING: Reducing active logical volume to 20.47 GiB.
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Size of logical volume ocivolume/root changed from 35.47 GiB (9081 extents) to 20.47 GiB (5241 extents).
Logical volume ocivolume/root successfully resized.

Then recreate the XFS file system on the source instance’s root logical volume (ocivolume-root):

$ sudo mkfs.xfs -f /dev/mapper/ocivolume-root
meta-data=/dev/mapper/ocivolume-root isize=512 agcount=4, agsize=1341696 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1 bigtime=0 inobtcount=0
data = bsize=4096 blocks=5366784, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2620, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0

5. Create the additional logical volumes and create file systems on them

Create new logical volumes:

$ sudo lvcreate ocivolume -L 4G -n tmp
Logical volume "tmp" created.
$ sudo lvcreate ocivolume -L 1G -n var
Logical volume "var" created.
$ sudo lvcreate ocivolume -L 1G -n varlog
Logical volume "varlog" created.
$ sudo lvcreate ocivolume -L 1G -n home
Logical volume "home" created.

Create new file systems with mkfs.xfs:

Remember that device mapper will have created new entries in the form of <volume_group>-<logical_volume>.

  • For the new /tmp: sudo mkfs.xfs -f /dev/mapper/ocivolume-tmp
  • For the new /var: sudo mkfs.xfs -f /dev/mapper/ocivolume-var
  • For the new /var/log: sudo mkfs.xfs -f /dev/mapper/ocivolume-varlog
  • For the new /home: sudo mkfs.xfs -f /dev/mapper/ocivolume-home

This should look something like this:

$ sudo mkfs.xfs -f /dev/mapper/ocivolume-tmp
meta-data=/dev/mapper/ocivolume-tmp isize=512 agcount=4, agsize=262144 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1 bigtime=0 inobtcount=0
data = bsize=4096 blocks=1048576, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0

$ sudo mkfs.xfs -f /dev/mapper/ocivolume-var
meta-data=/dev/mapper/ocivolume-var isize=512 agcount=4, agsize=65536 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1 bigtime=0 inobtcount=0
data = bsize=4096 blocks=262144, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0

$ sudo mkfs.xfs -f /dev/mapper/ocivolume-varlog
meta-data=/dev/mapper/ocivolume-varlog isize=512 agcount=4, agsize=65536 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1 bigtime=0 inobtcount=0
data = bsize=4096 blocks=262144, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0

$ sudo mkfs.xfs -f /dev/mapper/ocivolume-home
meta-data=/dev/mapper/ocivolume-home isize=512 agcount=4, agsize=65536 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1 bigtime=0 inobtcount=0
data = bsize=4096 blocks=262144, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0

6. Restore the source instance root (/) file system to the resized `root` logical volume

Mount the source instance root logical volume (now formatted and empty) back to /mnt:

$ sudo mount /dev/mapper/ocivolume-root /mnt
$ df -hP
Filesystem Size Used Avail Use% Mounted on
tmpfs 1.6G 1.1M 1.6G 1% /run
/dev/sda1 45G 9.5G 36G 21% /
tmpfs 7.9G 0 7.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/sda15 105M 5.3M 100M 5% /boot/efi
tmpfs 1.6G 4.0K 1.6G 1% /run/user/1001
/dev/mapper/ocivolume-root 21G 179M 21G 1% /mnt

With the resized and empty source instance’s root logical volume mounted to /mnt, we restore the file system back up:

$ sudo xfsrestore -f /tmp/ol8_root_backup /mnt
xfsrestore: using file dump (drive_simple) strategy
xfsrestore: version 3.1.9 (dump format 3.0) - type ^C for status and control
xfsrestore: searching media for dump
xfsrestore: examining media file 0
xfsrestore: dump description:
xfsrestore: hostname: worker
xfsrestore: mount point: /mnt
xfsrestore: volume: /dev/mapper/ocivolume-root
xfsrestore: session time: Mon Jun 13 00:28:36 2022
xfsrestore: level: 0
xfsrestore: session label: ""
xfsrestore: media label: ""
xfsrestore: file system id: b74c7d6e-a842-4ab7-a47e-2ac332edaa3d
xfsrestore: session id: 08bc394b-465a-4f67-b301-8364ad848d9b
xfsrestore: media id: 15cccbfe-fcde-4b71-9122-52f6500610f8
xfsrestore: using online session inventory
xfsrestore: searching media for directory dump
xfsrestore: reading directories
xfsrestore: 18592 directories and 170057 entries processed
xfsrestore: directory post-processing
xfsrestore: restoring non-directory files
xfsrestore: restore complete: 253 seconds elapsed
xfsrestore: Restore Summary:
xfsrestore: stream 0 /tmp/ol8_root_backup OK (success)
xfsrestore: Restore Status: SUCCESS

7. Create mount points for the new logical volumes and mount them

Temporary mount points are needed for the new (i.e. replacement) tmp (/mnt/tmp-new), var (/mnt/var-new), varlog (/mnt/var-new/log) and home (/mnt/home-new) locations.

Note that the new target /var/log location is a new file system and has to be mounted relative to the new /var location. Thus the new varlog logical volume will be mounted to /mnt/var-new/log for copying data after the new var logical volume has been mounted to /mnt/var-new.

$ sudo mkdir /mnt/tmp-new /mnt/home-new /mnt/var-new

$ sudo mount /dev/mapper/ocivolume-tmp /mnt/tmp-new
$ sudo chmod 1777 /mnt/tmp-new # Set correct permissions for a "/tmp"
$ sudo mount /dev/mapper/ocivolume-var /mnt/var-new
$ sudo mount /dev/mapper/ocivolume-home /mnt/home-new
$ sudo mkdir /mnt/var-new/log # Place the new "/var/log" mount point into the new "/var"
$ sudo mount /dev/mapper/ocivolume-varlog /mnt/var-new/log

8. Copy /tmp, /var, /var/log and /home from the source root (/) file system to the new file systems on the temporary mount points

Copy (and preserve attributes) from the restored root (/) file system to the temporary mounts of the new logical volumes:

$ sudo cp -rp /mnt/home/. /mnt/home-new/
$ sudo cp -rp /mnt/tmp/. /mnt/tmp-new/
$ sudo cp -rp /mnt/var/. /mnt/var-new/

9. Rename original /tmp, /var, and /home directories and replace with new mount points

This next piece shuffles the existing directories to “backup” locations and repositions the temporary mount points to replace those original directories.

First, we need to unmount the logical volumes from the temporary mount points so they, along with the original locations, can be renamed:

$ sudo umount /mnt/home-new /mnt/var-new/log /mnt/var-new /mnt/tmp-new

Then, shuffle the old locations and new mount points around:

$ sudo mv /mnt/home /mnt/home-old && sudo mv /mnt/home-new /mnt/home
$ sudo mv /mnt/tmp /mnt/tmp-old && sudo mv /mnt/tmp-new /mnt/tmp
$ sudo mv /mnt/var /mnt/var-old && sudo mv /mnt/var-new /mnt/var

10. Update source instance fstab to mount new file systems on boot

To ensure the new logical volumes and file systems are mounted on boot, add the following entries to the source instance’s fstab before the /var/oled entry:

/dev/mapper/ocivolume-tmp    /tmp     xfs     defaults        0 0
/dev/mapper/ocivolume-var    /var     xfs     defaults        0 0
/dev/mapper/ocivolume-varlog /var/log xfs     defaults        0 0
/dev/mapper/ocivolume-home   /home    xfs     defaults        0 0

Thus, edit the fstab file of the source OL8 image (currently located at /mnt/etc/fstab) so its contents reflect the following:

$ sudo vi /mnt/etc/fstab 

$ cat /mnt/etc/fstab

#
# /etc/fstab
# Created by anaconda on Wed May 18 23:33:09 2022
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/ocivolume-root / xfs defaults 0 0
UUID=a619a666-d067-48a1-84c0-597037623f97 /boot xfs defaults 0 0
UUID=2314-8847 /boot/efi vfat defaults,uid=0,gid=0,umask=077,shortname=winnt 0 2
/dev/mapper/ocivolume-tmp /tmp xfs defaults 0 0
/dev/mapper/ocivolume-var /var xfs defaults 0 0
/dev/mapper/ocivolume-varlog /var/log xfs defaults 0 0
/dev/mapper/ocivolume-home /home xfs defaults 0 0
/dev/mapper/ocivolume-oled /var/oled xfs defaults 0 0
tmpfs /dev/shm tmpfs defaults,nodev,nosuid,noexec 0 0
######################################
## ORACLE CLOUD INFRASTRUCTURE CUSTOMERS
##
## If you are adding an iSCSI remote block volume to this file you MUST
## include the '_netdev' mount option or your instance will become
## unavailable after the next reboot.
## SCSI device names are not stable across reboots; please use the device UUID instead of /dev path.
##
## Example:
## UUID="94c5aade-8bb1-4d55-ad0c-388bb8aa716a" /data1 xfs defaults,noatime,_netdev 0 2
##
## More information:
## https://docs.us-phoenix-1.oraclecloud.com/Content/Block/Tasks/connectingtoavolume.htm
/.swapfile none swap sw,comment=cloudconfig 0 0

11. Ensure SELinux will be happy

If SELinux is enabled and enforcing, it will not be happy that the file systems have been modified. While the instance may boot, it is unlikely to function properly, and users will not be able to log in remotely. The easiest way to avoid this is to force a relabelling of the instance on the next boot. To do this, create a flag file (.autorelabel) in the source instance’s root (/) file system (again, remembering that the source instance root (/) is currently mounted at /mnt):

$ sudo touch /mnt/.autorelabel

That is all that should be required. Upon next boot, SELinux should reapply all the labels to the file system.

⚠️ Relabelling the entire file system structure may take considerable time, so be patient on the first boot. Once complete, the marker file is automatically removed and subsequent boots will be normal.

12. Unmount and detach the source instance boot volume from the worker instance and attach back to the source instance

All that remains is to unmount the source instance’s root logical volume, detach the boot volume from the worker instance, attach it back to the source instance, and start the source instance to test.

$ sudo umount /mnt

OPTIONAL: Check all the new file systems:

$ sudo xfs_repair /dev/mapper/ocivolume-root
...
$ sudo xfs_repair /dev/mapper/ocivolume-tmp
...
$ sudo xfs_repair /dev/mapper/ocivolume-var
...
$ sudo xfs_repair /dev/mapper/ocivolume-varlog
...
$ sudo xfs_repair /dev/mapper/ocivolume-home
...

Once happy that everything is complete and the source instance’s root logical volume has been unmounted, detach the source instance’s boot volume from the worker instance using the OCI console or your tool of choice. Then attach the source instance’s boot volume back to the source OL8 instance as its boot volume.
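
The CLI equivalent looks something like this (again, placeholder OCIDs):

$ oci compute volume-attachment detach --volume-attachment-id <worker-attachment-ocid>
$ oci compute boot-volume-attachment attach --instance-id <source-instance-ocid> --boot-volume-id <source-boot-volume-ocid>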

13. Test

Prior to starting the source instance with the restructured boot volume, it is recommended to create a serial console connection for the instance and attach to that with another session to monitor the boot process for any issues.
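
A console connection can also be created from the CLI (a sketch; supply the public key that matches the private key you will connect with):

$ oci compute instance-console-connection create --instance-id <source-instance-ocid> --ssh-public-key-file ~/.ssh/id_rsa.pub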

Once the source instance has booted, log in and verify the new file system structure.
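
A few quick checks (suggestions, not an exhaustive list):

$ lsblk -f                          # the new tmp, var, varlog and home logical volumes should be present
$ df -hP /tmp /var /var/log /home   # each should be mounted from its own ocivolume-* device
$ getenforce                        # should report Enforcing once the relabel has completed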

The original structures for /tmp, /var and /home are still present as /tmp-old, /var-old and /home-old. If everything checks out, remove these previous structures:

$ sudo rm -rf /tmp-old /var-old /home-old

If everything checks out, consider creating a custom image to allow easy reuse of the new file system structure. See Managing Custom Images.
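
The custom image can also be created from the CLI (a sketch; the display name is just an example):

$ oci compute image create --compartment-id <compartment-ocid> --instance-id <source-instance-ocid> --display-name ol8-lvm-restructured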

That is about it!

If you’re curious about the goings-on of Oracle Developers in their natural habitat, come join us on our public Slack channel!

Appendix — script

⚠️ Use with caution!

The following script is extremely basic and will perform steps 3 through 12 (except the detach) if run on a worker instance once the source instance’s boot volume is attached. There is very limited checking and all values are hardcoded. If the platform image for OL8 changes, or this is run against a different platform image, the script may well fail, leaving the restructure incomplete or the target boot volume damaged. Creating a backup or using a copy of the target boot volume is highly recommended.

⚠️ Use with caution!

refactor.sh — usage: sudo refactor.sh and cross one’s fingers…

#!/bin/bash

if [ "${EUID}" -ne 0 ]; then
printf "Please run as root.\n"
exit 1
fi

printf "Refreshing volume groups...\n%s\n\n" "$(vgchange -a y)"
printf "These are the logical volumes...\n%s\n\n" "$(lvdisplay)"
printf "The block devices...\n%s\n\n" "$(lsblk -f)"

printf "Sanity checking the root file system...\n"
printf "%s\n\n" "$(xfs_repair /dev/mapper/ocivolume-root 2>&1)"

printf "Mounting root file system... "
mount /dev/mapper/ocivolume-root /mnt
if [ ${?} -ne 0 ]; then
printf "mounting failed!\nAbort...\n"
exit 1
else
printf "done.\n\n"
fi

printf "Backing up root file system...\n"
OUTPUT=$(xfsdump -L "" -M "" -0uf /tmp/ol8_root_backup /mnt 2>&1)
if [ ${?} -ne 0 ]; then
printf "%s\n\nBackup failed!\nAbort...\n" "${OUTPUT}"
umount /mnt
exit 1
else
printf "%s\n\n" "${OUTPUT}"
fi

printf "Unmounting root file system... "
umount /mnt
if [ ${?} -ne 0 ]; then
printf "unmounting failed!\nAbort...\n"
exit 1
else
printf "done.\n\n"
fi

printf "Testing resize of root file system...\n"
OUTPUT=$(lvreduce -ft -L -15G /dev/ocivolume/root 2>&1)
if [ ${?} -ne 0 ]; then
printf "%s\n\nTest failed!\nAbort...\n" "${OUTPUT}"
exit 1
else
printf "%s\n\nTest ok, procceding.\n\n" "${OUTPUT}"
fi

printf "Resize of root file system...\n"
OUTPUT=$(lvreduce -f -L -15G /dev/ocivolume/root)
if [ ${?} -ne 0 ]; then
printf "%s\n\nResize failed!\n Abort...\n" "${OUTPUT}"
exit 1
else
printf "%s\n\nResized!\n\n" "${OUTPUT}"
fi

printf "Recreate root file system...\n%s\n\n" "$(mkfs.xfs -f /dev/mapper/ocivolume-root 2>&1)"

printf "Creating new logical volumes...\n"
lvcreate ocivolume -L 4G -n tmp
lvcreate ocivolume -L 1G -n var
lvcreate ocivolume -L 1G -n varlog
lvcreate ocivolume -L 1G -n home

printf "Creating new file systems...\n"
mkfs.xfs -f /dev/mapper/ocivolume-tmp
mkfs.xfs -f /dev/mapper/ocivolume-var
mkfs.xfs -f /dev/mapper/ocivolume-varlog
mkfs.xfs -f /dev/mapper/ocivolume-home

printf "Remounting root file systems... "
mount /dev/mapper/ocivolume-root /mnt
if [ ${?} -ne 0 ]; then
printf "mounting failed!\nAbort...\n"
exit 1
else
printf "done.\n\n"
fi

printf "Restoring root file system...\n"
OUTPUT=$(xfsrestore -f /tmp/ol8_root_backup /mnt 2>&1)
if [ ${?} -ne 0 ]; then
printf "%s\n\nRestore failed!\nAbort...\n" "${OUTPUT}"
umount /mnt
exit 1
else
printf "%s\n\n" "${OUTPUT}"
fi

printf "Creating temporary mount points...\n"
mkdir /mnt/tmp-new /mnt/home-new /mnt/var-new /mnt/agent

printf "Mount volumes... "
mount /dev/mapper/ocivolume-tmp /mnt/tmp-new && chmod 1777 /mnt/tmp-new && mount /dev/mapper/ocivolume-var /mnt/var-new && mount /dev/mapper/ocivolume-home /mnt/home-new && mount /dev/mapper/ocivolume-agent /mnt/agent && mkdir /mnt/var-new/log && mount /dev/mapper/ocivolume-varlog /mnt/var-new/log
if [ ${?} -ne 0 ]; then
printf "mounting and mapping failed!\nAbort...\n"
exit 1
else
printf "done.\n\n"
fi

printf "Copying files to volumes...\n"
cp -rp /mnt/home/. /mnt/home-new/
cp -rp /mnt/tmp/. /mnt/tmp-new/
cp -rp /mnt/var/. /mnt/var-new/

printf "Unmounting volumes... "
umount /mnt/home-new /mnt/var-new/log /mnt/var-new /mnt/tmp-new
if [ ${?} -ne 0 ]; then
printf "unmounting failed!\nAbort...\n"
exit 1
else
printf "done.\n\n"
fi

printf "Shuffling locations and mount points... "
mv /mnt/home /mnt/home-old && mv /mnt/home-new /mnt/home && mv /mnt/tmp /mnt/tmp-old && mv /mnt/tmp-new /mnt/tmp && mv /mnt/var /mnt/var-old && mv /mnt/var-new /mnt/var
if [ ${?} -ne 0 ]; then
printf "failed!\nAbort...\n"
exit 1
else
printf "done.\n\n"
fi

printf "Touching marker file for SELinux relabelling...\n"
touch /mnt/.autorelabel

printf "Changing fstab...\n"
sed -i-$(date +%Y%m%d) -e'/.*\/var\/oled.*/i /dev/mapper/ocivolume-tmp /tmp xfs defaults 0 0\n/dev/mapper/ocivolume-var /var xfs defaults 0 0\n/dev/mapper/ocivolume-varlog /var/log xfs defaults 0 0\n/dev/mapper/ocivolume-home /home xfs defaults 0 0' /mnt/etc/fstab

cat /mnt/etc/fstab

printf "Unounting root file system... "
umount /mnt
if [ ${?} -ne 0 ]; then
printf "unmounting failed!\nAbort...\n"
exit 1
else
printf "done.\n\n"
fi

printf "\nAll done.\n\nPlease detach the volume and test.\n"
