Diving deeper with Ubuntu Autoinstall

Boris Kaplun
May 21, 2024


This is the second article in the series on Ubuntu Autoinstall. As you may remember, I omitted quite a lot of explanation to tease you into the next episodes. Now is the time to start discussing the more intricate details.

Let's begin with the configuration sections and my typical use cases. First, I should mention that my main focus is on air-gapped, on-premises Kubernetes cluster installations (K3s in particular), and this is the overall direction we're heading. It is also one of the most complicated scenarios, and we'll cover simpler use cases along the way.

Moving on with the autoinstall configuration options

The Autoinstall documentation offers a number of sections you can configure. But let's face reality: when installing an Ubuntu server instance, especially with a container or Kubernetes payload in mind, there are only a few things we really care about during the OS installation itself.

Locale is usually not a concern — US English with an English keyboard is perfectly fine for a server box. Do you care about the time zone? Well, you can configure it if you really mean it. But remember, we’re building an ISO.

Network is obviously important and this autoinstall section is pretty much in sync with the netplan syntax. But do you really want to get your static IP netplan config nailed inside an ISO? Probably not. My personal preference is DHCP lease reservation for ease of centralized management. But what if you still need a manual network setup? We’ll get there soon.
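For reference, a minimal DHCP-based network section might look like the sketch below. It follows the netplan version 2 syntax that autoinstall accepts; the "en*" interface match pattern is an assumption, so narrow it for your hardware:

```yaml
# Sketch: netplan-style autoinstall network section using DHCP.
# The "en*" match pattern is illustrative; adjust it to your NIC names.
network:
  version: 2
  ethernets:
    all-en:
      match:
        name: "en*"
      dhcp4: true
```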

Next comes the storage. Recent Linux server distros usually install LVM and Ubuntu is no exception. The only problem is that the default drive mapping implies you’re really going to use the LVM features like growing and snapshotting your partitions and this is not always the case. Not sure if the majority really uses it as intended so let’s assume the installer is somewhat opinionated. Our default policy is to extend the LVM volume to the physical disk size.

There's a section called storage in autoinstall for that. However, I'd be really interested to see someone brave enough to master that path: official documentation. Cloud-init has a module called growpart. I'm certain there are use cases for it, but my images are usually targeted universally. This is why I couldn't resist the temptation to implement it the easy way, with just two shell commands: lvextend + resize2fs.
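For contrast, here is roughly what the cloud-init growpart route looks like. This is a sketch of the cloud-init module (the post-install layer, not Subiquity's storage section), and it is not the approach we take below:

```yaml
# Sketch of cloud-init's growpart module (runs on boot, not during install).
growpart:
  mode: auto          # grow partitions automatically
  devices: ["/"]      # target the partition backing the root filesystem
```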

Now let's configure this with autoinstall. Shell commands can be run from the early-commands and late-commands sections; the difference is the installation stage at which they execute, and you can examine in more detail which parts of the OS are already available at each stage. Late-commands, as the name suggests, run once the system is almost installed. That's perfectly fine for most of my use cases.

Add a section called “late-commands” and see how we can run shell commands:

late-commands:
  - lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
  - resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

For more complicated setups such as custom RAID configurations, etc., I’d go for a manual override that we cover below.

But before the late-commands there’s a section called “apt”. This can be used to configure the package repo mirror preferences.

As you might remember, my primary focus is airgap installations. I have all the necessary packages included in the ISO already and prefer the installer to avoid attempting to download anything if the Internet is unreachable:

apt:
  fallback: offline-install
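If you do want an internal mirror instead of the default archive, the same apt section also accepts curtin-style mirror overrides. The sketch below uses a placeholder URI, and note that newer releases (23.04+) additionally offer a mirror-selection block:

```yaml
# Sketch: offline fallback plus a curtin-style mirror override.
# mirror.example.internal is a placeholder, not a real host.
apt:
  fallback: offline-install
  primary:
    - arches: [default]
      uri: "http://mirror.example.internal/ubuntu"
```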

Now there's another catch. Remember how my payload is usually K3s stuff? Certain older K3s versions are not fully compatible with cgroups v2 and will not run on Ubuntu 22.04 out of the box. This requires passing a few parameters to the kernel, so we need to add:

GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory systemd.unified_cgroup_hierarchy=0 pci=realloc"

to the GRUB configuration. We will modify /etc/default/grub using sed and run update-grub for the changes to apply.

This is a good illustration of autoinstall's additional OS customization features. Looking at the volume-extension example above, you might think this would work:

late-commands:
  - sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT=""/GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory systemd.unified_cgroup_hierarchy=0 pci=realloc"/' /etc/default/grub
  - update-grub

The answer is no! The trick is that this time the commands must be executed in a chrooted environment at /target, by an installer utility called “curtin”. Here is the working code:

late-commands:
  - curtin in-target --target=/target -- sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT=""/GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory systemd.unified_cgroup_hierarchy=0 pci=realloc"/' /etc/default/grub
  - curtin in-target --target=/target -- update-grub
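Before baking that one-liner into an ISO, it is worth a local dry run of the sed pattern on a scratch file. The snippet below is an illustrative check, operating on a temporary copy rather than your real /etc/default/grub:

```shell
# Dry run: confirm the sed pattern actually matches before shipping it.
# Uses a scratch file, never the live /etc/default/grub.
tmp=$(mktemp)
printf 'GRUB_CMDLINE_LINUX_DEFAULT=""\n' > "$tmp"
sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT=""/GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory systemd.unified_cgroup_hierarchy=0 pci=realloc"/' "$tmp"
grep -c 'pci=realloc' "$tmp"   # prints 1 when the substitution landed
```

If the count is 0, the pattern did not match and the ISO would install with unchanged kernel parameters, which is much harder to debug after the fact.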

And last but not least comes another important feature I promised. My primary goal throughout this project was a headless, unattended setup, but it quickly evolved into a versatile installer. While we still keep human operator input to a minimum, there are tasks where it's unavoidable. In my experience we only use it for networking, but your mileage may vary; I'd expect storage to be the second candidate.

Here is how you can configure autoinstall for manual override/installer interaction. Ubuntu lets you enable parts of the regular installer via a section called “interactive-sections”:

interactive-sections:
  - network
  - storage # will not be included in the final config

That was a bit fragmented, so let’s summarize:

#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: ubuntu22auto
    username: ubuntulogin
    password: $6$LmPUjxOfMHOMgRlg$pSXXVlcfwSKUSotcoG6ed7DUu7.iOX7kJEylN9V9z3C96uNBIMfCJjtL1tNjLx9dDbKS/kH9W7B8oIMKxmXb70
  ssh:
    install-server: yes
    allow-pw: yes
  apt:
    fallback: offline-install
  interactive-sections:
    - network
  late-commands:
    - lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
    - resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
    - curtin in-target --target=/target -- sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT=""/GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory systemd.unified_cgroup_hierarchy=0 pci=realloc"/' /etc/default/grub
    - curtin in-target --target=/target -- update-grub
  shutdown: poweroff

Bytes, lies and big files

Have you ever seen a creature that looks like a duck, walks like a duck, quacks like a duck… but is not a duck?! Then you are probably experienced with Ubuntu Autoinstall.

Autoinstall and cloud-init are so closely related that they even live together in the same file. They do similar things, their config elements are named similarly, and their functionality overlaps a lot. Yet they are totally different species! I don't have an explanation for why Canonical designed it this way, but it presented a serious challenge to my mental health.

Looking at the scarce documentation, I couldn't tell the difference. I kept mixing the two tools' schemas, which rendered my work instantly useless. I was desperately banging my head against Google and every GPT bot I could subscribe to, but with no luck. I couldn't figure out why things that just worked in a Multipass sandbox were crashing when facing a real Ubuntu.

Here’s another important aspect to clarify. Autoinstall usually runs until the initial system reboot. Cloud-init is a post-installation configuration tool that kicks in after the initial reboot.

So, here's how it really works. Autoinstall and cloud-init are handled by two different utilities, Subiquity and cloud-init, and are validated and processed against different JSON schemas. They cannot be mixed, or it just won't work. Yet they do live in a single file, and this file is called user-data. As some of you might know, cloud-init YAMLs are also usually named user-data. So how do we get this working?

Here comes the trick. First comes the autoinstall section. Just scroll up a little to see the config we generated. Nothing spectacular so far.

Now pay attention here. Next comes a section within the file that should be indented (it's YAML) like a regular autoinstall section (!). And this section is called… user-data! How crazy is that? Now we can insert our more or less usual cloud-init config. Let's say I want to engage a payload script residing in the /opt folder:

user-data:
  runcmd:
    - chmod +x /opt/*.sh
    - /opt/payload_script.sh >> /var/log/payload.log

Please be very careful with overlapping functionality between Autoinstall and cloud-init: things like user creation, SSH keys, and package installation.
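As a hypothetical sketch of the hazard, here is the same user configured in both layers. Subiquity creates it during installation, then cloud-init tries to manage it again on first boot; pick one layer per concern:

```yaml
# Hypothetical overlap to AVOID: one concern configured in both schemas.
autoinstall:
  identity:
    username: ubuntulogin   # created by Subiquity during installation
  user-data:
    users:                  # processed by cloud-init after the first boot
      - name: ubuntulogin   # duplicates the identity above
```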

Creating a dummy payload

In order to test the build and some features you will see in the next articles, I drafted a shell script to act as a dummy payload. The idea is to echo some text with a timestamp every 10 seconds for 5 minutes (300 seconds) to mimic an interactive installer progress:

#!/bin/bash

# Calculate the end time, 5 minutes from now
end_time=$((SECONDS + 300)) # 300 seconds

while [ $SECONDS -lt $end_time ]; do
  printf "Payload installation in progress [%s]\n" "$(date +'%T')"
  sleep 10
done

echo "Installation complete"
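To sanity-check the script locally without waiting five minutes, you can run a throwaway variant with the timings shrunk to one second (same loop, not the shipped script):

```shell
#!/bin/bash
# Throwaway local check: identical loop with a 1-second window and sleep,
# so the whole run finishes in about a second.
end_time=$((SECONDS + 1))
while [ $SECONDS -lt $end_time ]; do
  printf "Payload installation in progress [%s]\n" "$(date +'%T')"
  sleep 1
done
echo "Installation complete"
```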

It’s short so let’s embed it directly into the cloud-init section of the config to illustrate the syntax:

user-data:
  write_files:
    - path: /opt/payload_script.sh
      permissions: '0755'
      content: |
        #!/bin/bash

        # Calculate the end time, 5 minutes from now
        end_time=$((SECONDS + 300)) # 300 seconds

        while [ $SECONDS -lt $end_time ]; do
          printf "Payload installation in progress [%s]\n" "$(date +'%T')"
          sleep 10
        done

        echo "Installation complete"
  runcmd:
    - cd /opt
    - ./payload_script.sh >> /var/log/payload.log

Now there's a catch: once you add the user-data section, “shutdown: poweroff” stops working. So let's move the poweroff into late-commands:

late-commands:
  - poweroff

But here comes another catch, one that blocks the installation from finishing:

Ubuntu installer failed unmounting /cdrom

I can't tell if it's a bug or a feature. What I do know is that the interactive installer sections trigger this behavior, meaning it works fine in a fully headless automatic workflow, where no user involvement is requested.

Enabling interactive sections implies you have a working console and can hit Enter to reboot the box. It's not clean, but it works. I tried various ways of forcing the CD-ROM to unmount, but none worked for me. I would highly appreciate it if someone could share a fix.

Putting it all together

Now let’s assemble the moving parts and test our build. I will follow the very same steps we did in the previous article. Let’s name this project U22AutoInstallBuildV0.2:

CUBIC project setup screen

Then press “Next” until the “Boot” configuration. Now let’s switch to our VSCode and edit the config files.

First, let's copy the nocloud folder and grub.cfg from our previous project. I will also update the menu entry name to build v0.2. Next, I'll apply the updates we made to the user-data file:

VSCode complete user-data file

Now let’s build it, just like we did it last time.

Testing the ISO

Hyper-V would not work this time. For example, this section alone would cause the installation to crash:

late-commands:
  - lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
  - resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

Googling around revealed that Hyper-V is known for issues running Ubuntu VMs. I was already testing a Proxmox instance running nested inside Hyper-V, but that didn't feel right.

Fortunately, VMware just made Workstation Pro (and Fusion, if you're a Mac person) free for personal use. Another awesome feature I honestly didn't know about is that VMware Workstation can now live in harmony with Hyper-V, meaning I don't have to sacrifice my WSL and Rancher K3s setup.

Let’s create a VM. Booting the image will be standard just as in the previous article. Now let’s go to the differences. This time we enabled an interactive section so we can see the usual Ubuntu installer network screen:

Ubuntu installer interactive section for network configuration

I will just move on with DHCP, and the installer then pops up to confirm the drive formatting. And yes, we didn't enable the storage configuration section yet, but it just works this way:

Ubuntu installer confirm destructive action

Watching Subiquity in action can be mesmerizing; however, what particularly interests us is this:

Subiquity late-commands

These are our instructions from the late-commands section.

Next comes the bug (?) mentioned above:

Just press “enter”, the system will power off and we’re done with the autoinstall part.

Testing the cloud-init section

Now let’s power on our machine, log in using the credentials ubuntulogin/ubuntupass, and see the cloud-init post installation in action:

payload_script is running as root

And ps shows our payload_script.sh successfully running as root.

Now let’s check the payload.log:

less /var/log/payload.log

Everything exactly as intended.

With a real production payload we usually want to monitor the installation progress, so I prefer this way:

watch tail /var/log/payload.log

Summary

This article provided an in-depth journey into Ubuntu Autoinstall, getting closer to real OS deployment scenarios. We discussed how to enable interactive installer sections and clarified the distinctions and complementary uses of Autoinstall and cloud-init. We also explored potential pitfalls and demonstrated engaging a payload bash script via cloud-init.

Stay tuned for advanced topics in the next article.
