Intro

AutoInstall Directory

A long awaited (for me anyway) follow up to Automating ESX Install. Ever since I finally set up automated installation of ESXi on bare metal, it’s been top of mind to do a similar thing with a FOSS alternative. I think I’ve landed on getting some form of KVM and libvirt up and running along with containers on these hosts, and I’m leaning towards LXD at the moment. It could change. ¯\_(ツ)_/¯ Could I have gone the Proxmox route and gotten things like virtual switches and whatnot handled much more easily? Sure. Where’s the fun in that though? I briefly evaluated going Debian, maybe Alpine or Rocky. Ultimately though, I settled on Ubuntu. (Old habits die hard after all.) I’ll still continue to use the lighter images for my containers. This is bound to be the first of several posts in a series on getting a functional alternative to ESX for my purposes.

Goals

I think with a project like this, it’s important to lay out what the goals are. I should come back to this post and update these with links to the appropriate posts as each one is accomplished.

  1. Automated install of Ubuntu on bare metal. (This post)
  2. Host setup and management with Ansible.
  3. Virtual switch configuration and setup.
  4. VM and Container deployment using cloud images (At least getting containers spun up is super easy)
  5. Deploying OVAs / QCOWs through Terraform
  6. Deploying cloud images and containers through Terraform
  7. Automating backups - Note: I think that backups should only be of the infrastructure as code and databases/storage files for applications. File shares if they end up creeping into the mix later on.
  8. Monitoring
    1. Automatically adding hosts to monitoring on creation, regardless of whether it’s a container, hypervisor, or VM.
  9. iSCSI for the datastore of VMs / Containers
  10. Equivalent of Distributed Resource Scheduling / Clustering and automatic VM / Container failover

Will we get through it all? Will I be distracted by yet another project, another hiatus? Find out next time on Dragon Ball Z…er…FischNets…

Equipment

I’m using a few things for this:

  1. An ISO thumbdrive - This does not have to be 128GB. I just had this and used it.
  2. Another thumbdrive to be the boot device of the mini PC. - Use whatever size you’d like. I use 32GB+ normally
    • If you’d like, you can skip this thumbdrive outright, and install to the media inside the box.
  3. A Beelink x86 mini PC.
    • I opted for the 12th Gen Intel Alder Lake-N100 w/ 256GB M.2 SSD and 8GB of RAM.
    • I upgraded the RAM to 32GB.
    • The bulk of this really doesn’t care what the platform is. We’re essentially installing Ubuntu 24.04 Server edition.
    • At some point, I’d probably swap that 256GB SSD for something larger. But who cares for now.
  4. The computer I’m doing all this from.

Automating Ubuntu

This kicked off with Automating Cloud Image Provisioning with VMs on ESXi, then got a recent boost when I picked up some additional HP ProDesk Minis for the lab, followed by finally getting my Turing Pi. I have accepted that setting up the ARM based Turing Pis the same way as the x86 boxes just isn’t going to happen. But that doesn’t mean that the actual management and how they all run won’t be the same. So, knowing that the x86 based boxes will be provisioned differently than the ARM boxes, I decided I should sit down and just knock this out. Micro Center had a deal on some low profile USB drives that I like to use as the OS drive for these hosts, and the boxes came in. So it was a matter of buckling down and knocking it out. (You’ll notice later on, I ended up using one of the 32GB drives I already had on hand for this demo.)

I first looked at trying to adapt a Kickstart config; but alas, that being a RHEL (or similar distro) tool made it unusable for Ubuntu. I looked at Debian Preseed, but then realized that Ubuntu had moved on to its Autoinstall as part of Subiquity, together with cloud-init for the most part, if you’re not using something like MAAS. Hopefully it’s starting to become clear why this took such a back seat.

Anyway, I got hopeful that I could maybe do some modification to cloud-init like I do for the VM provisioning, but that turned out to not be quite the case. At least not standalone. As of the time of writing, I don’t quite have the install-media flag working the way I would like, and admittedly I can’t decide if that’s better or worse. It ends up requiring two working USB drives and ports on the host to handle everything. I found one really good guide from Ji Mangel that I’ve reduced down to my workflow. I opted against his workflow of two drives just for getting things going. Or I just misunderstood what was being conveyed. But based on some early testing of the steps provided there, I figured out that I could just place an autoinstall.yaml file in the root of the install drive, and it would pull in any of the config customizations that I needed. I ultimately would still like to be able to take a single USB, load the ISO onto it, place the autoinstall.yaml file, and install directly onto that same drive. However, I was not successful in that. There is a match keyword of install-media: true that is supposed to allow this, but I could not get it to work.
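
For reference, the documented single-drive approach would look something like this in the storage stanza. Treat it as a sketch of what the docs describe rather than something I’ve verified working:

storage:
  layout:
    name: lvm
    match:
      install-media: true  # documented to target the drive the installer booted from; I couldn't get it working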

As a result, I got things going with two thumbdrives still: one to be the boot OS, and another holding the boot ISO and autoinstall.yaml file. I was successful testing this on the 22.04 and 24.04 LTS releases. I may try 20.04, because I think Subiquity was introduced then, but I don’t remember at the moment. So apologies if you’re trying to get some legacy stuff working and having to rely on the older methods. This won’t be for you.

Creating the Modified Live Image

All of this work is going to be done from a WSL2 Ubuntu 24.04 environment. Start with the folder setup and downloading the image:

mkdir cloud-init-test
cd cloud-init-test/
wget https://releases.ubuntu.com/24.04/ubuntu-24.04.1-live-server-amd64.iso

Next we’re going to modify the image so that there are no prompts to install, and everything is truly automated. Notice that the mount has to be done as root, not as a regular user. Then we’re going to modify grub.cfg so that it launches autoinstall quietly, and so that the timeout before starting is just 1 second instead of 30.

mkdir mnt
sudo mount -o loop ubuntu-24.04.1-live-server-amd64.iso mnt
cp --no-preserve=all ./mnt/boot/grub/grub.cfg /tmp/grub.cfg
sed -i 's/linux \/casper\/vmlinuz  ---/linux \/casper\/vmlinuz autoinstall quiet ---/g' /tmp/grub.cfg
sed -i 's/timeout=30/timeout=1/g' /tmp/grub.cfg

With grub modified, it’s time to start putting things back together. The apt packages installed here are needed to build the modified ISO image. The GitHub repo that’s cloned is for livefs-editor, which simplifies a lot of the other work. At some point, I’ll hopefully come back and update this post to do the needful with just xorriso.

sudo apt install xorriso squashfs-tools gpg liblz4-tool
git clone https://github.com/mwhudson/livefs-editor
cd livefs-editor
python -m venv .venv
source .venv/bin/activate
python -m pip install .
# Need to become root to deal with the mounted drive being owned by root
sudo su
source ./.venv/bin/activate
export ORIG_ISO=ubuntu-24.04.1-live-server-amd64.iso
export MODDED_ISO="${ORIG_ISO::-4}-modded.iso"
livefs-edit ../$ORIG_ISO ../$MODDED_ISO --cp /tmp/grub.cfg new/iso/boot/grub/grub.cfg
deactivate
exit

Following the section above, you’ll have everything pulled down, changed from the cloud-init-test/ folder to the livefs-editor folder, created a Python virtual environment, activated it, and then installed the Python dependencies. Feel free to swap python -m venv .venv for your virtual environment manager of choice. This should just work on at least a fresh 24.04 image or 24.04 WSL.
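
As an aside, the xorriso-only route I mentioned would probably look roughly like the following. This is an untested sketch based on xorriso’s -map and boot-record replay options, not something I’ve verified against these images:

# Untested sketch: repack the ISO with the modified grub.cfg using only xorriso
export ORIG_ISO=ubuntu-24.04.1-live-server-amd64.iso
export MODDED_ISO="${ORIG_ISO::-4}-modded.iso"
xorriso -indev "$ORIG_ISO" \
        -outdev "$MODDED_ISO" \
        -boot_image any replay \
        -map /tmp/grub.cfg /boot/grub/grub.cfg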

Now we have our modified image that’s going to start the process right away. Next we need the seed data. I’ve changed my process significantly from what I relied on to get this far. Rather than using a seed image on a secondary drive, or even further mods to the ISO, we just use Ubuntu’s Autoinstall. This requires placing an autoinstall.yaml file in the root of the USB drive that’s going to be used for install. This file ends up having the cloud-init data embedded into it.

Autoinstall File

Next up is the autoinstall.yaml file. There’s going to be a lot to get into here as well. The docs go into a good amount of detail, but I think there’s room for improvement. (I’d offer to assist, but look how long this took…) The username portion is different than what I actually use, and I’d recommend you change it, but it will work. The encrypted password is ubuntu, generated with mkpasswd -m sha-512 ubuntu (mkpasswd is part of the whois package). If you omit the password key in identity and user-data, then interactive logon will not be enabled for the default user. Be careful here.
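
If you want to generate your own hash instead of reusing mine, it’s just the one command. Run it without the trailing password argument and mkpasswd will prompt for the password, keeping it out of your shell history. (Your output will differ from the hash below since the salt is random.)

sudo apt install whois     # provides mkpasswd
mkpasswd -m sha-512        # prompts for the password to hash
mkpasswd -m sha-512 ubuntu # or pass it inline, like the example hash in the file below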

You can just create this in the directory we’re currently working in. Then, once you write the ISO to your thumbdrive, place the autoinstall.yaml file in the root. I usually use Rufus for writing my thumbdrives. There’s plenty of information out there on how to use Rufus and alternatives, so we’re going to skip that. Just make sure when selecting the ISO to select the ubuntu-24.04.1-live-server-amd64-modded.iso image.

version: 1
identity:
  hostname: ubuntu-bee
  username: cloud
  password: $6$CVoaYl2CyX91bSUk$dD.j89PMxFnYjo23dGNtl5aSNHuo8OLPGqfgqIPX8RXiXrGX3JirXIgc3ESutXM9AzK3.nKWMPIkkl9Np0ArV1
source:
  search_drivers: true
  id: ubuntu-server
ssh:
  install-server: yes
  allow-pw: false
update: false
storage:
  layout:
    name: lvm
    sizing-policy: all
    reset-partition: false
    match:
      size: smallest
timezone: geoip
shutdown: poweroff
user-data:
  disable_root: true
  package_upgrade: false
  groups:
    - cloud
  users:
    - name: cloud
      primary_group: cloud
      passwd: $6$CVoaYl2CyX91bSUk$dD.j89PMxFnYjo23dGNtl5aSNHuo8OLPGqfgqIPX8RXiXrGX3JirXIgc3ESutXM9AzK3.nKWMPIkkl9Np0ArV1
      ssh_authorized_keys:
        - ssh-rsa <RSA KEY HERE>
      sudo: ALL=(ALL) NOPASSWD:ALL
      lock_passwd: false
      groups: admin, sudo, cloud
      shell: /bin/bash
drivers:
  install: true

So let’s get into the different stanzas!

version

  • version: This, along with 1 as the value, is required. It’s so that in the future autoinstall can have different structures and maintain backwards compatibility.

identity

The docs say that if using the user-data stanza, username and password are not needed. However, when I tried with just hostname on a 24.04 install, it failed. So I just went ahead and included the username, along with the same password hash I included in the user-data section.

  • hostname: The hostname you want the computer to have.
  • username: The initially created user you want.
  • password: A hashed password for the initial user.

source

  • search_drivers: Whether or not you want to use ‘non-free’ drivers. I opt to install them.
  • id: This lets you specify what version of Ubuntu you want, again docs for the options.

ssh

  • install-server: This specifically is to install the ssh server, which I definitely want.
  • allow-pw: Whether or not you want to allow interactive (password) based login.

update

  • update: Determines if it should do updates after the install. I prefer to control this via Ansible. I’ve still got resentment from hours-long updates on older versions… old habits die hard I guess.

storage

Now this section can get really tricky. I recommend taking a look at the storage docs for this section. There’s the simple layout object you can use, and if you’re good with just installing to the non-install media, by all means, I’d recommend it. I really hope at some point they add support for the special install-media option for match within this stanza, because you cannot just use size: smallest within the match block to get it. You can find many different complex versions online that use config. It allows a lot of curtin customization.

An embarrassing side note on this section: I messed around with this back in March 2024. One evening, to my surprise, it worked. The problem: I was messing around with the autoinstall file on the thumbdrive, so I didn’t have a backup of the working config. What’s worse, I told myself, “I’ll get this all documented tomorrow.” Well, tomorrow came, and went. And I got extremely busy over the course of the year. Conferences, work, travel, blah, blah, blah.

With all that, let’s get into what is being used.

  • layout: This is used for the simple option. Alternatively, if going the curtin route, you would use config. If both are present, config is ignored.
    • name: The options are lvm, direct, and zfs. I opt for lvm since it’s what I’m most comfortable with, and it makes the most sense in my mind for just the install drive.
    • sizing-policy: Since we’re using lvm, the options are scaled or all. Again, being just the boot drive, it makes sense to be all.
    • reset-partition: Because this whole idea is throwaway OS installs, there’s no need for a reset partition, so I opt for false. If set to true, you’ll have another partition that can be used for OS recovery, containing a copy of the installer image.
    • match: This is where you tell it which drive to install to. There are a few different options here, and again I’d recommend hitting the autoinstall docs for more information. However, since the thumbdrive tends to be smaller than the storage drive, size: smallest is good for getting it to match the second thumbdrive that will be plugged in. (A couple of alternative match examples follow this list.)
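
If size: smallest doesn’t reliably pick the drive you want, the docs list a few other match keys. For illustration only (the model value below is hypothetical), matching by model would look something like:

storage:
  layout:
    name: lvm
    match:
      model: "SanDisk*"  # hypothetical model glob; serial, path, ssd, and size are other documented match keys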

timezone

  • timezone: You can either provide no timezone, which will result in Etc/UTC; you can tell it geoip like I have, which will determine the timezone automatically based on IP information; or you can provide a timezone mapping like America/New_York.

shutdown

  • shutdown: Post install behavior. Options are reboot or poweroff

user-data

This is cloud-init user data. I covered this in Automating ESX Install a bit as well. You can also reference the All cloud config examples for more information on cloud-init user data.

A breakdown of this simple one is as follows:

  • user-data:
    • disable_root: Disables the root user.
    • package_upgrade: Whether or not to update packages on first boot.
    • groups: Initial groups to create, as a list of group names. This can be useful for creating something like a default Ansible user for management.
    • users: Another set of object information containing user data.
      • name: The username of the user to create
      • primary_group: What the primary group is. Can be one you created, or one of the defaults.
      • passwd: The hashed password to use. Necessary if you don’t set the sudo item below, as well as if you want interactive login for the user.
      • ssh_authorized_keys: A list of keys. Each entry does require the key type (e.g. ssh-rsa) followed by a space and your public key.
      • sudo: Rules for sudo can be placed in here. This can be very complicated. The option I use, ALL=(ALL) NOPASSWD:ALL, sets it up so that sudo doesn’t require the password for any command. The sudo manpage goes over some of the info. This Cloud-Init Example also has some information that can be referenced. (A more restrictive example is sketched after this list.)
      • lock_passwd: Whether or not to lock the password to disable password login.
      • groups: Additional groups to make the user a member of. I recommend including admin, sudo at a minimum.
      • shell: The shell to use for interactive login. Note: if you set to /bin/nologin then the user will not be able to get an interactive shell. E.g. no sudo su user.
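
For example, if you wanted the user to be able to run only a couple of specific commands without a password, the sudo value could look something like this instead (the command list here is hypothetical, adjust to your needs):

users:
  - name: cloud
    # hypothetical restriction: only apt and systemctl run without a password, everything else prompts
    sudo: "ALL=(ALL) NOPASSWD:/usr/bin/apt, /usr/bin/systemctl"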

drivers

  • install: Whether or not to install available third-party drivers. I recommend setting it to true for optimal compatibility.

Tying it together up to now

At this point, the modded image has been created. And if we’re back in our cloud-init-test directory, it’s going to look like this:

.
├── autoinstall.yaml
├── livefs-editor
│   ├── COPYING
│   ├── README.md
│   ├── build
│   │   ├── bdist.linux-x86_64
│   │   └── lib
│   │       └── livefs_edit
│   │           ├── __init__.py
│   │           ├── __main__.py
│   │           ├── actions.py
│   │           ├── cli.py
│   │           └── context.py
│   ├── examples
│   │   └── example.yaml
│   ├── livefs_edit
│   │   ├── __init__.py
│   │   ├── __main__.py
│   │   ├── actions.py
│   │   ├── cli.py
│   │   └── context.py
│   ├── livefs_edit.egg-info
│   │   ├── PKG-INFO
│   │   ├── SOURCES.txt
│   │   ├── dependency_links.txt
│   │   ├── entry_points.txt
│   │   ├── requires.txt
│   │   └── top_level.txt
│   ├── pyproject.toml
│   └── setup.cfg
├── meta-data
├── mnt
├── ubuntu-24.04.1-live-server-amd64-modded.iso
├── ubuntu-24.04.1-live-server-amd64.iso

Once you have used Rufus or your tool of choice to write the image to a thumbdrive (note: I write in the ISO image mode option within Rufus), and copied the autoinstall.yaml file into the root, it will look something similar to this:

E:.
├───[boot]
├───boot
├───casper
├───dists
├───EFI
├───install
├───pool
├───autoinstall.yaml
├───autorun.ico
├───autorun.inf
├───boot.catalog
├───md5sum.txt
├───ubuntu
└───.disk

Disclaimer: Do not do this to a PC that has data you’re not willing to lose on the drives. Same thing with the Thumbdrives you use.

At this point, you’re ready to insert the thumbdrive you want to be the boot drive, along with the thumbdrive that is going to act as the live CD, into the PC you’re formatting, and power it up. Make sure you have set the BIOS to boot from the ISO thumbdrive. Once it’s done, the PC should shut itself down. I would suggest having a monitor to hook the computer up to the first time you do this; that way, if there are any issues, you can troubleshoot them.

Storage Setup Post Install

Assuming everything has worked up to this point, you have the system installed. If you used interactive logon details, you can hook up a keyboard to the computer and log in. Otherwise, you can find the IP from your DHCP server and ssh into it.
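
Something like this is all it takes (the IP here is just an example from my DHCP leases, and the key path is whatever matches the public key you baked into autoinstall.yaml):

# IP and key path are examples; substitute your DHCP lease and your own private key
ssh -i ~/.ssh/id_rsa cloud@192.168.1.50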

I’ve not quite gotten this part automated just yet. But there’s enough uniqueness here that I’m not sure it’ll be worth the effort. There’s a possibility that it could also be managed by cloud-init. The system has a drive that’s going to be used for storage, not a remote iSCSI or other style target. As a result, it needs to be formatted and configured for use.

First, run parted (it has to run as root) to get the drive identified: sudo parted -l. The output will look something like this:

cloud@ubuntu-bee:~$ sudo parted -l
Model: ATA NGFF 2280 256GB (scsi)
Disk /dev/sda: 256GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  1128MB  1127MB  primary  fat32        boot
 2      1128MB  256GB   255GB   primary  ext4


Model:  USB  SanDisk 3.2Gen1 (scsi)
Disk /dev/sdb: 30.8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  1128MB  1127MB  fat32              boot, esp
 2      1128MB  3276MB  2147MB  ext4
 3      3276MB  30.8GB  27.5GB

Now I know that this machine has a 256GB SSD and a 32GB USB drive. The 32GB drive is what the OS was installed onto, so I want to format the 256GB drive for use as storage.

So with the information above, I know that the drive I want to format is /dev/sda. I’m going to put a gpt label on the drive instead of the current msdos label, so for that I’ll run:

sudo parted /dev/sda mklabel gpt

Next is to create a partition spanning the entire drive with parted -a:

sudo parted -a opt /dev/sda mkpart primary ext4 0% 100%

The primary flag will make it a standalone partition, rather than one extended from another.

There are obviously other filesystems besides ext4 that can be used, but that’s a whole other topic. For this use case, ext4 is going to be sufficient.

Finally, the 0% and 100% flags will tell it to span from the beginning of the drive to the end.

Now, with the partition in place, I need to actually initialize the ext4 filesystem. For that, the mkfs.ext4 utility will be used:

sudo mkfs.ext4 -L storage /dev/sda1

The -L flag is to set the label; in this case I’m calling it “storage”. You can call it something else as well.

Next, the fstab will need to be updated so that the drive is mounted on boot. I’m going to need the UUID of the partition so that if other drives are added at a later time, it’ll still find the right one. To get that, the lsblk utility is going to be used.

sudo lsblk --fs

Which will yield something like this:

cloud@ubuntu-bee:~$ sudo lsblk --fs
NAME                   FSTYPE      FSVER    LABEL   UUID                                   FSAVAIL FSUSE% MOUNTPOINTS
loop0                  squashfs    4.0                                                           0   100% /snap/snapd/23258
loop1                                                                                            0   100% /snap/core22/1722
loop2                                                                                            0   100% /snap/lxd/31214
sda
└─sda1                 ext4        1.0      storage 5ba2290c-1a4e-4078-adc2-81f967188c7b    221.7G     0% /mnt/storage
sdb
├─sdb1                 vfat        FAT32            EA24-4399                                   1G     1% /boot/efi
├─sdb2                 ext4        1.0              f0220bf1-b9fc-450e-a8a1-78237d916aef      1.7G     5% /boot
└─sdb3                 LVM2_member LVM2 001         IXYvZL-uy7y-8no6-qUyX-7prA-UuGY-S8GyLC
  └─ubuntu--vg-ubuntu--lv
                       ext4        1.0              dfe2d417-5650-465d-b535-768d85da500a     15.5G    33% /

I’m going to mount this drive under /mnt/storage, so the directory needs to be created:

sudo mkdir -p /mnt/storage

Now it’s time to edit the fstab:

sudo nano /etc/fstab

Here’s what the contents look like:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation
/dev/disk/by-id/dm-uuid-LVM-uOrIhCPYkr0P3a0wlmQmnJD3cn4V0M0HxgtbmXLBcvw2I2ujW3YTz05jj1hvBZbN / ext4 defaults 0 1
# /boot was on /dev/sdc2 during curtin installation
/dev/disk/by-uuid/f0220bf1-b9fc-450e-a8a1-78237d916aef /boot ext4 defaults 0 1
# /boot/efi was on /dev/sdc1 during curtin installation
/dev/disk/by-uuid/EA24-4399 /boot/efi vfat defaults 0 1
/swap.img       none    swap    sw      0       0
# Storage drive
/dev/disk/by-uuid/5ba2290c-1a4e-4078-adc2-81f967188c7b /mnt/storage ext4 defaults 0 2

The lines that I’ve added are at the end. They contain the UUID that was grabbed with the lsblk command:

# Storage drive
/dev/disk/by-uuid/5ba2290c-1a4e-4078-adc2-81f967188c7b /mnt/storage ext4 defaults 0 2

Alternatively, I could do something like this and just append it:

sudo sh -c  'echo "# Storage drive\n/dev/disk/by-uuid/5ba2290c-1a4e-4078-adc2-81f967188c7b /mnt/storage ext4 defaults 0 2" >> /etc/fstab'

There are some newline characters in there; for anything more complicated, I wouldn’t recommend going this route, as it can get hairy to read and things could get broken.
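
If the echo-with-escapes approach feels fragile, a printf variant appending the same line is a bit more predictable across shells:

# same append, but printf handles the newline explicitly and tee -a does the root-owned write
printf '# Storage drive\n/dev/disk/by-uuid/5ba2290c-1a4e-4078-adc2-81f967188c7b /mnt/storage ext4 defaults 0 2\n' | sudo tee -a /etc/fstab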

Finally, the drive needs to be mounted after the modifications: sudo mount -a. This can then be verified with something simple like:

echo "test write" | sudo tee /mnt/storage/test_file

The reason this is done as root is that, right now, the entire /mnt/storage directory is owned by root. There are further modifications that could be done to make a different user own /mnt/storage; however, this is good enough for what I need.
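
If you did want a regular user to be able to write there without sudo, the simplest option is probably just handing the mount point over after it’s mounted, something like (assuming the cloud user from earlier):

# hand the mount point to the cloud user created during install
sudo chown -R cloud:cloud /mnt/storage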

And confirming the test file is there:

cloud@ubuntu-bee:~$ ls -la /mnt/storage/test_file
-rw-r--r-- 1 root root 11 Jan 12 18:42 /mnt/storage/test_file
cloud@ubuntu-bee:~$ cat /mnt/storage/test_file
test write

Everything is complete, and the system is ready for the next stage of setup!

Speedrun

Here’s a fast run-through with a YAML file that will create a default user of cloud without password-based SSH / keyboard login, but will allow SSH with a private key, installing the Ubuntu Server edition. The hostname will be ubuntu-bee and the user will be cloud. All you have to do is replace the password key values and the SSH authorized key.

Create image

sudo apt install xorriso squashfs-tools gpg liblz4-tool
mkdir baremetal-install
cd baremetal-install/
wget https://releases.ubuntu.com/24.04/ubuntu-24.04.1-live-server-amd64.iso
mkdir mnt
sudo mount -o loop ubuntu-24.04.1-live-server-amd64.iso mnt
cp --no-preserve=all ./mnt/boot/grub/grub.cfg /tmp/grub.cfg
sed -i 's/linux \/casper\/vmlinuz  ---/linux \/casper\/vmlinuz autoinstall quiet ---/g' /tmp/grub.cfg
sed -i 's/timeout=30/timeout=1/g' /tmp/grub.cfg
git clone https://github.com/mwhudson/livefs-editor
cd livefs-editor
python -m venv .venv
source .venv/bin/activate
python -m pip install .
sudo su
source ./.venv/bin/activate
export ORIG_ISO=ubuntu-24.04.1-live-server-amd64.iso
export MODDED_ISO="${ORIG_ISO::-4}-modded.iso"
livefs-edit ../$ORIG_ISO ../$MODDED_ISO --cp /tmp/grub.cfg new/iso/boot/grub/grub.cfg
deactivate
exit

Write the ubuntu-24.04.1-live-server-amd64-modded.iso to your thumbdrive in ISO mode.

Place an autoinstall.yaml into the root of the thumbdrive when done writing. Example below.

version: 1
identity:
  hostname: ubuntu-bee
  username: cloud
  password: $6$abcd
source:
  search_drivers: true
  id: ubuntu-server
ssh:
  install-server: yes
  allow-pw: false
update: false
storage:
  layout:
    name: lvm
    sizing-policy: all
    reset-partition: false
    match:
      size: smallest
timezone: geoip
shutdown: poweroff
user-data:
  disable_root: true
  package_upgrade: false
  groups:
    - cloud
  users:
    - name: cloud
      primary_group: cloud
      passwd: $6$abcd
      ssh_authorized_keys:
        - ssh-rsa <RSA KEY HERE>
      sudo: ALL=(ALL) NOPASSWD:ALL
      lock_passwd: true
      groups: admin, sudo, cloud
      shell: /bin/bash
drivers:
  install: true

Now eject the thumbdrive, and place the ISO USB and the one you want to use as your install drive into the PC. Boot, and then make sure the BIOS is set to boot from the correct USB.

Conclusion

At this point, the system is up and ready for the remainder of setup: virtualization platform of choice, containers, etc. I’ll cover that in the next post, getting into how I manage these hypervisors with Ansible, Terraform, etc.

References