Make FreeDOS boot ISO to flash BIOS

I needed to flash the BIOS of one of my old server motherboards and to my dismay found the only way to do so was via a DOS boot image. It was not straightforward, so I thought I'd write it down. Thanks to pingtool & tummy.com for the information I needed to pull it off.

First, obtain a copy of the FreeDOS ISO, mount it, and copy its contents to a working directory:

  • mount -o loop <freedosISO.iso> <mount directory>
  • rsync -aP <mount directory>/ <directory you want the files copied to>
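
As a concrete example, here's what that looks like on my end. The ISO filename and directories below are just placeholders for whatever you're using:

mkdir -p /mnt/freedos ~/freedos-build
sudo mount -o loop FD12CD.iso /mnt/freedos
rsync -aP /mnt/freedos/ ~/freedos-build/
sudo umount /mnt/freedos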

Next, copy the necessary flash utility and firmware files into that same directory.
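
For example (the utility and firmware filenames below are made up; use whatever your motherboard vendor ships):

cp ~/Downloads/AFUDOS.EXE ~/freedos-build/
cp ~/Downloads/BIOS123.ROM ~/freedos-build/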

Lastly, use genisoimage to create a new bootable ISO from that folder. Change the -o argument to wherever you want the ISO to go.

sudo apt install genisoimage
cd <folder you copied your files to>
genisoimage -o /tmp/freedos_biosupdate.iso -q -l -N \
   -boot-info-table -iso-level 4 -no-emul-boot \
   -b isolinux/isolinux.bin \
   -publisher "FreeDOS - www.freedos.org" \
   -A "FreeDOS beta9 Distribution" -V FDOS_BETA9 -v .

From here you can take the ISO and mount or burn it as needed. It will boot into FreeDOS; tell it to drop to a shell and away you go.
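
If you want to sanity-check the image before committing it to a disc or USB stick, you can loop-mount it and make sure your utility and firmware made it in (paths here are examples again):

sudo mount -o loop /tmp/freedos_biosupdate.iso /mnt/freedos
ls /mnt/freedos
sudo umount /mnt/freedos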

FreeNAS ZFS tuning for SSDs

I wanted to optimize the all-SSD storage array on my FreeNAS server, but I had a hard time finding the information in one place. After a lot of digging I pulled things together from several sources; this is what I came up with. It boiled down to two main settings:

  • ashift
  • recordsize

Checking ashift on existing pools

zdb -U /data/zfs/zpool.cache | grep ashift
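
The value reported is a power-of-two exponent for the pool's block alignment:

ashift=12  ->  2^12 = 4096 bytes (4K)
ashift=13  ->  2^13 = 8192 bytes (8K)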

I read here that the recommended settings for VM workloads on SSDs are ashift=13 and recordsize=8K.

How to change recordsize:

This is easily done in the GUI or on the command line and can be changed on the fly; note that the new recordsize only applies to data written after the change.

zfs set recordsize=<value> <dataset>
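
As a concrete example (the pool and dataset names here are made up), setting an 8K recordsize on a VM dataset and verifying it:

zfs set recordsize=8K tank/vm-storage
zfs get recordsize tank/vm-storage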

How to change ashift:

Back up your data and destroy the pool.

Modify the sysctl that dictates the minimum ashift for newly created vdevs, as outlined here:

sysctl vfs.zfs.min_auto_ashift=13

Re-create the pool.
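
Roughly, the whole cycle looks like the sketch below. The pool and disk names are made up, and on FreeNAS you would normally destroy and re-create the pool through the GUI; also note that the sysctl does not survive a reboot unless you add it as a tunable.

zpool destroy tank
sysctl vfs.zfs.min_auto_ashift=13
zpool create tank mirror ada0 ada1
zdb -U /data/zfs/zpool.cache | grep ashift   # confirm the new pool reports ashift: 13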

Additional reading

http://open-zfs.org/wiki/Performance_tuning#Alignment_shift
https://www.reddit.com/r/zfs/comments/7pfutp/zfs_pool_planning_for_vm_storage/

Free up RAM after Proxmox live migration

I ran into an issue where, after migrating a bunch of VMs off one of my hosts, the remaining VMs on it refused to start. Every time I tried, the command would hang for a while and eventually error out with this message:

TASK ERROR: start failed: command '/usr/bin/kvm -id <truncated>... ' failed: got timeout

I suspected this might be due to RAM usage, and sure enough it was too high for a host that didn't have any VMs running on it. I found here that I could run a command to drop the kernel caches:

echo 3 > /proc/sys/vm/drop_caches
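
If you want to see the effect, you can check memory before and after; running sync first flushes dirty pages so that more of the cache can actually be dropped:

free -h
sync
echo 3 > /proc/sys/vm/drop_caches
free -h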

That caused the RAM usage to go down, but the VM still refused to start. I then saw that KSM sharing still had some memory tied up in it, so I decided to restart the KSM sharing service:

sudo systemctl restart ksmtuned
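
You can also watch KSM's standard sysfs counters before and after the restart; pages_sharing multiplied by the page size (typically 4 KiB) roughly approximates how much memory KSM is currently deduplicating:

cat /sys/kernel/mm/ksm/pages_shared    # shared pages currently in use
cat /sys/kernel/mm/ksm/pages_sharing   # how many page sites map to those shared pages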

After running that, the VM started!