
Proxmox HA management script

I was a bit frustrated by the lack of certain functions in Proxmox. I wanted an easy way to tag a VM and manage everything with that tag as a group. My solution was to create HA groups for VMs with various functions. I can then manage the group as a whole and tell its members to migrate storage or power off and on.
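For context, an HA group used this way can be created with ha-manager and VMs assigned to it. A rough sketch, using a hypothetical group name, node names, and VMID:

#Create an HA group to act as the "tag" (group and node names are examples)
ha-manager groupadd web-servers --nodes proxmox1,proxmox2

#Assign a VM to that group so the script below can find it
ha-manager add vm:101 --group web-servers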

I wrote a script to facilitate this. Right now it only covers powering on, powering off, and migrating the location of the primary disk, but more is to come.

Here’s what I have so far:

#!/bin/bash
#Proxmox HA management script
#Migrates storage, starts, or stops Proxmox HA groups based on the name and function passed to it.
#Usage: manage-HA-group.sh <start|stop|migrate> <ha-group-name> [local|remote]

#Change to the name of your local storage (for migrating from remote to local storage)
LOCAL_STORAGE_NAME="pve-1TB"

function get_vm_name() {
    #Determine the name of the VMID passed to this function
    VM_NAME=$(qm config "$1" | grep '^name:' | awk '{print $2}')
}

function get_group_VMIDs() {
    #Get a list of VMIDs based on the name of the HA group passed to this function
    group_VMIDs=$(ha-manager config | grep -B1 "$1" | grep vm: | sed 's/vm://g')
}

function group_power_state() {
    #Loop through members of HA group passed to this function
    for group in "$1"
    do
        get_group_VMIDs "$group"
        for VM in $group_VMIDs
        do
            get_vm_name "$VM"
            echo "$OPERATION $VM_NAME in HA group $group"
            ha-manager set "$VM" --state "$VM_STATE"
        done
    done
}

function group_migrate() {
    #This function migrates the VM's first disk (scsi0) to the specified location (local/remote)
    #TODO String to determine all disk IDs:  qm config 115 | grep '^scsi[0-9]:' | tr -d ':' | awk '{print $1}'
    disk="scsi0"

    #Loop through each VM in specified group name (second argument passed on CLI)
    for group in "$2"
    do
        get_group_VMIDs "$group"
        for VM in $group_VMIDs
        do
            #Determine the names of each VM in the HA group
            get_vm_name "$VM"

            #Set storage location based on argument (in my setup, remote storage is named after the VM)
            if [[ "$3" == "remote" ]]; then
                storage="$VM_NAME"
            else
                storage="$LOCAL_STORAGE_NAME"
            fi

            #Move primary disk to desired location
            echo "Migrating $VM_NAME to "$3" storage"
            qm move_disk $VM $disk $storage --delete=1

        done
    done
}

case "$1" in 
    start)
        VM_STATE="started"
        OPERATION="Starting"
        group_power_state "$2" 
        ;;
    stop)
        VM_STATE="stopped"
        OPERATION="Stopping"
        group_power_state "$2"
        ;;
    migrate)
        case "$3" in
            local|remote)
                group_migrate "$@"
                ;;
            *)
                echo "Usage: manage-HA-group.sh migrate <ha-group-name> <local|remote>"
                ;;
        esac        
    ;;
    *)
        echo "Usage: manage-HA-group.sh <start|stop|migrate> <ha-group-name> [local|remote]"
        exit 1
        ;;
esac
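
For example, assuming an HA group named web-servers (a made-up name), the script can be called like this:

#Start or stop every VM in the group
./manage-HA-group.sh start web-servers
./manage-HA-group.sh stop web-servers

#Move each VM's scsi0 disk to local storage (pve-1TB, as configured above)
./manage-HA-group.sh migrate web-servers local

#Move each VM's scsi0 disk back to its remote storage (named after the VM in my setup)
./manage-HA-group.sh migrate web-servers remote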

Proxmox VMs reboot when switch is rebooted

I came across an interesting situation where, if I rebooted my Ubiquiti UniFi Switch 24 for a firmware upgrade, all my VM hosts would reboot themselves. It turned out to be due to my having enabled HA in Proxmox. The hosts would temporarily lose connectivity to each other and begin to fence themselves off from the cluster. This caused HA to kill the VMs on those hosts. Then, once connectivity was restored, everything would eventually come back up.

The proper way to fix this would be to have multiple paths for each host to talk to each other, so if one switch goes down the cluster is still able to communicate. In my case, where I only have one switch, the “poor man’s fix” was to simply disable HA altogether during the switch reboot, as outlined here. Then, once the switch is back up, re-enable HA.

On each node, stop the pve-ha-lrm service. Once it’s stopped on all hosts, stop the pve-ha-crm service. Then reboot your switch.
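
On a standard Proxmox VE install these are systemd units, so the stop sequence looks something like this (run on every node):

#On every node: stop the local resource manager first
systemctl stop pve-ha-lrm

#Once pve-ha-lrm is stopped on all nodes, stop the cluster resource manager
systemctl stop pve-ha-crm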

After the switch is back up, start pve-ha-lrm on each node first, then pve-ha-crm (if it doesn't start itself automatically) to re-enable HA.
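
And to bring HA back afterward:

#On every node: start the local resource manager again
systemctl start pve-ha-lrm

#Then start the cluster resource manager if it didn't come back on its own
systemctl start pve-ha-crm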