Recently I had a few VMs on shared storage that I couldn’t live migrate. The cryptic error messages made it sound like local LVM was required, even though the GUI showed only shared storage for the VM. The errors I kept getting looked like this one:
volume pve/vm-103-disk-1 already exists
command 'dd 'if=/dev/pve/vm-103-disk-1' 'bs=64k'' failed: got signal 13
send/receive failed, cleaning up snapshot(s)..
ERROR: Failed to sync data - command 'set -o ...' failed: exit code 255
aborting phase 1 - cleanup resources
ERROR: found stale volume copy 'local-lvm:vm-103-disk-1' on node 'nick-desktop'
ERROR: migration aborted (duration 00:00:01): Failed to sync data - command 'set -o pipefail ...' failed: exit code 255
TASK ERROR: migration aborted
After a ton of digging I found this forum post that had the solution:
Most likely there is some stale disk somewhere. Try to run:
# qm rescan --vmid 101
That indeed was the problem. I ran
qm rescan --vmid 103
on the node in question, then refreshed the management page. After doing that, a ‘phantom’ disk entry showed up for the VM. I deleted it, but then had to run another qm rescan --vmid 103 before the VM would migrate.
So to recap: run qm rescan --vmid (vmid#) once, delete the stale disk that shows up, then run that same command again before migrating.
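For reference, here is the whole sequence as a rough sketch. I’m assuming the stale copy belongs to VM 103 and shows up as an “unusedN” entry after the rescan; the disk name, storage, and target node will differ on your setup. I removed the phantom disk in the GUI, so the qm unlink and qm migrate lines below are just roughly the CLI equivalents if you’d rather stay in a shell:

qm rescan --vmid 103                       # pick up the stale volume; it appears as an unusedN disk in the VM config
qm config 103 | grep unused                # confirm which unusedN entry is the phantom copy
qm unlink 103 --idlist unused0 --force 1   # detach and physically delete it (roughly the same as removing it from the Hardware tab)
qm rescan --vmid 103                       # rescan once more so the config is clean
qm migrate 103 <target-node> --online      # the live migration should now go through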
qm rescan –vmid 10X
gives me:
400 too many arguments
qm rescan [OPTIONS]
Make sure you’re only using one dash. It’s “qm rescan -vmid 100” to rescan VM 100’s storage.
What a stupid reply. Retyping a command the exact same way never helps. Your command is wrong.
“qm rescan” works fine for every VM; there’s no need to specify a VM.
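For anyone else hitting that “400 too many arguments” error: on current Proxmox VE the long option takes two dashes, and the en dash that web pages tend to substitute for “--” gets treated as a stray argument, which is most likely what triggered the error above. Either of these forms should work:

qm rescan              # rescan volumes for every VM on the node
qm rescan --vmid 103   # rescan only VM 103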