It turns out that when hard drives fail, they rarely fail all at once. Most fail silently, degrading over time and causing bit rot and other data-integrity issues.
I had a suspicion that one of my drives was failing, so I decided to test it. The tool for the job: badblocks.
badblocks writes patterns to the drive and then reads them back to verify it gets the expected result. I have learned a lot about hard drive failure lately and now make a habit of running badblocks on every new hard drive I receive to confirm it is a good drive. The command I use is:
badblocks -wsv <device>
This is a destructive write test (-w), with progress reporting (-s) and verbose output (-v). It will wipe the disk, which is fine for new drives, but double-check before pointing it at anything else. You can also run a non-destructive test, shown below. I also use badblocks to check that old disks can still be trusted with data. It’s great for “burn in” testing to catch drives likely to fail early.
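If the drive already holds data, a sketch of a non-destructive run looks like this, with /dev/sdX standing in for your actual device; -n selects badblocks’ non-destructive read-write mode, and -o saves the list of bad blocks to a file you can keep:
badblocks -nsv -o badblocks.log /dev/sdX
Non-destructive mode reads each block, writes a test pattern, verifies it, and then restores the original contents, so it is considerably slower than the four-pattern -w test but safe on disks that are in service.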
Update 3/1/19: If you encounter the following error:
badblocks: Value too large for defined data type invalid end block (5860522584): must be 32-bit value
It means your drive has more blocks than badblocks can address at its default 1024-byte block size, since block numbers are stored as 32-bit values. Fix this by specifying a 4096-byte block size, which also matches the physical sectors of most modern drives:
badblocks -b 4096 -wsv <device>
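If you want to confirm the right block size before running the test, you can ask the kernel for the drive’s physical sector size with blockdev (again with /dev/sdX as a placeholder):
blockdev --getpbsz /dev/sdX
Most modern high-capacity drives report 4096 here, and using -b 4096 raises badblocks’ addressable range enough for drives in this size class.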
Thanks to Ubuntu Forums for the info.