Quick way to make HDD light blink

I needed a quick and dirty way to make a specific hard drive LED blink so that I could identify which drive to replace. I stumbled across this post, which worked well for me.

The simple trick is to run smartctl in a while loop. For whatever reason, smartctl makes the drive light blink differently than dd, which is what I was using previously, and that difference is what finally allowed me to identify the drive. Here is the command:

while true; do smartctl -a /dev/<device>; done

The command runs forever until you press Ctrl+C. It makes the LED blink rather obviously, which made things much easier.
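
If the blinking is too rapid to pick out, a variant with a short pause (the half-second interval is just a guess; tune it to taste) gives a more distinct pulse:

while true; do smartctl -a /dev/<device> > /dev/null; sleep 0.5; done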

Zimbra commercial SSL renewal procedure

My quick notes on what I have to do every year to renew my Zimbra mail certificate with a new Namecheap SSL certificate:

  1. Request CSR
    • /opt/zimbra/bin/zmcertmgr createcsr comm -new -subject "/C=COUNTRY/ST=STATE/L=LOCATION/O=ORG/OU=OU/CN=CN.EXAMPLE.ORG" -subjectAltNames CN.EXAMPLE.ORG
    • cat /opt/zimbra/ssl/zimbra/commercial/commercial.csr
  2. Upload CSR, verify domain, receive cert bundle
  3. Copy CRT & CA Bundle files to /tmp/cert
  4. Change ownership of the files so the zimbra user can read them:
    sudo chown zimbra /tmp/cert
    sudo chown zimbra /tmp/cert/*
  5. Verify the new certificate against the private key
    zmcertmgr verifycrt comm /opt/zimbra/ssl/zimbra/commercial/commercial.key /tmp/cert/ISSUED_CRT.crt /tmp/cert/CA_BUNDLE.ca-bundle
  6. Deploy the new certificate
    zmcertmgr deploycrt comm /tmp/cert/ISSUED_CRT.crt /tmp/cert/CA_BUNDLE.ca-bundle
  7. Restart Zimbra
    zmcontrol restart
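
Once deployed, a quick sanity check from any machine (assuming the Zimbra web UI listens on 443) confirms the new validity dates:

echo | openssl s_client -connect CN.EXAMPLE.ORG:443 2>/dev/null | openssl x509 -noout -dates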

Trigger button to run a script in Home Assistant

I configured a button (RunlessWire Click) to log diaper changes for my new baby. The diaper changes are logged in a Google Sheets spreadsheet. I set up a simple public-facing Google Form that I could run unauthenticated curl requests against, then configured Home Assistant to run that curl command when the button is pressed. Instant diaper logging at the press of a button.

Lessons learned:

  • Zigbee Home Automation (ZHA) does not yet support the Zigbee Green Power protocol, which the RunlessWire Click uses. I had to pair the switches to my Hue hub instead.
    * It looks like they’re getting close to supporting it, though: https://github.com/zigpy/zigpy/pull/1282

Here was my process:

  • Create Google Form
  • Obtain form ID from URL bar
  • Get the names of the form fields: click the three dots on the top right, choose “Get pre-filled link”, fill out the form, and make note of the entry name for each field in the generated link, e.g. entry.1363419348
    Thanks to help from: https://stackoverflow.com/questions/65142364/i-cant-find-name-attribute-while-inspecting-input-elements-of-google-form-ho
  • Curl command is:
    curl https://docs.google.com/forms/d/<FORM_ID>/formResponse -d ifq -d <ENTRY_NAME>=<ENTRY_VALUE> -d <ADDITIONAL_ENTRY_NAME>=<ADDITIONAL_ENTRY_VALUE> -d submit=Submit
    Thanks to help from: https://eureka.ykyuen.info/2014/07/30/submit-google-forms-by-curl-command/
  • Shell commands go into configuration.yaml:
    shell_command:
      log_pee: <CURL_COMMAND>
      log_poo: <CURL_COMMAND>
    Thanks to help from: https://community.home-assistant.io/t/dont-understand-how-to-use-shell-commands/576580/9
  • Restart Home Assistant to pick up your configuration changes.
  • Configure an automation to call the shell_command service when the button is pressed (see the sketch below).
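
For reference, here is a minimal sketch of the automation in YAML (the trigger entity and state are made up; use whatever trigger your button actually exposes, e.g. a Hue device trigger):

automation:
  - alias: Log diaper change (pee)
    trigger:
      - platform: state
        entity_id: sensor.click_button    # hypothetical entity for the RunlessWire button
        to: "single"
    action:
      - service: shell_command.log_pee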

Success!

Saltstack gitfs ‘Failed to authenticate SSH session: Callback returned error’ fix

I lost several days of productivity to this one. I wanted to connect my CentOS 7 salt master’s salt & pillar data to a gitfs backend. I configured /etc/salt/master per the docs but kept getting this error message:

Error occurred fetching gitfs remote 'git@github.com:<owner>/<repo>': Failed to authenticate SSH session: Callback returned error

I eventually discovered this bit of info that pointed me in the right direction: the problem was likely the SSH key I was using. I followed the steps to generate a new key, but this time I received the error message “You’re using an RSA key with SHA-1, which is no longer allowed. Please use a newer client or a different key type.”

The issue stemmed from the fact that GitHub tightened its security requirements for SSH keys. More digging revealed that the pygit2 Python module that comes with CentOS 7 is old and does not support the newer key types and signature algorithms. I eventually found a fix: use pip to install a compatible version of pygit2. The latest version that works on CentOS 7 is 1.6.1. Simply installing it wasn’t enough, though; you must also remove the system-installed pygit2 yum package.

Steps to fix

  1. Remove system supplied pygit2 version
    sudo yum remove python3-pygit2
  2. Install version 1.6.1 of pygit2 via pip. Sudo must be used to ensure global paths are updated.
    sudo python3 -m pip install pygit2==1.6.1 -U
  3. Restart the salt master
    sudo systemctl restart salt-master
  4. Review /var/log/salt/master for errors.
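
For context, the relevant gitfs settings in /etc/salt/master look roughly like this (key paths are examples; pygit2 needs both halves of the keypair, and an ed25519 key sidesteps the RSA/SHA-1 problem entirely):

fileserver_backend:
  - roots
  - gitfs

gitfs_provider: pygit2

gitfs_remotes:
  - git@github.com:<owner>/<repo>:
    - pubkey: /root/.ssh/gitfs_ed25519.pub
    - privkey: /root/.ssh/gitfs_ed25519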

Troubleshooting

Monitor /var/log/salt/master for errors. I occasionally ran into errors such as this one:

2024-03-15 13:01:45,957 [salt.utils.gitfs :878 ][WARNING ][31763] gitfs_global_lock is enabled and update lockfile /var/cache/salt/master/gitfs/5b5f257b5dc909390cd0dfab5b6722334c9bc541912da272389f39cf5b80602e/.git/update.lk is present for gitfs remote 'git@github.com:<owner>/<repo>'. Process 31793 obtained the lock

The solution was to remove the lockfile and restart the salt master.
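
In shell terms (the hash directory will be whatever appears in your log):

sudo rm /var/cache/salt/master/gitfs/<remote_hash>/.git/update.lk
sudo systemctl restart salt-master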

Configure Zimbra live replication

I’ve recently configured live active replication from my Zimbra e-mail server to a backup server. This is really slick: in the event of a primary server failure, I can bring up my secondary in a matter of minutes with no data loss. I used the Zimbra live sync scripts on GitLab to accomplish this.

These are my notes on things I needed to do in addition to the readme to get things to work properly on my Zimbra 8.8.15 Open Source Edition installs on CentOS 7 boxes.

Install at (the package that provides the atd daemon):
sudo yum install at
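
Make sure the daemon is enabled and running as well:

sudo systemctl enable --now atd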

Make sure the backup server has the same firewall rules as the primary: https://wiki.zimbra.com/wiki/Ports

On the backup server, configure DNS so that the mail server’s hostname resolves to the backup server’s IP address (hostname mail.server.dns -> mirror mail server’s IP).

Disable DNS forwarding for the primary mail server’s domain if configured (to ensure mail goes to the backup server in the event of a switchover).

Clone the prod mail server over, spin it up, and change its network settings:

  • keep hostname (important)
  • change IP, DNS, and hosts entries to use the new IP address/network; the relevant files:

/etc/sysconfig/network-scripts/ifcfg-eth0
/etc/hosts
/etc/resolv.conf

Ensure proper VLAN settings in the backup VM (they may differ from the primary).

Systemd service tweaks:

  • add Environment=PATH=/opt/zimbra/bin:/opt/zimbra/common/lib/jvm/java/bin:/opt/zimbra/common/bin:/opt/zimbra/common/sbin:/usr/sbin:/sbin:/bin:/usr/sbin:/usr/bin
  • add WorkingDirectory=/opt/zimbra
  • remove the start argument from ExecStart, leaving: ExecStart=/opt/zimbra/live_sync/live_syncd

This is the complete systemd unit for live sync:

[Unit]
Description=Zimbra live sync - to be run on the mirror server
After=network.target

[Service]
ExecStart=/opt/zimbra/live_sync/live_syncd
ExecStop=/opt/zimbra/live_sync/live_syncd kill
User=zimbra
Environment=PATH=/opt/zimbra/bin:/opt/zimbra/common/lib/jvm/java/bin:/opt/zimbra/common/bin:/opt/zimbra/common/sbin:/usr/sbin:/sbin:/bin:/usr/sbin:/usr/bin
WorkingDirectory=/opt/zimbra

[Install]
WantedBy=multi-user.target
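
After saving the unit (I'm assuming a filename along the lines of /etc/systemd/system/zimbra-live-sync.service; adjust to match yours), reload systemd and enable it:

sudo systemctl daemon-reload
sudo systemctl enable --now zimbra-live-sync.service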

Time limit

It looks like there’s a limit on how long Zimbra keeps redo logs, which means you will lose mail if you try to bring your primary server back up after it has been offline for too long (more than a few weeks). If you’ve been failed over to your secondary mail server for more than two weeks, do the reverse procedure instead: clone the backup to the primary, edit IP addresses, then run the Zimbra live sync. Log into the restored server to confirm that mails from more than two weeks ago are all there.

Digitize old photos and videos

Here is a list of hardware and software that I use to digitize old home movies, tapes, and family pictures:

Hardware

35mm film scanner: Pacific Image PowerFilm Plus 35mm Film Scanner

Document / picture scanner: Brother ADS-2700W

Flatbed scanner: Epson DS-50000 Large-Format Document Scanner

Audiocassette player with a 3.5mm output jack

Laptop / computer with 3.5mm input jack (headphone/microphone)

VHS player

USB VHS to Digital Converter

Soft tip silicone air blower

Software

Video capture: OBS Studio

Photo editing: GIMP

Audio capture: Audacity

Digitizing these precious memories makes them available for future generations. They are much more useful to everyone online than they ever were sitting in a box.

Replace unavail disk in ZFS

I had an issue where I removed a drive from my ZFS array and replaced it with a new drive, which the OS gave the same device name (/dev/sdd). I had a hard time getting ZFS to replace the drive until I discovered the -g flag for zpool status (thanks to this Stack Exchange post).

That did the trick! Running zpool status -g showed the GUID of each device, which I could then pass to zpool replace:

sudo zpool replace Poolname 12922644002107879117 /dev/sdd
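
The resilver can then be watched with a plain zpool status until it completes:

zpool status Poolname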

Success!

Fix makemkv not compiling in Arch

I’ve had my Arch Linux desktop system for several years now. Over that time, cruft has built up. It bit me today when I tried to install makemkv. No matter what I tried, I could not get it to compile; configure failed every time at this step:

checking whether LIBAVCODEC_VERSION_MAJOR is declared... yes
checking LIBAVCODEC_VERSION_MAJOR... 52
...
configure: error: The libavcodec library is too old. Please get a recent one from http://www.ffmpeg.org

I had to systematically delete anything containing ffmpeg, then re-install ffmpeg, in order to finally get it to work.

Get a list of installed packages containing ffmpeg:

yay -Ss ffmpeg | grep Installed
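
An alternative that queries only the locally installed package database:

yay -Qs ffmpeg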

Remove ffmpeg-containing packages:

yay -R chromaprint-fftw grilo-plugins gst-plugins-bad cheese gnome-music gnome-video-effects totem ffmpeg-compat-54 ffmpeg-compat-57 ffmpeg0.10 ffmpeg4.4 vlc libavutil-52 faudio

Install makemkv:

yay -S makemkv

My “nuke all ffmpeg from orbit” approach worked. After I did so, makemkv compiled!

Fix cron output not being sent via e-mail

I had an issue where cron jobs wrote output to stdout, yet that output was never delivered by mail. Everything looked fine in cron.log:

Aug  3 21:21:01 mail CROND[10426]: (nicholas) CMD (echo "test")
Aug  3 21:21:01 mail CROND[10424]: (nicholas) CMDOUT (test)

yet no e-mail was sent. I finally found the fix in a roundabout way: I came across this article on cpanel.net about how to silence cron e-mails, tried the reverse of one of its suggestions, and added a MAILTO= variable at the top of my crontab. It worked! Example crontab:

MAILTO="youremail@address.com"
0 * * * * /home/nicholas/queue-check.sh

This came about due to my Zimbra box not sending system e-mails. In addition to the above, I had to configure Zimbra as the system’s sendmail alternative per this Zimbra wiki post: https://wiki.zimbra.com/wiki/How_to_%22fix%22_system%27s_sendmail_to_use_that_of_zimbra
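
A quick way to verify the system sendmail path works after that change (using the example address from above) is to pipe a message straight to sendmail:

printf "Subject: sendmail test\n\nIt works\n" | sendmail -v youremail@address.com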