Fix makemkv not compiling in Arch

I’ve had my Arch Linux desktop system for several years now. Over that time, cruft has built up. It bit me today when I tried to install makemkv. No matter what I tried, I could not get it to compile. Configure constantly failed at this step:

checking whether LIBAVCODEC_VERSION_MAJOR is declared... yes
checking LIBAVCODEC_VERSION_MAJOR... 52
...
configure: error: The libavcodec library is too old. Please get a recent one from http://www.ffmpeg.org

I had to systematically delete anything containing ffmpeg, then re-install ffmpeg, in order to finally get it to work.

Get a list of installed packages containing ffmpeg:

yay -Ss ffmpeg | grep Installed

Remove ffmpeg-containing packages:

yay -R chromaprint-fftw grilo-plugins gst-plugins-bad cheese gnome-music gnome-video-effects totem ffmpeg-compat-54 ffmpeg-compat-57 ffmpeg0.10 ffmpeg4.4 vlc libavutil-52 faudio

Install makemkv:

yay -S makemkv

My “nuke all ffmpeg from orbit” approach worked. After I did so, makemkv compiled!

Fix cron output not being sent via e-mail

I had an issue where cron jobs that wrote data to stdout never had their output delivered via e-mail. Everything looked fine in cron.log:

Aug  3 21:21:01 mail CROND[10426]: (nicholas) CMD (echo "test")
Aug  3 21:21:01 mail CROND[10424]: (nicholas) CMDOUT (test)

yet no e-mail was sent. I finally found out how to fix this in a roundabout way. I came across this article on cpanel.net on how to silence cron e-mails. I then thought I’d try the reverse of one of its suggestions and add a MAILTO= variable at the top of my crontab. It worked! Example crontab:

MAILTO="youremail@address.com"
0 * * * * /home/nicholas/queue-check.sh

This came about due to my Zimbra box not sending system e-mails. In addition to the above, I had to configure zimbra as a sendmail alternative per this Zimbra wiki post: https://wiki.zimbra.com/wiki/How_to_%22fix%22_system%27s_sendmail_to_use_that_of_zimbra

Fix no internet in KVM/QEMU VMs after installing docker

I ran into a frustrating issue where my KVM VMs would lose network connectivity whenever docker was installed on my Arch Linux system. After some digging I finally discovered the cause (thanks to anteru.net).

It turns out, docker adds a bunch of iptables rules by default which prevent communication. These will interfere with an already existing bridge, and suddenly your VMs will report no network.

There are two ways to fix this. I went with the route of telling docker to NOT mess with iptables on startup. Less secure, but my system is not directly connected to the internet. I created /etc/docker/daemon.json and added this to it:

{
    "iptables" : false
}

Then I restarted my machine. This did the trick!
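
The other route is to leave docker’s iptables management alone and explicitly allow bridge traffic instead. A minimal sketch of that approach, assuming your VM bridge is named br0 (docker’s DOCKER-USER chain exists for exactly this kind of user rule; adjust the bridge name to your setup):

# allow forwarded traffic on the VM bridge ahead of docker's DROP rules
iptables -I DOCKER-USER -i br0 -o br0 -j ACCEPT
iptables -I DOCKER-USER -o br0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT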

Proxmox Ceph storage configuration

These are my notes for migrating my VM storage from NFS mount to Ceph hosted on Proxmox. I ran into a lot of bumps, but after getting proper server-grade SSDs, things have been humming smoothly long enough that it’s time to publish.

A note on SSDs

I had a significant amount of trouble getting ceph to work with consumer-grade SSDs. This is because ceph does a cache writeback call for each transaction – much like NFS. On my ZFS array, I could disable this, but not so for ceph. The result is very slow performance. It wasn’t until I got some Intel DC S3700 drives that ceph became reliable and fast. More details here.

Initial install

I used the Proxmox GUI to install ceph on each node by going to <host> / Ceph. Then I used the GUI to create a monitor, manager, and OSD on each host. Lastly, I used the GUI to create a ceph storage target in Datacenter config.
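
For reference, the same setup can be done from the shell with the pveceph tool. I used the GUI, so treat this as an untested sketch (the pool name is a placeholder):

pveceph install                                 # on each node
pveceph mon create                              # on each node
pveceph mgr create                              # on each node
pveceph osd create /dev/sdX                     # on each node, once per disk
pveceph pool create <POOL_NAME> --add_storages  # once; --add_storages also creates the Datacenter storage entry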

Small cluster (3 nodes)

My Proxmox cluster is small (3 nodes.) I discovered I didn’t have enough space for 3 replicas (the default ceph configuration), so I had to drop my pool size/min down to 2/1 despite warnings not to do so, since a 3-node cluster is a special case:

https://forum.proxmox.com/threads/ceph-pool-size-is-2-1-really-a-bad-idea.68939/#post-440755

More discussion: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/UB44GH4Z2NJUV52ZTHKO4TGYEX3DZ4CB/

I have not had any problems with this configuration and it provides the space I need.
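
For reference, dropping the replica counts on an existing pool can be done from the CLI like so (the pool name is a placeholder):

ceph osd pool set <POOL_NAME> size 2
ceph osd pool set <POOL_NAME> min_size 1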

Ceph pool size

In my early testing, I discovered that if I removed a disk from the pool, the available size of the pool increased! After doing some reading in the Red Hat documentation, I learned the basics of why this happened.

Size = number of copies of the data in the pool

Min size = minimum number of copies required for I/O; if the available copies drop below this, pool operation is suspended

I didn’t have enough space for 3 copies of the data. When I removed a disk, the pool dropped down to the min size setting (2 copies) – which I did have enough room for. The pool rebalanced to reflect this and the result was more available space.

Configure Alerting

It turns out that alerting for problems with ceph OSDs and monitors does not come out of the box. You must configure it. Thanks to this thread and the ceph documentation for how to do so. I did this on each proxmox node.

apt install ceph-mgr-dashboard
ceph config set mgr mgr/alerts/smtp_host <MAIL_HOST>
ceph config set mgr mgr/alerts/smtp_ssl false
ceph config set mgr mgr/alerts/smtp_port 25
ceph config set mgr mgr/alerts/smtp_destination <DEST_EMAIL>
ceph config set mgr mgr/alerts/smtp_sender <SENDER_EMAIL>
ceph config set mgr mgr/alerts/smtp_from_name 'Proxmox Ceph Cluster'

Test this by telling ceph to send its alerts:

ceph alerts send

Move VM disks to Ceph storage

I ended up writing a simple for loop to move all my existing Proxmox VM disks onto my new ceph cluster. None of my VMs had more than 3 scsi devices. If your VMs have more than that you’ll have to tweak this rudimentary command:

for vm in $(qm list | awk '{print $1}'|grep -v VMID); do qm move-disk $vm scsi0 <CEPH_POOL_NAME>; qm move-disk $vm scsi1 <CEPH_POOL_NAME>; qm move-disk $vm scsi2 <CEPH_POOL_NAME>; done
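
A slightly more general (untested) sketch would be to discover each VM’s disks instead of hard-coding scsi0-2, assuming qm config lists every disk on a line beginning with scsiN:

for vm in $(qm list | awk 'NR>1 {print $1}'); do
  for disk in $(qm config $vm | grep -oE '^scsi[0-9]+'); do
    qm move-disk $vm $disk <CEPH_POOL_NAME>
  done
done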

Rename storage

I tried to edit /etc/pve/storage.cfg to change the name I gave my ceph cluster in Proxmox. That didn’t work (question mark next to the storage after renaming it) so I just removed and re-added instead.

Maintenance

Ceph constantly tries to keep itself in balance. If you take a node down and it stays down for too long, ceph will begin to rebalance the data among the remaining nodes. If you’re doing short-term maintenance, you can control this behavior to avoid unnecessary rebalance traffic.

Begin maintenance:

ceph osd set nobackfill
ceph osd set norebalance

Reboot / perform OSD maintenance.

After maintenance is completed:

ceph osd unset nobackfill
ceph osd unset norebalance

Performance benchmark

I did a lot of performance checking when I first started to try and track down why the pool was so slow. In the end it was my consumer-grade SSDs. I’ll keep this section here for future reference.

Redhat article on ceph performance benchmarking

Ceph wiki on benchmarking

rados bench -p SSD 10 write --no-cleanup  # 10-second write benchmark; keep objects for the read tests
rados bench -p SSD 10 seq                 # sequential read benchmark
rados bench -p SSD 10 seq
rados bench -p SSD 10 rand                # random read benchmark
rbd create image01 --size 1024 --pool SSD # create a 1 GiB test RBD image
rbd map image01 --pool SSD --name client.admin
mkfs.ext4 /dev/rbd/SSD/image01
mkdir /mnt/ceph-block-device
mount /dev/rbd/SSD/image01 /mnt/ceph-block-device/
rbd bench --io-type write image01 --pool=SSD
pveperf /mnt/ceph-block-device/           # Proxmox fsync/IOPS benchmark against the mounted image
rados -p SSD cleanup                      # remove objects left by the write benchmark

Undo:

 umount /mnt/ceph-block-device  
 rbd unmap image01 --pool SSD
 rbd rm image01 --pool SSD

MTU 9000 warning

I read that it was recommended to set the network MTU to 9000 (jumbo frames). When I did this I experienced weird behavior and connection timeouts – ceph ground to a halt, complaining about slow OSDs and mons. It was too much hassle for me to troubleshoot, so I went back to the standard 1500 MTU.

Datacenter settings

I discovered you can have a host automatically migrate its VMs off when you issue the reboot command, via the migrate shutdown policy. https://pve.proxmox.com/wiki/High_Availability

Proxmox GUI / Datacenter / Options / HA Settings

Specify SSD or HDD for pools

I have not done this yet but here’s a link I found that explains how to do it: https://stackoverflow.com/questions/58060333/ceph-how-to-place-a-pool-on-specific-osd
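
From my reading of that link, the gist is to create a CRUSH rule per device class and assign it to the pool. A hedged sketch I have not run myself:

ceph osd crush rule create-replicated replicated-ssd default host ssd
ceph osd crush rule create-replicated replicated-hdd default host hdd
ceph osd pool set <POOL_NAME> crush_rule replicated-ssd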

Helpful commands

Determine IPs of OSDs:

ceph osd dump

Remove monitor from failed node:

ceph mon remove <host>

The monitor also needs to be removed from /etc/ceph/ceph.conf.

Configure Backup

I had been using ZFS snapshots and ZFS send to backup my VM disks before the move to ceph. While ceph has snapshot capability, it is slow and takes up extra space in the pool. My solution was to spin up a Proxmox Backup Server and regularly back up to that instead.

Proxmox Backup Server can be installed onto an existing PVE server if you desire:

https://pbs.proxmox.com/docs/installation.html

Configure the apt repository as follows:

# PBS pbs-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pbs bullseye pbs-no-subscription

# security updates
deb http://security.debian.org/debian-security bullseye-security main contrib

# apt-get update
# apt-get install proxmox-backup

I had to add a regular user and give admin permissions on PBS side, then add the host on the proxmox side using those credentials.

Configure automated backup in PVE via Datacenter tab / Backup.

Remember to configure automated verify jobs (scrubs).

Make sure to add an e-mail address for proxmox backup user for alerts.

Edit which account & e-mail is used, and how often notified, at the Datastore level.

Sync jobs

I wanted to synchronize my Proxmox Backup repository to a non-PBS server (simply host the files.) I accomplished this by doing the following:

  • Add 127.0.0.1 as a Remote host (Configuration / Remotes.) Copy the PBS server fingerprint from Certificates / Fingerprint.
  • Create remote datastore in /etc/fstab manually (I used SSHFS to back up to a Synology over SSH; example fstab line below.)
  • Add the datastore in PBS, pointing to the manual fstab mount, then add a sync job there.
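
A sketch of the SSHFS fstab entry I mean, with a hypothetical host, path, and key (adjust to your environment):

backupuser@synology.example.com:/volume1/pbs-sync /mnt/pbs-sync fuse.sshfs _netdev,allow_other,reconnect,IdentityFile=/root/.ssh/id_rsa 0 0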

Import PBS datastore (in case of total crash)

I wanted to know how to import the data into a fresh instance of PBS. This is the procedure:

Edit /etc/proxmox-backup/datastore.cfg and add the config for the datastore manually. Copy from an existing datastore config for the syntax.
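
A rough sketch of what such an entry looks like (the name and path here are placeholders; copy the exact fields from a working instance):

datastore: mydatastore
        path /mnt/datastore/mydatastore
        comment restored after reinstall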

Space still being taken up after deleting backups

PBS uses access time (atime) to determine if something has been touched. It waits 24 hours after the last touch. Garbage collection manually updates atime, but it’s still recommended to keep atime enabled on the dataset PBS is using. Sources:

https://forum.proxmox.com/threads/zpool-atime-turned-off-effect-on-garbage-collection.76590/

https://pbs.proxmox.com/docs/backup-client.html#garbage-collection

Troubleshooting

Really slow VM IOPS during degrade / rebuild

This also ended up being due to having consumer-grade SSDs in my ceph pools. I’m keeping my notes for what I did to troubleshoot in case they’re useful.

https://forum.proxmox.com/threads/ceph-high-i-o-wait-on-osd-add-remove.20271/

For a small cluster, lower the backfill activity so recovery doesn’t cause a slowdown:

ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 3

Verify setting was applied: https://www.suse.com/support/kb/doc/?id=000019693

ceph-conf --show-config|egrep "osd_max_backfills|osd_recovery_max_active"
ceph config dump | grep osd

Ramp up backfill performance:

ceph tell osd.* injectargs --osd_max_backfills=2 --osd_recovery_max_active=8 # 2x Increase
ceph tell osd.* injectargs --osd_max_backfills=3 --osd_recovery_max_active=12 # 3x Increase
ceph tell osd.* injectargs --osd_max_backfills=4 --osd_recovery_max_active=16 # 4x Increase
ceph tell osd.* injectargs --osd_max_backfills=1 --osd_recovery_max_active=3 # Back to Defaults

The above didn’t help; it turns out consumer SSDs are simply very bad at this workload:

https://yourcmc.ru/wiki/Ceph_performance#General_benchmarking_principles

https://blog.cypressxt.net/hello-ceph-and-samsung-850-evo/

I bought some Intel DC S3700 drives on eBay for $75 apiece. It fixed all my latency/speed issues.

Dead mon despite being removed from CLI

I had a situation where a monitor showed up as dead in proxmox, but I was unable to delete it. I followed this procedure:

rm /etc/systemd/system/ceph-mon.target.wants/ceph-mon@<nodename>.service

Dead pve node procedure

Remove the node from /etc/ceph/ceph.conf, remove /var/lib/ceph/mon/ceph-<node>, and rm /etc/systemd/system/ceph-mon.target.wants/ceph-mon@pve2.service

https://forum.proxmox.com/threads/ceph-cant-remove-monitor-with-unknown-status.63613/

Adding the monitor back through the GUI brought me back to the same problem.

Bring node back manually

https://docs.ceph.com/en/latest/rados/operations/add-or-rm-mons/

 ceph auth get mon. -o /tmp/key
 ceph mon getmap -o /tmp/map
 ceph-mon -i <node_name> --mkfs --monmap /tmp/map --keyring /tmp/key
 ceph-mon -i <node_name> --public-addr <node_ip>:6789
 ceph mon enable-msgr2
 vi /etc/pve/ceph.conf

In the end the most surefire way to fix this problem was to re-image the affected host.

Clear HEALTH_WARNING in GUI

In my testing I had tried pulling disks at random, then putting them back in. This recovered well, but I had this message:

HEALTH_WARN 1 daemons have recently crashed

To clear it I had to drop to the CLI and run this command:

ceph crash archive-all

Thanks to the Proxmox Forums for the fix.

Pool cleanup

I noticed I would get rbd error: rbd: listing images failed: (2) No such file or directory (500) when trying to look at which disks were on my Ceph pool. I fixed this by removing the offending images as per this post.

I then ran another rbd ls -l <POOL_NAME> command to see what was left and noticed several items without anything in the LOCK column. I discovered these were artifacts from failed disk migrations I tried early on – wasted space. I removed them one by one with the following command:

rbd rm <VM_FILE_NAME> -p <POOL_NAME>

Be careful to verify they’re not disks in use by VMs that are powered off – non-running VMs also show nothing in the LOCK column.

Disk errors

I had a disk fail, but then I pulled out the wrong disk. I kept getting these errors:

Warning: Error fsyncing/closing /dev/mapper/ceph--fc741b6c--499d--482e--9ea4--583652b541cc-osd--block--843cf28a--9be1--4286--a29c--b9c6848d33ba: Input/output error

I was unable to remove it from the GUI. After a while I realized the problem – I was on the wrong node. I needed to be on the node that has the disks when creating an OSD in the Proxmox GUI.

Steps to determine which disk is assigned to an OSD, from ceph docs:

ceph-volume lvm list
====== osd.2 =======

 [block]       /dev/ceph-680265f2-0b3c-4426-b2a8-acf2774d82e0/osd-block-2096f339-0572-4e1d-bf20-52335af9b374

     block device              /dev/ceph-680265f2-0b3c-4426-b2a8-acf2774d82e0/osd-block-2096f339-0572-4e1d-bf20-52335af9b374
     block uuid                tcnwFr-G33o-ybue-n0mP-cDpe-sp9y-d0gvYS
     cephx lockbox secret       
     cluster fsid              65f26da0-fca0-4419-ba15-20269a5a363f
     cluster name              ceph
     crush device class        ssd
     encrypted                 0
     osd fsid                  2096f339-0572-4e1d-bf20-52335af9b374
     osd id                    2
     osdspec affinity           
     type                      block
     vdo                       0
     devices                   /dev/sde

Update 6/20/2024

One year later and Ceph has been running great. So great, in fact, that I migrated my bulk storage to it as well. Here are my notes on that endeavor.

Optimal number of PGs

I discovered that there is an optimal number of PGs you want in a ceph cluster. It depends on how many OSDs you have. Link: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/storage_strategies_guide/placement_groups_pgs#pg-count-for-small-clusters

The optimal number of PGs is the following, rounding up to the nearest power of two:

                (OSDs * 100)
   Total PGs =  ------------
                 pool size

In my case (only 3 OSDs – one per node) that made my optimal number of PGs 256.
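
Worked out for my three OSDs (assuming a pool size of 2):

   Total PGs = (3 * 100) / 2 = 150, rounded up to the next power of two = 256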

Slow write speeds for HDDs

Moving OSD DB to SSD – The slow way

I had pretty slow write speeds when adding my 3 HDDs to a new pool (50 MB/s max.) I read the best way to help with this is to offload the db and WAL to an SSD for each OSD. It’s possible to have multiple OSDs using a single SSD for such operations, but since I don’t have enterprise-grade SSDs, I opted to do a 1:1 HDD:SSD mapping. Unfortunately, I had already created the OSDs before I realized I needed to do this. So I had to destroy & re-create each OSD one by one to add the SSD.

https://www.reddit.com/r/ceph/comments/fgvcte/replace_osd_node_without_remapping_pgs

Set the norebalance, norecover, and nobackfill flags, destroy the OSD, and join the new OSD with the same ID as the old one.
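
For the re-create step itself, Proxmox can place the DB on the SSD at OSD creation time. A hedged sketch of that route, assuming /dev/sdX is the HDD and /dev/sdY the SSD:

ceph osd set norebalance && ceph osd set norecover && ceph osd set nobackfill
pveceph osd destroy <OSD_ID> --cleanup
pveceph osd create /dev/sdX --db_dev /dev/sdY
ceph osd unset nobackfill && ceph osd unset norecover && ceph osd unset norebalance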

This destroy-and-re-create approach worked, but it took two days to rebuild. I set out to find a faster option.

Moving OSD DB to SSD – The fast way

Migrate DB to SSD without destroying OSD

https://www.reddit.com/r/ceph/comments/1awwoch/yet_another_ceph_poor_performance_post_part_deux

https://github.com/45Drives/scripts/blob/main/add-db-to-osd.sh

Requires jq and bc

I kept getting the error message

WARNING: Device selected (/dev/sdd) has a LVM2_member signature, but no volume group
Wipe disk and run again

despite completely wiping the drive. I dove into the source of the script and found it creates a PV & VG for the drive, and that must be failing, so I did it manually:

pvcreate /dev/sdd

vgcreate ceph-$(uuidgen) /dev/sdd

./add-db-to-osd.sh -b 465G -d /dev/sdd -o 3

This worked beautifully.

Move OSD DB to new device

I discovered that when it comes to DB devices, the same advice about SSDs is still true: don’t waste your time with consumer SSDs. I ordered some more Intel DC S3700 drives and now needed to swap out the consumer drives. The 45Drives script doesn’t work here because the DB had already been migrated to a different SSD. This is the process to move from one dedicated DB device to another:

Thanks to this thread https://www.reddit.com/r/ceph/comments/1bk6e9s/moving_db_and_wal_from_ssd_to_hdd/

and this documentation: https://docs.ceph.com/en/latest/ceph-volume/lvm/list/

https://docs.ceph.com/en/quincy/ceph-volume/lvm/migrate

Plug new drive in alongside existing drive

Obtain OSD fsid with this command: ceph-volume lvm list

pvcreate /dev/<new_device>

vgcreate ceph-$(uuidgen) /dev/<new_device>

lvcreate -l100%FREE -n ceph-osd-db-<OSD FSID> ceph-<UUIDGEN_FROM_ABOVE>

systemctl stop ceph-osd@<OSD_ID>

ceph-volume lvm migrate --osd-id <OSD_ID> --osd-fsid <OSD_FSID> --from db wal --target ceph-<UUIDGEN_FROM_ABOVE>/ceph-osd-db-<OSD FSID>

--> Migrate to new, Source: ['--devs-source', '/var/lib/ceph/osd/ceph-4/block.db'] Target: /dev/ceph-60969103-7d88-4340-a13f-a77f98e1da46/osd-db-800G
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block.db
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-6
--> Migration successful.

systemctl start ceph-osd@<OSD_ID>

System with no additional HDD slots
I used a USB 3 SSD adapter temporarily: migrate the DB, remove the old device, add the new device, then reboot the node.

Sizing DB device

https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref

For RBD workloads, however, block.db usually needs no more than 1% to 2% of the block size.
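
As a rough worked example of that guidance (my own arithmetic, not from the docs): a 4 TB OSD used only for RBD needs a block.db of roughly 40–80 GB, i.e. 1–2% of 4 TB.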

Move from dedicated DB device back to OSD

https://www.reddit.com/r/ceph/comments/1bwma91/script_to_move_separate_db_lv_back_to_block_device

Convert TIF to JPG with ImageMagick

My new project is digitizing film negatives. Following advice found on the DataHoarder subreddit, I’m scanning the negatives at the highest possible quality to uncompressed TIF files. These TIF files are too big for regular consumption, hence the need to convert to JPG.

ImageMagick is amazing, and does the job nicely. Make sure you have the imagemagick package installed, and it’s as simple as using the convert command.

This is my simple script for converting all TIF files to JPG, and outputting them to the same directory:

for file in *.tif; do echo converting "$file" to "${file%.*}.jpg"; convert "$file" "${file%.*}.jpg"; done

It uses bash substitution to remove the TIF extension in the resulting JPG file. It works beautifully!

Update 4/14/2023:

I have re-worked this a bit to handle multiple directories. It involves setting the Internal Field Separator (IFS) to '\n' instead of the default whitespace and using the find command. The multi-directory command is below:

IFS=$'\n'; for file in $(find . -name '*.tif'); do echo converting "$file" to "${file%.*}.jpg"; convert "$file" "${file%.*}.jpg"; done; unset IFS
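
If you’d rather not touch IFS, a null-delimited find pipeline handles any filename safely. An equivalent sketch of the same conversion:

find . -name '*.tif' -print0 | while IFS= read -r -d '' file; do
  echo converting "$file" to "${file%.*}.jpg"
  convert "$file" "${file%.*}.jpg"
done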

Restart wireguard interface in OpenWRT

One annoying issue with wireguard in OpenWRT is the fact that it won’t re-check DNS on connection failure. In the event that my public IP changes (dynamic IP) the OpenWRT wireguard client doesn’t ever get the memo, even when DNS is updated.

I discovered here that you can tell OpenWRT via the command line to stop and start the wireguard interface. This forces a new DNS check and then the tunnel builds successfully. The command:

ubus call network.interface.wg0 down &&  ubus call network.interface.wg0 up

Success! Throw this into a cron job and you have an automated failsafe to ensure a reconnect after IP change.

Update 2024-01-16

Here is an example of a cron job to accomplish this:

https://forum.openwrt.org/t/restart-wireguard-via-cli/51935/9

#!/bin/sh
#modified from https://openwrt.org/docs/guide-user/base-system/cron
#modified to use logger for global logging instead of scriptlogfile & added infinite reboot protection for reboot
# Prepare vars
DATE=$(date +%Y-%m-%d" "%H:%M:%S)
#logFile="/persistlogs/syslog"

# Ping and reboot if needed

#YOUR WIREGUARD PEER
CHECKHOSTNAME="192.168.X.X"

notification_email="YOUR@EMAIL.ADRESS"
VPNINTERFACE="wgvpn0"


ping -c3 $CHECKHOSTNAME

if [ $? -eq 0 ]; then
    echo "ok"
    logger $(echo "${DATE} - $0: OK - $VPNINTERFACE UP AND RUNNING")

else
    echo "RESTART wgvpn0 Interface"
    logger $(echo "${DATE} - $0: FAIL - restarting $VPNINTERFACE")
    # bring the wireguard interface down and back up (same ubus calls as above)
    ubus call network.interface.$VPNINTERFACE down
    ubus call network.interface.$VPNINTERFACE up
fi

Wireguard one-way traffic on USG Pro 4 after dual WAN setup

I have a site-to-site VPN between my Ubiquiti USG Pro 4 and an OpenWRT device over wireguard. It worked great until I got a secondary WAN connection as a failover, since my primary cable connection has been flaky lately.

When you introduce dual-WAN on Ubiquiti devices you have to manually configure everything, since the GUI assumes only one WAN connection. I configured my manual DNAT (port forwards) for each interface successfully but struggled to figure out why suddenly my Wireguard VPN between my two sites only went one way (the remote side could ping all hosts on the local side, but not vice versa.)

After some troubleshooting I realized the firewall itself could ping the remote subnet just fine; it just wasn’t allowing local hosts to do so. I couldn’t find anything in the firewall logs. Eventually I came across this very helpful page from hackad.nu that helped me solve my problem.

The solution was to add a Firewall Modify rule specifically for the eth0 interface (which all my LAN traffic is routed through) to allow the source address of the subnets I want to traverse the VPN, then apply that modifier to the LAN_IN firewall rule for that interface. I had to do it for any VLANs I wanted to be able to use the Wireguard tunnel as well (vifs of eth0 – VLAN 50 in my case).

Here are the relevant config.gateway.json sections, namely “firewall” and “interfaces”:

{
    "firewall": {
        "modify": {
            "Wireguard": {
                "rule": {
                    "10": {
                        "action": "modify",
                        "description": "Allow Wireguard traffic",
                        "modify": {
                            "table": "10"
                        },
                        "source": {
                            "address": "10.1.0.0/16"
                        }
                    }
                }
            }
        }
    },
    "interfaces": {
        "ethernet": {
            "eth0": {
                "firewall": {
                    "in": {
                        "ipv6-name": "LANv6_IN",
                        "modify": "Wireguard",
                        "name": "LAN_IN"
                    }
                },
                "vif": {
                    "50": {
                        "firewall": {
                            "in": {
                                "ipv6-name": "LANv6_IN",
                                "modify": "Wireguard",
                                "name": "LAN_IN"
                            }
                        }
                    }
                }
            }
        }
    }
}

This did the trick! Wireguard is working both directions again, this time with my dual WAN connections.

Prioritize wifi with Network Manager in Arch

My cable internet has been horrid lately. I wanted to be able to hotspot to my phone while maintaining LAN connections to my servers while the cable company takes its sweet time to fix things. Even though I connected my desktop to my phone’s wifi hotspot, it still prioritized the broken connection and wouldn’t use my phone to get to the internet. I verified this by looking at the routing table and running traceroute:

sudo ip route
...
default via 10.137.1.1 dev br0 proto dhcp src 10.10.1.124 metric 425 
default via 172.10.10.1 dev wlp69s0 proto dhcp src 172.10.10.4 metric 600 
...

traceroute google.com --max-hops=1
 1  _gateway (10.10.50.1)  0.409 ms  0.449 ms  0.483 ms

The LAN connection’s default gateway had a lower metric than the mobile hotspot connection (lower takes precedence.) To fix this I ran this NetworkManager command (thanks to this post for the inspiration):

sudo nmcli connection modify "Nicholas’s iPhone" ipv4.route-metric 50

I noticed DNS traffic was also prioritizing my LAN, which I didn’t want. I fixed it with nmcli as well (thanks to this post):

sudo nmcli connection modify "Nicholas’s iPhone" ipv4.dns-priority 1

I then noticed I couldn’t get to certain LAN subnets. I realized I needed to add some static routes so that traffic doesn’t try to go over my hotspot connection (which I learned about here):

sudo nmcli connection modify bridge-br0 +ipv4.routes "10.10.50.0/24 10.10.1.1"

Note you may need to refresh your connection once you’ve made changes. You can either disconnect and reconnect to force a refresh, or run this command (as outlined here.)

sudo nmcli con up bridge-br0 #or whatever your LAN interface name is

Once I refreshed my settings, I was able to get internet via my phone while maintaining all my local network settings.

Convert music to iPhone ringtones

I’ve recently crossed into the dark side and gotten my first iPhone. I wanted to set up ringtones for my contacts but discovered that Apple is pretty picky about ringtone file format. After some searching I found the ffmpeg command to run to get the ringtones into a file iPhones are happy with.

The criteria are:

  • aac codec
  • m4r filename
  • 30 seconds or less in duration

I ran into a snag with various things I was trying to convert apparently having more than one stream. I would get the error message

Could not find tag for codec h264 in stream #0, codec not currently supported in container

I had to use the -map option to specify exactly which stream I wanted (just the audio one.) I discovered which stream I wanted by running ffmpeg -i on the file to see its available streams. I also discovered that some songs reported an incorrect duration. This was fixed with the -write_xing 0 option. Thanks to this gist for the inspiration.

Here is the full command to turn music into an Apple-compatible ringtone. Modify -ss and -to to suit your needs (start time and end time):

ffmpeg -i <input file> -codec:a aac -ss 00:00:59.5 -to 00:01:21.5 -f ipod -map 0:0 -write_xing 0 ringtone.m4r

If taking a ringtone from a video file, I had to specify I wanted stream 1 instead of stream 0:
ffmpeg -i Batman-\ The\ Animated\ Series\ -\ S02E01\ -\ Shadow\ of\ the\ Bat\ \(1\)\ SDTV.avi \
-codec:a aac -ss 00:00:20.5 -to 00:00:43.5 -f ipod -write_xing 0 -map 0:1 batman_ringtone.m4r

Once you have the right m4r file, you simply need to plug your iPhone into your computer and fire up iTunes. You can then drag the file into the "Tones" section on the left under "Devices".