Tag Archives: mount

Mount LVM partitions in FreeBSD

I’ve been playing around with helloSystem, an up-and-coming FreeBSD desktop environment that mirrors the macOS experience quite well. Since it’s based on FreeBSD, I’ve had to brush up on a few FreeBSD-isms that are distinctly different from Linux.

Since I’m dual booting this helloSystem install alongside my Arch Linux install, I want to be able to access files on my Arch system from the BSD side. My Arch system uses LVM, which posed a challenge, as LVM is a distinctly Linux technology.

To get it to work I needed to load a couple of kernel modules (thanks to the FreeBSD forums for the help):

  • fuse
  • geom_linux_lvm

You can do this at runtime by using the kldload command:

kldload fuse
kldload /boot/kernel/geom_linux_lvm.ko

To make the kernel module loading survive a reboot, add them to /boot/loader.conf:

geom_linux_lvm_load="YES"
fuse_load="YES"

You can now scan your BSD system for LVM partitions:

geom linux_lvm list

The LVM partitions are listed under /dev/linux_lvm. The last step is to mount them with FUSE:

fuse-ext2 -o rw+ /dev/linux_lvm/NAME_OF_LVM_PARTITION /mnt/DESIRED_MOUNT_FOLDER

rw+ indicates a read/write mount.
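
Putting it all together, here is a minimal sketch, assuming geom lists a volume named vg0-home (a hypothetical name) and you want it mounted at /mnt/arch:

mkdir -p /mnt/arch
fuse-ext2 -o rw+ /dev/linux_lvm/vg0-home /mnt/arch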

mountpoint check script

I have a few NFS mounts that I want working at all times. If there is a power outage, sometimes the NFS clients come up before the NFS server does, and thus the mounts are missing. I wrote a quick little bash script to fix this utilizing the mountpoint command.

Behold (change the mountpoint(s) to the ones you want to monitor):

#!/bin/bash
#Simple bash script to check mount points and re-mount them if they're not mounted
#Authored by Nick Jeppson 8/11/2019

### Variables ###
# Change these to suit your needs

MOUNTPOINTS=(/mnt/1 /mnt/2 /mnt/3)  #space separated list of mountpoints to monitor

### End Variables ###


for mnt in "${MOUNTPOINTS[@]}"
do
    if ! mountpoint -q "$mnt"
    then
        echo "$mnt is not mounted, attempting to mount."
        mount "$mnt"
    fi
    #otherwise do nothing
done

I have this set as a cron job running every 5 minutes:

*/5 * * * * /mountcheck.sh

Now the system will continually try to mount the specified folders if they aren’t already mounted.

Recover files from ZFS snapshot of Proxmox VM

I recently needed to restore a file that had been deleted from one of my Proxmox VMs. I didn’t want to roll back the entire VM to a previous snapshot – I just wanted a single file from the snapshot. My snapshots are handled via ZFS on FreeNAS.

Since my VM runs CentOS 7 it uses XFS, which made things a bit more difficult. I couldn’t find a way to mount the crash-consistent XFS snapshot read-only – it simply refused to mount (XFS wants to replay its journal, which requires write access) – so I had to make everything read/write. Below is the process I used to recover my file:

On the FreeNAS server, find the snapshot you wish to clone:

sudo zfs list -t snapshot -o name -s creation -r DATASET_NAME

Next, clone the snapshot

sudo zfs clone SNAPSHOT_NAME CLONED_SNAPSHOT_NAME

Next, on a Linux box, use SSHFS to mount the snapshot:

mkdir Snapshot
sshfs -o allow_other user@freenas:/mnt/CLONED_SNAPSHOT_NAME Snapshot/

Now create a read/write loopback device:

sudo -i #easy lazy way to get past permissions issues
cd /path/to/Snapshot/folder/created/above
losetup -P -f VM_DISK_FILENAME.raw
losetup 
#Take note of output, it's likely set to /dev/loop0 unless you have other loopbacks

Note: if your VM disk files are not in RAW format, you’ll need to take extra steps to convert them to RAW first.
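
For example, if the disk is in qcow2 format, qemu-img can convert it (filenames here are hypothetical):

qemu-img convert -O raw VM_DISK_FILENAME.qcow2 VM_DISK_FILENAME.raw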

Now we have an SSH-mounted loopback device ready for mounting. Things are more complicated if your VM uses LVM, as mine (CentOS 7) does. Once the loopback device is set up, lvscan should see the image’s logical volumes. Make the desired volume active:

sudo lvscan
sudo lvchange -ay /dev/VG_NAME/LV_NAME

Now you can mount your volume:

mkdir Restore
mount /dev/VG_NAME/LV_NAME Restore/

Note: for XFS you must have read/write capability on the loopback device for this to work.

When you’re done, do your steps in reverse to unmount the snapshot:

#Unmount snapshot
umount Restore
#Deactivate LVM
lvchange -an /dev/VG_NAME/LV_NAME
#Remove loopback device
losetup -d /dev/loop0 #or whatever the loopback device was
#Unmount SSHfs mount to ZFS server
umount Snapshot

Finally, on the ZFS server, delete the snapshot:

sudo zfs destroy CLONED_SNAPSHOT_NAME

Troubleshooting

When I tried to mount the LVM partition at this point I got this error message:

mount: /dev/mapper/centos_plexlocal-root: can't read superblock

It ended up being because I had accidentally created a read-only loopback device. I destroyed the loopback device and re-created it with write support, and all was well.
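
In other words, if you see that error, delete the loopback device and re-create it without the read-only flag (the device name will vary):

losetup -d /dev/loop0                 #remove the read-only loopback device
losetup -P -f VM_DISK_FILENAME.raw    #re-create it read/write (no -r flag)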

Mount encfs folder on startup with systemd

A quick note on how to encrypt a folder with encfs and then mount it on boot via a systemd startup script. In my case the folder is located on a network drive and I wanted the mount to happen whether I was logged in or not.

Create encfs folder:

encfs <path to encrypted folder> <path to mount decrypted folder>

Follow the prompts to create the folder and set a password.

Next, create a file which will contain your decryption password:

echo "YOUR_PASSWORD" > /home/user/super_secret_password_location
chmod 700 /home/user/super_secret_password_location

Create a simple script to be called by systemd on startup, using cat to pass your password to encfs:

#!/bin/bash
cat /home/user/super_secret_password_location | encfs -S path_to_encrypted_folder path_to_mount_decrypted_folder

Finally create a systemd unit to run your script on startup:

vim /etc/systemd/system/mount-encrypted.service
[Unit] 
Description=Mount encrypted folder 
After=network.target 

[Service] 
User=<YOUR USER> 
Type=oneshot 
ExecStartPre=/bin/sleep 20 
ExecStart=PATH_TO_SCRIPT
TimeoutStopSec=30 
KillMode=process 

[Install] 
WantedBy=multi-user.target

Then enable the unit:

sudo systemctl daemon-reload
sudo systemctl enable mount-encrypted.service
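
To test the unit without rebooting, start it manually and check that the decrypted folder shows up:

sudo systemctl start mount-encrypted.service
sudo systemctl status mount-encrypted.service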

Create & Mount disc images in Linux

When working with hard drives it is always a good idea to back the entire thing up before proceeding. I wanted to write down the procedure so I don’t keep forgetting it.

Create disc image

dd does the trick here.

sudo dd if=/dev/<drive device file> of=image.img bs=64M

If you wish to see the progress of the above dd command, you can open up a separate window and send dd the USR1 signal:

kill -USR1 `pidof dd`
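
Alternatively, newer versions of dd (coreutils 8.24 and later) can report progress on their own with the status=progress flag:

sudo dd if=/dev/<drive device file> of=image.img bs=64M status=progress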

Mount disc image read only

You can now disconnect the drive and work with its image instead (great for forensics or dealing with a dying drive.)

In later versions of Linux you can do this with losetup and partprobe.

sudo losetup -Pr -f <path to image file>
sudo losetup #find which loop device file corresponds with your image here
sudo mount -o ro /dev/<loopdevice>p<partition number> <mountpoint>

Note: remove the r from the above losetup command if you want to mount read/write (required for LVM partitions).

For example, this is what I did on my system for my aunt’s laptop (I was interested in the 2nd partition on her drive, the one containing Windows files):

sudo losetup -Pr -f susan-ssd.img
sudo losetup

NAME SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE DIO
/dev/loop0 0 0 0 1 /home/partimag/susan-ssd.img 0

sudo mount -o ro /dev/loop0p2 mount/

When you’re done, unmount the image and delete the image mapping:

sudo umount <path to mount directory>
sudo losetup -d <loop file obtained earlier>

Simple network folder mount script for Linux

I wrote a simple little network mount script for Linux desktops. I wanted to replicate my Windows box as best I could, where a bunch of network drives are mapped upon user login. This script relies on having gvfs-mount and the cifs utilities installed (both installed by default in Ubuntu).

#!/bin/bash
#Simple script to mount network drives

#Specify network paths here, one per line
#use forward slash instead of backslash
FOLDER=(
  server1/folder1
  server1/folder2
  server2/folder2/folder3
  server3/
)

#Create a symlink to gvfs mounts in home directory (skip if it already exists)
[ -e ~/Drive_Mounts ] || ln -s "$XDG_RUNTIME_DIR/gvfs" ~/Drive_Mounts

for mountpoint in "${FOLDER[@]}"
do
  gvfs-mount "smb://$mountpoint"
done

Mark this script as executable and place it in /usr/local/bin. Then make it a default startup application for all users:

vim /etc/xdg/autostart/drive-mount.desktop
[Desktop Entry]
Name=Mount Network Drives
Type=Application
Exec=/usr/local/bin/drive-mount.sh
Terminal=false

Voila, now you’ve got your Samba mount script starting up for every user.
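
Note: on newer distributions gvfs-mount has been deprecated in favor of gio. If the script stops working after an upgrade, swapping the mount line inside the loop for gio’s equivalent should do the trick:

gio mount "smb://$mountpoint"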

Mountpoint check script

I wrote a simple script to check whether a specific mountpoint on a Linux system is still live. It does this by trying to read a specific file on the share; if it cannot, it writes the event to a log, unmounts, and then re-mounts the folder. The need arose for instances where a file server has been rebooted and the Linux system loses the connection to the share. This way it will automatically re-mount.

Modify the variables section as needed and then have a cron job run the script as root at whatever interval you want. Enjoy.

#!/bin/bash
#Script to monitor mount directories to ensure they are properly mounted
#Place a file with the word "mounted" in it inside all mounted directories
#The script will try to read the file and attempt to unmount and remount the folder if it fails to read the file
#Updated 8/30/2016 by Nicholas Jeppson

#---------Variable section------------#

#Place mount folder locations here, separated by space 
#Paths containing spaces need to have quotes around them
LOCATIONS=(/home/njeppson /home/njeppson/Desktop)

#Name of file to try to read
TEST_FILENAME="mountcheck"

#---------End Variable Section--------#
#-----Do not edit below this line-----#

#Read the file; if its contents aren't "mounted", attempt to unmount and re-mount the folder, logging the attempt to /var/log/mountcheck

for FOLDER in "${LOCATIONS[@]}"; do
    if [[ $(cat "$FOLDER/$TEST_FILENAME" 2>/dev/null) != "mounted" ]]; then
        echo "$(date "+%b %d %T") $(hostname) $FOLDER Not mounted, remounting." >> /var/log/mountcheck
        umount "$FOLDER"
        mount "$FOLDER"
    fi
done
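
To set this up, drop the test file into each monitored folder (on the file server side) and add a cron entry for the script; the script path below is a hypothetical example:

#On the file server, inside each exported folder:
echo "mounted" > /home/njeppson/mountcheck

#On the client, in root's crontab, run every 5 minutes:
*/5 * * * * /usr/local/bin/mountcheck.sh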

Configure iSCSI initiator in CentOS

Below are my notes for configuring a CentOS box to connect to an iSCSI target. This assumes you have already configured an iSCSI target on another machine / NAS. Much of this information comes thanks to a very helpful website.

Install the software package

yum -y install iscsi-initiator-utils

Configure the IQN name for the initiator

vi /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2012-10.net.cpd:san.initiator01
InitiatorAlias=initiator01

Edit the iSCSI initiator configuration

vi /etc/iscsi/iscsid.conf
node.startup = automatic
node.session.auth.authmethod = CHAP
node.session.auth.username = initiator_user
node.session.auth.password = initiator_pass
#The next two lines are for mutual CHAP authentication
node.session.auth.username_in = target_user
node.session.auth.password_in = target_password

Start iSCSI initiator daemon

/etc/init.d/iscsid start
chkconfig --level 235 iscsid on

Discover targets in the iSCSI server:

iscsiadm --mode discovery -t sendtargets --portal 172.16.201.200  #the portal's IP address
172.16.201.200:3260,1 iqn.2012-10.net.cpd:san.target01

Try to log in to the iSCSI target:

iscsiadm --mode node --targetname iqn.2012-10.net.cpd:san.target01 --portal 172.16.201.200 --login
Logging in to [iface: default, target: iqn.2012-10.net.cpd:san.target01, portal: 172.16.201.200,3260] (multiple)
Login to [iface: default, target: iqn.2012-10.net.cpd:san.target01, portal: 172.16.201.200,3260] successful.

Verify configuration

This command shows what is put into the iSCSI targets database (the files located in /var/lib/iscsi/):

cat /var/lib/iscsi/send_targets/172.16.201.200,3260/st_config
discovery.startup = manual
discovery.type = sendtargets
discovery.sendtargets.address = 172.16.201.200
discovery.sendtargets.port = 3260
discovery.sendtargets.auth.authmethod = None
discovery.sendtargets.timeo.login_timeout = 15
discovery.sendtargets.use_discoveryd = No
discovery.sendtargets.discoveryd_poll_inval = 30
discovery.sendtargets.reopen_max = 5
discovery.sendtargets.timeo.auth_timeout = 45
discovery.sendtargets.timeo.active_timeout = 30
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768

Verify session is established

iscsiadm --mode session --op show
tcp: [2] 172.16.201.200:3260,1 iqn.2012-10.net.cpd:san.target01

Create LVM volume and mount

Add our iSCSI disk to a new LVM physical volume, volume group, and logical volume

fdisk -l
Disk /dev/sdb: 17.2 GB, 17171480576 bytes
64 heads, 32 sectors/track, 16376 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdb doesn't contain a valid partition table
pvcreate /dev/sdb
vgcreate iSCSI /dev/sdb
lvcreate -n volume_name -l 100%FREE iSCSI
mkfs.ext4 /dev/iSCSI/volume_name

Add the logical volume to fstab

Make sure to use the mount option _netdev. Without this option, Linux will try to mount this device before it loads network support.

1
vi /etc/fstab
/dev/mapper/iSCSI-volume_name    /mnt   ext4   _netdev  0 0
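
You can verify the entry without rebooting:

mount -a    #mounts everything in fstab that isn't already mounted
df -h /mnt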

Success.