
Using ProxMox as a NAS

Lately I've been very unhappy with the latest FreeBSD causing random reboots during disk resilvering. I simply cannot tolerate random reboots of my fileserver. That, combined with the migration of OpenZFS to the ZFS on Linux code base, means it's time for me to move from a FreeBSD-based ZFS NAS to a Linux-based one.

Sadly there aren't many options in this space yet. I wanted something where basic tasks are taken care of, like FreeNAS does, but that also supports ZFS. The solution I settled on was ProxMox, which is primarily a hypervisor but also has ZFS support.

The biggest drawback of ProxMox vs FreeNAS is the GUI. There are some disk-related GUI options in ProxMox, but mostly it’s VM focused. Thus, I had to configure my required services via CLI.

Following are the settings I used when I configured my NAS to run ProxMox.

Repo setup

If you don't want to pay for a ProxMox subscription, change the PVE enterprise repository to the free version by modifying /etc/apt/sources.list.d/pve-enterprise.list to the following:

deb http://download.proxmox.com/debian/pve buster pve-no-subscription

Then run apt update && apt upgrade.

Email alerts

Postfix configuration

Edit /etc/postfix/main.cf and tweak your mail server config as needed (relayhost). Restart postfix after editing:

systemctl restart postfix
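
For reference, a minimal relay setup in /etc/postfix/main.cf might look like the following (smtp.example.com is a placeholder for your own mail relay):

# /etc/postfix/main.cf (excerpt)
relayhost = [smtp.example.com]:587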

Forward mail for root to your own email

Edit /etc/aliases and add an alias for root to forward to your desired e-mail address. Add this line:

root: YOUR_EMAIL_ADDRESS

Afterward run:

newaliases
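
To confirm the alias works, send root a test message (this assumes a mail command is available, e.g. from bsd-mailx or mailutils):

echo "Test from $(hostname)" | mail -s "Root mail forwarding test" root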

ZFS configuration

Pool Import

Import the pool using the zpool import -f command (-f forces the import even though the pool was last active on a different system):

zpool import -f <pool_name>

By default pools are mounted at the root directory (/). If you want them to go to /mnt instead, use the zfs set mountpoint command:

zfs set mountpoint=/mnt/<pool_name> <pool_name>
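
Afterward you can verify the pool imported cleanly and the datasets are mounted where you expect:

zpool status
zfs list -o name,mountpoint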

Monitoring

Install and configure zfs-zed

apt install zfs-zed

Modify /etc/zfs/zed.d/zed.rc and uncomment ZED_EMAIL_ADDR, ZED_EMAIL_PROG, and ZED_EMAIL_OPTS. Edit them to suit your needs (the default values work fine; they just need to be uncommented). Optionally, uncomment ZED_NOTIFY_VERBOSE and set it to 1 if you want more verbose notices like FreeNAS sends (scrub notifications, for example).
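
Once uncommented, the relevant lines look roughly like this (defaults shown; exact values may vary between ZFS versions):

ZED_EMAIL_ADDR="root"
ZED_EMAIL_PROG="mail"
ZED_EMAIL_OPTS="-s '@SUBJECT@' @ADDRESS@"
ZED_NOTIFY_VERBOSE=1   # optional, for scrub notifications etc.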

After modifying /etc/zfs/zed.d/zed.rc, restart zed:

systemctl restart zfs-zed

Scrubbing

By default ProxMox scrubs each of your pools on the second Sunday of every month. This cron job lives in /etc/cron.d/zfsutils-linux; modify it to your liking.
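
The stock entry looks roughly like the following (the helper script path may differ between versions); the 8-14 day-of-month range combined with the Sunday check is what limits it to the second Sunday:

# /etc/cron.d/zfsutils-linux (excerpt)
24 0 8-14 * * root [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ] && /usr/lib/zfs-linux/scrub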

Snapshot & Replication

There are many different snapshot & replication scripts out there. I landed on Sanoid. Thanks to SvennD for helping me grasp how to get it working.

Install sanoid :

#Install necessary packages
apt install debhelper libcapture-tiny-perl libconfig-inifiles-perl pv lzop mbuffer git
# Clone repo, build deb, install
git clone https://github.com/jimsalterjrs/sanoid.git
cd sanoid
ln -s packages/debian . 
dpkg-buildpackage -uc -us 
apt install ../sanoid_*_all.deb 

Snapshots

Edit /etc/sanoid/sanoid.conf with a backup and retention schedule for each of your datasets. Example taken from sanoid documentation:

[data/home]
	use_template = production
[data/images]
	use_template = production
	recursive = yes
	process_children_only = yes
[data/images/win7]
	hourly = 4

#############################
# templates below this line #
#############################

[template_production]
        frequently = 0
        hourly = 36
        daily = 30
        monthly = 3
        yearly = 0
        autosnap = yes
        autoprune = yes

Once sanoid.conf is to your liking, create a cron job to launch sanoid every hour (sanoid determines whether any action is needed when executed.)

crontab -e
#Add this line, save and exit
0 * * * * /usr/sbin/sanoid --cron

Replication

syncoid (part of sanoid) easily replicates snapshots. The syntax is pretty straightforward:

syncoid <source> <destination> -r 
#-r means recursive and is optional

For remote locations specify a username@ before the ip/hostname, then a colon and the dataset name, for example:

syncoid root@10.0.0.1:sourceDataset localDataset -r

You can even have a remote source go to a different remote destination, which is pretty neat.

Other syncoid options of interest:

--debug  #for seeing everything happening, useful for logging
--exclude #Regular expression to exclude certain datasets
--source-bwlimit #Set an upload limit so you don't saturate your bandwidth
--quiet #don't output anything unless it's an error

Automate synchronization by placing the same syncoid command into a cronjob:

0 */4 * * * /usr/sbin/syncoid --exclude=bigdataset1 --source-bwlimit=1M --recursive pool/data root@192.168.1.100:pool/data
#if you don't want status emails when the cron job runs, add --quiet

NFS

Install the nfs-kernel-server package and specify your NFS exports in /etc/exports.

apt install nfs-kernel-server portmap

Example /etc/exports :

/mnt/example/DIR1 192.168.0.0/16(rw,sync,all_squash,anonuid=0,anongid=0)

Restart nfs-server after modifying your exports:

systemctl restart nfs-server
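
If you only changed /etc/exports, you can alternatively re-read the export table without a full restart:

exportfs -ra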

Samba

Install samba, configure /etc/samba/smb.conf, and add users.

apt install samba
systemctl enable smbd

/etc/samba/smb.conf syntax is fairly straightforward. See the samba documentation for more information. Example share configuration:

[exampleshare]
comment = Example share
path = /mnt/example
valid users = user1 user2
writable = yes

Add users to the system itself with the adduser command:

adduser user1

Add those same users to samba with the smbpasswd -a command. Example:

smbpasswd -a user1
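
Before restarting, you can validate smb.conf syntax with samba's testparm utility to catch typos:

testparm -s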

Restart samba after making changes:

systemctl restart smbd

SMART monitoring

Taken from https://pve.proxmox.com/wiki/Disk_Health_Monitoring:

By default, smartmontools daemon smartd is active and enabled, and scans the disks under /dev/sdX and /dev/hdX every 30 minutes for errors and warnings, and sends an e-mail to root if it detects a problem. 

Edit the file /etc/smartd.conf to suit your needs. You can specify or exclude devices, SMART attributes, etc. there. See here for more information. Restart the smartd service after modifying.
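
For example, a minimal smartd.conf directive that monitors all detected devices and e-mails root on problems would be the following (illustrative only; the stock Debian line includes a few more options):

DEVICESCAN -m root

Restart smartd afterward (on Debian-based systems the unit is typically named smartd, aliased to smartmontools):

systemctl restart smartd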

UPS monitoring

apcupsd was easiest for me to configure, so I went with it. Thanks to this blog for giving me the information to get started.

First, install apcupsd:

apt install apcupsd apcupsd-doc

As soon as it was installed, my console kept getting spammed with IRQ errors. To stop them I stopped the apcupsd daemon:

systemctl stop apcupsd

Now modify /etc/apcupsd/apcupsd.conf to suit your needs. The section I added for my CyberPower OR2200LCDRT2U was simply:

UPSTYPE usb
DEVICE

Then modify /etc/default/apcupsd to specify it’s configured:

#/etc/default/apcupsd
ISCONFIGURED=yes

After configuring, you can restart the apcupsd service:

systemctl start apcupsd

To check the status of your UPS, you can run the apcaccess status command:

/sbin/apcaccess status

Log monitoring

Install Logwatch to monitor system events. Here is a good primer on all of Logwatch’s options.

apt install logwatch

Modify /usr/share/logwatch/default.conf/logwatch.conf to suit your needs. By default it runs daily (defined in /etc/cron.daily/00logwatch). I added the following lines for my config to filter out unwanted information:

Service = "-zz-disk_space"
Service = "-postfix"
Service = "-smartd"
Service = "-zz-lm_sensors"
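
A few other logwatch.conf settings worth checking are the output, recipient, range, and detail options, for example:

Output = mail
MailTo = root
Range = yesterday
Detail = Low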

Manually run logwatch to get a preview of what you’ll see:

logwatch --range today --mailto YOUR_EMAIL_ADDRESS

UPDATE 8/29/2020
I discovered some additional logwatch tweaks to get it exactly how I like it (thanks to this post and this one at serverfault).

Defaults for monitored services are located in /usr/share/logwatch/default.conf/services/

You can copy the relevant default file to /etc/logwatch/conf/services/<filename.conf> and then modify the service as needed. In my case I wanted to ignore logins for a particular user from a particular machine. This can be done by copying and editing sshd.conf and adding the following:

# Ignore these hosts
*Remove = 192.168.100.1
*Remove = X.Y.123.123
# Ignore these usernames
*Remove = testuser
# Ignore other noise. Note that we need to escape the ()
*Remove = "pam_succeed_if\(sshd:auth\): error retrieving information about user netscan.*"
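
For completeness, the copy step described above is simply:

cp /usr/share/logwatch/default.conf/services/sshd.conf /etc/logwatch/conf/services/sshd.conf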

Troubleshooting

ZFS-ZED not sending email

If ZED isn’t sending emails it’s likely due to an error in the config. For some reason default values still need to be uncommented for zed to work, even if left unaltered. Thanks to this post for the info.

Samba share access denied

If you get access denied when trying to write to an SMB share, double-check the file permissions at the server level. Run chmod/chown as appropriate. Example:

chown user1 -R /mnt/example/user1

Using expect with the Ansible shell module

In one of my Ansible playbooks I need to obtain a file from a Windows share. I couldn't find a module that handles this, so I'm using the shell module to call the smbclient command to do what I need. The problem with this approach is that smbclient prompts for a password (and I don't want to supply it on the command line itself, for security reasons).

I tried using Ansible's built-in expect module, but frustratingly it only works on systems that have pexpect >= 3.3, which CentOS 7 and Ubuntu 14.04 do not.

My solution is to install the expect command on the host and then use the Ansible shell module to call it, following the example given on Ansible's shell module page.

Part of the process in my playbook is registering stdout from that command for later use. I then ran into a problem where I would run smbclient -c "ls <filename>" but Ansible would register nothing. After some digging I found I also needed to include the interact command after sending the password. Without it, anything after sending the password is not registered to stdout. Thanks to rostyslav-fridman on Stack Overflow for the answer.

My final problem was that I was sending a password with a ] character in it, which caused this error on run:

missing close-bracket\n while executing\n\"send

I found here (thanks glenn-jackman) that it was due to the fact that expect syntax is Tcl, which treats those brackets as special characters. To get around this I had to use an Ansible filter, specifically regex_escape().

Lastly I ran into an issue specifically with how I was spawning smbclient. I kept getting this message:

"stderr": "send: spawn id exp4 not open\n while executing\n\"send

It boiled down to single vs double quotes. If I put my -c arguments in single quotes it failed; with double quotes, it worked.

My completed play is below. Finally, success!

- name: Get RSA filename 
  shell: |
    set timeout 300
    spawn smbclient {{standards_location}} -W DOMAIN -U {{username}} -c "ls {{file_location}}*.tar"
    expect "password:"
    send {{ password | regex_escape() }}\r
    interact
    exit 0
  args:
    executable: /usr/bin/expect
  changed_when: false
  no_log: true
  register: RSA_filename_raw

Linux Samba shares using Kerberos / AD credentials

I had a hell of a time trying to figure out why, after upgrading the CentOS Samba package, the samba shares quit working. Every time someone tried to access a share, the smb service would crash. I had this system configured to use Active Directory credentials and it worked well for a time, but no longer.

After much digging I found my problem to be the lack of a krb5.keytab file. This is due to my using PowerBroker Open instead of kerberos for authentication.

The solution was to add this line to my samba config:

kerberos method = system keytab

That one bit made all the difference. My current samba config, with no more crashing, is as follows (updated 8/29 to add the workgroup name):

[global]
     security = ADS
     passdb backend = tdbsam
     realm = DOMAIN
     workgroup = NETBIOS_DOMAIN_NAME
     encrypt passwords = yes
     lanman auth = no
     ntlm auth = no
     kerberos method = system keytab
     obey pam restrictions = yes
     winbind enum users = yes
     winbind enum groups = yes

Update 8/29/2018: After updating and rebooting my smb service refused to start. It kept giving this very unhelpful message:

 ../source3/auth/auth_util.c:1399(make_new_session_info_guest)
create_local_token failed: NT_STATUS_NO_MEMORY
../source3/smbd/server.c:2011(main)
ERROR: failed to setup guest info.
smb.service: main process exited, code=exited, status=255/n/a
Failed to start Samba SMB Daemon.

I couldn’t find any documentation on this and eventually resorted to just messing around with my smb.conf file. What fixed it was adding this to my configuration:

workgroup = NETBIOS_DOMAIN_NAME

I replaced NETBIOS_DOMAIN_NAME with the old NetBIOS-style domain name for my company (what you would put in the domain part of domain\username when logging in), and it worked!

Nextcloud External Files SMB not working – Empty Response from Server

I beat my head against the wall for hours trying to figure out why external storage wasn’t working with my Nextcloud instance after I migrated it over to CentOS 7. All I kept getting was a very unhelpful

There was an error with message: Empty response from the server.

I installed all the libraries multiple times. I tried different versions of smbclient and php-smbclient, but the error kept happening! Eventually I decided to check the logs on my samba server. Sure enough:

 create_connection_session_info failed: NT_STATUS_ACCESS_DENIED

The username and password I was using were correct. I read on some forum (sorry, no link) to put something in the Workgroup field. Voila! As soon as I populated the workgroup field in Nextcloud for my SMB shares, they all worked!

Simple network folder mount script for Linux

I wrote a simple little network mount script for Linux desktops. I wanted to replicate my Windows box as closely as I could, where a bunch of network drives are mapped at user login. This script relies on gvfs-mount and the CIFS utilities being installed (they are by default in Ubuntu).

#!/bin/bash
#Simple script to mount network drives

#Specify network paths here, one per line
#use forward slash instead of backslash
FOLDER=(
  server1/folder1
  server1/folder2
  server2/folder2/folder3
  server3/
)

#Create a symlink to gvfs mounts in home directory
ln -s $XDG_RUNTIME_DIR/gvfs ~/Drive_Mounts

for mountpoint in "${FOLDER[@]}"
do
  gvfs-mount smb://$mountpoint
done

Mark this script as executable and place it in /usr/local/bin. Then make it a default startup application for all users:

vim /etc/xdg/autostart/drive-mount.desktop
[Desktop Entry]
Name=Mount Network Drives
Type=Application
Exec=/usr/local/bin/drive-mount.sh
Terminal=false

Voila, now you’ve got your samba mount script starting up for every user.

Fix Owncloud 8.1.1 samba shares not working

It never seems to go smoothly, does it? I just upgraded Owncloud from 8.0.4 to 8.1.1 on my Ubuntu Trusty Tahr 14.04 VM. After the upgrade I noticed that all my samba (SMB) shares were gone. The logs were not very helpful, full of things like this:

Exception: {"Exception":"Icewind\\SMB\\Exception\\InvalidHostException","Message":"","Code":0,"Trace":"#0 \/var\/www\/owncloud\/apps\/files_external\/3rdparty\/icewind\/smb\/src\/Connection.php(37): Icewind\\SMB\\Connection

Additionally errors like this were showing up:

Your web server is not yet set up properly to allow file synchronization because the WebDAV interface seems to be broken.

After much digging I discovered this post which had a suggestion to install libsmbclient-php. In Ubuntu 14.04 it involves this command:

sudo apt-get install php5-libsmbclient

That did the trick! After installing php5-libsmbclient my samba shares worked once more.


Join a CentOS machine to an AD domain

I ran into enough snags when attempting to join a CentOS 6.6 machine to a Microsoft domain that I thought I would document them here. Hopefully it is of use to someone. The majority of the credit goes to this site.

Update 03/16/2015: I came across this site, which makes initial configuration a little easier – messing with other config files is no longer necessary. The authconfig command to do this is below:

authconfig --disablecache --enablelocauthorize --enablewinbind --enablewinbindusedefaultdomain --enablewinbindauth --smbsecurity=ads --enablekrb5 --enablekrb5kdcdns --enablekrb5realmdns --enablemkhomedir --enablepamaccess --updateall --smbidmapuid=100000-1000000 --smbidmapgid=100000-1000000 --disablewinbindoffline --winbindjoin=Admin_account --winbindtemplateshell=/bin/bash --smbworkgroup=DOMAIN --smbrealm=FQDN --krb5realm=FQDN

Replace DOMAIN with the short domain name, FQDN with your fully qualified domain name, and Admin_account with an account that has domain admin privileges, then skip to the Reboot section below, as this command covers everything before it.

Install the necessary packages

yum -y install authconfig krb5-workstation pam_krb5 samba-common oddjob-mkhomedir

Configure kerberos auth with authconfig

There is a curses-based GUI you can use to do this, but I opted for the command line.

authconfig --disablecache --enablewinbind --enablewinbindauth --smbsecurity=ads --smbworkgroup=DOMAIN --smbrealm=DOMAIN.COM.AU --enablewinbindusedefaultdomain --winbindtemplatehomedir=/home/DOMAIN/%U --winbindtemplateshell=/bin/bash --enablekrb5 --krb5realm=DOMAIN.COM.AU --enablekrb5kdcdns --enablekrb5realmdns --enablelocauthorize --enablemkhomedir --enablepamaccess --updateall

Add your domain to kerberos configuration

Kerberos information is stored in /etc/krb5.conf. Append your domain to the [realms] and [domain_realm] sections, as below:

vi /etc/krb5.conf
[realms]
 EXAMPLE.COM = {
 kdc = kerberos.example.com
 admin_server = kerberos.example.com
 }
 
DOMAIN.COM.AU = {
admin_server = DOMAIN.COM.AU
kdc = DC1.DOMAIN.COM.AU
kdc = DC2.DOMAIN.COM.AU
}
 
[domain_realm]
 .example.com = EXAMPLE.COM
 example.com = EXAMPLE.COM
 domain.com.au = DOMAIN.COM.AU
 .domain.com.au = DOMAIN.COM.AU

Test your configuration

Use the kinit command with a valid AD user to ensure a good connection with the domain controllers:

kinit <AD user account>

It should return you to the prompt with no error messages. You can further verify it worked by issuing the klist command to show open Kerberos tickets:

klist

Ticket cache: FILE:/tmp/krb5cc_0
Default principal: someaduser@DOMAIN.COM.AU
Valid starting Expires Service principal
02/27/14 12:23:21 02/27/14 22:23:21 krbtgt/DOMAIN.COM.AU@DOMAIN.COM.AU
renew until 03/06/14 12:23:19

When I tried the kinit command it returned an error:

kinit: KDC reply did not match expectations while getting initial credentials

After scratching my head for a while I came across this site, which explains that krb5.conf is case sensitive – the realm names must be all upper case. Fixing my krb5.conf to use all caps for my domain resolved the issue.

Join the domain

net ads join domain.com.au -U someadadmin

When I tried to join the domain I received this lovely message:

Our netbios name can be at most 15 chars long, "EXAMPLEMACHINE01" is 16 chars long
Invalid configuration. Exiting....
Failed to join domain: The format of the specified computer name is invalid.

Thanks to the Ubuntu forums I learned I needed to edit my samba configuration to assign an abbreviated NETBIOS name to the machine:

vi /etc/samba/smb.conf

Uncomment the "netbios name =" line and fill it in with a shorter (max 15 characters) NETBIOS name:

netbios name = EXAMPLE01

You can test to ensure the join was successful with this command:

net ads testjoin

Configure home directories

The authconfig command above included a switch for home directories. Make sure you create a matching directory and set appropriate permissions for it.

mkdir /home/DOMAIN
setfacl -m group:"Domain Users":rwx /home/DOMAIN #the article says to do this; the command doesn't work for me, but home directories still appear to be created properly

Reboot

To really test everything the best way is to reboot the machine. When it comes back up, log in with Active Directory credentials. It should work!

Account lockout issues

I ran into a very frustrating problem where everything works fine if you get the password correct on the first try, but a single mistyped password locks out the Active Directory account. Each login, even a successful one, left this in the logs:

winbind pam_unix(sshd:auth): authentication failure

This problem took a few days to solve. Ultimately it involved modifying two files:

vi /etc/pam.d/system-auth
vi /etc/pam.d/password-auth

As far as I can tell, the problem was a combination of pam_unix being listed first (which always failed when using AD logins) and having both winbind and kerberos enabled. The fix was to move every mention of pam_unix below any mention of pam_winbind. I also had to comment out all mentions of pam_krb5 completely:

#auth        sufficient    pam_krb5.so use_first_pass
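
For illustration, a reordered auth section might look roughly like this (the exact modules and arguments depend on what authconfig generated; the key points are pam_winbind listed before pam_unix and pam_krb5 commented out):

auth        required      pam_env.so
auth        sufficient    pam_winbind.so
auth        sufficient    pam_unix.so nullok try_first_pass
#auth       sufficient    pam_krb5.so use_first_pass
auth        requisite     pam_succeed_if.so uid >= 500 quiet
auth        required      pam_deny.so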

Restrict logins

The current configuration allows any domain account to log into the machine. You will probably want to restrict logins to certain security groups. The problem: many Active Directory security group names contain spaces, which Linux doesn't like.

How do you add a security group that contains a space? Escape characters don’t seem to work in the pam config files.  I found out thanks to this site that it is easier to just not use spaces at all. Get the SID of the group instead.

Use wbinfo -n to query the group in question, using a backslash to escape the space. It will return the SID we need.

wbinfo -n Domain\ Users
S-1-5-21-464601995-1902203606-794563710-513 Domain Group (2)

Next, modify /etc/pam.d/password-auth and add the require_membership_of argument to pam_winbind.so:

auth        sufficient    pam_winbind.so require_membership_of=S-1-5-21-464601995-1902203606-794563710-513

That’s it! Logins are now restricted to the security group listed.

Configure sudo access

Sudo uses a different list for authorization which, amusingly, handles escaped spaces just fine. Simply add the Active Directory group in sudoers as you would a local one, i.e. a % followed by the group name, with spaces escaped by a backslash:

%Domain\ Users ALL=(ALL) ALL

Rejoice

You’ve just gone through a long and painful battle. Hopefully this article helped you to achieve victory.