
Fix no sound in Wine

Lately I’ve been doing 100% of my gaming in Linux. The latest versions of Wine in Arch Linux have been fantastic (for the most part.) I recently installed Gauntlet, a Windows-only Steam game, and for some reason I had no sound. Sound worked fine in other Wine games, just not this one.

After much digging I found this post on the Arch Linux forums which fixed my issue: I didn’t have the proper 32-bit sound libraries installed. The fix was as simple as:

sudo pacman -Sy lib32-alsa-plugins lib32-libpulse lib32-openal

Success!
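
If sound still doesn’t work for anyone else hitting this, it’s worth confirming the packages actually landed; pacman -Q prints the installed version of each (or an error if one is missing):

pacman -Q lib32-alsa-plugins lib32-libpulse lib32-openal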

Back up your systems with urBackup

In addition to my ZFS snapshots I decided to implement a secondary backup system. I landed on urBackup because it’s easy to use and, more importantly, easy to set up.

Server Install

Assuming a CentOS-based system:

cd /etc/yum.repos.d/
sudo wget http://download.opensuse.org/repositories/home:uroni/CentOS_7/home:uroni.repo
sudo yum -y install urbackup-server
sudo systemctl enable urbackup-server
sudo systemctl start urbackup-server

Open up necessary ports for the server:

sudo firewall-cmd --add-port=55413-55415/tcp --permanent
sudo systemctl reload firewalld

By default urBackup listens on port 55414 for connections. You can serve it on port 80 (and/or 443 for HTTPS) instead by installing nginx and having it proxy the connections for you.

sudo yum -y install nginx
sudo systemctl enable nginx
sudo setsebool -P httpd_can_network_connect 1 #if you're using selinux

Copy the following into /etc/nginx/conf.d/urbackup.conf (make sure to change server_name to suit your needs):

server {
        server_name backup;

        location / {
                proxy_pass http://localhost:55414/;
        }
}

Then start nginx:

sudo systemctl start nginx

You should then be able to access the urbackup console by navigating to the IP / hostname of your backup server in a browser.
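
If the console doesn’t come up, a quick sanity check from the server itself can tell you whether urBackup and the nginx proxy are each answering (ports as configured above):

curl -I http://localhost:55414/   # urbackup's built-in web server
curl -I http://localhost/         # through the nginx proxy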

Client Install

urBackup can use a snapshot system known as dattobd. Use it if you can in order to get more consistent backups; otherwise urBackup will simply copy files from the running host, which isn’t always desirable (databases, for example).

Install dattobd (optional):

sudo yum -y update
# reboot if your kernel ends up being updated
sudo yum -y localinstall https://cpkg.datto.com/datto-rpm/repoconfig/datto-el-rpm-release-$(rpm -E %rhel)-latest.noarch.rpm
sudo yum -y install dkms-dattobd dattobd-utils

Install urbackup client:

TF=`mktemp` && wget "https://hndl.urbackup.org/Client/2.1.15/UrBackup%20Client%20Linux%202.1.15.sh" -O $TF && sudo sh $TF; rm $TF
#Select dattobd when prompted if desired

Configure Firewall:

sudo firewall-cmd --add-port=35621-35623/tcp --permanent
sudo systemctl reload firewalld

Once a client is installed, assuming it’s on the same network as the backup server, it will automatically add itself and begin backing up. If clients don’t show up, it’s usually a firewall issue.
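
A quick way to rule the firewall in or out is to probe the ports from each side with nc; the IP addresses here are examples to replace with your own:

# from the client: can we reach the server?
nc -zv 192.168.1.10 55414
# from the server: can we reach the client?
nc -zv 192.168.1.20 35622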

Restore

Restoring individual files is easily done through the web console. If you have a Windows system, restoring from an image backup is also easy.

Linux hosts

Recovery is trickier if you want to restore a Linux system. Install a fresh system of the same distribution and give it the same hostname. Install the client as outlined above, then run:

sudo /usr/local/bin/urbackupclientctl restore-start -b last

Troubleshooting

If for some reason the client doesn’t show up after removing it from the GUI, uninstall and re-install the client software:

sudo /usr/local/sbin/uninstall_urbackupclient
TF=`mktemp` && wget "https://hndl.urbackup.org/Client/2.1.15/UrBackup%20Client%20Linux%202.1.15.sh" -O $TF && sudo sh $TF; rm $TF

Create & Mount disk images in Linux

When working with hard drives it is always a good idea to back the entire thing up before proceeding. I wanted to write down the procedure so I don’t keep forgetting it.

Create disk image

dd does the trick here.

sudo dd if=/dev/<drive device file> of=image.img bs=64M

If you wish to see the progress of the above dd command, open a separate window and send dd the USR1 signal with kill:

kill -USR1 `pidof dd`
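
If your dd is new enough (coreutils 8.24 or later) you can skip the kill trick and have dd report its own progress:

sudo dd if=/dev/<drive device file> of=image.img bs=64M status=progress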

Mount disk image read-only

You can now disconnect the drive and work with its image instead (great for forensics or dealing with a dying drive.)

In later versions of Linux you can do this with losetup and partprobe.

sudo losetup -Pr -f <path to image file>
sudo losetup #find which loop device file corresponds with your image here
sudo mount -o ro /dev/<loopdevice>p<partition number> <mountpoint>

Note: remove the r from the above losetup command if you want to mount read/write (required for LVM partitions).

For example, this is what I did on my system for my aunt’s laptop (I was interested in the 2nd partition on her drive, the one containing Windows files):

sudo losetup -Pr -f susan-ssd.img
sudo losetup

NAME SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE DIO
/dev/loop0 0 0 0 1 /home/partimag/susan-ssd.img 0

sudo mount -o ro /dev/loop0p2 mount/

When you’re done, unmount the image and delete the image mapping:

sudo umount <path to mount directory>
sudo losetup -d <loop device obtained earlier>

CentOS 7, nginx, reverse proxy, & Let’s Encrypt

With the loss of trust in StartCom certs I found myself needing a new way to obtain free SSL certificates. Let’s Encrypt is perfect for this. Unfortunately Sophos UTM does not support Let’s Encrypt, so it became time to replace Sophos as my reverse proxy. Enter nginx.

The majority of the information I used to get this up and running came from DigitalOcean with help from HowtoForge. My solution involves CentOS 7, nginx, and the Let’s Encrypt client.

Install necessary packages

sudo yum install nginx letsencrypt
sudo firewall-cmd --add-service=http --permanent
sudo firewall-cmd --add-service=https --permanent
sudo systemctl enable nginx

Inform selinux to allow nginx to make http network connections:

sudo setsebool -P httpd_can_network_connect 1

Generate certificates

Generate your SSL certificates with the letsencrypt command. This relies on your site being reachable over the internet on port 80 with public DNS. Replace the arguments below to reflect your setup:

sudo letsencrypt certonly -a webroot --webroot-path=/usr/share/nginx/html -d example.com -d www.example.com

The above command places the certs in /etc/letsencrypt/live/<domain_name>
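
You can verify everything generated correctly by listing that directory; you should see cert.pem, chain.pem, fullchain.pem, and privkey.pem (symlinks to the latest versions):

sudo ls -l /etc/letsencrypt/live/example.com/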

Sophos UTM certificates

In my case I had a few paid SSL certificates I wanted to copy over from Sophos UTM to nginx. In order to do this I had to massage them a little bit as outlined here.

Download the p12 file from Sophos along with the certificate authority file, then use openssl to convert the p12 into a cert/key pair nginx will accept.

openssl pkcs12 -nokeys -in server-cert-key-bundle.p12 -out server.pem
openssl pkcs12 -nocerts -nodes -in server-cert-key-bundle.p12 -out server.key
cat server.pem Downloaded_CA_file.pem > server-ca-bundle.pem

Once you have your keyfiles you can copy them wherever you like and use them in your site-specific SSL configuration file.
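
Before pointing nginx at them it’s worth confirming the key actually matches the certificate; if these two md5 hashes differ, nginx will refuse to start:

openssl x509 -noout -modulus -in server.pem | openssl md5
openssl rsa -noout -modulus -in server.key | openssl md5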

Auto renewal

First make sure that the renew command works successfully:

sudo letsencrypt renew

If the output indicates success (a message saying the certs are not yet due for renewal) then add a cron job to check monthly for renewal:

sudo crontab -e
30 2 1 * * /usr/bin/letsencrypt renew >> /var/log/le-renew.log
35 2 1 * * /bin/systemctl reload nginx

Configure nginx

Uncomment the https settings block in /etc/nginx/nginx.conf to allow for HTTPS connections.

Generate a strong DH group:

sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

Create SSL conf snippets in /etc/nginx/conf.d/ssl-<sitename>.conf. Make sure to include the proper location of your SSL certificate files as generated with the letsencrypt command.

ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
ssl_dhparam /etc/ssl/certs/dhparam.pem;

Here is a sample ssl.conf file:

server {
        listen 443 ssl;

        ssl_certificate /etc/letsencrypt/live/<HOSTNAME>/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/<HOSTNAME>/privkey.pem;
        ssl_dhparam /etc/ssl/certs/dhparam.pem;

        access_log /var/log/<HOSTNAME>.log;

        server_name <HOSTNAME>;

        location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;

                proxy_pass http://<BACKEND_HOSTNAME>/;
        }
}


Redirect HTTP to HTTPS by creating a redirect configuration file (optional):

sudo vim /etc/nginx/conf.d/redirect.conf
server {
        server_name
                <DOMAIN_1>
                ...
                <DOMAIN_N>;

        location /.well-known {
                alias /usr/share/nginx/html/.well-known;
                allow all;
        }

        location / {
                return 301 https://$host$request_uri;
        }
}
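
Before restarting, it’s a good idea to have nginx syntax-check the whole configuration; it will flag typos and bad paths without touching the running service:

sudo nginx -t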


Restart nginx:

sudo systemctl restart nginx

Troubleshooting

HTTPS redirects always go to the host at the top of the list

Solution found here: use the $host variable instead of the $server_name variable in your configuration.

Websockets HTTP 400 error

Websockets require a bit more massaging in the configuration file as outlined here. Modify your site-specific configuration to add these lines:

# we're in the http context here
map $http_upgrade $connection_upgrade {
  default upgrade;
  ''      close;
}

server {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
}
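
Depending on your backend you may also need to force HTTP/1.1 on the proxied connection, since the WebSocket handshake won’t ride over nginx’s default proxy HTTP/1.0; here’s a sketch for the proxied location (the backend hostname is a placeholder):

        location / {
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection $connection_upgrade;

                proxy_pass http://<BACKEND_HOSTNAME>/;
        }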


Install Linux on Chromebook Pixel 2 (Samus)

I’ve run crouton on my Chromebook Pixel 2 (2015, codename Samus) for some time now, but I’ve found myself wanting more. VirtualBox, kernel access, graphics, and more don’t perform well in a chroot. Thankfully it’s actually pretty easy to dual boot ChromeOS and Linux on your Chromebook thanks to chrx (pronounced “marshmallow”?)

Installation

The first part of installation is identical to setting up crouton:

  • Enter developer mode:
    Press ESC + Refresh + Power simultaneously (while the Chromebook is on). A scary screen will pop up saying the OS is missing or damaged. Press CTRL+D, then press Enter when the OS verification screen comes up.
    • Every time you power on the Chromebook from now on you’ll get this scary screen. Press CTRL+D to bypass it (or wait 30 seconds)
    • If you hit space on this screen instead of CTRL+D it will powerwash (nuke) your data
  • Wait several minutes for developer mode to be installed. Note it will wipe your device to do this.

Enable SeaBIOS:

Open up a shell (CTRL + ALT + T, type shell, press Enter) and enter the following:

sudo crossystem dev_boot_usb=1 dev_boot_legacy=1

and reboot.

Next, download and run the chrx script twice. The first run will partition and powerwash your system; the second run will actually install GalliumOS (or Ubuntu or Fedora) alongside ChromeOS.

cd ; curl -Os https://chrx.org/go && sudo sh go

Reboot after the partitioning step, then open a shell again for the second run. You can specify a number of arguments to the go script; I wanted to use Cinnamon on Fedora, so these are the ones I used:

cd ; curl -Os https://chrx.org/go && sudo sh go -d fedora -e cinnamon -r latest -H <hostname> -U <username> -Z <timezone>

Fedora took quite a long time to install (1 hour in my case.) Just let the script do its thing. Once complete you can reboot and press CTRL+D for ChromeOS or CTRL+L for Linux.

After that, reboot into your new Linux environment!

Cleaning up

There were a few samus-specific things I needed to do.

Locale

For some reason my locale was set to an African country. Correct it by doing this (thanks to here). I added the SELinux commands because I was getting permission denied errors without them.

sudo setenforce 0
localectl set-locale LANG=en_US.utf8
sudo setenforce 1
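
You can confirm the change stuck with:

localectl status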

Audio doesn’t work (no sound)

This issue stems from the fact that the sound card is not presented as the first available card; the system defaults to HDMI sound instead. Fortunately this page has instructions on how to fix it. If you’re running default GalliumOS you can follow the instructions from the link above. In my case I had to get a bit creative.

  1. Download the samus patches from here:
    wget https://github.com/GalliumOS/galliumos-samus/archive/master.zip
  2. Extract the subfolders inside said zip file to the root directory
  3. Reboot
  4. Run the following:
    cp -r /etc/skel/.config $HOME
    sudo samus-alsaenable-speakers
    sudo samus-touch-reset

Success! You can now dual boot between full-blown Linux and ChromeOS on your Chromebook Pixel.

Touchpad / touchscreen stop working after resume

Occasionally my touchpad and touchscreen stop responding after resuming from sleep. The galliumos-samus package mentioned above includes a handy reset script that fixes this. Simply run:

sudo samus-touch-reset

and your touch functionality is restored. I bound this command to a key shortcut to make things easier.
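
One wrinkle with binding it to a shortcut: the script needs root, and a hotkey can’t answer a sudo password prompt. A NOPASSWD sudoers entry for just that one command (added via visudo) takes care of it; the username and script path below are assumptions, so check yours with which samus-touch-reset:

# username and path are placeholders; adjust to match your system
yourusername ALL=(root) NOPASSWD: /usr/bin/samus-touch-reset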

Virtualbox won’t start

After installing virtualbox I got a strange error message when trying to start VMs:

Failed to load VMMR0.r0 (VERR_SUPLIB_OWNER_NOT_ROOT)

I found this mention saying that /usr has to be owned by root. Easy enough to fix:

sudo chown root:root /usr/

Simple network folder mount script for Linux

I wrote a simple little network mount script for Linux desktops. I wanted to replicate my Windows box as best I could, where a bunch of network drives are mapped upon user login. This script relies on having gvfs-mount and the CIFS utilities installed (they’re installed by default in Ubuntu.)

#!/bin/bash
#Simple script to mount network drives

#Specify network paths here, one per line
#use forward slash instead of backslash
FOLDER=(
  server1/folder1
  server1/folder2
  server2/folder2/folder3
  server3/
)

#Create a symlink to gvfs mounts in home directory (skip if it already exists)
[ -L ~/Drive_Mounts ] || ln -s "$XDG_RUNTIME_DIR/gvfs" ~/Drive_Mounts

for mountpoint in "${FOLDER[@]}"
do
  gvfs-mount "smb://$mountpoint"
done

Mark this script as executable and place it in /usr/local/bin, for example:
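
chmod +x drive-mount.sh
sudo mv drive-mount.sh /usr/local/bin/

Then make it a default startup application for all users: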

vim /etc/xdg/autostart/drive-mount.desktop
[Desktop Entry]
Name=Mount Network Drives
Type=Application
Exec=/usr/local/bin/drive-mount.sh
Terminal=false

Voila, now you’ve got your samba mount script starting up for every user.

Monitor your servers with phpservermonitor

I have a handful of servers and for years I’ve been wanting to get some sort of monitoring in place. Today I tried out php server monitor and found it was pretty easy to set up and use.

Download

The installation process was pretty straightforward.

  • Install PHP, MySQL, and Apache
  • Create a database, user, password, and access rights in MySQL
  • Download the .tar.gz and extract it to /var/www
  • Configure an Apache site file to point to the phpservermonitor directory
  • Navigate to the IP / URL of your Apache server and run the installation script

The above process is documented fairly well on their website. I configured this to run on my Raspberry Pi 2. This was my process:

Install dependencies:

sudo apt-get install php5 php5-curl php5-mysql mysql-server

Configure mysql:

sudo mysql_secure_installation

Create database:

mysql -u root -p
create database phpservermon;
create user 'phpservermon'@'localhost' IDENTIFIED BY 'password';
grant all privileges on phpservermon.* TO 'phpservermon'@'localhost';
flush privileges;

Extract phpservermon to /var/www and grant permissions:

tar zxvf <phpservermon_gzip_filename> -C /var/www
sudo chown -R www-data /var/www/*

Configure php:

sudo vim /etc/php5/apache2/php.ini
#uncomment date.timezone and set your timezone
date.timezone = "America/Boise"

Configure apache:

sudo cp /etc/apache2/sites-available/default /etc/apache2/sites-available/phpservermon

#Modify /etc/apache2/sites-available/phpservermon server root to point to directory above, also add a ServerName if desired

sudo a2ensite phpservermon
sudo service apache2 reload

Configure cron (I have it check every minute, but you can configure whatever interval you like):

*/1 * * * * /usr/bin/php /var/www/phpservermon/cron/status.cron.php

Navigate to the web address you’ve configured in apache and follow the wizard.

It’s pretty simple but it works! A nice PHP application to monitor websites and services.


Heroes of the Storm in Linux with Wine

I’ve been wanting to get some of my games to play properly in Linux with Wine. Recently I’ve been able to get Heroes of the Storm working pretty well in Arch Linux. This is what I did to get it working with my NVIDIA GeForce GTX 1070.

Update 4/22/2017: Even more changes with Wine 2.4-staging. This site is very helpful:

For Wine Staging release 2.3 and later

  • Change Windows Version to Windows XP
    • Open Wine configuration (winecfg) and change setting at the bottom
  • Install Visual C++ 2015 libraries (needed to update games)
    • winetricks vcrun2015
  • If you get errors about not being able to connect / login:
    • Install lib32-libldap and lib32-gnutls
Installer login Workaround:
 - Create a clean 32-bit wineprefix
 - Run the Battle.net installer with default options
  * It crashes at the very end when it tries to launch, but installation still finishes and succeeds.
 - Overwrite the default Battle.net config file:
  cd "$WINEPREFIX/users/$(whoami)/Application\ Data/Battle.net/"
  echo "{\"Client\": {\"HardwareAcceleration\": \"false\"}}" > "Battle.net.config"
 - Set the default mimicked windows version to Windows XP
 - Run Battle.net:
  wine "$WINEPREFIX/drive_c/Program\ Files/Battle.net/Battle.net\ Launcher.exe"

Update 3/31/2017: A few things have changed since I wrote this article. I’ll keep the old information for historical purposes, but the process has been updated:

  1. Install mainstream nvidia drivers
    yaourt -Sy nvidia
  2. Install wine-staging
    pacman -Sy wine-staging
  3. Remove / move existing wine configuration if you have one
    rm -rf ~/.wine
  4. Create new 32bit wine prefix
    WINEARCH=win32 wine wineboot
  5. Enable CSMT by going to the Staging tab in winecfg and checking “Enable CSMT for better graphic performance”
  6. Disable DirectX11 as described in step 3 below

For historical reference, the old process:

  1. Install beta Nvidia drivers
    1.  yaourt -Sy nvidia-beta
  2. Install wine
    1. pacman -Sy wine
  3. Disable DirectX11 
    1. run winecfg
    2. switch to tab ‘Libraries’
    3. select ‘d3d11’ in the drop-down menu or type it in and click ‘Add’
    4. click ‘edit…’ and set it to disabled
    5. click ‘OK’
  4. Install msttcore fonts
    1. winetricks corefonts
  5. Set Windows version to Windows XP
    1. winetricks winxp
  6. (Dvorak keymap users only) Set keymap to US to allow for hotkeys to work
    1. setxkbmap us

It’s not perfect. Battle.net splash screens don’t load (no news, patch info) and the first few moments in the game stutter (but it’s fine after that.) Without DX11 the game doesn’t look quite as shiny but is still quite playable. I hope future versions of Wine will help with this problem, but this is what I’m using for now.


Generate static HTML files from any website with httrack

I came across a need to take a dynamically generated cakePHP site and turn it into a collection of static HTML files. After some trial and error I came across httrack, which fit the need beautifully (thanks to this site for pointing me there.)

To install httrack, run the following (for Ubuntu-based distros):

sudo add-apt-repository ppa:upubuntu-com/web
sudo apt-get update
sudo apt-get install webhttrack httrack

Once httrack is installed, simply launch it from the command line:

httrack

Follow the prompts and in no time you’ll have a folder with static HTML files for your entire website. Easy!
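
The wizard is the easy path, but httrack can also run entirely from arguments; here’s a sketch of a one-shot mirror, where the URL, output directory, and filter are placeholders to adjust:

httrack "http://www.example.com/" -O ./mirror "+*.example.com/*" -v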

Fix vmware-view RDP issues in Linux

I’ve been using vmware view as a means to remote into my computers at work for some time now. An update to the Linux client appears to have broken my ability to remote into work machines over the RDP protocol. This issue affects multiple distros.

The symptom: after you log into vmware-view and double-click on a computer you wish to connect to over the RDP protocol, the screen flashes for a second and then takes you right back to where you started – no error message. Frustrating.

If you launch vmware-view in a console you get a little more insight into what’s going on:

RDP Client(10222): WARNING: Unknown -r argument
2017-02-26 14:06:53.817-07:00: vmware-view 7858| RDP Client(10222): 
2017-02-26 14:06:53.817-07:00: vmware-view 7858| RDP Client(10222): Possible arguments are: comport, disk, lptport, printer, sound, clipboard, scard

After much frustration I was able to combine documentation from VMware and FreeRDP to finally find the right combination of arguments. I read that FreeRDP works better than rdesktop with this version, so I tried launching vmware-view with this option:

vmware-view --rdpclient="xfreerdp"

Progress – at least now the error message was different.

RDP Client(20799): [14:04:04:097] [20799:20803] [ERROR][com.freerdp.crypto] - certificate not trusted, aborting.

After more investigation the culprit turned out to be crypto negotiation. Since I’m already connected to the trusted work VMware server, I don’t really care about certificate validation. This is what finally got me up and running. The key components are the rdpclient and the /cert-ignore options.

vmware-view --rdpclient="xfreerdp" --xfreerdpOptions="-wallpaper /sound:sys:alsa /cert-ignore"

You can include these options in your ~/.vmware/view-preferences config file so you don’t have to manually add all those switches:

echo 'view.rdpClient = "xfreerdp"
view.xfreerdpOptions = "-wallpaper /sound:sys:alsa /cert-ignore"' >> ~/.vmware/view-preferences

Finally RDP via vmware-view is working in Linux again.


Update 11/18/2018: I was having a very annoying issue with sound on my Debian Jessie box. If the system I was remoted into made any sound, it would corrupt the sound on my machine, making everything sound tinny and fuzzy. I would have to run pulseaudio -k to fix it.

It turns out this was caused by /sound:sys:alsa above. I changed that to be /sound:sys:pulse and all was well.
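
If you had already written the alsa variant into your config file as above, a one-line sed swaps it out:

sed -i 's|/sound:sys:alsa|/sound:sys:pulse|' ~/.vmware/view-preferences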