Taken from here
yay -S atomicparsley
youtube-dl --extract-audio -f bestaudio[ext=m4a] --add-metadata --embed-thumbnail YOUTUBE_URL
I had a few audio files of an interview done with a late relative that I wanted Google to transcribe for me. I wanted to supply an audio file and have it spit out the results. There are many ways to do this, but I went with the Google Cloud Platform speech-to-text API.
First I signed up for a GCP free trial via https://cloud.google.com/speech-to-text/. For my usage it will remain free, as the first 60 minutes of transcription per month are not charged: https://cloud.google.com/speech-to-text/pricing
Next, I needed to create a GCP storage bucket, as audio more than 10 minutes long cannot reliably be transcribed via the "uploading local file" option. I did this following the documentation at https://cloud.google.com/storage/docs/creating-buckets which walks you through going to their storage browser and creating a new bucket. From that screen I uploaded my audio files (FLAC in my case.)
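If you prefer the command line to the storage browser, gsutil (part of the google-cloud-sdk installed a couple of steps below) can create the bucket and upload the audio instead; the bucket name here is just a placeholder:
#Assumes you have already authenticated with gcloud auth login
gsutil mb gs://google-storage-bucket-name
gsutil cp family-memories.flac gs://google-storage-bucket-name/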
Then I needed to create API credentials to use. I did this by going to the Speech API console's credentials tab, creating a service account, and saving the key to my working directory on my local computer.
Also on said computer I installed google-cloud-sdk (on Arch Linux in my case it was as simple as yay -S google-cloud-sdk).
With the service account JSON file downloaded & google-cloud-sdk installed, I exported the GCP service account credentials into my bash environment like so:
export GOOGLE_APPLICATION_CREDENTIALS=NAME_OF_SERVICE_ACCOUNT_KEYFILE_DOWNLOADED_EARLIER.json
I created a .json file following the format outlined in the command line usage section of the quickstart documentation. I tweaked it to add a "model": "video" line to get the API to use the premium video recognition model (as it was more accurate for this type of recording.) This is what my JSON file looked like:
{
  "config": {
    "encoding": "FLAC",
    "sampleRateHertz": 16000,
    "languageCode": "en-US",
    "enableWordTimeOffsets": false,
    "model": "video"
  },
  "audio": {
    "uri": "gs://google-storage-bucket-name/family-memories.flac"
  }
}
I then used CURL to send the transcription request to Google. This was my command:
curl -s -H "Content-Type: application/json" -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) https://speech.googleapis.com/v1/speech:longrunningrecognize -d @JSON_FILE_CREATED_ABOVE.json
If all goes well you will get something like this in response:
{
"name": "4663803355627080910"
}
You can check the status of the transcription, which usually takes about half the length of the audio file, by running this command:
curl -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) -H "Content-Type: application/json; charset=utf-8" "https://speech.googleapis.com/v1/operations/ID_NUMBER_ACQUIRED_ABOVE"
You will either get a percent progress, or if it’s done, the output of the transcription.
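Once it reports done, the transcript text is nested inside the JSON response. Piping the same status call through jq pulls out just the text; this assumes the standard longrunningrecognize response layout and that jq is installed:
#Extract just the transcript text from a finished operation
curl -s -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) "https://speech.googleapis.com/v1/operations/ID_NUMBER_ACQUIRED_ABOVE" | jq -r '.response.results[].alternatives[0].transcript'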
Success! It took some time to figure out but was still much better than manually transcribing the audio by hand.
I recently got a Tacx Neo 2 smart trainer for my bike and was eager to use it on my big screen TV with Zwift. Unfortunately, despite the Nvidia Shield being a more than capable Android device, Zwift does not show up in the Google Play store. I didn't want to stream Zwift from my PC because a) the Windows app is annoying and doesn't go full screen (title bar at the top) and b) my PC is located upstairs and the bluetooth doesn't appear to reach from the trainer to the PC.
My solution to this was to sideload the Zwift app onto my Nvidia Shield. It wasn't as straightforward as most sideloading due to how Zwift is packaged: it has an APK file and an OBB file. The APK is small and is the application itself; the OBB file is all the map data (it's large, over 600 MB).
Fortunately, a newer Android format called xapk exists, which is an archive of both in one package. This was the process I used to successfully get Zwift onto my Nvidia Shield:
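Roughly, the sideload boils down to unpacking the xapk (it's just a zip of the APK plus the OBB) and pushing both pieces to the Shield with adb. This is only a sketch; the file names, the Shield's IP, and the com.zwift.zwiftgame package/obb folder name are my assumptions, so check what's actually inside your xapk:
#Unpack the xapk (a plain zip archive)
unzip zwift.xapk -d zwift
#Connect to the Shield over network adb (enable it in developer settings; IP is an example)
adb connect 192.168.1.50:5555
#Install the APK, then push the OBB into the matching obb directory (package name assumed)
adb install zwift/com.zwift.zwiftgame.apk
adb push zwift/Android/obb/com.zwift.zwiftgame /sdcard/Android/obb/com.zwift.zwiftgame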
The Shield remote doesn't appear to control anything within the app. Plug in a mouse so you can swipe away the first-run tutorial screens (hold left click and drag to the left.) Optional: plug in a keyboard while you're at it so you can log in faster.
Success! My trainer showed up in the pairing screen and everything works! You can even have your own music playing in the background, with a caveat: if you ever switch away from Zwift, it will reset back to the login screen because the Shield doesn't appear to have enough memory to keep Zwift running when another app is in the foreground. If you want your Shield to play music, start the music first, then switch to Zwift. Once you're in Zwift, you can't switch away to any other app without losing your progress.
I needed to send some test packets over UDP to make sure connectivity was working. I found this site, which outlined how to do it really well:
nc -u <IP/hostname> <port>
Then on the next line you can send test messages, hitting CTRL+D when done. In my case I wanted to test sending syslog data, so I did nc -u <hostname> 514, then wrote test messages. The -u specifies UDP and 514 is the syslog port. I was then able to confirm on the other end that the message was received. Handy.
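If you'd rather not type interactively, you can fire a single test message and exit in one shot, for example:
#Send one syslog-style test message over UDP and give up after 1 second
echo "test message from $(hostname)" | nc -u -w1 <hostname> 514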
I had a need to take a folder in one git repository and create a whole new git repository with it, preserving history for all files inside. My desire to keep git history made the process a bit more complicated than simply copying the directory into a new git repository.
First, create a new folder on the git server. I'm all command line, no GUI yet, so I need to make it a bare repository. (Thanks to geeksforgeeks for how to do this.)
#On the main git "server"
mkdir <reponame>.git
cd <reponame>.git
git init --bare
Now, on the desktop (not the git server) clone a copy of the repository with your desired folder into a new directory, remove the git origin server, then strip out everything except that directory (thanks to gbayer.com for the info)
#On the desktop
git clone <initial git repository url> <new_directory_name>
cd <new_directory_name>
git remote rm origin
git filter-branch --subdirectory-filter <directory_to_keep_history_of> -- --all
Lastly (still on the desktop, in the new repository directory), create a new origin with the path of the new repo you created above:
#On the desktop, inside new_directory_name
git remote add origin <server>:<path_to_new_repo_folder>
git push --set-upstream origin master
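Before pushing, a quick sanity check (not part of the original steps) is to confirm the per-file history actually survived the filter:
#On the desktop, inside new_directory_name; any file from the kept directory should show its full history
git log --oneline -- <some_file_from_that_directory>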
In general I try to buy server-class hardware for my home lab, primarily so that I could have IPMI / Remote access console for remote OS installation & troubleshooting. I recently got a new desktop and found myself with a Threadripper 1950x that would make an excellent addition to my server cluster. The one problem being it’s a desktop-class board, so it does not have any IPMI / remote access device.
I solved my problem with pikvm. It works wonderfully! Pikvm uses a raspberry pi with some additional hardware and software to interface with a system to control power & reset capabilities, as well as KVM functions with the ability to upload OS images and do OS installations remotely. The whole project cost me about $150 since I didn’t have some of the essential items for it. It could definitely be cheaper if I didn’t buy large packs of items or already had some electronics components.
The process was straightforward as outlined on their github page. The only snag I ran into was creating the USB Y (split) cable. It did not work the first time, so I had to tear it all down and start again. One cable I used had more than 4 wires (3 red wires, 1 black, 1 green, 1 white, and 1 yellow.) When I re-assembled to include the yellow wire with the red and black, it all worked.
I scavenged metal mounting brackets from some old networking adapter cards. With those I was able to mount the Pi and the HDMI-in module to two standard PCI express card slots. I accidentally destroyed one of my SD cards while doing this, so be careful if you try it! The Pi is mounted at a slight angle so as not to damage the SD card. I had to mount it backwards (ethernet facing the back) because I couldn't get power to it otherwise (the power port sits right up against the motherboard.) My workaround was to custom-make a short ethernet cord and use an RJ45 coupler on the outside of the chassis to provide an easy-to-access network port for the Pi.
I wired the power & reset switch, as well as HDD and power LEDs in parallel so they would function with the chassis as well as with the KVM. To do this simply get some male-to-male jumper wires. On one end plug into the chassis wire, and on the other plug into the corresponding positive and negative slots right next to the ones going to the pi.
Breadboard pinout: https://github.com/pikvm/pikvm/blob/master/img/v2.png
USB split cable diagram: https://github.com/pikvm/pikvm/blob/master/img/v2_splitter.png
Parts list:
Raspberry Pi 4B 2GB edition: https://www.amazon.com/gp/product/B07TD42S27/ref=ppx_yo_dt_b_asin_title_o04_s00?ie=UTF8&psc=1
Raspberry Pi 4 heatsink pack: https://www.amazon.com/gp/product/B07ZLZRDXZ/ref=ppx_yo_dt_b_asin_title_o02_s00?ie=UTF8&psc=1
Raspberry Pi HDMI in Module: https://www.amazon.com/gp/product/B0899L6ZXZ/ref=ppx_yo_dt_b_asin_title_o03_s00?ie=UTF8&psc=1
16GB Micro SD card: https://www.amazon.com/gp/product/B073K14CVB/ref=ppx_yo_dt_b_asin_title_o04_s01?ie=UTF8&psc=1
1 foot HDMI cable: https://www.amazon.com/gp/product/B00DI88XEG/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1
Breadboard 3 pack: https://www.amazon.com/gp/product/B077DN2PS1/ref=ppx_yo_dt_b_asin_title_o04_s01?ie=UTF8&psc=1
Breadboard Jumper Wires: https://www.amazon.com/gp/product/B07GD2BWPY/ref=ppx_yo_dt_b_asin_title_o04_s00?ie=UTF8&psc=1
Resistor Assortment Kit: https://www.amazon.com/gp/product/B0792M83JH/ref=ppx_yo_dt_b_asin_title_o04_s00?ie=UTF8&psc=1
390 OHM resistors: https://www.amazon.com/gp/product/B07QK9NFGT/ref=ppx_yo_dt_b_asin_title_o04_s01?ie=UTF8&psc=1
SSR relays: https://www.digikey.com/product-detail/en/G3VM-61A1/Z2100-ND/673290
I upgraded to a shiny new 3rd gen AMD Ryzen processor (Threadripper 3960x.) After doing so I could not boot up my Windows 10 gaming VM (it uses VFIO / PCI passthrough for the video card.) The message I kept getting as it tried to boot was:
KERNEL_SECURITY_CHECK_FAILED
After reading this reddit thread and this one, it turns out it's a combination of a few things:
The problem comes with a new speculative execution protection hardware feature in the Ryzen Gen 3 chipsets: stibp. Qemu doesn't know how to handle it properly, thus the bluescreens.
There are two ways to fix it:
1. Change the CPU mode from host-passthrough to host-model or epyc
2. Keep host-passthrough and manually remove the stibp CPU feature
Since I have some software that checks the CPU model and refuses to work if it's not in the desktop class (GeForce Experience), I opted for route #2.
First, check the qemu logs to see which CPU parameters your VM was using (pick a time when it worked.) Replace 'win10' with the name of your VM.
sudo cat /var/log/libvirt/qemu/win10.log | grep "\-cpu"
in my case, it was -cpu host,migratable=on,topoext=on,kvmclock=on,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vendor-id=1234567890ab,kvm=off \
Copy everything after -cpu and before the last backslash. Then edit your VM’s XML file (change last argument to the name of your VM)
sudo virsh edit win10
Scroll down to the bottom qemu:commandline section (if it doesn't exist, create it right above the last line, </domain>). Paste the information obtained from the above log, ignoring the qemu:commandline lines if they already exist. In my case it looked like this:
<qemu:commandline>
  <qemu:arg value='-cpu'/>
  <qemu:arg value='host,topoext=on,kvmclock=on,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vendor-id=1234567890ab,kvm=off,-amd-stibp'/>
</qemu:commandline>
What you're doing is copying the CPU arguments you found in the log and adding them to the qemu:commandline section, with a twist: appending -amd-stibp, which instructs qemu to remove that CPU flag.
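To confirm the override actually made it onto the running qemu command line after starting the VM, something like this works (a generic check, not from my original notes):
#Print the -cpu argument of the running qemu process; it should include -amd-stibp
ps -eo args | grep '[q]emu-system' | grep -o -- '-cpu [^ ]*'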
This did the trick for me!
I’ve once again switched from Proxmox to Arch Linux for my desktop machine. Both use KVM so it’s really just a matter of using the different VM manager syntax (virt-manager vs qm.) I used my notes from my previous stint with Arch, my article on GPU Passthrough in Proxmox as well as a thorough reading of the Arch wiki’s PCI Passthrough article.
Configure GRUB to load the necessary IOMMU modules at boot: append amd_iommu=on iommu=pt to the end of GRUB_CMDLINE_LINUX_DEFAULT (change accordingly if you have Intel instead of AMD)
sudo vim /etc/default/grub
...
GRUB_CMDLINE_LINUX_DEFAULT="loglevel=3 amd_iommu=on iommu=pt"
Regenerate the GRUB config (Arch has no update-grub wrapper by default, so use grub-mkconfig):
sudo grub-mkconfig -o /boot/grub/grub.cfg
Reserve the GPU you wish to pass through to a VM for use with the vfio kernel driver (so the host OS doesn’t interfere with it)
Run lspci -v and look for your card. Mine was 01:00.0 & 01:00.1. You can omit the part after the decimal to include them both in one go, so in that case it would be 01:00.
Run lspci -n -s <PCI address from above> to obtain the vendor IDs:
lspci -n -s 01:00
01:00.0 0300: 10de:1b81 (rev a1)
01:00.1 0403: 10de:10f0 (rev a1)
echo "options vfio-pci ids=10de:1b81,10de:10f0" >> /etc/modprobe.d/vfio.conf
Reboot the host to put the kernel / drivers into effect.
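Once the host is back up, it's worth confirming the card is actually bound to vfio-pci before building the VM; adjust the PCI address to match yours:
#"Kernel driver in use:" should read vfio-pci for both the GPU and its audio function
lspci -nnk -s 01:00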
sudo pacman -Sy libvirt virt-manager dnsmasq
sudo systemctl enable libvirtd
sudo systemctl start libvirtd
Assuming you're using NetworkManager for your connections, create a bridge (thanks to cyberciti.biz & the Arch wiki for information on how to do so.) Replace the interface names with ones corresponding to your machine:
sudo nmcli connection add type bridge ifname br0 stp no
sudo nmcli connection add type bridge-slave ifname enp4s0 master br0
sudo nmcli connection show #Make note of the active connection name
sudo nmcli connection down "Wired connection 2" #from above
sudo nmcli connection up bridge-br0
Create a second bridge bound to lo for host-only communication. Change the IP as desired:
sudo nmcli connection add type bridge ifname br99 stp no ip4 192.168.2.1/24
sudo nmcli connection add type bridge-slave ifname lo master br99
sudo nmcli connection up bridge-br99
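A quick way to confirm both bridges came up before moving on (same interface names as above):
#Both bridge connections should be listed as active, with br99 holding 192.168.2.1
nmcli connection show --active
ip -br addr show br0
ip -br addr show br99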
When creating the passthrough VM, make sure chipset is Q35.
Set the CPU model to host-passthrough (type it in; there is no dropdown for it.)
When adding disks / other devices, set the device model to virtio
Add your GPU by going to Add Hardware and finding it under PCI Host Device.
If your passthrough VM is going to be windows based, some tweaks are required to get the GPU to work properly within the VM.
Later versions of Windows 10 instantly bluescreen with kmode_exception_not_handled unless you pass an option to ignore MSRs. Add the kvm ignore_msrs=1 option in /etc/modprobe.d/kvm.conf to do so. Optionally add the report_ignored_msrs=0 option as well to squelch the massive amount of kernel messages generated every time an MSR is ignored.
echo "options kvm ignore_msrs=1" >> /etc/modprobe.d/kvm.conf #Optional - ignore kernel messages from ignored MSRs echo "options kvm report_ignored_msrs=0" >> /etc/modprobe.d/kvm.conf
Reboot to make those changes take effect.
Use the virsh edit command to make some tweaks to the VM configuration. We need to hide the fact that this is a VM, otherwise the GPU drivers will not load and will throw Error 43. We need to add a vendor_id in the hyperv section, and create a kvm section enabling hidden state, which hides certain CPU flags the drivers use to detect whether they're running in a VM.
sudo virsh edit <VM_NAME>
<features>
  <hyperv>
    ...
    <vendor_id state='on' value='1234567890ab'/>
    ...
  </hyperv>
  ...
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
If you operate on a multi-core system such as my AMD Ryzen Threadripper, then you will want to optimize your CPU core configuration in the VM per the CPU pinning section in the Arch Wiki.
Determine your CPU topology by running lscpu -e and lstopo. The important things to look for are the CPU number and core number. On my box, it looks like this:
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ    MINMHZ
0   0    0      0    0:0:0:0       yes    3400.0000 2200.0000
1   0    0      1    1:1:1:0       yes    3400.0000 2200.0000
2   0    0      2    2:2:2:0       yes    3400.0000 2200.0000
3   0    0      3    3:3:3:0       yes    3400.0000 2200.0000
4   0    0      4    4:4:4:1       yes    3400.0000 2200.0000
5   0    0      5    5:5:5:1       yes    3400.0000 2200.0000
6   0    0      6    6:6:6:1       yes    3400.0000 2200.0000
7   0    0      7    7:7:7:1       yes    3400.0000 2200.0000
8   0    0      8    8:8:8:2       yes    3400.0000 2200.0000
9   0    0      9    9:9:9:2       yes    3400.0000 2200.0000
10  0    0      10   10:10:10:2    yes    3400.0000 2200.0000
11  0    0      11   11:11:11:2    yes    3400.0000 2200.0000
12  0    0      12   12:12:12:3    yes    3400.0000 2200.0000
13  0    0      13   13:13:13:3    yes    3400.0000 2200.0000
14  0    0      14   14:14:14:3    yes    3400.0000 2200.0000
15  0    0      15   15:15:15:3    yes    3400.0000 2200.0000
16  0    0      0    0:0:0:0       yes    3400.0000 2200.0000
17  0    0      1    1:1:1:0       yes    3400.0000 2200.0000
18  0    0      2    2:2:2:0       yes    3400.0000 2200.0000
19  0    0      3    3:3:3:0       yes    3400.0000 2200.0000
20  0    0      4    4:4:4:1       yes    3400.0000 2200.0000
21  0    0      5    5:5:5:1       yes    3400.0000 2200.0000
22  0    0      6    6:6:6:1       yes    3400.0000 2200.0000
23  0    0      7    7:7:7:1       yes    3400.0000 2200.0000
24  0    0      8    8:8:8:2       yes    3400.0000 2200.0000
25  0    0      9    9:9:9:2       yes    3400.0000 2200.0000
26  0    0      10   10:10:10:2    yes    3400.0000 2200.0000
27  0    0      11   11:11:11:2    yes    3400.0000 2200.0000
28  0    0      12   12:12:12:3    yes    3400.0000 2200.0000
29  0    0      13   13:13:13:3    yes    3400.0000 2200.0000
30  0    0      14   14:14:14:3    yes    3400.0000 2200.0000
31  0    0      15   15:15:15:3    yes    3400.0000 2200.0000
From the above output I see my CPU core 0 is shared by CPUs 0 & 16, meaning CPU 0 and CPU 16 (as seen by the Linux kernel) are hyperthreaded to the same physical CPU core.
Especially for gaming, you want to keep all threads on the same CPU cores (for multithreading) and on the same CPU die (on my Threadripper, cores 0-7 reside on one physical die and cores 8-15 on the other, within the same socket.)
In my case I want to dedicate one CPU die to my VM along with its accompanying hyperthreads (CPUs 0-7 & hyperthreads 16-23). You can accomplish this using the virsh edit command and creating a cputune section (make sure the vcpu count matches the number of CPUs you're pinning.) Also edit the CPU mode with the proper topology of 1 socket, 1 die, 8 cores with 2 threads. Lastly, configure memory to only come from the NUMA node of the CPU cores your VM is using (read here for more info.)
sudo virsh edit <VM_NAME>
<domain type='kvm'>
...
<vcpu placement='static' cpuset='0-7,16-23'>16</vcpu>
<cputune>
<vcpupin vcpu='0' cpuset='0'/>
<vcpupin vcpu='1' cpuset='16'/>
<vcpupin vcpu='2' cpuset='1'/>
<vcpupin vcpu='3' cpuset='17'/>
<vcpupin vcpu='4' cpuset='2'/>
<vcpupin vcpu='5' cpuset='18'/>
<vcpupin vcpu='6' cpuset='3'/>
<vcpupin vcpu='7' cpuset='19'/>
<vcpupin vcpu='8' cpuset='4'/>
<vcpupin vcpu='9' cpuset='20'/>
<vcpupin vcpu='10' cpuset='5'/>
<vcpupin vcpu='11' cpuset='21'/>
<vcpupin vcpu='12' cpuset='6'/>
<vcpupin vcpu='13' cpuset='22'/>
<vcpupin vcpu='14' cpuset='7'/>
<vcpupin vcpu='15' cpuset='23'/>
<emulatorpin cpuset='8-15,24-31'/>
</cputune>
...
<cpu mode='host-passthrough' check='none'>
<topology sockets='1' dies='1' cores='8' threads='2'/>
<feature policy='require' name='topoext'/>
<numa>
<cell id='0' cpus='0-15' memory='16777216' unit='KiB'/>
</numa>
</cpu>
...
</domain>
Non-uniform memory access is essential for 1st and 2nd gen Ryzen chips. It turns out that by default my motherboard hid the real NUMA configuration from the operating system. Remedy this by changing the BIOS setting to set Memory Interleaving = Channel (for my ASRock X399 motherboard it’s in CBS / DF options.) See here: https://www.reddit.com/r/Amd/comments/6vrcq0/psa_threadripper_umanuma_setting_in_bios/
After changing the BIOS setting, lstopo now shows the proper configuration.
Change CPU frequency setting to use performance mode:
sudo pacman -S cpupower
sudo cpupower frequency-set -g performance
Append default_hugepagesz=1G hugepagesz=1G hugepages=16 to the kernel line in /etc/default/grub, then regenerate the config:
sudo grub-mkconfig -o /boot/grub/grub.cfg
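After the reboot you can confirm the hugepages were actually reserved:
#HugePages_Total should read 16 with a Hugepagesize of 1048576 kB
grep Huge /proc/meminfo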
The Arch Wiki mentions running qemu-system-x86_64 with taskset and chrt, but doesn't mention how to do so if you're using virt-manager. Fortunately this reddit thread outlined how to accomplish it: libvirt hooks. Create the following script, place it in /etc/libvirt/hooks/qemu, change the VM variable to match the name of your VM, mark the new file as executable (chmod +x /etc/libvirt/hooks/qemu), and restart libvirtd.
#!/bin/bash
#Hook to change VM to FIFO scheduling to decrease latency
#Place this file in /etc/libvirt/hooks/qemu and mark it executable
#Change the VM variable to match the name of your VM
VM="win10"
if [ "$1" == "$VM" ] && [ "$2" == "started" ]; then
if pid=$(pidof qemu-system-x86_64); then
chrt -f -p 1 $pid
echo $(date) changing CPU scheduling to FIFO for VM $1 pid $pid >> /var/log/libvirthook.log
else
echo $(date) Unable to acquire PID of $1 >> /var/log/libvirthook.log
fi
fi
#Additional debug
#echo $(date) libvirt hook arg1=$1 arg2=$2 arg3=$3 arg4=$4 pid=$pid >> /var/log/libvirthook.log
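With the hook in place, you can confirm it fired after starting the VM (SCHED_FIFO with priority 1 is what the script sets):
#Check the scheduling policy the hook applied to the running qemu process
chrt -p $(pidof qemu-system-x86_64)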
Update 7/28/20: I no longer do this in favor of the qemu hook script above, which gives the qemu process FIFO priority 1 on the cores it needs. I'm leaving this section here for historical/additional tweaking purposes.
Update 6/28/20: Additional tuning since I was having some stuttering and framerate issues. Also read here about the emulatorpin option
Dedicate CPUs to the VM (the host will not use them) by appending the isolcpus, nohz_full & rcu_nocbs kernel parameters to /etc/default/grub
...
GRUB_CMDLINE_LINUX_DEFAULT=... isolcpus=0-7,16-23 nohz_full=0-7,16-23 rcu_nocbs=0-7,16-23
...
Update grub:
sudo grub-mkconfig -o /boot/grub/grub.cfg
Reboot, then check if it worked:
cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-linux root=/dev/mapper/arch-root rw loglevel=3 amd_iommu=on iommu=pt isolcpus=0-7,16-23 nohz_full=0-7,16-23 rcu_nocbs=0-7,16-23
taskset -cp 1
pid 1's current affinity list: 8-15,24-31
You can still tell programs to use the CPUs the VM has manually with the taskset command:
chrt -r 1 taskset -c <cores to use> <name of program/process>
Update 7/8/2020: I found this article and this reddit thread (and this one) on how to use pulseaudio in your guest VM to get low latency guest audio piped to the host machine.
Edit /etc/libvirt/qemu.conf: uncomment the line #user = "root" and replace "root" with your username.
Edit /etc/pulse/daemon.conf and uncomment the following lines (remove the semicolon):
;default-sample-rate = 44100
;alternate-sample-rate = 48000
Note: Change VM audio settings to match 44100 sample rate
Edit /etc/pulse/default.pa and append auth-anonymous=1 to the load-module module-native-protocol-unix line:
load-module module-native-protocol-unix auth-anonymous=1
Then restart pulseaudio:
pulseaudio -k
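To make sure pulseaudio came back up and the native socket qemu will point at exists (UID 1000 is an example, matching the XML below):
#Confirm the daemon restarted and the native protocol socket is present
pactl info | grep "Server Name"
ls -l /run/user/1000/pulse/native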
Remove all audio devices from the virtual hardware details pane (left side of the VM info view).
Edit XML via virsh edit <VM_NAME>
Make sure the top line reads:
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
Add the following after </devices> (bottom of file)
<qemu:commandline>
<qemu:arg value='-device'/>
<qemu:arg value='ich9-intel-hda,bus=pcie.0,addr=0x1b'/>
<qemu:arg value='-device'/>
<qemu:arg value='hda-micro,audiodev=hda'/>
<qemu:arg value='-audiodev'/>
<qemu:arg value='pa,id=hda,server=unix:/run/user/1000/pulse/native'/>
</qemu:commandline>
Replace /user/1000 with the UID of your user (output of the id command). For reference, here is the relevant portion of my final VM XML:
<domain type='kvm' id='1' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
...
<memory unit='KiB'>16777216</memory>
<currentMemory unit='KiB'>16777216</currentMemory>
<memoryBacking>
<hugepages/>
</memoryBacking>
<vcpu placement='static'>16</vcpu>
<cputune>
<vcpupin vcpu='0' cpuset='0'/>
<vcpupin vcpu='1' cpuset='16'/>
<vcpupin vcpu='2' cpuset='1'/>
<vcpupin vcpu='3' cpuset='17'/>
<vcpupin vcpu='4' cpuset='2'/>
<vcpupin vcpu='5' cpuset='18'/>
<vcpupin vcpu='6' cpuset='3'/>
<vcpupin vcpu='7' cpuset='19'/>
<vcpupin vcpu='8' cpuset='4'/>
<vcpupin vcpu='9' cpuset='20'/>
<vcpupin vcpu='10' cpuset='5'/>
<vcpupin vcpu='11' cpuset='21'/>
<vcpupin vcpu='12' cpuset='6'/>
<vcpupin vcpu='13' cpuset='22'/>
<vcpupin vcpu='14' cpuset='7'/>
<vcpupin vcpu='15' cpuset='23'/>
<emulatorpin cpuset='8-15,24-31'/>
</cputune>
<numatune>
<memory mode='strict' nodeset='0'/>
</numatune>
...
<features>
...
<hyperv>
...
<vendor_id state='on' value='1234567890ab'/>
</hyperv>
<kvm>
<hidden state='on'/>
</kvm>
...
</features>
<cpu mode='host-passthrough' check='none'>
<topology sockets='1' dies='1' cores='8' threads='2'/>
<feature policy='require' name='topoext'/>
<numa>
<cell id='0' cpus='0-15' memory='16777216' unit='KiB'/>
</numa>
</cpu>
...
<qemu:commandline>
<qemu:arg value='-device'/>
<qemu:arg value='ich9-intel-hda,bus=pcie.0,addr=0x1b'/>
<qemu:arg value='-device'/>
<qemu:arg value='hda-micro,audiodev=hda'/>
<qemu:arg value='-audiodev'/>
<qemu:arg value='pa,id=hda,server=unix:/run/user/1000/pulse/native'/>
</qemu:commandline>
</domain>
I’m very pleased with my current setup. It works well!
My install notes to get Arch Linux set up just the way I like it, June 2020 edition. Reference: https://wiki.archlinux.org/index.php/Installation_guide
Change to dvorak layout:
loadkeys dvorak
Sync NTP time:
timedatectl set-ntp true
Configure disk:
fdisk
#create separate efi partition, LVM root & swap
pvcreate <dev>
vgcreate arch <dev>
lvcreate -L+2G arch -n swap
lvcreate -l100%FREE -n root arch
Initialize swap:
mkswap /dev/arch/swap
swapon /dev/arch/swap
Format & mount root:
mkfs.ext4 /dev/arch/root
mount /dev/arch/root /mnt
Create EFI partition:
mkdosfs -F32 <partition 1>
mkdir /mnt/efi
mount <partition 1> /mnt/efi
Make mirrorlist use only xmission:
sed -i 's/^Server/#Server/g;s/#Server\(.*xmission.*\)/Server\1/g' /etc/pacman.d/mirrorlist
Install base system plus extra packages:
pacstrap /mnt base linux linux-firmware lvm2 efibootmgr samba vim htop networkmanager inetutils man-db man-pages texinfo openssh grub
Generate fstab:
genfstab -U /mnt >> /mnt/etc/fstab
Enter new environment chroot:
arch-chroot /mnt
Set timezone:
ln -sf /usr/share/zoneinfo/America/Boise /etc/localtime
Configure en_US locales:
sed -i 's/^#en_US\(.*\)/en_US\1/g' /etc/locale.gen
locale-gen
Make dvorak layout permanent:
echo "KEYMAP=dvorak" > /etc/vconsole.conf
Set hostname:
echo "_HOSTNAME_" > /etc/hostname
echo "127.0.1.1 _HOSTNAME_._DOMAIN_ _HOSTNAME_" >> /etc/hosts
Enable lvm2 hook for initial ramdisk (boot):
sed -i 's/HOOKS=(.*\<block\>/& lvm2/' /etc/mkinitcpio.conf
Generate initial ramdisk:
mkinitcpio -P
Set password for root user:
passwd
Install GRUB (EFI):
grub-install --target=x86_64-efi --efi-directory=/efi --bootloader-id=GRUB
grub-mkconfig -o /boot/grub/grub.cfg
Enable networking & SSH on bootup:
systemctl enable NetworkManager sshd
Configure NTP:
pacman -S ntp
#modify /etc/ntp.conf for timeservers as desired
systemctl enable ntpd
Exit chroot & reboot:
exit
reboot
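Not part of my original notes, but a quick post-reboot sanity pass looks something like this:
#Confirm hostname, keymap/locale, and that the enabled services came up
hostnamectl
localectl status
systemctl status NetworkManager sshd ntpd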
I started getting SSL errors with my Zimbra mail server despite having a valid SSL certificate everywhere I knew to check. When I tried to use zmcontrol status, I got this error:
Unable to start TLS: SSL connect attempt failed error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed when connecting to ldap master.
Eventually I found this blog post explaining the problem – it’s with the LDAP component in Zimbra. You have to switch it from ldap to ldaps. Why did this change? I do not know.
sudo -u zimbra bash
ZIMBRA_HOSTNAME=_your_mail_server_dns_hostname_
zmlocalconfig -e ldap_master_url=ldaps://$ZIMBRA_HOSTNAME:636
zmlocalconfig -e ldap_url=ldaps://$ZIMBRA_HOSTNAME:636
zmlocalconfig -e ldap_starttls_supported=0
zmlocalconfig -e ldap_port=636
zmcontrol stop
zmcontrol start
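A generic way to double-check that the LDAP listener is serving the expected certificate on 636 (not part of the original fix; uses the same ZIMBRA_HOSTNAME variable):
#Inspect the certificate presented on the LDAPS port; look for "Verify return code: 0 (ok)"
echo | openssl s_client -connect $ZIMBRA_HOSTNAME:636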
This did the trick. The errors went away.