Bengts 64bit Fedora Linux Server

Some history

I created a new Linux page when I upgraded to 64bit Fedora 24 with a fresh install, after realizing I had “Extreme I/O slowdown with PAE kernel”.

My disk write speeds could be as slow as 5MB/s with 32bit when the machine had been running for a while. After a fresh boot things were running smoothly, but as more and more of the 16GB RAM got used, things would slow down seriously with only 1GB lowmem in the PAE kernel. This would for example manifest itself as very slow dnf/yum/rpm operations.

Fedora 24 64bit installation

create install media

The first issue was to get a working install USB stick for Fedora. First I tried the server edition, but the USB stick did not boot correctly and I realized that I would probably be better off with the workstation live USB instead. So I tried the default GNOME live USB, but it failed to initialize my nvidia gfx card correctly. Then I tried some of the other Fedora spins and ended up using the minimal LXDE spin.

The plan is to use UEFI GPT boot (my first), dual booting with my old Fedora/Windows MBR grub dual boot. This makes the new Fedora installation more standalone and lets me use it as a trial on my desktop machine. I should then be able to work on the desktop machine with a new SSD for the server. This SSD will be moved to the server machine when ready. The plan is then to remove the Fedora 23 PAE partition on the desktop after a while.

I used the fedora live usb creation software to create the bootable usb-stick…

os installation

I booted into the usb stick and installed on my trial spare disk in my usb3 disk dock:

  • I first changed the partition table to gpt in windows before doing this to support uefi.
  • I selected Swedish but I used US English with Swedish keyboard on the real server.
  • I tried the default lvm partitioning but used manual partitioning with ext4 on the real server. No separate /home partition. I have had recovery issues with lvm before, and I realized that things like gparted still do not support lvm.
  • I created a user and realized that the uids start at 1000 and not 500 nowadays. I will probably force uids after installation using adduser instead.

Some configuration after reboot as root:

# vi /etc/dnf/dnf.conf # Add "fastestmirror=true"
# dnf update
# dnf install gedit geany firefox yumex-dnf system-config-users system-config-services system-config-repo
# groupadd -g <uid> <user> && useradd -u <uid> -g <user> <user> && passwd <user>
# vi /etc/ssh/sshd_config # Set "PermitRootLogin no"
# service sshd start && chkconfig sshd on
# systemctl start rc-local && systemctl enable rc-local && touch /etc/rc.d/rc.local
# hostnamectl status # Used for changing hostname later
# vi /etc/lxdm/lxdm.conf # For autologin, see

Add yum/dnf repo:

# dnf install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

Some commands for modifying a user's uid/gid, since new uids start at 1000 nowadays:

usermod -u <NEWUID> <LOGIN>    
groupmod -g <NEWGID> <GROUP>
find / -user <OLDUID> -exec chown -h <NEWUID> {} \;
find / -group <OLDGID> -exec chgrp -h <NEWGID> {} \;
usermod -g <NEWGID> <LOGIN>
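As a concrete worked example (user name and data paths are placeholders, not from my setup), moving a user from the old 500 to 1000 could look like:

```
# usermod -u 1000 bob
# groupmod -g 1000 bob
# find /home /data -user 500 -exec chown -h 1000 {} \;
# find /home /data -group 500 -exec chgrp -h 1000 {} \;
# usermod -g 1000 bob
# id bob # Verify the new uid/gid
```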

Some standard users and groups seem to be lacking. Maybe because I did a workstation installation.

TBS 6280 PCIe DVB-T2

A PCIe card for recording DVB-T2 TV. There is Linux driver support from the manufacturer, which means that you have to recompile the driver on every kernel update. Official Linux drivers and open source drivers can be downloaded, and there is also a forum.

You will need to disable secure boot in the BIOS for UEFI boot to load 3rd party drivers. This may be solved by selecting “Other OS” instead of “Windows” in the boot options.

Roughly following the instructions, I did like this as root to build and install the official drivers (old):

dnf install kernel-headers kernel-devel
mkdir tbs
cd tbs
tar xvjf linux-tbs-drivers.tar.bz2
cd linux-tbs-drivers/
./v4l/ # For 64bit and ./v4l/ for 32bit, see docs for other kernels.
make -j6
mv /lib/modules/$(uname -r)/kernel/drivers/media/ ~/media_$(uname -r) # Will fix some issues like symbol mismatches and in my case a driver crash in dmesg registering SAA716x dvb adapter.
make install
dmesg | grep DVB

The cards should be possible to configure in mythtvsetup now. After kernel upgrades you will need to remove the “linux-tbs-drivers” folder and unpack again, make clean does not seem to work.
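Based on the steps above, a clean rebuild after a kernel update can be sketched like this (paths assumed from the build steps; the v4l arch script is the one mentioned earlier):

```
# cd ~/tbs
# rm -rf linux-tbs-drivers
# tar xvjf linux-tbs-drivers.tar.bz2
# cd linux-tbs-drivers
# (run the v4l arch script for your kernel as above)
# make -j6
# mv /lib/modules/$(uname -r)/kernel/drivers/media/ ~/media_$(uname -r)
# make install
```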

Nowadays it seems like using the open source drivers is the way to keep up with kernel changes:

dnf install perl-Proc-ProcessTable patchutils
mkdir tbs && cd tbs
git clone
git clone --depth=1 -b latest ./media
cd media_build
make dir DIR=../media
make distclean
make -j6
mv /lib/modules/$(uname -r)/kernel/drivers/media/ ~/media_$(uname -r) # Maybe still needed to fix install mismatch issues?
make install

Install firmware:

tar jxvf tbs-tuner-firmwares_v1.0.tar.bz2 -C /lib/firmware/

Update using: (Does not work anymore since kernel 5.7.15)

cd tbs/media
git remote update && git pull
git clean -ffdx
cd ../media_build
git remote update && git pull
git clean -ffdx
make dir DIR=../media
make distclean
echo -e "#define NEED_FWNODE_GETNAME 1" > ./v4l/config-mycompat.h # Fix undefined fwnode_get_name on kernel 5.6 fedora 30
sed -i '/VIDEO_OV9650/d' ./v4l/versions.txt && sed -i '/9.255.255/a VIDEO_OV9650' ./v4l/versions.txt # Fix undefined __devm_regmap_init_sccb on kernel 5.6/5.7 fedora 30/31 by skipping OV9650 build
make -j6 # I actually tried compiling without disabling OV9650, then I had to run "make distclean" and "make -j6" again to make it work. Needed?
sudo mv /lib/modules/$(uname -r)/kernel/drivers/media/ ~/media_$(uname -r) # Still needed to fix install mismatch issues!?
sudo make install
sudo reboot

I've really had issues with kernel 5.7; in the forum they suggest downloading precompiled files instead, but that does not work either. It failed the same way as the compilation for 5.7.15 did. Therefore I eventually decided to try:

Hauppauge WinTV Dual HD

After issues with compiling for tbs on newer kernels I decided to try a Hauppauge WinTV Dual HD. Modern kernels have the drivers included. You may need to install a firmware file and you may also want to set the transfer mode to bulk.

My device ID did not match those on that page and it was stuck repeatedly downloading firmware in dmesg. It was fixed by downloading the firmware on the page above:

lsusb -v | grep Haup
Bus 001 Device 005: ID 2040:8265 Hauppauge dualHD
cp dvb-demod-si2168-b40-01.fw /usr/lib/firmware/dvb-demod-si2168-b40-01.fw

But the firmware I had was actually newer, “B 4.0.25” instead of “B 4.0.11”. It may be interesting to look for other firmwares if things do not work well.

When I tried to just use this tuner instead of tbs I could only tune DVB-T2 channels. It turns out that the tbs card autoswitched between T and T2, but for this card I need to set T2 in mythtv-setup. It also seems like you need to create a new source and search for channels again for it to store T/T2 info per channel/multiplex properly. For full tuning, do not forget to set your country in the hidden menu. I tried to fix the database for the old source but ended up migrating to the new source.

Then I deleted the cards and sources and recreated them because EIT data did not get updated. I still had issues with SVT1/2/B/K, but I think this was because EIT for these overwrote the ones in use. Only enable EIT for channels in use. See MythTV EIT Issues below.

grub2 boot config

This is a bit different on UEFI:

# vi /boot/efi/EFI/fedora/grub.cfg # Changes will not survive a new kernel or grub2-mkconfig
# vi /etc/default/grub # Edit defaults for grub2-mkconfig
# grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg # Update kernel boot list

efi partition too small

Trying to fix a full boot partition, which failed to install a new kernel rpm properly, by: (Wrong solution, see end of chapter!)

  1. Creating bootable fedora live usb and booting from it.
  2. You MUST disable automatic suspension in “Settings -> Power -> Automatic Suspend”. The live session hangs on resume, which you don't want during disk operations.
  3. Installing gparted to check which devices to use.
  4. Mounting a disk with enough data “mount /dev/sdc1 /mnt”.
  5. Using dd to backup disk to a file “dd if=/dev/sda of=/mnt/backup.img”
  6. Using gparted to slightly shrink/move sda2 (/boot) and expand sda1 (/boot/efi).
  7. Note that the maximum size I could give the fat16 partition was 256MB; otherwise gparted said that fat32 was needed. I found no way of converting, so I guess I would have to rsync/format/rsync back. Set the boot & esp flags, maybe name it “EFI System Partition” and give it the UUID “F71B-C42F” again, which does not seem easy. See below.
  8. Reinstall the kernel that failed to update the boot config due to disk space issues: “dnf reinstall kernel-core-6.9.9-100.fc39.x86_64”. It still ran out of disk space, which seems quite strange.

Eventually I found that this apparently was a Fedora 39 problem when upgrading from a very old installation. The solution was:

# mv "/boot/efi/$(cat /etc/machine-id)" "/boot/efi/$(cat /etc/machine-id)_disabled"
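Before and after the workaround it can help to check what is actually filling the ESP and then retry the failed kernel install; something like:

```
# df -h /boot/efi # Check free space on the ESP
# du -sh /boot/efi/* # See which directories are big
# dnf reinstall kernel-core-$(uname -r) # Retry the kernel that failed
```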

efi partition to fat32

These instructions were saved for possible future need, if 256MB is not enough for /boot/efi:

Mount the ESP (if it's not mounted already, e.g. on /boot or /boot/efi):

# mount /dev/sdx1 /mnt # replace sdx1 with ESP

Make a backup of its contents:

# mkdir ~/esp
# rsync -av /mnt/ ~/esp/

Unmount the ESP:

# umount /mnt

Delete and recreate the ESP:

# gdisk /dev/sdx # replace sdx with disk containing ESP
p (list partitions)
(ensure the ESP is the first partition)
d (delete partition)
1 (select first partition)
n (create partition)
Enter (use default partition number, should be 1)
Enter (use default first sector, should be 2048)
Enter (use default last sector, should be all available space)
EF00 (hex code for EFI system partition)
w (write changes to disk and exit)

Format the ESP:

# partprobe /dev/sdx
# mkfs.fat -F32 /dev/sdx1

Restore the ESP's contents:

# mount /dev/sdx1 /mnt
# rsync -av ~/esp/ /mnt

Update EFI entry in /etc/fstab

# blkid | grep EFI
# nano /etc/fstab
UUID=XXXX-XXXX /boot/efi vfat umask=0077 0 2 # Replace with UUID from blkid and use your ESP mount point

If you use GRUB, you may need to update or regenerate its configuration file, e.g. by running grub2-mkconfig on Fedora (update-grub on Debian).

nvidia [old]

This is old information from when I used the GeForce 210 card. Nowadays I'm using a 1660 Ti with the nouveau driver. I removed all the 340xx stuff and it worked out of the box after a reboot.

Install something like this:

# dnf install xorg-x11-drv-nvidia akmod-nvidia "kernel-devel-uname-r == $(uname -r)"
# reboot?
# dnf remove kmod-nvidia-<current-kernel> # May be needed after fedora upgrade
# akmods --force
# reboot

Note that after the upgrade to fedora 25 I had to remove all nvidia stuff and do the above again. Do not use nvidia-xconfig as the xorg.conf written will prevent booting into X; the file should be empty.

After the last update to kernel 5.11 I had to run nvidia-xconfig to get fullhd output and acceleration on my GeForce 210 card again. This was rather the opposite of my previous experience where I had to restore/remove xorg.conf after doing this. Note that for this card I have to run the old 340xx-340.108 nvidia driver.

More info on rpmfusion drivers:


vncserver remote access

After installing fedora it can be nice to setup a vnc-server so you can access remotely with for example TightVNC on Windows. To start an extra separate desktop for another user you can:

# yum install tigervnc tigervnc-server
$ su - <user-with-vncdesktop>
$ vncserver
Enter password and test TightVNC.
$ killall Xvnc
$ vi ~/.vnc/xstartup # Uncomment two lines for ordinary desktop. Not needed for fedora 24.
# cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@:1.service
# vi /etc/systemd/system/vncserver@:1.service
Use "simple" type with "-fg" to vncserver, replace "<USER>" with your user, empty "User=" for root, start using runuser, note that %i=":1" and maybe use "-geometry 1600x900":
ExecStartPre=/sbin/runuser -l <USER> -c "/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :"
ExecStart=/sbin/runuser -l <USER> -c "/usr/bin/vncserver -geometry 1600x900 -fg %i"
ExecStop=/sbin/runuser -l <USER> -c "/usr/bin/vncserver -kill %i"
# vi /etc/sysconfig/desktop # Not needed for fedora 24.
PREFERRED=$(type -p lxsession) # Will use LXDE as default desktop

I could not get it to work; lots of issues with “no session for pid” and such, until I used runuser for all Exec lines. An updated ExecStartPre was needed for F27 as well. Then it is good to remove the “@” from the unit file name and replace “%i” with “:1” etc.
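Putting the pieces together, a complete non-template unit (e.g. /etc/systemd/system/vncserver-bengt.service — the file name and user are placeholders) might look like this sketch:

```ini
[Unit]
Description=Remote desktop service (VNC) for <USER>
After=network.target

[Service]
Type=simple
ExecStartPre=/sbin/runuser -l <USER> -c "/usr/bin/vncserver -kill :1 > /dev/null 2>&1 || :"
ExecStart=/sbin/runuser -l <USER> -c "/usr/bin/vncserver -geometry 1600x900 -fg :1"
ExecStop=/sbin/runuser -l <USER> -c "/usr/bin/vncserver -kill :1"

[Install]
WantedBy=multi-user.target
```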

Now enable and start the configured service:

# firewall-config
# systemctl enable vncserver@:1.service
# systemctl --system daemon-reload && systemctl start vncserver@:1.service

Test with TightVNC that things work as they should. The syntax is “host:[display|:port]”. In my case “vncserver@:1.service” means that port 5900+1 is used. This port can then be tunneled over SSH with for example putty. It is therefore enough to open ssh access in the firewall.

You may need to start “vncconfig” inside the vnc session to enable the clipboard. Maybe in ~/.vnc/xstartup.

mariadb/mysql database

MariaDB, the MySQL fork, is used by services like mythtv and owncloud for database storage.

Install/setup server and phpmyadmin for administration:

# vi /etc/passwd
mysql:x:27:27:MySQL Server:/var/lib/mysql:/bin/bash
# vi /etc/group
# dnf install mariadb mariadb-server phpMyAdmin
# firewall-config # Enable mysql service
# systemctl enable mariadb && systemctl start mariadb
# mysql_secure_installation
# systemctl restart httpd

Now phpMyAdmin should be accessible through http://localhost/phpMyAdmin/ but only when surfing from localhost.

Backup/restore of all databases on server:

# Backup all databases in one file (maybe use --add-locks):
mysqldump -u root -p --all-databases > wholedb.sql
# Restore all databases:
mysql -u root -p < wholedb.sql 
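To automate the backup, a nightly cron entry could be something like the following sketch (the file path and backup destination are placeholders; password-less root login via ~/.my.cnf is assumed, so there is no -p):

```
# /etc/cron.d/mysql-backup (hypothetical file): dump all databases at 03:30
30 3 * * * root mysqldump --all-databases | gzip > /mnt/backup/wholedb_$(date +\%F).sql.gz
```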

apache/httpd web server

Apache is used for serving web pages from your server.


Install as root, configure firewall and configure/enable/start httpd:

# dnf install httpd # Do NOT use "system-config-httpd", it will corrupt your config...
# firewall-config # Enable http&https
# vi /etc/httpd/conf/httpd.conf # Fill in ServerName
# systemctl enable httpd && systemctl restart httpd

Configuration can be found in /etc/httpd and html docs in /var/www/html. And in /usr/share/mythweb for mythweb.


Adding new users can be done this way:

# htpasswd /etc/httpd/htpasswd username # For basic authentication
# htdigest /etc/httpd/htdigest "In the club?" username # For digest authentication
# service httpd restart

To use authentication in a specific directory you can use a file like this in /etc/httpd/conf.d:

<Directory /var/www/html/folder>
  # Allow using .htaccess files in the html folder
  AllowOverride All
  # Require https for access
  SSLRequireSSL
  # File containing passwords
  AuthUserFile /etc/httpd/htdigest
  # Realm, SAME as in the digest file, presented in the login dialog
  AuthName "In the club?"
  # Using Digest (better) or Basic authentication
  AuthType Digest
  # Only allow access for specific users
  Require user user1 user2
</Directory>

Virtual Hosts

To use virtual hosts for different http domains you can use a file like this in /etc/httpd/conf.d/virtual_hosts.conf:

NameVirtualHost *:80
<VirtualHost *:80>
  ServerAlias *
  DocumentRoot /var/www/html
</VirtualHost>

Note that wildcard matches in ServerAlias should be last in the file so they do not always override. And certbot/letsencrypt may have automatically created a host entry in virtual_hosts-le-ssl.conf that overrides all your settings. When changing this, browsers often remember the redirect, so you will need to clear the cache somehow or use another browser.

Also note that the https virtual servers are also defined in /etc/httpd/conf.d/ssl.conf.
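If you want plain http requests redirected to https yourself (instead of letting certbot write it), a minimal sketch of a redirect vhost — the hostname is a placeholder — is:

```apache
<VirtualHost *:80>
  ServerName www.example.com
  Redirect permanent / https://www.example.com/
</VirtualHost>
```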

certbot for https

I'm using certbot with Let's Encrypt to get a certificate for my apache web server.

Installation after ensuring that port 80/443 can reach your machine from the outside:

# dnf install certbot-apache mod_ssl
# certbot --apache -m <mail> -d <,,...> # Default seems to be ok, puts a shared cert in ssl.conf for all hosts
# certbot renew --dry-run # Check that it works correctly
# certbot renew --quiet # Put in a daily cron job
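The daily cron job can also reload apache when a certificate actually renews; a sketch using certbot's --post-hook option (the file name is my own):

```
#!/bin/sh
# /etc/cron.daily/certbot-renew (hypothetical filename)
certbot renew --quiet --post-hook "systemctl reload httpd"
```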

dokuwiki collaboration

I selected dokuwiki which does not require a database and is easy to maintain. There are config issues with selinux, maybe I should switch to the packaged version?

For installation and upgrade instructions see:

# cd /var/www/html
# wget
# tar xzf dokuwiki-stable.tgz
# mv dokuwiki-2014-05-05a dokuwiki
# chown -R apache:apache dokuwiki
# chmod -R og-rwx dokuwiki
# chmod -R u-w dokuwiki
# cd dokuwiki
# chmod -R u+w data conf
# vi /etc/httpd/conf.d/dokuwiki.conf
<Directory /var/www/html/dokuwiki>
  AllowOverride All
# vi .htaccess
Add “SSLRequireSSL” to require https and uncomment the rewrite-stuff.

After this you do the initial configuration in https://localhost/dokuwiki/install.php, then log on as the newly created admin and turn on nice URLs (“.htaccess” for URL rewrite). And fix other wanted settings.

owncloud data sharing

As an alternative to cloud services like dropbox for sharing data you can have your own cloud with owncloud in your web server.

Simply install and configure:

# dnf install owncloud owncloud-client owncloud-httpd owncloud-mysql
# vi /etc/owncloud/config.php
# vi /etc/httpd/conf.d/owncloud.conf # Change if you want a different path on the server
# cp /etc/httpd/conf.d/owncloud-access.conf.avail /etc/httpd/conf.d/z-owncloud-access.conf # After setting up server
# vi /etc/httpd/conf.d/z-owncloud-access.conf # Edit with your wanted access settings
# service httpd restart

For initial setup you may want to visit http://localhost/owncloud instead of editing the config file. Note that editing /etc/httpd/conf.d/owncloud.conf can cause issues when upgrading the packages. But I found no better solution when I wanted a different alias than /owncloud.

Then there are file sync clients for windows, linux, android etc:

Also be sure not to wait too long with upgrades of the server, as you cannot skip “major” releases. The upgrade through the web will then fail after yum upgrade. I had to manually force installation of the 8.1.5 rpms.

Also fixed an issue where a user did not have write access after upgrade this way: sudo -u apache php /usr/share/owncloud/occ files:scan --all

nextcloud data sharing

The owncloud packages for fedora 30 are broken so I moved to nextcloud:

There were issues regarding php versions and the upgrade path with own&nextcloud (fedora was still on nextcloud 10), so I think I will be heading the docker way this time:

Should NOT be needed anymore: Fedora 31 runs cgroups v2, which was incompatible with docker. Either switch to v1 using kernel command line options or use podman instead of docker. Add “systemd.unified_cgroup_hierarchy=0” to GRUB_CMDLINE_LINUX in /etc/default/grub and run “grub2-mkconfig” to start using v1 again. But I will probably try to upgrade to podman at a later stage.

Simply install docker and then run your nextcloud instance:

# dnf install docker
# systemctl start docker
# systemctl enable docker
# docker run -d -p 8080:80 nextcloud # Just for testing using http://localhost:8080 that it runs. Remove -d to see access log. See below for rest of setup.
# docker image save nextcloud | gzip -c > /mnt/data2/backups/backup/nextcloud_docker.img.gz # Backup the docker image that we fetched

We want to use the host mariadb and host storage (/nextcloud):

Find the ip address of the host on the docker0 network from the output and use it below:
# ip addr show docker0
Create nextcloud database:
# mysql -u root -p
# create database nextcloud;
# create user 'nextcloud'@'172.17.%' identified by '<password>'; # Change to ip match for docker0 and wanted password
# grant all on nextcloud.* to 'nextcloud'@'172.17.%';
Now run nextcloud docker changing path to host storage, password and host ip on docker0 to appropriate values for your setup:
# docker run -p 8080:80 -v /host/storage/nextcloud:/var/www/html -e MYSQL_DATABASE=nextcloud -e MYSQL_USER=nextcloud -e MYSQL_PASSWORD=<password> -e MYSQL_HOST= -e SMTP_HOST= nextcloud
Test access using http://localhost:8080 and ensure that an admin user can be created.

Finally we want to add a reverse proxy to the host apache http instance to add ssl encryption:

# vi /etc/httpd/conf.d/virtual_hosts.conf
<VirtualHost *:443>
  ServerName <hostname>
  # Only allowed to be accessed from my lan for now
  <Location />
     Require local
     Require ip
  </Location>
  <IfModule mod_headers.c>
    Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains"
  </IfModule>
  ProxyPreserveHost On
  ProxyRequests off
  ProxyPass /
  ProxyPassReverse /
  RewriteEngine on
  RewriteRule ^/\.well-known/carddav https://%{SERVER_NAME}/remote.php/dav/ [R=301,L]
  RewriteRule ^/\.well-known/caldav https://%{SERVER_NAME}/remote.php/dav/ [R=301,L]
  RewriteRule ^/ - [QSA,L]
  SSLCertificateFile /etc/letsencrypt/live/<hostname>/fullchain.pem
  SSLCertificateKeyFile /etc/letsencrypt/live/<hostname>/privkey.pem
  Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>
# vi /nextcloud/config/config.php
'overwritehost' => '<hostname>',
'overwriteprotocol' => 'https',
'trusted_proxies' =>
array (
  0 => '<hostip>',
),
# service httpd restart # Also restart nextcloud docker instance
Verify ssl access using https://<hostname>

Then you can add something like this in rc.local (after sleep 15), note the -d:

# docker run -d -p 8080:80 -v /host/storage/nextcloud:/var/www/html nextcloud:18

If you have issues, check the log in nextcloud/data/nextcloud.log and try to run the container without “-d”. You still need to upgrade at every major release, which is simpler with docker. You may need to restore an old version.php if you accidentally skip a major version; it seems to be updated even though the upgrade fails.
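A major-version upgrade with docker can then be sketched like this (container ids and the :19 tag are examples, not from my setup):

```
# docker pull nextcloud:19
# docker stop <container id from docker ps>
# docker run -d -p 8080:80 -v /host/storage/nextcloud:/var/www/html nextcloud:19
# docker logs -f <new container id> # Watch the automatic upgrade
```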

If you need to access nextcloud inside the running container you can do something like this to turn off maintenance mode:

# docker exec -u www-data -it <container name from docker ps> /bin/bash
/var/www/html/occ maintenance:mode --off

single file php gallery

After trying different overambitious photo albums I found Single File PHP Gallery, which was perfect as I only wanted to be able to show my photos, not manage them.

Install Single File PHP Gallery in 5min:

# cd /var/www/html
# mkdir photos
# cd photos
# wget #
# unzip
# mkdir _sfpg_data
# chown apache:apache _sfpg_data
# ln -s /data/pictures/* .

And you're done; simply surf to http://your-server/photos to check out your photos.

mythtv for tv recording

This was actually the original reason that I installed Red Hat 6 once upon a time: there was a mythtv rpm repo available and good instructions on how to create a htpc. Nowadays I mainly use the mythtv-backend part for recording tv and kodi on windows for watching.

Install from rpmfusion after creating user mythtv:

# dnf install mythtv mythtv-backend mythweb
# firewall-config # Port 3306 tcp (mysql), 6543-6544 tcp (mythtv) & 1900 udp (upnp)

Lots of stuff to configure, both for backend and frontend.

When migrating the backend, read the instructions and remember to NOT start any mythtv programs before changing the hostname:

# vi ~/.mythtv/config.xml
# /usr/share/mythtv/ --change_hostname --old_hostname="garaget.lan" --new_hostname="dummy.lan"
# /usr/share/mythtv/ --change_hostname --old_hostname="video.localdomain" --new_hostname="garaget.lan"

See misc below for scanning tv channels…

squeezebox for music

I found out that there is a spotify plugin for squeezebox and decided to try a Boom in the kitchen; after a number of years I now have 9 squeezeboxes. They aren't manufactured anymore, but it is possible to use a chromecast as a squeezebox using a plugin in the server. Squeezebox on the server is a very nice solution to get access to your music files, spotify, internet radio and more.

There was a period when the community squeeze project had a fedora repo available. This is no longer the case, but there are still some nice guys updating the server:

Find maintained versions here:

Installation something as follows, installing the nightly rpm build manually:

# vi /etc/passwd
squeezeboxserver:x:486:319:SqueezeBox Server:/usr/share/squeezeboxserver:/sbin/nologin
# vi /etc/group
# wget
# dnf install logitechmediaserver-8.3.1-0.1.1672158254.noarch.rpm
# wget
# dnf install logitechmediaserver-8.5.2-1.noarch.rpm
# ln -s /usr/lib/perl5/vendor_perl/Slim /usr/lib64/perl5/vendor_perl/Slim # Fix for 64bit OS
# firewall-config # Open port 9000 TCP for http, 3483 TCP&UDP, 9005 TCP&UDP for spotify, 49152-49215 TCP for 3x chromecast bridge and 9090 TCP for cli. Note that cc bridge port is dynamic, tick "Use LMS interface and start from port" and set base port 49152.
# systemctl enable squeezeboxserver && systemctl start squeezeboxserver

Check logs with something like “journalctl -u squeezeboxserver.service” or “systemctl status -n1000 squeezeboxserver.service”.

For example, after the upgrade to fedora 35 the “EV YAML::XS” module failed to load according to “journalctl -u squeezeboxserver.service”. I decided to upgrade to the latest 8.2 nightly to be sure to get an updated version. This worked; remember to start by upgrading the rpm when facing issues after a fedora upgrade.

The web server is then available for configuration:
You can skip the account step and continue. Configure the music folder and start a scan. The default spotify plugin won't work with the Squeezebox Boom, so disable it, apply and let the server restart. Then enable “Spotty”, apply and configure your spotify premium account. You may need to check “Show all 3rd party plugins” for the plugin to show. If you have issues with account names it is probably because some account is overloaded and shows the wrong name; the current name mapping is found in the /var/lib/squeezeboxserver/prefs/plugin/spotty.prefs file. Also look for the chromecast plugin and enable it.

I had a lot of issues with rebuffering on Boom and Classic. I have experimented with max bitrate and lame recoding on the server, suspecting my flac files, but with no real success. It seems to work better when wired; I need to redo my network setup to switch. There are some special keys to press on the remote to factory reset older SB hardware.

A solution to reuse sb playlists with other software is to fix the url-encoding of squeezebox playlists using sed, and change the paths respectively:

$ sed -i -e '/^tmp:/ s#%20# #g' -e '/^tmp:/ s#%C3%96#Ö#g' -e '/^tmp:/ s#%C3%B6#ö#g' -e '/^tmp:/ s#%C3%85#Å#g' -e '/^tmp:/ s#%C3%A5#å#g' -e '/^tmp:/ s#%C3%84#Ä#g' -e '/^tmp:/ s#%C3%A4#ä#g' -e "/^tmp:/ s#%27#'#g" -e '/^tmp:/ s#%C3%A9#é#g' -e '/^tmp:/ s#%C3%A0#à#g' -e '/^tmp:/ s#%C3%A7#ç#g' -e 's#^tmp://##g' -e 's#tmp://#file://#g' Bengts.m3u
$ sed 's#/data/music/#Y:/music/#g' Bengts.m3u > BengtsWin.m3u

samba network shares

Use samba for file sharing with windows and other systems supporting the smb protocol.

Install and configure:

# dnf install samba system-config-samba system-config-samba-docs
# system-config-samba
# vi /etc/samba/smb.conf
# firewall-config # Add samba and samba-client services
# systemctl enable smb nmb && systemctl restart smb nmb
# smbstatus # Check status, more: "nmblookup <host>"
# vi /etc/samba/smb.conf
# smbpasswd -a <user>

In fedora 31, samba 4.11 disables SMB1 by default, which stops lots of things from working, like my photo frame for instance. Fix this in the [global] section of /etc/samba/smb.conf by adding:

client min protocol = NT1
server min protocol = NT1
lanman auth = yes
encrypt passwords = yes

nfs network shares

If you want to run a nfs server for sharing files behind a firewall you'll need to fix the ports used. Since fedora 30 the firewall supports nfs better:

# systemctl enable rpcbind nfs-server && systemctl restart rpcbind nfs-server
# firewall-cmd --add-service={nfs,nfs3,mountd,rpc-bind} --permanent
# firewall-cmd --reload
# rpcinfo -p

Edit nfs ports and start nfs services (Old, before fedora 30):

# vi /etc/sysconfig/nfs
STATDARG="-p 2052"
RQUOTAD_PORT=2053 # Not used?
# systemctl enable nfs-server nfs rpcbind rpc-statd && systemctl restart nfs-server nfs rpcbind rpc-statd
# rpcinfo -p
Verify the output and update the nfs service in the firewall:
# firewall-config # Port 111 tcp/udp (rpcbind) & 2049-2052 tcp/udp (nfs extended)

Edit shares:

# dnf install system-config-nfs
# system-config-nfs
# vi /etc/exports
# systemctl restart nfs-server

I had to allow access from port 1024 and higher in system-config-nfs for different nfs clients like Kodi. This translates to the “insecure” option in /etc/exports.

Test from another machine:

# cd /mnt && mkdir test
# showmount -e <host>
# mount <host>:/data test
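For a permanent mount on the client, an fstab entry along these lines can be used (host and paths are placeholders):

```
# /etc/fstab on the client:
<host>:/data  /mnt/data  nfs  defaults,_netdev  0 0
```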

postfix for sending mail

Nowadays you'll need to configure a more secure mail relay if you want to be able to send mail from the server; see the instructions for your ISP (e.g. bahnhof) or for gmail.

Use postfix instead of sendmail:

# systemctl disable sendmail && systemctl stop sendmail
# vi /etc/passwd
# vi /etc/group
# dnf install postfix mailx system-switch-mail
# system-switch-mail
Choose postfix.
# vi /etc/postfix/sasl_passwd
[]:465 mbuser:password
# chmod 600 /etc/postfix/sasl_passwd
# postmap /etc/postfix/sasl_passwd
# vi /etc/postfix/
myhostname =
mydomain =
myorigin = $mydomain
relayhost = []:465
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_security_options =
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.crt
mynetworks_style = subnet
smtp_tls_wrappermode = yes
smtp_tls_security_level = encrypt
smtp_tls_loglevel = 1
inet_interfaces = all
always_add_missing_headers = yes
(Select relayhost according to your ISP.)
#inet_interfaces = localhost
(Commenting inet_interfaces=localhost makes it accessible on all interfaces.)
# firewall-config # Enable smtp and smtps services
# systemctl start postfix && systemctl enable postfix
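Once postfix is running you can verify the relay by sending yourself a test message (the address is a placeholder) and watching the log:

```
# echo "test body" | mailx -s "relay test" someone@example.com
# tail -f /var/log/maillog
```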

Ensure that root-mail ends up somewhere:

# vi /etc/aliases
# newaliases

Use this for debugging mail issues:

# tail -f /var/log/maillog

This for checking and clearing queue:

# postqueue -p
# postsuper -d ALL

pure-ftpd file up&download

The pure-ftpd file server has been working well for me and can be easily installed nowadays:

# dnf install pure-ftpd

Create an ftpuser OS user/group and a ftp folder:

# adduser -d /dev/null -s /sbin/nologin ftpuser
# mkdir /ftp

I'm using “virtual users” mapped to a single ftpuser instead of system users. Note that you can get errors like “[WARNING] Can't login as [joe]: account disabled” in the message log when ftpuser has a lower ID than MinUID in /etc/pure-ftpd/pure-ftpd.conf.

Create /ftp and subfolders that will be mounted with “mount bind” in fstab to select what should be shown on the ftp:

# chmod a+w /data/in
# cd /ftp
# mkdir in archive emulators music pictures videos
# vi /etc/fstab
/data/in          /ftp/in                 none    rw,bind         0 0
/data/archive     /ftp/archive            none    ro,bind         0 0
# mount -a

Change permissions with “chown/chmod/chgrp” to reflect what you want. The point is to only allow ftpuser write permissions for the in-folder.

Add ftp-users with commands like this:

# pure-pw useradd banjo -m -u ftpuser -d /ftp

Passwords are stored in “/etc/pure-ftpd/pureftpd.passwd”. Run “pure-pw mkdb” to update “/etc/pure-ftpd/pureftpd.pdb” from this file if the file is manually changed.
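Some other useful pure-pw subcommands for managing the virtual users (the user name is the example one from above):

```
# pure-pw list # List all virtual users
# pure-pw show banjo # Show settings for one user
# pure-pw passwd banjo -m # Change password and update the pdb directly
```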

Setup the pure-ftpd server as wanted:

# vi /etc/pure-ftpd/pure-ftpd.conf
NoAnonymous                 yes
PureDB                      /etc/pureftpd.pdb
# PAMAuthentication         yes
PassivePortRange            65500 65534
ForcePassiveIP     # Should be possible to resolve to an external ip
UserBandwidth               2000
(Other settings like DontResolve yes and ChrootEveryone yes were already set.)
# systemctl start pure-ftpd && systemctl enable pure-ftpd
# firewall-config # Enable ftp service and tcp port range 65500-65534

To automatically create a readable ftp-index in text files you can link the script “buildftpindex” to, for example, “/etc/cron.weekly”:

# ln -s /usr/local/bin/buildftpindex /etc/cron.weekly/

minecraft server

There are a lot of server wrappers around and I did not know which to pick, so I set up my own solution instead:

# useradd minecraft
# visudo
ALL ALL=(minecraft) NOPASSWD: /home/minecraft/
# su - minecraft
$ mv server.jar server.jar.old
$ wget
$ cd /home/minecraft && java -Xmx1024M -Xms1024M -jar server.jar --nogui
$ vi eula.txt whitelist.json
$ chmod u+x

Then set up a script containing the above java command line and start it using “lxterminal -t minecraft -e sudo -u minecraft /home/minecraft/”. Note that I had issues using an lxde autostart desktop file in ~/.config/autostart until I let the script itself start the terminal. Also ensure that the /home/minecraft directory is backed up. Do not forget to secure your setup, for example by enabling whitelisting.
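A minimal sketch of such a start script, assuming the filename start.sh and reusing the java command line from above (staged in /tmp here so it can be syntax-checked without the minecraft account):

```shell
# Hypothetical /home/minecraft/start.sh wrapping the java command line above
cat > /tmp/start.sh <<'EOF'
#!/bin/sh
cd /home/minecraft || exit 1
exec java -Xmx1024M -Xms1024M -jar server.jar --nogui
EOF
chmod u+x /tmp/start.sh
# Quick syntax check of the generated script
sh -n /tmp/start.sh && echo "syntax ok"
```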

You may need to install newer java versions to run later versions of minecraft. For example “java-latest-openjdk-” will make the “/usr/lib/jvm/java-17-openjdk-” java executable available. Later, 19 was needed…

If you want to run with plugins you may want to look into a plugin-capable server and the actual plugins. For example, to support bedrock clients you can use GeyserMC with floodgate.

$ wget
$ cd minecraft/plugins
$ wget -O floodgate-spigot.jar
$ wget -O Geyser-Spigot.jar
$ # Start server and stop it again to create key
$ cp floodgate/key.pem Geyser-Spigot/

Don't forget to open udp port 19132 in the firewall for bedrock and enable whitelisting via xbox id, for example “fwhitelist add .<xbox-id>”.

terraria server

Using a method similar to the minecraft server, but with the ready-made ryshe/terraria docker container.

Create medium sized world as root:

# mkdir -p $HOME/terraria/world
# docker run -it -p 7777:7777 --rm -v $HOME/terraria/world:/root/.local/share/Terraria/Worlds ryshe/terraria:latest -world /root/.local/share/Terraria/Worlds/gwbt.wld -autocreate 2

Then I logged in through terraria and ran the setup command as instructed.

Create large sized master world as root:

# mkdir -p $HOME/terraria/world2
# docker run -it -p 7776:7777 --rm -v $HOME/terraria/world2:/root/.local/share/Terraria/Worlds ryshe/terraria:latest -world /root/.local/share/Terraria/Worlds/gwbt.wld -autocreate 3 -worldmode master

Then I logged in through terraria and ran the setup command as instructed.

After that you can edit some server settings, like enabling backups, a server password, and DisableLoginBeforeJoin/DisableUUIDLogin for security, and start the server this way in a terminal:

 # vi /root/terraria/world/config.json
 # docker run -it -p 7777:7777 --rm -v $HOME/terraria/world:/root/.local/share/Terraria/Worlds --name terraria ryshe/terraria:latest -world /root/.local/share/Terraria/Worlds/gwbt.wld

Info about settings:

pi-hole for adblocking

Pi-Hole is a DNS-based adblocking service that can be used to remove ads on your whole home network and not just in browsers. Nowadays I'm using adguard in home assistant instead.

Nowadays I'm using pi-hole from the (home assistant) installation but the config info is still relevant. Previously I simply installed pi-hole using “curl -sSL | bash” on my home automation raspberry pi. There were issues upgrading pi-hole without upgrading raspbian, so I had to revert when running nexahome.

I use the pi-hole server as upstream dns in the router and only advertise the router itself as dns for dhcp clients. I have to include the router as dns for the guest network to work properly, as the pi-hole server isn't visible on this network. Using the router also makes it possible to look up dhcp clients on the lan properly. I tried a more advanced/crazy setup where pi-hole referred to the router together with google/opendns as upstream dns (forwarding local requests) with a “loop” back to the pi-hole server. It seemed to work ok but caused some strange issues; for example, home assistant was unable to look up the yr camera (!)

With this setup it is quite ok to disable the ad blocker in my browser and save some cpu cycles. I needed to whitelist some servers for prisjakt, coop handla online and xbox achievements though. This is my whitelist:

pihole -w

Note that the pi-hole sometimes goes crazy, with no stats reporting and heavy cpu use (load average >1 at the top left in the web gui). I couldn't solve this, so I exported the configuration using the teleporter on the settings page, uninstalled/reinstalled the pi-hole docker image and imported the whitelist again. If this happens again I think I need to disable query logging after the reinstall before it gets into this bad state again. It seems to be ftl related and you don't want to use up one core for this…

nut for ups

After the battery in my APC ES 550 got old and it decided to cut the power (!) and start beeping every two weeks when the self-tests failed, I decided on a new ups. I bought the “BlueWalker PW UPS VI 850 SHL” which requires different drivers, so I installed and configured nut. It looks like the fedora packages are kind of broken, so they require special attention.

Install and configure:

# dnf -y install nut nut-client
# nut-scanner -U
driver = "usbhid-ups"
# mkdir /var/run/nut
# chown root:nut /var/run/nut
# chmod 770 /var/run/nut
# vi /etc/ups/nut.conf
# vi /etc/ups/ups.conf
  driver = usbhid-ups
  port = auto
  desc = "BlueWalker PW UPS VI 850 SHL"
# vi /etc/ups/upsd.users
  [upsmaster]
  password = secret
  upsmon master
  [upsslave]
  password = secret
  upsmon slave
# vi /etc/ups/upsmon.conf
MONITOR bluewalker@localhost 1 upsmaster secret master
# killall usbhid-ups upsd upsmon
Nut drops privileges, so we need to allow access to the device as the nut user. To test, use "-u root" first. Then copy this to the proper place and reboot. Seems like this is in place in fedora...
# upsdrvctl -u root start && upsd -u root && upsmon -u root # usbhid-ups -u root -a bluewalker
# upsc bluewalker
$ nut-monitor
# vi /etc/rc.d/rc.local
Start the three commands (upsdrvctl start, upsd, upsmon) from rc.local at boot.
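A sketch of the rc.local addition, reusing the three commands and the “-u root” workaround from above (staged in /tmp here for a syntax check):

```shell
# Hypothetical lines to append to /etc/rc.d/rc.local
cat > /tmp/rc.local.nut <<'EOF'
upsdrvctl -u root start
upsd -u root
upsmon -u root
EOF
# Quick syntax check of the staged fragment
sh -n /tmp/rc.local.nut && echo "syntax ok"
```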

rygel for upnp media

Hoping to get better support for chromecast transcoding I'm trying rygel. It's not working that well, though.

Install and configure:

# dnf install rygel tumbler ffmpegthumbnailer gstreamer1-{ffmpeg,libav,plugins-{good,ugly,bad{,-free,-nonfree}}} --setopt=strict=0
$ gedit ~/.config/rygel.conf
$ ulimit -n 4096
$ rygel

Then you need to enable port 55555/tcp and 1900/udp in your firewall. And get it to autostart somehow if wanted.

Note that if you get bad names from rygel, look at the upstream ticket; setting extract-metadata=false is not enough.

ddclient for namecheap dyndns

Moved away from redirecting to the asus router dyndns to using the ddclient package on the fedora server. Install using “dnf install ddclient”.

On namecheap domains you will need to go to advanced dns, remove the old @ record, enable dyndns, ensure that both @ and * A+ records exist (they will be updated by ddclient), copy the password to /etc/ddclient.conf and configure. Be sure to remove url redirects and such as these will interfere. Also be sure to use the full domains instead of just */@ if you have multiple domains.

Adding this to /etc/ddclient.conf:

## NameCheap DynDNS
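A generic namecheap ddclient sketch looks roughly like this (the domain and password below are placeholders, not values from this setup; verify the keys against the ddclient documentation):

```ini
protocol=namecheap
use=web, web=dynamicdns.park-your-domain.com/getip
server=dynamicdns.namecheap.com
login=example.com
password=your-namecheap-dyndns-password
@.example.com,*.example.com
```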




Start and enable at boot:

# systemctl restart ddclient
# systemctl enable ddclient

To debug:

# systemctl stop ddclient
# ddclient -daemon=0 -debug -verbose -noquiet

Before:

  CNAME Record          *    Automatic
  URL Redirect Record   *    Unmasked
  URL Redirect Record   @    Unmasked

(If you keep the URL redirect stuff then the ip-addresses will be combined with namecheap's. Not good.)


chrome for surfing

Add yum/dnf chrome repo as root and install:

# cat << EOF > /etc/yum.repos.d/google-chrome.repo
[google-chrome]
name=google-chrome - \$basearch
baseurl=https://dl.google.com/linux/chrome/rpm/stable/\$basearch
enabled=1
gpgcheck=1
gpgkey=https://dl.google.com/linux/linux_signing_key.pub
EOF
# dnf install google-chrome-stable

kodi/xbmc for video

Kodi is a very good media player! We use it for watching stuff on the server when using the crosstrainer in the garage. We also have it on our media windows pc connected to our main tv, together with steam for gaming. On windows it is possible to use it as a mythtv frontend. (This was possible on linux as well, but is no longer supported from the rpmfusion repo.)


# dnf install kodi

For Swedish play channels, download the retrospect repo plugin and install it in kodi. Then download the actual retrospect plugin in kodi, enable it and browse to it under video.

Home automation

Tellstick Duo & Raspberry PI

I bought a tellstick duo, a raspberry pi and a number of switches and temperature sensors from Kjelle. I avoided tellstick net because I wanted my own server. I wanted a separate server for the tellstick to be able to place the stick centrally in my home due to the 30 m range.

Raspberry PI hints

Original sd images and a tool for burning them are available online. Note that I now use dd on my linux desktop pc instead. Default OS credentials are pi / raspberry. Also note that I started with the NexaHome sd image, see below.

I initially had a LOT of problems getting network and keyboard to work; not even the LNK-led was lit. I tested different power adapters, ethernet cables and keyboards. I read on forums that the same chip handles network/keyboard and that there could be issues with the X1 crystal. Eventually I realized that it was actually the usb power CABLE that was the issue. Swapping it solved everything.

Configuring OS, like setting hostname, extending file system to whole sd card and overclocking:

$ sudo raspi-config

Updating telldus-core to get humidity to work in my oregon sensors:

$ sudo vi /etc/apt/sources.list
deb stable main
$ wget -q -O- | sudo apt-key add -
$ sudo apt-get update
$ sudo mkdir /old
$ sudo cp /etc/tellstick.conf /old
$ sudo find /usr/local \( -name '*telldus*' -o -name 'td*' \) -exec mv "{}" /old \;
$ sudo ldconfig
$ sudo apt-get install telldus-core
Be sure to overwrite existing telldusd but not tellstick.conf if you want to keep your config.

This installs telldus core “the official way”. Note that some cleanup of the old install in /usr/local may be needed.

Add mail support:

$ sudo apt-get install ssmtp mailutils mpack
$ sudo nano /etc/ssmtp/ssmtp.conf
Fill in all values, for example relayhost = in my case.
$ echo -e "hej\ndå" | mail -s "$subject" -a "Content-Type: text/plain; charset=UTF-8"

You can move the OS to a usb disk for better speed and reliability. You still need a small 16MB+ FAT sd card to boot from.

I'm also running overclocked turbo mode via “sudo raspi-config”. There is also a stability checking script to test your pi. If you are running on a usb stick you may want to update the script to use /dev/sda2.

Raspberry Backup & Upgrade

You can use “Win32 Disk Imager” to backup your usb/sd card. Be sure to backup both sd and usb stick to easily recover from issues, like when I tried to upgrade to jessie and it failed to write the kernel. Note that you will probably need to use something like “Etcher” to actually be able to restore the images you created. Or Linux. Note that when using a usb stick for the operating system, the files on the fat sd card are probably enough to recover.

Updating OS:

$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get dist-upgrade
$ sudo apt-get clean
$ sudo apt-get autoclean

Updating firmware/kernel; be sure to do this before upgrading to newer raspberry pi hardware:

$ sudo apt-get install rpi-update
$ sudo rpi-update

I had to move my sd/usb to an old raspberry pi 2 and do this in order to get the pi 3 to boot. This will update the stuff on the sd card fat partition.

After issues with newer pi-hole requiring stretch (possibly jessie) I realized that I needed to upgrade my rpi. Then I realized that I was still on wheezy instead of jessie. Tried upgrading after backing up the usb memory using Win32DiskImager. It failed on the kernel during apt-get dist-upgrade, so I re-ran it and it looked like it continued. I should have cleaned apt first, removed /home/pi/sent and maybe extended the file system; I also ran out of disk space. Reverting to the previous usb image did not work; it seems the sd card, and not only the usb memory, was updated in the process.

I think the plan is now to use a new jessie (preferably stretch if that becomes available) NexaHome image and re-install from scratch, to avoid all the fuss. Or maybe use some separate setup to get things working properly.


To create devices the TelldusCenter way, which is compatible with NexaHome, you can manually edit the file /etc/tellstick.conf as well:

$ sudo service telldusd stop
$ sudo leafpad /etc/tellstick.conf
$ sudo service telldusd start
$ TelldusCenter

For example this can be added for my proove-switches, edit id/name/unit:

device {
  id = 1
  name = "Brytare 1"
  controller = 0
  protocol = "arctech"
  model = "selflearning-switch:proove"
  parameters {
    # devices = ""
    house = "666"
    unit = "1"
    code = "0000000000"
    system = "1"
    # units = ""
    fade = "false"
  }
}

The house code is invented and you can use the Learn-button in TelldusCenter to learn the switch.

To add a proove-remote, edit id/name/unit:

device {
  id = 30
  name = "Fjärr 1-1G"
  controller = 0
  protocol = "arctech"
  model = "selflearning-switch:proove"
  parameters {
    # devices = ""
    house = "18507766"
    unit = "16"
    code = "0000000000"
    system = "1"
    # units = ""
    fade = "false"
  }
}

My proove on/off remotes have house 50593790 and 18507766 with button→unit mapping 1/G→16, 2→15, 3→14. All buttons use group=0 except G, which uses group=1, and group cannot be configured in tellstick.conf. So I do not know of any method of differentiating between button 1 and G.

Some logged events from my devices:

Altan: tellstick,raw:class:sensor;protocol:oregon;model:1A2D;id:39;temp:28.9;humidity:31;
Ny: tellstick,raw:class:sensor;protocol:oregon;model:1A2D;id:110;temp:26.0;humidity:51;
Grund: tellstick,raw:class:sensor;protocol:oregon;model:1A2D;id:241;temp:19.2;humidity:50;
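These raw lines can be picked apart directly in the shell; for example, splitting one of the lines above on “;”:

```shell
# One of the logged sensor events from above
line='tellstick,raw:class:sensor;protocol:oregon;model:1A2D;id:39;temp:28.9;humidity:31;'
# Split on ';' and keep only the id/temp/humidity fields
echo "$line" | tr ';' '\n' | grep -E '^(id|temp|humidity):'
```

This prints `id:39`, `temp:28.9` and `humidity:31` on separate lines.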


Downloaded the >=8GB sd image from the forum thread and updated the jar-file. Note the special jar for raspberry pi with optional z-wave support.

Adding telldus to apt-sources:

$ sudo nano /etc/apt/sources.list.d/telldus.list
deb stable main

Updating NexaHome:

$ cd ~/nexahome
$ wget
$ nano

Some other things:

  • Added the telldus-core repo and updated to get humidity working on the oregon sensors. Wonder why only the telldus source repo was included in the image?
  • Switched to sv_SE.UTF-8 using “sudo raspi-config”. Note that you will have to switch encoding in putty as well.
  • Using vnc server autostart with parameters “:1 -geometry 1600×900 -depth 16 -pixelformat rgb565 -lazytight” in the init file. Used “sudo raspi-config” to boot into console on hdmi output. Force an empty password using “echo '' | vncpasswd -f >~/.vnc/passwd”. For copy'n'paste add “autocutsel -fork” to “~/.vnc/xstartup” and install it using “sudo apt-get install autocutsel”.
  • Use filezilla for sftp file transfer.
  • Reworked the web pages in ~/nexahome/mywebserver to fit my sensors and such. Switched encoding of the html-files to utf-8. Note that you will have to use the Reload button on the Web-tab in NexaHome after file changes. This tab is also useful for moving lamps and such in the room.
  • Backup the sdcard on linux using something like “dd if=/dev/sdc of=/D/nexahome.img bs=4M”. Had issues using Windows, which claimed a corrupt sdcard. Do not forget to unmount partitions first.

Sense Mother

Do NOT buy a Sense Mother anymore! They seem to have gone out of business and I have learned to NEVER EVER use a cloud service for home automation again. It needs to be a local installation with open software that will work without Internet.

To be able to automatically detect whether the family is home and let NexaHome run different programs I bought a sense mother with some sensor cookies. You can request an api key from the developer central to be able to read your feeds programmatically.

This bash script will query the api for your event feed urls:


function query() {
	curl -s "${1//\"}" -H "Authorization: Token <your-hex-auth-api-key-goes-here>" | jq "$2"
}

devurls=$(query "" ".devices[].url")
for devurl in $devurls; do
	data=$(query $devurl)
	jq '.label' <<<"$data"
	for line in $(jq '.publishes[]|.label,.url' <<<"$data"); do
		echo $line
		[[ $line =~ http ]] && query $line | grep eventsUrl
	done
done

From the output you can find the presence eventsUrl for your cookies and use it with something like “curl -s -H 'Authorization: Token 15c66044247f371ed9fcdd37d21bec0ae233ea14' | grep -q Present” to check for presence.
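A sketch of that presence check wrapped in a function (the function name is an assumption, and a local fixture stands in for the curl response here, since the service is gone):

```shell
# Hypothetical helper around the "grep -q Present" check above;
# in real use the argument would come from the curl call against the eventsUrl
check_presence() {
  printf '%s' "$1" | grep -q Present
}
# Fixture stands in for an API response containing a presence event
check_presence '{"events":[{"presence":"Present"}]}' && echo "at home"
```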

Home Assistant

I decided to try out home assistant on a leftover raspberry pi and my backup tellstick duo to see if it is a worthy replacement for NexaHome/EasyHome. It has support for tellstick, claims to work without Internet and seems very popular. It also seems to support other stuff I need, like repeating commands for tellstick, a floor-plan plugin, presence at home and pi-hole.

It is easy to download the rpi image for your raspberry version and write it to an sd card using etcher to get started. Boot your rpi from the new sd card and access http://hassio.local:8123 to check install progress, which can take 20 min. Then go to http://hassio.local:8123/hassio/store and add the tellstick, samba and ssh plugins. I think it is good to add the ssh plugin early in case you run into problems. Be sure to configure a password or key for ssh; the user is root.

For reference (if you disable the introduction) the links in the introduction section are configuration, components, troubleshooting and help. You can follow the configuration link to set up the configurator plugin. Other useful links are Old Overview = States, MDI Icons, Lovelace UI, Picture Element Card, weather.yweather, notify.smtp, sensor.min_max, camera.generic, device_tracker.nmap_tracker, gps logger, tellstick addon, apache proxy, http etc.

Translate tellstick.conf configuration file to HASS-compatible syntax this way:

$  grep 'device {\|id =\|name =\|protocol =\|model =\|house =\|unit =\|fade =\|code =\|^}' /etc/tellstick.conf | sed -e 's/}/},/g' -e 's/\(.* = .*\)/\1,/g' -e 's/\(.*fade =.*\),/\1/g' -e 's/^ \+\([a-z]\+\) =/  "\1":/g' -e 's/device {/{/g'
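For example, running the one-liner against a sample config built from the proove device block on this page (written to /tmp here) produces JSON-like output with lines such as `  "house": "666",`:

```shell
# Sample tellstick.conf using the proove switch values from this page
cat > /tmp/tellstick.conf <<'EOF'
device {
  id = 1
  name = "Brytare 1"
  protocol = "arctech"
  model = "selflearning-switch:proove"
  parameters {
    house = "666"
    unit = "1"
    code = "0000000000"
    fade = "false"
  }
}
EOF
# The translation pipeline from above, applied to the sample file
grep 'device {\|id =\|name =\|protocol =\|model =\|house =\|unit =\|fade =\|code =\|^}' /tmp/tellstick.conf \
  | sed -e 's/}/},/g' -e 's/\(.* = .*\)/\1,/g' -e 's/\(.*fade =.*\),/\1/g' \
        -e 's/^ \+\([a-z]\+\) =/  "\1":/g' -e 's/device {/{/g'
```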

I decided to use the new beta Lovelace UI as it supports floor plans and iframes etc. It works out very well, except that there is no way to specify the width of the cards. I have a 7" tablet on the kitchen wall on which it decides to render two columns that are hard to read. Forcing a vertical stack makes it better, but it still leaves some unused areas on the sides, which is not ok. I would really like hass to support setting the max/min width of the cards per view so that I could arrange stuff that needs more width on a specific tab. I increased the android tablet dpi from 160 to 180 and it looked a lot better; it was actually possible without rooting.

Do not forget to create and download system snapshots in the panel every now and then. My hassio image has stopped booting twice in two days and I had to redo the installation using a snapshot. The last time (at least) it was when I switched ip address (controlled by mac address through the router). I had to restore from a backup image. There may be some possibility to actually access HassOS through ssh. They say it is only for developers, but if this continues I need to do something.

Letting home assistant through my firewall was a bit more work than I thought. There is a let's encrypt addon, but this requires forwarding port 80 to the rpi. As I have an apache web server running I ended up using a reverse proxy according to the apache proxy guide, replacing localhost with the local ip of the rpi. Note that I added a subdomain to my cert on the server to make this work properly. Also be sure to set a password.

After a while I discovered that the database file grew quite a lot. So I decided to add a database on my existing mariadb server instead, as people seemed very satisfied with changing database. People seem to have quite some problems with slow/large databases, and a remote database also saves some wear on the SD card. Configured recorder with a db_url like mysql://hass:pwd@ and let it rock. Nowadays I've moved back to the default sqlite database and an ssd as a data disk in hass after several issues with the remote db. They have done a lot of work to improve the sqlite handling and aggregation of data, and it seems quicker. As the database is included in backups, rollbacks work better as well.


I bought this as a replacement for the tellstick duo, hoping that it will be more reliable. The tellstick is kind of unreliable and sometimes seems to miss some sensors for a long time; a reboot can help to get it on track again.

Note that you will need to use RFXmngr to set the supported protocols, in my case: Hideki (ESIC) and FineOffset (Proove/Telldus).


Problems & Solutions

Some known problems and solutions:

  • Wrong keyboard layout with autologin in lxdm: Install “Keyboard Layout Handler” in panel.

Disabling security during testing:

# setenforce 0
# sed -i.bak "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config
# systemctl disable firewalld.service
# systemctl stop firewalld.service
# iptables --flush


I bought a picade to use with my spare pi 4 and had some issues setting it up. Some useful notes follow.

Turn off overlays/bezels for everything due to the screen aspect ratio of the picade:

nano /opt/retropie/configs/all/retroarch.cfg
input_overlay_enable = "false"
input_player1_analog_dpad_mode = "1"
input_player2_analog_dpad_mode = "1"
input_player3_analog_dpad_mode = "1"
input_player4_analog_dpad_mode = "1"

If you control both player 1 and 2 when adding a joystick: the builtin script for remapping joypad indexes currently doesn't properly support disabling the player one index:

nano /opt/retropie/configs/all/retroarch.cfg
input_player1_joypad_index = "16"
input_player2_joypad_index = "0"
input_player3_joypad_index = "1"
input_player4_joypad_index = "2"

This is needed because the picade joystick acts as a keyboard, and if the player1 line is commented out it will still default to index 0. Created an upstream ticket.

After this I still had issues controlling two players with one joystick; it turned out I had accidentally left this override behind while trying to fix it. Remove such overrides:

all/retroarch/config/MAME 2003 (0.78)/MAME 2003 (0.78).cfg:input_player2_joypad_index = "1"

Also, bluetooth seems to disconnect the joypads after configuring on my pi 4. I made it work after reboot by restarting the bluetooth service:

nano /etc/rc.local
sh -c 'sleep 5; service bluetooth restart' &

Note that you still have to manually restart the bluetooth service or your pi after configuring bluetooth with this fix. I also selected the new firmware non-8bitdo hack in the bluetooth config tool.

MythTV Channel Scanning

Scanning channels in mythtvsetup can be an issue; you can accidentally get channels from the wrong transmitter. It looks like the scanner file is outdated after the 20161206 mux change.

You can manually enter the settings and scan single transports for channels instead. For non-HD channels you can use bandwidth 8 MHz with transports at “Kanal 30 Frekvens 546 MHz” for SVT and “Kanal 43 Frekvens 650 MHz” for TV4/6 at Brudaremossen. For HD SVT 1/2 you should use “Kanal 27 Frekvens 522 MHz” and bandwidth 8 MHz. Note that editing transports is done in the channel editor and not where the scanning is performed. Please note that the mythtv scan timeout setting for the card can be too low, causing issues finding channels for the HD mux as this seems to need more time!!!

To investigate available channels/transports settings for mythtv you can use w_scan after turning off the backend:

# dnf install w_scan dvb-apps v4l-utils
# w_scan -c SE

However, this gave “no driver support of DVBT2”, which comes from lacking FE_CAN_2G_MODULATION capabilities, so I did not find the HD channels this way either. Check with http://localhost:6544/Guide/GetChannelIcon?ChanId=1 on the backend.

I created a scan table file for the 626MHz HD mux and used dvbv5-scan from v4l-utils and got some better (old) info:

# cat ~/brudaremossen.txt 
	FREQUENCY = 626000000
	BANDWIDTH_HZ = 8000000
# dvbv5-scan ~/brudaremossen.txt
# cat dvb_channel.conf
[SVT1 HD Väst]
	SERVICE_ID = 1410
	VIDEO_PID = 1419
	AUDIO_PID = 1417 1014
	FREQUENCY = 626000000
	BANDWIDTH_HZ = 8000000

[SVT2 HD Väst]
	SERVICE_ID = 2410
	VIDEO_PID = 2419
	AUDIO_PID = 2417 2364
	FREQUENCY = 626000000
	BANDWIDTH_HZ = 8000000

In the end I discovered that the real issue was the scanning timeout for the card in mythtv. I changed from 1 s scan and 3 s tune timeout to 3 s for both, and then scanning worked for the 626 MUX. The lower timeout only failed finding hd channels, which was confusing. You also need to enter the bandwidth 8 MHz to make it work.

Note that xmltvid is connected to channel listings so you will probably want them unique.

To fix icons for your channels you will need to get/synchronize files in /home/mythtv/.mythtv/channels and /usr/share/mythweb/data/tv_icons. You can use phpMyAdmin to add paths to icons in the channels table. The icons in the db must NOT use a full path since 0.27. Do not use mythtvsetup to download icons as this seems to corrupt them.

Check if icons actually work by browsing to http://host:6544 and by checking mythweb. Be sure to run mythbackend as the mythtv user for it to find icons in “~/.mythtv/channels”. Note that kodi caches channel icons so you cannot check that way.

MythTV SvtPlay

After issues compiling the tbs driver for kernel 5.7+ I want to try using svtplay as a generic external recorder in mythtv using streamlink, with a command similar to this:
command="streamlink --player=vlc --stdout \"%URL%\" best"

Install streamlink using “dnf install streamlink”. Use tv_grab_eu_xmltvse or “Europe TV schedules for free (xmltv)” in video sources in mythtvsetup to get channel info. Use an external recorder with something like “/usr/bin/mythexternrecorder --conf svtplay.conf” for the tv “card”. See /usr/share/mythtv/externrecorder for more info.
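A sketch of what svtplay.conf might contain, pieced together from the command line and the [TUNER] workaround mentioned on this page; treat the exact keys as assumptions and verify against the examples in /usr/share/mythtv/externrecorder:

```ini
# Hypothetical svtplay.conf for mythexternrecorder
[RECORDER]
command="streamlink --player=vlc --stdout \"%URL%\" best"

[TUNER]
tuner=true
```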

When configuring scanning through the external recorder you may need to enter channel ids and configure callsigns/names to match; before that I got lots of duplicate channels added. You may want to run “mythfilldatabase --do-channel-updates” as well.

I had an issue failing to scan for channels and used “mythtv-setup -v chanscan:debug,channel:debug” to find that it was failing because of a lacking tuner command. Added “tuner=true” in “[TUNER]” and it works until the upstream fix is in the fedora repos. It is also useful to use “mythbackend -v channel,record” for logging.

Unfortunately the plugin broke the same(!) day I managed to get it to work :( There is an upstream fix, and until it is delivered it is possible to place the plugin file in ~/.config/streamlink/plugins and streamlink will pick it up.

MythTV EIT Issues

When changing dvb cards I have had weird issues with EIT information not showing up. Sometimes I have removed and recreated cards and sources. For the case where this worked except for SVT1/2/B/K non-HD, I think it may have been the “high” duplicate channels for different regions preventing eit crawl for the lower ones. I think they have the same ID and prevent scanning of the “real” channels. Be sure to disable EIT scanning for the high channels to get EIT for the low ones properly! (Maybe truncate eit_cache in the database as well when changing.)

Fixing EIT update issues seems to require a big reset in mythtvsetup; this is what I have done:

  1. Remove card(s) and create new.
  2. Create a new source.
  3. Connect the source to the card and perform a full scan. Be sure to select your country in the “hidden” menu.
  4. Start editing your newly found channels to copy information like xmltvid and icon name. Preferably in phpmyadmin, but mythweb+mythtvsetup will do.
  5. Maybe delete or disable EIT scanning for the “high” channels found that are really duplicates. I think EIT scanning will store info for those instead. Preferably before watching a channel or the EIT crawl runs.
  6. Remove channels connected to the old input. Maybe clean some more info in the database as well connected to the old input. Maybe truncate eit_cache table before running.

You can run “mythbackend -v eit” to debug. Also check timeout before EIT crawl in general menu item in mythtvsetup. I have an hour.


If you have some jpgs stored from a cam you can turn these into a timelapse video like this:

ffmpeg -pattern_type glob -i '*.jpg' -vcodec libx264 -g 30 out.mp4

Disk full and mariadb recovery

The rocksdb part of mariadb filled my root partition, solution:

# dnf remove mariadb-rocksdb-engine
# rm -rf '/var/lib/mysql/#rocksdb'

Some useful commands to fix problems:

# service httpd stop
# service mariadb stop
# killall keepmythbackendalive mythbackend
# mv /var/lib/mysql /var/lib/mysql_corrupt
# rsync -a /mnt/data2/backups/root/var/lib/mysql/ /var/lib/mysql
# mv /var/lib/squeezeboxserver /var/lib/squeezeboxserver_corrupt
# rsync -a /mnt/data2/backups/root/var/lib/squeezeboxserver/ /var/lib/squeezeboxserver
# touch /var/www/html/wiki/conf/local.php

Recovery for mariadb:

Added the following in the [mysqld] section of /etc/my.cnf.d/mariadb-server.cnf to avoid mariadb crashes:

port = 8889

Do not forget to remove when recovery is complete…

In the end I had to truncate the hass.statistics_short_term table as it would crash the check commands. Tried to export it through phpMyAdmin but failed to import it again.

The next time I had a corruption of state_attributes in hass and mariadb crashed. I exported the table to a file, dropped the table and imported it again:

mysql -u root -p hass < /home/<user>/Downloads/state_attributes.sql

Seems ok. Had to do this later for “events” as well. Do not forget to remove the recovery setting and restart mariadb so that home assistant can connect. Moved away from the remote mariadb hass database to the default sqlite implementation with the database on an ssd. Set up a nightly backup through an automation to avoid all these issues. Seems quicker as well.

Upgrade Fedora ... -> 35 -> 36 -> 37 -> 38 -> 39

Following the instructions in the fedora upgrade docs.

Do the following to prepare and download packages:

# dnf upgrade --refresh
# reboot # At least if you got a new kernel
# dnf install dnf-plugin-system-upgrade
# dnf remove owncloud chromium-libs-media-freeworld system-config-samba system-config-services-docs system-config-samba-docs # For fedora 30, moved to nextcloud in docker
# dnf remove python2-certbot; dnf install certbot-apache # For fedora 31
# dnf remove gnomebaker php-recode --noautoremove # For fedora 32
# dnf remove compat-ffmpeg28 # For fedora 39
# dnf system-upgrade download --releasever=39 # Did not work on Saturdays due to 404 for lacking rpmfusion files, waiting for mirroring and retrying on Sunday worked.

You may need the “--allowerasing” flag or possibly force uninstall of some packages using “rpm -e --nodeps <package>” to get this to pass.

Then reboot this way to do the actual upgrade:

# sudo dnf system-upgrade reboot

You may need to upgrade the squeezeboxserver rpm. Maybe update java version used for minecraft. See those chapters. (Old, running nouveau with modern gfx-card nowadays: You may need to uninstall and install nvidia drivers again, at least uninstall the newly created kmod-nvidia and rebuild it using akmods.)

After upgrade:

  1. For fedora 39 there is a bug where installing kernels fails with “No space left on device” if you have upgraded from really old fedora versions. There is a file/folder in /boot that triggers this issue. See the end of the “efi partition too small” chapter.
  2. It hangs during first boot waiting for lvm2 mirrors. I had to disable Intel RST and use AHCI in BIOS to get it to boot. Then I disabled/masked the lvm2 service using “systemctl mask lvm2-monitor.service” and switched back to RST in the next boot.
  3. Old, not be needed anymore: It is running cgroups v2 which was incompatible with docker. Either switch to v1 using kernel command line options or use podman instead of docker. Add “systemd.unified_cgroup_hierarchy=0” to GRUB_CMDLINE_LINUX in /etc/default/grub and run “grub2-mkconfig” to start using v1 again. You will probably want to upgrade to podman later though and start using new cgroups.
  4. Had to upgrade to squeezeboxserver 7.9.3 and build tbs with workarounds for kernel 5.7. Later I replaced tbs with a hauppauge dongle.
  5. New samba 4.11 disables smbv1 by default which does not work with my tablet used as a photo frame. See samba section above for how to fix.
  6. Autologin had to be reconfigured in /etc/lxdm/lxdm.conf for fedora 37.
  7. Maybe clean up old unmaintained packages using “remove-retired-packages 36” or similar. You may also use “dnf repoquery --unsatisfied” and “dnf repoquery --duplicates” to check for broken/duplicate packages. See the fedora upgrade docs for more info.
linux64.txt · Last modified: 2024/07/17 21:04 by bengt
