Communication and Collaboration

1 - Digital Signage

1.1 - Anthias (Screenly)

Overview

Anthias (AKA Screenly) is a simple, open-source digital signage system that runs well on a Raspberry Pi. When plugged into a monitor, it displays images, videos, or web sites in slideshow fashion. It’s managed directly through a web interface on the device, and fleet-management and support options are available.

Preparation

Use the Raspberry Pi Imager to create a 64-bit Raspberry Pi OS Lite image. Select the gear icon at the bottom right to enable SSH, create a user, configure networking, and set the locale. Then use SSH to continue configuration.

setterm --cursor on

sudo raspi-config nonint do_change_locale en_US.UTF-8
sudo raspi-config nonint do_configure_keyboard us
sudo raspi-config nonint do_wifi_country US
sudo timedatectl set-timezone America/New_York
  
sudo raspi-config nonint do_hostname SOMENAME

sudo apt update;sudo apt upgrade -y

sudo reboot

Enable automatic updates and automatic reboots.

sudo apt -y install unattended-upgrades

# Remove the leading slashes from these settings and set them to true
sudo sed -i 's/^\/\/\(.*origin=Debian.*\)/  \1/' /etc/apt/apt.conf.d/50unattended-upgrades
sudo sed -i 's/^\/\/\(Unattended-Upgrade::Remove-Unused-Kernel-Packages \).*/  \1"true";/' /etc/apt/apt.conf.d/50unattended-upgrades
sudo sed -i 's/^\/\/\(Unattended-Upgrade::Remove-New-Unused-Dependencies \).*/  \1"true";/' /etc/apt/apt.conf.d/50unattended-upgrades
sudo sed -i 's/^\/\/\(Unattended-Upgrade::Remove-Unused-Dependencies \).*/  \1"true";/' /etc/apt/apt.conf.d/50unattended-upgrades
sudo sed -i 's/^\/\/\(Unattended-Upgrade::Automatic-Reboot \).*/  \1"true";/' /etc/apt/apt.conf.d/50unattended-upgrades

Installation

bash <(curl -sL https://www.screenly.io/install-ose.sh)

Operation

Adding Content

Navigate to the web UI at the IP address of the device. You may wish to open the settings to add authentication and change the device name.

You may add common graphic types, mp4 video, web pages, and YouTube links. It will let you know if it fails to download a YouTube video. Some heavy web pages fail to render correctly, but most work.

Images must be sized to fit the screen. In most cases this is 1080p. Larger images are scaled down, but smaller images are not scaled up. For example, PowerPoint is often used to create slides, but it exports at 720p, which creates black borders on a 1080p screen. You can change the resolution on the Pi with raspi-config, or add a registry key on Windows to change PowerPoint’s output size.

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\PowerPoint\Options]
"ExportBitmapResolution"=dword:00000096
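For reference (my reading of the .reg format, so verify on your system): dword values are hexadecimal, and this one sets the export DPI. Assuming PowerPoint’s default 13.333-inch-wide widescreen slide, multiplying DPI by slide width gives the export pixel width, which is why the default 96 DPI yields 1280x720:

```shell
# dword:00000096 is hex; 0x96 = 150 DPI -> roughly 2000 px wide (scaled down on a 1080p screen)
echo $(( 0x96 ))

# In principle 144 DPI (dword:00000090) would yield exactly 1920 px on a 13.333-inch slide
echo $(( 0x90 ))
```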

Schedule the Screen

You may want to turn off the display during non-operating hours. The vcgencmd command can turn off video output, and some displays will then enter power-saving mode. Some displays misbehave or ignore the command, so testing is warranted.

sudo tee /etc/cron.d/screenpower << EOF

# m h dom mon dow user command

# Turn monitor on
30 7  * * 1-5 root /usr/bin/vcgencmd display_power 1

# Turn monitor off
30 19 * * 1-5 root /usr/bin/vcgencmd display_power 0

# Weekly Reboot just in case
0 7 * * 1 root /sbin/shutdown -r +10 "Monday reboot in 10 minutes"
EOF

Troubleshooting

YouTube Fail

You may find you must download the video manually and then upload it to Anthias. Use the utility yt-dlp to list the available formats, then download the mp4 version of the video.

yt-dlp --list-formats https://www.youtube.com/watch?v=YE7VzlLtp-4
yt-dlp --format 22 https://www.youtube.com/watch?v=YE7VzlLtp-4

WiFi Disconnect

WiFi can go up and down, and some variants of the OS do not automatically reconnect. You may want to add the following script to keep connected.

sudo touch /usr/local/bin/checkwifi
sudo chmod +x /usr/local/bin/checkwifi
sudo vim.tiny /usr/local/bin/checkwifi
#!/bin/bash

# Exit if WiFi isn't configured
grep -q ssid /etc/wpa_supplicant/wpa_supplicant.conf || exit 

# In the case of multiple gateways (when connected to wired and wireless)
# the `grep -m 1` will exit on the first match, presumably the lowest metric
GATEWAY=$(ip route list | grep -m 1 default | awk '{print $3}')

ping -c4 "$GATEWAY" > /dev/null

if [ $? -ne 0 ]
then
  logger checkwifi fail `date`
  service wpa_supplicant restart
  service dhcpcd restart
else
  logger checkwifi success `date`
fi
sudo tee /etc/cron.d/checkwifi << EOF
# Check WiFi connection (cron.d entries require a user field)
*/5 * * * * root /usr/local/bin/checkwifi > /dev/null 2>&1
EOF
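To see what the script’s `grep -m 1` pipeline is doing, here is the same extraction run against a canned route table (the addresses are made up for the demo; `ip route` lists the lowest-metric default first):

```shell
# Two default routes, as when wired and wireless are both connected
ROUTES="default via 192.168.1.1 dev wlan0 proto dhcp metric 600
default via 10.0.0.1 dev eth0 proto dhcp metric 700"

# grep -m 1 stops at the first match; awk pulls the gateway address (field 3)
GATEWAY=$(echo "$ROUTES" | grep -m 1 default | awk '{print $3}')
echo "$GATEWAY"
```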

Hidden WiFi

If you didn’t set up WiFi during imaging, you can use raspi-config after boot. But if it’s a hidden network, you must also add a line to wpa_supplicant.conf and reboot.

sudo sed -i '/psk/a\        scan_ssid=1' /etc/wpa_supplicant/wpa_supplicant.conf

Wrong IP on Splash Screen

The IP seems to be captured during installation and then resides statically in the configuration. Adjust as needed.

# You can turn off the splash screen in the GUI or in the .conf
sed -i 's/show_splash =.*/show_splash = off/' /home/pi/.screenly/screenly.conf

# Or you can correct it in the docker file
vi ./screenly/docker-compose.yml

White Screen or Hung

Anthias works best when the graphics are the correct size. It will attempt to display images that are too large, but this flashes a white screen and eventually hangs the box (at least in the current version). Not all users size things correctly, so if you have issues, try this script.

#!/bin/bash

# If this device isn't running signage, exit
[ -d /home/pi/screenly_assets ] || { echo "No screenly image asset directory, exiting"; exit 1; }

# Check that mediainfo and imagemagick convert are available
command -v mediainfo || { echo "mediainfo command not available, exiting"; exit 1; }
command -v convert  || { echo "imagemagick convert not available, exiting"; exit 1; }

cd /home/pi/screenly_assets

for FILE in *.png *.jpg *.jpeg *.gif
do
        # if the file doesn't exist, skip this iteration
        [ -f "$FILE" ] || continue

        # Use mediainfo to get the dimensions, as it's much faster than imagemagick
        read -r NAME WIDTH HEIGHT <<<$(echo -n "$FILE ";mediainfo --Inform="Image;%Width% %Height%" "$FILE")

        # if it's too big, use imagemagick's convert (the mogrify command doesn't resize reliably)
        if [ "$WIDTH" -gt "1920" ] || [ "$HEIGHT" -gt "1080" ]
        then
                echo "$FILE" "$WIDTH"x"$HEIGHT"
                convert "$FILE" -resize 1920x1080 -gravity center "$FILE"
        fi
done
done
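The read/here-string trick above turns one “name width height” line into three variables in a single step. A canned demo (the filename and dimensions are made up):

```shell
# Split a space-separated line into three variables
read -r NAME WIDTH HEIGHT <<< "slide.png 2560 1440"
echo "$NAME is ${WIDTH}x${HEIGHT}"

# Same bounds check the script uses
if [ "$WIDTH" -gt 1920 ] || [ "$HEIGHT" -gt 1080 ]; then
        echo "needs resize"
fi
```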

No Video After Power Outage

If the display is off when you boot the Pi, it may decide there is no monitor, and when someone does turn on the display there is no output. Enable hdmi_force_hotplug in /boot/config.txt to avoid this problem, and set the HDMI group and mode to match your display.

sed -i 's/.*hdmi_force_hotplug.*/hdmi_force_hotplug=1/' /boot/config.txt
sed -i 's/.*hdmi_group=.*/hdmi_group=2/' /boot/config.txt
sed -i 's/.*hdmi_mode=.*/hdmi_mode=81/' /boot/config.txt

1.2 - Anthias Deployment

If you do regular deployments, you can create an image. A reasonable approach is to:

  • Shrink the last partition
  • Zero fill the remaining free space
  • Find the end of the last partition
  • DD that to a file
  • Use raspi-config to resize after deploying

Or you can use PiShrink to script all that.

Installation

wget https://raw.githubusercontent.com/Drewsif/PiShrink/master/pishrink.sh
chmod +x pishrink.sh
sudo mv pishrink.sh /usr/local/bin

Operation

# Capture and shrink the image
sudo dd if=/dev/mmcblk0 of=anthias-raw.img bs=1M
sudo pishrink.sh anthias-raw.img anthias.img

# Copy to a new card
sudo dd if=anthias.img of=/dev/mmcblk0 bs=1M

If you need to modify the image after creating it you can mount it via loop-back.

sudo losetup --find --partscan anthias.img
sudo mount /dev/loop0p2 /mnt/

# After you've made changes

sudo umount /mnt
sudo losetup --detach-all

Manual Steps

If you have access to a graphical desktop environment, use GParted. It will resize the filesystem and partitions for you quite easily.

# Mount the image via loopback and open it with GParted
sudo losetup --find --partscan anthias-raw.img

# Grab the right side of the last partition with your mouse and 
# drag it as far to the left as you can, apply and exit
sudo gparted /dev/loop0

Now you need to find the last sector and truncate the file after that location. Since the truncate utility operates on bytes, convert sectors to bytes by multiplying by the sector size (512).

# Find the end of the last partition. In the example below, it's sector 9812664
$ sudo fdisk -lu /dev/loop0

Units: sectors of 1 * 512 = 512 bytes

Device       Boot  Start     End Sectors  Size Id Type
/dev/loop0p1        8192  532479  524288  256M  c W95 FAT32 (LBA)
/dev/loop0p2      532480 9812664 9280185  4.4G 83 Linux


sudo losetup --detach-all

sudo truncate --size=$[(9812664+1)*512] anthias-raw.img
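The $[...] arithmetic form works but is obsolete bash; $(( )) is the modern equivalent. You can sanity-check the math on a scratch file in /tmp before touching the real image:

```shell
# Create a sparse scratch file the same size the truncate above would produce
truncate --size=$(( (9812664 + 1) * 512 )) /tmp/scratch.img

# Confirm the byte count: (9812664 + 1) * 512 = 5024084480, about 4.7 GiB
SIZE=$(stat -c %s /tmp/scratch.img)
echo "$SIZE"

rm /tmp/scratch.img
```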

Very Manual Steps

If you don’t have a GUI, you can do it with a combination of commands.

# Mount the image via loopback
sudo losetup --find --partscan anthias-raw.img

# Check and resize the file system
sudo e2fsck -f /dev/loop0p2
sudo resize2fs -M /dev/loop0p2

... The filesystem on /dev/loop0p2 is now 1149741 (4k) blocks long

# Now you can find the end of the resized filesystem by:

# Finding the number of sectors.
#     Bytes = Num of blocks * block size
#     Number of sectors = Bytes / sector size
echo $[(1149741*4096)/512]

# Finding the start sector (532480 in the example below)
sudo fdisk -lu /dev/loop0

Device       Boot  Start      End  Sectors  Size Id Type
/dev/loop0p1        8192   532479   524288  256M  c W95 FAT32 (LBA)
/dev/loop0p2      532480 31116287 30583808 14.6G 83 Linux

# Adding the number of sectors to the start sector. Add 1 because you want to end AFTER the end sector
echo $[532480 + 9197928 + 1]

# And resize the part to that end sector (ignore the warnings)
sudo parted /dev/loop0 resizepart 2 9730409s

Great! Now you can follow the remainder of the GParted steps to find the new last sector and truncate the file.

Extra Credit

It’s handy to compress the image. xz is pretty good for this.

# Compress (produces anthias-raw.img.xz)
xz anthias-raw.img

# Decompress and write straight to a card
xzcat anthias-raw.img.xz | sudo dd of=/dev/mmcblk0

In these procedures, we make a copy of the SD card before we do anything. Another strategy is to resize the SD card directly, and then use dd to read in only that number of sectors, rather than reading it all and then truncating. A bit faster, if a bit less recoverable in the event of a mistake.
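A hedged sketch of that dd-with-count approach, reusing the example sector number from earlier (the device path is the usual SD card reader; adjust for yours):

```shell
# Last sector of the last partition, taken from fdisk output as shown earlier
END_SECTOR=9812664

# fdisk numbers sectors from zero, so copy END_SECTOR + 1 of them
echo $(( END_SECTOR + 1 ))

# sudo dd if=/dev/mmcblk0 of=anthias-raw.img bs=512 count=$(( END_SECTOR + 1 ))
```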

1.3 - API

The API docs on the web refer to Screenly. Anthias uses an older API. However, you can access the API docs for the version you’re working with at

http://sign.your.domain/api/docs/

You’ll have to correct the swagger form with the correct URL, but after that you can see what you’re working with.

2 - Email

Email is a commodity service, but critical for many things - so you can get it anywhere, but you better not mess it up.

Your options, in increasing order of complexity, are:

Forwarding

Email sent to [email protected] is simply forwarded to someplace like gmail. It’s free and easy, and you don’t need any infrastructure. Most registrars like GoDaddy, NameCheap, CloudFlare, etc, will handle it.

You can even reply from [email protected] by integrating with SendGrid or a similar provider.

Remote-Hosting

If you want more, Google and Microsoft have full productivity suites. Just edit your DNS records, import your users, and pay them $5 a head per month. You still have to ‘do email’ but it’s a little less work than if you ran the whole stack. In most cases, companies that specialize in email do it better than you can.

Self-Hosting

If you are considering local email, let me paraphrase Kenji López-Alt: the first step is, don’t. The big guys can do it cheaper and better. But if it’s a matter of philosophy or control, or you just don’t have the funding, press on.

A Note About Cost

Most of the cost is user support. Hosting means someone else gets to purchase and patch a server farm, but you still have to talk to users. My (anecdotal) observation is that fully hosting saves 10% in overall costs and smooths out expenses. The more users you have, the more that 10% starts to matter.

2.1 - Forwarding

This is the best solution for a small number of users. You configure it at your registrar and rely on Google (or someone similar) to do all the work for free.

If you want your outbound emails to come from your domain name (and you do), add an outbound relay. This is also free for minimal use.

Registrar Configuration

This differs per registrar, but normally involves creating an address and its destination.

Cloudflare

  • (This assumes you use Cloudflare as your registrar)
  • Login and select the domain in question.
  • Select Email, then Email Routing.
  • Under Routes, select Create address.

Once validated, email will begin arriving at the destination.

Configure Relaying

The registrar is only forwarding email, not sending it. To get your sent mail to come from your domain, you must integrate with a mail service such as SendGrid.

SendGrid

  • Create a free account and login
  • Authenticate your domain name (via DNS)
  • Create an API key (Settings -> API Keys -> Restricted Access, Defaults)

Gmail

  • Settings -> Accounts -> Send Mail as
  • Add your domain email
  • Configure the SMTP server with:
    • SMTP server: “smtp.sendgrid.net”
    • username: “apikey”
    • password: (the key you created above)

After validating the code Gmail sends you, there will be a drop-down in the From field of new emails.

2.2 - Remote Hosting

This is more in the software-as-a-service category. You get an admin dashboard and are responsible for managing users and mail flow. The hosting provider will help you with basic things, but you’re doing most of the work yourself.

Having managed 100K+ user mail systems and migrated from on-prem sendmail to Exchange and then O365 and Google, I can confidently say the infrastructure, and even the platform, amounts to less than 10% of the cost of providing the service.

The main advantage of hosting is that you’re not managing the platform, installing patches, and replacing hardware. The main disadvantage is that you have little control, and sometimes things are broken and you can’t do anything about it.

Medium-sized organizations benefit most from hosting. You probably need a productivity suite anyway, and email is usually wrapped up in that. It saves you from having to specialize someone in email and the infrastructure associated with it.

But if controlling access to your data is paramount, then be aware that you have lost that and treat email as a public conversation.

2.3 - Self Hosting

When you self-host, you develop expertise in email itself, arguably a commodity service where such expertise has small return. But, you have full control and your data is your own.

The generally accepted best practice is to install Postfix and Dovecot. This is the simplest path and what I cover here. But there are some pretty decent all-in-one packages such as Mailu, Modoboa, etc. These usually wrap Postfix and Dovecot to spare you the details and improve your quality of life, at the cost of not really knowing how they work.

You’ll also need to configure a relay. Many ISPs block basic mail protocol and many recipient servers are rightly suspicious of random emails from unknown IPs in cable modem land.

  1. Postfix
  2. Dovecot
  3. Relay

2.3.1 - Postfix

This is the first step - a server that handles and stores email. You’ll be able to check messages locally at the console. (Remote client access such as with Thunderbird comes later.)

Preparation

You need:

  • Linux Server
  • Firewall Port-Forward
  • Public DNS

Server

We use Debian Bookworm (12) in this example, but any derivative will be similar. At large scale you’d set up virtual users, but we’ll stick with the default setup and use your system account. Budget about 10 MB per 100 emails stored.

Port Forwarding

Mail protocol uses port 25. Simply forward that to your internal mail server and you’re done.

DNS

You need a normal ‘A’ record for your server and a special ‘MX’ record for your domain root. That way, mail sent to [email protected] will get routed to the server.

Name        Type  Value
the-server  A     20.236.44.162
@           MX    the-server

Mail servers see [email protected] and look for records of type ‘MX’ for ‘your.org’. Seeing that ’the-server’ is listed, they look up its ‘A’ record and connect. A message to [email protected] is handled the same way, though when there is no ‘MX’ record it is just delivered to the ‘A’ record for ’the-server.your.org’. If you have both, the ‘MX’ takes precedence.

Installation

Some configuration is done at install time by the package, so you must make sure your hostname is correct. We use the hostname ‘mail’ in this example.

# Correct internal hostnames as needed. 'mail' and 'mail.home.lan' are good suggestions.
cat /etc/hostname /etc/hosts

# Set the external host name and run the package installer. If postfix is already installed, apt remove it first
EXTERNAL="mail.your.org"
sudo debconf-set-selections <<< "postfix postfix/mailname string $EXTERNAL"
sudo debconf-set-selections <<< "postfix postfix/main_mailer_type string 'Internet Site'"
sudo apt install --assume-yes postfix

# Add the main domain to the destinations as well
DOMAIN="your.org"
sudo sed -i "s/^mydestination = \(.*\)/mydestination = $DOMAIN, \1/"  /etc/postfix/main.cf
sudo systemctl reload postfix.service
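To see exactly what that sed does, here it is run against a canned copy of the mydestination line in /tmp (the original values are placeholders for the demo):

```shell
# A typical mydestination line as the installer writes it
echo 'mydestination = mail.your.org, localhost' > /tmp/main.cf

# The same substitution as above: prepend the bare domain to the list
DOMAIN="your.org"
sed -i "s/^mydestination = \(.*\)/mydestination = $DOMAIN, \1/" /tmp/main.cf

cat /tmp/main.cf
```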

Test with telnet - use your unix system ID for the rcpt address below.

telnet localhost 25
ehlo localhost
mail from: <[email protected]>
rcpt to: <[email protected]>
data
Subject: Wish List

Red Ryder BB Gun
.
quit

Assuming that ‘you’ matches your shell account, Postfix will have accepted the message and used its Local Delivery Agent to store it in the local message store. That’s in /var/mail.

cat /var/mail/YOU 

Configuration

Encryption

Postfix will use the untrusted “snakeoil” certificate that comes with Debian to opportunistically encrypt communication between it and other mail servers. Surprisingly, most other servers will accept this cert (or fall back to non-encrypted), so let’s proceed for now. We’ll generate a trusted one later.

Spam Protection

The default config is secured so that it won’t relay messages, but it will accept messages from Santa, and is subject to backscatter and a few other things. Let’s tighten it up.

sudo tee -a /etc/postfix/main.cf << EOF

# Tighten up formatting
smtpd_helo_required = yes
disable_vrfy_command = yes
strict_rfc821_envelopes = yes

# Error codes instead of bounces
invalid_hostname_reject_code = 554
multi_recipient_bounce_reject_code = 554
non_fqdn_reject_code = 554
relay_domains_reject_code = 554
unknown_address_reject_code = 554
unknown_client_reject_code = 554
unknown_hostname_reject_code = 554
unknown_local_recipient_reject_code = 554
unknown_relay_recipient_reject_code = 554
unknown_virtual_alias_reject_code = 554
unknown_virtual_mailbox_reject_code = 554
unverified_recipient_reject_code = 554
unverified_sender_reject_code = 554
EOF

sudo systemctl reload postfix.service

PostFix has some recommendations as well.

sudo tee -a /etc/postfix/main.cf << EOF

# PostFix Suggestions
smtpd_helo_restrictions = reject_unknown_helo_hostname
smtpd_sender_restrictions = reject_unknown_sender_domain
smtpd_recipient_restrictions =
    permit_mynetworks, 
    permit_sasl_authenticated,
    reject_unauth_destination,
    reject_rbl_client zen.spamhaus.org,
    reject_rhsbl_reverse_client dbl.spamhaus.org,
    reject_rhsbl_helo dbl.spamhaus.org,
    reject_rhsbl_sender dbl.spamhaus.org
smtpd_relay_restrictions = 
    permit_mynetworks, 
    permit_sasl_authenticated,
    reject_unauth_destination
smtpd_data_restrictions = reject_unauth_pipelining
EOF

sudo systemctl reload postfix.service

If you test a message from Santa now, Postfix will do some checks and realize it’s bogus.

550 5.7.27 [email protected]: Sender address rejected: Domain northpole.org does not accept mail (nullMX)

Header Cleanup

Postfix will attach a Received: header to outgoing emails that has details of your internal network and mail client. That’s information you don’t need to broadcast. You can remove that with a “cleanup” step as the message is sent.

# Insert a header check after the 'cleanup' line in the smtp section of the master file and create a header_checks file
sudo sed -i '/^cleanup.*/a\\t-o header_checks=regexp:/etc/postfix/header_checks' /etc/postfix/master.cf
echo "/^Received:/ IGNORE" | sudo tee -a /etc/postfix/header_checks

Note - there is some debate about whether this triggers a higher spam score. You may want to replace the header instead.
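If you go that route, header_checks supports a REPLACE action that substitutes the whole header. A hedged example (the replacement text is an arbitrary placeholder of mine; pick something innocuous of your own):

```
/^Received:/ REPLACE Received: from localhost
```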

Testing

Incoming

You can now receive mail to [email protected] and [email protected]. Try this to make sure you’re getting messages. Feel free to install mutt if you’d like a better client at the console.

Outgoing

You usually can’t send mail yet, and there are several reasons why.

Many ISPs block outgoing port 25 to keep a lid on spam bots. This prevents you from sending any messages. You can test that by trying to connect to gmail on port 25 from your server.

nc -zv gmail-smtp-in.l.google.com 25

Also, many mail servers will reverse-lookup your IP to see who it belongs to. That request will go to your ISP (who owns the IPs) and show their DNS name instead of yours. You’re often blocked at this step, though some providers will work with you if you contact them.

Even if you’re not blocked and your ISP has given you a static IP with a matching reverse-lookup, you will suffer from a lower reputation score as you’re not a well-known email provider. This can cause your sent messages to be delayed while being considered for spam.

To solve these issues, relay your email through an email provider. This will improve your reputation score (used to judge spam), ease the additional security layers such as SPF, DKIM, and DMARC, and is usually free at small volume.

Postfix even calls this using a ‘Smarthost’

Next Steps

Now that you can get email, let’s make it so you can also send it.

Troubleshooting

When adding Postfix’s anti-spam suggestions, we left off the smtpd_client_restrictions and smtpd_end_of_data_restrictions as they created problems during testing.

You may get a warning from Postfix that one of the settings you’ve added is overriding one of the earlier settings. Simply delete the first instance. These are usually default settings that we’re overriding.

Use ‘@’ to view the logs from all the related services.

sudo journalctl -u [email protected]

If you change your server’s DNS entry, make sure to update mydestination in your /etc/postfix/main.cf and sudo systemctl reload [email protected].

Misc

Mail Addresses

Postfix only accepts messages for users in the “local recipient table”, which is built from the unix password file and the aliases file. You can add aliases for other addresses that will deliver to your shell account, but only shell users can receive mail right now. See virtual mailboxes to add users without shell accounts.

In the aliases file, you’ll see “postmaster” (and possibly others) aliased to root. Add root as an alias to yourself at the bottom so that mail gets to your mailbox.

echo "root:   $USER" | sudo tee -a /etc/aliases
sudo newaliases

2.3.2 - Relay

A relay is simply another mail server that you give your outgoing mail to, rather than try to deliver it yourself.

There are many companies that specialize in this. Sign up for a free account and they give you the block of text to add to your postfix config. Some popular ones are:

  • SendGrid
  • MailGun
  • Sendinblue

They allow anywhere between 50 and 300 emails a day for free.

SendGrid

Relay Setup

SendGrid’s free plan gives you 50 emails a day. Create an account, verify your email address ([email protected]), and follow the instructions. Make sure to sudo apt install libsasl2-modules

https://docs.sendgrid.com/for-developers/sending-email/postfix
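The linked instructions boil down to a main.cf fragment along these lines. This is a sketch from memory, so treat SendGrid’s page as authoritative; the literal username is “apikey” and the password is the API key you created:

```
# /etc/postfix/main.cf - sketch; requires libsasl2-modules
relayhost = [smtp.sendgrid.net]:587
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = static:apikey:YOUR_API_KEY
smtp_tls_security_level = encrypt
```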

Restart Postfix and use mutt to send an email. It works! The only thing you’ll notice is that your message has an “On Behalf Of” notice letting you know it came from SendGrid. Follow the section below to change that.

Domain Integration

To integrate your domain fully, add DNS records for SendGrid using these instructions.

https://docs.sendgrid.com/ui/account-and-settings/how-to-set-up-domain-authentication

This will require you to login and go to:

  • Settings -> Sender Authentication -> Domain Authentication

Stick with the defaults that include automatic security and SendGrid will give you three CNAME records. Add those to your DNS and your email will check out.

Technical Notes

DNS

If you’re familiar with email domain-based security, you’ll see that two of the records SendGrid gives you are links to DKIM keys so SendGrid can sign emails as you. The other record (emXXXX) is the host SendGrid will use to send email. The SPF record for that host includes a SendGrid SPF record covering multiple pools of IPs, so that SPF checks will pass. They use CNAMEs on your side so they can rotate keys and pool addresses without you changing DNS entries.

If none of this makes sense to you, then that’s really the point. You don’t have to know any of it - they take care of it for you.

Next Steps

Your server can now send email too. All shell users on your server rejoice!

To actually use your mail server, you’ll want to add some remote client access.

2.3.3 - Dovecot

Dovecot is an IMAP (Internet Message Access Protocol) server that allows remote clients to access their mail. There are other protocols and servers, but Dovecot serves about 75% of the internet and is a good choice.

Installation

sudo apt install dovecot-imapd
sudo apt install dovecot-submissiond

Configuration

Storage

Both Postfix and Dovecot use the mbox storage format by default. This is one big file with all your mail in it, and it doesn’t scale well. Switch to the newer maildir format, where messages are stored as individual files.

# Change where Postfix delivers mail.
sudo postconf -e "home_mailbox = Maildir/"
sudo systemctl reload postfix.service

# Change where Dovecot looks for mail.
sudo sed -i 's/^mail_location.*/mail_location = maildir:~\/Maildir/' /etc/dovecot/conf.d/10-mail.conf
sudo systemctl reload dovecot.service
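For reference, a maildir is nothing exotic: just a directory with three subdirectories, where each message lives as its own file. A throwaway demo in /tmp:

```shell
# The three standard maildir subdirectories:
#   tmp - messages being delivered, new - unread, cur - read
mkdir -p /tmp/Maildir/{cur,new,tmp}
ls /tmp/Maildir
```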

Encryption

Dovecot comes with its own default cert. This isn’t trusted, but Thunderbird will prompt you and you can choose to accept it. This will be fine for now. We’ll generate a valid cert later.

Credentials

Dovecot checks passwords against the local unix system by default and no changes are needed.

Submissions

One potential surprise is that IMAP is only for viewing existing mail. To send mail, you use the SMTP protocol and relay messages through your mail server. But we have relaying turned off, as we don’t want just anyone relaying messages.

The solution is to enable authentication, and by convention this is done by a separate port and process, called the Submission Server.

We’ve installed Dovecot’s submission server as it’s newer and easier to set up. Postfix even suggests considering it, rather than theirs. The only configuration needed is to set localhost as the relay.

# Set the relay as localhost where postfix runs
sudo sed -i 's/#submission_relay_host =/submission_relay_host = localhost/' /etc/dovecot/conf.d/20-submission.conf
sudo systemctl reload dovecot.service

Port Forwarding

Forward ports 143 and 587 to your mail server and test that you can connect from both inside and outside your LAN.

nc -zv mail.your.org 143
nc -zv mail.your.org 587

If it’s working from outside your network but not inside, you may need to enable reflection, aka hairpin NAT. This will differ per firewall vendor, but in OPNsense it’s:

Firewall -> Settings -> Advanced

 # Enable these settings
Reflection for port forwards
Reflection for 1:1
Automatic outbound NAT for Reflection

Clients

Thunderbird and others will successfully discover the correct ports and services when you provide your email address of [email protected].

Notes

TLS

Dovecot defaults to port 587 for the submission service, which is the older standard for explicit TLS. It’s now recommended by RFC to use implicit TLS on port 465, and you can add a new “submissions” service for that while leaving the default in place. Clients will pick their favorite; Thunderbird defaults to 465 when both are available.

Note: leaving the default submission port commented out just means it will use the default port. Comment out the whole block to disable it.

vi /etc/dovecot/conf.d/10-master.conf

# Change the default of

service submission-login {
  inet_listener submission {
    #port = 587
  }
}

to 

service submission-login {
  inet_listener submission {
    #port = 587
  }
  inet_listener submissions {
    port = 465
    ssl = yes
  }
}

# And reload

sudo systemctl reload dovecot.service

Make sure to port forward 465 at the firewall as well.

Next Steps

Now that you’ve got the basics working, let’s secure things a little more.

Sources

https://dovecot.org/list/dovecot/2019-July/116661.html

2.3.4 - Security

Certificates

We should use valid certificates. The best way to do that is with the certbot utility.

Certbot

Certbot automates the process of getting and renewing certs, and only requires a brief connection to port 80 as proof it’s you. There’s also a DNS-based approach, but we use the port method for simplicity. It only runs once every 60 days, so there is little risk of exploit.

Forward Port 80

You probably already have a web server and can’t just change where port 80 goes. To integrate certbot, add a name-based virtual host proxy to that web server.

# Here is a caddy example. Add this block to your Caddyfile
http://mail.your.org {
        reverse_proxy * mail.internal.lan
}

# You can also use a well-known URL if you're already using that vhost
http://mail.your.org {
   handle /.well-known/acme-challenge/ {
     reverse_proxy mail.internal.lan
   }
 }

Install Certbot

Once the port forwarding is in place, you can install certbot and use it to request a certificate. Note the --deploy-hook argument. This reloads services after a cert is obtained or renewed; otherwise, they’ll keep using an expired one.

DOMAIN=your.org

sudo apt install certbot
sudo certbot certonly --standalone --domains mail.$DOMAIN --non-interactive --agree-tos -m postmaster@$DOMAIN --deploy-hook "service postfix reload; service dovecot reload"

Once you have a cert, Certbot will keep it up to date by launching periodically from a cron job in /etc/cron.d and scanning for any needed renewals.

Postfix

Tell Postfix about the cert by using the postconf utility. This will warn you about any potential configuration errors.

# Use double quotes so $DOMAIN expands
sudo postconf -e "smtpd_tls_cert_file = /etc/letsencrypt/live/mail.$DOMAIN/fullchain.pem"
sudo postconf -e "smtpd_tls_key_file = /etc/letsencrypt/live/mail.$DOMAIN/privkey.pem"
sudo postfix reload

Dovecot

Change Dovecot to use the cert as well.

# Use double quotes so $DOMAIN expands
sudo sed -i "s/^ssl_cert = .*/ssl_cert = <\/etc\/letsencrypt\/live\/mail.$DOMAIN\/fullchain.pem/" /etc/dovecot/conf.d/10-ssl.conf
sudo sed -i "s/^ssl_key = .*/ssl_key = <\/etc\/letsencrypt\/live\/mail.$DOMAIN\/privkey.pem/" /etc/dovecot/conf.d/10-ssl.conf
sudo dovecot reload

Verifying

You can view the certificates with the commands:

openssl s_client -connect mail.$DOMAIN:143 -starttls imap -servername mail.$DOMAIN
openssl s_client -starttls smtp -showcerts -connect mail.$DOMAIN:587 -servername mail.$DOMAIN

Privacy and Anti-Spam

You can take advantage of Cloudflare (or other) services to accept and inspect your email before forwarding it on to you. As far as the Internet is concerned, Cloudflare is your email server. The rest is private.

Take a look at the Forwarding section, and simply forward your mail to your own server instead of Google’s. That will even allow you to remove your mail server from DNS and drop connections other than Cloudflare, if desired.

Intrusion Prevention

In my testing it takes less than an hour before someone discovers and attempts to break into your mail server. You may wish to GeoIP block or otherwise limit connections. You can also use crowdsec.

Crowdsec

Crowdsec is an open-source IPS that monitors your log files and blocks suspicious behavior.

Install as per their instructions.

curl -s https://packagecloud.io/install/repositories/crowdsec/crowdsec/script.deb.sh | sudo bash
sudo apt install -y crowdsec
sudo apt install crowdsec-firewall-bouncer-nftables
sudo cscli collections install crowdsecurity/postfix

Postfix

Most services now log to the system journal rather than a file. You can view them with the journalctl command.

# What is the exact service unit name?
sudo systemctl status | grep postfix

# Anything having to do with that service unit
sudo journalctl --unit [email protected]

# Zooming into just the identifiers smtp and smtpd
sudo journalctl --unit [email protected] -t postfix/smtp -t postfix/smtpd

Crowdsec accesses the system journal when you add a block to its log acquisition directives.

sudo tee -a /etc/crowdsec/acquis.yaml << EOF
source: journalctl
journalctl_filter:
  - "[email protected]"
labels:
  type: syslog
---
EOF

sudo systemctl reload crowdsec

Dovecot

Install the dovecot collection as well.

sudo cscli collections install crowdsecurity/dovecot
sudo tee -a /etc/crowdsec/acquis.yaml << EOF
source: journalctl
journalctl_filter:
  - "_SYSTEMD_UNIT=dovecot.service"
labels:
  type: syslog
---
EOF

sudo systemctl reload crowdsec

Is it working? You won’t see anything at first unless you’re actively under attack. But after 24 hours you may see some examples of attempts to relay spam.

allen@mail:~$ sudo cscli alerts list
╭────┬────────────────────┬────────────────────────────┬─────────┬──────────────────────────────────────────────┬───────────┬─────────────────────────────────────────╮
│ ID │       value        │           reason           │ country │                      as                      │ decisions │               created_at                │
├────┼────────────────────┼────────────────────────────┼─────────┼──────────────────────────────────────────────┼───────────┼─────────────────────────────────────────┤
│ 60 │ Ip:187.188.233.58  │ crowdsecurity/postfix-spam │ MX      │ 17072 TOTAL PLAY TELECOMUNICACIONES SA DE CV │ ban:1     │ 2023-05-24 06:33:10.568681233 +0000 UTC │
│ 54 │ Ip:177.229.147.166 │ crowdsecurity/postfix-spam │ MX      │ 13999 Mega Cable, S.A. de C.V.               │ ban:1     │ 2023-05-23 20:17:49.912754687 +0000 UTC │
│ 53 │ Ip:177.229.154.70  │ crowdsecurity/postfix-spam │ MX      │ 13999 Mega Cable, S.A. de C.V.               │ ban:1     │ 2023-05-23 20:15:27.964240044 +0000 UTC │
│ 42 │ Ip:43.156.25.237   │ crowdsecurity/postfix-spam │ SG      │ 132203 Tencent Building, Kejizhongyi Avenue  │ ban:1     │ 2023-05-23 01:15:43.87577867 +0000 UTC  │
│ 12 │ Ip:167.248.133.186 │ crowdsecurity/postfix-spam │ US      │ 398722 CENSYS-ARIN-03                        │ ban:1     │ 2023-05-20 16:03:15.418409847 +0000 UTC │
╰────┴────────────────────┴────────────────────────────┴─────────┴──────────────────────────────────────────────┴───────────┴─────────────────────────────────────────╯

If you’d like to get into the details, take a look at the Crowdsec page .

Next Steps

Now that your server is secure, let’s take a look at how email is authenticated and how to ensure yours is.

2.3.5 - Authentication

Email authentication prevents forgery. People can still send unsolicited email, but they can't fake who it's from. If you set up a Relay for Postfix, the relayer is doing this for you. Otherwise, proceed onward to prevent your outgoing mail from being flagged as spam.

You need three things:

  • SPF: Server IP addresses - which specific servers have authorization to send email.
  • DKIM: Server Secrets - email is signed so you know it’s authentic and unchanged.
  • DMARC: Verifies the address in the From: aligns with the domain sending the email, and what to do if not.

SPF

SPF, or Sender Policy Framework, is the oldest component. It’s a DNS TXT record that lists the servers authorized to send email for a domain.

A receiving server looks at a messages’s return path (aka RFC5321.MailFrom header) to see what domain the email purports to be from. It then looks up that domain’s SPF record and if the server that sent the email isn’t included, the email is considered forged.

Note - this doesn't check the From: header the user sees. Messages can appear (to the user) to be from anywhere. So it's mostly a low-level check to prevent spambots.

The DNS record for your Postfix server should look like:

Type: "TXT"
NAME: "@"
Value: "v=spf1 a:mail.your.org -all"

The value above shows the list of authorized servers (a:) contains mail.your.org. Mail from all other servers is considered forged (-all).
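To make the mechanics concrete, here's a toy evaluator for that record. It only understands the a: and -all terms used above; a real SPF check also resolves a/mx/include mechanisms, ip4/ip6 ranges, and the other qualifiers.

```shell
# Toy SPF evaluator: pass if the sending host appears as an a: term,
# fail when we reach -all.
spf_check() {
  record="$1" host="$2"
  for term in $record; do
    case "$term" in
      a:"$host") echo pass; return ;;
      -all)      echo fail; return ;;
    esac
  done
}

spf_check "v=spf1 a:mail.your.org -all" "mail.your.org"   # pass
spf_check "v=spf1 a:mail.your.org -all" "forger.example"  # fail
```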

To have your Postfix server check SPF for incoming messages add the SPF policy agent.

sudo apt install postfix-policyd-spf-python

sudo tee -a /etc/postfix/master.cf << EOF

policyd-spf  unix  -       n       n       -       0       spawn
    user=policyd-spf argv=/usr/bin/policyd-spf
EOF

sudo tee -a /etc/postfix/main.cf << EOF

policyd-spf_time_limit = 3600
smtpd_recipient_restrictions =
   permit_mynetworks,
   permit_sasl_authenticated,
   reject_unauth_destination,
   check_policy_service unix:private/policyd-spf
EOF

sudo systemctl restart postfix

DKIM

DKIM, or DomainKeys Identified Mail, signs emails as they are sent, ensuring that the email body and From: header (the one you see in your client) haven't been changed in transit and are vouched for by the signer.

Receiving servers see the DKIM header that includes who signed it, then use DNS to check it. Unsigned mail simply isn’t checked. (There is no could-but-didn’t in the standard).

Note - There is no connection between the domain that signs the message and what the user sees in the From: header. Messages can have a valid DKIM signature and still appear to be from anywhere. DKIM is mostly to prevent man-in-the-middle attacks from altering the message.

For Postfix, this requires installing OpenDKIM and hooking it into Postfix as detailed here. Make sure to sign with the domain root.

https://tecadmin.net/setup-dkim-with-postfix-on-ubuntu-debian/

Once you’ve done that, create the following DNS entry.

Type: "TXT"
NAME: "default._domainkey"
Value: "v=DKIM1; h=sha256; k=rsa; p=MIIBIjANBgkq..."
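The guide above uses opendkim-genkey to create the key pair and the DNS snippet. If you want to see what goes into that p= value, the openssl equivalent is only a few lines (the /tmp paths here are scratch locations for illustration):

```shell
# Generate a signing key, then derive the base64 DER public key
# that becomes the p= value in the DNS record
openssl genrsa -out /tmp/dkim-private.pem 2048 2>/dev/null
P=$(openssl rsa -in /tmp/dkim-private.pem -pubout -outform der 2>/dev/null | base64 -w0)
echo "v=DKIM1; h=sha256; k=rsa; p=$P"
```

A 2048-bit RSA public key encodes to a string beginning "MIIBIjANBgkq...", matching the truncated example above.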

DMARC

Having a DMARC record is the final piece. It instructs servers to check the From: header the user sees against the domain verified by the SPF and DKIM checks, and says what to do on a fail.

This means mail with a From: address at your.org that wasn't sent through the mail.your.org servers will be flagged as spam.

The DNS record should look like:

Type: "TXT"
NAME: "_dmarc"
Value: "v=DMARC1; p=reject; adkim=s; aspf=r;"
  • p=reject: Reject messages that fail
  • adkim=s: Use strict DKIM alignment
  • aspf=r: Use relaxed SPF alignment

Reject (p=reject) indicates that email servers should “reject” emails that fail DKIM or SPF tests, and skip quarantine.

Strict DKIM alignment (=s) means that the SPF Return-Path domain or the DKIM signing domain must be an exact match with the domain in the From: address. A DKIM signature from your.org would exactly match [email protected].

Relaxed SPF alignment (=r) means subdomains of the From: address are acceptable. I.e. the server mail.your.org from the SPF test aligns with an email from [email protected].

You can also choose quarantine mode (p=quarantine) or report-only mode (p=none) where the email will be accepted and handled as such by the receiving server, and a report sent to you like below.

v=DMARC1; p=none; rua=mailto:[email protected]

DMARC is an "or" test. In the first example, if either the SPF or DKIM domains pass, then DMARC passes. You can choose to test one, both, or none at all (meaning nothing can pass DMARC), as in the second DMARC example.
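Alignment itself is a string comparison between the From: domain and the authenticated domain. Here's a simplified sketch; note that real relaxed alignment compares organizational domains using the public suffix list, rather than the bare suffix match shown here.

```shell
# Toy alignment check: strict wants an exact match; relaxed also
# accepts the authenticated domain being a subdomain of the From: domain.
aligned() {
  from_dom="$1" auth_dom="$2" mode="$3"
  if [ "$from_dom" = "$auth_dom" ]; then echo pass; return; fi
  if [ "$mode" = relaxed ]; then
    case "$auth_dom" in *."$from_dom") echo pass; return ;; esac
  fi
  echo fail
}

aligned your.org your.org      strict    # pass: exact match
aligned your.org mail.your.org strict    # fail: subdomain under strict
aligned your.org mail.your.org relaxed   # pass: mail.your.org is under your.org
```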

To implement DMARC checking in Postfix, you can install OpenDMARC and configure a mail filter as described below.

https://www.linuxbabe.com/mail-server/opendmarc-postfix-ubuntu

Next Steps

Now that you are handling email securely and authentically, let's help ease client connections.

Autodiscovery

2.3.6 - Autodiscovery

In most cases you don't need this. Thunderbird, for example, uses a shotgun approach and may find your server by trying 'common' server names based on your email address.

But there is an RFC and other clients may need help.

DNS SRV

This takes advantage of the RFC with entries for IMAP and SMTP Submission.

Type  Name  Service      Protocol  TTL   Priority  Weight  Port  Target
SRV   @     _imap        TCP       auto  10        5       143   mail.your.org
SRV   @     _submission  TCP       auto  10        5       465   mail.your.org
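If your DNS provider takes zone-file syntax rather than form fields, the same two records would look like this (assuming a 3600 second TTL):

```
_imap._tcp        3600 IN SRV 10 5 143 mail.your.org.
_submission._tcp  3600 IN SRV 10 5 465 mail.your.org.
```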

Web Autoconfig

  • Create a DNS entry for autoconfig.your.org
  • Create a vhost and web root for that with the file mail/config-v1.1.xml
  • Add the contents below to that file
<?xml version="1.0"?>
<clientConfig version="1.1">
    <emailProvider id="your.org">
      <domain>your.org</domain>
      <displayName>Example Mail</displayName>
      <displayShortName>Example</displayShortName>
      <incomingServer type="imap">
         <hostname>mail.your.org</hostname>
         <port>143</port>
         <socketType>STARTTLS</socketType>
         <username>%EMAILLOCALPART%</username>
         <authentication>password-cleartext</authentication>
      </incomingServer>
      <outgoingServer type="smtp">
         <hostname>mail.your.org</hostname>
         <port>587</port>
         <socketType>STARTTLS</socketType> 
         <username>%EMAILLOCALPART%</username> 
         <authentication>password-cleartext</authentication>
         <addThisServer>true</addThisServer>
      </outgoingServer>
    </emailProvider>
    <clientConfigUpdate url="https://www.your.org/config/mozilla.xml" />
</clientConfig>

Note

It's traditional to match server names to protocols, and we would have used "imap.your.org" and "smtp.your.org". But using 'mail' is popular now, and it simplifies setup at several levels.

Thunderbird will try to guess at your server names, attempting to connect to smtp.your.org for example. But many Postfix configurations have spam prevention that interferes with this.

Sources

https://cweiske.de/tagebuch/claws-mail-autoconfig.htm
https://www.hardill.me.uk/wordpress/2021/01/24/email-autoconfiguration/

3 - Media

3.1 - Players

3.1.1 - LibreELEC

One of the best systems for handling media is LibreELEC. It's both a Kodi box and a server appliance that's resistant to abuse. With the right hardware (like a ROCKPro64 or Waveshare) it also makes an excellent portable server for traveling.

Deployment

Download an image from https://libreelec.tv/downloads and flash as directed. Enable SSH during the initial setup.

Storage

RAID is a useful feature but only BTRFS works directly. This is fine, but with a little extra work you can add MergerFS, a popular option for combining disks.

BTRFS

Create the RAID set on another PC. If your disks are of different sizes you can use the ‘single’ profile, but leave the metadata mirrored.

sudo mkfs.btrfs -f -L pool -d single -m raid1 /dev/sda /dev/sdb /dev/etc...

After attaching the disks to LibreELEC, the array will be automatically mounted at /media/pool, based on the label pool you specified above.

MergerFS

This is a good option if you just want to combine disks, and unlike most other RAID technologies, if you lose a disk the rest will keep going. Many people combine this with SnapRAID for off-line parity.

But it’s a bit more work.

Cooling

You may want to manage the fan. The RockPro64 has a PWM fan header and LibreELEC loads the pwm_fan module.

Kodi Manual Start

The kodi process can use a significant amount of CPU even at rest. If you’re using this primarily as a file server you can disable kodi from starting automatically.

cp /usr/lib/systemd/system/kodi.service /storage/.config/system.d/kodi-alt.service
systemctl mask kodi

To start kodi, you can enter systemctl start kodi-alt

Remotes

Plug in a cheap Fm4 style remote and it ‘just works’ with kodi. But if you want to customize some remote buttons, say to start kodi manually, you still can.

Enable SMB

To share your media, simply copy the sample file, remove all the preconfigured shares (unless you want them), and add one for your storage pool. Then just enable Samba and reboot (so the file is picked up)

cp /storage/.config/samba.conf.sample /storage/.config/samba.conf
vi /storage/.config/samba.conf
[media]
  path = /storage/pool
  available = yes
  browseable = yes
  public = yes
  writeable = yes
Config --> LibreELEC --> Services --> Enable Samba

Enable HotSpot

Config --> LibreELEC --> Network --> Wireless Networks

Enable Active and Wireless Access Point and it just works!

Enable Docker

This is a good way to handle things like Jellyfin or Plex, if you must. In the GUI, go to add-ons, search for the items below, and install.

  • docker
  • LinuxServer.io
  • Docker Image Updater

Then you must make sure docker starts after the storage is up, or the containers will see an empty folder instead of a mounted one.

vi /storage/.config/system.d/service.system.docker.service
[Unit]
...
...
After=network.target storage-pool.mount

If that fails, you can also tell docker to wait a bit

ExecStartPre=/usr/bin/sleep 120

Remote Management

You may be called upon to look at something remotely. Sadly, there’s no remote access to the GUI but you can use things like autossh to create a persistent remote tunnel, or wireguard to create a VPN connection. Wireguard is usually better.

3.1.1.1 - Add-ons

You can also use this platform as a server. Using a media-player OS as a server seems counter-intuitive at first, but in practice it is rock-solid. I have a mixed fleet of 10 or so devices, and LibreELEC has better uptime stats than TrueNAS.

The device playing content on your TV is also the media server for the rest of the house. I wouldn’t advertise this as an enterprise solution, but I can’t dispute the results.

Installation

Normal Add-ons

Common tools like rsync, as well as server software like Jellyfin, are available. You can browse as described below, or use the search tool if you're looking for something specific.

  • Select the gear icon and choose Add-ons
  • Choose LibreELEC Add-ons
  • Drill down to browse software.

Docker

If you’re on ARM or want more frequent updates, you may want to add Docker and the LinuxServer.io repository.

  • Select the gear icon and choose Add-ons
  • Search add-ons for “Docker” and install
  • Search add-ons for “LinuxServer.io” and install
  • Select “Install from repository” and choose “LinuxServer.io’s Docker Add-ons”.

Drill down and add Jellyfin, for example.

https://wiki.libreelec.tv/installation/docker

3.1.1.2 - AutoSSH

This allows you to set up and monitor a remote tunnel, as the easiest way to manage remote clients is to let them come to you. To accomplish this, we'll set up a server, create client keys, test a reverse tunnel, and set up autossh.

The Server

This is simply a server somewhere that everyone can reach via SSH. Create a normal user account with a password and home directory, such as with adduser remote. We will be connecting from our clients for initial setup with this.

The Client

Use SSH to connect to the LibreELEC client, generate an ssh key pair, and copy it to the remote server.

ssh root@libreelec.local
ssh-keygen  -f ~/.ssh/id_rsa -q -P ""

# ssh-copy-id isn't available so you must use the rather harder command below
cat ~/.ssh/id_rsa.pub | ssh -t [email protected] "cat - >> ~/.ssh/authorized_keys"

ssh [email protected]

If all went well, you can back out and then test logging in with no password. Make sure to do this and accept the host key, so that later non-interactive connections (like autossh) aren't stopped at the prompt.

The Reverse Tunnel

SSH normally connects your terminal to a remote server. Think of this as an encrypted tunnel where your keystrokes are sent to the server and its responses are sent back to you. You can send more than your keystrokes, however; any port on your system can be forwarded as well. In our case, we'll take port 22 (where ssh just happens to be listening) and send it to the rendezvous server on port 2222. SSH will continue to accept local connections while also taking connections from the remote port we are tunneling in.

# On the client, issue this command to connect the (-R)remote port 2222 to localhost:22, i.e. the ssh server on the client
ssh -N -R 2222:localhost:22 -o ServerAliveInterval=240 -o ServerAliveCountMax=2 [email protected]

# Leave that running while you log in to the rendezvous server and test whether you can now ssh to the client by connecting to the forwarded port.

ssh [email protected]
ssh root@localhost -p 2222

# Now exit both and set up Autossh below

Autossh

Autossh is a daemon that monitors ssh sessions to make sure they’re up and operational, restarting them as needed, and this is exactly what we need to make sure the ssh session from the client stays up. To run this as a service, a systemd service file is needed. For LibreELEC, these are in /storage/.config.

vi /storage/.config/system.d/autossh.service

[Unit]
Description=autossh
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
User=root
EnvironmentFile=/storage/.config/autossh
ExecStart=/storage/.kodi/addons/virtual.system-tools/bin/autossh $SSH_OPTIONS
Restart=always
RestartSec=60

[Install]
WantedBy=multi-user.target
vi /storage/.config/autossh

AUTOSSH_POLL=60
AUTOSSH_FIRST_POLL=30
AUTOSSH_GATETIME=0
AUTOSSH_PORT=22034
SSH_OPTIONS="-N -R 2222:localhost:22 [email protected] -i /storage/.ssh/id_rsa"
systemctl enable autossh.service
systemctl start autossh.service
systemctl status autossh.service

At this point, the client has an SSH connection to your rendezvous server, with port 2222 on that server forwarded back to the client's own ssh daemon. You can now connect by:

ssh [email protected]
ssh root@localhost -p 2222

If not, check the logs for errors and try again.

journalctl -b 0 --no-pager | less

Remote Control

Now that you have the client connected, you can use your Rendezvous Server as a Jump Host to access things on the remote client, such as its web interface, and even the console via VNC. Your connection will generally take the form of:

ssh -L local_port:localhost:client_port -J user@rendezvousServer root@localhost -p tunnelPort

The actual command is hard to read, as you are going through the rendezvous server twice and connecting to localhost on the destination.

ssh -L 8080:localhost:32400 -J [email protected] root@localhost -p 2222

3.1.1.3 - Building

This works best in an Ubuntu container.

LibreELEC Notes

Installed, but no SATA HDD. Found this:

RPi4 has zero support for PCIe devices so why is it “embarrasing” for LE to omit support for PCIe SATA things in our RPi4 image?

Feel free to send a pull-request to GitHub enabling the kernel config that’s needed.

https://forum.libreelec.tv/thread/27849-sata-controller-error/

Went through their resources: the beginners guide to git (https://wiki.libreelec.tv/development/git-tutorial#forking-and-cloning), building basics (https://wiki.libreelec.tv/development/build-basics), and the specific build commands (https://wiki.libreelec.tv/development/build-commands/build-commands-le-12.0.x)

and then failed because jammy wasn't compatible enough

Created a jammy container and restarted

https://ubuntu.com/server/docs/lxc-containers

sudo lxc-create --template download --name u1 ubuntu jammy amd64
sudo lxc-start --name u1 --daemon
sudo lxc-attach u1

Used some of the notes from

https://www.artembutusov.com/libreelec-raid-support/

Did a fork, clone and a

git fetch --all

but couldn't get all the downloads as the alsa.org site was down

On a side note, these are needed in the config.txt so that USB works

otg_mode=1
dtoverlay=dwc2,dr_mode=host

https://www.jeffgeerling.com/blog/2020/usb-20-ports-not-working-on-compute-module-4-check-your-overlays

I tried a menuconfig, selected SATA, and got

CONFIG_ATA=m
CONFIG_ATA_VERBOSE_ERROR=y
CONFIG_ATA_FORCE=y
CONFIG_ATA_SFF=y
CONFIG_ATA_BMDMA=y

Better compare the .config file again

Edited and committed a config.txt but it didn't show up in the image. Possibly the wrong file, or there's another way to realize that change.

Enabled the SPI interface

https://raspberrypi.stackexchange.com/questions/48228/how-to-enable-spi-on-raspberry-pi-3 https://wiki.libreelec.tv/configuration/config_txt

sudo apt install lxc

# This didn't work for some reason
sudo lxc-create --template download --name u1 --dist ubuntu --release jammy --arch amd64

sudo lxc-create --template download --name u1

sudo lxc-start --name u1 --daemon

sudo lxc-attach  u1

# Now inside, build 
apt update
apt upgrade
apt-get install gcc make git wget
apt-get install bc patchutils bzip2 gawk gperf zip unzip lzop g++ default-jre u-boot-tools texinfo xfonts-utils xsltproc libncurses5-dev xz-utils


# login and fork so you can clone more easily. Some problem with the creds

cd
git clone https://github.com/agattis/LibreELEC.tv
cd LibreELEC.tv/
git fetch --all
git tag
git remote add upstream https://github.com/LibreELEC/LibreELEC.tv.git
git fetch --all
git checkout libreelec-12.0
git checkout -b CM4-AHCI-Add
PROJECT=RPi ARCH=aarch64 DEVICE=RPi4   tools/download-tool
ls
cat /etc/passwd 
pwd
ls /home/
ls /home/ubuntu/
ls
cd ..
mv LibreELEC.tv/ /home/ubuntu/
cd /home/ubuntu/
ls -lah
chown -R ubuntu:ubuntu LibreELEC.tv/
ls -lah
cd LibreELEC.tv/
ls
ls -lah
cd
sudo -i -u ubuntu
ip a
cat /etc/resolv.conf 
ip route
sudo -i -u ubuntu


apt install tmux
sudo -i -u ubuntu tmux a




# And back on the host you can find the build output
ls -lah /var/lib/lxc/u1/rootfs/home/ubuntu/LibreELEC.tv/target/

3.1.1.4 - Fancontrol

Add this script to /storage/bin and create a service unit for it.

vi /storage/.config/system.d/fancontrol.service

systemctl enable fancontrol
#!/bin/sh

# Summary
#
# Adjust fan speed by percentage when CPU/GPU is between user set
# Min and Max temperatures.
#
# Notes
#
# Temp can be gleaned from the sysfs thermal_zone files and is in
# units of millidegrees, meaning a reading of 30000 is equal to 30.000 C
#
# Fan speed is read and controlled by the pwm_fan module and can be
# read and set from a sysfs file as well. The value can be set from 0 (off)
# to 255 (max). It defaults to 255 at start


## Set Points

# CPU Temp set points
MIN_TEMP=40 # Min desired CPU temp
MAX_TEMP=60 # Max desired CPU temp


# Fan Speeds set points
FAN_OFF=0       # Fan is off
FAN_MIN=38      # Some fans need a minimum of 15% to start from a dead stop.
FAN_MAX=255     # Max cycle for fan

# Frequency
CYCLE_FREQ=6            # How often should we check, in seconds
SHORT_CYCLE_PERCENT=20  # If we are flipping on or off more than this percent of the
                        # time, just run at min rather than shutting off

## Sensor and Control files

# CPU and GPU sysfs locations
CPU=/sys/class/thermal/thermal_zone0/temp
GPU=/sys/class/thermal/thermal_zone1/temp

# Fan Control files
FAN2=/sys/devices/platform/pwm-fan/hwmon/hwmon2/pwm1
FAN3=/sys/devices/platform/pwm-fan/hwmon/hwmon3/pwm1



## Logic

# The fan control file isn't available until the module loads and
# is unpredictable in path. Wait until it comes up

FAN=""
while [[ -z $FAN ]];do
        [[ -f $FAN2 ]] && FAN=$FAN2
        [[ -f $FAN3 ]] && FAN=$FAN3
        [[ -z $FAN ]] && sleep 1
done

# The sensors are in millidegrees so adjust the user
# set points to the same units

MIN_TEMP=$(( $MIN_TEMP * 1000 ))
MAX_TEMP=$(( $MAX_TEMP * 1000 ))


# Short cycle detection requires us to track the number
# of on-off flips to cycles

CYCLES=0
FLIPS=0

while true; do

        # Set TEMP to the highest GPU/CPU Temp
        TEMP=""
        read TEMP_CPU < $CPU
        read TEMP_GPU < $GPU
        [[ $TEMP_CPU -gt $TEMP_GPU ]] && TEMP=$TEMP_CPU || TEMP=$TEMP_GPU

        # How many degrees above or below our min threshold are we?
        DEGREES=$(( $TEMP-$MIN_TEMP ))

        # What percent of the range between min and max is that?
        RANGE=$(( $MAX_TEMP-$MIN_TEMP ))
        PERCENT=$(( (100*$DEGREES/$RANGE) ))

        # What number between 0 and 255 is that percent?
        FAN_SPEED=$(( (255*$PERCENT)/100 ))

        # Override the calculated speed for some special cases
        if [[ $FAN_SPEED -le $FAN_OFF ]]; then                  # Set anything 0 or less to 0
                FAN_SPEED=$FAN_OFF
        elif [[ $FAN_SPEED -lt $FAN_MIN ]]; then                # Set anything below the min to min
                FAN_SPEED=$FAN_MIN
        elif [[ $FAN_SPEED -ge $FAN_MAX ]]; then                # Set anything above the max to max
                FAN_SPEED=$FAN_MAX
        fi

        # Did we just flip on or off?
        read -r OLD_FAN_SPEED < $FAN
        if (    ( [[ $OLD_FAN_SPEED -eq 0 ]] && [[ $FAN_SPEED -ne 0 ]] ) || \
                ( [[ $OLD_FAN_SPEED -ne 0 ]] && [[ $FAN_SPEED -eq 0 ]] ) ); then
                FLIPS=$((FLIPS+1))
        fi

        # Every 10 cycles, check to see if we are short-cycling
        CYCLES=$((CYCLES+1))
        if [[ $CYCLES -ge 10 ]] && [[ ! $SHORT_CYCLING ]]; then
                FLIP_PERCENT=$(( 100*$FLIPS/$CYCLES ))
                if [[ $FLIP_PERCENT -gt $SHORT_CYCLE_PERCENT ]]; then
                        SHORT_CYCLING=1
                        echo "Short-cycling detected. Fan will run at min speed rather than shutting off."
                else
                        CYCLES=0;FLIPS=0
                fi
        fi

        # If we are short-cycling and would turn the fan off, just set to min
        if [[ $SHORT_CYCLING ]] && [[ $FAN_SPEED -le $FAN_MIN ]]; then
                FAN_SPEED=$FAN_MIN
        fi

        # Every so often, exit short cycle mode to see if conditions have changed
        if [[ $SHORT_CYCLING ]] && [[ $CYCLES -gt 10000 ]]; then  # Roughly half a day
                echo "Exiting short-cycling"
                SHORT_CYCLING=""
        fi

        # Write that to the fan speed control file
        echo $FAN_SPEED > $FAN

        # Log the stats every once in a while
#       if [[ $LOG_CYCLES ]] && [[ $LOG_CYCLES -ge 10 ]]; then
#               echo "Temp was $TEMP fan set to $FAN_SPEED"
#               LOG_CYCLES=""
#       else
#               LOG_CYCLES=$(($LOG_CYCLES+1))
#       fi

        sleep $CYCLE_FREQ

done

# Also look at drive temps. The sysfs filesystem isn't useful for
# all drives on RockPro64 so use smartctl instead

#ls -1 /dev/sd? | xargs -n1 smartctl -A | egrep ^194 | awk '{print $4}'
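To sanity-check the proportional math in the script above, here's the same arithmetic run on a sample reading (CPU at 50 C with the default 40-60 set points):

```shell
MIN_TEMP=$(( 40 * 1000 ))    # 40 C in millidegrees
MAX_TEMP=$(( 60 * 1000 ))    # 60 C
TEMP=50000                   # sample reading: 50 C

DEGREES=$(( TEMP - MIN_TEMP ))          # 10000
RANGE=$(( MAX_TEMP - MIN_TEMP ))        # 20000
PERCENT=$(( 100 * DEGREES / RANGE ))    # 50
FAN_SPEED=$(( 255 * PERCENT / 100 ))    # 127

echo "${PERCENT}% -> pwm $FAN_SPEED"    # 50% -> pwm 127
```

So a temperature halfway between the set points lands the fan at roughly half of the 0-255 PWM range, as intended.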

3.1.1.5 - MergerFS

This is a good option if you just want to combine disks, and unlike most other RAID technologies, if you lose a disk the rest will keep going. Many people combine this with SnapRAID for off-line parity.

Prepare and Exempt Disks

Prepare the file systems and exempt them from auto-mounting so you can supply your own mount options and make sure they are up before you start MergerFS.

Make sure to wipe the disks beforehand, as wipefs and fdisk are not available in LibreELEC.
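dd is available, though, and zeroing the start of a disk clears old filesystem signatures. The sketch below runs against a scratch file standing in for the device; point it at /dev/sdX only once you're certain the disk holds nothing you need.

```shell
# Scratch file standing in for /dev/sdX
DISK=/tmp/fake-disk
dd if=/dev/urandom of="$DISK" bs=1M count=4 2>/dev/null     # simulate old data

# Zero the first few MiB to clear any old filesystem signatures
dd if=/dev/zero of="$DISK" bs=1M count=4 conv=notrunc 2>/dev/null

# Verify: count the non-zero bytes at the start (should print 0)
head -c 4096 "$DISK" | tr -d '\0' | wc -c
```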

# Assuming the disks are wiped, format and label each disk the same
mkfs.ext4 /dev/sda 
e2label /dev/sda pool-member

# Copy the udev rule for editing 
cp /usr/lib/udev/rules.d/95-udevil-mount.rules /storage/.config/udev.rules.d
vi /storage/.config/udev.rules.d/95-udevil-mount.rules

Edit this section by adding the pool-member label from above

# check for special partitions we dont want mount
IMPORT{builtin}="blkid"
ENV{ID_FS_LABEL}=="EFI|BOOT|Recovery|RECOVERY|SETTINGS|boot|root0|share0|pool-member", GOTO="exit"

Test this by rebooting and making sure the drives are not mounted.

Add Systemd Mount Units

Each filesystem requires a mount unit like the one below. Create one for each drive, named disk1, disk2, etc. Note: the file name is important; to mount /storage/disk1 the file must be named storage-disk1.mount
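That naming rule is systemd's path escaping: drop the leading slash and turn the remaining slashes into dashes (where available, systemd-escape -p --suffix=mount computes it for you). A sketch of the simple case, ignoring the extra escaping systemd applies to dots and dashes inside names:

```shell
# Derive the required unit name from a mount point:
# strip the leading /, replace the remaining /s with -
mount_unit_name() {
  path="${1#/}"
  echo "$(echo "$path" | tr / -).mount"
}

mount_unit_name /storage/disk1    # storage-disk1.mount
mount_unit_name /storage/disk2    # storage-disk2.mount
```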

vi /storage/.config/system.d/storage-disk1.mount
[Unit]
Description=Mount sda
Requires=dev-sda.device
After=dev-sda.device

[Mount]
What=/dev/sda
Where=/storage/disk1
Type=ext4
Options=rw,noatime,nofail

[Install]
WantedBy=multi-user.target
systemctl enable --now storage-disk1.mount

Download and Test MergerFS

MergerFS isn't available as an add-on, but you can get it directly from the developer. LibreELEC (and CoreELEC) on ARM have a 32 bit user space, so you'll need the armhf version.
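If you're unsure which user space you're on, checking from the device settles it; a 32-bit ARM user space needs armhf even when the kernel itself is 64-bit.

```shell
uname -m           # kernel architecture, e.g. aarch64
getconf LONG_BIT   # user-space word size: 32 or 64 (where getconf exists)
```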

wget https://github.com/trapexit/mergerfs/releases/latest/download/mergerfs-static-linux_armhf.tar.gz

tar --extract --file=./mergerfs-static-linux_armhf.tar.gz --strip-components=3 usr/local/bin/mergerfs

mkdir bin
mv mergerfs bin/

Mount the drives and run a test like below. Notice the escaped *. That’s needed at the command line to prevent shell globbing.

mkdir /storage/pool
/storage/bin/mergerfs /storage/disk\* /storage/pool/

Create the MergerFS Service

vi /storage/.config/system.d/mergerfs.service
[Unit]
Description = MergerFS Service
After=storage-disk1.mount storage-disk2.mount storage-disk3.mount storage-disk4.mount
Requires=storage-disk1.mount storage-disk2.mount storage-disk3.mount storage-disk4.mount

[Service]
Type=forking
ExecStart=/storage/bin/mergerfs -o category.create=mfs,noatime /storage/disk* /storage/pool/
ExecStop=umount /storage/pool

[Install]
WantedBy=default.target
systemctl enable --now mergerfs.service

Your content should now be available in /storage/pool after boot.

3.1.1.6 - Remotes

Most remotes just work. Newer ones emulate a keyboard and send well-known multimedia keys like 'play' and 'volume up'. If you want to change what a button does, you can tell Kodi what to do pretty easily. In addition, LibreELEC also supports older remotes using eventlircd, and popular ones are already configured. You can add unusual ones, as well as have normal remotes perform arbitrary actions when kodi isn't running (like telling the computer to start kodi or shut down cleanly).

Modern Remotes

If you plug in a remote receiver and the kernel makes reference to a keyboard you have a modern remote and Kodi will talk to it directly.

dmesg

input: BESCO KSL81P304 Keyboard as ...
hid-generic 0003:2571:4101.0001: input,hidraw0: USB HID v1.11 Keyboard ...

If you want to change a button action, put kodi into log mode, tail the logfile, and press the button in question to see what event is detected.

# Turn on debug
kodi-send -a toggledebug

# Tail the logfile
tail -f /storage/.kodi/temp/kodi.log

   debug <general>: Keyboard: scancode: 0xac, sym: 0xac, unicode: 0x00, modifier: 0x0
   debug <general>: HandleKey: browser_home (0xf0b6) pressed, window 10000, action is ActivateWindow(Home)

In this example, we pressed the 'home' button on the remote. That was detected as a keyboard press of the browser_home key. This is just one of many defined keys, like 'email' and 'calculator', that can be present on a keyboard. Kodi has a default action for that, and you can see what it is in the system keymap.

# View the system keyboard map to see what's happening by default
cat /usr/share/kodi/system/keymaps/keyboard.xml

To change what happens, create a user keymap. Any entries in it will override the default.

# Create a user keymap that takes you to 'Videos' instead of 'Home'
vi /storage/.kodi/userdata/keymaps/keyboard.xml
<keymap>
  <global>
    <keyboard>
      <browser_home>ActivateWindow(Videos)</browser_home>
    </keyboard>
  </global>
</keymap>
kodi-send -a reloadkeymaps

Legacy Remotes

How They Work

Some receivers don’t send well-known keys. For these, there’s eventlircd. LibreELEC has a list of popular remotes that fall into this category and will dynamically use it as needed. For instance, pair an Amazon Fire TV remote and udev will fire, match a rule in /usr/lib/udev/rules.d/98-eventlircd.rules, and launch eventlircd with the buttons mapped in /etc/eventlircd.d/aftvsremote.evmap.

These interface with Kodi using its "LIRC" (Linux Infrared Remote Control) interface. And just like with keyboards, there's a set of well-known remote keys Kodi will accept. Some remotes don't know about these, so eventlircd does some pre-translation before relaying to Kodi. If you look in the aftvsremote.evmap file, for example, you'll see that KEY_HOMEPAGE = KEY_HOME.
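The evmap format is just input-key = output-key pairs, so you can query a translation with a line of awk. The file below is a made-up sample; on a device you'd point this at the real /etc/eventlircd.d/aftvsremote.evmap.

```shell
# A sample in the evmap format (second line is hypothetical)
cat > /tmp/sample.evmap << 'EOF'
KEY_HOMEPAGE = KEY_HOME
KEY_BACK     = KEY_ESC
EOF

# What does a given button become before Kodi sees it?
awk -v k=KEY_HOMEPAGE '$1 == k {print $3}' /tmp/sample.evmap   # KEY_HOME
```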

To find out if your remote falls into this category, enable logging, tail the log, and if your remote has been picked up for handling by eventlircd you’ll see some entries like this.

    debug <general>: LIRC: - NEW 66 0 KEY_HOME devinput (KEY_HOME)
    debug <general>: HandleKey: percent (0x25) pressed, window 10000, action is PreviousMenu

In the first line, Kodi notes that its LIRC interface received a KEY_HOME button press. (Eventlircd actually translated it, but that happened before Kodi saw anything.) In the second line, Kodi says it received the key ‘percent’ and performed the action ‘PreviousMenu’. The part where Kodi says ‘percent (0x25)’ was pressed seems resistant to documentation, but the action of PreviousMenu is the end result. The main question is: why?

Turns out that Kodi has a pre-mapping file for events relayed to it from LIRC systems. There’s a mapping for ‘KEY_HOME’ that Kodi translates to the well-known key ‘start’. Then Kodi checks the normal keymap file and ‘start’ translates to the Kodi action ‘PreviousMenu’.

Take a look at the system LIRC mapping file to see for yourself.

# The Lircmap file has the Kodi well-known button (start) surrounding the original remote command (KEY_HOME)
grep KEY_HOME /usr/share/kodi/system/Lircmap.xml

      <start>KEY_HOME</start>

Then take a look at the normal mapping file to see how start gets handled.

# The keymap file has the well-known Kodi button surrounding the Kodi action, 
grep start /usr/share/kodi/system/keymaps/remote.xml 

      <start>PreviousMenu</start>

You’ll actually see quite a few things are mapped to ‘start’ as it does different things depending on what part of Kodi you are accessing at the time.

Changing Button Mappings

You have a few options, and they are listed here in increasing complexity. Specifically, you can

  • Edit the keymap
  • Edit the Lircmap and keymap
  • Edit the eventlircd evmap

Edit the Keymap

To change what the KEY_HOME button does, you can create a user keymap like before and override it. It just needs the keyboard element changed to remote, since the event enters through the LIRC interface. In this example we’ve set it to actually take you home via the Kodi function ActivateWindow(Home).

vi /storage/.kodi/userdata/keymaps/remote.xml
<keymap>
  <global>
    <remote>
      <start>ActivateWindow(Home)</start>
    </remote>
  </global>
</keymap>

Edit the Lircmap and Keymap

This can occasionally cause problems though - such as when you have another button that already gets translated to start and you want it to keep working the same. In this case, you make an edit at the Lircmap level to translate KEY_HOME to some other button first, then map that button to the action you want. (You can’t put the Kodi function above in the Lircmap file so you have to do a double hop.)

First, let’s determine what the device name should be with the irw command.

irw

# Hit a button and the device name will be at the end
66 0 KEY_HOME devinput

Now let’s pick a key. My remote doesn’t have a ‘red’ key, so let’s hijack that one. Note the device name devinput from the above.

vi /storage/.kodi/userdata/Lircmap.xml
<lircmap>
   <remote device="devinput">
      <red>KEY_HOME</red>
   </remote>
</lircmap>

Then map the key and restart Kodi (the keymap reload command doesn’t handle Lircmap).

vi /storage/.kodi/userdata/keymaps/remote.xml
<keymap>
  <global>
    <remote>
      <red>ActivateWindow(Home)</red>
    </remote>
  </global>
</keymap>
systemctl restart kodi

Edit the Eventlircd Evmap

You can also change what eventlircd does. If LibreELEC weren’t a read-only filesystem, you’d have done this first. But you can do it with a bit more work than the above if you prefer.

# Copy the evmap files
cp -r /etc/eventlircd.d /storage/.config/

# Override where the daemon looks for its configs
systemctl edit --full eventlircd

# change the ExecStart line to refer to the new location - add vvv to the end for more log info
ExecStart=/usr/sbin/eventlircd -f --evmap=/storage/.config/eventlircd.d --socket=/run/lirc/lircd -vvv

# Restart, replug the device and grep the logs to see what evmap is in use
systemctl restart eventlircd
journalctl | grep evmap

# Edit that map to change how home is mapped (yours may not use the default map)
vi /storage/.config/eventlircd.d/default.evmap
KEY_HOMEPAGE     = KEY_HOME

Dealing With Unknown Buttons

Sometimes, you’ll have a button that does nothing at all.

    debug <general>: LIRC: - NEW ac 0 KEY_HOMEPAGE devinput (KEY_HOMEPAGE)
    debug <general>: HandleKey: 0 (0x0, obc255) pressed, window 10016, action is 

In this example, Kodi received the KEY_HOMEPAGE button, consulted its Lircmap.xml, and didn’t find anything. This is because eventlircd didn’t recognize the remote and translate the key to KEY_HOME like before. That’s OK, we can just add a user LIRC mapping. If you look through the system file, you’ll see that keys like ‘KEY_HOME’ are mapped to the ‘start’ button. So let’s do the same.

vi /storage/.kodi/userdata/Lircmap.xml
<lircmap>
   <remote device="devinput">
      <start>KEY_HOMEPAGE</start>
   </remote>
</lircmap>
systemctl restart kodi

Check the log and you’ll see that you now get

    debug <general>: LIRC: - NEW ac 0 KEY_HOMEPAGE devinput (KEY_HOMEPAGE)
    debug <general>: HandleKey: 251 (0xfb, obc4) pressed, window 10025, action is ActivateWindow(Home)

Remotes Outside Kodi

You may want a remote to work outside of kodi too - say because you want to start kodi with a remote button. If you have a modern remote that eventlircd didn’t capture, you must first add your remote to the list of udev rules.

Capture The Remote

First you must identify the remote with lsusb. It’s probably the only non-hub device listed.

lsusb
...
...
Bus 006 Device 002: ID 2571:4101 BESCO KSL81P304
                        ^     ^
Vendor ID -------------/       \--------- Model ID
...

Then, copy the udev rule file and add a custom rule for your remote.

cp /usr/lib/udev/rules.d/98-eventlircd.rules /storage/.config/udev.rules.d/
vi /storage/.config/udev.rules.d/98-eventlircd.rules
...
...
...
ENV{ID_USB_INTERFACES}=="", IMPORT{builtin}="usb_id"

# Add the rule under the above line so the USB IDs are available. 
# change the numbers to match the ID from lsusb

ENV{ID_VENDOR_ID}=="2571", ENV{ID_MODEL_ID}=="4101", \
  ENV{eventlircd_enable}="true", \
  ENV{eventlircd_evmap}="default.evmap"
...

Now, reboot, turn on logging and see what the buttons show up as. You can also install the system tools add-on in kodi, and at the command line, stop kodi and the eventlircd service, then run evtest and press some buttons. You should see something like

Testing ... (interrupt to exit)
Event: time 1710468265.112925, type 4 (EV_MSC), code 4 (MSC_SCAN), value c0223
Event: time 1710468265.112925, type 1 (EV_KEY), code 172 (KEY_HOMEPAGE), value 1
Event: time 1710468265.112925, -------------- SYN_REPORT ------------
Event: time 1710468265.200987, type 4 (EV_MSC), code 4 (MSC_SCAN), value c0223
Event: time 1710468265.200987, type 1 (EV_KEY), code 172 (KEY_HOMEPAGE), value 0
Event: time 1710468265.200987, -------------- SYN_REPORT ------------
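The stop-and-inspect procedure described above can be sketched as follows. Run as root; evtest with no arguments lists the input devices and prompts you to pick one.

```shell
# Stop the consumers so evtest can read the device directly
systemctl stop kodi
systemctl stop eventlircd

# List input devices, pick your remote, then press some buttons
evtest
```

When you’re done, restart the services (or just reboot).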

Configure and Enable irexec

Now that you have seen the event, you must have the irexec process watching for it to take action. Luckily, LibreELEC already includes it.

vi /storage/.config/system.d/irexec.service
[Unit]
Description=IR Remote irexec config
After=eventlircd.service
Wants=eventlircd.service

[Service]
ExecStart=/usr/bin/irexec --daemon /storage/.lircrc
Type=forking

[Install]
WantedBy=multi-user.target

We’ll create the config file next. The config gives the command or script to run: systemctl start kodi in our case.

vi /storage/.lircrc
begin
    prog   = irexec
    button = KEY_HOMEPAGE
    config = systemctl start kodi
end

Let’s enable and start it up

systemctl enable --now irexec

Go ahead and stop kodi, then press the KEY_HOMEPAGE button on your remote. Try config entries like echo start kodi > /storage/test-results if you have issues and wonder if it’s running.

Notes

You may notice that eventlircd is always running, even if it has no remotes. That’s because a unit file is in /usr/lib/systemd/system/multi-user.target.wants/. I’m not sure why this is the case when there is no remote in play.

https://discourse.osmc.tv/t/cant-create-a-keymap-for-a-remote-control-button-which-is-connected-by-lircd/88819/6

4 - Web

4.1 - Access Logging

4.1.1 - GoAccess

GoAccess is a lightweight web stats visualizer that can display in a terminal window or in a browser. It supports Caddy’s native JSON log format and can also be run as a real-time service with a little work.

Installation

If you have Debian/Ubuntu, you can add the repo as the [official docs] show.

# Note: The GoAccess docs use the `lsb_release -cs` utility that some Debians don't have, so I've substituted the $VERSION_CODENAME variable from the os-release file
wget -O - https://deb.goaccess.io/gnugpg.key | gpg --dearmor | sudo tee /usr/share/keyrings/goaccess.gpg >/dev/null 
source /etc/os-release 
echo "deb [signed-by=/usr/share/keyrings/goaccess.gpg arch=$(dpkg --print-architecture)] https://deb.goaccess.io/ $VERSION_CODENAME main" | sudo tee /etc/apt/sources.list.d/goaccess.list
sudo apt update
sudo apt install goaccess

Basic Operation

No configuration is needed if your webserver is logging in a supported format1. Though you may need to adjust file permissions so the log file can be read by the user running GoAccess.
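If the log isn’t readable, a couple of approaches work. These assume the files are group-owned by caddy, which is typical on Debian installs; check with ls and adjust the group name as needed.

```shell
# Option 1: add yourself to the log's group (log out and back in to apply)
sudo usermod -aG caddy $USER

# Option 2: just allow group reads on the file
sudo chmod g+r /var/log/caddy/access.log
```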

To use it in the terminal, all you need to do is invoke it with a couple of parameters.

sudo goaccess --log-format CADDY --log-file /var/log/caddy/access.log

To produce an HTML report, just add an output file somewhere your web server can find it.

sudo touch  /var/www/some.server.org/report.html
sudo chown $USER /var/www/some.server.org/report.html
goaccess --log-format=CADDY --output /var/www/some.server.org/report.html --log-file /var/log/caddy/access.log

Retaining History

History is useful, and GoAccess lets you persist your data and incorporate it on the next run. This updates your report rather than replacing it.

# Create a database location
sudo mkdir -p /var/lib/goaccess-db
sudo chown $USER /var/lib/goaccess-db

goaccess \
    --persist --restore --db-path /var/lib/goaccess-db \
    --log-format CADDY --output /var/www/some.server.org/report.html --log-file /var/log/caddy/access.log

GeoIP

To display country and city you need GeoIP databases, preferably the GeoIP2 format. An easy way to get started is with DB-IP’s Lite GeoIP2 databases that have a permissive license.

# These are for Jan 2025. Check https://db-ip.com/db/lite.php for the updated links
wget --directory-prefix /var/lib/goaccess-db \
    https://download.db-ip.com/free/dbip-asn-lite-2025-01.mmdb.gz \
    https://download.db-ip.com/free/dbip-city-lite-2025-01.mmdb.gz \
    https://download.db-ip.com/free/dbip-country-lite-2025-01.mmdb.gz  
    
gunzip /var/lib/goaccess-db/*.gz

goaccess \
    --persist --restore --db-path /var/lib/goaccess-db \
    --geoip-database /var/lib/goaccess-db/dbip-asn-lite-2025-01.mmdb  \
    --geoip-database /var/lib/goaccess-db/dbip-city-lite-2025-01.mmdb  \
    --geoip-database /var/lib/goaccess-db/dbip-country-lite-2025-01.mmdb \
    --log-format CADDY --output /var/www/some.server.org/report.html --log-file /var/log/caddy/access.log

This will add a Country and ASN panel, and populate the city column on the “Visitor Hostnames and IPs” panel.

This one-time download is “good enough” for most purposes. But if you want to automate updates of the GeoIP data, you can create an account with a provider and get an API key. Maxmind offers free accounts. Sign up here and you can sudo apt install geoipupdate to get regular updates.
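If you go the geoipupdate route, its config lives in /etc/GeoIP.conf. A minimal sketch follows; the account ID, license key, and edition names are placeholders that come from your Maxmind account.

```shell
sudo vi /etc/GeoIP.conf
AccountID YOUR_ACCOUNT_ID
LicenseKey YOUR_LICENSE_KEY
EditionIDs GeoLite2-ASN GeoLite2-City GeoLite2-Country

# Fetch the databases (the package may also install a periodic cron job)
sudo geoipupdate
```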

Automation

A reasonable way to automate is with a logrotate hook. Most systems already have this in place to handle their logs so it’s an easy add. If you’re using Apache or nginx you probably already have one that you can just add a prerotate hook to. For Caddy, something like this should be added.

sudo vi  /etc/logrotate.d/caddy
/var/log/caddy/access.log {
    daily
    rotate 7
    compress
    missingok
    prerotate
        goaccess \
        --persist --restore --db-path /var/lib/goaccess-db \
        --geoip-database /var/lib/goaccess-db/dbip-asn-lite-2025-01.mmdb  \
        --geoip-database /var/lib/goaccess-db/dbip-city-lite-2025-01.mmdb  \
        --geoip-database /var/lib/goaccess-db/dbip-country-lite-2025-01.mmdb \
        --log-format CADDY --output /var/www/some.server.org/report.html --log-file /var/log/caddy/access.log
    endscript
    postrotate
        systemctl restart caddy
    endscript
}

You can test that this works with the force option. If it works, you’ll be left with updated stats, an empty access.log, and a newly minted access.log.1.gz

sudo logrotate --force /etc/logrotate.d/caddy

For Caddy, you’ll also need to disable its built-in log rotation

sudo vi /etc/caddy/Caddyfile


log  { 
    output file /path/to/your/log.log {
        roll_disabled
    }
}

sudo systemctl restart caddy

Of course, this runs as root. You can design that out, but you can also configure systemd to trigger a timed execution and run as non-root. Arnaud Rebillout has some good info on that.

Real-Time Service

You can also run GoAccess as a service for real-time log analysis. It works in conjunction with your web server by providing a Web Socket to push the latest data to the browser.

In this example, Caddy is logging vhosts to individual files as described in the Caddy logging page. This is convenient as it allows you to view vhosts separately, which is often desired. Adjust as needed.

Prepare The System

# Create a system user with no home
sudo adduser --system --group --no-create-home goaccess

# Add the user to whatever group the logfiles are set to (ls -lah /var/log/caddy/ or whatever)
sudo adduser goaccess caddy

# Put the site in a variable to make edits easier
SITE=www.site.org

# Create a database location
sudo mkdir -p /var/lib/goaccess-db/$SITE
sudo chown goaccess:goaccess /var/lib/goaccess-db/$SITE

# Possibly create the report location
sudo touch /var/www/$SITE/report-$SITE.html
sudo chown goaccess /var/www/$SITE/report-$SITE.html

# Test that the goaccess user can create the report
sudo -u goaccess goaccess \
    --log-format CADDY \
    --output /var/www/$SITE/report-$SITE.html \
    --persist \
    --restore \
    --db-path /var/lib/goaccess-db/$SITE \
    --log-file /var/log/caddy/$SITE.log    

Create The Service

sudo vi /etc/systemd/system/goaccess.service
[Unit]
Description=GoAccess Web log report
After=network.target

# Note: Variable braces are required for systemd variable expansion
[Service]
Environment="SITE=www.site.org"
Type=simple
User=goaccess
Group=goaccess
Restart=always
ExecStart=/usr/bin/goaccess \
    --log-file /var/log/caddy/${SITE}.log \
    --log-format CADDY \
    --persist \
    --restore \
    --db-path /var/lib/goaccess-db/${SITE} \
    --output /var/www/${SITE}/report-${SITE}.html \
    --real-time-html
StandardOutput=null
StandardError=null

[Install]
WantedBy=multi-user.target
sudo systemctl enable --now goaccess.service
sudo systemctl status goaccess.service
sudo journalctl -u  goaccess.service

If everything went well, it should be listening on the default port of 7890

nc -zv localhost 7890
localhost [127.0.0.1] 7890 (?) open

BUT you can’t access that port unless you’re on the same LAN. You can start forwarding that port and even setup SSL in the WS config, but in most cases it’s better to handle it with a proxy.

Configure Access via Proxy

To avoid adding additional port forwarding, you can convert the websocket connection from a high-level port to a proxy path. This works with Cloudflare as well.

Edit your GoAccess service unit to indicate the proxied URL.

ExecStart
    ...
    # Note the added escape slash on the formerly last line
    --real-time-html \  
    --ws-url wss://www.site.org:443/ws/goaccess 
    # If you don't add the port explicitly, GoAccess
    # will 'helpfully' add the internal port (which isn't helpful), silently.

Add a proxy line to your web server. If using Caddy, add a path handler and proxy like this, and restart.

some.server.org {
        ...
        ...
        handle_path /ws/goaccess* {
                reverse_proxy * http://localhost:7890
        }
}

Take your browser to the URL, and you should see the gear icon top left now has a green dot under it.

Image of WebSocket indicator

If the dot isn’t green you’re not connected so take a look at the troubleshooting section.

Create Multiple Services vs Multiple Reports

When you have lots of vhosts, it’s useful to separate them at the log level and report separately. To do that you can use a systemd template to create multiple instances. Arnaud Rebillout has some details on that.
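A rough sketch of what such a template could look like; the unit name and paths are assumptions based on the service above, and %i expands to the instance name.

```shell
sudo vi /etc/systemd/system/[email protected]
[Unit]
Description=GoAccess report for %i
After=network.target

[Service]
User=goaccess
Group=goaccess
Restart=always
ExecStart=/usr/bin/goaccess \
    --log-file /var/log/caddy/%i.log \
    --log-format CADDY \
    --persist --restore --db-path /var/lib/goaccess-db/%i \
    --output /var/www/%i/report-%i.html \
    --real-time-html

[Install]
WantedBy=multi-user.target

# Then enable one instance per site
sudo systemctl enable --now [email protected]
```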

But scaling that becomes onerous. My preference is to automate report generation more frequently and skip the realtime.

Troubleshooting

No Data

Check the permissions on the files. If you accidentally typed caddy start as root it will be running as root and later runs may not be able to write log entries.
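A quick way to compare ownership; the process name caddy is an assumption based on the examples above.

```shell
# Compare the log file's owner/group with the user the server runs as
ls -lah /var/log/caddy/
ps -o user= -C caddy
```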

GoAccess Isn’t Live

Your best bet is to open the developer tools in your browser, check the network tab and refresh. If proxying is wrong it will give you some hints.

What About The .conf

The file /etc/goaccess/goaccess.conf can be used, just make sure to remove the arguments from the unit file so there isn’t a conflict.

Web Socket Intermittent

Some services, such as Cloudflare, support WS but can cause intermittent disconnections. Try separating your stats from your main traffic site.


  1. See the --log-format options at https://goaccess.io/man ↩︎

4.2 - Content

4.2.1 - Content Mgmt

There are many ways to manage and produce web content. Traditionally, you’d use a large application with roles and permissions.

A more modern approach is to use a distributed version control system, like git, and a site generator.

Static Site Generators are gaining popularity as they produce static HTML with javascript and CSS that can be deployed to any Content Delivery Network without need for server-side processing.

Astro is great, as is Hugo, with the latter being around longer and having more resources.

4.2.1.1 - Hugo

Hugo is a Static Site Generator (SSG) that turns Markdown files into static web pages that can be deployed anywhere.

Like WordPress, you apply a ’theme’ to style your content. But rather than use a web interface to create content, you directly edit the content in markdown files. This lends itself well to managing the content as code and appeals to those who prefer editing text.

However, unlike other SSGs, you don’t have to be a front-end developer to get great results and you can jump in with a minimal investment of time.

4.2.1.1.1 - Hugo Install

Requirements

I use Debian in this example, but any apt-based distro will be similar.

Preparation

Enable and pin the Debian Backports and Testing repos so you can get recent versions of Hugo and needed tools.

–> Enable and Pin

Installation

Hugo requires git and go

# Assuming you have enabled backports as per above
sudo apt install -t bullseye-backports git
sudo apt install -t bullseye-backports golang-go

For a recent version of Hugo you’ll need to go to the testing repo. The extended version is recommended by Hugo and it’s chosen by default.

# This pulls in a number of other required packages, so take a close look at the messages for any conflicts. It's normally fine, though. 
sudo apt install -t testing  hugo

In some cases, you can just install from the Debian package with a lot less effort. Take a look at latest and copy the URL into a wget.

https://github.com/gohugoio/hugo/releases/latest

wget https://github.com/gohugoio/hugo/releases/download/v0.124.1/hugo_extended_0.124.1_linux-amd64.deb
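After downloading, install it with apt so dependencies are resolved (the filename matches the wget above).

```shell
sudo apt install ./hugo_extended_0.124.1_linux-amd64.deb
```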

Configuration

A quick test right from the quickstart page to make sure everything works

hugo new site quickstart
cd quickstart
git init
git submodule add https://github.com/theNewDynamic/gohugo-theme-ananke themes/ananke
echo "theme = 'ananke'" >> config.toml
hugo server

Open up a browser to http://localhost:1313/ and you’ll see the default ananke-themed site.

Next Steps

The ananke theme you just deployed is nice, but a much better theme is Docsy. Go give that a try.

–> Deploy Docsy on Hugo

4.2.1.1.2 - Docsy Install

Docsy is a good-looking Hugo theme that provides a landing page, blog, and documentation sub-sites using bootstrap CSS.

The documentation site in particular lets you turn a directory of text files into a documentation tree with relative ease. It even has a collapsible left nav bar. That is harder to find than you’d think.

Preparation

Docsy requires Hugo. Install that if you haven’t already. It also needs a few other things; postcss, postcss-cli, and autoprefixer from the Node.JS ecosystem. These should be installed in the project directory as version requirements change per theme.

mkdir some.site.org
cd some.site.org
sudo apt install -t testing nodejs npm
npm install -D autoprefixer 
npm install -D postcss
npm install -D postcss-cli

Installation

Deploy Docsy as a Hugo module and pull in the example site so we have a skeleton to work with. We’re using git, but we’ll keep it local for now.

git clone https://github.com/google/docsy-example.git .
hugo server

Browse to http://localhost:1313 and you should see the demo “Goldydocs” site.

Now you can proceed to configure Docsy!

Updating

The Docsy theme gets regular updates. To incorporate those, you only have to run this command. Do this now, actually, to get any theme updates the example site hasn’t incorporated yet.

cd /path/to/my-existing-site
hugo mod get -u github.com/google/docsy

Troubleshooting

hugo

Error: Error building site: POSTCSS: failed to transform “scss/main.css” (text/css)>: Error: Loading PostCSS Plugin failed: Cannot find module ‘autoprefixer’

And then when you try to install the missing module

The following packages have unmet dependencies:
 nodejs : Conflicts: npm
 npm : Depends: node-cacache but it is not going to be installed

You may already have Node.JS installed. Skip trying to install it from the OS’s repo and see if npm works. Then proceed with the postcss install and such.

4.2.1.1.3 - Docsy Config

Let’s change the basics of the site in the config.toml file. I put some quick sed commands here, but you can edit by hand as well. Of note is the Github integration. We prepopulate it here for future use, as it allows quick edits in your browser down the road.

SITE=some.site.org
GITHUBID=someUserID
sed -i "s/Goldydocs/$SITE/" config.toml
sed -i "s/The Docsy Authors/$SITE/" config.toml
sed -i "s/example.com/$SITE/" config.toml
sed -i "s/example.org/$SITE/" config.toml
sed -i "s/google\/docsy-example/$GITHUBID\/$SITE/" config.toml 
sed -i "s/USERNAME\/REPOSITORY/$GITHUBID\/$SITE/" config.toml 
sed -i "s/https:\/\/policies.google.com//" config.toml
sed -i "s/https:\/\/github.com\/google\/docsy/https:\/\/github.com\/$GITHUBID/" config.toml
sed -i "s/github_branch/#github_branch/" config.toml

If you don’t plan to translate your site into different languages, you can dispense with some of the extra languages as well.

# Delete the 20 or so lines starting at "[languages]" and stopping at the "[markup]" section,
# including the English section.
vi config.toml

# Delete the folders from 'content/' as well, leaving 'en'
rm -rf content/fa content/no

You should also set a default meta description, or the engine will put in the bootstrap default and Google will summarize all your pages with that

vi config.toml
[params]
copyright = "some.site.org"
privacy_policy = "/privacy"
description = "My personal website to document what I know and how I did it"

Keep an eye on the site in your browser as you make changes. When you’re ready to start with the main part of adding content, take a look at the next section.

Docsy Operation

Notes

You can’t dispense with the en folder yet, as it breaks some github linking functionality you may want to take advantage of later

4.2.1.1.4 - Docsy Operate

This is a quick excerpt from the Docsy Content and Customization docs. Definitely spend time with those after reading the overview here.

Directory Layout

Content is, appropriately enough, in the content directory, and its subdirectories line up with the top-level navigation bar of the web site. About, Documentation, etc. correspond to content/about, content/docs and so on.

The directories and files you create will become the URL you get, with one important exception: filenames are converted to a ‘slug’, mimicking how index files work. For example, if you create the file docs/tech/mastadon.md, the URL will be /docs/tech/mastadon/. This is for SEO (Search Engine Optimization).

The other thing you’ll see are _index.html files. In the example above, the URL /docs/tech/ has no content, as it’s a folder. But you can add a _index.md or .html to give it some. Avoid creating index.md or tech.md (a file that matches the name of a subdirectory). Either of those will block Hugo from generating content for any subdirectories.

The Landing Page and Top Nav Pages

The landing page itself is the content/_index.html file and the background is featured-background.jpg. The other top-nav pages are in the content folders with _index files. You may notice the special header variable “menu: main: weight:” and that is what flags that specific page as worthy of being in the top menu. Removing it, or adding it (and a linkTitle), will change the top nav.

The Documentation Page and Left Nav Bar

One of the most important features of the Docsy template is the well designed documentation section that features a Section menu, or left nav bar. This menu is built automatically from the files you put in the docs folder, as long as you give them a title. (See Front Matter, below). They are ordered by date but you can add a weight to change that.

It doesn’t collapse by default and if you have a lot of files, you’ll want to enable that.

# Search and set in your config.toml
sidebar_menu_compact = true

Front Matter

The example files have a section at the top like this. It’s not strictly required, but you must have at least the title or they won’t show up in the left nav tree.

---
title: "Examples"
---

Page Content and Short Codes

In addition to normal markdown or html, you’ll see frequent use of ‘shortcodes’ that do things that normal markdown can’t. These are built in to Hugo and can be added by themes, and look like this;

{{% blocks/lead color="dark" %}}
Some Important Text
{{% /blocks/lead %}}

Diagrams

Docsy supports several tools for creating illustrations from code, such as KaTeX, Mermaid, Diagrams.net, PlantUML, and MarkMap. Simply use a codeblock.

```mermaid
graph LR
 one --> two
```

Generate the Website

Once you’re satisfied with what you’ve got, tell hugo to generate the static files and it will populate the folder we configured earlier

hugo

Publish the Web Site

Everything you need is in the public folder and all you need do is copy it to a web server. You can even use git, which I advise since we’re already using it to pull in and update the module.

–> Local Git Deployment

Bonus Points

If you have a large directory structure full of markdown files already, you can kick-start the process of adding frontmatter like this;

find . -type f -name '*.md' | \
while read X
do
  TITLE=$(basename "${X%.*}")
  # Prepend a minimal frontmatter block with the filename as the title.
  # (sed chokes on a multi-line replacement variable, so rewrite with printf.)
  printf -- '---\ntitle: %s\n---\n%s\n' "$TITLE" "$(cat "$X")" > "$X"
done

4.2.1.1.5 - Docsy Github

You may have noticed the links on the right like “Edit this page” that takes one to Github. Let’s set those up.

On Github

Go to github and create a new repository. Use the name of your site for the repo name, such as “some.site.org”. If you want to use something else, you can edit your config.toml file to adjust.

Locally

You may have noticed that Github suggested some next steps with a remote add using the name “origin”. Docsy is already using that, however, from when you cloned it. So we’ll have to pick a new name.

cd /path/to/my-existing-site
git remote add github https://github.com/yourID/some.site.org

Let’s change our default branch to “main” to match Github’s defaults.

git branch -m main

Now we can add, commit and push it up to Github

git add --all
git commit -m "first commit of new site"
git push github

You’ll notice something interesting when you go back to look at Github; all the contributors on the right. That’s because you’re dealing with a clone of Docsy and you can still pull in updates and changes from the original project.

It may have been better to clone it via github

4.2.2 - Content Deployment

Automating deployment as part of a general continuous integration strategy is best-practice these days. Web content should be similarly treated.

I.e. version controlled and deployed with git.

4.2.2.1 - Local Git Deployment

Overview

Let’s create a two-tiered system that goes from dev to prod using a post-commit trigger

graph LR
Development --git / rsync---> Production

The Development system is your workstation and the Production system is a web server you can rsync to. Git commit will trigger a build and rsync.

I use Hugo in this example, but any system that has an output (or build) folder works similarly.

Configuration

The first thing we need is a destination.

Production System

This server probably uses folders like /var/www/XXXXX for its web root. Use that or create a new folder and make yourself the owner.

sudo mkdir /var/www/some.site.org
sudo chown -R $USER /var/www/some.site.org
echo "Hello" > /var/www/some.site.org/index.html

Edit your web server’s config to make sure you can view that web page. Also check that rsync is available from the command line.
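A quick check, and an install if it’s missing (assuming an apt-based distro as elsewhere in this guide):

```shell
# Confirm rsync is present; install it if not
command -v rsync || sudo apt install -y rsync
```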

Development System

Hugo builds static html in a public directory. To generate the HTML, simply type hugo

cd /path/to/my-existing-site
hugo
ls public

We don’t actually want this folder in git and most themes (if you’re using Hugo) already have a .gitignore file. Take a look and create/add to it.

# Notice /public is at the top of the git ignore file
cat .gitignore

/public
package-lock.json
.hugo_build.lock
...

Assuming you have some content, let’s add and commit it.

git add --all
git commit -m "Initial Commit"

Note: All of these git commands work because pulling in a theme initialized the directory. If you’re doing something else you’ll need to git init.

The last step is to create a hook that will build and deploy after a commit.

cd /path/to/my-existing-site
touch .git/hooks/post-commit
chmod +x .git/hooks/post-commit
vi .git/hooks/post-commit
#!/bin/sh
hugo --cleanDestinationDir
rsync --recursive --delete public/ [email protected]:/var/www/some.site.org

This script ensures that the remote directory matches your local directory. When you’re ready to update the remote site:

git add --all
git commit --allow-empty -m "trigger update"

If you mess up the production files, you can just call the hook manually.

cd /path/to/my-existing-site
./.git/hooks/post-commit

Troubleshooting

bash: line 1: rsync: command not found

Double check that the remote host has rsync.

4.2.3 - Content Delivery

4.2.3.1 - Cloudflare

  • Cloudflare acts as a reverse proxy to hide your server’s IP address
  • Takes over your DNS and directs requests to the closest site
  • Injects JavaScript analytics
    • If the browser’s “do not track” is on, JS isn’t injected.
  • Can use a tunnel and remove encryption overhead

4.3 - Servers

4.3.1 - Caddy

Caddy is a web server that runs SSL by default by automatically grabbing a cert from Let’s Encrypt. It comes as a stand-alone binary, written in Go, and makes a decent reverse proxy.

4.3.1.1 - Installation

Installation

Caddy recommends “using our official package for your distro” and for Debian flavors they include the basic instructions you’d expect.
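As of this writing, those instructions look like the following; check Caddy’s install page for the current keys and URLs before running them.

```shell
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' \
    | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' \
    | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install caddy
```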

Configuration

The easiest way to configure Caddy is by editing the Caddyfile

sudo vi /etc/caddy/Caddyfile
sudo systemctl reload caddy.service

Sites

You define websites with a block that includes a root and the file_server directive. Once you reload, and assuming you already have the DNS in place, Caddy will reach out to Let’s Encrypt, acquire a certificate, and automatically redirect from port 80 to 443.

site.your.org {        
    root * /var/www/site.your.org
    file_server
}

Authentication

You can add basic auth to a site by creating a hash and adding a directive to the site.

caddy hash-password
site.your.org {        
    root * /var/www/site.your.org
    file_server
    basic_auth { 
        allen SomeBigLongStringFromTheCaddyHashPasswordCommand
    }
}

Reverse Proxy

Caddy also makes a decent reverse proxy.

site.your.org {        
    reverse_proxy * http://some.server.lan:8080
}

You can also take advantage of path-based reverse proxy. Note the rewrite to accommodate a potentially missing trailing slash.

site.your.org {
    rewrite /audiobooks /audiobooks/
    handle_path /audiobooks/* {
        # handle_path strips the matched /audiobooks prefix automatically
        reverse_proxy * http://some.server.lan:8080
    }
}

Import

You can define common elements at the top (snippets) or in files and import them multiple times to save duplication. This helps when you have many sites.

# At the top in the global section of your Caddyfile
(logging) {
    log {
        output file /var/log/caddy/access.log
    }
}
site.your.org {
    import logging     
    reverse_proxy * http://some.server.lan:8080
}

Modules

Caddy is a single binary so when adding a new module (aka feature) you are essentially downloading a new version that has them compiled in. You can find the list of packages at their download page.

Do this at the command line with caddy itself.

sudo caddy add-package github.com/mholt/caddy-webdav
sudo systemctl restart caddy
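You can confirm the module landed in the new binary by listing modules (grepping for the webdav package from the example above):

```shell
caddy list-modules | grep webdav
```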

Security

Drop Unknown Domains

Caddy will accept connections to port 80, announce that it’s a Caddy web server, and redirect you to https before realizing it doesn’t have a site or cert for the requested name. Add this block at the bottom of your Caddyfile so unknown domains are dropped immediately.

http:// {
    abort
}
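To verify, hit the server by IP or an unconfigured name; curl should report that the connection was closed without a response (a quick check, assuming the server is reachable):

```shell
# No site matches, so the abort directive resets the connection
curl -i http://your.server.ip/
```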

Crowdsec

Caddy runs as its own user and is fairly memory-safe, but installing CrowdSec helps identify some types of intrusion attempts.

CrowdSec Installation

Troubleshooting

You can test your config file and look at the logs like so

caddy validate --config /etc/caddy/Caddyfile
journalctl --no-pager -u caddy

4.3.1.2 - Logging

Access Logs

In general, you should create a snippet and import into each block.

#
# Global Options Block
#
{
    ...
    ...
}
#
# Importable Snippets
#
(logging) {
    log {
        output file /var/log/caddy/access.log
    }
}
#
# Host Blocks
#
site.your.org {
    import logging     
    reverse_proxy * http://some.server.lan:8080
}
other.your.org {
    import logging     
    reverse_proxy * http://other.server.lan:8080
}

Per VHost Files

If you want separate logs for separate vhosts, add a parameter to the import that changes the output file name.

#
# Global Options Block
#
{
    ...
    ...
}
#
# Importable Snippets
#
(logging) {
    log {
        # args[0] is appended at run time
        output file /var/log/caddy/access-{args[0]}.log   
    }
}
#
# Host Blocks
#
site.your.org {
    import logging site.your.org    
    reverse_proxy * http://some.server.lan:8080
}
other.your.org {
    import logging other.your.org    
    reverse_proxy * http://other.server.lan:8080
}

Wildcard Sites

Wildcard sites only have one block so you must use the hostname directive to separate vhost logs. This both sends a vhost to the file you want, and filters them out of others. You can also use an import argument as shown in this caddy example to save space. (I would never have deduced this on my own.)

#
# Global Options Block
#
{
    ...
    ...
}
#
# Importable Snippets
#
(logging) {
        log {
                output file /var/log/caddy/access.log {
                        roll_size 5MiB
                        roll_keep 5
                }
        }
}

(domain-logging) {
        log {
                hostnames {args[0]}
                output file /var/log/caddy/{args[0]}.log {
                        roll_size 5MiB
                        roll_keep 5
                }
        }
}
#
# Main Block
#
*.site.org, site.org {
        # Everything goes to this file unless it's filtered out by another log block
        import logging 

        @site host some.site.org
        handle @site {
                reverse_proxy * http://internal.site
        }

        # the www site will write to just this log file
        import domain-logging www.site.org 
        @www host www.site.org                
        handle @www { 
                root * /var/www/www.site.org 
                file_server 
        }

        # This site will write to the normal file
        @other host other.site.org
        handle @other {
                reverse_proxy * http://other.site
        }
}

Logging Credentials

If you want to track users, add a directive to the global headers.

# 
# Global Options Block 
# 
{
        servers { 
                log_credentials 
        }
}

File Permissions

By default, only caddy can read the log files. This is a problem when you have a log analysis package. In recent versions of caddy, however, you can set the mode.

    log {
        output file /var/log/caddy/access.log {
                mode 644
        }
    }

If the log file doesn’t change modes, check the version of caddy. It must be newer than v2.8.4 for the change.
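A quick way to check both the running version and the resulting permissions on the log file:

```shell
caddy version
ls -l /var/log/caddy/access.log
```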

Troubleshooting

You can have a case where the domain-specific file never gets created. This usually happens when there is nothing to write to it. Check that the hostname is correct.

4.3.1.3 - WebDAV

Caddy can also serve WebDAV requests with the appropriate module. This is important because for many clients, such as Kodi, WebDAV is significantly faster.

sudo caddy add-package github.com/mholt/caddy-webdav
sudo systemctl restart caddy
{   # Custom modules require order of precedence be defined
    order webdav last
}
site.your.org {
    root * /var/www/site.your.org
    webdav * 
}

You can combine WebDAV and Directory Listing - highly recommended - so you can browse the directory contents with a normal web browser as well. Since WebDAV doesn’t use the GET method, you can use the @get matcher to route those requests to the file_server module so it can serve up indexes via the browse argument.

site.your.org {
    @get method GET
    root * /var/www/site.your.org
    webdav *
    file_server @get browse        
}
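A quick sanity check from the command line: a GET should return the browse index, and a WebDAV PROPFIND should return a 207 Multi-Status (the hostname is the example one; adjust to yours):

```shell
# Browse index served by file_server
curl -s https://site.your.org/ | head

# WebDAV listing, one level deep; expect HTTP 207
curl -s -o /dev/null -w "%{http_code}\n" -X PROPFIND -H "Depth: 1" https://site.your.org/
```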

Sources

https://github.com/mholt/caddy-webdav
https://marko.euptera.com/posts/caddy-webdav.html

4.3.1.4 - MFA

The package caddy-security offers a suite of auth functions. Among these is MFA and a portal for end-user management of tokens.

Installation

# Install a version of caddy with the security module 
sudo caddy add-package github.com/greenpau/caddy-security
sudo systemctl restart caddy

Configuration

User data is stored in:

/var/lib/caddy/.local/caddy/users.json

Generate a password hash for each user with:

caddy hash-password

Troubleshooting

journalctl --no-pager -u caddy

4.3.1.5 - Wildcard DNS

Caddy has an individual cert for every virtual host you create. This is fine, but Let’s Encrypt publishes these as part of certificate transparency and the bad guys are watching. If you create a new site in caddy, you’ll see bots probing for weaknesses within 30 min - without you even having published the URL. There’s no security in anonymity, but the need-to-know principle suggests we shouldn’t be informing the whole world about sites of limited scope.

One solution is a wildcard cert. It’s published as just ‘*.some.org’ so there’s no information disclosed. Caddy supports this, but it requires a little extra work.

Installation

In this example we are already using the default Caddy binary but want to connect to CloudFlare’s DNS service. We must change to a custom Caddy binary for that. Check https://github.com/caddy-dns to see if your DNS provider is available.

# Divert the default binary from the repo
sudo dpkg-divert --divert /usr/bin/caddy.default --rename /usr/bin/caddy
sudo cp /usr/bin/caddy.default /usr/bin/caddy.custom
sudo update-alternatives --install /usr/bin/caddy caddy /usr/bin/caddy.default 10
sudo update-alternatives --install /usr/bin/caddy caddy /usr/bin/caddy.custom 50

# Add the package and restart. 
sudo caddy add-package github.com/caddy-dns/cloudflare
sudo systemctl restart caddy.service    

Because we’ve diverted the binary, apt update will no longer update caddy; this also stops unattended-upgrades from doing so. You must use caddy upgrade instead. The devs don’t think this should be an issue. I disagree, but you can add a cron job if you like.
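A sketch of such a cron job (the file name and schedule are arbitrary; caddy upgrade keeps the same set of installed packages):

```shell
# /etc/cron.weekly/caddy-upgrade
#!/bin/sh
# Upgrade the diverted binary in place and restart to pick up the new build
/usr/bin/caddy upgrade && systemctl restart caddy.service
```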

DNS Provider Configuration

For Cloudflare, a decent example is below. Just use the ‘Getting the Cloudflare API Token’ part.

https://roelofjanelsinga.com/articles/using-caddy-ssl-with-cloudflare/

Caddy Configuration

Use the acme-dns global option and then create a single site (used to determine the cert) and match the actual vhosts with subsites.

{
    acme_dns cloudflare alotcharactersandnumbershere
}

*.some.org, some.org {

    @site1 host site1.some.org
    handle @site1 {
        reverse_proxy * http://localhost:3200
    }

    @site2 host site2.some.org
    handle @site2 {
        root * /srv/www/site2
    }
}