Storage

1 - Seafile

TODO: Seafile 11 is in beta and MySQL is required.

Seafile is a cloud storage system, similar to Google Drive. It stands out for being simpler and faster than its peers. It’s also open source.

Preparation

You’ll need a Linux server. We use Debian 12 in this example, and the instructions are based on Seafile’s SQLite instructions, updated for the newer OS.

You’ll also need several supporting packages, the build tools that pip’s cffi module needs, and a python virtual environment so the apt and pip packages play nice together.

# The main requirements
sudo apt install -y memcached libmemcached-dev pwgen sqlite3
sudo systemctl enable --now memcached

# Python itself, plus the packages available from apt
sudo apt install -y python3 python3-setuptools python3-pip
sudo apt install -y python3-wheel python3-django python3-django-captcha python3-future \
  python3-willow python3-pylibmc python3-jinja2 python3-psd-tools python3-pycryptodome python3-cffi

# cffi build requirements
sudo apt install -y build-essential libssl-dev libffi-dev python-dev-is-python3

# Create the service account and a python virtual environment for it
sudo apt install python3-venv
sudo useradd --home-dir /opt/seafile --system --comment "Seafile Service Account" --create-home seafile
sudo -i -u seafile
python3 -m venv .venv
source .venv/bin/activate

# Install the rest of the packages from pip
pip3 install --timeout=3600 \
  wheel django django-pylibmc django-simple-captcha future \
  Pillow pylibmc captcha jinja2 psd-tools pycryptodome cffi

# Example /etc/fstab entry for network storage, if you plan to use it (see the NFS Mount section below)
192.168.1.21:/srv/seafile /srv/seafile nfs defaults,noatime,vers=4.1 0 0

Installation

It comes with two services: Seafile, the file sync server, and Seahub, the web interface and editor.

For a small team, you can install a lightweight instance of Seafile on a single host with SQLite.

Note: There is a seafile repo, but it may be [client] only. TODO test this

As per the install [instructions], this will create several folders in seafile’s home directory, plus a symlink to the version-specific directory that holds the binaries, for easy upgrades.

# Continue as the seafile user - the python venv should still be in effect. If not, source it as before

# Download and extract the binary
wget -P /tmp https://s3.eu-central-1.amazonaws.com/download.seadrive.org/seafile-server_10.0.1_x86-64.tar.gz
tar -xzf /tmp/seafile-server_10.0.1_x86-64.tar.gz -C /opt/seafile/
rm /tmp/seafile*

# Run the setup script
cd /opt/seafile/sea*
./setup-seafile.sh

# Start seafile and seahub to answer some setup questions
./seafile.sh start
./seahub.sh start

# Stop them again once you've answered the setup questions
./seahub.sh stop
./seafile.sh stop
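At this point the service account’s home directory should look roughly like this (exact contents vary by version; seafile-server-latest is the symlink mentioned above):

ls /opt/seafile
# conf  logs  pids  seafile-data  seahub-data  seahub.db
# seafile-server-10.0.1  seafile-server-latest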

Create systemd service files1 for the two services (as a sudo-capable user).

sudo tee /etc/systemd/system/seafile.service << EOF
[Unit]
Description=Seafile
After=network.target

[Service]
Type=forking
ExecStart=/opt/seafile/seafile-server-latest/seafile.sh start
ExecStop=/opt/seafile/seafile-server-latest/seafile.sh stop
LimitNOFILE=infinity
User=seafile
Group=seafile

[Install]
WantedBy=multi-user.target
EOF

Note: The ExecStart below is a bit cumbersome, but it saves modifying the vendor’s start script. Only the Seahub service seems to need the virtual env, though you can give both services the same treatment if you wish.

sudo tee /etc/systemd/system/seahub.service << EOF
[Unit]
Description=Seafile hub
After=network.target seafile.service

[Service]
Type=forking
ExecStart=/bin/bash -c 'source /opt/seafile/.venv/bin/activate && /opt/seafile/seafile-server-latest/seahub.sh start'
ExecStop=/bin/bash -c 'source /opt/seafile/.venv/bin/activate && /opt/seafile/seafile-server-latest/seahub.sh stop'
User=seafile
Group=seafile

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable --now seafile.service
sudo systemctl enable --now seahub.service

Seafile and Seahub should have started without error, though by default you can only access them from localhost.
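To confirm both services came up, something like this should work (curl may need to be installed first):

systemctl status seafile seahub --no-pager
curl -I http://localhost:8000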

If you run into problems here, make sure Seafile is started before Seahub. Experiment with sourcing the activation file as the seafile user and running the start scripts directly.
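For example, a manual start as the service account looks roughly like this, using the same paths as above:

sudo -i -u seafile
source .venv/bin/activate
cd seafile-server-latest
./seafile.sh start
./seahub.sh start
# Watch the logs while testing
tail -f /opt/seafile/logs/*.log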

Add log rotation

sudo tee /etc/logrotate.d/seafile << EOF
/opt/seafile/logs/seafile.log
/opt/seafile/logs/seahub.log
/opt/seafile/logs/file_updates_sender.log
/opt/seafile/logs/repo_old_file_auto_del_scan.log
/opt/seafile/logs/seahub_email_sender.log
/opt/seafile/logs/work_weixin_notice_sender.log
/opt/seafile/logs/index.log
/opt/seafile/logs/content_scan.log
/opt/seafile/logs/fileserver-access.log
/opt/seafile/logs/fileserver-error.log
/opt/seafile/logs/fileserver.log
{
        daily
        missingok
        rotate 7
        # compress
        # delaycompress
        dateext
        dateformat .%Y-%m-%d
        notifempty
        # create 644 root root
        sharedscripts
        postrotate
                if [ -f /opt/seafile/pids/seaf-server.pid ]; then
                        kill -USR1 `cat /opt/seafile/pids/seaf-server.pid`
                fi

                if [ -f /opt/seafile/pids/fileserver.pid ]; then
                        kill -USR1 `cat /opt/seafile/pids/fileserver.pid`
                fi

                if [ -f /opt/seafile/pids/seahub.pid ]; then
                        kill -HUP `cat /opt/seafile/pids/seahub.pid`
                fi

                find /opt/seafile/logs/ -mtime +7 -name "*.log*" -exec rm -f {} \;
        endscript
}
EOF

Configuration

Seahub (the web UI) by default is bound to localhost only. Change that to all addresses so you can access it from other systems.

sudo sed -i 's/^bind.*/bind = "0.0.0.0:8000"/'  /opt/seafile/conf/gunicorn.conf.py

If you’re not proxying already, check the seahub settings. You may need to add the correct internal name and port for initial access. You should add the file server root as well so you don’t have to add it in the GUI later.

vi /opt/seafile/conf/seahub_settings.py

SERVICE_URL = "http://seafile.some.lan:8000/" 
FILE_SERVER_ROOT = "http://seafile.some.lan:8082"

Add a connection to the memcached server

sudo tee -a /opt/seafile/conf/seahub_settings.py << EOF
CACHES = {
    'default': {
        'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',
        'LOCATION': '127.0.0.1:11211',
    },
}
EOF

And restart for the change to take effect

sudo systemctl restart seahub

You should now be able to log in at http://some.server:8000/ with the credentials you created during the command line setup. If the web GUI works but you can’t download files, or the markdown editor doesn’t work as expected, check FILE_SERVER_ROOT and review those settings in the GUI’s System Admin section.
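A quick sanity check, assuming the hostnames used earlier, is to confirm the setting and that the file server answers on its port (even an error response means it is listening):

grep FILE_SERVER_ROOT /opt/seafile/conf/seahub_settings.py
curl -I http://seafile.some.lan:8082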

NFS Mount

Large amounts of data are best handled by a dedicated storage system, and those are usually mounted over the network via NFS or a similar protocol. Seafile data should be stored on such a system, but you cannot mount the entire Seafile data folder over the network because it includes SQLite databases, and SQLite recommends2 against putting those on network filesystems. Nor can you mount each subdirectory separately, as they rely upon internal links that must be on the same filesystem.

The solution is to mount a network share in an alternate location and symlink the relative parts of the Seafile data directory to it.

sudo mkdir -p /mnt/seafile
sudo mount nfs.server:/exports/seafile /mnt/seafile

sudo systemctl stop seahub
sudo systemctl stop seafile

sudo mv /opt/seafile/seafile-data/httptemp \
        /opt/seafile/seafile-data/storage \
        /opt/seafile/seafile-data/tmpfiles \
        /mnt/seafile/

sudo ln -s /mnt/seafile/httptemp /opt/seafile/seafile-data/
sudo ln -s /mnt/seafile/storage /opt/seafile/seafile-data/
sudo ln -s /mnt/seafile/tmpfiles /opt/seafile/seafile-data/

sudo chown -R seafile:seafile /mnt/seafile

Proxy

Caddy is a good fit for the reverse proxy: it obtains TLS certificates automatically and serves HTTP/3 with no special server configuration. Allow UDP 443 through your firewall and you should see HTTP/3 connections show up in the access logs from browsers such as Firefox.

https://caddy.community/t/caddy-v2-and-seafile-server-on-a-root-server/9188/2
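A minimal sketch of the Caddyfile, adapted from that thread but not tested here. The hostname seafile.example.com is a placeholder, and the ports match the defaults used above:

sudo tee /etc/caddy/Caddyfile << EOF
seafile.example.com {
    # File server (port 8082) traffic arrives under /seafhttp; the prefix is stripped
    handle_path /seafhttp* {
        reverse_proxy localhost:8082
    }
    # Everything else goes to Seahub on port 8000
    handle {
        reverse_proxy localhost:8000
    }
}
EOF
sudo systemctl reload caddy

With a layout like this, FILE_SERVER_ROOT would become https://seafile.example.com/seafhttp instead of the :8082 URL used earlier.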

Note that once proxied, the port 8082 file server URL needs to change accordingly, either in seahub_settings.py or in the GUI.

Server downloads: https://www.seafile.com/en/download/#server


  1. https://manual.seafile.com/deploy/start_seafile_at_system_bootup/ ↩︎

  2. https://www.sqlite.org/faq.html#q5 ↩︎

[client]: https://help.seafile.com/syncing_client/install_linux_client/
[instructions]: https://manual.seafile.com/deploy/using_sqlite/

2 - TrueNAS

2.1 - Disk Replacement

Locate the failed drive.

zpool status

It will show something like

	NAME                                        STATE     READ WRITE CKSUM
	pool01                                      DEGRADED     0     0     0
	  raidz3-0                                  ONLINE       0     0     0
	    44fca0d1-f343-48e6-9a43-c71463551aa4    ONLINE       0     0     0
	    7ca5e989-51a5-4f1b-a81e-982d9a05ac04    ONLINE       0     0     0
	    8fd249a0-c8c6-47bb-8787-3e246300c62d    ONLINE       0     0     0
	    573c1117-27d4-430c-b57c-858a75b4ca35    ONLINE       0     0     0
	    29b7c608-72ae-4ec2-830b-0e23925ac0b1    ONLINE       0     0     0
	    293acdbe-6be5-4fa7-945a-e9481b09c0fa    ONLINE       0     0     0
	    437bac45-433b-48e3-bc70-ae1c82e8155b    ONLINE       0     0     0
	    a5ca09a7-3f3f-4135-a2d9-71290fd79160    ONLINE       3     2     0
	  raidz3-1                                  DEGRADED     0     0     0
	    spare-0                                 DEGRADED     0     0     0
	      65f61699-e2fc-4a36-86dd-b0fa6a774798  FAULTED     53     0     0  too many errors
	      9d794dfd-2ef6-432d-8252-0c93e79509dc  ONLINE       0     0     0
	    e27f31e8-a1a4-47dc-ac01-4a6c99b6e5d0    ONLINE       0     0     0
	    aff60721-21ae-42bf-b077-1937aeafaab2    ONLINE       0     0     0
	    714da3e5-ca9c-43d0-a0f3-c0fa693a5b02    ONLINE       0     0     0
	    df89869a-4445-47f9-afa9-3b9cce3b1530    ONLINE       0     0     0
	    29748037-bbd5-4f2d-8878-4fa2b81d9ec3    ONLINE       0     0     0
	    1ff396ec-dec7-45dd-9172-de31e5f6fca7    ONLINE       0     0     0

Take the failed drive offline.

zpool offline pool01 65f61699-e2fc-4a36-86dd-b0fa6a774798

Get the serial number

hdparm -I /dev/disk/by-partuuid/65f61699-e2fc-4a36-86dd-b0fa6a774798 | grep Serial

The output will be something like

Serial Number:      ZC1168HE
Transport:          Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0

Identify the bay location

sas3ircu 0 display | grep -B 10 ZC1168HE                                          

The output will look like

  Device is a Hard disk
    Enclosure #                             : 2
    Slot #                                  : 17

Turn on the bay indicator

sas3ircu 0 locate 2:17 ON

Physically replace the disk

Check the logs for the new disk’s name

dmesg

The output will indicate the device name, such as ‘sdal’ in the example below

  [16325935.447081] sd 0:0:45:0: Power-on or device reset occurred
  [16325935.447962] sd 0:0:45:0: Attached scsi generic sg20 type 0
  [16325935.451271]  end_device-0:0:28: add: handle(0x001c), sas_addr(0x500304801810f321)
  [16325935.454768] sd 0:0:45:0: [sdal] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)
  [16325935.477576] sd 0:0:45:0: [sdal] Write Protect is off
  [16325935.479913] sd 0:0:45:0: [sdal] Mode Sense: 9b 00 10 08
  [16325935.482100] sd 0:0:45:0: [sdal] Write cache: enabled, read cache: enabled, supports DPO and FUA
  [16325935.664995] sd 0:0:45:0: [sdal] Attached SCSI disk

Turn off the slot light

sas3ircu 0 locate 2:17 OFF

Use the GUI to replace the disk. (Use the GUI rather than the command line to ensure it’s set up consistently with the other disks.)

  Storage --> Pool Gear Icon (at right) --> Status

    (The removed disk should be listed by its UUID)

  Disk Menu (three dots) --> Replace --> (disk from dmesg above) --> Force --> Replace Disk

After resilvering has finished, note the spare’s ID at the bottom and detach it so it goes back to being a spare.

zpool detach pool01 9d794dfd-2ef6-432d-8252-0c93e79509dc

Notes:

The GUI takes several steps to prepare the disk and adds a partition to the pool, not the whole disk. Using the CLI to replace the disk is ‘strongly advised against’, but if you must, you can recreate that process at the command line, as adapted from https://www.truenas.com/community/resources/creating-a-degraded-pool.100/

gpart and glabel are not present on TrueNAS SCALE, so you would have to adapt this to another tool, as in the sketch after these commands.

gpart create -s gpt /dev/da18
gpart add -i 1 -b 128 -t freebsd-swap -s 2g /dev/da18
gpart add -i 2 -t freebsd-zfs /dev/da18

zpool replace pool01 65f61699-e2fc-4a36-86dd-b0fa6a774798
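On SCALE, a roughly equivalent partition layout could be created with sgdisk instead. This is an untested sketch; /dev/sdX is a placeholder for the new disk’s device name (check with lsblk first):

# Wipe any existing partition table, then create a 2 GiB swap partition and a ZFS partition
sgdisk --zap-all /dev/sdX
sgdisk -n 1:0:+2G -t 1:8200 /dev/sdX
sgdisk -n 2:0:0 -t 2:bf01 /dev/sdX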

To turn off all slot lights

for X in {0..23}; do sas3ircu 0 locate 2:$X OFF; done
for X in {0..11}; do sas3ircu 0 locate 3:$X OFF; done