# AnsibleBaobabV3
Infrastructure-as-code for my homelab (servers, laptops, Raspberry Pis, and containerized apps).

This repo uses:

- One central Ansible project with shared config.
- Multiple playbooks by domain (base, servers, laptops, RPis, docker hosts, containers).
- Role namespacing: `baobab.<role>` for clarity.
- A per-repo Python virtualenv auto-activated with `direnv`.
## Repo layout

```text
inventories/
  prod/
    group_vars/
    host_vars/        # create as needed
    hosts.yml
playbooks/
  base.yml
  servers.yml
  laptops.yml
  rpies.yml
  docker-hosts.yml
  containers.yml
  facts.yml
  site.yml
roles/
  baobab.common/
  baobab.security/
  baobab.docker/
  baobab.laptop/
  baobab.rpi/
  baobab.traefik/
  baobab.poste/
  baobab.mrbs/
  baobab.depboard/
  baobab.pihole-dns/
ansible.cfg
requirements.txt
requirements.yml
README.md
```
Tip: roles contain `tasks/`, `handlers/`, `templates/`, `defaults/`, `vars/`, and `meta/`. Keep app-specific compose/config templates inside each app role.
## Prerequisites (Debian 12 workstation)

```bash
sudo apt update
sudo apt install -y python3-venv python3-dev build-essential \
  libffi-dev libssl-dev direnv git sshpass rsync
```

Enable direnv for zsh (once):

```bash
# in ~/.zshrc (already suggested in our setup)
eval "$(direnv hook zsh)"
```

Restart your shell.
## Python environment (auto via direnv)

Create an `.envrc` in the repo root:

```bash
# .envrc
layout python3

if [ ! -d .venv ]; then
  python3 -m venv .venv
  source .venv/bin/activate
  python -m pip install -U pip setuptools wheel
fi
source .venv/bin/activate
```

Allow it:

```bash
direnv allow
```
## Tooling requirements

`requirements.txt` (pin versions for repeatability):

```text
ansible==10.3.0
ansible-lint==24.7.0
yamllint==1.35.1
rich==13.9.2
passlib[bcrypt]==1.7.4
```

`requirements.yml` (Galaxy collections):

```yaml
---
collections:
  - name: community.general
    version: ">=9.0.0,<10.0.0"
  - name: community.docker
    version: ">=3.10.0,<4.0.0"
  - name: ansible.posix
    version: ">=1.6.0,<2.0.0"
roles: []
```

Install:

```bash
pip install -r requirements.txt
ansible-galaxy install -r requirements.yml
```
## Minimal ansible.cfg (recommended)

```ini
[defaults]
inventory = inventories/prod/hosts.yml
roles_path = roles
collections_paths = ~/.ansible/collections
host_key_checking = False
forks = 20
retry_files_enabled = False
interpreter_python = auto_silent
nocows = 1
timeout = 30
stdout_callback = yaml
bin_ansible_callbacks = True

[ssh_connection]
pipelining = True
control_path = ~/.ansible/cp/%%h-%%p-%%r
```
## Inventory scaffold

`inventories/prod/hosts.yml`:

```yaml
all:
  children:
    servers:
      hosts:
        papa:
        simba:
    laptops:
      hosts:
        tais-laptop:
    rpies:
      hosts:
        ha-pi:
    docker_hosts:
      children:
        servers: {}
```

`inventories/prod/group_vars/all.yml` (create):

```yaml
ansible_python_interpreter: /usr/bin/python3

# Example: baseline package list used by baobab.common
baseline_packages:
  - vim
  - htop
  - curl
  - rsync
```

Put secrets in `inventories/prod/group_vars/all.vault.yml` and edit with `ansible-vault`.
## Playbooks

`playbooks/site.yml` (imports the others):

```yaml
---
- ansible.builtin.import_playbook: base.yml
- ansible.builtin.import_playbook: servers.yml
- ansible.builtin.import_playbook: laptops.yml
- ansible.builtin.import_playbook: rpies.yml
- ansible.builtin.import_playbook: docker-hosts.yml
- ansible.builtin.import_playbook: containers.yml
```

Example `playbooks/containers.yml`:

```yaml
---
- hosts: docker_hosts
  become: true
  roles:
    - baobab.traefik
    - baobab.mrbs
    - baobab.poste
    - baobab.depboard
  tags: [containers]
```
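For a single-app tag like `-t depboard` to match, the role itself needs that tag. A minimal sketch, assuming per-role tags (the repo's actual containers play may differ):

```yaml
---
- hosts: docker_hosts
  become: true
  roles:
    - { role: baobab.traefik,  tags: [containers, traefik] }
    - { role: baobab.depboard, tags: [containers, depboard] }
```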
Common commands
# Check versions and environment
ansible --version
ansible-galaxy collection list | grep -E 'community|posix'
# Syntax check all
ansible-playbook playbooks/site.yml --syntax-check
# Run full site
ansible-playbook playbooks/site.yml
# Only containers (tags)
ansible-playbook playbooks/containers.yml -t containers
# Single app by tag, e.g., depboard
ansible-playbook playbooks/containers.yml -t depboard
## Conventions

- Namespacing roles with `baobab.` keeps long lists readable.
- Tags on roles/tasks: `baseline`, `security`, `docker`, `containers`, `mrbs`, `depboard`, etc.
- Secrets live in vault files (`*.vault.yml`).
- Collections/roles are pinned in `requirements.yml`.
- Pi-hole DNS: when adding new subdomains/services behind Traefik, add/update local A/AAAA records so LAN devices resolve them correctly (see the sketch below).
## Troubleshooting

- direnv not activating? Ensure `eval "$(direnv hook zsh)"` is in `~/.zshrc` and you ran `direnv allow` in the repo.
- Python missing on the target host? Bootstrap with a raw task to install `python3`, or add it in your base play.
- SSH host key prompts during the first run? `host_key_checking = False` avoids interactive prompts; alternatively, pre-populate `~/.ssh/known_hosts`.
## "Back to green" recovery checklist

If I nuke the workstation or need to restore quickly:

```bash
# 0) Clone repo
git clone ssh://git@git.baobab.band:7577/sjat/AnsibleBaobabV3.git
cd AnsibleBaobabV3

# 1) System deps
sudo apt update
sudo apt install -y python3-venv python3-dev build-essential libffi-dev libssl-dev direnv git

# 2) Shell integration (once globally)
# Add to ~/.zshrc if missing:
#   eval "$(direnv hook zsh)"
exec $SHELL

# 3) Enable project env
cp ./_examples/envrc.example .envrc 2>/dev/null || true   # if you saved one; else create from README
direnv allow

# 4) Python + Ansible deps
pip install -r requirements.txt
ansible-galaxy install -r requirements.yml

# 5) Verify and run
ansible --version
ansible-playbook playbooks/site.yml --syntax-check
```

(If `inventories/prod/hosts.yml` is empty on a fresh clone, add your hosts; if vault files are used, copy them from your secrets store.)
## Secrets & Ansible Vault

This repo uses Ansible Vault for all sensitive values. The goal is simple: no plaintext secrets in Git; only encrypted `*.vault.yml` files.

### Layout & naming

```text
inventories/
  prod/
    group_vars/
      all.yml             # non-secret globals
      all.vault.yml       # ENCRYPTED: global secrets (tokens, SMTP, etc.)
      servers.yml
      servers.vault.yml   # ENCRYPTED: server-only secrets (if needed)
    host_vars/
      papa.yml
      papa.vault.yml      # ENCRYPTED: host-specific secrets (if needed)
```

Prefix secret vars with the role/app name to avoid collisions, e.g.:

- `traefik__cloudflare_api_token`
- `mrbs__db_password`, `mrbs__db_root_password`
- `poste__admin_password`
### Vault key management

- Keys live outside the repo (do not commit): `~/.ansible/vault-keys/prod.txt` (chmod 600; back it up in your password manager).
- `ansible.cfg` is wired to use Vault IDs:

  ```ini
  [defaults]
  vault_identity_list = prod@~/.ansible/vault-keys/prod.txt
  ```
### Day-to-day commands

Create/edit/view encrypted files (prod vault id shown):

```bash
# create a new encrypted var file
ansible-vault create inventories/prod/group_vars/all.vault.yml --vault-id prod@

# edit an existing one
ansible-vault edit inventories/prod/group_vars/all.vault.yml --vault-id prod@

# view (read-only)
ansible-vault view inventories/prod/group_vars/all.vault.yml --vault-id prod@
```

Encrypt an existing plaintext file (avoid this; prefer create/edit):

```bash
ansible-vault encrypt path/to/file.yml --vault-id prod@
```

One-off encrypted string (useful for quick secrets):

```bash
ansible-vault encrypt_string --vault-id prod@ 'supersecret' --name 'mrbs__db_password'
```

Rotate the vault key (re-encrypt the file with a new key):

```bash
ansible-vault rekey inventories/prod/group_vars/all.vault.yml --vault-id prod@ --new-vault-id prod@
```
### Using secrets in roles/templates (safe patterns)

#### 1) Write tokens/passwords to root-only files (preferred)

Keep secrets off command lines and out of logs; store them on the host with strict perms.

```yaml
# roles/baobab.traefik/tasks/deploy.yml
- name: Write Cloudflare API token for ACME
  ansible.builtin.copy:
    content: "{{ traefik__cloudflare_api_token }}"
    dest: "/etc/traefik/cf_api_token"
    owner: root
    group: root
    mode: "0600"
  no_log: true

- name: Render docker compose
  ansible.builtin.template:
    src: docker-compose.yml.j2
    dest: /opt/containers/traefik/docker-compose.yml
    mode: "0644"
  notify: Restart traefik
```

`docker-compose.yml.j2` (reference the file, not the token itself):

```yaml
services:
  traefik:
    environment:
      CF_DNS_API_TOKEN_FILE: "/etc/traefik/cf_api_token"
```
#### 2) If a secret must go into a template, lock down the file and hide logs

```yaml
- name: Render app .env (contains secrets)
  ansible.builtin.template:
    src: env.j2
    dest: /opt/containers/myapp/.env
    owner: root
    group: root
    mode: "0600"
  no_log: true
```
#### 3) Don't leak secrets in output

- Add `no_log: true` on tasks that touch secrets or that `register:` secretful results.
- Avoid `debug: var=my_secret`; if you must check, print metadata only (the length leaks nothing, and `no_log` on the debug task would suppress the message itself):

  ```yaml
  - ansible.builtin.debug:
      msg: "CF token length={{ traefik__cloudflare_api_token | length }}"
  ```
#### 4) Generate once, then persist (host-local secrets)

For things like DB passwords that never need to leave the host, generate and keep them on the host:

```yaml
- name: Ensure app password exists (host-local)
  ansible.builtin.shell: |
    set -euo pipefail
    [ -f /etc/myapp/db.pass ] || head -c 48 /dev/urandom | base64 > /etc/myapp/db.pass
    chmod 600 /etc/myapp/db.pass
  args:
    creates: /etc/myapp/db.pass
  register: _dbpass_create
  changed_when: _dbpass_create.rc == 0
  no_log: true

- name: Read app password
  ansible.builtin.slurp:
    src: /etc/myapp/db.pass
  register: _dbpass
  no_log: true

- name: Expose the password as a fact
  ansible.builtin.set_fact:
    myapp__db_password: "{{ _dbpass.content | b64decode }}"
  no_log: true
```
### Rotation workflow

1. Update the value in `*.vault.yml`:

   ```bash
   ansible-vault edit inventories/prod/group_vars/all.vault.yml --vault-id prod@
   ```

2. Deploy only the affected app (via tags/limits), e.g.:

   ```bash
   ansible-playbook playbooks/containers.yml -t traefik --limit papa
   ```

3. Restart services as needed (handlers should handle this).
### Git hygiene

- Vault files (`*.vault.yml`) are committed; keys are not.
- To avoid noisy diffs for encrypted files, add to `.gitattributes`:

  ```text
  *.vault.yml -diff
  ```

- Never commit `.env`, private keys, or vault key files (see `.gitignore`, already in the repo).
### Troubleshooting

- "Vault password not found": confirm `vault_identity_list` in `ansible.cfg` points to your key path and the file has `0600` perms.
- Secrets in logs: add/propagate `no_log: true` on the task (and on anything that `register`s its output).
- Decryption failed: you may have mixed vault IDs; ensure you edited with `--vault-id prod@` and that the configured key matches.
### Multi-environment (optional)

If you add `lab/`:

- Create a lab key: `~/.ansible/vault-keys/lab.txt` (chmod 600).
- Extend `ansible.cfg`:

  ```ini
  vault_identity_list = prod@~/.ansible/vault-keys/prod.txt, lab@~/.ansible/vault-keys/lab.txt
  ```

- Create/edit lab secrets with `--vault-id lab@` in `inventories/lab/...`.
Rule of thumb: secrets go in `*.vault.yml`, get written to files with `0600` when used, and tasks touching them use `no_log: true`. Rotate by editing the vault file and reapplying the targeted role.
## Role dependencies & order of operations

When deploying containers, we want three guarantees:

1. Ordering: prerequisites (user, engine) run before app roles.
2. Readiness: Docker is not just "installed", it's actually running.
3. Fail-fast: if Docker isn't ready, we stop with a clear error.

### How we express it

Meta role for prerequisites: create a tiny meta role `baobab.docker_host` which depends on the roles that prepare a host for containers, as sketched below.
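A minimal sketch, assuming the dependency list and the exact readiness check (adapt the role names to whatever actually prepares your hosts):

```yaml
# roles/baobab.docker_host/meta/main.yml
---
dependencies:
  - role: baobab.common
  - role: baobab.docker
```

```yaml
# roles/baobab.docker_host/tasks/main.yml
---
- name: Fail fast if the Docker engine is not running
  ansible.builtin.command: docker info
  register: _docker_info
  changed_when: false                       # read-only check
  failed_when: _docker_info.rc != 0
```

App roles then list `baobab.docker_host` as their own meta dependency (or the containers play runs it first), which gives ordering, readiness, and fail-fast in one place.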
## Preparing a new host (Debian 12) for passwordless Ansible

This is the one-time "bring a host under management" procedure. We'll:

1. Create user `sjat` during install (or after),
2. Install `sudo` and add `sjat` to the sudo group,
3. Move SSH to port 7576 and adjust the firewall,
4. Run the NOPASSWD bootstrap playbook to install your SSH key and grant passwordless sudo,
5. Verify and switch to completely passwordless runs.

You can adapt the username/port if you prefer; update inventory/SSH config accordingly.
### 1) Prereqs on your workstation

- Repo cloned and `.venv` active (direnv handles it).
- Tools installed:

  ```bash
  pip install -r requirements.txt
  ansible-galaxy install -r requirements.yml -p roles
  ```

- Vault key configured (not required for this chapter).
### 2) On the new host's console (or out-of-band shell)

Log in as the initial admin/root and run:

```bash
# If sjat wasn't created during install:
sudo adduser sjat
sudo usermod -aG sudo sjat

# Ensure sudo is installed (minimal installs may lack it)
sudo apt update
sudo apt install -y sudo
```
### 3) Change SSH to port 7576

Open the sshd config:

```bash
sudo nano /etc/ssh/sshd_config
```

Set (or update) these lines:

```text
Port 7576
PermitRootLogin no
PasswordAuthentication yes   # keep YES until after bootstrap completes
```

Restart SSH and keep the session open:

```bash
sudo systemctl restart ssh
```

Firewall (if enabled), UFW example:

```bash
sudo apt install -y ufw
sudo ufw allow 7576/tcp
sudo ufw reload
```
### Configure your local SSH to match

In `~/.ssh/config` on your workstation:

```text
Host vDep12
  HostName <host address>
  Port 7576
  User sjat
  IdentityFile ~/.ssh/id_ed25519
  IdentitiesOnly yes
```

(Replace vDep12/host/identity as needed.)

### 4) Add the host to the inventory

Either in `inventories/prod/hosts.yml` or a host_vars file (`host_vars/vDep12.yml`, recommended):

```yaml
ansible_user: sjat
ansible_port: 7576
ansible_become: true
ansible_become_method: sudo
```

Ensure vDep12 is under a group you target (e.g., `servers` or `docker_hosts`).

### 5) Bootstrap NOPASSWD (one-time, with -K)
This play creates/ensures your user, installs your public key, and grants NOPASSWD sudo via `/etc/sudoers.d/...`. It also optionally flips SSH to key-only (we keep password auth on until this finishes).

`playbooks/bootstrap_passwordless.yml` (already provided earlier):
```yaml
---
- name: Bootstrap passwordless automation user
  hosts: vDep12
  gather_facts: false
  become: true
  vars:
    automation_user: "{{ ansible_user | default('sjat') }}"
    automation_pubkey: "{{ lookup('file', lookup('env', 'HOME') + '/.ssh/id_ed25519.pub') }}"
    sudo_scope: "ALL=(ALL) NOPASSWD:ALL"
  tasks:
    - name: Ensure {{ automation_user }} exists
      ansible.builtin.user:
        name: "{{ automation_user }}"
        groups: sudo
        append: true
        create_home: true

    - name: Install SSH public key for {{ automation_user }}
      ansible.builtin.authorized_key:
        user: "{{ automation_user }}"
        state: present
        key: "{{ automation_pubkey }}"

    - name: Allow NOPASSWD sudo for {{ automation_user }}
      ansible.builtin.copy:
        dest: "/etc/sudoers.d/90-{{ automation_user }}"
        content: "{{ automation_user }} {{ sudo_scope }}\n"
        owner: root
        group: root
        mode: "0440"
        validate: "visudo -cf %s"

    - name: (Optional) Harden SSH to key-only (do this after we confirm key login)
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^PasswordAuthentication'
        line: 'PasswordAuthentication no'
        state: present
        backup: true
      notify: Restart ssh

  handlers:
    - name: Restart ssh
      ansible.builtin.service:
        name: ssh
        state: restarted
```
Run it:

```bash
ansible-playbook playbooks/bootstrap_passwordless.yml -K
```

You'll be prompted for the current sudo password once.

### 6) Verify passwordless operation
Open a new terminal (ensures no cached auth) and test:

```bash
# Ad-hoc become without password (should succeed)
ansible vDep12 -m command -a "id" -b

# Run a simple play (no -K!)
ansible-playbook playbooks/smoke.yml --limit vDep12
```
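A minimal `playbooks/smoke.yml` for this check could look like the following (a sketch; the repo's actual smoke play may differ):

```yaml
---
- name: Smoke test managed hosts
  hosts: all
  become: true
  gather_facts: true
  tasks:
    - name: Report reachability and privilege escalation
      ansible.builtin.debug:
        msg: "{{ inventory_hostname }} reachable as {{ ansible_user_id }}"
```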
If both succeed without prompting, you're passwordless.

### 7) Clean-up: switch SSH to key-only (if not already)

If you didn't let the play flip it, on the host:

```bash
sudo sed -i 's/^PasswordAuthentication .*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl restart ssh
```

(Make sure your key works before doing this.)

### 8) Troubleshooting & rollback
- Lost access after the port change? Use the host console to revert `/etc/ssh/sshd_config` to `Port 22` and `PasswordAuthentication yes`, restart SSH, fix the firewall, and try again.
- Python missing on a minimal host? Run the Python bootstrap once (see the sketch after this list):

  ```bash
  ansible-playbook playbooks/bootstrap_python.yml --limit vDep12 -b -K
  ```

- Sudo still prompts? Check `/etc/sudoers.d/90-sjat` (or your user file) on the host: the correct line is present (`sjat ALL=(ALL) NOPASSWD:ALL`), the file mode is `0440`, and it passes `visudo -cf`.
- Wrong SSH user/port? Confirm `host_vars/<host>.yml` and `~/.ssh/config` match. You can also override on the fly: `-e ansible_user=... -e ansible_port=7576`.
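A minimal `playbooks/bootstrap_python.yml` might look like this (a sketch using the raw module, assuming a Debian-family target; the repo's actual play may differ):

```yaml
---
- name: Bootstrap Python on minimal hosts
  hosts: all
  gather_facts: false
  become: true
  tasks:
    - name: Install python3 with raw (works before Python exists on the host)
      ansible.builtin.raw: |
        test -e /usr/bin/python3 || (apt-get update && apt-get install -y python3)
```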
### Security notes (VPS vs LAN)

For a VPS on the Internet, consider:

- Restricting SSH by IP (UFW/iptables or a cloud firewall).
- Requiring admin access over WireGuard.
- Using a dedicated ansible user with a limited sudo scope:

  ```yaml
  sudo_scope: "ALL=(ALL) NOPASSWD:/usr/bin/apt,/usr/bin/systemctl,/usr/bin/docker,/usr/bin/mv,/usr/bin/cp,/usr/bin/rm,/usr/bin/tar"
  ```

We'll add an SSH hardening role later (KEX/Ciphers/MACs, AllowUsers, `X11Forwarding no`, etc.).
After this bootstrap, all regular playbooks run without `-K`, and you can safely enable key-only SSH.
## baobab.app_prefs role

### What the role does (in plain terms)

- Figures out target users: uses `app_prefs_users`; if empty, falls back to `flatpak_users`. Only users that actually exist on the host are targeted.
- Seeds defaults (one time by default): copies/templates files into each user's app config tree. Files marked `seed_only: true` will not overwrite a file the user already has.
- Optionally restores from per-user backups: if a tar exists at `/srv/user-prefs/<user>/<APP_ID_OR_NAME>.tar.gz`, it will be extracted into the app's base directory, with behavior controlled by `app_prefs_restore_mode` (see below).
### Variables you'll use

- `app_prefs_enabled` (bool): master switch (default `false`; set to `true` where needed)
- `app_prefs_users` (list of usernames): who to target; if empty, falls back to `flatpak_users`
- `app_prefs_items` (list of app entries): what to seed/restore
- `app_prefs_backup_dir` (dir on host): where you place per-user tarballs (default `/srv/user-prefs`)
- `app_prefs_restore` (bool): enable the restore step (default `true`)
- `app_prefs_restore_mode` (string): `missing_only` (default) | `once` | `always`
- `app_prefs_restore_marker_dir` (dir): used by `once` mode (default `/var/lib/baobab/app_prefs/restored`)
### App entry shape

Each item in `app_prefs_items` looks like:

Flatpak app (needs `app_id`):

```yaml
- type: flatpak
  app_id: org.speedcrunch.SpeedCrunch
  files:
    - src: speedcrunch/SpeedCrunch.ini.j2
      dest_rel: config/SpeedCrunch/SpeedCrunch.ini
      seed_only: true      # do not overwrite if the user already has it
      mode: "0644"         # optional
```

XDG/"normal" app (needs `app_name`):

```yaml
- type: xdg
  app_name: keepassxc
  files:
    - src: keepassxc/keepassxc.ini
      dest_rel: keepassxc/keepassxc.ini
      seed_only: true
```
### Where do files live?

- Text files go in `roles/baobab.app_prefs/templates/...` (use `.j2`; a template renders even if you don't use variables).
- Binary or literal files go in `roles/baobab.app_prefs/files/...`.
- The role auto-detects `.j2` or the existing location and uses template vs copy accordingly.
### Recommended inventory examples

Laptops (`group_vars/laptops.yml`):

```yaml
app_prefs_enabled: true
app_prefs_users: "{{ flatpak_users | default([]) }}"   # reuse your Flatpak audience
app_prefs_backup_dir: /srv/user-prefs
app_prefs_restore: true
app_prefs_restore_mode: missing_only   # don't overwrite existing user files

app_prefs_items:
  - type: flatpak
    app_id: org.speedcrunch.SpeedCrunch
    files:
      - src: speedcrunch/SpeedCrunch.ini.j2
        dest_rel: config/SpeedCrunch/SpeedCrunch.ini
        seed_only: true

  - type: flatpak
    app_id: com.visualstudio.code
    files:
      - src: code/settings.json.j2
        dest_rel: config/Code/User/settings.json
        seed_only: true
      - src: code/keybindings.json   # not templated; lives under the role's files/
        dest_rel: config/Code/User/keybindings.json
        seed_only: true

  - type: xdg
    app_name: keepassxc
    files:
      - src: keepassxc/keepassxc.ini
        dest_rel: keepassxc/keepassxc.ini
        seed_only: true
```

For Flatpak apps the base directory is `~/.var/app/<APP_ID>/...` (so the SpeedCrunch entry above lands at `~/.var/app/org.speedcrunch.SpeedCrunch/config/SpeedCrunch/SpeedCrunch.ini`); for XDG apps it is `~/.config/<APP_NAME>/...`.
### Restore modes (how backups behave)

Set `app_prefs_restore_mode` to control how `/srv/user-prefs/<user>/<app>.tar.gz` is applied:

- `missing_only` (default): don't overwrite files that already exist (uses `tar --skip-old-files`); see the sketch below.
- `once`: restore only the first time and create a marker at `/var/lib/baobab/app_prefs/restored/<user>__<app>.restored`. Future runs skip the restore.
- `always`: restore every run (overwrites existing files). Use intentionally.

Tip: use `once` to import known-good user settings during migrations, then flip back to `missing_only` for ongoing runs.
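For illustration, the `missing_only` restore could be implemented roughly like this (a sketch assuming GNU tar on the target; `item_user`, `item_app`, and `app_base_dir` are placeholder variables, and the role's actual tasks may differ):

```yaml
- name: Restore app prefs tar without clobbering existing files
  ansible.builtin.command:
    argv:
      - tar
      - --skip-old-files          # missing_only: keep files the user already has
      - -xzf
      - "/srv/user-prefs/{{ item_user }}/{{ item_app }}.tar.gz"
      - -C
      - "{{ app_base_dir }}"      # e.g. ~/.var/app/<APP_ID> or ~/.config/<APP_NAME>
  become: true
  become_user: "{{ item_user }}"
```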
### Creating a backup tar the role will pick up

On the host (target machine), create the expected layout and tar it:

```bash
sudo mkdir -p /srv/user-prefs/sjat
sudo mkdir -p /tmp/app_prefs/org.speedcrunch.SpeedCrunch/config/SpeedCrunch
sudo tee /tmp/app_prefs/org.speedcrunch.SpeedCrunch/config/SpeedCrunch/SpeedCrunch.ini >/dev/null <<'EOF'
FROM_BACKUP
[General]
angleUnit=rad
displayMode=fixed
EOF

sudo tar -C /tmp/app_prefs/org.speedcrunch.SpeedCrunch \
  -czf /srv/user-prefs/sjat/org.speedcrunch.SpeedCrunch.tar.gz \
  config/SpeedCrunch/SpeedCrunch.ini
```

Then run the role. To apply backups without clobbering existing files:

```bash
ansible-playbook play_setup.yml -i inventories/<env>/hosts.yml \
  --limit vDep12-XFCE -t app_prefs -b \
  -e app_prefs_restore=true -e app_prefs_restore_mode=missing_only
```

To import exactly once (even if it overwrites on the first run):

```bash
-e app_prefs_restore=true -e app_prefs_restore_mode=once
```

To force an overwrite every time:

```bash
-e app_prefs_restore=true -e app_prefs_restore_mode=always
```
### Common workflows

- Ship defaults only (never overwrite user edits): set `seed_only: true` on files (recommended), and leave restore off (or use `missing_only`).
- Migrate user configs from backups exactly once: place tars in `/srv/user-prefs/<user>/...` and run with `app_prefs_restore_mode=once`. After the first run, a marker prevents re-apply.
- Enforce a known policy periodically: run with `app_prefs_restore_mode=always` (overwrites). Use consciously.
### Quick verification & troubleshooting

- Role didn't run? You may be filtering by tag. Ensure your playbook includes:

  ```yaml
  - { role: baobab.app_prefs, tags: [app_prefs] }
  ```

  Then run with `-t app_prefs`, or omit `-t` to run everything.

- "Source not found" errors: check file paths on the controller:

  ```bash
  ls -l roles/baobab.app_prefs/templates/<path>.j2
  ls -l roles/baobab.app_prefs/files/<path>
  ```

  The role checks both locations on localhost and prefers `templates/`.

- Unknown user errors: ensure the users exist (run `baobab.users` first).
- Flatpak app seeds but the app isn't installed? This role only writes files; install apps with your package/flatpak roles.
- Where did the `once` marker go? `/var/lib/baobab/app_prefs/restored/<user>__<app>.restored`. Delete it to re-apply a `once` restore.
### Design choices & best practices

- Logic in the role, data in the inventory: keeps the repo clean and flexible.
- Template everything text-based: put text configs in `templates/` (`.j2`), even if they don't use variables.
- Non-destructive by default: `seed_only: true` and the `missing_only` restore mode protect user changes.
- Per-user archives live on the host: decouple secrets/preferences from source control; you decide how they get there.

That's it: use this role to standardize initial UX while still respecting local customization. If you later want a "backup exporter" (tar current user settings back into `/srv/user-prefs/...`), we can add a companion task for that.
## Traefik and DNS

### Create a token per zone (Cloudflare UI)

Do this once for each zone you'll use (e.g., baobab.band, rullebiler.dk, ...):

1. Cloudflare → My Profile → API Tokens → Create Token.
2. Choose the "Edit zone DNS" template (a good starting point).
3. Permissions (minimum needed for DNS-01 via Traefik/lego):
   - Zone → DNS → Edit
   - Zone → Zone → Read (helps lego/Traefik resolve zone details)
4. Zone Resources: Specific Zone → pick your zone (e.g., baobab.band).
5. Client IP Address Filtering (optional but recommended): add your public IP(s) for the kasuku host(s).
6. Continue → Create Token → copy the token string somewhere safe (you'll paste it into Ansible Vault next, as shown below).
7. Repeat for each zone so every host can use a token restricted to its own zone.
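One way to store the per-zone tokens (a sketch; the variable names here are illustrative, not the repo's actual ones):

```yaml
# inventories/prod/group_vars/all.vault.yml (edited via ansible-vault)
traefik__cloudflare_api_token_baobab_band: "REPLACE_ME"
traefik__cloudflare_api_token_rullebiler_dk: "REPLACE_ME"
```

Each host's Traefik deployment then writes only the token for its own zone to `/etc/traefik/cf_api_token`, following the root-only-file pattern from the Vault chapter.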