

ditch Ansible
learn NixOS
be motivated


I think TrueNAS and Unraid are the only user-friendly experiences out of the box. Everything else needs a lot of configuring. I don’t think you can call system administration gatekeeping


my NixOS containers and the podman containers inside them update nightly around 03:00
NixOS is great for servers
NixOS is so the solution to this
surprised it’s so far down the thread
if you don’t need Proxmox’s admin tools
try running podman in NixOS on ZFS
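for reference, a minimal sketch of how that nightly schedule can look; the system.autoUpgrade options are standard NixOS, but the podman-auto-update unit is the one podman itself ships, so treat that unit name as an assumption:

{ pkgs, ... }:
{
  # rebuild the host from its channel every night at 03:00
  system.autoUpgrade = {
    enable = true;
    dates = "03:00";
    allowReboot = false;
  };

  virtualisation.podman.enable = true;

  # podman ships a podman-auto-update service that re-pulls any container
  # labelled io.containers.autoupdate=registry; this wires its timer to
  # the same nightly slot
  systemd.timers."podman-auto-update" = {
    wantedBy = [ "timers.target" ];
    timerConfig.OnCalendar = "03:00";
  };
}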


Podman inside NixOS inside LXC inside Proxmox
Auto updates configurable everywhere


master? tut, tut, tut
great, perfect imo
yeah this larping is some strange nonsense
any EU policy should support only FOSS platforms, protocols, and storage formats, so that anyone can use them immediately without cost or licensing, and so that any investment in further development is immediately available to all users and never privatised
companies can provide support services for these systems; there are going to be a lot of them


use a cheap VLAN switch to make an actual VLAN DMZ with the services’ router
use non-root containers everywhere. segment services into different containers
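a minimal sketch of the non-root part, using a hypothetical whoami service just to show the knobs (the uid:gid is arbitrary; the user option is the same one oci-containers exposes in my configs further down):

virtualisation.oci-containers.containers.whoami = {
  image = "docker.io/traefik/whoami:latest";
  user = "1001:1000";                # run the process as an unprivileged uid:gid
  ports = [ "8000:80" ];             # high host port, nothing privileged
  volumes = [ "/srv/whoami:/data" ]; # host dir must be readable by 1001:1000
  # if an image refuses to bind its internal port as non-root,
  # most expose a flag or env var to move it above 1024
};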


use NixOS! you won’t regret it


i just transitioned from a dedicated pfSense machine to an OpenWrt LXC container on a Proxmox machine
the idea is to have 2 or more OpenWrt instances on different Proxmox machines for some HA routing to my self-hosted subnet(s)
going well so far and i think i know a lot more about routing (ha). OpenWrt is pretty great though.
ps. i think i’m having issues with UDP port forwarding but not sure


i have found this reference very useful https://mynixos.com/options/


yeah Proxmox is not necessary unless you need lots of separate instances to play around with


this is my container config for Element/Matrix
podman containers do not run as root, so you have to get the file privileges right on the volumes mapped into the containers. i used top to find out which user each service was running as. you can see there are some settings below where you can change the user if you are having permission problems
{ pkgs, modulesPath, ... }:
{
  imports = [
    (modulesPath + "/virtualisation/proxmox-lxc.nix")
  ];

  security.pki.certificateFiles = [ "/etc/ssl/certs/ca-certificates.crt" ];

  system.stateVersion = "23.11";
  system.autoUpgrade.enable = true;
  system.autoUpgrade.allowReboot = false;

  nix.gc = {
    automatic = true;
    dates = "weekly";
    options = "--delete-older-than 14d";
  };

  services.openssh = {
    enable = true;
    settings.PasswordAuthentication = true;
  };

  users.users.XXXXXX = {
    isNormalUser = true;
    home = "/home/XXXXXX";
    extraGroups = [ "wheel" ];
    shell = pkgs.zsh;
  };
  programs.zsh.enable = true;

  environment.etc = {
    "fail2ban/filter.d/matrix-synapse.local".text = pkgs.lib.mkDefault (pkgs.lib.mkAfter ''
      [Definition]
      failregex = .*POST.* - <HOST> - 8008.*\n.*\n.*Got login request.*\n.*Failed password login.*
                  .*POST.* - <HOST> - 8008.*\n.*\n.*Got login request.*\n.*Attempted to login as.*\n.*Invalid username or password.*
    '');
  };

  services.fail2ban = {
    enable = true;
    maxretry = 3;
    bantime = "10m";
    bantime-increment = {
      enable = true;
      multipliers = "1 2 4 8 16 32 64";
      maxtime = "168h";
      overalljails = true;
    };
    jails = {
      matrix-synapse.settings = {
        filter = "matrix-synapse";
        action = "%(known/action)s";
        logpath = "/srv/logs/synapse.json.log";
        backend = "auto";
        findtime = 600;
        bantime = 600;
        maxretry = 2;
      };
    };
  };

  virtualisation.oci-containers = {
    containers = {
      postgres = {
        autoStart = false;
        environment = {
          POSTGRES_USER = "XXXXXX";
          POSTGRES_PASSWORD = "XXXXXX";
          LANG = "en_US.utf8";
        };
        image = "docker.io/postgres:14";
        ports = [ "5432:5432" ];
        volumes = [
          "/srv/postgres:/var/lib/postgresql/data"
        ];
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--pull=newer"
        ];
      };
      synapse = {
        autoStart = false;
        environment = {
          LANG = "C.UTF-8";
          # UID = "0";
          # GID = "0";
        };
        # user = "1001:1000";
        image = "ghcr.io/element-hq/synapse:latest";
        ports = [ "8008:8008" ];
        volumes = [
          "/srv/synapse:/data"
        ];
        log-driver = "json-file";
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--log-opt" "max-size=10m" "--log-opt" "max-file=1" "--log-opt" "path=/srv/logs/synapse.json.log"
          "--pull=newer"
        ];
        dependsOn = [ "postgres" ];
      };
      element = {
        autoStart = true;
        image = "docker.io/vectorim/element-web:latest";
        ports = [ "8009:80" ];
        volumes = [
          "/srv/element/config.json:/app/config.json"
        ];
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--pull=newer"
        ];
        # dependsOn = [ "synapse" ];
      };
      call = {
        autoStart = true;
        image = "ghcr.io/element-hq/element-call:latest-ci";
        ports = [ "8080:8080" ];
        volumes = [
          "/srv/call/config.json:/app/config.json"
        ];
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--pull=newer"
        ];
      };
      livekit = {
        autoStart = true;
        image = "docker.io/livekit/livekit-server:latest";
        ports = [ "7880:7880" "7881:7881" "50000-60000:50000-60000/udp" "5349:5349" "3478:3478/udp" ];
        cmd = [ "--config" "/etc/config.yaml" ];
        entrypoint = "/livekit-server";
        volumes = [
          "/srv/livekit:/etc"
        ];
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--pull=newer"
        ];
      };
      livekitjwt = {
        autoStart = true;
        image = "ghcr.io/element-hq/lk-jwt-service:latest-ci";
        ports = [ "7980:8080" ];
        environment = {
          LK_JWT_PORT = "8080";
          LIVEKIT_URL = "wss://livekit.XXXXXX.dynu.net";
          LIVEKIT_KEY = "XXXXXX";
          LIVEKIT_SECRET = "XXXXXX";
        };
        entrypoint = "/lk-jwt-service";
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--pull=newer"
        ];
      };
    };
  };
}
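to set the volume ownership declaratively instead of chown-ing by hand, something like this works; 991:991 is an assumed uid:gid (it’s what the synapse image ran as for me, but confirm with top first):

systemd.tmpfiles.rules = [
  # assumed uid/gid; check what the container actually runs as
  "d /srv/synapse 0750 991 991 -"
  "d /srv/logs 0755 991 991 -"
];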


this is my nginx config for my Element/Matrix services
as you can see i am running NixOS in a Proxmox LXC with an old 23.11 nix channel, but i’m sure the config can be used in other NixOS environments
{ pkgs, modulesPath, ... }:
{
  imports = [
    (modulesPath + "/virtualisation/proxmox-lxc.nix")
  ];

  security.pki.certificateFiles = [ "/etc/ssl/certs/ca-certificates.crt" ];

  system.stateVersion = "23.11";
  system.autoUpgrade.enable = true;
  system.autoUpgrade.allowReboot = true;

  nix.gc = {
    automatic = true;
    dates = "weekly";
    options = "--delete-older-than 14d";
  };

  networking.firewall.allowedTCPPorts = [ 80 443 ];

  services.openssh = {
    enable = true;
    settings.PasswordAuthentication = true;
  };

  users.users.XXXXXX = {
    isNormalUser = true;
    home = "/home/XXXXXX";
    extraGroups = [ "wheel" ];
    shell = pkgs.zsh;
  };
  programs.zsh.enable = true;

  security.acme = {
    acceptTerms = true;
    defaults.email = "XXXXXX@yahoo.com";
  };

  services.nginx = {
    enable = true;
    virtualHosts._ = {
      default = true;
      extraConfig = "return 500; server_tokens off;";
    };
    virtualHosts."XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/_matrix/federation/v1" = {
        proxyPass = "http://192.168.10.131:8008";
        extraConfig = "client_max_body_size 300M;" +
          "proxy_set_header X-Forwarded-For $remote_addr;" +
          "proxy_set_header Host $host;" +
          "proxy_set_header X-Forwarded-Proto $scheme;";
      };
      locations."/" = {
        extraConfig = "return 302 https://element.XXXXXX.dynu.net;";
      };
      extraConfig = "proxy_http_version 1.1;";
    };
    virtualHosts."matrix.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      extraConfig = "proxy_http_version 1.1;";
      locations."/" = {
        proxyPass = "http://192.168.10.131:8008";
        extraConfig = "client_max_body_size 300M;" +
          "proxy_set_header X-Forwarded-For $remote_addr;" +
          "proxy_set_header Host $host;" +
          "proxy_set_header X-Forwarded-Proto $scheme;";
      };
    };
    virtualHosts."element.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:8009/";
        extraConfig = "proxy_set_header X-Forwarded-For $remote_addr;";
      };
    };
    virtualHosts."call.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:8080/";
        extraConfig = "proxy_set_header X-Forwarded-For $remote_addr;";
      };
    };
    virtualHosts."livekit.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/wss" = {
        proxyPass = "http://192.168.10.131:7881/";
        # proxyWebsockets = true;
        extraConfig = "proxy_http_version 1.1;" +
          "proxy_set_header X-Forwarded-For $remote_addr;" +
          "proxy_set_header Host $host;" +
          "proxy_set_header Connection \"upgrade\";" +
          "proxy_set_header Upgrade $http_upgrade;";
      };
      locations."/" = {
        proxyPass = "http://192.168.10.131:7880/";
        # proxyWebsockets = true;
        extraConfig = "proxy_http_version 1.1;" +
          "proxy_set_header X-Forwarded-For $remote_addr;" +
          "proxy_set_header Host $host;" +
          "proxy_set_header Connection \"upgrade\";" +
          "proxy_set_header Upgrade $http_upgrade;";
      };
    };
    virtualHosts."livekit-jwt.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:7980/";
        extraConfig = "proxy_set_header X-Forwarded-For $remote_addr;";
      };
    };
    virtualHosts."turn.XXXXXX.dynu.net" = {
      enableACME = true;
      http2 = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:5349/";
      };
    };
  };
}
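by the way, the commented-out proxyWebsockets = true; should produce roughly the same headers i set by hand; the livekit /wss location could likely be reduced to this (same upstream, just the built-in option):

locations."/wss" = {
  proxyPass = "http://192.168.10.131:7881/";
  # sets proxy_http_version 1.1 plus the Upgrade/Connection headers for you
  proxyWebsockets = true;
};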


you only need to reboot NixOS when something low-level has changed, like a new kernel. i honestly don’t know exactly where that line is drawn, so i reboot quite a lot when i’m setting up a NixOS server and then hardly reboot it at all from then on, even with auto-updates running
oh and if i make small changes to the services i just run sudo nixos-rebuild switch and don’t reboot
i do look out for new images that could be a drop-in replacement
the new distroless container builds are very interesting