

Don’t use the RAID56 functionality of BTRFS, the official docs still list it as unstable. Apart from that it’s pretty good.
You’re welcome, great to see how you’re taking all the comments on board!
There are more subtle problems with NAT as well. Say that PC-A opens a connection from port 1234 (to something on the internet), and PC-B opens a connection from port 1234 too. Now the router has to rewrite PC-B’s connection so it appears to come from port 1235, to keep the two apart. But if PC-C then wants to open a listening port on 1235, it won’t work, because the port is already in use on the router, even though you can’t see anything using it!
NAT is full of ridiculous corner cases like that, which normal users aren’t very likely to notice. But once you start self-hosting things or trying to get something like older multiplayer games working, the problems pile up fast if you’re unlucky.
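To make that corner case concrete, here’s a toy model of a NAT table handing out external ports (plain Python sketch of the idea, not how any real router is implemented; real routers do this in the kernel’s connection tracker):

```python
# Toy model of a NAT router allocating external source ports.
used_ports: set[int] = set()

def translate(host: str, src_port: int) -> int:
    """Pick the external port for an outbound connection from host:src_port."""
    port = src_port
    while port in used_ports:  # already taken? bump to the next free port
        port += 1
    used_ports.add(port)
    print(f"{host}:{src_port} appears on the internet as port {port}")
    return port

translate("PC-A", 1234)   # -> 1234, kept as-is
translate("PC-B", 1234)   # -> 1235, silently rewritten
# A port forward for PC-C on external port 1235 now collides with
# PC-B's rewritten connection, even though nothing "uses" 1235 visibly.
print("external 1235 free?", 1235 not in used_ports)  # False
```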
Yeah, multiple NAT is a lot worse, but normal NAT has plenty of corner cases too that most people just don’t run into that often. For example, if two computers behind NAT want to listen on the same port, that just doesn’t work.
NAT is a “good enough” solution that tricked a whole generation of people growing up with it into thinking it’s a good thing, when in reality the best case is that you don’t run into issues, and the worst case is that performance is horrible and you can’t do the things you want to do. The only people who benefit from it are lazy ISPs, not their users.
NAT is not a firewall, and it’s not that great for privacy either: it’s not hard to fingerprint individual devices behind NAT. There are zero cases where NAT is better than the alternatives, except when you’re out of public IPs, which isn’t an issue with IPv6.
So you’re much better off not trying to reinvent the wheel and just using IPv6 the way it was intended. Use privacy extensions for privacy. Use proper firewall rules for security. Revel in the fact that NAT isn’t fucking up your inbound connections. Do not, under any circumstances, force the horrible kludge that is NAT onto your IPv6 network.
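For the privacy extensions part, on Linux that’s one sysctl (RFC 4941 temporary addresses); the file name below is just an example, and many distros already ship this enabled:

```
# /etc/sysctl.d/40-ipv6-privacy.conf  (file name is an example)
# 2 = generate temporary addresses AND prefer them for outgoing traffic
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
```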
Absolutely possible if you keep the network setup simple. However, I run different sets of containers as different users, some of which also use services from the host itself (such as a PostgreSQL instance), and things quickly become more complex in those situations. The examples in the GitHub repo helped me a lot in getting everything set up the way I wanted.
If you want to use caddy as proxy for other containers running as quadlets have a look at this repo: https://github.com/eriksjolund/podman-caddy-socket-activation
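For anyone who hasn’t seen quadlets yet, a minimal `.container` file looks something like this (a generic sketch, not that repo’s caddy setup; the image and port are placeholders):

```ini
# ~/.config/containers/systemd/whoami.container
# Podman generates a systemd service from this on daemon-reload;
# start it with: systemctl --user start whoami.service
[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```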
It certainly demystified some network shenanigans for me.
Pretty sure you can unblock per device in AdGuard, so maybe block it first, then unblock it from the logs for the clients you want to allow?
A 10 Gbps network tops out around 1.25 GB/s, far slower than practically any PCIe slot you’d put a GPU in (even an old PCIe 3.0 x4 slot manages about 4 GB/s), with much worse latency on top. So cramming the GPUs into any slot that’ll fit in one machine is a much better option than distributing them over multiple PCs.
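Back-of-the-envelope numbers (decimal units, ignoring protocol overhead):

```python
# Rough bandwidth comparison: 10 GbE vs PCIe 3.0 slots.
ten_gbe = 10e9 / 8       # 10 Gbps -> 1.25 GB/s
pcie3_lane = 0.985e9     # PCIe 3.0 -> ~0.985 GB/s per lane

for lanes in (1, 4, 16):
    ratio = pcie3_lane * lanes / ten_gbe
    print(f"PCIe 3.0 x{lanes}: {ratio:.1f}x the bandwidth of 10 GbE")
# x1: 0.8x, x4: 3.2x, x16: 12.6x -- and PCIe round trips are on the
# order of a microsecond, far below what Ethernet can offer.
```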
I’ve been using this method for the past few weeks and it mostly does what I want it to do: https://github.com/eriksjolund/podman-caddy-socket-activation/
The difference might be HTTP vs HTTPS. On a Pi, the extra CPU load of properly encrypting the HTTPS stream is probably significant.
Object storage (the S3 API stuff) is the most logical answer here: it’s much simpler, and therefore more reliable, than solutions like Gluster, and the abstraction actually matches your use case. Otherwise something like an NFS share from a central fileserver works too.
But I agree with the other comment that you’re trying to do Kubernetes on hard mode, and most likely with a worse result.
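As a sketch of why the abstraction fits (endpoint, bucket, and credentials below are made up; this works against any S3-compatible store like MinIO or Garage):

```python
import boto3  # standard AWS SDK, works with any S3-compatible endpoint

# Placeholder endpoint and credentials -- point these at your own store.
s3 = boto3.client(
    "s3",
    endpoint_url="http://minio.local:9000",
    aws_access_key_id="EXAMPLE",
    aws_secret_access_key="EXAMPLE",
)

s3.put_object(Bucket="shared", Key="app/state.json", Body=b'{"ok": true}')
print(s3.get_object(Bucket="shared", Key="app/state.json")["Body"].read())
# Every node talks HTTP to the same store: no POSIX filesystem semantics
# to replicate, which is exactly where Gluster-style setups get hairy.
```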
Thunder has experimental support; I haven’t tried it yet though (apparently it costs extra battery).
Oh I agree with your post, but I was responding to Valmond who used different criteria.
You can have all three of those, but you won’t get great performance. The Samsung QVO SATA drives are a great example. I wouldn’t use those for an OS drive but they’re fantastic for NAS or media use.
If everything went fine during production you’re probably right. But there have definitely been batches of hard disks with production flaws which caused all drives from that batch to fail in a similar way.
“mostly solve the write hole problem” 😬
You do you, but I wouldn’t trust my data to that.