

Idk most of the time I just dcpull dcup (aliases ftw)
Ofc I've had some stuff break occasionally when there's a breaking change, but the same could happen through apt, no?
I prefer it to dependency hell personally lol
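The commenter doesn't share their actual alias definitions, but a minimal sketch of what "dcpull"/"dcup" commonly map to (assuming Compose v2's `docker compose` subcommand) would be:

```shell
# Hypothetical aliases for ~/.bashrc -- names from the comment above,
# definitions assumed, not quoted from the commenter
alias dcpull='docker compose pull'   # fetch newer images for the stack
alias dcup='docker compose up -d'    # recreate containers on the new images
```

So an update is just `dcpull && dcup` from the directory holding the compose file.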


Just replace apt update with docker pull 🤷‍♂️


It's really nice once it's going, especially if you link them together in a compose file and farm out individual ymls for each service, or use something like Dockge to do it.
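One way to farm out per-service ymls is the top-level `include:` element (Docker Compose v2.20+); the service filenames here are made up for illustration:

```shell
# Sketch: a top-level compose.yml that pulls in one yml per service
# (hypothetical paths; requires Docker Compose v2.20+ for `include:`)
cat > compose.yml <<'EOF'
include:
  - services/jellyfin.yml
  - services/searxng.yml
EOF
# then a single `docker compose up -d` brings up everything
```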


I went with Proxmox and various LXCs, for either individual services or docker stacks with several things on a minimal OS (I'm comfortable with Ubuntu Server, so that's generally what I go with for the unprivileged LXC).


For logs, Dozzle is also fantastic, and you can do "agents" if you have multiple docker nodes and connect them together.
Proxmox with Ubuntu as the LXC, essentially only docker containers on it.
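Roughly how Dozzle's agent mode is wired up, per its docs — hostnames and ports here are examples, so double-check the current Dozzle documentation before copying:

```shell
# On each remote docker node, run an agent that exposes that node's logs
# (Dozzle agents listen on 7007 by default):
cat > agent-compose.yml <<'EOF'
services:
  dozzle-agent:
    image: amir20/dozzle:latest
    command: agent
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "7007:7007"
EOF

# On the main node, point Dozzle at the agents ("node-b"/"node-c" are
# placeholder hostnames):
cat > dozzle-compose.yml <<'EOF'
services:
  dozzle:
    image: amir20/dozzle:latest
    environment:
      - DOZZLE_REMOTE_AGENT=node-b:7007,node-c:7007
    ports:
      - "8080:8080"
EOF
```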


The biggest challenge I ran into is keeping the drivers in sync between the host and the LXC; since one is Debian and the other is Ubuntu, the LXC tends to want to update sooner, and sometimes that can break the communication.
You edit the LXC's config in Proxmox to do it, sec.
Edit: This guide would probably be better than what I did earlier this year: https://www.virtualizationhowto.com/2025/05/how-to-enable-gpu-passthrough-to-lxc-containers-in-proxmox/
I run the majority of my docker containers within an unprivileged LXC, even with GPU passthrough, and it works great.
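For reference, the LXC config edit for an NVIDIA card roughly takes this shape; the device major numbers vary per host (and /dev/nvidia-uvm usually gets a separate, dynamically assigned major that needs its own allow line), so treat these lines as an example rather than a recipe — the linked guide covers the details:

```shell
# Example extra lines for /etc/pve/lxc/<id>.conf (NVIDIA GPU passthrough).
# `195` is the typical major for /dev/nvidia* -- verify with
# `ls -l /dev/nvidia*` on the Proxmox host before copying anything.
cat >> example-lxc.conf <<'EOF'
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
EOF
```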


As I mentioned in the other reply, this is exactly what's expected for the Proxmox Community Scripts.


Unfortunately this is the standard for the Proxmox community scripts.


You can use Symphonium with JF libraries as well.
I almost set up Navidrome, but I have TV and movies on JF already.


I definitely bounced off of it since it had so much configuration I had no idea where to start. AdGuard Home was so much easier to set up, particularly because I also had to use it via DHCP, as my router doesn't have a DNS option.


For sure, good luck and have fun :D


I've completely replaced my searching with SearXNG. It is a little slower, and ofc if I have an outage or something at home I have to fall back to a different search engine temporarily, but overall I like it a lot.
It was one of the first things I set up last year in my homelab, because I'm attempting to degoogle a fair amount; the AI search stuff was just a fun test.


Interesting, I mainly have used text-generation-webui, which has a search-support plugin; it's kinda nifty to use my SearXNG instance for it. It's a bit finicky though.
Another thing to keep in mind (apologies if this is just repeating info you already know): your total potential context size in relation to the model size, since both take up VRAM. Reading search results/pages can eat up a lot.
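To make the context-vs-VRAM point concrete, here's a back-of-envelope KV-cache estimate. The shape numbers (layers, KV heads, head dim) are made up but typical of a 7B-class model with grouped-query attention; K and V each store layers × kv_heads × head_dim values per token, at 2 bytes in fp16:

```shell
# Rough KV-cache size: 2 (K+V) * layers * kv_heads * head_dim * ctx * 2 bytes (fp16)
layers=32; kv_heads=8; head_dim=128; ctx=8192
kv_mib=$(( 2 * layers * kv_heads * head_dim * ctx * 2 / 1024 / 1024 ))
echo "${kv_mib} MiB"   # -> 1024 MiB just for an 8k context, on top of the weights
```

So on a 12 GB card, a few GiB can quietly disappear into context before you've even loaded a bigger model.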


Might want a bigger GPU. I have a 3080 Ti, and the 12 GB is pretty limiting in terms of how large a model you can use. For example, one thing I was hoping to do was essentially replace Google Assistant/Gemini, and I can't realistically run a good model plus the STT/TTS off the one GPU.


Yeah, the Instagram/TikTok proxies always seem to be down or rate limited. Self-hosted Redlib is nice most of the time though.
Oh lol, of course Apple just calls it tvOS.
Coming up on a year of self-hosting, the worst I've had happen is a copyright letter from my ISP from downloading torrents without a VPN lmao. Threw it behind a VPN and it's been fine since.