  • pastebin.com/DiHX2vg2

    Hopefully this works and you can see the compose file. I’ve put a few things in [square brackets] to hide some stuff, probably overly cautiously. I have an external network linked to NPM, and on that network I use nextcloud-server as the hostname and 80 as the port (that’s the port inside the container, not 8080 on the host - that took me a long time to figure out!). Add a .env file with everything referenced in the compose file and then (hopefully!) away you go.
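
    As a rough sketch of the shape of it (illustrative only - the network name and image tag are assumptions, and the database service is omitted):

    ```yaml
    services:
      nextcloud-server:
        image: nextcloud:29              # pin a version rather than :latest
        restart: unless-stopped
        volumes:
          - ./nextcloud:/var/www/html
        environment:                     # all of these come from the .env file
          - MYSQL_HOST=${MYSQL_HOST}
          - MYSQL_DATABASE=${MYSQL_DATABASE}
          - MYSQL_USER=${MYSQL_USER}
          - MYSQL_PASSWORD=${MYSQL_PASSWORD}
        networks:
          - npm_network
      # (database service omitted for brevity)

    networks:
      npm_network:
        external: true                   # the network NPM is already attached to
    ```

    Note there’s no ports: mapping at all - in NPM the proxy host just points at hostname nextcloud-server and port 80.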



  • Not sure if it makes a difference, and not quite your question, but I’ve just switched away from nextcloud-aio to my own docker compose, so I have better control and a clearer idea of what’s going on. I always found aio a bit odd, and when installing on a new VPS I decided to try without it. It was surprisingly straightforward and I’ve been able to install everything I need.

    Let me know if my docker compose would help. I still need to add the backup solution but it’s going to be straightforward as well.


  • My experience has taught me not to ‘apt autoremove’ unless I’m really sure what the packages are!

    Take it one piece of software at a time. Check it’s running fine, then move on to the next. You’ll often realise down the line that something would be helpful, so you’ll go back and make changes.

    Keep a running list of software and the ports used.

    With docker, do not automatically use :latest on important software (nginx proxy manager, SSO software, password database, anything you use regularly, etc.). I did that and was burned a few times.
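
    For example (the tag here is illustrative - check the project’s releases for a current one):

    ```yaml
    services:
      npm:
        # a pinned tag means an unattended pull can’t silently jump to a breaking release
        image: jc21/nginx-proxy-manager:2.11.3
        restart: unless-stopped
    ```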

    Also, accept that at some point you’ll either mess up or realise it would just be easier to start again with a fresh OS install. Take copies of your data (docker compose files and persistent storage) for working software before starting a new one, before installing anything directly onto the OS, and before major updates.




  • I would recommend it, as it is fairly easy to understand and most FOSS services give you an example to use. You can also convert docker run examples to compose (search for Composerize), although it doesn’t always work.

    I found compose files easier to digest when learning - you can see what is going on (ports, networks, depends_on, etc.) and compare with other services to see what is missing (container name, restart policy, etc.). I can then easily back up the compose files, env files and data directories to be able to get a service up again very quickly (DBs are trickier, but I found a docker image I can stick in the compose file that backs up DB dumps regularly).
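
    As a rough illustration of what that conversion looks like (traefik/whoami is just a stand-in test image):

    ```yaml
    # docker run -d --name whoami --restart unless-stopped -p 8081:80 traefik/whoami
    # ...becomes:
    services:
      whoami:
        image: traefik/whoami
        container_name: whoami
        restart: unless-stopped
        ports:
          - "8081:80"
    ```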



  • I tried Readarr and the other options. They work sometimes, but not reliably enough. As others mention, there’s no standard naming, and lots of people use their library card for Libby access. I also think there’s a bit more of a direct link to authors, so I’d prefer to buy the book unless they’re super well off anyway. To be honest, I can’t see the *arrs working with LibGen - having looked at the open issues on integrating it, it just doesn’t allow for scraping in the same way.

    For me, I self-host openbooks (which uses IRC) and select a download straight away - which, to be fair, takes about the same time as searching for and finding a TV show, if you’re only after one book. I have exposed it behind SSO so I can access it on my phone and download a book straight away when someone gives me a recommendation. Most of the time I just add to a running note on my phone and go through it every few months when I need more books.

    It’s fairly quick for multiple books, but not Sonarr levels of ease. The downloads go into a Calibre-monitored folder, which then does the automation (naming, conversion if needed, etc.). I bulk email the new books to my Kindle with one click. Calibre-web is set to read-only for a nice browsing experience and to read on other devices if I need to (although no page sync). It’s a bit of manual work, but I find it’s not too bad, and in 10 minutes I can load up enough books for months.

    Occasionally IRC doesn’t have the book, so I manually search on Prowlarr and download with SABnzbd or Transmission. The downloads are almost instant, so I just wait and copy them to my downloads folder (I could probably automate this step too with tags, but it’s so infrequent).
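
    If anyone wants to try the same pipeline, here’s a sketch of the openbooks side (port and paths are illustrative, and the flags are from my memory of the project’s README, so double-check them):

    ```yaml
    services:
      openbooks:
        image: evanbuss/openbooks
        # --name is your IRC handle; --persist keeps downloads on disk
        command: --name ${IRC_HANDLE} --persist
        ports:
          - "8090:80"
        volumes:
          # point at the folder Calibre watches, so imports happen automatically
          - /data/calibre-import:/books
        restart: unless-stopped
    ```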


  • I have a dynamic IP and there are several ways around it. I use Cloudflared (which keeps the DNS records updated) and a script I found that updates DuckDNS as a backup. Both are very simple.

    Accessing the services is not the problem - the problem is keeping them safe. I’ve tried lots of different approaches (although not Tailscale yet) and have a few services exposed directly to the internet behind Authentik / NPM / Cloudflare / fail2ban / ufw. Others I access through my router’s OpenVPN server, with keys for my laptop and phone as clients. There are so many guides online for all the VPN types. It’s just finding the right balance between ease of use and safety.
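
    The DuckDNS side can even be another container instead of a script - a sketch using the linuxserver image (env names as per their docs, values in .env):

    ```yaml
    services:
      duckdns:
        image: lscr.io/linuxserver/duckdns:latest   # simple enough that :latest is low-risk
        environment:
          - SUBDOMAINS=${DUCKDNS_SUBDOMAIN}
          - TOKEN=${DUCKDNS_TOKEN}
          - TZ=Etc/UTC
        restart: unless-stopped
    ```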




  • I only use docker images supplied by the devs themselves or community maintained (e.g. linuxserver.io), so they essentially tell docker what needs to be installed in the container, not me. It takes the hassle out of figuring out what I need to do to get the service running. If they update their app, they’ll know best what else needs to be updated and will do that in the image. I guess you are relying on them to keep everything updated, but they are way more knowledgeable than me, and if there is a vulnerability it is confined to that container rather than your other services.


  • Lots of little things, really. Obviously I couldn’t say for certain, but they seemed to be on top of it without causing us too much difficulty in doing our jobs.

    Sometimes things were blocked (like emails to new addresses), or questioned afterwards to check they were expected and followed policy. Policies were clear, and there were helpful prompts and warnings.

    We were involved in something where we had to copy a sh*t load of files from a shared folder to a hard disk. There were about three automatic blocks that kicked in at different times, which was a pain to figure out at first, but because we had a good reason, someone in IT just kept at it until it was done. Looking back, that should have raised flags given the size of it all.

    They changed from passwords expiring every 6 months to no forced changes, but passwords had to be longer and 2FA became mandatory. We were told to use KeePass for all passwords for things that weren’t behind SSO, for various reasons.



  • Don’t provide services to others, including your own family - actually, especially your own family - until you are quite comfortable with what is going on and what might be causing issues. Focus on helping yourself, and keep whatever other services you were using before, just in case.

    Trying to fix something at night, with a fuming partner who’s already put up with a difficult-to-use service because of your want for privacy (even though they don’t care), whilst saying “it should work, I don’t know what’s wrong”, is not a great place to be 😁.

    Overall though, I found it all so interesting that I’m now doing a part-time computer science degree in my 30s, purely to learn more (whilst being forced to work to timelines and having paid for it).

    I have a very comfortable, ‘forget about it’ setup that my family now use. Every now and then I add new services for myself, and if one works out I’ll give others access to it, keep it just for me, or just delete it and move on.


  • I have a reason I don’t think is covered. A few programs I’ve come across and wanted to try recommend docker, and some only provide instructions for docker. The devs can spend less time helping you with dependencies and installations, knowing they’ve included everything you need in the Dockerfile. I don’t have a background in Linux or programming, so unless they tell me exactly how to install something, I can struggle. Their installation page is then just the docker compose file with a note on the environment variables you can change.



  • They serve two different purposes. You can have one, both or neither. Sorry if you already know all of this, but I thought it might be good to explain in detail.

    NPM is a reverse proxy, so it passes subdomains to the right service (e.g. service1.url.com goes to service 1 at IP x.x.x.x on port 5050). This allows you to open only one port, to NPM, but access the other services through subdomains. I have NPM in front of my external apps so I can access each through a subdomain (e.g. service1.url.com). You could also use it for internal access if you set up your internal DNS to point (e.g.) service1.internal at NPM, which then proxies to your service’s IP address and port, and set NPM to only allow access from internal IPs.
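
    To make the ‘one open port’ idea concrete, the NPM service itself looks roughly like this (paths illustrative, tag an example):

    ```yaml
    services:
      npm:
        image: jc21/nginx-proxy-manager:2.11.3   # pin important infra rather than :latest
        ports:
          - "443:443"   # the only port you forward on the router
          - "80:80"     # optional, for HTTP -> HTTPS redirects
          - "81:81"     # admin UI - keep this LAN-only, never forward it
        volumes:
          - ./npm/data:/data
          - ./npm/letsencrypt:/etc/letsencrypt
        networks:
          - npm_network

    networks:
      npm_network:
        external: true   # services to be proxied join this same network
    ```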

    Authentik provides single sign-on (SSO), so instead of having different usernames and passwords for every user on every service, you have one set of users and it manages the logins.

    At a high level, there are two ways of using it.

    Some services have proper SSO integration, so you set up Authentik to replace their own login system. For instance, with Nextcloud you still go to the Nextcloud homepage, but it then goes out to Authentik to do the login process; once you’ve passed, Authentik tells Nextcloud “user B has successfully logged in, I vouch for them, and here are their details”. You can do this for internal and external access. With Nextcloud you need to log in either through its own login system or via SSO, so even if I go directly to the internal IP and port (and therefore don’t need NPM to access it), I still need Authentik to log in, so it knows it’s me and not my partner trying to access her account.

    Some services don’t have SSO integration, or have no login at all. For instance, I have Stirling PDF, which doesn’t need user details or a login. However, you don’t want to allow just anyone in, so I have set up NPM to put Authentik in front as a proxy (forward auth). If I go to stirlingpdf.url.com, it sends me to Authentik to log in - you can only ever get to the Stirling app if you successfully log in. You can also set Authentik so that only certain users or groups of users can access certain apps, but that’s more than I need.

    It does take some effort to get SSO working correctly for each service and it’s only really worth it if you do have multiple users or services that need logins.

    You don’t want just NPM unless you trust the service to have a secure login.

    Others will probably say you shouldn’t have anything facing externally at all. You can set up Tailscale or WireGuard tunnels so you always appear to be on the local network; that way, you don’t need NPM to be open externally. However, you might still want it so you can type service1.internal instead of 192.168.1.1:8063 each time. You’d probably also want Authentik, to make logins shared.

    In terms of the network access needed to get them working, NPM must be able to reach Authentik internally on your network. You could either put them on the same shared Docker network or, as in my case, run them both on the same server so they share an internal IP; I have published the individual ports in Docker so they can reach each other internally, just like I can reach both from my laptop. When I’m away from home, my domain points at my home’s external IP, and port 443 is open on my router pointing to my home server with NPM. NPM then “talks” to Authentik over the home network, so I log in through that without having to open the Authentik port externally.
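
    If both were containers, the shared-network option is just this (service and network names illustrative):

    ```yaml
    services:
      npm:
        image: jc21/nginx-proxy-manager:2.11.3
        networks: [proxy]

      authentik-server:
        image: ghcr.io/goauthentik/server:2024.6   # tag illustrative
        networks: [proxy]

    networks:
      proxy: {}   # NPM can now reach authentik-server:9000 by container name
    ```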

    In my case, in the NPM settings, instead of using the Docker-created network address for Authentik (something like 172.3.1.1 that might change), I use the internal IP of the machine (like 192.168.1.1:4443, if 4443 is the Authentik port). I also have an NPM entry, auth.url.com, that points to Authentik, which some apps need instead of the internal address. It took some playing around to get it right, but once you do, it’s essentially copy and paste for new services.