• 3 Posts
  • 25 Comments
Joined 1 year ago
Cake day: July 28th, 2023


  • I wonder what the performance impact would be if you were to move pgsql onto bare metal with enough RAM dedicated to caching all of the DB data (think: an i5 or i7 NUC). That’s going to be my next step with my homelab; I want to migrate everything to a single DB host with a lot of RAM and M.2 storage and get rid of the duplicated DB processes I have going on. I have no performance complaints with NC currently; I’m running PHP caching and Redis as well as image previews and Imaginary.
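
    For a host like that, the tuning is mostly a matter of giving Postgres room to keep the whole dataset hot. Something like this is roughly what I have in mind (postgresql.conf values are purely illustrative, assuming ~32 GB of RAM and M.2/NVMe storage):

      # postgresql.conf - illustrative values for a dedicated ~32 GB host
      shared_buffers = 8GB            # Postgres' own buffer cache
      effective_cache_size = 24GB     # planner hint: roughly what the OS page cache can hold
      work_mem = 64MB                 # per sort/hash operation
      maintenance_work_mem = 1GB      # vacuums, index builds
      random_page_cost = 1.1          # random reads are cheap on M.2/NVMe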


  • You absolutely need to move from patch to patch and cannot just do a multi-version jump safely. You also need to validate the configs between versions, especially for major release updates, or you risk breaking things. New features and optimizations happen, and you may also need to change or update your reverse proxy configuration on update, or modify DB table configuration (just pulling this from memory, as I’ve had to do it before). I don’t know that there’s automation for each one of those steps.

    Because of that, I run Nextcloud in a VM and install it from the release package. I wrote a shell script that handles downloading, moving the files, updating permissions, copying the old config forward, symlinking, and doing the upgrade. Then all I have to do is log in as administrator, check the admin dashboard, and make sure there aren’t new things I have to address on the status page. It’s a pain, but my Nextcloud uses an external DB plus Redis and PHP caching, so it’s not an easy out-of-the-box setup. But it’s been solid for a long time once I adopted this script.
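
    A rough outline of what that script does (the paths, user, and version argument here are placeholders rather than my exact setup):

      #!/usr/bin/env bash
      set -euo pipefail

      VERSION="$1"                              # e.g. 29.0.4
      WEBROOT=/var/www
      OLD=$(readlink -f "$WEBROOT/nextcloud")   # currently symlinked install
      NEW="$WEBROOT/nextcloud-$VERSION"

      # download and unpack the release tarball
      curl -fLO "https://download.nextcloud.com/server/releases/nextcloud-$VERSION.tar.bz2"
      tar -xjf "nextcloud-$VERSION.tar.bz2"
      mv nextcloud "$NEW"

      # carry the old config forward
      cp "$OLD/config/config.php" "$NEW/config/"

      # fix ownership, flip the symlink, run the upgrader
      chown -R www-data:www-data "$NEW"
      ln -sfn "$NEW" "$WEBROOT/nextcloud"
      sudo -u www-data php "$WEBROOT/nextcloud/occ" upgrade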


  • If a service was serving the webfinger, it could guess which account needed to be returned based on the requester’s user agent. If the UA was Mastodon, it could return the Mastodon link rel; if Pixelfed, then return that link rel; and so on.
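
    For reference, what each app expects back from /.well-known/webfinger is just a small JRD document (RFC 7033); the href below is made up, but it shows the shape a Mastodon-style client is looking for:

      {
        "subject": "acct:user@domain.tld",
        "links": [
          {
            "rel": "self",
            "type": "application/activity+json",
            "href": "https://mastodon.domain.tld/users/user"
          }
        ]
      }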

    It might be possible to rig it with some more complex conditional logic and regex in nginx as a band-aid. AFAICT, the webfinger spec doesn’t really allow for this, which, if true, was pretty short-sighted.
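
    As a concrete version of that band-aid, I’m picturing something along these lines in nginx (the ports and UA match strings are guesses and would need checking against real traffic):

      # http{} context: pick a webfinger backend based on the requesting user agent
      map $http_user_agent $webfinger_upstream {
          default          http://127.0.0.1:3000;   # assume Mastodon as the fallback
          "~*pixelfed"     http://127.0.0.1:8080;   # requests from Pixelfed's fetcher
      }

      server {
          listen 443 ssl;
          server_name domain.tld;

          location = /.well-known/webfinger {
              proxy_pass $webfinger_upstream;
              proxy_set_header Host $host;
          }
      }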

    I haven’t considered more in-depth S2S connections. I’ll have to watch the traffic logs, see exactly what is being requested, and check whether all of it can be directed accordingly. I see now that you commented on that issue. Also, to be clear, I’m still running the services on subdomains, but I’m trying to use user@domain.tld as the discovery account.