Just an Aussie tech guy - home automation, ESP gadgets, networking. Also love my camping and 4WDing.

Be a good motherfucker. Peace.

  • 6 Posts
  • 156 Comments
Joined 1 year ago
Cake day: June 15th, 2023

  • Hmmm - interesting. I hadn’t bothered to check before now, but I’m seeing something similar on one of the two PBS CTs I run.

    Comparing the output of netstat -lantop on both CTs, I can see that the one with more outbound traffic has more waiting connections from localhost on port 82, the port Proxmox Backup Server provides its API over:

    tcp        0      0 127.0.0.1:51562         127.0.0.1:82            TIME_WAIT   -                    timewait (40.38/0/0)
    tcp        0      0 127.0.0.1:56342         127.0.0.1:82            TIME_WAIT   -                    timewait (29.92/0/0)
    tcp        0      0 127.0.0.1:44864         127.0.0.1:82            TIME_WAIT   -                    timewait (58.94/0/0)
    tcp        0      0 127.0.0.1:45028         127.0.0.1:82            TIME_WAIT   -                    timewait (11.88/0/0)
    tcp        0      0 127.0.0.1:44026         127.0.0.1:82            TIME_WAIT   -                    timewait (48.66/0/0)
    tcp        0      0 127.0.0.1:44852         127.0.0.1:82            TIME_WAIT   -                    timewait (58.80/0/0)
    tcp        0      0 127.0.0.1:59620         127.0.0.1:82            TIME_WAIT   -                    timewait (0.00/0/0)
    tcp        0      0 127.0.0.1:56374         127.0.0.1:82            TIME_WAIT   -                    timewait (30.98/0/0)
    tcp        0      0 127.0.0.1:51544         127.0.0.1:82            TIME_WAIT   -                    timewait (39.98/0/0)
    tcp        0      0 127.0.0.1:59642         127.0.0.1:82            TIME_WAIT   -                    timewait (0.00/0/0)
    tcp        0      0 127.0.0.1:45008         127.0.0.1:82            TIME_WAIT   -                    timewait (10.92/0/0)
    tcp        0      0 127.0.0.1:45016         127.0.0.1:82            TIME_WAIT   -                    timewait (11.76/0/0)
    

    I’m wondering if the graph is pulling aggregated network data, including the loopback interface. If so, and it’s all just port 82 stuff on 127.0.0.1, then it’s probably nothing to worry about.

    Edit: found this forum post that seems to indicate it’s aggregating all the byte values from /proc/net/dev, so this is probably nothing to worry about if your netstat output, like mine, only shows API connections to/from 127.0.0.1 on port 82.
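
    If you want to sanity-check that on your own CT, something like the one-liner below will print the per-interface byte counters that graph would be summing (assuming the usual /proc/net/dev column layout). If lo is the interface racking up the bytes, it’s just that local API chatter:

    # Print receive/transmit byte counters per interface
    # (the first two lines of /proc/net/dev are headers, so skip them)
    awk 'NR > 2 { gsub(":", "", $1); printf "%-10s rx_bytes=%s tx_bytes=%s\n", $1, $2, $10 }' /proc/net/dev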









  • It all depends on how you want to homelab.

    I was into low-power homelabbing for a while - half a dozen Raspberry Pis - and it was great. But I’m an incessant tinkerer. I like to experiment with new tech all the time, and am always cloning various repos to try out new stuff. I was reaching a limit with how much I could achieve with Docker alone, and I really wanted to virtualise my firewall/router. There were other drivers too. I wanted to cut the streaming cord, and saving that monthly spend helped justify what came next.

    I bought a pair of ex-enterprise servers (HP DL360s) and jumped into Proxmox. I now have an OPNsense VM for my firewall/router, and host over 40 Proxmox CTs, running (at a guess) around 60-70 different services across them.

    I love it, because Proxmox gives me full separation of each service. Each one has its own CT. Think of that as me running dozens of Raspberry Pis, without the headache of managing all that hardware. On top of that, Docker gives me complete portability and recoverability. I can move services around quite easily, and can update/rollback with ease.

    Finally, the combination of the two gives me a huge advantage over bare metal for rapid prototyping.

    Let’s say there’s a new contender that competes with Immich. They offer the promise of a really cool feature no one else has thought of in a self-hosted personal photo library. I have Immich hosted on a CT, using Docker, and hiding behind Nginx Proxy Manager (also on a CT), accessible via photos.domain on my home network.

    I can spin up a Proxmox CT from my custom Debian template, use my Ansible playbook to provision Docker and all the other bits, access it in Portainer and spin up the latest and greatest Immich competitor, all within mere minutes. Like, literally 10 minutes max.
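
    To make that concrete, the spin-up step is roughly the below (container ID, storage, template name, playbook and compose paths are just placeholders standing in for my own setup - adjust to taste):

    # New CT from the custom Debian template, then start it
    pct create 190 local:vztmpl/debian-12-custom.tar.zst --storage local-lvm \
        --hostname photos-test --memory 2048 --net0 name=eth0,bridge=vmbr0,ip=dhcp
    pct start 190
    # Provision Docker and the usual bits, then bring the new stack up
    ansible-playbook -i inventory/homelab.yml provision-docker.yml --limit photos-test
    pct exec 190 -- docker compose -f /opt/newapp/docker-compose.yml up -d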

    I have a play with the competitor for a bit. If I don’t like it, I just delete the CT and move on. If I do, I can point my photos.domain hostname (via Nginx Proxy Manager) to the new service and start using it full-time. Importantly, I can still keep my original Immich CT in place - maybe shut down, maybe not - just in case I discover something I don’t like about the new kid on the block.

    That’s a simplified example, but hopefully illustrates at least what I get out of using Proxmox the way I do.

    The con for me is the cost - the initial cost of the hardware, and the cost of powering beefier kit like this. I’m about to invest in some decent centralised storage (been surviving with a couple of li’l ARM-based NASes) so I can get true HA with my OPNsense firewall (and a few other services), so that’s more cost again.




    It doesn’t have to be hard - you just need to think methodically through each of your services and weigh the cost of creating/storing the backups you want against the cost (in time, effort, inconvenience, etc.) of rebuilding that service from scratch.

    For me, that means my photo and video library (currently Immich) and my digital records (Paperless) are backed up using a 2N+C strategy: a copy on each of 2 NASes locally, and another copy stored in the cloud.
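
    In practice that’s nothing fancier than a nightly job along these lines (paths, NAS hostnames and the rclone remote are made-up examples):

    # Copy the Immich library to both NASes, then push a copy to the cloud
    rsync -a --delete /srv/immich/library/ nas1:/backups/immich/
    rsync -a --delete /srv/immich/library/ nas2:/backups/immich/
    rclone sync /srv/immich/library cloud-remote:immich-backup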

    Ditto for backups of my important homelab data. I have some important services (like Home Assistant, Node-RED, etc.) that push their configs into a personal GitLab instance each time there’s a change. So I simply back that GitLab instance up using the same strategy. It’s mainly raw text files and a small database of git metadata, so it all compresses really nicely.

    For other services/data that I’m less attached to, I only back up the metadata.

    Say, for example, I’m hosting a media library that might replace my personal use of services that rhyme with “GetDicks” and “Slime Video”. I won’t necessarily back up the media files themselves - that would take way more space than I’m prepared to pay for. But I do back up the databases for that service, which tell me what media files I had, and even the exact names of the media files when I “found” them.
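
    If I were doing that, the job would look something like this - grab the app’s config and database, skip the media mount entirely (paths and filenames invented for the example):

    # Back up the app's config directory, but not the media library it points at
    rsync -a --exclude 'MediaCover/' /srv/mediamanager/config/ nas1:/backups/mediamanager/
    # Snapshot the SQLite database safely while the app is still running
    sqlite3 /srv/mediamanager/config/app.db ".backup '/srv/backups/mediamanager-app.db'"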

    In a total loss of all local data, the inconvenience factor of rebuilding would be quite high, but the cost of storing full backups would far outweigh it. Using the metadata I do back up, I could theoretically just set about rebuilding the media library from there. If I were hosting something like that, that is…






  • This may take us down a bit of a rabbit hole but, generally speaking, it comes down to how you route traffic.

    My firewall has an always-on VPN connected to Mullvad. When certain servers (that I specify) connect to the outside, I use routing rules to ensure those connections go via the VPN tunnel. Those routes are only for connectivity to outside (non-LAN) addresses.

    At the same time, I host a server inside that accepts incoming WireGuard client VPN connections. Once I’m connected (with my phone) to that server, my phone appears as an internal client. So the routing rules for Mullvad don’t apply - the servers are simply responding back to a LAN address.
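
    OPNsense handles all of that through gateways and firewall rules in the GUI, but the same idea expressed as plain Linux policy routing looks roughly like this (addresses, table number and interface name are just examples):

    # Destinations on the LAN keep using the normal routing table (higher-priority rule)
    ip rule add to 192.168.0.0/16 lookup main priority 100
    # Everything else from the chosen server gets looked up in table 100...
    ip rule add from 192.168.10.50 lookup 100 priority 200
    # ...whose only default route points out the Mullvad WireGuard tunnel
    ip route add default dev wg-mullvad table 100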

    I hope that explains it a bit better - I’m not aware of your level of networking knowledge, so I’m trying not to over-complicate just yet.