• 0 Posts
  • 18 Comments
Joined 1 year ago
Cake day: June 19th, 2023

  • I do vaguely remember something about it getting changed, but yeah, as you said, unless you’re sharing it with a bunch of people it’s probably not enough to trigger anything on their side anyway.

    I think there’s a nice variety of methods out there now and no “one right way” to do it, which I think is great compared to just a few years ago, when your only real options were a reverse tunnel or Cloudflare Tunnels.



  • First, your questions:

    Is the tunnel solution appropriate for jellyfin?

    Yes, but also no. The TL;DR is that it will work, but video streaming is against Cloudflare’s rules. I ran this way for about 2 years with Plex just for my own use, so about 15 hours a week at 480p, and I never got my service suspended, but I’ve heard stories of others who did… so just know it’s a risk.

    I suppose it’s OK for vaultwarden as there isn’t much data being transferred?

    That’s a good use of tunnels

    Would it be better to run nginx proxy manager for everything or can I run both of the solutions?

    You can definitely run both solutions (the tunnel points to NPM, and NPM points to all the other services), and it saves you from setting up a tunnel for each service. A rough sketch of that layout is below.
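    Something like this is roughly what that looks like in a single compose file. It’s a minimal sketch, assuming a token-based tunnel and a shared Docker network; the TUNNEL_TOKEN variable and the volume paths are placeholders:

    ```yaml
    services:
      cloudflared:
        image: cloudflare/cloudflared:latest
        # Token comes from the Cloudflare Zero Trust dashboard when you create the tunnel
        command: tunnel --no-autoupdate run --token ${TUNNEL_TOKEN}
        restart: unless-stopped
        networks:
          - proxy

      npm:
        image: jc21/nginx-proxy-manager:latest
        ports:
          - "81:81"   # admin UI; keep this LAN-only, the tunnel traffic never needs a published port
        volumes:
          - ./npm/data:/data
          - ./npm/letsencrypt:/etc/letsencrypt
        restart: unless-stopped
        networks:
          - proxy

    networks:
      proxy: {}
    ```

    In the tunnel’s public hostname settings you’d point each hostname at http://npm:80 (or https://npm:443), and NPM handles routing to everything behind it from there.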

    Now for my 2 cents

    As others have suggested, Tailscale Funnel is a valid option. A reverse proxy through a VPS is also a valid option. And as I pointed out, a Cloudflare Tunnel is an option if you’re willing to accept the risk.

    My current setup uses a free Oracle VPS with a small nginx Docker container forwarding all port 80 and 443 traffic through a Tailscale tunnel. On the other end is an Nginx Proxy Manager Docker container that points to all my services across the network. I have my Cloudflare details configured in Nginx Proxy Manager to generate a wildcard SSL certificate that I apply to all my local services.

    Inside the network, I use AdGuard to rewrite the domain to the local LAN IP of the Nginx Proxy Manager server, so that traffic doesn’t go out over the internet.

    Then all you need to do is point the domain on Cloudflare DNS to the Oracle server, and you’ll have several layers of separation between the internet and your local LAN, as well as SSL certs both internally and externally on any services you share.

    It might not be the most elegant setup, but I share my Plex server (as well as about 30 other things) with several other people, it handles multiple 1080p streams without any issue, and it’s been nice and stable for over a year.



  • You can do this pretty easily using Asterisk and then just point your VoIP clients to its IP address.

    But…

    Whatever you do, unless you’re an expert with network security, don’t leave it on its default port if you’ll expose it to the internet.

    You’ll have so many bots trying to get in that it’ll effectively DDoS you within a few hours of setting it up. Even on a different port, you’ll still have plenty of bots trying to get in.
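    If you end up running it in Docker, one way to at least move it off the default port is to remap it at the container level. This is only a sketch: there’s no official Asterisk image (the image tag here is a placeholder), and SIP behind NAT still needs the RTP range and external address sorted out in the Asterisk config itself:

    ```yaml
    services:
      asterisk:
        image: asterisk:20                   # placeholder; build your own or pick a community image
        ports:
          - "5070:5060/udp"                  # expose SIP on a non-default external port instead of 5060
          - "10000-10100:10000-10100/udp"    # RTP media range; must match what rtp.conf allows
        volumes:
          - ./asterisk/conf:/etc/asterisk    # your pjsip.conf, extensions.conf, etc.
        restart: unless-stopped
    ```

    Your VoIP clients would then register against the host on port 5070 rather than the default.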

    If you ever see those “unlimited international calls” cards sold in third world countries for like $5-10, those are mostly hacked VoIP systems that have accounts or access to a phone line



  • For a bit of context for those not too familiar with CDN stuff: my web server hosts about 20 small business websites. None are heavy on images, video, or anything else. Most sites get well under 1k visitors a day; some are under 100.

    Each month the Cloudflare CDN saves me between 40 and 60 GB of traffic, which is nothing my server couldn’t handle, but over a year that’s ~600 GB in saved data, so it adds up.

    If you had a Lemmy instance with even just 100 active users, with all the images and videos and all the federated background communications, that would add up extremely quickly.


  • tristan@aussie.zone to Selfhosted@lemmy.world · Spam posts

    It’s a shitty situation that’s causing mods and users alike a lot of frustration, and it might be a while before it’s sorted.

    Unfortunately I think this is something that will need to be dealt with federation-wide before it’s under control… but even then it’ll still add a lot of extra ongoing work for the mods of instances and communities just to clean up anything that gets through.






  • My current setup is 3x Lenovo M920q (soon to be 4), all in a Proxmox cluster, along with a QNAP NAS with 20 GB of RAM and 4x 8 TB drives in RAID 5.

    The specs on the M920q are: i5-8500T, 32 GB RAM, 256 GB SATA SSD, 2 TB NVMe SSD, 1 GbE NIC.

    On each Proxmox machine I have a VM running Docker in swarm mode, and each of those VMs has the same NFS mounts pointing to the NAS (one way to handle that is sketched below).
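    If you’d rather not manage the mounts on every VM by hand, one alternative that gives the same effect is an NFS-backed named volume in the compose/stack file itself. A rough sketch, with the NAS address and export path as placeholders:

    ```yaml
    volumes:
      media:
        driver: local
        driver_opts:
          type: nfs
          o: "addr=192.168.1.50,nfsvers=4,rw"   # NAS address; placeholder
          device: ":/share/media"               # export path on the NAS; placeholder

    services:
      radarr:
        image: lscr.io/linuxserver/radarr:latest
        volumes:
          - media:/data                         # every node running this service mounts the same share
    ```

    Every swarm node that gets the service scheduled onto it will then mount the same share at the same path inside the container.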

    On the NAS I have a normal Docker installation, which runs my databases.

    On the swarm I have over 60 Docker containers, including the *arr services, Overseerr, and two Deluge instances.

    I have no issues with performance or read/write or timeouts.

    As one of the other posters said, point all of your *arr services to the same mount point, as it makes it far easier for the automated stuff to work.

    Put all the *arr services into a single stack (or at least on a single network); that way you can point them at a container name rather than an IP. For example, to tell Overseerr where Sonarr is, you’d just use http://sonarr:8989. It makes life much easier (see the sketch below).
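    A minimal sketch of what that looks like, assuming linuxserver images and a /mnt/media host path (both just placeholders for whatever you actually use):

    ```yaml
    services:
      sonarr:
        image: lscr.io/linuxserver/sonarr:latest
        volumes:
          - /mnt/media:/data            # same mount point as the rest of the stack
        networks:
          - arr

      overseerr:
        image: lscr.io/linuxserver/overseerr:latest
        ports:
          - "5055:5055"                 # Overseerr web UI
        networks:
          - arr

    networks:
      arr: {}
    ```

    Inside Overseerr you’d then enter http://sonarr:8989 as the Sonarr address; Docker’s internal DNS resolves the service name on the shared network, so no IPs needed.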

    As for Proxmox, the biggest thing I’ll say from my experience: if you’re just starting out, make sure you set its IP and hostname to what you want right from the start… it’s a pain in the ass to change them later. So if you’re planning to use VLANs or something, set them up first.

    Pic of my setup