• 1 Post
  • 187 Comments
Joined 1 year ago
Cake day: June 19th, 2023

  • The number of confidently incorrect responses is exactly what one could expect from Lemmy.

    First: TCP and UDP can listen on the same port; DNS is a great example of this. You’d generally want both listeners to be part of the same process, since a port is generally bound by a single process, but more on this later.
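
    As a quick illustration (a minimal Python sketch; port 5353 is made up for the example, a real DNS server would use 53), the same port number can be bound for TCP and UDP at the same time because they are separate sockets:

    ```python
    # Minimal sketch: a TCP listener and a UDP listener sharing one port number.
    import socket
    import threading

    PORT = 5353  # hypothetical port for illustration

    def tcp_listener():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.bind(("0.0.0.0", PORT))
            s.listen()
            conn, addr = s.accept()
            print("TCP connection from", addr)
            conn.close()

    def udp_listener():
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.bind(("0.0.0.0", PORT))  # same port number, different protocol
            _, addr = s.recvfrom(512)
            print("UDP datagram from", addr)

    threading.Thread(target=tcp_listener, daemon=True).start()
    threading.Thread(target=udp_listener, daemon=True).start()
    input(f"Both listeners bound to port {PORT}; press Enter to exit.\n")
    ```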

    Second: the Minecraft server and the website are both using TCP. TCP is part of layer 4, transport; whereas HTTP(S) and Minecraft are part of layer 7, application. If you really want to, you could cram HTTP(S) over UDP (technically, QUIC/HTTP3 does this), and if you absolutely want to, with updates to the protocol itself and some server and client edits, you could cram Minecraft over UDP, too. People need to brush up on their OSI layers before making bold claims.
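
    To make the layering concrete, here’s a tiny sketch that speaks HTTP/1.1 (layer 7) directly over a plain TCP socket (layer 4); example.com is used purely as a placeholder target:

    ```python
    # Minimal sketch: HTTP is just bytes over a TCP connection.
    # The transport layer neither knows nor cares that the payload is HTTP.
    import socket

    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(
            b"GET / HTTP/1.1\r\n"
            b"Host: example.com\r\n"
            b"Connection: close\r\n\r\n"
        )
        response = b""
        while chunk := sock.recv(4096):
            response += chunk

    print(response.split(b"\r\n")[0].decode())  # e.g. "HTTP/1.1 200 OK"
    ```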

    Third: the web server and the Minecraft server are not running on the same machine. For something at that scale, each service is served from a cluster focused only on what it’s serving.

    Finally: Hypixel uses a reverse proxy to sit between the user and their actual servers. Specifically, they are most likely using Cloudflare Spectrum to proxy their traffic. A user request reaches a point of presence, a reverse proxy service is listening on the applicable ports (443/25565) and protocols (HTTPS/Minecraft), and then, depending on the traffic type and rules, the request gets routed to the actual server behind the scenes. There is speculation that they no longer use Cloudflare, but I don’t believe that’s the case. If you dig their mc.hypixel.net domain, you get a bunch of directly assigned IP addresses, but if you trace the route from multiple locations, you all end up going through Cloudflare infrastructure. It is highly likely that they’re still leaning on Cloudflare for this service, with a BYOIP arrangement to reduce the risk of DDoS traffic aimed at them overflowing onto other customers.

    In no uncertain terms:

    1. Hypixel.net has Cloudflare DNS for their domain.
    2. For their website, it has orange cloud enabled to proxy traffic through CF’s global CDN and DDOS protection service.
    3. For their Minecraft server, they advertise mc.hypixel.net, but they also have an SRV record for _minecraft._tcp.hypixel.net pointing to port 25565 on mc.hypixel.net.
    4. The mc.hypixel.net domain has a CNAME record for mt.mc.production.hypixel.io., which is flattened to a bunch of their own directly assigned IP addresses (see the sketch after this list).
    5. Traceroutes towards those directly assigned IP addresses go through Cloudflare infrastructure, but the final destination is obscured, just like their website’s, to protect them from DDoS attacks.
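
    If you want to verify points 3 and 4 yourself, here’s a small sketch using the third-party dnspython package (assuming you have it installed; the answers will be whatever Hypixel happens to publish when you run it):

    ```python
    # Sketch: look up the SRV record, then resolve the advertised name,
    # mirroring what a Minecraft client does before connecting.
    # Requires `pip install dnspython`.
    import dns.resolver

    # The SRV record tells the client which host/port actually serves Minecraft.
    for srv in dns.resolver.resolve("_minecraft._tcp.hypixel.net", "SRV"):
        print(f"SRV -> {srv.target} port {srv.port}")

    # The advertised name then resolves (via CNAME flattening) to the real IPs.
    for answer in dns.resolver.resolve("mc.hypixel.net", "A"):
        print(f"A   -> {answer.address}")
    ```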

  • Using Ollama to try a couple of models right now for an idea. I’ve tried running Llama 3.2 and Qwen 2.5 3b, both of which fit in my 3050’s 6G of VRAM. I’ve also tried, for fun, Qwen 2.5 32b, which fits in my RAM (I’ve got 128G), but it could only reply at a couple of tokens per second, making it very much a non-interactive experience. Will need to explore the response-time piece a bit further to see if there are ways I can still lean on larger models with longer delays.
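
    For the response-time piece, this is roughly how I’d measure tokens per second against Ollama’s local REST API (assuming the default localhost:11434 endpoint; the model name and prompt are just examples):

    ```python
    # Rough sketch: ask Ollama for a non-streamed completion and compute
    # tokens/sec from the eval counters it returns in the response.
    import json
    import urllib.request

    payload = {
        "model": "qwen2.5:3b",   # any locally pulled model
        "prompt": "Explain DNS SRV records in one paragraph.",
        "stream": False,
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)

    # eval_count is tokens generated; eval_duration is in nanoseconds.
    tokens_per_sec = result["eval_count"] / (result["eval_duration"] / 1e9)
    print(f"{tokens_per_sec:.1f} tokens/sec")
    ```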



  • It is easier to think of SSL termination in terms of legs.

    1. Client to Cloudflare; if you’re behind orange cloud, you get this for free. Don’t turn orange cloud off unless you want direct exposure.
    2. Cloudflare to your server; use their origin cert, which is the easiest and secure. You can even get one made specifically for your subdomains, or as a wildcard of your subdomain. Unless you have specific compliance needs, you shouldn’t need to turn this off, and you don’t need to roll your own cert.
    3. Your reverse proxy to your apps; honestly, it’s already on your machine. You can use a self-signed cert if it really bothers you, but at the end of the day it’s probably not worth the hassle.

    If, however, you want to directly expose your service without orange cloud (running a game server on the same subdomain for example), then you’d disable the orange cloud and do Let’s Encrypt or deploy your own certificate on your reverse proxy.
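
    If you ever want to sanity-check who is terminating TLS on a given leg, here’s a quick sketch that prints the issuer of the certificate a host presents on 443 (the hostname is a placeholder; run it against your public hostname and against your origin to compare):

    ```python
    # Sketch: report who issued the certificate presented on a host's port 443.
    # Against an orange-clouded hostname you'd typically see a Cloudflare edge cert.
    import socket
    import ssl

    def cert_issuer(host: str, port: int = 443) -> str:
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                issuer = dict(pair[0] for pair in tls.getpeercert()["issuer"])
        return issuer.get("organizationName", "unknown")

    print(cert_issuer("example.com"))  # placeholder hostname
    ```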



  • In the old days, it used to be a problem because everyone just connected their Windows 98 desktop, with all their services directly exposed, to the internet, because they were on dial-up internet without the concept of a gateway that prevents the internet from accessing internal resources. Nowadays, you’re most likely behind your ISP’s router, which doesn’t forward ports by default, and you’re only exposing the things you actually want to expose.

    For things you actually want to expose, having the service on the default port is fine, and it reduces the chance that other systems interacting with it fail because they expect it on the default port. Moving them to a different port is just security through obscurity, and honestly doesn’t add much value. You can port scan the entire public IPv4 space fairly quickly and fairly cheaply. In fact, it is most likely that it’s already been mapped:

    https://www.shodan.io/host/<your-ip-here>
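
    To get a feel for how trivial that scanning is, here’s a toy sketch checking a handful of well-known ports on one host (the target is a placeholder; only scan machines you own):

    ```python
    # Toy sketch of a TCP connect scan; internet-wide scanners do essentially
    # this across the whole IPv4 space, massively parallelised.
    import socket

    TARGET = "192.0.2.1"             # placeholder address, substitute your own host
    PORTS = [22, 80, 443, 25565]     # a few common service / game ports

    for port in PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            state = "open" if s.connect_ex((TARGET, port)) == 0 else "closed/filtered"
            print(f"{TARGET}:{port} {state}")
    ```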

    Keeping the service up to date and applying best practices around it would be much more important and beneficial. For SSH, make sure you’re using key-based authentication and have password-based authentication disabled; add fail2ban to automatically ban those trying to brute force. For Minecraft, use online mode and whitelist-only access unless you’re running a public server for everyone.
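
    For the SSH part, a small sketch of the kind of check I mean: reading the local sshd_config for the two directives that matter (the path and directive names are the stock OpenSSH ones; adjust if your distro splits config into sshd_config.d):

    ```python
    # Sketch: warn if sshd still allows password logins or has pubkey auth disabled.
    from pathlib import Path

    wanted = {"passwordauthentication": "no", "pubkeyauthentication": "yes"}
    found = {}

    for line in Path("/etc/ssh/sshd_config").read_text().splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        if not line:
            continue
        key, _, value = line.partition(" ")
        if key.lower() in wanted:
            found[key.lower()] = value.strip().lower()

    for key, expected in wanted.items():
        actual = found.get(key, "<unset, using default>")
        status = "OK" if actual == expected else "CHECK"
        print(f"[{status}] {key} = {actual} (want {expected})")
    ```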


  • I’m not saying you’re wrong — I’ve even upvoted your earlier comments because I’m generally in agreement; you’re an instance admin judging by your handle, go and check the vote history yourself lol.

    I’m saying people shouldn’t force their janky, unproven solo solution onto someone else who doesn’t share their level of distrust and would rather just trust the multibillion multinational corporation, when all they want is something that’s been working fine for them.


  • There’s always the “add more of everything so something can fail without impacting stability” aspect, and that’s great for a corporation needing the redundancy; but it’s probably prudent not to forget there’s also the “I’m interested in learning” aspect, where people run a home server to play with the software side of things.

    You’re spot on in that we’d need to know what it is that OP would like to do with the system, but I’m getting the feeling that stability isn’t that high of a concern just yet.


  • Until the basement floods and the server goes offline for a few days; or a botched upgrade fails quietly; or an overzealous SpamAssassin configuration kicks in; etc., etc.

    It sounded like they were trying to archive things from Gmail to their own server, so just cut the middleman jank out, and let the wife continue to use her Gmail as intended.






  • I’m aware this is the selfhost community, but for a company of 20 engineers, it is probably best to use something commercial in the cloud.

    The biggest pain point was for our ops guy, who constantly had to stay behind to perform upgrades and maintenance, as they couldn’t do it during business hours while the engineers were working. With a team of at least 20, scheduling downtime can get increasingly difficult.

    It also adds an entire system to be audited by the auditors.

    The self-host vs. buy-commercial decision kind of bounces back and forth. For smaller teams, fewer than 5 to 10 engineers, it might be a fun endeavour; but from that point on, until you get to mega-corp scale with a dedicated ops department maintaining your entire infrastructure, it is probably more effective to just pay for a solution from a major vendor in the cloud instead.






  • I think it would be a good idea to take a step back and ask what it is that you’re trying to achieve.

    Userbase, the service linked, is a backend-as-a-service platform that offers you authentication and a basic database that you can access via their API. You’d then code your own front-end web app to interact with their service and store data there. You pay only for the storage used, per their storage tiers, which are frankly fairly priced. If that is what you need, it’s a good option, but you’d be coding the front end yourself.

    If you’re only looking for authentication with OAuth, and then coding your own API backend, then something like Authentik would be a nice self-hosted authentication provider. Others that commonly get mentioned, but that I’ve got limited/no experience with, would be Keycloak or FusionAuth. Managed services here would be your Auth0, Okta, etc.
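
    For the “just put OAuth in front of my own API backend” route, here’s a bare-bones sketch of the second half of the authorization-code flow, where your backend exchanges the returned code for tokens (the issuer URL, client ID/secret, and redirect URI are all hypothetical placeholders; a provider like Authentik or a managed service would give you the real endpoints):

    ```python
    # Bare-bones OAuth2/OIDC authorization-code exchange. After the user signs in
    # at the provider and comes back with ?code=..., the backend trades it for tokens.
    # All endpoint and credential values below are placeholders.
    import json
    import urllib.parse
    import urllib.request

    TOKEN_URL = "https://auth.example.com/oauth/token"       # hypothetical issuer
    CLIENT_ID = "my-app"
    CLIENT_SECRET = "change-me"
    REDIRECT_URI = "https://app.example.com/callback"

    def exchange_code(code: str) -> dict:
        data = urllib.parse.urlencode({
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": REDIRECT_URI,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
        }).encode()
        with urllib.request.urlopen(TOKEN_URL, data=data) as resp:
            return json.load(resp)   # access_token, id_token, refresh_token, ...

    # tokens = exchange_code(code_from_callback)  # call inside your callback handler
    ```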

    If you’ve got a specific use case in mind, then it may be a good idea to say what service you’re thinking about, and the community may be able to suggest prebuilt solutions that fit better and require less lift.


  • Strictly speaking, they’re leveraging free users to increase the number of domains they have under their DNS service. This gives them a larger end-user reach, as it in turn makes ISPs hit their DNS servers more frequently. The increased usage better positions them to lead peering agreement discussions with ISPs. More peering agreements lead to overall cheaper bandwidth for their CDN and faster responses, which they can use as a selling point for their enterprise clients. The benefits are pretty universal, so it’s actually a good thing for everyone all around… that is, unless you’re trying to become a competitor and get your own peering agreements set up, as it’d be quite a bit harder for you to acquire customers at the same scale/pace.