You just put both in the server_name
line and you’re good to go.
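For example, a minimal sketch assuming nginx (hypothetical domains and cert paths):

```nginx
server {
    listen 443 ssl;
    # both hostnames on one line; nginx matches the request's Host header against these
    server_name example.com www.example.com;

    ssl_certificate     /etc/ssl/example.pem;
    ssl_certificate_key /etc/ssl/example.key;
}
```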
With Docker, the internal network is just a bridge interface. The reason most firewall rules don’t apply is a combination of Docker managing its own iptables chains and container traffic going through the FORWARD chain rather than INPUT, where typical host rules live.
The only thing that should be directly affected by the host firewall is the userland proxy (docker-proxy) Docker uses to listen on a host port and forward it to the container.
When using Docker, each container acts like an independent machine, and your host gets configured to act as a router. You can firewall Docker containers; the rules just need to be in the right place to work.
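For example, rules for container traffic belong in the DOCKER-USER chain, which Docker evaluates before its own forwarding rules; a sketch with a hypothetical subnet:

```sh
# Published container ports are reached through FORWARD (after DNAT),
# not INPUT, so a typical INPUT rule never sees this traffic.
# Docker reserves the DOCKER-USER chain for your own rules:
iptables -I DOCKER-USER -s 203.0.113.0/24 -j DROP
```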
It’s sitting at around 46GB at the moment, not too bad.
Instance is a year and a few months old, so I could probably trim the storage down a bit if needed by purging anything older than 6 months or so.
I think it initially grows as your users table fills up and pictrs caches the profile pictures, and then it stabilizes a bit. I definitely saw much more growth initially.
I subscribe to a few more communities and my DB dump is about 3GB plain text, but same story, box sits at 5-15% most of the time.
You mean you’re not actually supposed to spend 2 hours daily unfucking everyone’s shit during the standup turn by turn?
The author was bullied by Nintendo into voluntarily removing the repos; they weren’t DMCA’d.
GitHub had nothing to do with this one. And just like with Yuzu, plenty of people have uploaded copies of the repo already, thanks to git’s decentralized nature where everyone has a full copy of the entire history.
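Anyone with a clone can republish it, along the lines of (hypothetical remote):

```sh
# a clone already contains every commit, branch, and tag,
# so resurrecting the repo is just pushing it all to a new remote
git remote add mirror git@example.com:someone/repo-mirror.git
git push --mirror mirror
```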
Having the web server be able to overwrite its own app code is such a good feature for security. Very safe. Only need a path traversal exploit to backdoor config.php!
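The usual hardening is the exact opposite: app code the web server can read but not write. A rough sketch, assuming a typical Linux layout and a www-data service user:

```sh
# code owned by root, read-only to the web server;
# only directories that genuinely need writes stay writable
chown -R root:www-data /var/www/app
find /var/www/app -type d -exec chmod 750 {} +
find /var/www/app -type f -exec chmod 640 {} +
chown -R www-data:www-data /var/www/app/data   # hypothetical writable data dir
```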
The official open-source definition expects more freedoms than just being able to see the source: the whole point of having the source isn’t transparency, it’s freedom. Freedom to fork and modify. Freedom to adapt the code to fix it and make it work for your use case, and to share those modifications.
This doesn’t let you modify the code or share your modifications at all.
> nothing that anybody outside of people selling dodgy romsets online are going to need to worry about
And Linux distro maintainers, Flatpak, libretro, and a lot of other projects that rely on repackaging or integrating the code into a bigger project.
Even NVIDIA has a more flexible license that at least lets distros bundle it in the repositories.
Yep, and I’d guess there’s probably a huge component of “it must be as easy as possible” because the primary target is selfhosters that don’t really even want to learn how to set up Docker containers properly.
The AIO Docker image is an abomination. The other ones are slightly more sane but they still fundamentally mix code and data in the same folder so it’s not trivial to just replace the app.
In Docker, the auto-updater should be completely neutered; having the app overwrite itself inside the container is the wrong way to update it.
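In container land, updating should mean replacing the image, not the app rewriting itself. Roughly (tag and paths are assumptions, and notably not how the official image lays things out):

```yaml
services:
  nextcloud:
    image: nextcloud:29-apache      # bump this tag, then `docker compose up -d`
    volumes:
      - ./data:/var/www/html/data   # persistent data lives outside the image
```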
The packages in the Arch repo are legit saner than the Docker version.
I’ve heard very good things about resold HGST helium enterprise drives, which can be found fairly cheap for what they are on eBay.
> I’m looking for something from 4TB upwards. I think I remember that drives with very high capacity are more likely to fail sooner - is that correct?
4TB isn’t even close to “very high capacity” these days; there are 32TB HDDs out there. Just avoid the shingled (SMR) archival drives. I believe the concern about higher-capacity drives is really about the maturity of the technology rather than the capacity itself: 4TB drives made today are much better than the very first 4TB drives from back when they were pushing the limits of the technology.
Backblaze has pretty good drive reviews as well, with real world failure rate data and all.
> Ethernet splitter
What kind of splitter? Not a hub or switch, just a passive splitter?
Those do exist: they run two 100M links over a single cable (two pairs each), but you can’t just plug one into a router or switch and get extra ports. Each split link still has to terminate as its own port on both ends.
If you’re behind Cloudflare, don’t. Just get an origin certificate from CF; it’s a cert that CF trusts between itself and your server. By using Cloudflare you’re making Cloudflare responsible for your cert.
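The origin cert then gets installed like any other cert; e.g. in nginx, with hypothetical paths:

```nginx
ssl_certificate     /etc/ssl/cloudflare-origin.pem;
ssl_certificate_key /etc/ssl/cloudflare-origin.key;
```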
There’s also Cockpit if you just want a basic UI
I believe you, but I also very much believe that there are security vendors out there demonizing LE and free stuff in general. The “more expensive = better/more serious” thinking is unfortunately still quite present, especially in big corps. Big corps also seem to like the concept of having to prove yourself with a high price of entry; they just can’t believe a tiny company could possibly have a better product.
That doesn’t make it any less ridiculous, but I believe it. I’ve definitely heard my share of “we must use $sketchyVendor because $dubiousReason”. I’ve had to install ClamAV on readonly diskless VMs at work because otherwise customers refuse to sign because “we have no security systems”. Everything has to be TLS encrypted, even if it goes to localhost. Box checkers vs common sense.
LetsEncrypt certs are DV certs. Whether you put up a TXT record for LetsEncrypt or a TXT record for a paid DigiCert cert makes no difference whatsoever.
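Either way, DV boils down to the same kind of record; a DNS-01 challenge is just a TXT record like this (hypothetical domain and token):

```
_acme-challenge.example.com.  300  IN  TXT  "gfj9Xq...Rg85nM"
```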
I just checked, and Shopify uses a LetsEncrypt cert, so that’s a big one that uses the plebeian certs.
Neither does Google Trust Services or DigiCert. They’re all HTTP validation on Cloudflare and we have Fortune 100 companies served with LetsEncrypt certs.
I haven’t seen an EV cert in years, browsers stopped caring ages ago. It’s all been domain validated.
LetsEncrypt publicly logs which IP requested a certificate, that’s a lot more than what regular CAs do.
I guess one more to the pile of why everyone hates Zscaler.
That’s more the general DevOps/server admin learning curve being steep than anything specific to Vaultwarden, to be fair.
It looks a bit complicated at first, as Docker isn’t a trivial abstraction, but it’s well worth it once it’s all set up and going. Each container is always the same, and always independent. Vaultwarden per se isn’t too bad to run without a container, but the same Docker setup can be used for, say, Jitsi, which is an absolute mess of components to install and make work together, some Java stuff, and all. But with Docker? Just `docker compose up -d`, wait a minute or two, and it’s good to go; you just need to point your reverse proxy to it.
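For reference, a minimal sketch of what that looks like for Vaultwarden (port and paths are assumptions):

```yaml
services:
  vaultwarden:
    image: vaultwarden/server:latest
    volumes:
      - ./vw-data:/data              # all persistent state in one folder
    ports:
      - "127.0.0.1:8080:80"          # only reachable locally; the reverse proxy fronts it
    restart: unless-stopped
```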
Why do you need a reverse proxy? Because it’s a centralized location where everything comes in: instead of having 10 different apps with their own certificates and ports, you have one proxy, one port, and a handful of certificates all managed together, so you don’t have to figure out how to make all those apps play together nicely. Caddy is fine; you don’t need NGINX if you use Caddy. There’s also Traefik, which lands in between Caddy and NGINX in ease of use, and there’s HAProxy. They all do the same fundamental thing: traffic comes in as HTTPS, the proxy reads the Host header from the request and sends it to the right container as plain HTTP. It doesn’t have to work that way specifically, but that’s the most common setup in self-hosting.
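A minimal Caddyfile sketch (hypothetical hostnames; Caddy fetches and renews the certificates on its own):

```
vault.example.com {
    reverse_proxy localhost:8080    # the Vaultwarden container's published port
}

cloud.example.com {
    reverse_proxy localhost:8081    # another hypothetical app
}
```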
As for your backups, if you used a Docker compose file, the volume data should be in the same directory. But the app is probably using some sort of database, so you might want to look into periodic data exports instead: databases don’t like to be backed up live, since the files are constantly being written to and a plain copy won’t give you a consistent snapshot.
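A rough sketch, assuming a SQLite-backed app (Vaultwarden’s default) and sqlite3 installed on the host:

```sh
# .backup takes a consistent snapshot even while the app is running,
# unlike a plain copy of a live database file
mkdir -p ./vw-backups
sqlite3 ./vw-data/db.sqlite3 ".backup ./vw-backups/db-$(date +%F).sqlite3"
```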
But yeah, try to think of it as an infrastructure investment that makes deploying more apps in the future a breeze. Want to add a NextCloud? Add another docker compose file and start it, Caddy picks it up automagically and boom, it’s live and good to go!
Moving services to a new server is pretty easy as well. Copy over your configs, composes, and volumes if applicable, start them all, and they should come back up in exactly the same state they were in on the other box. No services to install and configure, no repos to add, no distro to maintain: it’s all built into the container by someone else so you don’t have to worry about any of it. Each update of the app brings with it the whole matching updated OS, with the right packages in the right versions.
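Something along these lines, assuming everything lives under one directory (hypothetical paths and hostname):

```sh
cd /srv/stacks/vaultwarden && docker compose down   # stop writes during the copy
rsync -a /srv/stacks/ newbox:/srv/stacks/
ssh newbox 'cd /srv/stacks/vaultwarden && docker compose up -d'
```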
As a DevOps engineer, I love the whole thing because I can have a Kubernetes cluster running on a whole rack, tell it “here’s the apps I want you to run”, and it just figures itself out: it automatically balances the load, and if a server goes down, the containers respawn on another one and keep going as if nothing happened. We don’t have to manually log into any of those servers to install services to run an app. More upfront work for minimal work afterwards.
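That “here’s the apps I want you to run” is literally a declarative manifest; a minimal hypothetical Deployment looks roughly like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                  # the cluster keeps 3 copies running, wherever it can
  selector:
    matchLabels: {app: myapp}
  template:
    metadata:
      labels: {app: myapp}
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.0   # hypothetical image
```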
Yeah, that didn’t stop it from pwning a good chunk of the Internet: https://en.wikipedia.org/wiki/Log4Shell
I think it’s not so much that we expect everyone to self-host, but that it’s possible at all, so multiple companies can compete without having to start from scratch.
Sure, there will be hobbyists that do it, but just on Lemmy, users already have the freedom of going with lemmy.ml, lemmy.world, SJW, lemm.ee and plenty more.
It’s about spreading the risk and having alternatives to run to.