Use something like pgAdmin, DBeaver, or the Postgres CLI (psql) to connect to your Postgres instance. Then run the command from the changelog as a SQL query.
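If you'd rather script it than click through a GUI, something like this works too. This is just a rough Python sketch using psycopg2; the connection details are placeholders and the actual statement is whatever the changelog tells you to run:

```python
# Minimal sketch: run a one-off SQL statement with psycopg2 instead of a GUI client.
# Connection details are placeholders; paste the actual command from the changelog
# where indicated.
import psycopg2

conn = psycopg2.connect(
    host="localhost", port=5432,
    dbname="mydb", user="myuser", password="mypassword",  # placeholders
)
conn.autocommit = True  # so maintenance/DDL statements take effect immediately
with conn.cursor() as cur:
    # Replace this stand-in with the command from the changelog.
    cur.execute("SELECT version();")
    print(cur.fetchone())
conn.close()
```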
You can get a quick overview via DSM, I think it's in the Storage Manager. For more details you could jump into a terminal and use smartctl.
Have you checked the SMART values of your drives? Do they actually give you reason for concern?
Anyhow, you should never be in a position where you need to worry about drive failure. If the data is important, back it up separately. If it isn't, well, don't sweat it then.
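If you want to script the smartctl part (e.g. to check all drives at once), here's a rough Python sketch. It just wraps the smartctl CLI; the device names are examples and you'll need root:

```python
# Rough sketch (not Synology-specific): wrap smartctl to print each drive's
# overall health self-assessment. Assumes smartctl is installed and the script
# runs with enough privileges; /dev/sda etc. are example device names.
import subprocess

def smart_health(device: str) -> str:
    """Return smartctl's overall health assessment for one drive."""
    out = subprocess.run(
        ["smartctl", "-H", device],  # -H prints the overall health self-assessment
        capture_output=True, text=True, check=False,
    )
    return out.stdout

for dev in ["/dev/sda", "/dev/sdb"]:
    print(dev)
    print(smart_health(dev))
```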
Why would you buy something new if your current solution works and your requirements haven't changed? Just keep it.
Wasabi S3 is nice and cheap. You only pay for what you use, so probably just a few cents in your case.
Oops, nevermind:
If you store less than 1 TB of active storage in your account, you will still be charged for 1 TB of storage based on the pricing associated with the storage region you are using.
I recently upgraded three of my Proxmox hosts with SSDs to make use of Ceph. While researching I faced the same question: everyone said you need an enterprise SSD or Ceph would eat it alive. The feature that apparently matters most in my case is Power Loss Protection (PLP). It's not even primarily needed to protect against a possible outage: because the cache is power-loss protected, the drive can safely acknowledge sync writes from it instead of having to flush to flash first, which is what keeps sync-heavy workloads like Ceph fast.
There are some SSDs marketed for use in data centers; these are generally enterprisey. They're often classified for "Mixed Use" (read and write) or "Read Intensive". Other interesting metrics are Drive Writes Per Day (DWPD) and, obviously, TBW and IOPS.
In the end I went with used Samsung PM883 drives.
But before you fall into this rabbit hole, you might check if you really need an enterprise SSD. If all you're doing is running a few VMs in a homelab, I would expect consumer SSDs to work just fine.
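If you want to see how a drive behaves before spending money, you can benchmark sync writes with fio, which is roughly the write pattern Ceph produces. A rough sketch; the file path and sizes are just examples:

```python
# Rough sketch: a small fio run that forces an fsync after every write, roughly the
# pattern Ceph's journal/WAL produces. Consumer SSDs without PLP usually drop to a
# few hundred IOPS here, while PLP drives stay fast. Assumes fio is installed; the
# test file path and size are just examples.
import subprocess

subprocess.run([
    "fio",
    "--name=sync-write-test",
    "--filename=/mnt/testdisk/fio.tmp",  # example path on the SSD you want to test
    "--rw=write",
    "--bs=4k",
    "--size=256M",
    "--ioengine=libaio",
    "--iodepth=1",
    "--numjobs=1",
    "--fsync=1",   # fsync after every write -> shows sync-write behaviour
    "--unlink=1",  # remove the test file afterwards
], check=True)
```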
What’s wrong with Portainer?
No, the registrar just registers the domain for you (duh). You can then change the DNS records for this domain, and these records will propagate to other DNS servers all around the world. Your clients will use some of these DNS servers to look up the IP address of your server and then connect to that IP.
The traffic between your clients and server has nothing to do with your domain registrar.
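To illustrate: once the records are set, a client only asks DNS for the IP and then talks to that IP directly; the registrar never sees the traffic. A tiny Python sketch, with example.com standing in for your own domain:

```python
# Resolve a name the same way any client would, then connect to the resulting IP.
# "example.com" is a placeholder for your own domain.
import socket

addrs = {info[4][0] for info in socket.getaddrinfo("example.com", 443)}
print("clients will connect straight to:", addrs)
```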
You could look into mainboards with IPMI. They give you a web-based interface to fully control your server, including power management, a remote console/shell, sensor readings, etc.
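For reference, this is roughly what scripted access via ipmitool looks like. Host and credentials are placeholders, and the exact subcommands can differ a bit between BMC vendors:

```python
# Hedged sketch: remote control of a server's BMC over the network with ipmitool.
# Host, user and password are placeholders.
import subprocess

BASE = ["ipmitool", "-I", "lanplus", "-H", "192.168.1.50", "-U", "admin", "-P", "secret"]

subprocess.run(BASE + ["chassis", "power", "status"], check=True)  # is the box on?
subprocess.run(BASE + ["sensor", "list"], check=True)              # temperatures, fans, voltages
# subprocess.run(BASE + ["chassis", "power", "on"], check=True)    # power it up remotely
```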
I'm also not a fan of the closed-source thing, but what I like about Obsidian is that it's all just Markdown. If I ever need to ditch it, I can keep my existing files and use them as they are.
Would this also be possible with Zettlr or Logseq?
You’d need to post your complete docker-compose.yaml, otherwise nobody knows what you’re doing.
Also (and I don't want to sound rude), you should probably start learning Docker with a less critical service. If you've just learned how volumes work, you shouldn't be storing your passwords in one. Yet.
That’s a very specific problem and I don’t know if there is an existing solution that does exactly what you want.
paperless-ngx does a lot of what you're asking for: it lets you upload PDFs, does OCR, and gives you full-text search via a web UI. It's just not made specifically for manuals, and it doesn't highlight the search hits or scroll to them.
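If you do go with paperless-ngx, its REST API can at least be scripted for searches. A rough sketch; the /api/documents/ endpoint and token auth are how I remember the API, so double-check the docs, and the URL and token are placeholders:

```python
# Rough sketch: full-text search against a paperless-ngx instance via its REST API.
# Endpoint and auth style are from memory (verify against the paperless-ngx API docs);
# the base URL and token are placeholders.
import requests

BASE = "http://paperless.local:8000"
TOKEN = "replace-with-your-api-token"

resp = requests.get(
    f"{BASE}/api/documents/",
    params={"query": "dishwasher error code"},   # full-text query
    headers={"Authorization": f"Token {TOKEN}"},
)
resp.raise_for_status()
for doc in resp.json()["results"]:
    print(doc["id"], doc["title"])
```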
I have no experience with Terraform, but Bitwarden has an API and a CLI, so you might be able to script something with it?
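For example, something along these lines with the bw CLI (not Terraform-specific; the item name is a placeholder, and Terraform could call a script like this through an external data source):

```python
# Sketch: unlock a Bitwarden vault session with the bw CLI and pull a secret by name.
# Assumes the bw CLI is installed and already logged in; the item name is a placeholder.
import json
import subprocess

# Unlock and grab a session key (will prompt for the master password).
session = subprocess.run(
    ["bw", "unlock", "--raw"], stdout=subprocess.PIPE, text=True, check=True
).stdout.strip()

# Search items by name and print the password of the first hit.
items = json.loads(subprocess.run(
    ["bw", "list", "items", "--search", "my-server-root", "--session", session],
    capture_output=True, text=True, check=True,
).stdout)
print(items[0]["login"]["password"])
```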
I think choosing a domain registrar with DynDNS support has very little to do with setting up Pi-hole and WireGuard at home. Pi-hole and WireGuard will not care about or interact directly with a service like Porkbun. Okay, you might configure Pi-hole to forward DNS requests to Porkbun's nameservers, but that's something every DNS provider will support, because that's what DNS providers do.
Check out Porkbun, they're cheap and have an API that's supported by ddclient.
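If you'd rather not run ddclient, the same update can be scripted directly against Porkbun's JSON API. The endpoint path and field names below are from memory, so treat them as assumptions and verify against Porkbun's API docs; the domain, subdomain, and keys are placeholders:

```python
# Rough sketch of a dynamic DNS update against Porkbun's API. Endpoint path and
# field names are from memory and may differ; domain, record and keys are placeholders.
import requests

API = "https://api.porkbun.com/api/json/v3"
AUTH = {"apikey": "pk1_...", "secretapikey": "sk1_..."}  # placeholders

# Current public IP, as seen from outside.
ip = requests.get("https://api.ipify.org").text.strip()

# Point home.example.com's A record at it.
resp = requests.post(
    f"{API}/dns/editByNameType/example.com/A/home",
    json={**AUTH, "content": ip, "ttl": "600"},
)
resp.raise_for_status()
print(resp.json())
```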
I have heard portforwarding is unsafe and a VPN seems inconvenient to me.
Well, those are pretty much the available options.
People are talking about Tailscale a lot, and although I've never used it, it might be easy to set up while not being too inconvenient for you.
Just wanted to add that you can get Jeff Geerling's book "Ansible for DevOps" for free right now:
I assume you're not really experienced with storage servers? Then I would likely recommend a Synology NAS. They give you great software that you can easily configure without needing deeper knowledge of the inner workings. I started with a Synology and didn't regret it. It just worked and gave me reliable storage, so I could concentrate on the other parts of my homelab. It comes at a price though, and you mostly pay for the software.
If you aren’t afraid to get your hands dirty or prefer to use an open source storage solution from the beginning, you might consider Unraid or TrueNAS. The latter is more “enterprisey”, the former seems to be more beginner friendly (but I haven’t used it personally).
I don’t understand - you want a layer to hide database internals but also a web app that “is only the db itself”?