Tracking a single cat doesn’t seem like DB work
Why wouldn’t a simple spreadsheet and some pivot tables work?
There’s not much cost with S3 object storage. You can mount it like a file system in Linux, and replication is a protocol standard.
Use object storage for media and backups, then use S3 replication to put a copy somewhere else.
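As a rough sketch of what that looks like with boto3 (assuming AWS-style S3; the bucket names and role ARN below are placeholders, and both buckets need versioning enabled first):

```python
# Minimal sketch: enable S3 bucket replication with boto3.
# Assumes AWS S3 (or a compatible store that supports replication),
# that both buckets already exist with versioning enabled, and that
# credentials come from the environment. Names/ARNs are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="media-and-backups",  # hypothetical source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/replication-role",  # placeholder
        "Rules": [
            {
                "ID": "offsite-copy",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": ""},  # replicate everything
                "Destination": {"Bucket": "arn:aws:s3:::offsite-backups"},
                "DeleteMarkerReplication": {"Status": "Disabled"},
            }
        ],
    },
)
```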
Remind me: who provides most of FF’s funding?
If you have enough users and systems that this is a problem, then you should be centrally managing it. I get that you want to inventory what you have, but you’re probably doing it wrong right now, and your ask is solved by a central IAM system.
It sounds like you’re probably looking for some kind of SAML-compliant IAM system, where credentials and access can be centrally managed. Active Directory and LDAP are examples of that.
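For a sense of what central management buys you, here’s a minimal sketch of an app checking a user’s groups against one directory instead of per-box local accounts (using the ldap3 Python library; the host, bind DN, password, and user are all placeholder AD-style values):

```python
# Minimal sketch: one query against a central directory instead of
# checking local accounts. Assumes an AD-style LDAP server; the host,
# bind DN, password, and base DN are placeholders.
from ldap3 import Server, Connection, ALL

server = Server("ldap://dc.example.com", get_info=ALL)
conn = Connection(
    server,
    user="cn=svc-app,ou=Service Accounts,dc=example,dc=com",  # placeholder
    password="CHANGE_ME",                                     # placeholder
    auto_bind=True,
)

# Look up a user's group memberships centrally; every app and system
# asks the same source instead of keeping its own account list.
conn.search(
    "dc=example,dc=com",
    "(sAMAccountName=jdoe)",  # placeholder user
    attributes=["memberOf"],
)
print(conn.entries)
```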
Well, 1 ms of latency is roughly 200 km of fiber (300 km at the speed of light in a vacuum), so unless you have something really misconfigured or overloaded, or you’re across the country, latency shouldn’t be an issue. 10-20 ms is normally the high-water mark for most synchronous replication, so you can go a long way before a protocol like DNS becomes an issue.
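Back-of-the-envelope, if you want to sanity-check it (assuming signals in fiber travel at about two-thirds of c):

```python
# Back-of-the-envelope latency math. Assumes light in fiber moves at
# roughly 200,000 km/s (about two-thirds of c in a vacuum).
FIBER_KM_PER_MS = 200_000 / 1_000  # ~200 km per millisecond

for budget_ms in (1, 10, 20):
    one_way_km = budget_ms * FIBER_KM_PER_MS
    print(f"{budget_ms} ms budget: ~{one_way_km:.0f} km one-way, "
          f"~{one_way_km / 2:.0f} km if it has to cover the round trip")
```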
I find a lot of stuff uses Docker Compose, which works with Podman, but straight Docker is easier, especially if it’s nothing web-facing.
Yes, but you don’t need Kubernetes from the start.
Use object storage and enable immutability for the backups. If they compromise your site, they shouldn’t be able to delete your backups unless they have the NAS password too.
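If you’re on S3 or something S3-compatible that supports Object Lock, enabling immutability looks roughly like this; the bucket name and 30-day retention window are placeholders:

```python
# Minimal sketch: S3 Object Lock for immutable backups, using boto3.
# Assumes an S3-compatible endpoint that supports Object Lock; the
# bucket name and retention period are placeholders.
import boto3

s3 = boto3.client("s3")

# Object Lock can only be turned on at bucket creation time.
s3.create_bucket(
    Bucket="immutable-backups",
    ObjectLockEnabledForBucket=True,
)

# Default retention: in COMPLIANCE mode, nobody (including root)
# can delete or overwrite object versions for 30 days.
s3.put_object_lock_configuration(
    Bucket="immutable-backups",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
```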
Script that checks your external IP and updates your DNS provider via API.
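Something like this, as a rough sketch; it assumes Cloudflare as the DNS provider (the token, zone ID, and record ID are placeholders), but any provider with a DNS API works the same way:

```python
# Rough dynamic-DNS sketch: check external IP, update an A record.
# Assumes Cloudflare's API; token, zone ID, record ID, and hostname
# are placeholders. Run from cron every few minutes.
# In practice, cache the last IP and only call the API on change.
import requests

API_TOKEN = "YOUR_API_TOKEN"   # placeholder
ZONE_ID = "YOUR_ZONE_ID"       # placeholder
RECORD_ID = "YOUR_RECORD_ID"   # placeholder
HOSTNAME = "home.example.com"  # placeholder

def current_ip() -> str:
    # Any "what's my IP" service works; ipify returns plain text.
    return requests.get("https://api.ipify.org", timeout=10).text.strip()

def update_record(ip: str) -> None:
    resp = requests.put(
        f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/dns_records/{RECORD_ID}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"type": "A", "name": HOSTNAME, "content": ip, "ttl": 120},
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    update_record(current_ip())
```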
Erasure coding may be a better option than RAID.
An HP EliteDesk G4 mini desktop is around that much.
Object storage is really popular for backups now because you can make it immutable by protocol standard, and with erasure coding you can have fault tolerance across locations.
The problem with external LUNs is that they’re outside the hypervisor’s control. You can run into situations where migration events cause access issues. If the data is presented through the hypervisor instead, you’ll hit fewer potential issues. Object storage or NFS are also good options, if available.
Not presenting LUNs to VMs is the best practice. The only time you should do it is if you’re absolutely forced to because you’re doing something that doesn’t support better shared-disk options.
Save your files to a local S3 object storage mount, enable versioning for immutability, and use erasure coding for fault tolerance. You can use Lustre or some other S3-backed software for the mount. S3 is great for single-user file access. You can also replicate to any cloud-based S3 for an offsite copy.
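Turning on versioning is one call; this sketch assumes a local S3-compatible endpoint (MinIO-style) with placeholder credentials and bucket name:

```python
# Minimal sketch: enable versioning on a local S3-compatible bucket.
# Assumes a MinIO-style endpoint; all values are placeholders.
# With versioning on, overwrites and deletes keep old versions
# around instead of destroying them.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://nas.local:9000",  # placeholder local endpoint
    aws_access_key_id="ACCESS_KEY",        # placeholder
    aws_secret_access_key="SECRET_KEY",    # placeholder
)

s3.put_bucket_versioning(
    Bucket="files",
    VersioningConfiguration={"Status": "Enabled"},
)
```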
Might want to move the NAS to a separate, unrouted VLAN. Storage only needs local connections, and it’s good practice not to make it routable.
Sure, but how many foods are we talking here? This sounds like probably <20 rows on a sheet, with columns for ingredients.