

We could, if someone cared to put in the effort to make that happen.


Come on baby…
Light. My. Fire.


I’d look at getting a used SFF (Small Form Factor) desktop for a LOT less than that Ugreen. I paid less than $50 for mine - at that price I can run a second one when I’m ready.
I’m currently running an old Dell SFF as my server; it’s had Proxmox on it with 5 internal 2.5" drives and the OS on the NVMe.
Initially it had 4GB of RAM and ran Proxmox with ZFS just fine (and those drives were various ages and sizes).
It idles at 18W, not much more than the 12W my Pi Zero W idled at, but it’s way more powerful and capable.


One drive failure means the array is degraded until resilvering finishes (unless you have a hot spare; at least then the resilver onto the spare starts automatically and the degraded window isn’t as risky).
Resilvering is an intensive process that can push other drives to fail.
I have a ZFS system that takes the better part of a day (24 hours) to resilver a 4TB drive in an 8TB five-drive array (single parity) that’s about 70% full. While it’s resilvering I have to be confident my other data stores won’t fail (I have the data locally on 2 other drives and a cloud backup).
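If you want to keep half an eye on a resilver without babysitting the console, something like this works. It’s a rough Python sketch, assuming a pool named "tank" (the name and polling interval are placeholders); all it does is shell out to `zpool status` and print the "scan:" section, which is where progress and the time estimate show up:

```python
# Minimal sketch: poll `zpool status` for the "scan:" section during a resilver.
# Assumptions: a pool named "tank", and the zpool binary available on PATH.
import subprocess
import time

POOL = "tank"  # assumption: replace with your pool name


def resilver_status(pool: str) -> str:
    out = subprocess.run(
        ["zpool", "status", pool],
        capture_output=True, text=True, check=True,
    ).stdout
    lines = [line.strip() for line in out.splitlines()]
    for i, line in enumerate(lines):
        if line.startswith("scan:"):
            # The scan section usually spans a couple of lines.
            return " ".join(lines[i:i + 3])
    return "no scan information found"


if __name__ == "__main__":
    while True:
        print(resilver_status(POOL))
        time.sleep(600)  # check every 10 minutes during a resilver
```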


“Two in RAID” only counts as two copies when the arrays are on different systems and the replication isn’t instant. Otherwise it only protects against hardware failures and not against you fucking up (ask me how I know…).
If the arrays are on 2 separate systems in the same place, they’ll protect against independent hardware failures without a common cause (a drive dies, etc), but not against common threats like fire or electrical spikes.
Also, how long does it take to return one of those systems to fully functioning with all the data of the other? This is a risk all of us seem to overlook at times.


If you’re storing “critical data”, you want to look at redundancy (i.e. backups) rather than expecting a single store to never have issues. Drives will fail, and if they fail in a RAID the entire store is at risk until the array is restored. If you don’t have hot spares it’s at even more risk while it’s rebuilding. ZFS is less sensitive to this than traditional RAID, but even it can’t magically restore data from thin air.
The link above discusses the 3-2-1-1-0 rule, which I think is good to understand: the 0 refers to zero errors in verified backups. Unverified backups are no backups at all. It’s not unusual in the SMB space to do a test restore of a percentage of files monthly (Enterprise has entire teams and automation around testing).
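A home-scale version of that “verify a percentage of files” habit can be as simple as this Python sketch. The paths and sample rate are assumptions; point SOURCE at the live data and BACKUP at the restored or replicated copy, and it compares SHA-256 hashes of a random sample:

```python
# Minimal sketch: spot-check a backup by hashing a random sample of files.
# Assumptions: live data at /srv/data, backup mounted at /mnt/backup (examples).
import hashlib
import random
from pathlib import Path

SOURCE = Path("/srv/data")    # assumption: your live data
BACKUP = Path("/mnt/backup")  # assumption: your backup copy
SAMPLE_RATE = 0.02            # verify roughly 2% of files per run


def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


files = [p for p in SOURCE.rglob("*") if p.is_file()]
sample = random.sample(files, min(len(files), max(1, int(len(files) * SAMPLE_RATE))))

errors = 0
for src in sample:
    dst = BACKUP / src.relative_to(SOURCE)
    if not dst.is_file() or sha256(src) != sha256(dst):
        errors += 1
        print(f"MISMATCH: {src}")

print(f"Checked {len(sample)} files, {errors} errors")  # that trailing 0 is the goal
```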


If you ever feel like you can’t find the right screws or it just doesn’t hold together well, just Goop the bastard back together.
So much stuff in my life is now Gooped together - I even Gooped some drives into a desktop that lacked enough mount points.
That stuff is magic in a tube.
The only concern I see here is the external drive. My experience has been that powered-off drives fail more often than constantly-on drives. So my external drives are always powered on; I just run a replication script to them on a schedule (roughly like the sketch at the end of this comment).
But you do have good coverage, so that’s a small risk.
For stuff like movies I simply use replication as my backup.
Since I share media with friends/family, I act as the central repository and replicate to them on a schedule (Mom on Monday, Friend 1 on Tuesday, etc.), so I have a few days to catch an error. It’s not perfect, but I check those replication logs weekly.
I also have 2 local replicas of media, so I’m pretty safe.
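For what it’s worth, the scheduled replication looks roughly like this sketch, assuming rsync over SSH; every host name and path below is a made-up example, and the point is just the day-of-week schedule plus a log you can actually go back and check:

```python
# Rough sketch: replicate media to a different destination depending on the day,
# and keep a log to review weekly. Hosts, paths, and log location are examples.
import datetime
import logging
import subprocess

logging.basicConfig(filename="media-replication.log", level=logging.INFO)

SOURCE = "/tank/media/"  # assumption: the central copy lives here

# Day of week (0 = Monday) -> rsync destination. All destinations are hypothetical.
SCHEDULE = {
    0: "mom@moms-nas:/volume1/media/",
    1: "friend1@friends-box:/data/media/",
    5: "backup@local-replica:/tank/media/",
}


def replicate(dest: str) -> None:
    # --archive preserves permissions/times, --delete mirrors removals, which is
    # exactly why you want a few days' lag and a weekly look at this log.
    result = subprocess.run(
        ["rsync", "--archive", "--delete", "--stats", SOURCE, dest],
        capture_output=True, text=True,
    )
    logging.info("%s -> %s rc=%d\n%s", datetime.date.today(), dest,
                 result.returncode, result.stdout)


if __name__ == "__main__":
    dest = SCHEDULE.get(datetime.date.today().weekday())
    if dest:
        replicate(dest)
```

Run it daily from cron or a systemd timer and the schedule table does the rest.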


You’re missing the point - he’s elevating the CLI above all else, and you don’t have a CLI on a TV or on mobile.
Yes, I know there are media clients; I’ve used them all. And that screenshot is hideous - compare it to Jellyfin on mobile, which looks just like Netflix used to.
Besides, he’s not doing anything different from running a “server stack” (and that framing isn’t accurate anyway; he’s still running a server, the device hosting the media services, even if they’re native to the OS).
Xerox PARC didn’t invest millions in the ’70s because the CLI was so great.
We don’t use a CLI on our microwaves, toaster ovens, TVs, clocks, lights, etc., for a reason.


So, let me get this straight: you’re saying to use the command line to play video instead of a GUI?
Tell me, how does one do this on a TV? On an iPad? On a phone?
Your excitement for the command line suggests an experience of nothing but GUIs, so the command line is something of a novelty to you.
Dude, get ahold of yourself. I probably wrote more command line stuff before you were born than you’ve ever thought of - I’m not going backwards.
(As a clue: I wrote my first Fortran program before PCs were even a thought at IBM.)
Fuck the CLI except for managing systems. Even then a GUI is quite often faster by orders of magnitude, mostly for kicking off scripts to do what I need. The GUI was a godsend, and Xerox PARC’s efforts created a common GUI language for us that was thankfully embraced. I refuse to go backwards.
And you’d forcibly teach non-technical people to use a CLI?
You are exactly the type of person that Saturday Night Live lampooned decades ago.


Exactly, keeping components separated, especially the router.
Hardware routers “cost money because they save money” (Sorry, couldn’t resist that movie quote). A purpose-built router will just run and run. I have 20 year old consumer routers that still “just work”. Granted, they don’t have much in the way of capability, but they do provide a stable gateway.
I then use two separate mesh network tools, on multiple systems. The likelihood of both of those failing simultaneously is low. But I still have a single failure point in the router, which I accept - I’ve only had a couple outright fail over 25 years, so I figure it’s a low risk.


Separate devices provide reliability and supportability.
If your all-in-one device has issues, you can’t remote in to maintain it.
Take a look at what enterprises do: redundant external interfaces, redundant services internally. You don’t necessarily need all of this, but it’s worth asking "how do I ensure uptime and enable supportability and reliability?".
Also, we always ask “what happens if the lone SME (Subject Matter Expert) is hit by a bus?” (You are that Lone SME).


To be fair, the pro plan is for the non-local stuff, which is at least understandable as domains and resolution services are non-free.
Also ongoing development takes resources. Seems like a reasonable approach.
I say this as someone who absolutely despises subscriptions.


Give us an example of what you want as the end result - what devices you have, are you sharing calendars with someone else, etc.
My best answer is to run a calendar server on some machine and let your calendars sync to it whenever the devices are online on the same wifi at the same time (e.g. run ownCloud in a Docker container on your laptop).
Alternatively you could run Tailscale on the devices which would provide a secure mesh network, eliminating the need to be on the same wifi - so long as they’re online they can sync via Tailscale.
Tailscale even has a feature (Funnel) that will route specific internet traffic into your Tailscale net - this would eliminate the need to have Tailscale on every device. You could host a calendar on a laptop (say Nextcloud in Docker with Tailscale), enable Funnel only for the calendar port, and apply security in Tailscale so only you have access.
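For concreteness, here’s roughly what that last idea looks like as a minimal sketch wrapping the two CLIs. The container name, port, and volume are placeholders, and the exact `tailscale funnel` syntax is an assumption that varies by Tailscale version, so check `tailscale funnel --help` before relying on it:

```python
# Very rough sketch: Nextcloud in Docker on localhost:8080, exposed only via
# Tailscale Funnel. Container name, port, volume, and funnel flags are assumptions.
import subprocess


def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


# 1. Start the official Nextcloud image, mapped to localhost:8080.
run([
    "docker", "run", "-d", "--name", "nextcloud",
    "-p", "8080:80",
    "-v", "nextcloud_data:/var/www/html",
    "nextcloud",
])

# 2. Expose only that port through Tailscale Funnel (assumed syntax; verify with --help).
run(["tailscale", "funnel", "--bg", "8080"])
```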


Google has always been evil. Why else was their motto “Don’t be evil”?
If you have to make such a disclaimer…


So much this.
Why is Signal hosted in one location on AWS, for example? That’s the sort of thing that should be in multiple places around the world with automatic failover.


Depends on who we’re talking about. Finance orgs, for example, are all about legal contracts and would be able to hold Amazon’s feet to the fire.
You don’t want to go to court against a finance company or any very large org where contract law is their bread and butter (basically any large/multinational corp).
Amazon’s not hosting just small operations.


Much of this stuff is automatic - I’ve worked with such contracted services where uptime is guaranteed. The contracts dictate the terms and conditions for refunds; we see them on a monthly basis when uptime is missed, and it’s not done by a person.
I imagine many companies have already seen refunds for outage time, and Amazon scrambled to stop the automation around this.
They’ll have little to stand on in court for something this visible and extensive, and could easily lose their shirt in fines and penalties when a big client sues over breach as it chooses not to renew.
Just because they’re big doesn’t mean all their clients are small or don’t have legal teams of their own.


I think the nerd/tinkerer space today is stuff like self-hosting, local storage, and the return from the cloud.
Apps that don’t phone home to someone else’s server; keeping your contacts, calendar, shopping list, etc. on your own stuff.