  • Are you looking for selective sync, and just over the LAN or over the internet too?

    If just LAN, there are many Windows sync tools for this with varying levels of complexity and capability. Even just a simple batch file with a copy command.

    I’ll often just set up a Robocopy job for anything that’s a regular sync.
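
    As a rough illustration (the paths here are hypothetical; point them at your own folders), a mirror job might look like:

      rem Mirror local Documents to the NAS; /MIR also deletes destination files removed from the source
      robocopy "C:\Users\me\Documents" "\\NAS\share\Documents" /MIR /Z /R:2 /W:5 /LOG:C:\Logs\sync.log

    Save that as a .bat file and point Task Scheduler at it for a hands-off nightly sync.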

    If you open files over a network connection, they stay remote, and saving writes back to the remote copy. This isn’t best practice, though (Windows and apps are known for having hiccups with remotely opened files).

    Two other approaches:

    1. Resilio Sync enables selective sync. If you change a file you’ve synchronized locally, the changed file will sync back to the source.

    2. Mesh VPN such as WireGuard, Tailscale, or Hamachi. Each maintains an encrypted connection between your devices that the system sees as a LAN. If you’re only using Windows, I’d recommend starting with Hamachi, as it’s the easiest to get going with. If you need mobile device support, use WireGuard or Tailscale (Tailscale is built on WireGuard, but is easier to set up); a bare-bones config sketch follows below.
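
    For reference, a minimal WireGuard config sketch (the keys, addresses, and endpoint are placeholders, not real values); Tailscale generates the equivalent plumbing for you automatically:

      [Interface]
      # this device's key and its address on the virtual LAN
      PrivateKey = <this device's private key>
      Address = 10.0.0.2/24

      [Peer]
      # the device to reach and where to find it
      PublicKey = <peer's public key>
      Endpoint = your-home-address.example.org:51820
      AllowedIPs = 10.0.0.1/32
      PersistentKeepalive = 25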




  • Just that you don’t need a beast of a machine (with its higher cost and power consumption) just to serve files at reasonable performance. If you want to stream video, you’ll need beefier hardware.

    For example, my NAS is ten years old, runs on ARM, and has maybe 2 GB of RAM. It supposedly can host services and stream video; it can’t. But its power draw is about 4 watts at idle.

    My newer (5-year-old) small form factor desktop has a multi-core Intel CPU, a true gigabit network card, and a decent video card, with an idle draw under 12 watts, peaking at 200 W when I’m converting video. It can easily stream videos.

    My gaming desktop draws 200 W at idle.

    My SFF and gaming rig are both overkill for simple file sharing, and both cost 2x to 4x more than the NAS (I bought the NAS and SFF second hand). But the NAS can’t really stream video.

    Power draw is a massive factor these days, as these devices run 24/7.

    The RPi is great for its incredibly low power draw. The downside is that you still need an enclosure, and the drives you attach draw power of their own. In my experience, once I’ve built a NAS, an RPi doesn’t draw significantly less than my SFF with the same drives installed; the drives seem to be the biggest consumers.

    As I mentioned, my SFF with 1 TB of storage draws 12 watts, and an RPi will draw upwards of 8 watts on its own (my Pi Zero draws 2, but I’d never use it for a NAS). It’s all so close that, for me, the RPi’s downsides aren’t worth the difference in power.
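
    To put rough numbers on that (assuming electricity around $0.15/kWh, which is only a ballpark rate): 12 W running 24/7 is 12 × 24 × 365 / 1000 ≈ 105 kWh a year, roughly $16, while a 200 W idle draw is about 1,752 kWh, or around $260. The at-most 4 W gap between the Pi and my SFF works out to about 35 kWh a year, or $5.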




  • As I said, “how to reproduce this in a home setup”.

    I’m running multiple machines, paid little for all of them, and they all run at pretty low power. I replicate stuff on a schedule, and I have a cloud backup I verify quarterly.

    If OP is thinking about how to ensure uptime (however they define it) and prevent downtime due to upgrades, then it’s useful to look at how enterprise does things: they’re the people who apply research into this very subject from universities and organizations like Microsoft and Google.

    Nowhere did I tell OP to do things this way, and I’d thank you to not make strawmen of my words.


  • In the business world it’s pretty common to do staged or switchover upgrades: test the new version in a lab environment and iron out the install/config details. Then upgrade a single production server and test with a small group of users. Or build new servers with the new stuff and have a set of users run on them for a while; that way you can always move those users back to a known-good server.

    How do you do this at home? VMs for lots of stuff, or duplicate hardware for NAS-type stuff (I’ve read of people running TrueNAS in a VM).
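
    One way to sketch the staged-upgrade idea with containers (hypothetical: this assumes Docker Compose, with Nextcloud purely as an example service) is to pin the known-good version, run the candidate beside it on another port, and point only test users at the candidate:

      # docker-compose.yml: stable and candidate versions side by side
      services:
        app-stable:
          image: nextcloud:28          # known-good version, everyone stays here
          ports:
            - "8080:80"
          volumes:
            - stable-data:/var/www/html
        app-candidate:
          image: nextcloud:29          # version under test, test users only
          ports:
            - "8081:80"
          volumes:
            - candidate-data:/var/www/html
      volumes:
        stable-data:
        candidate-data:

    If the candidate misbehaves, the stable service was never touched: move the test users back and delete the candidate.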

    To borrow from the preparedness community: if you have one you have none; if you have two you have one. For example, the business world often runs mission-critical systems redundantly across regionally separate data centers, so a storm won’t take them down. The question is how to reproduce this idea in a home lab environment.



  • Consider how the NAS will be used. Is it just file storage, or will you want to stream from it?

    If just file storage, you can use lighter hardware.

    I’m running a 5-year-old Dell Small Form Factor desktop as my NAS/media server. Its power draw is under 12 watts unless I’m converting files. There’s room for 3 data drives (the boot drive is M.2). It has no problem streaming, unlike my consumer NAS. And it cost way less.


  • This may not fully solve the problem, but have you tried using it through Hermit or Native Alpha? These are browsers designed to make websites work like apps on Android.

    Combined with my password manager (Bitwarden), it’s usually as fast as, or even faster than, some apps, with the side benefit of a single app install rather than an app for each service.

    So far this has worked well for Amazon, Walmart, libraries, my healthcare login, my bank, eBay, Home Depot and Lowe’s, etc.



  • So, a paid app (if you want wireless sync): MediaMonkey.

    The Android app can read network shares and network media servers (I forget exactly what it can read). But it works best if you run the server app: then you can stream the library or sync media, similar to iTunes.

    The Android app is free for basic functionality ($5 for wireless sync); the desktop/server app is free ($30 to enable wireless sync and a few other features). It’s been worth it for me, and even the free versions work very well.



  • Documentation has been mentioned already; what I’d add to that is planning.

    Start with a list of high-level objectives, as in “Need a way to save notes, ideas, documents, between multiple systems, including mobile devices”.

    Then break that down to high-level requirements such as “Implement Joplin, and a sync solution”.

    Those high-level requirements then spawn system requirements, such as Joplin needs X disk space, user accounts, etc.

    Each of those branches out to technical requirements, which are single-line, single-task descriptions (you can skip this; it’s a nice-to-have):

    “Create folder Joplin on server A”

    “Set folder permissions XYZ on Joplin folder”

    Think of it all as a tree, starting from your objectives. If you document it like this first, you won’t catch yourself mid-build doing something you can’t remember the reason for, or making on-the-fly decisions that conflict with other objectives.
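
    For instance, the Joplin example above might sketch out like:

      Objective: save notes/documents across systems, including mobile
      └── Requirement: implement Joplin plus a sync solution
          ├── System: X disk space on server A
          ├── System: user accounts
          └── Technical tasks
              ├── Create folder Joplin on server A
              └── Set folder permissions XYZ on the Joplin folder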