• 0 Posts
  • 24 Comments
Joined 1 year ago
Cake day: June 11th, 2023





  • carzian@lemmy.ml to Selfhosted@lemmy.world · Server for a boat · 5 months ago

    You’ve gotten a lot of good answers, so I’m going to do some out of the box thinking - maybe it will spark a few ideas.

    Goal:

    • self hosted server on boat

    Issues:

    • size
    • power
    • corrosion

    So if I were going to do this myself, I’d start with a Pelican case or similar watertight container. We don’t want the equipment getting wet, and we don’t want it exposed to the salty air.

    I’d probably pick a USFF computer, like a Dell 9020, or maybe a Framework motherboard. To get the storage, I’d get one of these to add multiple SATA ports to the computer. Then it’s a matter of getting a bunch of SSDs and powering them. I think the 12V goal is going to be too restrictive, most laptops need 19V to charge, so I’d just bite the bullet and get an inverter. If you’re really tight on power you could go with a Pi, but the Framework motherboard/USFF both use mobile processors, and shouldn’t draw too much while idle.

    Any wires that pass into the case should go through waterproof bulkheads.

    Personally I’d nix the HDMI-out requirement. It’s one more port to keep track of and it complicates the self-hosting. If you want it for media streaming to a TV, I’d recommend a Roku and just run a Jellyfin server on the computer. If you want it for server debugging, I wouldn’t bother running it out of the case.
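    If you go the Jellyfin route, a minimal sketch of running it in Docker could look something like this (the host paths here are placeholders, not anything from the original setup):

        # Minimal Jellyfin-in-Docker sketch; host paths are placeholders
        docker run -d --name jellyfin \
          --net=host \
          -v /srv/jellyfin/config:/config \
          -v /srv/jellyfin/cache:/cache \
          -v /mnt/media:/media:ro \
          --restart unless-stopped \
          jellyfin/jellyfin

    The Roku side is then just the Jellyfin channel pointed at the server’s IP.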

    The last thing I’d do is figure out cooling. For this I’d probably create some sort of closed-loop heat exchanger from the case to either the outside air or the lake/ocean itself. This could be as simple as a pump running water through two radiators, one in the case and the other outside or just dumped overboard. If you know your power usage ahead of time you might be able to get away with a Peltier element, dumping the heat outside the case.

    I’d probably put this all on its own power system, get a solar panel, battery, inverter, etc. It could even get topped off by the boat’s system if it needs extra juice.

    Also, whatever you do, I’d figure out a way to ensure you’re giving your system a clean and steady 12V.


  • “The cause is a new SATA specification which includes the ability to disable power to the hard disk. When you look at the SATA power connection on the back of your hard drive, there are 15 pins that make contact with your power supply. It’s the third pin that delivers a 3.3V signal that disables the drive. What we need to do is prevent that third pin from making contact with the power cable.”

    Some hot-swap hard drive bays use this feature; it’s definitely more common in enterprise scenarios or in USB HDD enclosures.


  • carzian@lemmy.ml to Selfhosted@lemmy.world · current best HDD-model choice · 5 months ago (edited)

    I’ve always liked the Ultrastar line. They used to be made by HGST, and then WD bought them. I’m specifically using the HC530 14TB. The line has a long history of being very reliable enterprise drives.

    I’ve bought mine from both goharddrive and serverpartdeals. Both are reliable resellers of used storage. They’ll warranty the drives for 2 or 5 years depending on which one you go with. Prices are ~$130-$150.

    Be aware you might need to do the electrical-tape-over-the-power-pin hack depending on your setup.

    PS: One of the listings for the HC530 on goharddrive or serverpartdeals is incorrectly labeled as an HC520. Just pay close attention.
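    Whichever listing you buy from, it’s easy to confirm what actually arrived and what state it’s in with smartctl (the device name is a placeholder):

        smartctl -i /dev/sdX   # model/serial, handy given the HC520/HC530 mix-up above
        smartctl -a /dev/sdX   # power-on hours, reallocated sectors, overall health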


    As far as RAID goes, RAID 10 is currently very popular for its speed and drive-failure tolerance. Remember, RAID is not a replacement for the 3-2-1 backup rule (3 copies of your data, on 2 different types of media, with 1 copy offsite). RAID has some fault tolerance for bad hard drives, but it doesn’t protect against a failed RAID card, fire, flood, robbery, acts of god, etc.

    You can also look into ZFS and TrueNAS if you feel inclined. Be aware that if you go with this setup, ECC RAM is basically a requirement.
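    For reference, a RAID 10 style layout in ZFS is just a pool of striped mirrors; a minimal sketch with four drives (the pool name and device paths are placeholders, and stable by-id paths are generally preferred):

        # Two mirrored pairs striped together (RAID 10 equivalent)
        zpool create tank \
          mirror /dev/disk/by-id/ata-DRIVE1 /dev/disk/by-id/ata-DRIVE2 \
          mirror /dev/disk/by-id/ata-DRIVE3 /dev/disk/by-id/ata-DRIVE4
        zpool status tank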





  • Ah ok. I’ve run OPNsense and pfSense both virtualized in Proxmox and on bare metal, at two workplaces now and at home, and I vastly prefer bare metal. Managing it in a VM is a pain. The NIC passthrough is fine, but it complicates configuration and troubleshooting. If you’re not getting the speeds you want, there are now two systems to troubleshoot instead of one. Additionally, you now need to worry about keeping your hypervisor up and running in addition to the firewall, which makes updates and other maintenance more difficult. Hypervisors do provide snapshots, but OPNsense is easy enough to back up that it’s not really a compelling argument.
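    For context, passing a NIC through to a firewall VM in Proxmox ends up being something like this (the VM ID and PCI address are placeholders, and IOMMU has to be enabled in the BIOS and kernel first):

        # Find the NIC's PCI address, then hand the whole device to the VM
        lspci -nn | grep -i ethernet
        qm set 101 --hostpci0 0000:03:00.0

    It’s not a lot of typing, but it is one more layer to keep in mind when something stops passing traffic.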

    My two cents: get the right equipment for the firewall and run it bare metal. Having more CPU is great if you want to do intrusion detection, DNS filtering, VPNs, etc. on the firewall. Don’t feel like you need to run everything under a hypervisor.





  • They are decommissioned datacenter drives, which could be for a variety of reasons (including errors). There are many discussions online about them wiping the SMART data.

    It depends on your use case. I have a few of their drives in a NAS specifically for media. I received one bad drive that failed my burn-in tests, which they exchanged without issue. All of my important files are stored on a separate SSD-based store.
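    A typical burn-in for drives like these is a long SMART self-test plus a destructive write pass, roughly like this (the device name is a placeholder, and the badblocks pass wipes the drive):

        smartctl -t long /dev/sdX        # extended self-test; takes many hours on a 14TB drive
        badblocks -wsv -b 4096 /dev/sdX  # destructive write/verify of every sector
        smartctl -a /dev/sdX             # afterwards, check reallocated/pending sector counts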

    It all depends on your risk tolerance and needs.



  • Proxmox has a virtual console in its web interface, so you can access the desktop of a virtual machine that way. It’s a little clunky but works OK for quick configuration. Alternatively, you could remote desktop into the virtual machine.

    QuickSync is a little trickier. GPU passthrough is a pain, and I’m not sure about it off the top of my head. You can Google “proxmox quicksync passthrough” and see if any solutions will work for you. There’s a chance that all you would need to do is set the processor type correctly in the virtual machine settings, but I’m not sure.
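    Purely as a starting point for that Googling, the two approaches look roughly like this in Proxmox (the VM ID and PCI address are assumptions; Intel iGPUs usually sit at 00:02.0, but check with lspci, and neither option is guaranteed to get QuickSync working on its own):

        # The "set the processor type" idea: have the guest report the host CPU model
        qm set 101 --cpu host
        # The passthrough route: hand the iGPU itself to the VM (needs IOMMU/VT-d enabled)
        lspci -nn | grep -i vga
        qm set 101 --hostpci0 0000:00:02.0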