• 0 Posts
  • 17 Comments
Joined 1 year ago
Cake day: August 15th, 2023



  • remotelove@lemmy.ca to Selfhosted@lemmy.world · HDD data recovery · 6 points · 5 months ago

    It was on old 3.5" drives a long time ago, before anything fancy was built into the drives. It was a seriously rough working environment anyway, so we saw a lot of failed drives. If strange experiments didn’t get the things working again (mainly for the lulz), the next option was to see if a sledgehammer would fix the problem. Funny thing… that never worked either.




  • Maybe? Bad cables are a thing, so it’s something to be aware of. USB latency, in rare cases, can cause problems but not so much in this application.

    I haven’t looked into the exact ways that bad sectors are detected, but it probably hasn’t changed too much over the years. Needless to say, info here is just approximate.

    However, marking a sector as bad generally happens at the firmware/controller level. My guess is that a write is quickly followed by a verification, and if the controller sees an error, it just remaps that particular sector. If HDDs use any kind of per-sector parity check, a write test may not even be needed.

    Tools like CHKDSK likely step through each sector manually and perform read tests, or just tell the controller to run whatever test it performs on each sector. (Roughly like the sketch at the end of this comment.)

    My point is that OS-level interference or bad cables are unlikely to cause the controller to mark a sector as bad. Now, if bad data gets written to disk because of a bad cable, the controller shouldn’t care; it just sees data and writes data. (That would be rare as well, but possible.)

    What you will see is latency. USB can be orders of magnitude slower than SATA; buffers and wait states caused by the speed difference are to blame. That latency isn’t going to cause physical problems, though.

    My overall point is that several independent software and firmware layers would all need to be completely broken for a SATA drive to erroneously mark a sector as bad because of a slow conversion cable. Sure, it could happen, and that is why we have software that can attempt to repair bad sectors.
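
    For illustration, here is a minimal Python sketch of the kind of surface scan a tool like CHKDSK might do: step through the device in sector-sized chunks, attempt a read, and record anything that errors out. The device path and sector size are made-up assumptions, and this only *infers* bad sectors from OS-level read errors; as noted above, the actual write-verify-remap logic lives in the drive’s firmware, and real tools query the controller (e.g. via SMART) instead of guessing.

    ```python
    import os

    DEVICE = "/dev/sdX"      # hypothetical device path; replace with a real one
    SECTOR_SIZE = 4096       # assumed physical sector size
    CHUNK_SECTORS = 2048     # read in big chunks for speed, narrow down on error

    def scan_for_bad_sectors(device=DEVICE):
        """Naive read test: walk the device and report sectors that fail to read."""
        bad = []
        fd = os.open(device, os.O_RDONLY)  # needs root for a raw block device
        try:
            size = os.lseek(fd, 0, os.SEEK_END)
            chunk = SECTOR_SIZE * CHUNK_SECTORS
            for offset in range(0, size, chunk):
                os.lseek(fd, offset, os.SEEK_SET)
                try:
                    os.read(fd, min(chunk, size - offset))
                except OSError:
                    # Chunk failed: retry sector by sector to pinpoint the bad ones.
                    for sec_off in range(offset, min(offset + chunk, size), SECTOR_SIZE):
                        os.lseek(fd, sec_off, os.SEEK_SET)
                        try:
                            os.read(fd, SECTOR_SIZE)
                        except OSError:
                            bad.append(sec_off // SECTOR_SIZE)
        finally:
            os.close(fd)
        return bad

    if __name__ == "__main__":
        print("suspect sectors:", scan_for_bad_sectors())
    ```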


  • Sorry, my points were mixed unintentionally.

    I agree. I stay away from JVMs because they are a pain in the ass to administer and, like you said, are usually coded by the lowest bidder.

    In a well-maintained environment, I actually have nothing against JVMs.

    I was just bitching about the Spring framework family. While security updates are frequent, Java apps tend not to age well and commonly suffer from version lock-in. (I am going through a round of that at my current job, with Spring auth stuff being the offender.)








  • I agree with your main point. Python does a great job of replacing lots of tiny, chained scripts. Simple API calls with wget or curl have their place, but they can spiral out of control quickly if you need to introduce any grain of control, pagination being one example (see the sketch at the end of this comment).

    Maintaining one Python app (or “script”) can still adhere to the Unix philosophy of simplicity, but it can bend some rules around monolithic design if you aren’t careful.

    It all boils down to whether you are introducing complexity or reducing it, IMHO.
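
    As a concrete illustration of the pagination point, here is a minimal Python sketch of a paginated API fetch. The endpoint, the `page` query parameter, and the response shape are all made up for the example; the point is that loop-and-accumulate logic is awkward in a one-liner curl pipeline but trivial in a small script.

    ```python
    import requests  # third-party; pip install requests

    API_URL = "https://api.example.com/items"  # hypothetical endpoint

    def fetch_all_items(url=API_URL):
        """Follow page numbers until the API returns an empty page."""
        items, page = [], 1
        while True:
            resp = requests.get(url, params={"page": page}, timeout=10)
            resp.raise_for_status()
            batch = resp.json()          # assumed: each page is a JSON list
            if not batch:
                break                    # empty page means we're done
            items.extend(batch)
            page += 1
        return items

    if __name__ == "__main__":
        print(f"fetched {len(fetch_all_items())} items")
    ```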



  • DHCP is a really stupid* service for the most part. Unless you are working with multiple subnets or have some very specific settings you need to pass to your clients, it’s probably not worth managing it yourself. I don’t want to discourage you, though! Assigning static IP addresses by MAC can be extremely useful and is not always an option on routers. If you want static names with dynamic addresses, that is really where you need to manage both DNS and DHCP. It depends on how and where you want names resolved and what you are trying to accomplish. (*Stupid as in: it’s a really simple service. You want it simple, because when DHCP breaks, you have other serious issues going on.)

    Setting up your own DNS is worth its weight in gold. You can put it just about anywhere on your network (before your gateway, after it, in China, whatever) and your network won’t even know the difference if it’s set up correctly. You can point BIND at the root servers and bypass your ISP completely if you want. ISP DNS services suck ass, so whether you resolve names yourself or forward all queries to your anonymous DNS server of choice, you get a really decent level of control over your network. It is *the* service to learn if you want to keep an eye on where your network wants to talk. (There’s a small sketch at the end of this comment.)

    Your UniFi USG should play nice with your own server; that’s just how DNS works. There may be some nuances with internal protocols like WINS, but other than that, it should be just fine.

    To answer your actual question: I would set up a simple VM somewhere first. It’s good practice to keep core services isolated on their own dedicated instances, which speeds up recovery and minimizes downtime. Even on a home network, DNS and DHCP are services you do not want going down, and it’s always a pain when they do.
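
    To show how transparent your own resolver can be to clients, here is a small Python sketch using the third-party dnspython library. It resolves a name through a server of your choosing; the 192.168.1.53 address is a made-up placeholder for wherever your BIND (or other) DNS server lives.

    ```python
    import dns.resolver  # third-party; pip install dnspython

    # Hypothetical address of your own DNS server (e.g. a BIND VM on your LAN).
    MY_DNS_SERVER = "192.168.1.53"

    def resolve_via(server, name, rtype="A"):
        """Resolve `name` using only the given server, ignoring /etc/resolv.conf."""
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        return [rr.to_text() for rr in resolver.resolve(name, rtype)]

    if __name__ == "__main__":
        # A client can't tell whether this answer came from your ISP or your own box.
        print(resolve_via(MY_DNS_SERVER, "example.com"))
    ```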