I don’t get it. The key still gets declared, but its value is null. Accessing “name” on an empty object would return undefined, not null, correct?
(Yes, this joke whooshed, but I am curious now.)
It was on old 3.5" drives a long time ago, before anything fancy was ever built into the drives. It was in a seriously rough working environment anyway, so we saw a lot of failed drives. If strange experiments (mainly for lulz) didn’t get the things working, the next option was to see if a sledgehammer would fix the problem. Funny thing… that never worked either.
I used to take failed drives while they were powered on and snap them around with a fast twisting motion in an attempt to get the arm to move or the platters spinning.
It never worked.
Did you get bad sectors? Weird things can absolutely happen but having sectors marked as bad is on the exceptional side of weird.
Maybe? Bad cables are a thing, so it’s something to be aware of. USB latency, in rare cases, can cause problems but not so much in this application.
I haven’t looked into the exact ways that bad sectors are detected, but it probably hasn’t changed too much over the years. Needless to say, info here is just approximate.
However, marking a sector as bad generally happens at the firmware/controller level. I am guessing that a write is quickly followed by a verification, and if the controller sees an error, it will just remap that particular sector to a spare. Since drives store ECC alongside each sector, a failed read correction may be enough to flag a sector without a separate write test.
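To make the write-then-verify idea concrete, here is a toy sketch of what a controller’s remapping logic might look like. Everything here (sector numbers, the spare pool, the failure check) is invented for illustration; real firmware works far below this level.

```python
# Toy model of write-then-verify remapping, as a drive controller might do it.
# All details are invented for illustration; real firmware is much lower-level.

class ToyController:
    def __init__(self, spare_sectors):
        self.remap = {}               # logical sector -> spare sector
        self.spares = list(spare_sectors)
        self.media = {}               # physical sector -> stored data
        self.flaky = set()            # physical sectors that corrupt writes

    def _physical(self, sector):
        return self.remap.get(sector, sector)

    def write(self, sector, data):
        phys = self._physical(sector)
        # Simulate the write; a flaky sector corrupts the data.
        self.media[phys] = data if phys not in self.flaky else b"garbage"
        # Verify step: read back and compare. On mismatch, remap to a spare.
        if self.media[phys] != data and self.spares:
            spare = self.spares.pop(0)
            self.remap[sector] = spare
            self.media[spare] = data  # retry the write on the spare sector

    def read(self, sector):
        return self.media[self._physical(sector)]
```

Once a sector is remapped, every later access to that logical sector transparently lands on the spare, which is why the OS above never has to know.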
Tools like CHKDSK likely step through each sector and perform read tests, or just tell the controller to run whatever test it does on each sector.
My point is that OS-level interference or bad cables are unlikely to cause the controller to mark a sector as bad. Now, if bad data gets written to disk because of a bad cable, the controller shouldn’t care. It just sees data and writes data. (That would be rare as well, but possible.)
What you will see is latency. USB can be orders of magnitude slower than SATA; the buffers and wait states introduced by the speed difference cause this. That latency isn’t going to cause physical problems though.
My overall point is that there are several independent software and firmware layers that need to be completely broken for a SATA drive to erroneously mark a sector as bad due to a slow conversion cable. Sure, it could happen and that is why we have software that can attempt to repair bad sectors.
Sorry, my points were mixed unintentionally.
I agree. I stay away from JVMs because they are a pain in the ass to administer and, like you said, are usually coded by the lowest bidder.
In a well maintained environment, I have nothing against JVMs actually.
I was just bitching about the Spring framework family. While security updates are frequent, Java apps tend to not age well and commonly suffer from version lock-in. (I am going through a round of that at my current job, with Spring auth stuff being the offender.)
I can agree with that. There isn’t anything wrong with diversity as long as the entire ecosystem benefits from it. There are pros and cons, but not really worth going into that here.
At the end of the day, this is the fediverse. If someone wants to write instance code in COBOL to run on a toaster, you go right ahead! (It doesn’t mean I am going to support that effort, but my own personal opinion is insignificant in the whole scheme of things.)
My reaction on this is: Whatever.
I have heard strange things about Lemmy development in general, so it makes sense that something else would pop up eventually. Java though? I avoid JVMs like the plague and the security track record for spring* is spotty at best.
Still, if some people prefer it that way, whatever.
Cool. Now go post this in the community where they are rewriting Lemmy in Java.
Just highlighting a small bit for redundancy: Make sure the correct permissions are set in your .ssh folder!
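For reference, the modes sshd’s StrictModes generally expects are 700 on ~/.ssh itself, 600 on private keys and authorized_keys, and 644 on public keys. A small sketch that applies them; the directory and filenames here are just examples:

```python
# Apply the permissions sshd's StrictModes expects. The path and file layout
# are examples - adjust for your own key names.
import os
from pathlib import Path

def fix_ssh_perms(ssh_dir=Path.home() / ".ssh"):
    os.chmod(ssh_dir, 0o700)          # only the owner may enter the directory
    for f in ssh_dir.iterdir():
        if f.suffix == ".pub":
            os.chmod(f, 0o644)        # public keys may be world-readable
        else:
            os.chmod(f, 0o600)        # private keys, authorized_keys, config
```

The same thing in one line of shell is `chmod 700 ~/.ssh && chmod 600 ~/.ssh/*` followed by loosening the `.pub` files, but the function form is easier to drop into a provisioning script.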
I have seen people use pens and tablets for sculpting in Blender so that is an option. For my CAD work I do have a SpaceMouse but that is only really useful for large projects.
Until we get holographic projection (Iron Man style) I am not quite sure what a 3D system would look like, TBH.
FreeCAD is fairly good. Some of the controls are a bit wonky, but that is just a minor gripe. If you are starting out on FreeCAD, that doesn’t matter so much. FreeCAD is good to know if you design components for KiCad as well.
Parametric modeling is fucking awesome, btw. I am not quite sure how old that concept is though.
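For anyone who hasn’t run into it: the core idea is that geometry is driven by named parameters, so changing one value updates everything downstream. A conceptual toy, not FreeCAD’s actual Python API:

```python
# Conceptual illustration of parametric modeling: dimensions derive from
# named parameters, so changing one parameter updates the whole model.
# This is NOT FreeCAD's API - just the idea, stripped to the bone.

class Plate:
    def __init__(self, width, height, hole_margin):
        self.width = width
        self.height = height
        self.hole_margin = hole_margin

    @property
    def hole_centers(self):
        # Four mounting holes, each inset by hole_margin from a corner.
        # Recomputed on access, so they always track the current parameters.
        m = self.hole_margin
        return [(m, m), (self.width - m, m),
                (m, self.height - m), (self.width - m, self.height - m)]

plate = Plate(width=100, height=60, hole_margin=5)
plate.width = 120   # change one parameter...
# ...and the dependent geometry (hole_centers) follows automatically.
```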
I agree with your main point. Python does a great job of replacing lots of tiny, chained scripts. Simple API calls with wget or curl have a place, but they can spiral out of control quickly if you need to introduce any degree of control, like pagination, for example.
Maintaining one Python app (or “script”) can still adhere to the Unix philosophy of simplicity, but it can bend some rules as far as monolithic design is concerned if you aren’t careful.
It all boils down to whether you are introducing complexity or reducing it, IMHO.
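To make the pagination point concrete: a curl one-liner can’t easily follow a paged API, while a small loop handles it cleanly. `fetch_page` here is a hypothetical stand-in for whatever HTTP call you’d actually make, and the page/cursor shape is invented:

```python
# Sketch of a pagination loop - the kind of control flow that turns a simple
# curl call into a script. fetch_page is a stand-in for a real HTTP call
# (e.g. via urllib.request); the items/next-cursor response shape is invented.

def fetch_all(fetch_page):
    """Collect items across pages until the API stops returning a cursor."""
    items, cursor = [], None
    while True:
        page = fetch_page(cursor)     # e.g. GET /items?cursor=...
        items.extend(page["items"])
        cursor = page.get("next")     # opaque cursor for the next page
        if cursor is None:
            return items
```

The loop itself stays simple; the complexity curl can’t absorb (retries, rate limits, auth refresh) would bolt onto `fetch_page` without touching this function.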
Meh, I didn’t mean to hate on DHCP. It’s just a service I have learned to keep running all by itself somewhere in a dark corner of my network. DNS and DHCP are just services that I don’t like going down. Ever.
DHCP is a really stupid* service for the most part. Unless you are working with multiple subnets or have some very specific settings you need to pass to your clients, it’s probably not worth it to manage it yourself.

I don’t want to discourage you though! Assigning static IP addresses by MAC can be extremely useful and is not always an option on routers. If you want static names and dynamic addresses, that is really where you need to manage both DNS and DHCP. It really depends on how and where you want names to be resolved and what you are trying to accomplish.

(*Stupid as in, it’s a really simple service. You want it simple because when DHCP breaks, you have other serious issues going on.)
Setting up your own DNS is worth its weight in gold. You can put it just about anywhere on your network (before your gateway, after it, in China, whatever) and your network won’t even know the difference if set up correctly. You can point BIND at the root servers and bypass your ISP completely if you want. ISP DNS services suck ass, so regardless of whether you resolve names yourself or forward all queries to your anon DNS server of choice, you get a really decent level of control over your network. It is the service to learn if you want to keep an eye on where your network wants to talk.
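As one concrete way to run both services together, a minimal dnsmasq config covers the static-lease-by-MAC and the forward-to-your-chosen-resolvers cases mentioned above. All addresses, MACs, and names here are placeholders:

```
# Minimal dnsmasq sketch - addresses/MACs/domain are placeholders.

# DHCP: hand out leases on the LAN, 12-hour lifetime
dhcp-range=192.168.1.100,192.168.1.200,12h

# Pin a static address to a MAC (the "static by MAC" case)
dhcp-host=aa:bb:cc:dd:ee:ff,nas,192.168.1.10

# DNS: ignore /etc/resolv.conf (i.e. the ISP) and forward to servers you pick
no-resolv
server=1.1.1.1
server=9.9.9.9

# Resolve local hostnames for DHCP clients under this domain, and never
# forward queries for it upstream
domain=home.lan
local=/home.lan/
```

dnsmasq is the lightweight option; BIND plus ISC Kea is the heavier split if you want full recursive resolution against the root servers.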
Your Unifi USG must play nice with your own server; that’s just how DNS works. There may be some nuances when it comes to legacy name services like WINS, but other than that, it should be just fine.
To answer your actual question: I would set up a simple VM somewhere first. It’s good practice to keep core services isolated on their own, dedicated instances, to speed up recovery and minimize downtime. Even on your home network, DNS and DHCP are services you do not want going down. It’s always a pain when they do.
Ditto. I was in an unlucky block of dynamic IPs from my ISP once. Not only was sending or receiving email out of the question, my IP addresses were somehow part of firewall blacklists as well. I couldn’t get to banks at all and tons of random places were just dropping my traffic. It was a serious pain.
Ok, I admit I don’t understand the humor. My immediate response was, “sounds about right, because of how these things happen.” (I can be kinda dumb like that sometimes.)
Security advisories may not be announced until a patch is available. If this is in regards to FreeBSD-SA-24:08.openssh, a patch was available the day before it was announced and then refined for prod over the next few days: https://www.freebsd.org/security/advisories/FreeBSD-SA-24:08.openssh.asc
The timing of this stuff is always wonky, and it doesn’t look like it hit a couple of news places until today, about a week after: https://cyberpress.org/vulnerability-in-openssh-freebsd/