Mine (Thunder) doesn't recognize tagging the code block as a specific syntax; it just shows it as a preformatted block, with no highlighting.
med@sh.itjust.works to Open Source@lemmy.ml • Sectigo's Wrongful Revocation of RustDesk's EV Certificate: A Concerning Precedent for the Software Security Ecosystem (English · 2 points · 12 days ago)

If you've ever had to go through the audit process CAs are subjected to, you'll know that staying within the compliance controls and keeping everything audit-ready is a massive chunk of your attention for a lot of the year.
Can I ask what client you’re using?
You are right to be afraid. I had a similar story, and am still recovering and sorting out what data is recoverable. I nearly lost the media from my daughter's life between ages 0.5 and 1.5 this way.

As others have said, don't replicate your existing backup. Do two independent backups, preferably on different media, e.g. spinning disk and SSD.

If one backup is corrupted, or something nasty is introduced, replication means you lose both. This is one of the times it is appropriate to do the work twice.

I've built two backup mini PCs, and I replicate to them pretty continuously. Otherwise, look at something like BorgBase or its alternatives.

Remember: 3-2-1, and restore testing. It's not a backup unless you can restore it.
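A minimal sketch of what a restore test can look like with Borg (the repo path and archive name here are assumptions):

```bash
# List archives, then do a trial restore into a scratch directory.
borg list /mnt/backup/repo
mkdir -p /tmp/restore-test && cd /tmp/restore-test
borg extract /mnt/backup/repo::home-2024-12-01   # hypothetical archive name
# Spot-check the restored data against the live copy.
diff -r home/me/Documents ~/Documents | head
# Optionally, verify repository integrity end to end.
borg check --verify-data /mnt/backup/repo
```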
med@sh.itjust.works to Open Source@lemmy.ml • My thoughts on GPL vs. permissive licenses (English · 3 points · 2 months ago)

I have never understood this fork argument. All it takes to make it work is a clear division within the project.
If you want to make something that requires modifying the source of a GPL project you want to include, why not contribute that back to the source? Then keep everything in your project that isn't a modification of that piece separate, and license it appropriately. It's practically as simple as maintaining a submodule, as sketched below.
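A minimal sketch of that division, with a hypothetical upstream URL; the GPL component lives in its own repository and your separately licensed code wraps around it:

```bash
# Your project, under your own licence.
git init myapp && cd myapp
# Pull the GPL component in as a submodule; fixes to it are committed
# inside vendor/gpl-widget and can be sent upstream as patches.
git submodule add https://example.com/gpl-widget.git vendor/gpl-widget
git commit -m "Vendor gpl-widget as a submodule"
```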
I'd like to believe this is purely a communication issue, but I suspect it's more often conflated with being a USP (unique selling point) and argued against as a potential liability.
These wasteful 're-write, don't clone' practices are enabled by the total lack of accountability for security on closed-source commercialised projects. I know I wouldn't be maintaining an analogue of a project if security updates were available from upstream.
med@sh.itjust.works to Selfhosted@lemmy.world • Do bots/scrapers check uncommon ports? (English · 1 point · 3 months ago)

Everything's a trade-off, as you already know. I still use Let's Encrypt, despite knowing that attackers watch CT logs and will know as soon as I mint a cert.
med@sh.itjust.works to Selfhosted@lemmy.world • 18% of people running Nextcloud don't know what database they are using (English · 1 point · 4 months ago)

Fair enough; I did assume the target audience was selfhosters, based on the question.
As for provider backups: well, you'd hope. But M$ doesn't offer user-accessible backups, so I'd be surprised if the average SaaS provider bundled them.
med@sh.itjust.works to Selfhosted@lemmy.world • 18% of people running Nextcloud don't know what database they are using (English · 21 points · 4 months ago)

And if you don't know what database you're running, how are you backing it up?
If you don’t know what database you’re running, are you bothering to do a full shutdown before backups? Are you doing backups at all…
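For Nextcloud specifically, finding out and dumping accordingly is a couple of commands; a sketch, assuming a stock install path and the usual credentials:

```bash
# Ask Nextcloud which database backend it was configured with.
sudo -u www-data php /var/www/nextcloud/occ config:system:get dbtype
# Then take a consistent dump with the matching tool, e.g.:
pg_dump -U nextcloud nextcloud > nextcloud-db.sql             # dbtype: pgsql
mysqldump --single-transaction nextcloud > nextcloud-db.sql   # dbtype: mysql
```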
I haven't tested the spouse approval factor, but once Radicale is set up, you don't have to do anything other than create new calendars through a CalDAV app or through the web front end.

Android can use DAVx⁵ to sync, if you're into FOSS stuff.

I pretty much only use it for tasks and a maintenance calendar, but I've had zero problems with it so far.
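For anyone curious what "once Radicale is set up" amounts to, a minimal sketch (paths, port, and username are assumptions):

```bash
# Create a user, point Radicale at the htpasswd file, and run it.
htpasswd -B -c /etc/radicale/users me
cat > /etc/radicale/config <<'EOF'
[server]
hosts = 0.0.0.0:5232

[auth]
type = htpasswd
htpasswd_filename = /etc/radicale/users
htpasswd_encryption = bcrypt

[storage]
filesystem_folder = /var/lib/radicale/collections
EOF
radicale --config /etc/radicale/config
```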
med@sh.itjust.works to Selfhosted@lemmy.world • Linkwarden (v2.11.0) - open-source collaborative bookmark manager to collect, organize, and preserve webpages, articles, and documents (tons of new features!) 🚀 (English · 2 points · 6 months ago)

All I need is for them to fix the public-collection RSS feed bug where, if you're behind a reverse proxy, they embed "https,http" in the feed XML, which breaks parsing.
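For what it's worth, a doubled value like "https,http" often comes from a proxy chain appending to an X-Forwarded-Proto header it received rather than replacing it. Purely as an assumption about the cause, a sketch of pinning it to a single value in nginx:

```bash
# Hypothetical config for the Linkwarden vhost; proxy_set_header
# replaces any inherited value rather than appending to it.
cat >> /etc/nginx/conf.d/linkwarden.conf <<'EOF'
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
EOF
nginx -t && systemctl reload nginx
```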
and has integration for Oxidized, SmokePing, Graylog and more
med@sh.itjust.works to Selfhosted@lemmy.world • I'm guilty of not reading the f..ing documentation (English · 19 points · 7 months ago)

Yes. But also, despite having done it literally thousands of times, I still can't tell you on the first go which way round to put the target and the link name for a symlink.
My first guess is always `ln -s $NAME $TARGET`. No amount of repetition will fix this.
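For anyone else with the same affliction, the actual order, with placeholder paths:

```bash
# ln -s TARGET LINK_NAME: the thing that already exists comes first,
# the new name you are creating comes second.
ln -s /data/photos ~/photos    # creates ~/photos pointing at /data/photos
ls -ld ~/photos                # photos -> /data/photos
```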
Sounds like you have reason to bump it up the list now - two birds with one stone.
I need to do this too. I know I have stuff deployed with plaintext secrets in .env files, or even in the compose file. I'll never get time to audit everything, so the more I can make the baseline deployment safe, the better.
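One way to raise that baseline, sketched with a hypothetical image name: file-based Compose secrets instead of plaintext environment variables, for images that support the _FILE convention:

```bash
mkdir -p secrets && chmod 700 secrets
echo 'changeme' > secrets/db_password.txt   # placeholder value
cat > compose.yaml <<'EOF'
services:
  app:
    image: example/app          # hypothetical image
    environment:
      # many images accept a *_FILE variant that reads the secret from disk
      DB_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt
EOF
```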
You’re a monster. My scps would go nowhere
It’s the right move.
I tell you, the first time you're sat in front of a CEO and an auditor, having to explain why the big list of servers has a highlighted one called C-NT-PRIK-5, is when the fun stops.
Explaining that it's short for 'customer network tester Mr. Prickles 5', and is actually a Cacti server, never really seems to help the situation.
At least a few of the customers got a laugh out of it being on the reports!
Username checks out
You had me digging through old hosts files and ssh configs to find some of these.
I try to name them something that resembles what they do, or at least hints at their purpose.
Short is good, and if the name can tie into more than one of the machine's purpose, OS, software, or look, all the better.
If it’s some sort of personal machine, it gets a personal name
Phones
- traveller
- pawn
- rook
- bishop
Virtual Workstations
- boxy
- moxy
- sandbox
- cloud
- ship (LXC container host)
- dock (Docker host)
Laptops
- ciel (Razer Blade Stealth with a rainbow LED keyboard)
- arc (runs Arch)
- lled (a Dell)
Desktops
- bench
- citadel
- bastion
med@sh.itjust.works to Selfhosted@lemmy.world • What skills are needed to self host without too many headaches? (English · 8 points · 11 months ago)

Lots of people have been talking about products and tools: Docker, Tailscale, Cloudflare, Proxmox, etc. These are important, but on a long enough timescale they will likely come and go.
In terms of actual skills, there are two that will dramatically decrease your headaches: documentation and backup planning. The problem with developing those skills is that, to my knowledge, they've only ever been obtained through suffering. Trying to remember how to rebuild something you built six months ago is futile. Trying to recover borked data is brutal. There's no fail-safe that you haven't created, and there's no history that you haven't written. Fortunately, these are also the most transferable skills.
My advice is: jump in. Don't hesitate. The chops in Docker/Linux/networking will come with use and familiarity. If it looks cool, do it. Make mistakes. You will rapidly realise what the problems with your setup are. You will gain knowledge in leaps and bounds from breaking a thing, versus learning by rote or lesson. Reframe the headaches as a feature, not a bug: they're highlighting holes in your understanding. They signpost the way to being a better tech, and to a more stable production environment.
The greatest bit about self hosting, for me, is planning the next great leap forward: making it better, cleaner, more robust, and growing confidence in your ability to create a system you can trust. Honing your skills and toolset is the entirety of the exercise, so jump in, and don't focus on mastering any one thing beforehand!
med@sh.itjust.works to Programmer Humor@lemmy.ml • Recently developed a framework for creating text based user interfaces with a fun bird theme! I call it... (0 points · 1 year ago)

It also sounds like clearing your throat, then spitting!
Haugck - Tooie!
Edit: and now I see that was the joke
I was trying to finalize a backup device to gift to my dad over Christmas. We’re planning to use each other for offsite backup, and save on the cloud costs, while providing a bridge to each other’s networks to get access to services we don’t want to advertise publicly.
It is a Beelink ME Mini running Arch, with btrfs on LUKS for the OS on the eMMC storage and the fTPM handling the decryption automatically.
I have built a few similar boxes since and migrated the build over to Ansible, but this one was the proving ground and template for them. It was missing some of the other improvements I had built into the deployed boxes, notably:
I don't know what possessed me, but I decided the question marks and open tasks in my original build documentation should be investigated as I brought this box up to spec. I was hoping to export some more specific configuration to Ansible for the other boxes once done, but I was going to migrate manually first, to learn some lessons.
I wasn't sure about bothering with UKI. I wanted ZFS running, and that meant moving to the linux-lts kernel package on Arch.
Given systemd-boot's (currently) superior support for owner keys, boot-time unlocking and direct EFI boot, I've been using that. However, it works differently with plain kernels than with UKIs: plain kernels use a loader entry file to point at the correct locations for the initramfs and the kernel, which is what existed on this box.
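For illustration, a minimal sketch of such a loader entry; the file names and LUKS UUID are placeholders:

```bash
cat > /boot/loader/entries/arch-lts.conf <<'EOF'
title   Arch Linux (LTS)
linux   /vmlinuz-linux-lts
initrd  /initramfs-linux-lts.img
options rd.luks.name=<uuid>=root root=/dev/mapper/root rw
EOF
```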
I installed the linux-lts package; all good. I removed the linux kernel package, and something in the pacman hooks failed: the auto-signing process for the Secure Boot setup couldn't find the old kernel files when it regenerated my initramfs, but happily signed the new LTS ones. Cool, I thought, I'll remove the old ones from the database and re-enroll my OS drive with systemd-cryptenroll after booting on the new kernel (the PCRs I'm using would be different on a new kernel, so auto-decrypt wouldn't work anyway).
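That re-enrolment step, sketched with an assumed device path and PCR selection:

```bash
# Wipe the stale TPM2 binding, then enroll against the new kernel's PCRs.
systemd-cryptenroll --wipe-slot=tpm2 /dev/mmcblk0p2
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0+7 /dev/mmcblk0p2
```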
So, just to be sure, I regenerated my initramfs and kernel with mkinitcpio -p linux-lts, everything seemed fine, and I rebooted. I was greeted with:

Reboot to firmware settings

as my only boot option. Sigh.
Still, I was determined to learn something from this. After a good long while reading the Arch wiki and mucking about with bootctl (a PITA on a live-CD-booted system), I thought to check my other machines. I was hoping to find a bootctl loader entry for the LTS kernel on one of them that I could copy to this machine, at least to prove to myself that I had sussed the problem.
After checking, I realised that none of the other, newer machines had a loader configuration actually specifying where the kernel and initramfs were. I was so lost. How the fuck is any of this working?
Well, it turns out that if you have UKI set up, as described, it bundles all the major bits (the kernel, microcode, initramfs and boot config options) into one directly EFI-bootable file, which bootctl detects automatically when installed correctly. All my other machines had UKI set up and I'd forgotten; that was how it was working. Unfortunately, I had used archinstall to set up UKI, and I had no idea how it had done it. There was a line in my docs literally telling me to go check this out before it bit me in the ass…
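That auto-detection is easy to confirm once you know to look; assuming the ESP is mounted at /efi:

```bash
# UKIs dropped in the ESP's EFI/Linux directory need no loader entry;
# systemd-boot discovers them on its own.
ls /efi/EFI/Linux/
bootctl list    # shows them as auto-discovered "type #2" entries
```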
…
…
So, after that sidetrack, I did actually prove that the kernel could be specified in that bootctl loader entry. Then I was able to figure out how I'd done the UKI piece on the other machines, applied it to this one so it matched, and updated my docs…
…
UKI configuration lives in the default mkinitcpio configs, but needs changing to make it work.
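Specifically, the switch lives in the kernel's preset file; a minimal sketch, assuming stock Arch paths:

```bash
# Comment out default_image and point default_uki at the ESP instead,
# then mkinitcpio emits a UKI rather than a bare initramfs.
cat > /etc/mkinitcpio.d/linux-lts.preset <<'EOF'
ALL_kver="/boot/vmlinuz-linux-lts"
PRESETS=('default')
#default_image="/boot/initramfs-linux-lts.img"
default_uki="/efi/EFI/Linux/arch-linux-lts.efi"
EOF
mkinitcpio -p linux-lts
```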
…
Turns out my Christmas wish came true: I learned I need to keep better notes.