• 0 Posts
  • 33 Comments
Joined 3 years ago
Cake day: June 23rd, 2023

  • I was trying to finalize a backup device to gift to my dad over Christmas. We’re planning to use each other for offsite backup, and save on the cloud costs, while providing a bridge to each other’s networks to get access to services we don’t want to advertise publicly.

    It is a Beelink ME Mini running arch: btrfs on luks for the os on the eMMC storage, with the fTPM handling decryption automatically.
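    The fTPM auto-unlock piece boils down to roughly this (the device path here is an illustrative eMMC partition, not necessarily mine; PCR choices vary):

```
# Sketch: bind the LUKS volume to the TPM so it unlocks automatically at boot.
# /dev/mmcblk0p2 is illustrative; PCR 7 tracks Secure Boot state.
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/mmcblk0p2
```

    The volume's entry in /etc/crypttab then needs `tpm2-device=auto` in its options column so it actually uses the TPM at boot.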

    I have built a few similar boxes since and migrated the build over to ansible, but this one was the proving ground and template for them. It was missing some of the other improvements I had built in to the deployed boxes, notably:

    • zfs on luks on the NVMe drives
    • the linux-lts kernel (zfs compatibility)
    • UKI for the secureboot setup

    I don’t know what possessed me, but I decided that the question marks and tasks in my original build documentation should be investigated as I went. I was hoping to export some more specific configuration to ansible for the other boxes once done, so I planned to migrate manually and learn some lessons.

    I wasn’t sure about bothering with UKI. I wanted zfs running, and that meant moving to the linux-lts kernel package for arch.

    Given systemd-boot’s superior (at current time) support for owner keys, boot time unlocking and direct efi boot, I’ve been using that. However, it works differently if you use plain kernels, compared to if you use UKI. Plain kernels use a loader file to point to the correct locations for the initramfs and the kernel, which existed on this box.
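    For reference, a plain-kernel loader entry looks something like this (title, paths and options are illustrative, not my exact file):

```
# /boot/loader/entries/arch.conf — sketch of a plain-kernel entry
title   Arch Linux
linux   /vmlinuz-linux
initrd  /initramfs-linux.img
options root=/dev/mapper/root rw
```

    With UKI, none of this exists; the whole thing is one .efi file that systemd-boot discovers on its own.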

    I installed the linux-lts package, all good. I removed the linux kernel package, and something in the pacman hooks failed. The autosigning process for the secure-boot setup couldn’t find the old kernel files when it regenerated my initramfs, but happily signed the new lts ones. Cool, I thought, I’ll remove the old ones from the database, and re-enroll my os drive with systemd-cryptenroll after booting on the new kernel (the PCRs I’m using would be different on a new kernel, so auto-decrypt wouldn’t work anyway.)
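    The re-enroll step I had in mind was roughly this (device path is illustrative, and which PCRs you bind to depends on your own policy):

```
# Wipe the stale TPM2 slot, then re-enroll against the new kernel's PCR values
systemd-cryptenroll --wipe-slot=tpm2 /dev/nvme0n1p2
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2
```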

    So, just to be sure, I regenerated my initramfs with mkinitcpio -p linux-lts, everything worked fine, and rebooted. I was greeted with:

    Reboot to firmware settings
    

    as my only boot option. Sigh.

    Still, I was determined to learn something from this. After a good long while of reading the arch wiki and mucking about with bootctl (PITA in a live CD booted system) I thought about checking my other machines. I was hoping to find a bootctl loader entry that matched the lts kernel I had on other machines, and copy it to this machine to at least prove to myself that I had sussed the problem.
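    From a live environment, the inspection goes something like this (the ESP partition path is illustrative):

```
# Mount the ESP from the live system and ask systemd-boot what it can see
mount /dev/nvme0n1p1 /mnt
bootctl --esp-path=/mnt list     # show detected loader entries and UKIs
bootctl --esp-path=/mnt status   # check the installed systemd-boot version
```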

    After checking, I realised no other newer machine had a loader configuration actually specifying where the kernel and initram were. I was so lost. How the fuck is any of this working?

    Well, it turns out, if you have UKI set up, as described, it bundles all the major bits together — the kernel, microcode, initramfs and boot config options — into one directly efi-bootable file, which is automatically detected by bootctl when installed correctly. All my other machines had UKI set up and I’d forgotten. That was how it was working. Unfortunately, I had used archinstall for setting up UKI, and I had no idea how it was doing it. There was a line in my docs literally telling me to go check this out before it bit me in the ass…

    • [x] figure out what makes uki from archinstall work ✅ 2025-09-19
    • It was systemd-ukify

    So, after that sidetrack, I did actually prove that the kernel could be described in that bootctl loader entry. Then I figured out how I’d done the UKI piece on the other machines, applied it to this one so it matched, and updated my docs…

    • IT WASN’T ukify

    UKI configuration is already present in mkinitcpio’s default preset files, but needs changing to take effect:

    vim /etc/mkinitcpio.d/linux-lts.preset 
    

    Turns out my Christmas wish came true, I learned I need to keep better notes.





  • You are right to be afraid. I had a similar story, and am still recovering and sorting out what data is salvageable. I nearly lost the media from ages 0.5–1.5 of my daughter’s life this way.

    As others have said, don’t replicate your existing backup. Do two backups. Preferably on different media, e.g. spinning disk and SSD.

    If you replicate and one backup is corrupted or something nasty is introduced, you will lose both. This is one of the times it is appropriate to do the work twice.

    I’ve built two backup mini PCs, and I replicate to them pretty continuously. Otherwise, look at something like BorgBase or alternatives.

    Remember, 3-2-1 and restore testing. It’s not a backup unless you can restore it.
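    A toy illustration of what I mean by restore testing — back up, restore somewhere else, and diff (all paths here are throwaway stand-ins for your real data and backup target):

```shell
#!/bin/sh
set -e
# Toy restore test: take a backup, restore it to a scratch dir, verify it matches.
src=$(mktemp -d); backup=$(mktemp -d); restore=$(mktemp -d)
echo "family photos" > "$src/photo.txt"

tar -czf "$backup/snapshot.tar.gz" -C "$src" .    # take the "backup"
tar -xzf "$backup/snapshot.tar.gz" -C "$restore"  # the restore test
diff -r "$src" "$restore" && echo "restore OK"
```

    The same shape applies with borg, restic, or whatever tool you use: restore to a scratch location and compare, on a schedule, before you need it for real.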


  • I have never understood this fork argument. All it takes to make it work is a clear division for the project.

    If you want to make something, and it requires modification of the source for a GPL project you want to include, why not contribute that back to the source? Then keep anything that isn’t a modification of that piece of your project separately, and license it appropriately. It’s practically as simple as maintaining a submodule.
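    The division is literally this simple — the repos below are local stand-ins created purely for illustration (in practice the GPL fork lives upstream or in your public mirror):

```shell
#!/bin/sh
set -e
# Sketch: keep GPL-licensed modifications in their own repo and vendor them
# as a submodule; everything else stays separately licensed.
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for the GPL project carrying your contributed patches
git init -q gpl-lib
git -C gpl-lib -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "GPL source plus your modifications"

# Your own project, licensed however you like
git init -q my-app
cd my-app
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# Vendor the GPL piece as a submodule instead of copying code in
git -c protocol.file.allow=always submodule add -q ../gpl-lib vendor/gpl-lib
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "Track patched gpl-lib as a submodule"

git submodule status   # shows vendor/gpl-lib pinned to a commit
```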

    I’d like to believe this is purely a communication issue, but I suspect it’s more likely conflated with being a USP and argued as a potential liability.

    These wasteful practices of ‘re-writing and not-cloning’ are facilitated by a total lack of accountability for security on closed-source commercialised projects. I know I wouldn’t be maintaining an analogue of a project if there were security updates available from upstream.





  • med@sh.itjust.works to Selfhosted@lemmy.world · Calendar app · 6 months ago

    I haven’t tested the spouse approval factor, but once Radicale is setup, you don’t have to do anything other than create new calendars through a caldav app, or through the web front end.
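    The setup really is minimal — something like this (paths and auth choice are illustrative, not my exact config):

```
# /etc/radicale/config — minimal sketch
[server]
hosts = 0.0.0.0:5232

[auth]
type = htpasswd
htpasswd_filename = /etc/radicale/users
htpasswd_encryption = bcrypt

[storage]
filesystem_folder = /var/lib/radicale/collections
```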

    Android can use DAVx⁵ to sync if you’re into foss stuff

    I pretty much only use it for tasks and a maintenance calendar, but I’ve had zero problems with it so far





  • Sounds like you have reason to bump it up the list now - two birds with one stone.

    I need to do this too. I know I have stuff deployed that has plaintext secrets in .env or even the compose. I’ll never get time to audit everything. So the more I make the baseline deployment safe, the better.
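    Making the baseline safer can be as simple as moving the secret out of the compose/.env — e.g. with compose’s secrets support (service and paths here are made up for illustration):

```yaml
# Sketch: reference a secret file instead of a plaintext environment value
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt   # keep this file out of version control
```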



  • It’s the right move.

    I tell you, the first time you’re sat in front of a CEO and an auditor and you have to explain why the big list of servers has a highlighted one called C-NT-PRIK-5 is when the fun stops.

    Explaining that it’s short for ‘customer network tester Mr. Prickles 5’, and is actually a cacti server never really seems to help the situation.

    At least a few of the customers got a laugh out of it being on the reports!



  • You had me digging through old hosts files and ssh configs to find some of these.

    I try to give them names that reflect what they do or what their purpose is.

    Short is good, and the more of the machine’s purpose/os/software/look a name matches, the better.

    If it’s some sort of personal machine, it gets a personal name

    Phones

    • traveller
    • pawn
    • rook
    • bishop

    Virtual Workstations

    • boxy

    • moxy

    • sandbox

    • cloud

    • ship lxc container host

    • dock docker host

    Laptops

    • ciel Razer blade stealth with a rainbow LED keyboard
    • arc runs arch.
    • lled is a dell

    Desktops

    • bench
    • citadel
    • bastion

  • Lots of people have been talking about products and tools. It’s docker, tailscale, cloudflare, proxmox, etc. These are important, but will likely come and go on a long enough timescale.

    In terms of actual skills, there’s two that will dramatically decrease your headaches. Documentation and backup planning. The problem with developing those skills is, to my knowledge, they’ve only ever been obtained through suffering. Trying to remember how to rebuild something when you built it 6 months ago is futile. Trying to recover borked data is brutal. There’s no fail-safe that you haven’t created, and there’s no history that you haven’t written. Fortunately, these are also the most transferable skills.

    My advice is, jump in. Don’t hesitate. The chops in docker/linux/networking will come with use and familiarity. If it looks cool, do it. Make mistakes. You will rapidly realise what the problems with your setup are. You will gain knowledge in leaps and bounds from breaking a thing vs learning by rote or lesson. Reframe the headaches as a feature, not a bug - they’re highlighting holes in your understanding. They signpost the way to being a better tech, and a more stable production environment.

    The greatest bit about self hosting for me is planning the next great leap forward, making it better, cleaner, more robust. Growing the confidence in your abilities to create a system you can trust. Honing your skills and toolset is the entirety of the exercise, so jump in, and don’t focus on any one thing to master or practice beforehand!