The older I get, the less time passes between starting a new project and reading the readme / manpages for a library.
If you don’t need to host it but can run it locally, GPT4All is nice: it has several models to download and plug and play with, each with a description of its purpose, and it doesn’t require a GPU.
I self-host services as much as possible for multiple reasons: learning, staying current with so many technologies through hands-on experience, and security / peace of mind. Knowing my 3-2-1 backup solution covers my entire infrastructure makes me feel far less pressure to hand my data to unknown entities, no matter how trustworthy, and gives me the peace of mind of controlling every step of the process and knowing how to troubleshoot and fix problems. I’m not an expert and rely heavily on online resources to get me to a comfortable spot, but I also don’t feel helpless when something breaks.
If the choice is between trusting an encrypted backup of all my sensitive passwords, passkeys, and recovery information to someone else’s server, or having to restore a machine, container, VM, etc. from a backup after a critical failure, I’ll choose the second, because no matter how well something is encrypted, someone somewhere will be able to break it given enough time. I don’t care if breaking it would take millennia even with hardware acceleration or quantum computers. Not having that payload out in the wild at all is the only way to prevent it being cracked.
I’d prefer GNU’s ddrescue, just because I find it more robust and it has better progress output. The interface is functionally the same, but it lets you use a mapfile to resume a session should anything interrupt the copy.
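A minimal sketch of that workflow (device names and the mapfile name are placeholders, so double-check them before overwriting anything):

# First pass: grab everything readable quickly, skipping the slow scraping phase
ddrescue -f -n /dev/sdX /dev/sdY rescue.map
# Second pass: retry the remaining bad areas a few times; because of the mapfile,
# an interrupted run picks up exactly where it left off
ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map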
Arguably I’m against this because you never know what’s going to happen, and the conventional wisdom for appliances like this is to back up any important configs, back up your containers and VMs, then do a fresh install from the latest install media on the new disk followed by a restore of the backups. It might take a little more time, but it’s negligible, and it gives you an opportunity to review your current configs, make any necessary changes, and confirm your backups are working as intended.
I have the same model, powering 3 machines with an average load of ~125 W when it switches to battery power. I run a NUT host on one of the servers, which broadcasts the outage to the other machines; the whole stack shuts down after 30 seconds and switches off the UPS at the very end. It’s been through 4 or 5 true power events now, and double that in testing (overzealous, I know), but the UPS is 2.5 years old and doing just fine. I keep a spare battery because I heard ~3 years is normal, but so far there’s no indication it’s reaching replacement yet.
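For reference, a rough sketch of that kind of setup with NUT (the UPS name, credentials, and paths here are made up; the 30-second delay goes through upssched, and the other machines just point their MONITOR line at the NUT host instead of localhost):

# /etc/nut/upsmon.conf on the NUT host ("primary" is "master" on older NUT versions)
MONITOR myups@localhost 1 upsmon secretpass primary
NOTIFYCMD /usr/sbin/upssched
NOTIFYFLAG ONBATT SYSLOG+EXEC
NOTIFYFLAG ONLINE SYSLOG+EXEC
SHUTDOWNCMD "/sbin/shutdown -h +0"
POWERDOWNFLAG /etc/killpower

# /etc/nut/upssched.conf: start a 30-second timer when power drops, cancel it if power returns
CMDSCRIPT /etc/nut/upssched-cmd
PIPEFN /run/nut/upssched.pipe
LOCKFN /run/nut/upssched.lock
AT ONBATT * START-TIMER onbatt 30
AT ONLINE * CANCEL-TIMER onbatt

# /etc/nut/upssched-cmd: when the timer fires, tell upsmon to shut the stack down
#!/bin/sh
case "$1" in
    onbatt) upsmon -c fsd ;;   # the primary powers off the UPS as its last step
esac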
I think the important thing with these is to not run them down to 0. They’re really only good for one event at a time and shouldn’t constantly be switching over without close to a full day of recharging in between (it’s more like 16h to fully recharge).
I can see consistent brownouts and events being a problem for these little machines. I’m planning on upgrading to a rack solution soon and relegating this one to my desktop in the other room (with a fresh battery of course).
Fun tidbit: DuckDuckGo has a bang for it; I use it all the time.
!a2 <program name you want to replace>
A2 has been changing a lot over the years. I’ve found its UX to be heading in the wrong direction, and it feels like it’s on a path toward heavy ad monetization and eroded trust. For now it still seems fine, but it does list alternatives to itself, which could use some love and support along the way as A2 grows.
Or just make a bunch of static helpers >:)
What are the features you need from your host? If it’s just remote syncing, why not set up a small Debian system and install git on it? You can manage security on the box itself. Do you need the overhead of GitLab at all?
I say this because I did try out hosting my own GitLab, Gitea, Gogs, etc., and I just found I never needed any of the features. The whole point was to have a single remote that can be backed up and redeployed easily in disaster situations; otherwise all my local work just needed simple tracking. I wrote a couple of scripts so my local machine can create new repos remotely, and I also set up an SSH key on the remote machine.
I don’t have a complicated setup (maybe you do, not sure), but I didn’t need the integrated features and overhead for solo self-hosting.
For example, one of my local machine scripts just executes a couple of commands on the remote to create a new folder, cd into it, and run
git init --bare
then I can just clone the new project folder on the local machine and get started.
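Roughly, the local helper looks something like this (the host alias and repo path are just placeholders for illustration):

#!/bin/sh
# newrepo <name>: create a bare repo on the remote, then clone it locally
set -eu
name="$1"
ssh git-host "mkdir -p ~/repos/${name}.git && cd ~/repos/${name}.git && git init --bare"
git clone "git-host:repos/${name}.git"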