Or just use the @CALL command to call them in order without having to guesstimate how long they run.
~# tar -h
tar: You must specify one of the '-Acdtrux', '--delete' or '--test-label' options
Try 'tar --help' or 'tar --usage' for more information.
***********************************************
WARNING: Self destruct sequence initiated
***********************************************
Yes, the terse Unix version, which needs to be supported for compatibility, and the more readable GNU long options.
Depends. Is it GNU tar, BSD tar or some old school Unix tar?
Double hyphen “long options” are a typical GNU thing.
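For example, these two are equivalent on GNU tar; the first is the terse traditional form, the second the GNU long-option spelling (the archive name is just an example):

tar -xzf backup.tar.gz
tar --extract --gzip --file=backup.tar.gz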
I mean… Young people don’t know things yet… Isn’t that normal?
Depends on the context I guess. If this is a professional IT context in which the 25yo is expected to be proficient enough on a Linux system to edit a text file, not knowing that vim exists is kinda sad.
Otherwise a second PiHole set as the secondary DNS in DHCP would keep things online.
No, that just creates timeouts and delays when either of them is offline.
The proper way is to have a standby pihole that takes over the IP address of the main pihole when it goes down. It’s quite easy to achieve this with keepalived.
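A minimal keepalived sketch, in case it helps (interface, router id, priority and the virtual IP are placeholders to adapt to your network; the standby gets the same block with state BACKUP and a lower priority):

vrrp_instance PIHOLE {
    state MASTER              # BACKUP on the standby pihole
    interface eth0            # adapt to your NIC
    virtual_router_id 53      # must match on both piholes
    priority 150              # lower value on the standby
    advert_int 1
    virtual_ipaddress {
        192.168.1.2/24        # the floating IP your DHCP hands out as the DNS server
    }
}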
Mental note: have to migrate my gitea instance over to forgejo.
If there isn’t one
Worse is if there is one but it says: [OPEN] Opened 7 years ago Updated 2 days ago, with a whole bunch of people commenting the equivalent of “me too” and listing the various things they tried to solve it, but no solution.
The encryption I was talking about is the encryption of your DNS server.
You mean encryption between the client and your DNS server, on your local network?
Just wanted to chime in and say that with a pihole you can also have encryption if you point to a local resolver like cloudflared or unbound.
My pihole forwards everything to a cloudflared service running on 127.0.0.1:5353 to encrypt all my outgoing DNS queries. It was really easy to set up: https://docs.pi-hole.net/guides/dns/cloudflared/
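The core of it boils down to running something like this and then pointing Pi-hole's custom upstream DNS at 127.0.0.1#5353 (the upstreams here are Cloudflare's; the linked guide covers wrapping it in a proper service):

cloudflared proxy-dns --port 5353 \
    --upstream https://1.1.1.1/dns-query \
    --upstream https://1.0.0.1/dns-query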
DNS-over-HTTPS
You can also do that by running cloudflared or unbound on your pihole.
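Rough sketch of the unbound variant, going from the Pi-hole docs by memory (port, file path and options are the usual defaults, double-check against the current guide): drop a file like /etc/unbound/unbound.conf.d/pi-hole.conf with roughly this content, then point Pi-hole's custom upstream at 127.0.0.1#5335.

server:
    verbosity: 0
    interface: 127.0.0.1
    port: 5335
    do-ip4: yes
    do-udp: yes
    do-tcp: yes
    harden-glue: yes
    harden-dnssec-stripped: yes
    edns-buffer-size: 1232
    prefetch: yes
    cache-min-ttl: 3600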
For me Gravity Sync was too heavy and cumbersome. It could never reliably copy over the gravity sqlite3 db file because of my slow rpi2 and SD card, apparently a known issue.
I wrote my own script to keep the most important things for me in sync: the DHCP leases, DHCP reservations, local DNS records and CNAMEs. It’s basically just rsync-ing a couple of files. As for the blocklists: I just manually keep them the same on both piholes, but that’s not a big deal because it’s mostly static information. My major concern was the pihole bringing DHCP and DNS resolution down on my network if it should fail.
Now with keepalived and my sync script that I run hourly, I can just reboot or temporarily shutdown pihole1 and then pihole2 automatically takes over DNS duties until pihole1 is back. DHCP failover still has to be done manually, but it’s just a matter of ticking the box to enable the server on pihole2, and all the leases and reservations will be carried over.
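Stripped down, the sync script is essentially a handful of rsync calls like this (these are Pi-hole v5 file locations on my boxes; treat the exact paths and hostname as assumptions and check where your version keeps them):

#!/bin/sh
# push local DNS records, CNAMEs, DHCP reservations and leases to the standby pihole
DEST=pihole2   # hostname of the standby
rsync -a /etc/pihole/custom.list "$DEST":/etc/pihole/
rsync -a /etc/dnsmasq.d/05-pihole-custom-cname.conf "$DEST":/etc/dnsmasq.d/
rsync -a /etc/dnsmasq.d/04-pihole-static-dhcp.conf "$DEST":/etc/dnsmasq.d/
rsync -a /etc/pihole/dhcp.leases "$DEST":/etc/pihole/
ssh "$DEST" 'pihole restartdns reload'   # make FTL pick up the new records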
That’s what I do. I do have a small VM that is linked to it in a keepalived cluster with a synchronized configuration that can take over in case the rpi croaks or in case of a reboot, so that my network doesn’t completely die when the rpi is temporarily offline. A lot of services depend on proper DNS resolution being available.
You can use log2ram to mitigate that.
Alternatively, you can even boot a root filesystem residing on an NFS share, but in the case of an rpi hosting the network’s DNS and DHCP services, you could end up with a chicken-and-egg problem.
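For anyone who wants to try log2ram, the install is basically this (the repo URL is the upstream project; the size value is just a suggestion):

git clone https://github.com/azlux/log2ram.git
cd log2ram
sudo ./install.sh
# optionally raise the ramdisk size in /etc/log2ram.conf, e.g. SIZE=128M
sudo reboot   # it only kicks in after a reboot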
I think it’s a good tool to have on your toolbelt, so it can’t hurt to look into it.
Whether you will like it or not, and whether you should move your existing stuff to it is another matter. I know us old Unix folk can be a fussy bunch about new fads (I started as a Unix admin in the late 90s myself).
Personally, I find docker a useful tool for a lot of things, but I also know when to leave the tool in the box.
Huh? Your docker container shouldn’t be calling pip for updates at runtime; you should consider a container immutable and ephemeral. Stop thinking about it as a mini VM. Build your container (presumably pip-ing in all the libraries you require) on the machine with full network access, then export or publish the container image and run it on the machine with limited access. If you want updates, you regularly rebuild the container image and repeat.
Alternatively, even at build time it’s fairly easy to use a proxy with docker, unless you have some weird proxy configuration. I use it here so that updates get pulled from a local caching proxy, reducing my internet traffic and making rebuilds quicker.
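A rough sketch of that workflow, in case it’s useful (image name, proxy address and hosts below are made up for the example):

# on the machine with full network access: build, optionally through a caching proxy
docker build -t myapp:latest .
docker build --build-arg http_proxy=http://proxy.lan:3128 \
             --build-arg https_proxy=http://proxy.lan:3128 -t myapp:latest .

# ship the image to the restricted machine and run it there
docker save myapp:latest | gzip > myapp-image.tar.gz
scp myapp-image.tar.gz restricted-host:
ssh restricted-host 'gunzip -c myapp-image.tar.gz | docker load && docker run -d --name myapp myapp:latest'

# for updates: rebuild on the connected machine and repeat the save/load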
postgres
I never use it for databases. I find I don’t gain much from containerizing it, because the interesting and difficult bits of customizing and tailoring a database to your needs are on the data file system or in kernel parameters, not in the database binaries themselves. On most distributions it’s trivial to install the binaries for postgres/mariadb or whatnot.
Databases are usually fairly resource intensive too, so you’d want a separate VM for it anyway.
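For what it’s worth, “trivial” really does mean a one-liner on the common distros (package names are approximate, check your distro):

sudo apt install postgresql            # Debian/Ubuntu
sudo dnf install postgresql-server     # Fedora/RHEL, followed by: sudo postgresql-setup --initdb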
what would I gain from docker or other containers?
Reproducibility.
Once you’ve built the Dockerfile or compose file for your container, it’s trivial to spin it up on another machine later. It’s no longer bound to the specific VM and OS configuration you’ve built your service on top of and you can easily migrate containers or move them around.
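As a concrete sketch (the service and image below are arbitrary examples): keep a compose file like this in git, and docker compose up -d on any machine with docker brings the same service back up.

services:
  whoami:
    image: traefik/whoami:latest
    ports:
      - "8080:80"
    restart: unless-stopped

Then docker compose up -d (or docker-compose up -d on older installs) and the service is reachable on port 8080, no matter which host it landed on.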
javascript was a mistake