  • you’re welcome.

    what i'd suggest… a general rule i like to always follow is: use a test system for everything new. but that does not need to be a full separate system every time.

    let's say you have your mailbox and want to try fetching new mails from it using fetchmail. first, you could use the uidl mechanism to fetch every mail only once and otherwise leave them all on the server. but i like it a bit more secure: create a second email address/account at your mail provider, only for testing. that way you can test the mechanisms however you like without even touching your real inbox (maybe even fill it up with large emails and see how the system reacts; i once had an email account with a cheap provider that deadlocked inboxes when they were full…). then, when everything works the way you want, switch the account and password (or create another config file for fetchmail) and you're done. every change (not only fetchmail things) can be tested this way before going live.

    filtering could be done with procmail, for example. but be aware: if the mda called by procmail somehow exits with success although the email really wasn't delivered, the email might be lost forever, depending on the settings of course. so fiddling with new stuff always carries the risk of not fiddling correctly ;-)
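
    for a rough idea, a minimal ~/.fetchmailrc for such a test account could look like this (just a sketch; server name, account and mda are assumptions, not a tested config):

      # uidl: remember which mails were already fetched, so each one is fetched only once
      poll pop.example.com protocol pop3 uidl
          # keep: leave all mails on the server instead of deleting them after download
          user "test@example.com" password "testsecret" keep
          mda "/usr/bin/procmail -d %T"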

    have fun !


    it's possible to tell your mta (like postfix) to use another mta for all mails, or only for some domains etc., so letting a third party play the internet-facing service, fetching the mails with fetchmail and storing them in a dovecot server is easy. on the sending side you could use your standard email client (e.g. thunderbird on the pc or k9-mail on the smartphone) to hand mail to the postfix instance that also sits on the server hosting your dovecot service. the mta there takes the mail and delivers it by rules, which could be as simple as relaying everything through the mta of your freemailer, authenticated with the username/password of your account. i am doing this, except that the "external" mail system is made of my servers as well; i just don't want emails to stay too long on VMs in a datacenter where i have no access to the physical disks in case something goes wrong.
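
    as a sketch, the relevant postfix main.cf settings for relaying everything through a freemailer could look like this (host name and credentials file are made up):

      # hand all outgoing mail to the freemailer's submission service
      relayhost = [smtp.example.com]:587
      smtp_sasl_auth_enable = yes
      smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
      smtp_sasl_security_options = noanonymous
      smtp_tls_security_level = encrypt

    with /etc/postfix/sasl_passwd containing a line like "[smtp.example.com]:587 user@example.com:password", turned into a lookup table via postmap.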

    a raspberry pi is sufficient for such a setup (i am using a pi4 currently, but for emails only i'd say a 3 or older would do too). adding a disk via usb makes storage huge and cheap; i use two usb ssds in a raid1 for storage. that server could be reachable only through vpn if you wish, depending on your skills and needs (i mainly use ssl client certificates, which are supported by k9-mail and thunderbird, so it fits seamlessly to have connections go through a haproxy that authenticates those certificates before proxying the plain connection to the pi). clients like thunderbird can offline-store all emails (configure download-or-not per imap folder), making searches easy and quick, while my k9 client can search locally or on the server as needed.
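
    the haproxy part of that could be sketched roughly like this (addresses and paths are made up): haproxy terminates tls, requires a valid client certificate, and forwards the then-plain connection to the pi:

      frontend imaps_in
          mode tcp
          # terminate tls and require a client certificate signed by our own ca
          bind :993 ssl crt /etc/haproxy/server.pem ca-file /etc/haproxy/client-ca.pem verify required
          default_backend pi_imap

      backend pi_imap
          mode tcp
          # pass the decrypted connection on to the pi's plain imap port
          server pi 192.168.1.10:143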

    maybe adjust the maximum mail size of your own mta to exactly match (or stay slightly below) that of the freemailer you use, to prevent surprises with big emails that are accepted locally but later turn out to be unsendable.
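
    in postfix that is a one-liner in main.cf (25 MB is just an assumed freemailer limit here):

      # stay slightly below the freemailer's assumed 25 MB limit
      message_size_limit = 24000000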

    it's possible to have a nextcloud instance on that same pi that acts as a webmailer, just in case (i really don't need it, but i've set it up anyway). nextcloud is also great for syncing/backing up files, pictures, contacts, notes, todo lists and the calendar of your phone (i use davx5, opentasks and foldersync for that). there are other webmailers available, but installing/using nextcloud is not a too bad idea either ;-)

    i'd also suggest setting up some automatic offsite backup with snapshots of that pi, to cover the emails as well as the setup and its configs ;-)


    i once had to look at a firewall appliance cluster (and discovered it could not do any failover in its current state, but somehow the decision maker was ok with that). when looking at its logs, i found rsh and rcp access from an ip address that belonged to a military organisation on a different continent. i had to make it a security incident. later the vendor said this was only the cluster-internal routing (over the dedicated crosslink) used for synchronisation (the thing that did not work), kept in a separate routing table used only for cluster sync, and that it could never be used for real traffic. but why not simply use an ip that you "own" yourself and give it a PTR record hinting at what the ip is used for, instead of customers scratching their heads over why the military still uses rcp and rsh? i guess because no company reads firewall logs anyway XD

    someone else's ip? yes! because they'll never find out !!1!

    i really appreciate that ipv6 has things like a dedicated documentation address range (2001:db8::/32) and that fc00::/7 is nicely short.


    ipv6 in companies… ipv6 is not hard, but for internal networking no company (really) "needs" more than rfc1918 address space. thus any decision in that direction is always "less needed" than any bonus for the (da)magement personnel that is so crucial for the whole company's survival…

    for a company's services to be reachable from outside via ipv6, mostly "only" the loadbalancers/revproxies etc. need to be ipv6-ready, but… this e.g. also produces logs that may break decades-old regexes no one understands any more (the good engineers left due to too many bonuses paid to damagement personnel), while other access/deny rules could break, or worse, let traffic through where they should block: remember that "192.168." can also match inside an ipv6 address IF some genius used a matching mechanism that treats the dot "." as a wildcard (because overpaid damagement personnel made them rush too fast), and such rules could be hidden "somewhere". altogether, technical debt is a huge blocker for everything, especially company growth, and if no customer "demands" ipv6, it stays on the damagement personnel's list as "fulfilling the wishes of engineers to keep them happy" instead of on the always-deleted "cleaning up technical debt caused by damagement personnel" list.
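
    to illustrate the unescaped-dot pitfall (a constructed example, not from a real rule set):

      # the unescaped dots also match ":", so this "ipv4" pattern matches inside an ipv6 address
      $ echo "2001:db8::192:168:1:1" | grep -c "192.168."
      1
      # properly escaped, it matches only the real rfc1918 prefix
      $ echo "2001:db8::192:168:1:1" | grep -c "192\.168\."
      0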

    setting up firewalls for ipv6 is quite easy, and if you go the fine-grained "whitelist or drop/block" approach from the beginning, it might take a bit until the ipv6 specialties are known to you, but the much bigger problem is IMHO the current state of the existing firewall rules. who knows every existing rule? which rules should have been removed already and must not be ported to ipv6? usually firewalls and their rules are a big mess, due to… again, too many bonuses paid to damagement personnel, hindering the company from taking the needed steps forward…
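
    a whitelist-or-drop ipv6 input policy really is only a few lines, e.g. with nftables (the ports are assumptions; note that icmpv6 neighbor discovery must be allowed or ipv6 breaks):

      table ip6 filter {
          chain input {
              type filter hook input priority 0; policy drop;
              ct state established,related accept
              # neighbor discovery is mandatory for ipv6 to work at all
              icmpv6 type { nd-neighbor-solicit, nd-neighbor-advert, nd-router-advert, echo-request } accept
              # whitelisted services only
              tcp dport { 22, 443 } accept
          }
      }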

    ipv6 adoption is slow for reasons that are driving huge cars that in turn speed up other problems ;-|


  • maybe start with an adjustable setup:

    • rent a cheap vm; i currently have one for 1€/month (for the first year, cancel monthly) from ovh
    • set up 3 openvpn instances that redirect all routes through the tunnel: one ipv4-only, one ipv6-only and one with both
    • set up the client on your mobile phone and your laptop, each with all three vpns to choose from
    • now you can try out ipv6, standalone or dualstack, depending on which vpn you switch on
    • use this setup to blame services that don't support ipv6 yet or are maybe broken with dualstack 🤣
    • rise from under-the-stone (disabling ipv6 only) into the sunlight (to a well-above-industry-standard-level !!! "quick"-new-network-technologies-adopting "genius") 🤣
    • improve your openvpn setup from above to be reachable via ipv6 too, if you haven't done that from the beginning; done: you have reached the pro level of the late-adopter-noob group

    (if you want, ask for config snippets)
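
    for a first idea, the dual-stack server instance could look roughly like this (addresses are examples, nothing here is a tested config; the v4-only and v6-only instances just drop the respective lines):

      # server.conf: push both default routes through the tunnel
      port 1194
      proto udp
      dev tun
      server 10.8.0.0 255.255.255.0        # tunnel-internal ipv4 pool
      server-ipv6 fd00:8::/64              # tunnel-internal ipv6 pool (ULA)
      push "redirect-gateway def1 ipv6"    # route all client traffic (v4+v6) through the vpn
      ca ca.crt
      cert server.crt
      key server.key
      dh dh.pem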

    btw i prefer to wait for ipv8😁 before “demanding” ipv6 from services i use 🤣


  • looking at the official timeline it is not completely a microsoft product, but…

    1. microsoft hated all of linux/open source for ages, even publicly called it a cancer etc.
    2. microsoft suddenly stopped its hate speech after the long-term "ineffectiveness" (as in: not destroying) of its actions against the open source world became obvious over time
    3. systemd appeared on stage
    4. everything within systemd is microsoft style:
       • journald is literally microsoft logging, and how services are "managed"/started etc. is exactly the flawed microsoft service management
       • how systemd was pushed onto distributions is similar to how microsoft pushes things onto its victi… eh… "customers"
       • systemd breaks its promises like microsoft does (e.g. it has never been the drop-in replacement it was claimed to be, just like microsoft claimed its OS was secure while only making actual use of the separation of users from admins, e.g. via filesystem permissions, "really" in 2007, and then with the need for an extra click, where unix had already used permissions for such protection in 1973)
       • systemd causes chaos and removes deterministic behaviour from linux distributions (before systemd, windows was the only operating system that would show different errors at different times during installation on the very same perfectly working hardware; now similar chaos can be observed on systemd distros too)
       • AFAIK there still exists no official definition of journald's "binary" protocol; every normal open source project would have published that definition in the first place, but the systemd developers' statement was more like "we take care of it, just use our libraries", which is microsoft-style "use our products"
       • the superfluous features do more harm than they help: journald's "protection" from log flooding uses something like 50% of cpu cycles for a large amount of wanted and normal logs, while a sane logging system happily uses only 3% cpu for the very same amount of logs/second and does 'not' throw away single log lines like journald does; journald thus exhaustively and pointlessly wastes system resources on a feature that does more harm than the thing it is supposed to help with
       • making the init process a network-reachable service looks to me as bad as when microsoft once put its web server (iis) into kernel space to be a bit faster, still being slower than apache while adding an insecurity that later became an actually abused attack vector
       • systemd adds pointless dependencies all along the way, like microsoft does with its official products, to put some force on its customers for whatever official reason they like best
       • systemd was pushed onto distributions with a lot of force and damage, even onto distributions that had the freedom of NOT forcing a specific init system on their users in their very roots (and the push to place systemd inside those distros even went as far as circumventing the unstable->testing->stable rules, like microsoft does with its patches)
       • this list is very far from complete, and still no end is in sight
    5. “the” systemd developer is finally officially hired by microsoft

    i said that systemd was a microsoft product long before its developer was actually hired by microsoft in 2022. and even if he hadn't been hired by them, systemd would still be a microsoft-style product in every important way, with everything that is wrong in how microsoft does things: beginning with design flaws, added insecurities and unneeded attack vectors, added performance issues, false promises, usage bugs (i had never seen a user who just logged in get logged off again immediately on a linux system, except when systemd wants to stop/start something in the background, with its 'fk u' attitude, where one would "just try to log in again and not think about it", like with any other of microsoft's shitware), and ending in insecure and unstable systems where one has to "hope" that "the providers" will take care of it, while they keep adding even more superfluous features, attack vectors etc. as they always have until now.

    systemd is, in every way i care about, a microsoft product. and systemd's attack vectors via "needless dependencies" have just been added to the list of things "proven" (not merely predicted) to be as bad as in any M$ product in this regard.

    I would not go as far as to say that this specific attack was done by microsoft itself (how could i?), but i consider it a possibility, given the fact that they once publicly called linux/open source a "cancer", and that their "sudden" change to "supporting the open source world" looks to me like the poison Gríma used on Théoden, along with some other observations and interpretations. however, i strongly believe that microsoft secretly "likes" very much every single bit of damage that any of systemd's pointlessly added dependencies or other flaws could do to linux/open source. and why shouldn't they like damage done to one of their obvious opponents (as in money-gain and "dictatorship" power)? it's a us company, what would one expect?

    And if you want to argue that systemd is not "officially" a product of the microsoft company… well, people also say "i googled it" when they mean "i used one of the search engines that are actually better than google.com", same with other things like "tempo" or "zewa" where i live. since the systemd developer works for microsoft, and it seems he works on systemd as part of that work contract, and given all the microsoft-style flaws in it from the beginning, i consider systemd a product of microsoft. i think systemd overall also "has components" of apple products, but those are IMHO not of a technical nature and thus far from being part of the discussion here; also, apple does not produce "even more systemd", and apple has -in my experience- very different flaws that i have not encountered in systemd (yet?), thus it's clearly not an apple product.


    Before pointing to vulnerabilities of open source software in general, please always look into the details: who introduced the actual attack vector in the first place, whether it was "without any need", and if so, maybe "why". The strength of open source in action should not be seen as a deficit, especially not in such a context.

    To me it looks like an evil company has put lots of effort over many years into injecting its very own steady increase of attack vectors through an "otherwise" needless introduction of uncounted dependencies into many distros.

    such a ‘needless’ dependency is liblzma for ssh:

    https://lwn.net/ml/oss-security/20240329155126.kjjfduxw2yrlxgzm@awork3.anarazel.de/

    openssh does not directly use liblzma. However debian and several other distributions patch openssh to support systemd notification, and libsystemd does depend on lzma.

    … and that was where and how the attack then surprisingly* "happened"
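
    whether a given sshd is affected by this dependency chain can be checked quickly (output shortened and illustrative):

      # does sshd (transitively) pull in libsystemd and thus liblzma?
      $ ldd "$(command -v sshd)" | grep -E 'libsystemd|liblzma'
              libsystemd.so.0 => /lib/x86_64-linux-gnu/libsystemd.so.0 (0x...)
              liblzma.so.5 => /lib/x86_64-linux-gnu/liblzma.so.5 (0x...)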

    I consider the attack vector here to have been the superfluous systemd with its excessive dependency cancer, thus the result of using a Microsoft-alike product. using M$-alike code, what would one expect to get?

    *) no surprises here; let me predict that we will see more of these attack vectors in action in the future. as an example, have a look at the init process: systemd changed it into a 'network'-reachable service. and look at all the "cute" capabilities it was designed to "need" ;-)

    however, distributions free of microsoft(-ish) systemd are available for everyone who does not want the "microsoft experience" in otherwise security-driven** distros

    **) like doing privilege separation instead of the exact opposite by “design”


    i am happy with a raspberry pi setup connected to a VLAN switch: the internet is behind a modem (bridged mode) connected by ethernet to one switchport, while the raspi routes everything through one tagged physical GB switchport. the setup works fine with two raspis and fails over without tcp disconnections during an actual failover, with only a few seconds of delay when that happens: voip calls basically recover after seconds and streaming is not affected, while in a game a second off might already be too much. however, as such hardware failures happen rarely, i am running only one of them anyway.
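
    one common way to build such a failover pair (not necessarily the exact mechanism used here) is a floating gateway address via keepalived/VRRP; a minimal sketch with assumed names:

      # /etc/keepalived/keepalived.conf on the primary pi
      vrrp_instance gw_v4 {
          state MASTER               # the second pi runs state BACKUP with a lower priority
          interface eth0.10          # tagged vlan interface (assumed)
          virtual_router_id 51
          priority 100
          advert_int 1
          virtual_ipaddress {
              192.168.10.1/24        # the gateway ip the clients use
          }
      }

    keeping established tcp connections alive across a failover additionally needs connection-tracking sync between the two boxes (e.g. conntrackd).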

    for the firewall i am using shorewall, and for some special routing i also use the unbound dns resolver (one can easily configure static results for any record) and haproxy with sni inspection for specific https routing in the rather specialized setup i have.
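
    the "static results" part in unbound is just a local zone with fixed data, e.g. (hypothetical names):

      server:
          # always answer this name locally instead of resolving it
          local-zone: "printer.lan." static
          local-data: "printer.lan. 3600 IN A 192.168.10.5"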

    my wifi is done by an openwrt device, but i only use it for having separate wifis bridged to their own vlans.

    thus this setup allows for multi-zone networks at home, like a wifi for visitors with daily changing passwords and another for chromecast or home automation, each with its own rules; plus hardware redundancy, special tweaking, and everything that runs on gnu/linux, including pihole, wireguard, ddns solutions, traffic statistics, traffic shaping/QoS, traffic dumps, or even SSL interception if you really want to import your own CA into your phone and see what data your phone's apps (those that don't use certificate pinning) transfer when calling home, and much more.
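
    the visitor wifi on its own vlan is only a few lines of uci config on the openwrt side, e.g. (names and key are placeholders):

      # /etc/config/wireless: guest ssid bridged into its own vlan network
      config wifi-iface 'guest'
          option device 'radio0'
          option mode 'ap'
          option ssid 'guests'
          option encryption 'psk2'
          option key 'changeme-today'
          option network 'guest_vlan'    # bridge defined in /etc/config/network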

    however, regarding ddns it sometimes feels safer and more reliable to have a somehow reserved IP that does not change. some providers offer rather cheap tunnels for this purpose: i once had a free (ipv6) tunnel at hurricane electric (besides another one for IPv4), but now i use VMs in data centers.

    i do not see any ready-made product that is this flexible. however, to me the best ready-made router system seems to be openwrt: you are not bound to a hardware vendor, you get security updates longer than with any commercial product, you can copy your config 1:1 to a new device even if the hardware changes, and you have the possibility of adding packages with special features.

    "openwrt" is IMHO the most flexible ready-made solution for long-term use. "pfsense" is also very much worth looking at; it has some similarities to openwrt while being different.