Yeah, that was my first thought: no bsky, no Lemmy, no Mastodon (edit: apparently it has limited support, exclusively for mastodon.social). Like, are they paying attention to the trends?
This is what I do as well, along with file staging, so if I corrupt it by accident I don't lose the entire DB.
Currently I have it on my server as grab-only, and then normal access on my clients with staging.
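As a rough illustration of the staging idea, here's a minimal Python sketch (the paths are made up, and the integrity check assumes a SQLite DB; swap in whatever check fits your format): a fresh grab only replaces the live copy after it passes a sanity check, so a corrupted transfer can never clobber the good backup.

```python
import shutil
import sqlite3
from pathlib import Path

# Hypothetical paths; adjust to your own layout.
LIVE = Path("/srv/backups/app.db")
STAGING = Path("/srv/backups/staging/app.db")

def stage_and_promote(new_snapshot: Path) -> None:
    """Copy a fresh DB grab into staging, verify it, then promote it."""
    STAGING.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(new_snapshot, STAGING)

    # Sanity-check the staged copy before it touches the live backup.
    conn = sqlite3.connect(STAGING)
    try:
        result = conn.execute("PRAGMA integrity_check").fetchone()[0]
    finally:
        conn.close()
    if result != "ok":
        raise RuntimeError(f"staged copy failed integrity check: {result}")

    # Only a verified copy ever replaces the live one.
    STAGING.replace(LIVE)
```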
The music-being-removed-from-your-account shit shouldn't be legal. You paid for it; they should be refunding you if they're removing access, in a perfect world anyway.
I'm assuming the US when I say this, but some year we'll have consumer protections. I'll likely be dead by then, but hopefully that day will come.
That being said, I have never heard of Soulseek; it sounds like a LimeWire spinoff? I agree the music industry has /sucked/ in terms of obtaining stuff.
Also keep in mind, for people not on Windows: Namecheap's API only functions for business grade, and it's not clearly documented either. There is a "dynamic DNS setup page," but it isn't up to date. I find myself trying to use OpenWrt's DDNS pages for it, but those aren't accurate either, so I'm likely going to transfer elsewhere when I'm closer to the end of my lease. This API restriction also prevents you from easily automating your SSL process with Let's Encrypt, since you're locked into subdomain-based entries instead of wildcard domains.
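For the DDNS side specifically, the separate dynamic DNS endpoint (the one that setup page describes) can at least be scripted without the full API. A minimal sketch, assuming a domain with Dynamic DNS enabled in the dashboard; all values are placeholders, and the endpoint and response format are worth double-checking against their page:

```python
import urllib.parse
import urllib.request

# Placeholders: use your own domain and the per-domain Dynamic DNS
# password from the Namecheap dashboard (not your account password).
DOMAIN = "example.com"
HOST = "@"  # "@" for the bare domain, or a subdomain label like "home"
DDNS_PASSWORD = "your-ddns-password"

def update_record(ip=None):
    """Push an IP to the DDNS endpoint; with no ip, it uses the caller's."""
    params = {"host": HOST, "domain": DOMAIN, "password": DDNS_PASSWORD}
    if ip:
        params["ip"] = ip
    url = ("https://dynamicdns.park-your-domain.com/update?"
           + urllib.parse.urlencode(params))
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode()  # XML; look for <ErrCount>0</ErrCount>

if __name__ == "__main__":
    print(update_record())
```

Note this only updates A records for hosts you name explicitly, which is exactly the subdomain-only limitation above: no TXT records means no DNS-01 challenge, so no wildcard certs.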
Lesson learned: they regularly do this if you have a website that one of their crawlers marked as active. If you really care about it, check back in about a year; chances are, if you haven't inquired within a year, they'll release the domain and you can pay the normal sale price for it.
I mean, if it ain't broke, don't fix it lmao. We manage oil maintenance on our vehicles the same way: there's a wooden board at the back of the garage, and we use a Sharpie to write the mileage of the last oil change for each vehicle on it.
It all depends on your threat model. I own my hardware as well, but I'm still not going to use software whose developers have shown me they don't take security seriously. Then again, I'm also more paranoid than most.
I'm currently running Proxmox on a 32 GB server with a Ryzen 5600G, and it's going fine. The containers don't actually use all that much RAM, and I'm actually seeing better benchmarks than I did when I just ran a bare-bones Ubuntu server. My biggest issue has actually been IO strain more than anything, because it's a lot more IO-heavy now that everything's containerized. I think I could easily run it with less RAM; I would just have to turn off some of the more RAM-intensive items.
As for whether I regret changing: no way, José. I absolutely love having everything containerized, because I can set things up how I want, when I want, and if I end up screwing something up configuration-wise, or decide I no longer need a service, I can just nuke the container without having to remember "what did I install for this program so I can remove it, and do other programs need this dependency to work?" Plus, while I haven't tinkered as much in this area, you can hard-set what resources you want to allot to each instance. So if you have a program, say a Pi-hole, that you know will never need more than x amount of resources to work properly, you can restrict it, and if something does go wrong it won't eat all of your system resources.
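For an LXC container, that kind of cap is a single pct call; here's a sketch (the VMID is hypothetical, and the numbers are just examples):

```python
import subprocess

# Hypothetical VMID for the Pi-hole container; use your own.
VMID = "105"

# Cap the container at 1 core and 512 MiB of RAM (no swap) so a runaway
# process inside it can't starve the rest of the host.
subprocess.run(
    ["pct", "set", VMID, "-cores", "1", "-memory", "512", "-swap", "0"],
    check=True,
)
```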
The biggest con is probably having to figure out the networking side, because every container is going to have a different IP address. I found that a web dashboard is my friend: I have Heimdall tell me where all my services are, and I just click the icon to get to the right IP address. It took a lot of work to figure out how it all operates and to get it working, but the benefits have been amazing. Just make sure you have a spare disk to temporarily clone partitions to, because it's extremely difficult to use the existing disks in the machine. I've been slowly going one at a time: copying a disk over to an external drive, nuking it, reinitializing it as part of the Proxmox LVM, and then copying the data back onto the appropriate image file.
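Per disk, the shuffle looks roughly like this sketch (device names are placeholders, "pve" is Proxmox's default volume group, and you should triple-check every device before wiping anything):

```python
import subprocess

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Placeholder device names; verify these before running anything.
OLD_DISK = "/dev/sdb"      # existing disk being absorbed into Proxmox
OLD_PART = "/dev/sdb1"     # its current data partition
SCRATCH = "/mnt/external"  # mount point of the spare external drive

# 1. Copy the data off to the spare drive.
run("mkdir", "-p", "/mnt/old")
run("mount", OLD_PART, "/mnt/old")
run("rsync", "-aHAX", "/mnt/old/", SCRATCH + "/")
run("umount", "/mnt/old")

# 2. Nuke the disk and hand it to the Proxmox LVM.
run("wipefs", "--all", OLD_DISK)
run("pvcreate", OLD_DISK)
run("vgextend", "pve", OLD_DISK)

# 3. Copy the data back from SCRATCH into whichever container/VM volume
#    replaces it, e.g. rsync into a mounted LXC mount point.
```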
I personally will never use Nextcloud. It's nice interface-wise, but while I was researching the product I came across concerns about its security. Those concerns have since been fixed, but the way they resolved the issue made me lose all respect for them as a secure cloud solution.
Basically, when they first introduced encrypted folders, there was a bug in the encryption code: the only thing that would ever be encrypted was the parent directory, and any subfolder in that directory would simply not be encrypted. The issue with that is that unless you had server-side access to view the files, you had no way of knowing your files weren't actually being encrypted.
All of this would be fine, it's a beta feature, right? Except when I read the GitHub issue on the report, they gaslit the person who reported it, saying that despite the fact that it was advertised as a feature on their stable branch, the feature was actually in beta status and therefore shouldn't be used in a production environment. On top of that, the feature was never removed from their features list, and it took another three months before anyone even started working on the issue report.
This might not seem like a big deal to a lot of people, but as someone who is paranoid about security, the project's inaction over something that critical, while advertising themselves as a business-grade solution, made me flee hardcore.
That being said, I fully agree with you: out of the different cloud platforms I've tried, Nextcloud does seem to be the most refined, and it even has the ability to emulate an office suite, which is really nice. I just can't trust them, so I ended up using Syncthing and took the hit on the feature set.
TPM is a good way. Mine is set up with / encrypted via LUKS and unlocked by the TPM, so it boots with no issues; actually sensitive data like my /home is encrypted using my password, and the backup system + file server are standard LUKS with a password.
This setup allows unassisted boot-up of the main systems (such as SSH), which lets you sign in and manually unlock the more sensitive drives.
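If anyone wants to replicate the / side of that, here's a sketch using systemd-cryptenroll, assuming a systemd distro and LUKS2 (device path and PCR choice are assumptions; adapt to your own layout):

```python
import subprocess

# Placeholder device; substitute your actual root LUKS partition.
ROOT_DEV = "/dev/nvme0n1p2"  # / : unlocked by the TPM at boot

# Bind a LUKS2 key slot on the root volume to the TPM. PCR 7 ties the
# unseal to the Secure Boot state, so a tampered boot chain means the
# TPM refuses to release the key.
subprocess.run(
    ["systemd-cryptenroll", "--tpm2-device=auto", "--tpm2-pcrs=7", ROOT_DEV],
    check=True,
)

# /etc/crypttab then needs tpm2-device=auto on the root entry, e.g.:
#   cryptroot  UUID=...  none  tpm2-device=auto
# /home and the file server volumes get no TPM slot at all, so they
# still prompt for a passphrase after SSH is already reachable.
```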
I'm surprised as well. Like, I guess I would understand if it's a no-log DNS server, but what else wouldn't have sensitive information?
Regional pricing is the phrase you're looking for, friend!
I just expanded the existing fail2ban config to cover the commonly used default ports, such as 22, 21, etc. Any request on those ports gets sent into purgatory: the IP gets blacklisted, and any connection from it hangs until it times out. It's a super basic setup: iptables logs whenever a request doesn't match the current firewall rules (via the last rule in the chain), and then fail2ban reads the log and handles the block. I don't count it as part of the normal setup because it's isolated; the actual ports the services are on still have the normal rule set, but the default port numbers are an instant "if there's activity on it, you're gone."
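Mechanically it boils down to something like this sketch; the log path, prefix, and watcher loop here are stand-ins for what the actual iptables LOG rule plus a fail2ban jail (filter regex + ban action) do:

```python
import re
import subprocess
import time

# Assumptions: the last iptables rule in the chain is
#   iptables -A INPUT -j LOG --log-prefix "FW-UNMATCHED: "
# and kernel log messages land in this file (distro-dependent).
LOG_PATH = "/var/log/kern.log"
PREFIX = "FW-UNMATCHED: "
SRC_RE = re.compile(r"SRC=(\d{1,3}(?:\.\d{1,3}){3})")

banned = set()

def ban(ip):
    """DROP (not REJECT), so connections from the IP hang until timeout."""
    if ip not in banned:
        subprocess.run(["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"],
                       check=True)
        banned.add(ip)

with open(LOG_PATH) as log:
    log.seek(0, 2)  # jump to the end, like tail -f
    while True:
        line = log.readline()
        if not line:
            time.sleep(1)
            continue
        if PREFIX in line and (m := SRC_RE.search(line)):
            ban(m.group(1))
```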
My security is fairly simplistic, but I'm happy with it:
Software protection:
Physical protection:
Things I've thought about:
Just spent a few minutes reading it; definitely agree with you. It should be required reading for any issue reporter.
Before I read the article: I wholeheartedly disagree with the title.
Self-hosting not only brings control back into your own hands, but also hones your skills at the same time.
OK, so after reading, I do partially agree on the regulation aspect, but from a privacy POV all of that is fixed by simply not storing PII. I run multiple services in my stack, and the most info I collect on someone is their email, which they defo could opt out of, and I would delete it off the system.
As for the cost and labor: it's really not that difficult. My stack consists of game servers (a mix, primarily survival games like Ark), email hosting for myself and some friends plus no-reply services for other internal services, my media stack, my file server, the firewall, a reverse proxy manager, and my own programming projects/sites. Honestly, the hardest part was the networking aspect; learning Proxmox was a trip because I hadn't used a containerized environment before outside of Docker.
I think this article is being disingenuous with the "no paycheck" point; there is more to value than a paycheck. While I may not be paid for my self-hosting, if I were to put my current setup on remote hosting I would probably be paying roughly $150 to $200 a month for a private VPS. This system let me spend $700 as a one-off, plus minor maintenance costs if something fails; even at $150 a month, that one-off pays for itself in under five months, and for a project I intend to keep running regardless, it's the cheaper option.
As for the ideology of decentralization: yes, there are some issues with reliability. Obviously these smaller self-hosted side projects aren't going to have the redundancies that "proper" hosting has. For example, just last night my service went down because I lost power for about an hour and a half, and my battery standby only had enough power for about 45 minutes of it. Since most of my stuff is personal, I'm not too concerned about the downtime, but I could definitely see how, for a large-scale project like a Lemmy server, it would be a little more distasteful.