• 0 Posts
  • 28 Comments
Joined 3 years ago
Cake day: July 25th, 2023

  • This confused me at first with Nextcloud too. What I think OP means is that, by default, Nextcloud keeps uploaded files in its own internal data directory, indexed by a database, rather than somewhere obvious on the server's filesystem. My first question when I set up Nextcloud was literally, "OK, now that I've set things up and got the mobile app accessing it, how the heck do I access those files when I ssh into the server running Nextcloud?" You can share directories from the server's filesystem with Nextcloud, but it's not obvious at first how to do that, especially if you're running Nextcloud from a Docker container. If you're used to the way Dropbox works, and (almost) the way OneDrive works, this distinction can be confusing and frustrating. It still frustrates me, because it complicates access control over those files, and I practically never have a need for the files stored in Nextcloud's default location. I'm not sharing the Nextcloud instance or the server with anyone else, and I always want to access files from the CLI, so I have no use for Nextcloud's defaults.
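    For the Docker case, a minimal sketch of what sharing a server directory can look like (the image tag and host paths here are assumptions, not from the post): bind-mount a plain host directory alongside Nextcloud's data directory, then expose it through the External storage app.

```yaml
# Hypothetical compose fragment; host paths are examples only.
services:
  nextcloud:
    image: nextcloud:stable
    volumes:
      # Nextcloud's internal data directory (the default file
      # location, indexed by the database)
      - ./nextcloud-data:/var/www/html/data
      # A plain host directory you also want to reach over ssh
      - /srv/media:/mnt/media
```

    With the External storage app enabled, /mnt/media can then be added as a "Local" mount from the admin settings (or via the `occ files_external` commands), so the same files stay reachable from both Nextcloud and the CLI.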


  • All the kids here seem to get really annoyed whenever anyone suggests Ubuntu for "new to Linux" people. My story in particular seems to draw out the trolls, the know-it-alls, and the ricers. I had the same questions as OP 26 years ago, I made the choice you're recommending (and getting downvoted for), I'd do it again, and I have no regrets. Here's my story anyway, in case it resonates with someone.

    I picked Ubuntu for my "mostly a server, but sometimes a workstation, sometimes a multimedia PC" before Mint or Arch were even a thing. I knew about and tried Debian, but its support for games and hardware at the time wasn't there for me. Back when we used BitTorrent to (literally, mostly) download Linux ISOs, I was a relatively new Linux user. I'd tried Debian, Slackware, Corel, SUSE, Red Hat, etc., and played around distro hopping. But when it came time to build my next machine, I landed on Ubuntu LTS mostly because a few important pieces of software I needed to run (paid real money for, and needed for university) ONLY came packaged as a .deb. Ubuntu turned out to be well documented, well supported, easy to learn, and stable enough that after a decade it was the hardware that failed me, not the operating system.

    Then there was the Unity debacle. Then there were snaps. But by that time those issues were meaningless to me, because I knew I could easily avoid snaps and Unity altogether if they bothered me. I never even touched the app store. I guess I stopped caring about the desktop because by that point I was mostly accessing the CLI remotely or tunneling individual X apps over ssh. When I rebuilt that machine, I considered other options, but ultimately all the choices had mostly insignificant differences except for my familiarity with them. So I picked Ubuntu LTS again, and it's been trucking along without getting in my way for nearly another decade.

    Arch and those other newer distros are interesting, and I can see the benefits of that kind of system. But it's not for everyone, and it's not for me. 99% of users are not going to benefit from bleeding-edge software updates. Moreover, there seems to be this widespread misconception that stable and long-term release cycles don't get security updates. These days, with snaps, Flatpaks, Docker, and VMs, running a flashy new bit of bleeding-edge software on a long-term or stable release-cycle distro is easier than it has ever been. It may be slightly difficult for a new user, but it's still easier than reinstalling and setting up a new distro with a host of undocumented bugs. I can't even begin to imagine how awful it would be to try to learn about Linux and troubleshoot an issue as a noob in this post-search AI-slop wasteland that is the dead Internet.

    Anyway, I guess the point I’m getting at is that I chose Ubuntu because it was easy, I chose it again because it continued to be easy, and now that I’ve been using it for a couple decades I’d choose it again because I care more about using my machine than tinkering with my machine. And ultimately, the choice of distro matters a whole lot less when you’re not new to Linux.


  • In my experience, 2 devices will ultimately save you effort and frustration. Anything you choose as a good NAS/seedbox is unlikely to have a good from-the-couch interface or handle Netflix reliably and easily. A small Android TV box may have a much better interface, simple app setup, and support for all the streaming services, but it probably won't be very powerful or convenient to use as a NAS. The NAS is always on, plugged directly into the Internet access point, and tucked away out of sight and sound. The Android TV or Apple TV box is silent, small, and can be mounted directly to the Beamer/Projector.

    Yes, Kodi exists and its add-ons can bridge this gap. But I still think that an SBC NAS running Jellyfin or Plex, plus an Nvidia Shield with Jellyfin, Plex, Netflix, Spotify, YouTube, Amazon, etc., will be so much easier to set up, manage, find support for, and upgrade.

    I have a similar setup, even though my server has a direct HDMI link to my TV. I'm not a fan of using the server from the couch. Setting up IR remotes always sucks, and it's confusing for anyone but me to use. But if my Nvidia Shield dies or I'm having network trouble, VLC is a pretty good backup.






  • Docker Compose is just a settings file for a container. It's the same advantage you get from using an ssh config file instead of typing out a user, IP, port, and private key each time. What's the advantage of putting all my containers into one compose file? It's not like I'm manually running docker commands from the terminal to start and stop them unless something goes wrong; I let systemd handle that. And I'd much rather have systemd individually start, stop, and monitor each container than have to bring them all down when one fails.
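    The systemd side of that can be sketched as one unit per container, so a failure or restart touches only that service (the unit and container names here are hypothetical):

```ini
# /etc/systemd/system/docker-jellyfin.service (example unit name)
[Unit]
Description=Jellyfin container
After=docker.service
Requires=docker.service

[Service]
# Attach to an existing container by name; if it dies,
# systemd restarts only this one, not its siblings.
ExecStart=/usr/bin/docker start -a jellyfin
ExecStop=/usr/bin/docker stop jellyfin
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

    With one such unit per container, `systemctl restart docker-jellyfin` (or a crash) affects that container alone.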


  • You don't need to get too complicated with scripts if you let Picard do all the tagging and renaming. In my experience it works pretty well with the default out-of-the-box configuration. Just don't try to do your whole library at once: go album by album and check that each one matches the correct release. I was in the same boat about a decade ago and did the same, just a few albums a day getting tagged and renamed into a fresh music directory. And of course, make a backup first, just in case.

    Lately I've been going through this process again because I messed up configuring Lidarr and many files got improperly renamed. Since they were all still properly tagged, fixing them has been easy, especially with Picard. I haven't really bothered to find all the stray files yet (they're still roughly in the right location) because Plex ignores the paths and just reads the tags, so the misnamed files aren't even noticeable in Plex.
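    That's the whole trick: the tags are the source of truth and the paths are disposable. As an illustration (not Picard's actual code), rebuilding a path from a file's embedded tags is just a template function; the tag-dict shape here is a simplified assumption, since real tag readers return richer objects:

```python
from pathlib import PurePosixPath

def path_from_tags(tags, ext="flac"):
    """Build a library-relative path from embedded tags.

    `tags` is a plain dict with 'artist', 'album', 'tracknumber',
    and 'title' keys (a hypothetical, simplified shape).
    """
    # Zero-pad track numbers so directory listings sort correctly.
    track = f"{int(tags['tracknumber']):02d}"

    # Strip path separators out of tag values so a stray "/" in a
    # title or artist name can't escape the library directory.
    def clean(s):
        return s.replace("/", "-").strip()

    return PurePosixPath(
        clean(tags["artist"]),
        clean(tags["album"]),
        f"{track} {clean(tags['title'])}.{ext}",
    )
```

    Running that over a directory of mangled filenames recovers a consistent Artist/Album/NN Title layout, as long as the tags survived.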




  • Jack of all trades, master of none. Forcing a router reboot to get the home Internet working again has become a thing of the past since I set up a UniFi router and APs.

    I'd had router/WiFi combos before, running either DD-WRT, OpenWrt, or Tomato. None of them were stable. But I suspect that was because the hardware just couldn't keep up, not because the open-source software was faulty.





  • Yeah, this is a bad-faith question from a child to someone who's been curating a collection of music for more than a quarter of a century.

    This isn't even my entire collection; I've got at least a couple of orange crates packed with vinyl, CDs, mp3s, concert videos, and even some cassettes for nostalgia. Do I listen to everything I've gotten digitally? Not yet, but I don't plan on stopping my listening any time soon, and drive space is cheap, so I figure I've got time.

    Your “logical timeframe” is both naive and deeply insulting. I’m going to enjoy my library hobby anyway, but you can just fuck off with your negative attitude.