Ahaha, I guess that must be the default in my client then.
Yeah, it’s pretty amazing that there’s basically no algorithm; you just see what you subscribe to in chronological order.
Yeah, it’s an M1 with 16GB. Sounds awesome, I’ll try it. Thanks a lot for the guide, it’s super helpful. I just got the Mac Mini for Jellyfin, but this is an unexpected use case where the server comes in very handy.
I run a Mac Mini as a home server because it’s great for hardware transcoding, and I was wondering if I could host an LLM locally. I work with Python, so that wouldn’t be an issue, but I have no idea how to do CUDA or work on low-level code. Is there anything I need to consider? I’d probably start with a really small model.
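For example, something like this is roughly what I’m imagining (a minimal sketch, assuming Ollama, which runs on Apple Silicon without any CUDA, plus its `ollama` Python package; the model tag is just an example of a small model):

```python
# Minimal sketch: talking to a locally hosted LLM from Python.
# Assumes Ollama is installed and running on the Mac Mini
# (https://ollama.com) and the client library is installed
# via `pip install ollama`.
# The model tag below is just an example small model; pull it
# first with: ollama pull llama3.2:1b

import ollama

response = ollama.chat(
    model="llama3.2:1b",  # ~1B params, comfortable on 16GB of RAM
    messages=[
        {"role": "user", "content": "Explain hardware transcoding in one paragraph."},
    ],
)
print(response["message"]["content"])
```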
Very interesting. How secure is this against a compromised device? I’m really paranoid that someone could somehow have a backdoor into my systems and snatch stuff I host myself.