And the voices. “Billy…”
“You fucked the whole thing up.”
“Billy, your time is up.”
“Your time… is up.”
I was imagining things, as is tradition
I thought it was confirmed to be Piefed once that is complete enough to be useful. Idk, though, maybe that is me just imagining things.
Edit: I am imagining things, it is Sublinks
I think 8 hours starts to get into territory where they might get an informational message about the delay? That’s also getting long enough that, by the time the emails finally arrive, they might be buried so far back in the client that they’re never seen.
I think when I used to do this, it was one advisory message every 24 hours that a message was being held in the queue, and after 5 days it would bounce, but I have to assume those limits have shrunk in the modern day. How much, IDK; it might be worth experimenting with it before committing to creating that situation, since it might not go okay.
SMTP is designed with queues and retries
Unless something has changed massively since I was deeply involved with this stuff, the people who sent you email may get a notification after some hours that their message is being delayed, and maybe after 24-48 hours they might get a bounce. But if it’s just your SMTP server going down for an hour or two every now and then, the system should be able to handle that seamlessly (barring some hiccups, like messages showing up with timestamps hours in the past, which can be confusing).
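To make that concrete, here’s a rough Python sketch of the queue-and-retry loop a sending MTA runs internally (the relay hostname, retry interval, and the 4-hour warning / 5-day bounce cutoffs are illustrative only, not any real server’s defaults):

```python
import smtplib
import time
from email.message import EmailMessage

# Illustrative values only -- real MTAs (Postfix, Exim, etc.) have their own defaults.
RETRY_INTERVAL = 15 * 60          # try again every 15 minutes
WARN_AFTER = 4 * 60 * 60          # advise the sender "still trying" after 4 hours
BOUNCE_AFTER = 5 * 24 * 60 * 60   # give up and bounce after 5 days

def deliver_with_queue(msg: EmailMessage, relay_host: str = "mail.example.com") -> bool:
    """Retry delivery until it succeeds, warning and eventually bouncing on failure."""
    queued_at = time.time()
    warned = False
    while True:
        try:
            with smtplib.SMTP(relay_host, 25, timeout=30) as smtp:
                smtp.send_message(msg)
            return True  # delivered; the recipient never notices the outage
        except (smtplib.SMTPException, OSError):
            age = time.time() - queued_at
            if age > BOUNCE_AFTER:
                print("bouncing message back to sender")  # stand-in for a real DSN
                return False
            if age > WARN_AFTER and not warned:
                print("sending 'delivery delayed' advisory to sender")
                warned = True
            time.sleep(RETRY_INTERVAL)
```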
You’re the only one talking sense and you are sitting here with your 2 upvotes
The AI company business model is 100% unsustainable. It’s hard to say when they will get sick of hemorrhaging money by giving this stuff away more or less for free, but it might be soon. That’s totally separate from any legal issues that might come up. If you care about this stuff, learning how to do it locally and having a self-hosted solution in place might not be a bad idea.
But upgrading anything aside from your GPU+VRAM is a pure and unfettered waste of money in that endeavor.
You’re going to think I am joking but I am not. Multiple people have sworn to me that this works for a common failure mode of hard drives, and I’ve literally never heard anyone say they tried it and it failed. I’ve never tried it myself. Buyer beware. Don’t blame me if you fuck up your drive / the computer it’s connected to / anything else even worse by doing this:
Wait, how does that work? Can you do Nix package management on a Debian system or something?
Debian is mine and has been for decades + I’m a little bit happy to see it’s still well represented / well thought of in the community. Everything works, and you can choose new + exciting with headaches sometimes, or old + stable with no headaches but old.
Only real issue is the package management hasn’t kept pace with node / python / go / everything else wanting to do its own little mini package management, and so very occasionally that side is a little bit of a mess
NixOS I would like to try at some point, as the core philosophy seems a little more suited to the modern (Docker / pip / etc) era, but I’ve never messed with it
I think no 😕
ActivityPub is so loosely designed (in my opinion as somewhat of an outsider) that the opportunity to have all the different services interact smoothly with each other was squandered. It’s basically one little fiefdom per app, and if Pixelfed wants to make itself compatible with Mastodon’s fiefdom, then fine, and likewise for Mbin with Lemmy and so on, but it’s not really “cross compatible” across the whole universe of apps in the way that better-designed protocols like email are, where it’s just “email” with no app specificity to it. It is a shame and a missed opportunity with how the protocol was designed, I think.
I think in general the fediverse people are working on solutions, but we’re sort of stuck in the present setup, which has this not-really-ideal compartmentalization, and there’s not a good way to fix it. Certainly not from the Lemmy side that I’m aware of. Two possibilities though:
Ask GPT to rewrite your configuration, check over it with diff to make sure it didn’t do something dumb, bingo bango
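If you’d rather script the checking step, Python’s difflib does the same job as plain diff (the filenames here are just placeholders for your original config and the GPT rewrite):

```python
import sys
import difflib

# Placeholder filenames: your original config vs. the GPT rewrite.
with open("app.conf.orig") as old, open("app.conf.new") as new:
    diff = difflib.unified_diff(
        old.readlines(), new.readlines(),
        fromfile="app.conf.orig", tofile="app.conf.new",
    )
    sys.stdout.writelines(diff)  # empty output means the files are identical
```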
Stop, stop. It hurts, it’s too real.
Tor’s obfs4 protocol is pretty difficult to block, and it has some other transports that are options if obfs4 is unusable in a heavy censorship regime. This page is a good overview of how to start; with the right transport and bridge setup it’ll be extremely difficult for your ISP to prevent you having access.
You could make your home server a securely-accessed onion site and connect to a remote-access-via-web service you’re running there. That part might be a little challenging (and this process overall may be overkill) but it’d be very challenging for them to block it, I think, so if you’ve tried some things and had no luck, that might be the way to do it.
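For the onion-site half of that, a rough sketch with the stem library looks something like this; it assumes tor is running locally with its ControlPort enabled and that the web service you want to reach listens on localhost:8080 (both assumptions), and for the “securely-accessed” part you’d still want onion-service client authorization on top:

```python
from stem.control import Controller

# Assumes torrc has "ControlPort 9051" with cookie auth, and that the service
# you want to reach remotely is listening on localhost:8080.
with Controller.from_port(port=9051) as controller:
    controller.authenticate()
    # Map port 80 of the onion address to the local service.
    onion = controller.create_ephemeral_hidden_service(
        {80: 8080},
        await_publication=True,  # block until the descriptor is published
    )
    print(f"reachable at http://{onion.service_id}.onion")
    # Ephemeral services vanish when the control connection closes, so stay open.
    input("press enter to tear it down...")
```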
Be careful obviously
Honestly having GPT write one-off code for you for particular selected pieces (esp ones that require a lot of domain knowledge) works pretty well in my experience
I think one of the worst things that happened to internet culture was when the “I’m a fellow nerd and I am happy you made me some free nerd stuff, thank you” mentality got replaced with “I’m a customer and you are making a product for me.” It’s like joining someone’s Lemmy instance, or running their free software, is a favor you’re doing them, and that gives you the right to complain to them and demand the features or things you want, and to threaten to leave and not bless them with your presence anymore if you don’t get them.
I see this all the time with Lemmy: people pressuring the devs to do some thing in some particular manner, the devs constantly explaining “hey, our time’s not unlimited and we have a large number of priorities, we’ll get to it when we get to it, if you feel strongly about it please do it yourself or hire someone,” which is 100% reasonable, and then for some reason that’s a problem.
Does it work out okay with 12 cores purely on CPU? About how fast is the interaction?
I played around a little with Ollama and gpt4all but it seemed to me like it wasn’t fast enough to be useful on pure CPU, but if I could just throw cores at it then I might revisit the issue.
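If I do revisit it, this is roughly the check I’d run: time a generation against a local Ollama server and compute tokens per second (the model name is just an example, and it assumes Ollama’s default port and the eval_count / eval_duration fields its API reports):

```python
import requests

# Assumes a local Ollama server on its default port; the model name is an example.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.2", "prompt": "Explain SMTP retries in one paragraph.", "stream": False},
    timeout=600,
)
data = resp.json()
# eval_count = generated tokens, eval_duration = nanoseconds spent generating.
tokens_per_sec = data["eval_count"] / (data["eval_duration"] / 1e9)
print(f"{data['eval_count']} tokens at {tokens_per_sec:.1f} tok/s")
```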
Preach
(a) network effects (b) inertia (c) Matrix is still kind of a pain in the ass
are all easily more capable of explaining it than (d) all the users are otherkin
I absolutely agree with the central message. I would add a few more, notably the lack of end-to-end encryption and the partial ownership by Tencent, as strong reasons to stay away if you can.
That said, a lot of the particular details listed here don’t actually seem right to me.
Like I say, I actually agree with the central thesis, but not with more or less any of the specific reasons he cites.
Also, export your DBs first, and snapshot the export instead of the raw DB files
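Something along these lines for the export step, assuming Postgres (which is what Lemmy uses); the database name and backup path are placeholders:

```python
import subprocess
from datetime import datetime

# Placeholders: adjust the database name and backup directory for your setup.
db_name = "lemmy"
outfile = f"/var/backups/{db_name}-{datetime.now():%Y%m%d-%H%M%S}.dump"

# -Fc writes pg_dump's compressed custom format, restorable with pg_restore.
subprocess.run(["pg_dump", "-Fc", "-f", outfile, db_name], check=True)
print(f"wrote {outfile}; snapshot this file rather than the live data directory")
```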