![](https://lemmy.ca/pictrs/image/163406c1-5236-49ff-a3bd-1227d670b411.png)
![](https://lemmy.world/pictrs/image/8286e071-7449-4413-a084-1eb5242e2cf4.png)
It does allow forking and redistributing, but you cannot remove or obscure functionality related to payments. https://gitlab.futo.org/videostreaming/grayjay/-/blob/master/LICENSE.md
Is that not against their TOS? It could make the service more expensive for the rest of us.
What’re you hosting on them?
Not really. There are de-obfuscation headers that they officially provide, which can make the decompiled source readable for the purpose of making mods, but you're not allowed to redistribute any of the code.
I thought the whole point was for it to be compatible with Bitwarden and their apps and extensions. Sorry, I was thinking of Vaultwarden 🤦🏾
Have you found much practical use for small models yet? I love the idea that even the 1.1B TinyLlama model can run on my phone, but I haven't found much real-world use for it yet. Llama 3 8B feels better, but not by much, even for emails, as it's a bit dumb.
I was using this for a while, but it was clunky and I still ran into conflicts and data loss.
I’ve tried this in the past, but it didn’t seem like there was an easy way to sync with nextcloud on Android.
What’s wrong with it on a home network?
I haven't really configured a tagging system that makes any sense, so it's mostly used to search through documents by their text. I'd like to figure out how to hook up a vector database to it to do really fuzzy searching.
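Not the commenter's actual setup, but the vector-search idea boils down to: embed each document as a vector, embed the query the same way, and rank documents by cosine similarity. Here's a toy sketch where a bag-of-words count stands in for a real embedding model, and a plain list stands in for a vector database:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding': word -> count.
    A real setup would use a sentence-embedding model instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def fuzzy_search(query, docs):
    """Rank documents by similarity to the query, best match first."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)

docs = [
    "invoice from the electric company",
    "tax return paperwork 2023",
    "warranty for the dishwasher",
]
print(fuzzy_search("electric company invoice", docs)[0])
# -> invoice from the electric company
```

With real embeddings, "electricity bill" would also land near that invoice even with zero shared words, which is the "really fuzzy" part a vector database buys you.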
Why not Nextcloud? Seafile stores files in a proprietary database.
Yup, K.I.S.S.
SnappyMail
Why is it better than Roundcube?
Underpowered is probably the reason; they're small and really low-powered. A Pi could be 1/10th the power consumption of an x86 computer, and thus produce less noise and heat.
The microphone and "AI" models are all private, local, offline, and source-available. They use a Whisper model for speech-to-text and a small LLM for next-word prediction.
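The real keyboard runs an on-device LLM, but the concept of next-word prediction is easy to illustrate with a toy bigram model (purely a sketch of the idea, not their implementation):

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which across the corpus."""
    model = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word, k=3):
    """Return up to k most frequent words seen after `word`."""
    return [w for w, _ in model[word.lower()].most_common(k)]

model = train_bigrams(
    "see you soon . see you later . see you tomorrow . thank you"
)
print(predict_next(model, "you"))
# -> ['soon', 'later', 'tomorrow']
```

An LLM does the same job (rank candidate next tokens by probability) but conditions on the whole preceding context instead of just one word, which is why it predicts so much better than this.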