It’s enough to run quantized versions of the distilled R1 models based on Qwen and Llama 3. Don’t know how fast they’ll run, though.
projectmoon
Keyoxide: aspe:keyoxide.org:MWU7IK7RMUTL3AP6U6UWCF4LHY
- 0 Posts
- 18 Comments
For stuff like that, it’s best to have an automated tool enforce it: an auto-formatter, or a linter like Checkstyle.
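For Java, a minimal Checkstyle config can catch this class of nitpick in CI instead of in review. A sketch (the module names are standard Checkstyle checks; which rules to enable is the team’s call):

```xml
<?xml version="1.0"?>
<!DOCTYPE module PUBLIC "-//Checkstyle//DTD Checkstyle Configuration 1.3//EN"
    "https://checkstyle.org/dtds/configuration_1_3.dtd">
<module name="Checker">
  <module name="TreeWalker">
    <!-- Flag unused imports so reviewers don't have to. -->
    <module name="UnusedImports"/>
    <!-- Enforce one brace placement style everywhere. -->
    <module name="LeftCurly"/>
  </module>
</module>
```

Once it runs in CI, “fix the braces” stops being a review comment and becomes a build failure, which takes the nitpicking out of human hands entirely.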
Had a team lead who kept requesting nitpicky changes, going around in full circles about what we should or shouldn’t change, to the point that changes took weeks to get merged. Then he had the gall to say that changes were taking too long to merge and that we couldn’t just leave code lying around in PRs.
Jesus fucking Christ.
There’s a reason that team imploded…
LLMs are statistical word association machines (token association, more accurately). So if you tell one not to make mistakes, it’ll likely weight the output towards including validation, checks, etc. It might still produce silly output claiming no mistakes were made despite having bugs or logic errors. But LLMs are just a tool! So use them for what they’re good at and can actually do, not what they themselves claim they can do lol.
projectmoon@lemm.ee to Selfhosted@lemmy.world • Guide to Self Hosting LLMs Faster/Better than Ollama (English)
1 · 1 year ago
OpenWebUI is connected to tabbyAPI’s OpenAI endpoint. I will try reducing the temperature and see if that makes it more accurate.
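For reference, lowering temperature with any OpenAI-compatible endpoint is just a request parameter. A sketch of the raw request (the URL, port, and model name below are placeholders, not tabbyAPI’s actual defaults):

```python
import json
import urllib.request

# Build a chat completion request with reduced sampling temperature.
payload = {
    "model": "local-model",  # placeholder model name
    "messages": [{"role": "user", "content": "Say hello."}],
    "temperature": 0.3,  # lower values make sampling less random
}
req = urllib.request.Request(
    "http://localhost:5000/v1/chat/completions",  # placeholder endpoint URL
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req) would send it; skipped here since no server is running.
```

OpenWebUI exposes the same knob in its per-model settings, so you don’t have to hit the API by hand; the request above is just what it sends under the hood.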
projectmoon@lemm.ee to Selfhosted@lemmy.world • Guide to Self Hosting LLMs Faster/Better than Ollama (English)
1 · 1 year ago
Context was set to anywhere between 8k and 16k. It was responding in English properly, and then about halfway to 3/4 of the way through a response, it would start outputting tokens in either a foreign language (Russian/Chinese in the case of Qwen 2.5) or things that don’t make sense (random code snippets, improperly formatted text). Sometimes the text was repeating as well. But I thought that might have been a template problem, because it seemed to be answering the question twice.
Otherwise, all settings are the defaults.
projectmoon@lemm.ee to Selfhosted@lemmy.world • Guide to Self Hosting LLMs Faster/Better than Ollama (English)
1 · 1 year ago
I tried it with both Qwen 14b and Llama 3.1. Both were exl2 quants produced by bartowski.
projectmoon@lemm.ee to Selfhosted@lemmy.world • Guide to Self Hosting LLMs Faster/Better than Ollama (English)
3 · 1 year ago
Perplexica works. It supports Ollama and custom OpenAI-compatible providers.
projectmoon@lemm.ee to Selfhosted@lemmy.world • Guide to Self Hosting LLMs Faster/Better than Ollama (English)
1 · 1 year ago
Super useful guide. However, after playing around with TabbyAPI, the responses from models quickly become gibberish, usually halfway through or towards the end. I’m using exl2 models from HuggingFace, with Q4, Q6, and FP16 cache. Any tips? Also, how do I control context length on a per-model basis? Is it max_seq_len in config.json?
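If memory serves, TabbyAPI reads a config.yml rather than a config.json, and context length is set per loaded model there. Something like the following (the key names are my recollection of the sample config and the model name is hypothetical, so verify against the project’s docs):

```yaml
model:
  model_dir: models
  model_name: Qwen2.5-14B-exl2  # hypothetical model folder
  max_seq_len: 16384            # context window for this model
  cache_mode: Q6                # KV cache quantization: FP16, Q8, Q6, or Q4
```

If the configured max_seq_len exceeds what the model was trained (or RoPE-scaled) for, garbled output partway through long responses is a plausible symptom, so that is worth checking first.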
projectmoon@lemm.ee to Selfhosted@lemmy.world • After some trial and error, I've managed to successfully deploy public instances of privacy-respecting services! (English)
2 · 2 years ago
Ah right. What I really meant to ask was whether it can do protocols other than HTTP.
Which I don’t think it can…
projectmoon@lemm.ee to Selfhosted@lemmy.world • After some trial and error, I've managed to successfully deploy public instances of privacy-respecting services! (English)
2 · 2 years ago
Are you able to tunnel ports other than 80 and 443 through Cloudflare?
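For what it’s worth, Cloudflare Tunnel can forward arbitrary TCP services via ingress rules, not just HTTP. A sketch of a cloudflared config (the tunnel ID and hostname are placeholders):

```yaml
tunnel: my-tunnel-id
credentials-file: /etc/cloudflared/creds.json

ingress:
  - hostname: ssh.example.com
    service: tcp://localhost:22   # arbitrary TCP service behind the tunnel
  - service: http_status:404      # required catch-all rule
```

The catch is the client side: non-HTTP protocols need something like `cloudflared access tcp --hostname ssh.example.com --url localhost:2222` to open a local proxy, since only HTTP(S) traffic can reach the tunnel directly from a plain browser.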
projectmoon@lemm.ee to Open Source@lemmy.ml • A fork of NewPipe that implements SponsorBlock and ReturnYouTubeDislike
41 · 2 years ago
The fork was originally created because upstream NewPipe elected not to include SponsorBlock functionality.
But wouldn’t you calculate the time in the future in the right time zone and then store it back as UTC?
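That is the usual approach: do the arithmetic in the local zone so DST shifts are respected, then convert to UTC for storage. A sketch in Python (the dates are chosen to straddle Europe’s spring 2024 DST change; they’re illustrative, not from the thread):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

oslo = ZoneInfo("Europe/Oslo")

# "Same wall-clock time tomorrow", computed in the local zone...
local_now = datetime(2024, 3, 30, 12, 0, tzinfo=oslo)  # day before DST starts
local_later = local_now + timedelta(days=1)            # wall-clock arithmetic

# ...then converted back to UTC for storage.
stored_utc = local_later.astimezone(timezone.utc)

# The UTC offset changes from +01:00 to +02:00 across the DST boundary,
# so the stored instant is 10:00 UTC, not 11:00.
print(stored_utc.isoformat())
```

Doing the addition directly on a UTC timestamp would silently land an hour off the intended wall-clock time whenever the interval crosses a DST boundary, which is exactly the bug this ordering avoids.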
projectmoon@lemm.ee to Open Source@lemmy.ml • Has Sony ever contributed anything to FreeBSD?
11 · 2 years ago
Didn’t they contribute networking stuff?
projectmoon@lemm.ee to Open Source@lemmy.ml • Infinity For Lemmy update just dropped. dev is really active in the community
21 · 3 years ago
Pretty sure the original developer of Infinity is one of the few people who will try to follow Reddit’s new API rules and charge a subscription fee to cover it. At least that was the case a few months ago. Not sure what’s currently happening.
projectmoon@lemm.ee to Open Source@lemmy.ml • Infinity For Lemmy update just dropped. dev is really active in the community
2 · 3 years ago
You’re not incorrect. Probably all will be fixed in time.
projectmoon@lemm.ee to Open Source@lemmy.ml • Infinity For Lemmy update just dropped. dev is really active in the community
3 · 3 years ago
The developer didn’t update the version string for 0.0.7. Known issue.

I feel like this article is exactly the type of thing it’s criticizing.