Sad thing is that search engines have gotten so bad, and usually return so much garbage blog spam, that searching directly on Reddit is more likely to give useful results. I hope a similar amount of knowledge will build up on Lemmy over time.
Assuming they already own a PC, if someone buys two 3090s for it they’ll probably also have to upgrade their PSU, so that might be worth including in the budget. But it’s definitely a relatively low-cost way to get more VRAM; there are people who run three or four RTX 3090s too.
For LLMs it entirely depends on what size models you want to use and how fast you want them to run. Since there are diminishing returns to increasing model size, i.e. a 14B model isn’t twice as good as a 7B model, the best bang for the buck comes from the smallest model you think has acceptable quality. And if you consider generation speeds of around 1 token/second acceptable, you’ll probably get more value for money using partial offloading.
If your answer is “I don’t know what models I want to run”, then a second-hand RTX 3090 is probably your best bet. If you want to run larger models, building a rig with multiple (used) RTX 3090s is probably still the cheapest way to do it.
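To make “partial offloading” concrete: with llama.cpp you choose how many of the model’s layers run on the GPU and the rest run on the CPU. A minimal sketch (the model file and layer count are just placeholders, tune them to your VRAM):

```sh
# Offload 20 layers to the GPU, run the remaining layers on the CPU;
# more layers on the GPU means faster generation but more VRAM used
./main -m model-13b.Q4_K_M.gguf --n-gpu-layers 20 -p "your prompt"
```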
Is max tokens different from context size?
Might be worth keeping in mind that the generated tokens go into the context, so if you set it to 1k with a 4k context you only get 3k left for the character card and chat history. I usually have it set to around 400 tokens, and use TGW’s continue button in case a long response gets cut off.
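For example, in llama.cpp terms (koboldcpp and TGW expose the same settings under different names), the prompt budget is just the context size minus the generation cap; the model path and prompt here are placeholders:

```sh
# -c is the total context window, -n caps the generated tokens;
# with -c 4096 and -n 1024, roughly 3072 tokens remain for the prompt
./main -m model.gguf -c 4096 -n 1024 -p "your prompt here"
```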
llama.cpp uses the GPU if you compile it with GPU support and you tell it to use the GPU…
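Roughly like this for an NVIDIA card (build flags differ between llama.cpp versions, and the layer count is just an example):

```sh
# Build with cuBLAS support (assumes the CUDA toolkit is installed)
make LLAMA_CUBLAS=1
# Tell it to offload 35 layers to the GPU; how many fit depends on your VRAM
./main -m model.gguf --n-gpu-layers 35 -p "Hello"
```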
Never used koboldcpp, so I don’t know why it would give you shorter responses if both the model and the prompt are the same (also assuming you’ve generated multiple times and it’s always the same). If you don’t want to use Discord to visit the official koboldcpp server, you might get more answers from a more LLM-focused community such as !localllama@sh.itjust.works
A static website and Immich
There are tons of options for running LLMs locally nowadays, though none come close to GPT-4, Claude 2, etc. One place to start is /c/localllama@sh.itjust.works
https://github.com/miroslavpejic85/mirotalk might be an option. There’s both a server-based version and a P2P version IIRC.
I asked someone about this a few days ago, and they claimed to have over 30,000 photos in Nextcloud without issues.
I suppose “a few” is quite open to interpretation, but I have 50k photos now, so if it can handle 100k without getting sluggish it’ll probably be fine for the foreseeable future.
Does Nextcloud handle large numbers of photos nowadays? IIRC when I was comparing programs some years ago, I read that both it and ownCloud struggled once you got to a few tens of thousands of photos.
Ah, nice.
Btw, perhaps you’d like to add:

```yaml
build: .
```

to docker-compose.yml so you can just write `docker-compose build` instead of having to do it with a separate docker command. I’d submit a PR for it, but I’ve made a bunch of other changes to that file, so it’s probably faster if you do it.
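For context, it would go under the service definition, something like this (the service name is just a placeholder; keep whatever the file already defines):

```yaml
services:
  koboldcpp:    # placeholder; use the existing service name
    build: .    # lets `docker-compose build` build the local Dockerfile
```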
Awesome work! Going to try out koboldcpp right away. Currently running llama.cpp in Docker on my workstation because it would be such a mess to get the CUDA toolkit installed natively…
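In case it’s useful to anyone, this is roughly what that looks like, building the image from the CUDA Dockerfile shipped in the llama.cpp repo (the model path and layer count are just examples, and --gpus needs the NVIDIA container toolkit on the host):

```sh
# Build the full CUDA image from llama.cpp's own Dockerfile
docker build -t local/llama.cpp:full-cuda -f .devops/full-cuda.Dockerfile .
# Run inference with GPU access; ./models is wherever your model files live
docker run --gpus all -v ./models:/models local/llama.cpp:full-cuda \
  --run -m /models/model.gguf --n-gpu-layers 35 -p "Hello"
```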
Out of curiosity, isn’t conda a bit redundant in Docker, since a container already is an isolated environment?
Add “site:reddit.com” to your Google query.