That’s cool! I’m really interested to know how many tokens per second you can get with a really good U.2. My gut is that it won’t actually be better than the 24VRAM+96RAM cache setup this user already tested with though.
How much do you need? Show your maths. I looked it up online for my post, and the website said 1747GB, which is completely in line with other models.
Can you link that post?
Running R1 locally isn’t realistic. But you can rent a server and run it privately on someone else’s computer. It costs about 10 per hour to run. You can run it on CPU for a little less. You need about 2TB of RAM.
If you want to run it at home, even quantized to 4 bit, you need 20 4090s. And since normal desktop mainboards only take 4 per computer, that’s 5 whole computers, and you need to figure out networking between them. A more realistic setup is probably running it on CPU, with some layers offloaded to 4 GPUs. In that case you’ll need 4 4090s and 512GB of system RAM. Absolutely not cheap or what most people have, but technically still within the top top top end of what you might have on your home computer. And remember this is still the dumb 4-bit configuration.
Edit: I double-checked and 512GB of RAM is unrealistic. In fact anything higher than 192GB is unrealistic. (High-end) AM5 mainboards support up to 256GB, but 64GB RAM sticks are much more expensive than 48GB ones. Most people will probably opt for 48GB or smaller sticks. You need a Threadripper to be able to use 512GB. Very unlikely for your home computer, but maybe it makes sense for something else you do professionally. In that case you might also have 8 RAM slots, and such a person might then think it’s reasonable to spend 3000 Euro on RAM. If you spent 15K Euro on your home computer, you might be able to run a reduced version of R1 very slowly.
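The memory maths above can be sanity-checked with a quick back-of-the-envelope script. This is a sketch, not a benchmark: the ~671B parameter count and 24GB per 4090 are assumptions, and it only counts the weights themselves (the comment’s figure of 20 GPUs is higher because you also need room for KV cache and activations).

```python
import math

def weights_gb(params_billions: float, bits_per_weight: float) -> float:
    """GB needed just for the weights (ignores KV cache and activations)."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

def gpus_needed(weights: float, vram_per_gpu_gb: float = 24.0) -> int:
    """Minimum GPU count to hold the weights, rounding up."""
    return math.ceil(weights / vram_per_gpu_gb)

# Assumed figures: ~671B parameters, 4-bit quantization, 24GB cards.
r1_weights = weights_gb(671, 4)
print(f"{r1_weights:.1f} GB of weights")          # 335.5 GB of weights
print(f"{gpus_needed(r1_weights)} x 24GB GPUs")   # 14 x 24GB GPUs
```

So 14 cards is the hard floor for the weights alone, which is why a real setup lands closer to 20 once inference overhead is included.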
It’s completely open to Microsoft and the NSA for sure.
Oh yeah I did mean cut/paste, my bad.
That’s actually not true. When you cut/paste a file (on most systems), it’s much faster than copying it. Deleting a file isn’t instant either, so copy-and-delete should be the slowest of the three operations.
When you cut and paste a file, you’re just renaming it or updating the file database. How that works differs depending on your file system, but it typically never involves rewriting much of the file’s data.
Edit: Fixed typo.
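You can see the difference directly in Python (filenames here are made up for the demo): a same-filesystem move is a single rename that touches only metadata, while copy-and-delete rewrites every byte before unlinking the original. Note that `os.rename` only works within one filesystem; across filesystems, tools fall back to copy-and-delete, which is exactly why cross-drive moves are slow.

```python
import os
import shutil
import tempfile

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "big_file.bin")
with open(src, "wb") as f:
    f.write(os.urandom(1024 * 1024))  # 1 MiB of dummy data

# "Cut/paste" within one filesystem: a single rename, no data rewritten.
moved = os.path.join(workdir, "moved.bin")
os.rename(src, moved)

# "Copy + delete": every byte is read and written again, then unlinked.
copied = os.path.join(workdir, "copied.bin")
shutil.copy2(moved, copied)
os.remove(moved)
```

For a multi-gigabyte file the rename still completes instantly, while the copy scales with file size.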
Sounds like the instance admins should contribute. Hiring a third person to push ‘moderation tools and data privacy law related issues’ development ahead in the queue would probably go a long way toward ending the ‘refusal to work’ on it. Or they could straight up help build the software themselves.
So what exactly is this? Open-source ChatGPT alternatives have existed alongside ChatGPT the entire time, in the form of downloading oobabooga (or a different interface) and an open-source model from Hugging Face. They aren’t competitive because users don’t have terabytes of VRAM or AI accelerators.
In Hydro’s defence, their comment was the first time I chuckled reading this thread.
There’s no separate computer, it’s all in the monitor!