If all your computers are on the same local network, you can use Warpinator.
KOReader has a plugin to sync with a local Calibre server, and it's a REALLY good ereader app.
You can self-host a local ChatGPT-like AI, known as a local large language model (LLM). Searx and SearXNG are great customizable metasearch engines that you can set up to scrape whatever sources you want.
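If you want to poke at a self-hosted LLM from a script, here's a minimal sketch assuming you're running it behind Ollama on its default port; the model name is just an example of something you might have pulled:

```python
# Minimal sketch: query a locally hosted LLM through Ollama's HTTP API.
# Assumes Ollama is running on its default port (11434) and that a model
# has already been pulled, e.g. with `ollama pull mistral-small`.
import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "mistral-small") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete response instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local_llm("Explain what a metasearch engine is in one sentence."))
```

Same idea works with any local server that exposes an HTTP API (llama.cpp's server, LocalAI, etc.), just with a different endpoint and payload.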
I run a local LLM on my gaming computer, which is about a decade old now, with an old 1070 Ti 8GB VRAM card. It does a good job running Mistral Small 22B at 3 t/s, which I think is pretty good. But any tech enthusiast into LLMs would look at those numbers and probably wonder how I can stand such a slow token speed. I look at their multi-card data center racks with 5x 4090s and wonder how the hell they can afford it.
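For anyone wondering how a 22B model runs on 8GB of VRAM at all: you use a quantized copy and offload only part of the model to the GPU, which is exactly why the token speed drops. A rough sketch with llama-cpp-python; the model filename and layer count are illustrative assumptions, not my actual config:

```python
# Sketch of running a quantized 22B model on an 8GB GPU with llama-cpp-python:
# offload as many layers as fit in VRAM, keep the rest on the CPU. The
# partial offload is what makes generation slow (e.g. ~3 t/s on old cards).
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-Small-Instruct-22B.Q4_K_M.gguf",  # hypothetical quantized file
    n_gpu_layers=20,  # partial offload; tune to whatever fits in 8GB VRAM
    n_ctx=4096,       # context window
)

out = llm(
    "Q: Why is partial GPU offload slower than full offload?\nA:",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```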