

I believe you just need to set the env var OLLAMA_HOST to 0.0.0.0:11434 and then restart Ollama.
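For example, if you happen to run Ollama under Docker Compose, that amounts to setting the variable on the container (image and port below are just Ollama’s defaults; on a bare-metal Linux install you’d instead add the variable to the systemd service and restart it):

services:
  ollama:
    image: ollama/ollama
    restart: unless-stopped
    environment:
      OLLAMA_HOST: "0.0.0.0:11434"   # listen on all interfaces, not just localhost
    ports:
      - 11434:11434
    volumes:
      - ./ollama:/root/.ollama       # model storage; the host path is up to you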
What OS is your server running? Do you have an Android phone or an iPhone?
In either case all you likely need to do is expose the port and then access your server by IP on that port with an appropriate client.
In Ollama you can expose the port to your local network by changing the bind address from 127.0.0.1 to 0.0.0.0
Regarding clients: on iOS you can use Enchanted or Apollo to connect to Ollama.
On Android there are likely comparable apps.
From https://wiki.servarr.com/
Welcome to the consolidated wiki for Lidarr, Prowlarr, Radarr, Readarr, Sonarr, and Whisparr. Collectively they are referred to as “*Arr”, “*Arrs”, “Starr”, or “Starrs”. They are designed to automatically grab, sort, organize, and monitor your Music, Movie, E-Book, or TV Show collections for Lidarr, Radarr, Readarr, Sonarr, and Whisparr; and to manage your indexers and keep them in sync with the aforementioned apps for Prowlarr.
See also https://wiki.ravianand.me/home-server/apps/servarr
Servarr is the name for the ecosystem of apps that help you run and automate your own home media server. This includes fetching movie and TV show releases, books and music management, indexer and UseNet/Torrent managers and downloaders.
I’m a professional software engineer and I’ve been in the industry since before Kubernetes was first released, and I still found it overwhelming when I had to use it professionally.
I also can’t think of an instance when someone self-hosting would need it. Why did you end up looking into it?
I use Docker Compose for dozens of applications that range in complexity from “just run this service, expose it via my reverse proxy, and add my authentication middleware” to “in this stack, run this service with my custom configuration, a custom service I wrote myself or forked, and another service that I wrote a Dockerfile for; make this service accessible to this other service, but not to the reverse proxy; expose these endpoints to the auth middleware and for these endpoints, allow bypassing of the auth middleware if an API key is supplied.” And I could do much more complicated things with Docker if I needed to, so even for self-hosters with more complex use cases than mine, I question whether Kubernetes is the right fit.
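For illustration, here’s a stripped-down sketch of the “reachable by this other service, but not by the reverse proxy” part (service, image, and network names are made up):

services:
  app:
    image: example/app                # hypothetical service that the reverse proxy should reach
    networks:
      - proxy                         # the network the reverse proxy is attached to
      - internal
  helper:
    image: example/helper             # hypothetical backing service
    networks:
      - internal                      # reachable by `app`, invisible to the reverse proxy

networks:
  proxy:
    external: true                    # created/owned by the reverse proxy’s own stack
  internal:
    internal: true                    # Compose network with no outside routing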
You can store passkeys in (and use them from) a password manager instead of the OS’s secret vault. I think most major password managers support this now - Bitwarden definitely does.
In fact, Redot has had 13 releases since the project started late last year.
With an absolutely massive number of commits since then.
An absolutely massive number of commits that were originally made to Godot, sure. Redot has 118 more commits than Godot as of the time of this writing (76,344 vs 76,266). That’s not even 1 original commit per day.
This is what I would try first. It looks like 1337 is the exposed port, per https://github.com/nightscout/cgm-remote-monitor/blob/master/Dockerfile
x-logging: &default-logging
  options:
    max-size: '10m'
    max-file: '5'
  driver: json-file

services:
  mongo:
    image: mongo:4.4
    volumes:
      - ${NS_MONGO_DATA_DIR:-./mongo-data}:/data/db:cached
    logging: *default-logging

  nightscout:
    image: nightscout/cgm-remote-monitor:latest
    container_name: nightscout
    restart: always
    depends_on:
      - mongo
    logging: *default-logging
    ports:
      - 1337:1337
    environment:
      ### Variables for the container
      NODE_ENV: production
      TZ: [removed]

      ### Overridden variables for Docker Compose setup
      # The `nightscout` service can use HTTP, because we use `nginx` to serve the HTTPS
      # and manage TLS certificates
      INSECURE_USE_HTTP: 'true'

      # For all other settings, please refer to the Environment section of the README

      ### Required variables
      # MONGO_CONNECTION - The connection string for your Mongo database.
      # Something like mongodb://sally:sallypass@ds099999.mongolab.com:99999/nightscout
      # The default connects to the `mongo` included in this docker-compose file.
      # If you change it, you probably also want to comment out the entire `mongo` service block
      # and `depends_on` block above.
      MONGO_CONNECTION: mongodb://mongo:27017/nightscout

      # API_SECRET - A secret passphrase that must be at least 12 characters long.
      API_SECRET: [removed]

      ### Features
      # ENABLE - Used to enable optional features, expects a space delimited list, such as: careportal rawbg iob
      # See https://github.com/nightscout/cgm-remote-monitor#plugins for details
      ENABLE: careportal rawbg iob

      # AUTH_DEFAULT_ROLES (readable) - possible values readable, denied, or any valid role name.
      # When readable, anyone can view Nightscout without a token. Setting it to denied will require
      # a token from every visit, using status-only will enable api-secret based login.
      AUTH_DEFAULT_ROLES: denied

      # For all other settings, please refer to the Environment section of the README
      # https://github.com/nightscout/cgm-remote-monitor#environment
To run it with Nginx instead of Traefik, you need to figure out what port Nightscout’s web server runs on, then expose that port, e.g.,
services:
  nightscout:
    ports:
      - 3000:3000
You can remove the labels (those are only used by Traefik), as well as the Traefik service itself.
Then just point Nginx to that port (e.g., 3000) on your local machine.
---
Traefik has to know the port too, but it will auto-detect the port that a local Docker service is running on. It looks like your config is relying on that feature, as I don’t see the label that explicitly specifies the port.
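For reference, the explicit version is a single label on the service (the router/service name here is made up; the port is whatever the container actually listens on):

services:
  nightscout:
    labels:
      - traefik.http.services.nightscout.loadbalancer.server.port=3000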
I thought Hue bulbs used Zigbee?
The up arrow moves through the letters, e.g., A->B->C. The down arrow moves to the next character in the sequence, e.g., C->CA->CAA. If you click past the correct letter, you’ll have to click all the way through again. And if you submit the wrong letter, you have to start all over (after it takes twenty seconds attempting to connect with the wrong password and then alerts you that it didn’t work, of course).
What a misleading, clickbait title:
Mozilla moves away from open source
When the author really meant:
Mozilla does a thing I don’t like
OP is also in the allegedly ultra rare camp of “successfully configured Jellyfin and lived to tell the tale.” Not what I’d expect of someone unable to configure Plex correctly. I’ve not set up a Plex server myself but my guess is it wasn’t clear that it was misconfigured - it did work previously, after all.
If they’re calling it remote streaming when you’re on the same (local) network, that’s not exactly intuitive. I’d say OP’s phrasing was fair.
It’s the new hyped-up version of “no-code” or “low-code” solutions, but with AI, so you have more flexibility to footgun.
Not any lazier. Script kiddies didn’t write the code themselves, either.
You can run a NAS with any Linux distro - your limiting factor is having enough drive storage. You might want to consider something that’s great at using virtual machines (e.g., Proxmox) if you don’t like Docker, but I have almost everything I want running in Docker and haven’t needed to spin up a single virtual machine.
Wow, not a single reply in here with the obvious answer?
You’ll need a domain name. It doesn’t need to be paid - you can use DuckDNS. Note that whoever hosts your DNS needs to support dynamic DNS. I use Cloudflare for this for free (not their other services) even though I bought my domains from Namecheap.
Then, you can either set up Let’s Encrypt on the device and have it generate certs in a location Jellyfin knows about (I’m not sure what that entails exactly, as I don’t use this approach), or you can do what I do:
On your router, forward port 443 to your Pi’s HTTPS port (which, for simplicity’s sake, should also be 443). You likely also need to forward port 80 so Let’s Encrypt can complete its HTTP validation.
If you want to use Jellyfin while on your network and your router doesn’t support NAT loopback, you can use the server’s IP address and expose Jellyfin’s HTTP port (e.g., 8096) - just make sure not to forward that port on the router. You’ll have unencrypted transfers on your local network if you do this, though.
Make sure you have secure passwords in Jellyfin. Note that you’re exposed to any Jellyfin or Traefik vulnerability that gets discovered, so make sure to keep your software updated.
If you use Docker, I can share some config info with you on how to set this all up - Traefik, Jellyfin, and a dynamic DNS updater, all as docker-compose services.
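A rough sketch of what I mean (the domain, email, token, and paths are placeholders you’d swap for your own; Jellyfin’s default HTTP port is 8096):

services:
  traefik:
    image: traefik:v3.0
    restart: unless-stopped
    ports:
      - 80:80
      - 443:443
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      # Let's Encrypt via the HTTP-01 challenge (this is why port 80 gets forwarded)
      - --certificatesresolvers.le.acme.email=you@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.httpchallenge=true
      - --certificatesresolvers.le.acme.httpchallenge.entrypoint=web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt

  jellyfin:
    image: jellyfin/jellyfin
    restart: unless-stopped
    volumes:
      - ./jellyfin/config:/config
      - ./media:/media
    labels:
      - traefik.enable=true
      - traefik.http.routers.jellyfin.rule=Host(`myserver.duckdns.org`)
      - traefik.http.routers.jellyfin.entrypoints=websecure
      - traefik.http.routers.jellyfin.tls.certresolver=le
      - traefik.http.services.jellyfin.loadbalancer.server.port=8096

  # Keeps your DuckDNS record pointed at your current public IP;
  # see the linuxserver/duckdns docs for the full list of variables
  duckdns:
    image: lscr.io/linuxserver/duckdns
    restart: unless-stopped
    environment:
      SUBDOMAINS: myserver         # your DuckDNS subdomain
      TOKEN: your-duckdns-token    # placeholder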
Look up “LLM quantization.” The idea is that each parameter is a number; by default each one uses 16 bits of precision, but if you store them at lower precision, you use less space and lose some accuracy while still keeping the same parameters. There’s not much quality loss going from 16 bits to 8, but it gets more noticeable as you go lower. (That said, there are ternary models being trained from scratch that use 1.58 bits per parameter and are allegedly just as good as fp16 models of the same parameter count.)
If you’re using a 4-bit quantization, you need roughly half the parameter count in gigabytes of VRAM (4 bits is half a byte per parameter), plus some overhead for context. Q4_K_M is better than plain Q4, but also a bit larger. Ollama generally defaults to Q4_K_M. If you can handle a higher quantization, Q6_K is generally best. If you can’t quite fit it, Q5_K_M is generally better than any other option, followed by Q5_K_S.
For example, Llama3.3 70B, which has 70.6 billion parameters, has the following sizes for some of its quantizations:
This is why I run a lot of Q4_K_M 70B models on two 3090s.
Generally speaking, there’s not a perceptible quality drop going from 8-bit quantization down to Q6_K (though I’ve heard this is less true with MoE models). Below Q6 there’s a bit of a drop going to Q5 and then Q4, but the model’s still decent. Below 4-bit quantizations, you can generally get better results from a smaller-parameter model at a higher quantization.
TheBloke on Huggingface has a lot of GGUF quantization repos, and most, if not all of them, have a blurb about the different quantization types and which are recommended. When Ollama.com doesn’t have a model I want, I’m generally able to find one there.
I recommend a used 3090, as that has 24 GB of VRAM and generally can be found for $800ish or less (at least when I last checked, in February). It’s much cheaper than a 4090 and while admittedly more expensive than the inexpensive 24GB Nvidia Tesla card (the P40?) it also has much better performance and CUDA support.
I have dual 3090s so my performance won’t translate directly to what a single GPU would get, but it’s pretty easy to find stats on 3090 performance.
I believe you set env vars on Windows through System Properties -> Advanced -> Environment Variables.