# Sentient Jimmy
Another Ollama bot for Discord, designed for mesh self-hosting.
## Example config.toml
```toml
[bot]
token = "your-bot-token"
debug_guilds = [1234567890] # omit for global commands (TOML integers may not have leading zeros)

[ollama]
order = ["server1", "server2", "fallback"]
# ^ order of preference for Ollama servers. If server1 is offline, server2 will be tried, and so on.

[ollama.server1]
base_url = "https://hosted.ollama.internal" # default port is 443, because HTTPS
gpu = true
vram_gb = 8

[ollama.server2]
base_url = "http://192.168.1.2:11434"
gpu = true
vram_gb = 4 # <8GB enables "low VRAM mode" in Ollama

[ollama.fallback]
base_url = "http://192.168.1.250:11434"
gpu = false
vram_gb = 32 # for CPU-only Ollama, "vram" is just regular RAM
```