# Sentient Jimmy

Another Ollama bot for Discord, designed with mesh self-hosting in mind.

Example `config.toml`:

```toml
[bot]
token = "your-bot-token"
debug_guilds = [123456789]  # omit for global commands

[ollama]
# Order of preference for Ollama servers: if server1 is offline,
# server2 is tried next, and so on.
order = ["server1", "server2", "fallback"]

[ollama.server1]
base_url = "https://hosted.ollama.internal"  # no port needed; HTTPS defaults to 443
gpu = true
vram_gb = 8

[ollama.server2]
base_url = "http://192.168.1.2:11434"
gpu = true
vram_gb = 4  # under 8 GB enables "low VRAM mode" in Ollama

[ollama.fallback]
base_url = "http://192.168.1.250:11434"
gpu = false
vram_gb = 32  # for CPU-only Ollama, "vram" is just regular RAM
```
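
For illustration, here is a minimal sketch of how the `order` list could drive failover: walk the servers in order and use the first one that responds. It assumes Python 3.11+ (for `tomllib`) and the `httpx` library; `load_config` and `pick_server` are hypothetical names made up for this example, not functions from the bot itself.

```python
import asyncio
import tomllib

import httpx


def load_config(path: str = "config.toml") -> dict:
    """Parse the TOML config shown above (tomllib needs Python 3.11+)."""
    with open(path, "rb") as f:
        return tomllib.load(f)


async def pick_server(config: dict) -> str | None:
    """Return the base_url of the first reachable server in ollama.order,
    or None if every server is offline."""
    ollama = config["ollama"]
    async with httpx.AsyncClient(timeout=5.0) as client:
        for name in ollama["order"]:
            base_url = ollama[name]["base_url"]
            try:
                # GET /api/tags is a cheap Ollama endpoint that lists local
                # models; a 2xx response means the server is reachable.
                (await client.get(base_url + "/api/tags")).raise_for_status()
            except httpx.HTTPError:
                continue  # offline or erroring; try the next server
            return base_url
    return None


if __name__ == "__main__":
    print(asyncio.run(pick_server(load_config())))
```

A caller would then point its Ollama client at whatever `pick_server` returns; the per-server `gpu` and `vram_gb` hints could further inform which models to run, since (per the comments above) servers reporting under 8 GB fall back to Ollama's low VRAM mode.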