# Sentient Jimmy

Another Ollama bot for Discord, but designed for mesh self-hosting.

## Example `config.toml`

```toml
[bot]
token = "your-bot-token"
debug_guilds = [123456789]  # your guild ID(s); omit for global commands
db_url = "sqlite://:memory:"
# ^ The database URL. If $DATABASE_URL is set, it overrides this setting.
# The default in a Docker environment is IN MEMORY, i.e. `sqlite://:memory:`.
# The default in a non-Docker environment is sqlite://default.db, i.e. SQLite at ./default.db.
# You can use SQLite or PostgreSQL.
# You can build Jimmy with MySQL or MSSQL/Oracle support by changing the `asyncpg` extra
# to `asyncmy`/`asyncodbc` in the tortoise-orm requirement in requirements.txt.
# (The resolution order is sketched just below this config.)

[ollama]
order = ["server1", "server2", "fallback"]
# ^ Order of preference for Ollama servers: if server1 is offline, server2 will be tried,
# and so on (see the fallback sketch below this config).

[ollama.server1]
base_url = "https://hosted.ollama.internal"  # default port is 443, because HTTPS
gpu = true
vram_gb = 8
default_model="llama3:latest"  # sets the default model for /ollama chat

[ollama.server2]
base_url = "http://192.168.1.2:11434"
gpu = true
vram_gb = 4  # <8GB will enable "low VRAM mode" in ollama
default_model = "llama2:latest"

[ollama.fallback]
base_url = "http://192.168.1.250:11434"
gpu = false
vram_gb = 32  # in the case of CPU Ollama, "vram" is actually just regular RAM.
default_model = "orca-mini:3b"

## Running

See the provided `docker-compose.yml` for an example of how to run this bot with Docker Compose.

Alternatively, you can just run the Docker image: `git.i-am.nexus/nex/sentient-jimmy:master`.