# Default Docker + Ollama Flow
The checked-in Docker setup still serves Resume Matcher on a single public origin at http://localhost:3000. To use Ollama with Docker, run Ollama on the host machine and point Resume Matcher at it.
## Host Ollama Setup
- Install Ollama from ollama.com.
- Pull a model on the host, for example:

  ```shell
  ollama pull qwen3:8b
  ```
- Start Resume Matcher from the app repo root with Ollama wired in:

  ```shell
  LLM_API_BASE=http://host.docker.internal:11434 docker compose up -d
  ```
- Open http://localhost:3000/settings and set:
  - Provider: `Ollama`
  - Model: `qwen3:8b`, or another model you already pulled
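The `host.docker.internal` hostname resolves automatically on Docker Desktop (macOS and Windows), but on a plain Linux engine it may not exist. If the hostname does not resolve, a compose override along these lines can map it to the host gateway — a sketch only, and the `app` service name is an assumption (use whichever service name the checked-in compose file defines):

```yaml
# docker-compose.override.yml -- sketch; service name "app" is assumed.
services:
  app:
    extra_hosts:
      # Maps host.docker.internal to the Docker host's gateway IP on Linux.
      - "host.docker.internal:host-gateway"
```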
## URLs
| Surface | URL |
|---|---|
| App | http://localhost:3000 |
| Settings | http://localhost:3000/settings |
| Health | http://localhost:3000/api/v1/health |
| API docs | http://localhost:3000/docs |
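Once the stack is up, the endpoints above can be smoke-tested from the host. A minimal sketch (`-sf` makes curl quiet and treats HTTP errors as failures):

```shell
# Probe the public endpoints; prints OK/FAIL per path.
result=""
for path in /api/v1/health /docs; do
  if curl -sf -o /dev/null "http://localhost:3000${path}"; then
    result="${result}OK   ${path}\n"
  else
    result="${result}FAIL ${path}\n"
  fi
done
printf "%b" "$result"
```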
## What This Page Covers
This page documents the supported host-Ollama pattern for Docker.
- Resume Matcher stays on http://localhost:3000
- Ollama runs on the host machine
- `LLM_API_BASE` points the containerized app at the host Ollama server
If you decide to run Ollama in a separate container, treat that as a custom compose extension. It is not the checked-in default Docker flow.
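If you do go the separate-container route, one possible shape looks like the following. This is entirely illustrative — the service names, volume name, and the app's service key are assumptions, not the checked-in defaults:

```yaml
# docker-compose.ollama.yml -- illustrative sketch, not the supported default.
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama-models:/root/.ollama   # persist pulled models across restarts
  app:
    environment:
      # Point the app at the sidecar by compose service name, not the host.
      - LLM_API_BASE=http://ollama:11434
volumes:
  ollama-models:
```

Note that with a sidecar you would pull models inside the `ollama` container (e.g. via `docker compose exec`) rather than on the host.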
## Troubleshooting

### Ollama is not responding
Check the host Ollama service:
```shell
curl http://localhost:11434/api/tags
```
### Resume Matcher cannot reach Ollama
Make sure you started the stack with `LLM_API_BASE` pointing at the host:

```shell
LLM_API_BASE=http://host.docker.internal:11434 docker compose up -d
```
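If it still cannot connect, confirm the variable actually reached the container. The `app` service name here is an assumption — substitute the service name from the checked-in compose file:

```shell
# Print LLM_API_BASE as the container sees it; falls back to a notice
# if the service is not running or the variable is unset.
seen="$(docker compose exec app env 2>/dev/null | grep '^LLM_API_BASE=' \
  || echo 'LLM_API_BASE not visible in the container')"
echo "$seen"
```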
### Responses are slow
Use a smaller local model or free up memory on the host.