# ujust ollama

Local LLM inference server with GPU acceleration.
## Quick Start

Follow the standard service lifecycle:

| Step | Command |
| --- | --- |
| 1. Config | `ujust ollama config` |
| 2. Start | `ujust ollama start` |
| 3. Status | `ujust ollama status` |
| 4. Logs | `ujust ollama logs` |
| 5. Stop | `ujust ollama stop` |
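A typical first session simply chains these steps; a minimal sketch:

```sh
# Configure the server, then start it
ujust ollama config
ujust ollama start

# Confirm the container is running and inspect its logs
ujust ollama status
ujust ollama logs

# Stop the server when finished
ujust ollama stop
```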
## Subcommands

### Configuration

| Subcommand | Arguments | Description |
| --- | --- | --- |
| `config` | | Configure server |
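After changing the configuration, restarting the server is the usual way to pick up new settings; a sketch (whether a restart is strictly required depends on the option changed):

```sh
# Re-run configuration, then restart so the container uses the new settings
ujust ollama config
ujust ollama restart
```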
### Lifecycle

| Subcommand | Arguments | Description |
| --- | --- | --- |
| `restart` | | Restart server |
| `start` | | Start Ollama server |
| `stop` | | Stop Ollama server |
| `delete` | | Remove server config and container |
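For example, to tear the server down completely, stop it and then remove its configuration and container (a sketch using only the subcommands above):

```sh
# Stop the running server, then remove its config and container
ujust ollama stop
ujust ollama delete
```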
### Monitoring

| Subcommand | Arguments | Description |
| --- | --- | --- |
| `status` | | Show server/container status |
| `logs` | `[--lines=N]` | View container logs |
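For example, to check the server and limit log output to the most recent lines (the line count is illustrative):

```sh
# Show server/container status
ujust ollama status

# View only the last 50 container log lines
ujust ollama logs --lines=50
```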
### Operations

| Subcommand | Arguments | Description |
| --- | --- | --- |
| `list` | | List installed models |
| `pull` | `--model=NAME` | Download model from Ollama registry |
| `run` | `--model=NAME` | Run model |
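A typical workflow downloads a model, verifies it is installed, and then runs it; the model name below is illustrative:

```sh
# Download a model from the Ollama registry
ujust ollama pull --model=llama3.2

# Confirm it appears among installed models
ujust ollama list

# Run the model
ujust ollama run --model=llama3.2
```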
### Shell

| Subcommand | Arguments | Description |
| --- | --- | --- |
| `shell` | `[-- CMD]` | Open shell or execute command in container |
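The `shell` subcommand gives direct access to the Ollama container; anything after `--` is executed inside it. The `ollama list` command below is the upstream Ollama CLI, shown only as an illustration:

```sh
# Open an interactive shell inside the container
ujust ollama shell

# Or run a single command inside the container and return
ujust ollama shell -- ollama list
```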
### Other

| Subcommand | Arguments | Description |
| --- | --- | --- |
| `help` | | Show help |
## Flags

| Flag | Short | Default | Values | Description |
| --- | --- | --- | --- | --- |
| `--bind` | `-b` | | | |
| `--config-dir` | `-c` | | | |
| `--context-length` | | | | |
| `--gpu-type` | `-g` | | | |
| `--image` | `-i` | | | |
| `--instance` | `-n` | | | |
| `--lines` | `-l` | | | Number of log lines to show (used by `logs`) |
| `--model` | `-m` | | | Model name (used by `pull` and `run`) |
| `--port` | `-p` | | | |
| `--prompt` | | | | |
| `--tag` | `-t` | | | |
| `--workspace-dir` | `-w` | | | |
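Most flags have a single-letter alias. As a hedged sketch, assuming the short forms accept the same `=VALUE` syntax as the long forms (the justfile source below is the authoritative reference), the following pairs would be equivalent:

```sh
# Long forms, as documented in the subcommand tables
ujust ollama logs --lines=50
ujust ollama pull --model=llama3.2

# Short-form equivalents (assumed; verify against just/bazzite-ai/ollama.just)
ujust ollama logs -l=50
ujust ollama pull -m=llama3.2
```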
Source: just/bazzite-ai/ollama.just