Models
Model configuration — providers, API keys, model selection, quality tiers, and fallbacks.
Reeve supports multiple AI model providers and lets you choose, switch, and configure models to balance quality, speed, and cost.
How Model Selection Works
Reeve resolves models in this order:
- Primary model — Your default model for all conversations
- Fallbacks — Backup models tried in order if the primary fails
- Provider auth failover — If one API key for a provider fails, the next is tried before moving to the next model
This means your agent stays responsive even if a provider has an outage.
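The resolution order above can be sketched as a nested loop: outer over models (primary, then fallbacks), inner over a provider's API keys. This is an illustrative model only, not Reeve's implementation; the model ids and key names are placeholders:

```python
# Illustrative sketch of Reeve's resolution order (not its actual source).
# The configuration values below are placeholders.
PRIMARY = "anthropic/claude-sonnet-4-5"
FALLBACKS = ["openai/gpt-5.2", "anthropic/claude-opus-4-5"]
API_KEYS = {"anthropic": ["key-a", "key-b"], "openai": ["key-c"]}

def resolve(call):
    """Try the primary, then each fallback; within a provider, rotate keys."""
    for model in [PRIMARY, *FALLBACKS]:
        provider = model.split("/")[0]
        for key in API_KEYS.get(provider, []):
            try:
                return call(model, key)  # first success wins
            except Exception:
                continue  # auth failover: next key, then next model
    raise RuntimeError("No models available")
```

The key point is the nesting: every key for a provider is exhausted before the chain moves on to the next model.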
Setting Up Models
Onboarding Wizard (Recommended)
The fastest way to configure models:
```shell
reeve onboard
```
The wizard sets up your preferred provider and API key in one step.
Manual Configuration
Set models through the CLI:
```shell
# Set primary model
reeve models set anthropic/claude-sonnet-4-5

# Add fallback models
reeve models fallbacks add openai/gpt-5.2
reeve models fallbacks add anthropic/claude-opus-4-5

# Set image model (for vision tasks)
reeve models set-image anthropic/claude-sonnet-4-5
```
In-Chat Switching
Switch models mid-conversation without restarting:
```shell
/model                           # Show model picker
/model list                      # List available models
/model 3                         # Pick by number
/model anthropic/claude-opus-4-5 # Pick by name
/model status                    # Current model + auth status
```
Supported Providers
Reeve works with a wide range of model providers:
| Provider | Models | Auth |
|---|---|---|
| Anthropic | Claude Sonnet, Opus, Haiku | API key or claude setup-token |
| OpenAI | GPT-5.2, o3, Codex | API key or OAuth |
| Amazon Bedrock | Claude models via AWS | AWS credentials |
| Ollama | Any local model | Local (no key needed) |
| OpenRouter | 100+ models from multiple providers | API key |
| Deepgram | Speech-to-text, text-to-speech | API key |
See the Providers section for setup guides for each provider.
Quality Tiers
Different tasks benefit from different model capabilities:
| Tier | Example Models | Best For |
|---|---|---|
| Fast | Claude Haiku, GPT-4o-mini | Quick responses, simple lookups |
| Standard | Claude Sonnet, GPT-5.2 | Day-to-day conversations, tool use |
| Premium | Claude Opus | Complex reasoning, architecture decisions |
You can configure per-task model overrides — for example, use Sonnet for normal chat but Opus for cron jobs that need deep analysis:
```shell
reeve cron add \
  --name "Weekly analysis" \
  --cron "0 6 * * 1" \
  --session isolated \
  --model opus \
  --message "Deep weekly analysis of business metrics."
```
Fallback Chain
If your primary model is unavailable, Reeve automatically tries fallbacks:
```
Primary:    anthropic/claude-sonnet-4-5
    ↓ (fails)
Fallback 1: openai/gpt-5.2
    ↓ (fails)
Fallback 2: anthropic/claude-opus-4-5
    ↓ (fails)
Error: No models available
```
Manage fallbacks via CLI:
```shell
reeve models fallbacks list
reeve models fallbacks add openai/gpt-5.2
reeve models fallbacks remove openai/gpt-5.2
reeve models fallbacks clear
```
Model Allowlist
Restrict which models are available in your setup:
```
{
  agent: {
    model: { primary: "anthropic/claude-sonnet-4-5" },
    models: {
      "anthropic/claude-sonnet-4-5": { alias: "Sonnet" },
      "anthropic/claude-opus-4-5": { alias: "Opus" }
    }
  }
}
```
When an allowlist is set, only listed models can be selected via /model. Unlisted models return a "not allowed" error.
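Conceptually, the allowlist gate behaves like the sketch below. Only the config shape comes from the snippet above; the function name and the alias-lookup behavior are assumptions for illustration:

```python
# Hypothetical allowlist check mirroring the config snippet above.
ALLOWED = {
    "anthropic/claude-sonnet-4-5": {"alias": "Sonnet"},
    "anthropic/claude-opus-4-5": {"alias": "Opus"},
}

def select_model(requested: str) -> str:
    """Resolve a full model id or its alias against the allowlist."""
    for model_id, meta in ALLOWED.items():
        if requested in (model_id, meta.get("alias")):
            return model_id
    raise ValueError(f"{requested}: not allowed")
```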
Local Models
Run models locally with Ollama for privacy or offline use:
```shell
# Install and run a model locally
ollama pull llama3
reeve models set ollama/llama3
```
Local models have no API costs and keep all data on your machine.
Checking Status
```shell
# Full status — current model, fallbacks, auth health
reeve models status

# Machine-readable
reeve models status --json

# Automation-friendly (exit code 1 = missing auth)
reeve models status --check
```
For a detailed comparison of model failover, auth profile rotation, and cooldowns, see Model Failover. For provider-specific setup, browse the Providers section.
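Scripts can build on the machine-readable status output. The sketch below assumes a hypothetical --json payload shape (the real schema may differ) and flags providers whose auth is missing:

```python
import json

# Hypothetical --json payload (illustrative, not the real schema)
raw = """{
  "primary": "anthropic/claude-sonnet-4-5",
  "fallbacks": ["openai/gpt-5.2"],
  "auth": {"anthropic": true, "openai": false}
}"""

def missing_auth(status: dict) -> list:
    """Return providers used by configured models that lack working auth."""
    providers = {m.split("/")[0] for m in [status["primary"], *status["fallbacks"]]}
    return sorted(p for p in providers if not status["auth"].get(p, False))

print(missing_auth(json.loads(raw)))  # openai has no working key in this example
```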