
Models

Model configuration — providers, API keys, model selection, quality tiers, and fallbacks.


Reeve supports multiple AI model providers and lets you choose, switch, and configure models to balance quality, speed, and cost.

How Model Selection Works

Reeve resolves models in this order:

  1. Primary model — Your default model for all conversations
  2. Fallbacks — Backup models tried in order if the primary fails
  3. Provider auth failover — If one API key for a provider fails, the next is tried before moving to the next model

This means your agent stays responsive even if a provider has an outage.
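The resolution order above can be sketched as a pair of nested loops: for each model, every auth profile for its provider is tried before falling through to the next model. This is an illustrative sketch only, not Reeve's actual code — `call_with_profile` is a stub standing in for a real provider call, and the profile names are made up.

```shell
# Stub provider call: in this demo, only OpenAI's second key succeeds.
call_with_profile() {
  [ "$1" = "openai/gpt-5.2" ] && [ "$2" = "key2" ]
}

# Try each model in order; within a model, rotate through auth
# profiles before giving up and moving to the next fallback.
resolve() {
  for model in anthropic/claude-sonnet-4-5 openai/gpt-5.2 anthropic/claude-opus-4-5; do
    for profile in key1 key2; do
      if call_with_profile "$model" "$profile"; then
        echo "$model via $profile"
        return 0
      fi
    done
  done
  echo "No models available" >&2
  return 1
}

resolve
```

Because auth rotation happens inside the model loop, a single bad key never knocks out a provider that has a working backup key.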

Setting Up Models

The fastest way to configure models:

reeve onboard

The wizard sets up your preferred provider and API key in one step.

Manual Configuration

Set models through the CLI:

# Set primary model
reeve models set anthropic/claude-sonnet-4-5

# Add fallback models
reeve models fallbacks add openai/gpt-5.2
reeve models fallbacks add anthropic/claude-opus-4-5

# Set image model (for vision tasks)
reeve models set-image anthropic/claude-sonnet-4-5

In-Chat Switching

Switch models mid-conversation without restarting:

/model                         # Show model picker
/model list                    # List available models
/model 3                       # Pick by number
/model anthropic/claude-opus-4-5  # Pick by name
/model status                  # Current model + auth status

Supported Providers

Reeve works with a wide range of model providers:

Provider          Models                                Auth
--------          ------                                ----
Anthropic         Claude Sonnet, Opus, Haiku            API key or claude setup-token
OpenAI            GPT-5.2, o3, Codex                    API key or OAuth
Amazon Bedrock    Claude models via AWS                 AWS credentials
Ollama            Any local model                       Local (no key needed)
OpenRouter        100+ models from multiple providers   API key
Deepgram          Speech-to-text, text-to-speech        API key

See the Providers section for setup guides for each provider.

Quality Tiers

Different tasks benefit from different model capabilities:

Tier        Example Models               Best For
----        --------------               --------
Fast        Claude Haiku, GPT-4o-mini    Quick responses, simple lookups
Standard    Claude Sonnet, GPT-5.2       Day-to-day conversations, tool use
Premium     Claude Opus                  Complex reasoning, architecture decisions

You can configure per-task model overrides — for example, use Sonnet for normal chat but Opus for cron jobs that need deep analysis:

reeve cron add \
  --name "Weekly analysis" \
  --cron "0 6 * * 1" \
  --session isolated \
  --model opus \
  --message "Deep weekly analysis of business metrics."

Fallback Chain

If your primary model is unavailable, Reeve automatically tries fallbacks:

Primary: anthropic/claude-sonnet-4-5
  ↓ (fails)
Fallback 1: openai/gpt-5.2
  ↓ (fails)
Fallback 2: anthropic/claude-opus-4-5
  ↓ (fails)
Error: No models available

Manage fallbacks via CLI:

reeve models fallbacks list
reeve models fallbacks add openai/gpt-5.2
reeve models fallbacks remove openai/gpt-5.2
reeve models fallbacks clear

Model Allowlist

Restrict which models are available in your setup:

{
  agent: {
    model: { primary: "anthropic/claude-sonnet-4-5" },
    models: {
      "anthropic/claude-sonnet-4-5": { alias: "Sonnet" },
      "anthropic/claude-opus-4-5": { alias: "Opus" }
    }
  }
}

When an allowlist is set, only listed models can be selected via /model. Unlisted models return a "not allowed" error.
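The allowlist behavior amounts to a simple membership check. The sketch below assumes that behavior for illustration — it is not Reeve's implementation, and `check_model` is a hypothetical helper:

```shell
# Models from the example allowlist config above.
allowlist="anthropic/claude-sonnet-4-5 anthropic/claude-opus-4-5"

# Accept a model id only if it appears in the allowlist.
check_model() {
  for allowed in $allowlist; do
    if [ "$allowed" = "$1" ]; then
      echo "ok: $1"
      return 0
    fi
  done
  echo "not allowed: $1"
  return 1
}

check_model anthropic/claude-opus-4-5
check_model openai/gpt-5.2 || true
```

The first call prints "ok: anthropic/claude-opus-4-5"; the second prints "not allowed: openai/gpt-5.2", mirroring the error a `/model` pick of an unlisted model would return.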

Local Models

Run models locally with Ollama for privacy or offline use:

# Install and run a model locally
ollama pull llama3
reeve models set ollama/llama3

Local models have no API costs and keep all data on your machine.

Checking Status

# Full status — current model, fallbacks, auth health
reeve models status

# Machine-readable
reeve models status --json

# Automation-friendly (exit code 1 = missing auth)
reeve models status --check
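The `--check` exit code makes the status command easy to wire into scripts or CI. The sketch below gates on it; the `reeve` function here is a stub (simulating missing auth, i.e. exit code 1) so the example runs standalone — remove it when the real binary is on your PATH.

```shell
# Stub standing in for the real CLI; simulates missing auth.
reeve() { return 1; }

# Gate a pipeline step on model auth being configured.
check_auth() {
  if reeve models status --check; then
    echo "auth ok"
  else
    echo "missing auth: run 'reeve onboard'"
    return 1
  fi
}

check_auth || true
```

In a real pipeline you would let the non-zero status fail the step instead of swallowing it with `|| true`.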

For a detailed comparison of model failover, auth profile rotation, and cooldowns, see Model Failover. For provider-specific setup, browse the Providers section.
