Installation
Download, install, and configure the Reeve Desktop App.
Desktop Installation
Download
Download the latest release from GitHub:
- macOS (Apple Silicon): Reeve-2026.2.27-arm64.dmg
- macOS (Intel): Reeve-2026.2.27-x64.dmg
Install
- Open the .dmg file
- Drag Reeve to the Applications folder
- Launch from Applications or Spotlight
On first launch, macOS may show a Gatekeeper warning. Click Open to proceed — the app is signed and notarized.
First Run
When Reeve Desktop starts for the first time:
- Gateway starts — The bundled gateway launches on port 18789
- Frontend starts — The Next.js frontend launches on port 3100
- Cockpit opens — A native window opens showing the Cockpit dashboard
- Tray icon appears — A rooster icon in the menubar for quick access
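If something looks wrong after the steps above, a quick way to confirm the two bundled services came up is to probe their default ports (18789 and 3100, as listed above). This is an illustrative sketch, not part of the app:

```python
import socket

def port_open(port: int, host: str = "127.0.0.1", timeout: float = 1.0) -> bool:
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Default ports from the first-run steps above
    print("gateway (18789):", "up" if port_open(18789) else "down")
    print("frontend (3100):", "up" if port_open(3100) else "down")
```

Both should report "up" within a few seconds of launch; if not, check the tray icon's Gateway Status entry.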
Connect your LLM provider
Navigate to Settings → Models and add your API key:
- Anthropic (recommended) — Get a key at console.anthropic.com
- OpenAI — Get a key at platform.openai.com
- OpenRouter — Get a key at openrouter.ai
Create your first agent
- Go to Settings → Agents
- Click New Agent
- Choose a role (start with assistant)
- Name it and configure the model
You're ready to go! Open a chat session and start talking to your agent.
Auto-Updates
The desktop app checks for updates on launch via GitHub Releases. When an update is available:
- A notification appears: "Update available: v2026.2.28"
- Click Update to download and install
- The app restarts with the new version
Updates include new gateway and frontend versions bundled together.
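The release tags above follow a year.month.day scheme, so "newer" can be decided by comparing the dotted parts numerically. A minimal sketch of that comparison (illustrative, not the app's actual updater code):

```python
def parse_version(tag: str) -> tuple[int, ...]:
    """Turn a tag like 'v2026.2.28' into a comparable tuple (2026, 2, 28)."""
    return tuple(int(part) for part in tag.lstrip("v").split("."))

def is_newer(candidate: str, installed: str) -> bool:
    """True when the candidate release tag is newer than the installed one."""
    return parse_version(candidate) > parse_version(installed)

# is_newer("v2026.2.28", "2026.2.27")  -> True
```

Tuple comparison handles the month and day fields without zero-padding, which string comparison would get wrong (e.g. "2026.10.1" vs "2026.2.27").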
Menubar Tray
The tray icon provides quick access:
- Open Cockpit — Show/focus the main window
- Gateway Status — Running/stopped indicator
- Quit — Stop gateway and exit
Configuration
The desktop app uses the same reeve.json config as the standalone gateway. Config file location:
macOS: ~/Library/Application Support/Reeve/reeve.json
If the gateway detects an existing config at ~/.reeve/reeve.json, it uses that instead.
The desktop app bundles a complete gateway — it doesn't need a cloud account or internet connection (except for LLM API calls). You can use it fully offline with local models via Ollama.