Reeve Desktop App

Installation

Download, install, and configure the Reeve Desktop App.

Desktop Installation

Download

Download the latest release from GitHub:

  • macOS (Apple Silicon): Reeve-2026.2.27-arm64.dmg
  • macOS (Intel): Reeve-2026.2.27-x64.dmg

Install

  1. Open the .dmg file
  2. Drag Reeve to the Applications folder
  3. Launch from Applications or Spotlight

On first launch, macOS may show a Gatekeeper warning. Click Open to proceed — the app is signed and notarized.

First Run

When Reeve Desktop starts for the first time:

  1. Gateway starts — The bundled gateway launches on port 18789
  2. Frontend starts — The Next.js frontend launches on port 3100
  3. Cockpit opens — A native window opens showing the Cockpit dashboard
  4. Tray icon appears — A rooster icon in the menubar for quick access
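A quick way to confirm both services came up is to probe the two default ports from a terminal. This is a sketch assuming `curl` is installed; note that `-f` treats any HTTP error (including a 404 on the root path) as a failure, so a running service without a root route would also report as not responding:

```shell
# Probe the two local services Reeve Desktop starts on launch.
# Ports are the defaults listed above; adjust if you changed them.
check() {
  if curl -sf --max-time 2 "http://localhost:$1/" > /dev/null; then
    echo "$2 is responding on port $1"
  else
    echo "$2 is not responding on port $1"
  fi
}

check 18789 "Gateway"   # bundled gateway
check 3100  "Frontend"  # Next.js frontend
```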

Connect your LLM provider

Navigate to Settings → Models and add the API key for your provider.

Create your first agent

  1. Go to Settings → Agents
  2. Click New Agent
  3. Choose a role (start with assistant)
  4. Name it and configure the model

You're ready to go! Open a chat session and start talking to your agent.

Auto-Updates

The desktop app checks for updates on launch via GitHub Releases. When an update is available:

  1. A notification appears: "Update available: v2026.2.28"
  2. Click Update to download and install
  3. The app restarts with the new version

Updates include new gateway and frontend versions bundled together.

Tray Icon

The tray icon provides quick access:

  • Open Cockpit — Show/focus the main window
  • Gateway Status — Running/stopped indicator
  • Quit — Stop gateway and exit

Configuration

The desktop app uses the same reeve.json config as the standalone gateway. Config file location:

macOS: ~/Library/Application Support/Reeve/reeve.json

Or, if the gateway detects an existing config at ~/.reeve/reeve.json, it uses that instead.
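The schema itself isn't documented on this page, so the following is only a hypothetical sketch of what a minimal reeve.json might look like. The key names are assumptions; only the two port numbers come from this page. Consult the standalone gateway documentation for the actual schema:

```json
{
  "gateway": { "port": 18789 },
  "frontend": { "port": 3100 }
}
```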

The desktop app bundles a complete gateway — it doesn't need a cloud account or internet connection (except for LLM API calls). You can use it fully offline with local models via Ollama.
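For fully offline use, the one prerequisite is a running Ollama server. A quick reachability check from a terminal (11434 is Ollama's default port, and /api/tags is Ollama's own model-listing endpoint, not part of Reeve; adjust the port if you changed it):

```shell
# Check whether a local Ollama server is up before going offline.
# /api/tags lists locally pulled models via Ollama's HTTP API.
if curl -sf --max-time 2 http://localhost:11434/api/tags > /dev/null; then
  echo "Ollama is reachable on port 11434"
else
  echo "Ollama is not reachable on port 11434"
fi
```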
