Agent Configuration

Each workspace runs its own Claude Code agent session. You can configure agent behavior per-workspace or set defaults for all new sessions.

Model selection

Select a model from the dropdown in the chat header. Available models:

| Model | ID | Description |
| --- | --- | --- |
| Opus 4.6 1M | opus | Opus 4.6 with 1M context window |
| Opus 4.6 | claude-opus-4-6 | Standard Opus 4.6 |
| Sonnet 4.6 | sonnet | Fast and capable |
| Haiku 4.5 | haiku | Fastest, most affordable |

The model selection is per-workspace — you can use different models for different tasks.

Set the default model for new sessions in Settings > Models > Default model.

Reasoning effort

Control how much reasoning the agent applies to each response:

| Level | Description | Available Models |
| --- | --- | --- |
| Auto | Let Claude decide (default) | All models |
| Low | Fast, minimal reasoning | Opus, Sonnet |
| Medium | Balanced | Opus, Sonnet |
| High | Deep reasoning | Opus, Sonnet |
| Max | Maximum reasoning budget | Opus 4.6 only |

Select the effort level from the dropdown in the chat header, next to the model selector.

Set the default in Settings > Models > Default effort level.
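The availability rules in the table above can be sketched as a simple lookup. This is an illustrative model only — the identifiers and function names are assumptions, not Claudette's actual internals:

```python
# Hypothetical sketch of the effort/model availability rules described above.
# Model IDs follow the earlier table; the mapping itself is illustrative.

EFFORT_AVAILABILITY = {
    "auto": {"opus", "claude-opus-4-6", "sonnet", "haiku"},  # all models
    "low": {"opus", "claude-opus-4-6", "sonnet"},
    "medium": {"opus", "claude-opus-4-6", "sonnet"},
    "high": {"opus", "claude-opus-4-6", "sonnet"},
    "max": {"opus", "claude-opus-4-6"},  # Opus 4.6 only
}

def effort_allowed(effort: str, model_id: str) -> bool:
    """Return True if the given effort level can be used with the model."""
    return model_id in EFFORT_AVAILABILITY.get(effort, set())
```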

Extended thinking

When enabled, the agent shows its reasoning process in expandable “thinking” blocks before its response.

  • Toggle thinking: Enable/disable in the chat header toolbar
  • Show/hide blocks: Even with thinking enabled, you can choose whether to display the thinking blocks in the chat UI

Set defaults in Settings > Models:

  • Default thinking — enable/disable for new sessions
  • Show thinking blocks — show/hide by default

Extended tool call output

When Settings > Appearance > Extended tool call output is enabled, tool-call rows in the chat timeline include an expand chevron for inspecting the exact input the agent sent to that tool. Code-like inputs such as SQL queries, JavaScript browser evaluations, shell commands, and JSON payloads render in a syntax-highlighted block; plain inputs such as file paths render as monospace text.

The expanded state persists for each tool call across chat panel re-renders, so you can keep a long query or command open while later tool activity streams in.
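The code-like vs. plain rendering split could be implemented with a heuristic along these lines. This is a guess at the kind of check involved, not Claudette's real detection logic:

```python
# Illustrative heuristic only: code-like tool inputs get syntax highlighting,
# plain inputs (e.g. file paths) render as monospace. Marker strings are
# assumptions for the sketch.

def render_mode(tool_input: str) -> str:
    """Decide how a tool-call input should be displayed."""
    code_markers = ("select ", ";", "&&", "=>", "function ")
    lowered = tool_input.strip().lower()
    # JSON payloads start with a brace/bracket; other code tends to contain
    # statement punctuation or keywords.
    if lowered.startswith(("{", "[")) or any(m in lowered for m in code_markers):
        return "highlighted"
    return "monospace"
```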

Fast mode

Fast mode prioritizes speed over depth. When enabled, the agent gives quicker, more concise responses.

Toggle fast mode in the chat header toolbar. Set the default in Settings > Models > Default fast mode.

Plan mode

In plan mode, the agent runs read-only until it produces a plan you explicitly approve. See the dedicated Plan Mode page for the full approval workflow, denial-with-feedback path, and CLI integration.

  • Toggle: Click the plan mode button in the chat header, or press Shift + Tab
  • Default: Set in Settings > Models > Default plan mode
  • Slash command: /plan enables plan mode for the next turn

Chrome mode

Toggle the Chrome chip in the chat header to enable the agent’s browser tool for web tasks. Useful when the agent needs to navigate live pages, screenshot DOM state, or scrape something that doesn’t have a clean API. Set the default in Settings > Models > Default chrome mode.

Message queueing and steering

While an agent turn is running, you can queue follow-up messages. Type into the chat input and press Enter — each message is added to the queue popover above the composer and delivered one at a time as later user turns.

Use the Steer action in the queue popover to send any queued item into the currently running turn instead of waiting for the queue to drain. Pressing Cmd/Ctrl + Enter steers the freshly typed composer text when the composer has text or attachments; when the composer is empty, the shortcut steers the top queued item. This is the lowest-friction way to course-correct without stopping the agent.

From the CLI:

```sh
claudette chat steer <session-id> "Also update the integration tests"
```

The Settings > Claude CLI flags panel exposes the underlying claude command flags Claudette passes for each turn — useful for diagnosing “why didn’t this work like I expected” issues. The chat header banner above each turn shows the model, effort, and any non-default flags in a structured chip, so the active configuration is always visible.

Alternative providers

Claudette can run agents against alternative providers — Ollama (local, Anthropic-wire), LM Studio (local, Anthropic-wire via /v1/messages), or OpenAI / Codex (remote, gateway-translated) — instead of the official claude CLI. This is gated by the Alternative Claude Code backends experimental flag (the toggle keeps that wording in the UI).
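For a sense of what "Anthropic-wire" means, here is a sketch of a /v1/messages request such as a local backend would receive. The base URL and model name are assumptions (LM Studio commonly listens on localhost:1234 — check your server's docs); the body shape follows the Anthropic Messages API:

```python
# Sketch of an Anthropic-wire /v1/messages request. Endpoint and model name
# are placeholder assumptions, not values Claudette guarantees.

import json

def build_messages_request(base_url: str, model: str, prompt: str):
    """Assemble (url, headers, body) for an Anthropic-style messages call."""
    url = f"{base_url}/v1/messages"
    headers = {
        "content-type": "application/json",
        "anthropic-version": "2023-06-01",  # required by the Anthropic wire format
    }
    body = json.dumps({
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, headers, body
```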

Several chat-header toggles on this page only apply to certain providers. The capability matrix in the dedicated section shows the full picture, but the highlights:

  • Effort, fast mode, 1M-context auto-upgrade — Anthropic only.
  • Extended thinking — Anthropic and Ollama (when the model supports it).
  • All Claude-side toggles — hidden on every non-Anthropic provider (Ollama / LM Studio / OpenAI / Codex). Local models and OpenAI’s Responses API implement these Anthropic-specific knobs inconsistently or not at all, so the chat header hides them rather than silently ignoring user intent.

→ Full setup, capability matrix, and per-provider instructions: Alternative Providers

Forward/backward compat: warning banner in Models

If you switch between build channels (e.g. nightly → stable, or run two builds against the same data directory), a newer build can write a backend entry whose kind an older build doesn’t recognize yet. When that happens, Settings > Models shows an accent-tinted warning banner (the exact color depends on the active theme) naming the offending entry — your config isn’t lost. The unknown entry is preserved as opaque JSON in your settings and reactivates automatically on a build that recognizes it. Saves from the older build splice the unknown entry back into the stored blob, so a downgrade-and-re-upgrade cycle is non-destructive.
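The preservation behavior can be pictured as a load/save round-trip that keeps unrecognized entries as opaque JSON. This is a toy model of the idea — the field names and kind values are illustrative, not Claudette's actual settings schema:

```python
# Toy model of forward-compat preservation: entries with an unrecognized
# "kind" survive a load/save round-trip untouched. Schema is illustrative.

import json

KNOWN_KINDS = {"anthropic", "ollama", "lmstudio", "openai"}

def load_backends(raw: str):
    """Split stored entries into ones this build understands and opaque rest."""
    entries = json.loads(raw)
    known = [e for e in entries if e.get("kind") in KNOWN_KINDS]
    unknown = [e for e in entries if e.get("kind") not in KNOWN_KINDS]
    return known, unknown

def save_backends(known, unknown) -> str:
    """Splice preserved unknown entries back into the stored blob."""
    return json.dumps(known + unknown)
```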

File references

Type @ in the chat input to reference specific files in your workspace. A file picker appears showing matching files — select one to include it as context for the agent.

This is useful for directing the agent’s attention to specific files:

“Review the error handling in @src/api/handler.ts and suggest improvements”

Default settings

Configure defaults for all new sessions in Settings > Models:

| Setting | Description | Default |
| --- | --- | --- |
| Default model | Model used for new chats | |
| Default effort | Reasoning effort level | Auto |
| Default thinking | Enable thinking blocks | Off |
| Show thinking blocks | Display thinking in UI | Off |
| Default plan mode | Start in plan mode | Off |
| Default fast mode | Use fast mode | Off |