mirror of https://github.com/moltbot/moltbot.git
synced 2026-04-26 16:06:16 +00:00
feat: add fast mode toggle for OpenAI models
@@ -1,7 +1,7 @@
 ---
-summary: "Directive syntax for /think + /verbose and how they affect model reasoning"
+summary: "Directive syntax for /think, /fast, /verbose, and reasoning visibility"
 read_when:
-- Adjusting thinking or verbose directive parsing or defaults
+- Adjusting thinking, fast-mode, or verbose directive parsing or defaults
 title: "Thinking Levels"
 ---
 
@@ -42,6 +42,19 @@ title: "Thinking Levels"
 
 - **Embedded Pi**: the resolved level is passed to the in-process Pi agent runtime.
 
+## Fast mode (/fast)
+
+- Levels: `on|off`.
+- Directive-only message toggles a session fast-mode override and replies `Fast mode enabled.` / `Fast mode disabled.`.
+- Send `/fast` (or `/fast status`) with no mode to see the current effective fast-mode state.
+- OpenClaw resolves fast mode in this order:
+  1. Inline/directive-only `/fast on|off`
+  2. Session override
+  3. Per-model config: `agents.defaults.models["<provider>/<model>"].params.fastMode`
+  4. Fallback: `off`
+- For `openai/*`, fast mode applies the OpenAI fast profile: `service_tier=priority` when supported, plus low reasoning effort and low text verbosity.
+- For `openai-codex/*`, fast mode applies the same low-latency profile on Codex Responses. OpenClaw keeps one shared `/fast` toggle across both auth paths.
+
 ## Verbose directives (/verbose or /v)
 
 - Levels: `on` (minimal) | `full` | `off` (default).
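The four-step resolution order the added section describes can be sketched as follows. This is a minimal illustrative sketch, not OpenClaw's actual implementation: the names `resolveFastMode`, `FastModeInputs`, and its fields are assumptions introduced here.

```typescript
// Hypothetical sketch of the /fast precedence chain; names are illustrative.
type FastMode = "on" | "off";

interface FastModeInputs {
  directive?: FastMode;       // 1. inline or directive-only `/fast on|off`
  sessionOverride?: FastMode; // 2. set by a previous directive-only `/fast`
  modelConfig?: FastMode;     // 3. per-model params.fastMode from config
}

function resolveFastMode(inputs: FastModeInputs): FastMode {
  // Highest-priority source that is set wins; otherwise fall back to "off".
  return (
    inputs.directive ??
    inputs.sessionOverride ??
    inputs.modelConfig ??
    "off"
  );
}
```

Under this ordering an explicit `/fast off` in a message beats an `on` session override, and a session override beats the per-model config default.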