docs(openai): document GPT-5.5 defaults

Peter Steinberger
2026-04-23 20:01:02 +01:00
parent cd5bc2fc93
commit 89051c6bf6
33 changed files with 128 additions and 126 deletions

View File

@@ -6,6 +6,8 @@ Docs: https://docs.openclaw.ai
### Changes
- Providers/OpenAI: add forward-compatible `gpt-5.5` and `gpt-5.5-pro` support for OpenAI API keys, OpenAI Codex OAuth, and the Codex CLI default model.
### Fixes
- QA channel/security: reject non-HTTP(S) inbound attachment URLs before media fetch, and log rejected schemes so suspicious or misconfigured payloads are visible during debugging. (#70708) Thanks @vincentkoc.

View File

@@ -235,7 +235,7 @@ Run an isolated agent turn:
curl -X POST http://127.0.0.1:18789/hooks/agent \
-H 'Authorization: Bearer SECRET' \
-H 'Content-Type: application/json' \
-d '{"message":"Summarize inbox","name":"Email","model":"openai/gpt-5.4-mini"}'
-d '{"message":"Summarize inbox","name":"Email","model":"openai/gpt-5.5"}'
```
Fields: `message` (required), `name`, `agentId`, `wakeMode`, `deliver`, `channel`, `to`, `model`, `thinking`, `timeoutSeconds`.
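A hypothetical fuller payload using some of the optional fields might look like this (every value besides `message` is illustrative, including the `agentId`; accepted values for `wakeMode` and `deliver` are not covered here):

```json
{
  "message": "Summarize inbox",
  "name": "Email",
  "agentId": "email-agent",
  "model": "openai/gpt-5.5",
  "thinking": "low",
  "timeoutSeconds": 120
}
```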

View File

@@ -38,7 +38,7 @@ openclaw config get browser.executablePath
openclaw config set browser.executablePath "/usr/bin/google-chrome"
openclaw config set agents.defaults.heartbeat.every "2h"
openclaw config set agents.list[0].tools.exec.node "node-id-or-name"
openclaw config set agents.defaults.models '{"openai-codex/gpt-5.4":{}}' --strict-json --merge
openclaw config set agents.defaults.models '{"openai-codex/gpt-5.5":{}}' --strict-json --merge
openclaw config set channels.discord.token --ref-provider default --ref-source env --ref-id DISCORD_BOT_TOKEN
openclaw config set secrets.providers.vaultfile --provider-source file --provider-path /etc/openclaw/secrets.json --provider-mode json
openclaw config unset plugins.entries.brave.config.webSearch.apiKey
@@ -115,7 +115,7 @@ you pass `--replace`.
Use `--merge` when adding entries to those maps:
```bash
openclaw config set agents.defaults.models '{"openai-codex/gpt-5.4":{}}' --strict-json --merge
openclaw config set agents.defaults.models '{"openai-codex/gpt-5.5":{}}' --strict-json --merge
openclaw config set models.providers.ollama.models '[{"id":"llama3.2","name":"Llama 3.2"}]' --strict-json --merge
```
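The effect of `--merge` versus a plain replace can be sketched as a shallow map merge (a simplification; the real CLI may deep-merge nested params and validate entries):

```python
def merge_models(existing: dict, patch: dict) -> dict:
    """Sketch of --merge semantics: add or overwrite the patched keys,
    keep every other configured model entry intact."""
    merged = dict(existing)
    merged.update(patch)
    return merged

existing = {"anthropic/claude-opus-4-6": {"alias": "opus"}}
patch = {"openai-codex/gpt-5.5": {}}

merged = merge_models(existing, patch)
# Both entries survive; a plain replace would have dropped the opus entry.
assert "anthropic/claude-opus-4-6" in merged
assert "openai-codex/gpt-5.5" in merged
```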

View File

@@ -136,7 +136,7 @@ Use `model` for provider-backed text inference and model/provider inspection.
openclaw infer model run --prompt "Reply with exactly: smoke-ok" --json
openclaw infer model run --prompt "Summarize this changelog entry" --provider openai --json
openclaw infer model providers --json
openclaw infer model inspect --name gpt-5.4 --json
openclaw infer model inspect --name gpt-5.5 --json
```
Notes:

View File

@@ -57,7 +57,7 @@ OpenClaw ships with the piai catalog. These providers require **no**
- Provider: `openai`
- Auth: `OPENAI_API_KEY`
- Optional rotation: `OPENAI_API_KEYS`, `OPENAI_API_KEY_1`, `OPENAI_API_KEY_2`, plus `OPENCLAW_LIVE_OPENAI_KEY` (single override)
- Example models: `openai/gpt-5.4`, `openai/gpt-5.4-pro`
- Example models: `openai/gpt-5.5`, `openai/gpt-5.5-pro`
- CLI: `openclaw onboard --auth-choice openai-api-key`
- Default transport is `auto` (WebSocket-first, SSE fallback)
- Override per model via `agents.defaults.models["openai/<model>"].params.transport` (`"sse"`, `"websocket"`, or `"auto"`)
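For example, forcing SSE for one model could look like this (a minimal sketch of the per-model override described above):

```json5
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.5": { params: { transport: "sse" } },
      },
    },
  },
}
```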
@@ -74,7 +74,7 @@ OpenClaw ships with the piai catalog. These providers require **no**
```json5
{
agents: { defaults: { model: { primary: "openai/gpt-5.4" } } },
agents: { defaults: { model: { primary: "openai/gpt-5.5" } } },
}
```
@@ -99,7 +99,7 @@ OpenClaw ships with the piai catalog. These providers require **no**
- Provider: `openai-codex`
- Auth: OAuth (ChatGPT)
- Example model: `openai-codex/gpt-5.4`
- Example model: `openai-codex/gpt-5.5`
- CLI: `openclaw onboard --auth-choice openai-codex` or `openclaw models auth login --provider openai-codex`
- Default transport is `auto` (WebSocket-first, SSE fallback)
- Override per model via `agents.defaults.models["openai-codex/<model>"].params.transport` (`"sse"`, `"websocket"`, or `"auto"`)
@@ -109,12 +109,12 @@ OpenClaw ships with the piai catalog. These providers require **no**
`chatgpt.com/backend-api`, not generic OpenAI-compatible proxies
- Shares the same `/fast` toggle and `params.fastMode` config as direct `openai/*`; OpenClaw maps that to `service_tier=priority`
- `openai-codex/gpt-5.3-codex-spark` remains available when the Codex OAuth catalog exposes it; entitlement-dependent
- `openai-codex/gpt-5.4` keeps native `contextWindow = 1050000` and a default runtime `contextTokens = 272000`; override the runtime cap with `models.providers.openai-codex.models[].contextTokens`
- `openai-codex/gpt-5.5` keeps native `contextWindow = 1000000` and a default runtime `contextTokens = 272000`; override the runtime cap with `models.providers.openai-codex.models[].contextTokens`
- Policy note: OpenAI Codex OAuth is explicitly supported for external tools/workflows like OpenClaw.
```json5
{
agents: { defaults: { model: { primary: "openai-codex/gpt-5.4" } } },
agents: { defaults: { model: { primary: "openai-codex/gpt-5.5" } } },
}
```
@@ -123,7 +123,7 @@ OpenClaw ships with the piai catalog. These providers require **no**
models: {
providers: {
"openai-codex": {
models: [{ id: "gpt-5.4", contextTokens: 160000 }],
models: [{ id: "gpt-5.5", contextTokens: 160000 }],
},
},
},

View File

@@ -72,7 +72,7 @@ Provider configuration examples (including OpenCode) live in
Use additive writes when updating `agents.defaults.models` by hand:
```bash
openclaw config set agents.defaults.models '{"openai-codex/gpt-5.4":{}}' --strict-json --merge
openclaw config set agents.defaults.models '{"openai-codex/gpt-5.5":{}}' --strict-json --merge
```
`openclaw config set` protects model/provider maps from accidental clobbers. A
@@ -124,7 +124,7 @@ You can switch models for the current session without restarting:
/model
/model list
/model 3
/model openai/gpt-5.4
/model openai/gpt-5.5
/model status
```

View File

@@ -209,7 +209,7 @@ refs and write a judged Markdown report:
```bash
pnpm openclaw qa character-eval \
--model openai/gpt-5.4,thinking=xhigh \
--model openai/gpt-5.5,thinking=xhigh \
--model openai/gpt-5.2,thinking=xhigh \
--model openai/gpt-5,thinking=xhigh \
--model anthropic/claude-opus-4-6,thinking=high \
@@ -217,7 +217,7 @@ pnpm openclaw qa character-eval \
--model zai/glm-5.1,thinking=high \
--model moonshot/kimi-k2.5,thinking=high \
--model google/gemini-3.1-pro-preview,thinking=high \
--judge-model openai/gpt-5.4,thinking=xhigh,fast \
--judge-model openai/gpt-5.5,thinking=xhigh,fast \
--judge-model anthropic/claude-opus-4-6,thinking=high \
--blind-judge-models \
--concurrency 16 \
@@ -249,12 +249,12 @@ Candidate and judge model runs both default to concurrency 16. Lower
`--concurrency` or `--judge-concurrency` when provider limits or local gateway
pressure make a run too noisy.
When no candidate `--model` is passed, the character eval defaults to
`openai/gpt-5.4`, `openai/gpt-5.2`, `openai/gpt-5`, `anthropic/claude-opus-4-6`,
`openai/gpt-5.5`, `openai/gpt-5.2`, `openai/gpt-5`, `anthropic/claude-opus-4-6`,
`anthropic/claude-sonnet-4-6`, `zai/glm-5.1`, `moonshot/kimi-k2.5`, and
`google/gemini-3.1-pro-preview`.
When no `--judge-model` is passed, the judges default to
`openai/gpt-5.4,thinking=xhigh,fast` and
`openai/gpt-5.5,thinking=xhigh,fast` and
`anthropic/claude-opus-4-6,thinking=high`.
## Related docs

View File

@@ -31,7 +31,7 @@ You can use Codex CLI **without any config** (the bundled OpenAI plugin
registers a default backend):
```bash
openclaw agent --message "hi" --model codex-cli/gpt-5.4
openclaw agent --message "hi" --model codex-cli/gpt-5.5
```
If your gateway runs under launchd/systemd and PATH is minimal, add just the
@@ -68,11 +68,11 @@ Add a CLI backend to your fallback list so it only runs when primary models fail
defaults: {
model: {
primary: "anthropic/claude-opus-4-6",
fallbacks: ["codex-cli/gpt-5.4"],
fallbacks: ["codex-cli/gpt-5.5"],
},
models: {
"anthropic/claude-opus-4-6": { alias: "Opus" },
"codex-cli/gpt-5.4": {},
"codex-cli/gpt-5.5": {},
},
},
},

View File

@@ -236,7 +236,7 @@ Save to `~/.openclaw/openclaw.json` and you can DM the bot from that number.
userTimezone: "America/Chicago",
model: {
primary: "anthropic/claude-sonnet-4-6",
fallbacks: ["anthropic/claude-opus-4-6", "openai/gpt-5.4"],
fallbacks: ["anthropic/claude-opus-4-6", "openai/gpt-5.5"],
},
imageModel: {
primary: "openrouter/anthropic/claude-sonnet-4-6",
@@ -244,7 +244,7 @@ Save to `~/.openclaw/openclaw.json` and you can DM the bot from that number.
models: {
"anthropic/claude-opus-4-6": { alias: "opus" },
"anthropic/claude-sonnet-4-6": { alias: "sonnet" },
"openai/gpt-5.4": { alias: "gpt" },
"openai/gpt-5.5": { alias: "gpt" },
},
skills: ["github", "weather"], // inherited by agents that omit list[].skills
thinkingDefault: "low",

View File

@@ -1236,7 +1236,7 @@ Time format in system prompt. Default: `auto` (OS preference).
- `pdfMaxPages`: default maximum pages considered by extraction fallback mode in the `pdf` tool.
- `verboseDefault`: default verbose level for agents. Values: `"off"`, `"on"`, `"full"`. Default: `"off"`.
- `elevatedDefault`: default elevated-output level for agents. Values: `"off"`, `"on"`, `"ask"`, `"full"`. Default: `"on"`.
- `model.primary`: format `provider/model` (e.g. `openai/gpt-5.4`). If you omit the provider, OpenClaw tries an alias first, then a unique configured-provider match for that exact model id, and only then falls back to the configured default provider (deprecated compatibility behavior, so prefer explicit `provider/model`). If that provider no longer exposes the configured default model, OpenClaw falls back to the first configured provider/model instead of surfacing a stale removed-provider default.
- `model.primary`: format `provider/model` (e.g. `openai/gpt-5.5`). If you omit the provider, OpenClaw tries an alias first, then a unique configured-provider match for that exact model id, and only then falls back to the configured default provider (deprecated compatibility behavior, so prefer explicit `provider/model`). If that provider no longer exposes the configured default model, OpenClaw falls back to the first configured provider/model instead of surfacing a stale removed-provider default.
- `models`: the configured model catalog and allowlist for `/model`. Each entry can include `alias` (shortcut) and `params` (provider-specific, for example `temperature`, `maxTokens`, `cacheRetention`, `context1m`).
- Safe edits: use `openclaw config set agents.defaults.models '<json>' --strict-json --merge` to add entries. `config set` refuses replacements that would remove existing allowlist entries unless you pass `--replace`.
- Provider-scoped configure/onboarding flows merge selected provider models into this map and preserve unrelated providers already configured.
@@ -1257,7 +1257,7 @@ Codex app-server harness.
{
agents: {
defaults: {
model: "codex/gpt-5.4",
model: "codex/gpt-5.5",
embeddedHarness: {
runtime: "codex",
fallback: "none",
@@ -1270,7 +1270,7 @@ Codex app-server harness.
- `runtime`: `"auto"`, `"pi"`, or a registered plugin harness id. The bundled Codex plugin registers `codex`.
- `fallback`: `"pi"` or `"none"`. `"pi"` keeps the built-in PI harness as the compatibility fallback when no plugin harness is selected. `"none"` makes missing or unsupported plugin harness selection fail instead of silently using PI. Selected plugin harness failures always surface directly.
- Environment overrides: `OPENCLAW_AGENT_RUNTIME=<id|auto|pi>` overrides `runtime`; `OPENCLAW_AGENT_HARNESS_FALLBACK=none` disables PI fallback for that process.
- For Codex-only deployments, set `model: "codex/gpt-5.4"`, `embeddedHarness.runtime: "codex"`, and `embeddedHarness.fallback: "none"`.
- For Codex-only deployments, set `model: "codex/gpt-5.5"`, `embeddedHarness.runtime: "codex"`, and `embeddedHarness.fallback: "none"`.
- Harness choice is pinned per session id after the first embedded run. Config/env changes affect new or reset sessions, not an existing transcript. Legacy sessions with transcript history but no recorded pin are treated as PI-pinned. `/status` shows non-PI harness ids such as `codex` next to `Fast`.
- This only controls the embedded chat harness. Media generation, vision, PDF, music, video, and TTS still use their provider/model settings.
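The pinning rule above can be sketched as follows (a simplification under the stated assumptions; the session shape and field names are hypothetical):

```python
def harness_for_turn(session: dict, config_runtime: str) -> str:
    """Sketch of per-session harness pinning: the first embedded run pins
    the harness id; later config/env changes only affect new or reset
    sessions, not an existing transcript."""
    if session.get("harness") is None:
        # Legacy sessions with transcript history but no recorded pin
        # are treated as PI-pinned.
        session["harness"] = "pi" if session.get("has_history") else config_runtime
    return session["harness"]

s = {"has_history": False}
assert harness_for_turn(s, "codex") == "codex"
assert harness_for_turn(s, "pi") == "codex"  # already pinned; config change ignored
assert harness_for_turn({"has_history": True}, "codex") == "pi"
```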
@@ -1280,7 +1280,7 @@ Codex app-server harness.
| ------------------- | -------------------------------------- |
| `opus` | `anthropic/claude-opus-4-6` |
| `sonnet` | `anthropic/claude-sonnet-4-6` |
| `gpt` | `openai/gpt-5.4` |
| `gpt` | `openai/gpt-5.5` |
| `gpt-mini` | `openai/gpt-5.4-mini` |
| `gpt-nano` | `openai/gpt-5.4-nano` |
| `gemini` | `google/gemini-3.1-pro-preview` |
@@ -2253,7 +2253,7 @@ Further restrict tools for specific providers or models. Order: base profile →
profile: "coding",
byProvider: {
"google-antigravity": { profile: "minimal" },
"openai/gpt-5.4": { allow: ["group:fs", "sessions_list"] },
"openai/gpt-5.5": { allow: ["group:fs", "sessions_list"] },
},
},
}
@@ -2294,7 +2294,7 @@ Controls elevated exec access outside the sandbox:
notifyOnExitEmptySuccess: false,
applyPatch: {
enabled: false,
allowModels: ["gpt-5.4"],
allowModels: ["gpt-5.5"],
},
},
},

View File

@@ -137,11 +137,11 @@ is skipped when a candidate contains redacted secret placeholders such as `***`.
defaults: {
model: {
primary: "anthropic/claude-sonnet-4-6",
fallbacks: ["openai/gpt-5.4"],
fallbacks: ["openai/gpt-5.5"],
},
models: {
"anthropic/claude-sonnet-4-6": { alias: "Sonnet" },
"openai/gpt-5.4": { alias: "GPT" },
"openai/gpt-5.5": { alias: "GPT" },
},
},
},

View File

@@ -169,8 +169,8 @@ This is the highest-leverage compatibility set for self-hosted frontends and too
Use `x-openclaw-model`.
Examples:
`x-openclaw-model: openai/gpt-5.4`
`x-openclaw-model: gpt-5.4`
`x-openclaw-model: openai/gpt-5.5`
`x-openclaw-model: gpt-5.5`
If you omit it, the selected agent runs with its normal configured model choice.
@@ -237,7 +237,7 @@ Streaming:
curl -N http://127.0.0.1:18789/v1/chat/completions \
-H 'Authorization: Bearer YOUR_TOKEN' \
-H 'Content-Type: application/json' \
-H 'x-openclaw-model: openai/gpt-5.4' \
-H 'x-openclaw-model: openai/gpt-5.5' \
-d '{
"model": "openclaw/research",
"stream": true,

View File

@@ -67,7 +67,7 @@ Rules of thumb:
- If `allow` is non-empty, everything else is treated as blocked.
- Tool policy is the hard stop: `/exec` cannot override a denied `exec` tool.
- `/exec` only changes session defaults for authorized senders; it does not grant tool access.
Provider tool keys accept either `provider` (e.g. `google-antigravity`) or `provider/model` (e.g. `openai/gpt-5.4`).
Provider tool keys accept either `provider` (e.g. `google-antigravity`) or `provider/model` (e.g. `openai/gpt-5.5`).
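Assuming the more specific `provider/model` key takes precedence over a bare `provider` key (an assumption, not something the text above confirms), the lookup can be sketched as:

```python
def resolve_tool_policy(policies: dict, provider: str, model: str):
    """Return the most specific matching policy entry, or None.
    Checks 'provider/model' before the bare 'provider' key."""
    for key in (f"{provider}/{model}", provider):
        if key in policies:
            return policies[key]
    return None

by_provider = {
    "google-antigravity": {"profile": "minimal"},
    "openai/gpt-5.5": {"allow": ["group:fs", "sessions_list"]},
}
assert resolve_tool_policy(by_provider, "openai", "gpt-5.5") == {
    "allow": ["group:fs", "sessions_list"]
}
assert resolve_tool_policy(by_provider, "google-antigravity", "gemini-3-flash") == {
    "profile": "minimal"
}
```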
### Tool groups (shorthands)

View File

@@ -658,14 +658,14 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
</Accordion>
<Accordion title="How does Codex auth work?">
OpenClaw supports **OpenAI Code (Codex)** via OAuth (ChatGPT sign-in). Onboarding can run the OAuth flow and will set the default model to `openai-codex/gpt-5.4` when appropriate. See [Model providers](/concepts/model-providers) and [Onboarding (CLI)](/start/wizard).
OpenClaw supports **OpenAI Code (Codex)** via OAuth (ChatGPT sign-in). Onboarding can run the OAuth flow and will set the default model to `openai-codex/gpt-5.5` when appropriate. See [Model providers](/concepts/model-providers) and [Onboarding (CLI)](/start/wizard).
</Accordion>
<Accordion title="Why does ChatGPT GPT-5.4 not unlock openai/gpt-5.4 in OpenClaw?">
<Accordion title="Why does ChatGPT GPT-5.5 not unlock openai/gpt-5.5 in OpenClaw?">
OpenClaw treats the two routes separately:
- `openai-codex/gpt-5.4` = ChatGPT/Codex OAuth
- `openai/gpt-5.4` = direct OpenAI Platform API
- `openai-codex/gpt-5.5` = ChatGPT/Codex OAuth
- `openai/gpt-5.5` = direct OpenAI Platform API
In OpenClaw, ChatGPT/Codex sign-in is wired to the `openai-codex/*` route,
not the direct `openai/*` route. If you want the direct API path in
@@ -2219,7 +2219,7 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
agents.defaults.model.primary
```
Models are referenced as `provider/model` (example: `openai/gpt-5.4`). If you omit the provider, OpenClaw first tries an alias, then a unique configured-provider match for that exact model id, and only then falls back to the configured default provider as a deprecated compatibility path. If that provider no longer exposes the configured default model, OpenClaw falls back to the first configured provider/model instead of surfacing a stale removed-provider default. You should still **explicitly** set `provider/model`.
Models are referenced as `provider/model` (example: `openai/gpt-5.5`). If you omit the provider, OpenClaw first tries an alias, then a unique configured-provider match for that exact model id, and only then falls back to the configured default provider as a deprecated compatibility path. If that provider no longer exposes the configured default model, OpenClaw falls back to the first configured provider/model instead of surfacing a stale removed-provider default. You should still **explicitly** set `provider/model`.
</Accordion>
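The resolution order described above can be sketched as a small function (a simplification: the real implementation also handles stale removed-provider defaults by falling back to the first configured provider/model):

```python
def resolve_model_ref(ref, aliases, providers, default_provider):
    """Sketch of the documented order for a model ref:
    explicit provider/model -> alias -> unique configured-provider match
    -> default provider (deprecated compatibility path)."""
    if "/" in ref:
        return ref  # explicit provider/model, used as-is
    if ref in aliases:
        return aliases[ref]
    matches = [p for p, models in providers.items() if ref in models]
    if len(matches) == 1:
        return f"{matches[0]}/{ref}"
    return f"{default_provider}/{ref}"

aliases = {"gpt": "openai/gpt-5.5"}
providers = {"openai": ["gpt-5.5"], "openai-codex": ["gpt-5.5"]}
assert resolve_model_ref("gpt", aliases, providers, "openai") == "openai/gpt-5.5"
# "gpt-5.5" is configured under two providers, so the unique-match step
# fails and the deprecated default-provider path applies:
assert resolve_model_ref("gpt-5.5", aliases, providers, "openai") == "openai/gpt-5.5"
```

This is why the FAQ keeps recommending an explicit `provider/model`: every later step is a fallback heuristic.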
@@ -2341,23 +2341,23 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
</Accordion>
<Accordion title="Can I use GPT 5.2 for daily tasks and Codex 5.3 for coding?">
<Accordion title="Can I use GPT 5.5 for daily tasks and Codex 5.5 for coding?">
Yes. Set one as default and switch as needed:
- **Quick switch (per session):** `/model gpt-5.4` for daily tasks, `/model openai-codex/gpt-5.4` for coding with Codex OAuth.
- **Default + switch:** set `agents.defaults.model.primary` to `openai/gpt-5.4`, then switch to `openai-codex/gpt-5.4` when coding (or the other way around).
- **Quick switch (per session):** `/model gpt-5.5` for daily tasks, `/model openai-codex/gpt-5.5` for coding with Codex OAuth.
- **Default + switch:** set `agents.defaults.model.primary` to `openai/gpt-5.5`, then switch to `openai-codex/gpt-5.5` when coding (or the other way around).
- **Sub-agents:** route coding tasks to sub-agents with a different default model.
See [Models](/concepts/models) and [Slash commands](/tools/slash-commands).
</Accordion>
<Accordion title="How do I configure fast mode for GPT 5.4?">
<Accordion title="How do I configure fast mode for GPT 5.5?">
Use either a session toggle or a config default:
- **Per session:** send `/fast on` while the session is using `openai/gpt-5.4` or `openai-codex/gpt-5.4`.
- **Per model default:** set `agents.defaults.models["openai/gpt-5.4"].params.fastMode` to `true`.
- **Codex OAuth too:** if you also use `openai-codex/gpt-5.4`, set the same flag there.
- **Per session:** send `/fast on` while the session is using `openai/gpt-5.5` or `openai-codex/gpt-5.5`.
- **Per model default:** set `agents.defaults.models["openai/gpt-5.5"].params.fastMode` to `true`.
- **Codex OAuth too:** if you also use `openai-codex/gpt-5.5`, set the same flag there.
Example:
@@ -2366,12 +2366,12 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
agents: {
defaults: {
models: {
"openai/gpt-5.4": {
"openai/gpt-5.5": {
params: {
fastMode: true,
},
},
"openai-codex/gpt-5.4": {
"openai-codex/gpt-5.5": {
params: {
fastMode: true,
},
@@ -2442,7 +2442,7 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
model: { primary: "minimax/MiniMax-M2.7" },
models: {
"minimax/MiniMax-M2.7": { alias: "minimax" },
"openai/gpt-5.4": { alias: "gpt" },
"openai/gpt-5.5": { alias: "gpt" },
},
},
},
@@ -2470,7 +2470,7 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
- `opus` → `anthropic/claude-opus-4-6`
- `sonnet` → `anthropic/claude-sonnet-4-6`
- `gpt` → `openai/gpt-5.4`
- `gpt` → `openai/gpt-5.5`
- `gpt-mini` → `openai/gpt-5.4-mini`
- `gpt-nano` → `openai/gpt-5.4-nano`
- `gemini` → `google/gemini-3.1-pro-preview`

View File

@@ -512,7 +512,7 @@ Live tests are split into two layers so we can isolate failures:
- How to select models:
- `OPENCLAW_LIVE_MODELS=modern` to run the modern allowlist (Opus/Sonnet 4.6+, GPT-5.x + Codex, Gemini 3, GLM 4.7, MiniMax M2.7, Grok 4)
- `OPENCLAW_LIVE_MODELS=all` is an alias for the modern allowlist
- or `OPENCLAW_LIVE_MODELS="openai/gpt-5.4,anthropic/claude-opus-4-6,..."` (comma allowlist)
- or `OPENCLAW_LIVE_MODELS="openai/gpt-5.5,anthropic/claude-opus-4-6,..."` (comma allowlist)
- Modern/all sweeps default to a curated high-signal cap; set `OPENCLAW_LIVE_MAX_MODELS=0` for an exhaustive modern sweep or a positive number for a smaller cap.
- How to select providers:
- `OPENCLAW_LIVE_PROVIDERS="google,google-antigravity,google-gemini-cli"` (comma allowlist)
@@ -577,7 +577,7 @@ openclaw models list --json
- Default provider/model: `claude-cli/claude-sonnet-4-6`
- Command/args/image behavior come from the owning CLI backend plugin metadata.
- Overrides (optional):
- `OPENCLAW_LIVE_CLI_BACKEND_MODEL="codex-cli/gpt-5.4"`
- `OPENCLAW_LIVE_CLI_BACKEND_MODEL="codex-cli/gpt-5.5"`
- `OPENCLAW_LIVE_CLI_BACKEND_COMMAND="/full/path/to/codex"`
- `OPENCLAW_LIVE_CLI_BACKEND_ARGS='["exec","--json","--color","never","--sandbox","read-only","--skip-git-repo-check"]'`
- `OPENCLAW_LIVE_CLI_BACKEND_IMAGE_PROBE=1` to send a real image attachment (paths are injected into the prompt).
@@ -590,7 +590,7 @@ Example:
```bash
OPENCLAW_LIVE_CLI_BACKEND=1 \
OPENCLAW_LIVE_CLI_BACKEND_MODEL="codex-cli/gpt-5.4" \
OPENCLAW_LIVE_CLI_BACKEND_MODEL="codex-cli/gpt-5.5" \
pnpm test:live src/gateway/gateway-cli-backend.live.test.ts
```
@@ -640,7 +640,7 @@ Notes:
- `OPENCLAW_LIVE_ACP_BIND_AGENT=gemini`
- `OPENCLAW_LIVE_ACP_BIND_AGENTS=claude,codex,gemini`
- `OPENCLAW_LIVE_ACP_BIND_AGENT_COMMAND='npx -y @agentclientprotocol/claude-agent-acp@<version>'`
- `OPENCLAW_LIVE_ACP_BIND_CODEX_MODEL=gpt-5.4`
- `OPENCLAW_LIVE_ACP_BIND_CODEX_MODEL=gpt-5.5`
- Notes:
- This lane uses the gateway `chat.send` surface with admin-only synthetic originating-route fields so tests can attach message-channel context without pretending to deliver externally.
- When `OPENCLAW_LIVE_ACP_BIND_AGENT_COMMAND` is unset, the test uses the embedded `acpx` plugin's built-in agent registry for the selected ACP harness agent.
@@ -681,7 +681,7 @@ Docker notes:
`agent` method:
- load the bundled `codex` plugin
- select `OPENCLAW_AGENT_RUNTIME=codex`
- send a first gateway agent turn to `codex/gpt-5.4`
- send a first gateway agent turn to `codex/gpt-5.5`
- send a second turn to the same OpenClaw session and verify the app-server
thread can resume
- run `/codex status` and `/codex models` through the same gateway command
@@ -691,7 +691,7 @@ Docker notes:
denied so the agent asks back
- Test: `src/gateway/gateway-codex-harness.live.test.ts`
- Enable: `OPENCLAW_LIVE_CODEX_HARNESS=1`
- Default model: `codex/gpt-5.4`
- Default model: `codex/gpt-5.5`
- Optional image probe: `OPENCLAW_LIVE_CODEX_HARNESS_IMAGE_PROBE=1`
- Optional MCP/tool probe: `OPENCLAW_LIVE_CODEX_HARNESS_MCP_PROBE=1`
- Optional Guardian probe: `OPENCLAW_LIVE_CODEX_HARNESS_GUARDIAN_PROBE=1`
@@ -708,7 +708,7 @@ OPENCLAW_LIVE_CODEX_HARNESS=1 \
OPENCLAW_LIVE_CODEX_HARNESS_IMAGE_PROBE=1 \
OPENCLAW_LIVE_CODEX_HARNESS_MCP_PROBE=1 \
OPENCLAW_LIVE_CODEX_HARNESS_GUARDIAN_PROBE=1 \
OPENCLAW_LIVE_CODEX_HARNESS_MODEL=codex/gpt-5.4 \
OPENCLAW_LIVE_CODEX_HARNESS_MODEL=codex/gpt-5.5 \
pnpm test:live -- src/gateway/gateway-codex-harness.live.test.ts
```
@@ -739,13 +739,13 @@ Docker notes:
Narrow, explicit allowlists are fastest and least flaky:
- Single model, direct (no gateway):
- `OPENCLAW_LIVE_MODELS="openai/gpt-5.4" pnpm test:live src/agents/models.profiles.live.test.ts`
- `OPENCLAW_LIVE_MODELS="openai/gpt-5.5" pnpm test:live src/agents/models.profiles.live.test.ts`
- Single model, gateway smoke:
- `OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.4" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
- `OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.5" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
- Tool calling across several providers:
- `OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.4,anthropic/claude-opus-4-6,google/gemini-3-flash-preview,zai/glm-4.7,minimax/MiniMax-M2.7" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
- `OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.5,anthropic/claude-opus-4-6,google/gemini-3-flash-preview,zai/glm-4.7,minimax/MiniMax-M2.7" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
- Google focus (Gemini API key + Antigravity):
- Gemini (API key): `OPENCLAW_LIVE_GATEWAY_MODELS="google/gemini-3-flash-preview" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
@@ -768,8 +768,8 @@ There is no fixed “CI model list” (live is opt-in), but these are the **reco
This is the “common models” run we expect to keep working:
- OpenAI (non-Codex): `openai/gpt-5.4` (optional: `openai/gpt-5.4-mini`)
- OpenAI Codex: `openai-codex/gpt-5.4`
- OpenAI (non-Codex): `openai/gpt-5.5` (optional: `openai/gpt-5.4-mini`)
- OpenAI Codex: `openai-codex/gpt-5.5`
- Anthropic: `anthropic/claude-opus-4-6` (or `anthropic/claude-sonnet-4-6`)
- Google (Gemini API): `google/gemini-3.1-pro-preview` and `google/gemini-3-flash-preview` (avoid older Gemini 2.x models)
- Google (Antigravity): `google-antigravity/claude-opus-4-6-thinking` and `google-antigravity/gemini-3-flash`
@@ -777,13 +777,13 @@ This is the “common models” run we expect to keep working:
- MiniMax: `minimax/MiniMax-M2.7`
Run gateway smoke with tools + image:
`OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.4,openai-codex/gpt-5.4,anthropic/claude-opus-4-6,google/gemini-3.1-pro-preview,google/gemini-3-flash-preview,google-antigravity/claude-opus-4-6-thinking,google-antigravity/gemini-3-flash,zai/glm-4.7,minimax/MiniMax-M2.7" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
`OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.5,openai-codex/gpt-5.5,anthropic/claude-opus-4-6,google/gemini-3.1-pro-preview,google/gemini-3-flash-preview,google-antigravity/claude-opus-4-6-thinking,google-antigravity/gemini-3-flash,zai/glm-4.7,minimax/MiniMax-M2.7" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
### Baseline: tool calling (Read + optional Exec)
Pick at least one per provider family:
- OpenAI: `openai/gpt-5.4` (or `openai/gpt-5.4-mini`)
- OpenAI: `openai/gpt-5.5` (or `openai/gpt-5.4-mini`)
- Anthropic: `anthropic/claude-opus-4-6` (or `anthropic/claude-sonnet-4-6`)
- Google: `google/gemini-3-flash-preview` (or `google/gemini-3.1-pro-preview`)
- Z.AI (GLM): `zai/glm-4.7`

View File

@@ -156,7 +156,7 @@ read_when:
"defaults": {
"model": {
"primary": "anthropic/claude-opus-4-6",
"fallbacks": ["anthropic/claude-sonnet-4-6", "openai/gpt-5.4"]
"fallbacks": ["anthropic/claude-sonnet-4-6", "openai/gpt-5.5"]
},
"maxConcurrent": 4
},

View File

@@ -81,7 +81,7 @@ Each `models[]` entry can be **provider** or **CLI**:
{
type: "provider", // default if omitted
provider: "openai",
model: "gpt-5.4-mini",
model: "gpt-5.5",
prompt: "Describe the image in <= 500 chars.",
maxChars: 500,
maxBytes: 10485760,
@@ -278,7 +278,7 @@ File-attachment extraction behavior:
tools: {
media: {
models: [
{ provider: "openai", model: "gpt-5.4-mini", capabilities: ["image"] },
{ provider: "openai", model: "gpt-5.5", capabilities: ["image"] },
{
provider: "google",
model: "gemini-3-flash-preview",
@@ -359,7 +359,7 @@ File-attachment extraction behavior:
maxBytes: 10485760,
maxChars: 500,
models: [
{ provider: "openai", model: "gpt-5.4-mini" },
{ provider: "openai", model: "gpt-5.5" },
{ provider: "anthropic", model: "claude-opus-4-6" },
{
type: "cli",
@@ -422,7 +422,7 @@ File-attachment extraction behavior:
When media understanding runs, `/status` includes a short summary line:
```
📎 Media: image ok (openai/gpt-5.4-mini) · audio skipped (maxBytes)
📎 Media: image ok (openai/gpt-5.5) · audio skipped (maxBytes)
```
This shows per-capability outcomes and the chosen provider/model when applicable.

View File

@@ -43,9 +43,9 @@ OpenClaw has separate routes for OpenAI and Codex-shaped access:
| Model ref | Runtime path | Use when |
| ---------------------- | -------------------------------------------- | ----------------------------------------------------------------------- |
| `openai/gpt-5.4` | OpenAI provider through OpenClaw/PI plumbing | You want direct OpenAI Platform API access with `OPENAI_API_KEY`. |
| `openai-codex/gpt-5.4` | OpenAI Codex OAuth provider through PI | You want ChatGPT/Codex OAuth without the Codex app-server harness. |
| `codex/gpt-5.4` | Bundled Codex provider plus Codex harness | You want native Codex app-server execution for the embedded agent turn. |
| `openai/gpt-5.5` | OpenAI provider through OpenClaw/PI plumbing | You want direct OpenAI Platform API access with `OPENAI_API_KEY`. |
| `openai-codex/gpt-5.5` | OpenAI Codex OAuth provider through PI | You want ChatGPT/Codex OAuth without the Codex app-server harness. |
| `codex/gpt-5.5` | Bundled Codex provider plus Codex harness | You want native Codex app-server execution for the embedded agent turn. |
The Codex harness only claims `codex/*` model refs. Existing `openai/*`,
`openai-codex/*`, Anthropic, Gemini, xAI, local, and custom provider refs keep
@@ -82,7 +82,7 @@ uses.
## Minimal config
Use `codex/gpt-5.4`, enable the bundled plugin, and force the `codex` harness:
Use `codex/gpt-5.5`, enable the bundled plugin, and force the `codex` harness:
```json5
{
@@ -95,7 +95,7 @@ Use `codex/gpt-5.4`, enable the bundled plugin, and force the `codex` harness:
},
agents: {
defaults: {
model: "codex/gpt-5.4",
model: "codex/gpt-5.5",
embeddedHarness: {
runtime: "codex",
fallback: "none",
@@ -141,13 +141,13 @@ everything else:
agents: {
defaults: {
model: {
primary: "codex/gpt-5.4",
fallbacks: ["openai/gpt-5.4", "anthropic/claude-opus-4-6"],
primary: "codex/gpt-5.5",
fallbacks: ["openai/gpt-5.5", "anthropic/claude-opus-4-6"],
},
models: {
"codex/gpt-5.4": { alias: "codex" },
"codex/gpt-5.5": { alias: "codex" },
"codex/gpt-5.4-mini": { alias: "codex-mini" },
"openai/gpt-5.4": { alias: "gpt" },
"openai/gpt-5.5": { alias: "gpt" },
"anthropic/claude-opus-4-6": { alias: "opus" },
},
embeddedHarness: {
@@ -161,8 +161,8 @@ everything else:
With this shape:
- `/model codex` or `/model codex/gpt-5.4` uses the Codex app-server harness.
- `/model gpt` or `/model openai/gpt-5.4` uses the OpenAI provider path.
- `/model codex` or `/model codex/gpt-5.5` uses the Codex app-server harness.
- `/model gpt` or `/model openai/gpt-5.5` uses the OpenAI provider path.
- `/model opus` uses the Anthropic provider path.
- If a non-Codex model is selected, PI remains the compatibility harness.
@@ -175,7 +175,7 @@ the Codex harness:
{
agents: {
defaults: {
model: "codex/gpt-5.4",
model: "codex/gpt-5.5",
embeddedHarness: {
runtime: "codex",
fallback: "none",
@@ -220,7 +220,7 @@ auto-selection:
{
id: "codex",
name: "Codex",
model: "codex/gpt-5.4",
model: "codex/gpt-5.5",
embeddedHarness: {
runtime: "codex",
fallback: "none",
@@ -241,7 +241,7 @@ and lets the next turn resolve the harness from current config again.
By default, the Codex plugin asks the app-server for available models. If
discovery fails or times out, it uses the bundled fallback catalog:
- `codex/gpt-5.4`
- `codex/gpt-5.5`
- `codex/gpt-5.4-mini`
- `codex/gpt-5.2`
@@ -459,7 +459,7 @@ Remote app-server with explicit headers:
Model switching stays OpenClaw-controlled. When an OpenClaw session is attached
to an existing Codex thread, the next turn sends the currently selected
`codex/*` model, provider, approval policy, sandbox, and service tier to
app-server again. Switching from `codex/gpt-5.4` to `codex/gpt-5.2` keeps the
app-server again. Switching from `codex/gpt-5.5` to `codex/gpt-5.2` keeps the
thread binding but asks Codex to continue with the newly selected model.
## Codex command

View File

@@ -483,7 +483,7 @@ Each channel entry can include:
## modelSupport reference
Use `modelSupport` when OpenClaw should infer your provider plugin from
shorthand model ids like `gpt-5.5` or `claude-sonnet-4.6` before the plugin
runtime loads.
```json

View File

@@ -122,7 +122,7 @@ OpenClaw. The harness then claims that provider in `supports(...)`.
The bundled Codex plugin follows this pattern:
- provider id: `codex`
- user model refs: `codex/gpt-5.5`, `codex/gpt-5.2`, or another model returned
by the Codex app server
- harness id: `codex`
- auth: synthetic provider availability, because the Codex harness owns the
@@ -189,7 +189,7 @@ For Codex-only embedded runs:
{
"agents": {
"defaults": {
"model": "codex/gpt-5.4",
"model": "codex/gpt-5.5",
"embeddedHarness": {
"runtime": "codex",
"fallback": "none"
@@ -230,7 +230,7 @@ Per-agent overrides use the same shape:
"list": [
{
"id": "codex-only",
"model": "codex/gpt-5.4",
"model": "codex/gpt-5.5",
"embeddedHarness": {
"runtime": "codex",
"fallback": "none"

View File

@@ -66,7 +66,7 @@ Any model available on the gateway can be used with the `kilocode/` prefix:
| -------------------------------------- | ---------------------------------- |
| `kilocode/kilo/auto` | Default — smart routing |
| `kilocode/anthropic/claude-sonnet-4` | Anthropic via Kilo |
| `kilocode/openai/gpt-5.5` | OpenAI via Kilo |
| `kilocode/google/gemini-3-pro-preview` | Google via Kilo |
| ...and many more | Use `/models kilocode` to list all |

View File

@@ -65,8 +65,8 @@ Choose your preferred auth method and follow the setup steps.
| Model ref | Route | Auth |
|-----------|-------|------|
| `openai/gpt-5.5` | Direct OpenAI Platform API | `OPENAI_API_KEY` |
| `openai/gpt-5.5-pro` | Direct OpenAI Platform API | `OPENAI_API_KEY` |
<Note>
ChatGPT/Codex sign-in is routed through `openai-codex/*`, not `openai/*`.
@@ -77,7 +77,7 @@ Choose your preferred auth method and follow the setup steps.
```json5
{
env: { OPENAI_API_KEY: "sk-..." },
agents: { defaults: { model: { primary: "openai/gpt-5.4" } } },
agents: { defaults: { model: { primary: "openai/gpt-5.5" } } },
}
```
@@ -110,7 +110,7 @@ Choose your preferred auth method and follow the setup steps.
</Step>
<Step title="Set the default model">
```bash
openclaw config set agents.defaults.model.primary openai-codex/gpt-5.5
```
</Step>
<Step title="Verify the model is available">
@@ -124,18 +124,18 @@ Choose your preferred auth method and follow the setup steps.
| Model ref | Route | Auth |
|-----------|-------|------|
| `openai-codex/gpt-5.5` | ChatGPT/Codex OAuth | Codex sign-in |
| `openai-codex/gpt-5.3-codex-spark` | ChatGPT/Codex OAuth | Codex sign-in (entitlement-dependent) |
<Note>
This route is intentionally separate from `openai/gpt-5.5`. Use `openai/*` with an API key for direct Platform access, and `openai-codex/*` for Codex subscription access.
</Note>
### Config example
```json5
{
agents: { defaults: { model: { primary: "openai-codex/gpt-5.4" } } },
agents: { defaults: { model: { primary: "openai-codex/gpt-5.5" } } },
}
```
@@ -147,9 +147,9 @@ Choose your preferred auth method and follow the setup steps.
OpenClaw treats model metadata and the runtime context cap as separate values.
For `openai-codex/gpt-5.5`:
- Native `contextWindow`: `1000000`
- Default runtime `contextTokens` cap: `272000`
The smaller default cap has better latency and quality characteristics in practice. Override it with `contextTokens`:
@@ -159,7 +159,7 @@ Choose your preferred auth method and follow the setup steps.
models: {
providers: {
"openai-codex": {
models: [{ id: "gpt-5.4", contextTokens: 160000 }],
models: [{ id: "gpt-5.5", contextTokens: 160000 }],
},
},
},
@@ -243,7 +243,7 @@ See [Video Generation](/tools/video-generation) for shared tool parameters, prov
## GPT-5 prompt contribution
OpenClaw adds a shared GPT-5 prompt contribution for GPT-5-family runs across providers. It applies by model id, so `openai/gpt-5.5`, `openai-codex/gpt-5.5`, `openrouter/openai/gpt-5.5`, `opencode/gpt-5.5`, and other compatible GPT-5 refs receive the same overlay. Older GPT-4.x models do not.
The bundled native Codex harness provider (`codex/*`) uses the same GPT-5 behavior and heartbeat overlay through Codex app-server developer instructions, so `codex/gpt-5.x` sessions keep the same follow-through and proactive heartbeat guidance even though Codex owns the rest of the harness prompt.
@@ -535,7 +535,7 @@ the Server-side compaction accordion below.
agents: {
defaults: {
models: {
"openai-codex/gpt-5.4": {
"openai-codex/gpt-5.5": {
params: { transport: "auto" },
},
},
@@ -559,7 +559,7 @@ the Server-side compaction accordion below.
agents: {
defaults: {
models: {
"openai/gpt-5.4": {
"openai/gpt-5.5": {
params: { openaiWsWarmup: false },
},
},
@@ -583,8 +583,8 @@ the Server-side compaction accordion below.
agents: {
defaults: {
models: {
"openai/gpt-5.4": { params: { fastMode: true } },
"openai-codex/gpt-5.4": { params: { fastMode: true } },
"openai/gpt-5.5": { params: { fastMode: true } },
"openai-codex/gpt-5.5": { params: { fastMode: true } },
},
},
},
@@ -605,8 +605,8 @@ the Server-side compaction accordion below.
agents: {
defaults: {
models: {
"openai/gpt-5.4": { params: { serviceTier: "priority" } },
"openai-codex/gpt-5.4": { params: { serviceTier: "priority" } },
"openai/gpt-5.5": { params: { serviceTier: "priority" } },
"openai-codex/gpt-5.5": { params: { serviceTier: "priority" } },
},
},
},
@@ -637,7 +637,7 @@ the Server-side compaction accordion below.
agents: {
defaults: {
models: {
"azure-openai-responses/gpt-5.4": {
"azure-openai-responses/gpt-5.5": {
params: { responsesServerCompaction: true },
},
},
@@ -652,7 +652,7 @@ the Server-side compaction accordion below.
agents: {
defaults: {
models: {
"openai/gpt-5.4": {
"openai/gpt-5.5": {
params: {
responsesServerCompaction: true,
responsesCompactThreshold: 120000,
@@ -670,7 +670,7 @@ the Server-side compaction accordion below.
agents: {
defaults: {
models: {
"openai/gpt-5.4": {
"openai/gpt-5.5": {
params: { responsesServerCompaction: false },
},
},

View File

@@ -97,7 +97,7 @@ as one OpenCode setup.
| Property | Value |
| ---------------- | ----------------------------------------------------------------------- |
| Runtime provider | `opencode` |
| Example models | `opencode/claude-opus-4-6`, `opencode/gpt-5.5`, `opencode/gemini-3-pro` |
### Go

View File

@@ -21,7 +21,7 @@ access hundreds of models through a single endpoint.
<Tip>
OpenClaw auto-discovers the Gateway `/v1/models` catalog, so
`/models vercel-ai-gateway` includes current model refs such as
`vercel-ai-gateway/openai/gpt-5.5` and
`vercel-ai-gateway/moonshotai/kimi-k2.6`.
</Tip>
@@ -102,7 +102,7 @@ configuration. OpenClaw resolves the canonical form automatically.
<Accordion title="Provider routing">
Vercel AI Gateway routes requests to the upstream provider based on the model
ref prefix. For example, `vercel-ai-gateway/anthropic/claude-opus-4.6` routes
through Anthropic, while `vercel-ai-gateway/openai/gpt-5.5` routes through
OpenAI and `vercel-ai-gateway/moonshotai/kimi-k2.6` routes through
MoonshotAI. Your single `AI_GATEWAY_API_KEY` handles authentication for all
upstream providers.

View File

@@ -173,7 +173,7 @@ title: Image generation roundtrip
surface: image
tags: [media, image, roundtrip]
models:
primary: openai/gpt-5.5
requires:
tools: [image_generate]
plugins: [openai, qa-channel]

View File

@@ -34,11 +34,11 @@ For a high-level overview, see [Onboarding (CLI)](/start/wizard).
- **Anthropic API key**: preferred Anthropic assistant choice in onboarding/configure.
- **Anthropic setup-token**: still available in onboarding/configure, though OpenClaw now prefers Claude CLI reuse when available.
- **OpenAI Code (Codex) subscription (OAuth)**: browser flow; paste the `code#state`.
- Sets `agents.defaults.model` to `openai-codex/gpt-5.5` when model is unset or `openai/*`.
- **OpenAI Code (Codex) subscription (device pairing)**: browser pairing flow with a short-lived device code.
- Sets `agents.defaults.model` to `openai-codex/gpt-5.5` when model is unset or `openai/*`.
- **OpenAI API key**: uses `OPENAI_API_KEY` if present or prompts for a key, then stores it in auth profiles.
- Sets `agents.defaults.model` to `openai/gpt-5.5` when model is unset, `openai/*`, or `openai-codex/*`.
- **xAI (Grok) API key**: prompts for `XAI_API_KEY` and configures xAI as a model provider.
- **OpenCode**: prompts for `OPENCODE_API_KEY` (or `OPENCODE_ZEN_API_KEY`, get it at https://opencode.ai/auth) and lets you pick the Zen or Go catalog.
- **Ollama**: offers **Cloud + Local**, **Cloud only**, or **Local only** first. `Cloud only` prompts for `OLLAMA_API_KEY` and uses `https://ollama.com`; the host-backed modes prompt for the Ollama base URL, discover available models, and auto-pull the selected local model when needed; `Cloud + Local` also checks whether that Ollama host is signed in for cloud access.
@@ -184,7 +184,7 @@ Use this reference page for flag semantics and step ordering.
```bash
openclaw agents add work \
--workspace ~/.openclaw/workspace-work \
--model openai/gpt-5.5 \
--bind whatsapp:biz \
--non-interactive \
--json

View File

@@ -203,7 +203,7 @@ sessions, and auth profiles. Running without `--workspace` launches the wizard.
```bash
openclaw agents add work \
--workspace ~/.openclaw/workspace-work \
--model openai/gpt-5.5 \
--bind whatsapp:biz \
--non-interactive \
--json

View File

@@ -132,19 +132,19 @@ What you set:
<Accordion title="OpenAI Code subscription (OAuth)">
Browser flow; paste `code#state`.
Sets `agents.defaults.model` to `openai-codex/gpt-5.5` when model is unset or `openai/*`.
</Accordion>
<Accordion title="OpenAI Code subscription (device pairing)">
Browser pairing flow with a short-lived device code.
Sets `agents.defaults.model` to `openai-codex/gpt-5.5` when model is unset or `openai/*`.
</Accordion>
<Accordion title="OpenAI API key">
Uses `OPENAI_API_KEY` if present or prompts for a key, then stores the credential in auth profiles.
Sets `agents.defaults.model` to `openai/gpt-5.5` when model is unset, `openai/*`, or `openai-codex/*`.
</Accordion>
<Accordion title="xAI (Grok) API key">

View File

@@ -483,7 +483,7 @@ Notes:
| `/acp close` | Close session and unbind thread targets. | `/acp close` |
| `/acp status` | Show backend, mode, state, runtime options, capabilities. | `/acp status` |
| `/acp set-mode` | Set runtime mode for target session. | `/acp set-mode plan` |
| `/acp set` | Generic runtime config option write. | `/acp set model openai/gpt-5.5` |
| `/acp cwd` | Set runtime working directory override. | `/acp cwd /Users/user/Projects/repo` |
| `/acp permissions` | Set approval policy profile. | `/acp permissions strict` |
| `/acp timeout` | Set runtime timeout (seconds). | `/acp timeout 120` |

View File

@@ -215,7 +215,7 @@ when you want to disable it or restrict it to specific models:
{
tools: {
exec: {
applyPatch: { workspaceOnly: true, allowModels: ["gpt-5.4"] },
applyPatch: { workspaceOnly: true, allowModels: ["gpt-5.5"] },
},
},
}

View File

@@ -53,9 +53,9 @@ without writing custom OpenClaw code for each workflow.
"enabled": true,
"config": {
"defaultProvider": "openai-codex",
"defaultModel": "gpt-5.4",
"defaultModel": "gpt-5.5",
"defaultAuthProfileId": "main",
"allowedModels": ["openai-codex/gpt-5.4"],
"allowedModels": ["openai-codex/gpt-5.5"],
"maxTokens": 800,
"timeoutMs": 30000
}

View File

@@ -205,7 +205,7 @@ The filtering order is:
Each level can further restrict tools, but cannot grant back denied tools from earlier levels.
If `agents.list[].tools.sandbox.tools` is set, it replaces `tools.sandbox.tools` for that agent.
If `agents.list[].tools.profile` is set, it overrides `tools.profile` for that agent.
Provider tool keys accept either `provider` (e.g. `google-antigravity`) or `provider/model` (e.g. `openai/gpt-5.5`).
Tool policies support `group:*` shorthands that expand to multiple tools. See [Tool groups](/gateway/sandbox-vs-tool-policy-vs-elevated#tool-groups-shorthands) for the full list.
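
A minimal sketch of that precedence, using the key paths named above — the agent id, profile names, and tool names here are hypothetical placeholders, not values from the bundled defaults:

```json5
{
  tools: {
    profile: "standard", // global default profile
    sandbox: { tools: ["read", "exec"] }, // global sandbox tool list
  },
  agents: {
    list: [
      {
        id: "locked-down", // hypothetical agent
        tools: {
          profile: "minimal", // overrides tools.profile for this agent
          sandbox: { tools: ["read"] }, // replaces tools.sandbox.tools for this agent
        },
      },
    ],
  },
}
```

Note that the per-agent entries can only narrow what the global level allows; a tool denied globally cannot be granted back here.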

View File

@@ -243,7 +243,7 @@ Examples:
/model
/model list
/model 3
/model openai/gpt-5.5
/model opus@anthropic:default
/model status
```