mirror of https://github.com/moltbot/moltbot.git (synced 2026-05-06 23:55:12 +00:00)
docs(openai): canonicalize GPT model refs
@@ -38,7 +38,7 @@ openclaw config get browser.executablePath
 openclaw config set browser.executablePath "/usr/bin/google-chrome"
 openclaw config set agents.defaults.heartbeat.every "2h"
 openclaw config set agents.list[0].tools.exec.node "node-id-or-name"
-openclaw config set agents.defaults.models '{"openai-codex/gpt-5.5":{}}' --strict-json --merge
+openclaw config set agents.defaults.models '{"openai/gpt-5.5":{}}' --strict-json --merge
 openclaw config set channels.discord.token --ref-provider default --ref-source env --ref-id DISCORD_BOT_TOKEN
 openclaw config set secrets.providers.vaultfile --provider-source file --provider-path /etc/openclaw/secrets.json --provider-mode json
 openclaw config unset plugins.entries.brave.config.webSearch.apiKey
@@ -115,7 +115,7 @@ you pass `--replace`.
 Use `--merge` when adding entries to those maps:
 
 ```bash
-openclaw config set agents.defaults.models '{"openai-codex/gpt-5.5":{}}' --strict-json --merge
+openclaw config set agents.defaults.models '{"openai/gpt-5.5":{}}' --strict-json --merge
 openclaw config set models.providers.ollama.models '[{"id":"llama3.2","name":"Llama 3.2"}]' --strict-json --merge
 ```
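For reference, a sketch of the merge semantics above: `--merge` adds keys to the existing `agents.defaults.models` map, while `--replace` would overwrite the whole map. The pre-existing `anthropic` entry here is a hypothetical example, not part of this commit.

```json5
// Hypothetical agents.defaults.models map after the --merge command above.
{
  agents: {
    defaults: {
      models: {
        "anthropic/claude-opus-4-6": {}, // pre-existing entry, preserved by --merge
        "openai/gpt-5.5": {},            // entry added by the merge
      },
    },
  },
}
```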
@@ -18,7 +18,7 @@ For model selection rules, see [/concepts/models](/concepts/models).
 - CLI helpers: `openclaw onboard`, `openclaw models list`, `openclaw models set <provider/model>`.
 - `models.providers.*.models[].contextWindow` is native model metadata; `contextTokens` is the effective runtime cap.
 - Fallback rules, cooldown probes, and session-override persistence: [Model failover](/concepts/model-failover).
-- Bundled `codex` is paired with the Codex agent harness — use `codex/gpt-*` for Codex-owned login, discovery, native thread resume, and app-server execution. Plain `openai/gpt-*` uses the OpenAI provider and normal transport. Disable automatic PI fallback for Codex-only deployments via `agents.defaults.embeddedHarness.fallback: "none"` — see [Codex harness](/plugins/codex-harness).
+- OpenAI GPT model refs are canonical as `openai/<model>`. Legacy `openai-codex/<model>` and `codex/<model>` refs remain compatibility aliases for older configs and tests. For native Codex app-server execution, keep the model ref as `openai/gpt-*` and force `agents.defaults.embeddedHarness.runtime: "codex"` — see [Codex harness](/plugins/codex-harness).
 
 ## Plugin-owned provider behavior
@@ -95,26 +95,27 @@ OpenClaw ships with the pi‑ai catalog. These providers require **no**
 }
 ```
 
-### OpenAI Code (Codex)
+### OpenAI Codex OAuth
 
 - Provider: `openai-codex`
 - Auth: OAuth (ChatGPT)
-- Example model: `openai-codex/gpt-5.5`
+- Canonical model ref: `openai/gpt-5.5`
+- Legacy model refs: `openai-codex/gpt-*`, `codex/gpt-*`
 - CLI: `openclaw onboard --auth-choice openai-codex` or `openclaw models auth login --provider openai-codex`
 - Default transport is `auto` (WebSocket-first, SSE fallback)
-- Override per model via `agents.defaults.models["openai-codex/<model>"].params.transport` (`"sse"`, `"websocket"`, or `"auto"`)
+- Override per model via `agents.defaults.models["openai/<model>"].params.transport` (`"sse"`, `"websocket"`, or `"auto"`)
 - `params.serviceTier` is also forwarded on native Codex Responses requests (`chatgpt.com/backend-api`)
 - Hidden OpenClaw attribution headers (`originator`, `version`,
   `User-Agent`) are only attached on native Codex traffic to
   `chatgpt.com/backend-api`, not generic OpenAI-compatible proxies
 - Shares the same `/fast` toggle and `params.fastMode` config as direct `openai/*`; OpenClaw maps that to `service_tier=priority`
-- `openai-codex/gpt-5.3-codex-spark` remains available when the Codex OAuth catalog exposes it; entitlement-dependent
-- `openai-codex/gpt-5.5` keeps native `contextWindow = 1000000` and a default runtime `contextTokens = 272000`; override the runtime cap with `models.providers.openai-codex.models[].contextTokens`
+- `openai/gpt-5.3-codex-spark` remains available through Codex OAuth when the catalog exposes it; entitlement-dependent
+- `openai/gpt-5.5` keeps native `contextWindow = 1000000` and a default runtime `contextTokens = 272000`; override the runtime cap with `models.providers.openai-codex.models[].contextTokens`
 - Policy note: OpenAI Codex OAuth is explicitly supported for external tools/workflows like OpenClaw.
 
 ```json5
 {
-  agents: { defaults: { model: { primary: "openai-codex/gpt-5.5" } } },
+  agents: { defaults: { model: { primary: "openai/gpt-5.5" } } },
 }
 ```
@@ -72,7 +72,7 @@ Provider configuration examples (including OpenCode) live in
 Use additive writes when updating `agents.defaults.models` by hand:
 
 ```bash
-openclaw config set agents.defaults.models '{"openai-codex/gpt-5.5":{}}' --strict-json --merge
+openclaw config set agents.defaults.models '{"openai/gpt-5.5":{}}' --strict-json --merge
 ```
 
 `openclaw config set` protects model/provider maps from accidental clobbers. A
@@ -1257,7 +1257,7 @@ Codex app-server harness.
 {
   agents: {
     defaults: {
-      model: "codex/gpt-5.5",
+      model: "openai/gpt-5.5",
       embeddedHarness: {
         runtime: "codex",
         fallback: "none",
@@ -1270,7 +1270,7 @@ Codex app-server harness.
 - `runtime`: `"auto"`, `"pi"`, or a registered plugin harness id. The bundled Codex plugin registers `codex`.
 - `fallback`: `"pi"` or `"none"`. `"pi"` keeps the built-in PI harness as the compatibility fallback when no plugin harness is selected. `"none"` makes missing or unsupported plugin harness selection fail instead of silently using PI. Selected plugin harness failures always surface directly.
 - Environment overrides: `OPENCLAW_AGENT_RUNTIME=<id|auto|pi>` overrides `runtime`; `OPENCLAW_AGENT_HARNESS_FALLBACK=none` disables PI fallback for that process.
-- For Codex-only deployments, set `model: "codex/gpt-5.5"`, `embeddedHarness.runtime: "codex"`, and `embeddedHarness.fallback: "none"`.
+- For Codex-only deployments, set `model: "openai/gpt-5.5"`, `embeddedHarness.runtime: "codex"`, and `embeddedHarness.fallback: "none"`.
 - Harness choice is pinned per session id after the first embedded run. Config/env changes affect new or reset sessions, not an existing transcript. Legacy sessions with transcript history but no recorded pin are treated as PI-pinned. `/status` shows non-PI harness ids such as `codex` next to `Fast`.
 - This only controls the embedded chat harness. Media generation, vision, PDF, music, video, and TTS still use their provider/model settings.
@@ -658,26 +658,27 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
 </Accordion>
 
 <Accordion title="How does Codex auth work?">
-OpenClaw supports **OpenAI Code (Codex)** via OAuth (ChatGPT sign-in). Onboarding can run the OAuth flow and will set the default model to `openai-codex/gpt-5.5` when appropriate. See [Model providers](/concepts/model-providers) and [Onboarding (CLI)](/start/wizard).
+OpenClaw supports **OpenAI Code (Codex)** via OAuth (ChatGPT sign-in). New model refs should use the canonical `openai/gpt-5.5` path; `openai-codex/gpt-*` remains a legacy compatibility alias. See [Model providers](/concepts/model-providers) and [Onboarding (CLI)](/start/wizard).
 </Accordion>
 
-<Accordion title="Why does ChatGPT GPT-5.5 not unlock openai/gpt-5.5 in OpenClaw?">
-OpenClaw treats the two routes separately:
+<Accordion title="Why does OpenClaw still mention openai-codex?">
+`openai-codex` is still the internal auth/profile provider id for ChatGPT/Codex OAuth. The model ref should be canonical OpenAI:
 
-- `openai-codex/gpt-5.5` = ChatGPT/Codex OAuth
-- `openai/gpt-5.5` = direct OpenAI Platform API
+- `openai/gpt-5.5` = canonical GPT-5.5 model ref
+- `openai-codex/gpt-5.5` = legacy compatibility alias
+- `openai-codex:...` = auth profile id, not a model ref
 
-In OpenClaw, ChatGPT/Codex sign-in is wired to the `openai-codex/*` route,
-not the direct `openai/*` route. If you want the direct API path in
-OpenClaw, set `OPENAI_API_KEY` (or the equivalent OpenAI provider config).
-If you want ChatGPT/Codex sign-in in OpenClaw, use `openai-codex/*`.
+If you want the direct OpenAI Platform billing/limit path, set
+`OPENAI_API_KEY`. If you want ChatGPT/Codex subscription auth, sign in with
+`openclaw models auth login --provider openai-codex` and keep model refs on
+`openai/*` in new configs.
 
 </Accordion>
 
 <Accordion title="Why can Codex OAuth limits differ from ChatGPT web?">
-`openai-codex/*` uses the Codex OAuth route, and its usable quota windows are
-OpenAI-managed and plan-dependent. In practice, those limits can differ from
-the ChatGPT website/app experience, even when both are tied to the same account.
+Codex OAuth uses OpenAI-managed, plan-dependent quota windows. In practice,
+those limits can differ from the ChatGPT website/app experience, even when
+both are tied to the same account.
 
 OpenClaw can show the currently visible provider usage/quota windows in
 `openclaw models status`, but it does not invent or normalize ChatGPT-web
@@ -2344,8 +2345,8 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
 <Accordion title="Can I use GPT 5.5 for daily tasks and Codex 5.5 for coding?">
 Yes. Set one as default and switch as needed:
 
-- **Quick switch (per session):** `/model gpt-5.5` for daily tasks, `/model openai-codex/gpt-5.5` for coding with Codex OAuth.
-- **Default + switch:** set `agents.defaults.model.primary` to `openai/gpt-5.5`, then switch to `openai-codex/gpt-5.5` when coding (or the other way around).
+- **Quick switch (per session):** `/model gpt-5.5` for daily tasks, or keep the same model and switch auth/profile as needed.
+- **Default:** set `agents.defaults.model.primary` to `openai/gpt-5.5`.
 - **Sub-agents:** route coding tasks to sub-agents with a different default model.
 
 See [Models](/concepts/models) and [Slash commands](/tools/slash-commands).
@@ -2355,9 +2356,9 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
 <Accordion title="How do I configure fast mode for GPT 5.5?">
 Use either a session toggle or a config default:
 
-- **Per session:** send `/fast on` while the session is using `openai/gpt-5.5` or `openai-codex/gpt-5.5`.
+- **Per session:** send `/fast on` while the session is using `openai/gpt-5.5`.
 - **Per model default:** set `agents.defaults.models["openai/gpt-5.5"].params.fastMode` to `true`.
-- **Codex OAuth too:** if you also use `openai-codex/gpt-5.5`, set the same flag there.
+- **Legacy aliases:** older `openai-codex/gpt-*` entries can keep their own params, but new configs should put params on `openai/gpt-*`.
 
 Example:
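Under the canonical refs, the per-model default described above reduces to a single map entry; a minimal sketch:

```json5
{
  agents: {
    defaults: {
      models: {
        // Canonical ref only; no separate openai-codex entry is needed.
        "openai/gpt-5.5": { params: { fastMode: true } },
      },
    },
  },
}
```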
@@ -2371,11 +2372,6 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
           fastMode: true,
         },
       },
-      "openai-codex/gpt-5.5": {
-        params: {
-          fastMode: true,
-        },
-      },
     },
   },
 },
@@ -681,7 +681,7 @@ Docker notes:
 `agent` method:
 - load the bundled `codex` plugin
 - select `OPENCLAW_AGENT_RUNTIME=codex`
-- send a first gateway agent turn to `codex/gpt-5.5`
+- send a first gateway agent turn to `openai/gpt-5.5` with the Codex harness forced
 - send a second turn to the same OpenClaw session and verify the app-server
   thread can resume
 - run `/codex status` and `/codex models` through the same gateway command
@@ -691,7 +691,7 @@ Docker notes:
   denied so the agent asks back
 - Test: `src/gateway/gateway-codex-harness.live.test.ts`
 - Enable: `OPENCLAW_LIVE_CODEX_HARNESS=1`
-- Default model: `codex/gpt-5.5`
+- Default model: `openai/gpt-5.5`
 - Optional image probe: `OPENCLAW_LIVE_CODEX_HARNESS_IMAGE_PROBE=1`
 - Optional MCP/tool probe: `OPENCLAW_LIVE_CODEX_HARNESS_MCP_PROBE=1`
 - Optional Guardian probe: `OPENCLAW_LIVE_CODEX_HARNESS_GUARDIAN_PROBE=1`
@@ -708,7 +708,7 @@ OPENCLAW_LIVE_CODEX_HARNESS=1 \
 OPENCLAW_LIVE_CODEX_HARNESS_IMAGE_PROBE=1 \
 OPENCLAW_LIVE_CODEX_HARNESS_MCP_PROBE=1 \
 OPENCLAW_LIVE_CODEX_HARNESS_GUARDIAN_PROBE=1 \
-OPENCLAW_LIVE_CODEX_HARNESS_MODEL=codex/gpt-5.5 \
+OPENCLAW_LIVE_CODEX_HARNESS_MODEL=openai/gpt-5.5 \
 pnpm test:live -- src/gateway/gateway-codex-harness.live.test.ts
 ```
@@ -731,7 +731,7 @@ Docker notes:
   `OPENCLAW_LIVE_CODEX_HARNESS_GUARDIAN_PROBE=0` when you need a narrower debug
   run.
 - Docker also exports `OPENCLAW_AGENT_HARNESS_FALLBACK=none`, matching the live
-  test config so `openai-codex/*` or PI fallback cannot hide a Codex harness
+  test config so legacy aliases or PI fallback cannot hide a Codex harness
   regression.
 
 ### Recommended live recipes
@@ -769,7 +769,7 @@ There is no fixed “CI model list” (live is opt-in), but these are the **reco
 This is the “common models” run we expect to keep working:
 
 - OpenAI (non-Codex): `openai/gpt-5.5` (optional: `openai/gpt-5.4-mini`)
-- OpenAI Codex: `openai-codex/gpt-5.5`
+- OpenAI Codex OAuth: `openai/gpt-5.5` (`openai-codex/gpt-*` remains a legacy alias)
 - Anthropic: `anthropic/claude-opus-4-6` (or `anthropic/claude-sonnet-4-6`)
 - Google (Gemini API): `google/gemini-3.1-pro-preview` and `google/gemini-3-flash-preview` (avoid older Gemini 2.x models)
 - Google (Antigravity): `google-antigravity/claude-opus-4-6-thinking` and `google-antigravity/gemini-3-flash`
@@ -777,7 +777,7 @@ This is the “common models” run we expect to keep working:
 - MiniMax: `minimax/MiniMax-M2.7`
 
 Run gateway smoke with tools + image:
-`OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.5,openai-codex/gpt-5.5,anthropic/claude-opus-4-6,google/gemini-3.1-pro-preview,google/gemini-3-flash-preview,google-antigravity/claude-opus-4-6-thinking,google-antigravity/gemini-3-flash,zai/glm-4.7,minimax/MiniMax-M2.7" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
+`OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.5,anthropic/claude-opus-4-6,google/gemini-3.1-pro-preview,google/gemini-3-flash-preview,google-antigravity/claude-opus-4-6-thinking,google-antigravity/gemini-3-flash,zai/glm-4.7,minimax/MiniMax-M2.7" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
 
 ### Baseline: tool calling (Read + optional Exec)
@@ -31,25 +31,23 @@ aligned with the PI harness:
 Bundled plugins can also register a Codex app-server extension factory to add
 async `tool_result` middleware.
 
-The harness is off by default. It is selected only when the `codex` plugin is
-enabled and the resolved model is a `codex/*` model, or when you explicitly
-force `embeddedHarness.runtime: "codex"` or `OPENCLAW_AGENT_RUNTIME=codex`.
-If you never configure `codex/*`, existing PI, OpenAI, Anthropic, Gemini, local,
-and custom-provider runs keep their current behavior.
+The harness is off by default. New configs should keep OpenAI model refs
+canonical as `openai/gpt-*` and explicitly force
+`embeddedHarness.runtime: "codex"` or `OPENCLAW_AGENT_RUNTIME=codex` when they
+want native app-server execution. Legacy `codex/*` model refs still auto-select
+the harness for compatibility.
 
 ## Pick the right model prefix
 
-OpenClaw has separate routes for OpenAI and Codex-shaped access:
+OpenClaw now keeps OpenAI GPT model refs canonical as `openai/*`:
 
-| Model ref              | Runtime path                                 | Use when                                                                |
-| ---------------------- | -------------------------------------------- | ----------------------------------------------------------------------- |
-| `openai/gpt-5.5`       | OpenAI provider through OpenClaw/PI plumbing | You want direct OpenAI Platform API access with `OPENAI_API_KEY`.       |
-| `openai-codex/gpt-5.5` | OpenAI Codex OAuth provider through PI       | You want ChatGPT/Codex OAuth without the Codex app-server harness.      |
-| `codex/gpt-5.5`        | Bundled Codex provider plus Codex harness    | You want native Codex app-server execution for the embedded agent turn. |
+| Model ref                                             | Runtime path                                 | Use when                                                                |
+| ----------------------------------------------------- | -------------------------------------------- | ----------------------------------------------------------------------- |
+| `openai/gpt-5.5`                                      | OpenAI provider through OpenClaw/PI plumbing | You want direct OpenAI Platform API access with `OPENAI_API_KEY`.       |
+| `openai/gpt-5.5` + `embeddedHarness.runtime: "codex"` | Codex app-server harness                     | You want native Codex app-server execution for the embedded agent turn. |
 
-The Codex harness only claims `codex/*` model refs. Existing `openai/*`,
-`openai-codex/*`, Anthropic, Gemini, xAI, local, and custom provider refs keep
-their normal paths.
+Legacy `openai-codex/gpt-*` and `codex/gpt-*` refs remain accepted as
+compatibility aliases, but new docs/config examples should use `openai/gpt-*`.
 
 Harness selection is not a live session control. When an embedded turn runs,
 OpenClaw records the selected harness id on that session and keeps using it for
@@ -82,7 +80,7 @@ uses.
 
 ## Minimal config
 
-Use `codex/gpt-5.5`, enable the bundled plugin, and force the `codex` harness:
+Use `openai/gpt-5.5`, enable the bundled plugin, and force the `codex` harness:
 
 ```json5
 {
@@ -95,7 +93,7 @@ Use `codex/gpt-5.5`, enable the bundled plugin, and force the `codex` harness:
   },
   agents: {
     defaults: {
-      model: "codex/gpt-5.5",
+      model: "openai/gpt-5.5",
       embeddedHarness: {
         runtime: "codex",
         fallback: "none",
@@ -120,14 +118,15 @@ If your config uses `plugins.allow`, include `codex` there too:
 }
 ```
 
-Setting `agents.defaults.model` or an agent model to `codex/<model>` also
-auto-enables the bundled `codex` plugin. The explicit plugin entry is still
-useful in shared configs because it makes the deployment intent obvious.
+Legacy configs that set `agents.defaults.model` or an agent model to
+`codex/<model>` still auto-enable the bundled `codex` plugin. New configs should
+prefer `openai/<model>` plus the explicit `embeddedHarness` entry above.
 
 ## Add Codex without replacing other models
 
-Keep `runtime: "auto"` when you want Codex for `codex/*` models and PI for
-everything else:
+Keep `runtime: "auto"` when you want legacy `codex/*` refs to select Codex and
+PI for everything else. For new configs, prefer explicit `runtime: "codex"` on
+the agents that should use the harness.
 
 ```json5
 {
@@ -141,17 +140,15 @@ everything else:
   agents: {
     defaults: {
       model: {
-        primary: "codex/gpt-5.5",
+        primary: "openai/gpt-5.5",
         fallbacks: ["openai/gpt-5.5", "anthropic/claude-opus-4-6"],
       },
       models: {
-        "codex/gpt-5.5": { alias: "codex" },
-        "codex/gpt-5.4-mini": { alias: "codex-mini" },
         "openai/gpt-5.5": { alias: "gpt" },
         "anthropic/claude-opus-4-6": { alias: "opus" },
       },
       embeddedHarness: {
-        runtime: "auto",
+        runtime: "codex",
         fallback: "pi",
       },
     },
@@ -161,8 +158,7 @@ everything else:
 
 With this shape:
 
-- `/model codex` or `/model codex/gpt-5.5` uses the Codex app-server harness.
-- `/model gpt` or `/model openai/gpt-5.5` uses the OpenAI provider path.
+- `/model gpt` or `/model openai/gpt-5.5` uses the Codex app-server harness for this config.
 - `/model opus` uses the Anthropic provider path.
 - If a non-Codex model is selected, PI remains the compatibility harness.
 
@@ -175,7 +171,7 @@ the Codex harness:
 {
   agents: {
     defaults: {
-      model: "codex/gpt-5.5",
+      model: "openai/gpt-5.5",
       embeddedHarness: {
         runtime: "codex",
         fallback: "none",
@@ -194,8 +190,7 @@ openclaw gateway run
 ```
 
 With fallback disabled, OpenClaw fails early if the Codex plugin is disabled,
-the requested model is not a `codex/*` ref, the app-server is too old, or the
-app-server cannot start.
+the app-server is too old, or the app-server cannot start.
 
 ## Per-agent Codex
 
@@ -220,7 +215,7 @@ auto-selection:
 {
   id: "codex",
   name: "Codex",
-  model: "codex/gpt-5.5",
+  model: "openai/gpt-5.5",
   embeddedHarness: {
     runtime: "codex",
     fallback: "none",
@@ -239,11 +234,11 @@ and lets the next turn resolve the harness from current config again.
 ## Model discovery
 
 By default, the Codex plugin asks the app-server for available models. If
-discovery fails or times out, it uses the bundled fallback catalog:
+discovery fails or times out, it uses a bundled fallback catalog for:
 
-- `codex/gpt-5.5`
-- `codex/gpt-5.4-mini`
-- `codex/gpt-5.2`
+- GPT-5.5
+- GPT-5.4 mini
+- GPT-5.2
 
 You can tune discovery under `plugins.entries.codex.config.discovery`:
 
@@ -458,8 +453,8 @@ Remote app-server with explicit headers:
 
 Model switching stays OpenClaw-controlled. When an OpenClaw session is attached
 to an existing Codex thread, the next turn sends the currently selected
-`codex/*` model, provider, approval policy, sandbox, and service tier to
-app-server again. Switching from `codex/gpt-5.5` to `codex/gpt-5.2` keeps the
+OpenAI model, provider, approval policy, sandbox, and service tier to
+app-server again. Switching from `openai/gpt-5.5` to `openai/gpt-5.2` keeps the
 thread binding but asks Codex to continue with the newly selected model.
 
 ## Codex command
@@ -122,19 +122,19 @@ OpenClaw. The harness then claims that provider in `supports(...)`.
 The bundled Codex plugin follows this pattern:
 
 - provider id: `codex`
-- user model refs: `codex/gpt-5.5`, `codex/gpt-5.2`, or another model returned
-  by the Codex app server
+- user model refs: canonical `openai/gpt-5.5` plus
+  `embeddedHarness.runtime: "codex"`; legacy `codex/gpt-*` refs remain accepted
+  for compatibility
 - harness id: `codex`
 - auth: synthetic provider availability, because the Codex harness owns the
   native Codex login/session
 - app-server request: OpenClaw sends the bare model id to Codex and lets the
   harness talk to the native app-server protocol
 
-The Codex plugin is additive. Plain `openai/gpt-*` refs remain OpenAI provider
-refs and continue to use the normal OpenClaw provider path. Select `codex/gpt-*`
-when you want Codex-managed auth, Codex model discovery, native threads, and
-Codex app-server execution. `/model` can switch among the Codex models returned
-by the Codex app server without requiring OpenAI provider credentials.
+The Codex plugin is additive. Plain `openai/gpt-*` refs continue to use the
+normal OpenClaw provider path unless you force the Codex harness with
+`embeddedHarness.runtime: "codex"`. Older `codex/gpt-*` refs still select the
+Codex provider and harness for compatibility.
 
 For operator setup, model prefix examples, and Codex-only configs, see
 [Codex Harness](/plugins/codex-harness).
@@ -156,13 +156,9 @@ into the OpenClaw transcript.
 
 The bundled `codex` harness is the native Codex mode for embedded OpenClaw
 agent turns. Enable the bundled `codex` plugin first, and include `codex` in
-`plugins.allow` if your config uses a restrictive allowlist. It is different
-from `openai-codex/*`:
-
-- `openai-codex/*` uses ChatGPT/Codex OAuth through the normal OpenClaw provider
-  path.
-- `codex/*` uses the bundled Codex provider and routes the turn through Codex
-  app-server.
+`plugins.allow` if your config uses a restrictive allowlist. New configs should
+use `openai/gpt-*` with `embeddedHarness.runtime: "codex"`. Legacy
+`openai-codex/*` and `codex/*` model refs remain compatibility aliases.
 
 When this mode runs, Codex owns the native thread id, resume behavior,
 compaction, and app-server execution. OpenClaw still owns the chat channel,
@@ -189,7 +185,7 @@ For Codex-only embedded runs:
 {
   "agents": {
     "defaults": {
-      "model": "codex/gpt-5.5",
+      "model": "openai/gpt-5.5",
       "embeddedHarness": {
         "runtime": "codex",
         "fallback": "none"
@@ -230,7 +226,7 @@ Per-agent overrides use the same shape:
     "list": [
       {
         "id": "codex-only",
-        "model": "codex/gpt-5.5",
+        "model": "openai/gpt-5.5",
         "embeddedHarness": {
           "runtime": "codex",
           "fallback": "none"
@@ -9,10 +9,10 @@ title: "OpenAI"
 
 # OpenAI
 
-OpenAI provides developer APIs for GPT models. OpenClaw supports two auth routes:
+OpenAI provides developer APIs for GPT models. OpenClaw supports two auth routes behind the same canonical OpenAI model refs:
 
 - **API key** — direct OpenAI Platform access with usage-based billing (`openai/*` models)
-- **Codex subscription** — ChatGPT/Codex sign-in with subscription access (`openai-codex/*` models)
+- **Codex subscription** — ChatGPT/Codex sign-in with subscription access. The internal auth/provider id is `openai-codex`, but new model refs should still use `openai/*`.
 
 OpenAI explicitly supports subscription OAuth usage in external tools and workflows like OpenClaw.
 
@@ -21,7 +21,7 @@ OpenAI explicitly supports subscription OAuth usage in external tools and workfl
 | OpenAI capability         | OpenClaw surface                          | Status                                                 |
 | ------------------------- | ----------------------------------------- | ------------------------------------------------------ |
 | Chat / Responses          | `openai/<model>` model provider           | Yes                                                    |
-| Codex subscription models | `openai-codex/<model>` model provider     | Yes                                                    |
+| Codex subscription models | `openai/<model>` with `openai-codex` auth | Yes                                                    |
 | Server-side web search    | Native OpenAI Responses tool              | Yes, when web search is enabled and no provider pinned |
 | Images                    | `image_generate`                          | Yes                                                    |
 | Videos                    | `video_generate`                          | Yes                                                    |
@@ -69,7 +69,7 @@ Choose your preferred auth method and follow the setup steps.
 | `openai/gpt-5.5-pro` | Direct OpenAI Platform API | `OPENAI_API_KEY` |
 
 <Note>
-ChatGPT/Codex sign-in is routed through `openai-codex/*`, not `openai/*`.
+`openai-codex/*` remains accepted as a legacy compatibility alias, but new configs should use `openai/*`.
 </Note>
 
 ### Config example
@@ -110,7 +110,7 @@ Choose your preferred auth method and follow the setup steps.
 </Step>
 <Step title="Set the default model">
 ```bash
-openclaw config set agents.defaults.model.primary openai-codex/gpt-5.5
+openclaw config set agents.defaults.model.primary openai/gpt-5.5
 ```
 </Step>
 <Step title="Verify the model is available">
@@ -124,18 +124,18 @@ Choose your preferred auth method and follow the setup steps.
 
 | Model ref | Route | Auth |
 |-----------|-------|------|
-| `openai-codex/gpt-5.5` | ChatGPT/Codex OAuth | Codex sign-in |
-| `openai-codex/gpt-5.3-codex-spark` | ChatGPT/Codex OAuth | Codex sign-in (entitlement-dependent) |
+| `openai/gpt-5.5` | ChatGPT/Codex OAuth | Codex sign-in |
+| `openai/gpt-5.3-codex-spark` | ChatGPT/Codex OAuth | Codex sign-in (entitlement-dependent) |
 
 <Note>
-This route is intentionally separate from `openai/gpt-5.5`. Use `openai/*` with an API key for direct Platform access, and `openai-codex/*` for Codex subscription access.
+`openai-codex/*` and `codex/*` model refs are legacy compatibility aliases. Keep using the `openai-codex` provider id for auth/profile commands.
 </Note>
 
 ### Config example
 
 ```json5
 {
-  agents: { defaults: { model: { primary: "openai-codex/gpt-5.5" } } },
+  agents: { defaults: { model: { primary: "openai/gpt-5.5" } } },
 }
 ```
 
@@ -147,7 +147,7 @@ Choose your preferred auth method and follow the setup steps.
 
 OpenClaw treats model metadata and the runtime context cap as separate values.
 
-For `openai-codex/gpt-5.5`:
+For `openai/gpt-5.5` through Codex OAuth:
 
 - Native `contextWindow`: `1000000`
 - Default runtime `contextTokens` cap: `272000`
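A minimal sketch of overriding the runtime cap via `models.providers.openai-codex.models[].contextTokens`, as described in the hunk above. The `id` field follows the provider-model entry shape shown elsewhere in these docs, and the `400000` value is purely illustrative:

```json5
{
  models: {
    providers: {
      "openai-codex": {
        models: [
          // Raise the default 272000 runtime cap; native contextWindow stays 1000000.
          { id: "gpt-5.5", contextTokens: 400000 },
        ],
      },
    },
  },
}
```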
@@ -243,9 +243,9 @@ See [Video Generation](/tools/video-generation) for shared tool parameters, prov
 
 ## GPT-5 prompt contribution
 
-OpenClaw adds a shared GPT-5 prompt contribution for GPT-5-family runs across providers. It applies by model id, so `openai/gpt-5.5`, `openai-codex/gpt-5.5`, `openrouter/openai/gpt-5.5`, `opencode/gpt-5.5`, and other compatible GPT-5 refs receive the same overlay. Older GPT-4.x models do not.
+OpenClaw adds a shared GPT-5 prompt contribution for GPT-5-family runs across providers. It applies by model id, so `openai/gpt-5.5`, `openrouter/openai/gpt-5.5`, `opencode/gpt-5.5`, and other compatible GPT-5 refs receive the same overlay. Older GPT-4.x models do not.
 
-The bundled native Codex harness provider (`codex/*`) uses the same GPT-5 behavior and heartbeat overlay through Codex app-server developer instructions, so `codex/gpt-5.x` sessions keep the same follow-through and proactive heartbeat guidance even though Codex owns the rest of the harness prompt.
+The bundled native Codex harness uses the same GPT-5 behavior and heartbeat overlay through Codex app-server developer instructions, so `openai/gpt-5.x` sessions forced through `embeddedHarness.runtime: "codex"` keep the same follow-through and proactive heartbeat guidance even though Codex owns the rest of the harness prompt.
 
 The GPT-5 contribution adds a tagged behavior contract for persona persistence, execution safety, tool discipline, output shape, completion checks, and verification. Channel-specific reply and silent-message behavior stays in the shared OpenClaw system prompt and outbound delivery policy. The GPT-5 guidance is always enabled for matching models. The friendly interaction-style layer is separate and configurable.
 
@@ -535,7 +535,7 @@ the Server-side compaction accordion below.
   agents: {
     defaults: {
       models: {
-        "openai-codex/gpt-5.5": {
+        "openai/gpt-5.5": {
           params: { transport: "auto" },
         },
       },
@@ -571,7 +571,7 @@ the Server-side compaction accordion below.
 </Accordion>
 
 <Accordion title="Fast mode">
-OpenClaw exposes a shared fast-mode toggle for both `openai/*` and `openai-codex/*`:
+OpenClaw exposes a shared fast-mode toggle for `openai/*`:
 
 - **Chat/UI:** `/fast status|on|off`
 - **Config:** `agents.defaults.models["<provider>/<model>"].params.fastMode`
@@ -584,7 +584,6 @@ the Server-side compaction accordion below.
     defaults: {
       models: {
         "openai/gpt-5.5": { params: { fastMode: true } },
-        "openai-codex/gpt-5.5": { params: { fastMode: true } },
       },
     },
   },
@@ -606,7 +605,6 @@ the Server-side compaction accordion below.
     defaults: {
       models: {
         "openai/gpt-5.5": { params: { serviceTier: "priority" } },
-        "openai-codex/gpt-5.5": { params: { serviceTier: "priority" } },
      },
    },
  },
@@ -688,7 +686,7 @@ the Server-side compaction accordion below.
 </Accordion>
 
 <Accordion title="Strict-agentic GPT mode">
-For GPT-5-family runs on `openai/*` and `openai-codex/*`, OpenClaw can use a stricter embedded execution contract:
+For GPT-5-family runs on `openai/*`, OpenClaw can use a stricter embedded execution contract:
 
 ```json5
 {
@@ -715,7 +713,7 @@ the Server-side compaction accordion below.
 <Accordion title="Native vs OpenAI-compatible routes">
 OpenClaw treats direct OpenAI, Codex, and Azure OpenAI endpoints differently from generic OpenAI-compatible `/v1` proxies:
 
-**Native routes** (`openai/*`, `openai-codex/*`, Azure OpenAI):
+**Native routes** (`openai/*`, Azure OpenAI):
 - Keep `reasoning: { effort: "none" }` only for models that support the OpenAI `none` effort
 - Omit disabled reasoning for models or proxies that reject `reasoning.effort: "none"`
 - Default tool schemas to strict mode
@@ -34,9 +34,9 @@ For a high-level overview, see [Onboarding (CLI)](/start/wizard).
 - **Anthropic API key**: preferred Anthropic assistant choice in onboarding/configure.
 - **Anthropic setup-token**: still available in onboarding/configure, though OpenClaw now prefers Claude CLI reuse when available.
 - **OpenAI Code (Codex) subscription (OAuth)**: browser flow; paste the `code#state`.
-  - Sets `agents.defaults.model` to `openai-codex/gpt-5.5` when model is unset or `openai/*`.
+  - Sets `agents.defaults.model` to `openai/gpt-5.5` when model is unset or already OpenAI-family.
 - **OpenAI Code (Codex) subscription (device pairing)**: browser pairing flow with a short-lived device code.
-  - Sets `agents.defaults.model` to `openai-codex/gpt-5.5` when model is unset or `openai/*`.
+  - Sets `agents.defaults.model` to `openai/gpt-5.5` when model is unset or already OpenAI-family.
 - **OpenAI API key**: uses `OPENAI_API_KEY` if present or prompts for a key, then stores it in auth profiles.
   - Sets `agents.defaults.model` to `openai/gpt-5.5` when model is unset, `openai/*`, or `openai-codex/*`.
 - **xAI (Grok) API key**: prompts for `XAI_API_KEY` and configures xAI as a model provider.
@@ -132,13 +132,13 @@ What you set:
 <Accordion title="OpenAI Code subscription (OAuth)">
 Browser flow; paste `code#state`.
 
-Sets `agents.defaults.model` to `openai-codex/gpt-5.5` when model is unset or `openai/*`.
+Sets `agents.defaults.model` to `openai/gpt-5.5` when model is unset or already OpenAI-family.
 
 </Accordion>
 <Accordion title="OpenAI Code subscription (device pairing)">
 Browser pairing flow with a short-lived device code.
 
-Sets `agents.defaults.model` to `openai-codex/gpt-5.5` when model is unset or `openai/*`.
+Sets `agents.defaults.model` to `openai/gpt-5.5` when model is unset or already OpenAI-family.
 
 </Accordion>
 <Accordion title="OpenAI API key">
@@ -55,7 +55,7 @@ without writing custom OpenClaw code for each workflow.
|
||||
"defaultProvider": "openai-codex",
|
||||
"defaultModel": "gpt-5.5",
|
||||
"defaultAuthProfileId": "main",
|
||||
"allowedModels": ["openai-codex/gpt-5.5"],
|
||||
"allowedModels": ["openai/gpt-5.5"],
|
||||
"maxTokens": 800,
|
||||
"timeoutMs": 30000
|
||||
}
|
||||
|
||||
@@ -24,7 +24,7 @@ codeRefs:
   - extensions/qa-lab/src/suite.ts
 execution:
   kind: flow
-  summary: Run with `pnpm openclaw qa suite --provider-mode live-frontier --model codex/gpt-5.5 --alt-model codex/gpt-5.5 --scenario codex-harness-no-meta-leak`.
+  summary: Run with `pnpm openclaw qa suite --provider-mode live-frontier --model openai/gpt-5.5 --alt-model openai/gpt-5.5 --scenario codex-harness-no-meta-leak`.
 config:
   requiredProvider: codex
   requiredModel: gpt-5.5
@@ -11,7 +11,7 @@ coverage:
   - models.codex-cli
 objective: Verify the Codex app-server harness can plan and build a medium-complex self-contained browser game.
 successCriteria:
-  - A live-frontier run fails fast unless the selected primary model is codex/gpt-5.5.
+  - A live-frontier run fails fast unless the selected primary model is openai/gpt-5.5 with the Codex harness forced.
   - The scenario forces the Codex embedded harness and disables PI fallback.
   - The prompt explicitly asks the agent to enter plan mode before editing.
   - The agent writes a self-contained HTML game with a canvas loop, controls, scoring, waves, pause, and restart.
@@ -25,7 +25,7 @@ codeRefs:
   - extensions/qa-lab/src/suite.ts
 execution:
   kind: flow
-  summary: Run with `pnpm openclaw qa suite --provider-mode live-frontier --model codex/gpt-5.5 --alt-model codex/gpt-5.5 --scenario medium-game-plan-codex-harness`.
+  summary: Run with `pnpm openclaw qa suite --provider-mode live-frontier --model openai/gpt-5.5 --alt-model openai/gpt-5.5 --scenario medium-game-plan-codex-harness`.
 config:
   requiredProvider: codex
   requiredModel: gpt-5.5
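
Taken together, the config hunks in this commit collapse the former parallel `openai/*` and `openai-codex/*` entries into one canonical ref. A minimal sketch of the resulting merged defaults, combining only the `transport`, `fastMode`, and `serviceTier` fields shown in the hunks above (not a complete config):

```json5
{
  agents: {
    defaults: {
      models: {
        // One canonical entry replaces the removed "openai-codex/gpt-5.5" duplicates
        "openai/gpt-5.5": {
          params: {
            transport: "auto",       // from the transport hunk
            fastMode: true,          // from the fast-mode hunk
            serviceTier: "priority", // from the service-tier hunk
          },
        },
      },
    },
  },
}
```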