mirror of
https://github.com/moltbot/moltbot.git
synced 2026-04-25 23:47:20 +00:00
feat(openai): add gpt-5.4 support for API and Codex OAuth (#36590)
* feat(openai): add gpt-5.4 support and priority processing
* feat(openai-codex): add gpt-5.4 oauth support
* fix(openai): preserve provider overrides in gpt-5.4 fallback
* fix(openai-codex): keep xhigh for gpt-5.4 default
* fix(models): preserve configured overrides in list output
* fix(models): close gpt-5.4 integration gaps
* fix(openai): scope service tier to public api
* fix(openai): complete prep followups for gpt-5.4 support (#36590) (thanks @dorukardahan)

---------

Co-authored-by: Tyler Yust <TYTYYUST@YAHOO.COM>
@@ -41,15 +41,16 @@ OpenClaw ships with the pi‑ai catalog. These providers require **no**
 - Provider: `openai`
 - Auth: `OPENAI_API_KEY`
 - Optional rotation: `OPENAI_API_KEYS`, `OPENAI_API_KEY_1`, `OPENAI_API_KEY_2`, plus `OPENCLAW_LIVE_OPENAI_KEY` (single override)
-- Example model: `openai/gpt-5.1-codex`
+- Example models: `openai/gpt-5.4`, `openai/gpt-5.4-pro`
 - CLI: `openclaw onboard --auth-choice openai-api-key`
 - Default transport is `auto` (WebSocket-first, SSE fallback)
 - Override per model via `agents.defaults.models["openai/<model>"].params.transport` (`"sse"`, `"websocket"`, or `"auto"`)
 - OpenAI Responses WebSocket warm-up defaults to enabled via `params.openaiWsWarmup` (`true`/`false`)
+- OpenAI priority processing can be enabled via `agents.defaults.models["openai/<model>"].params.serviceTier`

 ```json5
 {
-  agents: { defaults: { model: { primary: "openai/gpt-5.1-codex" } } },
+  agents: { defaults: { model: { primary: "openai/gpt-5.4" } } },
 }
 ```

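The per-model transport override named in the bullet list can be sketched as a config fragment (an illustrative sketch; the nesting mirrors the `agents.defaults.models` examples used elsewhere in these docs):

```json5
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.4": {
          params: {
            // One of "sse", "websocket", or "auto" (the default).
            transport: "websocket",
          },
        },
      },
    },
  },
}
```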
@@ -73,7 +74,7 @@ OpenClaw ships with the pi‑ai catalog. These providers require **no**

 - Provider: `openai-codex`
 - Auth: OAuth (ChatGPT)
-- Example model: `openai-codex/gpt-5.3-codex`
+- Example model: `openai-codex/gpt-5.4`
 - CLI: `openclaw onboard --auth-choice openai-codex` or `openclaw models auth login --provider openai-codex`
 - Default transport is `auto` (WebSocket-first, SSE fallback)
 - Override per model via `agents.defaults.models["openai-codex/<model>"].params.transport` (`"sse"`, `"websocket"`, or `"auto"`)
@@ -81,7 +82,7 @@ OpenClaw ships with the pi‑ai catalog. These providers require **no**

 ```json5
 {
-  agents: { defaults: { model: { primary: "openai-codex/gpt-5.3-codex" } } },
+  agents: { defaults: { model: { primary: "openai-codex/gpt-5.4" } } },
 }
 ```

@@ -31,7 +31,7 @@ openclaw agent --message "hi" --model claude-cli/opus-4.6
 Codex CLI also works out of the box:

 ```bash
-openclaw agent --message "hi" --model codex-cli/gpt-5.3-codex
+openclaw agent --message "hi" --model codex-cli/gpt-5.4
 ```

 If your gateway runs under launchd/systemd and PATH is minimal, add just the
@@ -767,7 +767,7 @@ Yes - via pi-ai's **Amazon Bedrock (Converse)** provider with **manual config**.

 ### How does Codex auth work

-OpenClaw supports **OpenAI Code (Codex)** via OAuth (ChatGPT sign-in). The wizard can run the OAuth flow and will set the default model to `openai-codex/gpt-5.3-codex` when appropriate. See [Model providers](/concepts/model-providers) and [Wizard](/start/wizard).
+OpenClaw supports **OpenAI Code (Codex)** via OAuth (ChatGPT sign-in). The wizard can run the OAuth flow and will set the default model to `openai-codex/gpt-5.4` when appropriate. See [Model providers](/concepts/model-providers) and [Wizard](/start/wizard).

 ### Do you support OpenAI subscription auth Codex OAuth
@@ -2156,8 +2156,8 @@ Use `/model status` to confirm which auth profile is active.

 Yes. Set one as default and switch as needed:

-- **Quick switch (per session):** `/model gpt-5.2` for daily tasks, `/model gpt-5.3-codex` for coding.
-- **Default + switch:** set `agents.defaults.model.primary` to `openai/gpt-5.2`, then switch to `openai-codex/gpt-5.3-codex` when coding (or the other way around).
+- **Quick switch (per session):** `/model gpt-5.2` for daily tasks, `/model openai-codex/gpt-5.4` for coding with Codex OAuth.
+- **Default + switch:** set `agents.defaults.model.primary` to `openai/gpt-5.2`, then switch to `openai-codex/gpt-5.4` when coding (or the other way around).
 - **Sub-agents:** route coding tasks to sub-agents with a different default model.

 See [Models](/concepts/models) and [Slash commands](/tools/slash-commands).
@@ -222,7 +222,7 @@ OPENCLAW_LIVE_SETUP_TOKEN=1 OPENCLAW_LIVE_SETUP_TOKEN_PROFILE=anthropic:setup-to
 - Args: `["-p","--output-format","json","--permission-mode","bypassPermissions"]`
 - Overrides (optional):
 - `OPENCLAW_LIVE_CLI_BACKEND_MODEL="claude-cli/claude-opus-4-6"`
-- `OPENCLAW_LIVE_CLI_BACKEND_MODEL="codex-cli/gpt-5.3-codex"`
+- `OPENCLAW_LIVE_CLI_BACKEND_MODEL="codex-cli/gpt-5.4"`
 - `OPENCLAW_LIVE_CLI_BACKEND_COMMAND="/full/path/to/claude"`
 - `OPENCLAW_LIVE_CLI_BACKEND_ARGS='["-p","--output-format","json","--permission-mode","bypassPermissions"]'`
 - `OPENCLAW_LIVE_CLI_BACKEND_CLEAR_ENV='["ANTHROPIC_API_KEY","ANTHROPIC_API_KEY_OLD"]'`
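The override variables above are plain environment variables whose `_ARGS`-style values are JSON arrays. A small sketch (the `python3` sanity check is illustrative, not part of OpenClaw):

```shell
# Hypothetical sketch: set the backend overrides as environment variables.
OPENCLAW_LIVE_CLI_BACKEND_MODEL="codex-cli/gpt-5.4"
OPENCLAW_LIVE_CLI_BACKEND_ARGS='["-p","--output-format","json","--permission-mode","bypassPermissions"]'
# The ARGS value must parse as a JSON array; count its elements as a check.
count=$(printf '%s' "$OPENCLAW_LIVE_CLI_BACKEND_ARGS" | python3 -c 'import json,sys; print(len(json.load(sys.stdin)))')
echo "$count"
```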
@@ -275,7 +275,7 @@ There is no fixed “CI model list” (live is opt-in), but these are the **reco
 This is the “common models” run we expect to keep working:

 - OpenAI (non-Codex): `openai/gpt-5.2` (optional: `openai/gpt-5.1`)
-- OpenAI Codex: `openai-codex/gpt-5.3-codex` (optional: `openai-codex/gpt-5.3-codex-codex`)
+- OpenAI Codex: `openai-codex/gpt-5.4`
 - Anthropic: `anthropic/claude-opus-4-6` (or `anthropic/claude-sonnet-4-5`)
 - Google (Gemini API): `google/gemini-3-pro-preview` and `google/gemini-3-flash-preview` (avoid older Gemini 2.x models)
 - Google (Antigravity): `google-antigravity/claude-opus-4-6-thinking` and `google-antigravity/gemini-3-flash`
@@ -283,7 +283,7 @@ This is the “common models” run we expect to keep working:
 - MiniMax: `minimax/minimax-m2.5`

 Run gateway smoke with tools + image:
-`OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2,openai-codex/gpt-5.3-codex,anthropic/claude-opus-4-6,google/gemini-3-pro-preview,google/gemini-3-flash-preview,google-antigravity/claude-opus-4-6-thinking,google-antigravity/gemini-3-flash,zai/glm-4.7,minimax/minimax-m2.5" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`
+`OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2,openai-codex/gpt-5.4,anthropic/claude-opus-4-6,google/gemini-3-pro-preview,google/gemini-3-flash-preview,google-antigravity/claude-opus-4-6-thinking,google-antigravity/gemini-3-flash,zai/glm-4.7,minimax/minimax-m2.5" pnpm test:live src/gateway/gateway-models.profiles.live.test.ts`

 ### Baseline: tool calling (Read + optional Exec)

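As the smoke command shows, `OPENCLAW_LIVE_GATEWAY_MODELS` is a single comma-separated list of `provider/model` ids. A minimal sketch of how such a list splits into entries (the three-model list here is illustrative, not the recommended set):

```shell
# Sketch: split a comma-separated model list into one entry per line.
OPENCLAW_LIVE_GATEWAY_MODELS="openai/gpt-5.2,openai-codex/gpt-5.4,anthropic/claude-opus-4-6"
echo "$OPENCLAW_LIVE_GATEWAY_MODELS" | tr ',' '\n'
```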
@@ -30,10 +30,13 @@ openclaw onboard --openai-api-key "$OPENAI_API_KEY"
 ```json5
 {
   env: { OPENAI_API_KEY: "sk-..." },
-  agents: { defaults: { model: { primary: "openai/gpt-5.2" } } },
+  agents: { defaults: { model: { primary: "openai/gpt-5.4" } } },
 }
 ```

+OpenAI's current API model docs list `gpt-5.4` and `gpt-5.4-pro` for direct
+OpenAI API usage. OpenClaw forwards both through the `openai/*` Responses path.
+
 ## Option B: OpenAI Code (Codex) subscription

 **Best for:** using ChatGPT/Codex subscription access instead of an API key.
@@ -53,10 +56,13 @@ openclaw models auth login --provider openai-codex

 ```json5
 {
-  agents: { defaults: { model: { primary: "openai-codex/gpt-5.3-codex" } } },
+  agents: { defaults: { model: { primary: "openai-codex/gpt-5.4" } } },
 }
 ```

+OpenAI's current Codex docs list `gpt-5.4` as the current Codex model. OpenClaw
+maps that to `openai-codex/gpt-5.4` for ChatGPT/Codex OAuth usage.
+
 ### Transport default

 OpenClaw uses `pi-ai` for model streaming. For both `openai/*` and
@@ -81,9 +87,9 @@ Related OpenAI docs:
 {
   agents: {
     defaults: {
-      model: { primary: "openai-codex/gpt-5.3-codex" },
+      model: { primary: "openai-codex/gpt-5.4" },
       models: {
-        "openai-codex/gpt-5.3-codex": {
+        "openai-codex/gpt-5.4": {
           params: {
             transport: "auto",
           },
@@ -106,7 +112,7 @@ OpenAI docs describe warm-up as optional. OpenClaw enables it by default for
   agents: {
     defaults: {
       models: {
-        "openai/gpt-5.2": {
+        "openai/gpt-5.4": {
           params: {
             openaiWsWarmup: false,
           },
@@ -124,7 +130,7 @@ OpenAI docs describe warm-up as optional. OpenClaw enables it by default for
   agents: {
     defaults: {
       models: {
-        "openai/gpt-5.2": {
+        "openai/gpt-5.4": {
           params: {
             openaiWsWarmup: true,
           },
@@ -135,6 +141,30 @@ OpenAI docs describe warm-up as optional. OpenClaw enables it by default for
 }
 ```

+### OpenAI priority processing
+
+OpenAI's API exposes priority processing via `service_tier=priority`. In
+OpenClaw, set `agents.defaults.models["openai/<model>"].params.serviceTier` to
+pass that field through on direct `openai/*` Responses requests.
+
+```json5
+{
+  agents: {
+    defaults: {
+      models: {
+        "openai/gpt-5.4": {
+          params: {
+            serviceTier: "priority",
+          },
+        },
+      },
+    },
+  },
+}
+```
+
+Supported values are `auto`, `default`, `flex`, and `priority`.
+
 ### OpenAI Responses server-side compaction

 For direct OpenAI Responses models (`openai/*` using `api: "openai-responses"` with
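Given the supported-values list, the same knob would presumably take `flex` for lower-priority work (an illustrative sketch based only on that list, not a documented recommendation):

```json5
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.4": {
          params: {
            // Assumption: "flex" trades latency for cost, per OpenAI's
            // service_tier values; only the value name comes from the docs above.
            serviceTier: "flex",
          },
        },
      },
    },
  },
}
```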
@@ -157,7 +187,7 @@ Responses models (for example Azure OpenAI Responses):
   agents: {
     defaults: {
       models: {
-        "azure-openai-responses/gpt-5.2": {
+        "azure-openai-responses/gpt-5.4": {
           params: {
             responsesServerCompaction: true,
           },
@@ -175,7 +205,7 @@ Responses models (for example Azure OpenAI Responses):
   agents: {
     defaults: {
       models: {
-        "openai/gpt-5.2": {
+        "openai/gpt-5.4": {
           params: {
             responsesServerCompaction: true,
             responsesCompactThreshold: 120000,
@@ -194,7 +224,7 @@ Responses models (for example Azure OpenAI Responses):
   agents: {
     defaults: {
       models: {
-        "openai/gpt-5.2": {
+        "openai/gpt-5.4": {
           params: {
             responsesServerCompaction: false,
           },
@@ -143,7 +143,7 @@ What you set:
 <Accordion title="OpenAI Code subscription (OAuth)">
 Browser flow; paste `code#state`.

-Sets `agents.defaults.model` to `openai-codex/gpt-5.3-codex` when model is unset or `openai/*`.
+Sets `agents.defaults.model` to `openai-codex/gpt-5.4` when model is unset or `openai/*`.

 </Accordion>
 <Accordion title="OpenAI API key">
@@ -53,9 +53,9 @@ without writing custom OpenClaw code for each workflow.
 "enabled": true,
 "config": {
   "defaultProvider": "openai-codex",
-  "defaultModel": "gpt-5.2",
+  "defaultModel": "gpt-5.4",
   "defaultAuthProfileId": "main",
-  "allowedModels": ["openai-codex/gpt-5.3-codex"],
+  "allowedModels": ["openai-codex/gpt-5.4"],
   "maxTokens": 800,
   "timeoutMs": 30000
 }