docs: clarify provider attribution behavior

This commit is contained in:
Peter Steinberger
2026-04-04 09:27:31 +01:00
parent 73572e04c1
commit 1fcb2cfeb5
3 changed files with 19 additions and 1 deletion


@@ -197,6 +197,9 @@ OpenClaw ships with the piai catalog. These providers require **no**
- Default transport is `auto` (WebSocket-first, SSE fallback)
- Override per model via `agents.defaults.models["openai-codex/<model>"].params.transport` (`"sse"`, `"websocket"`, or `"auto"`)
- `params.serviceTier` is also forwarded on native Codex Responses requests (`chatgpt.com/backend-api`)
- Hidden OpenClaw attribution headers (`originator`, `version`,
`User-Agent`) are only attached on native Codex traffic to
`chatgpt.com/backend-api`, not to generic OpenAI-compatible proxies
- Shares the same `/fast` toggle and `params.fastMode` config as direct `openai/*`; OpenClaw maps that to `service_tier=priority`
- `openai-codex/gpt-5.3-codex-spark` remains available when the Codex OAuth catalog exposes it; availability is entitlement-dependent
- `openai-codex/gpt-5.4` keeps native `contextWindow = 1050000` and a default runtime `contextTokens = 272000`; override the runtime cap with `models.providers.openai-codex.models[].contextTokens`
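The transport override and runtime context cap above can live in one config fragment. A minimal sketch, assuming a JSON5-style config file: the key paths come from the bullets above, but the surrounding file shape and the field names inside the `models[]` entry (e.g. `id`) are assumptions, not documented OpenClaw schema:

```json5
{
  agents: {
    defaults: {
      models: {
        // force SSE instead of the WebSocket-first "auto" default
        "openai-codex/gpt-5.4": { params: { transport: "sse" } }
      }
    }
  },
  models: {
    providers: {
      "openai-codex": {
        // raise the runtime cap above the 272000 default
        models: [{ id: "gpt-5.4", contextTokens: 400000 }]
      }
    }
  }
}
```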
@@ -290,6 +293,8 @@ See [/providers/kilocode](/providers/kilocode) for setup details.
- OpenRouter: `openrouter` (`OPENROUTER_API_KEY`)
- Example model: `openrouter/auto`
- OpenClaw applies OpenRouter's documented app-attribution headers only when
the request actually targets `openrouter.ai`
- Kilo Gateway: `kilocode` (`KILOCODE_API_KEY`)
- Example model: `kilocode/anthropic/claude-opus-4.6`
- MiniMax: `minimax` (`MINIMAX_API_KEY`)
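The provider-to-env-var pairs listed above amount to a lookup table. A hypothetical helper (illustrative, not OpenClaw internals) that resolves a provider id to its API key:

```python
import os
from typing import Optional

# Provider ids and their API-key env vars, taken from the list above.
PROVIDER_ENV_KEYS = {
    "openrouter": "OPENROUTER_API_KEY",
    "kilocode": "KILOCODE_API_KEY",
    "minimax": "MINIMAX_API_KEY",
}

def resolve_api_key(provider: str) -> Optional[str]:
    """Return the API key for a provider id, or None if unknown/unset."""
    env_var = PROVIDER_ENV_KEYS.get(provider)
    return os.environ.get(env_var) if env_var else None
```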


@@ -358,8 +358,15 @@ from generic OpenAI-compatible `/v1` proxies:
- native `openai/*`, `openai-codex/*`, and Azure OpenAI routes keep
`reasoning: { effort: "none" }` intact when you explicitly disable reasoning
- native OpenAI-family routes default tool schemas to strict mode
- hidden OpenClaw attribution headers (`originator`, `version`, and
`User-Agent`) are only attached on verified native OpenAI hosts
(`api.openai.com`) and native Codex hosts (`chatgpt.com/backend-api`)
- proxy-style OpenAI-compatible routes keep the looser compat behavior and do
not force strict tool schemas, native-only request shaping, or hidden
OpenAI/Codex attribution headers
Azure OpenAI stays in the native-routing bucket for transport and compat
behavior, but it does not receive the hidden OpenAI/Codex attribution headers.
This preserves current native OpenAI Responses behavior without forcing older
OpenAI-compatible shims onto third-party `/v1` backends.
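The host gating described above can be sketched roughly as follows. The function name and host lists are hypothetical illustrations of the rule, not OpenClaw's actual code:

```python
from urllib.parse import urlparse

# Verified native hosts, per the docs above (illustrative constants).
NATIVE_OPENAI_HOSTS = {"api.openai.com"}
NATIVE_CODEX_HOSTS = {"chatgpt.com"}  # only under /backend-api

def attaches_attribution_headers(base_url: str) -> bool:
    """Hidden attribution headers go only to verified native hosts."""
    parsed = urlparse(base_url)
    if parsed.hostname in NATIVE_OPENAI_HOSTS:
        return True
    if parsed.hostname in NATIVE_CODEX_HOSTS and parsed.path.startswith("/backend-api"):
        return True
    # Generic /v1 proxies and Azure OpenAI get no hidden headers.
    return False
```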


@@ -37,3 +37,9 @@ openclaw onboard --auth-choice openrouter-api-key
`openclaw models set openrouter/<provider>/<model>`.
- For more model/provider options, see [/concepts/model-providers](/concepts/model-providers).
- OpenRouter uses a Bearer token with your API key under the hood.
- On real OpenRouter requests (`https://openrouter.ai/api/v1`), OpenClaw also
adds OpenRouter's documented app-attribution headers:
`HTTP-Referer: https://openclaw.ai`, `X-OpenRouter-Title: OpenClaw`, and
`X-OpenRouter-Categories: cli-agent`.
- If you repoint the OpenRouter provider at some other proxy/base URL, OpenClaw
does not inject those OpenRouter-specific headers.
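Putting the last two bullets together: the Bearer token is always sent, while the attribution headers are host-gated. A sketch of that selection (the helper is illustrative; the header names and values are from the docs above):

```python
from urllib.parse import urlparse

# App-attribution headers documented above; sent only to real OpenRouter.
OPENROUTER_ATTRIBUTION = {
    "HTTP-Referer": "https://openclaw.ai",
    "X-OpenRouter-Title": "OpenClaw",
    "X-OpenRouter-Categories": "cli-agent",
}

def openrouter_headers(base_url: str, api_key: str) -> dict:
    """Build request headers for the OpenRouter provider."""
    headers = {"Authorization": f"Bearer {api_key}"}  # always present
    if urlparse(base_url).hostname == "openrouter.ai":
        headers.update(OPENROUTER_ATTRIBUTION)  # skipped for custom base URLs
    return headers
```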