docs: clarify codex harness validation

This commit is contained in:
Peter Steinberger
2026-04-11 00:11:39 +01:00
parent 9ac7a03982
commit 46a6746bca
4 changed files with 10 additions and 1 deletion


@@ -438,6 +438,9 @@ Docker notes:
- Docker enables the image and MCP/tool probes by default. Set
`OPENCLAW_LIVE_CODEX_HARNESS_IMAGE_PROBE=0` or
`OPENCLAW_LIVE_CODEX_HARNESS_MCP_PROBE=0` when you need a narrower debug run.
- Docker also exports `OPENCLAW_AGENT_HARNESS_FALLBACK=none`, matching the live
test config so `openai-codex/*` or PI fallback cannot hide a Codex harness
regression.
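A narrower debug run as described above might look like this (a minimal sketch; the variable names come from the notes above, and the `echo` is only illustrative):

```shell
# Disable the optional probes that Docker enables by default
# (0 disables a probe; unset/1 leaves it on).
export OPENCLAW_LIVE_CODEX_HARNESS_IMAGE_PROBE=0
export OPENCLAW_LIVE_CODEX_HARNESS_MCP_PROBE=0

# Docker already exports this; shown here only for a non-Docker shell,
# so openai-codex/* or PI fallback cannot mask a Codex harness failure.
export OPENCLAW_AGENT_HARNESS_FALLBACK=none

# Confirm the narrowed configuration before launching the live run.
echo "image_probe=$OPENCLAW_LIVE_CODEX_HARNESS_IMAGE_PROBE mcp_probe=$OPENCLAW_LIVE_CODEX_HARNESS_MCP_PROBE fallback=$OPENCLAW_AGENT_HARNESS_FALLBACK"
```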
### Recommended live recipes


@@ -452,7 +452,9 @@ continue through the normal OpenClaw delivery path.
When the selected model uses the Codex harness, native thread compaction is
delegated to Codex app-server. OpenClaw keeps a transcript mirror for channel
history, search, `/new`, `/reset`, and future model or harness switching. The
mirror includes the user prompt, final assistant text, and lightweight Codex
reasoning or plan records when the app-server emits them.
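The transcript mirror described above could be modeled roughly as follows. This is a sketch under stated assumptions: the type and function names are hypothetical illustrations, not OpenClaw's actual schema or API.

```typescript
// Hypothetical shape of one mirror record; the role names mirror the
// record kinds named in the docs (user prompt, final assistant text,
// lightweight Codex reasoning/plan records).
type MirrorEntry = {
  role: "user" | "assistant" | "codex-reasoning" | "codex-plan";
  text: string;
};

// Illustrative helper: mirror one turn, keeping only what channel
// history, search, /new, and /reset need. Reasoning and plan records
// are added only when the app-server emitted them.
function mirrorTurn(
  prompt: string,
  finalText: string,
  reasoning?: string,
  plan?: string,
): MirrorEntry[] {
  const entries: MirrorEntry[] = [
    { role: "user", text: prompt },
    { role: "assistant", text: finalText },
  ];
  if (reasoning !== undefined) {
    entries.push({ role: "codex-reasoning", text: reasoning });
  }
  if (plan !== undefined) {
    entries.push({ role: "codex-plan", text: plan });
  }
  return entries;
}
```

The point of the sketch is that compaction itself stays in Codex app-server; the mirror only retains enough per-turn records for history, search, and harness switching.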
Media generation does not require PI. Image, video, music, PDF, TTS, and media
understanding continue to use the matching provider/model settings such as