* docs(msteams): add Teams CLI setup instructions
Replace manual Azure Bot setup as primary path with
@microsoft/teams.cli workflow. Manual steps collapsed
into <details> blocks for users who can't use the CLI.
* docs(msteams): fix devtunnel instructions to use persistent tunnels
Use devtunnel create + host for stable URLs across sessions
instead of throwaway tunnels that change each time.
* docs(msteams): address PR feedback
- Remove "Abandon all hope" quote (showed up as a net addition in the diff)
- Add preview disclaimer for @microsoft/teams.cli
- Add security note for --allow-anonymous devtunnel flag
- Clarify where to find teamsAppId from create output
- Link to official devtunnel getting started guide
* docs(msteams): fix oxfmt formatting
* docs(msteams): clarify install step references create prompt
* docs(msteams): drop --env flag, use terminal output instead
Avoids writing secrets to a file that could be accidentally committed.
* docs(msteams): remove redundant H1, match other channel docs
mushuiyu_xydt's commit 0e1ef93e84 (#61155) routes MiniMax image
generation requests to the dedicated image endpoint
(api.minimax.io/v1/image_generation), ignoring models.providers.minimax.baseUrl
(which targets the chat/Anthropic-compatible API), and adds
MINIMAX_API_HOST support for the CN api.minimaxi.com endpoint. The
CHANGELOG entry covered it, but the docs/providers/minimax.md image-generation
section did not. Add a paragraph naming both endpoints and the
MINIMAX_API_HOST override.
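The routing described above can be sketched as follows. The two hosts and the MINIMAX_API_HOST override come from the commit message; the function name `resolveImageEndpoint` and its parameter shape are illustrative assumptions, not the actual implementation in 0e1ef93e84.

```typescript
// Sketch of the image-endpoint selection this commit describes.
// resolveImageEndpoint is a hypothetical name, not the real code.
function resolveImageEndpoint(env: { MINIMAX_API_HOST?: string }): string {
  // MINIMAX_API_HOST overrides the default host, e.g. for the CN endpoint
  // https://api.minimaxi.com; otherwise the international host is used.
  const host = env.MINIMAX_API_HOST ?? "https://api.minimax.io";
  // Image requests always target the dedicated image endpoint and
  // deliberately ignore models.providers.minimax.baseUrl (the chat API).
  return `${host.replace(/\/+$/, "")}/v1/image_generation`;
}
```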
* docs(browser): note tilde expansion also covers per-profile paths
The 95a2c9b fix expanded "~" for both `browser.executablePath` and
per-profile `profiles.<name>.executablePath` (config.ts:382 calls
`normalizeExecutablePath` for profile overrides). Per-profile
`userDataDir` on existing-session profiles is also tilde-expanded
(config.ts:391 via `resolveUserPath`). The configuration reference
only mentioned the top-level `browser.executablePath` case.
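The expansion behavior being documented can be sketched as below; the real helpers are `normalizeExecutablePath` / `resolveUserPath` in the browser extension's config.ts, and this standalone `expandTilde` is only an assumption about their shared behavior.

```typescript
import os from "node:os";
import path from "node:path";

// Illustrative tilde expansion for executablePath and userDataDir values.
// Not the actual normalizeExecutablePath/resolveUserPath implementation.
function expandTilde(p: string): string {
  if (p === "~") return os.homedir();
  if (p.startsWith("~/")) {
    return path.join(os.homedir(), p.slice(2));
  }
  return p; // absolute and relative paths pass through unchanged
}
```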
* docs(browser): align tilde path config help
---------
Co-authored-by: Peter Steinberger <steipete@gmail.com>
* docs(browser): document local startup timeout bounds
The new browser.localLaunchTimeoutMs and browser.localCdpReadyTimeoutMs
options are clamped to MAX_BROWSER_STARTUP_TIMEOUT_MS (120000 ms) by
normalizeStartupTimeoutMs in extensions/browser/src/browser/config.ts,
and zero/negative/non-finite values fall back to the defaults. Without
this in the configuration reference, users setting a higher value see
no error and silently get the 120 s ceiling, or set 0 expecting 'no
timeout' and silently get the default.
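The clamping rules stated above (120000 ms ceiling, defaults for zero/negative/non-finite) can be sketched as a small pure function; this is a standalone illustration under those stated rules, not the actual normalizeStartupTimeoutMs from extensions/browser/src/browser/config.ts.

```typescript
// Sketch of the documented clamping behavior for
// browser.localLaunchTimeoutMs / browser.localCdpReadyTimeoutMs.
const MAX_BROWSER_STARTUP_TIMEOUT_MS = 120_000;

function normalizeStartupTimeoutMs(value: unknown, defaultMs: number): number {
  // Zero, negative, NaN, Infinity, and non-numbers fall back to the default,
  // rather than meaning "no timeout".
  if (typeof value !== "number" || !Number.isFinite(value) || value <= 0) {
    return defaultMs;
  }
  // Larger values are silently clamped to the 120 s ceiling; no error is raised.
  return Math.min(value, MAX_BROWSER_STARTUP_TIMEOUT_MS);
}
```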
* docs(browser): clarify startup timeout validation
---------
Co-authored-by: Peter Steinberger <steipete@gmail.com>
Five recent diagnostics-otel feat commits added user-facing OpenTelemetry
surfaces but did not update docs/logging.md, so the listed metrics and
spans drifted out of sync with what the plugin actually exports:
- 7bbd47349e adds gen_ai.client.token.usage histogram (GenAI semconv)
- b8a41739d5 adds memory heap/rss histograms, pressure counter and span
- d6ef1fcf24 adds openclaw.tool.loop counters and span
- ff172f46a5 adds openclaw.context.assembled span
- 44114328b4 adds openclaw.provider.request_id_hash attr on
openclaw.model.call spans
Append the new metrics under existing model-usage and exec sections,
add a 'Diagnostics internals' subsection for memory + tool-loop
metrics, and add the three new spans (context.assembled, tool.loop,
memory.pressure) plus the request-id-hash attribute to the spans
listing.
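For the request-id-hash attribute specifically, the idea is that spans carry a hash rather than the raw provider request id. The commit message does not specify the hashing scheme; SHA-256 truncated to 16 hex characters below is purely an assumption used to illustrate the pattern.

```typescript
import { createHash } from "node:crypto";

// Illustrative only: 44114328b4 adds an openclaw.provider.request_id_hash
// attribute to openclaw.model.call spans. The actual algorithm and
// truncation are not specified; SHA-256/16 hex chars is an assumption.
function requestIdHash(requestId: string): string {
  return createHash("sha256").update(requestId).digest("hex").slice(0, 16);
}
```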
* feat(litellm): add image generation provider
Registers litellm as an image-generation provider so model refs like
litellm/gpt-image-2 route through the LiteLLM proxy, and
agents.defaults.imageGenerationModel.fallbacks entries of the form
litellm/... resolve without "No image-generation provider registered
for litellm" errors.
Implementation uses the OpenAI-compatible /images/generations and
/images/edits endpoints that LiteLLM proxies. The base URL resolves from
models.providers.litellm.baseUrl (default http://localhost:4000). Private
network is auto-allowed when baseUrl is a loopback/RFC1918 address, which
covers the common self-hosted LiteLLM proxy case without needing
OPENCLAW_PROVIDER_ALLOW_PRIVATE_NETWORK. Public baseUrls keep normal SSRF
defaults.
Default model is gpt-image-2 (matching upstream 4.21+ OpenAI default).
Advertises the same 2K/4K sizes OpenAI now exposes, plus legacy
256/512/1024 for dall-e-3. Supports both generate and edit.
Local patch. LiteLLM has no upstream image-generation support yet; revisit
if upstream adds one.
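The loopback/RFC1918 auto-allow check described above can be sketched like this; `isPrivateBaseUrl` is a hypothetical name, and a real implementation would also need to handle IPv6 ranges, link-local addresses, and DNS rebinding concerns that the normal SSRF defaults cover.

```typescript
// Sketch of the private-network auto-allow decision for litellm baseUrls.
// Hypothetical helper, not the actual provider code.
function isPrivateBaseUrl(baseUrl: string): boolean {
  const host = new URL(baseUrl).hostname;
  if (host === "localhost" || host === "[::1]") return true;
  const m = host.match(/^(\d+)\.(\d+)\.\d+\.\d+$/);
  if (!m) return false; // non-IPv4 hostnames keep normal SSRF defaults
  const [a, b] = [Number(m[1]), Number(m[2])];
  return (
    a === 127 ||                         // loopback 127.0.0.0/8
    a === 10 ||                          // RFC1918 10.0.0.0/8
    (a === 172 && b >= 16 && b <= 31) || // RFC1918 172.16.0.0/12
    (a === 192 && b === 168)             // RFC1918 192.168.0.0/16
  );
}
```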
* ci: rerun after upstream main hot-fix
* fix(litellm): harden image generation provider
---------
Co-authored-by: Chris Zhang <chris@ChrisdeMac-mini.local>
Co-authored-by: Peter Steinberger <steipete@gmail.com>
Val Alexander's c65aa1d2a6 (#71639) changed assistant avatar uploads
from gateway config persistence to localStorage, mirroring the existing
user-avatar pattern. The CHANGELOG covered it, but the docs/web/control-ui.md
'Personal identity (browser-local)' section documented only the user
identity. Add a paragraph noting that the assistant avatar override follows
the same browser-local pattern, while keeping the ui.assistant.avatar
config field reachable for non-UI clients writing the field directly.
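The browser-local override pattern being documented amounts to: read the avatar from localStorage first, fall back to the `ui.assistant.avatar` config field that non-UI clients still write directly. The storage key and function name below are assumptions for illustration, not the actual c65aa1d2a6 code.

```typescript
// Sketch of the browser-local avatar resolution order. The caller would pass
// e.g. localStorage.getItem("assistantAvatar") as `stored` (key is assumed).
function resolveAssistantAvatar(
  stored: string | null,            // browser-local override, if any
  configAvatar: string | undefined, // ui.assistant.avatar from gateway config
): string | undefined {
  return stored ?? configAvatar;
}
```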