docs(providers): improve opencode, glm, runway, perplexity-provider, vercel-ai-gateway with Mintlify components

This commit is contained in:
Vincent Koc
2026-04-12 11:34:59 +01:00
parent 0d9eca0e1a
commit 7de76ac6e3
5 changed files with 413 additions and 135 deletions

View File

@@ -11,26 +11,42 @@ title: "GLM Models"
GLM is a **model family** (not a company) available through the Z.AI platform. In OpenClaw, GLM
models are accessed via the `zai` provider and model IDs like `zai/glm-5`.
## Getting started
<Steps>
<Step title="Choose an auth route and run onboarding">
Pick the onboarding choice that matches your Z.AI plan and region:
| Auth choice | Best for |
| ----------- | -------- |
| `zai-api-key` | Generic API-key setup with endpoint auto-detection |
| `zai-coding-global` | Coding Plan users (global) |
| `zai-coding-cn` | Coding Plan users (China region) |
| `zai-global` | General API (global) |
| `zai-cn` | General API (China region) |
```bash
# Example: generic auto-detect
openclaw onboard --auth-choice zai-api-key
# Example: Coding Plan global
openclaw onboard --auth-choice zai-coding-global
```
</Step>
<Step title="Set GLM as the default model">
```bash
openclaw config set agents.defaults.model.primary "zai/glm-5.1"
```
</Step>
<Step title="Verify models are available">
```bash
openclaw models list --provider zai
```
</Step>
</Steps>
## Config example
```json5
{
@@ -39,30 +55,56 @@ openclaw onboard --auth-choice zai-cn
}
```
<Tip>
`zai-api-key` lets OpenClaw detect the matching Z.AI endpoint from the key and
apply the correct base URL automatically. Use the explicit regional choices when
you want to force a specific Coding Plan or general API surface.
</Tip>
## Bundled GLM models
OpenClaw currently seeds the bundled `zai` provider with these GLM refs:
| Model | Model |
| --------------- | ---------------- |
| `glm-5.1` | `glm-4.7` |
| `glm-5` | `glm-4.7-flash` |
| `glm-5-turbo` | `glm-4.7-flashx` |
| `glm-5v-turbo` | `glm-4.6` |
| `glm-4.5` | `glm-4.6v` |
| `glm-4.5-air` | |
| `glm-4.5-flash` | |
| `glm-4.5v` | |
<Note>
The default bundled model ref is `zai/glm-5.1`. GLM versions and availability
can change; check Z.AI's docs for the latest.
</Note>
## Advanced notes
<AccordionGroup>
<Accordion title="Endpoint auto-detection">
When you use the `zai-api-key` auth choice, OpenClaw inspects the key format
to determine the correct Z.AI base URL. Explicit regional choices
(`zai-coding-global`, `zai-coding-cn`, `zai-global`, `zai-cn`) override
auto-detection and pin the endpoint directly.
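As a sketch of this selection behavior (illustrative only, not OpenClaw source), the choice-to-endpoint mapping looks like:

```shell
# Illustrative sketch: which auth choices pin the Z.AI endpoint
# and which leave it to key-based auto-detection.
endpoint_mode() {
  case "$1" in
    zai-api-key) echo "auto-detect" ;;
    zai-coding-global|zai-coding-cn|zai-global|zai-cn) echo "pinned" ;;
    *) echo "unknown" ;;
  esac
}
```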
</Accordion>
<Accordion title="Provider details">
GLM models are served by the `zai` runtime provider. For full provider
configuration, regional endpoints, and additional capabilities, see
[Z.AI provider docs](/providers/zai).
</Accordion>
</AccordionGroup>
## Related
<CardGroup cols={2}>
<Card title="Z.AI provider" href="/providers/zai" icon="server">
Full Z.AI provider configuration and regional endpoints.
</Card>
<Card title="Model selection" href="/concepts/model-providers" icon="layers">
Choosing providers, model refs, and failover behavior.
</Card>
</CardGroup>

View File

@@ -10,30 +10,78 @@ title: "OpenCode"
OpenCode exposes two hosted catalogs in OpenClaw:
| Catalog | Prefix | Runtime provider |
| ------- | ----------------- | ---------------- |
| **Zen** | `opencode/...` | `opencode` |
| **Go** | `opencode-go/...` | `opencode-go` |
Both catalogs use the same OpenCode API key. OpenClaw keeps the runtime provider ids
split so upstream per-model routing stays correct, but onboarding and docs treat them
as one OpenCode setup.
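The prefix-to-provider split described above can be sketched as (illustrative only, not OpenClaw source):

```shell
# Illustrative sketch: a model ref's catalog prefix determines
# its runtime provider id.
runtime_provider_of() {
  case "$1" in
    opencode-go/*) echo "opencode-go" ;;  # Go catalog
    opencode/*)    echo "opencode" ;;     # Zen catalog
    *)             echo "unknown" ;;
  esac
}
```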
## Getting started
<Tabs>
<Tab title="Zen catalog">
**Best for:** the curated OpenCode multi-model proxy (Claude, GPT, Gemini).
<Steps>
<Step title="Run onboarding">
```bash
openclaw onboard --auth-choice opencode-zen
```
Or pass the key directly:
```bash
openclaw onboard --opencode-zen-api-key "$OPENCODE_API_KEY"
```
</Step>
<Step title="Set a Zen model as the default">
```bash
openclaw config set agents.defaults.model.primary "opencode/claude-opus-4-6"
```
</Step>
<Step title="Verify models are available">
```bash
openclaw models list --provider opencode
```
</Step>
</Steps>
</Tab>
<Tab title="Go catalog">
**Best for:** the OpenCode-hosted Kimi, GLM, and MiniMax lineup.
<Steps>
<Step title="Run onboarding">
```bash
openclaw onboard --auth-choice opencode-go
```
Or pass the key directly:
```bash
openclaw onboard --opencode-go-api-key "$OPENCODE_API_KEY"
```
</Step>
<Step title="Set a Go model as the default">
```bash
openclaw config set agents.defaults.model.primary "opencode-go/kimi-k2.5"
```
</Step>
<Step title="Verify models are available">
```bash
openclaw models list --provider opencode-go
```
</Step>
</Steps>
</Tab>
</Tabs>
## Config example
```json5
{
@@ -46,23 +94,58 @@ openclaw onboard --opencode-go-api-key "$OPENCODE_API_KEY"
### Zen
| Property | Value |
| ---------------- | ----------------------------------------------------------------------- |
| Runtime provider | `opencode` |
| Example models | `opencode/claude-opus-4-6`, `opencode/gpt-5.4`, `opencode/gemini-3-pro` |
### Go
| Property | Value |
| ---------------- | ------------------------------------------------------------------------ |
| Runtime provider | `opencode-go` |
| Example models | `opencode-go/kimi-k2.5`, `opencode-go/glm-5`, `opencode-go/minimax-m2.5` |
## Advanced notes
<AccordionGroup>
<Accordion title="API key aliases">
`OPENCODE_ZEN_API_KEY` is also supported as an alias for `OPENCODE_API_KEY`.
</Accordion>
<Accordion title="Shared credentials">
Entering one OpenCode key during setup stores credentials for both runtime
providers. You do not need to onboard each catalog separately.
</Accordion>
<Accordion title="Billing and dashboard">
You sign in to OpenCode, add billing details, and copy your API key. Billing
and catalog availability are managed from the OpenCode dashboard.
</Accordion>
<Accordion title="Gemini replay behavior">
Gemini-backed OpenCode refs stay on the proxy-Gemini path, so OpenClaw keeps
Gemini thought-signature sanitation there without enabling native Gemini
replay validation or bootstrap rewrites.
</Accordion>
<Accordion title="Non-Gemini replay behavior">
Non-Gemini OpenCode refs keep the minimal OpenAI-compatible replay policy.
</Accordion>
</AccordionGroup>
## Related
<CardGroup cols={2}>
<Card title="Model selection" href="/concepts/model-providers" icon="layers">
Choosing providers, model refs, and failover behavior.
</Card>
<Card title="Configuration reference" href="/gateway/configuration-reference" icon="gear">
Full config reference for agents, models, and providers.
</Card>
</CardGroup>

View File

@@ -16,30 +16,52 @@ This page covers the Perplexity **provider** setup. For the Perplexity
**tool** (how the agent uses it), see [Perplexity tool](/tools/perplexity-search).
</Note>
| Property | Value |
| ----------- | ---------------------------------------------------------------------- |
| Type | Web search provider (not a model provider) |
| Auth | `PERPLEXITY_API_KEY` (direct) or `OPENROUTER_API_KEY` (via OpenRouter) |
| Config path | `plugins.entries.perplexity.config.webSearch.apiKey` |
## Getting started
<Steps>
<Step title="Set the API key">
Run the interactive web-search configuration flow:
```bash
openclaw configure --section web
```
Or set the key directly:
```bash
openclaw config set plugins.entries.perplexity.config.webSearch.apiKey "pplx-xxxxxxxxxxxx"
```
</Step>
<Step title="Start searching">
The agent will automatically use Perplexity for web searches once the key is
configured. No additional steps are required.
</Step>
</Steps>
## Search modes
The plugin auto-selects the transport based on API key prefix:
<Tabs>
<Tab title="Native Perplexity API (pplx-)">
When your key starts with `pplx-`, OpenClaw uses the native Perplexity Search
API. This transport returns structured results and supports domain, language,
and date filters (see filtering options below).
</Tab>
<Tab title="OpenRouter / Sonar (sk-or-)">
When your key starts with `sk-or-`, OpenClaw routes through OpenRouter using
the Perplexity Sonar model. This transport returns AI-synthesized answers with
citations.
</Tab>
</Tabs>
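The prefix-based auto-selection can be sketched as (illustrative shell, not OpenClaw source; the prefixes are those shown in the tabs above):

```shell
# Illustrative sketch: transport selection by API key prefix.
select_transport() {
  case "$1" in
    pplx-*)  echo "native" ;;  # native Perplexity Search API
    sk-or-*) echo "sonar" ;;   # OpenRouter / Sonar
    *)       echo "unknown" ;;
  esac
}
```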
@@ -47,16 +69,58 @@ The plugin auto-selects the transport based on API key prefix:
## Native API filtering
<Note>
Filtering options are only available when using the native Perplexity API
(`pplx-` key). OpenRouter/Sonar searches do not support these parameters.
</Note>
When using the native Perplexity API, searches support the following filters:
| Filter | Description | Example |
| -------------- | -------------------------------------- | ----------------------------------- |
| Country | 2-letter country code | `us`, `de`, `jp` |
| Language | ISO 639-1 language code | `en`, `fr`, `zh` |
| Date range | Recency window | `day`, `week`, `month`, `year` |
| Domain filters | Allowlist or denylist (max 20 domains) | `example.com` |
| Content budget | Token limits per response / per page | `max_tokens`, `max_tokens_per_page` |
## Advanced notes
<AccordionGroup>
<Accordion title="Environment variable for daemon processes">
If the OpenClaw Gateway runs as a daemon (launchd/systemd), make sure
`PERPLEXITY_API_KEY` is available to that process.
<Warning>
A key set only in `~/.profile` will not be visible to a launchd/systemd
daemon unless that environment is explicitly imported. Set the key in
`~/.openclaw/.env` or via `env.shellEnv` to ensure the gateway process can
read it.
</Warning>
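For example, a minimal `~/.openclaw/.env` entry (the key value here is a placeholder):

```shell
# ~/.openclaw/.env -- read by the gateway daemon at startup
PERPLEXITY_API_KEY=pplx-xxxxxxxxxxxx
```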
</Accordion>
<Accordion title="OpenRouter proxy setup">
If you prefer to route Perplexity searches through OpenRouter, set an
`OPENROUTER_API_KEY` (prefix `sk-or-`) instead of a native Perplexity key.
OpenClaw will detect the prefix and switch to the Sonar transport
automatically.
<Tip>
The OpenRouter transport is useful if you already have an OpenRouter account
and want consolidated billing across multiple providers.
</Tip>
</Accordion>
</AccordionGroup>
## Related
<CardGroup cols={2}>
<Card title="Perplexity search tool" href="/tools/perplexity-search" icon="magnifying-glass">
How the agent invokes Perplexity searches and interprets results.
</Card>
<Card title="Configuration reference" href="/gateway/configuration-reference" icon="gear">
Full configuration reference including plugin entries.
</Card>
</CardGroup>

View File

@@ -11,25 +11,29 @@ read_when:
OpenClaw ships a bundled `runway` provider for hosted video generation.
| Property | Value |
| ----------- | ----------------------------------------------------------------- |
| Provider id | `runway` |
| Auth | `RUNWAYML_API_SECRET` (canonical) or `RUNWAY_API_KEY` |
| API | Runway task-based video generation (`GET /v1/tasks/{id}` polling) |
## Getting started
<Steps>
<Step title="Set the API key">
```bash
openclaw onboard --auth-choice runway-api-key
```
</Step>
<Step title="Set Runway as the default video provider">
```bash
openclaw config set agents.defaults.videoGenerationModel.primary "runway/gen4.5"
```
</Step>
<Step title="Generate a video">
Ask the agent to generate a video. Runway will be used automatically.
</Step>
</Steps>
## Supported modes
@@ -39,9 +43,14 @@ openclaw config set agents.defaults.videoGenerationModel.primary "runway/gen4.5"
| Image-to-video | `gen4.5` | 1 local or remote image |
| Video-to-video | `gen4_aleph` | 1 local or remote video |
<Note>
Local image and video references are supported via data URIs. Text-only runs
currently expose `16:9` and `9:16` aspect ratios.
</Note>
<Warning>
Video-to-video currently requires `runway/gen4_aleph` specifically.
</Warning>
## Configuration
@@ -57,7 +66,28 @@ openclaw config set agents.defaults.videoGenerationModel.primary "runway/gen4.5"
}
```
## Advanced notes
<AccordionGroup>
<Accordion title="Environment variable aliases">
OpenClaw recognizes both `RUNWAYML_API_SECRET` (canonical) and `RUNWAY_API_KEY`.
Either variable will authenticate the Runway provider.
</Accordion>
<Accordion title="Task polling">
Runway uses a task-based API. After submitting a generation request, OpenClaw
polls `GET /v1/tasks/{id}` until the video is ready. No additional
configuration is needed for the polling behavior.
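A rough sketch of what such a polling loop looks like (illustrative only; the `Authorization` scheme, status names, and base URL variable are assumptions not confirmed by this page -- check Runway's API docs for the real values):

```shell
# Illustrative task-polling sketch. Status names and auth scheme
# are assumptions; OpenClaw handles this internally.
is_terminal() {
  case "$1" in
    SUCCEEDED|FAILED|CANCELLED) return 0 ;;
    *) return 1 ;;
  esac
}

poll_task() {
  # RUNWAY_API_BASE is a placeholder for Runway's API base URL.
  while :; do
    status=$(curl -s -H "Authorization: Bearer $RUNWAYML_API_SECRET" \
      "$RUNWAY_API_BASE/v1/tasks/$1" | jq -r '.status')
    is_terminal "$status" && break
    sleep 5
  done
  echo "$status"
}
```

OpenClaw performs this loop for you; the sketch only shows the shape of the task lifecycle.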
</Accordion>
</AccordionGroup>
## Related
<CardGroup cols={2}>
<Card title="Video generation" href="/tools/video-generation" icon="video">
Shared tool parameters, provider selection, and async behavior.
</Card>
<Card title="Configuration reference" href="/gateway/configuration-reference#agent-defaults" icon="gear">
Agent default settings including video generation model.
</Card>
</CardGroup>

View File

@@ -8,36 +8,58 @@ read_when:
# Vercel AI Gateway
The [Vercel AI Gateway](https://vercel.com/ai-gateway) provides a unified API to
access hundreds of models through a single endpoint.
| Property | Value |
| ------------- | -------------------------------- |
| Provider | `vercel-ai-gateway` |
| Auth | `AI_GATEWAY_API_KEY` |
| API | Anthropic Messages compatible |
| Model catalog | Auto-discovered via `/v1/models` |
<Tip>
OpenClaw auto-discovers the Gateway `/v1/models` catalog, so
`/models vercel-ai-gateway` includes current model refs such as
`vercel-ai-gateway/openai/gpt-5.4`.
</Tip>
## Getting started
<Steps>
<Step title="Set the API key">
Run onboarding and choose the AI Gateway auth option:
```bash
openclaw onboard --auth-choice ai-gateway-api-key
```
</Step>
<Step title="Set a default model">
Add the model to your OpenClaw config:
```json5
{
agents: {
defaults: {
model: { primary: "vercel-ai-gateway/anthropic/claude-opus-4.6" },
},
},
}
```
</Step>
<Step title="Verify the model is available">
```bash
openclaw models list --provider vercel-ai-gateway
```
</Step>
</Steps>
## Non-interactive example
For scripted or CI setups, pass all values on the command line:
```bash
openclaw onboard --non-interactive \
--mode local \
@@ -45,16 +67,53 @@ openclaw onboard --non-interactive \
--ai-gateway-api-key "$AI_GATEWAY_API_KEY"
```
## Model ID shorthand
OpenClaw accepts Vercel Claude shorthand model refs and normalizes them at
runtime:
| Shorthand input | Normalized model ref |
| ----------------------------------- | --------------------------------------------- |
| `vercel-ai-gateway/claude-opus-4.6` | `vercel-ai-gateway/anthropic/claude-opus-4.6` |
| `vercel-ai-gateway/opus-4.6` | `vercel-ai-gateway/anthropic/claude-opus-4.6` |
<Tip>
You can use either the shorthand or the fully qualified model ref in your
configuration. OpenClaw resolves the canonical form automatically.
</Tip>
## Advanced notes
<AccordionGroup>
<Accordion title="Environment variable for daemon processes">
If the OpenClaw Gateway runs as a daemon (launchd/systemd), make sure
`AI_GATEWAY_API_KEY` is available to that process.
<Warning>
A key set only in `~/.profile` will not be visible to a launchd/systemd
daemon unless that environment is explicitly imported. Set the key in
`~/.openclaw/.env` or via `env.shellEnv` to ensure the gateway process can
read it.
</Warning>
</Accordion>
<Accordion title="Provider routing">
Vercel AI Gateway routes requests to the upstream provider based on the model
ref prefix. For example, `vercel-ai-gateway/anthropic/claude-opus-4.6` routes
through Anthropic, while `vercel-ai-gateway/openai/gpt-5.4` routes through
OpenAI. Your single `AI_GATEWAY_API_KEY` handles authentication for all
upstream providers.
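As a sketch (illustrative only, not OpenClaw source), extracting the upstream provider from a gateway model ref:

```shell
# Illustrative sketch: the upstream provider is the segment
# after the gateway prefix in the model ref.
upstream_of() {
  echo "$1" | cut -d/ -f2
}
```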
</Accordion>
</AccordionGroup>
## Related
<CardGroup cols={2}>
<Card title="Model selection" href="/concepts/model-providers" icon="layers">
Choosing providers, model refs, and failover behavior.
</Card>
<Card title="Troubleshooting" href="/help/troubleshooting" icon="wrench">
General troubleshooting and FAQ.
</Card>
</CardGroup>