mirror of
https://github.com/moltbot/moltbot.git
synced 2026-03-07 22:44:16 +00:00
feat: Provider/Mistral full support for Mistral on OpenClaw 🇫🇷 (#23845)
* Onboard: add Mistral auth choice and CLI flags
* Onboard/Auth: add Mistral provider config defaults
* Auth choice: wire Mistral API-key flow
* Onboard non-interactive: support --mistral-api-key
* Media understanding: add Mistral Voxtral audio provider
* Changelog: note Mistral onboarding and media support
* Docs: add Mistral provider and onboarding/media references
* Tests: cover Mistral media registry/defaults and auth mapping
* Memory: add Mistral embeddings provider support
* Onboarding: refresh Mistral model metadata
* Docs: document Mistral embeddings and endpoints
* Memory: persist Mistral embedding client state in managers
* Memory: add regressions for mistral provider wiring
* Gateway: add live tool probe retry helper
* Gateway: cover live tool probe retry helper
* Gateway: retry malformed live tool-read probe responses
* Memory: support plain-text batch error bodies
* Tests: add Mistral Voxtral live transcription smoke
* Docs: add Mistral live audio test command
* Revert: remove Mistral live voice test and docs entry
* Onboard: re-export Mistral default model ref from models
* Changelog: credit joeVenner for Mistral work
* fix: include Mistral in auto audio key fallback
* Update CHANGELOG.md
* Update CHANGELOG.md

Co-authored-by: Shakker <shakkerdroid@gmail.com>
@@ -8,6 +8,7 @@ Docs: https://docs.openclaw.ai

### Changes

+- Provider/Mistral: add support for the new Mistral provider, including memory embeddings and voice. (#23845) Thanks @vincentkoc
- Update/Core: add an optional built-in auto-updater for package installs (`update.auto.*`), default-off, with stable rollout delay+jitter and beta hourly cadence.
- CLI/Update: add `openclaw update --dry-run` to preview channel/tag/target/restart actions without mutating config, installing, syncing plugins, or restarting.
- Config/UI: add tag-aware settings filtering and broaden config labels/help copy so fields are easier to discover and understand in the dashboard config screen.
@@ -20,6 +21,7 @@ Docs: https://docs.openclaw.ai

- Channels/Config: unify channel preview streaming config handling with a shared resolver and canonical migration path.
- Gateway/Auth: unify call/probe/status/auth credential-source precedence on shared resolver helpers, with table-driven parity coverage across gateway entrypoints.
- Gateway/Auth: refactor gateway credential resolution and websocket auth handshake paths to use shared typed auth contexts, including explicit `auth.deviceToken` support in connect frames and tests.
+- Onboarding/Media: add first-class Mistral API-key onboarding (`--mistral-api-key`, auth-choice flow + config defaults) and Mistral Voxtral audio transcription provider defaults for media understanding. Thanks @jaimegh-es, @joeVenner, and @JamesEBall.
- Skills: remove bundled `food-order` skill from this repo; manage/install it from ClawHub instead.
- Docs/Subagents: make thread-bound session guidance channel-first instead of Discord-specific, and list thread-supporting channels explicitly. (#23589) Thanks @osolmaz.
@@ -99,6 +101,9 @@ Docs: https://docs.openclaw.ai

- Control UI/WebSocket: stop and clear the browser gateway client on UI teardown so remounts cannot leave orphan websocket clients that create duplicate active connections. (#23422) Thanks @floatinggball-design.
- Control UI/WebSocket: send a stable per-tab `instanceId` in websocket connect frames so reconnect cycles keep a consistent client identity for diagnostics and presence tracking. (#23616) Thanks @zq58855371-ui.
- Config/Memory: allow `"mistral"` in `agents.defaults.memorySearch.provider` and `agents.defaults.memorySearch.fallback` schema validation. (#14934) Thanks @ThomsenDrake.
- Memory/Mistral: align schema/runtime support by adding Mistral embeddings (`/v1/embeddings`, default `mistral-embed`) for memory search provider resolution/fallback/doctor checks, and refresh onboarding Mistral model metadata for `mistral-large-latest` context/output limits. Thanks @jaimegh-es, @joeVenner, and @JamesEBall.
- Security/Feishu: enforce ID-only allowlist matching for DM/group sender authorization, normalize Feishu ID prefixes during checks, and ignore mutable display names so display-name collisions cannot satisfy allowlist entries. This ships in the next npm release. Thanks @jiseoung for reporting.
- Security/Group policy: harden `channels.*.groups.*.toolsBySender` matching by requiring explicit sender-key types (`id:`, `e164:`, `username:`, `name:`), preventing cross-identifier collisions across mutable/display-name fields while keeping legacy untyped keys on a deprecated ID-only path. This ships in the next npm release. Thanks @jiseoung for reporting.
- Feishu/Commands: in group chats, command authorization now falls back to top-level `channels.feishu.allowFrom` when per-group `allowFrom` is not set, so `/command` no longer gets blocked by an unintended empty allowlist. (#23756)
- Dev tooling: prevent `CLAUDE.md` symlink target regressions by excluding CLAUDE symlink sentinels from `oxfmt` and marking them `-text` in `.gitattributes`, so formatter/EOL normalization cannot reintroduce trailing-newline targets. Thanks @vincentkoc.
- Agents/Compaction: restore embedded compaction safeguard/context-pruning extension loading in production by wiring bundled extension factories into the resource loader instead of runtime file-path resolution. (#22349) Thanks @Glucksberg.
@@ -321,13 +321,14 @@ Options:

- `--non-interactive`
- `--mode <local|remote>`
- `--flow <quickstart|advanced|manual>` (manual is an alias for advanced)
-- `--auth-choice <setup-token|token|chutes|openai-codex|openai-api-key|openrouter-api-key|ai-gateway-api-key|moonshot-api-key|moonshot-api-key-cn|kimi-code-api-key|synthetic-api-key|venice-api-key|gemini-api-key|zai-api-key|apiKey|minimax-api|minimax-api-lightning|opencode-zen|custom-api-key|skip>`
+- `--auth-choice <setup-token|token|chutes|openai-codex|openai-api-key|openrouter-api-key|ai-gateway-api-key|moonshot-api-key|moonshot-api-key-cn|kimi-code-api-key|synthetic-api-key|venice-api-key|gemini-api-key|zai-api-key|mistral-api-key|apiKey|minimax-api|minimax-api-lightning|opencode-zen|custom-api-key|skip>`
- `--token-provider <id>` (non-interactive; used with `--auth-choice token`)
- `--token <token>` (non-interactive; used with `--auth-choice token`)
- `--token-profile-id <id>` (non-interactive; default: `<provider>:manual`)
- `--token-expires-in <duration>` (non-interactive; e.g. `365d`, `12h`)
- `--anthropic-api-key <key>`
- `--openai-api-key <key>`
+- `--mistral-api-key <key>`
- `--openrouter-api-key <key>`
- `--ai-gateway-api-key <key>`
- `--moonshot-api-key <key>`
@@ -56,6 +56,14 @@ openclaw onboard --non-interactive \

# --auth-choice zai-cn
```

+Non-interactive Mistral example:
+
+```bash
+openclaw onboard --non-interactive \
+  --auth-choice mistral-api-key \
+  --mistral-api-key "$MISTRAL_API_KEY"
+```

Flow notes:

- `quickstart`: minimal prompts, auto-generates a gateway token.
@@ -105,7 +105,8 @@ Defaults:

2. `openai` if an OpenAI key can be resolved.
3. `gemini` if a Gemini key can be resolved.
4. `voyage` if a Voyage key can be resolved.
-5. Otherwise memory search stays disabled until configured.
+5. `mistral` if a Mistral key can be resolved.
+6. Otherwise memory search stays disabled until configured.
- Local mode uses node-llama-cpp and may require `pnpm approve-builds`.
- Uses sqlite-vec (when available) to accelerate vector search inside SQLite.
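If you would rather pin a provider than rely on this auto-selection order, a minimal config sketch (assuming the `agents.defaults.memorySearch` keys described in this section; values illustrative):

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        enabled: true,
        // any of: "openai" | "gemini" | "voyage" | "mistral" | "local" | "auto"
        provider: "mistral",
      },
    },
  },
}
```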
@@ -114,7 +115,9 @@ resolves keys from auth profiles, `models.providers.*.apiKey`, or environment

variables. Codex OAuth only covers chat/completions and does **not** satisfy
embeddings for memory search. For Gemini, use `GEMINI_API_KEY` or
`models.providers.google.apiKey`. For Voyage, use `VOYAGE_API_KEY` or
-`models.providers.voyage.apiKey`. When using a custom OpenAI-compatible endpoint,
+`models.providers.voyage.apiKey`. For Mistral, use `MISTRAL_API_KEY` or
+`models.providers.mistral.apiKey`.
+When using a custom OpenAI-compatible endpoint,
set `memorySearch.remote.apiKey` (and optional `memorySearch.remote.headers`).
### QMD backend (experimental)
@@ -328,7 +331,7 @@ If you don't want to set an API key, use `memorySearch.provider = "local"` or se

Fallbacks:

-- `memorySearch.fallback` can be `openai`, `gemini`, `local`, or `none`.
+- `memorySearch.fallback` can be `openai`, `gemini`, `voyage`, `mistral`, `local`, or `none`.
- The fallback provider is only used when the primary embedding provider fails.

Batch indexing (OpenAI + Gemini + Voyage):
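For example, a primary remote provider paired with a local fallback (a sketch; key paths assumed from the defaults described above):

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        provider: "mistral",
        // used only when the primary embedding provider fails
        fallback: "local",
      },
    },
  },
}
```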
@@ -131,11 +131,13 @@ OpenClaw ships with the pi‑ai catalog. These providers require **no**

- OpenRouter: `openrouter` (`OPENROUTER_API_KEY`)
  - Example model: `openrouter/anthropic/claude-sonnet-4-5`
- xAI: `xai` (`XAI_API_KEY`)
+- Mistral: `mistral` (`MISTRAL_API_KEY`)
+  - Example model: `mistral/mistral-large-latest`
+  - CLI: `openclaw onboard --auth-choice mistral-api-key`
- Groq: `groq` (`GROQ_API_KEY`)
- Cerebras: `cerebras` (`CEREBRAS_API_KEY`)
  - GLM models on Cerebras use ids `zai-glm-4.7` and `zai-glm-4.6`.
  - OpenAI-compatible base URL: `https://api.cerebras.ai/v1`.
-- Mistral: `mistral` (`MISTRAL_API_KEY`)
- GitHub Copilot: `github-copilot` (`COPILOT_GITHUB_TOKEN` / `GH_TOKEN` / `GITHUB_TOKEN`)
- Hugging Face Inference: `huggingface` (`HUGGINGFACE_HUB_TOKEN` or `HF_TOKEN`) — OpenAI-compatible router; example model: `huggingface/deepseek-ai/DeepSeek-R1`; CLI: `openclaw onboard --auth-choice huggingface-api-key`. See [Hugging Face (Inference)](/providers/huggingface).
@@ -91,6 +91,10 @@
      "source": "/moonshot",
      "destination": "/providers/moonshot"
    },
+   {
+     "source": "/mistral",
+     "destination": "/providers/mistral"
+   },
    {
      "source": "/openrouter",
      "destination": "/providers/openrouter"
@@ -1066,6 +1070,7 @@
    "providers/bedrock",
    "providers/vercel-ai-gateway",
    "providers/moonshot",
+   "providers/mistral",
    "providers/minimax",
    "providers/opencode",
    "providers/glm",
@@ -1251,14 +1251,15 @@ still need a real API key (`OPENAI_API_KEY` or `models.providers.openai.apiKey`)

If you don't set a provider explicitly, OpenClaw auto-selects a provider when it
can resolve an API key (auth profiles, `models.providers.*.apiKey`, or env vars).
It prefers OpenAI if an OpenAI key resolves, otherwise Gemini if a Gemini key
-resolves. If neither key is available, memory search stays disabled until you
-configure it. If you have a local model path configured and present, OpenClaw
+resolves, then Voyage, then Mistral. If no remote key is available, memory
+search stays disabled until you configure it. If you have a local model path
+configured and present, OpenClaw
prefers `local`.

If you'd rather stay local, set `memorySearch.provider = "local"` (and optionally
`memorySearch.fallback = "none"`). If you want Gemini embeddings, set
`memorySearch.provider = "gemini"` and provide `GEMINI_API_KEY` (or
-`memorySearch.remote.apiKey`). We support **OpenAI, Gemini, or local** embedding
+`memorySearch.remote.apiKey`). We support **OpenAI, Gemini, Voyage, Mistral, or local** embedding
models - see [Memory](/concepts/memory) for the setup details.
### Does memory persist forever? What are the limits?
@@ -94,11 +94,27 @@ Note: Binary detection is best-effort across macOS/Linux/Windows; ensure the CLI
|
||||
}
|
||||
```
|
||||
|
||||
### Provider-only (Mistral Voxtral)
|
||||
|
||||
```json5
|
||||
{
|
||||
tools: {
|
||||
media: {
|
||||
audio: {
|
||||
enabled: true,
|
||||
models: [{ provider: "mistral", model: "voxtral-mini-latest" }],
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
```
|
||||
|
||||
## Notes & limits
|
||||
|
||||
- Provider auth follows the standard model auth order (auth profiles, env vars, `models.providers.*.apiKey`).
|
||||
- Deepgram picks up `DEEPGRAM_API_KEY` when `provider: "deepgram"` is used.
|
||||
- Deepgram setup details: [Deepgram (audio transcription)](/providers/deepgram).
|
||||
- Mistral setup details: [Mistral](/providers/mistral).
|
||||
- Audio providers can override `baseUrl`, `headers`, and `providerOptions` via `tools.media.audio`.
|
||||
- Default size cap is 20MB (`tools.media.audio.maxBytes`). Oversize audio is skipped for that model and the next entry is tried.
|
||||
- Default `maxChars` for audio is **unset** (full transcript). Set `tools.media.audio.maxChars` or per-entry `maxChars` to trim output.
|
||||
|
||||
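The size and length limits in the notes above can be combined on one entry; a hedged sketch (field names taken from the notes, values illustrative):

```json5
{
  tools: {
    media: {
      audio: {
        enabled: true,
        maxBytes: 10485760, // 10MB cap instead of the 20MB default
        models: [
          { provider: "mistral", model: "voxtral-mini-latest", maxChars: 8000 },
        ],
      },
    },
  },
}
```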
@@ -175,11 +175,11 @@ If you omit `capabilities`, the entry is eligible for the list it appears in.

## Provider support matrix (OpenClaw integrations)

-| Capability | Provider integration | Notes |
-| ---------- | ------------------------------------------------ | ------------------------------------------------- |
-| Image | OpenAI / Anthropic / Google / others via `pi-ai` | Any image-capable model in the registry works. |
-| Audio | OpenAI, Groq, Deepgram, Google | Provider transcription (Whisper/Deepgram/Gemini). |
-| Video | Google (Gemini API) | Provider video understanding. |
+| Capability | Provider integration | Notes |
+| ---------- | ------------------------------------------------ | --------------------------------------------------------- |
+| Image | OpenAI / Anthropic / Google / others via `pi-ai` | Any image-capable model in the registry works. |
+| Audio | OpenAI, Groq, Deepgram, Google, Mistral | Provider transcription (Whisper/Deepgram/Gemini/Voxtral). |
+| Video | Google (Gemini API) | Provider video understanding. |

## Recommended providers
@@ -190,7 +190,7 @@ If you omit `capabilities`, the entry is eligible for the list it appears in.

**Audio**

-- `openai/gpt-4o-mini-transcribe`, `groq/whisper-large-v3-turbo`, or `deepgram/nova-3`.
+- `openai/gpt-4o-mini-transcribe`, `groq/whisper-large-v3-turbo`, `deepgram/nova-3`, or `mistral/voxtral-mini-latest`.
- CLI fallback: `whisper-cli` (whisper-cpp) or `whisper`.
- Deepgram setup: [Deepgram (audio transcription)](/providers/deepgram).
@@ -44,6 +44,7 @@ See [Venice AI](/providers/venice).

- [Together AI](/providers/together)
- [Cloudflare AI Gateway](/providers/cloudflare-ai-gateway)
- [Moonshot AI (Kimi + Kimi Coding)](/providers/moonshot)
+- [Mistral](/providers/mistral)
- [OpenCode Zen](/providers/opencode)
- [Amazon Bedrock](/providers/bedrock)
- [Z.AI](/providers/zai)
docs/providers/mistral.md (new file, 54 lines)
@@ -0,0 +1,54 @@
---
summary: "Use Mistral models and Voxtral transcription with OpenClaw"
read_when:
  - You want to use Mistral models in OpenClaw
  - You need Mistral API key onboarding and model refs
title: "Mistral"
---

# Mistral

OpenClaw supports Mistral for both text/image model routing (`mistral/...`) and
audio transcription via Voxtral in media understanding.
Mistral can also be used for memory embeddings (`memorySearch.provider = "mistral"`).

## CLI setup

```bash
openclaw onboard --auth-choice mistral-api-key
# or non-interactive
openclaw onboard --mistral-api-key "$MISTRAL_API_KEY"
```

## Config snippet (LLM provider)

```json5
{
  env: { MISTRAL_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "mistral/mistral-large-latest" } } },
}
```

## Config snippet (audio transcription with Voxtral)

```json5
{
  tools: {
    media: {
      audio: {
        enabled: true,
        models: [{ provider: "mistral", model: "voxtral-mini-latest" }],
      },
    },
  },
}
```

## Notes

- Mistral auth uses `MISTRAL_API_KEY`.
- Provider base URL defaults to `https://api.mistral.ai/v1`.
- Onboarding default model is `mistral/mistral-large-latest`.
- Media-understanding default audio model for Mistral is `voxtral-mini-latest`.
- Media transcription path uses `/v1/audio/transcriptions`.
- Memory embeddings path uses `/v1/embeddings` (default model: `mistral-embed`).
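Tying the notes above together, a sketch that enables Mistral-backed memory search alongside the model provider (key paths assumed from the memory docs; Mistral embeddings use `/v1/embeddings` with `mistral-embed` by default):

```json5
{
  env: { MISTRAL_API_KEY: "sk-..." },
  agents: {
    defaults: {
      model: { primary: "mistral/mistral-large-latest" },
      memorySearch: {
        enabled: true,
        provider: "mistral",
      },
    },
  },
}
```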
@@ -39,6 +39,7 @@ See [Venice AI](/providers/venice).

- [Vercel AI Gateway](/providers/vercel-ai-gateway)
- [Cloudflare AI Gateway](/providers/cloudflare-ai-gateway)
- [Moonshot AI (Kimi + Kimi Coding)](/providers/moonshot)
+- [Mistral](/providers/mistral)
- [Synthetic](/providers/synthetic)
- [OpenCode Zen](/providers/opencode)
- [Z.AI](/providers/zai)
@@ -67,6 +67,7 @@ Semantic memory search uses **embedding APIs** when configured for remote provid

- `memorySearch.provider = "openai"` → OpenAI embeddings
- `memorySearch.provider = "gemini"` → Gemini embeddings
- `memorySearch.provider = "voyage"` → Voyage embeddings
+- `memorySearch.provider = "mistral"` → Mistral embeddings
- Optional fallback to a remote provider if local embeddings fail

You can keep it local with `memorySearch.provider = "local"` (no API usage).
@@ -86,6 +86,16 @@ Add `--json` for a machine-readable summary.

  --gateway-bind loopback
```
</Accordion>
+<Accordion title="Mistral example">
+```bash
+openclaw onboard --non-interactive \
+  --mode local \
+  --auth-choice mistral-api-key \
+  --mistral-api-key "$MISTRAL_API_KEY" \
+  --gateway-port 18789 \
+  --gateway-bind loopback
+```
+</Accordion>
<Accordion title="Synthetic example">
```bash
openclaw onboard --non-interactive \
@@ -9,7 +9,7 @@ export type ResolvedMemorySearchConfig = {
  enabled: boolean;
  sources: Array<"memory" | "sessions">;
  extraPaths: string[];
- provider: "openai" | "local" | "gemini" | "voyage" | "auto";
+ provider: "openai" | "local" | "gemini" | "voyage" | "mistral" | "auto";
  remote?: {
    baseUrl?: string;
    apiKey?: string;
@@ -25,7 +25,7 @@ export type ResolvedMemorySearchConfig = {
  experimental: {
    sessionMemory: boolean;
  };
- fallback: "openai" | "gemini" | "local" | "voyage" | "none";
+ fallback: "openai" | "gemini" | "local" | "voyage" | "mistral" | "none";
  model: string;
  local: {
    modelPath?: string;
@@ -81,6 +81,7 @@ export type ResolvedMemorySearchConfig = {
const DEFAULT_OPENAI_MODEL = "text-embedding-3-small";
const DEFAULT_GEMINI_MODEL = "gemini-embedding-001";
const DEFAULT_VOYAGE_MODEL = "voyage-4-large";
+const DEFAULT_MISTRAL_MODEL = "mistral-embed";
const DEFAULT_CHUNK_TOKENS = 400;
const DEFAULT_CHUNK_OVERLAP = 80;
const DEFAULT_WATCH_DEBOUNCE_MS = 1500;
@@ -153,6 +154,7 @@ function mergeConfig(
    provider === "openai" ||
    provider === "gemini" ||
    provider === "voyage" ||
+   provider === "mistral" ||
    provider === "auto";
  const batch = {
    enabled: overrideRemote?.batch?.enabled ?? defaultRemote?.batch?.enabled ?? false,
@@ -182,7 +184,9 @@ function mergeConfig(
        ? DEFAULT_OPENAI_MODEL
        : provider === "voyage"
          ? DEFAULT_VOYAGE_MODEL
-         : undefined;
+         : provider === "mistral"
+           ? DEFAULT_MISTRAL_MODEL
+           : undefined;
  const model = overrides?.model ?? defaults?.model ?? modelDefault ?? "";
  const local = {
    modelPath: overrides?.local?.modelPath ?? defaults?.local?.modelPath,
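The nested ternary above picks a per-provider default embedding model. A standalone sketch of the same selection logic (constants mirrored from this diff; the real `mergeConfig` consults `overrides?.model` and `defaults?.model` before falling back to these, and the branch ordering here is assumed):

```typescript
// Per-provider default embedding models, as declared earlier in this file.
const DEFAULT_OPENAI_MODEL = "text-embedding-3-small";
const DEFAULT_GEMINI_MODEL = "gemini-embedding-001";
const DEFAULT_VOYAGE_MODEL = "voyage-4-large";
const DEFAULT_MISTRAL_MODEL = "mistral-embed";

type Provider = "openai" | "local" | "gemini" | "voyage" | "mistral" | "auto";

// Mirrors the ternary chain: known remote providers get a default model,
// everything else ("local"/"auto") resolves to undefined.
function defaultModelFor(provider: Provider): string | undefined {
  return provider === "openai"
    ? DEFAULT_OPENAI_MODEL
    : provider === "gemini"
      ? DEFAULT_GEMINI_MODEL
      : provider === "voyage"
        ? DEFAULT_VOYAGE_MODEL
        : provider === "mistral"
          ? DEFAULT_MISTRAL_MODEL
          : undefined;
}

console.log(defaultModelFor("mistral")); // "mistral-embed"
```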
@@ -131,6 +131,7 @@ export function registerOnboardCommand(program: Command) {
      tokenExpiresIn: opts.tokenExpiresIn as string | undefined,
      anthropicApiKey: opts.anthropicApiKey as string | undefined,
      openaiApiKey: opts.openaiApiKey as string | undefined,
+     mistralApiKey: opts.mistralApiKey as string | undefined,
      openrouterApiKey: opts.openrouterApiKey as string | undefined,
      aiGatewayApiKey: opts.aiGatewayApiKey as string | undefined,
      cloudflareAiGatewayAccountId: opts.cloudflareAiGatewayAccountId as string | undefined,
@@ -43,6 +43,7 @@ describe("buildAuthChoiceOptions", () => {
    ["Chutes OAuth auth choice", ["chutes"]],
    ["Qwen auth choice", ["qwen-portal"]],
    ["xAI auth choice", ["xai-api-key"]],
+   ["Mistral auth choice", ["mistral-api-key"]],
    ["Volcano Engine auth choice", ["volcengine-api-key"]],
    ["BytePlus auth choice", ["byteplus-api-key"]],
    ["vLLM auth choice", ["vllm"]],
@@ -70,6 +70,12 @@ const AUTH_CHOICE_GROUP_DEFS: {
    hint: "API key",
    choices: ["xai-api-key"],
  },
+ {
+   value: "mistral",
+   label: "Mistral AI",
+   hint: "API key",
+   choices: ["mistral-api-key"],
+ },
  {
    value: "volcengine",
    label: "Volcano Engine",
@@ -191,6 +197,7 @@ const BASE_AUTH_CHOICE_OPTIONS: ReadonlyArray<AuthChoiceOption> = [
    hint: "Local/self-hosted OpenAI-compatible server",
  },
  { value: "openai-api-key", label: "OpenAI API key" },
+ { value: "mistral-api-key", label: "Mistral API key" },
  { value: "xai-api-key", label: "xAI (Grok) API key" },
  { value: "volcengine-api-key", label: "Volcano Engine API key" },
  { value: "byteplus-api-key", label: "BytePlus API key" },
@@ -29,6 +29,8 @@ import {
  applyKimiCodeProviderConfig,
  applyLitellmConfig,
  applyLitellmProviderConfig,
+ applyMistralConfig,
+ applyMistralProviderConfig,
  applyMoonshotConfig,
  applyMoonshotConfigCn,
  applyMoonshotProviderConfig,
@@ -52,6 +54,7 @@ import {
  QIANFAN_DEFAULT_MODEL_REF,
  KIMI_CODING_MODEL_REF,
  MOONSHOT_DEFAULT_MODEL_REF,
+ MISTRAL_DEFAULT_MODEL_REF,
  SYNTHETIC_DEFAULT_MODEL_REF,
  TOGETHER_DEFAULT_MODEL_REF,
  VENICE_DEFAULT_MODEL_REF,
@@ -62,6 +65,7 @@ import {
  setGeminiApiKey,
  setLitellmApiKey,
  setKimiCodingApiKey,
+ setMistralApiKey,
  setMoonshotApiKey,
  setOpencodeZenApiKey,
  setSyntheticApiKey,
@@ -91,6 +95,7 @@ const API_KEY_TOKEN_PROVIDER_AUTH_CHOICE: Record<string, AuthChoice> = {
  venice: "venice-api-key",
  together: "together-api-key",
  huggingface: "huggingface-api-key",
+ mistral: "mistral-api-key",
  opencode: "opencode-zen",
  qianfan: "qianfan-api-key",
};
@@ -190,6 +195,18 @@ const SIMPLE_API_KEY_PROVIDER_FLOWS: Partial<Record<AuthChoice, SimpleApiKeyProv
    applyProviderConfig: applyXiaomiProviderConfig,
    noteDefault: XIAOMI_DEFAULT_MODEL_REF,
  },
+ "mistral-api-key": {
+   provider: "mistral",
+   profileId: "mistral:default",
+   expectedProviders: ["mistral"],
+   envLabel: "MISTRAL_API_KEY",
+   promptMessage: "Enter Mistral API key",
+   setCredential: setMistralApiKey,
+   defaultModel: MISTRAL_DEFAULT_MODEL_REF,
+   applyDefaultConfig: applyMistralConfig,
+   applyProviderConfig: applyMistralProviderConfig,
+   noteDefault: MISTRAL_DEFAULT_MODEL_REF,
+ },
  "venice-api-key": {
    provider: "venice",
    profileId: "venice:default",
@@ -20,6 +20,7 @@ const PREFERRED_PROVIDER_BY_AUTH_CHOICE: Partial<Record<AuthChoice, string>> = {
  "gemini-api-key": "google",
  "google-antigravity": "google-antigravity",
  "google-gemini-cli": "google-gemini-cli",
+ "mistral-api-key": "mistral",
  "zai-api-key": "zai",
  "zai-coding-global": "zai",
  "zai-coding-cn": "zai",
@@ -66,6 +66,7 @@ describe("applyAuthChoice", () => {
  "AI_GATEWAY_API_KEY",
  "CLOUDFLARE_AI_GATEWAY_API_KEY",
  "MOONSHOT_API_KEY",
+ "MISTRAL_API_KEY",
  "KIMI_API_KEY",
  "GEMINI_API_KEY",
  "XIAOMI_API_KEY",
@@ -527,6 +528,13 @@ describe("applyAuthChoice", () => {
      provider: "moonshot",
      modelPrefix: "moonshot/",
    },
+   {
+     authChoice: "mistral-api-key",
+     tokenProvider: "mistral",
+     profileId: "mistral:default",
+     provider: "mistral",
+     modelPrefix: "mistral/",
+   },
    {
      authChoice: "kimi-code-api-key",
      tokenProvider: "kimi-code",
@@ -1267,6 +1275,10 @@ describe("resolvePreferredProviderForAuthChoice", () => {
    expect(resolvePreferredProviderForAuthChoice("qwen-portal")).toBe("qwen-portal");
  });

+ it("maps mistral-api-key to the provider", () => {
+   expect(resolvePreferredProviderForAuthChoice("mistral-api-key")).toBe("mistral");
+ });
+
  it("returns undefined for unknown choices", () => {
    expect(resolvePreferredProviderForAuthChoice("unknown" as AuthChoice)).toBeUndefined();
  });
@@ -104,6 +104,28 @@ describe("noteMemorySearchHealth", () => {
    });
    expect(note).not.toHaveBeenCalled();
  });

+ it("resolves mistral auth for explicit mistral embedding provider", async () => {
+   resolveMemorySearchConfig.mockReturnValue({
+     provider: "mistral",
+     local: {},
+     remote: {},
+   });
+   resolveApiKeyForProvider.mockResolvedValue({
+     apiKey: "k",
+     source: "env: MISTRAL_API_KEY",
+     mode: "api-key",
+   });
+
+   await noteMemorySearchHealth(cfg);
+
+   expect(resolveApiKeyForProvider).toHaveBeenCalledWith({
+     provider: "mistral",
+     cfg,
+     agentDir: "/tmp/agent-default",
+   });
+   expect(note).not.toHaveBeenCalled();
+ });
});

describe("detectLegacyWorkspaceDirs", () => {
@@ -76,7 +76,7 @@ export async function noteMemorySearchHealth(cfg: OpenClawConfig): Promise<void>
  if (hasLocalEmbeddings(resolved.local)) {
    return;
  }
- for (const provider of ["openai", "gemini", "voyage"] as const) {
+ for (const provider of ["openai", "gemini", "voyage", "mistral"] as const) {
    if (hasRemoteApiKey || (await hasApiKeyForProvider(provider, cfg, agentDir))) {
      return;
    }
@@ -88,7 +88,7 @@ export async function noteMemorySearchHealth(cfg: OpenClawConfig): Promise<void>
      "Semantic recall will not work without an embedding provider.",
      "",
      "Fix (pick one):",
-     "- Set OPENAI_API_KEY or GEMINI_API_KEY in your environment",
+     "- Set OPENAI_API_KEY, GEMINI_API_KEY, VOYAGE_API_KEY, or MISTRAL_API_KEY in your environment",
      `- Add credentials: ${formatCliCommand("openclaw auth add --provider openai")}`,
      `- For local embeddings: configure agents.defaults.memorySearch.provider and local model path`,
      `- To disable: ${formatCliCommand("openclaw config set agents.defaults.memorySearch.enabled false")}`,
@@ -119,7 +119,7 @@ function hasLocalEmbeddings(local: { modelPath?: string }): boolean {
}

async function hasApiKeyForProvider(
- provider: "openai" | "gemini" | "voyage",
+ provider: "openai" | "gemini" | "voyage" | "mistral",
  cfg: OpenClawConfig,
  agentDir: string,
): Promise<boolean> {
@@ -31,6 +31,7 @@ import type { OpenClawConfig } from "../config/config.js";
import type { ModelApi } from "../config/types.models.js";
import {
  HUGGINGFACE_DEFAULT_MODEL_REF,
+ MISTRAL_DEFAULT_MODEL_REF,
  OPENROUTER_DEFAULT_MODEL_REF,
  TOGETHER_DEFAULT_MODEL_REF,
  XIAOMI_DEFAULT_MODEL_REF,
@@ -57,9 +58,12 @@ import {
  applyProviderConfigWithModelCatalog,
} from "./onboard-auth.config-shared.js";
import {
+ buildMistralModelDefinition,
  buildZaiModelDefinition,
  buildMoonshotModelDefinition,
  buildXaiModelDefinition,
+ MISTRAL_BASE_URL,
+ MISTRAL_DEFAULT_MODEL_ID,
  QIANFAN_BASE_URL,
  QIANFAN_DEFAULT_MODEL_REF,
  KIMI_CODING_MODEL_ID,
@@ -402,6 +406,30 @@ export function applyXaiConfig(cfg: OpenClawConfig): OpenClawConfig {
  return applyAgentDefaultModelPrimary(next, XAI_DEFAULT_MODEL_REF);
}

+export function applyMistralProviderConfig(cfg: OpenClawConfig): OpenClawConfig {
+  const models = { ...cfg.agents?.defaults?.models };
+  models[MISTRAL_DEFAULT_MODEL_REF] = {
+    ...models[MISTRAL_DEFAULT_MODEL_REF],
+    alias: models[MISTRAL_DEFAULT_MODEL_REF]?.alias ?? "Mistral",
+  };
+
+  const defaultModel = buildMistralModelDefinition();
+
+  return applyProviderConfigWithDefaultModel(cfg, {
+    agentModels: models,
+    providerId: "mistral",
+    api: "openai-completions",
+    baseUrl: MISTRAL_BASE_URL,
+    defaultModel,
+    defaultModelId: MISTRAL_DEFAULT_MODEL_ID,
+  });
+}
+
+export function applyMistralConfig(cfg: OpenClawConfig): OpenClawConfig {
+  const next = applyMistralProviderConfig(cfg);
+  return applyAgentDefaultModelPrimary(next, MISTRAL_DEFAULT_MODEL_REF);
+}
+
export function applyAuthProfileConfig(
  cfg: OpenClawConfig,
  params: {
@@ -5,7 +5,7 @@ import { resolveOpenClawAgentDir } from "../agents/agent-paths.js";
import { upsertAuthProfile } from "../agents/auth-profiles.js";
import { resolveStateDir } from "../config/paths.js";
export { CLOUDFLARE_AI_GATEWAY_DEFAULT_MODEL_REF } from "../agents/cloudflare-ai-gateway.js";
-export { XAI_DEFAULT_MODEL_REF } from "./onboard-auth.models.js";
+export { MISTRAL_DEFAULT_MODEL_REF, XAI_DEFAULT_MODEL_REF } from "./onboard-auth.models.js";

const resolveAuthAgentDir = (agentDir?: string) => agentDir ?? resolveOpenClawAgentDir();
@@ -360,3 +360,15 @@ export function setXaiApiKey(key: string, agentDir?: string) {
    agentDir: resolveAuthAgentDir(agentDir),
  });
}
+
+export async function setMistralApiKey(key: string, agentDir?: string) {
+  upsertAuthProfile({
+    profileId: "mistral:default",
+    credential: {
+      type: "api_key",
+      provider: "mistral",
+      key,
+    },
+    agentDir: resolveAuthAgentDir(agentDir),
+  });
+}
@@ -137,6 +137,30 @@ export function buildMoonshotModelDefinition(): ModelDefinitionConfig {
  };
}

export const MISTRAL_BASE_URL = "https://api.mistral.ai/v1";
export const MISTRAL_DEFAULT_MODEL_ID = "mistral-large-latest";
export const MISTRAL_DEFAULT_MODEL_REF = `mistral/${MISTRAL_DEFAULT_MODEL_ID}`;
export const MISTRAL_DEFAULT_CONTEXT_WINDOW = 262144;
export const MISTRAL_DEFAULT_MAX_TOKENS = 262144;
export const MISTRAL_DEFAULT_COST = {
  input: 0,
  output: 0,
  cacheRead: 0,
  cacheWrite: 0,
};

export function buildMistralModelDefinition(): ModelDefinitionConfig {
  return {
    id: MISTRAL_DEFAULT_MODEL_ID,
    name: "Mistral Large",
    reasoning: false,
    input: ["text", "image"],
    cost: MISTRAL_DEFAULT_COST,
    contextWindow: MISTRAL_DEFAULT_CONTEXT_WINDOW,
    maxTokens: MISTRAL_DEFAULT_MAX_TOKENS,
  };
}

export function buildZaiModelDefinition(params: {
  id: string;
  name?: string;
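Aside: the default model ref above is composed at module load from the provider id and model id; a trivial standalone restatement (constants copied from the hunk) showing the literal value that the onboarding tests later assert:

```typescript
// Constants copied from the hunk above; the ref is just "<provider>/<model id>".
const MISTRAL_DEFAULT_MODEL_ID = "mistral-large-latest";
const MISTRAL_DEFAULT_MODEL_REF = `mistral/${MISTRAL_DEFAULT_MODEL_ID}`;

console.log(MISTRAL_DEFAULT_MODEL_REF); // "mistral/mistral-large-latest"
```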
@@ -7,6 +7,8 @@ import type { OpenClawConfig } from "../config/config.js";
import {
  applyAuthProfileConfig,
  applyLitellmProviderConfig,
  applyMistralConfig,
  applyMistralProviderConfig,
  applyMinimaxApiConfig,
  applyMinimaxApiProviderConfig,
  applyOpencodeZenConfig,
@@ -22,6 +24,7 @@ import {
  applyZaiConfig,
  applyZaiProviderConfig,
  OPENROUTER_DEFAULT_MODEL_REF,
  MISTRAL_DEFAULT_MODEL_REF,
  SYNTHETIC_DEFAULT_MODEL_ID,
  SYNTHETIC_DEFAULT_MODEL_REF,
  XAI_DEFAULT_MODEL_REF,
@@ -540,9 +543,46 @@ describe("applyXaiProviderConfig", () => {
  });
});

describe("applyMistralConfig", () => {
  it("adds Mistral provider with correct settings", () => {
    const cfg = applyMistralConfig({});
    expect(cfg.models?.providers?.mistral).toMatchObject({
      baseUrl: "https://api.mistral.ai/v1",
      api: "openai-completions",
    });
    expect(cfg.agents?.defaults?.model?.primary).toBe(MISTRAL_DEFAULT_MODEL_REF);
  });
});

describe("applyMistralProviderConfig", () => {
  it("merges Mistral models and keeps existing provider overrides", () => {
    const cfg = applyMistralProviderConfig(
      createLegacyProviderConfig({
        providerId: "mistral",
        api: "anthropic-messages",
        modelId: "custom-model",
        modelName: "Custom",
      }),
    );

    expect(cfg.models?.providers?.mistral?.baseUrl).toBe("https://api.mistral.ai/v1");
    expect(cfg.models?.providers?.mistral?.api).toBe("openai-completions");
    expect(cfg.models?.providers?.mistral?.apiKey).toBe("old-key");
    expect(cfg.models?.providers?.mistral?.models.map((m) => m.id)).toEqual([
      "custom-model",
      "mistral-large-latest",
    ]);
    const mistralDefault = cfg.models?.providers?.mistral?.models.find(
      (model) => model.id === "mistral-large-latest",
    );
    expect(mistralDefault?.contextWindow).toBe(262144);
    expect(mistralDefault?.maxTokens).toBe(262144);
  });
});

describe("fallback preservation helpers", () => {
  it("preserves existing model fallbacks", () => {
-    const fallbackCases = [applyMinimaxApiConfig, applyXaiConfig] as const;
+    const fallbackCases = [applyMinimaxApiConfig, applyXaiConfig, applyMistralConfig] as const;
    for (const applyConfig of fallbackCases) {
      const cfg = applyConfig(createConfigWithFallbacks());
      expectFallbacksPreserved(cfg);
@@ -563,6 +603,11 @@ describe("provider alias defaults", () => {
      modelRef: XAI_DEFAULT_MODEL_REF,
      alias: "Grok",
    },
    {
      applyConfig: () => applyMistralProviderConfig({}),
      modelRef: MISTRAL_DEFAULT_MODEL_REF,
      alias: "Mistral",
    },
  ] as const;
  for (const testCase of aliasCases) {
    const cfg = testCase.applyConfig();
@@ -15,6 +15,8 @@ export {
  applyKimiCodeProviderConfig,
  applyLitellmConfig,
  applyLitellmProviderConfig,
  applyMistralConfig,
  applyMistralProviderConfig,
  applyMoonshotConfig,
  applyMoonshotConfigCn,
  applyMoonshotProviderConfig,
@@ -62,6 +64,7 @@ export {
  setLitellmApiKey,
  setKimiCodingApiKey,
  setMinimaxApiKey,
  setMistralApiKey,
  setMoonshotApiKey,
  setOpencodeZenApiKey,
  setOpenrouterApiKey,
@@ -79,11 +82,13 @@ export {
  XIAOMI_DEFAULT_MODEL_REF,
  ZAI_DEFAULT_MODEL_REF,
  TOGETHER_DEFAULT_MODEL_REF,
  MISTRAL_DEFAULT_MODEL_REF,
  XAI_DEFAULT_MODEL_REF,
} from "./onboard-auth.credentials.js";
export {
  buildMinimaxApiModelDefinition,
  buildMinimaxModelDefinition,
  buildMistralModelDefinition,
  buildMoonshotModelDefinition,
  buildZaiModelDefinition,
  DEFAULT_MINIMAX_BASE_URL,
@@ -100,6 +105,8 @@ export {
  MOONSHOT_BASE_URL,
  MOONSHOT_DEFAULT_MODEL_ID,
  MOONSHOT_DEFAULT_MODEL_REF,
  MISTRAL_BASE_URL,
  MISTRAL_DEFAULT_MODEL_ID,
  resolveZaiBaseUrl,
  ZAI_CODING_CN_BASE_URL,
  ZAI_DEFAULT_MODEL_ID,
@@ -253,6 +253,23 @@ describe("onboard (non-interactive): provider auth", () => {
    });
  }, 60_000);

  it("infers Mistral auth choice from --mistral-api-key and sets default model", async () => {
    await withOnboardEnv("openclaw-onboard-mistral-infer-", async (env) => {
      const cfg = await runOnboardingAndReadConfig(env, {
        mistralApiKey: "mistral-test-key",
      });

      expect(cfg.auth?.profiles?.["mistral:default"]?.provider).toBe("mistral");
      expect(cfg.auth?.profiles?.["mistral:default"]?.mode).toBe("api_key");
      expect(cfg.agents?.defaults?.model?.primary).toBe("mistral/mistral-large-latest");
      await expectApiKeyProfile({
        profileId: "mistral:default",
        provider: "mistral",
        key: "mistral-test-key",
      });
    });
  }, 60_000);

  it("stores Volcano Engine API key and sets default model", async () => {
    await withOnboardEnv("openclaw-onboard-volcengine-", async (env) => {
      const cfg = await runOnboardingAndReadConfig(env, {
@@ -12,6 +12,7 @@ type AuthChoiceFlagOptions = Pick<
  | "anthropicApiKey"
  | "geminiApiKey"
  | "openaiApiKey"
  | "mistralApiKey"
  | "openrouterApiKey"
  | "aiGatewayApiKey"
  | "cloudflareAiGatewayApiKey"
@@ -27,6 +27,7 @@ import {
  applyHuggingfaceConfig,
  applyVercelAiGatewayConfig,
  applyLitellmConfig,
  applyMistralConfig,
  applyXaiConfig,
  applyXiaomiConfig,
  applyZaiConfig,
@@ -36,6 +37,7 @@ import {
  setGeminiApiKey,
  setKimiCodingApiKey,
  setLitellmApiKey,
  setMistralApiKey,
  setMinimaxApiKey,
  setMoonshotApiKey,
  setOpencodeZenApiKey,
@@ -304,6 +306,29 @@ export async function applyNonInteractiveAuthChoice(params: {
    return applyXaiConfig(nextConfig);
  }

  if (authChoice === "mistral-api-key") {
    const resolved = await resolveNonInteractiveApiKey({
      provider: "mistral",
      cfg: baseConfig,
      flagValue: opts.mistralApiKey,
      flagName: "--mistral-api-key",
      envVar: "MISTRAL_API_KEY",
      runtime,
    });
    if (!resolved) {
      return null;
    }
    if (resolved.source !== "profile") {
      await setMistralApiKey(resolved.key);
    }
    nextConfig = applyAuthProfileConfig(nextConfig, {
      profileId: "mistral:default",
      provider: "mistral",
      mode: "api_key",
    });
    return applyMistralConfig(nextConfig);
  }

  if (authChoice === "volcengine-api-key") {
    const resolved = await resolveNonInteractiveApiKey({
      provider: "volcengine",
@@ -4,6 +4,7 @@ type OnboardProviderAuthOptionKey = keyof Pick<
  OnboardOptions,
  | "anthropicApiKey"
  | "openaiApiKey"
  | "mistralApiKey"
  | "openrouterApiKey"
  | "aiGatewayApiKey"
  | "cloudflareAiGatewayApiKey"
@@ -49,6 +50,13 @@ export const ONBOARD_PROVIDER_AUTH_FLAGS: ReadonlyArray<OnboardProviderAuthFlag>
    cliOption: "--openai-api-key <key>",
    description: "OpenAI API key",
  },
  {
    optionKey: "mistralApiKey",
    authChoice: "mistral-api-key",
    cliFlag: "--mistral-api-key",
    cliOption: "--mistral-api-key <key>",
    description: "Mistral API key",
  },
  {
    optionKey: "openrouterApiKey",
    authChoice: "openrouter-api-key",
@@ -45,6 +45,7 @@ export type AuthChoice =
  | "copilot-proxy"
  | "qwen-portal"
  | "xai-api-key"
  | "mistral-api-key"
  | "volcengine-api-key"
  | "byteplus-api-key"
  | "qianfan-api-key"
@@ -68,6 +69,7 @@ export type AuthChoiceGroupId =
  | "minimax"
  | "synthetic"
  | "venice"
  | "mistral"
  | "qwen"
  | "together"
  | "huggingface"
@@ -105,6 +107,7 @@ export type OnboardOptions = {
  tokenExpiresIn?: string;
  anthropicApiKey?: string;
  openaiApiKey?: string;
  mistralApiKey?: string;
  openrouterApiKey?: string;
  litellmApiKey?: string;
  aiGatewayApiKey?: string;
@@ -37,6 +37,20 @@ describe("config schema regressions", () => {
    expect(res.ok).toBe(true);
  });

  it('accepts memorySearch provider "mistral"', () => {
    const res = validateConfigObject({
      agents: {
        defaults: {
          memorySearch: {
            provider: "mistral",
          },
        },
      },
    });

    expect(res.ok).toBe(true);
  });

  it("accepts safe iMessage remoteHost", () => {
    const res = validateConfigObject({
      channels: {
@@ -661,7 +661,7 @@ export const FIELD_HELP: Record<string, string> = {
  "agents.defaults.memorySearch.experimental.sessionMemory":
    "Indexes session transcripts into memory search so responses can reference prior chat turns. Keep this off unless transcript recall is needed, because indexing cost and storage usage both increase.",
  "agents.defaults.memorySearch.provider":
-    'Selects the embedding backend used to build/query memory vectors: "openai", "gemini", "voyage", or "local". Keep your most reliable provider here and configure fallback for resilience.',
+    'Selects the embedding backend used to build/query memory vectors: "openai", "gemini", "voyage", "mistral", or "local". Keep your most reliable provider here and configure fallback for resilience.',
  "agents.defaults.memorySearch.model":
    "Embedding model override used by the selected memory provider when a non-default model is required. Set this only when you need explicit recall quality/cost tuning beyond provider defaults.",
  "agents.defaults.memorySearch.remote.baseUrl":
@@ -683,7 +683,7 @@ export const FIELD_HELP: Record<string, string> = {
  "agents.defaults.memorySearch.local.modelPath":
    "Specifies the local embedding model source for local memory search, such as a GGUF file path or `hf:` URI. Use this only when provider is `local`, and verify model compatibility before large index rebuilds.",
  "agents.defaults.memorySearch.fallback":
-    'Backup provider used when primary embeddings fail: "openai", "gemini", "local", or "none". Set a real fallback for production reliability; use "none" only if you prefer explicit failures.',
+    'Backup provider used when primary embeddings fail: "openai", "gemini", "voyage", "mistral", "local", or "none". Set a real fallback for production reliability; use "none" only if you prefer explicit failures.',
  "agents.defaults.memorySearch.store.path":
    "Sets where the SQLite memory index is stored on disk for each agent. Keep the default `~/.openclaw/memory/{agentId}.sqlite` unless you need custom storage placement or backup policy alignment.",
  "agents.defaults.memorySearch.store.vector.enabled":
@@ -314,7 +314,7 @@ export type MemorySearchConfig = {
    sessionMemory?: boolean;
  };
  /** Embedding provider mode. */
-  provider?: "openai" | "gemini" | "local" | "voyage";
+  provider?: "openai" | "gemini" | "local" | "voyage" | "mistral";
  remote?: {
    baseUrl?: string;
    apiKey?: string;
@@ -333,7 +333,7 @@ export type MemorySearchConfig = {
    };
  };
  /** Fallback behavior when embeddings fail. */
-  fallback?: "openai" | "gemini" | "local" | "voyage" | "none";
+  fallback?: "openai" | "gemini" | "local" | "voyage" | "mistral" | "none";
  /** Embedding model id (remote) or alias (local). */
  model?: string;
  /** Local embedding settings (node-llama-cpp). */
@@ -28,6 +28,7 @@ import { DEFAULT_AGENT_ID } from "../routing/session-key.js";
import { GATEWAY_CLIENT_MODES, GATEWAY_CLIENT_NAMES } from "../utils/message-channel.js";
import { GatewayClient } from "./client.js";
import { renderCatNoncePngBase64 } from "./live-image-probe.js";
import { hasExpectedToolNonce, shouldRetryToolReadProbe } from "./live-tool-probe-utils.js";
import { startGatewayServer } from "./server.js";
import { extractPayloadText } from "./test-helpers.agent-results.js";
@@ -680,38 +681,75 @@ async function runGatewayModelSuite(params: GatewayModelSuiteParams) {
  // Real tool invocation: force the agent to Read a local file and echo a nonce.
  logProgress(`${progressLabel}: tool-read`);
  const runIdTool = randomUUID();
-  const toolProbe = await client.request<AgentFinalPayload>(
-    "agent",
-    {
-      sessionKey,
-      idempotencyKey: `idem-${runIdTool}-tool`,
-      message:
-        "OpenClaw live tool probe (local, safe): " +
-        `use the tool named \`read\` (or \`Read\`) with JSON arguments {"path":"${toolProbePath}"}. ` +
-        "Then reply with the two nonce values you read (include both).",
-      thinking: params.thinkingLevel,
-      deliver: false,
-    },
-    { expectFinal: true },
-  );
-  if (toolProbe?.status !== "ok") {
-    throw new Error(`tool probe failed: status=${String(toolProbe?.status)}`);
-  }
-  const toolText = extractPayloadText(toolProbe?.result);
-  if (
-    isEmptyStreamText(toolText) &&
-    (model.provider === "minimax" || model.provider === "openai-codex")
+  const maxToolReadAttempts = 3;
+  let toolText = "";
+  for (
+    let toolReadAttempt = 0;
+    toolReadAttempt < maxToolReadAttempts;
+    toolReadAttempt += 1
  ) {
-    logProgress(`${progressLabel}: skip (${model.provider} empty response)`);
-    break;
+    const strictReply = toolReadAttempt > 0;
+    const toolProbe = await client.request<AgentFinalPayload>(
+      "agent",
+      {
+        sessionKey,
+        idempotencyKey: `idem-${runIdTool}-tool-${toolReadAttempt + 1}`,
+        message: strictReply
+          ? "OpenClaw live tool probe (local, safe): " +
+            `use the tool named \`read\` (or \`Read\`) with JSON arguments {"path":"${toolProbePath}"}. ` +
+            `Then reply with exactly: ${nonceA} ${nonceB}. No extra text.`
+          : "OpenClaw live tool probe (local, safe): " +
+            `use the tool named \`read\` (or \`Read\`) with JSON arguments {"path":"${toolProbePath}"}. ` +
+            "Then reply with the two nonce values you read (include both).",
+        thinking: params.thinkingLevel,
+        deliver: false,
+      },
+      { expectFinal: true },
+    );
+    if (toolProbe?.status !== "ok") {
+      if (toolReadAttempt + 1 < maxToolReadAttempts) {
+        logProgress(
+          `${progressLabel}: tool-read retry (${toolReadAttempt + 2}/${maxToolReadAttempts}) status=${String(toolProbe?.status)}`,
+        );
+        continue;
+      }
+      throw new Error(`tool probe failed: status=${String(toolProbe?.status)}`);
+    }
+    toolText = extractPayloadText(toolProbe?.result);
+    if (
+      isEmptyStreamText(toolText) &&
+      (model.provider === "minimax" || model.provider === "openai-codex")
+    ) {
+      logProgress(`${progressLabel}: skip (${model.provider} empty response)`);
+      break;
+    }
+    assertNoReasoningTags({
+      text: toolText,
+      model: modelKey,
+      phase: "tool-read",
+      label: params.label,
+    });
+    if (hasExpectedToolNonce(toolText, nonceA, nonceB)) {
+      break;
+    }
+    if (
+      shouldRetryToolReadProbe({
+        text: toolText,
+        nonceA,
+        nonceB,
+        provider: model.provider,
+        attempt: toolReadAttempt,
+        maxAttempts: maxToolReadAttempts,
+      })
+    ) {
+      logProgress(
+        `${progressLabel}: tool-read retry (${toolReadAttempt + 2}/${maxToolReadAttempts}) malformed tool output`,
+      );
+      continue;
+    }
+    throw new Error(`tool probe missing nonce: ${toolText}`);
  }
  assertNoReasoningTags({
    text: toolText,
    model: modelKey,
    phase: "tool-read",
    label: params.label,
  });
-  if (!toolText.includes(nonceA) || !toolText.includes(nonceB)) {
+  if (!hasExpectedToolNonce(toolText, nonceA, nonceB)) {
    throw new Error(`tool probe missing nonce: ${toolText}`);
  }
src/gateway/live-tool-probe-utils.test.ts (new file, 48 lines)
@@ -0,0 +1,48 @@
import { describe, expect, it } from "vitest";
import { hasExpectedToolNonce, shouldRetryToolReadProbe } from "./live-tool-probe-utils.js";

describe("live tool probe utils", () => {
  it("matches nonce pair when both are present", () => {
    expect(hasExpectedToolNonce("value a-1 and b-2", "a-1", "b-2")).toBe(true);
    expect(hasExpectedToolNonce("value a-1 only", "a-1", "b-2")).toBe(false);
  });

  it("retries malformed tool output when attempts remain", () => {
    expect(
      shouldRetryToolReadProbe({
        text: "read[object Object],[object Object]",
        nonceA: "nonce-a",
        nonceB: "nonce-b",
        provider: "mistral",
        attempt: 0,
        maxAttempts: 3,
      }),
    ).toBe(true);
  });

  it("does not retry once max attempts are exhausted", () => {
    expect(
      shouldRetryToolReadProbe({
        text: "read[object Object],[object Object]",
        nonceA: "nonce-a",
        nonceB: "nonce-b",
        provider: "mistral",
        attempt: 2,
        maxAttempts: 3,
      }),
    ).toBe(false);
  });

  it("does not retry when nonce pair is already present", () => {
    expect(
      shouldRetryToolReadProbe({
        text: "nonce-a nonce-b",
        nonceA: "nonce-a",
        nonceB: "nonce-b",
        provider: "mistral",
        attempt: 0,
        maxAttempts: 3,
      }),
    ).toBe(false);
  });
});
src/gateway/live-tool-probe-utils.ts (new file, 34 lines)
@@ -0,0 +1,34 @@
export function hasExpectedToolNonce(text: string, nonceA: string, nonceB: string): boolean {
  return text.includes(nonceA) && text.includes(nonceB);
}

export function shouldRetryToolReadProbe(params: {
  text: string;
  nonceA: string;
  nonceB: string;
  provider: string;
  attempt: number;
  maxAttempts: number;
}): boolean {
  if (params.attempt + 1 >= params.maxAttempts) {
    return false;
  }
  if (hasExpectedToolNonce(params.text, params.nonceA, params.nonceB)) {
    return false;
  }
  const trimmed = params.text.trim();
  if (!trimmed) {
    return true;
  }
  const lower = trimmed.toLowerCase();
  if (trimmed.includes("[object Object]")) {
    return true;
  }
  if (/\bread\s*\[/.test(lower) || /\btool\b/.test(lower) || /\bfunction\b/.test(lower)) {
    return true;
  }
  if (params.provider === "mistral" && (lower.includes("noncea=") || lower.includes("nonceb="))) {
    return true;
  }
  return false;
}
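Aside: both helpers are pure, so the retry decision can be sanity-checked outside vitest as well; a self-contained sketch with the two functions re-declared verbatim from the new file:

```typescript
// Verbatim copies of the helpers from live-tool-probe-utils.ts, so this snippet
// runs standalone.
function hasExpectedToolNonce(text: string, nonceA: string, nonceB: string): boolean {
  return text.includes(nonceA) && text.includes(nonceB);
}

function shouldRetryToolReadProbe(params: {
  text: string;
  nonceA: string;
  nonceB: string;
  provider: string;
  attempt: number;
  maxAttempts: number;
}): boolean {
  // Never retry once the attempt budget is spent.
  if (params.attempt + 1 >= params.maxAttempts) {
    return false;
  }
  // A reply that already carries both nonces needs no retry.
  if (hasExpectedToolNonce(params.text, params.nonceA, params.nonceB)) {
    return false;
  }
  const trimmed = params.text.trim();
  if (!trimmed) {
    return true;
  }
  const lower = trimmed.toLowerCase();
  // Stringified tool-call objects are the malformed case this targets.
  if (trimmed.includes("[object Object]")) {
    return true;
  }
  if (/\bread\s*\[/.test(lower) || /\btool\b/.test(lower) || /\bfunction\b/.test(lower)) {
    return true;
  }
  if (params.provider === "mistral" && (lower.includes("noncea=") || lower.includes("nonceb="))) {
    return true;
  }
  return false;
}

// A malformed Mistral reply with attempts remaining is retried;
// the same reply on the final attempt is not.
const malformed = { text: "read[object Object]", nonceA: "a1", nonceB: "b2", provider: "mistral" };
console.log(shouldRetryToolReadProbe({ ...malformed, attempt: 0, maxAttempts: 3 })); // true
console.log(shouldRetryToolReadProbe({ ...malformed, attempt: 2, maxAttempts: 3 })); // false
```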
src/media-understanding/defaults.test.ts (new file, 14 lines)
@@ -0,0 +1,14 @@
import { describe, expect, it } from "vitest";
import { AUTO_AUDIO_KEY_PROVIDERS, DEFAULT_AUDIO_MODELS } from "./defaults.js";

describe("DEFAULT_AUDIO_MODELS", () => {
  it("includes Mistral Voxtral default", () => {
    expect(DEFAULT_AUDIO_MODELS.mistral).toBe("voxtral-mini-latest");
  });
});

describe("AUTO_AUDIO_KEY_PROVIDERS", () => {
  it("includes mistral auto key resolution", () => {
    expect(AUTO_AUDIO_KEY_PROVIDERS).toContain("mistral");
  });
});

@@ -31,9 +31,16 @@ export const DEFAULT_AUDIO_MODELS: Record<string, string> = {
  groq: "whisper-large-v3-turbo",
  openai: "gpt-4o-mini-transcribe",
  deepgram: "nova-3",
  mistral: "voxtral-mini-latest",
};

-export const AUTO_AUDIO_KEY_PROVIDERS = ["openai", "groq", "deepgram", "google"] as const;
+export const AUTO_AUDIO_KEY_PROVIDERS = [
+  "openai",
+  "groq",
+  "deepgram",
+  "google",
+  "mistral",
+] as const;
export const AUTO_IMAGE_KEY_PROVIDERS = [
  "openai",
  "anthropic",
src/media-understanding/providers/index.test.ts (new file, 19 lines)
@@ -0,0 +1,19 @@
import { describe, expect, it } from "vitest";
import { buildMediaUnderstandingRegistry, getMediaUnderstandingProvider } from "./index.js";

describe("media-understanding provider registry", () => {
  it("registers the Mistral provider", () => {
    const registry = buildMediaUnderstandingRegistry();
    const provider = getMediaUnderstandingProvider("mistral", registry);

    expect(provider?.id).toBe("mistral");
    expect(provider?.capabilities).toEqual(["audio"]);
  });

  it("keeps provider id normalization behavior", () => {
    const registry = buildMediaUnderstandingRegistry();
    const provider = getMediaUnderstandingProvider("gemini", registry);

    expect(provider?.id).toBe("google");
  });
});

@@ -5,6 +5,7 @@ import { deepgramProvider } from "./deepgram/index.js";
import { googleProvider } from "./google/index.js";
import { groqProvider } from "./groq/index.js";
import { minimaxProvider } from "./minimax/index.js";
import { mistralProvider } from "./mistral/index.js";
import { openaiProvider } from "./openai/index.js";
import { zaiProvider } from "./zai/index.js";

@@ -14,6 +15,7 @@ const PROVIDERS: MediaUnderstandingProvider[] = [
  googleProvider,
  anthropicProvider,
  minimaxProvider,
  mistralProvider,
  zaiProvider,
  deepgramProvider,
];
src/media-understanding/providers/mistral/index.test.ts (new file, 46 lines)
@@ -0,0 +1,46 @@
import { describe, expect, it } from "vitest";
import {
  createRequestCaptureJsonFetch,
  installPinnedHostnameTestHooks,
} from "../audio.test-helpers.js";
import { mistralProvider } from "./index.js";

installPinnedHostnameTestHooks();

describe("mistralProvider", () => {
  it("has expected provider metadata", () => {
    expect(mistralProvider.id).toBe("mistral");
    expect(mistralProvider.capabilities).toEqual(["audio"]);
    expect(mistralProvider.transcribeAudio).toBeDefined();
  });

  it("uses Mistral base URL by default", async () => {
    const { fetchFn, getRequest } = createRequestCaptureJsonFetch({ text: "bonjour" });

    const result = await mistralProvider.transcribeAudio!({
      buffer: Buffer.from("audio-bytes"),
      fileName: "voice.ogg",
      apiKey: "test-mistral-key",
      timeoutMs: 5000,
      fetchFn,
    });

    expect(getRequest().url).toBe("https://api.mistral.ai/v1/audio/transcriptions");
    expect(result.text).toBe("bonjour");
  });

  it("allows overriding baseUrl", async () => {
    const { fetchFn, getRequest } = createRequestCaptureJsonFetch({ text: "ok" });

    await mistralProvider.transcribeAudio!({
      buffer: Buffer.from("audio"),
      fileName: "note.mp3",
      apiKey: "key",
      timeoutMs: 1000,
      baseUrl: "https://custom.mistral.example/v1",
      fetchFn,
    });

    expect(getRequest().url).toBe("https://custom.mistral.example/v1/audio/transcriptions");
  });
});
src/media-understanding/providers/mistral/index.ts (new file, 14 lines)
@@ -0,0 +1,14 @@
import type { MediaUnderstandingProvider } from "../../types.js";
import { transcribeOpenAiCompatibleAudio } from "../openai/audio.js";

const DEFAULT_MISTRAL_AUDIO_BASE_URL = "https://api.mistral.ai/v1";

export const mistralProvider: MediaUnderstandingProvider = {
  id: "mistral",
  capabilities: ["audio"],
  transcribeAudio: (req) =>
    transcribeOpenAiCompatibleAudio({
      ...req,
      baseUrl: req.baseUrl ?? DEFAULT_MISTRAL_AUDIO_BASE_URL,
    }),
};
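Aside: the provider only overrides the base URL; the transcription endpoint is the OpenAI-compatible path appended to it, as the tests assert for both the default and an overridden base. The exact join logic lives in `transcribeOpenAiCompatibleAudio`; this small helper is illustrative, not from the codebase:

```typescript
// Illustrative sketch of how the Voxtral transcription URL is derived from an
// optional baseUrl override, mirroring the default-or-override behavior above.
const DEFAULT_MISTRAL_AUDIO_BASE_URL = "https://api.mistral.ai/v1";

function resolveTranscriptionUrl(baseUrl?: string): string {
  // Trailing slashes are trimmed so the joined path has exactly one separator.
  const base = (baseUrl ?? DEFAULT_MISTRAL_AUDIO_BASE_URL).replace(/\/+$/, "");
  return `${base}/audio/transcriptions`;
}

console.log(resolveTranscriptionUrl()); // "https://api.mistral.ai/v1/audio/transcriptions"
console.log(resolveTranscriptionUrl("https://custom.mistral.example/v1"));
```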
@@ -107,4 +107,55 @@ describe("runCapability auto audio entries", () => {
    expect(result.outputs[0]?.text).toBe("ok");
    expect(seenModel).toBe("whisper-1");
  });

  it("uses mistral when only mistral key is configured", async () => {
    let runResult: Awaited<ReturnType<typeof runCapability>> | undefined;
    await withAudioFixture("openclaw-auto-audio-mistral", async ({ ctx, media, cache }) => {
      const providerRegistry = buildProviderRegistry({
        openai: {
          id: "openai",
          capabilities: ["audio"],
          transcribeAudio: async () => ({ text: "openai", model: "gpt-4o-mini-transcribe" }),
        },
        mistral: {
          id: "mistral",
          capabilities: ["audio"],
          transcribeAudio: async (req) => ({ text: "mistral", model: req.model ?? "unknown" }),
        },
      });
      const cfg = {
        models: {
          providers: {
            mistral: {
              apiKey: "mistral-test-key",
              models: [],
            },
          },
        },
        tools: {
          media: {
            audio: {
              enabled: true,
            },
          },
        },
      } as unknown as OpenClawConfig;

      runResult = await runCapability({
        capability: "audio",
        cfg,
        ctx,
        attachments: cache,
        media,
        providerRegistry,
      });
    });
    if (!runResult) {
      throw new Error("Expected auto audio mistral result");
    }
    expect(runResult.decision.outcome).toBe("success");
    expect(runResult.outputs[0]?.provider).toBe("mistral");
    expect(runResult.outputs[0]?.model).toBe("voxtral-mini-latest");
    expect(runResult.outputs[0]?.text).toBe("mistral");
  });
});
@@ -16,6 +16,12 @@ describe("extractBatchErrorMessage", () => {
      extractBatchErrorMessage([{ response: { body: { error: { message: "nested-only" } } } }, {}]),
    ).toBe("nested-only");
  });

  it("accepts plain string response bodies", () => {
    expect(extractBatchErrorMessage([{ response: { body: "provider plain-text error" } }])).toBe(
      "provider plain-text error",
    );
  });
});

describe("formatUnavailableBatchError", () => {
@@ -11,6 +11,9 @@ type BatchOutputErrorLike = {

function getResponseErrorMessage(line: BatchOutputErrorLike | undefined): string | undefined {
  const body = line?.response?.body;
+  if (typeof body === "string") {
+    return body || undefined;
+  }
  if (!body || typeof body !== "object") {
    return undefined;
  }
src/memory/embeddings-mistral.ts (new file, 70 lines)
@@ -0,0 +1,70 @@
import type { SsrFPolicy } from "../infra/net/ssrf.js";
import { resolveRemoteEmbeddingBearerClient } from "./embeddings-remote-client.js";
import { fetchRemoteEmbeddingVectors } from "./embeddings-remote-fetch.js";
import type { EmbeddingProvider, EmbeddingProviderOptions } from "./embeddings.js";

export type MistralEmbeddingClient = {
  baseUrl: string;
  headers: Record<string, string>;
  ssrfPolicy?: SsrFPolicy;
  model: string;
};

export const DEFAULT_MISTRAL_EMBEDDING_MODEL = "mistral-embed";
const DEFAULT_MISTRAL_BASE_URL = "https://api.mistral.ai/v1";

export function normalizeMistralModel(model: string): string {
  const trimmed = model.trim();
  if (!trimmed) {
    return DEFAULT_MISTRAL_EMBEDDING_MODEL;
  }
  if (trimmed.startsWith("mistral/")) {
    return trimmed.slice("mistral/".length);
  }
  return trimmed;
}

export async function createMistralEmbeddingProvider(
  options: EmbeddingProviderOptions,
): Promise<{ provider: EmbeddingProvider; client: MistralEmbeddingClient }> {
  const client = await resolveMistralEmbeddingClient(options);
  const url = `${client.baseUrl.replace(/\/$/, "")}/embeddings`;

  const embed = async (input: string[]): Promise<number[][]> => {
    if (input.length === 0) {
      return [];
    }
    return await fetchRemoteEmbeddingVectors({
      url,
      headers: client.headers,
      ssrfPolicy: client.ssrfPolicy,
      body: { model: client.model, input },
      errorPrefix: "mistral embeddings failed",
    });
  };

  return {
    provider: {
      id: "mistral",
      model: client.model,
      embedQuery: async (text) => {
        const [vec] = await embed([text]);
        return vec ?? [];
      },
      embedBatch: embed,
    },
    client,
  };
}

export async function resolveMistralEmbeddingClient(
  options: EmbeddingProviderOptions,
): Promise<MistralEmbeddingClient> {
  const { baseUrl, headers, ssrfPolicy } = await resolveRemoteEmbeddingBearerClient({
    provider: "mistral",
    options,
    defaultBaseUrl: DEFAULT_MISTRAL_BASE_URL,
  });
  const model = normalizeMistralModel(options.model);
  return { baseUrl, headers, ssrfPolicy, model };
}
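Aside: `normalizeMistralModel` is the only non-trivial logic in the new file; it strips a `mistral/` prefix so config-style refs map onto bare API model ids and falls back to `mistral-embed` for blank input. A standalone copy with its edge cases:

```typescript
// Copied from embeddings-mistral.ts above; runs standalone.
const DEFAULT_MISTRAL_EMBEDDING_MODEL = "mistral-embed";

function normalizeMistralModel(model: string): string {
  const trimmed = model.trim();
  // Blank input falls back to the default embedding model.
  if (!trimmed) {
    return DEFAULT_MISTRAL_EMBEDDING_MODEL;
  }
  // Config-style refs ("mistral/<id>") are reduced to the bare API model id.
  if (trimmed.startsWith("mistral/")) {
    return trimmed.slice("mistral/".length);
  }
  return trimmed;
}

console.log(normalizeMistralModel("mistral/mistral-embed")); // "mistral-embed"
console.log(normalizeMistralModel("  ")); // "mistral-embed"
console.log(normalizeMistralModel("custom-embed")); // "custom-embed"
```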
@@ -3,7 +3,7 @@ import type { SsrFPolicy } from "../infra/net/ssrf.js";
import type { EmbeddingProviderOptions } from "./embeddings.js";
import { buildRemoteBaseUrlPolicy } from "./remote-http.js";

-type RemoteEmbeddingProviderId = "openai" | "voyage";
+type RemoteEmbeddingProviderId = "openai" | "voyage" | "mistral";

export async function resolveRemoteEmbeddingBearerClient(params: {
  provider: RemoteEmbeddingProviderId;
@@ -66,7 +66,7 @@ function createLocalProvider(options?: { fallback?: "none" | "openai" }) {
|
||||
|
||||
function expectAutoSelectedProvider(
|
||||
result: Awaited<ReturnType<typeof createEmbeddingProvider>>,
|
||||
expectedId: "openai" | "gemini",
|
||||
expectedId: "openai" | "gemini" | "mistral",
|
||||
) {
|
||||
expect(result.requestedProvider).toBe("auto");
|
||||
const provider = requireProvider(result);
|
||||
@@ -205,6 +205,43 @@ describe("embedding provider remote overrides", () => {
    expect(headers["x-goog-api-key"]).toBe("gemini-key");
    expect(headers["Content-Type"]).toBe("application/json");
  });

  it("builds Mistral embeddings requests with bearer auth", async () => {
    const fetchMock = createFetchMock();
    vi.stubGlobal("fetch", fetchMock);
    mockResolvedProviderKey("provider-key");

    const cfg = {
      models: {
        providers: {
          mistral: {
            baseUrl: "https://api.mistral.ai/v1",
          },
        },
      },
    };

    const result = await createEmbeddingProvider({
      config: cfg as never,
      provider: "mistral",
      remote: {
        apiKey: "mistral-key",
      },
      model: "mistral/mistral-embed",
      fallback: "none",
    });

    const provider = requireProvider(result);
    await provider.embedQuery("hello");

    const url = fetchMock.mock.calls[0]?.[0];
    const init = fetchMock.mock.calls[0]?.[1] as RequestInit | undefined;
    expect(url).toBe("https://api.mistral.ai/v1/embeddings");
    const headers = (init?.headers ?? {}) as Record<string, string>;
    expect(headers.Authorization).toBe("Bearer mistral-key");
    const payload = JSON.parse((init?.body as string | undefined) ?? "{}") as { model?: string };
    expect(payload.model).toBe("mistral-embed");
  });
});

describe("embedding provider auto selection", () => {
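The `payload.model` assertion above depends on stripping the `mistral/` routing prefix before the request body is built. The first hunk in this PR calls `normalizeMistralModel` for this; the following is a guessed implementation, consistent with the test but not taken from the source:

```typescript
// Guessed implementation of normalizeMistralModel, consistent with the
// test expectation that "mistral/mistral-embed" becomes "mistral-embed".
function normalizeMistralModel(model?: string): string {
  const DEFAULT = "mistral-embed"; // assumed default model
  if (!model) {
    return DEFAULT;
  }
  return model.startsWith("mistral/") ? model.slice("mistral/".length) : model;
}
```

So `normalizeMistralModel("mistral/mistral-embed")` yields `"mistral-embed"`, and an unprefixed model name passes through unchanged.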
@@ -273,6 +310,23 @@ describe("embedding provider auto selection", () => {
    const payload = JSON.parse(init?.body as string) as { model?: string };
    expect(payload.model).toBe("text-embedding-3-small");
  });

  it("uses mistral when openai/gemini/voyage are missing", async () => {
    const fetchMock = createFetchMock();
    vi.stubGlobal("fetch", fetchMock);
    vi.mocked(authModule.resolveApiKeyForProvider).mockImplementation(async ({ provider }) => {
      if (provider === "mistral") {
        return { apiKey: "mistral-key", source: "env: MISTRAL_API_KEY", mode: "api-key" };
      }
      throw new Error(`No API key found for provider "${provider}".`);
    });

    const result = await createAutoProvider();
    const provider = expectAutoSelectedProvider(result, "mistral");
    await provider.embedQuery("hello");
    const [url] = fetchMock.mock.calls[0] ?? [];
    expect(url).toBe("https://api.mistral.ai/v1/embeddings");
  });
});

describe("embedding provider local fallback", () => {
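The auto-selection test above implies an ordering: the first remote provider whose API key resolves wins, and Mistral is tried after openai, gemini, and voyage. A minimal sketch of that loop (names are illustrative, not the actual OpenClaw implementation):

```typescript
// Illustrative auto-selection: probe providers in order and pick the first
// one with a resolvable API key, as the test above exercises for Mistral.
const AUTO_ORDER = ["openai", "gemini", "voyage", "mistral"] as const;
type RemoteId = (typeof AUTO_ORDER)[number];

async function autoSelect(
  resolveKey: (provider: RemoteId) => Promise<string | null>,
): Promise<RemoteId | null> {
  for (const provider of AUTO_ORDER) {
    // A resolvable key means the provider is usable; otherwise keep probing.
    if (await resolveKey(provider)) {
      return provider;
    }
  }
  return null; // caller falls back to local embeddings or reports an error
}
```

With only a Mistral key available, `autoSelect` lands on `"mistral"`, matching "uses mistral when openai/gemini/voyage are missing".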
@@ -300,6 +354,7 @@ describe("embedding provider local fallback", () => {
  it("mentions every remote provider in local setup guidance", async () => {
    mockMissingLocalEmbeddingDependency();
    await expect(createLocalProvider()).rejects.toThrow(/provider = "gemini"/i);
    await expect(createLocalProvider()).rejects.toThrow(/provider = "mistral"/i);
  });
});
@@ -4,6 +4,10 @@ import type { OpenClawConfig } from "../config/config.js";
import { formatErrorMessage } from "../infra/errors.js";
import { resolveUserPath } from "../utils.js";
import { createGeminiEmbeddingProvider, type GeminiEmbeddingClient } from "./embeddings-gemini.js";
import {
  createMistralEmbeddingProvider,
  type MistralEmbeddingClient,
} from "./embeddings-mistral.js";
import { createOpenAiEmbeddingProvider, type OpenAiEmbeddingClient } from "./embeddings-openai.js";
import { createVoyageEmbeddingProvider, type VoyageEmbeddingClient } from "./embeddings-voyage.js";
import { importNodeLlamaCpp } from "./node-llama.js";
@@ -18,6 +22,7 @@ function sanitizeAndNormalizeEmbedding(vec: number[]): number[] {
}

export type { GeminiEmbeddingClient } from "./embeddings-gemini.js";
export type { MistralEmbeddingClient } from "./embeddings-mistral.js";
export type { OpenAiEmbeddingClient } from "./embeddings-openai.js";
export type { VoyageEmbeddingClient } from "./embeddings-voyage.js";
@@ -29,11 +34,11 @@ export type EmbeddingProvider = {
  embedBatch: (texts: string[]) => Promise<number[][]>;
};

export type EmbeddingProviderId = "openai" | "local" | "gemini" | "voyage";
export type EmbeddingProviderId = "openai" | "local" | "gemini" | "voyage" | "mistral";
export type EmbeddingProviderRequest = EmbeddingProviderId | "auto";
export type EmbeddingProviderFallback = EmbeddingProviderId | "none";

const REMOTE_EMBEDDING_PROVIDER_IDS = ["openai", "gemini", "voyage"] as const;
const REMOTE_EMBEDDING_PROVIDER_IDS = ["openai", "gemini", "voyage", "mistral"] as const;

export type EmbeddingProviderResult = {
  provider: EmbeddingProvider | null;
@@ -44,6 +49,7 @@ export type EmbeddingProviderResult = {
  openAi?: OpenAiEmbeddingClient;
  gemini?: GeminiEmbeddingClient;
  voyage?: VoyageEmbeddingClient;
  mistral?: MistralEmbeddingClient;
};

export type EmbeddingProviderOptions = {
@@ -154,6 +160,10 @@ export async function createEmbeddingProvider(
    const { provider, client } = await createVoyageEmbeddingProvider(options);
    return { provider, voyage: client };
  }
  if (id === "mistral") {
    const { provider, client } = await createMistralEmbeddingProvider(options);
    return { provider, mistral: client };
  }
  const { provider, client } = await createOpenAiEmbeddingProvider(options);
  return { provider, openAi: client };
};
@@ -12,12 +12,14 @@ import { createSubsystemLogger } from "../logging/subsystem.js";
import { onSessionTranscriptUpdate } from "../sessions/transcript-events.js";
import { resolveUserPath } from "../utils.js";
import { DEFAULT_GEMINI_EMBEDDING_MODEL } from "./embeddings-gemini.js";
import { DEFAULT_MISTRAL_EMBEDDING_MODEL } from "./embeddings-mistral.js";
import { DEFAULT_OPENAI_EMBEDDING_MODEL } from "./embeddings-openai.js";
import { DEFAULT_VOYAGE_EMBEDDING_MODEL } from "./embeddings-voyage.js";
import {
  createEmbeddingProvider,
  type EmbeddingProvider,
  type GeminiEmbeddingClient,
  type MistralEmbeddingClient,
  type OpenAiEmbeddingClient,
  type VoyageEmbeddingClient,
} from "./embeddings.js";
@@ -89,10 +91,11 @@ export abstract class MemoryManagerSyncOps {
  protected abstract readonly workspaceDir: string;
  protected abstract readonly settings: ResolvedMemorySearchConfig;
  protected provider: EmbeddingProvider | null = null;
  protected fallbackFrom?: "openai" | "local" | "gemini" | "voyage";
  protected fallbackFrom?: "openai" | "local" | "gemini" | "voyage" | "mistral";
  protected openAi?: OpenAiEmbeddingClient;
  protected gemini?: GeminiEmbeddingClient;
  protected voyage?: VoyageEmbeddingClient;
  protected mistral?: MistralEmbeddingClient;
  protected abstract batch: {
    enabled: boolean;
    wait: boolean;
@@ -954,7 +957,7 @@ export abstract class MemoryManagerSyncOps {
    if (this.fallbackFrom) {
      return false;
    }
    const fallbackFrom = this.provider.id as "openai" | "gemini" | "local" | "voyage";
    const fallbackFrom = this.provider.id as "openai" | "gemini" | "local" | "voyage" | "mistral";

    const fallbackModel =
      fallback === "gemini"
@@ -963,7 +966,9 @@ export abstract class MemoryManagerSyncOps {
          ? DEFAULT_OPENAI_EMBEDDING_MODEL
          : fallback === "voyage"
            ? DEFAULT_VOYAGE_EMBEDDING_MODEL
            : this.settings.model;
            : fallback === "mistral"
              ? DEFAULT_MISTRAL_EMBEDDING_MODEL
              : this.settings.model;

    const fallbackResult = await createEmbeddingProvider({
      config: this.cfg,
@@ -981,6 +986,7 @@ export abstract class MemoryManagerSyncOps {
    this.openAi = fallbackResult.openAi;
    this.gemini = fallbackResult.gemini;
    this.voyage = fallbackResult.voyage;
    this.mistral = fallbackResult.mistral;
    this.providerKey = this.computeProviderKey();
    this.batch = this.resolveBatchConfig();
    log.warn(`memory embeddings: switched to fallback provider (${fallback})`, { reason });
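The nested ternary above grows by two lines for every provider; the same selection reads as a lookup table. A sketch, with illustrative default model names where the diff does not show the `DEFAULT_*_EMBEDDING_MODEL` values (only the OpenAI and Mistral models appear in this PR's tests):

```typescript
// Hypothetical reshaping of the fallback-model ternary into a lookup table.
// gemini/voyage model names are illustrative; openai and mistral match the
// tests in this PR.
const FALLBACK_MODELS: Record<string, string> = {
  gemini: "gemini-embedding-001",   // illustrative
  openai: "text-embedding-3-small", // matches the tests above
  voyage: "voyage-3-lite",          // illustrative
  mistral: "mistral-embed",         // matches the tests above
};

function resolveFallbackModel(fallback: string, settingsModel: string): string {
  // Unknown fallbacks (e.g. "local") keep the configured settings model.
  return FALLBACK_MODELS[fallback] ?? settingsModel;
}
```

For example, `resolveFallbackModel("mistral", "custom")` yields `"mistral-embed"`, while `resolveFallbackModel("local", "custom")` keeps `"custom"`.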
src/memory/manager.mistral-provider.test.ts (new file, 147 lines)
@@ -0,0 +1,147 @@
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
import type { OpenClawConfig } from "../config/config.js";
import type {
  EmbeddingProvider,
  EmbeddingProviderResult,
  MistralEmbeddingClient,
  OpenAiEmbeddingClient,
} from "./embeddings.js";
import { getMemorySearchManager, type MemoryIndexManager } from "./index.js";

const { createEmbeddingProviderMock } = vi.hoisted(() => ({
  createEmbeddingProviderMock: vi.fn(),
}));

vi.mock("./embeddings.js", () => ({
  createEmbeddingProvider: createEmbeddingProviderMock,
}));

vi.mock("./sqlite-vec.js", () => ({
  loadSqliteVecExtension: async () => ({ ok: false, error: "sqlite-vec disabled in tests" }),
}));

function createProvider(id: string): EmbeddingProvider {
  return {
    id,
    model: `${id}-model`,
    embedQuery: async () => [0.1, 0.2, 0.3],
    embedBatch: async (texts: string[]) => texts.map(() => [0.1, 0.2, 0.3]),
  };
}

function buildConfig(params: {
  workspaceDir: string;
  indexPath: string;
  provider: "openai" | "mistral";
  fallback?: "none" | "mistral";
}): OpenClawConfig {
  return {
    agents: {
      defaults: {
        workspace: params.workspaceDir,
        memorySearch: {
          provider: params.provider,
          model: params.provider === "mistral" ? "mistral/mistral-embed" : "text-embedding-3-small",
          fallback: params.fallback ?? "none",
          store: { path: params.indexPath, vector: { enabled: false } },
          sync: { watch: false, onSessionStart: false, onSearch: false },
          query: { minScore: 0, hybrid: { enabled: false } },
        },
      },
      list: [{ id: "main", default: true }],
    },
  } as OpenClawConfig;
}

describe("memory manager mistral provider wiring", () => {
  let workspaceDir = "";
  let indexPath = "";
  let manager: MemoryIndexManager | null = null;

  beforeEach(async () => {
    createEmbeddingProviderMock.mockReset();
    workspaceDir = await fs.mkdtemp(path.join(os.tmpdir(), "openclaw-memory-mistral-"));
    indexPath = path.join(workspaceDir, "index.sqlite");
    await fs.mkdir(path.join(workspaceDir, "memory"), { recursive: true });
    await fs.writeFile(path.join(workspaceDir, "MEMORY.md"), "test");
  });

  afterEach(async () => {
    if (manager) {
      await manager.close();
      manager = null;
    }
    if (workspaceDir) {
      await fs.rm(workspaceDir, { recursive: true, force: true });
      workspaceDir = "";
      indexPath = "";
    }
  });

  it("stores mistral client when mistral provider is selected", async () => {
    const mistralClient: MistralEmbeddingClient = {
      baseUrl: "https://api.mistral.ai/v1",
      headers: { authorization: "Bearer test-key" },
      model: "mistral-embed",
    };
    const providerResult: EmbeddingProviderResult = {
      requestedProvider: "mistral",
      provider: createProvider("mistral"),
      mistral: mistralClient,
    };
    createEmbeddingProviderMock.mockResolvedValueOnce(providerResult);

    const cfg = buildConfig({ workspaceDir, indexPath, provider: "mistral" });
    const result = await getMemorySearchManager({ cfg, agentId: "main" });
    if (!result.manager) {
      throw new Error(`manager missing: ${result.error ?? "no error provided"}`);
    }
    manager = result.manager as unknown as MemoryIndexManager;

    const internal = manager as unknown as { mistral?: MistralEmbeddingClient };
    expect(internal.mistral).toBe(mistralClient);
  });

  it("stores mistral client after fallback activation", async () => {
    const openAiClient: OpenAiEmbeddingClient = {
      baseUrl: "https://api.openai.com/v1",
      headers: { authorization: "Bearer openai-key" },
      model: "text-embedding-3-small",
    };
    const mistralClient: MistralEmbeddingClient = {
      baseUrl: "https://api.mistral.ai/v1",
      headers: { authorization: "Bearer mistral-key" },
      model: "mistral-embed",
    };
    createEmbeddingProviderMock.mockResolvedValueOnce({
      requestedProvider: "openai",
      provider: createProvider("openai"),
      openAi: openAiClient,
    } as EmbeddingProviderResult);
    createEmbeddingProviderMock.mockResolvedValueOnce({
      requestedProvider: "mistral",
      provider: createProvider("mistral"),
      mistral: mistralClient,
    } as EmbeddingProviderResult);

    const cfg = buildConfig({ workspaceDir, indexPath, provider: "openai", fallback: "mistral" });
    const result = await getMemorySearchManager({ cfg, agentId: "main" });
    if (!result.manager) {
      throw new Error(`manager missing: ${result.error ?? "no error provided"}`);
    }
    manager = result.manager as unknown as MemoryIndexManager;
    const internal = manager as unknown as {
      activateFallbackProvider: (reason: string) => Promise<boolean>;
      openAi?: OpenAiEmbeddingClient;
      mistral?: MistralEmbeddingClient;
    };

    const activated = await internal.activateFallbackProvider("forced test");
    expect(activated).toBe(true);
    expect(internal.openAi).toBeUndefined();
    expect(internal.mistral).toBe(mistralClient);
  });
});
@@ -12,6 +12,7 @@ import {
  type EmbeddingProvider,
  type EmbeddingProviderResult,
  type GeminiEmbeddingClient,
  type MistralEmbeddingClient,
  type OpenAiEmbeddingClient,
  type VoyageEmbeddingClient,
} from "./embeddings.js";
@@ -46,13 +47,14 @@ export class MemoryIndexManager extends MemoryManagerEmbeddingOps implements Mem
  protected readonly workspaceDir: string;
  protected readonly settings: ResolvedMemorySearchConfig;
  protected provider: EmbeddingProvider | null;
  private readonly requestedProvider: "openai" | "local" | "gemini" | "voyage" | "auto";
  protected fallbackFrom?: "openai" | "local" | "gemini" | "voyage";
  private readonly requestedProvider: "openai" | "local" | "gemini" | "voyage" | "mistral" | "auto";
  protected fallbackFrom?: "openai" | "local" | "gemini" | "voyage" | "mistral";
  protected fallbackReason?: string;
  private readonly providerUnavailableReason?: string;
  protected openAi?: OpenAiEmbeddingClient;
  protected gemini?: GeminiEmbeddingClient;
  protected voyage?: VoyageEmbeddingClient;
  protected mistral?: MistralEmbeddingClient;
  protected batch: {
    enabled: boolean;
    wait: boolean;
@@ -159,6 +161,7 @@ export class MemoryIndexManager extends MemoryManagerEmbeddingOps implements Mem
    this.openAi = params.providerResult.openAi;
    this.gemini = params.providerResult.gemini;
    this.voyage = params.providerResult.voyage;
    this.mistral = params.providerResult.mistral;
    this.sources = new Set(params.settings.sources);
    this.db = this.openDatabase();
    this.providerKey = this.computeProviderKey();
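Taken together, the wiring in this PR is driven from config. A hypothetical fragment mirroring the shape `buildConfig()` constructs in the new test file (key names come from that test; the path and model values are illustrative):

```typescript
// Illustrative memorySearch config: OpenAI primary with Mistral fallback,
// as exercised by "stores mistral client after fallback activation".
const memorySearch = {
  provider: "openai",
  model: "text-embedding-3-small",
  fallback: "mistral",
  store: { path: "/tmp/index.sqlite", vector: { enabled: false } },
  sync: { watch: false, onSessionStart: false, onSearch: false },
  query: { minScore: 0, hybrid: { enabled: false } },
};
```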