Mirror of https://github.com/moltbot/moltbot.git (synced 2026-04-24 07:01:49 +00:00)
docs: clarify memory plugin adapter ids
@@ -100,9 +100,10 @@ semantic queries can find related notes even when wording differs. Hybrid search
 (BM25 + vector) is available for combining semantic matching with exact keyword
 lookups.
 
-Memory search supports multiple embedding providers (OpenAI, Gemini, Voyage,
-Mistral, Ollama, and local GGUF models), an optional QMD sidecar backend for
-advanced retrieval, and post-processing features like MMR diversity re-ranking
+Memory search adapter ids come from the active memory plugin. The default
+`memory-core` plugin ships built-ins for OpenAI, Gemini, Voyage, Mistral,
+Ollama, and local GGUF models, plus an optional QMD sidecar backend for
+advanced retrieval and post-processing features like MMR diversity re-ranking
 and temporal decay.
 
 For the full configuration reference -- including embedding provider setup, QMD
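As a sketch of the config shape the new wording points at (the enclosing object layout and the chosen values are assumptions for illustration, not taken from this commit; only the `agents.defaults.memorySearch` path and the adapter-id concept come from the docs):

```typescript
// Hypothetical config sketch: provider and fallback name adapter ids
// registered by the active memory plugin (here, memory-core built-ins).
const config = {
  agents: {
    defaults: {
      memorySearch: {
        provider: "openai", // primary embedding adapter id
        fallback: "local",  // used only if the primary adapter fails
      },
    },
  },
};

console.log(config.agents.defaults.memorySearch.provider); // "openai"
```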
@@ -159,6 +159,22 @@ AI CLI backend such as `claude-cli` or `codex-cli`.
 | `api.registerContextEngine(id, factory)` | Context engine (one active at a time) |
 | `api.registerMemoryPromptSection(builder)` | Memory prompt section builder |
 | `api.registerMemoryFlushPlan(resolver)` | Memory flush plan resolver |
+| `api.registerMemoryRuntime(runtime)` | Memory runtime adapter |
+
+### Memory embedding adapters
+
+| Method | What it registers |
+| ---------------------------------------------- | ---------------------------------------------- |
+| `api.registerMemoryEmbeddingProvider(adapter)` | Memory embedding adapter for the active plugin |
+
+- `registerMemoryPromptSection`, `registerMemoryFlushPlan`, and
+  `registerMemoryRuntime` are exclusive to memory plugins.
+- `registerMemoryEmbeddingProvider` lets the active memory plugin register one
+  or more embedding adapter ids (for example `openai`, `gemini`, or a custom
+  plugin-defined id).
+- User config such as `agents.defaults.memorySearch.provider` and
+  `agents.defaults.memorySearch.fallback` resolves against those registered
+  adapter ids.
+
 ### Events and lifecycle
 
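A minimal sketch of the registration flow the table above describes. The adapter shape and the registry internals are assumptions for illustration; only the method name `registerMemoryEmbeddingProvider` and the id-resolution behavior come from the docs:

```typescript
// Hypothetical adapter shape: an id (referenced from user config) plus an
// embed function returning one vector per input text.
interface MemoryEmbeddingAdapter {
  id: string;
  embed(texts: string[]): Promise<number[][]>;
}

// Stand-in for the plugin API surface, for illustration only.
class PluginApi {
  private adapters = new Map<string, MemoryEmbeddingAdapter>();

  registerMemoryEmbeddingProvider(adapter: MemoryEmbeddingAdapter): void {
    this.adapters.set(adapter.id, adapter);
  }

  // Config values like memorySearch.provider resolve against registered ids.
  resolveAdapter(id: string): MemoryEmbeddingAdapter | undefined {
    return this.adapters.get(id);
  }
}

const api = new PluginApi();
api.registerMemoryEmbeddingProvider({
  id: "my-custom-embedder",
  embed: async (texts) => texts.map(() => [0, 0, 0]), // stub vectors
});

console.log(api.resolveAdapter("my-custom-embedder")?.id); // "my-custom-embedder"
```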
@@ -20,7 +20,12 @@ automatic flush), see [Memory](/concepts/memory).
 - Watches memory files for changes (debounced).
 - Configure memory search under `agents.defaults.memorySearch` (not top-level
   `memorySearch`).
-- Uses remote embeddings by default. If `memorySearch.provider` is not set, OpenClaw auto-selects:
+- `memorySearch.provider` and `memorySearch.fallback` accept **adapter ids**
+  registered by the active memory plugin.
+- The default `memory-core` plugin registers these built-in adapter ids:
+  `local`, `openai`, `gemini`, `voyage`, `mistral`, and `ollama`.
+- With the default `memory-core` plugin, if `memorySearch.provider` is not set,
+  OpenClaw auto-selects:
   1. `local` if a `memorySearch.local.modelPath` is configured and the file exists.
   2. `openai` if an OpenAI key can be resolved.
   3. `gemini` if a Gemini key can be resolved.
@@ -29,8 +34,9 @@ automatic flush), see [Memory](/concepts/memory).
   6. Otherwise memory search stays disabled until configured.
 - Local mode uses node-llama-cpp and may require `pnpm approve-builds`.
 - Uses sqlite-vec (when available) to accelerate vector search inside SQLite.
-- `memorySearch.provider = "ollama"` is also supported for local/self-hosted
-  Ollama embeddings (`/api/embeddings`), but it is not auto-selected.
+- With the default `memory-core` plugin, `memorySearch.provider = "ollama"` is
+  also supported for local/self-hosted Ollama embeddings (`/api/embeddings`),
+  but it is not auto-selected.
 
 Remote embeddings **require** an API key for the embedding provider. OpenClaw
 resolves keys from auth profiles, `models.providers.*.apiKey`, or environment
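The auto-selection order above can be sketched as a cascade. Steps 4 and 5 fall outside the hunks shown in this commit, so the sketch covers only the visible steps; the function and parameter names are hypothetical:

```typescript
import { existsSync } from "node:fs";

// Hypothetical inputs resolved before auto-selection runs.
interface AutoSelectInput {
  localModelPath?: string; // memorySearch.local.modelPath
  openaiKey?: string;
  geminiKey?: string;
}

function autoSelectProvider(input: AutoSelectInput): string | null {
  // 1. local, if a model path is configured and the file exists
  if (input.localModelPath && existsSync(input.localModelPath)) return "local";
  // 2. openai, if an OpenAI key can be resolved
  if (input.openaiKey) return "openai";
  // 3. gemini, if a Gemini key can be resolved
  if (input.geminiKey) return "gemini";
  // ...steps 4-5 are outside the hunks shown in this commit...
  // 6. otherwise memory search stays disabled until configured
  return null;
}

console.log(autoSelectProvider({ openaiKey: "sk-example" })); // "openai"
console.log(autoSelectProvider({}));                          // null
```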
@@ -317,15 +323,16 @@ If you don't want to set an API key, use `memorySearch.provider = "local"` or se
 
 ### Fallbacks
 
-- `memorySearch.fallback` can be `openai`, `gemini`, `voyage`, `mistral`, `ollama`, `local`, or `none`.
+- `memorySearch.fallback` can be any registered memory embedding adapter id, or `none`.
+- With the default `memory-core` plugin, valid built-in fallback ids are `openai`, `gemini`, `voyage`, `mistral`, `ollama`, and `local`.
 - The fallback provider is only used when the primary embedding provider fails.
 
-### Batch indexing (OpenAI + Gemini + Voyage)
+### Batch indexing
 
-- Disabled by default. Set `agents.defaults.memorySearch.remote.batch.enabled = true` to enable for large-corpus indexing (OpenAI, Gemini, and Voyage).
+- Disabled by default. Set `agents.defaults.memorySearch.remote.batch.enabled = true` to enable batch indexing for providers whose adapter exposes batch support.
 - Default behavior waits for batch completion; tune `remote.batch.wait`, `remote.batch.pollIntervalMs`, and `remote.batch.timeoutMinutes` if needed.
 - Set `remote.batch.concurrency` to control how many batch jobs we submit in parallel (default: 2).
-- Batch mode applies when `memorySearch.provider = "openai"` or `"gemini"` and uses the corresponding API key.
+- With the default `memory-core` plugin, batch indexing is available for `openai`, `gemini`, and `voyage`.
 - Gemini batch jobs use the async embeddings batch endpoint and require Gemini Batch API availability.
 
 Why OpenAI batch is fast and cheap:
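The batch-indexing knobs named above can be sketched as a config object. The nesting follows the dotted paths in the docs; the non-default values are example assumptions:

```typescript
// Hypothetical sketch of agents.defaults.memorySearch batch settings.
const memorySearch = {
  provider: "openai",
  remote: {
    batch: {
      enabled: true,        // off by default; opt in for large-corpus indexing
      wait: true,           // default behavior waits for batch completion
      pollIntervalMs: 5000, // example value: polling cadence while waiting
      timeoutMinutes: 60,   // example value: give up after this long
      concurrency: 2,       // parallel batch job submissions (default: 2)
    },
  },
};

console.log(memorySearch.remote.batch.concurrency); // 2
```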