| summary | read_when | title |
|---|---|---|
| Run OpenClaw through inferrs (OpenAI-compatible local server) | | inferrs |
# inferrs
inferrs can serve local models behind an
OpenAI-compatible /v1 API. OpenClaw works with inferrs through the generic
openai-completions path.
Treat inferrs as a custom self-hosted OpenAI-compatible backend, not a dedicated OpenClaw provider plugin.
## Getting started
Start a local server:

```bash
inferrs serve google/gemma-4-E2B-it \
  --host 127.0.0.1 \
  --port 8080 \
  --device metal
```

Check that it is up:

```bash
curl http://127.0.0.1:8080/health
curl http://127.0.0.1:8080/v1/models
```

Add an explicit provider entry and point your default model at it. See the full config example below.

## Full config example
This example uses Gemma 4 on a local inferrs server.
```json5
{
  agents: {
    defaults: {
      model: { primary: "inferrs/google/gemma-4-E2B-it" },
      models: {
        "inferrs/google/gemma-4-E2B-it": {
          alias: "Gemma 4 (inferrs)",
        },
      },
    },
  },
  models: {
    mode: "merge",
    providers: {
      inferrs: {
        baseUrl: "http://127.0.0.1:8080/v1",
        apiKey: "inferrs-local",
        api: "openai-completions",
        models: [
          {
            id: "google/gemma-4-E2B-it",
            name: "Gemma 4 E2B (inferrs)",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 131072,
            maxTokens: 4096,
            compat: {
              requiresStringContent: true,
            },
          },
        ],
      },
    },
  },
}
```
## Advanced
Some `inferrs` Chat Completions routes accept only string `messages[].content`, not structured content-part arrays.

<Warning>
If OpenClaw runs fail with an error like:
```text
messages[1].content: invalid type: sequence, expected a string
```
set `compat.requiresStringContent: true` in your model entry.
</Warning>
```json5
compat: {
requiresStringContent: true
}
```
OpenClaw will flatten pure text content parts into plain strings before sending
the request.
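As an illustration of what that flattening amounts to (a sketch, not OpenClaw's actual implementation), text-only content-part arrays collapse into plain strings while string content passes through untouched:

```python
def flatten_text_content(messages):
    """Collapse text-only content-part arrays into plain strings.

    Messages whose content is already a string, or that contain
    non-text parts (e.g. images), are left unchanged.
    """
    out = []
    for msg in messages:
        content = msg["content"]
        if isinstance(content, list) and all(
            part.get("type") == "text" for part in content
        ):
            content = "".join(part["text"] for part in content)
        out.append({**msg, "content": content})
    return out

msgs = [{"role": "user",
         "content": [{"type": "text", "text": "What is 2 + 2?"}]}]
print(flatten_text_content(msgs))
# → [{'role': 'user', 'content': 'What is 2 + 2?'}]
```

A backend that rejects `sequence` content accepts the flattened form, which is exactly the failure mode `requiresStringContent` works around.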
Some current `inferrs` + Gemma combinations accept small direct
`/v1/chat/completions` requests but still fail on full OpenClaw agent-runtime
turns.
If that happens, try this first:
```json5
compat: {
requiresStringContent: true,
supportsTools: false
}
```
That disables OpenClaw's tool schema surface for the model and can reduce prompt
pressure on stricter local backends.
If tiny direct requests still work but normal OpenClaw agent turns continue to
crash inside `inferrs`, the remaining issue is usually upstream model/server
behavior rather than OpenClaw's transport layer.
Once configured, test both layers:
```bash
curl http://127.0.0.1:8080/v1/chat/completions \
-H 'content-type: application/json' \
-d '{"model":"google/gemma-4-E2B-it","messages":[{"role":"user","content":"What is 2 + 2?"}],"stream":false}'
```
```bash
openclaw infer model run \
--model inferrs/google/gemma-4-E2B-it \
--prompt "What is 2 + 2? Reply with one short sentence." \
--json
```
If the first command works but the second fails, check the troubleshooting section below.
`inferrs` is treated as a proxy-style OpenAI-compatible `/v1` backend, not a
native OpenAI endpoint.
- Native OpenAI-only request shaping does not apply here
- No `service_tier`, no Responses `store`, no prompt-cache hints, and no
OpenAI reasoning-compat payload shaping
- Hidden OpenClaw attribution headers (`originator`, `version`, `User-Agent`)
are not injected on custom `inferrs` base URLs