docs: remove provider recommendation language

Peter Steinberger
2026-03-02 17:33:35 +00:00
parent b9e820b7ed
commit eb35fb745d
3 changed files with 12 additions and 30 deletions


@@ -13,15 +13,6 @@ default model as `provider/model`.
 Looking for chat channel docs (WhatsApp/Telegram/Discord/Slack/Mattermost (plugin)/etc.)? See [Channels](/channels).
-## Highlight: Venice (Venice AI)
-Venice is our recommended Venice AI setup for privacy-first inference with an option to use Opus for hard tasks.
-- Default: `venice/llama-3.3-70b`
-- Best overall: `venice/claude-opus-45` (Opus remains the strongest)
-See [Venice AI](/providers/venice).
 ## Quick start
 1. Authenticate with the provider (usually via `openclaw onboard`).


@@ -11,15 +11,6 @@ title: "Model Provider Quickstart"
 OpenClaw can use many LLM providers. Pick one, authenticate, then set the default
 model as `provider/model`.
-## Highlight: Venice (Venice AI)
-Venice is our recommended Venice AI setup for privacy-first inference with an option to use Opus for the hardest tasks.
-- Default: `venice/llama-3.3-70b`
-- Best overall: `venice/claude-opus-45` (Opus remains the strongest)
-See [Venice AI](/providers/venice).
 ## Quick start (two steps)
 1. Authenticate with the provider (usually via `openclaw onboard`).

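Both quickstart pages above describe the same two-step flow. As a reference, here is a minimal sketch using only commands that already appear in these docs (`openclaw onboard` and `openclaw agent`); how the default model is persisted is not shown in this diff, so the example passes `--model` explicitly:

```bash
# Step 1: authenticate with a provider (interactive onboarding)
openclaw onboard

# Step 2: smoke-test with an explicit provider/model identifier
openclaw agent --model venice/llama-3.3-70b --message "Hello, are you working?"
```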

@@ -86,8 +86,8 @@ openclaw agent --model venice/llama-3.3-70b --message "Hello, are you working?"
 After setup, OpenClaw shows all available Venice models. Pick based on your needs:
-- **Default (our pick)**: `venice/llama-3.3-70b` for private, balanced performance.
-- **Best overall quality**: `venice/claude-opus-45` for hard jobs (Opus remains the strongest).
+- **Default model**: `venice/llama-3.3-70b` for private, balanced performance.
+- **High-capability option**: `venice/claude-opus-45` for hard jobs.
 - **Privacy**: Choose "private" models for fully private inference.
 - **Capability**: Choose "anonymized" models to access Claude, GPT, Gemini via Venice's proxy.
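The private/anonymized choice above can be checked against what is actually available. A hedged sketch built from the listing command shown in the next hunk's context; the follow-up prompt is an illustrative placeholder, not from the docs:

```bash
# Enumerate the Venice models OpenClaw currently exposes
openclaw models list | grep venice

# Try a candidate before settling on it as a default
openclaw agent --model venice/qwen3-coder-480b-a35b-instruct --message "Write a quicksort in Python"
```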
@@ -112,16 +112,16 @@ openclaw models list | grep venice
 ## Which Model Should I Use?
-| Use Case                     | Recommended Model                | Why                                       |
-| ---------------------------- | -------------------------------- | ----------------------------------------- |
-| **General chat**             | `llama-3.3-70b`                  | Good all-around, fully private            |
-| **Best overall quality**     | `claude-opus-45`                 | Opus remains the strongest for hard tasks |
-| **Privacy + Claude quality** | `claude-opus-45`                 | Best reasoning via anonymized proxy       |
-| **Coding**                   | `qwen3-coder-480b-a35b-instruct` | Code-optimized, 262k context              |
-| **Vision tasks**             | `qwen3-vl-235b-a22b`             | Best private vision model                 |
-| **Uncensored**               | `venice-uncensored`              | No content restrictions                   |
-| **Fast + cheap**             | `qwen3-4b`                       | Lightweight, still capable                |
-| **Complex reasoning**        | `deepseek-v3.2`                  | Strong reasoning, private                 |
+| Use Case                     | Recommended Model                | Why                                  |
+| ---------------------------- | -------------------------------- | ------------------------------------ |
+| **General chat**             | `llama-3.3-70b`                  | Good all-around, fully private       |
+| **High-capability option**   | `claude-opus-45`                 | Higher quality for hard tasks        |
+| **Privacy + Claude quality** | `claude-opus-45`                 | Best reasoning via anonymized proxy  |
+| **Coding**                   | `qwen3-coder-480b-a35b-instruct` | Code-optimized, 262k context         |
+| **Vision tasks**             | `qwen3-vl-235b-a22b`             | Best private vision model            |
+| **Uncensored**               | `venice-uncensored`              | No content restrictions              |
+| **Fast + cheap**             | `qwen3-4b`                       | Lightweight, still capable           |
+| **Complex reasoning**        | `deepseek-v3.2`                  | Strong reasoning, private            |
 ## Available Models (25 Total)
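For readers translating the table into actual invocations, a sketch pairing model IDs from the table with placeholder prompts (the prompts are illustrative assumptions, not from the docs):

```bash
# General chat: good all-around, fully private
openclaw agent --model venice/llama-3.3-70b --message "Summarize these release notes"

# Coding: code-optimized, 262k context
openclaw agent --model venice/qwen3-coder-480b-a35b-instruct --message "Review this function for bugs"

# Complex reasoning: strong reasoning, private
openclaw agent --model venice/deepseek-v3.2 --message "Plan the rollout step by step"
```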