docs: improve VPS hub page and convert Podman to Mintlify Steps

This commit is contained in:
Vincent Koc
2026-03-19 12:06:42 -07:00
parent 34adde2e41
commit ebb6738e9d
2 changed files with 63 additions and 50 deletions


@@ -7,49 +7,60 @@ title: "Podman"
# Podman
Run the OpenClaw Gateway in a **rootless** Podman container. Uses the same image as Docker (built from the repo [Dockerfile](https://github.com/openclaw/openclaw/blob/main/Dockerfile)).
## Prerequisites
- **Podman** (rootless mode)
- **sudo** access for one-time setup (creating the dedicated user and building the image)
## Quick start
<Steps>
<Step title="One-time setup">
From the repo root, run the setup script. It creates a dedicated `openclaw` user, builds the container image, and installs the launch script:
```bash
./setup-podman.sh
```
This also creates a minimal config at `~openclaw/.openclaw/openclaw.json` (sets `gateway.mode` to `"local"`) so the Gateway can start without running the wizard.
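As a sketch, the generated file likely looks something like this (only `gateway.mode` is documented here; the exact shape of the real file may include more keys):

```json
{
  "gateway": {
    "mode": "local"
  }
}
```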
By default the container is **not** installed as a systemd service -- you start it manually in the next step. For a production-style setup with auto-start and restarts, pass `--quadlet` instead:
```bash
./setup-podman.sh --quadlet
```
(Or set `OPENCLAW_PODMAN_QUADLET=1`. Use `--container` to install only the container and launch script.)
**Optional build-time env vars** (set before running `setup-podman.sh`):
- `OPENCLAW_DOCKER_APT_PACKAGES` -- install extra apt packages during image build.
- `OPENCLAW_EXTENSIONS` -- pre-install extension dependencies (space-separated names, e.g. `diagnostics-otel matrix`).
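For example, a hypothetical invocation combining both variables (the variable names come from this page; the package list is illustrative only):

```shell
# Illustrative: bake extra apt packages and extension deps into the image.
# The apt package names here are made up -- substitute your own.
OPENCLAW_DOCKER_APT_PACKAGES="ffmpeg jq" \
OPENCLAW_EXTENSIONS="diagnostics-otel matrix" \
./setup-podman.sh
```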
</Step>
<Step title="Start the Gateway">
For a quick manual launch:
```bash
./scripts/run-openclaw-podman.sh launch
```
</Step>
<Step title="Run the onboarding wizard">
To add channels or providers interactively:
```bash
./scripts/run-openclaw-podman.sh launch setup
```
Then open `http://127.0.0.1:18789/` and use the token from `~openclaw/.openclaw/.env` (or the value printed by setup).
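If you want to script the token lookup, a minimal sketch (the key name `OPENCLAW_GATEWAY_TOKEN` is an assumption; check the file for the actual variable):

```shell
# Hypothetical: extract the gateway token from the env file.
# The key name below is assumed, not confirmed by these docs.
grep '^OPENCLAW_GATEWAY_TOKEN=' ~openclaw/.openclaw/.env | cut -d= -f2-
```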
</Step>
</Steps>
## Systemd (Quadlet, optional)


@@ -6,45 +6,47 @@ read_when:
title: "VPS Hosting"
---
# VPS Hosting
Run the OpenClaw Gateway around the clock on a cloud VPS. This page helps you pick a provider, explains how cloud deployments work, and covers generic Linux server tuning that applies to every provider.
## Pick a provider
<CardGroup cols={2}>
<Card title="Railway" href="/install/railway">One-click, browser setup</Card>
<Card title="Northflank" href="/install/northflank">One-click, browser setup</Card>
<Card title="Oracle Cloud" href="/platforms/oracle">Always Free ARM tier ($0/month, capacity can be finicky)</Card>
<Card title="Fly.io" href="/install/fly">Fly Machines</Card>
<Card title="Hetzner" href="/install/hetzner">Docker on Hetzner VPS</Card>
<Card title="GCP" href="/install/gcp">Compute Engine</Card>
<Card title="Azure" href="/install/azure">Linux VM</Card>
<Card title="exe.dev" href="/install/exe-dev">VM with HTTPS proxy</Card>
</CardGroup>
**AWS (EC2 / Lightsail / free tier)** also works well.
A community video walkthrough is available at
[x.com/techfrenAJ/status/2014934471095812547](https://x.com/techfrenAJ/status/2014934471095812547)
(community resource -- may become unavailable).
## How cloud setups work
- The **Gateway runs on the VPS** and owns state + workspace.
- You connect from your laptop or phone via the **Control UI** or **Tailscale/SSH**.
- Treat the VPS as the source of truth and **back up** the state + workspace regularly.
- Secure default: keep the Gateway on loopback and access it via SSH tunnel or Tailscale Serve.
If you bind to `lan` or `tailnet`, require `gateway.auth.token` or `gateway.auth.password`.
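A loopback-only Gateway can still be reached from your laptop with a plain SSH tunnel; a sketch (the user and host are placeholders, and port 18789 matches the default used elsewhere in these docs):

```shell
# Forward local port 18789 to the Gateway's loopback port on the VPS,
# then open http://127.0.0.1:18789/ in a local browser.
ssh -N -L 18789:127.0.0.1:18789 youruser@your-vps-host
```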
Related pages: [Gateway remote access](/gateway/remote), [Platforms hub](/platforms).
## Shared company agent on a VPS
Running a single agent for a team is a valid setup when every user is in the same trust boundary and the agent is business-only.
- Keep it on a dedicated runtime (VPS/VM/container + dedicated OS user/accounts).
- Do not sign that runtime into personal Apple/Google accounts or personal browser/password-manager profiles.
- If users are adversarial to each other, split by gateway/host/OS user.
Security model details: [Security](/gateway/security).
## Using nodes with a VPS
@@ -52,7 +54,7 @@ You can keep the Gateway in the cloud and pair **nodes** on your local devices
(Mac/iOS/Android/headless). Nodes provide local screen/camera/canvas and `system.run`
capabilities while the Gateway stays in the cloud.
Docs: [Nodes](/nodes), [Nodes CLI](/cli/nodes).
## Startup tuning for small VMs and ARM hosts
@@ -69,14 +71,14 @@ source ~/.bashrc
- `NODE_COMPILE_CACHE` improves repeated command startup times.
- `OPENCLAW_NO_RESPAWN=1` avoids extra startup overhead from a self-respawn path.
- First command run warms the cache; subsequent runs are faster.
- For Raspberry Pi specifics, see [Raspberry Pi](/platforms/raspberry-pi).
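The two variables above can be made persistent in the shell profile (the hunk context mentions `source ~/.bashrc`); a minimal sketch using the cache path from the checklist below:

```shell
# Append to ~/.bashrc (then `source ~/.bashrc`):
export NODE_COMPILE_CACHE=/var/tmp/openclaw-compile-cache
export OPENCLAW_NO_RESPAWN=1
```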
### systemd tuning checklist (optional)
For VM hosts using `systemd`, consider:
- Add service env for a stable startup path:
- `OPENCLAW_NO_RESPAWN=1`
- `NODE_COMPILE_CACHE=/var/tmp/openclaw-compile-cache`
- Keep restart behavior explicit: