# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
This is n8n-install, a Docker Compose-based installer that provides a comprehensive self-hosted environment for n8n workflow automation and numerous AI/automation services. The installer includes an interactive wizard, automated secret generation, and integrated HTTPS via Caddy.
## Core Architecture
- **Profile-based service management**: Services are activated via Docker Compose profiles (e.g., `n8n`, `flowise`, `monitoring`). Profiles are stored in the `.env` file's `COMPOSE_PROFILES` variable.
- **No exposed ports**: Services do NOT publish ports directly. All external HTTPS access is routed through the Caddy reverse proxy on ports 80/443.
- **Core and optional services**: Core services (Postgres, Redis/Valkey, Caddy) are always included. Other services are optional and selected during installation.
- **Queue-based n8n**: n8n runs in `queue` mode with Redis, Postgres, and dynamically scaled workers (`N8N_WORKER_COUNT`).
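The profile mechanism above implies an entry like the following in `.env` (the exact profile combination is illustrative):

```
COMPOSE_PROFILES=n8n,flowise,monitoring,cpu
```

Compose reads this variable at startup, so `docker compose -p localai up -d` only creates services whose `profiles:` list matches one of these names.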
## Key Files
- `docker-compose.yml`: Service definitions with profiles
- `Caddyfile`: Reverse proxy configuration with automatic HTTPS
- `.env`: Generated secrets and configuration (from `.env.example`)
- `scripts/install.sh`: Main installation orchestrator
- `scripts/04_wizard.sh`: Interactive service selection using whiptail
- `scripts/03_generate_secrets.sh`: Secret generation and bcrypt hashing
- `scripts/07_final_report.sh`: Post-install credential summary
## Common Development Commands
### Installation and Updates

```bash
# Full installation (run from project root)
sudo bash ./scripts/install.sh

# Update to latest versions and pull new images
sudo bash ./scripts/update.sh

# Re-run service selection wizard (for adding/removing services)
sudo bash ./scripts/04_wizard.sh
```
### Docker Compose Operations

```bash
# Start all enabled profile services
docker compose -p localai up -d

# View logs for a specific service
docker compose -p localai logs -f --tail=200 <service-name> | cat

# Recreate a single service (e.g., after config changes)
docker compose -p localai up -d --no-deps --force-recreate <service-name>

# Stop all services
docker compose -p localai down

# Remove unused Docker resources
sudo bash ./scripts/docker_cleanup.sh
```
### Development and Testing

```bash
# Regenerate secrets after modifying .env.example
bash ./scripts/03_generate_secrets.sh

# Check current active profiles
grep COMPOSE_PROFILES .env

# View Caddy logs for reverse proxy issues
docker compose -p localai logs -f caddy

# Test n8n worker scaling
# Edit N8N_WORKER_COUNT in .env, then:
docker compose -p localai up -d --scale n8n-worker=<count>
```
## Adding a New Service

Follow this workflow when adding a new optional service (refer to `.cursor/rules/add-new-service.mdc` for complete details):

1. `docker-compose.yml`: Add the service with `profiles: ["myservice"]` and `restart: unless-stopped`. Do NOT expose ports.
2. `Caddyfile`: Add a reverse proxy block using `{$MYSERVICE_HOSTNAME}`. Consider whether basic auth is needed.
3. `.env.example`: Add `MYSERVICE_HOSTNAME=myservice.yourdomain.com` and credentials if using basic auth.
4. `scripts/03_generate_secrets.sh`: Generate passwords and bcrypt hashes; add entries to the `VARS_TO_GENERATE` map.
5. `scripts/04_wizard.sh`: Add the service to the `base_services_data` array for wizard selection.
6. `scripts/07_final_report.sh`: Add the service URL and credentials output using `is_profile_active "myservice"`.
7. `README.md`: Add a one-line description under "What's Included".

Always ask users if the new service requires Caddy basic auth protection.
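A minimal sketch of step 1 for a hypothetical `myservice` (the image name and internal port are placeholders, not from the repo):

```yaml
# docker-compose.yml fragment (nested under services:)
# Note: no ports: block — Caddy provides all external access
  myservice:
    image: example/myservice:latest   # placeholder image
    profiles: ["myservice"]
    restart: unless-stopped
    expose:
      - "3000"                        # assumed internal port, reachable only on the Docker network
```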
## Important Service Details

### n8n Configuration (v2.0+)
- n8n runs in `EXECUTIONS_MODE=queue` with Redis as the queue backend
- `OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS=true`: All executions (including manual tests) run on workers
- **Worker-Runner Sidecar Pattern**: Each worker has its own dedicated task runner
  - Workers and runners are generated dynamically via `scripts/generate_n8n_workers.sh`
  - Configuration is stored in `docker-compose.n8n-workers.yml` (auto-generated, gitignored)
  - Each runner connects to its worker via `network_mode: "service:n8n-worker-N"` (localhost:5679)
  - The runner image `n8nio/runners` must match the n8n version
- **Scaling**: Change `N8N_WORKER_COUNT` in `.env` and run `bash scripts/generate_n8n_workers.sh`
- **Code node libraries**: Configured on the runner container (not n8n):
  - `NODE_FUNCTION_ALLOW_EXTERNAL`: JS packages (`cheerio`, `axios`, `moment`, `lodash`)
  - `NODE_FUNCTION_ALLOW_BUILTIN`: Node.js built-in modules (`*` = all)
  - `N8N_RUNNERS_STDLIB_ALLOW`: Python stdlib modules
  - `N8N_RUNNERS_EXTERNAL_ALLOW`: Python third-party packages
- Workflows can access the host filesystem via `/data/shared` (mapped to `./shared`)
- `N8N_BLOCK_ENV_ACCESS_IN_NODE=false` allows Code nodes to access environment variables
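Under the sidecar pattern described above, a generated runner entry might look roughly like this (the version tag and allow-list values are placeholders, not the generator's actual output):

```yaml
# docker-compose.n8n-workers.yml fragment (auto-generated; sketch only)
  n8n-runner-1:
    image: n8nio/runners:<n8n-version>     # must match the n8n image version
    network_mode: "service:n8n-worker-1"   # shares the worker's network namespace (localhost:5679)
    environment:
      NODE_FUNCTION_ALLOW_EXTERNAL: "cheerio,axios,moment,lodash"
      NODE_FUNCTION_ALLOW_BUILTIN: "*"
      N8N_RUNNERS_STDLIB_ALLOW: "json,datetime"    # illustrative stdlib allow-list
      N8N_RUNNERS_EXTERNAL_ALLOW: "requests"       # illustrative third-party allow-list
```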
### Caddy Reverse Proxy
- Automatically obtains Let's Encrypt certificates when `LETSENCRYPT_EMAIL` is set
- Hostnames are passed via environment variables (e.g., `N8N_HOSTNAME`, `FLOWISE_HOSTNAME`)
- Basic auth uses bcrypt hashes generated by `scripts/03_generate_secrets.sh` via Caddy's hash command
- Never add `ports:` to services in `docker-compose.yml`; let Caddy handle all external access
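A Caddyfile block following these conventions might look like the sketch below (the env variable names and upstream port are assumptions; `basic_auth` is the Caddy v2.8+ spelling of the older `basicauth` directive):

```
{$MYSERVICE_HOSTNAME} {
    basic_auth {
        {$MYSERVICE_USERNAME} {$MYSERVICE_PASSWORD_HASH}
    }
    reverse_proxy myservice:3000
}
```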
### Secret Generation

The `scripts/03_generate_secrets.sh` script:

- Generates random passwords, JWT secrets, API keys, and encryption keys
- Creates bcrypt password hashes using Caddy's `hash-password` command
- Preserves existing user-provided values in `.env`
- Supports different secret types via the `VARS_TO_GENERATE` map: `password:32`, `jwt`, `api_key`, etc.
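As a rough sketch of how a `VARS_TO_GENERATE`-style map can drive generation (the spec format, helper function, and variable names below are assumptions, not the script's actual code):

```bash
#!/usr/bin/env bash
# Hypothetical sketch of map-driven secret generation; the real
# scripts/03_generate_secrets.sh is more involved (jwt, api_key, bcrypt, ...).
set -eu

# Assumed spec format from the doc: "password:<length>"
declare -A VARS_TO_GENERATE=(
  [POSTGRES_PASSWORD]="password:32"
  [N8N_ENCRYPTION_KEY]="password:64"
)

gen_password() {                      # $1 = desired length
  local hex
  hex="$(openssl rand -hex "$1")"    # yields 2*$1 hex chars
  printf '%s' "${hex:0:$1}"          # truncate to the requested length
}

for var in "${!VARS_TO_GENERATE[@]}"; do
  spec="${VARS_TO_GENERATE[$var]}"
  case "$spec" in
    password:*) value="$(gen_password "${spec#password:}")" ;;
  esac
  # Preserve existing user-provided values: only emit if absent from .env
  grep -q "^${var}=" .env 2>/dev/null || printf '%s=%s\n' "$var" "$value"
done
```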
## Service Profiles

Common profiles:

- `n8n`: n8n workflow automation (includes main app, worker, runner, and import services)
- `flowise`: Flowise AI agent builder
- `monitoring`: Prometheus, Grafana, cAdvisor, node-exporter
- `langfuse`: Langfuse observability (includes ClickHouse, MinIO, worker, web)
- `cpu`, `gpu-nvidia`, `gpu-amd`: Ollama hardware profiles (mutually exclusive)
- `cloudflare-tunnel`: Cloudflare Tunnel for zero-trust access
## Architecture Patterns

### Healthchecks

Services should define healthchecks for proper dependency management:

```yaml
healthcheck:
  test: ["CMD-SHELL", "wget -qO- http://localhost:8080/health || exit 1"]
  interval: 30s
  timeout: 10s
  retries: 5
```
### Service Dependencies

Use `depends_on` with conditions:

```yaml
depends_on:
  postgres:
    condition: service_healthy
  redis:
    condition: service_healthy
```
### Environment Variable Patterns

- All secrets/passwords end with `_PASSWORD` or `_KEY`
- All hostnames end with `_HOSTNAME`
- Password hashes end with `_PASSWORD_HASH`
- Use `${VAR:-default}` for optional vars with defaults
### Profile Activation Logic

In bash scripts, check if a profile is active:

```bash
if is_profile_active "myservice"; then
  # Service-specific logic
fi
```
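A minimal implementation of such a helper could read `COMPOSE_PROFILES` from `.env`; this is a hypothetical sketch, and the helper the repo's scripts actually share may differ:

```bash
# Hypothetical is_profile_active: succeeds if the named profile appears in
# the comma-separated COMPOSE_PROFILES list in .env
is_profile_active() {
  local profile="$1" profiles
  profiles="$(grep -E '^COMPOSE_PROFILES=' .env 2>/dev/null | cut -d= -f2-)"
  # Wrap both sides in commas so "n8n" doesn't match "n8n-extra"
  [[ ",${profiles}," == *",${profile},"* ]]
}
```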
## Common Issues and Solutions

### Service won't start after adding

- Ensure the profile is added to `COMPOSE_PROFILES` in `.env`
- Check logs: `docker compose -p localai logs <service>`
- Verify no port conflicts (no services should publish ports)
- Ensure the healthcheck is properly defined if the service has dependencies
### Caddy certificate issues

- DNS must be configured before installation (wildcard A record: `*.yourdomain.com`)
- Check Caddy logs for certificate acquisition errors
- Verify `LETSENCRYPT_EMAIL` is set in `.env`
### Password hash generation fails

- Ensure the Caddy container is running: `docker compose -p localai up -d caddy`
- The script uses: `docker exec caddy caddy hash-password --plaintext "$password"`
## File Locations

- Shared files accessible by n8n: `./shared` (mounted as `/data/shared` in n8n)
- n8n storage: Docker volume `localai_n8n_storage`
- Service-specific volumes: Defined in the `volumes:` section at the top of `docker-compose.yml`
- Installation logs: stdout during script execution
- Service logs: `docker compose -p localai logs <service>`
## Testing Changes

When modifying installer scripts:

- Test on a clean Ubuntu 24.04 LTS system (minimum 4 GB RAM / 2 CPUs)
- Verify all profile combinations work
- Check that `.env` is properly generated
- Confirm the final report displays correct URLs and credentials
- Test that the update script preserves custom configurations