12 Commits

Author SHA1 Message Date
Yury Kossakovsky
418e8700bb feat: add local installation mode with http-only caddy and .local domains 2026-01-19 10:06:04 -07:00
Yury Kossakovsky
a99676e3d5 fix(postiz): improve temporal integration
- increase elasticsearch memory to 512mb
- add temporal databases to initialization
- add postiz to final report
2026-01-17 19:56:29 -07:00
Yury Kossakovsky
bf7ce20f7b fix(caddy): add http block for welcome page to prevent redirect loop
when accessing welcome page through cloudflare tunnel, caddy was
redirecting http to https, causing an infinite redirect loop.
adding an explicit http block prevents automatic https redirect.
2026-01-17 19:42:50 -07:00
Yury Kossakovsky
36717a45c9 docs(readme): clarify vps requirement in prerequisites 2026-01-17 12:28:55 -07:00
Yury Kossakovsky
31b81b71a4 fix(postiz): add elasticsearch for temporal advanced visibility
temporal with sql visibility has a hard limit of 3 text search
attributes per namespace. postiz requires more, causing startup
failure. adding elasticsearch enables advanced visibility mode
which removes this limitation.
2026-01-17 12:26:40 -07:00
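The fix this commit describes maps to Temporal's auto-setup Elasticsearch switches. A hedged compose-environment sketch (variable names follow the `temporalio/auto-setup` image's documented settings; service names, image tags, and the memory value are assumptions reconstructed from the commit text, not the actual diff):

```yaml
# Sketch only - aligns with the commit message, not the repository's diff.
temporal:
  image: temporalio/auto-setup:latest
  environment:
    ENABLE_ES: "true"                # advanced visibility via Elasticsearch
    ES_SEEDS: temporal-elasticsearch # hostname of the ES service below
    ES_VERSION: v7
temporal-elasticsearch:
  image: elasticsearch:7.17.27       # assumed tag
  environment:
    discovery.type: single-node
    ES_JAVA_OPTS: "-Xms256m -Xmx512m"  # the 512mb bump from the commit
```

With `ENABLE_ES` unset, Temporal falls back to SQL visibility, which is where the 3-attribute limit mentioned in the commit applies.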
Yury Kossakovsky
a3e8f26925 fix(postiz): use correct temporal address env var 2026-01-16 20:27:06 -07:00
Yury Kossakovsky
917afe615c fix(temporal): use container ip for healthcheck connection 2026-01-16 20:15:03 -07:00
Yury Kossakovsky
641fd04290 fix(temporal): update healthcheck to use modern cli 2026-01-16 18:59:37 -07:00
Yury Kossakovsky
ca43e7ab12 docs(readme): add troubleshooting for update script issues 2026-01-16 18:48:31 -07:00
Yury Kossakovsky
e5db00098a refactor(docker-compose): extract logging config into yaml anchor 2026-01-16 18:45:30 -07:00
Yury Kossakovsky
4a6f1c0e01 feat(postiz): add temporal server for workflow orchestration
add temporal and temporal-ui services to the postiz profile for
workflow orchestration. includes caddy reverse proxy with basic
auth, secret generation, and welcome page integration.
2026-01-16 18:42:54 -07:00
Yury Kossakovsky
19cd6b6f91 docs(cloudflare): update tunnel instructions and add missing services
- update dashboard navigation to match current cloudflare ui
- add nocodb and welcome page to services table
- add notes explaining external compose files and caddy-served content
2026-01-13 08:40:36 -07:00
24 changed files with 1282 additions and 236 deletions

View File

@@ -314,14 +314,16 @@ ${SERVICE_NAME_UPPER}_PASSWORD=
${SERVICE_NAME_UPPER}_PASSWORD_HASH=
```
### 3.3 GOST_NO_PROXY (if using proxy-env)
### 3.3 GOST_NO_PROXY (REQUIRED for ALL services)
Add service to comma-separated list:
**CRITICAL:** Add ALL new service container names to the comma-separated list to prevent internal Docker traffic from going through the proxy:
```dotenv
GOST_NO_PROXY=localhost,127.0.0.1,...existing...,$ARGUMENTS
```
This applies to ALL services, not just those using `<<: *proxy-env`. Internal service-to-service communication must bypass the proxy.
---
## STEP 4: scripts/03_generate_secrets.sh
@@ -706,6 +708,7 @@ bash -n scripts/07_final_report.sh
- [ ] `docker-compose.yml`: caddy environment vars (if external)
- [ ] `Caddyfile`: reverse proxy block (if external)
- [ ] `.env.example`: hostname added
- [ ] `.env.example`: service added to `GOST_NO_PROXY` (ALL internal services must be listed)
- [ ] `scripts/03_generate_secrets.sh`: password in `VARS_TO_GENERATE`
- [ ] `scripts/04_wizard.sh`: service in `base_services_data`
- [ ] `scripts/generate_welcome_page.sh`: `SERVICES_ARRAY` entry
@@ -722,7 +725,6 @@ bash -n scripts/07_final_report.sh
### If Outbound Proxy (AI API calls)
- [ ] `docker-compose.yml`: `<<: *proxy-env` in environment
- [ ] `.env.example`: service added to `GOST_NO_PROXY`
- [ ] `docker-compose.yml`: healthcheck bypasses proxy
### If Database Required

View File

@@ -6,6 +6,27 @@
# Set to false to disable (default: true)
############
# SCARF_ANALYTICS=false
# Installation Mode Configuration
# INSTALL_MODE: vps | local
# vps - Production mode with real domain and Let's Encrypt SSL
# local - Local installation with .local domains (no SSL, HTTP only)
############
INSTALL_MODE=vps
# Caddy HTTPS mode (auto-set based on INSTALL_MODE during installation)
# VPS mode: on (Let's Encrypt certificates)
# Local mode: off (HTTP only)
CADDY_AUTO_HTTPS=on
# n8n cookie security (auto-set based on INSTALL_MODE)
# VPS mode: true (requires HTTPS)
# Local mode: false (allows HTTP)
N8N_SECURE_COOKIE=true
# Protocol for service URLs (auto-set based on INSTALL_MODE)
# VPS mode: https
# Local mode: http
PROTOCOL=https
############
# [required]
@@ -164,6 +185,7 @@ NOCODB_HOSTNAME=nocodb.yourdomain.com
PADDLEOCR_HOSTNAME=paddleocr.yourdomain.com
PORTAINER_HOSTNAME=portainer.yourdomain.com
POSTIZ_HOSTNAME=postiz.yourdomain.com
TEMPORAL_UI_HOSTNAME=temporal.yourdomain.com
PROMETHEUS_HOSTNAME=prometheus.yourdomain.com
QDRANT_HOSTNAME=qdrant.yourdomain.com
RAGAPP_HOSTNAME=ragapp.yourdomain.com
@@ -433,7 +455,7 @@ GOST_UPSTREAM_PROXY=
# Internal services bypass list (prevents internal Docker traffic from going through proxy)
# Includes: Docker internal networks (172.16-31.*, 10.*), Docker DNS (127.0.0.11), and all service hostnames
GOST_NO_PROXY=localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.local,postgres,postgres:5432,redis,redis:6379,caddy,ollama,neo4j,qdrant,weaviate,clickhouse,minio,searxng,crawl4ai,gotenberg,langfuse-web,langfuse-worker,flowise,n8n,n8n-import,n8n-worker-1,n8n-worker-2,n8n-worker-3,n8n-worker-4,n8n-worker-5,n8n-worker-6,n8n-worker-7,n8n-worker-8,n8n-worker-9,n8n-worker-10,n8n-runner-1,n8n-runner-2,n8n-runner-3,n8n-runner-4,n8n-runner-5,n8n-runner-6,n8n-runner-7,n8n-runner-8,n8n-runner-9,n8n-runner-10,letta,lightrag,docling,postiz,ragflow,ragflow-mysql,ragflow-minio,ragflow-redis,ragflow-elasticsearch,ragapp,open-webui,comfyui,waha,libretranslate,paddleocr,nocodb,db,studio,kong,auth,rest,realtime,storage,imgproxy,meta,functions,analytics,vector,supavisor,gost
GOST_NO_PROXY=localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.local,postgres,postgres:5432,redis,redis:6379,caddy,ollama,neo4j,qdrant,weaviate,clickhouse,minio,searxng,crawl4ai,gotenberg,langfuse-web,langfuse-worker,flowise,n8n,n8n-import,n8n-worker-1,n8n-worker-2,n8n-worker-3,n8n-worker-4,n8n-worker-5,n8n-worker-6,n8n-worker-7,n8n-worker-8,n8n-worker-9,n8n-worker-10,n8n-runner-1,n8n-runner-2,n8n-runner-3,n8n-runner-4,n8n-runner-5,n8n-runner-6,n8n-runner-7,n8n-runner-8,n8n-runner-9,n8n-runner-10,letta,lightrag,docling,postiz,temporal,temporal-ui,ragflow,ragflow-mysql,ragflow-minio,ragflow-redis,ragflow-elasticsearch,ragapp,open-webui,comfyui,waha,libretranslate,paddleocr,nocodb,db,studio,kong,auth,rest,realtime,storage,imgproxy,meta,functions,analytics,vector,supavisor,gost
############
# Functions - Configuration for Functions
@@ -489,6 +511,13 @@ RAGAPP_PASSWORD_HASH=
POSTIZ_DISABLE_REGISTRATION=false
############
# Temporal UI credentials (for Caddy basic auth)
############
TEMPORAL_UI_USERNAME=
TEMPORAL_UI_PASSWORD=
TEMPORAL_UI_PASSWORD_HASH=
############
# Postiz Social Media Integrations
# Leave blank if not used. Provide credentials from each platform.

.gitignore
View File

@@ -21,3 +21,5 @@ certs/*
# Custom Caddy addons (user configurations)
caddy-addon/*.conf
!caddy-addon/*.example
hosts.txt
welcome/data.json

View File

@@ -1,5 +1,12 @@
# Changelog
## [Unreleased]
## [1.2.1] - 2026-01-16
### Added
- **Temporal** - Temporal server and UI for Postiz workflow orchestration (#33)
## [1.2.0] - 2026-01-12
### Added

View File

@@ -17,9 +17,10 @@ This is **n8n-install**, a Docker Compose-based installer that provides a compre
- `Makefile`: Common commands (install, update, logs, etc.)
- `docker-compose.yml`: Service definitions with profiles
- `Caddyfile`: Reverse proxy configuration with automatic HTTPS
- `Caddyfile`: Reverse proxy configuration (HTTPS for VPS, HTTP for local via `CADDY_AUTO_HTTPS`)
- `.env`: Generated secrets and configuration (from `.env.example`)
- `scripts/install.sh`: Main installation orchestrator (runs numbered scripts 01-08 in sequence)
- `scripts/install-vps.sh`: VPS installation orchestrator (runs scripts 01-08 in sequence)
- `scripts/install-local.sh`: Local installation orchestrator (inline prerequisites check, then 03-08)
- `scripts/utils.sh`: Shared utility functions (sourced by all scripts via `source "$(dirname "$0")/utils.sh" && init_paths`)
- `scripts/01_system_preparation.sh`: System updates, firewall, security hardening
- `scripts/02_install_docker.sh`: Docker and Docker Compose installation
@@ -30,6 +31,7 @@ This is **n8n-install**, a Docker Compose-based installer that provides a compre
- `scripts/databases.sh`: Creates isolated PostgreSQL databases for services (library)
- `scripts/telemetry.sh`: Anonymous telemetry functions (Scarf integration)
- `scripts/06_run_services.sh`: Starts Docker Compose stack
- `scripts/generate_hosts.sh`: Generates /etc/hosts entries for local mode
- `scripts/07_final_report.sh`: Post-install credential summary
- `scripts/08_fix_permissions.sh`: Fixes file ownership for non-root access
- `scripts/generate_n8n_workers.sh`: Generates dynamic worker/runner compose file
@@ -45,8 +47,17 @@ This is **n8n-install**, a Docker Compose-based installer that provides a compre
### Installation Flow
`scripts/install.sh` orchestrates the installation by running numbered scripts in sequence:
Two independent installation paths:
```
make install-vps → install-vps.sh (Ubuntu VPS)
└── 01 → 02 → 03 → 04 → 05 → 06 → 07 → 08
make install-local → install-local.sh (macOS/Linux local)
└── [inline prereq check] → 03 → 04 → 05 → 06 → 07 → 08
```
**VPS Installation** (`install-vps.sh`):
1. `01_system_preparation.sh` - System updates, firewall, security hardening
2. `02_install_docker.sh` - Docker and Docker Compose installation
3. `03_generate_secrets.sh` - Generate passwords, API keys, bcrypt hashes
@@ -56,8 +67,23 @@ This is **n8n-install**, a Docker Compose-based installer that provides a compre
7. `07_final_report.sh` - Display credentials and URLs
8. `08_fix_permissions.sh` - Fix file ownership for non-root access
**Local Installation** (`install-local.sh`):
1. Inline prerequisites check - Verify Docker, whiptail, openssl, git
2. `03_generate_secrets.sh` - Generate passwords, API keys, bcrypt hashes
3. `04_wizard.sh` - Interactive service selection (whiptail UI)
4. `05_configure_services.sh` - Service-specific configuration
5. `06_run_services.sh` - Start Docker Compose stack
6. `07_final_report.sh` - Display credentials and URLs
The update flow (`scripts/update.sh`) similarly orchestrates: git fetch + reset → service selection → `apply_update.sh` → restart.
### Installation Modes
Two independent entry points (no shared entrypoint):
- **VPS Mode** (`make install-vps`): Production with real domains, Let's Encrypt SSL, full system prep. Run with `sudo`.
- **Local Mode** (`make install-local`): Development with `.local` domains, HTTP only, no system prep. Requires Docker pre-installed.
## Common Development Commands
### Makefile Commands
@@ -65,6 +91,9 @@ The update flow (`scripts/update.sh`) similarly orchestrates: git fetch + reset
```bash
make install # Full installation (runs scripts/install.sh)
make update # Update system and services (resets to origin)
make install-vps # VPS installation (Ubuntu with SSL)
make install-local # Local installation (macOS/Linux, no sudo)
make update # Update system and services
make update-preview # Preview available updates (dry-run)
make git-pull # Update for forks (merges from upstream/main)
make clean # Remove unused Docker resources (preserves data)
@@ -189,7 +218,7 @@ Key functions:
- `load_env` - Source .env file to make variables available
- `update_compose_profiles "profile1,profile2"` - Update COMPOSE_PROFILES in .env
- `gen_password 32` / `gen_hex 64` / `gen_base64 64` - Secret generation
- `generate_bcrypt_hash "password"` - Create Caddy-compatible bcrypt hash (uses Caddy binary)
- `generate_bcrypt_hash "password"` - Create Caddy-compatible bcrypt hash (uses Caddy Docker image)
- `json_escape "string"` - Escape string for JSON output
- `wt_input`, `wt_password`, `wt_yesno`, `wt_msg` - Whiptail dialog wrappers
- `wt_checklist`, `wt_radiolist`, `wt_menu` - Whiptail selection dialogs
@@ -203,6 +232,22 @@ Key functions:
- `get_n8n_workers_compose` / `get_supabase_compose` / `get_dify_compose` - Get compose file path if profile active AND file exists
- `build_compose_files_array` - Build global `COMPOSE_FILES` array with all active compose files (main + external)
### Local Mode Utilities (scripts/local.sh)
Source with: `source "$SCRIPT_DIR/local.sh"` (after sourcing utils.sh)
Encapsulates all VPS vs Local mode logic in one place:
- `get_install_mode` - Get current mode from INSTALL_MODE env or .env file (defaults to "vps")
- `is_local_mode` / `is_vps_mode` - Check current installation mode
- `get_protocol` - Get protocol based on mode ("http" for local, "https" for VPS)
- `get_caddy_auto_https` - Get Caddy setting ("off" for local, "on" for VPS)
- `get_n8n_secure_cookie` - Get cookie security ("false" for local, "true" for VPS)
- `get_local_domain` - Get default domain for local mode (".local")
- `configure_mode_env` - Populate associative array with all mode-specific settings
- `get_all_hostname_vars` - Get list of all hostname variables (single source of truth)
- `print_local_hosts_instructions` - Display /etc/hosts setup instructions
- `check_local_prerequisites` - Verify Docker, whiptail, openssl, git are installed
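The pattern behind these helpers can be sketched as plain mode-dispatch functions. This is a simplified illustration, not the actual `local.sh` implementation: function names match the list above, but the bodies are assumptions based on the documented return values.

```shell
# Simplified sketch of the mode-dispatch pattern local.sh encapsulates.
get_install_mode()      { echo "${INSTALL_MODE:-vps}"; }   # real version also reads .env
is_local_mode()         { [ "$(get_install_mode)" = "local" ]; }
get_protocol()          { if is_local_mode; then echo "http";  else echo "https"; fi; }
get_caddy_auto_https()  { if is_local_mode; then echo "off";   else echo "on";    fi; }
get_n8n_secure_cookie() { if is_local_mode; then echo "false"; else echo "true";  fi; }

INSTALL_MODE=local
echo "local: $(get_protocol) / auto_https=$(get_caddy_auto_https)"
INSTALL_MODE=vps
echo "vps:   $(get_protocol) / auto_https=$(get_caddy_auto_https)"
```

Keeping every mode-dependent value behind one of these functions is what lets the numbered scripts stay mode-agnostic.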
### Service Profiles
Common profiles:
@@ -305,8 +350,8 @@ These are backed up before `git reset --hard` and restored after.
- Verify `LETSENCRYPT_EMAIL` is set in `.env`
### Password hash generation fails
- Ensure Caddy container is running: `docker compose -p localai up -d caddy`
- Script uses: `docker exec caddy caddy hash-password --plaintext "$password"`
- Ensure Docker is running (bcrypt hashes are generated via `docker run caddy:latest`)
- Script uses: `docker run --rm caddy:latest caddy hash-password --algorithm bcrypt --plaintext "$password"`
## File Locations
@@ -326,6 +371,7 @@ docker compose -p localai config --quiet
# Bash script syntax (validate all key scripts)
bash -n scripts/utils.sh
bash -n scripts/local.sh
bash -n scripts/git.sh
bash -n scripts/databases.sh
bash -n scripts/telemetry.sh
@@ -337,7 +383,8 @@ bash -n scripts/generate_welcome_page.sh
bash -n scripts/generate_n8n_workers.sh
bash -n scripts/apply_update.sh
bash -n scripts/update.sh
bash -n scripts/install.sh
bash -n scripts/install-vps.sh
bash -n scripts/install-local.sh
```
### Full Testing

View File

@@ -1,32 +1,32 @@
{
# Global options - works for both environments
# Global options - works for both VPS and local environments
# CADDY_AUTO_HTTPS: "on" for VPS (Let's Encrypt), "off" for local (HTTP only)
auto_https {$CADDY_AUTO_HTTPS:on}
email {$LETSENCRYPT_EMAIL}
}
# N8N
{$N8N_HOSTNAME} {
# For domains, Caddy will automatically use Let's Encrypt
# For localhost/port addresses, HTTPS won't be enabled
http://{$N8N_HOSTNAME} {
reverse_proxy n8n:5678
}
# Open WebUI
{$WEBUI_HOSTNAME} {
http://{$WEBUI_HOSTNAME} {
reverse_proxy open-webui:8080
}
# Flowise
{$FLOWISE_HOSTNAME} {
http://{$FLOWISE_HOSTNAME} {
reverse_proxy flowise:3001
}
# Dify
{$DIFY_HOSTNAME} {
http://{$DIFY_HOSTNAME} {
reverse_proxy nginx:80
}
# RAGApp
{$RAGAPP_HOSTNAME} {
http://{$RAGAPP_HOSTNAME} {
basic_auth {
{$RAGAPP_USERNAME} {$RAGAPP_PASSWORD_HASH}
}
@@ -34,80 +34,83 @@
}
# RAGFlow
{$RAGFLOW_HOSTNAME} {
http://{$RAGFLOW_HOSTNAME} {
reverse_proxy ragflow:80
}
# Langfuse
{$LANGFUSE_HOSTNAME} {
http://{$LANGFUSE_HOSTNAME} {
reverse_proxy langfuse-web:3000
}
# # Ollama API
# {$OLLAMA_HOSTNAME} {
# reverse_proxy ollama:11434
# }
# Supabase
{$SUPABASE_HOSTNAME} {
http://{$SUPABASE_HOSTNAME} {
reverse_proxy kong:8000
}
# Grafana
{$GRAFANA_HOSTNAME} {
http://{$GRAFANA_HOSTNAME} {
reverse_proxy grafana:3000
}
# WAHA (WhatsApp HTTP API)
{$WAHA_HOSTNAME} {
http://{$WAHA_HOSTNAME} {
reverse_proxy waha:3000
}
# Prometheus
{$PROMETHEUS_HOSTNAME} {
basic_auth {
http://{$PROMETHEUS_HOSTNAME} {
basic_auth {
{$PROMETHEUS_USERNAME} {$PROMETHEUS_PASSWORD_HASH}
}
reverse_proxy prometheus:9090
}
# Portainer
{$PORTAINER_HOSTNAME} {
http://{$PORTAINER_HOSTNAME} {
reverse_proxy portainer:9000
}
# Postiz
{$POSTIZ_HOSTNAME} {
http://{$POSTIZ_HOSTNAME} {
reverse_proxy postiz:5000
}
# Temporal UI (workflow orchestration for Postiz)
{$TEMPORAL_UI_HOSTNAME} {
basic_auth {
{$TEMPORAL_UI_USERNAME} {$TEMPORAL_UI_PASSWORD_HASH}
}
reverse_proxy temporal-ui:8080
}
# Databasus
{$DATABASUS_HOSTNAME} {
reverse_proxy databasus:4005
}
# Letta
{$LETTA_HOSTNAME} {
http://{$LETTA_HOSTNAME} {
reverse_proxy letta:8283
}
# LightRAG (Graph-based RAG with Knowledge Extraction)
{$LIGHTRAG_HOSTNAME} {
http://{$LIGHTRAG_HOSTNAME} {
reverse_proxy lightrag:9621
}
# Weaviate
{$WEAVIATE_HOSTNAME} {
http://{$WEAVIATE_HOSTNAME} {
reverse_proxy weaviate:8080
}
# Qdrant
{$QDRANT_HOSTNAME} {
http://{$QDRANT_HOSTNAME} {
reverse_proxy qdrant:6333
}
# ComfyUI
{$COMFYUI_HOSTNAME} {
http://{$COMFYUI_HOSTNAME} {
basic_auth {
{$COMFYUI_USERNAME} {$COMFYUI_PASSWORD_HASH}
}
@@ -115,30 +118,30 @@
}
# LibreTranslate (Self-hosted Translation API)
{$LT_HOSTNAME} {
http://{$LT_HOSTNAME} {
basic_auth {
{$LT_USERNAME} {$LT_PASSWORD_HASH}
}
reverse_proxy libretranslate:5000
}
# Neo4j
{$NEO4J_HOSTNAME} {
# Neo4j Browser
http://{$NEO4J_HOSTNAME} {
reverse_proxy neo4j:7474
}
# Neo4j Bolt Protocol (wss)
https://{$NEO4J_HOSTNAME}:7687 {
# Neo4j Bolt Protocol
{$NEO4J_HOSTNAME}:7687 {
reverse_proxy neo4j:7687
}
# NocoDB
{$NOCODB_HOSTNAME} {
http://{$NOCODB_HOSTNAME} {
reverse_proxy nocodb:8080
}
# PaddleOCR (PaddleX Basic Serving)
{$PADDLEOCR_HOSTNAME} {
http://{$PADDLEOCR_HOSTNAME} {
basic_auth {
{$PADDLEOCR_USERNAME} {$PADDLEOCR_PASSWORD_HASH}
}
@@ -146,7 +149,7 @@ https://{$NEO4J_HOSTNAME}:7687 {
}
# Docling (Document Conversion API)
{$DOCLING_HOSTNAME} {
http://{$DOCLING_HOSTNAME} {
basic_auth {
{$DOCLING_USERNAME} {$DOCLING_PASSWORD_HASH}
}
@@ -154,6 +157,17 @@ https://{$NEO4J_HOSTNAME}:7687 {
}
# Welcome Page (Post-install dashboard)
# HTTP block for Cloudflare Tunnel access (prevents redirect loop)
http://{$WELCOME_HOSTNAME} {
basic_auth {
{$WELCOME_USERNAME} {$WELCOME_PASSWORD_HASH}
}
root * /srv/welcome
file_server
try_files {path} /index.html
}
# HTTPS block for direct access
{$WELCOME_HOSTNAME} {
basic_auth {
{$WELCOME_USERNAME} {$WELCOME_PASSWORD_HASH}
@@ -165,8 +179,8 @@ https://{$NEO4J_HOSTNAME}:7687 {
import /etc/caddy/addons/*.conf
# # SearXNG
{$SEARXNG_HOSTNAME} {
# SearXNG
http://{$SEARXNG_HOSTNAME} {
@protected not remote_ip 127.0.0.0/8 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 100.64.0.0/10
basic_auth @protected {
@@ -174,7 +188,7 @@ import /etc/caddy/addons/*.conf
}
encode zstd gzip
@api {
path /config
path /healthz
@@ -190,7 +204,7 @@ import /etc/caddy/addons/*.conf
@static {
path /static/*
}
header {
# CSP (https://content-security-policy.com)
Content-Security-Policy "upgrade-insecure-requests; default-src 'none'; script-src 'self'; style-src 'self' 'unsafe-inline'; form-action 'self' https://github.com/searxng/searxng/issues/new; font-src 'self'; frame-ancestors 'self'; base-uri 'self'; connect-src 'self' https://overpass-api.de; img-src * data:; frame-src https://www.youtube-nocookie.com https://player.vimeo.com https://www.dailymotion.com https://www.deezer.com https://www.mixcloud.com https://w.soundcloud.com https://embed.spotify.com;"
@@ -207,12 +221,12 @@ import /etc/caddy/addons/*.conf
# Remove "Server" header
-Server
}
header @api {
Access-Control-Allow-Methods "GET, OPTIONS"
Access-Control-Allow-Origin "*"
}
route {
# Cache policy
header Cache-Control "max-age=0, no-store"
@@ -220,7 +234,7 @@ import /etc/caddy/addons/*.conf
header @imageproxy Cache-Control "max-age=604800, public"
header @static Cache-Control "max-age=31536000, public, immutable"
}
# SearXNG (uWSGI)
reverse_proxy searxng:8080 {
header_up X-Forwarded-Port {http.request.port}

View File

@@ -2,11 +2,15 @@
PROJECT_NAME := localai
# Detect bash 4+ (macOS ships with bash 3.2, need Homebrew bash; Ubuntu 24+ has bash 5.x)
BASH_CMD := $(shell command -v /opt/homebrew/bin/bash 2>/dev/null || command -v /usr/local/bin/bash 2>/dev/null || echo bash)
help:
@echo "n8n-install - Available commands:"
@echo ""
@echo " make install Full installation"
@echo " make update Update system and services (resets to origin)"
@echo " make install-vps VPS installation (Ubuntu with SSL)"
@echo " make install-local Local installation (macOS/Linux, no sudo)"
@echo " make update Update system and services"
@echo " make update-preview Preview available updates (dry-run)"
@echo " make git-pull Update for forks (merges from upstream)"
@echo " make clean Remove unused Docker resources (preserves data)"
@@ -28,8 +32,11 @@ help:
@echo " make switch-beta Switch to beta (develop branch)"
@echo " make switch-stable Switch to stable (main branch)"
install:
sudo bash ./scripts/install.sh
install-vps:
sudo bash ./scripts/install-vps.sh
install-local:
$(BASH_CMD) ./scripts/install-local.sh
update:
sudo bash ./scripts/update.sh

View File

@@ -137,19 +137,25 @@ Get started quickly with a vast library of pre-built automations (optional impor
1. **Domain Name:** You need a registered domain name (e.g., `yourdomain.com`).
2. **DNS Configuration:** Before running the installation script, you **must** configure DNS A-record for your domain, pointing to the public IP address of the server where you'll install this system. Replace `yourdomain.com` with your actual domain:
- **Wildcard Record:** `A *.yourdomain.com` -> `YOUR_SERVER_IP`
3. **Server:** Minimum server system requirements: Ubuntu 24.04 LTS, 64-bit.
- For running **all available services**: at least **20 GB Memory / 4 CPU Cores / 60 GB Disk Space**.
- For a minimal setup with **n8n, Monitoring, Databasus and Portainer**: **4 GB Memory / 2 CPU Cores / 40 GB Disk Space**.
3. **VPS (Virtual Private Server):** A dedicated VPS with a public IP address is required. Home servers, shared hosting, or localhost setups are not supported.
- **Operating System:** Ubuntu 24.04 LTS, 64-bit
- For a minimal setup with **n8n, Monitoring, Databasus and Portainer**: **4 GB Memory / 2 CPU Cores / 40 GB Disk Space**
- For running **all available services**: at least **20 GB Memory / 4 CPU Cores / 60 GB Disk Space**
### Running the Install
The recommended way to install is using the provided main installation script.
The recommended way to install on a VPS is using the provided installation script.
1. Connect to your server via SSH.
2. Run the following command:
```bash
git clone https://github.com/kossakovsky/n8n-install && cd n8n-install && sudo bash ./scripts/install.sh
git clone https://github.com/kossakovsky/n8n-install && cd n8n-install && make install-vps
```
Or run directly:
```bash
sudo bash ./scripts/install-vps.sh
```
This single command automates the entire setup process, including:
@@ -161,16 +167,60 @@ This single command automates the entire setup process, including:
During the installation, the script will prompt you for:
1. Your **primary domain name** (Required, e.g., `yourdomain.com`). This is the domain for which you've configured the wildcard DNS record.
2. Your **email address** (Required, used for service logins like Flowise, Supabase dashboard, Grafana, and for SSL certificate registration with Let's Encrypt).
1. Your **primary domain name** (e.g., `yourdomain.com`). This is the domain for which you've configured the wildcard DNS record.
2. Your **email address** (used for service logins like Flowise, Supabase dashboard, Grafana, and for SSL certificate registration with Let's Encrypt).
3. An optional **OpenAI API key** (Not required. If provided, it can be used by Supabase AI features and Crawl4ai. Press Enter to skip).
4. Whether you want to **import ~300 ready-made n8n community workflows** (y/n, Optional. This can take 20-30 minutes, depending on your server and network speed).
5. The **number of n8n workers** you want to run (Required, e.g., 1, 2, 3, 4. This determines how many workflows can be processed in parallel. Each worker automatically gets its own dedicated task runner sidecar for executing Code nodes. Defaults to 1 if not specified).
5. The **number of n8n workers** you want to run (e.g., 1, 2, 3, 4. This determines how many workflows can be processed in parallel. Each worker automatically gets its own dedicated task runner sidecar for executing Code nodes. Defaults to 1 if not specified).
6. A **Service Selection Wizard** will then appear, allowing you to choose which of the available services (like Flowise, Supabase, Qdrant, Open WebUI, etc.) you want to deploy. Core services (Caddy, Postgres, Redis) will be set up to support your selections.
Upon successful completion, the script will display a summary report. This report contains the access URLs and credentials for the deployed services. **Save this information in a safe place!**
## Quick Start and Usage
### Local Installation
You can also run this project on your local machine (macOS, Linux, or Windows via WSL2) for development and testing purposes.
#### Prerequisites
- Docker and Docker Compose installed and running
- Bash 4.0+ (macOS ships with bash 3.2; install a modern bash with `brew install bash`)
- Python 3 with PyYAML (`pip3 install pyyaml` - auto-installed if missing)
- `whiptail` installed (macOS: `brew install newt`, Linux: `apt install whiptail`)
- `openssl` and `git` installed
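The Python/PyYAML prerequisite can be verified up front. A hedged sketch (the installer performs its own check and auto-installs PyYAML when missing, so this is only a manual pre-flight):

```shell
# Check the PyYAML prerequisite before running the local installer.
if python3 -c 'import yaml' 2>/dev/null; then
  pyyaml_status="available"
else
  pyyaml_status="missing - run: pip3 install pyyaml"
fi
echo "PyYAML ${pyyaml_status}"
```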
#### Running Local Install
```bash
git clone https://github.com/kossakovsky/n8n-install && cd n8n-install
make install-local
```
This automatically uses Homebrew bash on macOS (required for bash 4+ features).
The local installer will:
- Check prerequisites (Docker, whiptail, openssl, git, python3)
- Use `.local` domains (e.g., `n8n.local`, `flowise.local`)
- Configure Caddy for HTTP only (no SSL certificates)
- Skip system preparation and Docker installation (assumes Docker is pre-installed)
#### After Installation
Add the generated host entries to your system:
```bash
# The installer generates a hosts.txt file with all required entries
sudo bash -c 'cat hosts.txt >> /etc/hosts'
# Flush DNS cache
# macOS:
sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder
# Linux:
sudo resolvectl flush-caches   # older systems: sudo systemd-resolve --flush-caches
```
Access your services at `http://n8n.local`, `http://flowise.local`, etc.
## ⚡️ Quick Start and Usage
After successful installation, your services are up and running! Here's how to get started:
@@ -303,8 +353,9 @@ The project includes a Makefile for simplified command execution:
| Command | Description |
| --------------------- | ---------------------------------------------------- |
| `make install` | Full installation |
| `make update` | Update system and services (resets to origin) |
| `make install-vps` | VPS installation (Ubuntu with SSL) |
| `make install-local` | Local installation (macOS/Linux, no sudo) |
| `make update` | Update system and services |
| `make update-preview` | Preview available updates without applying (dry-run) |
| `make git-pull` | Update for forks (merges from upstream/main) |
| `make clean` | Remove unused Docker resources |
@@ -367,6 +418,18 @@ Here are solutions to common issues you might encounter:
- **VPN Conflicts:** Using a VPN might interfere with downloading Docker images. If you encounter issues pulling images, try temporarily disabling your VPN.
- **Server Requirements:** If you experience unexpected issues, ensure your server meets the minimum hardware and operating system requirements (including version) as specified in the "Prerequisites before Installation" section.
### Update Script Not Working
- **Symptom:** The `make update` command fails, shows errors, or doesn't apply the latest changes.
- **Cause:** This can happen if your local repository has diverged from the upstream, has uncommitted changes, or is in an inconsistent state.
- **Solution:** Run the following command to force-sync your local installation with the latest version:
```bash
git config pull.rebase true && git fetch origin && git checkout main && git reset --hard "origin/main" && make update
```
**Warning:** This will discard any local changes you've made to the installer files. If you've customized any scripts or configurations, back them up first.
## Recommended Reading
n8n offers excellent resources for getting started with its AI capabilities:

View File

@@ -22,8 +22,8 @@ Cloudflare Tunnel **bypasses Caddy** and connects directly to your services. Thi
1. Go to [Cloudflare One Dashboard](https://one.dash.cloudflare.com/)
2. Navigate to **Networks** → **Connectors** → **Cloudflare Tunnels**
3. Click **Create new cloudflared Tunnel**
4. Choose **Cloudflared** connector and click **Next**
3. Click **Create a tunnel**
4. Select **Cloudflared** as the connector type and click **Next**
5. Name your tunnel (e.g., "n8n-install") and click **Save tunnel**
6. Copy the installation command shown - it contains your tunnel token
@@ -106,7 +106,7 @@ dig NS yourdomain.com +short
#### 3. Configure Public Hostnames
After DNS is configured, go to **Cloudflare Zero Trust** → **Networks** → **Tunnels** → your tunnel → **Public Hostname** tab. For each service you want to expose, click **Add a public hostname** and configure:
After DNS is configured, go to **Cloudflare One Dashboard** → **Networks** → **Connectors** → **Cloudflare Tunnels** → your tunnel → **Public Hostname** tab. For each service you want to expose, click **Add a public hostname** and configure:
| Service | Public Hostname | Service URL | Auth Notes |
| ------------------ | ----------------------------- | ---------------------------- | ------------------- |
@@ -122,6 +122,7 @@ After DNS is configured, go to **Cloudflare Zero Trust** → **Networks** → **
| **LibreTranslate** | libretranslate.yourdomain.com | `http://libretranslate:5000` | ⚠️ Loses Caddy auth |
| **LightRAG** | lightrag.yourdomain.com | `http://lightrag:9621` | No auth |
| **Neo4j** | neo4j.yourdomain.com | `http://neo4j:7474` | Built-in login |
| **NocoDB** | nocodb.yourdomain.com | `http://nocodb:8080` | Built-in login |
| **Open WebUI** | webui.yourdomain.com | `http://open-webui:8080` | Built-in login |
| **PaddleOCR** | paddleocr.yourdomain.com | `http://paddleocr:8080` | ⚠️ Loses Caddy auth |
| **Portainer** | portainer.yourdomain.com | `http://portainer:9000` | Built-in login |
@@ -134,6 +135,11 @@ After DNS is configured, go to **Cloudflare Zero Trust** → **Networks** → **
| **Supabase** ¹ | supabase.yourdomain.com | `http://kong:8000` | Built-in login |
| **WAHA** | waha.yourdomain.com | `http://waha:3000` | API key recommended |
| **Weaviate** | weaviate.yourdomain.com | `http://weaviate:8080` | API key recommended |
| **Welcome Page** ² | welcome.yourdomain.com | `http://caddy:80` | ⚠️ Loses Caddy auth |
**Notes:**
- ¹ Dify and Supabase use external compose files from adjacent directories
- ² Welcome Page is served by Caddy as static content; tunnel proxies through Caddy
**⚠️ Security Warning:**
- Services marked **"Loses Caddy auth"** have basic authentication via Caddy that is bypassed by the tunnel. Use [Cloudflare Access](https://developers.cloudflare.com/cloudflare-one/applications/) or keep them internal.
@@ -181,7 +187,7 @@ You have two options for accessing your services:
For services that lose Caddy's basic auth protection, you can add Cloudflare Access:
1. In **Cloudflare One Dashboard** → **Access controls** → **Applications**
1. In **Cloudflare One Dashboard** → **Access** → **Applications** (or **Access controls** → **Applications** depending on your dashboard version)
2. Click **Add an application** → **Self-hosted**
3. Configure:
- **Application name**: e.g., "Prometheus"

View File

@@ -33,9 +33,17 @@ volumes:
ragflow_minio_data:
ragflow_mysql_data:
ragflow_redis_data:
temporal_elasticsearch_data:
valkey-data:
weaviate_data:
# Shared logging configuration for services
x-logging: &default-logging
driver: "json-file"
options:
max-size: "1m"
max-file: "1"
# Shared proxy configuration for services that need outbound proxy support
x-proxy-env: &proxy-env
HTTP_PROXY: ${GOST_PROXY_URL:-}
@@ -90,6 +98,7 @@ x-n8n: &service-n8n
N8N_SMTP_SSL: ${N8N_SMTP_SSL:-true}
N8N_SMTP_STARTTLS: ${N8N_SMTP_STARTTLS:-true}
N8N_SMTP_USER: ${N8N_SMTP_USER:-}
N8N_SECURE_COOKIE: ${N8N_SECURE_COOKIE:-true}
N8N_TRUST_PROXY: true
N8N_USER_MANAGEMENT_JWT_SECRET: ${N8N_USER_MANAGEMENT_JWT_SECRET}
NODE_ENV: production
@@ -97,7 +106,7 @@ x-n8n: &service-n8n
QUEUE_BULL_REDIS_HOST: ${REDIS_HOST:-redis}
QUEUE_BULL_REDIS_PORT: ${REDIS_PORT:-6379}
QUEUE_HEALTH_CHECK_ACTIVE: true
-    WEBHOOK_URL: ${N8N_HOSTNAME:+https://}${N8N_HOSTNAME:-http://localhost:5678}/
+    WEBHOOK_URL: ${PROTOCOL:-https}://${N8N_HOSTNAME}/
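The WEBHOOK_URL swap trades two different parameter expansions for a single PROTOCOL variable; the behavior difference can be sketched in plain shell (hostnames here are placeholder values):

```shell
# Old form: ${VAR:+word} expands to "word" only when VAR is set and non-empty,
# while ${VAR:-word} substitutes "word" when VAR is unset or empty.
N8N_HOSTNAME="n8n.example.com"
old_url="${N8N_HOSTNAME:+https://}${N8N_HOSTNAME:-http://localhost:5678}/"

unset N8N_HOSTNAME
old_fallback="${N8N_HOSTNAME:+https://}${N8N_HOSTNAME:-http://localhost:5678}/"

# New form: the protocol comes from one variable with an https default,
# which also covers the http-only local installation mode.
PROTOCOL=""
N8N_HOSTNAME="n8n.example.com"
new_url="${PROTOCOL:-https}://${N8N_HOSTNAME}/"
```

The new form drops the localhost fallback, so it assumes N8N_HOSTNAME is always set — which the secrets script now guarantees in both modes.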
x-ollama: &service-ollama
image: ollama/ollama:latest
@@ -274,15 +283,11 @@ services:
container_name: nocodb
profiles: ["nocodb"]
restart: unless-stopped
-    logging:
-      driver: "json-file"
-      options:
-        max-size: "1m"
-        max-file: "1"
+    logging: *default-logging
environment:
NC_AUTH_JWT_SECRET: ${NOCODB_JWT_SECRET}
NC_DB: pg://postgres:5432?u=postgres&p=${POSTGRES_PASSWORD}&d=nocodb
-      NC_PUBLIC_URL: https://${NOCODB_HOSTNAME}
+      NC_PUBLIC_URL: ${PROTOCOL:-https}://${NOCODB_HOSTNAME}
NC_REDIS_URL: redis://redis:6379
volumes:
- nocodb_data:/usr/app/data
@@ -314,6 +319,7 @@ services:
- caddy-data:/data:rw
- caddy-config:/config:rw
environment:
CADDY_AUTO_HTTPS: ${CADDY_AUTO_HTTPS:-on}
COMFYUI_HOSTNAME: ${COMFYUI_HOSTNAME}
COMFYUI_PASSWORD_HASH: ${COMFYUI_PASSWORD_HASH}
COMFYUI_USERNAME: ${COMFYUI_USERNAME}
@@ -339,6 +345,9 @@ services:
PORTAINER_HOSTNAME: ${PORTAINER_HOSTNAME}
DATABASUS_HOSTNAME: ${DATABASUS_HOSTNAME}
POSTIZ_HOSTNAME: ${POSTIZ_HOSTNAME}
TEMPORAL_UI_HOSTNAME: ${TEMPORAL_UI_HOSTNAME}
TEMPORAL_UI_USERNAME: ${TEMPORAL_UI_USERNAME}
TEMPORAL_UI_PASSWORD_HASH: ${TEMPORAL_UI_PASSWORD_HASH}
PROMETHEUS_HOSTNAME: ${PROMETHEUS_HOSTNAME}
PROMETHEUS_PASSWORD_HASH: ${PROMETHEUS_PASSWORD_HASH}
PROMETHEUS_USERNAME: ${PROMETHEUS_USERNAME}
@@ -361,11 +370,7 @@ services:
- ALL
cap_add:
- NET_BIND_SERVICE
-    logging:
-      driver: "json-file"
-      options:
-        max-size: "1m"
-        max-file: "1"
+    logging: *default-logging
cloudflared:
image: cloudflare/cloudflared:latest
@@ -375,11 +380,7 @@ services:
command: tunnel --no-autoupdate run
environment:
TUNNEL_TOKEN: ${CLOUDFLARE_TUNNEL_TOKEN}
-    logging:
-      driver: "json-file"
-      options:
-        max-size: "1m"
-        max-file: "1"
+    logging: *default-logging
gost:
image: gogost/gost:latest
@@ -397,11 +398,7 @@ services:
timeout: 10s
retries: 3
start_period: 10s
-    logging:
-      driver: "json-file"
-      options:
-        max-size: "1m"
-        max-file: "1"
+    logging: *default-logging
langfuse-worker:
image: langfuse/langfuse-worker:3
@@ -470,7 +467,7 @@ services:
depends_on: *langfuse-depends-on
environment:
<<: *langfuse-worker-env
-      NEXTAUTH_URL: https://${LANGFUSE_HOSTNAME}
+      NEXTAUTH_URL: ${PROTOCOL:-https}://${LANGFUSE_HOSTNAME}
NEXTAUTH_SECRET: ${NEXTAUTH_SECRET}
LANGFUSE_INIT_ORG_ID: ${LANGFUSE_INIT_ORG_ID:-organization_id}
LANGFUSE_INIT_ORG_NAME: ${LANGFUSE_INIT_ORG_NAME:-Organization}
@@ -553,11 +550,7 @@ services:
- SETGID
- SETUID
- DAC_OVERRIDE
-    logging:
-      driver: "json-file"
-      options:
-        max-size: "1m"
-        max-file: "1"
+    logging: *default-logging
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 3s
@@ -572,7 +565,7 @@ services:
volumes:
- ./searxng:/etc/searxng:rw
environment:
-      SEARXNG_BASE_URL: https://${SEARXNG_HOSTNAME:-localhost}/
+      SEARXNG_BASE_URL: ${PROTOCOL:-https}://${SEARXNG_HOSTNAME:-localhost}/
UWSGI_WORKERS: ${SEARXNG_UWSGI_WORKERS:-4}
UWSGI_THREADS: ${SEARXNG_UWSGI_THREADS:-4}
# cap_drop: - ALL # Temporarily commented out for first run
@@ -580,11 +573,7 @@ services:
- CHOWN
- SETGID
- SETUID
-    logging:
-      driver: "json-file"
-      options:
-        max-size: "1m"
-        max-file: "1"
+    logging: *default-logging
ollama-cpu:
profiles: ["cpu"]
@@ -778,6 +767,70 @@ services:
- portainer_data:/data
- ${DOCKER_SOCKET_LOCATION:-/var/run/docker.sock}:/var/run/docker.sock
temporal-elasticsearch:
image: elasticsearch:7.17.27
container_name: temporal-elasticsearch
profiles: ["postiz"]
restart: unless-stopped
logging: *default-logging
environment:
cluster.routing.allocation.disk.threshold_enabled: "true"
cluster.routing.allocation.disk.watermark.low: 512mb
cluster.routing.allocation.disk.watermark.high: 256mb
cluster.routing.allocation.disk.watermark.flood_stage: 128mb
discovery.type: single-node
ES_JAVA_OPTS: -Xms512m -Xmx512m
xpack.security.enabled: "false"
volumes:
- temporal_elasticsearch_data:/usr/share/elasticsearch/data
healthcheck:
test: ["CMD-SHELL", "curl -s http://localhost:9200/_cluster/health | grep -qE '\"status\":\"(green|yellow)\"'"]
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
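The grep in this healthcheck accepts either a green or yellow cluster status; the same pattern can be exercised against a canned response (the JSON below is fabricated, not real Elasticsearch output):

```shell
# Sample of the JSON shape returned by /_cluster/health (made-up values).
health='{"cluster_name":"docker-cluster","status":"yellow","number_of_nodes":1}'
health_red='{"cluster_name":"docker-cluster","status":"red","number_of_nodes":1}'

# Same pattern as the healthcheck: accept green or yellow, reject red.
if echo "$health" | grep -qE '"status":"(green|yellow)"'; then
  result="healthy"
else
  result="unhealthy"
fi

if echo "$health_red" | grep -qE '"status":"(green|yellow)"'; then
  result_red="healthy"
else
  result_red="unhealthy"
fi
```

Yellow is treated as healthy because a single-node cluster can never allocate replica shards and therefore never reaches green.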
temporal:
image: temporalio/auto-setup:latest
container_name: temporal
profiles: ["postiz"]
restart: unless-stopped
logging: *default-logging
environment:
DB: postgres12
POSTGRES_USER: postgres
POSTGRES_PWD: ${POSTGRES_PASSWORD}
POSTGRES_SEEDS: postgres
DB_PORT: 5432
TEMPORAL_NAMESPACE: default
ENABLE_ES: "true"
ES_SEEDS: temporal-elasticsearch
ES_VERSION: v7
depends_on:
postgres:
condition: service_healthy
temporal-elasticsearch:
condition: service_healthy
healthcheck:
test: ["CMD-SHELL", "temporal operator cluster health --address $(hostname -i):7233 | grep -q SERVING || exit 1"]
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
temporal-ui:
image: temporalio/ui:latest
container_name: temporal-ui
profiles: ["postiz"]
restart: unless-stopped
logging: *default-logging
environment:
TEMPORAL_ADDRESS: temporal:7233
TEMPORAL_CORS_ORIGINS: http://localhost:3000
depends_on:
temporal:
condition: service_healthy
postiz:
image: ghcr.io/gitroomhq/postiz-app:latest
container_name: postiz
@@ -788,14 +841,15 @@ services:
BACKEND_INTERNAL_URL: http://postiz:3000
DATABASE_URL: "postgresql://postgres:${POSTGRES_PASSWORD}@postgres:5432/${POSTIZ_DB_NAME:-postiz}?schema=postiz"
DISABLE_REGISTRATION: ${POSTIZ_DISABLE_REGISTRATION}
-      FRONTEND_URL: ${POSTIZ_HOSTNAME:+https://}${POSTIZ_HOSTNAME}
+      FRONTEND_URL: ${PROTOCOL:-https}://${POSTIZ_HOSTNAME}
IS_GENERAL: "true" # Required for self-hosting.
JWT_SECRET: ${JWT_SECRET}
-      MAIN_URL: ${POSTIZ_HOSTNAME:+https://}${POSTIZ_HOSTNAME}
-      NEXT_PUBLIC_BACKEND_URL: ${POSTIZ_HOSTNAME:+https://}${POSTIZ_HOSTNAME}/api
+      MAIN_URL: ${PROTOCOL:-https}://${POSTIZ_HOSTNAME}
+      NEXT_PUBLIC_BACKEND_URL: ${PROTOCOL:-https}://${POSTIZ_HOSTNAME}/api
NEXT_PUBLIC_UPLOAD_DIRECTORY: "/uploads"
REDIS_URL: "redis://redis:6379"
STORAGE_PROVIDER: "local"
TEMPORAL_ADDRESS: temporal:7233
UPLOAD_DIRECTORY: "/uploads"
# Social Media API Settings
X_API_KEY: ${X_API_KEY}
@@ -837,17 +891,15 @@ services:
condition: service_healthy
redis:
condition: service_healthy
temporal:
condition: service_healthy
databasus:
image: databasus/databasus:latest
container_name: databasus
profiles: ["databasus"]
restart: unless-stopped
-    logging:
-      driver: "json-file"
-      options:
-        max-size: "1m"
-        max-file: "1"
+    logging: *default-logging
volumes:
- databasus_data:/databasus-data
healthcheck:
@@ -1044,11 +1096,7 @@ services:
- SETGID
- SETUID
- DAC_OVERRIDE
-    logging:
-      driver: "json-file"
-      options:
-        max-size: "1m"
-        max-file: "1"
+    logging: *default-logging
healthcheck:
test: ["CMD", "valkey-cli", "-a", "${RAGFLOW_REDIS_PASSWORD}", "ping"]
interval: 3s


@@ -23,9 +23,15 @@ set -e
source "$(dirname "$0")/utils.sh"
init_paths
# Source local mode utilities
source "$SCRIPT_DIR/local.sh"
# Source telemetry functions
source "$SCRIPT_DIR/telemetry.sh"
# Get installation mode using local.sh helper
INSTALL_MODE="$(get_install_mode)"
# Setup cleanup for temporary files
TEMP_FILES=()
cleanup_temp_files() {
@@ -55,6 +61,7 @@ EMAIL_VARS=(
"PROMETHEUS_USERNAME"
"RAGAPP_USERNAME"
"SEARXNG_USERNAME"
"TEMPORAL_UI_USERNAME"
"WAHA_DASHBOARD_USERNAME"
"WEAVIATE_USERNAME"
"WELCOME_USERNAME"
@@ -114,6 +121,7 @@ declare -A VARS_TO_GENERATE=(
["RAGFLOW_REDIS_PASSWORD"]="password:32"
["SEARXNG_PASSWORD"]="password:32" # Added SearXNG admin password
["SECRET_KEY_BASE"]="base64:64" # 48 bytes -> 64 chars
["TEMPORAL_UI_PASSWORD"]="password:32" # Temporal UI basic auth password
["VAULT_ENC_KEY"]="alphanum:32"
["WAHA_DASHBOARD_PASSWORD"]="password:32"
["WEAVIATE_API_KEY"]="secret:48" # API Key for Weaviate service (36 bytes -> 48 chars base64)
@@ -151,15 +159,13 @@ if [ -f "$OUTPUT_FILE" ]; then
done < "$OUTPUT_FILE"
fi
-# Install Caddy
-log_subheader "Installing Caddy"
-log_info "Adding Caddy repository and installing..."
-curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | gpg --yes --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
-curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | tee /etc/apt/sources.list.d/caddy-stable.list
-apt install -y caddy
-# Check for caddy
-require_command "caddy" "Caddy installation failed. Please check the installation logs above."
+# Setup Caddy for bcrypt hash generation (using Docker for cross-platform support)
+log_subheader "Setting up Caddy for password hashing"
+log_info "Using Docker-based Caddy for password hashing..."
+if ! docker image inspect caddy:latest &>/dev/null; then
+    log_info "Pulling Caddy Docker image..."
+    docker pull caddy:latest
+fi
require_whiptail
@@ -167,42 +173,51 @@ require_whiptail
log_subheader "Domain Configuration"
DOMAIN="" # Initialize DOMAIN variable
-# Try to get domain from existing .env file first
-# Check if USER_DOMAIN_NAME is set in existing_env_vars and is not empty
-if [[ ${existing_env_vars[USER_DOMAIN_NAME]+_} && -n "${existing_env_vars[USER_DOMAIN_NAME]}" ]]; then
-    DOMAIN="${existing_env_vars[USER_DOMAIN_NAME]}"
-    # Ensure this value is carried over to generated_values for writing and template processing
-    # If it came from existing_env_vars, it might already be there, but this ensures it.
+# Configure mode-specific environment variables using local.sh
+configure_mode_env generated_values "$INSTALL_MODE"
+if is_local_mode; then
+    # Local mode: use .local TLD automatically
+    DOMAIN="$(get_local_domain)"
+    generated_values["USER_DOMAIN_NAME"]="$DOMAIN"
+    log_info "Using local development domain: .$DOMAIN"
else
-    while true; do
-        DOMAIN_INPUT=$(wt_input "Primary Domain" "Enter the primary domain name for your services (e.g., example.com)." "") || true
+    # VPS mode: prompt for real domain
+    # Try to get domain from existing .env file first
+    if [[ ${existing_env_vars[USER_DOMAIN_NAME]+_} && -n "${existing_env_vars[USER_DOMAIN_NAME]}" ]]; then
+        DOMAIN="${existing_env_vars[USER_DOMAIN_NAME]}"
+        generated_values["USER_DOMAIN_NAME"]="$DOMAIN"
+    else
+        while true; do
+            DOMAIN_INPUT=$(wt_input "Primary Domain" "Enter the primary domain name for your services (e.g., example.com)." "") || true
-        DOMAIN_TO_USE="$DOMAIN_INPUT" # Direct assignment, no default fallback
+            DOMAIN_TO_USE="$DOMAIN_INPUT" # Direct assignment, no default fallback
-        # Validate domain input
-        if [[ -z "$DOMAIN_TO_USE" ]]; then
-            wt_msg "Validation" "Domain name cannot be empty."
-            continue
-        fi
+            # Validate domain input
+            if [[ -z "$DOMAIN_TO_USE" ]]; then
+                wt_msg "Validation" "Domain name cannot be empty."
+                continue
+            fi
-        # Basic check for likely invalid domain characters (very permissive)
-        if [[ "$DOMAIN_TO_USE" =~ [^a-zA-Z0-9.-] ]]; then
-            wt_msg "Validation" "Warning: Domain contains potentially invalid characters: '$DOMAIN_TO_USE'"
-        fi
-        if wt_yesno "Confirm Domain" "Use '$DOMAIN_TO_USE' as the primary domain?" "yes"; then
-            DOMAIN="$DOMAIN_TO_USE" # Set the final DOMAIN variable
-            generated_values["USER_DOMAIN_NAME"]="$DOMAIN" # Using USER_DOMAIN_NAME
-            log_info "Domain set to '$DOMAIN'. It will be saved in .env."
-            break # Confirmed, exit loop
-        fi
-    done
+            # Basic check for likely invalid domain characters (very permissive)
+            if [[ "$DOMAIN_TO_USE" =~ [^a-zA-Z0-9.-] ]]; then
+                wt_msg "Validation" "Warning: Domain contains potentially invalid characters: '$DOMAIN_TO_USE'"
+            fi
+            if wt_yesno "Confirm Domain" "Use '$DOMAIN_TO_USE' as the primary domain?" "yes"; then
+                DOMAIN="$DOMAIN_TO_USE"
+                generated_values["USER_DOMAIN_NAME"]="$DOMAIN"
+                log_info "Domain set to '$DOMAIN'. It will be saved in .env."
+                break # Confirmed, exit loop
+            fi
+        done
+    fi
fi
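The wizard's character check is intentionally permissive, flagging anything outside letters, digits, dots, and hyphens. A POSIX rendering of the same test (example domains are illustrative):

```shell
# Flag any character outside letters, digits, dot, and hyphen.
# Mirrors the [[ "$1" =~ [^a-zA-Z0-9.-] ]] check in the wizard.
check_domain() {
  case "$1" in
    *[!a-zA-Z0-9.-]*) echo "suspicious" ;;
    *) echo "ok" ;;
  esac
}

good="$(check_domain "example.com")"          # plain domain passes
bad="$(check_domain "exa mple.com")"          # embedded space is flagged
also_bad="$(check_domain "example.com/app")"  # slash is flagged
```

Note this only warns; confirmation is still left to the user, which is why the check can stay loose.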
-# Prompt for user email
+# Prompt for user email (used for service logins and SSL certificates in VPS mode)
log_subheader "Email Configuration"
if [[ -z "${existing_env_vars[LETSENCRYPT_EMAIL]}" ]]; then
-    wt_msg "Email Required" "Please enter your email address. It will be used for logins and Let's Encrypt SSL."
+    wt_msg "Email Required" "Please enter your email address. It will be used for service logins."
fi
if [[ -n "${existing_env_vars[LETSENCRYPT_EMAIL]}" ]]; then
@@ -564,7 +579,7 @@ if [[ -n "$template_no_proxy" ]]; then
fi
# Hash passwords using caddy with bcrypt (consolidated loop)
-SERVICES_NEEDING_HASH=("PROMETHEUS" "SEARXNG" "COMFYUI" "PADDLEOCR" "RAGAPP" "LT" "DOCLING" "WELCOME")
+SERVICES_NEEDING_HASH=("PROMETHEUS" "SEARXNG" "COMFYUI" "PADDLEOCR" "RAGAPP" "LT" "DOCLING" "TEMPORAL_UI" "WELCOME")
for service in "${SERVICES_NEEDING_HASH[@]}"; do
password_var="${service}_PASSWORD"
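The loop derives `PROMETHEUS_PASSWORD`-style variable names from each service prefix and reads them through Bash indirect expansion; a minimal sketch with a throwaway value:

```shell
# For each service NAME, the script builds NAME_PASSWORD / NAME_PASSWORD_HASH
# variable names and reads the password indirectly via ${!var}.
PROMETHEUS_PASSWORD="s3cret"   # throwaway example value

service="PROMETHEUS"
password_var="${service}_PASSWORD"   # name of the password variable
password_value="${!password_var}"    # indirect read of that variable
hash_var="${password_var}_HASH"      # name the bcrypt hash will be stored under
```

This keeps the hashing logic in one loop instead of one near-identical block per service.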
@@ -590,9 +605,6 @@ log_success ".env file generated successfully in the project root ($OUTPUT_FILE)
# Save installation ID for telemetry correlation
save_installation_id "$OUTPUT_FILE"
-# Uninstall caddy
-apt remove -y caddy
# Cleanup any .bak files
cleanup_bak_files "$PROJECT_ROOT"


@@ -119,7 +119,7 @@ write_env_var "N8N_WORKER_COUNT" "$N8N_WORKER_COUNT"
# Generate worker-runner pairs configuration
# Each worker gets its own dedicated task runner sidecar
log_info "Generating n8n worker-runner pairs configuration..."
-bash "$SCRIPT_DIR/generate_n8n_workers.sh"
+"$BASH" "$SCRIPT_DIR/generate_n8n_workers.sh"
# ----------------------------------------------------------------


@@ -23,13 +23,20 @@ set -e
source "$(dirname "$0")/utils.sh"
init_paths
# Source local mode utilities
source "$SCRIPT_DIR/local.sh"
# Load environment variables from .env file
load_env || exit 1
# Get installation mode and protocol using local.sh helpers
INSTALL_MODE="$(get_install_mode)"
PROTOCOL="$(get_protocol)"
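local.sh itself is not part of this diff, so the following is only a sketch of what `get_protocol` presumably does, assuming it keys off `INSTALL_MODE` (`local` → plain HTTP behind the http-only Caddy, anything else → HTTPS):

```shell
# Hypothetical stand-in for the real helper in local.sh.
get_protocol() {
  if [ "${INSTALL_MODE:-}" = "local" ]; then
    echo "http"
  else
    echo "https"
  fi
}

INSTALL_MODE="local"
local_proto="$(get_protocol)"

INSTALL_MODE="vps"
vps_proto="$(get_protocol)"
```

Centralizing the choice in one helper is what lets every `https://` literal in these scripts become `${PROTOCOL}://`.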
# Generate welcome page data
if [ -f "$SCRIPT_DIR/generate_welcome_page.sh" ]; then
log_info "Generating welcome page..."
-    bash "$SCRIPT_DIR/generate_welcome_page.sh" || log_warning "Failed to generate welcome page"
+    "$BASH" "$SCRIPT_DIR/generate_welcome_page.sh" || log_warning "Failed to generate welcome page"
fi
# Helper function to print a divider line
@@ -58,12 +65,25 @@ clear
# Header
log_box "Installation/Update Complete"
# --- Local Mode: /etc/hosts Instructions ---
if is_local_mode; then
print_section "Local Installation Setup"
# Generate hosts entries
if [ -f "$SCRIPT_DIR/generate_hosts.sh" ]; then
"$BASH" "$SCRIPT_DIR/generate_hosts.sh" 2>/dev/null || true
fi
# Print hosts setup instructions using local.sh helper
print_local_hosts_instructions
fi
# --- Welcome Page Section ---
print_section "Welcome Page"
echo ""
echo -e " ${WHITE}All your service credentials are available here:${NC}"
echo ""
-print_credential "URL" "https://${WELCOME_HOSTNAME:-welcome.${USER_DOMAIN_NAME}}"
+print_credential "URL" "${PROTOCOL}://${WELCOME_HOSTNAME:-welcome.${USER_DOMAIN_NAME}}"
print_credential "Username" "${WELCOME_USERNAME:-<not_set>}"
print_credential "Password" "${WELCOME_PASSWORD:-<not_set>}"
echo ""
@@ -74,7 +94,7 @@ echo -e " ${DIM}hostnames, credentials, and internal URLs.${NC}"
print_section "Next Steps"
echo ""
echo -e " ${WHITE}1.${NC} Visit your Welcome Page to view all credentials"
-echo -e " ${CYAN}https://${WELCOME_HOSTNAME:-welcome.${USER_DOMAIN_NAME}}${NC}"
+echo -e " ${CYAN}${PROTOCOL}://${WELCOME_HOSTNAME:-welcome.${USER_DOMAIN_NAME}}${NC}"
echo ""
echo -e " ${WHITE}2.${NC} Store the Welcome Page credentials securely"
echo ""
@@ -97,6 +117,9 @@ fi
if is_profile_active "nocodb"; then
echo -e " ${GREEN}*${NC} ${WHITE}NocoDB${NC}: Create your account on first login"
fi
if is_profile_active "postiz"; then
echo -e " ${GREEN}*${NC} ${WHITE}Postiz${NC}: Create your account on first login"
fi
if is_profile_active "gost"; then
echo -e " ${GREEN}*${NC} ${WHITE}Gost Proxy${NC}: Routing AI traffic through external proxy"
fi


@@ -17,8 +17,33 @@ source "$(dirname "$0")/utils.sh"
# Initialize paths
init_paths
# Source local mode utilities
source "$SCRIPT_DIR/local.sh"
# Load environment for INSTALL_MODE
load_env 2>/dev/null || true
log_info "Fixing file permissions..."
# Local mode: minimal permission fixes (usually not run with sudo)
if is_local_mode; then
log_info "Local mode - applying minimal permission fixes"
# Ensure .env has restricted permissions
if [[ -f "$ENV_FILE" ]]; then
chmod 600 "$ENV_FILE"
log_info "Set restrictive permissions on .env file"
fi
# Ensure scripts are executable
chmod +x "$SCRIPT_DIR"/*.sh 2>/dev/null || true
chmod +x "$PROJECT_ROOT"/*.py 2>/dev/null || true
log_success "File permissions configured for local development!"
exit 0
fi
# VPS mode: full permission fix with chown
# Get the real user who ran the installer
REAL_USER=$(get_real_user)
REAL_GROUP=$(id -gn "$REAL_USER" 2>/dev/null || echo "$REAL_USER")


@@ -44,7 +44,7 @@ send_telemetry "update_start"
# --- Call 03_generate_secrets.sh in update mode ---
set_telemetry_stage "update_env"
log_info "Ensuring .env file is up-to-date with all variables..."
-bash "$SCRIPT_DIR/03_generate_secrets.sh" --update || {
+"$BASH" "$SCRIPT_DIR/03_generate_secrets.sh" --update || {
log_error "Failed to update .env configuration via 03_generate_secrets.sh. Update process cannot continue."
exit 1
}
@@ -54,7 +54,7 @@ log_success ".env file updated successfully."
# --- Run Service Selection Wizard FIRST to get updated profiles ---
set_telemetry_stage "update_wizard"
log_info "Running Service Selection Wizard to update service choices..."
-bash "$SCRIPT_DIR/04_wizard.sh" || {
+"$BASH" "$SCRIPT_DIR/04_wizard.sh" || {
log_error "Service Selection Wizard failed. Update process cannot continue."
exit 1
}
@@ -64,7 +64,7 @@ log_success "Service selection updated."
# --- Configure Services (prompts and .env updates) ---
set_telemetry_stage "update_configure"
log_info "Configuring services (.env updates for optional inputs)..."
-bash "$SCRIPT_DIR/05_configure_services.sh" || {
+"$BASH" "$SCRIPT_DIR/05_configure_services.sh" || {
log_error "Configure Services failed. Update process cannot continue."
exit 1
}
@@ -104,21 +104,21 @@ init_all_databases || { log_warning "Database initialization had issues, but con
# Start all services using the 06_run_services.sh script (postgres is already running)
set_telemetry_stage "update_services_start"
log_info "Running Services..."
-bash "$RUN_SERVICES_SCRIPT" || { log_error "Failed to start services. Check logs for details."; exit 1; }
+"$BASH" "$RUN_SERVICES_SCRIPT" || { log_error "Failed to start services. Check logs for details."; exit 1; }
log_success "Update application completed successfully!"
# --- Fix file permissions ---
set_telemetry_stage "update_fix_perms"
log_info "Fixing file permissions..."
-bash "$SCRIPT_DIR/08_fix_permissions.sh" || {
+"$BASH" "$SCRIPT_DIR/08_fix_permissions.sh" || {
log_warning "Failed to fix file permissions. This does not affect the update."
}
# --- End of Fix permissions ---
# --- Display Final Report with Credentials ---
set_telemetry_stage "update_final_report"
-bash "$SCRIPT_DIR/07_final_report.sh" || {
+"$BASH" "$SCRIPT_DIR/07_final_report.sh" || {
log_warning "Failed to display the final report. This does not affect the update."
# We don't exit 1 here as the update itself was successful.
}


@@ -30,6 +30,8 @@ INIT_DB_DATABASES=(
"lightrag"
"nocodb"
"postiz"
"temporal"
"temporal_visibility"
"waha"
)

scripts/generate_hosts.sh (new executable file, 90 lines)

@@ -0,0 +1,90 @@
#!/bin/bash
# =============================================================================
# generate_hosts.sh - Generate /etc/hosts entries for local development
# =============================================================================
# Creates a hosts.txt file with all .local domain entries needed for local
# development mode. Users can then add these entries to their /etc/hosts.
#
# Usage: bash scripts/generate_hosts.sh
# =============================================================================
set -e
source "$(dirname "$0")/utils.sh"
init_paths
# Source local mode utilities
source "$SCRIPT_DIR/local.sh"
# Load environment
load_env || { log_error "Could not load .env file"; exit 1; }
# Check if local mode using local.sh helper
if ! is_local_mode; then
log_info "Not in local mode, skipping hosts generation"
exit 0
fi
OUTPUT_FILE="$PROJECT_ROOT/hosts.txt"
log_info "Generating hosts file entries for local development..."
# Get hostname variables from local.sh (single source of truth)
readarray -t HOSTNAME_VARS < <(get_all_hostname_vars)
# Create hosts.txt header
cat > "$OUTPUT_FILE" << 'EOF'
# =============================================================================
# n8n-install Local Installation Hosts
# =============================================================================
# Add these lines to your hosts file:
# macOS/Linux: /etc/hosts
# Windows: C:\Windows\System32\drivers\etc\hosts
#
# To add automatically (macOS/Linux):
# sudo bash -c 'cat hosts.txt >> /etc/hosts'
#
# To flush DNS cache after adding:
# macOS: sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder
#   Linux:   sudo resolvectl flush-caches  (older systems: sudo systemd-resolve --flush-caches)
# Windows: ipconfig /flushdns
# =============================================================================
EOF
# Collect unique hostnames
declare -A HOSTNAMES_MAP
for var in "${HOSTNAME_VARS[@]}"; do
value=$(read_env_var "$var")
if [[ -n "$value" && "$value" =~ \.local$ ]]; then
HOSTNAMES_MAP["$value"]=1
fi
done
# Write sorted hostnames
for hostname in $(echo "${!HOSTNAMES_MAP[@]}" | tr ' ' '\n' | sort); do
echo "127.0.0.1 $hostname" >> "$OUTPUT_FILE"
done
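Deduplication here rides on associative-array keys, and the sort keeps hosts.txt deterministic; with made-up hostnames:

```shell
# Keys of an associative array act as a set: inserting a duplicate
# hostname is a no-op, and sorting the keys gives a stable ordering.
declare -A seen
for host in n8n.local welcome.local n8n.local grafana.local; do
  seen["$host"]=1
done

# "${!seen[@]}" expands in unspecified order, so sort before writing.
sorted="$(printf '%s\n' "${!seen[@]}" | sort | xargs)"
```

The same `${!map[@]}` key expansion is what the script feeds through `tr` and `sort` when writing the `127.0.0.1` lines.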
# Count entries
ENTRY_COUNT=${#HOSTNAMES_MAP[@]}
if [ "$ENTRY_COUNT" -eq 0 ]; then
log_warning "No .local hostnames found in .env"
rm -f "$OUTPUT_FILE"
exit 0
fi
echo "" >> "$OUTPUT_FILE"
echo "# Total: $ENTRY_COUNT entries" >> "$OUTPUT_FILE"
log_success "Generated $ENTRY_COUNT host entries in: $OUTPUT_FILE"
log_info ""
log_info "To add these entries to your hosts file, run:"
log_info " sudo bash -c 'cat $OUTPUT_FILE >> /etc/hosts'"
log_info ""
log_info "Then flush your DNS cache:"
log_info " macOS: sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder"
log_info "  Linux: sudo resolvectl flush-caches  (older systems: sudo systemd-resolve --flush-caches)"
exit 0


@@ -8,11 +8,17 @@ set -e
source "$(dirname "$0")/utils.sh"
init_paths
# Source local mode utilities
source "$SCRIPT_DIR/local.sh"
OUTPUT_FILE="$PROJECT_ROOT/welcome/data.json"
# Load environment variables from .env file
load_env || exit 1
# Get protocol using local.sh helper (based on INSTALL_MODE)
PROTOCOL="$(get_protocol)"
# Ensure welcome directory exists
mkdir -p "$PROJECT_ROOT/welcome"
@@ -133,7 +139,7 @@ if is_profile_active "dify"; then
\"note\": \"Create account on first login\"
},
\"extra\": {
-        \"api_endpoint\": \"https://$(json_escape "$DIFY_HOSTNAME")/v1\",
+        \"api_endpoint\": \"${PROTOCOL}://$(json_escape "$DIFY_HOSTNAME")/v1\",
\"internal_api\": \"http://dify-api:5001\"
}
}")
@@ -147,7 +153,7 @@ if is_profile_active "qdrant"; then
\"api_key\": \"$(json_escape "$QDRANT_API_KEY")\"
},
\"extra\": {
-        \"dashboard\": \"https://$(json_escape "$QDRANT_HOSTNAME")/dashboard\",
+        \"dashboard\": \"${PROTOCOL}://$(json_escape "$QDRANT_HOSTNAME")/dashboard\",
\"internal_api\": \"http://qdrant:6333\"
}
}")
@@ -213,8 +219,8 @@ if is_profile_active "ragapp"; then
\"password\": \"$(json_escape "$RAGAPP_PASSWORD")\"
},
\"extra\": {
-        \"admin\": \"https://$(json_escape "$RAGAPP_HOSTNAME")/admin\",
-        \"docs\": \"https://$(json_escape "$RAGAPP_HOSTNAME")/docs\",
+        \"admin\": \"${PROTOCOL}://$(json_escape "$RAGAPP_HOSTNAME")/admin\",
+        \"docs\": \"${PROTOCOL}://$(json_escape "$RAGAPP_HOSTNAME")/docs\",
\"internal_api\": \"http://ragapp:8000\"
}
}")
@@ -243,7 +249,7 @@ if is_profile_active "lightrag"; then
\"api_key\": \"$(json_escape "$LIGHTRAG_API_KEY")\"
},
\"extra\": {
-        \"docs\": \"https://$(json_escape "$LIGHTRAG_HOSTNAME")/docs\",
+        \"docs\": \"${PROTOCOL}://$(json_escape "$LIGHTRAG_HOSTNAME")/docs\",
\"internal_api\": \"http://lightrag:9621\"
}
}")
@@ -293,8 +299,8 @@ if is_profile_active "docling"; then
\"password\": \"$(json_escape "$DOCLING_PASSWORD")\"
},
\"extra\": {
-        \"ui\": \"https://$(json_escape "$DOCLING_HOSTNAME")/ui\",
-        \"docs\": \"https://$(json_escape "$DOCLING_HOSTNAME")/docs\",
+        \"ui\": \"${PROTOCOL}://$(json_escape "$DOCLING_HOSTNAME")/ui\",
+        \"docs\": \"${PROTOCOL}://$(json_escape "$DOCLING_HOSTNAME")/docs\",
\"internal_api\": \"http://docling:5001\"
}
}")
@@ -327,6 +333,20 @@ if is_profile_active "postiz"; then
}")
fi
# Temporal UI
if is_profile_active "postiz"; then
SERVICES_ARRAY+=(" \"temporal-ui\": {
\"hostname\": \"$(json_escape "$TEMPORAL_UI_HOSTNAME")\",
\"credentials\": {
\"username\": \"$(json_escape "$TEMPORAL_UI_USERNAME")\",
\"password\": \"$(json_escape "$TEMPORAL_UI_PASSWORD")\"
},
\"extra\": {
\"note\": \"Workflow orchestration admin for Postiz\"
}
}")
fi
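`json_escape` comes from utils.sh and is not shown in this diff; a minimal stand-in that escapes backslashes and double quotes (the real helper may cover more characters, such as control codes):

```shell
# Hypothetical minimal escaper: backslashes first, then double quotes,
# so already-escaped quotes are not double-processed.
json_escape() {
  printf '%s' "$1" | sed -e 's/\\/\\\\/g' -e 's/"/\\"/g'
}

plain="$(json_escape 'temporal.example.com')"   # unchanged
quoted="$(json_escape 'pa"ss')"                 # quote gets backslash-escaped
```

Escaping matters here because generated passwords can contain characters that would otherwise break the hand-built data.json.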
# WAHA
if is_profile_active "waha"; then
SERVICES_ARRAY+=(" \"waha\": {
@@ -337,7 +357,7 @@ if is_profile_active "waha"; then
\"api_key\": \"$(json_escape "$WAHA_API_KEY_PLAIN")\"
},
\"extra\": {
-        \"dashboard\": \"https://$(json_escape "$WAHA_HOSTNAME")/dashboard\",
+        \"dashboard\": \"${PROTOCOL}://$(json_escape "$WAHA_HOSTNAME")/dashboard\",
\"swagger_user\": \"$(json_escape "$WHATSAPP_SWAGGER_USERNAME")\",
\"swagger_pass\": \"$(json_escape "$WHATSAPP_SWAGGER_PASSWORD")\",
\"internal_api\": \"http://waha:3000\"
@@ -529,6 +549,7 @@ done
cat > "$OUTPUT_FILE" << EOF
{
"domain": "$(json_escape "$USER_DOMAIN_NAME")",
"protocol": "$PROTOCOL",
"generated_at": "$GENERATED_AT",
"services": {
$SERVICES_JSON
@@ -568,3 +589,4 @@ if [ -f "$CHANGELOG_SOURCE" ]; then
else
log_warning "CHANGELOG.md not found, skipping changelog.json generation"
fi
log_info "Access it at: ${PROTOCOL}://${WELCOME_HOSTNAME:-welcome.${USER_DOMAIN_NAME}}"

scripts/install-local.sh (new file, 331 lines)

@@ -0,0 +1,331 @@
#!/bin/bash
# =============================================================================
# install-local.sh - Local installation for n8n-install
# =============================================================================
# Runs the local development installation process (6 steps):
# 1. Secret Generation - generates passwords, API keys, bcrypt hashes
# 2. Service Selection Wizard - interactive service selection
# 3. Configure Services - service-specific configuration
# 4. Run Services - starts Docker Compose stack
# 5. Final Report - displays credentials and URLs
# 6. Fix Permissions - fixes file permissions
#
# Requirements:
# - Docker and Docker Compose installed and running
# - Bash 4.0+ (macOS: brew install bash)
# - whiptail (macOS: brew install newt)
# - openssl, git, python3
#
# Usage:
# make install-local
# Or directly: bash scripts/install-local.sh
# =============================================================================
# =============================================================================
# Check bash version FIRST (requires bash 4+ for associative arrays)
# =============================================================================
if [ "${BASH_VERSINFO[0]}" -lt 4 ]; then
echo "[ERROR] Bash 4.0 or higher is required. Current version: $BASH_VERSION"
echo ""
echo "On macOS, install modern bash and run with it:"
echo " brew install bash"
echo " /opt/homebrew/bin/bash ./scripts/install-local.sh"
echo ""
echo "Or use: make install-local"
exit 1
fi
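The guard reads the major version as an integer from `BASH_VERSINFO` instead of parsing the `$BASH_VERSION` string, which keeps the comparison a plain numeric test:

```shell
# BASH_VERSINFO[0] holds the major version as an integer, so a numeric
# -lt test is enough to gate bash-4-only features (associative arrays).
major="${BASH_VERSINFO[0]}"
if [ "$major" -lt 4 ]; then
  verdict="too-old"
else
  verdict="ok"
fi
```

This matters on macOS, which ships bash 3.2 at /bin/bash; Homebrew's bash satisfies the check.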
set -e
# Local mode is fixed for this script
export INSTALL_MODE="local"
# Source the utilities file
source "$(dirname "$0")/utils.sh"
# Initialize paths
init_paths
# Source local mode utilities
source "$SCRIPT_DIR/local.sh"
# Source telemetry functions
source "$SCRIPT_DIR/telemetry.sh"
# =============================================================================
# Prerequisites Check (inline)
# =============================================================================
check_prerequisites() {
log_subheader "Checking Prerequisites for Local Installation"
local MISSING_DEPS=()
# Check Docker
log_info "Checking Docker..."
if ! command -v docker &> /dev/null; then
MISSING_DEPS+=("docker")
print_error "Docker is not installed"
case "$(uname)" in
Darwin)
log_info " Install Docker Desktop: https://www.docker.com/products/docker-desktop"
;;
Linux)
log_info " Install Docker: https://docs.docker.com/engine/install/"
;;
MINGW*|MSYS*|CYGWIN*)
log_info " Install Docker Desktop for Windows: https://www.docker.com/products/docker-desktop"
log_info " Then enable WSL2 integration in Docker Desktop settings"
;;
esac
else
local docker_version
docker_version=$(docker --version 2>/dev/null || echo "unknown")
print_ok "Docker is installed: $docker_version"
# Check Docker daemon
if ! docker info > /dev/null 2>&1; then
MISSING_DEPS+=("docker-daemon")
print_error "Docker daemon is not running"
case "$(uname)" in
Darwin)
log_info " Start Docker Desktop from Applications"
;;
Linux)
log_info " Start Docker: sudo systemctl start docker"
;;
esac
else
print_ok "Docker daemon is running"
fi
fi
# Check Docker Compose
log_info "Checking Docker Compose..."
if ! docker compose version &> /dev/null; then
MISSING_DEPS+=("docker-compose")
print_error "Docker Compose plugin is not installed"
log_info " Docker Compose should be included with Docker Desktop"
log_info " Or install: https://docs.docker.com/compose/install/"
else
local compose_version
compose_version=$(docker compose version 2>/dev/null || echo "unknown")
print_ok "Docker Compose is installed: $compose_version"
fi
# Check whiptail
log_info "Checking whiptail..."
if ! command -v whiptail &> /dev/null; then
MISSING_DEPS+=("whiptail")
print_error "whiptail is not installed"
case "$(uname)" in
Darwin)
log_info " Install with: brew install newt"
;;
Linux)
if command -v apt-get &> /dev/null; then
log_info " Install with: sudo apt-get install -y whiptail"
elif command -v yum &> /dev/null; then
log_info " Install with: sudo yum install -y newt"
elif command -v pacman &> /dev/null; then
log_info " Install with: sudo pacman -S libnewt"
else
log_info " Install the 'newt' or 'whiptail' package for your distribution"
fi
;;
esac
else
print_ok "whiptail is installed"
fi
# Check openssl
log_info "Checking openssl..."
if ! command -v openssl &> /dev/null; then
MISSING_DEPS+=("openssl")
print_error "openssl is not installed"
case "$(uname)" in
Darwin)
log_info " openssl should be pre-installed on macOS"
log_info " Or install with: brew install openssl"
;;
Linux)
if command -v apt-get &> /dev/null; then
log_info " Install with: sudo apt-get install -y openssl"
elif command -v yum &> /dev/null; then
log_info " Install with: sudo yum install -y openssl"
else
log_info " Install openssl for your distribution"
fi
;;
esac
else
local openssl_version
openssl_version=$(openssl version 2>/dev/null || echo "unknown")
print_ok "openssl is installed: $openssl_version"
fi
# Check git
log_info "Checking git..."
if ! command -v git &> /dev/null; then
MISSING_DEPS+=("git")
print_error "git is not installed"
log_info " Install git: https://git-scm.com/downloads"
else
local git_version
git_version=$(git --version 2>/dev/null || echo "unknown")
print_ok "git is installed: $git_version"
fi
# Check Python3 and required modules
log_info "Checking Python3..."
if ! command -v python3 &> /dev/null; then
MISSING_DEPS+=("python3")
print_error "Python3 is not installed"
log_info " Install Python3: https://www.python.org/downloads/"
else
local python_version
python_version=$(python3 --version 2>/dev/null || echo "unknown")
print_ok "Python3 is installed: $python_version"
# Check and install required Python modules
local PYTHON_MODULES=("yaml:pyyaml" "dotenv:python-dotenv")
for module_pair in "${PYTHON_MODULES[@]}"; do
local import_name="${module_pair%%:*}"
local package_name="${module_pair##*:}"
log_info "Checking Python module: $package_name..."
if ! python3 -c "import $import_name" 2>/dev/null; then
print_warning "$package_name not found. Installing..."
if python3 -m pip install --user "$package_name" 2>/dev/null; then
print_ok "$package_name installed successfully"
else
MISSING_DEPS+=("$package_name")
print_error "Failed to install $package_name"
log_info " Install manually: pip3 install $package_name"
fi
else
print_ok "$package_name is available"
fi
done
fi
# Summary
echo ""
if [ ${#MISSING_DEPS[@]} -gt 0 ]; then
log_error "Missing dependencies: ${MISSING_DEPS[*]}"
log_info "Please install the missing dependencies and try again."
return 1
else
log_success "All prerequisites are satisfied!"
return 0
fi
}
# Setup error telemetry trap for tracking failures
setup_error_telemetry_trap
# Generate installation ID for telemetry correlation
INSTALLATION_ID=$(get_installation_id)
export INSTALLATION_ID
# Send telemetry: installation started
send_telemetry "install_start"
# Check required scripts
required_scripts=(
"03_generate_secrets.sh"
"04_wizard.sh"
"05_configure_services.sh"
"06_run_services.sh"
"07_final_report.sh"
"08_fix_permissions.sh"
)
missing_scripts=()
for script in "${required_scripts[@]}"; do
script_path="$SCRIPT_DIR/$script"
if [ ! -f "$script_path" ]; then
missing_scripts+=("$script")
fi
done
if [ ${#missing_scripts[@]} -gt 0 ]; then
log_error "The following required scripts are missing in $SCRIPT_DIR:"
printf " - %s\n" "${missing_scripts[@]}"
exit 1
fi
# Make scripts executable
chmod +x "$SCRIPT_DIR"/*.sh 2>/dev/null || true
# =============================================================================
# Run Local installation steps sequentially (6 steps total)
# =============================================================================
TOTAL_STEPS=6
log_header "Local Installation"
log_info "Starting local development installation..."
# Step 1: Prerequisites Check (inline)
show_step 1 $TOTAL_STEPS "Checking Prerequisites"
check_prerequisites || { log_error "Prerequisites check failed"; exit 1; }
log_success "Prerequisites check complete!"
# Pull Caddy image for bcrypt hash generation
log_info "Pulling Caddy image for password hashing..."
docker pull caddy:latest 2>/dev/null || log_warning "Could not pull Caddy image, will try during installation"
# Step 2: Secrets Generation
show_step 2 $TOTAL_STEPS "Generating Secrets and Configuration"
set_telemetry_stage "secrets_gen"
"$BASH" "$SCRIPT_DIR/03_generate_secrets.sh" || { log_error "Secret/Config Generation failed"; exit 1; }
log_success "Secret/Config Generation complete!"
# Step 3: Service Selection Wizard
show_step 3 $TOTAL_STEPS "Running Service Selection Wizard"
set_telemetry_stage "wizard"
"$BASH" "$SCRIPT_DIR/04_wizard.sh" || { log_error "Service Selection Wizard failed"; exit 1; }
log_success "Service Selection Wizard complete!"
# Step 4: Configure Services
show_step 4 $TOTAL_STEPS "Configure Services"
set_telemetry_stage "configure"
"$BASH" "$SCRIPT_DIR/05_configure_services.sh" || { log_error "Configure Services failed"; exit 1; }
log_success "Configure Services complete!"
# Step 5: Running Services
show_step 5 $TOTAL_STEPS "Running Services"
set_telemetry_stage "db_init"
# Start PostgreSQL first to initialize databases before other services
log_info "Starting PostgreSQL..."
docker compose -p localai up -d postgres || { log_error "Failed to start PostgreSQL"; exit 1; }
# Initialize PostgreSQL databases for services (creates if not exist)
source "$SCRIPT_DIR/databases.sh"
init_all_databases || { log_warning "Database initialization had issues, but continuing..."; }
# Now start all services (postgres is already running)
set_telemetry_stage "services_start"
"$BASH" "$SCRIPT_DIR/06_run_services.sh" || { log_error "Running Services failed"; exit 1; }
log_success "Running Services complete!"
# Step 6: Final Report
show_step 6 $TOTAL_STEPS "Generating Final Report"
set_telemetry_stage "final_report"
log_info "Installation Summary:"
echo -e "  ${GREEN}*${NC} Prerequisites verified (Docker, whiptail, openssl, git, Python3)"
echo -e " ${GREEN}*${NC} Local installation mode configured (.local domains)"
echo -e " ${GREEN}*${NC} '.env' generated with secure passwords and secrets"
echo -e " ${GREEN}*${NC} Services launched via Docker Compose"
"$BASH" "$SCRIPT_DIR/07_final_report.sh" || { log_error "Final Report Generation failed"; exit 1; }
log_success "Final Report generated!"
# Fix Permissions (run silently, not as a numbered step for local)
set_telemetry_stage "fix_perms"
"$BASH" "$SCRIPT_DIR/08_fix_permissions.sh" || { log_warning "Fix Permissions had issues"; }
log_success "Local installation complete!"
# Send telemetry: installation completed with selected services
send_telemetry "install_complete" "$(read_env_var COMPOSE_PROFILES)"
exit 0
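The Python-module check above packs each dependency as an `import:package` pair and splits it with bash prefix/suffix stripping. A minimal, self-contained sketch of that split (the module list mirrors the one in the script):

```shell
#!/bin/bash
# Split "import_name:package_name" pairs the same way the
# prerequisites check does, using parameter expansion.
PYTHON_MODULES=("yaml:pyyaml" "dotenv:python-dotenv")
for module_pair in "${PYTHON_MODULES[@]}"; do
    import_name="${module_pair%%:*}"    # drop longest ":..." suffix -> part before the colon
    package_name="${module_pair##*:}"   # drop longest "...:" prefix -> part after the colon
    echo "import=$import_name package=$package_name"
done
# -> import=yaml package=pyyaml
# -> import=dotenv package=python-dotenv
```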


@@ -1,50 +1,30 @@
#!/bin/bash
# =============================================================================
# install.sh - Main installation orchestrator for n8n-install
# install-vps.sh - VPS installation orchestrator for n8n-install
# =============================================================================
# This script runs the complete installation process by sequentially executing
# 8 installation steps:
# Runs the complete VPS installation process:
# 1. System Preparation - updates packages, installs utilities, configures firewall
# 2. Docker Installation - installs Docker and Docker Compose
# 3. Secret Generation - creates .env file with secure passwords and secrets
# 4. Service Wizard - interactive service selection using whiptail
# 5. Service Configuration - prompts for API keys and service-specific settings
# 6. Service Launch - starts all selected services via Docker Compose
# 7. Final Report - displays credentials and access URLs
# 8. Fix Permissions - ensures correct file ownership for the invoking user
# 3. Secret Generation - generates passwords, API keys, bcrypt hashes
# 4. Service Selection Wizard - interactive service selection
# 5. Configure Services - service-specific configuration
# 6. Run Services - starts Docker Compose stack
# 7. Final Report - displays credentials and URLs
# 8. Fix Permissions - fixes file ownership
#
# Usage: sudo bash scripts/install.sh
# Usage: sudo bash scripts/install-vps.sh
# Note: This script should be called from install.sh (entry point)
# =============================================================================
set -e
# VPS mode is fixed for this script
export INSTALL_MODE="vps"
# Source the utilities file
source "$(dirname "$0")/utils.sh"
# Check for nested n8n-install directory
current_path=$(pwd)
if [[ "$current_path" == *"/n8n-install/n8n-install" ]]; then
log_info "Detected nested n8n-install directory. Correcting..."
cd ..
log_info "Moved to $(pwd)"
log_info "Removing redundant n8n-install directory..."
rm -rf "n8n-install"
log_info "Redundant directory removed."
# Re-evaluate SCRIPT_DIR after potential path correction
SCRIPT_DIR_REALPATH_TEMP="$( cd "$( dirname "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
if [[ "$SCRIPT_DIR_REALPATH_TEMP" == *"/n8n-install/n8n-install/scripts" ]]; then
# If SCRIPT_DIR is still pointing to the nested structure's scripts dir, adjust it
# This happens if the script was invoked like: sudo bash n8n-install/scripts/install.sh
# from the outer n8n-install directory.
# We need to ensure that relative paths for other scripts are correct.
# The most robust way is to re-execute the script from the corrected location
# if the SCRIPT_DIR itself was nested.
log_info "Re-executing install script from corrected path..."
exec sudo bash "./scripts/install.sh" "$@"
fi
fi
# Initialize paths using utils.sh helper
# Initialize paths
init_paths
# Source telemetry functions
@@ -111,52 +91,63 @@ if [ ${#non_executable_scripts[@]} -gt 0 ]; then
log_success "Scripts successfully made executable."
fi
# Run installation steps sequentially using their full paths
# =============================================================================
# Run VPS installation steps sequentially (8 steps total)
# =============================================================================
TOTAL_STEPS=8
show_step 1 8 "System Preparation"
log_header "VPS Installation"
log_info "Starting VPS installation..."
# Step 1: System Preparation
show_step 1 $TOTAL_STEPS "System Preparation"
set_telemetry_stage "system_prep"
bash "$SCRIPT_DIR/01_system_preparation.sh" || { log_error "System Preparation failed"; exit 1; }
"$BASH" "$SCRIPT_DIR/01_system_preparation.sh" || { log_error "System Preparation failed"; exit 1; }
log_success "System preparation complete!"
show_step 2 8 "Installing Docker"
# Step 2: Docker Installation
show_step 2 $TOTAL_STEPS "Installing Docker"
set_telemetry_stage "docker_install"
bash "$SCRIPT_DIR/02_install_docker.sh" || { log_error "Docker Installation failed"; exit 1; }
"$BASH" "$SCRIPT_DIR/02_install_docker.sh" || { log_error "Docker Installation failed"; exit 1; }
log_success "Docker installation complete!"
show_step 3 8 "Generating Secrets and Configuration"
# Step 3: Secrets Generation
show_step 3 $TOTAL_STEPS "Generating Secrets and Configuration"
set_telemetry_stage "secrets_gen"
bash "$SCRIPT_DIR/03_generate_secrets.sh" || { log_error "Secret/Config Generation failed"; exit 1; }
"$BASH" "$SCRIPT_DIR/03_generate_secrets.sh" || { log_error "Secret/Config Generation failed"; exit 1; }
log_success "Secret/Config Generation complete!"
show_step 4 8 "Running Service Selection Wizard"
# Step 4: Service Selection Wizard
show_step 4 $TOTAL_STEPS "Running Service Selection Wizard"
set_telemetry_stage "wizard"
bash "$SCRIPT_DIR/04_wizard.sh" || { log_error "Service Selection Wizard failed"; exit 1; }
"$BASH" "$SCRIPT_DIR/04_wizard.sh" || { log_error "Service Selection Wizard failed"; exit 1; }
log_success "Service Selection Wizard complete!"
show_step 5 8 "Configure Services"
# Step 5: Configure Services
show_step 5 $TOTAL_STEPS "Configure Services"
set_telemetry_stage "configure"
bash "$SCRIPT_DIR/05_configure_services.sh" || { log_error "Configure Services failed"; exit 1; }
"$BASH" "$SCRIPT_DIR/05_configure_services.sh" || { log_error "Configure Services failed"; exit 1; }
log_success "Configure Services complete!"
show_step 6 8 "Running Services"
# Step 6: Running Services
show_step 6 $TOTAL_STEPS "Running Services"
set_telemetry_stage "db_init"
# Start PostgreSQL first to initialize databases before other services
log_info "Starting PostgreSQL..."
docker compose -p localai up -d postgres || { log_error "Failed to start PostgreSQL"; exit 1; }
# Initialize PostgreSQL databases for services (creates if not exist)
# This must run BEFORE other services that depend on these databases
source "$SCRIPT_DIR/databases.sh"
init_all_databases || { log_warning "Database initialization had issues, but continuing..."; }
# Now start all services (postgres is already running)
set_telemetry_stage "services_start"
bash "$SCRIPT_DIR/06_run_services.sh" || { log_error "Running Services failed"; exit 1; }
"$BASH" "$SCRIPT_DIR/06_run_services.sh" || { log_error "Running Services failed"; exit 1; }
log_success "Running Services complete!"
show_step 7 8 "Generating Final Report"
# Step 7: Final Report
show_step 7 $TOTAL_STEPS "Generating Final Report"
set_telemetry_stage "final_report"
# --- Installation Summary ---
log_info "Installation Summary:"
echo -e " ${GREEN}*${NC} System updated and basic utilities installed"
echo -e " ${GREEN}*${NC} Firewall (UFW) configured and enabled"
@@ -166,12 +157,13 @@ echo -e " ${GREEN}*${NC} Docker and Docker Compose installed"
echo -e " ${GREEN}*${NC} '.env' generated with secure passwords and secrets"
echo -e " ${GREEN}*${NC} Services launched via Docker Compose"
bash "$SCRIPT_DIR/07_final_report.sh" || { log_error "Final Report Generation failed"; exit 1; }
"$BASH" "$SCRIPT_DIR/07_final_report.sh" || { log_error "Final Report Generation failed"; exit 1; }
log_success "Final Report generated!"
show_step 8 8 "Fixing File Permissions"
# Step 8: Fix Permissions
show_step 8 $TOTAL_STEPS "Fixing File Permissions"
set_telemetry_stage "fix_perms"
bash "$SCRIPT_DIR/08_fix_permissions.sh" || { log_error "Fix Permissions failed"; exit 1; }
"$BASH" "$SCRIPT_DIR/08_fix_permissions.sh" || { log_error "Fix Permissions failed"; exit 1; }
log_success "File permissions fixed!"
log_success "Installation complete!"

scripts/local.sh Normal file

@@ -0,0 +1,294 @@
#!/bin/bash
# =============================================================================
# local.sh - Local installation mode utilities
# =============================================================================
# Encapsulates all logic related to local vs VPS installation modes.
# Provides functions for mode detection, environment configuration,
# and mode-specific settings.
#
# Usage: source "$(dirname "$0")/local.sh"
#
# Functions:
# - get_install_mode: Get current installation mode (vps|local)
# - is_local_mode: Check if running in local mode
# - is_vps_mode: Check if running in VPS mode
# - get_protocol: Get protocol based on mode (http|https)
# - get_caddy_auto_https: Get Caddy auto_https setting (on|off)
# - get_n8n_secure_cookie: Get n8n secure cookie setting (true|false)
# - get_local_domain: Get default domain for local mode (.local)
# - configure_mode_env: Set all mode-specific environment variables
# - print_local_hosts_instructions: Display hosts file setup instructions
# =============================================================================
#=============================================================================
# CONSTANTS
#=============================================================================
# Local mode defaults
LOCAL_MODE_DOMAIN="local"
LOCAL_MODE_PROTOCOL="http"
LOCAL_MODE_CADDY_AUTO_HTTPS="off"
LOCAL_MODE_N8N_SECURE_COOKIE="false"
# VPS mode defaults
VPS_MODE_PROTOCOL="https"
VPS_MODE_CADDY_AUTO_HTTPS="on"
VPS_MODE_N8N_SECURE_COOKIE="true"
#=============================================================================
# MODE DETECTION
#=============================================================================
# Get the current installation mode
# Checks: 1) exported INSTALL_MODE, 2) .env file, 3) defaults to "vps"
# Usage: mode=$(get_install_mode)
get_install_mode() {
local mode="${INSTALL_MODE:-}"
# If not set, try to read from .env
if [[ -z "$mode" && -n "${ENV_FILE:-}" && -f "$ENV_FILE" ]]; then
mode=$(grep "^INSTALL_MODE=" "$ENV_FILE" 2>/dev/null | cut -d'=' -f2- | tr -d '"'"'" || true)
fi
# Default to vps for backward compatibility
echo "${mode:-vps}"
}
# Check if running in local mode
# Usage: is_local_mode && echo "Local mode"
is_local_mode() {
[[ "$(get_install_mode)" == "local" ]]
}
# Check if running in VPS mode
# Usage: is_vps_mode && echo "VPS mode"
is_vps_mode() {
[[ "$(get_install_mode)" != "local" ]]
}
#=============================================================================
# MODE-SPECIFIC GETTERS
#=============================================================================
# Get protocol based on installation mode
# Usage: protocol=$(get_protocol)
get_protocol() {
if is_local_mode; then
echo "$LOCAL_MODE_PROTOCOL"
else
echo "$VPS_MODE_PROTOCOL"
fi
}
# Get Caddy auto_https setting based on installation mode
# Usage: auto_https=$(get_caddy_auto_https)
get_caddy_auto_https() {
if is_local_mode; then
echo "$LOCAL_MODE_CADDY_AUTO_HTTPS"
else
echo "$VPS_MODE_CADDY_AUTO_HTTPS"
fi
}
# Get n8n secure cookie setting based on installation mode
# Usage: secure_cookie=$(get_n8n_secure_cookie)
get_n8n_secure_cookie() {
if is_local_mode; then
echo "$LOCAL_MODE_N8N_SECURE_COOKIE"
else
echo "$VPS_MODE_N8N_SECURE_COOKIE"
fi
}
# Get default domain for local mode
# Usage: domain=$(get_local_domain)
get_local_domain() {
echo "$LOCAL_MODE_DOMAIN"
}
#=============================================================================
# ENVIRONMENT CONFIGURATION
#=============================================================================
# Configure all mode-specific environment variables
# Populates the provided associative array with mode settings
# Usage: declare -A settings; configure_mode_env settings "local"
# Arguments:
# $1 - nameref to associative array for storing values
# $2 - mode (optional, defaults to get_install_mode())
configure_mode_env() {
local -n _env_ref=$1
local mode="${2:-$(get_install_mode)}"
_env_ref["INSTALL_MODE"]="$mode"
if [[ "$mode" == "local" ]]; then
_env_ref["PROTOCOL"]="$LOCAL_MODE_PROTOCOL"
_env_ref["CADDY_AUTO_HTTPS"]="$LOCAL_MODE_CADDY_AUTO_HTTPS"
_env_ref["N8N_SECURE_COOKIE"]="$LOCAL_MODE_N8N_SECURE_COOKIE"
else
_env_ref["PROTOCOL"]="$VPS_MODE_PROTOCOL"
_env_ref["CADDY_AUTO_HTTPS"]="$VPS_MODE_CADDY_AUTO_HTTPS"
_env_ref["N8N_SECURE_COOKIE"]="$VPS_MODE_N8N_SECURE_COOKIE"
fi
}
#=============================================================================
# HOSTS FILE UTILITIES
#=============================================================================
# All hostname variables used in the project
# Used by generate_hosts.sh and other scripts that need hostname list
get_all_hostname_vars() {
local vars=(
"N8N_HOSTNAME"
"WEBUI_HOSTNAME"
"FLOWISE_HOSTNAME"
"DIFY_HOSTNAME"
"RAGAPP_HOSTNAME"
"RAGFLOW_HOSTNAME"
"LANGFUSE_HOSTNAME"
"SUPABASE_HOSTNAME"
"GRAFANA_HOSTNAME"
"WAHA_HOSTNAME"
"PROMETHEUS_HOSTNAME"
"PORTAINER_HOSTNAME"
"POSTIZ_HOSTNAME"
"POSTGRESUS_HOSTNAME"
"LETTA_HOSTNAME"
"LIGHTRAG_HOSTNAME"
"WEAVIATE_HOSTNAME"
"QDRANT_HOSTNAME"
"COMFYUI_HOSTNAME"
"LT_HOSTNAME"
"NEO4J_HOSTNAME"
"NOCODB_HOSTNAME"
"PADDLEOCR_HOSTNAME"
"DOCLING_HOSTNAME"
"WELCOME_HOSTNAME"
"SEARXNG_HOSTNAME"
)
printf '%s\n' "${vars[@]}"
}
# Print instructions for setting up /etc/hosts for local mode
# Usage: print_local_hosts_instructions
print_local_hosts_instructions() {
# Requires color variables from utils.sh
local cyan="${CYAN:-}"
local white="${WHITE:-}"
local dim="${DIM:-}"
local nc="${NC:-}"
echo ""
echo -e " ${white}Before accessing services, add entries to your hosts file:${nc}"
echo ""
echo -e " ${cyan}sudo bash -c 'cat hosts.txt >> /etc/hosts'${nc}"
echo ""
echo -e " ${dim}Then flush your DNS cache:${nc}"
echo -e " ${cyan}macOS: sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder${nc}"
echo -e "   ${cyan}Linux: sudo resolvectl flush-caches${nc}"
}
#=============================================================================
# PREREQUISITES CHECK (for local mode)
#=============================================================================
# Check if all prerequisites for local mode are met
# Returns 0 if all prerequisites are met, 1 otherwise
# Usage: check_local_prerequisites || exit 1
check_local_prerequisites() {
local missing=()
# Check Docker
if ! command -v docker &> /dev/null; then
missing+=("docker")
elif ! docker info > /dev/null 2>&1; then
missing+=("docker-daemon")
fi
# Check Docker Compose
if ! docker compose version &> /dev/null; then
missing+=("docker-compose")
fi
# Check whiptail
if ! command -v whiptail &> /dev/null; then
missing+=("whiptail")
fi
# Check openssl
if ! command -v openssl &> /dev/null; then
missing+=("openssl")
fi
# Check git
if ! command -v git &> /dev/null; then
missing+=("git")
fi
# Check Python3
if ! command -v python3 &> /dev/null; then
missing+=("python3")
fi
if [[ ${#missing[@]} -gt 0 ]]; then
echo "Missing prerequisites: ${missing[*]}"
return 1
fi
return 0
}
# Print installation instructions for missing prerequisites
# Usage: print_prerequisite_instructions "docker" "whiptail"
print_prerequisite_instructions() {
local os_type
os_type=$(uname)
for dep in "$@"; do
case "$dep" in
docker)
echo " Docker:"
case "$os_type" in
Darwin) echo " Install Docker Desktop: https://www.docker.com/products/docker-desktop" ;;
Linux) echo " Install Docker: https://docs.docker.com/engine/install/" ;;
esac
;;
docker-daemon)
echo " Docker daemon is not running:"
case "$os_type" in
Darwin) echo " Start Docker Desktop from Applications" ;;
Linux) echo " Run: sudo systemctl start docker" ;;
esac
;;
docker-compose)
echo " Docker Compose:"
echo " Should be included with Docker Desktop"
echo " Or install: https://docs.docker.com/compose/install/"
;;
whiptail)
echo " whiptail:"
case "$os_type" in
Darwin) echo " Run: brew install newt" ;;
Linux) echo " Run: sudo apt-get install -y whiptail" ;;
esac
;;
openssl)
echo " openssl:"
case "$os_type" in
Darwin) echo " Usually pre-installed, or: brew install openssl" ;;
Linux) echo " Run: sudo apt-get install -y openssl" ;;
esac
;;
git)
echo " git:"
echo " Install: https://git-scm.com/downloads"
;;
python3)
echo " Python3:"
echo " Install: https://www.python.org/downloads/"
;;
esac
done
}
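`configure_mode_env` writes its results into the caller's associative array through a nameref (`local -n`), which needs bash 4.3+. A minimal sketch of that pattern, with the mode values reduced to the protocol setting from the constants above (the `_demo` function name is illustrative):

```shell
#!/bin/bash
# Sketch of the nameref pattern used by configure_mode_env (bash 4.3+).
# Writes through _env_ref land in the caller's associative array.
configure_mode_env_demo() {
    local -n _env_ref=$1            # nameref to the caller's array
    local mode="${2:-vps}"
    _env_ref["INSTALL_MODE"]="$mode"
    if [[ "$mode" == "local" ]]; then
        _env_ref["PROTOCOL"]="http"
    else
        _env_ref["PROTOCOL"]="https"
    fi
}

declare -A settings
configure_mode_env_demo settings "local"
echo "${settings[PROTOCOL]}"    # -> http
```

The nameref avoids the usual alternatives (echoing `key=value` pairs and re-parsing them, or polluting the global namespace) at the cost of the bash version requirement.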


@@ -130,6 +130,6 @@ fi
# Execute the rest of the update process using the (potentially updated) apply_update.sh
# Note: apply_update.sh has its own error telemetry trap and stages
bash "$APPLY_UPDATE_SCRIPT"
"$BASH" "$APPLY_UPDATE_SCRIPT"
exit 0


@@ -472,7 +472,8 @@ restore_debian_frontend() {
gen_random() {
local length="$1"
local characters="$2"
head /dev/urandom | tr -dc "$characters" | head -c "$length"
# LC_ALL=C is required on macOS to handle raw bytes from /dev/urandom
LC_ALL=C tr -dc "$characters" < /dev/urandom | head -c "$length"
}
# Generate alphanumeric password
@@ -497,13 +498,18 @@ gen_base64() {
openssl rand -base64 "$bytes" | head -c "$length"
}
# Generate bcrypt hash using Caddy
# Generate bcrypt hash using Caddy Docker image
# Usage: hash=$(generate_bcrypt_hash "plaintext_password")
generate_bcrypt_hash() {
local plaintext="$1"
if [[ -n "$plaintext" ]]; then
caddy hash-password --algorithm bcrypt --plaintext "$plaintext" 2>/dev/null
[[ -z "$plaintext" ]] && return 1
if ! command -v docker &> /dev/null; then
log_error "Docker is required for bcrypt generation"
return 1
fi
docker run --rm caddy:latest caddy hash-password --algorithm bcrypt --plaintext "$plaintext" 2>/dev/null
}
#=============================================================================
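The `gen_random` fix above matters on macOS: without `LC_ALL=C`, `tr` may abort on byte sequences from `/dev/urandom` that are invalid in a UTF-8 locale. A runnable sketch of the corrected pipeline:

```shell
#!/bin/bash
# Portable random-string generator, as in the patched gen_random.
# LC_ALL=C keeps tr byte-oriented so raw /dev/urandom bytes are
# never interpreted as (possibly invalid) multibyte characters.
gen_random() {
    local length="$1"
    local characters="$2"
    LC_ALL=C tr -dc "$characters" < /dev/urandom | head -c "$length"
}

pw=$(gen_random 24 'A-Za-z0-9')
echo "${#pw}"    # -> 24
```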


@@ -144,6 +144,14 @@
</svg>`
};
// ============================================
// CONFIG - Global configuration loaded from data.json
// ============================================
let CONFIG = {
protocol: 'https', // Default, will be overridden by data.json
domain: ''
};
// ============================================
// DATA - Service metadata and commands
// ============================================
@@ -340,6 +348,14 @@
category: 'tools',
docsUrl: 'https://docs.postiz.com'
},
'temporal-ui': {
name: 'Temporal UI',
description: 'Workflow orchestration for Postiz',
icon: 'TM',
color: 'bg-violet-500',
category: 'tools',
docsUrl: 'https://docs.temporal.io/'
},
'waha': {
name: 'WAHA',
description: 'WhatsApp HTTP API',
@@ -763,7 +779,7 @@
// External link (if hostname exists)
if (serviceData.hostname) {
const link = document.createElement('a');
link.href = `https://${serviceData.hostname}`;
link.href = `${CONFIG.protocol}://${serviceData.hostname}`;
link.target = '_blank';
link.rel = 'noopener';
link.className = 'text-brand hover:text-brand-400 text-sm font-medium inline-flex items-center gap-1 group transition-colors';
@@ -1031,6 +1047,14 @@
if (dataResult.status === 'fulfilled' && dataResult.value) {
const data = dataResult.value;
// Update global config from data.json
if (data.protocol) {
CONFIG.protocol = data.protocol;
}
if (data.domain) {
CONFIG.domain = data.domain;
}
// Update domain info
if (domainInfo) {
if (data.domain) {