50 Commits

Author SHA1 Message Date
Yury Kossakovsky
9dcf622e9f fix: use node-based healthcheck for uptime-kuma
louislam/uptime-kuma:2 image doesn't include wget
2026-03-28 17:50:48 -06:00
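A node-based probe sidesteps the missing binary. Roughly (a hypothetical sketch of the compose `healthcheck` command — the repo's exact wording may differ):

```shell
# louislam/uptime-kuma:2 ships node but not wget, so build the healthcheck
# probe around node. This just assembles and prints the candidate command;
# in docker-compose.yml it would go under healthcheck.test.
HEALTHCHECK_CMD='node -e "require(\"http\").get(\"http://127.0.0.1:3001\", r => process.exit(r.statusCode < 400 ? 0 : 1)).on(\"error\", () => process.exit(1))"'
echo "$HEALTHCHECK_CMD"
```

Exiting 0 on any status below 400 keeps the container healthy while the app is redirecting to its setup page.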
Yury Kossakovsky
7861dee1b1 fix: make n8n payload size max configurable via .env
was hardcoded to 256 in docker-compose.yml, ignoring user overrides
2026-03-28 17:40:18 -06:00
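The fix boils down to compose variable interpolation with a default, whose semantics match POSIX parameter expansion:

```shell
# docker-compose can reference ${N8N_PAYLOAD_SIZE_MAX:-256}: the 256 default
# applies only when the variable is unset or empty, so a .env override wins.
unset N8N_PAYLOAD_SIZE_MAX
echo "${N8N_PAYLOAD_SIZE_MAX:-256}"    # no override -> 256

N8N_PAYLOAD_SIZE_MAX=512
echo "${N8N_PAYLOAD_SIZE_MAX:-256}"    # override from .env -> 512
```

Hardcoding the literal `256` in the compose file, as before the fix, bypasses this mechanism entirely.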
Yury Kossakovsky
6fe028d01b chore: remove claude code github actions workflows 2026-03-23 16:12:15 -06:00
Yury Kossakovsky
804b81f6cb fix: resolve supabase-storage crash-loop by adding missing s3 config variables
supabase-storage crashes with "Region is missing" after upstream image
update because @aws-sdk/client-s3vectors requires REGION env var.

- add REGION, GLOBAL_S3_BUCKET, STORAGE_TENANT_ID to .env.example
- auto-generate S3_PROTOCOL_ACCESS_KEY_ID/SECRET in secret generation
- sync new env vars to existing supabase/docker/.env during updates
  (append-only, never overwrites existing values)
- bump version 1.3.3 → 1.4.1
2026-03-23 16:09:06 -06:00
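The append-only sync can be sketched like this (hypothetical helper name; the actual update script may differ):

```shell
# Add a key to an existing env file only if the key is absent, so values
# the user has customized are never overwritten during updates.
sync_env_var() {
  local file="$1" key="$2" value="$3"
  grep -q "^${key}=" "$file" || printf '%s=%s\n' "$key" "$value" >> "$file"
}

env_file=$(mktemp)
printf 'REGION=eu-west-1\n' > "$env_file"
sync_env_var "$env_file" REGION stub             # present: left untouched
sync_env_var "$env_file" GLOBAL_S3_BUCKET stub   # missing: appended
cat "$env_file"
```

The `grep -q "^${key}="` anchor matches the key only at the start of a line, so substrings of other keys do not cause false positives.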
Yury Kossakovsky
d344291c21 Merge pull request #53 from kossakovsky/develop
v1.4.0: add uptime kuma and pgvector support
2026-03-15 20:21:35 -06:00
Yury Kossakovsky
463258fb06 feat: add pgvector support to postgresql
switch postgres image to pgvector/pgvector for vector
similarity search capabilities
2026-03-15 20:15:05 -06:00
Yury Kossakovsky
e8a8a5a511 fix: complete uptime kuma integration gaps
add missing readme service url, update preview image tracking,
and release changelog as v1.4.0
2026-03-15 20:10:37 -06:00
Yury Kossakovsky
944e0465bd Merge pull request #51 from kossakovsky/feature/add-uptime-kuma
feat: add Uptime Kuma uptime monitoring service
2026-03-13 18:30:25 -06:00
Yury Kossakovsky
a6a3c2cb05 fix: add proxy-env inheritance and healthcheck proxy bypass to uptime kuma 2026-03-13 18:09:52 -06:00
Yury Kossakovsky
5859fc9d25 fix: add missing healthcheck start_period and standardize wording 2026-03-13 18:02:48 -06:00
Yury Kossakovsky
c33998043f fix: correct 15 errors in uptime kuma service integration
fix healthcheck port (3000→3001), add missing logging config,
add UPTIME_KUMA_HOSTNAME to caddy env, add import service_tls
in caddyfile, fix hostname typo in .env.example, add uptime-kuma
to GOST_NO_PROXY, fix profile name in wizard/final report, fix
env var in welcome page generator, add missing trailing comma in
app.js, move changelog to Added section, declare volume in
top-level section, fix container name in caddyfile, fix volume
mount path, fix broken markdown link in README
2026-03-13 17:58:13 -06:00
Yury Kossakovsky
174fce7527 Merge pull request #48 from kossakovsky/add-claude-github-actions-1773363152825
Add claude GitHub actions 1773363152825
2026-03-12 20:31:23 -06:00
Yury Kossakovsky
52845d1ed9 feat: add uptime kuma uptime monitoring service 2026-03-12 20:30:00 -06:00
Yury Kossakovsky
b0564ea0d8 "Claude Code Review workflow" 2026-03-12 18:52:35 -06:00
Yury Kossakovsky
888347e110 "Claude PR Assistant workflow" 2026-03-12 18:52:34 -06:00
Yury Kossakovsky
277466f144 fix(postiz): generate .env file to prevent dotenv-cli crash (#40)
the postiz backend image uses dotenv-cli to load /app/.env, which
doesn't exist when config is only passed via docker environment vars.
generate postiz.env from root .env and mount it read-only. also handle
edge case where docker creates the file as a directory on bind mount
failure, and quote values to prevent dotenv-cli misparses.
2026-02-27 20:44:18 -07:00
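The quoting and the directory edge case can be sketched as follows (hypothetical helper; the real generator derives its values from the root `.env`):

```shell
# Quote values so dotenv-cli does not treat '#' as a comment or split on
# spaces; escape any embedded double quotes.
write_env_kv() {
  local key="$1" value="$2"
  printf '%s="%s"\n' "$key" "${value//\"/\\\"}"
}

target=$(mktemp -d)/postiz.env
# If a failed bind mount left a directory at the path, remove it first:
[ -d "$target" ] && rmdir "$target"
write_env_kv JWT_SECRET 'abc #not a comment' > "$target"
cat "$target"
```

Without the quotes, dotenv-cli would truncate the value at the `#`.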
Yury Kossakovsky
58c485e49a docs: improve CLAUDE.md with missing architecture details
add start_services.py to key files, document python task runner,
docker-compose.override.yml support, yaml anchors, restart behavior,
supabase/dify profiles, --update flag for secrets, and expand file
locations and syntax validation lists
2026-02-27 19:13:36 -07:00
Yury Kossakovsky
b34e1468aa chore: remove redundant agents.md 2026-02-27 19:06:51 -07:00
Yury Kossakovsky
6a1301bfc0 fix(docker): respect docker-compose.override.yml for user customizations (#44)
all compose file assembly points now include the override file last
when present, giving it highest precedence over other compose files
2026-02-27 19:05:50 -07:00
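The assembly logic amounts to collecting `-f` flags in order and appending the override file last, since later files take precedence in compose merging (a sketch, not the repo's exact code):

```shell
# Work in a scratch directory so the demo is self-contained.
workdir=$(mktemp -d) && cd "$workdir"
touch docker-compose.yml docker-compose.override.yml

COMPOSE_FILES=(-f docker-compose.yml)
# ...profile-specific compose files would be appended here...
if [ -f docker-compose.override.yml ]; then
  COMPOSE_FILES+=(-f docker-compose.override.yml)   # LAST = highest precedence
fi
echo "docker compose ${COMPOSE_FILES[*]} up -d"
```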
Yury Kossakovsky
19325191c3 fix(installer): skip n8n prompts when n8n profile is not active
load COMPOSE_PROFILES early in 05_configure_services.sh so
is_profile_active guards n8n workflow import and worker config
sections, avoiding confusing prompts for users who don't use n8n
2026-02-27 18:56:39 -07:00
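The guard pattern is straightforward: load `COMPOSE_PROFILES` before any prompts, then gate each optional section on the profile. A minimal sketch of an `is_profile_active`-style check (the function name comes from the commit message; the repo's implementation may differ):

```shell
# Comma-delimited membership test against the COMPOSE_PROFILES value.
COMPOSE_PROFILES="flowise,monitoring"

is_profile_active() {
  case ",${COMPOSE_PROFILES}," in
    *",$1,"*) return 0 ;;
    *)        return 1 ;;
  esac
}

is_profile_active n8n     || echo "n8n inactive: skipping workflow import prompts"
is_profile_active flowise && echo "flowise active"
```

Wrapping the value in leading and trailing commas makes the substring match exact, so `n8n` does not match a hypothetical `n8n-extra` profile.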
Yury Kossakovsky
107f18296a feat: add appsmith low-code platform for internal tools
adds appsmith as an optional service with caddy reverse proxy,
auto-generated encryption secrets, wizard selection, welcome page
integration, update preview support, and final report output.
bumps version to 1.3.0.
2026-02-27 18:39:45 -07:00
Yury Kossakovsky
059e141daa fix(ragflow): correct nginx config and backend port (#41)
mount nginx config to conf.d/default.conf instead of
sites-available/default, and set SVR_HTTP_PORT to 9380
(official default) instead of 80 which conflicts with
nginx and causes 502 on api requests
2026-02-27 18:11:12 -07:00
Yury Kossakovsky
6505c5cdf4 fix(docker): limit parallel image pulls to prevent tls handshake timeout
set COMPOSE_PARALLEL_LIMIT=3 in .env.example to avoid net/http TLS
handshake timeout errors when pulling many images simultaneously
2026-02-27 16:55:40 -07:00
Yury Kossakovsky
f8e665f85f fix(comfyui): update docker image to cuda 12.8 2026-02-10 17:09:19 -07:00
Yury Kossakovsky
f2f51c6e13 docs: add agents.md with repository guidelines 2026-02-03 10:28:14 -07:00
Yury Kossakovsky
ceaa970273 docs: add missing architecture details to claude.md
document valkey/redis naming, VERSION file, GIT_MODE, caddy addons,
external compose files pattern, GOST_NO_PROXY requirement, and
n8n-template profile pattern
2026-02-03 10:28:10 -07:00
Yury Kossakovsky
6f1aaa0555 docs(changelog): release 1.2.5 2026-02-02 21:11:19 -07:00
Yury Kossakovsky
0dec31539e fix(n8n): use static ffmpeg for alpine compatibility 2026-02-02 21:04:06 -07:00
Yury Kossakovsky
b990b09681 docs: add missing scripts to key files in claude.md 2026-02-02 14:06:27 -07:00
Yury Kossakovsky
de8df8a0b7 fix(postiz): use localhost instead of docker hostname for backend_internal_url
the internal nginx in postiz container requires localhost, not the docker
service name, as this url is used for proxying within the container itself.
2026-01-30 13:50:50 -07:00
Yury Kossakovsky
543593de36 docs(gost): clarify http proxy protocol in wizard and env example
users may mistakenly use https:// for http proxies, which causes
gost to fail connecting to upstream. the protocol refers to proxy
type, not connection security.
2026-01-30 10:55:31 -07:00
Yury Kossakovsky
50bd817b56 fix(gost): add telegram domains to proxy bypass list
allows n8n telegram triggers to work when gost proxy is enabled
2026-01-29 16:11:18 -07:00
Yury Kossakovsky
611591dc0f docs(changelog): update 1.2.2 release date 2026-01-26 17:51:36 -07:00
Yury Kossakovsky
ad9c7aa57d fix(caddy): set readable permissions on custom tls certificates
docker volume mounts preserve host permissions, and caddy container
may run as different uid than host user, causing certificate read
failures with restrictive (600) permissions.
2026-01-26 17:50:35 -07:00
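An illustration of the permission mismatch (assumptions: GNU userland; the certificate is public material — private keys warrant stricter handling):

```shell
# A 600 file is unreadable to a container process running as a different
# uid; 644 makes the public certificate world-readable.
cert=$(mktemp)
chmod 600 "$cert"   # host-user-only: caddy's uid cannot read it
chmod 644 "$cert"   # readable by the container regardless of uid
ls -l "$cert" | cut -c1-10
```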
Yury Kossakovsky
6e283c508c fix(caddy): resolve snippet redeclaration by using site-*.conf pattern 2026-01-24 21:11:25 -07:00
Yury Kossakovsky
adc5b94f1c fix(caddy): resolve duplicate hostname error with custom tls certificates
change architecture from generating separate site blocks to using
a shared tls snippet that all services import
2026-01-24 20:23:25 -07:00
Yury Kossakovsky
a99676e3d5 fix(postiz): improve temporal integration
- increase elasticsearch memory to 512mb
- add temporal databases to initialization
- add postiz to final report
2026-01-17 19:56:29 -07:00
Yury Kossakovsky
bf7ce20f7b fix(caddy): add http block for welcome page to prevent redirect loop
when accessing welcome page through cloudflare tunnel, caddy was
redirecting http to https, causing an infinite redirect loop.
adding an explicit http block prevents automatic https redirect.
2026-01-17 19:42:50 -07:00
Yury Kossakovsky
36717a45c9 docs(readme): clarify vps requirement in prerequisites 2026-01-17 12:28:55 -07:00
Yury Kossakovsky
31b81b71a4 fix(postiz): add elasticsearch for temporal advanced visibility
temporal with sql visibility has a hard limit of 3 text search
attributes per namespace. postiz requires more, causing startup
failure. adding elasticsearch enables advanced visibility mode
which removes this limitation.
2026-01-17 12:26:40 -07:00
Yury Kossakovsky
a3e8f26925 fix(postiz): use correct temporal address env var 2026-01-16 20:27:06 -07:00
Yury Kossakovsky
917afe615c fix(temporal): use container ip for healthcheck connection 2026-01-16 20:15:03 -07:00
Yury Kossakovsky
641fd04290 fix(temporal): update healthcheck to use modern cli 2026-01-16 18:59:37 -07:00
Yury Kossakovsky
ca43e7ab12 docs(readme): add troubleshooting for update script issues 2026-01-16 18:48:31 -07:00
Yury Kossakovsky
e5db00098a refactor(docker-compose): extract logging config into yaml anchor 2026-01-16 18:45:30 -07:00
Yury Kossakovsky
4a6f1c0e01 feat(postiz): add temporal server for workflow orchestration
add temporal and temporal-ui services to the postiz profile for
workflow orchestration. includes caddy reverse proxy with basic
auth, secret generation, and welcome page integration.
2026-01-16 18:42:54 -07:00
Yury Kossakovsky
19cd6b6f91 docs(cloudflare): update tunnel instructions and add missing services
- update dashboard navigation to match current cloudflare ui
- add nocodb and welcome page to services table
- add notes explaining external compose files and caddy-served content
2026-01-13 08:40:36 -07:00
Yury Kossakovsky
b28093b5cd feat(welcome): add changelog section to dashboard 2026-01-12 10:03:46 -07:00
Yury Kossakovsky
361a726a07 docs(changelog): update v1.1.0 release date 2026-01-11 13:10:32 -07:00
Yury Kossakovsky
0b4c9d5dda feat(makefile): add stop and start commands for service control 2026-01-10 11:02:23 -07:00
29 changed files with 1003 additions and 478 deletions


@@ -314,14 +314,16 @@ ${SERVICE_NAME_UPPER}_PASSWORD=
 ${SERVICE_NAME_UPPER}_PASSWORD_HASH=
 ```
-### 3.3 GOST_NO_PROXY (if using proxy-env)
-Add service to comma-separated list:
+### 3.3 GOST_NO_PROXY (REQUIRED for ALL services)
+**CRITICAL:** Add ALL new service container names to the comma-separated list to prevent internal Docker traffic from going through the proxy:
 ```dotenv
 GOST_NO_PROXY=localhost,127.0.0.1,...existing...,$ARGUMENTS
 ```
+This applies to ALL services, not just those using `<<: *proxy-env`. Internal service-to-service communication must bypass the proxy.
 ---
 ## STEP 4: scripts/03_generate_secrets.sh
@@ -706,6 +708,7 @@ bash -n scripts/07_final_report.sh
 - [ ] `docker-compose.yml`: caddy environment vars (if external)
 - [ ] `Caddyfile`: reverse proxy block (if external)
 - [ ] `.env.example`: hostname added
+- [ ] `.env.example`: service added to `GOST_NO_PROXY` (ALL internal services must be listed)
 - [ ] `scripts/03_generate_secrets.sh`: password in `VARS_TO_GENERATE`
 - [ ] `scripts/04_wizard.sh`: service in `base_services_data`
 - [ ] `scripts/generate_welcome_page.sh`: `SERVICES_ARRAY` entry
@@ -722,7 +725,6 @@ bash -n scripts/07_final_report.sh
 ### If Outbound Proxy (AI API calls)
 - [ ] `docker-compose.yml`: `<<: *proxy-env` in environment
-- [ ] `.env.example`: service added to `GOST_NO_PROXY`
 - [ ] `docker-compose.yml`: healthcheck bypasses proxy
 ### If Database Required


@@ -99,6 +99,15 @@ NEO4J_AUTH_PASSWORD=
 NOCODB_JWT_SECRET=
+############
+# [required]
+# Appsmith encryption credentials (auto-generated)
+############
+APPSMITH_ENCRYPTION_PASSWORD=
+APPSMITH_ENCRYPTION_SALT=
 ############
 # [required]
 # Langfuse credentials
@@ -148,6 +157,7 @@ LT_PASSWORD_HASH=
 USER_DOMAIN_NAME=
 LETSENCRYPT_EMAIL=
+APPSMITH_HOSTNAME=appsmith.yourdomain.com
 COMFYUI_HOSTNAME=comfyui.yourdomain.com
 DATABASUS_HOSTNAME=databasus.yourdomain.com
 DIFY_HOSTNAME=dify.yourdomain.com
@@ -164,6 +174,7 @@ NOCODB_HOSTNAME=nocodb.yourdomain.com
 PADDLEOCR_HOSTNAME=paddleocr.yourdomain.com
 PORTAINER_HOSTNAME=portainer.yourdomain.com
 POSTIZ_HOSTNAME=postiz.yourdomain.com
+TEMPORAL_UI_HOSTNAME=temporal.yourdomain.com
 PROMETHEUS_HOSTNAME=prometheus.yourdomain.com
 QDRANT_HOSTNAME=qdrant.yourdomain.com
 RAGAPP_HOSTNAME=ragapp.yourdomain.com
@@ -172,6 +183,7 @@ SEARXNG_HOSTNAME=searxng.yourdomain.com
 SUPABASE_HOSTNAME=supabase.yourdomain.com
 WAHA_HOSTNAME=waha.yourdomain.com
 WEAVIATE_HOSTNAME=weaviate.yourdomain.com
+UPTIME_KUMA_HOSTNAME=uptime-kuma.yourdomain.com
 WEBUI_HOSTNAME=webui.yourdomain.com
 WELCOME_HOSTNAME=welcome.yourdomain.com
@@ -214,6 +226,10 @@ N8N_LOG_LEVEL=info
 NODES_EXCLUDE="[]"
 N8N_LOG_OUTPUT=console
+# Maximum payload size in MB for n8n requests (default: 256 MB).
+# Increase if you need to handle large files or webhook payloads.
+N8N_PAYLOAD_SIZE_MAX=256
 # Timezone for n8n and workflows (https://docs.n8n.io/hosting/configuration/environment-variables/timezone-localization/)
 GENERIC_TIMEZONE=America/New_York
@@ -409,6 +425,25 @@ IMGPROXY_ENABLE_WEBP_DETECTION=true
 # Add your OpenAI API key to enable SQL Editor Assistant
 OPENAI_API_KEY=
+############
+# Storage - Configuration for S3 protocol endpoint
+############
+# S3 bucket when using S3 backend, directory name when using 'file'
+GLOBAL_S3_BUCKET=stub
+# Used for S3 protocol endpoint configuration
+REGION=stub
+# Equivalent to project_ref (S3 session token authentication)
+STORAGE_TENANT_ID=stub
+# Access to Storage via S3 protocol endpoint
+S3_PROTOCOL_ACCESS_KEY_ID=
+S3_PROTOCOL_ACCESS_KEY_SECRET=
 # ============================================
 # Cloudflare Tunnel Configuration (Optional)
 # ============================================
@@ -429,11 +464,13 @@ GOST_PROXY_URL=
 # External upstream proxy (REQUIRED - asked during wizard if gost is selected)
 # Examples: socks5://user:pass@proxy.com:1080, http://user:pass@proxy.com:8080
+# IMPORTANT: For HTTP proxies use http://, NOT https://
+# The protocol refers to proxy type, not connection security.
 GOST_UPSTREAM_PROXY=
 # Internal services bypass list (prevents internal Docker traffic from going through proxy)
 # Includes: Docker internal networks (172.16-31.*, 10.*), Docker DNS (127.0.0.11), and all service hostnames
-GOST_NO_PROXY=localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.local,postgres,postgres:5432,redis,redis:6379,caddy,ollama,neo4j,qdrant,weaviate,clickhouse,minio,searxng,crawl4ai,gotenberg,langfuse-web,langfuse-worker,flowise,n8n,n8n-import,n8n-worker-1,n8n-worker-2,n8n-worker-3,n8n-worker-4,n8n-worker-5,n8n-worker-6,n8n-worker-7,n8n-worker-8,n8n-worker-9,n8n-worker-10,n8n-runner-1,n8n-runner-2,n8n-runner-3,n8n-runner-4,n8n-runner-5,n8n-runner-6,n8n-runner-7,n8n-runner-8,n8n-runner-9,n8n-runner-10,letta,lightrag,docling,postiz,ragflow,ragflow-mysql,ragflow-minio,ragflow-redis,ragflow-elasticsearch,ragapp,open-webui,comfyui,waha,libretranslate,paddleocr,nocodb,db,studio,kong,auth,rest,realtime,storage,imgproxy,meta,functions,analytics,vector,supavisor,gost
+GOST_NO_PROXY=localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.local,appsmith,postgres,postgres:5432,redis,redis:6379,caddy,ollama,neo4j,qdrant,weaviate,clickhouse,minio,searxng,crawl4ai,gotenberg,langfuse-web,langfuse-worker,flowise,n8n,n8n-import,n8n-worker-1,n8n-worker-2,n8n-worker-3,n8n-worker-4,n8n-worker-5,n8n-worker-6,n8n-worker-7,n8n-worker-8,n8n-worker-9,n8n-worker-10,n8n-runner-1,n8n-runner-2,n8n-runner-3,n8n-runner-4,n8n-runner-5,n8n-runner-6,n8n-runner-7,n8n-runner-8,n8n-runner-9,n8n-runner-10,letta,lightrag,docling,postiz,temporal,temporal-ui,ragflow,ragflow-mysql,ragflow-minio,ragflow-redis,ragflow-elasticsearch,ragapp,open-webui,comfyui,waha,libretranslate,paddleocr,nocodb,db,studio,kong,auth,rest,realtime,storage,imgproxy,meta,functions,analytics,vector,supavisor,gost,uptime-kuma,api.telegram.org,telegram.org,t.me,core.telegram.org
 ############
 # Functions - Configuration for Functions
@@ -474,6 +511,14 @@ DIFY_SECRET_KEY=
 DIFY_EXPOSE_NGINX_PORT=8080
 DIFY_EXPOSE_NGINX_SSL_PORT=9443
+############
+# Docker Compose parallel limit
+# Limits the number of simultaneous Docker image pulls to prevent
+# "net/http: TLS handshake timeout" errors when many services are selected.
+# Increase this value if you have a fast network connection.
+############
+COMPOSE_PARALLEL_LIMIT=3
 ###########################################################################################
 COMPOSE_PROFILES="n8n,portainer,monitoring,databasus"
 PROMETHEUS_PASSWORD_HASH=
@@ -489,6 +534,13 @@ RAGAPP_PASSWORD_HASH=
 POSTIZ_DISABLE_REGISTRATION=false
+############
+# Temporal UI credentials (for Caddy basic auth)
+############
+TEMPORAL_UI_USERNAME=
+TEMPORAL_UI_PASSWORD=
+TEMPORAL_UI_PASSWORD_HASH=
 ############
 # Postiz Social Media Integrations
 # Leave blank if not used. Provide credentials from each platform.

.gitignore

@@ -11,7 +11,9 @@ dify/
 volumes/
 docker-compose.override.yml
 docker-compose.n8n-workers.yml
+postiz.env
 welcome/data.json
+welcome/changelog.json
 # Custom TLS certificates
 certs/*


@@ -1,16 +1,95 @@
 # Changelog
+All notable changes to this project are documented in this file.
+The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
+and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 ## [Unreleased]
-## [1.1.0] - 2026-01-09
+## [1.4.2] - 2026-03-28
+### Fixed
+- **n8n** - Make `N8N_PAYLOAD_SIZE_MAX` configurable via `.env` (was hardcoded to 256, ignoring user overrides)
+- **Uptime Kuma** - Fix healthcheck failure (`wget: not found`) by switching to Node.js-based check
+## [1.4.1] - 2026-03-23
+### Fixed
+- **Supabase Storage** - Fix crash-loop (`Region is missing`) by adding missing S3 storage configuration variables (`REGION`, `GLOBAL_S3_BUCKET`, `STORAGE_TENANT_ID`) from upstream Supabase
+- **Supabase** - Sync new environment variables to existing `supabase/docker/.env` during updates (previously only populated on first install)
+## [1.4.0] - 2026-03-15
+### Added
+- **Uptime Kuma** - Self-hosted uptime monitoring with 90+ notification services
+- **pgvector** - Switch PostgreSQL image to `pgvector/pgvector` for vector similarity search support
+## [1.3.3] - 2026-02-27
+### Fixed
+- **Postiz** - Generate `postiz.env` file to prevent `dotenv-cli` crash in backend container (#40). Handles edge case where Docker creates the file as a directory, and quotes values to prevent misparses.
+## [1.3.2] - 2026-02-27
+### Fixed
+- **Docker Compose** - Respect `docker-compose.override.yml` for user customizations (#44). All compose file assembly points now include the override file when present.
+## [1.3.1] - 2026-02-27
+### Fixed
+- **Installer** - Skip n8n workflow import and worker configuration prompts when n8n profile is not selected
+## [1.3.0] - 2026-02-27
+### Added
+- **Appsmith** - Low-code platform for building internal tools, dashboards, and admin panels
+## [1.2.8] - 2026-02-27
+### Fixed
+- **Ragflow** - Fix nginx config mount path (`sites-available/default` → `conf.d/default.conf`) to resolve default "Welcome to nginx!" page (#41)
+## [1.2.7] - 2026-02-27
+### Fixed
+- **Docker** - Limit parallel image pulls (`COMPOSE_PARALLEL_LIMIT=3`) to prevent `TLS handshake timeout` errors when many services are selected
+## [1.2.6] - 2026-02-10
+### Changed
+- **ComfyUI** - Update Docker image to CUDA 12.8 (`cu128-slim`)
+## [1.2.5] - 2026-02-03
+### Fixed
+- **n8n** - Use static ffmpeg binaries for Alpine/musl compatibility (fixes glibc errors)
+## [1.2.4] - 2026-01-30
+### Fixed
+- **Postiz** - Fix `BACKEND_INTERNAL_URL` to use `localhost` instead of Docker hostname (internal nginx requires localhost)
+## [1.2.3] - 2026-01-29
+### Fixed
+- **Gost proxy** - Add Telegram domains to `GOST_NO_PROXY` bypass list for n8n Telegram triggers
+## [1.2.2] - 2026-01-26
+### Fixed
+- **Custom TLS** - Fix duplicate hostname error when using custom certificates. Changed architecture from generating separate site blocks to using a shared TLS snippet that all services import.
+## [1.2.1] - 2026-01-16
+### Added
+- **Temporal** - Temporal server and UI for Postiz workflow orchestration (#33)
+## [1.2.0] - 2026-01-12
+### Added
+- Changelog section on Welcome Page dashboard
+## [1.1.0] - 2026-01-11
 ### Added
 - **Custom TLS certificates** - Support for corporate/internal certificates via `caddy-addon/` mechanism
+- New `make stop` and `make start` commands for stopping/starting all services without restart
 - New `make setup-tls` command and `scripts/setup_custom_tls.sh` helper script for easy certificate configuration
 - New `make git-pull` command for fork workflows - merges from upstream instead of hard reset
@@ -218,3 +297,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 ### Added
 - Langfuse - LLM observability and analytics platform
 - Initial fork from coleam00/local-ai-packager with enhanced service support
+---
+All notable changes to this project are documented in this file.
+The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
+and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).


@@ -10,7 +10,7 @@ This is **n8n-install**, a Docker Compose-based installer that provides a compre
- **Profile-based service management**: Services are activated via Docker Compose profiles (e.g., `n8n`, `flowise`, `monitoring`). Profiles are stored in the `.env` file's `COMPOSE_PROFILES` variable. - **Profile-based service management**: Services are activated via Docker Compose profiles (e.g., `n8n`, `flowise`, `monitoring`). Profiles are stored in the `.env` file's `COMPOSE_PROFILES` variable.
- **No exposed ports**: Services do NOT publish ports directly. All external HTTPS access is routed through Caddy reverse proxy on ports 80/443. - **No exposed ports**: Services do NOT publish ports directly. All external HTTPS access is routed through Caddy reverse proxy on ports 80/443.
- **Shared secrets**: Core services (Postgres, Redis/Valkey, Caddy) are always included. Other services are optional and selected during installation. - **Shared secrets**: Core services (Postgres, Valkey (Redis-compatible, container named `redis` for backward compatibility), Caddy) are always included. Other services are optional and selected during installation.
- **Queue-based n8n**: n8n runs in `queue` mode with Redis, Postgres, and dynamically scaled workers (`N8N_WORKER_COUNT`). - **Queue-based n8n**: n8n runs in `queue` mode with Redis, Postgres, and dynamically scaled workers (`N8N_WORKER_COUNT`).
### Key Files ### Key Files
@@ -40,9 +40,14 @@ This is **n8n-install**, a Docker Compose-based installer that provides a compre
- `scripts/docker_cleanup.sh`: Removes unused Docker resources (used by `make clean`) - `scripts/docker_cleanup.sh`: Removes unused Docker resources (used by `make clean`)
- `scripts/download_top_workflows.sh`: Downloads community n8n workflows - `scripts/download_top_workflows.sh`: Downloads community n8n workflows
- `scripts/import_workflows.sh`: Imports workflows from `n8n/backup/workflows/` into n8n (used by `make import`) - `scripts/import_workflows.sh`: Imports workflows from `n8n/backup/workflows/` into n8n (used by `make import`)
- `scripts/restart.sh`: Restarts services with proper compose file handling (used by `make restart`)
- `scripts/setup_custom_tls.sh`: Configures custom TLS certificates (used by `make setup-tls`); supports `--remove` to revert to Let's Encrypt
- `start_services.py`: Python orchestrator for service startup order, builds Docker images, handles external services (Supabase/Dify cloning, env preparation, startup), generates SearXNG secret key, stops existing containers. Uses `python-dotenv` (`dotenv_values`).
**Project Name**: All docker-compose commands use `-p localai` (defined in Makefile as `PROJECT_NAME := localai`). **Project Name**: All docker-compose commands use `-p localai` (defined in Makefile as `PROJECT_NAME := localai`).
**Version**: Stored in `VERSION` file at repository root.
### Installation Flow ### Installation Flow
`scripts/install.sh` orchestrates the installation by running numbered scripts in sequence: `scripts/install.sh` orchestrates the installation by running numbered scripts in sequence:
@@ -56,7 +61,9 @@ This is **n8n-install**, a Docker Compose-based installer that provides a compre
7. `07_final_report.sh` - Display credentials and URLs 7. `07_final_report.sh` - Display credentials and URLs
8. `08_fix_permissions.sh` - Fix file ownership for non-root access 8. `08_fix_permissions.sh` - Fix file ownership for non-root access
The update flow (`scripts/update.sh`) similarly orchestrates: git fetch + reset → service selection → `apply_update.sh` → restart. The update flow (`scripts/update.sh`) similarly orchestrates: git fetch + reset → service selection → `apply_update.sh` → restart. During updates, `03_generate_secrets.sh --update` adds new variables from `.env.example` without regenerating existing ones (preserves user-set values).
**Git update modes**: Default is `reset` (hard reset to origin). Set `GIT_MODE=merge` in `.env` for fork workflows (merges from upstream instead of hard reset). The `make git-pull` command uses merge mode. Git branch support is explicit: `GIT_SUPPORTED_BRANCHES=("main" "develop")` in `git.sh`; unknown branches warn and fall back to `main`.
## Common Development Commands ## Common Development Commands
@@ -75,6 +82,8 @@ make logs s=<service> # View logs for specific service
make status # Show container status make status # Show container status
make monitor # Live CPU/memory monitoring (docker stats) make monitor # Live CPU/memory monitoring (docker stats)
make restart # Restart all services make restart # Restart all services
make stop # Stop all services
make start # Start all services
make show-restarts # Show restart count per container make show-restarts # Show restart count per container
make doctor # Run system diagnostics (DNS, SSL, containers, disk, memory) make doctor # Run system diagnostics (DNS, SSL, containers, disk, memory)
make import # Import n8n workflows from backup make import # Import n8n workflows from backup
@@ -96,7 +105,7 @@ Follow this workflow when adding a new optional service (refer to `.claude/comma
3. **.env.example**: Add `MYSERVICE_HOSTNAME=myservice.yourdomain.com` and credentials if using basic auth. 3. **.env.example**: Add `MYSERVICE_HOSTNAME=myservice.yourdomain.com` and credentials if using basic auth.
4. **scripts/03_generate_secrets.sh**: Generate passwords and bcrypt hashes. Add to `VARS_TO_GENERATE` map. 4. **scripts/03_generate_secrets.sh**: Generate passwords and bcrypt hashes. Add to `VARS_TO_GENERATE` map.
5. **scripts/04_wizard.sh**: Add service to `base_services_data` array for wizard selection. 5. **scripts/04_wizard.sh**: Add service to `base_services_data` array for wizard selection.
6. **scripts/databases.sh**: If service uses PostgreSQL, add database name to `INIT_DB_DATABASES` array. 6. **scripts/databases.sh**: If service uses PostgreSQL, add database name to `INIT_DB_DATABASES` array. Database creation is idempotent (checks existence before creating). Note: Postiz also requires `temporal` and `temporal_visibility` databases.
7. **scripts/generate_welcome_page.sh**: Add service to `SERVICES_ARRAY` for welcome dashboard. 7. **scripts/generate_welcome_page.sh**: Add service to `SERVICES_ARRAY` for welcome dashboard.
8. **welcome/app.js**: Add `SERVICE_METADATA` entry with name, description, icon, color, category. 8. **welcome/app.js**: Add `SERVICE_METADATA` entry with name, description, icon, color, category.
9. **scripts/07_final_report.sh**: Add service URL and credentials output using `is_profile_active "myservice"`. 9. **scripts/07_final_report.sh**: Add service URL and credentials output using `is_profile_active "myservice"`.
@@ -154,11 +163,11 @@ This project uses [Semantic Versioning](https://semver.org/). When updating `CHA
- Configuration stored in `docker-compose.n8n-workers.yml` (auto-generated, gitignored)
- Runner connects to its worker via `network_mode: "service:n8n-worker-N"` (localhost:5679)
- Runner image `n8nio/runners` must match n8n version
- **Template profile pattern**: `docker-compose.yml` defines `n8n-worker-template` and `n8n-runner-template` with `profiles: ["n8n-template"]` (never activated directly). `generate_n8n_workers.sh` uses these as templates to generate `docker-compose.n8n-workers.yml` with the actual worker/runner services.
- **Scaling**: Change `N8N_WORKER_COUNT` in `.env` and run `bash scripts/generate_n8n_workers.sh`
- **Code node libraries**: Configured via `n8n/n8n-task-runners.json` and `n8n/Dockerfile.runner`:
  - **JavaScript runner**: packages installed via `pnpm add` in Dockerfile.runner; allowlist in `n8n-task-runners.json` (`NODE_FUNCTION_ALLOW_EXTERNAL`, `NODE_FUNCTION_ALLOW_BUILTIN`); default packages: `cheerio`, `axios`, `moment`, `lodash`
  - **Python runner**: also configured in `n8n-task-runners.json`; uses `/opt/runners/task-runner-python/.venv/bin/python` with `N8N_RUNNERS_STDLIB_ALLOW: "*"` and `N8N_RUNNERS_EXTERNAL_ALLOW: "*"`
- Workflows can access the host filesystem via `/data/shared` (mapped to `./shared`)
- `N8N_BLOCK_ENV_ACCESS_IN_NODE=false` allows Code nodes to access environment variables
@@ -168,6 +177,18 @@ This project uses [Semantic Versioning](https://semver.org/). When updating `CHA
- Hostnames are passed via environment variables (e.g., `N8N_HOSTNAME`, `FLOWISE_HOSTNAME`)
- Basic auth uses bcrypt hashes generated by `scripts/03_generate_secrets.sh` via Caddy's hash command
- Never add `ports:` to services in docker-compose.yml; let Caddy handle all external access
- **Caddy Addons** (`caddy-addon/`): Extend Caddy config without modifying the main Caddyfile. Files matching `site-*.conf` are auto-imported (gitignored, user-created). TLS is controlled via `tls-snippet.conf` (all service blocks use `import service_tls`). See `caddy-addon/README.md` for details.
- Custom TLS certificates go in `certs/` directory (gitignored), referenced as `/etc/caddy/certs/` inside the container
### External Compose Files (Supabase/Dify)
Complex services like Supabase and Dify maintain their own upstream docker-compose files:
- `start_services.py` handles cloning repos, preparing `.env` files, and starting services
- Each external service needs: `is_*_enabled()`, `clone_*_repo()`, `prepare_*_env()`, `start_*()` functions in `start_services.py`
- `scripts/utils.sh` provides `get_*_compose()` getter functions and `build_compose_files_array()` includes them
- `stop_all_services()` in `start_services.py` checks compose file existence (not profile) to ensure cleanup when a profile is removed
- All external compose files use the same project name (`-p localai`) so containers appear together
- **`docker-compose.override.yml`**: User customizations file (gitignored). Both `start_services.py` and `build_compose_files_array()` in `utils.sh` auto-detect and include it last (highest precedence). Users can override any service property without modifying tracked files.
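The file-assembly logic described above can be sketched like this; it is a minimal illustration, not the real `build_compose_files_array` from `scripts/utils.sh` (which handles more files), but it shows why the override file is appended last.

```shell
# Sketch: build the -f file list, appending the user override last so its
# settings take precedence over earlier files.
build_compose_files() {
  files="-f docker-compose.yml"
  if [ -f docker-compose.n8n-workers.yml ]; then
    files="$files -f docker-compose.n8n-workers.yml"
  fi
  if [ -f docker-compose.override.yml ]; then
    files="$files -f docker-compose.override.yml"   # always last
  fi
  echo "$files"
}

cd "$(mktemp -d)"
touch docker-compose.yml docker-compose.override.yml
build_compose_files
# prints: -f docker-compose.yml -f docker-compose.override.yml
```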
### Secret Generation
@@ -176,6 +197,7 @@ The `scripts/03_generate_secrets.sh` script:
- Creates bcrypt password hashes using Caddy's `hash-password` command
- Preserves existing user-provided values in `.env`
- Supports different secret types via `VARS_TO_GENERATE` map: `password:32`, `jwt`, `api_key`, `base64:64`, `hex:32`
- When called with `--update` flag (during updates), only adds new variables without regenerating existing ones
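Plausible generators for those secret types look like the following. This is illustrative only; the actual implementations live in `scripts/03_generate_secrets.sh` and these function names are made up for the sketch.

```shell
# Illustrative generators for the secret types listed above.
gen_password() { openssl rand -base64 64 | tr -dc 'A-Za-z0-9' | head -c "$1"; }
gen_hex()      { openssl rand -hex "$1"; }              # hex:32 -> 64 chars
gen_base64()   { openssl rand -base64 "$1" | tr -d '\n'; }

pw="$(gen_password 32)"
hx="$(gen_hex 32)"
echo "password length: ${#pw}, hex length: ${#hx}"
# prints: password length: 32, hex length: 64
```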
### Utility Functions (scripts/utils.sh)
@@ -210,11 +232,24 @@ Common profiles:
- `langfuse`: Langfuse observability (includes ClickHouse, MinIO, worker, web)
- `cpu`, `gpu-nvidia`, `gpu-amd`: Ollama hardware profiles (mutually exclusive)
- `cloudflare-tunnel`: Cloudflare Tunnel for zero-trust access (see `cloudflare-instructions.md`)
- `supabase`: Supabase BaaS (external compose, cloned at runtime; mutually exclusive with `dify`)
- `dify`: Dify AI platform (external compose, cloned at runtime; mutually exclusive with `supabase`)
- `gost`: HTTP/HTTPS proxy for routing AI service outbound traffic
- `python-runner`: Internal Python execution environment (no external access)
- `searxng`, `letta`, `lightrag`, `libretranslate`, `crawl4ai`, `docling`, `waha`, `comfyui`, `paddleocr`, `ragapp`, `gotenberg`, `postiz`: Additional optional services
## Architecture Patterns
### Docker Compose YAML Anchors
`docker-compose.yml` defines reusable anchors at the top:
- `x-logging: &default-logging` - `json-file` with `max-size: 1m`, `max-file: 1`
- `x-proxy-env: &proxy-env` - HTTP/HTTPS proxy vars from `GOST_PROXY_URL`/`GOST_NO_PROXY`
- `x-n8n: &service-n8n` - Full n8n service definition (reused by workers via `extends`)
- `x-ollama: &service-ollama` - Ollama service definition (reused by CPU/GPU variants)
- `x-init-ollama: &init-ollama` - Ollama model pre-puller (auto-pulls `qwen2.5:7b-instruct-q4_K_M` and `nomic-embed-text`)
- `x-n8n-worker-runner: &service-n8n-worker-runner` - Runner template for worker generation
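For illustration, the anchor-and-alias mechanics look like this; the service shown is a made-up example, not one of the real definitions from `docker-compose.yml`:

```yaml
x-logging: &default-logging
  driver: json-file
  options:
    max-size: 1m
    max-file: "1"

services:
  example:                        # illustrative service, not from the stack
    image: alpine:3.20
    logging: *default-logging     # reuses the anchor defined above
```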
### Healthchecks
Services should define healthchecks for proper dependency management:
@@ -274,6 +309,8 @@ healthcheck:
test: ["CMD-SHELL", "http_proxy= https_proxy= HTTP_PROXY= HTTPS_PROXY= wget -qO- http://localhost:8080/health || exit 1"] test: ["CMD-SHELL", "http_proxy= https_proxy= HTTP_PROXY= HTTPS_PROXY= wget -qO- http://localhost:8080/health || exit 1"]
``` ```
**GOST_NO_PROXY**: ALL service container names must be listed in `GOST_NO_PROXY` in `.env.example`. This prevents internal Docker network traffic from routing through the proxy. This applies to every service, not just those using `<<: *proxy-env`.
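A quick self-check for this rule can be sketched as follows; the service names and the `GOST_NO_PROXY` value below are illustrative, whereas in the repo the names would come from `docker-compose.yml` and `.env.example`.

```shell
# Verify every service name appears in the comma-separated GOST_NO_PROXY list.
GOST_NO_PROXY="caddy,n8n,flowise,open-webui,qdrant"
services="caddy n8n flowise open-webui qdrant"

missing=""
for s in $services; do
  case ",${GOST_NO_PROXY}," in
    *,"$s",*) ;;                      # listed: internal traffic bypasses proxy
    *) missing="$missing $s" ;;       # not listed: would route via the proxy
  esac
done
echo "missing:${missing:-none}"
# prints: missing:none
```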
### Welcome Page Dashboard
The welcome page (`welcome/`) provides a post-install dashboard showing all active services:
@@ -289,6 +326,10 @@ Directories in `PRESERVE_DIRS` (defined in `scripts/utils.sh`) survive git updat
These are backed up before `git reset --hard` and restored after.
### Restart Behavior
`scripts/restart.sh` stops all services first, then starts external stacks (Supabase/Dify) separately before the main stack (10s delay between). This is required because external compose files use relative volume paths that resolve from their own directory.
## Common Issues and Solutions
### Service won't start after adding
@@ -309,7 +350,11 @@ These are backed up before `git reset --hard` and restored after.
## File Locations
- Shared files accessible by n8n: `./shared` (mounted as `/data/shared` in n8n)
- n8n backup/workflows: `n8n/backup/workflows/` (mounted as `/backup` in n8n containers)
- n8n storage: Docker volume `localai_n8n_storage`
- Flowise storage: `~/.flowise` on host (mounted from user's home directory, not a named volume)
- Custom TLS certificates: `certs/` (gitignored, mounted as `/etc/caddy/certs/`)
- Caddy addon configs: `caddy-addon/site-*.conf` (gitignored, auto-imported)
- Service-specific volumes: Defined in `volumes:` section at top of `docker-compose.yml`
- Installation logs: stdout during script execution
- Service logs: `docker compose -p localai logs <service>`
@@ -336,6 +381,10 @@ bash -n scripts/generate_n8n_workers.sh
bash -n scripts/apply_update.sh
bash -n scripts/update.sh
bash -n scripts/install.sh
bash -n scripts/restart.sh
bash -n scripts/doctor.sh
bash -n scripts/setup_custom_tls.sh
bash -n scripts/docker_cleanup.sh
```
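The growing list of per-file checks above can also be run as one loop; the sketch below demonstrates the pattern on a throwaway script so it stays self-contained (in the repo you would point it at `scripts/*.sh`).

```shell
# Syntax-check every shell script under a directory with bash -n.
dir="$(mktemp -d)"
printf 'echo "hello"\n' > "$dir/ok.sh"

status=0
for f in "$dir"/*.sh; do
  bash -n "$f" || { echo "syntax error in $f"; status=1; }
done
echo "exit status: $status"
# prints: exit status: 0
```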
### Full Testing


@@ -3,30 +3,44 @@
email {$LETSENCRYPT_EMAIL}
}
# Import TLS snippet (must be before service blocks)
# Default: Let's Encrypt automatic certificates
# Custom: Run 'make setup-tls' to use your own certificates
import /etc/caddy/addons/tls-snippet.conf
# Appsmith
{$APPSMITH_HOSTNAME} {
import service_tls
reverse_proxy appsmith:80
}
# N8N
{$N8N_HOSTNAME} {
import service_tls
reverse_proxy n8n:5678
}
# Open WebUI
{$WEBUI_HOSTNAME} {
import service_tls
reverse_proxy open-webui:8080
}
# Flowise
{$FLOWISE_HOSTNAME} {
import service_tls
reverse_proxy flowise:3001
}
# Dify
{$DIFY_HOSTNAME} {
import service_tls
reverse_proxy nginx:80
}
# RAGApp
{$RAGAPP_HOSTNAME} {
import service_tls
basic_auth {
{$RAGAPP_USERNAME} {$RAGAPP_PASSWORD_HASH}
}
@@ -35,37 +49,38 @@
# RAGFlow
{$RAGFLOW_HOSTNAME} {
import service_tls
reverse_proxy ragflow:80
}
# Langfuse
{$LANGFUSE_HOSTNAME} {
import service_tls
reverse_proxy langfuse-web:3000
}
# Supabase
{$SUPABASE_HOSTNAME} {
import service_tls
reverse_proxy kong:8000
}
# Grafana
{$GRAFANA_HOSTNAME} {
import service_tls
reverse_proxy grafana:3000
}
# WAHA (WhatsApp HTTP API)
{$WAHA_HOSTNAME} {
import service_tls
reverse_proxy waha:3000
}
# Prometheus
{$PROMETHEUS_HOSTNAME} {
import service_tls
basic_auth {
{$PROMETHEUS_USERNAME} {$PROMETHEUS_PASSWORD_HASH}
}
reverse_proxy prometheus:9090
@@ -73,41 +88,64 @@
# Portainer
{$PORTAINER_HOSTNAME} {
import service_tls
reverse_proxy portainer:9000
}
# Postiz
{$POSTIZ_HOSTNAME} {
import service_tls
reverse_proxy postiz:5000
}
# Temporal UI (workflow orchestration for Postiz)
{$TEMPORAL_UI_HOSTNAME} {
import service_tls
basic_auth {
{$TEMPORAL_UI_USERNAME} {$TEMPORAL_UI_PASSWORD_HASH}
}
reverse_proxy temporal-ui:8080
}
# Uptime Kuma
{$UPTIME_KUMA_HOSTNAME} {
import service_tls
reverse_proxy uptime-kuma:3001
}
# Databasus
{$DATABASUS_HOSTNAME} {
import service_tls
reverse_proxy databasus:4005
}
# Letta
{$LETTA_HOSTNAME} {
import service_tls
reverse_proxy letta:8283
}
# LightRAG (Graph-based RAG with Knowledge Extraction)
{$LIGHTRAG_HOSTNAME} {
import service_tls
reverse_proxy lightrag:9621
}
# Weaviate
{$WEAVIATE_HOSTNAME} {
import service_tls
reverse_proxy weaviate:8080
}
# Qdrant
{$QDRANT_HOSTNAME} {
import service_tls
reverse_proxy qdrant:6333
}
# ComfyUI
{$COMFYUI_HOSTNAME} {
import service_tls
basic_auth {
{$COMFYUI_USERNAME} {$COMFYUI_PASSWORD_HASH}
}
@@ -116,6 +154,7 @@
# LibreTranslate (Self-hosted Translation API)
{$LT_HOSTNAME} {
import service_tls
basic_auth {
{$LT_USERNAME} {$LT_PASSWORD_HASH}
}
@@ -124,21 +163,25 @@
# Neo4j
{$NEO4J_HOSTNAME} {
import service_tls
reverse_proxy neo4j:7474
}
# Neo4j Bolt Protocol (wss)
https://{$NEO4J_HOSTNAME}:7687 {
import service_tls
reverse_proxy neo4j:7687
}
# NocoDB
{$NOCODB_HOSTNAME} {
import service_tls
reverse_proxy nocodb:8080
}
# PaddleOCR (PaddleX Basic Serving)
{$PADDLEOCR_HOSTNAME} {
import service_tls
basic_auth {
{$PADDLEOCR_USERNAME} {$PADDLEOCR_PASSWORD_HASH}
}
@@ -147,6 +190,7 @@ https://{$NEO4J_HOSTNAME}:7687 {
# Docling (Document Conversion API)
{$DOCLING_HOSTNAME} {
import service_tls
basic_auth {
{$DOCLING_USERNAME} {$DOCLING_PASSWORD_HASH}
}
@@ -154,7 +198,8 @@ https://{$NEO4J_HOSTNAME}:7687 {
}
# Welcome Page (Post-install dashboard)
# HTTP block for Cloudflare Tunnel access (prevents redirect loop)
http://{$WELCOME_HOSTNAME} {
basic_auth {
{$WELCOME_USERNAME} {$WELCOME_PASSWORD_HASH}
}
@@ -163,10 +208,23 @@ https://{$NEO4J_HOSTNAME}:7687 {
try_files {path} /index.html
}
# HTTPS block for direct access
{$WELCOME_HOSTNAME} {
import service_tls
basic_auth {
{$WELCOME_USERNAME} {$WELCOME_PASSWORD_HASH}
}
root * /srv/welcome
file_server
try_files {path} /index.html
}
# Import custom site addons
import /etc/caddy/addons/site-*.conf
# SearXNG
{$SEARXNG_HOSTNAME} {
import service_tls
@protected not remote_ip 127.0.0.0/8 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 100.64.0.0/10
basic_auth @protected {


@@ -1,4 +1,4 @@
.PHONY: help install update update-preview git-pull clean clean-all logs status monitor restart stop start show-restarts doctor switch-beta switch-stable import setup-tls
PROJECT_NAME := localai
@@ -17,6 +17,8 @@ help:
@echo " make status Show container status" @echo " make status Show container status"
@echo " make monitor Live CPU/memory monitoring" @echo " make monitor Live CPU/memory monitoring"
@echo " make restart Restart all services" @echo " make restart Restart all services"
@echo " make stop Stop all services"
@echo " make start Start all services"
@echo " make show-restarts Show restart count per container" @echo " make show-restarts Show restart count per container"
@echo " make doctor Run system diagnostics" @echo " make doctor Run system diagnostics"
@echo " make import Import n8n workflows from backup" @echo " make import Import n8n workflows from backup"
@@ -63,6 +65,12 @@ monitor:
restart:
bash ./scripts/restart.sh
stop:
docker compose -p $(PROJECT_NAME) stop
start:
docker compose -p $(PROJECT_NAME) start
show-restarts:
@docker ps -q | while read id; do \
name=$$(docker inspect --format '{{.Name}}' $$id | sed 's/^\/\(.*\)/\1/'); \


@@ -56,6 +56,8 @@ This setup provides a comprehensive suite of cutting-edge services, all pre-conf
The installer also makes the following powerful open-source tools **available for you to select and deploy** via an interactive wizard during setup:
✅ [**Appsmith**](https://www.appsmith.com/) - An open-source low-code platform for building internal tools, dashboards, and admin panels with a drag-and-drop UI builder.
✅ [**n8n**](https://n8n.io/) - A low-code platform with over 400 integrations and advanced AI components to automate workflows.
✅ [**ComfyUI**](https://github.com/comfyanonymous/ComfyUI) - A powerful, node-based UI for Stable Diffusion workflows. Build and run image-generation pipelines visually, with support for custom nodes and extensions.
@@ -110,6 +112,8 @@ The installer also makes the following powerful open-source tools **available fo
✅ [**Supabase**](https://supabase.com/) - An open-source alternative to Firebase, providing database storage, user authentication, and more. It's a popular choice for AI applications.
✅ [**Uptime Kuma**](https://github.com/louislam/uptime-kuma) - Self-hosted uptime monitoring tool with notifications
✅ [**WAHA**](https://waha.devlike.pro/) - WhatsApp HTTP API (REST API) that you can configure in a click! 3 engines: WEBJS (browser based), NOWEB (websocket nodejs), GOWS (websocket go).
✅ [**Weaviate**](https://weaviate.io/) - An open-source AI-native vector database with a focus on scalability and ease of use. It can be used for RAG, hybrid search, and more.
@@ -137,9 +141,10 @@ Get started quickly with a vast library of pre-built automations (optional impor
1. **Domain Name:** You need a registered domain name (e.g., `yourdomain.com`).
2. **DNS Configuration:** Before running the installation script, you **must** configure DNS A-record for your domain, pointing to the public IP address of the server where you'll install this system. Replace `yourdomain.com` with your actual domain:
   - **Wildcard Record:** `A *.yourdomain.com` -> `YOUR_SERVER_IP`
3. **VPS (Virtual Private Server):** A dedicated VPS with a public IP address is required. Home servers, shared hosting, or localhost setups are not supported.
   - **Operating System:** Ubuntu 24.04 LTS, 64-bit
   - For a minimal setup with **n8n, Monitoring, Databasus and Portainer**: **4 GB Memory / 2 CPU Cores / 40 GB Disk Space**
   - For running **all available services**: at least **20 GB Memory / 4 CPU Cores / 60 GB Disk Space**
### Running the Install
@@ -178,6 +183,7 @@ After successful installation, your services are up and running! Here's how to g
The installation script provided a summary report with all access URLs and credentials. Please refer to that report. The main services will be available at the following addresses (replace `yourdomain.com` with your actual domain):
- **n8n:** `n8n.yourdomain.com` (Log in with the email address you provided during installation and the initial password from the summary report. You may be prompted to change this password on first login.)
- **Appsmith:** `appsmith.yourdomain.com` (Low-code app builder)
- **ComfyUI:** `comfyui.yourdomain.com` (Node-based Stable Diffusion UI)
- **Databasus:** `databasus.yourdomain.com`
- **Dify:** `dify.yourdomain.com` (AI application development platform with comprehensive LLMOps capabilities)
@@ -200,6 +206,7 @@ After successful installation, your services are up and running! Here's how to g
- **RAGFlow:** `ragflow.yourdomain.com`
- **SearXNG:** `searxng.yourdomain.com`
- **Supabase (Dashboard):** `supabase.yourdomain.com`
- **Uptime Kuma:** `uptime-kuma.yourdomain.com` (Uptime monitoring dashboard)
- **WAHA:** `waha.yourdomain.com` (WhatsApp HTTP API; engines: WEBJS, NOWEB, GOWS)
- **Weaviate:** `weaviate.yourdomain.com`
@@ -318,6 +325,8 @@ The project includes a Makefile for simplified command execution:
| `make status` | Show container status |
| `make monitor` | Live CPU/memory monitoring |
| `make restart` | Restart all services |
| `make stop` | Stop all services |
| `make start` | Start all services |
| `make show-restarts` | Show restart count per container |
| `make import` | Import n8n workflows from backup |
| `make import n=10` | Import first N workflows only |
@@ -365,6 +374,18 @@ Here are solutions to common issues you might encounter:
- **VPN Conflicts:** Using a VPN might interfere with downloading Docker images. If you encounter issues pulling images, try temporarily disabling your VPN.
- **Server Requirements:** If you experience unexpected issues, ensure your server meets the minimum hardware and operating system requirements (including version) as specified in the "Prerequisites before Installation" section.
### Update Script Not Working
- **Symptom:** The `make update` command fails, shows errors, or doesn't apply the latest changes.
- **Cause:** This can happen if your local repository has diverged from the upstream, has uncommitted changes, or is in an inconsistent state.
- **Solution:** Run the following command to force-sync your local installation with the latest version:
```bash
git config pull.rebase true && git fetch origin && git checkout main && git reset --hard "origin/main" && make update
```
**Warning:** This will discard any local changes you've made to the installer files. If you've customized any scripts or configurations, back them up first.
## Recommended Reading
n8n offers excellent resources for getting started with its AI capabilities:


@@ -1 +1 @@
1.4.2


@@ -2,7 +2,7 @@
This directory allows you to extend or override Caddy configuration without modifying the main `Caddyfile`.
Files matching `site-*.conf` in this directory are automatically imported via `import /etc/caddy/addons/site-*.conf` in the main Caddyfile.
## Use Cases
@@ -15,6 +15,23 @@ All `.conf` files in this directory are automatically imported via `import /etc/
For corporate/internal deployments where Let's Encrypt is not available, you can use your own certificates.
### How It Works
The main `Caddyfile` imports a TLS snippet that all service blocks use:
```caddy
# In Caddyfile (top)
import /etc/caddy/addons/tls-snippet.conf
# In each service block
{$N8N_HOSTNAME} {
import service_tls # <-- Uses the snippet
reverse_proxy n8n:5678
}
```
By default, the snippet is empty (Let's Encrypt is used). When you run `make setup-tls`, the snippet is updated with your certificate paths.
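For illustration, a populated snippet could look like this; the paths reuse the example wildcard certificate names from the File Structure section below, but the actual content is written by `scripts/setup_custom_tls.sh` and may differ:

```caddy
# caddy-addon/tls-snippet.conf after 'make setup-tls' (illustrative)
(service_tls) {
    tls /etc/caddy/certs/wildcard.crt /etc/caddy/certs/wildcard.key
}
```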
### Quick Setup
1. Place your certificates in the `certs/` directory:
@@ -28,42 +45,22 @@ For corporate/internal deployments where Let's Encrypt is not available, you can
make setup-tls
```
3. The script will:
   - Update `caddy-addon/tls-snippet.conf` with your certificate paths
   - Optionally restart Caddy to apply changes

### Reset to Let's Encrypt

To switch back to automatic Let's Encrypt certificates:

```bash
make setup-tls --remove
```

Or run directly:

```bash
bash scripts/setup_custom_tls.sh --remove
```
## File Structure
@@ -71,8 +68,9 @@ Make sure your `.env` file has `N8N_HOSTNAME=n8n.internal.company.com`.
caddy-addon/
├── .gitkeep # Keeps directory in git
├── README.md # This file
├── tls-snippet.conf.example # Template for TLS snippet (tracked in git)
├── tls-snippet.conf # Your TLS config (gitignored, auto-created)
└── site-*.conf # Your custom addons (gitignored, must start with "site-")
certs/
├── .gitkeep # Keeps directory in git
@@ -80,11 +78,26 @@ certs/
└── wildcard.key # Your private key (gitignored)
```
## Adding Custom Addons
You can create `site-*.conf` files for custom Caddy configurations. They will be automatically loaded by the main Caddyfile.
**Important:** Custom addon files MUST start with `site-` prefix to be loaded (e.g., `site-custom.conf`, `site-myapp.conf`).
Example: `caddy-addon/site-custom-headers.conf`
```caddy
# Reusable snippet that adds a custom response header
(custom_headers) {
header X-Custom-Header "My Value"
}
# Import the snippet in a site block so it takes effect
# (hostname and upstream below are examples)
myapp.yourdomain.com {
import custom_headers
reverse_proxy myapp:8080
}
```
## Important Notes
- `tls-snippet.conf.example` is tracked in git (template with default Let's Encrypt behavior)
- `tls-snippet.conf` is gitignored and auto-created from template (preserved during updates)
- `site-*.conf` files are gitignored (preserved during updates)
- Files in `certs/` are gitignored (certificates are not committed) - Files in `certs/` are gitignored (certificates are not committed)
- Caddy validates configuration on startup - check logs if it fails:
```bash
docker compose -p localai logs caddy


@@ -1,114 +0,0 @@
# Custom TLS Configuration for Corporate/Internal Certificates
#
# This file provides examples for using your own TLS certificates instead of Let's Encrypt.
# Copy this file to custom-tls.conf and modify as needed.
#
# Prerequisites:
# 1. Place your certificate files in the ./certs/ directory
# 2. Update .env hostnames to match your internal domain
# 3. Restart Caddy: docker compose -p localai restart caddy
# =============================================================================
# Option 1: Reusable TLS snippet (recommended for wildcard certificates)
# =============================================================================
# Define once, import in each service block
(custom_tls) {
tls /etc/caddy/certs/wildcard.crt /etc/caddy/certs/wildcard.key
}
# Then for each service you want to override:
#
# n8n.internal.company.com {
# import custom_tls
# reverse_proxy n8n:5678
# }
#
# flowise.internal.company.com {
# import custom_tls
# reverse_proxy flowise:3001
# }
# =============================================================================
# Option 2: Individual service configuration
# =============================================================================
# Use when you have different certificates for different services
# n8n.internal.company.com {
# tls /etc/caddy/certs/n8n.crt /etc/caddy/certs/n8n.key
# reverse_proxy n8n:5678
# }
# =============================================================================
# Option 3: Internal CA with auto-reload
# =============================================================================
# Caddy can auto-reload certificates when they change
# n8n.internal.company.com {
# tls /etc/caddy/certs/cert.pem /etc/caddy/certs/key.pem {
# # Optional: specify CA certificate for client verification
# # client_auth {
# # mode require_and_verify
# # trusted_ca_cert_file /etc/caddy/certs/ca.pem
# # }
# }
# reverse_proxy n8n:5678
# }
# =============================================================================
# Full Example: All common services with wildcard certificate
# =============================================================================
# Uncomment and modify the hostnames to match your .env configuration
# # N8N
# n8n.internal.company.com {
# import custom_tls
# reverse_proxy n8n:5678
# }
# # Flowise
# flowise.internal.company.com {
# import custom_tls
# reverse_proxy flowise:3001
# }
# # Open WebUI
# webui.internal.company.com {
# import custom_tls
# reverse_proxy open-webui:8080
# }
# # Grafana
# grafana.internal.company.com {
# import custom_tls
# reverse_proxy grafana:3000
# }
# # Portainer
# portainer.internal.company.com {
# import custom_tls
# reverse_proxy portainer:9000
# }
# # Langfuse
# langfuse.internal.company.com {
# import custom_tls
# reverse_proxy langfuse-web:3000
# }
# # Supabase
# supabase.internal.company.com {
# import custom_tls
# reverse_proxy kong:8000
# }
# # Welcome Page (with basic auth preserved)
# welcome.internal.company.com {
# import custom_tls
# basic_auth {
# {$WELCOME_USERNAME} {$WELCOME_PASSWORD_HASH}
# }
# root * /srv/welcome
# file_server
# try_files {path} /index.html
# }

View File

@@ -0,0 +1,10 @@
# TLS Configuration Snippet
# Imported by all service blocks in the main Caddyfile.
#
# Default: Empty (uses Let's Encrypt automatic certificates)
# Custom: Overwritten by 'make setup-tls' with your certificate paths
# Reset: Run 'make setup-tls --remove' to restore Let's Encrypt
(service_tls) {
# Default: Let's Encrypt automatic certificates (empty = no override)
}
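
For comparison, a populated override as written by `make setup-tls` might look like the following sketch (the certificate paths are illustrative assumptions, not the target's actual output):

```caddy
(service_tls) {
	tls /etc/caddy/certs/wildcard.crt /etc/caddy/certs/wildcard.key
}
```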

View File

@@ -22,8 +22,8 @@ Cloudflare Tunnel **bypasses Caddy** and connects directly to your services. Thi
1. Go to [Cloudflare One Dashboard](https://one.dash.cloudflare.com/)
2. Navigate to **Networks** → **Connectors** → **Cloudflare Tunnels**
3. Click **Create a tunnel**
4. Select **Cloudflared** as the connector type and click **Next**
5. Name your tunnel (e.g., "n8n-install") and click **Save tunnel**
6. Copy the installation command shown - it contains your tunnel token
@@ -106,7 +106,7 @@ dig NS yourdomain.com +short
#### 3. Configure Public Hostnames
After DNS is configured, go to **Cloudflare One Dashboard** → **Networks** → **Connectors** → **Cloudflare Tunnels** → your tunnel → **Public Hostname** tab. For each service you want to expose, click **Add a public hostname** and configure:
| Service | Public Hostname | Service URL | Auth Notes |
| ------------------ | ----------------------------- | ---------------------------- | ------------------- |
@@ -122,6 +122,7 @@ After DNS is configured, go to **Cloudflare Zero Trust** → **Networks** → **
| **LibreTranslate** | libretranslate.yourdomain.com | `http://libretranslate:5000` | ⚠️ Loses Caddy auth |
| **LightRAG** | lightrag.yourdomain.com | `http://lightrag:9621` | No auth |
| **Neo4j** | neo4j.yourdomain.com | `http://neo4j:7474` | Built-in login |
| **NocoDB** | nocodb.yourdomain.com | `http://nocodb:8080` | Built-in login |
| **Open WebUI** | webui.yourdomain.com | `http://open-webui:8080` | Built-in login |
| **PaddleOCR** | paddleocr.yourdomain.com | `http://paddleocr:8080` | ⚠️ Loses Caddy auth |
| **Portainer** | portainer.yourdomain.com | `http://portainer:9000` | Built-in login |
@@ -134,6 +135,11 @@ After DNS is configured, go to **Cloudflare Zero Trust** → **Networks** → **
| **Supabase** ¹ | supabase.yourdomain.com | `http://kong:8000` | Built-in login |
| **WAHA** | waha.yourdomain.com | `http://waha:3000` | API key recommended |
| **Weaviate** | weaviate.yourdomain.com | `http://weaviate:8080` | API key recommended |
| **Welcome Page** ² | welcome.yourdomain.com | `http://caddy:80` | ⚠️ Loses Caddy auth |
**Notes:**
- ¹ Dify and Supabase use external compose files from adjacent directories
- ² Welcome Page is served by Caddy as static content; tunnel proxies through Caddy
**⚠️ Security Warning:**
- Services marked **"Loses Caddy auth"** have basic authentication via Caddy that is bypassed by the tunnel. Use [Cloudflare Access](https://developers.cloudflare.com/cloudflare-one/applications/) or keep them internal.
@@ -181,7 +187,7 @@ You have two options for accessing your services:
For services that lose Caddy's basic auth protection, you can add Cloudflare Access:
1. In **Cloudflare One Dashboard** → **Access** → **Applications** (or **Access controls** → **Applications** depending on your dashboard version)
2. Click **Add an application** → **Self-hosted**
3. Configure:
- **Application name**: e.g., "Prometheus"

View File

@@ -1,4 +1,5 @@
volumes:
appsmith_data:
caddy-config:
caddy-data:
comfyui_data:
@@ -33,9 +34,18 @@ volumes:
ragflow_minio_data:
ragflow_mysql_data:
ragflow_redis_data:
temporal_elasticsearch_data:
valkey-data:
uptime_kuma_data:
weaviate_data: weaviate_data:
# Shared logging configuration for services
x-logging: &default-logging
driver: "json-file"
options:
max-size: "1m"
max-file: "1"
# Shared proxy configuration for services that need outbound proxy support
x-proxy-env: &proxy-env
HTTP_PROXY: ${GOST_PROXY_URL:-}
@@ -73,7 +83,7 @@ x-n8n: &service-n8n
N8N_LOG_LEVEL: ${N8N_LOG_LEVEL:-info}
N8N_LOG_OUTPUT: ${N8N_LOG_OUTPUT:-console}
N8N_METRICS: true
N8N_PAYLOAD_SIZE_MAX: ${N8N_PAYLOAD_SIZE_MAX:-256}
N8N_PERSONALIZATION_ENABLED: false
N8N_RESTRICT_FILE_ACCESS_TO: /data/shared
N8N_RUNNERS_AUTH_TOKEN: ${N8N_RUNNERS_AUTH_TOKEN}
@@ -136,6 +146,26 @@ x-n8n-worker-runner: &service-n8n-worker-runner
N8N_RUNNERS_TASK_BROKER_URI: http://127.0.0.1:5679
services:
appsmith:
image: appsmith/appsmith-ce:release
container_name: appsmith
profiles: ["appsmith"]
restart: unless-stopped
logging: *default-logging
environment:
<<: *proxy-env
APPSMITH_ENCRYPTION_PASSWORD: ${APPSMITH_ENCRYPTION_PASSWORD}
APPSMITH_ENCRYPTION_SALT: ${APPSMITH_ENCRYPTION_SALT}
APPSMITH_DISABLE_TELEMETRY: "true"
volumes:
- appsmith_data:/appsmith-stacks
healthcheck:
test: ["CMD-SHELL", "http_proxy= https_proxy= HTTP_PROXY= HTTPS_PROXY= wget -qO- http://localhost/api/v1/health || exit 1"]
interval: 30s
timeout: 10s
retries: 5
start_period: 120s
flowise:
image: flowiseai/flowise
restart: unless-stopped
@@ -274,11 +304,7 @@ services:
container_name: nocodb
profiles: ["nocodb"]
restart: unless-stopped
logging: *default-logging
driver: "json-file"
options:
max-size: "1m"
max-file: "1"
environment:
NC_AUTH_JWT_SECRET: ${NOCODB_JWT_SECRET}
NC_DB: pg://postgres:5432?u=postgres&p=${POSTGRES_PASSWORD}&d=nocodb
@@ -314,6 +340,7 @@ services:
- caddy-data:/data:rw
- caddy-config:/config:rw
environment:
APPSMITH_HOSTNAME: ${APPSMITH_HOSTNAME}
COMFYUI_HOSTNAME: ${COMFYUI_HOSTNAME}
COMFYUI_PASSWORD_HASH: ${COMFYUI_PASSWORD_HASH}
COMFYUI_USERNAME: ${COMFYUI_USERNAME}
@@ -339,6 +366,9 @@ services:
PORTAINER_HOSTNAME: ${PORTAINER_HOSTNAME}
DATABASUS_HOSTNAME: ${DATABASUS_HOSTNAME}
POSTIZ_HOSTNAME: ${POSTIZ_HOSTNAME}
TEMPORAL_UI_HOSTNAME: ${TEMPORAL_UI_HOSTNAME}
TEMPORAL_UI_USERNAME: ${TEMPORAL_UI_USERNAME}
TEMPORAL_UI_PASSWORD_HASH: ${TEMPORAL_UI_PASSWORD_HASH}
PROMETHEUS_HOSTNAME: ${PROMETHEUS_HOSTNAME}
PROMETHEUS_PASSWORD_HASH: ${PROMETHEUS_PASSWORD_HASH}
PROMETHEUS_USERNAME: ${PROMETHEUS_USERNAME}
@@ -351,6 +381,7 @@ services:
SEARXNG_PASSWORD_HASH: ${SEARXNG_PASSWORD_HASH}
SEARXNG_USERNAME: ${SEARXNG_USERNAME}
SUPABASE_HOSTNAME: ${SUPABASE_HOSTNAME}
UPTIME_KUMA_HOSTNAME: ${UPTIME_KUMA_HOSTNAME}
WAHA_HOSTNAME: ${WAHA_HOSTNAME}
WEAVIATE_HOSTNAME: ${WEAVIATE_HOSTNAME}
WEBUI_HOSTNAME: ${WEBUI_HOSTNAME}
@@ -361,11 +392,7 @@ services:
- ALL
cap_add:
- NET_BIND_SERVICE
logging: *default-logging
driver: "json-file"
options:
max-size: "1m"
max-file: "1"
cloudflared:
image: cloudflare/cloudflared:latest
@@ -375,11 +402,7 @@ services:
command: tunnel --no-autoupdate run
environment:
TUNNEL_TOKEN: ${CLOUDFLARE_TUNNEL_TOKEN}
logging: *default-logging
driver: "json-file"
options:
max-size: "1m"
max-file: "1"
gost:
image: gogost/gost:latest
@@ -397,11 +420,7 @@ services:
timeout: 10s
retries: 3
start_period: 10s
logging: *default-logging
driver: "json-file"
options:
max-size: "1m"
max-file: "1"
langfuse-worker:
image: langfuse/langfuse-worker:3
@@ -525,7 +544,7 @@ services:
postgres:
container_name: postgres
image: pgvector/pgvector:pg${POSTGRES_VERSION:-17}
restart: unless-stopped
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
@@ -553,11 +572,7 @@ services:
- SETGID
- SETUID
- DAC_OVERRIDE
logging: *default-logging
driver: "json-file"
options:
max-size: "1m"
max-file: "1"
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 3s
@@ -580,11 +595,7 @@ services:
- CHOWN
- SETGID
- SETUID
logging: *default-logging
driver: "json-file"
options:
max-size: "1m"
max-file: "1"
ollama-cpu:
profiles: ["cpu"]
@@ -778,6 +789,70 @@ services:
- portainer_data:/data
- ${DOCKER_SOCKET_LOCATION:-/var/run/docker.sock}:/var/run/docker.sock
temporal-elasticsearch:
image: elasticsearch:7.17.27
container_name: temporal-elasticsearch
profiles: ["postiz"]
restart: unless-stopped
logging: *default-logging
environment:
cluster.routing.allocation.disk.threshold_enabled: "true"
cluster.routing.allocation.disk.watermark.low: 512mb
cluster.routing.allocation.disk.watermark.high: 256mb
cluster.routing.allocation.disk.watermark.flood_stage: 128mb
discovery.type: single-node
ES_JAVA_OPTS: -Xms512m -Xmx512m
xpack.security.enabled: "false"
volumes:
- temporal_elasticsearch_data:/usr/share/elasticsearch/data
healthcheck:
test: ["CMD-SHELL", "curl -s http://localhost:9200/_cluster/health | grep -qE '\"status\":\"(green|yellow)\"'"]
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
temporal:
image: temporalio/auto-setup:latest
container_name: temporal
profiles: ["postiz"]
restart: unless-stopped
logging: *default-logging
environment:
DB: postgres12
POSTGRES_USER: postgres
POSTGRES_PWD: ${POSTGRES_PASSWORD}
POSTGRES_SEEDS: postgres
DB_PORT: 5432
TEMPORAL_NAMESPACE: default
ENABLE_ES: "true"
ES_SEEDS: temporal-elasticsearch
ES_VERSION: v7
depends_on:
postgres:
condition: service_healthy
temporal-elasticsearch:
condition: service_healthy
healthcheck:
test: ["CMD-SHELL", "temporal operator cluster health --address $(hostname -i):7233 | grep -q SERVING || exit 1"]
interval: 30s
timeout: 10s
retries: 5
start_period: 60s
temporal-ui:
image: temporalio/ui:latest
container_name: temporal-ui
profiles: ["postiz"]
restart: unless-stopped
logging: *default-logging
environment:
TEMPORAL_ADDRESS: temporal:7233
TEMPORAL_CORS_ORIGINS: http://localhost:3000
depends_on:
temporal:
condition: service_healthy
postiz:
image: ghcr.io/gitroomhq/postiz-app:latest
container_name: postiz
@@ -785,7 +860,7 @@ services:
restart: always
environment:
<<: *proxy-env
BACKEND_INTERNAL_URL: http://localhost:3000
DATABASE_URL: "postgresql://postgres:${POSTGRES_PASSWORD}@postgres:5432/${POSTIZ_DB_NAME:-postiz}?schema=postiz"
DISABLE_REGISTRATION: ${POSTIZ_DISABLE_REGISTRATION}
FRONTEND_URL: ${POSTIZ_HOSTNAME:+https://}${POSTIZ_HOSTNAME}
@@ -796,6 +871,7 @@ services:
NEXT_PUBLIC_UPLOAD_DIRECTORY: "/uploads"
REDIS_URL: "redis://redis:6379"
STORAGE_PROVIDER: "local"
TEMPORAL_ADDRESS: temporal:7233
UPLOAD_DIRECTORY: "/uploads"
# Social Media API Settings
X_API_KEY: ${X_API_KEY}
@@ -832,22 +908,21 @@ services:
volumes:
- postiz-config:/config/
- postiz-uploads:/uploads/
- ./postiz.env:/app/.env:ro
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
temporal:
condition: service_healthy
databasus:
image: databasus/databasus:latest
container_name: databasus
profiles: ["databasus"]
restart: unless-stopped
logging: *default-logging
driver: "json-file"
options:
max-size: "1m"
max-file: "1"
volumes:
- databasus_data:/databasus-data
healthcheck:
@@ -858,7 +933,7 @@ services:
start_period: 60s
comfyui:
image: yanwk/comfyui-boot:cu128-slim
container_name: comfyui
profiles: ["comfyui"]
restart: unless-stopped
@@ -980,10 +1055,10 @@ services:
REDIS_HOST: ragflow-redis
REDIS_PASSWORD: ${RAGFLOW_REDIS_PASSWORD}
REDIS_PORT: 6379
SVR_HTTP_PORT: 9380
volumes:
- ragflow_data:/ragflow
- ./ragflow/nginx.conf:/etc/nginx/conf.d/default.conf:ro
depends_on:
ragflow-elasticsearch:
condition: service_healthy
@@ -1044,11 +1119,7 @@ services:
- SETGID
- SETUID
- DAC_OVERRIDE
logging: *default-logging
driver: "json-file"
options:
max-size: "1m"
max-file: "1"
healthcheck:
test: ["CMD", "valkey-cli", "-a", "${RAGFLOW_REDIS_PASSWORD}", "ping"]
interval: 3s
@@ -1207,3 +1278,21 @@ services:
timeout: 10s
retries: 5
start_period: 30s
uptime-kuma:
image: louislam/uptime-kuma:2
container_name: uptime-kuma
profiles: ["uptime-kuma"]
restart: unless-stopped
logging: *default-logging
environment:
<<: *proxy-env
UPTIME_KUMA_WS_ORIGIN_CHECK: bypass
volumes:
- uptime_kuma_data:/app/data
healthcheck:
test: ["CMD-SHELL", "node -e \"const http=require('http');const r=http.get('http://localhost:3001',res=>{process.exit(res.statusCode<400?0:1)});r.on('error',()=>process.exit(1));r.setTimeout(5000,()=>{r.destroy();process.exit(1)})\""]
interval: 30s
timeout: 10s
retries: 5
start_period: 30s
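
Several of the lines above lean on shell-style parameter defaults (e.g. `${N8N_PAYLOAD_SIZE_MAX:-256}`), a pattern Docker Compose interpolation also honors. A quick sketch of the semantics:

```shell
# ${VAR:-default} substitutes when VAR is unset OR empty;
# ${VAR-default} substitutes only when VAR is unset.
unset PAYLOAD
echo "${PAYLOAD:-256}"   # unset -> 256
PAYLOAD=""
echo "${PAYLOAD:-256}"   # empty -> 256
PAYLOAD=512
echo "${PAYLOAD:-256}"   # set   -> 512
```

So an empty `N8N_PAYLOAD_SIZE_MAX=` line in `.env` still falls back to 256 rather than passing an empty value through to n8n.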

View File

@@ -1,9 +1,11 @@
# Stage 1: Get static ffmpeg binaries (statically linked, works on Alpine/musl)
FROM mwader/static-ffmpeg:latest AS ffmpeg
# Stage 2: Build final n8n image with ffmpeg
FROM n8nio/n8n:stable
USER root
# Copy static ffmpeg binaries from the ffmpeg stage
COPY --from=ffmpeg /ffmpeg /usr/local/bin/ffmpeg
COPY --from=ffmpeg /ffprobe /usr/local/bin/ffprobe
USER node

View File

@@ -55,6 +55,7 @@ EMAIL_VARS=(
"PROMETHEUS_USERNAME"
"RAGAPP_USERNAME"
"SEARXNG_USERNAME"
"TEMPORAL_UI_USERNAME"
"WAHA_DASHBOARD_USERNAME"
"WEAVIATE_USERNAME"
"WELCOME_USERNAME"
@@ -73,6 +74,8 @@ USER_INPUT_VARS=(
# Variables to generate: varName="type:length"
# Types: password (alphanum), secret (base64), hex, base64, alphanum
declare -A VARS_TO_GENERATE=(
["APPSMITH_ENCRYPTION_PASSWORD"]="password:32"
["APPSMITH_ENCRYPTION_SALT"]="password:32"
["CLICKHOUSE_PASSWORD"]="password:32"
["COMFYUI_PASSWORD"]="password:32" # Added ComfyUI basic auth password
["DASHBOARD_PASSWORD"]="password:32" # Supabase Dashboard
@@ -112,8 +115,11 @@ declare -A VARS_TO_GENERATE=(
["RAGFLOW_MINIO_ROOT_PASSWORD"]="password:32"
["RAGFLOW_MYSQL_ROOT_PASSWORD"]="password:32"
["RAGFLOW_REDIS_PASSWORD"]="password:32"
["S3_PROTOCOL_ACCESS_KEY_ID"]="hex:32"
["S3_PROTOCOL_ACCESS_KEY_SECRET"]="hex:64"
["SEARXNG_PASSWORD"]="password:32" # Added SearXNG admin password
["SECRET_KEY_BASE"]="base64:64" # 48 bytes -> 64 chars
["TEMPORAL_UI_PASSWORD"]="password:32" # Temporal UI basic auth password
["VAULT_ENC_KEY"]="alphanum:32"
["WAHA_DASHBOARD_PASSWORD"]="password:32"
["WEAVIATE_API_KEY"]="secret:48" # API Key for Weaviate service (36 bytes -> 48 chars base64)
@@ -564,7 +570,7 @@ if [[ -n "$template_no_proxy" ]]; then
fi
# Hash passwords using caddy with bcrypt (consolidated loop)
SERVICES_NEEDING_HASH=("PROMETHEUS" "SEARXNG" "COMFYUI" "PADDLEOCR" "RAGAPP" "LT" "DOCLING" "TEMPORAL_UI" "WELCOME")
for service in "${SERVICES_NEEDING_HASH[@]}"; do
password_var="${service}_PASSWORD"
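
The `VARS_TO_GENERATE` map above pairs each variable with a `type:length` spec. A minimal sketch of how such a spec could be expanded into a random value (a hypothetical helper, not the repository's actual generator; only two of the five types are shown):

```shell
# gen_secret TYPE:LENGTH -> prints a random value of LENGTH chars (sketch)
gen_secret() {
  local spec="$1" type len chars
  type="${spec%%:*}"
  len="${spec##*:}"
  case "$type" in
    hex)      chars='a-f0-9' ;;       # lowercase hex digits
    alphanum) chars='A-Za-z0-9' ;;    # letters and digits
    *) echo "unsupported type: $type" >&2; return 1 ;;
  esac
  # Filter /dev/urandom down to the allowed alphabet, then truncate
  LC_ALL=C tr -dc "$chars" < /dev/urandom | head -c "$len"
}

gen_secret hex:32; echo
```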

View File

@@ -38,6 +38,7 @@ current_profiles_for_matching=",$CURRENT_PROFILES_VALUE,"
# --- Define available services and their descriptions ---
# Base service definitions (tag, description)
base_services_data=(
"appsmith" "Appsmith (Low-code Platform for Internal Tools & Dashboards)"
"cloudflare-tunnel" "Cloudflare Tunnel (Zero-Trust Secure Access)"
"comfyui" "ComfyUI (Node-based Stable Diffusion UI)"
"crawl4ai" "Crawl4ai (Web Crawler for AI)"
@@ -66,6 +67,7 @@ base_services_data=(
"ragflow" "RAGFlow (Deep document understanding RAG engine)"
"searxng" "SearXNG (Private Metasearch Engine)"
"supabase" "Supabase (Backend as a Service)"
"uptime-kuma" "Uptime Kuma (Uptime Monitoring)"
"waha" "WAHA WhatsApp HTTP API (NOWEB engine)"
"weaviate" "Weaviate (Vector Database with API Key Auth)"
)
@@ -215,7 +217,7 @@ if [ $gost_selected -eq 1 ]; then
EXISTING_UPSTREAM=$(read_env_var "GOST_UPSTREAM_PROXY")
GOST_UPSTREAM_INPUT=$(wt_input "Gost Upstream Proxy" \
"Enter your external proxy URL for geo-bypass.\n\nExamples:\n socks5://user:pass@proxy.com:1080\n http://user:pass@proxy.com:8080\n\nIMPORTANT: For HTTP proxies use http://, NOT https://.\nThe protocol refers to proxy type, not connection security.\n\nThis proxy should be located outside restricted regions." \
"$EXISTING_UPSTREAM") || true
if [ -n "$GOST_UPSTREAM_INPUT" ]; then

View File

@@ -27,6 +27,10 @@ init_paths
# Ensure .env exists
ensure_file_exists "$ENV_FILE"
# Load COMPOSE_PROFILES early so is_profile_active works for all sections
COMPOSE_PROFILES_VALUE="$(read_env_var COMPOSE_PROFILES)"
COMPOSE_PROFILES="$COMPOSE_PROFILES_VALUE"
# ----------------------------------------------------------------
# Prompt for OpenAI API key (optional) using .env value as source of truth
# ----------------------------------------------------------------
@@ -48,87 +52,89 @@ fi
# ----------------------------------------------------------------
# Logic for n8n workflow import (RUN_N8N_IMPORT)
# ----------------------------------------------------------------
if is_profile_active "n8n"; then
  log_subheader "n8n Workflow Import"
  final_run_n8n_import_decision="false"
  require_whiptail
  if wt_yesno "Import n8n Workflows" "Import ~300 ready-made n8n workflows now? This can take ~30 minutes." "no"; then
    final_run_n8n_import_decision="true"
  else
    final_run_n8n_import_decision="false"
  fi
  # Persist RUN_N8N_IMPORT to .env
  write_env_var "RUN_N8N_IMPORT" "$final_run_n8n_import_decision"
else
  write_env_var "RUN_N8N_IMPORT" "false"
fi
# ----------------------------------------------------------------
# Prompt for number of n8n workers
# ----------------------------------------------------------------
log_subheader "n8n Worker Configuration" if is_profile_active "n8n"; then
EXISTING_N8N_WORKER_COUNT="$(read_env_var N8N_WORKER_COUNT)" log_subheader "n8n Worker Configuration"
require_whiptail EXISTING_N8N_WORKER_COUNT="$(read_env_var N8N_WORKER_COUNT)"
if [[ -n "$EXISTING_N8N_WORKER_COUNT" ]]; then require_whiptail
N8N_WORKER_COUNT_CURRENT="$EXISTING_N8N_WORKER_COUNT" if [[ -n "$EXISTING_N8N_WORKER_COUNT" ]]; then
N8N_WORKER_COUNT_INPUT_RAW=$(wt_input "n8n Workers (instances)" "Enter new number of n8n workers, or leave as current ($N8N_WORKER_COUNT_CURRENT)." "") || true N8N_WORKER_COUNT_CURRENT="$EXISTING_N8N_WORKER_COUNT"
if [[ -z "$N8N_WORKER_COUNT_INPUT_RAW" ]]; then N8N_WORKER_COUNT_INPUT_RAW=$(wt_input "n8n Workers (instances)" "Enter new number of n8n workers, or leave as current ($N8N_WORKER_COUNT_CURRENT)." "") || true
N8N_WORKER_COUNT="$N8N_WORKER_COUNT_CURRENT" if [[ -z "$N8N_WORKER_COUNT_INPUT_RAW" ]]; then
else N8N_WORKER_COUNT="$N8N_WORKER_COUNT_CURRENT"
if [[ "$N8N_WORKER_COUNT_INPUT_RAW" =~ ^0*[1-9][0-9]*$ ]]; then else
N8N_WORKER_COUNT_TEMP="$((10#$N8N_WORKER_COUNT_INPUT_RAW))" if [[ "$N8N_WORKER_COUNT_INPUT_RAW" =~ ^0*[1-9][0-9]*$ ]]; then
if [[ "$N8N_WORKER_COUNT_TEMP" -ge 1 ]]; then N8N_WORKER_COUNT_TEMP="$((10#$N8N_WORKER_COUNT_INPUT_RAW))"
if wt_yesno "Confirm Workers" "Update n8n workers to $N8N_WORKER_COUNT_TEMP?" "yes"; then if [[ "$N8N_WORKER_COUNT_TEMP" -ge 1 ]]; then
N8N_WORKER_COUNT="$N8N_WORKER_COUNT_TEMP" if wt_yesno "Confirm Workers" "Update n8n workers to $N8N_WORKER_COUNT_TEMP?" "yes"; then
N8N_WORKER_COUNT="$N8N_WORKER_COUNT_TEMP"
else
N8N_WORKER_COUNT="$N8N_WORKER_COUNT_CURRENT"
log_info "Change declined. Keeping N8N_WORKER_COUNT at $N8N_WORKER_COUNT."
fi
else else
log_warning "Invalid input '$N8N_WORKER_COUNT_INPUT_RAW'. Number must be positive. Keeping $N8N_WORKER_COUNT_CURRENT."
N8N_WORKER_COUNT="$N8N_WORKER_COUNT_CURRENT" N8N_WORKER_COUNT="$N8N_WORKER_COUNT_CURRENT"
log_info "Change declined. Keeping N8N_WORKER_COUNT at $N8N_WORKER_COUNT."
fi fi
else else
log_warning "Invalid input '$N8N_WORKER_COUNT_INPUT_RAW'. Number must be positive. Keeping $N8N_WORKER_COUNT_CURRENT." log_warning "Invalid input '$N8N_WORKER_COUNT_INPUT_RAW'. Please enter a positive integer. Keeping $N8N_WORKER_COUNT_CURRENT."
                N8N_WORKER_COUNT="$N8N_WORKER_COUNT_CURRENT"
            fi
        else
            log_warning "Invalid input '$N8N_WORKER_COUNT_INPUT_RAW'. Please enter a positive integer. Keeping $N8N_WORKER_COUNT_CURRENT."
            N8N_WORKER_COUNT="$N8N_WORKER_COUNT_CURRENT"
        fi
    else
        while true; do
            N8N_WORKER_COUNT_INPUT_RAW=$(wt_input "n8n Workers" "Enter number of n8n workers to run (default 1)." "1") || true
            N8N_WORKER_COUNT_CANDIDATE="${N8N_WORKER_COUNT_INPUT_RAW:-1}"
            if [[ "$N8N_WORKER_COUNT_CANDIDATE" =~ ^0*[1-9][0-9]*$ ]]; then
                N8N_WORKER_COUNT_VALIDATED="$((10#$N8N_WORKER_COUNT_CANDIDATE))"
                if [[ "$N8N_WORKER_COUNT_VALIDATED" -ge 1 ]]; then
                    if wt_yesno "Confirm Workers" "Run $N8N_WORKER_COUNT_VALIDATED n8n worker(s)?" "yes"; then
                        N8N_WORKER_COUNT="$N8N_WORKER_COUNT_VALIDATED"
                        break
                    fi
                else
                    log_error "Number of workers must be a positive integer."
                fi
            else
                log_error "Invalid input '$N8N_WORKER_COUNT_CANDIDATE'. Please enter a positive integer (e.g., 1, 2)."
            fi
        done
    fi

    # Ensure N8N_WORKER_COUNT is definitely set (should be by logic above)
    N8N_WORKER_COUNT="${N8N_WORKER_COUNT:-1}"

    # Persist N8N_WORKER_COUNT to .env
    write_env_var "N8N_WORKER_COUNT" "$N8N_WORKER_COUNT"

    # Generate worker-runner pairs configuration
    # Each worker gets its own dedicated task runner sidecar
    log_info "Generating n8n worker-runner pairs configuration..."
    bash "$SCRIPT_DIR/generate_n8n_workers.sh"
fi
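A subtlety worth noting in the validation above: bash arithmetic treats a literal leading zero as octal, so `$((08))` is an error ("value too great for base"). Forcing base ten with `10#` is what lets the regex safely accept inputs like `08`. A standalone sketch of the same pattern (the helper name is made up, not from the repo):

```shell
#!/usr/bin/env bash
# Hypothetical helper mirroring the installer's validation: accept a
# positive integer (leading zeros allowed) and normalize it to base 10.
normalize_worker_count() {
    local candidate="$1"
    # Same regex as the installer: optional leading zeros, value >= 1
    [[ "$candidate" =~ ^0*[1-9][0-9]*$ ]] || return 1
    # 10# forces decimal; plain $(( )) would parse "08" as invalid octal
    echo "$((10#$candidate))"
}

normalize_worker_count 08    # prints 8
normalize_worker_count 3     # prints 3
```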
# ----------------------------------------------------------------
# Cloudflare Tunnel Token (if cloudflare-tunnel profile is active)
# ----------------------------------------------------------------
COMPOSE_PROFILES_VALUE="$(read_env_var COMPOSE_PROFILES)"
# Set COMPOSE_PROFILES for is_profile_active to work
COMPOSE_PROFILES="$COMPOSE_PROFILES_VALUE"

if is_profile_active "cloudflare-tunnel"; then
    log_subheader "Cloudflare Tunnel"
    existing_cf_token="$(read_env_var CLOUDFLARE_TUNNEL_TOKEN)"

View File

@@ -32,6 +32,23 @@ require_file "$PROJECT_ROOT/docker-compose.yml" "docker-compose.yml file not fou
require_file "$PROJECT_ROOT/Caddyfile" "Caddyfile not found in project root. Reverse proxy might not work."
require_file "$PROJECT_ROOT/start_services.py" "start_services.py file not found in project root."
# Remove legacy custom-tls.conf that causes duplicate host errors
# This is needed for users upgrading from older versions
# TODO: Remove this cleanup block after v3.0 release (all users migrated)
OLD_TLS_CONFIG="$PROJECT_ROOT/caddy-addon/custom-tls.conf"
if [[ -f "$OLD_TLS_CONFIG" ]]; then
    log_warning "Removing obsolete custom-tls.conf (causes duplicate host errors)"
    rm -f "$OLD_TLS_CONFIG"
fi

# Ensure TLS snippet exists (auto-create from template if missing)
TLS_SNIPPET="$PROJECT_ROOT/caddy-addon/tls-snippet.conf"
TLS_TEMPLATE="$PROJECT_ROOT/caddy-addon/tls-snippet.conf.example"
if [[ ! -f "$TLS_SNIPPET" ]] && [[ -f "$TLS_TEMPLATE" ]]; then
    cp "$TLS_TEMPLATE" "$TLS_SNIPPET"
    log_info "Created tls-snippet.conf from template (Let's Encrypt mode)"
fi
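For context on why this snippet must always exist: every site block in the main Caddyfile imports it, so a missing file would break Caddy's config parse even in plain Let's Encrypt mode. A hypothetical minimal illustration of the import mechanism (the hostname and upstream are illustrative; only the `service_tls` snippet name comes from the repo):

```caddyfile
# tls-snippet.conf: an empty snippet means "use Let's Encrypt defaults"
(service_tls) {
}

# Main Caddyfile: each site block imports the snippet
n8n.example.com {
    import service_tls
    reverse_proxy n8n:5678
}
```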
# Check if Docker daemon is running
if ! docker info > /dev/null 2>&1; then
    log_error "Docker daemon is not running. Please start Docker and try again."

View File

@@ -79,6 +79,9 @@ echo ""
echo -e " ${WHITE}2.${NC} Store the Welcome Page credentials securely"
echo ""
echo -e " ${WHITE}3.${NC} Configure services as needed:"
if is_profile_active "appsmith"; then
    echo -e "   ${GREEN}*${NC} ${WHITE}Appsmith${NC}: Create admin account on first login (may take a few minutes to start)"
fi
if is_profile_active "n8n"; then
    echo -e "   ${GREEN}*${NC} ${WHITE}n8n${NC}: Complete first-run setup with your email"
fi
@@ -97,6 +100,12 @@ fi
if is_profile_active "nocodb"; then
    echo -e "   ${GREEN}*${NC} ${WHITE}NocoDB${NC}: Create your account on first login"
fi
if is_profile_active "postiz"; then
    echo -e "   ${GREEN}*${NC} ${WHITE}Postiz${NC}: Create your account on first login"
fi
if is_profile_active "uptime-kuma"; then
    echo -e "   ${GREEN}*${NC} ${WHITE}Uptime Kuma${NC}: Create your account on first login"
fi
if is_profile_active "gost"; then
    echo -e "   ${GREEN}*${NC} ${WHITE}Gost Proxy${NC}: Routing AI traffic through external proxy"
fi

View File

@@ -30,6 +30,8 @@ INIT_DB_DATABASES=(
    "lightrag"
    "nocodb"
    "postiz"
    "temporal"
    "temporal_visibility"
    "waha"
)

View File

@@ -27,6 +27,19 @@ GENERATED_AT=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
# Build services array - each entry is a formatted JSON block
declare -a SERVICES_ARRAY

# Appsmith
if is_profile_active "appsmith"; then
    SERVICES_ARRAY+=("    \"appsmith\": {
      \"hostname\": \"$(json_escape "$APPSMITH_HOSTNAME")\",
      \"credentials\": {
        \"note\": \"Create your account on first login\"
      },
      \"extra\": {
        \"docs\": \"https://docs.appsmith.com\"
      }
    }")
fi

# n8n
if is_profile_active "n8n"; then
    N8N_WORKER_COUNT_VAL="${N8N_WORKER_COUNT:-1}"
@@ -327,6 +340,30 @@ if is_profile_active "postiz"; then
    }")
fi

# Temporal UI (gated on the postiz profile: Temporal ships as the Postiz orchestrator)
if is_profile_active "postiz"; then
    SERVICES_ARRAY+=("    \"temporal-ui\": {
      \"hostname\": \"$(json_escape "$TEMPORAL_UI_HOSTNAME")\",
      \"credentials\": {
        \"username\": \"$(json_escape "$TEMPORAL_UI_USERNAME")\",
        \"password\": \"$(json_escape "$TEMPORAL_UI_PASSWORD")\"
      },
      \"extra\": {
        \"note\": \"Workflow orchestration admin for Postiz\"
      }
    }")
fi

# Uptime Kuma
if is_profile_active "uptime-kuma"; then
    SERVICES_ARRAY+=("    \"uptime-kuma\": {
      \"hostname\": \"$(json_escape "$UPTIME_KUMA_HOSTNAME")\",
      \"credentials\": {
        \"note\": \"Create account on first login\"
      }
    }")
fi

# WAHA
if is_profile_active "waha"; then
    SERVICES_ARRAY+=("    \"waha\": {
# WAHA # WAHA
if is_profile_active "waha"; then if is_profile_active "waha"; then
SERVICES_ARRAY+=(" \"waha\": { SERVICES_ARRAY+=(" \"waha\": {
@@ -505,6 +542,16 @@ if is_profile_active "databasus"; then
    ((STEP_NUM++))
fi

# Set up Appsmith (if appsmith active)
if is_profile_active "appsmith"; then
    QUICK_START_ARRAY+=("    {
      \"step\": $STEP_NUM,
      \"title\": \"Set up Appsmith\",
      \"description\": \"Create your admin account and build your first app\"
    }")
    ((STEP_NUM++))
fi

# Step 4: Monitor system (if monitoring active)
if is_profile_active "monitoring"; then
    QUICK_START_ARRAY+=("    {
@@ -541,3 +588,30 @@ EOF
log_success "Welcome page data generated at: $OUTPUT_FILE"
log_info "Access it at: https://${WELCOME_HOSTNAME:-welcome.${USER_DOMAIN_NAME}}"

# Generate changelog.json with CHANGELOG.md content
CHANGELOG_JSON_FILE="$PROJECT_ROOT/welcome/changelog.json"
CHANGELOG_SOURCE="$PROJECT_ROOT/CHANGELOG.md"
if [ -f "$CHANGELOG_SOURCE" ]; then
    # Read and escape content for JSON (preserve newlines as \n)
    # Using awk for cross-platform compatibility (macOS + Linux)
    CHANGELOG_CONTENT=$(awk '
        BEGIN { ORS="" }
        {
            gsub(/\\/, "\\\\")   # Escape backslashes first
            gsub(/"/, "\\\"")    # Escape double quotes
            gsub(/\t/, "\\t")    # Escape tabs
            gsub(/\r/, "")       # Remove carriage returns (CRLF → LF)
            if (NR > 1) printf "\\n"
            printf "%s", $0
        }
    ' "$CHANGELOG_SOURCE")

    # Write changelog.json file
    printf '{\n  "content": "%s"\n}\n' "$CHANGELOG_CONTENT" > "$CHANGELOG_JSON_FILE"
    log_success "Changelog JSON generated at: $CHANGELOG_JSON_FILE"
else
    log_warning "CHANGELOG.md not found, skipping changelog.json generation"
fi

View File

@@ -10,6 +10,7 @@
#   - docker-compose.n8n-workers.yml (if exists and n8n profile active)
#   - supabase/docker/docker-compose.yml (if exists and supabase profile active)
#   - dify/docker/docker-compose.yaml (if exists and dify profile active)
#   - docker-compose.override.yml (if exists, user overrides with highest precedence)
#
# Usage: bash scripts/restart.sh
# =============================================================================
@@ -33,6 +34,20 @@ EXTERNAL_SERVICE_INIT_DELAY=10
# Build compose files array (sets global COMPOSE_FILES)
build_compose_files_array

# Ensure postiz.env exists if Postiz is enabled (required for volume mount)
# This is a safety net for cases where restart runs without start_services.py
# (e.g., git pull + make restart instead of make update)
if is_profile_active "postiz"; then
    if [ -d "$PROJECT_ROOT/postiz.env" ]; then
        log_warning "postiz.env exists as a directory (created by Docker). Removing and recreating as file."
        rm -rf "$PROJECT_ROOT/postiz.env"
        touch "$PROJECT_ROOT/postiz.env"
    elif [ ! -f "$PROJECT_ROOT/postiz.env" ]; then
        log_warning "postiz.env not found, creating empty file. Run 'make update' to generate full config."
        touch "$PROJECT_ROOT/postiz.env"
    fi
fi
log_info "Restarting services..."
log_info "Using compose files: ${COMPOSE_FILES[*]}"
@@ -71,6 +86,10 @@ MAIN_COMPOSE_FILES=("-f" "$PROJECT_ROOT/docker-compose.yml")
if path=$(get_n8n_workers_compose); then
    MAIN_COMPOSE_FILES+=("-f" "$path")
fi
OVERRIDE_COMPOSE="$PROJECT_ROOT/docker-compose.override.yml"
if [ -f "$OVERRIDE_COMPOSE" ]; then
    MAIN_COMPOSE_FILES+=("-f" "$OVERRIDE_COMPOSE")
fi

# Start main services
log_info "Starting main services..."

View File

@@ -2,12 +2,13 @@
# =============================================================================
# setup_custom_tls.sh - Configure custom TLS certificates for Caddy
# =============================================================================
# Updates caddy-addon/tls-snippet.conf to use corporate/internal certificates
# instead of Let's Encrypt.
#
# Usage:
#   bash scripts/setup_custom_tls.sh                    # Interactive mode
#   bash scripts/setup_custom_tls.sh cert.crt key.key   # Non-interactive mode
#   bash scripts/setup_custom_tls.sh --remove           # Reset to Let's Encrypt
#
# Prerequisites:
#   - Place certificate files in ./certs/ directory
@@ -18,13 +19,27 @@ set -euo pipefail
source "$(dirname "$0")/utils.sh" && init_paths

SNIPPET_FILE="$PROJECT_ROOT/caddy-addon/tls-snippet.conf"
SNIPPET_EXAMPLE="$PROJECT_ROOT/caddy-addon/tls-snippet.conf.example"
CERTS_DIR="$PROJECT_ROOT/certs"

# Legacy file that causes duplicate host errors (must be cleaned up on migration)
# TODO: Remove OLD_CONFIG and cleanup_legacy_config() after v3.0 release (all users migrated)
OLD_CONFIG="$PROJECT_ROOT/caddy-addon/custom-tls.conf"

# =============================================================================
# FUNCTIONS
# =============================================================================

cleanup_legacy_config() {
    # Remove old custom-tls.conf that causes duplicate host errors
    # This is needed for users upgrading from older versions
    if [[ -f "$OLD_CONFIG" ]]; then
        log_warning "Removing obsolete custom-tls.conf (causes duplicate host errors)"
        rm -f "$OLD_CONFIG"
    fi
}

show_help() {
    cat << EOF
Setup Custom TLS Certificates for Caddy
@@ -33,7 +48,7 @@ Usage: $(basename "$0") [OPTIONS] [CERT_FILE] [KEY_FILE]
Options:
    -h, --help    Show this help message
    --remove      Reset to Let's Encrypt automatic certificates

Arguments:
    CERT_FILE     Path to certificate file (relative to ./certs/)
@@ -42,13 +57,12 @@ Arguments:
Examples:
    $(basename "$0")                              # Interactive mode
    $(basename "$0") wildcard.crt wildcard.key    # Use specific files
    $(basename "$0") --remove                     # Reset to Let's Encrypt

The script will:
    1. Detect certificate files in ./certs/
    2. Update caddy-addon/tls-snippet.conf with your certificate paths
    3. Optionally restart Caddy
EOF
}
@@ -75,157 +89,53 @@ find_keys() {
    echo "${keys[*]:-}"
}

ensure_snippet_exists() {
    # Create tls-snippet.conf from example if it doesn't exist
    # This ensures the file survives git updates (it's gitignored)
    if [[ ! -f "$SNIPPET_FILE" ]]; then
        if [[ -f "$SNIPPET_EXAMPLE" ]]; then
            cp "$SNIPPET_EXAMPLE" "$SNIPPET_FILE"
            log_info "Created tls-snippet.conf from template"
        else
            # Fallback: create default content directly
            remove_config
        fi
    fi
}

generate_config() {
    local cert_file="$1"
    local key_file="$2"

    cat > "$SNIPPET_FILE" << EOF
# TLS Configuration Snippet
# Generated by setup_custom_tls.sh on $(date -Iseconds)
# Using custom certificates instead of Let's Encrypt.
# Reset to Let's Encrypt: make setup-tls --remove

(service_tls) {
    tls /etc/caddy/certs/$cert_file /etc/caddy/certs/$key_file
}
EOF

    log_success "Generated $SNIPPET_FILE"
}

remove_config() {
    cat > "$SNIPPET_FILE" << 'EOF'
# TLS Configuration Snippet
# Imported by all service blocks in the main Caddyfile.
#
# Default: Empty (uses Let's Encrypt automatic certificates)
# Custom:  Overwritten by 'make setup-tls' with your certificate paths
# Reset:   Run 'make setup-tls --remove' to restore Let's Encrypt
(service_tls) {
    # Default: Let's Encrypt automatic certificates (empty = no override)
}
EOF
    log_success "Reset to Let's Encrypt (automatic certificates)"
}

restart_caddy() {
@@ -250,12 +160,19 @@ main() {
            exit 0
            ;;
        --remove)
            cleanup_legacy_config
            remove_config
            restart_caddy
            exit 0
            ;;
    esac

    # Clean up legacy config that causes duplicate hosts
    cleanup_legacy_config

    # Ensure snippet file exists (survives git updates)
    ensure_snippet_exists

    # Ensure certs directory exists
    mkdir -p "$CERTS_DIR"
@@ -319,29 +236,16 @@ main() {
    log_info "Using certificate: $cert_file"
    log_info "Using key: $key_file"

    # Ensure certificate files are readable by Caddy container
    # (Docker volume mounts preserve host permissions, Caddy may run as different UID)
    chmod 644 "$CERTS_DIR/$cert_file" "$CERTS_DIR/$key_file"

    # Generate configuration
    generate_config "$cert_file" "$key_file"

    echo ""
    log_info "Custom TLS configured successfully!"
    log_info "All services will use: /etc/caddy/certs/$cert_file"
    echo ""

    # Restart Caddy

View File

@@ -72,7 +72,7 @@ echo ""
# Core services (always checked)
log_subheader "Core Services"
check_image_update "postgres" "pgvector/pgvector:pg${POSTGRES_VERSION:-17}"
check_image_update "redis" "valkey/valkey:8-alpine"
check_image_update "caddy" "caddy:2-alpine"
@@ -134,6 +134,16 @@ if is_profile_active "databasus"; then
    check_image_update "databasus" "databasus/databasus:latest"
fi

if is_profile_active "appsmith"; then
    log_subheader "Appsmith"
    check_image_update "appsmith" "appsmith/appsmith-ce:release"
fi

if is_profile_active "uptime-kuma"; then
    log_subheader "Uptime Kuma"
    check_image_update "uptime-kuma" "louislam/uptime-kuma:2"
fi

# Summary
log_divider
echo ""

View File

@@ -353,6 +353,7 @@ get_dify_compose() {
}

# Build array of all active compose files (main + external services)
# Appends docker-compose.override.yml last if it exists (user overrides, highest precedence)
# IMPORTANT: Requires COMPOSE_PROFILES to be set before calling (via load_env)
# Usage: build_compose_files_array; docker compose "${COMPOSE_FILES[@]}" up -d
# Result is stored in global COMPOSE_FILES array
@@ -369,6 +370,12 @@ build_compose_files_array() {
    if path=$(get_dify_compose); then
        COMPOSE_FILES+=("-f" "$path")
    fi
    # Include user overrides last (highest precedence)
    local override="$PROJECT_ROOT/docker-compose.override.yml"
    if [ -f "$override" ]; then
        COMPOSE_FILES+=("-f" "$override")
    fi
}

#=============================================================================
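The precedence in `build_compose_files_array` comes purely from `-f` ordering: `docker compose` merges later files on top of earlier ones, which is why the user override must be appended last. A small standalone sketch of the array-building pattern (filenames are illustrative, and the real script gates each file behind profile and existence checks):

```shell
#!/usr/bin/env bash
# Sketch of the ordering contract: the file appended last wins on merge,
# so docker-compose.override.yml goes at the end of the argument list.
COMPOSE_FILES=("-f" "docker-compose.yml")

for extra in "docker-compose.n8n-workers.yml" "docker-compose.override.yml"; do
    COMPOSE_FILES+=("-f" "$extra")
done

echo "${COMPOSE_FILES[@]}"
# -> -f docker-compose.yml -f docker-compose.n8n-workers.yml -f docker-compose.override.yml
```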

View File

@@ -71,18 +71,40 @@ def clone_supabase_repo():
    os.chdir("..")

def prepare_supabase_env():
    """Copy .env to supabase/docker/.env, or sync new variables if it already exists."""
    if not is_supabase_enabled():
        print("Supabase is not enabled, skipping env preparation.")
        return
    env_path = os.path.join("supabase", "docker", ".env")
    root_env_path = ".env"
    # Do not overwrite existing Supabase env to avoid credential drift
    if os.path.exists(env_path):
        # Sync new variables from root .env that don't exist in supabase .env
        print(f"Syncing new variables from root .env to {env_path}...")
        root_env = dotenv_values(root_env_path)
        supabase_env = dotenv_values(env_path)
        new_vars = []
        for key, value in root_env.items():
            if key not in supabase_env and value is not None:
                # Quote values to handle special characters safely
                if '$' in value:
                    new_vars.append(f"{key}='{value}'")
                else:
                    new_vars.append(f'{key}="{value}"')
        if new_vars:
            with open(env_path, 'r') as f:
                existing_content = f.read()
            sync_header = "# --- Variables synced from root .env ---"
            with open(env_path, 'a') as f:
                if sync_header not in existing_content:
                    f.write(f"\n{sync_header}\n")
                for var in new_vars:
                    f.write(f"{var}\n")
            print(f"Synced {len(new_vars)} new variable(s) to Supabase env.")
        else:
            print("Supabase env is up to date, no new variables to sync.")
        return
    print("Copying .env in root to .env in supabase/docker...")
    shutil.copyfile(root_env_path, env_path)
def clone_dify_repo():
    """Clone the Dify repository using sparse checkout if not already present."""
@@ -166,6 +188,76 @@ def prepare_dify_env():
    with open(env_path, 'w') as f:
        f.write("\n".join(lines) + "\n")

def is_postiz_enabled():
    """Check if 'postiz' is in COMPOSE_PROFILES in .env file."""
    env_values = dotenv_values(".env")
    compose_profiles = env_values.get("COMPOSE_PROFILES", "")
    return "postiz" in compose_profiles.split(',')

def prepare_postiz_env():
    """Generate postiz.env for mounting as /app/.env in Postiz container.

    The Postiz image uses dotenv-cli (dotenv -e ../../.env) to load config.
    Always regenerated to reflect current .env values.
    """
    if not is_postiz_enabled():
        print("Postiz is not enabled, skipping env preparation.")
        return
    print("Generating postiz.env from root .env values...")
    root_env = dotenv_values(".env")
    hostname = root_env.get("POSTIZ_HOSTNAME", "")
    frontend_url = f"https://{hostname}" if hostname else ""
    env_vars = {
        "BACKEND_INTERNAL_URL": "http://localhost:3000",
        "DATABASE_URL": f"postgresql://postgres:{root_env.get('POSTGRES_PASSWORD', '')}@postgres:5432/{root_env.get('POSTIZ_DB_NAME', 'postiz')}?schema=postiz",
        "DISABLE_REGISTRATION": root_env.get("POSTIZ_DISABLE_REGISTRATION", "false"),
        "FRONTEND_URL": frontend_url,
        "IS_GENERAL": "true",
        "JWT_SECRET": root_env.get("JWT_SECRET", ""),
        "MAIN_URL": frontend_url,
        "NEXT_PUBLIC_BACKEND_URL": f"{frontend_url}/api" if frontend_url else "",
        "NEXT_PUBLIC_UPLOAD_DIRECTORY": "/uploads",
        "REDIS_URL": "redis://redis:6379",
        "STORAGE_PROVIDER": "local",
        "TEMPORAL_ADDRESS": "temporal:7233",
        "UPLOAD_DIRECTORY": "/uploads",
    }
    # Social media API keys — direct pass-through from root .env
    social_keys = [
        "X_API_KEY", "X_API_SECRET",
        "LINKEDIN_CLIENT_ID", "LINKEDIN_CLIENT_SECRET",
        "REDDIT_CLIENT_ID", "REDDIT_CLIENT_SECRET",
        "GITHUB_CLIENT_ID", "GITHUB_CLIENT_SECRET",
        "BEEHIIVE_API_KEY", "BEEHIIVE_PUBLICATION_ID",
        "THREADS_APP_ID", "THREADS_APP_SECRET",
        "FACEBOOK_APP_ID", "FACEBOOK_APP_SECRET",
        "YOUTUBE_CLIENT_ID", "YOUTUBE_CLIENT_SECRET",
        "TIKTOK_CLIENT_ID", "TIKTOK_CLIENT_SECRET",
        "PINTEREST_CLIENT_ID", "PINTEREST_CLIENT_SECRET",
        "DRIBBBLE_CLIENT_ID", "DRIBBBLE_CLIENT_SECRET",
        "DISCORD_CLIENT_ID", "DISCORD_CLIENT_SECRET",
        "DISCORD_BOT_TOKEN_ID",
        "SLACK_ID", "SLACK_SECRET", "SLACK_SIGNING_SECRET",
        "MASTODON_URL", "MASTODON_CLIENT_ID", "MASTODON_CLIENT_SECRET",
    ]
    for key in social_keys:
        env_vars[key] = root_env.get(key, "")
    # Handle case where Docker created postiz.env as a directory
    if os.path.isdir("postiz.env"):
        print("Warning: postiz.env exists as a directory (likely created by Docker). Removing...")
        shutil.rmtree("postiz.env")
    with open("postiz.env", 'w') as f:
        for key, value in env_vars.items():
            f.write(f'{key}="{value}"\n')
    print(f"Generated postiz.env with {len(env_vars)} variables.")

def stop_existing_containers():
    """Stop and remove existing containers for our unified project ('localai')."""
    print("Stopping and removing existing containers for the unified project 'localai'...")
@@ -195,6 +287,11 @@ def stop_existing_containers():
    if os.path.exists(n8n_workers_compose_path):
        cmd.extend(["-f", n8n_workers_compose_path])

    # Include user overrides if present
    override_path = "docker-compose.override.yml"
    if os.path.exists(override_path):
        cmd.extend(["-f", override_path])

    cmd.extend(["down"])
    run_command(cmd)
@@ -230,6 +327,11 @@ def start_local_ai():
    if os.path.exists(n8n_workers_compose_path):
        compose_files.extend(["-f", n8n_workers_compose_path])

    # Include user overrides if present (must be last for highest precedence)
    override_path = "docker-compose.override.yml"
    if os.path.exists(override_path):
        compose_files.extend(["-f", override_path])

    # Explicitly build services and pull newer base images first.
    print("Checking for newer base images and building services...")
    build_cmd = ["docker", "compose", "-p", "localai"] + compose_files + ["build", "--pull"]
@@ -394,7 +496,10 @@ def main():
    # Generate SearXNG secret key and check docker-compose.yml
    generate_searxng_secret_key()
    check_and_fix_docker_compose_for_searxng()

    # Generate Postiz env file
    prepare_postiz_env()

    stop_existing_containers()

    # Start Supabase first

View File

@@ -136,6 +136,11 @@
    warning: (className = '') => `
        <svg class="${className}" fill="none" stroke="currentColor" viewBox="0 0 24 24" aria-hidden="true">
            <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M12 9v2m0 4h.01m-6.938 4h13.856c1.54 0 2.502-1.667 1.732-3L13.732 4c-.77-1.333-2.694-1.333-3.464 0L3.34 16c-.77 1.333.192 3 1.732 3z"/>
        </svg>`,
    changelog: (className = '') => `
        <svg class="${className}" fill="none" stroke="currentColor" viewBox="0 0 24 24" aria-hidden="true">
            <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2m-3 7h3m-3 4h3m-6-4h.01M9 16h.01"/>
        </svg>`
};
@@ -143,6 +148,14 @@
// DATA - Service metadata and commands
// ============================================
const SERVICE_METADATA = {
    'appsmith': {
        name: 'Appsmith',
        description: 'Low-code Internal Tools',
        icon: 'AS',
        color: 'bg-[#5f2dde]',
        category: 'tools',
        docsUrl: 'https://docs.appsmith.com'
    },
    'n8n': {
        name: 'n8n',
        description: 'Workflow Automation',
@@ -335,6 +348,14 @@
category: 'tools',
docsUrl: 'https://docs.postiz.com'
},
'temporal-ui': {
name: 'Temporal UI',
description: 'Postiz Workflow Orchestration',
icon: 'TM',
color: 'bg-violet-500',
category: 'tools',
docsUrl: 'https://docs.temporal.io/'
},
'waha': {
name: 'WAHA',
description: 'WhatsApp HTTP API',
@@ -399,6 +420,14 @@
category: 'tools',
docsUrl: 'https://docs.python.org'
},
'uptime-kuma': {
name: 'Uptime Kuma',
description: 'Uptime Monitoring Dashboard',
icon: 'UK',
color: 'bg-[#5CDD8B]',
category: 'monitoring',
docsUrl: 'https://github.com/louislam/uptime-kuma'
},
'cloudflare-tunnel': {
name: 'Cloudflare Tunnel',
description: 'Zero-Trust Network Access',
@@ -415,6 +444,8 @@
{ cmd: 'make logs s=<service>', desc: 'View logs for specific service' },
{ cmd: 'make monitor', desc: 'Live CPU/memory monitoring' },
{ cmd: 'make restart', desc: 'Restart all services' },
{ cmd: 'make stop', desc: 'Stop all services' },
{ cmd: 'make start', desc: 'Start all services' },
{ cmd: 'make show-restarts', desc: 'Show restart count per container' },
{ cmd: 'make doctor', desc: 'Run system diagnostics' },
{ cmd: 'make update', desc: 'Update system and services' },
@@ -842,6 +873,7 @@
const servicesContainer = document.getElementById('services-container');
const quickstartContainer = document.getElementById('quickstart-container');
const commandsContainer = document.getElementById('commands-container');
const changelogContainer = document.getElementById('changelog-container');
const domainInfo = document.getElementById('domain-info');
/**
@@ -957,6 +989,26 @@
commandsContainer.appendChild(grid);
}
/**
* Render changelog content
*/
function renderChangelog(content) {
if (!changelogContainer) return;
changelogContainer.innerHTML = '';
if (!content) {
changelogContainer.innerHTML = `
<p class="text-gray-500 text-center py-8">Changelog not available</p>
`;
return;
}
const pre = document.createElement('pre');
pre.className = 'text-sm text-gray-300 font-mono whitespace-pre-wrap break-words leading-relaxed';
pre.textContent = content;
changelogContainer.appendChild(pre);
}
/**
 * Render error state in services container
 */
@@ -982,14 +1034,26 @@
// Always render commands (static content)
renderCommands();
-try {
-    const response = await fetch('data.json');
-    if (!response.ok) {
-        throw new Error(`Failed to load data (${response.status})`);
-    }
-    const data = await response.json();
+// Fetch both JSON files in parallel for better performance
+// Each fetch is handled independently - changelog failure won't affect main data
+const [changelogResult, dataResult] = await Promise.allSettled([
+    fetch('changelog.json').then(r => r.ok ? r.json() : null),
+    fetch('data.json').then(r => r.ok ? r.json() : Promise.reject(new Error(`HTTP ${r.status}`)))
+]);
+// Handle changelog (independent - failures don't break the page)
+if (changelogResult.status === 'fulfilled' && changelogResult.value?.content) {
+    renderChangelog(changelogResult.value.content);
+} else {
+    if (changelogResult.status === 'rejected') {
+        console.error('Error loading changelog:', changelogResult.reason);
+    }
+    renderChangelog(null);
+}
+// Handle main data
+if (dataResult.status === 'fulfilled' && dataResult.value) {
+    const data = dataResult.value;
// Update domain info
if (domainInfo) {
@@ -1007,9 +1071,8 @@
// Render quick start
renderQuickStart(data.quick_start);
-} catch (error) {
-    console.error('Error loading data:', error);
+} else {
+    console.error('Error loading data:', dataResult.reason);
// Show error in UI
renderServicesError();
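The `Promise.allSettled` pattern in this hunk can be sketched in isolation. In the sketch below, `loadAll` and `stubFetch` are hypothetical names, and `fetch` is stubbed out so the snippet runs outside a browser; the real code targets the `changelog.json` and `data.json` files served next to the page.

```javascript
// Sketch of the parallel-fetch pattern, with fetch replaced by a stub.
// loadAll and stubFetch are illustrative names, not part of the real code.
async function loadAll(fetchImpl) {
  const [changelogResult, dataResult] = await Promise.allSettled([
    fetchImpl('changelog.json').then(r => (r.ok ? r.json() : null)),
    fetchImpl('data.json').then(r =>
      r.ok ? r.json() : Promise.reject(new Error(`HTTP ${r.status}`))
    )
  ]);
  return {
    // allSettled never rejects, so a failed changelog fetch
    // cannot take down the main data fetch (and vice versa)
    changelog: changelogResult.status === 'fulfilled' ? changelogResult.value : null,
    data: dataResult.status === 'fulfilled' ? dataResult.value : null
  };
}

// Stub: changelog.json is missing (404) while data.json loads normally.
const stubFetch = url =>
  Promise.resolve(
    url === 'data.json'
      ? { ok: true, json: () => Promise.resolve({ domain: 'example.test' }) }
      : { ok: false, status: 404, json: () => Promise.resolve(null) }
  );

loadAll(stubFetch).then(({ changelog, data }) => {
  console.log(changelog, data.domain); // → null example.test
});
```

This is why a missing `changelog.json` only degrades the changelog panel: the rejection (or `null`) is confined to its own settled result, unlike the original single `try`/`catch`, where any failure aborted the whole render.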


@@ -51,7 +51,7 @@
}
::-webkit-scrollbar-track {
-    background: rgba(17, 17, 17, 0.8);
+    background: transparent;
border-radius: 5px;
}
@@ -198,6 +198,23 @@
<div class="gradient-line my-8" aria-hidden="true"></div>
<!-- Changelog Section -->
<section class="mb-16">
<div class="flex items-center gap-3 mb-6">
<div class="w-10 h-10 rounded-lg bg-brand/10 border border-brand/20 flex items-center justify-center"
data-section-icon="changelog"></div>
<h2 class="text-2xl font-semibold text-white">Changelog</h2>
</div>
<div id="changelog-container"
class="bg-surface-100 rounded-xl border border-surface-400 p-6 overflow-y-auto"
style="max-height: 444px;">
<!-- Changelog content will be injected here by JavaScript -->
<div class="animate-pulse h-32"></div>
</div>
</section>
<div class="gradient-line my-8" aria-hidden="true"></div>
<!-- Documentation Section -->
<section class="mb-16">
<div class="flex items-center gap-3 mb-6">