19 Commits

Author SHA1 Message Date
Yury Kossakovsky
9dcf622e9f fix: use node-based healthcheck for uptime-kuma
louislam/uptime-kuma:2 image doesn't include wget
2026-03-28 17:50:48 -06:00
Yury Kossakovsky
7861dee1b1 fix: make n8n payload size max configurable via .env
was hardcoded to 256 in docker-compose.yml, ignoring user overrides
2026-03-28 17:40:18 -06:00
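The fix relies on Compose's shell-style variable interpolation, where `${VAR:-default}` falls back only when the variable is unset or empty. A minimal shell illustration of the semantics (variable value `512` is illustrative):

```shell
# Compose's ${VAR:-default} interpolation follows shell parameter expansion:
# the default applies only when the variable is unset or empty.
unset N8N_PAYLOAD_SIZE_MAX
echo "unset -> ${N8N_PAYLOAD_SIZE_MAX:-256}"   # prints: unset -> 256

N8N_PAYLOAD_SIZE_MAX=512
echo "set   -> ${N8N_PAYLOAD_SIZE_MAX:-256}"   # prints: set   -> 512
```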
Yury Kossakovsky
6fe028d01b chore: remove claude code github actions workflows 2026-03-23 16:12:15 -06:00
Yury Kossakovsky
804b81f6cb fix: resolve supabase-storage crash-loop by adding missing s3 config variables
supabase-storage crashes with "Region is missing" after upstream image
update because @aws-sdk/client-s3vectors requires REGION env var.

- add REGION, GLOBAL_S3_BUCKET, STORAGE_TENANT_ID to .env.example
- auto-generate S3_PROTOCOL_ACCESS_KEY_ID/SECRET in secret generation
- sync new env vars to existing supabase/docker/.env during updates
  (append-only, never overwrites existing values)
- bump version 1.3.3 → 1.4.1
2026-03-23 16:09:06 -06:00
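The append-only sync described above can be sketched as follows (file paths and fixture values are illustrative, not the exact installer script):

```shell
# Append-only env sync: copy variables that exist in the example file but are
# missing from the live file, never overwriting existing values.
EXAMPLE=/tmp/example.env
LIVE=/tmp/live.env

# demo fixtures (illustrative)
printf 'REGION=stub\nGLOBAL_S3_BUCKET=stub\nEXISTING=default\n' > "$EXAMPLE"
printf 'EXISTING=user-value\n' > "$LIVE"

while IFS= read -r line; do
    key="${line%%=*}"
    # append only when the key is not already present in the live file
    grep -q "^${key}=" "$LIVE" || printf '%s\n' "$line" >> "$LIVE"
done < "$EXAMPLE"

cat "$LIVE"   # EXISTING keeps its user value; REGION and GLOBAL_S3_BUCKET are added
```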
Yury Kossakovsky
d344291c21 Merge pull request #53 from kossakovsky/develop
v1.4.0: add uptime kuma and pgvector support
2026-03-15 20:21:35 -06:00
Yury Kossakovsky
463258fb06 feat: add pgvector support to postgresql
switch postgres image to pgvector/pgvector for vector
similarity search capabilities
2026-03-15 20:15:05 -06:00
Yury Kossakovsky
e8a8a5a511 fix: complete uptime kuma integration gaps
add missing readme service url, update preview image tracking,
and release changelog as v1.4.0
2026-03-15 20:10:37 -06:00
Yury Kossakovsky
944e0465bd Merge pull request #51 from kossakovsky/feature/add-uptime-kuma
feat: add Uptime Kuma uptime monitoring service
2026-03-13 18:30:25 -06:00
Yury Kossakovsky
a6a3c2cb05 fix: add proxy-env inheritance and healthcheck proxy bypass to uptime kuma 2026-03-13 18:09:52 -06:00
Yury Kossakovsky
5859fc9d25 fix: add missing healthcheck start_period and standardize wording 2026-03-13 18:02:48 -06:00
Yury Kossakovsky
c33998043f fix: correct 15 errors in uptime kuma service integration
fix healthcheck port (3000→3001), add missing logging config,
add UPTIME_KUMA_HOSTNAME to caddy env, add import service_tls
in caddyfile, fix hostname typo in .env.example, add uptime-kuma
to GOST_NO_PROXY, fix profile name in wizard/final report, fix
env var in welcome page generator, add missing trailing comma in
app.js, move changelog to Added section, declare volume in
top-level section, fix container name in caddyfile, fix volume
mount path, fix broken markdown link in README
2026-03-13 17:58:13 -06:00
Yury Kossakovsky
174fce7527 Merge pull request #48 from kossakovsky/add-claude-github-actions-1773363152825
Add claude GitHub actions 1773363152825
2026-03-12 20:31:23 -06:00
Yury Kossakovsky
52845d1ed9 feat: add uptime kuma uptime monitoring service 2026-03-12 20:30:00 -06:00
Yury Kossakovsky
b0564ea0d8 "Claude Code Review workflow" 2026-03-12 18:52:35 -06:00
Yury Kossakovsky
888347e110 "Claude PR Assistant workflow" 2026-03-12 18:52:34 -06:00
Yury Kossakovsky
277466f144 fix(postiz): generate .env file to prevent dotenv-cli crash (#40)
the postiz backend image uses dotenv-cli to load /app/.env, which
doesn't exist when config is only passed via docker environment vars.
generate postiz.env from root .env and mount it read-only. also handle
edge case where docker creates the file as a directory on bind mount
failure, and quote values to prevent dotenv-cli misparses.
2026-02-27 20:44:18 -07:00
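The quoting step can be sketched like this (paths and fixture values are illustrative; this is not the exact generator used by the installer):

```shell
# Generate an env file with double-quoted values so dotenv-cli does not
# misparse spaces or comment characters in values.
SRC=/tmp/root.env
OUT=/tmp/postiz.env

# demo fixtures (illustrative keys and values)
printf 'MAIN_URL=https://postiz.example.com\nJWT_SECRET=abc def#ghi\n' > "$SRC"

: > "$OUT"
while IFS='=' read -r key value; do
    # escape embedded double quotes, then wrap the whole value in quotes
    escaped=$(printf '%s' "$value" | sed 's/"/\\"/g')
    printf '%s="%s"\n' "$key" "$escaped" >> "$OUT"
done < "$SRC"

cat "$OUT"
```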
Yury Kossakovsky
58c485e49a docs: improve CLAUDE.md with missing architecture details
add start_services.py to key files, document python task runner,
docker-compose.override.yml support, yaml anchors, restart behavior,
supabase/dify profiles, --update flag for secrets, and expand file
locations and syntax validation lists
2026-02-27 19:13:36 -07:00
Yury Kossakovsky
b34e1468aa chore: remove redundant agents.md 2026-02-27 19:06:51 -07:00
Yury Kossakovsky
6a1301bfc0 fix(docker): respect docker-compose.override.yml for user customizations (#44)
all compose file assembly points now include the override file last
when present, giving it highest precedence over other compose files
2026-02-27 19:05:50 -07:00
18 changed files with 290 additions and 50 deletions

View File

@@ -183,6 +183,7 @@ SEARXNG_HOSTNAME=searxng.yourdomain.com
SUPABASE_HOSTNAME=supabase.yourdomain.com
WAHA_HOSTNAME=waha.yourdomain.com
WEAVIATE_HOSTNAME=weaviate.yourdomain.com
UPTIME_KUMA_HOSTNAME=uptime-kuma.yourdomain.com
WEBUI_HOSTNAME=webui.yourdomain.com
WELCOME_HOSTNAME=welcome.yourdomain.com
@@ -225,6 +226,10 @@ N8N_LOG_LEVEL=info
NODES_EXCLUDE="[]"
N8N_LOG_OUTPUT=console
# Maximum payload size in MB for n8n requests (default: 256 MB).
# Increase if you need to handle large files or webhook payloads.
N8N_PAYLOAD_SIZE_MAX=256
# Timezone for n8n and workflows (https://docs.n8n.io/hosting/configuration/environment-variables/timezone-localization/)
GENERIC_TIMEZONE=America/New_York
@@ -420,6 +425,25 @@ IMGPROXY_ENABLE_WEBP_DETECTION=true
# Add your OpenAI API key to enable SQL Editor Assistant
OPENAI_API_KEY=
############
# Storage - Configuration for S3 protocol endpoint
############
# S3 bucket when using S3 backend, directory name when using 'file'
GLOBAL_S3_BUCKET=stub
# Used for S3 protocol endpoint configuration
REGION=stub
# Equivalent to project_ref (S3 session token authentication)
STORAGE_TENANT_ID=stub
# Access to Storage via S3 protocol endpoint
S3_PROTOCOL_ACCESS_KEY_ID=
S3_PROTOCOL_ACCESS_KEY_SECRET=
# ============================================
# Cloudflare Tunnel Configuration (Optional)
# ============================================
@@ -446,7 +470,7 @@ GOST_UPSTREAM_PROXY=
# Internal services bypass list (prevents internal Docker traffic from going through proxy)
# Includes: Docker internal networks (172.16-31.*, 10.*), Docker DNS (127.0.0.11), and all service hostnames
GOST_NO_PROXY=localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.local,appsmith,postgres,postgres:5432,redis,redis:6379,caddy,ollama,neo4j,qdrant,weaviate,clickhouse,minio,searxng,crawl4ai,gotenberg,langfuse-web,langfuse-worker,flowise,n8n,n8n-import,n8n-worker-1,n8n-worker-2,n8n-worker-3,n8n-worker-4,n8n-worker-5,n8n-worker-6,n8n-worker-7,n8n-worker-8,n8n-worker-9,n8n-worker-10,n8n-runner-1,n8n-runner-2,n8n-runner-3,n8n-runner-4,n8n-runner-5,n8n-runner-6,n8n-runner-7,n8n-runner-8,n8n-runner-9,n8n-runner-10,letta,lightrag,docling,postiz,temporal,temporal-ui,ragflow,ragflow-mysql,ragflow-minio,ragflow-redis,ragflow-elasticsearch,ragapp,open-webui,comfyui,waha,libretranslate,paddleocr,nocodb,db,studio,kong,auth,rest,realtime,storage,imgproxy,meta,functions,analytics,vector,supavisor,gost,uptime-kuma,api.telegram.org,telegram.org,t.me,core.telegram.org
############
# Functions - Configuration for Functions # Functions - Configuration for Functions

.gitignore vendored
View File

@@ -11,6 +11,7 @@ dify/
volumes/
docker-compose.override.yml
docker-compose.n8n-workers.yml
postiz.env
welcome/data.json
welcome/changelog.json

View File

@@ -1,31 +0,0 @@
# Repository Guidelines
## Project Structure & Module Organization
- Core runtime config lives at the repo root: `docker-compose.yml`, `docker-compose.n8n-workers.yml`, and `Caddyfile`.
- Installer and maintenance logic is in `scripts/` (install, update, doctor, cleanup, and helpers).
- Service-specific assets are grouped by folder (examples: `n8n/`, `grafana/`, `prometheus/`, `searxng/`, `ragflow/`, `python-runner/`, `welcome/`).
- Shared files for workflows are stored in `shared/` and mounted inside containers as `/data/shared`.
## Build, Test, and Development Commands
- `make install`: run the full installation wizard.
- `make update` or `make git-pull`: refresh images and configuration (fork-friendly via `make git-pull`).
- `make logs s=<service>`: tail a specific service's logs (example: `make logs s=n8n`).
- `make doctor`: run system checks for DNS/SSL/containers.
- `make restart`, `make stop`, `make start`, `make status`: manage the compose stack.
- `make clean` or `make clean-all`: remove unused Docker resources (`clean-all` is destructive).
## Coding Style & Naming Conventions
- Bash scripts in `scripts/` use `#!/bin/bash`, 4-space indentation, and uppercase constants. Match existing formatting.
- Environment variable patterns are consistent: hostnames use `_HOSTNAME`, secrets use `_PASSWORD` or `_KEY`, and bcrypt hashes use `_PASSWORD_HASH`.
- Services should not publish ports directly; external access goes through Caddy.
## Testing Guidelines
- There is no unit-test suite. Use syntax checks instead:
- `docker compose -p localai config --quiet`
- `bash -n scripts/install.sh` (and other edited scripts)
- For installer changes, validate on a clean Ubuntu 24.04 LTS host and confirm profile selections start correctly.
## Commit & Pull Request Guidelines
- Commit messages follow Conventional Commits: `type(scope): summary` (examples in history include `fix(caddy): ...`, `docs(readme): ...`, `feat(postiz): ...`).
- PRs should include a short summary, affected services/profiles, and test commands run.
- Update `README.md` and `CHANGELOG.md` for user-facing changes or new services.

View File

@@ -2,6 +2,34 @@
## [Unreleased]
## [1.4.2] - 2026-03-28
### Fixed
- **n8n** - Make `N8N_PAYLOAD_SIZE_MAX` configurable via `.env` (was hardcoded to 256, ignoring user overrides)
- **Uptime Kuma** - Fix healthcheck failure (`wget: not found`) by switching to Node.js-based check
## [1.4.1] - 2026-03-23
### Fixed
- **Supabase Storage** - Fix crash-loop (`Region is missing`) by adding missing S3 storage configuration variables (`REGION`, `GLOBAL_S3_BUCKET`, `STORAGE_TENANT_ID`) from upstream Supabase
- **Supabase** - Sync new environment variables to existing `supabase/docker/.env` during updates (previously only populated on first install)
## [1.4.0] - 2026-03-15
### Added
- **Uptime Kuma** - Self-hosted uptime monitoring with 90+ notification services
- **pgvector** - Switch PostgreSQL image to `pgvector/pgvector` for vector similarity search support
## [1.3.3] - 2026-02-27
### Fixed
- **Postiz** - Generate `postiz.env` file to prevent `dotenv-cli` crash in backend container (#40). Handles edge case where Docker creates the file as a directory, and quotes values to prevent misparses.
## [1.3.2] - 2026-02-27
### Fixed
- **Docker Compose** - Respect `docker-compose.override.yml` for user customizations (#44). All compose file assembly points now include the override file when present.
## [1.3.1] - 2026-02-27
### Fixed

View File

@@ -41,7 +41,8 @@ This is **n8n-install**, a Docker Compose-based installer that provides a compre
- `scripts/download_top_workflows.sh`: Downloads community n8n workflows
- `scripts/import_workflows.sh`: Imports workflows from `n8n/backup/workflows/` into n8n (used by `make import`)
- `scripts/restart.sh`: Restarts services with proper compose file handling (used by `make restart`)
- `scripts/setup_custom_tls.sh`: Configures custom TLS certificates (used by `make setup-tls`); supports `--remove` to revert to Let's Encrypt
- `start_services.py`: Python orchestrator for service startup order, builds Docker images, handles external services (Supabase/Dify cloning, env preparation, startup), generates SearXNG secret key, stops existing containers. Uses `python-dotenv` (`dotenv_values`).
**Project Name**: All docker-compose commands use `-p localai` (defined in Makefile as `PROJECT_NAME := localai`).
@@ -60,9 +61,9 @@ This is **n8n-install**, a Docker Compose-based installer that provides a compre
7. `07_final_report.sh` - Display credentials and URLs
8. `08_fix_permissions.sh` - Fix file ownership for non-root access
The update flow (`scripts/update.sh`) similarly orchestrates: git fetch + reset → service selection → `apply_update.sh` → restart. During updates, `03_generate_secrets.sh --update` adds new variables from `.env.example` without regenerating existing ones (preserves user-set values).
**Git update modes**: Default is `reset` (hard reset to origin). Set `GIT_MODE=merge` in `.env` for fork workflows (merges from upstream instead of hard reset). The `make git-pull` command uses merge mode. Git branch support is explicit: `GIT_SUPPORTED_BRANCHES=("main" "develop")` in `git.sh`; unknown branches warn and fall back to `main`.
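The branch fallback can be sketched as follows (the branch name is illustrative; this mirrors the check described above, not the exact `git.sh` code):

```shell
# Warn and fall back to main when the current branch is not in the
# supported list.
GIT_SUPPORTED_BRANCHES=("main" "develop")
branch="feature/experimental"   # illustrative current branch

supported=false
for b in "${GIT_SUPPORTED_BRANCHES[@]}"; do
    [ "$branch" = "$b" ] && supported=true
done

if [ "$supported" = false ]; then
    echo "WARN: branch '$branch' is not supported, falling back to main" >&2
    branch="main"
fi
echo "$branch"   # prints: main
```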
## Common Development Commands
@@ -104,7 +105,7 @@ Follow this workflow when adding a new optional service (refer to `.claude/comma
3. **.env.example**: Add `MYSERVICE_HOSTNAME=myservice.yourdomain.com` and credentials if using basic auth.
4. **scripts/03_generate_secrets.sh**: Generate passwords and bcrypt hashes. Add to `VARS_TO_GENERATE` map.
5. **scripts/04_wizard.sh**: Add service to `base_services_data` array for wizard selection.
6. **scripts/databases.sh**: If service uses PostgreSQL, add database name to `INIT_DB_DATABASES` array. Database creation is idempotent (checks existence before creating). Note: Postiz also requires `temporal` and `temporal_visibility` databases.
7. **scripts/generate_welcome_page.sh**: Add service to `SERVICES_ARRAY` for welcome dashboard.
8. **welcome/app.js**: Add `SERVICE_METADATA` entry with name, description, icon, color, category.
9. **scripts/07_final_report.sh**: Add service URL and credentials output using `is_profile_active "myservice"`.
@@ -165,9 +166,8 @@ This project uses [Semantic Versioning](https://semver.org/). When updating `CHA
- **Template profile pattern**: `docker-compose.yml` defines `n8n-worker-template` and `n8n-runner-template` with `profiles: ["n8n-template"]` (never activated directly). `generate_n8n_workers.sh` uses these as templates to generate `docker-compose.n8n-workers.yml` with the actual worker/runner services.
- **Scaling**: Change `N8N_WORKER_COUNT` in `.env` and run `bash scripts/generate_n8n_workers.sh`
- **Code node libraries**: Configured via `n8n/n8n-task-runners.json` and `n8n/Dockerfile.runner`:
  - **JavaScript runner**: packages installed via `pnpm add` in Dockerfile.runner; allowlist in `n8n-task-runners.json` (`NODE_FUNCTION_ALLOW_EXTERNAL`, `NODE_FUNCTION_ALLOW_BUILTIN`); default packages: `cheerio`, `axios`, `moment`, `lodash`
  - **Python runner**: also configured in `n8n-task-runners.json`; uses `/opt/runners/task-runner-python/.venv/bin/python` with `N8N_RUNNERS_STDLIB_ALLOW: "*"` and `N8N_RUNNERS_EXTERNAL_ALLOW: "*"`
- Workflows can access the host filesystem via `/data/shared` (mapped to `./shared`)
- `N8N_BLOCK_ENV_ACCESS_IN_NODE=false` allows Code nodes to access environment variables
@@ -177,7 +177,8 @@ This project uses [Semantic Versioning](https://semver.org/). When updating `CHA
- Hostnames are passed via environment variables (e.g., `N8N_HOSTNAME`, `FLOWISE_HOSTNAME`)
- Basic auth uses bcrypt hashes generated by `scripts/03_generate_secrets.sh` via Caddy's hash command
- Never add `ports:` to services in docker-compose.yml; let Caddy handle all external access
- **Caddy Addons** (`caddy-addon/`): Extend Caddy config without modifying the main Caddyfile. Files matching `site-*.conf` are auto-imported (gitignored, user-created). TLS is controlled via `tls-snippet.conf` (all service blocks use `import service_tls`). See `caddy-addon/README.md` for details.
- Custom TLS certificates go in `certs/` directory (gitignored), referenced as `/etc/caddy/certs/` inside the container
### External Compose Files (Supabase/Dify)
@@ -187,6 +188,7 @@ Complex services like Supabase and Dify maintain their own upstream docker-compo
- `scripts/utils.sh` provides `get_*_compose()` getter functions and `build_compose_files_array()` includes them
- `stop_all_services()` in `start_services.py` checks compose file existence (not profile) to ensure cleanup when a profile is removed
- All external compose files use the same project name (`-p localai`) so containers appear together
- **`docker-compose.override.yml`**: User customizations file (gitignored). Both `start_services.py` and `build_compose_files_array()` in `utils.sh` auto-detect and include it last (highest precedence). Users can override any service property without modifying tracked files.
### Secret Generation
@@ -195,6 +197,7 @@ The `scripts/03_generate_secrets.sh` script:
- Creates bcrypt password hashes using Caddy's `hash-password` command
- Preserves existing user-provided values in `.env`
- Supports different secret types via `VARS_TO_GENERATE` map: `password:32`, `jwt`, `api_key`, `base64:64`, `hex:32`
- When called with `--update` flag (during updates), only adds new variables without regenerating existing ones
### Utility Functions (scripts/utils.sh)
@@ -229,11 +232,24 @@ Common profiles:
- `langfuse`: Langfuse observability (includes ClickHouse, MinIO, worker, web)
- `cpu`, `gpu-nvidia`, `gpu-amd`: Ollama hardware profiles (mutually exclusive)
- `cloudflare-tunnel`: Cloudflare Tunnel for zero-trust access (see `cloudflare-instructions.md`)
- `supabase`: Supabase BaaS (external compose, cloned at runtime; mutually exclusive with `dify`)
- `dify`: Dify AI platform (external compose, cloned at runtime; mutually exclusive with `supabase`)
- `gost`: HTTP/HTTPS proxy for routing AI service outbound traffic
- `python-runner`: Internal Python execution environment (no external access)
- `searxng`, `letta`, `lightrag`, `libretranslate`, `crawl4ai`, `docling`, `waha`, `comfyui`, `paddleocr`, `ragapp`, `gotenberg`, `postiz`: Additional optional services
## Architecture Patterns
### Docker Compose YAML Anchors
`docker-compose.yml` defines reusable anchors at the top:
- `x-logging: &default-logging` - `json-file` with `max-size: 1m`, `max-file: 1`
- `x-proxy-env: &proxy-env` - HTTP/HTTPS proxy vars from `GOST_PROXY_URL`/`GOST_NO_PROXY`
- `x-n8n: &service-n8n` - Full n8n service definition (reused by workers via `extends`)
- `x-ollama: &service-ollama` - Ollama service definition (reused by CPU/GPU variants)
- `x-init-ollama: &init-ollama` - Ollama model pre-puller (auto-pulls `qwen2.5:7b-instruct-q4_K_M` and `nomic-embed-text`)
- `x-n8n-worker-runner: &service-n8n-worker-runner` - Runner template for worker generation
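The anchor list above can be sketched in compose syntax (service and image names are illustrative, not from this repo's file):

```yaml
x-logging: &default-logging
  driver: json-file
  options:
    max-size: 1m
    max-file: "1"

services:
  example-service:              # illustrative service
    image: alpine:latest        # illustrative image
    logging: *default-logging   # reuses the shared logging block
```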
### Healthchecks
Services should define healthchecks for proper dependency management:
@@ -310,6 +326,10 @@ Directories in `PRESERVE_DIRS` (defined in `scripts/utils.sh`) survive git updat
These are backed up before `git reset --hard` and restored after.
### Restart Behavior
`scripts/restart.sh` stops all services first, then starts external stacks (Supabase/Dify) separately before the main stack (10s delay between). This is required because external compose files use relative volume paths that resolve from their own directory.
## Common Issues and Solutions
### Service won't start after adding
@@ -330,7 +350,11 @@ These are backed up before `git reset --hard` and restored after.
## File Locations
- Shared files accessible by n8n: `./shared` (mounted as `/data/shared` in n8n)
- n8n backup/workflows: `n8n/backup/workflows/` (mounted as `/backup` in n8n containers)
- n8n storage: Docker volume `localai_n8n_storage`
- Flowise storage: `~/.flowise` on host (mounted from user's home directory, not a named volume)
- Custom TLS certificates: `certs/` (gitignored, mounted as `/etc/caddy/certs/`)
- Caddy addon configs: `caddy-addon/site-*.conf` (gitignored, auto-imported)
- Service-specific volumes: Defined in `volumes:` section at top of `docker-compose.yml`
- Installation logs: stdout during script execution
- Service logs: `docker compose -p localai logs <service>`
@@ -357,6 +381,10 @@ bash -n scripts/generate_n8n_workers.sh
bash -n scripts/apply_update.sh
bash -n scripts/update.sh
bash -n scripts/install.sh
bash -n scripts/restart.sh
bash -n scripts/doctor.sh
bash -n scripts/setup_custom_tls.sh
bash -n scripts/docker_cleanup.sh
```
### Full Testing

View File

@@ -107,6 +107,12 @@ import /etc/caddy/addons/tls-snippet.conf
reverse_proxy temporal-ui:8080
}
# Uptime Kuma
{$UPTIME_KUMA_HOSTNAME} {
import service_tls
reverse_proxy uptime-kuma:3001
}
# Databasus
{$DATABASUS_HOSTNAME} {
import service_tls

View File

@@ -112,6 +112,8 @@ The installer also makes the following powerful open-source tools **available fo
✅ [**Supabase**](https://supabase.com/) - An open-source alternative to Firebase, providing database storage, user authentication, and more. It's a popular choice for AI applications.
✅ [**Uptime Kuma**](https://github.com/louislam/uptime-kuma) - Self-hosted uptime monitoring tool with notifications
✅ [**WAHA**](https://waha.devlike.pro/) - WhatsApp HTTP API (REST API) that you can configure in a click! 3 engines: WEBJS (browser based), NOWEB (websocket nodejs), GOWS (websocket go).
✅ [**Weaviate**](https://weaviate.io/) - An open-source AI-native vector database with a focus on scalability and ease of use. It can be used for RAG, hybrid search, and more.
@@ -204,6 +206,7 @@ After successful installation, your services are up and running! Here's how to g
- **RAGFlow:** `ragflow.yourdomain.com`
- **SearXNG:** `searxng.yourdomain.com`
- **Supabase (Dashboard):** `supabase.yourdomain.com`
- **Uptime Kuma:** `uptime-kuma.yourdomain.com` (Uptime monitoring dashboard)
- **WAHA:** `waha.yourdomain.com` (WhatsApp HTTP API; engines: WEBJS, NOWEB, GOWS)
- **Weaviate:** `weaviate.yourdomain.com`

View File

@@ -1 +1 @@
1.4.2

View File

@@ -36,6 +36,7 @@ volumes:
ragflow_redis_data:
temporal_elasticsearch_data:
valkey-data:
uptime_kuma_data:
weaviate_data:
# Shared logging configuration for services
@@ -82,7 +83,7 @@ x-n8n: &service-n8n
N8N_LOG_LEVEL: ${N8N_LOG_LEVEL:-info}
N8N_LOG_OUTPUT: ${N8N_LOG_OUTPUT:-console}
N8N_METRICS: true
N8N_PAYLOAD_SIZE_MAX: ${N8N_PAYLOAD_SIZE_MAX:-256}
N8N_PERSONALIZATION_ENABLED: false
N8N_RESTRICT_FILE_ACCESS_TO: /data/shared
N8N_RUNNERS_AUTH_TOKEN: ${N8N_RUNNERS_AUTH_TOKEN}
@@ -380,6 +381,7 @@ services:
SEARXNG_PASSWORD_HASH: ${SEARXNG_PASSWORD_HASH}
SEARXNG_USERNAME: ${SEARXNG_USERNAME}
SUPABASE_HOSTNAME: ${SUPABASE_HOSTNAME}
UPTIME_KUMA_HOSTNAME: ${UPTIME_KUMA_HOSTNAME}
WAHA_HOSTNAME: ${WAHA_HOSTNAME}
WEAVIATE_HOSTNAME: ${WEAVIATE_HOSTNAME}
WEBUI_HOSTNAME: ${WEBUI_HOSTNAME}
@@ -542,7 +544,7 @@ services:
postgres:
container_name: postgres
image: pgvector/pgvector:pg${POSTGRES_VERSION:-17}
restart: unless-stopped
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
@@ -906,6 +908,7 @@ services:
volumes:
- postiz-config:/config/
- postiz-uploads:/uploads/
- ./postiz.env:/app/.env:ro
depends_on:
postgres:
condition: service_healthy
@@ -1275,3 +1278,21 @@ services:
timeout: 10s
retries: 5
start_period: 30s
uptime-kuma:
image: louislam/uptime-kuma:2
container_name: uptime-kuma
profiles: ["uptime-kuma"]
restart: unless-stopped
logging: *default-logging
environment:
<<: *proxy-env
UPTIME_KUMA_WS_ORIGIN_CHECK: bypass
volumes:
- uptime_kuma_data:/app/data
healthcheck:
test: ["CMD-SHELL", "node -e \"const http=require('http');const r=http.get('http://localhost:3001',res=>{process.exit(res.statusCode<400?0:1)});r.on('error',()=>process.exit(1));r.setTimeout(5000,()=>{r.destroy();process.exit(1)})\""]
interval: 30s
timeout: 10s
retries: 5
start_period: 30s

View File

@@ -115,6 +115,8 @@ declare -A VARS_TO_GENERATE=(
["RAGFLOW_MINIO_ROOT_PASSWORD"]="password:32"
["RAGFLOW_MYSQL_ROOT_PASSWORD"]="password:32"
["RAGFLOW_REDIS_PASSWORD"]="password:32"
["S3_PROTOCOL_ACCESS_KEY_ID"]="hex:32"
["S3_PROTOCOL_ACCESS_KEY_SECRET"]="hex:64"
["SEARXNG_PASSWORD"]="password:32" # Added SearXNG admin password ["SEARXNG_PASSWORD"]="password:32" # Added SearXNG admin password
["SECRET_KEY_BASE"]="base64:64" # 48 bytes -> 64 chars ["SECRET_KEY_BASE"]="base64:64" # 48 bytes -> 64 chars
["TEMPORAL_UI_PASSWORD"]="password:32" # Temporal UI basic auth password ["TEMPORAL_UI_PASSWORD"]="password:32" # Temporal UI basic auth password


@@ -67,6 +67,7 @@ base_services_data=(
   "ragflow" "RAGFlow (Deep document understanding RAG engine)"
   "searxng" "SearXNG (Private Metasearch Engine)"
   "supabase" "Supabase (Backend as a Service)"
+  "uptime-kuma" "Uptime Kuma (Uptime Monitoring)"
   "waha" "WAHA WhatsApp HTTP API (NOWEB engine)"
   "weaviate" "Weaviate (Vector Database with API Key Auth)"
 )


@@ -103,6 +103,9 @@ fi
 if is_profile_active "postiz"; then
   echo -e " ${GREEN}*${NC} ${WHITE}Postiz${NC}: Create your account on first login"
 fi
+if is_profile_active "uptime-kuma"; then
+  echo -e " ${GREEN}*${NC} ${WHITE}Uptime Kuma${NC}: Create your account on first login"
+fi
 if is_profile_active "gost"; then
   echo -e " ${GREEN}*${NC} ${WHITE}Gost Proxy${NC}: Routing AI traffic through external proxy"
 fi


@@ -354,6 +354,16 @@ if is_profile_active "postiz"; then
   }")
 fi
+# Uptime Kuma
+if is_profile_active "uptime-kuma"; then
+  SERVICES_ARRAY+=(" \"uptime-kuma\": {
+    \"hostname\": \"$(json_escape "$UPTIME_KUMA_HOSTNAME")\",
+    \"credentials\": {
+      \"note\": \"Create account on first login\"
+    }
+  }")
+fi
 # WAHA
 if is_profile_active "waha"; then
   SERVICES_ARRAY+=(" \"waha\": {


@@ -10,6 +10,7 @@
 # - docker-compose.n8n-workers.yml (if exists and n8n profile active)
 # - supabase/docker/docker-compose.yml (if exists and supabase profile active)
 # - dify/docker/docker-compose.yaml (if exists and dify profile active)
+# - docker-compose.override.yml (if exists, user overrides with highest precedence)
 #
 # Usage: bash scripts/restart.sh
 # =============================================================================
@@ -33,6 +34,20 @@ EXTERNAL_SERVICE_INIT_DELAY=10
 # Build compose files array (sets global COMPOSE_FILES)
 build_compose_files_array
+# Ensure postiz.env exists if Postiz is enabled (required for volume mount)
+# This is a safety net for cases where restart runs without start_services.py
+# (e.g., git pull + make restart instead of make update)
+if is_profile_active "postiz"; then
+  if [ -d "$PROJECT_ROOT/postiz.env" ]; then
+    log_warning "postiz.env exists as a directory (created by Docker). Removing and recreating as file."
+    rm -rf "$PROJECT_ROOT/postiz.env"
+    touch "$PROJECT_ROOT/postiz.env"
+  elif [ ! -f "$PROJECT_ROOT/postiz.env" ]; then
+    log_warning "postiz.env not found, creating empty file. Run 'make update' to generate full config."
+    touch "$PROJECT_ROOT/postiz.env"
+  fi
+fi
 log_info "Restarting services..."
 log_info "Using compose files: ${COMPOSE_FILES[*]}"
@@ -71,6 +86,10 @@ MAIN_COMPOSE_FILES=("-f" "$PROJECT_ROOT/docker-compose.yml")
 if path=$(get_n8n_workers_compose); then
   MAIN_COMPOSE_FILES+=("-f" "$path")
 fi
+OVERRIDE_COMPOSE="$PROJECT_ROOT/docker-compose.override.yml"
+if [ -f "$OVERRIDE_COMPOSE" ]; then
+  MAIN_COMPOSE_FILES+=("-f" "$OVERRIDE_COMPOSE")
+fi
 # Start main services
 log_info "Starting main services..."


@@ -72,7 +72,7 @@ echo ""
 # Core services (always checked)
 log_subheader "Core Services"
-check_image_update "postgres" "postgres:${POSTGRES_VERSION:-17}-alpine"
+check_image_update "postgres" "pgvector/pgvector:pg${POSTGRES_VERSION:-17}"
 check_image_update "redis" "valkey/valkey:8-alpine"
 check_image_update "caddy" "caddy:2-alpine"
@@ -139,6 +139,11 @@ if is_profile_active "appsmith"; then
   check_image_update "appsmith" "appsmith/appsmith-ce:release"
 fi
+if is_profile_active "uptime-kuma"; then
+  log_subheader "Uptime Kuma"
+  check_image_update "uptime-kuma" "louislam/uptime-kuma:2"
+fi
 # Summary
 log_divider
 echo ""


@@ -353,6 +353,7 @@ get_dify_compose() {
 }
 # Build array of all active compose files (main + external services)
+# Appends docker-compose.override.yml last if it exists (user overrides, highest precedence)
 # IMPORTANT: Requires COMPOSE_PROFILES to be set before calling (via load_env)
 # Usage: build_compose_files_array; docker compose "${COMPOSE_FILES[@]}" up -d
 # Result is stored in global COMPOSE_FILES array
@@ -369,6 +370,12 @@ build_compose_files_array() {
   if path=$(get_dify_compose); then
     COMPOSE_FILES+=("-f" "$path")
   fi
+  # Include user overrides last (highest precedence)
+  local override="$PROJECT_ROOT/docker-compose.override.yml"
+  if [ -f "$override" ]; then
+    COMPOSE_FILES+=("-f" "$override")
+  fi
 }
 #=============================================================================
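Every compose entry point (restart.sh, the stop/start paths in start_services.py, and `build_compose_files_array`) now appends `docker-compose.override.yml` last, which is also Docker Compose's conventional override file name. A minimal override a user might drop in could look like this (the contents below are illustrative, not part of the repo):

```yaml
# docker-compose.override.yml
services:
  n8n:
    environment:
      GENERIC_TIMEZONE: "Europe/Berlin"  # example: override a single variable
```

Because the file is passed with the final `-f`, its values win over the same keys defined in any earlier compose file.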


@@ -71,18 +71,40 @@ def clone_supabase_repo():
 os.chdir("..")
 def prepare_supabase_env():
-    """Copy .env to .env in supabase/docker."""
+    """Copy .env to supabase/docker/.env, or sync new variables if it already exists."""
     if not is_supabase_enabled():
         print("Supabase is not enabled, skipping env preparation.")
         return
     env_path = os.path.join("supabase", "docker", ".env")
-    env_example_path = os.path.join(".env")
+    root_env_path = ".env"
+    # Do not overwrite existing Supabase env to avoid credential drift
     if os.path.exists(env_path):
-        print(f"Supabase env already exists at {env_path}, not overwriting.")
+        # Sync new variables from root .env that don't exist in supabase .env
+        print(f"Syncing new variables from root .env to {env_path}...")
+        root_env = dotenv_values(root_env_path)
+        supabase_env = dotenv_values(env_path)
+        new_vars = []
+        for key, value in root_env.items():
+            if key not in supabase_env and value is not None:
+                # Quote values to handle special characters safely
+                if '$' in value:
+                    new_vars.append(f"{key}='{value}'")
+                else:
+                    new_vars.append(f'{key}="{value}"')
+        if new_vars:
+            with open(env_path, 'r') as f:
+                existing_content = f.read()
+            sync_header = "# --- Variables synced from root .env ---"
+            with open(env_path, 'a') as f:
+                if sync_header not in existing_content:
+                    f.write(f"\n{sync_header}\n")
+                for var in new_vars:
+                    f.write(f"{var}\n")
+            print(f"Synced {len(new_vars)} new variable(s) to Supabase env.")
+        else:
+            print("Supabase env is up to date, no new variables to sync.")
         return
     print("Copying .env in root to .env in supabase/docker...")
-    shutil.copyfile(env_example_path, env_path)
+    shutil.copyfile(root_env_path, env_path)
 def clone_dify_repo():
     """Clone the Dify repository using sparse checkout if not already present."""
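The append-only merge in `prepare_supabase_env` boils down to two rules: a key is written only if it is absent from the existing Supabase env (so generated credentials are never overwritten), and values containing `$` are single-quoted so dotenv-style parsers do not attempt variable expansion. The same logic in isolation, using hypothetical helper names:

```python
def quote_env_line(key: str, value: str) -> str:
    # Single quotes stop `$` from being treated as variable expansion
    # by dotenv/shell-style parsers; everything else is double-quoted.
    return f"{key}='{value}'" if "$" in value else f'{key}="{value}"'

def vars_to_sync(root_env: dict, target_env: dict) -> list:
    """Lines to append: keys present in root_env but missing from target_env."""
    return [
        quote_env_line(k, v)
        for k, v in root_env.items()
        if k not in target_env and v is not None
    ]
```

For example, `vars_to_sync({"REGION": "local"}, {})` yields a line for `REGION`, while a key already present in the target is left untouched.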
@@ -166,6 +188,76 @@ def prepare_dify_env():
     with open(env_path, 'w') as f:
         f.write("\n".join(lines) + "\n")
+def is_postiz_enabled():
+    """Check if 'postiz' is in COMPOSE_PROFILES in .env file."""
+    env_values = dotenv_values(".env")
+    compose_profiles = env_values.get("COMPOSE_PROFILES", "")
+    return "postiz" in compose_profiles.split(',')
+def prepare_postiz_env():
+    """Generate postiz.env for mounting as /app/.env in Postiz container.
+    The Postiz image uses dotenv-cli (dotenv -e ../../.env) to load config.
+    Always regenerated to reflect current .env values.
+    """
+    if not is_postiz_enabled():
+        print("Postiz is not enabled, skipping env preparation.")
+        return
+    print("Generating postiz.env from root .env values...")
+    root_env = dotenv_values(".env")
+    hostname = root_env.get("POSTIZ_HOSTNAME", "")
+    frontend_url = f"https://{hostname}" if hostname else ""
+    env_vars = {
+        "BACKEND_INTERNAL_URL": "http://localhost:3000",
+        "DATABASE_URL": f"postgresql://postgres:{root_env.get('POSTGRES_PASSWORD', '')}@postgres:5432/{root_env.get('POSTIZ_DB_NAME', 'postiz')}?schema=postiz",
+        "DISABLE_REGISTRATION": root_env.get("POSTIZ_DISABLE_REGISTRATION", "false"),
+        "FRONTEND_URL": frontend_url,
+        "IS_GENERAL": "true",
+        "JWT_SECRET": root_env.get("JWT_SECRET", ""),
+        "MAIN_URL": frontend_url,
+        "NEXT_PUBLIC_BACKEND_URL": f"{frontend_url}/api" if frontend_url else "",
+        "NEXT_PUBLIC_UPLOAD_DIRECTORY": "/uploads",
+        "REDIS_URL": "redis://redis:6379",
+        "STORAGE_PROVIDER": "local",
+        "TEMPORAL_ADDRESS": "temporal:7233",
+        "UPLOAD_DIRECTORY": "/uploads",
+    }
+    # Social media API keys — direct pass-through from root .env
+    social_keys = [
+        "X_API_KEY", "X_API_SECRET",
+        "LINKEDIN_CLIENT_ID", "LINKEDIN_CLIENT_SECRET",
+        "REDDIT_CLIENT_ID", "REDDIT_CLIENT_SECRET",
+        "GITHUB_CLIENT_ID", "GITHUB_CLIENT_SECRET",
+        "BEEHIIVE_API_KEY", "BEEHIIVE_PUBLICATION_ID",
+        "THREADS_APP_ID", "THREADS_APP_SECRET",
+        "FACEBOOK_APP_ID", "FACEBOOK_APP_SECRET",
+        "YOUTUBE_CLIENT_ID", "YOUTUBE_CLIENT_SECRET",
+        "TIKTOK_CLIENT_ID", "TIKTOK_CLIENT_SECRET",
+        "PINTEREST_CLIENT_ID", "PINTEREST_CLIENT_SECRET",
+        "DRIBBBLE_CLIENT_ID", "DRIBBBLE_CLIENT_SECRET",
+        "DISCORD_CLIENT_ID", "DISCORD_CLIENT_SECRET",
+        "DISCORD_BOT_TOKEN_ID",
+        "SLACK_ID", "SLACK_SECRET", "SLACK_SIGNING_SECRET",
+        "MASTODON_URL", "MASTODON_CLIENT_ID", "MASTODON_CLIENT_SECRET",
+    ]
+    for key in social_keys:
+        env_vars[key] = root_env.get(key, "")
+    # Handle case where Docker created postiz.env as a directory
+    if os.path.isdir("postiz.env"):
+        print("Warning: postiz.env exists as a directory (likely created by Docker). Removing...")
+        shutil.rmtree("postiz.env")
+    with open("postiz.env", 'w') as f:
+        for key, value in env_vars.items():
+            f.write(f'{key}="{value}"\n')
+    print(f"Generated postiz.env with {len(env_vars)} variables.")
 def stop_existing_containers():
     """Stop and remove existing containers for our unified project ('localai')."""
     print("Stopping and removing existing containers for the unified project 'localai'...")
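The generated `DATABASE_URL` points Postiz at the shared `postgres` service, but the `?schema=postiz` query parameter keeps its tables in a dedicated schema inside the shared database. The construction can be sketched as a standalone function (hypothetical helper, mirroring the f-string in `prepare_postiz_env`):

```python
def postiz_database_url(password: str, db_name: str = "postiz") -> str:
    # Host "postgres" and port 5432 are the in-network address of the
    # shared compose service; schema=postiz isolates Postiz tables.
    return (
        f"postgresql://postgres:{password}@postgres:5432/"
        f"{db_name}?schema=postiz"
    )
```

One caveat: the generator interpolates `POSTGRES_PASSWORD` verbatim, so a password containing URL-reserved characters (`@`, `/`, `:`) would need percent-encoding to produce a valid connection string.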
@@ -195,6 +287,11 @@ def stop_existing_containers():
     if os.path.exists(n8n_workers_compose_path):
         cmd.extend(["-f", n8n_workers_compose_path])
+    # Include user overrides if present
+    override_path = "docker-compose.override.yml"
+    if os.path.exists(override_path):
+        cmd.extend(["-f", override_path])
     cmd.extend(["down"])
     run_command(cmd)
@@ -230,6 +327,11 @@ def start_local_ai():
     if os.path.exists(n8n_workers_compose_path):
         compose_files.extend(["-f", n8n_workers_compose_path])
+    # Include user overrides if present (must be last for highest precedence)
+    override_path = "docker-compose.override.yml"
+    if os.path.exists(override_path):
+        compose_files.extend(["-f", override_path])
     # Explicitly build services and pull newer base images first.
     print("Checking for newer base images and building services...")
     build_cmd = ["docker", "compose", "-p", "localai"] + compose_files + ["build", "--pull"]
@@ -394,7 +496,10 @@ def main():
     # Generate SearXNG secret key and check docker-compose.yml
     generate_searxng_secret_key()
     check_and_fix_docker_compose_for_searxng()
+    # Generate Postiz env file
+    prepare_postiz_env()
     stop_existing_containers()
     # Start Supabase first


@@ -420,6 +420,14 @@
     category: 'tools',
     docsUrl: 'https://docs.python.org'
   },
+  'uptime-kuma': {
+    name: 'Uptime Kuma',
+    description: 'Uptime Monitoring Dashboard',
+    icon: 'UK',
+    color: 'bg-[#5CDD8B]',
+    category: 'monitoring',
+    docsUrl: 'https://github.com/louislam/uptime-kuma'
+  },
   'cloudflare-tunnel': {
     name: 'Cloudflare Tunnel',
     description: 'Zero-Trust Network Access',