revert: restore -p localai to preserve user data volumes

Switching the project name from 'localai' to directory-based naming would
cause users to lose all data stored in Docker volumes (workflows,
databases, configs), since volume names are prefixed with the project name.
Yury Kossakovsky
2025-12-09 18:55:57 -07:00
parent 5bf2d4cf31
commit 5dc994eec6
6 changed files with 29 additions and 48 deletions
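The data-loss risk comes from how Docker Compose namespaces named volumes: `<project>_<volume>`. A minimal sketch of that naming rule (the `compose_volume_name` helper is hypothetical, for illustration only):

```python
def compose_volume_name(project: str, volume: str) -> str:
    # Hypothetical helper illustrating Compose's default naming:
    # named volumes are prefixed with the project name.
    return f"{project}_{volume}"

# Under the restored '-p localai', n8n data stays in the existing volume:
print(compose_volume_name("localai", "n8n_storage"))       # localai_n8n_storage
# Directory-based naming would point Compose at a different, empty volume:
print(compose_volume_name("n8n-installer", "n8n_storage")) # n8n-installer_n8n_storage
```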

View File

@@ -151,14 +151,14 @@ bash scripts/03_generate_secrets.sh
 ```
 - Start (or recreate) only the affected services:
 ```bash
-docker compose up -d --no-deps --force-recreate caddy
+docker compose -p localai up -d --no-deps --force-recreate caddy
 # If your service was added/changed
-docker compose up -d --no-deps --force-recreate myservice
+docker compose -p localai up -d --no-deps --force-recreate myservice
 ```
 - Check logs:
 ```bash
-docker compose logs -f --tail=200 myservice | cat
-docker compose logs -f --tail=200 caddy | cat
+docker compose -p localai logs -f --tail=200 myservice | cat
+docker compose -p localai logs -f --tail=200 caddy | cat
 ```
 ## 10) Quick checklist

View File

@@ -42,16 +42,16 @@ sudo bash ./scripts/04_wizard.sh
 ```bash
 # Start all enabled profile services
-docker compose up -d
+docker compose -p localai up -d
 # View logs for a specific service
-docker compose logs -f --tail=200 <service-name> | cat
+docker compose -p localai logs -f --tail=200 <service-name> | cat
 # Recreate a single service (e.g., after config changes)
-docker compose up -d --no-deps --force-recreate <service-name>
+docker compose -p localai up -d --no-deps --force-recreate <service-name>
 # Stop all services
-docker compose down
+docker compose -p localai down
 # Remove unused Docker resources
 sudo bash ./scripts/docker_cleanup.sh
@@ -67,11 +67,11 @@ bash ./scripts/03_generate_secrets.sh
 grep COMPOSE_PROFILES .env
 # View Caddy logs for reverse proxy issues
-docker compose logs -f caddy
+docker compose -p localai logs -f caddy
 # Test n8n worker scaling
 # Edit N8N_WORKER_COUNT in .env, then:
-docker compose up -d --scale n8n-worker=<count>
+docker compose -p localai up -d --scale n8n-worker=<count>
 ```
 ## Adding a New Service
@@ -177,7 +177,7 @@ fi
 ### Service won't start after adding
 1. Ensure profile is added to `COMPOSE_PROFILES` in `.env`
-2. Check logs: `docker compose logs <service>`
+2. Check logs: `docker compose -p localai logs <service>`
 3. Verify no port conflicts (no services should publish ports)
 4. Ensure healthcheck is properly defined if service has dependencies
@@ -187,16 +187,16 @@ fi
 - Verify `LETSENCRYPT_EMAIL` is set in `.env`
 ### Password hash generation fails
-- Ensure Caddy container is running: `docker compose up -d caddy`
+- Ensure Caddy container is running: `docker compose -p localai up -d caddy`
 - Script uses: `docker exec caddy caddy hash-password --plaintext "$password"`
 ## File Locations
 - Shared files accessible by n8n: `./shared` (mounted as `/data/shared` in n8n)
-- n8n storage: Docker volume `n8n_storage`
+- n8n storage: Docker volume `localai_n8n_storage`
 - Service-specific volumes: Defined in `volumes:` section at top of `docker-compose.yml`
 - Installation logs: stdout during script execution
-- Service logs: `docker compose logs <service>`
+- Service logs: `docker compose -p localai logs <service>`
 ## Testing Changes
View File

@@ -379,8 +379,8 @@ To disable Cloudflare Tunnel and return to Caddy-only access:
 2. Stop the tunnel and restart services:
 ```bash
-docker compose --profile cloudflare-tunnel down
-docker compose up -d
+docker compose -p localai --profile cloudflare-tunnel down
+docker compose -p localai up -d
 ```
 3. Re-open firewall ports if closed:
View File

@@ -310,7 +310,7 @@ if is_profile_active "python-runner"; then
     echo "Mounted Code Directory: ./python-runner (host) -> /app (container)"
     echo "Entry File: /app/main.py"
     echo "(Note: Internal-only service with no exposed ports; view output via logs)"
-    echo "Logs: docker compose logs -f python-runner"
+    echo "Logs: docker compose -p localai logs -f python-runner"
 fi
 if is_profile_active "n8n" || is_profile_active "langfuse"; then

View File

@@ -77,8 +77,9 @@ if [ -d "$DIFY_DOCKER_DIR" ] && [ -f "$DIFY_COMPOSE_FILE_PATH" ]; then
     COMPOSE_FILES_FOR_PULL+=("-f" "$DIFY_COMPOSE_FILE_PATH")
 fi
+# Use the project name "localai" for consistency.
 # This command WILL respect COMPOSE_PROFILES from the .env file (updated by the wizard above).
-$COMPOSE_CMD "${COMPOSE_FILES_FOR_PULL[@]}" pull --ignore-buildable || {
+$COMPOSE_CMD -p "localai" "${COMPOSE_FILES_FOR_PULL[@]}" pull --ignore-buildable || {
     log_error "Failed to pull Docker images for selected services. Check network connection and Docker Hub status."
     exit 1
 }

View File

@@ -3,7 +3,8 @@
 start_services.py
 This script starts the Supabase stack first, waits for it to initialize, and then starts
-the local AI stack.
+the local AI stack. Both stacks use the same Docker Compose project name ("localai")
+so they appear together in Docker Desktop.
 """
 import os
@@ -167,33 +168,12 @@ def prepare_dify_env():
     with open(env_path, 'w') as f:
         f.write("\n".join(lines) + "\n")

-def cleanup_legacy_project():
-    """Clean up containers from legacy 'localai' project name.
-    The project was previously named 'localai' (from the old directory name or profile).
-    After renaming to 'n8n-installer', old containers may still exist under the old project name.
-    This function removes them to prevent container name conflicts.
-    """
-    print("Checking for legacy 'localai' project containers...")
-    try:
-        # This will silently do nothing if no containers exist for the project
-        subprocess.run(
-            ["docker", "compose", "-p", "localai", "down", "--remove-orphans"],
-            check=False,  # Don't fail if project doesn't exist
-            capture_output=True  # Suppress output for cleaner logs
-        )
-    except Exception:
-        pass  # Ignore any errors - this is just cleanup

 def stop_existing_containers():
-    """Stop and remove existing containers."""
-    print("Stopping and removing existing containers...")
-
-    # First, clean up any legacy containers from the old 'localai' project
-    cleanup_legacy_project()
+    """Stop and remove existing containers for our unified project ('localai')."""
+    print("Stopping and removing existing containers for the unified project 'localai'...")

-    # Base command
-    cmd = ["docker", "compose"]
+    # Base command with project name for consistency
+    cmd = ["docker", "compose", "-p", "localai"]

     # Get all profiles from the main docker-compose.yml to ensure all services can be brought down
     all_profiles = get_all_profiles("docker-compose.yml")
@@ -227,7 +207,7 @@ def start_supabase():
         return
     print("Starting Supabase services...")
     run_command([
-        "docker", "compose", "-f", "supabase/docker/docker-compose.yml", "up", "-d"
+        "docker", "compose", "-p", "localai", "-f", "supabase/docker/docker-compose.yml", "up", "-d"
     ])
def start_dify():
@@ -237,7 +217,7 @@ def start_dify():
         return
     print("Starting Dify services...")
     run_command([
-        "docker", "compose", "-f", "dify/docker/docker-compose.yaml", "up", "-d"
+        "docker", "compose", "-p", "localai", "-f", "dify/docker/docker-compose.yaml", "up", "-d"
     ])
def start_local_ai():
@@ -254,13 +234,13 @@ def start_local_ai():
     # Explicitly build services and pull newer base images first.
     print("Checking for newer base images and building services...")
-    build_cmd = ["docker", "compose"] + compose_files + ["build", "--pull"]
+    build_cmd = ["docker", "compose", "-p", "localai"] + compose_files + ["build", "--pull"]
     run_command(build_cmd)
     # Now, start the services using the newly built images. No --build needed as we just built.
     # Use --remove-orphans to clean up containers from profiles that are no longer active
     print("Starting containers...")
-    up_cmd = ["docker", "compose"] + compose_files + ["up", "-d", "--remove-orphans"]
+    up_cmd = ["docker", "compose", "-p", "localai"] + compose_files + ["up", "-d", "--remove-orphans"]
     run_command(up_cmd)
def generate_searxng_secret_key():