Mirror of https://github.com/dalekurt/local-llm-stack.git (synced 2026-01-28 07:40:24 +00:00)
Fix watchtower configuration and add CHANGELOG.md
CHANGELOG.md (new file, +35 lines)
@@ -0,0 +1,35 @@
+# Changelog
+
+All notable changes to the Docker Local AI LLM project will be documented in this file.
+
+The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
+and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+## [Unreleased]
+
+## [0.2.0] - 2025-06-06
+
+### Fixed
+- Fixed the watchtower configuration in docker-compose.yml by replacing the invalid `--label-filter=ai-network` flag with the correct `--scope ai-network` flag
+- Resolved the watchtower container restart loop caused by the invalid command flag
+
+### Added
+- Added watchtower service for monitoring container updates
+  - Configured to check for updates every 30 seconds
+  - Cleanup mode enabled to remove old images after updates
+  - Monitor-only mode enabled to avoid automatic updates
+  - Label-enable flag added so that only containers with the watchtower enable label are watched
+  - Scope limited to ai-network containers
+
+- Added pipelines service for Open WebUI
+  - Using the image from ghcr.io/open-webui/pipelines:main
+  - Configured with restart policy set to unless-stopped
+  - Volume mounted at ./data/pipelines:/app/pipelines
+  - Environment variable PIPELINES_API_KEY set
+
+## [0.1.0] - 2025-06-01
+
+### Added
+- Initial project setup with core AI services
+- Docker Compose configuration for local AI and LLM services
+- Network configuration for AI services communication
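For reference, the pipelines service described in the changelog could be assembled from those bullets roughly as follows. This is a hedged sketch, not the actual block from this commit: only the image, restart policy, volume, and the PIPELINES_API_KEY variable come from the changelog; the service and network names are assumptions.

```yaml
# Sketch of the pipelines service as described in the changelog above.
# Assumed: the "pipelines" service name and the ai-network attachment;
# everything else is taken directly from the changelog entries.
services:
  pipelines:
    image: ghcr.io/open-webui/pipelines:main
    restart: unless-stopped
    volumes:
      - ./data/pipelines:/app/pipelines
    environment:
      - PIPELINES_API_KEY=${PIPELINES_API_KEY}
    networks:
      - ai-network
```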
docker-compose.yml
@@ -69,11 +69,11 @@ services:
       - STORAGE_DIR=${STORAGE_DIR}
       - JWT_SECRET=${JWT_SECRET}
       - LLM_PROVIDER=${LLM_PROVIDER}
-      - OLLAMA_BASE_PATH=${OLLAMA_BASE_PATH}
+      - OLLAMA_BASE_PATH=http://host.docker.internal:11434
       - OLLAMA_MODEL_PREF=${OLLAMA_MODEL_PREF}
       - OLLAMA_MODEL_TOKEN_LIMIT=${OLLAMA_MODEL_TOKEN_LIMIT}
       - EMBEDDING_ENGINE=${EMBEDDING_ENGINE}
-      - EMBEDDING_BASE_PATH=${EMBEDDING_BASE_PATH}
+      - EMBEDDING_BASE_PATH=http://host.docker.internal:11434
       - EMBEDDING_MODEL_PREF=${EMBEDDING_MODEL_PREF}
       - EMBEDDING_MODEL_MAX_CHUNK_LENGTH=${EMBEDDING_MODEL_MAX_CHUNK_LENGTH}
       - VECTOR_DB=${VECTOR_DB}
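The two changed lines hardcode the Ollama endpoints to `http://host.docker.internal:11434`, a hostname that resolves to the host machine automatically on Docker Desktop (macOS/Windows). On a plain Linux engine it is not defined by default; a common workaround, sketched below, is to map it to the host gateway. The `anythingllm` service name is an assumption here, standing in for whichever service this environment block belongs to.

```yaml
# Hedged sketch: making host.docker.internal resolvable on a Linux host.
# "anythingllm" is a placeholder for the service patched in the hunk above.
# The extra_hosts entry uses Docker's built-in host-gateway alias
# (Docker Engine 20.10+), so the container can reach Ollama on the host.
services:
  anythingllm:
    extra_hosts:
      - "host.docker.internal:host-gateway"
    environment:
      - OLLAMA_BASE_PATH=http://host.docker.internal:11434
      - EMBEDDING_BASE_PATH=http://host.docker.internal:11434
```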
@@ -177,4 +177,4 @@ services:
     restart: unless-stopped
     volumes:
       - /var/run/docker.sock:/var/run/docker.sock
-    command: --interval 30 --cleanup --label-enable --monitor-only --label-filter=ai-network
+    command: --interval 30 --cleanup --label-enable --monitor-only --scope ai-network
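One subtlety behind this fix: watchtower's `--scope` flag does not match a Docker network name; it matches containers carrying a `com.centurylinklabs.watchtower.scope` label with the same value, and with `--label-enable` only containers that also set the enable label are watched. A minimal sketch of the labels this command expects follows; the `ollama` service name is an assumption, standing in for any container that should be monitored.

```yaml
# Hedged sketch of the labels implied by the fixed watchtower command.
# --label-enable: only containers with the enable label are monitored.
# --scope ai-network: only containers whose scope label matches are
# considered; watchtower itself should carry the scope label too, so it
# can identify its own instance.
services:
  watchtower:
    image: containrrr/watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --interval 30 --cleanup --label-enable --monitor-only --scope ai-network
    labels:
      - "com.centurylinklabs.watchtower.scope=ai-network"
  ollama:  # placeholder for any service that should be watched
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
      - "com.centurylinklabs.watchtower.scope=ai-network"
```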