Mirror of https://github.com/kossakovsky/n8n-install.git, synced 2026-03-07 22:33:11 +00:00
- Introduced Ollama service with CPU and GPU profiles in docker-compose.yml, allowing users to run large language models locally.
- Added an Ollama selection option in the wizard script for hardware profile configuration.
- Updated README.md to list Ollama as a newly available service.
- Adjusted .env.example to place GRAFANA_HOSTNAME in the correct position.
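The CPU/GPU split described above is commonly implemented with Compose service profiles, so only the variant matching the user's hardware is started. A minimal sketch of that pattern follows; the service names, image tag, and volume name are illustrative assumptions, not the repository's actual file:

```yaml
# Illustrative sketch only -- names and values are assumptions,
# not copied from the repo's docker-compose.yml.
services:
  ollama-cpu:
    profiles: ["cpu"]             # started only with --profile cpu
    image: ollama/ollama:latest
    volumes:
      - ollama_storage:/root/.ollama

  ollama-gpu:
    profiles: ["gpu"]             # started only with --profile gpu
    image: ollama/ollama:latest
    volumes:
      - ollama_storage:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia      # requires the NVIDIA Container Toolkit
              count: all
              capabilities: [gpu]

volumes:
  ollama_storage:
```

With this layout, a wizard script can simply record the chosen profile and launch it with `docker compose --profile gpu up -d` (or `--profile cpu`); services without a matching profile are ignored.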