Mirror of https://github.com/n8n-io/self-hosted-ai-starter-kit.git
(synced 2025-11-29 00:23:13 +00:00)

Commit: Move .env file and simplify setup on mac (#68)
`.env` (7 lines removed):

```diff
--- a/.env
+++ /dev/null
@@ -1,7 +0,0 @@
-POSTGRES_USER=root
-POSTGRES_PASSWORD=password
-POSTGRES_DB=n8n
-
-N8N_ENCRYPTION_KEY=super-secret-key
-N8N_USER_MANAGEMENT_JWT_SECRET=even-more-secret
-N8N_DEFAULT_BINARY_DATA_MODE=filesystem
```
`.env.example` (new file, 13 lines):

```diff
--- /dev/null
+++ b/.env.example
@@ -0,0 +1,13 @@
+POSTGRES_USER=root
+POSTGRES_PASSWORD=password
+POSTGRES_DB=n8n
+
+N8N_ENCRYPTION_KEY=super-secret-key
+N8N_USER_MANAGEMENT_JWT_SECRET=even-more-secret
+N8N_DEFAULT_BINARY_DATA_MODE=filesystem
+
+# For Mac users running OLLAMA locally
+# See https://github.com/n8n-io/self-hosted-ai-starter-kit?tab=readme-ov-file#for-mac--apple-silicon-users
+# OLLAMA_HOST=host.docker.internal:11434
+
+
```
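The values in `.env.example` are deliberate placeholders. A minimal sketch of generating a real `.env` with random secrets (assuming `openssl` is available; this is an illustration, not the kit's official setup step):

```shell
# Write a .env with random secrets instead of the placeholder values above.
# `openssl rand -hex N` prints 2*N random hex characters per call.
cat > .env <<EOF
POSTGRES_USER=root
POSTGRES_PASSWORD=$(openssl rand -hex 16)
POSTGRES_DB=n8n

N8N_ENCRYPTION_KEY=$(openssl rand -hex 32)
N8N_USER_MANAGEMENT_JWT_SECRET=$(openssl rand -hex 32)
N8N_DEFAULT_BINARY_DATA_MODE=filesystem
EOF
```

Since `.env` is now gitignored (see the `.gitignore` change below in this commit), the generated secrets stay out of version control.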
`.gitignore` (new file, 1 line):

```diff
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1 @@
+.env
```
`README.md` (33 lines changed):

````diff
--- a/README.md
+++ b/README.md
@@ -42,15 +42,17 @@ Engineering world, handles large amounts of data safely.
 ```bash
 git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
 cd self-hosted-ai-starter-kit
+cp .env.example .env # you should update secrets and passwords inside
 ```
 
 ### Running n8n using Docker Compose
 
 #### For Nvidia GPU users
 
-```
+```bash
 git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
 cd self-hosted-ai-starter-kit
+cp .env.example .env # you should update secrets and passwords inside
 docker compose --profile gpu-nvidia up
 ```
````
````diff
@@ -60,9 +62,10 @@ docker compose --profile gpu-nvidia up
 
 ### For AMD GPU users on Linux
 
-```
+```bash
 git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
 cd self-hosted-ai-starter-kit
+cp .env.example .env # you should update secrets and passwords inside
 docker compose --profile gpu-amd up
 ```
````
````diff
@@ -80,36 +83,30 @@ If you want to run Ollama on your mac, check the
 [Ollama homepage](https://ollama.com/)
 for installation instructions, and run the starter kit as follows:
 
-```
+```bash
 git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
 cd self-hosted-ai-starter-kit
+cp .env.example .env # you should update secrets and passwords inside
 docker compose up
 ```
 
 ##### For Mac users running OLLAMA locally
 
-If you're running OLLAMA locally on your Mac (not in Docker), you need to modify the OLLAMA_HOST environment variable
-in the n8n service configuration. Update the x-n8n section in your Docker Compose file as follows:
 
-```yaml
-x-n8n: &service-n8n
-  # ... other configurations ...
-  environment:
-    # ... other environment variables ...
-    - OLLAMA_HOST=host.docker.internal:11434
-```
+1. Set OLLAMA_HOST to `host.docker.internal:11434` in your .env file.
+2. Additionally, after you see "Editor is now accessible via: <http://localhost:5678/>":
 
-Additionally, after you see "Editor is now accessible via: <http://localhost:5678/>":
 
-1. Head to <http://localhost:5678/home/credentials>
-2. Click on "Local Ollama service"
-3. Change the base URL to "http://host.docker.internal:11434/"
+   1. Head to <http://localhost:5678/home/credentials>
+   2. Click on "Local Ollama service"
+   3. Change the base URL to "http://host.docker.internal:11434/"
 
 #### For everyone else
 
-```
+```bash
 git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
 cd self-hosted-ai-starter-kit
+cp .env.example .env # you should update secrets and passwords inside
 docker compose --profile cpu up
 ```
````
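For the Mac-local Ollama setup described in that hunk, a quick pre-flight check can confirm the native Ollama daemon is actually listening before n8n is pointed at it. This is an illustrative sketch, not part of the kit; it uses Ollama's `/api/tags` endpoint (which lists installed models) on the default port 11434:

```shell
# Pre-flight check: does a local Ollama answer on its default port 11434?
# `curl -sf` exits non-zero on connection failure or an HTTP error status.
if curl -sf --max-time 2 http://localhost:11434/api/tags >/dev/null; then
  echo "Ollama reachable on :11434"
else
  echo "Ollama not running; start it before 'docker compose up'"
fi
```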
````diff
@@ -154,7 +151,7 @@ docker compose create && docker compose --profile gpu-nvidia up
 
 * ### For Mac / Apple Silicon users
 
-```
+```bash
 docker compose pull
 docker compose create && docker compose up
 ```
````
A final hunk, from the kit's Docker Compose file (the file header for this hunk was lost in this view), makes the Ollama host overridable and marks the `.env` file as explicitly required:

```diff
@@ -19,9 +19,10 @@ x-n8n: &service-n8n
     - N8N_PERSONALIZATION_ENABLED=false
     - N8N_ENCRYPTION_KEY
     - N8N_USER_MANAGEMENT_JWT_SECRET
-    - OLLAMA_HOST=ollama:11434
+    - OLLAMA_HOST=${OLLAMA_HOST:-ollama:11434}
   env_file:
-    - .env
+    - path: .env
+      required: true
 
 x-ollama: &service-ollama
  image: ollama/ollama:latest
```
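The `${OLLAMA_HOST:-ollama:11434}` form in that hunk is standard variable substitution: a value set in `.env` (or the environment) wins, and `ollama:11434` is only a fallback when the variable is unset or empty, which is why Mac users can flip the commented-out line in `.env.example` without touching the Compose file. The same `${VAR:-default}` syntax works in any POSIX shell:

```shell
# ${VAR:-default} expands to $VAR when it is set and non-empty,
# otherwise to the default on the right of `:-`.
unset OLLAMA_HOST
echo "${OLLAMA_HOST:-ollama:11434}"          # falls back to ollama:11434

OLLAMA_HOST=host.docker.internal:11434
echo "${OLLAMA_HOST:-ollama:11434}"          # prints host.docker.internal:11434
```

The `path:`/`required: true` long form of `env_file` in the same hunk makes Compose fail fast with a clear error when `.env` is missing, instead of silently starting without the secrets.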