Mirror of https://github.com/n8n-io/self-hosted-ai-starter-kit.git (synced 2025-11-29 00:23:13 +00:00)
Update README with Ollama instructions for both Nvidia and mac users (#13)
Changed file: README.md (44 changed lines)
````diff
@@ -47,6 +47,33 @@ cd self-hosted-ai-starter-kit
 docker compose --profile gpu-nvidia up
 ```
+
+> [!NOTE]
+> If you have not used your Nvidia GPU with Docker before, please follow the
+> [Ollama Docker instructions](https://github.com/ollama/ollama/blob/main/docs/docker.md).
+
+### For Mac / Apple Silicon users
+
+If you’re using a Mac with an M1 or newer processor, you can’t expose your GPU
+to the Docker instance, unfortunately. There are two options in this case:
+
+1. Run the starter kit fully on CPU, like in the section "For everyone else"
+   below
+2. Run Ollama on your Mac for faster inference, and connect to that from the
+   n8n instance
+
+If you want to run Ollama on your Mac, check the
+[Ollama homepage](https://ollama.com/)
+for installation instructions, and run the starter kit as follows:
+
+```
+git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
+cd self-hosted-ai-starter-kit
+docker compose up
+```
+
+After you’ve followed the quick start set-up below, change the Ollama
+credentials by using `http://host.docker.internal:11434/` as the host.
+
 ### For everyone else
 
 ```
````
````diff
@@ -55,14 +82,6 @@ cd self-hosted-ai-starter-kit
 docker compose --profile cpu up
 ```
 
-> [!TIP]
-> If you’re using a Mac with an M1 or newer processor, you can run Ollama on
-> your host machine for faster GPU inference. Unfortunately, you can’t expose
-> the GPU to Docker instances. Check the
-> [Ollama homepage](https://ollama.com/) for installation instructions, and
-> use `http://host.docker.internal:11434/` as the Ollama host in your
-> credentials.
-
 ## ⚡️ Quick start and usage
 
 The main component of the self-hosted AI starter kit is a docker compose file
````
````diff
@@ -101,6 +120,13 @@ language model and Qdrant as your vector store.
 
 ```
 docker compose --profile gpu-nvidia pull
+docker compose create && docker compose --profile gpu-nvidia up
+```
+
+### For Mac / Apple Silicon users
+
+```
+docker compose pull
 docker compose create && docker compose up
 ```
 
````
````diff
@@ -108,7 +134,7 @@ docker compose create && docker compose up
 
 ```
 docker compose --profile cpu pull
-docker compose create && docker compose up
+docker compose create && docker compose --profile cpu up
 ```
 
 ## 👓 Recommended reading
````
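For Mac users pointing n8n at a host-side Ollama via `http://host.docker.internal:11434/`, it can help to verify the endpoint before editing the n8n credentials. A minimal sketch, not part of the commit: it assumes Ollama is already running on the host and uses the public `curlimages/curl` image; the `--add-host` flag is only needed on Linux, where Docker does not define `host.docker.internal` by default.

```shell
# First confirm Ollama answers on the host itself
# (its root endpoint replies "Ollama is running").
curl -s http://localhost:11434/

# Then check the same endpoint from inside a throwaway container.
# Docker Desktop (Mac/Windows) resolves host.docker.internal automatically;
# on Linux, map it to the host gateway explicitly.
docker run --rm --add-host=host.docker.internal:host-gateway \
  curlimages/curl -s http://host.docker.internal:11434/
```

If both commands print a response, the credential host value in the diff above should work as-is.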