mirror of https://github.com/n8n-io/self-hosted-ai-starter-kit.git
synced 2025-11-29 00:23:13 +00:00
Add AMD GPU support on Linux (#16)
@@ -58,6 +58,14 @@ docker compose --profile gpu-nvidia up
 
 > If you have not used your Nvidia GPU with Docker before, please follow the
 > [Ollama Docker instructions](https://github.com/ollama/ollama/blob/main/docs/docker.md).
 
+### For AMD GPU users on Linux
+
+```
+git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
+cd self-hosted-ai-starter-kit
+docker compose --profile gpu-amd up
+```
+
 #### For Mac / Apple Silicon users
 
 If you’re using a Mac with an M1 or newer processor, you can't expose your GPU
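The `gpu-amd` profile only works if the host exposes the ROCm device nodes that the container maps in. A minimal pre-flight sketch (not part of the starter kit) that checks for them before running `docker compose --profile gpu-amd up`:

```shell
#!/bin/sh
# Hedged sketch: the ollama/ollama:rocm image needs the kernel fusion
# driver node (/dev/kfd) and the DRI render nodes (/dev/dri) on the host.
# check_dev is a hypothetical helper, not part of the repo.
check_dev() {
  if [ -e "$1" ]; then
    echo "found $1"
  else
    echo "missing $1"
  fi
}

check_dev /dev/kfd
check_dev /dev/dri
```

If either node is reported missing, the amdgpu/ROCm driver stack likely needs to be installed on the host first.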
@@ -115,6 +115,14 @@ services:
               count: 1
               capabilities: [gpu]
 
+  ollama-gpu-amd:
+    profiles: ["gpu-amd"]
+    <<: *service-ollama
+    image: ollama/ollama:rocm
+    devices:
+      - "/dev/kfd"
+      - "/dev/dri"
+
   ollama-pull-llama-cpu:
     profiles: ["cpu"]
     <<: *init-ollama
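The `<<: *service-ollama` line is a YAML merge key: the new service inherits every key from the shared anchor, and locally set keys (`image`, `devices`) are layered on top. A minimal sketch of the mechanism; the anchor contents shown here are hypothetical, not the repo's actual values:

```
x-service-ollama: &service-ollama   # hypothetical anchor, for illustration only
  restart: unless-stopped
  ports:
    - "11434:11434"

services:
  ollama-gpu-amd:
    <<: *service-ollama             # inherits restart and ports from the anchor
    image: ollama/ollama:rocm       # local keys override or extend the merge
    devices:
      - "/dev/kfd"
      - "/dev/dri"
```

Running `docker compose --profile gpu-amd config` renders the fully merged configuration, which is a quick way to confirm the service resolves as expected.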
@@ -126,3 +134,10 @@ services:
     <<: *init-ollama
     depends_on:
       - ollama-gpu
+
+  ollama-pull-llama-gpu-amd:
+    profiles: [gpu-amd]
+    <<: *init-ollama
+    image: ollama/ollama:rocm
+    depends_on:
+      - ollama-gpu-amd
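With this commit the kit has three mutually exclusive hardware profiles (`cpu`, `gpu-nvidia`, `gpu-amd`), and exactly one should be chosen per host. A hedged sketch of mapping hardware to the matching profile; `profile_for` is a hypothetical helper, not part of the repo:

```shell
#!/bin/sh
# profile_for maps a hardware name to the compose profile added by this
# commit (hypothetical helper for illustration).
profile_for() {
  case "$1" in
    nvidia) echo "gpu-nvidia" ;;
    amd)    echo "gpu-amd" ;;
    *)      echo "cpu" ;;
  esac
}

# Build the launch command for an AMD host.
echo "docker compose --profile $(profile_for amd) up"
# prints: docker compose --profile gpu-amd up
```

Unknown hardware falls back to the `cpu` profile, which matches the kit's default CPU-only path.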