From 623ed891003f3373967b41b5f22685e85af606ca Mon Sep 17 00:00:00 2001
From: Alex
Date: Wed, 8 Nov 2023 14:34:33 +0000
Subject: [PATCH] Update How-to-use-different-LLM.md

---
 docs/pages/Guides/How-to-use-different-LLM.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/pages/Guides/How-to-use-different-LLM.md b/docs/pages/Guides/How-to-use-different-LLM.md
index 397b5a75..8915011b 100644
--- a/docs/pages/Guides/How-to-use-different-LLM.md
+++ b/docs/pages/Guides/How-to-use-different-LLM.md
@@ -44,5 +44,5 @@ Alternatively, for local Llama setup, run `setup.sh` and choose option 1. The sc
 
 ## Step 3: Local Hosting for Privacy
 
-If working with sensitive data, host everything locally by setting `SELF_HOSTED_MODEL` to true in your `.env`. For `LLM_NAME`, use any model available on Hugging Face.
+If working with sensitive data, host everything locally by setting `LLM_NAME` in your `.env` to either `llama.cpp` or `huggingface`. With `huggingface`, you can use any model available on Hugging Face; with `llama.cpp`, the model must first be converted to GGUF format.
 That's it! Your app is now configured for local and private hosting, ensuring optimal security for critical data.
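
For reference, a minimal `.env` sketch of the two setups the updated line describes (only the `LLM_NAME` variable is taken from the guide; the comments are illustrative and not part of the patch):

```bash
# Local hosting via Hugging Face: any model available on Hugging Face can be used.
LLM_NAME=huggingface

# Local hosting via llama.cpp: the chosen model must first be converted to GGUF format.
# LLM_NAME=llama.cpp
```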