Merge branch 'main' into New-prompt-
@@ -4,45 +4,38 @@ Here's a step-by-step guide on how to setup an Amazon Lightsail instance to host
## Configuring your instance

(If you know how to create a Lightsail instance, you can skip to the recommended configuration part by clicking [here](#connecting-to-your-newly-created-instance).)

### 1. Create an AWS Account:

If you haven't already, create or log in to your AWS account at https://lightsail.aws.amazon.com.

### 2. Create an Instance:

a. Click "Create Instance."

b. Select the "Instance location." In most cases, the default location works fine.

c. Choose "Linux/Unix" as the image and "Ubuntu 20.04 LTS" as the operating system.

d. Configure the instance plan based on your requirements. A "1 GB RAM, 1 vCPU, 40 GB SSD, and 2 TB transfer" setup is recommended and covers most scenarios.

e. Give your instance a unique name and click "Create Instance."

PS: It may take a few minutes for the instance setup to complete.

### Connecting to your newly created instance

Your instance will be ready a few minutes after creation. To access it, open the instance and click "Connect using SSH."

#### Clone the DocsGPT Repository

A terminal window will pop up, and the first step is to clone the DocsGPT Git repository:

`git clone https://github.com/arc53/DocsGPT.git`

#### Download the package information

Once the repository has finished cloning, download the package information from all sources with the following command:

`sudo apt update`
@@ -56,13 +49,13 @@ And now install docker-compose:
`sudo apt install docker-compose`

#### Access the DocsGPT Folder

Enter the following command to access the folder in which the DocsGPT docker-compose file is present:

`cd DocsGPT/`

#### Prepare the Environment

Inside the DocsGPT folder, create a `.env` file and copy the contents of `.env_sample` into it.
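As a rough illustration, once the values from `.env_sample` are filled in, the file will contain entries along these lines (the key below is a placeholder, and the exact set of variables comes from `.env_sample`):

```
API_KEY=your-openai-api-key
VITE_API_STREAMING=true
SELF_HOSTED_MODEL=false
```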
@@ -78,16 +71,16 @@ SELF_HOSTED_MODEL=false
To save the file, press CTRL+X, then Y, and then ENTER.

Next, set the correct IP for the Backend by opening the docker-compose.yml file:

`nano docker-compose.yml`

Change line 7 from `VITE_API_HOST=http://localhost:7091`
to `VITE_API_HOST=http://<your instance public IP>:7091`.

This will allow the frontend to connect to the backend.
#### Running the Application

You're almost there! Now that all the necessary bits and pieces have been installed, it is time to run the application. To do so, use the following command:
@@ -97,16 +90,19 @@ Launching it for the first time will take a few minutes to download all the nece
Once this is done, you can go ahead and close the terminal window.

#### Enabling Ports

a. Before you are able to access your live instance, you must first enable the port that it is using.

b. Open your Lightsail instance and head to "Networking".

c. Then click on "Add rule" under "IPv4 Firewall", enter `5173` as your port, and hit "Create".

Repeat the process for port `7091`.

#### Access your instance

Your instance is now available at your Public IP Address on port `5173`. Enjoy using DocsGPT!

## Other Deployment Options

- [Deploy DocsGPT on Civo Compute Cloud](https://dev.to/rutamhere/deploying-docsgpt-on-civo-compute-c)
@@ -1,24 +1,107 @@
## Launching Web App

**Note**: Make sure you have Docker installed.

**On macOS or Linux:**

Just run the following command:

`./setup.sh`

This command will install all the necessary dependencies and provide you with an option to download the local model or use OpenAI.

If you prefer to follow manual steps, refer to this guide:

1. Open and download this repository with
   `git clone https://github.com/arc53/DocsGPT.git`.
2. Create a `.env` file in your root directory and set your `API_KEY` with your [OpenAI API key](https://platform.openai.com/account/api-keys).
3. Run the following command:
   `docker-compose build && docker-compose up`.
4. Navigate to `http://localhost:5173/`.

To stop, simply press `Ctrl + C`.
**For WINDOWS:**

To run the setup on Windows, you have two options: using the Windows Subsystem for Linux (WSL) or using Git Bash or Command Prompt.

**Option 1: Using Windows Subsystem for Linux (WSL):**

1. Install WSL if you haven't already. You can follow the official Microsoft documentation for installation: https://learn.microsoft.com/en-us/windows/wsl/install.
2. After setting up WSL, open the WSL terminal.
3. Clone the repository and create the `.env` file:
```
git clone https://github.com/arc53/DocsGPT.git
cd DocsGPT
echo "API_KEY=Yourkey" > .env
echo "VITE_API_STREAMING=true" >> .env
```
4. Run the following command to start the setup with Docker Compose:
`./run-with-docker-compose.sh`
5. Open your web browser and navigate to http://localhost:5173/.
6. To stop the setup, just press `Ctrl + C` in the WSL terminal.

**Option 2: Using Git Bash or Command Prompt (CMD):**

1. Install Git for Windows if you haven't already. Download it from the official website: https://gitforwindows.org/.
2. Open Git Bash or Command Prompt.
3. Clone the repository and create the `.env` file:
```
git clone https://github.com/arc53/DocsGPT.git
cd DocsGPT
echo "API_KEY=Yourkey" > .env
echo "VITE_API_STREAMING=true" >> .env
```
4. Run the following command to start the setup with Docker Compose:
`./run-with-docker-compose.sh`
5. Open your web browser and navigate to http://localhost:5173/.
6. To stop the setup, just press `Ctrl + C` in the Git Bash or Command Prompt terminal.

These steps should help you set up and run the project on Windows using either WSL or Git Bash/Command Prompt.

**Important:** Ensure that Docker is installed and properly configured on your Windows system for these steps to work.
### Chrome Extension
#### Installing the Chrome extension:

To enhance your DocsGPT experience, you can install the DocsGPT Chrome extension. Here's how:

1. In the DocsGPT GitHub repository, click on the "Code" button and select "Download ZIP".
2. Unzip the downloaded file to a location you can easily access.
@@ -1,9 +1,25 @@
# API Endpoints Documentation

*Currently, the application provides the following main API endpoints:*

### 1. /api/answer

**Description:**

This endpoint is used to request answers to user-provided questions.

**Request:**

Method: POST
Headers: Content-Type should be set to "application/json; charset=utf-8"
Request Body: JSON object with the following fields:
* **question:** The user's question
* **history:** (Optional) Previous conversation history
* **api_key:** Your API key
* **embeddings_key:** Your embeddings key
* **active_docs:** The location of active documentation
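For illustration, a request body combining these fields might look like the sketch below (all values are placeholders, not real keys or document names):

```json
{
  "question": "What is DocsGPT?",
  "history": null,
  "api_key": "your-openai-api-key",
  "embeddings_key": "your-embeddings-key",
  "active_docs": "default"
}
```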
Here is a JavaScript Fetch Request example:
```js
// answer (POST http://127.0.0.1:5000/api/answer)
fetch("http://127.0.0.1:5000/api/answer", {
@@ -18,8 +34,9 @@ fetch("http://127.0.0.1:5000/api/answer", {
.then(console.log.bind(console))
```

**Response:**

In response, you will get a JSON document containing the answer, the query, and the result:

```json
{
  "answer": " Hi there! How can I help you?\n",
@@ -28,10 +45,17 @@ In response you will get a json document like this one:
}
```

### 2. /api/docs_check

**Description:**

This endpoint makes sure documentation is loaded on the server (just run it every time the user switches between libraries (documentations)).

**Request:**

Method: POST
Headers: Content-Type should be set to "application/json; charset=utf-8"
Request Body: JSON object with the field:
* **docs:** The location of the documentation
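As a sketch, the request body is a single-field JSON object (the value below is a placeholder; use the location of your own documentation):

```json
{
  "docs": "default"
}
```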
```js
// answer (POST http://127.0.0.1:5000/api/docs_check)
fetch("http://127.0.0.1:5000/api/docs_check", {
@@ -45,7 +69,9 @@ fetch("http://127.0.0.1:5000/api/docs_check", {
.then(console.log.bind(console))
```

**Response:**

In response, you will get a JSON document like this one, indicating whether the documentation exists or not:

```json
{
  "status": "exists"
@@ -53,18 +79,36 @@ In response you will get a json document like this one:
```

### 3. /api/combine

**Description:**

This endpoint provides information about available vectors and their locations with a simple GET request.

**Request:**

Method: GET

**Response:**

Response will include:
`date`, `description`, `docLink`, `fullName`, `language`, `location` (local or docshub), `model`, `name`, `version`.

Example of JSON in Docshub and local:
<img width="295" alt="image" src="https://user-images.githubusercontent.com/15183589/224714085-f09f51a4-7a9a-4efb-bd39-798029bb4273.png">
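Since the example above is a screenshot, here is a rough textual sketch of what one entry might look like; only the field names come from the list above, and every value is an invented placeholder:

```json
{
  "name": "pandas",
  "fullName": "Pandas Documentation",
  "date": "2023-01-01",
  "description": "Vector store built from the pandas docs",
  "docLink": "https://pandas.pydata.org/docs/",
  "language": "py",
  "location": "local",
  "model": "openai_text-embedding-ada-002",
  "version": "1.0.0"
}
```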
### 4. /api/upload

**Description:**

This endpoint is used to upload a file that needs to be trained. The response is JSON with a task ID, which can be used to check on the task's progress.

**Request:**

Method: POST
Request Body: A multipart/form-data form with a file upload and additional fields, including "user" and "name."

HTML example:

```html
@@ -79,20 +123,24 @@ HTML example:
</form>
```

**Response:**

JSON response with a status and a task ID that can be used to check the task's progress. For example:

```json
{
  "status": "ok",
  "task_id": "b2684988-9047-428b-bd47-08518679103c"
}
```
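If you prefer JavaScript over an HTML form, an equivalent upload can be sketched with `FormData` as below; the field values are placeholders, and the "file" field name is assumed from the form above:

```js
// upload (POST http://127.0.0.1:5000/api/upload), illustrative sketch
const fileInput = document.querySelector('input[type="file"]'); // any file input on the page
const formData = new FormData();
formData.append("user", "local");            // placeholder user
formData.append("name", "my-docs");          // placeholder documentation name
formData.append("file", fileInput.files[0]); // the "file" field name is assumed from the form above

fetch("http://127.0.0.1:5000/api/upload", {
  "method": "POST",
  "body": formData
})
  .then((res) => res.json())
  .then(console.log.bind(console))
```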
### 5. /api/task_status

**Description:**

This endpoint is used to get the status of a task (`task_id`) from `/api/upload`.

**Request:**

Method: GET
Query Parameter: task_id (the task ID to check)

**Sample JavaScript Fetch Request:**
```js
// Task status (GET http://127.0.0.1:5000/api/task_status)
fetch("http://localhost:5001/api/task_status?task_id=YOUR_TASK_ID", {
  "method": "GET",
  "headers": {
    "Content-Type": "application/json; charset=utf-8"
@@ -102,9 +150,12 @@ fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4f
.then(console.log.bind(console))
```

**Response:**

There are two types of responses:

1. While the task is still running, the 'current' value will show progress from 0 to 100.

```json
{
  "result": {
@@ -114,7 +165,7 @@ There are two types of responses:
}
```

2. When the task is completed:

```json
{
  "result": {
@@ -132,8 +183,14 @@ There are two types of responses:
}
```

### 6. /api/delete_old

**Description:**

This endpoint is used to delete old vector stores.

**Request:**

Method: GET
```js
// Task status (GET http://127.0.0.1:5000/api/docs_check)
fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4fe2e7454d1", {
@@ -144,10 +201,11 @@ fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4f
})
  .then((res) => res.text())
  .then(console.log.bind(console))
```

**Response:**

JSON response indicating the status of the operation:

```json
{ "status": "ok" }
```
@@ -1,29 +1,42 @@
### To Start Chatwoot Extension:

1. **Prepare and Start DocsGPT:**
   - Launch DocsGPT using the instructions in our [wiki](https://github.com/arc53/DocsGPT/wiki).
   - Make sure to load your documentation.

2. **Get Access Token from Chatwoot:**
   - Navigate to Chatwoot.
   - Go to your profile (bottom left), click on profile settings.
   - Scroll to the bottom and copy the **Access Token**.

3. **Set Up Chatwoot Extension:**
   - Navigate to `/extensions/chatwoot`.
   - Copy `.env_sample` and create a `.env` file.
   - Fill in the values in the `.env` file:

   ```env
   docsgpt_url=<docsgpt_api_url>
   chatwoot_url=<chatwoot_url>
   docsgpt_key=<openai_api_key or other llm key>
   chatwoot_token=<from part 2>
   ```

4. **Start the Extension:**
   - Use the command `flask run` to start the extension.

5. **Optional: Extra Validation**
   - In `app.py`, uncomment lines 12-13 and 71-75.
   - Add the following lines to your `.env` file:

   ```env
   account_id=(optional) 1
   assignee_id=(optional) 1
   ```

These Chatwoot values help ensure you respond to the correct widget and handle questions assigned to a specific user.

### Stopping Bot Responses for Specific User or Session:
- If you want the bot to stop responding to questions for a specific user or session, add a label `human-requested` in your conversation.

### Additional Notes:
- For further details on training on other documentation, refer to our [wiki](https://github.com/arc53/DocsGPT/wiki/How-to-train-on-other-documentation).
@@ -4,7 +4,7 @@
Go to your project and install a new dependency: `npm install docsgpt`.

### Usage
Go to your project and in the file where you want to use the widget, import it:
```js
import { DocsGPTWidget } from "docsgpt";
import "docsgpt/dist/style.css";
@@ -14,12 +14,12 @@ import "docsgpt/dist/style.css";
Then you can use it like this: `<DocsGPTWidget />`

DocsGPTWidget takes 3 props:
- `apiHost` — URL of your DocsGPT API.
- `selectDocs` — documentation that you want to use for your widget (e.g. `default` or `local/docs1.zip`).
- `apiKey` — usually it's empty.
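Putting the props together, a typical usage might look like this sketch (the host URL and docs name are placeholders for your own values):

```js
// Placeholders: point apiHost and selectDocs at your own DocsGPT API and documentation
<DocsGPTWidget
  apiHost="http://localhost:7091"
  selectDocs="default"
  apiKey=""
/>
```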
### How to use DocsGPTWidget with [Nextra](https://nextra.site/) (Next.js + MDX)
Install your widget as described above and then go to your `pages/` folder and create a new file `_app.js` with the following content:
```js
import { DocsGPTWidget } from "docsgpt";
import "docsgpt/dist/style.css";
@@ -17,8 +17,7 @@ When using code examples, use the following format:
(code)
{summaries}

Thank you
```
@@ -5,18 +5,18 @@ This AI can use any documentation, but first it needs to be prepared for similar
Start by going to the `/scripts/` folder.

If you open this file, you will see that it uses RST files from the folder to create an `index.faiss` and `index.pkl`.

It currently uses OPEN_AI to create the vector store, so make sure your documentation is not too big. Pandas cost me around $3-$4.

You can usually find documentation on GitHub in the `docs/` folder for most open-source projects.

### 1. Find documentation in .rst/.md and create a folder with it in your scripts directory
- Name it `inputs/`
- Put all your .rst/.md files in there
- The search is recursive, so you don't need to flatten them

If there are no .rst/.md files, just convert whatever you find to .txt and feed it in (don't forget to change the extension in the script).

### 2. Create a .env file in the `scripts/` folder
And write your OpenAI API key inside.
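For illustration only, the file would contain a single line like the sketch below; the exact variable name the script expects is an assumption here, so check the script or its sample env file, and the key is a placeholder:

```
API_KEY=your-openai-api-key
```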
@@ -32,7 +32,7 @@ It will tell you how much it will cost
### 5. Run web app
Once you run it, it will use the new context that is relevant to your documentation.
Make sure you select default in the dropdown in the UI.

## Customization
@@ -41,7 +41,7 @@ You can learn more about options while running ingest.py by running:
`python ingest.py --help`

| Option | Description |
|:--------------------------------:|:------------------------------------------------------------------------------------------------------------------------------:|
| **ingest** | Runs 'ingest' function, converting documentation to Faiss plus Index format |
| --dir TEXT | List of paths to directory for index creation. E.g. --dir inputs --dir inputs2 [default: inputs] |
| --file TEXT | File paths to use (Optional; overrides directory) E.g. --files inputs/1.md --files inputs/2.md |
| --recursive / --no-recursive | Whether to recursively search in subdirectories [default: recursive] |
@@ -56,4 +56,4 @@ You can learn more about options while running ingest.py by running:
| | |
| **convert** | Creates documentation in .md format from source code |
| --dir TEXT | Path to a directory with source code. E.g. --dir inputs [default: inputs] |
| --formats TEXT | Source code language from which to create documentation. Supports py, js and java. E.g. --formats py [default: py] |
@@ -1,4 +1,4 @@
Fortunately, there are many providers of LLMs, and some of them can even be run locally.

There are two models used in the app:
1. Embeddings.
@@ -21,12 +21,16 @@ By default, we use OpenAI's models but if you want to change it or even run it l
You don't need to provide keys if you are happy with users providing theirs, so make sure you set `LLM_NAME` and `EMBEDDINGS_NAME`.

Options:
LLM_NAME (openai, manifest, cohere, Arc53/docsgpt-14b, Arc53/docsgpt-7b-falcon, llama.cpp)
EMBEDDINGS_NAME (openai_text-embedding-ada-002, huggingface_sentence-transformers/all-mpnet-base-v2, huggingface_hkunlp/instructor-large, cohere_medium)

If using Llama, set the `EMBEDDINGS_NAME` to `huggingface_sentence-transformers/all-mpnet-base-v2` and be sure to download [this model](https://d3dg1063dc54p9.cloudfront.net/models/docsgpt-7b-f16.gguf) into the `models/` folder.

Alternatively, if you wish to run Llama locally, you can run `setup.sh` and choose option 1 when prompted. You do not need to manually add the DocsGPT model mentioned above to your `models/` folder if you use `setup.sh`, as the script will manage that step for you.
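Putting that together, a local Llama configuration in your `.env` could look roughly like the sketch below; the names come from the options listed above, so adjust them to your setup:

```
LLM_NAME=llama.cpp
EMBEDDINGS_NAME=huggingface_sentence-transformers/all-mpnet-base-v2
```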
That's it!

### Hosting everything locally and privately (for using our optimised open-source models)
If you are working with important data and don't want anything to leave your premises, this is the option for you.

Make sure you set `SELF_HOSTED_MODEL` to true in your `.env` file, and for your `LLM_NAME` you can use anything that's on Hugging Face.
@@ -1,10 +1,10 @@
If your AI uses external knowledge and is not explicit enough, it is OK, because we try to make DocsGPT friendly.

But if you want to adjust it, here is a simple way:

- Go to `application/prompts/chat_combine_prompt.txt`

- And change it to

```