models+guide-upd+extentions

This commit is contained in:
Pavel
2025-02-12 14:38:21 +03:00
parent 28cdbe407c
commit b44b9d8016
31 changed files with 955 additions and 311 deletions

View File

@@ -1,9 +1,11 @@
import Image from 'next/image';
const iconMap = {
'Deploy on Civo Compute Cloud': '/civo.png',
'Deploy on DigitalOcean Droplet': '/digitalocean.png',
'Deploy on Kamatera Cloud': '/kamatera.png',
'Amazon Lightsail': '/lightsail.png',
'Railway': '/railway.png',
'Civo Compute Cloud': '/civo.png',
'DigitalOcean Droplet': '/digitalocean.png',
'Kamatera Cloud': '/kamatera.png',
};

View File

@@ -0,0 +1,110 @@
---
title: Hosting DocsGPT on Amazon Lightsail
description: A step-by-step guide to self-hosting DocsGPT on an Amazon Lightsail instance.
display: hidden
---
# Self-hosting DocsGPT on Amazon Lightsail
Here's a step-by-step guide on how to set up an Amazon Lightsail instance to host DocsGPT.
## Configuring your instance
(If you know how to create a Lightsail instance, you can skip to the recommended configuration part by clicking [here](#connecting-to-your-newly-created-instance)).
### 1. Create an AWS Account:
If you haven't already, create or log in to your AWS account at https://lightsail.aws.amazon.com.
### 2. Create an Instance:
a. Click "Create Instance."
b. Select the "Instance location." In most cases, the default location works fine.
c. Choose "Linux/Unix" as the image and "Ubuntu 20.04 LTS" as the Operating System.
d. Configure the instance plan based on your requirements. A "1 GB, 1vCPU, 40GB SSD, and 2TB transfer" setup is recommended for most scenarios.
e. Give your instance a unique name and click "Create Instance."
Note: It may take a few minutes for the instance setup to complete.
### Connecting to Your Newly Created Instance
Your instance will be ready a few minutes after creation. To access it, open the instance and click "Connect using SSH."
#### Clone the DocsGPT Repository
A terminal window will pop up, and the first step will be to clone the DocsGPT Git repository:
`git clone https://github.com/arc53/DocsGPT.git`
#### Download the package information
Once it has finished cloning the repository, it is time to download the package information from all sources. To do so, simply enter the following command:
`sudo apt update`
#### Install Docker and Docker Compose
The DocsGPT backend and worker are written in Python, the frontend in React, and the whole application is containerized using Docker. To install Docker and Docker Compose, enter the following commands:
`sudo apt install docker.io`
And now install docker-compose:
`sudo apt install docker-compose`
#### Access the DocsGPT Folder
Enter the following command to access the folder in which the DocsGPT docker-compose file is present.
`cd DocsGPT/`
#### Prepare the Environment
Inside the DocsGPT folder create a `.env` file and copy the contents of `.env_sample` into it.
`nano .env`
Make sure your `.env` file looks like this:
```
OPENAI_API_KEY=(Your OpenAI API key)
VITE_API_STREAMING=true
SELF_HOSTED_MODEL=false
```
To save the file, press CTRL+X, then Y, and then ENTER.
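If you prefer not to use an editor, the same file can be created in one step with a heredoc. This is just a sketch: run it from the DocsGPT folder and substitute your real OpenAI API key for the placeholder.

```shell
# Create the .env file non-interactively; the key value is a placeholder.
cat > .env <<'EOF'
OPENAI_API_KEY=(Your OpenAI API key)
VITE_API_STREAMING=true
SELF_HOSTED_MODEL=false
EOF
# Show the result to confirm the contents.
cat .env
```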
Next, set the correct IP for the backend by opening the `deployment/docker-compose.yaml` file:
`nano deployment/docker-compose.yaml`
Change line 7 from `VITE_API_HOST=http://localhost:7091`
to `VITE_API_HOST=http://<your instance public IP>:7091`.
This will allow the frontend to connect to the backend.
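The same substitution can be scripted with `sed` instead of editing the file by hand. The sketch below demonstrates it on a stand-in file; in practice you would target `deployment/docker-compose.yaml`, and the IP `203.0.113.10` is a documentation placeholder for your instance's real public IP.

```shell
PUBLIC_IP="203.0.113.10"   # placeholder; use your instance's public IP
# Stand-in for line 7 of deployment/docker-compose.yaml:
printf 'VITE_API_HOST=http://localhost:7091\n' > compose-line.txt
# Replace localhost with the public IP in place.
sed -i "s|http://localhost:7091|http://${PUBLIC_IP}:7091|" compose-line.txt
cat compose-line.txt
```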
#### Running the Application
You're almost there! Now that all the necessary bits and pieces have been installed, it is time to run the application. To do so, use the following command:
`sudo docker compose -f deployment/docker-compose.yaml up -d`
Launching it for the first time will take a few minutes to download all the necessary dependencies and build.
Once this is done you can go ahead and close the terminal window.
#### Enabling Ports
a. Before you can access your live instance, you must first open the ports it uses.
b. Open your Lightsail instance and head to "Networking".
c. Then click on "Add rule" under "IPv4 Firewall", enter `5173` as your port, and hit "Create".
Repeat the process for port `7091`.
#### Access your instance
Your instance is now available at your Public IP Address on port 5173. Enjoy using DocsGPT!

View File

@@ -1,128 +1,31 @@
import { DeploymentCards } from '../../components/DeploymentCards';
# Self-hosting DocsGPT on Amazon Lightsail
Here's a step-by-step guide on how to set up an Amazon Lightsail instance to host DocsGPT.
## Configuring your instance
(If you know how to create a Lightsail instance, you can skip to the recommended configuration part by clicking [here](#connecting-to-your-newly-created-instance)).
### 1. Create an AWS Account:
If you haven't already, create or log in to your AWS account at https://lightsail.aws.amazon.com.
### 2. Create an Instance:
a. Click "Create Instance."
b. Select the "Instance location." In most cases, the default location works fine.
c. Choose "Linux/Unix" as the image and "Ubuntu 20.04 LTS" as the Operating System.
d. Configure the instance plan based on your requirements. A "1 GB, 1vCPU, 40GB SSD, and 2TB transfer" setup is recommended for most scenarios.
e. Give your instance a unique name and click "Create Instance."
Note: It may take a few minutes for the instance setup to complete.
### Connecting to Your Newly Created Instance
Your instance will be ready a few minutes after creation. To access it, open the instance and click "Connect using SSH."
#### Clone the DocsGPT Repository
A terminal window will pop up, and the first step will be to clone the DocsGPT Git repository:
`git clone https://github.com/arc53/DocsGPT.git`
#### Download the package information
Once it has finished cloning the repository, it is time to download the package information from all sources. To do so, simply enter the following command:
`sudo apt update`
#### Install Docker and Docker Compose
The DocsGPT backend and worker are written in Python, the frontend in React, and the whole application is containerized using Docker. To install Docker and Docker Compose, enter the following commands:
`sudo apt install docker.io`
And now install docker-compose:
`sudo apt install docker-compose`
#### Access the DocsGPT Folder
Enter the following command to access the folder in which the DocsGPT docker-compose file is present.
`cd DocsGPT/`
#### Prepare the Environment
Inside the DocsGPT folder create a `.env` file and copy the contents of `.env_sample` into it.
`nano .env`
Make sure your `.env` file looks like this:
```
OPENAI_API_KEY=(Your OpenAI API key)
VITE_API_STREAMING=true
SELF_HOSTED_MODEL=false
```
To save the file, press CTRL+X, then Y, and then ENTER.
Next, set the correct IP for the backend by opening the `deployment/docker-compose.yaml` file:
`nano deployment/docker-compose.yaml`
Change line 7 from `VITE_API_HOST=http://localhost:7091`
to `VITE_API_HOST=http://<your instance public IP>:7091`.
This will allow the frontend to connect to the backend.
#### Running the Application
You're almost there! Now that all the necessary bits and pieces have been installed, it is time to run the application. To do so, use the following command:
`sudo docker compose -f deployment/docker-compose.yaml up -d`
Launching it for the first time will take a few minutes to download all the necessary dependencies and build.
Once this is done you can go ahead and close the terminal window.
#### Enabling Ports
a. Before you can access your live instance, you must first open the ports it uses.
b. Open your Lightsail instance and head to "Networking".
c. Then click on "Add rule" under "IPv4 Firewall", enter `5173` as your port, and hit "Create".
Repeat the process for port `7091`.
#### Access your instance
Your instance is now available at your Public IP Address on port 5173. Enjoy using DocsGPT!
## Other Deployment Options
# Deployment Guides
<DeploymentCards
items={[
{
title: 'Deploy on Civo Compute Cloud',
title: 'Amazon Lightsail',
link: 'https://docs.docsgpt.cloud/Deploying/Amazon-Lightsail',
description: 'Self-hosting DocsGPT on Amazon Lightsail'
},
{
title: 'Railway',
link: 'https://docs.docsgpt.cloud/Deploying/Railway',
description: 'Hosting DocsGPT on Railway'
},
{
title: 'Civo Compute Cloud',
link: 'https://dev.to/rutamhere/deploying-docsgpt-on-civo-compute-c',
description: 'Step-by-step guide for Civo deployment'
},
{
title: 'Deploy on DigitalOcean Droplet',
title: 'DigitalOcean Droplet',
link: 'https://dev.to/rutamhere/deploying-docsgpt-on-digitalocean-droplet-50ea',
description: 'Guide for DigitalOcean deployment'
},
{
title: 'Deploy on Kamatera Cloud',
title: 'Kamatera Cloud',
link: 'https://dev.to/rutamhere/deploying-docsgpt-on-kamatera-performance-cloud-1bj',
description: 'Kamatera deployment tutorial'
}

View File

@@ -0,0 +1,258 @@
---
title: Hosting DocsGPT on Railway
description: Learn how to deploy your own DocsGPT instance on Railway with this step-by-step tutorial
---
# Self-hosting DocsGPT on Railway
Here's a step-by-step guide on how to host DocsGPT on Railway App.
First, clone and set up the project locally so you can run, test, and modify it.
### 1. Clone and GitHub SetUp
a. Open a terminal (Windows shell or, preferably, Git Bash).
b. Type `git clone https://github.com/arc53/DocsGPT.git`
#### Download the package information
Once it has finished cloning the repository, it is time to download the package information from all sources. To do so, simply enter the following command:
`sudo apt update`
#### Install Docker and Docker Compose
The DocsGPT backend and worker are written in Python, the frontend in React, and the whole application is containerized using Docker. To install Docker and Docker Compose, enter the following commands:
`sudo apt install docker.io`
And now install docker-compose:
`sudo apt install docker-compose`
#### Access the DocsGPT Folder
Enter the following command to access the folder in which the DocsGPT docker-compose file is present.
`cd DocsGPT/`
#### Prepare the Environment
Inside the DocsGPT folder create a `.env` file and copy the contents of `.env_sample` into it.
`nano .env`
Make sure your `.env` file looks like this:
```
OPENAI_API_KEY=(Your OpenAI API key)
VITE_API_STREAMING=true
SELF_HOSTED_MODEL=false
```
To save the file, press CTRL+X, then Y, and then ENTER.
Next, set the correct IP for the backend by opening the `deployment/docker-compose.yaml` file:
`nano deployment/docker-compose.yaml`
Change line 7 from `VITE_API_HOST=http://localhost:7091`
to `VITE_API_HOST=http://<your instance public IP>:7091`.
This will allow the frontend to connect to the backend.
#### Running the Application
You're almost there! Now that all the necessary bits and pieces have been installed, it is time to run the application. To do so, use the following command:
`sudo docker compose -f deployment/docker-compose.yaml up -d`
Launching it for the first time will take a few minutes to download all the necessary dependencies and build.
Once this is done you can go ahead and close the terminal window.
### 2. Pushing it to your own Repository
a. Create a Repository on your GitHub.
b. Open a terminal in the directory of the cloned project.
c. Type `git init`
d. `git add .`
e. `git commit -m "first-commit"`
f. `git remote add origin <your repository link>`
g. `git push --set-upstream origin master`
Your local files will now be pushed to your GitHub account. :)
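If you want to rehearse the commit steps before running them against your real repository, the sketch below does so in a scratch directory. Everything here — the path, the identity, and the sample file — is a placeholder; the actual push happens only in your DocsGPT folder with your real remote.

```shell
# Rehearse git init/add/commit in a throwaway directory.
mkdir -p /tmp/docsgpt-git-demo && cd /tmp/docsgpt-git-demo
git init -q
git config user.email "you@example.com"   # placeholder identity
git config user.name "Your Name"
echo "DocsGPT" > README.md
git add .
git commit -qm "first-commit"
# Confirm the commit exists before pushing for real.
git log --oneline
```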
### 3. Create a Railway Account:
If you haven't already, create or log in to your Railway account by visiting [Railway](https://railway.app/).
Sign up via **GitHub** (recommended).
### 4. Start New Project:
a. Open the Railway app and click on "Start New Project."
b. Choose any of the options available (recommended: "**Deploy from GitHub Repo**").
c. Choose the required Repository from your GitHub.
d. Configure and allow access to modify your GitHub content from the pop-up window.
e. Agree to all the terms and conditions.
Note: It may take a few minutes for the account setup to complete.
#### You will get a free $5 trial credit (use it to try the service, then purchase a plan if satisfied and needed)
### 5. Connecting Your New Railway App with GitHub
a. Choose the DocsGPT repo you want to deploy from the list of your GitHub repositories.
b. Click on Deploy now.
![Railway project tabs](/Railway-selection.png)
c. Select the Variables tab.
d. Upload the `.env` file here that you used for the local setup.
e. Now go to the Settings tab.
f. Go to "Networking" and click on "Generate Domain Name" to get the URL of your hosted project.
g. You can update the root directory, build command, and installation command as needed.
*[However, it is recommended to leave these options at their defaults unless you really need to change them.]*
Your own DocsGPT is now available at the generated domain URL. :)

View File

@@ -18,5 +18,15 @@
"Hosting-the-app": {
"title": "☁️ Hosting DocsGPT",
"href": "/Deploying/Hosting-the-app"
},
"Amazon-Lightsail": {
"title": "Hosting DocsGPT on Amazon Lightsail",
"href": "/Deploying/Amazon-Lightsail",
"display": "hidden"
},
"Railway": {
"title": "Hosting DocsGPT on Railway",
"href": "/Deploying/Railway",
"display": "hidden"
}
}

View File

@@ -1,3 +1,7 @@
---
title: Comprehensive Guide to Setting Up the Chatwoot Extension with DocsGPT
description: This step-by-step guide walks you through the process of setting up the Chatwoot extension with DocsGPT, enabling seamless integration for automated responses and enhanced customer support. Learn how to launch DocsGPT, retrieve your Chatwoot access token, configure the .env file, and start the extension.
---
## Chatwoot Extension Setup Guide
### Step 1: Prepare and Start DocsGPT

View File

@@ -1,3 +1,7 @@
---
title: Add DocsGPT Chrome Extension to Your Browser
description: Install the DocsGPT Chrome extension to access AI-powered document assistance directly from your browser for enhanced productivity.
---
import {Steps} from 'nextra/components'
import { Callout } from 'nextra/components'

View File

@@ -3,7 +3,7 @@
"title": "🔑 Getting API key",
"href": "/Extensions/api-key-guide"
},
"react-widget": {
"chat-widget": {
"title": "💬️ Chat Widget",
"href": "/Extensions/chat-widget"
},

View File

@@ -1,22 +1,20 @@
## Guide to DocsGPT API Keys
---
title: API Keys for DocsGPT Integrations
description: Learn how to obtain, understand, and use DocsGPT API keys to integrate DocsGPT into your external applications and widgets.
---
DocsGPT API keys are essential for developers and users who wish to integrate the DocsGPT models into external applications, such as our widget. This guide will walk you through the steps of obtaining an API key, starting from uploading your document to understanding the key variables associated with API keys.
# Guide to DocsGPT API Keys
### Uploading Your Document
DocsGPT API keys are essential for developers and users who wish to integrate the DocsGPT models into external applications, such as [our widget](/Extensions/chat-widget). This guide will walk you through the steps of obtaining an API key, starting from uploading your document to understanding the key variables associated with API keys.
Before creating your first API key, you must upload the document that will be linked to this key. You can upload your document through two methods:
- **GUI Web App Upload:** A user-friendly graphical interface that allows for easy upload and management of documents.
- **Using `/api/upload` Method:** For users comfortable with API calls, this method provides a direct way to upload documents.
### Obtaining Your API Key
## Obtaining Your API Key
After uploading your document, you can obtain an API key either through the graphical user interface or via an API call:
- **Graphical User Interface:** Navigate to the Settings section of the DocsGPT web app, find the API Keys option, and press 'Create New' to generate your key.
- **API Call:** Alternatively, you can use the `/api/create_api_key` endpoint to create a new API key. For detailed instructions, visit [DocsGPT API Documentation](https://docs.docsgpt.cloud/API/API-docs#8-apicreate_api_key).
- **API Call:** Alternatively, you can use the `/api/create_api_key` endpoint to create a new API key. For detailed instructions, visit [DocsGPT API Documentation](https://gptcloud.arc53.com/).
### Understanding Key Variables
## Understanding Key Variables
Upon creating your API key, you will encounter several key variables. Each serves a specific purpose:
@@ -27,4 +25,4 @@ Upon creating your API key, you will encounter several key variables. Each serve
With your API key ready, you can now integrate DocsGPT into your application, such as the DocsGPT Widget or any other software, via `/api/answer` or `/stream` endpoints. The source document is preset with the API key, allowing you to bypass fields like `selectDocs` and `active_docs` during implementation.
Congratulations on taking the first step towards enhancing your applications with DocsGPT! With this guide, you're now equipped to navigate the process of obtaining and understanding DocsGPT API keys.
Congratulations on taking the first step towards enhancing your applications with DocsGPT!

View File

@@ -1,12 +1,12 @@
### Setting up the DocsGPT Widget in Your React Project
# Setting up the DocsGPT Widget in Your React Project
### Introduction:
## Introduction:
The DocsGPT Widget is a powerful tool that allows you to integrate AI-powered documentation assistance into your web applications. This guide will walk you through the installation and usage of the DocsGPT Widget in your React project. Whether you're building a web app or a knowledge base, this widget can enhance your user experience.
### Installation
## Installation
First, make sure you have Node.js and npm installed in your project. Then go to your project and install a new dependency: `npm install docsgpt`.
### Usage
## Usage
In the file where you want to use the widget, import it and include the CSS file:
```js
import { DocsGPTWidget } from "docsgpt";
@@ -29,7 +29,7 @@ Now, you can use the widget in your component like this :
buttonBg = "#222327"
/>
```
### Props Table for DocsGPT Widget
## Props Table for DocsGPT Widget
| **Prop** | **Type** | **Default Value** | **Description** |
|--------------------|------------------|-------------------------------------------------------------|-----------------------------------------------------------------------------------------------------|
@@ -47,7 +47,7 @@ Now, you can use the widget in your component like this :
---
### Notes
## Notes
- **Customizing Props:** All properties can be overridden when embedding the widget. For example, you can provide a unique avatar, title, or color scheme to better align with your brand.
- **Default Theme:** The widget defaults to the dark theme unless explicitly set to `"light"`.
- **API Key:** If the `apiKey` is not required for your application, leave it empty.
@@ -55,7 +55,7 @@ Now, you can use the widget in your component like this :
This table provides a clear overview of the customization options available for tailoring the DocsGPT widget to fit your application.
### How to use DocsGPTWidget with [Nextra](https://nextra.site/) (Next.js + MDX)
## How to use DocsGPTWidget with [Nextra](https://nextra.site/) (Next.js + MDX)
Install your widget as described above and then go to your `pages/` folder and create a new file `_app.js` with the following content:
```js
import { DocsGPTWidget } from "docsgpt";
@@ -69,7 +69,7 @@ export default function MyApp({ Component, pageProps }) {
)
}
```
### How to use DocsGPTWidget with HTML
## How to use DocsGPTWidget with HTML
```html
<!DOCTYPE html>
<html lang="en">

View File

@@ -0,0 +1,158 @@
---
title: Integrate DocsGPT Chat Widget into Your Web Application
description: Embed the DocsGPT Widget in your React, HTML, or Nextra projects to provide AI-powered chat functionality to your users.
---
import { Tabs } from 'nextra/components'
# Integrating DocsGPT Chat Widget
## Introduction
The DocsGPT Widget is a powerful tool that allows you to integrate AI-driven document assistance directly into your web applications. This guide will walk you through embedding the DocsGPT Widget into your projects, whether you're using React, plain HTML, or Nextra. Enhance your user experience by providing seamless access to intelligent document search and chatbot capabilities.
Try out the interactive widget showcase and customize its parameters at the [DocsGPT Widget Demo](https://widget.docsgpt.cloud/).
## Setup
<Tabs items={['React', 'HTML', 'Nextra']}>
<Tabs.Tab>
### Installation
Make sure you have Node.js and npm (or yarn, pnpm) installed in your project. Navigate to your project directory in the terminal and install the `docsgpt` package:
```bash npm
npm install docsgpt
```
### Usage
In your React component file, import the `DocsGPTWidget` component:
```js
import { DocsGPTWidget } from "docsgpt";
```
Now, you can embed the widget within your React component's JSX:
```jsx
<DocsGPTWidget
apiHost="https://your-docsgpt-api.com"
apiKey=""
avatar="https://d3dg1063dc54p9.cloudfront.net/cute-docsgpt.png"
title="Get AI assistance"
description="DocsGPT's AI Chatbot is here to help"
heroTitle="Welcome to DocsGPT !"
heroDescription="This chatbot is built with DocsGPT and utilises GenAI,
please review important information using sources."
theme="dark"
buttonIcon="https://your-icon"
buttonBg="#222327"
/>
```
</Tabs.Tab>
<Tabs.Tab>
### Installation
To use the DocsGPT Widget directly in HTML, include the widget script from a CDN in your HTML file:
```html filename="html"
<script
src="https://unpkg.com/docsgpt/dist/legacy/main.js"
type="module"
></script>
```
### Usage
In your HTML `<body>`, add a `<div>` element where you want to render the widget. Set an `id` for easy targeting.
```html filename="html"
<div id="app"></div>
```
Then, in a `<script type="module">` block, use the `renderDocsGPTWidget` function to initialize the widget, passing the `id` of your `<div>` and a configuration object. To link the widget to your DocsGPT API and specific documents, pass the relevant parameters within the configuration object of `renderDocsGPTWidget`.
```html filename="html"
<!DOCTYPE html>
<div id="app"></div>
<script type="module">
window.onload = function() {
renderDocsGPTWidget('app', {
apiHost: 'http://localhost:7001', // Replace with your API Host
apiKey:"",
avatar: 'https://d3dg1063dc54p9.cloudfront.net/cute-docsgpt.png',
title: 'Get AI assistance',
description: "DocsGPT's AI Chatbot is here to help",
heroTitle: 'Welcome to DocsGPT!',
heroDescription: 'This chatbot utilises GenAI, please review important information.',
theme:"dark",
buttonIcon:"https://your-icon",
buttonBg:"#222327"
});
}
</script>
```
</Tabs.Tab>
<Tabs.Tab>
### Installation
Make sure you have Node.js and npm (or yarn, pnpm) installed in your project. Navigate to your project directory in the terminal and install the `docsgpt` package:
```bash npm
npm install docsgpt
```
### Usage with Nextra (Next.js + MDX)
To integrate the DocsGPT Widget into a [Nextra](https://nextra.site/) documentation site (built with Next.js and MDX), create or modify your `pages/_app.js` file as follows:
```js filename="pages/_app.js"
import { DocsGPTWidget } from "docsgpt";
export default function MyApp({ Component, pageProps }) {
return (
<>
<Component {...pageProps} />
<DocsGPTWidget selectDocs="local/docsgpt-sep.zip/"/>
</>
)
}
```
</Tabs.Tab>
</Tabs>
---
## Properties Table
The DocsGPT Widget offers a range of customizable properties that allow you to tailor its appearance and behavior to perfectly match your web application. These parameters can be modified directly when embedding the widget in your React components or HTML code. Below is a detailed overview of each available prop:
| **Prop** | **Type** | **Default Value** | **Description** |
|--------------------|------------------|-------------------------------------------------------------|-----------------------------------------------------------------------------------------------------|
| **`apiHost`** | `string` | `"https://gptcloud.arc53.com"` | **Required.** The URL of your DocsGPT API backend. This endpoint handles vector search and chatbot queries. |
| **`apiKey`** | `string` | `"your-api-key"` | API key for authentication with your DocsGPT API. Leave empty if no authentication is required. |
| **`avatar`** | `string` | [`dino-icon-link`](https://d3dg1063dc54p9.cloudfront.net/cute-docsgpt.png) | URL for the avatar image displayed in the chatbot interface. |
| **`title`** | `string` | `"Get AI assistance"` | Title text shown in the chatbot header. |
| **`description`** | `string` | `"DocsGPT's AI Chatbot is here to help"` | Sub-title or descriptive text displayed below the title in the chatbot header. |
| **`heroTitle`** | `string` | `"Welcome to DocsGPT !"` | Welcome message displayed when the chatbot is initially opened. |
| **`heroDescription`** | `string` | `"This chatbot is built with DocsGPT and utilises GenAI, please review important information using sources."` | Introductory text providing context or disclaimers about the chatbot. |
| **`theme`** | `"dark" \| "light"` | `"dark"` | Color theme of the widget interface. Options: `"dark"` or `"light"`. Defaults to `"dark"`. |
| **`buttonIcon`** | `string` | `"https://your-icon"` | URL for the icon image used in the widget's launch button. |
| **`buttonBg`** | `string` | `"#222327"` | Background color of the widget's launch button. |
| **`size`** | `"small" \| "medium"` | `"medium"` | Size of the widget. Options: `"small"` or `"medium"`. Defaults to `"medium"`. |
---
## Notes on Widget Properties
* **Full Customization:** Every property listed in the table can be customized. Override the defaults to create a widget that perfectly matches your branding and application context. From avatars and titles to color schemes, you have fine-grained control over the widget's presentation.
* **API Key Handling:** The `apiKey` prop is optional. Only include it if your DocsGPT backend API is configured to require API key authentication. `apiHost` for DocsGPT Cloud is `https://gptcloud.arc53.com/`
## Explore and Customize Further
The DocsGPT Widget is fully open-source, allowing for deep customization and extension beyond the readily available props.
The complete source code for the React-based widget is available in the `extensions/react-widget` directory within the main [DocsGPT GitHub Repository](https://github.com/arc53/DocsGPT). Feel free to explore the code, fork the repository, and tailor the widget to your exact requirements.

View File

@@ -0,0 +1,116 @@
---
title: Integrate DocsGPT Search Bar into Your Web Application
description: Embed the DocsGPT Search Bar Widget in your React or HTML projects to provide AI-powered document search functionality to your users.
---
import { Tabs } from 'nextra/components'
# Integrating DocsGPT Search Bar Widget
## Introduction
The DocsGPT Search Bar Widget offers a simple yet powerful way to embed AI-powered document search directly into your web applications. This widget allows users to perform searches across your documents or pages, enabling them to quickly find the information they need. This guide will walk you through embedding the Search Bar Widget into your projects, whether you're using React or plain HTML.
Try out the interactive widget showcase and customize its parameters at the [DocsGPT Widget Demo](https://widget.docsgpt.cloud/).
## Setup
<Tabs items={['React', 'HTML']}>
<Tabs.Tab>
## React Setup
### Installation
Make sure you have Node.js and npm (or yarn, pnpm) installed in your project. Navigate to your project directory in the terminal and install the `docsgpt` package:
```bash npm
npm install docsgpt
```
### Usage
In your React component file, import the `SearchBar` component:
```js
import { SearchBar } from "docsgpt";
```
Now, you can embed the widget within your React component's JSX:
```jsx
<SearchBar
apiKey="your-api-key"
apiHost="https://your-docsgpt-api.com"
theme="light"
placeholder="Search or Ask AI..."
width="300px"
/>
```
</Tabs.Tab>
<Tabs.Tab>
### Installation
To use the DocsGPT Search Bar Widget directly in HTML, include the widget script from a CDN in your HTML file:
```html filename="html"
<script
src="https://unpkg.com/docsgpt/dist/legacy/main.js"
type="module"
></script>
```
### Usage
In your HTML `<body>`, add a `<div>` element where you want to render the Search Bar Widget. Set an `id` for easy targeting.
```html filename="html"
<div id="search-bar-container"></div>
```
Then, in a `<script type="module">` block, use the `renderSearchBar` function to initialize the widget, passing the `id` of your `<div>` and a configuration object. To link the widget to your DocsGPT API and configure its behaviour, pass the relevant parameters within the configuration object of `renderSearchBar`.
```html filename="html"
<!DOCTYPE html>
<div id="search-bar-container"></div>
<script type="module">
window.onload = function() {
renderSearchBar('search-bar-container', {
apiKey: 'your-api-key-here',
apiHost: 'https://your-api-host.com',
theme: 'light',
placeholder: 'Search here...',
width: '300px'
});
}
</script>
```
</Tabs.Tab>
</Tabs>
---
## Properties Table
The DocsGPT Search Bar Widget offers a range of customizable properties that allow you to tailor its appearance and behavior to perfectly match your web application. These parameters can be modified directly when embedding the widget in your React components or HTML code. Below is a detailed overview of each available prop:
| **Prop** | **Type** | **Default Value** | **Description** |
|-----------------|-----------|-------------------------------------|--------------------------------------------------------------------------------------------------|
| **`apiKey`** | `string` | `"your-api-key"` | API key for authentication with your DocsGPT API. Leave empty if no authentication is required. |
| **`apiHost`** | `string` | `"https://gptcloud.arc53.com"` | **Required.** The URL of your DocsGPT API backend. This endpoint handles vector similarity search queries. |
| **`theme`** | `"dark" \| "light"` | `"dark"` | Color theme of the search bar. Options: `"dark"` or `"light"`. Defaults to `"dark"`. |
| **`placeholder`** | `string` | `"Search or Ask AI..."` | Placeholder text displayed in the search input field. |
| **`width`** | `string` | `"256px"` | Width of the search bar. Accepts any valid CSS width value (e.g., `"300px"`, `"100%"`, `"20rem"`). |
---
## Notes on Widget Properties
* **Full Customization:** Every property listed in the table can be customized. Override the defaults to create a Search Bar Widget that perfectly matches your branding and application context.
* **API Key Handling:** The `apiKey` prop is optional. Only include it if your DocsGPT backend API is configured to require API key authentication. `apiHost` for DocsGPT Cloud is `https://gptcloud.arc53.com/`
## Explore and Customize Further
The DocsGPT Search Bar Widget is fully open-source, allowing for deep customization and extension beyond the readily available props.
The complete source code for the React-based widget is available in the `extensions/react-widget` directory within the main [DocsGPT GitHub Repository](https://github.com/arc53/DocsGPT). Feel free to explore the code, fork the repository, and tailor the widget to your exact requirements.

View File

@@ -1,3 +1,8 @@
---
title: Customizing Prompts
description: This guide will explain how to change prompts in DocsGPT and why it might be beneficial. Additionally, this article explains additional variables that can be used in prompts.
---
import Image from 'next/image'
# Customizing the Main Prompt
@@ -34,6 +39,8 @@ When using code examples, use the following format:
{summaries}
```
Note that `{summaries}` allows the model to see and respond to your uploaded documents. If you don't want this functionality, you can safely remove it from the customized prompt.
Feel free to customize the prompt to align it with your specific use case or the kind of responses you want from the AI. For example, you can focus on specific document types, industries, or topics to get more targeted results.
## Conclusion

View File

@@ -1,3 +1,7 @@
---
title: How to Train on Other Documentation
description: A step-by-step guide on how to effectively train DocsGPT on additional documentation sources.
---
import { Callout } from 'nextra/components'
import Image from 'next/image'

View File

@@ -1,3 +1,7 @@
---
title:
description:
---
import { Callout } from 'nextra/components'
import Image from 'next/image'
@@ -26,24 +30,13 @@ Choose the LLM of your choice.
<Image src="/llms.gif" alt="prompts" width={800} height={500} />
### For Open source llm change:
<Steps >
<Steps>
### Step 1
For open source you have to edit .env file with LLM_NAME with their desired LLM name.
For the open-source version, please edit `LLM_NAME`, `MODEL_NAME` and others in the `.env` file. Refer to [⚙️ App Configuration](/Deploying/DocsGPT-Settings) for more information.
### Step 2
All the supported LLM providers are here application/llm and you can check what env variable are needed for each
The list of the latest supported LLMs is at https://github.com/arc53/DocsGPT/blob/main/application/llm/llm_creator.py
### Step 3
Visit application/llm and select the file of your selected llm and there you will find the specific requirements needed to be filled in order to use it,i.e API key of that llm.
Visit [☁️ Cloud Providers](/Models/cloud-providers) for the updated list of online models. Make sure you have the right API_KEY and correct LLM_NAME.
For self-hosted please visit [🖥️ Local Inference](/Models/local-inference).
</Steps>
### For OpenAI-Compatible Endpoints:
DocsGPT supports the use of OpenAI-compatible endpoints through base URL substitution. This feature allows you to use alternative AI models or services that implement the OpenAI API interface.
Set `OPENAI_BASE_URL` in your environment. You can either edit the `.env` file to set `OPENAI_BASE_URL` to the desired base URL, or edit the `docker-compose.yml` file and add the environment variable to the backend container.
> Make sure you have the right API_KEY and correct LLM_NAME.
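As a sketch, a `.env` for a hypothetical OpenAI-compatible endpoint could look like this (the URL and model name below are placeholders, not a real service):

```
LLM_NAME=openai
API_KEY=YOUR_PROVIDER_API_KEY               # whatever key your provider issues
MODEL_NAME=your-model-name                  # a model your provider serves
OPENAI_BASE_URL=https://api.example.com/v1  # your provider's OpenAI-compatible base URL
```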

View File

@@ -1,3 +1,8 @@
---
title:
description:
---
# Avoiding hallucinations
If your AI answers questions using external knowledge, that is expected default behavior — we try to make DocsGPT friendly rather than overly restrictive.

View File

@@ -9,10 +9,12 @@
},
"How-to-use-different-LLM": {
"title": "️🤖 How to use different LLM's",
"href": "/Guides/How-to-use-different-LLM"
"href": "/Guides/How-to-use-different-LLM",
"display": "hidden"
},
"My-AI-answers-questions-using-external-knowledge": {
"title": "💭️ Avoiding hallucinations",
"href": "/Guides/My-AI-answers-questions-using-external-knowledge"
"href": "/Guides/My-AI-answers-questions-using-external-knowledge",
"display": "hidden"
}
}

View File

@@ -1,79 +0,0 @@
---
title: Connecting DocsGPT to LLM Providers
description: Explore the different Large Language Model (LLM) providers you can connect to DocsGPT, from cloud APIs to local inference engines.
---
# Connecting DocsGPT to LLM Providers
DocsGPT is designed to be flexible and work with a variety of Large Language Model (LLM) providers. Whether you prefer the simplicity of a public API, the power of cloud-based models, or the control of local inference engines, DocsGPT can be configured to meet your needs.
This guide will introduce you to the LLM providers that DocsGPT natively supports and explain how to connect to them.
## Supported LLM Providers
DocsGPT offers built-in support for the following LLM providers, selectable during the `setup.sh` script execution:
**Cloud API Providers:**
* **DocsGPT Public API**
* **OpenAI**
* **Google (Vertex AI, Gemini)**
* **Anthropic (Claude)**
* **Groq**
* **HuggingFace Inference API**
* **Azure OpenAI**
## Configuration via `.env` file
Connecting DocsGPT to an LLM provider is primarily configured through environment variables set in the `.env` file located in the root directory of your DocsGPT project.
**Basic Configuration Parameters:**
* **`LLM_NAME`**: This setting is crucial and specifies the provider you want to use. The values correspond to the provider names listed above (e.g., `docsgpt`, `openai`, `google`, `ollama`, etc.).
* **`MODEL_NAME`**: Determines the specific model to be used from the chosen provider (e.g., `gpt-4o`, `gemini-2.0-flash`, `llama3.2:1b`). Refer to the provider's documentation for available model names.
* **`API_KEY`**: Required for most cloud API providers. Obtain this key from your provider's platform and set it in the `.env` file.
* **`OPENAI_BASE_URL`**: Specifically used when connecting to a local inference engine that is OpenAI API compatible. This setting points DocsGPT to the address of your local server.
## Configuration Examples
Here are examples of `.env` configurations for different LLM providers.
**Example for OpenAI:**
To use OpenAI's `gpt-4o` model, your `.env` file would look like this:
```
LLM_NAME=openai
API_KEY=YOUR_OPENAI_API_KEY # Replace with your actual OpenAI API key
MODEL_NAME=gpt-4o
```
**Example for Local Ollama:**
To connect to a local Ollama instance running `llama3.2:1b`, configure your `.env` as follows:
```
LLM_NAME=openai # Using OpenAI compatible API format for local models
API_KEY=None # API Key is not needed for local Ollama
MODEL_NAME=llama3.2:1b
OPENAI_BASE_URL=http://host.docker.internal:11434/v1 # Default Ollama API URL within Docker
```
**Example for OpenAI-Compatible API (DeepSeek):**
Many LLM providers offer APIs that are compatible with the OpenAI API format. DeepSeek is one such example. To connect to DeepSeek, you would still use `LLM_NAME=openai` and point `OPENAI_BASE_URL` to the DeepSeek API endpoint.
```
LLM_NAME=openai
API_KEY=YOUR_DEEPSEEK_API_KEY # Your DeepSeek API key
MODEL_NAME=deepseek-chat # Or your desired DeepSeek model name
OPENAI_BASE_URL=https://api.deepseek.com/v1 # DeepSeek API base URL
```
**Important Note:** When using OpenAI-compatible APIs, you might need to adjust other settings as well, depending on the specific API's requirements. Always consult the provider's API documentation and the [DocsGPT Settings Guide](/Deploying/DocsGPT-Settings) for detailed configuration options.
## Exploring More Providers and Advanced Settings
The providers listed above are those with direct support in `setup.sh`. However, DocsGPT's flexible design allows you to connect to virtually any LLM provider that offers an API, especially those compatible with the OpenAI API standard.
For a comprehensive list of all configurable settings, including advanced options for each provider and details on how to connect to other LLMs, please refer to the [DocsGPT Settings Guide](/Deploying/DocsGPT-Settings). This guide provides in-depth information on customizing your DocsGPT setup to work with a wide range of LLM providers and tailor the application to your specific needs.

View File

@@ -0,0 +1,55 @@
---
title: Connecting DocsGPT to Cloud LLM Providers
description: Connect DocsGPT to various Cloud Large Language Model (LLM) providers to power your document Q&A.
---
# Connecting DocsGPT to Cloud LLM Providers
DocsGPT is designed to seamlessly integrate with a variety of Cloud Large Language Model (LLM) providers, giving you access to state-of-the-art AI models for document question answering.
## Configuration via `.env` file
The primary method for configuring your LLM provider in DocsGPT is through the `.env` file. For a comprehensive understanding of all available settings, please refer to the detailed [DocsGPT Settings Guide](/Deploying/DocsGPT-Settings).
To connect to a cloud LLM provider, you will typically need to configure the following basic settings in your `.env` file:
* **`LLM_NAME`**: This setting is essential and identifies the specific cloud provider you wish to use (e.g., `openai`, `google`, `anthropic`).
* **`MODEL_NAME`**: Specifies the exact model you want to utilize from your chosen provider (e.g., `gpt-4o`, `gemini-2.0-flash`, `claude-3-5-sonnet-latest`). Refer to your provider's documentation for a list of available models.
* **`API_KEY`**: Almost all cloud LLM providers require an API key for authentication. Obtain your API key from your chosen provider's platform and securely store it in your `.env` file.
## Explicitly Supported Cloud Providers
DocsGPT offers direct, streamlined support for the following cloud LLM providers, making configuration straightforward. The table below outlines the `LLM_NAME` and example `MODEL_NAME` values to use for each provider in your `.env` file.
| Provider | `LLM_NAME` | Example `MODEL_NAME` |
| :--------------------------- | :------------- | :-------------------------- |
| DocsGPT Public API | `docsgpt` | `None` |
| OpenAI | `openai` | `gpt-4o` |
| Google (Vertex AI, Gemini) | `google` | `gemini-2.0-flash` |
| Anthropic (Claude) | `anthropic` | `claude-3-5-sonnet-latest` |
| Groq | `groq` | `llama-3.1-8b-instant` |
| HuggingFace Inference API | `huggingface` | `meta-llama/Llama-3.1-8B-Instruct` |
| Azure OpenAI | `azure_openai` | `gpt-4o` |
## Connecting to OpenAI-Compatible Cloud APIs
DocsGPT's flexible architecture allows you to connect to any cloud provider that offers an API compatible with the OpenAI API standard. This opens up a vast ecosystem of LLM services.
To connect to an OpenAI-compatible cloud provider, you will still use `LLM_NAME=openai` in your `.env` file. However, you will also need to specify the API endpoint of your chosen provider using the `OPENAI_BASE_URL` setting. You will also likely need to provide an `API_KEY` and `MODEL_NAME` as required by that provider.
**Example for DeepSeek (OpenAI-Compatible API):**
To connect to DeepSeek, which offers an OpenAI-compatible API, your `.env` file could be configured as follows:
```
LLM_NAME=openai
API_KEY=YOUR_API_KEY # Your DeepSeek API key
MODEL_NAME=deepseek-chat # Or your desired DeepSeek model name
OPENAI_BASE_URL=https://api.deepseek.com/v1 # DeepSeek's OpenAI API URL
```
Remember to consult the documentation of your chosen OpenAI-compatible cloud provider for their specific API endpoint, required model names, and authentication methods.
## Adding Support for Other Cloud Providers
If you wish to connect to a cloud provider that is not explicitly listed above or doesn't offer OpenAI API compatibility, you can extend DocsGPT to support it. Within the DocsGPT repository, navigate to the `application/llm` directory. Here, you will find Python files defining the existing LLM integrations. You can use these files as examples to create a new module for your desired cloud provider. After creating your new LLM module, you will need to register it within the `llm_creator.py` file. This process involves some coding, but it allows for virtually unlimited extensibility to connect to any cloud-based LLM service with an accessible API.
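As a rough illustration, a new module in `application/llm` might look like the sketch below. The class name, constructor arguments, and the `gen` method signature here are assumptions for illustration — mirror the actual base class and the existing providers in the repository, and register the new class in `llm_creator.py`:

```python
# Hypothetical sketch of a custom LLM integration module.
# The real base class and registration mechanism live in application/llm
# and llm_creator.py; all names here are illustrative assumptions.

class MyCloudLLM:
    def __init__(self, api_key=None, model_name="my-model", **kwargs):
        self.api_key = api_key
        self.model_name = model_name

    def _build_payload(self, messages, stream=False):
        # Shape the request the way an OpenAI-style chat API expects it.
        return {
            "model": self.model_name,
            "messages": messages,
            "stream": stream,
        }

    def gen(self, messages, stream=False, **kwargs):
        payload = self._build_payload(messages, stream)
        # Here you would POST `payload` to your provider's endpoint
        # (e.g. with `requests`) and return the generated text.
        raise NotImplementedError("wire up your provider's HTTP call here")
```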

View File

@@ -0,0 +1,72 @@
---
title: Understanding and Configuring Embedding Models in DocsGPT
description: Learn about embedding models, their importance in DocsGPT, and how to configure them for optimal performance.
---
# Understanding and Configuring Embedding Models in DocsGPT
Embedding models are a crucial component of DocsGPT, enabling its powerful document understanding and question-answering capabilities. This guide will explain what embedding models are, why they are essential for DocsGPT, and how to configure them.
## What are Embedding Models?
In simple terms, an embedding model is a type of language model that converts text into numerical vectors. These vectors, known as embeddings, capture the semantic meaning of the text. Think of it as translating words and sentences into a language that computers can understand mathematically, where similar meanings are represented by vectors that are close to each other in vector space.
**Why are embedding models important for DocsGPT?**
DocsGPT uses embedding models for several key tasks:
* **Semantic Search:** When you upload documents to DocsGPT, the application uses an embedding model to generate embeddings for each document chunk. These embeddings are stored in a vector store. When you ask a question, your query is also converted into an embedding. DocsGPT then performs a semantic search in the vector store, finding document chunks whose embeddings are most similar to your query embedding. This allows DocsGPT to retrieve relevant information based on the *meaning* of your question and documents, not just keyword matching.
* **Document Understanding:** Embeddings help DocsGPT understand the underlying meaning of your documents, enabling it to answer questions accurately and contextually, even if the exact keywords from your question are not present in the retrieved document chunks.
In essence, embedding models are the bridge that allows DocsGPT to understand the nuances of human language and connect your questions to the relevant information within your documents.
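To make the retrieval step concrete, here is a toy sketch of ranking chunks by cosine similarity between embeddings (the 2-D vectors are invented for illustration; real embeddings have hundreds of dimensions and come from the configured embedding model):

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product of the vectors over the
    # product of their lengths.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Pretend embeddings for two document chunks and a query.
chunks = {
    "install guide": [0.9, 0.1],
    "api reference": [0.2, 0.8],
}
query = [0.85, 0.2]  # pretend embedding of "how do I install?"

# The chunk whose embedding is closest to the query wins.
best = max(chunks, key=lambda name: cosine(query, chunks[name]))
```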
## Out-of-the-Box Embedding Model Support in DocsGPT
DocsGPT is designed to be flexible and supports a wide range of embedding models right out of the box. Currently, DocsGPT provides native support for models from two major sources:
* **Sentence Transformers:** DocsGPT supports all models available through the [Sentence Transformers library](https://www.sbert.net/). This library offers a vast selection of pre-trained embedding models, known for their quality and efficiency in various semantic tasks.
* **OpenAI Embeddings:** DocsGPT also supports embedding models from OpenAI, specifically `text-embedding-ada-002`, a powerful and widely used model available through OpenAI's API.
## Configuring Sentence Transformer Models
To utilize Sentence Transformer models within DocsGPT, you need to follow these steps:
1. **Download the Model:** Sentence Transformer models are typically hosted on Hugging Face Model Hub. You need to download your chosen model and place it in the `model/` folder in the root directory of your DocsGPT project.
For example, to use the `all-mpnet-base-v2` model, you would set `EMBEDDINGS_NAME` as described below, and ensure that the model files are available locally (DocsGPT will attempt to download it if it's not found, but local download is recommended for development and offline use).
2. **Set `EMBEDDINGS_NAME` in `.env` (or `settings.py`):** You need to configure the `EMBEDDINGS_NAME` setting in your `.env` file (or `settings.py`) to point to the desired Sentence Transformer model.
* **Using a pre-downloaded model from the `model/` folder:** You can specify a path to the downloaded model within the `model/` directory. For instance, if you downloaded `all-mpnet-base-v2` into `model/all-mpnet-base-v2`, you could use a relative path like (though the plain model identifier is usually sufficient):
```
EMBEDDINGS_NAME=huggingface_sentence-transformers/all-mpnet-base-v2
```
or simply use the model identifier:
```
EMBEDDINGS_NAME=sentence-transformers/all-mpnet-base-v2
```
* **Using a model directly from Hugging Face Model Hub:** You can directly specify the model identifier from Hugging Face Model Hub:
```
EMBEDDINGS_NAME=huggingface_sentence-transformers/all-mpnet-base-v2
```
## Using OpenAI Embeddings
To use OpenAI's `text-embedding-ada-002` embedding model, you need to set `EMBEDDINGS_NAME` to `openai_text-embedding-ada-002` and ensure you have your OpenAI API key configured correctly via `API_KEY` in your `.env` file (if you are not using Azure OpenAI).
**Example `.env` configuration for OpenAI Embeddings:**
```
LLM_NAME=openai
API_KEY=YOUR_OPENAI_API_KEY # Your OpenAI API Key
EMBEDDINGS_NAME=openai_text-embedding-ada-002
```
## Adding Support for Other Embedding Models
If you wish to use an embedding model that is not supported out-of-the-box, a good starting point for adding custom embedding model support is to examine the `base.py` file located in the `application/vectorstore` directory.
Specifically, pay attention to the `EmbeddingsWrapper` and `EmbeddingsSingleton` classes. `EmbeddingsWrapper` provides a way to wrap different embedding model libraries into a consistent interface for DocsGPT. `EmbeddingsSingleton` manages the instantiation and retrieval of embedding model instances. By understanding these classes and the existing embedding model implementations, you can create your own custom integration for virtually any embedding model library you desire.

View File

@@ -0,0 +1,44 @@
---
title: Connecting DocsGPT to Local Inference Engines
description: Connect DocsGPT to local inference engines for running LLMs directly on your hardware.
---
# Connecting DocsGPT to Local Inference Engines
DocsGPT can be configured to leverage local inference engines, allowing you to run Large Language Models directly on your own infrastructure. This approach offers enhanced privacy and control over your LLM processing.
Currently, DocsGPT primarily supports local inference engines that are compatible with the OpenAI API format. This means you can connect DocsGPT to various local LLM servers that mimic the OpenAI API structure.
## Configuration via `.env` file
Setting up a local inference engine with DocsGPT is configured through environment variables in the `.env` file. For a detailed explanation of all settings, please consult the [DocsGPT Settings Guide](/Deploying/DocsGPT-Settings).
To connect to a local inference engine, you will generally need to configure these settings in your `.env` file:
* **`LLM_NAME`**: Crucially set this to `openai`. This tells DocsGPT to use the OpenAI-compatible API format for communication, even though the LLM is local.
* **`MODEL_NAME`**: Specify the model name as recognized by your local inference engine. This might be a model identifier or left as `None` if the engine doesn't require explicit model naming in the API request.
* **`OPENAI_BASE_URL`**: This is essential. Set this to the base URL of your local inference engine's API endpoint. This tells DocsGPT where to find your local LLM server.
* **`API_KEY`**: Generally, for local inference engines, you can set `API_KEY=None` as authentication is usually not required in local setups.
## Supported Local Inference Engines (OpenAI API Compatible)
DocsGPT is readily configurable to work with the following local inference engines, all communicating via the OpenAI API format. Here are example `OPENAI_BASE_URL` values for each, based on default setups:
| Inference Engine | `LLM_NAME` | `OPENAI_BASE_URL` |
| :---------------------------- | :--------- | :------------------------- |
| LLaMa.cpp | `openai` | `http://localhost:8000/v1` |
| Ollama | `openai` | `http://localhost:11434/v1` |
| Text Generation Inference (TGI)| `openai` | `http://localhost:8080/v1` |
| SGLang | `openai` | `http://localhost:30000/v1` |
| vLLM | `openai` | `http://localhost:8000/v1` |
| Aphrodite | `openai` | `http://localhost:2242/v1` |
| FriendliAI | `openai` | `http://localhost:8997/v1` |
| LMDeploy | `openai` | `http://localhost:23333/v1` |
**Important Note on `localhost` vs `host.docker.internal`:**
The `OPENAI_BASE_URL` examples above use `http://localhost`. If you are running DocsGPT within Docker and your local inference engine is running on your host machine (outside of Docker), you will likely need to replace `localhost` with `http://host.docker.internal` to ensure Docker can correctly access your host's services. For example, `http://host.docker.internal:11434/v1` for Ollama.
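One way to avoid hard-coding the host is to resolve it at runtime; the helper below is a sketch (the `/.dockerenv` check is a common heuristic for detecting Docker, not an official API):

```python
import os

def resolve_base_url(port, in_docker=None):
    # Pick the right host for a host-side inference engine: inside a
    # Docker container, the host machine is reachable via
    # host.docker.internal; otherwise plain localhost works.
    if in_docker is None:
        in_docker = os.path.exists("/.dockerenv")
    host = "host.docker.internal" if in_docker else "localhost"
    return f"http://{host}:{port}/v1"
```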
## Adding Support for Other Local Engines
While DocsGPT currently focuses on OpenAI API compatible local engines, you can extend its capabilities to support other local inference solutions. To do this, navigate to the `application/llm` directory in the DocsGPT repository. Examine the existing Python files for examples of LLM integrations. You can create a new module for your desired local engine, and then register it in the `llm_creator.py` file within the same directory. This allows for custom integration with a wide range of local LLM servers beyond those listed above.

View File

@@ -11,5 +11,8 @@
"newWindow": true
},
"Guides": "Guides",
"changelog": "Changelog"
"changelog": {
"title": "Changelog",
"display": "hidden"
}
}

View File

@@ -1,67 +0,0 @@
---
title: 'Changelog'
---
## Launching Web App
**Note**: Make sure you have Docker installed
**On macOS or Linux:**
Just run the following command:
```bash
./setup.sh
```
This command will install all the necessary dependencies and provide you with an option to use our LLM API, download the local model or use OpenAI.
If you prefer to follow manual steps, refer to this guide:
1. Open and download this repository with
```bash
git clone https://github.com/arc53/DocsGPT.git
cd DocsGPT
```
2. Create a `.env` file in your root directory and set the env variables.
It should look like this inside:
```
LLM_NAME=[docsgpt or openai or others]
API_KEY=[if LLM_NAME is openai]
```
See optional environment variables in the [/application/.env_sample](https://github.com/arc53/DocsGPT/blob/main/application/.env_sample) file.
3. Run the following commands:
```bash
docker compose -f deployment/docker-compose.yaml up
```
4. Navigate to http://localhost:5173/.
To stop, simply press **Ctrl + C**.
**For WINDOWS:**
1. Open and download this repository with
```bash
git clone https://github.com/arc53/DocsGPT.git
cd DocsGPT
```
2. Create a `.env` file in your root directory and set the env variables.
It should look like this inside:
```
LLM_NAME=[docsgpt or openai or others]
API_KEY=[if LLM_NAME is openai]
```
See optional environment variables in the [/application/.env_sample](https://github.com/arc53/DocsGPT/blob/main/application/.env_sample) file.
3. Run the following command:
```bash
docker compose -f deployment/docker-compose.yaml up
```
4. Navigate to http://localhost:5173/.
5. To stop the setup, just press **Ctrl + C** in the WSL terminal
**Important:** Ensure that Docker is installed and properly configured on your Windows system for these steps to work.

docs/pages/changelog.mdx Normal file
View File

@@ -0,0 +1,3 @@
---
title: 'Changelog'
---

View File

@@ -3,23 +3,62 @@ title: 'Home'
---
import { Cards, Card } from 'nextra/components'
import Image from 'next/image'
import deployingGuides from './Deploying/_meta.json';
import extensionGuides from './Extensions/_meta.json';
import mainGuides from './Guides/_meta.json';
export const allGuides = {
...deployingGuides,
...extensionGuides,
...mainGuides,
"quickstart": {
"title": "⚡️ Quickstart",
"href": "/quickstart"
},
"DocsGPT-Settings": {
"title": "⚙️ App Configuration",
"href": "/Deploying/DocsGPT-Settings"
},
"Docker-Deploying": {
"title": "🛳️ Docker Setup",
"href": "/Deploying/Docker-Deploying"
},
"Development-Environment": {
"title": "🛠️ Development Environment",
"href": "/Deploying/Development-Environment"
},
"https://gptcloud.arc53.com/": {
"title": "🧑‍💻️ API",
"href": "https://gptcloud.arc53.com/",
"newWindow": true
},
"cloud-providers": {
"title": "☁️ Cloud Providers",
"href": "/Models/cloud-providers"
},
"local-inference": {
"title": "🖥️ Local Inference",
"href": "/Models/local-inference"
},
"embeddings": {
"title": "📝 Embeddings",
"href": "/Models/embeddings"
},
"api-key-guide": {
"title": "🔑 Getting API key",
"href": "/Extensions/api-key-guide"
},
"chat-widget": {
"title": "💬️ Chat Widget",
"href": "/Extensions/chat-widget"
},
"search-widget": {
"title": "🔎 Search Widget",
"href": "/Extensions/search-widget"
},
"Customising-prompts": {
"title": "️💻 Customising Prompts",
"href": "/Guides/Customising-prompts"
}
};
## **DocsGPT 🦖**
DocsGPT 🦖 is an innovative open-source tool designed to simplify the retrieval of information from project documentation using advanced GPT models 🤖. Eliminate lengthy manual searches 🔍 and enhance your documentation experience with DocsGPT, and consider contributing to its AI-powered future 🚀.
# **DocsGPT 🦖**
DocsGPT is an open-source genAI tool that helps users get reliable answers from any knowledge source, while avoiding hallucinations. It enables quick and reliable information retrieval, with tooling and agentic system capability built in.
<video controls width={1920} height={1080} muted autoPlay loop playsInline>

BIN
docs/public/civo.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 14 KiB

BIN
docs/public/kamatera.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 3.7 KiB

BIN
docs/public/lightsail.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 5.0 KiB

BIN
docs/public/railway.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 104 KiB