+
+
+ >
+ );
+}
\ No newline at end of file
diff --git a/docs/pages/Agents/_meta.json b/docs/pages/Agents/_meta.json
new file mode 100644
index 00000000..f5d0fe6e
--- /dev/null
+++ b/docs/pages/Agents/_meta.json
@@ -0,0 +1,6 @@
+{
+ "basics": {
+ "title": "🤖 Agent Basics",
+ "href": "/Agents/basics"
+ }
+}
\ No newline at end of file
diff --git a/docs/pages/Agents/basics.mdx b/docs/pages/Agents/basics.mdx
new file mode 100644
index 00000000..cc67c2df
--- /dev/null
+++ b/docs/pages/Agents/basics.mdx
@@ -0,0 +1,109 @@
+---
+title: Understanding DocsGPT Agents
+description: Learn about DocsGPT Agents, their types, how to create and manage them, and how they can enhance your interaction with documents and tools.
+---
+
+import { Callout } from 'nextra/components';
+import Image from 'next/image';
+
+# Understanding DocsGPT Agents 🤖
+
+DocsGPT Agents are advanced, configurable AI entities designed to go beyond simple question-answering. They act as specialized assistants or workers that combine instructions (prompts), knowledge (document sources), and capabilities (tools) to perform a wide range of tasks, automate workflows, and provide tailored interactions.
+
+Think of an Agent as a pre-configured version of DocsGPT, fine-tuned for a specific purpose, such as classifying documents, responding to new form submissions, or validating emails.
+
+## Why Use Agents?
+
+* **Personalization:** Create AI assistants that behave and respond according to specific roles or personas.
+* **Task Specialization:** Design agents focused on particular tasks, like customer support, data extraction, or content generation.
+* **Knowledge Integration:** Equip agents with specific document sources, making them experts in particular domains.
+* **Tool Utilization:** Grant agents access to various tools, allowing them to interact with external services, fetch live data, or perform actions.
+* **Automation:** Automate repetitive tasks by defining an agent's behavior and integrating it via webhooks or other means.
+* **Shareability:** Share your custom-configured agents with others or use agents shared with you.
+
+Agents provide a more structured and powerful way to leverage LLMs than a standard chat interface, as they come with a pre-defined context, instruction set, and capabilities.
+
+## Core Components of an Agent
+
+When you create or configure an agent, you'll work with these key components:
+
+**Meta:**
+ * **Agent Name:** A user-friendly name to identify the agent (e.g., "Support Ticket Classifier," "Product Spec Expert").
+ * **Describe your agent:** A brief description for you or users to understand the agent's purpose.
+
+**Source:**
+ * **Select source:** The knowledge base for the agent. You can select from previously uploaded documents or data sources. This is what the agent will "know."
+ * **Chunks per query:** A numerical value determining how many relevant text chunks from the selected source are sent to the LLM with each query. This helps manage context length and relevance.
+
+**Prompt:**
+The main set of instructions or system [prompt](/Guides/Customising-prompts) that defines the agent's persona, objectives, constraints, and how it should behave or respond.
+
+**Tools:** A selection of available [DocsGPT Tools](/Tools/basics) that the agent can use to perform actions or access external information.
+
+**Agent type:** The underlying operational logic or architecture the agent uses. DocsGPT supports different types of agents, each suited for different kinds of tasks.
+
+## Understanding Agent Types
+
+DocsGPT allows for different "types" of agents, each with a distinct way of processing information and generating responses. The code for these agent types can be found in the `application/agents/` directory.
+
+### 1. Classic Agent (`classic_agent.py`)
+
+**How it works:** The Classic Agent follows a traditional Retrieval Augmented Generation (RAG) approach.
+ 1. **Retrieve:** When a query is made, it first searches the selected Source documents for relevant information.
+ 2. **Augment:** This retrieved data is then added to the context, along with the main Prompt and the user's query.
+ 3. **Generate:** The LLM generates a response based on this augmented context. It can also utilize any configured tools if the LLM decides they are necessary.
+
+**Best for:**
+ * Direct question-answering over a specific set of documents.
+ * Tasks where the primary goal is to extract and synthesize information from the provided sources.
+ * Simpler tool integrations where the decision to use a tool is straightforward.
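
The three steps can be sketched as follows. This is an illustrative outline, not the actual code in `classic_agent.py`; the `retrieve` and `generate` callables stand in for the vector-store search and the LLM call:

```python
def classic_rag(query, retrieve, generate, system_prompt, k=4):
    """Schematic classic (RAG) agent flow; illustrative only."""
    chunks = retrieve(query, k)                          # 1. Retrieve
    context = "\n".join([system_prompt, *chunks])        # 2. Augment
    return generate(f"{context}\n\nQuestion: {query}")   # 3. Generate

# Stub usage: the lambdas fake the vector store and the LLM.
answer = classic_rag(
    "What is the refund policy?",
    retrieve=lambda q, k: ["Refunds are issued within 30 days."],
    generate=lambda full_prompt: full_prompt,  # echoes the augmented prompt
    system_prompt="You are a support assistant.",
)
```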
+
+### 2. ReAct Agent (`react_agent.py`)
+
+**How it works:** The ReAct Agent employs a more sophisticated "Reason and Act" framework. This involves a multi-step process:
+ 1. **Plan (Thought):** Based on the query, its prompt, and available tools/sources, the LLM first generates a plan or a sequence of thoughts on how to approach the problem. You might see this output as a "thought" process during generation.
+ 2. **Act:** The agent then executes actions based on this plan. This might involve querying its sources, using a tool, or performing internal reasoning.
+ 3. **Observe:** It gathers observations from the results of its actions (e.g., data from a tool, snippets from documents).
+ 4. **Repeat (if necessary):** Steps 2 and 3 can be repeated as the agent refines its approach or gathers more information.
+ 5. **Conclude:** Finally, it generates the final answer based on the initial query and all accumulated observations.
+
+**Best for:**
+ * More complex tasks that require multi-step reasoning or problem-solving.
+ * Scenarios where the agent needs to dynamically decide which tools to use and in what order, based on intermediate results.
+ * Interactive tasks where the agent needs to "think" through a problem.
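
The loop can be sketched schematically. This is not the code from `react_agent.py`; the `plan`, `act`, and `answer` callables stand in for the LLM's reasoning and the tool or source calls:

```python
def react_loop(query, plan, act, answer, max_steps=5):
    """Schematic ReAct loop; illustrative only, not DocsGPT's code."""
    observations = []
    for _ in range(max_steps):                 # 4. Repeat (if necessary)
        step = plan(query, observations)       # 1. Plan (Thought)
        if step is None:                       # the model decides it is done
            break
        observations.append(act(step))         # 2. Act + 3. Observe
    return answer(query, observations)         # 5. Conclude

# Stub usage: one search step, then conclude.
result = react_loop(
    "Latest release version?",
    plan=lambda q, obs: None if obs else {"tool": "search", "args": q},
    act=lambda step: "v1.2.3 found in changelog",
    answer=lambda q, obs: f"Answer based on {len(obs)} observation(s): {obs[-1]}",
)
```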
+
+
+Developers looking to introduce new agent architectures can explore the `application/agents/` directory. `classic_agent.py` and `react_agent.py` serve as excellent starting points, demonstrating how to inherit from `BaseAgent` and structure agent logic.
+
+
+## Navigating and Managing Agents in DocsGPT
+
+You can easily access and manage your agents through the DocsGPT user interface. Recently used agents appear at the top of the left sidebar for quick access. Below these, the "Manage Agents" button will take you to the main Agents page.
+
+### Creating a New Agent
+
+1. Navigate to the "Agents" page.
+2. Click the **"New Agent"** button.
+3. You will be presented with the "New Agent" configuration screen:
+
+
+
+4. Fill in the fields as described in the "Core Components of an Agent" section.
+5. Once configured, you can **"Save Draft"** to continue editing later or **"Publish"** to make the agent active.
+
+## Interacting with and Editing Agents
+
+Once an agent is created, you can:
+
+* **Chat with it:** Select the agent to start an interaction.
+* **View Logs:** Access usage statistics, monitor token consumption per interaction, and review user message feedback. This is crucial for understanding how your agent is being used and how it is performing.
+* **Edit an Agent:**
+ * Modify any of its configuration settings (name, description, source, prompt, tools, type).
+ * **Generate a Public Link:** From the edit screen, you can create a shareable public link that allows others to import and use your agent.
+ * **Get a Webhook URL:** You can also obtain a Webhook URL for the agent. This allows external applications or services to trigger the agent and receive responses programmatically, enabling powerful integrations and automations.
diff --git a/docs/pages/Deploying/DocsGPT-Settings.mdx b/docs/pages/Deploying/DocsGPT-Settings.mdx
index ce1e46ba..239b35d7 100644
--- a/docs/pages/Deploying/DocsGPT-Settings.mdx
+++ b/docs/pages/Deploying/DocsGPT-Settings.mdx
@@ -95,6 +95,49 @@ EMBEDDINGS_NAME=huggingface_sentence-transformers/all-mpnet-base-v2 # You can al
In this case, even though you are using Ollama locally, `LLM_NAME` is set to `openai` because Ollama (and many other local inference engines) are designed to be API-compatible with OpenAI. `OPENAI_BASE_URL` points DocsGPT to the local Ollama server.
+## Authentication Settings
+
+DocsGPT includes a JWT (JSON Web Token) based authentication feature for managing user sessions and securing local deployments.
+
+- **`AUTH_TYPE`**: This setting in your `.env` file or `settings.py` determines the authentication method.
+
+ - **Possible values:**
+ - `None` (or not set): No authentication is used.
+ - `simple_jwt`: A single, long-lived JWT token is generated and used for all authenticated requests. This is useful for securing a local deployment with a shared secret.
+ - `session_jwt`: Unique JWT tokens are generated for sessions, typically for individual users or temporary access.
+ - If `AUTH_TYPE` is set to `simple_jwt` or `session_jwt`, then a `JWT_SECRET_KEY` is required.
+- **`JWT_SECRET_KEY`**: This is a crucial secret key used to sign and verify JWTs.
+
+ - It can be set directly in your `.env` file or `settings.py`.
+ - **Automatic Key Generation**: If `AUTH_TYPE` is `simple_jwt` or `session_jwt` and `JWT_SECRET_KEY` is _not_ set in your environment variables or `settings.py`, DocsGPT will attempt to:
+ 1. Read the key from a file named `.jwt_secret_key` in the project's root directory.
+ 2. If the file doesn't exist, it will generate a new 32-byte random key, save it to `.jwt_secret_key`, and use it for the session. This ensures that the key persists across application restarts.
+ - **Security Note**: It's vital to keep this key secure. If you set it manually, choose a strong, random string.
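
A simplified sketch of that resolution order (illustrative only, not DocsGPT's actual implementation; the `.jwt_secret_key` file name and 32-byte key size follow the description above):

```python
import os
import secrets

def load_or_create_jwt_secret(path=".jwt_secret_key"):
    """Resolve the JWT secret: env var, then key file, then generate."""
    key = os.environ.get("JWT_SECRET_KEY")      # explicit configuration wins
    if key:
        return key
    if os.path.exists(path):                    # reuse the persisted key
        with open(path) as f:
            return f.read().strip()
    key = secrets.token_hex(32)                 # new 32-byte random key...
    with open(path, "w") as f:                  # ...persisted across restarts
        f.write(key)
    return key
```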
+
+**How it works:**
+
+- When `AUTH_TYPE` is set to `simple_jwt`, a token is generated at startup (if not already present or configured) and printed to the console. This token should be included in the `Authorization` header of your API requests as a Bearer token (e.g., `Authorization: Bearer YOUR_SIMPLE_JWT_TOKEN`).
+- When `AUTH_TYPE` is set to `session_jwt`:
+ - Clients can request a new token from the `/api/generate_token` endpoint.
+ - This token should then be included in the `Authorization` header for subsequent requests.
+- The backend verifies the JWT token provided in the `Authorization` header for protected routes.
+- The `/api/config` endpoint can be used to check the current `auth_type` and whether authentication is required.
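
For example, a client can attach the token like this. The helper is a minimal sketch; the endpoint paths come from the description above, while the base URL and the response field name in the comments are placeholders:

```python
def auth_header(token):
    """Build the Authorization header DocsGPT's protected routes expect."""
    return {"Authorization": f"Bearer {token}"}

# With AUTH_TYPE=simple_jwt, reuse the token printed to the console at startup:
#   headers = auth_header(SIMPLE_JWT_TOKEN)
# With AUTH_TYPE=session_jwt, first request a token, then reuse it, e.g.:
#   token = requests.post(f"{base_url}/api/generate_token").json()["token"]
#   resp = requests.get(f"{base_url}/api/config", headers=auth_header(token))
```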
+
+**Frontend Token Input for `simple_jwt`:**
+
+
+
+If you have configured `AUTH_TYPE=simple_jwt`, the DocsGPT frontend will prompt you to enter the JWT token if it's not already set or is invalid. You'll need to paste the `SIMPLE_JWT_TOKEN` (which is printed to your console when the backend starts) into this field to access the application.
+
## Exploring More Settings
These are just the basic settings to get you started. The `settings.py` file contains many more advanced options that you can explore to further customize DocsGPT, such as:
diff --git a/docs/pages/Tools/_meta.json b/docs/pages/Tools/_meta.json
new file mode 100644
index 00000000..2f58d116
--- /dev/null
+++ b/docs/pages/Tools/_meta.json
@@ -0,0 +1,14 @@
+{
+ "basics": {
+ "title": "🔧 Tools Basics",
+ "href": "/Tools/basics"
+ },
+ "api-tool": {
+ "title": "🗝️ API Tool",
+ "href": "/Tools/api-tool"
+ },
+ "creating-a-tool": {
+ "title": "🛠️ Creating a Custom Tool",
+ "href": "/Tools/creating-a-tool"
+ }
+}
\ No newline at end of file
diff --git a/docs/pages/Tools/api-tool.mdx b/docs/pages/Tools/api-tool.mdx
new file mode 100644
index 00000000..dd7ee4cb
--- /dev/null
+++ b/docs/pages/Tools/api-tool.mdx
@@ -0,0 +1,153 @@
+---
+title: 🗝️ Generic API Tool
+description: Learn how to configure and use the API Tool in DocsGPT to connect with any RESTful API without writing custom code.
+---
+
+import { Callout } from 'nextra/components';
+import Image from 'next/image';
+
+# Using the Generic API Tool
+
+The API Tool provides a no-code/low-code solution to make DocsGPT interact with third-party or internal RESTful APIs. It acts as a bridge, allowing the Large Language Model (LLM) to leverage external services based on your chat interactions. This guide walks you through its capabilities, configuration, and best practices.
+
+## Introduction to the Generic API Tool
+
+**When to Use It:**
+ * Ideal for quickly integrating existing APIs where the interaction involves standard HTTP requests (GET, POST, PUT, DELETE).
+ * Suitable for fetching data to enrich answers (e.g., current weather, stock prices, product details).
+ * Useful for triggering simple actions in other systems (e.g., sending a notification, creating a basic task).
+
+**Contrast with Custom Python Tools:**
+ * **API Tool:** Best for straightforward API calls. Configuration is done through the DocsGPT UI.
+ * **Custom Python Tools:** Preferable when you need complex logic before or after the API call, handle non-standard authentication (like complex OAuth flows), manage multi-step API interactions, or require intricate data processing not easily managed by the LLM alone. See [Creating a Custom Tool](/Tools/creating-a-tool) for more.
+
+## Capabilities of the API Tool
+
+**Supported HTTP Methods:** You can configure actions using standard HTTP methods such as:
+ * `GET`: To retrieve data.
+ * `POST`: To submit data to create a new resource.
+ * `PUT`: To update an existing resource.
+ * `DELETE`: To remove a resource.
+
+**Request Configuration:**
+ * **Headers:** Define static or dynamic HTTP headers for authentication (e.g., API keys), content type specification, etc.
+ * **Query Parameters:** Specify URL query parameters, which can be static or dynamically filled by the LLM based on user input.
+ * **Request Body:** Define the structure of the request body (e.g., JSON), with fields that can be static or dynamically populated by the LLM.
+
+**Response Handling:**
+ * The API Tool executes the request and receives the raw response from the API (typically JSON or plain text).
+ * This raw response is then passed back to the LLM.
+ * The LLM uses this response, along with the context of your query and the description of the API tool action, to formulate an answer or decide on follow-up actions. The API tool itself doesn't deeply parse or transform the response beyond basic content type detection (e.g., loading JSON into a parsable object).
+
+## Configuring an API as a Tool
+
+You can configure the API Tool through the DocsGPT user interface, found in **Settings -> Tools**. When you add or modify an API Tool, you'll define specific actions that DocsGPT can perform.
+
+
+The configuration involves defining how DocsGPT should call an API endpoint. Each configured API call essentially becomes a distinct "action" the LLM can choose to use.
+
+
+Below is an example of how you might configure an API action, inspired by setting up a phone number validation service:
+
+
+_Figure 1: Example configuration for an API Tool action to validate phone numbers._
+
+**Defining an API Endpoint/Action:**
+
+When you configure a new API action, you'll fill in the following fields:
+
+- **`Name`:** A user-friendly name for this specific API action (e.g., "Phone-check" as in the image, or more specific like "ValidateUSPhoneNumber"). This helps in managing your tools.
+- **`Description`:** This is a **critical field**. Provide a clear and concise description of what the API action does, what kind of input it expects (implicitly), and what kind of output it provides. The LLM uses this description to understand when and how to use this action.
+- **`URL`:** The full endpoint URL for the API request.
+- **`HTTP Method`:** Select the appropriate HTTP method (e.g., GET, POST) from a dropdown.
+- **`Headers`:** You can add custom HTTP headers as key-value pairs (Name, Value). Indicate if the value should be `Filled by LLM` or is static. If filled by LLM, provide a `Description` for the LLM.
+
+- **`Query Parameters`:** For `GET` requests or when parameters are sent in the URL.
+ * **`Name`:** The name of the query parameter (e.g., `api_key`, `phone`).
+ * **`Type`:** The data type of the parameter (e.g., `string`).
+ * **`Filled by LLM` (Checkbox):**
+ - **Unchecked (Static):** The `Value` you provide will be used for every call (e.g., for an `api_key` that doesn't change).
+ - **Checked (Dynamic):** The LLM will extract the appropriate value from the user's chat query based on the `Description` you provide for this parameter. The `Value` field is typically left empty or contains a placeholder if `Filled by LLM` is checked.
+ * `Description`: Context for the LLM if the parameter is to be filled dynamically, or for your own reference if static.
+ * `Value`: The static value if not filled by LLM.
+
+- **`Request Body`:** Used to send data (commonly JSON) to the API. Similar to Query Parameters, you define fields with `Name`, `Type`, whether it's `Filled by LLM`, a `Description` for dynamic fields, and a static `Value` if applicable.
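
Conceptually, the static values from the configuration and the LLM-filled values are merged into one request. A minimal sketch (the URL and parameter names below are placeholders, and the merge is illustrative, not DocsGPT's implementation):

```python
from urllib.parse import urlencode

def build_get_url(url, static_params, llm_params):
    """Merge static config values with LLM-extracted values into a GET URL."""
    return f"{url}?{urlencode({**static_params, **llm_params})}"

# api_key is static in the tool config; phone is extracted by the LLM.
full_url = build_get_url(
    "https://example-phone-api.test/v1/validate",
    static_params={"api_key": "YOUR_API_KEY"},
    llm_params={"phone": "+14155555555"},
)
```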
+
+**Response Handling Guidance for the LLM:**
+
+While the API Tool configuration UI doesn't have explicit fields for defining response parsing rules (like JSONPath extractors), you significantly influence how the LLM handles the response through:
+ * **Tool Action `Description`:** Clearly state what kind of information the API returns (e.g., "This API returns a JSON object with 'status' and 'location' fields for the phone number."). This helps the LLM know what to look for in the API's output.
+ * **Prompt Engineering:** For more complex scenarios, you might need to adjust your global or agent-specific prompts to guide DocsGPT on how to interpret and present information from API tool responses. See [Customising Prompts](/Guides/Customising-prompts).
+
+## Using the Configured API Tool in Chat
+
+Once an API action is configured and enabled, DocsGPT's LLM can decide to use it based on your natural language queries.
+
+**Example (based on the phone validation tool in Figure 1):**
+
+1. **User Query:** "Hey DocsGPT, can you check if +14155555555 is a valid phone number?"
+
+2. **DocsGPT (LLM Orchestration):**
+ * The LLM analyzes the query.
+ * It matches the intent ("check if ... is a valid phone number") with the description of the "Phone-check" API action.
+ * It identifies `+14155555555` as the value for the `phone` parameter (which was marked as `Filled by LLM` with the description "Phone number to check").
+ * DocsGPT constructs the GET API request.
+3. **API Tool Execution:**
+ * The API Tool makes the HTTP GET request.
+ * The external API (AbstractAPI) processes the request and returns a JSON response, e.g.:
+ ```json
+ {
+ "phone": "+14155555555",
+ "valid": true,
+ "format": {
+ "international": "+1 415-555-5555",
+ "national": "(415) 555-5555"
+ },
+ "country": {
+ "code": "US",
+ "name": "United States",
+ "prefix": "+1"
+ },
+ "location": "California",
+ "type": "Landline"
+ }
+ ```
+
+4. **DocsGPT Response Formulation:**
+ * The API Tool passes this JSON response back to the LLM.
+ * The LLM, guided by the tool's description and the user's original query, extracts relevant information and formulates a user-friendly answer.
+ * **DocsGPT Chat Response:** "Yes, +14155555555 appears to be a valid landline phone number in California, United States."
+
+## Advanced Tips and Best Practices
+
+**Clear Description is the Key:** The LLM relies heavily on the `Description` field of the API action and its parameters. Make them unambiguous and action-oriented. Clearly state what the tool does and what kind of input it expects (even if implicitly through parameter descriptions).
+
+**Iterative Testing:** After configuring an API tool, test it with various phrasings of user queries to ensure the LLM triggers it correctly and interprets the response as expected.
+
+**Error Handling:**
+ * If an API call fails, the API Tool will return an error message and status code from the `requests` library or the API itself. The LLM may relay this error or try to explain it.
+ * Check DocsGPT's backend logs for more detailed error information if you encounter issues.
+
+**Security Considerations:**
+ * **API Keys:** Be mindful of API keys and other sensitive credentials. The example image shows an API key directly in the configuration. For production or shared environments, avoid exposing configurations that contain sensitive keys.
+ * **Rate Limits:** Be aware of the rate limits of the APIs you are integrating. Frequent calls from DocsGPT could exceed these limits.
+ * **Data Privacy:** Consider the data privacy implications of sending user query data to third-party APIs.
+
+**Idempotency:** For tools that modify data (POST, PUT, DELETE), be aware of whether the API operations are idempotent, to avoid unintended consequences from repeated calls if the LLM retries an action.
+
+## Limitations
+
+While powerful, the Generic API Tool has some limitations:
+
+- **Complex Authentication:** Advanced authentication flows like OAuth 2.0 (especially 3-legged OAuth requiring user redirection) or custom signature-based authentication often require custom Python tools.
+- **Multi-Step API Interactions:** If a task requires multiple API calls that depend on each other (e.g., fetch a list, then for each item, fetch details), this kind of complex chaining and logic is better handled by a custom Python tool.
+- **Complex Data Transformations:** If the API response needs significant transformation or processing before being useful to the LLM, a custom Python tool offers more flexibility.
+- **Real-time Streaming (SSE, WebSockets):** The tool is designed for request-response interactions, not for maintaining persistent streaming connections.
+
+For scenarios that exceed these limitations, developing a [Custom Python Tool](/Tools/creating-a-tool) is the recommended approach.
\ No newline at end of file
diff --git a/docs/pages/Tools/basics.mdx b/docs/pages/Tools/basics.mdx
new file mode 100644
index 00000000..66ef2e71
--- /dev/null
+++ b/docs/pages/Tools/basics.mdx
@@ -0,0 +1,92 @@
+---
+title: Tools Basics - Enhancing DocsGPT Capabilities
+description: Understand what DocsGPT Tools are, how they work, and explore the built-in tools available to extend DocsGPT's functionality.
+---
+
+import { Callout } from 'nextra/components';
+import Image from 'next/image';
+import { ToolCards } from '../../components/ToolCards';
+
+# Understanding DocsGPT Tools
+
+DocsGPT Tools are powerful extensions that significantly enhance the capabilities of your DocsGPT application.
+They allow DocsGPT to move beyond its core function of retrieving information from your documents and enable it to perform actions,
+interact with external data sources, and integrate with other services. You can find and configure available tools within
+the "Tools" section of the DocsGPT application settings in the user interface.
+
+## What are Tools?
+
+- **Purpose:** The primary purpose of Tools is to bridge the gap between understanding a user's request (natural language processing by the LLM) and executing a tangible action. This could involve fetching live data from the web, sending notifications, running code snippets, querying databases, or interacting with third-party APIs.
+
+- **LLM as an Orchestrator:** The Large Language Model (LLM) at the heart of DocsGPT is designed to act as an intelligent orchestrator. Based on your query and the declared capabilities of the available tools (defined in their metadata), the LLM decides if a tool is needed, which tool to use, and what parameters to pass to it.
+
+- **Action-Oriented Interactions:** Tools enable more dynamic and action-oriented interactions. For example:
+ * *"What's the latest news on renewable energy?"* - This might trigger a web search tool to fetch current articles.
+ * *"Fetch the order status for customer ID 12345 from our database."* - This could use a database tool.
+ * *"Summarize the content of this webpage and send the summary to the #general channel on Telegram."* - This might involve a web scraping tool followed by a Telegram notification tool.
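
Conceptually, this orchestration resembles LLM function calling: the model sees each tool's metadata and emits a tool call with arguments. A toy sketch, where simple keyword matching stands in for the LLM's actual decision:

```python
TOOLS_METADATA = [
    {
        "name": "web_search",
        "description": "Search the web for current information and news.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string", "description": "Search terms."}},
            "required": ["query"],
        },
    },
]

def choose_tool(user_query, metadata=TOOLS_METADATA):
    """Toy stand-in for the LLM's decision: pick a tool call or none."""
    if "news" in user_query.lower() or "latest" in user_query.lower():
        return {"name": "web_search", "arguments": {"query": user_query}}
    return None  # no tool needed; answer from the model or documents

call = choose_tool("What's the latest news on renewable energy?")
```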
+
+## Overview of Built-in Tools
+
+DocsGPT includes a suite of pre-built tools designed to expand its capabilities out-of-the-box. Below is an overview of the currently available tools.
+
+
+
+## Using Tools in DocsGPT (User Perspective)
+
+Interacting with tools in DocsGPT is designed to be intuitive:
+
+1. **Natural Language Interaction:** As a user, you typically interact with DocsGPT using natural language queries or commands. The LLM within DocsGPT analyzes your input to determine if a specific task can or should be handled by one of the available and configured tools.
+
+2. **Configuration in UI:**
+ * Tools are generally managed and configured within the DocsGPT application's settings, found under a "Tools" section in the GUI.
+ * For tools that interact with external services (like Brave Search, Telegram, or any service via the API Tool), you might need to provide authentication credentials (e.g., API keys, tokens) or specific endpoint information during the tool's setup in the UI.
+
+3. **Prompt Engineering for Tools:** While the LLM aims to intelligently use tools, for more complex or reliable agent-like behaviors, you might need to customize the system prompts. Modifying the prompt can guide the LLM on when and how to prioritize or chain tools to achieve specific outcomes, especially if you're building an agent designed to perform a certain sequence of actions every time. For more on this, see [Customising Prompts](/Guides/Customising-prompts).
+
+## Advancing with Tools
+
+Understanding the basics of DocsGPT Tools opens up many possibilities:
+
+* **Leverage the API Tool:** For quick integrations with numerous external services, explore the [API Tool Detailed Guide](/Tools/api-tool).
+* **Develop Custom Tools:** If you have specific needs not covered by the built-in tools or the generic API Tool, you can develop your own. See our guide on [Creating a Custom Tool](/Tools/creating-a-tool).
+* **Build AI Agents:** Tools are the fundamental building blocks for creating sophisticated AI agents within DocsGPT. See [Agent Basics](/Agents/basics) to explore how they can be combined.
+
+By harnessing the power of Tools, you can transform DocsGPT into a more versatile and proactive assistant tailored to your unique workflows.
\ No newline at end of file
diff --git a/docs/pages/Tools/creating-a-tool.mdx b/docs/pages/Tools/creating-a-tool.mdx
new file mode 100644
index 00000000..30c75c5c
--- /dev/null
+++ b/docs/pages/Tools/creating-a-tool.mdx
@@ -0,0 +1,186 @@
+---
+title: 🛠️ Creating a Custom Tool
+description: Learn how to create custom Python tools to extend DocsGPT's functionality and integrate with various services or perform specific actions.
+---
+
+import { Callout } from 'nextra/components';
+import { Steps } from 'nextra/components';
+
+# 🛠️ Creating a Custom Python Tool
+
+This guide provides developers with a comprehensive, step-by-step approach to creating their own custom tools for DocsGPT. By developing custom tools, you can significantly extend DocsGPT's capabilities, enabling it to interact with new data sources, services, and perform specialized actions tailored to your unique needs.
+
+## Introduction to Custom Tool Development
+
+### Why Create Custom Tools?
+
+While DocsGPT offers a range of built-in tools and a versatile API Tool, there are many scenarios where a custom Python tool is the best solution:
+
+* **Integrating with Proprietary Systems:** Connect to internal APIs, databases, or services that are not publicly accessible or require complex authentication.
+* **Adding Domain-Specific Functionalities:** Implement logic specific to your industry or use case that isn't covered by general-purpose tools.
+* **Automating Unique Workflows:** Create tools that orchestrate multiple steps or interact with systems in a way unique to your operational needs.
+* **Connecting to Any System with an Accessible Interface:** If you can interact with a system programmatically using Python (e.g., through libraries, SDKs, or direct HTTP requests), you can likely build a DocsGPT tool for it.
+* **Complex Logic or Data Transformation:** When API interactions require intricate logic before sending a request or after receiving a response, or when data needs significant transformation that is difficult for an LLM to handle directly.
+
+### Prerequisites
+
+Before you begin, ensure you have:
+
+* A solid understanding of Python programming.
+* Familiarity with the DocsGPT project structure, particularly the `application/agents/tools/` directory where custom tools reside.
+* Basic knowledge of how APIs work, as many tools involve interacting with external or internal APIs.
+* Your DocsGPT development environment set up. If not, please refer to the [Setting Up a Development Environment](/Deploying/Development-Environment) guide.
+
+## The Anatomy of a DocsGPT Tool
+
+Custom tools in DocsGPT are Python classes that inherit from a base `Tool` class and implement specific methods to define their behavior, capabilities, and configuration needs.
+
+The **foundation** for all custom tools is the abstract base class, located in `application/agents/tools/base.py`. Your custom tool class **must** inherit from this class.
+
+### Essential Methods to Implement
+
+Your custom tool class needs to implement the following methods:
+
+1. **`__init__(self, config: dict)`**
+
+ - **Purpose:** The constructor for your tool. It's called when DocsGPT initializes the tool.
+ - **Usage:** This method is typically used to receive and store tool-specific configurations passed via the `config` dictionary. This dictionary is populated based on the tool's settings, often configured through the DocsGPT UI or environment variables. For example, you would store API keys, base URLs, or database connection strings here.
+ - **Example** (`brave.py`)**:**
+ ``` python
+ class BraveSearchTool(Tool):
+ def __init__(self, config):
+ self.config = config
+ self.token = config.get("token", "") # API Key for Brave Search
+ self.base_url = "https://api.search.brave.com/res/v1"
+ ```
+
+2. **`execute_action(self, action_name: str, **kwargs) -> dict`**
+
+ - **Purpose:** This is the workhorse of your tool. The LLM, acting as an agent, calls this method when it decides to use one of the actions your tool provides.
+ - **Parameters:**
+ - `action_name` (str): A string specifying which of the tool's actions to run (e.g., "brave_web_search").
+ - `**kwargs` (dict): A dictionary containing the parameters for that specific action. These parameters are defined in the tool's metadata (`get_actions_metadata()`) and are extracted or inferred by the LLM from the user's query.
+ - **Return Value:** A dictionary containing the result of the action. It's good practice to include keys like:
+ - `status_code` (int): An HTTP-like status code (e.g., 200 for success, 500 for error).
+ - `message` (str): A human-readable message describing the outcome.
+ - `data` (any): The actual data payload returned by the action (if applicable).
+ - `error` (str): An error message if the action failed.
+ - **Example (`read_webpage.py`, which returns a plain string rather than a dict):**
+
+ ``` python
+ def execute_action(self, action_name: str, **kwargs) -> str:
+ if action_name != "read_webpage":
+ return f"Error: Unknown action '{action_name}'. This tool only supports 'read_webpage'."
+
+ url = kwargs.get("url")
+ if not url:
+ return "Error: URL parameter is missing."
+ # ... (logic to fetch and parse webpage) ...
+ try:
+ # ...
+ return markdown_content
+ except Exception as e:
+ return f"Error processing URL {url}: {e}"
+ ```
+
+ A more structured return:
+
+ ``` python
+ # ... inside execute_action
+ try:
+ # ... logic ...
+ return {"status_code": 200, "message": "Webpage read successfully", "data": markdown_content}
+ except Exception as e:
+ return {"status_code": 500, "message": f"Error processing URL {url}", "error": str(e)}
+ ```
+
+3. **`get_actions_metadata(self) -> list`**
+
+ - **Purpose:** This method is **critical** for the LLM to understand what your tool can do, when to use it, and what parameters it needs. It effectively advertises your tool's capabilities.
+ - **Return Value:** A list of dictionaries. Each dictionary describes one distinct action the tool can perform and must follow a specific JSON schema structure.
+ - `name` (str): A unique and descriptive name for the action (e.g., `mytool_get_user_details`). It's a common convention to prefix with the tool name to avoid collisions.
+ - `description` (str): A clear, concise, and unambiguous description of what the action does. **Write this for the LLM.** The LLM uses this description to decide if this action is appropriate for a given user query.
+ - `parameters` (dict): A JSON Schema object defining the parameters that the action expects. This schema tells the LLM what arguments are needed, their types, and which are required.
+ - `type`: Should always be `"object"`.
+ - `properties`: A dictionary where each key is a parameter name, and the value is an object defining its `type` (e.g., "string", "integer", "boolean") and `description`.
+ - `required`: A list of strings, where each string is the name of a parameter that is mandatory for the action.
+ - **Example (`postgres.py` - partial):**
+
+ ``` python
+ def get_actions_metadata(self):
+ return [
+ {
+ "name": "postgres_execute_sql",
+ "description": "Execute an SQL query against the PostgreSQL database...",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "sql_query": {
+ "type": "string",
+ "description": "The SQL query to execute.",
+ },
+ },
+ "required": ["sql_query"],
+ "additionalProperties": False, # Good practice to prevent unexpected params
+ },
+ },
+ # ... other actions like postgres_get_schema
+ ]
+ ```
+
+4. **`get_config_requirements(self) -> dict`**
+
+ - **Purpose:** Defines the configuration parameters that your tool needs to function (e.g., API keys, specific base URLs, connection strings, default settings). This information can be used by the DocsGPT UI to dynamically render configuration fields for your tool or for validation.
+ - **Return Value:** A dictionary where keys are the configuration item names (which will be keys in the `config` dict passed to `__init__`) and values are dictionaries describing each requirement:
+ - `type` (str): The expected data type of the config value (e.g., "string", "boolean", "integer").
+ - `description` (str): A human-readable description of what this configuration item is for.
+ - `secret` (bool, optional): Set to `True` if the value is sensitive (e.g., an API key) and should be masked or handled specially in UIs. Defaults to `False`.
+ - **Example (`brave.py`):**
+
+ ``` python
+ def get_config_requirements(self):
+ return {
+ "token": { # This 'token' will be a key in the config dict for __init__
+ "type": "string",
+ "description": "Brave Search API key for authentication",
+ "secret": True
+ },
+ }
+ ```
+
+## Tool Registration and Discovery
+
+DocsGPT's `ToolManager` (located in `application/agents/tools/tool_manager.py`) automatically discovers and loads tools.
+
+Your custom tool will be loaded automatically when DocsGPT starts, provided that it:
+
+1. Is placed in a Python file within the `application/agents/tools/` directory (and the filename is not `base.py` and does not start with `__`).
+2. Correctly inherits from the `Tool` base class.
+3. Implements all the abstract methods (`execute_action`, `get_actions_metadata`, `get_config_requirements`).
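Putting the three methods together, a minimal discoverable tool looks like the sketch below. The `Tool` base class shown here is a simplified stand-in so the example is self-contained; the real one lives in `application/agents/tools/base.py`, and `EchoTool` is a hypothetical tool used purely for illustration.

```python
from abc import ABC, abstractmethod

# Simplified stand-in for the real base class in
# application/agents/tools/base.py, included so this sketch runs on its own.
class Tool(ABC):
    @abstractmethod
    def execute_action(self, action_name: str, **kwargs) -> dict: ...
    @abstractmethod
    def get_actions_metadata(self) -> list: ...
    @abstractmethod
    def get_config_requirements(self) -> dict: ...

class EchoTool(Tool):
    """Hypothetical minimal tool: echoes back whatever text it is given."""

    def __init__(self, config):
        self.config = config

    def execute_action(self, action_name: str, **kwargs) -> dict:
        if action_name != "echo_text":
            return {"status_code": 400, "message": f"Unknown action '{action_name}'"}
        text = kwargs.get("text")
        if text is None:
            return {"status_code": 400, "message": "Missing required parameter 'text'"}
        return {"status_code": 200, "message": "OK", "data": text}

    def get_actions_metadata(self) -> list:
        return [
            {
                "name": "echo_text",
                "description": "Echo the provided text back to the caller.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "text": {"type": "string", "description": "Text to echo."},
                    },
                    "required": ["text"],
                    "additionalProperties": False,
                },
            }
        ]

    def get_config_requirements(self) -> dict:
        return {}  # this toy tool needs no configuration
```

Dropped into `application/agents/tools/`, a file like this satisfies all three conditions and would be picked up on startup.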
+
+## Configuration & Secrets Management
+
+- **Configuration Source:** The `config` dictionary passed to your tool's `__init__` method is typically populated from settings defined in the DocsGPT UI (if available for the tool) or from environment variables/configuration files that DocsGPT loads (see [⚙️ App Configuration](/Deploying/DocsGPT-Settings)). The keys in this dictionary should match the names you define in `get_config_requirements()`.
+- **Secrets:** Never hardcode secrets (like API keys or passwords) directly into your tool's Python code. Instead, define them as configuration requirements (using `secret: True` in `get_config_requirements()`) and let DocsGPT's configuration system inject them via the `config` dictionary at runtime. This ensures that secrets are managed securely and are not exposed in your codebase.
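To illustrate how the `secret: True` flag can drive masking, here is a small sketch. The `mask_config` helper is hypothetical (not part of DocsGPT); it simply shows the kind of logic a UI or log formatter might apply using the output of `get_config_requirements()`.

```python
def mask_config(config: dict, requirements: dict) -> dict:
    """Return a copy of config with values flagged `secret` replaced by asterisks.

    Hypothetical helper: illustrates how `secret: True` in a tool's config
    requirements could be used to hide sensitive values in a UI or in logs.
    """
    masked = {}
    for key, value in config.items():
        if requirements.get(key, {}).get("secret", False):
            masked[key] = "****"
        else:
            masked[key] = value
    return masked

requirements = {
    "token": {"type": "string", "description": "API key", "secret": True},
    "base_url": {"type": "string", "description": "Service URL"},
}
config = {"token": "sk-live-123", "base_url": "https://api.example.com"}
print(mask_config(config, requirements))
# → {'token': '****', 'base_url': 'https://api.example.com'}
```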
+
+## Best Practices for Tool Development
+
+- **Atomicity:** Design tool actions to be as atomic (single, well-defined purpose) as possible. This makes them easier for the LLM to understand and combine.
+- **Clarity in Metadata:** Ensure action names and descriptions in `get_actions_metadata()` are extremely clear, specific, and unambiguous. This is the primary way the LLM understands your tool.
+- **Robust Error Handling:** Implement comprehensive error handling within your `execute_action` logic (and the private methods it calls). Return informative error messages in the result dictionary so the LLM or user can understand what went wrong.
+- **Security:**
+ - Be mindful of the security implications of your tool, especially if it interacts with sensitive systems or can execute arbitrary code/queries.
+ - Validate and sanitize any inputs, especially if they are used to construct database queries or shell commands, to prevent injection attacks.
+- **Performance:** Consider the performance implications of your tool's actions. If an action is slow, it will impact the user experience. Optimize where possible.
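To make the input-sanitization point concrete, the sketch below uses Python's standard-library `sqlite3` (standing in for PostgreSQL) to contrast string interpolation with parameter binding. Values passed as placeholders are bound as data by the driver, so a crafted input cannot change the query's structure.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")

# UNSAFE: interpolating user input lets crafted values rewrite the query, e.g.
#   conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'")
# would return every row for the input below.

# SAFE: the driver binds the value as data, never as SQL.
user_input = "alice' OR '1'='1"  # a classic injection attempt
rows = conn.execute(
    "SELECT id, name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # → [] — the injection string matches no real name
```

The same pattern applies with a PostgreSQL driver such as `psycopg2` (using `%s` placeholders); the key point is to let the driver perform the binding rather than building SQL strings yourself.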
+
+## (Optional) Contributing Your Tool
+
+If you develop a custom tool that you believe could be valuable to the broader DocsGPT community and is general-purpose:
+
+1. Ensure it's well-documented (both in code and with clear metadata).
+2. Make sure it adheres to the best practices outlined above.
+3. Consider opening a Pull Request to the [DocsGPT GitHub repository](https://github.com/arc53/DocsGPT) with your new tool, including any necessary documentation updates.
+
+By following this guide, you can create powerful custom tools that extend DocsGPT's capabilities to your specific operational environment.
\ No newline at end of file
diff --git a/docs/pages/_meta.json b/docs/pages/_meta.json
index 000b569d..37202830 100644
--- a/docs/pages/_meta.json
+++ b/docs/pages/_meta.json
@@ -4,6 +4,8 @@
"quickstart": "Quickstart",
"Deploying": "Deploying",
"Models": "Models",
+ "Tools": "Tools",
+ "Agents": "Agents",
"Extensions": "Extensions",
"https://gptcloud.arc53.com/": {
"title": "API",
diff --git a/docs/public/jwt-input.png b/docs/public/jwt-input.png
new file mode 100644
index 00000000..494d9744
Binary files /dev/null and b/docs/public/jwt-input.png differ
diff --git a/docs/public/new-agent.png b/docs/public/new-agent.png
new file mode 100644
index 00000000..26774dfb
Binary files /dev/null and b/docs/public/new-agent.png differ
diff --git a/docs/public/toolIcons/api-tool-example.png b/docs/public/toolIcons/api-tool-example.png
new file mode 100644
index 00000000..119e75a6
Binary files /dev/null and b/docs/public/toolIcons/api-tool-example.png differ
diff --git a/docs/public/toolIcons/tool_api_tool.svg b/docs/public/toolIcons/tool_api_tool.svg
new file mode 100644
index 00000000..1e923cf3
--- /dev/null
+++ b/docs/public/toolIcons/tool_api_tool.svg
@@ -0,0 +1,6 @@
+
+
\ No newline at end of file
diff --git a/docs/public/toolIcons/tool_brave.svg b/docs/public/toolIcons/tool_brave.svg
new file mode 100644
index 00000000..380c19ed
--- /dev/null
+++ b/docs/public/toolIcons/tool_brave.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/public/toolIcons/tool_cryptoprice.svg b/docs/public/toolIcons/tool_cryptoprice.svg
new file mode 100644
index 00000000..6a422694
--- /dev/null
+++ b/docs/public/toolIcons/tool_cryptoprice.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/public/toolIcons/tool_ntfy.svg b/docs/public/toolIcons/tool_ntfy.svg
new file mode 100644
index 00000000..71fd647f
--- /dev/null
+++ b/docs/public/toolIcons/tool_ntfy.svg
@@ -0,0 +1,8 @@
+
+
diff --git a/docs/public/toolIcons/tool_postgres.svg b/docs/public/toolIcons/tool_postgres.svg
new file mode 100644
index 00000000..c7acdb18
--- /dev/null
+++ b/docs/public/toolIcons/tool_postgres.svg
@@ -0,0 +1,29 @@
+
+
\ No newline at end of file
diff --git a/docs/public/toolIcons/tool_read_webpage.svg b/docs/public/toolIcons/tool_read_webpage.svg
new file mode 100644
index 00000000..41692e95
--- /dev/null
+++ b/docs/public/toolIcons/tool_read_webpage.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/public/toolIcons/tool_telegram.svg b/docs/public/toolIcons/tool_telegram.svg
new file mode 100644
index 00000000..27536ded
--- /dev/null
+++ b/docs/public/toolIcons/tool_telegram.svg
@@ -0,0 +1,10 @@
+