Compare commits


1 Commit

Author SHA1 Message Date
dependabot[bot]
a5db3d2019 chore(deps): bump actions/labeler from 5 to 6
Bumps [actions/labeler](https://github.com/actions/labeler) from 5 to 6.
- [Release notes](https://github.com/actions/labeler/releases)
- [Commits](https://github.com/actions/labeler/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/labeler
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-04 20:22:45 +00:00
193 changed files with 6677 additions and 18507 deletions


@@ -10,7 +10,7 @@ jobs:
pull-requests: write
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@v5
- uses: actions/labeler@v6
with:
repo-token: "${{ secrets.GITHUB_TOKEN }}"
sync-labels: true

.vscode/launch.json vendored

@@ -2,11 +2,15 @@
"version": "0.2.0",
"configurations": [
{
"name": "Frontend Debug (npm)",
"type": "node-terminal",
"name": "Docker Debug Frontend",
"request": "launch",
"command": "npm run dev",
"cwd": "${workspaceFolder}/frontend"
"type": "chrome",
"preLaunchTask": "docker-compose: debug:frontend",
"url": "http://127.0.0.1:5173",
"webRoot": "${workspaceFolder}/frontend",
"skipFiles": [
"<node_internals>/**"
]
},
{
"name": "Flask Debugger",
@@ -45,27 +49,6 @@
"--pool=solo"
],
"cwd": "${workspaceFolder}"
},
{
"name": "Dev Containers (Mongo + Redis)",
"type": "node-terminal",
"request": "launch",
"command": "docker compose -f deployment/docker-compose-dev.yaml up --build",
"cwd": "${workspaceFolder}"
}
],
"compounds": [
{
"name": "DocsGPT: Full Stack",
"configurations": [
"Frontend Debug (npm)",
"Flask Debugger",
"Celery Debugger"
],
"presentation": {
"group": "DocsGPT",
"order": 1
}
}
]
}

.vscode/tasks.json vendored Normal file

@@ -0,0 +1,21 @@
{
"version": "2.0.0",
"tasks": [
{
"type": "docker-compose",
"label": "docker-compose: debug:frontend",
"dockerCompose": {
"up": {
"detached": true,
"services": [
"frontend"
],
"build": true
},
"files": [
"${workspaceFolder}/docker-compose.yaml"
]
}
}
]
}

HACKTOBERFEST.md

@@ -1,38 +0,0 @@
# **🎉 Join the Hacktoberfest with DocsGPT and win a Free T-shirt for a meaningful PR! 🎉**
Welcome, contributors! We're excited to announce that DocsGPT is participating in Hacktoberfest. Get involved by submitting meaningful pull requests.
All meaningful contributors with accepted PRs created for issues with the `hacktoberfest` label (set by our maintainer team: dartpain, siiddhantt, pabik, ManishMadan2882) will receive a cool T-shirt! 🤩
Please fill in [this form](https://forms.gle/Npaba4n9Epfyx56S8) after your PR is merged.
If you are in doubt, don't hesitate to ping us on Discord - ping me, Alex (dartpain).
## 📜 Here's How to Contribute:
```text
🛠️ Code: This is the golden ticket! Make meaningful contributions through PRs.
🧩 API extension: Build an app utilising the DocsGPT API. We prefer submissions that showcase original ideas and turn the API into an AI agent.
These can be completely separate repos.
For example:
https://github.com/arc53/tg-bot-docsgpt-extenstion or
https://github.com/arc53/DocsGPT-cli
Non-Code Contributions:
📚 Wiki: Improve our documentation, create a guide.
🖥️ Design: Improve the UI/UX or design a new feature.
```
### 📝 Guidelines for Pull Requests:
- Familiarize yourself with the current contributions and our [Roadmap](https://github.com/orgs/arc53/projects/2).
- Before contributing check existing [issues](https://github.com/arc53/DocsGPT/issues) or [create](https://github.com/arc53/DocsGPT/issues/new/choose) an issue and wait to get assigned.
- Once you are finished with your contribution, please fill in this [form](https://forms.gle/Npaba4n9Epfyx56S8).
- Refer to the [Documentation](https://docs.docsgpt.cloud/).
- Feel free to join our [Discord](https://discord.gg/n5BX8dh8rU) server. We're here to help newcomers, so don't hesitate to jump in!
Thank you very much for considering contributing to DocsGPT during Hacktoberfest! 🙏 Your contributions (not just simple typos) could earn you a stylish new t-shirt.
We will publish the t-shirt design later in October.

README.md

@@ -17,7 +17,7 @@
<a href="https://github.com/arc53/DocsGPT/blob/main/LICENSE">![link to license file](https://img.shields.io/github/license/arc53/docsgpt)</a>
<a href="https://www.bestpractices.dev/projects/9907"><img src="https://www.bestpractices.dev/projects/9907/badge"></a>
<a href="https://discord.gg/n5BX8dh8rU">![link to discord](https://img.shields.io/discord/1070046503302877216)</a>
<a href="https://x.com/docsgptai">![X (formerly Twitter) URL](https://img.shields.io/twitter/follow/docsgptai)</a>
<a href="https://twitter.com/docsgptai">![X (formerly Twitter) URL](https://img.shields.io/twitter/follow/docsgptai)</a>
<a href="https://docs.docsgpt.cloud/quickstart">⚡️ Quickstart</a><a href="https://app.docsgpt.cloud/">☁️ Cloud Version</a><a href="https://discord.gg/n5BX8dh8rU">💬 Discord</a>
<br>
@@ -25,17 +25,7 @@
<br>
</div>
<div align="center">
<br>
🎃 <a href="https://github.com/arc53/DocsGPT/blob/main/HACKTOBERFEST.md"> Hacktoberfest Prizes, Rules & Q&A </a> 🎃
<br>
<br>
</div>
<div align="center">
<br>
<img src="https://d3dg1063dc54p9.cloudfront.net/videos/demov7.gif" alt="video-example-of-docs-gpt" width="800" height="450">
</div>
<h3 align="left">
@@ -65,11 +55,9 @@
- [x] Agent optimisations (May 2025)
- [x] Filesystem sources update (July 2025)
- [x] Json Responses (August 2025)
- [x] MCP support (August 2025)
- [x] Google Drive integration (September 2025)
- [x] Add OAuth 2.0 authentication for MCP (September 2025)
- [ ] Sharepoint integration (October 2025)
- [ ] Deep Agents (October 2025)
- [ ] Sharepoint integration (August 2025)
- [ ] MCP support (August 2025)
- [ ] Add OAuth 2.0 authentication for tools and sources (August 2025)
- [ ] Agent scheduling
You can find our full roadmap [here](https://github.com/orgs/arc53/projects/2). Please don't hesitate to contribute or create issues, it helps us improve DocsGPT!
@@ -118,7 +106,7 @@ A more detailed [Quickstart](https://docs.docsgpt.cloud/quickstart) is available
PowerShell -ExecutionPolicy Bypass -File .\setup.ps1
```
Either script will guide you through setting up DocsGPT. Five options are available: using the public API, running locally, connecting to a local inference engine, using a cloud API provider, or building the Docker image locally. The scripts will automatically configure your `.env` file and handle necessary downloads and installations based on your chosen option.
Either script will guide you through setting up DocsGPT. Four options are available: using the public API, running locally, connecting to a local inference engine, or using a cloud API provider. The scripts will automatically configure your `.env` file and handle necessary downloads and installations based on your chosen option.
**Navigate to http://localhost:5173/**


@@ -140,28 +140,28 @@ class BaseAgent(ABC):
tool_id, action_name, call_args = parser.parse_args(call)
call_id = getattr(call, "id", None) or str(uuid.uuid4())
# Check if parsing failed
if tool_id is None or action_name is None:
error_message = f"Error: Failed to parse LLM tool call. Tool name: {getattr(call, 'name', 'unknown')}"
logger.error(error_message)
tool_call_data = {
"tool_name": "unknown",
"call_id": call_id,
"action_name": getattr(call, "name", "unknown"),
"action_name": getattr(call, 'name', 'unknown'),
"arguments": call_args or {},
"result": f"Failed to parse tool call. Invalid tool name format: {getattr(call, 'name', 'unknown')}",
}
yield {"type": "tool_call", "data": {**tool_call_data, "status": "error"}}
self.tool_calls.append(tool_call_data)
return "Failed to parse tool call.", call_id
# Check if tool_id exists in available tools
if tool_id not in tools_dict:
error_message = f"Error: Tool ID '{tool_id}' extracted from LLM call not found in available tools_dict. Available IDs: {list(tools_dict.keys())}"
logger.error(error_message)
# Return error result
tool_call_data = {
"tool_name": "unknown",
@@ -173,7 +173,7 @@ class BaseAgent(ABC):
yield {"type": "tool_call", "data": {**tool_call_data, "status": "error"}}
self.tool_calls.append(tool_call_data)
return f"Tool with ID {tool_id} not found.", call_id
tool_call_data = {
"tool_name": tools_dict[tool_id]["name"],
"call_id": call_id,
@@ -213,25 +213,18 @@ class BaseAgent(ABC):
):
target_dict[param] = value
tm = ToolManager(config={})
# Prepare tool_config and add tool_id for memory tools
if tool_data["name"] == "api_tool":
tool_config = {
"url": tool_data["config"]["actions"][action_name]["url"],
"method": tool_data["config"]["actions"][action_name]["method"],
"headers": headers,
"query_params": query_params,
}
else:
tool_config = tool_data["config"].copy() if tool_data["config"] else {}
# Add tool_id from MongoDB _id for tools that need instance isolation (like memory tool)
# Use MongoDB _id if available, otherwise fall back to enumerated tool_id
tool_config["tool_id"] = str(tool_data.get("_id", tool_id))
tool = tm.load_tool(
tool_data["name"],
tool_config=tool_config,
user_id=self.user, # Pass user ID for MCP tools credential decryption
tool_config=(
{
"url": tool_data["config"]["actions"][action_name]["url"],
"method": tool_data["config"]["actions"][action_name]["method"],
"headers": headers,
"query_params": query_params,
}
if tool_data["name"] == "api_tool"
else tool_data["config"]
),
)
if tool_data["name"] == "api_tool":
print(
@@ -270,15 +263,7 @@ class BaseAgent(ABC):
query: str,
retrieved_data: List[Dict],
) -> List[Dict]:
docs_with_filenames = []
for doc in retrieved_data:
filename = doc.get("filename") or doc.get("title") or doc.get("source")
if filename:
chunk_header = str(filename)
docs_with_filenames.append(f"{chunk_header}\n{doc['text']}")
else:
docs_with_filenames.append(doc["text"])
docs_together = "\n\n".join(docs_with_filenames)
docs_together = "\n".join([doc["text"] for doc in retrieved_data])
p_chat_combine = system_prompt.replace("{summaries}", docs_together)
messages_combine = [{"role": "system", "content": p_chat_combine}]
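
For context on the final hunk above: the removed lines prefixed each retrieved chunk with its filename, title, or source before joining, while the added line joins the raw chunk text with single newlines. A minimal, runnable sketch of the two behaviours (the sample data is hypothetical, not the repository's retriever output):

```python
retrieved_data = [
    {"filename": "intro.md", "text": "DocsGPT answers questions over your docs."},
    {"text": "Chunks without metadata are passed through unchanged."},
]

# Removed behaviour: prefix each chunk with its filename/title/source when present.
docs_with_filenames = []
for doc in retrieved_data:
    filename = doc.get("filename") or doc.get("title") or doc.get("source")
    docs_with_filenames.append(f"{filename}\n{doc['text']}" if filename else doc["text"])
with_headers = "\n\n".join(docs_with_filenames)

# Added behaviour: join the raw text only, separated by single newlines.
plain = "\n".join(doc["text"] for doc in retrieved_data)
```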


@@ -20,10 +20,9 @@ with open(
"r",
) as f:
final_prompt_template = f.read()
MAX_ITERATIONS_REASONING = 10
class ReActAgent(BaseAgent):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
@@ -39,69 +38,49 @@ class ReActAgent(BaseAgent):
collected_content = []
if isinstance(resp, str):
collected_content.append(resp)
elif ( # OpenAI non-streaming or Anthropic non-streaming (older SDK style)
elif ( # OpenAI non-streaming or Anthropic non-streaming (older SDK style)
hasattr(resp, "message")
and hasattr(resp.message, "content")
and resp.message.content is not None
):
collected_content.append(resp.message.content)
elif ( # OpenAI non-streaming (Pydantic model), Anthropic new SDK non-streaming
hasattr(resp, "choices")
and resp.choices
and hasattr(resp.choices[0], "message")
and hasattr(resp.choices[0].message, "content")
and resp.choices[0].message.content is not None
elif ( # OpenAI non-streaming (Pydantic model), Anthropic new SDK non-streaming
hasattr(resp, "choices") and resp.choices and
hasattr(resp.choices[0], "message") and
hasattr(resp.choices[0].message, "content") and
resp.choices[0].message.content is not None
):
collected_content.append(resp.choices[0].message.content) # OpenAI
elif ( # Anthropic new SDK non-streaming content block
hasattr(resp, "content")
and isinstance(resp.content, list)
and resp.content
and hasattr(resp.content[0], "text")
collected_content.append(resp.choices[0].message.content) # OpenAI
elif ( # Anthropic new SDK non-streaming content block
hasattr(resp, "content") and isinstance(resp.content, list) and resp.content and
hasattr(resp.content[0], "text")
):
collected_content.append(resp.content[0].text) # Anthropic
collected_content.append(resp.content[0].text) # Anthropic
else:
# Assume resp is a stream if not a recognized object
chunk = None
try:
for (
chunk
) in (
resp
): # This will fail if resp is not iterable (e.g. a non-streaming response object)
for chunk in resp: # This will fail if resp is not iterable (e.g. a non-streaming response object)
content_piece = ""
# OpenAI-like stream
if (
hasattr(chunk, "choices")
and len(chunk.choices) > 0
and hasattr(chunk.choices[0], "delta")
and hasattr(chunk.choices[0].delta, "content")
and chunk.choices[0].delta.content is not None
):
if hasattr(chunk, 'choices') and len(chunk.choices) > 0 and \
hasattr(chunk.choices[0], 'delta') and \
hasattr(chunk.choices[0].delta, 'content') and \
chunk.choices[0].delta.content is not None:
content_piece = chunk.choices[0].delta.content
# Anthropic-like stream (ContentBlockDelta)
elif (
hasattr(chunk, "type")
and chunk.type == "content_block_delta"
and hasattr(chunk, "delta")
and hasattr(chunk.delta, "text")
):
elif hasattr(chunk, 'type') and chunk.type == 'content_block_delta' and \
hasattr(chunk, 'delta') and hasattr(chunk.delta, 'text'):
content_piece = chunk.delta.text
elif isinstance(chunk, str): # Simplest case: stream of strings
elif isinstance(chunk, str): # Simplest case: stream of strings
content_piece = chunk
if content_piece:
collected_content.append(content_piece)
except (
TypeError
): # If resp is not iterable (e.g. a final response object that wasn't caught above)
logger.debug(
f"Response type {type(resp)} could not be iterated as a stream. It might be a non-streaming object not handled by specific checks."
)
except TypeError: # If resp is not iterable (e.g. a final response object that wasn't caught above)
logger.debug(f"Response type {type(resp)} could not be iterated as a stream. It might be a non-streaming object not handled by specific checks.")
except Exception as e:
logger.error(
f"Error processing potential stream chunk: {e}, chunk was: {getattr(chunk, '__dict__', chunk) if chunk is not None else 'N/A'}"
)
logger.error(f"Error processing potential stream chunk: {e}, chunk was: {getattr(chunk, '__dict__', chunk)}")
return "".join(collected_content)
@@ -133,9 +112,8 @@ class ReActAgent(BaseAgent):
yield {"thought": line_chunk}
self.plan = "".join(current_plan_parts)
if self.plan:
self.observations.append(
f"Plan: {self.plan} Iteration: {iterating_reasoning}"
)
self.observations.append(f"Plan: {self.plan} Iteration: {iterating_reasoning}")
max_obs_len = 20000
obs_str = "\n".join(self.observations)
@@ -147,55 +125,34 @@ class ReActAgent(BaseAgent):
+ f"\n\nObservations:\n{obs_str}"
+ f"\n\nIf there is enough data to complete user query '{query}', Respond with 'SATISFIED' only. Otherwise, continue. Dont Menstion 'SATISFIED' in your response if you are not ready. "
)
messages = self._build_messages(execution_prompt_str, query, retrieved_data)
resp_from_llm_gen = self._llm_gen(messages, log_context)
initial_llm_thought_content = self._extract_content_from_llm_response(
resp_from_llm_gen
)
initial_llm_thought_content = self._extract_content_from_llm_response(resp_from_llm_gen)
if initial_llm_thought_content:
self.observations.append(
f"Initial thought/response: {initial_llm_thought_content}"
)
self.observations.append(f"Initial thought/response: {initial_llm_thought_content}")
else:
logger.info(
"ReActAgent: Initial LLM response (before handler) had no textual content (might be only tool calls)."
)
resp_after_handler = self._llm_handler(
resp_from_llm_gen, tools_dict, messages, log_context
)
for (
tool_call_info
) in (
self.tool_calls
): # Iterate over self.tool_calls populated by _llm_handler
logger.info("ReActAgent: Initial LLM response (before handler) had no textual content (might be only tool calls).")
resp_after_handler = self._llm_handler(resp_from_llm_gen, tools_dict, messages, log_context)
for tool_call_info in self.tool_calls: # Iterate over self.tool_calls populated by _llm_handler
observation_string = (
f"Executed Action: Tool '{tool_call_info.get('tool_name', 'N/A')}' "
f"with arguments '{tool_call_info.get('arguments', '{}')}'. Result: '{str(tool_call_info.get('result', ''))[:200]}...'"
)
self.observations.append(observation_string)
content_after_handler = self._extract_content_from_llm_response(
resp_after_handler
)
content_after_handler = self._extract_content_from_llm_response(resp_after_handler)
if content_after_handler:
self.observations.append(
f"Response after tool execution: {content_after_handler}"
)
self.observations.append(f"Response after tool execution: {content_after_handler}")
else:
logger.info(
"ReActAgent: LLM response after handler had no textual content."
)
logger.info("ReActAgent: LLM response after handler had no textual content.")
if log_context:
log_context.stacks.append(
{
"component": "agent_tool_calls",
"data": {"tool_calls": self.tool_calls.copy()},
}
{"component": "agent_tool_calls", "data": {"tool_calls": self.tool_calls.copy()}}
)
yield {"sources": retrieved_data}
@@ -208,17 +165,13 @@ class ReActAgent(BaseAgent):
display_tool_calls.append(cleaned_tc)
if display_tool_calls:
yield {"tool_calls": display_tool_calls}
if "SATISFIED" in content_after_handler:
logger.info(
"ReActAgent: LLM satisfied with the plan and data. Stopping reasoning."
)
logger.info("ReActAgent: LLM satisfied with the plan and data. Stopping reasoning.")
break
# 3. Create Final Answer based on all observations
final_answer_stream = self._create_final_answer(
query, self.observations, log_context
)
final_answer_stream = self._create_final_answer(query, self.observations, log_context)
for answer_chunk in final_answer_stream:
yield {"answer": answer_chunk}
logger.info("ReActAgent: Finished generating final answer.")
@@ -231,16 +184,12 @@ class ReActAgent(BaseAgent):
summaries = docs_data if docs_data else "No documents retrieved."
plan_prompt_filled = plan_prompt_filled.replace("{summaries}", summaries)
plan_prompt_filled = plan_prompt_filled.replace("{prompt}", self.prompt or "")
plan_prompt_filled = plan_prompt_filled.replace(
"{observations}", "\n".join(self.observations)
)
plan_prompt_filled = plan_prompt_filled.replace("{observations}", "\n".join(self.observations))
messages = [{"role": "user", "content": plan_prompt_filled}]
plan_stream_from_llm = self.llm.gen_stream(
model=self.gpt_model,
messages=messages,
tools=getattr(self, "tools", None), # Use self.tools
model=self.gpt_model, messages=messages, tools=getattr(self, 'tools', None) # Use self.tools
)
if log_context:
data = build_stack_data(self.llm)
@@ -257,12 +206,8 @@ class ReActAgent(BaseAgent):
observation_string = "\n".join(observations)
max_obs_len = 10000
if len(observation_string) > max_obs_len:
observation_string = (
observation_string[:max_obs_len] + "\n...[observations truncated]"
)
logger.warning(
"ReActAgent: Truncated observations for final answer prompt due to length."
)
observation_string = observation_string[:max_obs_len] + "\n...[observations truncated]"
logger.warning("ReActAgent: Truncated observations for final answer prompt due to length.")
final_answer_prompt_filled = final_prompt_template.format(
query=query, observations=observation_string
@@ -281,4 +226,4 @@ class ReActAgent(BaseAgent):
for chunk in final_answer_stream_from_llm:
content_piece = self._extract_content_from_llm_response(chunk)
if content_piece:
yield content_piece
yield content_piece
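
The `_extract_content_from_llm_response` hunks above mostly reflow the same duck-typing cascade between formatting styles. A condensed sketch of that strategy, assuming only the response shapes the comments name (a plain string, OpenAI-style `choices[0].message.content` or streamed `choices[0].delta.content`, Anthropic-style `content[0].text` or `content_block_delta` chunks, or an iterable of strings):

```python
def extract_text(resp):
    """Best-effort text extraction across LLM SDK response shapes (sketch)."""
    if isinstance(resp, str):
        return resp
    # OpenAI-style non-streaming: resp.choices[0].message.content
    choices = getattr(resp, "choices", None)
    if choices:
        message = getattr(choices[0], "message", None)
        content = getattr(message, "content", None) if message else None
        if content is not None:
            return content
    # Anthropic-style non-streaming: resp.content is a list of text blocks.
    blocks = getattr(resp, "content", None)
    if isinstance(blocks, list) and blocks and hasattr(blocks[0], "text"):
        return blocks[0].text
    # Fall back to treating resp as a stream of chunks.
    pieces = []
    try:
        for chunk in resp:
            if isinstance(chunk, str):
                pieces.append(chunk)
            elif getattr(chunk, "type", None) == "content_block_delta":
                pieces.append(chunk.delta.text)  # Anthropic stream delta
            elif getattr(chunk, "choices", None):
                delta = getattr(chunk.choices[0], "delta", None)
                text = getattr(delta, "content", None) if delta else None
                if text:
                    pieces.append(text)
    except TypeError:
        pass  # resp was not iterable; nothing more to recover
    return "".join(pieces)
```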


@@ -1,861 +0,0 @@
import asyncio
import base64
import json
import logging
import time
from typing import Any, Dict, List, Optional
from urllib.parse import parse_qs, urlparse
from application.agents.tools.base import Tool
from application.api.user.tasks import mcp_oauth_status_task, mcp_oauth_task
from application.cache import get_redis_instance
from application.core.mongo_db import MongoDB
from application.core.settings import settings
from application.security.encryption import decrypt_credentials
from fastmcp import Client
from fastmcp.client.auth import BearerAuth
from fastmcp.client.transports import (
SSETransport,
StdioTransport,
StreamableHttpTransport,
)
from mcp.client.auth import OAuthClientProvider, TokenStorage
from mcp.shared.auth import OAuthClientInformationFull, OAuthClientMetadata, OAuthToken
from pydantic import AnyHttpUrl, ValidationError
from redis import Redis
mongo = MongoDB.get_client()
db = mongo[settings.MONGO_DB_NAME]
_mcp_clients_cache = {}
class MCPTool(Tool):
"""
MCP Tool
Connect to remote Model Context Protocol (MCP) servers to access dynamic tools and resources.
"""
def __init__(self, config: Dict[str, Any], user_id: Optional[str] = None):
"""
Initialize the MCP Tool with configuration.
Args:
config: Dictionary containing MCP server configuration:
- server_url: URL of the remote MCP server
- transport_type: Transport type (auto, sse, http, stdio)
- auth_type: Type of authentication (bearer, oauth, api_key, basic, none)
- encrypted_credentials: Encrypted credentials (if available)
- timeout: Request timeout in seconds (default: 30)
- headers: Custom headers for requests
- command: Command for STDIO transport
- args: Arguments for STDIO transport
- oauth_scopes: OAuth scopes for oauth auth type
- oauth_client_name: OAuth client name for oauth auth type
user_id: User ID for decrypting credentials (required if encrypted_credentials exist)
"""
self.config = config
self.user_id = user_id
self.server_url = config.get("server_url", "")
self.transport_type = config.get("transport_type", "auto")
self.auth_type = config.get("auth_type", "none")
self.timeout = config.get("timeout", 30)
self.custom_headers = config.get("headers", {})
self.auth_credentials = {}
if config.get("encrypted_credentials") and user_id:
self.auth_credentials = decrypt_credentials(
config["encrypted_credentials"], user_id
)
else:
self.auth_credentials = config.get("auth_credentials", {})
self.oauth_scopes = config.get("oauth_scopes", [])
self.oauth_task_id = config.get("oauth_task_id", None)
self.oauth_client_name = config.get("oauth_client_name", "DocsGPT-MCP")
self.redirect_uri = f"{settings.API_URL}/api/mcp_server/callback"
self.available_tools = []
self._cache_key = self._generate_cache_key()
self._client = None
# Only validate and setup if server_url is provided and not OAuth
if self.server_url and self.auth_type != "oauth":
self._setup_client()
def _generate_cache_key(self) -> str:
"""Generate a unique cache key for this MCP server configuration."""
auth_key = ""
if self.auth_type == "oauth":
scopes_str = ",".join(self.oauth_scopes) if self.oauth_scopes else "none"
auth_key = f"oauth:{self.oauth_client_name}:{scopes_str}"
elif self.auth_type in ["bearer"]:
token = self.auth_credentials.get(
"bearer_token", ""
) or self.auth_credentials.get("access_token", "")
auth_key = f"bearer:{token[:10]}..." if token else "bearer:none"
elif self.auth_type == "api_key":
api_key = self.auth_credentials.get("api_key", "")
auth_key = f"apikey:{api_key[:10]}..." if api_key else "apikey:none"
elif self.auth_type == "basic":
username = self.auth_credentials.get("username", "")
auth_key = f"basic:{username}"
else:
auth_key = "none"
return f"{self.server_url}#{self.transport_type}#{auth_key}"
def _setup_client(self):
"""Setup FastMCP client with proper transport and authentication."""
global _mcp_clients_cache
if self._cache_key in _mcp_clients_cache:
cached_data = _mcp_clients_cache[self._cache_key]
if time.time() - cached_data["created_at"] < 1800:
self._client = cached_data["client"]
return
else:
del _mcp_clients_cache[self._cache_key]
transport = self._create_transport()
auth = None
if self.auth_type == "oauth":
redis_client = get_redis_instance()
auth = DocsGPTOAuth(
mcp_url=self.server_url,
scopes=self.oauth_scopes,
redis_client=redis_client,
redirect_uri=self.redirect_uri,
task_id=self.oauth_task_id,
db=db,
user_id=self.user_id,
)
elif self.auth_type == "bearer":
token = self.auth_credentials.get(
"bearer_token", ""
) or self.auth_credentials.get("access_token", "")
if token:
auth = BearerAuth(token)
self._client = Client(transport, auth=auth)
_mcp_clients_cache[self._cache_key] = {
"client": self._client,
"created_at": time.time(),
}
def _create_transport(self):
"""Create appropriate transport based on configuration."""
headers = {"Content-Type": "application/json", "User-Agent": "DocsGPT-MCP/1.0"}
headers.update(self.custom_headers)
if self.auth_type == "api_key":
api_key = self.auth_credentials.get("api_key", "")
header_name = self.auth_credentials.get("api_key_header", "X-API-Key")
if api_key:
headers[header_name] = api_key
elif self.auth_type == "basic":
username = self.auth_credentials.get("username", "")
password = self.auth_credentials.get("password", "")
if username and password:
credentials = base64.b64encode(
f"{username}:{password}".encode()
).decode()
headers["Authorization"] = f"Basic {credentials}"
if self.transport_type == "auto":
if "sse" in self.server_url.lower() or self.server_url.endswith("/sse"):
transport_type = "sse"
else:
transport_type = "http"
else:
transport_type = self.transport_type
if transport_type == "sse":
headers.update({"Accept": "text/event-stream", "Cache-Control": "no-cache"})
return SSETransport(url=self.server_url, headers=headers)
elif transport_type == "http":
return StreamableHttpTransport(url=self.server_url, headers=headers)
elif transport_type == "stdio":
command = self.config.get("command", "python")
args = self.config.get("args", [])
env = self.auth_credentials if self.auth_credentials else None
return StdioTransport(command=command, args=args, env=env)
else:
return StreamableHttpTransport(url=self.server_url, headers=headers)
def _format_tools(self, tools_response) -> List[Dict]:
"""Format tools response to match expected format."""
if hasattr(tools_response, "tools"):
tools = tools_response.tools
elif isinstance(tools_response, list):
tools = tools_response
else:
tools = []
tools_dict = []
for tool in tools:
if hasattr(tool, "name"):
tool_dict = {
"name": tool.name,
"description": tool.description,
}
if hasattr(tool, "inputSchema"):
tool_dict["inputSchema"] = tool.inputSchema
tools_dict.append(tool_dict)
elif isinstance(tool, dict):
tools_dict.append(tool)
else:
if hasattr(tool, "model_dump"):
tools_dict.append(tool.model_dump())
else:
tools_dict.append({"name": str(tool), "description": ""})
return tools_dict
async def _execute_with_client(self, operation: str, *args, **kwargs):
"""Execute operation with FastMCP client."""
if not self._client:
raise Exception("FastMCP client not initialized")
async with self._client:
if operation == "ping":
return await self._client.ping()
elif operation == "list_tools":
tools_response = await self._client.list_tools()
self.available_tools = self._format_tools(tools_response)
return self.available_tools
elif operation == "call_tool":
tool_name = args[0]
tool_args = kwargs
return await self._client.call_tool(tool_name, tool_args)
elif operation == "list_resources":
return await self._client.list_resources()
elif operation == "list_prompts":
return await self._client.list_prompts()
else:
raise Exception(f"Unknown operation: {operation}")
def _run_async_operation(self, operation: str, *args, **kwargs):
"""Run async operation in sync context."""
try:
try:
loop = asyncio.get_running_loop()
import concurrent.futures
def run_in_thread():
new_loop = asyncio.new_event_loop()
asyncio.set_event_loop(new_loop)
try:
return new_loop.run_until_complete(
self._execute_with_client(operation, *args, **kwargs)
)
finally:
new_loop.close()
with concurrent.futures.ThreadPoolExecutor() as executor:
future = executor.submit(run_in_thread)
return future.result(timeout=self.timeout)
except RuntimeError:
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
return loop.run_until_complete(
self._execute_with_client(operation, *args, **kwargs)
)
finally:
loop.close()
except Exception as e:
print(f"Error occurred while running async operation: {e}")
raise
def discover_tools(self) -> List[Dict]:
"""
Discover available tools from the MCP server using FastMCP.
Returns:
List of tool definitions from the server
"""
if not self.server_url:
return []
if not self._client:
self._setup_client()
try:
tools = self._run_async_operation("list_tools")
self.available_tools = tools
return self.available_tools
except Exception as e:
raise Exception(f"Failed to discover tools from MCP server: {str(e)}")
def execute_action(self, action_name: str, **kwargs) -> Any:
"""
Execute an action on the remote MCP server using FastMCP.
Args:
action_name: Name of the action to execute
**kwargs: Parameters for the action
Returns:
Result from the MCP server
"""
if not self.server_url:
raise Exception("No MCP server configured")
if not self._client:
self._setup_client()
cleaned_kwargs = {}
for key, value in kwargs.items():
if value == "" or value is None:
continue
cleaned_kwargs[key] = value
try:
result = self._run_async_operation(
"call_tool", action_name, **cleaned_kwargs
)
return self._format_result(result)
except Exception as e:
raise Exception(f"Failed to execute action '{action_name}': {str(e)}")
def _format_result(self, result) -> Dict:
"""Format FastMCP result to match expected format."""
if hasattr(result, "content"):
content_list = []
for content_item in result.content:
if hasattr(content_item, "text"):
content_list.append({"type": "text", "text": content_item.text})
elif hasattr(content_item, "data"):
content_list.append({"type": "data", "data": content_item.data})
else:
content_list.append(
{"type": "unknown", "content": str(content_item)}
)
return {
"content": content_list,
"isError": getattr(result, "isError", False),
}
else:
return result
def test_connection(self) -> Dict:
"""
Test the connection to the MCP server and validate functionality.
Returns:
Dictionary with connection test results including tool count
"""
if not self.server_url:
return {
"success": False,
"message": "No MCP server URL configured",
"tools_count": 0,
"transport_type": self.transport_type,
"auth_type": self.auth_type,
"error_type": "ConfigurationError",
}
if not self._client:
self._setup_client()
try:
if self.auth_type == "oauth":
return self._test_oauth_connection()
else:
return self._test_regular_connection()
except Exception as e:
return {
"success": False,
"message": f"Connection failed: {str(e)}",
"tools_count": 0,
"transport_type": self.transport_type,
"auth_type": self.auth_type,
"error_type": type(e).__name__,
}
def _test_regular_connection(self) -> Dict:
"""Test connection for non-OAuth auth types."""
try:
self._run_async_operation("ping")
ping_success = True
except Exception:
ping_success = False
tools = self.discover_tools()
message = f"Successfully connected to MCP server. Found {len(tools)} tools."
if not ping_success:
message += " (Ping not supported, but tool discovery worked)"
return {
"success": True,
"message": message,
"tools_count": len(tools),
"transport_type": self.transport_type,
"auth_type": self.auth_type,
"ping_supported": ping_success,
"tools": [tool.get("name", "unknown") for tool in tools],
}
def _test_oauth_connection(self) -> Dict:
"""Test connection for OAuth auth type with proper async handling."""
try:
task = mcp_oauth_task.delay(config=self.config, user=self.user_id)
if not task:
raise Exception("Failed to start OAuth authentication")
return {
"success": True,
"requires_oauth": True,
"task_id": task.id,
"status": "pending",
"message": "OAuth flow started",
}
except Exception as e:
return {
"success": False,
"message": f"OAuth connection failed: {str(e)}",
"tools_count": 0,
"transport_type": self.transport_type,
"auth_type": self.auth_type,
"error_type": type(e).__name__,
}
def get_actions_metadata(self) -> List[Dict]:
"""
Get metadata for all available actions.
Returns:
List of action metadata dictionaries
"""
actions = []
for tool in self.available_tools:
input_schema = (
tool.get("inputSchema")
or tool.get("input_schema")
or tool.get("schema")
or tool.get("parameters")
)
parameters_schema = {
"type": "object",
"properties": {},
"required": [],
}
if input_schema:
if isinstance(input_schema, dict):
if "properties" in input_schema:
parameters_schema = {
"type": input_schema.get("type", "object"),
"properties": input_schema.get("properties", {}),
"required": input_schema.get("required", []),
}
for key in ["additionalProperties", "description"]:
if key in input_schema:
parameters_schema[key] = input_schema[key]
else:
parameters_schema["properties"] = input_schema
action = {
"name": tool.get("name", ""),
"description": tool.get("description", ""),
"parameters": parameters_schema,
}
actions.append(action)
return actions
def get_config_requirements(self) -> Dict:
"""Get configuration requirements for the MCP tool."""
return {
"server_url": {
"type": "string",
"description": "URL of the remote MCP server (e.g., https://api.example.com/mcp or https://docs.mcp.cloudflare.com/sse)",
"required": True,
},
"transport_type": {
"type": "string",
"description": "Transport type for connection",
"enum": ["auto", "sse", "http", "stdio"],
"default": "auto",
"required": False,
"help": {
"auto": "Automatically detect best transport",
"sse": "Server-Sent Events (for real-time streaming)",
"http": "HTTP streaming (recommended for production)",
"stdio": "Standard I/O (for local servers)",
},
},
"auth_type": {
"type": "string",
"description": "Authentication type",
"enum": ["none", "bearer", "oauth", "api_key", "basic"],
"default": "none",
"required": True,
"help": {
"none": "No authentication",
"bearer": "Bearer token authentication",
"oauth": "OAuth 2.1 authentication (with frontend integration)",
"api_key": "API key authentication",
"basic": "Basic authentication",
},
},
"auth_credentials": {
"type": "object",
"description": "Authentication credentials (varies by auth_type)",
"required": False,
"properties": {
"bearer_token": {
"type": "string",
"description": "Bearer token for bearer auth",
},
"access_token": {
"type": "string",
"description": "Access token for OAuth (if pre-obtained)",
},
"api_key": {
"type": "string",
"description": "API key for api_key auth",
},
"api_key_header": {
"type": "string",
"description": "Header name for API key (default: X-API-Key)",
},
"username": {
"type": "string",
"description": "Username for basic auth",
},
"password": {
"type": "string",
"description": "Password for basic auth",
},
},
},
"oauth_scopes": {
"type": "array",
"description": "OAuth scopes to request (for oauth auth_type)",
"items": {"type": "string"},
"required": False,
"default": [],
},
"oauth_client_name": {
"type": "string",
"description": "Client name for OAuth registration (for oauth auth_type)",
"default": "DocsGPT-MCP",
"required": False,
},
"headers": {
"type": "object",
"description": "Custom headers to send with requests",
"required": False,
},
"timeout": {
"type": "integer",
"description": "Request timeout in seconds",
"default": 30,
"minimum": 1,
"maximum": 300,
"required": False,
},
"command": {
"type": "string",
"description": "Command to run for STDIO transport (e.g., 'python')",
"required": False,
},
"args": {
"type": "array",
"description": "Arguments for STDIO command",
"items": {"type": "string"},
"required": False,
},
}
class DocsGPTOAuth(OAuthClientProvider):
"""
Custom OAuth handler for DocsGPT that uses frontend redirect instead of browser.
"""
def __init__(
self,
mcp_url: str,
redirect_uri: str,
redis_client: Redis | None = None,
redis_prefix: str = "mcp_oauth:",
task_id: str = None,
scopes: str | list[str] | None = None,
client_name: str = "DocsGPT-MCP",
user_id=None,
db=None,
additional_client_metadata: dict[str, Any] | None = None,
):
"""
Initialize custom OAuth client provider for DocsGPT.
Args:
mcp_url: Full URL to the MCP endpoint
redirect_uri: Custom redirect URI for DocsGPT frontend
redis_client: Redis client for storing auth state
redis_prefix: Prefix for Redis keys
task_id: Task ID for tracking auth status
scopes: OAuth scopes to request
client_name: Name for this client during registration
user_id: User ID for token storage
db: Database instance for token storage
additional_client_metadata: Extra fields for OAuthClientMetadata
"""
self.redirect_uri = redirect_uri
self.redis_client = redis_client
self.redis_prefix = redis_prefix
self.task_id = task_id
self.user_id = user_id
self.db = db
parsed_url = urlparse(mcp_url)
self.server_base_url = f"{parsed_url.scheme}://{parsed_url.netloc}"
if isinstance(scopes, list):
scopes = " ".join(scopes)
client_metadata = OAuthClientMetadata(
client_name=client_name,
redirect_uris=[AnyHttpUrl(redirect_uri)],
grant_types=["authorization_code", "refresh_token"],
response_types=["code"],
scope=scopes,
**(additional_client_metadata or {}),
)
storage = DBTokenStorage(
server_url=self.server_base_url, user_id=self.user_id, db_client=self.db
)
super().__init__(
server_url=self.server_base_url,
client_metadata=client_metadata,
storage=storage,
redirect_handler=self.redirect_handler,
callback_handler=self.callback_handler,
)
self.auth_url = None
self.extracted_state = None
def _process_auth_url(self, authorization_url: str) -> tuple[str, str]:
"""Process authorization URL to extract state"""
try:
parsed_url = urlparse(authorization_url)
query_params = parse_qs(parsed_url.query)
state_params = query_params.get("state", [])
if state_params:
state = state_params[0]
else:
raise ValueError("No state in auth URL")
return authorization_url, state
except Exception as e:
raise Exception(f"Failed to process auth URL: {e}")
async def redirect_handler(self, authorization_url: str) -> None:
"""Store auth URL and state in Redis for frontend to use."""
auth_url, state = self._process_auth_url(authorization_url)
logging.info(
"[DocsGPTOAuth] Processed auth_url: %s, state: %s", auth_url, state
)
self.auth_url = auth_url
self.extracted_state = state
if self.redis_client and self.extracted_state:
key = f"{self.redis_prefix}auth_url:{self.extracted_state}"
self.redis_client.setex(key, 600, auth_url)
logging.info("[DocsGPTOAuth] Stored auth_url in Redis: %s", key)
if self.task_id:
status_key = f"mcp_oauth_status:{self.task_id}"
status_data = {
"status": "requires_redirect",
"message": "OAuth authorization required",
"authorization_url": self.auth_url,
"state": self.extracted_state,
"requires_oauth": True,
"task_id": self.task_id,
}
self.redis_client.setex(status_key, 600, json.dumps(status_data))
async def callback_handler(self) -> tuple[str, str | None]:
"""Wait for auth code from Redis using the state value."""
if not self.redis_client or not self.extracted_state:
raise Exception("Redis client or state not configured for OAuth")
poll_interval = 1
max_wait_time = 300
code_key = f"{self.redis_prefix}code:{self.extracted_state}"
if self.task_id:
status_key = f"mcp_oauth_status:{self.task_id}"
status_data = {
"status": "awaiting_callback",
"message": "Waiting for OAuth callback...",
"authorization_url": self.auth_url,
"state": self.extracted_state,
"requires_oauth": True,
"task_id": self.task_id,
}
self.redis_client.setex(status_key, 600, json.dumps(status_data))
start_time = time.time()
while time.time() - start_time < max_wait_time:
code_data = self.redis_client.get(code_key)
if code_data:
code = code_data.decode()
returned_state = self.extracted_state
self.redis_client.delete(code_key)
self.redis_client.delete(
f"{self.redis_prefix}auth_url:{self.extracted_state}"
)
self.redis_client.delete(
f"{self.redis_prefix}state:{self.extracted_state}"
)
if self.task_id:
status_data = {
"status": "callback_received",
"message": "OAuth callback received, completing authentication...",
"task_id": self.task_id,
}
self.redis_client.setex(status_key, 600, json.dumps(status_data))
return code, returned_state
error_key = f"{self.redis_prefix}error:{self.extracted_state}"
error_data = self.redis_client.get(error_key)
if error_data:
error_msg = error_data.decode()
self.redis_client.delete(error_key)
self.redis_client.delete(
f"{self.redis_prefix}auth_url:{self.extracted_state}"
)
self.redis_client.delete(
f"{self.redis_prefix}state:{self.extracted_state}"
)
raise Exception(f"OAuth error: {error_msg}")
await asyncio.sleep(poll_interval)
self.redis_client.delete(f"{self.redis_prefix}auth_url:{self.extracted_state}")
self.redis_client.delete(f"{self.redis_prefix}state:{self.extracted_state}")
raise Exception("OAuth callback timeout: no code received within 5 minutes")
class DBTokenStorage(TokenStorage):
def __init__(self, server_url: str, user_id: str, db_client):
self.server_url = server_url
self.user_id = user_id
self.db_client = db_client
self.collection = db_client["connector_sessions"]
@staticmethod
def get_base_url(url: str) -> str:
parsed = urlparse(url)
return f"{parsed.scheme}://{parsed.netloc}"
def get_db_key(self) -> dict:
return {
"server_url": self.get_base_url(self.server_url),
"user_id": self.user_id,
}
async def get_tokens(self) -> OAuthToken | None:
doc = await asyncio.to_thread(self.collection.find_one, self.get_db_key())
if not doc or "tokens" not in doc:
return None
try:
tokens = OAuthToken.model_validate(doc["tokens"])
return tokens
except ValidationError as e:
logging.error(f"Could not load tokens: {e}")
return None
async def set_tokens(self, tokens: OAuthToken) -> None:
await asyncio.to_thread(
self.collection.update_one,
self.get_db_key(),
{"$set": {"tokens": tokens.model_dump()}},
True,
)
logging.info(f"Saved tokens for {self.get_base_url(self.server_url)}")
async def get_client_info(self) -> OAuthClientInformationFull | None:
doc = await asyncio.to_thread(self.collection.find_one, self.get_db_key())
if not doc or "client_info" not in doc:
return None
try:
client_info = OAuthClientInformationFull.model_validate(doc["client_info"])
tokens = await self.get_tokens()
if tokens is None:
logging.debug(
"No tokens found, clearing client info to force fresh registration."
)
await asyncio.to_thread(
self.collection.update_one,
self.get_db_key(),
{"$unset": {"client_info": ""}},
)
return None
return client_info
except ValidationError as e:
logging.error(f"Could not load client info: {e}")
return None
def _serialize_client_info(self, info: dict) -> dict:
if "redirect_uris" in info and isinstance(info["redirect_uris"], list):
info["redirect_uris"] = [str(u) for u in info["redirect_uris"]]
return info
async def set_client_info(self, client_info: OAuthClientInformationFull) -> None:
serialized_info = self._serialize_client_info(client_info.model_dump())
await asyncio.to_thread(
self.collection.update_one,
self.get_db_key(),
{"$set": {"client_info": serialized_info}},
True,
)
logging.info(f"Saved client info for {self.get_base_url(self.server_url)}")
async def clear(self) -> None:
await asyncio.to_thread(self.collection.delete_one, self.get_db_key())
logging.info(f"Cleared OAuth cache for {self.get_base_url(self.server_url)}")
@classmethod
async def clear_all(cls, db_client) -> None:
collection = db_client["connector_sessions"]
await asyncio.to_thread(collection.delete_many, {})
logging.info("Cleared all OAuth client cache data.")
class MCPOAuthManager:
"""Manager for handling MCP OAuth callbacks."""
def __init__(self, redis_client: Redis | None, redis_prefix: str = "mcp_oauth:"):
self.redis_client = redis_client
self.redis_prefix = redis_prefix
def handle_oauth_callback(
self, state: str, code: str, error: Optional[str] = None
) -> bool:
"""
Handle OAuth callback from provider.
Args:
state: The state parameter from OAuth callback
code: The authorization code from OAuth callback
error: Error message if OAuth failed
Returns:
True if successful, False otherwise
"""
try:
if not self.redis_client or not state:
raise Exception("Redis client or state not provided")
if error:
error_key = f"{self.redis_prefix}error:{state}"
self.redis_client.setex(error_key, 300, error)
raise Exception(f"OAuth error received: {error}")
code_key = f"{self.redis_prefix}code:{state}"
self.redis_client.setex(code_key, 300, code)
state_key = f"{self.redis_prefix}state:{state}"
self.redis_client.setex(state_key, 300, "completed")
return True
except Exception as e:
logging.error(f"Error handling OAuth callback: {e}")
return False
def get_oauth_status(self, task_id: str) -> Dict[str, Any]:
"""Get current status of OAuth flow using provided task_id."""
if not task_id:
return {"status": "not_started", "message": "OAuth flow not started"}
return mcp_oauth_status_task(task_id)
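
Taken together, the deleted module above packages server configuration, transport selection, authentication, and async-to-sync bridging behind a single `Tool` interface. A hypothetical usage sketch, based only on the docstring and `get_config_requirements` shown above (the URL and token are placeholders):

```python
# Hypothetical usage of the removed MCPTool, reconstructed from its docstring.
config = {
    "server_url": "https://api.example.com/mcp",  # placeholder MCP endpoint
    "transport_type": "auto",  # auto | sse | http | stdio
    "auth_type": "bearer",     # none | bearer | oauth | api_key | basic
    "auth_credentials": {"bearer_token": "example-token"},
    "timeout": 30,
}

tool = MCPTool(config, user_id="user-123")
status = tool.test_connection()  # pings, then falls back to tool discovery
if status["success"]:
    for action in tool.get_actions_metadata():
        print(action["name"], "-", action["description"])
```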


@@ -1,546 +0,0 @@
from datetime import datetime
from pathlib import Path
from typing import Any, Dict, List, Optional
import re
import uuid
from .base import Tool
from application.core.mongo_db import MongoDB
from application.core.settings import settings
class MemoryTool(Tool):
"""Memory
Stores and retrieves information across conversations through a memory file directory.
"""
def __init__(self, tool_config: Optional[Dict[str, Any]] = None, user_id: Optional[str] = None) -> None:
"""Initialize the tool.
Args:
tool_config: Optional tool configuration. Should include:
- tool_id: Unique identifier for this memory tool instance (from user_tools._id)
This ensures each user's tool configuration has isolated memories
user_id: The authenticated user's id (should come from decoded_token["sub"]).
"""
self.user_id: Optional[str] = user_id
# Get tool_id from configuration (passed from user_tools._id in production)
# In production, tool_id is the MongoDB ObjectId string from user_tools collection
if tool_config and "tool_id" in tool_config:
self.tool_id = tool_config["tool_id"]
elif user_id:
# Fallback for backward compatibility or testing
self.tool_id = f"default_{user_id}"
else:
# Last resort fallback (shouldn't happen in normal use)
self.tool_id = str(uuid.uuid4())
db = MongoDB.get_client()[settings.MONGO_DB_NAME]
self.collection = db["memories"]
# -----------------------------
# Action implementations
# -----------------------------
def execute_action(self, action_name: str, **kwargs: Any) -> str:
"""Execute an action by name.
Args:
action_name: One of view, create, str_replace, insert, delete, rename.
**kwargs: Parameters for the action.
Returns:
A human-readable string result.
"""
if not self.user_id:
return "Error: MemoryTool requires a valid user_id."
if action_name == "view":
return self._view(
kwargs.get("path", "/"),
kwargs.get("view_range")
)
if action_name == "create":
return self._create(
kwargs.get("path", ""),
kwargs.get("file_text", "")
)
if action_name == "str_replace":
return self._str_replace(
kwargs.get("path", ""),
kwargs.get("old_str", ""),
kwargs.get("new_str", "")
)
if action_name == "insert":
return self._insert(
kwargs.get("path", ""),
kwargs.get("insert_line", 1),
kwargs.get("insert_text", "")
)
if action_name == "delete":
return self._delete(kwargs.get("path", ""))
if action_name == "rename":
return self._rename(
kwargs.get("old_path", ""),
kwargs.get("new_path", "")
)
return f"Unknown action: {action_name}"
def get_actions_metadata(self) -> List[Dict[str, Any]]:
"""Return JSON metadata describing supported actions for tool schemas."""
return [
{
"name": "view",
"description": "Shows directory contents or file contents with optional line ranges.",
"parameters": {
"type": "object",
"properties": {
"path": {
"type": "string",
"description": "Path to file or directory (e.g., /notes.txt or /project/ or /)."
},
"view_range": {
"type": "array",
"items": {"type": "integer"},
"description": "Optional [start_line, end_line] to view specific lines (1-indexed)."
}
},
"required": ["path"]
},
},
{
"name": "create",
"description": "Create or overwrite a file.",
"parameters": {
"type": "object",
"properties": {
"path": {
"type": "string",
"description": "File path to create (e.g., /notes.txt or /project/task.txt)."
},
"file_text": {
"type": "string",
"description": "Content to write to the file."
}
},
"required": ["path", "file_text"]
},
},
{
"name": "str_replace",
"description": "Replace text in a file.",
"parameters": {
"type": "object",
"properties": {
"path": {
"type": "string",
"description": "File path (e.g., /notes.txt)."
},
"old_str": {
"type": "string",
"description": "String to find."
},
"new_str": {
"type": "string",
"description": "String to replace with."
}
},
"required": ["path", "old_str", "new_str"]
},
},
{
"name": "insert",
"description": "Insert text at a specific line in a file.",
"parameters": {
"type": "object",
"properties": {
"path": {
"type": "string",
"description": "File path (e.g., /notes.txt)."
},
"insert_line": {
"type": "integer",
"description": "Line number to insert at (1-indexed)."
},
"insert_text": {
"type": "string",
"description": "Text to insert."
}
},
"required": ["path", "insert_line", "insert_text"]
},
},
{
"name": "delete",
"description": "Delete a file or directory.",
"parameters": {
"type": "object",
"properties": {
"path": {
"type": "string",
"description": "Path to delete (e.g., /notes.txt or /project/)."
}
},
"required": ["path"]
},
},
{
"name": "rename",
"description": "Rename or move a file/directory.",
"parameters": {
"type": "object",
"properties": {
"old_path": {
"type": "string",
"description": "Current path (e.g., /old.txt)."
},
"new_path": {
"type": "string",
"description": "New path (e.g., /new.txt)."
}
},
"required": ["old_path", "new_path"]
},
},
]
def get_config_requirements(self) -> Dict[str, Any]:
"""Return configuration requirements."""
return {}
# -----------------------------
# Path validation
# -----------------------------
def _validate_path(self, path: str) -> Optional[str]:
"""Validate and normalize path.
Args:
path: User-provided path.
Returns:
Normalized path or None if invalid.
"""
if not path:
return None
# Remove any leading/trailing whitespace
path = path.strip()
# Preserve whether path ends with / (indicates directory)
is_directory = path.endswith("/")
# Ensure path starts with / for consistency
if not path.startswith("/"):
path = "/" + path
# Check for directory traversal patterns
if ".." in path or path.count("//") > 0:
return None
# Normalize the path
try:
# Convert to Path object and resolve to canonical form
normalized = str(Path(path).as_posix())
# Ensure it still starts with /
if not normalized.startswith("/"):
return None
# Preserve trailing slash for directories
if is_directory and not normalized.endswith("/") and normalized != "/":
normalized = normalized + "/"
return normalized
except Exception:
return None
# -----------------------------
# Internal helpers
# -----------------------------
def _view(self, path: str, view_range: Optional[List[int]] = None) -> str:
"""View directory contents or file contents."""
validated_path = self._validate_path(path)
if not validated_path:
return "Error: Invalid path."
# Check if viewing directory (ends with / or is root)
if validated_path == "/" or validated_path.endswith("/"):
return self._view_directory(validated_path)
# Otherwise view file
return self._view_file(validated_path, view_range)
def _view_directory(self, path: str) -> str:
"""List files in a directory."""
# Ensure path ends with / for proper prefix matching
search_path = path if path.endswith("/") else path + "/"
# Find all files that start with this directory path
query = {
"user_id": self.user_id,
"tool_id": self.tool_id,
"path": {"$regex": f"^{re.escape(search_path)}"}
}
docs = list(self.collection.find(query, {"path": 1}))
if not docs:
return f"Directory: {path}\n(empty)"
# Extract filenames relative to the directory
files = []
for doc in docs:
file_path = doc["path"]
# Remove the directory prefix
if file_path.startswith(search_path):
relative = file_path[len(search_path):]
if relative:
files.append(relative)
files.sort()
file_list = "\n".join(f"- {f}" for f in files)
return f"Directory: {path}\n{file_list}"
def _view_file(self, path: str, view_range: Optional[List[int]] = None) -> str:
"""View file contents with optional line range."""
doc = self.collection.find_one({"user_id": self.user_id, "tool_id": self.tool_id, "path": path})
if not doc or not doc.get("content"):
return f"Error: File not found: {path}"
content = str(doc["content"])
# Apply view_range if specified
if view_range and len(view_range) == 2:
lines = content.split("\n")
start, end = view_range
# Convert to 0-indexed
start_idx = max(0, start - 1)
end_idx = min(len(lines), end)
if start_idx >= len(lines):
return f"Error: Line range out of bounds. File has {len(lines)} lines."
selected_lines = lines[start_idx:end_idx]
# Add line numbers (enumerate with 1-based start)
numbered_lines = [f"{i}: {line}" for i, line in enumerate(selected_lines, start=start)]
return "\n".join(numbered_lines)
return content
def _create(self, path: str, file_text: str) -> str:
"""Create or overwrite a file."""
validated_path = self._validate_path(path)
if not validated_path:
return "Error: Invalid path."
if validated_path == "/" or validated_path.endswith("/"):
return "Error: Cannot create a file at directory path."
self.collection.update_one(
{"user_id": self.user_id, "tool_id": self.tool_id, "path": validated_path},
{
"$set": {
"content": file_text,
"updated_at": datetime.now()
}
},
upsert=True
)
return f"File created: {validated_path}"
def _str_replace(self, path: str, old_str: str, new_str: str) -> str:
"""Replace text in a file."""
validated_path = self._validate_path(path)
if not validated_path:
return "Error: Invalid path."
if not old_str:
return "Error: old_str is required."
doc = self.collection.find_one({"user_id": self.user_id, "tool_id": self.tool_id, "path": validated_path})
if not doc or not doc.get("content"):
return f"Error: File not found: {validated_path}"
current_content = str(doc["content"])
# Check if old_str exists (case-insensitive)
if old_str.lower() not in current_content.lower():
return f"Error: String '{old_str}' not found in file."
# Replace the string (case-insensitive)
import re as regex_module
updated_content = regex_module.sub(regex_module.escape(old_str), new_str, current_content, flags=regex_module.IGNORECASE)
self.collection.update_one(
{"user_id": self.user_id, "tool_id": self.tool_id, "path": validated_path},
{
"$set": {
"content": updated_content,
"updated_at": datetime.now()
}
}
)
return f"File updated: {validated_path}"
def _insert(self, path: str, insert_line: int, insert_text: str) -> str:
"""Insert text at a specific line."""
validated_path = self._validate_path(path)
if not validated_path:
return "Error: Invalid path."
if not insert_text:
return "Error: insert_text is required."
doc = self.collection.find_one({"user_id": self.user_id, "tool_id": self.tool_id, "path": validated_path})
if not doc or not doc.get("content"):
return f"Error: File not found: {validated_path}"
current_content = str(doc["content"])
lines = current_content.split("\n")
# Convert to 0-indexed
index = insert_line - 1
if index < 0 or index > len(lines):
return f"Error: Invalid line number. File has {len(lines)} lines."
lines.insert(index, insert_text)
updated_content = "\n".join(lines)
self.collection.update_one(
{"user_id": self.user_id, "tool_id": self.tool_id, "path": validated_path},
{
"$set": {
"content": updated_content,
"updated_at": datetime.now()
}
}
)
return f"Text inserted at line {insert_line} in {validated_path}"
def _delete(self, path: str) -> str:
"""Delete a file or directory."""
validated_path = self._validate_path(path)
if not validated_path:
return "Error: Invalid path."
if validated_path == "/":
# Delete all files for this user and tool
result = self.collection.delete_many({"user_id": self.user_id, "tool_id": self.tool_id})
return f"Deleted {result.deleted_count} file(s) from memory."
# Check if it's a directory (ends with /)
if validated_path.endswith("/"):
# Delete all files in directory
result = self.collection.delete_many({
"user_id": self.user_id,
"tool_id": self.tool_id,
"path": {"$regex": f"^{re.escape(validated_path)}"}
})
return f"Deleted directory and {result.deleted_count} file(s)."
# Try to delete as directory first (without trailing slash)
# Check if any files start with this path + /
search_path = validated_path + "/"
directory_result = self.collection.delete_many({
"user_id": self.user_id,
"tool_id": self.tool_id,
"path": {"$regex": f"^{re.escape(search_path)}"}
})
if directory_result.deleted_count > 0:
return f"Deleted directory and {directory_result.deleted_count} file(s)."
# Delete single file
result = self.collection.delete_one({
"user_id": self.user_id,
"tool_id": self.tool_id,
"path": validated_path
})
if result.deleted_count:
return f"Deleted: {validated_path}"
return f"Error: File not found: {validated_path}"
def _rename(self, old_path: str, new_path: str) -> str:
"""Rename or move a file/directory."""
validated_old = self._validate_path(old_path)
validated_new = self._validate_path(new_path)
if not validated_old or not validated_new:
return "Error: Invalid path."
if validated_old == "/" or validated_new == "/":
return "Error: Cannot rename root directory."
# Check if renaming a directory
if validated_old.endswith("/"):
# Ensure validated_new also ends with / for proper path replacement
if not validated_new.endswith("/"):
validated_new = validated_new + "/"
# Find all files in the old directory
docs = list(self.collection.find({
"user_id": self.user_id,
"tool_id": self.tool_id,
"path": {"$regex": f"^{re.escape(validated_old)}"}
}))
if not docs:
return f"Error: Directory not found: {validated_old}"
# Update paths for all files
for doc in docs:
old_file_path = doc["path"]
new_file_path = old_file_path.replace(validated_old, validated_new, 1)
self.collection.update_one(
{"_id": doc["_id"]},
{"$set": {"path": new_file_path, "updated_at": datetime.now()}}
)
return f"Renamed directory: {validated_old} -> {validated_new} ({len(docs)} files)"
# Rename single file
doc = self.collection.find_one({
"user_id": self.user_id,
"tool_id": self.tool_id,
"path": validated_old
})
if not doc:
return f"Error: File not found: {validated_old}"
# Check if new path already exists
existing = self.collection.find_one({
"user_id": self.user_id,
"tool_id": self.tool_id,
"path": validated_new
})
if existing:
return f"Error: File already exists at {validated_new}"
# Delete the old document and create a new one with the new path
self.collection.delete_one({"user_id": self.user_id, "tool_id": self.tool_id, "path": validated_old})
self.collection.insert_one({
"user_id": self.user_id,
"tool_id": self.tool_id,
"path": validated_new,
"content": doc.get("content", ""),
"updated_at": datetime.now()
})
return f"Renamed: {validated_old} -> {validated_new}"

View File

@@ -1,199 +0,0 @@
from datetime import datetime
from typing import Any, Dict, List, Optional
import uuid
from .base import Tool
from application.core.mongo_db import MongoDB
from application.core.settings import settings
class NotesTool(Tool):
"""Notepad
Single note. Supports viewing, overwriting, string replacement.
"""
def __init__(self, tool_config: Optional[Dict[str, Any]] = None, user_id: Optional[str] = None) -> None:
"""Initialize the tool.
Args:
tool_config: Optional tool configuration. Should include:
- tool_id: Unique identifier for this notes tool instance (from user_tools._id)
This ensures each user's tool configuration has isolated notes
user_id: The authenticated user's id (should come from decoded_token["sub"]).
"""
self.user_id: Optional[str] = user_id
# Get tool_id from configuration (passed from user_tools._id in production)
# In production, tool_id is the MongoDB ObjectId string from user_tools collection
if tool_config and "tool_id" in tool_config:
self.tool_id = tool_config["tool_id"]
elif user_id:
# Fallback for backward compatibility or testing
self.tool_id = f"default_{user_id}"
else:
# Last resort fallback (shouldn't happen in normal use)
self.tool_id = str(uuid.uuid4())
db = MongoDB.get_client()[settings.MONGO_DB_NAME]
self.collection = db["notes"]
# -----------------------------
# Action implementations
# -----------------------------
def execute_action(self, action_name: str, **kwargs: Any) -> str:
"""Execute an action by name.
Args:
action_name: One of view, overwrite, str_replace, insert, delete.
**kwargs: Parameters for the action.
Returns:
A human-readable string result.
"""
if not self.user_id:
return "Error: NotesTool requires a valid user_id."
if action_name == "view":
return self._get_note()
if action_name == "overwrite":
return self._overwrite_note(kwargs.get("text", ""))
if action_name == "str_replace":
return self._str_replace(kwargs.get("old_str", ""), kwargs.get("new_str", ""))
if action_name == "insert":
return self._insert(kwargs.get("line_number", 1), kwargs.get("text", ""))
if action_name == "delete":
return self._delete_note()
return f"Unknown action: {action_name}"
def get_actions_metadata(self) -> List[Dict[str, Any]]:
"""Return JSON metadata describing supported actions for tool schemas."""
return [
{
"name": "view",
"description": "Retrieve the user's note.",
"parameters": {"type": "object", "properties": {}},
},
{
"name": "overwrite",
"description": "Replace the entire note content (creates if doesn't exist).",
"parameters": {
"type": "object",
"properties": {
"text": {"type": "string", "description": "New note content."}
},
"required": ["text"],
},
},
{
"name": "str_replace",
"description": "Replace occurrences of old_str with new_str in the note.",
"parameters": {
"type": "object",
"properties": {
"old_str": {"type": "string", "description": "String to find."},
"new_str": {"type": "string", "description": "String to replace with."}
},
"required": ["old_str", "new_str"],
},
},
{
"name": "insert",
"description": "Insert text at the specified line number (1-indexed).",
"parameters": {
"type": "object",
"properties": {
"line_number": {"type": "integer", "description": "Line number to insert at (1-indexed)."},
"text": {"type": "string", "description": "Text to insert."}
},
"required": ["line_number", "text"],
},
},
{
"name": "delete",
"description": "Delete the user's note.",
"parameters": {"type": "object", "properties": {}},
},
]
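Editor's note: this metadata is shaped like JSON-Schema function definitions, so adapting it to an OpenAI-style tools array is mostly a renaming exercise. A hedged sketch of that translation — the actual consumer inside DocsGPT is not shown in this diff:

def to_openai_tools(actions_metadata):
    """Wrap each action dict in the {'type': 'function', ...} envelope
    expected by OpenAI-compatible chat APIs."""
    return [
        {
            "type": "function",
            "function": {
                "name": action["name"],
                "description": action["description"],
                "parameters": action["parameters"],
            },
        }
        for action in actions_metadata
    ]

# tools = to_openai_tools(NotesTool({}, user_id="u1").get_actions_metadata())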
def get_config_requirements(self) -> Dict[str, Any]:
"""Return configuration requirements (none for now)."""
return {}
# -----------------------------
# Internal helpers (single-note)
# -----------------------------
def _get_note(self) -> str:
doc = self.collection.find_one({"user_id": self.user_id, "tool_id": self.tool_id})
if not doc or not doc.get("note"):
return "No note found."
return str(doc["note"])
def _overwrite_note(self, content: str) -> str:
content = (content or "").strip()
if not content:
return "Note content required."
self.collection.update_one(
{"user_id": self.user_id, "tool_id": self.tool_id},
{"$set": {"note": content, "updated_at": datetime.utcnow()}},
upsert=True, # ✅ create if missing
)
return "Note saved."
def _str_replace(self, old_str: str, new_str: str) -> str:
if not old_str:
return "old_str is required."
doc = self.collection.find_one({"user_id": self.user_id, "tool_id": self.tool_id})
if not doc or not doc.get("note"):
return "No note found."
current_note = str(doc["note"])
# Case-insensitive search
if old_str.lower() not in current_note.lower():
return f"String '{old_str}' not found in note."
# Case-insensitive replacement
import re
updated_note = re.sub(re.escape(old_str), new_str, current_note, flags=re.IGNORECASE)
self.collection.update_one(
{"user_id": self.user_id, "tool_id": self.tool_id},
{"$set": {"note": updated_note, "updated_at": datetime.utcnow()}},
)
return "Note updated."
def _insert(self, line_number: int, text: str) -> str:
if not text:
return "Text is required."
doc = self.collection.find_one({"user_id": self.user_id, "tool_id": self.tool_id})
if not doc or not doc.get("note"):
return "No note found."
current_note = str(doc["note"])
lines = current_note.split("\n")
# Convert to 0-indexed and validate
index = line_number - 1
if index < 0 or index > len(lines):
return f"Invalid line number. Note has {len(lines)} lines."
lines.insert(index, text)
updated_note = "\n".join(lines)
self.collection.update_one(
{"user_id": self.user_id, "tool_id": self.tool_id},
{"$set": {"note": updated_note, "updated_at": datetime.utcnow()}},
)
return "Text inserted."
def _delete_note(self) -> str:
res = self.collection.delete_one({"user_id": self.user_id, "tool_id": self.tool_id})
return "Note deleted." if res.deleted_count else "No note found to delete."

View File

@@ -23,23 +23,16 @@ class ToolManager:
tool_config = self.config.get(name, {})
self.tools[name] = obj(tool_config)
def load_tool(self, tool_name, tool_config, user_id=None):
def load_tool(self, tool_name, tool_config):
self.config[tool_name] = tool_config
module = importlib.import_module(f"application.agents.tools.{tool_name}")
for member_name, obj in inspect.getmembers(module, inspect.isclass):
if issubclass(obj, Tool) and obj is not Tool:
if tool_name in {"mcp_tool", "notes", "memory"} and user_id:
return obj(tool_config, user_id)
else:
return obj(tool_config)
return obj(tool_config)
def execute_action(self, tool_name, action_name, user_id=None, **kwargs):
def execute_action(self, tool_name, action_name, **kwargs):
if tool_name not in self.tools:
raise ValueError(f"Tool '{tool_name}' not loaded")
if tool_name in {"mcp_tool", "memory"} and user_id:
tool_config = self.config.get(tool_name, {})
tool = self.load_tool(tool_name, tool_config, user_id)
return tool.execute_action(action_name, **kwargs)
return self.tools[tool_name].execute_action(action_name, **kwargs)
def get_all_actions_metadata(self):
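Editor's note: the importlib/inspect pattern in load_tool resolves a module by name and returns the first concrete Tool subclass it defines. A minimal standalone reproduction of that discovery step (module and class names here are illustrative):

import importlib
import inspect

def find_tool_class(module_path, base_class):
    """Return the first concrete subclass of base_class defined in module_path."""
    module = importlib.import_module(module_path)
    for _name, obj in inspect.getmembers(module, inspect.isclass):
        if issubclass(obj, base_class) and obj is not base_class:
            return obj
    raise ValueError(f"no tool class found in {module_path}")

# cls = find_tool_class("application.agents.tools.notes", Tool)
# tool = cls(tool_config)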

View File

@@ -110,8 +110,6 @@ class BaseAnswerResource:
yield f"data: {data}\n\n"
elif "tool_calls" in line:
tool_calls = line["tool_calls"]
data = json.dumps({"type": "tool_calls", "tool_calls": tool_calls})
yield f"data: {data}\n\n"
elif "thought" in line:
thought += line["thought"]
data = json.dumps({"type": "thought", "thought": line["thought"]})

View File

@@ -69,8 +69,11 @@ class StreamProcessor:
self.decoded_token.get("sub") if self.decoded_token is not None else None
)
self.conversation_id = self.data.get("conversation_id")
self.source = {}
self.all_sources = []
self.source = (
{"active_docs": self.data["active_docs"]}
if "active_docs" in self.data
else {}
)
self.attachments = []
self.history = []
self.agent_config = {}
@@ -82,8 +85,6 @@ class StreamProcessor:
def initialize(self):
"""Initialize all required components for processing"""
self._configure_agent()
self._configure_source()
self._configure_retriever()
self._configure_agent()
self._load_conversation_history()
@@ -170,77 +171,13 @@ class StreamProcessor:
source = data.get("source")
if isinstance(source, DBRef):
source_doc = self.db.dereference(source)
if source_doc:
data["source"] = str(source_doc["_id"])
data["retriever"] = source_doc.get("retriever", data.get("retriever"))
data["chunks"] = source_doc.get("chunks", data.get("chunks"))
else:
data["source"] = None
elif source == "default":
data["source"] = "default"
data["source"] = str(source_doc["_id"])
data["retriever"] = source_doc.get("retriever", data.get("retriever"))
data["chunks"] = source_doc.get("chunks", data.get("chunks"))
else:
data["source"] = None
# Handle multiple sources
sources = data.get("sources", [])
if sources and isinstance(sources, list):
sources_list = []
for i, source_ref in enumerate(sources):
if source_ref == "default":
processed_source = {
"id": "default",
"retriever": "classic",
"chunks": data.get("chunks", "2"),
}
sources_list.append(processed_source)
elif isinstance(source_ref, DBRef):
source_doc = self.db.dereference(source_ref)
if source_doc:
processed_source = {
"id": str(source_doc["_id"]),
"retriever": source_doc.get("retriever", "classic"),
"chunks": source_doc.get("chunks", data.get("chunks", "2")),
}
sources_list.append(processed_source)
data["sources"] = sources_list
else:
data["sources"] = []
return data
def _configure_source(self):
"""Configure the source based on agent data"""
api_key = self.data.get("api_key") or self.agent_key
if api_key:
agent_data = self._get_data_from_api_key(api_key)
if agent_data.get("sources") and len(agent_data["sources"]) > 0:
source_ids = [
source["id"] for source in agent_data["sources"] if source.get("id")
]
if source_ids:
self.source = {"active_docs": source_ids}
else:
self.source = {}
self.all_sources = agent_data["sources"]
elif agent_data.get("source"):
self.source = {"active_docs": agent_data["source"]}
self.all_sources = [
{
"id": agent_data["source"],
"retriever": agent_data.get("retriever", "classic"),
}
]
else:
self.source = {}
self.all_sources = []
return
if "active_docs" in self.data:
self.source = {"active_docs": self.data["active_docs"]}
return
self.source = {}
self.all_sources = []
def _configure_agent(self):
"""Configure the agent based on request data"""
agent_id = self.data.get("agent_id")
@@ -266,13 +203,7 @@ class StreamProcessor:
if data_key.get("retriever"):
self.retriever_config["retriever_name"] = data_key["retriever"]
if data_key.get("chunks") is not None:
try:
self.retriever_config["chunks"] = int(data_key["chunks"])
except (ValueError, TypeError):
logger.warning(
f"Invalid chunks value: {data_key['chunks']}, using default value 2"
)
self.retriever_config["chunks"] = 2
self.retriever_config["chunks"] = data_key["chunks"]
elif self.agent_key:
data_key = self._get_data_from_api_key(self.agent_key)
self.agent_config.update(
@@ -293,13 +224,7 @@ class StreamProcessor:
if data_key.get("retriever"):
self.retriever_config["retriever_name"] = data_key["retriever"]
if data_key.get("chunks") is not None:
try:
self.retriever_config["chunks"] = int(data_key["chunks"])
except (ValueError, TypeError):
logger.warning(
f"Invalid chunks value: {data_key['chunks']}, using default value 2"
)
self.retriever_config["chunks"] = 2
self.retriever_config["chunks"] = data_key["chunks"]
else:
self.agent_config.update(
{
@@ -318,8 +243,7 @@ class StreamProcessor:
"token_limit": self.data.get("token_limit", settings.DEFAULT_MAX_HISTORY),
}
api_key = self.data.get("api_key") or self.agent_key
if not api_key and "isNoneDoc" in self.data and self.data["isNoneDoc"]:
if "isNoneDoc" in self.data and self.data["isNoneDoc"]:
self.retriever_config["chunks"] = 0
def create_agent(self):

View File

@@ -1,7 +1,6 @@
import base64
import datetime
import json
import uuid
import logging
from bson.objectid import ObjectId
@@ -15,6 +14,8 @@ from flask import (
from flask_restx import fields, Namespace, Resource
from application.api.user.tasks import (
ingest_connector_task,
)
@@ -172,7 +173,7 @@ class ConnectorSources(Resource):
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
try:
sources = sources_collection.find({"user": user, "type": "connector:file"}).sort("date", -1)
sources = sources_collection.find({"user": user, "type": "connector"}).sort("date", -1)
connector_sources = []
for source in sources:
connector_sources.append({
@@ -234,24 +235,8 @@ class ConnectorAuth(Resource):
if not ConnectorCreator.is_supported(provider):
return make_response(jsonify({"success": False, "error": f"Unsupported provider: {provider}"}), 400)
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False, "error": "Unauthorized"}), 401)
user_id = decoded_token.get('sub')
now = datetime.datetime.now(datetime.timezone.utc)
result = sessions_collection.insert_one({
"provider": provider,
"user": user_id,
"status": "pending",
"created_at": now
})
state_dict = {
"provider": provider,
"object_id": str(result.inserted_id)
}
state = base64.urlsafe_b64encode(json.dumps(state_dict).encode()).decode()
import uuid
state = str(uuid.uuid4())
auth = ConnectorCreator.create_auth(provider)
authorization_url = auth.get_authorization_url(state=state)
return make_response(jsonify({
@@ -272,30 +257,25 @@ class ConnectorsCallback(Resource):
try:
from application.parser.connectors.connector_creator import ConnectorCreator
from flask import request, redirect
import uuid
provider = request.args.get('provider', 'google_drive')
authorization_code = request.args.get('code')
state = request.args.get('state')
_ = request.args.get('state')
error = request.args.get('error')
state_dict = json.loads(base64.urlsafe_b64decode(state.encode()).decode())
provider = state_dict["provider"]
state_object_id = state_dict["object_id"]
if error:
if error == "access_denied":
return redirect(f"/api/connectors/callback-status?status=cancelled&message=Authentication+was+cancelled.+You+can+try+again+if+you'd+like+to+connect+your+account.&provider={provider}")
else:
current_app.logger.warning(f"OAuth error in callback: {error}")
return redirect(f"/api/connectors/callback-status?status=error&message=Authentication+failed.+Please+try+again+and+make+sure+to+grant+all+requested+permissions.&provider={provider}")
return redirect(f"/api/connectors/callback-status?status=error&message=OAuth+error:+{error}.+Please+try+again+and+make+sure+to+grant+all+requested+permissions,+including+offline+access.&provider={provider}")
if not authorization_code:
return redirect(f"/api/connectors/callback-status?status=error&message=Authentication+failed.+Please+try+again+and+make+sure+to+grant+all+requested+permissions.&provider={provider}")
return redirect(f"/api/connectors/callback-status?status=error&message=Authorization+code+not+provided.+Please+complete+the+authorization+process+and+make+sure+to+grant+offline+access.&provider={provider}")
try:
auth = ConnectorCreator.create_auth(provider)
token_info = auth.exchange_code_for_tokens(authorization_code)
session_token = str(uuid.uuid4())
try:
credentials = auth.create_credentials_from_token_info(token_info)
@@ -310,31 +290,30 @@ class ConnectorsCallback(Resource):
"access_token": token_info.get("access_token"),
"refresh_token": token_info.get("refresh_token"),
"token_uri": token_info.get("token_uri"),
"expiry": token_info.get("expiry")
"expiry": token_info.get("expiry"),
"scopes": token_info.get("scopes")
}
sessions_collection.find_one_and_update(
{"_id": ObjectId(state_object_id), "provider": provider},
{
"$set": {
"session_token": session_token,
"token_info": sanitized_token_info,
"user_email": user_email,
"status": "authorized"
}
}
)
user_id = request.decoded_token.get("sub") if getattr(request, "decoded_token", None) else None
sessions_collection.insert_one({
"session_token": session_token,
"user": user_id,
"token_info": sanitized_token_info,
"created_at": datetime.datetime.now(datetime.timezone.utc),
"user_email": user_email,
"provider": provider
})
# Redirect to success page with session token and user email
return redirect(f"/api/connectors/callback-status?status=success&message=Authentication+successful&provider={provider}&session_token={session_token}&user_email={user_email}")
except Exception as e:
current_app.logger.error(f"Error exchanging code for tokens: {str(e)}", exc_info=True)
return redirect(f"/api/connectors/callback-status?status=error&message=Authentication+failed.+Please+try+again+and+make+sure+to+grant+all+requested+permissions.&provider={provider}")
return redirect(f"/api/connectors/callback-status?status=error&message=Failed+to+exchange+authorization+code+for+tokens:+{str(e)}&provider={provider}")
except Exception as e:
current_app.logger.error(f"Error handling connector callback: {e}")
return redirect("/api/connectors/callback-status?status=error&message=Authentication+failed.+Please+try+again+and+make+sure+to+grant+all+requested+permissions.")
return redirect(f"/api/connectors/callback-status?status=error&message=Failed+to+complete+connector+authentication:+{str(e)}.+Please+try+again+and+make+sure+to+grant+all+requested+permissions,+including+offline+access.")
@connectors_ns.route("/api/connectors/refresh")
@@ -360,15 +339,8 @@ class ConnectorRefresh(Resource):
@connectors_ns.route("/api/connectors/files")
class ConnectorFiles(Resource):
@api.expect(api.model("ConnectorFilesModel", {
"provider": fields.String(required=True),
"session_token": fields.String(required=True),
"folder_id": fields.String(required=False),
"limit": fields.Integer(required=False),
"page_token": fields.String(required=False),
"search_query": fields.String(required=False)
}))
@api.doc(description="List files from a connector provider (supports pagination and search)")
@api.expect(api.model("ConnectorFilesModel", {"provider": fields.String(required=True), "session_token": fields.String(required=True), "folder_id": fields.String(required=False), "limit": fields.Integer(required=False), "page_token": fields.String(required=False)}))
@api.doc(description="List files from a connector provider (supports pagination)")
def post(self):
try:
data = request.get_json()
@@ -377,11 +349,10 @@ class ConnectorFiles(Resource):
folder_id = data.get('folder_id')
limit = data.get('limit', 10)
page_token = data.get('page_token')
search_query = data.get('search_query')
if not provider or not session_token:
return make_response(jsonify({"success": False, "error": "provider and session_token are required"}), 400)
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False, "error": "Unauthorized"}), 401)
@@ -391,17 +362,13 @@ class ConnectorFiles(Resource):
return make_response(jsonify({"success": False, "error": "Invalid or unauthorized session"}), 401)
loader = ConnectorCreator.create_connector(provider, session_token)
input_config = {
documents = loader.load_data({
'limit': limit,
'list_only': True,
'session_token': session_token,
'folder_id': folder_id,
'page_token': page_token
}
if search_query:
input_config['search_query'] = search_query
documents = loader.load_data(input_config)
})
files = []
for doc in documents[:limit]:
@@ -419,20 +386,13 @@ class ConnectorFiles(Resource):
'name': metadata.get('file_name', 'Unknown File'),
'type': metadata.get('mime_type', 'unknown'),
'size': metadata.get('size', None),
'modifiedTime': formatted_time,
'isFolder': metadata.get('is_folder', False)
'modifiedTime': formatted_time
})
next_token = getattr(loader, 'next_page_token', None)
has_more = bool(next_token)
return make_response(jsonify({
"success": True,
"files": files,
"total": len(files),
"next_page_token": next_token,
"has_more": has_more
}), 200)
return make_response(jsonify({"success": True, "files": files, "total": len(files), "next_page_token": next_token, "has_more": has_more}), 200)
except Exception as e:
current_app.logger.error(f"Error loading connector files: {e}")
return make_response(jsonify({"success": False, "error": f"Failed to load files: {str(e)}"}), 500)
@@ -441,7 +401,7 @@ class ConnectorFiles(Resource):
@connectors_ns.route("/api/connectors/validate-session")
class ConnectorValidateSession(Resource):
@api.expect(api.model("ConnectorValidateSessionModel", {"provider": fields.String(required=True), "session_token": fields.String(required=True)}))
@api.doc(description="Validate connector session token and return user info and access token")
@api.doc(description="Validate connector session token and return user info")
def post(self):
try:
data = request.get_json()
@@ -450,6 +410,7 @@ class ConnectorValidateSession(Resource):
if not provider or not session_token:
return make_response(jsonify({"success": False, "error": "provider and session_token are required"}), 400)
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False, "error": "Unauthorized"}), 401)
@@ -463,36 +424,10 @@ class ConnectorValidateSession(Resource):
auth = ConnectorCreator.create_auth(provider)
is_expired = auth.is_token_expired(token_info)
if is_expired and token_info.get('refresh_token'):
try:
refreshed_token_info = auth.refresh_access_token(token_info.get('refresh_token'))
sanitized_token_info = {
"access_token": refreshed_token_info.get("access_token"),
"refresh_token": refreshed_token_info.get("refresh_token"),
"token_uri": refreshed_token_info.get("token_uri"),
"expiry": refreshed_token_info.get("expiry")
}
sessions_collection.update_one(
{"session_token": session_token},
{"$set": {"token_info": sanitized_token_info}}
)
token_info = sanitized_token_info
is_expired = False
except Exception as refresh_error:
current_app.logger.error(f"Failed to refresh token: {refresh_error}")
if is_expired:
return make_response(jsonify({
"success": False,
"expired": True,
"error": "Session token has expired. Please reconnect."
}), 401)
return make_response(jsonify({
"success": True,
"expired": False,
"user_email": session.get('user_email', 'Connected User'),
"access_token": token_info.get('access_token')
"expired": is_expired,
"user_email": session.get('user_email', 'Connected User')
}), 200)
except Exception as e:
current_app.logger.error(f"Error validating connector session: {e}")
@@ -652,23 +587,20 @@ class ConnectorCallbackStatus(Resource):
.container {{ max-width: 600px; margin: 0 auto; }}
.success {{ color: #4CAF50; }}
.error {{ color: #F44336; }}
.cancelled {{ color: #FF9800; }}
</style>
<script>
window.onload = function() {{
const status = "{status}";
const sessionToken = "{session_token}";
const userEmail = "{user_email}";
if (status === "success" && window.opener) {{
window.opener.postMessage({{
type: '{provider}_auth_success',
session_token: sessionToken,
user_email: userEmail
}}, '*');
setTimeout(() => window.close(), 3000);
}} else if (status === "cancelled" || status === "error") {{
setTimeout(() => window.close(), 3000);
}}
}};
@@ -681,7 +613,7 @@ class ConnectorCallbackStatus(Resource):
<p>{message}</p>
{f'<p>Connected as: {user_email}</p>' if status == 'success' else ''}
</div>
<p><small>You can close this window. {f"Your {provider.replace('_', ' ').title()} is now connected and ready to use." if status == 'success' else "Feel free to close this window."}</small></p>
<p><small>You can close this window. {f"Your {provider.replace('_', ' ').title()} is now connected and ready to use." if status == 'success' else ''}</small></p>
</div>
</body>
</html>

View File

@@ -1,5 +0,0 @@
"""User API module - provides all user-related API endpoints"""
from .routes import user
__all__ = ["user"]

View File

@@ -1,7 +0,0 @@
"""Agents module."""
from .routes import agents_ns
from .sharing import agents_sharing_ns
from .webhooks import agents_webhooks_ns
__all__ = ["agents_ns", "agents_sharing_ns", "agents_webhooks_ns"]

View File

@@ -1,974 +0,0 @@
"""Agent management routes."""
import datetime
import json
import uuid
from bson.dbref import DBRef
from bson.objectid import ObjectId
from flask import current_app, jsonify, make_response, request
from flask_restx import fields, Namespace, Resource
from application.api import api
from application.api.user.base import (
agents_collection,
db,
ensure_user_doc,
handle_image_upload,
resolve_tool_details,
storage,
users_collection,
)
from application.utils import (
check_required_fields,
generate_image_url,
validate_required_fields,
)
agents_ns = Namespace("agents", description="Agent management operations", path="/api")
@agents_ns.route("/get_agent")
class GetAgent(Resource):
@api.doc(params={"id": "Agent ID"}, description="Get agent by ID")
def get(self):
if not (decoded_token := request.decoded_token):
return {"success": False}, 401
if not (agent_id := request.args.get("id")):
return {"success": False, "message": "ID required"}, 400
try:
agent = agents_collection.find_one(
{"_id": ObjectId(agent_id), "user": decoded_token["sub"]}
)
if not agent:
return {"status": "Not found"}, 404
data = {
"id": str(agent["_id"]),
"name": agent["name"],
"description": agent.get("description", ""),
"image": (
generate_image_url(agent["image"]) if agent.get("image") else ""
),
"source": (
str(source_doc["_id"])
if isinstance(agent.get("source"), DBRef)
and (source_doc := db.dereference(agent.get("source")))
else ""
),
"sources": [
(
str(db.dereference(source_ref)["_id"])
if isinstance(source_ref, DBRef) and db.dereference(source_ref)
else source_ref
)
for source_ref in agent.get("sources", [])
if (isinstance(source_ref, DBRef) and db.dereference(source_ref))
or source_ref == "default"
],
"chunks": agent["chunks"],
"retriever": agent.get("retriever", ""),
"prompt_id": agent.get("prompt_id", ""),
"tools": agent.get("tools", []),
"tool_details": resolve_tool_details(agent.get("tools", [])),
"agent_type": agent.get("agent_type", ""),
"status": agent.get("status", ""),
"json_schema": agent.get("json_schema"),
"created_at": agent.get("createdAt", ""),
"updated_at": agent.get("updatedAt", ""),
"last_used_at": agent.get("lastUsedAt", ""),
"key": (
f"{agent['key'][:4]}...{agent['key'][-4:]}"
if "key" in agent
else ""
),
"pinned": agent.get("pinned", False),
"shared": agent.get("shared_publicly", False),
"shared_metadata": agent.get("shared_metadata", {}),
"shared_token": agent.get("shared_token", ""),
}
return make_response(jsonify(data), 200)
except Exception as e:
current_app.logger.error(f"Agent fetch error: {e}", exc_info=True)
return {"success": False}, 400
@agents_ns.route("/get_agents")
class GetAgents(Resource):
@api.doc(description="Retrieve agents for the user")
def get(self):
if not (decoded_token := request.decoded_token):
return {"success": False}, 401
user = decoded_token.get("sub")
try:
user_doc = ensure_user_doc(user)
pinned_ids = set(user_doc.get("agent_preferences", {}).get("pinned", []))
agents = agents_collection.find({"user": user})
list_agents = [
{
"id": str(agent["_id"]),
"name": agent["name"],
"description": agent.get("description", ""),
"image": (
generate_image_url(agent["image"]) if agent.get("image") else ""
),
"source": (
str(source_doc["_id"])
if isinstance(agent.get("source"), DBRef)
and (source_doc := db.dereference(agent.get("source")))
else (
agent.get("source", "")
if agent.get("source") == "default"
else ""
)
),
"sources": [
(
source_ref
if source_ref == "default"
else str(db.dereference(source_ref)["_id"])
)
for source_ref in agent.get("sources", [])
if source_ref == "default"
or (
isinstance(source_ref, DBRef) and db.dereference(source_ref)
)
],
"chunks": agent["chunks"],
"retriever": agent.get("retriever", ""),
"prompt_id": agent.get("prompt_id", ""),
"tools": agent.get("tools", []),
"tool_details": resolve_tool_details(agent.get("tools", [])),
"agent_type": agent.get("agent_type", ""),
"status": agent.get("status", ""),
"json_schema": agent.get("json_schema"),
"created_at": agent.get("createdAt", ""),
"updated_at": agent.get("updatedAt", ""),
"last_used_at": agent.get("lastUsedAt", ""),
"key": (
f"{agent['key'][:4]}...{agent['key'][-4:]}"
if "key" in agent
else ""
),
"pinned": str(agent["_id"]) in pinned_ids,
"shared": agent.get("shared_publicly", False),
"shared_metadata": agent.get("shared_metadata", {}),
"shared_token": agent.get("shared_token", ""),
}
for agent in agents
if "source" in agent or "retriever" in agent
]
except Exception as err:
current_app.logger.error(f"Error retrieving agents: {err}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify(list_agents), 200)
@agents_ns.route("/create_agent")
class CreateAgent(Resource):
create_agent_model = api.model(
"CreateAgentModel",
{
"name": fields.String(required=True, description="Name of the agent"),
"description": fields.String(
required=True, description="Description of the agent"
),
"image": fields.Raw(
required=False, description="Image file upload", type="file"
),
"source": fields.String(
required=False, description="Source ID (legacy single source)"
),
"sources": fields.List(
fields.String,
required=False,
description="List of source identifiers for multiple sources",
),
"chunks": fields.Integer(required=True, description="Chunks count"),
"retriever": fields.String(required=True, description="Retriever ID"),
"prompt_id": fields.String(required=True, description="Prompt ID"),
"tools": fields.List(
fields.String, required=False, description="List of tool identifiers"
),
"agent_type": fields.String(required=True, description="Type of the agent"),
"status": fields.String(
required=True, description="Status of the agent (draft or published)"
),
"json_schema": fields.Raw(
required=False,
description="JSON schema for enforcing structured output format",
),
},
)
@api.expect(create_agent_model)
@api.doc(description="Create a new agent")
def post(self):
if not (decoded_token := request.decoded_token):
return {"success": False}, 401
user = decoded_token.get("sub")
if request.content_type == "application/json":
data = request.get_json()
else:
data = request.form.to_dict()
if "tools" in data:
try:
data["tools"] = json.loads(data["tools"])
except json.JSONDecodeError:
data["tools"] = []
if "sources" in data:
try:
data["sources"] = json.loads(data["sources"])
except json.JSONDecodeError:
data["sources"] = []
if "json_schema" in data:
try:
data["json_schema"] = json.loads(data["json_schema"])
except json.JSONDecodeError:
data["json_schema"] = None
print(f"Received data: {data}")
# Validate JSON schema if provided
if data.get("json_schema"):
try:
# Basic validation - ensure it's a valid JSON structure
json_schema = data.get("json_schema")
if not isinstance(json_schema, dict):
return make_response(
jsonify(
{
"success": False,
"message": "JSON schema must be a valid JSON object",
}
),
400,
)
# Validate that it has either a 'schema' property or is itself a schema
if "schema" not in json_schema and "type" not in json_schema:
return make_response(
jsonify(
{
"success": False,
"message": "JSON schema must contain either a 'schema' property or be a valid JSON schema with 'type' property",
}
),
400,
)
except Exception as e:
return make_response(
jsonify(
{"success": False, "message": f"Invalid JSON schema: {str(e)}"}
),
400,
)
if data.get("status") not in ["draft", "published"]:
return make_response(
jsonify(
{
"success": False,
"message": "Status must be either 'draft' or 'published'",
}
),
400,
)
if data.get("status") == "published":
required_fields = [
"name",
"description",
"chunks",
"retriever",
"prompt_id",
"agent_type",
]
# Require either source or sources (but not both)
if not data.get("source") and not data.get("sources"):
return make_response(
jsonify(
{
"success": False,
"message": "Either 'source' or 'sources' field is required for published agents",
}
),
400,
)
validate_fields = ["name", "description", "prompt_id", "agent_type"]
else:
required_fields = ["name"]
validate_fields = []
missing_fields = check_required_fields(data, required_fields)
invalid_fields = validate_required_fields(data, validate_fields)
if missing_fields:
return missing_fields
if invalid_fields:
return invalid_fields
image_url, error = handle_image_upload(request, "", user, storage)
if error:
return make_response(
jsonify({"success": False, "message": "Image upload failed"}), 400
)
try:
key = str(uuid.uuid4()) if data.get("status") == "published" else ""
sources_list = []
if data.get("sources") and len(data.get("sources", [])) > 0:
for source_id in data.get("sources", []):
if source_id == "default":
sources_list.append("default")
elif ObjectId.is_valid(source_id):
sources_list.append(DBRef("sources", ObjectId(source_id)))
source_field = ""
else:
source_value = data.get("source", "")
if source_value == "default":
source_field = "default"
elif ObjectId.is_valid(source_value):
source_field = DBRef("sources", ObjectId(source_value))
else:
source_field = ""
new_agent = {
"user": user,
"name": data.get("name"),
"description": data.get("description", ""),
"image": image_url,
"source": source_field,
"sources": sources_list,
"chunks": data.get("chunks", ""),
"retriever": data.get("retriever", ""),
"prompt_id": data.get("prompt_id", ""),
"tools": data.get("tools", []),
"agent_type": data.get("agent_type", ""),
"status": data.get("status"),
"json_schema": data.get("json_schema"),
"createdAt": datetime.datetime.now(datetime.timezone.utc),
"updatedAt": datetime.datetime.now(datetime.timezone.utc),
"lastUsedAt": None,
"key": key,
}
if new_agent["chunks"] == "":
new_agent["chunks"] = "2"
if (
new_agent["source"] == ""
and new_agent["retriever"] == ""
and not new_agent["sources"]
):
new_agent["retriever"] = "classic"
resp = agents_collection.insert_one(new_agent)
new_id = str(resp.inserted_id)
except Exception as err:
current_app.logger.error(f"Error creating agent: {err}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"id": new_id, "key": key}), 201)
@agents_ns.route("/update_agent/<string:agent_id>")
class UpdateAgent(Resource):
update_agent_model = api.model(
"UpdateAgentModel",
{
"name": fields.String(required=True, description="New name of the agent"),
"description": fields.String(
required=True, description="New description of the agent"
),
"image": fields.String(
required=False, description="New image URL or identifier"
),
"source": fields.String(
required=False, description="Source ID (legacy single source)"
),
"sources": fields.List(
fields.String,
required=False,
description="List of source identifiers for multiple sources",
),
"chunks": fields.Integer(required=True, description="Chunks count"),
"retriever": fields.String(required=True, description="Retriever ID"),
"prompt_id": fields.String(required=True, description="Prompt ID"),
"tools": fields.List(
fields.String, required=False, description="List of tool identifiers"
),
"agent_type": fields.String(required=True, description="Type of the agent"),
"status": fields.String(
required=True, description="Status of the agent (draft or published)"
),
"json_schema": fields.Raw(
required=False,
description="JSON schema for enforcing structured output format",
),
},
)
@api.expect(update_agent_model)
@api.doc(description="Update an existing agent")
def put(self, agent_id):
if not (decoded_token := request.decoded_token):
return make_response(
jsonify({"success": False, "message": "Unauthorized"}), 401
)
user = decoded_token.get("sub")
if not ObjectId.is_valid(agent_id):
return make_response(
jsonify({"success": False, "message": "Invalid agent ID format"}), 400
)
oid = ObjectId(agent_id)
try:
if request.content_type and "application/json" in request.content_type:
data = request.get_json()
else:
data = request.form.to_dict()
json_fields = ["tools", "sources", "json_schema"]
for field in json_fields:
if field in data and data[field]:
try:
data[field] = json.loads(data[field])
except json.JSONDecodeError:
return make_response(
jsonify(
{
"success": False,
"message": f"Invalid JSON format for field: {field}",
}
),
400,
)
except Exception as err:
current_app.logger.error(
f"Error parsing request data: {err}", exc_info=True
)
return make_response(
jsonify({"success": False, "message": "Invalid request data"}), 400
)
try:
existing_agent = agents_collection.find_one({"_id": oid, "user": user})
except Exception as err:
current_app.logger.error(
f"Error finding agent {agent_id}: {err}", exc_info=True
)
return make_response(
jsonify({"success": False, "message": "Database error finding agent"}),
500,
)
if not existing_agent:
return make_response(
jsonify(
{"success": False, "message": "Agent not found or not authorized"}
),
404,
)
image_url, error = handle_image_upload(
request, existing_agent.get("image", ""), user, storage
)
if error:
current_app.logger.error(
f"Image upload error for agent {agent_id}: {error}"
)
return make_response(
jsonify({"success": False, "message": f"Image upload failed: {error}"}),
400,
)
update_fields = {}
allowed_fields = [
"name",
"description",
"image",
"source",
"sources",
"chunks",
"retriever",
"prompt_id",
"tools",
"agent_type",
"status",
"json_schema",
]
for field in allowed_fields:
if field not in data:
continue
if field == "status":
new_status = data.get("status")
if new_status not in ["draft", "published"]:
return make_response(
jsonify(
{
"success": False,
"message": "Invalid status value. Must be 'draft' or 'published'",
}
),
400,
)
update_fields[field] = new_status
elif field == "source":
source_id = data.get("source")
if source_id == "default":
update_fields[field] = "default"
elif source_id and ObjectId.is_valid(source_id):
update_fields[field] = DBRef("sources", ObjectId(source_id))
elif source_id:
return make_response(
jsonify(
{
"success": False,
"message": f"Invalid source ID format: {source_id}",
}
),
400,
)
else:
update_fields[field] = ""
elif field == "sources":
sources_list = data.get("sources", [])
if sources_list and isinstance(sources_list, list):
valid_sources = []
for source_id in sources_list:
if source_id == "default":
valid_sources.append("default")
elif ObjectId.is_valid(source_id):
valid_sources.append(DBRef("sources", ObjectId(source_id)))
else:
return make_response(
jsonify(
{
"success": False,
"message": f"Invalid source ID in list: {source_id}",
}
),
400,
)
update_fields[field] = valid_sources
else:
update_fields[field] = []
elif field == "chunks":
chunks_value = data.get("chunks")
if chunks_value == "" or chunks_value is None:
update_fields[field] = "2"
else:
try:
chunks_int = int(chunks_value)
if chunks_int < 0:
return make_response(
jsonify(
{
"success": False,
"message": "Chunks value must be a non-negative integer",
}
),
400,
)
update_fields[field] = str(chunks_int)
except (ValueError, TypeError):
return make_response(
jsonify(
{
"success": False,
"message": f"Invalid chunks value: {chunks_value}",
}
),
400,
)
elif field == "tools":
tools_list = data.get("tools", [])
if isinstance(tools_list, list):
update_fields[field] = tools_list
else:
return make_response(
jsonify(
{
"success": False,
"message": "Tools must be a list",
}
),
400,
)
elif field == "json_schema":
json_schema = data.get("json_schema")
if json_schema is not None:
if not isinstance(json_schema, dict):
return make_response(
jsonify(
{
"success": False,
"message": "JSON schema must be a valid object",
}
),
400,
)
update_fields[field] = json_schema
else:
update_fields[field] = None
else:
value = data[field]
if field in ["name", "description", "prompt_id", "agent_type"]:
if not value or not str(value).strip():
return make_response(
jsonify(
{
"success": False,
"message": f"Field '{field}' cannot be empty",
}
),
400,
)
update_fields[field] = value
if image_url:
update_fields["image"] = image_url
if not update_fields:
return make_response(
jsonify(
{
"success": False,
"message": "No valid update data provided",
}
),
400,
)
newly_generated_key = None
final_status = update_fields.get("status", existing_agent.get("status"))
if final_status == "published":
required_published_fields = {
"name": "Agent name",
"description": "Agent description",
"chunks": "Chunks count",
"prompt_id": "Prompt",
"agent_type": "Agent type",
}
missing_published_fields = []
for req_field, field_label in required_published_fields.items():
final_value = update_fields.get(
req_field, existing_agent.get(req_field)
)
if not final_value:
missing_published_fields.append(field_label)
source_val = update_fields.get("source", existing_agent.get("source"))
sources_val = update_fields.get(
"sources", existing_agent.get("sources", [])
)
has_valid_source = (
isinstance(source_val, DBRef)
or source_val == "default"
or (isinstance(sources_val, list) and len(sources_val) > 0)
)
if not has_valid_source:
missing_published_fields.append("Source")
if missing_published_fields:
return make_response(
jsonify(
{
"success": False,
"message": f"Cannot publish agent. Missing or invalid required fields: {', '.join(missing_published_fields)}",
}
),
400,
)
if not existing_agent.get("key"):
newly_generated_key = str(uuid.uuid4())
update_fields["key"] = newly_generated_key
update_fields["updatedAt"] = datetime.datetime.now(datetime.timezone.utc)
try:
result = agents_collection.update_one(
{"_id": oid, "user": user}, {"$set": update_fields}
)
if result.matched_count == 0:
return make_response(
jsonify(
{
"success": False,
"message": "Agent not found or update failed",
}
),
404,
)
if result.modified_count == 0 and result.matched_count == 1:
return make_response(
jsonify(
{
"success": True,
"message": "No changes detected",
"id": agent_id,
}
),
200,
)
except Exception as err:
current_app.logger.error(
f"Error updating agent {agent_id}: {err}", exc_info=True
)
return make_response(
jsonify({"success": False, "message": "Database error during update"}),
500,
)
response_data = {
"success": True,
"id": agent_id,
"message": "Agent updated successfully",
}
if newly_generated_key:
response_data["key"] = newly_generated_key
return make_response(jsonify(response_data), 200)
@agents_ns.route("/delete_agent")
class DeleteAgent(Resource):
@api.doc(params={"id": "ID of the agent"}, description="Delete an agent by ID")
def delete(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
agent_id = request.args.get("id")
if not agent_id:
return make_response(
jsonify({"success": False, "message": "ID is required"}), 400
)
try:
deleted_agent = agents_collection.find_one_and_delete(
{"_id": ObjectId(agent_id), "user": user}
)
if not deleted_agent:
return make_response(
jsonify({"success": False, "message": "Agent not found"}), 404
)
deleted_id = str(deleted_agent["_id"])
except Exception as err:
current_app.logger.error(f"Error deleting agent: {err}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"id": deleted_id}), 200)
@agents_ns.route("/pinned_agents")
class PinnedAgents(Resource):
@api.doc(description="Get pinned agents for the user")
def get(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user_id = decoded_token.get("sub")
try:
user_doc = ensure_user_doc(user_id)
pinned_ids = user_doc.get("agent_preferences", {}).get("pinned", [])
if not pinned_ids:
return make_response(jsonify([]), 200)
pinned_object_ids = [ObjectId(agent_id) for agent_id in pinned_ids]
pinned_agents_cursor = agents_collection.find(
{"_id": {"$in": pinned_object_ids}}
)
pinned_agents = list(pinned_agents_cursor)
existing_ids = {str(agent["_id"]) for agent in pinned_agents}
# Clean up any stale pinned IDs
stale_ids = [
agent_id for agent_id in pinned_ids if agent_id not in existing_ids
]
if stale_ids:
users_collection.update_one(
{"user_id": user_id},
{"$pullAll": {"agent_preferences.pinned": stale_ids}},
)
list_pinned_agents = [
{
"id": str(agent["_id"]),
"name": agent.get("name", ""),
"description": agent.get("description", ""),
"image": (
generate_image_url(agent["image"]) if agent.get("image") else ""
),
"source": (
str(db.dereference(agent["source"])["_id"])
if "source" in agent
and agent["source"]
and isinstance(agent["source"], DBRef)
and db.dereference(agent["source"]) is not None
else ""
),
"chunks": agent.get("chunks", ""),
"retriever": agent.get("retriever", ""),
"prompt_id": agent.get("prompt_id", ""),
"tools": agent.get("tools", []),
"tool_details": resolve_tool_details(agent.get("tools", [])),
"agent_type": agent.get("agent_type", ""),
"status": agent.get("status", ""),
"created_at": agent.get("createdAt", ""),
"updated_at": agent.get("updatedAt", ""),
"last_used_at": agent.get("lastUsedAt", ""),
"key": (
f"{agent['key'][:4]}...{agent['key'][-4:]}"
if "key" in agent
else ""
),
"pinned": True,
}
for agent in pinned_agents
if "source" in agent or "retriever" in agent
]
except Exception as err:
current_app.logger.error(f"Error retrieving pinned agents: {err}")
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify(list_pinned_agents), 200)
@agents_ns.route("/template_agents")
class GetTemplateAgents(Resource):
@api.doc(description="Get template/premade agents")
def get(self):
try:
template_agents = agents_collection.find({"user": "system"})
template_agents = [
{
"id": str(agent["_id"]),
"name": agent["name"],
"description": agent["description"],
"image": agent.get("image", ""),
}
for agent in template_agents
]
return make_response(jsonify(template_agents), 200)
except Exception as e:
current_app.logger.error(f"Template agents fetch error: {e}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
@agents_ns.route("/adopt_agent")
class AdoptAgent(Resource):
@api.doc(params={"id": "Agent ID"}, description="Adopt an agent by ID")
def post(self):
if not (decoded_token := request.decoded_token):
return make_response(jsonify({"success": False}), 401)
if not (agent_id := request.args.get("id")):
return make_response(
jsonify({"success": False, "message": "ID required"}), 400
)
try:
agent = agents_collection.find_one(
{"_id": ObjectId(agent_id), "user": "system"}
)
if not agent:
return make_response(jsonify({"status": "Not found"}), 404)
new_agent = agent.copy()
new_agent.pop("_id", None)
new_agent["user"] = decoded_token["sub"]
new_agent["status"] = "published"
new_agent["lastUsedAt"] = datetime.datetime.now(datetime.timezone.utc)
new_agent["key"] = str(uuid.uuid4())
insert_result = agents_collection.insert_one(new_agent)
response_agent = new_agent.copy()
response_agent.pop("_id", None)
response_agent["id"] = str(insert_result.inserted_id)
response_agent["tool_details"] = resolve_tool_details(
response_agent.get("tools", [])
)
if isinstance(response_agent.get("source"), DBRef):
response_agent["source"] = str(response_agent["source"].id)
return make_response(
jsonify({"success": True, "agent": response_agent}), 200
)
except Exception as e:
current_app.logger.error(f"Agent adopt error: {e}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
@agents_ns.route("/pin_agent")
class PinAgent(Resource):
@api.doc(params={"id": "ID of the agent"}, description="Pin or unpin an agent")
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user_id = decoded_token.get("sub")
agent_id = request.args.get("id")
if not agent_id:
return make_response(
jsonify({"success": False, "message": "ID is required"}), 400
)
try:
agent = agents_collection.find_one({"_id": ObjectId(agent_id)})
if not agent:
return make_response(
jsonify({"success": False, "message": "Agent not found"}), 404
)
user_doc = ensure_user_doc(user_id)
pinned_list = user_doc.get("agent_preferences", {}).get("pinned", [])
if agent_id in pinned_list:
users_collection.update_one(
{"user_id": user_id},
{"$pull": {"agent_preferences.pinned": agent_id}},
)
action = "unpinned"
else:
users_collection.update_one(
{"user_id": user_id},
{"$addToSet": {"agent_preferences.pinned": agent_id}},
)
action = "pinned"
except Exception as err:
current_app.logger.error(f"Error pinning/unpinning agent: {err}")
return make_response(
jsonify({"success": False, "message": "Server error"}), 500
)
return make_response(jsonify({"success": True, "action": action}), 200)
@agents_ns.route("/remove_shared_agent")
class RemoveSharedAgent(Resource):
@api.doc(
params={"id": "ID of the shared agent"},
description="Remove a shared agent from the current user's shared list",
)
def delete(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user_id = decoded_token.get("sub")
agent_id = request.args.get("id")
if not agent_id:
return make_response(
jsonify({"success": False, "message": "ID is required"}), 400
)
try:
agent = agents_collection.find_one(
{"_id": ObjectId(agent_id), "shared_publicly": True}
)
if not agent:
return make_response(
jsonify({"success": False, "message": "Shared agent not found"}),
404,
)
ensure_user_doc(user_id)
users_collection.update_one(
{"user_id": user_id},
{
"$pull": {
"agent_preferences.shared_with_me": agent_id,
"agent_preferences.pinned": agent_id,
}
},
)
return make_response(jsonify({"success": True, "action": "removed"}), 200)
except Exception as err:
current_app.logger.error(f"Error removing shared agent: {err}")
return make_response(
jsonify({"success": False, "message": "Server error"}), 500
)

View File

@@ -1,254 +0,0 @@
"""Agent management sharing functionality."""
import datetime
import secrets
from bson import DBRef
from bson.objectid import ObjectId
from flask import current_app, jsonify, make_response, request
from flask_restx import fields, Namespace, Resource
from application.api import api
from application.api.user.base import (
agents_collection,
db,
ensure_user_doc,
resolve_tool_details,
user_tools_collection,
users_collection,
)
from application.utils import generate_image_url
agents_sharing_ns = Namespace(
"agents", description="Agent management operations", path="/api"
)
@agents_sharing_ns.route("/shared_agent")
class SharedAgent(Resource):
@api.doc(
params={
"token": "Shared token of the agent",
},
description="Get a shared agent by token or ID",
)
def get(self):
shared_token = request.args.get("token")
if not shared_token:
return make_response(
jsonify({"success": False, "message": "Token or ID is required"}), 400
)
try:
query = {
"shared_publicly": True,
"shared_token": shared_token,
}
shared_agent = agents_collection.find_one(query)
if not shared_agent:
return make_response(
jsonify({"success": False, "message": "Shared agent not found"}),
404,
)
agent_id = str(shared_agent["_id"])
data = {
"id": agent_id,
"user": shared_agent.get("user", ""),
"name": shared_agent.get("name", ""),
"image": (
generate_image_url(shared_agent["image"])
if shared_agent.get("image")
else ""
),
"description": shared_agent.get("description", ""),
"source": (
str(source_doc["_id"])
if isinstance(shared_agent.get("source"), DBRef)
and (source_doc := db.dereference(shared_agent.get("source")))
else ""
),
"chunks": shared_agent.get("chunks", "0"),
"retriever": shared_agent.get("retriever", "classic"),
"prompt_id": shared_agent.get("prompt_id", "default"),
"tools": shared_agent.get("tools", []),
"tool_details": resolve_tool_details(shared_agent.get("tools", [])),
"agent_type": shared_agent.get("agent_type", ""),
"status": shared_agent.get("status", ""),
"json_schema": shared_agent.get("json_schema"),
"created_at": shared_agent.get("createdAt", ""),
"updated_at": shared_agent.get("updatedAt", ""),
"shared": shared_agent.get("shared_publicly", False),
"shared_token": shared_agent.get("shared_token", ""),
"shared_metadata": shared_agent.get("shared_metadata", {}),
}
if data["tools"]:
enriched_tools = []
for tool in data["tools"]:
tool_data = user_tools_collection.find_one({"_id": ObjectId(tool)})
if tool_data:
enriched_tools.append(tool_data.get("name", ""))
data["tools"] = enriched_tools
decoded_token = getattr(request, "decoded_token", None)
if decoded_token:
user_id = decoded_token.get("sub")
owner_id = shared_agent.get("user")
if user_id != owner_id:
ensure_user_doc(user_id)
users_collection.update_one(
{"user_id": user_id},
{"$addToSet": {"agent_preferences.shared_with_me": agent_id}},
)
return make_response(jsonify(data), 200)
except Exception as err:
current_app.logger.error(f"Error retrieving shared agent: {err}")
return make_response(jsonify({"success": False}), 400)
@agents_sharing_ns.route("/shared_agents")
class SharedAgents(Resource):
@api.doc(description="Get shared agents explicitly shared with the user")
def get(self):
try:
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user_id = decoded_token.get("sub")
user_doc = ensure_user_doc(user_id)
shared_with_ids = user_doc.get("agent_preferences", {}).get(
"shared_with_me", []
)
shared_object_ids = [ObjectId(agent_id) for agent_id in shared_with_ids]
shared_agents_cursor = agents_collection.find(
{"_id": {"$in": shared_object_ids}, "shared_publicly": True}
)
shared_agents = list(shared_agents_cursor)
found_ids_set = {str(agent["_id"]) for agent in shared_agents}
stale_ids = [agent_id for agent_id in shared_with_ids if agent_id not in found_ids_set]
if stale_ids:
users_collection.update_one(
{"user_id": user_id},
{"$pullAll": {"agent_preferences.shared_with_me": stale_ids}},
)
pinned_ids = set(user_doc.get("agent_preferences", {}).get("pinned", []))
list_shared_agents = [
{
"id": str(agent["_id"]),
"name": agent.get("name", ""),
"description": agent.get("description", ""),
"image": (
generate_image_url(agent["image"]) if agent.get("image") else ""
),
"tools": agent.get("tools", []),
"tool_details": resolve_tool_details(agent.get("tools", [])),
"agent_type": agent.get("agent_type", ""),
"status": agent.get("status", ""),
"json_schema": agent.get("json_schema"),
"created_at": agent.get("createdAt", ""),
"updated_at": agent.get("updatedAt", ""),
"pinned": str(agent["_id"]) in pinned_ids,
"shared": agent.get("shared_publicly", False),
"shared_token": agent.get("shared_token", ""),
"shared_metadata": agent.get("shared_metadata", {}),
}
for agent in shared_agents
]
return make_response(jsonify(list_shared_agents), 200)
except Exception as err:
current_app.logger.error(f"Error retrieving shared agents: {err}")
return make_response(jsonify({"success": False}), 400)
@agents_sharing_ns.route("/share_agent")
class ShareAgent(Resource):
@api.expect(
api.model(
"ShareAgentModel",
{
"id": fields.String(required=True, description="ID of the agent"),
"shared": fields.Boolean(
required=True, description="Share or unshare the agent"
),
"username": fields.String(
required=False, description="Name of the user"
),
},
)
)
@api.doc(description="Share or unshare an agent")
def put(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
data = request.get_json()
if not data:
return make_response(
jsonify({"success": False, "message": "Missing JSON body"}), 400
)
agent_id = data.get("id")
shared = data.get("shared")
username = data.get("username", "")
if not agent_id:
return make_response(
jsonify({"success": False, "message": "ID is required"}), 400
)
if shared is None:
return make_response(
jsonify(
{
"success": False,
"message": "Shared parameter is required and must be true or false",
}
),
400,
)
try:
try:
agent_oid = ObjectId(agent_id)
except Exception:
return make_response(
jsonify({"success": False, "message": "Invalid agent ID"}), 400
)
agent = agents_collection.find_one({"_id": agent_oid, "user": user})
if not agent:
return make_response(
jsonify({"success": False, "message": "Agent not found"}), 404
)
if shared:
shared_metadata = {
"shared_by": username,
"shared_at": datetime.datetime.now(datetime.timezone.utc),
}
shared_token = secrets.token_urlsafe(32)
agents_collection.update_one(
{"_id": agent_oid, "user": user},
{
"$set": {
"shared_publicly": shared,
"shared_metadata": shared_metadata,
"shared_token": shared_token,
}
},
)
else:
agents_collection.update_one(
    {"_id": agent_oid, "user": user},
    {
        "$set": {"shared_publicly": shared, "shared_token": None},
        # $unset must live in the same update document; update_one's third
        # positional parameter is the boolean upsert flag, not another update.
        "$unset": {"shared_metadata": ""},
    },
)
except Exception as err:
current_app.logger.error(f"Error sharing/unsharing agent: {err}")
return make_response(jsonify({"success": False, "error": str(err)}), 400)
shared_token = shared_token if shared else None
return make_response(
jsonify({"success": True, "shared_token": shared_token}), 200
)
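
For reference, toggling sharing from a client is a single PUT request. A minimal sketch using `requests`; the host and bearer token below are placeholders, not part of the API:

```python
import requests

BASE_URL = "http://localhost:7091"  # placeholder; point at your DocsGPT API
TOKEN = "<jwt>"  # placeholder token whose "sub" claim owns the agent


def share_agent(agent_id: str, shared: bool, username: str = "") -> dict:
    """Share or unshare an agent; returns {"success": ..., "shared_token": ...}."""
    resp = requests.put(
        f"{BASE_URL}/api/share_agent",
        json={"id": agent_id, "shared": shared, "username": username},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # shared_token is None after unsharing
```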

View File

@@ -1,119 +0,0 @@
"""Agent management webhook handlers."""
import secrets
from bson.objectid import ObjectId
from flask import current_app, jsonify, make_response, request
from flask_restx import Namespace, Resource
from application.api import api
from application.api.user.base import agents_collection, require_agent
from application.api.user.tasks import process_agent_webhook
from application.core.settings import settings
agents_webhooks_ns = Namespace(
"agents", description="Agent management operations", path="/api"
)
@agents_webhooks_ns.route("/agent_webhook")
class AgentWebhook(Resource):
@api.doc(
params={"id": "ID of the agent"},
description="Generate webhook URL for the agent",
)
def get(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
agent_id = request.args.get("id")
if not agent_id:
return make_response(
jsonify({"success": False, "message": "ID is required"}), 400
)
try:
agent = agents_collection.find_one(
{"_id": ObjectId(agent_id), "user": user}
)
if not agent:
return make_response(
jsonify({"success": False, "message": "Agent not found"}), 404
)
webhook_token = agent.get("incoming_webhook_token")
if not webhook_token:
webhook_token = secrets.token_urlsafe(32)
agents_collection.update_one(
{"_id": ObjectId(agent_id), "user": user},
{"$set": {"incoming_webhook_token": webhook_token}},
)
base_url = settings.API_URL.rstrip("/")
full_webhook_url = f"{base_url}/api/webhooks/agents/{webhook_token}"
except Exception as err:
current_app.logger.error(
f"Error generating webhook URL: {err}", exc_info=True
)
return make_response(
jsonify({"success": False, "message": "Error generating webhook URL"}),
400,
)
return make_response(
jsonify({"success": True, "webhook_url": full_webhook_url}), 200
)
@agents_webhooks_ns.route("/webhooks/agents/<string:webhook_token>")
class AgentWebhookListener(Resource):
method_decorators = [require_agent]
def _enqueue_webhook_task(self, agent_id_str, payload, source_method):
if not payload:
current_app.logger.warning(
f"Webhook ({source_method}) received for agent {agent_id_str} with empty payload."
)
current_app.logger.info(
f"Incoming {source_method} webhook for agent {agent_id_str}. Enqueuing task with payload: {payload}"
)
try:
task = process_agent_webhook.delay(
agent_id=agent_id_str,
payload=payload,
)
current_app.logger.info(
f"Task {task.id} enqueued for agent {agent_id_str} ({source_method})."
)
return make_response(jsonify({"success": True, "task_id": task.id}), 200)
except Exception as err:
current_app.logger.error(
f"Error enqueuing webhook task ({source_method}) for agent {agent_id_str}: {err}",
exc_info=True,
)
return make_response(
jsonify({"success": False, "message": "Error processing webhook"}), 500
)
@api.doc(
description="Webhook listener for agent events (POST). Expects JSON payload, which is used to trigger processing.",
)
def post(self, webhook_token, agent, agent_id_str):
payload = request.get_json()
if payload is None:
return make_response(
jsonify(
{
"success": False,
"message": "Invalid or missing JSON data in request body",
}
),
400,
)
return self._enqueue_webhook_task(agent_id_str, payload, source_method="POST")
@api.doc(
description="Webhook listener for agent events (GET). Uses URL query parameters as payload to trigger processing.",
)
def get(self, webhook_token, agent, agent_id_str):
payload = request.args.to_dict(flat=True)
return self._enqueue_webhook_task(agent_id_str, payload, source_method="GET")

View File

@@ -1,5 +0,0 @@
"""Analytics module."""
from .routes import analytics_ns
__all__ = ["analytics_ns"]

View File

@@ -1,540 +0,0 @@
"""Analytics and reporting routes."""
import datetime
from bson.objectid import ObjectId
from flask import current_app, jsonify, make_response, request
from flask_restx import fields, Namespace, Resource
from application.api import api
from application.api.user.base import (
agents_collection,
conversations_collection,
generate_date_range,
generate_hourly_range,
generate_minute_range,
token_usage_collection,
user_logs_collection,
)
analytics_ns = Namespace(
"analytics", description="Analytics and reporting operations", path="/api"
)
@analytics_ns.route("/get_message_analytics")
class GetMessageAnalytics(Resource):
get_message_analytics_model = api.model(
"GetMessageAnalyticsModel",
{
"api_key_id": fields.String(required=False, description="API Key ID"),
"filter_option": fields.String(
required=False,
description="Filter option for analytics",
default="last_30_days",
enum=[
"last_hour",
"last_24_hour",
"last_7_days",
"last_15_days",
"last_30_days",
],
),
},
)
@api.expect(get_message_analytics_model)
@api.doc(description="Get message analytics based on filter option")
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
data = request.get_json()
api_key_id = data.get("api_key_id")
filter_option = data.get("filter_option", "last_30_days")
try:
api_key = (
agents_collection.find_one({"_id": ObjectId(api_key_id), "user": user})[
"key"
]
if api_key_id
else None
)
except Exception as err:
current_app.logger.error(f"Error getting API key: {err}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
end_date = datetime.datetime.now(datetime.timezone.utc)
if filter_option == "last_hour":
start_date = end_date - datetime.timedelta(hours=1)
group_format = "%Y-%m-%d %H:%M:00"
elif filter_option == "last_24_hour":
start_date = end_date - datetime.timedelta(hours=24)
group_format = "%Y-%m-%d %H:00"
else:
if filter_option in ["last_7_days", "last_15_days", "last_30_days"]:
filter_days = (
6
if filter_option == "last_7_days"
else 14 if filter_option == "last_15_days" else 29
)
else:
return make_response(
jsonify({"success": False, "message": "Invalid option"}), 400
)
start_date = end_date - datetime.timedelta(days=filter_days)
start_date = start_date.replace(hour=0, minute=0, second=0, microsecond=0)
end_date = end_date.replace(
hour=23, minute=59, second=59, microsecond=999999
)
group_format = "%Y-%m-%d"
try:
match_stage = {
"$match": {
"user": user,
}
}
if api_key:
match_stage["$match"]["api_key"] = api_key
pipeline = [
match_stage,
{"$unwind": "$queries"},
{
"$match": {
"queries.timestamp": {"$gte": start_date, "$lte": end_date}
}
},
{
"$group": {
"_id": {
"$dateToString": {
"format": group_format,
"date": "$queries.timestamp",
}
},
"count": {"$sum": 1},
}
},
{"$sort": {"_id": 1}},
]
message_data = conversations_collection.aggregate(pipeline)
if filter_option == "last_hour":
intervals = generate_minute_range(start_date, end_date)
elif filter_option == "last_24_hour":
intervals = generate_hourly_range(start_date, end_date)
else:
intervals = generate_date_range(start_date, end_date)
daily_messages = {interval: 0 for interval in intervals}
for entry in message_data:
daily_messages[entry["_id"]] = entry["count"]
except Exception as err:
current_app.logger.error(
f"Error getting message analytics: {err}", exc_info=True
)
return make_response(jsonify({"success": False}), 400)
return make_response(
jsonify({"success": True, "messages": daily_messages}), 200
)
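
The response maps every interval in the selected window to a message count, zero-filled, so it can be plotted without gap handling. A hedged request sketch (placeholder host and token):

```python
import requests

resp = requests.post(
    "http://localhost:7091/api/get_message_analytics",  # placeholder host
    json={"filter_option": "last_7_days"},  # api_key_id is optional
    headers={"Authorization": "Bearer <jwt>"},  # placeholder token
    timeout=10,
)
messages = resp.json()["messages"]  # e.g. {"2025-09-01": 4, "2025-09-02": 0, ...}
```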
@analytics_ns.route("/get_token_analytics")
class GetTokenAnalytics(Resource):
get_token_analytics_model = api.model(
"GetTokenAnalyticsModel",
{
"api_key_id": fields.String(required=False, description="API Key ID"),
"filter_option": fields.String(
required=False,
description="Filter option for analytics",
default="last_30_days",
enum=[
"last_hour",
"last_24_hour",
"last_7_days",
"last_15_days",
"last_30_days",
],
),
},
)
@api.expect(get_token_analytics_model)
@api.doc(description="Get token analytics data")
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
data = request.get_json()
api_key_id = data.get("api_key_id")
filter_option = data.get("filter_option", "last_30_days")
try:
api_key = (
agents_collection.find_one({"_id": ObjectId(api_key_id), "user": user})[
"key"
]
if api_key_id
else None
)
except Exception as err:
current_app.logger.error(f"Error getting API key: {err}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
end_date = datetime.datetime.now(datetime.timezone.utc)
if filter_option == "last_hour":
start_date = end_date - datetime.timedelta(hours=1)
group_format = "%Y-%m-%d %H:%M:00"
group_stage = {
"$group": {
"_id": {
"minute": {
"$dateToString": {
"format": group_format,
"date": "$timestamp",
}
}
},
"total_tokens": {
"$sum": {"$add": ["$prompt_tokens", "$generated_tokens"]}
},
}
}
elif filter_option == "last_24_hour":
start_date = end_date - datetime.timedelta(hours=24)
group_format = "%Y-%m-%d %H:00"
group_stage = {
"$group": {
"_id": {
"hour": {
"$dateToString": {
"format": group_format,
"date": "$timestamp",
}
}
},
"total_tokens": {
"$sum": {"$add": ["$prompt_tokens", "$generated_tokens"]}
},
}
}
else:
if filter_option in ["last_7_days", "last_15_days", "last_30_days"]:
filter_days = (
6
if filter_option == "last_7_days"
else (14 if filter_option == "last_15_days" else 29)
)
else:
return make_response(
jsonify({"success": False, "message": "Invalid option"}), 400
)
start_date = end_date - datetime.timedelta(days=filter_days)
start_date = start_date.replace(hour=0, minute=0, second=0, microsecond=0)
end_date = end_date.replace(
hour=23, minute=59, second=59, microsecond=999999
)
group_format = "%Y-%m-%d"
group_stage = {
"$group": {
"_id": {
"day": {
"$dateToString": {
"format": group_format,
"date": "$timestamp",
}
}
},
"total_tokens": {
"$sum": {"$add": ["$prompt_tokens", "$generated_tokens"]}
},
}
}
try:
match_stage = {
"$match": {
"user_id": user,
"timestamp": {"$gte": start_date, "$lte": end_date},
}
}
if api_key:
match_stage["$match"]["api_key"] = api_key
token_usage_data = token_usage_collection.aggregate(
[
match_stage,
group_stage,
{"$sort": {"_id": 1}},
]
)
if filter_option == "last_hour":
intervals = generate_minute_range(start_date, end_date)
elif filter_option == "last_24_hour":
intervals = generate_hourly_range(start_date, end_date)
else:
intervals = generate_date_range(start_date, end_date)
daily_token_usage = {interval: 0 for interval in intervals}
for entry in token_usage_data:
if filter_option == "last_hour":
daily_token_usage[entry["_id"]["minute"]] = entry["total_tokens"]
elif filter_option == "last_24_hour":
daily_token_usage[entry["_id"]["hour"]] = entry["total_tokens"]
else:
daily_token_usage[entry["_id"]["day"]] = entry["total_tokens"]
except Exception as err:
current_app.logger.error(
f"Error getting token analytics: {err}", exc_info=True
)
return make_response(jsonify({"success": False}), 400)
return make_response(
jsonify({"success": True, "token_usage": daily_token_usage}), 200
)
@analytics_ns.route("/get_feedback_analytics")
class GetFeedbackAnalytics(Resource):
get_feedback_analytics_model = api.model(
"GetFeedbackAnalyticsModel",
{
"api_key_id": fields.String(required=False, description="API Key ID"),
"filter_option": fields.String(
required=False,
description="Filter option for analytics",
default="last_30_days",
enum=[
"last_hour",
"last_24_hour",
"last_7_days",
"last_15_days",
"last_30_days",
],
),
},
)
@api.expect(get_feedback_analytics_model)
@api.doc(description="Get feedback analytics data")
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
data = request.get_json()
api_key_id = data.get("api_key_id")
filter_option = data.get("filter_option", "last_30_days")
try:
api_key = (
agents_collection.find_one({"_id": ObjectId(api_key_id), "user": user})[
"key"
]
if api_key_id
else None
)
except Exception as err:
current_app.logger.error(f"Error getting API key: {err}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
end_date = datetime.datetime.now(datetime.timezone.utc)
if filter_option == "last_hour":
start_date = end_date - datetime.timedelta(hours=1)
group_format = "%Y-%m-%d %H:%M:00"
date_field = {
"$dateToString": {
"format": group_format,
"date": "$queries.feedback_timestamp",
}
}
elif filter_option == "last_24_hour":
start_date = end_date - datetime.timedelta(hours=24)
group_format = "%Y-%m-%d %H:00"
date_field = {
"$dateToString": {
"format": group_format,
"date": "$queries.feedback_timestamp",
}
}
else:
if filter_option in ["last_7_days", "last_15_days", "last_30_days"]:
filter_days = (
6
if filter_option == "last_7_days"
else (14 if filter_option == "last_15_days" else 29)
)
else:
return make_response(
jsonify({"success": False, "message": "Invalid option"}), 400
)
start_date = end_date - datetime.timedelta(days=filter_days)
start_date = start_date.replace(hour=0, minute=0, second=0, microsecond=0)
end_date = end_date.replace(
hour=23, minute=59, second=59, microsecond=999999
)
group_format = "%Y-%m-%d"
date_field = {
"$dateToString": {
"format": group_format,
"date": "$queries.feedback_timestamp",
}
}
try:
match_stage = {
"$match": {
"queries.feedback_timestamp": {
"$gte": start_date,
"$lte": end_date,
},
"queries.feedback": {"$exists": True},
}
}
if api_key:
match_stage["$match"]["api_key"] = api_key
pipeline = [
match_stage,
{"$unwind": "$queries"},
{"$match": {"queries.feedback": {"$exists": True}}},
{
"$group": {
"_id": {"time": date_field, "feedback": "$queries.feedback"},
"count": {"$sum": 1},
}
},
{
"$group": {
"_id": "$_id.time",
"positive": {
"$sum": {
"$cond": [
{"$eq": ["$_id.feedback", "LIKE"]},
"$count",
0,
]
}
},
"negative": {
"$sum": {
"$cond": [
{"$eq": ["$_id.feedback", "DISLIKE"]},
"$count",
0,
]
}
},
}
},
{"$sort": {"_id": 1}},
]
feedback_data = conversations_collection.aggregate(pipeline)
if filter_option == "last_hour":
intervals = generate_minute_range(start_date, end_date)
elif filter_option == "last_24_hour":
intervals = generate_hourly_range(start_date, end_date)
else:
intervals = generate_date_range(start_date, end_date)
daily_feedback = {
interval: {"positive": 0, "negative": 0} for interval in intervals
}
for entry in feedback_data:
daily_feedback[entry["_id"]] = {
"positive": entry["positive"],
"negative": entry["negative"],
}
except Exception as err:
current_app.logger.error(
f"Error getting feedback analytics: {err}", exc_info=True
)
return make_response(jsonify({"success": False}), 400)
return make_response(
jsonify({"success": True, "feedback": daily_feedback}), 200
)
@analytics_ns.route("/get_user_logs")
class GetUserLogs(Resource):
get_user_logs_model = api.model(
"GetUserLogsModel",
{
"page": fields.Integer(
required=False,
description="Page number for pagination",
default=1,
),
"api_key_id": fields.String(required=False, description="API Key ID"),
"page_size": fields.Integer(
required=False,
description="Number of logs per page",
default=10,
),
},
)
@api.expect(get_user_logs_model)
@api.doc(description="Get user logs with pagination")
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
data = request.get_json()
page = int(data.get("page", 1))
api_key_id = data.get("api_key_id")
page_size = int(data.get("page_size", 10))
skip = (page - 1) * page_size
try:
            api_key = (
                agents_collection.find_one({"_id": ObjectId(api_key_id), "user": user})[
                    "key"
                ]
                if api_key_id
                else None
            )
except Exception as err:
current_app.logger.error(f"Error getting API key: {err}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
query = {"user": user}
if api_key:
query = {"api_key": api_key}
items_cursor = (
user_logs_collection.find(query)
.sort("timestamp", -1)
.skip(skip)
.limit(page_size + 1)
)
items = list(items_cursor)
results = [
{
"id": str(item.get("_id")),
"action": item.get("action"),
"level": item.get("level"),
"user": item.get("user"),
"question": item.get("question"),
"sources": item.get("sources"),
"retriever_params": item.get("retriever_params"),
"timestamp": item.get("timestamp"),
}
for item in items[:page_size]
]
has_more = len(items) > page_size
return make_response(
jsonify(
{
"success": True,
"logs": results,
"page": page,
"page_size": page_size,
"has_more": has_more,
}
),
200,
)
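
Note that the endpoint fetches `page_size + 1` rows so `has_more` can be computed without a separate count query. A client can page through logs like this (sketch, placeholder host and token):

```python
import requests


def iter_user_logs(base_url: str, token: str, page_size: int = 10):
    """Yield user log entries page by page until has_more is False."""
    page = 1
    while True:
        body = requests.post(
            f"{base_url}/api/get_user_logs",
            json={"page": page, "page_size": page_size},
            headers={"Authorization": f"Bearer {token}"},
            timeout=10,
        ).json()
        yield from body["logs"]
        if not body["has_more"]:
            break
        page += 1
```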

View File

@@ -1,5 +0,0 @@
"""Attachments module."""
from .routes import attachments_ns
__all__ = ["attachments_ns"]

View File

@@ -1,150 +0,0 @@
"""File attachments and media routes."""
import os
from bson.objectid import ObjectId
from flask import current_app, jsonify, make_response, request
from flask_restx import fields, Namespace, Resource
from application.api import api
from application.api.user.base import agents_collection, storage
from application.api.user.tasks import store_attachment
from application.core.settings import settings
from application.tts.google_tts import GoogleTTS
from application.utils import safe_filename
attachments_ns = Namespace(
"attachments", description="File attachments and media operations", path="/api"
)
@attachments_ns.route("/store_attachment")
class StoreAttachment(Resource):
@api.expect(
api.model(
"AttachmentModel",
{
"file": fields.Raw(required=True, description="File to upload"),
"api_key": fields.String(
required=False, description="API key (optional)"
),
},
)
)
@api.doc(
description="Stores a single attachment without vectorization or training. Supports user or API key authentication."
)
def post(self):
decoded_token = getattr(request, "decoded_token", None)
api_key = request.form.get("api_key") or request.args.get("api_key")
file = request.files.get("file")
if not file or file.filename == "":
return make_response(
jsonify({"status": "error", "message": "Missing file"}),
400,
)
user = None
if decoded_token:
user = safe_filename(decoded_token.get("sub"))
elif api_key:
agent = agents_collection.find_one({"key": api_key})
if not agent:
return make_response(
jsonify({"success": False, "message": "Invalid API key"}), 401
)
user = safe_filename(agent.get("user"))
else:
return make_response(
jsonify({"success": False, "message": "Authentication required"}), 401
)
try:
attachment_id = ObjectId()
original_filename = safe_filename(os.path.basename(file.filename))
relative_path = f"{settings.UPLOAD_FOLDER}/{user}/attachments/{str(attachment_id)}/{original_filename}"
metadata = storage.save_file(file, relative_path)
file_info = {
"filename": original_filename,
"attachment_id": str(attachment_id),
"path": relative_path,
"metadata": metadata,
}
task = store_attachment.delay(file_info, user)
return make_response(
jsonify(
{
"success": True,
"task_id": task.id,
"message": "File uploaded successfully. Processing started.",
}
),
200,
)
except Exception as err:
current_app.logger.error(f"Error storing attachment: {err}", exc_info=True)
return make_response(jsonify({"success": False, "error": str(err)}), 400)
@attachments_ns.route("/images/<path:image_path>")
class ServeImage(Resource):
@api.doc(description="Serve an image from storage")
def get(self, image_path):
try:
file_obj = storage.get_file(image_path)
extension = image_path.split(".")[-1].lower()
content_type = f"image/{extension}"
if extension == "jpg":
content_type = "image/jpeg"
response = make_response(file_obj.read())
response.headers.set("Content-Type", content_type)
response.headers.set("Cache-Control", "max-age=86400")
return response
except FileNotFoundError:
return make_response(
jsonify({"success": False, "message": "Image not found"}), 404
)
except Exception as e:
current_app.logger.error(f"Error serving image: {e}")
return make_response(
jsonify({"success": False, "message": "Error retrieving image"}), 500
)
@attachments_ns.route("/tts")
class TextToSpeech(Resource):
tts_model = api.model(
"TextToSpeechModel",
{
"text": fields.String(
required=True, description="Text to be synthesized as audio"
),
},
)
@api.expect(tts_model)
@api.doc(description="Synthesize audio speech from text")
def post(self):
        data = request.get_json()
        text = data.get("text") if data else None
        if not text:
            return make_response(
                jsonify({"success": False, "message": "text is required"}), 400
            )
try:
tts_instance = GoogleTTS()
audio_base64, detected_language = tts_instance.text_to_speech(text)
return make_response(
jsonify(
{
"success": True,
"audio_base64": audio_base64,
"lang": detected_language,
}
),
200,
)
except Exception as err:
current_app.logger.error(f"Error synthesizing audio: {err}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
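
The endpoint returns base64-encoded audio plus the language detected by Google TTS; decoding on the client is a one-liner. A sketch (placeholder host; the audio container format is assumed to be MP3):

```python
import base64

import requests

resp = requests.post(
    "http://localhost:7091/api/tts",  # placeholder host
    json={"text": "Hello from DocsGPT"},
    timeout=30,
)
body = resp.json()
with open("speech.mp3", "wb") as f:  # assuming MP3 output from GoogleTTS
    f.write(base64.b64decode(body["audio_base64"]))
print("Detected language:", body["lang"])
```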

View File

@@ -1,222 +0,0 @@
"""
Shared utilities, database connections, and helper functions for user API routes.
"""
import datetime
import os
import uuid
from functools import wraps
from typing import Optional, Tuple
from bson.objectid import ObjectId
from flask import current_app, jsonify, make_response, Response
from pymongo import ReturnDocument
from werkzeug.utils import secure_filename
from application.core.mongo_db import MongoDB
from application.core.settings import settings
from application.storage.storage_creator import StorageCreator
from application.vectorstore.vector_creator import VectorCreator
storage = StorageCreator.get_storage()
mongo = MongoDB.get_client()
db = mongo[settings.MONGO_DB_NAME]
conversations_collection = db["conversations"]
sources_collection = db["sources"]
prompts_collection = db["prompts"]
feedback_collection = db["feedback"]
agents_collection = db["agents"]
token_usage_collection = db["token_usage"]
shared_conversations_collections = db["shared_conversations"]
users_collection = db["users"]
user_logs_collection = db["user_logs"]
user_tools_collection = db["user_tools"]
attachments_collection = db["attachments"]
try:
agents_collection.create_index(
[("shared", 1)],
name="shared_index",
background=True,
)
users_collection.create_index("user_id", unique=True)
except Exception as e:
print("Error creating indexes:", e)
current_dir = os.path.dirname(
os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
)
def generate_minute_range(start_date, end_date):
"""Generate a dictionary with minute-level time ranges."""
return {
(start_date + datetime.timedelta(minutes=i)).strftime("%Y-%m-%d %H:%M:00"): 0
for i in range(int((end_date - start_date).total_seconds() // 60) + 1)
}
def generate_hourly_range(start_date, end_date):
"""Generate a dictionary with hourly time ranges."""
return {
(start_date + datetime.timedelta(hours=i)).strftime("%Y-%m-%d %H:00"): 0
for i in range(int((end_date - start_date).total_seconds() // 3600) + 1)
}
def generate_date_range(start_date, end_date):
"""Generate a dictionary with daily date ranges."""
return {
(start_date + datetime.timedelta(days=i)).strftime("%Y-%m-%d"): 0
for i in range((end_date - start_date).days + 1)
}
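
These helpers pre-fill every interval in the window with zero so that sparse aggregation results can be merged in without leaving gaps. For example, using `generate_date_range` from this module:

```python
import datetime

end = datetime.datetime(2025, 9, 4, 12, 0, tzinfo=datetime.timezone.utc)
start = end - datetime.timedelta(days=2)

print(generate_date_range(start, end))
# {'2025-09-02': 0, '2025-09-03': 0, '2025-09-04': 0}
```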
def ensure_user_doc(user_id):
"""
Ensure user document exists with proper agent preferences structure.
Args:
user_id: The user ID to ensure
Returns:
The user document
"""
default_prefs = {
"pinned": [],
"shared_with_me": [],
}
user_doc = users_collection.find_one_and_update(
{"user_id": user_id},
{"$setOnInsert": {"agent_preferences": default_prefs}},
upsert=True,
return_document=ReturnDocument.AFTER,
)
prefs = user_doc.get("agent_preferences", {})
updates = {}
if "pinned" not in prefs:
updates["agent_preferences.pinned"] = []
if "shared_with_me" not in prefs:
updates["agent_preferences.shared_with_me"] = []
if updates:
users_collection.update_one({"user_id": user_id}, {"$set": updates})
user_doc = users_collection.find_one({"user_id": user_id})
return user_doc
def resolve_tool_details(tool_ids):
"""
Resolve tool IDs to their details.
Args:
tool_ids: List of tool IDs
Returns:
List of tool details with id, name, and display_name
"""
tools = user_tools_collection.find(
{"_id": {"$in": [ObjectId(tid) for tid in tool_ids]}}
)
return [
{
"id": str(tool["_id"]),
"name": tool.get("name", ""),
"display_name": tool.get("displayName", tool.get("name", "")),
}
for tool in tools
]
def get_vector_store(source_id):
"""
Get the Vector Store for a given source ID.
Args:
source_id (str): source id of the document
Returns:
Vector store instance
"""
store = VectorCreator.create_vectorstore(
settings.VECTOR_STORE,
source_id=source_id,
embeddings_key=os.getenv("EMBEDDINGS_KEY"),
)
return store
def handle_image_upload(
request, existing_url: str, user: str, storage, base_path: str = "attachments/"
) -> Tuple[str, Optional[Response]]:
"""
Handle image file upload from request.
Args:
request: Flask request object
existing_url: Existing image URL (fallback)
user: User ID
storage: Storage instance
base_path: Base path for upload
Returns:
Tuple of (image_url, error_response)
"""
image_url = existing_url
if "image" in request.files:
file = request.files["image"]
if file.filename != "":
filename = secure_filename(file.filename)
upload_path = f"{settings.UPLOAD_FOLDER.rstrip('/')}/{user}/{base_path.rstrip('/')}/{uuid.uuid4()}_{filename}"
try:
storage.save_file(file, upload_path, storage_class="STANDARD")
image_url = upload_path
except Exception as e:
current_app.logger.error(f"Error uploading image: {e}")
return None, make_response(
jsonify({"success": False, "message": "Image upload failed"}),
400,
)
return image_url, None
def require_agent(func):
"""
Decorator to require valid agent webhook token.
Args:
func: Function to decorate
Returns:
Wrapped function
"""
@wraps(func)
def wrapper(*args, **kwargs):
webhook_token = kwargs.get("webhook_token")
if not webhook_token:
return make_response(
jsonify({"success": False, "message": "Webhook token missing"}), 400
)
agent = agents_collection.find_one(
{"incoming_webhook_token": webhook_token}, {"_id": 1}
)
if not agent:
current_app.logger.warning(
f"Webhook attempt with invalid token: {webhook_token}"
)
return make_response(
jsonify({"success": False, "message": "Agent not found"}), 404
)
kwargs["agent"] = agent
kwargs["agent_id_str"] = str(agent["_id"])
return func(*args, **kwargs)
return wrapper

View File

@@ -1,5 +0,0 @@
"""Conversation management module."""
from .routes import conversations_ns
__all__ = ["conversations_ns"]

View File

@@ -1,280 +0,0 @@
"""Conversation management routes."""
import datetime
from bson.objectid import ObjectId
from flask import current_app, jsonify, make_response, request
from flask_restx import fields, Namespace, Resource
from application.api import api
from application.api.user.base import attachments_collection, conversations_collection
from application.utils import check_required_fields
conversations_ns = Namespace(
"conversations", description="Conversation management operations", path="/api"
)
@conversations_ns.route("/delete_conversation")
class DeleteConversation(Resource):
@api.doc(
description="Deletes a conversation by ID",
params={"id": "The ID of the conversation to delete"},
)
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
conversation_id = request.args.get("id")
if not conversation_id:
return make_response(
jsonify({"success": False, "message": "ID is required"}), 400
)
try:
conversations_collection.delete_one(
{"_id": ObjectId(conversation_id), "user": decoded_token["sub"]}
)
except Exception as err:
current_app.logger.error(
f"Error deleting conversation: {err}", exc_info=True
)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"success": True}), 200)
@conversations_ns.route("/delete_all_conversations")
class DeleteAllConversations(Resource):
@api.doc(
description="Deletes all conversations for a specific user",
)
def get(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user_id = decoded_token.get("sub")
try:
conversations_collection.delete_many({"user": user_id})
except Exception as err:
current_app.logger.error(
f"Error deleting all conversations: {err}", exc_info=True
)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"success": True}), 200)
@conversations_ns.route("/get_conversations")
class GetConversations(Resource):
@api.doc(
description="Retrieve a list of the latest 30 conversations (excluding API key conversations)",
)
def get(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
try:
conversations = (
conversations_collection.find(
{
"$or": [
{"api_key": {"$exists": False}},
{"agent_id": {"$exists": True}},
],
"user": decoded_token.get("sub"),
}
)
.sort("date", -1)
.limit(30)
)
list_conversations = [
{
"id": str(conversation["_id"]),
"name": conversation["name"],
"agent_id": conversation.get("agent_id", None),
"is_shared_usage": conversation.get("is_shared_usage", False),
"shared_token": conversation.get("shared_token", None),
}
for conversation in conversations
]
except Exception as err:
current_app.logger.error(
f"Error retrieving conversations: {err}", exc_info=True
)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify(list_conversations), 200)
@conversations_ns.route("/get_single_conversation")
class GetSingleConversation(Resource):
@api.doc(
description="Retrieve a single conversation by ID",
params={"id": "The conversation ID"},
)
def get(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
conversation_id = request.args.get("id")
if not conversation_id:
return make_response(
jsonify({"success": False, "message": "ID is required"}), 400
)
try:
conversation = conversations_collection.find_one(
{"_id": ObjectId(conversation_id), "user": decoded_token.get("sub")}
)
if not conversation:
return make_response(jsonify({"status": "not found"}), 404)
# Process queries to include attachment names
queries = conversation["queries"]
for query in queries:
if "attachments" in query and query["attachments"]:
attachment_details = []
for attachment_id in query["attachments"]:
try:
attachment = attachments_collection.find_one(
{"_id": ObjectId(attachment_id)}
)
if attachment:
attachment_details.append(
{
"id": str(attachment["_id"]),
"fileName": attachment.get(
"filename", "Unknown file"
),
}
)
except Exception as e:
current_app.logger.error(
f"Error retrieving attachment {attachment_id}: {e}",
exc_info=True,
)
query["attachments"] = attachment_details
except Exception as err:
current_app.logger.error(
f"Error retrieving conversation: {err}", exc_info=True
)
return make_response(jsonify({"success": False}), 400)
data = {
"queries": queries,
"agent_id": conversation.get("agent_id"),
"is_shared_usage": conversation.get("is_shared_usage", False),
"shared_token": conversation.get("shared_token", None),
}
return make_response(jsonify(data), 200)
@conversations_ns.route("/update_conversation_name")
class UpdateConversationName(Resource):
@api.expect(
api.model(
"UpdateConversationModel",
{
"id": fields.String(required=True, description="Conversation ID"),
"name": fields.String(
required=True, description="New name of the conversation"
),
},
)
)
@api.doc(
description="Updates the name of a conversation",
)
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
data = request.get_json()
required_fields = ["id", "name"]
missing_fields = check_required_fields(data, required_fields)
if missing_fields:
return missing_fields
try:
conversations_collection.update_one(
{"_id": ObjectId(data["id"]), "user": decoded_token.get("sub")},
{"$set": {"name": data["name"]}},
)
except Exception as err:
current_app.logger.error(
f"Error updating conversation name: {err}", exc_info=True
)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"success": True}), 200)
@conversations_ns.route("/feedback")
class SubmitFeedback(Resource):
@api.expect(
api.model(
"FeedbackModel",
{
"question": fields.String(
required=False, description="The user question"
),
"answer": fields.String(required=False, description="The AI answer"),
"feedback": fields.String(required=True, description="User feedback"),
"question_index": fields.Integer(
required=True,
description="The question number in that particular conversation",
),
"conversation_id": fields.String(
required=True, description="id of the particular conversation"
),
"api_key": fields.String(description="Optional API key"),
},
)
)
@api.doc(
description="Submit feedback for a conversation",
)
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
data = request.get_json()
required_fields = ["feedback", "conversation_id", "question_index"]
missing_fields = check_required_fields(data, required_fields)
if missing_fields:
return missing_fields
try:
if data["feedback"] is None:
# Remove feedback and feedback_timestamp if feedback is null
conversations_collection.update_one(
{
"_id": ObjectId(data["conversation_id"]),
"user": decoded_token.get("sub"),
f"queries.{data['question_index']}": {"$exists": True},
},
{
"$unset": {
f"queries.{data['question_index']}.feedback": "",
f"queries.{data['question_index']}.feedback_timestamp": "",
}
},
)
else:
# Set feedback and feedback_timestamp if feedback has a value
conversations_collection.update_one(
{
"_id": ObjectId(data["conversation_id"]),
"user": decoded_token.get("sub"),
f"queries.{data['question_index']}": {"$exists": True},
},
{
"$set": {
f"queries.{data['question_index']}.feedback": data[
"feedback"
],
f"queries.{data['question_index']}.feedback_timestamp": datetime.datetime.now(
datetime.timezone.utc
),
}
},
)
except Exception as err:
current_app.logger.error(f"Error submitting feedback: {err}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"success": True}), 200)
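
For reference, `feedback` accepts `"LIKE"` or `"DISLIKE"` (the values the analytics pipeline groups on), and sending `null` clears a previous rating. A sketch with placeholder IDs:

```python
import requests

payload = {
    "conversation_id": "<conversation ObjectId>",  # placeholder
    "question_index": 0,
    "feedback": "LIKE",  # "DISLIKE" to downvote, None to clear the rating
}
requests.post(
    "http://localhost:7091/api/feedback",  # placeholder host
    json=payload,
    headers={"Authorization": "Bearer <jwt>"},  # placeholder token
    timeout=10,
)
```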

View File

@@ -1,5 +0,0 @@
"""Prompts module."""
from .routes import prompts_ns
__all__ = ["prompts_ns"]

View File

@@ -1,191 +0,0 @@
"""Prompt management routes."""
import os
from bson.objectid import ObjectId
from flask import current_app, jsonify, make_response, request
from flask_restx import fields, Namespace, Resource
from application.api import api
from application.api.user.base import current_dir, prompts_collection
from application.utils import check_required_fields
prompts_ns = Namespace(
"prompts", description="Prompt management operations", path="/api"
)
@prompts_ns.route("/create_prompt")
class CreatePrompt(Resource):
create_prompt_model = api.model(
"CreatePromptModel",
{
"content": fields.String(
required=True, description="Content of the prompt"
),
"name": fields.String(required=True, description="Name of the prompt"),
},
)
@api.expect(create_prompt_model)
@api.doc(description="Create a new prompt")
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
data = request.get_json()
required_fields = ["content", "name"]
missing_fields = check_required_fields(data, required_fields)
if missing_fields:
return missing_fields
user = decoded_token.get("sub")
try:
resp = prompts_collection.insert_one(
{
"name": data["name"],
"content": data["content"],
"user": user,
}
)
new_id = str(resp.inserted_id)
except Exception as err:
current_app.logger.error(f"Error creating prompt: {err}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"id": new_id}), 200)
@prompts_ns.route("/get_prompts")
class GetPrompts(Resource):
@api.doc(description="Get all prompts for the user")
def get(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
try:
prompts = prompts_collection.find({"user": user})
list_prompts = [
{"id": "default", "name": "default", "type": "public"},
{"id": "creative", "name": "creative", "type": "public"},
{"id": "strict", "name": "strict", "type": "public"},
]
for prompt in prompts:
list_prompts.append(
{
"id": str(prompt["_id"]),
"name": prompt["name"],
"type": "private",
}
)
except Exception as err:
current_app.logger.error(f"Error retrieving prompts: {err}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify(list_prompts), 200)
@prompts_ns.route("/get_single_prompt")
class GetSinglePrompt(Resource):
@api.doc(params={"id": "ID of the prompt"}, description="Get a single prompt by ID")
def get(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
prompt_id = request.args.get("id")
if not prompt_id:
return make_response(
jsonify({"success": False, "message": "ID is required"}), 400
)
try:
if prompt_id == "default":
with open(
os.path.join(current_dir, "prompts", "chat_combine_default.txt"),
"r",
) as f:
chat_combine_template = f.read()
return make_response(jsonify({"content": chat_combine_template}), 200)
elif prompt_id == "creative":
with open(
os.path.join(current_dir, "prompts", "chat_combine_creative.txt"),
"r",
) as f:
chat_reduce_creative = f.read()
return make_response(jsonify({"content": chat_reduce_creative}), 200)
elif prompt_id == "strict":
with open(
os.path.join(current_dir, "prompts", "chat_combine_strict.txt"), "r"
) as f:
chat_reduce_strict = f.read()
return make_response(jsonify({"content": chat_reduce_strict}), 200)
prompt = prompts_collection.find_one(
{"_id": ObjectId(prompt_id), "user": user}
)
except Exception as err:
current_app.logger.error(f"Error retrieving prompt: {err}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"content": prompt["content"]}), 200)
@prompts_ns.route("/delete_prompt")
class DeletePrompt(Resource):
delete_prompt_model = api.model(
"DeletePromptModel",
{"id": fields.String(required=True, description="Prompt ID to delete")},
)
@api.expect(delete_prompt_model)
@api.doc(description="Delete a prompt by ID")
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
data = request.get_json()
required_fields = ["id"]
missing_fields = check_required_fields(data, required_fields)
if missing_fields:
return missing_fields
try:
prompts_collection.delete_one({"_id": ObjectId(data["id"]), "user": user})
except Exception as err:
current_app.logger.error(f"Error deleting prompt: {err}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"success": True}), 200)
@prompts_ns.route("/update_prompt")
class UpdatePrompt(Resource):
update_prompt_model = api.model(
"UpdatePromptModel",
{
"id": fields.String(required=True, description="Prompt ID to update"),
"name": fields.String(required=True, description="New name of the prompt"),
"content": fields.String(
required=True, description="New content of the prompt"
),
},
)
@api.expect(update_prompt_model)
@api.doc(description="Update an existing prompt")
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
data = request.get_json()
required_fields = ["id", "name", "content"]
missing_fields = check_required_fields(data, required_fields)
if missing_fields:
return missing_fields
try:
prompts_collection.update_one(
{"_id": ObjectId(data["id"]), "user": user},
{"$set": {"name": data["name"], "content": data["content"]}},
)
except Exception as err:
current_app.logger.error(f"Error updating prompt: {err}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"success": True}), 200)
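
A typical client flow creates a prompt and later updates it in place, reusing the ID returned on creation. Sketch (placeholder host and token):

```python
import requests

BASE = "http://localhost:7091"  # placeholder
HEADERS = {"Authorization": "Bearer <jwt>"}  # placeholder

prompt_id = requests.post(
    f"{BASE}/api/create_prompt",
    json={"name": "support", "content": "Answer politely."},
    headers=HEADERS,
    timeout=10,
).json()["id"]

requests.post(
    f"{BASE}/api/update_prompt",
    json={"id": prompt_id, "name": "support", "content": "Answer politely and cite sources."},
    headers=HEADERS,
    timeout=10,
)
```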

File diff suppressed because it is too large

View File

@@ -1,5 +0,0 @@
"""Sharing module."""
from .routes import sharing_ns
__all__ = ["sharing_ns"]

View File

@@ -1,301 +0,0 @@
"""Conversation sharing routes."""
import uuid
from bson.binary import Binary, UuidRepresentation
from bson.dbref import DBRef
from bson.objectid import ObjectId
from flask import current_app, jsonify, make_response, request
from flask_restx import fields, inputs, Namespace, Resource
from application.api import api
from application.api.user.base import (
agents_collection,
attachments_collection,
conversations_collection,
db,
shared_conversations_collections,
)
from application.utils import check_required_fields
sharing_ns = Namespace(
"sharing", description="Conversation sharing operations", path="/api"
)
@sharing_ns.route("/share")
class ShareConversation(Resource):
share_conversation_model = api.model(
"ShareConversationModel",
{
"conversation_id": fields.String(
required=True, description="Conversation ID"
),
"user": fields.String(description="User ID (optional)"),
"prompt_id": fields.String(description="Prompt ID (optional)"),
"chunks": fields.Integer(description="Chunks count (optional)"),
},
)
@api.expect(share_conversation_model)
@api.doc(description="Share a conversation")
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
data = request.get_json()
required_fields = ["conversation_id"]
missing_fields = check_required_fields(data, required_fields)
if missing_fields:
return missing_fields
is_promptable = request.args.get("isPromptable", type=inputs.boolean)
if is_promptable is None:
return make_response(
jsonify({"success": False, "message": "isPromptable is required"}), 400
)
conversation_id = data["conversation_id"]
try:
conversation = conversations_collection.find_one(
{"_id": ObjectId(conversation_id)}
)
if conversation is None:
return make_response(
jsonify(
{
"status": "error",
"message": "Conversation does not exist",
}
),
404,
)
current_n_queries = len(conversation["queries"])
explicit_binary = Binary.from_uuid(
uuid.uuid4(), UuidRepresentation.STANDARD
)
if is_promptable:
prompt_id = data.get("prompt_id", "default")
chunks = data.get("chunks", "2")
                name = conversation["name"] + " (shared)"
new_api_key_data = {
"prompt_id": prompt_id,
"chunks": chunks,
"user": user,
}
if "source" in data and ObjectId.is_valid(data["source"]):
new_api_key_data["source"] = DBRef(
"sources", ObjectId(data["source"])
)
if "retriever" in data:
new_api_key_data["retriever"] = data["retriever"]
pre_existing_api_document = agents_collection.find_one(new_api_key_data)
if pre_existing_api_document:
api_uuid = pre_existing_api_document["key"]
pre_existing = shared_conversations_collections.find_one(
{
"conversation_id": DBRef(
"conversations", ObjectId(conversation_id)
),
"isPromptable": is_promptable,
"first_n_queries": current_n_queries,
"user": user,
"api_key": api_uuid,
}
)
if pre_existing is not None:
return make_response(
jsonify(
{
"success": True,
"identifier": str(pre_existing["uuid"].as_uuid()),
}
),
200,
)
else:
shared_conversations_collections.insert_one(
{
"uuid": explicit_binary,
"conversation_id": {
"$ref": "conversations",
"$id": ObjectId(conversation_id),
},
"isPromptable": is_promptable,
"first_n_queries": current_n_queries,
"user": user,
"api_key": api_uuid,
}
)
return make_response(
jsonify(
{
"success": True,
"identifier": str(explicit_binary.as_uuid()),
}
),
201,
)
else:
api_uuid = str(uuid.uuid4())
new_api_key_data["key"] = api_uuid
new_api_key_data["name"] = name
if "source" in data and ObjectId.is_valid(data["source"]):
new_api_key_data["source"] = DBRef(
"sources", ObjectId(data["source"])
)
if "retriever" in data:
new_api_key_data["retriever"] = data["retriever"]
agents_collection.insert_one(new_api_key_data)
shared_conversations_collections.insert_one(
{
"uuid": explicit_binary,
"conversation_id": {
"$ref": "conversations",
"$id": ObjectId(conversation_id),
},
"isPromptable": is_promptable,
"first_n_queries": current_n_queries,
"user": user,
"api_key": api_uuid,
}
)
return make_response(
jsonify(
{
"success": True,
"identifier": str(explicit_binary.as_uuid()),
}
),
201,
)
pre_existing = shared_conversations_collections.find_one(
{
"conversation_id": DBRef(
"conversations", ObjectId(conversation_id)
),
"isPromptable": is_promptable,
"first_n_queries": current_n_queries,
"user": user,
}
)
if pre_existing is not None:
return make_response(
jsonify(
{
"success": True,
"identifier": str(pre_existing["uuid"].as_uuid()),
}
),
200,
)
else:
shared_conversations_collections.insert_one(
{
"uuid": explicit_binary,
"conversation_id": {
"$ref": "conversations",
"$id": ObjectId(conversation_id),
},
"isPromptable": is_promptable,
"first_n_queries": current_n_queries,
"user": user,
}
)
return make_response(
jsonify(
{"success": True, "identifier": str(explicit_binary.as_uuid())}
),
201,
)
except Exception as err:
current_app.logger.error(
f"Error sharing conversation: {err}", exc_info=True
)
return make_response(jsonify({"success": False}), 400)
@sharing_ns.route("/shared_conversation/<string:identifier>")
class GetPubliclySharedConversations(Resource):
@api.doc(description="Get publicly shared conversations by identifier")
def get(self, identifier: str):
try:
query_uuid = Binary.from_uuid(
uuid.UUID(identifier), UuidRepresentation.STANDARD
)
shared = shared_conversations_collections.find_one({"uuid": query_uuid})
conversation_queries = []
if (
shared
and "conversation_id" in shared
and isinstance(shared["conversation_id"], DBRef)
):
conversation_ref = shared["conversation_id"]
conversation = db.dereference(conversation_ref)
if conversation is None:
return make_response(
jsonify(
{
"success": False,
"error": "might have broken url or the conversation does not exist",
}
),
404,
)
                conversation_queries = conversation["queries"][
                    : shared["first_n_queries"]
                ]
for query in conversation_queries:
if "attachments" in query and query["attachments"]:
attachment_details = []
for attachment_id in query["attachments"]:
try:
attachment = attachments_collection.find_one(
{"_id": ObjectId(attachment_id)}
)
if attachment:
attachment_details.append(
{
"id": str(attachment["_id"]),
"fileName": attachment.get(
"filename", "Unknown file"
),
}
)
except Exception as e:
current_app.logger.error(
f"Error retrieving attachment {attachment_id}: {e}",
exc_info=True,
)
query["attachments"] = attachment_details
else:
return make_response(
jsonify(
{
"success": False,
"error": "might have broken url or the conversation does not exist",
}
),
404,
)
date = conversation["_id"].generation_time.isoformat()
res = {
"success": True,
"queries": conversation_queries,
"title": conversation["name"],
"timestamp": date,
}
if shared["isPromptable"] and "api_key" in shared:
res["api_key"] = shared["api_key"]
return make_response(jsonify(res), 200)
except Exception as err:
current_app.logger.error(
f"Error getting shared conversation: {err}", exc_info=True
)
return make_response(jsonify({"success": False}), 400)
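
Fetching a shared conversation requires no authentication; the identifier is the UUID string returned by `/api/share`. Sketch (placeholder host and UUID):

```python
import requests

identifier = "123e4567-e89b-12d3-a456-426614174000"  # placeholder from /api/share
resp = requests.get(
    f"http://localhost:7091/api/shared_conversation/{identifier}",  # placeholder host
    timeout=10,
)
shared = resp.json()
print(shared["title"], "-", len(shared["queries"]), "queries")
```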

View File

@@ -1,7 +0,0 @@
"""Sources module."""
from .chunks import sources_chunks_ns
from .routes import sources_ns
from .upload import sources_upload_ns
__all__ = ["sources_ns", "sources_chunks_ns", "sources_upload_ns"]

View File

@@ -1,278 +0,0 @@
"""Source document management chunk management."""
from bson.objectid import ObjectId
from flask import current_app, jsonify, make_response, request
from flask_restx import fields, Namespace, Resource
from application.api import api
from application.api.user.base import get_vector_store, sources_collection
from application.utils import check_required_fields, num_tokens_from_string
sources_chunks_ns = Namespace(
"sources", description="Source document management operations", path="/api"
)
@sources_chunks_ns.route("/get_chunks")
class GetChunks(Resource):
@api.doc(
description="Retrieves chunks from a document, optionally filtered by file path and search term",
params={
"id": "The document ID",
"page": "Page number for pagination",
"per_page": "Number of chunks per page",
"path": "Optional: Filter chunks by relative file path",
"search": "Optional: Search term to filter chunks by title or content",
},
)
def get(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
doc_id = request.args.get("id")
page = int(request.args.get("page", 1))
per_page = int(request.args.get("per_page", 10))
path = request.args.get("path")
search_term = request.args.get("search", "").strip().lower()
if not ObjectId.is_valid(doc_id):
return make_response(jsonify({"error": "Invalid doc_id"}), 400)
doc = sources_collection.find_one({"_id": ObjectId(doc_id), "user": user})
if not doc:
return make_response(
jsonify({"error": "Document not found or access denied"}), 404
)
try:
store = get_vector_store(doc_id)
chunks = store.get_chunks()
filtered_chunks = []
for chunk in chunks:
metadata = chunk.get("metadata", {})
# Filter by path if provided
if path:
chunk_source = metadata.get("source", "")
# Check if the chunk's source matches the requested path
if not chunk_source or not chunk_source.endswith(path):
continue
# Filter by search term if provided
if search_term:
text_match = search_term in chunk.get("text", "").lower()
title_match = search_term in metadata.get("title", "").lower()
if not (text_match or title_match):
continue
filtered_chunks.append(chunk)
chunks = filtered_chunks
total_chunks = len(chunks)
start = (page - 1) * per_page
end = start + per_page
paginated_chunks = chunks[start:end]
return make_response(
jsonify(
{
"page": page,
"per_page": per_page,
"total": total_chunks,
"chunks": paginated_chunks,
"path": path if path else None,
"search": search_term if search_term else None,
}
),
200,
)
except Exception as e:
current_app.logger.error(f"Error getting chunks: {e}", exc_info=True)
return make_response(jsonify({"success": False}), 500)
@sources_chunks_ns.route("/add_chunk")
class AddChunk(Resource):
@api.expect(
api.model(
"AddChunkModel",
{
"id": fields.String(required=True, description="Document ID"),
"text": fields.String(required=True, description="Text of the chunk"),
"metadata": fields.Raw(
required=False,
description="Metadata associated with the chunk",
),
},
)
)
@api.doc(
description="Adds a new chunk to the document",
)
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
data = request.get_json()
required_fields = ["id", "text"]
missing_fields = check_required_fields(data, required_fields)
if missing_fields:
return missing_fields
doc_id = data.get("id")
text = data.get("text")
metadata = data.get("metadata", {})
token_count = num_tokens_from_string(text)
metadata["token_count"] = token_count
if not ObjectId.is_valid(doc_id):
return make_response(jsonify({"error": "Invalid doc_id"}), 400)
doc = sources_collection.find_one({"_id": ObjectId(doc_id), "user": user})
if not doc:
return make_response(
jsonify({"error": "Document not found or access denied"}), 404
)
try:
store = get_vector_store(doc_id)
chunk_id = store.add_chunk(text, metadata)
return make_response(
jsonify({"message": "Chunk added successfully", "chunk_id": chunk_id}),
201,
)
except Exception as e:
current_app.logger.error(f"Error adding chunk: {e}", exc_info=True)
return make_response(jsonify({"success": False}), 500)
@sources_chunks_ns.route("/delete_chunk")
class DeleteChunk(Resource):
@api.doc(
description="Deletes a specific chunk from the document.",
params={"id": "The document ID", "chunk_id": "The ID of the chunk to delete"},
)
def delete(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
doc_id = request.args.get("id")
chunk_id = request.args.get("chunk_id")
if not ObjectId.is_valid(doc_id):
return make_response(jsonify({"error": "Invalid doc_id"}), 400)
doc = sources_collection.find_one({"_id": ObjectId(doc_id), "user": user})
if not doc:
return make_response(
jsonify({"error": "Document not found or access denied"}), 404
)
try:
store = get_vector_store(doc_id)
deleted = store.delete_chunk(chunk_id)
if deleted:
return make_response(
jsonify({"message": "Chunk deleted successfully"}), 200
)
else:
return make_response(
jsonify({"message": "Chunk not found or could not be deleted"}),
404,
)
except Exception as e:
current_app.logger.error(f"Error deleting chunk: {e}", exc_info=True)
return make_response(jsonify({"success": False}), 500)
@sources_chunks_ns.route("/update_chunk")
class UpdateChunk(Resource):
@api.expect(
api.model(
"UpdateChunkModel",
{
"id": fields.String(required=True, description="Document ID"),
"chunk_id": fields.String(
required=True, description="Chunk ID to update"
),
"text": fields.String(
required=False, description="New text of the chunk"
),
"metadata": fields.Raw(
required=False,
description="Updated metadata associated with the chunk",
),
},
)
)
@api.doc(
description="Updates an existing chunk in the document.",
)
def put(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
data = request.get_json()
required_fields = ["id", "chunk_id"]
missing_fields = check_required_fields(data, required_fields)
if missing_fields:
return missing_fields
doc_id = data.get("id")
chunk_id = data.get("chunk_id")
text = data.get("text")
metadata = data.get("metadata")
if text is not None:
token_count = num_tokens_from_string(text)
if metadata is None:
metadata = {}
metadata["token_count"] = token_count
if not ObjectId.is_valid(doc_id):
return make_response(jsonify({"error": "Invalid doc_id"}), 400)
doc = sources_collection.find_one({"_id": ObjectId(doc_id), "user": user})
if not doc:
return make_response(
jsonify({"error": "Document not found or access denied"}), 404
)
try:
store = get_vector_store(doc_id)
chunks = store.get_chunks()
existing_chunk = next((c for c in chunks if c["doc_id"] == chunk_id), None)
if not existing_chunk:
return make_response(jsonify({"error": "Chunk not found"}), 404)
new_text = text if text is not None else existing_chunk["text"]
if metadata is not None:
new_metadata = existing_chunk["metadata"].copy()
new_metadata.update(metadata)
else:
new_metadata = existing_chunk["metadata"].copy()
if text is not None:
new_metadata["token_count"] = num_tokens_from_string(new_text)
try:
new_chunk_id = store.add_chunk(new_text, new_metadata)
deleted = store.delete_chunk(chunk_id)
if not deleted:
current_app.logger.warning(
f"Failed to delete old chunk {chunk_id}, but new chunk {new_chunk_id} was created"
)
return make_response(
jsonify(
{
"message": "Chunk updated successfully",
"chunk_id": new_chunk_id,
"original_chunk_id": chunk_id,
}
),
200,
)
except Exception as add_error:
current_app.logger.error(f"Failed to add updated chunk: {add_error}")
return make_response(
jsonify({"error": "Failed to update chunk - addition failed"}), 500
)
except Exception as e:
current_app.logger.error(f"Error updating chunk: {e}", exc_info=True)
return make_response(jsonify({"success": False}), 500)
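
Because an update re-adds the chunk and deletes the old one, the `chunk_id` in the response differs from the one sent, so clients should track the new ID. Sketch (placeholder host, token, and IDs):

```python
import requests

resp = requests.put(
    "http://localhost:7091/api/update_chunk",  # placeholder host
    json={
        "id": "<document ObjectId>",  # placeholder
        "chunk_id": "<existing chunk id>",  # placeholder
        "text": "Revised chunk text",
    },
    headers={"Authorization": "Bearer <jwt>"},  # placeholder token
    timeout=10,
)
new_chunk_id = resp.json()["chunk_id"]  # replaces the old chunk id
```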

View File

@@ -1,350 +0,0 @@
"""Source document management routes."""
import json
import math
import os
from bson.objectid import ObjectId
from flask import current_app, jsonify, make_response, redirect, request
from flask_restx import fields, Namespace, Resource
from werkzeug.utils import secure_filename
from application.api import api
from application.api.user.base import sources_collection
from application.core.settings import settings
from application.storage.storage_creator import StorageCreator
from application.utils import check_required_fields
from application.vectorstore.vector_creator import VectorCreator
sources_ns = Namespace(
"sources", description="Source document management operations", path="/api"
)
@sources_ns.route("/sources")
class CombinedJson(Resource):
@api.doc(description="Provide JSON file with combined available indexes")
def get(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
data = [
{
"name": "Default",
"date": "default",
"model": settings.EMBEDDINGS_NAME,
"location": "remote",
"tokens": "",
"retriever": "classic",
}
]
try:
for index in sources_collection.find({"user": user}).sort("date", -1):
data.append(
{
"id": str(index["_id"]),
"name": index.get("name"),
"date": index.get("date"),
"model": settings.EMBEDDINGS_NAME,
"location": "local",
"tokens": index.get("tokens", ""),
"retriever": index.get("retriever", "classic"),
"syncFrequency": index.get("sync_frequency", ""),
"is_nested": bool(index.get("directory_structure")),
"type": index.get(
"type", "file"
), # Add type field with default "file"
}
)
except Exception as err:
current_app.logger.error(f"Error retrieving sources: {err}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify(data), 200)
@sources_ns.route("/sources/paginated")
class PaginatedSources(Resource):
@api.doc(description="Get document with pagination, sorting and filtering")
def get(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
sort_field = request.args.get("sort", "date") # Default to 'date'
sort_order = request.args.get("order", "desc") # Default to 'desc'
page = int(request.args.get("page", 1)) # Default to 1
rows_per_page = int(request.args.get("rows", 10)) # Default to 10
# Strip leading/trailing whitespace from the search term used to filter documents
search_term = request.args.get("search", "").strip()
# Prepare query for filtering
query = {"user": user}
if search_term:
query["name"] = {
"$regex": search_term,
"$options": "i", # using case-insensitive search
}
total_documents = sources_collection.count_documents(query)
total_pages = max(1, math.ceil(total_documents / rows_per_page))
page = min(max(1, page), total_pages)  # Clamp page to the valid range
sort_order = 1 if sort_order == "asc" else -1
skip = (page - 1) * rows_per_page
try:
documents = (
sources_collection.find(query)
.sort(sort_field, sort_order)
.skip(skip)
.limit(rows_per_page)
)
paginated_docs = []
for doc in documents:
doc_data = {
"id": str(doc["_id"]),
"name": doc.get("name", ""),
"date": doc.get("date", ""),
"model": settings.EMBEDDINGS_NAME,
"location": "local",
"tokens": doc.get("tokens", ""),
"retriever": doc.get("retriever", "classic"),
"syncFrequency": doc.get("sync_frequency", ""),
"isNested": bool(doc.get("directory_structure")),
"type": doc.get("type", "file"),
}
paginated_docs.append(doc_data)
response = {
"total": total_documents,
"totalPages": total_pages,
"currentPage": page,
"paginated": paginated_docs,
}
return make_response(jsonify(response), 200)
except Exception as err:
current_app.logger.error(
f"Error retrieving paginated sources: {err}", exc_info=True
)
return make_response(jsonify({"success": False}), 400)
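# Hypothetical sketch of the paginated listing with sorting, paging and the
# case-insensitive name filter; base URL, JWT and the search term are assumed.
import requests

resp = requests.get(
    "http://127.0.0.1:7091/api/sources/paginated",
    headers={"Authorization": "Bearer <your-jwt>"},
    params={"sort": "date", "order": "desc", "page": 1, "rows": 10, "search": "docs"},
    timeout=30,
)
page = resp.json()
print(f"page {page['currentPage']}/{page['totalPages']} of {page['total']} docs")
for doc in page["paginated"]:
    print(doc["id"], doc["name"], doc["type"])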
@sources_ns.route("/docs_check")
class CheckDocs(Resource):
check_docs_model = api.model(
"CheckDocsModel",
{"docs": fields.String(required=True, description="Document name")},
)
@api.expect(check_docs_model)
@api.doc(description="Check if document exists")
def post(self):
data = request.get_json()
required_fields = ["docs"]
missing_fields = check_required_fields(data, required_fields)
if missing_fields:
return missing_fields
try:
vectorstore = "vectors/" + secure_filename(data["docs"])
if os.path.exists(vectorstore) or data["docs"] == "default":
return {"status": "exists"}, 200
except Exception as err:
current_app.logger.error(f"Error checking document: {err}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"status": "not found"}), 404)
@sources_ns.route("/delete_by_ids")
class DeleteByIds(Resource):
@api.doc(
description="Deletes documents from the vector store by IDs",
params={"path": "Comma-separated list of IDs"},
)
def get(self):
ids = request.args.get("path")
if not ids:
return make_response(
jsonify({"success": False, "message": "Missing required fields"}), 400
)
try:
result = sources_collection.delete_index(ids=ids)
if result:
return make_response(jsonify({"success": True}), 200)
except Exception as err:
current_app.logger.error(f"Error deleting indexes: {err}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"success": False}), 400)
@sources_ns.route("/delete_old")
class DeleteOldIndexes(Resource):
@api.doc(
description="Deletes old indexes and associated files",
params={"source_id": "The source ID to delete"},
)
def get(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
source_id = request.args.get("source_id")
if not source_id:
return make_response(
jsonify({"success": False, "message": "Missing required fields"}), 400
)
doc = sources_collection.find_one(
{"_id": ObjectId(source_id), "user": decoded_token.get("sub")}
)
if not doc:
return make_response(jsonify({"status": "not found"}), 404)
storage = StorageCreator.get_storage()
try:
# Delete vector index
if settings.VECTOR_STORE == "faiss":
index_path = f"indexes/{str(doc['_id'])}"
if storage.file_exists(f"{index_path}/index.faiss"):
storage.delete_file(f"{index_path}/index.faiss")
if storage.file_exists(f"{index_path}/index.pkl"):
storage.delete_file(f"{index_path}/index.pkl")
else:
vectorstore = VectorCreator.create_vectorstore(
settings.VECTOR_STORE, source_id=str(doc["_id"])
)
vectorstore.delete_index()
if "file_path" in doc and doc["file_path"]:
file_path = doc["file_path"]
if storage.is_directory(file_path):
files = storage.list_files(file_path)
for f in files:
storage.delete_file(f)
else:
storage.delete_file(file_path)
except FileNotFoundError:
pass
except Exception as err:
current_app.logger.error(
f"Error deleting files and indexes: {err}", exc_info=True
)
return make_response(jsonify({"success": False}), 400)
sources_collection.delete_one({"_id": ObjectId(source_id)})
return make_response(jsonify({"success": True}), 200)
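# Hypothetical sketch: deleting a source along with its vector index and any
# uploaded files; the source ID and connection details are placeholders.
import requests

resp = requests.get(
    "http://127.0.0.1:7091/api/delete_old",
    headers={"Authorization": "Bearer <your-jwt>"},
    params={"source_id": "<source-object-id>"},
    timeout=60,
)
assert resp.json().get("success") is True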
@sources_ns.route("/combine")
class RedirectToSources(Resource):
@api.doc(
description="Redirects /api/combine to /api/sources for backward compatibility"
)
def get(self):
return redirect("/api/sources", code=301)
@sources_ns.route("/manage_sync")
class ManageSync(Resource):
manage_sync_model = api.model(
"ManageSyncModel",
{
"source_id": fields.String(required=True, description="Source ID"),
"sync_frequency": fields.String(
required=True,
description="Sync frequency (never, daily, weekly, monthly)",
),
},
)
@api.expect(manage_sync_model)
@api.doc(description="Manage sync frequency for sources")
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
data = request.get_json()
required_fields = ["source_id", "sync_frequency"]
missing_fields = check_required_fields(data, required_fields)
if missing_fields:
return missing_fields
source_id = data["source_id"]
sync_frequency = data["sync_frequency"]
if sync_frequency not in ["never", "daily", "weekly", "monthly"]:
return make_response(
jsonify({"success": False, "message": "Invalid frequency"}), 400
)
update_data = {"$set": {"sync_frequency": sync_frequency}}
try:
sources_collection.update_one(
{
"_id": ObjectId(source_id),
"user": user,
},
update_data,
)
except Exception as err:
current_app.logger.error(
f"Error updating sync frequency: {err}", exc_info=True
)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"success": True}), 200)
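# Hypothetical sketch: scheduling a weekly sync for a source. Only "never",
# "daily", "weekly" and "monthly" are accepted; ID and URL are placeholders.
import requests

resp = requests.post(
    "http://127.0.0.1:7091/api/manage_sync",
    headers={"Authorization": "Bearer <your-jwt>"},
    json={"source_id": "<source-object-id>", "sync_frequency": "weekly"},
    timeout=30,
)
assert resp.json().get("success") is True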
@sources_ns.route("/directory_structure")
class DirectoryStructure(Resource):
@api.doc(
description="Get the directory structure for a document",
params={"id": "The document ID"},
)
def get(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
doc_id = request.args.get("id")
if not doc_id:
return make_response(jsonify({"error": "Document ID is required"}), 400)
if not ObjectId.is_valid(doc_id):
return make_response(jsonify({"error": "Invalid document ID"}), 400)
try:
doc = sources_collection.find_one({"_id": ObjectId(doc_id), "user": user})
if not doc:
return make_response(
jsonify({"error": "Document not found or access denied"}), 404
)
directory_structure = doc.get("directory_structure", {})
base_path = doc.get("file_path", "")
provider = None
remote_data = doc.get("remote_data")
try:
if isinstance(remote_data, str) and remote_data:
remote_data_obj = json.loads(remote_data)
provider = remote_data_obj.get("provider")
except Exception as e:
current_app.logger.warning(
f"Failed to parse remote_data for doc {doc_id}: {e}"
)
return make_response(
jsonify(
{
"success": True,
"directory_structure": directory_structure,
"base_path": base_path,
"provider": provider,
}
),
200,
)
except Exception as e:
current_app.logger.error(
f"Error retrieving directory structure: {e}", exc_info=True
)
return make_response(jsonify({"success": False, "error": str(e)}), 500)
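# Hypothetical sketch: fetching the stored directory structure of a nested
# source; "provider" is only populated for remote connector sources.
import requests

resp = requests.get(
    "http://127.0.0.1:7091/api/directory_structure",
    headers={"Authorization": "Bearer <your-jwt>"},
    params={"id": "<source-object-id>"},
    timeout=30,
)
info = resp.json()
print(info["base_path"], info.get("provider"), len(info["directory_structure"]))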

View File

@@ -1,572 +0,0 @@
"""Source document management upload functionality."""
import json
import os
import tempfile
import zipfile
from bson.objectid import ObjectId
from flask import current_app, jsonify, make_response, request
from flask_restx import fields, Namespace, Resource
from application.api import api
from application.api.user.base import sources_collection
from application.api.user.tasks import ingest, ingest_connector_task, ingest_remote
from application.core.settings import settings
from application.parser.connectors.connector_creator import ConnectorCreator
from application.storage.storage_creator import StorageCreator
from application.utils import check_required_fields, safe_filename
sources_upload_ns = Namespace(
"sources", description="Source document management operations", path="/api"
)
@sources_upload_ns.route("/upload")
class UploadFile(Resource):
@api.expect(
api.model(
"UploadModel",
{
"user": fields.String(required=True, description="User ID"),
"name": fields.String(required=True, description="Job name"),
"file": fields.Raw(required=True, description="File(s) to upload"),
},
)
)
@api.doc(
description="Uploads a file to be vectorized and indexed",
)
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
data = request.form
files = request.files.getlist("file")
required_fields = ["user", "name"]
missing_fields = check_required_fields(data, required_fields)
if missing_fields or not files or all(file.filename == "" for file in files):
return make_response(
jsonify(
{
"status": "error",
"message": "Missing required fields or files",
}
),
400,
)
user = decoded_token.get("sub")
job_name = request.form["name"]
# Create safe versions for filesystem operations
safe_user = safe_filename(user)
dir_name = safe_filename(job_name)
base_path = f"{settings.UPLOAD_FOLDER}/{safe_user}/{dir_name}"
try:
storage = StorageCreator.get_storage()
for file in files:
original_filename = file.filename
safe_file = safe_filename(original_filename)
with tempfile.TemporaryDirectory() as temp_dir:
temp_file_path = os.path.join(temp_dir, safe_file)
file.save(temp_file_path)
if zipfile.is_zipfile(temp_file_path):
try:
with zipfile.ZipFile(temp_file_path, "r") as zip_ref:
zip_ref.extractall(path=temp_dir)
# Walk through extracted files and upload them
for root, _, files in os.walk(temp_dir):
for extracted_file in files:
if (
os.path.join(root, extracted_file)
== temp_file_path
):
continue
rel_path = os.path.relpath(
os.path.join(root, extracted_file), temp_dir
)
storage_path = f"{base_path}/{rel_path}"
with open(
os.path.join(root, extracted_file), "rb"
) as f:
storage.save_file(f, storage_path)
except Exception as e:
current_app.logger.error(
f"Error extracting zip: {e}", exc_info=True
)
# If zip extraction fails, save the original zip file
file_path = f"{base_path}/{safe_file}"
with open(temp_file_path, "rb") as f:
storage.save_file(f, file_path)
else:
# For non-zip files, save directly
file_path = f"{base_path}/{safe_file}"
with open(temp_file_path, "rb") as f:
storage.save_file(f, file_path)
task = ingest.delay(
settings.UPLOAD_FOLDER,
[
".rst",
".md",
".pdf",
".txt",
".docx",
".csv",
".epub",
".html",
".mdx",
".json",
".xlsx",
".pptx",
".png",
".jpg",
".jpeg",
],
job_name,
user,
file_path=base_path,
filename=dir_name,
)
except Exception as err:
current_app.logger.error(f"Error uploading file: {err}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"success": True, "task_id": task.id}), 200)
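# Hypothetical sketch: uploading files for ingestion. Zip archives are
# extracted server-side; other files are stored as-is. Note that "user" must
# be present in the form even though the effective user comes from the token.
import requests

with open("manual.pdf", "rb") as f1, open("notes.md", "rb") as f2:
    resp = requests.post(
        "http://127.0.0.1:7091/api/upload",
        headers={"Authorization": "Bearer <your-jwt>"},
        data={"user": "local", "name": "my-docs"},
        files=[("file", f1), ("file", f2)],
        timeout=120,
    )
task_id = resp.json()["task_id"]  # poll via /api/task_status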
@sources_upload_ns.route("/remote")
class UploadRemote(Resource):
@api.expect(
api.model(
"RemoteUploadModel",
{
"user": fields.String(required=True, description="User ID"),
"source": fields.String(
required=True, description="Source of the data"
),
"name": fields.String(required=True, description="Job name"),
"data": fields.String(required=True, description="Data to process"),
"repo_url": fields.String(description="GitHub repository URL"),
},
)
)
@api.doc(
description="Uploads remote source for vectorization",
)
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
data = request.form
required_fields = ["user", "source", "name", "data"]
missing_fields = check_required_fields(data, required_fields)
if missing_fields:
return missing_fields
try:
config = json.loads(data["data"])
source_data = None
if data["source"] == "github":
source_data = config.get("repo_url")
elif data["source"] in ["crawler", "url"]:
source_data = config.get("url")
elif data["source"] == "reddit":
source_data = config
elif data["source"] in ConnectorCreator.get_supported_connectors():
session_token = config.get("session_token")
if not session_token:
return make_response(
jsonify(
{
"success": False,
"error": f"Missing session_token in {data['source']} configuration",
}
),
400,
)
# Process file_ids
file_ids = config.get("file_ids", [])
if isinstance(file_ids, str):
file_ids = [fid.strip() for fid in file_ids.split(",") if fid.strip()]
elif not isinstance(file_ids, list):
file_ids = []
# Process folder_ids
folder_ids = config.get("folder_ids", [])
if isinstance(folder_ids, str):
folder_ids = [
fid.strip() for fid in folder_ids.split(",") if fid.strip()
]
elif not isinstance(folder_ids, list):
folder_ids = []
config["file_ids"] = file_ids
config["folder_ids"] = folder_ids
task = ingest_connector_task.delay(
job_name=data["name"],
user=decoded_token.get("sub"),
source_type=data["source"],
session_token=session_token,
file_ids=file_ids,
folder_ids=folder_ids,
recursive=config.get("recursive", False),
retriever=config.get("retriever", "classic"),
)
return make_response(
jsonify({"success": True, "task_id": task.id}), 200
)
task = ingest_remote.delay(
source_data=source_data,
job_name=data["name"],
user=decoded_token.get("sub"),
loader=data["source"],
)
except Exception as err:
current_app.logger.error(
f"Error uploading remote source: {err}", exc_info=True
)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"success": True, "task_id": task.id}), 200)
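# Hypothetical sketch: ingesting a GitHub repository as a remote source. The
# "data" field is a JSON-encoded config string; repo_url uses the owner/repo
# form that the GitHub loader feeds into the api.github.com/repos/... URL.
import json
import requests

resp = requests.post(
    "http://127.0.0.1:7091/api/remote",
    headers={"Authorization": "Bearer <your-jwt>"},
    data={
        "user": "local",
        "name": "docsgpt-repo",
        "source": "github",
        "data": json.dumps({"repo_url": "arc53/DocsGPT"}),
    },
    timeout=60,
)
print(resp.json())  # {"success": true, "task_id": "..."}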
@sources_upload_ns.route("/manage_source_files")
class ManageSourceFiles(Resource):
@api.expect(
api.model(
"ManageSourceFilesModel",
{
"source_id": fields.String(
required=True, description="Source ID to modify"
),
"operation": fields.String(
required=True,
description="Operation: 'add', 'remove', or 'remove_directory'",
),
"file_paths": fields.List(
fields.String,
required=False,
description="File paths to remove (for remove operation)",
),
"directory_path": fields.String(
required=False,
description="Directory path to remove (for remove_directory operation)",
),
"file": fields.Raw(
required=False, description="Files to add (for add operation)"
),
"parent_dir": fields.String(
required=False,
description="Parent directory path relative to source root",
),
},
)
)
@api.doc(
description="Add files, remove files, or remove directories from an existing source",
)
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(
jsonify({"success": False, "message": "Unauthorized"}), 401
)
user = decoded_token.get("sub")
source_id = request.form.get("source_id")
operation = request.form.get("operation")
if not source_id or not operation:
return make_response(
jsonify(
{
"success": False,
"message": "source_id and operation are required",
}
),
400,
)
if operation not in ["add", "remove", "remove_directory"]:
return make_response(
jsonify(
{
"success": False,
"message": "operation must be 'add', 'remove', or 'remove_directory'",
}
),
400,
)
try:
ObjectId(source_id)
except Exception:
return make_response(
jsonify({"success": False, "message": "Invalid source ID format"}), 400
)
try:
source = sources_collection.find_one(
{"_id": ObjectId(source_id), "user": user}
)
if not source:
return make_response(
jsonify(
{
"success": False,
"message": "Source not found or access denied",
}
),
404,
)
except Exception as err:
current_app.logger.error(f"Error finding source: {err}", exc_info=True)
return make_response(
jsonify({"success": False, "message": "Database error"}), 500
)
try:
storage = StorageCreator.get_storage()
source_file_path = source.get("file_path", "")
parent_dir = request.form.get("parent_dir", "")
if parent_dir and (parent_dir.startswith("/") or ".." in parent_dir):
return make_response(
jsonify(
{"success": False, "message": "Invalid parent directory path"}
),
400,
)
if operation == "add":
files = request.files.getlist("file")
if not files or all(file.filename == "" for file in files):
return make_response(
jsonify(
{
"success": False,
"message": "No files provided for add operation",
}
),
400,
)
added_files = []
target_dir = source_file_path
if parent_dir:
target_dir = f"{source_file_path}/{parent_dir}"
for file in files:
if file.filename:
safe_filename_str = safe_filename(file.filename)
file_path = f"{target_dir}/{safe_filename_str}"
# Save file to storage
storage.save_file(file, file_path)
added_files.append(safe_filename_str)
# Trigger re-ingestion pipeline
from application.api.user.tasks import reingest_source_task
task = reingest_source_task.delay(source_id=source_id, user=user)
return make_response(
jsonify(
{
"success": True,
"message": f"Added {len(added_files)} files",
"added_files": added_files,
"parent_dir": parent_dir,
"reingest_task_id": task.id,
}
),
200,
)
elif operation == "remove":
file_paths_str = request.form.get("file_paths")
if not file_paths_str:
return make_response(
jsonify(
{
"success": False,
"message": "file_paths required for remove operation",
}
),
400,
)
try:
file_paths = (
json.loads(file_paths_str)
if isinstance(file_paths_str, str)
else file_paths_str
)
except Exception:
return make_response(
jsonify(
{"success": False, "message": "Invalid file_paths format"}
),
400,
)
# Remove files from storage and directory structure
removed_files = []
for file_path in file_paths:
full_path = f"{source_file_path}/{file_path}"
# Remove from storage
if storage.file_exists(full_path):
storage.delete_file(full_path)
removed_files.append(file_path)
# Trigger re-ingestion pipeline
from application.api.user.tasks import reingest_source_task
task = reingest_source_task.delay(source_id=source_id, user=user)
return make_response(
jsonify(
{
"success": True,
"message": f"Removed {len(removed_files)} files",
"removed_files": removed_files,
"reingest_task_id": task.id,
}
),
200,
)
elif operation == "remove_directory":
directory_path = request.form.get("directory_path")
if not directory_path:
return make_response(
jsonify(
{
"success": False,
"message": "directory_path required for remove_directory operation",
}
),
400,
)
# Validate directory path (prevent path traversal)
if directory_path.startswith("/") or ".." in directory_path:
current_app.logger.warning(
f"Invalid directory path attempted for removal. "
f"User: {user}, Source ID: {source_id}, Directory path: {directory_path}"
)
return make_response(
jsonify(
{"success": False, "message": "Invalid directory path"}
),
400,
)
full_directory_path = (
f"{source_file_path}/{directory_path}"
if directory_path
else source_file_path
)
if not storage.is_directory(full_directory_path):
current_app.logger.warning(
f"Directory not found or is not a directory for removal. "
f"User: {user}, Source ID: {source_id}, Directory path: {directory_path}, "
f"Full path: {full_directory_path}"
)
return make_response(
jsonify(
{
"success": False,
"message": "Directory not found or is not a directory",
}
),
404,
)
success = storage.remove_directory(full_directory_path)
if not success:
current_app.logger.error(
f"Failed to remove directory from storage. "
f"User: {user}, Source ID: {source_id}, Directory path: {directory_path}, "
f"Full path: {full_directory_path}"
)
return make_response(
jsonify(
{"success": False, "message": "Failed to remove directory"}
),
500,
)
current_app.logger.info(
f"Successfully removed directory. "
f"User: {user}, Source ID: {source_id}, Directory path: {directory_path}, "
f"Full path: {full_directory_path}"
)
# Trigger re-ingestion pipeline
from application.api.user.tasks import reingest_source_task
task = reingest_source_task.delay(source_id=source_id, user=user)
return make_response(
jsonify(
{
"success": True,
"message": f"Successfully removed directory: {directory_path}",
"removed_directory": directory_path,
"reingest_task_id": task.id,
}
),
200,
)
except Exception as err:
error_context = f"operation={operation}, user={user}, source_id={source_id}"
if operation == "remove_directory":
directory_path = request.form.get("directory_path", "")
error_context += f", directory_path={directory_path}"
elif operation == "remove":
file_paths_str = request.form.get("file_paths", "")
error_context += f", file_paths={file_paths_str}"
elif operation == "add":
parent_dir = request.form.get("parent_dir", "")
error_context += f", parent_dir={parent_dir}"
current_app.logger.error(
f"Error managing source files: {err} ({error_context})", exc_info=True
)
return make_response(
jsonify({"success": False, "message": "Operation failed"}), 500
)
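# Hypothetical sketch: removing two files from an existing source. Every
# successful add/remove/remove_directory call enqueues a re-ingestion task.
import json
import requests

resp = requests.post(
    "http://127.0.0.1:7091/api/manage_source_files",
    headers={"Authorization": "Bearer <your-jwt>"},
    data={
        "source_id": "<source-object-id>",
        "operation": "remove",
        "file_paths": json.dumps(["old/guide.pdf", "old/readme.md"]),
    },
    timeout=60,
)
print(resp.json().get("reingest_task_id"))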
@sources_upload_ns.route("/task_status")
class TaskStatus(Resource):
task_status_model = api.model(
"TaskStatusModel",
{"task_id": fields.String(required=True, description="Task ID")},
)
@api.expect(task_status_model)
@api.doc(description="Get celery job status")
def get(self):
task_id = request.args.get("task_id")
if not task_id:
return make_response(
jsonify({"success": False, "message": "Task ID is required"}), 400
)
try:
from application.celery_init import celery
task = celery.AsyncResult(task_id)
task_meta = task.info
current_app.logger.info(f"Task status: {task.status}")
if not isinstance(
task_meta, (dict, list, str, int, float, bool, type(None))
):
task_meta = str(task_meta) # Convert to a string representation
except Exception as err:
current_app.logger.error(f"Error getting task status: {err}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"status": task.status, "result": task_meta}), 200)
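# Hypothetical sketch: polling a Celery task until it reaches a terminal
# state. The status strings are Celery's own; the base URL is assumed.
import time
import requests

def wait_for_task(task_id: str, base_url: str = "http://127.0.0.1:7091") -> dict:
    while True:
        resp = requests.get(
            f"{base_url}/api/task_status", params={"task_id": task_id}, timeout=30
        )
        body = resp.json()
        if body.get("status") in ("SUCCESS", "FAILURE", "REVOKED"):
            return body
        time.sleep(2)  # avoid hammering the endpoint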

View File

@@ -5,8 +5,6 @@ from application.worker import (
agent_webhook_worker,
attachment_worker,
ingest_worker,
mcp_oauth,
mcp_oauth_status,
remote_worker,
sync_worker,
)
@@ -27,7 +25,6 @@ def ingest_remote(self, source_data, job_name, user, loader):
@celery.task(bind=True)
def reingest_source_task(self, source_id, user):
from application.worker import reingest_source_worker
resp = reingest_source_worker(self, source_id, user)
return resp
@@ -63,10 +60,9 @@ def ingest_connector_task(
retriever="classic",
operation_mode="upload",
doc_id=None,
sync_frequency="never",
sync_frequency="never"
):
from application.worker import ingest_connector
resp = ingest_connector(
self,
job_name,
@@ -79,7 +75,7 @@ def ingest_connector_task(
retriever=retriever,
operation_mode=operation_mode,
doc_id=doc_id,
sync_frequency=sync_frequency,
sync_frequency=sync_frequency
)
return resp
@@ -98,15 +94,3 @@ def setup_periodic_tasks(sender, **kwargs):
timedelta(days=30),
schedule_syncs.s("monthly"),
)
@celery.task(bind=True)
def mcp_oauth_task(self, config, user):
resp = mcp_oauth(self, config, user)
return resp
@celery.task(bind=True)
def mcp_oauth_status_task(self, task_id):
resp = mcp_oauth_status(self, task_id)
return resp

View File

@@ -1,6 +0,0 @@
"""Tools module."""
from .mcp import tools_mcp_ns
from .routes import tools_ns
__all__ = ["tools_ns", "tools_mcp_ns"]

View File

@@ -1,333 +0,0 @@
"""Tool management MCP server integration."""
import json
from urllib.parse import unquote
from bson.objectid import ObjectId
from flask import current_app, jsonify, make_response, redirect, request
from flask_restx import fields, Namespace, Resource
from application.agents.tools.mcp_tool import MCPOAuthManager, MCPTool
from application.api import api
from application.api.user.base import user_tools_collection
from application.cache import get_redis_instance
from application.security.encryption import encrypt_credentials
from application.utils import check_required_fields
tools_mcp_ns = Namespace("tools", description="Tool management operations", path="/api")
@tools_mcp_ns.route("/mcp_server/test")
class TestMCPServerConfig(Resource):
@api.expect(
api.model(
"MCPServerTestModel",
{
"config": fields.Raw(
required=True, description="MCP server configuration to test"
),
},
)
)
@api.doc(description="Test MCP server connection with provided configuration")
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
data = request.get_json()
required_fields = ["config"]
missing_fields = check_required_fields(data, required_fields)
if missing_fields:
return missing_fields
try:
config = data["config"]
auth_credentials = {}
auth_type = config.get("auth_type", "none")
if auth_type == "api_key" and "api_key" in config:
auth_credentials["api_key"] = config["api_key"]
if "api_key_header" in config:
auth_credentials["api_key_header"] = config["api_key_header"]
elif auth_type == "bearer" and "bearer_token" in config:
auth_credentials["bearer_token"] = config["bearer_token"]
elif auth_type == "basic":
if "username" in config:
auth_credentials["username"] = config["username"]
if "password" in config:
auth_credentials["password"] = config["password"]
test_config = config.copy()
test_config["auth_credentials"] = auth_credentials
mcp_tool = MCPTool(config=test_config, user_id=user)
result = mcp_tool.test_connection()
return make_response(jsonify(result), 200)
except Exception as e:
current_app.logger.error(f"Error testing MCP server: {e}", exc_info=True)
return make_response(
jsonify(
{"success": False, "error": f"Connection test failed: {str(e)}"}
),
500,
)
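# Hypothetical sketch: testing an MCP server configuration with API-key auth
# before saving it. The server URL, key and header name are placeholders;
# credentials stay in plaintext here and are only encrypted on save.
import requests

mcp_config = {
    "server_url": "https://mcp.example.com",
    "auth_type": "api_key",
    "api_key": "<key>",
    "api_key_header": "X-API-Key",
}
resp = requests.post(
    "http://127.0.0.1:7091/api/mcp_server/test",
    headers={"Authorization": "Bearer <your-jwt>"},
    json={"config": mcp_config},
    timeout=60,
)
print(resp.json())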
@tools_mcp_ns.route("/mcp_server/save")
class MCPServerSave(Resource):
@api.expect(
api.model(
"MCPServerSaveModel",
{
"id": fields.String(
required=False, description="Tool ID for updates (optional)"
),
"displayName": fields.String(
required=True, description="Display name for the MCP server"
),
"config": fields.Raw(
required=True, description="MCP server configuration"
),
"status": fields.Boolean(
required=False, default=True, description="Tool status"
),
},
)
)
@api.doc(description="Create or update MCP server with automatic tool discovery")
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
data = request.get_json()
required_fields = ["displayName", "config"]
missing_fields = check_required_fields(data, required_fields)
if missing_fields:
return missing_fields
try:
config = data["config"]
auth_credentials = {}
auth_type = config.get("auth_type", "none")
if auth_type == "api_key":
if "api_key" in config and config["api_key"]:
auth_credentials["api_key"] = config["api_key"]
if "api_key_header" in config:
auth_credentials["api_key_header"] = config["api_key_header"]
elif auth_type == "bearer":
if "bearer_token" in config and config["bearer_token"]:
auth_credentials["bearer_token"] = config["bearer_token"]
elif auth_type == "basic":
if "username" in config and config["username"]:
auth_credentials["username"] = config["username"]
if "password" in config and config["password"]:
auth_credentials["password"] = config["password"]
mcp_config = config.copy()
mcp_config["auth_credentials"] = auth_credentials
if auth_type == "oauth":
if not config.get("oauth_task_id"):
return make_response(
jsonify(
{
"success": False,
"error": "Connection not authorized. Please complete the OAuth authorization first.",
}
),
400,
)
redis_client = get_redis_instance()
manager = MCPOAuthManager(redis_client)
result = manager.get_oauth_status(config["oauth_task_id"])
if not result.get("status") == "completed":
return make_response(
jsonify(
{
"success": False,
"error": "OAuth failed or not completed. Please try authorizing again.",
}
),
400,
)
actions_metadata = result.get("tools", [])
elif auth_type == "none" or auth_credentials:
mcp_tool = MCPTool(config=mcp_config, user_id=user)
mcp_tool.discover_tools()
actions_metadata = mcp_tool.get_actions_metadata()
else:
raise Exception(
"No valid credentials provided for the selected authentication type"
)
storage_config = config.copy()
if auth_credentials:
encrypted_credentials_string = encrypt_credentials(
auth_credentials, user
)
storage_config["encrypted_credentials"] = encrypted_credentials_string
for field in [
"api_key",
"bearer_token",
"username",
"password",
"api_key_header",
]:
storage_config.pop(field, None)
transformed_actions = []
for action in actions_metadata:
action["active"] = True
if "parameters" in action:
if "properties" in action["parameters"]:
for param_name, param_details in action["parameters"][
"properties"
].items():
param_details["filled_by_llm"] = True
param_details["value"] = ""
transformed_actions.append(action)
tool_data = {
"name": "mcp_tool",
"displayName": data["displayName"],
"customName": data["displayName"],
"description": f"MCP Server: {storage_config.get('server_url', 'Unknown')}",
"config": storage_config,
"actions": transformed_actions,
"status": data.get("status", True),
"user": user,
}
tool_id = data.get("id")
if tool_id:
result = user_tools_collection.update_one(
{"_id": ObjectId(tool_id), "user": user, "name": "mcp_tool"},
{"$set": {k: v for k, v in tool_data.items() if k != "user"}},
)
if result.matched_count == 0:
return make_response(
jsonify(
{
"success": False,
"error": "Tool not found or access denied",
}
),
404,
)
response_data = {
"success": True,
"id": tool_id,
"message": f"MCP server updated successfully! Discovered {len(transformed_actions)} tools.",
"tools_count": len(transformed_actions),
}
else:
result = user_tools_collection.insert_one(tool_data)
tool_id = str(result.inserted_id)
response_data = {
"success": True,
"id": tool_id,
"message": f"MCP server created successfully! Discovered {len(transformed_actions)} tools.",
"tools_count": len(transformed_actions),
}
return make_response(jsonify(response_data), 200)
except Exception as e:
current_app.logger.error(f"Error saving MCP server: {e}", exc_info=True)
return make_response(
jsonify(
{"success": False, "error": f"Failed to save MCP server: {str(e)}"}
),
500,
)
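# Hypothetical sketch: saving an MCP server once the connection test passes.
# Tool discovery runs server-side; plaintext credentials are stripped and
# persisted as "encrypted_credentials". All values below are placeholders.
import requests

resp = requests.post(
    "http://127.0.0.1:7091/api/mcp_server/save",
    headers={"Authorization": "Bearer <your-jwt>"},
    json={
        "displayName": "Example MCP",
        "config": {
            "server_url": "https://mcp.example.com",
            "auth_type": "api_key",
            "api_key": "<key>",
        },
        "status": True,
    },
    timeout=120,
)
print(resp.json().get("tools_count"))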
@tools_mcp_ns.route("/mcp_server/callback")
class MCPOAuthCallback(Resource):
@api.expect(
api.model(
"MCPServerCallbackModel",
{
"code": fields.String(required=True, description="Authorization code"),
"state": fields.String(required=True, description="State parameter"),
"error": fields.String(
required=False, description="Error message (if any)"
),
},
)
)
@api.doc(
description="Handle OAuth callback by providing the authorization code and state"
)
def get(self):
code = request.args.get("code")
state = request.args.get("state")
error = request.args.get("error")
if error:
return redirect(
f"/api/connectors/callback-status?status=error&message=OAuth+error:+{error}.+Please+try+again+and+make+sure+to+grant+all+requested+permissions,+including+offline+access.&provider=mcp_tool"
)
if not code or not state:
return redirect(
"/api/connectors/callback-status?status=error&message=Authorization+code+or+state+not+provided.+Please+complete+the+authorization+process+and+make+sure+to+grant+offline+access.&provider=mcp_tool"
)
try:
redis_client = get_redis_instance()
if not redis_client:
return redirect(
"/api/connectors/callback-status?status=error&message=Internal+server+error:+Redis+not+available.&provider=mcp_tool"
)
code = unquote(code)
manager = MCPOAuthManager(redis_client)
success = manager.handle_oauth_callback(state, code, error)
if success:
return redirect(
"/api/connectors/callback-status?status=success&message=Authorization+code+received+successfully.+You+can+close+this+window.&provider=mcp_tool"
)
else:
return redirect(
"/api/connectors/callback-status?status=error&message=OAuth+callback+failed.&provider=mcp_tool"
)
except Exception as e:
current_app.logger.error(
f"Error handling MCP OAuth callback: {str(e)}", exc_info=True
)
return redirect(
f"/api/connectors/callback-status?status=error&message=Internal+server+error:+{str(e)}.&provider=mcp_tool"
)
@tools_mcp_ns.route("/mcp_server/oauth_status/<string:task_id>")
class MCPOAuthStatus(Resource):
def get(self, task_id):
"""
Get current status of OAuth flow.
Frontend should poll this endpoint periodically.
"""
try:
redis_client = get_redis_instance()
status_key = f"mcp_oauth_status:{task_id}"
status_data = redis_client.get(status_key)
if status_data:
status = json.loads(status_data)
return make_response(
jsonify({"success": True, "task_id": task_id, **status})
)
else:
return make_response(
jsonify(
{
"success": False,
"error": "Task not found or expired",
"task_id": task_id,
}
),
404,
)
except Exception as e:
current_app.logger.error(
f"Error getting OAuth status for task {task_id}: {str(e)}"
)
return make_response(
jsonify({"success": False, "error": str(e), "task_id": task_id}), 500
)
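# Hypothetical sketch of the polling the frontend is expected to do. Only
# "completed" is checked by the save endpoint; treating every other
# non-pending outcome as terminal is an assumption made here.
import time
import requests

def wait_for_oauth(task_id: str, base_url: str = "http://127.0.0.1:7091") -> dict:
    while True:
        resp = requests.get(
            f"{base_url}/api/mcp_server/oauth_status/{task_id}", timeout=30
        )
        body = resp.json()
        if not body.get("success") or body.get("status") == "completed":
            return body
        time.sleep(2)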

View File

@@ -1,415 +0,0 @@
"""Tool management routes."""
from bson.objectid import ObjectId
from flask import current_app, jsonify, make_response, request
from flask_restx import fields, Namespace, Resource
from application.agents.tools.tool_manager import ToolManager
from application.api import api
from application.api.user.base import user_tools_collection
from application.security.encryption import decrypt_credentials, encrypt_credentials
from application.utils import check_required_fields, validate_function_name
tool_config = {}
tool_manager = ToolManager(config=tool_config)
tools_ns = Namespace("tools", description="Tool management operations", path="/api")
@tools_ns.route("/available_tools")
class AvailableTools(Resource):
@api.doc(description="Get available tools for a user")
def get(self):
try:
tools_metadata = []
for tool_name, tool_instance in tool_manager.tools.items():
doc = (tool_instance.__doc__ or "").strip()
lines = doc.split("\n", 1)
name = lines[0].strip()
description = lines[1].strip() if len(lines) > 1 else ""
tools_metadata.append(
{
"name": tool_name,
"displayName": name,
"description": description,
"configRequirements": tool_instance.get_config_requirements(),
}
)
except Exception as err:
current_app.logger.error(
f"Error getting available tools: {err}", exc_info=True
)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"success": True, "data": tools_metadata}), 200)
@tools_ns.route("/get_tools")
class GetTools(Resource):
@api.doc(description="Get tools created by a user")
def get(self):
try:
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
tools = user_tools_collection.find({"user": user})
user_tools = []
for tool in tools:
tool["id"] = str(tool["_id"])
tool.pop("_id")
user_tools.append(tool)
except Exception as err:
current_app.logger.error(f"Error getting user tools: {err}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"success": True, "tools": user_tools}), 200)
@tools_ns.route("/create_tool")
class CreateTool(Resource):
@api.expect(
api.model(
"CreateToolModel",
{
"name": fields.String(required=True, description="Name of the tool"),
"displayName": fields.String(
required=True, description="Display name for the tool"
),
"description": fields.String(
required=True, description="Tool description"
),
"config": fields.Raw(
required=True, description="Configuration of the tool"
),
"customName": fields.String(
required=False, description="Custom name for the tool"
),
"status": fields.Boolean(
required=True, description="Status of the tool"
),
},
)
)
@api.doc(description="Create a new tool")
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
data = request.get_json()
required_fields = [
"name",
"displayName",
"description",
"config",
"status",
]
missing_fields = check_required_fields(data, required_fields)
if missing_fields:
return missing_fields
try:
tool_instance = tool_manager.tools.get(data["name"])
if not tool_instance:
return make_response(
jsonify({"success": False, "message": "Tool not found"}), 404
)
actions_metadata = tool_instance.get_actions_metadata()
transformed_actions = []
for action in actions_metadata:
action["active"] = True
if "parameters" in action:
if "properties" in action["parameters"]:
for param_name, param_details in action["parameters"][
"properties"
].items():
param_details["filled_by_llm"] = True
param_details["value"] = ""
transformed_actions.append(action)
except Exception as err:
current_app.logger.error(
f"Error getting tool actions: {err}", exc_info=True
)
return make_response(jsonify({"success": False}), 400)
try:
new_tool = {
"user": user,
"name": data["name"],
"displayName": data["displayName"],
"description": data["description"],
"customName": data.get("customName", ""),
"actions": transformed_actions,
"config": data["config"],
"status": data["status"],
}
resp = user_tools_collection.insert_one(new_tool)
new_id = str(resp.inserted_id)
except Exception as err:
current_app.logger.error(f"Error creating tool: {err}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"id": new_id}), 200)
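# Hypothetical sketch: instantiating a built-in tool for the current user.
# "name" must match a key registered in tool_manager.tools; the action list
# is derived server-side from the tool's metadata.
import requests

resp = requests.post(
    "http://127.0.0.1:7091/api/create_tool",
    headers={"Authorization": "Bearer <your-jwt>"},
    json={
        "name": "<registered-tool-name>",
        "displayName": "My Tool",
        "description": "Example tool instance",
        "config": {},
        "status": True,
    },
    timeout=30,
)
tool_id = resp.json()["id"]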
@tools_ns.route("/update_tool")
class UpdateTool(Resource):
@api.expect(
api.model(
"UpdateToolModel",
{
"id": fields.String(required=True, description="Tool ID"),
"name": fields.String(description="Name of the tool"),
"displayName": fields.String(description="Display name for the tool"),
"customName": fields.String(description="Custom name for the tool"),
"description": fields.String(description="Tool description"),
"config": fields.Raw(description="Configuration of the tool"),
"actions": fields.List(
fields.Raw, description="Actions the tool can perform"
),
"status": fields.Boolean(description="Status of the tool"),
},
)
)
@api.doc(description="Update a tool by ID")
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
data = request.get_json()
required_fields = ["id"]
missing_fields = check_required_fields(data, required_fields)
if missing_fields:
return missing_fields
try:
update_data = {}
if "name" in data:
update_data["name"] = data["name"]
if "displayName" in data:
update_data["displayName"] = data["displayName"]
if "customName" in data:
update_data["customName"] = data["customName"]
if "description" in data:
update_data["description"] = data["description"]
if "actions" in data:
update_data["actions"] = data["actions"]
if "config" in data:
if "actions" in data["config"]:
for action_name in list(data["config"]["actions"].keys()):
if not validate_function_name(action_name):
return make_response(
jsonify(
{
"success": False,
"message": f"Invalid function name '{action_name}'. Function names must match pattern '^[a-zA-Z0-9_-]+$'.",
"param": "tools[].function.name",
}
),
400,
)
tool_doc = user_tools_collection.find_one(
{"_id": ObjectId(data["id"]), "user": user}
)
if tool_doc and tool_doc.get("name") == "mcp_tool":
config = data["config"]
existing_config = tool_doc.get("config", {})
storage_config = existing_config.copy()
storage_config.update(config)
existing_credentials = {}
if "encrypted_credentials" in existing_config:
existing_credentials = decrypt_credentials(
existing_config["encrypted_credentials"], user
)
auth_credentials = existing_credentials.copy()
auth_type = storage_config.get("auth_type", "none")
if auth_type == "api_key":
if "api_key" in config and config["api_key"]:
auth_credentials["api_key"] = config["api_key"]
if "api_key_header" in config:
auth_credentials["api_key_header"] = config[
"api_key_header"
]
elif auth_type == "bearer":
if "bearer_token" in config and config["bearer_token"]:
auth_credentials["bearer_token"] = config["bearer_token"]
elif "encrypted_token" in config and config["encrypted_token"]:
auth_credentials["bearer_token"] = config["encrypted_token"]
elif auth_type == "basic":
if "username" in config and config["username"]:
auth_credentials["username"] = config["username"]
if "password" in config and config["password"]:
auth_credentials["password"] = config["password"]
if auth_type != "none" and auth_credentials:
encrypted_credentials_string = encrypt_credentials(
auth_credentials, user
)
storage_config["encrypted_credentials"] = (
encrypted_credentials_string
)
elif auth_type == "none":
storage_config.pop("encrypted_credentials", None)
for field in [
"api_key",
"bearer_token",
"encrypted_token",
"username",
"password",
"api_key_header",
]:
storage_config.pop(field, None)
update_data["config"] = storage_config
else:
update_data["config"] = data["config"]
if "status" in data:
update_data["status"] = data["status"]
user_tools_collection.update_one(
{"_id": ObjectId(data["id"]), "user": user},
{"$set": update_data},
)
except Exception as err:
current_app.logger.error(f"Error updating tool: {err}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"success": True}), 200)
@tools_ns.route("/update_tool_config")
class UpdateToolConfig(Resource):
@api.expect(
api.model(
"UpdateToolConfigModel",
{
"id": fields.String(required=True, description="Tool ID"),
"config": fields.Raw(
required=True, description="Configuration of the tool"
),
},
)
)
@api.doc(description="Update the configuration of a tool")
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
data = request.get_json()
required_fields = ["id", "config"]
missing_fields = check_required_fields(data, required_fields)
if missing_fields:
return missing_fields
try:
user_tools_collection.update_one(
{"_id": ObjectId(data["id"]), "user": user},
{"$set": {"config": data["config"]}},
)
except Exception as err:
current_app.logger.error(
f"Error updating tool config: {err}", exc_info=True
)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"success": True}), 200)
@tools_ns.route("/update_tool_actions")
class UpdateToolActions(Resource):
@api.expect(
api.model(
"UpdateToolActionsModel",
{
"id": fields.String(required=True, description="Tool ID"),
"actions": fields.List(
fields.Raw,
required=True,
description="Actions the tool can perform",
),
},
)
)
@api.doc(description="Update the actions of a tool")
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
data = request.get_json()
required_fields = ["id", "actions"]
missing_fields = check_required_fields(data, required_fields)
if missing_fields:
return missing_fields
try:
user_tools_collection.update_one(
{"_id": ObjectId(data["id"]), "user": user},
{"$set": {"actions": data["actions"]}},
)
except Exception as err:
current_app.logger.error(
f"Error updating tool actions: {err}", exc_info=True
)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"success": True}), 200)
@tools_ns.route("/update_tool_status")
class UpdateToolStatus(Resource):
@api.expect(
api.model(
"UpdateToolStatusModel",
{
"id": fields.String(required=True, description="Tool ID"),
"status": fields.Boolean(
required=True, description="Status of the tool"
),
},
)
)
@api.doc(description="Update the status of a tool")
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
data = request.get_json()
required_fields = ["id", "status"]
missing_fields = check_required_fields(data, required_fields)
if missing_fields:
return missing_fields
try:
user_tools_collection.update_one(
{"_id": ObjectId(data["id"]), "user": user},
{"$set": {"status": data["status"]}},
)
except Exception as err:
current_app.logger.error(
f"Error updating tool status: {err}", exc_info=True
)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"success": True}), 200)
@tools_ns.route("/delete_tool")
class DeleteTool(Resource):
@api.expect(
api.model(
"DeleteToolModel",
{"id": fields.String(required=True, description="Tool ID")},
)
)
@api.doc(description="Delete a tool by ID")
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
data = request.get_json()
required_fields = ["id"]
missing_fields = check_required_fields(data, required_fields)
if missing_fields:
return missing_fields
try:
result = user_tools_collection.delete_one(
{"_id": ObjectId(data["id"]), "user": user}
)
if result.deleted_count == 0:
return {"success": False, "message": "Tool not found"}, 404
except Exception as err:
current_app.logger.error(f"Error deleting tool: {err}", exc_info=True)
return {"success": False}, 400
return {"success": True}, 200
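# Hypothetical sketch: disabling a tool and then deleting it by ID; the tool
# ID and base URL are placeholders.
import requests

headers = {"Authorization": "Bearer <your-jwt>"}
requests.post(
    "http://127.0.0.1:7091/api/update_tool_status",
    headers=headers,
    json={"id": "<tool-id>", "status": False},
    timeout=30,
)
resp = requests.post(
    "http://127.0.0.1:7091/api/delete_tool",
    headers=headers,
    json={"id": "<tool-id>"},
    timeout=30,
)
assert resp.json().get("success") is True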

View File

@@ -26,7 +26,7 @@ class Settings(BaseSettings):
"gpt-4o-mini": 128000,
"gpt-3.5-turbo": 4096,
"claude-2": 1e5,
"gemini-2.5-flash": 1e6,
"gemini-2.0-flash-exp": 1e6,
}
UPLOAD_FOLDER: str = "inputs"
PARSE_PDF_AS_IMAGE: bool = False
@@ -41,18 +41,11 @@ class Settings(BaseSettings):
FALLBACK_LLM_API_KEY: Optional[str] = None # api key for fallback llm
# Google Drive integration
GOOGLE_CLIENT_ID: Optional[str] = (
None # Replace with your actual Google OAuth client ID
)
GOOGLE_CLIENT_SECRET: Optional[str] = (
None # Replace with your actual Google OAuth client secret
)
CONNECTOR_REDIRECT_BASE_URI: Optional[str] = (
"http://127.0.0.1:7091/api/connectors/callback"  # Add this redirect URL exactly as-is to your provider's (GCP) console
)
GOOGLE_CLIENT_ID: Optional[str] = None  # Replace with your actual Google OAuth client ID
GOOGLE_CLIENT_SECRET: Optional[str] = None  # Replace with your actual Google OAuth client secret
CONNECTOR_REDIRECT_BASE_URI: Optional[str] = "http://127.0.0.1:7091/api/connectors/callback"
# Append ?provider={provider_name} when registering the callback in your provider's console,
# e.g. http://127.0.0.1:7091/api/connectors/callback?provider=google_drive
# GitHub source
GITHUB_ACCESS_TOKEN: Optional[str] = None # PAT token with read repo access
# LLM Cache
CACHE_REDIS_URL: str = "redis://localhost:6379/2"
@@ -103,7 +96,7 @@ class Settings(BaseSettings):
QDRANT_HOST: Optional[str] = None
QDRANT_PATH: Optional[str] = None
QDRANT_DISTANCE_FUNC: str = "Cosine"
# PGVector vectorstore config
PGVECTOR_CONNECTION_STRING: Optional[str] = None
# Milvus vectorstore config
@@ -123,10 +116,6 @@ class Settings(BaseSettings):
JWT_SECRET_KEY: str = ""
# Encryption settings
ENCRYPTION_SECRET_KEY: str = "default-docsgpt-encryption-key"
ELEVENLABS_API_KEY: Optional[str] = None
path = Path(__file__).parent.parent.absolute()
settings = Settings(_env_file=path.joinpath(".env"), _env_file_encoding="utf-8")

View File

@@ -143,7 +143,6 @@ class GoogleLLM(BaseLLM):
raise
def _clean_messages_google(self, messages):
"""Convert OpenAI format messages to Google AI format."""
cleaned_messages = []
for message in messages:
role = message.get("role")
@@ -151,8 +150,6 @@ class GoogleLLM(BaseLLM):
if role == "assistant":
role = "model"
elif role == "tool":
role = "model"
parts = []
if role and content is not None:
@@ -191,63 +188,11 @@ class GoogleLLM(BaseLLM):
else:
raise ValueError(f"Unexpected content type: {type(content)}")
if parts:
cleaned_messages.append(types.Content(role=role, parts=parts))
cleaned_messages.append(types.Content(role=role, parts=parts))
return cleaned_messages
def _clean_schema(self, schema_obj):
"""
Recursively remove unsupported fields from schema objects
and validate required properties.
"""
if not isinstance(schema_obj, dict):
return schema_obj
allowed_fields = {
"type",
"description",
"items",
"properties",
"required",
"enum",
"pattern",
"minimum",
"maximum",
"nullable",
"default",
}
cleaned = {}
for key, value in schema_obj.items():
if key not in allowed_fields:
continue
elif key == "type" and isinstance(value, str):
cleaned[key] = value.upper()
elif isinstance(value, dict):
cleaned[key] = self._clean_schema(value)
elif isinstance(value, list):
cleaned[key] = [self._clean_schema(item) for item in value]
else:
cleaned[key] = value
# Validate that required properties actually exist in properties
if "required" in cleaned and "properties" in cleaned:
valid_required = []
properties_keys = set(cleaned["properties"].keys())
for required_prop in cleaned["required"]:
if required_prop in properties_keys:
valid_required.append(required_prop)
if valid_required:
cleaned["required"] = valid_required
else:
cleaned.pop("required", None)
elif "required" in cleaned and "properties" not in cleaned:
cleaned.pop("required", None)
return cleaned
def _clean_tools_format(self, tools_list):
"""Convert OpenAI format tools to Google AI format."""
genai_tools = []
for tool_data in tools_list:
if tool_data["type"] == "function":
@@ -256,16 +201,18 @@ class GoogleLLM(BaseLLM):
properties = parameters.get("properties", {})
if properties:
cleaned_properties = {}
for k, v in properties.items():
cleaned_properties[k] = self._clean_schema(v)
genai_function = dict(
name=function["name"],
description=function["description"],
parameters={
"type": "OBJECT",
"properties": cleaned_properties,
"properties": {
k: {
**v,
"type": v["type"].upper() if v["type"] else None,
}
for k, v in properties.items()
},
"required": (
parameters["required"]
if "required" in parameters
@@ -295,7 +242,6 @@ class GoogleLLM(BaseLLM):
response_schema=None,
**kwargs,
):
"""Generate content using Google AI API without streaming."""
client = genai.Client(api_key=self.api_key)
if formatting == "openai":
messages = self._clean_messages_google(messages)
@@ -335,7 +281,6 @@ class GoogleLLM(BaseLLM):
response_schema=None,
**kwargs,
):
"""Generate content using Google AI API with streaming."""
client = genai.Client(api_key=self.api_key)
if formatting == "openai":
messages = self._clean_messages_google(messages)
@@ -386,15 +331,12 @@ class GoogleLLM(BaseLLM):
yield chunk.text
def _supports_tools(self):
"""Return whether this LLM supports function calling."""
return True
def _supports_structured_output(self):
"""Return whether this LLM supports structured JSON output."""
return True
def prepare_structured_output_format(self, json_schema):
"""Convert JSON schema to Google AI structured output format."""
if not json_schema:
return None

View File

@@ -205,6 +205,7 @@ class LLMHandler(ABC):
except StopIteration as e:
tool_response, call_id = e.value
break
updated_messages.append(
{
"role": "assistant",
@@ -221,36 +222,17 @@ class LLMHandler(ABC):
)
updated_messages.append(self.create_tool_message(call, tool_response))
except Exception as e:
logger.error(f"Error executing tool: {str(e)}", exc_info=True)
error_call = ToolCall(
id=call.id, name=call.name, arguments=call.arguments
updated_messages.append(
{
"role": "tool",
"content": f"Error executing tool: {str(e)}",
"tool_call_id": call.id,
}
)
error_response = f"Error executing tool: {str(e)}"
error_message = self.create_tool_message(error_call, error_response)
updated_messages.append(error_message)
call_parts = call.name.split("_")
if len(call_parts) >= 2:
tool_id = call_parts[-1] # Last part is tool ID (e.g., "1")
action_name = "_".join(call_parts[:-1])
tool_name = tools_dict.get(tool_id, {}).get("name", "unknown_tool")
full_action_name = f"{action_name}_{tool_id}"
else:
tool_name = "unknown_tool"
action_name = call.name
full_action_name = call.name
yield {
"type": "tool_call",
"data": {
"tool_name": tool_name,
"call_id": call.id,
"action_name": full_action_name,
"arguments": call.arguments,
"error": error_response,
"status": "error",
},
}
return updated_messages
def handle_non_streaming(
@@ -281,11 +263,13 @@ class LLMHandler(ABC):
except StopIteration as e:
messages = e.value
break
response = agent.llm.gen(
model=agent.gpt_model, messages=messages, tools=agent.tools
)
parsed = self.parse_response(response)
self.llm_calls.append(build_stack_data(agent.llm))
return parsed.content
def handle_streaming(

View File

@@ -17,6 +17,7 @@ class GoogleLLMHandler(LLMHandler):
finish_reason="stop",
raw_response=response,
)
if hasattr(response, "candidates"):
parts = response.candidates[0].content.parts if response.candidates else []
tool_calls = [
@@ -40,6 +41,7 @@ class GoogleLLMHandler(LLMHandler):
finish_reason="tool_calls" if tool_calls else "stop",
raw_response=response,
)
else:
tool_calls = []
if hasattr(response, "function_call"):
@@ -59,16 +61,14 @@ class GoogleLLMHandler(LLMHandler):
def create_tool_message(self, tool_call: ToolCall, result: Any) -> Dict:
"""Create Google-style tool message."""
from google.genai import types
return {
"role": "model",
"role": "tool",
"content": [
{
"function_response": {
"name": tool_call.name,
"response": {"result": result},
}
}
types.Part.from_function_response(
name=tool_call.name, response={"result": result}
).to_json_dict()
],
}

View File

@@ -17,13 +17,14 @@ class GoogleDriveAuth(BaseConnectorAuth):
"""
SCOPES = [
'https://www.googleapis.com/auth/drive.file'
'https://www.googleapis.com/auth/drive.readonly',
'https://www.googleapis.com/auth/drive.metadata.readonly'
]
def __init__(self):
self.client_id = settings.GOOGLE_CLIENT_ID
self.client_secret = settings.GOOGLE_CLIENT_SECRET
self.redirect_uri = f"{settings.CONNECTOR_REDIRECT_BASE_URI}"
self.redirect_uri = f"{settings.CONNECTOR_REDIRECT_BASE_URI}?provider=google_drive"
if not self.client_id or not self.client_secret:
raise ValueError("Google OAuth credentials not configured. Please set GOOGLE_CLIENT_ID and GOOGLE_CLIENT_SECRET in settings.")
@@ -49,7 +50,7 @@ class GoogleDriveAuth(BaseConnectorAuth):
authorization_url, _ = flow.authorization_url(
access_type='offline',
prompt='consent',
include_granted_scopes='false',
include_granted_scopes='true',
state=state
)

View File

@@ -32,10 +32,6 @@ class GoogleDriveLoader(BaseConnectorLoader):
'text/plain': '.txt',
'text/csv': '.csv',
'text/html': '.html',
'text/markdown': '.md',
'text/x-rst': '.rst',
'application/json': '.json',
'application/epub+zip': '.epub',
'application/rtf': '.rtf',
'image/jpeg': '.jpg',
'image/jpg': '.jpg',
@@ -124,7 +120,6 @@ class GoogleDriveLoader(BaseConnectorLoader):
list_only = inputs.get('list_only', False)
load_content = not list_only
page_token = inputs.get('page_token')
search_query = inputs.get('search_query')
self.next_page_token = None
if file_ids:
@@ -133,18 +128,12 @@ class GoogleDriveLoader(BaseConnectorLoader):
try:
doc = self._load_file_by_id(file_id, load_content=load_content)
if doc:
if not search_query or (
search_query.lower() in doc.extra_info.get('file_name', '').lower()
):
documents.append(doc)
documents.append(doc)
elif hasattr(self, '_credential_refreshed') and self._credential_refreshed:
self._credential_refreshed = False
logging.info(f"Retrying load of file {file_id} after credential refresh")
doc = self._load_file_by_id(file_id, load_content=load_content)
if doc and (
not search_query or
search_query.lower() in doc.extra_info.get('file_name', '').lower()
):
if doc:
documents.append(doc)
except Exception as e:
logging.error(f"Error loading file {file_id}: {e}")
@@ -152,13 +141,7 @@ class GoogleDriveLoader(BaseConnectorLoader):
else:
# Browsing mode: list immediate children of provided folder or root
parent_id = folder_id if folder_id else 'root'
documents = self._list_items_in_parent(
parent_id,
limit=limit,
load_content=load_content,
page_token=page_token,
search_query=search_query
)
documents = self._list_items_in_parent(parent_id, limit=limit, load_content=load_content, page_token=page_token)
logging.info(f"Loaded {len(documents)} documents from Google Drive")
return documents
@@ -201,18 +184,13 @@ class GoogleDriveLoader(BaseConnectorLoader):
return None
def _list_items_in_parent(self, parent_id: str, limit: int = 100, load_content: bool = False, page_token: Optional[str] = None, search_query: Optional[str] = None) -> List[Document]:
def _list_items_in_parent(self, parent_id: str, limit: int = 100, load_content: bool = False, page_token: Optional[str] = None) -> List[Document]:
self._ensure_service()
documents: List[Document] = []
try:
query = f"'{parent_id}' in parents and trashed=false"
if search_query:
safe_search = search_query.replace("'", "\\'")
query += f" and name contains '{safe_search}'"
next_token_out: Optional[str] = None
while True:
@@ -227,8 +205,7 @@ class GoogleDriveLoader(BaseConnectorLoader):
q=query,
fields='nextPageToken,files(id,name,mimeType,size,createdTime,modifiedTime,parents)',
pageToken=page_token,
pageSize=page_size,
orderBy='name'
pageSize=page_size
).execute()
items = results.get('files', [])

View File

@@ -1,135 +1,44 @@
import base64
import requests
import time
from typing import List, Optional
from typing import List
from application.parser.remote.base import BaseRemote
from application.parser.schema.base import Document
from langchain_core.documents import Document
import mimetypes
from application.core.settings import settings
class GitHubLoader(BaseRemote):
def __init__(self):
self.access_token = settings.GITHUB_ACCESS_TOKEN
self.access_token = None
self.headers = {
"Authorization": f"token {self.access_token}",
"Accept": "application/vnd.github.v3+json"
} if self.access_token else {
"Accept": "application/vnd.github.v3+json"
}
"Authorization": f"token {self.access_token}"
} if self.access_token else {}
return
def is_text_file(self, file_path: str) -> bool:
"""Determine if a file is a text file based on extension."""
# Common text file extensions
text_extensions = {
'.txt', '.md', '.markdown', '.rst', '.json', '.xml', '.yaml', '.yml',
'.py', '.js', '.ts', '.jsx', '.tsx', '.java', '.c', '.cpp', '.h', '.hpp',
'.cs', '.go', '.rs', '.rb', '.php', '.swift', '.kt', '.scala',
'.html', '.css', '.scss', '.sass', '.less',
'.sh', '.bash', '.zsh', '.fish',
'.sql', '.r', '.m', '.mat',
'.ini', '.cfg', '.conf', '.config', '.env',
'.gitignore', '.dockerignore', '.editorconfig',
'.log', '.csv', '.tsv'
}
# Get file extension
file_lower = file_path.lower()
for ext in text_extensions:
if file_lower.endswith(ext):
return True
# Also check MIME type
mime_type, _ = mimetypes.guess_type(file_path)
if mime_type and (mime_type.startswith("text") or mime_type in ["application/json", "application/xml"]):
return True
return False
def fetch_file_content(self, repo_url: str, file_path: str) -> Optional[str]:
"""Fetch file content. Returns None if file should be skipped (binary files or empty files)."""
def fetch_file_content(self, repo_url: str, file_path: str) -> str:
url = f"https://api.github.com/repos/{repo_url}/contents/{file_path}"
response = self._make_request(url)
response = requests.get(url, headers=self.headers)
content = response.json()
if response.status_code == 200:
content = response.json()
mime_type, _ = mimetypes.guess_type(file_path) # Guess the MIME type based on the file extension
if content.get("encoding") == "base64":
if self.is_text_file(file_path): # Handle only text files
try:
decoded_content = base64.b64decode(content["content"]).decode("utf-8").strip()
# Skip empty files
if not decoded_content:
return None
return decoded_content
except Exception:
# If decoding fails, it's probably a binary file
return None
if content.get("encoding") == "base64":
if mime_type and mime_type.startswith("text"): # Handle only text files
try:
decoded_content = base64.b64decode(content["content"]).decode("utf-8")
return f"Filename: {file_path}\n\n{decoded_content}"
except Exception as e:
raise e
else:
return f"Filename: {file_path} is a binary file and was skipped."
else:
# Skip binary files by returning None
return None
return f"Filename: {file_path}\n\n{content['content']}"
else:
file_content = content['content'].strip()
# Skip empty files
if not file_content:
return None
return file_content
def _make_request(self, url: str, max_retries: int = 3) -> requests.Response:
"""Make a request with retry logic for rate limiting"""
for attempt in range(max_retries):
response = requests.get(url, headers=self.headers)
if response.status_code == 200:
return response
elif response.status_code == 403:
# Check if it's a rate limit issue
try:
error_data = response.json()
error_msg = error_data.get("message", "")
# Check rate limit headers
remaining = response.headers.get("X-RateLimit-Remaining", "unknown")
reset_time = response.headers.get("X-RateLimit-Reset", "unknown")
print(f"GitHub API 403 Error: {error_msg}")
print(f"Rate limit remaining: {remaining}, Reset time: {reset_time}")
if "rate limit" in error_msg.lower():
if attempt < max_retries - 1:
wait_time = 2 ** attempt # Exponential backoff
print(f"Rate limit hit, waiting {wait_time} seconds before retry...")
time.sleep(wait_time)
continue
# Provide helpful error message
if remaining == "0":
raise Exception(f"GitHub API rate limit exceeded. Please set GITHUB_ACCESS_TOKEN environment variable. Reset time: {reset_time}")
else:
raise Exception(f"GitHub API error: {error_msg}. This may require authentication - set GITHUB_ACCESS_TOKEN environment variable.")
except Exception as e:
if isinstance(e, Exception) and "GitHub API" in str(e):
raise
# If we can't parse the response, raise the original error
response.raise_for_status()
else:
response.raise_for_status()
return response
response.raise_for_status()
def fetch_repo_files(self, repo_url: str, path: str = "") -> List[str]:
url = f"https://api.github.com/repos/{repo_url}/contents/{path}"
response = self._make_request(url)
response = requests.get(url, headers={**self.headers, "Accept": "application/vnd.github.v3.raw"})
contents = response.json()
# Handle error responses from GitHub API
if isinstance(contents, dict) and "message" in contents:
raise Exception(f"GitHub API error: {contents.get('message')}")
# Ensure contents is a list
if not isinstance(contents, list):
raise TypeError(f"Expected list from GitHub API, got {type(contents).__name__}: {contents}")
files = []
for item in contents:
if item["type"] == "file":
@@ -144,15 +53,6 @@ class GitHubLoader(BaseRemote):
documents = []
for file_path in files:
content = self.fetch_file_content(repo_name, file_path)
# Skip binary files (content is None)
if content is None:
continue
documents.append(Document(
text=content,
doc_id=file_path,
extra_info={
"title": file_path,
"source": f"https://github.com/{repo_name}/blob/main/{file_path}"
}
))
documents.append(Document(page_content=content, metadata={"title": file_path,
"source": f"https://github.com/{repo_name}/blob/main/{file_path}"}))
return documents
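A minimal usage sketch of the loader above; `arc53/DocsGPT` is only an example repository. Unauthenticated requests are limited to 60 per hour, so configuring `GITHUB_ACCESS_TOKEN` is recommended when the retry logic starts hitting rate limits.

```python
# Minimal sketch; works with any public repo the token (if any) can read.
loader = GitHubLoader()
files = loader.fetch_repo_files("arc53/DocsGPT", path="docs")
for file_path in files[:5]:
    content = loader.fetch_file_content("arc53/DocsGPT", file_path)
    if not content:  # binary or empty files are skipped under the Optional[str] variant
        continue
    print(file_path, len(content))
```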

View File

@@ -2,7 +2,6 @@ anthropic==0.49.0
boto3==1.38.18
beautifulsoup4==4.13.4
celery==5.4.0
cryptography==42.0.8
dataclasses-json==0.6.7
docx2txt==0.8
duckduckgo-search==7.5.2
@@ -12,7 +11,6 @@ esprima==4.0.1
esutils==1.0.1
Flask==3.1.1
faiss-cpu==1.9.0.post1
fastmcp==2.11.0
flask-restx==1.3.0
google-genai==1.3.0
google-api-python-client==2.179.0
@@ -57,13 +55,13 @@ prompt-toolkit==3.0.51
protobuf==5.29.3
psycopg2-binary==2.9.10
py==1.11.0
pydantic
pydantic-core
pydantic-settings
pydantic==2.10.6
pydantic-core==2.27.2
pydantic-settings==2.7.1
pymongo==4.11.3
pypdf==5.5.0
python-dateutil==2.9.0.post0
python-dotenv
python-dotenv==1.0.1
python-jose==3.4.0
python-pptx==1.0.2
redis==5.2.1
@@ -83,7 +81,7 @@ tzdata==2024.2
urllib3==2.3.0
vine==5.1.0
wcwidth==0.2.13
werkzeug>=3.1.0,<3.1.2
werkzeug==3.1.3
yarl==1.20.0
markdownify==1.1.0
tldextract==5.1.3

View File

@@ -5,6 +5,10 @@ class BaseRetriever(ABC):
def __init__(self):
pass
@abstractmethod
def gen(self, *args, **kwargs):
pass
@abstractmethod
def search(self, *args, **kwargs):
pass
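With `gen` promoted to an abstract method alongside `search`, any concrete retriever must now implement both. A toy sketch of the contract:

```python
# Toy subclass illustrating the abstract contract; not a real retriever.
class EchoRetriever(BaseRetriever):
    def gen(self, *args, **kwargs):
        yield {"answer": "echo"}

    def search(self, *args, **kwargs):
        return []

retriever = EchoRetriever()  # would raise TypeError if either method were missing
```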

View File

@@ -1,6 +1,4 @@
import logging
import os
from application.core.settings import settings
from application.llm.llm_creator import LLMCreator
from application.retriever.base import BaseRetriever
@@ -22,25 +20,10 @@ class ClassicRAG(BaseRetriever):
api_key=settings.API_KEY,
decoded_token=None,
):
"""Initialize ClassicRAG retriever with vectorstore sources and LLM configuration"""
self.original_question = source.get("question", "")
self.original_question = ""
self.chat_history = chat_history if chat_history is not None else []
self.prompt = prompt
if isinstance(chunks, str):
try:
self.chunks = int(chunks)
except ValueError:
logging.warning(
f"Invalid chunks value '{chunks}', using default value 2"
)
self.chunks = 2
else:
self.chunks = chunks
user_identifier = user_api_key if user_api_key else "default"
logging.info(
f"ClassicRAG initialized with chunks={self.chunks}, user_api_key={user_identifier}, "
f"sources={'active_docs' in source and source['active_docs'] is not None}"
)
self.chunks = chunks
self.gpt_model = gpt_model
self.token_limit = (
token_limit
@@ -61,48 +44,26 @@ class ClassicRAG(BaseRetriever):
user_api_key=self.user_api_key,
decoded_token=decoded_token,
)
if "active_docs" in source and source["active_docs"] is not None:
if isinstance(source["active_docs"], list):
self.vectorstores = source["active_docs"]
else:
self.vectorstores = [source["active_docs"]]
else:
self.vectorstores = []
self.vectorstore = source["active_docs"] if "active_docs" in source else None
self.question = self._rephrase_query()
self.decoded_token = decoded_token
self._validate_vectorstore_config()
def _validate_vectorstore_config(self):
"""Validate vectorstore IDs and remove any empty/invalid entries"""
if not self.vectorstores:
logging.warning("No vectorstores configured for retrieval")
return
invalid_ids = [
vs_id for vs_id in self.vectorstores if not vs_id or not vs_id.strip()
]
if invalid_ids:
logging.warning(f"Found invalid vectorstore IDs: {invalid_ids}")
self.vectorstores = [
vs_id for vs_id in self.vectorstores if vs_id and vs_id.strip()
]
def _rephrase_query(self):
"""Rephrase user query with chat history context for better retrieval"""
if (
not self.original_question
or not self.chat_history
or self.chat_history == []
or self.chunks == 0
or not self.vectorstores
or self.vectorstore is None
):
return self.original_question
prompt = (
"Given the following conversation history:\n"
f"{self.chat_history}\n\n"
"Rephrase the following user question to be a standalone search query "
"that captures all relevant context from the conversation:\n"
)
prompt = f"""Given the following conversation history:
{self.chat_history}
Rephrase the following user question to be a standalone search query
that captures all relevant context from the conversation:
"""
messages = [
{"role": "system", "content": prompt},
@@ -118,89 +79,44 @@ class ClassicRAG(BaseRetriever):
return self.original_question
def _get_data(self):
"""Retrieve relevant documents from configured vectorstores"""
if self.chunks == 0 or not self.vectorstores:
logging.info(
f"ClassicRAG._get_data: Skipping retrieval - chunks={self.chunks}, "
f"vectorstores_count={len(self.vectorstores) if self.vectorstores else 0}"
if self.chunks == 0 or self.vectorstore is None:
docs = []
else:
docsearch = VectorCreator.create_vectorstore(
settings.VECTOR_STORE, self.vectorstore, settings.EMBEDDINGS_KEY
)
return []
all_docs = []
chunks_per_source = max(1, self.chunks // len(self.vectorstores))
docs_temp = docsearch.search(self.question, k=self.chunks)
docs = [
{
"title": i.metadata.get(
"title", i.metadata.get("post_title", i.page_content)
).split("/")[-1],
"text": i.page_content,
"source": (
i.metadata.get("source")
if i.metadata.get("source")
else "local"
),
}
for i in docs_temp
]
logging.info(
f"ClassicRAG._get_data: Starting retrieval with chunks={self.chunks}, "
f"vectorstores={self.vectorstores}, chunks_per_source={chunks_per_source}, "
f"query='{self.question[:50]}...'"
)
return docs
for vectorstore_id in self.vectorstores:
if vectorstore_id:
try:
docsearch = VectorCreator.create_vectorstore(
settings.VECTOR_STORE, vectorstore_id, settings.EMBEDDINGS_KEY
)
docs_temp = docsearch.search(self.question, k=chunks_per_source)
for doc in docs_temp:
if hasattr(doc, "page_content") and hasattr(doc, "metadata"):
page_content = doc.page_content
metadata = doc.metadata
else:
page_content = doc.get("text", doc.get("page_content", ""))
metadata = doc.get("metadata", {})
title = metadata.get(
"title", metadata.get("post_title", page_content)
)
if not isinstance(title, str):
title = str(title)
title = title.split("/")[-1]
filename = (
metadata.get("filename")
or metadata.get("file_name")
or metadata.get("source")
)
if isinstance(filename, str):
filename = os.path.basename(filename) or filename
else:
filename = title
if not filename:
filename = title
source_path = metadata.get("source") or vectorstore_id
all_docs.append(
{
"title": title,
"text": page_content,
"source": source_path,
"filename": filename,
}
)
except Exception as e:
logging.error(
f"Error searching vectorstore {vectorstore_id}: {e}",
exc_info=True,
)
continue
logging.info(
f"ClassicRAG._get_data: Retrieval complete - retrieved {len(all_docs)} documents "
f"(requested chunks={self.chunks}, chunks_per_source={chunks_per_source})"
)
return all_docs
def gen():
pass
def search(self, query: str = ""):
"""Search for documents using optional query override"""
if query:
self.original_question = query
self.question = self._rephrase_query()
return self._get_data()
def get_params(self):
"""Return current retriever configuration parameters"""
return {
"question": self.original_question,
"rephrased_question": self.question,
"sources": self.vectorstores,
"source": self.vectorstore,
"chunks": self.chunks,
"token_limit": self.token_limit,
"gpt_model": self.gpt_model,

View File

@@ -1,85 +0,0 @@
import base64
import json
import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers import algorithms, Cipher, modes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from application.core.settings import settings
def _derive_key(user_id: str, salt: bytes) -> bytes:
app_secret = settings.ENCRYPTION_SECRET_KEY
password = f"{app_secret}#{user_id}".encode()
kdf = PBKDF2HMAC(
algorithm=hashes.SHA256(),
length=32,
salt=salt,
iterations=100000,
backend=default_backend(),
)
return kdf.derive(password)
def encrypt_credentials(credentials: dict, user_id: str) -> str:
if not credentials:
return ""
try:
salt = os.urandom(16)
iv = os.urandom(16)
key = _derive_key(user_id, salt)
json_str = json.dumps(credentials)
cipher = Cipher(algorithms.AES(key), modes.CBC(iv), backend=default_backend())
encryptor = cipher.encryptor()
padded_data = _pad_data(json_str.encode())
encrypted_data = encryptor.update(padded_data) + encryptor.finalize()
result = salt + iv + encrypted_data
return base64.b64encode(result).decode()
except Exception as e:
print(f"Warning: Failed to encrypt credentials: {e}")
return ""
def decrypt_credentials(encrypted_data: str, user_id: str) -> dict:
if not encrypted_data:
return {}
try:
data = base64.b64decode(encrypted_data.encode())
salt = data[:16]
iv = data[16:32]
encrypted_content = data[32:]
key = _derive_key(user_id, salt)
cipher = Cipher(algorithms.AES(key), modes.CBC(iv), backend=default_backend())
decryptor = cipher.decryptor()
decrypted_padded = decryptor.update(encrypted_content) + decryptor.finalize()
decrypted_data = _unpad_data(decrypted_padded)
return json.loads(decrypted_data.decode())
except Exception as e:
print(f"Warning: Failed to decrypt credentials: {e}")
return {}
def _pad_data(data: bytes) -> bytes:
block_size = 16
padding_len = block_size - (len(data) % block_size)
padding = bytes([padding_len]) * padding_len
return data + padding
def _unpad_data(data: bytes) -> bytes:
padding_len = data[-1]
return data[:-padding_len]
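A round-trip sketch of the helpers above. The credential values and user IDs are placeholders, and `ENCRYPTION_SECRET_KEY` must be set in settings; because the key is derived from the user ID, decrypting as a different user falls into the exception path and returns `{}`.

```python
# Round-trip sketch; credential values and user IDs are placeholders.
creds = {"access_token": "ya29.placeholder", "refresh_token": "1//placeholder"}
token = encrypt_credentials(creds, user_id="user-123")
assert decrypt_credentials(token, user_id="user-123") == creds
assert decrypt_credentials(token, user_id="user-456") == {}  # wrong key -> {}
```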

View File

@@ -1,26 +0,0 @@
import click
from application.core.mongo_db import MongoDB
from application.core.settings import settings
from application.seed.seeder import DatabaseSeeder
@click.group()
def seed():
"""Database seeding commands"""
pass
@seed.command()
@click.option("--force", is_flag=True, help="Force reseeding even if data exists")
def init(force):
"""Initialize database with seed data"""
mongo = MongoDB.get_client()
db = mongo[settings.MONGO_DB_NAME]
seeder = DatabaseSeeder(db)
seeder.seed_initial_data(force=force)
if __name__ == "__main__":
seed()
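A sketch of exercising this command in-process with click's built-in test runner, as an alternative to invoking `python -m application.seed.commands init` in a shell:

```python
# In-process CLI invocation using click's test utilities.
from click.testing import CliRunner

runner = CliRunner()
result = runner.invoke(seed, ["init", "--force"])
print(result.exit_code, result.output)
```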

View File

@@ -1,36 +0,0 @@
# Configuration for Premade Agents
# This file contains template agents that will be seeded into the database
agents:
# Basic Agent Template
- name: "Agent Name" # Required: Unique name for the agent
description: "What this agent does" # Required: Brief description of the agent's purpose
image: "URL_TO_IMAGE" # Optional: URL to agent's avatar/image
agent_type: "classic" # Required: Type of agent (e.g., classic, react, etc.)
prompt_id: "default" # Optional: Reference to prompt template
prompt: # Optional: Define new prompt
name: "New Prompt"
content: "You are new agent with cool new prompt."
chunks: "0" # Optional: Chunking strategy for documents
retriever: "" # Optional: Retriever type for document search
# Source Configuration (where the agent gets its knowledge)
source: # Optional: Select a source to link with agent
name: "Source Display Name" # Human-readable name for the source
url: "https://example.com/data-source" # URL or path to knowledge source
loader: "url" # Type of loader (url, pdf, txt, etc.)
# Tools Configuration (what capabilities the agent has)
tools: # Optional: Remove if agent doesn't need tools
- name: "tool_name" # Must match a supported tool name
display_name: "Tool Display Name" # Optional: Human-readable name for the tool
config:
# Tool-specific configuration
# Example for DuckDuckGo:
# token: "${DDG_API_KEY}" # ${} denotes an environment variable
# Add more tools as needed
# - name: "another_tool"
# config:
# param1: "value1"
# param2: "${ENV_VAR}"

View File

@@ -1,94 +0,0 @@
# Configuration for Premade Agents
agents:
- name: "Assistant"
description: "Your general-purpose AI assistant. Ready to help with a wide range of tasks."
image: "https://d3dg1063dc54p9.cloudfront.net/imgs/agents/agent-logo.svg"
agent_type: "classic"
prompt_id: "default"
chunks: "0"
retriever: ""
# Tools Configuration
tools:
- name: "tool_name"
display_name: "read_webpage"
config:
- name: "Researcher"
description: "A specialized research agent that performs deep dives into subjects."
image: "https://d3dg1063dc54p9.cloudfront.net/imgs/agents/agent-researcher.svg"
agent_type: "react"
prompt:
name: "Researcher-Agent"
content: |
You are a specialized AI research assistant, DocsGPT. Your primary function is to conduct in-depth research on a given subject or question. You are methodical, thorough, and analytical. You should perform multiple iterations of thinking to gather and synthesize information before providing a final, comprehensive answer.
You have access to the 'Read Webpage' tool. Use this tool to explore sources, gather data, and deepen your understanding. Be proactive in using the tool to fill in knowledge gaps and validate information.
Users can upload documents for your context as attachments or sources via the UI, using the conversation input box.
If appropriate, your answers can include code examples, formatted as follows:
```(language)
(code)
```
Users are also able to see charts and diagrams if you use them with valid mermaid syntax in your responses. Try to respond with mermaid charts if visualization helps with users' queries. You effectively utilize chat history, ensuring relevant and tailored responses. Try to use additional provided context if it's available; otherwise use your knowledge and tool capabilities.
----------------
Possible additional context from uploaded sources:
{summaries}
chunks: "0"
retriever: ""
# Tools Configuration
tools:
- name: "tool_name"
display_name: "read_webpage"
config:
- name: "Search Widget"
description: "A powerful search widget agent. Ask it anything about DocsGPT"
image: "https://d3dg1063dc54p9.cloudfront.net/imgs/agents/agent-search.svg"
agent_type: "classic"
prompt:
name: "Search-Agent"
content: |
You are a website search assistant, DocsGPT. Your sole purpose is to help users find information within the provided context of the DocsGPT documentation. Act as a specialized search engine.
Your answers must be based *only* on the provided context. Do not use any external knowledge. If the answer is not in the context, inform the user that you could not find the information within the documentation.
Keep your responses concise and directly related to the user's query, pointing them to the most relevant information.
----------------
Possible additional context from uploaded sources:
{summaries}
chunks: "8"
retriever: ""
source:
name: "DocsGPT-Docs"
url: "https://d3dg1063dc54p9.cloudfront.net/agent-source/docsgpt-documentation.md" # URL to DocsGPT documentation
loader: "url"
- name: "Support Widget"
description: "A friendly support widget agent to help you with any questions."
image: "https://d3dg1063dc54p9.cloudfront.net/imgs/agents/agent-support.svg"
agent_type: "classic"
prompt:
name: "Support-Agent"
content: |
You are a helpful AI support widget agent, DocsGPT. Your goal is to assist users by answering their questions about our website, product and its features. Provide friendly, clear, and direct support.
Your knowledge is strictly limited to the provided context from the DocsGPT documentation. You must not answer questions outside of this scope. If a user asks something you cannot answer from the context, politely state that you can only help with questions about this website.
Effectively utilize chat history to understand the user's issue fully. Guide users to the information they need in a helpful and conversational manner.
----------------
Possible additional context from uploaded sources:
{summaries}
chunks: "8"
retriever: ""
source:
name: "DocsGPT-Docs"
url: "https://d3dg1063dc54p9.cloudfront.net/agent-source/docsgpt-documentation.md" # URL to DocsGPT documentation
loader: "url"

View File

@@ -1,277 +0,0 @@
import logging
import os
from datetime import datetime, timezone
from typing import Dict, List, Optional, Union
import yaml
from bson import ObjectId
from bson.dbref import DBRef
from dotenv import load_dotenv
from pymongo import MongoClient
from application.agents.tools.tool_manager import ToolManager
from application.api.user.tasks import ingest_remote
load_dotenv()
tool_config = {}
tool_manager = ToolManager(config=tool_config)
class DatabaseSeeder:
def __init__(self, db):
self.db = db
self.tools_collection = self.db["user_tools"]
self.sources_collection = self.db["sources"]
self.agents_collection = self.db["agents"]
self.prompts_collection = self.db["prompts"]
self.system_user_id = "system"
self.logger = logging.getLogger(__name__)
def seed_initial_data(self, config_path: str = None, force=False):
"""Main entry point for seeding all initial data"""
if not force and self._is_already_seeded():
self.logger.info("Database already seeded. Use force=True to reseed.")
return
config_path = config_path or os.path.join(
os.path.dirname(__file__), "config", "premade_agents.yaml"
)
try:
with open(config_path, "r") as f:
config = yaml.safe_load(f)
self._seed_from_config(config)
except Exception as e:
self.logger.error(f"Failed to load seeding config: {str(e)}")
raise
def _seed_from_config(self, config: Dict):
"""Seed all data from configuration"""
self.logger.info("🌱 Starting seeding...")
if not config.get("agents"):
self.logger.warning("No agents found in config")
return
used_tool_ids = set()
for agent_config in config["agents"]:
try:
self.logger.info(f"Processing agent: {agent_config['name']}")
# 1. Handle Source
source_result = self._handle_source(agent_config)
if source_result is False:
self.logger.error(
f"Skipping agent {agent_config['name']} due to source ingestion failure"
)
continue
source_id = source_result
# 2. Handle Tools
tool_ids = self._handle_tools(agent_config)
if len(tool_ids) == 0:
self.logger.warning(
f"No valid tools for agent {agent_config['name']}"
)
used_tool_ids.update(tool_ids)
# 3. Handle Prompt
prompt_id = self._handle_prompt(agent_config)
# 4. Create Agent
agent_data = {
"user": self.system_user_id,
"name": agent_config["name"],
"description": agent_config["description"],
"image": agent_config.get("image", ""),
"source": (
DBRef("sources", ObjectId(source_id)) if source_id else ""
),
"tools": [str(tid) for tid in tool_ids],
"agent_type": agent_config["agent_type"],
"prompt_id": prompt_id or agent_config.get("prompt_id", "default"),
"chunks": agent_config.get("chunks", "0"),
"retriever": agent_config.get("retriever", ""),
"status": "template",
"createdAt": datetime.now(timezone.utc),
"updatedAt": datetime.now(timezone.utc),
}
existing = self.agents_collection.find_one(
{"user": self.system_user_id, "name": agent_config["name"]}
)
if existing:
self.logger.info(f"Updating existing agent: {agent_config['name']}")
self.agents_collection.update_one(
{"_id": existing["_id"]}, {"$set": agent_data}
)
agent_id = existing["_id"]
else:
self.logger.info(f"Creating new agent: {agent_config['name']}")
result = self.agents_collection.insert_one(agent_data)
agent_id = result.inserted_id
self.logger.info(
f"Successfully processed agent: {agent_config['name']} (ID: {agent_id})"
)
except Exception as e:
self.logger.error(
f"Error processing agent {agent_config['name']}: {str(e)}"
)
continue
self.logger.info("✅ Database seeding completed")
def _handle_source(self, agent_config: Dict) -> Union[ObjectId, None, bool]:
"""Handle source ingestion and return source ID"""
if not agent_config.get("source"):
self.logger.info(
"No source provided for agent - will create agent without source"
)
return None
source_config = agent_config["source"]
self.logger.info(f"Ingesting source: {source_config['url']}")
try:
existing = self.sources_collection.find_one(
{"user": self.system_user_id, "remote_data": source_config["url"]}
)
if existing:
self.logger.info(f"Source already exists: {existing['_id']}")
return existing["_id"]
# Ingest new source using worker
task = ingest_remote.delay(
source_data=source_config["url"],
job_name=source_config["name"],
user=self.system_user_id,
loader=source_config.get("loader", "url"),
)
result = task.get(timeout=300)
if not task.successful():
raise Exception(f"Source ingestion failed: {result}")
source_id = None
if isinstance(result, dict) and "id" in result:
source_id = result["id"]
else:
raise Exception(f"Source ingestion result missing 'id': {result}")
self.logger.info(f"Source ingested successfully: {source_id}")
return source_id
except Exception as e:
self.logger.error(f"Failed to ingest source: {str(e)}")
return False
def _handle_tools(self, agent_config: Dict) -> List[ObjectId]:
"""Handle tool creation and return list of tool IDs"""
tool_ids = []
if not agent_config.get("tools"):
return tool_ids
for tool_config in agent_config["tools"]:
try:
tool_name = tool_config["name"]
processed_config = self._process_config(tool_config.get("config", {}))
self.logger.info(f"Processing tool: {tool_name}")
existing = self.tools_collection.find_one(
{
"user": self.system_user_id,
"name": tool_name,
"config": processed_config,
}
)
if existing:
self.logger.info(f"Tool already exists: {existing['_id']}")
tool_ids.append(existing["_id"])
continue
tool_data = {
"user": self.system_user_id,
"name": tool_name,
"displayName": tool_config.get("display_name", tool_name),
"description": tool_config.get("description", ""),
"actions": tool_manager.tools[tool_name].get_actions_metadata(),
"config": processed_config,
"status": True,
}
result = self.tools_collection.insert_one(tool_data)
tool_ids.append(result.inserted_id)
self.logger.info(f"Created new tool: {result.inserted_id}")
except Exception as e:
self.logger.error(f"Failed to process tool {tool_name}: {str(e)}")
continue
return tool_ids
def _handle_prompt(self, agent_config: Dict) -> Optional[str]:
"""Handle prompt creation and return prompt ID"""
if not agent_config.get("prompt"):
return None
prompt_config = agent_config["prompt"]
prompt_name = prompt_config.get("name", f"{agent_config['name']} Prompt")
prompt_content = prompt_config.get("content", "")
if not prompt_content:
self.logger.warning(
f"No prompt content provided for agent {agent_config['name']}"
)
return None
self.logger.info(f"Processing prompt: {prompt_name}")
try:
existing = self.prompts_collection.find_one(
{
"user": self.system_user_id,
"name": prompt_name,
"content": prompt_content,
}
)
if existing:
self.logger.info(f"Prompt already exists: {existing['_id']}")
return str(existing["_id"])
prompt_data = {
"name": prompt_name,
"content": prompt_content,
"user": self.system_user_id,
}
result = self.prompts_collection.insert_one(prompt_data)
prompt_id = str(result.inserted_id)
self.logger.info(f"Created new prompt: {prompt_id}")
return prompt_id
except Exception as e:
self.logger.error(f"Failed to process prompt {prompt_name}: {str(e)}")
return None
def _process_config(self, config: Dict) -> Dict:
"""Process config values to replace environment variables"""
processed = {}
for key, value in config.items():
if (
isinstance(value, str)
and value.startswith("${")
and value.endswith("}")
):
env_var = value[2:-1]
processed[key] = os.getenv(env_var, "")
else:
processed[key] = value
return processed
def _is_already_seeded(self) -> bool:
"""Check if premade agents already exist"""
return self.agents_collection.count_documents({"user": self.system_user_id}) > 0
@classmethod
def initialize_from_env(cls, worker=None):
"""Factory method to create seeder from environment"""
mongo_uri = os.getenv("MONGO_URI", "mongodb://localhost:27017")
db_name = os.getenv("MONGO_DB_NAME", "docsgpt")
client = MongoClient(mongo_uri)
db = client[db_name]
return cls(db)
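An illustration of the `${ENV_VAR}` substitution performed by `_process_config`; the variable name is a placeholder, and `initialize_from_env` reads `MONGO_URI`/`MONGO_DB_NAME` as shown above.

```python
# Sketch of env-var resolution in tool configs; DDG_API_KEY is a placeholder.
import os

os.environ["DDG_API_KEY"] = "test-token"
seeder = DatabaseSeeder.initialize_from_env()
print(seeder._process_config({"token": "${DDG_API_KEY}", "region": "us"}))
# -> {'token': 'test-token', 'region': 'us'}
```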

View File

@@ -26,7 +26,7 @@ class LocalStorage(BaseStorage):
return path
return os.path.join(self.base_dir, path)
def save_file(self, file_data: BinaryIO, path: str, **kwargs) -> dict:
def save_file(self, file_data: BinaryIO, path: str) -> dict:
"""Save a file to local storage."""
full_path = self._get_full_path(path)

View File

@@ -1,30 +1,84 @@
from io import BytesIO
import asyncio
import websockets
import json
import base64
from io import BytesIO
from application.tts.base import BaseTTS
from application.core.settings import settings
class ElevenlabsTTS(BaseTTS):
def __init__(self):
from elevenlabs.client import ElevenLabs
self.client = ElevenLabs(
api_key=settings.ELEVENLABS_API_KEY,
)
def __init__(self):
self.api_key = 'ELEVENLABS_API_KEY'  # put your ElevenLabs API key here
self.model = "eleven_flash_v2_5"
self.voice = "VOICE_ID" # this is the hash code for the voice not the name!
self.write_audio = 1
def text_to_speech(self, text):
lang = "en"
audio = self.client.generate(
text=text,
model="eleven_multilingual_v2",
voice="Brian",
)
audio_data = BytesIO()
for chunk in audio:
audio_data.write(chunk)
audio_bytes = audio_data.getvalue()
asyncio.run(self._text_to_speech_websocket(text))
# Encode to base64
audio_base64 = base64.b64encode(audio_bytes).decode("utf-8")
return audio_base64, lang
async def _text_to_speech_websocket(self, text):
uri = f"wss://api.elevenlabs.io/v1/text-to-speech/{self.voice}/stream-input?model_id={self.model}"
websocket = await websockets.connect(uri)
payload = {
"text": " ",
"voice_settings": {
"stability": 0.5,
"similarity_boost": 0.8,
},
"xi_api_key": self.api_key,
}
await websocket.send(json.dumps(payload))
async def listen():
while True:
try:
msg = await websocket.recv()
data = json.loads(msg)
if data.get("audio"):
print("audio received")
yield base64.b64decode(data["audio"])
elif data.get("isFinal"):
break
except websockets.exceptions.ConnectionClosed:
print("websocket closed")
break
listen_task = asyncio.create_task(self.stream(listen()))
await websocket.send(json.dumps({"text": text}))
# this is to signal the end of the text, either use this or flush
await websocket.send(json.dumps({"text": ""}))
await listen_task
async def stream(self, audio_stream):
if self.write_audio:
audio_bytes = BytesIO()
async for chunk in audio_stream:
if chunk:
audio_bytes.write(chunk)
with open("output_audio.mp3", "wb") as f:
f.write(audio_bytes.getvalue())
else:
async for chunk in audio_stream:
pass # depends on the streamer!
def test_elevenlabs_websocket():
"""
Tests the ElevenlabsTTS text_to_speech method with a sample prompt.
Writes the synthesized audio to 'output_audio.mp3'.
"""
# Instantiate your TTS class
tts = ElevenlabsTTS()
# Call the method with some sample text
tts.text_to_speech("Hello from ElevenLabs WebSocket!")
print("Saved audio to output_audio.mp3.")
if __name__ == "__main__":
test_elevenlabs_websocket()

View File

@@ -168,10 +168,6 @@ def validate_function_name(function_name):
def generate_image_url(image_path):
if isinstance(image_path, str) and (
image_path.startswith("http://") or image_path.startswith("https://")
):
return image_path
strategy = getattr(settings, "URL_STRATEGY", "backend")
if strategy == "s3":
bucket_name = getattr(settings, "S3_BUCKET_NAME", "docsgpt-test-bucket")

View File

@@ -1,43 +1,20 @@
import logging
import os
from abc import ABC, abstractmethod
from langchain_openai import OpenAIEmbeddings
import os
from sentence_transformers import SentenceTransformer
from langchain_openai import OpenAIEmbeddings
from application.core.settings import settings
class EmbeddingsWrapper:
def __init__(self, model_name, *args, **kwargs):
logging.info(f"Initializing EmbeddingsWrapper with model: {model_name}")
try:
kwargs.setdefault("trust_remote_code", True)
self.model = SentenceTransformer(
model_name,
config_kwargs={"allow_dangerous_deserialization": True},
*args,
**kwargs,
)
if self.model is None or self.model._first_module() is None:
raise ValueError(
f"SentenceTransformer model failed to load properly for: {model_name}"
)
self.dimension = self.model.get_sentence_embedding_dimension()
logging.info(f"Successfully loaded model with dimension: {self.dimension}")
except Exception as e:
logging.error(
f"Failed to initialize SentenceTransformer with model {model_name}: {str(e)}",
exc_info=True,
)
raise
self.model = SentenceTransformer(model_name, config_kwargs={'allow_dangerous_deserialization': True}, *args, **kwargs)
self.dimension = self.model.get_sentence_embedding_dimension()
def embed_query(self, query: str):
return self.model.encode(query).tolist()
def embed_documents(self, documents: list):
return self.model.encode(documents).tolist()
def __call__(self, text):
if isinstance(text, str):
return self.embed_query(text)
@@ -47,14 +24,15 @@ class EmbeddingsWrapper:
raise ValueError("Input must be a string or a list of strings")
class EmbeddingsSingleton:
_instances = {}
@staticmethod
def get_instance(embeddings_name, *args, **kwargs):
if embeddings_name not in EmbeddingsSingleton._instances:
EmbeddingsSingleton._instances[embeddings_name] = (
EmbeddingsSingleton._create_instance(embeddings_name, *args, **kwargs)
EmbeddingsSingleton._instances[embeddings_name] = EmbeddingsSingleton._create_instance(
embeddings_name, *args, **kwargs
)
return EmbeddingsSingleton._instances[embeddings_name]
@@ -62,15 +40,9 @@ class EmbeddingsSingleton:
def _create_instance(embeddings_name, *args, **kwargs):
embeddings_factory = {
"openai_text-embedding-ada-002": OpenAIEmbeddings,
"huggingface_sentence-transformers/all-mpnet-base-v2": lambda: EmbeddingsWrapper(
"sentence-transformers/all-mpnet-base-v2"
),
"huggingface_sentence-transformers-all-mpnet-base-v2": lambda: EmbeddingsWrapper(
"sentence-transformers/all-mpnet-base-v2"
),
"huggingface_hkunlp/instructor-large": lambda: EmbeddingsWrapper(
"hkunlp/instructor-large"
),
"huggingface_sentence-transformers/all-mpnet-base-v2": lambda: EmbeddingsWrapper("sentence-transformers/all-mpnet-base-v2"),
"huggingface_sentence-transformers-all-mpnet-base-v2": lambda: EmbeddingsWrapper("sentence-transformers/all-mpnet-base-v2"),
"huggingface_hkunlp/instructor-large": lambda: EmbeddingsWrapper("hkunlp/instructor-large"),
}
if embeddings_name in embeddings_factory:
@@ -78,83 +50,41 @@ class EmbeddingsSingleton:
else:
return EmbeddingsWrapper(embeddings_name, *args, **kwargs)
class BaseVectorStore(ABC):
def __init__(self):
pass
@abstractmethod
def search(self, *args, **kwargs):
"""Search for similar documents/chunks in the vectorstore"""
pass
@abstractmethod
def add_texts(self, texts, metadatas=None, *args, **kwargs):
"""Add texts with their embeddings to the vectorstore"""
pass
def delete_index(self, *args, **kwargs):
"""Delete the entire index/collection"""
pass
def save_local(self, *args, **kwargs):
"""Save vectorstore to local storage"""
pass
def get_chunks(self, *args, **kwargs):
"""Get all chunks from the vectorstore"""
pass
def add_chunk(self, text, metadata=None, *args, **kwargs):
"""Add a single chunk to the vectorstore"""
pass
def delete_chunk(self, chunk_id, *args, **kwargs):
"""Delete a specific chunk from the vectorstore"""
pass
def is_azure_configured(self):
return (
settings.OPENAI_API_BASE
and settings.OPENAI_API_VERSION
and settings.AZURE_DEPLOYMENT_NAME
)
return settings.OPENAI_API_BASE and settings.OPENAI_API_VERSION and settings.AZURE_DEPLOYMENT_NAME
def _get_embeddings(self, embeddings_name, embeddings_key=None):
if embeddings_name == "openai_text-embedding-ada-002":
if self.is_azure_configured():
os.environ["OPENAI_API_TYPE"] = "azure"
embedding_instance = EmbeddingsSingleton.get_instance(
embeddings_name, model=settings.AZURE_EMBEDDINGS_DEPLOYMENT_NAME
embeddings_name,
model=settings.AZURE_EMBEDDINGS_DEPLOYMENT_NAME
)
else:
embedding_instance = EmbeddingsSingleton.get_instance(
embeddings_name, openai_api_key=embeddings_key
embeddings_name,
openai_api_key=embeddings_key
)
elif embeddings_name == "huggingface_sentence-transformers/all-mpnet-base-v2":
possible_paths = [
"/app/models/all-mpnet-base-v2", # Docker absolute path
"./models/all-mpnet-base-v2", # Relative path
]
local_model_path = None
for path in possible_paths:
if os.path.exists(path):
local_model_path = path
logging.info(f"Found local model at path: {path}")
break
else:
logging.info(f"Path does not exist: {path}")
if local_model_path:
if os.path.exists("./models/all-mpnet-base-v2"):
embedding_instance = EmbeddingsSingleton.get_instance(
local_model_path,
embeddings_name="./models/all-mpnet-base-v2",
)
else:
logging.warning(
f"Local model not found in any of the paths: {possible_paths}. Falling back to HuggingFace download."
)
embedding_instance = EmbeddingsSingleton.get_instance(
embeddings_name,
)
else:
embedding_instance = EmbeddingsSingleton.get_instance(embeddings_name)
return embedding_instance
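A quick sketch of the singleton factory: the first call loads the model, and later calls reuse the cached instance (all-mpnet-base-v2 produces 768-dimensional embeddings).

```python
# First call loads the model; subsequent calls return the cached instance.
emb = EmbeddingsSingleton.get_instance(
    "huggingface_sentence-transformers/all-mpnet-base-v2"
)
print(len(emb.embed_query("What is DocsGPT?")))  # 768
assert emb is EmbeddingsSingleton.get_instance(
    "huggingface_sentence-transformers/all-mpnet-base-v2"
)
```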

View File

@@ -19,7 +19,6 @@ from bson.objectid import ObjectId
from application.agents.agent_creator import AgentCreator
from application.api.answer.services.stream_processor import get_prompt
from application.cache import get_redis_instance
from application.core.mongo_db import MongoDB
from application.core.settings import settings
from application.parser.chunking import Chunker
@@ -39,7 +38,6 @@ sources_collection = db["sources"]
# Constants
MIN_TOKENS = 150
MAX_TOKENS = 1250
RECURSION_DEPTH = 2
@@ -216,7 +214,8 @@ def run_agent_logic(agent_config, input_data):
def ingest_worker(
self, directory, formats, job_name, file_path, filename, user, retriever="classic"
self, directory, formats, job_name, file_path, filename, user,
retriever="classic"
):
"""
Ingest and process documents.
@@ -241,7 +240,7 @@ def ingest_worker(
sample = False
storage = StorageCreator.get_storage()
logging.info(f"Ingest path: {file_path}", extra={"user": user, "job": job_name})
# Create temporary working directory
@@ -254,32 +253,30 @@ def ingest_worker(
# Handle directory case
logging.info(f"Processing directory: {file_path}")
files_list = storage.list_files(file_path)
for storage_file_path in files_list:
if storage.is_directory(storage_file_path):
continue
# Create relative path structure in temp directory
rel_path = os.path.relpath(storage_file_path, file_path)
local_file_path = os.path.join(temp_dir, rel_path)
os.makedirs(os.path.dirname(local_file_path), exist_ok=True)
# Download file
try:
file_data = storage.get_file(storage_file_path)
with open(local_file_path, "wb") as f:
f.write(file_data.read())
except Exception as e:
logging.error(
f"Error downloading file {storage_file_path}: {e}"
)
logging.error(f"Error downloading file {storage_file_path}: {e}")
continue
else:
# Handle single file case
temp_filename = os.path.basename(file_path)
temp_file_path = os.path.join(temp_dir, temp_filename)
file_data = storage.get_file(file_path)
with open(temp_file_path, "wb") as f:
f.write(file_data.read())
@@ -288,10 +285,7 @@ def ingest_worker(
if temp_filename.endswith(".zip"):
logging.info(f"Extracting zip file: {temp_filename}")
extract_zip_recursive(
temp_file_path,
temp_dir,
current_depth=0,
max_depth=RECURSION_DEPTH,
temp_file_path, temp_dir, current_depth=0, max_depth=RECURSION_DEPTH
)
self.update_state(state="PROGRESS", meta={"current": 1})
@@ -306,8 +300,8 @@ def ingest_worker(
file_metadata=metadata_from_filename,
)
raw_docs = reader.load_data()
directory_structure = getattr(reader, "directory_structure", {})
directory_structure = getattr(reader, 'directory_structure', {})
logging.info(f"Directory structure from reader: {directory_structure}")
chunker = Chunker(
@@ -377,10 +371,7 @@ def reingest_source_worker(self, source_id, user):
try:
from application.vectorstore.vector_creator import VectorCreator
self.update_state(
state="PROGRESS",
meta={"current": 10, "status": "Initializing re-ingestion scan"},
)
self.update_state(state="PROGRESS", meta={"current": 10, "status": "Initializing re-ingestion scan"})
source = sources_collection.find_one({"_id": ObjectId(source_id), "user": user})
if not source:
@@ -389,9 +380,7 @@ def reingest_source_worker(self, source_id, user):
storage = StorageCreator.get_storage()
source_file_path = source.get("file_path", "")
self.update_state(
state="PROGRESS", meta={"current": 20, "status": "Scanning current files"}
)
self.update_state(state="PROGRESS", meta={"current": 20, "status": "Scanning current files"})
with tempfile.TemporaryDirectory() as temp_dir:
# Download all files from storage to temp directory, preserving directory structure
@@ -402,6 +391,7 @@ def reingest_source_worker(self, source_id, user):
if storage.is_directory(storage_file_path):
continue
rel_path = os.path.relpath(storage_file_path, source_file_path)
local_file_path = os.path.join(temp_dir, rel_path)
@@ -413,39 +403,23 @@ def reingest_source_worker(self, source_id, user):
with open(local_file_path, "wb") as f:
f.write(file_data.read())
except Exception as e:
logging.error(
f"Error downloading file {storage_file_path}: {e}"
)
logging.error(f"Error downloading file {storage_file_path}: {e}")
continue
reader = SimpleDirectoryReader(
input_dir=temp_dir,
recursive=True,
required_exts=[
".rst",
".md",
".pdf",
".txt",
".docx",
".csv",
".epub",
".html",
".mdx",
".json",
".xlsx",
".pptx",
".png",
".jpg",
".jpeg",
".rst", ".md", ".pdf", ".txt", ".docx", ".csv", ".epub",
".html", ".mdx", ".json", ".xlsx", ".pptx", ".png",
".jpg", ".jpeg",
],
exclude_hidden=True,
file_metadata=metadata_from_filename,
)
reader.load_data()
directory_structure = reader.directory_structure
logging.info(
f"Directory structure built with token counts: {directory_structure}"
)
logging.info(f"Directory structure built with token counts: {directory_structure}")
try:
old_directory_structure = source.get("directory_structure") or {}
@@ -459,17 +433,11 @@ def reingest_source_worker(self, source_id, user):
files = set()
if isinstance(struct, dict):
for name, meta in struct.items():
current_path = (
os.path.join(prefix, name) if prefix else name
)
if isinstance(meta, dict) and (
"type" in meta and "size_bytes" in meta
):
current_path = os.path.join(prefix, name) if prefix else name
if isinstance(meta, dict) and ("type" in meta and "size_bytes" in meta):
files.add(current_path)
elif isinstance(meta, dict):
files |= _flatten_directory_structure(
meta, current_path
)
files |= _flatten_directory_structure(meta, current_path)
return files
old_files = _flatten_directory_structure(old_directory_structure)
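A worked example of the flattening helper: entries carrying both `type` and `size_bytes` are treated as files, and nesting becomes a path prefix.

```python
# Worked example (POSIX paths assumed for the joined output).
structure = {
    "docs": {
        "index.md": {"type": "file", "size_bytes": 1024},
        "guides": {"setup.md": {"type": "file", "size_bytes": 2048}},
    },
    "README.md": {"type": "file", "size_bytes": 512},
}
# _flatten_directory_structure(structure)
# -> {"docs/index.md", "docs/guides/setup.md", "README.md"}
```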
@@ -489,9 +457,7 @@ def reingest_source_worker(self, source_id, user):
logging.info("No files removed since last ingest.")
except Exception as e:
logging.error(
f"Error comparing directory structures: {e}", exc_info=True
)
logging.error(f"Error comparing directory structures: {e}", exc_info=True)
added_files = []
removed_files = []
try:
@@ -511,21 +477,14 @@ def reingest_source_worker(self, source_id, user):
settings.EMBEDDINGS_KEY,
)
self.update_state(
state="PROGRESS",
meta={"current": 40, "status": "Processing file changes"},
)
self.update_state(state="PROGRESS", meta={"current": 40, "status": "Processing file changes"})
# 1) Delete chunks from removed files
deleted = 0
if removed_files:
try:
for ch in vector_store.get_chunks() or []:
metadata = (
ch.get("metadata", {})
if isinstance(ch, dict)
else getattr(ch, "metadata", {})
)
metadata = ch.get("metadata", {}) if isinstance(ch, dict) else getattr(ch, "metadata", {})
raw_source = metadata.get("source")
source_file = str(raw_source) if raw_source else ""
@@ -537,17 +496,10 @@ def reingest_source_worker(self, source_id, user):
vector_store.delete_chunk(cid)
deleted += 1
except Exception as de:
logging.error(
f"Failed deleting chunk {cid}: {de}"
)
logging.info(
f"Deleted {deleted} chunks from {len(removed_files)} removed files"
)
logging.error(f"Failed deleting chunk {cid}: {de}")
logging.info(f"Deleted {deleted} chunks from {len(removed_files)} removed files")
except Exception as e:
logging.error(
f"Error during deletion of removed file chunks: {e}",
exc_info=True,
)
logging.error(f"Error during deletion of removed file chunks: {e}", exc_info=True)
# 2) Add chunks from new files
added = 0
@@ -576,86 +528,58 @@ def reingest_source_worker(self, source_id, user):
)
chunked_new = chunker_new.chunk(documents=raw_docs_new)
for (
file_path,
token_count,
) in reader_new.file_token_counts.items():
for file_path, token_count in reader_new.file_token_counts.items():
try:
rel_path = os.path.relpath(
file_path, start=temp_dir
)
rel_path = os.path.relpath(file_path, start=temp_dir)
path_parts = rel_path.split(os.sep)
current_dir = directory_structure
for part in path_parts[:-1]:
if part in current_dir and isinstance(
current_dir[part], dict
):
if part in current_dir and isinstance(current_dir[part], dict):
current_dir = current_dir[part]
else:
break
filename = path_parts[-1]
if filename in current_dir and isinstance(
current_dir[filename], dict
):
current_dir[filename][
"token_count"
] = token_count
logging.info(
f"Updated token count for {rel_path}: {token_count}"
)
if filename in current_dir and isinstance(current_dir[filename], dict):
current_dir[filename]["token_count"] = token_count
logging.info(f"Updated token count for {rel_path}: {token_count}")
except Exception as e:
logging.warning(
f"Could not update token count for {file_path}: {e}"
)
logging.warning(f"Could not update token count for {file_path}: {e}")
for d in chunked_new:
meta = dict(d.extra_info or {})
try:
raw_src = meta.get("source")
if isinstance(raw_src, str) and os.path.isabs(
raw_src
):
meta["source"] = os.path.relpath(
raw_src, start=temp_dir
)
if isinstance(raw_src, str) and os.path.isabs(raw_src):
meta["source"] = os.path.relpath(raw_src, start=temp_dir)
except Exception:
pass
vector_store.add_chunk(d.text, metadata=meta)
added += 1
logging.info(
f"Added {added} chunks from {len(added_files)} new files"
)
logging.info(f"Added {added} chunks from {len(added_files)} new files")
except Exception as e:
logging.error(
f"Error during ingestion of new files: {e}", exc_info=True
)
logging.error(f"Error during ingestion of new files: {e}", exc_info=True)
# 3) Update source directory structure timestamp
try:
total_tokens = sum(reader.file_token_counts.values())
sources_collection.update_one(
{"_id": ObjectId(source_id)},
{
"$set": {
"directory_structure": directory_structure,
"date": datetime.datetime.now(),
"tokens": total_tokens,
"tokens": total_tokens
}
},
)
except Exception as e:
logging.error(
f"Error updating directory_structure in DB: {e}", exc_info=True
)
logging.error(f"Error updating directory_structure in DB: {e}", exc_info=True)
self.update_state(
state="PROGRESS",
meta={"current": 100, "status": "Re-ingestion completed"},
)
self.update_state(state="PROGRESS", meta={"current": 100, "status": "Re-ingestion completed"})
return {
"source_id": source_id,
@@ -667,16 +591,15 @@ def reingest_source_worker(self, source_id, user):
"chunks_deleted": deleted,
}
except Exception as e:
logging.error(
f"Error while processing file changes: {e}", exc_info=True
)
logging.error(f"Error while processing file changes: {e}", exc_info=True)
raise
except Exception as e:
logging.error(f"Error in reingest_source_worker: {e}", exc_info=True)
raise
def remote_worker(
self,
source_data,
@@ -728,7 +651,7 @@ def remote_worker(
"id": str(id),
"type": loader,
"remote_data": source_data,
"sync_frequency": sync_frequency,
"sync_frequency": sync_frequency
}
if operation_mode == "sync":
@@ -741,13 +664,7 @@ def remote_worker(
if os.path.exists(full_path):
shutil.rmtree(full_path)
logging.info("remote_worker task completed successfully")
return {
"id": str(id),
"urls": source_data,
"name_job": name_job,
"user": user,
"limited": False,
}
return {"urls": source_data, "name_job": name_job, "user": user, "limited": False}
def sync(
@@ -795,7 +712,7 @@ def sync_worker(self, frequency):
self, source_data, name, user, source_type, frequency, retriever, doc_id
)
sync_counts["total_sync_count"] += 1
sync_counts[
sync_counts[
"sync_success" if resp["status"] == "success" else "sync_failure"
] += 1
return {
@@ -832,14 +749,15 @@ def attachment_worker(self, file_info, user):
input_files=[local_path], exclude_hidden=True, errors="ignore"
)
.load_data()[0]
.text,
.text,
)
token_count = num_tokens_from_string(content)
if token_count > 100000:
content = content[:250000]
token_count = num_tokens_from_string(content)
self.update_state(
state="PROGRESS", meta={"current": 80, "status": "Storing in database"}
)
@@ -954,49 +872,37 @@ def ingest_connector(
doc_id: Document ID for sync operations (required when operation_mode="sync")
sync_frequency: How often to sync ("never", "daily", "weekly", "monthly")
"""
logging.info(
f"Starting remote ingestion from {source_type} for user: {user}, job: {job_name}"
)
logging.info(f"Starting remote ingestion from {source_type} for user: {user}, job: {job_name}")
self.update_state(state="PROGRESS", meta={"current": 1})
with tempfile.TemporaryDirectory() as temp_dir:
try:
# Step 1: Initialize the appropriate loader
self.update_state(
state="PROGRESS",
meta={"current": 10, "status": "Initializing connector"},
)
self.update_state(state="PROGRESS", meta={"current": 10, "status": "Initializing connector"})
if not session_token:
raise ValueError(f"{source_type} connector requires session_token")
if not ConnectorCreator.is_supported(source_type):
raise ValueError(
f"Unsupported connector type: {source_type}. Supported types: {ConnectorCreator.get_supported_connectors()}"
)
raise ValueError(f"Unsupported connector type: {source_type}. Supported types: {ConnectorCreator.get_supported_connectors()}")
remote_loader = ConnectorCreator.create_connector(
source_type, session_token
)
remote_loader = ConnectorCreator.create_connector(source_type, session_token)
# Create a clean config for storage
api_source_config = {
"file_ids": file_ids or [],
"folder_ids": folder_ids or [],
"recursive": recursive,
"recursive": recursive
}
# Step 2: Download files to temp directory
self.update_state(
state="PROGRESS", meta={"current": 20, "status": "Downloading files"}
)
self.update_state(state="PROGRESS", meta={"current": 20, "status": "Downloading files"})
download_info = remote_loader.download_to_directory(
temp_dir, api_source_config
temp_dir,
api_source_config
)
if download_info.get("empty_result", False) or not download_info.get(
"files_downloaded", 0
):
if download_info.get("empty_result", False) or not download_info.get("files_downloaded", 0):
logging.warning(f"No files were downloaded from {source_type}")
# Create empty result directly instead of calling a separate method
return {
@@ -1007,42 +913,28 @@ def ingest_connector(
"source_config": api_source_config,
"directory_structure": "{}",
}
# Step 3: Use SimpleDirectoryReader to process downloaded files
self.update_state(
state="PROGRESS", meta={"current": 40, "status": "Processing files"}
)
self.update_state(state="PROGRESS", meta={"current": 40, "status": "Processing files"})
reader = SimpleDirectoryReader(
input_dir=temp_dir,
recursive=True,
required_exts=[
".rst",
".md",
".pdf",
".txt",
".docx",
".csv",
".epub",
".html",
".mdx",
".json",
".xlsx",
".pptx",
".png",
".jpg",
".jpeg",
".rst", ".md", ".pdf", ".txt", ".docx", ".csv", ".epub",
".html", ".mdx", ".json", ".xlsx", ".pptx", ".png",
".jpg", ".jpeg",
],
exclude_hidden=True,
file_metadata=metadata_from_filename,
)
raw_docs = reader.load_data()
directory_structure = getattr(reader, "directory_structure", {})
directory_structure = getattr(reader, 'directory_structure', {})
# Step 4: Process documents (chunking, embedding, etc.)
self.update_state(
state="PROGRESS", meta={"current": 60, "status": "Processing documents"}
)
self.update_state(state="PROGRESS", meta={"current": 60, "status": "Processing documents"})
chunker = Chunker(
chunking_strategy="classic_chunk",
max_tokens=MAX_TOKENS,
@@ -1050,26 +942,22 @@ def ingest_connector(
duplicate_headers=False,
)
raw_docs = chunker.chunk(documents=raw_docs)
# Preserve source information in document metadata
for doc in raw_docs:
if hasattr(doc, "extra_info") and doc.extra_info:
source = doc.extra_info.get("source")
if hasattr(doc, 'extra_info') and doc.extra_info:
source = doc.extra_info.get('source')
if source and os.path.isabs(source):
# Convert absolute path to relative path
doc.extra_info["source"] = os.path.relpath(
source, start=temp_dir
)
doc.extra_info['source'] = os.path.relpath(source, start=temp_dir)
docs = [Document.to_langchain_format(raw_doc) for raw_doc in raw_docs]
if operation_mode == "upload":
id = ObjectId()
elif operation_mode == "sync":
if not doc_id or not ObjectId.is_valid(doc_id):
logging.error(
"Invalid doc_id provided for sync operation: %s", doc_id
)
logging.error("Invalid doc_id provided for sync operation: %s", doc_id)
raise ValueError("doc_id must be provided for sync operation.")
id = ObjectId(doc_id)
else:
@@ -1078,9 +966,7 @@ def ingest_connector(
vector_store_path = os.path.join(temp_dir, "vector_store")
os.makedirs(vector_store_path, exist_ok=True)
self.update_state(
state="PROGRESS", meta={"current": 80, "status": "Storing documents"}
)
self.update_state(state="PROGRESS", meta={"current": 80, "status": "Storing documents"})
embed_and_store_documents(docs, vector_store_path, id, self)
tokens = count_tokens_docs(docs)
@@ -1092,12 +978,13 @@ def ingest_connector(
"tokens": tokens,
"retriever": retriever,
"id": str(id),
"type": "connector:file",
"remote_data": json.dumps(
{"provider": source_type, **api_source_config}
),
"type": "connector",
"remote_data": json.dumps({
"provider": source_type,
**api_source_config
}),
"directory_structure": json.dumps(directory_structure),
"sync_frequency": sync_frequency,
"sync_frequency": sync_frequency
}
if operation_mode == "sync":
@@ -1108,9 +995,7 @@ def ingest_connector(
upload_index(vector_store_path, file_data)
# Ensure we mark the task as complete
self.update_state(
state="PROGRESS", meta={"current": 100, "status": "Complete"}
)
self.update_state(state="PROGRESS", meta={"current": 100, "status": "Complete"})
logging.info(f"Remote ingestion completed: {job_name}")
@@ -1120,136 +1005,9 @@ def ingest_connector(
"tokens": tokens,
"type": source_type,
"id": str(id),
"status": "complete",
"status": "complete"
}
except Exception as e:
logging.error(f"Error during remote ingestion: {e}", exc_info=True)
raise
def mcp_oauth(self, config: Dict[str, Any], user_id: str = None) -> Dict[str, Any]:
"""Worker to handle MCP OAuth flow asynchronously."""
logging.info(
"[MCP OAuth] Worker started for user_id=%s, config=%s", user_id, config
)
try:
import asyncio
from application.agents.tools.mcp_tool import MCPTool
task_id = self.request.id
logging.info("[MCP OAuth] Task ID: %s", task_id)
redis_client = get_redis_instance()
def update_status(status_data: Dict[str, Any]):
logging.info("[MCP OAuth] Updating status: %s", status_data)
status_key = f"mcp_oauth_status:{task_id}"
redis_client.setex(status_key, 600, json.dumps(status_data))
update_status(
{
"status": "in_progress",
"message": "Starting OAuth flow...",
"task_id": task_id,
}
)
tool_config = config.copy()
tool_config["oauth_task_id"] = task_id
logging.info("[MCP OAuth] Initializing MCPTool with config: %s", tool_config)
mcp_tool = MCPTool(tool_config, user_id)
async def run_oauth_discovery():
if not mcp_tool._client:
mcp_tool._setup_client()
return await mcp_tool._execute_with_client("list_tools")
update_status(
{
"status": "awaiting_redirect",
"message": "Waiting for OAuth redirect...",
"task_id": task_id,
}
)
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
logging.info("[MCP OAuth] Starting event loop for OAuth discovery...")
tools_response = loop.run_until_complete(run_oauth_discovery())
logging.info(
"[MCP OAuth] Tools response after async call: %s", tools_response
)
status_key = f"mcp_oauth_status:{task_id}"
redis_status = redis_client.get(status_key)
if redis_status:
logging.info(
"[MCP OAuth] Redis status after async call: %s", redis_status
)
else:
logging.warning(
"[MCP OAuth] No Redis status found after async call for key: %s",
status_key,
)
tools = mcp_tool.get_actions_metadata()
update_status(
{
"status": "completed",
"message": f"OAuth completed successfully. Found {len(tools)} tools.",
"tools": tools,
"tools_count": len(tools),
"task_id": task_id,
}
)
logging.info(
"[MCP OAuth] OAuth flow completed successfully for task_id=%s", task_id
)
return {"success": True, "tools": tools, "tools_count": len(tools)}
except Exception as e:
error_msg = f"OAuth flow failed: {str(e)}"
logging.error(
"[MCP OAuth] Exception in OAuth discovery: %s", error_msg, exc_info=True
)
update_status(
{
"status": "error",
"message": error_msg,
"error": str(e),
"task_id": task_id,
}
)
return {"success": False, "error": error_msg}
finally:
logging.info("[MCP OAuth] Closing event loop for task_id=%s", task_id)
loop.close()
except Exception as e:
error_msg = f"Failed to initialize OAuth flow: {str(e)}"
logging.error(
"[MCP OAuth] Exception during initialization: %s", error_msg, exc_info=True
)
update_status(
{
"status": "error",
"message": error_msg,
"error": str(e),
"task_id": task_id,
}
)
return {"success": False, "error": error_msg}
def mcp_oauth_status(self, task_id: str) -> Dict[str, Any]:
"""Check the status of an MCP OAuth flow."""
redis_client = get_redis_instance()
status_key = f"mcp_oauth_status:{task_id}"
status_data = redis_client.get(status_key)
if status_data:
return json.loads(status_data)
return {"status": "not_found", "message": "Status not found"}

View File

@@ -6,7 +6,6 @@ services:
environment:
- VITE_API_HOST=http://localhost:7091
- VITE_API_STREAMING=$VITE_API_STREAMING
- VITE_GOOGLE_CLIENT_ID=$VITE_GOOGLE_CLIENT_ID
ports:
- "5173:5173"
depends_on:

View File

@@ -7,7 +7,6 @@ services:
environment:
- VITE_API_HOST=http://localhost:7091
- VITE_API_STREAMING=$VITE_API_STREAMING
- VITE_GOOGLE_CLIENT_ID=$VITE_GOOGLE_CLIENT_ID
ports:
- "5173:5173"
depends_on:

View File

@@ -107,13 +107,3 @@ Once an agent is created, you can:
* Modify any of its configuration settings (name, description, source, prompt, tools, type).
* **Generate a Public Link:** From the edit screen, you can create a shareable public link that allows others to import and use your agent.
* **Get a Webhook URL:** You can also obtain a Webhook URL for the agent. This allows external applications or services to trigger the agent and receive responses programmatically, enabling powerful integrations and automations.
## Seeding Premade Agents from YAML
You can bootstrap a fresh DocsGPT deployment with a curated set of agents by seeding them directly into MongoDB.
1. **Customize the configuration:** edit `application/seed/config/premade_agents.yaml` (or copy from `application/seed/config/agents_template.yaml`) to describe the agents you want to provision. Each entry lets you define prompts, tools, and optional data sources.
2. **Ensure dependencies are running:** MongoDB must be reachable using the credentials in `.env`, and a Celery worker should be available if any agent sources need to be ingested via `ingest_remote`.
3. **Execute the seeder:** run `python -m application.seed.commands init`. Add `--force` when you need to reseed an existing environment.
The seeder keeps templates under the `system` user so they appear in the UI for anyone to clone or customize. Environment variable placeholders such as `${MY_TOKEN}` inside tool configs are resolved during the seeding process.
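For illustration, `${VAR}`-style resolution can be done with Python's standard library. The sketch below is an assumption about the mechanism, not the seeder's actual code, and the function name `resolve_placeholders` is hypothetical:

```python
import os
import re


def resolve_placeholders(value: str) -> str:
    """Replace ${VAR} tokens with environment values (illustrative only)."""

    def substitute(match: re.Match) -> str:
        name = match.group(1)
        # Leave the token untouched if the variable is not set; the real
        # seeder may instead raise or log a warning.
        return os.environ.get(name, match.group(0))

    return re.sub(r"\$\{(\w+)\}", substitute, value)


# With MY_TOKEN=abc123 exported, "Bearer ${MY_TOKEN}" -> "Bearer abc123".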

View File

@@ -1,6 +0,0 @@
{
"google-drive-connector": {
"title": "🔗 Google Drive",
"href": "/Guides/Integrations/google-drive-connector"
}
}

View File

@@ -1,212 +0,0 @@
---
title: Google Drive Connector
description: Connect your Google Drive as an external knowledge base to upload and process files directly from your Google Drive account.
---
import { Callout } from 'nextra/components'
import { Steps } from 'nextra/components'
# Google Drive Connector
The Google Drive Connector allows you to seamlessly connect your Google Drive account as an external knowledge base. This integration enables you to upload and process files directly from your Google Drive without manually downloading and uploading them to DocsGPT.
## Features
- **Direct File Access**: Browse and select files directly from your Google Drive
- **Comprehensive File Support**: Supports all major document formats including:
- Google Workspace files (Docs, Sheets, Slides)
- Microsoft Office files (.docx, .xlsx, .pptx, .doc, .ppt, .xls)
- PDF documents
- Text files (.txt, .md, .rst, .html, .rtf)
- Data files (.csv, .json)
- Image files (.png, .jpg, .jpeg)
- E-books (.epub)
- **Secure Authentication**: Uses OAuth 2.0 for secure access to your Google Drive
- **Real-time Sync**: Process files directly from Google Drive without local downloads
<Callout type="info" emoji="">
The Google Drive Connector requires proper configuration of Google API credentials. Follow the setup instructions below to enable this feature.
</Callout>
## Prerequisites
Before setting up the Google Drive Connector, you'll need:
1. A Google Cloud Platform (GCP) project
2. Google Drive API enabled
3. OAuth 2.0 credentials configured
4. DocsGPT instance with proper environment variables
## Setup Instructions
<Steps>
### Step 1: Create a Google Cloud Project
1. Go to the [Google Cloud Console](https://console.cloud.google.com/)
2. Create a new project or select an existing one
3. Note down your Project ID for later use
### Step 2: Enable Google Drive API
1. In the Google Cloud Console, navigate to **APIs & Services** > **Library**
2. Search for "Google Drive API"
3. Click on "Google Drive API" and click **Enable**
### Step 3: Create OAuth 2.0 Credentials
1. Go to **APIs & Services** > **Credentials**
2. Click **Create Credentials** > **OAuth client ID**
3. If prompted, configure the OAuth consent screen:
- Choose **External** user type (unless you're using Google Workspace)
- Fill in the required fields (App name, User support email, Developer contact)
- Add your domain to **Authorized domains** if deploying publicly
4. For Application type, select **Web application**
5. Add your DocsGPT frontend URL to **Authorized JavaScript origins**:
- For local development: `http://localhost:3000`
- For production: `https://yourdomain.com`
6. Add your DocsGPT callback URL to **Authorized redirect URIs**:
- For local development: `http://localhost:7091/api/connectors/callback?provider=google_drive`
- For production: `https://yourdomain.com/api/connectors/callback?provider=google_drive`
7. Click **Create** and note down the **Client ID** and **Client Secret**
### Step 4: Configure Backend Environment Variables
Add the following environment variables to your backend configuration:
**For Docker deployment**, add to your `.env` file in the root directory:
```env
# Google Drive Connector Configuration
GOOGLE_CLIENT_ID=your_google_client_id_here
GOOGLE_CLIENT_SECRET=your_google_client_secret_here
```
**For manual deployment**, set these environment variables in your system or application configuration.
### Step 5: Configure Frontend Environment Variables
Add the following environment variables to your frontend `.env` file:
```env
# Google Drive Frontend Configuration
VITE_GOOGLE_CLIENT_ID=your_google_client_id_here
```
<Callout type="warning" emoji="⚠️">
Make sure to use the same Google Client ID in both backend and frontend configurations.
</Callout>
### Step 6: Restart Your Application
After configuring the environment variables:
1. **For Docker**: Restart your Docker containers
```bash
docker-compose down
docker-compose up -d
```
2. **For manual deployment**: Restart both backend and frontend services
</Steps>
## Using the Google Drive Connector
Once configured, you can use the Google Drive Connector to upload files:
<Steps>
### Step 1: Access the Upload Interface
1. Navigate to the DocsGPT interface
2. Go to the upload/training section
3. You should now see "Google Drive" as an available upload option
### Step 2: Connect Your Google Account
1. Select "Google Drive" as your upload method
2. Click "Connect to Google Drive"
3. You'll be redirected to Google's OAuth consent screen
4. Grant the necessary permissions to DocsGPT
5. You'll be redirected back to DocsGPT with a successful connection
### Step 3: Select Files
1. Once connected, click "Select Files"
2. The Google Drive picker will open
3. Browse your Google Drive and select the files you want to process
4. Click "Select" to confirm your choices
### Step 4: Process Files
1. Review your selected files
2. Click "Train" or "Upload" to process the files
3. DocsGPT will download and process the files from your Google Drive
4. Once processing is complete, the files will be available in your knowledge base
</Steps>
## Supported File Types
The Google Drive Connector supports the following file types:
| File Type | Extensions | Description |
|-----------|------------|-------------|
| **Google Workspace** | - | Google Docs, Sheets, Slides (automatically converted) |
| **Microsoft Office** | .docx, .xlsx, .pptx | Modern Office formats |
| **Legacy Office** | .doc, .ppt, .xls | Older Office formats |
| **PDF Documents** | .pdf | Portable Document Format |
| **Text Files** | .txt, .md, .rst, .html, .rtf | Various text formats |
| **Data Files** | .csv, .json | Structured data formats |
| **Images** | .png, .jpg, .jpeg | Image files (with OCR if enabled) |
| **E-books** | .epub | Electronic publication format |
## Troubleshooting
### Common Issues
**"Google Drive option not appearing"**
- Verify that `VITE_GOOGLE_CLIENT_ID` is set in your frontend environment configuration
- Check browser console for any JavaScript errors
- Ensure the frontend has been restarted after adding environment variables
**"Authentication failed"**
- Verify that your OAuth 2.0 credentials are correctly configured
- Check that the redirect URI `http://<your-domain>/api/connectors/callback?provider=google_drive` is correctly added in GCP console
- Ensure the Google Drive API is enabled in your GCP project
**"Permission denied" errors**
- Verify that the OAuth consent screen is properly configured
- Check that your Google account has access to the files you're trying to select
- Ensure the required scopes are granted during authentication
**"Files not processing"**
- Check that the backend environment variables are correctly set
- Verify that the OAuth credentials have the necessary permissions
- Check the backend logs for any error messages
### Environment Variable Checklist
**Backend (.env in root directory):**
- ✅ `GOOGLE_CLIENT_ID`
- ✅ `GOOGLE_CLIENT_SECRET`
**Frontend (.env in frontend directory):**
- ✅ `VITE_GOOGLE_CLIENT_ID`
### Security Considerations
- Keep your Google Client Secret secure and never expose it in frontend code
- Regularly rotate your OAuth credentials
- Use HTTPS in production to protect authentication tokens
- Ensure proper OAuth consent screen configuration for production use
<Callout type="tip" emoji="💡">
For production deployments, make sure to add your actual domain to the OAuth consent screen and authorized origins/redirect URIs.
</Callout>

View File

@@ -20,8 +20,5 @@
"Architecture": {
"title": "🏗️ Architecture",
"href": "/Guides/Architecture"
},
"Integrations": {
"title": "🔗 Integrations"
}
}

View File

@@ -43,8 +43,7 @@ The easiest way to launch DocsGPT is using the provided `setup.sh` script. This
2) Serve Local (with Ollama)
3) Connect Local Inference Engine
4) Connect Cloud API Provider
5) Advanced: Build images locally (for developers)
Choose option (1-5):
Choose option (1-4):
```
Let's break down each option:
@@ -57,8 +56,6 @@ The easiest way to launch DocsGPT is using the provided `setup.sh` script. This
* **4) Connect Cloud API Provider:** This option lets you connect DocsGPT to a commercial Cloud API provider such as OpenAI, Google (Vertex AI/Gemini), Anthropic (Claude), Groq, HuggingFace Inference API, or Azure OpenAI. You will need an API key from your chosen provider. Select this if you prefer to use a powerful cloud-based LLM.
* **5) Advanced: Build images locally (for developers):** Modify DocsGPT's source code and rebuild the Docker images locally. Instead of pulling prebuilt images from Docker Hub or using the hosted/public API, you build the entire backend and frontend from source, letting you customize how DocsGPT works internally or run it in an environment without internet access.
After selecting an option and providing any required information (like API keys or model names), the script will configure your `.env` file and start DocsGPT using Docker Compose.
4. **Access DocsGPT in your browser:**

View File

@@ -12,7 +12,7 @@
"chart.js": "^4.4.4",
"clsx": "^2.1.1",
"copy-to-clipboard": "^3.3.3",
"i18next": "^25.5.3",
"i18next": "^24.2.0",
"i18next-browser-languagedetector": "^8.0.2",
"lodash": "^4.17.21",
"mermaid": "^11.6.0",
@@ -21,7 +21,6 @@
"react-chartjs-2": "^5.3.0",
"react-dom": "^19.0.0",
"react-dropzone": "^14.3.8",
"react-google-drive-picker": "^1.2.2",
"react-i18next": "^15.4.0",
"react-markdown": "^9.0.1",
"react-redux": "^9.2.0",
@@ -321,9 +320,9 @@
}
},
"node_modules/@babel/runtime": {
"version": "7.28.4",
"resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.28.4.tgz",
"integrity": "sha512-Q/N6JNWvIvPnLDvjlE1OUBLPQHH6l3CltCEsHIujp45zQUSSh8K+gHnaEX45yAT1nyngnINhvWtzN+Nb9D8RAQ==",
"version": "7.27.3",
"resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.27.3.tgz",
"integrity": "sha512-7EYtGezsdiDMyY80+65EzwiGmcJqpmcZCojSXaRgdrBaGtWTgDZKq69cPIVped6MkIM78cTQ2GOiEYjwOlG4xw==",
"license": "MIT",
"engines": {
"node": ">=6.9.0"
@@ -6217,9 +6216,9 @@
}
},
"node_modules/i18next": {
"version": "25.5.3",
"resolved": "https://registry.npmjs.org/i18next/-/i18next-25.5.3.tgz",
"integrity": "sha512-joFqorDeQ6YpIXni944upwnuHBf5IoPMuqAchGVeQLdWC2JOjxgM9V8UGLhNIIH/Q8QleRxIi0BSRQehSrDLcg==",
"version": "24.2.0",
"resolved": "https://registry.npmjs.org/i18next/-/i18next-24.2.0.tgz",
"integrity": "sha512-ArJJTS1lV6lgKH7yEf4EpgNZ7+THl7bsGxxougPYiXRTJ/Fe1j08/TBpV9QsXCIYVfdE/HWG/xLezJ5DOlfBOA==",
"funding": [
{
"type": "individual",
@@ -6234,9 +6233,8 @@
"url": "https://www.i18next.com/how-to/faq#i18next-is-awesome.-how-can-i-support-the-project"
}
],
"license": "MIT",
"dependencies": {
"@babel/runtime": "^7.27.6"
"@babel/runtime": "^7.23.2"
},
"peerDependencies": {
"typescript": "^5"
@@ -9384,16 +9382,6 @@
"react": ">= 16.8 || 18.0.0"
}
},
"node_modules/react-google-drive-picker": {
"version": "1.2.2",
"resolved": "https://registry.npmjs.org/react-google-drive-picker/-/react-google-drive-picker-1.2.2.tgz",
"integrity": "sha512-x30mYkt9MIwPCgL+fyK75HZ8E6G5L/WGW0bfMG6kbD4NG2kmdlmV9oH5lPa6P6d46y9hj5Y3btAMrZd4JRRkSA==",
"license": "MIT",
"peerDependencies": {
"react": ">=17.0.0",
"react-dom": ">=17.0.0"
}
},
"node_modules/react-i18next": {
"version": "15.4.0",
"resolved": "https://registry.npmjs.org/react-i18next/-/react-i18next-15.4.0.tgz",

View File

@@ -23,7 +23,7 @@
"chart.js": "^4.4.4",
"clsx": "^2.1.1",
"copy-to-clipboard": "^3.3.3",
"i18next": "^25.5.3",
"i18next": "^24.2.0",
"i18next-browser-languagedetector": "^8.0.2",
"lodash": "^4.17.21",
"mermaid": "^11.6.0",
@@ -32,7 +32,6 @@
"react-chartjs-2": "^5.3.0",
"react-dom": "^19.0.0",
"react-dropzone": "^14.3.8",
"react-google-drive-picker": "^1.2.2",
"react-i18next": "^15.4.0",
"react-markdown": "^9.0.1",
"react-redux": "^9.2.0",

View File

@@ -1,4 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" width="64" height="64" color="#000000" fill="none">
<path d="M3.49994 11.7501L11.6717 3.57855C12.7762 2.47398 14.5672 2.47398 15.6717 3.57855C16.7762 4.68312 16.7762 6.47398 15.6717 7.57855M15.6717 7.57855L9.49994 13.7501M15.6717 7.57855C16.7762 6.47398 18.5672 6.47398 19.6717 7.57855C20.7762 8.68312 20.7762 10.474 19.6717 11.5785L12.7072 18.543C12.3167 18.9335 12.3167 19.5667 12.7072 19.9572L13.9999 21.2499" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"></path>
<path d="M17.4999 9.74921L11.3282 15.921C10.2237 17.0255 8.43272 17.0255 7.32823 15.921C6.22373 14.8164 6.22373 13.0255 7.32823 11.921L13.4999 5.74939" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"></path>
</svg>


View File

@@ -1,3 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" height="24px" viewBox="0 -960 960 960" width="24px" fill="#e3e3e3">
<path d="M240-80q-33 0-56.5-23.5T160-160v-480q0-33 23.5-56.5T240-720h80v-80q0-17 11.5-28.5T360-840q17 0 28.5 11.5T400-800v80h40v-80q0-17 11.5-28.5T480-840q17 0 28.5 11.5T520-800v80h40v-80q0-17 11.5-28.5T600-840q17 0 28.5 11.5T640-800v80h80q33 0 56.5 23.5T800-640v480q0 33-23.5 56.5T720-80H240Zm0-80h480v-480H240v480Zm120-320v-80h240v80H360Zm0 120v-80h240v80H360Zm0 120v-80h160v80H360ZM240-160v-480 480Z"/>
</svg>


View File

@@ -1 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" height="24px" viewBox="0 -960 960 960" width="24px" fill="#e3e3e3"><path d="M320-240h320v-80H320v80Zm0-160h320v-80H320v80ZM240-80q-33 0-56.5-23.5T160-160v-640q0-33 23.5-56.5T240-880h320l240 240v480q0 33-23.5 56.5T720-80H240Zm280-520v-200H240v640h480v-440H520ZM240-800v200-200 640-640Z"/></svg>


View File

@@ -7,7 +7,6 @@ import Agents from './agents';
import SharedAgentGate from './agents/SharedAgentGate';
import ActionButtons from './components/ActionButtons';
import Spinner from './components/Spinner';
import UploadToast from './components/UploadToast';
import Conversation from './conversation/Conversation';
import { SharedConversation } from './conversation/SharedConversation';
import { useDarkTheme, useMediaQuery } from './hooks';
@@ -34,19 +33,18 @@ function MainLayout() {
const [navOpen, setNavOpen] = useState(!(isMobile || isTablet));
return (
<div className="dark:bg-raisin-black relative h-screen overflow-hidden">
<div className="relative h-screen overflow-hidden dark:bg-raisin-black">
<Navigation navOpen={navOpen} setNavOpen={setNavOpen} />
<ActionButtons showNewChat={true} showShare={true} />
<div
className={`h-[calc(100dvh-64px)] overflow-auto transition-all duration-300 ease-in-out lg:h-screen ${
className={`h-[calc(100dvh-64px)] overflow-auto lg:h-screen ${
!(isMobile || isTablet)
? `${navOpen ? 'lg:ml-72' : 'lg:ml-0'}`
? `ml-0 ${!navOpen ? 'lg:mx-auto' : 'lg:ml-72'}`
: 'ml-0 lg:ml-16'
}`}
>
<Outlet />
</div>
<UploadToast />
</div>
);
}

View File

@@ -29,7 +29,7 @@ export default function Hero({
</div>
{/* Demo Buttons Section */}
<div className="mb-3 w-full max-w-full md:mb-3">
<div className="mb-8 w-full max-w-full md:mb-16">
<div className="grid grid-cols-1 gap-3 text-xs md:grid-cols-1 md:gap-4 lg:grid-cols-2">
{demos?.map(
(demo: { header: string; query: string }, key: number) =>

View File

@@ -10,11 +10,11 @@ import Add from './assets/add.svg';
import DocsGPT3 from './assets/cute_docsgpt3.svg';
import Discord from './assets/discord.svg';
import Expand from './assets/expand.svg';
import Github from './assets/git_nav.svg';
import Github from './assets/github.svg';
import Hamburger from './assets/hamburger.svg';
import openNewChat from './assets/openNewChat.svg';
import Pin from './assets/pin.svg';
import AgentImage from './components/AgentImage';
import Robot from './assets/robot.svg';
import SettingGear from './assets/settingGear.svg';
import Spark from './assets/spark.svg';
import SpinnerDark from './assets/spinner-dark.svg';
@@ -292,26 +292,20 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
useDefaultDocument();
return (
<>
{(isMobile || isTablet) && navOpen && (
<div
className="fixed inset-0 z-10 bg-black opacity-50 transition-opacity duration-300"
onClick={() => setNavOpen(false)}
/>
)}
{
<div className="absolute top-3 left-3 z-20 hidden transition-all duration-300 ease-in-out lg:block">
{!navOpen && (
<div className="absolute top-3 left-3 z-20 hidden transition-all duration-25 lg:block">
<div className="flex items-center gap-3">
<button
onClick={() => {
setNavOpen(!navOpen);
}}
className="transition-transform duration-200 hover:scale-110"
>
<img
src={Expand}
alt="Toggle navigation menu"
className="m-auto transition-all duration-300 ease-in-out"
className={`${
!navOpen ? 'rotate-180' : 'rotate-0'
} m-auto transition-all duration-200`}
/>
</button>
{queries?.length > 0 && (
@@ -319,7 +313,6 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
onClick={() => {
newChat();
}}
className="transition-transform duration-200 hover:scale-110"
>
<img
src={openNewChat}
@@ -333,12 +326,12 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
</div>
</div>
</div>
}
)}
<div
ref={navRef}
className={`${
!navOpen && '-ml-96 md:-ml-72'
} bg-lotion dark:border-r-purple-taupe dark:bg-chinese-black fixed top-0 z-20 flex h-full w-72 flex-col border-r border-b-0 transition-all duration-300 ease-in-out dark:text-white`}
} bg-lotion dark:border-r-purple-taupe dark:bg-chinese-black fixed top-0 z-20 flex h-full w-72 flex-col border-r border-b-0 transition-all duration-20 dark:text-white`}
>
<div
className={'visible mt-2 flex h-[6vh] w-full justify-between md:h-12'}
@@ -352,7 +345,7 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
}}
>
<a href="/" className="flex gap-1.5">
<img className="h-10" src={DocsGPT3} alt="DocsGPT Logo" />
<img className="mb-2 h-10" src={DocsGPT3} alt="DocsGPT Logo" />
<p className="my-auto text-2xl font-semibold">DocsGPT</p>
</a>
</div>
@@ -365,7 +358,9 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
<img
src={Expand}
alt="Toggle navigation menu"
className="m-auto transition-all duration-300 ease-in-out hover:scale-110"
className={`${
!navOpen ? 'rotate-180' : 'rotate-0'
} m-auto transition-all duration-200`}
/>
</button>
</div>
@@ -424,8 +419,12 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
>
<div className="flex items-center gap-2">
<div className="flex w-6 justify-center">
<AgentImage
src={agent.image}
<img
src={
agent.image && agent.image.trim() !== ''
? agent.image
: Robot
}
alt="agent-logo"
className="h-6 w-6 rounded-full object-contain"
/>
@@ -569,25 +568,21 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
>
<img
src={Discord}
width={24}
height={24}
alt="Join Discord community"
className="m-2 w-6 self-center filter dark:invert"
/>
</NavLink>
<NavLink
target="_blank"
to={'https://x.com/docsgptai'}
to={'https://twitter.com/docsgptai'}
className={
'rounded-full hover:bg-gray-100 dark:hover:bg-[#28292E]'
}
>
<img
src={Twitter}
width={20}
height={20}
alt="Follow us on X"
className="m-2 self-center filter dark:invert"
alt="Follow us on Twitter"
className="m-2 w-5 self-center filter dark:invert"
/>
</NavLink>
<NavLink
@@ -600,9 +595,7 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
<img
src={Github}
alt="View on GitHub"
width={28}
height={28}
className="m-2 self-center filter dark:invert"
className="m-2 w-6 self-center filter dark:invert"
/>
</NavLink>
</div>

View File

@@ -2,11 +2,11 @@ import { Link } from 'react-router-dom';
export default function PageNotFound() {
return (
<div className="dark:bg-raisin-black grid min-h-screen">
<p className="text-jet dark:bg-outer-space mx-auto my-auto mt-20 flex w-full max-w-6xl flex-col place-items-center gap-6 rounded-3xl bg-gray-100 p-6 lg:p-10 xl:p-16 dark:text-gray-100">
<div className="grid min-h-screen dark:bg-raisin-black">
<p className="mx-auto my-auto mt-20 flex w-full max-w-6xl flex-col place-items-center gap-6 rounded-3xl bg-gray-100 p-6 text-jet dark:bg-outer-space dark:text-gray-100 lg:p-10 xl:p-16">
<h1>404</h1>
<p>The page you are looking for does not exist.</p>
<button className="pointer-cursor bg-blue-1000 hover:bg-blue-3000 mr-4 flex cursor-pointer items-center justify-center rounded-full px-4 py-2 text-white transition-colors duration-100">
<button className="pointer-cursor mr-4 flex cursor-pointer items-center justify-center rounded-full bg-blue-1000 px-4 py-2 text-white transition-colors duration-100 hover:bg-blue-3000">
<Link to="/">Go Back Home</Link>
</button>
</p>

View File

@@ -1,22 +1,14 @@
import { SyntheticEvent, useRef, useState } from 'react';
import { useRef, useState } from 'react';
import { useDispatch, useSelector } from 'react-redux';
import { useNavigate } from 'react-router-dom';
import userService from '../api/services/userService';
import Duplicate from '../assets/duplicate.svg';
import Edit from '../assets/edit.svg';
import Link from '../assets/link-gray.svg';
import Monitoring from '../assets/monitoring.svg';
import Pin from '../assets/pin.svg';
import Trash from '../assets/red-trash.svg';
import Robot from '../assets/robot.svg';
import ThreeDots from '../assets/three-dots.svg';
import UnPin from '../assets/unpin.svg';
import AgentImage from '../components/AgentImage';
import ContextMenu, { MenuOption } from '../components/ContextMenu';
import ConfirmationModal from '../modals/ConfirmationModal';
import { ActiveState } from '../models/misc';
import {
selectAgents,
selectToken,
setAgents,
setSelectedAgent,
@@ -26,205 +18,46 @@ import { Agent } from './types';
type AgentCardProps = {
agent: Agent;
agents: Agent[];
updateAgents?: (agents: Agent[]) => void;
section: string;
menuOptions?: MenuOption[];
onDelete?: (agentId: string) => void;
};
export default function AgentCard({
agent,
agents,
updateAgents,
section,
menuOptions,
onDelete,
}: AgentCardProps) {
const navigate = useNavigate();
const dispatch = useDispatch();
const token = useSelector(selectToken);
const userAgents = useSelector(selectAgents);
const [isMenuOpen, setIsMenuOpen] = useState<boolean>(false);
const [isMenuOpen, setIsMenuOpen] = useState(false);
const [deleteConfirmation, setDeleteConfirmation] =
useState<ActiveState>('INACTIVE');
const menuRef = useRef<HTMLDivElement>(null);
const menuOptionsConfig: Record<string, MenuOption[]> = {
template: [
{
icon: Duplicate,
label: 'Duplicate',
onClick: (e: SyntheticEvent) => {
e.stopPropagation();
handleDuplicate();
},
variant: 'primary',
iconWidth: 18,
iconHeight: 18,
},
],
user: [
{
icon: Monitoring,
label: 'Logs',
onClick: (e: SyntheticEvent) => {
e.stopPropagation();
navigate(`/agents/logs/${agent.id}`);
},
variant: 'primary',
iconWidth: 14,
iconHeight: 14,
},
{
icon: Edit,
label: 'Edit',
onClick: (e: SyntheticEvent) => {
e.stopPropagation();
navigate(`/agents/edit/${agent.id}`);
},
variant: 'primary',
iconWidth: 14,
iconHeight: 14,
},
...(agent.status === 'published'
? [
{
icon: agent.pinned ? UnPin : Pin,
label: agent.pinned ? 'Unpin' : 'Pin agent',
onClick: (e: SyntheticEvent) => {
e.stopPropagation();
togglePin();
},
variant: 'primary' as const,
iconWidth: 18,
iconHeight: 18,
},
]
: []),
{
icon: Trash,
label: 'Delete',
onClick: (e: SyntheticEvent) => {
e.stopPropagation();
setDeleteConfirmation('ACTIVE');
},
variant: 'danger',
iconWidth: 13,
iconHeight: 13,
},
],
shared: [
{
icon: Link,
label: 'Open',
onClick: (e: SyntheticEvent) => {
e.stopPropagation();
navigate(`/agents/shared/${agent.shared_token}`);
},
variant: 'primary',
iconWidth: 12,
iconHeight: 12,
},
{
icon: agent.pinned ? UnPin : Pin,
label: agent.pinned ? 'Unpin' : 'Pin agent',
onClick: (e: SyntheticEvent) => {
e.stopPropagation();
togglePin();
},
variant: 'primary',
iconWidth: 18,
iconHeight: 18,
},
{
icon: Trash,
label: 'Remove',
onClick: (e: SyntheticEvent) => {
e.stopPropagation();
handleHideSharedAgent();
},
variant: 'danger',
iconWidth: 13,
iconHeight: 13,
},
],
};
const menuOptions = menuOptionsConfig[section] || [];
const handleClick = () => {
if (section === 'user') {
if (agent.status === 'published') {
dispatch(setSelectedAgent(agent));
navigate(`/`);
}
}
if (section === 'shared') {
navigate(`/agents/shared/${agent.shared_token}`);
const handleCardClick = () => {
if (agent.status === 'published') {
dispatch(setSelectedAgent(agent));
navigate('/');
}
};
const togglePin = async () => {
try {
const response = await userService.togglePinAgent(agent.id ?? '', token);
if (!response.ok) throw new Error('Failed to pin agent');
const updatedAgents = agents.map((prevAgent) => {
if (prevAgent.id === agent.id)
return { ...prevAgent, pinned: !prevAgent.pinned };
return prevAgent;
});
updateAgents?.(updatedAgents);
} catch (error) {
console.error('Error:', error);
}
const defaultDelete = async (agentId: string) => {
const response = await userService.deleteAgent(agentId, token);
if (!response.ok) throw new Error('Failed to delete agent');
const data = await response.json();
dispatch(setAgents(agents.filter((prevAgent) => prevAgent.id !== data.id)));
};
const handleHideSharedAgent = async () => {
try {
const response = await userService.removeSharedAgent(
agent.id ?? '',
token,
);
if (!response.ok) throw new Error('Failed to hide shared agent');
const updatedAgents = agents.filter(
(prevAgent) => prevAgent.id !== agent.id,
);
updateAgents?.(updatedAgents);
} catch (error) {
console.error('Error:', error);
}
};
const handleDelete = async () => {
try {
const response = await userService.deleteAgent(agent.id ?? '', token);
if (!response.ok) throw new Error('Failed to delete agent');
const updatedAgents = agents.filter(
(prevAgent) => prevAgent.id !== agent.id,
);
updateAgents?.(updatedAgents);
} catch (error) {
console.error('Error:', error);
}
};
const handleDuplicate = async () => {
try {
const response = await userService.adoptAgent(agent.id ?? '', token);
if (!response.ok) throw new Error('Failed to duplicate agent');
const data = await response.json();
if (userAgents) {
const updatedAgents = [...userAgents, data.agent];
dispatch(setAgents(updatedAgents));
} else dispatch(setAgents([data.agent]));
} catch (error) {
console.error('Error:', error);
}
};
return (
<div
className={`relative flex h-44 w-full flex-col justify-between rounded-[1.2rem] bg-[#F6F6F6] px-6 py-5 hover:bg-[#ECECEC] md:w-48 dark:bg-[#383838] dark:hover:bg-[#383838]/80 ${agent.status === 'published' && 'cursor-pointer'}`}
onClick={(e) => {
e.stopPropagation();
handleClick();
}}
className={`relative flex h-44 w-48 flex-col justify-between rounded-[1.2rem] bg-[#F6F6F6] px-6 py-5 hover:bg-[#ECECEC] dark:bg-[#383838] dark:hover:bg-[#383838]/80 ${
agent.status === 'published' ? 'cursor-pointer' : ''
}`}
onClick={handleCardClick}
>
<div
ref={menuRef}
@@ -234,25 +67,30 @@ export default function AgentCard({
}}
className="absolute top-4 right-4 z-10 cursor-pointer"
>
<img src={ThreeDots} alt={'use-agent'} className="h-[19px] w-[19px]" />
<ContextMenu
isOpen={isMenuOpen}
setIsOpen={setIsMenuOpen}
options={menuOptions}
anchorRef={menuRef}
position="bottom-right"
offset={{ x: 0, y: 0 }}
/>
<img src={ThreeDots} alt="options" className="h-[19px] w-[19px]" />
{menuOptions && (
<ContextMenu
isOpen={isMenuOpen}
setIsOpen={setIsMenuOpen}
options={menuOptions}
anchorRef={menuRef}
position="top-right"
offset={{ x: 0, y: 0 }}
/>
)}
</div>
<div className="w-full">
<div className="flex w-full items-center gap-1 px-1">
<AgentImage
src={agent.image}
<img
src={agent.image && agent.image.trim() !== '' ? agent.image : Robot}
alt={`${agent.name}`}
className="h-7 w-7 rounded-full object-contain"
/>
{agent.status === 'draft' && (
<p className="text-xs text-black opacity-50 dark:text-[#E0E0E0]">{`(Draft)`}</p>
<p className="text-xs text-black opacity-50 dark:text-[#E0E0E0]">
(Draft)
</p>
)}
</div>
<div className="mt-2">
@@ -267,13 +105,14 @@ export default function AgentCard({
</p>
</div>
</div>
<ConfirmationModal
message="Are you sure you want to delete this agent?"
modalState={deleteConfirmation}
setModalState={setDeleteConfirmation}
submitLabel="Delete"
handleSubmit={() => {
handleDelete();
onDelete ? onDelete(agent.id || '') : defaultDelete(agent.id || '');
setDeleteConfirmation('INACTIVE');
}}
cancelLabel="Cancel"

View File

@@ -49,7 +49,7 @@ export default function AgentLogs() {
</p>
</div>
<div className="mt-5 flex w-full flex-wrap items-center justify-between gap-2 px-4">
<h1 className="text-eerie-black m-0 text-[32px] font-bold md:text-[40px] dark:text-white">
<h1 className="text-eerie-black m-0 text-[40px] font-bold dark:text-white">
Agent Logs
</h1>
</div>

View File

@@ -1,134 +0,0 @@
import { useEffect, useState } from 'react';
import { useDispatch, useSelector } from 'react-redux';
import { useNavigate } from 'react-router-dom';
import Spinner from '../components/Spinner';
import {
setConversation,
updateConversationId,
} from '../conversation/conversationSlice';
import {
selectSelectedAgent,
selectToken,
setSelectedAgent,
} from '../preferences/preferenceSlice';
import AgentCard from './AgentCard';
import { agentSectionsConfig } from './agents.config';
import { Agent } from './types';
export default function AgentsList() {
const dispatch = useDispatch();
const token = useSelector(selectToken);
const selectedAgent = useSelector(selectSelectedAgent);
useEffect(() => {
dispatch(setConversation([]));
dispatch(
updateConversationId({
query: { conversationId: null },
}),
);
if (selectedAgent) dispatch(setSelectedAgent(null));
}, [token]);
return (
<div className="p-4 md:p-12">
<h1 className="text-eerie-black mb-0 text-[32px] font-bold lg:text-[40px] dark:text-[#E0E0E0]">
Agents
</h1>
<p className="dark:text-gray-4000 mt-5 text-[15px] text-[#71717A]">
Discover and create custom versions of DocsGPT that combine
instructions, extra knowledge, and any combination of skills
</p>
{agentSectionsConfig.map((sectionConfig) => (
<AgentSection key={sectionConfig.id} config={sectionConfig} />
))}
</div>
);
}
function AgentSection({
config,
}: {
config: (typeof agentSectionsConfig)[number];
}) {
const navigate = useNavigate();
const dispatch = useDispatch();
const token = useSelector(selectToken);
const agents = useSelector(config.selectData);
const [loading, setLoading] = useState(true);
const updateAgents = (updatedAgents: Agent[]) => {
dispatch(config.updateAction(updatedAgents));
};
useEffect(() => {
const getAgents = async () => {
setLoading(true);
try {
const response = await config.fetchAgents(token);
if (!response.ok)
throw new Error(`Failed to fetch ${config.id} agents`);
const data = await response.json();
dispatch(config.updateAction(data));
} catch (error) {
console.error(`Error fetching ${config.id} agents:`, error);
dispatch(config.updateAction([]));
} finally {
setLoading(false);
}
};
getAgents();
}, [token, config, dispatch]);
return (
<div className="mt-8 flex flex-col gap-4">
<div className="flex w-full items-center justify-between">
<div className="flex flex-col gap-2">
<h2 className="text-[18px] font-semibold text-[#18181B] dark:text-[#E0E0E0]">
{config.title}
</h2>
<p className="text-[13px] text-[#71717A]">{config.description}</p>
</div>
{config.showNewAgentButton && (
<button
className="bg-purple-30 hover:bg-violets-are-blue rounded-full px-4 py-2 text-sm text-white"
onClick={() => navigate('/agents/new')}
>
New Agent
</button>
)}
</div>
<div>
{loading ? (
<div className="flex h-72 w-full items-center justify-center">
<Spinner />
</div>
) : agents && agents.length > 0 ? (
<div className="grid grid-cols-1 gap-4 sm:flex sm:flex-wrap">
{agents.map((agent) => (
<AgentCard
key={agent.id}
agent={agent}
agents={agents}
updateAgents={updateAgents}
section={config.id}
/>
))}
</div>
) : (
<div className="flex h-72 w-full flex-col items-center justify-center gap-3 text-base text-[#18181B] dark:text-[#E0E0E0]">
<p>{config.emptyStateDescription}</p>
{config.showNewAgentButton && (
<button
className="bg-purple-30 hover:bg-violets-are-blue ml-2 rounded-full px-4 py-2 text-sm text-white"
onClick={() => navigate('/agents/new')}
>
New Agent
</button>
)}
</div>
)}
</div>
</div>
);
}

View File

@@ -23,7 +23,7 @@ import PromptsModal from '../preferences/PromptsModal';
import Prompts from '../settings/Prompts';
import { UserToolType } from '../settings/types';
import AgentPreview from './AgentPreview';
import { Agent, ToolSummary } from './types';
import { Agent } from './types';
const embeddingsName =
import.meta.env.VITE_EMBEDDINGS_NAME ||
@@ -45,12 +45,11 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
description: '',
image: '',
source: '',
sources: [],
chunks: '2',
retriever: 'classic',
chunks: '',
retriever: '',
prompt_id: 'default',
tools: [],
agent_type: 'classic',
agent_type: '',
status: '',
json_schema: undefined,
});
@@ -64,7 +63,9 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
const [selectedSourceIds, setSelectedSourceIds] = useState<
Set<string | number>
>(new Set());
const [selectedTools, setSelectedTools] = useState<ToolSummary[]>([]);
const [selectedToolIds, setSelectedToolIds] = useState<Set<string | number>>(
new Set(),
);
const [deleteConfirmation, setDeleteConfirmation] =
useState<ActiveState>('INACTIVE');
const [agentDetails, setAgentDetails] = useState<ActiveState>('INACTIVE');
@@ -120,8 +121,7 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
agent.name && agent.description && agent.prompt_id && agent.agent_type;
const isJsonSchemaValidOrEmpty =
jsonSchemaText.trim() === '' || jsonSchemaValid;
const hasSource = selectedSourceIds.size > 0;
return hasRequiredFields && isJsonSchemaValidOrEmpty && hasSource;
return hasRequiredFields && isJsonSchemaValidOrEmpty;
};
const isJsonSchemaInvalid = () => {
@@ -150,41 +150,7 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
const formData = new FormData();
formData.append('name', agent.name);
formData.append('description', agent.description);
if (selectedSourceIds.size > 1) {
const sourcesArray = Array.from(selectedSourceIds)
.map((id) => {
const sourceDoc = sourceDocs?.find(
(source) =>
source.id === id || source.retriever === id || source.name === id,
);
if (sourceDoc?.name === 'Default' && !sourceDoc?.id) {
return 'default';
}
return sourceDoc?.id || id;
})
.filter(Boolean);
formData.append('sources', JSON.stringify(sourcesArray));
formData.append('source', '');
} else if (selectedSourceIds.size === 1) {
const singleSourceId = Array.from(selectedSourceIds)[0];
const sourceDoc = sourceDocs?.find(
(source) =>
source.id === singleSourceId ||
source.retriever === singleSourceId ||
source.name === singleSourceId,
);
let finalSourceId;
if (sourceDoc?.name === 'Default' && !sourceDoc?.id)
finalSourceId = 'default';
else finalSourceId = sourceDoc?.id || singleSourceId;
formData.append('source', String(finalSourceId));
formData.append('sources', JSON.stringify([]));
} else {
formData.append('source', '');
formData.append('sources', JSON.stringify([]));
}
formData.append('source', agent.source);
formData.append('chunks', agent.chunks);
formData.append('retriever', agent.retriever);
formData.append('prompt_id', agent.prompt_id);
@@ -230,41 +196,7 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
const formData = new FormData();
formData.append('name', agent.name);
formData.append('description', agent.description);
if (selectedSourceIds.size > 1) {
const sourcesArray = Array.from(selectedSourceIds)
.map((id) => {
const sourceDoc = sourceDocs?.find(
(source) =>
source.id === id || source.retriever === id || source.name === id,
);
if (sourceDoc?.name === 'Default' && !sourceDoc?.id) {
return 'default';
}
return sourceDoc?.id || id;
})
.filter(Boolean);
formData.append('sources', JSON.stringify(sourcesArray));
formData.append('source', '');
} else if (selectedSourceIds.size === 1) {
const singleSourceId = Array.from(selectedSourceIds)[0];
const sourceDoc = sourceDocs?.find(
(source) =>
source.id === singleSourceId ||
source.retriever === singleSourceId ||
source.name === singleSourceId,
);
let finalSourceId;
if (sourceDoc?.name === 'Default' && !sourceDoc?.id)
finalSourceId = 'default';
else finalSourceId = sourceDoc?.id || singleSourceId;
formData.append('source', String(finalSourceId));
formData.append('sources', JSON.stringify([]));
} else {
formData.append('source', '');
formData.append('sources', JSON.stringify([]));
}
formData.append('source', agent.source);
formData.append('chunks', agent.chunks);
formData.append('retriever', agent.retriever);
formData.append('prompt_id', agent.prompt_id);
@@ -335,7 +267,7 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
const data = await response.json();
const tools: OptionType[] = data.tools.map((tool: UserToolType) => ({
id: tool.id,
label: tool.customName ? tool.customName : tool.displayName,
label: tool.displayName,
icon: `/toolIcons/tool_${tool.name}.svg`,
}));
setUserTools(tools);
@@ -352,26 +284,6 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
getPrompts();
}, [token]);
// Auto-select default source if none selected
useEffect(() => {
if (sourceDocs && sourceDocs.length > 0 && selectedSourceIds.size === 0) {
const defaultSource = sourceDocs.find((s) => s.name === 'Default');
if (defaultSource) {
setSelectedSourceIds(
new Set([
defaultSource.id || defaultSource.retriever || defaultSource.name,
]),
);
} else {
setSelectedSourceIds(
new Set([
sourceDocs[0].id || sourceDocs[0].retriever || sourceDocs[0].name,
]),
);
}
}
}, [sourceDocs, selectedSourceIds.size]);
useEffect(() => {
if ((mode === 'edit' || mode === 'draft') && agentId) {
const getAgent = async () => {
@@ -381,34 +293,10 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
throw new Error('Failed to fetch agent');
}
const data = await response.json();
if (data.sources && data.sources.length > 0) {
const mappedSources = data.sources.map((sourceId: string) => {
if (sourceId === 'default') {
const defaultSource = sourceDocs?.find(
(source) => source.name === 'Default',
);
return defaultSource?.retriever || 'classic';
}
return sourceId;
});
setSelectedSourceIds(new Set(mappedSources));
} else if (data.source) {
if (data.source === 'default') {
const defaultSource = sourceDocs?.find(
(source) => source.name === 'Default',
);
setSelectedSourceIds(
new Set([defaultSource?.retriever || 'classic']),
);
} else {
setSelectedSourceIds(new Set([data.source]));
}
} else if (data.retriever) {
if (data.source) setSelectedSourceIds(new Set([data.source]));
else if (data.retriever)
setSelectedSourceIds(new Set([data.retriever]));
}
if (data.tool_details) setSelectedTools(data.tool_details);
if (data.tools) setSelectedToolIds(new Set(data.tools));
if (data.status === 'draft') setEffectiveMode('draft');
if (data.json_schema) {
const jsonText = JSON.stringify(data.json_schema, null, 2);
@@ -423,68 +311,39 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
}, [agentId, mode, token]);
useEffect(() => {
const selectedSources = Array.from(selectedSourceIds)
.map((id) =>
sourceDocs?.find(
(source) =>
source.id === id || source.retriever === id || source.name === id,
),
)
.filter(Boolean);
if (selectedSources.length > 0) {
// Handle multiple sources
if (selectedSources.length > 1) {
// Multiple sources selected - store in sources array
const sourceIds = selectedSources
.map((source) => source?.id)
.filter((id): id is string => Boolean(id));
const selectedSource = Array.from(selectedSourceIds).map((id) =>
sourceDocs?.find(
(source) =>
source.id === id || source.retriever === id || source.name === id,
),
);
if (selectedSource[0]?.model === embeddingsName) {
if (selectedSource[0] && 'id' in selectedSource[0]) {
setAgent((prev) => ({
...prev,
sources: sourceIds,
source: '', // Clear single source for multiple sources
source: selectedSource[0]?.id || 'default',
retriever: '',
}));
} else {
// Single source selected - maintain backward compatibility
const selectedSource = selectedSources[0];
if (selectedSource?.model === embeddingsName) {
if (selectedSource && 'id' in selectedSource) {
setAgent((prev) => ({
...prev,
source: selectedSource?.id || 'default',
sources: [], // Clear sources array for single source
retriever: '',
}));
} else {
setAgent((prev) => ({
...prev,
source: '',
sources: [], // Clear sources array
retriever: selectedSource?.retriever || 'classic',
}));
}
}
}
} else {
// No sources selected
setAgent((prev) => ({
...prev,
source: '',
sources: [],
retriever: '',
}));
} else
setAgent((prev) => ({
...prev,
source: '',
retriever: selectedSource[0]?.retriever || 'classic',
}));
}
}, [selectedSourceIds]);
useEffect(() => {
const selectedTool = Array.from(selectedToolIds).map((id) =>
userTools.find((tool) => tool.id === id),
);
setAgent((prev) => ({
...prev,
tools: Array.from(selectedTools)
tools: selectedTool
.map((tool) => tool?.id)
.filter((id): id is string => typeof id === 'string'),
}));
}, [selectedTools]);
}, [selectedToolIds]);
useEffect(() => {
if (isPublishable()) dispatch(setSelectedAgent(agent));
@@ -522,7 +381,7 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
</p>
</div>
<div className="mt-5 flex w-full flex-wrap items-center justify-between gap-2 px-4">
<h1 className="text-eerie-black m-0 text-[32px] font-bold lg:text-[40px] dark:text-white">
<h1 className="text-eerie-black m-0 text-[40px] font-bold dark:text-white">
{modeConfig[effectiveMode].heading}
</h1>
<div className="flex flex-wrap items-center gap-1">
@@ -602,7 +461,7 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
onChange={(e) => setAgent({ ...agent, name: e.target.value })}
/>
<textarea
className="border-silver text-jet dark:bg-raisin-black dark:text-bright-gray dark:placeholder:text-silver mt-3 h-32 w-full rounded-xl border bg-white px-5 py-4 text-sm outline-hidden placeholder:text-gray-400 dark:border-[#7E7E7E]"
className="border-silver text-jet dark:bg-raisin-black dark:text-bright-gray dark:placeholder:text-silver mt-3 h-32 w-full rounded-3xl border bg-white px-5 py-4 text-sm outline-hidden placeholder:text-gray-400 dark:border-[#7E7E7E]"
placeholder="Describe your agent"
value={agent.description}
onChange={(e) =>
@@ -640,18 +499,18 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
>
{selectedSourceIds.size > 0
? Array.from(selectedSourceIds)
.map((id) => {
const matchedDoc = sourceDocs?.find(
(source) =>
source.id === id ||
source.name === id ||
source.retriever === id,
);
return matchedDoc?.name || `External KB`;
})
.map(
(id) =>
sourceDocs?.find(
(source) =>
source.id === id ||
source.name === id ||
source.retriever === id,
)?.name,
)
.filter(Boolean)
.join(', ')
: 'Select sources'}
: 'Select source'}
</button>
<MultiSelectPopup
isOpen={isSourcePopupOpen}
@@ -666,38 +525,13 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
}
selectedIds={selectedSourceIds}
onSelectionChange={(newSelectedIds: Set<string | number>) => {
if (
newSelectedIds.size === 0 &&
sourceDocs &&
sourceDocs.length > 0
) {
const defaultSource = sourceDocs.find(
(s) => s.name === 'Default',
);
if (defaultSource) {
setSelectedSourceIds(
new Set([
defaultSource.id ||
defaultSource.retriever ||
defaultSource.name,
]),
);
} else {
setSelectedSourceIds(
new Set([
sourceDocs[0].id ||
sourceDocs[0].retriever ||
sourceDocs[0].name,
]),
);
}
} else {
setSelectedSourceIds(newSelectedIds);
}
setSelectedSourceIds(newSelectedIds);
setIsSourcePopupOpen(false);
}}
title="Select Sources"
title="Select Source"
searchPlaceholder="Search sources..."
noOptionsMessage="No sources available"
noOptionsMessage="No source available"
singleSelect={true}
/>
</div>
<div className="mt-3">
@@ -763,14 +597,16 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
ref={toolAnchorButtonRef}
onClick={() => setIsToolsPopupOpen(!isToolsPopupOpen)}
className={`border-silver dark:bg-raisin-black w-full truncate rounded-3xl border bg-white px-5 py-3 text-left text-sm dark:border-[#7E7E7E] ${
selectedTools.length > 0
selectedToolIds.size > 0
? 'text-jet dark:text-bright-gray'
: 'dark:text-silver text-gray-400'
}`}
>
{selectedTools.length > 0
? selectedTools
.map((tool) => tool.display_name || tool.name)
{selectedToolIds.size > 0
? Array.from(selectedToolIds)
.map(
(id) => userTools.find((tool) => tool.id === id)?.label,
)
.filter(Boolean)
.join(', ')
: 'Select tools'}
@@ -780,17 +616,9 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
onClose={() => setIsToolsPopupOpen(false)}
anchorRef={toolAnchorButtonRef}
options={userTools}
selectedIds={new Set(selectedTools.map((tool) => tool.id))}
selectedIds={selectedToolIds}
onSelectionChange={(newSelectedIds: Set<string | number>) =>
setSelectedTools(
userTools
.filter((tool) => newSelectedIds.has(tool.id))
.map((tool) => ({
id: String(tool.id),
name: tool.label,
display_name: tool.label,
})),
)
setSelectedToolIds(newSelectedIds)
}
title="Select Tools"
searchPlaceholder="Search tools..."

View File

@@ -6,7 +6,7 @@ import { useParams } from 'react-router-dom';
import userService from '../api/services/userService';
import NoFilesDarkIcon from '../assets/no-files-dark.svg';
import NoFilesIcon from '../assets/no-files.svg';
import AgentImage from '../components/AgentImage';
import Robot from '../assets/robot.svg';
import MessageInput from '../components/MessageInput';
import Spinner from '../components/Spinner';
import ConversationMessages from '../conversation/ConversationMessages';
@@ -152,8 +152,12 @@ export default function SharedAgent() {
return (
<div className="relative h-full w-full">
<div className="absolute top-5 left-4 hidden items-center gap-3 sm:flex">
<AgentImage
src={sharedAgent.image}
<img
src={
sharedAgent.image && sharedAgent.image.trim() !== ''
? sharedAgent.image
: Robot
}
alt="agent-logo"
className="h-6 w-6 rounded-full object-contain"
/>

View File

@@ -1,4 +1,4 @@
import AgentImage from '../components/AgentImage';
import Robot from '../assets/robot.svg';
import { Agent } from './types';
export default function SharedAgentCard({ agent }: { agent: Agent }) {
@@ -6,8 +6,8 @@ export default function SharedAgentCard({ agent }: { agent: Agent }) {
<div className="border-dark-gray dark:border-grey flex w-full max-w-[720px] flex-col rounded-3xl border p-6 shadow-xs sm:w-fit sm:min-w-[480px]">
<div className="flex items-center gap-3">
<div className="flex h-12 w-12 items-center justify-center overflow-hidden rounded-full p-1">
<AgentImage
src={agent.image}
<img
src={agent.image && agent.image.trim() !== '' ? agent.image : Robot}
className="h-full w-full rounded-full object-contain"
/>
</div>

View File

@@ -1,20 +1,19 @@
import { createAsyncThunk, createSlice, PayloadAction } from '@reduxjs/toolkit';
import {
handleFetchAnswer,
handleFetchAnswerSteaming,
} from '../conversation/conversationHandlers';
import {
Answer,
ConversationState,
Query,
Status,
} from '../conversation/conversationModels';
import store from '../store';
import {
clearAttachments,
handleFetchAnswer,
handleFetchAnswerSteaming,
} from '../conversation/conversationHandlers';
import {
selectCompletedAttachments,
clearAttachments,
} from '../upload/uploadSlice';
import store from '../store';
const initialState: ConversationState = {
queries: [],

View File

@@ -1,42 +0,0 @@
import userService from '../api/services/userService';
import {
selectAgents,
selectTemplateAgents,
selectSharedAgents,
setAgents,
setTemplateAgents,
setSharedAgents,
} from '../preferences/preferenceSlice';
export const agentSectionsConfig = [
{
id: 'template',
title: 'By DocsGPT',
description: 'Agents provided by DocsGPT',
showNewAgentButton: false,
emptyStateDescription: 'No template agents found.',
fetchAgents: (token: string | null) => userService.getTemplateAgents(token),
selectData: selectTemplateAgents,
updateAction: setTemplateAgents,
},
{
id: 'user',
title: 'By me',
description: 'Agents created or published by you',
showNewAgentButton: true,
emptyStateDescription: 'You don’t have any created agents yet.',
fetchAgents: (token: string | null) => userService.getAgents(token),
selectData: selectAgents,
updateAction: setAgents,
},
{
id: 'shared',
title: 'Shared with me',
description: 'Agents imported by using a public link',
showNewAgentButton: false,
emptyStateDescription: 'No shared agents found.',
fetchAgents: (token: string | null) => userService.getSharedAgents(token),
selectData: selectSharedAgents,
updateAction: setSharedAgents,
},
];

View File

@@ -1,9 +1,37 @@
import { Route, Routes } from 'react-router-dom';
import { SyntheticEvent, useEffect, useRef, useState } from 'react';
import { useDispatch, useSelector } from 'react-redux';
import { Route, Routes, useNavigate } from 'react-router-dom';
import userService from '../api/services/userService';
import Edit from '../assets/edit.svg';
import Link from '../assets/link-gray.svg';
import Monitoring from '../assets/monitoring.svg';
import Pin from '../assets/pin.svg';
import Trash from '../assets/red-trash.svg';
import Robot from '../assets/robot.svg';
import ThreeDots from '../assets/three-dots.svg';
import UnPin from '../assets/unpin.svg';
import ContextMenu, { MenuOption } from '../components/ContextMenu';
import Spinner from '../components/Spinner';
import {
setConversation,
updateConversationId,
} from '../conversation/conversationSlice';
import ConfirmationModal from '../modals/ConfirmationModal';
import { ActiveState } from '../models/misc';
import {
selectAgents,
selectSelectedAgent,
selectSharedAgents,
selectToken,
setAgents,
setSelectedAgent,
setSharedAgents,
} from '../preferences/preferenceSlice';
import AgentLogs from './AgentLogs';
import AgentsList from './AgentsList';
import NewAgent from './NewAgent';
import SharedAgent from './SharedAgent';
import { Agent } from './types';
export default function Agents() {
return (
@@ -16,3 +44,431 @@ export default function Agents() {
</Routes>
);
}
const sectionConfig = {
user: {
title: 'By me',
description: 'Agents created or published by you',
showNewAgentButton: true,
emptyStateDescription: 'You don’t have any created agents yet',
},
shared: {
title: 'Shared with me',
description: 'Agents imported by using a public link',
showNewAgentButton: false,
emptyStateDescription: 'No shared agents found',
},
};
function AgentsList() {
const dispatch = useDispatch();
const token = useSelector(selectToken);
const agents = useSelector(selectAgents);
const sharedAgents = useSelector(selectSharedAgents);
const selectedAgent = useSelector(selectSelectedAgent);
const [loadingUserAgents, setLoadingUserAgents] = useState<boolean>(true);
const [loadingSharedAgents, setLoadingSharedAgents] = useState<boolean>(true);
const getAgents = async () => {
try {
setLoadingUserAgents(true);
const response = await userService.getAgents(token);
if (!response.ok) throw new Error('Failed to fetch agents');
const data = await response.json();
dispatch(setAgents(data));
setLoadingUserAgents(false);
} catch (error) {
console.error('Error:', error);
setLoadingUserAgents(false);
}
};
const getSharedAgents = async () => {
try {
setLoadingSharedAgents(true);
const response = await userService.getSharedAgents(token);
if (!response.ok) throw new Error('Failed to fetch shared agents');
const data = await response.json();
dispatch(setSharedAgents(data));
setLoadingSharedAgents(false);
} catch (error) {
console.error('Error:', error);
setLoadingSharedAgents(false);
}
};
useEffect(() => {
getAgents();
getSharedAgents();
dispatch(setConversation([]));
dispatch(
updateConversationId({
query: { conversationId: null },
}),
);
if (selectedAgent) dispatch(setSelectedAgent(null));
}, [token]);
return (
<div className="p-4 md:p-12">
<h1 className="text-eerie-black mb-0 text-[40px] font-bold dark:text-[#E0E0E0]">
Agents
</h1>
<p className="dark:text-gray-4000 mt-5 text-[15px] text-[#71717A]">
Discover and create custom versions of DocsGPT that combine
instructions, extra knowledge, and any combination of skills
</p>
{/* Premade agents section */}
{/* <div className="mt-6">
<h2 className="text-[18px] font-semibold text-[#18181B] dark:text-[#E0E0E0]">
Premade by DocsGPT
</h2>
<div className="mt-4 flex w-full flex-wrap gap-4">
{Array.from({ length: 5 }, (_, index) => (
<div
key={index}
className="relative flex h-44 w-48 flex-col justify-between rounded-[1.2rem] bg-[#F6F6F6] px-6 py-5 dark:bg-[#383838]"
>
<button onClick={() => {}} className="absolute right-4 top-4">
<img
src={Copy}
alt={'use-agent'}
className="h-[19px] w-[19px]"
/>
</button>
<div className="w-full">
<div className="flex w-full items-center px-1">
<img
src={Robot}
alt="agent-logo"
className="h-7 w-7 rounded-full"
/>
</div>
<div className="mt-2">
<p
title={''}
className="truncate px-1 text-[13px] font-semibold capitalize leading-relaxed text-raisin-black-light dark:text-bright-gray"
>
{}
</p>
<p className="mt-1 h-20 overflow-auto px-1 text-[12px] leading-relaxed text-old-silver dark:text-sonic-silver-light">
{}
</p>
</div>
</div>
<div className="absolute bottom-4 right-4"></div>
</div>
))}
</div>
</div> */}
<AgentSection
agents={agents ?? []}
updateAgents={(updatedAgents) => {
dispatch(setAgents(updatedAgents));
}}
loading={loadingUserAgents}
section="user"
/>
<AgentSection
agents={sharedAgents ?? []}
updateAgents={(updatedAgents) => {
dispatch(setSharedAgents(updatedAgents));
}}
loading={loadingSharedAgents}
section="shared"
/>
</div>
);
}
function AgentSection({
agents,
updateAgents,
loading,
section,
}: {
agents: Agent[];
updateAgents?: (agents: Agent[]) => void;
loading: boolean;
section: keyof typeof sectionConfig;
}) {
const navigate = useNavigate();
return (
<div className="mt-8 flex flex-col gap-4">
<div className="flex w-full items-center justify-between">
<div className="flex flex-col gap-2">
<h2 className="text-[18px] font-semibold text-[#18181B] dark:text-[#E0E0E0]">
{sectionConfig[section].title}
</h2>
<p className="text-[13px] text-[#71717A]">
{sectionConfig[section].description}
</p>
</div>
{sectionConfig[section].showNewAgentButton && (
<button
className="bg-purple-30 hover:bg-violets-are-blue rounded-full px-4 py-2 text-sm text-white"
onClick={() => navigate('/agents/new')}
>
New Agent
</button>
)}
</div>
<div>
{loading ? (
<div className="flex h-72 w-full items-center justify-center">
<Spinner />
</div>
) : agents && agents.length > 0 ? (
<div className="grid grid-cols-1 gap-4 sm:flex sm:flex-wrap">
{agents.map((agent, idx) => (
<AgentCard
key={agent.id}
agent={agent}
agents={agents}
updateAgents={updateAgents}
section={section}
/>
))}
</div>
) : (
<div className="flex h-72 w-full flex-col items-center justify-center gap-3 text-base text-[#18181B] dark:text-[#E0E0E0]">
<p>{sectionConfig[section].emptyStateDescription}</p>
{sectionConfig[section].showNewAgentButton && (
<button
className="bg-purple-30 hover:bg-violets-are-blue ml-2 rounded-full px-4 py-2 text-sm text-white"
onClick={() => navigate('/agents/new')}
>
New Agent
</button>
)}
</div>
)}
</div>
</div>
);
}
function AgentCard({
agent,
agents,
updateAgents,
section,
}: {
agent: Agent;
agents: Agent[];
updateAgents?: (agents: Agent[]) => void;
section: keyof typeof sectionConfig;
}) {
const navigate = useNavigate();
const dispatch = useDispatch();
const token = useSelector(selectToken);
const [isMenuOpen, setIsMenuOpen] = useState<boolean>(false);
const [deleteConfirmation, setDeleteConfirmation] =
useState<ActiveState>('INACTIVE');
const menuRef = useRef<HTMLDivElement>(null);
const togglePin = async () => {
try {
const response = await userService.togglePinAgent(agent.id ?? '', token);
      if (!response.ok) throw new Error('Failed to toggle pin on agent');
const updatedAgents = agents.map((prevAgent) => {
if (prevAgent.id === agent.id)
return { ...prevAgent, pinned: !prevAgent.pinned };
return prevAgent;
});
updateAgents?.(updatedAgents);
} catch (error) {
console.error('Error:', error);
}
};
const handleHideSharedAgent = async () => {
try {
const response = await userService.removeSharedAgent(
agent.id ?? '',
token,
);
if (!response.ok) throw new Error('Failed to hide shared agent');
const updatedAgents = agents.filter(
(prevAgent) => prevAgent.id !== agent.id,
);
updateAgents?.(updatedAgents);
} catch (error) {
console.error('Error:', error);
}
};
const menuOptionsConfig: Record<string, MenuOption[]> = {
user: [
{
icon: Monitoring,
label: 'Logs',
onClick: (e: SyntheticEvent) => {
e.stopPropagation();
navigate(`/agents/logs/${agent.id}`);
},
variant: 'primary',
iconWidth: 14,
iconHeight: 14,
},
{
icon: Edit,
label: 'Edit',
onClick: (e: SyntheticEvent) => {
e.stopPropagation();
navigate(`/agents/edit/${agent.id}`);
},
variant: 'primary',
iconWidth: 14,
iconHeight: 14,
},
...(agent.status === 'published'
? [
{
icon: agent.pinned ? UnPin : Pin,
label: agent.pinned ? 'Unpin' : 'Pin agent',
onClick: (e: SyntheticEvent) => {
e.stopPropagation();
togglePin();
},
variant: 'primary' as const,
iconWidth: 18,
iconHeight: 18,
},
]
: []),
{
icon: Trash,
label: 'Delete',
onClick: (e: SyntheticEvent) => {
e.stopPropagation();
setDeleteConfirmation('ACTIVE');
},
variant: 'danger',
iconWidth: 13,
iconHeight: 13,
},
],
shared: [
{
icon: Link,
label: 'Open',
onClick: (e: SyntheticEvent) => {
e.stopPropagation();
navigate(`/agents/shared/${agent.shared_token}`);
},
variant: 'primary',
iconWidth: 12,
iconHeight: 12,
},
{
icon: agent.pinned ? UnPin : Pin,
label: agent.pinned ? 'Unpin' : 'Pin agent',
onClick: (e: SyntheticEvent) => {
e.stopPropagation();
togglePin();
},
variant: 'primary',
iconWidth: 18,
iconHeight: 18,
},
{
icon: Trash,
label: 'Remove',
onClick: (e: SyntheticEvent) => {
e.stopPropagation();
handleHideSharedAgent();
},
variant: 'danger',
iconWidth: 13,
iconHeight: 13,
},
],
};
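  // The section prop picks the card's context menu; unknown sections fall
  // back to an empty option list below.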
const menuOptions = menuOptionsConfig[section] || [];
const handleClick = () => {
if (section === 'user') {
if (agent.status === 'published') {
dispatch(setSelectedAgent(agent));
navigate(`/`);
}
}
if (section === 'shared') {
navigate(`/agents/shared/${agent.shared_token}`);
}
};
  const handleDelete = async (agentId: string) => {
    try {
      const response = await userService.deleteAgent(agentId, token);
      if (!response.ok) throw new Error('Failed to delete agent');
      const data = await response.json();
      dispatch(
        setAgents(agents.filter((prevAgent) => prevAgent.id !== data.id)),
      );
    } catch (error) {
      console.error('Error:', error);
    }
  };
return (
<div
      className={`relative flex h-44 w-full flex-col justify-between rounded-[1.2rem] bg-[#F6F6F6] px-6 py-5 hover:bg-[#ECECEC] md:w-48 dark:bg-[#383838] dark:hover:bg-[#383838]/80 ${agent.status === 'published' ? 'cursor-pointer' : ''}`}
onClick={(e) => {
e.stopPropagation();
handleClick();
}}
>
<div
ref={menuRef}
onClick={(e) => {
e.stopPropagation();
setIsMenuOpen(true);
}}
className="absolute top-4 right-4 z-10 cursor-pointer"
>
        <img src={ThreeDots} alt="agent options" className="h-[19px] w-[19px]" />
<ContextMenu
isOpen={isMenuOpen}
setIsOpen={setIsMenuOpen}
options={menuOptions}
anchorRef={menuRef}
position="bottom-right"
offset={{ x: 0, y: 0 }}
/>
</div>
<div className="w-full">
<div className="flex w-full items-center gap-1 px-1">
<img
src={agent.image && agent.image.trim() !== '' ? agent.image : Robot}
          alt={agent.name}
className="h-7 w-7 rounded-full object-contain"
/>
{agent.status === 'draft' && (
<p className="text-xs text-black opacity-50 dark:text-[#E0E0E0]">{`(Draft)`}</p>
)}
</div>
<div className="mt-2">
<p
title={agent.name}
className="truncate px-1 text-[13px] leading-relaxed font-semibold text-[#020617] capitalize dark:text-[#E0E0E0]"
>
{agent.name}
</p>
<p className="dark:text-sonic-silver-light mt-1 h-20 overflow-auto px-1 text-[12px] leading-relaxed text-[#64748B]">
{agent.description}
</p>
</div>
</div>
<ConfirmationModal
message="Are you sure you want to delete this agent?"
modalState={deleteConfirmation}
setModalState={setDeleteConfirmation}
submitLabel="Delete"
handleSubmit={() => {
handleDelete(agent.id || '');
setDeleteConfirmation('INACTIVE');
}}
cancelLabel="Cancel"
variant="danger"
/>
</div>
);
}
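
Both components update pin state in Redux only after the request succeeds, using the same map-and-replace pattern on the agents array. A minimal sketch of that pattern in isolation (the helper name is hypothetical, not part of this change):

function withPinToggled(agents: Agent[], id: string): Agent[] {
  // Return a new array, flipping `pinned` only on the matching agent.
  return agents.map((a) => (a.id === id ? { ...a, pinned: !a.pinned } : a));
}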

View File

@@ -10,7 +10,6 @@ export type Agent = {
description: string;
image: string;
source: string;
sources?: string[];
chunks: string;
retriever: string;
prompt_id: string;

View File

@@ -19,8 +19,6 @@ const endpoints = {
SHARED_AGENTS: '/api/shared_agents',
SHARE_AGENT: `/api/share_agent`,
REMOVE_SHARED_AGENT: (id: string) => `/api/remove_shared_agent?id=${id}`,
TEMPLATE_AGENTS: '/api/template_agents',
ADOPT_AGENT: (id: string) => `/api/adopt_agent?id=${id}`,
AGENT_WEBHOOK: (id: string) => `/api/agent_webhook?id=${id}`,
PROMPTS: '/api/get_prompts',
CREATE_PROMPT: '/api/create_prompt',
@@ -59,10 +57,6 @@ const endpoints = {
DIRECTORY_STRUCTURE: (docId: string) =>
`/api/directory_structure?id=${docId}`,
MANAGE_SOURCE_FILES: '/api/manage_source_files',
MCP_TEST_CONNECTION: '/api/mcp_server/test',
MCP_SAVE_SERVER: '/api/mcp_server/save',
MCP_OAUTH_STATUS: (task_id: string) =>
`/api/mcp_server/oauth_status/${task_id}`,
},
CONVERSATION: {
ANSWER: '/api/answer',
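
The endpoint map mixes plain paths with parameterized helpers. A hedged sketch of consuming one of each through the apiClient wrapper, mirroring the call shapes visible in userService below (agentId and token are assumed to be in scope):

// Sketch only; both call shapes appear in userService in this diff.
const shared = await apiClient.get(endpoints.USER.SHARED_AGENTS, token);
const webhook = await apiClient.get(endpoints.USER.AGENT_WEBHOOK(agentId), token);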

View File

@@ -1,6 +1,6 @@
import { getSessionToken } from '../../utils/providerUtils';
import apiClient from '../client';
import endpoints from '../endpoints';
import { getSessionToken } from '../../utils/providerUtils';
const userService = {
getConfig: (): Promise<any> => apiClient.get(endpoints.USER.CONFIG, null),
@@ -44,10 +44,6 @@ const userService = {
apiClient.put(endpoints.USER.SHARE_AGENT, data, token),
removeSharedAgent: (id: string, token: string | null): Promise<any> =>
apiClient.delete(endpoints.USER.REMOVE_SHARED_AGENT(id), token),
getTemplateAgents: (token: string | null): Promise<any> =>
apiClient.get(endpoints.USER.TEMPLATE_AGENTS, token),
adoptAgent: (id: string, token: string | null): Promise<any> =>
apiClient.post(endpoints.USER.ADOPT_AGENT(id), {}, token),
getAgentWebhook: (id: string, token: string | null): Promise<any> =>
apiClient.get(endpoints.USER.AGENT_WEBHOOK(id), token),
getPrompts: (token: string | null): Promise<any> =>
@@ -94,10 +90,7 @@ const userService = {
path?: string,
search?: string,
): Promise<any> =>
apiClient.get(
endpoints.USER.GET_CHUNKS(docId, page, perPage, path, search),
token,
),
apiClient.get(endpoints.USER.GET_CHUNKS(docId, page, perPage, path, search), token),
addChunk: (data: any, token: string | null): Promise<any> =>
apiClient.post(endpoints.USER.ADD_CHUNK, data, token),
deleteChunk: (
@@ -112,26 +105,16 @@ const userService = {
apiClient.get(endpoints.USER.DIRECTORY_STRUCTURE(docId), token),
manageSourceFiles: (data: FormData, token: string | null): Promise<any> =>
apiClient.postFormData(endpoints.USER.MANAGE_SOURCE_FILES, data, token),
testMCPConnection: (data: any, token: string | null): Promise<any> =>
apiClient.post(endpoints.USER.MCP_TEST_CONNECTION, data, token),
saveMCPServer: (data: any, token: string | null): Promise<any> =>
apiClient.post(endpoints.USER.MCP_SAVE_SERVER, data, token),
getMCPOAuthStatus: (task_id: string, token: string | null): Promise<any> =>
apiClient.get(endpoints.USER.MCP_OAUTH_STATUS(task_id), token),
syncConnector: (
docId: string,
provider: string,
token: string | null,
): Promise<any> => {
syncConnector: (docId: string, provider: string, token: string | null): Promise<any> => {
const sessionToken = getSessionToken(provider);
return apiClient.post(
endpoints.USER.SYNC_CONNECTOR,
{
source_id: docId,
session_token: sessionToken,
provider: provider,
provider: provider
},
token,
token
);
},
};
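
syncConnector resolves the provider session token internally via getSessionToken, so callers only pass identifiers. A hedged usage sketch (the provider id is an assumption, not taken from this diff):

// 'google_drive' is a hypothetical provider id.
await userService.syncConnector(docId, 'google_drive', token);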

View File

@@ -1,3 +0,0 @@
<svg width="24" height="25" viewBox="0 0 24 25" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M12 2.5C17.523 2.5 22 6.977 22 12.5C22 18.023 17.523 22.5 12 22.5C6.477 22.5 2 18.023 2 12.5C2 6.977 6.477 2.5 12 2.5ZM15.22 9.47L10.75 13.94L8.78 11.97C8.63783 11.8375 8.44978 11.7654 8.25548 11.7688C8.06118 11.7723 7.87579 11.851 7.73838 11.9884C7.60097 12.1258 7.52225 12.3112 7.51883 12.5055C7.5154 12.6998 7.58752 12.8878 7.72 13.03L10.22 15.53C10.3606 15.6705 10.5512 15.7493 10.75 15.7493C10.9488 15.7493 11.1394 15.6705 11.28 15.53L16.28 10.53C16.3537 10.4613 16.4128 10.3785 16.4538 10.2865C16.4948 10.1945 16.5168 10.0952 16.5186 9.99452C16.5204 9.89382 16.5018 9.79379 16.4641 9.7004C16.4264 9.60701 16.3703 9.52218 16.299 9.45096C16.2278 9.37974 16.143 9.3236 16.0496 9.28588C15.9562 9.24816 15.8562 9.22963 15.7555 9.23141C15.6548 9.23318 15.5555 9.25523 15.4635 9.29622C15.3715 9.33721 15.2887 9.39631 15.22 9.47Z" fill="#0C9D35"/>
</svg>

Before: 958 B

View File

@@ -1 +1 @@
<svg width="16px" height="16px" viewBox="0 0 1024 1024" class="icon" version="1.1" xmlns="http://www.w3.org/2000/svg" fill="#11ee1c" stroke="#11ee1c" stroke-width="83.96799999999999"><g id="SVGRepo_bgCarrier" stroke-width="0"></g><g id="SVGRepo_tracerCarrier" stroke-linecap="round" stroke-linejoin="round"></g><g id="SVGRepo_iconCarrier"><path d="M866.133333 258.133333L362.666667 761.6l-204.8-204.8L98.133333 618.666667 362.666667 881.066667l563.2-563.2z" fill="#0C9D35"></path></g></svg>
<svg width="16px" height="16px" viewBox="0 0 1024 1024" class="icon" version="1.1" xmlns="http://www.w3.org/2000/svg" fill="#11ee1c" stroke="#11ee1c" stroke-width="83.96799999999999"><g id="SVGRepo_bgCarrier" stroke-width="0"></g><g id="SVGRepo_tracerCarrier" stroke-linecap="round" stroke-linejoin="round"></g><g id="SVGRepo_iconCarrier"><path d="M866.133333 258.133333L362.666667 761.6l-204.8-204.8L98.133333 618.666667 362.666667 881.066667l563.2-563.2z" fill="#11ee1c"></path></g></svg>

Before: 490 B | After: 490 B

View File

@@ -1,3 +0,0 @@
<svg width="22" height="22" viewBox="0 0 22 22" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M20.2891 15.81L21.7091 14.39L18.4991 11.21L15.4991 10.36L17.4091 10.1L21.5991 6.89999L20.3991 5.29998L16.5891 8.14999L13.9091 8.59999L17.1091 5.40999L15.9991 0.859985L13.9991 1.33999L14.8591 4.78999L13.7591 5.92999C13.5285 5.38882 13.144 4.92736 12.6533 4.60302C12.1625 4.27867 11.5873 4.10574 10.9991 4.10574C10.4108 4.10574 9.83559 4.27867 9.34487 4.60302C8.85414 4.92736 8.4696 5.38882 8.23906 5.92999L7.10906 4.78999L7.99906 1.33999L5.99906 0.859985L4.88906 5.40999L8.08906 8.59999L5.39906 8.14999L1.59906 5.29998L0.399063 6.89999L4.59906 10.1L6.45906 10.41L3.45906 11.26L0.289062 14.39L1.70906 15.81L4.49906 12.99L6.86906 12.32L2.99906 15.64V21.1H4.99906V16.56L6.55906 15.22C6.73264 16.2723 7.27432 17.2287 8.08751 17.9188C8.90071 18.6088 9.93255 18.9876 10.9991 18.9876C12.0656 18.9876 13.0974 18.6088 13.9106 17.9188C14.7238 17.2287 15.2655 16.2723 15.4391 15.22L16.9991 16.56V21.1H18.9991V15.64L15.1291 12.32L17.4991 12.99L20.2891 15.81Z" fill="black"/>
</svg>

Before: 1.0 KiB

View File

@@ -1,3 +0,0 @@
<svg width="24" height="22" viewBox="0 0 24 22" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M12.01 0.784912C9.928 0.784912 8.256 0.804912 8.267 0.831912C8.277 0.851912 9.975 3.83291 12.041 7.45191L15.801 14.0259H19.561C21.642 14.0259 23.314 14.0059 23.303 13.9789C23.298 13.9589 21.595 10.9779 19.528 7.35891L15.768 0.784912H12.01ZM7.25 2.51491C6.03029 4.61565 4.82028 6.72201 3.62 8.83391L0 15.1679L1.89 18.4659L3.775 21.7629L7.395 15.4279L11.013 9.09791L9.133 5.81091C8.1 4.00391 7.255 2.52091 7.25 2.51491ZM9.509 15.1679L9.306 15.5159C9.192 15.7139 8.346 17.1879 7.426 18.8029C6.864 19.7952 6.29799 20.7852 5.728 21.7729C5.718 21.7989 8.968 21.8149 12.95 21.8149H20.194L21.99 18.6579C22.982 16.9239 23.84 15.4279 23.896 15.3349L24 15.1679H16.751H9.509Z" fill="black"/>
</svg>

Before: 792 B

View File

@@ -1,4 +0,0 @@
<svg width="20" height="21" viewBox="0 0 20 21" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M15.8984 5.5H7.22656C5.99687 5.5 5 6.49687 5 7.72656V16.3984C5 17.6281 5.99687 18.625 7.22656 18.625H15.8984C17.1281 18.625 18.125 17.6281 18.125 16.3984V7.72656C18.125 6.49687 17.1281 5.5 15.8984 5.5Z" stroke="#949494" stroke-width="1.25" stroke-linejoin="round"/>
<path d="M14.9805 5.5L15 4.5625C14.9984 3.98285 14.7674 3.4274 14.3575 3.01753C13.9476 2.60765 13.3922 2.37665 12.8125 2.375H4.375C3.71256 2.37696 3.07781 2.64098 2.6094 3.1094C2.14098 3.57781 1.87696 4.21256 1.875 4.875V13.3125C1.87665 13.8922 2.10765 14.4476 2.51753 14.8575C2.9274 15.2674 3.48285 15.4984 4.0625 15.5H5M11.5625 8.9375V15.1875M14.6875 12.0625H8.4375" stroke="#949494" stroke-width="1.25" stroke-linecap="round" stroke-linejoin="round"/>
</svg>

Before: 833 B

View File

@@ -1,10 +1,3 @@
<svg width="24" height="25" viewBox="0 0 24 25" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_9890_21170)">
<path d="M12.75 19.1V8.91248L9.5 12.1625L7.75 10.35L14 4.09998L20.25 10.35L18.5 12.1625L15.25 8.91248V19.1H12.75ZM6.5 24.1C5.8125 24.1 5.22417 23.8554 4.735 23.3662C4.24583 22.8771 4.00083 22.2883 4 21.6V17.85H6.5V21.6H21.5V17.85H24V21.6C24 22.2875 23.7554 22.8762 23.2663 23.3662C22.7771 23.8562 22.1883 24.1008 21.5 24.1H6.5Z" fill="black"/>
</g>
<defs>
<clipPath id="clip0_9890_21170">
<rect width="24" height="24" fill="white" transform="translate(0 0.0999756)"/>
</clipPath>
</defs>
</svg>
<svg width="28" height="34" viewBox="0 0 28 34" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M10 26.0003H18C19.1 26.0003 20 25.1003 20 24.0003V14.0003H23.18C24.96 14.0003 25.86 11.8403 24.6 10.5803L15.42 1.40032C15.235 1.21491 15.0152 1.06782 14.7732 0.967453C14.5313 0.86709 14.2719 0.81543 14.01 0.81543C13.7481 0.81543 13.4887 0.86709 13.2468 0.967453C13.0048 1.06782 12.785 1.21491 12.6 1.40032L3.42 10.5803C2.16 11.8403 3.04 14.0003 4.82 14.0003H8V24.0003C8 25.1003 8.9 26.0003 10 26.0003ZM2 30.0003H26C27.1 30.0003 28 30.9003 28 32.0003C28 33.1003 27.1 34.0003 26 34.0003H2C0.9 34.0003 0 33.1003 0 32.0003C0 30.9003 0.9 30.0003 2 30.0003Z" fill="#949494"/>
</svg>

Before: 630 B | After: 681 B

View File

@@ -1,3 +0,0 @@
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M20.8175 3.09139C20.0845 2.34392 19.2025 1.9707 18.1705 1.9707H5.6835C4.6515 1.9707 3.7695 2.34392 3.0365 3.09139C2.3035 3.83885 1.9375 4.73825 1.9375 5.79061V18.524C1.9375 19.5763 2.3035 20.4757 3.0365 21.2232C3.7695 21.9707 4.6515 22.3439 5.6835 22.3439H8.5975C8.7875 22.3439 8.9305 22.3368 9.0265 22.3235C9.13819 22.3007 9.23901 22.2399 9.3125 22.1512C9.4075 22.0492 9.4555 21.9013 9.4555 21.7076L9.4485 20.8051C9.4445 20.23 9.4425 19.7752 9.4425 19.4387L9.1425 19.4917C8.9525 19.5274 8.7125 19.5427 8.4215 19.5386C8.11819 19.5329 7.81584 19.5019 7.5175 19.4458C7.1999 19.386 6.90093 19.2497 6.6455 19.0481C6.37799 18.8418 6.17847 18.5572 6.0735 18.2323L5.9435 17.9264C5.83393 17.6851 5.69627 17.4581 5.5335 17.2503C5.3475 17.0025 5.1585 16.8353 4.9675 16.7466L4.8775 16.6803C4.81474 16.6345 4.75766 16.5811 4.7075 16.5212C4.65959 16.4657 4.62015 16.4031 4.5905 16.3356C4.5645 16.2734 4.5865 16.2224 4.6555 16.1827C4.7255 16.1419 4.8505 16.1225 5.0335 16.1225L5.2935 16.1633C5.4665 16.198 5.6815 16.304 5.9365 16.4804C6.19456 16.6598 6.41013 16.8957 6.5675 17.1708C6.7675 17.5328 7.0075 17.8091 7.2895 17.9998C7.5715 18.1895 7.8555 18.2854 8.1415 18.2854C8.4275 18.2854 8.6745 18.2629 8.8835 18.2191C9.08561 18.1765 9.28201 18.1094 9.4685 18.0192C9.5465 17.4278 9.7585 16.9709 10.1055 16.6528C9.65588 16.6078 9.21026 16.5281 8.7725 16.4142C8.34529 16.2945 7.93444 16.1208 7.5495 15.8972C7.14675 15.6736 6.79101 15.3714 6.5025 15.008C6.2255 14.6541 5.9975 14.1901 5.8195 13.616C5.6425 13.0409 5.5535 12.377 5.5535 11.6255C5.5535 10.5558 5.8955 9.64519 6.5805 8.89263C6.2605 8.08908 6.2905 7.18662 6.6715 6.18831C6.9235 6.10775 7.2965 6.16791 7.7905 6.36676C8.2845 6.56561 8.6465 6.7359 8.8765 6.87662C9.1065 7.01939 9.2905 7.13869 9.4295 7.23557C10.2425 7.00486 11.0826 6.88889 11.9265 6.8909C12.7855 6.8909 13.6175 7.00613 14.4245 7.23557L14.9185 6.91741C15.2985 6.68476 15.6993 6.48946 16.1155 6.33413C16.5755 6.15669 16.9255 6.10877 17.1695 6.18831C17.5595 7.18764 17.5935 8.08908 17.2725 8.89365C17.9575 9.64519 18.3005 10.5558 18.3005 11.6265C18.3005 12.3781 18.2115 13.044 18.0335 13.6221C17.8565 14.2013 17.6265 14.6653 17.3445 15.0151C17.0509 15.3739 16.6937 15.6731 16.2915 15.8972C15.8715 16.1358 15.4635 16.3081 15.0685 16.4142C14.6308 16.5284 14.1852 16.6085 13.7355 16.6538C14.1855 17.0515 14.4115 17.6786 14.4115 18.5362V21.7076C14.4115 21.8575 14.4325 21.9788 14.4765 22.0716C14.4967 22.1163 14.5256 22.1564 14.5613 22.1895C14.597 22.2226 14.6389 22.2481 14.6845 22.2643C14.7805 22.299 14.8645 22.3215 14.9385 22.3296C15.0125 22.3398 15.1185 22.3429 15.2565 22.3429H18.1705C19.2025 22.3429 20.0845 21.9696 20.8175 21.2222C21.5495 20.4757 21.9165 19.5753 21.9165 18.523V5.79061C21.9165 4.73825 21.5505 3.83885 20.8175 3.09139Z" fill="#747474"/>
</svg>

Before: 2.8 KiB

View File

@@ -1,3 +1,5 @@
<svg width="20" height="20" viewBox="0 0 20 20" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M10 0.299927C8.68678 0.299927 7.38642 0.558584 6.17317 1.06113C4.95991 1.56368 3.85752 2.30027 2.92893 3.22886C1.05357 5.10422 0 7.64776 0 10.2999C0 14.7199 2.87 18.4699 6.84 19.7999C7.34 19.8799 7.5 19.5699 7.5 19.2999V17.6099C4.73 18.2099 4.14 16.2699 4.14 16.2699C3.68 15.1099 3.03 14.7999 3.03 14.7999C2.12 14.1799 3.1 14.1999 3.1 14.1999C4.1 14.2699 4.63 15.2299 4.63 15.2299C5.5 16.7499 6.97 16.2999 7.54 16.0599C7.63 15.4099 7.89 14.9699 8.17 14.7199C5.95 14.4699 3.62 13.6099 3.62 9.79993C3.62 8.68993 4 7.79993 4.65 7.08993C4.55 6.83993 4.2 5.79993 4.75 4.44993C4.75 4.44993 5.59 4.17993 7.5 5.46993C8.29 5.24993 9.15 5.13993 10 5.13993C10.85 5.13993 11.71 5.24993 12.5 5.46993C14.41 4.17993 15.25 4.44993 15.25 4.44993C15.8 5.79993 15.45 6.83993 15.35 7.08993C16 7.79993 16.38 8.68993 16.38 9.79993C16.38 13.6199 14.04 14.4599 11.81 14.7099C12.17 15.0199 12.5 15.6299 12.5 16.5599V19.2999C12.5 19.5699 12.66 19.8899 13.17 19.7999C17.14 18.4599 20 14.7199 20 10.2999C20 8.98671 19.7413 7.68635 19.2388 6.47309C18.7362 5.25984 17.9997 4.15744 17.0711 3.22886C16.1425 2.30027 15.0401 1.56368 13.8268 1.06113C12.6136 0.558584 11.3132 0.299927 10 0.299927Z" fill="black"/>
<svg width="800px" height="800px" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg">
<title>github</title>
<rect width="24" height="24" fill="none"/>
<path d="M12,2A10,10,0,0,0,8.84,21.5c.5.08.66-.23.66-.5V19.31C6.73,19.91,6.14,18,6.14,18A2.69,2.69,0,0,0,5,16.5c-.91-.62.07-.6.07-.6a2.1,2.1,0,0,1,1.53,1,2.15,2.15,0,0,0,2.91.83,2.16,2.16,0,0,1,.63-1.34C8,16.17,5.62,15.31,5.62,11.5a3.87,3.87,0,0,1,1-2.71,3.58,3.58,0,0,1,.1-2.64s.84-.27,2.75,1a9.63,9.63,0,0,1,5,0c1.91-1.29,2.75-1,2.75-1a3.58,3.58,0,0,1,.1,2.64,3.87,3.87,0,0,1,1,2.71c0,3.82-2.34,4.66-4.57,4.91a2.39,2.39,0,0,1,.69,1.85V21c0,.27.16.59.67.5A10,10,0,0,0,12,2Z" fill="black" fill-opacity="0.54"/>
</svg>

Before: 1.3 KiB | After: 679 B

View File

@@ -1,4 +0,0 @@
<svg width="24" height="25" viewBox="0 0 24 25" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M10.7519 13.3399C10.7519 12.7699 10.2819 12.2999 9.71187 12.2999C9.14187 12.2999 8.67188 12.7699 8.67188 13.3399C8.67188 13.6158 8.78145 13.8803 8.97648 14.0753C9.17152 14.2704 9.43605 14.3799 9.71187 14.3799C9.9877 14.3799 10.2522 14.2704 10.4473 14.0753C10.6423 13.8803 10.7519 13.6158 10.7519 13.3399ZM14.0919 15.7099C13.6419 16.1599 12.6819 16.3199 12.0019 16.3199C11.3219 16.3199 10.3619 16.1599 9.91187 15.7099C9.88755 15.6839 9.85813 15.6631 9.82545 15.6489C9.79276 15.6347 9.75751 15.6274 9.72188 15.6274C9.68624 15.6274 9.65099 15.6347 9.6183 15.6489C9.58562 15.6631 9.5562 15.6839 9.53187 15.7099C9.50583 15.7343 9.48507 15.7637 9.47088 15.7964C9.45668 15.829 9.44936 15.8643 9.44936 15.8999C9.44936 15.9356 9.45668 15.9708 9.47088 16.0035C9.48507 16.0362 9.50583 16.0656 9.53187 16.0899C10.2419 16.7999 11.6019 16.8599 12.0019 16.8599C12.4019 16.8599 13.7619 16.7999 14.4719 16.0899C14.4979 16.0656 14.5187 16.0362 14.5329 16.0035C14.5471 15.9708 14.5544 15.9356 14.5544 15.8999C14.5544 15.8643 14.5471 15.829 14.5329 15.7964C14.5187 15.7637 14.4979 15.7343 14.4719 15.7099C14.3719 15.6099 14.2019 15.6099 14.0919 15.7099ZM14.2919 12.2999C13.7219 12.2999 13.2519 12.7699 13.2519 13.3399C13.2519 13.9099 13.7219 14.3799 14.2919 14.3799C14.8619 14.3799 15.3319 13.9099 15.3319 13.3399C15.3319 12.7699 14.8719 12.2999 14.2919 12.2999Z" fill="black"/>
<path d="M12 2.29993C6.48 2.29993 2 6.77993 2 12.2999C2 17.8199 6.48 22.2999 12 22.2999C17.52 22.2999 22 17.8199 22 12.2999C22 6.77993 17.52 2.29993 12 2.29993ZM17.8 13.6299C17.82 13.7699 17.83 13.9199 17.83 14.0699C17.83 16.3099 15.22 18.1299 12 18.1299C8.78 18.1299 6.17 16.3099 6.17 14.0699C6.17 13.9199 6.18 13.7699 6.2 13.6299C5.69 13.3999 5.34 12.8899 5.34 12.2999C5.33852 12.0132 5.4218 11.7324 5.57939 11.4928C5.73698 11.2532 5.96185 11.0656 6.22576 10.9534C6.48966 10.8412 6.78083 10.8095 7.06269 10.8622C7.34456 10.915 7.60454 11.0499 7.81 11.2499C8.82 10.5199 10.22 10.0599 11.77 10.0099L12.51 6.51993C12.52 6.44993 12.56 6.38993 12.62 6.35993C12.68 6.31993 12.75 6.30993 12.82 6.31993L15.24 6.83993C15.3221 6.67351 15.4472 6.53207 15.6023 6.4303C15.7575 6.32853 15.9371 6.27013 16.1224 6.26115C16.3077 6.25217 16.4921 6.29294 16.6564 6.37924C16.8207 6.46553 16.9589 6.59421 17.0566 6.75191C17.1544 6.90962 17.2082 7.09062 17.2125 7.27613C17.2167 7.46164 17.1712 7.64491 17.0808 7.80692C16.9903 7.96894 16.8582 8.1038 16.698 8.19753C16.5379 8.29125 16.3556 8.34042 16.17 8.33993C15.61 8.33993 15.16 7.89993 15.13 7.34993L12.96 6.88993L12.3 10.0099C13.83 10.0599 15.2 10.5299 16.2 11.2499C16.3533 11.1035 16.5367 10.9924 16.7375 10.9243C16.9382 10.8562 17.1514 10.8328 17.3621 10.8557C17.5728 10.8787 17.776 10.9473 17.9574 11.057C18.1388 11.1666 18.2941 11.3145 18.4123 11.4905C18.5306 11.6664 18.609 11.866 18.642 12.0754C18.6751 12.2847 18.662 12.4988 18.6037 12.7026C18.5454 12.9064 18.4432 13.0949 18.3044 13.2551C18.1656 13.4153 17.9934 13.5432 17.8 13.6299Z" fill="black"/>
</svg>

Before: 3.0 KiB

View File

@@ -1 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" width="1em" height="1em" viewBox="0 0 24 24"><path fill="black" d="M10.72,19.9a8,8,0,0,1-6.5-9.79A7.77,7.77,0,0,1,10.4,4.16a8,8,0,0,1,9.49,6.52A1.54,1.54,0,0,0,21.38,12h.13a1.37,1.37,0,0,0,1.38-1.54,11,11,0,1,0-12.7,12.39A1.54,1.54,0,0,0,12,21.34h0A1.47,1.47,0,0,0,10.72,19.9Z"><animateTransform attributeName="transform" dur="0.75s" repeatCount="indefinite" type="rotate" values="0 12 12;360 12 12"/></path></svg>
<svg xmlns="http://www.w3.org/2000/svg" width="1em" height="1em" viewBox="0 0 24 24"><path fill="white" d="M10.72,19.9a8,8,0,0,1-6.5-9.79A7.77,7.77,0,0,1,10.4,4.16a8,8,0,0,1,9.49,6.52A1.54,1.54,0,0,0,21.38,12h.13a1.37,1.37,0,0,0,1.38-1.54,11,11,0,1,0-12.7,12.39A1.54,1.54,0,0,0,12,21.34h0A1.47,1.47,0,0,0,10.72,19.9Z"><animateTransform attributeName="transform" dur="0.75s" repeatCount="indefinite" type="rotate" values="0 12 12;360 12 12"/></path></svg>

Before: 454 B | After: 454 B

Some files were not shown because too many files have changed in this diff.