Compare commits


270 Commits

Author SHA1 Message Date
Alex
e6d64f71f2 Update HACKTOBERFEST.md 2025-10-02 16:58:01 +01:00
Alex
e72313ebdd Merge pull request #2002 from ManishMadan2882/main
Frontend Lint
2025-10-02 12:35:59 +01:00
ManishMadan2882
65d5bd72cd Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-10-02 16:59:13 +05:30
ManishMadan2882
dc0cbb41f0 (fix:table) remove display name 2025-10-02 16:50:59 +05:30
ManishMadan2882
c4a54a85be (lint) fe 2025-10-02 16:30:08 +05:30
Alex
c2ccf2c72c Merge pull request #2000 from ManishMadan2882/main
Test coverage:  TTS, Security and Storage layers
2025-10-02 00:11:22 +01:00
ManishMadan2882
80aaecb5f0 ruff-fix 2025-10-02 02:48:16 +05:30
ManishMadan2882
946865a335 test:TTS 2025-10-02 02:40:30 +05:30
ManishMadan2882
5de15c8413 (feat:11Labs) just functional part 2025-10-02 02:39:53 +05:30
ManishMadan2882
67268fd35a tests:security 2025-10-02 02:34:05 +05:30
ManishMadan2882
42fc771833 tests:storage 2025-10-02 02:33:12 +05:30
Alex
4d34dc4234 Merge pull request #1996 from arc53/pr/1988
Pr/1988
2025-10-01 10:29:20 +01:00
Alex
d567399f2b Merge pull request #1995 from siiddhantt/fix/agent-sources
fix: agent sources and other issues
2025-10-01 10:19:47 +01:00
Alex
77f4f8d8b0 (refactor) update launch configurations and remove obsolete tasks.json 2025-10-01 10:17:52 +01:00
Alex
a2d04beaa1 (refactor) remove redundant source validation in CreateAgent 2025-10-01 10:17:26 +01:00
Siddhant Rai
ba49eea23d Refactor agent creation and update logic to improve error handling and default values; enhance logging for better traceability 2025-10-01 13:56:31 +05:30
Alex
82beafc086 Merge pull request #1991 from ManishMadan2882/tester
Tests: coverage for application/llm/* and application/llm/handlers/* ; fix on parsers/test_markdown
2025-09-30 23:19:38 +01:00
ManishMadan2882
7d8ed2d102 ruff-fix 2025-10-01 02:23:53 +05:30
ManishMadan2882
aab8d3a4f1 Merge branch 'tester' of https://github.com/manishmadan2882/docsgpt into tester 2025-10-01 01:58:52 +05:30
Alex
76658d50a0 Update HACKTOBERFEST.md 2025-09-30 21:46:09 +03:00
Alex
88ba22342c Update README with Hacktoberfest details and demo
Added Hacktoberfest information and a demo GIF.
2025-09-30 21:45:55 +03:00
Alex
11a1460af9 Add Hacktoberfest participation details to HACKTOBERFEST.md
This document outlines the participation of DocsGPT in Hacktoberfest, encouraging contributors to submit meaningful pull requests for a chance to win a T-shirt. It includes guidelines for contributions and links to resources.
2025-09-30 21:42:13 +03:00
Alex
2cd4c41316 Merge pull request #1992 from arc53/fix-api-answer-tool-call
fix: api answer tool call event
2025-09-30 14:49:57 +01:00
Alex
b910f308f2 fix: api answer tool call event 2025-09-30 14:42:54 +01:00
ManishMadan2882
763aa73ea4 (tests:llm) llms, handlers 2025-09-30 16:03:02 +05:30
Manish Madan
30c79e92d4 Merge branch 'arc53:main' into tester 2025-09-30 15:52:23 +05:30
ManishMadan2882
402d5e054b (fix:test_markdown) patch tokenizer 2025-09-30 13:35:03 +05:30
Alex
0e211df206 Merge pull request #1988 from ManishMadan2882/tester
Test coverage for parsers
2025-09-29 17:42:27 +01:00
ManishMadan2882
e24a0ac686 (test:parsers) github, reddit 2025-09-29 20:33:05 +05:30
ManishMadan2882
8c91b1c527 (tests:parsers) remote 2025-09-29 19:39:24 +05:30
ManishMadan2882
2b38f80d04 (test:files) epub, image, rst 2025-09-29 17:39:20 +05:30
ManishMadan2882
282bd35f52 (test:files) html, md, json 2025-09-29 17:23:15 +05:30
Alex
cc9b4c2bcb Merge pull request #1964 from siiddhantt/refine/mcp-tool
refactor: oauth + use fastmcp client for handling SSE and different transports in remote mcp
2025-09-26 13:54:40 +01:00
Alex
068ce4970a Merge branch 'refine/mcp-tool' of https://github.com/siiddhantt/DocsGPT into pr/1964 2025-09-26 13:42:34 +01:00
Alex
cf19165ad8 fix(lint): ruff 2025-09-26 13:42:08 +01:00
Alex
68c479f3a5 Merge branch 'main' into refine/mcp-tool 2025-09-26 13:40:12 +01:00
ManishMadan2882
ba496a772b (test) doc parsers coverage 2025-09-26 16:07:12 +05:30
Siddhant Rai
3b27db36f2 feat: Implement OAuth flow for MCP server integration
- Added MCPOAuthManager to handle OAuth authorization.
- Updated MCPServerSave resource to manage OAuth status and callback.
- Introduced new endpoints for OAuth status and callback handling.
- Enhanced user interface to support OAuth authentication type.
- Implemented polling mechanism for OAuth status in MCPServerModal.
- Updated frontend services and endpoints to accommodate new OAuth features.
- Improved error handling and user feedback for OAuth processes.
2025-09-26 02:44:08 +05:30
Alex
f803def69b Merge pull request #1976 from ManishMadan2882/main
OAuth fix for connector
2025-09-25 10:15:13 +01:00
ManishMadan2882
52065e69a4 ruff 2025-09-25 05:05:59 +05:30
ManishMadan2882
50f5e8a955 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-09-25 04:58:05 +05:30
ManishMadan2882
2d0e97b66d (feat:oauth) provider as state, not args 2025-09-25 04:55:25 +05:30
Manish Madan
5f3cc5a392 Merge branch 'arc53:main' into main 2025-09-25 03:50:50 +05:30
ManishMadan2882
ac66d77512 (fix:oauth) handle access denied 2025-09-25 03:45:12 +05:30
Alex
50cf653d4a Merge pull request #1975 from arc53/chunk-fix
fix: chunking
2025-09-24 23:05:41 +01:00
Alex
56256051d2 fix: chunking 2025-09-24 22:59:53 +01:00
ManishMadan2882
c0361ff03d (fix:oauth) user field null 2025-09-25 03:27:16 +05:30
Alex
f153435c08 Merge pull request #1974 from ManishMadan2882/tester
Test app fix
2025-09-24 10:12:29 +01:00
Alex
9aa7f22fa6 Merge pull request #1961 from NewAi25/feature-letmecheck-fix-Identica-Sub-Expression
Added fix in frontend/src/conversation/ConversationMessages.tsx line…
2025-09-24 09:35:12 +01:00
ManishMadan2882
52b7bda5f8 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt into tester 2025-09-24 02:37:20 +05:30
ManishMadan2882
21aefa2778 (fix:tests.application) attr err 2025-09-23 23:43:50 +05:30
Alex
a89ff71c9e Update README.md 2025-09-23 12:48:49 +03:00
Alex
4c275816be Merge pull request #1954 from ManishMadan2882/main
Refinements: Files uploaded via Connectors
2025-09-22 21:54:48 +01:00
ManishMadan2882
f8dfbcfc80 (docs) guide gdrive 2025-09-22 20:20:51 +05:30
ManishMadan2882
d317f6473d (feat:gdrive) upload files only 2025-09-22 20:19:56 +05:30
Siddhant Rai
00b4e133d4 feat: implement OAuth 2.1 integration with custom handlers for fastmcp 2025-09-22 01:31:09 +05:30
ManishMadan2882
b6349e4efb (fix:gdrive) no api keyneded 2025-09-20 19:55:18 +05:30
ManishMadan2882
6ca3d9585c (fix:connector) db entry 2025-09-20 19:52:12 +05:30
ManishMadan2882
5935a0283a (fix:ui)adjust 2025-09-20 19:50:58 +05:30
ManishMadan2882
5400a6ec06 (feat:googlePicker) frontend envars 2025-09-20 00:13:03 +05:30
ManishMadan2882
6574d9cc84 (feat:pickers) ux, code refactor 2025-09-20 00:04:27 +05:30
ManishMadan2882
42b83c5994 (refactor:upload) single schema with validation 2025-09-19 23:55:40 +05:30
ManishMadan2882
896612a5a3 (fix:auth) refresh drive token 2025-09-19 22:57:09 +05:30
ManishMadan2882
0ee875bee4 (fix:gdrive) missing appId in picker 2025-09-18 20:25:20 +05:30
Siddhant Rai
8ce345cd94 feat: refactor MCPTool to use FastMCP client and improve async handling, update transport and authentication configurations 2025-09-17 20:51:32 +05:30
ManishMadan2882
da2f8477e6 (feat:drive) oauth for drive.file scope, picker 2025-09-17 19:37:01 +05:30
jane
82b47b5673 Added fix in frontend/src/conversation/ConversationMessages.tsx line 213 2025-09-16 23:53:06 +05:30
Siddhant Rai
3369b910b4 feat: update MCP request ID generation and enhance response handling for event streams 2025-09-16 20:43:04 +05:30
ManishMadan2882
ec0c4c3b84 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-09-16 14:59:18 +05:30
ManishMadan2882
f74e2c9da1 (feat:filepciker) load inside table, ui 2025-09-16 14:50:58 +05:30
Alex
e26ad3c475 Merge pull request #1947 from siiddhantt/feat/remote-mcp
feat: remote mcp
2025-09-15 09:31:36 +01:00
ManishMadan2882
145c3b8ad0 (feat:file picker) table ui 2025-09-15 12:58:45 +05:30
Siddhant Rai
0ff6c6a154 Merge branch 'main' into feat/remote-mcp 2025-09-15 09:53:58 +05:30
Siddhant Rai
641cf5a4c1 feat: skip empty fields in mcp tool call + improve error handling and response 2025-09-11 19:04:10 +05:30
Siddhant Rai
09b9576eef feat: enhance message and schema cleaning for Google AI integration 2025-09-11 17:54:46 +05:30
ManishMadan2882
18b71ca2f2 (feat:upload) cards for upload types 2025-09-11 13:27:55 +05:30
Alex
e0eb7f456e Merge pull request #1930 from Ankit-Matth/feature/multi-select-sources
Added support for Multi select sources
2025-09-10 17:48:02 +01:00
Siddhant Rai
188d118fc0 refactor: remove unused logging import from routes.py 2025-09-10 22:14:31 +05:30
Siddhant Rai
adcdce8d76 fix: handle invalid chunks value in StreamProcessor and ClassicRAG 2025-09-10 22:10:11 +05:30
Siddhant Rai
b865a7aec1 Merge branch 'main' of https://github.com/siiddhantt/DocsGPT into pr/1930 2025-09-10 20:15:20 +05:30
ManishMadan2882
cec8c72b46 (refactor:uploads) YAGNI 2025-09-10 19:19:40 +05:30
Siddhant Rai
b052e32805 feat: enhance MCP tool configuration handling and authentication logic 2025-09-10 14:15:51 +05:30
Siddhant Rai
816f660be3 fix: enhance MCPTool request handling and tool discovery logic 2025-09-10 13:08:14 +05:30
ManishMadan2882
fc8be45d5a (feat:sync) confirmation check 2025-09-09 13:08:38 +05:30
ManishMadan2882
e749c936c9 (refactor:uploads) for new ui 2025-09-09 13:07:26 +05:30
ManishMadan2882
b2b9670a23 (feat:connectors) type as connector:file 2025-09-09 00:00:58 +05:30
Siddhant Rai
2f88890c94 feat: add support for multiple sources in agent configuration and update related components 2025-09-08 22:10:08 +05:30
ManishMadan2882
6366663f03 (refactor:uploads) separate file picker 2025-09-08 19:09:19 +05:30
Manish Madan
20fe7dc6d1 Merge branch 'main' into main 2025-09-08 14:19:32 +05:30
Alex
4b9153069e Merge pull request #1928 from Ankit-Matth/fix/ui-ux-improvements
bugfix(ui): several UI/UX updates and fixes
2025-09-05 13:54:09 +01:00
ManishMadan2882
80406d0753 (feat:drive) debounce search, ui/ux 2025-09-05 13:25:36 +05:30
ManishMadan2882
35f4c11784 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-09-05 11:43:18 +05:30
ManishMadan2882
7896526f19 (feat:load_files) search feature 2025-09-05 10:35:23 +05:30
Alex
f7db22edff Merge pull request #1937 from ManishMadan2882/main
Connectors: Google Drive Ingestion
2025-09-04 12:05:15 +01:00
Siddhant Rai
0e4196f036 fix: remove unused tool labels from localization file 2025-09-04 15:19:58 +05:30
Siddhant Rai
1bf6af6eeb feat: finalize remote mcp 2025-09-04 15:10:12 +05:30
ManishMadan2882
5a9bc6d2bf (feat:connector) infinite scroll file pick 2025-09-04 08:35:41 +05:30
ManishMadan2882
f7f6042579 (feat:connector) paginate files 2025-09-04 07:58:12 +05:30
ManishMadan2882
c4a598f3d3 (lint-fix) ruff 2025-09-03 19:29:34 +05:30
Siddhant Rai
7c23f43c63 feat: Add MCP Server management functionality
- Implemented encryption utility for securely storing sensitive credentials.
- Added MCPServerModal component for managing MCP server configurations.
- Updated AddToolModal to include MCP server management.
- Enhanced localization with new strings for MCP server features.
- Introduced SVG icons for MCP tools in the frontend.
- Created a new settings section for MCP server configurations in the application.
2025-09-03 15:41:59 +05:30
ManishMadan2882
7e2cbdd88c (feat:connector) redirect url as backend overhead 2025-09-03 09:57:13 +05:30
ManishMadan2882
3b3a04a249 (feat:connector) sync fixes UI, minor refactor 2025-09-02 20:28:23 +05:30
ManishMadan2882
f9b2c95695 (feat:connector) sync, simply re-ingest 2025-09-02 18:06:04 +05:30
ManishMadan2882
c2c18e8319 (feat:connector,fe) sync api, notification 2025-09-02 13:36:41 +05:30
ManishMadan2882
384ad3e0ac (feat:connector) raw sync flow 2025-09-02 13:34:31 +05:30
ManishMadan2882
8c986aaa7f Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-09-01 12:05:17 +05:30
ManishMadan2882
bb4ea76d30 (fix:connectorTree) path navigation fn 2025-09-01 12:04:58 +05:30
ManishMadan2882
2868e47cf8 (feat:connector) provider metadata, separate fe nested display 2025-08-29 18:05:58 +05:30
GH Action - Upstream Sync
e0adc3e5d5 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-08-29 01:36:09 +00:00
ManishMadan2882
e55d1a5865 (feat:connector,auth) consider user_id 2025-08-29 02:13:51 +05:30
ManishMadan2882
018273c6b2 (feat:connector) refactor, updated routes FE 2025-08-29 01:06:40 +05:30
Alex
44b8a11c04 Merge pull request #1936 from Ankit-Matth/feature/load-containers-from-dockerhub
Speed up scripts by loading containers from docker hub
2025-08-28 14:48:08 +01:00
Siddhant Rai
56e5aba559 fix: correct frontend image name in docker-compose configuration 2025-08-28 18:21:54 +05:30
Alex
46904ccd54 feat: add theme color meta tags for light and dark modes 2025-08-28 11:36:42 +01:00
Siddhant Rai
5b7c7a4471 fix: update Docker images in docker-compose to use 'develop' tag 2025-08-28 12:11:06 +05:30
Siddhant Rai
9da4215d1f feat: implement Docker Hub integration for building and pushing images in CI/CD workflow 2025-08-28 12:01:04 +05:30
ManishMadan2882
f39ac9945f (feat:auth) follow connector-session 2025-08-28 00:53:19 +05:30
ManishMadan2882
a0cc2e4d46 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-08-28 00:51:29 +05:30
ManishMadan2882
4065041a9f (feat:connectors) separate routes, namespace 2025-08-28 00:51:09 +05:30
GH Action - Upstream Sync
f08067a161 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-08-27 01:36:38 +00:00
Alex
545caacfa3 feat: prevent NUL character ingestion failures 2025-08-26 23:30:57 +01:00
Alex
a06f646637 feat: enhance tool call error handling 2025-08-26 22:37:21 +01:00
ManishMadan2882
578c68205a (feat:connectors) abstracting auth, base class 2025-08-26 02:46:36 +05:30
ManishMadan2882
f09f1433a9 (feat:connectors) separate layer 2025-08-26 01:38:36 +05:30
ManishMadan2882
15a9e97a1e (feat:ingest_connectors) spread config params 2025-08-26 00:56:39 +05:30
Ankit Matth
b3af4ee50b speed up scripts by using docker hub 2025-08-24 08:59:19 +05:30
Ankit Matth
07d59b6640 refactor: use list instead of string parsing 2025-08-23 20:25:29 +05:30
GH Action - Upstream Sync
e25b988dc8 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-08-23 01:35:35 +00:00
ManishMadan2882
2410bd8654 (fix:driveLoader) folder ingesting 2025-08-22 19:07:52 +05:30
Alex
44d21ab703 fix: passing sources and chunk if agent is shared 2025-08-22 13:36:31 +01:00
Alex
e283957c8f Fix source field retrieval in SharedAgent to handle DBRef correctly 2025-08-22 11:41:35 +01:00
ManishMadan2882
b1210c4902 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-08-22 13:36:52 +05:30
ManishMadan2882
e7430f0fbc (feat:googleDrive,fe) file tree 2025-08-22 13:36:32 +05:30
ManishMadan2882
92d6ae54c3 (fix:google-oauth) no explicit datetime compare 2025-08-22 13:35:03 +05:30
ManishMadan2882
f82be23ca9 (feat:ingestion) external drive connect 2025-08-22 13:33:21 +05:30
ManishMadan2882
8c3f75e3e2 (feat:ingestion) google drive loader 2025-08-22 13:32:40 +05:30
GH Action - Upstream Sync
193d59f193 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-08-22 01:38:59 +00:00
ManishMadan2882
c2bebbaefa (feat:oauth/drive) raw fe integrate 2025-08-22 03:29:57 +05:30
Alex
7ae5a9c5a5 Refactor diagramId initialization to use a combination of Date.now() and random string for uniqueness 2025-08-21 14:50:37 +01:00
ManishMadan2882
3b69bea23d (chore:settings)addefault oath creds 2025-08-21 17:02:23 +05:30
ManishMadan2882
ab05726b99 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-08-21 02:46:56 +05:30
ManishMadan2882
b2b04268e9 (feat:drive) oauth flow 2025-08-21 02:46:32 +05:30
Siddhant Rai
bd73fa9ae7 refactor: remove unused abstract method and improve retrievers 2025-08-20 22:25:31 +05:30
Alex
927d10d66e Update README.md 2025-08-17 12:44:47 +03:00
Alex
b67329623c Update README.md 2025-08-16 13:51:40 +03:00
Ankit Matth
6f47aa802b added support for multi select sources 2025-08-16 15:19:19 +05:30
Ankit Matth
3417c73011 fix(ui): resolve multiple UI/UX issues 2025-08-15 10:14:31 +05:30
Alex
6a02bcf15b Merge pull request #1873 from ManishMadan2882/main
Sources are the new Docs
2025-08-13 18:24:35 +01:00
Alex
cd0fbf79a3 Merge pull request #1924 from siiddhantt/feat/agent-schema-response
feat: add support for structured output and JSON schema validation
2025-08-13 17:33:29 +01:00
Alex
15d2d0115b Merge branch 'main' into feat/agent-schema-response 2025-08-13 17:12:26 +01:00
ManishMadan2882
d1a0fe6e91 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-08-13 17:39:59 +05:30
ManishMadan2882
1db80d140f (fix) search dropdowns 2025-08-13 17:39:39 +05:30
Siddhant Rai
896dcf1f9e feat: add support for structured output and JSON schema validation 2025-08-13 13:29:51 +05:30
GH Action - Upstream Sync
819a12fb49 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-08-13 01:45:41 +00:00
Alex
c68273706c Merge pull request #1920 from hanzalahwaheed/fix/response-bubble-btns
fix: always show the response bubble buttons.
2025-08-13 00:14:01 +01:00
Hanzalah Waheed
6bb0cd535a fix: rm redundant states. track feedback state w prop var 2025-08-13 02:36:58 +04:00
Hanzalah Waheed
cb9ec69cf6 chore: refactor code to use ternary operator for error type check 2025-08-13 02:31:25 +04:00
Hanzalah Waheed
143854fa81 fix: show both like and dislike buttons 2025-08-13 02:11:47 +04:00
ManishMadan2882
2f48a3d7d5 (feat:chunks) consistent path header 2025-08-13 02:53:32 +05:30
ManishMadan2882
ec95dafe1e (feat:sources) matching the figma 2025-08-13 01:35:23 +05:30
ManishMadan2882
3d1fe724e5 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-08-12 18:05:11 +05:30
ManishMadan2882
5c615d6f2d (feat:sources) card ui 2025-08-12 18:04:40 +05:30
GH Action - Upstream Sync
d72558eb36 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-08-12 01:43:57 +00:00
ManishMadan2882
65c33ad915 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-08-12 03:09:13 +05:30
ManishMadan2882
9be128a963 (feat:sources) closer to figma,ux 2025-08-12 03:08:47 +05:30
Hanzalah Waheed
eb05132008 fix: always show the response bubble buttons. 2025-08-12 00:16:21 +04:00
Alex
f94a093e8c fix: truncate long text fields to prevent overflow in logs and sources 2025-08-11 14:56:31 +01:00
GH Action - Upstream Sync
0d0c2daf64 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-08-09 01:44:41 +00:00
ManishMadan2882
823d948b25 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-08-08 21:59:01 +05:30
Alex
56831fbcf2 Merge pull request #1917 from ManishMadan2882/fix/agent_prompts
Fixes missing attributes on shared agents
2025-08-08 16:30:09 +01:00
ManishMadan2882
bf49b9cb88 (fix/shared_agent)missing main attr 2025-08-08 20:44:09 +05:30
GH Action - Upstream Sync
e01adffbad Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-08-08 01:54:52 +00:00
Alex
08a5d52d82 Update README.md 2025-08-07 17:20:45 +03:00
ManishMadan2882
fdae235742 (feat:sources) i18n 2025-08-07 12:53:12 +05:30
GH Action - Upstream Sync
9903fad1e9 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-08-07 01:55:18 +00:00
Alex
14bbd5338d Merge pull request #1909 from siiddhantt/main
fix: remove unnecessary parameter from fetchPreviewAnswer
2025-08-06 19:21:59 +01:00
Siddhant Rai
4a236c2f6f fix: remove unnecessary parameter from fetchPreviewAnswer 2025-08-06 22:34:06 +05:30
Alex
0a8cdbd7f1 fix: update token selector in FileTreeComponent 2025-08-06 12:44:21 +01:00
Alex
94c49843be Merge pull request #1906 from arc53/feat/pg-vector
feat: implement PGVectorStore for PostgreSQL vector storage
2025-08-06 10:42:34 +01:00
Alex
9281fac898 fix: improve error logging for index creation and add PARSE_IMAGE_REMOTE setting 2025-08-06 10:40:20 +01:00
GH Action - Upstream Sync
0b2736f454 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-08-06 01:55:04 +00:00
ManishMadan2882
ae116b0d0d (fix:agentPreview) build err 2025-08-06 02:55:39 +05:30
ManishMadan2882
ba260e3382 (fix:faiss) not save tmp dir 2025-08-06 02:53:39 +05:30
Alex
1282e7687f fix: add error handling for index creation in user and agents collections 2025-08-05 17:19:06 +01:00
Alex
b1d8266eef feat: implement PGVectorStore for PostgreSQL vector storage 2025-08-05 13:54:39 +01:00
Alex
7acae6935b Merge pull request #1905 from arc53/fix-qdrant
fix: qdrant issues
2025-08-05 12:26:24 +01:00
Alex
092c01cae7 fix: ruff lint 2025-08-05 12:22:33 +01:00
Alex
56a1066c30 fix: qdrant issues 2025-08-05 12:19:18 +01:00
ManishMadan2882
1356d71839 (lint) ruff fix 2025-08-05 15:37:39 +05:30
ManishMadan2882
1eb011e8c3 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-08-05 15:28:51 +05:30
ManishMadan2882
e349eb28b0 (fix:update_chunk) data integrity, uplod back faiss 2025-08-05 06:31:00 +05:30
ManishMadan2882
b000b235a2 (feat:sources) renamed docs,fe 2025-08-05 05:23:29 +05:30
ManishMadan2882
16fe92282e (fix:chunks) responsive, editing controls 2025-08-05 03:04:37 +05:30
ManishMadan2882
e218e88cf4 (fix:chunks) alignment and ui 2025-08-04 19:28:15 +05:30
ManishMadan2882
888ea81a32 (feat:fil_management) serialising updates, queue 2025-08-04 17:27:12 +05:30
ManishMadan2882
735fab7640 (feat:storage) sync base 2025-08-04 16:36:38 +05:30
ManishMadan2882
45745c2a47 (feat:docs) skeleton loader 2025-08-04 16:35:24 +05:30
Alex
4caff0fcf6 fix: enhance error logging for malformed request in stream route 2025-08-04 11:41:41 +01:00
Alex
762ea6ce7f Merge pull request #1866 from arc53/dependabot/npm_and_yarn/frontend/tailwindcss-4.1.11
build(deps-dev): bump tailwindcss from 4.1.10 to 4.1.11 in /frontend
2025-08-02 23:49:54 +01:00
ManishMadan2882
8b4f6553f3 (fix:menu) left and right 2025-08-02 02:08:35 +05:30
ManishMadan2882
a61e44d175 (feat:dir_tree) improvement 2025-08-02 01:48:43 +05:30
ManishMadan2882
e1b1558fc9 (feat:storage) rm dir 2025-08-02 00:54:09 +05:30
ManishMadan2882
53225bda4e (feat:reingestion) spit directories 2025-08-02 00:49:15 +05:30
ManishMadan2882
5212769848 (feat:reingest) UI, polling 2025-08-01 01:25:37 +05:30
ManishMadan2882
d5ded3c9f4 (feat:reingest) eat and spit specific chunks 2025-08-01 01:14:48 +05:30
ManishMadan2882
c92d778894 (feat:chunker) do not combine text 2025-07-31 02:13:55 +05:30
ManishMadan2882
829abd1ad6 (fix:textarea) consistent line indxs 2025-07-30 21:00:00 +05:30
ManishMadan2882
266d256a07 (feat:sources) management, simple re-ingest 2025-07-30 01:57:40 +05:30
Alex
8380cac3e7 Merge pull request #1900 from naaa760/docs/auth-type-configuration
Docs: Expand and Clarify AUTH_TYPE Configuration and Authentication Methods (#1882)
2025-07-28 23:07:40 +01:00
ManishMadan2882
a24652f901 (feat:chunks) update iff changed 2025-07-28 19:35:41 +05:30
ManishMadan2882
2d203d3c70 (fix:chunks)responsive 2025-07-28 18:01:51 +05:30
Alex
48d21600da Merge pull request #1896 from siiddhantt/feat/rework-answer-routes
feat: answer routes re-structure for better maintainability and reuse
2025-07-26 14:18:10 +01:00
ManishMadan2882
2508d0fbb3 (fix:chunks) preserve paths 2025-07-26 00:27:39 +05:30
ManishMadan2882
e90e80c289 (fix:chunks) also count tokens 2025-07-26 00:16:45 +05:30
ManishMadan2882
5e4748f9d9 (fix:faiss) rely on storage abstrct 2025-07-26 00:14:17 +05:30
Siddhant Rai
212952f3e9 fix: allow api call in stream route + get_prompt error 2025-07-25 16:17:18 +05:30
naaa760
f99b6496c5 update 2025-07-25 14:48:49 +05:30
ManishMadan2882
67423d51b9 (feat:chunks) ask to edit, ui 2025-07-25 04:05:06 +05:30
ManishMadan2882
58465ece65 (feat:chunks) server-side filter on search 2025-07-25 01:43:50 +05:30
ManishMadan2882
8ede3a0173 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-07-23 20:48:00 +05:30
ManishMadan2882
ad2f0f8950 (chore:chunks) i18n 2025-07-23 20:47:36 +05:30
Siddhant Rai
76973a4b4c feat: answer routes re-structure for better maintainability and reuse 2025-07-23 20:07:42 +05:30
GH Action - Upstream Sync
b198e2e029 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-07-23 01:52:41 +00:00
ManishMadan2882
4d6ea401b5 (feat:chunks) line numbered editor 2025-07-23 03:36:18 +05:30
ManishMadan2882
b00c4cc3b6 (feat:chunk) editing mode 2025-07-23 02:22:56 +05:30
Alex
4185e64c65 Merge pull request #1893 from arc53/fix-attachment-bugs
fix: replace secure_filename with safe_filename for attachment handling
2025-07-22 16:24:55 +01:00
ManishMadan2882
6eb2c884a2 (refactor) separation in chunks/files view 2025-07-22 19:36:52 +05:30
Alex
6c0362a4cf fix: replace secure_filename with safe_filename for attachment handling 2025-07-22 12:56:17 +01:00
ManishMadan2882
50b1755a63 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-07-21 16:31:57 +05:30
ManishMadan2882
ff3c7eb5fb (fix:delete_old) comply with storage abtrctn 2025-07-21 16:31:42 +05:30
ManishMadan2882
3755316d49 (fix:chunks) responsive design 2025-07-21 16:30:30 +05:30
GH Action - Upstream Sync
f952046847 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-07-19 01:47:09 +00:00
Alex
969cdb4a63 Merge pull request #1890 from siiddhantt/feat/enhance-agents
feat: enhance prompt selection in new agents
2025-07-18 13:07:45 +01:00
ManishMadan2882
f336d44595 (feat:chunks) search in dir 2025-07-18 15:03:23 +05:30
Siddhant Rai
a53f93c195 feat: enhance dropdown component and prompts integration 2025-07-18 14:02:29 +05:30
GH Action - Upstream Sync
fcb334ce33 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-07-17 01:50:59 +00:00
ManishMadan2882
8ddf04a904 (feat:chunks) use common header, navigate 2025-07-17 03:08:01 +05:30
ManishMadan2882
29698ca169 (feat:chunks) redesigned 2025-07-17 02:16:40 +05:30
ManishMadan2882
a9baf7436a Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-07-17 01:05:59 +05:30
ManishMadan2882
99a8962183 (fix/docs) menu event capture 2025-07-17 01:05:24 +05:30
Alex
afc5b15a6b Merge pull request #1887 from Krrish0902/fix-issue-1854
fix: Removed incorrect www from URL in DocsGPT Docs frontend (#1854)
2025-07-16 13:20:05 +01:00
GH Action - Upstream Sync
b6ab508e27 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-07-16 01:50:25 +00:00
ananthakrishnan
789e65557a fix: Removed incorrect www from URL in DocsGPT Docs frontend (#1854) 2025-07-15 22:02:02 +05:30
ManishMadan2882
8a7806ab2d (feat:nested source view) file tree, chunks display 2025-07-15 19:15:40 +05:30
Alex
493303e103 Merge pull request #1886 from arc53/copilot/fix-1878
🐛 Fix conversation summary prompt to use user query language
2025-07-15 12:57:14 +01:00
ManishMadan2882
1d9af05e9e (feat:storage) is dir fnc 2025-07-15 15:34:16 +05:30
ManishMadan2882
5b07c5f2e8 (feat:ingestion) unzip, extract and store 2025-07-15 15:31:26 +05:30
copilot-swe-agent[bot]
2a4ec0cf5b Fix conversation summary prompt to use user query language
Co-authored-by: dartpain <15183589+dartpain@users.noreply.github.com>
2025-07-15 09:33:52 +00:00
copilot-swe-agent[bot]
a00c44386e Initial plan 2025-07-15 09:29:22 +00:00
ManishMadan2882
a38d71bbfb (feat:get_chunks) filtered by relative path 2025-07-15 13:38:19 +05:30
ManishMadan2882
a24a3f868c (feat:dir-structure) adding route 2025-07-15 13:38:19 +05:30
ManishMadan2882
f60c516185 (feat:dir_tree) table with folder contents 2025-07-15 13:38:19 +05:30
Manish Madan
26f4646304 Merge branch 'arc53:main' into main 2025-07-14 23:02:36 +05:30
ananthakrishnan
3a351f67e6 fix: correct agent tools name when creating new agent (#1877) 2025-07-14 20:20:55 +05:30
dependabot[bot]
e7c09cb91e build(deps-dev): bump tailwindcss from 4.1.10 to 4.1.11 in /frontend
Bumps [tailwindcss](https://github.com/tailwindlabs/tailwindcss/tree/HEAD/packages/tailwindcss) from 4.1.10 to 4.1.11.
- [Release notes](https://github.com/tailwindlabs/tailwindcss/releases)
- [Changelog](https://github.com/tailwindlabs/tailwindcss/blob/main/CHANGELOG.md)
- [Commits](https://github.com/tailwindlabs/tailwindcss/commits/v4.1.11/packages/tailwindcss)

---
updated-dependencies:
- dependency-name: tailwindcss
  dependency-version: 4.1.11
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-07-14 09:02:48 +00:00
Alex
ae1a6ef303 Merge pull request #1883 from siiddhantt/feat/agent-validation-enhance
feat: improve interactivity of publishing/drafting of agents
2025-07-14 09:58:12 +01:00
Siddhant Rai
2ff477a339 feat(agent): enhance validation for agent creation by checking required and invalid fields 2025-07-14 12:53:09 +05:30
Siddhant Rai
793f3fb683 refactor(NewAgent): remove debug logs 2025-07-12 12:36:18 +05:30
Siddhant Rai
a472ee7602 feat: add validation for required fields and improve agent creation logic 2025-07-12 12:30:00 +05:30
GH Action - Upstream Sync
c62040e232 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-07-11 01:49:09 +00:00
ManishMadan2882
d3b592bffc Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-07-09 01:29:59 +05:30
ManishMadan2882
4fcbdae5bf (feat:docs) cards UI 2025-07-08 02:46:34 +05:30
ManishMadan2882
ca95d7275a (feat:dateTimeUtils) localise weekday format 2025-07-08 02:32:18 +05:30
Manish Madan
61baf3701c Merge branch 'arc53:main' into main 2025-07-04 02:26:24 +05:30
ManishMadan2882
bbce872ac5 (fix:chunker) combine metadata as well 2025-07-04 02:19:58 +05:30
ManishMadan2882
0f7ebcd8e4 (feat:dir-reader) store mime types, file size in db 2025-07-03 18:09:19 +05:30
ManishMadan2882
82fc19e7b7 (fix:dir-reader) conflict of same filename in dir 2025-07-03 17:28:12 +05:30
ManishMadan2882
2ef23fe1b3 (feat:dir-reader) maintain dir structure in db 2025-07-03 01:24:22 +05:30
ManishMadan2882
fd905b1a06 (feat:dir-reader) save tokens with filenames 2025-07-02 16:30:29 +05:30
ManishMadan2882
ade704d065 (refactor:ingestion) pass file path once 2025-07-01 04:00:57 +05:30
199 changed files with 19075 additions and 3817 deletions

.vscode/launch.json (33 changes)

@@ -2,15 +2,11 @@
   "version": "0.2.0",
   "configurations": [
     {
-      "name": "Docker Debug Frontend",
+      "name": "Frontend Debug (npm)",
+      "type": "node-terminal",
       "request": "launch",
-      "type": "chrome",
-      "preLaunchTask": "docker-compose: debug:frontend",
-      "url": "http://127.0.0.1:5173",
-      "webRoot": "${workspaceFolder}/frontend",
-      "skipFiles": [
-        "<node_internals>/**"
-      ]
+      "command": "npm run dev",
+      "cwd": "${workspaceFolder}/frontend"
     },
     {
       "name": "Flask Debugger",
@@ -49,6 +45,27 @@
         "--pool=solo"
       ],
       "cwd": "${workspaceFolder}"
     },
+    {
+      "name": "Dev Containers (Mongo + Redis)",
+      "type": "node-terminal",
+      "request": "launch",
+      "command": "docker compose -f deployment/docker-compose-dev.yaml up --build",
+      "cwd": "${workspaceFolder}"
+    }
   ],
+  "compounds": [
+    {
+      "name": "DocsGPT: Full Stack",
+      "configurations": [
+        "Frontend Debug (npm)",
+        "Flask Debugger",
+        "Celery Debugger"
+      ],
+      "presentation": {
+        "group": "DocsGPT",
+        "order": 1
+      }
+    }
+  ]
 }

.vscode/tasks.json (21 changes)

@@ -1,21 +0,0 @@
-{
-  "version": "2.0.0",
-  "tasks": [
-    {
-      "type": "docker-compose",
-      "label": "docker-compose: debug:frontend",
-      "dockerCompose": {
-        "up": {
-          "detached": true,
-          "services": [
-            "frontend"
-          ],
-          "build": true
-        },
-        "files": [
-          "${workspaceFolder}/docker-compose.yaml"
-        ]
-      }
-    }
-  ]
-}

HACKTOBERFEST.md (new file, 38 changes)

@@ -0,0 +1,38 @@
# **🎉 Join the Hacktoberfest with DocsGPT and win a Free T-shirt for a meaningful PR! 🎉**

Welcome, contributors! We're excited to announce that DocsGPT is participating in Hacktoberfest. Get involved by submitting meaningful pull requests.

All meaningful contributors with accepted PRs created for issues carrying the `hacktoberfest` label (set by our maintainer team: dartpain, siiddhantt, pabik, ManishMadan2882) will receive a cool T-shirt! 🤩

Please fill in [this form](https://forms.gle/Npaba4n9Epfyx56S8) after your PR is merged.

If you are in doubt, don't hesitate to ping us on Discord; ping me, Alex (dartpain).

## 📜 Here's How to Contribute:

```text
🛠️ Code: This is the golden ticket! Make meaningful contributions through PRs.

🧩 API extension: Build an app utilising the DocsGPT API. We prefer submissions that showcase original ideas and turn the API into an AI agent.
They can be completely separate repos.
For example:
https://github.com/arc53/tg-bot-docsgpt-extenstion or
https://github.com/arc53/DocsGPT-cli

Non-Code Contributions:

📚 Wiki: Improve our documentation, create a guide.
🖥️ Design: Improve the UI/UX or design a new feature.
```

### 📝 Guidelines for Pull Requests:

- Familiarize yourself with the current contributions and our [Roadmap](https://github.com/orgs/arc53/projects/2).
- Before contributing, check existing [issues](https://github.com/arc53/DocsGPT/issues) or [create](https://github.com/arc53/DocsGPT/issues/new/choose) an issue and wait to get assigned.
- Once you are finished with your contribution, please fill in this [form](https://forms.gle/Npaba4n9Epfyx56S8).
- Refer to the [Documentation](https://docs.docsgpt.cloud/).
- Feel free to join our [Discord](https://discord.gg/n5BX8dh8rU) server. We're here to help newcomers, so don't hesitate to jump in!

Thank you very much for considering contributing to DocsGPT during Hacktoberfest! 🙏 Your contributions (not just simple typos) could earn you a stylish new t-shirt.

We will publish the t-shirt design later in October.

README.md

@@ -3,11 +3,11 @@
 </h1>
 <p align="center">
-<strong>Open-Source RAG Assistant</strong>
+<strong>Private AI for agents, assistants and enterprise search</strong>
 </p>
 <p align="left">
-<strong><a href="https://www.docsgpt.cloud/">DocsGPT</a></strong> is an open-source genAI tool that helps users get reliable answers from any knowledge source, while avoiding hallucinations. It enables quick and reliable information retrieval, with tooling and agentic system capability built in.
+<strong><a href="https://www.docsgpt.cloud/">DocsGPT</a></strong> is an open-source AI platform for building intelligent agents and assistants. Features Agent Builder, deep research tools, document analysis (PDF, Office, web content), Multi-model support (choose your provider or run locally), and rich API connectivity for agents with actionable tools and integrations. Deploy anywhere with complete privacy control.
 </p>
 <div align="center">
@@ -19,13 +19,23 @@
 <a href="https://discord.gg/n5BX8dh8rU">![link to discord](https://img.shields.io/discord/1070046503302877216)</a>
 <a href="https://twitter.com/docsgptai">![X (formerly Twitter) URL](https://img.shields.io/twitter/follow/docsgptai)</a>
 <a href="https://docs.docsgpt.cloud/quickstart">⚡️ Quickstart</a><a href="https://app.docsgpt.cloud/">☁️ Cloud Version</a><a href="https://discord.gg/n5BX8dh8rU">💬 Discord</a>
 <br>
 <a href="https://docs.docsgpt.cloud/">📖 Documentation</a><a href="https://github.com/arc53/DocsGPT/blob/main/CONTRIBUTING.md">👫 Contribute</a><a href="https://blog.docsgpt.cloud/">🗞 Blog</a>
 <br>
 </div>
+<div align="center">
+<br>
+🎃 <a href="https://github.com/arc53/DocsGPT/blob/main/HACKTOBERFEST.md"> Hacktoberfest Prizes, Rules & Q&A </a> 🎃
+<br>
+<br>
+</div>
+<div align="center">
+<br>
+<img src="https://d3dg1063dc54p9.cloudfront.net/videos/demov7.gif" alt="video-example-of-docs-gpt" width="800" height="450">
+</div>
 <h3 align="left">
@@ -53,10 +63,13 @@
 - [x] New input box in the conversation menu (April 2025)
 - [x] Add triggerable actions / tools (webhook) (April 2025)
 - [x] Agent optimisations (May 2025)
-- [ ] Filesystem sources update (July 2025)
-- [ ] Anthropic Tool compatibility (July 2025)
-- [ ] MCP support (July 2025)
-- [ ] Add OAuth 2.0 authentication for tools and sources (August 2025)
+- [x] Filesystem sources update (July 2025)
+- [x] Json Responses (August 2025)
+- [x] MCP support (August 2025)
+- [x] Google Drive integration (September 2025)
+- [ ] Add OAuth 2.0 authentication for MCP (September 2025)
+- [ ] Sharepoint integration (October 2025)
+- [ ] Deep Agents (October 2025)
 - [ ] Agent scheduling

 You can find our full roadmap [here](https://github.com/orgs/arc53/projects/2). Please don't hesitate to contribute or create issues, it helps us improve DocsGPT!
@@ -71,11 +84,10 @@ We're eager to provide personalized assistance when deploying your DocsGPT to a
 ## Join the Lighthouse Program 🌟

 Calling all developers and GenAI innovators! The **DocsGPT Lighthouse Program** connects technical leaders actively deploying or extending DocsGPT in real-world scenarios. Collaborate directly with our team to shape the roadmap, access priority support, and build enterprise-ready solutions with exclusive community insights.

 [Learn More & Apply →](https://docs.google.com/forms/d/1KAADiJinUJ8EMQyfTXUIGyFbqINNClNR3jBNWq7DgTE)

 ## QuickStart

 > [!Note]
@@ -106,7 +118,7 @@ A more detailed [Quickstart](https://docs.docsgpt.cloud/quickstart) is available
 PowerShell -ExecutionPolicy Bypass -File .\setup.ps1
 ```
 Either script will guide you through setting up DocsGPT. Four options available: using the public API, running locally, connecting to a local inference engine, or using a cloud API provider. Scripts will automatically configure your `.env` file and handle necessary downloads and installations based on your chosen option.

 **Navigate to http://localhost:5173/**
@@ -115,6 +127,7 @@ To stop DocsGPT, open a terminal in the `DocsGPT` directory and run:
 ```bash
 docker compose -f deployment/docker-compose.yaml down
 ```
+(or use the specific `docker compose down` command shown after running the setup script).

 > [!Note]
@@ -142,7 +155,6 @@ Please refer to the [CONTRIBUTING.md](CONTRIBUTING.md) file for information abou
 We as members, contributors, and leaders, pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. Please refer to the [CODE_OF_CONDUCT.md](CODE_OF_CONDUCT.md) file for more information about contributing.

 ## Many Thanks To Our Contributors⚡

 <a href="https://github.com/arc53/DocsGPT/graphs/contributors" alt="View Contributors">


@@ -1,3 +1,4 @@
+import logging
 import uuid
 from abc import ABC, abstractmethod
 from typing import Dict, Generator, List, Optional
@@ -6,15 +7,15 @@ from bson.objectid import ObjectId
 from application.agents.tools.tool_action_parser import ToolActionParser
 from application.agents.tools.tool_manager import ToolManager
 from application.core.mongo_db import MongoDB
 from application.core.settings import settings
 from application.llm.handlers.handler_creator import LLMHandlerCreator
 from application.llm.llm_creator import LLMCreator
 from application.logging import build_stack_data, log_activity, LogContext
 from application.retriever.base import BaseRetriever

+logger = logging.getLogger(__name__)

 class BaseAgent(ABC):
     def __init__(
@@ -28,6 +29,7 @@ class BaseAgent(ABC):
         chat_history: Optional[List[Dict]] = None,
         decoded_token: Optional[Dict] = None,
         attachments: Optional[List[Dict]] = None,
+        json_schema: Optional[Dict] = None,
     ):
         self.endpoint = endpoint
         self.llm_name = llm_name
@@ -51,6 +53,7 @@
             llm_name if llm_name else "default"
         )
         self.attachments = attachments or []
+        self.json_schema = json_schema

     @log_activity()
     def gen(
@@ -137,6 +140,40 @@
         tool_id, action_name, call_args = parser.parse_args(call)
         call_id = getattr(call, "id", None) or str(uuid.uuid4())

+        # Check if parsing failed
+        if tool_id is None or action_name is None:
+            error_message = f"Error: Failed to parse LLM tool call. Tool name: {getattr(call, 'name', 'unknown')}"
+            logger.error(error_message)
+            tool_call_data = {
+                "tool_name": "unknown",
+                "call_id": call_id,
+                "action_name": getattr(call, "name", "unknown"),
+                "arguments": call_args or {},
+                "result": f"Failed to parse tool call. Invalid tool name format: {getattr(call, 'name', 'unknown')}",
+            }
+            yield {"type": "tool_call", "data": {**tool_call_data, "status": "error"}}
+            self.tool_calls.append(tool_call_data)
+            return "Failed to parse tool call.", call_id
+
+        # Check if tool_id exists in available tools
+        if tool_id not in tools_dict:
+            error_message = f"Error: Tool ID '{tool_id}' extracted from LLM call not found in available tools_dict. Available IDs: {list(tools_dict.keys())}"
+            logger.error(error_message)
+            # Return error result
+            tool_call_data = {
+                "tool_name": "unknown",
+                "call_id": call_id,
+                "action_name": f"{action_name}_{tool_id}",
+                "arguments": call_args,
+                "result": f"Tool with ID {tool_id} not found. Available tools: {list(tools_dict.keys())}",
+            }
+            yield {"type": "tool_call", "data": {**tool_call_data, "status": "error"}}
+            self.tool_calls.append(tool_call_data)
+            return f"Tool with ID {tool_id} not found.", call_id
+
         tool_call_data = {
             "tool_name": tools_dict[tool_id]["name"],
             "call_id": call_id,
@@ -188,6 +225,7 @@
                     if tool_data["name"] == "api_tool"
                     else tool_data["config"]
                 ),
+                user_id=self.user,  # Pass user ID for MCP tools credential decryption
             )
             if tool_data["name"] == "api_tool":
                 print(
@@ -226,7 +264,15 @@
         query: str,
         retrieved_data: List[Dict],
     ) -> List[Dict]:
-        docs_together = "\n".join([doc["text"] for doc in retrieved_data])
+        docs_with_filenames = []
+        for doc in retrieved_data:
+            filename = doc.get("filename") or doc.get("title") or doc.get("source")
+            if filename:
+                chunk_header = str(filename)
+                docs_with_filenames.append(f"{chunk_header}\n{doc['text']}")
+            else:
+                docs_with_filenames.append(doc["text"])
+        docs_together = "\n\n".join(docs_with_filenames)

         p_chat_combine = system_prompt.replace("{summaries}", docs_together)
         messages_combine = [{"role": "system", "content": p_chat_combine}]
@@ -283,6 +329,21 @@
             and self.tools
         ):
             gen_kwargs["tools"] = self.tools

+        if (
+            self.json_schema
+            and hasattr(self.llm, "_supports_structured_output")
+            and self.llm._supports_structured_output()
+        ):
+            structured_format = self.llm.prepare_structured_output_format(
+                self.json_schema
+            )
+            if structured_format:
+                if self.llm_name == "openai":
+                    gen_kwargs["response_format"] = structured_format
+                elif self.llm_name == "google":
+                    gen_kwargs["response_schema"] = structured_format
+
         resp = self.llm.gen_stream(**gen_kwargs)

         if log_context:
@@ -307,11 +368,25 @@
         return resp

     def _handle_response(self, response, tools_dict, messages, log_context):
+        is_structured_output = (
+            self.json_schema is not None
+            and hasattr(self.llm, "_supports_structured_output")
+            and self.llm._supports_structured_output()
+        )
         if isinstance(response, str):
-            yield {"answer": response}
+            answer_data = {"answer": response}
+            if is_structured_output:
+                answer_data["structured"] = True
+                answer_data["schema"] = self.json_schema
+            yield answer_data
             return

         if hasattr(response, "message") and getattr(response.message, "content", None):
-            yield {"answer": response.message.content}
+            answer_data = {"answer": response.message.content}
+            if is_structured_output:
+                answer_data["structured"] = True
+                answer_data["schema"] = self.json_schema
+            yield answer_data
             return

         processed_response_gen = self._llm_handler(
             response, tools_dict, messages, log_context, self.attachments
@@ -319,8 +394,16 @@
         for event in processed_response_gen:
             if isinstance(event, str):
-                yield {"answer": event}
+                answer_data = {"answer": event}
+                if is_structured_output:
+                    answer_data["structured"] = True
+                    answer_data["schema"] = self.json_schema
+                yield answer_data
             elif hasattr(event, "message") and getattr(event.message, "content", None):
-                yield {"answer": event.message.content}
+                answer_data = {"answer": event.message.content}
+                if is_structured_output:
+                    answer_data["structured"] = True
+                    answer_data["schema"] = self.json_schema
+                yield answer_data
             elif isinstance(event, dict) and "type" in event:
                 yield event
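
Taken together, the hunks above thread a user-supplied JSON schema into provider-specific generation kwargs and tag every emitted answer. A minimal runnable sketch of that dispatch follows; `FakeLLM` and `build_gen_kwargs` are illustrative stand-ins, not DocsGPT APIs:

```python
from typing import Any, Dict, Optional


class FakeLLM:
    """Stand-in for an LLM wrapper exposing the two hooks used above."""

    def _supports_structured_output(self) -> bool:
        return True

    def prepare_structured_output_format(self, schema: Dict) -> Optional[Dict]:
        # A real wrapper would translate the JSON schema into the provider's
        # native format; this sketch passes it through unchanged.
        return schema


def build_gen_kwargs(llm: Any, llm_name: str, json_schema: Optional[Dict]) -> Dict:
    """Mirror of the gen_kwargs dispatch shown in the diff above."""
    gen_kwargs: Dict = {}
    if (
        json_schema
        and hasattr(llm, "_supports_structured_output")
        and llm._supports_structured_output()
    ):
        structured_format = llm.prepare_structured_output_format(json_schema)
        if structured_format:
            if llm_name == "openai":
                gen_kwargs["response_format"] = structured_format  # OpenAI-style
            elif llm_name == "google":
                gen_kwargs["response_schema"] = structured_format  # Gemini-style
    return gen_kwargs


schema = {"type": "object", "properties": {"title": {"type": "string"}}}
print(build_gen_kwargs(FakeLLM(), "openai", schema))
# {'response_format': {'type': 'object', ...}}
```

Downstream, `_handle_response` then marks each yielded answer with `structured: True` and echoes the schema, so clients know the payload should parse as JSON.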


@@ -0,0 +1,861 @@
import asyncio
import base64
import json
import logging
import time
from typing import Any, Dict, List, Optional
from urllib.parse import parse_qs, urlparse

from application.agents.tools.base import Tool
from application.api.user.tasks import mcp_oauth_status_task, mcp_oauth_task
from application.cache import get_redis_instance
from application.core.mongo_db import MongoDB
from application.core.settings import settings
from application.security.encryption import decrypt_credentials
from fastmcp import Client
from fastmcp.client.auth import BearerAuth
from fastmcp.client.transports import (
    SSETransport,
    StdioTransport,
    StreamableHttpTransport,
)
from mcp.client.auth import OAuthClientProvider, TokenStorage
from mcp.shared.auth import OAuthClientInformationFull, OAuthClientMetadata, OAuthToken
from pydantic import AnyHttpUrl, ValidationError
from redis import Redis

mongo = MongoDB.get_client()
db = mongo[settings.MONGO_DB_NAME]

_mcp_clients_cache = {}


class MCPTool(Tool):
    """
    MCP Tool
    Connect to remote Model Context Protocol (MCP) servers to access dynamic tools and resources. Supports various authentication methods and provides secure access to external services through the MCP protocol.
    """

    def __init__(self, config: Dict[str, Any], user_id: Optional[str] = None):
        """
        Initialize the MCP Tool with configuration.

        Args:
            config: Dictionary containing MCP server configuration:
                - server_url: URL of the remote MCP server
                - transport_type: Transport type (auto, sse, http, stdio)
                - auth_type: Type of authentication (bearer, oauth, api_key, basic, none)
                - encrypted_credentials: Encrypted credentials (if available)
                - timeout: Request timeout in seconds (default: 30)
                - headers: Custom headers for requests
                - command: Command for STDIO transport
                - args: Arguments for STDIO transport
                - oauth_scopes: OAuth scopes for oauth auth type
                - oauth_client_name: OAuth client name for oauth auth type
            user_id: User ID for decrypting credentials (required if encrypted_credentials exist)
        """
        self.config = config
        self.user_id = user_id
        self.server_url = config.get("server_url", "")
        self.transport_type = config.get("transport_type", "auto")
        self.auth_type = config.get("auth_type", "none")
        self.timeout = config.get("timeout", 30)
        self.custom_headers = config.get("headers", {})
        self.auth_credentials = {}
        if config.get("encrypted_credentials") and user_id:
            self.auth_credentials = decrypt_credentials(
                config["encrypted_credentials"], user_id
            )
        else:
            self.auth_credentials = config.get("auth_credentials", {})
        self.oauth_scopes = config.get("oauth_scopes", [])
        self.oauth_task_id = config.get("oauth_task_id", None)
        self.oauth_client_name = config.get("oauth_client_name", "DocsGPT-MCP")
        self.redirect_uri = f"{settings.API_URL}/api/mcp_server/callback"
        self.available_tools = []
        self._cache_key = self._generate_cache_key()
        self._client = None
        # Only validate and setup if server_url is provided and not OAuth
        if self.server_url and self.auth_type != "oauth":
            self._setup_client()
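
    # Usage sketch (hypothetical values; the URL, token, user ID, and "query"
    # parameter below are placeholders, not values from the codebase):
    #
    #     tool = MCPTool(
    #         config={
    #             "server_url": "https://api.example.com/mcp",
    #             "transport_type": "http",
    #             "auth_type": "bearer",
    #             "auth_credentials": {"bearer_token": "<token>"},
    #         },
    #         user_id="user-123",
    #     )
    #     tools = tool.discover_tools()
    #     result = tool.execute_action(tools[0]["name"], query="hello")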

    def _generate_cache_key(self) -> str:
        """Generate a unique cache key for this MCP server configuration."""
        auth_key = ""
        if self.auth_type == "oauth":
            scopes_str = ",".join(self.oauth_scopes) if self.oauth_scopes else "none"
            auth_key = f"oauth:{self.oauth_client_name}:{scopes_str}"
        elif self.auth_type in ["bearer"]:
            token = self.auth_credentials.get(
                "bearer_token", ""
            ) or self.auth_credentials.get("access_token", "")
            auth_key = f"bearer:{token[:10]}..." if token else "bearer:none"
        elif self.auth_type == "api_key":
            api_key = self.auth_credentials.get("api_key", "")
            auth_key = f"apikey:{api_key[:10]}..." if api_key else "apikey:none"
        elif self.auth_type == "basic":
            username = self.auth_credentials.get("username", "")
            auth_key = f"basic:{username}"
        else:
            auth_key = "none"
        return f"{self.server_url}#{self.transport_type}#{auth_key}"

    def _setup_client(self):
        """Setup FastMCP client with proper transport and authentication."""
        global _mcp_clients_cache

        if self._cache_key in _mcp_clients_cache:
            cached_data = _mcp_clients_cache[self._cache_key]
            if time.time() - cached_data["created_at"] < 1800:
                self._client = cached_data["client"]
                return
            else:
                del _mcp_clients_cache[self._cache_key]

        transport = self._create_transport()

        auth = None
        if self.auth_type == "oauth":
            redis_client = get_redis_instance()
            auth = DocsGPTOAuth(
                mcp_url=self.server_url,
                scopes=self.oauth_scopes,
                redis_client=redis_client,
                redirect_uri=self.redirect_uri,
                task_id=self.oauth_task_id,
                db=db,
                user_id=self.user_id,
            )
        elif self.auth_type == "bearer":
            token = self.auth_credentials.get(
                "bearer_token", ""
            ) or self.auth_credentials.get("access_token", "")
            if token:
                auth = BearerAuth(token)

        self._client = Client(transport, auth=auth)
        _mcp_clients_cache[self._cache_key] = {
            "client": self._client,
            "created_at": time.time(),
        }

    def _create_transport(self):
        """Create appropriate transport based on configuration."""
        headers = {"Content-Type": "application/json", "User-Agent": "DocsGPT-MCP/1.0"}
        headers.update(self.custom_headers)

        if self.auth_type == "api_key":
            api_key = self.auth_credentials.get("api_key", "")
            header_name = self.auth_credentials.get("api_key_header", "X-API-Key")
            if api_key:
                headers[header_name] = api_key
        elif self.auth_type == "basic":
            username = self.auth_credentials.get("username", "")
            password = self.auth_credentials.get("password", "")
            if username and password:
                credentials = base64.b64encode(
                    f"{username}:{password}".encode()
                ).decode()
                headers["Authorization"] = f"Basic {credentials}"

        if self.transport_type == "auto":
            if "sse" in self.server_url.lower() or self.server_url.endswith("/sse"):
                transport_type = "sse"
            else:
                transport_type = "http"
        else:
            transport_type = self.transport_type

        if transport_type == "sse":
            headers.update({"Accept": "text/event-stream", "Cache-Control": "no-cache"})
            return SSETransport(url=self.server_url, headers=headers)
        elif transport_type == "http":
            return StreamableHttpTransport(url=self.server_url, headers=headers)
        elif transport_type == "stdio":
            command = self.config.get("command", "python")
            args = self.config.get("args", [])
            env = self.auth_credentials if self.auth_credentials else None
            return StdioTransport(command=command, args=args, env=env)
        else:
            return StreamableHttpTransport(url=self.server_url, headers=headers)
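
    # Auto-detection examples (URLs are illustrative):
    #   "https://docs.mcp.cloudflare.com/sse" -> SSETransport ("sse" in URL)
    #   "https://api.example.com/mcp"         -> StreamableHttpTransport (default)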

    def _format_tools(self, tools_response) -> List[Dict]:
        """Format tools response to match expected format."""
        if hasattr(tools_response, "tools"):
            tools = tools_response.tools
        elif isinstance(tools_response, list):
            tools = tools_response
        else:
            tools = []

        tools_dict = []
        for tool in tools:
            if hasattr(tool, "name"):
                tool_dict = {
                    "name": tool.name,
                    "description": tool.description,
                }
                if hasattr(tool, "inputSchema"):
                    tool_dict["inputSchema"] = tool.inputSchema
                tools_dict.append(tool_dict)
            elif isinstance(tool, dict):
                tools_dict.append(tool)
            else:
                if hasattr(tool, "model_dump"):
                    tools_dict.append(tool.model_dump())
                else:
                    tools_dict.append({"name": str(tool), "description": ""})
        return tools_dict

    async def _execute_with_client(self, operation: str, *args, **kwargs):
        """Execute operation with FastMCP client."""
        if not self._client:
            raise Exception("FastMCP client not initialized")

        async with self._client:
            if operation == "ping":
                return await self._client.ping()
            elif operation == "list_tools":
                tools_response = await self._client.list_tools()
                self.available_tools = self._format_tools(tools_response)
                return self.available_tools
            elif operation == "call_tool":
                tool_name = args[0]
                tool_args = kwargs
                return await self._client.call_tool(tool_name, tool_args)
            elif operation == "list_resources":
                return await self._client.list_resources()
            elif operation == "list_prompts":
                return await self._client.list_prompts()
            else:
                raise Exception(f"Unknown operation: {operation}")

    def _run_async_operation(self, operation: str, *args, **kwargs):
        """Run async operation in sync context."""
        try:
            try:
                loop = asyncio.get_running_loop()
                import concurrent.futures

                def run_in_thread():
                    new_loop = asyncio.new_event_loop()
                    asyncio.set_event_loop(new_loop)
                    try:
                        return new_loop.run_until_complete(
                            self._execute_with_client(operation, *args, **kwargs)
                        )
                    finally:
                        new_loop.close()

                with concurrent.futures.ThreadPoolExecutor() as executor:
                    future = executor.submit(run_in_thread)
                    return future.result(timeout=self.timeout)
            except RuntimeError:
                loop = asyncio.new_event_loop()
                asyncio.set_event_loop(loop)
                try:
                    return loop.run_until_complete(
                        self._execute_with_client(operation, *args, **kwargs)
                    )
                finally:
                    loop.close()
        except Exception as e:
            print(f"Error occurred while running async operation: {e}")
            raise
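
    # Note: the inner try distinguishes two cases. If asyncio.get_running_loop()
    # succeeds (the call is already inside an event loop, e.g. an async web
    # worker), the coroutine runs on a fresh loop in a separate thread; if it
    # raises RuntimeError (no running loop), a new loop is created and closed
    # in the current thread.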

    def discover_tools(self) -> List[Dict]:
        """
        Discover available tools from the MCP server using FastMCP.

        Returns:
            List of tool definitions from the server
        """
        if not self.server_url:
            return []
        if not self._client:
            self._setup_client()
        try:
            tools = self._run_async_operation("list_tools")
            self.available_tools = tools
            return self.available_tools
        except Exception as e:
            raise Exception(f"Failed to discover tools from MCP server: {str(e)}")

    def execute_action(self, action_name: str, **kwargs) -> Any:
        """
        Execute an action on the remote MCP server using FastMCP.

        Args:
            action_name: Name of the action to execute
            **kwargs: Parameters for the action

        Returns:
            Result from the MCP server
        """
        if not self.server_url:
            raise Exception("No MCP server configured")
        if not self._client:
            self._setup_client()

        cleaned_kwargs = {}
        for key, value in kwargs.items():
            if value == "" or value is None:
                continue
            cleaned_kwargs[key] = value

        try:
            result = self._run_async_operation(
                "call_tool", action_name, **cleaned_kwargs
            )
            return self._format_result(result)
        except Exception as e:
            raise Exception(f"Failed to execute action '{action_name}': {str(e)}")

    def _format_result(self, result) -> Dict:
        """Format FastMCP result to match expected format."""
        if hasattr(result, "content"):
            content_list = []
            for content_item in result.content:
                if hasattr(content_item, "text"):
                    content_list.append({"type": "text", "text": content_item.text})
                elif hasattr(content_item, "data"):
                    content_list.append({"type": "data", "data": content_item.data})
                else:
                    content_list.append(
                        {"type": "unknown", "content": str(content_item)}
                    )
            return {
                "content": content_list,
                "isError": getattr(result, "isError", False),
            }
        else:
            return result
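
    # A successful text result therefore comes back shaped like
    # {"content": [{"type": "text", "text": "..."}], "isError": False}.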

    def test_connection(self) -> Dict:
        """
        Test the connection to the MCP server and validate functionality.

        Returns:
            Dictionary with connection test results including tool count
        """
        if not self.server_url:
            return {
                "success": False,
                "message": "No MCP server URL configured",
                "tools_count": 0,
                "transport_type": self.transport_type,
                "auth_type": self.auth_type,
                "error_type": "ConfigurationError",
            }
        if not self._client:
            self._setup_client()
        try:
            if self.auth_type == "oauth":
                return self._test_oauth_connection()
            else:
                return self._test_regular_connection()
        except Exception as e:
            return {
                "success": False,
                "message": f"Connection failed: {str(e)}",
                "tools_count": 0,
                "transport_type": self.transport_type,
                "auth_type": self.auth_type,
                "error_type": type(e).__name__,
            }

    def _test_regular_connection(self) -> Dict:
        """Test connection for non-OAuth auth types."""
        try:
            self._run_async_operation("ping")
            ping_success = True
        except Exception:
            ping_success = False

        tools = self.discover_tools()
        message = f"Successfully connected to MCP server. Found {len(tools)} tools."
        if not ping_success:
            message += " (Ping not supported, but tool discovery worked)"
        return {
            "success": True,
            "message": message,
            "tools_count": len(tools),
            "transport_type": self.transport_type,
            "auth_type": self.auth_type,
            "ping_supported": ping_success,
            "tools": [tool.get("name", "unknown") for tool in tools],
        }

    def _test_oauth_connection(self) -> Dict:
        """Test connection for OAuth auth type with proper async handling."""
        try:
            task = mcp_oauth_task.delay(config=self.config, user=self.user_id)
            if not task:
                raise Exception("Failed to start OAuth authentication")
            return {
                "success": True,
                "requires_oauth": True,
                "task_id": task.id,
                "status": "pending",
                "message": "OAuth flow started",
            }
        except Exception as e:
            return {
                "success": False,
                "message": f"OAuth connection failed: {str(e)}",
                "tools_count": 0,
                "transport_type": self.transport_type,
                "auth_type": self.auth_type,
                "error_type": type(e).__name__,
            }

    def get_actions_metadata(self) -> List[Dict]:
        """
        Get metadata for all available actions.

        Returns:
            List of action metadata dictionaries
        """
        actions = []
        for tool in self.available_tools:
            input_schema = (
                tool.get("inputSchema")
                or tool.get("input_schema")
                or tool.get("schema")
                or tool.get("parameters")
            )
            parameters_schema = {
                "type": "object",
                "properties": {},
                "required": [],
            }
            if input_schema:
                if isinstance(input_schema, dict):
                    if "properties" in input_schema:
                        parameters_schema = {
                            "type": input_schema.get("type", "object"),
                            "properties": input_schema.get("properties", {}),
                            "required": input_schema.get("required", []),
                        }
                        for key in ["additionalProperties", "description"]:
                            if key in input_schema:
                                parameters_schema[key] = input_schema[key]
                    else:
                        parameters_schema["properties"] = input_schema
            action = {
                "name": tool.get("name", ""),
                "description": tool.get("description", ""),
                "parameters": parameters_schema,
            }
            actions.append(action)
        return actions
def get_config_requirements(self) -> Dict:
"""Get configuration requirements for the MCP tool."""
return {
"server_url": {
"type": "string",
"description": "URL of the remote MCP server (e.g., https://api.example.com/mcp or https://docs.mcp.cloudflare.com/sse)",
"required": True,
},
"transport_type": {
"type": "string",
"description": "Transport type for connection",
"enum": ["auto", "sse", "http", "stdio"],
"default": "auto",
"required": False,
"help": {
"auto": "Automatically detect best transport",
"sse": "Server-Sent Events (for real-time streaming)",
"http": "HTTP streaming (recommended for production)",
"stdio": "Standard I/O (for local servers)",
},
},
"auth_type": {
"type": "string",
"description": "Authentication type",
"enum": ["none", "bearer", "oauth", "api_key", "basic"],
"default": "none",
"required": True,
"help": {
"none": "No authentication",
"bearer": "Bearer token authentication",
"oauth": "OAuth 2.1 authentication (with frontend integration)",
"api_key": "API key authentication",
"basic": "Basic authentication",
},
},
"auth_credentials": {
"type": "object",
"description": "Authentication credentials (varies by auth_type)",
"required": False,
"properties": {
"bearer_token": {
"type": "string",
"description": "Bearer token for bearer auth",
},
"access_token": {
"type": "string",
"description": "Access token for OAuth (if pre-obtained)",
},
"api_key": {
"type": "string",
"description": "API key for api_key auth",
},
"api_key_header": {
"type": "string",
"description": "Header name for API key (default: X-API-Key)",
},
"username": {
"type": "string",
"description": "Username for basic auth",
},
"password": {
"type": "string",
"description": "Password for basic auth",
},
},
},
"oauth_scopes": {
"type": "array",
"description": "OAuth scopes to request (for oauth auth_type)",
"items": {"type": "string"},
"required": False,
"default": [],
},
"oauth_client_name": {
"type": "string",
"description": "Client name for OAuth registration (for oauth auth_type)",
"default": "DocsGPT-MCP",
"required": False,
},
"headers": {
"type": "object",
"description": "Custom headers to send with requests",
"required": False,
},
"timeout": {
"type": "integer",
"description": "Request timeout in seconds",
"default": 30,
"minimum": 1,
"maximum": 300,
"required": False,
},
"command": {
"type": "string",
"description": "Command to run for STDIO transport (e.g., 'python')",
"required": False,
},
"args": {
"type": "array",
"description": "Arguments for STDIO command",
"items": {"type": "string"},
"required": False,
},
}
class DocsGPTOAuth(OAuthClientProvider):
"""
Custom OAuth handler for DocsGPT that uses frontend redirect instead of browser.
"""
def __init__(
self,
mcp_url: str,
redirect_uri: str,
redis_client: Redis | None = None,
redis_prefix: str = "mcp_oauth:",
        task_id: str | None = None,
scopes: str | list[str] | None = None,
client_name: str = "DocsGPT-MCP",
user_id=None,
db=None,
additional_client_metadata: dict[str, Any] | None = None,
):
"""
Initialize custom OAuth client provider for DocsGPT.
Args:
mcp_url: Full URL to the MCP endpoint
redirect_uri: Custom redirect URI for DocsGPT frontend
redis_client: Redis client for storing auth state
redis_prefix: Prefix for Redis keys
task_id: Task ID for tracking auth status
scopes: OAuth scopes to request
client_name: Name for this client during registration
user_id: User ID for token storage
db: Database instance for token storage
additional_client_metadata: Extra fields for OAuthClientMetadata
"""
self.redirect_uri = redirect_uri
self.redis_client = redis_client
self.redis_prefix = redis_prefix
self.task_id = task_id
self.user_id = user_id
self.db = db
parsed_url = urlparse(mcp_url)
self.server_base_url = f"{parsed_url.scheme}://{parsed_url.netloc}"
if isinstance(scopes, list):
scopes = " ".join(scopes)
client_metadata = OAuthClientMetadata(
client_name=client_name,
redirect_uris=[AnyHttpUrl(redirect_uri)],
grant_types=["authorization_code", "refresh_token"],
response_types=["code"],
scope=scopes,
**(additional_client_metadata or {}),
)
storage = DBTokenStorage(
server_url=self.server_base_url, user_id=self.user_id, db_client=self.db
)
super().__init__(
server_url=self.server_base_url,
client_metadata=client_metadata,
storage=storage,
redirect_handler=self.redirect_handler,
callback_handler=self.callback_handler,
)
self.auth_url = None
self.extracted_state = None
def _process_auth_url(self, authorization_url: str) -> tuple[str, str]:
"""Process authorization URL to extract state"""
try:
parsed_url = urlparse(authorization_url)
query_params = parse_qs(parsed_url.query)
state_params = query_params.get("state", [])
if state_params:
state = state_params[0]
else:
raise ValueError("No state in auth URL")
return authorization_url, state
except Exception as e:
raise Exception(f"Failed to process auth URL: {e}")
async def redirect_handler(self, authorization_url: str) -> None:
"""Store auth URL and state in Redis for frontend to use."""
auth_url, state = self._process_auth_url(authorization_url)
logging.info(
"[DocsGPTOAuth] Processed auth_url: %s, state: %s", auth_url, state
)
self.auth_url = auth_url
self.extracted_state = state
if self.redis_client and self.extracted_state:
key = f"{self.redis_prefix}auth_url:{self.extracted_state}"
self.redis_client.setex(key, 600, auth_url)
logging.info("[DocsGPTOAuth] Stored auth_url in Redis: %s", key)
if self.task_id:
status_key = f"mcp_oauth_status:{self.task_id}"
status_data = {
"status": "requires_redirect",
"message": "OAuth authorization required",
"authorization_url": self.auth_url,
"state": self.extracted_state,
"requires_oauth": True,
"task_id": self.task_id,
}
self.redis_client.setex(status_key, 600, json.dumps(status_data))
async def callback_handler(self) -> tuple[str, str | None]:
"""Wait for auth code from Redis using the state value."""
if not self.redis_client or not self.extracted_state:
raise Exception("Redis client or state not configured for OAuth")
poll_interval = 1
max_wait_time = 300
code_key = f"{self.redis_prefix}code:{self.extracted_state}"
if self.task_id:
status_key = f"mcp_oauth_status:{self.task_id}"
status_data = {
"status": "awaiting_callback",
"message": "Waiting for OAuth callback...",
"authorization_url": self.auth_url,
"state": self.extracted_state,
"requires_oauth": True,
"task_id": self.task_id,
}
self.redis_client.setex(status_key, 600, json.dumps(status_data))
start_time = time.time()
while time.time() - start_time < max_wait_time:
code_data = self.redis_client.get(code_key)
if code_data:
code = code_data.decode()
returned_state = self.extracted_state
self.redis_client.delete(code_key)
self.redis_client.delete(
f"{self.redis_prefix}auth_url:{self.extracted_state}"
)
self.redis_client.delete(
f"{self.redis_prefix}state:{self.extracted_state}"
)
if self.task_id:
status_data = {
"status": "callback_received",
"message": "OAuth callback received, completing authentication...",
"task_id": self.task_id,
}
self.redis_client.setex(status_key, 600, json.dumps(status_data))
return code, returned_state
error_key = f"{self.redis_prefix}error:{self.extracted_state}"
error_data = self.redis_client.get(error_key)
if error_data:
error_msg = error_data.decode()
self.redis_client.delete(error_key)
self.redis_client.delete(
f"{self.redis_prefix}auth_url:{self.extracted_state}"
)
self.redis_client.delete(
f"{self.redis_prefix}state:{self.extracted_state}"
)
raise Exception(f"OAuth error: {error_msg}")
await asyncio.sleep(poll_interval)
self.redis_client.delete(f"{self.redis_prefix}auth_url:{self.extracted_state}")
self.redis_client.delete(f"{self.redis_prefix}state:{self.extracted_state}")
raise Exception("OAuth callback timeout: no code received within 5 minutes")
class DBTokenStorage(TokenStorage):
def __init__(self, server_url: str, user_id: str, db_client):
self.server_url = server_url
self.user_id = user_id
self.db_client = db_client
self.collection = db_client["connector_sessions"]
@staticmethod
def get_base_url(url: str) -> str:
parsed = urlparse(url)
return f"{parsed.scheme}://{parsed.netloc}"
def get_db_key(self) -> dict:
return {
"server_url": self.get_base_url(self.server_url),
"user_id": self.user_id,
}
async def get_tokens(self) -> OAuthToken | None:
doc = await asyncio.to_thread(self.collection.find_one, self.get_db_key())
if not doc or "tokens" not in doc:
return None
try:
tokens = OAuthToken.model_validate(doc["tokens"])
return tokens
except ValidationError as e:
logging.error(f"Could not load tokens: {e}")
return None
async def set_tokens(self, tokens: OAuthToken) -> None:
await asyncio.to_thread(
self.collection.update_one,
self.get_db_key(),
{"$set": {"tokens": tokens.model_dump()}},
True,
)
logging.info(f"Saved tokens for {self.get_base_url(self.server_url)}")
async def get_client_info(self) -> OAuthClientInformationFull | None:
doc = await asyncio.to_thread(self.collection.find_one, self.get_db_key())
if not doc or "client_info" not in doc:
return None
try:
client_info = OAuthClientInformationFull.model_validate(doc["client_info"])
tokens = await self.get_tokens()
if tokens is None:
logging.debug(
"No tokens found, clearing client info to force fresh registration."
)
await asyncio.to_thread(
self.collection.update_one,
self.get_db_key(),
{"$unset": {"client_info": ""}},
)
return None
return client_info
except ValidationError as e:
logging.error(f"Could not load client info: {e}")
return None
def _serialize_client_info(self, info: dict) -> dict:
if "redirect_uris" in info and isinstance(info["redirect_uris"], list):
info["redirect_uris"] = [str(u) for u in info["redirect_uris"]]
return info
async def set_client_info(self, client_info: OAuthClientInformationFull) -> None:
serialized_info = self._serialize_client_info(client_info.model_dump())
await asyncio.to_thread(
self.collection.update_one,
self.get_db_key(),
{"$set": {"client_info": serialized_info}},
True,
)
logging.info(f"Saved client info for {self.get_base_url(self.server_url)}")
async def clear(self) -> None:
await asyncio.to_thread(self.collection.delete_one, self.get_db_key())
logging.info(f"Cleared OAuth cache for {self.get_base_url(self.server_url)}")
@classmethod
async def clear_all(cls, db_client) -> None:
collection = db_client["connector_sessions"]
await asyncio.to_thread(collection.delete_many, {})
logging.info("Cleared all OAuth client cache data.")
class MCPOAuthManager:
"""Manager for handling MCP OAuth callbacks."""
def __init__(self, redis_client: Redis | None, redis_prefix: str = "mcp_oauth:"):
self.redis_client = redis_client
self.redis_prefix = redis_prefix
def handle_oauth_callback(
self, state: str, code: str, error: Optional[str] = None
) -> bool:
"""
Handle OAuth callback from provider.
Args:
state: The state parameter from OAuth callback
code: The authorization code from OAuth callback
error: Error message if OAuth failed
Returns:
True if successful, False otherwise
"""
try:
if not self.redis_client or not state:
raise Exception("Redis client or state not provided")
if error:
error_key = f"{self.redis_prefix}error:{state}"
self.redis_client.setex(error_key, 300, error)
raise Exception(f"OAuth error received: {error}")
code_key = f"{self.redis_prefix}code:{state}"
self.redis_client.setex(code_key, 300, code)
state_key = f"{self.redis_prefix}state:{state}"
self.redis_client.setex(state_key, 300, "completed")
return True
except Exception as e:
logging.error(f"Error handling OAuth callback: {e}")
return False
def get_oauth_status(self, task_id: str) -> Dict[str, Any]:
"""Get current status of OAuth flow using provided task_id."""
if not task_id:
return {"status": "not_started", "message": "OAuth flow not started"}
return mcp_oauth_status_task(task_id)
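
A minimal sketch of the frontend-facing callback endpoint that completes this flow, assuming a Flask app, a reachable local Redis instance, and that MCPOAuthManager is importable from the module above; the route path and response bodies are hypothetical. The manager writes the code under mcp_oauth:code:<state>, which the polling loop in DocsGPTOAuth.callback_handler picks up to resume authentication.

from flask import Flask, request
from redis import Redis
# from application.agents.tools.mcp_tool import MCPOAuthManager  # import path assumed

app = Flask(__name__)
manager = MCPOAuthManager(Redis(), redis_prefix="mcp_oauth:")  # assumes local Redis

@app.route("/api/mcp/oauth/callback")  # hypothetical path
def mcp_oauth_callback():
    # The provider redirects here with ?state=...&code=... (or ?error=...).
    ok = manager.handle_oauth_callback(
        state=request.args.get("state", ""),
        code=request.args.get("code", ""),
        error=request.args.get("error"),
    )
    return ({"success": True}, 200) if ok else ({"success": False}, 400)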

View File

@@ -19,8 +19,20 @@ class ToolActionParser:
def _parse_openai_llm(self, call):
try:
call_args = json.loads(call.arguments)
tool_id = call.name.split("_")[-1]
action_name = call.name.rsplit("_", 1)[0]
tool_parts = call.name.split("_")
# If the tool name doesn't contain an underscore, it's likely a hallucinated tool
if len(tool_parts) < 2:
logger.warning(f"Invalid tool name format: {call.name}. Expected format: action_name_tool_id")
return None, None, None
tool_id = tool_parts[-1]
action_name = "_".join(tool_parts[:-1])
# Validate that tool_id looks like a numerical ID
if not tool_id.isdigit():
logger.warning(f"Tool ID '{tool_id}' is not numerical. This might be a hallucinated tool call.")
except (AttributeError, TypeError) as e:
logger.error(f"Error parsing OpenAI LLM call: {e}")
return None, None, None
@@ -29,8 +41,20 @@ class ToolActionParser:
def _parse_google_llm(self, call):
try:
call_args = call.arguments
tool_id = call.name.split("_")[-1]
action_name = call.name.rsplit("_", 1)[0]
tool_parts = call.name.split("_")
# If the tool name doesn't contain an underscore, it's likely a hallucinated tool
if len(tool_parts) < 2:
logger.warning(f"Invalid tool name format: {call.name}. Expected format: action_name_tool_id")
return None, None, None
tool_id = tool_parts[-1]
action_name = "_".join(tool_parts[:-1])
# Validate that tool_id looks like a numerical ID
if not tool_id.isdigit():
logger.warning(f"Tool ID '{tool_id}' is not numerical. This might be a hallucinated tool call.")
except (AttributeError, TypeError) as e:
logger.error(f"Error parsing Google LLM call: {e}")
return None, None, None
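
Both hunks above apply the same guard; a standalone sketch of the naming convention they validate (tool calls named <action_name>_<tool_id>, with a numeric trailing id). The helper name below is hypothetical:

def split_tool_call_name(name: str):
    # Mirrors the parser: everything before the last underscore is the action
    # name, the trailing segment is the tool id.
    parts = name.split("_")
    if len(parts) < 2:
        return None, None  # no underscore: likely a hallucinated tool name
    action_name, tool_id = "_".join(parts[:-1]), parts[-1]
    if not tool_id.isdigit():
        print(f"suspicious tool id: {tool_id!r}")  # the parser logs a warning here
    return action_name, tool_id

assert split_tool_call_name("read_webpage_17") == ("read_webpage", "17")
assert split_tool_call_name("readwebpage") == (None, None)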

View File

@@ -23,16 +23,23 @@ class ToolManager:
tool_config = self.config.get(name, {})
self.tools[name] = obj(tool_config)
def load_tool(self, tool_name, tool_config):
def load_tool(self, tool_name, tool_config, user_id=None):
self.config[tool_name] = tool_config
module = importlib.import_module(f"application.agents.tools.{tool_name}")
for member_name, obj in inspect.getmembers(module, inspect.isclass):
if issubclass(obj, Tool) and obj is not Tool:
return obj(tool_config)
if tool_name == "mcp_tool" and user_id:
return obj(tool_config, user_id)
else:
return obj(tool_config)
def execute_action(self, tool_name, action_name, **kwargs):
def execute_action(self, tool_name, action_name, user_id=None, **kwargs):
if tool_name not in self.tools:
raise ValueError(f"Tool '{tool_name}' not loaded")
if tool_name == "mcp_tool" and user_id:
tool_config = self.config.get(tool_name, {})
tool = self.load_tool(tool_name, tool_config, user_id)
return tool.execute_action(action_name, **kwargs)
return self.tools[tool_name].execute_action(action_name, **kwargs)
def get_all_actions_metadata(self):
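
Hypothetical usage of the updated signatures: mcp_tool now receives the calling user's id so per-user OAuth tokens can be resolved. The constructor signature, import path, and config values below are assumptions based on the surrounding code:

from application.agents.tools.tool_manager import ToolManager  # path assumed

manager = ToolManager(config={})  # constructor signature assumed
mcp_config = {
    "server_url": "https://docs.mcp.cloudflare.com/sse",
    "auth_type": "none",
    "transport_type": "auto",
}
tool = manager.load_tool("mcp_tool", mcp_config, user_id="user-123")
print(tool.discover_tools())  # requires the MCP server to be reachable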

View File

@@ -0,0 +1,7 @@
from flask_restx import Api
api = Api(
version="1.0",
title="DocsGPT API",
description="API for DocsGPT",
)
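
A minimal sketch of attaching this shared Api instance to an app; create_app is hypothetical, while init_app is the standard flask_restx call:

from flask import Flask
from application.api import api  # the module defined above

def create_app() -> Flask:
    app = Flask(__name__)
    api.init_app(app)  # registers every namespace added via api.add_namespace
    return app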

View File

@@ -0,0 +1,19 @@
from flask import Blueprint
from application.api import api
from application.api.answer.routes.answer import AnswerResource
from application.api.answer.routes.base import answer_ns
from application.api.answer.routes.stream import StreamResource
answer = Blueprint("answer", __name__)
api.add_namespace(answer_ns)
def init_answer_routes():
api.add_resource(StreamResource, "/stream")
api.add_resource(AnswerResource, "/api/answer")
init_answer_routes()
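
Registering the blueprint on the application would then look like the sketch below, assuming this module lives at application.api.answer.routes (the path is inferred from the imports above):

from flask import Flask
from application.api.answer.routes import answer  # import path assumed

app = Flask(__name__)
app.register_blueprint(answer)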

View File

@@ -1,925 +0,0 @@
import asyncio
import datetime
import json
import logging
import os
import traceback
from bson.dbref import DBRef
from bson.objectid import ObjectId
from flask import Blueprint, make_response, request, Response
from flask_restx import fields, Namespace, Resource
from application.agents.agent_creator import AgentCreator
from application.core.mongo_db import MongoDB
from application.core.settings import settings
from application.error import bad_request
from application.extensions import api
from application.llm.llm_creator import LLMCreator
from application.retriever.retriever_creator import RetrieverCreator
from application.utils import check_required_fields, limit_chat_history
logger = logging.getLogger(__name__)
mongo = MongoDB.get_client()
db = mongo[settings.MONGO_DB_NAME]
conversations_collection = db["conversations"]
sources_collection = db["sources"]
prompts_collection = db["prompts"]
agents_collection = db["agents"]
user_logs_collection = db["user_logs"]
attachments_collection = db["attachments"]
answer = Blueprint("answer", __name__)
answer_ns = Namespace("answer", description="Answer related operations", path="/")
api.add_namespace(answer_ns)
gpt_model = ""
# to have some kind of default behaviour
if settings.LLM_PROVIDER == "openai":
gpt_model = "gpt-4o-mini"
elif settings.LLM_PROVIDER == "anthropic":
gpt_model = "claude-2"
elif settings.LLM_PROVIDER == "groq":
gpt_model = "llama3-8b-8192"
elif settings.LLM_PROVIDER == "novita":
gpt_model = "deepseek/deepseek-r1"
if settings.LLM_NAME: # in case there is particular model name configured
gpt_model = settings.LLM_NAME
# load the prompts
current_dir = os.path.dirname(
os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
)
with open(os.path.join(current_dir, "prompts", "chat_combine_default.txt"), "r") as f:
chat_combine_template = f.read()
with open(os.path.join(current_dir, "prompts", "chat_reduce_prompt.txt"), "r") as f:
chat_reduce_template = f.read()
with open(os.path.join(current_dir, "prompts", "chat_combine_creative.txt"), "r") as f:
chat_combine_creative = f.read()
with open(os.path.join(current_dir, "prompts", "chat_combine_strict.txt"), "r") as f:
chat_combine_strict = f.read()
api_key_set = settings.API_KEY is not None
embeddings_key_set = settings.EMBEDDINGS_KEY is not None
async def async_generate(chain, question, chat_history):
result = await chain.arun({"question": question, "chat_history": chat_history})
return result
def run_async_chain(chain, question, chat_history):
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
result = {}
try:
answer = loop.run_until_complete(async_generate(chain, question, chat_history))
finally:
loop.close()
result["answer"] = answer
return result
def get_agent_key(agent_id, user_id):
if not agent_id:
return None, False, None
try:
agent = agents_collection.find_one({"_id": ObjectId(agent_id)})
if agent is None:
raise Exception("Agent not found", 404)
is_owner = agent.get("user") == user_id
if is_owner:
agents_collection.update_one(
{"_id": ObjectId(agent_id)},
{"$set": {"lastUsedAt": datetime.datetime.now(datetime.timezone.utc)}},
)
return str(agent["key"]), False, None
is_shared_with_user = agent.get(
"shared_publicly", False
) or user_id in agent.get("shared_with", [])
if is_shared_with_user:
return str(agent["key"]), True, agent.get("shared_token")
raise Exception("Unauthorized access to the agent", 403)
except Exception as e:
logger.error(f"Error in get_agent_key: {str(e)}", exc_info=True)
raise
def get_data_from_api_key(api_key):
data = agents_collection.find_one({"key": api_key})
if not data:
raise Exception("Invalid API Key, please generate a new key", 401)
source = data.get("source")
if isinstance(source, DBRef):
source_doc = db.dereference(source)
data["source"] = str(source_doc["_id"])
data["retriever"] = source_doc.get("retriever", data.get("retriever"))
else:
data["source"] = {}
return data
def get_retriever(source_id: str):
doc = sources_collection.find_one({"_id": ObjectId(source_id)})
if doc is None:
raise Exception("Source document does not exist", 404)
retriever_name = None if "retriever" not in doc else doc["retriever"]
return retriever_name
def is_azure_configured():
return (
settings.OPENAI_API_BASE
and settings.OPENAI_API_VERSION
and settings.AZURE_DEPLOYMENT_NAME
)
def save_conversation(
conversation_id,
question,
response,
thought,
source_log_docs,
tool_calls,
llm,
decoded_token,
index=None,
api_key=None,
agent_id=None,
is_shared_usage=False,
shared_token=None,
attachment_ids=None,
):
current_time = datetime.datetime.now(datetime.timezone.utc)
if conversation_id is not None and index is not None:
conversations_collection.update_one(
{"_id": ObjectId(conversation_id), f"queries.{index}": {"$exists": True}},
{
"$set": {
f"queries.{index}.prompt": question,
f"queries.{index}.response": response,
f"queries.{index}.thought": thought,
f"queries.{index}.sources": source_log_docs,
f"queries.{index}.tool_calls": tool_calls,
f"queries.{index}.timestamp": current_time,
f"queries.{index}.attachments": attachment_ids,
}
},
)
        # remove the queries after this index from the array
conversations_collection.update_one(
{"_id": ObjectId(conversation_id), f"queries.{index}": {"$exists": True}},
{"$push": {"queries": {"$each": [], "$slice": index + 1}}},
)
elif conversation_id is not None and conversation_id != "None":
conversations_collection.update_one(
{"_id": ObjectId(conversation_id)},
{
"$push": {
"queries": {
"prompt": question,
"response": response,
"thought": thought,
"sources": source_log_docs,
"tool_calls": tool_calls,
"timestamp": current_time,
"attachments": attachment_ids,
}
}
},
)
else:
# create new conversation
# generate summary
messages_summary = [
{
"role": "assistant",
"content": "Summarise following conversation in no more than 3 "
"words, respond ONLY with the summary, use the same "
"language as the system",
},
{
"role": "user",
"content": "Summarise following conversation in no more than 3 words, "
"respond ONLY with the summary, use the same language as the "
"system \n\nUser: " + question + "\n\n" + "AI: " + response,
},
]
completion = llm.gen(model=gpt_model, messages=messages_summary, max_tokens=30)
conversation_data = {
"user": decoded_token.get("sub"),
"date": datetime.datetime.utcnow(),
"name": completion,
"queries": [
{
"prompt": question,
"response": response,
"thought": thought,
"sources": source_log_docs,
"tool_calls": tool_calls,
"timestamp": current_time,
"attachments": attachment_ids,
}
],
}
if api_key:
if agent_id:
conversation_data["agent_id"] = agent_id
if is_shared_usage:
conversation_data["is_shared_usage"] = is_shared_usage
conversation_data["shared_token"] = shared_token
api_key_doc = agents_collection.find_one({"key": api_key})
if api_key_doc:
conversation_data["api_key"] = api_key_doc["key"]
conversation_id = conversations_collection.insert_one(
conversation_data
).inserted_id
return conversation_id
def get_prompt(prompt_id):
if prompt_id == "default":
prompt = chat_combine_template
elif prompt_id == "creative":
prompt = chat_combine_creative
elif prompt_id == "strict":
prompt = chat_combine_strict
else:
prompt = prompts_collection.find_one({"_id": ObjectId(prompt_id)})["content"]
return prompt
def complete_stream(
question,
agent,
retriever,
conversation_id,
user_api_key,
decoded_token,
isNoneDoc=False,
index=None,
should_save_conversation=True,
attachment_ids=None,
agent_id=None,
is_shared_usage=False,
shared_token=None,
):
try:
response_full, thought, source_log_docs, tool_calls = "", "", [], []
answer = agent.gen(query=question, retriever=retriever)
for line in answer:
if "answer" in line:
response_full += str(line["answer"])
data = json.dumps({"type": "answer", "answer": line["answer"]})
yield f"data: {data}\n\n"
elif "sources" in line:
truncated_sources = []
source_log_docs = line["sources"]
for source in line["sources"]:
truncated_source = source.copy()
if "text" in truncated_source:
truncated_source["text"] = (
truncated_source["text"][:100].strip() + "..."
)
truncated_sources.append(truncated_source)
if len(truncated_sources) > 0:
data = json.dumps({"type": "source", "source": truncated_sources})
yield f"data: {data}\n\n"
elif "tool_calls" in line:
tool_calls = line["tool_calls"]
elif "thought" in line:
thought += line["thought"]
data = json.dumps({"type": "thought", "thought": line["thought"]})
yield f"data: {data}\n\n"
elif "type" in line:
data = json.dumps(line)
yield f"data: {data}\n\n"
if isNoneDoc:
for doc in source_log_docs:
doc["source"] = "None"
llm = LLMCreator.create_llm(
settings.LLM_PROVIDER,
api_key=settings.API_KEY,
user_api_key=user_api_key,
decoded_token=decoded_token,
)
if should_save_conversation:
conversation_id = save_conversation(
conversation_id,
question,
response_full,
thought,
source_log_docs,
tool_calls,
llm,
decoded_token,
index,
api_key=user_api_key,
attachment_ids=attachment_ids,
agent_id=agent_id,
is_shared_usage=is_shared_usage,
shared_token=shared_token,
)
else:
conversation_id = None
# send data.type = "end" to indicate that the stream has ended as json
data = json.dumps({"type": "id", "id": str(conversation_id)})
yield f"data: {data}\n\n"
retriever_params = retriever.get_params()
user_logs_collection.insert_one(
{
"action": "stream_answer",
"level": "info",
"user": decoded_token.get("sub"),
"api_key": user_api_key,
"question": question,
"response": response_full,
"sources": source_log_docs,
"retriever_params": retriever_params,
"attachments": attachment_ids,
"timestamp": datetime.datetime.now(datetime.timezone.utc),
}
)
data = json.dumps({"type": "end"})
yield f"data: {data}\n\n"
except Exception as e:
logger.error(f"Error in stream: {str(e)}", exc_info=True)
data = json.dumps(
{
"type": "error",
"error": "Please try again later. We apologize for any inconvenience.",
}
)
yield f"data: {data}\n\n"
return
@answer_ns.route("/stream")
class Stream(Resource):
stream_model = api.model(
"StreamModel",
{
"question": fields.String(
required=True, description="Question to be asked"
),
"history": fields.List(
fields.String, required=False, description="Chat history"
),
"conversation_id": fields.String(
required=False, description="Conversation ID"
),
"prompt_id": fields.String(
required=False, default="default", description="Prompt ID"
),
"chunks": fields.Integer(
required=False, default=2, description="Number of chunks"
),
"token_limit": fields.Integer(required=False, description="Token limit"),
"retriever": fields.String(required=False, description="Retriever type"),
"api_key": fields.String(required=False, description="API key"),
"active_docs": fields.String(
required=False, description="Active documents"
),
"isNoneDoc": fields.Boolean(
required=False, description="Flag indicating if no document is used"
),
"index": fields.Integer(
required=False, description="Index of the query to update"
),
"save_conversation": fields.Boolean(
required=False,
default=True,
description="Whether to save the conversation",
),
"attachments": fields.List(
fields.String, required=False, description="List of attachment IDs"
),
},
)
@api.expect(stream_model)
@api.doc(description="Stream a response based on the question and retriever")
def post(self):
data = request.get_json()
required_fields = ["question"]
if "index" in data:
required_fields = ["question", "conversation_id"]
missing_fields = check_required_fields(data, required_fields)
if missing_fields:
return missing_fields
save_conv = data.get("save_conversation", True)
try:
question = data["question"]
history = limit_chat_history(
json.loads(data.get("history", "[]")), gpt_model=gpt_model
)
conversation_id = data.get("conversation_id")
prompt_id = data.get("prompt_id", "default")
attachment_ids = data.get("attachments", [])
index = data.get("index", None)
chunks = int(data.get("chunks", 2))
token_limit = data.get("token_limit", settings.DEFAULT_MAX_HISTORY)
retriever_name = data.get("retriever", "classic")
agent_id = data.get("agent_id", None)
agent_type = settings.AGENT_NAME
decoded_token = getattr(request, "decoded_token", None)
user_sub = decoded_token.get("sub") if decoded_token else None
agent_key, is_shared_usage, shared_token = get_agent_key(agent_id, user_sub)
if agent_key:
data.update({"api_key": agent_key})
else:
agent_id = None
if "api_key" in data:
data_key = get_data_from_api_key(data["api_key"])
chunks = int(data_key.get("chunks", 2))
prompt_id = data_key.get("prompt_id", "default")
source = {"active_docs": data_key.get("source")}
retriever_name = data_key.get("retriever", retriever_name)
user_api_key = data["api_key"]
agent_type = data_key.get("agent_type", agent_type)
if is_shared_usage:
decoded_token = request.decoded_token
else:
decoded_token = {"sub": data_key.get("user")}
is_shared_usage = False
elif "active_docs" in data:
source = {"active_docs": data["active_docs"]}
retriever_name = get_retriever(data["active_docs"]) or retriever_name
user_api_key = None
decoded_token = request.decoded_token
else:
source = {}
user_api_key = None
decoded_token = request.decoded_token
if not decoded_token:
return make_response({"error": "Unauthorized"}, 401)
attachments = get_attachments_content(
attachment_ids, decoded_token.get("sub")
)
logger.info(
f"/stream - request_data: {data}, source: {source}, attachments: {len(attachments)}",
extra={"data": json.dumps({"request_data": data, "source": source})},
)
prompt = get_prompt(prompt_id)
if "isNoneDoc" in data and data["isNoneDoc"] is True:
chunks = 0
agent = AgentCreator.create_agent(
agent_type,
endpoint="stream",
llm_name=settings.LLM_PROVIDER,
gpt_model=gpt_model,
api_key=settings.API_KEY,
user_api_key=user_api_key,
prompt=prompt,
chat_history=history,
decoded_token=decoded_token,
attachments=attachments,
)
retriever = RetrieverCreator.create_retriever(
retriever_name,
source=source,
chat_history=history,
prompt=prompt,
chunks=chunks,
token_limit=token_limit,
gpt_model=gpt_model,
user_api_key=user_api_key,
decoded_token=decoded_token,
)
return Response(
complete_stream(
question=question,
agent=agent,
retriever=retriever,
conversation_id=conversation_id,
user_api_key=user_api_key,
decoded_token=decoded_token,
isNoneDoc=data.get("isNoneDoc"),
index=index,
should_save_conversation=save_conv,
attachment_ids=attachment_ids,
agent_id=agent_id,
is_shared_usage=is_shared_usage,
shared_token=shared_token,
),
mimetype="text/event-stream",
)
except ValueError:
message = "Malformed request body"
logger.error(f"/stream - error: {message}")
return Response(
error_stream_generate(message),
status=400,
mimetype="text/event-stream",
)
except Exception as e:
logger.error(
f"/stream - error: {str(e)} - traceback: {traceback.format_exc()}",
extra={"error": str(e), "traceback": traceback.format_exc()},
)
status_code = 400
return Response(
error_stream_generate("Unknown error occurred"),
status=status_code,
mimetype="text/event-stream",
)
def error_stream_generate(err_response):
data = json.dumps({"type": "error", "error": err_response})
yield f"data: {data}\n\n"
@answer_ns.route("/api/answer")
class Answer(Resource):
answer_model = api.model(
"AnswerModel",
{
"question": fields.String(
required=True, description="The question to answer"
),
"history": fields.List(
fields.String, required=False, description="Conversation history"
),
"conversation_id": fields.String(
required=False, description="Conversation ID"
),
"prompt_id": fields.String(
required=False, default="default", description="Prompt ID"
),
"chunks": fields.Integer(
required=False, default=2, description="Number of chunks"
),
"token_limit": fields.Integer(required=False, description="Token limit"),
"retriever": fields.String(required=False, description="Retriever type"),
"api_key": fields.String(required=False, description="API key"),
"active_docs": fields.String(
required=False, description="Active documents"
),
"isNoneDoc": fields.Boolean(
required=False, description="Flag indicating if no document is used"
),
"attachments": fields.List(
fields.String, required=False, description="List of attachment IDs"
),
},
)
@api.expect(answer_model)
@api.doc(description="Provide an answer based on the question and retriever")
def post(self):
data = request.get_json()
required_fields = ["question"]
missing_fields = check_required_fields(data, required_fields)
if missing_fields:
return missing_fields
try:
question = data["question"]
history = limit_chat_history(
json.loads(data.get("history", "[]")), gpt_model=gpt_model
)
conversation_id = data.get("conversation_id")
prompt_id = data.get("prompt_id", "default")
attachment_ids = data.get("attachments", [])
chunks = int(data.get("chunks", 2))
token_limit = data.get("token_limit", settings.DEFAULT_MAX_HISTORY)
retriever_name = data.get("retriever", "classic")
agent_type = settings.AGENT_NAME
if "api_key" in data:
data_key = get_data_from_api_key(data["api_key"])
chunks = int(data_key.get("chunks", 2))
prompt_id = data_key.get("prompt_id", "default")
source = {"active_docs": data_key.get("source")}
retriever_name = data_key.get("retriever", retriever_name)
user_api_key = data["api_key"]
agent_type = data_key.get("agent_type", agent_type)
decoded_token = {"sub": data_key.get("user")}
elif "active_docs" in data:
source = {"active_docs": data["active_docs"]}
retriever_name = get_retriever(data["active_docs"]) or retriever_name
user_api_key = None
decoded_token = request.decoded_token
else:
source = {}
user_api_key = None
decoded_token = request.decoded_token
if not decoded_token:
return make_response({"error": "Unauthorized"}, 401)
attachments = get_attachments_content(
attachment_ids, decoded_token.get("sub")
)
prompt = get_prompt(prompt_id)
logger.info(
f"/api/answer - request_data: {data}, source: {source}, attachments: {len(attachments)}",
extra={"data": json.dumps({"request_data": data, "source": source})},
)
agent = AgentCreator.create_agent(
agent_type,
endpoint="api/answer",
llm_name=settings.LLM_PROVIDER,
gpt_model=gpt_model,
api_key=settings.API_KEY,
user_api_key=user_api_key,
prompt=prompt,
chat_history=history,
decoded_token=decoded_token,
attachments=attachments,
)
retriever = RetrieverCreator.create_retriever(
retriever_name,
source=source,
chat_history=history,
prompt=prompt,
chunks=chunks,
token_limit=token_limit,
gpt_model=gpt_model,
user_api_key=user_api_key,
decoded_token=decoded_token,
)
response_full = ""
source_log_docs = []
tool_calls = []
stream_ended = False
thought = ""
for line in complete_stream(
question=question,
agent=agent,
retriever=retriever,
conversation_id=conversation_id,
user_api_key=user_api_key,
decoded_token=decoded_token,
isNoneDoc=data.get("isNoneDoc"),
index=None,
should_save_conversation=False,
attachment_ids=attachment_ids,
):
try:
event_data = line.replace("data: ", "").strip()
event = json.loads(event_data)
if event["type"] == "answer":
response_full += event["answer"]
elif event["type"] == "source":
source_log_docs = event["source"]
elif event["type"] == "tool_calls":
tool_calls = event["tool_calls"]
elif event["type"] == "thought":
thought = event["thought"]
elif event["type"] == "error":
logger.error(f"Error from stream: {event['error']}")
return bad_request(500, event["error"])
elif event["type"] == "end":
stream_ended = True
except (json.JSONDecodeError, KeyError) as e:
logger.warning(f"Error parsing stream event: {e}, line: {line}")
continue
if not stream_ended:
logger.error("Stream ended unexpectedly without an 'end' event.")
return bad_request(500, "Stream ended unexpectedly.")
if data.get("isNoneDoc"):
for doc in source_log_docs:
doc["source"] = "None"
llm = LLMCreator.create_llm(
settings.LLM_PROVIDER,
api_key=settings.API_KEY,
user_api_key=user_api_key,
decoded_token=decoded_token,
)
result = {"answer": response_full, "sources": source_log_docs}
result["conversation_id"] = str(
save_conversation(
conversation_id,
question,
response_full,
thought,
source_log_docs,
tool_calls,
llm,
decoded_token,
api_key=user_api_key,
attachment_ids=attachment_ids,
)
)
retriever_params = retriever.get_params()
user_logs_collection.insert_one(
{
"action": "api_answer",
"level": "info",
"user": decoded_token.get("sub"),
"api_key": user_api_key,
"question": question,
"response": response_full,
"sources": source_log_docs,
"retriever_params": retriever_params,
"timestamp": datetime.datetime.now(datetime.timezone.utc),
}
)
except Exception as e:
logger.error(
f"/api/answer - error: {str(e)} - traceback: {traceback.format_exc()}",
extra={"error": str(e), "traceback": traceback.format_exc()},
)
return bad_request(500, str(e))
return make_response(result, 200)
@answer_ns.route("/api/search")
class Search(Resource):
search_model = api.model(
"SearchModel",
{
"question": fields.String(
required=True, description="The question to search"
),
"chunks": fields.Integer(
required=False, default=2, description="Number of chunks"
),
"api_key": fields.String(
required=False, description="API key for authentication"
),
"active_docs": fields.String(
required=False, description="Active documents for retrieval"
),
"retriever": fields.String(required=False, description="Retriever type"),
"token_limit": fields.Integer(
required=False, description="Limit for tokens"
),
"isNoneDoc": fields.Boolean(
required=False, description="Flag indicating if no document is used"
),
},
)
@api.expect(search_model)
@api.doc(
description="Search for relevant documents based on the question and retriever"
)
def post(self):
data = request.get_json()
required_fields = ["question"]
missing_fields = check_required_fields(data, required_fields)
if missing_fields:
return missing_fields
try:
question = data["question"]
chunks = int(data.get("chunks", 2))
token_limit = data.get("token_limit", settings.DEFAULT_MAX_HISTORY)
retriever_name = data.get("retriever", "classic")
if "api_key" in data:
data_key = get_data_from_api_key(data["api_key"])
chunks = int(data_key.get("chunks", 2))
source = {"active_docs": data_key.get("source")}
user_api_key = data["api_key"]
decoded_token = {"sub": data_key.get("user")}
elif "active_docs" in data:
source = {"active_docs": data["active_docs"]}
user_api_key = None
decoded_token = request.decoded_token
else:
source = {}
user_api_key = None
decoded_token = request.decoded_token
if not decoded_token:
return make_response({"error": "Unauthorized"}, 401)
logger.info(
f"/api/answer - request_data: {data}, source: {source}",
extra={"data": json.dumps({"request_data": data, "source": source})},
)
retriever = RetrieverCreator.create_retriever(
retriever_name,
source=source,
chat_history=[],
prompt="default",
chunks=chunks,
token_limit=token_limit,
gpt_model=gpt_model,
user_api_key=user_api_key,
decoded_token=decoded_token,
)
docs = retriever.search(question)
retriever_params = retriever.get_params()
user_logs_collection.insert_one(
{
"action": "api_search",
"level": "info",
"user": decoded_token.get("sub"),
"api_key": user_api_key,
"question": question,
"sources": docs,
"retriever_params": retriever_params,
"timestamp": datetime.datetime.now(datetime.timezone.utc),
}
)
if data.get("isNoneDoc"):
for doc in docs:
doc["source"] = "None"
except Exception as e:
logger.error(
f"/api/search - error: {str(e)} - traceback: {traceback.format_exc()}",
extra={"error": str(e), "traceback": traceback.format_exc()},
)
return bad_request(500, str(e))
return make_response(docs, 200)
def get_attachments_content(attachment_ids, user):
"""
Retrieve content from attachment documents based on their IDs.
Args:
attachment_ids (list): List of attachment document IDs
user (str): User identifier to verify ownership
Returns:
list: List of dictionaries containing attachment content and metadata
"""
if not attachment_ids:
return []
attachments = []
for attachment_id in attachment_ids:
try:
attachment_doc = attachments_collection.find_one(
{"_id": ObjectId(attachment_id), "user": user}
)
if attachment_doc:
attachments.append(attachment_doc)
except Exception as e:
logger.error(
f"Error retrieving attachment {attachment_id}: {e}", exc_info=True
)
return attachments
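
The SSE protocol this file emitted survives the refactor below: each event is a "data: {...}" line whose JSON carries a type of answer, source, thought, tool_calls, id, end, or error. A minimal client sketch using requests; the base URL is an assumption:

import json
import requests

def consume_stream(base_url: str, question: str) -> str:
    answer_text = ""
    with requests.post(
        f"{base_url}/stream", json={"question": question}, stream=True
    ) as resp:
        for raw in resp.iter_lines():
            if not raw.startswith(b"data: "):
                continue  # skip keep-alives and blank lines
            event = json.loads(raw[len(b"data: "):])
            if event["type"] == "answer":
                answer_text += event["answer"]
            elif event["type"] == "error":
                raise RuntimeError(event["error"])
            elif event["type"] == "end":
                break
    return answer_text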

View File

@@ -0,0 +1,122 @@
import logging
import traceback
from flask import make_response, request
from flask_restx import fields, Resource
from application.api import api
from application.api.answer.routes.base import answer_ns, BaseAnswerResource
from application.api.answer.services.stream_processor import StreamProcessor
logger = logging.getLogger(__name__)
@answer_ns.route("/api/answer")
class AnswerResource(Resource, BaseAnswerResource):
def __init__(self, *args, **kwargs):
Resource.__init__(self, *args, **kwargs)
BaseAnswerResource.__init__(self)
answer_model = answer_ns.model(
"AnswerModel",
{
"question": fields.String(
required=True, description="Question to be asked"
),
"history": fields.List(
fields.String,
required=False,
description="Conversation history (only for new conversations)",
),
"conversation_id": fields.String(
required=False,
description="Existing conversation ID (loads history)",
),
"prompt_id": fields.String(
required=False, default="default", description="Prompt ID"
),
"chunks": fields.Integer(
required=False, default=2, description="Number of chunks"
),
"token_limit": fields.Integer(required=False, description="Token limit"),
"retriever": fields.String(required=False, description="Retriever type"),
"api_key": fields.String(required=False, description="API key"),
"active_docs": fields.String(
required=False, description="Active documents"
),
"isNoneDoc": fields.Boolean(
required=False, description="Flag indicating if no document is used"
),
"save_conversation": fields.Boolean(
required=False,
default=True,
description="Whether to save the conversation",
),
},
)
@api.expect(answer_model)
@api.doc(description="Provide a response based on the question and retriever")
def post(self):
data = request.get_json()
if error := self.validate_request(data):
return error
decoded_token = getattr(request, "decoded_token", None)
processor = StreamProcessor(data, decoded_token)
try:
processor.initialize()
if not processor.decoded_token:
return make_response({"error": "Unauthorized"}, 401)
agent = processor.create_agent()
retriever = processor.create_retriever()
stream = self.complete_stream(
question=data["question"],
agent=agent,
retriever=retriever,
conversation_id=processor.conversation_id,
user_api_key=processor.agent_config.get("user_api_key"),
decoded_token=processor.decoded_token,
isNoneDoc=data.get("isNoneDoc"),
index=None,
should_save_conversation=data.get("save_conversation", True),
)
stream_result = self.process_response_stream(stream)
if len(stream_result) == 7:
(
conversation_id,
response,
sources,
tool_calls,
thought,
error,
structured_info,
) = stream_result
else:
conversation_id, response, sources, tool_calls, thought, error = (
stream_result
)
structured_info = None
if error:
return make_response({"error": error}, 400)
result = {
"conversation_id": conversation_id,
"answer": response,
"sources": sources,
"tool_calls": tool_calls,
"thought": thought,
}
if structured_info:
result.update(structured_info)
except Exception as e:
logger.error(
f"/api/answer - error: {str(e)} - traceback: {traceback.format_exc()}",
extra={"error": str(e), "traceback": traceback.format_exc()},
)
return make_response({"error": str(e)}, 500)
return make_response(result, 200)
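
A hedged example call against the rewritten endpoint; the response keys mirror the result dict assembled above, and the host and port are assumptions:

import requests

resp = requests.post(
    "http://localhost:7091/api/answer",  # host and port are assumptions
    json={"question": "What is DocsGPT?", "save_conversation": False},
    timeout=120,
)
payload = resp.json()
# Expected keys: conversation_id, answer, sources, tool_calls, thought,
# plus structured/schema when structured output was produced.
print(payload["answer"])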

View File

@@ -0,0 +1,265 @@
import datetime
import json
import logging
from typing import Any, Dict, Generator, List, Optional
from flask import Response
from flask_restx import Namespace
from application.api.answer.services.conversation_service import ConversationService
from application.core.mongo_db import MongoDB
from application.core.settings import settings
from application.llm.llm_creator import LLMCreator
from application.utils import check_required_fields, get_gpt_model
logger = logging.getLogger(__name__)
answer_ns = Namespace("answer", description="Answer related operations", path="/")
class BaseAnswerResource:
"""Shared base class for answer endpoints"""
def __init__(self):
mongo = MongoDB.get_client()
db = mongo[settings.MONGO_DB_NAME]
self.user_logs_collection = db["user_logs"]
self.gpt_model = get_gpt_model()
self.conversation_service = ConversationService()
def validate_request(
self, data: Dict[str, Any], require_conversation_id: bool = False
) -> Optional[Response]:
"""Common request validation"""
required_fields = ["question"]
if require_conversation_id:
required_fields.append("conversation_id")
if missing_fields := check_required_fields(data, required_fields):
return missing_fields
return None
def complete_stream(
self,
question: str,
agent: Any,
retriever: Any,
conversation_id: Optional[str],
user_api_key: Optional[str],
decoded_token: Dict[str, Any],
isNoneDoc: bool = False,
index: Optional[int] = None,
should_save_conversation: bool = True,
attachment_ids: Optional[List[str]] = None,
agent_id: Optional[str] = None,
is_shared_usage: bool = False,
shared_token: Optional[str] = None,
) -> Generator[str, None, None]:
"""
Generator function that streams the complete conversation response.
Args:
question: The user's question
agent: The agent instance
retriever: The retriever instance
conversation_id: Existing conversation ID
user_api_key: User's API key if any
decoded_token: Decoded JWT token
isNoneDoc: Flag for document-less responses
index: Index of message to update
should_save_conversation: Whether to persist the conversation
attachment_ids: List of attachment IDs
agent_id: ID of agent used
is_shared_usage: Flag for shared agent usage
shared_token: Token for shared agent
Yields:
Server-sent event strings
"""
try:
response_full, thought, source_log_docs, tool_calls = "", "", [], []
is_structured = False
schema_info = None
structured_chunks = []
for line in agent.gen(query=question, retriever=retriever):
if "answer" in line:
response_full += str(line["answer"])
if line.get("structured"):
is_structured = True
schema_info = line.get("schema")
structured_chunks.append(line["answer"])
else:
data = json.dumps({"type": "answer", "answer": line["answer"]})
yield f"data: {data}\n\n"
elif "sources" in line:
truncated_sources = []
source_log_docs = line["sources"]
for source in line["sources"]:
truncated_source = source.copy()
if "text" in truncated_source:
truncated_source["text"] = (
truncated_source["text"][:100].strip() + "..."
)
truncated_sources.append(truncated_source)
if truncated_sources:
data = json.dumps(
{"type": "source", "source": truncated_sources}
)
yield f"data: {data}\n\n"
elif "tool_calls" in line:
tool_calls = line["tool_calls"]
data = json.dumps({"type": "tool_calls", "tool_calls": tool_calls})
yield f"data: {data}\n\n"
elif "thought" in line:
thought += line["thought"]
data = json.dumps({"type": "thought", "thought": line["thought"]})
yield f"data: {data}\n\n"
elif "type" in line:
data = json.dumps(line)
yield f"data: {data}\n\n"
if is_structured and structured_chunks:
structured_data = {
"type": "structured_answer",
"answer": response_full,
"structured": True,
"schema": schema_info,
}
data = json.dumps(structured_data)
yield f"data: {data}\n\n"
if isNoneDoc:
for doc in source_log_docs:
doc["source"] = "None"
llm = LLMCreator.create_llm(
settings.LLM_PROVIDER,
api_key=settings.API_KEY,
user_api_key=user_api_key,
decoded_token=decoded_token,
)
if should_save_conversation:
conversation_id = self.conversation_service.save_conversation(
conversation_id,
question,
response_full,
thought,
source_log_docs,
tool_calls,
llm,
self.gpt_model,
decoded_token,
index=index,
api_key=user_api_key,
agent_id=agent_id,
is_shared_usage=is_shared_usage,
shared_token=shared_token,
attachment_ids=attachment_ids,
)
else:
conversation_id = None
id_data = {"type": "id", "id": str(conversation_id)}
data = json.dumps(id_data)
yield f"data: {data}\n\n"
retriever_params = retriever.get_params()
log_data = {
"action": "stream_answer",
"level": "info",
"user": decoded_token.get("sub"),
"api_key": user_api_key,
"question": question,
"response": response_full,
"sources": source_log_docs,
"retriever_params": retriever_params,
"attachments": attachment_ids,
"timestamp": datetime.datetime.now(datetime.timezone.utc),
}
if is_structured:
log_data["structured_output"] = True
if schema_info:
log_data["schema"] = schema_info
            # truncate long text fields to at most 10000 characters before logging
for key, value in log_data.items():
if isinstance(value, str) and len(value) > 10000:
log_data[key] = value[:10000]
self.user_logs_collection.insert_one(log_data)
# End of stream
data = json.dumps({"type": "end"})
yield f"data: {data}\n\n"
except Exception as e:
logger.error(f"Error in stream: {str(e)}", exc_info=True)
data = json.dumps(
{
"type": "error",
"error": "Please try again later. We apologize for any inconvenience.",
}
)
yield f"data: {data}\n\n"
return
def process_response_stream(self, stream):
"""Process the stream response for non-streaming endpoint"""
conversation_id = ""
response_full = ""
source_log_docs = []
tool_calls = []
thought = ""
stream_ended = False
is_structured = False
schema_info = None
for line in stream:
try:
event_data = line.replace("data: ", "").strip()
event = json.loads(event_data)
if event["type"] == "id":
conversation_id = event["id"]
elif event["type"] == "answer":
response_full += event["answer"]
elif event["type"] == "structured_answer":
response_full = event["answer"]
is_structured = True
schema_info = event.get("schema")
elif event["type"] == "source":
source_log_docs = event["source"]
elif event["type"] == "tool_calls":
tool_calls = event["tool_calls"]
elif event["type"] == "thought":
thought = event["thought"]
elif event["type"] == "error":
logger.error(f"Error from stream: {event['error']}")
return None, None, None, None, event["error"]
elif event["type"] == "end":
stream_ended = True
except (json.JSONDecodeError, KeyError) as e:
logger.warning(f"Error parsing stream event: {e}, line: {line}")
continue
if not stream_ended:
logger.error("Stream ended unexpectedly without an 'end' event.")
return None, None, None, None, "Stream ended unexpectedly"
result = (
conversation_id,
response_full,
source_log_docs,
tool_calls,
thought,
None,
)
if is_structured:
result = result + ({"structured": True, "schema": schema_info},)
return result
def error_stream_generate(self, err_response):
data = json.dumps({"type": "error", "error": err_response})
yield f"data: {data}\n\n"

View File

@@ -0,0 +1,117 @@
import logging
import traceback
from flask import request, Response
from flask_restx import fields, Resource
from application.api import api
from application.api.answer.routes.base import answer_ns, BaseAnswerResource
from application.api.answer.services.stream_processor import StreamProcessor
logger = logging.getLogger(__name__)
@answer_ns.route("/stream")
class StreamResource(Resource, BaseAnswerResource):
def __init__(self, *args, **kwargs):
Resource.__init__(self, *args, **kwargs)
BaseAnswerResource.__init__(self)
stream_model = answer_ns.model(
"StreamModel",
{
"question": fields.String(
required=True, description="Question to be asked"
),
"history": fields.List(
fields.String,
required=False,
description="Conversation history (only for new conversations)",
),
"conversation_id": fields.String(
required=False,
description="Existing conversation ID (loads history)",
),
"prompt_id": fields.String(
required=False, default="default", description="Prompt ID"
),
"chunks": fields.Integer(
required=False, default=2, description="Number of chunks"
),
"token_limit": fields.Integer(required=False, description="Token limit"),
"retriever": fields.String(required=False, description="Retriever type"),
"api_key": fields.String(required=False, description="API key"),
"active_docs": fields.String(
required=False, description="Active documents"
),
"isNoneDoc": fields.Boolean(
required=False, description="Flag indicating if no document is used"
),
"index": fields.Integer(
required=False, description="Index of the query to update"
),
"save_conversation": fields.Boolean(
required=False,
default=True,
description="Whether to save the conversation",
),
"attachments": fields.List(
fields.String, required=False, description="List of attachment IDs"
),
},
)
@api.expect(stream_model)
@api.doc(description="Stream a response based on the question and retriever")
def post(self):
data = request.get_json()
if error := self.validate_request(data, "index" in data):
return error
decoded_token = getattr(request, "decoded_token", None)
processor = StreamProcessor(data, decoded_token)
try:
processor.initialize()
agent = processor.create_agent()
retriever = processor.create_retriever()
return Response(
self.complete_stream(
question=data["question"],
agent=agent,
retriever=retriever,
conversation_id=processor.conversation_id,
user_api_key=processor.agent_config.get("user_api_key"),
decoded_token=processor.decoded_token,
isNoneDoc=data.get("isNoneDoc"),
index=data.get("index"),
should_save_conversation=data.get("save_conversation", True),
attachment_ids=data.get("attachments", []),
agent_id=data.get("agent_id"),
is_shared_usage=processor.is_shared_usage,
shared_token=processor.shared_token,
),
mimetype="text/event-stream",
)
except ValueError as e:
message = "Malformed request body"
logger.error(
f"/stream - error: {message} - specific error: {str(e)} - traceback: {traceback.format_exc()}",
extra={"error": str(e), "traceback": traceback.format_exc()},
)
return Response(
self.error_stream_generate(message),
status=400,
mimetype="text/event-stream",
)
except Exception as e:
logger.error(
f"/stream - error: {str(e)} - traceback: {traceback.format_exc()}",
extra={"error": str(e), "traceback": traceback.format_exc()},
)
return Response(
self.error_stream_generate("Unknown error occurred"),
status=400,
mimetype="text/event-stream",
)
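
An example request body for the in-place update path; when index is present, validate_request also requires conversation_id. The values below are illustrative:

payload = {
    "question": "Answer that again, but shorter",
    "conversation_id": "665f1c2ab1e2d3c4f5a6b7c8",  # illustrative ObjectId
    "index": 2,  # rewrite queries[2]; later queries are truncated on save
    "save_conversation": True,
}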

View File

@@ -0,0 +1,180 @@
import logging
from datetime import datetime, timezone
from typing import Any, Dict, List, Optional
from application.core.mongo_db import MongoDB
from application.core.settings import settings
from bson import ObjectId
logger = logging.getLogger(__name__)
class ConversationService:
def __init__(self):
mongo = MongoDB.get_client()
db = mongo[settings.MONGO_DB_NAME]
self.conversations_collection = db["conversations"]
self.agents_collection = db["agents"]
def get_conversation(
self, conversation_id: str, user_id: str
) -> Optional[Dict[str, Any]]:
"""Retrieve a conversation with proper access control"""
if not conversation_id or not user_id:
return None
try:
conversation = self.conversations_collection.find_one(
{
"_id": ObjectId(conversation_id),
"$or": [{"user": user_id}, {"shared_with": user_id}],
}
)
if not conversation:
logger.warning(
f"Conversation not found or unauthorized - ID: {conversation_id}, User: {user_id}"
)
return None
conversation["_id"] = str(conversation["_id"])
return conversation
except Exception as e:
logger.error(f"Error fetching conversation: {str(e)}", exc_info=True)
return None
def save_conversation(
self,
conversation_id: Optional[str],
question: str,
response: str,
thought: str,
sources: List[Dict[str, Any]],
tool_calls: List[Dict[str, Any]],
llm: Any,
gpt_model: str,
decoded_token: Dict[str, Any],
index: Optional[int] = None,
api_key: Optional[str] = None,
agent_id: Optional[str] = None,
is_shared_usage: bool = False,
shared_token: Optional[str] = None,
attachment_ids: Optional[List[str]] = None,
) -> str:
"""Save or update a conversation in the database"""
user_id = decoded_token.get("sub")
if not user_id:
raise ValueError("User ID not found in token")
current_time = datetime.now(timezone.utc)
        # trim each source's text to at most 1000 characters before saving
for source in sources:
if "text" in source and isinstance(source["text"], str):
source["text"] = source["text"][:1000]
if conversation_id is not None and index is not None:
# Update existing conversation with new query
result = self.conversations_collection.update_one(
{
"_id": ObjectId(conversation_id),
"user": user_id,
f"queries.{index}": {"$exists": True},
},
{
"$set": {
f"queries.{index}.prompt": question,
f"queries.{index}.response": response,
f"queries.{index}.thought": thought,
f"queries.{index}.sources": sources,
f"queries.{index}.tool_calls": tool_calls,
f"queries.{index}.timestamp": current_time,
f"queries.{index}.attachments": attachment_ids,
}
},
)
if result.matched_count == 0:
raise ValueError("Conversation not found or unauthorized")
self.conversations_collection.update_one(
{
"_id": ObjectId(conversation_id),
"user": user_id,
f"queries.{index}": {"$exists": True},
},
{"$push": {"queries": {"$each": [], "$slice": index + 1}}},
)
return conversation_id
elif conversation_id:
# Append new message to existing conversation
result = self.conversations_collection.update_one(
{"_id": ObjectId(conversation_id), "user": user_id},
{
"$push": {
"queries": {
"prompt": question,
"response": response,
"thought": thought,
"sources": sources,
"tool_calls": tool_calls,
"timestamp": current_time,
"attachments": attachment_ids,
}
}
},
)
if result.matched_count == 0:
raise ValueError("Conversation not found or unauthorized")
return conversation_id
else:
# Create new conversation
messages_summary = [
{
"role": "assistant",
"content": "Summarise following conversation in no more than 3 "
"words, respond ONLY with the summary, use the same "
"language as the user query",
},
{
"role": "user",
"content": "Summarise following conversation in no more than 3 words, "
"respond ONLY with the summary, use the same language as the "
"user query \n\nUser: " + question + "\n\n" + "AI: " + response,
},
]
completion = llm.gen(
model=gpt_model, messages=messages_summary, max_tokens=30
)
conversation_data = {
"user": user_id,
"date": current_time,
"name": completion,
"queries": [
{
"prompt": question,
"response": response,
"thought": thought,
"sources": sources,
"tool_calls": tool_calls,
"timestamp": current_time,
"attachments": attachment_ids,
}
],
}
if api_key:
if agent_id:
conversation_data["agent_id"] = agent_id
if is_shared_usage:
conversation_data["is_shared_usage"] = is_shared_usage
conversation_data["shared_token"] = shared_token
agent = self.agents_collection.find_one({"key": api_key})
if agent:
conversation_data["api_key"] = agent["key"]
result = self.conversations_collection.insert_one(conversation_data)
return str(result.inserted_id)
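A minimal usage sketch (not part of the diff) for the service above, assuming an llm wrapper whose gen(model=..., messages=..., max_tokens=...) returns the short string used as the conversation name:

service = ConversationService()
new_id = service.save_conversation(
    conversation_id=None,            # None and no index -> create a new conversation
    question="How do I deploy DocsGPT?",
    response="Use docker compose up ...",
    thought="",
    sources=[{"title": "README", "text": "very long source text ..."}],  # text is trimmed to 1,000 chars
    tool_calls=[],
    llm=llm,                         # assumed LLM wrapper (see lead-in)
    gpt_model="gpt-4o-mini",
    decoded_token={"sub": "local"},
    attachment_ids=[],
)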

View File

@@ -0,0 +1,353 @@
import datetime
import json
import logging
import os
from pathlib import Path
from typing import Any, Dict, Optional
from bson.dbref import DBRef
from bson.objectid import ObjectId
from application.agents.agent_creator import AgentCreator
from application.api.answer.services.conversation_service import ConversationService
from application.core.mongo_db import MongoDB
from application.core.settings import settings
from application.retriever.retriever_creator import RetrieverCreator
from application.utils import get_gpt_model, limit_chat_history
logger = logging.getLogger(__name__)
def get_prompt(prompt_id: str, prompts_collection=None) -> str:
"""
Get a prompt by preset name or MongoDB ID
"""
current_dir = Path(__file__).resolve().parents[3]
prompts_dir = current_dir / "prompts"
preset_mapping = {
"default": "chat_combine_default.txt",
"creative": "chat_combine_creative.txt",
"strict": "chat_combine_strict.txt",
"reduce": "chat_reduce_prompt.txt",
}
if prompt_id in preset_mapping:
file_path = os.path.join(prompts_dir, preset_mapping[prompt_id])
try:
with open(file_path, "r") as f:
return f.read()
except FileNotFoundError:
raise FileNotFoundError(f"Prompt file not found: {file_path}")
try:
if prompts_collection is None:
mongo = MongoDB.get_client()
db = mongo[settings.MONGO_DB_NAME]
prompts_collection = db["prompts"]
prompt_doc = prompts_collection.find_one({"_id": ObjectId(prompt_id)})
if not prompt_doc:
raise ValueError(f"Prompt with ID {prompt_id} not found")
return prompt_doc["content"]
except Exception as e:
raise ValueError(f"Invalid prompt ID: {prompt_id}") from e
class StreamProcessor:
def __init__(
self, request_data: Dict[str, Any], decoded_token: Optional[Dict[str, Any]]
):
mongo = MongoDB.get_client()
self.db = mongo[settings.MONGO_DB_NAME]
self.agents_collection = self.db["agents"]
self.attachments_collection = self.db["attachments"]
self.prompts_collection = self.db["prompts"]
self.data = request_data
self.decoded_token = decoded_token
self.initial_user_id = (
self.decoded_token.get("sub") if self.decoded_token is not None else None
)
self.conversation_id = self.data.get("conversation_id")
self.source = {}
self.all_sources = []
self.attachments = []
self.history = []
self.agent_config = {}
self.retriever_config = {}
self.is_shared_usage = False
self.shared_token = None
self.gpt_model = get_gpt_model()
self.conversation_service = ConversationService()
def initialize(self):
"""Initialize all required components for processing"""
self._configure_agent()
self._configure_source()
self._configure_retriever()
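# _configure_agent() is invoked again below, apparently so that agent-level retriever
# overrides (retriever_name, chunks) win after _configure_retriever() rebuilds
# retriever_config from the request data.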
self._configure_agent()
self._load_conversation_history()
self._process_attachments()
def _load_conversation_history(self):
"""Load conversation history either from DB or request"""
if self.conversation_id and self.initial_user_id:
conversation = self.conversation_service.get_conversation(
self.conversation_id, self.initial_user_id
)
if not conversation:
raise ValueError("Conversation not found or unauthorized")
self.history = [
{"prompt": query["prompt"], "response": query["response"]}
for query in conversation.get("queries", [])
]
else:
self.history = limit_chat_history(
json.loads(self.data.get("history", "[]")), gpt_model=self.gpt_model
)
def _process_attachments(self):
"""Process any attachments in the request"""
attachment_ids = self.data.get("attachments", [])
self.attachments = self._get_attachments_content(
attachment_ids, self.initial_user_id
)
def _get_attachments_content(self, attachment_ids, user_id):
"""
Retrieve content from attachment documents based on their IDs.
"""
if not attachment_ids:
return []
attachments = []
for attachment_id in attachment_ids:
try:
attachment_doc = self.attachments_collection.find_one(
{"_id": ObjectId(attachment_id), "user": user_id}
)
if attachment_doc:
attachments.append(attachment_doc)
except Exception as e:
logger.error(
f"Error retrieving attachment {attachment_id}: {e}", exc_info=True
)
return attachments
def _get_agent_key(self, agent_id: Optional[str], user_id: Optional[str]) -> tuple:
"""Get API key for agent with access control"""
if not agent_id:
return None, False, None
try:
agent = self.agents_collection.find_one({"_id": ObjectId(agent_id)})
if agent is None:
raise Exception("Agent not found")
is_owner = agent.get("user") == user_id
is_shared_with_user = agent.get(
"shared_publicly", False
) or user_id in agent.get("shared_with", [])
if not (is_owner or is_shared_with_user):
raise Exception("Unauthorized access to the agent")
if is_owner:
self.agents_collection.update_one(
{"_id": ObjectId(agent_id)},
{
"$set": {
"lastUsedAt": datetime.datetime.now(datetime.timezone.utc)
}
},
)
return str(agent["key"]), not is_owner, agent.get("shared_token")
except Exception as e:
logger.error(f"Error in get_agent_key: {str(e)}", exc_info=True)
raise
def _get_data_from_api_key(self, api_key: str) -> Dict[str, Any]:
data = self.agents_collection.find_one({"key": api_key})
if not data:
raise Exception("Invalid API Key, please generate a new key", 401)
source = data.get("source")
if isinstance(source, DBRef):
source_doc = self.db.dereference(source)
if source_doc:
data["source"] = str(source_doc["_id"])
data["retriever"] = source_doc.get("retriever", data.get("retriever"))
data["chunks"] = source_doc.get("chunks", data.get("chunks"))
else:
data["source"] = None
elif source == "default":
data["source"] = "default"
else:
data["source"] = None
# Handle multiple sources
sources = data.get("sources", [])
if sources and isinstance(sources, list):
sources_list = []
for i, source_ref in enumerate(sources):
if source_ref == "default":
processed_source = {
"id": "default",
"retriever": "classic",
"chunks": data.get("chunks", "2"),
}
sources_list.append(processed_source)
elif isinstance(source_ref, DBRef):
source_doc = self.db.dereference(source_ref)
if source_doc:
processed_source = {
"id": str(source_doc["_id"]),
"retriever": source_doc.get("retriever", "classic"),
"chunks": source_doc.get("chunks", data.get("chunks", "2")),
}
sources_list.append(processed_source)
data["sources"] = sources_list
else:
data["sources"] = []
return data
def _configure_source(self):
"""Configure the source based on agent data"""
api_key = self.data.get("api_key") or self.agent_key
if api_key:
agent_data = self._get_data_from_api_key(api_key)
if agent_data.get("sources") and len(agent_data["sources"]) > 0:
source_ids = [
source["id"] for source in agent_data["sources"] if source.get("id")
]
if source_ids:
self.source = {"active_docs": source_ids}
else:
self.source = {}
self.all_sources = agent_data["sources"]
elif agent_data.get("source"):
self.source = {"active_docs": agent_data["source"]}
self.all_sources = [
{
"id": agent_data["source"],
"retriever": agent_data.get("retriever", "classic"),
}
]
else:
self.source = {}
self.all_sources = []
return
if "active_docs" in self.data:
self.source = {"active_docs": self.data["active_docs"]}
return
self.source = {}
self.all_sources = []
def _configure_agent(self):
"""Configure the agent based on request data"""
agent_id = self.data.get("agent_id")
self.agent_key, self.is_shared_usage, self.shared_token = self._get_agent_key(
agent_id, self.initial_user_id
)
api_key = self.data.get("api_key")
if api_key:
data_key = self._get_data_from_api_key(api_key)
self.agent_config.update(
{
"prompt_id": data_key.get("prompt_id", "default"),
"agent_type": data_key.get("agent_type", settings.AGENT_NAME),
"user_api_key": api_key,
"json_schema": data_key.get("json_schema"),
}
)
self.initial_user_id = data_key.get("user")
self.decoded_token = {"sub": data_key.get("user")}
if data_key.get("source"):
self.source = {"active_docs": data_key["source"]}
if data_key.get("retriever"):
self.retriever_config["retriever_name"] = data_key["retriever"]
if data_key.get("chunks") is not None:
try:
self.retriever_config["chunks"] = int(data_key["chunks"])
except (ValueError, TypeError):
logger.warning(
f"Invalid chunks value: {data_key['chunks']}, using default value 2"
)
self.retriever_config["chunks"] = 2
elif self.agent_key:
data_key = self._get_data_from_api_key(self.agent_key)
self.agent_config.update(
{
"prompt_id": data_key.get("prompt_id", "default"),
"agent_type": data_key.get("agent_type", settings.AGENT_NAME),
"user_api_key": self.agent_key,
"json_schema": data_key.get("json_schema"),
}
)
self.decoded_token = (
self.decoded_token
if self.is_shared_usage
else {"sub": data_key.get("user")}
)
if data_key.get("source"):
self.source = {"active_docs": data_key["source"]}
if data_key.get("retriever"):
self.retriever_config["retriever_name"] = data_key["retriever"]
if data_key.get("chunks") is not None:
try:
self.retriever_config["chunks"] = int(data_key["chunks"])
except (ValueError, TypeError):
logger.warning(
f"Invalid chunks value: {data_key['chunks']}, using default value 2"
)
self.retriever_config["chunks"] = 2
else:
self.agent_config.update(
{
"prompt_id": self.data.get("prompt_id", "default"),
"agent_type": settings.AGENT_NAME,
"user_api_key": None,
"json_schema": None,
}
)
def _configure_retriever(self):
"""Configure the retriever based on request data"""
self.retriever_config = {
"retriever_name": self.data.get("retriever", "classic"),
"chunks": int(self.data.get("chunks", 2)),
"token_limit": self.data.get("token_limit", settings.DEFAULT_MAX_HISTORY),
}
api_key = self.data.get("api_key") or self.agent_key
if not api_key and "isNoneDoc" in self.data and self.data["isNoneDoc"]:
self.retriever_config["chunks"] = 0
def create_agent(self):
"""Create and return the configured agent"""
return AgentCreator.create_agent(
self.agent_config["agent_type"],
endpoint="stream",
llm_name=settings.LLM_PROVIDER,
gpt_model=self.gpt_model,
api_key=settings.API_KEY,
user_api_key=self.agent_config["user_api_key"],
prompt=get_prompt(self.agent_config["prompt_id"], self.prompts_collection),
chat_history=self.history,
decoded_token=self.decoded_token,
attachments=self.attachments,
json_schema=self.agent_config.get("json_schema"),
)
def create_retriever(self):
"""Create and return the configured retriever"""
return RetrieverCreator.create_retriever(
self.retriever_config["retriever_name"],
source=self.source,
chat_history=self.history,
prompt=get_prompt(self.agent_config["prompt_id"], self.prompts_collection),
chunks=self.retriever_config["chunks"],
token_limit=self.retriever_config["token_limit"],
gpt_model=self.gpt_model,
user_api_key=self.agent_config["user_api_key"],
decoded_token=self.decoded_token,
)
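A minimal wiring sketch (not part of the diff) for StreamProcessor inside a stream route; the payload keys and values are illustrative and decoded_token is assumed to come from the auth middleware:

processor = StreamProcessor(
    request_data={"question": "What is DocsGPT?", "history": "[]", "chunks": 2},
    decoded_token={"sub": "local"},
)
processor.initialize()                # agent, source, retriever, history, attachments
agent = processor.create_agent()
retriever = processor.create_retriever()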

View File

@@ -0,0 +1,695 @@
import base64
import datetime
import json
import uuid
from bson.objectid import ObjectId
from flask import (
Blueprint,
current_app,
jsonify,
make_response,
request
)
from flask_restx import fields, Namespace, Resource
from application.api.user.tasks import (
ingest_connector_task,
)
from application.core.mongo_db import MongoDB
from application.core.settings import settings
from application.api import api
from application.utils import (
check_required_fields
)
from application.parser.connectors.connector_creator import ConnectorCreator
mongo = MongoDB.get_client()
db = mongo[settings.MONGO_DB_NAME]
sources_collection = db["sources"]
sessions_collection = db["connector_sessions"]
connector = Blueprint("connector", __name__)
connectors_ns = Namespace("connectors", description="Connector operations", path="/")
api.add_namespace(connectors_ns)
@connectors_ns.route("/api/connectors/upload")
class UploadConnector(Resource):
@api.expect(
api.model(
"ConnectorUploadModel",
{
"user": fields.String(required=True, description="User ID"),
"source": fields.String(
required=True, description="Source type (google_drive, github, etc.)"
),
"name": fields.String(required=True, description="Job name"),
"data": fields.String(required=True, description="Configuration data"),
"repo_url": fields.String(description="GitHub repository URL"),
},
)
)
@api.doc(
description="Uploads connector source for vectorization",
)
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
data = request.form
required_fields = ["user", "source", "name", "data"]
missing_fields = check_required_fields(data, required_fields)
if missing_fields:
return missing_fields
try:
config = json.loads(data["data"])
source_data = None
sync_frequency = config.get("sync_frequency", "never")
if data["source"] == "github":
source_data = config.get("repo_url")
elif data["source"] in ["crawler", "url"]:
source_data = config.get("url")
elif data["source"] == "reddit":
source_data = config
elif data["source"] in ConnectorCreator.get_supported_connectors():
session_token = config.get("session_token")
if not session_token:
return make_response(jsonify({
"success": False,
"error": f"Missing session_token in {data['source']} configuration"
}), 400)
file_ids = config.get("file_ids", [])
if isinstance(file_ids, str):
file_ids = [id.strip() for id in file_ids.split(',') if id.strip()]
elif not isinstance(file_ids, list):
file_ids = []
folder_ids = config.get("folder_ids", [])
if isinstance(folder_ids, str):
folder_ids = [id.strip() for id in folder_ids.split(',') if id.strip()]
elif not isinstance(folder_ids, list):
folder_ids = []
config["file_ids"] = file_ids
config["folder_ids"] = folder_ids
task = ingest_connector_task.delay(
job_name=data["name"],
user=decoded_token.get("sub"),
source_type=data["source"],
session_token=session_token,
file_ids=file_ids,
folder_ids=folder_ids,
recursive=config.get("recursive", False),
retriever=config.get("retriever", "classic"),
sync_frequency=sync_frequency
)
return make_response(jsonify({"success": True, "task_id": task.id}), 200)
task = ingest_connector_task.delay(
source_data=source_data,
job_name=data["name"],
user=decoded_token.get("sub"),
loader=data["source"],
sync_frequency=sync_frequency
)
except Exception as err:
current_app.logger.error(
f"Error uploading connector source: {err}", exc_info=True
)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"success": True, "task_id": task.id}), 200)
@connectors_ns.route("/api/connectors/task_status")
class ConnectorTaskStatus(Resource):
task_status_model = api.model(
"ConnectorTaskStatusModel",
{"task_id": fields.String(required=True, description="Task ID")},
)
@api.expect(task_status_model)
@api.doc(description="Get connector task status")
def get(self):
task_id = request.args.get("task_id")
if not task_id:
return make_response(
jsonify({"success": False, "message": "Task ID is required"}), 400
)
try:
from application.celery_init import celery
task = celery.AsyncResult(task_id)
task_meta = task.info
print(f"Task status: {task.status}")
if not isinstance(
task_meta, (dict, list, str, int, float, bool, type(None))
):
task_meta = str(task_meta)
except Exception as err:
current_app.logger.error(f"Error getting task status: {err}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"status": task.status, "result": task_meta}), 200)
@connectors_ns.route("/api/connectors/sources")
class ConnectorSources(Resource):
@api.doc(description="Get connector sources")
def get(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
user = decoded_token.get("sub")
try:
sources = sources_collection.find({"user": user, "type": "connector:file"}).sort("date", -1)
connector_sources = []
for source in sources:
connector_sources.append({
"id": str(source["_id"]),
"name": source.get("name"),
"date": source.get("date"),
"type": source.get("type"),
"source": source.get("source"),
"tokens": source.get("tokens", ""),
"retriever": source.get("retriever", "classic"),
"syncFrequency": source.get("sync_frequency", ""),
})
except Exception as err:
current_app.logger.error(f"Error retrieving connector sources: {err}", exc_info=True)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify(connector_sources), 200)
@connectors_ns.route("/api/connectors/delete")
class DeleteConnectorSource(Resource):
@api.doc(
description="Delete a connector source",
params={"source_id": "The source ID to delete"},
)
def delete(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
source_id = request.args.get("source_id")
if not source_id:
return make_response(
jsonify({"success": False, "message": "source_id is required"}), 400
)
try:
result = sources_collection.delete_one(
{"_id": ObjectId(source_id), "user": decoded_token.get("sub")}
)
if result.deleted_count == 0:
return make_response(
jsonify({"success": False, "message": "Source not found"}), 404
)
except Exception as err:
current_app.logger.error(
f"Error deleting connector source: {err}", exc_info=True
)
return make_response(jsonify({"success": False}), 400)
return make_response(jsonify({"success": True}), 200)
@connectors_ns.route("/api/connectors/auth")
class ConnectorAuth(Resource):
@api.doc(description="Get connector OAuth authorization URL", params={"provider": "Connector provider (e.g., google_drive)"})
def get(self):
try:
provider = request.args.get('provider') or request.args.get('source')
if not provider:
return make_response(jsonify({"success": False, "error": "Missing provider"}), 400)
if not ConnectorCreator.is_supported(provider):
return make_response(jsonify({"success": False, "error": f"Unsupported provider: {provider}"}), 400)
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False, "error": "Unauthorized"}), 401)
user_id = decoded_token.get('sub')
now = datetime.datetime.now(datetime.timezone.utc)
result = sessions_collection.insert_one({
"provider": provider,
"user": user_id,
"status": "pending",
"created_at": now
})
state_dict = {
"provider": provider,
"object_id": str(result.inserted_id)
}
state = base64.urlsafe_b64encode(json.dumps(state_dict).encode()).decode()
auth = ConnectorCreator.create_auth(provider)
authorization_url = auth.get_authorization_url(state=state)
return make_response(jsonify({
"success": True,
"authorization_url": authorization_url,
"state": state
}), 200)
except Exception as e:
current_app.logger.error(f"Error generating connector auth URL: {e}")
return make_response(jsonify({"success": False, "error": str(e)}), 500)
@connectors_ns.route("/api/connectors/callback")
class ConnectorsCallback(Resource):
@api.doc(description="Handle OAuth callback for external connectors")
def get(self):
"""Handle OAuth callback for external connectors"""
try:
from application.parser.connectors.connector_creator import ConnectorCreator
from flask import request, redirect
authorization_code = request.args.get('code')
state = request.args.get('state')
error = request.args.get('error')
state_dict = json.loads(base64.urlsafe_b64decode(state.encode()).decode())
provider = state_dict["provider"]
state_object_id = state_dict["object_id"]
if error:
if error == "access_denied":
return redirect(f"/api/connectors/callback-status?status=cancelled&message=Authentication+was+cancelled.+You+can+try+again+if+you'd+like+to+connect+your+account.&provider={provider}")
else:
current_app.logger.warning(f"OAuth error in callback: {error}")
return redirect(f"/api/connectors/callback-status?status=error&message=Authentication+failed.+Please+try+again+and+make+sure+to+grant+all+requested+permissions.&provider={provider}")
if not authorization_code:
return redirect(f"/api/connectors/callback-status?status=error&message=Authentication+failed.+Please+try+again+and+make+sure+to+grant+all+requested+permissions.&provider={provider}")
try:
auth = ConnectorCreator.create_auth(provider)
token_info = auth.exchange_code_for_tokens(authorization_code)
session_token = str(uuid.uuid4())
try:
credentials = auth.create_credentials_from_token_info(token_info)
service = auth.build_drive_service(credentials)
user_info = service.about().get(fields="user").execute()
user_email = user_info.get('user', {}).get('emailAddress', 'Connected User')
except Exception as e:
current_app.logger.warning(f"Could not get user info: {e}")
user_email = 'Connected User'
sanitized_token_info = {
"access_token": token_info.get("access_token"),
"refresh_token": token_info.get("refresh_token"),
"token_uri": token_info.get("token_uri"),
"expiry": token_info.get("expiry")
}
sessions_collection.find_one_and_update(
{"_id": ObjectId(state_object_id), "provider": provider},
{
"$set": {
"session_token": session_token,
"token_info": sanitized_token_info,
"user_email": user_email,
"status": "authorized"
}
}
)
# Redirect to success page with session token and user email
return redirect(f"/api/connectors/callback-status?status=success&message=Authentication+successful&provider={provider}&session_token={session_token}&user_email={user_email}")
except Exception as e:
current_app.logger.error(f"Error exchanging code for tokens: {str(e)}", exc_info=True)
return redirect(f"/api/connectors/callback-status?status=error&message=Authentication+failed.+Please+try+again+and+make+sure+to+grant+all+requested+permissions.&provider={provider}")
except Exception as e:
current_app.logger.error(f"Error handling connector callback: {e}")
return redirect("/api/connectors/callback-status?status=error&message=Authentication+failed.+Please+try+again+and+make+sure+to+grant+all+requested+permissions.")
@connectors_ns.route("/api/connectors/refresh")
class ConnectorRefresh(Resource):
@api.expect(api.model("ConnectorRefreshModel", {"provider": fields.String(required=True), "refresh_token": fields.String(required=True)}))
@api.doc(description="Refresh connector access token")
def post(self):
try:
data = request.get_json()
provider = data.get('provider')
refresh_token = data.get('refresh_token')
if not provider or not refresh_token:
return make_response(jsonify({"success": False, "error": "provider and refresh_token are required"}), 400)
auth = ConnectorCreator.create_auth(provider)
token_info = auth.refresh_access_token(refresh_token)
return make_response(jsonify({"success": True, "token_info": token_info}), 200)
except Exception as e:
current_app.logger.error(f"Error refreshing token for connector: {e}")
return make_response(jsonify({"success": False, "error": str(e)}), 500)
@connectors_ns.route("/api/connectors/files")
class ConnectorFiles(Resource):
@api.expect(api.model("ConnectorFilesModel", {
"provider": fields.String(required=True),
"session_token": fields.String(required=True),
"folder_id": fields.String(required=False),
"limit": fields.Integer(required=False),
"page_token": fields.String(required=False),
"search_query": fields.String(required=False)
}))
@api.doc(description="List files from a connector provider (supports pagination and search)")
def post(self):
try:
data = request.get_json()
provider = data.get('provider')
session_token = data.get('session_token')
folder_id = data.get('folder_id')
limit = data.get('limit', 10)
page_token = data.get('page_token')
search_query = data.get('search_query')
if not provider or not session_token:
return make_response(jsonify({"success": False, "error": "provider and session_token are required"}), 400)
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False, "error": "Unauthorized"}), 401)
user = decoded_token.get('sub')
session = sessions_collection.find_one({"session_token": session_token, "user": user})
if not session:
return make_response(jsonify({"success": False, "error": "Invalid or unauthorized session"}), 401)
loader = ConnectorCreator.create_connector(provider, session_token)
input_config = {
'limit': limit,
'list_only': True,
'session_token': session_token,
'folder_id': folder_id,
'page_token': page_token
}
if search_query:
input_config['search_query'] = search_query
documents = loader.load_data(input_config)
files = []
for doc in documents[:limit]:
metadata = doc.extra_info
modified_time = metadata.get('modified_time')
if modified_time:
date_part = modified_time.split('T')[0]
time_part = modified_time.split('T')[1].split('.')[0].split('Z')[0]
formatted_time = f"{date_part} {time_part}"
else:
formatted_time = None
files.append({
'id': doc.doc_id,
'name': metadata.get('file_name', 'Unknown File'),
'type': metadata.get('mime_type', 'unknown'),
'size': metadata.get('size', None),
'modifiedTime': formatted_time,
'isFolder': metadata.get('is_folder', False)
})
next_token = getattr(loader, 'next_page_token', None)
has_more = bool(next_token)
return make_response(jsonify({
"success": True,
"files": files,
"total": len(files),
"next_page_token": next_token,
"has_more": has_more
}), 200)
except Exception as e:
current_app.logger.error(f"Error loading connector files: {e}")
return make_response(jsonify({"success": False, "error": f"Failed to load files: {str(e)}"}), 500)
@connectors_ns.route("/api/connectors/validate-session")
class ConnectorValidateSession(Resource):
@api.expect(api.model("ConnectorValidateSessionModel", {"provider": fields.String(required=True), "session_token": fields.String(required=True)}))
@api.doc(description="Validate connector session token and return user info and access token")
def post(self):
try:
data = request.get_json()
provider = data.get('provider')
session_token = data.get('session_token')
if not provider or not session_token:
return make_response(jsonify({"success": False, "error": "provider and session_token are required"}), 400)
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False, "error": "Unauthorized"}), 401)
user = decoded_token.get('sub')
session = sessions_collection.find_one({"session_token": session_token, "user": user})
if not session or "token_info" not in session:
return make_response(jsonify({"success": False, "error": "Invalid or expired session"}), 401)
token_info = session["token_info"]
auth = ConnectorCreator.create_auth(provider)
is_expired = auth.is_token_expired(token_info)
if is_expired and token_info.get('refresh_token'):
try:
refreshed_token_info = auth.refresh_access_token(token_info.get('refresh_token'))
sanitized_token_info = {
"access_token": refreshed_token_info.get("access_token"),
"refresh_token": refreshed_token_info.get("refresh_token"),
"token_uri": refreshed_token_info.get("token_uri"),
"expiry": refreshed_token_info.get("expiry")
}
sessions_collection.update_one(
{"session_token": session_token},
{"$set": {"token_info": sanitized_token_info}}
)
token_info = sanitized_token_info
is_expired = False
except Exception as refresh_error:
current_app.logger.error(f"Failed to refresh token: {refresh_error}")
if is_expired:
return make_response(jsonify({
"success": False,
"expired": True,
"error": "Session token has expired. Please reconnect."
}), 401)
return make_response(jsonify({
"success": True,
"expired": False,
"user_email": session.get('user_email', 'Connected User'),
"access_token": token_info.get('access_token')
}), 200)
except Exception as e:
current_app.logger.error(f"Error validating connector session: {e}")
return make_response(jsonify({"success": False, "error": str(e)}), 500)
@connectors_ns.route("/api/connectors/disconnect")
class ConnectorDisconnect(Resource):
@api.expect(api.model("ConnectorDisconnectModel", {"provider": fields.String(required=True), "session_token": fields.String(required=False)}))
@api.doc(description="Disconnect a connector session")
def post(self):
try:
data = request.get_json()
provider = data.get('provider')
session_token = data.get('session_token')
if not provider:
return make_response(jsonify({"success": False, "error": "provider is required"}), 400)
if session_token:
sessions_collection.delete_one({"session_token": session_token})
return make_response(jsonify({"success": True}), 200)
except Exception as e:
current_app.logger.error(f"Error disconnecting connector session: {e}")
return make_response(jsonify({"success": False, "error": str(e)}), 500)
@connectors_ns.route("/api/connectors/sync")
class ConnectorSync(Resource):
@api.expect(
api.model(
"ConnectorSyncModel",
{
"source_id": fields.String(required=True, description="Source ID to sync"),
"session_token": fields.String(required=True, description="Authentication token")
},
)
)
@api.doc(description="Sync connector source to check for modifications")
def post(self):
decoded_token = request.decoded_token
if not decoded_token:
return make_response(jsonify({"success": False}), 401)
try:
data = request.get_json()
source_id = data.get('source_id')
session_token = data.get('session_token')
if not all([source_id, session_token]):
return make_response(
jsonify({
"success": False,
"error": "source_id and session_token are required"
}),
400
)
source = sources_collection.find_one({"_id": ObjectId(source_id)})
if not source:
return make_response(
jsonify({
"success": False,
"error": "Source not found"
}),
404
)
if source.get('user') != decoded_token.get('sub'):
return make_response(
jsonify({
"success": False,
"error": "Unauthorized access to source"
}),
403
)
remote_data = {}
try:
if source.get('remote_data'):
remote_data = json.loads(source.get('remote_data'))
except json.JSONDecodeError:
current_app.logger.error(f"Invalid remote_data format for source {source_id}")
remote_data = {}
source_type = remote_data.get('provider')
if not source_type:
return make_response(
jsonify({
"success": False,
"error": "Source provider not found in remote_data"
}),
400
)
# Extract configuration from remote_data
file_ids = remote_data.get('file_ids', [])
folder_ids = remote_data.get('folder_ids', [])
recursive = remote_data.get('recursive', True)
# Start the sync task
task = ingest_connector_task.delay(
job_name=source.get('name'),
user=decoded_token.get('sub'),
source_type=source_type,
session_token=session_token,
file_ids=file_ids,
folder_ids=folder_ids,
recursive=recursive,
retriever=source.get('retriever', 'classic'),
operation_mode="sync",
doc_id=source_id,
sync_frequency=source.get('sync_frequency', 'never')
)
return make_response(
jsonify({
"success": True,
"task_id": task.id
}),
200
)
except Exception as err:
current_app.logger.error(
f"Error syncing connector source: {err}",
exc_info=True
)
return make_response(
jsonify({
"success": False,
"error": str(err)
}),
400
)
@connectors_ns.route("/api/connectors/callback-status")
class ConnectorCallbackStatus(Resource):
@api.doc(description="Return HTML page with connector authentication status")
def get(self):
"""Return HTML page with connector authentication status"""
try:
status = request.args.get('status', 'error')
message = request.args.get('message', '')
provider = request.args.get('provider', 'connector')
session_token = request.args.get('session_token', '')
user_email = request.args.get('user_email', '')
html_content = f"""
<!DOCTYPE html>
<html>
<head>
<title>{provider.replace('_', ' ').title()} Authentication</title>
<style>
body {{ font-family: Arial, sans-serif; text-align: center; padding: 40px; }}
.container {{ max-width: 600px; margin: 0 auto; }}
.success {{ color: #4CAF50; }}
.error {{ color: #F44336; }}
.cancelled {{ color: #FF9800; }}
</style>
<script>
window.onload = function() {{
const status = "{status}";
const sessionToken = "{session_token}";
const userEmail = "{user_email}";
if (status === "success" && window.opener) {{
window.opener.postMessage({{
type: '{provider}_auth_success',
session_token: sessionToken,
user_email: userEmail
}}, '*');
setTimeout(() => window.close(), 3000);
}} else if (status === "cancelled" || status === "error") {{
setTimeout(() => window.close(), 3000);
}}
}};
</script>
</head>
<body>
<div class="container">
<h2>{provider.replace('_', ' ').title()} Authentication</h2>
<div class="{status}">
<p>{message}</p>
{f'<p>Connected as: {user_email}</p>' if status == 'success' else ''}
</div>
<p><small>You can close this window. {f"Your {provider.replace('_', ' ').title()} is now connected and ready to use." if status == 'success' else "Feel free to close this window."}</small></p>
</div>
</body>
</html>
"""
return make_response(html_content, 200, {'Content-Type': 'text/html'})
except Exception as e:
current_app.logger.error(f"Error rendering callback status page: {e}")
return make_response("Authentication error occurred", 500, {'Content-Type': 'text/html'})

View File

@@ -1,5 +1,6 @@
import os
import datetime
import json
from flask import Blueprint, request, send_from_directory
from werkzeug.utils import secure_filename
from bson.objectid import ObjectId
@@ -48,7 +49,17 @@ def upload_index_files():
remote_data = request.form["remote_data"] if "remote_data" in request.form else None
sync_frequency = request.form["sync_frequency"] if "sync_frequency" in request.form else None
original_file_path = request.form.get("original_file_path")
file_path = request.form.get("file_path")
directory_structure = request.form.get("directory_structure")
if directory_structure:
try:
directory_structure = json.loads(directory_structure)
except Exception:
logger.error("Error parsing directory_structure")
directory_structure = {}
else:
directory_structure = {}
storage = StorageCreator.get_storage()
index_base_path = f"indexes/{id}"
@@ -66,10 +77,13 @@ def upload_index_files():
file_pkl = request.files["file_pkl"]
if file_pkl.filename == "":
return {"status": "no file name"}
# Save index files to storage
storage.save_file(file_faiss, f"{index_base_path}/index.faiss")
storage.save_file(file_pkl, f"{index_base_path}/index.pkl")
faiss_storage_path = f"{index_base_path}/index.faiss"
pkl_storage_path = f"{index_base_path}/index.pkl"
storage.save_file(file_faiss, faiss_storage_path)
storage.save_file(file_pkl, pkl_storage_path)
existing_entry = sources_collection.find_one({"_id": ObjectId(id)})
if existing_entry:
@@ -87,7 +101,8 @@ def upload_index_files():
"retriever": retriever,
"remote_data": remote_data,
"sync_frequency": sync_frequency,
"file_path": original_file_path,
"file_path": file_path,
"directory_structure": directory_structure,
}
},
)
@@ -105,7 +120,8 @@ def upload_index_files():
"retriever": retriever,
"remote_data": remote_data,
"sync_frequency": sync_frequency,
"file_path": original_file_path,
"file_path": file_path,
"directory_structure": directory_structure,
}
)
return {"status": "ok"}

File diff suppressed because it is too large

View File

@@ -5,14 +5,16 @@ from application.worker import (
agent_webhook_worker,
attachment_worker,
ingest_worker,
mcp_oauth,
mcp_oauth_status,
remote_worker,
sync_worker,
)
@celery.task(bind=True)
def ingest(self, directory, formats, job_name, filename, user, dir_name, user_dir):
resp = ingest_worker(self, directory, formats, job_name, filename, user, dir_name, user_dir)
def ingest(self, directory, formats, job_name, user, file_path, filename):
resp = ingest_worker(self, directory, formats, job_name, file_path, filename, user)
return resp
@@ -22,6 +24,14 @@ def ingest_remote(self, source_data, job_name, user, loader):
return resp
@celery.task(bind=True)
def reingest_source_task(self, source_id, user):
from application.worker import reingest_source_worker
resp = reingest_source_worker(self, source_id, user)
return resp
@celery.task(bind=True)
def schedule_syncs(self, frequency):
resp = sync_worker(self, frequency)
@@ -40,6 +50,40 @@ def process_agent_webhook(self, agent_id, payload):
return resp
@celery.task(bind=True)
def ingest_connector_task(
self,
job_name,
user,
source_type,
session_token=None,
file_ids=None,
folder_ids=None,
recursive=True,
retriever="classic",
operation_mode="upload",
doc_id=None,
sync_frequency="never",
):
from application.worker import ingest_connector
resp = ingest_connector(
self,
job_name,
user,
source_type,
session_token=session_token,
file_ids=file_ids,
folder_ids=folder_ids,
recursive=recursive,
retriever=retriever,
operation_mode=operation_mode,
doc_id=doc_id,
sync_frequency=sync_frequency,
)
return resp
@celery.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
sender.add_periodic_task(
@@ -54,3 +98,15 @@ def setup_periodic_tasks(sender, **kwargs):
timedelta(days=30),
schedule_syncs.s("monthly"),
)
@celery.task(bind=True)
def mcp_oauth_task(self, config, user):
resp = mcp_oauth(self, config, user)
return resp
@celery.task(bind=True)
def mcp_oauth_status_task(self, task_id):
resp = mcp_oauth_status(self, task_id)
return resp
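A small sketch (not part of the diff) of enqueueing the new connector task directly and polling it, mirroring what /api/connectors/upload and /api/connectors/task_status do; values are placeholders:

from application.api.user.tasks import ingest_connector_task
from application.celery_init import celery

task = ingest_connector_task.delay(
    job_name="My Drive docs",
    user="local",
    source_type="google_drive",
    session_token="<session_token>",
    file_ids=["<drive_file_id>"],
    folder_ids=[],
    recursive=False,
    retriever="classic",
    sync_frequency="never",
)
print(celery.AsyncResult(task.id).status)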

View File

@@ -12,25 +12,26 @@ from application.core.logging_config import setup_logging
setup_logging()
from application.api.answer.routes import answer # noqa: E402
from application.api import api # noqa: E402
from application.api.answer import answer # noqa: E402
from application.api.internal.routes import internal # noqa: E402
from application.api.user.routes import user # noqa: E402
from application.api.connector.routes import connector # noqa: E402
from application.celery_init import celery # noqa: E402
from application.core.settings import settings # noqa: E402
from application.extensions import api # noqa: E402
if platform.system() == "Windows":
import pathlib
pathlib.PosixPath = pathlib.WindowsPath
dotenv.load_dotenv()
app = Flask(__name__)
app.register_blueprint(user)
app.register_blueprint(answer)
app.register_blueprint(internal)
app.register_blueprint(connector)
app.config.update(
UPLOAD_FOLDER="inputs",
CELERY_BROKER_URL=settings.CELERY_BROKER_URL,
@@ -52,7 +53,6 @@ if settings.AUTH_TYPE in ("simple_jwt", "session_jwt") and not settings.JWT_SECR
settings.JWT_SECRET_KEY = new_key
except Exception as e:
raise RuntimeError(f"Failed to setup JWT_SECRET_KEY: {e}")
SIMPLE_JWT_TOKEN = None
if settings.AUTH_TYPE == "simple_jwt":
payload = {"sub": "local"}
@@ -92,7 +92,6 @@ def generate_token():
def authenticate_request():
if request.method == "OPTIONS":
return "", 200
decoded_token = handle_auth(request)
if not decoded_token:
request.decoded_token = None

View File

@@ -10,7 +10,7 @@ current_dir = os.path.dirname(
class Settings(BaseSettings):
AUTH_TYPE: Optional[str] = None
AUTH_TYPE: Optional[str] = None # simple_jwt, session_jwt, or None
LLM_PROVIDER: str = "docsgpt"
LLM_NAME: Optional[str] = (
None # if LLM_PROVIDER is openai, LLM_NAME can be gpt-4 or gpt-3.5-turbo
@@ -26,10 +26,11 @@ class Settings(BaseSettings):
"gpt-4o-mini": 128000,
"gpt-3.5-turbo": 4096,
"claude-2": 1e5,
"gemini-2.0-flash-exp": 1e6,
"gemini-2.5-flash": 1e6,
}
UPLOAD_FOLDER: str = "inputs"
PARSE_PDF_AS_IMAGE: bool = False
PARSE_IMAGE_REMOTE: bool = False
VECTOR_STORE: str = (
"faiss" # "faiss" or "elasticsearch" or "qdrant" or "milvus" or "lancedb"
)
@@ -39,6 +40,12 @@ class Settings(BaseSettings):
FALLBACK_LLM_NAME: Optional[str] = None # model name for fallback llm
FALLBACK_LLM_API_KEY: Optional[str] = None # api key for fallback llm
# Google Drive integration
GOOGLE_CLIENT_ID: Optional[str] = None  # Replace with your actual Google OAuth client ID
GOOGLE_CLIENT_SECRET: Optional[str] = None  # Replace with your actual Google OAuth client secret
CONNECTOR_REDIRECT_BASE_URI: Optional[str] = "http://127.0.0.1:7091/api/connectors/callback"  # Register this exact redirect URI in your provider's console (GCP)
# LLM Cache
CACHE_REDIS_URL: str = "redis://localhost:6379/2"
@@ -89,6 +96,8 @@ class Settings(BaseSettings):
QDRANT_PATH: Optional[str] = None
QDRANT_DISTANCE_FUNC: str = "Cosine"
# PGVector vectorstore config
PGVECTOR_CONNECTION_STRING: Optional[str] = None
# Milvus vectorstore config
MILVUS_COLLECTION_NAME: Optional[str] = "docsgpt"
MILVUS_URI: Optional[str] = "./milvus_local.db" # milvus lite version as default
@@ -106,6 +115,10 @@ class Settings(BaseSettings):
JWT_SECRET_KEY: str = ""
# Encryption settings
ENCRYPTION_SECRET_KEY: str = "default-docsgpt-encryption-key"
ELEVENLABS_API_KEY: Optional[str] = None
path = Path(__file__).parent.parent.absolute()
settings = Settings(_env_file=path.joinpath(".env"), _env_file_encoding="utf-8")
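A small sketch (not part of the diff) showing the new connector settings being read elsewhere; the feature-flag check itself is illustrative:

from application.core.settings import settings

google_drive_enabled = bool(settings.GOOGLE_CLIENT_ID and settings.GOOGLE_CLIENT_SECRET)
redirect_uri = settings.CONNECTOR_REDIRECT_BASE_URI  # must match the URI registered in the provider console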

View File

@@ -1,7 +0,0 @@
from flask_restx import Api
api = Api(
version="1.0",
title="DocsGPT API",
description="API for DocsGPT",
)

View File

@@ -120,6 +120,20 @@ class BaseLLM(ABC):
def _supports_tools(self):
raise NotImplementedError("Subclass must implement _supports_tools method")
def supports_structured_output(self):
"""Check if the LLM supports structured output/JSON schema enforcement"""
return hasattr(self, "_supports_structured_output") and callable(
getattr(self, "_supports_structured_output")
)
def _supports_structured_output(self):
return False
def prepare_structured_output_format(self, json_schema):
"""Prepare structured output format specific to the LLM provider"""
_ = json_schema
return None
def get_supported_attachment_types(self):
"""
Return a list of MIME types supported by this LLM for file uploads.
@@ -127,4 +141,4 @@ class BaseLLM(ABC):
Returns:
list: List of supported MIME types
"""
return [] # Default: no attachments supported
return []
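A caller-side sketch (not part of the diff) of the new structured-output hooks on BaseLLM; llm is any concrete subclass and the schema is illustrative:

schema = {
    "type": "object",
    "properties": {"answer": {"type": "string"}},
    "required": ["answer"],
}
if llm.supports_structured_output():
    response_format = llm.prepare_structured_output_format(schema)  # provider-specific shape, or None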

View File

@@ -1,11 +1,13 @@
import json
import logging
from google import genai
from google.genai import types
import logging
import json
from application.core.settings import settings
from application.llm.base import BaseLLM
from application.storage.storage_creator import StorageCreator
from application.core.settings import settings
class GoogleLLM(BaseLLM):
@@ -24,12 +26,12 @@ class GoogleLLM(BaseLLM):
list: List of supported MIME types
"""
return [
'application/pdf',
'image/png',
'image/jpeg',
'image/jpg',
'image/webp',
'image/gif'
"application/pdf",
"image/png",
"image/jpeg",
"image/jpg",
"image/webp",
"image/gif",
]
def prepare_messages_with_attachments(self, messages, attachments=None):
@@ -70,26 +72,30 @@ class GoogleLLM(BaseLLM):
files = []
for attachment in attachments:
mime_type = attachment.get('mime_type')
mime_type = attachment.get("mime_type")
if mime_type in self.get_supported_attachment_types():
try:
file_uri = self._upload_file_to_google(attachment)
logging.info(f"GoogleLLM: Successfully uploaded file, got URI: {file_uri}")
logging.info(
f"GoogleLLM: Successfully uploaded file, got URI: {file_uri}"
)
files.append({"file_uri": file_uri, "mime_type": mime_type})
except Exception as e:
logging.error(f"GoogleLLM: Error uploading file: {e}", exc_info=True)
if 'content' in attachment:
prepared_messages[user_message_index]["content"].append({
"type": "text",
"text": f"[File could not be processed: {attachment.get('path', 'unknown')}]"
})
logging.error(
f"GoogleLLM: Error uploading file: {e}", exc_info=True
)
if "content" in attachment:
prepared_messages[user_message_index]["content"].append(
{
"type": "text",
"text": f"[File could not be processed: {attachment.get('path', 'unknown')}]",
}
)
if files:
logging.info(f"GoogleLLM: Adding {len(files)} files to message")
prepared_messages[user_message_index]["content"].append({
"files": files
})
prepared_messages[user_message_index]["content"].append({"files": files})
return prepared_messages
@@ -103,10 +109,10 @@ class GoogleLLM(BaseLLM):
Returns:
str: Google AI file URI for the uploaded file.
"""
if 'google_file_uri' in attachment:
return attachment['google_file_uri']
if "google_file_uri" in attachment:
return attachment["google_file_uri"]
file_path = attachment.get('path')
file_path = attachment.get("path")
if not file_path:
raise ValueError("No file path provided in attachment")
@@ -116,17 +122,19 @@ class GoogleLLM(BaseLLM):
try:
file_uri = self.storage.process_file(
file_path,
lambda local_path, **kwargs: self.client.files.upload(file=local_path).uri
lambda local_path, **kwargs: self.client.files.upload(
file=local_path
).uri,
)
from application.core.mongo_db import MongoDB
mongo = MongoDB.get_client()
db = mongo[settings.MONGO_DB_NAME]
attachments_collection = db["attachments"]
if '_id' in attachment:
if "_id" in attachment:
attachments_collection.update_one(
{"_id": attachment['_id']},
{"$set": {"google_file_uri": file_uri}}
{"_id": attachment["_id"]}, {"$set": {"google_file_uri": file_uri}}
)
return file_uri
@@ -135,6 +143,7 @@ class GoogleLLM(BaseLLM):
raise
def _clean_messages_google(self, messages):
"""Convert OpenAI format messages to Google AI format."""
cleaned_messages = []
for message in messages:
role = message.get("role")
@@ -142,6 +151,8 @@ class GoogleLLM(BaseLLM):
if role == "assistant":
role = "model"
elif role == "tool":
role = "model"
parts = []
if role and content is not None:
@@ -166,13 +177,13 @@ class GoogleLLM(BaseLLM):
)
)
elif "files" in item:
for file_data in item["files"]:
parts.append(
types.Part.from_uri(
file_uri=file_data["file_uri"],
mime_type=file_data["mime_type"]
)
for file_data in item["files"]:
parts.append(
types.Part.from_uri(
file_uri=file_data["file_uri"],
mime_type=file_data["mime_type"],
)
)
else:
raise ValueError(
f"Unexpected content dictionary format:{item}"
@@ -180,11 +191,63 @@ class GoogleLLM(BaseLLM):
else:
raise ValueError(f"Unexpected content type: {type(content)}")
cleaned_messages.append(types.Content(role=role, parts=parts))
if parts:
cleaned_messages.append(types.Content(role=role, parts=parts))
return cleaned_messages
def _clean_schema(self, schema_obj):
"""
Recursively remove unsupported fields from schema objects
and validate required properties.
"""
if not isinstance(schema_obj, dict):
return schema_obj
allowed_fields = {
"type",
"description",
"items",
"properties",
"required",
"enum",
"pattern",
"minimum",
"maximum",
"nullable",
"default",
}
cleaned = {}
for key, value in schema_obj.items():
if key not in allowed_fields:
continue
elif key == "type" and isinstance(value, str):
cleaned[key] = value.upper()
elif isinstance(value, dict):
cleaned[key] = self._clean_schema(value)
elif isinstance(value, list):
cleaned[key] = [self._clean_schema(item) for item in value]
else:
cleaned[key] = value
# Validate that required properties actually exist in properties
if "required" in cleaned and "properties" in cleaned:
valid_required = []
properties_keys = set(cleaned["properties"].keys())
for required_prop in cleaned["required"]:
if required_prop in properties_keys:
valid_required.append(required_prop)
if valid_required:
cleaned["required"] = valid_required
else:
cleaned.pop("required", None)
elif "required" in cleaned and "properties" not in cleaned:
cleaned.pop("required", None)
return cleaned
def _clean_tools_format(self, tools_list):
"""Convert OpenAI format tools to Google AI format."""
genai_tools = []
for tool_data in tools_list:
if tool_data["type"] == "function":
@@ -193,18 +256,16 @@ class GoogleLLM(BaseLLM):
properties = parameters.get("properties", {})
if properties:
cleaned_properties = {}
for k, v in properties.items():
cleaned_properties[k] = self._clean_schema(v)
genai_function = dict(
name=function["name"],
description=function["description"],
parameters={
"type": "OBJECT",
"properties": {
k: {
**v,
"type": v["type"].upper() if v["type"] else None,
}
for k, v in properties.items()
},
"properties": cleaned_properties,
"required": (
parameters["required"]
if "required" in parameters
@@ -231,8 +292,10 @@ class GoogleLLM(BaseLLM):
stream=False,
tools=None,
formatting="openai",
response_schema=None,
**kwargs,
):
"""Generate content using Google AI API without streaming."""
client = genai.Client(api_key=self.api_key)
if formatting == "openai":
messages = self._clean_messages_google(messages)
@@ -244,16 +307,21 @@ class GoogleLLM(BaseLLM):
if tools:
cleaned_tools = self._clean_tools_format(tools)
config.tools = cleaned_tools
response = client.models.generate_content(
model=model,
contents=messages,
config=config,
)
# Add response schema for structured output if provided
if response_schema:
config.response_schema = response_schema
config.response_mime_type = "application/json"
response = client.models.generate_content(
model=model,
contents=messages,
config=config,
)
if tools:
return response
else:
response = client.models.generate_content(
model=model, contents=messages, config=config
)
return response.text
def _raw_gen_stream(
@@ -264,8 +332,10 @@ class GoogleLLM(BaseLLM):
stream=True,
tools=None,
formatting="openai",
response_schema=None,
**kwargs,
):
"""Generate content using Google AI API with streaming."""
client = genai.Client(api_key=self.api_key)
if formatting == "openai":
messages = self._clean_messages_google(messages)
@@ -278,17 +348,24 @@ class GoogleLLM(BaseLLM):
cleaned_tools = self._clean_tools_format(tools)
config.tools = cleaned_tools
# Add response schema for structured output if provided
if response_schema:
config.response_schema = response_schema
config.response_mime_type = "application/json"
# Check if we have both tools and file attachments
has_attachments = False
for message in messages:
for part in message.parts:
if hasattr(part, 'file_data') and part.file_data is not None:
if hasattr(part, "file_data") and part.file_data is not None:
has_attachments = True
break
if has_attachments:
break
logging.info(f"GoogleLLM: Starting stream generation. Model: {model}, Messages: {json.dumps(messages, default=str)}, Has attachments: {has_attachments}")
logging.info(
f"GoogleLLM: Starting stream generation. Model: {model}, Messages: {json.dumps(messages, default=str)}, Has attachments: {has_attachments}"
)
response = client.models.generate_content_stream(
model=model,
@@ -296,7 +373,6 @@ class GoogleLLM(BaseLLM):
config=config,
)
for chunk in response:
if hasattr(chunk, "candidates") and chunk.candidates:
for candidate in chunk.candidates:
@@ -310,4 +386,79 @@ class GoogleLLM(BaseLLM):
yield chunk.text
def _supports_tools(self):
"""Return whether this LLM supports function calling."""
return True
def _supports_structured_output(self):
"""Return whether this LLM supports structured JSON output."""
return True
def prepare_structured_output_format(self, json_schema):
"""Convert JSON schema to Google AI structured output format."""
if not json_schema:
return None
type_map = {
"object": "OBJECT",
"array": "ARRAY",
"string": "STRING",
"integer": "INTEGER",
"number": "NUMBER",
"boolean": "BOOLEAN",
}
def convert(schema):
if not isinstance(schema, dict):
return schema
result = {}
schema_type = schema.get("type")
if schema_type:
result["type"] = type_map.get(schema_type.lower(), schema_type.upper())
for key in [
"description",
"nullable",
"enum",
"minItems",
"maxItems",
"required",
"propertyOrdering",
]:
if key in schema:
result[key] = schema[key]
if "format" in schema:
format_value = schema["format"]
if schema_type == "string":
if format_value == "date":
result["format"] = "date-time"
elif format_value in ["enum", "date-time"]:
result["format"] = format_value
else:
result["format"] = format_value
if "properties" in schema:
result["properties"] = {
k: convert(v) for k, v in schema["properties"].items()
}
if "propertyOrdering" not in result and result.get("type") == "OBJECT":
result["propertyOrdering"] = list(result["properties"].keys())
if "items" in schema:
result["items"] = convert(schema["items"])
for field in ["anyOf", "oneOf", "allOf"]:
if field in schema:
result[field] = [convert(s) for s in schema[field]]
return result
try:
return convert(json_schema)
except Exception as e:
logging.error(
f"Error preparing structured output format for Google: {e}",
exc_info=True,
)
return None
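A small sketch (not part of the diff) of the Gemini schema conversion added above; the constructor arguments are assumed from BaseLLM:

llm = GoogleLLM(api_key="<google_api_key>")
google_schema = llm.prepare_structured_output_format(
    {"type": "object", "properties": {"title": {"type": "string"}, "score": {"type": "number"}}}
)
# -> {"type": "OBJECT",
#     "properties": {"title": {"type": "STRING"}, "score": {"type": "NUMBER"}},
#     "propertyOrdering": ["title", "score"]}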

View File

@@ -205,7 +205,6 @@ class LLMHandler(ABC):
except StopIteration as e:
tool_response, call_id = e.value
break
updated_messages.append(
{
"role": "assistant",
@@ -222,17 +221,36 @@ class LLMHandler(ABC):
)
updated_messages.append(self.create_tool_message(call, tool_response))
except Exception as e:
logger.error(f"Error executing tool: {str(e)}", exc_info=True)
updated_messages.append(
{
"role": "tool",
"content": f"Error executing tool: {str(e)}",
"tool_call_id": call.id,
}
error_call = ToolCall(
id=call.id, name=call.name, arguments=call.arguments
)
error_response = f"Error executing tool: {str(e)}"
error_message = self.create_tool_message(error_call, error_response)
updated_messages.append(error_message)
call_parts = call.name.split("_")
if len(call_parts) >= 2:
tool_id = call_parts[-1] # Last part is tool ID (e.g., "1")
action_name = "_".join(call_parts[:-1])
tool_name = tools_dict.get(tool_id, {}).get("name", "unknown_tool")
full_action_name = f"{action_name}_{tool_id}"
else:
tool_name = "unknown_tool"
action_name = call.name
full_action_name = call.name
yield {
"type": "tool_call",
"data": {
"tool_name": tool_name,
"call_id": call.id,
"action_name": full_action_name,
"arguments": call.arguments,
"error": error_response,
"status": "error",
},
}
return updated_messages
def handle_non_streaming(
@@ -263,13 +281,11 @@ class LLMHandler(ABC):
except StopIteration as e:
messages = e.value
break
response = agent.llm.gen(
model=agent.gpt_model, messages=messages, tools=agent.tools
)
parsed = self.parse_response(response)
self.llm_calls.append(build_stack_data(agent.llm))
return parsed.content
def handle_streaming(

View File

@@ -17,7 +17,6 @@ class GoogleLLMHandler(LLMHandler):
finish_reason="stop",
raw_response=response,
)
if hasattr(response, "candidates"):
parts = response.candidates[0].content.parts if response.candidates else []
tool_calls = [
@@ -41,7 +40,6 @@ class GoogleLLMHandler(LLMHandler):
finish_reason="tool_calls" if tool_calls else "stop",
raw_response=response,
)
else:
tool_calls = []
if hasattr(response, "function_call"):
@@ -61,14 +59,16 @@ class GoogleLLMHandler(LLMHandler):
def create_tool_message(self, tool_call: ToolCall, result: Any) -> Dict:
"""Create Google-style tool message."""
from google.genai import types
return {
"role": "tool",
"role": "model",
"content": [
types.Part.from_function_response(
name=tool_call.name, response={"result": result}
).to_json_dict()
{
"function_response": {
"name": tool_call.name,
"response": {"result": result},
}
}
],
}
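A shape sketch (not part of the diff) of the Google-style tool message now returned above; the import paths and handler construction are assumptions:

from application.llm.handlers.google import GoogleLLMHandler  # path assumed
from application.llm.handlers.base import ToolCall            # path assumed

handler = GoogleLLMHandler()  # constructor arguments, if any, are assumed
msg = handler.create_tool_message(
    ToolCall(id="1", name="get_weather_1", arguments={"city": "Berlin"}),
    {"temp_c": 21},
)
# msg == {"role": "model",
#         "content": [{"function_response": {"name": "get_weather_1",
#                                             "response": {"result": {"temp_c": 21}}}}]}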

View File

@@ -1,5 +1,5 @@
import json
import base64
import json
import logging
from application.core.settings import settings
@@ -13,7 +13,10 @@ class OpenAILLM(BaseLLM):
from openai import OpenAI
super().__init__(*args, **kwargs)
if isinstance(settings.OPENAI_BASE_URL, str) and settings.OPENAI_BASE_URL.strip():
if (
isinstance(settings.OPENAI_BASE_URL, str)
and settings.OPENAI_BASE_URL.strip()
):
self.client = OpenAI(api_key=api_key, base_url=settings.OPENAI_BASE_URL)
else:
DEFAULT_OPENAI_API_BASE = "https://api.openai.com/v1"
@@ -73,14 +76,30 @@ class OpenAILLM(BaseLLM):
elif isinstance(item, dict):
content_parts = []
if "text" in item:
content_parts.append({"type": "text", "text": item["text"]})
elif "type" in item and item["type"] == "text" and "text" in item:
content_parts.append(
{"type": "text", "text": item["text"]}
)
elif (
"type" in item
and item["type"] == "text"
and "text" in item
):
content_parts.append(item)
elif "type" in item and item["type"] == "file" and "file" in item:
elif (
"type" in item
and item["type"] == "file"
and "file" in item
):
content_parts.append(item)
elif "type" in item and item["type"] == "image_url" and "image_url" in item:
elif (
"type" in item
and item["type"] == "image_url"
and "image_url" in item
):
content_parts.append(item)
cleaned_messages.append({"role": role, "content": content_parts})
cleaned_messages.append(
{"role": role, "content": content_parts}
)
else:
raise ValueError(
f"Unexpected content dictionary format: {item}"
@@ -98,22 +117,29 @@ class OpenAILLM(BaseLLM):
stream=False,
tools=None,
engine=settings.AZURE_DEPLOYMENT_NAME,
response_format=None,
**kwargs,
):
messages = self._clean_messages_openai(messages)
request_params = {
"model": model,
"messages": messages,
"stream": stream,
**kwargs,
}
if tools:
request_params["tools"] = tools
if response_format:
request_params["response_format"] = response_format
response = self.client.chat.completions.create(**request_params)
if tools:
response = self.client.chat.completions.create(
model=model,
messages=messages,
stream=stream,
tools=tools,
**kwargs,
)
return response.choices[0]
else:
response = self.client.chat.completions.create(
model=model, messages=messages, stream=stream, **kwargs
)
return response.choices[0].message.content
def _raw_gen_stream(
@@ -124,24 +150,32 @@ class OpenAILLM(BaseLLM):
stream=True,
tools=None,
engine=settings.AZURE_DEPLOYMENT_NAME,
response_format=None,
**kwargs,
):
messages = self._clean_messages_openai(messages)
request_params = {
"model": model,
"messages": messages,
"stream": stream,
**kwargs,
}
if tools:
response = self.client.chat.completions.create(
model=model,
messages=messages,
stream=stream,
tools=tools,
**kwargs,
)
else:
response = self.client.chat.completions.create(
model=model, messages=messages, stream=stream, **kwargs
)
request_params["tools"] = tools
if response_format:
request_params["response_format"] = response_format
response = self.client.chat.completions.create(**request_params)
for line in response:
if len(line.choices) > 0 and line.choices[0].delta.content is not None and len(line.choices[0].delta.content) > 0:
if (
len(line.choices) > 0
and line.choices[0].delta.content is not None
and len(line.choices[0].delta.content) > 0
):
yield line.choices[0].delta.content
elif len(line.choices) > 0:
yield line.choices[0]
@@ -149,6 +183,66 @@ class OpenAILLM(BaseLLM):
def _supports_tools(self):
return True
def _supports_structured_output(self):
return True
def prepare_structured_output_format(self, json_schema):
if not json_schema:
return None
try:
def add_additional_properties_false(schema_obj):
if isinstance(schema_obj, dict):
schema_copy = schema_obj.copy()
if schema_copy.get("type") == "object":
schema_copy["additionalProperties"] = False
# Ensure 'required' includes all properties for OpenAI strict mode
if "properties" in schema_copy:
schema_copy["required"] = list(
schema_copy["properties"].keys()
)
for key, value in schema_copy.items():
if key == "properties" and isinstance(value, dict):
schema_copy[key] = {
prop_name: add_additional_properties_false(prop_schema)
for prop_name, prop_schema in value.items()
}
elif key == "items" and isinstance(value, dict):
schema_copy[key] = add_additional_properties_false(value)
elif key in ["anyOf", "oneOf", "allOf"] and isinstance(
value, list
):
schema_copy[key] = [
add_additional_properties_false(sub_schema)
for sub_schema in value
]
return schema_copy
return schema_obj
processed_schema = add_additional_properties_false(json_schema)
result = {
"type": "json_schema",
"json_schema": {
"name": processed_schema.get("name", "response"),
"description": processed_schema.get(
"description", "Structured response"
),
"schema": processed_schema,
"strict": True,
},
}
return result
except Exception as e:
logging.error(f"Error preparing structured output format: {e}")
return None
def get_supported_attachment_types(self):
"""
Return a list of MIME types supported by OpenAI for file uploads.
@@ -157,12 +251,12 @@ class OpenAILLM(BaseLLM):
list: List of supported MIME types
"""
return [
'application/pdf',
'image/png',
'image/jpeg',
'image/jpg',
'image/webp',
'image/gif'
"application/pdf",
"image/png",
"image/jpeg",
"image/jpg",
"image/webp",
"image/gif",
]
def prepare_messages_with_attachments(self, messages, attachments=None):
@@ -202,39 +296,46 @@ class OpenAILLM(BaseLLM):
prepared_messages[user_message_index]["content"] = []
for attachment in attachments:
mime_type = attachment.get('mime_type')
mime_type = attachment.get("mime_type")
if mime_type and mime_type.startswith('image/'):
if mime_type and mime_type.startswith("image/"):
try:
base64_image = self._get_base64_image(attachment)
prepared_messages[user_message_index]["content"].append({
"type": "image_url",
"image_url": {
"url": f"data:{mime_type};base64,{base64_image}"
prepared_messages[user_message_index]["content"].append(
{
"type": "image_url",
"image_url": {
"url": f"data:{mime_type};base64,{base64_image}"
},
}
})
)
except Exception as e:
logging.error(f"Error processing image attachment: {e}", exc_info=True)
if 'content' in attachment:
prepared_messages[user_message_index]["content"].append({
"type": "text",
"text": f"[Image could not be processed: {attachment.get('path', 'unknown')}]"
})
logging.error(
f"Error processing image attachment: {e}", exc_info=True
)
if "content" in attachment:
prepared_messages[user_message_index]["content"].append(
{
"type": "text",
"text": f"[Image could not be processed: {attachment.get('path', 'unknown')}]",
}
)
# Handle PDFs using the file API
elif mime_type == 'application/pdf':
elif mime_type == "application/pdf":
try:
file_id = self._upload_file_to_openai(attachment)
prepared_messages[user_message_index]["content"].append({
"type": "file",
"file": {"file_id": file_id}
})
prepared_messages[user_message_index]["content"].append(
{"type": "file", "file": {"file_id": file_id}}
)
except Exception as e:
logging.error(f"Error uploading PDF to OpenAI: {e}", exc_info=True)
if 'content' in attachment:
prepared_messages[user_message_index]["content"].append({
"type": "text",
"text": f"File content:\n\n{attachment['content']}"
})
if "content" in attachment:
prepared_messages[user_message_index]["content"].append(
{
"type": "text",
"text": f"File content:\n\n{attachment['content']}",
}
)
return prepared_messages
@@ -248,13 +349,13 @@ class OpenAILLM(BaseLLM):
Returns:
str: Base64-encoded image data.
"""
file_path = attachment.get('path')
file_path = attachment.get("path")
if not file_path:
raise ValueError("No file path provided in attachment")
try:
with self.storage.get_file(file_path) as image_file:
return base64.b64encode(image_file.read()).decode('utf-8')
return base64.b64encode(image_file.read()).decode("utf-8")
except FileNotFoundError:
raise FileNotFoundError(f"File not found: {file_path}")
@@ -273,10 +374,10 @@ class OpenAILLM(BaseLLM):
"""
import logging
if 'openai_file_id' in attachment:
return attachment['openai_file_id']
if "openai_file_id" in attachment:
return attachment["openai_file_id"]
file_path = attachment.get('path')
file_path = attachment.get("path")
if not self.storage.file_exists(file_path):
raise FileNotFoundError(f"File not found: {file_path}")
@@ -285,19 +386,18 @@ class OpenAILLM(BaseLLM):
file_id = self.storage.process_file(
file_path,
lambda local_path, **kwargs: self.client.files.create(
file=open(local_path, 'rb'),
purpose="assistants"
).id
file=open(local_path, "rb"), purpose="assistants"
).id,
)
from application.core.mongo_db import MongoDB
mongo = MongoDB.get_client()
db = mongo[settings.MONGO_DB_NAME]
attachments_collection = db["attachments"]
if '_id' in attachment:
if "_id" in attachment:
attachments_collection.update_one(
{"_id": attachment['_id']},
{"$set": {"openai_file_id": file_id}}
{"_id": attachment["_id"]}, {"$set": {"openai_file_id": file_id}}
)
return file_id
@@ -308,9 +408,7 @@ class OpenAILLM(BaseLLM):
class AzureOpenAILLM(OpenAILLM):
def __init__(
self, api_key, user_api_key, *args, **kwargs
):
def __init__(self, api_key, user_api_key, *args, **kwargs):
super().__init__(api_key)
self.api_base = (settings.OPENAI_API_BASE,)
@@ -321,5 +419,5 @@ class AzureOpenAILLM(OpenAILLM):
self.client = AzureOpenAI(
api_key=api_key,
api_version=settings.OPENAI_API_VERSION,
azure_endpoint=settings.OPENAI_API_BASE
azure_endpoint=settings.OPENAI_API_BASE,
)
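
A note on the two response_format changes above: prepare_structured_output_format rewrites a JSON schema for OpenAI strict mode (forcing additionalProperties to False and filling in required), and both _raw_gen paths forward the prepared format to chat.completions.create. A minimal usage sketch, assuming the module path and constructor usage shown below, and assuming the public gen() wrapper forwards response_format to _raw_gen the same way the handler code forwards tools; the schema, model name, and prompt are illustrative, not from the repository:

from application.llm.openai import OpenAILLM  # assumed module path

llm = OpenAILLM(api_key="sk-placeholder")  # assumed constructor usage

# Illustrative schema; the name and fields are hypothetical.
schema = {
    "name": "weather_report",
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "temperature_c": {"type": "number"},
    },
}

# Wraps the schema in {"type": "json_schema", ...} with strict=True,
# additionalProperties=False, and a full "required" list.
response_format = llm.prepare_structured_output_format(schema)

result = llm.gen(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Report the weather in Paris as JSON."}],
    response_format=response_format,
)
# With no tools configured, the non-streaming path is assumed to return the
# message content string, which should parse as JSON under strict mode.
print(result)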

View File

@@ -136,6 +136,8 @@ def _log_to_mongodb(
mongo = MongoDB.get_client()
db = mongo[settings.MONGO_DB_NAME]
user_logs_collection = db["stack_logs"]
log_entry = {
"endpoint": endpoint,
@@ -147,6 +149,11 @@ def _log_to_mongodb(
"stacks": stacks,
"timestamp": datetime.datetime.now(datetime.timezone.utc),
}
# clean up text fields to be no longer than 10000 characters
for key, value in log_entry.items():
if isinstance(value, str) and len(value) > 10000:
log_entry[key] = value[:10000]
user_logs_collection.insert_one(log_entry)
logging.debug(f"Logged activity to MongoDB: {activity_id}")

View File

@@ -32,16 +32,7 @@ class Chunker:
header, body = "", text # No header, treat entire text as body
return header, body
def combine_documents(self, doc: Document, next_doc: Document) -> Document:
combined_text = doc.text + " " + next_doc.text
combined_token_count = len(self.encoding.encode(combined_text))
new_doc = Document(
text=combined_text,
doc_id=doc.doc_id,
embedding=doc.embedding,
extra_info={**(doc.extra_info or {}), "token_count": combined_token_count}
)
return new_doc
def split_document(self, doc: Document) -> List[Document]:
split_docs = []
@@ -82,26 +73,11 @@ class Chunker:
processed_docs.append(doc)
i += 1
elif token_count < self.min_tokens:
if i + 1 < len(documents):
next_doc = documents[i + 1]
next_tokens = self.encoding.encode(next_doc.text)
if token_count + len(next_tokens) <= self.max_tokens:
# Combine small documents
combined_doc = self.combine_documents(doc, next_doc)
processed_docs.append(combined_doc)
i += 2
else:
# Keep the small document as is if adding next_doc would exceed max_tokens
doc.extra_info = doc.extra_info or {}
doc.extra_info["token_count"] = token_count
processed_docs.append(doc)
i += 1
else:
# No next document to combine with; add the small document as is
doc.extra_info = doc.extra_info or {}
doc.extra_info["token_count"] = token_count
processed_docs.append(doc)
i += 1
doc.extra_info = doc.extra_info or {}
doc.extra_info["token_count"] = token_count
processed_docs.append(doc)
i += 1
else:
# Split large documents
processed_docs.extend(self.split_document(doc))

View File

@@ -0,0 +1,18 @@
"""
External knowledge base connectors for DocsGPT.
This module contains connectors for external knowledge bases and document storage systems
that require authentication and specialized handling, separate from simple web scrapers.
"""
from .base import BaseConnectorAuth, BaseConnectorLoader
from .connector_creator import ConnectorCreator
from .google_drive import GoogleDriveAuth, GoogleDriveLoader
__all__ = [
'BaseConnectorAuth',
'BaseConnectorLoader',
'ConnectorCreator',
'GoogleDriveAuth',
'GoogleDriveLoader'
]

View File

@@ -0,0 +1,129 @@
"""
Base classes for external knowledge base connectors.
This module provides minimal abstract base classes that define the essential
interface for external knowledge base connectors.
"""
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Optional
from application.parser.schema.base import Document
class BaseConnectorAuth(ABC):
"""
Abstract base class for connector authentication.
Defines the minimal interface that all connector authentication
implementations must follow.
"""
@abstractmethod
def get_authorization_url(self, state: Optional[str] = None) -> str:
"""
Generate authorization URL for OAuth flows.
Args:
state: Optional state parameter for CSRF protection
Returns:
Authorization URL
"""
pass
@abstractmethod
def exchange_code_for_tokens(self, authorization_code: str) -> Dict[str, Any]:
"""
Exchange authorization code for access tokens.
Args:
authorization_code: Authorization code from OAuth callback
Returns:
Dictionary containing token information
"""
pass
@abstractmethod
def refresh_access_token(self, refresh_token: str) -> Dict[str, Any]:
"""
Refresh an expired access token.
Args:
refresh_token: Refresh token
Returns:
Dictionary containing refreshed token information
"""
pass
@abstractmethod
def is_token_expired(self, token_info: Dict[str, Any]) -> bool:
"""
Check if a token is expired.
Args:
token_info: Token information dictionary
Returns:
True if token is expired, False otherwise
"""
pass
class BaseConnectorLoader(ABC):
"""
Abstract base class for connector loaders.
Defines the minimal interface that all connector loader
implementations must follow.
"""
@abstractmethod
def __init__(self, session_token: str):
"""
Initialize the connector loader.
Args:
session_token: Authentication session token
"""
pass
@abstractmethod
def load_data(self, inputs: Dict[str, Any]) -> List[Document]:
"""
Load documents from the external knowledge base.
Args:
inputs: Configuration dictionary containing:
- file_ids: Optional list of specific file IDs to load
- folder_ids: Optional list of folder IDs to browse/download
- limit: Maximum number of items to return
- list_only: If True, return metadata without content
- recursive: Whether to recursively process folders
Returns:
List of Document objects
"""
pass
@abstractmethod
def download_to_directory(self, local_dir: str, source_config: Dict[str, Any] = None) -> Dict[str, Any]:
"""
Download files/folders to a local directory.
Args:
local_dir: Local directory path to download files to
source_config: Configuration for what to download
Returns:
Dictionary containing download results:
- files_downloaded: Number of files downloaded
- directory_path: Path where files were downloaded
- empty_result: Whether no files were downloaded
- source_type: Type of connector
- config_used: Configuration that was used
- error: Error message if download failed (optional)
"""
pass

View File

@@ -0,0 +1,81 @@
from application.parser.connectors.google_drive.loader import GoogleDriveLoader
from application.parser.connectors.google_drive.auth import GoogleDriveAuth
class ConnectorCreator:
"""
Factory class for creating external knowledge base connectors and auth providers.
These are different from remote loaders as they typically require
authentication and connect to external document storage systems.
"""
connectors = {
"google_drive": GoogleDriveLoader,
}
auth_providers = {
"google_drive": GoogleDriveAuth,
}
@classmethod
def create_connector(cls, connector_type, *args, **kwargs):
"""
Create a connector instance for the specified type.
Args:
connector_type: Type of connector to create (e.g., 'google_drive')
*args, **kwargs: Arguments to pass to the connector constructor
Returns:
Connector instance
Raises:
ValueError: If connector type is not supported
"""
connector_class = cls.connectors.get(connector_type.lower())
if not connector_class:
raise ValueError(f"No connector class found for type {connector_type}")
return connector_class(*args, **kwargs)
@classmethod
def create_auth(cls, connector_type):
"""
Create an auth provider instance for the specified connector type.
Args:
connector_type: Type of connector auth to create (e.g., 'google_drive')
Returns:
Auth provider instance
Raises:
ValueError: If connector type is not supported for auth
"""
auth_class = cls.auth_providers.get(connector_type.lower())
if not auth_class:
raise ValueError(f"No auth class found for type {connector_type}")
return auth_class()
@classmethod
def get_supported_connectors(cls):
"""
Get list of supported connector types.
Returns:
List of supported connector type strings
"""
return list(cls.connectors.keys())
@classmethod
def is_supported(cls, connector_type):
"""
Check if a connector type is supported.
Args:
connector_type: Type of connector to check
Returns:
True if supported, False otherwise
"""
return connector_type.lower() in cls.connectors
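
A short usage sketch of the factory above, assuming a session token previously stored by the OAuth callback; the token value and folder id are placeholders:

from application.parser.connectors.connector_creator import ConnectorCreator

session_token = "stored-session-token"  # placeholder, issued by the OAuth flow

if ConnectorCreator.is_supported("google_drive"):
    loader = ConnectorCreator.create_connector("google_drive", session_token)
    # list_only=True returns metadata-only Documents, useful for browsing a folder
    docs = loader.load_data({"folder_id": "root", "limit": 25, "list_only": True})
    for doc in docs:
        print(doc.doc_id, doc.extra_info.get("file_name"), doc.extra_info.get("is_folder", False))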

View File

@@ -0,0 +1,10 @@
"""
Google Drive connector for DocsGPT.
This module provides authentication and document loading capabilities for Google Drive.
"""
from .auth import GoogleDriveAuth
from .loader import GoogleDriveLoader
__all__ = ['GoogleDriveAuth', 'GoogleDriveLoader']

View File

@@ -0,0 +1,267 @@
import logging
import datetime
from typing import Optional, Dict, Any
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import Flow
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
from application.core.settings import settings
from application.parser.connectors.base import BaseConnectorAuth
class GoogleDriveAuth(BaseConnectorAuth):
"""
Handles Google OAuth 2.0 authentication for Google Drive access.
"""
SCOPES = [
'https://www.googleapis.com/auth/drive.file'
]
def __init__(self):
self.client_id = settings.GOOGLE_CLIENT_ID
self.client_secret = settings.GOOGLE_CLIENT_SECRET
self.redirect_uri = f"{settings.CONNECTOR_REDIRECT_BASE_URI}"
if not self.client_id or not self.client_secret:
raise ValueError("Google OAuth credentials not configured. Please set GOOGLE_CLIENT_ID and GOOGLE_CLIENT_SECRET in settings.")
def get_authorization_url(self, state: Optional[str] = None) -> str:
try:
flow = Flow.from_client_config(
{
"web": {
"client_id": self.client_id,
"client_secret": self.client_secret,
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"redirect_uris": [self.redirect_uri]
}
},
scopes=self.SCOPES
)
flow.redirect_uri = self.redirect_uri
authorization_url, _ = flow.authorization_url(
access_type='offline',
prompt='consent',
include_granted_scopes='false',
state=state
)
return authorization_url
except Exception as e:
logging.error(f"Error generating authorization URL: {e}")
raise
def exchange_code_for_tokens(self, authorization_code: str) -> Dict[str, Any]:
try:
if not authorization_code:
raise ValueError("Authorization code is required")
flow = Flow.from_client_config(
{
"web": {
"client_id": self.client_id,
"client_secret": self.client_secret,
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"redirect_uris": [self.redirect_uri]
}
},
scopes=self.SCOPES
)
flow.redirect_uri = self.redirect_uri
flow.fetch_token(code=authorization_code)
credentials = flow.credentials
if not credentials.refresh_token:
logging.warning("OAuth flow did not return a refresh_token.")
if not credentials.token:
raise ValueError("OAuth flow did not return an access token")
if not credentials.token_uri:
credentials.token_uri = "https://oauth2.googleapis.com/token"
if not credentials.client_id:
credentials.client_id = self.client_id
if not credentials.client_secret:
credentials.client_secret = self.client_secret
if not credentials.refresh_token:
raise ValueError(
"No refresh token received. This typically happens when offline access wasn't granted. "
)
return {
'access_token': credentials.token,
'refresh_token': credentials.refresh_token,
'token_uri': credentials.token_uri,
'client_id': credentials.client_id,
'client_secret': credentials.client_secret,
'scopes': credentials.scopes,
'expiry': credentials.expiry.isoformat() if credentials.expiry else None
}
except Exception as e:
logging.error(f"Error exchanging code for tokens: {e}")
raise
def refresh_access_token(self, refresh_token: str) -> Dict[str, Any]:
try:
if not refresh_token:
raise ValueError("Refresh token is required")
credentials = Credentials(
token=None,
refresh_token=refresh_token,
token_uri="https://oauth2.googleapis.com/token",
client_id=self.client_id,
client_secret=self.client_secret
)
from google.auth.transport.requests import Request
credentials.refresh(Request())
return {
'access_token': credentials.token,
'refresh_token': refresh_token,
'token_uri': credentials.token_uri,
'client_id': credentials.client_id,
'client_secret': credentials.client_secret,
'scopes': credentials.scopes,
'expiry': credentials.expiry.isoformat() if credentials.expiry else None
}
except Exception as e:
logging.error(f"Error refreshing access token: {e}", exc_info=True)
raise
def create_credentials_from_token_info(self, token_info: Dict[str, Any]) -> Credentials:
from application.core.settings import settings
access_token = token_info.get('access_token')
if not access_token:
raise ValueError("No access token found in token_info")
credentials = Credentials(
token=access_token,
refresh_token=token_info.get('refresh_token'),
token_uri= 'https://oauth2.googleapis.com/token',
client_id=settings.GOOGLE_CLIENT_ID,
client_secret=settings.GOOGLE_CLIENT_SECRET,
scopes=token_info.get('scopes', ['https://www.googleapis.com/auth/drive.readonly'])
)
if not credentials.token:
raise ValueError("Credentials created without valid access token")
return credentials
def build_drive_service(self, credentials: Credentials):
try:
if not credentials:
raise ValueError("No credentials provided")
if not credentials.token and not credentials.refresh_token:
raise ValueError("No access token or refresh token available. User must re-authorize with offline access.")
needs_refresh = credentials.expired or not credentials.token
if needs_refresh:
if credentials.refresh_token:
try:
from google.auth.transport.requests import Request
credentials.refresh(Request())
except Exception as refresh_error:
raise ValueError(f"Failed to refresh credentials: {refresh_error}")
else:
raise ValueError("No access token or refresh token available. User must re-authorize with offline access.")
return build('drive', 'v3', credentials=credentials)
except HttpError as e:
raise ValueError(f"Failed to build Google Drive service: HTTP {e.resp.status}")
except Exception as e:
raise ValueError(f"Failed to build Google Drive service: {str(e)}")
def is_token_expired(self, token_info):
if 'expiry' in token_info and token_info['expiry']:
try:
from dateutil import parser
# Google Drive provides timezone-aware ISO8601 dates
expiry_dt = parser.parse(token_info['expiry'])
current_time = datetime.datetime.now(datetime.timezone.utc)
return current_time >= expiry_dt - datetime.timedelta(seconds=60)
except Exception:
return True
if 'access_token' in token_info and token_info['access_token']:
return False
return True
def get_token_info_from_session(self, session_token: str) -> Dict[str, Any]:
try:
from application.core.mongo_db import MongoDB
from application.core.settings import settings
mongo = MongoDB.get_client()
db = mongo[settings.MONGO_DB_NAME]
sessions_collection = db["connector_sessions"]
session = sessions_collection.find_one({"session_token": session_token})
if not session:
raise ValueError(f"Invalid session token: {session_token}")
if "token_info" not in session:
raise ValueError("Session missing token information")
token_info = session["token_info"]
if not token_info:
raise ValueError("Invalid token information")
required_fields = ["access_token", "refresh_token"]
missing_fields = [field for field in required_fields if field not in token_info or not token_info.get(field)]
if missing_fields:
raise ValueError(f"Missing required token fields: {missing_fields}")
if 'client_id' not in token_info:
token_info['client_id'] = settings.GOOGLE_CLIENT_ID
if 'client_secret' not in token_info:
token_info['client_secret'] = settings.GOOGLE_CLIENT_SECRET
if 'token_uri' not in token_info:
token_info['token_uri'] = 'https://oauth2.googleapis.com/token'
return token_info
except Exception as e:
raise ValueError(f"Failed to retrieve Google Drive token information: {str(e)}")
def validate_credentials(self, credentials: Credentials) -> bool:
"""
Validate Google Drive credentials by making a test API call.
Args:
credentials: Google credentials object
Returns:
True if credentials are valid, False otherwise
"""
try:
service = self.build_drive_service(credentials)
service.about().get(fields="user").execute()
return True
except HttpError as e:
logging.error(f"HTTP error validating credentials: {e}")
return False
except Exception as e:
logging.error(f"Error validating credentials: {e}")
return False
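
The intended OAuth round-trip with this class: build an authorization URL, send the user to Google, exchange the returned code for tokens, refresh once is_token_expired reports expiry, then build and validate a Drive service. A minimal sketch outside of a web framework; the state value and authorization code are placeholders supplied by the caller and by Google's redirect:

from application.parser.connectors.google_drive.auth import GoogleDriveAuth

auth = GoogleDriveAuth()  # requires GOOGLE_CLIENT_ID / GOOGLE_CLIENT_SECRET in settings

# Step 1: send the user to Google's consent screen.
print("Visit:", auth.get_authorization_url(state="csrf-placeholder"))

# Step 2: Google redirects back with ?code=...; exchange it for tokens.
token_info = auth.exchange_code_for_tokens("4/0A-placeholder-code")

# Step 3: refresh lazily once the stored token has expired.
if auth.is_token_expired(token_info):
    token_info = auth.refresh_access_token(token_info["refresh_token"])

# Step 4: rebuild credentials and confirm they still reach the Drive API.
credentials = auth.create_credentials_from_token_info(token_info)
print("credentials valid:", auth.validate_credentials(credentials))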

View File

@@ -0,0 +1,559 @@
"""
Google Drive loader for DocsGPT.
Loads documents from Google Drive using Google Drive API.
"""
import io
import logging
import os
from typing import List, Dict, Any, Optional
from googleapiclient.http import MediaIoBaseDownload
from googleapiclient.errors import HttpError
from application.parser.connectors.base import BaseConnectorLoader
from application.parser.connectors.google_drive.auth import GoogleDriveAuth
from application.parser.schema.base import Document
class GoogleDriveLoader(BaseConnectorLoader):
SUPPORTED_MIME_TYPES = {
'application/pdf': '.pdf',
'application/vnd.google-apps.document': '.docx',
'application/vnd.google-apps.presentation': '.pptx',
'application/vnd.google-apps.spreadsheet': '.xlsx',
'application/vnd.openxmlformats-officedocument.wordprocessingml.document': '.docx',
'application/vnd.openxmlformats-officedocument.presentationml.presentation': '.pptx',
'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet': '.xlsx',
'application/msword': '.doc',
'application/vnd.ms-powerpoint': '.ppt',
'application/vnd.ms-excel': '.xls',
'text/plain': '.txt',
'text/csv': '.csv',
'text/html': '.html',
'text/markdown': '.md',
'text/x-rst': '.rst',
'application/json': '.json',
'application/epub+zip': '.epub',
'application/rtf': '.rtf',
'image/jpeg': '.jpg',
'image/jpg': '.jpg',
'image/png': '.png',
}
EXPORT_FORMATS = {
'application/vnd.google-apps.document': 'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
'application/vnd.google-apps.presentation': 'application/vnd.openxmlformats-officedocument.presentationml.presentation',
'application/vnd.google-apps.spreadsheet': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
}
def __init__(self, session_token: str):
self.auth = GoogleDriveAuth()
self.session_token = session_token
token_info = self.auth.get_token_info_from_session(session_token)
self.credentials = self.auth.create_credentials_from_token_info(token_info)
try:
self.service = self.auth.build_drive_service(self.credentials)
except Exception as e:
logging.warning(f"Could not build Google Drive service: {e}")
self.service = None
self.next_page_token = None
def _process_file(self, file_metadata: Dict[str, Any], load_content: bool = True) -> Optional[Document]:
try:
file_id = file_metadata.get('id')
file_name = file_metadata.get('name', 'Unknown')
mime_type = file_metadata.get('mimeType', 'application/octet-stream')
if mime_type not in self.SUPPORTED_MIME_TYPES and not mime_type.startswith('application/vnd.google-apps.'):
return None
if mime_type not in self.SUPPORTED_MIME_TYPES and not mime_type.startswith('application/vnd.google-apps.'):
logging.info(f"Skipping unsupported file type: {mime_type} for file {file_name}")
return None
# Google Drive provides timezone-aware ISO8601 dates
doc_metadata = {
'file_name': file_name,
'mime_type': mime_type,
'size': file_metadata.get('size', None),
'created_time': file_metadata.get('createdTime'),
'modified_time': file_metadata.get('modifiedTime'),
'parents': file_metadata.get('parents', []),
'source': 'google_drive'
}
if not load_content:
return Document(
text="",
doc_id=file_id,
extra_info=doc_metadata
)
content = self._download_file_content(file_id, mime_type)
if content is None:
logging.warning(f"Could not load content for file {file_name} ({file_id})")
return None
return Document(
text=content,
doc_id=file_id,
extra_info=doc_metadata
)
except Exception as e:
logging.error(f"Error processing file: {e}")
return None
def load_data(self, inputs: Dict[str, Any]) -> List[Document]:
session_token = inputs.get('session_token')
if session_token and session_token != self.session_token:
logging.warning("Session token in inputs differs from loader's session token. Using loader's session token.")
self.config = inputs
try:
documents: List[Document] = []
folder_id = inputs.get('folder_id')
file_ids = inputs.get('file_ids', [])
limit = inputs.get('limit', 100)
list_only = inputs.get('list_only', False)
load_content = not list_only
page_token = inputs.get('page_token')
search_query = inputs.get('search_query')
self.next_page_token = None
if file_ids:
# Specific files requested: load them
for file_id in file_ids:
try:
doc = self._load_file_by_id(file_id, load_content=load_content)
if doc:
if not search_query or (
search_query.lower() in doc.extra_info.get('file_name', '').lower()
):
documents.append(doc)
elif hasattr(self, '_credential_refreshed') and self._credential_refreshed:
self._credential_refreshed = False
logging.info(f"Retrying load of file {file_id} after credential refresh")
doc = self._load_file_by_id(file_id, load_content=load_content)
if doc and (
not search_query or
search_query.lower() in doc.extra_info.get('file_name', '').lower()
):
documents.append(doc)
except Exception as e:
logging.error(f"Error loading file {file_id}: {e}")
continue
else:
# Browsing mode: list immediate children of provided folder or root
parent_id = folder_id if folder_id else 'root'
documents = self._list_items_in_parent(
parent_id,
limit=limit,
load_content=load_content,
page_token=page_token,
search_query=search_query
)
logging.info(f"Loaded {len(documents)} documents from Google Drive")
return documents
except Exception as e:
logging.error(f"Error loading data from Google Drive: {e}", exc_info=True)
raise
def _load_file_by_id(self, file_id: str, load_content: bool = True) -> Optional[Document]:
self._ensure_service()
try:
file_metadata = self.service.files().get(
fileId=file_id,
fields='id,name,mimeType,size,createdTime,modifiedTime,parents'
).execute()
return self._process_file(file_metadata, load_content=load_content)
except HttpError as e:
logging.error(f"HTTP error loading file {file_id}: {e.resp.status} - {e.content}")
if e.resp.status in [401, 403]:
if hasattr(self.credentials, 'refresh_token') and self.credentials.refresh_token:
try:
from google.auth.transport.requests import Request
self.credentials.refresh(Request())
self._ensure_service()
return None
except Exception as refresh_error:
raise ValueError(f"Authentication failed and could not be refreshed: {refresh_error}")
else:
raise ValueError("Authentication failed and cannot be refreshed: missing refresh_token")
return None
except Exception as e:
logging.error(f"Error loading file {file_id}: {e}")
return None
def _list_items_in_parent(self, parent_id: str, limit: int = 100, load_content: bool = False, page_token: Optional[str] = None, search_query: Optional[str] = None) -> List[Document]:
self._ensure_service()
documents: List[Document] = []
try:
query = f"'{parent_id}' in parents and trashed=false"
if search_query:
safe_search = search_query.replace("'", "\\'")
query += f" and name contains '{safe_search}'"
next_token_out: Optional[str] = None
while True:
page_size = 100
if limit:
remaining = max(0, limit - len(documents))
if remaining == 0:
break
page_size = min(100, remaining)
results = self.service.files().list(
q=query,
fields='nextPageToken,files(id,name,mimeType,size,createdTime,modifiedTime,parents)',
pageToken=page_token,
pageSize=page_size,
orderBy='name'
).execute()
items = results.get('files', [])
for item in items:
mime_type = item.get('mimeType')
if mime_type == 'application/vnd.google-apps.folder':
doc_metadata = {
'file_name': item.get('name', 'Unknown'),
'mime_type': mime_type,
'size': item.get('size', None),
'created_time': item.get('createdTime'),
'modified_time': item.get('modifiedTime'),
'parents': item.get('parents', []),
'source': 'google_drive',
'is_folder': True
}
documents.append(Document(text="", doc_id=item.get('id'), extra_info=doc_metadata))
else:
doc = self._process_file(item, load_content=load_content)
if doc:
documents.append(doc)
if limit and len(documents) >= limit:
self.next_page_token = results.get('nextPageToken')
return documents
page_token = results.get('nextPageToken')
next_token_out = page_token
if not page_token:
break
self.next_page_token = next_token_out
return documents
except Exception as e:
logging.error(f"Error listing items under parent {parent_id}: {e}")
return documents
def _download_file_content(self, file_id: str, mime_type: str) -> Optional[str]:
if not self.credentials.token:
logging.warning("No access token in credentials, attempting to refresh")
if hasattr(self.credentials, 'refresh_token') and self.credentials.refresh_token:
try:
from google.auth.transport.requests import Request
self.credentials.refresh(Request())
logging.info("Credentials refreshed successfully")
self._ensure_service()
except Exception as e:
logging.error(f"Failed to refresh credentials: {e}")
raise ValueError("Authentication failed and cannot be refreshed: missing or invalid refresh_token")
else:
logging.error("No access token and no refresh_token available")
raise ValueError("Authentication failed and cannot be refreshed: missing refresh_token")
if self.credentials.expired:
logging.warning("Credentials are expired, attempting to refresh")
if hasattr(self.credentials, 'refresh_token') and self.credentials.refresh_token:
try:
from google.auth.transport.requests import Request
self.credentials.refresh(Request())
logging.info("Credentials refreshed successfully")
self._ensure_service()
except Exception as e:
logging.error(f"Failed to refresh expired credentials: {e}")
raise ValueError("Authentication failed and cannot be refreshed: expired credentials")
else:
logging.error("Credentials expired and no refresh_token available")
raise ValueError("Authentication failed and cannot be refreshed: missing refresh_token")
try:
if mime_type in self.EXPORT_FORMATS:
export_mime_type = self.EXPORT_FORMATS[mime_type]
request = self.service.files().export_media(
fileId=file_id,
mimeType=export_mime_type
)
else:
request = self.service.files().get_media(fileId=file_id)
file_io = io.BytesIO()
downloader = MediaIoBaseDownload(file_io, request)
done = False
while done is False:
try:
_, done = downloader.next_chunk()
except HttpError as e:
logging.error(f"HTTP error downloading file {file_id}: {e.resp.status} - {e.content}")
return None
except Exception as e:
logging.error(f"Error during download of file {file_id}: {e}")
return None
content_bytes = file_io.getvalue()
try:
content = content_bytes.decode('utf-8')
except UnicodeDecodeError:
try:
content = content_bytes.decode('latin-1')
except UnicodeDecodeError:
logging.error(f"Could not decode file {file_id} as text")
return None
return content
except HttpError as e:
logging.error(f"HTTP error downloading file {file_id}: {e.resp.status} - {e.content}")
if e.resp.status in [401, 403]:
logging.error(f"Authentication error downloading file {file_id}")
if hasattr(self.credentials, 'refresh_token') and self.credentials.refresh_token:
logging.info(f"Attempting to refresh credentials for file {file_id}")
try:
from google.auth.transport.requests import Request
self.credentials.refresh(Request())
logging.info("Credentials refreshed successfully")
self._credential_refreshed = True
self._ensure_service()
return None
except Exception as refresh_error:
logging.error(f"Error refreshing credentials: {refresh_error}")
raise ValueError(f"Authentication failed and could not be refreshed: {refresh_error}")
else:
logging.error("Cannot refresh credentials: missing refresh_token")
raise ValueError("Authentication failed and cannot be refreshed: missing refresh_token")
return None
except Exception as e:
logging.error(f"Error downloading file {file_id}: {e}")
return None
def _download_file_to_directory(self, file_id: str, local_dir: str) -> bool:
try:
self._ensure_service()
return self._download_single_file(file_id, local_dir)
except Exception as e:
logging.error(f"Error downloading file {file_id}: {e}", exc_info=True)
return False
def _ensure_service(self):
if not self.service:
try:
self.service = self.auth.build_drive_service(self.credentials)
except Exception as e:
raise ValueError(f"Cannot access Google Drive: {e}")
def _download_single_file(self, file_id: str, local_dir: str) -> bool:
file_metadata = self.service.files().get(
fileId=file_id,
fields='name,mimeType'
).execute()
file_name = file_metadata['name']
mime_type = file_metadata['mimeType']
if mime_type not in self.SUPPORTED_MIME_TYPES and not mime_type.startswith('application/vnd.google-apps.'):
return False
os.makedirs(local_dir, exist_ok=True)
full_path = os.path.join(local_dir, file_name)
if mime_type in self.EXPORT_FORMATS:
export_mime_type = self.EXPORT_FORMATS[mime_type]
request = self.service.files().export_media(
fileId=file_id,
mimeType=export_mime_type
)
extension = self._get_extension_for_mime_type(export_mime_type)
if not full_path.endswith(extension):
full_path += extension
else:
request = self.service.files().get_media(fileId=file_id)
with open(full_path, 'wb') as f:
downloader = MediaIoBaseDownload(f, request)
done = False
while not done:
_, done = downloader.next_chunk()
return True
def _download_folder_recursive(self, folder_id: str, local_dir: str, recursive: bool = True) -> int:
files_downloaded = 0
try:
os.makedirs(local_dir, exist_ok=True)
query = f"'{folder_id}' in parents and trashed=false"
page_token = None
while True:
results = self.service.files().list(
q=query,
fields='nextPageToken, files(id, name, mimeType)',
pageToken=page_token,
pageSize=1000
).execute()
items = results.get('files', [])
logging.info(f"Found {len(items)} items in folder {folder_id}")
for item in items:
item_name = item['name']
item_id = item['id']
mime_type = item['mimeType']
if mime_type == 'application/vnd.google-apps.folder':
if recursive:
# Create subfolder and recurse
subfolder_path = os.path.join(local_dir, item_name)
os.makedirs(subfolder_path, exist_ok=True)
subfolder_files = self._download_folder_recursive(
item_id,
subfolder_path,
recursive
)
files_downloaded += subfolder_files
logging.info(f"Downloaded {subfolder_files} files from subfolder {item_name}")
else:
# Download file
success = self._download_single_file(item_id, local_dir)
if success:
files_downloaded += 1
logging.info(f"Downloaded file: {item_name}")
else:
logging.warning(f"Failed to download file: {item_name}")
page_token = results.get('nextPageToken')
if not page_token:
break
return files_downloaded
except Exception as e:
logging.error(f"Error in _download_folder_recursive for folder {folder_id}: {e}", exc_info=True)
return files_downloaded
def _get_extension_for_mime_type(self, mime_type: str) -> str:
extensions = {
'application/pdf': '.pdf',
'text/plain': '.txt',
'application/vnd.openxmlformats-officedocument.wordprocessingml.document': '.docx',
'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet': '.xlsx',
'application/vnd.openxmlformats-officedocument.presentationml.presentation': '.pptx',
'text/html': '.html',
'text/markdown': '.md',
}
return extensions.get(mime_type, '.bin')
def _download_folder_contents(self, folder_id: str, local_dir: str, recursive: bool = True) -> int:
try:
self._ensure_service()
return self._download_folder_recursive(folder_id, local_dir, recursive)
except Exception as e:
logging.error(f"Error downloading folder {folder_id}: {e}", exc_info=True)
return 0
def download_to_directory(self, local_dir: str, source_config: dict = None) -> dict:
if source_config is None:
source_config = {}
config = source_config if source_config else getattr(self, 'config', {})
files_downloaded = 0
try:
folder_ids = config.get('folder_ids', [])
file_ids = config.get('file_ids', [])
recursive = config.get('recursive', True)
self._ensure_service()
if file_ids:
if isinstance(file_ids, str):
file_ids = [file_ids]
for file_id in file_ids:
if self._download_file_to_directory(file_id, local_dir):
files_downloaded += 1
# Process folders
if folder_ids:
if isinstance(folder_ids, str):
folder_ids = [folder_ids]
for folder_id in folder_ids:
try:
folder_metadata = self.service.files().get(
fileId=folder_id,
fields='name'
).execute()
folder_name = folder_metadata.get('name', '')
folder_path = os.path.join(local_dir, folder_name)
os.makedirs(folder_path, exist_ok=True)
folder_files = self._download_folder_recursive(
folder_id,
folder_path,
recursive
)
files_downloaded += folder_files
logging.info(f"Downloaded {folder_files} files from folder {folder_name}")
except Exception as e:
logging.error(f"Error downloading folder {folder_id}: {e}", exc_info=True)
if not file_ids and not folder_ids:
raise ValueError("No folder_ids or file_ids provided for download")
return {
"files_downloaded": files_downloaded,
"directory_path": local_dir,
"empty_result": files_downloaded == 0,
"source_type": "google_drive",
"config_used": config
}
except Exception as e:
return {
"files_downloaded": files_downloaded,
"directory_path": local_dir,
"empty_result": True,
"source_type": "google_drive",
"config_used": config,
"error": str(e)
}
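
download_to_directory is the entry point ingestion code would use to mirror selected Drive content onto local disk before parsing. A small sketch, assuming a valid stored session token; the folder id is a placeholder:

import tempfile

from application.parser.connectors.connector_creator import ConnectorCreator

loader = ConnectorCreator.create_connector("google_drive", "stored-session-token")

with tempfile.TemporaryDirectory() as tmp_dir:
    result = loader.download_to_directory(
        tmp_dir,
        {
            "folder_ids": ["1AbCdEfPlaceholder"],  # placeholder Drive folder id
            "recursive": True,
        },
    )
    if result.get("error"):
        print("Download failed:", result["error"])
    else:
        print(result["files_downloaded"], "files downloaded to", result["directory_path"])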

View File

@@ -6,6 +6,21 @@ from application.core.settings import settings
from application.vectorstore.vector_creator import VectorCreator
def sanitize_content(content: str) -> str:
"""
Remove NUL characters that can cause vector store ingestion to fail.
Args:
content (str): Raw content that may contain NUL characters
Returns:
str: Sanitized content with NUL characters removed
"""
if not content:
return content
return content.replace('\x00', '')
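
For illustration, the effect on a string carrying a NUL byte (the sample text is made up):

assert sanitize_content("chunk with a NUL\x00byte") == "chunk with a NULbyte"
assert sanitize_content("") == ""  # falsy input is returned unchanged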
@retry(tries=10, delay=60)
def add_text_to_store_with_retry(store, doc, source_id):
"""
@@ -16,6 +31,9 @@ def add_text_to_store_with_retry(store, doc, source_id):
source_id: Unique identifier for the source.
"""
try:
# Sanitize content to remove NUL characters that cause ingestion failures
doc.page_content = sanitize_content(doc.page_content)
doc.metadata["source_id"] = str(source_id)
store.add_texts([doc.page_content], metadatas=[doc.metadata])
except Exception as e:
@@ -46,7 +64,7 @@ def embed_and_store_documents(docs, folder_name, source_id, task_status):
store = VectorCreator.create_vectorstore(
settings.VECTOR_STORE,
docs_init=docs_init,
source_id=folder_name,
source_id=source_id,
embeddings_key=os.getenv("EMBEDDINGS_KEY"),
)
else:

View File

@@ -15,6 +15,7 @@ from application.parser.file.json_parser import JSONParser
from application.parser.file.pptx_parser import PPTXParser
from application.parser.file.image_parser import ImageParser
from application.parser.schema.base import Document
from application.utils import num_tokens_from_string
DEFAULT_FILE_EXTRACTOR: Dict[str, BaseParser] = {
".pdf": PDFParser(),
@@ -141,11 +142,12 @@ class SimpleDirectoryReader(BaseReader):
Returns:
List[Document]: A list of documents.
"""
data: Union[str, List[str]] = ""
data_list: List[str] = []
metadata_list = []
self.file_token_counts = {}
for input_file in self.input_files:
if input_file.suffix in self.file_extractor:
parser = self.file_extractor[input_file.suffix]
@@ -156,24 +158,48 @@ class SimpleDirectoryReader(BaseReader):
# do standard read
with open(input_file, "r", errors=self.errors) as f:
data = f.read()
# Prepare metadata for this file
if self.file_metadata is not None:
file_metadata = self.file_metadata(input_file.name)
# Calculate token count for this file
if isinstance(data, List):
file_tokens = sum(num_tokens_from_string(str(d)) for d in data)
else:
# Provide a default empty metadata
file_metadata = {'title': '', 'store': ''}
# TODO: Find a case with no metadata and check if breaks anything
file_tokens = num_tokens_from_string(str(data))
full_path = str(input_file.resolve())
self.file_token_counts[full_path] = file_tokens
base_metadata = {
'title': input_file.name,
'token_count': file_tokens,
}
if hasattr(self, 'input_dir'):
try:
relative_path = str(input_file.relative_to(self.input_dir))
base_metadata['source'] = relative_path
except ValueError:
base_metadata['source'] = str(input_file)
else:
base_metadata['source'] = str(input_file)
if self.file_metadata is not None:
custom_metadata = self.file_metadata(input_file.name)
base_metadata.update(custom_metadata)
if isinstance(data, List):
# Extend data_list with each item in the data list
data_list.extend([str(d) for d in data])
# For each item in the data list, add the file's metadata to metadata_list
metadata_list.extend([file_metadata for _ in data])
metadata_list.extend([base_metadata for _ in data])
else:
# Add the single piece of data to data_list
data_list.append(str(data))
# Add the file's metadata to metadata_list
metadata_list.append(file_metadata)
metadata_list.append(base_metadata)
# Build directory structure if input_dir is provided
if hasattr(self, 'input_dir'):
self.directory_structure = self.build_directory_structure(self.input_dir)
logging.info("Directory structure built successfully")
else:
self.directory_structure = {}
if concatenate:
return [Document("\n".join(data_list))]
@@ -181,3 +207,48 @@ class SimpleDirectoryReader(BaseReader):
return [Document(d, extra_info=m) for d, m in zip(data_list, metadata_list)]
else:
return [Document(d) for d in data_list]
def build_directory_structure(self, base_path):
"""Build a dictionary representing the directory structure.
Args:
base_path: The base path to start building the structure from.
Returns:
dict: A nested dictionary representing the directory structure.
"""
import mimetypes
def build_tree(path):
"""Helper function to recursively build the directory tree."""
result = {}
for item in path.iterdir():
if self.exclude_hidden and item.name.startswith('.'):
continue
if item.is_dir():
subtree = build_tree(item)
if subtree:
result[item.name] = subtree
else:
if self.required_exts is not None and item.suffix not in self.required_exts:
continue
full_path = str(item.resolve())
file_size_bytes = item.stat().st_size
mime_type = mimetypes.guess_type(item.name)[0] or "application/octet-stream"
file_info = {
"type": mime_type,
"size_bytes": file_size_bytes
}
if hasattr(self, 'file_token_counts') and full_path in self.file_token_counts:
file_info["token_count"] = self.file_token_counts[full_path]
result[item.name] = file_info
return result
return build_tree(Path(base_path))
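
A sketch of what the new per-file metadata and directory structure look like for a small docs folder. The module path, the input_dir constructor argument, and the load_data() method name are assumptions based on the reader this class is modeled on:

from application.parser.file.bulk import SimpleDirectoryReader  # assumed module path

reader = SimpleDirectoryReader(input_dir="./docs")  # assumed constructor argument
documents = reader.load_data()

for doc in documents[:3]:
    # Each Document now carries title, token_count, and a source path.
    print(doc.extra_info["title"], doc.extra_info["token_count"], doc.extra_info["source"])

# Built as a side effect when input_dir is set: a nested dict with
# {name: subtree} for folders and {"type": mime, "size_bytes": n, "token_count": n} for files.
print(reader.directory_structure)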

View File

@@ -8,6 +8,7 @@ import requests
from typing import Dict, Union
from application.parser.file.base_parser import BaseParser
from application.core.settings import settings
class ImageParser(BaseParser):
@@ -18,10 +19,13 @@ class ImageParser(BaseParser):
return {}
def parse_file(self, file: Path, errors: str = "ignore") -> Union[str, list[str]]:
doc2md_service = "https://llm.arc53.com/doc2md"
# alternatively you can use local vision capable LLM
with open(file, "rb") as file_loaded:
files = {'file': file_loaded}
response = requests.post(doc2md_service, files=files)
data = response.json()["markdown"]
if settings.PARSE_IMAGE_REMOTE:
doc2md_service = "https://llm.arc53.com/doc2md"
# alternatively you can use local vision capable LLM
with open(file, "rb") as file_loaded:
files = {'file': file_loaded}
response = requests.post(doc2md_service, files=files)
data = response.json()["markdown"]
else:
data = ""
return data

View File

@@ -6,6 +6,16 @@ from application.parser.remote.github_loader import GitHubLoader
class RemoteCreator:
"""
Factory class for creating remote content loaders.
These loaders fetch content from remote web sources like URLs,
sitemaps, web crawlers, social media platforms, etc.
For external knowledge base connectors (like Google Drive),
use ConnectorCreator instead.
"""
loaders = {
"url": WebLoader,
"sitemap": SitemapLoader,
@@ -18,5 +28,5 @@ class RemoteCreator:
def create_loader(cls, type, *args, **kwargs):
loader_class = cls.loaders.get(type.lower())
if not loader_class:
raise ValueError(f"No LLM class found for type {type}")
raise ValueError(f"No loader class found for type {type}")
return loader_class(*args, **kwargs)

View File

@@ -2,6 +2,7 @@ anthropic==0.49.0
boto3==1.38.18
beautifulsoup4==4.13.4
celery==5.4.0
cryptography==42.0.8
dataclasses-json==0.6.7
docx2txt==0.8
duckduckgo-search==7.5.2
@@ -11,8 +12,12 @@ esprima==4.0.1
esutils==1.0.1
Flask==3.1.1
faiss-cpu==1.9.0.post1
fastmcp==2.11.0
flask-restx==1.3.0
google-genai==1.3.0
google-api-python-client==2.179.0
google-auth-httplib2==0.2.0
google-auth-oauthlib==1.2.2
gTTS==2.5.4
gunicorn==23.0.0
javalang==0.13.0
@@ -52,13 +57,13 @@ prompt-toolkit==3.0.51
protobuf==5.29.3
psycopg2-binary==2.9.10
py==1.11.0
pydantic==2.10.6
pydantic-core==2.27.2
pydantic-settings==2.7.1
pydantic
pydantic-core
pydantic-settings
pymongo==4.11.3
pypdf==5.5.0
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
python-dotenv
python-jose==3.4.0
python-pptx==1.0.2
redis==5.2.1
@@ -78,7 +83,7 @@ tzdata==2024.2
urllib3==2.3.0
vine==5.1.0
wcwidth==0.2.13
werkzeug==3.1.3
werkzeug>=3.1.0,<3.1.2
yarl==1.20.0
markdownify==1.1.0
tldextract==5.1.3

View File

@@ -5,10 +5,6 @@ class BaseRetriever(ABC):
def __init__(self):
pass
@abstractmethod
def gen(self, *args, **kwargs):
pass
@abstractmethod
def search(self, *args, **kwargs):
pass

View File

@@ -1,4 +1,6 @@
import logging
import os
from application.core.settings import settings
from application.llm.llm_creator import LLMCreator
from application.retriever.base import BaseRetriever
@@ -20,10 +22,25 @@ class ClassicRAG(BaseRetriever):
api_key=settings.API_KEY,
decoded_token=None,
):
self.original_question = ""
"""Initialize ClassicRAG retriever with vectorstore sources and LLM configuration"""
self.original_question = source.get("question", "")
self.chat_history = chat_history if chat_history is not None else []
self.prompt = prompt
self.chunks = chunks
if isinstance(chunks, str):
try:
self.chunks = int(chunks)
except ValueError:
logging.warning(
f"Invalid chunks value '{chunks}', using default value 2"
)
self.chunks = 2
else:
self.chunks = chunks
user_identifier = user_api_key if user_api_key else "default"
logging.info(
f"ClassicRAG initialized with chunks={self.chunks}, user_api_key={user_identifier}, "
f"sources={'active_docs' in source and source['active_docs'] is not None}"
)
self.gpt_model = gpt_model
self.token_limit = (
token_limit
@@ -44,26 +61,48 @@ class ClassicRAG(BaseRetriever):
user_api_key=self.user_api_key,
decoded_token=decoded_token,
)
self.vectorstore = source["active_docs"] if "active_docs" in source else None
if "active_docs" in source and source["active_docs"] is not None:
if isinstance(source["active_docs"], list):
self.vectorstores = source["active_docs"]
else:
self.vectorstores = [source["active_docs"]]
else:
self.vectorstores = []
self.question = self._rephrase_query()
self.decoded_token = decoded_token
self._validate_vectorstore_config()
def _validate_vectorstore_config(self):
"""Validate vectorstore IDs and remove any empty/invalid entries"""
if not self.vectorstores:
logging.warning("No vectorstores configured for retrieval")
return
invalid_ids = [
vs_id for vs_id in self.vectorstores if not vs_id or not vs_id.strip()
]
if invalid_ids:
logging.warning(f"Found invalid vectorstore IDs: {invalid_ids}")
self.vectorstores = [
vs_id for vs_id in self.vectorstores if vs_id and vs_id.strip()
]
def _rephrase_query(self):
"""Rephrase user query with chat history context for better retrieval"""
if (
not self.original_question
or not self.chat_history
or self.chat_history == []
or self.chunks == 0
or self.vectorstore is None
or not self.vectorstores
):
return self.original_question
prompt = f"""Given the following conversation history:
{self.chat_history}
Rephrase the following user question to be a standalone search query
that captures all relevant context from the conversation:
"""
prompt = (
"Given the following conversation history:\n"
f"{self.chat_history}\n\n"
"Rephrase the following user question to be a standalone search query "
"that captures all relevant context from the conversation:\n"
)
messages = [
{"role": "system", "content": prompt},
@@ -79,44 +118,89 @@ class ClassicRAG(BaseRetriever):
return self.original_question
def _get_data(self):
if self.chunks == 0 or self.vectorstore is None:
docs = []
else:
docsearch = VectorCreator.create_vectorstore(
settings.VECTOR_STORE, self.vectorstore, settings.EMBEDDINGS_KEY
"""Retrieve relevant documents from configured vectorstores"""
if self.chunks == 0 or not self.vectorstores:
logging.info(
f"ClassicRAG._get_data: Skipping retrieval - chunks={self.chunks}, "
f"vectorstores_count={len(self.vectorstores) if self.vectorstores else 0}"
)
docs_temp = docsearch.search(self.question, k=self.chunks)
docs = [
{
"title": i.metadata.get(
"title", i.metadata.get("post_title", i.page_content)
).split("/")[-1],
"text": i.page_content,
"source": (
i.metadata.get("source")
if i.metadata.get("source")
else "local"
),
}
for i in docs_temp
]
return []
all_docs = []
chunks_per_source = max(1, self.chunks // len(self.vectorstores))
return docs
logging.info(
f"ClassicRAG._get_data: Starting retrieval with chunks={self.chunks}, "
f"vectorstores={self.vectorstores}, chunks_per_source={chunks_per_source}, "
f"query='{self.question[:50]}...'"
)
def gen():
pass
for vectorstore_id in self.vectorstores:
if vectorstore_id:
try:
docsearch = VectorCreator.create_vectorstore(
settings.VECTOR_STORE, vectorstore_id, settings.EMBEDDINGS_KEY
)
docs_temp = docsearch.search(self.question, k=chunks_per_source)
for doc in docs_temp:
if hasattr(doc, "page_content") and hasattr(doc, "metadata"):
page_content = doc.page_content
metadata = doc.metadata
else:
page_content = doc.get("text", doc.get("page_content", ""))
metadata = doc.get("metadata", {})
title = metadata.get(
"title", metadata.get("post_title", page_content)
)
if not isinstance(title, str):
title = str(title)
title = title.split("/")[-1]
filename = (
metadata.get("filename")
or metadata.get("file_name")
or metadata.get("source")
)
if isinstance(filename, str):
filename = os.path.basename(filename) or filename
else:
filename = title
if not filename:
filename = title
source_path = metadata.get("source") or vectorstore_id
all_docs.append(
{
"title": title,
"text": page_content,
"source": source_path,
"filename": filename,
}
)
except Exception as e:
logging.error(
f"Error searching vectorstore {vectorstore_id}: {e}",
exc_info=True,
)
continue
logging.info(
f"ClassicRAG._get_data: Retrieval complete - retrieved {len(all_docs)} documents "
f"(requested chunks={self.chunks}, chunks_per_source={chunks_per_source})"
)
return all_docs
def search(self, query: str = ""):
"""Search for documents using optional query override"""
if query:
self.original_question = query
self.question = self._rephrase_query()
return self._get_data()
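
With the retriever now accepting a list of sources, the chunk budget is split evenly across them: chunks_per_source = max(1, chunks // len(vectorstores)), so chunks=6 over two stores retrieves three documents from each. A rough usage sketch using keyword arguments taken from the constructor body above; the module path, source ids, and question are placeholders or assumptions:

from application.retriever.classic_rag import ClassicRAG  # assumed module path

retriever = ClassicRAG(
    source={
        "active_docs": ["source-id-1", "source-id-2"],  # placeholder vectorstore ids
        "question": "How do I configure the Google Drive connector?",
    },
    chat_history=[],
    prompt="",
    chunks=6,  # max(1, 6 // 2) = 3 documents per vectorstore
    token_limit=2000,
    gpt_model="gpt-4o-mini",
    user_api_key=None,
)

for doc in retriever.search():
    print(doc["filename"], doc["source"], doc["text"][:80])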
def get_params(self):
"""Return current retriever configuration parameters"""
return {
"question": self.original_question,
"rephrased_question": self.question,
"source": self.vectorstore,
"sources": self.vectorstores,
"chunks": self.chunks,
"token_limit": self.token_limit,
"gpt_model": self.gpt_model,

View File


@@ -0,0 +1,85 @@
import base64
import json
import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers import algorithms, Cipher, modes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from application.core.settings import settings
def _derive_key(user_id: str, salt: bytes) -> bytes:
app_secret = settings.ENCRYPTION_SECRET_KEY
password = f"{app_secret}#{user_id}".encode()
kdf = PBKDF2HMAC(
algorithm=hashes.SHA256(),
length=32,
salt=salt,
iterations=100000,
backend=default_backend(),
)
return kdf.derive(password)
def encrypt_credentials(credentials: dict, user_id: str) -> str:
if not credentials:
return ""
try:
salt = os.urandom(16)
iv = os.urandom(16)
key = _derive_key(user_id, salt)
json_str = json.dumps(credentials)
cipher = Cipher(algorithms.AES(key), modes.CBC(iv), backend=default_backend())
encryptor = cipher.encryptor()
padded_data = _pad_data(json_str.encode())
encrypted_data = encryptor.update(padded_data) + encryptor.finalize()
result = salt + iv + encrypted_data
return base64.b64encode(result).decode()
except Exception as e:
print(f"Warning: Failed to encrypt credentials: {e}")
return ""
def decrypt_credentials(encrypted_data: str, user_id: str) -> dict:
if not encrypted_data:
return {}
try:
data = base64.b64decode(encrypted_data.encode())
salt = data[:16]
iv = data[16:32]
encrypted_content = data[32:]
key = _derive_key(user_id, salt)
cipher = Cipher(algorithms.AES(key), modes.CBC(iv), backend=default_backend())
decryptor = cipher.decryptor()
decrypted_padded = decryptor.update(encrypted_content) + decryptor.finalize()
decrypted_data = _unpad_data(decrypted_padded)
return json.loads(decrypted_data.decode())
except Exception as e:
print(f"Warning: Failed to decrypt credentials: {e}")
return {}
def _pad_data(data: bytes) -> bytes:
block_size = 16
padding_len = block_size - (len(data) % block_size)
padding = bytes([padding_len]) * padding_len
return data + padding
def _unpad_data(data: bytes) -> bytes:
padding_len = data[-1]
return data[:-padding_len]
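
A round-trip sketch of the helpers above. The key is derived per user from ENCRYPTION_SECRET_KEY plus the user id, so a ciphertext produced for one user will not decrypt for another; the module path and sample values are assumptions:

from application.security.encryption import (  # assumed module path
    decrypt_credentials,
    encrypt_credentials,
)

creds = {"access_token": "ya29.placeholder", "refresh_token": "1//placeholder"}

blob = encrypt_credentials(creds, user_id="user-123")
print(decrypt_credentials(blob, user_id="user-123"))  # -> original dict
print(decrypt_credentials(blob, user_id="user-456"))  # -> {} in practice: wrong derived key, decryption fails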

View File


@@ -93,3 +93,32 @@ class BaseStorage(ABC):
List[str]: List of file paths
"""
pass
@abstractmethod
def is_directory(self, path: str) -> bool:
"""
Check if a path is a directory.
Args:
path: Path to check
Returns:
bool: True if the path is a directory
"""
pass
@abstractmethod
def remove_directory(self, directory: str) -> bool:
"""
Remove a directory and all its contents.
For local storage, this removes the directory and all files/subdirectories within it.
For S3 storage, this removes all objects with the directory path as a prefix.
Args:
directory: Directory path to remove
Returns:
bool: True if removal was successful, False otherwise
"""
pass

View File

@@ -101,3 +101,40 @@ class LocalStorage(BaseStorage):
raise FileNotFoundError(f"File not found: {full_path}")
return processor_func(local_path=full_path, **kwargs)
def is_directory(self, path: str) -> bool:
"""
Check if a path is a directory in local storage.
Args:
path: Path to check
Returns:
bool: True if the path is a directory, False otherwise
"""
full_path = self._get_full_path(path)
return os.path.isdir(full_path)
def remove_directory(self, directory: str) -> bool:
"""
Remove a directory and all its contents from local storage.
Args:
directory: Directory path to remove
Returns:
bool: True if removal was successful, False otherwise
"""
full_path = self._get_full_path(directory)
if not os.path.exists(full_path):
return False
if not os.path.isdir(full_path):
return False
try:
shutil.rmtree(full_path)
return True
except (OSError, PermissionError):
return False

View File

@@ -130,3 +130,77 @@ class S3Storage(BaseStorage):
except Exception as e:
logging.error(f"Error processing S3 file {path}: {e}", exc_info=True)
raise
def is_directory(self, path: str) -> bool:
"""
Check if a path is a directory in S3 storage.
In S3, directories are virtual concepts. A path is considered a directory
if there are objects with the path as a prefix.
Args:
path: Path to check
Returns:
bool: True if the path is a directory, False otherwise
"""
# Ensure path ends with a slash if not empty
if path and not path.endswith('/'):
path += '/'
response = self.s3.list_objects_v2(
Bucket=self.bucket_name,
Prefix=path,
MaxKeys=1
)
return 'Contents' in response
def remove_directory(self, directory: str) -> bool:
"""
Remove a directory and all its contents from S3 storage.
In S3, this removes all objects with the directory path as a prefix.
Since S3 doesn't have actual directories, this effectively removes
all files within the virtual directory structure.
Args:
directory: Directory path to remove
Returns:
bool: True if removal was successful, False otherwise
"""
# Ensure directory ends with a slash if not empty
if directory and not directory.endswith('/'):
directory += '/'
try:
# Get all objects with the directory prefix
objects_to_delete = []
paginator = self.s3.get_paginator('list_objects_v2')
pages = paginator.paginate(Bucket=self.bucket_name, Prefix=directory)
for page in pages:
if 'Contents' in page:
for obj in page['Contents']:
objects_to_delete.append({'Key': obj['Key']})
if not objects_to_delete:
return False
batch_size = 1000
for i in range(0, len(objects_to_delete), batch_size):
batch = objects_to_delete[i:i + batch_size]
response = self.s3.delete_objects(
Bucket=self.bucket_name,
Delete={'Objects': batch}
)
if 'Errors' in response and response['Errors']:
return False
return True
except ClientError:
return False
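A caller-side sketch of the new directory helpers. The `StorageCreator` import path is assumed (the factory is only referenced, not shown, in this diff), and the path is illustrative; `remove_directory` returning `False` covers the missing-path, empty-prefix, and delete-error cases described above.
```python
# Illustrative caller-side usage of the new directory helpers.
import logging

from application.storage.storage_creator import StorageCreator  # import path assumed

storage = StorageCreator.get_storage()  # backend selected by settings (LocalStorage or S3Storage)
path = "indexes/user-42/source-1"       # illustrative path

if storage.is_directory(path):
    if not storage.remove_directory(path):
        logging.warning("Directory removal failed (missing, empty prefix, or delete error): %s", path)
```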

View File

@@ -1,84 +1,30 @@
import asyncio
import websockets
import json
import base64
from io import BytesIO
import base64
from application.tts.base import BaseTTS
from application.core.settings import settings
class ElevenlabsTTS(BaseTTS):
def __init__(self):
self.api_key = 'ELEVENLABS_API_KEY'# here you should put your api key
self.model = "eleven_flash_v2_5"
self.voice = "VOICE_ID" # this is the hash code for the voice not the name!
self.write_audio = 1
def __init__(self):
from elevenlabs.client import ElevenLabs
self.client = ElevenLabs(
api_key=settings.ELEVENLABS_API_KEY,
)
def text_to_speech(self, text):
asyncio.run(self._text_to_speech_websocket(text))
lang = "en"
audio = self.client.generate(
text=text,
model="eleven_multilingual_v2",
voice="Brian",
)
audio_data = BytesIO()
for chunk in audio:
audio_data.write(chunk)
audio_bytes = audio_data.getvalue()
async def _text_to_speech_websocket(self, text):
uri = f"wss://api.elevenlabs.io/v1/text-to-speech/{self.voice}/stream-input?model_id={self.model}"
websocket = await websockets.connect(uri)
payload = {
"text": " ",
"voice_settings": {
"stability": 0.5,
"similarity_boost": 0.8,
},
"xi_api_key": self.api_key,
}
await websocket.send(json.dumps(payload))
async def listen():
while 1:
try:
msg = await websocket.recv()
data = json.loads(msg)
if data.get("audio"):
print("audio received")
yield base64.b64decode(data["audio"])
elif data.get("isFinal"):
break
except websockets.exceptions.ConnectionClosed:
print("websocket closed")
break
listen_task = asyncio.create_task(self.stream(listen()))
await websocket.send(json.dumps({"text": text}))
# this is to signal the end of the text, either use this or flush
await websocket.send(json.dumps({"text": ""}))
await listen_task
async def stream(self, audio_stream):
if self.write_audio:
audio_bytes = BytesIO()
async for chunk in audio_stream:
if chunk:
audio_bytes.write(chunk)
with open("output_audio.mp3", "wb") as f:
f.write(audio_bytes.getvalue())
else:
async for chunk in audio_stream:
pass # depends on the streamer!
def test_elevenlabs_websocket():
"""
Tests the ElevenlabsTTS text_to_speech method with a sample prompt.
Prints out the base64-encoded result and writes it to 'output_audio.mp3'.
"""
# Instantiate your TTS class
tts = ElevenlabsTTS()
# Call the method with some sample text
tts.text_to_speech("Hello from ElevenLabs WebSocket!")
print("Saved audio to output_audio.mp3.")
if __name__ == "__main__":
test_elevenlabs_websocket()
# Encode to base64
audio_base64 = base64.b64encode(audio_bytes).decode("utf-8")
return audio_base64, lang
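A hedged usage sketch of the refactored class: it assumes `ELEVENLABS_API_KEY` is set in settings and simply decodes the returned base64 payload back to MP3 bytes.
```python
# Illustrative usage of the refactored ElevenlabsTTS (assumes ELEVENLABS_API_KEY is configured).
import base64

tts = ElevenlabsTTS()  # class defined above
audio_base64, lang = tts.text_to_speech("Hello from DocsGPT!")

with open("output_audio.mp3", "wb") as f:
    f.write(base64.b64decode(audio_base64))
```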

View File

@@ -6,6 +6,7 @@ import uuid
import tiktoken
from flask import jsonify, make_response
from werkzeug.utils import secure_filename
from application.core.settings import settings
@@ -19,6 +20,17 @@ def get_encoding():
return _encoding
def get_gpt_model() -> str:
"""Get the appropriate GPT model based on provider"""
model_map = {
"openai": "gpt-4o-mini",
"anthropic": "claude-2",
"groq": "llama3-8b-8192",
"novita": "deepseek/deepseek-r1",
}
return settings.LLM_NAME or model_map.get(settings.LLM_PROVIDER, "")
def safe_filename(filename):
"""
Creates a safe filename that preserves the original extension.
@@ -32,15 +44,14 @@ def safe_filename(filename):
"""
if not filename:
return str(uuid.uuid4())
_, extension = os.path.splitext(filename)
safe_name = secure_filename(filename)
# If secure_filename returns just the extension or an empty string
if not safe_name or safe_name == extension.lstrip("."):
return f"{str(uuid.uuid4())}{extension}"
return safe_name
@@ -68,7 +79,6 @@ def count_tokens_docs(docs):
docs_content = ""
for doc in docs:
docs_content += doc.page_content
tokens = num_tokens_from_string(docs_content)
return tokens
@@ -80,7 +90,7 @@ def check_required_fields(data, required_fields):
jsonify(
{
"success": False,
"message": f"Missing fields: {', '.join(missing_fields)}",
"message": f"Missing required fields: {', '.join(missing_fields)}",
}
),
400,
@@ -88,6 +98,27 @@ def check_required_fields(data, required_fields):
return None
def validate_required_fields(data, required_fields):
missing_fields = []
empty_fields = []
for field in required_fields:
if field not in data:
missing_fields.append(field)
elif not data[field]:
empty_fields.append(field)
errors = []
if missing_fields:
errors.append(f"Missing required fields: {', '.join(missing_fields)}")
if empty_fields:
errors.append(f"Empty values in required fields: {', '.join(empty_fields)}")
if errors:
return make_response(
jsonify({"success": False, "message": " | ".join(errors)}), 400
)
return None
def get_hash(data):
return hashlib.md5(data.encode(), usedforsecurity=False).hexdigest()
@@ -109,7 +140,6 @@ def limit_chat_history(history, max_token_limit=None, gpt_model="docsgpt"):
if not history:
return []
trimmed_history = []
tokens_current_history = 0
@@ -118,18 +148,15 @@ def limit_chat_history(history, max_token_limit=None, gpt_model="docsgpt"):
if "prompt" in message and "response" in message:
tokens_batch += num_tokens_from_string(message["prompt"])
tokens_batch += num_tokens_from_string(message["response"])
if "tool_calls" in message:
for tool_call in message["tool_calls"]:
tool_call_string = f"Tool: {tool_call.get('tool_name')} | Action: {tool_call.get('action_name')} | Args: {tool_call.get('arguments')} | Response: {tool_call.get('result')}"
tokens_batch += num_tokens_from_string(tool_call_string)
if tokens_current_history + tokens_batch < max_token_limit:
tokens_current_history += tokens_batch
trimmed_history.insert(0, message)
else:
break
return trimmed_history
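A sketch of how the new `validate_required_fields` helper might be wired into a route; the Flask app, endpoint, and payload fields are illustrative and not part of this diff.
```python
# Illustrative Flask handler using the stricter validator added above.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/example", methods=["POST"])
def example():
    data = request.get_json(silent=True) or {}
    error = validate_required_fields(data, ["name", "source_id"])  # helper defined above
    if error:
        return error  # 400 response listing missing and empty fields
    return jsonify({"success": True})
```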

View File

@@ -1,20 +1,43 @@
from abc import ABC, abstractmethod
import logging
import os
from sentence_transformers import SentenceTransformer
from abc import ABC, abstractmethod
from langchain_openai import OpenAIEmbeddings
from sentence_transformers import SentenceTransformer
from application.core.settings import settings
class EmbeddingsWrapper:
def __init__(self, model_name, *args, **kwargs):
self.model = SentenceTransformer(model_name, config_kwargs={'allow_dangerous_deserialization': True}, *args, **kwargs)
self.dimension = self.model.get_sentence_embedding_dimension()
logging.info(f"Initializing EmbeddingsWrapper with model: {model_name}")
try:
kwargs.setdefault("trust_remote_code", True)
self.model = SentenceTransformer(
model_name,
config_kwargs={"allow_dangerous_deserialization": True},
*args,
**kwargs,
)
if self.model is None or self.model._first_module() is None:
raise ValueError(
f"SentenceTransformer model failed to load properly for: {model_name}"
)
self.dimension = self.model.get_sentence_embedding_dimension()
logging.info(f"Successfully loaded model with dimension: {self.dimension}")
except Exception as e:
logging.error(
f"Failed to initialize SentenceTransformer with model {model_name}: {str(e)}",
exc_info=True,
)
raise
def embed_query(self, query: str):
return self.model.encode(query).tolist()
def embed_documents(self, documents: list):
return self.model.encode(documents).tolist()
def __call__(self, text):
if isinstance(text, str):
return self.embed_query(text)
@@ -24,15 +47,14 @@ class EmbeddingsWrapper:
raise ValueError("Input must be a string or a list of strings")
class EmbeddingsSingleton:
_instances = {}
@staticmethod
def get_instance(embeddings_name, *args, **kwargs):
if embeddings_name not in EmbeddingsSingleton._instances:
EmbeddingsSingleton._instances[embeddings_name] = EmbeddingsSingleton._create_instance(
embeddings_name, *args, **kwargs
EmbeddingsSingleton._instances[embeddings_name] = (
EmbeddingsSingleton._create_instance(embeddings_name, *args, **kwargs)
)
return EmbeddingsSingleton._instances[embeddings_name]
@@ -40,9 +62,15 @@ class EmbeddingsSingleton:
def _create_instance(embeddings_name, *args, **kwargs):
embeddings_factory = {
"openai_text-embedding-ada-002": OpenAIEmbeddings,
"huggingface_sentence-transformers/all-mpnet-base-v2": lambda: EmbeddingsWrapper("sentence-transformers/all-mpnet-base-v2"),
"huggingface_sentence-transformers-all-mpnet-base-v2": lambda: EmbeddingsWrapper("sentence-transformers/all-mpnet-base-v2"),
"huggingface_hkunlp/instructor-large": lambda: EmbeddingsWrapper("hkunlp/instructor-large"),
"huggingface_sentence-transformers/all-mpnet-base-v2": lambda: EmbeddingsWrapper(
"sentence-transformers/all-mpnet-base-v2"
),
"huggingface_sentence-transformers-all-mpnet-base-v2": lambda: EmbeddingsWrapper(
"sentence-transformers/all-mpnet-base-v2"
),
"huggingface_hkunlp/instructor-large": lambda: EmbeddingsWrapper(
"hkunlp/instructor-large"
),
}
if embeddings_name in embeddings_factory:
@@ -50,41 +78,83 @@ class EmbeddingsSingleton:
else:
return EmbeddingsWrapper(embeddings_name, *args, **kwargs)
class BaseVectorStore(ABC):
def __init__(self):
pass
@abstractmethod
def search(self, *args, **kwargs):
"""Search for similar documents/chunks in the vectorstore"""
pass
@abstractmethod
def add_texts(self, texts, metadatas=None, *args, **kwargs):
"""Add texts with their embeddings to the vectorstore"""
pass
def delete_index(self, *args, **kwargs):
"""Delete the entire index/collection"""
pass
def save_local(self, *args, **kwargs):
"""Save vectorstore to local storage"""
pass
def get_chunks(self, *args, **kwargs):
"""Get all chunks from the vectorstore"""
pass
def add_chunk(self, text, metadata=None, *args, **kwargs):
"""Add a single chunk to the vectorstore"""
pass
def delete_chunk(self, chunk_id, *args, **kwargs):
"""Delete a specific chunk from the vectorstore"""
pass
def is_azure_configured(self):
return settings.OPENAI_API_BASE and settings.OPENAI_API_VERSION and settings.AZURE_DEPLOYMENT_NAME
return (
settings.OPENAI_API_BASE
and settings.OPENAI_API_VERSION
and settings.AZURE_DEPLOYMENT_NAME
)
def _get_embeddings(self, embeddings_name, embeddings_key=None):
if embeddings_name == "openai_text-embedding-ada-002":
if self.is_azure_configured():
os.environ["OPENAI_API_TYPE"] = "azure"
embedding_instance = EmbeddingsSingleton.get_instance(
embeddings_name,
model=settings.AZURE_EMBEDDINGS_DEPLOYMENT_NAME
embeddings_name, model=settings.AZURE_EMBEDDINGS_DEPLOYMENT_NAME
)
else:
embedding_instance = EmbeddingsSingleton.get_instance(
embeddings_name,
openai_api_key=embeddings_key
embeddings_name, openai_api_key=embeddings_key
)
elif embeddings_name == "huggingface_sentence-transformers/all-mpnet-base-v2":
if os.path.exists("./models/all-mpnet-base-v2"):
possible_paths = [
"/app/models/all-mpnet-base-v2", # Docker absolute path
"./models/all-mpnet-base-v2", # Relative path
]
local_model_path = None
for path in possible_paths:
if os.path.exists(path):
local_model_path = path
logging.info(f"Found local model at path: {path}")
break
else:
logging.info(f"Path does not exist: {path}")
if local_model_path:
embedding_instance = EmbeddingsSingleton.get_instance(
embeddings_name = "./models/all-mpnet-base-v2",
local_model_path,
)
else:
logging.warning(
f"Local model not found in any of the paths: {possible_paths}. Falling back to HuggingFace download."
)
embedding_instance = EmbeddingsSingleton.get_instance(
embeddings_name,
)
else:
embedding_instance = EmbeddingsSingleton.get_instance(embeddings_name)
return embedding_instance
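For reference, a minimal sketch of resolving a shared embeddings instance through the singleton above; the model name matches one of the factory keys shown, and the query text is illustrative.
```python
# Illustrative: one shared SentenceTransformer-backed instance per embeddings name.
emb = EmbeddingsSingleton.get_instance(
    "huggingface_sentence-transformers/all-mpnet-base-v2"
)
vector = emb.embed_query("What does DocsGPT do?")
print(len(vector))  # embedding dimension (768 for all-mpnet-base-v2)
```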

View File

@@ -1,5 +1,6 @@
import os
import tempfile
import io
from langchain_community.vectorstores import FAISS
@@ -66,8 +67,37 @@ class FaissStore(BaseVectorStore):
def add_texts(self, *args, **kwargs):
return self.docsearch.add_texts(*args, **kwargs)
def save_local(self, *args, **kwargs):
return self.docsearch.save_local(*args, **kwargs)
def _save_to_storage(self):
"""
Save the FAISS index to storage using temporary directory pattern.
Works consistently for both local and S3 storage.
"""
with tempfile.TemporaryDirectory() as temp_dir:
self.docsearch.save_local(temp_dir)
faiss_path = os.path.join(temp_dir, "index.faiss")
pkl_path = os.path.join(temp_dir, "index.pkl")
with open(faiss_path, "rb") as f_faiss:
faiss_data = f_faiss.read()
with open(pkl_path, "rb") as f_pkl:
pkl_data = f_pkl.read()
storage_path = get_vectorstore(self.source_id)
self.storage.save_file(io.BytesIO(faiss_data), f"{storage_path}/index.faiss")
self.storage.save_file(io.BytesIO(pkl_data), f"{storage_path}/index.pkl")
return True
def save_local(self, path=None):
if path:
os.makedirs(path, exist_ok=True)
self.docsearch.save_local(path)
self._save_to_storage()
return True
def delete_index(self, *args, **kwargs):
return self.docsearch.delete(*args, **kwargs)
@@ -103,13 +133,17 @@ class FaissStore(BaseVectorStore):
return chunks
def add_chunk(self, text, metadata=None):
"""Add a new chunk and save to storage."""
metadata = metadata or {}
doc = Document(text=text, extra_info=metadata).to_langchain_format()
doc_id = self.docsearch.add_documents([doc])
self.save_local(self.path)
self._save_to_storage()
return doc_id
def delete_chunk(self, chunk_id):
"""Delete a chunk and save to storage."""
self.delete_index([chunk_id])
self.save_local(self.path)
self._save_to_storage()
return True

View File

@@ -0,0 +1,303 @@
import logging
from typing import List, Optional, Any, Dict
from application.core.settings import settings
from application.vectorstore.base import BaseVectorStore
from application.vectorstore.document_class import Document
class PGVectorStore(BaseVectorStore):
def __init__(
self,
source_id: str = "",
embeddings_key: str = "embeddings",
table_name: str = "documents",
vector_column: str = "embedding",
text_column: str = "text",
metadata_column: str = "metadata",
connection_string: str = None,
):
super().__init__()
# Store the source_id for use in add_chunk
self._source_id = str(source_id).replace("application/indexes/", "").rstrip("/")
self._embeddings_key = embeddings_key
self._table_name = table_name
self._vector_column = vector_column
self._text_column = text_column
self._metadata_column = metadata_column
self._embedding = self._get_embeddings(settings.EMBEDDINGS_NAME, embeddings_key)
# Use provided connection string or fall back to settings
self._connection_string = connection_string or getattr(settings, 'PGVECTOR_CONNECTION_STRING', None)
if not self._connection_string:
raise ValueError(
"PostgreSQL connection string is required. "
"Set PGVECTOR_CONNECTION_STRING in settings or pass connection_string parameter."
)
try:
import psycopg2
from psycopg2.extras import Json
import pgvector.psycopg2
except ImportError:
raise ImportError(
"Could not import required packages. "
"Please install with `pip install psycopg2-binary pgvector`."
)
self._psycopg2 = psycopg2
self._Json = Json
self._pgvector = pgvector.psycopg2
self._connection = None
self._ensure_table_exists()
def _get_connection(self):
"""Get or create database connection"""
if self._connection is None or self._connection.closed:
self._connection = self._psycopg2.connect(self._connection_string)
# Register pgvector types
self._pgvector.register_vector(self._connection)
return self._connection
def _ensure_table_exists(self):
"""Create table and enable pgvector extension if they don't exist"""
conn = self._get_connection()
cursor = conn.cursor()
try:
# Enable pgvector extension
cursor.execute("CREATE EXTENSION IF NOT EXISTS vector;")
# Get embedding dimension
embedding_dim = getattr(self._embedding, 'dimension', 1536) # Default to OpenAI dimension
# Create table with vector column
create_table_query = f"""
CREATE TABLE IF NOT EXISTS {self._table_name} (
id SERIAL PRIMARY KEY,
{self._text_column} TEXT NOT NULL,
{self._vector_column} vector({embedding_dim}),
{self._metadata_column} JSONB,
source_id TEXT NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
"""
cursor.execute(create_table_query)
# Create index for vector similarity search
index_query = f"""
CREATE INDEX IF NOT EXISTS {self._table_name}_{self._vector_column}_idx
ON {self._table_name} USING ivfflat ({self._vector_column} vector_cosine_ops)
WITH (lists = 100);
"""
cursor.execute(index_query)
# Create index for source_id filtering
source_index_query = f"""
CREATE INDEX IF NOT EXISTS {self._table_name}_source_id_idx
ON {self._table_name} (source_id);
"""
cursor.execute(source_index_query)
conn.commit()
except Exception as e:
conn.rollback()
logging.error(f"Error creating table: {e}")
raise
finally:
cursor.close()
def search(self, question: str, k: int = 2, *args, **kwargs) -> List[Document]:
"""Search for similar documents using vector similarity"""
query_vector = self._embedding.embed_query(question)
conn = self._get_connection()
cursor = conn.cursor()
try:
# Use cosine distance for similarity search with proper vector formatting
search_query = f"""
SELECT {self._text_column}, {self._metadata_column},
({self._vector_column} <=> %s::vector) as distance
FROM {self._table_name}
WHERE source_id = %s
ORDER BY {self._vector_column} <=> %s::vector
LIMIT %s;
"""
cursor.execute(search_query, (query_vector, self._source_id, query_vector, k))
results = cursor.fetchall()
documents = []
for text, metadata, distance in results:
metadata = metadata or {}
documents.append(Document(page_content=text, metadata=metadata))
return documents
except Exception as e:
logging.error(f"Error searching documents: {e}", exc_info=True)
return []
finally:
cursor.close()
def add_texts(
self,
texts: List[str],
metadatas: Optional[List[Dict[str, Any]]] = None,
*args,
**kwargs,
) -> List[str]:
"""Add texts with their embeddings to the vector store"""
if not texts:
return []
embeddings = self._embedding.embed_documents(texts)
metadatas = metadatas or [{}] * len(texts)
conn = self._get_connection()
cursor = conn.cursor()
try:
insert_query = f"""
INSERT INTO {self._table_name} ({self._text_column}, {self._vector_column}, {self._metadata_column}, source_id)
VALUES (%s, %s, %s, %s)
RETURNING id;
"""
inserted_ids = []
for text, embedding, metadata in zip(texts, embeddings, metadatas):
cursor.execute(
insert_query,
(text, embedding, self._Json(metadata), self._source_id)
)
inserted_id = cursor.fetchone()[0]
inserted_ids.append(str(inserted_id))
conn.commit()
return inserted_ids
except Exception as e:
conn.rollback()
logging.error(f"Error adding texts: {e}")
raise
finally:
cursor.close()
def delete_index(self, *args, **kwargs):
"""Delete all documents for this source_id"""
conn = self._get_connection()
cursor = conn.cursor()
try:
delete_query = f"DELETE FROM {self._table_name} WHERE source_id = %s;"
cursor.execute(delete_query, (self._source_id,))
conn.commit()
except Exception as e:
conn.rollback()
logging.error(f"Error deleting index: {e}")
raise
finally:
cursor.close()
def save_local(self, *args, **kwargs):
"""No-op for PostgreSQL - data is already persisted"""
pass
def get_chunks(self) -> List[Dict[str, Any]]:
"""Get all chunks for this source_id"""
conn = self._get_connection()
cursor = conn.cursor()
try:
select_query = f"""
SELECT id, {self._text_column}, {self._metadata_column}
FROM {self._table_name}
WHERE source_id = %s;
"""
cursor.execute(select_query, (self._source_id,))
results = cursor.fetchall()
chunks = []
for doc_id, text, metadata in results:
chunks.append({
"doc_id": str(doc_id),
"text": text,
"metadata": metadata or {}
})
return chunks
except Exception as e:
logging.error(f"Error getting chunks: {e}")
return []
finally:
cursor.close()
def add_chunk(self, text: str, metadata: Optional[Dict[str, Any]] = None) -> str:
"""Add a single chunk to the vector store"""
metadata = metadata or {}
# Create a copy to avoid modifying the original metadata
final_metadata = metadata.copy()
# Ensure the source_id is in the metadata so the chunk can be found by filters
final_metadata["source_id"] = self._source_id
embeddings = self._embedding.embed_documents([text])
if not embeddings:
raise ValueError("Could not generate embedding for chunk")
conn = self._get_connection()
cursor = conn.cursor()
try:
insert_query = f"""
INSERT INTO {self._table_name} ({self._text_column}, {self._vector_column}, {self._metadata_column}, source_id)
VALUES (%s, %s, %s, %s)
RETURNING id;
"""
cursor.execute(
insert_query,
(text, embeddings[0], self._Json(final_metadata), self._source_id)
)
inserted_id = cursor.fetchone()[0]
conn.commit()
return str(inserted_id)
except Exception as e:
conn.rollback()
logging.error(f"Error adding chunk: {e}")
raise
finally:
cursor.close()
def delete_chunk(self, chunk_id: str) -> bool:
"""Delete a specific chunk by its ID"""
conn = self._get_connection()
cursor = conn.cursor()
try:
delete_query = f"DELETE FROM {self._table_name} WHERE id = %s AND source_id = %s;"
cursor.execute(delete_query, (int(chunk_id), self._source_id))
deleted_count = cursor.rowcount
conn.commit()
return deleted_count > 0
except Exception as e:
conn.rollback()
logging.error(f"Error deleting chunk: {e}")
return False
finally:
cursor.close()
def __del__(self):
"""Close database connection when object is destroyed"""
if hasattr(self, '_connection') and self._connection and not self._connection.closed:
self._connection.close()
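A minimal usage sketch of the new `PGVectorStore`; it assumes `PGVECTOR_CONNECTION_STRING` is set (or a connection string is passed explicitly) and that `psycopg2` and `pgvector` are installed, as the constructor requires. The source id and texts are illustrative.
```python
# Illustrative usage of the new pgvector-backed store.
store = PGVectorStore(source_id="application/indexes/abc123/")

store.add_texts(
    ["PostgreSQL can act as a DocsGPT vector store via pgvector."],
    metadatas=[{"source": "docs/postgres.md"}],
)
for doc in store.search("Which vector stores are supported?", k=2):
    print(doc.page_content)
```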

View File

@@ -1,5 +1,7 @@
import logging
from application.vectorstore.base import BaseVectorStore
from application.core.settings import settings
from application.vectorstore.document_class import Document
class QdrantStore(BaseVectorStore):
@@ -7,18 +9,22 @@ class QdrantStore(BaseVectorStore):
from qdrant_client import models
from langchain_community.vectorstores.qdrant import Qdrant
# Store the source_id for use in add_chunk
self._source_id = str(source_id).replace("application/indexes/", "").rstrip("/")
self._filter = models.Filter(
must=[
models.FieldCondition(
key="metadata.source_id",
match=models.MatchValue(value=source_id.replace("application/indexes/", "").rstrip("/")),
match=models.MatchValue(value=self._source_id),
)
]
)
embedding=self._get_embeddings(settings.EMBEDDINGS_NAME, embeddings_key)
self._docsearch = Qdrant.construct_instance(
["TEXT_TO_OBTAIN_EMBEDDINGS_DIMENSION"],
embedding=self._get_embeddings(settings.EMBEDDINGS_NAME, embeddings_key),
embedding=embedding,
collection_name=settings.QDRANT_COLLECTION_NAME,
location=settings.QDRANT_LOCATION,
url=settings.QDRANT_URL,
@@ -32,6 +38,32 @@ class QdrantStore(BaseVectorStore):
path=settings.QDRANT_PATH,
distance_func=settings.QDRANT_DISTANCE_FUNC,
)
try:
collections = self._docsearch.client.get_collections()
collection_exists = settings.QDRANT_COLLECTION_NAME in [
collection.name for collection in collections.collections
]
if not collection_exists:
self._docsearch.client.recreate_collection(
collection_name=settings.QDRANT_COLLECTION_NAME,
vectors_config=models.VectorParams(size=embedding.client[1].word_embedding_dimension, distance=models.Distance.COSINE),
)
# Ensure the required index exists for metadata.source_id
try:
self._docsearch.client.create_payload_index(
collection_name=settings.QDRANT_COLLECTION_NAME,
field_name="metadata.source_id",
field_schema=models.PayloadSchemaType.KEYWORD,
)
except Exception as index_error:
# Index might already exist, which is fine
if "already exists" not in str(index_error).lower():
logging.warning(f"Could not create index for metadata.source_id: {index_error}")
except Exception as e:
logging.warning(f"Could not check for collection: {e}")
def search(self, *args, **kwargs):
return self._docsearch.similarity_search(filter=self._filter, *args, **kwargs)
@@ -46,3 +78,59 @@ class QdrantStore(BaseVectorStore):
return self._docsearch.client.delete(
collection_name=settings.QDRANT_COLLECTION_NAME, points_selector=self._filter
)
def get_chunks(self):
try:
chunks = []
offset = None
while True:
records, offset = self._docsearch.client.scroll(
collection_name=settings.QDRANT_COLLECTION_NAME,
scroll_filter=self._filter,
limit=10,
with_payload=True,
with_vectors=False,
offset=offset,
)
for record in records:
doc_id = record.id
text = record.payload.get("page_content")
metadata = record.payload.get("metadata")
chunks.append(
{"doc_id": doc_id, "text": text, "metadata": metadata}
)
if offset is None:
break
return chunks
except Exception as e:
logging.error(f"Error getting chunks: {e}", exc_info=True)
return []
def add_chunk(self, text, metadata=None):
import uuid
metadata = metadata or {}
# Create a copy to avoid modifying the original metadata
final_metadata = metadata.copy()
# Ensure the source_id is in the metadata so the chunk can be found by filters
final_metadata["source_id"] = self._source_id
doc = Document(page_content=text, metadata=final_metadata)
# Generate a unique ID for the document
doc_id = str(uuid.uuid4())
doc.id = doc_id
doc_ids = self._docsearch.add_documents([doc])
return doc_ids[0] if doc_ids else doc_id
def delete_chunk(self, chunk_id):
try:
self._docsearch.client.delete(
collection_name=settings.QDRANT_COLLECTION_NAME,
points_selector=[chunk_id],
)
return True
except Exception as e:
logging.error(f"Error deleting chunk: {e}", exc_info=True)
return False

View File

@@ -3,6 +3,7 @@ from application.vectorstore.elasticsearch import ElasticsearchStore
from application.vectorstore.milvus import MilvusStore
from application.vectorstore.mongodb import MongoDBVectorStore
from application.vectorstore.qdrant import QdrantStore
from application.vectorstore.pgvector import PGVectorStore
class VectorCreator:
@@ -12,6 +13,7 @@ class VectorCreator:
"mongodb": MongoDBVectorStore,
"qdrant": QdrantStore,
"milvus": MilvusStore,
"pgvector": PGVectorStore
}
@classmethod
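A short sketch of selecting the newly registered backend through `VectorCreator`, mirroring the `create_vectorstore` call used in the worker changes later in this PR; the source id is illustrative.
```python
# Illustrative: resolving the new "pgvector" backend via the factory.
from application.core.settings import settings

store = VectorCreator.create_vectorstore(
    "pgvector",              # typically settings.VECTOR_STORE
    "abc123",                # source_id (illustrative)
    settings.EMBEDDINGS_KEY,
)
```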

View File

@@ -6,6 +6,7 @@ import os
import shutil
import string
import tempfile
from typing import Any, Dict
import zipfile
from collections import Counter
@@ -16,11 +17,13 @@ from bson.dbref import DBRef
from bson.objectid import ObjectId
from application.agents.agent_creator import AgentCreator
from application.api.answer.routes import get_prompt
from application.api.answer.services.stream_processor import get_prompt
from application.cache import get_redis_instance
from application.core.mongo_db import MongoDB
from application.core.settings import settings
from application.parser.chunking import Chunker
from application.parser.connectors.connector_creator import ConnectorCreator
from application.parser.embedding_pipeline import embed_and_store_documents
from application.parser.file.bulk import SimpleDirectoryReader
from application.parser.remote.remote_creator import RemoteCreator
@@ -35,17 +38,22 @@ db = mongo[settings.MONGO_DB_NAME]
sources_collection = db["sources"]
# Constants
MIN_TOKENS = 150
MAX_TOKENS = 1250
RECURSION_DEPTH = 2
# Define a function to extract metadata from a given filename.
def metadata_from_filename(title):
return {"title": title}
# Define a function to generate a random string of a given length.
def generate_random_string(length):
return "".join([string.ascii_letters[i % 52] for i in range(length)])
@@ -68,7 +76,6 @@ def extract_zip_recursive(zip_path, extract_to, current_depth=0, max_depth=5):
if current_depth > max_depth:
logging.warning(f"Reached maximum recursion depth of {max_depth}")
return
try:
with zipfile.ZipFile(zip_path, "r") as zip_ref:
zip_ref.extractall(extract_to)
@@ -76,12 +83,13 @@ def extract_zip_recursive(zip_path, extract_to, current_depth=0, max_depth=5):
except Exception as e:
logging.error(f"Error extracting zip file {zip_path}: {e}", exc_info=True)
return
# Check for nested zip files and extract them
for root, dirs, files in os.walk(extract_to):
for file in files:
if file.endswith(".zip"):
# If a nested zip file is found, extract it recursively
file_path = os.path.join(root, file)
extract_zip_recursive(file_path, root, current_depth + 1, max_depth)
@@ -98,11 +106,23 @@ def download_file(url, params, dest_path):
def upload_index(full_path, file_data):
files = None
try:
if settings.VECTOR_STORE == "faiss":
faiss_path = full_path + "/index.faiss"
pkl_path = full_path + "/index.pkl"
if not os.path.exists(faiss_path):
logging.error(f"FAISS index file not found: {faiss_path}")
raise FileNotFoundError(f"FAISS index file not found: {faiss_path}")
if not os.path.exists(pkl_path):
logging.error(f"FAISS pickle file not found: {pkl_path}")
raise FileNotFoundError(f"FAISS pickle file not found: {pkl_path}")
files = {
"file_faiss": open(full_path + "/index.faiss", "rb"),
"file_pkl": open(full_path + "/index.pkl", "rb"),
"file_faiss": open(faiss_path, "rb"),
"file_pkl": open(pkl_path, "rb"),
}
response = requests.post(
urljoin(settings.API_URL, "/api/upload_index"),
@@ -114,11 +134,11 @@ def upload_index(full_path, file_data):
urljoin(settings.API_URL, "/api/upload_index"), data=file_data
)
response.raise_for_status()
except requests.RequestException as e:
except (requests.RequestException, FileNotFoundError) as e:
logging.error(f"Error uploading index: {e}")
raise
finally:
if settings.VECTOR_STORE == "faiss":
if settings.VECTOR_STORE == "faiss" and files is not None:
for file in files.values():
file.close()
@@ -139,7 +159,7 @@ def run_agent_logic(agent_config, input_data):
user_api_key = agent_config["key"]
agent_type = agent_config.get("agent_type", "classic")
decoded_token = {"sub": agent_config.get("user")}
prompt = get_prompt(prompt_id)
prompt = get_prompt(prompt_id, db["prompts"])
agent = AgentCreator.create_agent(
agent_type,
endpoint="webhook",
@@ -178,7 +198,6 @@ def run_agent_logic(agent_config, input_data):
tool_calls.extend(line["tool_calls"])
elif "thought" in line:
thought += line["thought"]
result = {
"answer": response_full,
"sources": source_log_docs,
@@ -193,8 +212,10 @@ def run_agent_logic(agent_config, input_data):
# Define the main function for ingesting and processing documents.
def ingest_worker(
self, directory, formats, job_name, filename, user, dir_name=None, user_dir=None, retriever="classic"
self, directory, formats, job_name, file_path, filename, user, retriever="classic"
):
"""
Ingest and process documents.
@@ -204,10 +225,9 @@ def ingest_worker(
directory (str): Specifies the directory for ingesting ('inputs' or 'temp').
formats (list of str): List of file extensions to consider for ingestion (e.g., [".rst", ".md"]).
job_name (str): Name of the job for this ingestion task (original, unsanitized).
filename (str): Name of the file to be ingested.
file_path (str): Complete file path to use consistently throughout the pipeline.
filename (str): Original unsanitized filename provided by the user.
user (str): Identifier for the user initiating the ingestion (original, unsanitized).
dir_name (str, optional): Sanitized directory name for filesystem operations.
user_dir (str, optional): Sanitized user ID for filesystem operations.
retriever (str): Type of retriever to use for processing the documents.
Returns:
@@ -218,38 +238,64 @@ def ingest_worker(
limit = None
exclude = True
sample = False
storage = StorageCreator.get_storage()
full_path = os.path.join(directory, user_dir, dir_name)
source_file_path = os.path.join(full_path, filename)
logging.info(f"Ingest file: {full_path}", extra={"user": user, "job": job_name})
logging.info(f"Ingest path: {file_path}", extra={"user": user, "job": job_name})
# Create temporary working directory
with tempfile.TemporaryDirectory() as temp_dir:
try:
os.makedirs(temp_dir, exist_ok=True)
# Download file from storage to temp directory
temp_file_path = os.path.join(temp_dir, filename)
file_data = storage.get_file(source_file_path)
if storage.is_directory(file_path):
# Handle directory case
logging.info(f"Processing directory: {file_path}")
files_list = storage.list_files(file_path)
with open(temp_file_path, "wb") as f:
f.write(file_data.read())
for storage_file_path in files_list:
if storage.is_directory(storage_file_path):
continue
# Create relative path structure in temp directory
rel_path = os.path.relpath(storage_file_path, file_path)
local_file_path = os.path.join(temp_dir, rel_path)
os.makedirs(os.path.dirname(local_file_path), exist_ok=True)
# Download file
try:
file_data = storage.get_file(storage_file_path)
with open(local_file_path, "wb") as f:
f.write(file_data.read())
except Exception as e:
logging.error(
f"Error downloading file {storage_file_path}: {e}"
)
continue
else:
# Handle single file case
temp_filename = os.path.basename(file_path)
temp_file_path = os.path.join(temp_dir, temp_filename)
file_data = storage.get_file(file_path)
with open(temp_file_path, "wb") as f:
f.write(file_data.read())
# Handle zip files
if temp_filename.endswith(".zip"):
logging.info(f"Extracting zip file: {temp_filename}")
extract_zip_recursive(
temp_file_path,
temp_dir,
current_depth=0,
max_depth=RECURSION_DEPTH,
)
self.update_state(state="PROGRESS", meta={"current": 1})
# Handle zip files
if filename.endswith(".zip"):
logging.info(f"Extracting zip file: {filename}")
extract_zip_recursive(
temp_file_path, temp_dir, current_depth=0, max_depth=RECURSION_DEPTH
)
if sample:
logging.info(f"Sample mode enabled. Using {limit} documents.")
reader = SimpleDirectoryReader(
input_dir=temp_dir,
input_files=input_files,
@@ -260,6 +306,9 @@ def ingest_worker(
)
raw_docs = reader.load_data()
directory_structure = getattr(reader, "directory_structure", {})
logging.info(f"Directory structure from reader: {directory_structure}")
chunker = Chunker(
chunking_strategy="classic_chunk",
max_tokens=MAX_TOKENS,
@@ -285,22 +334,21 @@ def ingest_worker(
for i in range(min(5, len(raw_docs))):
logging.info(f"Sample document {i}: {raw_docs[i]}")
file_data = {
"name": job_name, # Use original job_name
"name": job_name,
"file": filename,
"user": user, # Use original user
"user": user,
"tokens": tokens,
"retriever": retriever,
"id": str(id),
"type": "local",
"original_file_path": source_file_path,
"file_path": file_path,
"directory_structure": json.dumps(directory_structure),
}
upload_index(vector_store_path, file_data)
except Exception as e:
logging.error(f"Error in ingest_worker: {e}", exc_info=True)
raise
return {
"directory": directory,
"formats": formats,
@@ -311,6 +359,323 @@ def ingest_worker(
}
def reingest_source_worker(self, source_id, user):
"""
Re-ingestion worker that handles incremental updates by:
1. Adding chunks from newly added files
2. Removing chunks from deleted files
Args:
self: Task instance
source_id: ID of the source to re-ingest
user: User identifier
Returns:
dict: Information about the re-ingestion task
"""
try:
from application.vectorstore.vector_creator import VectorCreator
self.update_state(
state="PROGRESS",
meta={"current": 10, "status": "Initializing re-ingestion scan"},
)
source = sources_collection.find_one({"_id": ObjectId(source_id), "user": user})
if not source:
raise ValueError(f"Source {source_id} not found or access denied")
storage = StorageCreator.get_storage()
source_file_path = source.get("file_path", "")
self.update_state(
state="PROGRESS", meta={"current": 20, "status": "Scanning current files"}
)
with tempfile.TemporaryDirectory() as temp_dir:
# Download all files from storage to temp directory, preserving directory structure
if storage.is_directory(source_file_path):
files_list = storage.list_files(source_file_path)
for storage_file_path in files_list:
if storage.is_directory(storage_file_path):
continue
rel_path = os.path.relpath(storage_file_path, source_file_path)
local_file_path = os.path.join(temp_dir, rel_path)
os.makedirs(os.path.dirname(local_file_path), exist_ok=True)
# Download file
try:
file_data = storage.get_file(storage_file_path)
with open(local_file_path, "wb") as f:
f.write(file_data.read())
except Exception as e:
logging.error(
f"Error downloading file {storage_file_path}: {e}"
)
continue
reader = SimpleDirectoryReader(
input_dir=temp_dir,
recursive=True,
required_exts=[
".rst",
".md",
".pdf",
".txt",
".docx",
".csv",
".epub",
".html",
".mdx",
".json",
".xlsx",
".pptx",
".png",
".jpg",
".jpeg",
],
exclude_hidden=True,
file_metadata=metadata_from_filename,
)
reader.load_data()
directory_structure = reader.directory_structure
logging.info(
f"Directory structure built with token counts: {directory_structure}"
)
try:
old_directory_structure = source.get("directory_structure") or {}
if isinstance(old_directory_structure, str):
try:
old_directory_structure = json.loads(old_directory_structure)
except Exception:
old_directory_structure = {}
def _flatten_directory_structure(struct, prefix=""):
files = set()
if isinstance(struct, dict):
for name, meta in struct.items():
current_path = (
os.path.join(prefix, name) if prefix else name
)
if isinstance(meta, dict) and (
"type" in meta and "size_bytes" in meta
):
files.add(current_path)
elif isinstance(meta, dict):
files |= _flatten_directory_structure(
meta, current_path
)
return files
old_files = _flatten_directory_structure(old_directory_structure)
new_files = _flatten_directory_structure(directory_structure)
added_files = sorted(new_files - old_files)
removed_files = sorted(old_files - new_files)
if added_files:
logging.info(f"Files added since last ingest: {added_files}")
else:
logging.info("No files added since last ingest.")
if removed_files:
logging.info(f"Files removed since last ingest: {removed_files}")
else:
logging.info("No files removed since last ingest.")
except Exception as e:
logging.error(
f"Error comparing directory structures: {e}", exc_info=True
)
added_files = []
removed_files = []
try:
if not added_files and not removed_files:
logging.info("No changes detected.")
return {
"source_id": source_id,
"user": user,
"status": "no_changes",
"added_files": [],
"removed_files": [],
}
vector_store = VectorCreator.create_vectorstore(
settings.VECTOR_STORE,
source_id,
settings.EMBEDDINGS_KEY,
)
self.update_state(
state="PROGRESS",
meta={"current": 40, "status": "Processing file changes"},
)
# 1) Delete chunks from removed files
deleted = 0
if removed_files:
try:
for ch in vector_store.get_chunks() or []:
metadata = (
ch.get("metadata", {})
if isinstance(ch, dict)
else getattr(ch, "metadata", {})
)
raw_source = metadata.get("source")
source_file = str(raw_source) if raw_source else ""
if source_file in removed_files:
cid = ch.get("doc_id")
if cid:
try:
vector_store.delete_chunk(cid)
deleted += 1
except Exception as de:
logging.error(
f"Failed deleting chunk {cid}: {de}"
)
logging.info(
f"Deleted {deleted} chunks from {len(removed_files)} removed files"
)
except Exception as e:
logging.error(
f"Error during deletion of removed file chunks: {e}",
exc_info=True,
)
# 2) Add chunks from new files
added = 0
if added_files:
try:
# Build list of local files for added files only
added_local_files = []
for rel_path in added_files:
local_path = os.path.join(temp_dir, rel_path)
if os.path.isfile(local_path):
added_local_files.append(local_path)
if added_local_files:
reader_new = SimpleDirectoryReader(
input_files=added_local_files,
exclude_hidden=True,
errors="ignore",
file_metadata=metadata_from_filename,
)
raw_docs_new = reader_new.load_data()
chunker_new = Chunker(
chunking_strategy="classic_chunk",
max_tokens=MAX_TOKENS,
min_tokens=MIN_TOKENS,
duplicate_headers=False,
)
chunked_new = chunker_new.chunk(documents=raw_docs_new)
for (
file_path,
token_count,
) in reader_new.file_token_counts.items():
try:
rel_path = os.path.relpath(
file_path, start=temp_dir
)
path_parts = rel_path.split(os.sep)
current_dir = directory_structure
for part in path_parts[:-1]:
if part in current_dir and isinstance(
current_dir[part], dict
):
current_dir = current_dir[part]
else:
break
filename = path_parts[-1]
if filename in current_dir and isinstance(
current_dir[filename], dict
):
current_dir[filename][
"token_count"
] = token_count
logging.info(
f"Updated token count for {rel_path}: {token_count}"
)
except Exception as e:
logging.warning(
f"Could not update token count for {file_path}: {e}"
)
for d in chunked_new:
meta = dict(d.extra_info or {})
try:
raw_src = meta.get("source")
if isinstance(raw_src, str) and os.path.isabs(
raw_src
):
meta["source"] = os.path.relpath(
raw_src, start=temp_dir
)
except Exception:
pass
vector_store.add_chunk(d.text, metadata=meta)
added += 1
logging.info(
f"Added {added} chunks from {len(added_files)} new files"
)
except Exception as e:
logging.error(
f"Error during ingestion of new files: {e}", exc_info=True
)
# 3) Update source directory structure timestamp
try:
total_tokens = sum(reader.file_token_counts.values())
sources_collection.update_one(
{"_id": ObjectId(source_id)},
{
"$set": {
"directory_structure": directory_structure,
"date": datetime.datetime.now(),
"tokens": total_tokens,
}
},
)
except Exception as e:
logging.error(
f"Error updating directory_structure in DB: {e}", exc_info=True
)
self.update_state(
state="PROGRESS",
meta={"current": 100, "status": "Re-ingestion completed"},
)
return {
"source_id": source_id,
"user": user,
"status": "completed",
"added_files": added_files,
"removed_files": removed_files,
"chunks_added": added,
"chunks_deleted": deleted,
}
except Exception as e:
logging.error(
f"Error while processing file changes: {e}", exc_info=True
)
raise
except Exception as e:
logging.error(f"Error in reingest_source_worker: {e}", exc_info=True)
raise
def remote_worker(
self,
source_data,
@@ -326,7 +691,6 @@ def remote_worker(
full_path = os.path.join(directory, user, name_job)
if not os.path.exists(full_path):
os.makedirs(full_path)
self.update_state(state="PROGRESS", meta={"current": 1})
try:
logging.info("Initializing remote loader with type: %s", loader)
@@ -353,7 +717,6 @@ def remote_worker(
raise ValueError("doc_id must be provided for sync operation.")
id = ObjectId(doc_id)
embed_and_store_documents(docs, full_path, id, self)
self.update_state(state="PROGRESS", meta={"current": 100})
file_data = {
@@ -366,16 +729,16 @@ def remote_worker(
"remote_data": source_data,
"sync_frequency": sync_frequency,
}
upload_index(full_path, file_data)
if operation_mode == "sync":
file_data["last_sync"] = datetime.datetime.now()
upload_index(full_path, file_data)
except Exception as e:
logging.error("Error in remote_worker task: %s", str(e), exc_info=True)
raise
finally:
if os.path.exists(full_path):
shutil.rmtree(full_path)
logging.info("remote_worker task completed successfully")
return {"urls": source_data, "name_job": name_job, "user": user, "limited": False}
@@ -428,7 +791,6 @@ def sync_worker(self, frequency):
sync_counts[
"sync_success" if resp["status"] == "success" else "sync_failure"
] += 1
return {
key: sync_counts[key]
for key in ["total_sync_count", "sync_success", "sync_failure"]
@@ -467,6 +829,9 @@ def attachment_worker(self, file_info, user):
)
token_count = num_tokens_from_string(content)
if token_count > 100000:
content = content[:250000]
token_count = num_tokens_from_string(content)
self.update_state(
state="PROGRESS", meta={"current": 80, "status": "Storing in database"}
@@ -503,7 +868,6 @@ def attachment_worker(self, file_info, user):
"mime_type": mime_type,
"metadata": metadata,
}
except Exception as e:
logging.error(
f"Error processing file {filename}: {e}",
@@ -539,7 +903,6 @@ def agent_webhook_worker(self, agent_id, payload):
except Exception as e:
logging.error(f"Error processing agent webhook: {e}", exc_info=True)
return {"status": "error", "error": str(e)}
self.update_state(state="PROGRESS", meta={"current": 50})
try:
result = run_agent_logic(agent_config, input_data)
@@ -552,3 +915,334 @@ def agent_webhook_worker(self, agent_id, payload):
f"Webhook processed for agent {agent_id}", extra={"agent_id": agent_id}
)
return {"status": "success", "result": result}
def ingest_connector(
self,
job_name: str,
user: str,
source_type: str,
session_token=None,
file_ids=None,
folder_ids=None,
recursive=True,
retriever: str = "classic",
operation_mode: str = "upload",
doc_id=None,
sync_frequency: str = "never",
) -> Dict[str, Any]:
"""
Ingestion for internal knowledge bases (GoogleDrive, etc.).
Args:
job_name: Name of the ingestion job
user: User identifier
source_type: Type of remote source ("google_drive", "dropbox", etc.)
session_token: Authentication token for the service
file_ids: List of file IDs to download
folder_ids: List of folder IDs to download
recursive: Whether to recursively download folders
retriever: Type of retriever to use
operation_mode: "upload" for initial ingestion, "sync" for incremental sync
doc_id: Document ID for sync operations (required when operation_mode="sync")
sync_frequency: How often to sync ("never", "daily", "weekly", "monthly")
"""
logging.info(
f"Starting remote ingestion from {source_type} for user: {user}, job: {job_name}"
)
self.update_state(state="PROGRESS", meta={"current": 1})
with tempfile.TemporaryDirectory() as temp_dir:
try:
# Step 1: Initialize the appropriate loader
self.update_state(
state="PROGRESS",
meta={"current": 10, "status": "Initializing connector"},
)
if not session_token:
raise ValueError(f"{source_type} connector requires session_token")
if not ConnectorCreator.is_supported(source_type):
raise ValueError(
f"Unsupported connector type: {source_type}. Supported types: {ConnectorCreator.get_supported_connectors()}"
)
remote_loader = ConnectorCreator.create_connector(
source_type, session_token
)
# Create a clean config for storage
api_source_config = {
"file_ids": file_ids or [],
"folder_ids": folder_ids or [],
"recursive": recursive,
}
# Step 2: Download files to temp directory
self.update_state(
state="PROGRESS", meta={"current": 20, "status": "Downloading files"}
)
download_info = remote_loader.download_to_directory(
temp_dir, api_source_config
)
if download_info.get("empty_result", False) or not download_info.get(
"files_downloaded", 0
):
logging.warning(f"No files were downloaded from {source_type}")
# Create empty result directly instead of calling a separate method
return {
"name": job_name,
"user": user,
"tokens": 0,
"type": source_type,
"source_config": api_source_config,
"directory_structure": "{}",
}
# Step 3: Use SimpleDirectoryReader to process downloaded files
self.update_state(
state="PROGRESS", meta={"current": 40, "status": "Processing files"}
)
reader = SimpleDirectoryReader(
input_dir=temp_dir,
recursive=True,
required_exts=[
".rst",
".md",
".pdf",
".txt",
".docx",
".csv",
".epub",
".html",
".mdx",
".json",
".xlsx",
".pptx",
".png",
".jpg",
".jpeg",
],
exclude_hidden=True,
file_metadata=metadata_from_filename,
)
raw_docs = reader.load_data()
directory_structure = getattr(reader, "directory_structure", {})
# Step 4: Process documents (chunking, embedding, etc.)
self.update_state(
state="PROGRESS", meta={"current": 60, "status": "Processing documents"}
)
chunker = Chunker(
chunking_strategy="classic_chunk",
max_tokens=MAX_TOKENS,
min_tokens=MIN_TOKENS,
duplicate_headers=False,
)
raw_docs = chunker.chunk(documents=raw_docs)
# Preserve source information in document metadata
for doc in raw_docs:
if hasattr(doc, "extra_info") and doc.extra_info:
source = doc.extra_info.get("source")
if source and os.path.isabs(source):
# Convert absolute path to relative path
doc.extra_info["source"] = os.path.relpath(
source, start=temp_dir
)
docs = [Document.to_langchain_format(raw_doc) for raw_doc in raw_docs]
if operation_mode == "upload":
id = ObjectId()
elif operation_mode == "sync":
if not doc_id or not ObjectId.is_valid(doc_id):
logging.error(
"Invalid doc_id provided for sync operation: %s", doc_id
)
raise ValueError("doc_id must be provided for sync operation.")
id = ObjectId(doc_id)
else:
raise ValueError(f"Invalid operation_mode: {operation_mode}")
vector_store_path = os.path.join(temp_dir, "vector_store")
os.makedirs(vector_store_path, exist_ok=True)
self.update_state(
state="PROGRESS", meta={"current": 80, "status": "Storing documents"}
)
embed_and_store_documents(docs, vector_store_path, id, self)
tokens = count_tokens_docs(docs)
# Step 6: Upload index files
file_data = {
"user": user,
"name": job_name,
"tokens": tokens,
"retriever": retriever,
"id": str(id),
"type": "connector:file",
"remote_data": json.dumps(
{"provider": source_type, **api_source_config}
),
"directory_structure": json.dumps(directory_structure),
"sync_frequency": sync_frequency,
}
if operation_mode == "sync":
file_data["last_sync"] = datetime.datetime.now()
else:
file_data["last_sync"] = datetime.datetime.now()
upload_index(vector_store_path, file_data)
# Ensure we mark the task as complete
self.update_state(
state="PROGRESS", meta={"current": 100, "status": "Complete"}
)
logging.info(f"Remote ingestion completed: {job_name}")
return {
"user": user,
"name": job_name,
"tokens": tokens,
"type": source_type,
"id": str(id),
"status": "complete",
}
except Exception as e:
logging.error(f"Error during remote ingestion: {e}", exc_info=True)
raise
def mcp_oauth(self, config: Dict[str, Any], user_id: str = None) -> Dict[str, Any]:
"""Worker to handle MCP OAuth flow asynchronously."""
logging.info(
"[MCP OAuth] Worker started for user_id=%s, config=%s", user_id, config
)
try:
import asyncio
from application.agents.tools.mcp_tool import MCPTool
task_id = self.request.id
logging.info("[MCP OAuth] Task ID: %s", task_id)
redis_client = get_redis_instance()
def update_status(status_data: Dict[str, Any]):
logging.info("[MCP OAuth] Updating status: %s", status_data)
status_key = f"mcp_oauth_status:{task_id}"
redis_client.setex(status_key, 600, json.dumps(status_data))
update_status(
{
"status": "in_progress",
"message": "Starting OAuth flow...",
"task_id": task_id,
}
)
tool_config = config.copy()
tool_config["oauth_task_id"] = task_id
logging.info("[MCP OAuth] Initializing MCPTool with config: %s", tool_config)
mcp_tool = MCPTool(tool_config, user_id)
async def run_oauth_discovery():
if not mcp_tool._client:
mcp_tool._setup_client()
return await mcp_tool._execute_with_client("list_tools")
update_status(
{
"status": "awaiting_redirect",
"message": "Waiting for OAuth redirect...",
"task_id": task_id,
}
)
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
logging.info("[MCP OAuth] Starting event loop for OAuth discovery...")
tools_response = loop.run_until_complete(run_oauth_discovery())
logging.info(
"[MCP OAuth] Tools response after async call: %s", tools_response
)
status_key = f"mcp_oauth_status:{task_id}"
redis_status = redis_client.get(status_key)
if redis_status:
logging.info(
"[MCP OAuth] Redis status after async call: %s", redis_status
)
else:
logging.warning(
"[MCP OAuth] No Redis status found after async call for key: %s",
status_key,
)
tools = mcp_tool.get_actions_metadata()
update_status(
{
"status": "completed",
"message": f"OAuth completed successfully. Found {len(tools)} tools.",
"tools": tools,
"tools_count": len(tools),
"task_id": task_id,
}
)
logging.info(
"[MCP OAuth] OAuth flow completed successfully for task_id=%s", task_id
)
return {"success": True, "tools": tools, "tools_count": len(tools)}
except Exception as e:
error_msg = f"OAuth flow failed: {str(e)}"
logging.error(
"[MCP OAuth] Exception in OAuth discovery: %s", error_msg, exc_info=True
)
update_status(
{
"status": "error",
"message": error_msg,
"error": str(e),
"task_id": task_id,
}
)
return {"success": False, "error": error_msg}
finally:
logging.info("[MCP OAuth] Closing event loop for task_id=%s", task_id)
loop.close()
except Exception as e:
error_msg = f"Failed to initialize OAuth flow: {str(e)}"
logging.error(
"[MCP OAuth] Exception during initialization: %s", error_msg, exc_info=True
)
update_status(
{
"status": "error",
"message": error_msg,
"error": str(e),
"task_id": task_id,
}
)
return {"success": False, "error": error_msg}
def mcp_oauth_status(self, task_id: str) -> Dict[str, Any]:
"""Check the status of an MCP OAuth flow."""
redis_client = get_redis_instance()
status_key = f"mcp_oauth_status:{task_id}"
status_data = redis_client.get(status_key)
if status_data:
return json.loads(status_data)
return {"status": "not_found", "message": "Status not found"}

View File

@@ -0,0 +1,75 @@
name: docsgpt-oss
services:
frontend:
image: arc53/docsgpt-fe:develop
environment:
- VITE_API_HOST=http://localhost:7091
- VITE_API_STREAMING=$VITE_API_STREAMING
- VITE_GOOGLE_CLIENT_ID=$VITE_GOOGLE_CLIENT_ID
ports:
- "5173:5173"
depends_on:
- backend
backend:
user: root
image: arc53/docsgpt:develop
environment:
- API_KEY=$API_KEY
- EMBEDDINGS_KEY=$API_KEY
- LLM_PROVIDER=$LLM_PROVIDER
- LLM_NAME=$LLM_NAME
- CELERY_BROKER_URL=redis://redis:6379/0
- CELERY_RESULT_BACKEND=redis://redis:6379/1
- MONGO_URI=mongodb://mongo:27017/docsgpt
- CACHE_REDIS_URL=redis://redis:6379/2
- OPENAI_BASE_URL=$OPENAI_BASE_URL
ports:
- "7091:7091"
volumes:
- ../application/indexes:/app/indexes
- ../application/inputs:/app/inputs
- ../application/vectors:/app/vectors
depends_on:
- redis
- mongo
worker:
user: root
image: arc53/docsgpt:develop
command: celery -A application.app.celery worker -l INFO -B
environment:
- API_KEY=$API_KEY
- EMBEDDINGS_KEY=$API_KEY
- LLM_PROVIDER=$LLM_PROVIDER
- LLM_NAME=$LLM_NAME
- CELERY_BROKER_URL=redis://redis:6379/0
- CELERY_RESULT_BACKEND=redis://redis:6379/1
- MONGO_URI=mongodb://mongo:27017/docsgpt
- API_URL=http://backend:7091
- CACHE_REDIS_URL=redis://redis:6379/2
volumes:
- ../application/indexes:/app/indexes
- ../application/inputs:/app/inputs
- ../application/vectors:/app/vectors
depends_on:
- redis
- mongo
redis:
image: redis:6-alpine
ports:
- 6379:6379
mongo:
image: mongo:6
ports:
- 27017:27017
volumes:
- mongodb_data_container:/data/db
volumes:
mongodb_data_container:

View File

@@ -7,6 +7,7 @@ services:
environment:
- VITE_API_HOST=http://localhost:7091
- VITE_API_STREAMING=$VITE_API_STREAMING
- VITE_GOOGLE_CLIENT_ID=$VITE_GOOGLE_CLIENT_ID
ports:
- "5173:5173"
depends_on:

View File

@@ -37,33 +37,33 @@ While modifying `settings.py` offers more flexibility, it's generally recommende
Here are some of the most fundamental settings you'll likely want to configure:
- **`LLM_PROVIDER`**: This setting determines which Large Language Model (LLM) provider DocsGPT will use. It tells DocsGPT which API to interact with.
- **Common values:**
- `docsgpt`: Use the DocsGPT Public API Endpoint (simple and free, as offered in `setup.sh` option 1).
- `openai`: Use OpenAI's API (requires an API key).
- `google`: Use Google's Vertex AI or Gemini models.
- `anthropic`: Use Anthropic's Claude models.
- `groq`: Use Groq's models.
- `huggingface`: Use HuggingFace Inference API.
- `azure_openai`: Use Azure OpenAI Service.
- `openai` (when using local inference engines like Ollama, Llama.cpp, TGI, etc.): This signals DocsGPT to use an OpenAI-compatible API format, even if the actual LLM is running locally.
- **`LLM_NAME`**: Specifies the specific model to use from the chosen LLM provider. The available models depend on the `LLM_PROVIDER` you've selected.
- **Examples:**
- For `LLM_PROVIDER=openai`: `gpt-4o`
- For `LLM_PROVIDER=google`: `gemini-2.0-flash`
- For local models (e.g., Ollama): `llama3.2:1b` (or any model name available in your setup).
- **`EMBEDDINGS_NAME`**: This setting defines which embedding model DocsGPT will use to generate vector embeddings for your documents. Embeddings are numerical representations of text that allow DocsGPT to understand the semantic meaning of your documents for efficient search and retrieval.
- **`EMBEDDINGS_NAME`**: This setting defines which embedding model DocsGPT will use to generate vector embeddings for your documents. Embeddings are numerical representations of text that allow DocsGPT to understand the semantic meaning of your documents for efficient search and retrieval.
- **Default value:** `huggingface_sentence-transformers/all-mpnet-base-v2` (a good general-purpose embedding model).
- **Other options:** You can explore other embedding models from Hugging Face Sentence Transformers or other providers if needed.
- **Default value:** `huggingface_sentence-transformers/all-mpnet-base-v2` (a good general-purpose embedding model).
- **Other options:** You can explore other embedding models from Hugging Face Sentence Transformers or other providers if needed.
- **`API_KEY`**: Required for most cloud-based LLM providers. This is your authentication key to access the LLM provider's API. You'll need to obtain this key from your chosen provider's platform.
- **`API_KEY`**: Required for most cloud-based LLM providers. This is your authentication key to access the LLM provider's API. You'll need to obtain this key from your chosen provider's platform.
- **`OPENAI_BASE_URL`**: Specifically used when `LLM_PROVIDER` is set to `openai` but you are connecting to a local inference engine (like Ollama, Llama.cpp, etc.) that exposes an OpenAI-compatible API. This setting tells DocsGPT where to find your local LLM server.
- **`OPENAI_BASE_URL`**: Specifically used when `LLM_PROVIDER` is set to `openai` but you are connecting to a local inference engine (like Ollama, Llama.cpp, etc.) that exposes an OpenAI-compatible API. This setting tells DocsGPT where to find your local LLM server.
## Configuration Examples
@@ -93,51 +93,82 @@ OPENAI_BASE_URL=http://host.docker.internal:11434/v1 # Default Ollama API URL wi
EMBEDDINGS_NAME=huggingface_sentence-transformers/all-mpnet-base-v2 # You can also run embeddings locally if needed
```
In this case, even though you are using Ollama locally, `LLM_PROVIDER` is set to `openai` because Ollama (and many other local inference engines) are designed to be API-compatible with OpenAI. `OPENAI_BASE_URL` points DocsGPT to the local Ollama server.
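To make the compatibility concrete, here is a minimal TypeScript sketch (illustrative only; it assumes Ollama is serving the `llama3.2:1b` model mentioned earlier and is run from the host, hence `localhost` rather than `host.docker.internal`) that sends a chat request to the same OpenAI-style endpoint DocsGPT talks to:
```typescript
// Minimal sketch: call the OpenAI-compatible chat endpoint that
// OPENAI_BASE_URL points to (here, a local Ollama server).
const OPENAI_BASE_URL = 'http://localhost:11434/v1';

async function ask(question: string): Promise<string> {
  const res = await fetch(`${OPENAI_BASE_URL}/chat/completions`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'llama3.2:1b', // any model name available in your setup
      messages: [{ role: 'user', content: question }],
    }),
  });
  if (!res.ok) throw new Error(`LLM server returned ${res.status}`);
  const data = await res.json();
  // OpenAI-format responses put the reply in choices[0].message.content
  return data.choices[0].message.content;
}

ask('Hello, are you running locally?').then(console.log);
```
If this request succeeds from your host, a DocsGPT container pointed at the same server via `OPENAI_BASE_URL` (using `host.docker.internal` instead of `localhost`) should reach it too.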
## Authentication Settings
DocsGPT includes JWT (JSON Web Token) based authentication for managing sessions and for securing local deployments while still allowing authorized access.
- **`AUTH_TYPE`**: This setting in your `.env` file or `settings.py` determines the authentication method.
- **Possible values:**
- `None` (or not set): No authentication is used.
- `simple_jwt`: A single, long-lived JWT token is generated and used for all authenticated requests. This is useful for securing a local deployment with a shared secret.
- `session_jwt`: Unique JWT tokens are generated for sessions, typically for individual users or temporary access.
- If `AUTH_TYPE` is set to `simple_jwt` or `session_jwt`, then a `JWT_SECRET_KEY` is required.
- **`JWT_SECRET_KEY`**: This is a crucial secret key used to sign and verify JWTs.
- It can be set directly in your `.env` file or `settings.py`.
- **Automatic Key Generation**: If `AUTH_TYPE` is `simple_jwt` or `session_jwt` and `JWT_SECRET_KEY` is _not_ set in your environment variables or `settings.py`, DocsGPT will attempt to:
1. Read the key from a file named `.jwt_secret_key` in the project's root directory.
2. If the file doesn't exist, it will generate a new 32-byte random key, save it to `.jwt_secret_key`, and use it for the session. This ensures that the key persists across application restarts.
- **Security Note**: It's vital to keep this key secure. If you set it manually, choose a strong, random string.
### `AUTH_TYPE` Overview
**How it works:**
The `AUTH_TYPE` setting in your `.env` file or `settings.py` determines the authentication method used by DocsGPT. This allows you to control how users authenticate with your DocsGPT instance.
- When `AUTH_TYPE` is set to `simple_jwt`, a token is generated at startup (if not already present or configured) and printed to the console. This token should be included in the `Authorization` header of your API requests as a Bearer token (e.g., `Authorization: Bearer YOUR_SIMPLE_JWT_TOKEN`).
- When `AUTH_TYPE` is set to `session_jwt`:
- Clients can request a new token from the `/api/generate_token` endpoint.
- This token should then be included in the `Authorization` header for subsequent requests.
- The backend verifies the JWT token provided in the `Authorization` header for protected routes.
- The `/api/config` endpoint can be used to check the current `auth_type` and whether authentication is required.
| Value | Description |
| ------------- | ------------------------------------------------------------------------------------------- |
| `None` | No authentication is used. Anyone can access the app. |
| `simple_jwt` | A single, long-lived JWT token is generated at startup. All requests use this shared token. |
| `session_jwt` | Unique JWT tokens are generated for each session/user. |
#### How to Configure
Add the following to your `.env` file (or set in `settings.py`):
```env
# No authentication (default)
AUTH_TYPE=None
# OR: Simple JWT (shared token)
AUTH_TYPE=simple_jwt
JWT_SECRET_KEY=your_secret_key_here
# OR: Session JWT (per-user/session tokens)
AUTH_TYPE=session_jwt
JWT_SECRET_KEY=your_secret_key_here
```
- If `AUTH_TYPE` is set to `simple_jwt` or `session_jwt`, a `JWT_SECRET_KEY` is required.
- If `JWT_SECRET_KEY` is not set, DocsGPT will generate one and store it in `.jwt_secret_key` in the project root.
#### How Each Method Works
- **None**: No authentication. All API and UI access is open.
- **simple_jwt**:
- A single JWT token is generated at startup and printed to the console.
- Use this token in the `Authorization` header for all API requests:
```http
Authorization: Bearer <SIMPLE_JWT_TOKEN>
```
- The frontend will prompt for this token if not already set.
- **session_jwt**:
- Clients can request a new token from `/api/generate_token`.
- Use the received token in the `Authorization` header for subsequent requests.
- Each user/session gets a unique token.
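For instance, a client for a `session_jwt` deployment could be sketched like this in TypeScript (the `token` field name in the `/api/generate_token` response and the `/api/answer` request body are assumptions for illustration):
```typescript
// Sketch of the session_jwt flow: fetch a per-session token, then attach it
// as a Bearer token to every subsequent API request.
const API_HOST = 'http://localhost:7091'; // backend URL from the compose file above

async function main() {
  // 1. Request a session token (assumption: it is returned in a `token` field).
  const tokenRes = await fetch(`${API_HOST}/api/generate_token`);
  const { token } = await tokenRes.json();

  // 2. Use it on protected routes (the /api/answer body here is illustrative).
  const res = await fetch(`${API_HOST}/api/answer`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({ question: 'What is DocsGPT?' }),
  });
  console.log(await res.json());
}

main().catch(console.error);
```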
#### Security Notes
- Always keep your `JWT_SECRET_KEY` secure and private.
- If you set it manually, use a strong, random string.
- If not set, DocsGPT will generate a secure key and persist it in `.jwt_secret_key`.
#### Checking Current Auth Type
- Use the `/api/config` endpoint to check the current `auth_type` and whether authentication is required.
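Programmatically, that check might look like the following sketch (the exact field names in the response are assumptions based on the description above):
```typescript
// Sketch: ask the backend how it is configured before deciding whether
// requests need an Authorization header.
const res = await fetch('http://localhost:7091/api/config');
const config = await res.json();
// Assumed fields: the endpoint is described as reporting the auth type
// and whether authentication is required.
console.log('auth_type:', config.auth_type);
console.log('requires auth:', config.requires_auth);
```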
#### Frontend Token Input for `simple_jwt`
If you have configured `AUTH_TYPE=simple_jwt`, the DocsGPT frontend will prompt you to enter the JWT token if it's not already set or is invalid. Paste the `SIMPLE_JWT_TOKEN` (printed to your console when the backend starts) into this field to access the application.
<img
src="/jwt-input.png"
alt="Frontend prompt for JWT Token"
style={{
width: "500px",
maxWidth: "100%",
display: "block",
margin: "1em auto",
}}
/>
## Exploring More Settings
These are just the basic settings to get you started. The `settings.py` file contains many more advanced options that you can explore to further customize DocsGPT, such as:
@@ -147,4 +178,4 @@ These are just the basic settings to get you started. The `settings.py` file con
- Cache settings (`CACHE_REDIS_URL`)
- And many more!
For a complete list of available settings and their descriptions, refer to the `settings.py` file in `application/core`. Remember to restart your Docker containers after making changes to your `.env` file or `settings.py` for the changes to take effect.


@@ -0,0 +1,6 @@
{
"google-drive-connector": {
"title": "🔗 Google Drive",
"href": "/Guides/Integrations/google-drive-connector"
}
}


@@ -0,0 +1,212 @@
---
title: Google Drive Connector
description: Connect your Google Drive as an external knowledge base to upload and process files directly from your Google Drive account.
---
import { Callout } from 'nextra/components'
import { Steps } from 'nextra/components'
# Google Drive Connector
The Google Drive Connector allows you to seamlessly connect your Google Drive account as an external knowledge base. This integration enables you to upload and process files directly from your Google Drive without manually downloading and uploading them to DocsGPT.
## Features
- **Direct File Access**: Browse and select files directly from your Google Drive
- **Comprehensive File Support**: Supports all major document formats including:
- Google Workspace files (Docs, Sheets, Slides)
- Microsoft Office files (.docx, .xlsx, .pptx, .doc, .ppt, .xls)
- PDF documents
- Text files (.txt, .md, .rst, .html, .rtf)
- Data files (.csv, .json)
- Image files (.png, .jpg, .jpeg)
- E-books (.epub)
- **Secure Authentication**: Uses OAuth 2.0 for secure access to your Google Drive
- **Real-time Sync**: Process files directly from Google Drive without local downloads
<Callout type="info" emoji="">
The Google Drive Connector requires proper configuration of Google API credentials. Follow the setup instructions below to enable this feature.
</Callout>
## Prerequisites
Before setting up the Google Drive Connector, you'll need:
1. A Google Cloud Platform (GCP) project
2. Google Drive API enabled
3. OAuth 2.0 credentials configured
4. DocsGPT instance with proper environment variables
## Setup Instructions
<Steps>
### Step 1: Create a Google Cloud Project
1. Go to the [Google Cloud Console](https://console.cloud.google.com/)
2. Create a new project or select an existing one
3. Note down your Project ID for later use
### Step 2: Enable Google Drive API
1. In the Google Cloud Console, navigate to **APIs & Services** > **Library**
2. Search for "Google Drive API"
3. Click on "Google Drive API" and click **Enable**
### Step 3: Create OAuth 2.0 Credentials
1. Go to **APIs & Services** > **Credentials**
2. Click **Create Credentials** > **OAuth client ID**
3. If prompted, configure the OAuth consent screen:
- Choose **External** user type (unless you're using Google Workspace)
- Fill in the required fields (App name, User support email, Developer contact)
- Add your domain to **Authorized domains** if deploying publicly
4. For Application type, select **Web application**
5. Add your DocsGPT frontend URL to **Authorized JavaScript origins**:
- For local development: `http://localhost:3000`
- For production: `https://yourdomain.com`
6. Add your DocsGPT callback URL to **Authorized redirect URIs**:
- For local development: `http://localhost:7091/api/connectors/callback?provider=google_drive`
- For production: `https://yourdomain.com/api/connectors/callback?provider=google_drive`
7. Click **Create** and note down the **Client ID** and **Client Secret**
### Step 4: Configure Backend Environment Variables
Add the following environment variables to your backend configuration:
**For Docker deployment**, add to your `.env` file in the root directory:
```env
# Google Drive Connector Configuration
GOOGLE_CLIENT_ID=your_google_client_id_here
GOOGLE_CLIENT_SECRET=your_google_client_secret_here
```
**For manual deployment**, set these environment variables in your system or application configuration.
### Step 5: Configure Frontend Environment Variables
Add the following environment variables to your frontend `.env` file:
```env
# Google Drive Frontend Configuration
VITE_GOOGLE_CLIENT_ID=your_google_client_id_here
```
<Callout type="warning" emoji="⚠️">
Make sure to use the same Google Client ID in both backend and frontend configurations.
</Callout>
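As a quick sanity check on the frontend side, you can verify that Vite actually exposes the variable to the bundle (Vite only injects variables prefixed with `VITE_`, and only when the dev server or build starts); a minimal sketch:
```typescript
// Sketch: Vite exposes VITE_-prefixed variables on import.meta.env.
// If this logs a warning, restart the dev server or rebuild after
// editing .env -- otherwise the Google Drive option will not appear.
const clientId = import.meta.env.VITE_GOOGLE_CLIENT_ID as string | undefined;
if (!clientId) {
  console.warn('VITE_GOOGLE_CLIENT_ID is not set in the frontend environment.');
}
```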
### Step 6: Restart Your Application
After configuring the environment variables:
1. **For Docker**: Restart your Docker containers
```bash
docker-compose down
docker-compose up -d
```
2. **For manual deployment**: Restart both backend and frontend services
</Steps>
## Using the Google Drive Connector
Once configured, you can use the Google Drive Connector to upload files:
<Steps>
### Step 1: Access the Upload Interface
1. Navigate to the DocsGPT interface
2. Go to the upload/training section
3. You should now see "Google Drive" as an available upload option
### Step 2: Connect Your Google Account
1. Select "Google Drive" as your upload method
2. Click "Connect to Google Drive"
3. You'll be redirected to Google's OAuth consent screen
4. Grant the necessary permissions to DocsGPT
5. You'll be redirected back to DocsGPT with a successful connection
### Step 3: Select Files
1. Once connected, click "Select Files"
2. The Google Drive picker will open
3. Browse your Google Drive and select the files you want to process
4. Click "Select" to confirm your choices
### Step 4: Process Files
1. Review your selected files
2. Click "Train" or "Upload" to process the files
3. DocsGPT will download and process the files from your Google Drive
4. Once processing is complete, the files will be available in your knowledge base
</Steps>
## Supported File Types
The Google Drive Connector supports the following file types:
| File Type | Extensions | Description |
|-----------|------------|-------------|
| **Google Workspace** | - | Google Docs, Sheets, Slides (automatically converted) |
| **Microsoft Office** | .docx, .xlsx, .pptx | Modern Office formats |
| **Legacy Office** | .doc, .ppt, .xls | Older Office formats |
| **PDF Documents** | .pdf | Portable Document Format |
| **Text Files** | .txt, .md, .rst, .html, .rtf | Various text formats |
| **Data Files** | .csv, .json | Structured data formats |
| **Images** | .png, .jpg, .jpeg | Image files (with OCR if enabled) |
| **E-books** | .epub | Electronic publication format |
## Troubleshooting
### Common Issues
**"Google Drive option not appearing"**
- Verify that `VITE_GOOGLE_CLIENT_ID` is set in your frontend environment configuration
- Check browser console for any JavaScript errors
- Ensure the frontend has been restarted after adding environment variables
**"Authentication failed"**
- Verify that your OAuth 2.0 credentials are correctly configured
- Check that the redirect URI `http://<your-domain>/api/connectors/callback?provider=google_drive` is correctly added in GCP console
- Ensure the Google Drive API is enabled in your GCP project
**"Permission denied" errors**
- Verify that the OAuth consent screen is properly configured
- Check that your Google account has access to the files you're trying to select
- Ensure the required scopes are granted during authentication
**"Files not processing"**
- Check that the backend environment variables are correctly set
- Verify that the OAuth credentials have the necessary permissions
- Check the backend logs for any error messages
### Environment Variable Checklist
**Backend (.env in root directory):**
- ✅ `GOOGLE_CLIENT_ID`
- ✅ `GOOGLE_CLIENT_SECRET`
**Frontend (.env in frontend directory):**
- ✅ `VITE_GOOGLE_CLIENT_ID`
### Security Considerations
- Keep your Google Client Secret secure and never expose it in frontend code
- Regularly rotate your OAuth credentials
- Use HTTPS in production to protect authentication tokens
- Ensure proper OAuth consent screen configuration for production use
<Callout type="tip" emoji="💡">
For production deployments, make sure to add your actual domain to the OAuth consent screen and authorized origins/redirect URIs.
</Callout>


@@ -20,5 +20,8 @@
"Architecture": {
"title": "🏗️ Architecture",
"href": "/Guides/Architecture"
},
"Integrations": {
"title": "🔗 Integrations"
}
}


@@ -60,7 +60,7 @@ const config = {
GitHub
</a>
{' | '}
<a href="https://www.blog.docsgpt.cloud/" target="_blank">
<a href="https://blog.docsgpt.cloud/" target="_blank">
Blog
</a>
</div>


@@ -5,6 +5,8 @@
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0,viewport-fit=cover" />
<meta name="apple-mobile-web-app-capable" content="yes">
<meta name="theme-color" content="#fbfbfb" media="(prefers-color-scheme: light)" />
<meta name="theme-color" content="#161616" media="(prefers-color-scheme: dark)" />
<title>DocsGPT</title>
<link rel="shortcut icon" type="image/x-icon" href="/favicon.ico" />
</head>


@@ -14,12 +14,14 @@
"copy-to-clipboard": "^3.3.3",
"i18next": "^24.2.0",
"i18next-browser-languagedetector": "^8.0.2",
"lodash": "^4.17.21",
"mermaid": "^11.6.0",
"prop-types": "^15.8.1",
"react": "^19.1.0",
"react-chartjs-2": "^5.3.0",
"react-dom": "^19.0.0",
"react-dropzone": "^14.3.8",
"react-google-drive-picker": "^1.2.2",
"react-i18next": "^15.4.0",
"react-markdown": "^9.0.1",
"react-redux": "^9.2.0",
@@ -32,6 +34,7 @@
},
"devDependencies": {
"@tailwindcss/postcss": "^4.1.10",
"@types/lodash": "^4.17.20",
"@types/mermaid": "^9.1.0",
"@types/react": "^19.1.8",
"@types/react-dom": "^19.0.0",
@@ -53,7 +56,7 @@
"postcss": "^8.4.49",
"prettier": "^3.5.3",
"prettier-plugin-tailwindcss": "^0.6.13",
"tailwindcss": "^4.1.10",
"tailwindcss": "^4.1.11",
"typescript": "^5.8.3",
"vite": "^6.3.5",
"vite-plugin-svgr": "^4.3.0"
@@ -1694,6 +1697,13 @@
"tailwindcss": "4.1.10"
}
},
"node_modules/@tailwindcss/node/node_modules/tailwindcss": {
"version": "4.1.10",
"resolved": "https://registry.npmjs.org/tailwindcss/-/tailwindcss-4.1.10.tgz",
"integrity": "sha512-P3nr6WkvKV/ONsTzj6Gb57sWPMX29EPNPopo7+FcpkQaNsrNpZ1pv8QmrYI2RqEKD7mlGqLnGovlcYnBK0IqUA==",
"dev": true,
"license": "MIT"
},
"node_modules/@tailwindcss/oxide": {
"version": "4.1.10",
"resolved": "https://registry.npmjs.org/@tailwindcss/oxide/-/oxide-4.1.10.tgz",
@@ -1906,6 +1916,66 @@
"node": ">=14.0.0"
}
},
"node_modules/@tailwindcss/oxide-wasm32-wasi/node_modules/@emnapi/core": {
"version": "1.4.3",
"dev": true,
"inBundle": true,
"license": "MIT",
"optional": true,
"dependencies": {
"@emnapi/wasi-threads": "1.0.2",
"tslib": "^2.4.0"
}
},
"node_modules/@tailwindcss/oxide-wasm32-wasi/node_modules/@emnapi/runtime": {
"version": "1.4.3",
"dev": true,
"inBundle": true,
"license": "MIT",
"optional": true,
"dependencies": {
"tslib": "^2.4.0"
}
},
"node_modules/@tailwindcss/oxide-wasm32-wasi/node_modules/@emnapi/wasi-threads": {
"version": "1.0.2",
"dev": true,
"inBundle": true,
"license": "MIT",
"optional": true,
"dependencies": {
"tslib": "^2.4.0"
}
},
"node_modules/@tailwindcss/oxide-wasm32-wasi/node_modules/@napi-rs/wasm-runtime": {
"version": "0.2.10",
"dev": true,
"inBundle": true,
"license": "MIT",
"optional": true,
"dependencies": {
"@emnapi/core": "^1.4.3",
"@emnapi/runtime": "^1.4.3",
"@tybys/wasm-util": "^0.9.0"
}
},
"node_modules/@tailwindcss/oxide-wasm32-wasi/node_modules/@tybys/wasm-util": {
"version": "0.9.0",
"dev": true,
"inBundle": true,
"license": "MIT",
"optional": true,
"dependencies": {
"tslib": "^2.4.0"
}
},
"node_modules/@tailwindcss/oxide-wasm32-wasi/node_modules/tslib": {
"version": "2.8.0",
"dev": true,
"inBundle": true,
"license": "0BSD",
"optional": true
},
"node_modules/@tailwindcss/oxide-win32-arm64-msvc": {
"version": "4.1.10",
"resolved": "https://registry.npmjs.org/@tailwindcss/oxide-win32-arm64-msvc/-/oxide-win32-arm64-msvc-4.1.10.tgz",
@@ -1954,6 +2024,13 @@
"tailwindcss": "4.1.10"
}
},
"node_modules/@tailwindcss/postcss/node_modules/tailwindcss": {
"version": "4.1.10",
"resolved": "https://registry.npmjs.org/tailwindcss/-/tailwindcss-4.1.10.tgz",
"integrity": "sha512-P3nr6WkvKV/ONsTzj6Gb57sWPMX29EPNPopo7+FcpkQaNsrNpZ1pv8QmrYI2RqEKD7mlGqLnGovlcYnBK0IqUA==",
"dev": true,
"license": "MIT"
},
"node_modules/@types/babel__core": {
"version": "7.20.5",
"resolved": "https://registry.npmjs.org/@types/babel__core/-/babel__core-7.20.5.tgz",
@@ -2302,6 +2379,13 @@
"integrity": "sha512-HMwFiRujE5PjrgwHQ25+bsLJgowjGjm5Z8FVSf0N6PwgJrwxH0QxzHYDcKsTfV3wva0vzrpqMTJS2jXPr5BMEQ==",
"license": "MIT"
},
"node_modules/@types/lodash": {
"version": "4.17.20",
"resolved": "https://registry.npmjs.org/@types/lodash/-/lodash-4.17.20.tgz",
"integrity": "sha512-H3MHACvFUEiujabxhaI/ImO6gUrd8oOurg7LQtS7mbwIXA/cUqWrvBsaeJ23aZEPk1TAYkurjfMbSELfoCXlGA==",
"dev": true,
"license": "MIT"
},
"node_modules/@types/mdast": {
"version": "4.0.4",
"resolved": "https://registry.npmjs.org/@types/mdast/-/mdast-4.0.4.tgz",
@@ -7226,6 +7310,12 @@
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/lodash": {
"version": "4.17.21",
"resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz",
"integrity": "sha512-v2kDEe57lecTulaDIuNTPy3Ry4gLGJ6Z1O3vE1krgXZNrsQ+LFTGHVxVjcXPs17LhbZVGedAJv8XZ1tvj5FvSg==",
"license": "MIT"
},
"node_modules/lodash-es": {
"version": "4.17.21",
"resolved": "https://registry.npmjs.org/lodash-es/-/lodash-es-4.17.21.tgz",
@@ -9293,6 +9383,16 @@
"react": ">= 16.8 || 18.0.0"
}
},
"node_modules/react-google-drive-picker": {
"version": "1.2.2",
"resolved": "https://registry.npmjs.org/react-google-drive-picker/-/react-google-drive-picker-1.2.2.tgz",
"integrity": "sha512-x30mYkt9MIwPCgL+fyK75HZ8E6G5L/WGW0bfMG6kbD4NG2kmdlmV9oH5lPa6P6d46y9hj5Y3btAMrZd4JRRkSA==",
"license": "MIT",
"peerDependencies": {
"react": ">=17.0.0",
"react-dom": ">=17.0.0"
}
},
"node_modules/react-i18next": {
"version": "15.4.0",
"resolved": "https://registry.npmjs.org/react-i18next/-/react-i18next-15.4.0.tgz",
@@ -10387,9 +10487,9 @@
}
},
"node_modules/tailwindcss": {
"version": "4.1.10",
"resolved": "https://registry.npmjs.org/tailwindcss/-/tailwindcss-4.1.10.tgz",
"integrity": "sha512-P3nr6WkvKV/ONsTzj6Gb57sWPMX29EPNPopo7+FcpkQaNsrNpZ1pv8QmrYI2RqEKD7mlGqLnGovlcYnBK0IqUA==",
"version": "4.1.11",
"resolved": "https://registry.npmjs.org/tailwindcss/-/tailwindcss-4.1.11.tgz",
"integrity": "sha512-2E9TBm6MDD/xKYe+dvJZAmg3yxIEDNRc0jwlNyDg/4Fil2QcSLjFKGVff0lAf1jjeaArlG/M75Ey/EYr/OJtBA==",
"dev": true,
"license": "MIT"
},


@@ -25,12 +25,14 @@
"copy-to-clipboard": "^3.3.3",
"i18next": "^24.2.0",
"i18next-browser-languagedetector": "^8.0.2",
"lodash": "^4.17.21",
"mermaid": "^11.6.0",
"prop-types": "^15.8.1",
"react": "^19.1.0",
"react-chartjs-2": "^5.3.0",
"react-dom": "^19.0.0",
"react-dropzone": "^14.3.8",
"react-google-drive-picker": "^1.2.2",
"react-i18next": "^15.4.0",
"react-markdown": "^9.0.1",
"react-redux": "^9.2.0",
@@ -43,6 +45,7 @@
},
"devDependencies": {
"@tailwindcss/postcss": "^4.1.10",
"@types/lodash": "^4.17.20",
"@types/mermaid": "^9.1.0",
"@types/react": "^19.1.8",
"@types/react-dom": "^19.0.0",
@@ -64,7 +67,7 @@
"postcss": "^8.4.49",
"prettier": "^3.5.3",
"prettier-plugin-tailwindcss": "^0.6.13",
"tailwindcss": "^4.1.10",
"tailwindcss": "^4.1.11",
"typescript": "^5.8.3",
"vite": "^6.3.5",
"vite-plugin-svgr": "^4.3.0"


@@ -0,0 +1,4 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" width="64" height="64" color="#000000" fill="none">
<path d="M3.49994 11.7501L11.6717 3.57855C12.7762 2.47398 14.5672 2.47398 15.6717 3.57855C16.7762 4.68312 16.7762 6.47398 15.6717 7.57855M15.6717 7.57855L9.49994 13.7501M15.6717 7.57855C16.7762 6.47398 18.5672 6.47398 19.6717 7.57855C20.7762 8.68312 20.7762 10.474 19.6717 11.5785L12.7072 18.543C12.3167 18.9335 12.3167 19.5667 12.7072 19.9572L13.9999 21.2499" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"></path>
<path d="M17.4999 9.74921L11.3282 15.921C10.2237 17.0255 8.43272 17.0255 7.32823 15.921C6.22373 14.8164 6.22373 13.0255 7.32823 11.921L13.4999 5.74939" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"></path>
</svg>



@@ -33,7 +33,7 @@ function MainLayout() {
const [navOpen, setNavOpen] = useState(!(isMobile || isTablet));
return (
<div className="relative h-screen overflow-hidden dark:bg-raisin-black">
<div className="dark:bg-raisin-black relative h-screen overflow-hidden">
<Navigation navOpen={navOpen} setNavOpen={setNavOpen} />
<ActionButtons showNewChat={true} showShare={true} />
<div


@@ -29,7 +29,7 @@ export default function Hero({
</div>
{/* Demo Buttons Section */}
<div className="mb-8 w-full max-w-full md:mb-16">
<div className="mb-3 w-full max-w-full md:mb-3">
<div className="grid grid-cols-1 gap-3 text-xs md:grid-cols-1 md:gap-4 lg:grid-cols-2">
{demos?.map(
(demo: { header: string; query: string }, key: number) =>


@@ -10,7 +10,7 @@ import Add from './assets/add.svg';
import DocsGPT3 from './assets/cute_docsgpt3.svg';
import Discord from './assets/discord.svg';
import Expand from './assets/expand.svg';
import Github from './assets/github.svg';
import Github from './assets/git_nav.svg';
import Hamburger from './assets/hamburger.svg';
import openNewChat from './assets/openNewChat.svg';
import Pin from './assets/pin.svg';
@@ -568,6 +568,8 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
>
<img
src={Discord}
width={24}
height={24}
alt="Join Discord community"
className="m-2 w-6 self-center filter dark:invert"
/>
@@ -581,8 +583,10 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
>
<img
src={Twitter}
width={20}
height={20}
alt="Follow us on Twitter"
className="m-2 w-5 self-center filter dark:invert"
className="m-2 self-center filter dark:invert"
/>
</NavLink>
<NavLink
@@ -595,7 +599,9 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
<img
src={Github}
alt="View on GitHub"
className="m-2 w-6 self-center filter dark:invert"
width={28}
height={28}
className="m-2 self-center filter dark:invert"
/>
</NavLink>
</div>


@@ -2,11 +2,11 @@ import { Link } from 'react-router-dom';
export default function PageNotFound() {
return (
<div className="grid min-h-screen dark:bg-raisin-black">
<p className="mx-auto my-auto mt-20 flex w-full max-w-6xl flex-col place-items-center gap-6 rounded-3xl bg-gray-100 p-6 text-jet dark:bg-outer-space dark:text-gray-100 lg:p-10 xl:p-16">
<div className="dark:bg-raisin-black grid min-h-screen">
<p className="text-jet dark:bg-outer-space mx-auto my-auto mt-20 flex w-full max-w-6xl flex-col place-items-center gap-6 rounded-3xl bg-gray-100 p-6 lg:p-10 xl:p-16 dark:text-gray-100">
<h1>404</h1>
<p>The page you are looking for does not exist.</p>
<button className="pointer-cursor mr-4 flex cursor-pointer items-center justify-center rounded-full bg-blue-1000 px-4 py-2 text-white transition-colors duration-100 hover:bg-blue-3000">
<button className="pointer-cursor bg-blue-1000 hover:bg-blue-3000 mr-4 flex cursor-pointer items-center justify-center rounded-full px-4 py-2 text-white transition-colors duration-100">
<Link to="/">Go Back Home</Link>
</button>
</p>


@@ -110,7 +110,7 @@ export default function AgentPreview() {
}, [queries]);
return (
<div>
<div className="flex h-full flex-col items-center justify-between gap-2 overflow-y-hidden dark:bg-raisin-black">
<div className="dark:bg-raisin-black flex h-full flex-col items-center justify-between gap-2 overflow-y-hidden">
<div className="h-[512px] w-full overflow-y-auto">
<ConversationMessages
handleQuestion={handleQuestion}
@@ -128,7 +128,7 @@ export default function AgentPreview() {
showToolButton={selectedAgent ? false : true}
autoFocus={false}
/>
<p className="w-full self-center bg-transparent pt-2 text-center text-xs text-gray-4000 dark:text-sonic-silver md:inline">
<p className="text-gray-4000 dark:text-sonic-silver w-full self-center bg-transparent pt-2 text-center text-xs md:inline">
This is a preview of the agent. You can publish it to start using it
in conversations.
</p>


@@ -1,3 +1,4 @@
import isEqual from 'lodash/isEqual';
import React, { useCallback, useEffect, useRef, useState } from 'react';
import { useDispatch, useSelector } from 'react-redux';
import { useNavigate, useParams } from 'react-router-dom';
@@ -8,6 +9,7 @@ import SourceIcon from '../assets/source.svg';
import Dropdown from '../components/Dropdown';
import { FileUpload } from '../components/FileUpload';
import MultiSelectPopup, { OptionType } from '../components/MultiSelectPopup';
import Spinner from '../components/Spinner';
import AgentDetailsModal from '../modals/AgentDetailsModal';
import ConfirmationModal from '../modals/ConfirmationModal';
import { ActiveState, Doc, Prompt } from '../models/misc';
@@ -18,6 +20,7 @@ import {
setSelectedAgent,
} from '../preferences/preferenceSlice';
import PromptsModal from '../preferences/PromptsModal';
import Prompts from '../settings/Prompts';
import { UserToolType } from '../settings/types';
import AgentPreview from './AgentPreview';
import { Agent } from './types';
@@ -42,12 +45,14 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
description: '',
image: '',
source: '',
chunks: '',
retriever: '',
prompt_id: '',
sources: [],
chunks: '2',
retriever: 'classic',
prompt_id: 'default',
tools: [],
agent_type: '',
agent_type: 'classic',
status: '',
json_schema: undefined,
});
const [imageFile, setImageFile] = useState<File | null>(null);
const [prompts, setPrompts] = useState<
@@ -66,34 +71,44 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
useState<ActiveState>('INACTIVE');
const [agentDetails, setAgentDetails] = useState<ActiveState>('INACTIVE');
const [addPromptModal, setAddPromptModal] = useState<ActiveState>('INACTIVE');
const [hasChanges, setHasChanges] = useState(false);
const [draftLoading, setDraftLoading] = useState(false);
const [publishLoading, setPublishLoading] = useState(false);
const [jsonSchemaText, setJsonSchemaText] = useState('');
const [jsonSchemaValid, setJsonSchemaValid] = useState(true);
const [isJsonSchemaExpanded, setIsJsonSchemaExpanded] = useState(false);
const initialAgentRef = useRef<Agent | null>(null);
const sourceAnchorButtonRef = useRef<HTMLButtonElement>(null);
const toolAnchorButtonRef = useRef<HTMLButtonElement>(null);
const modeConfig = {
new: {
heading: 'New Agent',
buttonText: 'Create Agent',
buttonText: 'Publish',
showDelete: false,
showSaveDraft: true,
showLogs: false,
showAccessDetails: false,
trackChanges: false,
},
edit: {
heading: 'Edit Agent',
buttonText: 'Save Changes',
buttonText: 'Save',
showDelete: true,
showSaveDraft: false,
showLogs: true,
showAccessDetails: true,
trackChanges: true,
},
draft: {
heading: 'New Agent (Draft)',
buttonText: 'Publish Draft',
buttonText: 'Publish',
showDelete: true,
showSaveDraft: true,
showLogs: false,
showAccessDetails: false,
trackChanges: false,
},
};
const chunks = ['0', '2', '4', '6', '8', '10'];
@@ -103,9 +118,16 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
];
const isPublishable = () => {
return (
agent.name && agent.description && agent.prompt_id && agent.agent_type
);
const hasRequiredFields =
agent.name && agent.description && agent.prompt_id && agent.agent_type;
const isJsonSchemaValidOrEmpty =
jsonSchemaText.trim() === '' || jsonSchemaValid;
const hasSource = selectedSourceIds.size > 0;
return hasRequiredFields && isJsonSchemaValidOrEmpty && hasSource;
};
const isJsonSchemaInvalid = () => {
return jsonSchemaText.trim() !== '' && !jsonSchemaValid;
};
const handleUpload = useCallback((files: File[]) => {
@@ -130,7 +152,41 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
const formData = new FormData();
formData.append('name', agent.name);
formData.append('description', agent.description);
formData.append('source', agent.source);
if (selectedSourceIds.size > 1) {
const sourcesArray = Array.from(selectedSourceIds)
.map((id) => {
const sourceDoc = sourceDocs?.find(
(source) =>
source.id === id || source.retriever === id || source.name === id,
);
if (sourceDoc?.name === 'Default' && !sourceDoc?.id) {
return 'default';
}
return sourceDoc?.id || id;
})
.filter(Boolean);
formData.append('sources', JSON.stringify(sourcesArray));
formData.append('source', '');
} else if (selectedSourceIds.size === 1) {
const singleSourceId = Array.from(selectedSourceIds)[0];
const sourceDoc = sourceDocs?.find(
(source) =>
source.id === singleSourceId ||
source.retriever === singleSourceId ||
source.name === singleSourceId,
);
let finalSourceId;
if (sourceDoc?.name === 'Default' && !sourceDoc?.id)
finalSourceId = 'default';
else finalSourceId = sourceDoc?.id || singleSourceId;
formData.append('source', String(finalSourceId));
formData.append('sources', JSON.stringify([]));
} else {
formData.append('source', '');
formData.append('sources', JSON.stringify([]));
}
formData.append('chunks', agent.chunks);
formData.append('retriever', agent.retriever);
formData.append('prompt_id', agent.prompt_id);
@@ -143,24 +199,32 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
formData.append('tools', JSON.stringify(agent.tools));
else formData.append('tools', '[]');
if (agent.json_schema) {
formData.append('json_schema', JSON.stringify(agent.json_schema));
}
try {
setDraftLoading(true);
const response =
effectiveMode === 'new'
? await userService.createAgent(formData, token)
: await userService.updateAgent(agent.id || '', formData, token);
if (!response.ok) throw new Error('Failed to create agent draft');
const data = await response.json();
if (effectiveMode === 'new') {
setEffectiveMode('draft');
setAgent((prev) => ({
...prev,
id: data.id,
image: data.image || prev.image,
}));
}
const updatedAgent = {
...agent,
id: data.id || agent.id,
image: data.image || agent.image,
};
setAgent(updatedAgent);
if (effectiveMode === 'new') setEffectiveMode('draft');
} catch (error) {
console.error('Error saving draft:', error);
throw new Error('Failed to save draft');
} finally {
setDraftLoading(false);
}
};
@@ -168,7 +232,41 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
const formData = new FormData();
formData.append('name', agent.name);
formData.append('description', agent.description);
formData.append('source', agent.source);
if (selectedSourceIds.size > 1) {
const sourcesArray = Array.from(selectedSourceIds)
.map((id) => {
const sourceDoc = sourceDocs?.find(
(source) =>
source.id === id || source.retriever === id || source.name === id,
);
if (sourceDoc?.name === 'Default' && !sourceDoc?.id) {
return 'default';
}
return sourceDoc?.id || id;
})
.filter(Boolean);
formData.append('sources', JSON.stringify(sourcesArray));
formData.append('source', '');
} else if (selectedSourceIds.size === 1) {
const singleSourceId = Array.from(selectedSourceIds)[0];
const sourceDoc = sourceDocs?.find(
(source) =>
source.id === singleSourceId ||
source.retriever === singleSourceId ||
source.name === singleSourceId,
);
let finalSourceId;
if (sourceDoc?.name === 'Default' && !sourceDoc?.id)
finalSourceId = 'default';
else finalSourceId = sourceDoc?.id || singleSourceId;
formData.append('source', String(finalSourceId));
formData.append('sources', JSON.stringify([]));
} else {
formData.append('source', '');
formData.append('sources', JSON.stringify([]));
}
formData.append('chunks', agent.chunks);
formData.append('retriever', agent.retriever);
formData.append('prompt_id', agent.prompt_id);
@@ -180,27 +278,55 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
formData.append('tools', JSON.stringify(agent.tools));
else formData.append('tools', '[]');
if (agent.json_schema) {
formData.append('json_schema', JSON.stringify(agent.json_schema));
}
try {
setPublishLoading(true);
const response =
effectiveMode === 'new'
? await userService.createAgent(formData, token)
: await userService.updateAgent(agent.id || '', formData, token);
if (!response.ok) throw new Error('Failed to publish agent');
const data = await response.json();
if (data.id) setAgent((prev) => ({ ...prev, id: data.id }));
if (data.key) setAgent((prev) => ({ ...prev, key: data.key }));
const updatedAgent = {
...agent,
id: data.id || agent.id,
key: data.key || agent.key,
status: 'published',
image: data.image || agent.image,
};
setAgent(updatedAgent);
initialAgentRef.current = updatedAgent;
if (effectiveMode === 'new' || effectiveMode === 'draft') {
setEffectiveMode('edit');
setAgent((prev) => ({
...prev,
status: 'published',
image: data.image || prev.image,
}));
setAgentDetails('ACTIVE');
}
setImageFile(null);
} catch (error) {
console.error('Error publishing agent:', error);
throw new Error('Failed to publish agent');
} finally {
setPublishLoading(false);
}
};
const validateAndSetJsonSchema = (text: string) => {
setJsonSchemaText(text);
if (text.trim() === '') {
setAgent({ ...agent, json_schema: undefined });
setJsonSchemaValid(true);
return;
}
try {
const parsed = JSON.parse(text);
setAgent({ ...agent, json_schema: parsed });
setJsonSchemaValid(true);
} catch (error) {
setJsonSchemaValid(false);
}
};
@@ -228,6 +354,26 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
getPrompts();
}, [token]);
// Auto-select default source if none selected
useEffect(() => {
if (sourceDocs && sourceDocs.length > 0 && selectedSourceIds.size === 0) {
const defaultSource = sourceDocs.find((s) => s.name === 'Default');
if (defaultSource) {
setSelectedSourceIds(
new Set([
defaultSource.id || defaultSource.retriever || defaultSource.name,
]),
);
} else {
setSelectedSourceIds(
new Set([
sourceDocs[0].id || sourceDocs[0].retriever || sourceDocs[0].name,
]),
);
}
}
}, [sourceDocs, selectedSourceIds.size]);
useEffect(() => {
if ((mode === 'edit' || mode === 'draft') && agentId) {
const getAgent = async () => {
@@ -237,37 +383,99 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
throw new Error('Failed to fetch agent');
}
const data = await response.json();
if (data.source) setSelectedSourceIds(new Set([data.source]));
else if (data.retriever)
if (data.sources && data.sources.length > 0) {
const mappedSources = data.sources.map((sourceId: string) => {
if (sourceId === 'default') {
const defaultSource = sourceDocs?.find(
(source) => source.name === 'Default',
);
return defaultSource?.retriever || 'classic';
}
return sourceId;
});
setSelectedSourceIds(new Set(mappedSources));
} else if (data.source) {
if (data.source === 'default') {
const defaultSource = sourceDocs?.find(
(source) => source.name === 'Default',
);
setSelectedSourceIds(
new Set([defaultSource?.retriever || 'classic']),
);
} else {
setSelectedSourceIds(new Set([data.source]));
}
} else if (data.retriever) {
setSelectedSourceIds(new Set([data.retriever]));
}
if (data.tools) setSelectedToolIds(new Set(data.tools));
if (data.status === 'draft') setEffectiveMode('draft');
if (data.json_schema) {
const jsonText = JSON.stringify(data.json_schema, null, 2);
setJsonSchemaText(jsonText);
setJsonSchemaValid(true);
}
setAgent(data);
initialAgentRef.current = data;
};
getAgent();
}
}, [agentId, mode, token]);
useEffect(() => {
const selectedSource = Array.from(selectedSourceIds).map((id) =>
sourceDocs?.find(
(source) =>
source.id === id || source.retriever === id || source.name === id,
),
);
if (selectedSource[0]?.model === embeddingsName) {
if (selectedSource[0] && 'id' in selectedSource[0]) {
const selectedSources = Array.from(selectedSourceIds)
.map((id) =>
sourceDocs?.find(
(source) =>
source.id === id || source.retriever === id || source.name === id,
),
)
.filter(Boolean);
if (selectedSources.length > 0) {
// Handle multiple sources
if (selectedSources.length > 1) {
// Multiple sources selected - store in sources array
const sourceIds = selectedSources
.map((source) => source?.id)
.filter((id): id is string => Boolean(id));
setAgent((prev) => ({
...prev,
source: selectedSource[0]?.id || 'default',
sources: sourceIds,
source: '', // Clear single source for multiple sources
retriever: '',
}));
} else
setAgent((prev) => ({
...prev,
source: '',
retriever: selectedSource[0]?.retriever || 'classic',
}));
} else {
// Single source selected - maintain backward compatibility
const selectedSource = selectedSources[0];
if (selectedSource?.model === embeddingsName) {
if (selectedSource && 'id' in selectedSource) {
setAgent((prev) => ({
...prev,
source: selectedSource?.id || 'default',
sources: [], // Clear sources array for single source
retriever: '',
}));
} else {
setAgent((prev) => ({
...prev,
source: '',
sources: [], // Clear sources array
retriever: selectedSource?.retriever || 'classic',
}));
}
}
}
} else {
// No sources selected
setAgent((prev) => ({
...prev,
source: '',
sources: [],
retriever: '',
}));
}
}, [selectedSourceIds]);
@@ -285,7 +493,26 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
useEffect(() => {
if (isPublishable()) dispatch(setSelectedAgent(agent));
}, [agent, dispatch]);
if (!modeConfig[effectiveMode].trackChanges) {
setHasChanges(true);
return;
}
if (!initialAgentRef.current) {
setHasChanges(false);
return;
}
const initialJsonSchemaText = initialAgentRef.current.json_schema
? JSON.stringify(initialAgentRef.current.json_schema, null, 2)
: '';
const isChanged =
!isEqual(agent, initialAgentRef.current) ||
imageFile !== null ||
jsonSchemaText !== initialJsonSchemaText;
setHasChanges(isChanged);
}, [agent, dispatch, effectiveMode, imageFile, jsonSchemaText]);
return (
<div className="p-4 md:p-12">
<div className="flex items-center gap-3 px-4">
@@ -321,10 +548,19 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
)}
{modeConfig[effectiveMode].showSaveDraft && (
<button
className="hover:bg-vi</button>olets-are-blue border-violets-are-blue text-violets-are-blue hover:bg-violets-are-blue rounded-3xl border border-solid px-5 py-2 text-sm font-medium transition-colors hover:text-white"
disabled={isJsonSchemaInvalid()}
className={`border-violets-are-blue text-violets-are-blue hover:bg-violets-are-blue w-28 rounded-3xl border border-solid py-2 text-sm font-medium transition-colors hover:text-white ${
isJsonSchemaInvalid() ? 'cursor-not-allowed opacity-30' : ''
}`}
onClick={handleSaveDraft}
>
Save Draft
<span className="flex items-center justify-center transition-all duration-200">
{draftLoading ? (
<Spinner size="small" color="#976af3" />
) : (
'Save Draft'
)}
</span>
</button>
)}
{modeConfig[effectiveMode].showAccessDetails && (
@@ -345,11 +581,17 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
</button>
)}
<button
disabled={!isPublishable()}
className={`${!isPublishable() && 'cursor-not-allowed opacity-30'} bg-purple-30 hover:bg-violets-are-blue rounded-3xl px-5 py-2 text-sm font-medium text-white`}
disabled={!isPublishable() || !hasChanges}
className={`${!isPublishable() || !hasChanges ? 'cursor-not-allowed opacity-30' : ''} bg-purple-30 hover:bg-violets-are-blue flex w-28 items-center justify-center rounded-3xl py-2 text-sm font-medium text-white`}
onClick={handlePublish}
>
Publish
<span className="flex items-center justify-center transition-all duration-200">
{publishLoading ? (
<Spinner size="small" color="white" />
) : (
modeConfig[effectiveMode].buttonText
)}
</span>
</button>
</div>
</div>
@@ -365,7 +607,7 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
onChange={(e) => setAgent({ ...agent, name: e.target.value })}
/>
<textarea
className="border-silver text-jet dark:bg-raisin-black dark:text-bright-gray dark:placeholder:text-silver mt-3 h-32 w-full rounded-3xl border bg-white px-5 py-4 text-sm outline-hidden placeholder:text-gray-400 dark:border-[#7E7E7E]"
className="border-silver text-jet dark:bg-raisin-black dark:text-bright-gray dark:placeholder:text-silver mt-3 h-32 w-full rounded-xl border bg-white px-5 py-4 text-sm outline-hidden placeholder:text-gray-400 dark:border-[#7E7E7E]"
placeholder="Describe your agent"
value={agent.description}
onChange={(e) =>
@@ -414,7 +656,7 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
)
.filter(Boolean)
.join(', ')
: 'Select source'}
: 'Select sources'}
</button>
<MultiSelectPopup
isOpen={isSourcePopupOpen}
@@ -429,13 +671,38 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
}
selectedIds={selectedSourceIds}
onSelectionChange={(newSelectedIds: Set<string | number>) => {
setSelectedSourceIds(newSelectedIds);
setIsSourcePopupOpen(false);
if (
newSelectedIds.size === 0 &&
sourceDocs &&
sourceDocs.length > 0
) {
const defaultSource = sourceDocs.find(
(s) => s.name === 'Default',
);
if (defaultSource) {
setSelectedSourceIds(
new Set([
defaultSource.id ||
defaultSource.retriever ||
defaultSource.name,
]),
);
} else {
setSelectedSourceIds(
new Set([
sourceDocs[0].id ||
sourceDocs[0].retriever ||
sourceDocs[0].name,
]),
);
}
} else {
setSelectedSourceIds(newSelectedIds);
}
}}
title="Select Source"
title="Select Sources"
searchPlaceholder="Search sources..."
noOptionsMessage="No source available"
singleSelect={true}
noOptionsMessage="No sources available"
/>
</div>
<div className="mt-3">
@@ -458,32 +725,32 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
</div>
</div>
<div className="rounded-[30px] bg-[#F6F6F6] px-6 py-3 dark:bg-[#383838] dark:text-[#E0E0E0]">
<h2 className="text-lg font-semibold">Prompt</h2>
<div className="mt-3 flex flex-wrap items-center gap-1">
<div className="flex flex-wrap items-end gap-1">
<div className="min-w-20 grow basis-full sm:basis-0">
<Dropdown
options={prompts.map((prompt) => ({
label: prompt.name,
value: prompt.id,
}))}
selectedValue={
agent.prompt_id
? prompts.filter(
(prompt) => prompt.id === agent.prompt_id,
)[0]?.name || null
: null
<Prompts
prompts={prompts}
selectedPrompt={
prompts.find((prompt) => prompt.id === agent.prompt_id) ||
prompts[0]
}
onSelect={(option: { label: string; value: string }) =>
setAgent({ ...agent, prompt_id: option.value })
onSelectPrompt={(name, id, type) =>
setAgent({ ...agent, prompt_id: id })
}
size="w-full"
rounded="3xl"
border="border"
buttonClassName="bg-white dark:bg-[#222327] border-silver dark:border-[#7E7E7E]"
optionsClassName="bg-white dark:bg-[#383838] border-silver dark:border-[#7E7E7E] dark:border-[#7E7E7E] dark:bg-dark-charcoal"
placeholderClassName="text-gray-400 dark:text-silver"
placeholder="Select a prompt"
contentSize="text-sm"
setPrompts={setPrompts}
title="Prompt"
titleClassName="text-lg font-semibold dark:text-[#E0E0E0]"
showAddButton={false}
dropdownProps={{
size: 'w-full',
rounded: '3xl',
border: 'border',
buttonClassName:
'bg-white dark:bg-[#222327] border-silver dark:border-[#7E7E7E]',
optionsClassName:
'bg-white dark:bg-[#383838] border-silver dark:border-[#7E7E7E]',
placeholderClassName: 'text-gray-400 dark:text-silver',
contentSize: 'text-sm',
}}
/>
</div>
<button
@@ -555,6 +822,78 @@ export default function NewAgent({ mode }: { mode: 'new' | 'edit' | 'draft' }) {
/>
</div>
</div>
<div className="rounded-[30px] bg-[#F6F6F6] px-6 py-3 dark:bg-[#383838] dark:text-[#E0E0E0]">
<button
onClick={() => setIsJsonSchemaExpanded(!isJsonSchemaExpanded)}
className="flex w-full items-center justify-between text-left focus:outline-none"
>
<div>
<h2 className="text-lg font-semibold">Advanced</h2>
</div>
<div className="ml-4 flex items-center">
<svg
className={`h-5 w-5 transform transition-transform duration-200 ${
isJsonSchemaExpanded ? 'rotate-180' : ''
}`}
fill="none"
stroke="currentColor"
viewBox="0 0 24 24"
>
<path
strokeLinecap="round"
strokeLinejoin="round"
strokeWidth={2}
d="M19 9l-7 7-7-7"
/>
</svg>
</div>
</button>
{isJsonSchemaExpanded && (
<div className="mt-3">
<div>
<h2 className="text-sm font-medium">JSON response schema</h2>
<p className="mt-1 text-xs text-gray-600 dark:text-gray-400">
Define a JSON schema to enforce structured output format
</p>
</div>
<textarea
value={jsonSchemaText}
onChange={(e) => validateAndSetJsonSchema(e.target.value)}
placeholder={`{
"type": "object",
"properties": {
"name": {"type": "string"},
"email": {"type": "string"}
},
"required": ["name", "email"],
"additionalProperties": false
}`}
rows={9}
className={`border-silver text-jet dark:bg-raisin-black dark:text-bright-gray mt-2 w-full rounded-2xl border bg-white px-4 py-3 font-mono text-sm outline-hidden dark:border-[#7E7E7E]`}
/>
{jsonSchemaText.trim() !== '' && (
<div
className={`mt-2 flex items-center gap-2 text-sm ${
jsonSchemaValid
? 'text-green-600 dark:text-green-400'
: 'text-red-600 dark:text-red-400'
}`}
>
<span
className={`h-4 w-4 bg-contain bg-center bg-no-repeat ${
jsonSchemaValid
? "bg-[url('/src/assets/circle-check.svg')]"
: "bg-[url('/src/assets/circle-x.svg')]"
}`}
/>
{jsonSchemaValid
? 'Valid JSON'
: 'Invalid JSON - fix to enable saving'}
</div>
)}
</div>
)}
</div>
</div>
<div className="col-span-3 flex flex-col gap-3 rounded-[30px] bg-[#F6F6F6] px-6 py-3 dark:bg-[#383838] dark:text-[#E0E0E0]">
<h2 className="text-lg font-semibold">Preview</h2>
@@ -654,7 +993,7 @@ function AddPromptModal({
setNewPromptContent('');
onSelect?.(newPromptName, newPrompt.id, newPromptContent);
} catch (error) {
console.error(error);
console.error('Error adding prompt:', error);
}
};
return (


@@ -57,12 +57,11 @@ export const fetchPreviewAnswer = createAsyncThunk<
signal,
state.preference.token,
state.preference.selectedDocs!,
state.agentPreview.queries,
null, // No conversation ID for previews
state.preference.prompt.id,
state.preference.chunks,
state.preference.token_limit,
(event) => {
(event: MessageEvent) => {
const data = JSON.parse(event.data);
const targetIndex = indx ?? state.agentPreview.queries.length - 1;
@@ -97,6 +96,17 @@ export const fetchPreviewAnswer = createAsyncThunk<
message: data.error,
}),
);
} else if (data.type === 'structured_answer') {
dispatch(
updateStreamingQuery({
index: targetIndex,
query: {
response: data.answer,
structured: data.structured,
schema: data.schema,
},
}),
);
} else {
dispatch(
updateStreamingQuery({
@@ -118,7 +128,6 @@ export const fetchPreviewAnswer = createAsyncThunk<
signal,
state.preference.token,
state.preference.selectedDocs!,
state.agentPreview.queries,
null, // No conversation ID for previews
state.preference.prompt.id,
state.preference.chunks,
@@ -203,6 +212,14 @@ export const agentPreviewSlice = createSlice({
state.queries[index].response =
(state.queries[index].response || '') + query.response;
}
if (query.structured !== undefined) {
state.queries[index].structured = query.structured;
}
if (query.schema !== undefined) {
state.queries[index].schema = query.schema;
}
},
updateThought(
state,


@@ -10,6 +10,7 @@ export type Agent = {
description: string;
image: string;
source: string;
sources?: string[];
chunks: string;
retriever: string;
prompt_id: string;
@@ -26,4 +27,5 @@ export type Agent = {
created_at?: string;
updated_at?: string;
last_used_at?: string;
json_schema?: object;
};


@@ -38,13 +38,29 @@ const endpoints = {
UPDATE_TOOL_STATUS: '/api/update_tool_status',
UPDATE_TOOL: '/api/update_tool',
DELETE_TOOL: '/api/delete_tool',
GET_CHUNKS: (docId: string, page: number, per_page: number) =>
`/api/get_chunks?id=${docId}&page=${page}&per_page=${per_page}`,
SYNC_CONNECTOR: '/api/connectors/sync',
GET_CHUNKS: (
docId: string,
page: number,
per_page: number,
path?: string,
search?: string,
) =>
`/api/get_chunks?id=${docId}&page=${page}&per_page=${per_page}${
path ? `&path=${encodeURIComponent(path)}` : ''
}${search ? `&search=${encodeURIComponent(search)}` : ''}`,
ADD_CHUNK: '/api/add_chunk',
DELETE_CHUNK: (docId: string, chunkId: string) =>
`/api/delete_chunk?id=${docId}&chunk_id=${chunkId}`,
UPDATE_CHUNK: '/api/update_chunk',
STORE_ATTACHMENT: '/api/store_attachment',
DIRECTORY_STRUCTURE: (docId: string) =>
`/api/directory_structure?id=${docId}`,
MANAGE_SOURCE_FILES: '/api/manage_source_files',
MCP_TEST_CONNECTION: '/api/mcp_server/test',
MCP_SAVE_SERVER: '/api/mcp_server/save',
MCP_OAUTH_STATUS: (task_id: string) =>
`/api/mcp_server/oauth_status/${task_id}`,
},
CONVERSATION: {
ANSWER: '/api/answer',


@@ -1,3 +1,4 @@
import { getSessionToken } from '../../utils/providerUtils';
import apiClient from '../client';
import endpoints from '../endpoints';
@@ -86,8 +87,13 @@ const userService = {
page: number,
perPage: number,
token: string | null,
path?: string,
search?: string,
): Promise<any> =>
apiClient.get(endpoints.USER.GET_CHUNKS(docId, page, perPage), token),
apiClient.get(
endpoints.USER.GET_CHUNKS(docId, page, perPage, path, search),
token,
),
addChunk: (data: any, token: string | null): Promise<any> =>
apiClient.post(endpoints.USER.ADD_CHUNK, data, token),
deleteChunk: (
@@ -98,6 +104,32 @@ const userService = {
apiClient.delete(endpoints.USER.DELETE_CHUNK(docId, chunkId), token),
updateChunk: (data: any, token: string | null): Promise<any> =>
apiClient.put(endpoints.USER.UPDATE_CHUNK, data, token),
getDirectoryStructure: (docId: string, token: string | null): Promise<any> =>
apiClient.get(endpoints.USER.DIRECTORY_STRUCTURE(docId), token),
manageSourceFiles: (data: FormData, token: string | null): Promise<any> =>
apiClient.postFormData(endpoints.USER.MANAGE_SOURCE_FILES, data, token),
testMCPConnection: (data: any, token: string | null): Promise<any> =>
apiClient.post(endpoints.USER.MCP_TEST_CONNECTION, data, token),
saveMCPServer: (data: any, token: string | null): Promise<any> =>
apiClient.post(endpoints.USER.MCP_SAVE_SERVER, data, token),
getMCPOAuthStatus: (task_id: string, token: string | null): Promise<any> =>
apiClient.get(endpoints.USER.MCP_OAUTH_STATUS(task_id), token),
syncConnector: (
docId: string,
provider: string,
token: string | null,
): Promise<any> => {
const sessionToken = getSessionToken(provider);
return apiClient.post(
endpoints.USER.SYNC_CONNECTOR,
{
source_id: docId,
session_token: sessionToken,
provider: provider,
},
token,
);
},
};
export default userService;


@@ -0,0 +1,4 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M6 7.5C6 7.36739 5.94732 7.24021 5.85355 7.14645C5.75979 7.05268 5.63261 7 5.5 7H4.5C4.36739 7 4.24021 7.05268 4.14645 7.14645C4.05268 7.24021 4 7.36739 4 7.5V8.5C4 8.63261 4.05268 8.75979 4.14645 8.85355C4.24021 8.94732 4.36739 9 4.5 9H5.5C5.63261 9 5.75979 8.94732 5.85355 8.85355C5.94732 8.75979 6 8.63261 6 8.5V7.5ZM6 10.5C6 10.3674 5.94732 10.2402 5.85355 10.1464C5.75979 10.0527 5.63261 10 5.5 10H4.5C4.36739 10 4.24021 10.0527 4.14645 10.1464C4.05268 10.2402 4 10.3674 4 10.5V11.5C4 11.6326 4.05268 11.7598 4.14645 11.8536C4.24021 11.9473 4.36739 12 4.5 12H5.5C5.63261 12 5.75979 11.9473 5.85355 11.8536C5.94732 11.7598 6 11.6326 6 11.5V10.5ZM7.5 7H8.5C8.63261 7 8.75979 7.05268 8.85355 7.14645C8.94732 7.24021 9 7.36739 9 7.5V8.5C9 8.63261 8.94732 8.75979 8.85355 8.85355C8.75979 8.94732 8.63261 9 8.5 9H7.5C7.36739 9 7.24021 8.94732 7.14645 8.85355C7.05268 8.75979 7 8.63261 7 8.5V7.5C7 7.36739 7.05268 7.24021 7.14645 7.14645C7.24021 7.05268 7.36739 7 7.5 7ZM8.5 10H7.5C7.36739 10 7.24021 10.0527 7.14645 10.1464C7.05268 10.2402 7 10.3674 7 10.5V11.5C7 11.6326 7.05268 11.7598 7.14645 11.8536C7.24021 11.9473 7.36739 12 7.5 12H8.5C8.63261 12 8.75979 11.9473 8.85355 11.8536C8.94732 11.7598 9 11.6326 9 11.5V10.5C9 10.3674 8.94732 10.2402 8.85355 10.1464C8.75979 10.0527 8.63261 10 8.5 10ZM10 7.5C10 7.36739 10.0527 7.24021 10.1464 7.14645C10.2402 7.05268 10.3674 7 10.5 7H11.5C11.6326 7 11.7598 7.05268 11.8536 7.14645C11.9473 7.24021 12 7.36739 12 7.5V8.5C12 8.63261 11.9473 8.75979 11.8536 8.85355C11.7598 8.94732 11.6326 9 11.5 9H10.5C10.3674 9 10.2402 8.94732 10.1464 8.85355C10.0527 8.75979 10 8.63261 10 8.5V7.5Z" fill="#848484"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M4.5 0C4.63261 0 4.75979 0.0526784 4.85355 0.146447C4.94732 0.240215 5 0.367392 5 0.5V1H11V0.5C11 0.367392 11.0527 0.240215 11.1464 0.146447C11.2402 0.0526784 11.3674 0 11.5 0C11.6326 0 11.7598 0.0526784 11.8536 0.146447C11.9473 0.240215 12 0.367392 12 0.5V1C13.66 1 15 2.34 15 4V12C15 13.66 13.66 15 12 15H4C2.34 15 1 13.66 1 12V4C1 2.34 2.34 1 4 1V0.5C4 0.367392 4.05268 0.240215 4.14645 0.146447C4.24021 0.0526784 4.36739 0 4.5 0ZM14 4V5H2V4C2 2.9 2.895 2 4 2V2.5C4 2.63261 4.05268 2.75979 4.14645 2.85355C4.24021 2.94732 4.36739 3 4.5 3C4.63261 3 4.75979 2.94732 4.85355 2.85355C4.94732 2.75979 5 2.63261 5 2.5V2H11V2.5C11 2.63261 11.0527 2.75979 11.1464 2.85355C11.2402 2.94732 11.3674 3 11.5 3C11.6326 3 11.7598 2.94732 11.8536 2.85355C11.9473 2.75979 12 2.63261 12 2.5V2C13.1 2 14 2.895 14 4ZM2 12V6H14V12C14 13.1 13.105 14 12 14H4C2.9 14 2 13.105 2 12Z" fill="#848484"/>
</svg>

@@ -1 +1 @@
<svg width="16px" height="16px" viewBox="0 0 1024 1024" class="icon" version="1.1" xmlns="http://www.w3.org/2000/svg" fill="#11ee1c" stroke="#11ee1c" stroke-width="83.96799999999999"><g id="SVGRepo_bgCarrier" stroke-width="0"></g><g id="SVGRepo_tracerCarrier" stroke-linecap="round" stroke-linejoin="round"></g><g id="SVGRepo_iconCarrier"><path d="M866.133333 258.133333L362.666667 761.6l-204.8-204.8L98.133333 618.666667 362.666667 881.066667l563.2-563.2z" fill="#11ee1c"></path></g></svg>
<svg width="16px" height="16px" viewBox="0 0 1024 1024" class="icon" version="1.1" xmlns="http://www.w3.org/2000/svg" fill="#11ee1c" stroke="#11ee1c" stroke-width="83.96799999999999"><g id="SVGRepo_bgCarrier" stroke-width="0"></g><g id="SVGRepo_tracerCarrier" stroke-linecap="round" stroke-linejoin="round"></g><g id="SVGRepo_iconCarrier"><path d="M866.133333 258.133333L362.666667 761.6l-204.8-204.8L98.133333 618.666667 362.666667 881.066667l563.2-563.2z" fill="#0C9D35"></path></g></svg>

@@ -0,0 +1,3 @@
<svg width="22" height="22" viewBox="0 0 22 22" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M20.2891 15.81L21.7091 14.39L18.4991 11.21L15.4991 10.36L17.4091 10.1L21.5991 6.89999L20.3991 5.29998L16.5891 8.14999L13.9091 8.59999L17.1091 5.40999L15.9991 0.859985L13.9991 1.33999L14.8591 4.78999L13.7591 5.92999C13.5285 5.38882 13.144 4.92736 12.6533 4.60302C12.1625 4.27867 11.5873 4.10574 10.9991 4.10574C10.4108 4.10574 9.83559 4.27867 9.34487 4.60302C8.85414 4.92736 8.4696 5.38882 8.23906 5.92999L7.10906 4.78999L7.99906 1.33999L5.99906 0.859985L4.88906 5.40999L8.08906 8.59999L5.39906 8.14999L1.59906 5.29998L0.399063 6.89999L4.59906 10.1L6.45906 10.41L3.45906 11.26L0.289062 14.39L1.70906 15.81L4.49906 12.99L6.86906 12.32L2.99906 15.64V21.1H4.99906V16.56L6.55906 15.22C6.73264 16.2723 7.27432 17.2287 8.08751 17.9188C8.90071 18.6088 9.93255 18.9876 10.9991 18.9876C12.0656 18.9876 13.0974 18.6088 13.9106 17.9188C14.7238 17.2287 15.2655 16.2723 15.4391 15.22L16.9991 16.56V21.1H18.9991V15.64L15.1291 12.32L17.4991 12.99L20.2891 15.81Z" fill="black"/>
</svg>

@@ -0,0 +1,3 @@
<svg width="14" height="10" viewBox="0 0 14 10" fill="none" xmlns="http://www.w3.org/2000/svg">
<path fill-rule="evenodd" clip-rule="evenodd" d="M7 0C8.8144 0 10.4902 0.332143 11.739 0.898571C12.362 1.18143 12.9108 1.53643 13.3119 1.96714C13.7172 2.4 14 2.94429 14 3.57143V6.42857C14 7.05571 13.7172 7.59929 13.3119 8.03286C12.9108 8.46357 12.3627 8.81857 11.739 9.10143C10.4902 9.66786 8.8144 10 7 10C5.1856 10 3.5098 9.66786 2.261 9.10143C1.638 8.81857 1.0892 8.46357 0.6881 8.03286C0.2828 7.6 0 7.05571 0 6.42857V3.57143C0 2.94429 0.2828 2.40071 0.6881 1.96714C1.0892 1.53643 1.6373 1.18143 2.261 0.898571C3.5098 0.332143 5.1856 0 7 0ZM12.6 5.77143C12.3375 5.94714 12.047 6.10429 11.739 6.24429C10.4902 6.81071 8.8144 7.14286 7 7.14286C5.1856 7.14286 3.5098 6.81071 2.261 6.24429C1.96243 6.10966 1.67456 5.95157 1.4 5.77143V6.42857C1.4 6.59071 1.47 6.79857 1.7024 7.04857C1.9383 7.30143 2.3128 7.56214 2.8294 7.79643C3.8612 8.26429 5.3354 8.57143 7 8.57143C8.6646 8.57143 10.1388 8.26429 11.1706 7.79643C11.6872 7.56214 12.0617 7.30143 12.2976 7.04857C12.5307 6.79857 12.6 6.59071 12.6 6.42857V5.77143ZM7 1.42857C5.3347 1.42857 3.8612 1.73571 2.8294 2.20357C2.3128 2.43786 1.9383 2.69857 1.7024 2.95143C1.4693 3.20143 1.4 3.40929 1.4 3.57143C1.4 3.73357 1.47 3.94143 1.7024 4.19143C1.9383 4.44429 2.3128 4.705 2.8294 4.93929C3.8612 5.40714 5.3354 5.71429 7 5.71429C8.6646 5.71429 10.1388 5.40714 11.1706 4.93929C11.6872 4.705 12.0617 4.44429 12.2976 4.19143C12.5307 3.94143 12.6 3.73357 12.6 3.57143C12.6 3.40929 12.53 3.20143 12.2976 2.95143C12.0617 2.69857 11.6872 2.43786 11.1706 2.20357C10.1388 1.73643 8.6646 1.42857 7 1.42857Z" fill="#848484"/>
</svg>

@@ -0,0 +1,3 @@
<svg width="24" height="22" viewBox="0 0 24 22" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M12.01 0.784912C9.928 0.784912 8.256 0.804912 8.267 0.831912C8.277 0.851912 9.975 3.83291 12.041 7.45191L15.801 14.0259H19.561C21.642 14.0259 23.314 14.0059 23.303 13.9789C23.298 13.9589 21.595 10.9779 19.528 7.35891L15.768 0.784912H12.01ZM7.25 2.51491C6.03029 4.61565 4.82028 6.72201 3.62 8.83391L0 15.1679L1.89 18.4659L3.775 21.7629L7.395 15.4279L11.013 9.09791L9.133 5.81091C8.1 4.00391 7.255 2.52091 7.25 2.51491ZM9.509 15.1679L9.306 15.5159C9.192 15.7139 8.346 17.1879 7.426 18.8029C6.864 19.7952 6.29799 20.7852 5.728 21.7729C5.718 21.7989 8.968 21.8149 12.95 21.8149H20.194L21.99 18.6579C22.982 16.9239 23.84 15.4279 23.896 15.3349L24 15.1679H16.751H9.509Z" fill="black"/>
</svg>

@@ -0,0 +1,3 @@
<svg width="13" height="17" viewBox="0 0 13 17" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M0 1.94971C0 0.983707 0.784 0.199707 1.75 0.199707H8.336C8.8 0.199707 9.245 0.383707 9.573 0.712707L12.487 3.62671C12.816 3.95471 13 4.39971 13 4.86371V14.4497C13 14.9138 12.8156 15.359 12.4874 15.6871C12.1592 16.0153 11.7141 16.1997 11.25 16.1997H1.75C1.28587 16.1997 0.840752 16.0153 0.512563 15.6871C0.184375 15.359 0 14.9138 0 14.4497V1.94971ZM1.75 1.69971C1.6837 1.69971 1.62011 1.72605 1.57322 1.77293C1.52634 1.81981 1.5 1.8834 1.5 1.94971V14.4497C1.5 14.5877 1.612 14.6997 1.75 14.6997H11.25C11.3163 14.6997 11.3799 14.6734 11.4268 14.6265C11.4737 14.5796 11.5 14.516 11.5 14.4497V6.19971H8.75C8.28587 6.19971 7.84075 6.01533 7.51256 5.68714C7.18437 5.35896 7 4.91384 7 4.44971V1.69971H1.75ZM8.5 1.76171V4.44971C8.5 4.58771 8.612 4.69971 8.75 4.69971H11.438L11.427 4.68671L8.513 1.77271L8.5 1.76171Z" fill="#59636E"/>
</svg>

@@ -1,3 +1,10 @@
<svg width="28" height="34" viewBox="0 0 28 34" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M10 26.0003H18C19.1 26.0003 20 25.1003 20 24.0003V14.0003H23.18C24.96 14.0003 25.86 11.8403 24.6 10.5803L15.42 1.40032C15.235 1.21491 15.0152 1.06782 14.7732 0.967453C14.5313 0.86709 14.2719 0.81543 14.01 0.81543C13.7481 0.81543 13.4887 0.86709 13.2468 0.967453C13.0048 1.06782 12.785 1.21491 12.6 1.40032L3.42 10.5803C2.16 11.8403 3.04 14.0003 4.82 14.0003H8V24.0003C8 25.1003 8.9 26.0003 10 26.0003ZM2 30.0003H26C27.1 30.0003 28 30.9003 28 32.0003C28 33.1003 27.1 34.0003 26 34.0003H2C0.9 34.0003 0 33.1003 0 32.0003C0 30.9003 0.9 30.0003 2 30.0003Z" fill="#949494"/>
</svg>
<svg width="24" height="25" viewBox="0 0 24 25" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_9890_21170)">
<path d="M12.75 19.1V8.91248L9.5 12.1625L7.75 10.35L14 4.09998L20.25 10.35L18.5 12.1625L15.25 8.91248V19.1H12.75ZM6.5 24.1C5.8125 24.1 5.22417 23.8554 4.735 23.3662C4.24583 22.8771 4.00083 22.2883 4 21.6V17.85H6.5V21.6H21.5V17.85H24V21.6C24 22.2875 23.7554 22.8762 23.2663 23.3662C22.7771 23.8562 22.1883 24.1008 21.5 24.1H6.5Z" fill="black"/>
</g>
<defs>
<clipPath id="clip0_9890_21170">
<rect width="24" height="24" fill="white" transform="translate(0 0.0999756)"/>
</clipPath>
</defs>
</svg>

@@ -0,0 +1,3 @@
<svg width="16" height="15" viewBox="0 0 16 15" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M1.75 0.599915C1.28587 0.599915 0.840752 0.784289 0.512563 1.11248C0.184374 1.44067 0 1.88579 0 2.34991L0 12.8499C0 13.8159 0.784 14.5999 1.75 14.5999H14.25C14.7141 14.5999 15.1592 14.4155 15.4874 14.0874C15.8156 13.7592 16 13.314 16 12.8499V4.34991C16 3.88579 15.8156 3.44067 15.4874 3.11248C15.1592 2.78429 14.7141 2.59991 14.25 2.59991H7.5C7.46119 2.59991 7.42291 2.59088 7.3882 2.57352C7.35348 2.55616 7.32329 2.53096 7.3 2.49991L6.4 1.29991C6.07 0.859915 5.55 0.599915 5 0.599915H1.75Z" fill="#A382E7"/>
</svg>

@@ -0,0 +1,3 @@
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M20.8175 3.09139C20.0845 2.34392 19.2025 1.9707 18.1705 1.9707H5.6835C4.6515 1.9707 3.7695 2.34392 3.0365 3.09139C2.3035 3.83885 1.9375 4.73825 1.9375 5.79061V18.524C1.9375 19.5763 2.3035 20.4757 3.0365 21.2232C3.7695 21.9707 4.6515 22.3439 5.6835 22.3439H8.5975C8.7875 22.3439 8.9305 22.3368 9.0265 22.3235C9.13819 22.3007 9.23901 22.2399 9.3125 22.1512C9.4075 22.0492 9.4555 21.9013 9.4555 21.7076L9.4485 20.8051C9.4445 20.23 9.4425 19.7752 9.4425 19.4387L9.1425 19.4917C8.9525 19.5274 8.7125 19.5427 8.4215 19.5386C8.11819 19.5329 7.81584 19.5019 7.5175 19.4458C7.1999 19.386 6.90093 19.2497 6.6455 19.0481C6.37799 18.8418 6.17847 18.5572 6.0735 18.2323L5.9435 17.9264C5.83393 17.6851 5.69627 17.4581 5.5335 17.2503C5.3475 17.0025 5.1585 16.8353 4.9675 16.7466L4.8775 16.6803C4.81474 16.6345 4.75766 16.5811 4.7075 16.5212C4.65959 16.4657 4.62015 16.4031 4.5905 16.3356C4.5645 16.2734 4.5865 16.2224 4.6555 16.1827C4.7255 16.1419 4.8505 16.1225 5.0335 16.1225L5.2935 16.1633C5.4665 16.198 5.6815 16.304 5.9365 16.4804C6.19456 16.6598 6.41013 16.8957 6.5675 17.1708C6.7675 17.5328 7.0075 17.8091 7.2895 17.9998C7.5715 18.1895 7.8555 18.2854 8.1415 18.2854C8.4275 18.2854 8.6745 18.2629 8.8835 18.2191C9.08561 18.1765 9.28201 18.1094 9.4685 18.0192C9.5465 17.4278 9.7585 16.9709 10.1055 16.6528C9.65588 16.6078 9.21026 16.5281 8.7725 16.4142C8.34529 16.2945 7.93444 16.1208 7.5495 15.8972C7.14675 15.6736 6.79101 15.3714 6.5025 15.008C6.2255 14.6541 5.9975 14.1901 5.8195 13.616C5.6425 13.0409 5.5535 12.377 5.5535 11.6255C5.5535 10.5558 5.8955 9.64519 6.5805 8.89263C6.2605 8.08908 6.2905 7.18662 6.6715 6.18831C6.9235 6.10775 7.2965 6.16791 7.7905 6.36676C8.2845 6.56561 8.6465 6.7359 8.8765 6.87662C9.1065 7.01939 9.2905 7.13869 9.4295 7.23557C10.2425 7.00486 11.0826 6.88889 11.9265 6.8909C12.7855 6.8909 13.6175 7.00613 14.4245 7.23557L14.9185 6.91741C15.2985 6.68476 15.6993 6.48946 16.1155 6.33413C16.5755 6.15669 16.9255 6.10877 17.1695 6.18831C17.5595 7.18764 17.5935 8.08908 17.2725 8.89365C17.9575 9.64519 18.3005 10.5558 18.3005 11.6265C18.3005 12.3781 18.2115 13.044 18.0335 13.6221C17.8565 14.2013 17.6265 14.6653 17.3445 15.0151C17.0509 15.3739 16.6937 15.6731 16.2915 15.8972C15.8715 16.1358 15.4635 16.3081 15.0685 16.4142C14.6308 16.5284 14.1852 16.6085 13.7355 16.6538C14.1855 17.0515 14.4115 17.6786 14.4115 18.5362V21.7076C14.4115 21.8575 14.4325 21.9788 14.4765 22.0716C14.4967 22.1163 14.5256 22.1564 14.5613 22.1895C14.597 22.2226 14.6389 22.2481 14.6845 22.2643C14.7805 22.299 14.8645 22.3215 14.9385 22.3296C15.0125 22.3398 15.1185 22.3429 15.2565 22.3429H18.1705C19.2025 22.3429 20.0845 21.9696 20.8175 21.2222C21.5495 20.4757 21.9165 19.5753 21.9165 18.523V5.79061C21.9165 4.73825 21.5505 3.83885 20.8175 3.09139Z" fill="#747474"/>
</svg>

@@ -1,5 +1,3 @@
<svg width="800px" height="800px" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg">
<title>github</title>
<rect width="24" height="24" fill="none"/>
<path d="M12,2A10,10,0,0,0,8.84,21.5c.5.08.66-.23.66-.5V19.31C6.73,19.91,6.14,18,6.14,18A2.69,2.69,0,0,0,5,16.5c-.91-.62.07-.6.07-.6a2.1,2.1,0,0,1,1.53,1,2.15,2.15,0,0,0,2.91.83,2.16,2.16,0,0,1,.63-1.34C8,16.17,5.62,15.31,5.62,11.5a3.87,3.87,0,0,1,1-2.71,3.58,3.58,0,0,1,.1-2.64s.84-.27,2.75,1a9.63,9.63,0,0,1,5,0c1.91-1.29,2.75-1,2.75-1a3.58,3.58,0,0,1,.1,2.64,3.87,3.87,0,0,1,1,2.71c0,3.82-2.34,4.66-4.57,4.91a2.39,2.39,0,0,1,.69,1.85V21c0,.27.16.59.67.5A10,10,0,0,0,12,2Z" fill="black" fill-opacity="0.54"/>
<svg width="20" height="20" viewBox="0 0 20 20" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M10 0.299927C8.68678 0.299927 7.38642 0.558584 6.17317 1.06113C4.95991 1.56368 3.85752 2.30027 2.92893 3.22886C1.05357 5.10422 0 7.64776 0 10.2999C0 14.7199 2.87 18.4699 6.84 19.7999C7.34 19.8799 7.5 19.5699 7.5 19.2999V17.6099C4.73 18.2099 4.14 16.2699 4.14 16.2699C3.68 15.1099 3.03 14.7999 3.03 14.7999C2.12 14.1799 3.1 14.1999 3.1 14.1999C4.1 14.2699 4.63 15.2299 4.63 15.2299C5.5 16.7499 6.97 16.2999 7.54 16.0599C7.63 15.4099 7.89 14.9699 8.17 14.7199C5.95 14.4699 3.62 13.6099 3.62 9.79993C3.62 8.68993 4 7.79993 4.65 7.08993C4.55 6.83993 4.2 5.79993 4.75 4.44993C4.75 4.44993 5.59 4.17993 7.5 5.46993C8.29 5.24993 9.15 5.13993 10 5.13993C10.85 5.13993 11.71 5.24993 12.5 5.46993C14.41 4.17993 15.25 4.44993 15.25 4.44993C15.8 5.79993 15.45 6.83993 15.35 7.08993C16 7.79993 16.38 8.68993 16.38 9.79993C16.38 13.6199 14.04 14.4599 11.81 14.7099C12.17 15.0199 12.5 15.6299 12.5 16.5599V19.2999C12.5 19.5699 12.66 19.8899 13.17 19.7999C17.14 18.4599 20 14.7199 20 10.2999C20 8.98671 19.7413 7.68635 19.2388 6.47309C18.7362 5.25984 17.9997 4.15744 17.0711 3.22886C16.1425 2.30027 15.0401 1.56368 13.8268 1.06113C12.6136 0.558584 11.3132 0.299927 10 0.299927Z" fill="black"/>
</svg>

@@ -0,0 +1,3 @@
<svg width="21" height="21" viewBox="0 0 21 21" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M19.25 5.25H12.25L10.5 3.5H5.25C4.2875 3.5 3.50875 4.2875 3.50875 5.25L3.5 15.75C3.5 16.7125 4.2875 17.5 5.25 17.5H19.25C20.2125 17.5 21 16.7125 21 15.75V7C21 6.0375 20.2125 5.25 19.25 5.25ZM19.25 15.75H5.25V5.25H9.77375L11.5238 7H19.25V15.75ZM17.5 10.5H7V8.75H17.5V10.5ZM14 14H7V12.25H14V14Z" fill="#949494"/>
</svg>

@@ -0,0 +1,4 @@
<svg width="24" height="25" viewBox="0 0 24 25" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M10.7519 13.3399C10.7519 12.7699 10.2819 12.2999 9.71187 12.2999C9.14187 12.2999 8.67188 12.7699 8.67188 13.3399C8.67188 13.6158 8.78145 13.8803 8.97648 14.0753C9.17152 14.2704 9.43605 14.3799 9.71187 14.3799C9.9877 14.3799 10.2522 14.2704 10.4473 14.0753C10.6423 13.8803 10.7519 13.6158 10.7519 13.3399ZM14.0919 15.7099C13.6419 16.1599 12.6819 16.3199 12.0019 16.3199C11.3219 16.3199 10.3619 16.1599 9.91187 15.7099C9.88755 15.6839 9.85813 15.6631 9.82545 15.6489C9.79276 15.6347 9.75751 15.6274 9.72188 15.6274C9.68624 15.6274 9.65099 15.6347 9.6183 15.6489C9.58562 15.6631 9.5562 15.6839 9.53187 15.7099C9.50583 15.7343 9.48507 15.7637 9.47088 15.7964C9.45668 15.829 9.44936 15.8643 9.44936 15.8999C9.44936 15.9356 9.45668 15.9708 9.47088 16.0035C9.48507 16.0362 9.50583 16.0656 9.53187 16.0899C10.2419 16.7999 11.6019 16.8599 12.0019 16.8599C12.4019 16.8599 13.7619 16.7999 14.4719 16.0899C14.4979 16.0656 14.5187 16.0362 14.5329 16.0035C14.5471 15.9708 14.5544 15.9356 14.5544 15.8999C14.5544 15.8643 14.5471 15.829 14.5329 15.7964C14.5187 15.7637 14.4979 15.7343 14.4719 15.7099C14.3719 15.6099 14.2019 15.6099 14.0919 15.7099ZM14.2919 12.2999C13.7219 12.2999 13.2519 12.7699 13.2519 13.3399C13.2519 13.9099 13.7219 14.3799 14.2919 14.3799C14.8619 14.3799 15.3319 13.9099 15.3319 13.3399C15.3319 12.7699 14.8719 12.2999 14.2919 12.2999Z" fill="black"/>
<path d="M12 2.29993C6.48 2.29993 2 6.77993 2 12.2999C2 17.8199 6.48 22.2999 12 22.2999C17.52 22.2999 22 17.8199 22 12.2999C22 6.77993 17.52 2.29993 12 2.29993ZM17.8 13.6299C17.82 13.7699 17.83 13.9199 17.83 14.0699C17.83 16.3099 15.22 18.1299 12 18.1299C8.78 18.1299 6.17 16.3099 6.17 14.0699C6.17 13.9199 6.18 13.7699 6.2 13.6299C5.69 13.3999 5.34 12.8899 5.34 12.2999C5.33852 12.0132 5.4218 11.7324 5.57939 11.4928C5.73698 11.2532 5.96185 11.0656 6.22576 10.9534C6.48966 10.8412 6.78083 10.8095 7.06269 10.8622C7.34456 10.915 7.60454 11.0499 7.81 11.2499C8.82 10.5199 10.22 10.0599 11.77 10.0099L12.51 6.51993C12.52 6.44993 12.56 6.38993 12.62 6.35993C12.68 6.31993 12.75 6.30993 12.82 6.31993L15.24 6.83993C15.3221 6.67351 15.4472 6.53207 15.6023 6.4303C15.7575 6.32853 15.9371 6.27013 16.1224 6.26115C16.3077 6.25217 16.4921 6.29294 16.6564 6.37924C16.8207 6.46553 16.9589 6.59421 17.0566 6.75191C17.1544 6.90962 17.2082 7.09062 17.2125 7.27613C17.2167 7.46164 17.1712 7.64491 17.0808 7.80692C16.9903 7.96894 16.8582 8.1038 16.698 8.19753C16.5379 8.29125 16.3556 8.34042 16.17 8.33993C15.61 8.33993 15.16 7.89993 15.13 7.34993L12.96 6.88993L12.3 10.0099C13.83 10.0599 15.2 10.5299 16.2 11.2499C16.3533 11.1035 16.5367 10.9924 16.7375 10.9243C16.9382 10.8562 17.1514 10.8328 17.3621 10.8557C17.5728 10.8787 17.776 10.9473 17.9574 11.057C18.1388 11.1666 18.2941 11.3145 18.4123 11.4905C18.5306 11.6664 18.609 11.866 18.642 12.0754C18.6751 12.2847 18.662 12.4988 18.6037 12.7026C18.5454 12.9064 18.4432 13.0949 18.3044 13.2551C18.1656 13.4153 17.9934 13.5432 17.8 13.6299Z" fill="black"/>
</svg>

@@ -0,0 +1,3 @@
<svg width="15" height="15" viewBox="0 0 15 15" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M10.4798 10.739C9.27414 11.6748 7.7572 12.116 6.23773 11.9728C4.71826 11.8296 3.31047 11.1127 2.30094 9.96806C1.2914 8.82345 0.756002 7.33714 0.803717 5.81168C0.851432 4.28622 1.47868 2.83628 2.55777 1.75699C3.63706 0.677895 5.087 0.0506505 6.61246 0.00293578C8.13792 -0.044779 9.62423 0.490623 10.7688 1.50016C11.9135 2.50969 12.6303 3.91747 12.7736 5.43694C12.9168 6.95641 12.4756 8.47336 11.5398 9.67899L14.5798 12.719C14.6785 12.8107 14.7507 12.9273 14.7887 13.0565C14.8267 13.1858 14.8291 13.3229 14.7958 13.4534C14.7624 13.5839 14.6944 13.703 14.5991 13.7982C14.5037 13.8933 14.3844 13.961 14.2538 13.994C14.1234 14.0274 13.9864 14.0251 13.8573 13.9872C13.7281 13.9494 13.6115 13.8775 13.5198 13.779L10.4798 10.739ZM11.2998 5.99899C11.3087 5.4026 11.1989 4.81039 10.9768 4.25681C10.7547 3.70323 10.4248 3.19934 10.0062 2.77445C9.58757 2.34955 9.08865 2.01214 8.53844 1.78183C7.98824 1.55152 7.39773 1.43292 6.80127 1.43292C6.20481 1.43292 5.6143 1.55152 5.0641 1.78183C4.5139 2.01214 4.01498 2.34955 3.59637 2.77445C3.17777 3.19934 2.84783 3.70323 2.62575 4.25681C2.40367 4.81039 2.29388 5.4026 2.30277 5.99899C2.32039 7.18045 2.80208 8.30756 3.6438 9.13682C4.48552 9.96608 5.61968 10.4309 6.80127 10.4309C7.98286 10.4309 9.11703 9.96608 9.95874 9.13682C10.8005 8.30756 11.2822 7.18045 11.2998 5.99899Z" fill="#59636E"/>
</svg>

@@ -1 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" width="1em" height="1em" viewBox="0 0 24 24"><path fill="white" d="M10.72,19.9a8,8,0,0,1-6.5-9.79A7.77,7.77,0,0,1,10.4,4.16a8,8,0,0,1,9.49,6.52A1.54,1.54,0,0,0,21.38,12h.13a1.37,1.37,0,0,0,1.38-1.54,11,11,0,1,0-12.7,12.39A1.54,1.54,0,0,0,12,21.34h0A1.47,1.47,0,0,0,10.72,19.9Z"><animateTransform attributeName="transform" dur="0.75s" repeatCount="indefinite" type="rotate" values="0 12 12;360 12 12"/></path></svg>
<svg xmlns="http://www.w3.org/2000/svg" width="1em" height="1em" viewBox="0 0 24 24"><path fill="black" d="M10.72,19.9a8,8,0,0,1-6.5-9.79A7.77,7.77,0,0,1,10.4,4.16a8,8,0,0,1,9.49,6.52A1.54,1.54,0,0,0,21.38,12h.13a1.37,1.37,0,0,0,1.38-1.54,11,11,0,1,0-12.7,12.39A1.54,1.54,0,0,0,12,21.34h0A1.47,1.47,0,0,0,10.72,19.9Z"><animateTransform attributeName="transform" dur="0.75s" repeatCount="indefinite" type="rotate" values="0 12 12;360 12 12"/></path></svg>

@@ -1 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" width="1em" height="1em" viewBox="0 0 24 24"><path fill="black" d="M10.72,19.9a8,8,0,0,1-6.5-9.79A7.77,7.77,0,0,1,10.4,4.16a8,8,0,0,1,9.49,6.52A1.54,1.54,0,0,0,21.38,12h.13a1.37,1.37,0,0,0,1.38-1.54,11,11,0,1,0-12.7,12.39A1.54,1.54,0,0,0,12,21.34h0A1.47,1.47,0,0,0,10.72,19.9Z"><animateTransform attributeName="transform" dur="0.75s" repeatCount="indefinite" type="rotate" values="0 12 12;360 12 12"/></path></svg>
<svg xmlns="http://www.w3.org/2000/svg" width="1em" height="1em" viewBox="0 0 24 24"><path fill="white" d="M10.72,19.9a8,8,0,0,1-6.5-9.79A7.77,7.77,0,0,1,10.4,4.16a8,8,0,0,1,9.49,6.52A1.54,1.54,0,0,0,21.38,12h.13a1.37,1.37,0,0,0,1.38-1.54,11,11,0,1,0-12.7,12.39A1.54,1.54,0,0,0,12,21.34h0A1.47,1.47,0,0,0,10.72,19.9Z"><animateTransform attributeName="transform" dur="0.75s" repeatCount="indefinite" type="rotate" values="0 12 12;360 12 12"/></path></svg>

@@ -0,0 +1,3 @@
<svg width="22" height="23" viewBox="0 0 22 23" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M16.304 8.62401L19.486 11.806C20.0023 12.3155 20.4128 12.922 20.6937 13.5908C20.9747 14.2596 21.1206 14.9773 21.123 15.7026C21.1254 16.428 20.9843 17.1467 20.7079 17.8173C20.4314 18.4879 20.025 19.0972 19.5121 19.6102C18.9992 20.1231 18.3899 20.5295 17.7193 20.8059C17.0486 21.0824 16.33 21.2235 15.6046 21.221C14.8792 21.2186 14.1615 21.0727 13.4928 20.7918C12.824 20.5108 12.2174 20.1003 11.708 19.584L10.648 18.524C10.5046 18.3857 10.3903 18.2202 10.3116 18.0373C10.2329 17.8543 10.1914 17.6575 10.1896 17.4583C10.1878 17.2592 10.2256 17.0616 10.301 16.8772C10.3763 16.6929 10.4876 16.5253 10.6284 16.3844C10.7691 16.2435 10.9366 16.1321 11.1209 16.0566C11.3052 15.9811 11.5027 15.943 11.7019 15.9446C11.901 15.9463 12.0979 15.9876 12.2809 16.0661C12.464 16.1446 12.6295 16.2588 12.768 16.402L13.83 17.463C14.2996 17.9284 14.9344 18.1888 15.5955 18.1873C16.2567 18.1857 16.8903 17.9224 17.3577 17.4548C17.8252 16.9872 18.0884 16.3535 18.0897 15.6924C18.0911 15.0312 17.8305 14.3965 17.365 13.927L14.183 10.745C13.839 10.4009 13.4022 10.1647 12.926 10.0652C12.4498 9.96574 11.9549 10.0074 11.502 10.185C11.3406 10.249 11.1893 10.3143 11.048 10.381L10.584 10.598C9.96396 10.878 9.48696 10.998 8.87996 10.392C8.00796 9.52001 8.23396 8.71501 9.29696 7.98201C10.3559 7.25337 11.6365 6.91862 12.9166 7.0359C14.1966 7.15318 15.3951 7.71508 16.304 8.62401ZM10.294 2.61401L11.354 3.67401C11.6273 3.95678 11.7787 4.33562 11.7755 4.72891C11.7722 5.12221 11.6147 5.49851 11.3367 5.77675C11.0587 6.055 10.6826 6.21293 10.2893 6.21653C9.89597 6.22013 9.517 6.06912 9.23396 5.79601L8.17296 4.73601C7.94241 4.49717 7.66661 4.30664 7.36163 4.17553C7.05666 4.04442 6.72863 3.97536 6.39668 3.97239C6.06474 3.96941 5.73552 4.03257 5.42824 4.15818C5.12097 4.2838 4.84179 4.46935 4.60699 4.70402C4.37219 4.93868 4.18648 5.21776 4.06069 5.52496C3.9349 5.83217 3.87155 6.16135 3.87434 6.4933C3.87713 6.82525 3.94601 7.15332 4.07694 7.45836C4.20788 7.76341 4.39825 8.03933 4.63696 8.27001L7.81896 11.452C8.16289 11.7961 8.59974 12.0323 9.07595 12.1318C9.55217 12.2313 10.0471 12.1896 10.5 12.012C10.6613 11.948 10.8126 11.8827 10.954 11.816L11.418 11.599C12.038 11.319 12.516 11.199 13.122 11.805C13.994 12.677 13.768 13.482 12.705 14.215C11.6461 14.9437 10.3654 15.2784 9.08537 15.1611C7.80535 15.0438 6.60683 14.4819 5.69796 13.573L2.51596 10.391C1.99962 9.88154 1.58916 9.27497 1.30821 8.60622C1.02726 7.93747 0.881367 7.21975 0.878937 6.49438C0.876507 5.76901 1.01759 5.05033 1.29405 4.37971C1.57052 3.70909 1.9769 3.09978 2.48982 2.58686C3.00273 2.07395 3.61204 1.66756 4.28266 1.3911C4.95328 1.11463 5.67196 0.973553 6.39733 0.975983C7.1227 0.978413 7.84042 1.1243 8.50917 1.40526C9.17793 1.68621 9.7845 2.09767 10.294 2.61401Z" fill="black"/>
</svg>

@@ -41,13 +41,13 @@ export default function ActionButtons({
navigate('/');
};
return (
<div className="fixed right-4 top-0 z-10 flex h-16 flex-col justify-center">
<div className="fixed top-0 right-4 z-10 flex h-16 flex-col justify-center">
<div className={`flex items-center gap-2 sm:gap-4 ${className}`}>
{showNewChat && (
<button
title="Open New Chat"
onClick={newChat}
className="flex items-center gap-1 rounded-full p-2 hover:bg-bright-gray dark:hover:bg-[#28292E] lg:hidden"
className="hover:bg-bright-gray flex items-center gap-1 rounded-full p-2 lg:hidden dark:hover:bg-[#28292E]"
>
<img
className="filter dark:invert"
@@ -64,7 +64,7 @@ export default function ActionButtons({
<button
title="Share"
onClick={() => setShareModalState(true)}
className="rounded-full p-2 hover:bg-bright-gray dark:hover:bg-[#28292E]"
className="hover:bg-bright-gray rounded-full p-2 dark:hover:bg-[#28292E]"
>
<img
className="filter dark:invert"

@@ -0,0 +1,696 @@
import React, { useState, useEffect, useRef } from 'react';
import { useSelector } from 'react-redux';
import { useTranslation } from 'react-i18next';
import { selectToken } from '../preferences/preferenceSlice';
import {
useDarkTheme,
useLoaderState,
useMediaQuery,
useOutsideAlerter,
} from '../hooks';
import userService from '../api/services/userService';
import ArrowLeft from '../assets/arrow-left.svg';
import NoFilesIcon from '../assets/no-files.svg';
import NoFilesDarkIcon from '../assets/no-files-dark.svg';
import SkeletonLoader from './SkeletonLoader';
import ConfirmationModal from '../modals/ConfirmationModal';
import { ActiveState } from '../models/misc';
import { ChunkType } from '../settings/types';
import Pagination from './DocumentPagination';
import FileIcon from '../assets/file.svg';
import FolderIcon from '../assets/folder.svg';
import SearchIcon from '../assets/search.svg';
interface LineNumberedTextareaProps {
value: string;
onChange: (value: string) => void;
placeholder?: string;
ariaLabel?: string;
className?: string;
editable?: boolean;
onDoubleClick?: () => void;
}
const LineNumberedTextarea: React.FC<LineNumberedTextareaProps> = ({
value,
onChange,
placeholder,
ariaLabel,
className = '',
editable = true,
onDoubleClick,
}) => {
const { isMobile } = useMediaQuery();
const handleChange = (e: React.ChangeEvent<HTMLTextAreaElement>) => {
onChange(e.target.value);
};
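// The gutter and the textarea share a fixed pixel line height so the row numbers stay aligned;
// the line count is padded so short content still fills the visible editor height.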
const lineHeight = 19.93;
const contentLines = value.split('\n').length;
const heightOffset = isMobile ? 200 : 300;
const minLinesForDisplay = Math.ceil(
(typeof window !== 'undefined' ? window.innerHeight - heightOffset : 600) /
lineHeight,
);
const totalLines = Math.max(contentLines, minLinesForDisplay);
return (
<div className={`relative w-full ${className}`}>
<div
className="pointer-events-none absolute top-0 left-0 w-8 pr-2 text-right font-mono text-xs leading-[19.93px] text-gray-500 select-none lg:w-12 lg:pr-3 lg:text-sm dark:text-gray-400"
style={{
height: `${totalLines * lineHeight}px`,
}}
>
{Array.from({ length: totalLines }, (_, i) => (
<div
key={i + 1}
className="flex h-[19.93px] items-center justify-end leading-[19.93px]"
>
{i + 1}
</div>
))}
</div>
<textarea
className={`w-full resize-none overflow-hidden border-none bg-transparent pl-8 font-['Inter'] text-[13.68px] leading-[19.93px] text-[#18181B] outline-none lg:pl-12 dark:text-white ${isMobile ? 'min-h-[calc(100vh-200px)]' : 'min-h-[calc(100vh-300px)]'} ${!editable ? 'select-none' : ''}`}
value={value}
onChange={editable ? handleChange : undefined}
onDoubleClick={onDoubleClick}
placeholder={placeholder}
aria-label={ariaLabel}
rows={totalLines}
readOnly={!editable}
style={{
height: `${totalLines * lineHeight}px`,
}}
/>
</div>
);
};
interface SearchResult {
path: string;
isFile: boolean;
}
interface ChunksProps {
documentId: string;
documentName?: string;
handleGoBack: () => void;
path?: string;
onFileSearch?: (query: string) => SearchResult[];
onFileSelect?: (path: string) => void;
}
const Chunks: React.FC<ChunksProps> = ({
documentId,
documentName,
handleGoBack,
path,
onFileSearch,
onFileSelect,
}) => {
const [fileSearchQuery, setFileSearchQuery] = useState('');
const [fileSearchResults, setFileSearchResults] = useState<SearchResult[]>(
[],
);
const searchDropdownRef = useRef<HTMLDivElement>(null);
const { t } = useTranslation();
const token = useSelector(selectToken);
const [isDarkTheme] = useDarkTheme();
const [paginatedChunks, setPaginatedChunks] = useState<ChunkType[]>([]);
const [page, setPage] = useState(1);
const [perPage, setPerPage] = useState(5);
const [totalChunks, setTotalChunks] = useState(0);
const [loading, setLoading] = useLoaderState(true);
const [searchTerm, setSearchTerm] = useState<string>('');
const [editingChunk, setEditingChunk] = useState<ChunkType | null>(null);
const [editingTitle, setEditingTitle] = useState('');
const [editingText, setEditingText] = useState('');
const [isAddingChunk, setIsAddingChunk] = useState(false);
const [deleteModalState, setDeleteModalState] =
useState<ActiveState>('INACTIVE');
const [chunkToDelete, setChunkToDelete] = useState<ChunkType | null>(null);
const [isEditing, setIsEditing] = useState(false);
const pathParts = path ? path.split('/') : [];
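// Fetch the current page of chunks for this source, optionally scoped to a sub-path and filtered by the search term.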
const fetchChunks = () => {
setLoading(true);
try {
userService
.getDocumentChunks(documentId, page, perPage, token, path, searchTerm)
.then((response) => {
if (!response.ok) {
setLoading(false);
setPaginatedChunks([]);
throw new Error('Failed to fetch chunks data');
}
return response.json();
})
.then((data) => {
setPage(data.page);
setPerPage(data.per_page);
setTotalChunks(data.total);
setPaginatedChunks(data.chunks);
setLoading(false);
})
.catch((error) => {
setLoading(false);
setPaginatedChunks([]);
});
} catch (e) {
setLoading(false);
setPaginatedChunks([]);
}
};
const handleAddChunk = (title: string, text: string) => {
if (!text.trim()) {
return;
}
try {
const metadata = {
source: path || documentName,
source_id: documentId,
title: title,
};
userService
.addChunk(
{
id: documentId,
text: text,
metadata: metadata,
},
token,
)
.then((response) => {
if (!response.ok) {
throw new Error('Failed to add chunk');
}
fetchChunks();
});
} catch (e) {
console.log(e);
}
};
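// Update an existing chunk; skip the request when neither the title nor the text has changed.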
const handleUpdateChunk = (title: string, text: string, chunk: ChunkType) => {
if (!text.trim()) {
return;
}
const originalTitle = chunk.metadata?.title || '';
const originalText = chunk.text || '';
if (title === originalTitle && text === originalText) {
return;
}
try {
userService
.updateChunk(
{
id: documentId,
chunk_id: chunk.doc_id,
text: text,
metadata: {
title: title,
},
},
token,
)
.then((response) => {
if (!response.ok) {
throw new Error('Failed to update chunk');
}
fetchChunks();
});
} catch (e) {
console.log(e);
}
};
const handleDeleteChunk = (chunk: ChunkType) => {
try {
userService
.deleteChunk(documentId, chunk.doc_id, token)
.then((response) => {
if (!response.ok) {
throw new Error('Failed to delete chunk');
}
setEditingChunk(null);
fetchChunks();
});
} catch (e) {
console.log(e);
}
};
const confirmDeleteChunk = (chunk: ChunkType) => {
setChunkToDelete(chunk);
setDeleteModalState('ACTIVE');
};
const handleConfirmedDelete = () => {
if (chunkToDelete) {
handleDeleteChunk(chunkToDelete);
setDeleteModalState('INACTIVE');
setChunkToDelete(null);
}
};
const handleCancelDelete = () => {
setDeleteModalState('INACTIVE');
setChunkToDelete(null);
};
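// Debounce the chunk search input: jumping back to page 1 lets the page effect below refetch; when already on page 1, refetch directly.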
useEffect(() => {
const delayDebounceFn = setTimeout(() => {
if (page !== 1) {
setPage(1);
} else {
fetchChunks();
}
}, 300);
return () => clearTimeout(delayDebounceFn);
}, [searchTerm]);
useEffect(() => {
!loading && fetchChunks();
}, [page, perPage, path]);
useEffect(() => {
setSearchTerm('');
setPage(1);
}, [path]);
const filteredChunks = paginatedChunks;
const renderPathNavigation = () => {
return (
<div className="mb-0 flex min-h-[38px] flex-col gap-2 text-base sm:flex-row sm:items-center sm:justify-between">
<div className="flex w-full items-center sm:w-auto">
<button
className="mr-3 flex h-[29px] w-[29px] items-center justify-center rounded-full border p-2 text-sm font-medium text-gray-400 transition-all duration-200 dark:border-0 dark:bg-[#28292D] dark:text-gray-500 dark:hover:bg-[#2E2F34]"
onClick={
editingChunk
? () => setEditingChunk(null)
: isAddingChunk
? () => setIsAddingChunk(false)
: handleGoBack
}
>
<img src={ArrowLeft} alt="left-arrow" className="h-3 w-3" />
</button>
<div className="flex flex-wrap items-center">
{/* Removed the directory icon */}
<span className="font-semibold break-words text-[#7D54D1]">
{documentName}
</span>
{pathParts.length > 0 && (
<>
<span className="mx-1 flex-shrink-0 text-gray-500">/</span>
{pathParts.map((part, index) => (
<React.Fragment key={index}>
<span
className={`break-words ${
index < pathParts.length - 1
? 'font-medium text-[#7D54D1]'
: 'text-gray-700 dark:text-gray-300'
}`}
>
{part}
</span>
{index < pathParts.length - 1 && (
<span className="mx-1 flex-shrink-0 text-gray-500">
/
</span>
)}
</React.Fragment>
))}
</>
)}
</div>
</div>
<div className="mt-2 flex w-full flex-row flex-nowrap items-center justify-end gap-2 overflow-x-auto sm:mt-0 sm:w-auto">
{editingChunk ? (
!isEditing ? (
<>
<button
className="bg-purple-30 hover:bg-violets-are-blue flex h-[38px] min-w-[108px] items-center justify-center rounded-full px-4 text-[14px] font-medium whitespace-nowrap text-white"
onClick={() => setIsEditing(true)}
>
{t('modals.chunk.edit')}
</button>
<button
className="flex h-[38px] min-w-[108px] items-center justify-center rounded-full border border-solid border-red-500 px-4 py-1 text-[14px] font-medium text-nowrap text-red-500 hover:bg-red-500 hover:text-white"
onClick={() => {
confirmDeleteChunk(editingChunk);
}}
>
{t('modals.chunk.delete')}
</button>
</>
) : (
<>
<button
onClick={() => {
setIsEditing(false);
}}
className="dark:text-light-gray flex h-[38px] min-w-[108px] cursor-pointer items-center justify-center rounded-full px-4 py-1 text-sm font-medium text-nowrap hover:bg-gray-100 dark:bg-transparent dark:hover:bg-[#767183]/50"
>
{t('modals.chunk.cancel')}
</button>
<button
onClick={() => {
if (editingText.trim()) {
const hasChanges =
editingTitle !==
(editingChunk?.metadata?.title || '') ||
editingText !== (editingChunk?.text || '');
if (hasChanges) {
handleUpdateChunk(
editingTitle,
editingText,
editingChunk,
);
}
setIsEditing(false);
setEditingChunk(null);
}
}}
disabled={
!editingText.trim() ||
(editingTitle === (editingChunk?.metadata?.title || '') &&
editingText === (editingChunk?.text || ''))
}
className={`flex h-[38px] min-w-[108px] items-center justify-center rounded-full px-4 py-1 text-[14px] font-medium text-nowrap text-white transition-all ${
editingText.trim() &&
(editingTitle !== (editingChunk?.metadata?.title || '') ||
editingText !== (editingChunk?.text || ''))
? 'bg-purple-30 hover:bg-violets-are-blue cursor-pointer'
: 'cursor-not-allowed bg-gray-400'
}`}
>
{t('modals.chunk.save')}
</button>
</>
)
) : isAddingChunk ? (
<>
<button
onClick={() => setIsAddingChunk(false)}
className="dark:text-light-gray flex h-[38px] min-w-[108px] cursor-pointer items-center justify-center rounded-full px-4 py-1 text-sm font-medium text-nowrap hover:bg-gray-100 dark:bg-transparent dark:hover:bg-[#767183]/50"
>
{t('modals.chunk.cancel')}
</button>
<button
onClick={() => {
if (editingText.trim()) {
handleAddChunk(editingTitle, editingText);
setIsAddingChunk(false);
}
}}
disabled={!editingText.trim()}
className={`flex h-[38px] min-w-[108px] items-center justify-center rounded-full px-4 py-1 text-[14px] font-medium text-nowrap text-white transition-all ${
editingText.trim()
? 'bg-purple-30 hover:bg-violets-are-blue cursor-pointer'
: 'cursor-not-allowed bg-gray-400'
}`}
>
{t('modals.chunk.add')}
</button>
</>
) : null}
</div>
</div>
);
};
// File search handling
const handleFileSearchChange = (query: string) => {
setFileSearchQuery(query);
if (query.trim() && onFileSearch) {
const results = onFileSearch(query);
setFileSearchResults(results);
} else {
setFileSearchResults([]);
}
};
const handleSearchResultClick = (result: SearchResult) => {
if (!onFileSelect) return;
if (result.isFile) {
onFileSelect(result.path);
} else {
// For directories, navigate to the directory and return to file tree
onFileSelect(result.path);
handleGoBack();
}
setFileSearchQuery('');
setFileSearchResults([]);
};
useOutsideAlerter(
searchDropdownRef,
() => {
setFileSearchQuery('');
setFileSearchResults([]);
},
[], // No additional dependencies
false, // Don't handle escape key
);
const renderFileSearch = () => {
return (
<div className="relative" ref={searchDropdownRef}>
<div className="relative flex items-center">
<div className="pointer-events-none absolute left-3">
<img src={SearchIcon} alt="Search" className="h-4 w-4" />
</div>
<input
type="text"
value={fileSearchQuery}
onChange={(e) => handleFileSearchChange(e.target.value)}
placeholder={t('settings.sources.searchFiles')}
className={`h-[38px] w-full border border-[#D1D9E0] py-2 pr-4 pl-10 dark:border-[#6A6A6A] ${
fileSearchQuery ? 'rounded-t-[6px]' : 'rounded-[6px]'
} bg-transparent transition-all duration-200 focus:outline-none dark:text-[#E0E0E0]`}
/>
</div>
{fileSearchQuery && (
<div className="absolute z-10 max-h-[calc(100vh-200px)] w-full overflow-hidden rounded-b-[6px] border border-t-0 border-[#D1D9E0] bg-white shadow-lg dark:border-[#6A6A6A] dark:bg-[#1F2023]">
<div className="max-h-[calc(100vh-200px)] overflow-x-hidden overflow-y-auto">
{fileSearchResults.length === 0 ? (
<div className="py-2 text-center text-sm text-gray-500 dark:text-gray-400">
{t('settings.sources.noResults')}
</div>
) : (
fileSearchResults.map((result, index) => (
<div
key={index}
title={result.path}
onClick={() => handleSearchResultClick(result)}
className={`flex cursor-pointer items-center px-3 py-2 hover:bg-[#ECEEEF] dark:hover:bg-[#27282D] ${
index !== fileSearchResults.length - 1
? 'border-b border-[#D1D9E0] dark:border-[#6A6A6A]'
: ''
}`}
>
<img
src={result.isFile ? FileIcon : FolderIcon}
alt={result.isFile ? 'File' : 'Folder'}
className="mr-2 h-4 w-4 flex-shrink-0"
/>
<span className="truncate text-sm dark:text-[#E0E0E0]">
{result.path.split('/').pop() || result.path}
</span>
</div>
))
)}
</div>
</div>
)}
</div>
);
};
return (
<div className="flex flex-col">
<div className="mb-2">{renderPathNavigation()}</div>
<div className="flex gap-4">
{onFileSearch && onFileSelect && (
<div className="hidden w-[198px] lg:block">{renderFileSearch()}</div>
)}
{/* Right side: Chunks content */}
<div className="flex-1">
{!editingChunk && !isAddingChunk ? (
<>
<div className="mb-3 flex flex-col items-start justify-between gap-3 sm:flex-row sm:items-center">
<div className="flex h-[38px] w-full flex-1 items-center overflow-hidden rounded-md border border-[#D1D9E0] dark:border-[#6A6A6A]">
<div className="flex h-full items-center px-4 font-medium whitespace-nowrap text-gray-700 dark:text-[#E0E0E0]">
{totalChunks > 999999
? `${(totalChunks / 1000000).toFixed(2)}M`
: totalChunks > 999
? `${(totalChunks / 1000).toFixed(2)}K`
: totalChunks}{' '}
{t('settings.sources.chunks')}
</div>
<div className="h-full w-[1px] bg-[#D1D9E0] dark:bg-[#6A6A6A]"></div>
<div className="h-full flex-1">
<input
type="text"
placeholder={t('settings.sources.searchPlaceholder')}
value={searchTerm}
onChange={(e) => setSearchTerm(e.target.value)}
className="h-full w-full border-none bg-transparent px-3 py-2 text-[13.56px] leading-[100%] font-normal outline-none dark:text-[#E0E0E0]"
/>
</div>
</div>
<button
className="bg-purple-30 hover:bg-violets-are-blue flex h-[38px] w-full min-w-[108px] shrink-0 items-center justify-center rounded-full px-4 text-[14px] font-medium whitespace-normal text-white sm:w-auto"
title={t('settings.sources.addChunk')}
onClick={() => {
setIsAddingChunk(true);
setEditingTitle('');
setEditingText('');
}}
>
{t('settings.sources.addChunk')}
</button>
</div>
{loading ? (
<div className="grid w-full grid-cols-1 justify-items-start gap-4 sm:[grid-template-columns:repeat(auto-fit,minmax(400px,1fr))]">
<SkeletonLoader component="chunkCards" count={perPage} />
</div>
) : (
<div className="grid w-full grid-cols-1 justify-items-start gap-4 sm:[grid-template-columns:repeat(auto-fit,minmax(400px,1fr))]">
{filteredChunks.length === 0 ? (
<div className="col-span-full flex min-h-[50vh] w-full flex-col items-center justify-center text-center text-gray-500 dark:text-gray-400">
<img
src={isDarkTheme ? NoFilesDarkIcon : NoFilesIcon}
alt={t('settings.sources.noChunksAlt')}
className="mx-auto mb-2 h-24 w-24"
/>
{t('settings.sources.noChunks')}
</div>
) : (
filteredChunks.map((chunk, index) => (
<div
key={index}
className="relative flex h-[197px] w-full max-w-[487px] transform cursor-pointer flex-col justify-between overflow-hidden rounded-[5.86px] border border-[#D1D9E0] transition-transform duration-200 hover:scale-105 dark:border-[#6A6A6A]"
onClick={() => {
setEditingChunk(chunk);
setEditingTitle(chunk.metadata?.title || '');
setEditingText(chunk.text || '');
}}
>
<div className="w-full">
<div className="flex w-full items-center justify-between border-b border-[#D1D9E0] bg-[#F6F8FA] px-4 py-3 dark:border-[#6A6A6A] dark:bg-[#27282D]">
<div className="text-sm text-[#59636E] dark:text-[#E0E0E0]">
{chunk.metadata.token_count
? chunk.metadata.token_count.toLocaleString()
: '-'}{' '}
{t('settings.sources.tokensUnit')}
</div>
</div>
<div className="px-4 pt-3 pb-6">
<p className="line-clamp-6 font-['Inter'] text-[13.68px] leading-[19.93px] font-normal text-[#18181B] dark:text-[#E0E0E0]">
{chunk.text}
</p>
</div>
</div>
</div>
))
)}
</div>
)}
</>
) : isAddingChunk ? (
<div className="w-full">
<div className="relative overflow-hidden rounded-lg border border-[#D1D9E0] dark:border-[#6A6A6A]">
<LineNumberedTextarea
value={editingText}
onChange={setEditingText}
ariaLabel={t('modals.chunk.promptText')}
editable={true}
/>
</div>
</div>
) : (
editingChunk && (
<div className="w-full">
<div className="relative flex w-full flex-col overflow-hidden rounded-[5.86px] border border-[#D1D9E0] dark:border-[#6A6A6A]">
<div className="flex w-full items-center justify-between border-b border-[#D1D9E0] bg-[#F6F8FA] px-4 py-3 dark:border-[#6A6A6A] dark:bg-[#27282D]">
<div className="text-sm text-[#59636E] dark:text-[#E0E0E0]">
{editingChunk.metadata.token_count
? editingChunk.metadata.token_count.toLocaleString()
: '-'}{' '}
{t('settings.sources.tokensUnit')}
</div>
</div>
<div className="overflow-hidden p-4">
<LineNumberedTextarea
value={isEditing ? editingText : editingChunk.text}
onChange={setEditingText}
ariaLabel={t('modals.chunk.promptText')}
editable={isEditing}
onDoubleClick={() => {
if (!isEditing) {
setIsEditing(true);
setEditingTitle(editingChunk.metadata.title || '');
setEditingText(editingChunk.text);
}
}}
/>
</div>
</div>
</div>
)
)}
{!loading &&
totalChunks > perPage &&
!editingChunk &&
!isAddingChunk && (
<Pagination
currentPage={page}
totalPages={Math.ceil(totalChunks / perPage)}
rowsPerPage={perPage}
onPageChange={setPage}
onRowsPerPageChange={(rows) => {
setPerPage(rows);
setPage(1);
}}
/>
)}
</div>
</div>
{/* Delete Confirmation Modal */}
<ConfirmationModal
message={t('modals.chunk.deleteConfirmation')}
modalState={deleteModalState}
setModalState={setDeleteModalState}
handleSubmit={handleConfirmedDelete}
handleCancel={handleCancelDelete}
submitLabel={t('modals.chunk.delete')}
variant="danger"
/>
</div>
);
};
export default Chunks;

@@ -0,0 +1,180 @@
import React, { useRef } from 'react';
import { useSelector } from 'react-redux';
import { useDarkTheme } from '../hooks';
import { selectToken } from '../preferences/preferenceSlice';
interface ConnectorAuthProps {
provider: string;
onSuccess: (data: { session_token: string; user_email: string }) => void;
onError: (error: string) => void;
label?: string;
isConnected?: boolean;
userEmail?: string;
onDisconnect?: () => void;
errorMessage?: string;
}
const ConnectorAuth: React.FC<ConnectorAuthProps> = ({
provider,
onSuccess,
onError,
label,
isConnected = false,
userEmail = '',
onDisconnect,
errorMessage,
}) => {
const token = useSelector(selectToken);
const [isDarkTheme] = useDarkTheme();
const completedRef = useRef(false);
const intervalRef = useRef<number | null>(null);
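// Stop the popup-closed poll and remove the postMessage listener once the flow finishes or a new attempt starts.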
const cleanup = () => {
if (intervalRef.current) {
clearInterval(intervalRef.current);
intervalRef.current = null;
}
window.removeEventListener('message', handleAuthMessage as any);
};
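// Handle messages posted back from the OAuth popup; accept the generic connector success type as well as the provider-specific success/error types.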
const handleAuthMessage = (event: MessageEvent) => {
const successGeneric = event.data?.type === 'connector_auth_success';
const successProvider = event.data?.type === `${provider}_auth_success`;
const errorProvider = event.data?.type === `${provider}_auth_error`;
if (successGeneric || successProvider) {
completedRef.current = true;
cleanup();
onSuccess({
session_token: event.data.session_token,
user_email: event.data.user_email || 'Connected User',
});
} else if (errorProvider) {
completedRef.current = true;
cleanup();
onError(event.data.error || 'Authentication failed');
}
};
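// Request the provider's authorization URL from the backend, open it in a popup, and poll for the popup closing so a cancelled flow surfaces an error.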
const handleAuth = async () => {
try {
completedRef.current = false;
cleanup();
const apiHost = import.meta.env.VITE_API_HOST;
const authResponse = await fetch(
`${apiHost}/api/connectors/auth?provider=${provider}`,
{
headers: { Authorization: `Bearer ${token}` },
},
);
if (!authResponse.ok) {
throw new Error(
`Failed to get authorization URL: ${authResponse.status}`,
);
}
const authData = await authResponse.json();
if (!authData.success || !authData.authorization_url) {
throw new Error(authData.error || 'Failed to get authorization URL');
}
const authWindow = window.open(
authData.authorization_url,
`${provider}-auth`,
'width=500,height=600,scrollbars=yes,resizable=yes',
);
if (!authWindow) {
throw new Error(
'Failed to open authentication window. Please allow popups.',
);
}
window.addEventListener('message', handleAuthMessage as any);
const checkClosed = window.setInterval(() => {
if (authWindow.closed) {
clearInterval(checkClosed);
window.removeEventListener('message', handleAuthMessage as any);
if (!completedRef.current) {
onError('Authentication was cancelled');
}
}
}, 1000);
intervalRef.current = checkClosed;
} catch (error) {
onError(error instanceof Error ? error.message : 'Authentication failed');
}
};
return (
<>
{errorMessage && (
<div className="mb-4 flex items-center gap-2 rounded-lg border border-[#E60000] bg-transparent p-2 dark:border-[#D42626] dark:bg-[#D426261A]">
<svg
width="30"
height="30"
viewBox="0 0 30 30"
fill="none"
xmlns="http://www.w3.org/2000/svg"
>
<path
d="M7.09974 24.5422H22.9C24.5156 24.5422 25.5228 22.7901 24.715 21.3947L16.8149 7.74526C16.007 6.34989 13.9927 6.34989 13.1848 7.74526L5.28471 21.3947C4.47686 22.7901 5.48405 24.5422 7.09974 24.5422ZM14.9998 17.1981C14.4228 17.1981 13.9507 16.726 13.9507 16.149V14.0507C13.9507 13.4736 14.4228 13.0015 14.9998 13.0015C15.5769 13.0015 16.049 13.4736 16.049 14.0507V16.149C16.049 16.726 15.5769 17.1981 14.9998 17.1981ZM16.049 21.3947H13.9507V19.2964H16.049V21.3947Z"
fill={isDarkTheme ? '#EECF56' : '#E60000'}
/>
</svg>
<span
className="text-sm text-[#E60000] dark:text-[#E37064]"
style={{
fontFamily: 'Inter',
lineHeight: '100%',
}}
>
{errorMessage}
</span>
</div>
)}
{isConnected ? (
<div className="mb-4">
<div className="flex w-full items-center justify-between rounded-[10px] bg-[#8FDD51] px-4 py-2 text-sm font-medium text-[#212121]">
<div className="flex items-center gap-2">
<svg className="h-4 w-4" viewBox="0 0 24 24">
<path
fill="currentColor"
d="M9 16.17L4.83 12l-1.42 1.41L9 19 21 7l-1.41-1.41z"
/>
</svg>
<span>Connected as {userEmail}</span>
</div>
{onDisconnect && (
<button
onClick={onDisconnect}
className="text-xs font-medium text-[#212121] underline hover:text-gray-700"
>
Disconnect
</button>
)}
</div>
</div>
) : (
<button
onClick={handleAuth}
className="flex w-full items-center justify-center gap-2 rounded-lg bg-blue-500 px-4 py-3 text-white transition-colors hover:bg-blue-600"
>
<svg className="h-5 w-5" viewBox="0 0 24 24">
<path
fill="currentColor"
d="M6.28 3l5.72 10H24l-5.72-10H6.28zm11.44 0L12 13l5.72 10H24L18.28 3h-.56zM0 13l5.72 10h5.72L5.72 13H0z"
/>
</svg>
{label}
</button>
)}
</>
);
};
export default ConnectorAuth;

Some files were not shown because too many files have changed in this diff.