Compare commits

...

418 Commits

Author SHA1 Message Date
Manish Madan
c78518baf0 Merge pull request #1827 from arc53/dependabot/npm_and_yarn/frontend/react-dropzone-14.3.8
build(deps): bump react-dropzone from 14.3.5 to 14.3.8 in /frontend
2025-06-19 16:31:06 +05:30
dependabot[bot]
556d7e0497 build(deps): bump react-dropzone from 14.3.5 to 14.3.8 in /frontend
Bumps [react-dropzone](https://github.com/react-dropzone/react-dropzone) from 14.3.5 to 14.3.8.
- [Release notes](https://github.com/react-dropzone/react-dropzone/releases)
- [Commits](https://github.com/react-dropzone/react-dropzone/compare/v14.3.5...v14.3.8)

---
updated-dependencies:
- dependency-name: react-dropzone
  dependency-version: 14.3.8
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-19 10:59:31 +00:00
Manish Madan
2d27936dab Merge pull request #1826 from arc53/dependabot/npm_and_yarn/frontend/reduxjs/toolkit-2.8.2
build(deps): bump @reduxjs/toolkit from 2.5.1 to 2.8.2 in /frontend
2025-06-19 16:27:30 +05:30
Alex
63f6127049 Revert "Update README.md"
This reverts commit 55f60a9fe1.
2025-06-18 22:26:38 +01:00
Alex
f34e00c986 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-06-18 22:26:19 +01:00
Alex
55f60a9fe1 Update README.md 2025-06-18 22:26:11 +01:00
Alex
7da3618e0c Update README.md 2025-06-18 22:25:47 +01:00
Alex
56bfa98633 Merge remote-tracking branch 'upstream/main' 2025-06-18 22:18:06 +01:00
Alex
96f6188722 Initial commit 2025-06-18 22:17:23 +01:00
Alex
6c3a79802e Merge pull request #1849 from siiddhantt/feat/upload-agent-logo
feat: enhance agent management with image upload and retrieval
2025-06-18 17:45:51 +01:00
Siddhant Rai
c35c5e0793 Refactor image upload handling and add URL strategy setting 2025-06-18 21:54:44 +05:30
Siddhant Rai
fc01b90007 Add tailwind-merge dependency to package.json and package-lock.json 2025-06-18 19:00:59 +05:30
Siddhant Rai
e35f1d70e4 - Added image upload functionality for agents in the backend and frontend.
- Implemented image URL generation based on storage strategy (S3 or local).
- Updated agent creation and update endpoints to handle image files.
- Enhanced frontend components to display agent images with fallbacks.
- New API endpoint to serve images from storage.
- Refactored API client to support FormData for file uploads.
- Improved error handling and logging for image processing.
2025-06-18 18:17:20 +05:30
Siddhant Rai
cab1f3787a Refactor S3 storage implementation and enhance file handling
- Improved code readability by reorganizing imports and formatting.
- Updated S3Storage class to handle file uploads and downloads more efficiently.
- Added a new function to generate image URLs based on settings.
- Enhanced file listing and processing methods for better error handling.
- Introduced a FileUpload component for improved file upload experience in the frontend.
- Updated agent management components to support image uploads and previews.
- Added new SVG assets for UI enhancements.
- Modified API client to support FormData for file uploads.
2025-06-18 18:04:44 +05:30
Alex
bb42f4cbc1 Merge pull request #1846 from ManishMadan2882/main
Agents Details: UI update, external links
2025-06-17 14:31:16 +01:00
ManishMadan2882
98dc418a51 (feat:agent-details) redirect on url 2025-06-17 18:30:11 +05:30
ManishMadan2882
322b4eb18c Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-06-17 16:04:33 +05:30
ManishMadan2882
7f1cc30ed8 (feat:agent-details) test, learn more redirects 2025-06-17 16:04:08 +05:30
ManishMadan2882
7b45a6b956 (feat:agentDetails) copy button ui 2025-06-17 12:03:36 +05:30
Alex
e36769e70f Merge pull request #1844 from ManishMadan2882/main
Collapsible Question bubbles
2025-06-15 16:36:37 +01:00
ManishMadan2882
bd4a4cc4af (feat:question) match design 2025-06-14 02:32:59 +05:30
ManishMadan2882
8343fe63cb (feat:bubble) collapsible questions 2025-06-14 02:02:36 +05:30
Alex
7d89fb8461 fix: lint 2025-06-13 01:14:09 +01:00
Alex
098955d230 fix paths in docker compose 2025-06-13 01:11:22 +01:00
Alex
d254d14928 Merge pull request #1838 from ManishMadan2882/main
Fixes ingestion of files with non-ASCII characters in the name
2025-06-12 09:52:03 +01:00
GH Action - Upstream Sync
0a3e8ca535 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-06-12 01:43:21 +00:00
ManishMadan2882
b8a10e0962 (fix:ingestion) display names are separate 2025-06-12 00:57:46 +05:30
Alex
0aceda96e4 Merge pull request #1824 from siiddhantt/refactor/llm-handler
feat: reorganize LLM handler structure with better abstraction
2025-06-11 17:19:50 +01:00
ManishMadan2882
44b6ec25a2 clean 2025-06-11 21:18:37 +05:30
ManishMadan2882
1b84d1fa9d Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-06-11 21:04:57 +05:30
ManishMadan2882
78d5ed2ed2 (fix:ingestion) uuid for non-ascii filename 2025-06-11 21:04:50 +05:30
ManishMadan2882
142477ab9b (feat:safe_filename) handles case of non-ascii char 2025-06-11 21:03:38 +05:30
Siddhant Rai
b414f79bc5 fix: adjust width of tool calls display in ConversationBubble component 2025-06-11 19:37:32 +05:30
Siddhant Rai
6e08fe21d0 Merge branch 'refactor/llm-handler' of https://github.com/siiddhantt/DocsGPT into refactor/llm-handler 2025-06-11 19:28:47 +05:30
Siddhant Rai
9b839655a7 refactor: improve tool call result handling and display in conversation components 2025-06-11 19:28:15 +05:30
Siddhant Rai
3353c0ee1d Merge branch 'main' into refactor/llm-handler 2025-06-11 19:27:33 +05:30
Alex
aaecf52c99 refactor: update docs LLM_NAME and MODEL_NAME to LLM_PROVIDER and LLM_NAME 2025-06-11 12:30:34 +01:00
ManishMadan2882
8b3e960be0 (feat:ingestion) store filepath from now 2025-06-11 16:00:09 +05:30
Siddhant Rai
3351f71813 refactor: tool calls sent when pending and after completion 2025-06-11 12:40:32 +05:30
Alex
7490256303 Merge pull request #1830 from ManishMadan2882/main
UI update: attachments in question bubble
2025-06-10 14:46:05 +01:00
ManishMadan2882
041d600e45 (feat:prompts) delete after confirmation 2025-06-10 18:00:11 +05:30
ManishMadan2882
b4e2588a24 (fix:prompts) save when content changes 2025-06-10 17:02:24 +05:30
ManishMadan2882
68dc14c5a1 (feat:attachments) clear after passing 2025-06-10 02:50:07 +05:30
ManishMadan2882
ef35864e16 (fix) type error, ui adjust 2025-06-09 19:50:07 +05:30
ManishMadan2882
c0d385b983 (refactor:attachments) moved to new uploadSlice 2025-06-09 17:10:12 +05:30
ManishMadan2882
b2df431fa4 (feat:attachments) shared conversations route 2025-06-07 20:04:33 +05:30
ManishMadan2882
69a4bd415a (feat:attachment) message input update 2025-06-06 21:52:51 +05:30
ManishMadan2882
4862548e65 (feat:attach) renaming, semantic identifier names 2025-06-06 21:52:10 +05:30
ManishMadan2882
50248cc9ea Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-06-06 18:55:11 +05:30
ManishMadan2882
430822bae3 (feat:attach) state manage, follow camelCase 2025-06-06 18:54:50 +05:30
Siddhant Rai
dd9d18208d Merge branch 'main' into refactor/llm-handler 2025-06-06 17:36:31 +05:30
Siddhant Rai
e5b1a71659 refactor: update fallback LLM initialization to use factory method 2025-06-06 17:23:27 +05:30
Siddhant Rai
35f4b13237 refactor: add fallback LLM configuration options to settings 2025-06-06 17:05:15 +05:30
Siddhant Rai
5f5c31cd5b refactor: enhance LLM fallback handling and streamline method execution 2025-06-06 16:55:57 +05:30
Siddhant Rai
e9530d5ec5 refactor: update env variable names 2025-06-06 15:29:53 +05:30
Siddhant Rai
143f4aa886 refactor: streamline conversation handling and update agent pinning logic 2025-06-06 14:41:44 +05:30
GH Action - Upstream Sync
ece5c8bb31 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-06-06 01:42:12 +00:00
Alex
31baf181a3 fix: default optimisations 2025-06-05 12:21:40 +01:00
ManishMadan2882
3bae30c70c (fix:messages) attachments are for questions 2025-06-05 03:09:37 +05:30
ManishMadan2882
12b18c6bd1 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-06-05 02:54:51 +05:30
ManishMadan2882
787d9e3bf5 (feat:attachments) ui details in bubble 2025-06-05 02:54:36 +05:30
ManishMadan2882
f325b54895 (fix:get_single_conversation) return attachments with filename 2025-06-05 02:53:43 +05:30
GH Action - Upstream Sync
c5616705b0 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-06-04 01:43:58 +00:00
Alex
c0f693d35d remove about 2025-06-03 15:40:16 +01:00
Alex
52a5f132c1 Merge pull request #1814 from siiddhantt/fix/agents-bugs
fix: shared agent redirect and pinned agents error
2025-06-03 15:37:20 +01:00
Siddhant Rai
f14eac6d10 Merge branch 'main' into fix/agents-bugs 2025-06-03 19:59:10 +05:30
ManishMadan2882
e90fe117ec (feat:attachments) render in bubble 2025-06-03 18:05:47 +05:30
Siddhant Rai
381d737d24 fix: correct vectorstore path in get_vectorstore function 2025-06-03 15:14:00 +05:30
ManishMadan2882
7cab5b3b09 (feat:attachments) store filenames in worker 2025-06-03 04:02:41 +05:30
ManishMadan2882
9f911cb5cb (feat:stream) store attachment_ids for query 2025-06-03 03:30:06 +05:30
dependabot[bot]
3da7cba06c build(deps): bump @reduxjs/toolkit from 2.5.1 to 2.8.2 in /frontend
Bumps [@reduxjs/toolkit](https://github.com/reduxjs/redux-toolkit) from 2.5.1 to 2.8.2.
- [Release notes](https://github.com/reduxjs/redux-toolkit/releases)
- [Commits](https://github.com/reduxjs/redux-toolkit/compare/v2.5.1...v2.8.2)

---
updated-dependencies:
- dependency-name: "@reduxjs/toolkit"
  dependency-version: 2.8.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-02 20:27:20 +00:00
Alex
b47af9600f Merge pull request #1821 from ManishMadan2882/main
Chore: Frontend refinements, i18n sync
2025-06-02 10:13:02 +01:00
Siddhant Rai
92c3c707e1 refactor: reorganize LLM handler structure and improve tool call parsing 2025-06-02 12:17:29 +05:30
GH Action - Upstream Sync
5acc54e609 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-06-01 01:58:40 +00:00
Manish Madan
9c6352dd5b Merge pull request #1823 from arc53/dependabot/npm_and_yarn/docs/next-15.3.3
build(deps): bump next from 14.2.26 to 15.3.3 in /docs
2025-05-31 16:08:57 +05:30
GH Action - Upstream Sync
8e29a07df5 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-05-31 01:39:33 +00:00
Manish Madan
bd88cd3a06 Merge pull request #1818 from arc53/dependabot/npm_and_yarn/frontend/eslint-config-prettier-10.1.5
build(deps-dev): bump eslint-config-prettier from 9.1.0 to 10.1.5 in /frontend
2025-05-31 02:58:41 +05:30
dependabot[bot]
f371b9702f build(deps-dev): bump eslint-config-prettier in /frontend
Bumps [eslint-config-prettier](https://github.com/prettier/eslint-config-prettier) from 9.1.0 to 10.1.5.
- [Release notes](https://github.com/prettier/eslint-config-prettier/releases)
- [Changelog](https://github.com/prettier/eslint-config-prettier/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prettier/eslint-config-prettier/compare/v9.1.0...v10.1.5)

---
updated-dependencies:
- dependency-name: eslint-config-prettier
  dependency-version: 10.1.5
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-30 21:25:20 +00:00
Manish Madan
3ff4ae29af Merge pull request #1817 from arc53/dependabot/npm_and_yarn/frontend/eslint-plugin-react-7.37.5
build(deps-dev): bump eslint-plugin-react from 7.37.3 to 7.37.5 in /frontend
2025-05-31 02:53:20 +05:30
dependabot[bot]
eae0f2e7a9 build(deps-dev): bump eslint-plugin-react in /frontend
Bumps [eslint-plugin-react](https://github.com/jsx-eslint/eslint-plugin-react) from 7.37.3 to 7.37.5.
- [Release notes](https://github.com/jsx-eslint/eslint-plugin-react/releases)
- [Changelog](https://github.com/jsx-eslint/eslint-plugin-react/blob/master/CHANGELOG.md)
- [Commits](https://github.com/jsx-eslint/eslint-plugin-react/compare/v7.37.3...v7.37.5)

---
updated-dependencies:
- dependency-name: eslint-plugin-react
  dependency-version: 7.37.5
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-30 21:18:03 +00:00
Manish Madan
305a98bb79 Merge pull request #1815 from arc53/dependabot/npm_and_yarn/frontend/react-syntax-highlighter-15.6.1
build(deps): bump react-syntax-highlighter from 15.5.0 to 15.6.1 in /frontend
2025-05-31 02:46:15 +05:30
dependabot[bot]
8040a3ed60 build(deps): bump react-syntax-highlighter in /frontend
Bumps [react-syntax-highlighter](https://github.com/react-syntax-highlighter/react-syntax-highlighter) from 15.5.0 to 15.6.1.
- [Release notes](https://github.com/react-syntax-highlighter/react-syntax-highlighter/releases)
- [Changelog](https://github.com/react-syntax-highlighter/react-syntax-highlighter/blob/master/CHANGELOG.MD)
- [Commits](https://github.com/react-syntax-highlighter/react-syntax-highlighter/compare/15.5.0...v15.6.1)

---
updated-dependencies:
- dependency-name: react-syntax-highlighter
  dependency-version: 15.6.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-30 21:11:29 +00:00
dependabot[bot]
bb9de7d9b0 build(deps): bump next from 14.2.26 to 15.3.3 in /docs
Bumps [next](https://github.com/vercel/next.js) from 14.2.26 to 15.3.3.
- [Release notes](https://github.com/vercel/next.js/releases)
- [Changelog](https://github.com/vercel/next.js/blob/canary/release.js)
- [Commits](https://github.com/vercel/next.js/compare/v14.2.26...v15.3.3)

---
updated-dependencies:
- dependency-name: next
  dependency-version: 15.3.3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-30 21:10:55 +00:00
Manish Madan
d8e8bc0068 Merge pull request #1765 from arc53/dependabot/npm_and_yarn/frontend/vite-6.3.5
build(deps-dev): bump vite from 5.4.14 to 6.3.5 in /frontend
2025-05-31 02:39:41 +05:30
ManishMadan2882
6577e9d852 (chore) peer deps 2025-05-31 02:37:27 +05:30
dependabot[bot]
3f8625c65a build(deps-dev): bump vite from 5.4.14 to 6.3.5 in /frontend
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 5.4.14 to 6.3.5.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/main/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v6.3.5/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-version: 6.3.5
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-30 20:08:19 +00:00
ManishMadan2882
92d69636a7 (fix/ui) minor adjustments 2025-05-30 19:33:07 +05:30
ManishMadan2882
9c28817fba (chore:i18n) sync all locales 2025-05-30 19:05:01 +05:30
Siddhant Rai
773788fb32 fix: correct vectorstore path and improve file existence checks in FaissStore 2025-05-30 14:30:51 +05:30
Siddhant Rai
a393ad8e04 refactor: standardize string quotes and improve retriever type handling in RetrieverCreator 2025-05-30 12:50:11 +05:30
ManishMadan2882
71d3714347 (fix:nav) tablets behave like mobile, use tailwind breakpoints 2025-05-29 17:17:29 +05:30
ManishMadan2882
b7e1329c13 (feat:chunksModal) i18n, use wrapperModal 2025-05-29 17:15:13 +05:30
ManishMadan2882
59e6d9d10e Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-05-29 15:47:42 +05:30
ManishMadan2882
46efb446fb (clean:about) purge route 2025-05-29 15:43:58 +05:30
ManishMadan2882
d31e3a54fd (feat:layout) tablet sidebar behaves like mobile 2025-05-29 15:43:00 +05:30
Siddhant Rai
c4e471ac47 fix: ensure shared metadata is displayed only when available in SharedAgentCard 2025-05-29 11:16:38 +05:30
Siddhant Rai
3b8733e085 feat: add tool details resolution and update SharedAgentCard to display tool names 2025-05-29 10:58:49 +05:30
Alex
a7c67d83ca Merge pull request #1820 from ManishMadan2882/main
Chore: Frontend Refinements
2025-05-29 00:53:59 +01:00
ManishMadan2882
8abc1de26d (feat:hooks) update useMediaQuery 2025-05-29 04:01:47 +05:30
ManishMadan2882
2ca9f708a6 (fix:menu) position smartly 2025-05-29 04:00:25 +05:30
ManishMadan2882
f8f369fbb2 (fix/tools) avoid max width for buttons, i18n 2025-05-29 03:58:34 +05:30
ManishMadan2882
3e9155767b (fix:input) placeholder overflow 2025-05-29 03:57:11 +05:30
Siddhant Rai
8cd4195657 feat: add SharedAgentCard to display selected agent in Conversation component 2025-05-28 14:25:37 +05:30
Siddhant Rai
ad1a944276 refactor: agents sharing and shared with me logic 2025-05-28 13:58:55 +05:30
ManishMadan2882
02ff4c5657 (fix:menu) larger strings break 2025-05-28 03:29:48 +05:30
ManishMadan2882
b1b27f2dde (feat:toolConfig) i18n 2025-05-28 01:10:10 +05:30
ManishMadan2882
5097f77469 (feat:toolConfig) ui details, no actions placeholder 2025-05-27 19:02:41 +05:30
Manish Madan
7e826d5002 Merge pull request #1778 from arc53/dependabot/npm_and_yarn/docs/multi-61ed51ac21
build(deps): bump estree-util-value-to-estree and remark-reading-time in /docs
2025-05-27 16:37:32 +05:30
dependabot[bot]
fe8143a56c build(deps): bump estree-util-value-to-estree and remark-reading-time
Bumps [estree-util-value-to-estree](https://github.com/remcohaszing/estree-util-value-to-estree) and [remark-reading-time](https://github.com/mattjennings/remark-reading-time). These dependencies needed to be updated together.

Updates `estree-util-value-to-estree` from 1.3.0 to 3.4.0
- [Release notes](https://github.com/remcohaszing/estree-util-value-to-estree/releases)
- [Commits](https://github.com/remcohaszing/estree-util-value-to-estree/compare/v1.3.0...v3.4.0)

Updates `remark-reading-time` from 2.0.1 to 2.0.2
- [Release notes](https://github.com/mattjennings/remark-reading-time/releases)
- [Commits](https://github.com/mattjennings/remark-reading-time/compare/2.0.1...2.0.2)

---
updated-dependencies:
- dependency-name: estree-util-value-to-estree
  dependency-version: 3.4.0
  dependency-type: indirect
- dependency-name: remark-reading-time
  dependency-version: 2.0.2
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-27 11:04:37 +00:00
Manish Madan
e5442a713a Merge pull request #1760 from arc53/dependabot/npm_and_yarn/extensions/react-widget/base-x-3.0.11
build(deps-dev): bump base-x from 3.0.9 to 3.0.11 in /extensions/react-widget
2025-05-27 16:32:38 +05:30
dependabot[bot]
1982a46f36 build(deps-dev): bump base-x in /extensions/react-widget
Bumps [base-x](https://github.com/cryptocoinjs/base-x) from 3.0.9 to 3.0.11.
- [Commits](https://github.com/cryptocoinjs/base-x/compare/v3.0.9...v3.0.11)

---
updated-dependencies:
- dependency-name: base-x
  dependency-version: 3.0.11
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-27 11:02:07 +00:00
Manish Madan
c8c3640baf Merge pull request #1743 from arc53/dependabot/npm_and_yarn/frontend/vite-5.4.18
build(deps-dev): bump vite from 5.4.14 to 5.4.18 in /frontend
2025-05-27 15:52:41 +05:30
dependabot[bot]
fdf47b3f2c build(deps-dev): bump vite from 5.4.14 to 5.4.18 in /frontend
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 5.4.14 to 5.4.18.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/v5.4.18/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v5.4.18/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-version: 5.4.18
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-27 10:18:54 +00:00
Manish Madan
93fa4b6a37 Merge pull request #1756 from arc53/dependabot/npm_and_yarn/frontend/multi-08a24af093
build(deps): bump react-router and react-router-dom in /frontend
2025-05-27 15:43:27 +05:30
dependabot[bot]
90e9ab70b0 build(deps): bump react-router and react-router-dom in /frontend
Bumps [react-router](https://github.com/remix-run/react-router/tree/HEAD/packages/react-router) to 7.5.3 and updates ancestor dependency [react-router-dom](https://github.com/remix-run/react-router/tree/HEAD/packages/react-router-dom). These dependencies need to be updated together.


Updates `react-router` from 7.1.1 to 7.5.3
- [Release notes](https://github.com/remix-run/react-router/releases)
- [Changelog](https://github.com/remix-run/react-router/blob/main/packages/react-router/CHANGELOG.md)
- [Commits](https://github.com/remix-run/react-router/commits/react-router@7.5.3/packages/react-router)

Updates `react-router-dom` from 7.1.1 to 7.5.3
- [Release notes](https://github.com/remix-run/react-router/releases)
- [Changelog](https://github.com/remix-run/react-router/blob/main/packages/react-router-dom/CHANGELOG.md)
- [Commits](https://github.com/remix-run/react-router/commits/react-router-dom@7.5.3/packages/react-router-dom)

---
updated-dependencies:
- dependency-name: react-router
  dependency-version: 7.5.3
  dependency-type: indirect
- dependency-name: react-router-dom
  dependency-version: 7.5.3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-27 10:10:25 +00:00
Manish Madan
573c2386b7 Merge pull request #1736 from arc53/dependabot/npm_and_yarn/frontend/typescript-5.8.3
build(deps-dev): bump typescript from 5.7.2 to 5.8.3 in /frontend
2025-05-27 15:37:53 +05:30
dependabot[bot]
d2176aeeb9 build(deps-dev): bump typescript from 5.7.2 to 5.8.3 in /frontend
Bumps [typescript](https://github.com/microsoft/TypeScript) from 5.7.2 to 5.8.3.
- [Release notes](https://github.com/microsoft/TypeScript/releases)
- [Changelog](https://github.com/microsoft/TypeScript/blob/main/azure-pipelines.release-publish.yml)
- [Commits](https://github.com/microsoft/TypeScript/commits)

---
updated-dependencies:
- dependency-name: typescript
  dependency-version: 5.8.3
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-27 10:04:27 +00:00
Manish Madan
920aec5c3e Merge pull request #1706 from arc53/dependabot/npm_and_yarn/docs/babel/runtime-7.26.10
build(deps): bump @babel/runtime from 7.23.7 to 7.26.10 in /docs
2025-05-27 15:31:39 +05:30
dependabot[bot]
b792c5459a build(deps): bump @babel/runtime from 7.23.7 to 7.26.10 in /docs
Bumps [@babel/runtime](https://github.com/babel/babel/tree/HEAD/packages/babel-runtime) from 7.23.7 to 7.26.10.
- [Release notes](https://github.com/babel/babel/releases)
- [Changelog](https://github.com/babel/babel/blob/main/CHANGELOG.md)
- [Commits](https://github.com/babel/babel/commits/v7.26.10/packages/babel-runtime)

---
updated-dependencies:
- dependency-name: "@babel/runtime"
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-27 10:00:19 +00:00
Manish Madan
87fbf05fa1 Merge pull request #1707 from arc53/dependabot/npm_and_yarn/extensions/react-widget/babel/helpers-7.26.10
build(deps): bump @babel/helpers from 7.24.6 to 7.26.10 in /extensions/react-widget
2025-05-27 15:28:08 +05:30
dependabot[bot]
67c53250c5 build(deps): bump @babel/helpers in /extensions/react-widget
Bumps [@babel/helpers](https://github.com/babel/babel/tree/HEAD/packages/babel-helpers) from 7.24.6 to 7.26.10.
- [Release notes](https://github.com/babel/babel/releases)
- [Changelog](https://github.com/babel/babel/blob/main/CHANGELOG.md)
- [Commits](https://github.com/babel/babel/commits/v7.26.10/packages/babel-helpers)

---
updated-dependencies:
- dependency-name: "@babel/helpers"
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-27 09:55:00 +00:00
Manish Madan
d657eea910 Merge pull request #1705 from arc53/dependabot/npm_and_yarn/docs/babel/helpers-7.26.10
build(deps): bump @babel/helpers from 7.24.0 to 7.26.10 in /docs
2025-05-27 15:23:01 +05:30
dependabot[bot]
b5fbb825ed build(deps): bump @babel/helpers from 7.24.0 to 7.26.10 in /docs
Bumps [@babel/helpers](https://github.com/babel/babel/tree/HEAD/packages/babel-helpers) from 7.24.0 to 7.26.10.
- [Release notes](https://github.com/babel/babel/releases)
- [Changelog](https://github.com/babel/babel/blob/main/CHANGELOG.md)
- [Commits](https://github.com/babel/babel/commits/v7.26.10/packages/babel-helpers)

---
updated-dependencies:
- dependency-name: "@babel/helpers"
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-27 09:51:46 +00:00
Manish Madan
d094e7a4c6 Merge pull request #1704 from arc53/dependabot/npm_and_yarn/frontend/babel/runtime-7.26.10
build(deps): bump @babel/runtime from 7.25.0 to 7.26.10 in /frontend
2025-05-27 15:19:39 +05:30
dependabot[bot]
945c155b17 build(deps): bump @babel/runtime from 7.25.0 to 7.26.10 in /frontend
Bumps [@babel/runtime](https://github.com/babel/babel/tree/HEAD/packages/babel-runtime) from 7.25.0 to 7.26.10.
- [Release notes](https://github.com/babel/babel/releases)
- [Changelog](https://github.com/babel/babel/blob/main/CHANGELOG.md)
- [Commits](https://github.com/babel/babel/commits/v7.26.10/packages/babel-runtime)

---
updated-dependencies:
- dependency-name: "@babel/runtime"
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-27 09:44:53 +00:00
Manish Madan
f798072a1e Merge pull request #1632 from arc53/dependabot/npm_and_yarn/extensions/react-widget/dompurify-3.2.4
build(deps): bump dompurify from 3.1.5 to 3.2.4 in /extensions/react-widget
2025-05-27 15:04:39 +05:30
Manish Madan
f967214b57 Merge pull request #1524 from arc53/dependabot/npm_and_yarn/frontend/react-redux-9.2.0
build(deps): bump react-redux from 8.1.3 to 9.2.0 in /frontend
2025-05-27 15:03:56 +05:30
dependabot[bot]
d0b92e2540 build(deps): bump react-redux from 8.1.3 to 9.2.0 in /frontend
Bumps [react-redux](https://github.com/reduxjs/react-redux) from 8.1.3 to 9.2.0.
- [Release notes](https://github.com/reduxjs/react-redux/releases)
- [Changelog](https://github.com/reduxjs/react-redux/blob/master/CHANGELOG.md)
- [Commits](https://github.com/reduxjs/react-redux/compare/v8.1.3...v9.2.0)

---
updated-dependencies:
- dependency-name: react-redux
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-27 09:25:20 +00:00
dependabot[bot]
8ddfe272bf build(deps): bump dompurify in /extensions/react-widget
Bumps [dompurify](https://github.com/cure53/DOMPurify) from 3.1.5 to 3.2.4.
- [Release notes](https://github.com/cure53/DOMPurify/releases)
- [Commits](https://github.com/cure53/DOMPurify/compare/3.1.5...3.2.4)

---
updated-dependencies:
- dependency-name: dompurify
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-27 09:20:40 +00:00
Siddhant Rai
b7a6bad7cd fix: minor bugs and route errors 2025-05-27 13:50:13 +05:30
Alex
e2f6c04406 (fix:AgentDetailsModal) update shared token URL format in AgentDetailsModal 2025-05-24 00:12:31 +01:00
Alex
c662725955 Merge pull request #1812 from ManishMadan2882/main
Tools redesign
2025-05-23 23:49:45 +01:00
ManishMadan2882
4b66ddfdef (fix:config) use default variant for confirm modal 2025-05-24 01:20:57 +05:30
ManishMadan2882
2d55b1f592 (fix:confirmModal) only close modal on click outside 2025-05-24 01:19:14 +05:30
ManishMadan2882
14adfabf7e (feat:tools) reflect custom name on ui 2025-05-24 01:17:22 +05:30
Alex
e7a76ede76 Merge pull request #1813 from arc53/react-improve
React improve
2025-05-23 15:25:19 +01:00
Alex
de47df3bf9 fix: enhance ReActAgent's reasoning iterations and update planning prompt structure 2025-05-23 15:10:12 +01:00
Alex
5475e6f7c5 fix: enhance ReActAgent's response handling and update planning prompt 2025-05-23 14:21:02 +01:00
ManishMadan2882
8e3f3d74d4 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-05-23 18:22:20 +05:30
ManishMadan2882
046f6c66ed (feat:tools) warn on unsaved changes, deep compare 2025-05-23 18:22:06 +05:30
Manish Madan
79f9d6552e Merge branch 'arc53:main' into main 2025-05-23 03:37:53 +05:30
ManishMadan2882
56b4b63749 (fix/tools) custom actions only for api tool, allow delete 2025-05-23 03:36:27 +05:30
ManishMadan2882
b3246a48c7 (fix/addAction) duplicate placeholders 2025-05-23 03:11:54 +05:30
ManishMadan2882
71722ef6a3 (fix/config) correctly placed save btn 2025-05-23 00:57:42 +05:30
ManishMadan2882
ebf8f00302 (fix/input) minor details 2025-05-23 00:56:50 +05:30
Alex
7445928c7e (fix:stream) handle empty history input and improve user identification 2025-05-22 13:14:10 +01:00
ManishMadan2882
5ab7602f2f (feat:tools) ask for custom names 2025-05-22 17:14:52 +05:30
ManishMadan2882
a340aff63a (feat:tools) store custom names 2025-05-22 17:14:05 +05:30
ManishMadan2882
f82042ff00 (feat:config) custom name 2025-05-22 15:27:29 +05:30
ManishMadan2882
920422e28c Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-05-22 15:25:28 +05:30
ManishMadan2882
50d6b7a6f8 (fix/input) placeholder 2025-05-22 15:24:58 +05:30
Alex
41d624a36a Merge pull request #1810 from ManishMadan2882/main
(fix:layout) even action buttons
2025-05-21 18:59:01 +01:00
ManishMadan2882
f42c37c82e (fix:layout) even action buttons 2025-05-21 23:23:49 +05:30
Alex
119fcdf6f6 Merge pull request #1809 from ManishMadan2882/feat/sources-in-widget
(release-widget) 0.5.1
2025-05-21 14:04:38 +01:00
ManishMadan2882
a5b093d1a9 (release-widget) 0.5.1 2025-05-21 16:36:52 +05:30
Alex
e07cb44a3e Merge pull request #1806 from arc53/documentation-agents
Documentation agents
2025-05-21 11:44:05 +03:00
Alex
fec1bcfd5c Merge pull request #1789 from ManishMadan2882/main
Frontend Fixes
2025-05-21 01:05:47 +03:00
Alex
dbcf658343 Merge pull request #1808 from ManishMadan2882/feat/sources-in-widget
Glassmorphic effect on search widget
2025-05-21 01:00:49 +03:00
ManishMadan2882
d89e78c9ca (feat:search) minor adjust 2025-05-21 02:59:22 +05:30
ManishMadan2882
ec50650dfa (fix/searchwidget) prominent placeholder, sleek spinner 2025-05-21 02:00:47 +05:30
ManishMadan2882
7432e551f9 (fix/loader) drop on empty input 2025-05-21 01:40:03 +05:30
ManishMadan2882
4ee6bd44d1 (feat:glassy) search widget 2025-05-21 01:26:34 +05:30
Pavel
26f819098d agents and tools doc update 2025-05-20 20:32:38 +04:00
Alex
a1c79f93d7 Merge pull request #1801 from siiddhantt/feat/agents-enhance
feat: enhance agent sharing functionality and UI improvements
2025-05-20 18:40:22 +03:00
GH Action - Upstream Sync
9c1b202d74 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-05-20 01:42:17 +00:00
Pavel
8ad0f59f19 jwt update 2025-05-19 21:12:14 +04:00
Alex
50fbe3d5af Merge pull request #1798 from arc53/dependabot/pip/application/markdownify-1.1.0
build(deps): bump markdownify from 0.14.1 to 1.1.0 in /application
2025-05-19 14:01:13 +03:00
GH Action - Upstream Sync
af40a77d24 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-05-19 01:45:51 +00:00
dependabot[bot]
8af9a5e921 build(deps): bump markdownify from 0.14.1 to 1.1.0 in /application
Bumps [markdownify](https://github.com/matthewwithanm/python-markdownify) from 0.14.1 to 1.1.0.
- [Release notes](https://github.com/matthewwithanm/python-markdownify/releases)
- [Commits](https://github.com/matthewwithanm/python-markdownify/compare/0.14.1...1.1.0)

---
updated-dependencies:
- dependency-name: markdownify
  dependency-version: 1.1.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-18 12:07:13 +00:00
Alex
9807788ecb Merge pull request #1795 from arc53/dependabot/pip/application/pypdf-5.5.0
build(deps): bump pypdf from 5.2.0 to 5.5.0 in /application
2025-05-18 15:05:57 +03:00
dependabot[bot]
5e2f329f15 build(deps): bump pypdf from 5.2.0 to 5.5.0 in /application
Bumps [pypdf](https://github.com/py-pdf/pypdf) from 5.2.0 to 5.5.0.
- [Release notes](https://github.com/py-pdf/pypdf/releases)
- [Changelog](https://github.com/py-pdf/pypdf/blob/main/CHANGELOG.md)
- [Commits](https://github.com/py-pdf/pypdf/compare/5.2.0...5.5.0)

---
updated-dependencies:
- dependency-name: pypdf
  dependency-version: 5.5.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-18 12:02:27 +00:00
Alex
9572a7adaa Merge pull request #1800 from arc53/dependabot/pip/application/boto3-1.38.18
build(deps): bump boto3 from 1.35.97 to 1.38.18 in /application
2025-05-18 15:00:45 +03:00
dependabot[bot]
1ba94f4f5f build(deps): bump boto3 from 1.35.97 to 1.38.18 in /application
Bumps [boto3](https://github.com/boto/boto3) from 1.35.97 to 1.38.18.
- [Release notes](https://github.com/boto/boto3/releases)
- [Commits](https://github.com/boto/boto3/compare/1.35.97...1.38.18)

---
updated-dependencies:
- dependency-name: boto3
  dependency-version: 1.38.18
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-18 11:54:08 +00:00
Alex
237afa0a3a Merge pull request #1799 from arc53/dependabot/pip/application/beautifulsoup4-4.13.4
build(deps): bump beautifulsoup4 from 4.12.3 to 4.13.4 in /application
2025-05-18 14:53:01 +03:00
GH Action - Upstream Sync
d80b7017cf Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-05-17 01:38:42 +00:00
Siddhant Rai
56793c8db7 feat: enhance agent sharing functionality and UI improvements
- Added shared agents state management in Navigation and AgentsList components.
- Implemented fetching and displaying shared agents in the AgentsList.
- Introduced functionality to hide shared agents with appropriate API integration.
- Updated the SharedAgent component layout for better UI consistency.
- Improved error handling in conversation fetching logic.
- Added new API endpoint for hiding shared agents.
- Updated Redux slice to manage shared agents state.
- Refactored AgentCard and AgentSection components for better code organization and readability.
2025-05-17 05:53:56 +05:30
ManishMadan2882
8edb217943 (fix) source.link is now source.source 2025-05-17 01:41:29 +05:30
ManishMadan2882
23ebcf1065 (fix:all_sources) inlined svg, dom hierarchy 2025-05-17 01:33:37 +05:30
ManishMadan2882
68a5a3d62a (feat:source_cards) enhance ux 2025-05-17 00:15:42 +05:30
ManishMadan2882
8d7236b0db (fix:action-buttons) redirect to conversations 2025-05-17 00:07:54 +05:30
ManishMadan2882
96c7daf818 (fix:modals) use portals 2025-05-16 16:15:02 +05:30
Alex
9d8073d468 Merge pull request #1794 from arc53/remove-duplicate-docs
remove duplicate chat widget page
2025-05-16 12:45:59 +03:00
ManishMadan2882
fc4942e189 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-05-16 14:33:55 +05:30
ManishMadan2882
ca69d025bd (fix:agents) adhere to new MessageInput 2025-05-16 14:33:30 +05:30
GH Action - Upstream Sync
ffa428e32a Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-05-16 01:41:33 +00:00
ManishMadan2882
c24e90eaae Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-05-16 04:12:27 +05:30
ManishMadan2882
ab32eff588 (fix/tool_calls) prevent overscroll somehow 2025-05-16 04:09:52 +05:30
ManishMadan2882
7f592f2b35 (fix/layout) prevent overlap in tab screen 2025-05-16 01:56:53 +05:30
dependabot[bot]
3bf7f67adf build(deps): bump beautifulsoup4 from 4.12.3 to 4.13.4 in /application
Bumps [beautifulsoup4](https://www.crummy.com/software/BeautifulSoup/bs4/) from 4.12.3 to 4.13.4.

---
updated-dependencies:
- dependency-name: beautifulsoup4
  dependency-version: 4.13.4
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-15 20:15:17 +00:00
Pavel
594ce05292 remove duplicate chat widget page 2025-05-15 21:32:59 +04:00
Alex
fe02ca68d5 fix: update API key for DocsGPTWidget 2025-05-15 18:18:12 +01:00
Alex
21ef27ee9b Merge pull request #1791 from arc53/fix-openai-conflict
fix for OPENAI_BASE_URL + ollama can't connect to container
2025-05-15 19:04:21 +03:00
Alex
09d37f669f Merge pull request #1793 from arc53/prompts-update
refactor: enhance AI assistant prompts for clarity and creativity
2025-05-15 16:19:32 +03:00
Pavel
416b776062 redundant f-string 2025-05-15 16:57:41 +04:00
Alex
5ed05d4020 refactor: enhance AI assistant prompts for clarity and creativity 2025-05-15 13:30:18 +01:00
Alex
4004bfb5ef Merge pull request #1792 from arc53/read_webpage_tools
feat: add ReadWebpageTool for fetching and converting webpage content…
2025-05-15 15:14:02 +03:00
Alex
45aace8966 feat: add ReadWebpageTool for fetching and converting webpage content to Markdown 2025-05-15 12:56:06 +01:00
Alex
d9fc623dcb Merge pull request #1790 from ManishMadan2882/setup-fix
(fix:dockercompose) volumes, user
2025-05-15 13:37:11 +03:00
Pavel
dbb822f6b0 fix for OPENAI_BASE_URL + ollama can't connect to container
- fix for OpenAI trying to use base_url=""
- fix for ollama container error:
`Error code: 404 - {'error': {'message': 'model "MODEL_NAME" not found, try pulling it first', 'type': 'api_error', 'param': None, 'code': None}}`
2025-05-15 13:50:08 +04:00
ManishMadan2882
3d64dffc32 (fix:dockercompose) volumes, user 2025-05-15 15:19:34 +05:30
GH Action - Upstream Sync
130ece7bc0 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-05-15 01:38:39 +00:00
Alex
b2809b2e9a remove old deps 2025-05-14 22:03:27 +01:00
Alex
29e89d2965 Merge pull request #1785 from arc53/dependabot/pip/application/yarl-1.20.0
build(deps): bump yarl from 1.18.3 to 1.20.0 in /application
2025-05-14 23:55:09 +03:00
dependabot[bot]
e7d54a639e build(deps): bump yarl from 1.18.3 to 1.20.0 in /application
Bumps [yarl](https://github.com/aio-libs/yarl) from 1.18.3 to 1.20.0.
- [Release notes](https://github.com/aio-libs/yarl/releases)
- [Changelog](https://github.com/aio-libs/yarl/blob/master/CHANGES.rst)
- [Commits](https://github.com/aio-libs/yarl/compare/v1.18.3...v1.20.0)

---
updated-dependencies:
- dependency-name: yarl
  dependency-version: 1.20.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-14 20:50:26 +00:00
Alex
22df98e9bb Merge pull request #1782 from arc53/dependabot/pip/application/prompt-toolkit-3.0.51
build(deps): bump prompt-toolkit from 3.0.50 to 3.0.51 in /application
2025-05-14 23:49:14 +03:00
dependabot[bot]
0d45c44c6f build(deps): bump prompt-toolkit from 3.0.50 to 3.0.51 in /application
Bumps [prompt-toolkit](https://github.com/prompt-toolkit/python-prompt-toolkit) from 3.0.50 to 3.0.51.
- [Release notes](https://github.com/prompt-toolkit/python-prompt-toolkit/releases)
- [Changelog](https://github.com/prompt-toolkit/python-prompt-toolkit/blob/main/CHANGELOG)
- [Commits](https://github.com/prompt-toolkit/python-prompt-toolkit/compare/3.0.50...3.0.51)

---
updated-dependencies:
- dependency-name: prompt-toolkit
  dependency-version: 3.0.51
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-14 20:47:44 +00:00
Alex
63c6912841 lazy load elasticsearch 2025-05-14 21:45:30 +01:00
Alex
73bce73034 Merge pull request #1786 from arc53/dependabot/pip/application/flask-3.1.1
build(deps): bump flask from 3.1.0 to 3.1.1 in /application
2025-05-14 23:40:58 +03:00
Manish Madan
b2582796a2 Merge branch 'arc53:main' into main 2025-05-15 01:20:23 +05:30
Alex
8babb6e68f Merge pull request #1787 from siiddhantt/refactor/agents
refactor: handle empty sources and chunks in agent
2025-05-14 14:39:01 +03:00
Siddhant Rai
d1d28df8a1 fix: handle empty chunks and retriever values in agent creation and update 2025-05-14 09:26:36 +05:30
GH Action - Upstream Sync
cd556d5d43 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-05-14 01:40:32 +00:00
ManishMadan2882
2855283a2c (fix/shared) append to right state 2025-05-14 04:15:11 +05:30
ManishMadan2882
06c29500f2 (fix:input-lag) localised the input state to messageInput 2025-05-14 04:13:53 +05:30
ManishMadan2882
81104153a6 (feat:action-buttons) combine the buttons at the top of layout 2025-05-14 03:01:57 +05:30
dependabot[bot]
23bfd4683c build(deps): bump flask from 3.1.0 to 3.1.1 in /application
Bumps [flask](https://github.com/pallets/flask) from 3.1.0 to 3.1.1.
- [Release notes](https://github.com/pallets/flask/releases)
- [Changelog](https://github.com/pallets/flask/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/flask/compare/3.1.0...3.1.1)

---
updated-dependencies:
- dependency-name: flask
  dependency-version: 3.1.1
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-13 20:32:17 +00:00
Alex
a52a3e3158 Merge pull request #1750 from arc53/dependabot/pip/application/packaging-25.0
build(deps): bump packaging from 24.1 to 25.0 in /application
2025-05-13 18:34:56 +03:00
Alex
44e524e3c3 build(deps): update langchain and openai dependencies 2025-05-13 16:29:24 +01:00
dependabot[bot]
9a430f73e2 build(deps): bump packaging from 24.1 to 25.0 in /application
Bumps [packaging](https://github.com/pypa/packaging) from 24.1 to 25.0.
- [Release notes](https://github.com/pypa/packaging/releases)
- [Changelog](https://github.com/pypa/packaging/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pypa/packaging/compare/24.1...25.0)

---
updated-dependencies:
- dependency-name: packaging
  dependency-version: '25.0'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-13 15:10:29 +00:00
Alex
fdea40ec11 Merge pull request #1735 from arc53/dependabot/pip/application/referencing-0.36.2
build(deps): bump referencing from 0.30.2 to 0.36.2 in /application
2025-05-13 18:09:10 +03:00
Alex
526d340849 fix: stale deps 2025-05-13 16:03:48 +01:00
dependabot[bot]
fe95f6ad81 build(deps): bump referencing from 0.30.2 to 0.36.2 in /application
Bumps [referencing](https://github.com/python-jsonschema/referencing) from 0.30.2 to 0.36.2.
- [Release notes](https://github.com/python-jsonschema/referencing/releases)
- [Changelog](https://github.com/python-jsonschema/referencing/blob/main/docs/changes.rst)
- [Commits](https://github.com/python-jsonschema/referencing/compare/v0.30.2...v0.36.2)

---
updated-dependencies:
- dependency-name: referencing
  dependency-version: 0.36.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-13 13:20:27 +00:00
Alex
39e73c37ab Merge pull request #1576 from arc53/dependabot/pip/application/portalocker-3.1.1
build(deps): bump portalocker from 2.10.1 to 3.1.1 in /application
2025-05-13 16:06:44 +03:00
Alex
39b36b6857 Feat: Add MD gen script, enable Qdrant lazy loading 2025-05-13 14:03:05 +01:00
dependabot[bot]
44e98748c5 build(deps): bump portalocker from 2.10.1 to 3.1.1 in /application
Bumps [portalocker](https://github.com/wolph/portalocker) from 2.10.1 to 3.1.1.
- [Release notes](https://github.com/wolph/portalocker/releases)
- [Changelog](https://github.com/wolph/portalocker/blob/develop/CHANGELOG.rst)
- [Commits](https://github.com/wolph/portalocker/compare/v2.10.1...v3.1.1)

---
updated-dependencies:
- dependency-name: portalocker
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-13 12:44:30 +00:00
Alex
8a7aeee955 Merge pull request #1751 from arc53/dependabot/pip/application/torch-2.7.0
build(deps): bump torch from 2.5.1 to 2.7.0 in /application
2025-05-13 15:41:10 +03:00
dependabot[bot]
1c7befb8d3 build(deps): bump torch from 2.5.1 to 2.7.0 in /application
Bumps [torch](https://github.com/pytorch/pytorch) from 2.5.1 to 2.7.0.
- [Release notes](https://github.com/pytorch/pytorch/releases)
- [Changelog](https://github.com/pytorch/pytorch/blob/main/RELEASE.md)
- [Commits](https://github.com/pytorch/pytorch/compare/v2.5.1...v2.7.0)

---
updated-dependencies:
- dependency-name: torch
  dependency-version: 2.7.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-13 12:20:22 +00:00
Alex
d5d59ac62c Merge pull request #1759 from arc53/dependabot/pip/application/transformers-4.51.3
build(deps): bump transformers from 4.49.0 to 4.51.3 in /application
2025-05-13 15:19:18 +03:00
Alex
562f0762a0 Merge pull request #1775 from siiddhantt/feat/enhance-agents
feat: share and pin agents
2025-05-13 11:54:29 +03:00
Siddhant Rai
e46aedce21 Merge branch 'feat/enhance-agents' of https://github.com/siiddhantt/DocsGPT into feat/enhance-agents 2025-05-13 13:05:50 +05:30
Siddhant Rai
57cc09b1d7 feat: update tool display name and improve SharedAgent component layout 2025-05-13 13:05:20 +05:30
ManishMadan2882
e1e608b744 (fix:bubble) sleeker source cards 2025-05-13 06:16:16 +05:30
Alex
cbfa5a5118 Delete .jwt_secret_key 2025-05-12 15:11:42 +03:00
ManishMadan2882
ea9ab5b27c (feat:sources) remove None option, don't close on select 2025-05-12 17:15:03 +05:30
ManishMadan2882
357ced6cba Revert "(fix:message-input) no badge buttons when shared"
This reverts commit d873539856.
2025-05-12 16:32:17 +05:30
ManishMadan2882
3ffda69651 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-05-12 14:38:33 +05:30
ManishMadan2882
e1bf4e0762 (fix/logs) log must render once 2025-05-12 14:38:15 +05:30
Siddhant Rai
ec7f14b82d Merge branch 'main' into feat/enhance-agents 2025-05-12 13:45:18 +05:30
Siddhant Rai
6520be5b85 feat: sharing and pinning agents + fix for streaming tools 2025-05-12 06:06:11 +05:30
GH Action - Upstream Sync
17e4fad6fb Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-05-10 01:36:32 +00:00
Alex
d84c416421 Merge pull request #1774 from arc53/limit-yield-on-tools
fix: truncate tool call results to 50 characters for cleaner output
2025-05-10 00:53:28 +03:00
Alex
32803c89a3 fix: truncate tool call results to 50 characters for cleaner output 2025-05-09 22:52:17 +01:00
Alex
a86bcb5c29 Merge pull request #1773 from arc53/cols-agent-fix
fix:(style) update layout for agent list and card components
2025-05-10 00:21:05 +03:00
Alex
7d76a33790 fix:(style) update layout for agent list and card components 2025-05-09 22:15:55 +01:00
ManishMadan2882
8552e81022 (fix:input) sync with translations 2025-05-09 21:39:54 +05:30
ManishMadan2882
eacdde829f (fix/logs) abnormal scroll over page 2025-05-09 20:07:51 +05:30
ManishMadan2882
d873539856 (fix:message-input) no badge buttons when shared 2025-05-09 20:02:49 +05:30
Alex
24bb2e469d Merge pull request #1772 from ManishMadan2882/main
Bug Fixes
2025-05-09 11:01:01 +03:00
ManishMadan2882
e1aa2cc0b8 (fix:ingestion) store file name as metadata, not path 2025-05-09 02:26:35 +05:30
ManishMadan2882
d073947f3b Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-05-08 16:11:14 +05:30
ManishMadan2882
3243740dd1 (fix-bubble) inconsistent width with snippets 2025-05-08 16:10:31 +05:30
Alex
f9bd566a3b Merge pull request #1768 from ManishMadan2882/main
Attachments: File storage changes
2025-05-08 01:41:08 +03:00
Alex
183251487c lint: remove unused import of 'io' in worker.py 2025-05-07 23:37:35 +01:00
ManishMadan2882
ff532210f7 lint 2025-05-08 00:15:28 +05:30
ManishMadan2882
d0a04d9801 (fix/lint) empty methods 2025-05-08 00:14:42 +05:30
ManishMadan2882
ea6533db4e (fix:attachment) strictly use filename 2025-05-08 00:11:02 +05:30
ManishMadan2882
89d5e7bee5 (feat:attachment) store file in endpoint layer 2025-05-07 19:15:36 +05:30
Alex
7e6cdee592 Merge pull request #1732 from asminkarki012/feature/mermaid-integration
feat[mermaid]: integration of mermaid
2025-05-07 11:52:50 +03:00
ManishMadan2882
990c2fb416 Merge branch 'main' into 'feature/mermaid-integration' 2025-05-07 03:44:47 +05:30
ManishMadan2882
09e054c6aa (fix/scroll) bring back arrowDown button, smoother scroll 2025-05-07 02:48:49 +05:30
ManishMadan2882
23f648f53a (feat/mermaid) zoom as per the requirement 2025-05-07 02:07:57 +05:30
Siddhant Rai
07fa656e7c feat: implement pinning functionality for agents with UI updates 2025-05-06 16:12:55 +05:30
Alex
7858c48f11 fix: zip file uploads 2025-05-06 11:12:26 +01:00
Alex
e56d54c3f0 fix: improve source and description handling in GetAgent and GetAgents responses 2025-05-06 10:59:25 +01:00
ManishMadan2882
f37ca95c10 (fix/mermaid loading): load only the diagrams which stream 2025-05-06 15:16:14 +05:30
ManishMadan2882
72e51bb072 (feat:mermaid) don't pass isDarkTheme 2025-05-06 04:38:38 +05:30
Alex
dcfcbf54be Merge pull request #1767 from arc53/sources-icon-fix-agent-menu
fix: sources icon mini fix
2025-05-06 01:36:29 +03:00
Alex
204936b2d0 fix: sources icon mini fix 2025-05-05 23:34:13 +01:00
ManishMadan2882
98856b39ac (feat:mermaid) zoom on hover, throw syntax errors 2025-05-06 00:53:33 +05:30
Alex
ad5f707486 lint: ruff fix 2025-05-05 18:03:45 +01:00
Alex
5ecfb0ce6d fix: enhance error logging 2025-05-05 17:59:37 +01:00
Alex
2147b3f06f lint: mini fix 2025-05-05 13:14:56 +01:00
Alex
7daed3daaf Merge pull request #1764 from arc53/feat/better-logs
fix: enhance error logging with exception info across multiple modules
2025-05-05 15:13:21 +03:00
Alex
481df4d604 fix: enhance error logging with exception info across multiple modules 2025-05-05 13:12:39 +01:00
Alex
cf333873fd fix: json body 2025-05-05 00:08:56 +01:00
Alex
ae700e8f3a fix: display only 2 demo buttons on mobile 2025-05-04 18:56:33 +01:00
ManishMadan2882
16386a9524 (feat:mermaid) zoom on hover 2025-05-04 19:39:24 +05:30
ManishMadan2882
7e7ce276b2 (fix:mermaid/flicker) separated from markdown 2025-05-02 14:12:24 +05:30
Alex
71c6b41b83 Merge pull request #1762 from arc53/dartpain-patch-2
Update README.md
2025-05-01 17:19:10 +03:00
Alex
4b2faae29a Update README.md 2025-05-01 17:15:08 +03:00
ManishMadan2882
7e28e562d0 (fix:mermaid) download svg/png 2025-05-01 19:37:03 +05:30
dependabot[bot]
93c2e2a597 build(deps): bump transformers from 4.49.0 to 4.51.3 in /application
Bumps [transformers](https://github.com/huggingface/transformers) from 4.49.0 to 4.51.3.
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](https://github.com/huggingface/transformers/compare/v4.49.0...v4.51.3)

---
updated-dependencies:
- dependency-name: transformers
  dependency-version: 4.51.3
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-29 20:50:32 +00:00
Alex
c45d13d834 Merge pull request #1755 from siiddhantt/feat/agent-menu
feat: agent webhook and minor fixes
2025-04-29 00:41:36 +03:00
Alex
330276cdf7 fix: lint for ruff 2025-04-28 22:32:13 +01:00
Siddhant Rai
22c7015c69 refactor: webhook listener handle both POST and GET requests 2025-04-29 00:29:16 +05:30
Alex
cc67d4a1e2 process all request data implicitly 2025-04-28 17:49:29 +01:00
Siddhant Rai
eeb9da696f Merge remote-tracking branch 'upstream/main' into feat/agent-menu 2025-04-28 17:01:46 +05:30
Siddhant Rai
4979e1ac9a feat: add clsx dependency, enhance logging in agent logic, and improve agent logs component 2025-04-28 14:18:28 +05:30
ManishMadan2882
545353dabf (feat:mermaid) use contentLoaded to render 2025-04-27 03:42:28 +05:30
ManishMadan2882
545376740c (fix:re-render) useRef to check for bottom 2025-04-26 19:33:24 +05:30
Siddhant Rai
8289b02ab0 feat: add agent webhook endpoint and implement related functionality 2025-04-26 12:00:29 +05:30
ManishMadan2882
fc0060662b (fix:mermaid) suppress mermaid injected errors in DOM 2025-04-25 21:18:49 +05:30
Alex
df9d432d29 fix: mongo db database name in settings 2025-04-24 17:29:41 +01:00
Alex
76fd6e15cc Update Dockerfile 2025-04-24 18:54:58 +03:00
Alex
06982efda5 Merge pull request #1742 from ManishMadan2882/main
File System Abstraction
2025-04-24 01:32:27 +03:00
Alex
3cd9a72495 add storage type to the settings config 2025-04-23 23:13:39 +01:00
ManishMadan2882
0ce27f274a (feat:storage) file indexes/faiss 2025-04-23 04:28:45 +05:30
ManishMadan2882
e60f78ac4a (feat:storage) file uploads 2025-04-23 03:39:35 +05:30
ManishMadan2882
637d3a24a1 Revert "(feat:storage) file, indexes uploads"
This reverts commit 64c42f0ddf.
2025-04-23 00:52:55 +05:30
ManishMadan2882
24c8b24b1f Revert "(fix:indexes) look for the right path"
This reverts commit 5ad34e2216.
2025-04-23 00:52:22 +05:30
ManishMadan2882
5ad34e2216 (fix:indexes) look for the right path 2025-04-22 17:34:25 +05:30
ManishMadan2882
64c42f0ddf (feat:storage) file, indexes uploads 2025-04-22 05:18:07 +05:30
ManishMadan2882
0a31ddaae6 (feat:storage) use get storage 2025-04-22 01:41:53 +05:30
ManishMadan2882
38476cfeb8 (feat:storage) get storage instance based on settings 2025-04-22 00:57:57 +05:30
asminkarki012
decc31f1f0 update latest changes 2025-04-21 21:05:12 +05:45
asminkarki012
ea0aa64330 fix: added renderer in answer bubble 2025-04-21 20:50:04 +05:45
Manish Madan
e9a6044645 Merge branch 'main' into main 2025-04-20 16:01:13 +05:30
Alex
474d700df2 Merge pull request #1745 from arc53/dependabot/pip/application/google-generativeai-0.8.5
build(deps): bump google-generativeai from 0.8.3 to 0.8.5 in /application
2025-04-20 01:39:11 +03:00
ManishMadan2882
c50ff6faa3 (feat:fs abstract) googleLLM class 2025-04-18 21:03:28 +05:30
ManishMadan2882
c8efef8f04 (fix:openai) image uploads, use lambda in process_files 2025-04-18 18:27:02 +05:30
dependabot[bot]
1d22f77568 build(deps): bump google-generativeai in /application
Bumps [google-generativeai](https://github.com/google/generative-ai-python) from 0.8.3 to 0.8.5.
- [Release notes](https://github.com/google/generative-ai-python/releases)
- [Changelog](https://github.com/google-gemini/deprecated-generative-ai-python/blob/main/RELEASE.md)
- [Commits](https://github.com/google/generative-ai-python/compare/v0.8.3...v0.8.5)

---
updated-dependencies:
- dependency-name: google-generativeai
  dependency-version: 0.8.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-17 20:06:10 +00:00
ManishMadan2882
5aa51f5f36 (feat:file_abstract) openai attachments comply 2025-04-18 01:27:21 +05:30
ManishMadan2882
335c21c48a (fix:attachment) don't calculate MIME again 2025-04-17 16:36:40 +05:30
ManishMadan2882
c35d1cecfe (feat:file_abstract) return storage metadata after upload 2025-04-17 16:29:34 +05:30
ManishMadan2882
0d3e6157cd (feat:attachmentUpload) parse content before upload 2025-04-17 16:23:01 +05:30
ManishMadan2882
68e4cf4d14 (feat:fsabstract) add factory class 2025-04-17 02:40:53 +05:30
ManishMadan2882
9454150f7d (fix:s3) processor func 2025-04-17 02:36:55 +05:30
ManishMadan2882
0a0e16547e (feat:fs_abstract) attachment uploads 2025-04-17 02:35:45 +05:30
Alex
0aec1b9969 Merge pull request #1739 from siiddhantt/feat/agent-menu
feat: new agents section
2025-04-16 17:50:11 +03:00
Alex
3e1ec23409 fix: lint 2025-04-16 15:47:39 +01:00
Siddhant Rai
2f9f428a2f feat: add prompt management functionality with modal integration 2025-04-16 19:41:16 +05:30
Siddhant Rai
da15cde49c Merge branch 'main' into feat/agent-menu 2025-04-16 18:35:38 +05:30
Siddhant Rai
e6ed37139a feat: implement agent preview functionality and enhance conversation handling 2025-04-16 16:26:47 +05:30
ManishMadan2882
377e33c148 (feat:file_abstract) process files method 2025-04-16 03:36:45 +05:30
ManishMadan2882
e567d88951 (feat:fs_abstract) s3 2025-04-16 03:31:42 +05:30
ManishMadan2882
89b2937b11 (feat:fs_abstract) local 2025-04-16 03:31:28 +05:30
ManishMadan2882
142ed75468 (feat:fs_abstract) base 2025-04-16 03:31:06 +05:30
Siddhant Rai
d80eeb044c feat: add agent timestamps and improve agent retrieval logic 2025-04-15 14:06:20 +05:30
Siddhant Rai
7c69e99914 feat: Enhance agent selection and conversation handling
- Added functionality to select agents in the Navigation component, allowing users to reset conversations and set the selected agent.
- Updated the MessageInput component to conditionally show source and tool buttons based on the selected agent.
- Modified the Conversation component to handle agent-specific queries and manage file uploads.
- Improved conversation fetching logic to include agent IDs and handle attachments.
- Introduced new types for conversation summaries and results to streamline API responses.
- Refactored Redux slices to manage selected agent state and improve overall state management.
- Enhanced error handling and loading states across components for better user experience.
2025-04-15 11:53:53 +05:30
Alex
5e1aaf5a44 Merge pull request #1733 from ManishMadan2882/main
Attachments: Enhancements, strategy specific to certain LLMs
2025-04-15 01:55:46 +03:00
Alex
ad610d2f90 fix: lint ruff 2025-04-14 23:49:40 +01:00
Alex
02934452d6 fix: remove comment 2025-04-14 23:48:17 +01:00
ManishMadan2882
8b054010e1 (Feat:input) sleek attachment pills 2025-04-15 03:30:11 +05:30
Alex
5b77f3839b fix: maybe 2025-04-14 20:24:05 +01:00
Alex
231b792452 fix: streaming with tools google and openai halfway 2025-04-14 18:54:40 +01:00
ManishMadan2882
b468e0c164 (fix:generation) attach + tools 2025-04-13 18:33:26 +05:30
Siddhant Rai
fa1f9d7009 feat: agents route replacing chatbots
- Removed API Keys tab from SettingsBar and adjusted tab layout.
- Improved styling for tab scrolling buttons and gradient indicators.
- Introduced AgentDetailsModal for displaying agent access details.
- Updated Analytics component to fetch agent data and handle analytics for selected agent.
- Refactored Logs component to accept agentId as a prop for filtering logs.
- Enhanced type definitions for InputProps to include textSize.
- Cleaned up unused imports and optimized component structure across various files.
2025-04-11 17:24:22 +05:30
asminkarki012
c5a8f3abcd refact:generate unique mermaid id using crypto.randomUUID 2025-04-11 11:48:07 +05:45
ManishMadan2882
dfe6a8d3e3 (feat:attach) simplify the format for files 2025-04-11 01:53:17 +05:30
ManishMadan2882
292257770c (refactor:attach) centralize attachment state 2025-04-10 03:02:39 +05:30
ManishMadan2882
b4c6b2b08b (feat:utils) getOS, isTouchDevice 2025-04-10 01:44:49 +05:30
ManishMadan2882
6cb4577e1b (feat:input) hotkey for sources open 2025-04-10 01:43:46 +05:30
ManishMadan2882
456784db48 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-04-10 01:29:52 +05:30
ManishMadan2882
dd9ea46e58 (feat:attach) strategy specific to google genai 2025-04-10 01:29:01 +05:30
GH Action - Upstream Sync
ed3af2fac0 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-04-09 01:27:51 +00:00
Alex
02f8132f3a Update README.md 2025-04-08 15:56:34 +03:00
ManishMadan2882
55bd90fad9 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-04-08 17:51:27 +05:30
ManishMadan2882
cd7bbb45c3 (fix:popups) minor hover 2025-04-08 17:51:18 +05:30
Alex
6c7fc0ed22 Merge pull request #1729 from arc53/setup-windows
win-setup
2025-04-08 13:58:28 +03:00
ManishMadan2882
5421bc1386 (feat:attach) extend support for imgs 2025-04-08 15:51:37 +05:30
GH Action - Upstream Sync
051841e566 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-04-08 01:27:09 +00:00
ManishMadan2882
0c68815cf2 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-04-07 20:16:12 +05:30
ManishMadan2882
0c1138179b (feat:attach) store file MIME type 2025-04-07 20:16:03 +05:30
ManishMadan2882
1f3d1cc73e (feat:attach) handle unsupported attachments 2025-04-07 20:15:11 +05:30
Alex
707d1332de Merge pull request #1734 from arc53/fix-thought-param
fix: thought param in /api/answer
2025-04-07 13:08:23 +03:00
Alex
f6c88da81b fix: thought param in /api/answer 2025-04-07 11:03:49 +01:00
GH Action - Upstream Sync
a651e6e518 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-04-07 01:37:46 +00:00
Alex
bea89b93eb Merge pull request #1730 from arc53/dependabot/pip/application/multidict-6.3.2
build(deps): bump multidict from 6.1.0 to 6.3.2 in /application
2025-04-06 22:00:31 +03:00
ManishMadan2882
244c9b96a2 (fix:attach) pass attachment docs as it is 2025-04-06 16:02:30 +05:30
ManishMadan2882
a37bd76950 (feat:storeAttach) store in inputs, raise errors from worker 2025-04-06 16:01:57 +05:30
ManishMadan2882
9d70032de8 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-04-06 15:57:26 +05:30
ManishMadan2882
e4945b41e9 (feat:files) link attachment to openai_api 2025-04-06 15:57:18 +05:30
Pavel
493dc8689c guide-updates 2025-04-05 17:49:56 +04:00
GH Action - Upstream Sync
bdac2ffa27 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-04-05 01:25:36 +00:00
asminkarki012
b1235f3ce0 feat[mermaid]:clean comment 2025-04-04 18:40:57 +05:45
asminkarki012
ba4bb63a1f feat[mermaid]:integration of mermaid 2025-04-04 18:36:45 +05:45
Alex
3227b0e69c Merge pull request #1722 from asminkarki012/fix/csv-parser-include-headers
fix[csv_parser]:missing header
2025-04-04 14:53:45 +03:00
ManishMadan2882
29c899627e Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-04-04 03:31:45 +05:30
ManishMadan2882
5923781484 (feat:attach) warning for error, progress, removal 2025-04-04 03:29:01 +05:30
dependabot[bot]
8bb263a2ec build(deps): bump multidict from 6.1.0 to 6.3.2 in /application
Bumps [multidict](https://github.com/aio-libs/multidict) from 6.1.0 to 6.3.2.
- [Release notes](https://github.com/aio-libs/multidict/releases)
- [Changelog](https://github.com/aio-libs/multidict/blob/master/CHANGES.rst)
- [Commits](https://github.com/aio-libs/multidict/compare/v6.1.0...v6.3.2)

---
updated-dependencies:
- dependency-name: multidict
  dependency-version: 6.3.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-03 20:24:09 +00:00
Alex
94c7bba168 Merge pull request #1716 from ManishMadan2882/main
Sources + Tools
2025-04-03 02:03:47 +03:00
ManishMadan2882
f9ad4c068a (feat:attach) fallback strategy to process docs 2025-04-03 03:26:37 +05:30
ManishMadan2882
19d68252cd (fix/attach): inputs are created in application 2025-04-02 16:36:58 +05:30
ManishMadan2882
72bbe3b1ce Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-04-02 15:57:45 +05:30
ManishMadan2882
856824316b (feat: attach) handle them locally from message input 2025-04-02 15:33:35 +05:30
ManishMadan2882
95e189d1d8 (feat:base/agents) default attachment 2025-04-02 15:29:04 +05:30
ManishMadan2882
c629460acb (feat:attach) extract contents in endpoint layer 2025-04-02 15:21:33 +05:30
ManishMadan2882
f235a94986 (feat:attach) pass attachments for generation 2025-04-02 15:14:56 +05:30
GH Action - Upstream Sync
632cba86e9 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-04-02 01:36:06 +00:00
Alex
6b92c7eccc Update README.md 2025-04-01 12:27:55 +03:00
Alex
ab0da1abac Merge pull request #1721 from siiddhantt/feat/react-agent
feat: ReActAgent and agent refactor
2025-04-01 12:08:09 +03:00
Siddhant Rai
7f31ac7bcb refactor: minor changes 2025-04-01 12:33:43 +05:30
Pavel
57a6fb31b2 periodic header injection 2025-03-31 22:28:04 +04:00
Siddhant Rai
fd2b6c111c feat: enhance ClassicAgent and ReActAgent with tool preparation steps 2025-03-31 17:02:36 +05:30
GH Action - Upstream Sync
302458b505 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-03-30 01:40:32 +00:00
GH Action - Upstream Sync
8978a4cf2d Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-03-29 01:25:46 +00:00
asminkarki012
c70be12bfd fix[csv_parser]:missing header 2025-03-28 22:46:11 +05:45
ManishMadan2882
4241307990 (fix:responsive/messageInput) sleek badge buttons, aligned send btn 2025-03-28 20:42:12 +05:30
ManishMadan2882
727a8ef13d (fix:responsive) tools and source pop 2025-03-28 20:40:46 +05:30
ManishMadan2882
7c92558ad1 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-03-28 18:13:42 +05:30
ManishMadan2882
45083d29a6 (feat:attach) show files on conversations 2025-03-28 18:13:24 +05:30
ManishMadan2882
5089d86095 (feat:attach) send attachment ids 2025-03-28 18:12:38 +05:30
ManishMadan2882
80e55ef385 (feat:attach) functionality to upload files 2025-03-28 18:10:18 +05:30
GH Action - Upstream Sync
b5ed98445f Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-03-28 01:26:07 +00:00
Siddhant Rai
82d377abf5 feat: add support for thought processing in conversation flow and introduce ReActAgent 2025-03-27 23:19:08 +05:30
ManishMadan2882
2dbea5d1b2 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-03-27 16:55:49 +05:30
ManishMadan2882
4ba35d6189 (feat: attachment) integrate upload on fe 2025-03-27 16:54:21 +05:30
GH Action - Upstream Sync
cec3f987f2 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-03-27 01:26:14 +00:00
ManishMadan2882
55050a9f58 (feat:attachment) upload single file 2025-03-27 03:28:03 +05:30
ManishMadan2882
502dc9ec52 (feat:attachments) store and ingest files shared 2025-03-26 18:01:31 +05:30
ManishMadan2882
9c8999a3ae (fix:sources) doc selection for preLoaded 2025-03-25 19:04:31 +05:30
ManishMadan2882
90db42ce3a Revert "(fix:sources) preloaded docs have ids too"
This reverts commit 02c8bd06f5.
2025-03-25 16:08:59 +05:30
ManishMadan2882
551130f0e1 (feat:sources/tools) ui perfection for pop-ups 2025-03-25 03:22:21 +05:30
ManishMadan2882
2940a60b3c (feat:input) tools pop-up 2025-03-24 17:16:24 +05:30
ManishMadan2882
76b9bc0d56 (feat:input) sources pop-up 2025-03-24 17:15:57 +05:30
ManishMadan2882
42422ccdcd (feat:settings) routes for tab 2025-03-24 13:56:12 +05:30
ManishMadan2882
e9702ae2de Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-03-21 23:57:09 +05:30
ManishMadan2882
5c54852ebe (fix:tools) no tools placeholder 2025-03-21 23:56:47 +05:30
GH Action - Upstream Sync
718a86ecda Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-03-21 01:26:21 +00:00
Pavel
1223fd2149 win-setup 2025-03-20 21:48:59 +03:00
ManishMadan2882
b09386d102 (clean:nav) rm source dropdown 2025-03-20 11:26:45 +05:30
ManishMadan2882
6464698b6d Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-03-20 09:47:38 +05:30
ManishMadan2882
9230fd3bd6 Revert "(feat:nav) clean-up source dropdown"
This reverts commit 561a125c92.
2025-03-20 09:47:29 +05:30
ManishMadan2882
7771609ea0 (locales) update placeholder 2025-03-20 09:43:36 +05:30
ManishMadan2882
561a125c92 (feat:nav) clean-up source dropdown 2025-03-20 09:42:57 +05:30
ManishMadan2882
7149461d8e (feat:sources) add pop-up to switch sources 2025-03-20 09:42:08 +05:30
ManishMadan2882
02c8bd06f5 (fix:sources) preloaded docs have ids too 2025-03-20 09:39:30 +05:30
ManishMadan2882
9f17eb1d28 feat(textInput) new design 2025-03-18 19:33:26 +05:30
210 changed files with 18165 additions and 4788 deletions

.gitattributes vendored Normal file

@@ -0,0 +1,2 @@
# Auto detect text files and perform LF normalization
* text=auto

.gitignore vendored

@@ -113,6 +113,7 @@ venv.bak/
# Spyder project settings
.spyderproject
.spyproject
.jwt_secret_key
# Rope project settings
.ropeproject


@@ -48,10 +48,13 @@
- [x] Add tools (Jan 2025)
- [x] Manually updating chunks in the app UI (Feb 2025)
- [x] Devcontainer for easy development (Feb 2025)
- [ ] Anthropic Tool compatibility
- [ ] Add triggerable actions / tools (webhook)
- [x] ReACT agent (March 2025)
- [x] Chatbots menu re-design to handle tools, agent types, and more (April 2025)
- [x] New input box in the conversation menu (April 2025)
- [x] Add triggerable actions / tools (webhook) (April 2025)
- [ ] Anthropic Tool compatibility (May 2025)
- [ ] Add OAuth 2.0 authentication for tools and sources
- [ ] Chatbots menu re-design to handle tools, scheduling, and more
- [ ] Agent scheduling
You can find our full roadmap [here](https://github.com/orgs/arc53/projects/2). Please don't hesitate to contribute or create issues, it helps us improve DocsGPT!
@@ -92,13 +95,15 @@ A more detailed [Quickstart](https://docs.docsgpt.cloud/quickstart) is available
./setup.sh
```
This interactive script will guide you through setting up DocsGPT. It offers four options: using the public API, running locally, connecting to a local inference engine, or using a cloud API provider. The script will automatically configure your `.env` file and handle necessary downloads and installations based on your chosen option.
**For Windows:**
2. **Follow the Docker Deployment Guide:**
2. **Run the PowerShell setup script:**
Please refer to the [Docker Deployment documentation](https://docs.docsgpt.cloud/Deploying/Docker-Deploying) for detailed step-by-step instructions on setting up DocsGPT using Docker.
```powershell
PowerShell -ExecutionPolicy Bypass -File .\setup.ps1
```
Either script will guide you through setting up DocsGPT. Four options available: using the public API, running locally, connecting to a local inference engine, or using a cloud API provider. Scripts will automatically configure your `.env` file and handle necessary downloads and installations based on your chosen option.
**Navigate to http://localhost:5173/**
@@ -107,7 +112,7 @@ To stop DocsGPT, open a terminal in the `DocsGPT` directory and run:
```bash
docker compose -f deployment/docker-compose.yaml down
```
(or use the specific `docker compose down` command shown after running `setup.sh`).
(or use the specific `docker compose down` command shown after running the setup script).
> [!Note]
> For development environment setup instructions, please refer to the [Development Environment Guide](https://docs.docsgpt.cloud/Deploying/Development-Environment).


@@ -84,4 +84,4 @@ EXPOSE 7091
USER appuser
# Start Gunicorn
CMD ["gunicorn", "-w", "2", "--timeout", "120", "--bind", "0.0.0.0:7091", "application.wsgi:app"]
CMD ["gunicorn", "-w", "1", "--timeout", "120", "--bind", "0.0.0.0:7091", "--preload", "application.wsgi:app"]


@@ -1,9 +1,11 @@
from application.agents.classic_agent import ClassicAgent
from application.agents.react_agent import ReActAgent
class AgentCreator:
agents = {
"classic": ClassicAgent,
"react": ReActAgent,
}
@classmethod
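The factory's `create_agent` classmethod is truncated above; a hedged usage sketch of how it is consumed by the /stream route later in this diff. The module path and the placeholder values are assumptions, and `create_agent` is assumed to forward its keyword arguments to the selected class constructor.

```python
# Hedged sketch, assuming create_agent(name, **kwargs) instantiates the
# mapped class; the argument names mirror the /stream route in this diff.
from application.agents.agent_creator import AgentCreator  # path assumed

agent = AgentCreator.create_agent(
    "react",                          # or "classic"
    endpoint="stream",
    llm_name="openai",
    gpt_model="gpt-4o-mini",
    api_key="sk-...",                 # hypothetical placeholder
    user_api_key=None,
    prompt="Answer using: {summaries}",
    chat_history=[],
    decoded_token={"sub": "local"},
    attachments=[],
)
```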


@@ -1,42 +1,93 @@
from typing import Dict, Generator
import uuid
from abc import ABC, abstractmethod
from typing import Dict, Generator, List, Optional
from bson.objectid import ObjectId
from application.agents.llm_handler import get_llm_handler
from application.agents.tools.tool_action_parser import ToolActionParser
from application.agents.tools.tool_manager import ToolManager
from application.core.mongo_db import MongoDB
from application.core.settings import settings
from application.llm.handlers.handler_creator import LLMHandlerCreator
from application.llm.llm_creator import LLMCreator
from application.logging import build_stack_data, log_activity, LogContext
from application.retriever.base import BaseRetriever
class BaseAgent:
class BaseAgent(ABC):
def __init__(
self,
endpoint,
llm_name,
gpt_model,
api_key,
user_api_key=None,
decoded_token=None,
endpoint: str,
llm_name: str,
gpt_model: str,
api_key: str,
user_api_key: Optional[str] = None,
prompt: str = "",
chat_history: Optional[List[Dict]] = None,
decoded_token: Optional[Dict] = None,
attachments: Optional[List[Dict]] = None,
):
self.endpoint = endpoint
self.llm_name = llm_name
self.gpt_model = gpt_model
self.api_key = api_key
self.user_api_key = user_api_key
self.prompt = prompt
self.decoded_token = decoded_token or {}
self.user: str = decoded_token.get("sub")
self.tool_config: Dict = {}
self.tools: List[Dict] = []
self.tool_calls: List[Dict] = []
self.chat_history: List[Dict] = chat_history if chat_history is not None else []
self.llm = LLMCreator.create_llm(
llm_name,
api_key=api_key,
user_api_key=user_api_key,
decoded_token=decoded_token,
)
self.llm_handler = get_llm_handler(llm_name)
self.gpt_model = gpt_model
self.tools = []
self.tool_config = {}
self.tool_calls = []
self.llm_handler = LLMHandlerCreator.create_handler(
llm_name if llm_name else "default"
)
self.attachments = attachments or []
def gen(self, *args, **kwargs) -> Generator[Dict, None, None]:
raise NotImplementedError('Method "gen" must be implemented in the child class')
@log_activity()
def gen(
self, query: str, retriever: BaseRetriever, log_context: LogContext = None
) -> Generator[Dict, None, None]:
yield from self._gen_inner(query, retriever, log_context)
@abstractmethod
def _gen_inner(
self, query: str, retriever: BaseRetriever, log_context: LogContext
) -> Generator[Dict, None, None]:
pass
def _get_tools(self, api_key: str = None) -> Dict[str, Dict]:
mongo = MongoDB.get_client()
db = mongo[settings.MONGO_DB_NAME]
agents_collection = db["agents"]
tools_collection = db["user_tools"]
agent_data = agents_collection.find_one({"key": api_key or self.user_api_key})
tool_ids = agent_data.get("tools", []) if agent_data else []
tools = (
tools_collection.find(
{"_id": {"$in": [ObjectId(tool_id) for tool_id in tool_ids]}}
)
if tool_ids
else []
)
tools = list(tools)
tools_by_id = {str(tool["_id"]): tool for tool in tools} if tools else {}
return tools_by_id
def _get_user_tools(self, user="local"):
mongo = MongoDB.get_client()
db = mongo["docsgpt"]
db = mongo[settings.MONGO_DB_NAME]
user_tools_collection = db["user_tools"]
user_tools = user_tools_collection.find({"user": user, "status": True})
user_tools = list(user_tools)
@@ -85,6 +136,15 @@ class BaseAgent:
parser = ToolActionParser(self.llm.__class__.__name__)
tool_id, action_name, call_args = parser.parse_args(call)
call_id = getattr(call, "id", None) or str(uuid.uuid4())
tool_call_data = {
"tool_name": tools_dict[tool_id]["name"],
"call_id": call_id,
"action_name": f"{action_name}_{tool_id}",
"arguments": call_args,
}
yield {"type": "tool_call", "data": {**tool_call_data, "status": "pending"}}
tool_data = tools_dict[tool_id]
action_data = (
tool_data["config"]["actions"][action_name]
@@ -109,14 +169,12 @@ class BaseAgent:
for param, details in action_data[param_type]["properties"].items():
if param not in call_args and "value" in details:
target_dict[param] = details["value"]
for param, value in call_args.items():
for param_type, target_dict in param_types.items():
if param_type in action_data and param in action_data[param_type].get(
"properties", {}
):
target_dict[param] = value
tm = ToolManager(config={})
tool = tm.load_tool(
tool_data["name"],
@@ -139,15 +197,131 @@ class BaseAgent:
else:
print(f"Executing tool: {action_name} with args: {call_args}")
result = tool.execute_action(action_name, **parameters)
call_id = getattr(call, "id", None)
tool_call_data["result"] = (
f"{str(result)[:50]}..." if len(str(result)) > 50 else result
)
tool_call_data = {
"tool_name": tool_data["name"],
"call_id": call_id if call_id is not None else "None",
"action_name": f"{action_name}_{tool_id}",
"arguments": call_args,
"result": result,
}
yield {"type": "tool_call", "data": {**tool_call_data, "status": "completed"}}
self.tool_calls.append(tool_call_data)
return result, call_id
def _get_truncated_tool_calls(self):
return [
{
**tool_call,
"result": (
f"{str(tool_call['result'])[:50]}..."
if len(str(tool_call["result"])) > 50
else tool_call["result"]
),
"status": "completed",
}
for tool_call in self.tool_calls
]
def _build_messages(
self,
system_prompt: str,
query: str,
retrieved_data: List[Dict],
) -> List[Dict]:
docs_together = "\n".join([doc["text"] for doc in retrieved_data])
p_chat_combine = system_prompt.replace("{summaries}", docs_together)
messages_combine = [{"role": "system", "content": p_chat_combine}]
for i in self.chat_history:
if "prompt" in i and "response" in i:
messages_combine.append({"role": "user", "content": i["prompt"]})
messages_combine.append({"role": "assistant", "content": i["response"]})
if "tool_calls" in i:
for tool_call in i["tool_calls"]:
call_id = tool_call.get("call_id") or str(uuid.uuid4())
function_call_dict = {
"function_call": {
"name": tool_call.get("action_name"),
"args": tool_call.get("arguments"),
"call_id": call_id,
}
}
function_response_dict = {
"function_response": {
"name": tool_call.get("action_name"),
"response": {"result": tool_call.get("result")},
"call_id": call_id,
}
}
messages_combine.append(
{"role": "assistant", "content": [function_call_dict]}
)
messages_combine.append(
{"role": "tool", "content": [function_response_dict]}
)
messages_combine.append({"role": "user", "content": query})
return messages_combine
def _retriever_search(
self,
retriever: BaseRetriever,
query: str,
log_context: Optional[LogContext] = None,
) -> List[Dict]:
retrieved_data = retriever.search(query)
if log_context:
data = build_stack_data(retriever, exclude_attributes=["llm"])
log_context.stacks.append({"component": "retriever", "data": data})
return retrieved_data
def _llm_gen(self, messages: List[Dict], log_context: Optional[LogContext] = None):
gen_kwargs = {"model": self.gpt_model, "messages": messages}
if (
hasattr(self.llm, "_supports_tools")
and self.llm._supports_tools
and self.tools
):
gen_kwargs["tools"] = self.tools
resp = self.llm.gen_stream(**gen_kwargs)
if log_context:
data = build_stack_data(self.llm, exclude_attributes=["client"])
log_context.stacks.append({"component": "llm", "data": data})
return resp
def _llm_handler(
self,
resp,
tools_dict: Dict,
messages: List[Dict],
log_context: Optional[LogContext] = None,
attachments: Optional[List[Dict]] = None,
):
resp = self.llm_handler.process_message_flow(
self, resp, tools_dict, messages, attachments, True
)
if log_context:
data = build_stack_data(self.llm_handler, exclude_attributes=["tool_calls"])
log_context.stacks.append({"component": "llm_handler", "data": data})
return resp
def _handle_response(self, response, tools_dict, messages, log_context):
if isinstance(response, str):
yield {"answer": response}
return
if hasattr(response, "message") and getattr(response.message, "content", None):
yield {"answer": response.message.content}
return
processed_response_gen = self._llm_handler(
response, tools_dict, messages, log_context, self.attachments
)
for event in processed_response_gen:
if isinstance(event, str):
yield {"answer": event}
elif hasattr(event, "message") and getattr(event.message, "content", None):
yield {"answer": event.message.content}
elif isinstance(event, dict) and "type" in event:
yield event
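Taken together, the rewritten base class is a template method: `gen()` adds logging while subclasses supply only `_gen_inner`. A minimal sketch of a hypothetical subclass built entirely from the helpers defined above:

```python
# Hypothetical subclass for illustration; every helper it calls
# (_retriever_search, _build_messages, _llm_gen, _handle_response)
# is defined in the BaseAgent diff above.
from typing import Dict, Generator

from application.agents.base import BaseAgent
from application.logging import LogContext
from application.retriever.base import BaseRetriever


class MinimalAgent(BaseAgent):
    def _gen_inner(
        self, query: str, retriever: BaseRetriever, log_context: LogContext
    ) -> Generator[Dict, None, None]:
        retrieved_data = self._retriever_search(retriever, query, log_context)
        messages = self._build_messages(self.prompt, query, retrieved_data)
        response = self._llm_gen(messages, log_context)
        yield from self._handle_response(response, {}, messages, log_context)
        yield {"sources": retrieved_data}
```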

View File

@@ -1,140 +1,53 @@
import uuid
from typing import Dict, Generator
from application.agents.base import BaseAgent
from application.logging import build_stack_data, log_activity, LogContext
from application.logging import LogContext
from application.retriever.base import BaseRetriever
import logging
logger = logging.getLogger(__name__)
class ClassicAgent(BaseAgent):
def __init__(
self,
endpoint,
llm_name,
gpt_model,
api_key,
user_api_key=None,
prompt="",
chat_history=None,
decoded_token=None,
):
super().__init__(
endpoint, llm_name, gpt_model, api_key, user_api_key, decoded_token
)
self.user = decoded_token.get("sub")
self.prompt = prompt
self.chat_history = chat_history if chat_history is not None else []
"""A simplified classic agent with clear execution flow.
@log_activity()
def gen(
self, query: str, retriever: BaseRetriever, log_context: LogContext = None
) -> Generator[Dict, None, None]:
yield from self._gen_inner(query, retriever, log_context)
Usage:
1. Processes a query through retrieval
2. Sets up available tools
3. Generates responses using LLM
4. Handles tool interactions if needed
5. Returns standardized outputs
Easy to extend by overriding specific steps.
"""
def _gen_inner(
self, query: str, retriever: BaseRetriever, log_context: LogContext
) -> Generator[Dict, None, None]:
# Step 1: Retrieve relevant data
retrieved_data = self._retriever_search(retriever, query, log_context)
docs_together = "\n".join([doc["text"] for doc in retrieved_data])
p_chat_combine = self.prompt.replace("{summaries}", docs_together)
messages_combine = [{"role": "system", "content": p_chat_combine}]
if len(self.chat_history) > 0:
for i in self.chat_history:
if "prompt" in i and "response" in i:
messages_combine.append({"role": "user", "content": i["prompt"]})
messages_combine.append(
{"role": "assistant", "content": i["response"]}
)
if "tool_calls" in i:
for tool_call in i["tool_calls"]:
call_id = tool_call.get("call_id")
if call_id is None or call_id == "None":
call_id = str(uuid.uuid4())
function_call_dict = {
"function_call": {
"name": tool_call.get("action_name"),
"args": tool_call.get("arguments"),
"call_id": call_id,
}
}
function_response_dict = {
"function_response": {
"name": tool_call.get("action_name"),
"response": {"result": tool_call.get("result")},
"call_id": call_id,
}
}
messages_combine.append(
{"role": "assistant", "content": [function_call_dict]}
)
messages_combine.append(
{"role": "tool", "content": [function_response_dict]}
)
messages_combine.append({"role": "user", "content": query})
tools_dict = self._get_user_tools(self.user)
# Step 2: Prepare tools
tools_dict = (
self._get_user_tools(self.user)
if not self.user_api_key
else self._get_tools(self.user_api_key)
)
self._prepare_tools(tools_dict)
resp = self._llm_gen(messages_combine, log_context)
# Step 3: Build and process messages
messages = self._build_messages(self.prompt, query, retrieved_data)
llm_response = self._llm_gen(messages, log_context)
if isinstance(resp, str):
yield {"answer": resp}
return
if (
hasattr(resp, "message")
and hasattr(resp.message, "content")
and resp.message.content is not None
):
yield {"answer": resp.message.content}
return
resp = self._llm_handler(resp, tools_dict, messages_combine, log_context)
if isinstance(resp, str):
yield {"answer": resp}
elif (
hasattr(resp, "message")
and hasattr(resp.message, "content")
and resp.message.content is not None
):
yield {"answer": resp.message.content}
else:
completion = self.llm.gen_stream(
model=self.gpt_model, messages=messages_combine, tools=self.tools
)
for line in completion:
if isinstance(line, str):
yield {"answer": line}
# Step 4: Handle the response
yield from self._handle_response(
llm_response, tools_dict, messages, log_context
)
# Step 5: Return metadata
yield {"sources": retrieved_data}
yield {"tool_calls": self.tool_calls.copy()}
yield {"tool_calls": self._get_truncated_tool_calls()}
def _retriever_search(self, retriever, query, log_context):
retrieved_data = retriever.search(query)
if log_context:
data = build_stack_data(retriever, exclude_attributes=["llm"])
log_context.stacks.append({"component": "retriever", "data": data})
return retrieved_data
def _llm_gen(self, messages_combine, log_context):
resp = self.llm.gen_stream(
model=self.gpt_model, messages=messages_combine, tools=self.tools
# Log tool calls for debugging
log_context.stacks.append(
{"component": "agent", "data": {"tool_calls": self.tool_calls.copy()}}
)
if log_context:
data = build_stack_data(self.llm)
log_context.stacks.append({"component": "llm", "data": data})
return resp
def _llm_handler(self, resp, tools_dict, messages_combine, log_context):
resp = self.llm_handler.handle_response(
self, resp, tools_dict, messages_combine
)
if log_context:
data = build_stack_data(self.llm_handler)
log_context.stacks.append({"component": "llm_handler", "data": data})
return resp
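The simplified agent yields a stream of dicts rather than a single response. A short consumption sketch (the wiring of `agent` and `retriever` is assumed) showing the event keys emitted above:

```python
# Assumed wiring: `agent` is a ClassicAgent, `retriever` a BaseRetriever.
for event in agent.gen(query="What changed in April?", retriever=retriever):
    if "answer" in event:
        print(event["answer"], end="")   # streamed answer chunks
    elif "sources" in event:
        sources = event["sources"]       # retrieved documents
    elif "tool_calls" in event:
        tool_log = event["tool_calls"]   # truncated tool-call records
```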


@@ -1,254 +0,0 @@
import json
from abc import ABC, abstractmethod
from application.logging import build_stack_data
class LLMHandler(ABC):
def __init__(self):
self.llm_calls = []
self.tool_calls = []
@abstractmethod
def handle_response(self, agent, resp, tools_dict, messages, **kwargs):
pass
class OpenAILLMHandler(LLMHandler):
def handle_response(self, agent, resp, tools_dict, messages, stream: bool = True):
if not stream:
while hasattr(resp, "finish_reason") and resp.finish_reason == "tool_calls":
message = json.loads(resp.model_dump_json())["message"]
keys_to_remove = {"audio", "function_call", "refusal"}
filtered_data = {
k: v for k, v in message.items() if k not in keys_to_remove
}
messages.append(filtered_data)
tool_calls = resp.message.tool_calls
for call in tool_calls:
try:
self.tool_calls.append(call)
tool_response, call_id = agent._execute_tool_action(
tools_dict, call
)
function_call_dict = {
"function_call": {
"name": call.function.name,
"args": call.function.arguments,
"call_id": call_id,
}
}
function_response_dict = {
"function_response": {
"name": call.function.name,
"response": {"result": tool_response},
"call_id": call_id,
}
}
messages.append(
{"role": "assistant", "content": [function_call_dict]}
)
messages.append(
{"role": "tool", "content": [function_response_dict]}
)
except Exception as e:
messages.append(
{
"role": "tool",
"content": f"Error executing tool: {str(e)}",
"tool_call_id": call_id,
}
)
resp = agent.llm.gen_stream(
model=agent.gpt_model, messages=messages, tools=agent.tools
)
self.llm_calls.append(build_stack_data(agent.llm))
return resp
else:
while True:
tool_calls = {}
for chunk in resp:
if isinstance(chunk, str) and len(chunk) > 0:
return
elif hasattr(chunk, "delta"):
chunk_delta = chunk.delta
if (
hasattr(chunk_delta, "tool_calls")
and chunk_delta.tool_calls is not None
):
for tool_call in chunk_delta.tool_calls:
index = tool_call.index
if index not in tool_calls:
tool_calls[index] = {
"id": "",
"function": {"name": "", "arguments": ""},
}
current = tool_calls[index]
if tool_call.id:
current["id"] = tool_call.id
if tool_call.function.name:
current["function"][
"name"
] = tool_call.function.name
if tool_call.function.arguments:
current["function"][
"arguments"
] += tool_call.function.arguments
tool_calls[index] = current
if (
hasattr(chunk, "finish_reason")
and chunk.finish_reason == "tool_calls"
):
for index in sorted(tool_calls.keys()):
call = tool_calls[index]
try:
self.tool_calls.append(call)
tool_response, call_id = agent._execute_tool_action(
tools_dict, call
)
if isinstance(call["function"]["arguments"], str):
call["function"]["arguments"] = json.loads(call["function"]["arguments"])
function_call_dict = {
"function_call": {
"name": call["function"]["name"],
"args": call["function"]["arguments"],
"call_id": call["id"],
}
}
function_response_dict = {
"function_response": {
"name": call["function"]["name"],
"response": {"result": tool_response},
"call_id": call["id"],
}
}
messages.append(
{
"role": "assistant",
"content": [function_call_dict],
}
)
messages.append(
{
"role": "tool",
"content": [function_response_dict],
}
)
except Exception as e:
messages.append(
{
"role": "assistant",
"content": f"Error executing tool: {str(e)}",
}
)
tool_calls = {}
if (
hasattr(chunk, "finish_reason")
and chunk.finish_reason == "stop"
):
return
elif isinstance(chunk, str) and len(chunk) == 0:
continue
resp = agent.llm.gen_stream(
model=agent.gpt_model, messages=messages, tools=agent.tools
)
self.llm_calls.append(build_stack_data(agent.llm))
class GoogleLLMHandler(LLMHandler):
def handle_response(self, agent, resp, tools_dict, messages, stream: bool = True):
from google.genai import types
while True:
if not stream:
response = agent.llm.gen(
model=agent.gpt_model, messages=messages, tools=agent.tools
)
self.llm_calls.append(build_stack_data(agent.llm))
if response.candidates and response.candidates[0].content.parts:
tool_call_found = False
for part in response.candidates[0].content.parts:
if part.function_call:
tool_call_found = True
self.tool_calls.append(part.function_call)
tool_response, call_id = agent._execute_tool_action(
tools_dict, part.function_call
)
function_response_part = types.Part.from_function_response(
name=part.function_call.name,
response={"result": tool_response},
)
messages.append(
{"role": "model", "content": [part.to_json_dict()]}
)
messages.append(
{
"role": "tool",
"content": [function_response_part.to_json_dict()],
}
)
if (
not tool_call_found
and response.candidates[0].content.parts
and response.candidates[0].content.parts[0].text
):
return response.candidates[0].content.parts[0].text
elif not tool_call_found:
return response.candidates[0].content.parts
else:
return response
else:
response = agent.llm.gen_stream(
model=agent.gpt_model, messages=messages, tools=agent.tools
)
self.llm_calls.append(build_stack_data(agent.llm))
tool_call_found = False
for result in response:
if hasattr(result, "function_call"):
tool_call_found = True
self.tool_calls.append(result.function_call)
tool_response, call_id = agent._execute_tool_action(
tools_dict, result.function_call
)
function_response_part = types.Part.from_function_response(
name=result.function_call.name,
response={"result": tool_response},
)
messages.append(
{"role": "model", "content": [result.to_json_dict()]}
)
messages.append(
{
"role": "tool",
"content": [function_response_part.to_json_dict()],
}
)
if not tool_call_found:
return response
def get_llm_handler(llm_type):
handlers = {
"openai": OpenAILLMHandler(),
"google": GoogleLLMHandler(),
}
return handlers.get(llm_type, OpenAILLMHandler())


@@ -0,0 +1,229 @@
import os
from typing import Dict, Generator, List, Any
import logging
from application.agents.base import BaseAgent
from application.logging import build_stack_data, LogContext
from application.retriever.base import BaseRetriever
logger = logging.getLogger(__name__)
current_dir = os.path.dirname(
os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
)
with open(
os.path.join(current_dir, "application/prompts", "react_planning_prompt.txt"), "r"
) as f:
planning_prompt_template = f.read()
with open(
os.path.join(current_dir, "application/prompts", "react_final_prompt.txt"),
"r",
) as f:
final_prompt_template = f.read()
MAX_ITERATIONS_REASONING = 10
class ReActAgent(BaseAgent):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.plan: str = ""
self.observations: List[str] = []
def _extract_content_from_llm_response(self, resp: Any) -> str:
"""
Helper to extract string content from various LLM response types.
Handles strings, message objects (OpenAI-like), and streams.
Adapt stream handling for your specific LLM client if not OpenAI.
"""
collected_content = []
if isinstance(resp, str):
collected_content.append(resp)
elif ( # OpenAI non-streaming or Anthropic non-streaming (older SDK style)
hasattr(resp, "message")
and hasattr(resp.message, "content")
and resp.message.content is not None
):
collected_content.append(resp.message.content)
elif ( # OpenAI non-streaming (Pydantic model), Anthropic new SDK non-streaming
hasattr(resp, "choices") and resp.choices and
hasattr(resp.choices[0], "message") and
hasattr(resp.choices[0].message, "content") and
resp.choices[0].message.content is not None
):
collected_content.append(resp.choices[0].message.content) # OpenAI
elif ( # Anthropic new SDK non-streaming content block
hasattr(resp, "content") and isinstance(resp.content, list) and resp.content and
hasattr(resp.content[0], "text")
):
collected_content.append(resp.content[0].text) # Anthropic
else:
# Assume resp is a stream if not a recognized object
try:
for chunk in resp: # This will fail if resp is not iterable (e.g. a non-streaming response object)
content_piece = ""
# OpenAI-like stream
if hasattr(chunk, 'choices') and len(chunk.choices) > 0 and \
hasattr(chunk.choices[0], 'delta') and \
hasattr(chunk.choices[0].delta, 'content') and \
chunk.choices[0].delta.content is not None:
content_piece = chunk.choices[0].delta.content
# Anthropic-like stream (ContentBlockDelta)
elif hasattr(chunk, 'type') and chunk.type == 'content_block_delta' and \
hasattr(chunk, 'delta') and hasattr(chunk.delta, 'text'):
content_piece = chunk.delta.text
elif isinstance(chunk, str): # Simplest case: stream of strings
content_piece = chunk
if content_piece:
collected_content.append(content_piece)
except TypeError: # If resp is not iterable (e.g. a final response object that wasn't caught above)
logger.debug(f"Response type {type(resp)} could not be iterated as a stream. It might be a non-streaming object not handled by specific checks.")
except Exception as e:
logger.error(f"Error processing potential stream chunk: {e}, chunk was: {getattr(chunk, '__dict__', chunk)}")
return "".join(collected_content)
def _gen_inner(
self, query: str, retriever: BaseRetriever, log_context: LogContext
) -> Generator[Dict, None, None]:
# Reset state for this generation call
self.plan = ""
self.observations = []
retrieved_data = self._retriever_search(retriever, query, log_context)
if self.user_api_key:
tools_dict = self._get_tools(self.user_api_key)
else:
tools_dict = self._get_user_tools(self.user)
self._prepare_tools(tools_dict)
docs_together = "\n".join([doc["text"] for doc in retrieved_data])
iterating_reasoning = 0
while iterating_reasoning < MAX_ITERATIONS_REASONING:
iterating_reasoning += 1
# 1. Create Plan
logger.info("ReActAgent: Creating plan...")
plan_stream = self._create_plan(query, docs_together, log_context)
current_plan_parts = []
yield {"thought": f"Reasoning... (iteration {iterating_reasoning})\n\n"}
for line_chunk in plan_stream:
current_plan_parts.append(line_chunk)
yield {"thought": line_chunk}
self.plan = "".join(current_plan_parts)
if self.plan:
self.observations.append(f"Plan: {self.plan} Iteration: {iterating_reasoning}")
max_obs_len = 20000
obs_str = "\n".join(self.observations)
if len(obs_str) > max_obs_len:
obs_str = obs_str[:max_obs_len] + "\n...[observations truncated]"
execution_prompt_str = (
(self.prompt or "")
+ f"\n\nFollow this plan:\n{self.plan}"
+ f"\n\nObservations:\n{obs_str}"
+ f"\n\nIf there is enough data to complete user query '{query}', Respond with 'SATISFIED' only. Otherwise, continue. Dont Menstion 'SATISFIED' in your response if you are not ready. "
)
messages = self._build_messages(execution_prompt_str, query, retrieved_data)
resp_from_llm_gen = self._llm_gen(messages, log_context)
initial_llm_thought_content = self._extract_content_from_llm_response(resp_from_llm_gen)
if initial_llm_thought_content:
self.observations.append(f"Initial thought/response: {initial_llm_thought_content}")
else:
logger.info("ReActAgent: Initial LLM response (before handler) had no textual content (might be only tool calls).")
resp_after_handler = self._llm_handler(resp_from_llm_gen, tools_dict, messages, log_context)
for tool_call_info in self.tool_calls: # Iterate over self.tool_calls populated by _llm_handler
observation_string = (
f"Executed Action: Tool '{tool_call_info.get('tool_name', 'N/A')}' "
f"with arguments '{tool_call_info.get('arguments', '{}')}'. Result: '{str(tool_call_info.get('result', ''))[:200]}...'"
)
self.observations.append(observation_string)
content_after_handler = self._extract_content_from_llm_response(resp_after_handler)
if content_after_handler:
self.observations.append(f"Response after tool execution: {content_after_handler}")
else:
logger.info("ReActAgent: LLM response after handler had no textual content.")
if log_context:
log_context.stacks.append(
{"component": "agent_tool_calls", "data": {"tool_calls": self.tool_calls.copy()}}
)
yield {"sources": retrieved_data}
display_tool_calls = []
for tc in self.tool_calls:
cleaned_tc = tc.copy()
if len(str(cleaned_tc.get("result", ""))) > 50:
cleaned_tc["result"] = str(cleaned_tc["result"])[:50] + "..."
display_tool_calls.append(cleaned_tc)
if display_tool_calls:
yield {"tool_calls": display_tool_calls}
if "SATISFIED" in content_after_handler:
logger.info("ReActAgent: LLM satisfied with the plan and data. Stopping reasoning.")
break
# 3. Create Final Answer based on all observations
final_answer_stream = self._create_final_answer(query, self.observations, log_context)
for answer_chunk in final_answer_stream:
yield {"answer": answer_chunk}
logger.info("ReActAgent: Finished generating final answer.")
def _create_plan(
self, query: str, docs_data: str, log_context: LogContext = None
) -> Generator[str, None, None]:
plan_prompt_filled = planning_prompt_template.replace("{query}", query)
if "{summaries}" in plan_prompt_filled:
summaries = docs_data if docs_data else "No documents retrieved."
plan_prompt_filled = plan_prompt_filled.replace("{summaries}", summaries)
plan_prompt_filled = plan_prompt_filled.replace("{prompt}", self.prompt or "")
plan_prompt_filled = plan_prompt_filled.replace("{observations}", "\n".join(self.observations))
messages = [{"role": "user", "content": plan_prompt_filled}]
plan_stream_from_llm = self.llm.gen_stream(
model=self.gpt_model, messages=messages, tools=getattr(self, 'tools', None) # Use self.tools
)
if log_context:
data = build_stack_data(self.llm)
log_context.stacks.append({"component": "planning_llm", "data": data})
for chunk in plan_stream_from_llm:
content_piece = self._extract_content_from_llm_response(chunk)
if content_piece:
yield content_piece
def _create_final_answer(
self, query: str, observations: List[str], log_context: LogContext = None
) -> Generator[str, None, None]:
observation_string = "\n".join(observations)
max_obs_len = 10000
if len(observation_string) > max_obs_len:
observation_string = observation_string[:max_obs_len] + "\n...[observations truncated]"
logger.warning("ReActAgent: Truncated observations for final answer prompt due to length.")
final_answer_prompt_filled = final_prompt_template.format(
query=query, observations=observation_string
)
messages = [{"role": "user", "content": final_answer_prompt_filled}]
# Final answer should synthesize, not call tools.
final_answer_stream_from_llm = self.llm.gen_stream(
model=self.gpt_model, messages=messages, tools=None
)
if log_context:
data = build_stack_data(self.llm)
log_context.stacks.append({"component": "final_answer_llm", "data": data})
for chunk in final_answer_stream_from_llm:
content_piece = self._extract_content_from_llm_response(chunk)
if content_piece:
yield content_piece
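The control flow above compresses to: plan, act through tools, observe, and stop once the LLM signals "SATISFIED" or MAX_ITERATIONS_REASONING rounds elapse. A self-contained toy of that loop; the `llm` callable is a hypothetical stand-in, not the real streaming client:

```python
# Toy reduction of the ReAct loop above; `llm` is a hypothetical
# prompt -> text callable standing in for the streaming LLM client.
MAX_ITERATIONS_REASONING = 10

def react_loop(query: str, llm) -> str:
    observations: list[str] = []
    for iteration in range(1, MAX_ITERATIONS_REASONING + 1):
        plan = llm(f"Plan how to answer: {query}\n" + "\n".join(observations))
        observations.append(f"Plan: {plan} Iteration: {iteration}")
        response = llm(
            f"Follow this plan:\n{plan}\n"
            "Respond with 'SATISFIED' only if there is enough data."
        )
        observations.append(f"Response after tool execution: {response}")
        if "SATISFIED" in response:
            break
    # Synthesize the final answer from everything observed.
    return llm(f"Answer '{query}' using:\n" + "\n".join(observations))
```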


@@ -25,8 +25,8 @@ class APITool(Tool):
def _make_api_call(self, url, method, headers, query_params, body):
if query_params:
url = f"{url}?{requests.compat.urlencode(query_params)}"
if isinstance(body, dict):
body = json.dumps(body)
# if isinstance(body, dict):
# body = json.dumps(body)
try:
print(f"Making API call: {method} {url} with body: {body}")
if body == "{}":


@@ -0,0 +1,83 @@
import requests
from markdownify import markdownify
from application.agents.tools.base import Tool
from urllib.parse import urlparse
class ReadWebpageTool(Tool):
"""
Read Webpage (browser)
A tool to fetch the HTML content of a URL and convert it to Markdown.
"""
def __init__(self, config=None):
"""
Initializes the tool.
:param config: Optional configuration dictionary. Not used by this tool.
"""
self.config = config
def execute_action(self, action_name: str, **kwargs) -> str:
"""
Executes the specified action. For this tool, the only action is 'read_webpage'.
:param action_name: The name of the action to execute. Should be 'read_webpage'.
:param kwargs: Keyword arguments, must include 'url'.
:return: The Markdown content of the webpage or an error message.
"""
if action_name != "read_webpage":
return f"Error: Unknown action '{action_name}'. This tool only supports 'read_webpage'."
url = kwargs.get("url")
if not url:
return "Error: URL parameter is missing."
# Ensure the URL has a scheme (if not, default to http)
parsed_url = urlparse(url)
if not parsed_url.scheme:
url = "http://" + url
try:
response = requests.get(url, timeout=10, headers={'User-Agent': 'DocsGPT-Agent/1.0'})
response.raise_for_status() # Raise an exception for HTTP errors (4xx or 5xx)
html_content = response.text
#soup = BeautifulSoup(html_content, 'html.parser')
markdown_content = markdownify(html_content, heading_style="ATX", newline_style="BACKSLASH")
return markdown_content
except requests.exceptions.RequestException as e:
return f"Error fetching URL {url}: {e}"
except Exception as e:
return f"Error processing URL {url}: {e}"
def get_actions_metadata(self):
"""
Returns metadata for the actions supported by this tool.
"""
return [
{
"name": "read_webpage",
"description": "Fetches the HTML content of a given URL and returns it as clean Markdown text. Input must be a valid URL.",
"parameters": {
"type": "object",
"properties": {
"url": {
"type": "string",
"description": "The fully qualified URL of the webpage to read (e.g., 'https://www.example.com').",
}
},
"required": ["url"],
"additionalProperties": False,
},
}
]
def get_config_requirements(self):
"""
Returns a dictionary describing the configuration requirements for the tool.
This tool does not require any specific configuration.
"""
return {}
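A usage sketch for the new tool. The class, action name, and parameter come straight from the diff above; only the module path is assumed:

```python
from application.agents.tools.read_webpage import ReadWebpageTool  # path assumed

tool = ReadWebpageTool()
markdown = tool.execute_action("read_webpage", url="https://www.example.com")
print(markdown[:200])  # first lines of the page as Markdown
```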


@@ -17,26 +17,21 @@ class ToolActionParser:
return parser(call)
def _parse_openai_llm(self, call):
if isinstance(call, dict):
try:
call_args = json.loads(call["function"]["arguments"])
tool_id = call["function"]["name"].split("_")[-1]
action_name = call["function"]["name"].rsplit("_", 1)[0]
except (KeyError, TypeError) as e:
logger.error(f"Error parsing OpenAI LLM call: {e}")
return None, None, None
else:
try:
call_args = json.loads(call.function.arguments)
tool_id = call.function.name.split("_")[-1]
action_name = call.function.name.rsplit("_", 1)[0]
except (AttributeError, TypeError) as e:
logger.error(f"Error parsing OpenAI LLM call: {e}")
return None, None, None
try:
call_args = json.loads(call.arguments)
tool_id = call.name.split("_")[-1]
action_name = call.name.rsplit("_", 1)[0]
except (AttributeError, TypeError) as e:
logger.error(f"Error parsing OpenAI LLM call: {e}")
return None, None, None
return tool_id, action_name, call_args
def _parse_google_llm(self, call):
call_args = call.args
tool_id = call.name.split("_")[-1]
action_name = call.name.rsplit("_", 1)[0]
try:
call_args = call.arguments
tool_id = call.name.split("_")[-1]
action_name = call.name.rsplit("_", 1)[0]
except (AttributeError, TypeError) as e:
logger.error(f"Error parsing Google LLM call: {e}")
return None, None, None
return tool_id, action_name, call_args
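The parser relies on a packing convention visible here: an action and the owning tool's id are fused into one function name as `<action_name>_<tool_id>`. A minimal sketch of the round trip, using hypothetical values:

```python
import json
from types import SimpleNamespace

# Hypothetical call object shaped the way the new OpenAI branch expects.
call = SimpleNamespace(
    name="read_webpage_66f0a1b2c3d4e5f6a7b8c9d0",  # <action>_<tool ObjectId>
    arguments=json.dumps({"url": "https://www.example.com"}),
)
call_args = json.loads(call.arguments)
tool_id = call.name.split("_")[-1]         # "66f0a1b2c3d4e5f6a7b8c9d0"
action_name = call.name.rsplit("_", 1)[0]  # "read_webpage"
```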


@@ -23,12 +23,13 @@ from application.utils import check_required_fields, limit_chat_history
logger = logging.getLogger(__name__)
mongo = MongoDB.get_client()
db = mongo["docsgpt"]
db = mongo[settings.MONGO_DB_NAME]
conversations_collection = db["conversations"]
sources_collection = db["sources"]
prompts_collection = db["prompts"]
api_key_collection = db["api_keys"]
agents_collection = db["agents"]
user_logs_collection = db["user_logs"]
attachments_collection = db["attachments"]
answer = Blueprint("answer", __name__)
answer_ns = Namespace("answer", description="Answer related operations", path="/")
@@ -36,17 +37,17 @@ api.add_namespace(answer_ns)
gpt_model = ""
# to have some kind of default behaviour
if settings.LLM_NAME == "openai":
if settings.LLM_PROVIDER == "openai":
gpt_model = "gpt-4o-mini"
elif settings.LLM_NAME == "anthropic":
elif settings.LLM_PROVIDER == "anthropic":
gpt_model = "claude-2"
elif settings.LLM_NAME == "groq":
elif settings.LLM_PROVIDER == "groq":
gpt_model = "llama3-8b-8192"
elif settings.LLM_NAME == "novita":
elif settings.LLM_PROVIDER == "novita":
gpt_model = "deepseek/deepseek-r1"
if settings.MODEL_NAME: # in case there is particular model name configured
gpt_model = settings.MODEL_NAME
if settings.LLM_NAME: # in case there is particular model name configured
gpt_model = settings.LLM_NAME
# load the prompts
current_dir = os.path.dirname(
@@ -85,19 +86,51 @@ def run_async_chain(chain, question, chat_history):
return result
def get_data_from_api_key(api_key):
data = api_key_collection.find_one({"key": api_key})
# # Raise custom exception if the API key is not found
if data is None:
raise Exception("Invalid API Key, please generate new key", 401)
def get_agent_key(agent_id, user_id):
if not agent_id:
return None, False, None
if "source" in data and isinstance(data["source"], DBRef):
source_doc = db.dereference(data["source"])
try:
agent = agents_collection.find_one({"_id": ObjectId(agent_id)})
if agent is None:
raise Exception("Agent not found", 404)
is_owner = agent.get("user") == user_id
if is_owner:
agents_collection.update_one(
{"_id": ObjectId(agent_id)},
{"$set": {"lastUsedAt": datetime.datetime.now(datetime.timezone.utc)}},
)
return str(agent["key"]), False, None
is_shared_with_user = agent.get(
"shared_publicly", False
) or user_id in agent.get("shared_with", [])
if is_shared_with_user:
return str(agent["key"]), True, agent.get("shared_token")
raise Exception("Unauthorized access to the agent", 403)
except Exception as e:
logger.error(f"Error in get_agent_key: {str(e)}", exc_info=True)
raise
def get_data_from_api_key(api_key):
data = agents_collection.find_one({"key": api_key})
if not data:
raise Exception("Invalid API Key, please generate a new key", 401)
source = data.get("source")
if isinstance(source, DBRef):
source_doc = db.dereference(source)
data["source"] = str(source_doc["_id"])
if "retriever" in source_doc:
data["retriever"] = source_doc["retriever"]
data["retriever"] = source_doc.get("retriever", data.get("retriever"))
else:
data["source"] = {}
return data
@@ -121,12 +154,17 @@ def save_conversation(
conversation_id,
question,
response,
thought,
source_log_docs,
tool_calls,
llm,
decoded_token,
index=None,
api_key=None,
agent_id=None,
is_shared_usage=False,
shared_token=None,
attachment_ids=None,
):
current_time = datetime.datetime.now(datetime.timezone.utc)
if conversation_id is not None and index is not None:
@@ -136,9 +174,11 @@ def save_conversation(
"$set": {
f"queries.{index}.prompt": question,
f"queries.{index}.response": response,
f"queries.{index}.thought": thought,
f"queries.{index}.sources": source_log_docs,
f"queries.{index}.tool_calls": tool_calls,
f"queries.{index}.timestamp": current_time,
f"queries.{index}.attachments": attachment_ids,
}
},
)
@@ -155,9 +195,11 @@ def save_conversation(
"queries": {
"prompt": question,
"response": response,
"thought": thought,
"sources": source_log_docs,
"tool_calls": tool_calls,
"timestamp": current_time,
"attachments": attachment_ids,
}
}
},
@@ -190,14 +232,21 @@ def save_conversation(
{
"prompt": question,
"response": response,
"thought": thought,
"sources": source_log_docs,
"tool_calls": tool_calls,
"timestamp": current_time,
"attachments": attachment_ids,
}
],
}
if api_key:
api_key_doc = api_key_collection.find_one({"key": api_key})
if agent_id:
conversation_data["agent_id"] = agent_id
if is_shared_usage:
conversation_data["is_shared_usage"] = is_shared_usage
conversation_data["shared_token"] = shared_token
api_key_doc = agents_collection.find_one({"key": api_key})
if api_key_doc:
conversation_data["api_key"] = api_key_doc["key"]
conversation_id = conversations_collection.insert_one(
@@ -228,11 +277,13 @@ def complete_stream(
isNoneDoc=False,
index=None,
should_save_conversation=True,
attachment_ids=None,
agent_id=None,
is_shared_usage=False,
shared_token=None,
):
try:
response_full = ""
source_log_docs = []
tool_calls = []
response_full, thought, source_log_docs, tool_calls = "", "", [], []
answer = agent.gen(query=question, retriever=retriever)
@@ -256,7 +307,12 @@ def complete_stream(
yield f"data: {data}\n\n"
elif "tool_calls" in line:
tool_calls = line["tool_calls"]
data = json.dumps({"type": "tool_calls", "tool_calls": tool_calls})
elif "thought" in line:
thought += line["thought"]
data = json.dumps({"type": "thought", "thought": line["thought"]})
yield f"data: {data}\n\n"
elif "type" in line:
data = json.dumps(line)
yield f"data: {data}\n\n"
if isNoneDoc:
@@ -264,7 +320,7 @@ def complete_stream(
doc["source"] = "None"
llm = LLMCreator.create_llm(
settings.LLM_NAME,
settings.LLM_PROVIDER,
api_key=settings.API_KEY,
user_api_key=user_api_key,
decoded_token=decoded_token,
@@ -275,12 +331,17 @@ def complete_stream(
conversation_id,
question,
response_full,
thought,
source_log_docs,
tool_calls,
llm,
decoded_token,
index,
api_key=user_api_key,
attachment_ids=attachment_ids,
agent_id=agent_id,
is_shared_usage=is_shared_usage,
shared_token=shared_token,
)
else:
conversation_id = None
@@ -300,14 +361,14 @@ def complete_stream(
"response": response_full,
"sources": source_log_docs,
"retriever_params": retriever_params,
"attachments": attachment_ids,
"timestamp": datetime.datetime.now(datetime.timezone.utc),
}
)
data = json.dumps({"type": "end"})
yield f"data: {data}\n\n"
except Exception as e:
logger.error(f"Error in stream: {str(e)}")
logger.error(traceback.format_exc())
logger.error(f"Error in stream: {str(e)}", exc_info=True)
data = json.dumps(
{
"type": "error",
@@ -348,10 +409,15 @@ class Stream(Resource):
required=False, description="Flag indicating if no document is used"
),
"index": fields.Integer(
required=False, description="The position where query is to be updated"
required=False, description="Index of the query to update"
),
"save_conversation": fields.Boolean(
required=False, default=True, description="Flag to save conversation"
required=False,
default=True,
description="Whether to save the conversation",
),
"attachments": fields.List(
fields.String, required=False, description="List of attachment IDs"
),
},
)
@@ -372,15 +438,26 @@ class Stream(Resource):
try:
question = data["question"]
history = limit_chat_history(
json.loads(data.get("history", [])), gpt_model=gpt_model
json.loads(data.get("history", "[]")), gpt_model=gpt_model
)
conversation_id = data.get("conversation_id")
prompt_id = data.get("prompt_id", "default")
attachment_ids = data.get("attachments", [])
index = data.get("index", None)
chunks = int(data.get("chunks", 2))
token_limit = data.get("token_limit", settings.DEFAULT_MAX_HISTORY)
retriever_name = data.get("retriever", "classic")
agent_id = data.get("agent_id", None)
agent_type = settings.AGENT_NAME
decoded_token = getattr(request, "decoded_token", None)
user_sub = decoded_token.get("sub") if decoded_token else None
agent_key, is_shared_usage, shared_token = get_agent_key(agent_id, user_sub)
if agent_key:
data.update({"api_key": agent_key})
else:
agent_id = None
if "api_key" in data:
data_key = get_data_from_api_key(data["api_key"])
@@ -389,7 +466,12 @@ class Stream(Resource):
source = {"active_docs": data_key.get("source")}
retriever_name = data_key.get("retriever", retriever_name)
user_api_key = data["api_key"]
decoded_token = {"sub": data_key.get("user")}
agent_type = data_key.get("agent_type", agent_type)
if is_shared_usage:
decoded_token = request.decoded_token
else:
decoded_token = {"sub": data_key.get("user")}
is_shared_usage = False
elif "active_docs" in data:
source = {"active_docs": data["active_docs"]}
@@ -405,8 +487,12 @@ class Stream(Resource):
if not decoded_token:
return make_response({"error": "Unauthorized"}, 401)
attachments = get_attachments_content(
attachment_ids, decoded_token.get("sub")
)
logger.info(
f"/stream - request_data: {data}, source: {source}",
f"/stream - request_data: {data}, source: {source}, attachments: {len(attachments)}",
extra={"data": json.dumps({"request_data": data, "source": source})},
)
@@ -415,15 +501,16 @@ class Stream(Resource):
chunks = 0
agent = AgentCreator.create_agent(
settings.AGENT_NAME,
agent_type,
endpoint="stream",
llm_name=settings.LLM_NAME,
llm_name=settings.LLM_PROVIDER,
gpt_model=gpt_model,
api_key=settings.API_KEY,
user_api_key=user_api_key,
prompt=prompt,
chat_history=history,
decoded_token=decoded_token,
attachments=attachments,
)
retriever = RetrieverCreator.create_retriever(
@@ -449,6 +536,10 @@ class Stream(Resource):
isNoneDoc=data.get("isNoneDoc"),
index=index,
should_save_conversation=save_conv,
attachment_ids=attachment_ids,
agent_id=agent_id,
is_shared_usage=is_shared_usage,
shared_token=shared_token,
),
mimetype="text/event-stream",
)
@@ -530,6 +621,7 @@ class Answer(Resource):
chunks = int(data.get("chunks", 2))
token_limit = data.get("token_limit", settings.DEFAULT_MAX_HISTORY)
retriever_name = data.get("retriever", "classic")
agent_type = settings.AGENT_NAME
if "api_key" in data:
data_key = get_data_from_api_key(data["api_key"])
@@ -538,6 +630,7 @@ class Answer(Resource):
source = {"active_docs": data_key.get("source")}
retriever_name = data_key.get("retriever", retriever_name)
user_api_key = data["api_key"]
agent_type = data_key.get("agent_type", agent_type)
decoded_token = {"sub": data_key.get("user")}
elif "active_docs" in data:
@@ -562,9 +655,9 @@ class Answer(Resource):
)
agent = AgentCreator.create_agent(
settings.AGENT_NAME,
agent_type,
endpoint="api/answer",
llm_name=settings.LLM_NAME,
llm_name=settings.LLM_PROVIDER,
gpt_model=gpt_model,
api_key=settings.API_KEY,
user_api_key=user_api_key,
@@ -589,6 +682,7 @@ class Answer(Resource):
source_log_docs = []
tool_calls = []
stream_ended = False
thought = ""
for line in complete_stream(
question=question,
@@ -611,6 +705,8 @@ class Answer(Resource):
source_log_docs = event["source"]
elif event["type"] == "tool_calls":
tool_calls = event["tool_calls"]
elif event["type"] == "thought":
thought = event["thought"]
elif event["type"] == "error":
logger.error(f"Error from stream: {event['error']}")
return bad_request(500, event["error"])
@@ -630,7 +726,7 @@ class Answer(Resource):
doc["source"] = "None"
llm = LLMCreator.create_llm(
settings.LLM_NAME,
settings.LLM_PROVIDER,
api_key=settings.API_KEY,
user_api_key=user_api_key,
decoded_token=decoded_token,
@@ -642,6 +738,7 @@ class Answer(Resource):
conversation_id,
question,
response_full,
thought,
source_log_docs,
tool_calls,
llm,
@@ -784,3 +881,34 @@ class Search(Resource):
return bad_request(500, str(e))
return make_response(docs, 200)
def get_attachments_content(attachment_ids, user):
"""
Retrieve content from attachment documents based on their IDs.
Args:
attachment_ids (list): List of attachment document IDs
user (str): User identifier to verify ownership
Returns:
list: List of dictionaries containing attachment content and metadata
"""
if not attachment_ids:
return []
attachments = []
for attachment_id in attachment_ids:
try:
attachment_doc = attachments_collection.find_one(
{"_id": ObjectId(attachment_id), "user": user}
)
if attachment_doc:
attachments.append(attachment_doc)
except Exception as e:
logger.error(
f"Error retrieving attachment {attachment_id}: {e}", exc_info=True
)
return attachments
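End to end, /stream now emits typed server-sent events (answer, thought, sources, tool_calls, end, error). A hedged client-side sketch: the port comes from the Dockerfile hunk above, authentication headers are omitted, and the exact payload keys per event type are assumed from the handlers shown here:

```python
import json
import requests

resp = requests.post(
    "http://localhost:7091/stream",   # port from the Dockerfile; auth omitted
    json={"question": "What is DocsGPT?", "history": "[]"},
    stream=True,
)
for raw in resp.iter_lines(decode_unicode=True):
    if not raw or not raw.startswith("data: "):
        continue
    event = json.loads(raw[len("data: "):])
    kind = event.get("type")
    if kind == "thought":
        print("[thought]", event["thought"])
    elif kind == "answer":
        print(event.get("answer", ""), end="")
    elif kind == "end":
        break
```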


@@ -3,12 +3,15 @@ import datetime
from flask import Blueprint, request, send_from_directory
from werkzeug.utils import secure_filename
from bson.objectid import ObjectId
import logging
from application.core.mongo_db import MongoDB
from application.core.settings import settings
from application.storage.storage_creator import StorageCreator
logger = logging.getLogger(__name__)
mongo = MongoDB.get_client()
db = mongo["docsgpt"]
db = mongo[settings.MONGO_DB_NAME]
conversations_collection = db["conversations"]
sources_collection = db["sources"]
@@ -34,37 +37,39 @@ def upload_index_files():
"""Upload two files(index.faiss, index.pkl) to the user's folder."""
if "user" not in request.form:
return {"status": "no user"}
user = secure_filename(request.form["user"])
user = request.form["user"]
if "name" not in request.form:
return {"status": "no name"}
job_name = secure_filename(request.form["name"])
tokens = secure_filename(request.form["tokens"])
retriever = secure_filename(request.form["retriever"])
id = secure_filename(request.form["id"])
type = secure_filename(request.form["type"])
job_name = request.form["name"]
tokens = request.form["tokens"]
retriever = request.form["retriever"]
id = request.form["id"]
type = request.form["type"]
remote_data = request.form["remote_data"] if "remote_data" in request.form else None
sync_frequency = secure_filename(request.form["sync_frequency"]) if "sync_frequency" in request.form else None
sync_frequency = request.form["sync_frequency"] if "sync_frequency" in request.form else None
original_file_path = request.form.get("original_file_path")
save_dir = os.path.join(current_dir, "indexes", str(id))
storage = StorageCreator.get_storage()
index_base_path = f"indexes/{id}"
if settings.VECTOR_STORE == "faiss":
if "file_faiss" not in request.files:
print("No file part")
logger.error("No file_faiss part")
return {"status": "no file"}
file_faiss = request.files["file_faiss"]
if file_faiss.filename == "":
return {"status": "no file name"}
if "file_pkl" not in request.files:
print("No file part")
logger.error("No file_pkl part")
return {"status": "no file"}
file_pkl = request.files["file_pkl"]
if file_pkl.filename == "":
return {"status": "no file name"}
# saves index files
if not os.path.exists(save_dir):
os.makedirs(save_dir)
file_faiss.save(os.path.join(save_dir, "index.faiss"))
file_pkl.save(os.path.join(save_dir, "index.pkl"))
# Save index files to storage
storage.save_file(file_faiss, f"{index_base_path}/index.faiss")
storage.save_file(file_pkl, f"{index_base_path}/index.pkl")
existing_entry = sources_collection.find_one({"_id": ObjectId(id)})
if existing_entry:
@@ -82,6 +87,7 @@ def upload_index_files():
"retriever": retriever,
"remote_data": remote_data,
"sync_frequency": sync_frequency,
"file_path": original_file_path,
}
},
)
@@ -99,6 +105,7 @@ def upload_index_files():
"retriever": retriever,
"remote_data": remote_data,
"sync_frequency": sync_frequency,
"file_path": original_file_path,
}
)
return {"status": "ok"}

File diff suppressed because it is too large

View File

@@ -1,12 +1,18 @@
from datetime import timedelta
from application.celery_init import celery
from application.worker import ingest_worker, remote_worker, sync_worker
from application.worker import (
agent_webhook_worker,
attachment_worker,
ingest_worker,
remote_worker,
sync_worker,
)
@celery.task(bind=True)
def ingest(self, directory, formats, name_job, filename, user):
resp = ingest_worker(self, directory, formats, name_job, filename, user)
def ingest(self, directory, formats, job_name, filename, user, dir_name, user_dir):
resp = ingest_worker(self, directory, formats, job_name, filename, user, dir_name, user_dir)
return resp
@@ -22,6 +28,18 @@ def schedule_syncs(self, frequency):
return resp
@celery.task(bind=True)
def store_attachment(self, file_info, user):
resp = attachment_worker(self, file_info, user)
return resp
@celery.task(bind=True)
def process_agent_webhook(self, agent_id, payload):
resp = agent_webhook_worker(self, agent_id, payload)
return resp
@celery.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
sender.add_periodic_task(
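A usage sketch for the two new tasks. The task names come from the diff; the import path and the file_info keys are assumptions.

```python
# Dispatching the new Celery tasks from application code (module path assumed).
from application.api.user.tasks import process_agent_webhook, store_attachment

# Runs attachment_worker in the background for a previously saved upload.
store_attachment.delay(
    {"path": "attachments/local/abc123/report.pdf", "filename": "report.pdf"},  # assumed keys
    "local",
)

# Forwards an incoming webhook payload to an agent via agent_webhook_worker.
process_agent_webhook.delay("665f1c2e9b1e8a0012ab34cd", {"event": "issue_opened"})
```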

View File

@@ -61,14 +61,14 @@ def gen_cache(func):
if cached_response:
return cached_response.decode("utf-8")
except Exception as e:
logger.error(f"Error getting cached response: {e}")
logger.error(f"Error getting cached response: {e}", exc_info=True)
result = func(self, model, messages, stream, tools, *args, **kwargs)
if redis_client and isinstance(result, str):
try:
redis_client.set(cache_key, result, ex=1800)
except Exception as e:
logger.error(f"Error setting cache: {e}")
logger.error(f"Error setting cache: {e}", exc_info=True)
return result
@@ -100,7 +100,7 @@ def stream_cache(func):
time.sleep(0.03) # Simulate streaming delay
return
except Exception as e:
logger.error(f"Error getting cached stream: {e}")
logger.error(f"Error getting cached stream: {e}", exc_info=True)
stream_cache_data = []
for chunk in func(self, model, messages, stream, tools, *args, **kwargs):
@@ -112,6 +112,6 @@ def stream_cache(func):
redis_client.set(cache_key, json.dumps(stream_cache_data), ex=1800)
logger.info(f"Stream cache saved for key: {cache_key}")
except Exception as e:
logger.error(f"Error setting stream cache: {e}")
logger.error(f"Error setting stream cache: {e}", exc_info=True)
return wrapper

View File

@@ -11,17 +11,18 @@ current_dir = os.path.dirname(
class Settings(BaseSettings):
AUTH_TYPE: Optional[str] = None
LLM_NAME: str = "docsgpt"
MODEL_NAME: Optional[str] = (
None # if LLM_NAME is openai, MODEL_NAME can be gpt-4 or gpt-3.5-turbo
LLM_PROVIDER: str = "docsgpt"
LLM_NAME: Optional[str] = (
None # if LLM_PROVIDER is openai, LLM_NAME can be gpt-4 or gpt-3.5-turbo
)
EMBEDDINGS_NAME: str = "huggingface_sentence-transformers/all-mpnet-base-v2"
CELERY_BROKER_URL: str = "redis://localhost:6379/0"
CELERY_RESULT_BACKEND: str = "redis://localhost:6379/1"
MONGO_URI: str = "mongodb://localhost:27017/docsgpt"
MODEL_PATH: str = os.path.join(current_dir, "models/docsgpt-7b-f16.gguf")
MONGO_DB_NAME: str = "docsgpt"
LLM_PATH: str = os.path.join(current_dir, "models/docsgpt-7b-f16.gguf")
DEFAULT_MAX_HISTORY: int = 150
MODEL_TOKEN_LIMITS: dict = {
LLM_TOKEN_LIMITS: dict = {
"gpt-4o-mini": 128000,
"gpt-3.5-turbo": 4096,
"claude-2": 1e5,
@@ -34,6 +35,9 @@ class Settings(BaseSettings):
)
RETRIEVERS_ENABLED: list = ["classic_rag", "duckduck_search"] # also brave_search
AGENT_NAME: str = "classic"
FALLBACK_LLM_PROVIDER: Optional[str] = None # provider for fallback llm
FALLBACK_LLM_NAME: Optional[str] = None # model name for fallback llm
FALLBACK_LLM_API_KEY: Optional[str] = None # api key for fallback llm
# LLM Cache
CACHE_REDIS_URL: str = "redis://localhost:6379/2"
@@ -98,6 +102,8 @@ class Settings(BaseSettings):
BRAVE_SEARCH_API_KEY: Optional[str] = None
FLASK_DEBUG_MODE: bool = False
STORAGE_TYPE: str = "local" # local or s3
URL_STRATEGY: str = "backend" # backend or s3
JWT_SECRET_KEY: str = ""
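For reference, a minimal sketch of enabling the fallback LLM through the environment. Pydantic BaseSettings resolves these by field name, so the variable names mirror the fields above; the provider, model, and key values are placeholders.

```python
# Sketch: set these before the application imports settings, since the
# Settings instance is created at import time. Values are placeholders.
import os

os.environ["LLM_PROVIDER"] = "openai"
os.environ["LLM_NAME"] = "gpt-4o-mini"
os.environ["FALLBACK_LLM_PROVIDER"] = "google"
os.environ["FALLBACK_LLM_NAME"] = "gemini-1.5-flash"
os.environ["FALLBACK_LLM_API_KEY"] = "<fallback-api-key>"

from application.core.settings import settings

assert settings.FALLBACK_LLM_PROVIDER == "google"
```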

View File

@@ -1,53 +1,117 @@
import logging
from abc import ABC, abstractmethod
from application.cache import gen_cache, stream_cache
from application.core.settings import settings
from application.usage import gen_token_usage, stream_token_usage
logger = logging.getLogger(__name__)
class BaseLLM(ABC):
def __init__(self, decoded_token=None):
def __init__(
self,
decoded_token=None,
):
self.decoded_token = decoded_token
self.token_usage = {"prompt_tokens": 0, "generated_tokens": 0}
self.fallback_provider = settings.FALLBACK_LLM_PROVIDER
self.fallback_model_name = settings.FALLBACK_LLM_NAME
self.fallback_llm_api_key = settings.FALLBACK_LLM_API_KEY
self._fallback_llm = None
def _apply_decorator(self, method, decorators, *args, **kwargs):
for decorator in decorators:
method = decorator(method)
return method(self, *args, **kwargs)
@property
def fallback_llm(self):
"""Lazy-loaded fallback LLM instance."""
if (
self._fallback_llm is None
and self.fallback_provider
and self.fallback_model_name
):
try:
from application.llm.llm_creator import LLMCreator
self._fallback_llm = LLMCreator.create_llm(
self.fallback_provider,
self.fallback_llm_api_key,
None,
self.decoded_token,
)
except Exception as e:
logger.error(
f"Failed to initialize fallback LLM: {str(e)}", exc_info=True
)
return self._fallback_llm
def _execute_with_fallback(
self, method_name: str, decorators: list, *args, **kwargs
):
"""
Unified method execution with fallback support.
Args:
method_name: Name of the raw method ('_raw_gen' or '_raw_gen_stream')
decorators: List of decorators to apply
*args: Positional arguments
**kwargs: Keyword arguments
"""
def decorated_method():
method = getattr(self, method_name)
for decorator in decorators:
method = decorator(method)
return method(self, *args, **kwargs)
try:
return decorated_method()
except Exception as e:
if not self.fallback_llm:
logger.error(f"Primary LLM failed and no fallback available: {str(e)}")
raise
logger.warning(
f"Falling back to {self.fallback_provider}/{self.fallback_model_name}. Error: {str(e)}"
)
fallback_method = getattr(
self.fallback_llm, method_name.replace("_raw_", "")
)
return fallback_method(*args, **kwargs)
def gen(self, model, messages, stream=False, tools=None, *args, **kwargs):
decorators = [gen_token_usage, gen_cache]
return self._execute_with_fallback(
"_raw_gen",
decorators,
model=model,
messages=messages,
stream=stream,
tools=tools,
*args,
**kwargs,
)
def gen_stream(self, model, messages, stream=True, tools=None, *args, **kwargs):
decorators = [stream_cache, stream_token_usage]
return self._execute_with_fallback(
"_raw_gen_stream",
decorators,
model=model,
messages=messages,
stream=stream,
tools=tools,
*args,
**kwargs,
)
@abstractmethod
def _raw_gen(self, model, messages, stream, tools, *args, **kwargs):
pass
def gen(self, model, messages, stream=False, tools=None, *args, **kwargs):
decorators = [gen_token_usage, gen_cache]
return self._apply_decorator(
self._raw_gen,
decorators=decorators,
model=model,
messages=messages,
stream=stream,
tools=tools,
*args,
**kwargs
)
@abstractmethod
def _raw_gen_stream(self, model, messages, stream, *args, **kwargs):
pass
def gen_stream(self, model, messages, stream=True, tools=None, *args, **kwargs):
decorators = [stream_cache, stream_token_usage]
return self._apply_decorator(
self._raw_gen_stream,
decorators=decorators,
model=model,
messages=messages,
stream=stream,
tools=tools,
*args,
**kwargs
)
def supports_tools(self):
return hasattr(self, "_supports_tools") and callable(
getattr(self, "_supports_tools")
@@ -55,3 +119,12 @@ class BaseLLM(ABC):
def _supports_tools(self):
raise NotImplementedError("Subclass must implement _supports_tools method")
def get_supported_attachment_types(self):
"""
Return a list of MIME types supported by this LLM for file uploads.
Returns:
list: List of supported MIME types
"""
return [] # Default: no attachments supported
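To make the fallback path concrete, a self-contained sketch (not repo code): a subclass whose raw methods always fail. Note the extra baseself parameter, which matches how the cache and usage decorators invoke the raw methods with an explicit self.

```python
from application.llm.base import BaseLLM

class FlakyLLM(BaseLLM):
    """Toy LLM whose primary path always fails, to exercise the fallback."""

    def _raw_gen(self, baseself, model, messages, stream=False, tools=None, **kwargs):
        raise RuntimeError("primary provider is down")

    def _raw_gen_stream(self, baseself, model, messages, stream=True, tools=None, **kwargs):
        raise RuntimeError("primary provider is down")

llm = FlakyLLM()
try:
    # With FALLBACK_LLM_PROVIDER/FALLBACK_LLM_NAME set, this logs a warning and
    # delegates to the fallback's gen(); otherwise the primary error re-raises.
    print(llm.gen(model="gpt-4o-mini", messages=[{"role": "user", "content": "hi"}]))
except RuntimeError:
    print("no fallback configured; primary error propagated")
```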

View File

@@ -1,7 +1,11 @@
from google import genai
from google.genai import types
import logging
import json
from application.llm.base import BaseLLM
from application.storage.storage_creator import StorageCreator
from application.core.settings import settings
class GoogleLLM(BaseLLM):
@@ -9,6 +13,126 @@ class GoogleLLM(BaseLLM):
super().__init__(*args, **kwargs)
self.api_key = api_key
self.user_api_key = user_api_key
self.client = genai.Client(api_key=self.api_key)
self.storage = StorageCreator.get_storage()
def get_supported_attachment_types(self):
"""
Return a list of MIME types supported by Google Gemini for file uploads.
Returns:
list: List of supported MIME types
"""
return [
'application/pdf',
'image/png',
'image/jpeg',
'image/jpg',
'image/webp',
'image/gif'
]
def prepare_messages_with_attachments(self, messages, attachments=None):
"""
Process attachments using Google AI's file API for more efficient handling.
Args:
messages (list): List of message dictionaries.
attachments (list): List of attachment dictionaries with content and metadata.
Returns:
list: Messages formatted with file references for Google AI API.
"""
if not attachments:
return messages
prepared_messages = messages.copy()
# Find the last user message to attach the files to
user_message_index = None
for i in range(len(prepared_messages) - 1, -1, -1):
if prepared_messages[i].get("role") == "user":
user_message_index = i
break
if user_message_index is None:
user_message = {"role": "user", "content": []}
prepared_messages.append(user_message)
user_message_index = len(prepared_messages) - 1
if isinstance(prepared_messages[user_message_index].get("content"), str):
text_content = prepared_messages[user_message_index]["content"]
prepared_messages[user_message_index]["content"] = [
{"type": "text", "text": text_content}
]
elif not isinstance(prepared_messages[user_message_index].get("content"), list):
prepared_messages[user_message_index]["content"] = []
files = []
for attachment in attachments:
mime_type = attachment.get('mime_type')
if mime_type in self.get_supported_attachment_types():
try:
file_uri = self._upload_file_to_google(attachment)
logging.info(f"GoogleLLM: Successfully uploaded file, got URI: {file_uri}")
files.append({"file_uri": file_uri, "mime_type": mime_type})
except Exception as e:
logging.error(f"GoogleLLM: Error uploading file: {e}", exc_info=True)
if 'content' in attachment:
prepared_messages[user_message_index]["content"].append({
"type": "text",
"text": f"[File could not be processed: {attachment.get('path', 'unknown')}]"
})
if files:
logging.info(f"GoogleLLM: Adding {len(files)} files to message")
prepared_messages[user_message_index]["content"].append({
"files": files
})
return prepared_messages
def _upload_file_to_google(self, attachment):
"""
Upload a file to Google AI and return the file URI.
Args:
attachment (dict): Attachment dictionary with path and metadata.
Returns:
str: Google AI file URI for the uploaded file.
"""
if 'google_file_uri' in attachment:
return attachment['google_file_uri']
file_path = attachment.get('path')
if not file_path:
raise ValueError("No file path provided in attachment")
if not self.storage.file_exists(file_path):
raise FileNotFoundError(f"File not found: {file_path}")
try:
file_uri = self.storage.process_file(
file_path,
lambda local_path, **kwargs: self.client.files.upload(file=local_path).uri
)
from application.core.mongo_db import MongoDB
mongo = MongoDB.get_client()
db = mongo[settings.MONGO_DB_NAME]
attachments_collection = db["attachments"]
if '_id' in attachment:
attachments_collection.update_one(
{"_id": attachment['_id']},
{"$set": {"google_file_uri": file_uri}}
)
return file_uri
except Exception as e:
logging.error(f"Error uploading file to Google AI: {e}", exc_info=True)
raise
def _clean_messages_google(self, messages):
cleaned_messages = []
@@ -26,7 +150,7 @@ class GoogleLLM(BaseLLM):
elif isinstance(content, list):
for item in content:
if "text" in item:
parts.append(types.Part.from_text(item["text"]))
parts.append(types.Part.from_text(text=item["text"]))
elif "function_call" in item:
parts.append(
types.Part.from_function_call(
@@ -41,6 +165,14 @@ class GoogleLLM(BaseLLM):
response=item["function_response"]["response"],
)
)
elif "files" in item:
for file_data in item["files"]:
parts.append(
types.Part.from_uri(
file_uri=file_data["file_uri"],
mime_type=file_data["mime_type"]
)
)
else:
raise ValueError(
f"Unexpected content dictionary format:{item}"
@@ -146,11 +278,25 @@ class GoogleLLM(BaseLLM):
cleaned_tools = self._clean_tools_format(tools)
config.tools = cleaned_tools
# Check if we have both tools and file attachments
has_attachments = False
for message in messages:
for part in message.parts:
if hasattr(part, 'file_data') and part.file_data is not None:
has_attachments = True
break
if has_attachments:
break
logging.info(f"GoogleLLM: Starting stream generation. Model: {model}, Messages: {json.dumps(messages, default=str)}, Has attachments: {has_attachments}")
response = client.models.generate_content_stream(
model=model,
contents=messages,
config=config,
)
for chunk in response:
if hasattr(chunk, "candidates") and chunk.candidates:
for candidate in chunk.candidates:
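A hedged sketch of the attachment flow above: the dict keys (path, mime_type, content) mirror what _upload_file_to_google and the MongoDB caching expect, while the import path and constructor call are assumptions.

```python
from application.llm.google_ai import GoogleLLM  # import path is an assumption

llm = GoogleLLM(api_key="<google-api-key>", user_api_key=None)
attachment = {
    "path": "attachments/local/abc123/report.pdf",
    "mime_type": "application/pdf",
    "content": "fallback text shown if the upload fails",
}
messages = [{"role": "user", "content": "Summarize the attached report"}]
prepared = llm.prepare_messages_with_attachments(messages, [attachment])
# The last user message now carries a {"files": [{"file_uri": ..., "mime_type": ...}]}
# part, which _clean_messages_google turns into types.Part.from_uri(...).
```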

View File

View File

@@ -0,0 +1,335 @@
import logging
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Any, Dict, Generator, List, Optional, Union
from application.logging import build_stack_data
logger = logging.getLogger(__name__)
@dataclass
class ToolCall:
"""Represents a tool/function call from the LLM."""
id: str
name: str
arguments: Union[str, Dict]
index: Optional[int] = None
@classmethod
def from_dict(cls, data: Dict) -> "ToolCall":
"""Create ToolCall from dictionary."""
return cls(
id=data.get("id", ""),
name=data.get("name", ""),
arguments=data.get("arguments", {}),
index=data.get("index"),
)
@dataclass
class LLMResponse:
"""Represents a response from the LLM."""
content: str
tool_calls: List[ToolCall]
finish_reason: str
raw_response: Any
@property
def requires_tool_call(self) -> bool:
"""Check if the response requires tool calls."""
return bool(self.tool_calls) and self.finish_reason == "tool_calls"
class LLMHandler(ABC):
"""Abstract base class for LLM handlers."""
def __init__(self):
self.llm_calls = []
self.tool_calls = []
@abstractmethod
def parse_response(self, response: Any) -> LLMResponse:
"""Parse raw LLM response into standardized format."""
pass
@abstractmethod
def create_tool_message(self, tool_call: ToolCall, result: Any) -> Dict:
"""Create a tool result message for the conversation history."""
pass
@abstractmethod
def _iterate_stream(self, response: Any) -> Generator:
"""Iterate through streaming response chunks."""
pass
def process_message_flow(
self,
agent,
initial_response,
tools_dict: Dict,
messages: List[Dict],
attachments: Optional[List] = None,
stream: bool = False,
) -> Union[str, Generator]:
"""
Main orchestration method for processing LLM message flow.
Args:
agent: The agent instance
initial_response: Initial LLM response
tools_dict: Dictionary of available tools
messages: Conversation history
attachments: Optional attachments
stream: Whether to use streaming
Returns:
Final response or generator for streaming
"""
messages = self.prepare_messages(agent, messages, attachments)
if stream:
return self.handle_streaming(agent, initial_response, tools_dict, messages)
else:
return self.handle_non_streaming(
agent, initial_response, tools_dict, messages
)
def prepare_messages(
self, agent, messages: List[Dict], attachments: Optional[List] = None
) -> List[Dict]:
"""
Prepare messages with attachments and provider-specific formatting.
Args:
agent: The agent instance
messages: Original messages
attachments: List of attachments
Returns:
Prepared messages list
"""
if not attachments:
return messages
logger.info(f"Preparing messages with {len(attachments)} attachments")
supported_types = agent.llm.get_supported_attachment_types()
supported_attachments = [
a for a in attachments if a.get("mime_type") in supported_types
]
unsupported_attachments = [
a for a in attachments if a.get("mime_type") not in supported_types
]
# Process supported attachments with the LLM's custom method
if supported_attachments:
logger.info(
f"Processing {len(supported_attachments)} supported attachments"
)
messages = agent.llm.prepare_messages_with_attachments(
messages, supported_attachments
)
# Process unsupported attachments with default method
if unsupported_attachments:
logger.info(
f"Processing {len(unsupported_attachments)} unsupported attachments"
)
messages = self._append_unsupported_attachments(
messages, unsupported_attachments
)
return messages
def _append_unsupported_attachments(
self, messages: List[Dict], attachments: List[Dict]
) -> List[Dict]:
"""
Default method to append unsupported attachment content to system prompt.
Args:
messages: Current messages
attachments: List of unsupported attachments
Returns:
Updated messages list
"""
prepared_messages = messages.copy()
attachment_texts = []
for attachment in attachments:
logger.info(f"Adding attachment {attachment.get('id')} to context")
if "content" in attachment:
attachment_texts.append(
f"Attached file content:\n\n{attachment['content']}"
)
if attachment_texts:
combined_text = "\n\n".join(attachment_texts)
system_msg = next(
(msg for msg in prepared_messages if msg.get("role") == "system"),
{"role": "system", "content": ""},
)
if system_msg not in prepared_messages:
prepared_messages.insert(0, system_msg)
system_msg["content"] += f"\n\n{combined_text}"
return prepared_messages
def handle_tool_calls(
self, agent, tool_calls: List[ToolCall], tools_dict: Dict, messages: List[Dict]
) -> Generator:
"""
Execute tool calls and update conversation history.
Args:
agent: The agent instance
tool_calls: List of tool calls to execute
tools_dict: Available tools dictionary
messages: Current conversation history
Returns:
Updated messages list
"""
updated_messages = messages.copy()
for call in tool_calls:
try:
self.tool_calls.append(call)
tool_executor_gen = agent._execute_tool_action(tools_dict, call)
while True:
try:
yield next(tool_executor_gen)
except StopIteration as e:
tool_response, call_id = e.value
break
updated_messages.append(
{
"role": "assistant",
"content": [
{
"function_call": {
"name": call.name,
"args": call.arguments,
"call_id": call_id,
}
}
],
}
)
updated_messages.append(self.create_tool_message(call, tool_response))
except Exception as e:
logger.error(f"Error executing tool: {str(e)}", exc_info=True)
updated_messages.append(
{
"role": "tool",
"content": f"Error executing tool: {str(e)}",
"tool_call_id": call.id,
}
)
return updated_messages
def handle_non_streaming(
self, agent, response: Any, tools_dict: Dict, messages: List[Dict]
) -> Generator:
"""
Handle non-streaming response flow.
Args:
agent: The agent instance
response: Current LLM response
tools_dict: Available tools dictionary
messages: Conversation history
Returns:
Final response after processing all tool calls
"""
parsed = self.parse_response(response)
self.llm_calls.append(build_stack_data(agent.llm))
while parsed.requires_tool_call:
tool_handler_gen = self.handle_tool_calls(
agent, parsed.tool_calls, tools_dict, messages
)
while True:
try:
yield next(tool_handler_gen)
except StopIteration as e:
messages = e.value
break
response = agent.llm.gen(
model=agent.gpt_model, messages=messages, tools=agent.tools
)
parsed = self.parse_response(response)
self.llm_calls.append(build_stack_data(agent.llm))
return parsed.content
def handle_streaming(
self, agent, response: Any, tools_dict: Dict, messages: List[Dict]
) -> Generator:
"""
Handle streaming response flow.
Args:
agent: The agent instance
response: Current LLM response
tools_dict: Available tools dictionary
messages: Conversation history
Yields:
Streaming response chunks
"""
buffer = ""
tool_calls = {}
for chunk in self._iterate_stream(response):
if isinstance(chunk, str):
yield chunk
continue
parsed = self.parse_response(chunk)
if parsed.tool_calls:
for call in parsed.tool_calls:
if call.index not in tool_calls:
tool_calls[call.index] = call
else:
existing = tool_calls[call.index]
if call.id:
existing.id = call.id
if call.name:
existing.name = call.name
if call.arguments:
existing.arguments += call.arguments
if parsed.finish_reason == "tool_calls":
tool_handler_gen = self.handle_tool_calls(
agent, list(tool_calls.values()), tools_dict, messages
)
while True:
try:
yield next(tool_handler_gen)
except StopIteration as e:
messages = e.value
break
tool_calls = {}
response = agent.llm.gen_stream(
model=agent.gpt_model, messages=messages, tools=agent.tools
)
self.llm_calls.append(build_stack_data(agent.llm))
yield from self.handle_streaming(agent, response, tools_dict, messages)
return
if parsed.content:
buffer += parsed.content
yield buffer
buffer = ""
if parsed.finish_reason == "stop":
return
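A small sketch of the dataclasses this module standardizes on: building a ToolCall from a provider payload and checking whether the parsed response demands tool execution.

```python
from application.llm.handlers.base import LLMResponse, ToolCall

call = ToolCall.from_dict(
    {"id": "call_1", "name": "get_weather", "arguments": '{"city": "Berlin"}', "index": 0}
)
response = LLMResponse(
    content="", tool_calls=[call], finish_reason="tool_calls", raw_response=None
)
assert response.requires_tool_call  # tool calls present and finish_reason matches
```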

View File

@@ -0,0 +1,78 @@
import uuid
from typing import Any, Dict, Generator
from application.llm.handlers.base import LLMHandler, LLMResponse, ToolCall
class GoogleLLMHandler(LLMHandler):
"""Handler for Google's GenAI API."""
def parse_response(self, response: Any) -> LLMResponse:
"""Parse Google response into standardized format."""
if isinstance(response, str):
return LLMResponse(
content=response,
tool_calls=[],
finish_reason="stop",
raw_response=response,
)
if hasattr(response, "candidates"):
parts = response.candidates[0].content.parts if response.candidates else []
tool_calls = [
ToolCall(
id=str(uuid.uuid4()),
name=part.function_call.name,
arguments=part.function_call.args,
)
for part in parts
if hasattr(part, "function_call") and part.function_call is not None
]
content = " ".join(
part.text
for part in parts
if hasattr(part, "text") and part.text is not None
)
return LLMResponse(
content=content,
tool_calls=tool_calls,
finish_reason="tool_calls" if tool_calls else "stop",
raw_response=response,
)
else:
tool_calls = []
if hasattr(response, "function_call"):
tool_calls.append(
ToolCall(
id=str(uuid.uuid4()),
name=response.function_call.name,
arguments=response.function_call.args,
)
)
return LLMResponse(
content=response.text if hasattr(response, "text") else "",
tool_calls=tool_calls,
finish_reason="tool_calls" if tool_calls else "stop",
raw_response=response,
)
def create_tool_message(self, tool_call: ToolCall, result: Any) -> Dict:
"""Create Google-style tool message."""
from google.genai import types
return {
"role": "tool",
"content": [
types.Part.from_function_response(
name=tool_call.name, response={"result": result}
).to_json_dict()
],
}
def _iterate_stream(self, response: Any) -> Generator:
"""Iterate through Google streaming response."""
for chunk in response:
yield chunk

View File

@@ -0,0 +1,18 @@
from application.llm.handlers.base import LLMHandler
from application.llm.handlers.google import GoogleLLMHandler
from application.llm.handlers.openai import OpenAILLMHandler
class LLMHandlerCreator:
handlers = {
"openai": OpenAILLMHandler,
"google": GoogleLLMHandler,
"default": OpenAILLMHandler,
}
@classmethod
def create_handler(cls, llm_type: str, *args, **kwargs) -> LLMHandler:
handler_class = cls.handlers.get(llm_type.lower())
if not handler_class:
raise ValueError(f"No LLM handler class found for type {llm_type}")
return handler_class(*args, **kwargs)
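Usage is a straightforward lookup; the module path below is an assumption.

```python
from application.llm.handlers.llm_handler_creator import LLMHandlerCreator  # path assumed

handler = LLMHandlerCreator.create_handler("google")           # GoogleLLMHandler
default_handler = LLMHandlerCreator.create_handler("default")  # OpenAI-style handler
```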

View File

@@ -0,0 +1,57 @@
from typing import Any, Dict, Generator
from application.llm.handlers.base import LLMHandler, LLMResponse, ToolCall
class OpenAILLMHandler(LLMHandler):
"""Handler for OpenAI API."""
def parse_response(self, response: Any) -> LLMResponse:
"""Parse OpenAI response into standardized format."""
if isinstance(response, str):
return LLMResponse(
content=response,
tool_calls=[],
finish_reason="stop",
raw_response=response,
)
message = getattr(response, "message", None) or getattr(response, "delta", None)
tool_calls = []
if hasattr(message, "tool_calls"):
tool_calls = [
ToolCall(
id=getattr(tc, "id", ""),
name=getattr(tc.function, "name", ""),
arguments=getattr(tc.function, "arguments", ""),
index=getattr(tc, "index", None),
)
for tc in message.tool_calls or []
]
return LLMResponse(
content=getattr(message, "content", ""),
tool_calls=tool_calls,
finish_reason=getattr(response, "finish_reason", ""),
raw_response=response,
)
def create_tool_message(self, tool_call: ToolCall, result: Any) -> Dict:
"""Create OpenAI-style tool message."""
return {
"role": "tool",
"content": [
{
"function_response": {
"name": tool_call.name,
"response": {"result": result},
"call_id": tool_call.id,
}
}
],
}
def _iterate_stream(self, response: Any) -> Generator:
"""Iterate through OpenAI streaming response."""
for chunk in response:
yield chunk

View File

@@ -2,6 +2,7 @@ from application.llm.base import BaseLLM
from application.core.settings import settings
import threading
class LlamaSingleton:
_instances = {}
_lock = threading.Lock() # Add a lock for thread synchronization
@@ -29,7 +30,7 @@ class LlamaCpp(BaseLLM):
self,
api_key=None,
user_api_key=None,
llm_name=settings.MODEL_PATH,
llm_name=settings.LLM_PATH,
*args,
**kwargs,
):
@@ -42,14 +43,18 @@ class LlamaCpp(BaseLLM):
context = messages[0]["content"]
user_question = messages[-1]["content"]
prompt = f"### Instruction \n {user_question} \n ### Context \n {context} \n ### Answer \n"
result = LlamaSingleton.query_model(self.llama, prompt, max_tokens=150, echo=False)
result = LlamaSingleton.query_model(
self.llama, prompt, max_tokens=150, echo=False
)
return result["choices"][0]["text"].split("### Answer \n")[-1]
def _raw_gen_stream(self, baseself, model, messages, stream=True, **kwargs):
context = messages[0]["content"]
user_question = messages[-1]["content"]
prompt = f"### Instruction \n {user_question} \n ### Context \n {context} \n ### Answer \n"
result = LlamaSingleton.query_model(self.llama, prompt, max_tokens=150, echo=False, stream=stream)
result = LlamaSingleton.query_model(
self.llama, prompt, max_tokens=150, echo=False, stream=stream
)
for item in result:
for choice in item["choices"]:
yield choice["text"]
yield choice["text"]

View File

@@ -1,7 +1,10 @@
import json
import base64
import logging
from application.core.settings import settings
from application.llm.base import BaseLLM
from application.storage.storage_creator import StorageCreator
class OpenAILLM(BaseLLM):
@@ -10,12 +13,14 @@ class OpenAILLM(BaseLLM):
from openai import OpenAI
super().__init__(*args, **kwargs)
if settings.OPENAI_BASE_URL:
if isinstance(settings.OPENAI_BASE_URL, str) and settings.OPENAI_BASE_URL.strip():
self.client = OpenAI(api_key=api_key, base_url=settings.OPENAI_BASE_URL)
else:
self.client = OpenAI(api_key=api_key)
DEFAULT_OPENAI_API_BASE = "https://api.openai.com/v1"
self.client = OpenAI(api_key=api_key, base_url=DEFAULT_OPENAI_API_BASE)
self.api_key = api_key
self.user_api_key = user_api_key
self.storage = StorageCreator.get_storage()
def _clean_messages_openai(self, messages):
cleaned_messages = []
@@ -65,6 +70,17 @@ class OpenAILLM(BaseLLM):
),
}
)
elif isinstance(item, dict):
content_parts = []
if "text" in item:
content_parts.append({"type": "text", "text": item["text"]})
elif "type" in item and item["type"] == "text" and "text" in item:
content_parts.append(item)
elif "type" in item and item["type"] == "file" and "file" in item:
content_parts.append(item)
elif "type" in item and item["type"] == "image_url" and "image_url" in item:
content_parts.append(item)
cleaned_messages.append({"role": role, "content": content_parts})
else:
raise ValueError(
f"Unexpected content dictionary format: {item}"
@@ -133,11 +149,167 @@ class OpenAILLM(BaseLLM):
def _supports_tools(self):
return True
def get_supported_attachment_types(self):
"""
Return a list of MIME types supported by OpenAI for file uploads.
Returns:
list: List of supported MIME types
"""
return [
'application/pdf',
'image/png',
'image/jpeg',
'image/jpg',
'image/webp',
'image/gif'
]
def prepare_messages_with_attachments(self, messages, attachments=None):
"""
Process attachments using OpenAI's file API for more efficient handling.
Args:
messages (list): List of message dictionaries.
attachments (list): List of attachment dictionaries with content and metadata.
Returns:
list: Messages formatted with file references for OpenAI API.
"""
if not attachments:
return messages
prepared_messages = messages.copy()
# Find the last user message to attach the file_id to
user_message_index = None
for i in range(len(prepared_messages) - 1, -1, -1):
if prepared_messages[i].get("role") == "user":
user_message_index = i
break
if user_message_index is None:
user_message = {"role": "user", "content": []}
prepared_messages.append(user_message)
user_message_index = len(prepared_messages) - 1
if isinstance(prepared_messages[user_message_index].get("content"), str):
text_content = prepared_messages[user_message_index]["content"]
prepared_messages[user_message_index]["content"] = [
{"type": "text", "text": text_content}
]
elif not isinstance(prepared_messages[user_message_index].get("content"), list):
prepared_messages[user_message_index]["content"] = []
for attachment in attachments:
mime_type = attachment.get('mime_type')
if mime_type and mime_type.startswith('image/'):
try:
base64_image = self._get_base64_image(attachment)
prepared_messages[user_message_index]["content"].append({
"type": "image_url",
"image_url": {
"url": f"data:{mime_type};base64,{base64_image}"
}
})
except Exception as e:
logging.error(f"Error processing image attachment: {e}", exc_info=True)
if 'content' in attachment:
prepared_messages[user_message_index]["content"].append({
"type": "text",
"text": f"[Image could not be processed: {attachment.get('path', 'unknown')}]"
})
# Handle PDFs using the file API
elif mime_type == 'application/pdf':
try:
file_id = self._upload_file_to_openai(attachment)
prepared_messages[user_message_index]["content"].append({
"type": "file",
"file": {"file_id": file_id}
})
except Exception as e:
logging.error(f"Error uploading PDF to OpenAI: {e}", exc_info=True)
if 'content' in attachment:
prepared_messages[user_message_index]["content"].append({
"type": "text",
"text": f"File content:\n\n{attachment['content']}"
})
return prepared_messages
def _get_base64_image(self, attachment):
"""
Convert an image file to base64 encoding.
Args:
attachment (dict): Attachment dictionary with path and metadata.
Returns:
str: Base64-encoded image data.
"""
file_path = attachment.get('path')
if not file_path:
raise ValueError("No file path provided in attachment")
try:
with self.storage.get_file(file_path) as image_file:
return base64.b64encode(image_file.read()).decode('utf-8')
except FileNotFoundError:
raise FileNotFoundError(f"File not found: {file_path}")
def _upload_file_to_openai(self, attachment):
"""
Upload a file to OpenAI and return the file_id.
Args:
attachment (dict): Attachment dictionary with path and metadata.
Expected keys:
- path: Path to the file
- id: Optional MongoDB ID for caching
Returns:
str: OpenAI file_id for the uploaded file.
"""
if 'openai_file_id' in attachment:
return attachment['openai_file_id']
file_path = attachment.get('path')
if not self.storage.file_exists(file_path):
raise FileNotFoundError(f"File not found: {file_path}")
try:
file_id = self.storage.process_file(
file_path,
lambda local_path, **kwargs: self.client.files.create(
file=open(local_path, 'rb'),
purpose="assistants"
).id
)
from application.core.mongo_db import MongoDB
mongo = MongoDB.get_client()
db = mongo[settings.MONGO_DB_NAME]
attachments_collection = db["attachments"]
if '_id' in attachment:
attachments_collection.update_one(
{"_id": attachment['_id']},
{"$set": {"openai_file_id": file_id}}
)
return file_id
except Exception as e:
logging.error(f"Error uploading file to OpenAI: {e}", exc_info=True)
raise
class AzureOpenAILLM(OpenAILLM):
def __init__(
self, api_key, user_api_key, *args, **kwargs
):
super().__init__(api_key)
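For reference, the message shape that prepare_messages_with_attachments builds for an image: a base64 data URL inside an image_url content part, per the OpenAI chat-completions format. The file name is a placeholder.

```python
import base64

mime_type = "image/png"
with open("diagram.png", "rb") as f:  # placeholder file
    b64 = base64.b64encode(f.read()).decode("utf-8")

message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What does this diagram show?"},
        {"type": "image_url", "image_url": {"url": f"data:{mime_type};base64,{b64}"}},
    ],
}
```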

View File

@@ -7,6 +7,7 @@ import uuid
from typing import Any, Callable, Dict, Generator, List
from application.core.mongo_db import MongoDB
from application.core.settings import settings
logging.basicConfig(
level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s"
@@ -29,6 +30,8 @@ def build_stack_data(
exclude_attributes: List[str] = None,
custom_data: Dict = None,
) -> Dict:
if obj is None:
raise ValueError("The 'obj' parameter cannot be None")
data = {}
if include_attributes is None:
include_attributes = []
@@ -56,8 +59,8 @@ def build_stack_data(
data[attr_name] = [str(item) for item in attr_value]
elif isinstance(attr_value, dict):
data[attr_name] = {k: str(v) for k, v in attr_value.items()}
else:
data[attr_name] = str(attr_value)
except AttributeError as e:
logging.warning(f"AttributeError while accessing {attr_name}: {e}")
except AttributeError:
pass
if custom_data:
@@ -131,7 +134,7 @@ def _log_to_mongodb(
) -> None:
try:
mongo = MongoDB.get_client()
db = mongo["docsgpt"]
db = mongo[settings.MONGO_DB_NAME]
user_logs_collection = db["stack_logs"]
log_entry = {
@@ -148,4 +151,4 @@ def _log_to_mongodb(
logging.debug(f"Logged activity to MongoDB: {activity_id}")
except Exception as e:
logging.error(f"Failed to log to MongoDB: {e}")
logging.error(f"Failed to log to MongoDB: {e}", exc_info=True)

View File

@@ -19,7 +19,7 @@ def add_text_to_store_with_retry(store, doc, source_id):
doc.metadata["source_id"] = str(source_id)
store.add_texts([doc.page_content], metadatas=[doc.metadata])
except Exception as e:
logging.error(f"Failed to add document with retry: {e}")
logging.error(f"Failed to add document with retry: {e}", exc_info=True)
raise
@@ -75,7 +75,7 @@ def embed_and_store_documents(docs, folder_name, source_id, task_status):
# Add document to vector store
add_text_to_store_with_retry(store, doc, source_id)
except Exception as e:
logging.error(f"Error embedding document {idx}: {e}")
logging.error(f"Error embedding document {idx}: {e}", exc_info=True)
logging.info(f"Saving progress at document {idx} out of {total_docs}")
store.save_local(folder_name)
break

View File

@@ -158,7 +158,7 @@ class SimpleDirectoryReader(BaseReader):
data = f.read()
# Prepare metadata for this file
if self.file_metadata is not None:
file_metadata = self.file_metadata(str(input_file))
file_metadata = self.file_metadata(input_file.name)
else:
# Provide a default empty metadata
file_metadata = {'title': '', 'store': ''}

View File

@@ -73,7 +73,13 @@ class PandasCSVParser(BaseParser):
for more information.
Set to an empty dict by default, which means pandas will try to figure out the separators, table header, etc. on its own.
header_period (int): Controls how headers are included in output:
- 0: Headers only at the beginning
- 1: Headers in every row
- N > 1: Headers every N rows
header_prefix (str): Prefix for header rows. Default is "HEADERS: ".
"""
def __init__(
@@ -83,6 +89,8 @@ class PandasCSVParser(BaseParser):
col_joiner: str = ", ",
row_joiner: str = "\n",
pandas_config: dict = {},
header_period: int = 20,
header_prefix: str = "HEADERS: ",
**kwargs: Any
) -> None:
"""Init params."""
@@ -91,6 +99,8 @@ class PandasCSVParser(BaseParser):
self._col_joiner = col_joiner
self._row_joiner = row_joiner
self._pandas_config = pandas_config
self._header_period = header_period
self._header_prefix = header_prefix
def _init_parser(self) -> Dict:
"""Init parser."""
@@ -104,15 +114,26 @@ class PandasCSVParser(BaseParser):
raise ValueError("pandas module is required to read CSV files.")
df = pd.read_csv(file, **self._pandas_config)
headers = df.columns.tolist()
header_row = f"{self._header_prefix}{self._col_joiner.join(headers)}"
text_list = df.apply(
lambda row: (self._col_joiner).join(row.astype(str).tolist()), axis=1
).tolist()
if not self._concat_rows:
return df.apply(
lambda row: (self._col_joiner).join(row.astype(str).tolist()), axis=1
).tolist()
text_list = []
if self._header_period != 1:
text_list.append(header_row)
for i, row in df.iterrows():
if (self._header_period > 1 and i > 0 and i % self._header_period == 0):
text_list.append(header_row)
text_list.append(self._col_joiner.join(row.astype(str).tolist()))
if self._header_period == 1 and i < len(df) - 1:
text_list.append(header_row)
if self._concat_rows:
return (self._row_joiner).join(text_list)
else:
return text_list
return self._row_joiner.join(text_list)
class ExcelParser(BaseParser):
@@ -138,7 +159,13 @@ class ExcelParser(BaseParser):
for more information.
Set to an empty dict by default, which means pandas will try to figure out the table structure on its own.
header_period (int): Controls how headers are included in output:
- 0: Headers only at the beginning (default)
- 1: Headers in every row
- N > 1: Headers every N rows
header_prefix (str): Prefix for header rows. Default is "HEADERS: ".
"""
def __init__(
@@ -148,6 +175,8 @@ class ExcelParser(BaseParser):
col_joiner: str = ", ",
row_joiner: str = "\n",
pandas_config: dict = {},
header_period: int = 20,
header_prefix: str = "HEADERS: ",
**kwargs: Any
) -> None:
"""Init params."""
@@ -156,6 +185,8 @@ class ExcelParser(BaseParser):
self._col_joiner = col_joiner
self._row_joiner = row_joiner
self._pandas_config = pandas_config
self._header_period = header_period
self._header_prefix = header_prefix
def _init_parser(self) -> Dict:
"""Init parser."""
@@ -169,12 +200,22 @@ class ExcelParser(BaseParser):
raise ValueError("pandas module is required to read Excel files.")
df = pd.read_excel(file, **self._pandas_config)
headers = df.columns.tolist()
header_row = f"{self._header_prefix}{self._col_joiner.join(headers)}"
if not self._concat_rows:
return df.apply(
lambda row: (self._col_joiner).join(row.astype(str).tolist()), axis=1
).tolist()
text_list = []
if self._header_period != 1:
text_list.append(header_row)
text_list = df.apply(
lambda row: (self._col_joiner).join(row.astype(str).tolist()), axis=1
).tolist()
if self._concat_rows:
return (self._row_joiner).join(text_list)
else:
return text_list
for i, row in df.iterrows():
if (self._header_period > 1 and i > 0 and i % self._header_period == 0):
text_list.append(header_row)
text_list.append(self._col_joiner.join(row.astype(str).tolist()))
if self._header_period == 1 and i < len(df) - 1:
text_list.append(header_row)
return self._row_joiner.join(text_list)
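A worked illustration of header_period, mirroring the loop above: with header_period=2, a four-row frame gets the header once at the top and again before every second data row.

```python
import pandas as pd

df = pd.DataFrame({"name": ["a", "b", "c", "d"], "score": [1, 2, 3, 4]})
header_row = "HEADERS: " + ", ".join(df.columns)

text_list = [header_row]  # header once at the top (header_period != 1)
for i, row in df.iterrows():
    if i > 0 and i % 2 == 0:  # header_period = 2
        text_list.append(header_row)
    text_list.append(", ".join(row.astype(str)))
print("\n".join(text_list))
# HEADERS: name, score / a, 1 / b, 2 / HEADERS: name, score / c, 3 / d, 4
```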

View File

@@ -1,3 +1,4 @@
import logging
import requests
from urllib.parse import urlparse, urljoin
from bs4 import BeautifulSoup
@@ -42,7 +43,7 @@ class CrawlerLoader(BaseRemote):
)
)
except Exception as e:
print(f"Error processing URL {current_url}: {e}")
logging.error(f"Error processing URL {current_url}: {e}", exc_info=True)
continue
# Parse the HTML content to extract all links
@@ -61,4 +62,4 @@ class CrawlerLoader(BaseRemote):
if self.limit is not None and len(visited_urls) >= self.limit:
break
return loaded_content

View File

@@ -1,3 +1,4 @@
import logging
import requests
import re # Import regular expression library
import xml.etree.ElementTree as ET
@@ -32,7 +33,7 @@ class SitemapLoader(BaseRemote):
documents.extend(loader.load())
processed_urls += 1 # Increment the counter after processing each URL
except Exception as e:
print(f"Error processing URL {url}: {e}")
logging.error(f"Error processing URL {url}: {e}", exc_info=True)
continue
return documents

View File

@@ -1,3 +1,4 @@
import logging
from application.parser.remote.base import BaseRemote
from application.parser.schema.base import Document
from langchain_community.document_loaders import WebBaseLoader
@@ -39,6 +40,6 @@ class WebLoader(BaseRemote):
)
)
except Exception as e:
print(f"Error processing URL {url}: {e}")
logging.error(f"Error processing URL {url}: {e}", exc_info=True)
continue
return documents

View File

@@ -1,9 +1,15 @@
You are DocsGPT, a friendly and helpful AI assistant by Arc53 that provides help with documents. You give thorough answers with code examples if possible.
Use the following pieces of context to help answer the user's question. If it's not relevant to the question, provide friendly responses.
You have access to chat history, and can use it to help answer the question.
When using code examples, use the following format:
You are a helpful, proactive AI assistant, DocsGPT. Try to use tools if they are available to you, and proactively fill in missing information.
Users can upload documents for your context as attachments or sources via the conversation input box in the UI.
If appropriate, your answers can include code examples, formatted as follows:
```(language)
(code)
```
Users are also able to see charts and diagrams if you use them with valid mermaid syntax in your responses.
Try to respond with mermaid charts if visualization helps with users' queries.
You effectively utilize chat history, ensuring relevant and tailored responses.
Try to use additional provided context if it's available, otherwise use your knowledge and tool capabilities.
Allow yourself to be very creative and use your imagination.
----------------
Possible additional context from uploaded sources:
{summaries}

View File

@@ -1,9 +1,14 @@
You are a helpful AI assistant, DocsGPT, specializing in document assistance, designed to offer detailed and informative responses.
You are a helpful, proactive AI assistant, DocsGPT. Try to use tools if they are available to you, and proactively fill in missing information.
Users can upload documents for your context as attachments or sources via the conversation input box in the UI.
If appropriate, your answers can include code examples, formatted as follows:
```(language)
(code)
```
Users are also able to see charts and diagrams if you use them with valid mermaid syntax in your responses.
Try to respond with mermaid charts if visualization helps with users' queries.
You effectively utilize chat history, ensuring relevant and tailored responses.
If a question doesn't align with your context, you provide friendly and helpful replies.
Try to use additional provided context if it's available, otherwise use your knowledge and tool capabilities.
----------------
Possible additional context from uploaded sources:
{summaries}

View File

@@ -1,13 +1,17 @@
You are an AI Assistant, DocsGPT, adept at offering document assistance.
Your expertise lies in providing answers on top of the provided context.
You can leverage the chat history if needed.
Answer the question based on the context below.
Keep the answer concise. Respond "Irrelevant context" if not sure about the answer.
If the question is not related to the context, respond "Irrelevant context".
When using code examples, use the following format:
You are a helpful, proactive AI assistant, DocsGPT. Try to use tools if they are available to you, and proactively fill in missing information.
Users can upload documents for your context as attachments or sources via the conversation input box in the UI.
If appropriate, your answers can include code examples, formatted as follows:
```(language)
(code)
```
----------------
Context:
{summaries}
Users are also able to see charts and diagrams if you use them with valid mermaid syntax in your responses.
Try to respond with mermaid charts if visualization helps with users' queries.
You effectively utilize chat history, ensuring relevant and tailored responses.
Use the context provided below or your available tool capabilities to answer user queries.
If you don't have enough information from the context or tools, answer "I don't know" or "I don't have enough information".
Never make up information or provide false information!
Allow yourself to be very creative and use your imagination.
----------------
Context from uploaded sources:
{summaries}

View File

@@ -0,0 +1,3 @@
Query: {query}
Observations: {observations}
Now, using the insights from the observations, formulate a well-structured and precise final answer.

View File

@@ -0,0 +1,13 @@
You are an AI assistant and talk like you're thinking out loud. Given the following query, outline a concise thought process that includes key steps and considerations necessary for effective analysis and response. Avoid pointwise formatting. The goal is to break down the query into manageable components without excessive detail, focusing on clarity and logical progression.
Include the following elements in your thought and execution process:
1. Identify the main objective of the query.
2. Determine any relevant context or background information needed to understand the query.
3. List potential approaches or methods to address the query.
4. Highlight any critical factors or constraints that may influence the outcome.
5. Plan with available tools to help you with the analysis, but don't execute them. Tools will be executed by another AI.
Query: {query}
Summaries: {summaries}
Prompt: {prompt}
Observations (potentially previous tool calls): {observations}

View File

@@ -1,24 +1,20 @@
anthropic==0.49.0
boto3==1.35.97
beautifulsoup4==4.12.3
boto3==1.38.18
beautifulsoup4==4.13.4
celery==5.4.0
dataclasses-json==0.6.7
docx2txt==0.8
duckduckgo-search==7.5.2
ebooklib==0.18
elastic-transport==8.17.0
elasticsearch==8.17.1
escodegen==1.0.11
esprima==4.0.1
esutils==1.0.1
Flask==3.1.0
Flask==3.1.1
faiss-cpu==1.9.0.post1
flask-restx==1.3.0
google-genai==1.3.0
google-generativeai==0.8.3
gTTS==2.5.4
gunicorn==23.0.0
html2text==2024.2.26
javalang==0.13.0
jinja2==3.1.6
jiter==0.8.2
@@ -26,39 +22,33 @@ jmespath==1.0.1
joblib==1.4.2
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.23.0
jsonschema-spec==0.2.4
jsonschema-specifications==2023.7.1
kombu==5.4.2
langchain==0.3.20
langchain-community==0.3.19
langchain-core==0.3.45
langchain-openai==0.3.8
langchain-text-splitters==0.3.6
langsmith==0.3.19
langchain-core==0.3.59
langchain-openai==0.3.16
langchain-text-splitters==0.3.8
langsmith==0.3.42
lazy-object-proxy==1.10.0
lxml==5.3.1
markupsafe==3.0.2
marshmallow==3.26.1
mpmath==1.3.0
multidict==6.1.0
multidict==6.4.3
mypy-extensions==1.0.0
networkx==3.4.2
numpy==2.2.1
openai==1.66.3
openapi-schema-validator==0.6.3
openapi-spec-validator==0.6.0
openapi3-parser==1.1.19
openai==1.78.1
openapi3-parser==1.1.21
orjson==3.10.14
packaging==24.1
packaging==24.2
pandas==2.2.3
openpyxl==3.1.5
pathable==0.4.4
pillow==11.1.0
portalocker==2.10.1
portalocker>=2.7.0,<3.0.0
prance==23.6.21.0
primp==0.14.0
prompt-toolkit==3.0.50
prompt-toolkit==3.0.51
protobuf==5.29.3
psycopg2-binary==2.9.10
py==1.11.0
@@ -66,23 +56,22 @@ pydantic==2.10.6
pydantic-core==2.27.2
pydantic-settings==2.7.1
pymongo==4.11.3
pypdf==5.2.0
pypdf==5.5.0
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
python-jose==3.4.0
python-pptx==1.0.2
qdrant-client==1.13.2
redis==5.2.1
referencing==0.30.2
referencing>=0.28.0,<0.31.0
regex==2024.11.6
requests==2.32.3
retry==0.9.2
sentence-transformers==3.3.1
tiktoken==0.8.0
tokenizers==0.21.0
torch==2.5.1
torch==2.7.0
tqdm==4.67.1
transformers==4.49.0
transformers==4.51.3
typing-extensions==4.12.2
typing-inspect==0.9.0
tzdata==2024.2
@@ -90,7 +79,7 @@ urllib3==2.3.0
vine==5.1.0
wcwidth==0.2.13
werkzeug==3.1.3
yarl==1.18.3
markdownify==0.14.1
yarl==1.20.0
markdownify==1.1.0
tldextract==5.1.3
websockets==14.1

View File

@@ -29,10 +29,10 @@ class BraveRetSearch(BaseRetriever):
self.token_limit = (
token_limit
if token_limit
< settings.MODEL_TOKEN_LIMITS.get(
< settings.LLM_TOKEN_LIMITS.get(
self.gpt_model, settings.DEFAULT_MAX_HISTORY
)
else settings.MODEL_TOKEN_LIMITS.get(
else settings.LLM_TOKEN_LIMITS.get(
self.gpt_model, settings.DEFAULT_MAX_HISTORY
)
)
@@ -59,7 +59,7 @@ class BraveRetSearch(BaseRetriever):
docs.append({"text": snippet, "title": title, "link": link})
except IndexError:
pass
if settings.LLM_NAME == "llama.cpp":
if settings.LLM_PROVIDER == "llama.cpp":
docs = [docs[0]]
return docs
@@ -84,7 +84,7 @@ class BraveRetSearch(BaseRetriever):
messages_combine.append({"role": "user", "content": self.question})
llm = LLMCreator.create_llm(
settings.LLM_NAME,
settings.LLM_PROVIDER,
api_key=settings.API_KEY,
user_api_key=self.user_api_key,
decoded_token=self.decoded_token,

View File

@@ -1,3 +1,4 @@
import logging
from application.core.settings import settings
from application.llm.llm_creator import LLMCreator
from application.retriever.base import BaseRetriever
@@ -15,7 +16,7 @@ class ClassicRAG(BaseRetriever):
token_limit=150,
gpt_model="docsgpt",
user_api_key=None,
llm_name=settings.LLM_NAME,
llm_name=settings.LLM_PROVIDER,
api_key=settings.API_KEY,
decoded_token=None,
):
@@ -27,10 +28,10 @@ class ClassicRAG(BaseRetriever):
self.token_limit = (
token_limit
if token_limit
< settings.MODEL_TOKEN_LIMITS.get(
< settings.LLM_TOKEN_LIMITS.get(
self.gpt_model, settings.DEFAULT_MAX_HISTORY
)
else settings.MODEL_TOKEN_LIMITS.get(
else settings.LLM_TOKEN_LIMITS.get(
self.gpt_model, settings.DEFAULT_MAX_HISTORY
)
)
@@ -43,8 +44,8 @@ class ClassicRAG(BaseRetriever):
user_api_key=self.user_api_key,
decoded_token=decoded_token,
)
self.question = self._rephrase_query()
self.vectorstore = source["active_docs"] if "active_docs" in source else None
self.question = self._rephrase_query()
self.decoded_token = decoded_token
def _rephrase_query(self):
@@ -52,6 +53,8 @@ class ClassicRAG(BaseRetriever):
not self.original_question
or not self.chat_history
or self.chat_history == []
or self.chunks == 0
or self.vectorstore is None
):
return self.original_question
@@ -72,11 +75,11 @@ class ClassicRAG(BaseRetriever):
print(f"Rephrased query: {rephrased_query}")
return rephrased_query if rephrased_query else self.original_question
except Exception as e:
print(f"Error rephrasing query: {e}")
logging.error(f"Error rephrasing query: {e}", exc_info=True)
return self.original_question
def _get_data(self):
if self.chunks == 0:
if self.chunks == 0 or self.vectorstore is None:
docs = []
else:
docsearch = VectorCreator.create_vectorstore(

View File

@@ -28,10 +28,10 @@ class DuckDuckSearch(BaseRetriever):
self.token_limit = (
token_limit
if token_limit
< settings.MODEL_TOKEN_LIMITS.get(
< settings.LLM_TOKEN_LIMITS.get(
self.gpt_model, settings.DEFAULT_MAX_HISTORY
)
else settings.MODEL_TOKEN_LIMITS.get(
else settings.LLM_TOKEN_LIMITS.get(
self.gpt_model, settings.DEFAULT_MAX_HISTORY
)
)
@@ -58,7 +58,7 @@ class DuckDuckSearch(BaseRetriever):
)
except IndexError:
pass
if settings.LLM_NAME == "llama.cpp":
if settings.LLM_PROVIDER == "llama.cpp":
docs = [docs[0]]
return docs
@@ -83,7 +83,7 @@ class DuckDuckSearch(BaseRetriever):
messages_combine.append({"role": "user", "content": self.question})
llm = LLMCreator.create_llm(
settings.LLM_NAME,
settings.LLM_PROVIDER,
api_key=settings.API_KEY,
user_api_key=self.user_api_key,
decoded_token=self.decoded_token,

View File

@@ -3,18 +3,18 @@ from application.retriever.duckduck_search import DuckDuckSearch
from application.retriever.brave_search import BraveRetSearch
class RetrieverCreator:
retrievers = {
'classic': ClassicRAG,
'duckduck_search': DuckDuckSearch,
'brave_search': BraveRetSearch,
'default': ClassicRAG
"classic": ClassicRAG,
"duckduck_search": DuckDuckSearch,
"brave_search": BraveRetSearch,
"default": ClassicRAG,
}
@classmethod
def create_retriever(cls, type, *args, **kwargs):
retiever_class = cls.retrievers.get(type.lower())
retriever_type = (type or "default").lower()
retriever_class = cls.retrievers.get(retriever_type)
if not retriever_class:
raise ValueError(f"No retriever class found for type {type}")
return retriever_class(*args, **kwargs)
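The (type or "default") guard means a missing retriever name now resolves to ClassicRAG instead of raising on None.lower(). A sketch — the keyword arguments below are illustrative assumptions:

```python
from application.retriever.retriever_creator import RetrieverCreator

retriever = RetrieverCreator.create_retriever(
    None,  # resolves to the "default" entry (ClassicRAG)
    question="What is DocsGPT?",  # remaining kwargs are assumptions
    source={},
    chat_history=[],
)
```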

View File

@@ -0,0 +1,94 @@
"""Base storage class for file system abstraction."""
from abc import ABC, abstractmethod
from typing import BinaryIO, List, Callable
class BaseStorage(ABC):
"""Abstract base class for storage implementations."""
@abstractmethod
def save_file(self, file_data: BinaryIO, path: str) -> dict:
"""
Save a file to storage.
Args:
file_data: File-like object containing the data
path: Path where the file should be stored
Returns:
dict: A dictionary containing metadata about the saved file, including:
- 'path': The path where the file was saved
- 'storage_type': The type of storage (e.g., 'local', 's3')
- Other storage-specific metadata (e.g., 'uri', 'bucket_name', etc.)
"""
pass
@abstractmethod
def get_file(self, path: str) -> BinaryIO:
"""
Retrieve a file from storage.
Args:
path: Path to the file
Returns:
BinaryIO: File-like object containing the file data
"""
pass
@abstractmethod
def process_file(self, path: str, processor_func: Callable, **kwargs):
"""
Process a file using the provided processor function.
This method handles the details of retrieving the file and providing
it to the processor function in an appropriate way based on the storage type.
Args:
path: Path to the file
processor_func: Function that processes the file
**kwargs: Additional arguments to pass to the processor function
Returns:
The result of the processor function
"""
pass
@abstractmethod
def delete_file(self, path: str) -> bool:
"""
Delete a file from storage.
Args:
path: Path to the file
Returns:
bool: True if deletion was successful
"""
pass
@abstractmethod
def file_exists(self, path: str) -> bool:
"""
Check if a file exists.
Args:
path: Path to the file
Returns:
bool: True if the file exists
"""
pass
@abstractmethod
def list_files(self, directory: str) -> List[str]:
"""
List all files in a directory.
Args:
directory: Directory path to list
Returns:
List[str]: List of file paths
"""
pass
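As an illustration of the contract, a minimal in-memory implementation — a test double, not repo code. Note that process_file still hands the processor a local_path, matching the local and S3 backends.

```python
import io
import os
import tempfile
from typing import BinaryIO, Callable, List

from application.storage.base import BaseStorage

class MemoryStorage(BaseStorage):
    """Keeps files in a dict; useful for unit tests."""

    def __init__(self):
        self._files = {}

    def save_file(self, file_data: BinaryIO, path: str) -> dict:
        self._files[path] = file_data.read()
        return {"path": path, "storage_type": "memory"}

    def get_file(self, path: str) -> BinaryIO:
        if path not in self._files:
            raise FileNotFoundError(path)
        return io.BytesIO(self._files[path])

    def process_file(self, path: str, processor_func: Callable, **kwargs):
        # Real backends hand the processor a local path, so mimic that here.
        with tempfile.NamedTemporaryFile(delete=False) as tmp:
            tmp.write(self._files[path])
        try:
            return processor_func(local_path=tmp.name, **kwargs)
        finally:
            os.remove(tmp.name)

    def delete_file(self, path: str) -> bool:
        return self._files.pop(path, None) is not None

    def file_exists(self, path: str) -> bool:
        return path in self._files

    def list_files(self, directory: str) -> List[str]:
        prefix = directory.rstrip("/") + "/" if directory else ""
        return [p for p in self._files if p.startswith(prefix)]
```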

View File

@@ -0,0 +1,103 @@
"""Local file system implementation."""
import os
import shutil
from typing import BinaryIO, List, Callable
from application.storage.base import BaseStorage
class LocalStorage(BaseStorage):
"""Local file system storage implementation."""
def __init__(self, base_dir: str = None):
"""
Initialize local storage.
Args:
base_dir: Base directory for all operations. If None, uses current directory.
"""
self.base_dir = base_dir or os.path.dirname(
os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
)
def _get_full_path(self, path: str) -> str:
"""Get absolute path by combining base_dir and path."""
if os.path.isabs(path):
return path
return os.path.join(self.base_dir, path)
def save_file(self, file_data: BinaryIO, path: str) -> dict:
"""Save a file to local storage."""
full_path = self._get_full_path(path)
os.makedirs(os.path.dirname(full_path), exist_ok=True)
if hasattr(file_data, 'save'):
file_data.save(full_path)
else:
with open(full_path, 'wb') as f:
shutil.copyfileobj(file_data, f)
return {
'storage_type': 'local'
}
def get_file(self, path: str) -> BinaryIO:
"""Get a file from local storage."""
full_path = self._get_full_path(path)
if not os.path.exists(full_path):
raise FileNotFoundError(f"File not found: {full_path}")
return open(full_path, 'rb')
def delete_file(self, path: str) -> bool:
"""Delete a file from local storage."""
full_path = self._get_full_path(path)
if not os.path.exists(full_path):
return False
os.remove(full_path)
return True
def file_exists(self, path: str) -> bool:
"""Check if a file exists in local storage."""
full_path = self._get_full_path(path)
return os.path.exists(full_path)
def list_files(self, directory: str) -> List[str]:
"""List all files in a directory in local storage."""
full_path = self._get_full_path(directory)
if not os.path.exists(full_path):
return []
result = []
for root, _, files in os.walk(full_path):
for file in files:
rel_path = os.path.relpath(os.path.join(root, file), self.base_dir)
result.append(rel_path)
return result
def process_file(self, path: str, processor_func: Callable, **kwargs):
"""
Process a file using the provided processor function.
For local storage, we can directly pass the full path to the processor.
Args:
path: Path to the file
processor_func: Function that processes the file
**kwargs: Additional arguments to pass to the processor function
Returns:
The result of the processor function
"""
full_path = self._get_full_path(path)
if not os.path.exists(full_path):
raise FileNotFoundError(f"File not found: {full_path}")
return processor_func(local_path=full_path, **kwargs)

application/storage/s3.py
View File

@@ -0,0 +1,124 @@
"""S3 storage implementation."""
import io
import os
from typing import BinaryIO, Callable, List
import boto3
from application.core.settings import settings
from application.storage.base import BaseStorage
from botocore.exceptions import ClientError
class S3Storage(BaseStorage):
"""AWS S3 storage implementation."""
def __init__(self, bucket_name=None):
"""
Initialize S3 storage.
Args:
bucket_name: S3 bucket name (optional, defaults to settings)
"""
self.bucket_name = bucket_name or getattr(
settings, "S3_BUCKET_NAME", "docsgpt-test-bucket"
)
# Get credentials from settings
aws_access_key_id = getattr(settings, "SAGEMAKER_ACCESS_KEY", None)
aws_secret_access_key = getattr(settings, "SAGEMAKER_SECRET_KEY", None)
region_name = getattr(settings, "SAGEMAKER_REGION", None)
self.s3 = boto3.client(
"s3",
aws_access_key_id=aws_access_key_id,
aws_secret_access_key=aws_secret_access_key,
region_name=region_name,
)
def save_file(self, file_data: BinaryIO, path: str) -> dict:
"""Save a file to S3 storage."""
self.s3.upload_fileobj(file_data, self.bucket_name, path)
region = getattr(settings, "SAGEMAKER_REGION", None)
return {
"storage_type": "s3",
"bucket_name": self.bucket_name,
"uri": f"s3://{self.bucket_name}/{path}",
"region": region,
}
def get_file(self, path: str) -> BinaryIO:
"""Get a file from S3 storage."""
if not self.file_exists(path):
raise FileNotFoundError(f"File not found: {path}")
file_obj = io.BytesIO()
self.s3.download_fileobj(self.bucket_name, path, file_obj)
file_obj.seek(0)
return file_obj
def delete_file(self, path: str) -> bool:
"""Delete a file from S3 storage."""
try:
self.s3.delete_object(Bucket=self.bucket_name, Key=path)
return True
except ClientError:
return False
def file_exists(self, path: str) -> bool:
"""Check if a file exists in S3 storage."""
try:
self.s3.head_object(Bucket=self.bucket_name, Key=path)
return True
except ClientError:
return False
def list_files(self, directory: str) -> List[str]:
"""List all files in a directory in S3 storage."""
# Ensure directory ends with a slash if it's not empty
if directory and not directory.endswith("/"):
directory += "/"
result = []
paginator = self.s3.get_paginator("list_objects_v2")
pages = paginator.paginate(Bucket=self.bucket_name, Prefix=directory)
for page in pages:
if "Contents" in page:
for obj in page["Contents"]:
result.append(obj["Key"])
return result
def process_file(self, path: str, processor_func: Callable, **kwargs):
"""
Process a file using the provided processor function.
Args:
path: Path to the file
processor_func: Function that processes the file
**kwargs: Additional arguments to pass to the processor function
Returns:
The result of the processor function
"""
import logging
import tempfile
if not self.file_exists(path):
raise FileNotFoundError(f"File not found in S3: {path}")
with tempfile.NamedTemporaryFile(
suffix=os.path.splitext(path)[1], delete=True
) as temp_file:
try:
# Download the file from S3 to the temporary file
self.s3.download_fileobj(self.bucket_name, path, temp_file)
temp_file.flush()
return processor_func(local_path=temp_file.name, **kwargs)
except Exception as e:
logging.error(f"Error processing S3 file {path}: {e}", exc_info=True)
raise

View File

@@ -0,0 +1,32 @@
"""Storage factory for creating different storage implementations."""
from typing import Dict, Type
from application.storage.base import BaseStorage
from application.storage.local import LocalStorage
from application.storage.s3 import S3Storage
from application.core.settings import settings
class StorageCreator:
storages: Dict[str, Type[BaseStorage]] = {
"local": LocalStorage,
"s3": S3Storage,
}
_instance = None
@classmethod
def get_storage(cls) -> BaseStorage:
if cls._instance is None:
storage_type = getattr(settings, "STORAGE_TYPE", "local")
cls._instance = cls.create_storage(storage_type)
return cls._instance
@classmethod
def create_storage(cls, type_name: str, *args, **kwargs) -> BaseStorage:
storage_class = cls.storages.get(type_name.lower())
if not storage_class:
raise ValueError(f"No storage implementation found for type {type_name}")
return storage_class(*args, **kwargs)
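Downstream code obtains a backend through the factory and stays storage-agnostic. A minimal usage sketch (the upload path and `report.pdf` are hypothetical):

```python
from application.storage.storage_creator import StorageCreator

# Resolves to LocalStorage or S3Storage based on settings.STORAGE_TYPE
# and caches the instance for subsequent calls.
storage = StorageCreator.get_storage()

with open("report.pdf", "rb") as f:
    meta = storage.save_file(f, "inputs/demo-user/demo-job/report.pdf")

def first_bytes(local_path, **kwargs):
    # Processor functions receive a local path regardless of the backend.
    with open(local_path, "rb") as fh:
        return fh.read(100)

preview = storage.process_file("inputs/demo-user/demo-job/report.pdf", first_bytes)
```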

View File

@@ -2,10 +2,11 @@ import sys
from datetime import datetime
from application.core.mongo_db import MongoDB
from application.core.settings import settings
from application.utils import num_tokens_from_object_or_list, num_tokens_from_string
mongo = MongoDB.get_client()
db = mongo["docsgpt"]
db = mongo[settings.MONGO_DB_NAME]
usage_collection = db["token_usage"]

View File

@@ -1,8 +1,12 @@
import hashlib
import os
import re
import uuid
import tiktoken
from flask import jsonify, make_response
from werkzeug.utils import secure_filename
from application.core.settings import settings
_encoding = None
@@ -15,6 +19,31 @@ def get_encoding():
return _encoding
def safe_filename(filename):
"""
Creates a safe filename that preserves the original extension.
Uses secure_filename, but ensures a proper filename is returned even with non-Latin characters.
Args:
filename (str): The original filename
Returns:
str: A safe filename that can be used for storage
"""
if not filename:
return str(uuid.uuid4())
_, extension = os.path.splitext(filename)
safe_name = secure_filename(filename)
# If secure_filename returns just the extension or an empty string
if not safe_name or safe_name == extension.lstrip("."):
return f"{str(uuid.uuid4())}{extension}"
return safe_name
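A few illustrative calls (the filenames are hypothetical) show the fallback behaviour:

```python
safe_filename("annual report.pdf")  # -> "annual_report.pdf" (sanitized by secure_filename)
safe_filename("отчёт.pdf")          # secure_filename strips the non-Latin stem,
                                    # so a UUID is generated, e.g. "3f2a...c9.pdf"
safe_filename("")                   # -> a bare UUID string
```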
def num_tokens_from_string(string: str) -> int:
encoding = get_encoding()
if isinstance(string, str):
@@ -74,8 +103,8 @@ def limit_chat_history(history, max_token_limit=None, gpt_model="docsgpt"):
max_token_limit
if max_token_limit
and max_token_limit
< settings.LLM_TOKEN_LIMITS.get(gpt_model, settings.DEFAULT_MAX_HISTORY)
else settings.LLM_TOKEN_LIMITS.get(gpt_model, settings.DEFAULT_MAX_HISTORY)
)
if not history:
@@ -109,3 +138,14 @@ def validate_function_name(function_name):
if not re.match(r"^[a-zA-Z0-9_-]+$", function_name):
return False
return True
def generate_image_url(image_path):
strategy = getattr(settings, "URL_STRATEGY", "backend")
if strategy == "s3":
bucket_name = getattr(settings, "S3_BUCKET_NAME", "docsgpt-test-bucket")
region_name = getattr(settings, "SAGEMAKER_REGION", "eu-central-1")
return f"https://{bucket_name}.s3.{region_name}.amazonaws.com/{image_path}"
else:
base_url = getattr(settings, "API_URL", "http://localhost:7091")
return f"{base_url}/api/images/{image_path}"

View File

@@ -1,9 +1,6 @@
from application.vectorstore.base import BaseVectorStore
from application.core.settings import settings
from application.vectorstore.document_class import Document
class ElasticsearchStore(BaseVectorStore):
@@ -26,8 +23,7 @@ class ElasticsearchStore(BaseVectorStore):
else:
raise ValueError("Please provide either elasticsearch_url or cloud_id.")
import elasticsearch
ElasticsearchStore._es_connection = elasticsearch.Elasticsearch(**connection_params)
self.docsearch = ElasticsearchStore._es_connection
@@ -155,8 +151,6 @@ class ElasticsearchStore(BaseVectorStore):
**kwargs,
):
bulk_kwargs = bulk_kwargs or {}
import uuid
embeddings = []
@@ -189,6 +183,7 @@ class ElasticsearchStore(BaseVectorStore):
if len(requests) > 0:
from elasticsearch.helpers import BulkIndexError, bulk
try:
success, failed = bulk(
self._es_connection,

View File

@@ -1,17 +1,19 @@
import os
import tempfile
from langchain_community.vectorstores import FAISS
from application.core.settings import settings
from application.parser.schema.base import Document
from application.vectorstore.base import BaseVectorStore
from application.storage.storage_creator import StorageCreator
def get_vectorstore(path: str) -> str:
if path:
vectorstore = os.path.join("application", "indexes", path)
vectorstore = f"indexes/{path}"
else:
vectorstore = os.path.join("application")
vectorstore = "indexes"
return vectorstore
@@ -21,16 +23,40 @@ class FaissStore(BaseVectorStore):
self.source_id = source_id
self.path = get_vectorstore(source_id)
self.embeddings = self._get_embeddings(settings.EMBEDDINGS_NAME, embeddings_key)
self.storage = StorageCreator.get_storage()
try:
if docs_init:
self.docsearch = FAISS.from_documents(docs_init, self.embeddings)
else:
    with tempfile.TemporaryDirectory() as temp_dir:
        faiss_path = f"{self.path}/index.faiss"
        pkl_path = f"{self.path}/index.pkl"
        if not self.storage.file_exists(
            faiss_path
        ) or not self.storage.file_exists(pkl_path):
            raise FileNotFoundError(
                f"Index files not found in storage at {self.path}"
            )
        faiss_file = self.storage.get_file(faiss_path)
        pkl_file = self.storage.get_file(pkl_path)
        local_faiss_path = os.path.join(temp_dir, "index.faiss")
        local_pkl_path = os.path.join(temp_dir, "index.pkl")
        with open(local_faiss_path, "wb") as f:
            f.write(faiss_file.read())
        with open(local_pkl_path, "wb") as f:
            f.write(pkl_file.read())
        self.docsearch = FAISS.load_local(
            temp_dir, self.embeddings, allow_dangerous_deserialization=True
        )
except Exception as e:
    raise Exception(f"Error loading FAISS index: {str(e)}")
self.assert_embedding_dimensions(self.embeddings)

View File

@@ -1,3 +1,4 @@
import logging
from application.core.settings import settings
from application.vectorstore.base import BaseVectorStore
from application.vectorstore.document_class import Document
@@ -146,7 +147,7 @@ class MongoDBVectorStore(BaseVectorStore):
return chunks
except Exception as e:
print(f"Error getting chunks: {e}")
logging.error(f"Error getting chunks: {e}", exc_info=True)
return []
def add_chunk(self, text, metadata=None):
@@ -172,5 +173,5 @@ class MongoDBVectorStore(BaseVectorStore):
result = self._collection.delete_one({"_id": object_id})
return result.deleted_count > 0
except Exception as e:
print(f"Error deleting chunk: {e}")
logging.error(f"Error deleting chunk: {e}", exc_info=True)
return False

View File

@@ -1,11 +1,12 @@
from langchain_community.vectorstores.qdrant import Qdrant
from application.vectorstore.base import BaseVectorStore
from application.core.settings import settings
from qdrant_client import models
class QdrantStore(BaseVectorStore):
def __init__(self, source_id: str = "", embeddings_key: str = "embeddings"):
self._filter = models.Filter(
must=[
models.FieldCondition(

View File

@@ -1,25 +1,37 @@
import datetime
import json
import logging
import mimetypes
import os
import shutil
import string
import tempfile
import zipfile
from collections import Counter
from urllib.parse import urljoin
import requests
from bson.dbref import DBRef
from bson.objectid import ObjectId
from application.agents.agent_creator import AgentCreator
from application.api.answer.routes import get_prompt
from application.core.mongo_db import MongoDB
from application.core.settings import settings
from application.parser.chunking import Chunker
from application.parser.embedding_pipeline import embed_and_store_documents
from application.parser.file.bulk import SimpleDirectoryReader
from application.parser.remote.remote_creator import RemoteCreator
from application.parser.schema.base import Document
from application.retriever.retriever_creator import RetrieverCreator
from application.storage.storage_creator import StorageCreator
from application.utils import count_tokens_docs, num_tokens_from_string
mongo = MongoDB.get_client()
db = mongo["docsgpt"]
db = mongo[settings.MONGO_DB_NAME]
sources_collection = db["sources"]
# Constants
@@ -27,18 +39,22 @@ MIN_TOKENS = 150
MAX_TOKENS = 1250
RECURSION_DEPTH = 2
# Define a function to extract metadata from a given filename.
def metadata_from_filename(title):
return {"title": title}
# Define a function to generate a random string of a given length.
def generate_random_string(length):
return "".join([string.ascii_letters[i % 52] for i in range(length)])
current_dir = os.path.dirname(
os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
)
def extract_zip_recursive(zip_path, extract_to, current_depth=0, max_depth=5):
"""
Recursively extract zip files with a limit on recursion depth.
@@ -58,7 +74,7 @@ def extract_zip_recursive(zip_path, extract_to, current_depth=0, max_depth=5):
zip_ref.extractall(extract_to)
os.remove(zip_path) # Remove the zip file after extracting
except Exception as e:
logging.error(f"Error extracting zip file {zip_path}: {e}")
logging.error(f"Error extracting zip file {zip_path}: {e}", exc_info=True)
return
# Check for nested zip files and extract them
@@ -69,6 +85,7 @@ def extract_zip_recursive(zip_path, extract_to, current_depth=0, max_depth=5):
file_path = os.path.join(root, file)
extract_zip_recursive(file_path, root, current_depth + 1, max_depth)
def download_file(url, params, dest_path):
try:
response = requests.get(url, params=params)
@@ -79,6 +96,7 @@ def download_file(url, params, dest_path):
logging.error(f"Error downloading file: {e}")
raise
def upload_index(full_path, file_data):
try:
if settings.VECTOR_STORE == "faiss":
@@ -87,7 +105,9 @@ def upload_index(full_path, file_data):
"file_pkl": open(full_path + "/index.pkl", "rb"),
}
response = requests.post(
urljoin(settings.API_URL, "/api/upload_index"),
files=files,
data=file_data,
)
else:
response = requests.post(
@@ -102,9 +122,79 @@ def upload_index(full_path, file_data):
for file in files.values():
file.close()
def run_agent_logic(agent_config, input_data):
try:
source = agent_config.get("source")
retriever = agent_config.get("retriever", "classic")
if isinstance(source, DBRef):
    source_doc = db.dereference(source)
    source = str(source_doc["_id"])
    retriever = source_doc.get("retriever", agent_config.get("retriever"))
    source = {"active_docs": source}
else:
    source = {}
chunks = int(agent_config.get("chunks", 2))
prompt_id = agent_config.get("prompt_id", "default")
user_api_key = agent_config["key"]
agent_type = agent_config.get("agent_type", "classic")
decoded_token = {"sub": agent_config.get("user")}
prompt = get_prompt(prompt_id)
agent = AgentCreator.create_agent(
agent_type,
endpoint="webhook",
llm_name=settings.LLM_PROVIDER,
gpt_model=settings.LLM_NAME,
api_key=settings.API_KEY,
user_api_key=user_api_key,
prompt=prompt,
chat_history=[],
decoded_token=decoded_token,
attachments=[],
)
retriever = RetrieverCreator.create_retriever(
retriever,
source=source,
chat_history=[],
prompt=prompt,
chunks=chunks,
token_limit=settings.DEFAULT_MAX_HISTORY,
gpt_model=settings.LLM_NAME,
user_api_key=user_api_key,
decoded_token=decoded_token,
)
answer = agent.gen(query=input_data, retriever=retriever)
response_full = ""
thought = ""
source_log_docs = []
tool_calls = []
for line in answer:
if "answer" in line:
response_full += str(line["answer"])
elif "sources" in line:
source_log_docs.extend(line["sources"])
elif "tool_calls" in line:
tool_calls.extend(line["tool_calls"])
elif "thought" in line:
thought += line["thought"]
result = {
"answer": response_full,
"sources": source_log_docs,
"tool_calls": tool_calls,
"thought": thought,
}
logging.info(f"Agent response: {result}")
return result
except Exception as e:
logging.error(f"Error in run_agent_logic: {e}", exc_info=True)
raise
# Define the main function for ingesting and processing documents.
def ingest_worker(
self, directory, formats, job_name, filename, user, dir_name=None, user_dir=None, retriever="classic"
):
"""
Ingest and process documents.
@@ -113,9 +203,11 @@ def ingest_worker(
self: Reference to the instance of the task.
directory (str): Specifies the directory for ingesting ('inputs' or 'temp').
formats (list of str): List of file extensions to consider for ingestion (e.g., [".rst", ".md"]).
job_name (str): Name of the job for this ingestion task (original, unsanitized).
filename (str): Name of the file to be ingested.
user (str): Identifier for the user initiating the ingestion (original, unsanitized).
dir_name (str, optional): Sanitized directory name for filesystem operations.
user_dir (str, optional): Sanitized user ID for filesystem operations.
retriever (str): Type of retriever to use for processing the documents.
Returns:
@@ -126,72 +218,99 @@ def ingest_worker(
limit = None
exclude = True
sample = False
    storage = StorageCreator.get_storage()
    full_path = os.path.join(directory, user_dir, dir_name)
    source_file_path = os.path.join(full_path, filename)
    logging.info(f"Ingest file: {full_path}", extra={"user": user, "job": job_name})
    # Create temporary working directory
    with tempfile.TemporaryDirectory() as temp_dir:
        try:
            os.makedirs(temp_dir, exist_ok=True)
            self.update_state(state="PROGRESS", meta={"current": 1})
            # Download file from storage to temp directory
            temp_file_path = os.path.join(temp_dir, filename)
            file_data = storage.get_file(source_file_path)
            with open(temp_file_path, "wb") as f:
                f.write(file_data.read())
            # Handle zip files
            if filename.endswith(".zip"):
                logging.info(f"Extracting zip file: {filename}")
                extract_zip_recursive(
                    temp_file_path, temp_dir, current_depth=0, max_depth=RECURSION_DEPTH
                )
            if sample:
                logging.info(f"Sample mode enabled. Using {limit} documents.")
            reader = SimpleDirectoryReader(
                input_dir=temp_dir,
                input_files=input_files,
                recursive=recursive,
                required_exts=formats,
                exclude_hidden=exclude,
                file_metadata=metadata_from_filename,
            )
            raw_docs = reader.load_data()
            chunker = Chunker(
                chunking_strategy="classic_chunk",
                max_tokens=MAX_TOKENS,
                min_tokens=MIN_TOKENS,
                duplicate_headers=False,
            )
            raw_docs = chunker.chunk(documents=raw_docs)
            docs = [Document.to_langchain_format(raw_doc) for raw_doc in raw_docs]
            id = ObjectId()
            vector_store_path = os.path.join(temp_dir, "vector_store")
            os.makedirs(vector_store_path, exist_ok=True)
            embed_and_store_documents(docs, vector_store_path, id, self)
            tokens = count_tokens_docs(docs)
            self.update_state(state="PROGRESS", meta={"current": 100})
            if sample:
                for i in range(min(5, len(raw_docs))):
                    logging.info(f"Sample document {i}: {raw_docs[i]}")
            file_data = {
                "name": job_name,  # Use original job_name
                "file": filename,
                "user": user,  # Use original user
                "tokens": tokens,
                "retriever": retriever,
                "id": str(id),
                "type": "local",
                "original_file_path": source_file_path,
            }
            upload_index(vector_store_path, file_data)
        except Exception as e:
            logging.error(f"Error in ingest_worker: {e}", exc_info=True)
            raise
return {
"directory": directory,
"formats": formats,
"name_job": name_job,
"name_job": job_name, # Use original job_name
"filename": filename,
"user": user,
"user": user, # Use original user
"limited": False,
}
def remote_worker(
self,
source_data,
@@ -203,7 +322,7 @@ def remote_worker(
sync_frequency="never",
operation_mode="upload",
doc_id=None,
):
full_path = os.path.join(directory, user, name_job)
if not os.path.exists(full_path):
os.makedirs(full_path)
@@ -218,7 +337,7 @@ def remote_worker(
chunking_strategy="classic_chunk",
max_tokens=MAX_TOKENS,
min_tokens=MIN_TOKENS,
duplicate_headers=False,
)
docs = chunker.chunk(documents=raw_docs)
docs = [Document.to_langchain_format(raw_doc) for raw_doc in raw_docs]
@@ -260,6 +379,7 @@ def remote_worker(
logging.info("remote_worker task completed successfully")
return {"urls": source_data, "name_job": name_job, "user": user, "limited": False}
def sync(
self,
source_data,
@@ -285,10 +405,11 @@ def sync(
doc_id,
)
except Exception as e:
logging.error(f"Error during sync: {e}")
logging.error(f"Error during sync: {e}", exc_info=True)
return {"status": "error", "error": str(e)}
return {"status": "success"}
def sync_worker(self, frequency):
sync_counts = Counter()
sources = sources_collection.find()
@@ -312,3 +433,122 @@ def sync_worker(self, frequency):
key: sync_counts[key]
for key in ["total_sync_count", "sync_success", "sync_failure"]
}
def attachment_worker(self, file_info, user):
"""
Process and store a single attachment without vectorization.
"""
mongo = MongoDB.get_client()
db = mongo[settings.MONGO_DB_NAME]
attachments_collection = db["attachments"]
filename = file_info["filename"]
attachment_id = file_info["attachment_id"]
relative_path = file_info["path"]
metadata = file_info.get("metadata", {})
try:
self.update_state(state="PROGRESS", meta={"current": 10})
storage = StorageCreator.get_storage()
self.update_state(
state="PROGRESS", meta={"current": 30, "status": "Processing content"}
)
content = storage.process_file(
relative_path,
lambda local_path, **kwargs: SimpleDirectoryReader(
input_files=[local_path], exclude_hidden=True, errors="ignore"
)
.load_data()[0]
.text,
)
token_count = num_tokens_from_string(content)
self.update_state(
state="PROGRESS", meta={"current": 80, "status": "Storing in database"}
)
mime_type = mimetypes.guess_type(filename)[0] or "application/octet-stream"
doc_id = ObjectId(attachment_id)
attachments_collection.insert_one(
{
"_id": doc_id,
"user": user,
"path": relative_path,
"filename": filename,
"content": content,
"token_count": token_count,
"mime_type": mime_type,
"date": datetime.datetime.now(),
"metadata": metadata,
}
)
logging.info(
f"Stored attachment with ID: {attachment_id}", extra={"user": user}
)
self.update_state(state="PROGRESS", meta={"current": 100, "status": "Complete"})
return {
"filename": filename,
"path": relative_path,
"token_count": token_count,
"attachment_id": attachment_id,
"mime_type": mime_type,
"metadata": metadata,
}
except Exception as e:
logging.error(
f"Error processing file {filename}: {e}",
extra={"user": user},
exc_info=True,
)
raise
def agent_webhook_worker(self, agent_id, payload):
"""
Process the webhook payload for an agent.
Args:
self: Reference to the instance of the task.
agent_id (str): Unique identifier for the agent.
payload (dict): The payload data from the webhook.
Returns:
dict: Information about the processed webhook.
"""
mongo = MongoDB.get_client()
db = mongo["docsgpt"]
agents_collection = db["agents"]
self.update_state(state="PROGRESS", meta={"current": 1})
try:
agent_oid = ObjectId(agent_id)
agent_config = agents_collection.find_one({"_id": agent_oid})
if not agent_config:
raise ValueError(f"Agent with ID {agent_id} not found.")
input_data = json.dumps(payload)
except Exception as e:
logging.error(f"Error processing agent webhook: {e}", exc_info=True)
return {"status": "error", "error": str(e)}
self.update_state(state="PROGRESS", meta={"current": 50})
try:
result = run_agent_logic(agent_config, input_data)
except Exception as e:
logging.error(f"Error running agent logic: {e}", exc_info=True)
return {"status": "error", "error": str(e)}
finally:
self.update_state(state="PROGRESS", meta={"current": 100})
logging.info(
f"Webhook processed for agent {agent_id}", extra={"agent_id": agent_id}
)
return {"status": "success", "result": result}

View File

@@ -1,3 +1,4 @@
name: docsgpt-oss
services:
redis:

View File

@@ -1,3 +1,4 @@
name: docsgpt-oss
services:
frontend:
build: ../frontend
@@ -12,39 +13,46 @@ services:
- backend
backend:
user: root
build: ../application
environment:
- API_KEY=$API_KEY
- EMBEDDINGS_KEY=$API_KEY
- LLM_PROVIDER=$LLM_PROVIDER
- LLM_NAME=$LLM_NAME
- CELERY_BROKER_URL=redis://redis:6379/0
- CELERY_RESULT_BACKEND=redis://redis:6379/1
- MONGO_URI=mongodb://mongo:27017/docsgpt
- CACHE_REDIS_URL=redis://redis:6379/2
- OPENAI_BASE_URL=$OPENAI_BASE_URL
ports:
- "7091:7091"
volumes:
- ../application/indexes:/app/indexes
- ../application/inputs:/app/inputs
- ../application/vectors:/app/vectors
depends_on:
- redis
- mongo
worker:
user: root
build: ../application
command: celery -A application.app.celery worker -l INFO -B
environment:
- API_KEY=$API_KEY
- EMBEDDINGS_KEY=$API_KEY
- LLM_PROVIDER=$LLM_PROVIDER
- LLM_NAME=$LLM_NAME
- CELERY_BROKER_URL=redis://redis:6379/0
- CELERY_RESULT_BACKEND=redis://redis:6379/1
- MONGO_URI=mongodb://mongo:27017/docsgpt
- API_URL=http://backend:7091
- CACHE_REDIS_URL=redis://redis:6379/2
volumes:
- ../application/indexes:/app/indexes
- ../application/inputs:/app/inputs
- ../application/vectors:/app/vectors
depends_on:
- redis
- mongo

View File

@@ -4,7 +4,7 @@ metadata:
name: docsgpt-secrets
type: Opaque
data:
LLM_PROVIDER: ZG9jc2dwdA==
INTERNAL_KEY: aW50ZXJuYWw=
CELERY_BROKER_URL: cmVkaXM6Ly9yZWRpcy1zZXJ2aWNlOjYzNzkvMA==
CELERY_RESULT_BACKEND: cmVkaXM6Ly9yZWRpcy1zZXJ2aWNlOjYzNzkvMA==
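The opaque values are just base64-encoded strings. For example, the `LLM_PROVIDER` and `INTERNAL_KEY` values above decode and re-encode like this:

```python
import base64

base64.b64encode(b"docsgpt").decode()    # -> "ZG9jc2dwdA=="
base64.b64encode(b"internal").decode()   # -> "aW50ZXJuYWw="
base64.b64decode("cmVkaXM6Ly9yZWRpcy1zZXJ2aWNlOjYzNzkvMA==")  # -> b"redis://redis-service:6379/0"
```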

View File

@@ -0,0 +1,117 @@
import Image from 'next/image';
const iconMap = {
'API Tool': '/toolIcons/tool_api_tool.svg',
'Brave Search Tool': '/toolIcons/tool_brave.svg',
'Cryptoprice Tool': '/toolIcons/tool_cryptoprice.svg',
'Ntfy Tool': '/toolIcons/tool_ntfy.svg',
'PostgreSQL Tool': '/toolIcons/tool_postgres.svg',
'Read Webpage Tool': '/toolIcons/tool_read_webpage.svg',
'Telegram Tool': '/toolIcons/tool_telegram.svg'
};
export function ToolCards({ items }) {
return (
<>
<div className="tool-cards">
{items.map(({ title, link, description }) => {
const isExternal = link.startsWith('https://');
const iconSrc = iconMap[title] || '/default-icon.png'; // Default icon if not found
return (
<div
key={title}
className={`card${isExternal ? ' external' : ''}`}
>
<a href={link} target={isExternal ? '_blank' : undefined} rel="noopener noreferrer" className="card-link-wrapper">
<div className="card-icon-container">
{iconSrc && <div className="card-icon"><Image src={iconSrc} alt={title} width={32} height={32} /></div>} {/* Reduced icon size */}
</div>
<h3 className="card-title">{title}</h3>
{description && <p className="card-description">{description}</p>}
{/* Card URL element removed from here */}
</a>
</div>
);
})}
</div>
<style jsx>{`
.tool-cards {
margin-top: 24px;
display: grid;
grid-template-columns: 1fr;
gap: 16px;
}
@media (min-width: 768px) {
.tool-cards {
grid-template-columns: 1fr 1fr; /* Keeps two columns on wider screens */
}
}
.card {
background-color: #222222;
border-radius: 8px;
padding: 16px; /* Existing padding */
transition: background-color 0.3s;
position: relative;
color: #ffffff;
display: flex; /* Using flex to help with alignment */
flex-direction: column;
/* align-items: center; // Alignment for items inside card-link-wrapper is better */
/* justify-content: center; // We want content to flow from top */
height: 100%; /* Fill the height of the grid cell, ensures cards in a row are same height */
}
.card:hover {
background-color: #333333;
}
.card.external::after {
content: "↗";
position: absolute;
top: 12px;
right: 12px;
color: #ffffff;
font-size: 0.7em;
opacity: 0.8;
}
.card-link-wrapper {
display: flex;
flex-direction: column;
align-items:center; /* Centers icon, title, description horizontally */
text-align: center; /* Ensures text within p and h3 is centered */
color: inherit;
text-decoration: none;
width:100%;
height: 100%; /* Make the link wrapper take full card height */
justify-content: flex-start; /* Align content to the top */
}
.card-icon-container{
display:flex;
justify-content:center;
width: 100%;
margin-top: 8px; /* Added some margin at the top if needed */
margin-bottom: 12px; /* Increased space between icon and title */
}
.card-icon {
display: block;
/* margin: 0 auto; // Center handled by card-icon-container */
}
.card-title {
font-weight: 600;
margin-bottom: 8px; /* Increased space below title */
font-size: 16px; /* Consider increasing slightly if descriptions are longer e.g. 17px or 18px */
color: #f0f0f0;
}
.card-description {
/* margin-bottom: 0; // Original value */
font-size: 14px; /* Slightly increased font size for better readability */
color: #aaaaaa;
line-height: 1.5; /* Slightly increased line height */
flex-grow: 1; /* Allows description to take available space */
overflow-y: auto; /* Adds scroll if description is too long, though ideally content fits */
padding-bottom: 8px; /* Add some padding at the bottom of the description area */
}
`}</style>
</>
);
}

docs/package-lock.json (generated)

File diff suppressed because it is too large.

View File

@@ -7,8 +7,8 @@
"license": "MIT",
"dependencies": {
"@vercel/analytics": "^1.1.1",
"docsgpt-react": "^0.5.0",
"next": "^14.2.26",
"docsgpt-react": "^0.5.1",
"next": "^15.3.3",
"nextra": "^2.13.2",
"nextra-theme-docs": "^2.13.2",
"react": "^18.2.0",

View File

@@ -0,0 +1,6 @@
{
"basics": {
"title": "🤖 Agent Basics",
"href": "/Agents/basics"
}
}

View File

@@ -0,0 +1,109 @@
---
title: Understanding DocsGPT Agents
description: Learn about DocsGPT Agents, their types, how to create and manage them, and how they can enhance your interaction with documents and tools.
---
import { Callout } from 'nextra/components';
import Image from 'next/image';
# Understanding DocsGPT Agents 🤖
DocsGPT Agents are advanced, configurable AI entities designed to go beyond simple question-answering. They act as specialized assistants or workers that combine instructions (prompts), knowledge (document sources), and capabilities (tools) to perform a wide range of tasks, automate workflows, and provide tailored interactions.
Think of an Agent as a pre-configured version of DocsGPT, fine-tuned for a specific purpose, such as classifying documents, responding to new form submissions, or validating emails.
## Why Use Agents?
* **Personalization:** Create AI assistants that behave and respond according to specific roles or personas.
* **Task Specialization:** Design agents focused on particular tasks, like customer support, data extraction, or content generation.
* **Knowledge Integration:** Equip agents with specific document sources, making them experts in particular domains.
* **Tool Utilization:** Grant agents access to various tools, allowing them to interact with external services, fetch live data, or perform actions.
* **Automation:** Automate repetitive tasks by defining an agent's behavior and integrating it via webhooks or other means.
* **Shareability:** Share your custom-configured agents with others or use agents shared with you.
Agents provide a more structured and powerful way to leverage LLMs compared to a standard chat interface, as they come with a pre-defined context, instruction set, and set of capabilities.
## Core Components of an Agent
When you create or configure an agent, you'll work with these key components:
**Meta:**
* **Agent Name:** A user-friendly name to identify the agent (e.g., "Support Ticket Classifier," "Product Spec Expert").
* **Describe your agent:** A brief description for you or users to understand the agent's purpose.
**Source:**
* **Select source:** The knowledge base for the agent. You can select from previously uploaded documents or data sources. This is what the agent will "know."
* **Chunks per query:** A numerical value determining how many relevant text chunks from the selected source are sent to the LLM with each query. This helps manage context length and relevance.
**Prompt:**
The main set of instructions or system [prompt](/Guides/Customising-prompts) that defines the agent's persona, objectives, constraints, and how it should behave or respond.
**Tools:** A selection of available [DocsGPT Tools](/Tools/basics) that the agent can use to perform actions or access external information.
**Agent type:** The underlying operational logic or architecture the agent uses. DocsGPT supports different types of agents, each suited for different kinds of tasks.
## Understanding Agent Types
DocsGPT allows for different "types" of agents, each with a distinct way of processing information and generating responses. The code for these agent types can be found in the `application/agents/` directory.
### 1. Classic Agent (`classic_agent.py`)
**How it works:** The Classic Agent follows a traditional Retrieval Augmented Generation (RAG) approach.
1. **Retrieve:** When a query is made, it first searches the selected Source documents for relevant information.
2. **Augment:** This retrieved data is then added to the context, along with the main Prompt and the user's query.
3. **Generate:** The LLM generates a response based on this augmented context. It can also utilize any configured tools if the LLM decides they are necessary.
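In code terms, the Classic loop is roughly the following. This is an illustrative sketch only: `retriever`, `llm`, and their methods stand in for the concrete classes DocsGPT wires together, not the actual `classic_agent.py` internals.

```python
def classic_answer(query: str, retriever, llm, system_prompt: str) -> str:
    # 1. Retrieve: pull relevant chunks from the selected source.
    chunks = retriever.search(query)

    # 2. Augment: fold the retrieved text into the context sent to the model.
    context = "\n\n".join(chunk["text"] for chunk in chunks)
    messages = [
        {"role": "system", "content": f"{system_prompt}\n\nContext:\n{context}"},
        {"role": "user", "content": query},
    ]

    # 3. Generate: the LLM answers, optionally calling configured tools.
    return llm.generate(messages)
```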
**Best for:**
* Direct question-answering over a specific set of documents.
* Tasks where the primary goal is to extract and synthesize information from the provided sources.
* Simpler tool integrations where the decision to use a tool is straightforward.
### 2. ReAct Agent (`react_agent.py`)
**How it works:** The ReAct Agent employs a more sophisticated "Reason and Act" framework. This involves a multi-step process:
1. **Plan (Thought):** Based on the query, its prompt, and available tools/sources, the LLM first generates a plan or a sequence of thoughts on how to approach the problem. You might see this output as a "thought" process during generation.
2. **Act:** The agent then executes actions based on this plan. This might involve querying its sources, using a tool, or performing internal reasoning.
3. **Observe:** It gathers observations from the results of its actions (e.g., data from a tool, snippets from documents).
4. **Repeat (if necessary):** Steps 2 and 3 can be repeated as the agent refines its approach or gathers more information.
5. **Conclude:** Finally, it generates the final answer based on the initial query and all accumulated observations.
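A stripped-down version of that loop, again purely illustrative (the real `react_agent.py` differs in detail, and `llm.plan`, `llm.answer`, and the `tools` mapping are assumed interfaces):

```python
def react_answer(query: str, llm, tools: dict, max_steps: int = 5) -> str:
    observations = []
    for _ in range(max_steps):
        # Plan: the model proposes the next action, or a final answer.
        thought = llm.plan(query=query, observations=observations)
        if thought["action"] == "final_answer":
            return thought["content"]

        # Act: run the chosen tool with the proposed arguments.
        result = tools[thought["action"]](**thought["arguments"])

        # Observe: feed the result back into the next planning round.
        observations.append({"action": thought["action"], "result": result})

    # Conclude: answer with whatever has been gathered so far.
    return llm.answer(query=query, observations=observations)
```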
**Best for:**
* More complex tasks that require multi-step reasoning or problem-solving.
* Scenarios where the agent needs to dynamically decide which tools to use and in what order, based on intermediate results.
* Interactive tasks where the agent needs to "think" through a problem.
<Callout type="info">
Developers looking to introduce new agent architectures can explore the `application/agents/` directory. `classic_agent.py` and `react_agent.py` serve as excellent starting points, demonstrating how to inherit from `BaseAgent` and structure agent logic.
</Callout>
## Navigating and Managing Agents in DocsGPT
You can easily access and manage your agents through the DocsGPT user interface. Recently used agents appear at the top of the left sidebar for quick access. Below these, the "Manage Agents" button will take you to the main Agents page.
### Creating a New Agent
1. Navigate to the "Agents" page.
2. Click the **"New Agent"** button.
3. You will be presented with the "New Agent" configuration screen:
<Image
src="/new-agent.png"
alt="API Tool configuration example for phone validation"
width={800}
height={450}
style={{ margin: '1em auto', display: 'block', borderRadius: '8px' }}
/>
4. Fill in the fields as described in the "Core Components of an Agent" section.
5. Once configured, you can **"Save Draft"** to continue editing later or **"Publish"** to make the agent active.
## Interacting with and Editing Agents
Once an agent is created, you can:
* **Chat with it:** Select the agent to start an interaction.
* **View Logs:** Access usage statistics, monitor token consumption per interaction, and review user message feedback. This is crucial for understanding how your agent is being used and how it is performing.
* **Edit an Agent:**
* Modify any of its configuration settings (name, description, source, prompt, tools, type).
* **Generate a Public Link:** From the edit screen, you can create a shareable public link that allows others to import and use your agent.
* **Get a Webhook URL:** You can also obtain a Webhook URL for the agent. This allows external applications or services to trigger the agent and receive responses programmatically, enabling powerful integrations and automations.
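For instance, an external service could POST JSON to that Webhook URL. A sketch with the `requests` library (the URL below is a placeholder, so copy the real one from the agent's edit screen; the payload shape is up to your integration):

```python
import requests

# Placeholder URL: use the Webhook URL generated on the agent's edit screen.
webhook_url = "https://your-docsgpt-host/api/webhooks/agents/<your-token>"

payload = {"event": "form_submission", "email": "jane@example.com"}
response = requests.post(webhook_url, json=payload, timeout=30)
print(response.status_code, response.text)
```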

View File

@@ -37,7 +37,7 @@ The fastest way to try out DocsGPT is by using the public API endpoint. This req
Open the `.env` file and add the following lines:
```
LLM_PROVIDER=docsgpt
VITE_API_STREAMING=true
```
@@ -93,16 +93,16 @@ There are two Ollama optional files:
3. **Pull the Ollama Model:**
**Crucially, after launching with Ollama, you need to pull the desired model into the Ollama container.** Find the `LLM_NAME` you configured in your `.env` file (e.g., `llama3.2:1b`). Then execute the following command to pull the model *inside* the running Ollama container:
```bash
docker compose -f deployment/docker-compose.yaml -f deployment/optional/docker-compose.optional.ollama-cpu.yaml exec -it ollama ollama pull <LLM_NAME>
```
or (for GPU):
```bash
docker compose -f deployment/docker-compose.yaml -f deployment/optional/docker-compose.optional.ollama-gpu.yaml exec -it ollama ollama pull <LLM_NAME>
```
Replace `<LLM_NAME>` with the actual model name from your `.env` file.
4. **Access DocsGPT in your browser:**

View File

@@ -20,9 +20,9 @@ The easiest and recommended way to configure basic settings is by using a `.env`
**Example `.env` file structure:**
```
LLM_PROVIDER=openai
API_KEY=YOUR_OPENAI_API_KEY
LLM_NAME=gpt-4o
```
### 2. Configuration via `settings.py` file (Advanced)
@@ -37,7 +37,7 @@ While modifying `settings.py` offers more flexibility, it's generally recommende
Here are some of the most fundamental settings you'll likely want to configure:
- **`LLM_PROVIDER`**: This setting determines which Large Language Model (LLM) provider DocsGPT will use. It tells DocsGPT which API to interact with.
- **Common values:**
- `docsgpt`: Use the DocsGPT Public API Endpoint (simple and free, as offered in `setup.sh` option 1).
@@ -49,11 +49,11 @@ Here are some of the most fundamental settings you'll likely want to configure:
- `azure_openai`: Use Azure OpenAI Service.
- `openai` (when using local inference engines like Ollama, Llama.cpp, TGI, etc.): This signals DocsGPT to use an OpenAI-compatible API format, even if the actual LLM is running locally.
- **`LLM_NAME`**: Specifies the specific model to use from the chosen LLM provider. The available models depend on the `LLM_PROVIDER` you've selected.
- **Examples:**
- For `LLM_PROVIDER=openai`: `gpt-4o`
- For `LLM_PROVIDER=google`: `gemini-2.0-flash`
- For local models (e.g., Ollama): `llama3.2:1b` (or any model name available in your setup).
- **`EMBEDDINGS_NAME`**: This setting defines which embedding model DocsGPT will use to generate vector embeddings for your documents. Embeddings are numerical representations of text that allow DocsGPT to understand the semantic meaning of your documents for efficient search and retrieval.
@@ -63,7 +63,7 @@ Here are some of the most fundamental settings you'll likely want to configure:
- **`API_KEY`**: Required for most cloud-based LLM providers. This is your authentication key to access the LLM provider's API. You'll need to obtain this key from your chosen provider's platform.
- **`OPENAI_BASE_URL`**: Specifically used when `LLM_PROVIDER` is set to `openai` but you are connecting to a local inference engine (like Ollama, Llama.cpp, etc.) that exposes an OpenAI-compatible API. This setting tells DocsGPT where to find your local LLM server.
## Configuration Examples
@@ -74,9 +74,9 @@ Let's look at some concrete examples of how to configure these settings in your
To use OpenAI's `gpt-4o` model, you would configure your `.env` file like this:
```
LLM_PROVIDER=openai
API_KEY=YOUR_OPENAI_API_KEY # Replace with your actual OpenAI API key
LLM_NAME=gpt-4o
```
Make sure to replace `YOUR_OPENAI_API_KEY` with your actual OpenAI API key.
@@ -86,14 +86,57 @@ Make sure to replace `YOUR_OPENAI_API_KEY` with your actual OpenAI API key.
To use a local Ollama server with the `llama3.2:1b` model, you would configure your `.env` file like this:
```
LLM_PROVIDER=openai # Using OpenAI compatible API format for local models
API_KEY=None # API Key is not needed for local Ollama
LLM_NAME=llama3.2:1b
OPENAI_BASE_URL=http://host.docker.internal:11434/v1 # Default Ollama API URL within Docker
EMBEDDINGS_NAME=huggingface_sentence-transformers/all-mpnet-base-v2 # You can also run embeddings locally if needed
```
In this case, even though you are using Ollama locally, `LLM_PROVIDER` is set to `openai` because Ollama (and many other local inference engines) are designed to be API-compatible with OpenAI. `OPENAI_BASE_URL` points DocsGPT to the local Ollama server.
## Authentication Settings
DocsGPT includes a JWT (JSON Web Token) based authentication feature for managing sessions or securing local deployments while allowing access.
- **`AUTH_TYPE`**: This setting in your `.env` file or `settings.py` determines the authentication method.
- **Possible values:**
- `None` (or not set): No authentication is used.
- `simple_jwt`: A single, long-lived JWT token is generated and used for all authenticated requests. This is useful for securing a local deployment with a shared secret.
- `session_jwt`: Unique JWT tokens are generated for sessions, typically for individual users or temporary access.
- If `AUTH_TYPE` is set to `simple_jwt` or `session_jwt`, then a `JWT_SECRET_KEY` is required.
- **`JWT_SECRET_KEY`**: This is a crucial secret key used to sign and verify JWTs.
- It can be set directly in your `.env` file or `settings.py`.
- **Automatic Key Generation**: If `AUTH_TYPE` is `simple_jwt` or `session_jwt` and `JWT_SECRET_KEY` is _not_ set in your environment variables or `settings.py`, DocsGPT will attempt to:
1. Read the key from a file named `.jwt_secret_key` in the project's root directory.
2. If the file doesn't exist, it will generate a new 32-byte random key, save it to `.jwt_secret_key`, and use it for the session. This ensures that the key persists across application restarts.
- **Security Note**: It's vital to keep this key secure. If you set it manually, choose a strong, random string.
**How it works:**
- When `AUTH_TYPE` is set to `simple_jwt`, a token is generated at startup (if not already present or configured) and printed to the console. This token should be included in the `Authorization` header of your API requests as a Bearer token (e.g., `Authorization: Bearer YOUR_SIMPLE_JWT_TOKEN`).
- When `AUTH_TYPE` is set to `session_jwt`:
- Clients can request a new token from the `/api/generate_token` endpoint.
- This token should then be included in the `Authorization` header for subsequent requests.
- The backend verifies the JWT token provided in the `Authorization` header for protected routes.
- The `/api/config` endpoint can be used to check the current `auth_type` and whether authentication is required.
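As a sketch of the `session_jwt` flow (assuming a local backend on port 7091, and assuming `/api/generate_token` returns a JSON object with a `token` field, which may differ in your version):

```python
import requests

api_host = "http://localhost:7091"  # assumption: default local backend

# Request a session token, then present it as a Bearer token.
token = requests.get(f"{api_host}/api/generate_token").json()["token"]
headers = {"Authorization": f"Bearer {token}"}

# /api/config reports the current auth_type and whether auth is required.
print(requests.get(f"{api_host}/api/config", headers=headers).json())
```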
**Frontend Token Input for `simple_jwt`:**
<img
src="/jwt-input.png"
alt="Frontend prompt for JWT Token"
style={{
width: '500px',
maxWidth: '100%',
display: 'block',
margin: '1em auto'
}}
/>
If you have configured `AUTH_TYPE=simple_jwt`, the DocsGPT frontend will prompt you to enter the JWT token if it's not already set or is invalid. You'll need to paste the `SIMPLE_JWT_TOKEN` (which is printed to your console when the backend starts) into this field to access the application.
## Exploring More Settings

View File

@@ -1,212 +0,0 @@
# Setting up the DocsGPT Widget in Your React Project
## Introduction:
The DocsGPT Widget is a powerful tool that allows you to integrate AI-powered documentation assistance into your web applications. This guide will walk you through the installation and usage of the DocsGPT Widget in your React project. Whether you're building a web app or a knowledge base, this widget can enhance your user experience.
## Installation
First, make sure you have Node.js and npm installed in your project. Then go to your project and install a new dependency: `npm install docsgpt`.
## Usage
In the file where you want to use the widget, import it:
```js
import { DocsGPTWidget } from "docsgpt";
```
Now, you can use the widget in your component like this:
```jsx
<DocsGPTWidget
apiHost="https://your-docsgpt-api.com"
apiKey=""
avatar = "https://d3dg1063dc54p9.cloudfront.net/cute-docsgpt.png"
title = "Get AI assistance"
description = "DocsGPT's AI Chatbot is here to help"
heroTitle = "Welcome to DocsGPT !"
heroDescription="This chatbot is built with DocsGPT and utilises GenAI,
please review important information using sources."
theme = "dark"
buttonIcon = "https://your-icon"
buttonBg = "#222327"
/>
```
## Props Table for DocsGPT Widget
| **Prop** | **Type** | **Default Value** | **Description** |
|--------------------|------------------|-------------------------------------------------------------|-----------------------------------------------------------------------------------------------------|
| **`apiHost`** | `string` | `"https://gptcloud.arc53.com"` | The URL of your DocsGPT API for vector search and chatbot queries. |
| **`apiKey`** | `string` | `""` | Your API key for authentication. Can be left empty if authentication is not required. |
| **`avatar`** | `string` | `"https://d3dg1063dc54p9.cloudfront.net/cute-docsgpt.png"` | Specifies the URL of the avatar or image representing the chatbot. |
| **`title`** | `string` | `"Get AI assistance"` | Sets the title text displayed in the chatbot interface. |
| **`description`** | `string` | `"DocsGPT's AI Chatbot is here to help"` | Provides a brief description of the chatbot's purpose or functionality. |
| **`heroTitle`** | `string` | `"Welcome to DocsGPT !"` | Displays a welcome title when users interact with the chatbot. |
| **`heroDescription`** | `string` | `"This chatbot is built with DocsGPT and utilises GenAI, please review important information using sources."` | Provides additional introductory text or information about the chatbot's capabilities. |
| **`theme`** | `"dark" \| "light"` | `"dark"` | Allows you to select the theme for the chatbot interface. Accepts `"dark"` or `"light"`. |
| **`buttonIcon`** | `string` | `"https://your-icon"` | Specifies the URL of the icon image for the widget's launch button. |
| **`buttonBg`** | `string` | `"#222327"` | Sets the background color of the widget's launch button. |
| **`size`** | `"small" \| "medium"` | `"medium"` | Sets the size of the widget. Options are `"small"` or `"medium"`. |
---
## Notes
- **Customizing Props:** All properties can be overridden when embedding the widget. For example, you can provide a unique avatar, title, or color scheme to better align with your brand.
- **Default Theme:** The widget defaults to the dark theme unless explicitly set to `"light"`.
- **API Key:** If the `apiKey` is not required for your application, leave it empty.
This table provides a clear overview of the customization options available for tailoring the DocsGPT widget to fit your application.
## How to use DocsGPTWidget with [Nextra](https://nextra.site/) (Next.js + MDX)
Install your widget as described above and then go to your `pages/` folder and create a new file `_app.js` with the following content:
```js
import { DocsGPTWidget } from "docsgpt";
export default function MyApp({ Component, pageProps }) {
return (
<>
<Component {...pageProps} />
<DocsGPTWidget selectDocs="local/docsgpt-sep.zip/"/>
</>
)
}
```
## How to use DocsGPTWidget with HTML
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<meta http-equiv="X-UA-Compatible" content="ie=edge" />
<title>HTML + CSS</title>
<link rel="stylesheet" href="styles.css" />
</head>
<body>
<h1>This is a simple HTML + CSS template!</h1>
<div id="app"></div>
<!-- Include the widget script from dist/modern or dist/legacy -->
<script
src="https://unpkg.com/docsgpt/dist/modern/main.js"
type="module"
></script>
<script type="module">
window.onload = function () {
renderDocsGPTWidget("app", {
apiKey: "",
size: "medium",
});
};
</script>
</body>
</html>
```
To link the widget to your API and your documents, pass parameters to `renderDocsGPTWidget('<div id>', { parameters })`.
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>DocsGPT Widget</title>
<script src="https://unpkg.com/docsgpt/dist/modern/main.js" type="module"></script>
</head>
<body>
<div id="app"></div>
<!-- Include the widget script from dist/modern or dist/legacy -->
<script type="module">
window.onload = function() {
renderDocsGPTWidget('app', {
apiHost: 'http://localhost:7001',
apiKey:"",
avatar: 'https://d3dg1063dc54p9.cloudfront.net/cute-docsgpt.png',
title: 'Get AI assistance',
description: "DocsGPT's AI Chatbot is here to help",
heroTitle: 'Welcome to DocsGPT!',
heroDescription: 'This chatbot is built with DocsGPT and utilises GenAI, please review important information using sources.',
theme:"dark",
buttonIcon:"https://your-icon",
buttonBg:"#222327"
});
}
</script>
</body>
</html>
```
# SearchBar
The `SearchBar` component is an interactive search bar that provides results based on **vector similarity search**. It can also open the AI Chatbot, so users can ask follow-up questions directly.
---
### Importing the Component
```tsx
import { SearchBar } from "docsgpt-react";
```
---
### Usage Example
```tsx
<SearchBar
apiKey="your-api-key"
apiHost="https://gptcloud.arc53.com"
theme="light"
placeholder="Search or Ask AI..."
width="300px"
/>
```
---
## HTML embedding for Search bar
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>SearchBar Embedding</title>
<script src="https://unpkg.com/docsgpt/dist/modern/main.js"></script> <!-- The bundled JavaScript file -->
</head>
<body>
<!-- Element where the SearchBar will render -->
<div id="search-bar-container"></div>
<script>
// Render the SearchBar into the specified element
renderSearchBar('search-bar-container', {
apiKey: 'your-api-key-here',
apiHost: 'https://your-api-host.com',
theme: 'light',
placeholder: 'Search here...',
width: '300px'
});
</script>
</body>
</html>
```
### Props
| **Prop** | **Type** | **Default Value** | **Description** |
|-----------------|-----------|-------------------------------------|--------------------------------------------------------------------------------------------------|
| **`apiKey`** | `string` | `"74039c6d-bff7-44ce-ae55-2973cbf13837"` | Your API key generated from the app. Used for authenticating requests. |
| **`apiHost`** | `string` | `"https://gptcloud.arc53.com"` | The base URL of the server hosting the vector similarity search and chatbot services. |
| **`theme`** | `"dark" \| "light"` | `"dark"` | The theme of the search bar. Accepts `"dark"` or `"light"`. |
| **`placeholder`** | `string` | `"Search or Ask AI..."` | Placeholder text displayed in the search input field. |
| **`width`** | `string` | `"256px"` | Width of the search bar. Accepts any valid CSS width value (e.g., `"300px"`, `"100%"`, `"20rem"`). |
Feel free to reach out if you need help customizing or extending the `SearchBar`!
## Our GitHub
[DocsGPT](https://github.com/arc53/DocsGPT)
You can find the source code in the `extensions/react-widget` folder.
For more information about React, see the [React documentation](https://react.dev/learn).

View File

@@ -32,9 +32,9 @@ Choose the LLM of your choice.
### For open-source LLM change:
<Steps>
### Step 1
For the open-source version, edit `LLM_PROVIDER`, `LLM_NAME` and other settings in the `.env` file. Refer to [⚙️ App Configuration](/Deploying/DocsGPT-Settings) for more information.
### Step 2
Visit [☁️ Cloud Providers](/Models/cloud-providers) for the updated list of online models. Make sure you have the right `API_KEY` and the correct `LLM_PROVIDER`.
For self-hosted please visit [🖥️ Local Inference](/Models/local-inference).
</Steps>

View File

@@ -13,15 +13,15 @@ The primary method for configuring your LLM provider in DocsGPT is through the `
To connect to a cloud LLM provider, you will typically need to configure the following basic settings in your `.env` file:
* **`LLM_NAME`**: This setting is essential and identifies the specific cloud provider you wish to use (e.g., `openai`, `google`, `anthropic`).
* **`MODEL_NAME`**: Specifies the exact model you want to utilize from your chosen provider (e.g., `gpt-4o`, `gemini-2.0-flash`, `claude-3-5-sonnet-latest`). Refer to your provider's documentation for a list of available models.
* **`LLM_PROVIDER`**: This setting is essential and identifies the specific cloud provider you wish to use (e.g., `openai`, `google`, `anthropic`).
* **`LLM_NAME`**: Specifies the exact model you want to utilize from your chosen provider (e.g., `gpt-4o`, `gemini-2.0-flash`, `claude-3-5-sonnet-latest`). Refer to your provider's documentation for a list of available models.
* **`API_KEY`**: Almost all cloud LLM providers require an API key for authentication. Obtain your API key from your chosen provider's platform and securely store it in your `.env` file.
## Explicitly Supported Cloud Providers
DocsGPT offers direct, streamlined support for the following cloud LLM providers, making configuration straightforward. The table below outlines the `LLM_NAME` and example `MODEL_NAME` values to use for each provider in your `.env` file.
DocsGPT offers direct, streamlined support for the following cloud LLM providers, making configuration straightforward. The table below outlines the `LLM_PROVIDER` and example `LLM_NAME` values to use for each provider in your `.env` file.
| Provider | `LLM_NAME` | Example `MODEL_NAME` |
| Provider | `LLM_PROVIDER` | Example `LLM_NAME` |
| :--------------------------- | :------------- | :-------------------------- |
| DocsGPT Public API | `docsgpt` | `None` |
| OpenAI | `openai` | `gpt-4o` |
@@ -35,16 +35,16 @@ DocsGPT offers direct, streamlined support for the following cloud LLM providers
DocsGPT's flexible architecture allows you to connect to any cloud provider that offers an API compatible with the OpenAI API standard. This opens up a vast ecosystem of LLM services.
To connect to an OpenAI-compatible cloud provider, you will still use `LLM_NAME=openai` in your `.env` file. However, you will also need to specify the API endpoint of your chosen provider using the `OPENAI_BASE_URL` setting. You will also likely need to provide an `API_KEY` and `MODEL_NAME` as required by that provider.
To connect to an OpenAI-compatible cloud provider, you will still use `LLM_PROVIDER=openai` in your `.env` file. However, you will also need to specify the API endpoint of your chosen provider using the `OPENAI_BASE_URL` setting. You will also likely need to provide an `API_KEY` and `LLM_NAME` as required by that provider.
**Example for DeepSeek (OpenAI-Compatible API):**
To connect to DeepSeek, which offers an OpenAI-compatible API, your `.env` file could be configured as follows:
```
LLM_NAME=openai
LLM_PROVIDER=openai
API_KEY=YOUR_API_KEY # Your DeepSeek API key
MODEL_NAME=deepseek-chat # Or your desired DeepSeek model name
LLM_NAME=deepseek-chat # Or your desired DeepSeek model name
OPENAI_BASE_URL=https://api.deepseek.com/v1 # DeepSeek's OpenAI API URL
```

View File

@@ -60,7 +60,7 @@ To use OpenAI's `text-embedding-ada-002` embedding model, you need to set `EMBED
**Example `.env` configuration for OpenAI Embeddings:**
```
LLM_NAME=openai
LLM_PROVIDER=openai
API_KEY=YOUR_OPENAI_API_KEY # Your OpenAI API Key
EMBEDDINGS_NAME=openai_text-embedding-ada-002
```

View File

@@ -15,8 +15,8 @@ Setting up a local inference engine with DocsGPT is configured through environme
To connect to a local inference engine, you will generally need to configure these settings in your `.env` file:
* **`LLM_NAME`**: Crucially set this to `openai`. This tells DocsGPT to use the OpenAI-compatible API format for communication, even though the LLM is local.
* **`MODEL_NAME`**: Specify the model name as recognized by your local inference engine. This might be a model identifier or left as `None` if the engine doesn't require explicit model naming in the API request.
* **`LLM_PROVIDER`**: Crucially set this to `openai`. This tells DocsGPT to use the OpenAI-compatible API format for communication, even though the LLM is local.
* **`LLM_NAME`**: Specify the model name as recognized by your local inference engine. This might be a model identifier or left as `None` if the engine doesn't require explicit model naming in the API request.
* **`OPENAI_BASE_URL`**: This is essential. Set this to the base URL of your local inference engine's API endpoint. This tells DocsGPT where to find your local LLM server.
* **`API_KEY`**: Generally, for local inference engines, you can set `API_KEY=None` as authentication is usually not required in local setups.
@@ -24,16 +24,16 @@ To connect to a local inference engine, you will generally need to configure the
DocsGPT is readily configurable to work with the following local inference engines, all communicating via the OpenAI API format. Here are example `OPENAI_BASE_URL` values for each, based on default setups:
| Inference Engine | `LLM_NAME` | `OPENAI_BASE_URL` |
| :---------------------------- | :--------- | :------------------------- |
| LLaMa.cpp | `openai` | `http://localhost:8000/v1` |
| Ollama | `openai` | `http://localhost:11434/v1` |
| Text Generation Inference (TGI)| `openai` | `http://localhost:8080/v1` |
| SGLang | `openai` | `http://localhost:30000/v1` |
| vLLM | `openai` | `http://localhost:8000/v1` |
| Aphrodite | `openai` | `http://localhost:2242/v1` |
| FriendliAI | `openai` | `http://localhost:8997/v1` |
| LMDeploy | `openai` | `http://localhost:23333/v1` |
| Inference Engine | `LLM_PROVIDER` | `OPENAI_BASE_URL` |
| :---------------------------- | :------------- | :------------------------- |
| LLaMa.cpp | `openai` | `http://localhost:8000/v1` |
| Ollama | `openai` | `http://localhost:11434/v1` |
| Text Generation Inference (TGI)| `openai` | `http://localhost:8080/v1` |
| SGLang | `openai` | `http://localhost:30000/v1` |
| vLLM | `openai` | `http://localhost:8000/v1` |
| Aphrodite | `openai` | `http://localhost:2242/v1` |
| FriendliAI | `openai` | `http://localhost:8997/v1` |
| LMDeploy | `openai` | `http://localhost:23333/v1` |
**Important Note on `localhost` vs `host.docker.internal`:**

View File

@@ -0,0 +1,14 @@
{
  "basics": {
    "title": "🔧 Tools Basics",
    "href": "/Tools/basics"
  },
  "api-tool": {
    "title": "🗝️ API Tool",
    "href": "/Tools/api-tool"
  },
  "creating-a-tool": {
    "title": "🛠️ Creating a Custom Tool",
    "href": "/Tools/creating-a-tool"
  }
}

View File

@@ -0,0 +1,153 @@
---
title: 🗝️ Generic API Tool
description: Learn how to configure and use the API Tool in DocsGPT to connect with any RESTful API without writing custom code.
---
import { Callout } from 'nextra/components';
import Image from 'next/image';
# Using the Generic API Tool
The API Tool provides a no-code/low-code solution to make DocsGPT interact with third-party or internal RESTful APIs. It acts as a bridge, allowing the Large Language Model (LLM) to leverage external services based on your chat interactions.
This guide will walk you through its capabilities, configuration, and best practices.
## Introduction to the Generic API Tool
**When to Use It:**
* Ideal for quickly integrating existing APIs where the interaction involves standard HTTP requests (GET, POST, PUT, DELETE).
* Suitable for fetching data to enrich answers (e.g., current weather, stock prices, product details).
* Useful for triggering simple actions in other systems (e.g., sending a notification, creating a basic task).
**Contrast with Custom Python Tools:**
* **API Tool:** Best for straightforward API calls. Configuration is done through the DocsGPT UI.
* **Custom Python Tools:** Preferable when you need complex logic before or after the API call, non-standard authentication (like complex OAuth flows), multi-step API interactions, or intricate data processing not easily managed by the LLM alone. See [Creating a Custom Tool](/Tools/creating-a-tool) for more.
## Capabilities of the API Tool
**Supported HTTP Methods:** You can configure actions using standard HTTP methods such as:
* `GET`: To retrieve data.
* `POST`: To submit data to create a new resource.
* `PUT`: To update an existing resource.
* `DELETE`: To remove a resource.
**Request Configuration:**
* **Headers:** Define static or dynamic HTTP headers for authentication (e.g., API keys), content type specification, etc.
* **Query Parameters:** Specify URL query parameters, which can be static or dynamically filled by the LLM based on user input.
* **Request Body:** Define the structure of the request body (e.g., JSON), with fields that can be static or dynamically populated by the LLM.
**Response Handling:**
* The API Tool executes the request and receives the raw response from the API (typically JSON or plain text).
* This raw response is then passed back to the LLM.
* The LLM uses this response, along with the context of your query and the description of the API tool action, to formulate an answer or decide on follow-up actions. The API tool itself doesn't deeply parse or transform the response beyond basic content type detection (e.g., loading JSON into a parsable object).
## Configuring an API as a Tool
You can configure the API Tool through the DocsGPT user interface, found in **Settings -> Tools**. When you add or modify an API Tool, you'll define specific actions that DocsGPT can perform.
<Callout type="info">
The configuration involves defining how DocsGPT should call an API endpoint. Each configured API call essentially becomes a distinct "action" the LLM can choose to use.
</Callout>
Below is an example of how you might configure an API action, inspired by setting up a phone number validation service:
<Image
  src="/toolIcons/api-tool-example.png"
  alt="API Tool configuration example for phone validation"
  width={800}
  height={450}
  style={{ margin: '1em auto', display: 'block', borderRadius: '8px' }}
/>
_Figure 1: Example configuration for an API Tool action to validate phone numbers._
**Defining an API Endpoint/Action:**
When you configure a new API action, you'll fill in the following fields:
- **`Name`:** A user-friendly name for this specific API action (e.g., "Phone-check" as in the image, or something more specific like "ValidateUSPhoneNumber"). This helps in managing your tools.
- **`Description`:** This is a **critical field**. Provide a clear and concise description of what the API action does, what kind of input it expects (implicitly), and what kind of output it provides. The LLM uses this description to understand when and how to use this action.
- **`URL`:** The full endpoint URL for the API request.
- **`HTTP Method`:** Select the appropriate HTTP method (e.g., GET, POST) from a dropdown.
- **`Headers`:** You can add custom HTTP headers as key-value pairs (Name, Value). Indicate if the value should be `Filled by LLM` or is static. If filled by LLM, provide a `Description` for the LLM.
- **`Query Parameters`:** For `GET` requests or when parameters are sent in the URL.
* **`Name`:** The name of the query parameter (e.g., `api_key`, `phone`).
* **`Type`:** The data type of the parameter (e.g., `string`).
* **`Filled by LLM` (Checkbox):**
- **Unchecked (Static):** The `Value` you provide will be used for every call (e.g., for an `api_key` that doesn't change).
- **Checked (Dynamic):** The LLM will extract the appropriate value from the user's chat query based on the `Description` you provide for this parameter. The `Value` field is typically left empty or contains a placeholder if `Filled by LLM` is checked.
    * **`Description`:** Context for the LLM if the parameter is to be filled dynamically, or for your own reference if static.
    * **`Value`:** The static value if not filled by LLM.
- **`Request Body`:** Used to send data (commonly JSON) to the API. Similar to Query Parameters, you define fields with `Name`, `Type`, whether it's `Filled by LLM`, a `Description` for dynamic fields, and a static `Value` if applicable.
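To make the mapping from configuration to HTTP request concrete, here is a minimal sketch in Python of what a call like the Figure 1 phone-validation action boils down to. The endpoint URL and parameter names are illustrative assumptions rather than values taken from the tool's source; the grounded detail is that the tool issues standard HTTP requests through the `requests` library and passes the raw response back to the LLM.
``` python
import requests

# Hypothetical values mirroring the Figure 1 configuration:
url = "https://phonevalidation.abstractapi.com/v1/"  # assumed endpoint for the example service
static_params = {"api_key": "YOUR_API_KEY"}          # static parameter: sent unchanged on every call
llm_filled_params = {"phone": "+14155555555"}        # parameter the LLM extracted from the chat query

# The tool merges static and LLM-filled parameters and performs the configured
# HTTP method (GET here); the raw response body is handed back to the LLM.
response = requests.get(url, params={**static_params, **llm_filled_params}, timeout=10)
print(response.json())
```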
**Response Handling Guidance for the LLM:**
While the API Tool configuration UI doesn't have explicit fields for defining response parsing rules (like JSONPath extractors), you significantly influence how the LLM handles the response through:
* **Tool Action `Description`:** Clearly state what kind of information the API returns (e.g., "This API returns a JSON object with 'status' and 'location' fields for the phone number."). This helps the LLM know what to look for in the API's output.
* **Prompt Engineering:** For more complex scenarios, you might need to adjust your global or agent-specific prompts to guide DocsGPT on how to interpret and present information from API tool responses. See [Customising Prompts](/Guides/Customising-prompts).
## Using the Configured API Tool in Chat
Once an API action is configured and enabled, DocsGPT's LLM can decide to use it based on your natural language queries.
**Example (based on the phone validation tool in Figure 1):**
1. **User Query:** "Hey DocsGPT, can you check if +14155555555 is a valid phone number?"
2. **DocsGPT (LLM Orchestration):**
* The LLM analyzes the query.
* It matches the intent ("check if ... is a valid phone number") with the description of the "Phone-check" API action.
* It identifies `+14155555555` as the value for the `phone` parameter (which was marked as `Filled by LLM` with the description "Phone number to check").
* DocsGPT constructs the GET API request.
3. **API Tool Execution:**
* The API Tool makes the HTTP GET request.
* The external API (AbstractAPI) processes the request and returns a JSON response, e.g.:
```json
{
  "phone": "+14155555555",
  "valid": true,
  "format": {
    "international": "+1 415-555-5555",
    "national": "(415) 555-5555"
  },
  "country": {
    "code": "US",
    "name": "United States",
    "prefix": "+1"
  },
  "location": "California",
  "type": "Landline"
}
```
4. **DocsGPT Response Formulation:**
* The API Tool passes this JSON response back to the LLM.
* The LLM, guided by the tool's description and the user's original query, extracts relevant information and formulates a user-friendly answer.
* **DocsGPT Chat Response:** "Yes, +14155555555 appears to be a valid landline phone number in California, United States."
## Advanced Tips and Best Practices
**Clear Description is the Key:** The LLM relies heavily on the `Description` field of the API action and its parameters. Make them unambiguous and action-oriented. Clearly state what the tool does and what kind of input it expects (even if implicitly through parameter descriptions).
**Iterative Testing:** After configuring an API tool, test it with various phrasings of user queries to ensure the LLM triggers it correctly and interprets the response as expected.
**Error Handling:**
* If an API call fails, the API Tool will return an error message and status code from the `requests` library or the API itself. The LLM may relay this error or try to explain it.
* Check DocsGPT's backend logs for more detailed error information if you encounter issues.
**Security Considerations:**
* **API Keys:** Be mindful of API keys and other sensitive credentials. The example image shows an API key directly in the configuration. For production or shared environments, avoid exposing configurations that contain sensitive keys.
* **Rate Limits:** Be aware of the rate limits of the APIs you are integrating. Frequent calls from DocsGPT could exceed these limits.
* **Data Privacy:** Consider the data privacy implications of sending user query data to third-party APIs.
**Idempotency:** For tools that modify data (POST, PUT, DELETE), be aware of whether the API operations are idempotent to avoid unintended consequences from repeated calls if the LLM retries an action.
## Limitations
While powerful, the Generic API Tool has some limitations:
- **Complex Authentication:** Advanced authentication flows like OAuth 2.0 (especially 3-legged OAuth requiring user redirection) or custom signature-based authentication often require custom Python tools.
- **Multi-Step API Interactions:** If a task requires multiple API calls that depend on each other (e.g., fetch a list, then for each item, fetch details), this kind of complex chaining and logic is better handled by a custom Python tool.
- **Complex Data Transformations:** If the API response needs significant transformation or processing before being useful to the LLM, a custom Python tool offers more flexibility.
- **Real-time Streaming (SSE, WebSockets):** The tool is designed for request-response interactions, not for maintaining persistent streaming connections.
For scenarios that exceed these limitations, developing a [Custom Python Tool](/Tools/creating-a-tool) is the recommended approach.

View File

@@ -0,0 +1,92 @@
---
title: Tools Basics - Enhancing DocsGPT Capabilities
description: Understand what DocsGPT Tools are, how they work, and explore the built-in tools available to extend DocsGPT's functionality.
---
import { Callout } from 'nextra/components';
import Image from 'next/image';
import { ToolCards } from '../../components/ToolCards';
# Understanding DocsGPT Tools
DocsGPT Tools are powerful extensions that significantly enhance the capabilities of your DocsGPT application.
They allow DocsGPT to move beyond its core function of retrieving information from your documents and enable it to perform actions,
interact with external data sources, and integrate with other services. You can find and configure available tools within
the "Tools" section of the DocsGPT application settings in the user interface.
## What are Tools?
- **Purpose:** The primary purpose of Tools is to bridge the gap between understanding a user's request (natural language processing by the LLM) and executing a tangible action. This could involve fetching live data from the web, sending notifications, running code snippets, querying databases, or interacting with third-party APIs.
- **LLM as an Orchestrator:** The Large Language Model (LLM) at the heart of DocsGPT is designed to act as an intelligent orchestrator. Based on your query and the declared capabilities of the available tools (defined in their metadata), the LLM decides if a tool is needed, which tool to use, and what parameters to pass to it.
- **Action-Oriented Interactions:** Tools enable more dynamic and action-oriented interactions. For example:
* *"What's the latest news on renewable energy?"* - This might trigger a web search tool to fetch current articles.
* *"Fetch the order status for customer ID 12345 from our database."* - This could use a database tool.
* *"Summarize the content of this webpage and send the summary to the #general channel on Telegram."* - This might involve a web scraping tool followed by a Telegram notification tool.
## Overview of Built-in Tools
DocsGPT includes a suite of pre-built tools designed to expand its capabilities out-of-the-box. Below is an overview of the currently available tools.
<ToolCards
  items={[
    {
      title: 'API Tool',
      link: '/Tools/api-tool',
      description: 'A highly flexible tool that allows DocsGPT to interact with virtually any API without needing to write custom Python code.'
    },
    {
      title: 'Brave Search Tool',
      link: 'https://github.com/arc53/DocsGPT/blob/main/application/agents/tools/brave.py',
      description: 'Enables DocsGPT to perform real-time web and image searches using the Brave Search API for up-to-date information.'
    },
    {
      title: 'Cryptoprice Tool',
      link: 'https://github.com/arc53/DocsGPT/blob/main/application/agents/tools/cryptoprice.py',
      description: 'Fetches the current price of specified cryptocurrencies.'
    },
    {
      title: 'Ntfy Tool',
      link: 'https://github.com/arc53/DocsGPT/blob/main/application/agents/tools/ntfy.py',
      description: 'Allows DocsGPT to send push notifications to Ntfy.sh channels, ideal for alerts and updates.'
    },
    {
      title: 'PostgreSQL Tool',
      link: 'https://github.com/arc53/DocsGPT/blob/main/application/agents/tools/postgres.py',
      description: 'Provides capabilities to connect to a PostgreSQL database, execute SQL queries, and retrieve schema information.'
    },
    {
      title: 'Read Webpage Tool', // Renamed from Scraper Tool
      link: 'https://github.com/arc53/DocsGPT/blob/main/application/agents/tools/read_webpage.py',
      description: 'Enables DocsGPT to fetch and extract (scrape) textual content from specified web page URLs.'
    },
    {
      title: 'Telegram Tool',
      link: 'https://github.com/arc53/DocsGPT/blob/main/application/agents/tools/telegram.py',
      description: 'Allows DocsGPT to send messages or images to Telegram chats via a Telegram Bot.'
    }
  ]}
/>
## Using Tools in DocsGPT (User Perspective)
Interacting with tools in DocsGPT is designed to be intuitive:
1. **Natural Language Interaction:** As a user, you typically interact with DocsGPT using natural language queries or commands. The LLM within DocsGPT analyzes your input to determine if a specific task can or should be handled by one of the available and configured tools.
2. **Configuration in UI:**
* Tools are generally managed and configured within the DocsGPT application's settings, found under a "Tools" section in the GUI.
* For tools that interact with external services (like Brave Search, Telegram, or any service via the API Tool), you might need to provide authentication credentials (e.g., API keys, tokens) or specific endpoint information during the tool's setup in the UI.
3. **Prompt Engineering for Tools:** While the LLM aims to intelligently use tools, for more complex or reliable agent-like behaviors, you might need to customize the system prompts. Modifying the prompt can guide the LLM on when and how to prioritize or chain tools to achieve specific outcomes, especially if you're building an agent designed to perform a certain sequence of actions every time. For more on this, see [Customising Prompts](/Guides/Customising-prompts).
## Advancing with Tools
Understanding the basics of DocsGPT Tools opens up many possibilities:
* **Leverage the API Tool:** For quick integrations with numerous external services, explore the [API Tool Detailed Guide](/Tools/api-tool).
* **Develop Custom Tools:** If you have specific needs not covered by built-in tools or the generic API tool, you can develop your own. See our guide on [Creating a Custom Tool](/Tools/creating-a-tool).
* **Build AI Agents:** Tools are the fundamental building blocks for creating sophisticated AI agents within DocsGPT. Explore how these can be combined by looking into the `[Agents section/tab concept - link to be added once available]`.
By harnessing the power of Tools, you can transform DocsGPT into a more versatile and proactive assistant tailored to your unique workflows.

View File

@@ -0,0 +1,186 @@
---
title: 🛠️ Creating a Custom Tool
description: Learn how to create custom Python tools to extend DocsGPT's functionality and integrate with various services or perform specific actions.
---
import { Callout } from 'nextra/components';
import { Steps } from 'nextra/components';
# 🛠️ Creating a Custom Python Tool
This guide provides developers with a comprehensive, step-by-step approach to creating their own custom tools for DocsGPT. By developing custom tools, you can significantly extend DocsGPT's capabilities, enabling it to interact with new data sources and services and to perform specialized actions tailored to your unique needs.
## Introduction to Custom Tool Development
### Why Create Custom Tools?
While DocsGPT offers a range of built-in tools and a versatile API Tool, there are many scenarios where a custom Python tool is the best solution:
* **Integrating with Proprietary Systems:** Connect to internal APIs, databases, or services that are not publicly accessible or require complex authentication.
* **Adding Domain-Specific Functionalities:** Implement logic specific to your industry or use case that isn't covered by general-purpose tools.
* **Automating Unique Workflows:** Create tools that orchestrate multiple steps or interact with systems in a way unique to your operational needs.
* **Connecting to Any System with an Accessible Interface:** If you can interact with a system programmatically using Python (e.g., through libraries, SDKs, or direct HTTP requests), you can likely build a DocsGPT tool for it.
* **Complex Logic or Data Transformation:** When API interactions require intricate logic before sending a request or after receiving a response, or when data needs significant transformation that is difficult for an LLM to handle directly.
### Prerequisites
Before you begin, ensure you have:
* A solid understanding of Python programming.
* Familiarity with the DocsGPT project structure, particularly the `application/agents/tools/` directory where custom tools reside.
* Basic knowledge of how APIs work, as many tools involve interacting with external or internal APIs.
* Your DocsGPT development environment set up. If not, please refer to the [Setting Up a Development Environment](/Deploying/Development-Environment) guide.
## The Anatomy of a DocsGPT Tool
Custom tools in DocsGPT are Python classes that inherit from a base `Tool` class and implement specific methods to define their behavior, capabilities, and configuration needs.
The **foundation** for all custom tools is the abstract base class, located in `application/agents/tools/base.py`. Your custom tool class **must** inherit from this class.
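For orientation, the base class looks roughly like the following. This is a minimal sketch reconstructed from the method descriptions below, not the verbatim source, so treat the exact signatures as indicative.
``` python
# application/agents/tools/base.py (illustrative sketch, not the verbatim source)
from abc import ABC, abstractmethod


class Tool(ABC):
    """Base class every DocsGPT tool inherits from."""

    @abstractmethod
    def execute_action(self, action_name: str, **kwargs) -> dict:
        """Run one of the tool's actions with parameters supplied by the LLM."""

    @abstractmethod
    def get_actions_metadata(self) -> list:
        """Advertise the tool's actions so the LLM knows when and how to call them."""

    @abstractmethod
    def get_config_requirements(self) -> dict:
        """Declare the configuration (API keys, URLs, etc.) the tool needs."""
```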
### Essential Methods to Implement
Your custom tool class needs to implement the following methods:
1. **`__init__(self, config: dict)`**
- **Purpose:** The constructor for your tool. It's called when DocsGPT initializes the tool.
- **Usage:** This method is typically used to receive and store tool-specific configurations passed via the `config` dictionary. This dictionary is populated based on the tool's settings, often configured through the DocsGPT UI or environment variables. For example, you would store API keys, base URLs, or database connection strings here.
- **Example** (`brave.py`)**:**
``` python
class BraveSearchTool(Tool):
    def __init__(self, config):
        self.config = config
        self.token = config.get("token", "")  # API Key for Brave Search
        self.base_url = "https://api.search.brave.com/res/v1"
```
2. **`execute_action(self, action_name: str, **kwargs) -> dict`**
- **Purpose:** This is the workhorse of your tool. The LLM, acting as an agent, calls this method when it decides to use one of the actions your tool provides.
- **Parameters:**
- `action_name` (str): A string specifying which of the tool's actions to run (e.g., "brave_web_search").
- `**kwargs` (dict): A dictionary containing the parameters for that specific action. These parameters are defined in the tool's metadata (`get_actions_metadata()`) and are extracted or inferred by the LLM from the user's query.
- **Return Value:** A dictionary containing the result of the action. It's good practice to include keys like:
- `status_code` (int): An HTTP-like status code (e.g., 200 for success, 500 for error).
- `message` (str): A human-readable message describing the outcome.
- `data` (any): The actual data payload returned by the action (if applicable).
- `error` (str): An error message if the action failed.
- **Example (`read_webpage.py`):**
``` python
def execute_action(self, action_name: str, **kwargs) -> str:
    if action_name != "read_webpage":
        return f"Error: Unknown action '{action_name}'. This tool only supports 'read_webpage'."
    url = kwargs.get("url")
    if not url:
        return "Error: URL parameter is missing."
    # ... (logic to fetch and parse webpage) ...
    try:
        # ...
        return markdown_content
    except Exception as e:
        return f"Error processing URL {url}: {e}"
```
A more structured return:
``` python
# ... inside execute_action
try:
    # ... logic ...
    return {"status_code": 200, "message": "Webpage read successfully", "data": markdown_content}
except Exception as e:
    return {"status_code": 500, "message": f"Error processing URL {url}", "error": str(e)}
```
3. **`get_actions_metadata(self) -> list`**
- **Purpose:** This method is **critical** for the LLM to understand what your tool can do, when to use it, and what parameters it needs. It effectively advertises your tool's capabilities.
- **Return Value:** A list of dictionaries. Each dictionary describes one distinct action the tool can perform and must follow a specific JSON schema structure.
- `name` (str): A unique and descriptive name for the action (e.g., `mytool_get_user_details`). It's a common convention to prefix with the tool name to avoid collisions.
- `description` (str): A clear, concise, and unambiguous description of what the action does. **Write this for the LLM.** The LLM uses this description to decide if this action is appropriate for a given user query.
- `parameters` (dict): A JSON Schema object defining the parameters that the action expects. This schema tells the LLM what arguments are needed, their types, and which are required.
- `type`: Should always be `"object"`.
- `properties`: A dictionary where each key is a parameter name, and the value is an object defining its `type` (e.g., "string", "integer", "boolean") and `description`.
- `required`: A list of strings, where each string is the name of a parameter that is mandatory for the action.
- **Example (`postgres.py` - partial):**
``` python
def get_actions_metadata(self):
    return [
        {
            "name": "postgres_execute_sql",
            "description": "Execute an SQL query against the PostgreSQL database...",
            "parameters": {
                "type": "object",
                "properties": {
                    "sql_query": {
                        "type": "string",
                        "description": "The SQL query to execute.",
                    },
                },
                "required": ["sql_query"],
                "additionalProperties": False,  # Good practice to prevent unexpected params
            },
        },
        # ... other actions like postgres_get_schema
    ]
```
4. **`get_config_requirements(self) -> dict`**
- **Purpose:** Defines the configuration parameters that your tool needs to function (e.g., API keys, specific base URLs, connection strings, default settings). This information can be used by the DocsGPT UI to dynamically render configuration fields for your tool or for validation.
- **Return Value:** A dictionary where keys are the configuration item names (which will be keys in the `config` dict passed to `__init__`) and values are dictionaries describing each requirement:
- `type` (str): The expected data type of the config value (e.g., "string", "boolean", "integer").
- `description` (str): A human-readable description of what this configuration item is for.
- `secret` (bool, optional): Set to `True` if the value is sensitive (e.g., an API key) and should be masked or handled specially in UIs. Defaults to `False`.
- **Example (`brave.py`):**
``` python
def get_config_requirements(self):
    return {
        "token": {  # This 'token' will be a key in the config dict for __init__
            "type": "string",
            "description": "Brave Search API key for authentication",
            "secret": True
        },
    }
```
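Putting the four methods together, a complete minimal tool might look like the sketch below. The import path follows the base class location described above; the tool itself (a trivial echo action) and all of its names are hypothetical, intended only as a template.
``` python
# application/agents/tools/echo.py (hypothetical example tool)
from application.agents.tools.base import Tool


class EchoTool(Tool):
    """Toy tool that repeats back the text it is given; useful as a starting template."""

    def __init__(self, config):
        self.config = config
        # Populated from the requirement declared in get_config_requirements below.
        self.prefix = config.get("prefix", "Echo:")

    def execute_action(self, action_name: str, **kwargs) -> dict:
        if action_name != "echo_text":
            return {"status_code": 400, "error": f"Unknown action '{action_name}'"}
        text = kwargs.get("text")
        if not text:
            return {"status_code": 400, "error": "Missing required parameter 'text'"}
        return {
            "status_code": 200,
            "message": "Echoed successfully",
            "data": f"{self.prefix} {text}",
        }

    def get_actions_metadata(self):
        return [
            {
                "name": "echo_text",
                "description": "Repeats back the provided text. Use when the user explicitly asks to echo something.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "text": {
                            "type": "string",
                            "description": "The text to echo back.",
                        },
                    },
                    "required": ["text"],
                    "additionalProperties": False,
                },
            },
        ]

    def get_config_requirements(self):
        return {
            "prefix": {
                "type": "string",
                "description": "String prepended to every echoed message.",
                "secret": False,
            },
        }
```
Saved under `application/agents/tools/`, a file like this should be picked up automatically by the discovery rules described next.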
## Tool Registration and Discovery
DocsGPT's `ToolManager` (located in `application/agents/tools/tool_manager.py`) automatically discovers and loads tools.
As long as your custom tool:
1. Is placed in a Python file within the `application/agents/tools/` directory (and the filename is not `base.py` or starts with `__`).
2. Correctly inherits from the `Tool` base class.
3. Implements all the abstract methods (`execute_action`, `get_actions_metadata`, `get_config_requirements`).
then the `ToolManager` should load it automatically when DocsGPT starts.
## Configuration & Secrets Management
- **Configuration Source:** The `config` dictionary passed to your tool's `__init__` method is typically populated from settings defined in the DocsGPT UI (if available for the tool) or from environment variables/configuration files that DocsGPT loads (see [⚙️ App Configuration](/Deploying/DocsGPT-Settings)). The keys in this dictionary should match the names you define in `get_config_requirements()`.
- **Secrets:** Never hardcode secrets (like API keys or passwords) directly into your tool's Python code. Instead, define them as configuration requirements (using `secret: True` in `get_config_requirements()`) and let DocsGPT's configuration system inject them via the `config` dictionary at runtime. This ensures that secrets are managed securely and are not exposed in your codebase.
## Best Practices for Tool Development
- **Atomicity:** Design tool actions to be as atomic (single, well-defined purpose) as possible. This makes them easier for the LLM to understand and combine.
- **Clarity in Metadata:** Ensure action names and descriptions in `get_actions_metadata()` are extremely clear, specific, and unambiguous. This is the primary way the LLM understands your tool.
- **Robust Error Handling:** Implement comprehensive error handling within your `execute_action` logic (and the private methods it calls). Return informative error messages in the result dictionary so the LLM or user can understand what went wrong.
- **Security:**
- Be mindful of the security implications of your tool, especially if it interacts with sensitive systems or can execute arbitrary code/queries.
- Validate and sanitize any inputs, especially if they are used to construct database queries or shell commands, to prevent injection attacks.
- **Performance:** Consider the performance implications of your tool's actions. If an action is slow, it will impact the user experience. Optimize where possible.
## (Optional) Contributing Your Tool
If you develop a general-purpose custom tool that you believe could be valuable to the broader DocsGPT community:
1. Ensure it's well-documented (both in code and with clear metadata).
2. Make sure it adheres to the best practices outlined above.
3. Consider opening a Pull Request to the [DocsGPT GitHub repository](https://github.com/arc53/DocsGPT) with your new tool, including any necessary documentation updates.
By following this guide, you can create powerful custom tools that extend DocsGPT's capabilities to your specific operational environment.

View File

@@ -4,7 +4,7 @@ export default function MyApp({ Component, pageProps }) {
return (
<>
<Component {...pageProps} />
<DocsGPTWidget showSources={true} apiKey="6dd66edf-d374-4904-93af-ab0c6d41ee56" theme="dark" size="medium" />
<DocsGPTWidget showSources={true} apiKey="5d8270cb-735f-484e-9dc9-5b407f24e652" theme="dark" size="medium" />
</>
)
}

View File

@@ -4,6 +4,8 @@
"quickstart": "Quickstart",
"Deploying": "Deploying",
"Models": "Models",
"Tools": "Tools",
"Agents": "Agents",
"Extensions": "Extensions",
"https://gptcloud.arc53.com/": {
"title": "API",

View File

@@ -73,9 +73,44 @@ The easiest way to launch DocsGPT is using the provided `setup.sh` script. This
## Launching DocsGPT (Windows)
For Windows users, we recommend following the Docker deployment guide for detailed instructions. Please refer to the [Docker Deployment documentation](/Deploying/Docker-Deploying) for step-by-step instructions on setting up DocsGPT on Windows using Docker.
For Windows users, we provide a PowerShell script that offers the same functionality as the macOS/Linux setup script.
**Important for Windows:** Ensure Docker Desktop is installed and running correctly on your Windows system before proceeding.
**Steps:**
1. **Download the DocsGPT Repository:**
First, you need to download the DocsGPT repository to your local machine. You can do this using Git:
```powershell
git clone https://github.com/arc53/DocsGPT.git
cd DocsGPT
```
2. **Run the `setup.ps1` script:**
Execute the PowerShell setup script:
```powershell
PowerShell -ExecutionPolicy Bypass -File .\setup.ps1
```
3. **Follow the interactive setup:**
Just like the Linux/macOS script, the PowerShell script will guide you through setting up DocsGPT.
The script will handle environment configuration and start DocsGPT based on your selections.
4. **Access DocsGPT in your browser:**
Once the setup is complete and Docker containers are running, navigate to [http://localhost:5173/](http://localhost:5173/) in your web browser to access the DocsGPT web application.
5. **Stopping DocsGPT:**
To stop DocsGPT, run the `docker compose down` command displayed at the end of the setup script's execution.
**Important for Windows:** Ensure Docker Desktop is installed and running correctly on your Windows system before proceeding. The script will attempt to start Docker if it's not running, but you may need to start it manually if there are issues.
**Alternative Method:**
If you prefer a more manual approach, you can follow our [Docker Deployment documentation](/Deploying/Docker-Deploying) for detailed instructions on setting up DocsGPT on Windows using Docker commands directly.
## Advanced Configuration

BIN docs/public/jwt-input.png (new file, 11 KiB; binary not shown)

BIN docs/public/new-agent.png (new file, 84 KiB; binary not shown)

BIN (new image file, 94 KiB; binary not shown)

View File

@@ -0,0 +1,6 @@
<?xml version="1.0" encoding="utf-8"?>
<svg viewBox="1 6 38 28" xmlns="http://www.w3.org/2000/svg">
<path d="M3,33.5c-0.827,0-1.5-0.673-1.5-1.5V8c0-0.827,0.673-1.5,1.5-1.5h34c0.827,0,1.5,0.673,1.5,1.5v24 c0,0.827-0.673,1.5-1.5,1.5H3z" style="fill: rgb(7, 106, 255);"/>
<path d="M37,7c0.551,0,1,0.449,1,1v24c0,0.551-0.449,1-1,1H3c-0.551,0-1-0.449-1-1V8c0-0.551,0.449-1,1-1 H37 M37,6H3C1.895,6,1,6.895,1,8v24c0,1.105,0.895,2,2,2h34c1.105,0,2-0.895,2-2V8C39,6.895,38.105,6,37,6L37,6z" style="fill: rgb(7, 106, 255);"/>
<path d="M 19.296 13.226 C 20.066 13.06 21.108 12.955 22.147 12.955 C 23.772 12.955 25.153 13.185 26.047 14.038 C 26.88 14.766 27.255 15.931 27.255 17.118 C 27.255 18.638 26.798 19.718 26.07 20.489 C 25.196 21.426 23.801 21.842 22.656 21.842 C 22.47 21.842 22.302 21.842 22.115 21.821 L 22.115 27.045 L 19.297 27.045 L 19.297 13.226 L 19.296 13.226 Z M 22.114 19.616 C 22.259 19.637 22.405 19.637 22.571 19.637 C 23.945 19.637 24.55 18.657 24.55 17.347 C 24.55 16.119 24.049 15.162 22.78 15.162 C 22.532 15.162 22.281 15.203 22.114 15.266 L 22.114 19.616 Z M 29.158 12.955 L 31.976 12.955 L 31.976 27.045 L 29.158 27.045 L 29.158 12.955 Z M 15.001 27.045 L 17.887 27.045 L 14.91 12.955 L 11.342 12.955 L 8.024 27.045 L 10.91 27.045 L 11.524 24.227 L 14.408 24.227 L 15.001 27.045 Z M 13 15.547 L 13.068 15.547 C 13.205 16.467 13.409 17.888 13.568 18.745 L 14.021 21.409 L 11.942 21.409 L 12.457 18.746 C 12.614 17.93 12.841 16.488 13 15.547 Z" style="fill: rgb(255, 255, 255);"/>
</svg>


View File

@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" viewBox="0 0 194.18 227.53"><defs><style>.cls-1{fill-rule:evenodd;fill:url(#linear-gradient);}.cls-2{fill:#fff;}</style><linearGradient id="linear-gradient" y1="116.23" x2="194.18" y2="116.23" gradientTransform="matrix(1, 0, 0, -1, 0, 230)" gradientUnits="userSpaceOnUse"><stop offset="0" stop-color="#ff5601"/><stop offset="0.5" stop-color="#ff4000"/><stop offset="1" stop-color="#ff1f01"/></linearGradient></defs><g id="Layer_2" data-name="Layer 2"><g id="Layer_1-2" data-name="Layer 1"><path class="cls-1" d="M187.39,54.58l5.34-13.1s-6.8-7.27-15-15.52S152,22.56,152,22.56L132,0H62.14L42.23,22.56S24.76,17.71,16.51,26s-15,15.52-15,15.52L6.8,54.58,0,74s20,75.65,22.33,84.89c4.61,18.19,7.77,25.22,20.88,34.44S80.1,218.55,84,221s8.74,6.56,13.11,6.56,9.22-4.13,13.11-6.56,27.67-18.43,40.78-27.65,16.26-16.25,20.87-34.44C174.19,149.64,194.18,74,194.18,74Z"/><path class="cls-2" d="M121.85,41c2.91,0,24.51-4.12,24.51-4.12S172,67.8,172,74.41c0,5.47-2.21,7.6-4.8,10.12-.54.53-1.1,1.08-1.66,1.67l-19.2,20.37-.63.64c-1.91,1.92-4.73,4.76-2.74,9.47l.41,1c2.18,5.1,4.87,11.39,1.44,17.78-3.64,6.78-9.89,11.31-13.9,10.56s-13.41-5.66-16.87-7.9S99.6,126.8,99.6,123.35c0-2.89,7.88-7.68,11.71-10,.77-.47,1.37-.83,1.71-1.07l1.88-1.18c3.49-2.17,9.8-6.09,10-7.83.2-2.14.12-2.77-2.69-8.06-.6-1.13-1.3-2.33-2-3.58-2.68-4.61-5.69-9.78-5-13.48.75-4.18,7.3-6.57,12.85-8.6l2-.75,5.78-2.17c5.54-2.07,11.69-4.37,12.71-4.84,1.4-.65,1-1.27-3.22-1.67l-2.06-.21c-5.27-.56-15-1.59-19.71-.28l-3.06.84c-5.31,1.43-11.81,3.19-12.44,4.21-.11.18-.22.33-.32.47-.6.85-1,1.41-.32,5,.19,1.08.6,3.19,1.1,5.81,1.46,7.65,3.75,19.58,4,22.26,0,.38.08.74.13,1.09.36,3,.61,5-2.87,5.77l-.91.21c-3.92.9-9.67,2.22-11.75,2.22s-7.83-1.32-11.76-2.22l-.9-.21c-3.48-.79-3.23-2.78-2.87-5.77,0-.35.09-.71.13-1.09.29-2.68,2.58-14.65,4-22.3.5-2.59.9-4.7,1.1-5.77.66-3.6.27-4.16-.33-5-.1-.14-.21-.29-.32-.47-.62-1-7.13-2.78-12.43-4.21l-3.07-.84C66,58.31,56.25,59.34,51,59.9l-2.06.21c-4.26.4-4.62,1-3.22,1.67,1,.47,7.17,2.77,12.71,4.84l5.78,2.17,2,.75c5.55,2,12.1,4.42,12.85,8.6.67,3.7-2.34,8.87-5,13.48-.72,1.25-1.43,2.45-2,3.58-2.82,5.29-2.9,5.92-2.7,8.06.16,1.74,6.47,5.66,10,7.83.82.5,1.48.92,1.88,1.18s.94.6,1.71,1.06c3.83,2.33,11.71,7.13,11.71,10,0,3.45-11,12.49-14.42,14.73S67.3,145.24,63.29,146,53,142.2,49.39,135.42c-3.43-6.38-.74-12.68,1.44-17.78l.41-1c2-4.71-.83-7.55-2.74-9.47l-.63-.64L28.67,86.2c-.56-.59-1.12-1.14-1.66-1.67-2.59-2.52-4.79-4.65-4.79-10.12,0-6.61,25.6-37.53,25.6-37.53S69.42,41,72.33,41c2.33,0,6.82-1.55,11.49-3.16l3.56-1.21a34.33,34.33,0,0,1,9.71-2,34.33,34.33,0,0,1,9.71,2c1.18.39,2.37.81,3.56,1.21C115,39.45,119.52,41,121.85,41Z"/><path class="cls-2" d="M118.14,150.39c4.57,2.35,7.81,4,9,4.78,1.59,1,.62,2.86-.82,3.88s-20.85,16-22.73,17.69l-.76.68c-1.82,1.64-4.13,3.72-5.77,3.72s-4-2.08-5.77-3.72l-.76-.68c-1.88-1.66-21.28-16.67-22.73-17.69s-2.41-2.89-.82-3.88c1.23-.77,4.47-2.44,9-4.79l4.34-2.24c6.84-3.54,15.37-6.54,16.7-6.54s9.86,3,16.7,6.54Z"/></g></g></svg>


View File

@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 122.88 122.88"><path d="M17.89 0h88.9c8.85 0 16.1 7.24 16.1 16.1v90.68c0 8.85-7.24 16.1-16.1 16.1H16.1c-8.85 0-16.1-7.24-16.1-16.1v-88.9C0 8.05 8.05 0 17.89 0zm57.04 66.96l16.46 4.96c-1.1 4.61-2.84 8.47-5.23 11.56-2.38 3.1-5.32 5.43-8.85 7-3.52 1.57-8.01 2.36-13.45 2.36-6.62 0-12.01-.96-16.21-2.87-4.19-1.92-7.79-5.3-10.83-10.13-3.04-4.82-4.57-11.02-4.57-18.54 0-10.04 2.67-17.76 8.02-23.17 5.36-5.39 12.93-8.09 22.71-8.09 7.65 0 13.68 1.54 18.06 4.64 4.37 3.1 7.64 7.85 9.76 14.27l-16.55 3.66c-.58-1.84-1.19-3.18-1.82-4.03-1.06-1.43-2.35-2.53-3.86-3.3-1.53-.78-3.22-1.16-5.11-1.16-4.27 0-7.54 1.71-9.8 5.12-1.71 2.53-2.57 6.52-2.57 11.94 0 6.73 1.02 11.33 3.07 13.83 2.05 2.49 4.92 3.73 8.63 3.73 3.59 0 6.31-1 8.15-3.03 1.83-1.99 3.16-4.92 3.99-8.75z" fill-rule="evenodd" clip-rule="evenodd"/></svg>


File diff suppressed because one or more lines are too long (image, 11 KiB).

View File

@@ -0,0 +1,29 @@
<?xml version="1.0" encoding="utf-8"?><!-- Uploaded to: SVG Repo, www.svgrepo.com, Generator: SVG Repo Mixer Tools -->
<svg width="800px" height="800px" viewBox="-8.78 0 70 70" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:cc="http://creativecommons.org/ns#" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://www.w3.org/2000/svg">
<metadata>
<rdf:RDF>
<cc:Work>
<dc:subject>
Data
</dc:subject>
<dc:identifier>
sql-database-generic
</dc:identifier>
<dc:title>
SQL Database (Generic)
</dc:title>
<dc:format>
image/svg+xml
</dc:format>
<dc:publisher>
Amido Limited
</dc:publisher>
<dc:creator>
Richard Slater
</dc:creator>
<dc:type rdf:resource="http://purl.org/dc/dcmitype/StillImage"/>
</cc:Work>
</rdf:RDF>
</metadata>
<path d="m 852.97077,1013.9363 c -6.55238,-0.4723 -13.02857,-2.1216 -17.00034,-4.3296 -2.26232,-1.2576 -3.98589,-2.8032 -4.66223,-4.1807 l -0.4024,-0.8196 0,-25.70807 0,-25.7081 0.31843,-0.6465 c 1.42297,-2.889 5.96432,-5.4935 12.30378,-7.0562 2.15195,-0.5305 5.2586,-1.0588 7.79304,-1.3252 2.58797,-0.2721 9.44765,-0.2307 12.02919,0.073 6.86123,0.8061 12.69967,2.6108 16.29768,5.0377 1.38756,0.9359 2.81137,2.4334 3.29371,3.4642 l 0.41358,0.8838 -0.0354,25.6303 -0.0354,25.63047 -0.33195,0.6744 c -0.18257,0.3709 -0.73406,1.1007 -1.22553,1.6216 -2.99181,3.1715 -9.40919,5.5176 -17.8267,6.5172 -1.71567,0.2038 -9.16916,0.3686 -10.92937,0.2417 z m 12.07501,-22.02839 c -0.0252,-0.0657 -1.00472,-0.93831 -2.17671,-1.93922 -1.17199,-1.00091 -2.18138,-1.86687 -2.24309,-1.92436 -0.0617,-0.0575 0.15481,-0.26106 0.48117,-0.45237 0.32635,-0.19131 0.95163,-0.7235 1.3895,-1.18265 1.2805,-1.34272 1.88466,-3.00131 1.88466,-5.17388 0,-2.1388 -0.65162,-3.8645 -1.95671,-5.1818 -1.31533,-1.3278 -2.82554,-1.8983 -5.02486,-1.8983 -3.39007,0 -5.99368,1.9781 -6.82468,5.1851 -0.28586,1.1031 -0.28432,3.33211 0.003,4.31023 0.74941,2.55136 2.79044,4.40434 5.33062,4.83946 0.8596,0.14724 0.97605,0.21071 1.5621,0.85144 0.34829,0.38078 1.06301,1.14085 1.58827,1.68904 l 0.95501,0.9967 2.53878,0 c 1.39633,0 2.51816,-0.0537 2.49296,-0.11939 z m -8.70653,-7.10848 c -0.61119,-0.31868 -0.84225,-0.56599 -1.19079,-1.27453 -0.26919,-0.54724 -0.31522,-0.85851 -0.31824,-2.15197 -0.003,-1.3143 0.0388,-1.5983 0.31987,-2.169 0.45985,-0.9339 1.09355,-1.376 2.07384,-1.4469 1.36454,-0.099 2.15217,0.5707 2.56498,2.1801 0.50612,1.97321 -0.0504,4.07107 -1.26471,4.76729 -0.63707,0.36527 -1.58737,0.40659 -2.18495,0.095 z m -11.25315,3.66269 c 2.66179,-0.5048 4.1728,-2.0528 4.1728,-4.27495 0,-1.97137 -0.97548,-3.12004 -3.6716,-4.32364 -1.54338,-0.689 -2.10241,-1.1215 -2.10241,-1.6268 0,-0.4188 0.53052,-0.8777 1.14813,-0.993 0.60302,-0.1126 2.20237,0.1652 3.14683,0.5467 l 0.79167,0.3198 0,-1.7524 0,-1.7525 -0.85923,-0.1906 c -0.53103,-0.1178 -1.64689,-0.1885 -2.92137,-0.1849 -1.80528,0 -2.15881,0.044 -2.83818,0.3138 -1.98445,0.7878 -2.92613,2.1298 -2.91107,4.1485 0.0141,1.8898 1.01108,3.06864 3.49227,4.12912 1.46399,0.62572 2.05076,1.10218 2.05076,1.66522 0,1.1965 -1.99362,1.34375 -4.10437,0.30315 -0.57805,-0.28498 -1.09739,-0.54137 -1.1541,-0.56976 -0.0567,-0.0284 -0.10311,0.79023 -0.10311,1.81917 0,1.86239 0.002,1.87137 0.33919,1.99974 1.26979,0.48278 4.07626,0.69787 5.52379,0.42335 z m 30.4308,-1.72766 0,-1.58098 -2.40584,0 -2.40583,0 0,-5.43035 0,-5.4303 -2.13089,0 -2.13088,0 0,7.0113 0,7.01131 4.53672,0 4.53672,0 0,-1.58098 z m -14.84745,-27.70503 c 4.23447,-0.2937 7.4086,-0.8482 10.20178,-1.7821 2.78264,-0.9304 4.42643,-2.0562 4.79413,-3.2834 0.14166,-0.4729 0.13146,-0.6523 -0.0665,-1.1708 -0.88775,-2.3245 -5.84694,-4.1104 -13.42493,-4.8345 -3.24154,-0.3098 -9.13671,-0.2094 -12.22745,0.2081 -4.71604,0.6372 -8.54333,1.8208 -10.2451,3.1683 -3.44251,2.726 0.19793,5.7242 8.66397,7.1354 3.67084,0.6119 8.42674,0.828 12.30414,0.559 z" fill="#00bcf2" transform="translate(-830.906 -943.981)"/>
</svg>


View File

@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" height="24px" viewBox="0 -960 960 960" width="24px" fill="#e3e3e3"><path d="M480-80q-82 0-155-31.5t-127.5-86Q143-252 111.5-325T80-480q0-83 31.5-155.5t86-127Q252-817 325-848.5T480-880q83 0 155.5 31.5t127 86q54.5 54.5 86 127T880-480q0 82-31.5 155t-86 127.5q-54.5 54.5-127 86T480-80Zm0-82q26-36 45-75t31-83H404q12 44 31 83t45 75Zm-104-16q-18-33-31.5-68.5T322-320H204q29 50 72.5 87t99.5 55Zm208 0q56-18 99.5-55t72.5-87H638q-9 38-22.5 73.5T584-178ZM170-400h136q-3-20-4.5-39.5T300-480q0-21 1.5-40.5T306-560H170q-5 20-7.5 39.5T160-480q0 21 2.5 40.5T170-400Zm216 0h188q3-20 4.5-39.5T580-480q0-21-1.5-40.5T574-560H386q-3 20-4.5 39.5T380-480q0 21 1.5 40.5T386-400Zm268 0h136q5-20 7.5-39.5T800-480q0-21-2.5-40.5T790-560H654q3 20 4.5 39.5T660-480q0 21-1.5 40.5T654-400Zm-16-240h118q-29-50-72.5-87T584-782q18 33 31.5 68.5T638-640Zm-234 0h152q-12-44-31-83t-45-75q-26 36-45 75t-31 83Zm-200 0h118q9-38 22.5-73.5T376-782q-56 18-99.5 55T204-640Z"/></svg>


View File

@@ -0,0 +1,10 @@
<svg width="24" height="25" viewBox="0 0 24 25" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M12 0.5C8.81812 0.5 5.76375 1.76506 3.51562 4.01469C1.2652 6.26522 0.000643966 9.31734 0 12.5C0 15.6813 1.26562 18.7357 3.51562 20.9853C5.76375 23.2349 8.81812 24.5 12 24.5C15.1819 24.5 18.2362 23.2349 20.4844 20.9853C22.7344 18.7357 24 15.6813 24 12.5C24 9.31869 22.7344 6.26431 20.4844 4.01469C18.2362 1.76506 15.1819 0.5 12 0.5Z" fill="url(#paint0_linear_5586_9958)"/>
<path d="M5.43282 12.373C8.93157 10.849 11.2641 9.8443 12.4303 9.3588C15.7641 7.97261 16.4559 7.73186 16.9078 7.7237C17.0072 7.72211 17.2284 7.74667 17.3728 7.86339C17.4928 7.96183 17.5266 8.09495 17.5434 8.18842C17.5584 8.2818 17.5791 8.49461 17.5622 8.66074C17.3822 10.5582 16.6003 15.1629 16.2028 17.2882C16.0359 18.1874 15.7041 18.4889 15.3834 18.5184C14.6859 18.5825 14.1572 18.0579 13.4822 17.6155C12.4266 16.9231 11.8303 16.4922 10.8047 15.8167C9.6197 15.0359 10.3884 14.6067 11.0634 13.9055C11.2397 13.7219 14.3109 10.9291 14.3691 10.6758C14.3766 10.6441 14.3841 10.526 14.3128 10.4637C14.2434 10.4013 14.1403 10.4227 14.0653 10.4395C13.9584 10.4635 12.2728 11.5788 9.00282 13.7851C8.52469 14.114 8.09157 14.2743 7.70157 14.2659C7.27407 14.2567 6.44907 14.0236 5.83595 13.8245C5.08595 13.5802 4.48782 13.451 4.54032 13.036C4.56657 12.82 4.8647 12.599 5.43282 12.373Z" fill="white"/>
<defs>
<linearGradient id="paint0_linear_5586_9958" x1="1200" y1="0.5" x2="1200" y2="2400.5" gradientUnits="userSpaceOnUse">
<stop stop-color="#2AABEE"/>
<stop offset="1" stop-color="#229ED9"/>
</linearGradient>
</defs>
</svg>


File diff suppressed because it is too large.

View File

@@ -1,6 +1,6 @@
{
"name": "docsgpt",
"version": "0.5.0",
"version": "0.5.1",
"private": false,
"description": "DocsGPT 🦖 is an innovative open-source tool designed to simplify the retrieval of information from project documentation using advanced GPT models 🤖.",
"source": "./src/index.html",

View File

@@ -4,19 +4,19 @@ import { WidgetCore } from './DocsGPTWidget';
import { SearchBarProps } from '@/types';
import { getSearchResults } from '../requests/searchAPI';
import { Result } from '@/types';
import MarkdownIt from 'markdown-it';
import { getOS, processMarkdownString } from '../utils/helper';
import DOMPurify from 'dompurify';
import {
CodeIcon,
import {
CodeIcon,
TextAlignLeftIcon,
HeadingIcon,
ReaderIcon,
ListBulletIcon,
QuoteIcon
ReaderIcon,
ListBulletIcon,
QuoteIcon
} from '@radix-ui/react-icons';
const themes = {
dark: {
name: 'dark',
bg: '#202124',
text: '#EDEDED',
primary: {
@@ -29,6 +29,7 @@ const themes = {
}
},
light: {
name: 'light',
bg: '#EAEAEA',
text: '#171717',
primary: {
@@ -44,15 +45,16 @@ const themes = {
const GlobalStyle = createGlobalStyle`
.highlight {
color:#007EE6;
color: ${props => props.theme.name === 'dark' ? '#4B9EFF' : '#0066CC'};
font-weight: 500;
}
`;
const loadGeistFont = () => {
const link = document.createElement('link');
link.href = 'https://fonts.googleapis.com/css2?family=Geist:wght@100..900&display=swap';
link.rel = 'stylesheet';
document.head.appendChild(link);
const link = document.createElement('link');
link.href = 'https://fonts.googleapis.com/css2?family=Geist:wght@100..900&display=swap';
link.rel = 'stylesheet';
document.head.appendChild(link);
};
const Main = styled.div`
@@ -81,12 +83,27 @@ const Container = styled.div`
position: relative;
display: inline-block;
`
const SearchOverlay = styled.div`
position: fixed;
top: 0;
left: 0;
width: 100%;
height: 100%;
background-color: #0000001A;
backdrop-filter: blur(8px);
-webkit-backdrop-filter: blur(8px);
z-index: 99;
`;
const SearchResults = styled.div`
position: fixed;
display: flex;
flex-direction: column;
background-color: ${props => props.theme.primary.bg};
border: 1px solid ${props => props.theme.bg};
background-color: ${props => props.theme.name === 'dark' ?
'rgba(0, 0, 0, 0.15)' :
'rgba(255, 255, 255, 0.4)'};
border: 1px solid rgba(255, 255, 255, 0.18);
border-radius: 15px;
padding: 8px 0px 8px 0px;
width: 792px;
@@ -97,8 +114,12 @@ const SearchResults = styled.div`
top: 50%;
transform: translate(-50%, -50%);
color: ${props => props.theme.primary.text};
box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1), 0 2px 4px rgba(0, 0, 0, 0.1);
backdrop-filter: blur(16px);
box-shadow: 0 8px 32px 0 rgba(31, 38, 135, 0.37);
backdrop-filter: blur(82px);
-webkit-backdrop-filter: blur(82px);
border-radius: 10px;
box-sizing: border-box;
@media only screen and (max-width: 768px) {
@@ -142,6 +163,33 @@ const ContentWrapper = styled.div`
flex-direction: column;
gap: 12px;
`;
const ResultWrapper = styled.div`
display: flex;
align-items: flex-start;
width: 100%;
box-sizing: border-box;
padding: 8px 16px;
cursor: pointer;
background-color: transparent;
font-family: 'Geist', sans-serif;
border-radius: 8px;
word-wrap: break-word;
overflow-wrap: break-word;
word-break: break-word;
white-space: normal;
overflow: hidden;
text-overflow: ellipsis;
&:hover {
backdrop-filter: blur(8px);
-webkit-backdrop-filter: blur(8px);
}
`;
const Content = styled.div`
display: flex;
margin-left: 8px;
@@ -151,9 +199,10 @@ const Content = styled.div`
font-size: 15px;
color: ${props => props.theme.primary.text};
line-height: 1.6;
border-left: 2px solid #585858;
border-left: 2px solid ${props => props.theme.primary.text}CC;
overflow: hidden;
`
`;
const ContentSegment = styled.div`
display: flex;
align-items: flex-start;
@@ -165,80 +214,6 @@ const ContentSegment = styled.div`
text-overflow: ellipsis;
`
const ResultWrapper = styled.div`
display: flex;
align-items: flex-start;
width: 100%;
box-sizing: border-box;
padding: 8px 16px;
cursor: pointer;
background-color: ${props => props.theme.primary.bg};
font-family: 'Geist', sans-serif;
transition: background-color 0.2s;
border-radius: 8px;
word-wrap: break-word;
overflow-wrap: break-word;
word-break: break-word;
white-space: normal;
overflow: hidden;
text-overflow: ellipsis;
&:hover {
background-color: ${props => props.theme.bg};
}
`
const Markdown = styled.div`
line-height:18px;
font-size: 11px;
white-space: pre-wrap;
pre {
padding: 8px;
width: 90%;
font-size: 11px;
border-radius: 6px;
overflow-x: auto;
background-color: #1B1C1F;
color: #fff ;
}
h1,h2 {
font-size: 14px;
font-weight: 600;
color: ${(props) => props.theme.text};
opacity: 0.8;
}
h3 {
font-size: 12px;
}
p {
margin: 0px;
line-height: 1.35rem;
font-size: 11px;
}
code:not(pre code) {
border-radius: 6px;
padding: 2px 2px;
margin: 2px;
font-size: 9px;
display: inline;
background-color: #646464;
color: #fff ;
}
img{
max-width: 50%;
}
code {
overflow-x: auto;
}
a{
color: #007ee6;
}
`
const Toolkit = styled.kbd`
position: absolute;
right: 4px;
@@ -259,8 +234,8 @@ const Toolkit = styled.kbd`
`
const Loader = styled.div`
margin: 2rem auto;
border: 4px solid ${props => props.theme.secondary.text};
border-top: 4px solid ${props => props.theme.primary.bg};
border: 4px solid ${props => props.theme.name === 'dark' ? 'rgba(255, 255, 255, 0.2)' : 'rgba(0, 0, 0, 0.1)'};
border-top: 4px solid ${props => props.theme.name === 'dark' ? '#FFFFFF' : props.theme.primary.bg};
border-radius: 50%;
width: 12px;
height: 12px;
@@ -280,7 +255,8 @@ const NoResults = styled.div`
margin-top: 2rem;
text-align: center;
font-size: 14px;
color: #888;
color: ${props => props.theme.name === 'dark' ? '#E0E0E0' : '#505050'};
font-weight: 500;
`;
const AskAIButton = styled.button`
display: flex;
@@ -293,25 +269,35 @@ const AskAIButton = styled.button`
height: 50px;
padding: 8px 24px;
border: none;
border-radius: 6px;
background-color: ${props => props.theme.bg};
border-radius: 8px;
color: ${props => props.theme.text};
cursor: pointer;
transition: background-color 0.2s, box-shadow 0.2s;
font-size: 16px;
backdrop-filter: blur(16px);
-webkit-backdrop-filter: blur(16px);
background-color: ${props => props.theme.name === 'dark' ?
'rgba(255, 255, 255, 0.05)' :
'rgba(0, 0, 0, 0.03)'};
&:hover {
opacity: 0.8;
backdrop-filter: blur(20px);
-webkit-backdrop-filter: blur(20px);
background-color: ${props => props.theme.name === 'dark' ?
'rgba(255, 255, 255, 0.1)' :
'rgba(0, 0, 0, 0.06)'};
}
`
`;
const SearchHeader = styled.div`
display: flex;
align-items: center;
gap: 8px;
margin-bottom: 12px;
padding-bottom: 12px;
border-bottom: 1px solid ${props => props.theme.bg};
`
border-bottom: 1px solid ${props => props.theme.name === 'dark' ? '#FFFFFF24' : 'rgba(0, 0, 0, 0.14)'};
`;
const TextField = styled.input`
width: calc(100% - 32px);
@@ -327,8 +313,16 @@ const TextField = styled.input`
&:focus {
border-color: none;
}
&::placeholder {
color: ${props => props.theme.name === 'dark' ? 'rgba(255, 255, 255, 0.6)' : 'rgba(0, 0, 0, 0.5)'} !important;
opacity: 100%; /* Force opacity to ensure placeholder is visible */
font-weight: 500;
}
`
const EscapeInstruction = styled.kbd`
display: flex;
align-items: center;
@@ -337,17 +331,21 @@ const EscapeInstruction = styled.kbd`
padding: 4px 8px;
border-radius: 4px;
background-color: transparent;
-  border: 1px solid ${props => props.theme.secondary.text};
-  color: ${props => props.theme.text};
+  border: 1px solid ${props => props.theme.name === 'dark' ?
+    'rgba(237, 237, 237, 0.6)' :
+    'rgba(23, 23, 23, 0.6)'};
+  color: ${props => props.theme.name === 'dark' ? '#EDEDED' : '#171717'};
  font-size: 12px;
  font-family: 'Geist', sans-serif;
  white-space: nowrap;
  cursor: pointer;
  width: fit-content;
  &:hover {
    background-color: rgba(255, 255, 255, 0.1);
  }
-`
+  -webkit-appearance: none;
+  -moz-appearance: none;
+  appearance: none;
+`;
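Several of the hunks above replace fixed colors with the same props.theme.name === 'dark' ? … : … ternary. As a side note, that branch could be factored into a small helper; the sketch below is hypothetical (byTheme and the Theme shape are not part of this changeset) and only mirrors the NoResults and SearchHeader values for illustration.

import styled from 'styled-components';

// Hypothetical helper, not in the diff: picks a value per theme variant.
type Theme = { name: 'dark' | 'light' };
const byTheme =
  (dark: string, light: string) =>
  ({ theme }: { theme: Theme }) =>
    theme.name === 'dark' ? dark : light;

// Usage mirroring the NoResults and SearchHeader hunks above.
const NoResultsSketch = styled.div`
  font-size: 14px;
  font-weight: 500;
  color: ${byTheme('#E0E0E0', '#505050')};
  border-bottom: 1px solid ${byTheme('#FFFFFF24', 'rgba(0, 0, 0, 0.14)')};
`;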
export const SearchBar = ({
apiKey = "74039c6d-bff7-44ce-ae55-2973cbf13837",
apiHost = "https://gptcloud.arc53.com",
@@ -367,7 +365,7 @@ export const SearchBar = ({
const abortControllerRef = React.useRef<AbortController | null>(null);
const browserOS = getOS();
const isTouch = 'ontouchstart' in window;
const getKeyboardInstruction = () => {
if (isResultVisible) return "Enter";
return browserOS === 'mac' ? '⌘ + K' : 'Ctrl + K';
@@ -394,7 +392,7 @@ export const SearchBar = ({
}
};
document.addEventListener('mousedown', handleClickOutside);
document.addEventListener('keydown', handleKeyDown);
return () => {
@@ -404,33 +402,34 @@ export const SearchBar = ({
}, []);
  React.useEffect(() => {
-    if (!input) {
-      setResults([]);
-      return;
-    }
-    setLoading(true);
-    if (debounceTimeout.current) {
-      clearTimeout(debounceTimeout.current);
-    }
+    if (!input) {
+      setResults([]);
+      setLoading(false);
+      return;
+    }
+    setLoading(true);
+    if (debounceTimeout.current) {
+      clearTimeout(debounceTimeout.current);
+    }
-    if (abortControllerRef.current) {
-      abortControllerRef.current.abort();
-    }
-    const abortController = new AbortController();
-    abortControllerRef.current = abortController;
+    if (abortControllerRef.current) {
+      abortControllerRef.current.abort();
+    }
+    const abortController = new AbortController();
+    abortControllerRef.current = abortController;
-    debounceTimeout.current = setTimeout(() => {
-      getSearchResults(input, apiKey, apiHost, abortController.signal)
-        .then((data) => setResults(data))
-        .catch((err) => !abortController.signal.aborted && console.log(err))
-        .finally(() => setLoading(false));
-    }, 500);
+    debounceTimeout.current = setTimeout(() => {
+      getSearchResults(input, apiKey, apiHost, abortController.signal)
+        .then((data) => setResults(data))
+        .catch((err) => !abortController.signal.aborted && console.log(err))
+        .finally(() => setLoading(false));
+    }, 500);
-    return () => {
-      abortController.abort();
-      clearTimeout(debounceTimeout.current ?? undefined);
-    };
-  }, [input])
+    return () => {
+      abortController.abort();
+      clearTimeout(debounceTimeout.current ?? undefined);
+    };
+  }, [input])
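Read on its own, the rewritten effect combines three standard pieces: a 500 ms debounce, an AbortController that cancels the in-flight request when a newer one starts, and the newly added setLoading(false) on empty input so the spinner cannot get stuck. A self-contained sketch of the same pattern, with a hypothetical useDebouncedSearch name and a generic fetcher standing in for getSearchResults:

import React from 'react';

// Hypothetical standalone hook; getSearchResults is abstracted as `fetcher`.
function useDebouncedSearch<T>(
  query: string,
  fetcher: (q: string, signal: AbortSignal) => Promise<T[]>,
  delayMs = 500,
) {
  const [results, setResults] = React.useState<T[]>([]);
  const [loading, setLoading] = React.useState(false);
  React.useEffect(() => {
    // Empty input: clear results and stop the spinner right away
    // (the setLoading(false) added by this hunk fixes a stuck spinner).
    if (!query) {
      setResults([]);
      setLoading(false);
      return;
    }
    setLoading(true);
    const controller = new AbortController();
    const timer = setTimeout(() => {
      fetcher(query, controller.signal)
        .then(setResults)
        .catch((err) => {
          // Aborts are expected while the user is still typing.
          if (!controller.signal.aborted) console.error(err);
        })
        .finally(() => setLoading(false));
    }, delayMs);
    // Cleanup cancels both the pending timer and any in-flight request.
    return () => {
      controller.abort();
      clearTimeout(timer);
    };
  }, [query, fetcher, delayMs]);
  return { results, loading };
}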
const handleKeyDown = (event: React.KeyboardEvent<HTMLInputElement>) => {
if (event.key === 'Enter') {
@@ -462,6 +461,8 @@ export const SearchBar = ({
</SearchButton>
{
isResultVisible && (
+            <>
+              <SearchOverlay onClick={() => setIsResultVisible(false)} />
<SearchResults>
<SearchHeader>
<TextField
@@ -477,8 +478,8 @@ export const SearchBar = ({
</EscapeInstruction>
</SearchHeader>
<AskAIButton onClick={openWidget}>
-              <img
-                src="https://d3dg1063dc54p9.cloudfront.net/cute-docsgpt.png"
+              <img
+                src="https://d3dg1063dc54p9.cloudfront.net/cute-docsgpt.png"
alt="DocsGPT"
width={24}
height={24}
@@ -539,6 +540,7 @@ export const SearchBar = ({
)}
</SearchResultsScroll>
</SearchResults>
+            </>
)
}
{


@@ -1,3 +1,4 @@
# Please put appropriate value
-VITE_API_HOST=http://0.0.0.0:7091
VITE_BASE_URL=http://localhost:5173
+VITE_API_HOST=http://127.0.0.1:7091
+VITE_API_STREAMING=true
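For context, Vite exposes VITE_-prefixed variables to client code via import.meta.env, always as strings, so the new VITE_API_STREAMING flag needs an explicit comparison. A minimal sketch using the names from the file above:

// Values arrive as strings; booleans must be parsed explicitly.
const apiHost: string = import.meta.env.VITE_API_HOST;
const baseUrl: string = import.meta.env.VITE_BASE_URL;
const streamingEnabled: boolean = import.meta.env.VITE_API_STREAMING === 'true';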

File diff suppressed because it is too large.


@@ -19,27 +19,31 @@
]
},
"dependencies": {
"@reduxjs/toolkit": "^2.5.1",
"@reduxjs/toolkit": "^2.8.2",
"chart.js": "^4.4.4",
"clsx": "^2.1.1",
"i18next": "^24.2.0",
"i18next-browser-languagedetector": "^8.0.2",
"mermaid": "^11.6.0",
"prop-types": "^15.8.1",
"react": "^18.2.0",
"react-chartjs-2": "^5.3.0",
"react-copy-to-clipboard": "^5.1.0",
"react-dom": "^18.3.1",
"react-dropzone": "^14.3.8",
"react-helmet": "^6.1.0",
"react-dropzone": "^14.3.5",
"react-i18next": "^15.4.0",
"react-markdown": "^9.0.1",
"react-redux": "^8.0.5",
"react-router-dom": "^7.1.1",
"react-syntax-highlighter": "^15.5.0",
"react-redux": "^9.2.0",
"react-router-dom": "^7.6.1",
"react-syntax-highlighter": "^15.6.1",
"rehype-katex": "^7.0.1",
"remark-gfm": "^4.0.0",
"remark-math": "^6.0.0"
"remark-math": "^6.0.0",
"tailwind-merge": "^3.3.1"
},
"devDependencies": {
"@types/mermaid": "^9.1.0",
"@types/react": "^18.0.27",
"@types/react-dom": "^18.3.0",
"@types/react-helmet": "^6.1.11",
@@ -49,22 +53,22 @@
"@vitejs/plugin-react": "^4.3.4",
"autoprefixer": "^10.4.13",
"eslint": "^8.57.1",
"eslint-config-prettier": "^9.1.0",
"eslint-config-prettier": "^10.1.5",
"eslint-config-standard-with-typescript": "^34.0.0",
"eslint-plugin-import": "^2.31.0",
"eslint-plugin-n": "^15.7.0",
"eslint-plugin-prettier": "^5.2.1",
"eslint-plugin-promise": "^6.6.0",
"eslint-plugin-react": "^7.37.3",
"eslint-plugin-react": "^7.37.5",
"eslint-plugin-unused-imports": "^4.1.4",
"husky": "^8.0.0",
"lint-staged": "^15.3.0",
"postcss": "^8.4.49",
"prettier": "^3.4.2",
"prettier-plugin-tailwindcss": "^0.6.9",
"prettier": "^3.5.3",
"prettier-plugin-tailwindcss": "^0.6.11",
"tailwindcss": "^3.4.17",
"typescript": "^5.7.2",
"vite": "^5.4.14",
"vite-plugin-svgr": "^4.2.0"
"typescript": "^5.8.3",
"vite": "^6.3.5",
"vite-plugin-svgr": "^4.3.0"
}
}


@@ -4,4 +4,5 @@ module.exports = {
semi: true,
singleQuote: true,
printWidth: 80,
-}
+  plugins: ['prettier-plugin-tailwindcss'],
+};


@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" height="24px" viewBox="0 -960 960 960" width="24px" fill="#e3e3e3"><path d="M480-80q-82 0-155-31.5t-127.5-86Q143-252 111.5-325T80-480q0-83 31.5-155.5t86-127Q252-817 325-848.5T480-880q83 0 155.5 31.5t127 86q54.5 54.5 86 127T880-480q0 82-31.5 155t-86 127.5q-54.5 54.5-127 86T480-80Zm0-82q26-36 45-75t31-83H404q12 44 31 83t45 75Zm-104-16q-18-33-31.5-68.5T322-320H204q29 50 72.5 87t99.5 55Zm208 0q56-18 99.5-55t72.5-87H638q-9 38-22.5 73.5T584-178ZM170-400h136q-3-20-4.5-39.5T300-480q0-21 1.5-40.5T306-560H170q-5 20-7.5 39.5T160-480q0 21 2.5 40.5T170-400Zm216 0h188q3-20 4.5-39.5T580-480q0-21-1.5-40.5T574-560H386q-3 20-4.5 39.5T380-480q0 21 1.5 40.5T386-400Zm268 0h136q5-20 7.5-39.5T800-480q0-21-2.5-40.5T790-560H654q3 20 4.5 39.5T660-480q0 21-1.5 40.5T654-400Zm-16-240h118q-29-50-72.5-87T584-782q18 33 31.5 68.5T638-640Zm-234 0h152q-12-44-31-83t-45-75q-26 36-45 75t-31 83Zm-200 0h118q9-38 22.5-73.5T376-782q-56 18-99.5 55T204-640Z"/></svg>



@@ -1,99 +0,0 @@
//TODO - Add hyperlinks to text
//TODO - Styling
import DocsGPT3 from './assets/cute_docsgpt3.svg';
export default function About() {
return (
<div className="mx-5 grid min-h-screen md:mx-36">
<article className="place-items-left mx-auto my-auto flex w-full max-w-6xl flex-col gap-4 rounded-3xl bg-gray-100 p-6 text-jet dark:bg-gun-metal dark:text-bright-gray lg:p-6 xl:p-10">
<div className="flex items-center">
<p className="mr-2 text-3xl">About DocsGPT</p>
<img className="h14 mb-2" src={DocsGPT3} alt="DocsGPT" />
</div>
<p className="mt-4">
Find the information in your documentation through AI-powered
<a
className="text-blue-500"
href="https://github.com/arc53/DocsGPT"
target="_blank"
rel="noreferrer"
>
{' '}
open-source{' '}
</a>
chatbot. Powered by GPT-3, Faiss and LangChain.
</p>
<div>
<p>
If you want to add your own documentation, please follow the
instruction below:
</p>
<p className="mt-4 ml-2">
1. Navigate to{' '}
<span className="bg-gray-200 italic dark:bg-outer-space">
{' '}
/application
</span>{' '}
folder
</p>
<p className="mt-4 ml-2">
2. Install dependencies from{' '}
<span className="bg-gray-200 italic dark:bg-outer-space">
pip install -r requirements.txt
</span>
</p>
<p className="mt-4 ml-2">
3. Prepare a{' '}
<span className="bg-gray-200 italic dark:bg-outer-space">.env</span>{' '}
file. Copy{' '}
<span className="bg-gray-200 italic dark:bg-outer-space">
.env_sample
</span>{' '}
and create{' '}
<span className="bg-gray-200 italic dark:bg-outer-space">.env</span>{' '}
with your OpenAI API token
</p>
<p className="mt-4 ml-2">
4. Run the app with{' '}
<span className="bg-gray-200 italic dark:bg-outer-space">
python app.py
</span>
</p>
</div>
<p>
Currently It uses{' '}
<span className="text-blue-950 font-medium">DocsGPT</span>{' '}
documentation, so it will respond to information relevant to{' '}
<span className="text-blue-950 font-medium">DocsGPT</span>. If you
want to train it on different documentation - please follow
<a
className="text-blue-500"
href="https://github.com/arc53/DocsGPT/wiki/How-to-train-on-other-documentation"
target="_blank"
rel="noreferrer"
>
{' '}
this guide
</a>
.
</p>
<p className="mt-4 text-left">
If you want to launch it on your own server - follow
<a
className="text-blue-500"
href="https://github.com/arc53/DocsGPT/wiki/Hosting-the-app"
target="_blank"
rel="noreferrer"
>
{' '}
this guide
</a>
.
</p>
</article>
</div>
);
}


@@ -3,7 +3,9 @@ import './locale/i18n';
import { useState } from 'react';
import { Outlet, Route, Routes } from 'react-router-dom';
-import About from './About';
+import Agents from './agents';
+import SharedAgentGate from './agents/SharedAgentGate';
+import ActionButtons from './components/ActionButtons';
import Spinner from './components/Spinner';
import Conversation from './conversation/Conversation';
import { SharedConversation } from './conversation/SharedConversation';
@@ -18,7 +20,7 @@ function AuthWrapper({ children }: { children: React.ReactNode }) {
if (isAuthLoading) {
return (
<div className="h-screen flex items-center justify-center">
<div className="flex h-screen items-center justify-center">
<Spinner />
</div>
);
@@ -27,17 +29,18 @@ function AuthWrapper({ children }: { children: React.ReactNode }) {
}
function MainLayout() {
-  const { isMobile } = useMediaQuery();
-  const [navOpen, setNavOpen] = useState(!isMobile);
+  const { isMobile, isTablet } = useMediaQuery();
+  const [navOpen, setNavOpen] = useState(!(isMobile || isTablet));
return (
<div className="dark:bg-raisin-black relative h-screen overflow-auto">
<div className="relative h-screen overflow-hidden dark:bg-raisin-black">
<Navigation navOpen={navOpen} setNavOpen={setNavOpen} />
<ActionButtons showNewChat={true} showShare={true} />
<div
-        className={`h-[calc(100dvh-64px)] md:h-screen ${
-          !isMobile
-            ? `ml-0 ${!navOpen ? 'md:mx-auto lg:mx-auto' : 'md:ml-72'}`
-            : 'ml-0 md:ml-16'
+        className={`h-[calc(100dvh-64px)] overflow-auto lg:h-screen ${
+          !(isMobile || isTablet)
+            ? `ml-0 ${!navOpen ? 'lg:mx-auto' : 'lg:ml-72'}`
+            : 'ml-0 lg:ml-16'
}`}
>
<Outlet />
@@ -45,14 +48,13 @@ function MainLayout() {
</div>
);
}
export default function App() {
const [, , componentMounted] = useDarkTheme();
if (!componentMounted) {
return <div />;
}
return (
<div className="h-full relative overflow-auto">
<div className="relative h-full overflow-hidden">
<Routes>
<Route
element={
@@ -62,10 +64,11 @@ export default function App() {
}
>
<Route index element={<Conversation />} />
<Route path="/about" element={<About />} />
<Route path="/settings" element={<Setting />} />
<Route path="/settings/*" element={<Setting />} />
<Route path="/agents/*" element={<Agents />} />
</Route>
<Route path="/share/:identifier" element={<SharedConversation />} />
<Route path="/shared/agent/:agentId" element={<SharedAgentGate />} />
<Route path="/*" element={<PageNotFound />} />
</Routes>
</div>


@@ -19,18 +19,18 @@ export default function Hero({
}>;
return (
<div className="flex h-full w-full flex-col text-black-1000 dark:text-bright-gray items-center justify-between">
<div className="flex h-full w-full flex-col items-center justify-between text-black-1000 dark:text-bright-gray">
{/* Header Section */}
<div className="flex flex-col items-center justify-center flex-grow pt-8 md:pt-0">
<div className="flex items-center mb-4">
<div className="flex flex-grow flex-col items-center justify-center pt-8 md:pt-0">
<div className="mb-4 flex items-center">
<span className="text-4xl font-semibold">DocsGPT</span>
<img className="mb-1 inline w-14" src={DocsGPT3} alt="docsgpt" />
</div>
</div>
{/* Demo Buttons Section */}
<div className="w-full max-w-full mb-8 md:mb-16">
<div className="grid grid-cols-1 md:grid-cols-1 lg:grid-cols-2 gap-3 md:gap-4 text-xs">
<div className="mb-8 w-full max-w-full md:mb-16">
<div className="grid grid-cols-1 gap-3 text-xs md:grid-cols-1 md:gap-4 lg:grid-cols-2">
{demos?.map(
(demo: { header: string; query: string }, key: number) =>
demo.header &&
@@ -38,14 +38,12 @@ export default function Hero({
<button
key={key}
onClick={() => handleQuestion({ question: demo.query })}
className="w-full rounded-[66px] border bg-transparent px-6 py-[14px] text-left transition-colors
border-dark-gray text-just-black hover:bg-cultured
dark:border-dim-gray dark:text-chinese-white dark:hover:bg-charleston-green"
className={`w-full rounded-[66px] border border-dark-gray bg-transparent px-6 py-[14px] text-left text-just-black transition-colors hover:bg-cultured dark:border-dim-gray dark:text-chinese-white dark:hover:bg-charleston-green ${key >= 2 ? 'hidden md:block' : ''} // Show only 2 buttons on mobile`}
>
<p className="mb-2 font-semibold text-black-1000 dark:text-bright-gray">
{demo.header}
</p>
<span className="text-gray-700 dark:text-gray-300 opacity-60 line-clamp-2">
<span className="line-clamp-2 text-gray-700 opacity-60 dark:text-gray-300">
{demo.query}
</span>
</button>


@@ -3,6 +3,7 @@ import { useTranslation } from 'react-i18next';
import { useDispatch, useSelector } from 'react-redux';
import { NavLink, useNavigate } from 'react-router-dom';
import { Agent } from './agents/types';
import conversationService from './api/services/conversationService';
import userService from './api/services/userService';
import Add from './assets/add.svg';
@@ -12,13 +13,15 @@ import Expand from './assets/expand.svg';
import Github from './assets/github.svg';
import Hamburger from './assets/hamburger.svg';
import openNewChat from './assets/openNewChat.svg';
+import Pin from './assets/pin.svg';
+import Robot from './assets/robot.svg';
import SettingGear from './assets/settingGear.svg';
+import Spark from './assets/spark.svg';
import SpinnerDark from './assets/spinner-dark.svg';
import Spinner from './assets/spinner.svg';
import Twitter from './assets/TwitterX.svg';
-import UploadIcon from './assets/upload.svg';
+import UnPin from './assets/unpin.svg';
import Help from './components/Help';
-import SourceDropdown from './components/SourceDropdown';
import {
handleAbort,
selectQueries,
@@ -31,22 +34,21 @@ import useDefaultDocument from './hooks/useDefaultDocument';
import useTokenAuth from './hooks/useTokenAuth';
import DeleteConvModal from './modals/DeleteConvModal';
import JWTModal from './modals/JWTModal';
-import { ActiveState, Doc } from './models/misc';
-import { getConversations, getDocs } from './preferences/preferenceApi';
+import { ActiveState } from './models/misc';
+import { getConversations } from './preferences/preferenceApi';
import {
  selectApiKeyStatus,
+  selectAgents,
  selectConversationId,
  selectConversations,
  selectModalStateDeleteConv,
-  selectPaginatedDocuments,
-  selectSelectedDocs,
-  selectSourceDocs,
+  selectSelectedAgent,
+  selectSharedAgents,
  selectToken,
+  setAgents,
  setConversations,
  setModalStateDeleteConv,
-  setPaginatedDocuments,
-  setSelectedDocs,
-  setSourceDocs,
+  setSelectedAgent,
+  setSharedAgents,
} from './preferences/preferenceSlice';
import Upload from './upload/Upload';
@@ -57,39 +59,64 @@ interface NavigationProps {
export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
const dispatch = useDispatch();
+  const navigate = useNavigate();
+  const { t } = useTranslation();
  const token = useSelector(selectToken);
  const queries = useSelector(selectQueries);
-  const docs = useSelector(selectSourceDocs);
-  const selectedDocs = useSelector(selectSelectedDocs);
  const conversations = useSelector(selectConversations);
-  const modalStateDeleteConv = useSelector(selectModalStateDeleteConv);
  const conversationId = useSelector(selectConversationId);
-  const paginatedDocuments = useSelector(selectPaginatedDocuments);
-  const [isDeletingConversation, setIsDeletingConversation] = useState(false);
+  const modalStateDeleteConv = useSelector(selectModalStateDeleteConv);
+  const agents = useSelector(selectAgents);
+  const sharedAgents = useSelector(selectSharedAgents);
+  const selectedAgent = useSelector(selectSelectedAgent);
-  const { isMobile } = useMediaQuery();
+  const { isMobile, isTablet } = useMediaQuery();
  const [isDarkTheme] = useDarkTheme();
-  const [isDocsListOpen, setIsDocsListOpen] = useState(false);
-  const { t } = useTranslation();
  const isApiKeySet = useSelector(selectApiKeyStatus);
  const { showTokenModal, handleTokenSubmit } = useTokenAuth();
+  const [isDeletingConversation, setIsDeletingConversation] = useState(false);
  const [uploadModalState, setUploadModalState] =
    useState<ActiveState>('INACTIVE');
+  const [recentAgents, setRecentAgents] = useState<Agent[]>([]);
  const navRef = useRef(null);
-  const navigate = useNavigate();
-  useEffect(() => {
-    if (!conversations?.data) {
-      fetchConversations();
-    }
-    if (queries.length === 0) {
-      resetConversation();
-    }
-  }, [conversations?.data, dispatch]);
+  async function fetchRecentAgents() {
+    try {
+      const response = await userService.getPinnedAgents(token);
+      if (!response.ok) throw new Error('Failed to fetch pinned agents');
+      const pinnedAgents: Agent[] = await response.json();
+      if (pinnedAgents.length >= 3) {
+        setRecentAgents(pinnedAgents);
+        return;
+      }
+      let tempAgents: Agent[] = [];
+      if (!agents) {
+        const response = await userService.getAgents(token);
+        if (!response.ok) throw new Error('Failed to fetch agents');
+        const data: Agent[] = await response.json();
+        dispatch(setAgents(data));
+        tempAgents = data;
+      } else tempAgents = agents;
+      const additionalAgents = tempAgents
+        .filter(
+          (agent: Agent) =>
+            agent.status === 'published' &&
+            !pinnedAgents.some((pinned) => pinned.id === agent.id),
+        )
+        .sort(
+          (a: Agent, b: Agent) =>
+            new Date(b.last_used_at ?? 0).getTime() -
+            new Date(a.last_used_at ?? 0).getTime(),
+        )
+        .slice(0, 3 - pinnedAgents.length);
+      setRecentAgents([...pinnedAgents, ...additionalAgents]);
+    } catch (error) {
+      console.error('Failed to fetch recent agents: ', error);
+    }
+  }
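The selection rule implemented above is: show the pinned agents as-is when there are at least three, otherwise top the list up to three with the most recently used published, unpinned agents. A standalone sketch of that rule (selectRecentAgents is an illustrative name; the field names follow the diff):

interface AgentLike {
  id?: string;
  status?: string;
  last_used_at?: string;
}

// Illustrative helper mirroring the logic of fetchRecentAgents above.
function selectRecentAgents<A extends AgentLike>(
  pinned: A[],
  all: A[],
  limit = 3,
): A[] {
  // Enough pinned agents: show them as-is, like the early return above.
  if (pinned.length >= limit) return pinned;
  // Otherwise top up with the most recently used published agents
  // that are not already pinned.
  const extras = all
    .filter(
      (a) => a.status === 'published' && !pinned.some((p) => p.id === a.id),
    )
    .sort(
      (a, b) =>
        new Date(b.last_used_at ?? 0).getTime() -
        new Date(a.last_used_at ?? 0).getTime(),
    )
    .slice(0, limit - pinned.length);
  return [...pinned, ...extras];
}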
async function fetchConversations() {
dispatch(setConversations({ ...conversations, loading: true }));
@@ -103,6 +130,15 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
});
}
+  useEffect(() => {
+    fetchRecentAgents();
+  }, [agents, sharedAgents, token, dispatch]);
+  useEffect(() => {
+    if (!conversations?.data) fetchConversations();
+    if (queries.length === 0) resetConversation();
+  }, [conversations?.data, dispatch]);
const handleDeleteAllConversations = () => {
setIsDeletingConversation(true);
conversationService
@@ -124,45 +160,75 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
.catch((error) => console.error(error));
};
-  const handleDeleteClick = (doc: Doc) => {
-    userService
-      .deletePath(doc.id ?? '', token)
-      .then(() => {
-        return getDocs(token);
-      })
-      .then((updatedDocs) => {
-        dispatch(setSourceDocs(updatedDocs));
-        const updatedPaginatedDocs = paginatedDocuments?.filter(
-          (document) => document.id !== doc.id,
-        );
-        dispatch(
-          setPaginatedDocuments(updatedPaginatedDocs || paginatedDocuments),
-        );
-        dispatch(
-          setSelectedDocs(
-            Array.isArray(updatedDocs) &&
-              updatedDocs?.find(
-                (doc: Doc) => doc.name.toLowerCase() === 'default',
-              ),
-          ),
-        );
-      })
-      .catch((error) => console.error(error));
-  };
+  const handleAgentClick = (agent: Agent) => {
+    resetConversation();
+    dispatch(setSelectedAgent(agent));
+    if (isMobile || isTablet) setNavOpen(!navOpen);
+    navigate('/');
+  };
-  const handleConversationClick = (index: string) => {
-    conversationService
-      .getConversation(index, token)
-      .then((response) => response.json())
-      .then((data) => {
-        navigate('/');
-        dispatch(setConversation(data));
-        dispatch(
-          updateConversationId({
-            query: { conversationId: index },
-          }),
-        );
-      });
-  };
+  const handleTogglePin = (agent: Agent) => {
+    userService.togglePinAgent(agent.id ?? '', token).then((response) => {
+      if (response.ok) {
+        const updatePinnedStatus = (a: Agent) =>
+          a.id === agent.id ? { ...a, pinned: !a.pinned } : a;
+        dispatch(setAgents(agents?.map(updatePinnedStatus)));
+        dispatch(setSharedAgents(sharedAgents?.map(updatePinnedStatus)));
+      }
+    });
+  };
+  const handleConversationClick = async (index: string) => {
+    try {
+      dispatch(setSelectedAgent(null));
+      const response = await conversationService.getConversation(index, token);
+      if (!response.ok) {
+        navigate('/');
+        return;
+      }
+      const data = await response.json();
+      if (!data) return;
+      dispatch(setConversation(data.queries));
+      dispatch(updateConversationId({ query: { conversationId: index } }));
+      if (!data.agent_id) {
+        navigate('/');
+        return;
+      }
+      let agent: Agent;
+      if (data.is_shared_usage) {
+        const sharedResponse = await userService.getSharedAgent(
+          data.shared_token,
+          token,
+        );
+        if (!sharedResponse.ok) {
+          navigate('/');
+          return;
+        }
+        agent = await sharedResponse.json();
+        navigate(`/agents/shared/${agent.shared_token}`);
+      } else {
+        const agentResponse = await userService.getAgent(data.agent_id, token);
+        if (!agentResponse.ok) {
+          navigate('/');
+          return;
+        }
+        agent = await agentResponse.json();
+        if (agent.shared_token) {
+          navigate(`/agents/shared/${agent.shared_token}`);
+        } else {
+          await Promise.resolve(dispatch(setSelectedAgent(agent)));
+          navigate('/');
+        }
+      }
+    } catch (error) {
+      console.error('Error handling conversation click:', error);
+      navigate('/');
+    }
  };
const resetConversation = () => {
@@ -173,12 +239,15 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
query: { conversationId: null },
}),
);
+    dispatch(setSelectedAgent(null));
};
const newChat = () => {
if (queries && queries?.length > 0) {
resetConversation();
}
};
async function updateConversationName(updatedConversation: {
name: string;
id: string;
@@ -197,20 +266,16 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
});
}
/*
Needed to fix bug where if mobile nav was closed and then window was resized to desktop, nav would still be closed but the button to open would be gone, as per #1 on issue #146
*/
useEffect(() => {
-    setNavOpen(!isMobile);
-  }, [isMobile]);
+    setNavOpen(!(isMobile || isTablet));
+  }, [isMobile, isTablet]);
useDefaultDocument();
return (
<>
{!navOpen && (
<div className="duration-25 absolute top-3 left-3 z-20 hidden transition-all md:block">
<div className="flex gap-3 items-center">
<div className="duration-25 absolute left-3 top-3 z-20 hidden transition-all lg:block">
<div className="flex items-center gap-3">
<button
onClick={() => {
setNavOpen(!navOpen);
@@ -237,7 +302,7 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
/>
</button>
)}
<div className="text-[#949494] font-medium text-[20px]">
<div className="text-[20px] font-medium text-[#949494]">
DocsGPT
</div>
</div>
@@ -247,13 +312,13 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
ref={navRef}
className={`${
!navOpen && '-ml-96 md:-ml-[18rem]'
-        } duration-20 fixed top-0 z-20 flex h-full w-72 flex-col border-r-[1px] border-b-0 bg-lotion dark:bg-chinese-black transition-all dark:border-r-purple-taupe dark:text-white`}
+        } duration-20 fixed top-0 z-20 flex h-full w-72 flex-col border-b-0 border-r-[1px] bg-lotion transition-all dark:border-r-purple-taupe dark:bg-chinese-black dark:text-white`}
>
<div
className={'visible mt-2 flex h-[6vh] w-full justify-between md:h-12'}
>
<div
className="my-auto mx-4 flex cursor-pointer gap-1.5"
className="mx-4 my-auto flex cursor-pointer gap-1.5"
onClick={() => {
if (isMobile) {
setNavOpen(!navOpen);
@@ -283,7 +348,7 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
<NavLink
to={'/'}
onClick={() => {
-            if (isMobile) {
+            if (isMobile || isTablet) {
setNavOpen(!navOpen);
}
resetConversation();
@@ -291,7 +356,7 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
className={({ isActive }) =>
`${
isActive ? 'bg-transparent' : ''
-            } group sticky mx-4 mt-4 flex cursor-pointer gap-2.5 rounded-3xl border border-silver p-3 hover:border-rainy-gray dark:border-purple-taupe dark:text-white hover:bg-transparent`
+            } group sticky mx-4 mt-4 flex cursor-pointer gap-2.5 rounded-3xl border border-silver p-3 hover:border-rainy-gray hover:bg-transparent dark:border-purple-taupe dark:text-white`
}
>
<img
@@ -299,7 +364,7 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
alt="Create new chat"
className="opacity-80 group-hover:opacity-100"
/>
<p className=" text-sm text-dove-gray group-hover:text-neutral-600 dark:text-chinese-silver dark:group-hover:text-bright-gray">
<p className="text-sm text-dove-gray group-hover:text-neutral-600 dark:text-chinese-silver dark:group-hover:text-bright-gray">
{t('newChat')}
</p>
</NavLink>
@@ -308,7 +373,7 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
className="mb-auto h-[78vh] overflow-y-auto overflow-x-hidden dark:text-white"
>
{conversations?.loading && !isDeletingConversation && (
<div className="absolute top-1/2 left-1/2 transform -translate-x-1/2 -translate-y-1/2">
<div className="absolute left-1/2 top-1/2 -translate-x-1/2 -translate-y-1/2 transform">
<img
src={isDarkTheme ? SpinnerDark : Spinner}
className="animate-spin cursor-pointer bg-transparent"
@@ -316,10 +381,108 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
/>
</div>
)}
-          {conversations?.data && conversations.data.length > 0 ? (
+          {recentAgents?.length > 0 ? (
            <div>
-              <div className=" my-auto mx-4 mt-2 flex h-6 items-center justify-between gap-4 rounded-3xl">
-                <p className="mt-1 ml-4 text-sm font-semibold">{t('chats')}</p>
+              <div className="mx-4 my-auto mt-2 flex h-6 items-center">
+                <p className="ml-4 mt-1 text-sm font-semibold">Agents</p>
</div>
<div className="agents-container">
<div>
{recentAgents.map((agent, idx) => (
<div
key={idx}
className={`group mx-4 my-auto mt-4 flex h-9 cursor-pointer items-center justify-between rounded-3xl pl-4 hover:bg-bright-gray dark:hover:bg-dark-charcoal ${
agent.id === selectedAgent?.id && !conversationId
? 'bg-bright-gray dark:bg-dark-charcoal'
: ''
}`}
onClick={() => handleAgentClick(agent)}
>
<div className="flex items-center gap-2">
<div className="flex w-6 justify-center">
<img
src={
agent.image && agent.image.trim() !== ''
? agent.image
: Robot
}
alt="agent-logo"
className="h-6 w-6 rounded-full object-contain"
/>
</div>
<p className="overflow-hidden overflow-ellipsis whitespace-nowrap text-sm leading-6 text-eerie-black dark:text-bright-gray">
{agent.name}
</p>
</div>
<div
className={`${isMobile || isTablet ? 'flex' : 'invisible flex group-hover:visible'} items-center px-3`}
>
<button
className="rounded-full hover:opacity-75"
onClick={(e) => {
e.stopPropagation();
handleTogglePin(agent);
}}
>
<img
src={agent.pinned ? UnPin : Pin}
className="h-4 w-4"
></img>
</button>
</div>
</div>
))}
</div>
<div
className="mx-4 my-auto mt-2 flex h-9 cursor-pointer items-center gap-2 rounded-3xl pl-4 hover:bg-bright-gray dark:hover:bg-dark-charcoal"
onClick={() => {
dispatch(setSelectedAgent(null));
if (isMobile || isTablet) {
setNavOpen(false);
}
navigate('/agents');
}}
>
<div className="flex w-6 justify-center">
<img
src={Spark}
alt="manage-agents"
className="h-[18px] w-[18px]"
/>
</div>
<p className="overflow-hidden overflow-ellipsis whitespace-nowrap text-sm leading-6 text-eerie-black dark:text-bright-gray">
{t('manageAgents')}
</p>
</div>
</div>
</div>
) : (
<div
className="mx-4 my-auto mt-2 flex h-9 cursor-pointer items-center gap-2 rounded-3xl pl-4 hover:bg-bright-gray dark:hover:bg-dark-charcoal"
onClick={() => {
if (isMobile || isTablet) {
setNavOpen(false);
}
dispatch(setSelectedAgent(null));
navigate('/agents');
}}
>
<div className="flex w-6 justify-center">
<img
src={Spark}
alt="manage-agents"
className="h-[18px] w-[18px]"
/>
</div>
<p className="overflow-hidden overflow-ellipsis whitespace-nowrap text-sm leading-6 text-eerie-black dark:text-bright-gray">
{t('manageAgents')}
</p>
</div>
)}
{conversations?.data && conversations.data.length > 0 ? (
<div className="mt-7">
<div className="mx-4 my-auto mt-2 flex h-6 items-center justify-between gap-4 rounded-3xl">
<p className="ml-4 mt-1 text-sm font-semibold">{t('chats')}</p>
</div>
<div className="conversations-container">
{conversations.data?.map((conversation) => (
@@ -345,48 +508,17 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
)}
</div>
<div className="flex h-auto flex-col justify-end text-eerie-black dark:text-white">
<div className="flex flex-col-reverse border-b-[1px] dark:border-b-purple-taupe">
<div className="relative my-4 mx-4 flex gap-4 items-center">
<SourceDropdown
options={docs}
selectedDocs={selectedDocs}
setSelectedDocs={setSelectedDocs}
isDocsListOpen={isDocsListOpen}
setIsDocsListOpen={setIsDocsListOpen}
handleDeleteClick={handleDeleteClick}
handlePostDocumentSelect={(option?: string) => {
if (isMobile) {
setNavOpen(!navOpen);
}
}}
/>
<img
className="hover:cursor-pointer"
src={UploadIcon}
width={28}
height={25}
alt="Upload document"
onClick={() => {
setUploadModalState('ACTIVE');
if (isMobile) {
setNavOpen(!navOpen);
}
}}
></img>
</div>
<p className="ml-5 mt-3 text-sm font-semibold">{t('sourceDocs')}</p>
</div>
<div className="flex flex-col gap-2 border-b-[1px] py-2 dark:border-b-purple-taupe">
<NavLink
onClick={() => {
-            if (isMobile) {
-              setNavOpen(!navOpen);
+            if (isMobile || isTablet) {
+              setNavOpen(false);
}
resetConversation();
}}
to="/settings"
className={({ isActive }) =>
-            `my-auto mx-4 flex h-9 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100 dark:hover:bg-[#28292E] ${
+            `mx-4 my-auto flex h-9 cursor-pointer gap-4 rounded-3xl hover:bg-gray-100 dark:hover:bg-[#28292E] ${
isActive ? 'bg-gray-3000 dark:bg-transparent' : ''
}`
}
@@ -394,15 +526,15 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
<img
src={SettingGear}
alt="Settings"
className="ml-2 w- filter dark:invert"
className="w- ml-2 filter dark:invert"
/>
<p className="my-auto text-sm text-eerie-black dark:text-white">
<p className="my-auto text-sm text-eerie-black dark:text-white">
{t('settings.label')}
</p>
</NavLink>
</div>
<div className="flex flex-col justify-end text-eerie-black dark:text-white">
<div className="flex justify-between items-center py-1">
<div className="flex items-center justify-between py-1">
<Help />
<div className="flex items-center gap-1 pr-4">
@@ -450,10 +582,10 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
</div>
</div>
</div>
<div className="sticky z-10 h-16 w-full border-b-2 bg-gray-50 dark:border-b-purple-taupe dark:bg-chinese-black md:hidden">
<div className="flex gap-6 items-center h-full ml-6 ">
<div className="sticky z-10 h-16 w-full border-b-2 bg-gray-50 dark:border-b-purple-taupe dark:bg-chinese-black lg:hidden">
<div className="ml-6 flex h-full items-center gap-6">
<button
className=" h-6 w-6 md:hidden"
className="h-6 w-6 lg:hidden"
onClick={() => setNavOpen(true)}
>
<img
@@ -462,7 +594,7 @@ export default function Navigation({ navOpen, setNavOpen }: NavigationProps) {
className="w-7 filter dark:invert"
/>
</button>
<div className="text-[#949494] font-medium text-[20px]">DocsGPT</div>
<div className="text-[20px] font-medium text-[#949494]">DocsGPT</div>
</div>
</div>
<DeleteConvModal

Some files were not shown because too many files have changed in this diff.