Compare commits

...

343 Commits

Author SHA1 Message Date
Pavel
c740933782 Delete docsgpt_scanner.py 2025-03-30 11:45:33 +01:00
Pavel
6d3134c944 proxy for api-tool
- Only for api_tool for now; if this solution works well, implementation for other tools is required
- Need to check API key creation with the current proxies
- Show connection string example at creation
- Locale needs updates for other languages
2025-03-30 14:42:37 +04:00
Alex
0e31329785 Merge pull request #1723 from arc53/dependabot/pip/application/langsmith-0.3.19 2025-03-29 21:14:46 +02:00
dependabot[bot]
57d103116f build(deps): bump langsmith from 0.3.15 to 0.3.19 in /application
Bumps [langsmith](https://github.com/langchain-ai/langsmith-sdk) from 0.3.15 to 0.3.19.
- [Release notes](https://github.com/langchain-ai/langsmith-sdk/releases)
- [Commits](https://github.com/langchain-ai/langsmith-sdk/compare/v0.3.15...v0.3.19)

---
updated-dependencies:
- dependency-name: langsmith
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-28 20:55:06 +00:00
Alex
a4e9ee72d4 Merge pull request #1713 from arc53/dependabot/pip/application/pymongo-4.11.3 2025-03-28 22:45:04 +02:00
Alex
1620b4f214 Merge pull request #1709 from Charlesnorris509/main
Update Navigation.tsx
2025-03-27 11:28:45 +02:00
GH Action - Upstream Sync
ec27445728 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-03-27 01:21:05 +00:00
Alex
4b1f572b04 Merge pull request #1720 from arc53/dependabot/npm_and_yarn/docs/next-14.2.26
build(deps): bump next from 14.2.22 to 14.2.26 in /docs
2025-03-26 16:53:50 +02:00
dependabot[bot]
28f925ef75 build(deps): bump next from 14.2.22 to 14.2.26 in /docs
Bumps [next](https://github.com/vercel/next.js) from 14.2.22 to 14.2.26.
- [Release notes](https://github.com/vercel/next.js/releases)
- [Changelog](https://github.com/vercel/next.js/blob/canary/release.js)
- [Commits](https://github.com/vercel/next.js/compare/v14.2.22...v14.2.26)

---
updated-dependencies:
- dependency-name: next
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-26 06:23:32 +00:00
Charles Norris
98abeabc0d Update ConversationTile.tsx 2025-03-24 16:10:41 -04:00
GH Action - Upstream Sync
e02f19058e Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-03-21 01:21:33 +00:00
Alex
4095b2b674 Merge pull request #1714 from siiddhantt/fix/retrievers-broken
fix: brave and duckduckgo retrievers
2025-03-20 13:00:34 +00:00
Siddhant Rai
3be6e2132b refactor: remove outdated vector store tests 2025-03-20 17:27:18 +05:30
Siddhant Rai
0732d9b6c8 fix: brave and duckduckgo retrievers 2025-03-20 08:27:00 +05:30
GH Action - Upstream Sync
2952c1be08 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-03-20 01:19:59 +00:00
dependabot[bot]
96c4a13c93 build(deps): bump pymongo from 4.10.1 to 4.11.3 in /application
Bumps [pymongo](https://github.com/mongodb/mongo-python-driver) from 4.10.1 to 4.11.3.
- [Release notes](https://github.com/mongodb/mongo-python-driver/releases)
- [Changelog](https://github.com/mongodb/mongo-python-driver/blob/4.11.3/doc/changelog.rst)
- [Commits](https://github.com/mongodb/mongo-python-driver/compare/4.10.1...4.11.3)

---
updated-dependencies:
- dependency-name: pymongo
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-19 08:40:51 +00:00
Alex
53abf1a79e Merge pull request #1701 from siiddhantt/feat/jwt-auth
feat: implement JWT authentication and token management
2025-03-19 08:39:29 +00:00
Siddhant Rai
f00802dd6b fix: llm tests failing 2025-03-19 10:25:46 +05:30
Siddhant Rai
ab95d90284 feat: pass decoded_token to llm and retrievers 2025-03-18 23:46:02 +05:30
Siddhant Rai
f4ab85a2bb feat: minor fixes after merge 2025-03-18 18:56:02 +05:30
Siddhant Rai
5b40c5a9d7 Merge branch 'main' into feat/jwt-auth 2025-03-18 18:26:29 +05:30
Siddhant Rai
6583aeff08 feat: update authentication handling and integrate token usage across frontend and backend 2025-03-18 08:29:57 +05:30
Charles Norris
b1c531fbcc Update Navigation.tsx
Typo in the onCoversationClick() function; it should be onConversationClick()
2025-03-17 16:51:33 -04:00
Siddhant Rai
4406426515 feat: implement session_jwt and enhance auth 2025-03-17 11:51:30 +05:30
Alex
af48782464 Merge pull request #1708 from ManishMadan2882/main
Refactor(fe): Conversation
2025-03-16 20:05:46 +00:00
Manish Madan
726d4ddd9f Merge branch 'arc53:main' into main 2025-03-16 04:08:26 +05:30
ManishMadan2882
adc637b689 (clean:unused) 2025-03-16 04:07:33 +05:30
ManishMadan2882
d6c9b4fbc9 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-03-16 04:02:12 +05:30
ManishMadan2882
e17cc8ea34 (fix:date) handle iso 8601 date 2025-03-16 04:01:19 +05:30
ManishMadan2882
574a0e2dba (fix:conv) perfect alignment 2025-03-16 03:57:31 +05:30
ManishMadan2882
fd0bd13b08 (refactor:conv) separate textarea 2025-03-16 02:01:32 +05:30
Alex
f8c92147cd deps: bump langchain things 2025-03-15 19:51:30 +00:00
Alex
8136cd78d3 Merge pull request #1698 from arc53/dependabot/pip/application/duckduckgo-search-7.5.2
build(deps): bump duckduckgo-search from 7.4.2 to 7.5.2 in /application
2025-03-15 19:16:39 +00:00
ManishMadan2882
d9c4331480 (refactor:conv) wrap msgs separately 2025-03-15 16:41:56 +05:30
Alex
7af726f4b2 Merge pull request #1702 from nickaggarwal/main
fix signature for AzureOpenAILLM
2025-03-14 22:08:36 +00:00
ManishMadan2882
a50f3bc55b (fix:sourceDropdown) ask before delete 2025-03-15 00:15:23 +05:30
Nilesh Agarwal
5438bf9754 fix signature for AzureOpenAILLM 2025-03-14 09:52:09 -07:00
Siddhant Rai
7fd377bdbe feat: implement JWT authentication and token management in frontend and backend 2025-03-14 17:07:15 +05:30
Alex
84620a7375 Merge pull request #1700 from ScriptScientist/main
fix: docker compose up doesn't use the env
2025-03-14 10:39:28 +00:00
rock.lee
6968317db2 fix: docker compose up doesn't use the env and setup script does not exit when the service starts successfully 2025-03-14 17:09:10 +08:00
dependabot[bot]
67a92428b5 build(deps): bump duckduckgo-search from 7.4.2 to 7.5.2 in /application
Bumps [duckduckgo-search](https://github.com/deedy5/duckduckgo_search) from 7.4.2 to 7.5.2.
- [Release notes](https://github.com/deedy5/duckduckgo_search/releases)
- [Commits](https://github.com/deedy5/duckduckgo_search/compare/v7.4.2...v7.5.2)

---
updated-dependencies:
- dependency-name: duckduckgo-search
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-13 20:19:41 +00:00
Alex
5bb639f0ad Merge pull request #1670 from ManishMadan2882/main
Figma consolidation
2025-03-13 15:21:03 +00:00
ManishMadan2882
5bc758aa2d (fix:ui) minor adjustments 2025-03-13 20:39:17 +05:30
Pavel
27b24f19de user-avatar-svg 2025-03-13 14:25:41 +03:00
GH Action - Upstream Sync
3dfde84827 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-03-13 01:25:22 +00:00
Alex
5e39be6a2c fix: tests anthropic 2025-03-13 00:32:24 +00:00
Alex
35248991e7 fix: cache 2025-03-13 00:25:52 +00:00
Alex
b76e820122 fix: ruff fix 2025-03-13 00:11:50 +00:00
Alex
51eced00aa fix: openai compatible with llama and gemini 2025-03-13 00:10:13 +00:00
ManishMadan2882
079a216f5b (fix:chunkDocs) alike purple action btn 2025-03-13 03:08:02 +05:30
ManishMadan2882
8b5df98f57 (fix:ui) minor adjust 2025-03-13 03:06:49 +05:30
ManishMadan2882
fb6fd5b5b2 (fix:input) unwanted autocomplete style 2025-03-13 03:06:22 +05:30
Alex
5d5ea3eb8f Merge pull request #1689 from ScriptScientist/main
feat: novita llms support
2025-03-12 21:03:09 +00:00
ManishMadan2882
21360981ee (fix:tool-cards) ui adjust 2025-03-13 01:39:51 +05:30
ManishMadan2882
0b3cad152f (fix:prompts) design specs 2025-03-13 00:43:56 +05:30
Alex
2c2dbe45a6 Merge pull request #1696 from arc53/fixes-cache
Fixes cache
2025-03-12 17:37:38 +00:00
ManishMadan2882
5c7a3a515c (fix:general) perfect labels, delete btn 2025-03-12 22:52:08 +05:30
Alex
f2b05ad56d fix: handle cache issues with more grace 2025-03-12 15:13:14 +00:00
Alex
5f9702b91c fix: /search 2025-03-12 13:47:16 +00:00
ManishMadan2882
93de4065c7 (fix:tables) table header, name table data 2025-03-12 17:32:05 +05:30
ManishMadan2882
8e0e55fe5e (fix:analytics) feedback title 2025-03-12 17:31:30 +05:30
ManishMadan2882
a8a8585570 (fix:purple-btn) hover to 976af3 2025-03-12 08:17:13 +05:30
ManishMadan2882
1f3c07979a (fix:bubble) color adjustments 2025-03-12 06:41:37 +05:30
ManishMadan2882
fa07b3349d (fix:sidebar) upload icon, bg perfect 2025-03-12 05:57:59 +05:30
GH Action - Upstream Sync
519ffe617b Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-03-11 01:24:52 +00:00
Alex
fe02bf9347 Merge pull request #1693 from siiddhantt/fix/response-and-sources
feat: agent use in answer and enhance search
2025-03-10 12:53:46 +00:00
Siddhant Rai
faa583864d feat: enhance conversation saving and response streaming with source handling 2025-03-10 14:19:43 +05:30
GH Action - Upstream Sync
1a7504eba0 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-03-10 01:11:30 +00:00
Alex
46d32b4072 Merge pull request #1682 from arc53/dependabot/pip/application/jinja2-3.1.6
build(deps): bump jinja2 from 3.1.5 to 3.1.6 in /application
2025-03-10 00:04:03 +00:00
Alex
18d8b9c395 Merge pull request #1692 from arc53/tool-ntfy
feat: ntfy tool
2025-03-09 01:28:00 +00:00
Alex
8b9b74464e feat: ntfy tool 2025-03-09 01:22:00 +00:00
rock.lee
867c375843 add novita provider 2025-03-08 15:45:49 +08:00
GH Action - Upstream Sync
54ca6acf5a Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-03-08 01:07:50 +00:00
ManishMadan2882
6ac2d6d228 (fix:tile) hover behaviour on rename 2025-03-08 00:48:53 +05:30
ManishMadan2882
10c7a5f36b (clean) comments 2025-03-07 20:24:38 +05:30
Alex
4fd6c52951 fix: api tool avoid sending body if empty 2025-03-07 14:34:23 +00:00
ManishMadan2882
93fea17918 (feat:config) color name update 2025-03-07 19:55:47 +05:30
ManishMadan2882
b3f6a3aae6 (fix:ui) minor adjustments 2025-03-07 17:20:14 +05:30
ManishMadan2882
773147701d (fix:ui) tool cards 2025-03-07 17:19:14 +05:30
ManishMadan2882
d891c8dae2 (fix:ui) minor perfections 2025-03-07 17:18:28 +05:30
ManishMadan2882
101852c7d1 (feat:toggle) add id, aria props 2025-03-07 17:16:45 +05:30
ManishMadan2882
c1f13ba8b1 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-03-07 17:11:00 +05:30
ManishMadan2882
71e45860f3 (fix:input) remove ambiguous label prop 2025-03-07 17:10:48 +05:30
ManishMadan2882
25dfd63c4f (fix:ui) color adjust 2025-03-07 17:09:00 +05:30
ManishMadan2882
fc12d7b4c8 (fix:tool) rely on reusable components 2025-03-07 17:07:01 +05:30
GH Action - Upstream Sync
a6eedc6d84 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-03-07 01:24:48 +00:00
Alex
b523a98289 Merge pull request #1671 from aidanbennettjones/AidanComponentEditChat
Enables Enter Key Functionality to Submit Edited Chats
2025-03-06 16:44:06 -05:00
Alex
a0929c96ba fix: postgres tool migration 2025-03-06 16:20:19 +00:00
Alex
ae1f25379f Merge pull request #1684 from arc53/brave-tool
brave-tool
2025-03-06 11:17:47 -05:00
Pavel
1e3c8cb7b1 fix imports 2025-03-06 19:13:19 +03:00
Pavel
b9f28705c8 brave-tool 2025-03-06 18:46:50 +03:00
Alex
ad4f3ce379 Merge pull request #1648 from siiddhantt/feat/agent-refactor-and-logging
feat: agent-retriever workflow + logging stack
2025-03-06 09:32:18 -05:00
Alex
d4f53bf6bb fix: ruff check 2025-03-06 14:31:46 +00:00
dependabot[bot]
2ea2819477 build(deps): bump jinja2 from 3.1.5 to 3.1.6 in /application
Bumps [jinja2](https://github.com/pallets/jinja) from 3.1.5 to 3.1.6.
- [Release notes](https://github.com/pallets/jinja/releases)
- [Changelog](https://github.com/pallets/jinja/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/jinja/compare/3.1.5...3.1.6)

---
updated-dependencies:
- dependency-name: jinja2
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-06 04:47:24 +00:00
Alex
49a2b2ce6d fix: agent not forgotten 2025-03-05 16:11:06 -05:00
Alex
06edc261c0 fix: duplicates... 2025-03-05 16:09:13 -05:00
Alex
af69bc9d3c Merge branch 'main' into feat/agent-refactor-and-logging 2025-03-05 16:04:09 -05:00
ManishMadan2882
6eb8256220 (feat:docs,chatbots) danger delete btn 2025-03-06 02:00:05 +05:30
ManishMadan2882
ecf3067d67 (fix:analytics) rename feedback title 2025-03-06 01:58:37 +05:30
ManishMadan2882
3a7f23f75e (feat:logs) ui 2025-03-06 01:56:38 +05:30
Siddhant Rai
f88c34a0be feat: streaming responses with function call 2025-03-05 09:02:55 +05:30
ManishMadan2882
572c57e023 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-03-05 03:53:34 +05:30
ManishMadan2882
79cf2150d5 (fix:ui) minor adjustments 2025-03-05 03:15:50 +05:30
ManishMadan2882
68b868047e (feat:copy) prop to showText 2025-03-05 03:13:34 +05:30
ManishMadan2882
377670b34a (feat:docs) adding view option 2025-03-05 03:12:48 +05:30
ManishMadan2882
2b7f4de832 (feat:docs):add contextMenu to actions 2025-03-04 18:53:23 +05:30
GH Action - Upstream Sync
4a88a63fa0 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-03-04 01:24:01 +00:00
Alex
bf195051e2 Merge pull request #1663 from arc53/dependabot/pip/application/primp-0.14.0
build(deps): bump primp from 0.10.0 to 0.14.0 in /application
2025-03-03 14:50:26 +00:00
GH Action - Upstream Sync
c3ccd9feff Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-03-03 01:25:14 +00:00
Alex
2d0f0948fb Merge pull request #1673 from arc53/dependabot/pip/application/anthropic-0.49.0
build(deps): bump anthropic from 0.45.2 to 0.49.0 in /application
2025-03-02 22:44:03 +00:00
aidanbennettjones
fc7a5d098d Merge branch 'arc53:main' into AidanComponentEditChat 2025-03-02 13:35:20 -05:00
aidanbennettjones
b7f766ab82 Fix Number of Rows and Remove Comments 2025-03-02 13:34:55 -05:00
ManishMadan2882
bfffd5e4b3 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-03-01 20:15:24 +05:30
ManishMadan2882
63ba005f4d (fix:ui) minor color perfections 2025-03-01 20:15:10 +05:30
ManishMadan2882
f66ef05f2a (feat:barGraph) on hover opacity 2025-03-01 20:14:27 +05:30
ManishMadan2882
a3b28843b6 (feat:confirm-modal) danger variant 2025-03-01 20:12:40 +05:30
ManishMadan2882
b07ec8accb (feat:general) ui 2025-03-01 20:11:46 +05:30
GH Action - Upstream Sync
06f4b5823a Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-03-01 01:35:16 +00:00
Alex
99fe57f99a Merge pull request #1664 from arc53/dependabot/pip/application/langchain-core-0.3.40
build(deps): bump langchain-core from 0.3.29 to 0.3.40 in /application
2025-03-01 00:58:31 +00:00
dependabot[bot]
d1226031e1 build(deps): bump anthropic from 0.45.2 to 0.49.0 in /application
Bumps [anthropic](https://github.com/anthropics/anthropic-sdk-python) from 0.45.2 to 0.49.0.
- [Release notes](https://github.com/anthropics/anthropic-sdk-python/releases)
- [Changelog](https://github.com/anthropics/anthropic-sdk-python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/anthropics/anthropic-sdk-python/compare/v0.45.2...v0.49.0)

---
updated-dependencies:
- dependency-name: anthropic
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-28 20:35:48 +00:00
aidanbennettjones
78f3e64d5a Merge branch 'arc53:main' into AidanComponentEditChat 2025-02-28 10:35:17 -05:00
ManishMadan2882
1d98e75b92 (fix:share) modal covers screen 2025-02-28 16:41:34 +05:30
ManishMadan2882
66d8d95763 (feat:input) floating input labels 2025-02-28 03:46:26 +05:30
ManishMadan2882
e2bf468195 (refactor) conv tile 2025-02-28 03:45:54 +05:30
ManishMadan2882
b7efc16257 (feat:menu) add reusable menu, ui 2025-02-28 03:44:14 +05:30
ManishMadan2882
ec6bcdff7e (feat:convBubble) minor adjustment 2025-02-28 03:42:34 +05:30
ManishMadan2882
3e65885e1f (feat:toggle) greener colors, flexible 2025-02-28 03:40:42 +05:30
Siddhant Rai
c6ce4d9374 feat: logging stacks 2025-02-27 19:14:10 +05:30
ManishMadan2882
0b437d0e8d (feat:ui) updating hero, code snippets 2025-02-27 03:04:55 +05:30
dependabot[bot]
e1df3be4b9 build(deps): bump langchain-core from 0.3.29 to 0.3.40 in /application
Bumps [langchain-core](https://github.com/langchain-ai/langchain) from 0.3.29 to 0.3.40.
- [Release notes](https://github.com/langchain-ai/langchain/releases)
- [Commits](https://github.com/langchain-ai/langchain/compare/langchain-core==0.3.29...langchain-core==0.3.40)

---
updated-dependencies:
- dependency-name: langchain-core
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-26 20:16:15 +00:00
dependabot[bot]
b944769f8c build(deps): bump primp from 0.10.0 to 0.14.0 in /application
Bumps [primp](https://github.com/deedy5/primp) from 0.10.0 to 0.14.0.
- [Release notes](https://github.com/deedy5/primp/releases)
- [Commits](https://github.com/deedy5/primp/compare/v0.10.0...v0.14.0)

---
updated-dependencies:
- dependency-name: primp
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-26 20:16:08 +00:00
Alex
56b8074c22 Merge pull request #1658 from ManishMadan2882/main
Widget upgrade to 0.5.0
2025-02-26 11:05:32 +00:00
ManishMadan2882
b577f322c9 Merge branch 'main' of https://github.com/arc53/docsgpt 2025-02-26 16:17:19 +05:30
ManishMadan2882
b007e2af8f (update:docs) docsgpt dep 2025-02-26 16:12:38 +05:30
ManishMadan2882
df89990aa5 (upgrade:widget) v0.5.0 2025-02-26 16:09:12 +05:30
Alex
c108a53b11 fix: default keys 2025-02-26 10:35:26 +00:00
ManishMadan2882
4831f5bb5d Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-02-26 14:52:59 +05:30
ManishMadan2882
987ef63e64 (feat:widget) simplify scrolling 2025-02-26 14:52:48 +05:30
Alex
e997e12bb9 Merge pull request #1656 from arc53/dependabot/pip/application/lxml-5.3.1
build(deps): bump lxml from 5.3.0 to 5.3.1 in /application
2025-02-25 22:29:56 +00:00
Alex
6ba0add265 Merge pull request #1655 from arc53/dependabot/pip/application/qdrant-client-1.13.2
build(deps): bump qdrant-client from 1.12.2 to 1.13.2 in /application
2025-02-25 22:29:40 +00:00
Alex
9160c13039 Merge pull request #1651 from arc53/dependabot/pip/application/elasticsearch-8.17.1
build(deps): bump elasticsearch from 8.17.0 to 8.17.1 in /application
2025-02-25 22:28:49 +00:00
Alex
40be9f65e4 Merge pull request #1647 from ManishMadan2882/main
Analytics and feedback
2025-02-25 22:28:16 +00:00
dependabot[bot]
0aae53524c build(deps): bump lxml from 5.3.0 to 5.3.1 in /application
Bumps [lxml](https://github.com/lxml/lxml) from 5.3.0 to 5.3.1.
- [Release notes](https://github.com/lxml/lxml/releases)
- [Changelog](https://github.com/lxml/lxml/blob/master/CHANGES.txt)
- [Commits](https://github.com/lxml/lxml/compare/lxml-5.3.0...lxml-5.3.1)

---
updated-dependencies:
- dependency-name: lxml
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-25 20:38:23 +00:00
dependabot[bot]
1d1efc00b5 build(deps): bump qdrant-client from 1.12.2 to 1.13.2 in /application
Bumps [qdrant-client](https://github.com/qdrant/qdrant-client) from 1.12.2 to 1.13.2.
- [Release notes](https://github.com/qdrant/qdrant-client/releases)
- [Commits](https://github.com/qdrant/qdrant-client/compare/v1.12.2...v1.13.2)

---
updated-dependencies:
- dependency-name: qdrant-client
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-25 20:38:17 +00:00
ManishMadan2882
7584305159 (feat:feedback) unset feedback when null 2025-02-26 01:01:30 +05:30
ManishMadan2882
554601d674 (fix:feedback) widget can handle feedback 2025-02-26 01:00:38 +05:30
Alex
6caf14f4b2 Merge pull request #1653 from asminkarki012/main
docs: Ensure --env-file .env is included for environment variable loa…
2025-02-25 17:16:58 +00:00
asminkarki012
edbd08be8a docs: Ensure --env-file .env is included for environment variable loading 2025-02-25 21:52:39 +05:45
ManishMadan2882
caed6df53b (feat:stream) save conversations optionally 2025-02-25 17:32:35 +05:30
ManishMadan2882
d823fba60b Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-02-25 16:23:55 +05:30
ManishMadan2882
92c8abe65d (fix:sharedConv) makes sure that response state updates 2025-02-25 16:23:35 +05:30
Alex
91e966b480 Merge pull request #1650 from arc53/dependabot/pip/application/transformers-4.49.0
build(deps): bump transformers from 4.48.0 to 4.49.0 in /application
2025-02-25 08:51:41 +00:00
Siddhant Rai
1f0b779c64 refactor: folder restructure for agent based workflow 2025-02-25 09:03:45 +05:30
GH Action - Upstream Sync
0ccd76074a Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-02-25 01:22:53 +00:00
Alex
07c6dcab4a Merge pull request #1652 from arc53/dependabot/pip/application/google-genai-1.3.0
build(deps): bump google-genai from 0.5.0 to 1.3.0 in /application
2025-02-24 22:34:16 +00:00
Alex
84cbc1201c fix: googles update 2025-02-24 22:30:09 +00:00
dependabot[bot]
495bbc2aba build(deps): bump google-genai from 0.5.0 to 1.3.0 in /application
Bumps [google-genai](https://github.com/googleapis/python-genai) from 0.5.0 to 1.3.0.
- [Release notes](https://github.com/googleapis/python-genai/releases)
- [Changelog](https://github.com/googleapis/python-genai/blob/main/CHANGELOG.md)
- [Commits](https://github.com/googleapis/python-genai/compare/v0.5.0...v1.3.0)

---
updated-dependencies:
- dependency-name: google-genai
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-24 22:24:08 +00:00
dependabot[bot]
cb0bceacfa build(deps): bump elasticsearch from 8.17.0 to 8.17.1 in /application
Bumps [elasticsearch](https://github.com/elastic/elasticsearch-py) from 8.17.0 to 8.17.1.
- [Release notes](https://github.com/elastic/elasticsearch-py/releases)
- [Commits](https://github.com/elastic/elasticsearch-py/compare/v8.17.0...v8.17.1)

---
updated-dependencies:
- dependency-name: elasticsearch
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-24 22:23:48 +00:00
Alex
6799050718 Merge pull request #1601 from arc53/dependabot/pip/application/pydantic-2.10.6
build(deps): bump pydantic from 2.10.4 to 2.10.6 in /application
2025-02-24 22:23:46 +00:00
dependabot[bot]
4b892e8939 build(deps): bump transformers from 4.48.0 to 4.49.0 in /application
Bumps [transformers](https://github.com/huggingface/transformers) from 4.48.0 to 4.49.0.
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](https://github.com/huggingface/transformers/compare/v4.48.0...v4.49.0)

---
updated-dependencies:
- dependency-name: transformers
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-24 22:23:30 +00:00
Alex
674001b499 Merge pull request #1583 from arc53/dependabot/pip/application/openapi-schema-validator-0.6.3
build(deps): bump openapi-schema-validator from 0.6.2 to 0.6.3 in /application
2025-02-24 22:23:27 +00:00
ManishMadan2882
c730777134 (fix:bubble) keeping feedback visible once submitted 2025-02-25 00:51:33 +05:30
ManishMadan2882
8148876249 (feat:message analytics) count individual queries 2025-02-25 00:45:19 +05:30
ManishMadan2882
4cf946f856 (feat:feedback) timestamp feedback 2025-02-24 18:53:29 +05:30
ManishMadan2882
05706f1641 (feat:feedback and tokens) count apiKey docs separately 2025-02-24 17:24:53 +05:30
Siddhant Rai
6fed84958e feat: agent-retriever workflow + query rephrase 2025-02-24 16:41:57 +05:30
ManishMadan2882
64011c5988 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-02-24 16:28:44 +05:30
ManishMadan2882
3e02d5a56f (feat:conv) save the conv with key 2025-02-24 16:28:24 +05:30
Alex
14f57bc3a4 Update README.md 2025-02-23 15:30:59 +00:00
aidanbennettjones
ac8f1b9aa3 Merge branch 'arc53:main' into AidanComponentEditChat 2025-02-21 15:02:39 -05:00
Alex
104c6ef457 Merge pull request #1645 from ManishMadan2882/main
Settings: Improving table layout
2025-02-20 22:47:25 +00:00
ManishMadan2882
84661cea36 lint 2025-02-21 00:54:06 +05:30
ManishMadan2882
c2b0ed85d2 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-02-21 00:42:56 +05:30
ManishMadan2882
5a081f2419 (feat/docs) prevent event bubble 2025-02-21 00:42:27 +05:30
ManishMadan2882
88016f9c35 (fix/chatbots) prevent overflow 2025-02-21 00:41:30 +05:30
ManishMadan2882
0d56e62bb8 (fix/tables) responsiveness issues 2025-02-20 14:48:16 +05:30
GH Action - Upstream Sync
567756edd3 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-02-20 01:21:11 +00:00
ManishMadan2882
7cc0a3620e (fix:docs) consistency 2025-02-20 03:33:35 +05:30
ManishMadan2882
b5587e458f Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-02-20 01:39:37 +05:30
ManishMadan2882
b22d965b7b (fix:chatbots) ui 2025-02-20 01:39:25 +05:30
Alex
cc0b41ddfb Merge pull request #1642 from siiddhantt/fix/minor-bugs
fix: minor bugs and enhancement
2025-02-19 16:16:05 +00:00
ManishMadan2882
006aeeebb0 (fix:typo) copy 2025-02-19 16:56:15 +05:30
ManishMadan2882
3cfb1abf62 (fix/merge) revert dropdown skeletons 2025-02-19 16:55:36 +05:30
ManishMadan2882
e1da69040d (fix/merge error) tool config modal 2025-02-19 16:44:35 +05:30
Siddhant Rai
5924693e90 fix: merge errors 2025-02-19 14:37:47 +05:30
Siddhant Rai
9ee7d659df Merge branch 'main' into fix/minor-bugs 2025-02-19 14:16:56 +05:30
Siddhant Rai
ac1b1c3cdd feat: add loading spinner to AddToolModal and improve label spacing in General settings 2025-02-19 13:58:40 +05:30
Alex
8440138ba0 Merge pull request #1641 from ManishMadan2882/main
Refactor: Apply base modal for UI consistency
2025-02-18 23:42:06 +00:00
ManishMadan2882
877b44ec0a (lint) 2025-02-19 04:49:22 +05:30
ManishMadan2882
cc4acb8766 (feat:createAPIModal): UI inconsistency 2025-02-19 04:38:49 +05:30
ManishMadan2882
3aa85bb51c (refactor:modals) reuse wrapper modal 2025-02-19 04:35:33 +05:30
ManishMadan2882
4e948d8bff (refactor:tools modal) reuse wrapper modal 2025-02-19 04:15:31 +05:30
ManishMadan2882
28489d244c (feat:input) consistent colors 2025-02-19 04:10:14 +05:30
ManishMadan2882
acf3dd2762 Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-02-19 04:01:23 +05:30
ManishMadan2882
8589303753 (feat:consistency) modals, dropdowns 2025-02-19 04:01:14 +05:30
ManishMadan2882
0d9fc26119 (refactor): purge unused files 2025-02-19 03:39:15 +05:30
Alex
9dd63c1da4 Merge pull request #1636 from arc53/dependabot/pip/application/duckduckgo-search-7.4.2
build(deps): bump duckduckgo-search from 6.3.0 to 7.4.2 in /application
2025-02-18 21:55:44 +00:00
aidanbennettjones
7ff03ab098 Enables "Enter" Key Functionality for Edit Chat Submission 2025-02-18 15:58:37 -05:00
Alex
750345d209 feat: architecture guide 2025-02-18 14:51:17 +00:00
Alex
03ee16f5ca Update _app.mdx 2025-02-18 09:09:37 +00:00
Alex
586fc80c19 Merge pull request #1640 from ManishMadan2882/docs 2025-02-18 08:54:28 +00:00
ManishMadan2882
13cd221fe5 (feat:widget) update docs 2025-02-18 14:19:53 +05:30
Siddhant Rai
f35af54e9f refactor: clean up code and improve UI elements in various components 2025-02-18 13:10:35 +05:30
Alex
67e37f1ce1 bump docs widget docs 2025-02-17 23:46:23 +00:00
ManishMadan2882
49ff27a5fe Merge branch 'main' of https://github.com/manishmadan2882/docsgpt 2025-02-18 04:22:32 +05:30
ManishMadan2882
04730ba8c7 (feat:theme)exacting the designs 2025-02-18 04:20:20 +05:30
Alex
b2fcf91958 Merge pull request #1637 from arc53/feat/sources-in-widget
Feat/sources in widget
2025-02-17 21:55:02 +00:00
ManishMadan2882
b78d2bd4b1 (feat:widget) add optional sources 2025-02-18 02:09:13 +05:30
dependabot[bot]
2612ce5ad9 build(deps): bump duckduckgo-search from 6.3.0 to 7.4.2 in /application
Bumps [duckduckgo-search](https://github.com/deedy5/duckduckgo_search) from 6.3.0 to 7.4.2.
- [Release notes](https://github.com/deedy5/duckduckgo_search/releases)
- [Commits](https://github.com/deedy5/duckduckgo_search/compare/v6.3.0...v7.4.2)

---
updated-dependencies:
- dependency-name: duckduckgo-search
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-17 20:34:44 +00:00
Manish Madan
798913740e Merge pull request #1317 from utin-francis-peter/feat/sources-in-react-widget
Feat/sources in react widget
2025-02-17 16:26:43 +05:30
Manish Madan
7d0445cc20 Merge branch 'main' into feat/sources-in-react-widget 2025-02-17 15:41:34 +05:30
Alex
361f6895ee Merge pull request #1514 from ayaan-qadri/Fixing-1513 2025-02-16 20:10:33 +00:00
Alex
47442f4f58 fix: bandit workflow only on main repo 2025-02-15 15:06:18 +00:00
dependabot[bot]
307c2e1682 build(deps): bump openapi-schema-validator in /application
Bumps [openapi-schema-validator](https://github.com/python-openapi/openapi-schema-validator) from 0.6.2 to 0.6.3.
- [Release notes](https://github.com/python-openapi/openapi-schema-validator/releases)
- [Commits](https://github.com/python-openapi/openapi-schema-validator/compare/0.6.2...0.6.3)

---
updated-dependencies:
- dependency-name: openapi-schema-validator
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-14 14:49:26 +00:00
Alex
2190359e4d Merge pull request #1631 from arc53/dependabot/pip/application/anthropic-0.45.2
build(deps): bump anthropic from 0.40.0 to 0.45.2 in /application
2025-02-14 14:47:32 +00:00
Alex
27a933c7b7 Merge pull request #1628 from ManishMadan2882/main
Smoother transition in settings
2025-02-14 14:47:16 +00:00
ManishMadan2882
71970a0d1d (feat:transit): reduce delay to 250 2025-02-14 20:06:50 +05:30
ManishMadan2882
7661273cfd (fix/logs) append loader 2025-02-14 19:50:19 +05:30
ManishMadan2882
cd06334049 (feat:transitions) uniform color and animation 2025-02-14 17:38:58 +05:30
Alex
05319e36a7 Merge pull request #1630 from siiddhantt/feat/show-tool-execution
feat: tool calls tracking
2025-02-14 10:27:15 +00:00
ManishMadan2882
200a3b81e5 (feat:loaders) loader for logs, dropdown 2025-02-14 02:20:29 +05:30
dependabot[bot]
5647755762 build(deps): bump anthropic from 0.40.0 to 0.45.2 in /application
Bumps [anthropic](https://github.com/anthropics/anthropic-sdk-python) from 0.40.0 to 0.45.2.
- [Release notes](https://github.com/anthropics/anthropic-sdk-python/releases)
- [Changelog](https://github.com/anthropics/anthropic-sdk-python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/anthropics/anthropic-sdk-python/compare/v0.40.0...v0.45.2)

---
updated-dependencies:
- dependency-name: anthropic
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-13 20:24:49 +00:00
ManishMadan2882
adb2947b52 (feat:transitions): reshaped the tabular loaders 2025-02-14 01:01:07 +05:30
Siddhant Rai
7b05afab74 refactor: formatting + token limit for gemini-2.0-flash-exp 2025-02-14 00:27:27 +05:30
Siddhant Rai
5cf5bed6a8 feat: enhance tool call handling with structured message cleaning and improved UI display 2025-02-14 00:15:01 +05:30
Alex
095cb58df3 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-02-13 11:48:59 +00:00
ManishMadan2882
181bf69994 (feat:transitions) custom hook to loading state 2025-02-13 17:16:17 +05:30
Alex
927b513bf8 fix: error warning 2025-02-13 11:44:05 +00:00
Alex
05801cd90c Update README.md 2025-02-13 11:37:47 +00:00
Alex
a8ac00469d fix: bandit 2025-02-13 11:30:02 +00:00
Alex
1e3ae948a2 feat: add static code analysis 2025-02-13 11:25:03 +00:00
Alex
2d8aa229c6 fix: used for security comment 2025-02-13 11:10:44 +00:00
Alex
84f4812189 Update CONTRIBUTING.md 2025-02-13 10:48:49 +00:00
Siddhant Rai
8a3612e56c fix: improve tool call handling and UI adjustments 2025-02-13 05:02:10 +05:30
ManishMadan2882
d08861fb30 (fix/docs) revert affected portions 2025-02-13 01:16:11 +05:30
ManishMadan2882
ecc0f9d9f5 Merge branch 'main' of https://github.com/arc53/docsgpt 2025-02-13 00:17:58 +05:30
Siddhant Rai
e209699b19 feat: add tool calls tracking and show in frontend 2025-02-12 21:47:47 +05:30
Alex
c8d8690cfd Update README.md 2025-02-12 16:09:53 +00:00
Alex
59d05b698a Update README.md 2025-02-12 16:09:22 +00:00
Alex
1bcbfc8d18 Merge pull request #1629 from arc53/easy-deploy 2025-02-12 15:13:36 +00:00
Pavel
bafed63d40 super-final 2025-02-12 17:41:59 +03:00
ManishMadan2882
828a056e21 Merge branch 'main' of https://github.com/ManishMadan2882/docsgpt 2025-02-12 20:10:36 +05:30
ManishMadan2882
9424f6303a (feat:upload) smooth transitions on advanced fields 2025-02-12 20:10:21 +05:30
Pavel
c0dc5c3a4d finished 2025-02-12 17:33:11 +03:00
Alex
d0fb3da285 Merge pull request #1626 from arc53/dependabot/pip/application/prompt-toolkit-3.0.50
build(deps): bump prompt-toolkit from 3.0.48 to 3.0.50 in /application
2025-02-12 13:49:50 +00:00
Pavel
ccce01800d index-desc 2025-02-12 14:43:24 +03:00
Pavel
b44b9d8016 models+guide-upd+extensions 2025-02-12 14:38:21 +03:00
dependabot[bot]
7592c45bd9 build(deps): bump prompt-toolkit from 3.0.48 to 3.0.50 in /application
Bumps [prompt-toolkit](https://github.com/prompt-toolkit/python-prompt-toolkit) from 3.0.48 to 3.0.50.
- [Release notes](https://github.com/prompt-toolkit/python-prompt-toolkit/releases)
- [Changelog](https://github.com/prompt-toolkit/python-prompt-toolkit/blob/master/CHANGELOG)
- [Commits](https://github.com/prompt-toolkit/python-prompt-toolkit/compare/3.0.48...3.0.50)

---
updated-dependencies:
- dependency-name: prompt-toolkit
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-12 10:32:09 +00:00
Alex
b024936ad7 Update devc-welcome.md 2025-02-12 09:48:21 +00:00
Manish Madan
be2246283f Merge branch 'arc53:main' into main 2025-02-12 00:00:23 +05:30
ManishMadan2882
a7969f6ec8 Merge branch 'main' of https://github.com/ManishMadan2882/docsgpt 2025-02-11 23:58:14 +05:30
ManishMadan2882
ac447dd055 (feat:settings) smoother transitions 2025-02-11 23:55:30 +05:30
Pavel
28cdbe407c tools-remove 2025-02-11 21:19:09 +03:00
Alex
bf486082c9 fix: centering 2025-02-11 17:27:45 +00:00
Alex
41290b463c feat: cards 2025-02-11 17:17:12 +00:00
Pavel
385ebe234e setup+development-docs 2025-02-11 19:17:59 +03:00
Alex
72e9fcc895 Update README.md 2025-02-11 16:08:48 +00:00
Alex
5f42e4ac3f fix: default file codespace 2025-02-11 09:53:26 +00:00
Alex
926ec89f48 Create devc-welcome.md 2025-02-11 09:48:45 +00:00
GH Action - Upstream Sync
440e1b9156 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-02-11 01:19:54 +00:00
ManishMadan2882
ea0a6e413d (feat:bubble) formatted table/inline code 2025-02-11 02:07:54 +05:30
Alex
0de4241b56 Merge pull request #1625 from arc53/handle-bad-tool-names
Handle bad tool names
2025-02-10 17:25:45 +00:00
Alex
6e8a53a204 fix: open new tool after its added 2025-02-10 16:52:59 +00:00
Alex
60772889d5 fix: handle bad tool name input 2025-02-10 16:20:37 +00:00
Alex
7db7c9e978 Merge pull request #1612 from arc53/dependabot/pip/application/marshmallow-3.26.1
build(deps): bump marshmallow from 3.24.1 to 3.26.1 in /application
2025-02-10 13:19:04 +00:00
dependabot[bot]
d85bf67103 build(deps): bump marshmallow from 3.24.1 to 3.26.1 in /application
Bumps [marshmallow](https://github.com/marshmallow-code/marshmallow) from 3.24.1 to 3.26.1.
- [Changelog](https://github.com/marshmallow-code/marshmallow/blob/dev/CHANGELOG.rst)
- [Commits](https://github.com/marshmallow-code/marshmallow/compare/3.24.1...3.26.1)

---
updated-dependencies:
- dependency-name: marshmallow
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-10 13:06:17 +00:00
Alex
926f2e9f48 Merge pull request #1621 from ManishMadan2882/main
Refactor Upload
2025-02-10 13:04:53 +00:00
ManishMadan2882
2019f29e8c (clean) mock changes 2025-02-10 16:02:37 +05:30
ManishMadan2882
3b45b63d2a (fix:upload) ui adjust 2025-02-10 16:02:02 +05:30
Alex
1c08c53121 Merge pull request #1624 from siiddhantt/feat/edit-chunks
feat: view chunks for docs and add/delete them
2025-02-10 09:48:11 +00:00
Siddhant Rai
7623bde159 feat: add update chunk API endpoint and service method 2025-02-10 09:36:18 +05:30
GH Action - Upstream Sync
1ed0f5e78d Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-02-10 01:21:14 +00:00
Piotr Idzik
568ab33a37 style: use underscore for an unused loop variable (#1593)
This addresses the SC2034 warning.
2025-02-09 22:56:52 +00:00
Alex
f639b052e3 fix: remove debugging code 2025-02-09 12:08:19 +00:00
Alex
56f91948f8 feat: improve logging 2025-02-09 12:05:37 +00:00
GH Action - Upstream Sync
6c5e481318 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-02-09 01:24:07 +00:00
ManishMadan2882
f487f1e8c1 Merge branch 'main' of https://github.com/ManishMadan2882/docsgpt 2025-02-09 01:31:15 +05:30
ManishMadan2882
68ee9743fe (feat:upload) advanced fields 2025-02-09 01:31:01 +05:30
Alex
f4cb48ed0d fix: logging 2025-02-08 19:29:54 +00:00
Alex
ad77fe1116 fix: logging in submodules 2025-02-08 18:07:03 +00:00
Alex
28a0667da6 fix: minor logging issue 2025-02-08 12:49:42 +00:00
Siddhant Rai
1f0366c989 Merge branch 'feat/edit-chunks' of https://github.com/siiddhantt/DocsGPT into feat/edit-chunks 2025-02-08 15:00:20 +05:30
Siddhant Rai
3a51922650 fix: linting error 2025-02-08 15:00:02 +05:30
Siddhant Rai
82b2be5046 Merge branch 'main' into feat/edit-chunks 2025-02-08 14:56:34 +05:30
Siddhant Rai
0fc9718c35 feat: loading state and spinner + delete chunk option 2025-02-08 14:52:32 +05:30
GH Action - Upstream Sync
976733a3c3 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-02-08 01:17:57 +00:00
Alex
5d17072709 fix: readme space 2025-02-07 19:22:01 +00:00
Alex
fbad183d39 fix: post create devcontainers 2025-02-07 18:44:24 +00:00
Alex
7356a2ff07 fix: minor docker fixes 2025-02-07 18:39:07 +00:00
Alex
6ff948c107 fix: dockerfile in devcontainer build dir fix 2025-02-07 14:35:44 +00:00
Alex
e3ebce117b fix: devcontainer paths 2 2025-02-07 14:34:19 +00:00
Alex
ce69b09730 fix: devcontainer paths 2025-02-07 14:30:15 +00:00
Alex
c823cef405 fix: devcontainer codespaces correct api address 2025-02-07 14:25:09 +00:00
Siddhant Rai
0379b81d43 feat: view and add document chunks for mongodb and faiss 2025-02-07 19:39:07 +05:30
ManishMadan2882
6a997163fd Merge branch 'main' of https://github.com/ManishMadan2882/docsgpt 2025-02-07 19:37:39 +05:30
ManishMadan2882
93f8466230 (feat:Upload): required form fields 2025-02-07 19:37:24 +05:30
ManishMadan2882
114c8d3c22 (feat:Input) required prop 2025-02-07 17:59:28 +05:30
GH Action - Upstream Sync
3e77e79194 Merge branch 'main' of https://github.com/arc53/DocsGPT 2025-02-07 01:20:29 +00:00
dependabot[bot]
ca91d36979 build(deps): bump pydantic from 2.10.4 to 2.10.6 in /application
Bumps [pydantic](https://github.com/pydantic/pydantic) from 2.10.4 to 2.10.6.
- [Release notes](https://github.com/pydantic/pydantic/releases)
- [Changelog](https://github.com/pydantic/pydantic/blob/main/HISTORY.md)
- [Commits](https://github.com/pydantic/pydantic/compare/v2.10.4...v2.10.6)

---
updated-dependencies:
- dependency-name: pydantic
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-06 20:01:07 +00:00
Alex
d47232246a fix: remove old pypdf 2025-02-06 19:59:42 +00:00
Alex
d819222cf7 fix: remove unused import 2025-02-06 19:46:11 +00:00
Alex
0c4c4d5622 fix: improve error logging 2025-02-06 19:44:09 +00:00
Alex
ad051ed083 fix: docker compose files 2025-02-06 18:51:17 +00:00
ManishMadan2882
1aa0af3e58 Merge branch 'main' of https://github.com/ManishMadan2882/docsgpt 2025-02-06 20:16:31 +05:30
ManishMadan2882
72556b37f5 (refactor:upload) simplify call to /api/remote 2025-02-06 20:16:07 +05:30
Manish Madan
0bddae5775 Merge branch 'arc53:main' into main 2025-02-06 04:09:08 +05:30
ManishMadan2882
1f1e710a6d Merge branch 'main' of https://github.com/ManishMadan2882/docsgpt 2025-02-06 04:07:51 +05:30
ManishMadan2882
b57d418b98 (refactor:upload) remove redundant types 2025-02-06 04:04:18 +05:30
Alex
0913c43219 feat: edit deployment files locations 2025-02-05 18:04:41 +00:00
Alex
d754a43fba feat: devcontainer 2025-02-05 11:54:06 +00:00
Manish Madan
f97b56a87b (fix): date formatting (#1617) 2025-02-05 10:20:14 +00:00
Michele Grimaldi
2f78398914 Fixing issues #1445 (#1603)
* Fixing issues #1445

* Fixed issue #1445
2025-02-05 08:52:18 +00:00
Manish Madan
81b9a34e5e Merge branch 'arc53:main' into main 2025-02-05 03:33:20 +05:30
Alex
73ba078efc fix: init push to ghcr 2025-02-04 22:02:56 +00:00
ManishMadan2882
1ffe0ad85c (fix): date formatting 2025-02-05 03:31:17 +05:30
Alex
797b36a81e fix: container name 2025-02-04 21:56:26 +00:00
Alex
b82c14892e Arm builds (#1615)
* fix: matrix build

* fix: trigger build

* fix: trigger wrong name

* fix: runner name

* fix: manifest fix

* fix: yaml error

* fix: manifest build

* fix: build error

* feat: multi arch containers
2025-02-04 21:02:44 +00:00
Alex
a8891dabec feat: docker arm64 2025-02-04 17:46:44 +00:00
Alex
86ba797665 Merge pull request #1614 from arc53/documentation-footer
footer text with links
2025-02-04 17:35:22 +00:00
Pavel
3830dcb3f3 footer text with links 2025-02-04 19:42:24 +03:00
Alex
c20fe7a773 Update docker-compose.yaml 2025-02-03 13:21:28 +00:00
Ayaan
220a801138 Requested changes 2025-01-29 20:29:39 +05:30
Ayaan
c6821d9cc3 Improve edit message interface 2025-01-29 20:11:55 +05:30
Ayaan
8b59245e6a Decreased margin right for message hover button 2025-01-29 20:06:10 +05:30
utin-francis-peter
2c8a2945f0 feat: better sources scroll management 2024-11-17 10:34:22 +01:00
utin-francis-peter
ba59042e5c Merge branch 'main' into feat/sources-in-react-widget 2024-11-17 09:19:20 +01:00
utin-francis-peter
6f83bd8961 Merge branch 'main' into feat/sources-in-react-widget 2024-11-11 14:36:34 +01:00
utin-francis-peter
a7aae3ff7e style: minor adjustments in border-radius and spacings 2024-11-10 03:29:56 +01:00
utin-francis-peter
25feab9a29 chore: removed unused import 2024-11-10 03:11:41 +01:00
utin-francis-peter
97916bf925 chore: returned themes config into DocsGPTWidget component 2024-11-10 03:08:35 +01:00
utin-francis-peter
42e2c784c4 Merge branch 'main' of https://github.com/utin-francis-peter/DocsGPT into feat/sources-in-react-widget 2024-11-10 03:06:23 +01:00
utin-francis-peter
1a8f89573d feat: query sources in widget 2024-11-09 01:09:22 +01:00
utin-francis-peter
3e87d83ae8 chore: adjusted spacing in source bubble 2024-11-05 21:50:42 +01:00
utin-francis-peter
0784823e21 Merge branch 'main' of https://github.com/utin-francis-peter/DocsGPT into feat/sources-in-react-widget 2024-11-04 16:36:13 +01:00
utin-francis-peter
1a9f47b1bc chore: modified query sources and removed tooltip 2024-11-04 16:33:00 +01:00
utin-francis-peter
991a38df28 Merge branch 'main' of https://github.com/utin-francis-peter/DocsGPT into feat/sources-in-react-widget 2024-10-28 17:44:50 +01:00
utin-francis-peter
656f4da8f9 feat: rendering of response source 2024-10-28 17:34:35 +01:00
utin-francis-peter
f8d65b84db chore: wrapped the base component with ThemeProvider at the root level to make theme props available globally 2024-10-28 17:33:40 +01:00
utin-francis-peter
bd66d0a987 Merge branch 'main' of https://github.com/utin-francis-peter/DocsGPT into feat/sources-in-react-widget 2024-10-14 13:02:38 +01:00
utin-francis-peter
62802eb138 chore: styled component styles for sources, added showSources prop to widget, handled sources data.type, and rendering sources when available 2024-10-14 12:37:02 +01:00
utin-francis-peter
848beb11df chore: corrected typo in var declaration 2024-10-14 12:33:56 +01:00
utin-francis-peter
0481e766ae chore: updated Query and WidgetProps interface with source property 2024-10-14 12:30:57 +01:00
utin-francis-peter
aa57984bde build: added missing dependency 2024-10-11 03:55:35 +01:00
232 changed files with 10576 additions and 6620 deletions

.devcontainer/Dockerfile (new file)

@@ -0,0 +1,15 @@
FROM python:3.12-bookworm
# Install Node.js 20.x
RUN curl -fsSL https://deb.nodesource.com/setup_20.x | bash - \
&& apt-get install -y nodejs \
&& rm -rf /var/lib/apt/lists/*
# Install global npm packages
RUN npm install -g husky vite
# Create and activate Python virtual environment
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
WORKDIR /workspace

.devcontainer/devc-welcome.md (new file)

@@ -0,0 +1,49 @@
# Welcome to DocsGPT Devcontainer
Welcome to the DocsGPT development environment! This guide will help you get started quickly.
## Starting Services
To run DocsGPT, you need to start three main services: Flask (backend), Celery (task queue), and Vite (frontend). Here are the commands to start each service within the devcontainer:
### Vite (Frontend)
```bash
cd frontend
npm run dev -- --host
```
### Flask (Backend)
```bash
flask --app application/app.py run --host=0.0.0.0 --port=7091
```
### Celery (Task Queue)
```bash
celery -A application.app.celery worker -l INFO
```
## Github Codespaces Instructions
### 1. Make Ports Public:
Go to the "Ports" panel in Codespaces (usually located at the bottom of the VS Code window).
For both port 5173 and 7091, right-click on the port and select "Make Public".
![CleanShot 2025-02-12 at 09 46 14@2x](https://github.com/user-attachments/assets/00a34b16-a7ef-47af-9648-87a7e3008475)
### 2. Update VITE_API_HOST:
After making port 7091 public, copy the public URL provided by Codespaces for port 7091.
Open the file frontend/.env.development.
Find the line VITE_API_HOST=http://localhost:7091.
Replace http://localhost:7091 with the public URL you copied from Codespaces.
![CleanShot 2025-02-12 at 09 46 56@2x](https://github.com/user-attachments/assets/c472242f-1079-4cd8-bc0b-2d78db22b94c)
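For reference, a minimal sketch of what the updated line should look like (the hostname below is a placeholder; use the public URL Codespaces shows for forwarded port 7091):
```bash
# frontend/.env.development (example only; substitute your own Codespaces public URL)
VITE_API_HOST=https://<your-codespace-name>-7091.<codespaces-forwarding-domain>
```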

.devcontainer/devcontainer.json (new file)

@@ -0,0 +1,24 @@
{
"name": "DocsGPT Dev Container",
"dockerComposeFile": ["docker-compose-dev.yaml", "docker-compose.override.yaml"],
"service": "dev",
"workspaceFolder": "/workspace",
"postCreateCommand": ".devcontainer/post-create-command.sh",
"forwardPorts": [7091, 5173, 6379, 27017],
"customizations": {
"vscode": {
"extensions": [
"ms-python.python",
"ms-toolsai.jupyter",
"esbenp.prettier-vscode",
"dbaeumer.vscode-eslint"
]
},
"codespaces": {
"openFiles": [
".devcontainer/devc-welcome.md",
"CONTRIBUTING.md"
]
}
}
}

Docker Compose configuration for the dev container (new file; file name not shown)

@@ -0,0 +1,40 @@
version: '3.8'
services:
dev:
build:
context: .
dockerfile: Dockerfile
volumes:
- ../:/workspace:cached
command: sleep infinity
depends_on:
redis:
condition: service_healthy
mongo:
condition: service_healthy
environment:
- CELERY_BROKER_URL=redis://redis:6379/0
- CELERY_RESULT_BACKEND=redis://redis:6379/1
- MONGO_URI=mongodb://mongo:27017/docsgpt
- CACHE_REDIS_URL=redis://redis:6379/2
networks:
- default
redis:
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 30s
retries: 5
mongo:
healthcheck:
test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
interval: 5s
timeout: 30s
retries: 5
networks:
default:
name: docsgpt-dev-network

.devcontainer/post-create-command.sh (new file)

@@ -0,0 +1,32 @@
#!/bin/bash
set -e # Exit immediately if a command exits with a non-zero status
if [ ! -f frontend/.env.development ]; then
cp -n .env-template frontend/.env.development || true # Assuming .env-template is in the root
fi
# Determine VITE_API_HOST based on environment
if [ -n "$CODESPACES" ]; then
# Running in Codespaces
CODESPACE_NAME=$(echo "$CODESPACES" | cut -d'-' -f1) # Extract codespace name
PUBLIC_API_HOST="https://${CODESPACE_NAME}-7091.${GITHUB_CODESPACES_PORT_FORWARDING_DOMAIN}"
echo "Setting VITE_API_HOST for Codespaces: $PUBLIC_API_HOST in frontend/.env.development"
sed -i "s|VITE_API_HOST=.*|VITE_API_HOST=$PUBLIC_API_HOST|" frontend/.env.development
else
# Not running in Codespaces (local devcontainer)
DEFAULT_API_HOST="http://localhost:7091"
echo "Setting VITE_API_HOST for local dev: $DEFAULT_API_HOST in frontend/.env.development"
sed -i "s|VITE_API_HOST=.*|VITE_API_HOST=$DEFAULT_API_HOST|" frontend/.env.development
fi
mkdir -p model
if [ ! -d model/all-mpnet-base-v2 ]; then
wget -q https://d3dg1063dc54p9.cloudfront.net/models/embeddings/mpnet-base-v2.zip -O model/mpnet-base-v2.zip
unzip -q model/mpnet-base-v2.zip -d model
rm model/mpnet-base-v2.zip
fi
pip install -r application/requirements.txt
cd frontend
npm install --include=dev

.github/workflows/bandit.yaml (new file)

@@ -0,0 +1,40 @@
name: Bandit Security Scan
on:
push:
branches:
- main
pull_request:
types: [opened, synchronize, reopened]
jobs:
bandit_scan:
if: ${{ github.repository == 'arc53/DocsGPT' }}
runs-on: ubuntu-latest
permissions:
security-events: write
actions: read
contents: read
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: '3.12'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install bandit # Bandit is needed for this action
if [ -f application/requirements.txt ]; then pip install -r application/requirements.txt; fi
- name: Run Bandit scan
uses: PyCQA/bandit-action@v1
with:
severity: medium
confidence: medium
targets: application/
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

GitHub Actions release workflow for the application Docker image (file name not shown)

@@ -5,20 +5,33 @@ on:
types: [published]
jobs:
deploy:
build:
if: github.repository == 'arc53/DocsGPT'
runs-on: ubuntu-latest
strategy:
matrix:
include:
- platform: linux/amd64
runner: ubuntu-latest
suffix: amd64
- platform: linux/arm64
runner: ubuntu-24.04-arm
suffix: arm64
runs-on: ${{ matrix.runner }}
permissions:
contents: read
packages: write
steps:
- uses: actions/checkout@v4
- name: Set up QEMU
- name: Set up QEMU # Only needed for emulation, not for native arm64 builds
if: matrix.platform == 'linux/arm64'
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
driver: docker-container
install: true
- name: Login to DockerHub
uses: docker/login-action@v3
@@ -33,15 +46,67 @@ jobs:
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push Docker images to docker.io and ghcr.io
- name: Build and push platform-specific images
uses: docker/build-push-action@v6
with:
file: './application/Dockerfile'
platforms: linux/amd64
platforms: ${{ matrix.platform }}
context: ./application
push: true
tags: |
${{ secrets.DOCKER_USERNAME }}/docsgpt:${{ github.event.release.tag_name }},${{ secrets.DOCKER_USERNAME }}/docsgpt:latest
ghcr.io/${{ github.repository_owner }}/docsgpt:${{ github.event.release.tag_name }},ghcr.io/${{ github.repository_owner }}/docsgpt:latest
${{ secrets.DOCKER_USERNAME }}/docsgpt:${{ github.event.release.tag_name }}-${{ matrix.suffix }}
ghcr.io/${{ github.repository_owner }}/docsgpt:${{ github.event.release.tag_name }}-${{ matrix.suffix }}
provenance: false
sbom: false
cache-from: type=registry,ref=${{ secrets.DOCKER_USERNAME }}/docsgpt:latest
cache-to: type=inline
manifest:
if: github.repository == 'arc53/DocsGPT'
needs: build
runs-on: ubuntu-latest
permissions:
packages: write
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
driver: docker-container
install: true
- name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Login to ghcr.io
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Create and push manifest for DockerHub
run: |
set -e
docker manifest create ${{ secrets.DOCKER_USERNAME }}/docsgpt:${{ github.event.release.tag_name }} \
--amend ${{ secrets.DOCKER_USERNAME }}/docsgpt:${{ github.event.release.tag_name }}-amd64 \
--amend ${{ secrets.DOCKER_USERNAME }}/docsgpt:${{ github.event.release.tag_name }}-arm64
docker manifest push ${{ secrets.DOCKER_USERNAME }}/docsgpt:${{ github.event.release.tag_name }}
docker manifest create ${{ secrets.DOCKER_USERNAME }}/docsgpt:latest \
--amend ${{ secrets.DOCKER_USERNAME }}/docsgpt:${{ github.event.release.tag_name }}-amd64 \
--amend ${{ secrets.DOCKER_USERNAME }}/docsgpt:${{ github.event.release.tag_name }}-arm64
docker manifest push ${{ secrets.DOCKER_USERNAME }}/docsgpt:latest
- name: Create and push manifest for ghcr.io
run: |
set -e
docker manifest create ghcr.io/${{ github.repository_owner }}/docsgpt:${{ github.event.release.tag_name }} \
--amend ghcr.io/${{ github.repository_owner }}/docsgpt:${{ github.event.release.tag_name }}-amd64 \
--amend ghcr.io/${{ github.repository_owner }}/docsgpt:${{ github.event.release.tag_name }}-arm64
docker manifest push ghcr.io/${{ github.repository_owner }}/docsgpt:${{ github.event.release.tag_name }}
docker manifest create ghcr.io/${{ github.repository_owner }}/docsgpt:latest \
--amend ghcr.io/${{ github.repository_owner }}/docsgpt:${{ github.event.release.tag_name }}-amd64 \
--amend ghcr.io/${{ github.repository_owner }}/docsgpt:${{ github.event.release.tag_name }}-arm64
docker manifest push ghcr.io/${{ github.repository_owner }}/docsgpt:latest

GitHub Actions release workflow for the frontend Docker image (file name not shown)

@@ -5,20 +5,33 @@ on:
types: [published]
jobs:
deploy:
build:
if: github.repository == 'arc53/DocsGPT'
runs-on: ubuntu-latest
strategy:
matrix:
include:
- platform: linux/amd64
runner: ubuntu-latest
suffix: amd64
- platform: linux/arm64
runner: ubuntu-24.04-arm
suffix: arm64
runs-on: ${{ matrix.runner }}
permissions:
contents: read
packages: write
steps:
- uses: actions/checkout@v4
- name: Set up QEMU
- name: Set up QEMU # Only needed for emulation, not for native arm64 builds
if: matrix.platform == 'linux/arm64'
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
driver: docker-container
install: true
- name: Login to DockerHub
uses: docker/login-action@v3
@@ -33,16 +46,67 @@ jobs:
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
# Runs a single command using the runners shell
- name: Build and push Docker images to docker.io and ghcr.io
- name: Build and push platform-specific images
uses: docker/build-push-action@v6
with:
file: './frontend/Dockerfile'
platforms: linux/amd64, linux/arm64
platforms: ${{ matrix.platform }}
context: ./frontend
push: true
tags: |
${{ secrets.DOCKER_USERNAME }}/docsgpt-fe:${{ github.event.release.tag_name }},${{ secrets.DOCKER_USERNAME }}/docsgpt-fe:latest
ghcr.io/${{ github.repository_owner }}/docsgpt-fe:${{ github.event.release.tag_name }},ghcr.io/${{ github.repository_owner }}/docsgpt-fe:latest
${{ secrets.DOCKER_USERNAME }}/docsgpt-fe:${{ github.event.release.tag_name }}-${{ matrix.suffix }}
ghcr.io/${{ github.repository_owner }}/docsgpt-fe:${{ github.event.release.tag_name }}-${{ matrix.suffix }}
provenance: false
sbom: false
cache-from: type=registry,ref=${{ secrets.DOCKER_USERNAME }}/docsgpt-fe:latest
cache-to: type=inline
manifest:
if: github.repository == 'arc53/DocsGPT'
needs: build
runs-on: ubuntu-latest
permissions:
packages: write
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
driver: docker-container
install: true
- name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Login to ghcr.io
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Create and push manifest for DockerHub
run: |
set -e
docker manifest create ${{ secrets.DOCKER_USERNAME }}/docsgpt-fe:${{ github.event.release.tag_name }} \
--amend ${{ secrets.DOCKER_USERNAME }}/docsgpt-fe:${{ github.event.release.tag_name }}-amd64 \
--amend ${{ secrets.DOCKER_USERNAME }}/docsgpt-fe:${{ github.event.release.tag_name }}-arm64
docker manifest push ${{ secrets.DOCKER_USERNAME }}/docsgpt-fe:${{ github.event.release.tag_name }}
docker manifest create ${{ secrets.DOCKER_USERNAME }}/docsgpt-fe:latest \
--amend ${{ secrets.DOCKER_USERNAME }}/docsgpt-fe:${{ github.event.release.tag_name }}-amd64 \
--amend ${{ secrets.DOCKER_USERNAME }}/docsgpt-fe:${{ github.event.release.tag_name }}-arm64
docker manifest push ${{ secrets.DOCKER_USERNAME }}/docsgpt-fe:latest
- name: Create and push manifest for ghcr.io
run: |
set -e
docker manifest create ghcr.io/${{ github.repository_owner }}/docsgpt-fe:${{ github.event.release.tag_name }} \
--amend ghcr.io/${{ github.repository_owner }}/docsgpt-fe:${{ github.event.release.tag_name }}-amd64 \
--amend ghcr.io/${{ github.repository_owner }}/docsgpt-fe:${{ github.event.release.tag_name }}-arm64
docker manifest push ghcr.io/${{ github.repository_owner }}/docsgpt-fe:${{ github.event.release.tag_name }}
docker manifest create ghcr.io/${{ github.repository_owner }}/docsgpt-fe:latest \
--amend ghcr.io/${{ github.repository_owner }}/docsgpt-fe:${{ github.event.release.tag_name }}-amd64 \
--amend ghcr.io/${{ github.repository_owner }}/docsgpt-fe:${{ github.event.release.tag_name }}-arm64
docker manifest push ghcr.io/${{ github.repository_owner }}/docsgpt-fe:latest

View File

@@ -1,4 +1,4 @@
name: Build and push DocsGPT Docker image for development
name: Build and push multi-arch DocsGPT Docker image
on:
workflow_dispatch:
@@ -7,27 +7,36 @@ on:
- main
jobs:
deploy:
build:
if: github.repository == 'arc53/DocsGPT'
runs-on: ubuntu-latest
strategy:
matrix:
include:
- platform: linux/amd64
runner: ubuntu-latest
suffix: amd64
- platform: linux/arm64
runner: ubuntu-24.04-arm
suffix: arm64
runs-on: ${{ matrix.runner }}
permissions:
contents: read
packages: write
steps:
- uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
driver: docker-container
install: true
- name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Login to ghcr.io
uses: docker/login-action@v3
with:
@@ -35,15 +44,57 @@ jobs:
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push Docker images to docker.io and ghcr.io
- name: Build and push platform-specific images
uses: docker/build-push-action@v6
with:
file: './application/Dockerfile'
platforms: linux/amd64
platforms: ${{ matrix.platform }}
context: ./application
push: true
tags: |
${{ secrets.DOCKER_USERNAME }}/docsgpt:develop
ghcr.io/${{ github.repository_owner }}/docsgpt:develop
${{ secrets.DOCKER_USERNAME }}/docsgpt:develop-${{ matrix.suffix }}
ghcr.io/${{ github.repository_owner }}/docsgpt:develop-${{ matrix.suffix }}
provenance: false
sbom: false
cache-from: type=registry,ref=${{ secrets.DOCKER_USERNAME }}/docsgpt:develop
cache-to: type=inline
manifest:
if: github.repository == 'arc53/DocsGPT'
needs: build
runs-on: ubuntu-latest
permissions:
packages: write
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
driver: docker-container
install: true
- name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Login to ghcr.io
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Create and push manifest for DockerHub
run: |
docker manifest create ${{ secrets.DOCKER_USERNAME }}/docsgpt:develop \
--amend ${{ secrets.DOCKER_USERNAME }}/docsgpt:develop-amd64 \
--amend ${{ secrets.DOCKER_USERNAME }}/docsgpt:develop-arm64
docker manifest push ${{ secrets.DOCKER_USERNAME }}/docsgpt:develop
- name: Create and push manifest for ghcr.io
run: |
docker manifest create ghcr.io/${{ github.repository_owner }}/docsgpt:develop \
--amend ghcr.io/${{ github.repository_owner }}/docsgpt:develop-amd64 \
--amend ghcr.io/${{ github.repository_owner }}/docsgpt:develop-arm64
docker manifest push ghcr.io/${{ github.repository_owner }}/docsgpt:develop

View File

@@ -7,20 +7,33 @@ on:
- main
jobs:
deploy:
build:
if: github.repository == 'arc53/DocsGPT'
runs-on: ubuntu-latest
strategy:
matrix:
include:
- platform: linux/amd64
runner: ubuntu-latest
suffix: amd64
- platform: linux/arm64
runner: ubuntu-24.04-arm
suffix: arm64
runs-on: ${{ matrix.runner }}
permissions:
contents: read
packages: write
steps:
- uses: actions/checkout@v4
- name: Set up QEMU
- name: Set up QEMU # Only needed for emulation, not for native arm64 builds
if: matrix.platform == 'linux/arm64'
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
driver: docker-container
install: true
- name: Login to DockerHub
uses: docker/login-action@v3
@@ -35,15 +48,57 @@ jobs:
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push Docker images to docker.io and ghcr.io
- name: Build and push platform-specific images
uses: docker/build-push-action@v6
with:
file: './frontend/Dockerfile'
platforms: linux/amd64
platforms: ${{ matrix.platform }}
context: ./frontend
push: true
tags: |
${{ secrets.DOCKER_USERNAME }}/docsgpt-fe:develop
ghcr.io/${{ github.repository_owner }}/docsgpt-fe:develop
${{ secrets.DOCKER_USERNAME }}/docsgpt-fe:develop-${{ matrix.suffix }}
ghcr.io/${{ github.repository_owner }}/docsgpt-fe:develop-${{ matrix.suffix }}
provenance: false
sbom: false
cache-from: type=registry,ref=${{ secrets.DOCKER_USERNAME }}/docsgpt-fe:develop
cache-to: type=inline
manifest:
if: github.repository == 'arc53/DocsGPT'
needs: build
runs-on: ubuntu-latest
permissions:
packages: write
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
driver: docker-container
install: true
- name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Login to ghcr.io
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Create and push manifest for DockerHub
run: |
docker manifest create ${{ secrets.DOCKER_USERNAME }}/docsgpt-fe:develop \
--amend ${{ secrets.DOCKER_USERNAME }}/docsgpt-fe:develop-amd64 \
--amend ${{ secrets.DOCKER_USERNAME }}/docsgpt-fe:develop-arm64
docker manifest push ${{ secrets.DOCKER_USERNAME }}/docsgpt-fe:develop
- name: Create and push manifest for ghcr.io
run: |
docker manifest create ghcr.io/${{ github.repository_owner }}/docsgpt-fe:develop \
--amend ghcr.io/${{ github.repository_owner }}/docsgpt-fe:develop-amd64 \
--amend ghcr.io/${{ github.repository_owner }}/docsgpt-fe:develop-arm64
docker manifest push ghcr.io/${{ github.repository_owner }}/docsgpt-fe:develop

View File

@@ -35,18 +35,40 @@ Tech Stack Overview:
- 🖥 Backend: Developed in Python 🐍
### 🌐 If you are looking to contribute to frontend (⚛React, Vite):
### 🌐 Frontend Contributions (⚛️ React, Vite)
* The updated Figma design can be found [here](https://www.figma.com/file/OXLtrl1EAy885to6S69554/DocsGPT?node-id=0%3A1&t=hjWVuxRg9yi5YkJ9-1). Please try to follow the guidelines.
* **Coding Style:** We follow a strict coding style enforced by ESLint and Prettier. Please ensure your code adheres to the configuration provided in our repository's `frontend/.eslintrc.js` file. We recommend configuring your editor with ESLint and Prettier to help with this.
* **Component Structure:** Strive for small, reusable components. Favor functional components and hooks over class components where possible.
* **State Management:** If you need to add stores, please use Redux.
- The updated Figma design can be found [here](https://www.figma.com/file/OXLtrl1EAy885to6S69554/DocsGPT?node-id=0%3A1&t=hjWVuxRg9yi5YkJ9-1).
Please try to follow the guidelines.
### 🖥 If you are looking to contribute to Backend (🐍 Python):
### 🖥 Backend Contributions (🐍 Python)
- Review our issues and contribute to [`/application`](https://github.com/arc53/DocsGPT/tree/main/application)
- All new code should be covered with unit tests ([pytest](https://github.com/pytest-dev/pytest)). Please find tests under [`/tests`](https://github.com/arc53/DocsGPT/tree/main/tests) folder.
- Before submitting your Pull Request, ensure it can be queried after ingesting some test data.
- **Coding Style:** We adhere to the [PEP 8](https://www.python.org/dev/peps/pep-0008/) style guide for Python code. We use `ruff` as our linter and code formatter. Please ensure your code is formatted correctly and passes `ruff` checks before submitting.
- **Type Hinting:** Please use type hints for all function arguments and return values. This improves code readability and helps catch errors early. Example:
```python
def my_function(name: str, count: int) -> list[str]:
...
```
- **Docstrings:** All functions and classes should have docstrings explaining their purpose, parameters, and return values. We prefer the [Google style docstrings](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html). Example:
```python
def my_function(name: str, count: int) -> list[str]:
"""Does something with a name and a count.
Args:
name: The name to use.
count: The number of times to do it.
Returns:
A list of strings.
"""
...
```
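For reference, here is a minimal sketch of the kind of pytest unit test expected under `/tests` (the imported module and the asserted behaviour are hypothetical, shown only to illustrate the style):
```python
# tests/test_my_function.py -- hypothetical test module, for illustration only.
import pytest

from application.example_module import my_function  # hypothetical import path


def test_my_function_returns_list_of_strings():
    result = my_function("docs", 3)
    # The docstring example above promises a list of strings.
    assert isinstance(result, list)
    assert all(isinstance(item, str) for item in result)


def test_my_function_rejects_negative_count():
    # Assumed validation behaviour, shown only to illustrate pytest.raises usage.
    with pytest.raises(ValueError):
        my_function("docs", -1)
```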
### Testing

View File

@@ -15,14 +15,14 @@
<a href="https://github.com/arc53/DocsGPT">![link to main GitHub showing Stars number](https://img.shields.io/github/stars/arc53/docsgpt?style=social)</a>
<a href="https://github.com/arc53/DocsGPT">![link to main GitHub showing Forks number](https://img.shields.io/github/forks/arc53/docsgpt?style=social)</a>
<a href="https://github.com/arc53/DocsGPT/blob/main/LICENSE">![link to license file](https://img.shields.io/github/license/arc53/docsgpt)</a>
<a href="https://www.bestpractices.dev/projects/9907"><img src="https://www.bestpractices.dev/projects/9907/badge"></a>
<a href="https://discord.gg/n5BX8dh8rU">![link to discord](https://img.shields.io/discord/1070046503302877216)</a>
<a href="https://twitter.com/docsgptai">![X (formerly Twitter) URL](https://img.shields.io/twitter/follow/docsgptai)</a>
<a href="https://docs.docsgpt.cloud/quickstart">⚡️ Quickstart</a><a href="https://app.docsgpt.cloud/">☁️ Cloud Version</a><a href="https://discord.gg/n5BX8dh8rU">💬 Discord</a>
<br>
[☁️ Cloud Version](https://app.docsgpt.cloud/) • [💬 Discord](https://discord.gg/n5BX8dh8rU) • [📖 Guides](https://docs.docsgpt.cloud/)
<a href="https://docs.docsgpt.cloud/">📖 Documentation</a><a href="https://github.com/arc53/DocsGPT/blob/main/CONTRIBUTING.md">👫 Contribute</a><a href="https://blog.docsgpt.cloud/">🗞 Blog</a>
<br>
[👫 Contribute](https://github.com/arc53/DocsGPT/blob/main/CONTRIBUTING.md) • [🏠 Self-host](https://docs.docsgpt.cloud/Guides/How-to-use-different-LLM) • [⚡️ Quickstart](https://github.com/arc53/DocsGPT#quickstart)
</div>
<div align="center">
@@ -35,6 +35,7 @@
<li><strong>🗂️ Wide Format Support:</strong> Reads PDF, DOCX, CSV, XLSX, EPUB, MD, RST, HTML, MDX, JSON, PPTX, and images.</li>
<li><strong>🌐 Web & Data Integration:</strong> Ingests from URLs, sitemaps, Reddit, GitHub and web crawlers.</li>
<li><strong>✅ Reliable Answers:</strong> Get accurate, hallucination-free responses with source citations viewable in a clean UI.</li>
<li><strong>🔑 Streamlined API Keys:</strong> Generate keys linked to your settings, documents, and models, simplifying chatbot and integration setup.</li>
<li><strong>🔗 Actionable Tooling:</strong> Connect to APIs, tools, and other services to enable LLM actions.</li>
<li><strong>🧩 Pre-built Integrations:</strong> Use readily available HTML/React chat widgets, search tools, Discord/Telegram bots, and more.</li>
<li><strong>🔌 Flexible Deployment:</strong> Works with major LLMs (OpenAI, Google, Anthropic) and local models (Ollama, llama_cpp).</li>
@@ -45,11 +46,11 @@
- [x] Full GoogleAI compatibility (Jan 2025)
- [x] Add tools (Jan 2025)
- [x] Manually updating chunks in the app UI (Feb 2025)
- [x] Devcontainer for easy development (Feb 2025)
- [ ] Anthropic Tool compatibility
- [ ] Add triggerable actions / tools (webhook)
- [ ] Add OAuth 2.0 authentication for tools and sources
- [ ] Manually updating chunks in the app UI
- [ ] Devcontainer for easy development
- [ ] Chatbots menu re-design to handle tools, scheduling, and more
You can find our full roadmap [here](https://github.com/orgs/arc53/projects/2). Please don't hesitate to contribute or create issues, it helps us improve DocsGPT!
@@ -62,51 +63,51 @@ We're eager to provide personalized assistance when deploying your DocsGPT to a
[Send Email :email:](mailto:support@docsgpt.cloud?subject=DocsGPT%20support%2Fsolutions)
## Join the Lighthouse Program 🌟
Calling all developers and GenAI innovators! The **DocsGPT Lighthouse Program** connects technical leaders actively deploying or extending DocsGPT in real-world scenarios. Collaborate directly with our team to shape the roadmap, access priority support, and build enterprise-ready solutions with exclusive community insights.
[Learn More & Apply →](https://docs.google.com/forms/d/1KAADiJinUJ8EMQyfTXUIGyFbqINNClNR3jBNWq7DgTE)
## QuickStart
> [!Note]
> Make sure you have [Docker](https://docs.docker.com/engine/install/) installed
A more detailed [Quickstart](https://docs.docsgpt.cloud/quickstart) is available in our documentation
1. Clone the repository and run the following command:
```bash
git clone https://github.com/arc53/DocsGPT.git
cd DocsGPT
```
On Mac OS or Linux, write:
2. Run the following command:
```bash
./setup.sh
```
It will install all the dependencies and allow you to download the local model, use OpenAI or use our LLM API.
Otherwise, refer to this Guide for Windows:
On windows:
2. Create a `.env` file in your root directory and set the env variables.
It should look like this inside:
```
LLM_NAME=[docsgpt or openai or others]
API_KEY=[if LLM_NAME is openai]
```
See optional environment variables in the [/application/.env_sample](https://github.com/arc53/DocsGPT/blob/main/application/.env_sample) file.
3. Run the following command:
1. **Clone the repository:**
```bash
docker compose up --build
git clone https://github.com/arc53/DocsGPT.git
cd DocsGPT
```
4. Navigate to http://localhost:5173/.
To stop, just run `Ctrl + C`.
**For macOS and Linux:**
2. **Run the setup script:**
```bash
./setup.sh
```
This interactive script will guide you through setting up DocsGPT. It offers four options: using the public API, running locally, connecting to a local inference engine, or using a cloud API provider. The script will automatically configure your `.env` file and handle necessary downloads and installations based on your chosen option.
**For Windows:**
2. **Follow the Docker Deployment Guide:**
Please refer to the [Docker Deployment documentation](https://docs.docsgpt.cloud/Deploying/Docker-Deploying) for detailed step-by-step instructions on setting up DocsGPT using Docker.
**Navigate to http://localhost:5173/**
To stop DocsGPT, open a terminal in the `DocsGPT` directory and run:
```bash
docker compose -f deployment/docker-compose.yaml down
```
(or use the specific `docker compose down` command shown after running `setup.sh`).
> [!Note]
> For development environment setup instructions, please refer to the [Development Environment Guide](https://docs.docsgpt.cloud/Deploying/Development-Environment).

View File

@@ -6,7 +6,6 @@ ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
apt-get install -y software-properties-common && \
add-apt-repository ppa:deadsnakes/ppa && \
# Install necessary packages and Python
apt-get update && \
apt-get install -y --no-install-recommends gcc wget unzip libc6-dev python3.12 python3.12-venv && \
rm -rf /var/lib/apt/lists/*
@@ -20,7 +19,7 @@ RUN if [ -f /usr/bin/python3.12 ]; then \
# Download and unzip the model
RUN wget https://d3dg1063dc54p9.cloudfront.net/models/embeddings/mpnet-base-v2.zip && \
unzip mpnet-base-v2.zip -d model && \
unzip mpnet-base-v2.zip -d models && \
rm mpnet-base-v2.zip
# Install Rust
@@ -49,7 +48,6 @@ FROM ubuntu:24.04 as final
RUN apt-get update && \
apt-get install -y software-properties-common && \
add-apt-repository ppa:deadsnakes/ppa && \
# Install Python
apt-get update && apt-get install -y --no-install-recommends python3.12 && \
ln -s /usr/bin/python3.12 /usr/bin/python && \
rm -rf /var/lib/apt/lists/*
@@ -63,7 +61,8 @@ RUN groupadd -r appuser && \
# Copy the virtual environment and model from the builder stage
COPY --from=builder /venv /venv
COPY --from=builder /model /app/model
COPY --from=builder /models /app/models
# Copy your application code
COPY . /app/application

View File

@@ -0,0 +1,17 @@
from application.agents.classic_agent import ClassicAgent
class AgentCreator:
agents = {
"classic": ClassicAgent,
}
@classmethod
def create_agent(cls, type, *args, **kwargs):
agent_class = cls.agents.get(type.lower())
if not agent_class:
raise ValueError(f"No agent class found for type {type}")
config = kwargs.pop('config', None)
if isinstance(config, dict) and 'proxy_id' in config and 'proxy_id' not in kwargs:
kwargs['proxy_id'] = config['proxy_id']
return agent_class(*args, **kwargs)
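A rough usage sketch, mirroring how the stream and answer routes below call the factory (the literal values are placeholders, not real settings):
```python
# Hypothetical caller -- values are placeholders; the routes pass settings.* instead.
from application.agents.agent_creator import AgentCreator

agent = AgentCreator.create_agent(
    "classic",                    # settings.AGENT_NAME in the routes
    endpoint="stream",
    llm_name="openai",            # settings.LLM_NAME
    gpt_model="gpt-4o-mini",      # placeholder model name
    api_key="...",                # settings.API_KEY
    user_api_key=None,
    prompt="...",                 # resolved via get_prompt(prompt_id)
    proxy_id=None,                # forwarded to tools that support proxied requests
    chat_history=[],
    decoded_token={"sub": "local"},
)
```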

View File

@@ -1,21 +1,40 @@
from typing import Dict, Generator
from application.agents.llm_handler import get_llm_handler
from application.agents.tools.tool_action_parser import ToolActionParser
from application.agents.tools.tool_manager import ToolManager
from application.core.mongo_db import MongoDB
from application.llm.llm_creator import LLMCreator
from application.tools.llm_handler import get_llm_handler
from application.tools.tool_action_parser import ToolActionParser
from application.tools.tool_manager import ToolManager
class Agent:
def __init__(self, llm_name, gpt_model, api_key, user_api_key=None):
# Initialize the LLM with the provided parameters
class BaseAgent:
def __init__(
self,
endpoint,
llm_name,
gpt_model,
api_key,
user_api_key=None,
decoded_token=None,
proxy_id=None,
):
self.endpoint = endpoint
self.llm = LLMCreator.create_llm(
llm_name, api_key=api_key, user_api_key=user_api_key
llm_name,
api_key=api_key,
user_api_key=user_api_key,
decoded_token=decoded_token,
)
self.llm_handler = get_llm_handler(llm_name)
self.gpt_model = gpt_model
# Static tool configuration (to be replaced later)
self.tools = []
self.tool_config = {}
self.tool_calls = []
self.proxy_id = proxy_id
def gen(self, *args, **kwargs) -> Generator[Dict, None, None]:
raise NotImplementedError('Method "gen" must be implemented in the child class')
def _get_user_tools(self, user="local"):
mongo = MongoDB.get_client()
@@ -24,6 +43,11 @@ class Agent:
user_tools = user_tools_collection.find({"user": user, "status": True})
user_tools = list(user_tools)
tools_by_id = {str(tool["_id"]): tool for tool in user_tools}
if hasattr(self, 'proxy_id') and self.proxy_id:
for tool_id, tool in tools_by_id.items():
if 'config' not in tool:
tool['config'] = {}
tool['config']['proxy_id'] = self.proxy_id
return tools_by_id
def _build_tool_parameters(self, action):
@@ -52,6 +76,10 @@ class Agent:
},
}
for tool_id, tool in tools_dict.items()
if (
(tool["name"] == "api_tool" and "actions" in tool.get("config", {}))
or (tool["name"] != "api_tool" and "actions" in tool)
)
for action in (
tool["config"]["actions"].values()
if tool["name"] == "api_tool"
@@ -105,6 +133,7 @@ class Agent:
"method": tool_data["config"]["actions"][action_name]["method"],
"headers": headers,
"query_params": query_params,
"proxy_id": self.proxy_id,
}
if tool_data["name"] == "api_tool"
else tool_data["config"]
@@ -119,42 +148,14 @@ class Agent:
print(f"Executing tool: {action_name} with args: {call_args}")
result = tool.execute_action(action_name, **parameters)
call_id = getattr(call, "id", None)
tool_call_data = {
"tool_name": tool_data["name"],
"call_id": call_id if call_id is not None else "None",
"action_name": f"{action_name}_{tool_id}",
"arguments": call_args,
"result": result,
}
self.tool_calls.append(tool_call_data)
return result, call_id
def _simple_tool_agent(self, messages):
tools_dict = self._get_user_tools()
self._prepare_tools(tools_dict)
resp = self.llm.gen(model=self.gpt_model, messages=messages, tools=self.tools)
if isinstance(resp, str):
yield resp
return
if hasattr(resp, "message") and hasattr(resp.message, "content"):
yield resp.message.content
return
resp = self.llm_handler.handle_response(self, resp, tools_dict, messages)
if isinstance(resp, str):
yield resp
elif hasattr(resp, "message") and hasattr(resp.message, "content"):
yield resp.message.content
else:
completion = self.llm.gen_stream(
model=self.gpt_model, messages=messages, tools=self.tools
)
for line in completion:
yield line
return
def gen(self, messages):
if self.llm.supports_tools():
resp = self._simple_tool_agent(messages)
for line in resp:
yield line
else:
resp = self.llm.gen_stream(model=self.gpt_model, messages=messages)
for line in resp:
yield line

View File

@@ -0,0 +1,141 @@
import uuid
from typing import Dict, Generator
from application.agents.base import BaseAgent
from application.logging import build_stack_data, log_activity, LogContext
from application.retriever.base import BaseRetriever
class ClassicAgent(BaseAgent):
def __init__(
self,
endpoint,
llm_name,
gpt_model,
api_key,
user_api_key=None,
prompt="",
chat_history=None,
decoded_token=None,
proxy_id=None,
):
super().__init__(
endpoint, llm_name, gpt_model, api_key, user_api_key, decoded_token, proxy_id
)
self.user = decoded_token.get("sub")
self.prompt = prompt
self.chat_history = chat_history if chat_history is not None else []
@log_activity()
def gen(
self, query: str, retriever: BaseRetriever, log_context: LogContext = None
) -> Generator[Dict, None, None]:
yield from self._gen_inner(query, retriever, log_context)
def _gen_inner(
self, query: str, retriever: BaseRetriever, log_context: LogContext
) -> Generator[Dict, None, None]:
retrieved_data = self._retriever_search(retriever, query, log_context)
docs_together = "\n".join([doc["text"] for doc in retrieved_data])
p_chat_combine = self.prompt.replace("{summaries}", docs_together)
messages_combine = [{"role": "system", "content": p_chat_combine}]
if len(self.chat_history) > 0:
for i in self.chat_history:
if "prompt" in i and "response" in i:
messages_combine.append({"role": "user", "content": i["prompt"]})
messages_combine.append(
{"role": "assistant", "content": i["response"]}
)
if "tool_calls" in i:
for tool_call in i["tool_calls"]:
call_id = tool_call.get("call_id")
if call_id is None or call_id == "None":
call_id = str(uuid.uuid4())
function_call_dict = {
"function_call": {
"name": tool_call.get("action_name"),
"args": tool_call.get("arguments"),
"call_id": call_id,
}
}
function_response_dict = {
"function_response": {
"name": tool_call.get("action_name"),
"response": {"result": tool_call.get("result")},
"call_id": call_id,
}
}
messages_combine.append(
{"role": "assistant", "content": [function_call_dict]}
)
messages_combine.append(
{"role": "tool", "content": [function_response_dict]}
)
messages_combine.append({"role": "user", "content": query})
tools_dict = self._get_user_tools(self.user)
self._prepare_tools(tools_dict)
resp = self._llm_gen(messages_combine, log_context)
if isinstance(resp, str):
yield {"answer": resp}
return
if (
hasattr(resp, "message")
and hasattr(resp.message, "content")
and resp.message.content is not None
):
yield {"answer": resp.message.content}
return
resp = self._llm_handler(resp, tools_dict, messages_combine, log_context)
if isinstance(resp, str):
yield {"answer": resp}
elif (
hasattr(resp, "message")
and hasattr(resp.message, "content")
and resp.message.content is not None
):
yield {"answer": resp.message.content}
else:
completion = self.llm.gen_stream(
model=self.gpt_model, messages=messages_combine, tools=self.tools
)
for line in completion:
if isinstance(line, str):
yield {"answer": line}
yield {"sources": retrieved_data}
yield {"tool_calls": self.tool_calls.copy()}
def _retriever_search(self, retriever, query, log_context):
retrieved_data = retriever.search(query)
if log_context:
data = build_stack_data(retriever, exclude_attributes=["llm"])
log_context.stacks.append({"component": "retriever", "data": data})
return retrieved_data
def _llm_gen(self, messages_combine, log_context):
resp = self.llm.gen_stream(
model=self.gpt_model, messages=messages_combine, tools=self.tools
)
if log_context:
data = build_stack_data(self.llm)
log_context.stacks.append({"component": "llm", "data": data})
return resp
def _llm_handler(self, resp, tools_dict, messages_combine, log_context):
resp = self.llm_handler.handle_response(
self, resp, tools_dict, messages_combine
)
if log_context:
data = build_stack_data(self.llm_handler)
log_context.stacks.append({"component": "llm_handler", "data": data})
return resp
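A sketch of how the generator is consumed downstream; this mirrors `complete_stream` later in this changeset, assuming `agent` and `retriever` have already been constructed:
```python
# Hypothetical consumer of ClassicAgent.gen -- mirrors complete_stream below.
response_full = ""
source_log_docs = []
tool_calls = []

for line in agent.gen(query="What does DocsGPT do?", retriever=retriever):
    if "answer" in line:
        response_full += str(line["answer"])   # streamed answer fragments
    elif "sources" in line:
        source_log_docs = line["sources"]      # retrieved source documents
    elif "tool_calls" in line:
        tool_calls = line["tool_calls"]        # record of tool invocations
```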

View File

@@ -0,0 +1,254 @@
import json
from abc import ABC, abstractmethod
from application.logging import build_stack_data
class LLMHandler(ABC):
def __init__(self):
self.llm_calls = []
self.tool_calls = []
@abstractmethod
def handle_response(self, agent, resp, tools_dict, messages, **kwargs):
pass
class OpenAILLMHandler(LLMHandler):
def handle_response(self, agent, resp, tools_dict, messages, stream: bool = True):
if not stream:
while hasattr(resp, "finish_reason") and resp.finish_reason == "tool_calls":
message = json.loads(resp.model_dump_json())["message"]
keys_to_remove = {"audio", "function_call", "refusal"}
filtered_data = {
k: v for k, v in message.items() if k not in keys_to_remove
}
messages.append(filtered_data)
tool_calls = resp.message.tool_calls
for call in tool_calls:
try:
self.tool_calls.append(call)
tool_response, call_id = agent._execute_tool_action(
tools_dict, call
)
function_call_dict = {
"function_call": {
"name": call.function.name,
"args": call.function.arguments,
"call_id": call_id,
}
}
function_response_dict = {
"function_response": {
"name": call.function.name,
"response": {"result": tool_response},
"call_id": call_id,
}
}
messages.append(
{"role": "assistant", "content": [function_call_dict]}
)
messages.append(
{"role": "tool", "content": [function_response_dict]}
)
except Exception as e:
messages.append(
{
"role": "tool",
"content": f"Error executing tool: {str(e)}",
"tool_call_id": call_id,
}
)
resp = agent.llm.gen_stream(
model=agent.gpt_model, messages=messages, tools=agent.tools
)
self.llm_calls.append(build_stack_data(agent.llm))
return resp
else:
while True:
tool_calls = {}
for chunk in resp:
if isinstance(chunk, str) and len(chunk) > 0:
return
elif hasattr(chunk, "delta"):
chunk_delta = chunk.delta
if (
hasattr(chunk_delta, "tool_calls")
and chunk_delta.tool_calls is not None
):
for tool_call in chunk_delta.tool_calls:
index = tool_call.index
if index not in tool_calls:
tool_calls[index] = {
"id": "",
"function": {"name": "", "arguments": ""},
}
current = tool_calls[index]
if tool_call.id:
current["id"] = tool_call.id
if tool_call.function.name:
current["function"][
"name"
] = tool_call.function.name
if tool_call.function.arguments:
current["function"][
"arguments"
] += tool_call.function.arguments
tool_calls[index] = current
if (
hasattr(chunk, "finish_reason")
and chunk.finish_reason == "tool_calls"
):
for index in sorted(tool_calls.keys()):
call = tool_calls[index]
try:
self.tool_calls.append(call)
tool_response, call_id = agent._execute_tool_action(
tools_dict, call
)
if isinstance(call["function"]["arguments"], str):
call["function"]["arguments"] = json.loads(call["function"]["arguments"])
function_call_dict = {
"function_call": {
"name": call["function"]["name"],
"args": call["function"]["arguments"],
"call_id": call["id"],
}
}
function_response_dict = {
"function_response": {
"name": call["function"]["name"],
"response": {"result": tool_response},
"call_id": call["id"],
}
}
messages.append(
{
"role": "assistant",
"content": [function_call_dict],
}
)
messages.append(
{
"role": "tool",
"content": [function_response_dict],
}
)
except Exception as e:
messages.append(
{
"role": "assistant",
"content": f"Error executing tool: {str(e)}",
}
)
tool_calls = {}
if (
hasattr(chunk, "finish_reason")
and chunk.finish_reason == "stop"
):
return
elif isinstance(chunk, str) and len(chunk) == 0:
continue
resp = agent.llm.gen_stream(
model=agent.gpt_model, messages=messages, tools=agent.tools
)
self.llm_calls.append(build_stack_data(agent.llm))
class GoogleLLMHandler(LLMHandler):
def handle_response(self, agent, resp, tools_dict, messages, stream: bool = True):
from google.genai import types
while True:
if not stream:
response = agent.llm.gen(
model=agent.gpt_model, messages=messages, tools=agent.tools
)
self.llm_calls.append(build_stack_data(agent.llm))
if response.candidates and response.candidates[0].content.parts:
tool_call_found = False
for part in response.candidates[0].content.parts:
if part.function_call:
tool_call_found = True
self.tool_calls.append(part.function_call)
tool_response, call_id = agent._execute_tool_action(
tools_dict, part.function_call
)
function_response_part = types.Part.from_function_response(
name=part.function_call.name,
response={"result": tool_response},
)
messages.append(
{"role": "model", "content": [part.to_json_dict()]}
)
messages.append(
{
"role": "tool",
"content": [function_response_part.to_json_dict()],
}
)
if (
not tool_call_found
and response.candidates[0].content.parts
and response.candidates[0].content.parts[0].text
):
return response.candidates[0].content.parts[0].text
elif not tool_call_found:
return response.candidates[0].content.parts
else:
return response
else:
response = agent.llm.gen_stream(
model=agent.gpt_model, messages=messages, tools=agent.tools
)
self.llm_calls.append(build_stack_data(agent.llm))
tool_call_found = False
for result in response:
if hasattr(result, "function_call"):
tool_call_found = True
self.tool_calls.append(result.function_call)
tool_response, call_id = agent._execute_tool_action(
tools_dict, result.function_call
)
function_response_part = types.Part.from_function_response(
name=result.function_call.name,
response={"result": tool_response},
)
messages.append(
{"role": "model", "content": [result.to_json_dict()]}
)
messages.append(
{
"role": "tool",
"content": [function_response_part.to_json_dict()],
}
)
if not tool_call_found:
return response
def get_llm_handler(llm_type):
handlers = {
"openai": OpenAILLMHandler(),
"google": GoogleLLMHandler(),
}
return handlers.get(llm_type, OpenAILLMHandler())
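The factory is keyed by LLM name and falls back to the OpenAI-style handler; a minimal sketch of the selection and dispatch, mirroring how `BaseAgent` and `ClassicAgent` use it:
```python
# Minimal sketch -- unknown keys fall back to OpenAILLMHandler.
handler = get_llm_handler("google")      # GoogleLLMHandler instance
fallback = get_llm_handler("anthropic")  # not registered -> OpenAILLMHandler instance

# Inside an agent, the streamed response is then post-processed for tool calls:
# resp = handler.handle_response(agent, resp, tools_dict, messages)
```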

View File

@@ -0,0 +1,100 @@
import json
import requests
from application.agents.tools.base import Tool
class APITool(Tool):
"""
API Tool
A flexible tool for performing various API actions (e.g., sending messages, retrieving data) via custom user-specified APIs
"""
def __init__(self, config):
self.config = config
self.url = config.get("url", "")
self.method = config.get("method", "GET")
self.headers = config.get("headers", {"Content-Type": "application/json"})
self.query_params = config.get("query_params", {})
def execute_action(self, action_name, **kwargs):
return self._make_api_call(
self.url, self.method, self.headers, self.query_params, kwargs
)
def _make_api_call(self, url, method, headers, query_params, body):
sanitized_headers = {}
for key, value in headers.items():
if isinstance(value, str):
sanitized_value = value.encode('latin-1', errors='ignore').decode('latin-1')
sanitized_headers[key] = sanitized_value
else:
sanitized_headers[key] = value
if query_params:
url = f"{url}?{requests.compat.urlencode(query_params)}"
if isinstance(body, dict):
body = json.dumps(body)
response = None
try:
print(f"Making API call: {method} {url} with body: {body}")
if body == "{}":
body = None
proxy_id = self.config.get("proxy_id", None)
request_kwargs = {
'method': method,
'url': url,
'headers': sanitized_headers,
'data': body
}
try:
if proxy_id:
from application.agents.tools.proxy_handler import apply_proxy_to_request
response = apply_proxy_to_request(
requests.request,
proxy_id=proxy_id,
**request_kwargs
)
else:
response = requests.request(**request_kwargs)
except ImportError:
response = requests.request(**request_kwargs)
response.raise_for_status()
content_type = response.headers.get(
"Content-Type", "application/json"
).lower()
if "application/json" in content_type:
try:
data = response.json()
except json.JSONDecodeError as e:
print(f"Error decoding JSON: {e}. Raw response: {response.text}")
return {
"status_code": response.status_code,
"message": f"API call returned invalid JSON. Error: {e}",
"data": response.text,
}
elif "text/" in content_type or "application/xml" in content_type:
data = response.text
elif not response.content:
data = None
else:
print(f"Unsupported content type: {content_type}")
data = response.content
return {
"status_code": response.status_code,
"data": data,
"message": "API call successful.",
}
except requests.exceptions.RequestException as e:
return {
"status_code": response.status_code if response else None,
"message": f"API call failed: {str(e)}",
}
def get_actions_metadata(self):
return []
def get_config_requirements(self):
return {}
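A usage sketch of the per-action config the agent assembles for this tool (see the `_execute_tool_action` wiring earlier in this changeset); the URL, headers, and action name below are placeholders:
```python
# Hypothetical configuration -- all values are placeholders.
config = {
    "url": "https://api.example.com/v1/messages",
    "method": "POST",
    "headers": {"Content-Type": "application/json", "Authorization": "Bearer ..."},
    "query_params": {"channel": "general"},
    "proxy_id": None,  # set to a stored proxy id to route the call through it
}

tool = APITool(config)
# Keyword arguments become the JSON request body; the action name is not used here.
result = tool.execute_action("send_message", text="Hello from DocsGPT")
print(result["status_code"], result["message"])
```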

View File

@@ -0,0 +1,217 @@
import requests
from application.agents.tools.base import Tool
class BraveSearchTool(Tool):
"""
Brave Search
A tool for performing web and image searches using the Brave Search API.
Requires an API key for authentication.
"""
def __init__(self, config):
self.config = config
self.token = config.get("token", "")
self.base_url = "https://api.search.brave.com/res/v1"
def execute_action(self, action_name, **kwargs):
actions = {
"brave_web_search": self._web_search,
"brave_image_search": self._image_search,
}
if action_name in actions:
return actions[action_name](**kwargs)
else:
raise ValueError(f"Unknown action: {action_name}")
def _web_search(self, query, country="ALL", search_lang="en", count=10,
offset=0, safesearch="off", freshness=None,
result_filter=None, extra_snippets=False, summary=False):
"""
Performs a web search using the Brave Search API.
"""
print(f"Performing Brave web search for: {query}")
url = f"{self.base_url}/web/search"
# Build query parameters
params = {
"q": query,
"country": country,
"search_lang": search_lang,
"count": min(count, 20),
"offset": min(offset, 9),
"safesearch": safesearch
}
# Add optional parameters only if they have values
if freshness:
params["freshness"] = freshness
if result_filter:
params["result_filter"] = result_filter
if extra_snippets:
params["extra_snippets"] = 1
if summary:
params["summary"] = 1
# Set up headers
headers = {
"Accept": "application/json",
"Accept-Encoding": "gzip",
"X-Subscription-Token": self.token
}
# Make the request
response = requests.get(url, params=params, headers=headers)
if response.status_code == 200:
return {
"status_code": response.status_code,
"results": response.json(),
"message": "Search completed successfully."
}
else:
return {
"status_code": response.status_code,
"message": f"Search failed with status code: {response.status_code}."
}
def _image_search(self, query, country="ALL", search_lang="en", count=5,
safesearch="off", spellcheck=False):
"""
Performs an image search using the Brave Search API.
"""
print(f"Performing Brave image search for: {query}")
url = f"{self.base_url}/images/search"
# Build query parameters
params = {
"q": query,
"country": country,
"search_lang": search_lang,
"count": min(count, 100), # API max is 100
"safesearch": safesearch,
"spellcheck": 1 if spellcheck else 0
}
# Set up headers
headers = {
"Accept": "application/json",
"Accept-Encoding": "gzip",
"X-Subscription-Token": self.token
}
# Make the request
response = requests.get(url, params=params, headers=headers)
if response.status_code == 200:
return {
"status_code": response.status_code,
"results": response.json(),
"message": "Image search completed successfully."
}
else:
return {
"status_code": response.status_code,
"message": f"Image search failed with status code: {response.status_code}."
}
def get_actions_metadata(self):
return [
{
"name": "brave_web_search",
"description": "Perform a web search using Brave Search",
"parameters": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The search query (max 400 characters, 50 words)",
},
# "country": {
# "type": "string",
# "description": "The 2-character country code (default: US)",
# },
"search_lang": {
"type": "string",
"description": "The search language preference (default: en)",
},
# "count": {
# "type": "integer",
# "description": "Number of results to return (max 20, default: 10)",
# },
# "offset": {
# "type": "integer",
# "description": "Pagination offset (max 9, default: 0)",
# },
# "safesearch": {
# "type": "string",
# "description": "Filter level for adult content (off, moderate, strict)",
# },
"freshness": {
"type": "string",
"description": "Time filter for results (pd: last 24h, pw: last week, pm: last month, py: last year)",
},
# "result_filter": {
# "type": "string",
# "description": "Comma-delimited list of result types to include",
# },
# "extra_snippets": {
# "type": "boolean",
# "description": "Get additional excerpts from result pages",
# },
# "summary": {
# "type": "boolean",
# "description": "Enable summary generation in search results",
# }
},
"required": ["query"],
"additionalProperties": False,
},
},
{
"name": "brave_image_search",
"description": "Perform an image search using Brave Search",
"parameters": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The search query (max 400 characters, 50 words)",
},
# "country": {
# "type": "string",
# "description": "The 2-character country code (default: US)",
# },
# "search_lang": {
# "type": "string",
# "description": "The search language preference (default: en)",
# },
"count": {
"type": "integer",
"description": "Number of results to return (max 100, default: 5)",
},
# "safesearch": {
# "type": "string",
# "description": "Filter level for adult content (off, strict). Default: strict",
# },
# "spellcheck": {
# "type": "boolean",
# "description": "Whether to spellcheck provided query (default: true)",
# }
},
"required": ["query"],
"additionalProperties": False,
},
}
]
def get_config_requirements(self):
return {
"token": {
"type": "string",
"description": "Brave Search API key for authentication"
},
}
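A quick usage sketch; the API key is a placeholder and the parameters map to the action metadata above:
```python
# Hypothetical usage -- the subscription token is a placeholder.
tool = BraveSearchTool({"token": "BSA-placeholder"})

web = tool.execute_action("brave_web_search", query="DocsGPT", freshness="pw")
images = tool.execute_action("brave_image_search", query="DocsGPT logo", count=3)

print(web["status_code"], web["message"])
```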

View File

@@ -1,5 +1,5 @@
import requests
from application.tools.base import Tool
from application.agents.tools.base import Tool
class CryptoPriceTool(Tool):
@@ -31,7 +31,6 @@ class CryptoPriceTool(Tool):
response = requests.get(url)
if response.status_code == 200:
data = response.json()
# data will be like {"USD": <price>} if the call is successful
if currency.upper() in data:
return {
"status_code": response.status_code,

View File

@@ -0,0 +1,127 @@
import requests
from application.agents.tools.base import Tool
class NtfyTool(Tool):
"""
Ntfy Tool
A tool for sending notifications to ntfy topics on a specified server.
"""
def __init__(self, config):
"""
Initialize the NtfyTool with configuration.
Args:
config (dict): Configuration dictionary containing the access token.
"""
self.config = config
self.token = config.get("token", "")
def execute_action(self, action_name, **kwargs):
"""
Execute the specified action with given parameters.
Args:
action_name (str): Name of the action to execute.
**kwargs: Parameters for the action, including server_url.
Returns:
dict: Result of the action with status code and message.
Raises:
ValueError: If the action name is unknown.
"""
actions = {
"ntfy_send_message": self._send_message,
}
if action_name in actions:
return actions[action_name](**kwargs)
else:
raise ValueError(f"Unknown action: {action_name}")
def _send_message(self, server_url, message, topic, title=None, priority=None):
"""
Send a message to an ntfy topic on the specified server.
Args:
server_url (str): Base URL of the ntfy server (e.g., https://ntfy.sh).
message (str): The message text to send.
topic (str): The topic to send the message to.
title (str, optional): Title of the notification.
priority (int, optional): Priority of the notification (1-5).
Returns:
dict: Response with status code and a confirmation message.
Raises:
ValueError: If priority is not an integer between 1 and 5.
"""
url = f"{server_url.rstrip('/')}/{topic}"
headers = {}
if title:
headers["X-Title"] = title
if priority:
try:
priority = int(priority)
except (ValueError, TypeError):
raise ValueError("Priority must be convertible to an integer")
if priority < 1 or priority > 5:
raise ValueError("Priority must be an integer between 1 and 5")
headers["X-Priority"] = str(priority)
if self.token:
headers["Authorization"] = f"Basic {self.token}"
data = message.encode("utf-8")
response = requests.post(url, headers=headers, data=data)
return {"status_code": response.status_code, "message": "Message sent"}
def get_actions_metadata(self):
"""
Provide metadata about available actions.
Returns:
list: List of dictionaries describing each action.
"""
return [
{
"name": "ntfy_send_message",
"description": "Send a notification to an ntfy topic",
"parameters": {
"type": "object",
"properties": {
"server_url": {
"type": "string",
"description": "Base URL of the ntfy server",
},
"message": {
"type": "string",
"description": "Text to send in the notification",
},
"topic": {
"type": "string",
"description": "Topic to send the notification to",
},
"title": {
"type": "string",
"description": "Title of the notification (optional)",
},
"priority": {
"type": "integer",
"description": "Priority of the notification (1-5, optional)",
},
},
"required": ["server_url", "message", "topic"],
"additionalProperties": False,
},
},
]
def get_config_requirements(self):
"""
Specify the configuration requirements.
Returns:
dict: Dictionary describing required config parameters.
"""
return {
"token": {"type": "string", "description": "Access token for authentication"},
}
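A quick usage sketch; the server URL, topic, and token are placeholders:
```python
# Hypothetical usage -- an empty token means no Authorization header is sent.
tool = NtfyTool({"token": ""})

result = tool.execute_action(
    "ntfy_send_message",
    server_url="https://ntfy.sh",
    message="Ingestion finished",
    topic="docsgpt-alerts",
    title="DocsGPT",
    priority=3,
)
print(result)  # e.g. {"status_code": 200, "message": "Message sent"}
```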

View File

@@ -1,5 +1,5 @@
import psycopg2
from application.tools.base import Tool
from application.agents.tools.base import Tool
class PostgresTool(Tool):
"""

View File

@@ -0,0 +1,63 @@
import logging
import requests
from typing import Dict, Optional
from bson.objectid import ObjectId
from application.core.mongo_db import MongoDB
logger = logging.getLogger(__name__)
# Get MongoDB connection
mongo = MongoDB.get_client()
db = mongo["docsgpt"]
proxies_collection = db["proxies"]
def get_proxy_config(proxy_id: str) -> Optional[Dict[str, str]]:
"""
Retrieve proxy configuration from the database.
Args:
proxy_id: The ID of the proxy configuration
Returns:
A dictionary with proxy configuration or None if not found
"""
if not proxy_id or proxy_id == "none":
return None
try:
if ObjectId.is_valid(proxy_id):
proxy_config = proxies_collection.find_one({"_id": ObjectId(proxy_id)})
if proxy_config and "connection" in proxy_config:
connection_str = proxy_config["connection"].strip()
if connection_str:
# Format proxy for requests library
return {
"http": connection_str,
"https": connection_str
}
return None
except Exception as e:
logger.error(f"Error retrieving proxy configuration: {e}")
return None
def apply_proxy_to_request(request_func, proxy_id=None, **kwargs):
"""
Apply proxy configuration to a requests function if available.
This is a minimal wrapper that doesn't change the function signature.
Args:
request_func: The requests function to call (e.g., requests.get, requests.post)
proxy_id: Optional proxy ID to use
**kwargs: Arguments to pass to the request function
Returns:
The response from the request
"""
if proxy_id:
proxy_config = get_proxy_config(proxy_id)
if proxy_config:
kwargs['proxies'] = proxy_config
logger.info(f"Using proxy for request")
return request_func(**kwargs)
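A sketch of a call site, mirroring how `api_tool` in this changeset routes its requests through the helper; the URL and proxy id are placeholders:
```python
# Hypothetical call site -- proxy_id is a placeholder ObjectId string; None skips the lookup.
import requests

response = apply_proxy_to_request(
    requests.request,
    proxy_id="65f0c0ffee0000000000abcd",
    method="GET",
    url="https://api.example.com/status",
    headers={"Accept": "application/json"},
)
response.raise_for_status()
```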

View File

@@ -1,5 +1,5 @@
import requests
from application.tools.base import Tool
from application.agents.tools.base import Tool
class TelegramTool(Tool):

View File

@@ -0,0 +1,42 @@
import json
import logging
logger = logging.getLogger(__name__)
class ToolActionParser:
def __init__(self, llm_type):
self.llm_type = llm_type
self.parsers = {
"OpenAILLM": self._parse_openai_llm,
"GoogleLLM": self._parse_google_llm,
}
def parse_args(self, call):
parser = self.parsers.get(self.llm_type, self._parse_openai_llm)
return parser(call)
def _parse_openai_llm(self, call):
if isinstance(call, dict):
try:
call_args = json.loads(call["function"]["arguments"])
tool_id = call["function"]["name"].split("_")[-1]
action_name = call["function"]["name"].rsplit("_", 1)[0]
except (KeyError, TypeError) as e:
logger.error(f"Error parsing OpenAI LLM call: {e}")
return None, None, None
else:
try:
call_args = json.loads(call.function.arguments)
tool_id = call.function.name.split("_")[-1]
action_name = call.function.name.rsplit("_", 1)[0]
except (AttributeError, TypeError) as e:
logger.error(f"Error parsing OpenAI LLM call: {e}")
return None, None, None
return tool_id, action_name, call_args
def _parse_google_llm(self, call):
call_args = call.args
tool_id = call.name.split("_")[-1]
action_name = call.name.rsplit("_", 1)[0]
return tool_id, action_name, call_args
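A parsing sketch using the dict form that `OpenAILLMHandler` assembles from streamed tool-call chunks; the tool id suffix is a placeholder and follows the `<action>_<tool_id>` naming used by the agent's tool wiring:
```python
# Hypothetical parsed call -- arguments arrive as a JSON string and are decoded here.
parser = ToolActionParser("OpenAILLM")

call = {
    "id": "call_123",
    "function": {
        "name": "brave_web_search_65f0c0ffee0000000000abcd",
        "arguments": '{"query": "DocsGPT"}',
    },
}

tool_id, action_name, call_args = parser.parse_args(call)
# tool_id == "65f0c0ffee0000000000abcd"
# action_name == "brave_web_search"
# call_args == {"query": "DocsGPT"}
```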

View File

@@ -3,7 +3,7 @@ import inspect
import os
import pkgutil
from application.tools.base import Tool
from application.agents.tools.base import Tool
class ToolManager:
@@ -13,13 +13,11 @@ class ToolManager:
self.load_tools()
def load_tools(self):
tools_dir = os.path.join(os.path.dirname(__file__), "implementations")
tools_dir = os.path.join(os.path.dirname(__file__))
for finder, name, ispkg in pkgutil.iter_modules([tools_dir]):
if name == "base" or name.startswith("__"):
continue
module = importlib.import_module(
f"application.tools.implementations.{name}"
)
module = importlib.import_module(f"application.agents.tools.{name}")
for member_name, obj in inspect.getmembers(module, inspect.isclass):
if issubclass(obj, Tool) and obj is not Tool:
tool_config = self.config.get(name, {})
@@ -27,9 +25,7 @@ class ToolManager:
def load_tool(self, tool_name, tool_config):
self.config[tool_name] = tool_config
module = importlib.import_module(
f"application.tools.implementations.{tool_name}"
)
module = importlib.import_module(f"application.agents.tools.{tool_name}")
for member_name, obj in inspect.getmembers(module, inspect.isclass):
if issubclass(obj, Tool) and obj is not Tool:
return obj(tool_config)

View File

@@ -3,14 +3,14 @@ import datetime
import json
import logging
import os
import sys
import traceback
from bson.dbref import DBRef
from bson.objectid import ObjectId
from flask import Blueprint, current_app, make_response, request, Response
from flask import Blueprint, make_response, request, Response
from flask_restx import fields, Namespace, Resource
from application.agents.agent_creator import AgentCreator
from application.core.mongo_db import MongoDB
from application.core.settings import settings
@@ -42,6 +42,8 @@ elif settings.LLM_NAME == "anthropic":
gpt_model = "claude-2"
elif settings.LLM_NAME == "groq":
gpt_model = "llama3-8b-8192"
elif settings.LLM_NAME == "novita":
gpt_model = "deepseek/deepseek-r1"
if settings.MODEL_NAME: # in case there is particular model name configured
gpt_model = settings.MODEL_NAME
@@ -89,9 +91,6 @@ def get_data_from_api_key(api_key):
if data is None:
raise Exception("Invalid API Key, please generate new key", 401)
if "retriever" not in data:
data["retriever"] = None
if "source" in data and isinstance(data["source"], DBRef):
source_doc = db.dereference(data["source"])
data["source"] = str(source_doc["_id"])
@@ -118,7 +117,18 @@ def is_azure_configured():
)
def save_conversation(conversation_id, question, response, source_log_docs, llm,index=None):
def save_conversation(
conversation_id,
question,
response,
source_log_docs,
tool_calls,
llm,
decoded_token,
index=None,
api_key=None,
):
current_time = datetime.datetime.now(datetime.timezone.utc)
if conversation_id is not None and index is not None:
conversations_collection.update_one(
{"_id": ObjectId(conversation_id), f"queries.{index}": {"$exists": True}},
@@ -127,20 +137,15 @@ def save_conversation(conversation_id, question, response, source_log_docs, llm,
f"queries.{index}.prompt": question,
f"queries.{index}.response": response,
f"queries.{index}.sources": source_log_docs,
f"queries.{index}.tool_calls": tool_calls,
f"queries.{index}.timestamp": current_time,
}
}
},
)
##remove following queries from the array
conversations_collection.update_one(
{"_id": ObjectId(conversation_id), f"queries.{index}": {"$exists": True}},
{
"$push":{
"queries":{
"$each":[],
"$slice":index+1
}
}
}
{"$push": {"queries": {"$each": [], "$slice": index + 1}}},
)
elif conversation_id is not None and conversation_id != "None":
conversations_collection.update_one(
@@ -151,6 +156,8 @@ def save_conversation(conversation_id, question, response, source_log_docs, llm,
"prompt": question,
"response": response,
"sources": source_log_docs,
"tool_calls": tool_calls,
"timestamp": current_time,
}
}
},
@@ -170,28 +177,31 @@ def save_conversation(conversation_id, question, response, source_log_docs, llm,
"role": "user",
"content": "Summarise following conversation in no more than 3 words, "
"respond ONLY with the summary, use the same language as the "
"system \n\nUser: "
+ question
+ "\n\n"
+ "AI: "
+ response,
"system \n\nUser: " + question + "\n\n" + "AI: " + response,
},
]
completion = llm.gen(model=gpt_model, messages=messages_summary, max_tokens=30)
conversation_data = {
"user": decoded_token.get("sub"),
"date": datetime.datetime.utcnow(),
"name": completion,
"queries": [
{
"prompt": question,
"response": response,
"sources": source_log_docs,
"tool_calls": tool_calls,
"timestamp": current_time,
}
],
}
if api_key:
api_key_doc = api_key_collection.find_one({"key": api_key})
if api_key_doc:
conversation_data["api_key"] = api_key_doc["key"]
conversation_id = conversations_collection.insert_one(
{
"user": "local",
"date": datetime.datetime.utcnow(),
"name": completion,
"queries": [
{
"prompt": question,
"response": response,
"sources": source_log_docs,
}
],
}
conversation_data
).inserted_id
return conversation_id
@@ -209,49 +219,82 @@ def get_prompt(prompt_id):
def complete_stream(
question, retriever, conversation_id, user_api_key, isNoneDoc=False,index=None
question,
agent,
retriever,
conversation_id,
user_api_key,
decoded_token,
isNoneDoc=False,
index=None,
should_save_conversation=True,
):
try:
response_full = ""
source_log_docs = []
answer = retriever.gen()
sources = retriever.search()
for source in sources:
if "text" in source:
source["text"] = source["text"][:100].strip() + "..."
if len(sources) > 0:
data = json.dumps({"type": "source", "source": sources})
yield f"data: {data}\n\n"
tool_calls = []
answer = agent.gen(query=question, retriever=retriever)
for line in answer:
if "answer" in line:
response_full += str(line["answer"])
data = json.dumps(line)
data = json.dumps({"type": "answer", "answer": line["answer"]})
yield f"data: {data}\n\n"
elif "sources" in line:
truncated_sources = []
source_log_docs = line["sources"]
for source in line["sources"]:
truncated_source = source.copy()
if "text" in truncated_source:
truncated_source["text"] = (
truncated_source["text"][:100].strip() + "..."
)
truncated_sources.append(truncated_source)
if len(truncated_sources) > 0:
data = json.dumps({"type": "source", "source": truncated_sources})
yield f"data: {data}\n\n"
elif "tool_calls" in line:
tool_calls = line["tool_calls"]
data = json.dumps({"type": "tool_calls", "tool_calls": tool_calls})
yield f"data: {data}\n\n"
elif "source" in line:
source_log_docs.append(line["source"])
if isNoneDoc:
for doc in source_log_docs:
doc["source"] = "None"
llm = LLMCreator.create_llm(
settings.LLM_NAME, api_key=settings.API_KEY, user_api_key=user_api_key
settings.LLM_NAME,
api_key=settings.API_KEY,
user_api_key=user_api_key,
decoded_token=decoded_token,
)
if user_api_key is None:
if should_save_conversation:
conversation_id = save_conversation(
conversation_id, question, response_full, source_log_docs, llm,index
conversation_id,
question,
response_full,
source_log_docs,
tool_calls,
llm,
decoded_token,
index,
api_key=user_api_key,
)
# send data.type = "end" to indicate that the stream has ended as json
data = json.dumps({"type": "id", "id": str(conversation_id)})
yield f"data: {data}\n\n"
else:
conversation_id = None
# send data.type = "end" to indicate that the stream has ended as json
data = json.dumps({"type": "id", "id": str(conversation_id)})
yield f"data: {data}\n\n"
retriever_params = retriever.get_params()
user_logs_collection.insert_one(
{
"action": "stream_answer",
"level": "info",
"user": "local",
"user": decoded_token.get("sub"),
"api_key": user_api_key,
"question": question,
"response": response_full,
@@ -263,13 +306,12 @@ def complete_stream(
data = json.dumps({"type": "end"})
yield f"data: {data}\n\n"
except Exception as e:
print("\033[91merr", str(e), file=sys.stderr)
traceback.print_exc()
logger.error(f"Error in stream: {str(e)}")
logger.error(traceback.format_exc())
data = json.dumps(
{
"type": "error",
"error": "Please try again later. We apologize for any inconvenience.",
"error_exception": str(e),
}
)
yield f"data: {data}\n\n"
@@ -293,6 +335,9 @@ class Stream(Resource):
"prompt_id": fields.String(
required=False, default="default", description="Prompt ID"
),
"proxy_id": fields.String(
required=False, description="Proxy ID to use for API calls"
),
"chunks": fields.Integer(
required=False, default=2, description="Number of chunks"
),
@@ -305,9 +350,12 @@ class Stream(Resource):
"isNoneDoc": fields.Boolean(
required=False, description="Flag indicating if no document is used"
),
"index":fields.Integer(
"index": fields.Integer(
required=False, description="The position where query is to be updated"
),
"save_conversation": fields.Boolean(
required=False, default=True, description="Flag to save conversation"
),
},
)
@@ -317,40 +365,52 @@ class Stream(Resource):
data = request.get_json()
required_fields = ["question"]
if "index" in data:
required_fields = ["question","conversation_id"]
required_fields = ["question", "conversation_id"]
missing_fields = check_required_fields(data, required_fields)
if missing_fields:
return missing_fields
save_conv = data.get("save_conversation", True)
try:
question = data["question"]
history = limit_chat_history(json.loads(data.get("history", [])), gpt_model=gpt_model)
history = limit_chat_history(
json.loads(data.get("history", [])), gpt_model=gpt_model
)
conversation_id = data.get("conversation_id")
prompt_id = data.get("prompt_id", "default")
index=data.get("index",None)
proxy_id = data.get("proxy_id", None)
index = data.get("index", None)
chunks = int(data.get("chunks", 2))
token_limit = data.get("token_limit", settings.DEFAULT_MAX_HISTORY)
retriever_name = data.get("retriever", "classic")
if "api_key" in data:
data_key = get_data_from_api_key(data["api_key"])
chunks = int(data_key.get("chunks", 2))
prompt_id = data_key.get("prompt_id", "default")
proxy_id = data_key.get("proxy_id", None)
source = {"active_docs": data_key.get("source")}
retriever_name = data_key.get("retriever", retriever_name)
user_api_key = data["api_key"]
decoded_token = {"sub": data_key.get("user")}
elif "active_docs" in data:
source = {"active_docs": data["active_docs"]}
retriever_name = get_retriever(data["active_docs"]) or retriever_name
user_api_key = None
decoded_token = request.decoded_token
else:
source = {}
user_api_key = None
decoded_token = request.decoded_token
current_app.logger.info(
if not decoded_token:
return make_response({"error": "Unauthorized"}, 401)
logger.info(
f"/stream - request_data: {data}, source: {source}",
extra={"data": json.dumps({"request_data": data, "source": source})},
)
@@ -358,9 +418,22 @@ class Stream(Resource):
prompt = get_prompt(prompt_id)
if "isNoneDoc" in data and data["isNoneDoc"] is True:
chunks = 0
agent = AgentCreator.create_agent(
settings.AGENT_NAME,
endpoint="stream",
llm_name=settings.LLM_NAME,
gpt_model=gpt_model,
api_key=settings.API_KEY,
user_api_key=user_api_key,
prompt=prompt,
proxy_id=proxy_id,
chat_history=history,
decoded_token=decoded_token,
)
retriever = RetrieverCreator.create_retriever(
retriever_name,
question=question,
source=source,
chat_history=history,
prompt=prompt,
@@ -368,40 +441,40 @@ class Stream(Resource):
token_limit=token_limit,
gpt_model=gpt_model,
user_api_key=user_api_key,
decoded_token=decoded_token,
)
return Response(
complete_stream(
question=question,
agent=agent,
retriever=retriever,
conversation_id=conversation_id,
user_api_key=user_api_key,
decoded_token=decoded_token,
isNoneDoc=data.get("isNoneDoc"),
index=index,
should_save_conversation=save_conv,
),
mimetype="text/event-stream",
)
except ValueError:
message = "Malformed request body"
print("\033[91merr", str(message), file=sys.stderr)
logger.error(f"/stream - error: {message}")
return Response(
error_stream_generate(message),
status=400,
mimetype="text/event-stream",
)
except Exception as e:
current_app.logger.error(
logger.error(
f"/stream - error: {str(e)} - traceback: {traceback.format_exc()}",
extra={"error": str(e), "traceback": traceback.format_exc()},
)
message = e.args[0]
status_code = 400
# Custom exceptions with two arguments, index 1 as status code
if len(e.args) >= 2:
status_code = e.args[1]
return Response(
error_stream_generate(message),
error_stream_generate("Unknown error occurred"),
status=status_code,
mimetype="text/event-stream",
)
@@ -429,6 +502,9 @@ class Answer(Resource):
"prompt_id": fields.String(
required=False, default="default", description="Prompt ID"
),
"proxy_id": fields.String(
required=False, description="Proxy ID to use for API calls"
),
"chunks": fields.Integer(
required=False, default=2, description="Number of chunks"
),
@@ -448,16 +524,19 @@ class Answer(Resource):
@api.doc(description="Provide an answer based on the question and retriever")
def post(self):
data = request.get_json()
required_fields = ["question"]
required_fields = ["question"]
missing_fields = check_required_fields(data, required_fields)
if missing_fields:
return missing_fields
try:
question = data["question"]
history = limit_chat_history(json.loads(data.get("history", [])), gpt_model=gpt_model)
history = limit_chat_history(
json.loads(data.get("history", [])), gpt_model=gpt_model
)
conversation_id = data.get("conversation_id")
prompt_id = data.get("prompt_id", "default")
proxy_id = data.get("proxy_id", None)
chunks = int(data.get("chunks", 2))
token_limit = data.get("token_limit", settings.DEFAULT_MAX_HISTORY)
retriever_name = data.get("retriever", "classic")
@@ -466,27 +545,48 @@ class Answer(Resource):
data_key = get_data_from_api_key(data["api_key"])
chunks = int(data_key.get("chunks", 2))
prompt_id = data_key.get("prompt_id", "default")
proxy_id = data_key.get("proxy_id", None)
source = {"active_docs": data_key.get("source")}
retriever_name = data_key.get("retriever", retriever_name)
user_api_key = data["api_key"]
decoded_token = {"sub": data_key.get("user")}
elif "active_docs" in data:
source = {"active_docs": data["active_docs"]}
retriever_name = get_retriever(data["active_docs"]) or retriever_name
user_api_key = None
decoded_token = request.decoded_token
else:
source = {}
user_api_key = None
decoded_token = request.decoded_token
if not decoded_token:
return make_response({"error": "Unauthorized"}, 401)
prompt = get_prompt(prompt_id)
current_app.logger.info(
logger.info(
f"/api/answer - request_data: {data}, source: {source}",
extra={"data": json.dumps({"request_data": data, "source": source})},
)
agent = AgentCreator.create_agent(
settings.AGENT_NAME,
endpoint="api/answer",
llm_name=settings.LLM_NAME,
gpt_model=gpt_model,
api_key=settings.API_KEY,
user_api_key=user_api_key,
prompt=prompt,
proxy_id=proxy_id,
chat_history=history,
decoded_token=decoded_token,
)
retriever = RetrieverCreator.create_retriever(
retriever_name,
question=question,
source=source,
chat_history=history,
prompt=prompt,
@@ -494,36 +594,80 @@ class Answer(Resource):
token_limit=token_limit,
gpt_model=gpt_model,
user_api_key=user_api_key,
decoded_token=decoded_token,
)
source_log_docs = []
response_full = ""
for line in retriever.gen():
if "source" in line:
source_log_docs.append(line["source"])
elif "answer" in line:
response_full += line["answer"]
source_log_docs = []
tool_calls = []
stream_ended = False
for line in complete_stream(
question=question,
agent=agent,
retriever=retriever,
conversation_id=conversation_id,
user_api_key=user_api_key,
decoded_token=decoded_token,
isNoneDoc=data.get("isNoneDoc"),
index=None,
should_save_conversation=False,
):
try:
event_data = line.replace("data: ", "").strip()
event = json.loads(event_data)
if event["type"] == "answer":
response_full += event["answer"]
elif event["type"] == "source":
source_log_docs = event["source"]
elif event["type"] == "tool_calls":
tool_calls = event["tool_calls"]
elif event["type"] == "error":
logger.error(f"Error from stream: {event['error']}")
return bad_request(500, event["error"])
elif event["type"] == "end":
stream_ended = True
except (json.JSONDecodeError, KeyError) as e:
logger.warning(f"Error parsing stream event: {e}, line: {line}")
continue
if not stream_ended:
logger.error("Stream ended unexpectedly without an 'end' event.")
return bad_request(500, "Stream ended unexpectedly.")
if data.get("isNoneDoc"):
for doc in source_log_docs:
doc["source"] = "None"
llm = LLMCreator.create_llm(
settings.LLM_NAME, api_key=settings.API_KEY, user_api_key=user_api_key
settings.LLM_NAME,
api_key=settings.API_KEY,
user_api_key=user_api_key,
decoded_token=decoded_token,
)
result = {"answer": response_full, "sources": source_log_docs}
result["conversation_id"] = str(
save_conversation(
conversation_id, question, response_full, source_log_docs, llm
conversation_id,
question,
response_full,
source_log_docs,
tool_calls,
llm,
decoded_token,
api_key=user_api_key,
)
)
retriever_params = retriever.get_params()
user_logs_collection.insert_one(
{
"action": "api_answer",
"level": "info",
"user": "local",
"user": decoded_token.get("sub"),
"api_key": user_api_key,
"question": question,
"response": response_full,
@@ -534,7 +678,7 @@ class Answer(Resource):
)
except Exception as e:
current_app.logger.error(
logger.error(
f"/api/answer - error: {str(e)} - traceback: {traceback.format_exc()}",
extra={"error": str(e), "traceback": traceback.format_exc()},
)
@@ -592,21 +736,28 @@ class Search(Resource):
chunks = int(data_key.get("chunks", 2))
source = {"active_docs": data_key.get("source")}
user_api_key = data["api_key"]
decoded_token = {"sub": data_key.get("user")}
elif "active_docs" in data:
source = {"active_docs": data["active_docs"]}
user_api_key = None
decoded_token = request.decoded_token
else:
source = {}
user_api_key = None
decoded_token = request.decoded_token
current_app.logger.info(
if not decoded_token:
return make_response({"error": "Unauthorized"}, 401)
logger.info(
f"/api/answer - request_data: {data}, source: {source}",
extra={"data": json.dumps({"request_data": data, "source": source})},
)
retriever = RetrieverCreator.create_retriever(
retriever_name,
question=question,
source=source,
chat_history=[],
prompt="default",
@@ -614,16 +765,17 @@ class Search(Resource):
token_limit=token_limit,
gpt_model=gpt_model,
user_api_key=user_api_key,
decoded_token=decoded_token,
)
docs = retriever.search()
docs = retriever.search(question)
retriever_params = retriever.get_params()
user_logs_collection.insert_one(
{
"action": "api_search",
"level": "info",
"user": "local",
"user": decoded_token.get("sub"),
"api_key": user_api_key,
"question": question,
"sources": docs,
@@ -637,7 +789,7 @@ class Search(Resource):
doc["source"] = "None"
except Exception as e:
current_app.logger.error(
logger.error(
f"/api/search - error: {str(e)} - traceback: {traceback.format_exc()}",
extra={"error": str(e), "traceback": traceback.format_exc()},
)
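For orientation, a minimal client for the /stream endpoint changed above might look like the sketch below. It assumes the server emits Server-Sent Events whose data payloads are JSON objects with a "type" field (answer, source, tool_calls, error, end), which is the same format the updated /api/answer handler parses internally; the base URL, token, and payload fields are placeholders and the exact route prefix may differ per deployment.

# Illustrative sketch, not part of the diff: consuming the /stream SSE endpoint.
import json
import requests

def stream_answer(question, token, base_url="http://localhost:7091"):
    payload = {"question": question, "history": "[]"}
    headers = {"Authorization": f"Bearer {token}"}
    with requests.post(f"{base_url}/stream", json=payload, headers=headers, stream=True) as resp:
        resp.raise_for_status()
        for raw in resp.iter_lines():
            if not raw or not raw.startswith(b"data: "):
                continue
            event = json.loads(raw[len(b"data: "):])
            if event["type"] == "answer":
                print(event["answer"], end="", flush=True)
            elif event["type"] == "source":
                print("\nsources:", event["source"])
            elif event["type"] == "error":
                raise RuntimeError(event["error"])
            elif event["type"] == "end":
                break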

File diff suppressed because it is too large.

View File

@@ -1,15 +1,24 @@
import os
import platform
import uuid
import dotenv
from flask import Flask, redirect, request
from flask import Flask, jsonify, redirect, request
from jose import jwt
from application.auth import handle_auth
from application.api.answer.routes import answer
from application.api.internal.routes import internal
from application.api.user.routes import user
from application.celery_init import celery
from application.core.logging_config import setup_logging
from application.core.settings import settings
from application.extensions import api
setup_logging()
from application.api.answer.routes import answer # noqa: E402
from application.api.internal.routes import internal # noqa: E402
from application.api.user.routes import user # noqa: E402
from application.celery_init import celery # noqa: E402
from application.core.settings import settings # noqa: E402
from application.extensions import api # noqa: E402
if platform.system() == "Windows":
import pathlib
@@ -17,7 +26,6 @@ if platform.system() == "Windows":
pathlib.PosixPath = pathlib.WindowsPath
dotenv.load_dotenv()
setup_logging()
app = Flask(__name__)
app.register_blueprint(user)
@@ -32,6 +40,25 @@ app.config.update(
celery.config_from_object("application.celeryconfig")
api.init_app(app)
if settings.AUTH_TYPE in ("simple_jwt", "session_jwt") and not settings.JWT_SECRET_KEY:
key_file = ".jwt_secret_key"
try:
with open(key_file, "r") as f:
settings.JWT_SECRET_KEY = f.read().strip()
except FileNotFoundError:
new_key = os.urandom(32).hex()
with open(key_file, "w") as f:
f.write(new_key)
settings.JWT_SECRET_KEY = new_key
except Exception as e:
raise RuntimeError(f"Failed to setup JWT_SECRET_KEY: {e}")
SIMPLE_JWT_TOKEN = None
if settings.AUTH_TYPE == "simple_jwt":
payload = {"sub": "local"}
SIMPLE_JWT_TOKEN = jwt.encode(payload, settings.JWT_SECRET_KEY, algorithm="HS256")
print(f"Generated Simple JWT Token: {SIMPLE_JWT_TOKEN}")
@app.route("/")
def home():
@@ -41,11 +68,47 @@ def home():
return "Welcome to DocsGPT Backend!"
@app.route("/api/config")
def get_config():
response = {
"auth_type": settings.AUTH_TYPE,
"requires_auth": settings.AUTH_TYPE in ["simple_jwt", "session_jwt"],
}
return jsonify(response)
@app.route("/api/generate_token")
def generate_token():
if settings.AUTH_TYPE == "session_jwt":
new_user_id = str(uuid.uuid4())
token = jwt.encode(
{"sub": new_user_id}, settings.JWT_SECRET_KEY, algorithm="HS256"
)
return jsonify({"token": token})
return jsonify({"error": "Token generation not allowed in current auth mode"}), 400
@app.before_request
def authenticate_request():
if request.method == "OPTIONS":
return "", 200
decoded_token = handle_auth(request)
if not decoded_token:
request.decoded_token = None
elif "error" in decoded_token:
return jsonify(decoded_token), 401
else:
request.decoded_token = decoded_token
@app.after_request
def after_request(response):
response.headers.add("Access-Control-Allow-Origin", "*")
response.headers.add("Access-Control-Allow-Headers", "Content-Type,Authorization")
response.headers.add("Access-Control-Allow-Methods", "GET,PUT,POST,DELETE,OPTIONS")
response.headers.add("Access-Control-Allow-Headers", "Content-Type, Authorization")
response.headers.add(
"Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS"
)
return response
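A rough client-side sketch of the auth flow introduced above: read /api/config, obtain a session token when the backend runs in session_jwt mode, and send it as a Bearer header on subsequent calls (the before_request hook rejects requests whose token fails to decode). The base URL is a placeholder.

# Illustrative sketch, not part of the diff: obtaining and using a session JWT.
import requests

base_url = "http://localhost:7091"
config = requests.get(f"{base_url}/api/config").json()

token = None
if config.get("requires_auth") and config.get("auth_type") == "session_jwt":
    token = requests.get(f"{base_url}/api/generate_token").json()["token"]

headers = {"Authorization": f"Bearer {token}"} if token else {}
print(requests.get(f"{base_url}/", headers=headers).text)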

application/auth.py (new file, 28 lines)
View File

@@ -0,0 +1,28 @@
from jose import jwt
from application.core.settings import settings
def handle_auth(request, data={}):
if settings.AUTH_TYPE in ["simple_jwt", "session_jwt"]:
jwt_token = request.headers.get("Authorization")
if not jwt_token:
return None
jwt_token = jwt_token.replace("Bearer ", "")
try:
decoded_token = jwt.decode(
jwt_token,
settings.JWT_SECRET_KEY,
algorithms=["HS256"],
options={"verify_exp": False},
)
return decoded_token
except Exception as e:
return {
"message": f"Authentication error: {str(e)}",
"error": "invalid_token",
}
else:
return {"sub": "local"}

View File

@@ -11,21 +11,25 @@ from application.utils import get_hash
logger = logging.getLogger(__name__)
_redis_instance = None
_redis_creation_failed = False
_instance_lock = Lock()
def get_redis_instance():
global _redis_instance
if _redis_instance is None:
global _redis_instance, _redis_creation_failed
if _redis_instance is None and not _redis_creation_failed:
with _instance_lock:
if _redis_instance is None:
if _redis_instance is None and not _redis_creation_failed:
try:
_redis_instance = redis.Redis.from_url(
settings.CACHE_REDIS_URL, socket_connect_timeout=2
)
except ValueError as e:
logger.error(f"Invalid Redis URL: {e}")
_redis_creation_failed = True # Stop future attempts
_redis_instance = None
except redis.ConnectionError as e:
logger.error(f"Redis connection error: {e}")
_redis_instance = None
_redis_instance = None # Keep trying for connection errors
return _redis_instance
@@ -41,36 +45,48 @@ def gen_cache_key(messages, model="docgpt", tools=None):
def gen_cache(func):
def wrapper(self, model, messages, stream, tools=None, *args, **kwargs):
if tools is not None:
return func(self, model, messages, stream, tools, *args, **kwargs)
try:
cache_key = gen_cache_key(messages, model, tools)
redis_client = get_redis_instance()
if redis_client:
try:
cached_response = redis_client.get(cache_key)
if cached_response:
return cached_response.decode("utf-8")
except redis.ConnectionError as e:
logger.error(f"Redis connection error: {e}")
result = func(self, model, messages, stream, tools, *args, **kwargs)
if redis_client and isinstance(result, str):
try:
redis_client.set(cache_key, result, ex=1800)
except redis.ConnectionError as e:
logger.error(f"Redis connection error: {e}")
return result
except ValueError as e:
logger.error(e)
return "Error: No user message found in the conversation to generate a cache key."
logger.error(f"Cache key generation failed: {e}")
return func(self, model, messages, stream, tools, *args, **kwargs)
redis_client = get_redis_instance()
if redis_client:
try:
cached_response = redis_client.get(cache_key)
if cached_response:
return cached_response.decode("utf-8")
except Exception as e:
logger.error(f"Error getting cached response: {e}")
result = func(self, model, messages, stream, tools, *args, **kwargs)
if redis_client and isinstance(result, str):
try:
redis_client.set(cache_key, result, ex=1800)
except Exception as e:
logger.error(f"Error setting cache: {e}")
return result
return wrapper
def stream_cache(func):
def wrapper(self, model, messages, stream, tools=None, *args, **kwargs):
cache_key = gen_cache_key(messages, model, tools)
logger.info(f"Stream cache key: {cache_key}")
if tools is not None:
yield from func(self, model, messages, stream, tools, *args, **kwargs)
return
try:
cache_key = gen_cache_key(messages, model, tools)
except ValueError as e:
logger.error(f"Cache key generation failed: {e}")
yield from func(self, model, messages, stream, tools, *args, **kwargs)
return
redis_client = get_redis_instance()
if redis_client:
@@ -81,23 +97,21 @@ def stream_cache(func):
cached_response = json.loads(cached_response.decode("utf-8"))
for chunk in cached_response:
yield chunk
time.sleep(0.03)
time.sleep(0.03) # Simulate streaming delay
return
except redis.ConnectionError as e:
logger.error(f"Redis connection error: {e}")
except Exception as e:
logger.error(f"Error getting cached stream: {e}")
result = func(self, model, messages, stream, tools=tools, *args, **kwargs)
stream_cache_data = []
for chunk in result:
stream_cache_data.append(chunk)
for chunk in func(self, model, messages, stream, tools, *args, **kwargs):
yield chunk
stream_cache_data.append(str(chunk))
if redis_client:
try:
redis_client.set(cache_key, json.dumps(stream_cache_data), ex=1800)
logger.info(f"Stream cache saved for key: {cache_key}")
except redis.ConnectionError as e:
logger.error(f"Redis connection error: {e}")
except Exception as e:
logger.error(f"Error setting stream cache: {e}")
return wrapper
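These decorators are applied to LLM gen/gen_stream-style methods. A hedged sketch of the method shape they expect: tool calls bypass the cache entirely, a failed cache-key generation now falls back to calling the wrapped function, and Redis failures are logged rather than raised, so the code below runs even without a reachable Redis.

# Illustrative sketch, not part of the diff: methods wrapped by the cache decorators.
from application.cache import gen_cache, stream_cache

class FakeLLM:
    @gen_cache
    def gen(self, model, messages, stream=False, tools=None):
        return "computed answer"   # cached for 30 minutes when Redis is reachable

    @stream_cache
    def gen_stream(self, model, messages, stream=True, tools=None):
        yield "chunk-1"            # chunks are replayed from cache on a repeat call
        yield "chunk-2"

print(FakeLLM().gen("docsgpt", [{"role": "user", "content": "hi"}], False))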

View File

@@ -1,26 +1,39 @@
import os
from pathlib import Path
from typing import Optional
import os
from pydantic_settings import BaseSettings
current_dir = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
current_dir = os.path.dirname(
os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
)
class Settings(BaseSettings):
AUTH_TYPE: Optional[str] = None
LLM_NAME: str = "docsgpt"
MODEL_NAME: Optional[str] = None # if LLM_NAME is openai, MODEL_NAME can be gpt-4 or gpt-3.5-turbo
MODEL_NAME: Optional[str] = (
None # if LLM_NAME is openai, MODEL_NAME can be gpt-4 or gpt-3.5-turbo
)
EMBEDDINGS_NAME: str = "huggingface_sentence-transformers/all-mpnet-base-v2"
CELERY_BROKER_URL: str = "redis://localhost:6379/0"
CELERY_RESULT_BACKEND: str = "redis://localhost:6379/1"
MONGO_URI: str = "mongodb://localhost:27017/docsgpt"
MODEL_PATH: str = os.path.join(current_dir, "models/docsgpt-7b-f16.gguf")
DEFAULT_MAX_HISTORY: int = 150
MODEL_TOKEN_LIMITS: dict = {"gpt-4o-mini": 128000, "gpt-3.5-turbo": 4096, "claude-2": 1e5}
MODEL_TOKEN_LIMITS: dict = {
"gpt-4o-mini": 128000,
"gpt-3.5-turbo": 4096,
"claude-2": 1e5,
"gemini-2.0-flash-exp": 1e6,
}
UPLOAD_FOLDER: str = "inputs"
PARSE_PDF_AS_IMAGE: bool = False
VECTOR_STORE: str = "faiss" # "faiss" or "elasticsearch" or "qdrant" or "milvus" or "lancedb"
RETRIEVERS_ENABLED: list = ["classic_rag", "duckduck_search"] # also brave_search
VECTOR_STORE: str = (
"faiss" # "faiss" or "elasticsearch" or "qdrant" or "milvus" or "lancedb"
)
RETRIEVERS_ENABLED: list = ["classic_rag", "duckduck_search"] # also brave_search
AGENT_NAME: str = "classic"
# LLM Cache
CACHE_REDIS_URL: str = "redis://localhost:6379/2"
@@ -28,12 +41,18 @@ class Settings(BaseSettings):
API_URL: str = "http://localhost:7091" # backend url for celery worker
API_KEY: Optional[str] = None # LLM api key
EMBEDDINGS_KEY: Optional[str] = None # api key for embeddings (if using openai, just copy API_KEY)
EMBEDDINGS_KEY: Optional[str] = (
None # api key for embeddings (if using openai, just copy API_KEY)
)
OPENAI_API_BASE: Optional[str] = None # azure openai api base url
OPENAI_API_VERSION: Optional[str] = None # azure openai api version
AZURE_DEPLOYMENT_NAME: Optional[str] = None # azure deployment name for answering
AZURE_EMBEDDINGS_DEPLOYMENT_NAME: Optional[str] = None # azure deployment name for embeddings
OPENAI_BASE_URL: Optional[str] = None # openai base url for open ai compatable models
AZURE_EMBEDDINGS_DEPLOYMENT_NAME: Optional[str] = (
None # azure deployment name for embeddings
)
OPENAI_BASE_URL: Optional[str] = (
None # openai base url for open ai compatable models
)
# elasticsearch
ELASTIC_CLOUD_ID: Optional[str] = None # cloud id for elasticsearch
@@ -68,16 +87,20 @@ class Settings(BaseSettings):
# Milvus vectorstore config
MILVUS_COLLECTION_NAME: Optional[str] = "docsgpt"
MILVUS_URI: Optional[str] = "./milvus_local.db" # milvus lite version as default
MILVUS_URI: Optional[str] = "./milvus_local.db" # milvus lite version as default
MILVUS_TOKEN: Optional[str] = ""
# LanceDB vectorstore config
LANCEDB_PATH: str = "/tmp/lancedb" # Path where LanceDB stores its local data
LANCEDB_TABLE_NAME: Optional[str] = "docsgpts" # Name of the table to use for storing vectors
LANCEDB_TABLE_NAME: Optional[str] = (
"docsgpts" # Name of the table to use for storing vectors
)
BRAVE_SEARCH_API_KEY: Optional[str] = None
FLASK_DEBUG_MODE: bool = False
JWT_SECRET_KEY: str = ""
path = Path(__file__).parent.parent.absolute()
settings = Settings(_env_file=path.joinpath(".env"), _env_file_encoding="utf-8")
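Settings is a pydantic BaseSettings model loaded from application/.env, so any field above can also be overridden through environment variables, which take priority over the .env file. A small sketch (the chosen values are examples, not defaults):

# Illustrative sketch, not part of the diff: overriding settings via environment variables.
import os

os.environ["LLM_NAME"] = "openai"
os.environ["MODEL_NAME"] = "gpt-4o-mini"
os.environ["AUTH_TYPE"] = "session_jwt"

from application.core.settings import settings  # values are read when the module is imported

assert settings.LLM_NAME == "openai"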

View File

@@ -5,7 +5,8 @@ from application.usage import gen_token_usage, stream_token_usage
class BaseLLM(ABC):
def __init__(self):
def __init__(self, decoded_token=None):
self.decoded_token = decoded_token
self.token_usage = {"prompt_tokens": 0, "generated_tokens": 0}
def _apply_decorator(self, method, decorators, *args, **kwargs):

View File

@@ -1,34 +1,131 @@
from application.llm.base import BaseLLM
import json
import requests
from application.core.settings import settings
from application.llm.base import BaseLLM
class DocsGPTAPILLM(BaseLLM):
def __init__(self, api_key=None, user_api_key=None, *args, **kwargs):
from openai import OpenAI
super().__init__(*args, **kwargs)
self.api_key = api_key
self.client = OpenAI(api_key="sk-docsgpt-public", base_url="https://oai.arc53.com")
self.user_api_key = user_api_key
self.endpoint = "https://llm.arc53.com"
self.api_key = api_key
def _raw_gen(self, baseself, model, messages, stream=False, *args, **kwargs):
response = requests.post(
f"{self.endpoint}/answer", json={"messages": messages, "max_new_tokens": 30}
)
response_clean = response.json()["a"].replace("###", "")
def _clean_messages_openai(self, messages):
cleaned_messages = []
for message in messages:
role = message.get("role")
content = message.get("content")
return response_clean
if role == "model":
role = "assistant"
def _raw_gen_stream(self, baseself, model, messages, stream=True, *args, **kwargs):
response = requests.post(
f"{self.endpoint}/stream",
json={"messages": messages, "max_new_tokens": 256},
stream=True,
)
if role and content is not None:
if isinstance(content, str):
cleaned_messages.append({"role": role, "content": content})
elif isinstance(content, list):
for item in content:
if "text" in item:
cleaned_messages.append(
{"role": role, "content": item["text"]}
)
elif "function_call" in item:
tool_call = {
"id": item["function_call"]["call_id"],
"type": "function",
"function": {
"name": item["function_call"]["name"],
"arguments": json.dumps(
item["function_call"]["args"]
),
},
}
cleaned_messages.append(
{
"role": "assistant",
"content": None,
"tool_calls": [tool_call],
}
)
elif "function_response" in item:
cleaned_messages.append(
{
"role": "tool",
"tool_call_id": item["function_response"][
"call_id"
],
"content": json.dumps(
item["function_response"]["response"]["result"]
),
}
)
else:
raise ValueError(
f"Unexpected content dictionary format: {item}"
)
else:
raise ValueError(f"Unexpected content type: {type(content)}")
for line in response.iter_lines():
if line:
data_str = line.decode("utf-8")
if data_str.startswith("data: "):
data = json.loads(data_str[6:])
yield data["a"]
return cleaned_messages
def _raw_gen(
self,
baseself,
model,
messages,
stream=False,
tools=None,
engine=settings.AZURE_DEPLOYMENT_NAME,
**kwargs,
):
messages = self._clean_messages_openai(messages)
if tools:
response = self.client.chat.completions.create(
model="docsgpt",
messages=messages,
stream=stream,
tools=tools,
**kwargs,
)
return response.choices[0]
else:
response = self.client.chat.completions.create(
model="docsgpt", messages=messages, stream=stream, **kwargs
)
return response.choices[0].message.content
def _raw_gen_stream(
self,
baseself,
model,
messages,
stream=True,
tools=None,
engine=settings.AZURE_DEPLOYMENT_NAME,
**kwargs,
):
messages = self._clean_messages_openai(messages)
if tools:
response = self.client.chat.completions.create(
model="docsgpt",
messages=messages,
stream=stream,
tools=tools,
**kwargs,
)
else:
response = self.client.chat.completions.create(
model="docsgpt", messages=messages, stream=stream, **kwargs
)
for line in response:
if len(line.choices) > 0 and line.choices[0].delta.content is not None and len(line.choices[0].delta.content) > 0:
yield line.choices[0].delta.content
elif len(line.choices) > 0:
yield line.choices[0]
def _supports_tools(self):
return True
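_clean_messages_openai flattens the internal message format, which may carry content lists with function_call and function_response parts, into plain OpenAI chat messages. A small, hypothetical input/output pair (names and ids are made up):

# Illustrative sketch, not part of the diff: what _clean_messages_openai produces.
raw_messages = [
    {"role": "user", "content": "What is DocsGPT?"},
    {"role": "model", "content": [
        {"function_call": {"call_id": "call_1", "name": "search_tool", "args": {"q": "DocsGPT"}}},
    ]},
    {"role": "tool", "content": [
        {"function_response": {"call_id": "call_1", "response": {"result": "open-source RAG assistant"}}},
    ]},
]
# After cleaning, this becomes roughly:
# [{"role": "user", "content": "What is DocsGPT?"},
#  {"role": "assistant", "content": None, "tool_calls": [{"id": "call_1", "type": "function",
#      "function": {"name": "search_tool", "arguments": '{"q": "DocsGPT"}'}}]},
#  {"role": "tool", "tool_call_id": "call_1", "content": '"open-source RAG assistant"'}]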

View File

@@ -22,7 +22,7 @@ class GoogleLLM(BaseLLM):
parts = []
if role and content is not None:
if isinstance(content, str):
parts = [types.Part.from_text(content)]
parts = [types.Part.from_text(text=content)]
elif isinstance(content, list):
for item in content:
if "text" in item:
@@ -152,7 +152,15 @@ class GoogleLLM(BaseLLM):
config=config,
)
for chunk in response:
if chunk.text is not None:
if hasattr(chunk, "candidates") and chunk.candidates:
for candidate in chunk.candidates:
if candidate.content and candidate.content.parts:
for part in candidate.content.parts:
if part.function_call:
yield part
elif part.text:
yield part.text
elif hasattr(chunk, "text"):
yield chunk.text
def _supports_tools(self):

View File

@@ -7,6 +7,7 @@ from application.llm.anthropic import AnthropicLLM
from application.llm.docsgpt_provider import DocsGPTAPILLM
from application.llm.premai import PremAILLM
from application.llm.google_ai import GoogleLLM
from application.llm.novita import NovitaLLM
class LLMCreator:
@@ -20,12 +21,15 @@ class LLMCreator:
"docsgpt": DocsGPTAPILLM,
"premai": PremAILLM,
"groq": GroqLLM,
"google": GoogleLLM
"google": GoogleLLM,
"novita": NovitaLLM,
}
@classmethod
def create_llm(cls, type, api_key, user_api_key, *args, **kwargs):
def create_llm(cls, type, api_key, user_api_key, decoded_token, *args, **kwargs):
llm_class = cls.llms.get(type.lower())
if not llm_class:
raise ValueError(f"No LLM class found for type {type}")
return llm_class(api_key, user_api_key, *args, **kwargs)
return llm_class(
api_key, user_api_key, decoded_token=decoded_token, *args, **kwargs
)
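create_llm now requires the decoded JWT as well, so downstream token accounting can attribute usage to a user. A hedged usage sketch with the new signature:

# Illustrative sketch, not part of the diff: instantiating an LLM with the new signature.
from application.core.settings import settings
from application.llm.llm_creator import LLMCreator

llm = LLMCreator.create_llm(
    settings.LLM_NAME,            # e.g. "docsgpt", "openai", "google", "novita"
    api_key=settings.API_KEY,
    user_api_key=None,
    decoded_token={"sub": "local"},
)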

application/llm/novita.py (new file, 32 lines)
View File

@@ -0,0 +1,32 @@
from application.llm.base import BaseLLM
from openai import OpenAI
class NovitaLLM(BaseLLM):
def __init__(self, api_key=None, user_api_key=None, *args, **kwargs):
super().__init__(*args, **kwargs)
self.client = OpenAI(api_key=api_key, base_url="https://api.novita.ai/v3/openai")
self.api_key = api_key
self.user_api_key = user_api_key
def _raw_gen(self, baseself, model, messages, stream=False, tools=None, **kwargs):
if tools:
response = self.client.chat.completions.create(
model=model, messages=messages, stream=stream, tools=tools, **kwargs
)
return response.choices[0]
else:
response = self.client.chat.completions.create(
model=model, messages=messages, stream=stream, **kwargs
)
return response.choices[0].message.content
def _raw_gen_stream(
self, baseself, model, messages, stream=True, tools=None, **kwargs
):
response = self.client.chat.completions.create(
model=model, messages=messages, stream=stream, **kwargs
)
for line in response:
if line.choices[0].delta.content is not None:
yield line.choices[0].delta.content

View File

@@ -1,3 +1,5 @@
import json
from application.core.settings import settings
from application.llm.base import BaseLLM
@@ -15,6 +17,63 @@ class OpenAILLM(BaseLLM):
self.api_key = api_key
self.user_api_key = user_api_key
def _clean_messages_openai(self, messages):
cleaned_messages = []
for message in messages:
role = message.get("role")
content = message.get("content")
if role == "model":
role = "assistant"
if role and content is not None:
if isinstance(content, str):
cleaned_messages.append({"role": role, "content": content})
elif isinstance(content, list):
for item in content:
if "text" in item:
cleaned_messages.append(
{"role": role, "content": item["text"]}
)
elif "function_call" in item:
tool_call = {
"id": item["function_call"]["call_id"],
"type": "function",
"function": {
"name": item["function_call"]["name"],
"arguments": json.dumps(
item["function_call"]["args"]
),
},
}
cleaned_messages.append(
{
"role": "assistant",
"content": None,
"tool_calls": [tool_call],
}
)
elif "function_response" in item:
cleaned_messages.append(
{
"role": "tool",
"tool_call_id": item["function_response"][
"call_id"
],
"content": json.dumps(
item["function_response"]["response"]["result"]
),
}
)
else:
raise ValueError(
f"Unexpected content dictionary format: {item}"
)
else:
raise ValueError(f"Unexpected content type: {type(content)}")
return cleaned_messages
def _raw_gen(
self,
baseself,
@@ -25,9 +84,14 @@ class OpenAILLM(BaseLLM):
engine=settings.AZURE_DEPLOYMENT_NAME,
**kwargs,
):
messages = self._clean_messages_openai(messages)
if tools:
response = self.client.chat.completions.create(
model=model, messages=messages, stream=stream, tools=tools, **kwargs
model=model,
messages=messages,
stream=stream,
tools=tools,
**kwargs,
)
return response.choices[0]
else:
@@ -46,13 +110,25 @@ class OpenAILLM(BaseLLM):
engine=settings.AZURE_DEPLOYMENT_NAME,
**kwargs,
):
response = self.client.chat.completions.create(
model=model, messages=messages, stream=stream, **kwargs
)
messages = self._clean_messages_openai(messages)
if tools:
response = self.client.chat.completions.create(
model=model,
messages=messages,
stream=stream,
tools=tools,
**kwargs,
)
else:
response = self.client.chat.completions.create(
model=model, messages=messages, stream=stream, **kwargs
)
for line in response:
if line.choices[0].delta.content is not None:
if len(line.choices) > 0 and line.choices[0].delta.content is not None and len(line.choices[0].delta.content) > 0:
yield line.choices[0].delta.content
elif len(line.choices) > 0:
yield line.choices[0]
def _supports_tools(self):
return True
@@ -61,17 +137,17 @@ class OpenAILLM(BaseLLM):
class AzureOpenAILLM(OpenAILLM):
def __init__(
self, openai_api_key, openai_api_base, openai_api_version, deployment_name
self, api_key, user_api_key, *args, **kwargs
):
super().__init__(openai_api_key)
super().__init__(api_key)
self.api_base = (settings.OPENAI_API_BASE,)
self.api_version = (settings.OPENAI_API_VERSION,)
self.deployment_name = (settings.AZURE_DEPLOYMENT_NAME,)
from openai import AzureOpenAI
self.client = AzureOpenAI(
api_key=openai_api_key,
api_key=api_key,
api_version=settings.OPENAI_API_VERSION,
api_base=settings.OPENAI_API_BASE,
deployment_name=settings.AZURE_DEPLOYMENT_NAME,
azure_endpoint=settings.OPENAI_API_BASE
)

application/logging.py (new file, 151 lines)
View File

@@ -0,0 +1,151 @@
import datetime
import functools
import inspect
import logging
import uuid
from typing import Any, Callable, Dict, Generator, List
from application.core.mongo_db import MongoDB
logging.basicConfig(
level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s"
)
class LogContext:
def __init__(self, endpoint, activity_id, user, api_key, query):
self.endpoint = endpoint
self.activity_id = activity_id
self.user = user
self.api_key = api_key
self.query = query
self.stacks = []
def build_stack_data(
obj: Any,
include_attributes: List[str] = None,
exclude_attributes: List[str] = None,
custom_data: Dict = None,
) -> Dict:
data = {}
if include_attributes is None:
include_attributes = []
for name, value in inspect.getmembers(obj):
if (
not name.startswith("_")
and not inspect.ismethod(value)
and not inspect.isfunction(value)
):
include_attributes.append(name)
for attr_name in include_attributes:
if exclude_attributes and attr_name in exclude_attributes:
continue
try:
attr_value = getattr(obj, attr_name)
if attr_value is not None:
if isinstance(attr_value, (int, float, str, bool)):
data[attr_name] = attr_value
elif isinstance(attr_value, list):
if all(isinstance(item, dict) for item in attr_value):
data[attr_name] = attr_value
elif all(hasattr(item, "__dict__") for item in attr_value):
data[attr_name] = [item.__dict__ for item in attr_value]
else:
data[attr_name] = [str(item) for item in attr_value]
elif isinstance(attr_value, dict):
data[attr_name] = {k: str(v) for k, v in attr_value.items()}
else:
data[attr_name] = str(attr_value)
except AttributeError:
pass
if custom_data:
data.update(custom_data)
return data
def log_activity() -> Callable:
def decorator(func: Callable) -> Callable:
@functools.wraps(func)
def wrapper(*args: Any, **kwargs: Any) -> Any:
activity_id = str(uuid.uuid4())
data = build_stack_data(args[0])
endpoint = data.get("endpoint", "")
user = data.get("user", "local")
api_key = data.get("user_api_key", "")
query = kwargs.get("query", getattr(args[0], "query", ""))
context = LogContext(endpoint, activity_id, user, api_key, query)
kwargs["log_context"] = context
logging.info(
f"Starting activity: {endpoint} - {activity_id} - User: {user}"
)
generator = func(*args, **kwargs)
yield from _consume_and_log(generator, context)
return wrapper
return decorator
def _consume_and_log(generator: Generator, context: "LogContext"):
try:
for item in generator:
yield item
except Exception as e:
logging.exception(f"Error in {context.endpoint} - {context.activity_id}: {e}")
context.stacks.append({"component": "error", "data": {"message": str(e)}})
_log_to_mongodb(
endpoint=context.endpoint,
activity_id=context.activity_id,
user=context.user,
api_key=context.api_key,
query=context.query,
stacks=context.stacks,
level="error",
)
raise
finally:
_log_to_mongodb(
endpoint=context.endpoint,
activity_id=context.activity_id,
user=context.user,
api_key=context.api_key,
query=context.query,
stacks=context.stacks,
level="info",
)
def _log_to_mongodb(
endpoint: str,
activity_id: str,
user: str,
api_key: str,
query: str,
stacks: List[Dict],
level: str,
) -> None:
try:
mongo = MongoDB.get_client()
db = mongo["docsgpt"]
user_logs_collection = db["stack_logs"]
log_entry = {
"endpoint": endpoint,
"id": activity_id,
"level": level,
"user": user,
"api_key": api_key,
"query": query,
"stacks": stacks,
"timestamp": datetime.datetime.now(datetime.timezone.utc),
}
user_logs_collection.insert_one(log_entry)
logging.debug(f"Logged activity to MongoDB: {activity_id}")
except Exception as e:
logging.error(f"Failed to log to MongoDB: {e}")
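log_activity wraps a generator method: it snapshots the public attributes of the method's first argument via build_stack_data, injects a log_context keyword argument, and writes the collected stacks to the stack_logs collection when the generator finishes (or fails). A hypothetical sketch of a class it could decorate; the final write needs a reachable MongoDB, otherwise only an error is logged.

# Illustrative sketch, not part of the diff: a generator method wrapped by log_activity.
from application.logging import build_stack_data, log_activity

class StreamingWorker:
    def __init__(self):
        self.endpoint = "stream"
        self.user = "local"
        self.user_api_key = ""

    @log_activity()
    def run(self, query="", log_context=None):
        # components can append structured entries to the shared context
        log_context.stacks.append({"component": "worker", "data": build_stack_data(self)})
        yield "partial answer"

for chunk in StreamingWorker().run(query="hello"):
    print(chunk)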

View File

@@ -24,26 +24,27 @@ class PDFParser(BaseParser):
# alternatively you can use local vision capable LLM
with open(file, "rb") as file_loaded:
files = {'file': file_loaded}
response = requests.post(doc2md_service, files=files)
data = response.json()["markdown"]
response = requests.post(doc2md_service, files=files)
data = response.json()["markdown"]
return data
try:
import PyPDF2
from pypdf import PdfReader
except ImportError:
raise ValueError("PyPDF2 is required to read PDF files.")
raise ValueError("pypdf is required to read PDF files.")
text_list = []
with open(file, "rb") as fp:
# Create a PDF object
pdf = PyPDF2.PdfReader(fp)
pdf = PdfReader(fp)
# Get the number of pages in the PDF document
num_pages = len(pdf.pages)
# Iterate over every page
for page in range(num_pages):
for page_index in range(num_pages):
# Extract the text from the page
page_text = pdf.pages[page].extract_text()
page = pdf.pages[page_index]
page_text = page.extract_text()
text_list.append(page_text)
text = "\n".join(text_list)
@@ -66,4 +67,4 @@ class DocxParser(BaseParser):
text = docx2txt.process(file)
return text
return text

View File

@@ -1,26 +1,27 @@
anthropic==0.40.0
anthropic==0.49.0
boto3==1.35.97
beautifulsoup4==4.12.3
celery==5.4.0
dataclasses-json==0.6.7
docx2txt==0.8
duckduckgo-search==6.3.0
duckduckgo-search==7.5.2
ebooklib==0.18
elastic-transport==8.17.0
elasticsearch==8.17.0
elasticsearch==8.17.1
escodegen==1.0.11
esprima==4.0.1
esutils==1.0.1
Flask==3.1.0
faiss-cpu==1.9.0.post1
flask-restx==1.3.0
google-genai==0.5.0
gevent==24.11.1
google-genai==1.3.0
google-generativeai==0.8.3
gTTS==2.5.4
gunicorn==23.0.0
html2text==2024.2.26
javalang==0.13.0
jinja2==3.1.5
jinja2==3.1.6
jiter==0.8.2
jmespath==1.0.1
joblib==1.4.2
@@ -30,23 +31,23 @@ jsonschema==4.23.0
jsonschema-spec==0.2.4
jsonschema-specifications==2023.7.1
kombu==5.4.2
langchain==0.3.14
langchain-community==0.3.14
langchain-core==0.3.29
langchain-openai==0.3.0
langchain-text-splitters==0.3.5
langsmith==0.2.10
langchain==0.3.20
langchain-community==0.3.19
langchain-core==0.3.45
langchain-openai==0.3.8
langchain-text-splitters==0.3.6
langsmith==0.3.15
lazy-object-proxy==1.10.0
lxml==5.3.0
lxml==5.3.1
markupsafe==3.0.2
marshmallow==3.24.1
marshmallow==3.26.1
mpmath==1.3.0
multidict==6.1.0
mypy-extensions==1.0.0
networkx==3.4.2
numpy==2.2.1
openai==1.59.5
openapi-schema-validator==0.6.2
openai==1.66.3
openapi-schema-validator==0.6.3
openapi-spec-validator==0.6.0
openapi3-parser==1.1.19
orjson==3.10.14
@@ -57,20 +58,21 @@ pathable==0.4.4
pillow==11.1.0
portalocker==2.10.1
prance==23.6.21.0
primp==0.10.0
prompt-toolkit==3.0.48
primp==0.14.0
prompt-toolkit==3.0.50
protobuf==5.29.3
psycopg2-binary==2.9.10
py==1.11.0
pydantic==2.10.4
pydantic==2.10.6
pydantic-core==2.27.2
pydantic-settings==2.7.1
pymongo==4.10.1
pypdf2==3.0.1
pypdf==5.2.0
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
python-jose==3.4.0
python-pptx==1.0.2
qdrant-client==1.12.2
qdrant-client==1.13.2
redis==5.2.1
referencing==0.30.2
regex==2024.11.6
@@ -81,7 +83,7 @@ tiktoken==0.8.0
tokenizers==0.21.0
torch==2.5.1
tqdm==4.67.1
transformers==4.48.0
transformers==4.49.0
typing-extensions==4.12.2
typing-inspect==0.9.0
tzdata==2024.2

View File

@@ -1,15 +1,16 @@
import json
from application.retriever.base import BaseRetriever
from langchain_community.tools import BraveSearch
from application.core.settings import settings
from application.llm.llm_creator import LLMCreator
from langchain_community.tools import BraveSearch
from application.retriever.base import BaseRetriever
class BraveRetSearch(BaseRetriever):
def __init__(
self,
question,
source,
chat_history,
prompt,
@@ -17,8 +18,9 @@ class BraveRetSearch(BaseRetriever):
token_limit=150,
gpt_model="docsgpt",
user_api_key=None,
decoded_token=None,
):
self.question = question
self.question = ""
self.source = source
self.chat_history = chat_history
self.prompt = prompt
@@ -35,6 +37,7 @@ class BraveRetSearch(BaseRetriever):
)
)
self.user_api_key = user_api_key
self.decoded_token = decoded_token
def _get_data(self):
if self.chunks == 0:
@@ -81,14 +84,19 @@ class BraveRetSearch(BaseRetriever):
messages_combine.append({"role": "user", "content": self.question})
llm = LLMCreator.create_llm(
settings.LLM_NAME, api_key=settings.API_KEY, user_api_key=self.user_api_key
settings.LLM_NAME,
api_key=settings.API_KEY,
user_api_key=self.user_api_key,
decoded_token=self.decoded_token,
)
completion = llm.gen_stream(model=self.gpt_model, messages=messages_combine)
for line in completion:
yield {"answer": str(line)}
def search(self):
def search(self, query: str = ""):
if query:
self.question = query
return self._get_data()
def get_params(self):
@@ -100,5 +108,5 @@ class BraveRetSearch(BaseRetriever):
"chunks": self.chunks,
"token_limit": self.token_limit,
"gpt_model": self.gpt_model,
"user_api_key": self.user_api_key
"user_api_key": self.user_api_key,
}

View File

@@ -1,26 +1,26 @@
from application.core.settings import settings
from application.llm.llm_creator import LLMCreator
from application.retriever.base import BaseRetriever
from application.tools.agent import Agent
from application.vectorstore.vector_creator import VectorCreator
class ClassicRAG(BaseRetriever):
def __init__(
self,
question,
source,
chat_history,
prompt,
chat_history=None,
prompt="",
chunks=2,
token_limit=150,
gpt_model="docsgpt",
user_api_key=None,
llm_name=settings.LLM_NAME,
api_key=settings.API_KEY,
decoded_token=None,
):
self.question = question
self.vectorstore = source["active_docs"] if "active_docs" in source else None
self.chat_history = chat_history
self.original_question = ""
self.chat_history = chat_history if chat_history is not None else []
self.prompt = prompt
self.chunks = chunks
self.gpt_model = gpt_model
@@ -35,6 +35,45 @@ class ClassicRAG(BaseRetriever):
)
)
self.user_api_key = user_api_key
self.llm_name = llm_name
self.api_key = api_key
self.llm = LLMCreator.create_llm(
self.llm_name,
api_key=self.api_key,
user_api_key=self.user_api_key,
decoded_token=decoded_token,
)
self.question = self._rephrase_query()
self.vectorstore = source["active_docs"] if "active_docs" in source else None
self.decoded_token = decoded_token
def _rephrase_query(self):
if (
not self.original_question
or not self.chat_history
or self.chat_history == []
):
return self.original_question
prompt = f"""Given the following conversation history:
{self.chat_history}
Rephrase the following user question to be a standalone search query
that captures all relevant context from the conversation:
"""
messages = [
{"role": "system", "content": prompt},
{"role": "user", "content": self.original_question},
]
try:
rephrased_query = self.llm.gen(model=self.gpt_model, messages=messages)
print(f"Rephrased query: {rephrased_query}")
return rephrased_query if rephrased_query else self.original_question
except Exception as e:
print(f"Error rephrasing query: {e}")
return self.original_question
def _get_data(self):
if self.chunks == 0:
@@ -61,47 +100,20 @@ class ClassicRAG(BaseRetriever):
return docs
def gen(self):
docs = self._get_data()
def gen():
pass
# join all page_content together with a newline
docs_together = "\n".join([doc["text"] for doc in docs])
p_chat_combine = self.prompt.replace("{summaries}", docs_together)
messages_combine = [{"role": "system", "content": p_chat_combine}]
for doc in docs:
yield {"source": doc}
if len(self.chat_history) > 0:
for i in self.chat_history:
if "prompt" in i and "response" in i:
messages_combine.append({"role": "user", "content": i["prompt"]})
messages_combine.append(
{"role": "assistant", "content": i["response"]}
)
messages_combine.append({"role": "user", "content": self.question})
# llm = LLMCreator.create_llm(
# settings.LLM_NAME, api_key=settings.API_KEY, user_api_key=self.user_api_key
# )
# completion = llm.gen_stream(model=self.gpt_model, messages=messages_combine)
agent = Agent(
llm_name=settings.LLM_NAME,
gpt_model=self.gpt_model,
api_key=settings.API_KEY,
user_api_key=self.user_api_key,
)
completion = agent.gen(messages_combine)
for line in completion:
yield {"answer": str(line)}
def search(self):
def search(self, query: str = ""):
if query:
self.original_question = query
self.question = self._rephrase_query()
return self._get_data()
def get_params(self):
return {
"question": self.question,
"question": self.original_question,
"rephrased_question": self.question,
"source": self.vectorstore,
"chat_history": self.chat_history,
"prompt": self.prompt,
"chunks": self.chunks,
"token_limit": self.token_limit,
"gpt_model": self.gpt_model,

View File

@@ -1,15 +1,15 @@
from application.retriever.base import BaseRetriever
from application.core.settings import settings
from application.llm.llm_creator import LLMCreator
from langchain_community.tools import DuckDuckGoSearchResults
from langchain_community.utilities import DuckDuckGoSearchAPIWrapper
from application.core.settings import settings
from application.llm.llm_creator import LLMCreator
from application.retriever.base import BaseRetriever
class DuckDuckSearch(BaseRetriever):
def __init__(
self,
question,
source,
chat_history,
prompt,
@@ -17,8 +17,9 @@ class DuckDuckSearch(BaseRetriever):
token_limit=150,
gpt_model="docsgpt",
user_api_key=None,
decoded_token=None,
):
self.question = question
self.question = ""
self.source = source
self.chat_history = chat_history
self.prompt = prompt
@@ -35,42 +36,26 @@ class DuckDuckSearch(BaseRetriever):
)
)
self.user_api_key = user_api_key
def _parse_lang_string(self, input_string):
result = []
current_item = ""
inside_brackets = False
for char in input_string:
if char == "[":
inside_brackets = True
elif char == "]":
inside_brackets = False
result.append(current_item)
current_item = ""
elif inside_brackets:
current_item += char
if inside_brackets:
result.append(current_item)
return result
self.decoded_token = decoded_token
def _get_data(self):
if self.chunks == 0:
docs = []
else:
wrapper = DuckDuckGoSearchAPIWrapper(max_results=self.chunks)
search = DuckDuckGoSearchResults(api_wrapper=wrapper)
search = DuckDuckGoSearchResults(api_wrapper=wrapper, output_format="list")
results = search.run(self.question)
results = self._parse_lang_string(results)
docs = []
for i in results:
try:
text = i.split("title:")[0]
title = i.split("title:")[1].split("link:")[0]
link = i.split("link:")[1]
docs.append({"text": text, "title": title, "link": link})
docs.append(
{
"text": i.get("snippet", "").strip(),
"title": i.get("title", "").strip(),
"link": i.get("link", "").strip(),
}
)
except IndexError:
pass
if settings.LLM_NAME == "llama.cpp":
@@ -88,26 +73,31 @@ class DuckDuckSearch(BaseRetriever):
for doc in docs:
yield {"source": doc}
if len(self.chat_history) > 0:
if len(self.chat_history) > 0:
for i in self.chat_history:
if "prompt" in i and "response" in i:
messages_combine.append({"role": "user", "content": i["prompt"]})
messages_combine.append(
{"role": "assistant", "content": i["response"]}
)
if "prompt" in i and "response" in i:
messages_combine.append({"role": "user", "content": i["prompt"]})
messages_combine.append(
{"role": "assistant", "content": i["response"]}
)
messages_combine.append({"role": "user", "content": self.question})
llm = LLMCreator.create_llm(
settings.LLM_NAME, api_key=settings.API_KEY, user_api_key=self.user_api_key
settings.LLM_NAME,
api_key=settings.API_KEY,
user_api_key=self.user_api_key,
decoded_token=self.decoded_token,
)
completion = llm.gen_stream(model=self.gpt_model, messages=messages_combine)
for line in completion:
yield {"answer": str(line)}
def search(self):
def search(self, query: str = ""):
if query:
self.question = query
return self._get_data()
def get_params(self):
return {
"question": self.question,
@@ -117,5 +107,5 @@ class DuckDuckSearch(BaseRetriever):
"chunks": self.chunks,
"token_limit": self.token_limit,
"gpt_model": self.gpt_model,
"user_api_key": self.user_api_key
"user_api_key": self.user_api_key,
}

View File

@@ -1,54 +0,0 @@
import json
import requests
from application.tools.base import Tool
class APITool(Tool):
"""
API Tool
A flexible tool for performing various API actions (e.g., sending messages, retrieving data) via custom user-specified APIs
"""
def __init__(self, config):
self.config = config
self.url = config.get("url", "")
self.method = config.get("method", "GET")
self.headers = config.get("headers", {"Content-Type": "application/json"})
self.query_params = config.get("query_params", {})
def execute_action(self, action_name, **kwargs):
return self._make_api_call(
self.url, self.method, self.headers, self.query_params, kwargs
)
def _make_api_call(self, url, method, headers, query_params, body):
if query_params:
url = f"{url}?{requests.compat.urlencode(query_params)}"
if isinstance(body, dict):
body = json.dumps(body)
try:
print(f"Making API call: {method} {url} with body: {body}")
response = requests.request(method, url, headers=headers, data=body)
response.raise_for_status()
try:
data = response.json()
except ValueError:
data = None
return {
"status_code": response.status_code,
"data": data,
"message": "API call successful.",
}
except requests.exceptions.RequestException as e:
return {
"status_code": response.status_code if response else None,
"message": f"API call failed: {str(e)}",
}
def get_actions_metadata(self):
return []
def get_config_requirements(self):
return {}

View File

@@ -1,97 +0,0 @@
import json
from abc import ABC, abstractmethod
class LLMHandler(ABC):
@abstractmethod
def handle_response(self, agent, resp, tools_dict, messages, **kwargs):
pass
class OpenAILLMHandler(LLMHandler):
def handle_response(self, agent, resp, tools_dict, messages):
while resp.finish_reason == "tool_calls":
message = json.loads(resp.model_dump_json())["message"]
keys_to_remove = {"audio", "function_call", "refusal"}
filtered_data = {
k: v for k, v in message.items() if k not in keys_to_remove
}
messages.append(filtered_data)
tool_calls = resp.message.tool_calls
for call in tool_calls:
try:
tool_response, call_id = agent._execute_tool_action(
tools_dict, call
)
messages.append(
{
"role": "tool",
"content": str(tool_response),
"tool_call_id": call_id,
}
)
except Exception as e:
messages.append(
{
"role": "tool",
"content": f"Error executing tool: {str(e)}",
"tool_call_id": call_id,
}
)
resp = agent.llm.gen(
model=agent.gpt_model, messages=messages, tools=agent.tools
)
return resp
class GoogleLLMHandler(LLMHandler):
def handle_response(self, agent, resp, tools_dict, messages):
from google.genai import types
while True:
response = agent.llm.gen(
model=agent.gpt_model, messages=messages, tools=agent.tools
)
if response.candidates and response.candidates[0].content.parts:
tool_call_found = False
for part in response.candidates[0].content.parts:
if part.function_call:
tool_call_found = True
tool_response, call_id = agent._execute_tool_action(
tools_dict, part.function_call
)
function_response_part = types.Part.from_function_response(
name=part.function_call.name,
response={"result": tool_response},
)
messages.append(
{"role": "model", "content": [part.to_json_dict()]}
)
messages.append(
{
"role": "tool",
"content": [function_response_part.to_json_dict()],
}
)
if (
not tool_call_found
and response.candidates[0].content.parts
and response.candidates[0].content.parts[0].text
):
return response.candidates[0].content.parts[0].text
elif not tool_call_found:
return response.candidates[0].content.parts
else:
return response
def get_llm_handler(llm_type):
handlers = {
"openai": OpenAILLMHandler(),
"google": GoogleLLMHandler(),
}
return handlers.get(llm_type, OpenAILLMHandler())

View File

@@ -1,26 +0,0 @@
import json
class ToolActionParser:
def __init__(self, llm_type):
self.llm_type = llm_type
self.parsers = {
"OpenAILLM": self._parse_openai_llm,
"GoogleLLM": self._parse_google_llm,
}
def parse_args(self, call):
parser = self.parsers.get(self.llm_type, self._parse_openai_llm)
return parser(call)
def _parse_openai_llm(self, call):
call_args = json.loads(call.function.arguments)
tool_id = call.function.name.split("_")[-1]
action_name = call.function.name.rsplit("_", 1)[0]
return tool_id, action_name, call_args
def _parse_google_llm(self, call):
call_args = call.args
tool_id = call.name.split("_")[-1]
action_name = call.name.rsplit("_", 1)[0]
return tool_id, action_name, call_args

View File

@@ -1,17 +1,23 @@
import sys
from datetime import datetime
from application.core.mongo_db import MongoDB
from application.utils import num_tokens_from_string, num_tokens_from_object_or_list
from application.utils import num_tokens_from_object_or_list, num_tokens_from_string
mongo = MongoDB.get_client()
db = mongo["docsgpt"]
usage_collection = db["token_usage"]
def update_token_usage(user_api_key, token_usage):
def update_token_usage(decoded_token, user_api_key, token_usage):
if "pytest" in sys.modules:
return
if decoded_token:
user_id = decoded_token["sub"]
else:
user_id = None
usage_data = {
"user_id": user_id,
"api_key": user_api_key,
"prompt_tokens": token_usage["prompt_tokens"],
"generated_tokens": token_usage["generated_tokens"],
@@ -24,14 +30,17 @@ def gen_token_usage(func):
def wrapper(self, model, messages, stream, tools, **kwargs):
for message in messages:
if message["content"]:
self.token_usage["prompt_tokens"] += num_tokens_from_string(message["content"])
self.token_usage["prompt_tokens"] += num_tokens_from_string(
message["content"]
)
result = func(self, model, messages, stream, tools, **kwargs)
# check if result is a string
if isinstance(result, str):
self.token_usage["generated_tokens"] += num_tokens_from_string(result)
else:
self.token_usage["generated_tokens"] += num_tokens_from_object_or_list(result)
update_token_usage(self.user_api_key, self.token_usage)
self.token_usage["generated_tokens"] += num_tokens_from_object_or_list(
result
)
update_token_usage(self.decoded_token, self.user_api_key, self.token_usage)
return result
return wrapper
@@ -40,7 +49,9 @@ def gen_token_usage(func):
def stream_token_usage(func):
def wrapper(self, model, messages, stream, tools, **kwargs):
for message in messages:
self.token_usage["prompt_tokens"] += num_tokens_from_string(message["content"])
self.token_usage["prompt_tokens"] += num_tokens_from_string(
message["content"]
)
batch = []
result = func(self, model, messages, stream, tools, **kwargs)
for r in result:
@@ -48,6 +59,6 @@ def stream_token_usage(func):
yield r
for line in batch:
self.token_usage["generated_tokens"] += num_tokens_from_string(line)
update_token_usage(self.user_api_key, self.token_usage)
update_token_usage(self.decoded_token, self.user_api_key, self.token_usage)
return wrapper

View File

@@ -1,5 +1,7 @@
import tiktoken
import hashlib
import re
import tiktoken
from flask import jsonify, make_response
@@ -21,6 +23,7 @@ def num_tokens_from_string(string: str) -> int:
else:
return 0
def num_tokens_from_object_or_list(thing):
if isinstance(thing, list):
return sum([num_tokens_from_object_or_list(x) for x in thing])
@@ -31,6 +34,7 @@ def num_tokens_from_object_or_list(thing):
else:
return 0
def count_tokens_docs(docs):
docs_content = ""
for doc in docs:
@@ -56,7 +60,8 @@ def check_required_fields(data, required_fields):
def get_hash(data):
return hashlib.md5(data.encode()).hexdigest()
return hashlib.md5(data.encode(), usedforsecurity=False).hexdigest()
def limit_chat_history(history, max_token_limit=None, gpt_model="docsgpt"):
"""
@@ -66,32 +71,41 @@ def limit_chat_history(history, max_token_limit=None, gpt_model="docsgpt"):
from application.core.settings import settings
max_token_limit = (
max_token_limit
if max_token_limit and
max_token_limit < settings.MODEL_TOKEN_LIMITS.get(
gpt_model, settings.DEFAULT_MAX_HISTORY
)
else settings.MODEL_TOKEN_LIMITS.get(
gpt_model, settings.DEFAULT_MAX_HISTORY
)
)
max_token_limit
if max_token_limit
and max_token_limit
< settings.MODEL_TOKEN_LIMITS.get(gpt_model, settings.DEFAULT_MAX_HISTORY)
else settings.MODEL_TOKEN_LIMITS.get(gpt_model, settings.DEFAULT_MAX_HISTORY)
)
if not history:
return []
tokens_current_history = 0
trimmed_history = []
tokens_current_history = 0
for message in reversed(history):
tokens_batch = 0
if "prompt" in message and "response" in message:
tokens_batch = num_tokens_from_string(message["prompt"]) + num_tokens_from_string(
message["response"]
)
if tokens_current_history + tokens_batch < max_token_limit:
tokens_current_history += tokens_batch
trimmed_history.insert(0, message)
else:
break
tokens_batch += num_tokens_from_string(message["prompt"])
tokens_batch += num_tokens_from_string(message["response"])
if "tool_calls" in message:
for tool_call in message["tool_calls"]:
tool_call_string = f"Tool: {tool_call.get('tool_name')} | Action: {tool_call.get('action_name')} | Args: {tool_call.get('arguments')} | Response: {tool_call.get('result')}"
tokens_batch += num_tokens_from_string(tool_call_string)
if tokens_current_history + tokens_batch < max_token_limit:
tokens_current_history += tokens_batch
trimmed_history.insert(0, message)
else:
break
return trimmed_history
def validate_function_name(function_name):
"""Validates if a function name matches the allowed pattern."""
if not re.match(r"^[a-zA-Z0-9_-]+$", function_name):
return False
return True
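limit_chat_history now also counts the tokens of recorded tool calls when trimming history to the model's window. A small illustrative call with a made-up tool call record:

# Illustrative sketch, not part of the diff: trimming history that includes tool calls.
from application.utils import limit_chat_history

history = [
    {
        "prompt": "What is the weather in Berlin?",
        "response": "It is sunny.",
        "tool_calls": [
            {"tool_name": "weather_api", "action_name": "get_current",
             "arguments": {"city": "Berlin"}, "result": "sunny, 21C"}
        ],
    },
    {"prompt": "And tomorrow?", "response": "Light rain is expected."},
]

trimmed = limit_chat_history(history, max_token_limit=100, gpt_model="docsgpt")
print(len(trimmed))  # the most recent turns that fit inside the token budget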

View File

@@ -75,9 +75,9 @@ class BaseVectorStore(ABC):
openai_api_key=embeddings_key
)
elif embeddings_name == "huggingface_sentence-transformers/all-mpnet-base-v2":
if os.path.exists("./model/all-mpnet-base-v2"):
if os.path.exists("./models/all-mpnet-base-v2"):
embedding_instance = EmbeddingsSingleton.get_instance(
embeddings_name="./model/all-mpnet-base-v2",
embeddings_name = "./models/all-mpnet-base-v2",
)
else:
embedding_instance = EmbeddingsSingleton.get_instance(
@@ -86,4 +86,5 @@ class BaseVectorStore(ABC):
else:
embedding_instance = EmbeddingsSingleton.get_instance(embeddings_name)
return embedding_instance
return embedding_instance

View File

@@ -1,8 +1,12 @@
from langchain_community.vectorstores import FAISS
from application.vectorstore.base import BaseVectorStore
from application.core.settings import settings
import os
from langchain_community.vectorstores import FAISS
from application.core.settings import settings
from application.parser.schema.base import Document
from application.vectorstore.base import BaseVectorStore
def get_vectorstore(path: str) -> str:
if path:
vectorstore = os.path.join("application", "indexes", path)
@@ -10,21 +14,25 @@ def get_vectorstore(path: str) -> str:
vectorstore = os.path.join("application")
return vectorstore
class FaissStore(BaseVectorStore):
def __init__(self, source_id: str, embeddings_key: str, docs_init=None):
super().__init__()
self.source_id = source_id
self.path = get_vectorstore(source_id)
embeddings = self._get_embeddings(settings.EMBEDDINGS_NAME, embeddings_key)
self.embeddings = self._get_embeddings(settings.EMBEDDINGS_NAME, embeddings_key)
try:
if docs_init:
self.docsearch = FAISS.from_documents(docs_init, embeddings)
self.docsearch = FAISS.from_documents(docs_init, self.embeddings)
else:
self.docsearch = FAISS.load_local(self.path, embeddings, allow_dangerous_deserialization=True)
self.docsearch = FAISS.load_local(
self.path, self.embeddings, allow_dangerous_deserialization=True
)
except Exception:
raise
self.assert_embedding_dimensions(embeddings)
self.assert_embedding_dimensions(self.embeddings)
def search(self, *args, **kwargs):
return self.docsearch.similarity_search(*args, **kwargs)
@@ -40,11 +48,42 @@ class FaissStore(BaseVectorStore):
def assert_embedding_dimensions(self, embeddings):
"""Check that the word embedding dimension of the docsearch index matches the dimension of the word embeddings used."""
if settings.EMBEDDINGS_NAME == "huggingface_sentence-transformers/all-mpnet-base-v2":
word_embedding_dimension = getattr(embeddings, 'dimension', None)
if (
settings.EMBEDDINGS_NAME
== "huggingface_sentence-transformers/all-mpnet-base-v2"
):
word_embedding_dimension = getattr(embeddings, "dimension", None)
if word_embedding_dimension is None:
raise AttributeError("'dimension' attribute not found in embeddings instance.")
raise AttributeError(
"'dimension' attribute not found in embeddings instance."
)
docsearch_index_dimension = self.docsearch.index.d
if word_embedding_dimension != docsearch_index_dimension:
raise ValueError(f"Embedding dimension mismatch: embeddings.dimension ({word_embedding_dimension}) != docsearch index dimension ({docsearch_index_dimension})")
raise ValueError(
f"Embedding dimension mismatch: embeddings.dimension ({word_embedding_dimension}) != docsearch index dimension ({docsearch_index_dimension})"
)
def get_chunks(self):
chunks = []
if self.docsearch:
for doc_id, doc in self.docsearch.docstore._dict.items():
chunk_data = {
"doc_id": doc_id,
"text": doc.page_content,
"metadata": doc.metadata,
}
chunks.append(chunk_data)
return chunks
def add_chunk(self, text, metadata=None):
metadata = metadata or {}
doc = Document(text=text, extra_info=metadata).to_langchain_format()
doc_id = self.docsearch.add_documents([doc])
self.save_local(self.path)
return doc_id
def delete_chunk(self, chunk_id):
self.delete_index([chunk_id])
self.save_local(self.path)
return True
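The FAISS store gains simple chunk-management helpers on top of the existing index. A hedged sketch of how they could be used; the source id and embeddings key are placeholders and an existing local index is assumed.

# Illustrative sketch, not part of the diff: managing chunks in a FaissStore.
from application.vectorstore.faiss import FaissStore

store = FaissStore(source_id="local/my-docs/", embeddings_key=None)  # placeholders

new_ids = store.add_chunk("DocsGPT supports FAISS, Qdrant, Milvus and more.",
                          metadata={"title": "stores.md"})
for chunk in store.get_chunks():
    print(chunk["doc_id"], chunk["text"][:60])

store.delete_chunk(new_ids[0])  # add_documents returns a list of ids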

View File

@@ -124,3 +124,53 @@ class MongoDBVectorStore(BaseVectorStore):
    def delete_index(self, *args, **kwargs):
        self._collection.delete_many({"source_id": self._source_id})

    def get_chunks(self):
        try:
            chunks = []
            cursor = self._collection.find({"source_id": self._source_id})
            for doc in cursor:
                doc_id = str(doc.get("_id"))
                text = doc.get(self._text_key)
                metadata = {
                    k: v
                    for k, v in doc.items()
                    if k
                    not in ["_id", self._text_key, self._embedding_key, "source_id"]
                }
                if text:
                    chunks.append(
                        {"doc_id": doc_id, "text": text, "metadata": metadata}
                    )
            return chunks
        except Exception as e:
            print(f"Error getting chunks: {e}")
            return []

    def add_chunk(self, text, metadata=None):
        metadata = metadata or {}
        embeddings = self._embedding.embed_documents([text])
        if not embeddings:
            raise ValueError("Could not generate embedding for chunk")
        chunk_data = {
            self._text_key: text,
            self._embedding_key: embeddings[0],
            "source_id": self._source_id,
            **metadata,
        }
        result = self._collection.insert_one(chunk_data)
        return str(result.inserted_id)

    def delete_chunk(self, chunk_id):
        try:
            from bson.objectid import ObjectId

            object_id = ObjectId(chunk_id)
            result = self._collection.delete_one({"_id": object_id})
            return result.deleted_count > 0
        except Exception as e:
            print(f"Error deleting chunk: {e}")
            return False

View File

@@ -1,6 +1,6 @@
services:
frontend:
build: ./frontend
build: ../frontend
environment:
- VITE_API_HOST=http://localhost:7091
- VITE_API_STREAMING=$VITE_API_STREAMING
@@ -10,7 +10,7 @@ services:
- backend
backend:
build: ./application
build: ../application
environment:
- API_KEY=$OPENAI_API_KEY
- EMBEDDINGS_KEY=$OPENAI_API_KEY
@@ -25,15 +25,15 @@ services:
ports:
- "7091:7091"
volumes:
- ./application/indexes:/app/application/indexes
- ./application/inputs:/app/application/inputs
- ./application/vectors:/app/application/vectors
- ../application/indexes:/app/application/indexes
- ../application/inputs:/app/application/inputs
- ../application/vectors:/app/application/vectors
depends_on:
- redis
- mongo
worker:
build: ./application
build: ../application
command: celery -A application.app.celery worker -l INFO
environment:
- API_KEY=$OPENAI_API_KEY

View File

@@ -0,0 +1,18 @@
services:
redis:
image: redis:6-alpine
ports:
- 6379:6379
mongo:
image: mongo:6
ports:
- 27017:27017
volumes:
- mongodb_data_container:/data/db
volumes:
mongodb_data_container:

View File

@@ -1,8 +1,8 @@
services:
frontend:
build: ./frontend
build: ../frontend
volumes:
- ./frontend/src:/app/src
- ../frontend/src:/app/src
environment:
- VITE_API_HOST=http://localhost:7091
- VITE_API_STREAMING=$VITE_API_STREAMING

View File

@@ -1,8 +1,8 @@
services:
frontend:
build: ./frontend
build: ../frontend
volumes:
- ./frontend/src:/app/src
- ../frontend/src:/app/src
environment:
- VITE_API_HOST=http://localhost:7091
- VITE_API_STREAMING=$VITE_API_STREAMING
@@ -12,7 +12,7 @@ services:
- backend
backend:
build: ./application
build: ../application
environment:
- API_KEY=$API_KEY
- EMBEDDINGS_KEY=$API_KEY
@@ -21,19 +21,21 @@ services:
- CELERY_RESULT_BACKEND=redis://redis:6379/1
- MONGO_URI=mongodb://mongo:27017/docsgpt
- CACHE_REDIS_URL=redis://redis:6379/2
- OPENAI_BASE_URL=$OPENAI_BASE_URL
- MODEL_NAME=$MODEL_NAME
ports:
- "7091:7091"
volumes:
- ./application/indexes:/app/application/indexes
- ./application/inputs:/app/application/inputs
- ./application/vectors:/app/application/vectors
- ../application/indexes:/app/application/indexes
- ../application/inputs:/app/application/inputs
- ../application/vectors:/app/application/vectors
depends_on:
- redis
- mongo
worker:
build: ./application
command: celery -A application.app.celery worker -l INFO -B
build: ../application
command: celery -A application.app.celery worker -l INFO --pool=gevent -B
environment:
- API_KEY=$API_KEY
- EMBEDDINGS_KEY=$API_KEY

View File

@@ -0,0 +1,11 @@
version: "3.8"
services:
ollama:
image: ollama/ollama
ports:
- "11434:11434"
volumes:
- ollama_data:/root/.ollama
volumes:
ollama_data:

View File

@@ -0,0 +1,16 @@
version: "3.8"
services:
ollama:
image: ollama/ollama
ports:
- "11434:11434"
volumes:
- ollama_data:/root/.ollama
deploy:
resources:
reservations:
devices:
- capabilities: [gpu]
volumes:
ollama_data:

View File

@@ -1,20 +0,0 @@
services:
frontend:
build: ./frontend
environment:
- VITE_API_HOST=http://localhost:7091
- VITE_API_STREAMING=$VITE_API_STREAMING
ports:
- "5173:5173"
depends_on:
- mock-backend
mock-backend:
build: ./mock-backend
ports:
- "7091:7091"
redis:
image: redis:6-alpine
ports:
- 6379:6379

View File

@@ -0,0 +1,120 @@
import Image from 'next/image';
const iconMap = {
'Amazon Lightsail': '/lightsail.png',
'Railway': '/railway.png',
'Civo Compute Cloud': '/civo.png',
'DigitalOcean Droplet': '/digitalocean.png',
'Kamatera Cloud': '/kamatera.png',
};
export function DeploymentCards({ items }) {
return (
<>
<div className="deployment-cards">
{items.map(({ title, link, description }) => {
const isExternal = link.startsWith('https://');
const iconSrc = iconMap[title] || '/default-icon.png'; // Default icon if not found
return (
<div
key={title}
className={`card${isExternal ? ' external' : ''}`}
>
<a href={link} target={isExternal ? '_blank' : undefined} rel="noopener noreferrer" className="card-link-wrapper">
<div className="card-icon-container">
{iconSrc && <div className="card-icon"><Image src={iconSrc} alt={title} width={32} height={32} /></div>} {/* Reduced icon size */}
</div>
<h3 className="card-title">{title}</h3>
{description && <p className="card-description">{description}</p>}
{isExternal && <p className="card-url">{new URL(link).hostname.replace('www.', '')}</p>} {/* new URL() throws on relative links, so only show the hostname for external URLs */}
</a>
</div>
);
})}
</div>
<style jsx>{`
.deployment-cards {
margin-top: 24px;
display: grid;
grid-template-columns: 1fr;
gap: 16px;
}
@media (min-width: 768px) {
.deployment-cards {
grid-template-columns: 1fr 1fr;
}
}
.card {
background-color: #222222;
border-radius: 8px;
padding: 16px;
transition: background-color 0.3s;
position: relative;
color: #ffffff;
/* Make the card a flex container */
display: flex;
flex-direction: column;
align-items: center; /* Center horizontally */
justify-content: center; /* Center vertically */
height: 100%; /* Fill the height of the grid cell */
}
.card:hover {
background-color: #333333;
}
.card.external::after {
content: "↗";
position: absolute;
top: 12px; /* Adjusted position */
right: 12px; /* Adjusted position */
color: #ffffff;
font-size: 0.7em; /* Reduced size */
opacity: 0.8; /* Slightly faded */
}
.card-link-wrapper {
display: flex;
flex-direction: column;
align-items:center;
color: inherit;
text-decoration: none;
width:100%; /* Important: make link wrapper take full width */
}
.card-icon-container{
display:flex;
justify-content:center;
width: 100%;
margin-bottom: 8px; /* Space between icon and title */
}
.card-icon {
display: block;
margin: 0 auto;
}
.card-title {
font-weight: 600;
margin-bottom: 4px;
font-size: 16px;
text-align: center;
color: #f0f0f0; /* Lighter title color if needed */
}
.card-description {
margin-bottom: 0;
font-size: 13px;
color: #aaaaaa;
text-align: center;
line-height: 1.4;
}
.card-url {
margin-top: 8px; /*Keep space consistent */
font-size: 11px;
color: #777777;
text-align: center;
font-family: monospace;
}
`}</style>
</>
);
}
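A minimal usage sketch for this component from an MDX page might look like the following; the import path is an assumption about where the file is saved, and the `items` entries are illustrative (the titles match keys in `iconMap`, and the links reuse guides referenced elsewhere in these docs):

```jsx
import { DeploymentCards } from '../components/DeploymentCards'; // assumed location of the file above

<DeploymentCards
  items={[
    {
      title: 'Civo Compute Cloud', // has an entry in iconMap
      link: 'https://dev.to/rutamhere/deploying-docsgpt-on-civo-compute-c',
      description: 'Community guide for deploying DocsGPT on Civo.',
    },
    {
      title: 'DigitalOcean Droplet', // has an entry in iconMap
      link: 'https://dev.to/rutamhere/deploying-docsgpt-on-digitalocean-droplet-50ea',
      description: 'Community guide for deploying DocsGPT on a Droplet.',
    },
  ]}
/>
```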

docs/package-lock.json (generated)
View File

@@ -7,8 +7,8 @@
"license": "MIT",
"dependencies": {
"@vercel/analytics": "^1.1.1",
"docsgpt-react": "^0.4.9",
"next": "^14.2.22",
"docsgpt-react": "^0.5.0",
"next": "^14.2.26",
"nextra": "^2.13.2",
"nextra-theme-docs": "^2.13.2",
"react": "^18.2.0",
@@ -422,6 +422,13 @@
"node": ">=6.9.0"
}
},
"node_modules/@bpmn-io/snarkdown": {
"version": "2.2.0",
"resolved": "https://registry.npmjs.org/@bpmn-io/snarkdown/-/snarkdown-2.2.0.tgz",
"integrity": "sha512-bVD7FIoaBDZeCJkMRgnBPDeptPlto87wt2qaCjf5t8iLaevDmTPaREd6FpBEGsHlUdHFFZWRk4qAoEC5So2M0Q==",
"deprecated": "Package no longer supported. Contact Support at https://www.npmjs.com/support for more info.",
"license": "MIT"
},
"node_modules/@braintree/sanitize-url": {
"version": "6.0.4",
"resolved": "https://registry.npmjs.org/@braintree/sanitize-url/-/sanitize-url-6.0.4.tgz",
@@ -931,17 +938,19 @@
}
},
"node_modules/@next/env": {
"version": "14.2.22",
"resolved": "https://registry.npmjs.org/@next/env/-/env-14.2.22.tgz",
"integrity": "sha512-EQ6y1QeNQglNmNIXvwP/Bb+lf7n9WtgcWvtoFsHquVLCJUuxRs+6SfZ5EK0/EqkkLex4RrDySvKgKNN7PXip7Q=="
"version": "14.2.26",
"resolved": "https://registry.npmjs.org/@next/env/-/env-14.2.26.tgz",
"integrity": "sha512-vO//GJ/YBco+H7xdQhzJxF7ub3SUwft76jwaeOyVVQFHCi5DCnkP16WHB+JBylo4vOKPoZBlR94Z8xBxNBdNJA==",
"license": "MIT"
},
"node_modules/@next/swc-darwin-arm64": {
"version": "14.2.22",
"resolved": "https://registry.npmjs.org/@next/swc-darwin-arm64/-/swc-darwin-arm64-14.2.22.tgz",
"integrity": "sha512-HUaLiehovgnqY4TMBZJ3pDaOsTE1spIXeR10pWgdQVPYqDGQmHJBj3h3V6yC0uuo/RoY2GC0YBFRkOX3dI9WVQ==",
"version": "14.2.26",
"resolved": "https://registry.npmjs.org/@next/swc-darwin-arm64/-/swc-darwin-arm64-14.2.26.tgz",
"integrity": "sha512-zDJY8gsKEseGAxG+C2hTMT0w9Nk9N1Sk1qV7vXYz9MEiyRoF5ogQX2+vplyUMIfygnjn9/A04I6yrUTRTuRiyQ==",
"cpu": [
"arm64"
],
"license": "MIT",
"optional": true,
"os": [
"darwin"
@@ -951,12 +960,13 @@
}
},
"node_modules/@next/swc-darwin-x64": {
"version": "14.2.22",
"resolved": "https://registry.npmjs.org/@next/swc-darwin-x64/-/swc-darwin-x64-14.2.22.tgz",
"integrity": "sha512-ApVDANousaAGrosWvxoGdLT0uvLBUC+srqOcpXuyfglA40cP2LBFaGmBjhgpxYk5z4xmunzqQvcIgXawTzo2uQ==",
"version": "14.2.26",
"resolved": "https://registry.npmjs.org/@next/swc-darwin-x64/-/swc-darwin-x64-14.2.26.tgz",
"integrity": "sha512-U0adH5ryLfmTDkahLwG9sUQG2L0a9rYux8crQeC92rPhi3jGQEY47nByQHrVrt3prZigadwj/2HZ1LUUimuSbg==",
"cpu": [
"x64"
],
"license": "MIT",
"optional": true,
"os": [
"darwin"
@@ -966,12 +976,13 @@
}
},
"node_modules/@next/swc-linux-arm64-gnu": {
"version": "14.2.22",
"resolved": "https://registry.npmjs.org/@next/swc-linux-arm64-gnu/-/swc-linux-arm64-gnu-14.2.22.tgz",
"integrity": "sha512-3O2J99Bk9aM+d4CGn9eEayJXHuH9QLx0BctvWyuUGtJ3/mH6lkfAPRI4FidmHMBQBB4UcvLMfNf8vF0NZT7iKw==",
"version": "14.2.26",
"resolved": "https://registry.npmjs.org/@next/swc-linux-arm64-gnu/-/swc-linux-arm64-gnu-14.2.26.tgz",
"integrity": "sha512-SINMl1I7UhfHGM7SoRiw0AbwnLEMUnJ/3XXVmhyptzriHbWvPPbbm0OEVG24uUKhuS1t0nvN/DBvm5kz6ZIqpg==",
"cpu": [
"arm64"
],
"license": "MIT",
"optional": true,
"os": [
"linux"
@@ -981,12 +992,13 @@
}
},
"node_modules/@next/swc-linux-arm64-musl": {
"version": "14.2.22",
"resolved": "https://registry.npmjs.org/@next/swc-linux-arm64-musl/-/swc-linux-arm64-musl-14.2.22.tgz",
"integrity": "sha512-H/hqfRz75yy60y5Eg7DxYfbmHMjv60Dsa6IWHzpJSz4MRkZNy5eDnEW9wyts9bkxwbOVZNPHeb3NkqanP+nGPg==",
"version": "14.2.26",
"resolved": "https://registry.npmjs.org/@next/swc-linux-arm64-musl/-/swc-linux-arm64-musl-14.2.26.tgz",
"integrity": "sha512-s6JaezoyJK2DxrwHWxLWtJKlqKqTdi/zaYigDXUJ/gmx/72CrzdVZfMvUc6VqnZ7YEvRijvYo+0o4Z9DencduA==",
"cpu": [
"arm64"
],
"license": "MIT",
"optional": true,
"os": [
"linux"
@@ -996,12 +1008,13 @@
}
},
"node_modules/@next/swc-linux-x64-gnu": {
"version": "14.2.22",
"resolved": "https://registry.npmjs.org/@next/swc-linux-x64-gnu/-/swc-linux-x64-gnu-14.2.22.tgz",
"integrity": "sha512-LckLwlCLcGR1hlI5eiJymR8zSHPsuruuwaZ3H2uudr25+Dpzo6cRFjp/3OR5UYJt8LSwlXv9mmY4oI2QynwpqQ==",
"version": "14.2.26",
"resolved": "https://registry.npmjs.org/@next/swc-linux-x64-gnu/-/swc-linux-x64-gnu-14.2.26.tgz",
"integrity": "sha512-FEXeUQi8/pLr/XI0hKbe0tgbLmHFRhgXOUiPScz2hk0hSmbGiU8aUqVslj/6C6KA38RzXnWoJXo4FMo6aBxjzg==",
"cpu": [
"x64"
],
"license": "MIT",
"optional": true,
"os": [
"linux"
@@ -1011,12 +1024,13 @@
}
},
"node_modules/@next/swc-linux-x64-musl": {
"version": "14.2.22",
"resolved": "https://registry.npmjs.org/@next/swc-linux-x64-musl/-/swc-linux-x64-musl-14.2.22.tgz",
"integrity": "sha512-qGUutzmh0PoFU0fCSu0XYpOfT7ydBZgDfcETIeft46abPqP+dmePhwRGLhFKwZWxNWQCPprH26TjaTxM0Nv8mw==",
"version": "14.2.26",
"resolved": "https://registry.npmjs.org/@next/swc-linux-x64-musl/-/swc-linux-x64-musl-14.2.26.tgz",
"integrity": "sha512-BUsomaO4d2DuXhXhgQCVt2jjX4B4/Thts8nDoIruEJkhE5ifeQFtvW5c9JkdOtYvE5p2G0hcwQ0UbRaQmQwaVg==",
"cpu": [
"x64"
],
"license": "MIT",
"optional": true,
"os": [
"linux"
@@ -1026,12 +1040,13 @@
}
},
"node_modules/@next/swc-win32-arm64-msvc": {
"version": "14.2.22",
"resolved": "https://registry.npmjs.org/@next/swc-win32-arm64-msvc/-/swc-win32-arm64-msvc-14.2.22.tgz",
"integrity": "sha512-K6MwucMWmIvMb9GlvT0haYsfIPxfQD8yXqxwFy4uLFMeXIb2TcVYQimxkaFZv86I7sn1NOZnpOaVk5eaxThGIw==",
"version": "14.2.26",
"resolved": "https://registry.npmjs.org/@next/swc-win32-arm64-msvc/-/swc-win32-arm64-msvc-14.2.26.tgz",
"integrity": "sha512-5auwsMVzT7wbB2CZXQxDctpWbdEnEW/e66DyXO1DcgHxIyhP06awu+rHKshZE+lPLIGiwtjo7bsyeuubewwxMw==",
"cpu": [
"arm64"
],
"license": "MIT",
"optional": true,
"os": [
"win32"
@@ -1041,12 +1056,13 @@
}
},
"node_modules/@next/swc-win32-ia32-msvc": {
"version": "14.2.22",
"resolved": "https://registry.npmjs.org/@next/swc-win32-ia32-msvc/-/swc-win32-ia32-msvc-14.2.22.tgz",
"integrity": "sha512-5IhDDTPEbzPR31ZzqHe90LnNe7BlJUZvC4sA1thPJV6oN5WmtWjZ0bOYfNsyZx00FJt7gggNs6SrsX0UEIcIpA==",
"version": "14.2.26",
"resolved": "https://registry.npmjs.org/@next/swc-win32-ia32-msvc/-/swc-win32-ia32-msvc-14.2.26.tgz",
"integrity": "sha512-GQWg/Vbz9zUGi9X80lOeGsz1rMH/MtFO/XqigDznhhhTfDlDoynCM6982mPCbSlxJ/aveZcKtTlwfAjwhyxDpg==",
"cpu": [
"ia32"
],
"license": "MIT",
"optional": true,
"os": [
"win32"
@@ -1056,12 +1072,13 @@
}
},
"node_modules/@next/swc-win32-x64-msvc": {
"version": "14.2.22",
"resolved": "https://registry.npmjs.org/@next/swc-win32-x64-msvc/-/swc-win32-x64-msvc-14.2.22.tgz",
"integrity": "sha512-nvRaB1PyG4scn9/qNzlkwEwLzuoPH3Gjp7Q/pLuwUgOTt1oPMlnCI3A3rgkt+eZnU71emOiEv/mR201HoURPGg==",
"version": "14.2.26",
"resolved": "https://registry.npmjs.org/@next/swc-win32-x64-msvc/-/swc-win32-x64-msvc-14.2.26.tgz",
"integrity": "sha512-2rdB3T1/Gp7bv1eQTTm9d1Y1sv9UuJ2LAwOE0Pe2prHKe32UNscj7YS13fRB37d0GAiGNR+Y7ZcW8YjDI8Ns0w==",
"cpu": [
"x64"
],
"license": "MIT",
"optional": true,
"os": [
"win32"
@@ -1171,27 +1188,28 @@
}
},
"node_modules/@parcel/core": {
"version": "2.13.2",
"resolved": "https://registry.npmjs.org/@parcel/core/-/core-2.13.2.tgz",
"integrity": "sha512-1zC5Au4z9or5XyP6ipfvJqHktuB0jD7WuxMcV1CWAZGARHKylLe+0ccl+Wx7HN5O+xAvfCDtTlKrATY8qyrIyw==",
"version": "2.14.2",
"resolved": "https://registry.npmjs.org/@parcel/core/-/core-2.14.2.tgz",
"integrity": "sha512-VIgo7dgwY9nHJTnxaa86xxQmUX6K5pEAxfsjkFxOhrviTG+KAI5bOHQAbsY61ytTO/x+uSDEnMoIkR8TAIVc3Q==",
"license": "MIT",
"peer": true,
"dependencies": {
"@mischnic/json-sourcemap": "^0.1.0",
"@parcel/cache": "2.13.2",
"@parcel/diagnostic": "2.13.2",
"@parcel/events": "2.13.2",
"@parcel/feature-flags": "2.13.2",
"@parcel/fs": "2.13.2",
"@parcel/graph": "3.3.2",
"@parcel/logger": "2.13.2",
"@parcel/package-manager": "2.13.2",
"@parcel/plugin": "2.13.2",
"@parcel/profiler": "2.13.2",
"@parcel/rust": "2.13.2",
"@parcel/cache": "2.14.2",
"@parcel/diagnostic": "2.14.2",
"@parcel/events": "2.14.2",
"@parcel/feature-flags": "2.14.2",
"@parcel/fs": "2.14.2",
"@parcel/graph": "3.4.2",
"@parcel/logger": "2.14.2",
"@parcel/package-manager": "2.14.2",
"@parcel/plugin": "2.14.2",
"@parcel/profiler": "2.14.2",
"@parcel/rust": "2.14.2",
"@parcel/source-map": "^2.1.1",
"@parcel/types": "2.13.2",
"@parcel/utils": "2.13.2",
"@parcel/workers": "2.13.2",
"@parcel/types": "2.14.2",
"@parcel/utils": "2.14.2",
"@parcel/workers": "2.14.2",
"base-x": "^3.0.8",
"browserslist": "^4.6.6",
"clone": "^2.1.1",
@@ -1211,14 +1229,15 @@
}
},
"node_modules/@parcel/core/node_modules/@parcel/cache": {
"version": "2.13.2",
"resolved": "https://registry.npmjs.org/@parcel/cache/-/cache-2.13.2.tgz",
"integrity": "sha512-Y0nWlCMWDSp1lxiPI5zCWTGD0InnVZ+IfqeyLWmROAqValYyd0QZCvnSljKJ144jWTr0jXxDveir+DVF8sAYaA==",
"version": "2.14.2",
"resolved": "https://registry.npmjs.org/@parcel/cache/-/cache-2.14.2.tgz",
"integrity": "sha512-C3uGFSjDGgVbNMxi9WGTavGPQgyv+QP9bBD+kw1LvgoWIvjw21NqDD3PpJS5iDJusEr8b1YOt/j2ejWJmSq2FQ==",
"license": "MIT",
"peer": true,
"dependencies": {
"@parcel/fs": "2.13.2",
"@parcel/logger": "2.13.2",
"@parcel/utils": "2.13.2",
"@parcel/fs": "2.14.2",
"@parcel/logger": "2.14.2",
"@parcel/utils": "2.14.2",
"lmdb": "2.8.5"
},
"engines": {
@@ -1229,13 +1248,14 @@
"url": "https://opencollective.com/parcel"
},
"peerDependencies": {
"@parcel/core": "^2.13.2"
"@parcel/core": "^2.14.2"
}
},
"node_modules/@parcel/core/node_modules/@parcel/codeframe": {
"version": "2.13.2",
"resolved": "https://registry.npmjs.org/@parcel/codeframe/-/codeframe-2.13.2.tgz",
"integrity": "sha512-qFMiS14orb6QSQj5/J/QN+gJElUfedVAKBTNkp9QB4i8ObdLHDqHRUzFb55ZQJI3G4vsxOOWAOUXGirtLwrxGQ==",
"version": "2.14.2",
"resolved": "https://registry.npmjs.org/@parcel/codeframe/-/codeframe-2.14.2.tgz",
"integrity": "sha512-sjXiM+XUWiq7OOeTDsWUaNvKkrcCA89w0lvLFFXbtxxDXVBnM8SERP8nosA95izKWEy3fA6LopCuPbfz9v7FmA==",
"license": "MIT",
"peer": true,
"dependencies": {
"chalk": "^4.1.2"
@@ -1249,9 +1269,10 @@
}
},
"node_modules/@parcel/core/node_modules/@parcel/diagnostic": {
"version": "2.13.2",
"resolved": "https://registry.npmjs.org/@parcel/diagnostic/-/diagnostic-2.13.2.tgz",
"integrity": "sha512-6Au0JEJ5SY2gYrY0/m0i0sTuqTvK0k2E9azhBJR+zzCREbUxLiDdLZ+vXAfLW7t/kPAcWtdNU0Bj7pnZcMiMXg==",
"version": "2.14.2",
"resolved": "https://registry.npmjs.org/@parcel/diagnostic/-/diagnostic-2.14.2.tgz",
"integrity": "sha512-xoq9gf08Pv4q3zJUJqG9zsA1IBIr328HsEJpRC7b7zDd8j6DVJjrWTYDWnBybHAMXQ34x1qjsTDyvJcGA7uyWA==",
"license": "MIT",
"peer": true,
"dependencies": {
"@mischnic/json-sourcemap": "^0.1.0",
@@ -1266,9 +1287,10 @@
}
},
"node_modules/@parcel/core/node_modules/@parcel/events": {
"version": "2.13.2",
"resolved": "https://registry.npmjs.org/@parcel/events/-/events-2.13.2.tgz",
"integrity": "sha512-BVB9hW1RGh/tMaDHfpa+uIgz5PMULorCnjmWr/KvrlhdUSUQoaPYfRcTDYrKhoKuNIKsWSnTGvXrxE53L5qo0w==",
"version": "2.14.2",
"resolved": "https://registry.npmjs.org/@parcel/events/-/events-2.14.2.tgz",
"integrity": "sha512-ZwHOicEfnr0DVlA2+I9HN/wAIOKqpjVe/kRLZfKA3N5R2xLB+Mx3e5zDMSi2kCxkkqW5Yg0qxSI9hy3kTNgM0A==",
"license": "MIT",
"peer": true,
"engines": {
"node": ">= 16.0.0"
@@ -1279,17 +1301,18 @@
}
},
"node_modules/@parcel/core/node_modules/@parcel/fs": {
"version": "2.13.2",
"resolved": "https://registry.npmjs.org/@parcel/fs/-/fs-2.13.2.tgz",
"integrity": "sha512-bdeIMuAXhMnROvqV55JWRUmjD438/T7h3r3NsFnkq+Mp4z2nuAn0STxbqDNxIgTMJHNunSDzncqRNMT7xJCe8A==",
"version": "2.14.2",
"resolved": "https://registry.npmjs.org/@parcel/fs/-/fs-2.14.2.tgz",
"integrity": "sha512-SOKgXMGA4buLs56kEVqymdpysb9Lpmo7QQn9QRcSZYX0u+vkD5JORCK0SiO7etvPWFttXNKjdlYE9IvvrRweaQ==",
"license": "MIT",
"peer": true,
"dependencies": {
"@parcel/feature-flags": "2.13.2",
"@parcel/rust": "2.13.2",
"@parcel/types-internal": "2.13.2",
"@parcel/utils": "2.13.2",
"@parcel/feature-flags": "2.14.2",
"@parcel/rust": "2.14.2",
"@parcel/types-internal": "2.14.2",
"@parcel/utils": "2.14.2",
"@parcel/watcher": "^2.0.7",
"@parcel/workers": "2.13.2"
"@parcel/workers": "2.14.2"
},
"engines": {
"node": ">= 16.0.0"
@@ -1299,17 +1322,18 @@
"url": "https://opencollective.com/parcel"
},
"peerDependencies": {
"@parcel/core": "^2.13.2"
"@parcel/core": "^2.14.2"
}
},
"node_modules/@parcel/core/node_modules/@parcel/logger": {
"version": "2.13.2",
"resolved": "https://registry.npmjs.org/@parcel/logger/-/logger-2.13.2.tgz",
"integrity": "sha512-SFVABAMqaT9jIDn4maPgaQQauPDz8fpoKUGEuLF44Q0aQFbBUy7vX7KYs/EvYSWZo4VyJcUDHvIInBlepA0/ZQ==",
"version": "2.14.2",
"resolved": "https://registry.npmjs.org/@parcel/logger/-/logger-2.14.2.tgz",
"integrity": "sha512-gnG/0J2mj1Ot/1XoKbTh0YdEt+vWnODc022FgG+df5+qBiPwonwsIThSv1feUHX2EqHCEtbpXJ+OzZdzXRH9yA==",
"license": "MIT",
"peer": true,
"dependencies": {
"@parcel/diagnostic": "2.13.2",
"@parcel/events": "2.13.2"
"@parcel/diagnostic": "2.14.2",
"@parcel/events": "2.14.2"
},
"engines": {
"node": ">= 16.0.0"
@@ -1320,9 +1344,10 @@
}
},
"node_modules/@parcel/core/node_modules/@parcel/markdown-ansi": {
"version": "2.13.2",
"resolved": "https://registry.npmjs.org/@parcel/markdown-ansi/-/markdown-ansi-2.13.2.tgz",
"integrity": "sha512-MIEoetfT/snk1GqWzBI3AhifV257i2xke9dvyQl14PPiMl+TlVhwnbQyA09WJBvDor+MuxZypHL7xoFdW8ff3A==",
"version": "2.14.2",
"resolved": "https://registry.npmjs.org/@parcel/markdown-ansi/-/markdown-ansi-2.14.2.tgz",
"integrity": "sha512-9wSiT2C7kW9pcvh2PyiuN/jWvIkDWpZvlGpCGmFU87qYAJbOjsjC7cybqDbqVEMQUplFPs8RL5vcTGU88vrzvA==",
"license": "MIT",
"peer": true,
"dependencies": {
"chalk": "^4.1.2"
@@ -1336,16 +1361,17 @@
}
},
"node_modules/@parcel/core/node_modules/@parcel/node-resolver-core": {
"version": "3.4.2",
"resolved": "https://registry.npmjs.org/@parcel/node-resolver-core/-/node-resolver-core-3.4.2.tgz",
"integrity": "sha512-SwnKLcZRG1VdB5JeM/Ax5VMWWh2QfXufmMQCKKx0/Kk41nUpie+aIZKj3LH6Z/fJsnKig/vXpeWoxGhmG523qg==",
"version": "3.5.2",
"resolved": "https://registry.npmjs.org/@parcel/node-resolver-core/-/node-resolver-core-3.5.2.tgz",
"integrity": "sha512-cUSsfwd+Phatm5oeiJIzev6TojbDGL09/GUzoWKwpx3exT2VqcrFfHEt4GQLBUJsTf/bqk5FzyzK548ue09loQ==",
"license": "MIT",
"peer": true,
"dependencies": {
"@mischnic/json-sourcemap": "^0.1.0",
"@parcel/diagnostic": "2.13.2",
"@parcel/fs": "2.13.2",
"@parcel/rust": "2.13.2",
"@parcel/utils": "2.13.2",
"@parcel/diagnostic": "2.14.2",
"@parcel/fs": "2.14.2",
"@parcel/rust": "2.14.2",
"@parcel/utils": "2.14.2",
"nullthrows": "^1.1.1",
"semver": "^7.5.2"
},
@@ -1358,19 +1384,20 @@
}
},
"node_modules/@parcel/core/node_modules/@parcel/package-manager": {
"version": "2.13.2",
"resolved": "https://registry.npmjs.org/@parcel/package-manager/-/package-manager-2.13.2.tgz",
"integrity": "sha512-6HjfbdJUjHyNKzYB7GSYnOCtLwqCGW7yT95GlnnTKyFffvXYsqvBSyepMuPRlbX0mFUm4S9l2DH3OVZrk108AA==",
"version": "2.14.2",
"resolved": "https://registry.npmjs.org/@parcel/package-manager/-/package-manager-2.14.2.tgz",
"integrity": "sha512-vagjbhRxYAfU1dQP5qAbBA9KpYROKDtqsa9YV5+5ni7HTkQR3PdSkjd9dIV/7UnsN9kiYHu/jhxTFM0cxjC3pw==",
"license": "MIT",
"peer": true,
"dependencies": {
"@parcel/diagnostic": "2.13.2",
"@parcel/fs": "2.13.2",
"@parcel/logger": "2.13.2",
"@parcel/node-resolver-core": "3.4.2",
"@parcel/types": "2.13.2",
"@parcel/utils": "2.13.2",
"@parcel/workers": "2.13.2",
"@swc/core": "^1.7.26",
"@parcel/diagnostic": "2.14.2",
"@parcel/fs": "2.14.2",
"@parcel/logger": "2.14.2",
"@parcel/node-resolver-core": "3.5.2",
"@parcel/types": "2.14.2",
"@parcel/utils": "2.14.2",
"@parcel/workers": "2.14.2",
"@swc/core": "^1.11.5",
"semver": "^7.5.2"
},
"engines": {
@@ -1381,16 +1408,17 @@
"url": "https://opencollective.com/parcel"
},
"peerDependencies": {
"@parcel/core": "^2.13.2"
"@parcel/core": "^2.14.2"
}
},
"node_modules/@parcel/core/node_modules/@parcel/plugin": {
"version": "2.13.2",
"resolved": "https://registry.npmjs.org/@parcel/plugin/-/plugin-2.13.2.tgz",
"integrity": "sha512-Q+RIENS1B185yLPhrGdzBK1oJrZmh/RXrYMnzJs78Tog8SpihjeNBNR6z4PT85o2F+Gy2y1S9A26fpiGq161qQ==",
"version": "2.14.2",
"resolved": "https://registry.npmjs.org/@parcel/plugin/-/plugin-2.14.2.tgz",
"integrity": "sha512-iUL6eJFcJkMmvTpaav+SA4bR+MEG/d5+QO6MQFsl+jkGEohDCbJlm2kCjnhjY9Gu2UAl2GMC0k+DFb5R2v9HYg==",
"license": "MIT",
"peer": true,
"dependencies": {
"@parcel/types": "2.13.2"
"@parcel/types": "2.14.2"
},
"engines": {
"node": ">= 16.0.0"
@@ -1401,14 +1429,15 @@
}
},
"node_modules/@parcel/core/node_modules/@parcel/profiler": {
"version": "2.13.2",
"resolved": "https://registry.npmjs.org/@parcel/profiler/-/profiler-2.13.2.tgz",
"integrity": "sha512-fur6Oq2HkX6AiM8rtqmDvldH5JWz0sqXA1ylz8cE3XOiDZIuvCulZmQ+hH+4odaNH6QocI1MwfV+GDh3HlQoCA==",
"version": "2.14.2",
"resolved": "https://registry.npmjs.org/@parcel/profiler/-/profiler-2.14.2.tgz",
"integrity": "sha512-kNgEz7FDC1hb7gpA/Z3s9vp6NiAJ5tEvxGhRAV64CDx98Jd/FES5MJLZ7A0vDWMHqDChgtprCkh0u3KnWrXRqg==",
"license": "MIT",
"peer": true,
"dependencies": {
"@parcel/diagnostic": "2.13.2",
"@parcel/events": "2.13.2",
"@parcel/types-internal": "2.13.2",
"@parcel/diagnostic": "2.14.2",
"@parcel/events": "2.14.2",
"@parcel/types-internal": "2.14.2",
"chrome-trace-event": "^1.0.2"
},
"engines": {
@@ -1420,9 +1449,10 @@
}
},
"node_modules/@parcel/core/node_modules/@parcel/rust": {
"version": "2.13.2",
"resolved": "https://registry.npmjs.org/@parcel/rust/-/rust-2.13.2.tgz",
"integrity": "sha512-XFIewSwxkrDYOnnSP/XZ1LDLdXTs7L9CjQUWtl46Vir5Pq/rinemwLJeKGIwKLHy7fhUZQjYxquH6fBL+AY8DA==",
"version": "2.14.2",
"resolved": "https://registry.npmjs.org/@parcel/rust/-/rust-2.14.2.tgz",
"integrity": "sha512-NPXebSTdhLttERkWgJZf/QRIIvQ8DpGby84T6FGM0pfFzocnHmuL/36J5xjquEncbSjFEVUBGomCpJNQLBwK9g==",
"license": "MIT",
"peer": true,
"engines": {
"node": ">= 16.0.0"
@@ -1430,29 +1460,39 @@
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/parcel"
},
"peerDependencies": {
"napi-wasm": "^1.1.2"
},
"peerDependenciesMeta": {
"napi-wasm": {
"optional": true
}
}
},
"node_modules/@parcel/core/node_modules/@parcel/types": {
"version": "2.13.2",
"resolved": "https://registry.npmjs.org/@parcel/types/-/types-2.13.2.tgz",
"integrity": "sha512-6ixqjk2pjKELn4sQ/jdvpbCVTeH6xXQTdotkN8Wzk68F2K2MtSPIRAEocumlexScfffbRQplr2MdIf1JJWLogA==",
"version": "2.14.2",
"resolved": "https://registry.npmjs.org/@parcel/types/-/types-2.14.2.tgz",
"integrity": "sha512-15vSnfdjWB3fLkqGGZ0dEZVeHheH4XgtkSBnmwhgLN7LgigK1P9BwwN8/cN/tIKMP+YfcvjVpHCtaeKRGpje9A==",
"license": "MIT",
"peer": true,
"dependencies": {
"@parcel/types-internal": "2.13.2",
"@parcel/workers": "2.13.2"
"@parcel/types-internal": "2.14.2",
"@parcel/workers": "2.14.2"
}
},
"node_modules/@parcel/core/node_modules/@parcel/utils": {
"version": "2.13.2",
"resolved": "https://registry.npmjs.org/@parcel/utils/-/utils-2.13.2.tgz",
"integrity": "sha512-BkFtRo5xenmonwnBy+X4sVbHIRrx+ZHMPpS/6hFqyTvoUUFq2yTFQnfRGVVOOvscVUxpGom+kewnrTG3HHbZoA==",
"version": "2.14.2",
"resolved": "https://registry.npmjs.org/@parcel/utils/-/utils-2.14.2.tgz",
"integrity": "sha512-jgDQrzPOU4IfWnYjRL2zGMbc439334ia1nRa13XcID3+oEp10HWTxw26PGhnYQ02mlOwxg8mtrb5ugZOL+dEIQ==",
"license": "MIT",
"peer": true,
"dependencies": {
"@parcel/codeframe": "2.13.2",
"@parcel/diagnostic": "2.13.2",
"@parcel/logger": "2.13.2",
"@parcel/markdown-ansi": "2.13.2",
"@parcel/rust": "2.13.2",
"@parcel/codeframe": "2.14.2",
"@parcel/diagnostic": "2.14.2",
"@parcel/logger": "2.14.2",
"@parcel/markdown-ansi": "2.14.2",
"@parcel/rust": "2.14.2",
"@parcel/source-map": "^2.1.1",
"chalk": "^4.1.2",
"nullthrows": "^1.1.1"
@@ -1466,16 +1506,17 @@
}
},
"node_modules/@parcel/core/node_modules/@parcel/workers": {
"version": "2.13.2",
"resolved": "https://registry.npmjs.org/@parcel/workers/-/workers-2.13.2.tgz",
"integrity": "sha512-P78BpH0yTT9KK09wgK4eabtlb5OlcWAmZebOToN5UYuwWEylKt0gWZx1+d+LPQupvK84/iZ+AutDScsATjgUMw==",
"version": "2.14.2",
"resolved": "https://registry.npmjs.org/@parcel/workers/-/workers-2.14.2.tgz",
"integrity": "sha512-tbM71fwlmwOL62v+B1cxnte0oS/D9sNVW2CxFZJROpi4jiBJYi2SWCJsQPNVbXYdnU2gzSbQX7ozX0CaE4f9Qw==",
"license": "MIT",
"peer": true,
"dependencies": {
"@parcel/diagnostic": "2.13.2",
"@parcel/logger": "2.13.2",
"@parcel/profiler": "2.13.2",
"@parcel/types-internal": "2.13.2",
"@parcel/utils": "2.13.2",
"@parcel/diagnostic": "2.14.2",
"@parcel/logger": "2.14.2",
"@parcel/profiler": "2.14.2",
"@parcel/types-internal": "2.14.2",
"@parcel/utils": "2.14.2",
"nullthrows": "^1.1.1"
},
"engines": {
@@ -1486,13 +1527,14 @@
"url": "https://opencollective.com/parcel"
},
"peerDependencies": {
"@parcel/core": "^2.13.2"
"@parcel/core": "^2.14.2"
}
},
"node_modules/@parcel/core/node_modules/ansi-styles": {
"version": "4.3.0",
"resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-4.3.0.tgz",
"integrity": "sha512-zbB9rCJAT1rbjiVDb2hqKFHNYLxgtk8NURxZ3IZwD3F6NtxbXZQCnnSi1Lkx+IDohdPlFp222wVALIheZJQSEg==",
"license": "MIT",
"peer": true,
"dependencies": {
"color-convert": "^2.0.1"
@@ -1508,6 +1550,7 @@
"version": "4.1.2",
"resolved": "https://registry.npmjs.org/chalk/-/chalk-4.1.2.tgz",
"integrity": "sha512-oKnbhFyRIXpUuez8iBMmyEa4nbj4IOQyuhc/wy9kY7/WVPcwIO9VA668Pu8RkO7+0G76SLROeyw9CpQ061i4mA==",
"license": "MIT",
"peer": true,
"dependencies": {
"ansi-styles": "^4.1.0",
@@ -1524,6 +1567,7 @@
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/color-convert/-/color-convert-2.0.1.tgz",
"integrity": "sha512-RRECPsj7iu/xb5oKYcsFHSppFNnsj/52OVTRKb4zP5onXwVF3zVmmToNcOfGC+CRDpfK/U584fMg38ZHCaElKQ==",
"license": "MIT",
"peer": true,
"dependencies": {
"color-name": "~1.1.4"
@@ -1536,21 +1580,24 @@
"version": "1.1.4",
"resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.4.tgz",
"integrity": "sha512-dOy+3AuW3a2wNbZHIuMZpTcgjGuLU/uBL/ubcZF9OXbDo8ff4O8yVp5Bf0efS8uEoYo5q4Fx7dY9OgQGXgAsQA==",
"license": "MIT",
"peer": true
},
"node_modules/@parcel/core/node_modules/has-flag": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/has-flag/-/has-flag-4.0.0.tgz",
"integrity": "sha512-EykJT/Q1KjTWctppgIAgfSO0tKVuZUjhgMr17kqTumMl6Afv3EISleU7qZUzoXDFTAHTDC4NOoG/ZxU3EvlMPQ==",
"license": "MIT",
"peer": true,
"engines": {
"node": ">=8"
}
},
"node_modules/@parcel/core/node_modules/semver": {
"version": "7.6.3",
"resolved": "https://registry.npmjs.org/semver/-/semver-7.6.3.tgz",
"integrity": "sha512-oVekP1cKtI+CTDvHWYFUcMtsK/00wmAEfyqKfNdARm8u1wNVhSgaX7A8d4UuIlUI5e84iEwOhs7ZPYRmzU9U6A==",
"version": "7.7.1",
"resolved": "https://registry.npmjs.org/semver/-/semver-7.7.1.tgz",
"integrity": "sha512-hlq8tAfn0m/61p4BVRcPzIGr6LKiMwo4VM6dGi6pt4qcRkmNzTcWq6eCEjEh+qXjkMDvPlOFFSGwQjoEa6gyMA==",
"license": "ISC",
"peer": true,
"bin": {
"semver": "bin/semver.js"
@@ -1563,6 +1610,7 @@
"version": "7.2.0",
"resolved": "https://registry.npmjs.org/supports-color/-/supports-color-7.2.0.tgz",
"integrity": "sha512-qpCAvRl9stuOHveKsn7HncJRvv501qIacKzQlO/+Lwxc9+0q2wLyv4Dfvt80/DPn2pqOBsJdDiogXGR9+OvwRw==",
"license": "MIT",
"peer": true,
"dependencies": {
"has-flag": "^4.0.0"
@@ -1600,9 +1648,10 @@
}
},
"node_modules/@parcel/feature-flags": {
"version": "2.13.2",
"resolved": "https://registry.npmjs.org/@parcel/feature-flags/-/feature-flags-2.13.2.tgz",
"integrity": "sha512-cCwDAKD4Er24EkuQ+loVZXSURpM0gAGRsLJVoBtFiCSbB3nmIJJ6FLRwSBI/5OsOUExiUXDvSpfUCA5ldGTzbw==",
"version": "2.14.2",
"resolved": "https://registry.npmjs.org/@parcel/feature-flags/-/feature-flags-2.14.2.tgz",
"integrity": "sha512-TqurCACfUVoCRNWYSNHdIStc8ibWl+ZHPZWKOpnZSnBOgYf0lppmeq1W/dHTeaBDCB57VZM9d0ucFd0Xd0SZlA==",
"license": "MIT",
"peer": true,
"engines": {
"node": ">= 16.0.0"
@@ -1635,12 +1684,13 @@
}
},
"node_modules/@parcel/graph": {
"version": "3.3.2",
"resolved": "https://registry.npmjs.org/@parcel/graph/-/graph-3.3.2.tgz",
"integrity": "sha512-aAysQLRr8SOonSHWqdKHMJzfcrDFXKK8IYZEurlOzosiSgZXrAK7q8b8JcaJ4r84/jlvQYNYneNZeFQxKjHXkA==",
"version": "3.4.2",
"resolved": "https://registry.npmjs.org/@parcel/graph/-/graph-3.4.2.tgz",
"integrity": "sha512-yMP3Adz/zSkxPtTu9rh8XELkYyJfhysnw8PFA9UJlfvVWZsyir0ZWQC6R6cchlVthpVRAn29VIBOy67vmcgqjg==",
"license": "MIT",
"peer": true,
"dependencies": {
"@parcel/feature-flags": "2.13.2",
"@parcel/feature-flags": "2.14.2",
"nullthrows": "^1.1.1"
},
"engines": {
@@ -2001,21 +2051,23 @@
}
},
"node_modules/@parcel/types-internal": {
"version": "2.13.2",
"resolved": "https://registry.npmjs.org/@parcel/types-internal/-/types-internal-2.13.2.tgz",
"integrity": "sha512-j0zb3WNM8O/+d8CArll7/4w4AyBED3Jbo32/unz89EPVN0VklmgBrRCAI5QXDKuJAGdAZSL5/a8bNYbwl7/Wxw==",
"version": "2.14.2",
"resolved": "https://registry.npmjs.org/@parcel/types-internal/-/types-internal-2.14.2.tgz",
"integrity": "sha512-ylh2LMQtPPhc20RtygT1Qpji6zK4fSdpnokWyImJG6GYLN5tqN7tS0F0o6nPo6/1ll+X11CxTa/MPO6g+NaphQ==",
"license": "MIT",
"peer": true,
"dependencies": {
"@parcel/diagnostic": "2.13.2",
"@parcel/feature-flags": "2.13.2",
"@parcel/diagnostic": "2.14.2",
"@parcel/feature-flags": "2.14.2",
"@parcel/source-map": "^2.1.1",
"utility-types": "^3.10.0"
}
},
"node_modules/@parcel/types-internal/node_modules/@parcel/diagnostic": {
"version": "2.13.2",
"resolved": "https://registry.npmjs.org/@parcel/diagnostic/-/diagnostic-2.13.2.tgz",
"integrity": "sha512-6Au0JEJ5SY2gYrY0/m0i0sTuqTvK0k2E9azhBJR+zzCREbUxLiDdLZ+vXAfLW7t/kPAcWtdNU0Bj7pnZcMiMXg==",
"version": "2.14.2",
"resolved": "https://registry.npmjs.org/@parcel/diagnostic/-/diagnostic-2.14.2.tgz",
"integrity": "sha512-xoq9gf08Pv4q3zJUJqG9zsA1IBIr328HsEJpRC7b7zDd8j6DVJjrWTYDWnBybHAMXQ34x1qjsTDyvJcGA7uyWA==",
"license": "MIT",
"peer": true,
"dependencies": {
"@mischnic/json-sourcemap": "^0.1.0",
@@ -2739,13 +2791,14 @@
}
},
"node_modules/@swc/core": {
"version": "1.10.1",
"resolved": "https://registry.npmjs.org/@swc/core/-/core-1.10.1.tgz",
"integrity": "sha512-rQ4dS6GAdmtzKiCRt3LFVxl37FaY1cgL9kSUTnhQ2xc3fmHOd7jdJK/V4pSZMG1ruGTd0bsi34O2R0Olg9Zo/w==",
"version": "1.11.13",
"resolved": "https://registry.npmjs.org/@swc/core/-/core-1.11.13.tgz",
"integrity": "sha512-9BXdYz12Wl0zWmZ80PvtjBWeg2ncwJ9L5WJzjhN6yUTZWEV/AwAdVdJnIEp4pro3WyKmAaMxcVOSbhuuOZco5g==",
"hasInstallScript": true,
"license": "Apache-2.0",
"dependencies": {
"@swc/counter": "^0.1.3",
"@swc/types": "^0.1.17"
"@swc/types": "^0.1.19"
},
"engines": {
"node": ">=10"
@@ -2755,16 +2808,16 @@
"url": "https://opencollective.com/swc"
},
"optionalDependencies": {
"@swc/core-darwin-arm64": "1.10.1",
"@swc/core-darwin-x64": "1.10.1",
"@swc/core-linux-arm-gnueabihf": "1.10.1",
"@swc/core-linux-arm64-gnu": "1.10.1",
"@swc/core-linux-arm64-musl": "1.10.1",
"@swc/core-linux-x64-gnu": "1.10.1",
"@swc/core-linux-x64-musl": "1.10.1",
"@swc/core-win32-arm64-msvc": "1.10.1",
"@swc/core-win32-ia32-msvc": "1.10.1",
"@swc/core-win32-x64-msvc": "1.10.1"
"@swc/core-darwin-arm64": "1.11.13",
"@swc/core-darwin-x64": "1.11.13",
"@swc/core-linux-arm-gnueabihf": "1.11.13",
"@swc/core-linux-arm64-gnu": "1.11.13",
"@swc/core-linux-arm64-musl": "1.11.13",
"@swc/core-linux-x64-gnu": "1.11.13",
"@swc/core-linux-x64-musl": "1.11.13",
"@swc/core-win32-arm64-msvc": "1.11.13",
"@swc/core-win32-ia32-msvc": "1.11.13",
"@swc/core-win32-x64-msvc": "1.11.13"
},
"peerDependencies": {
"@swc/helpers": "*"
@@ -2776,12 +2829,13 @@
}
},
"node_modules/@swc/core-darwin-arm64": {
"version": "1.10.1",
"resolved": "https://registry.npmjs.org/@swc/core-darwin-arm64/-/core-darwin-arm64-1.10.1.tgz",
"integrity": "sha512-NyELPp8EsVZtxH/mEqvzSyWpfPJ1lugpTQcSlMduZLj1EASLO4sC8wt8hmL1aizRlsbjCX+r0PyL+l0xQ64/6Q==",
"version": "1.11.13",
"resolved": "https://registry.npmjs.org/@swc/core-darwin-arm64/-/core-darwin-arm64-1.11.13.tgz",
"integrity": "sha512-loSERhLaQ9XDS+5Kdx8cLe2tM1G0HLit8MfehipAcsdctpo79zrRlkW34elOf3tQoVPKUItV0b/rTuhjj8NtHg==",
"cpu": [
"arm64"
],
"license": "Apache-2.0 AND MIT",
"optional": true,
"os": [
"darwin"
@@ -2791,12 +2845,13 @@
}
},
"node_modules/@swc/core-darwin-x64": {
"version": "1.10.1",
"resolved": "https://registry.npmjs.org/@swc/core-darwin-x64/-/core-darwin-x64-1.10.1.tgz",
"integrity": "sha512-L4BNt1fdQ5ZZhAk5qoDfUnXRabDOXKnXBxMDJ+PWLSxOGBbWE6aJTnu4zbGjJvtot0KM46m2LPAPY8ttknqaZA==",
"version": "1.11.13",
"resolved": "https://registry.npmjs.org/@swc/core-darwin-x64/-/core-darwin-x64-1.11.13.tgz",
"integrity": "sha512-uSA4UwgsDCIysUPfPS8OrQTH2h9spO7IYFd+1NB6dJlVGUuR6jLKuMBOP1IeLeax4cGHayvkcwSJ3OvxHwgcZQ==",
"cpu": [
"x64"
],
"license": "Apache-2.0 AND MIT",
"optional": true,
"os": [
"darwin"
@@ -2806,12 +2861,13 @@
}
},
"node_modules/@swc/core-linux-arm-gnueabihf": {
"version": "1.10.1",
"resolved": "https://registry.npmjs.org/@swc/core-linux-arm-gnueabihf/-/core-linux-arm-gnueabihf-1.10.1.tgz",
"integrity": "sha512-Y1u9OqCHgvVp2tYQAJ7hcU9qO5brDMIrA5R31rwWQIAKDkJKtv3IlTHF0hrbWk1wPR0ZdngkQSJZple7G+Grvw==",
"version": "1.11.13",
"resolved": "https://registry.npmjs.org/@swc/core-linux-arm-gnueabihf/-/core-linux-arm-gnueabihf-1.11.13.tgz",
"integrity": "sha512-boVtyJzS8g30iQfe8Q46W5QE/cmhKRln/7NMz/5sBP/am2Lce9NL0d05NnFwEWJp1e2AMGHFOdRr3Xg1cDiPKw==",
"cpu": [
"arm"
],
"license": "Apache-2.0",
"optional": true,
"os": [
"linux"
@@ -2821,12 +2877,13 @@
}
},
"node_modules/@swc/core-linux-arm64-gnu": {
"version": "1.10.1",
"resolved": "https://registry.npmjs.org/@swc/core-linux-arm64-gnu/-/core-linux-arm64-gnu-1.10.1.tgz",
"integrity": "sha512-tNQHO/UKdtnqjc7o04iRXng1wTUXPgVd8Y6LI4qIbHVoVPwksZydISjMcilKNLKIwOoUQAkxyJ16SlOAeADzhQ==",
"version": "1.11.13",
"resolved": "https://registry.npmjs.org/@swc/core-linux-arm64-gnu/-/core-linux-arm64-gnu-1.11.13.tgz",
"integrity": "sha512-+IK0jZ84zHUaKtwpV+T+wT0qIUBnK9v2xXD03vARubKF+eUqCsIvcVHXmLpFuap62dClMrhCiwW10X3RbXNlHw==",
"cpu": [
"arm64"
],
"license": "Apache-2.0 AND MIT",
"optional": true,
"os": [
"linux"
@@ -2836,12 +2893,13 @@
}
},
"node_modules/@swc/core-linux-arm64-musl": {
"version": "1.10.1",
"resolved": "https://registry.npmjs.org/@swc/core-linux-arm64-musl/-/core-linux-arm64-musl-1.10.1.tgz",
"integrity": "sha512-x0L2Pd9weQ6n8dI1z1Isq00VHFvpBClwQJvrt3NHzmR+1wCT/gcYl1tp9P5xHh3ldM8Cn4UjWCw+7PaUgg8FcQ==",
"version": "1.11.13",
"resolved": "https://registry.npmjs.org/@swc/core-linux-arm64-musl/-/core-linux-arm64-musl-1.11.13.tgz",
"integrity": "sha512-+ukuB8RHD5BHPCUjQwuLP98z+VRfu+NkKQVBcLJGgp0/+w7y0IkaxLY/aKmrAS5ofCNEGqKL+AOVyRpX1aw+XA==",
"cpu": [
"arm64"
],
"license": "Apache-2.0 AND MIT",
"optional": true,
"os": [
"linux"
@@ -2851,12 +2909,13 @@
}
},
"node_modules/@swc/core-linux-x64-gnu": {
"version": "1.10.1",
"resolved": "https://registry.npmjs.org/@swc/core-linux-x64-gnu/-/core-linux-x64-gnu-1.10.1.tgz",
"integrity": "sha512-yyYEwQcObV3AUsC79rSzN9z6kiWxKAVJ6Ntwq2N9YoZqSPYph+4/Am5fM1xEQYf/kb99csj0FgOelomJSobxQA==",
"version": "1.11.13",
"resolved": "https://registry.npmjs.org/@swc/core-linux-x64-gnu/-/core-linux-x64-gnu-1.11.13.tgz",
"integrity": "sha512-q9H3WI3U3dfJ34tdv60zc8oTuWvSd5fOxytyAO9Pc5M82Hic3jjWaf2xBekUg07ubnMZpyfnv+MlD+EbUI3Llw==",
"cpu": [
"x64"
],
"license": "Apache-2.0 AND MIT",
"optional": true,
"os": [
"linux"
@@ -2866,12 +2925,13 @@
}
},
"node_modules/@swc/core-linux-x64-musl": {
"version": "1.10.1",
"resolved": "https://registry.npmjs.org/@swc/core-linux-x64-musl/-/core-linux-x64-musl-1.10.1.tgz",
"integrity": "sha512-tcaS43Ydd7Fk7sW5ROpaf2Kq1zR+sI5K0RM+0qYLYYurvsJruj3GhBCaiN3gkzd8m/8wkqNqtVklWaQYSDsyqA==",
"version": "1.11.13",
"resolved": "https://registry.npmjs.org/@swc/core-linux-x64-musl/-/core-linux-x64-musl-1.11.13.tgz",
"integrity": "sha512-9aaZnnq2pLdTbAzTSzy/q8dr7Woy3aYIcQISmw1+Q2/xHJg5y80ZzbWSWKYca/hKonDMjIbGR6dp299I5J0aeA==",
"cpu": [
"x64"
],
"license": "Apache-2.0 AND MIT",
"optional": true,
"os": [
"linux"
@@ -2881,12 +2941,13 @@
}
},
"node_modules/@swc/core-win32-arm64-msvc": {
"version": "1.10.1",
"resolved": "https://registry.npmjs.org/@swc/core-win32-arm64-msvc/-/core-win32-arm64-msvc-1.10.1.tgz",
"integrity": "sha512-D3Qo1voA7AkbOzQ2UGuKNHfYGKL6eejN8VWOoQYtGHHQi1p5KK/Q7V1ku55oxXBsj79Ny5FRMqiRJpVGad7bjQ==",
"version": "1.11.13",
"resolved": "https://registry.npmjs.org/@swc/core-win32-arm64-msvc/-/core-win32-arm64-msvc-1.11.13.tgz",
"integrity": "sha512-n3QZmDewkHANcoHvtwvA6yJbmS4XJf0MBMmwLZoKDZ2dOnC9D/jHiXw7JOohEuzYcpLoL5tgbqmjxa3XNo9Oow==",
"cpu": [
"arm64"
],
"license": "Apache-2.0 AND MIT",
"optional": true,
"os": [
"win32"
@@ -2896,12 +2957,13 @@
}
},
"node_modules/@swc/core-win32-ia32-msvc": {
"version": "1.10.1",
"resolved": "https://registry.npmjs.org/@swc/core-win32-ia32-msvc/-/core-win32-ia32-msvc-1.10.1.tgz",
"integrity": "sha512-WalYdFoU3454Og+sDKHM1MrjvxUGwA2oralknXkXL8S0I/8RkWZOB++p3pLaGbTvOO++T+6znFbQdR8KRaa7DA==",
"version": "1.11.13",
"resolved": "https://registry.npmjs.org/@swc/core-win32-ia32-msvc/-/core-win32-ia32-msvc-1.11.13.tgz",
"integrity": "sha512-wM+Nt4lc6YSJFthCx3W2dz0EwFNf++j0/2TQ0Js9QLJuIxUQAgukhNDVCDdq8TNcT0zuA399ALYbvj5lfIqG6g==",
"cpu": [
"ia32"
],
"license": "Apache-2.0 AND MIT",
"optional": true,
"os": [
"win32"
@@ -2911,12 +2973,13 @@
}
},
"node_modules/@swc/core-win32-x64-msvc": {
"version": "1.10.1",
"resolved": "https://registry.npmjs.org/@swc/core-win32-x64-msvc/-/core-win32-x64-msvc-1.10.1.tgz",
"integrity": "sha512-JWobfQDbTnoqaIwPKQ3DVSywihVXlQMbDuwik/dDWlj33A8oEHcjPOGs4OqcA3RHv24i+lfCQpM3Mn4FAMfacA==",
"version": "1.11.13",
"resolved": "https://registry.npmjs.org/@swc/core-win32-x64-msvc/-/core-win32-x64-msvc-1.11.13.tgz",
"integrity": "sha512-+X5/uW3s1L5gK7wAo0E27YaAoidJDo51dnfKSfU7gF3mlEUuWH8H1bAy5OTt2mU4eXtfsdUMEVXSwhDlLtQkuA==",
"cpu": [
"x64"
],
"license": "Apache-2.0 AND MIT",
"optional": true,
"os": [
"win32"
@@ -2940,9 +3003,10 @@
}
},
"node_modules/@swc/types": {
"version": "0.1.17",
"resolved": "https://registry.npmjs.org/@swc/types/-/types-0.1.17.tgz",
"integrity": "sha512-V5gRru+aD8YVyCOMAjMpWR1Ui577DD5KSJsHP8RAxopAH22jFz6GZd/qxqjO6MJHQhcsjvjOFXyDhyLQUnMveQ==",
"version": "0.1.20",
"resolved": "https://registry.npmjs.org/@swc/types/-/types-0.1.20.tgz",
"integrity": "sha512-/rlIpxwKrhz4BIplXf6nsEHtqlhzuNN34/k3kMAXH4/lvVoA3cdq+60aqVNnyvw2uITEaCi0WV3pxBe4dQqoXQ==",
"license": "Apache-2.0",
"dependencies": {
"@swc/counter": "^0.1.3"
}
@@ -3187,9 +3251,10 @@
}
},
"node_modules/base-x": {
"version": "3.0.10",
"resolved": "https://registry.npmjs.org/base-x/-/base-x-3.0.10.tgz",
"integrity": "sha512-7d0s06rR9rYaIWHkpfLIFICM/tkSVdoPC9qYAQRpxn9DdKNWNsKC0uk++akckyLq16Tx2WIinnZ6WRriAt6njQ==",
"version": "3.0.11",
"resolved": "https://registry.npmjs.org/base-x/-/base-x-3.0.11.tgz",
"integrity": "sha512-xz7wQ8xDhdyP7tQxwdteLYeFfS68tSMNCZ/Y37WJ4bhGfKPpqEIlmIyueQHqOyoPhE6xNUqjzRr8ra0eF9VRvA==",
"license": "MIT",
"peer": true,
"dependencies": {
"safe-buffer": "^5.0.1"
@@ -3413,6 +3478,7 @@
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/clone/-/clone-2.1.2.tgz",
"integrity": "sha512-3Pe/CF1Nn94hyhIYpjtiLhdCoEoz0DqQ+988E9gmeEdQZlojxnOb74wctFyuwWQHzqyf9X7C7MG8juUpqBJT8w==",
"license": "MIT",
"peer": true,
"engines": {
"node": ">=0.8"
@@ -4057,11 +4123,13 @@
}
},
"node_modules/docsgpt-react": {
"version": "0.4.9",
"resolved": "https://registry.npmjs.org/docsgpt-react/-/docsgpt-react-0.4.9.tgz",
"integrity": "sha512-mGGbd4IGVHrQVVdgoej991Vpl/hYkTuKz5Ax95hvqSbWDZELZnEx2/AZajAII5AayUZKWYaEFRluewUiGJVSbA==",
"version": "0.5.0",
"resolved": "https://registry.npmjs.org/docsgpt-react/-/docsgpt-react-0.5.0.tgz",
"integrity": "sha512-5tDfFxBHG9432URaE8rQaYmBE8tbEUg74L85ykg/WbcoL84U3ixrt0tG7T0SfoTfxQT46H3afliYdv1rDmFGLw==",
"license": "Apache-2.0",
"dependencies": {
"@babel/plugin-transform-flow-strip-types": "^7.23.3",
"@bpmn-io/snarkdown": "^2.2.0",
"@parcel/resolver-glob": "^2.12.0",
"@parcel/transformer-svg-react": "^2.12.0",
"@parcel/transformer-typescript-tsc": "^2.12.0",
@@ -4149,6 +4217,7 @@
"version": "16.4.7",
"resolved": "https://registry.npmjs.org/dotenv/-/dotenv-16.4.7.tgz",
"integrity": "sha512-47qPchRCykZC03FhkYAhrvwU4xDBFIj1QPqaarj6mdM/hgUzfPHcpkHJOn3mJAufFeeAxAzeGsr5X0M4k6fLZQ==",
"license": "BSD-2-Clause",
"peer": true,
"engines": {
"node": ">=12"
@@ -4161,6 +4230,7 @@
"version": "11.0.7",
"resolved": "https://registry.npmjs.org/dotenv-expand/-/dotenv-expand-11.0.7.tgz",
"integrity": "sha512-zIHwmZPRshsCdpMDyVsqGmgyP0yT8GAgXUnkdAoJisxvf33k7yO6OuoKmcTGuXPWSsm8Oh88nZicRLA9Y0rUeA==",
"license": "BSD-2-Clause",
"peer": true,
"dependencies": {
"dotenv": "^16.4.5"
@@ -6758,11 +6828,12 @@
}
},
"node_modules/next": {
"version": "14.2.22",
"resolved": "https://registry.npmjs.org/next/-/next-14.2.22.tgz",
"integrity": "sha512-Ps2caobQ9hlEhscLPiPm3J3SYhfwfpMqzsoCMZGWxt9jBRK9hoBZj2A37i8joKhsyth2EuVKDVJCTF5/H4iEDw==",
"version": "14.2.26",
"resolved": "https://registry.npmjs.org/next/-/next-14.2.26.tgz",
"integrity": "sha512-b81XSLihMwCfwiUVRRja3LphLo4uBBMZEzBBWMaISbKTwOmq3wPknIETy/8000tr7Gq4WmbuFYPS7jOYIf+ZJw==",
"license": "MIT",
"dependencies": {
"@next/env": "14.2.22",
"@next/env": "14.2.26",
"@swc/helpers": "0.5.5",
"busboy": "1.6.0",
"caniuse-lite": "^1.0.30001579",
@@ -6777,15 +6848,15 @@
"node": ">=18.17.0"
},
"optionalDependencies": {
"@next/swc-darwin-arm64": "14.2.22",
"@next/swc-darwin-x64": "14.2.22",
"@next/swc-linux-arm64-gnu": "14.2.22",
"@next/swc-linux-arm64-musl": "14.2.22",
"@next/swc-linux-x64-gnu": "14.2.22",
"@next/swc-linux-x64-musl": "14.2.22",
"@next/swc-win32-arm64-msvc": "14.2.22",
"@next/swc-win32-ia32-msvc": "14.2.22",
"@next/swc-win32-x64-msvc": "14.2.22"
"@next/swc-darwin-arm64": "14.2.26",
"@next/swc-darwin-x64": "14.2.26",
"@next/swc-linux-arm64-gnu": "14.2.26",
"@next/swc-linux-arm64-musl": "14.2.26",
"@next/swc-linux-x64-gnu": "14.2.26",
"@next/swc-linux-x64-musl": "14.2.26",
"@next/swc-win32-arm64-msvc": "14.2.26",
"@next/swc-win32-ia32-msvc": "14.2.26",
"@next/swc-win32-x64-msvc": "14.2.26"
},
"peerDependencies": {
"@opentelemetry/api": "^1.1.0",
@@ -10123,6 +10194,7 @@
"url": "https://feross.org/support"
}
],
"license": "MIT",
"peer": true
},
"node_modules/safer-buffer": {
@@ -10470,9 +10542,10 @@
}
},
"node_modules/typescript": {
"version": "5.7.2",
"resolved": "https://registry.npmjs.org/typescript/-/typescript-5.7.2.tgz",
"integrity": "sha512-i5t66RHxDvVN40HfDd1PsEThGNnlMCMT3jMUuoh9/0TaqWevNontacunWyN02LA9/fIbEWlcHZcgTKb9QoaLfg==",
"version": "5.8.2",
"resolved": "https://registry.npmjs.org/typescript/-/typescript-5.8.2.tgz",
"integrity": "sha512-aJn6wq13/afZp/jT9QZmwEjDqqvSGp1VT5GVg+f/t6/oVyrgXM6BY1h9BRh/O5p3PlUPAe+WuiEZOmb/49RqoQ==",
"license": "Apache-2.0",
"peer": true,
"bin": {
"tsc": "bin/tsc",

View File

@@ -7,8 +7,8 @@
"license": "MIT",
"dependencies": {
"@vercel/analytics": "^1.1.1",
"docsgpt-react": "^0.4.9",
"next": "^14.2.22",
"docsgpt-react": "^0.5.0",
"next": "^14.2.26",
"nextra": "^2.13.2",
"nextra-theme-docs": "^2.13.2",
"react": "^18.2.0",

View File

@@ -1,350 +0,0 @@
# API Endpoints Documentation
*Currently, the application provides the following main API endpoints:*
### 1. /api/answer
**Description:**
This endpoint is used to request answers to user-provided questions.
**Request:**
**Method**: `POST`
**Headers**: Content-Type should be set to `application/json; charset=utf-8`
**Request Body**: JSON object with the following fields:
* `question` — The user's question.
* `history` — (Optional) Previous conversation history.
* `api_key` — Your API key.
* `embeddings_key` — Your embeddings key.
* `active_docs` — The location of active documentation.
Here is a JavaScript Fetch Request example:
```js
// answer (POST http://127.0.0.1:5000/api/answer)
fetch("http://127.0.0.1:5000/api/answer", {
"method": "POST",
"headers": {
"Content-Type": "application/json; charset=utf-8"
},
"body": JSON.stringify({"question":"Hi","history":null,"api_key":"OPENAI_API_KEY","embeddings_key":"OPENAI_API_KEY",
"active_docs": "javascript/.project/ES2015/openai_text-embedding-ada-002/"})
})
.then((res) => res.text())
.then(console.log.bind(console))
```
**Response**
In response, you will get a JSON document containing the `answer`, `query` and `result`:
```json
{
"answer": "Hi there! How can I help you?\n",
"query": "Hi",
"result": "Hi there! How can I help you?\nSOURCES:"
}
```
### 2. /api/docs_check
**Description:**
This endpoint checks whether the requested documentation is loaded on the server. Call it every time the user switches between libraries (documentation sets).
**Request:**
**Method**: `POST`
**Headers**: Content-Type should be set to `application/json; charset=utf-8`
**Request Body**: JSON object with the field:
* `docs` — The location of the documentation:
```js
// docs_check (POST http://127.0.0.1:5000/api/docs_check)
fetch("http://127.0.0.1:5000/api/docs_check", {
"method": "POST",
"headers": {
"Content-Type": "application/json; charset=utf-8"
},
"body": JSON.stringify({"docs":"javascript/.project/ES2015/openai_text-embedding-ada-002/"})
})
.then((res) => res.text())
.then(console.log.bind(console))
```
**Response:**
In response, you will get a JSON document like this one indicating whether the documentation exists or not:
```json
{
"status": "exists"
}
```
### 3. /api/combine
**Description:**
This endpoint provides information about available vectors and their locations with a simple GET request.
**Request:**
**Method**: `GET`
**Response:**
The response will include:
* `date`
* `description`
* `docLink`
* `fullName`
* `language`
* `location` (local or docshub)
* `model`
* `name`
* `version`
Example of the JSON for Docshub and local locations:
<img width="295" alt="image" src="https://user-images.githubusercontent.com/15183589/224714085-f09f51a4-7a9a-4efb-bd39-798029bb4273.png">
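The other endpoints above include a fetch example, so here is a minimal sketch for this one as well; the host and port simply follow the earlier GET examples and may differ in your setup:

```js
// combine (GET http://localhost:5001/api/combine)
fetch("http://localhost:5001/api/combine", {
  "method": "GET",
  "headers": {
    "Content-Type": "application/json; charset=utf-8"
  },
})
.then((res) => res.json())
.then(console.log.bind(console))
```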
### 4. /api/upload
**Description:**
This endpoint is used to upload a file for training. The response is a JSON object with a task ID, which can be used to check the task's progress.
**Request:**
**Method**: `POST`
**Request Body**: A multipart/form-data form with file upload and additional fields, including `user` and `name`.
HTML example:
```html
<form action="/api/upload" method="post" enctype="multipart/form-data" class="mt-2">
<input type="file" name="file" class="py-4" id="file-upload">
<input type="text" name="user" value="local" hidden>
<input type="text" name="name" placeholder="Name:">
<button type="submit" class="py-2 px-4 text-white bg-purple-30 rounded-md hover:bg-purple-30 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-purple-30">
Upload
</button>
</form>
```
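To upload programmatically instead of through a form, a rough JavaScript sketch using the same field names (`file`, `user`, `name`) would look like this; `fileInput` is assumed to be an `<input type="file">` element on your page:

```js
// upload (POST http://127.0.0.1:5000/api/upload)
const formData = new FormData();
formData.append("file", fileInput.files[0]); // fileInput is a hypothetical <input type="file"> element
formData.append("user", "local");
formData.append("name", "somename");

fetch("http://127.0.0.1:5000/api/upload", {
  "method": "POST",
  "body": formData, // the browser sets the multipart/form-data boundary automatically
})
.then((res) => res.json())
.then(console.log.bind(console))
```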
**Response:**
JSON response with a status and a task ID that can be used to check the task's progress.
### 5. /api/task_status
**Description:**
This endpoint is used to get the status of a task (`task_id`) from `/api/upload`.
**Request:**
**Method**: `GET`
**Query Parameter**: `task_id` (task ID to check)
**Sample JavaScript Fetch Request:**
```js
// Task status (GET http://localhost:5001/api/task_status)
fetch("http://localhost:5001/api/task_status?task_id=YOUR_TASK_ID", {
"method": "GET",
"headers": {
"Content-Type": "application/json; charset=utf-8"
},
})
.then((res) => res.text())
.then(console.log.bind(console))
```
**Response:**
There are two types of responses:
1. While the task is still running, the 'current' value will show progress from 0 to 100.
```json
{
"result": {
"current": 1
},
"status": "PROGRESS"
}
```
2. When the task is completed:
```json
{
"result": {
"directory": "temp",
"filename": "install.rst",
"formats": [
".rst",
".md",
".pdf"
],
"name_job": "somename",
"user": "local"
},
"status": "SUCCESS"
}
```
### 6. /api/delete_old
**Description:**
This endpoint is used to delete old Vector Stores.
**Request:**
**Method**: `GET`
**Query Parameter**: `task_id`
**Sample JavaScript Fetch Request:**
```js
// delete_old (GET http://localhost:5001/api/delete_old)
fetch("http://localhost:5001/api/delete_old?task_id=YOUR_TASK_ID", {
"method": "GET",
"headers": {
"Content-Type": "application/json; charset=utf-8"
},
})
.then((res) => res.text())
.then(console.log.bind(console))
```
**Response:**
JSON response indicating the status of the operation:
```json
{ "status": "ok" }
```
### 7. /api/get_api_keys
**Description:**
The endpoint retrieves a list of API keys for the user.
**Request:**
**Method**: `GET`
**Sample JavaScript Fetch Request:**
```js
// get_api_keys (GET http://localhost:5001/api/get_api_keys)
fetch("http://localhost:5001/api/get_api_keys", {
"method": "GET",
"headers": {
"Content-Type": "application/json; charset=utf-8"
},
})
.then((res) => res.text())
.then(console.log.bind(console))
```
**Response:**
JSON response with a list of created API keys:
```json
[
{
"id": "string",
"name": "string",
"key": "string",
"source": "string"
},
...
]
```
### 8. /api/create_api_key
**Description:**
Create a new API key for the user.
**Request:**
**Method**: `POST`
**Headers**: Content-Type should be set to `application/json; charset=utf-8`
**Request Body**: JSON object with the following fields:
* `name` — A name for the API key.
* `source` — The source documents that will be used.
* `prompt_id` — The prompt ID.
* `chunks` — The number of chunks used to process an answer.
Here is a JavaScript Fetch Request example:
```js
// create_api_key (POST http://127.0.0.1:5000/api/create_api_key)
fetch("http://127.0.0.1:5000/api/create_api_key", {
"method": "POST",
"headers": {
"Content-Type": "application/json; charset=utf-8"
},
"body": JSON.stringify({"name":"Example Key Name",
"source":"Example Source",
"prompt_id":"creative",
"chunks":"2"})
})
.then((res) => res.json())
.then(console.log.bind(console))
```
**Response**
In response, you will get a JSON document containing the `id` and `key`:
```json
{
"id": "string",
"key": "string"
}
```
### 9. /api/delete_api_key
**Description:**
Delete an API key for the user.
**Request:**
**Method**: `POST`
**Headers**: Content-Type should be set to `application/json; charset=utf-8`
**Request Body**: JSON object with the field:
* `id` — The unique identifier of the API key to be deleted.
Here is a JavaScript Fetch Request example:
```js
// delete_api_key (POST http://127.0.0.1:5000/api/delete_api_key)
fetch("http://127.0.0.1:5000/api/delete_api_key", {
"method": "POST",
"headers": {
"Content-Type": "application/json; charset=utf-8"
},
"body": JSON.stringify({"id":"API_KEY_ID"})
})
.then((res) => res.json())
.then(console.log.bind(console))
```
**Response:**
In response, you will get a JSON document indicating the status of the operation:
```json
{
"status": "ok"
}
```

View File

@@ -1,10 +0,0 @@
{
"API-docs": {
"title": "🗂️️ API-docs",
"href": "/API/API-docs"
},
"api-key-guide": {
"title": "🔐 API Keys guide",
"href": "/API/api-key-guide"
}
}

View File

@@ -1,3 +1,9 @@
---
title: Hosting DocsGPT on Amazon Lightsail
description:
display: hidden
---
# Self-hosting DocsGPT on Amazon Lightsail
Here's a step-by-step guide on how to set up an Amazon Lightsail instance to host DocsGPT.
@@ -73,7 +79,7 @@ To save the file, press CTRL+X, then Y, and then ENTER.
Next, set the correct IP for the Backend by opening the docker-compose.yml file:
`nano docker-compose.yml`
`nano deployment/docker-compose.yaml`
Then change line 7 from `VITE_API_HOST=http://localhost:7091` to `VITE_API_HOST=http://<your instance public IP>:7091`.
@@ -84,7 +90,7 @@ This will allow the frontend to connect to the backend.
You're almost there! Now that all the necessary bits and pieces have been installed, it is time to run the application. To do so, use the following command:
`sudo docker-compose up -d`
`sudo docker compose -f deployment/docker-compose.yaml up -d`
Launching it for the first time will take a few minutes to download all the necessary dependencies and build.
@@ -101,10 +107,4 @@ Repeat the process for port `7091`.
#### Access your instance
Your instance is now available at your Public IP Address on port 5173. Enjoy using DocsGPT!
## Other Deployment Options
- [Deploy DocsGPT on Civo Compute Cloud](https://dev.to/rutamhere/deploying-docsgpt-on-civo-compute-c)
- [Deploy DocsGPT on DigitalOcean Droplet](https://dev.to/rutamhere/deploying-docsgpt-on-digitalocean-droplet-50ea)
- [Deploy DocsGPT on Kamatera Performance Cloud](https://dev.to/rutamhere/deploying-docsgpt-on-kamatera-performance-cloud-1bj)
Your instance is now available at your Public IP Address on port 5173. Enjoy using DocsGPT!

View File

@@ -1,78 +0,0 @@
## Development Environments
### Spin up Mongo and Redis
For development, only two containers from [docker-compose.yaml](https://github.com/arc53/DocsGPT/blob/main/docker-compose.yaml) are used (all services are removed except Redis and Mongo).
See file [docker-compose-dev.yaml](https://github.com/arc53/DocsGPT/blob/main/docker-compose-dev.yaml).
Run
```
docker compose -f docker-compose-dev.yaml build
docker compose -f docker-compose-dev.yaml up -d
```
### Run the Backend
> [!Note]
> Make sure you have Python 3.12 installed.
1. Export required environment variables or prepare a `.env` file in the project folder:
- Copy [.env-template](https://github.com/arc53/DocsGPT/blob/main/application/.env-template) and create `.env`.
(check out [`application/core/settings.py`](application/core/settings.py) if you want to see more config options.)
2. (optional) Create a Python virtual environment:
You can follow the [Python official documentation](https://docs.python.org/3/tutorial/venv.html) for virtual environments.
a) On Mac OS and Linux
```commandline
python -m venv venv
. venv/bin/activate
```
b) On Windows
```commandline
python -m venv venv
venv/Scripts/activate
```
3. Download embedding model and save it in the `model/` folder:
You can use the script below, or download it manually from [here](https://d3dg1063dc54p9.cloudfront.net/models/embeddings/mpnet-base-v2.zip), unzip it and save it in the `model/` folder.
```commandline
wget https://d3dg1063dc54p9.cloudfront.net/models/embeddings/mpnet-base-v2.zip
unzip mpnet-base-v2.zip -d model
rm mpnet-base-v2.zip
```
4. Install dependencies for the backend:
```commandline
pip install -r application/requirements.txt
```
5. Run the app using `flask --app application/app.py run --host=0.0.0.0 --port=7091`.
6. Start worker with `celery -A application.app.celery worker -l INFO`.
> [!Note]
> You can also launch the app in debug mode in VS Code by pressing SHIFT + CMD + D (macOS) or SHIFT + Windows + D (Windows) and selecting Flask or Celery.
### Start Frontend
> [!Note]
> Make sure you have Node version 16 or higher.
1. Navigate to the [/frontend](https://github.com/arc53/DocsGPT/tree/main/frontend) folder.
2. Install the required packages `husky` and `vite` (ignore if already installed).
```commandline
npm install husky -g
npm install vite -g
```
3. Install dependencies by running `npm install --include=dev`.
4. Run the app using `npm run dev`.

View File

@@ -0,0 +1,163 @@
---
title: Setting Up a Development Environment
description: Guide to setting up a development environment for DocsGPT, including backend and frontend setup.
---
# Setting Up a Development Environment
This guide will walk you through setting up a development environment for DocsGPT. This setup allows you to modify and test the application's backend and frontend components.
## 1. Spin Up MongoDB and Redis
For development purposes, you can quickly start MongoDB and Redis containers, which are the primary database and caching systems used by DocsGPT. We provide a dedicated Docker Compose file, `docker-compose-dev.yaml`, located in the `deployment` directory, that includes only these essential services.
You can find the `docker-compose-dev.yaml` file [here](https://github.com/arc53/DocsGPT/blob/main/deployment/docker-compose-dev.yaml).
**Steps to start MongoDB and Redis:**
1. Navigate to the root directory of your DocsGPT repository in your terminal.
2. Run the following commands to build and start the containers defined in `docker-compose-dev.yaml`:
```bash
docker compose -f deployment/docker-compose-dev.yaml build
docker compose -f deployment/docker-compose-dev.yaml up -d
```
These commands will start MongoDB and Redis in detached mode, running in the background.
## 2. Run the Backend
To run the DocsGPT backend locally, you'll need to set up a Python environment and install the necessary dependencies.
**Prerequisites:**
* **Python 3.12:** Ensure you have Python 3.12 installed on your system. You can check your Python version by running `python --version` or `python3 --version` in your terminal.
**Steps to run the backend:**
1. **Configure Environment Variables:**
DocsGPT backend settings are configured using environment variables. You can set these either in a `.env` file or directly in the `settings.py` file. For a comprehensive overview of all settings, please refer to the [DocsGPT Settings Guide](/Deploying/DocsGPT-Settings).
* **Option 1: Using a `.env` file (Recommended):**
* If you haven't already, create a file named `.env` in the **root directory** of your DocsGPT project.
* Modify the `.env` file to adjust settings as needed. You can find a comprehensive list of configurable options in [`application/core/settings.py`](application/core/settings.py).
* **Option 2: Exporting Environment Variables:**
* Alternatively, you can export environment variables directly in your terminal. However, using a `.env` file is generally more organized for development.
2. **Create a Python Virtual Environment (Optional but Recommended):**
Using a virtual environment isolates project dependencies and avoids conflicts with system-wide Python packages.
* **macOS and Linux:**
```bash
python -m venv venv
. venv/bin/activate
```
* **Windows:**
```bash
python -m venv venv
venv/Scripts/activate
```
3. **Download Embedding Model:**
The backend requires an embedding model. Download the `mpnet-base-v2` model and place it in the `model/` directory within the project root. You can use the following script:
```bash
wget https://d3dg1063dc54p9.cloudfront.net/models/embeddings/mpnet-base-v2.zip
unzip mpnet-base-v2.zip -d model
rm mpnet-base-v2.zip
```
Alternatively, you can manually download the zip file from [here](https://d3dg1063dc54p9.cloudfront.net/models/embeddings/mpnet-base-v2.zip), unzip it, and place the extracted folder in `model/`.
4. **Install Backend Dependencies:**
Navigate to the root of your DocsGPT repository and install the required Python packages:
```bash
pip install -r application/requirements.txt
```
5. **Run the Flask App:**
Start the Flask backend application:
```bash
flask --app application/app.py run --host=0.0.0.0 --port=7091
```
This command will launch the backend server, making it accessible on `http://localhost:7091`.
6. **Start the Celery Worker:**
Open a new terminal window (and activate your virtual environment if you used one). Start the Celery worker to handle background tasks:
```bash
celery -A application.app.celery worker -l INFO
```
This command will start the Celery worker, which processes tasks such as document parsing and vector embedding.
**Running in Debugger (VSCode):**
For easier debugging, you can launch the Flask app and Celery worker directly from VSCode's debugger.
* Press <kbd>Shift</kbd> + <kbd>Cmd</kbd> + <kbd>D</kbd> (macOS) or <kbd>Shift</kbd> + <kbd>Windows</kbd> + <kbd>D</kbd> (Windows) to open the Run and Debug view.
* You should see configurations named "Flask" and "Celery". Select the desired configuration and click the "Start Debugging" button (green play icon).
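If those launch configurations are missing from your checkout, a minimal `.vscode/launch.json` along the following lines can recreate them. This is an illustrative sketch (not the repository's actual file) that simply mirrors the two CLI commands above; adjust the debugger type and paths to your setup:
```json
{
  "version": "0.2.0",
  "configurations": [
    {
      // Hypothetical "Flask" config mirroring the flask run command above
      "name": "Flask",
      "type": "debugpy",
      "request": "launch",
      "module": "flask",
      "args": ["--app", "application/app.py", "run", "--host=0.0.0.0", "--port=7091"]
    },
    {
      // Hypothetical "Celery" config mirroring the worker command above
      "name": "Celery",
      "type": "debugpy",
      "request": "launch",
      "module": "celery",
      "args": ["-A", "application.app.celery", "worker", "-l", "INFO"]
    }
  ]
}
```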
## 3. Start the Frontend
To run the DocsGPT frontend locally, you'll need Node.js and npm (Node Package Manager).
**Prerequisites:**
* **Node.js version 16 or higher:** Ensure you have Node.js version 16 or greater installed. You can check your Node.js version by running `node -v` in your terminal. npm is usually bundled with Node.js.
**Steps to start the frontend:**
1. **Navigate to the Frontend Directory:**
In your terminal, change the current directory to the `frontend` folder within your DocsGPT repository:
```bash
cd frontend
```
2. **Install Global Packages (If Needed):**
If you don't have `husky` and `vite` installed globally, you can install them:
```bash
npm install husky -g
npm install vite -g
```
You can skip this step if you already have these packages installed or prefer to use local installations (though global installation simplifies running the commands in this guide).
3. **Install Frontend Dependencies:**
Install the project's frontend dependencies using npm:
```bash
npm install --include=dev
```
This command reads the `package.json` file in the `frontend` directory and installs all listed dependencies, including development dependencies.
4. **Run the Frontend App:**
Start the frontend development server:
```bash
npm run dev
```
This command will start the Vite development server. The frontend application will typically be accessible at [http://localhost:5173/](http://localhost:5173/). The terminal will display the exact URL where the frontend is running.
With both the backend and frontend running, you should now have a fully functional DocsGPT development environment. You can access the application in your browser at [http://localhost:5173/](http://localhost:5173/) and start developing!

View File

@@ -0,0 +1,135 @@
---
title: Docker Deployment of DocsGPT
description: Deploy DocsGPT using Docker and Docker Compose for easy setup and management.
---
# Docker Deployment of DocsGPT
Docker is the recommended method for deploying DocsGPT, providing a consistent and isolated environment for the application to run. This guide will walk you through deploying DocsGPT using Docker and Docker Compose.
## Prerequisites
* **Docker Engine:** You need to have Docker Engine installed on your system.
* **macOS:** [Docker Desktop for Mac](https://docs.docker.com/desktop/install/mac-install/)
* **Linux:** [Docker Engine Installation Guide](https://docs.docker.com/engine/install/) (follow instructions for your specific distribution)
* **Windows:** [Docker Desktop for Windows](https://docs.docker.com/desktop/install/windows-install/) (requires WSL 2 backend, see notes below)
* **Docker Compose:** Docker Compose is usually included with Docker Desktop. If you are using Docker Engine separately, ensure you have Docker Compose V2 installed.
**Important Note for Windows Users:** Docker Desktop on Windows generally requires the WSL 2 backend to function correctly, especially when using features like host networking which are utilized in DocsGPT's Docker Compose setup. Ensure WSL 2 is enabled and configured in Docker Desktop settings.
## Quickest Setup: Using DocsGPT Public API
The fastest way to try out DocsGPT is by using the public API endpoint. This requires minimal configuration and no local LLM setup.
1. **Clone the DocsGPT Repository (if you haven't already):**
```bash
git clone https://github.com/arc53/DocsGPT.git
cd DocsGPT
```
2. **Create a `.env` file:**
In the root directory of your DocsGPT repository, create a file named `.env`.
3. **Add Public API Configuration to `.env`:**
Open the `.env` file and add the following lines:
```
LLM_NAME=docsgpt
VITE_API_STREAMING=true
```
This minimal configuration tells DocsGPT to use the public API. For more advanced settings and other LLM options, refer to the [DocsGPT Settings Guide](/Deploying/DocsGPT-Settings).
4. **Launch DocsGPT with Docker Compose:**
Navigate to the root directory of the DocsGPT repository in your terminal and run:
```bash
docker compose -f deployment/docker-compose.yaml up -d
```
The `-d` flag runs Docker Compose in detached mode (in the background).
5. **Access DocsGPT in your browser:**
Once the containers are running, open your web browser and go to [http://localhost:5173/](http://localhost:5173/).
6. **Stopping DocsGPT:**
To stop the application, navigate to the same directory in your terminal and run:
```bash
docker compose -f deployment/docker-compose.yaml down
```
## Optional Ollama Setup (Local Models)
DocsGPT provides optional Docker Compose files to easily integrate with [Ollama](https://ollama.com/) for running local models. These files add an official Ollama container to your Docker Compose setup. These files are located in the `deployment/optional/` directory.
There are two Ollama optional files:
* **`docker-compose.optional.ollama-cpu.yaml`**: For running Ollama on CPU.
* **`docker-compose.optional.ollama-gpu.yaml`**: For running Ollama on GPU (requires Docker to be configured for GPU usage).
### Launching with Ollama and Pulling a Model
1. **Clone the DocsGPT Repository and Create `.env` (as described above).**
2. **Launch DocsGPT with Ollama Docker Compose:**
Choose the appropriate Ollama Compose file (CPU or GPU) and launch DocsGPT:
**CPU:**
```bash
docker compose --env-file .env -f deployment/docker-compose.yaml -f deployment/optional/docker-compose.optional.ollama-cpu.yaml up -d
```
**GPU:**
```bash
docker compose --env-file .env -f deployment/docker-compose.yaml -f deployment/optional/docker-compose.optional.ollama-gpu.yaml up -d
```
3. **Pull the Ollama Model:**
**Crucially, after launching with Ollama, you need to pull the desired model into the Ollama container.** Find the `MODEL_NAME` you configured in your `.env` file (e.g., `llama3.2:1b`). Then execute the following command to pull the model *inside* the running Ollama container:
```bash
docker compose -f deployment/docker-compose.yaml -f deployment/optional/docker-compose.optional.ollama-cpu.yaml exec -it ollama ollama pull <MODEL_NAME>
```
or (for GPU):
```bash
docker compose -f deployment/docker-compose.yaml -f deployment/optional/docker-compose.optional.ollama-gpu.yaml exec -it ollama ollama pull <MODEL_NAME>
```
Replace `<MODEL_NAME>` with the actual model name from your `.env` file.
4. **Access DocsGPT in your browser:**
Once the model is pulled and containers are running, open your web browser and go to [http://localhost:5173/](http://localhost:5173/).
5. **Stopping Ollama Setup:**
To stop a DocsGPT setup launched with Ollama optional files, use `docker compose down` and include all the compose files used during the `up` command:
```bash
docker compose -f deployment/docker-compose.yaml -f deployment/optional/docker-compose.optional.ollama-cpu.yaml down
```
or
```bash
docker compose -f deployment/docker-compose.yaml -f deployment/optional/docker-compose.optional.ollama-gpu.yaml down
```
**Important for GPU Usage:**
* **NVIDIA Container Toolkit (for NVIDIA GPUs):** If you are using NVIDIA GPUs, you need to have the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) installed and configured on your system for Docker to access your GPU.
* **Docker GPU Configuration:** Ensure Docker is configured to utilize your GPU. Refer to the [Ollama Docker Hub page](https://hub.docker.com/r/ollama/ollama) and Docker documentation for GPU setup instructions specific to your GPU type (NVIDIA, AMD, Intel).
## Restarting After Configuration Changes
Whenever you modify the `.env` file or any Docker Compose files, you need to restart the Docker containers for the changes to be applied. Use the same `docker compose down` and `docker compose up -d` commands you used to launch DocsGPT, ensuring you include all relevant `-f` flags for optional files if you are using them.
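For example, with the base setup described above, a restart looks like this (append the same optional `-f` files if you launched with Ollama):
```bash
docker compose -f deployment/docker-compose.yaml down
docker compose -f deployment/docker-compose.yaml up -d
```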
## Further Configuration
This guide covers the basic Docker deployment of DocsGPT. For detailed information on configuring various aspects of DocsGPT, such as LLM providers, models, vector stores, and more, please refer to the comprehensive [DocsGPT Settings Guide](/Deploying/DocsGPT-Settings).

View File

@@ -0,0 +1,107 @@
---
title: DocsGPT Settings
description: Configure your DocsGPT application by understanding the basic settings.
---
# DocsGPT Settings
DocsGPT is highly configurable, allowing you to tailor it to your specific needs and preferences. You can control various aspects of the application, from choosing the Large Language Model (LLM) provider to selecting embedding models and vector stores.
This document will guide you through the basic settings you can configure in DocsGPT. These settings determine how DocsGPT interacts with LLMs and processes your data.
## Configuration Methods
There are two primary ways to configure DocsGPT settings:
### 1. Configuration via `.env` file (Recommended)
The easiest and recommended way to configure basic settings is by using a `.env` file. This file should be located in the **root directory** of your DocsGPT project (the same directory where `setup.sh` is located).
**Example `.env` file structure:**
```
LLM_NAME=openai
API_KEY=YOUR_OPENAI_API_KEY
MODEL_NAME=gpt-4o
```
### 2. Configuration via `settings.py` file (Advanced)
For more advanced configurations or if you prefer to manage settings directly in code, you can modify the `settings.py` file. This file is located in the `application/core` directory of your DocsGPT project.
While modifying `settings.py` offers more flexibility, it's generally recommended to use the `.env` file for basic settings and reserve `settings.py` for more complex adjustments or when you need to configure settings programmatically.
**Location of `settings.py`:** `application/core/settings.py`
## Basic Settings Explained
Here are some of the most fundamental settings you'll likely want to configure:
- **`LLM_NAME`**: This setting determines which Large Language Model (LLM) provider DocsGPT will use. It tells DocsGPT which API to interact with.
- **Common values:**
- `docsgpt`: Use the DocsGPT Public API Endpoint (simple and free, as offered in `setup.sh` option 1).
- `openai`: Use OpenAI's API (requires an API key).
- `google`: Use Google's Vertex AI or Gemini models.
- `anthropic`: Use Anthropic's Claude models.
- `groq`: Use Groq's models.
- `huggingface`: Use HuggingFace Inference API.
- `azure_openai`: Use Azure OpenAI Service.
- `openai` (when using local inference engines like Ollama, Llama.cpp, TGI, etc.): This signals DocsGPT to use an OpenAI-compatible API format, even if the actual LLM is running locally.
- **`MODEL_NAME`**: Specifies the specific model to use from the chosen LLM provider. The available models depend on the `LLM_NAME` you've selected.
- **Examples:**
- For `LLM_NAME=openai`: `gpt-4o`
- For `LLM_NAME=google`: `gemini-2.0-flash`
- For local models (e.g., Ollama): `llama3.2:1b` (or any model name available in your setup).
- **`EMBEDDINGS_NAME`**: This setting defines which embedding model DocsGPT will use to generate vector embeddings for your documents. Embeddings are numerical representations of text that allow DocsGPT to understand the semantic meaning of your documents for efficient search and retrieval.
- **Default value:** `huggingface_sentence-transformers/all-mpnet-base-v2` (a good general-purpose embedding model).
- **Other options:** You can explore other embedding models from Hugging Face Sentence Transformers or other providers if needed.
- **`API_KEY`**: Required for most cloud-based LLM providers. This is your authentication key to access the LLM provider's API. You'll need to obtain this key from your chosen provider's platform.
- **`OPENAI_BASE_URL`**: Specifically used when `LLM_NAME` is set to `openai` but you are connecting to a local inference engine (like Ollama, Llama.cpp, etc.) that exposes an OpenAI-compatible API. This setting tells DocsGPT where to find your local LLM server.
## Configuration Examples
Let's look at some concrete examples of how to configure these settings in your `.env` file.
### Example for Cloud API Provider (OpenAI)
To use OpenAI's `gpt-4o` model, you would configure your `.env` file like this:
```
LLM_NAME=openai
API_KEY=YOUR_OPENAI_API_KEY # Replace with your actual OpenAI API key
MODEL_NAME=gpt-4o
```
Make sure to replace `YOUR_OPENAI_API_KEY` with your actual OpenAI API key.
### Example for Local Deployment
To use a local Ollama server with the `llama3.2:1b` model, you would configure your `.env` file like this:
```
LLM_NAME=openai # Using OpenAI compatible API format for local models
API_KEY=None # API Key is not needed for local Ollama
MODEL_NAME=llama3.2:1b
OPENAI_BASE_URL=http://host.docker.internal:11434/v1 # Default Ollama API URL within Docker
EMBEDDINGS_NAME=huggingface_sentence-transformers/all-mpnet-base-v2 # You can also run embeddings locally if needed
```
In this case, even though you are using Ollama locally, `LLM_NAME` is set to `openai` because Ollama (and many other local inference engines) are designed to be API-compatible with OpenAI. `OPENAI_BASE_URL` points DocsGPT to the local Ollama server.
## Exploring More Settings
These are just the basic settings to get you started. The `settings.py` file contains many more advanced options that you can explore to further customize DocsGPT, such as:
- Vector store configuration (`VECTOR_STORE`, Qdrant, Milvus, LanceDB settings)
- Retriever settings (`RETRIEVERS_ENABLED`)
- Cache settings (`CACHE_REDIS_URL`)
- And many more!
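As an illustration, an extended `.env` might combine a few of these advanced options alongside the basic ones. The variable names below appear in `settings.py`; the values are placeholders only, so check `settings.py` for the exact accepted formats:
```
LLM_NAME=openai
API_KEY=YOUR_OPENAI_API_KEY
MODEL_NAME=gpt-4o
VECTOR_STORE=qdrant # Placeholder value; FAISS, Elasticsearch, Milvus and LanceDB are also supported
CACHE_REDIS_URL=redis://redis:6379/0 # Placeholder Redis URL for caching
```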
For a complete list of available settings and their descriptions, refer to the `settings.py` file in `application/core`. Remember to restart your Docker containers after making changes to your `.env` file or `settings.py` for the changes to take effect.

View File

@@ -0,0 +1,33 @@
import { DeploymentCards } from '../../components/DeploymentCards';
# Deployment Guides
<DeploymentCards
items={[
{
title: 'Amazon Lightsail',
link: 'https://docs.docsgpt.cloud/Deploying/Amazon-Lightsail',
description: 'Self-hosting DocsGPT on Amazon Lightsail'
},
{
title: 'Railway',
link: 'https://docs.docsgpt.cloud/Deploying/Railway',
description: 'Hosting DocsGPT on Railway'
},
{
title: 'Civo Compute Cloud',
link: 'https://dev.to/rutamhere/deploying-docsgpt-on-civo-compute-c',
description: 'Step-by-step guide for Civo deployment'
},
{
title: 'DigitalOcean Droplet',
link: 'https://dev.to/rutamhere/deploying-docsgpt-on-digitalocean-droplet-50ea',
description: 'Guide for DigitalOcean deployment'
},
{
title: 'Kamatera Cloud',
link: 'https://dev.to/rutamhere/deploying-docsgpt-on-kamatera-performance-cloud-1bj',
description: 'Kamatera deployment tutorial'
}
]}
/>

View File

@@ -1,4 +1,10 @@
# Self-hosting DocsGPT on Kubernetes
---
title: Deploying DocsGPT on Kubernetes
description: Learn how to self-host DocsGPT on a Kubernetes cluster for scalable and robust deployments.
---
# Self-hosting DocsGPT on Kubernetes
This guide will walk you through deploying DocsGPT on Kubernetes.
@@ -11,7 +17,7 @@ Ensure you have the following installed before proceeding:
## Folder Structure
The `k8s` folder contains the necessary deployment and service configuration files:
The `deployment/k8s` folder contains the necessary deployment and service configuration files:
- `deployments/`
- `services/`
@@ -23,7 +29,7 @@ The `k8s` folder contains the necessary deployment and service configuration fil
```sh
git clone https://github.com/arc53/DocsGPT.git
cd docsgpt/k8s
cd docsgpt/deployment/k8s
```
2. **Configure Secrets (optional)**

View File

@@ -1,64 +0,0 @@
## Launching Web App
**Note**: Make sure you have Docker installed
**On macOS or Linux:**
Just run the following command:
```bash
./setup.sh
```
This command will install all the necessary dependencies and provide you with an option to use our LLM API, download the local model or use OpenAI.
If you prefer to follow manual steps, refer to this guide:
1. Open and download this repository with
```bash
git clone https://github.com/arc53/DocsGPT.git
cd DocsGPT
```
2. Create a `.env` file in your root directory and set the env variables.
It should look like this inside:
```
LLM_NAME=[docsgpt or openai or others]
API_KEY=[if LLM_NAME is openai]
```
See optional environment variables in the [/application/.env_sample](https://github.com/arc53/DocsGPT/blob/main/application/.env_sample) file.
3. Run the following commands:
```bash
docker compose up
```
4. Navigate to http://localhost:5173/.
To stop, simply press **Ctrl + C**.
**For WINDOWS:**
1. Open and download this repository with
```bash
git clone https://github.com/arc53/DocsGPT.git
cd DocsGPT
```
2. Create a `.env` file in your root directory and set the env variables.
It should look like this inside:
```
LLM_NAME=[docsgpt or openai or others]
API_KEY=[if LLM_NAME is openai]
```
See optional environment variables in the [/application/.env_sample](https://github.com/arc53/DocsGPT/blob/main/application/.env_sample) file.
3. Run the following command:
```bash
docker-compose up
```
4. Navigate to http://localhost:5173/.
5. To stop the setup, just press **Ctrl + C** in the WSL terminal
**Important:** Ensure that Docker is installed and properly configured on your Windows system for these steps to work.

View File

@@ -1,3 +1,7 @@
---
title: Hosting DocsGPT on Railway
description: Learn how to deploy your own DocsGPT instance on Railway with this step-by-step tutorial
---
# Self-hosting DocsGPT on Railway
@@ -97,11 +101,11 @@ To save the file, press CTRL+X, then Y, and then ENTER.
Next, set the correct IP for the Backend by opening the docker-compose.yml file:
Next, set the correct IP for the Backend by opening the docker-compose.yaml file:
`nano docker-compose.yml`
`nano deployment/docker-compose.yaml`
@@ -123,7 +127,7 @@ You're almost there! Now that all the necessary bits and pieces have been instal
`sudo docker-compose up -d`
`sudo docker compose -f deployment/docker-compose.yaml up -d`

View File

@@ -1,22 +1,32 @@
{
"Hosting-the-app": {
"title": " Hosting DocsGPT",
"href": "/Deploying/Hosting-the-app"
"DocsGPT-Settings": {
"title": " App Configuration",
"href": "/Deploying/DocsGPT-Settings"
},
"Quickstart": {
"title": "Quickstart",
"href": "/Deploying/Quickstart"
"Docker-Deploying": {
"title": "🛳️ Docker Setup",
"href": "/Deploying/Docker-Deploying"
},
"Development-Environment": {
"title": "🛠Development Environment",
"href": "/Deploying/Development-Environment"
},
"Railway-Deploying": {
"title": "🚂Deploying on Railway",
"href": "/Deploying/Railway-Deploying"
},
"Kubernetes-Deploying": {
"title": "☸Deploying on Kubernetes",
"title": "☸️ Deploying on Kubernetes",
"href": "/Deploying/Kubernetes-Deploying"
},
"Hosting-the-app": {
"title": "☁️ Hosting DocsGPT",
"href": "/Deploying/Hosting-the-app"
},
"Amazon-Lightsail": {
"title": "Hosting DocsGPT on Amazon Lightsail",
"href": "/Deploying/Amazon-Lightsail",
"display": "hidden"
},
"Railway": {
"title": "Hosting DocsGPT on Railway",
"href": "/Deploying/Railway",
"display": "hidden"
}
}

View File

@@ -1,8 +1,12 @@
---
title: Comprehensive Guide to Setting Up the Chatwoot Extension with DocsGPT
description: This step-by-step guide walks you through the process of setting up the Chatwoot extension with DocsGPT, enabling seamless integration for automated responses and enhanced customer support. Learn how to launch DocsGPT, retrieve your Chatwoot access token, configure the .env file, and start the extension.
---
## Chatwoot Extension Setup Guide
### Step 1: Prepare and Start DocsGPT
- **Launch DocsGPT**: Follow the instructions in our [DocsGPT Wiki](https://github.com/arc53/DocsGPT/wiki) to start DocsGPT. Make sure to load your documentation.
- **Launch DocsGPT**: Follow the instructions in our [Quickstart](/quickstart) to start DocsGPT. Make sure to load your documentation.
### Step 2: Get Access Token from Chatwoot

View File

@@ -1,3 +1,7 @@
---
title: Add DocsGPT Chrome Extension to Your Browser
description: Install the DocsGPT Chrome extension to access AI-powered document assistance directly from your browser for enhanced productivity.
---
import {Steps} from 'nextra/components'
import { Callout } from 'nextra/components'

View File

@@ -1,14 +1,22 @@
{
"Chatwoot-extension": {
"title": "💬️ Chatwoot Extension",
"href": "/Extensions/Chatwoot-extension"
"api-key-guide": {
"title": "🔑 Getting API key",
"href": "/Extensions/api-key-guide"
},
"react-widget": {
"title": "🏗️ Widget setup",
"href": "/Extensions/react-widget"
"chat-widget": {
"title": "💬️ Chat Widget",
"href": "/Extensions/chat-widget"
},
"search-widget": {
"title": "🔎 Search Widget",
"href": "/Extensions/search-widget"
},
"Chrome-extension": {
"title": "🌐 Chrome Extension",
"href": "/Extensions/Chrome-extension"
},
"Chatwoot-extension": {
"title": "🗣️ Chatwoot Extension",
"href": "/Extensions/Chatwoot-extension"
}
}

View File

@@ -1,22 +1,20 @@
## Guide to DocsGPT API Keys
---
title: API Keys for DocsGPT Integrations
description: Learn how to obtain, understand, and use DocsGPT API keys to integrate DocsGPT into your external applications and widgets.
---
DocsGPT API keys are essential for developers and users who wish to integrate the DocsGPT models into external applications, such as the our widget. This guide will walk you through the steps of obtaining an API key, starting from uploading your document to understanding the key variables associated with API keys.
# Guide to DocsGPT API Keys
### Uploading Your Document
DocsGPT API keys are essential for developers and users who wish to integrate the DocsGPT models into external applications, such as [our widget](/Extensions/chat-widget). This guide will walk you through the steps of obtaining an API key, starting from uploading your document to understanding the key variables associated with API keys.
Before creating your first API key, you must upload the document that will be linked to this key. You can upload your document through two methods:
- **GUI Web App Upload:** A user-friendly graphical interface that allows for easy upload and management of documents.
- **Using `/api/upload` Method:** For users comfortable with API calls, this method provides a direct way to upload documents.
### Obtaining Your API Key
## Obtaining Your API Key
After uploading your document, you can obtain an API key either through the graphical user interface or via an API call:
- **Graphical User Interface:** Navigate to the Settings section of the DocsGPT web app, find the API Keys option, and press 'Create New' to generate your key.
- **API Call:** Alternatively, you can use the `/api/create_api_key` endpoint to create a new API key. For detailed instructions, visit [DocsGPT API Documentation](https://docs.docsgpt.cloud/API/API-docs#8-apicreate_api_key).
- **API Call:** Alternatively, you can use the `/api/create_api_key` endpoint to create a new API key. For detailed instructions, visit [DocsGPT API Documentation](https://gptcloud.arc53.com/).
### Understanding Key Variables
## Understanding Key Variables
Upon creating your API key, you will encounter several key variables. Each serves a specific purpose:
@@ -27,4 +25,4 @@ Upon creating your API key, you will encounter several key variables. Each serve
With your API key ready, you can now integrate DocsGPT into your application, such as the DocsGPT Widget or any other software, via `/api/answer` or `/stream` endpoints. The source document is preset with the API key, allowing you to bypass fields like `selectDocs` and `active_docs` during implementation.
Congratulations on taking the first step towards enhancing your applications with DocsGPT! With this guide, you're now equipped to navigate the process of obtaining and understanding DocsGPT API keys.
Congratulations on taking the first step towards enhancing your applications with DocsGPT!

View File

@@ -1,12 +1,12 @@
### Setting up the DocsGPT Widget in Your React Project
# Setting up the DocsGPT Widget in Your React Project
### Introduction:
## Introduction:
The DocsGPT Widget is a powerful tool that allows you to integrate AI-powered documentation assistance into your web applications. This guide will walk you through the installation and usage of the DocsGPT Widget in your React project. Whether you're building a web app or a knowledge base, this widget can enhance your user experience.
### Installation
## Installation
First, make sure you have Node.js and npm installed in your project. Then go to your project and install a new dependency: `npm install docsgpt`.
### Usage
## Usage
In the file where you want to use the widget, import it and include the CSS file:
```js
import { DocsGPTWidget } from "docsgpt";
@@ -29,7 +29,7 @@ Now, you can use the widget in your component like this :
buttonBg = "#222327"
/>
```
### Props Table for DocsGPT Widget
## Props Table for DocsGPT Widget
| **Prop** | **Type** | **Default Value** | **Description** |
|--------------------|------------------|-------------------------------------------------------------|-----------------------------------------------------------------------------------------------------|
@@ -47,7 +47,7 @@ Now, you can use the widget in your component like this :
---
### Notes
## Notes
- **Customizing Props:** All properties can be overridden when embedding the widget. For example, you can provide a unique avatar, title, or color scheme to better align with your brand.
- **Default Theme:** The widget defaults to the dark theme unless explicitly set to `"light"`.
- **API Key:** If the `apiKey` is not required for your application, leave it empty.
@@ -55,7 +55,7 @@ Now, you can use the widget in your component like this :
This table provides a clear overview of the customization options available for tailoring the DocsGPT widget to fit your application.
### How to use DocsGPTWidget with [Nextra](https://nextra.site/) (Next.js + MDX)
## How to use DocsGPTWidget with [Nextra](https://nextra.site/) (Next.js + MDX)
Install your widget as described above and then go to your `pages/` folder and create a new file `_app.js` with the following content:
```js
import { DocsGPTWidget } from "docsgpt";
@@ -69,7 +69,7 @@ export default function MyApp({ Component, pageProps }) {
)
}
```
### How to use DocsGPTWidget with HTML
## How to use DocsGPTWidget with HTML
```html
<!DOCTYPE html>
<html lang="en">

View File

@@ -0,0 +1,159 @@
---
title: Integrate DocsGPT Chat Widget into Your Web Application
description: Embed the DocsGPT Widget in your React, HTML, or Nextra projects to provide AI-powered chat functionality to your users.
---
import { Tabs } from 'nextra/components'
# Integrating DocsGPT Chat Widget
## Introduction
The DocsGPT Widget is a powerful tool that allows you to integrate AI-driven document assistance directly into your web applications. This guide will walk you through embedding the DocsGPT Widget into your projects, whether you're using React, plain HTML, or Nextra. Enhance your user experience by providing seamless access to intelligent document search and chatbot capabilities.
Try out the interactive widget showcase and customize its parameters at the [DocsGPT Widget Demo](https://widget.docsgpt.cloud/).
## Setup
<Tabs items={['React', 'HTML', 'Nextra']}>
<Tabs.Tab>
### Installation
Make sure you have Node.js and npm (or yarn, pnpm) installed in your project. Navigate to your project directory in the terminal and install the `docsgpt` package:
```bash npm
npm install docsgpt
```
### Usage
In your React component file, import the `DocsGPTWidget` component:
```js
import { DocsGPTWidget } from "docsgpt";
```
Now, you can embed the widget within your React component's JSX:
```jsx
<DocsGPTWidget
apiHost="https://your-docsgpt-api.com"
apiKey=""
avatar="https://d3dg1063dc54p9.cloudfront.net/cute-docsgpt.png"
title="Get AI assistance"
description="DocsGPT's AI Chatbot is here to help"
heroTitle="Welcome to DocsGPT !"
heroDescription="This chatbot is built with DocsGPT and utilises GenAI,
please review important information using sources."
theme="dark"
buttonIcon="https://your-icon"
buttonBg="#222327"
/>
```
</Tabs.Tab>
<Tabs.Tab>
### Installation
To use the DocsGPT Widget directly in HTML, include the widget script from a CDN in your HTML file:
```html filename="html"
<script
src="https://unpkg.com/docsgpt/dist/legacy/main.js"
type="module"
></script>
```
### Usage
In your HTML `<body>`, add a `<div>` element where you want to render the widget. Set an `id` for easy targeting.
```html filename="html"
<div id="app"></div>
```
Then, in a `<script type="module">` block, use the `renderDocsGPTWidget` function to initialize the widget, passing the `id` of your `<div>` and a configuration object. To link the widget to your DocsGPT API and specific documents, pass the relevant parameters within the configuration object of `renderDocsGPTWidget`.
```html filename="html"
<!DOCTYPE html>
<div id="app"></div>
<script type="module">
window.onload = function() {
renderDocsGPTWidget('app', {
apiHost: 'http://localhost:7001', // Replace with your API Host
apiKey:"",
avatar: 'https://d3dg1063dc54p9.cloudfront.net/cute-docsgpt.png',
title: 'Get AI assistance',
description: "DocsGPT's AI Chatbot is here to help",
heroTitle: 'Welcome to DocsGPT!',
heroDescription: 'This chatbot utilises GenAI, please review important information.',
theme:"dark",
buttonIcon:"https://your-icon",
buttonBg:"#222327"
});
}
</script>
```
</Tabs.Tab>
<Tabs.Tab>
### Installation
Make sure you have Node.js and npm (or yarn, pnpm) installed in your project. Navigate to your project directory in the terminal and install the `docsgpt` package:
```bash npm
npm install docsgpt
```
### Usage with Nextra (Next.js + MDX)
To integrate the DocsGPT Widget into a [Nextra](https://nextra.site/) documentation site (built with Next.js and MDX), create or modify your `pages/_app.js` file as follows:
```js filename="pages/_app.js"
import { DocsGPTWidget } from "docsgpt";
export default function MyApp({ Component, pageProps }) {
return (
<>
<Component {...pageProps} />
<DocsGPTWidget selectDocs="local/docsgpt-sep.zip/"/>
</>
)
}
```
</Tabs.Tab>
</Tabs>
---
## Properties Table
The DocsGPT Widget offers a range of customizable properties that allow you to tailor its appearance and behavior to perfectly match your web application. These parameters can be modified directly when embedding the widget in your React components or HTML code. Below is a detailed overview of each available prop:
| **Prop** | **Type** | **Default Value** | **Description** |
|--------------------|------------------|-------------------------------------------------------------|-----------------------------------------------------------------------------------------------------|
| **`apiHost`** | `string` | `"https://gptcloud.arc53.com"` | **Required.** The URL of your DocsGPT API backend. This endpoint handles vector search and chatbot queries. |
| **`apiKey`** | `string` | `"your-api-key"` | API key for authentication with your DocsGPT API. Leave empty if no authentication is required. |
| **`avatar`** | `string` | [`dino-icon-link`](https://d3dg1063dc54p9.cloudfront.net/cute-docsgpt.png) | URL for the avatar image displayed in the chatbot interface. |
| **`title`** | `string` | `"Get AI assistance"` | Title text shown in the chatbot header. |
| **`description`** | `string` | `"DocsGPT's AI Chatbot is here to help"` | Sub-title or descriptive text displayed below the title in the chatbot header. |
| **`heroTitle`** | `string` | `"Welcome to DocsGPT !"` | Welcome message displayed when the chatbot is initially opened. |
| **`heroDescription`** | `string` | `"This chatbot is built with DocsGPT and utilises GenAI, please review important information using sources."` | Introductory text providing context or disclaimers about the chatbot. |
| **`theme`** | `"dark" \| "light"` | `"dark"` | Color theme of the widget interface. Options: `"dark"` or `"light"`. Defaults to `"dark"`. |
| **`buttonIcon`** | `string` | `"https://your-icon"` | URL for the icon image used in the widget's launch button. |
| **`buttonBg`** | `string` | `"#222327"` | Background color of the widget's launch button. |
| **`size`** | `"small" \| "medium"` | `"medium"` | Size of the widget. Options: `"small"` or `"medium"`. Defaults to `"medium"`. |
| **`showSources`** | `boolean` | `false` | Enables displaying source URLs for data fetched within the widget. When set to `true`, the widget will show the original sources of the fetched data. |
---
## Notes on Widget Properties
* **Full Customization:** Every property listed in the table can be customized. Override the defaults to create a widget that perfectly matches your branding and application context. From avatars and titles to color schemes, you have fine-grained control over the widget's presentation.
* **API Key Handling:** The `apiKey` prop is optional. Only include it if your DocsGPT backend API is configured to require API key authentication. `apiHost` for DocsGPT Cloud is `https://gptcloud.arc53.com/`
## Explore and Customize Further
The DocsGPT Widget is fully open-source, allowing for deep customization and extension beyond the readily available props.
The complete source code for the React-based widget is available in the `extensions/react-widget` directory within the main [DocsGPT GitHub Repository](https://github.com/arc53/DocsGPT). Feel free to explore the code, fork the repository, and tailor the widget to your exact requirements.

View File

@@ -0,0 +1,116 @@
---
title: Integrate DocsGPT Search Bar into Your Web Application
description: Embed the DocsGPT Search Bar Widget in your React or HTML projects to provide AI-powered document search functionality to your users.
---
import { Tabs } from 'nextra/components'
# Integrating DocsGPT Search Bar Widget
## Introduction
The DocsGPT Search Bar Widget offers a simple yet powerful way to embed AI-powered document search directly into your web applications. This widget allows users to perform searches across your documents or pages, enabling them to quickly find the information they need. This guide will walk you through embedding the Search Bar Widget into your projects, whether you're using React or plain HTML.
Try out the interactive widget showcase and customize its parameters at the [DocsGPT Widget Demo](https://widget.docsgpt.cloud/).
## Setup
<Tabs items={['React', 'HTML']}>
<Tabs.Tab>
## React Setup
### Installation
Make sure you have Node.js and npm (or yarn, pnpm) installed in your project. Navigate to your project directory in the terminal and install the `docsgpt` package:
```bash npm
npm install docsgpt
```
### Usage
In your React component file, import the `SearchBar` component:
```js
import { SearchBar } from "docsgpt";
```
Now, you can embed the widget within your React component's JSX:
```jsx
<SearchBar
apiKey="your-api-key"
apiHost="https://your-docsgpt-api.com"
theme="light"
placeholder="Search or Ask AI..."
width="300px"
/>
```
</Tabs.Tab>
<Tabs.Tab>
### Installation
To use the DocsGPT Search Bar Widget directly in HTML, include the widget script from a CDN in your HTML file:
```html filename="html"
<script
src="https://unpkg.com/docsgpt/dist/legacy/main.js"
type="module"
></script>
```
### Usage
In your HTML `<body>`, add a `<div>` element where you want to render the Search Bar Widget. Set an `id` for easy targeting.
```html filename="html"
<div id="search-bar-container"></div>
```
Then, in a `<script type="module">` block, use the `renderSearchBar` function to initialize the widget, passing the `id` of your `<div>` and a configuration object. To link the widget to your DocsGPT API and configure its behaviour, pass the relevant parameters within the configuration object of `renderSearchBar`.
```html filename="html"
<!DOCTYPE html>
<div id="search-bar-container"></div>
<script type="module">
window.onload = function() {
renderSearchBar('search-bar-container', {
apiKey: 'your-api-key-here',
apiHost: 'https://your-api-host.com',
theme: 'light',
placeholder: 'Search here...',
width: '300px'
});
}
</script>
```
</Tabs.Tab>
</Tabs>
---
## Properties Table
The DocsGPT Search Bar Widget offers a range of customizable properties that allow you to tailor its appearance and behavior to perfectly match your web application. These parameters can be modified directly when embedding the widget in your React components or HTML code. Below is a detailed overview of each available prop:
| **Prop** | **Type** | **Default Value** | **Description** |
|-----------------|-----------|-------------------------------------|--------------------------------------------------------------------------------------------------|
| **`apiKey`** | `string` | `"your-api-key"` | API key for authentication with your DocsGPT API. Leave empty if no authentication is required. |
| **`apiHost`** | `string` | `"https://gptcloud.arc53.com"` | **Required.** The URL of your DocsGPT API backend. This endpoint handles vector similarity search queries. |
| **`theme`** | `"dark" \| "light"` | `"dark"` | Color theme of the search bar. Options: `"dark"` or `"light"`. Defaults to `"dark"`. |
| **`placeholder`** | `string` | `"Search or Ask AI..."` | Placeholder text displayed in the search input field. |
| **`width`** | `string` | `"256px"` | Width of the search bar. Accepts any valid CSS width value (e.g., `"300px"`, `"100%"`, `"20rem"`). |
---
## Notes on Widget Properties
* **Full Customization:** Every property listed in the table can be customized. Override the defaults to create a Search Bar Widget that perfectly matches your branding and application context.
* **API Key Handling:** The `apiKey` prop is optional. Only include it if your DocsGPT backend API is configured to require API key authentication. `apiHost` for DocsGPT Cloud is `https://gptcloud.arc53.com/`
## Explore and Customize Further
The DocsGPT Search Bar Widget is fully open-source, allowing for deep customization and extension beyond the readily available props.
The complete source code for the React-based widget is available in the `extensions/react-widget` directory within the main [DocsGPT GitHub Repository](https://github.com/arc53/DocsGPT). Feel free to explore the code, fork the repository, and tailor the widget to your exact requirements.

View File

@@ -0,0 +1,157 @@
---
title: Architecture
description: High-level architecture of DocsGPT
---
## Introduction
DocsGPT is designed as a modular and scalable application for building knowledge-based GenAI systems. This document outlines the high-level architecture of DocsGPT, highlighting its key components.
## High-Level Architecture
This diagram provides a bird's-eye view of the DocsGPT architecture, illustrating the main components and their interactions.
```mermaid
flowchart LR
User["User"] --> Frontend["Frontend (React/Vite)"]
Frontend --> Backend["Backend API (Flask)"]
Backend --> LLM["LLM Integration Layer"] & VectorStore["Vector Stores"] & TaskQueue["Task Queue (Celery)"] & Databases["Databases (MongoDB, Redis)"]
LLM -- Cloud APIs / Local Engines --> InferenceEngine["Inference Engine"]
VectorStore -- Document Embeddings --> Indexes[("Indexes")]
TaskQueue -- Asynchronous Tasks --> DocumentIngestion["Document Ingestion"]
style Frontend fill:#AA00FF,color:#FFFFFF
style Backend fill:#AA00FF,color:#FFFFFF
style LLM fill:#AA00FF,color:#FFFFFF
style TaskQueue fill:#AA00FF,color:#FFFFFF,stroke:#AA00FF
style DocumentIngestion fill:#AA00FF,color:#FFFFFF,stroke:none
```
## Component Descriptions
### 1. Frontend (React/Vite)
* **Technology:** Built using React and Vite.
* **Responsibility:** This is the user interface of DocsGPT, providing users with a UI to ask questions and receive answers, and to configure prompts, tools, and other settings. It handles user input, displays conversation history, shows sources, and manages settings.
* **Key Features:**
* Clean and responsive UI.
* Simple static client-side rendering.
* Manages conversation state and settings.
* Communicates with the Backend API for data retrieval and processing.
### 2. Backend API (Flask)
* **Technology:** Implemented using Flask (Python).
* **Responsibility:** The Backend API serves as the core logic and orchestration layer of DocsGPT. It receives requests from the Frontend, Extensions or API clients, processes them, and coordinates interactions between different components.
* **Key Features:**
* API endpoints for handling user queries, document uploads, and settings configurations.
* Manages the overall application flow and logic.
* Integrates with the LLM Integration Layer, Vector Stores, Task Queue, Tools, Agents and Databases.
* Provides Swagger documentation for API endpoints.
### 3. LLM Integration Layer (Part of backend)
* **Technology:** Supports multiple LLM APIs and local engines.
* **Responsibility:** This layer provides an abstraction for interacting with Large Language Models (LLMs).
* **Key Features:**
* Supports LLMs from OpenAI, Google, Anthropic, Groq, HuggingFace Inference API, and Azure OpenAI, and is also compatible with local models served via Ollama, LLaMa.cpp, Text Generation Inference (TGI), SGLang, vLLM, Aphrodite, FriendliAI, and LMDeploy.
* Manages API key handling, request formatting, and tool formatting.
* Offers caching mechanisms to improve response times and reduce API usage.
* Handles streaming responses for a more interactive user experience.
### 4. Vector Stores (Part of backend)
* **Technology:** Supports multiple vector databases.
* **Responsibility:** Vector Stores are used to store and retrieve vector embeddings of document chunks. This enables semantic search and retrieval of relevant document snippets in response to user queries.
* **Key Features:**
* Supports vector databases including FAISS, Elasticsearch, Qdrant, Milvus, and LanceDB.
* Provides storage and indexing of high-dimensional vector embeddings.
* Enables editing and updating of vector indexes including specific chunks.
### 5. Parser Integration Layer (Part of backend)
* **Technology:** Supports multiple formats for file processing and remote source uploading.
* **Responsibility:** The Parser Integration Layer handles uploading, parsing, chunking, embedding, and indexing documents.
* **Key Features:**
* Supports various document formats (PDF, DOCX, TXT, etc.) and remote sources (web URLs, sitemaps).
* Handles document parsing, text chunking, and embedding generation.
* Utilizes Celery for asynchronous processing, ensuring efficient handling of large documents.
### 6. Task Queue (Celery)
* **Technology:** Celery with Redis as broker and backend.
* **Responsibility:** Celery handles asynchronous task processing for long-running operations such as document ingestion and indexing. This ensures that the main application remains responsive and efficient.
* **Key Features:**
* Manages background tasks for document processing and indexing.
* Improves application responsiveness by offloading heavy tasks.
* Enhances scalability and reliability through distributed task processing.
### 7. Databases (MongoDB, Redis)
* **Technology:** MongoDB and Redis.
* **Responsibility:** Databases are used for persistent data storage and caching. MongoDB stores structured data such as conversations, documents, user settings, and API keys. Redis is used as a cache, as well as a message broker for Celery.
## Request Flow Diagram
This diagram illustrates the sequence of steps involved when a user submits a question to DocsGPT.
```mermaid
sequenceDiagram
participant User
participant Frontend
participant BackendAPI
participant LLMIntegrationLayer
participant VectorStores
participant InferenceEngine
User->>Frontend: User asks a question
Frontend->>BackendAPI: API Request (Question)
BackendAPI->>VectorStores: Fetch relevant document chunks (Similarity Search)
VectorStores-->>BackendAPI: Return document chunks
BackendAPI->>LLMIntegrationLayer: Send question and document chunks
LLMIntegrationLayer->>InferenceEngine: LLM API Request (Prompt + Context)
InferenceEngine-->>LLMIntegrationLayer: LLM API Response (Answer)
LLMIntegrationLayer-->>BackendAPI: Return Answer
BackendAPI->>Frontend: API Response (Answer)
Frontend->>User: Display Answer
Note over Frontend,BackendAPI: Data flow is simplified for clarity
```
## Deployment Architecture
DocsGPT is designed to be deployed using Docker and Kubernetes; here is a quick overview of a simple k8s deployment.
```mermaid
graph LR
subgraph Kubernetes Cluster
subgraph Nodes
subgraph Node 1
FrontendPod[Frontend Pod]
BackendAPIPod[Backend API Pod]
end
subgraph Node 2
CeleryWorkerPod[Celery Worker Pod]
RedisPod[Redis Pod]
end
subgraph Node 3
MongoDBPod[MongoDB Pod]
VectorStorePod[Vector Store Pod]
end
end
LoadBalancer[Load Balancer] --> docsgpt-frontend-service[docsgpt-frontend-service]
LoadBalancer --> docsgpt-api-service[docsgpt-api-service]
docsgpt-frontend-service --> FrontendPod
docsgpt-api-service --> BackendAPIPod
BackendAPIPod --> CeleryWorkerPod
BackendAPIPod --> RedisPod
BackendAPIPod --> MongoDBPod
BackendAPIPod --> VectorStorePod
CeleryWorkerPod --> RedisPod
BackendAPIPod --> InferenceEngine[(Inference Engine)]
VectorStorePod --> Indexes[(Indexes)]
MongoDBPod --> Data[(Data)]
RedisPod --> Cache[(Cache)]
end
User[User] --> LoadBalancer
```

View File

@@ -1,3 +1,8 @@
---
title: Customizing Prompts
description: This guide will explain how to change prompts in DocsGPT and why it might be beneficial. Additionally, this article explains the extra variables that can be used in prompts.
---
import Image from 'next/image'
# Customizing the Main Prompt
@@ -34,6 +39,8 @@ When using code examples, use the following format:
{summaries}
```
Note that `{summaries}` allows the model to see and respond to your uploaded documents. If you don't want this functionality, you can safely remove it from the customized prompt.
Feel free to customize the prompt to align it with your specific use case or the kind of responses you want from the AI. For example, you can focus on specific document types, industries, or topics to get more targeted results.
## Conclusion

View File

@@ -1,3 +1,7 @@
---
title: How to Train on Other Documentation
description: A step-by-step guide on how to effectively train DocsGPT on additional documentation sources.
---
import { Callout } from 'nextra/components'
import Image from 'next/image'

View File

@@ -1,3 +1,7 @@
---
title:
description:
---
import { Callout } from 'nextra/components'
import Image from 'next/image'
@@ -26,24 +30,13 @@ Choose the LLM of your choice.
<Image src="/llms.gif" alt="prompts" width={800} height={500} />
### For Open source llm change:
<Steps >
<Steps>
### Step 1
For open source you have to edit .env file with LLM_NAME with their desired LLM name.
For the open-source version, please edit `LLM_NAME`, `MODEL_NAME`, and other settings in the `.env` file. Refer to [⚙️ App Configuration](/Deploying/DocsGPT-Settings) for more information.
### Step 2
All the supported LLM providers are here application/llm and you can check what env variable are needed for each
List of latest supported LLMs are https://github.com/arc53/DocsGPT/blob/main/application/llm/llm_creator.py
### Step 3
Visit application/llm and select the file of your selected llm and there you will find the specific requirements needed to be filled in order to use it,i.e API key of that llm.
Visit [☁️ Cloud Providers](/Models/cloud-providers) for the updated list of online models. Make sure you have the right API_KEY and correct LLM_NAME.
For self-hosted please visit [🖥️ Local Inference](/Models/local-inference).
</Steps>
### For OpenAI-Compatible Endpoints:
DocsGPT supports the use of OpenAI-compatible endpoints through base URL substitution. This feature allows you to use alternative AI models or services that implement the OpenAI API interface.
Set `OPENAI_BASE_URL` in your environment: either add it to your `.env` file with the desired base URL, or add the environment variable to the backend container in your docker-compose file.
> Make sure you have the right API_KEY and correct LLM_NAME.

View File

@@ -1,3 +1,8 @@
---
title:
description:
---
# Avoiding hallucinations
If your AI answers using external knowledge because your prompt is not explicit enough, that is okay by default, as we try to make DocsGPT friendly and helpful.

View File

@@ -9,10 +9,16 @@
},
"How-to-use-different-LLM": {
"title": "️🤖 How to use different LLM's",
"href": "/Guides/How-to-use-different-LLM"
"href": "/Guides/How-to-use-different-LLM",
"display": "hidden"
},
"My-AI-answers-questions-using-external-knowledge": {
"title": "💭️ Avoiding hallucinations",
"href": "/Guides/My-AI-answers-questions-using-external-knowledge"
"href": "/Guides/My-AI-answers-questions-using-external-knowledge",
"display": "hidden"
},
"Architecture": {
"title": "🏗️ Architecture",
"href": "/Guides/Architecture"
}
}

View File

@@ -0,0 +1,14 @@
{
"cloud-providers": {
"title": "☁️ Cloud Providers",
"href": "/Models/cloud-providers"
},
"local-inference": {
"title": "🖥️ Local Inference",
"href": "/Models/local-inference"
},
"embeddings": {
"title": "📝 Embeddings",
"href": "/Models/embeddings"
}
}

View File

@@ -0,0 +1,55 @@
---
title: Connecting DocsGPT to Cloud LLM Providers
description: Connect DocsGPT to various Cloud Large Language Model (LLM) providers to power your document Q&A.
---
# Connecting DocsGPT to Cloud LLM Providers
DocsGPT is designed to seamlessly integrate with a variety of Cloud Large Language Model (LLM) providers, giving you access to state-of-the-art AI models for document question answering.
## Configuration via `.env` file
The primary method for configuring your LLM provider in DocsGPT is through the `.env` file. For a comprehensive understanding of all available settings, please refer to the detailed [DocsGPT Settings Guide](/Deploying/DocsGPT-Settings).
To connect to a cloud LLM provider, you will typically need to configure the following basic settings in your `.env` file:
* **`LLM_NAME`**: This setting is essential and identifies the specific cloud provider you wish to use (e.g., `openai`, `google`, `anthropic`).
* **`MODEL_NAME`**: Specifies the exact model you want to utilize from your chosen provider (e.g., `gpt-4o`, `gemini-2.0-flash`, `claude-3-5-sonnet-latest`). Refer to your provider's documentation for a list of available models.
* **`API_KEY`**: Almost all cloud LLM providers require an API key for authentication. Obtain your API key from your chosen provider's platform and securely store it in your `.env` file.
## Explicitly Supported Cloud Providers
DocsGPT offers direct, streamlined support for the following cloud LLM providers, making configuration straightforward. The table below outlines the `LLM_NAME` and example `MODEL_NAME` values to use for each provider in your `.env` file.
| Provider | `LLM_NAME` | Example `MODEL_NAME` |
| :--------------------------- | :------------- | :-------------------------- |
| DocsGPT Public API | `docsgpt` | `None` |
| OpenAI | `openai` | `gpt-4o` |
| Google (Vertex AI, Gemini) | `google` | `gemini-2.0-flash` |
| Anthropic (Claude) | `anthropic` | `claude-3-5-sonnet-latest` |
| Groq | `groq` | `llama-3.1-8b-instant` |
| HuggingFace Inference API | `huggingface` | `meta-llama/Llama-3.1-8B-Instruct` |
| Azure OpenAI | `azure_openai` | `gpt-4o` |
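For instance, to use Anthropic's Claude from the table above, a `.env` file could look like this:
```
LLM_NAME=anthropic
API_KEY=YOUR_ANTHROPIC_API_KEY # Replace with your actual Anthropic API key
MODEL_NAME=claude-3-5-sonnet-latest
```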
## Connecting to OpenAI-Compatible Cloud APIs
DocsGPT's flexible architecture allows you to connect to any cloud provider that offers an API compatible with the OpenAI API standard. This opens up a vast ecosystem of LLM services.
To connect to an OpenAI-compatible cloud provider, you will still use `LLM_NAME=openai` in your `.env` file. However, you will also need to specify the API endpoint of your chosen provider using the `OPENAI_BASE_URL` setting. You will also likely need to provide an `API_KEY` and `MODEL_NAME` as required by that provider.
**Example for DeepSeek (OpenAI-Compatible API):**
To connect to DeepSeek, which offers an OpenAI-compatible API, your `.env` file could be configured as follows:
```
LLM_NAME=openai
API_KEY=YOUR_API_KEY # Your DeepSeek API key
MODEL_NAME=deepseek-chat # Or your desired DeepSeek model name
OPENAI_BASE_URL=https://api.deepseek.com/v1 # DeepSeek's OpenAI API URL
```
Remember to consult the documentation of your chosen OpenAI-compatible cloud provider for their specific API endpoint, required model names, and authentication methods.
## Adding Support for Other Cloud Providers
If you wish to connect to a cloud provider that is not explicitly listed above or doesn't offer OpenAI API compatibility, you can extend DocsGPT to support it. Within the DocsGPT repository, navigate to the `application/llm` directory. Here, you will find Python files defining the existing LLM integrations. You can use these files as examples to create a new module for your desired cloud provider. After creating your new LLM module, you will need to register it within the `llm_creator.py` file. This process involves some coding, but it allows for virtually unlimited extensibility to connect to any cloud-based LLM service with an accessible API.
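As a purely hypothetical sketch of that pattern (the class name, method name, endpoint, and registration shown below are illustrative and do not reflect the actual base-class interface; check `application/llm` and `llm_creator.py` in your checkout for the real one), a new provider module might look roughly like this:
```python
# application/llm/my_provider.py -- hypothetical example, not the actual DocsGPT interface
import requests


class MyProviderLLM:
    """Illustrative wrapper around a fictional cloud LLM HTTP API."""

    def __init__(self, api_key, base_url="https://api.my-provider.example/v1"):
        self.api_key = api_key
        self.base_url = base_url

    def gen(self, model, messages):
        """Send a chat-style request and return the generated text."""
        response = requests.post(
            f"{self.base_url}/chat",
            headers={"Authorization": f"Bearer {self.api_key}"},
            json={"model": model, "messages": messages},
            timeout=60,
        )
        response.raise_for_status()
        return response.json()["output"]


# In llm_creator.py (again, illustrative): register the new class so that
# LLM_NAME=my_provider can resolve to it, e.g. by adding
# "my_provider": MyProviderLLM to the provider registry.
```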

Some files were not shown because too many files have changed in this diff.