Compare commits

...

1231 Commits

Author SHA1 Message Date
Alex
4025e55b95 Merge pull request #1028 from utin-francis-peter/fix/issue#1023
Fix: adjusted alignment of submit query icon within its container
2024-07-17 00:25:42 +01:00
utin-francis-peter
e1e63ebd64 Merge branch 'main' of https://github.com/utin-francis-peter/DocsGPT into fix/issue#1023 2024-07-16 22:05:12 +01:00
utin-francis-peter
8279df48bf removed shrink 2024-07-16 22:04:26 +01:00
Alex
d86a06fab0 Merge pull request #1027 from utin-francis-peter/feat/issue#1017
Feat: Implementation for issue#1017
2024-07-16 14:35:18 +01:00
Siddhant Rai
90b24dd915 fix: removed unused TextArea component 2024-07-16 18:20:13 +05:30
Alex
bacd2a6893 Merge pull request #1034 from ManishMadan2882/main
Feat: sharing endpoints
2024-07-16 12:28:59 +01:00
Alex
0f059f247d fix: ruff lint 2024-07-16 12:28:43 +01:00
ManishMadan2882
e2b76d9c29 feat(share): share btn above conversations 2024-07-16 02:09:36 +05:30
ManishMadan2882
1107a2f2bc refactor App.tsx: better convention 2024-07-15 17:56:23 +05:30
ManishMadan2882
efd43013da minor fix 2024-07-15 05:13:28 +05:30
ManishMadan2882
7b8458b47d fix layout 2024-07-15 05:00:13 +05:30
ManishMadan2882
84eed09a17 feedback visible conditioned, update meta info in shared 2024-07-15 02:55:38 +05:30
ManishMadan2882
35b1a40d49 feat(share) translate 2024-07-14 04:13:25 +05:30
ManishMadan2882
81d7fe3fdb refactor App, add /shared/id page 2024-07-14 03:29:06 +05:30
ManishMadan2882
02187fed4e add timestamp in ISO, remove sources 2024-07-14 03:27:53 +05:30
ManishMadan2882
019bf013ac add css class: no-scrollbar 2024-07-12 02:51:59 +05:30
ManishMadan2882
d6e59a6a0a conversation tile: add menu, add share modal 2024-07-11 21:45:47 +05:30
utin-francis-peter
46aa862943 Merge branch 'main' of https://github.com/utin-francis-peter/DocsGPT into feat/issue#1017 2024-07-09 13:34:49 +01:00
utin-francis-peter
0413cab0d9 chore: removed all TextArea related entities from branch as it's outside the scope of the branch/issue 2024-07-09 13:32:46 +01:00
Manish Madan
3357ce8f33 Merge branch 'arc53:main' into main 2024-07-09 16:29:04 +05:30
Alex
1776f6e7fd Merge pull request #1024 from blackviking27/feat-bubble-width 2024-07-09 09:06:39 +04:00
ManishMadan2882
edfe5e1156 restrict redundant sharing, add user field 2024-07-08 15:59:19 +05:30
ManishMadan2882
0768992848 add route to share and fetch public conversations 2024-07-08 03:03:46 +05:30
FIRST_NAME LAST_NAME
1224f94879 moved the three icons to the bottom of conversation bubble 2024-07-07 21:52:20 +05:30
Alex
b58c5344b8 Merge pull request #1033 from arc53/dependabot/npm_and_yarn/extensions/web-widget/braces-3.0.3
chore(deps-dev): bump braces from 3.0.2 to 3.0.3 in /extensions/web-widget
2024-07-07 17:24:03 +04:00
dependabot[bot]
7175bc0595 chore(deps-dev): bump braces in /extensions/web-widget
Bumps [braces](https://github.com/micromatch/braces) from 3.0.2 to 3.0.3.
- [Changelog](https://github.com/micromatch/braces/blob/master/CHANGELOG.md)
- [Commits](https://github.com/micromatch/braces/compare/3.0.2...3.0.3)

---
updated-dependencies:
- dependency-name: braces
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-07 13:20:00 +00:00
Alex
b7a6f5696d Merge pull request #1032 from utin-francis-peter/fix/issue#1016
FEAT: Auto Language Detection using User's Browser Default
2024-07-07 17:19:32 +04:00
utin-francis-peter
abf5b89c28 refactor: handling applied styles based on colorVariant in a neater manner 2024-07-07 08:33:02 +01:00
utin-francis-peter
d554444b0e chore: updated Input prop from hasSilverBorder to colorVariant 2024-07-06 21:22:41 +01:00
utin-francis-peter
16ae0725e6 chore: took off the option of looking-up docsgpt-locale lang key in localStorage on first load 2024-07-06 20:41:21 +01:00
utin-francis-peter
61feced541 Merge branch 'feat/issue#1017' of https://github.com/utin-francis-peter/DocsGPT into feat/issue#1017 2024-07-05 21:57:46 +01:00
utin-francis-peter
a1d4db2f1e Merge branch 'main' of https://github.com/utin-francis-peter/DocsGPT into feat/issue#1017 2024-07-05 12:15:38 +01:00
Utin Francis Peter
357e9af627 chore: typo elimination
Co-authored-by: Siddhant Rai <47355538+siiddhantt@users.noreply.github.com>
2024-07-05 12:07:33 +01:00
utin-francis-peter
a41519be63 fix: minor typo 2024-07-05 11:41:12 +01:00
FIRST_NAME LAST_NAME
870e6b07c8 Merge branch 'main' of https://github.com/blackviking27/DocsGPT into feat-bubble-width 2024-07-04 19:12:04 +05:30
utin-francis-peter
6f41759519 Merge branch 'main' of https://github.com/utin-francis-peter/DocsGPT into fix/issue#1016 2024-07-04 10:11:57 +01:00
utin-francis-peter
6727c42f18 feat: auto browser lang detection on first visit 2024-07-04 10:05:54 +01:00
utin-francis-peter
90c367842f chore: added browser lang detector package by i18next 2024-07-04 09:00:14 +01:00
Alex
a0bb6e370e Merge pull request #1018 from utin-francis-peter/fix/issue#1014 2024-07-04 00:35:29 +04:00
Alex
f2910ab9d1 Merge pull request #1029 from arc53/dependabot/npm_and_yarn/docs/braces-3.0.3
chore(deps): bump braces from 3.0.2 to 3.0.3 in /docs
2024-07-03 23:11:43 +04:00
utin-francis-peter
b4bfed2ccb style: query submission icon centering 2024-07-03 15:46:35 +01:00
dependabot[bot]
2fcde61b6d chore(deps): bump braces from 3.0.2 to 3.0.3 in /docs
Bumps [braces](https://github.com/micromatch/braces) from 3.0.2 to 3.0.3.
- [Changelog](https://github.com/micromatch/braces/blob/master/CHANGELOG.md)
- [Commits](https://github.com/micromatch/braces/compare/3.0.2...3.0.3)

---
updated-dependencies:
- dependency-name: braces
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-07-03 13:10:18 +00:00
Alex
ffddf10de5 Merge pull request #1026 from ManishMadan2882/main 2024-07-03 17:09:46 +04:00
utin-francis-peter
6e3bd5e6f3 fix: adjusted alignment of submit query icon within its container 2024-07-03 13:29:34 +01:00
utin-francis-peter
b21230c4d6 chore: migrated to using custom Input component to address redundant twClasses 2024-07-03 12:34:13 +01:00
utin-francis-peter
0a533b64e1 chore: migrated prop type definition into a types declaration file for components. other components prop types will live here 2024-07-03 11:49:49 +01:00
utin-francis-peter
15b0e321bd chore: TextArea component to replace Div contentEditable for entering prompts 2024-07-03 11:24:29 +01:00
ManishMadan2882
4d749340a2 fix: lint error - semantic ambiguity 2024-07-03 13:25:47 +05:30
utin-francis-peter
0ef6ffa452 gap between y-borders and prompts input + border-radius reduction as prompts input grows 2024-07-02 19:48:19 +01:00
FIRST_NAME LAST_NAME
d7b1310ba3 conversation bubble width fix 2024-07-02 22:11:21 +05:30
utin-francis-peter
7408454a75 chore: prompts input now uses useState hook for state change and inbuilt autoFocus 2024-07-01 19:54:31 +01:00
utin-francis-peter
07b71468cc style: removed custom padding and used twClasses 2024-06-29 20:45:33 +01:00
utin-francis-peter
522e966194 refactor: custom input component is used. inputRef is also replaced with state value 2024-06-29 18:58:13 +01:00
utin-francis-peter
937c60c9cf style: updated custom css class to match textInput component's 2024-06-29 18:55:10 +01:00
utin-francis-peter
bbb1e22163 style: spacings... 2024-06-28 20:19:01 +01:00
utin-francis-peter
a16e83200a style fix: gap between conversations wrapper and prompts input wrapper 2024-06-28 15:16:55 +01:00
utin-francis-peter
d437521710 style fix: response bubble padding and radius 2024-06-28 14:45:14 +01:00
utin-francis-peter
5cbf4cf352 style fix: padding and radius of question bubble 2024-06-28 14:24:34 +01:00
Alex
2985e3b75b Merge pull request #1013 from arc53/fix/singleton-llama-cpp
fix: use singleton in llama_cpp
2024-06-25 18:25:01 +01:00
Alex
f34a75fc5b Merge pull request #1004 from utin-francis-peter/fix/traning-progress
Fix/training progress
2024-06-25 14:57:26 +01:00
Alex
5aa88714b8 refactor: Add thread lock 2024-06-25 14:41:04 +01:00
Alex
ce56a414e0 fix: use singleton 2024-06-25 14:37:00 +01:00
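The pair of fixes above ("fix: use singleton" plus "refactor: Add thread lock") point at a common pattern: one shared llama_cpp model instance guarded by a lock, so concurrent requests don't each load the model. A minimal sketch of that pattern follows; the class and method names are hypothetical, not DocsGPT's actual code.

```python
import threading


class LlamaSingleton:
    """Hypothetical sketch: one shared model instance, created at most once."""

    _instance = None
    _lock = threading.Lock()

    def __init__(self):
        # Stand-in for the expensive llama_cpp model load.
        self.model = object()

    @classmethod
    def get_instance(cls):
        # Double-checked locking: take the lock only while no instance exists.
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    cls._instance = cls()
        return cls._instance
```

Every caller then receives the same object, and the lock keeps two threads from racing to construct it.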
Alex
ba4a7dcd45 Merge pull request #1012 from siiddhantt/fix/input-box-cutting-content
fix: input box improvements
2024-06-25 13:38:08 +01:00
Siddhant Rai
85c648da6c fix: large spacing + padding issue in input box 2024-06-25 17:58:16 +05:30
Alex
483f8eb690 Merge pull request #1011 from arc53/dependabot/npm_and_yarn/braces-3.0.3
chore(deps-dev): bump braces from 3.0.2 to 3.0.3
2024-06-25 13:10:18 +01:00
dependabot[bot]
93c868d698 chore(deps-dev): bump braces from 3.0.2 to 3.0.3
Bumps [braces](https://github.com/micromatch/braces) from 3.0.2 to 3.0.3.
- [Changelog](https://github.com/micromatch/braces/blob/master/CHANGELOG.md)
- [Commits](https://github.com/micromatch/braces/compare/3.0.2...3.0.3)

---
updated-dependencies:
- dependency-name: braces
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-25 12:05:49 +00:00
Alex
a14e70e3f4 Merge pull request #1006 from arc53/dependabot/npm_and_yarn/frontend/braces-3.0.3
chore(deps-dev): bump braces from 3.0.2 to 3.0.3 in /frontend
2024-06-25 13:04:35 +01:00
Alex
a6ff606cae Merge pull request #1008 from utin-francis-peter/fix/issue#998
Fix/issue#998
2024-06-24 22:14:24 +01:00
utin-francis-peter
651eb3374c chore: on language change when active tab is general, active tab is persisted as general 2024-06-23 23:33:27 +01:00
utin-francis-peter
68c71adc5a chore: i18n "General" tab title 2024-06-23 23:29:59 +01:00
utin-francis-peter
0c4ca9c94d refactor: selected language gets stored in local state, triggering an effect that updates lang value in local storage and change language 2024-06-23 23:27:43 +01:00
utin-francis-peter
8c04f5b3f1 chore: selected language isn't included in language options 2024-06-23 23:19:14 +01:00
Alex
35b29a0a1e Merge pull request #1005 from siiddhantt/fix/modals-and-sidebar
fix: modals close on clicking outside
2024-06-23 12:51:51 +01:00
dependabot[bot]
d289f432b1 chore(deps-dev): bump braces from 3.0.2 to 3.0.3 in /frontend
Bumps [braces](https://github.com/micromatch/braces) from 3.0.2 to 3.0.3.
- [Changelog](https://github.com/micromatch/braces/blob/master/CHANGELOG.md)
- [Commits](https://github.com/micromatch/braces/compare/3.0.2...3.0.3)

---
updated-dependencies:
- dependency-name: braces
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-21 18:49:54 +00:00
Siddhant Rai
e16e269775 fix: dropdown closes on clicking outside 2024-06-21 23:35:03 +05:30
utin-francis-peter
4e5d0c2e84 Merge branch 'main' of https://github.com/utin-francis-peter/DocsGPT into fix/traning-progress 2024-06-21 18:06:55 +01:00
utin-francis-peter
c9a2034936 chore: adjusted delay time before training starts 2024-06-21 18:04:30 +01:00
Alex
b70fc1151d fix: print error to console 2024-06-21 14:54:32 +01:00
utin-francis-peter
c11034edcd chore: slight delay between uploading and learning progress transition 2024-06-20 23:35:39 +01:00
utin-francis-peter
804d9b42a5 Merge branch 'main' of https://github.com/utin-francis-peter/DocsGPT into fix/traning-progress 2024-06-20 22:33:44 +01:00
utin-francis-peter
b1bb4e6758 fix: uploading/training progress bar 2024-06-20 22:18:18 +01:00
Alex
76ed8f0ba2 Merge pull request #1002 from ManishMadan2882/main
Better Error handling on /stream endpoint
2024-06-20 20:00:55 +01:00
Alex
4dde7eaea1 feat: Improve error handling in /stream route 2024-06-20 19:51:35 +01:00
Alex
2e2149c110 fix: stream stuff 2024-06-20 19:40:29 +01:00
ManishMadan2882
70bb9477c5 update err msg, if req fails from client 2024-06-20 18:21:19 +05:30
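Several commits in this range ("feat: Improve error handling in /stream route", "update err msg, if req fails from client") harden the streaming endpoint. A hedged sketch of the general idea, with an assumed event shape and function name rather than the repository's actual code: wrap the chunk generator so a mid-stream failure yields an explicit error event instead of a truncated reply.

```python
import json


def stream_with_error_handling(chunks):
    """Illustrative only: forward answer chunks as SSE events; on failure,
    emit a final error event instead of cutting the stream short."""
    try:
        for chunk in chunks:
            yield f"data: {json.dumps({'answer': chunk})}\n\n"
    except Exception as exc:
        yield f"data: {json.dumps({'type': 'error', 'error': str(exc)})}\n\n"
```

A client can then tell a completed answer apart from a failed one by inspecting the last event.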
Alex
ec5363e9c1 Merge pull request #1001 from utin-francis-peter/latest-srcdoc-as-active
Fix: Set Uploaded/Trained/Latest Source Doc as Selected/Active Source Doc
2024-06-20 13:31:10 +01:00
ManishMadan2882
dba3b1c559 sort local vectors in latest first order 2024-06-20 17:58:59 +05:30
utin-francis-peter
9606e3f80c chore: handleDeleteClick now accepts only doc as param 2024-06-20 06:00:32 +01:00
utin-francis-peter
7bc7b500f5 refactor/chore: migrated away from manually removing a deleted source doc from UI / latest docs are fetched after deletion to update UI 2024-06-20 05:58:39 +01:00
utin-francis-peter
c6e804fa10 Merge branch 'main' of https://github.com/utin-francis-peter/DocsGPT into latest-srcdoc-as-active 2024-06-20 00:19:09 +01:00
utin-francis-peter
1cbaf9bd9d chore: updates from upstream 2024-06-20 00:05:14 +01:00
utin-francis-peter
45145685d5 fix: upon successful training of newly uploaded src doc, the latest doc is auto set as selected doc 2024-06-19 23:41:38 +01:00
utin-francis-peter
2fbec6f21f chore: added cleanup fxn for TrainingProgress timeout fxn 2024-06-19 23:39:16 +01:00
ManishMadan2882
ad29d2765f fix: add reducers to raise error, handle complete_stream() 2024-06-20 00:10:29 +05:30
Alex
e47e751142 fix link 2024-06-19 12:35:30 +01:00
Alex
c63d4ccf3e Merge pull request #1000 from arc53/feat/upgrade-ubuntu-docker
upgrade docker to 24.04
2024-06-19 11:57:37 +01:00
Alex
e5c30cf841 upgrade docker to 24.04 2024-06-19 11:45:37 +01:00
Alex
c80678aac5 Merge pull request #994 from xucailiang/fix-celery-import-error
rename celery.py
2024-06-19 09:47:52 +01:00
xucai
1754570057 rename celery_init.py 2024-06-19 16:17:09 +08:00
xucailiang
d87b411193 Merge branch 'arc53:main' into fix-celery-import-error 2024-06-19 15:16:39 +08:00
utin-francis-peter
8fc6284317 chore: on deleting an uploaded doc, default doc gets set as selected source doc 2024-06-18 23:33:49 +01:00
Alex
eae49d2367 Merge pull request #996 from arc53/feat/memory-embedding-singleton
chore: Refactor embeddings instantiation to use a singleton pattern
2024-06-18 11:52:27 +01:00
ManishMadan2882
69287c5198 feat: err handling /stream 2024-06-18 16:12:18 +05:30
Alex
e6b3984f78 Merge pull request #988 from utin-francis-peter/fix/retry-btn
Fix/retry-btn
2024-06-15 11:36:46 +01:00
Alex
547fe888d4 Merge pull request #991 from vedantbhatter/vedant-branch
Adding in Mandarin translation into DocsGPT
2024-06-14 15:13:45 +01:00
Alex
3454309cbc chore: Refactor embeddings instantiation to use a singleton pattern 2024-06-14 12:58:35 +01:00
utin-francis-peter
544c46cd44 chore: retry btn is side-by-side with error mssg 2024-06-14 00:31:33 +01:00
utin-francis-peter
2c100825cc Merge branch 'main' of https://github.com/utin-francis-peter/DocsGPT into fix/retry-btn 2024-06-13 23:25:33 +01:00
Alex
558ecd84a6 Merge pull request #993 from siiddhantt/fix/input-bar-hidden-safari
fix: input field covered by url bar in safari
2024-06-13 14:18:26 +01:00
utin-francis-peter
df24cfff4f style: improve visibility of bottom-most message bubble 2024-06-12 22:52:43 +01:00
Siddhant Rai
bd5d93a964 fix: unfixed input bar + safe area inset for Safari (iOS) 2024-06-13 00:21:51 +05:30
xucai
ae2ded119f rename celery_init.py 2024-06-12 19:48:28 +08:00
Siddhant Rai
abdb80a6be fix: input field covered by url bar in safari 2024-06-12 15:55:55 +05:30
utin-francis-peter
2f9cbe2bf1 chore: if the user types a new prompt after a failed generation (instead of hitting the retry btn), the failed query is updated with the new prompt before the response is fetched, ensuring every query object remains useful and relevant 2024-06-11 20:30:12 +01:00
utin-francis-peter
2cca7d60d5 chore: modified "retry" generation flow to give users the option of retrying with prev failed response or entering a new prompt into the provided field 2024-06-11 18:19:35 +01:00
Alex
3df745d1d2 Merge pull request #990 from IlyasOsman/token-format
Denominations on tokens
2024-06-11 10:19:28 +01:00
Alex
9862083e0b Update README.md 2024-06-11 10:11:09 +01:00
Vedant Bhatter
7a4976c470 Adding in Mandarin translation into DocsGPT 2024-06-10 17:47:49 -07:00
ilyasosman
8834a19743 Denominations on tokens 2024-06-10 22:50:35 +03:00
Alex
6e15403f60 Merge pull request #989 from SDanielDev/working
Updated nextra docs with new html code block installation instruction
2024-06-10 10:57:45 +01:00
utin-francis-peter
7e1cf10cb2 style: reduced retry container padding 2024-06-09 13:49:26 +01:00
utin-francis-peter
ee762c3c68 chore: modified handleQuestion params for more clarity 2024-06-09 13:47:51 +01:00
utin-francis-peter
32c06414c5 style: added theme adaptable RetryIcon component to Retry btn 2024-06-08 03:29:18 +01:00
SamDanielDev
e97e1ba4bc Updated nextra docs with new html code block installation instruction 2024-06-07 18:16:50 +01:00
utin-francis-peter
2f580f7800 feat: japan locale config 2024-06-07 17:40:33 +01:00
utin-francis-peter
1ce1459455 Merge branch 'main' of https://github.com/utin-francis-peter/DocsGPT into fix/retry-btn 2024-06-07 17:38:03 +01:00
utin-francis-peter
c26573482e style: retry query generation btn 2024-06-07 17:28:13 +01:00
utin-francis-peter
414ec08dee refactor: modified prepResponseView to prioritize query.response and trigger re-render after a failed generation is retried 2024-06-07 17:26:19 +01:00
Alex
1cc78191eb Merge pull request #987 from charlesnilsson/main
my-japanese-translation
2024-06-07 16:14:25 +01:00
Alex
75c6c6081a feat: Add Japanese translation support fix 2024-06-07 16:08:36 +01:00
utin-francis-peter
8d2ebe9718 feat: "Retry" btn conditionally renders in place of query input when a generation fails. Uses prev query to fetch answer when clicked. 2024-06-07 15:59:56 +01:00
Charles Nilsson
eed974b883 my-japanese-translation 2024-06-07 16:44:16 +02:00
utin-francis-peter
ae846dac4d chore: received changes from upstream 2024-06-07 15:33:24 +01:00
utin-francis-peter
0b09c00b50 chore: modified handleQuestion to favor "Retry" action after a failed response generation 2024-06-07 14:47:29 +01:00
Alex
f7a1874cb3 Merge pull request #979 from arc53/dependabot/pip/application/qdrant-client-1.9.0
chore(deps): bump qdrant-client from 1.8.2 to 1.9.0 in /application
2024-06-04 19:13:55 +01:00
dependabot[bot]
28fb04eb7b chore(deps): bump qdrant-client from 1.8.2 to 1.9.0 in /application
Bumps [qdrant-client](https://github.com/qdrant/qdrant-client) from 1.8.2 to 1.9.0.
- [Release notes](https://github.com/qdrant/qdrant-client/releases)
- [Commits](https://github.com/qdrant/qdrant-client/compare/v1.8.2...v1.9.0)

---
updated-dependencies:
- dependency-name: qdrant-client
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-04 17:53:28 +00:00
Alex
34310cf420 Merge pull request #974 from siiddhantt/fix/pr-960
fix/pr-960
2024-06-03 22:44:36 +01:00
Alex
e1d61d7190 Merge pull request #961 from arc53/dependabot/pip/application/requests-2.32.0
build(deps): bump requests from 2.31.0 to 2.32.0 in /application
2024-06-03 22:43:11 +01:00
Alex
9c14ac84cb Merge pull request #977 from shelar1423/main
DocsGPT link update
2024-06-03 17:26:54 +01:00
Siddhant Rai
1d1ea7b6f2 fix: version update 2024-06-03 20:59:52 +05:30
digvijay shelar
92401f5b7c link fix 2024-06-03 20:59:30 +05:30
Siddhant Rai
38ac9218ec fix: unpkg link in readme 2024-06-03 20:43:43 +05:30
Siddhant Rai
48497c749a fix: dompurify import error 2024-06-03 20:36:52 +05:30
Siddhant Rai
72a1892058 fix: added targets for browser environment 2024-06-03 12:57:53 +05:30
Siddhant Rai
f2c328d212 fix: empty types.d.ts generated during build + updated README.md 2024-06-01 14:10:12 +05:30
Alex
e9eafc40a7 Merge pull request #971 from shelar1423/main
FIX: improved documentation
2024-05-30 15:32:48 +01:00
digvijay shelar
933ca1bf81 updated the llm instructions for OS version 2024-05-30 18:51:56 +05:30
digvijay shelar
b4fc9aa7eb new home demo 2024-05-30 18:27:40 +05:30
Digvijay Shelar
dcc475bbef Merge branch 'arc53:main' into main 2024-05-30 18:22:56 +05:30
Alex
1fe35ad0cd Merge pull request #970 from siiddhantt/feature/link-to-source
feat: remote sources have clickable links to original url
2024-05-30 12:06:05 +01:00
Siddhant Rai
f1ed1e0f14 fix: type error 2024-05-30 15:33:16 +05:30
Alex
fcc746fb98 Merge pull request #972 from ManishMadan2882/main
Fix: added translation for the conversation history dropdown
2024-05-29 18:37:43 +01:00
ManishMadan2882
95934a5b7a (i18n): updated for conv history 2024-05-29 22:54:46 +05:30
Digvijay Shelar
d38b101820 Merge branch 'arc53:main' into main 2024-05-29 19:45:35 +05:30
Siddhant Rai
91d730a7bc feat: remote sources have clickable links 2024-05-29 19:07:08 +05:30
Alex
0cfa77b628 chats word in translations 2024-05-29 11:29:00 +01:00
Alex
ca4881ad51 Merge pull request #969 from ManishMadan2882/main
Internationalisation with i18next
2024-05-29 11:23:45 +01:00
digvijay shelar
8c2c064fe2 updated emojis 2024-05-29 15:25:23 +05:30
Digvijay Shelar
10646b9b86 Merge branch 'arc53:main' into main 2024-05-29 15:04:16 +05:30
Alex
967b195946 Merge pull request #967 from starkgate/empty-response-after-streaming
Fix empty response in the conversation
2024-05-28 23:06:46 +01:00
ManishMadan2882
1ae7771290 add spacing in general, minor change 2024-05-29 03:27:53 +05:30
ManishMadan2882
a585fe4d54 refactored locale json 2024-05-28 21:38:42 +05:30
ManishMadan2882
fa3a9fe70e fix: minor changes 2024-05-28 21:35:10 +05:30
ManishMadan2882
99952a393f feat(i18n): modals, Hero, Nav 2024-05-28 20:50:07 +05:30
digvijay shelar
920a41e3ca api section fixed 2024-05-28 20:47:22 +05:30
digvijay shelar
e5bec957a1 issue #962 2024-05-28 20:32:35 +05:30
Alex
41cb765255 Update README.md 2024-05-28 10:09:06 +01:00
Alex
2d12a3cd7a Merge pull request #965 from siiddhantt/feature/set-tokens-message-history
feat: dropdown to adjust conversational history limits
2024-05-28 09:43:21 +01:00
starkgate
df4fe0176c Fix empty response in the conversation 2024-05-28 10:40:55 +02:00
ManishMadan2882
4fcc80719e feat(i18n): settings static content 2024-05-28 01:39:37 +05:30
Alex
f6c66f6ee4 Merge pull request #964 from ManishMadan2882/main
Feature: Token count for vectors
2024-05-27 11:44:11 +01:00
Siddhant Rai
220d137e66 feat: dropdown to adjust conversational history limits 2024-05-26 23:13:01 +05:30
Alex
425803a1b6 chore: Refactor source assignment in api_answer route 2024-05-24 16:50:00 +01:00
Manish Madan
c794ea614a Merge branch 'arc53:main' into main 2024-05-24 21:12:07 +05:30
ManishMadan2882
9000838aab (feat:vectors): calc, add token in db 2024-05-24 21:10:50 +05:30
Alex
2790bda1e9 feat: Update Kubernetes deployment instructions for DocsGPT 2024-05-24 16:16:32 +01:00
Alex
e13d4daa9a chore: Remove unused VECTOR_STORE variable in docsgpt-secrets.yaml 2024-05-24 16:09:31 +01:00
Alex
2f504a4e03 Merge pull request #963 from arc53/feat/kubes-deployment
feat: k8s deployment
2024-05-24 14:48:22 +01:00
Alex
598a50a133 feat: Add Kubernetes deployment instructions for DocsGPT 2024-05-24 14:40:28 +01:00
Alex
1b06a5a3e0 feat: k8s deployment 2024-05-23 18:23:01 +01:00
Alex
9f1d3b0269 Update README.md 2024-05-22 16:34:04 +01:00
Alex
a09543d38b Update README.md 2024-05-22 16:33:48 +01:00
dependabot[bot]
2ab3539925 build(deps): bump requests from 2.31.0 to 2.32.0 in /application
---
updated-dependencies:
- dependency-name: requests
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-21 05:53:49 +00:00
Alex
23ddf53abe Update ci.yml 2024-05-20 12:09:11 +01:00
ilyasosman
d8720d0849 Add DocsGPTWidget embedding support for HTML 2024-05-19 22:08:18 +03:00
Alex
6753b55160 Merge pull request #955 from sossost/feat/add_copy_button_on_code_snippet
Feat : add copy button on code snippet
2024-05-18 13:08:25 +01:00
Alex
7f7f48ad56 Merge pull request #958 from arc53/feat-pre-loading-embeds
chore: Update Docker build platforms for application and frontend and…
2024-05-18 12:49:19 +01:00
jang_yoonsu
149ca01029 fix : Add group property to code block parent element and add copy button condition 2024-05-18 20:43:13 +09:00
Alex
5c8133a810 chore: Update Docker build platforms for application and frontend and optimised embedding import 2024-05-18 12:10:24 +01:00
Alex
2adccdd1b0 Merge pull request #957 from ManishMadan2882/main
Update Sidebar
2024-05-17 14:37:44 +01:00
ManishMadan2882
b91068d658 (navbar): shrink navbar 2024-05-17 18:07:06 +05:30
Alex
4534cafd3f Merge pull request #949 from ManishMadan2882/main
Updating Hero
2024-05-16 23:32:49 +01:00
Alex
405e79d729 removed space 2024-05-16 23:32:12 +01:00
ManishMadan2882
4df2349e9d (hero) minor update 2024-05-17 00:59:47 +05:30
jang_yoonsu
a9b61d3e13 design : add style invisible when lg and visible when hover 2024-05-16 23:29:33 +09:00
jang_yoonsu
3767d14e5c feat: add copy button in code snippet 2024-05-16 23:23:46 +09:00
jang_yoonsu
889a050f25 feat : add copy button component 2024-05-16 23:23:06 +09:00
ManishMadan2882
0701fac807 (hero): hover button outline 2024-05-16 18:42:19 +05:30
ManishMadan2882
9fba91069a lint fix 2024-05-16 18:27:36 +05:30
ManishMadan2882
4f9ce70ff8 (hero): demo queries on click 2024-05-16 18:23:45 +05:30
Alex
5e00d4ded7 Merge pull request #953 from shelar1423/main
FIX: Spinner
2024-05-16 10:51:40 +01:00
digvijay shelar
95cd9ee5bb spinner fixed 2024-05-16 15:15:48 +05:30
Alex
40f16f8ef1 Merge pull request #952 from ManishMadan2882/fix-api-key-parse
Fix: API Key Parsing
2024-05-15 16:27:43 +01:00
ManishMadan2882
3d9288f82f fix: override chunks, prompts with api-key-data 2024-05-15 20:23:02 +05:30
ManishMadan2882
c51f12f88b (conversation)- taller input field 2024-05-15 16:31:41 +05:30
Alex
0618153390 fix: object id bug 2024-05-14 19:01:45 +01:00
Alex
a7c066291b Update README.md 2024-05-13 17:08:12 +01:00
Alex
a69ac372fa Merge pull request #946 from siiddhantt/refactor/ui-elements
refactor: several small ui refactor for generalisation
2024-05-13 11:47:20 +01:00
Alex
16b2a54981 Merge pull request #936 from Fagner-lourenco/patch-1
Update Dockerfile
2024-05-12 22:36:52 +01:00
Alex
3f68e0d66f chore: Update Dockerfile 2024-05-12 22:33:43 +01:00
Alex
12d483fde6 chore: update documentation links to use the new domain 2024-05-12 11:40:09 +01:00
Siddhant Rai
96034a9712 fix: minor change 2024-05-12 12:56:34 +05:30
Siddhant Rai
d2def4479b refactor: several small ui refactor for generalisation 2024-05-12 12:41:12 +05:30
ManishMadan2882
afbbb913e7 (hero): updating the UI 2024-05-10 16:21:42 +05:30
Alex
ad76f239a3 Merge pull request #943 from arc53/dependabot/npm_and_yarn/docs/next-14.1.1
build(deps): bump next from 14.0.4 to 14.1.1 in /docs
2024-05-10 11:29:37 +01:00
dependabot[bot]
e6b096c9e0 build(deps): bump next from 14.0.4 to 14.1.1 in /docs
Bumps [next](https://github.com/vercel/next.js) from 14.0.4 to 14.1.1.
- [Release notes](https://github.com/vercel/next.js/releases)
- [Changelog](https://github.com/vercel/next.js/blob/canary/release.js)
- [Commits](https://github.com/vercel/next.js/compare/v14.0.4...v14.1.1)

---
updated-dependencies:
- dependency-name: next
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-10 04:27:40 +00:00
Alex
6e26b4e6c7 Merge pull request #942 from ManishMadan2882/main
Fix: Abnormal overflow on mobile screens and arbitrary word breaks.
2024-05-09 16:29:42 +01:00
ManishMadan2882
ea79494b6d fix(conversation): overflows in sources, removed tagline below input 2024-05-08 20:50:20 +05:30
ManishMadan2882
afb18a3e4d (conversation) makes overflow auto 2024-05-08 16:17:16 +05:30
ManishMadan2882
f9c9853102 fix(conversation) word breaks 2024-05-08 16:07:49 +05:30
ManishMadan2882
b3eb9fb6fa fix(conversation): mobile abnormal overflows 2024-05-08 15:56:52 +05:30
Alex
d3b97bf51a Merge pull request #941 from ManishMadan2882/main
fix(UI):conversation,settings
2024-05-08 09:50:30 +01:00
ManishMadan2882
7a2e491199 fix(UI):conversation,settings 2024-05-07 20:37:05 +05:30
Alex
25efaf08b7 Merge pull request #935 from arc53/dependabot/pip/application/tqdm-4.66.3
build(deps): bump tqdm from 4.66.1 to 4.66.3 in /application
2024-05-07 09:52:09 +01:00
Alex
f893ea6b98 Merge pull request #934 from arc53/dependabot/pip/scripts/tqdm-4.66.3
build(deps): bump tqdm from 4.66.1 to 4.66.3 in /scripts
2024-05-07 09:51:57 +01:00
dependabot[bot]
500745b62c build(deps): bump tqdm from 4.66.1 to 4.66.3 in /application
Bumps [tqdm](https://github.com/tqdm/tqdm) from 4.66.1 to 4.66.3.
- [Release notes](https://github.com/tqdm/tqdm/releases)
- [Commits](https://github.com/tqdm/tqdm/compare/v4.66.1...v4.66.3)

---
updated-dependencies:
- dependency-name: tqdm
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-07 08:24:51 +00:00
Alex
9ebe5bf1a7 Merge pull request #939 from arc53/dependabot/pip/application/werkzeug-3.0.3
build(deps): bump werkzeug from 3.0.1 to 3.0.3 in /application
2024-05-07 09:23:58 +01:00
dependabot[bot]
4aecb86daa build(deps): bump werkzeug from 3.0.1 to 3.0.3 in /application
Bumps [werkzeug](https://github.com/pallets/werkzeug) from 3.0.1 to 3.0.3.
- [Release notes](https://github.com/pallets/werkzeug/releases)
- [Changelog](https://github.com/pallets/werkzeug/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/werkzeug/compare/3.0.1...3.0.3)

---
updated-dependencies:
- dependency-name: werkzeug
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-06 19:28:27 +00:00
Fagner-lourenco
6924dd6df6 Update Dockerfile 2024-05-04 20:50:11 -03:00
Alex
431755144e fix: Update count_tokens function in utils.py 2024-05-04 10:39:23 +01:00
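"fix: Update count_tokens function in utils.py" above concerns token accounting (also used in this range for the "Token count for vectors" feature). A rough, hypothetical sketch of such a counter, assuming a tiktoken encoding with a plain word-count fallback; this is not the repository's actual logic.

```python
def count_tokens(text: str) -> int:
    """Illustrative token counter: prefer tiktoken, fall back to word count."""
    try:
        import tiktoken  # optional dependency in this sketch

        encoding = tiktoken.get_encoding("cl100k_base")
        return len(encoding.encode(text))
    except Exception:
        # Fallback approximation when tiktoken is unavailable.
        return len(text.split())
```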
dependabot[bot]
d182f81754 build(deps): bump tqdm from 4.66.1 to 4.66.3 in /scripts
Bumps [tqdm](https://github.com/tqdm/tqdm) from 4.66.1 to 4.66.3.
- [Release notes](https://github.com/tqdm/tqdm/releases)
- [Commits](https://github.com/tqdm/tqdm/compare/v4.66.1...v4.66.3)

---
updated-dependencies:
- dependency-name: tqdm
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-03 21:48:38 +00:00
Alex
de0193fffc Merge pull request #933 from siiddhantt/fix/remote-upload-issue
fix: remote upload error
2024-05-03 14:54:12 +01:00
Siddhant Rai
53e86205ad fix: added more headers from default 2024-05-03 18:47:30 +05:30
Siddhant Rai
aa670efe3a fix: connection aborted in WebBaseLoader 2024-05-03 18:25:01 +05:30
Alex
e693fe49a7 fix: fixed Dockerfile python path bug 2024-05-03 11:55:51 +01:00
Alex
7eaa32d85f remove gunicorn from final 2024-05-02 14:43:09 +01:00
Alex
ab40d2c37a remove pip from final container 2024-05-01 14:11:16 +01:00
Alex
784206b39b chore: Update Dockerfile to use Ubuntu mantic as base image and upgrade gunicorn to version 22.0.0 2024-05-01 13:19:16 +01:00
Alex
7c8264e221 Merge pull request #929 from TomasMatarazzo/issue-button-to-clean-chat-history
Issue button to clean chat history
2024-05-01 10:54:34 +01:00
TomasMatarazzo
db7195aa30 Update Navigation.tsx 2024-04-29 17:02:22 -03:00
TomasMatarazzo
eb7bbc1612 TS2741 2024-04-27 11:04:28 -03:00
TomasMatarazzo
ee3792181d testing 2024-04-26 20:35:36 -03:00
TomasMatarazzo
9804965a20 style in button and user in back route delete all conv 2024-04-25 23:43:45 -03:00
TomasMatarazzo
b84842df3d Fixing types 2024-04-22 16:35:44 -03:00
TomasMatarazzo
fc170d3033 Update package.json 2024-04-22 16:19:00 -03:00
TomasMatarazzo
8fa4ec7ad8 delete console.log 2024-04-22 16:17:26 -03:00
TomasMatarazzo
480825ddd7 now is working in settings 2024-04-22 16:16:19 -03:00
TomasMatarazzo
260e328cc1 first change 2024-04-22 14:41:59 -03:00
Alex
8873428b4b Merge pull request #926 from siiddhantt/feature
Feature: Logging token usage info to MongoDB
2024-04-22 12:10:00 +01:00
Alex
ab43c20b8f delete test output 2024-04-22 12:08:11 +01:00
TomasMatarazzo
88d9d4f4a3 Update DeleteConvModal.tsx 2024-04-18 13:56:03 -03:00
TomasMatarazzo
d4840f85c0 change text in modal 2024-04-18 13:50:08 -03:00
TomasMatarazzo
6f9ddeaed0 Button to clean chat history 2024-04-17 19:51:29 -03:00
Siddhant Rai
af5e73c8cb fix: user_api_key capturing 2024-04-16 15:31:11 +05:30
Siddhant Rai
333b6e60e1 fix: anthropic llm positional arguments 2024-04-16 10:02:04 +05:30
Siddhant Rai
1b61337b75 fix: skip logging to db during tests 2024-04-16 01:08:39 +05:30
Siddhant Rai
77991896b4 fix: api_key capturing + pytest errors 2024-04-15 22:32:24 +05:30
Siddhant Rai
60a670ce29 fix: changes to llm classes according to base 2024-04-15 19:47:24 +05:30
Siddhant Rai
c1c69ed22b fix: pytest issues 2024-04-15 19:35:59 +05:30
Siddhant Rai
d71c74c6fb Merge branch 'feature' of https://github.com/siiddhantt/DocsGPT into feature 2024-04-15 18:57:46 +05:30
Siddhant Rai
590aa8b43f update: apply decorator to abstract classes 2024-04-15 18:57:28 +05:30
Siddhant Rai
607e0166f6 Merge branch 'arc53:main' into feature 2024-04-15 18:55:09 +05:30
Alex
130c83ee92 Merge pull request #911 from arc53/dependabot/pip/application/pymongo-4.6.3
Bump pymongo from 4.6.1 to 4.6.3 in /application
2024-04-15 12:57:22 +01:00
Alex
fd5e418abf Merge pull request #919 from arc53/dependabot/npm_and_yarn/docs/multi-4407677fd1
build(deps): bump tar and npm in /docs
2024-04-15 12:29:26 +01:00
Siddhant Rai
262d160314 Merge with branch main 2024-04-15 15:18:48 +05:30
Siddhant Rai
9146827590 fix: removed unused import 2024-04-15 15:14:17 +05:30
Siddhant Rai
062b108259 Merge branch 'arc53:main' into feature 2024-04-15 15:04:10 +05:30
Siddhant Rai
ba796b6be1 feat: logging token usage to database 2024-04-15 15:03:00 +05:30
Alex
3d763235e1 Merge pull request #925 from ManishMadan2882/main
Untraced types in react widget
2024-04-14 11:43:03 +01:00
Manish Madan
c30c6d9f10 Merge branch 'arc53:main' into main 2024-04-13 16:20:56 +05:30
ManishMadan2882
311716ed18 refactored fs, fix: untracked dir 2024-04-13 16:01:46 +05:30
Alex
19bb1b4aa4 Create SECURITY.md 2024-04-12 09:39:33 +01:00
Alex
b8749e36b9 Merge pull request #921 from siiddhantt/bugfix
fix for missing fields in API Keys section
2024-04-10 10:25:26 +01:00
Siddhant Rai
00b6639155 fix: minor ui changes 2024-04-10 12:37:29 +05:30
Siddhant Rai
71d7daaef3 fix: minor ui changes 2024-04-10 12:23:37 +05:30
Siddhant Rai
8654c5d471 Merge branch 'bugfix' of https://github.com/siiddhantt/DocsGPT into bugfix 2024-04-10 12:11:51 +05:30
Siddhant Rai
02124b3d38 fix: missing fields from API Keys section 2024-04-10 12:11:34 +05:30
dependabot[bot]
340dcfb70d build(deps): bump tar and npm in /docs
Removes [tar](https://github.com/isaacs/node-tar). It's no longer used after updating ancestor dependency [npm](https://github.com/npm/cli). These dependencies need to be updated together.


Removes `tar`

Updates `npm` from 10.5.0 to 10.5.1
- [Release notes](https://github.com/npm/cli/releases)
- [Changelog](https://github.com/npm/cli/blob/latest/CHANGELOG.md)
- [Commits](https://github.com/npm/cli/compare/v10.5.0...v10.5.1)

---
updated-dependencies:
- dependency-name: tar
  dependency-type: indirect
- dependency-name: npm
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-09 21:09:48 +00:00
Alex
a37b92223a Merge pull request #915 from arc53/feat/retrievers-class
Update application files and fix LLM models, create new retriever class
2024-04-09 22:09:11 +01:00
Alex
7d2b8cb4fc Merge pull request #917 from arc53/multiple-uploads
Multiple file upload
2024-04-09 18:13:52 +01:00
Alex
8d7a134cb4 lint: ruff 2024-04-09 17:25:08 +01:00
Alex
4b849d7201 Fix SagemakerAPILLM test 2024-04-09 17:20:26 +01:00
Alex
e03e185d30 Add Brave Search retriever and update application files 2024-04-09 17:11:09 +01:00
Pavel
7a02df5588 Multiple uploads 2024-04-09 19:56:07 +04:00
Alex
19494685ba Update application files, fix LLM models, and create new retriever class 2024-04-09 16:38:42 +01:00
Alex
1e26943c3e Update application files, fix LLM models, and create new retriever class 2024-04-09 15:45:24 +01:00
dependabot[bot]
83fa850142 Bump pymongo from 4.6.1 to 4.6.3 in /application
Bumps [pymongo](https://github.com/mongodb/mongo-python-driver) from 4.6.1 to 4.6.3.
- [Release notes](https://github.com/mongodb/mongo-python-driver/releases)
- [Changelog](https://github.com/mongodb/mongo-python-driver/blob/master/doc/changelog.rst)
- [Commits](https://github.com/mongodb/mongo-python-driver/compare/4.6.1...4.6.3)

---
updated-dependencies:
- dependency-name: pymongo
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-09 14:22:15 +00:00
Alex
968a116d14 Merge pull request #916 from siiddhantt/bugfix
fix: updated qdrant-client to v1.8.2
2024-04-09 15:20:46 +01:00
Siddhant Rai
fb55b494d7 Merge branch 'arc53:main' into bugfix 2024-04-09 19:09:44 +05:30
Siddhant Rai
59b6a83d7d fix: issue #884 2024-04-09 19:08:59 +05:30
Alex
aabc4f0d7b Merge pull request #907 from siiddhantt/main
refactor: clean up settings file for better structure
2024-04-09 14:17:56 +01:00
Alex
391f686173 Update application files and fix LLM models, create new retriever class 2024-04-09 14:02:33 +01:00
Siddhant Rai
8e6f6d46ec fix: issue during build 2024-04-09 16:34:51 +05:30
Siddhant Rai
2ba7a55439 Merge branch 'arc53:main' into main 2024-04-09 13:54:48 +05:30
Alex
e07df29ab9 Update FLASK_DEBUG_MODE setting to use value from settings module 2024-04-08 13:27:43 +01:00
Alex
abf24fe60f Update FLASK_DEBUG_MODE setting to use value from settings module 2024-04-08 13:15:58 +01:00
Siddhant Rai
fad5f5b81f fix: added requested changes 2024-04-08 17:45:56 +05:30
Siddhant Rai
6961f49a0c Merge branch 'arc53:main' into main 2024-04-08 17:43:21 +05:30
Alex
6911f8652a Fix vectorstore path in check_docs function 2024-04-08 13:06:05 +01:00
Alex
6658cec6a0 Merge pull request #897 from arc53/dependabot/npm_and_yarn/frontend/vite-5.0.13
Bump vite from 5.0.12 to 5.0.13 in /frontend
2024-04-08 13:03:20 +01:00
Alex
14011b9d84 Merge pull request #891 from arc53/dependabot/npm_and_yarn/mock-backend/express-4.19.2
Bump express from 4.18.2 to 4.19.2 in /mock-backend
2024-04-08 13:02:58 +01:00
Alex
bd2d0b6790 Merge pull request from GHSA-p5qc-vj2x-9rjp
advisory-fix
2024-04-08 12:58:36 +01:00
Alex
d36f58230a advisory-fix 2024-04-08 12:56:27 +01:00
Alex
018f950ca3 Merge pull request #908 from arc53/api-keys-documentation-guide
API keys guide
2024-04-08 10:36:35 +01:00
Alex
db8db9fae9 Add prompt_id and chunks fields in create_api_key function 2024-04-08 10:35:15 +01:00
Pavel
79ce8d6563 guide 2024-04-07 20:14:16 +04:00
Alex
13eaa9a35a Merge pull request #904 from arc53/fix/update-docs-widget
Update api key to new data
2024-04-06 11:39:32 +01:00
Siddhant Rai
39f0d76b4b refactor: clean up settings file for better structure 2024-04-05 23:38:59 +05:30
Siddhant Rai
0a5832ec75 refactor: clean up settings file for better structure 2024-04-05 23:33:27 +05:30
Alex
6e147b3ed2 Update api key to new data 2024-04-05 14:49:32 +01:00
Alex
c162f79daa Merge pull request #903 from arc53/feature/api-key-create
Feature/api key create
2024-04-05 13:18:11 +01:00
Alex
87585be687 Merge branch 'main' into feature/api-key-create 2024-04-05 13:01:42 +01:00
Alex
ea08d6413c Merge pull request #902 from ManishMadan2882/feature/api-key-create
Add Prompt, Chunks in Create Key
2024-04-04 12:45:33 +01:00
Alex
879905edf6 Refactor create_api_key function to include prompt_id and chunks in routes.py 2024-04-04 12:38:23 +01:00
Alex
6fd80a5582 Merge pull request #899 from siiddhantt/main
feat: added prompts section under general in settings
2024-04-04 10:25:08 +01:00
Siddhant Rai
0dc7333563 fix: added API Keys in tabs 2024-04-04 14:42:14 +05:30
Siddhant Rai
f61c3168d2 fix: issue with editing new prompts 2024-04-04 14:29:37 +05:30
Siddhant Rai
9cadd74a96 fix: minor ui changes 2024-04-04 13:42:32 +05:30
Siddhant Rai
729fa2352b feat: added prompts section under general in settings 2024-04-04 00:48:49 +05:30
dependabot[bot]
b673aaf9f0 Bump vite from 5.0.12 to 5.0.13 in /frontend
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 5.0.12 to 5.0.13.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/v5.0.13/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v5.0.13/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-03 17:53:09 +00:00
Alex
3132cc6005 Merge pull request #895 from sarfarazsiddiquii/new_branch
added feature #887
2024-04-03 17:09:06 +01:00
ManishMadan2882
ac994d3077 add prompt,chunks in create key 2024-04-03 19:19:53 +05:30
sarfaraz siddiqui
02d4f7f2da functions can accept null 2024-04-03 18:08:46 +05:30
Alex
d99569f005 Merge pull request #896 from arc53/api-update
api update
2024-04-01 18:22:13 +01:00
Pavel
ec5166249a api update
A description of 3 more api methods in documentation.
2024-04-01 21:07:27 +04:00
Alex
dadd12adb3 Update API key in DocsGPTWidget.tsx 2024-04-01 11:25:59 +01:00
Alex
88b4fb8c2a Update API key in DocsGPTWidget.tsx 2024-04-01 11:25:31 +01:00
sarfaraz siddiqui
afecae3786 added feature #887 2024-03-31 03:50:11 +05:30
Alex
d18598bc33 Merge pull request #894 from arc53/feature/api-key-create
Feature/api key create
2024-03-29 20:04:26 +00:00
Alex
794fc05ada Merge branch 'main' into feature/api-key-create 2024-03-29 19:59:45 +00:00
Alex
5daeb7f876 Merge pull request #892 from ManishMadan2882/feature/api-key-create
Feature/api key create
2024-03-29 19:57:25 +00:00
ManishMadan2882
53e71c545e api key modal - enhancements 2024-03-29 19:11:40 +05:30
ManishMadan2882
959a55e36c adding dark mode - api key 2024-03-29 04:13:12 +05:30
ManishMadan2882
64572b0024 feat(settings): api key endpoints 2024-03-29 03:26:45 +05:30
Manish Madan
9a0c1caa43 Merge branch 'arc53:feature/api-key-create' into feature/api-key-create 2024-03-28 19:28:23 +05:30
ManishMadan2882
eed6723147 feat(settings): api keys tab 2024-03-28 19:25:35 +05:30
Alex
97fabf51b8 Refactor conversationSlice.ts and conversationApi.ts 2024-03-28 13:43:10 +00:00
Alex
5e5e2b8aee Merge pull request #877 from siiddhantt/main
Added reddit loader
2024-03-27 16:55:01 +00:00
Siddhant Rai
e01071426f feat: field to pass number of posts as a parameter 2024-03-27 19:20:55 +05:30
Siddhant Rai
eed1bfbe50 feat: fields to handle reddit loader + minor changes 2024-03-26 16:07:44 +05:30
Siddhant Rai
0c3970a266 Merge branch 'arc53:main' into main 2024-03-26 16:07:25 +05:30
dependabot[bot]
267cfb621e Bump express from 4.18.2 to 4.19.2 in /mock-backend
Bumps [express](https://github.com/expressjs/express) from 4.18.2 to 4.19.2.
- [Release notes](https://github.com/expressjs/express/releases)
- [Changelog](https://github.com/expressjs/express/blob/master/History.md)
- [Commits](https://github.com/expressjs/express/compare/4.18.2...4.19.2)

---
updated-dependencies:
- dependency-name: express
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-26 10:25:02 +00:00
Alex
0e90febab2 Merge pull request #890 from arc53/dependabot/npm_and_yarn/docs/katex-0.16.10
Bump katex from 0.16.9 to 0.16.10 in /docs
2024-03-26 10:24:19 +00:00
dependabot[bot]
31d947837f Bump katex from 0.16.9 to 0.16.10 in /docs
Bumps [katex](https://github.com/KaTeX/KaTeX) from 0.16.9 to 0.16.10.
- [Release notes](https://github.com/KaTeX/KaTeX/releases)
- [Changelog](https://github.com/KaTeX/KaTeX/blob/main/CHANGELOG.md)
- [Commits](https://github.com/KaTeX/KaTeX/compare/v0.16.9...v0.16.10)

---
updated-dependencies:
- dependency-name: katex
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-25 20:31:43 +00:00
Alex
017b11fbba Merge pull request #888 from arc53/fix/parsing-chunks-issue
Fix parsing issue with chunks in store.ts
2024-03-23 11:47:11 +00:00
Alex
3c492062a9 Fix parsing issue with chunks in store.ts 2024-03-23 11:42:50 +00:00
Alex
b26b49d0ca Merge pull request #883 from arc53/feat/chunks
Add support for setting the number of chunks processed per query
2024-03-22 15:34:09 +00:00
Alex
ed08123550 Add support for setting the number of chunks processed per query 2024-03-22 14:50:56 +00:00
Alex
add2db5b7a Merge pull request #881 from arc53/fix_model_selection_for_openai
Fix model selection at least for openAI LLM_NAME
2024-03-21 16:47:52 +00:00
Siddhant Rai
f272d7121a Merge branch 'arc53:main' into main 2024-03-21 19:38:44 +05:30
Anton Larin
577556678c Fix model selection at least for openAI LLM_NAME 2024-03-21 10:14:48 +01:00
Alex
e146922367 Merge pull request #880 from ManishMadan2882/main
Customised Scrollbar, fixed: Hero wasn't completely scrollable in Mobile
2024-03-20 10:08:22 +00:00
ManishMadan2882
6f1548b7f8 customised scrollbar 2024-03-19 21:40:00 +05:30
ManishMadan2882
9e6fe47b44 fix(hero): not fully scrollable in mobile 2024-03-19 21:39:16 +05:30
Siddhant Rai
60cfea1126 feat: added reddit loader 2024-03-16 20:22:05 +05:30
Alex
80a4a094af lint 2024-03-14 11:37:33 +00:00
Alex
70e1560cb3 fix check on model 2024-03-14 11:37:01 +00:00
Alex
725033659a Merge pull request #876 from ManishMadan2882/main
Pause Auto-scroll on user interrupt
2024-03-14 11:33:07 +00:00
ManishMadan2882
059111fb57 widget: version release 2024-03-14 16:58:57 +05:30
ManishMadan2882
d4a5eadf13 docs: updated version 2024-03-14 16:54:52 +05:30
ManishMadan2882
79cf487ac5 purge unused deps, comments 2024-03-14 04:03:17 +05:30
ManishMadan2882
52ecbab859 purge logs 2024-03-14 04:01:35 +05:30
ManishMadan2882
adfc79bf92 block autoScroll on user interrupt 2024-03-14 02:25:33 +05:30
ManishMadan2882
2447bab924 add listener for wheel, touch events 2024-03-14 01:51:55 +05:30
Alex
1057ca78a6 default remote 2024-03-13 17:01:23 +00:00
ManishMadan2882
7e7f98fd92 sanitize html - add dompurify 2024-03-13 00:21:54 +05:30
ManishMadan2882
64552ce2de add snarkdown: markdown support 2024-03-12 18:12:27 +05:30
ManishMadan2882
7506256f42 fix(lint): 2 errors 2024-03-11 19:42:22 +05:30
ManishMadan2882
db75230521 pause scroll on user action 2024-03-11 19:21:17 +05:30
Alex
f8955d5607 Update README.md 2024-03-11 12:05:35 +00:00
Alex
0bad217b93 Merge pull request #867 from siiddhantt/main
fix: issue #157
2024-03-08 16:06:51 +00:00
Alex
4da400a136 Merge pull request #873 from ManishMadan2882/main 2024-03-07 19:26:33 +00:00
ManishMadan2882
24740bd341 fix(UI) overflow in next 2024-03-08 00:48:18 +05:30
ManishMadan2882
3b6a15de84 version update 2024-03-08 00:46:53 +05:30
ManishMadan2882
ac1f525a6c fix: fine tuned the css 2024-03-07 19:20:03 +05:30
ManishMadan2882
e3999bdb0c updating version 2024-03-07 19:14:55 +05:30
ManishMadan2882
ad3d5a30ec docs update - widget v0.3.3 2024-03-07 15:58:31 +05:30
Alex
e4b5847725 Merge pull request #872 from ManishMadan2882/main
Widget UI fixes
2024-03-07 09:54:51 +00:00
ManishMadan2882
1a91a245a3 ui fixes 2024-03-07 02:50:30 +05:30
Alex
229f62d071 Merge pull request #861 from ManishMadan2882/main
DocsGPT Widget
2024-03-06 17:38:47 +00:00
Alex
b96fe16770 Update docsgpt version to 0.3.0 2024-03-06 17:36:47 +00:00
Alex
97750cb5e2 Update package.json with new version and add repository information 2024-03-06 17:24:23 +00:00
Siddhant Rai
e1a2bd11a9 fix: upload dropdown also combined 2024-03-06 16:01:53 +05:30
ManishMadan2882
229b408252 adding fallback avatar 2024-03-06 01:58:52 +05:30
ManishMadan2882
ae929438a5 shifted to parcel, styled-components 2024-03-05 21:15:58 +05:30
Siddhant Rai
5daaf84e05 fix: combined two dropdowns into a single component 2024-03-05 14:26:08 +05:30
Siddhant Rai
19b09515a1 Merge branch 'arc53:main' into main 2024-03-05 14:22:51 +05:30
Alex
9ce6078c8b Merge pull request #863 from Anush008/main
feat: Qdrant vectorstore support
2024-03-04 12:41:11 +00:00
Siddhant Rai
51f588f4b1 fix: issue #157 2024-03-04 15:45:34 +05:30
Alex
5ee6605703 Merge pull request #835 from arc53/feature/remote-loads
Feature/remote loads
2024-03-01 15:42:42 +00:00
Alex
7ef97cfd81 fix abort 2024-03-01 15:42:22 +00:00
Alex
f4288f0bd4 remove sitemap 2024-03-01 14:41:03 +00:00
Alex
4a701cb993 Merge branch 'main' into feature/remote-loads 2024-03-01 14:38:27 +00:00
Anush008
00dfb07b15 chore: revert to faiss default 2024-02-29 09:48:38 +05:30
ManishMadan2882
5fffa8e9db adding rollup-plugin-import-css 2024-02-29 04:11:47 +05:30
Pavel
54d187a0ad Fixing ingestion metadata grouping 2024-02-28 19:52:58 +03:00
ManishMadan2882
192ce468b7 inline responsive module styles 2024-02-28 19:31:36 +05:30
Anush008
75c0cadb50 feat: Qdrant vector store 2024-02-28 11:49:15 +05:30
ManishMadan2882
5d578d4b3b preparing for npm publish 2024-02-27 21:31:08 +05:30
Alex
325a8889ab update url 2024-02-27 11:52:51 +00:00
ManishMadan2882
9cdd78e68c purge out dist, update gitignore 2024-02-27 15:51:05 +05:30
ManishMadan2882
3a6770a1ae preparing build 2024-02-26 21:10:22 +05:30
ManishMadan2882
8073924056 padding improved at the edges 2024-02-26 20:46:46 +05:30
ManishMadan2882
7b53e1c54b UI enhancement, scroll fix 2024-02-26 20:10:00 +05:30
Alex
c4c0516820 add endpoint 2024-02-26 14:31:54 +00:00
Alex
8d36f8850e Merge pull request #860 from arc53/Fix-ingestion-grouping
Fixing ingestion metadata grouping
2024-02-26 10:16:37 +00:00
ManishMadan2882
abe5f43f3d adding responsive markdown response, error alert 2024-02-26 03:15:31 +05:30
Pavel
c8d8a8d0b5 Fixing ingestion metadata grouping 2024-02-25 16:03:18 +03:00
ManishMadan2882
f60e88573a refactored UI strategy, added prompt response in chat box 2024-02-24 21:02:28 +05:30
Alex
4216671ea2 Update README.md 2024-02-24 12:28:31 +00:00
Alex
ee3ea7a970 Add wget and unzip packages to Dockerfile 2024-02-23 21:19:04 +00:00
Alex
2b644dbb01 Add Rust toolchain and download mpnet-base-v2.zip model 2024-02-23 21:15:26 +00:00
ManishMadan2882
63878e7ffd initiated shadcn 2024-02-19 04:14:09 +05:30
Alex
007cd6cff1 Add conversations to db.json 2024-02-18 19:33:45 +00:00
Alex
4375215baa Update port number in Dockerfile and server.js 2024-02-18 19:12:58 +00:00
Alex
8cc5e9db13 Merge pull request #856 from ManishMadan2882/main
(mock) adding prompt routes
2024-02-16 11:22:40 +00:00
ManishMadan2882
5685f831a7 (mock) adding prompt routes 2024-02-15 05:35:34 +05:30
Alex
0cb3d12d94 Refactor loader classes to accept inputs directly 2024-02-14 15:17:56 +00:00
Alex
0e38c6751b Merge pull request #854 from ManishMadan2882/main
Message Streaming with the Mock Server
2024-02-14 13:50:15 +00:00
ManishMadan2882
70ad1fb3d8 Merge branch 'main' of https://github.com/manishMadan2882/docsgpt 2024-02-14 18:50:02 +05:30
ManishMadan2882
44f27d91a0 purge console logs 2024-02-14 18:48:43 +05:30
Manish Madan
1bb559c285 Merge branch 'arc53:main' into main 2024-02-14 18:40:24 +05:30
ManishMadan2882
7a005ef126 streamed the sample response /stream 2024-02-14 18:39:21 +05:30
Pavel
030c2a740f upload_remote class 2024-02-13 23:41:36 +03:00
Alex
5dcde67ae9 Merge pull request #852 from arc53/feat/premaillm
fix: docsgpt provider
2024-02-13 15:20:05 +00:00
Alex
ee06fa85f1 fix: docsgpt provider 2024-02-13 15:06:52 +00:00
Alex
5b9352a946 Merge pull request #851 from arc53/feat/premaillm
Add PremAI LLM implementation
2024-02-13 14:14:20 +00:00
Alex
b7927d8d75 Add PremAI LLM implementation 2024-02-13 14:08:55 +00:00
Alex
c144f30606 Merge pull request #850 from ManishMadan2882/feature/remote-loads
adding remote uploads tab
2024-02-12 23:46:30 +00:00
ManishMadan2882
d2dba3a0db adding remote uploads tab 2024-02-13 01:53:25 +05:30
Alex
2c991583ff Merge pull request #848 from ManishMadan2882/main
Makes input field absolute in Conversation, fixes delete icon in Settings/Documents
2024-02-09 14:20:02 +00:00
Alex
2e14dec12d Merge pull request #849 from arc53/main
Sync
2024-02-09 14:05:39 +00:00
ManishMadan2882
8826f0ff3c slight UI improvements in input box 2024-02-09 19:17:26 +05:30
ManishMadan2882
9129f7fb33 fix(Conversation): input box UI 2024-02-09 19:12:48 +05:30
ManishMadan2882
c0ed54406f fix(settings): delete button 2024-02-09 18:04:24 +05:30
Alex
18be257e10 Merge pull request #847 from ManishMadan2882/main
Fix : error on changing conversation while streaming answer
2024-02-07 18:00:12 +00:00
ManishMadan2882
615d549494 slight fixes, checking for null case 2024-02-07 05:09:12 +05:30
ManishMadan2882
0ce39e7f52 purge logs and !need code 2024-02-07 05:04:16 +05:30
ManishMadan2882
3c68cbc955 fix(stream err on changing conversation) 2024-02-07 04:42:39 +05:30
ManishMadan2882
300430e2d5 fixes weird bug- dark theme hook 2024-02-06 05:17:43 +05:30
Alex
166a07732a Merge pull request #820 from Quentium-Forks/main
Bump dependencies & support next 14 for docs
2024-02-05 15:13:40 +00:00
Alex
510b517270 Merge pull request #844 from ManishMadan2882/main
Fix: Sidebar Icons update on changing theme
2024-02-01 09:55:50 +00:00
ManishMadan2882
dea385384a fixes, update Nav images on theme toggle 2024-02-01 03:43:05 +05:30
ManishMadan2882
7a1c9101b2 add custom hook for dark theme 2024-02-01 03:42:09 +05:30
Alex
2be523cf77 Fix handling of embeddings_key in api_search() function 2024-01-30 17:22:33 +00:00
Alex
c01e334487 Merge pull request #843 from larinam/fix_application
Fix application + script requirements.txt
2024-01-29 20:59:36 +00:00
Alex
a2418d1373 Add sentence-transformers library to requirements.txt and comment out model_name in base.py 2024-01-29 20:51:28 +00:00
Alex
a697248b26 Merge pull request #841 from ManishMadan2882/main
Message bubble responsiveness
2024-01-29 13:55:29 +00:00
ManishMadan2882
6058939c00 change size in copy, like , dislike icons 2024-01-29 19:10:03 +05:30
Anton Larin
318de530e3 fix openapi-parser requirement 2024-01-27 16:52:33 +01:00
Anton Larin
9e04b7796a application folder related changes:
* optimize content of requirements.txt
* upgrade libs
* fix imports
2024-01-27 16:25:19 +01:00
Anton Larin
e8099c4db5 script folder related changes:
* optimize content of requirements.txt
* upgrade libs
* fix imports
2024-01-27 14:58:08 +01:00
Alex
bf808811cc Update README.md 2024-01-26 12:21:09 +00:00
ManishMadan2882
f0293de1b9 ui adjustments 2024-01-26 03:06:15 +05:30
ManishMadan2882
810dcb90ce refactored the divs, prevent overlap 2024-01-26 02:47:51 +05:30
ManishMadan2882
a2f2b8fabc make responsive msg bubble 2024-01-26 02:33:50 +05:30
Alex
cbc5f47786 Merge pull request #837 from ManishMadan2882/main
Adding Dark Mode
2024-01-23 14:59:22 +00:00
ManishMadan2882
3e3886ced7 slight UI changes 2024-01-23 19:38:22 +05:30
ManishMadan2882
9ce39fd2ba made borders in settings a bit darker 2024-01-22 18:22:46 +05:30
ManishMadan2882
5b08cdedf0 revert changes in docker yaml 2024-01-22 16:22:19 +05:30
ManishMadan2882
67e4d40c49 added dark mode, About page and bubble icons 2024-01-22 16:11:26 +05:30
ManishMadan2882
537a733157 add dark mode - conversation, bubble, UI fixes 2024-01-22 02:56:07 +05:30
ManishMadan2882
5136e7726d added dark mode, Hero component 2024-01-21 17:18:23 +05:30
Alex
6e236ba74d Merge pull request #827 from Juneezee/vite-5
Upgrade to Vite 5
2024-01-19 14:01:40 +00:00
Eng Zer Jun
374b665089 Upgrade to Vite 5
This commit upgrades vite to the latest version 5, and also updates the
vite plugins to the latest version.

Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2024-01-19 21:34:28 +08:00
ManishMadan2882
ffecc9a0c7 add dark mode, in Settings 2024-01-19 19:01:13 +05:30
ManishMadan2882
0b997418d3 add dark - sidebar 2024-01-19 01:47:23 +05:30
ManishMadan2882
eaad8a4cf5 initialising dark mode 2024-01-18 02:39:40 +05:30
Alex
396b4595f4 Merge pull request #832 from ArnabBCA/main
Fixed Empty Document Name Upload
2024-01-16 17:07:02 +00:00
Arnab Ghosh
0752aae9ef Fixed Empty Document Name Upload 2024-01-16 15:35:48 +05:30
Alex
ad2221a677 Merge pull request #830 from arc53/feature/search-endpoint
Search docs not inside the /stream in the stream request
2024-01-15 16:55:53 +00:00
Alex
1713d693b1 Merge pull request #831 from ManishMadan2882/feature/search-endpoint
integrate /api/search endpoint, get sources post stream
2024-01-15 16:34:48 +00:00
ManishMadan2882
f4f056449f integrate /api/search endpoint, get sources post stream 2024-01-15 20:23:18 +05:30
Alex
6a70e3e45b Commented out unused code in api_search function 2024-01-12 14:39:17 +00:00
Alex
a04cdee33f Refactor source log generation in complete_stream function 2024-01-12 14:38:15 +00:00
Alex
157769eeb4 Add API endpoint for searching documents 2024-01-12 14:35:23 +00:00
Alex
667b66b926 Merge pull request #825 from arc53/feat/mongodb
Public LLM
2024-01-09 14:31:40 +00:00
Alex
c0f7b344d9 Update environment variables and installation instructions 2024-01-09 12:35:18 +00:00
Alex
060c59e97d Update mpnet-base-v2.zip download URL 2024-01-09 11:41:25 +00:00
Alex
b3461b7134 Add MPNet model and update vector store for Hugging Face embeddings 2024-01-09 11:39:32 +00:00
Pavel
001c450abb choice text 2024-01-09 13:05:16 +03:00
Pavel
ceaa5763d4 choice fix 2024-01-09 12:57:19 +03:00
Alex
b45fd58944 Update EMBEDDINGS_NAME in settings.py and test_vector_store.py 2024-01-09 00:34:04 +00:00
Alex
b3149def82 Update EMBEDDINGS_NAME in settings.py 2024-01-09 00:29:02 +00:00
Alex
378d498402 Remove unused imports in docsgpt_provider.py 2024-01-09 00:19:49 +00:00
Alex
98f52b32a3 Update README and Quickstart guide 2024-01-09 00:18:04 +00:00
Alex
0ab32a6f84 Update setup.sh script with new options for language model usage 2024-01-09 00:07:37 +00:00
Alex
71cc22325d Add application files and update setup script 2024-01-09 00:05:44 +00:00
Alex
e1b2991aa6 Update LLM_NAME and EMBEDDINGS_NAME 2024-01-09 00:01:31 +00:00
Alex
033bcf80d0 docsgpt llm provider 2024-01-08 23:35:37 +00:00
Alex
103118d558 Merge pull request #823 from ManishMadan2882/main
fix distortion on different browsers
2024-01-08 11:30:30 +00:00
ManishMadan2882
f91b5fa004 fix distortion on different browsers 2024-01-08 16:03:41 +05:30
Alex
7179bf7b67 Merge pull request #822 from arc53/feat/mongodb
Mongodb integration as vectorstore
2024-01-06 18:30:25 +00:00
Alex
a3e6239e6e fix: remove import 2024-01-06 18:23:20 +00:00
Alex
1fa12e56c6 Remove unused test cases in test_openai.py 2024-01-06 18:04:50 +00:00
Alex
4ff834de76 Refactor MongoDBVectorStore and add delete_index method 2024-01-06 17:59:01 +00:00
QuentiumYT
6db38ad769 Bump dependencies & support next 14 for docs
- Renamed _app.js to mdx (for Next 14)
- Lint next config file & package.json
2024-01-05 18:50:49 +01:00
Alex
293b7b09a9 init tests 2024-01-05 17:16:16 +00:00
Alex
d5945f9ee7 Update README.md 2024-01-05 13:58:22 +00:00
Alex
d1f5a6fc31 Merge pull request #816 from ManishMadan2882/main
adding responsive sidebar
2024-01-04 20:51:30 +00:00
ManishMadan2882
e7b9f5e4c3 adding responsive sidebar 2024-01-05 01:50:52 +05:30
Alex
7870749077 fix openai 2024-01-03 12:09:05 +00:00
Alex
c5352f443a Merge pull request #813 from CBID2/making-alt-text-less-redundant
fix: Making alt text less redundant
2024-01-03 10:52:08 +00:00
Christine
fd8b7aa0f2 fix: change alt text for setting 2024-01-03 04:17:53 +00:00
Christine
458ea266ec fix: change name to alt text for Discord and GitHub 2024-01-03 04:14:06 +00:00
Alex
9748eaba25 Merge pull request #811 from Rutam21/patch-2
Added new Deployment Guide for Kamatera Performance Cloud
2023-12-31 15:37:32 +00:00
Alex
887a3740b2 Update holopin.yml 2023-12-31 15:34:55 +00:00
Rutam Prita Mishra
2e7cfe9cd7 Added new Deployment Guide
This PR adds a new deployment guide for Kamatera Performance Cloud.
2023-12-26 16:28:57 +05:30
Alex
6dbe156a02 Update README.md 2023-12-25 11:52:37 +00:00
Alex
2a9ef6d48e Merge pull request #792 from arc53/dependabot/npm_and_yarn/frontend/vite-4.5.1
Bump vite from 4.5.0 to 4.5.1 in /frontend
2023-12-22 15:51:11 +00:00
Alex
6717ddbd0b Merge pull request #804 from arc53/dependabot/npm_and_yarn/docs/next-13.5.1
Bump next from 13.4.19 to 13.5.1 in /docs
2023-12-22 15:50:49 +00:00
Alex
47c1aab064 Merge pull request #793 from arc53/dependabot/npm_and_yarn/extensions/react-widget/vite-4.4.12
Bump vite from 4.4.9 to 4.4.12 in /extensions/react-widget
2023-12-22 15:50:08 +00:00
Alex
eda41658b9 Merge pull request #806 from arc53/cve/py-removal
fix: Remove py==1.11.0 from requirements.txt
2023-12-22 15:47:45 +00:00
Alex
7f79363944 fix: Remove py==1.11.0 from requirements.txt 2023-12-22 15:44:39 +00:00
Alex
25967f2a09 Merge pull request #805 from arc53/fix/scripts-vulnerabilities
fix: vulns
2023-12-22 15:35:37 +00:00
Alex
4d3963ad67 fix: vulns 2023-12-22 15:27:23 +00:00
dependabot[bot]
f78c5257dc Bump next from 13.4.19 to 13.5.1 in /docs
Bumps [next](https://github.com/vercel/next.js) from 13.4.19 to 13.5.1.
- [Release notes](https://github.com/vercel/next.js/releases)
- [Changelog](https://github.com/vercel/next.js/blob/canary/release.js)
- [Commits](https://github.com/vercel/next.js/compare/v13.4.19...v13.5.1)

---
updated-dependencies:
- dependency-name: next
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-22 14:14:27 +00:00
Alex
ccc6234ac8 Merge pull request #803 from arc53/fix/dependency-upgrades
fix: cve upgrades
2023-12-22 14:13:41 +00:00
Alex
c81b0200eb fix: Update pydantic_settings version to 2.1.0 2023-12-22 14:08:03 +00:00
Alex
f039d37c8a fix: pydantic 2023-12-22 14:03:43 +00:00
Alex
237975bfef fix: cve upgrades 2023-12-22 13:25:57 +00:00
Alex
015bc7c8c3 hotfix source doc data 2023-12-22 12:10:35 +00:00
Alex
3da2a00ee9 Merge pull request #801 from Victorivus/bug/fix-#800-mistral_models
Update requirements.txt HF Transformers
2023-12-14 15:29:28 +00:00
Victorivus
16eca5bebf Update requirements.txt HF Transformers
Fix 'mistral' models missing
2023-12-12 15:16:31 +01:00
Alex
a4483cf255 Revert "Merge pull request #797 from arc53/fix_boxes_hero"
This reverts commit 0bf020a1b4, reversing
changes made to a62566e8fb.
2023-12-11 12:18:47 +00:00
Alex
0bf020a1b4 Merge pull request #797 from arc53/fix_boxes_hero
Fix_hero
2023-12-10 19:17:46 +00:00
Pavel
d43927a167 Fix_hero
Fixes the hero section for Chrome and Firefox, and the conversations-container pushing containers below.
2023-12-10 16:28:20 +03:00
Alex
a62566e8fb Merge pull request #795 from HeetVekariya/fix/UI
fix: API docs text overflow
2023-12-08 17:08:43 +00:00
HeetVekariya
23a1730106 fix: API docs text overflow 2023-12-08 20:36:12 +05:30
dependabot[bot]
f8ac5e0af3 Bump vite from 4.4.9 to 4.4.12 in /extensions/react-widget
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 4.4.9 to 4.4.12.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/v4.4.12/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v4.4.12/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-06 01:52:56 +00:00
dependabot[bot]
eb48a153d9 Bump vite from 4.5.0 to 4.5.1 in /frontend
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 4.5.0 to 4.5.1.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/v4.5.1/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v4.5.1/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-05 23:44:40 +00:00
Alex
1a78a6f786 Merge pull request #791 from arc53/bug/sticky-prompts
Add getLocalPrompt and setLocalPrompt functions to preferenceApi.ts
2023-12-04 11:27:18 +02:00
Alex
f8f60c62fe Add getLocalPrompt and setLocalPrompt functions to preferenceApi.ts 2023-12-04 11:23:51 +02:00
Alex
453e507b89 Update routes.py 2023-11-24 15:17:48 +00:00
Alex
022c32f9d5 fix chats display 2023-11-24 12:22:34 +00:00
Alex
af1a0c3520 Update routes.py 2023-11-23 11:31:07 +00:00
Alex
0a6d9dfcf4 Merge pull request #782 from arc53/feature/add-prompts
Feature/add prompts
2023-11-23 00:13:02 +00:00
Alex
5bdedacab1 fix xss 2023-11-23 00:09:17 +00:00
Alex
d7a1be2f3c fix bug 2023-11-23 00:00:08 +00:00
Alex
d6dcbb63d4 fix ruff 2023-11-22 23:57:47 +00:00
Alex
b2770f67a1 custom prompts 2023-11-22 23:55:41 +00:00
Alex
aa2691b153 Merge pull request #513 from akshay11298/mock-backend-server
Mock backend server
2023-11-22 12:27:10 +00:00
Alex
e9a9cbbd07 feedback local 2023-11-22 12:16:37 +00:00
Alex
17e2222802 Merge pull request #777 from arc53/dependabot/pip/scripts/aiohttp-3.8.6
Bump aiohttp from 3.8.5 to 3.8.6 in /scripts
2023-11-22 09:30:43 +00:00
Alex
58b2970b19 Merge pull request #778 from arc53/dependabot/pip/application/aiohttp-3.8.6
Bump aiohttp from 3.8.5 to 3.8.6 in /application
2023-11-22 09:28:20 +00:00
Alex
fd69961185 Merge pull request #780 from guspan-tanadi/docsAPIdeleteold
docs: refer delete_old sample docs_check comments API-docs
2023-11-22 00:19:41 +00:00
Guspan Tanadi
e5cd813958 docs: delete_old sample docs_check comments API-docs 2023-11-21 22:15:27 +07:00
Alex
5b12423d98 setup-fix2 2023-11-21 10:16:54 +00:00
Alex
4141f633a3 Setup process 2023-11-21 10:16:10 +00:00
Alex
67854b3ebd Reqs2 2023-11-17 16:18:28 +00:00
Alex
0c21dbc7c8 reqs cmd 2023-11-17 16:18:22 +00:00
Alex
5925aa50d8 Merge pull request #779 from arc53/bug/fix-patj
fix path bug on default
2023-11-17 15:38:12 +00:00
Alex
852b016111 fix path bug on default 2023-11-17 15:33:51 +00:00
Alex
ba77a67ba7 fix path 2023-11-17 15:31:53 +00:00
Alex
c14a9a55d7 Merge pull request #775 from guspan-tanadi/markdownhighlightNote
style(README): markdown highlight Note section
2023-11-15 11:32:24 +00:00
dependabot[bot]
5203db6c9c Bump aiohttp from 3.8.5 to 3.8.6 in /application
Bumps [aiohttp](https://github.com/aio-libs/aiohttp) from 3.8.5 to 3.8.6.
- [Release notes](https://github.com/aio-libs/aiohttp/releases)
- [Changelog](https://github.com/aio-libs/aiohttp/blob/master/CHANGES.rst)
- [Commits](https://github.com/aio-libs/aiohttp/compare/v3.8.5...v3.8.6)

---
updated-dependencies:
- dependency-name: aiohttp
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-11-14 23:45:14 +00:00
dependabot[bot]
30eb8dda1d Bump aiohttp from 3.8.5 to 3.8.6 in /scripts
Bumps [aiohttp](https://github.com/aio-libs/aiohttp) from 3.8.5 to 3.8.6.
- [Release notes](https://github.com/aio-libs/aiohttp/releases)
- [Changelog](https://github.com/aio-libs/aiohttp/blob/master/CHANGES.rst)
- [Commits](https://github.com/aio-libs/aiohttp/compare/v3.8.5...v3.8.6)

---
updated-dependencies:
- dependency-name: aiohttp
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-11-14 23:42:27 +00:00
Guspan Tanadi
69d40b5fe8 Merge branch 'arc53:main' into markdownhighlightNote 2023-11-15 06:35:04 +07:00
Alex
706e87659e Merge pull request #776 from arc53/feature/settings-api
Feature/settings api
2023-11-14 01:21:37 +00:00
Alex
5c785e49af prompts and docs 2023-11-14 01:16:42 +00:00
Alex
0974085c6f prompts 2023-11-14 01:16:06 +00:00
Guspan Tanadi
e67ced8848 new line Important section Quickstart 2023-11-13 20:02:07 +07:00
Guspan Tanadi
c2dea6b881 bold colon Important section Quickstart 2023-11-13 19:58:35 +07:00
Guspan Tanadi
ee62b2cf31 lowercase Important section Quickstart 2023-11-13 19:56:23 +07:00
Guspan Tanadi
252e06bee6 revert markdown highlight Note Important section Quickstart 2023-11-13 19:54:12 +07:00
Guspan Tanadi
1f0ce88e08 style: highlight markdown Note Important section Quickstart 2023-11-13 19:33:01 +07:00
Guspan Tanadi
7e8fb388a3 style(README): highlight markdown available Note section 2023-11-13 19:17:13 +07:00
Alex
a3de360878 fix sidebar 2023-11-12 18:31:29 +00:00
Alex
6ee556e386 Delete HACKTOBERFEST.md 2023-11-12 18:26:14 +00:00
Alex
623ed89100 Update How-to-use-different-LLM.md 2023-11-08 14:34:33 +00:00
Alex
8c114cae95 Update How-to-use-different-LLM.md 2023-11-08 14:33:11 +00:00
Alex
93dd58ec59 Update How-to-use-different-LLM.md 2023-11-08 14:32:18 +00:00
Alex
f0bc93ad8e Update How-to-use-different-LLM.md 2023-11-08 14:30:53 +00:00
Alex
27e8aad479 Update HACKTOBERFEST.md 2023-11-08 11:41:02 +00:00
Alex
6298578db9 Merge pull request #764 from Rutam21/patch-3
Add DigitalOcean Droplet Hosting solution to Hosting-the-app.md
2023-11-07 20:29:51 +00:00
Alex
f079e5dadd Merge pull request #763 from Rutam21/patch-2
Corrected Typo on _meta.json
2023-11-07 20:29:29 +00:00
Alex
e6fdead89f Merge pull request #762 from SamsShow/code/setting-ui
[Code]:Setting Section UI
2023-11-07 18:11:44 +00:00
Alex
cfa6e3982c Update HACKTOBERFEST.md 2023-11-07 16:38:00 +00:00
Rutam Prita Mishra
e4bc4d9071 Add DigitalOcean Droplet Hosting solution to Hosting-the-app.md
This PR adds a new deployment solution for DocsGPT.
2023-11-06 19:29:51 +05:30
Rutam Prita Mishra
d372e10f1a Corrected Typo on _meta.json
This PR changes `Rainway` to the correct spelling `Railway`.
2023-11-06 19:22:58 +05:30
Saksham Tyagi
d1c93754db Merge branch 'arc53:main' into code/setting-ui 2023-11-04 02:38:09 +05:30
Saksham Tyagi
3d54a1abf3 widget 2023-11-04 02:33:56 +05:30
Saksham Tyagi
06eef5779d Documents 2023-11-03 22:52:16 +05:30
Alex
b4d78376fb Merge pull request #759 from krabbi/add_table_render
Render tables in conversation
2023-11-03 13:12:46 +00:00
Saksham Tyagi
87a59a6de3 Prompt 2023-11-03 13:06:18 +05:30
Alexander Deshkevich
1e7741e341 Render tables in conversation 2023-11-02 18:51:15 -03:00
Alex
ae5e484506 fix docx file 2023-11-02 18:25:43 +00:00
Alex
c9dd219565 Update CONTRIBUTING.md 2023-11-02 11:11:58 +00:00
Alex
55eb662dc9 Merge pull request #758 from arc53/bug/UI-fixes-and-settings-prep
Bug/UI fixes and settings prep
2023-11-01 22:10:04 +00:00
Alex
2d202088c7 syntax 2023-11-01 21:48:52 +00:00
Alex
9f7c9180d9 prep settings slightly 2023-11-01 21:45:40 +00:00
Alex
fdc5e0a92d Merge pull request #756 from ArpitPandey29/main
docs: fix grammar issues
2023-11-01 12:10:54 +00:00
arpitpandey0209
7f6fef1373 Revert txt files 2023-11-01 10:11:40 +05:30
ArpitPandey29
e4973f572f Update combine_prompt_hist.txt 2023-11-01 10:08:36 +05:30
ArpitPandey29
eb768f2076 Update combine_prompt.txt 2023-11-01 10:07:25 +05:30
ArpitPandey29
df9723a011 Update combine_prompt.txt 2023-11-01 10:06:54 +05:30
ArpitPandey29
51ad3fdb0b Update combine_prompt.txt 2023-11-01 10:04:53 +05:30
ArpitPandey29
8e553e7a93 Update combine_prompt.txt 2023-11-01 10:04:30 +05:30
ArpitPandey29
4c70e92293 Update chat_reduce_prompt.txt 2023-11-01 10:03:50 +05:30
ArpitPandey29
3e983b121e Update chat_combine_prompt.txt 2023-11-01 10:02:03 +05:30
ArpitPandey29
9f3c962ea4 Update chat_combine_prompt.txt 2023-11-01 10:01:43 +05:30
ArpitPandey29
2a1a3fb1b5 Update chat_combine_prompt.txt 2023-11-01 09:59:46 +05:30
ArpitPandey29
bb28cc5c65 Update README.md 2023-11-01 09:58:13 +05:30
ArpitPandey29
23b6a38e18 Update README.md 2023-11-01 09:57:31 +05:30
arpitpandey0209
715cd9daf5 Revert some files 2023-11-01 09:51:09 +05:30
ArpitPandey29
cbfdaec394 Merge branch 'arc53:main' into main 2023-11-01 09:45:39 +05:30
Alex
bb527ac981 Update index.mdx 2023-10-31 23:22:23 +00:00
Alex
961c26894d Update README.md 2023-10-31 23:21:46 +00:00
Alex
693bdebb30 Merge pull request #752 from varundhand/feature/mobile-resp
Feat: enhance mobile responsiveness of Hero Page final
2023-10-31 23:13:04 +00:00
Varun Dhand
353e24f1c5 fix: container layout for firefox 2023-11-01 03:56:48 +05:30
Alex
59d1773057 Merge pull request #757 from gfggithubleet/patch-1
Update README.md
2023-10-31 21:57:50 +00:00
Alex
93a1368b60 Merge pull request #754 from IamSenthilKumar/fix-docs
Fix Guide: How to use other LLMS
2023-10-31 21:48:14 +00:00
gfggithubleet
3bc0fe5a70 Update README.md
TYPO FIXED
2023-11-01 01:09:38 +05:30
arpitpandey0209
973c11a048 docs: fix grammar issues 2023-11-01 01:04:40 +05:30
Alex
5094386516 hotfix-1 2023-10-31 18:21:44 +00:00
Alex
64477c6573 Merge pull request #691 from SoumyadiptoPal/settings
Feature: Add "Settings" Button to sidebar
2023-10-31 18:20:58 +00:00
Alex
f052c707e7 Merge branch 'main' into settings 2023-10-31 18:18:31 +00:00
Senthil Kumar N
de0e1d3e10 Fix Guide to use other LLMS 2023-10-31 23:41:27 +05:30
Alex
e273da1b5b Merge pull request #753 from THEGAMECHANGER416/patch-1
Improved Documentation: Added comments in application/worker.py
2023-10-31 17:56:09 +00:00
Alex
761f6963ab Merge pull request #738 from 0xrahul6/docs/enhance
Enhanced Guides/Customization.md
2023-10-31 17:25:37 +00:00
Alex
e5aff1316a Merge pull request #739 from 0xrahul6/docs/trains
Enhancement: Updated Train other docs
2023-10-31 17:24:47 +00:00
Alex
8ee0fbe6a3 Update README.md 2023-10-31 17:23:01 +00:00
Arnav Kohli
4c6b8b4173 Update worker.py
Added comments in difficult to understand areas
2023-10-31 20:00:07 +05:30
Varun Dhand
6940a75591 Merge branch 'feature/mobile-resp' of https://github.com/varundhand/DocsGPT into feature/mobile-resp 2023-10-31 19:33:03 +05:30
Varun Dhand
7ba939b008 feat: mobile responsive hero page 2023-10-31 19:27:45 +05:30
0xrahul6
6918a36bee Update Customising-prompts.md 2023-10-31 19:14:14 +05:30
0xrahul6
ba132fc411 Update Customising-prompts.md 2023-10-31 19:12:38 +05:30
Alex
b4a940a8d6 Merge pull request #750 from guspan-tanadi/settingsAPI-docs
style: formatting API-docs close parenthesis comment core/settings.py
2023-10-31 13:29:06 +00:00
Varun Dhand
5e0dd5c63b feat: mobile responsive hero page 2023-10-31 18:56:18 +05:30
Alex
f19114e530 Merge pull request #749 from Sai-Suraj-27/fix_codecov
Fixed wrong closing parenthesis in `codecov.yml`
2023-10-31 13:16:15 +00:00
Alex
0db40ecf0f Merge pull request #746 from Sai-Suraj-27/fix_branch_name
Fixed branch name from `master` to `main` in the contributing guide.
2023-10-31 13:10:50 +00:00
Alex
8289067a4e Merge pull request #709 from CBID2/making-colors-accessible
feat: made color accessible
2023-10-31 13:03:54 +00:00
Guspan Tanadi
9c5e3d094b docs: insert Method description /api/docs_check section API-docs 2023-10-31 19:59:09 +07:00
Guspan Tanadi
cb12b19c1e Merge branch 'arc53:main' into settingsAPI-docs 2023-10-31 19:53:59 +07:00
Alex
5d0b8588f9 Revert "delete conflicting checkmark"
This reverts commit 266087c5f1.
2023-10-31 12:52:34 +00:00
Alex
0c05e1036d del 2023-10-31 12:52:27 +00:00
Alex
266087c5f1 delete conflicting checkmark 2023-10-31 12:50:38 +00:00
Alex
147b94d936 checkmark bug 2023-10-31 12:49:25 +00:00
Alex
872511ebb9 Merge pull request #731 from harshita-2003/patch-2
Update README.md
2023-10-31 12:41:55 +00:00
Guspan Tanadi
ce8ed5bfeb style: formatting /api/task_status API-docs 2023-10-31 16:45:23 +07:00
Guspan Tanadi
d81838dfc4 docs: close parentheses EMBEDDINGS_KEY comment settings 2023-10-31 16:40:41 +07:00
Sai-Suraj-27
79ec3594fe Fixed wrong closing parenthesis in codecov.yml 2023-10-31 12:03:13 +05:30
Sai-Suraj-27
cdb246697e Fixed branch name from master to main in contributing guide. 2023-10-31 11:03:04 +05:30
Alex
6476e688e5 Fixes sidebar 2023-10-30 22:41:35 +00:00
Alex
5d1ec6a9c8 Update README.md 2023-10-30 22:07:34 +00:00
Alex
be8a7e981a Update README.md 2023-10-30 22:01:04 +00:00
Alex
d59731a678 Merge pull request #722 from lakshmi930/update-sidebar-ui
Update sidebar UI and Logo
2023-10-30 21:55:00 +00:00
Lakshmi Narayanan
0254510d53 Fix the rotation of the avatar 2023-10-31 00:40:35 +04:00
Lakshmi Narayanan
9327955891 Update sidebar effects and styles based on figma 2023-10-31 00:38:12 +04:00
Christine Belzie
4daf08e20f fix: use a new color 2023-10-30 13:21:53 -04:00
SoumyadiptoPal
6fc31ddedb Updated the code 2023-10-30 22:32:39 +05:30
0xrahul6
fac8c9ee4e Enhancement: Updated Train other docs 2023-10-30 14:53:08 +00:00
0xrahul6
d05f7e2084 Enhanced Guides/Customizi 2023-10-30 14:23:21 +00:00
Alex
0a0a6bae0f Merge pull request #728 from ka1bi4/update/docs-improve-and-fixes
Update/docs improve and fixes
2023-10-30 13:19:42 +00:00
Roman Zhukov
560c063db4 Update Quickstart docs with bash language hl. 2023-10-30 13:20:19 +03:00
Roman Zhukov
54ac2d33e2 Update Quickstart docs with bash language hl. 2023-10-30 13:06:37 +03:00
HARSHITA GUPTA
fb3be8a6a0 Update README.md
added a lighting emoji to give it more great look
2023-10-30 13:00:47 +05:30
Lakshmi Narayanan
5a33953b78 Add Chats heading if there are any conversations 2023-10-30 10:14:40 +04:00
Alex
ba7a8fc796 Merge pull request #710 from mishmanners/main
Make README accessible
2023-10-30 01:12:51 +00:00
Michelle "MishManners®™" Mannering
0bdee8219a Fix README
for some reason - things were missing; fixed now
2023-10-30 11:59:47 +11:00
Alex
f82951f412 Merge pull request #723 from FarukhS52/main
Fix typo
2023-10-30 00:42:02 +00:00
Alex
35e188b851 Merge pull request #719 from theprince29/patch-1
Update CONTRIBUTING.md
2023-10-29 23:38:50 +00:00
Roman Bug
8990e4666a Update Quickstart.md 2023-10-30 01:46:06 +03:00
Roman Bug
ceff618e5d Update API-docs.md 2023-10-30 01:18:52 +03:00
Roman Bug
cf3aab9d38 Update Quickstart.md 2023-10-30 01:06:24 +03:00
Roman Bug
a74c70e8a1 Update README.md 2023-10-30 00:59:17 +03:00
Alex
46817c7664 Merge pull request #679 from akash0708/fix/hero-styling
fix: Hero section styling fixed, made responsive across all devices
2023-10-29 21:40:59 +00:00
Akshay
c0c9cab14c Formatting 2023-10-29 21:38:17 +05:30
Alex
478a034740 Merge pull request #725 from beKoool/remote-upload-ui
Design the Remote sources upload menu UI
2023-10-29 13:57:07 +00:00
beKool.sh
01693cb155 Fix random spaces 2023-10-29 17:13:23 +05:45
Farookh Zaheer Siddiqui
7a44c9e650 Update Railway-Deploying.md 2023-10-29 14:09:56 +05:30
Farookh Zaheer Siddiqui
70a6a275f4 Update CODE_OF_CONDUCT.md 2023-10-29 14:03:05 +05:30
Lakshmi Narayanan
e627ebc127 Small fix with fixed height and width 2023-10-29 02:01:18 +04:00
Lakshmi Narayanan
56b81b78c3 Update sidebar with new logo and icon 2023-10-29 01:48:04 +04:00
Lakshmi Narayanan
c304485079 Fix the color of documentation icon 2023-10-29 01:44:00 +04:00
Lakshmi Narayanan
df51797c29 Add expand icon 2023-10-29 01:43:09 +04:00
Lakshmi Narayanan
754339214c Update logo in conversation 2023-10-29 01:42:26 +04:00
Lakshmi Narayanan
057ecc3ed9 Update logo in Homepage and About page 2023-10-29 01:41:55 +04:00
Lakshmi Narayanan
c14f79ebf7 Add the new logo 2023-10-29 01:40:50 +04:00
Pavel
71fdff17de Merge pull request #721 from arc53/feature/anthropic
anthropic LLM
2023-10-28 22:58:30 +04:00
Alex
04b4001277 anthropic working 2023-10-28 19:51:12 +01:00
PRINCE PAL
fbfb8a3b41 Update CONTRIBUTING.md
Under the workflow heading I found that in point 2 & 3 shell was missing, that I rectified and corrected it
2023-10-28 21:17:25 +05:30
dependabot[bot]
1bee088fe6 Bump urllib3 from 1.26.17 to 1.26.18 in /application
Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.17 to 1.26.18.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/1.26.17...1.26.18)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-28 16:52:37 +02:00
Akash Bag
d2e4d6ecf0 fix: fixed bug due to change in number of lines of text 2023-10-28 11:01:50 +05:30
Akash Bag
5f03f90582 fix: fixed styling for firefox 2023-10-27 16:35:59 +05:30
Alex
e54d46aae1 Update Navigation.tsx 2023-10-27 02:22:04 +01:00
Alex
54a3b9900e fix error msg 2023-10-27 01:52:09 +01:00
Michelle "MishManners®™" Mannering
1dc16e900a Make README accessible
Add accessibility features to README file based on the top tips from GitHub: https://github.blog/2023-10-26-5-tips-for-making-your-github-profile-page-accessible/
2023-10-27 11:22:47 +11:00
Alex
08a7e666b2 Update ConversationTile.tsx 2023-10-27 00:52:15 +01:00
Alex
678fd28f1d Update Railway-Deploying.md 2023-10-26 22:53:57 +01:00
Christine
ff89c3b274 feat: made color accessible 2023-10-26 19:12:33 +00:00
Alex
cff7aebe55 fix 2023-10-26 18:06:26 +01:00
Alex
ed3a3d0876 Merge pull request #706 from debnath003/main
Update some markdown files
2023-10-26 17:53:45 +01:00
Alex
425cd9eb26 Merge pull request #707 from akash0708/docs/typo
docs: Typo in quickstart guide fixed
2023-10-26 17:39:26 +01:00
Alex
ebe84dd8a4 Merge pull request #703 from Ankit-Matth/revamp_icon_effects
Background color of like/dislike has been changed
2023-10-26 10:39:52 +01:00
Akash Bag
217c4144b5 docs: Typo in quickstart guide fixed 2023-10-26 13:09:08 +05:30
Pronay Debnath
aaeed64621 Update and rename Hosting-DocsGPT.md to Hosting-the-app.md 2023-10-26 11:17:19 +05:30
Pronay Debnath
9133a56d2a Update _meta.json 2023-10-26 11:16:26 +05:30
Pronay Debnath
12bd7dc44f Update _meta.json 2023-10-26 11:14:30 +05:30
Pronay Debnath
32fa86adaa Update _meta.json 2023-10-26 11:11:41 +05:30
Pronay Debnath
1811cff1f9 Rename Hosting-the-app.md to Hosting-DocsGPT.md 2023-10-26 10:59:32 +05:30
Pronay Debnath
2e6a5c0525 Update My-AI-answers-questions-using-external-knowledge.md 2023-10-26 10:57:24 +05:30
Pronay Debnath
b68b214d08 Update My-AI-answers-questions-using-external-knowledge.md 2023-10-26 10:56:57 +05:30
Pronay Debnath
153d12d93f Update How-to-use-different-LLM.md 2023-10-26 10:47:35 +05:30
Pronay Debnath
1a62a773ae Update react-widget.md 2023-10-26 10:40:19 +05:30
Pronay Debnath
5f0cccb81e Update Customising-prompts.md 2023-10-26 10:37:58 +05:30
Ankit Matth
5c7f4b3df7 Merge branch 'arc53:main' into revamp_icon_effects 2023-10-26 07:17:40 +05:30
Ankit Matth
1248b76b41 Background color of like/dislike has been changed 2023-10-26 06:20:50 +05:30
Alex
c4176af1ea Merge pull request #672 from Ankit-Matth/revamp_icon_effects
[feature] : The UI of the feedback icons has been changed.
2023-10-25 22:29:50 +01:00
Alex
799c306138 Merge branch 'main' of https://github.com/arc53/DocsGPT 2023-10-25 22:27:12 +01:00
Alex
c65e3fdf62 Update _meta.json 2023-10-25 22:27:11 +01:00
Alex
08712ef4f8 Merge pull request #688 from SoumyadiptoPal/stickyNavbar
Feature: left side toggle button should be sticky
2023-10-25 22:23:57 +01:00
Alex
d0119f5bf1 Merge pull request #694 from Ram-tyagi/main
[Add] Vite and react documentation link
2023-10-25 22:05:59 +01:00
Alex
f3c626c800 Merge pull request #556 from Exterminator11/openapi3_parser
Parser for OpenAPI3(Swagger)
2023-10-25 21:27:45 +01:00
Alex
f1891478d5 Merge pull request #692 from Yash-sudo-web/main
Enhanced How-to-use-different-LLM.md
2023-10-25 21:26:32 +01:00
Ankit Matth
7a2e6e640d [making required changes] - Update ConversationBubble.tsx 2023-10-25 21:48:25 +05:30
Soumyadipto Pal
890d418639 Merge branch 'main' into stickyNavbar 2023-10-25 21:07:53 +05:30
Ram-tyagi
f38c934a6d [Add] Vite and react documentation link 2023-10-25 17:42:01 +05:30
Exterminator11
f3540aac0f Changed import 2023-10-25 17:07:47 +05:30
Exterminator11
889ce984a9 Made changes 2023-10-25 16:50:01 +05:30
Alex
89a437149c Merge pull request #690 from thanmaisai/main
Update README.md
2023-10-25 07:19:04 -04:00
Alex
43f65651ac Merge pull request #675 from krishvsoni/patch-2
Patch 2
2023-10-25 06:57:29 -04:00
Alex
d74d69c1c8 Merge pull request #681 from nobunagaaa/docs/enhance
Enhancement: Update react-widget.md docs
2023-10-25 06:52:53 -04:00
Alex
fefc85683c Merge pull request #677 from AnkitaSikdar005/code4
Fixed Typos and minor issues in CODE_OF_CONDUCT.md
2023-10-25 06:41:37 -04:00
Alex
6f97158c0e Merge pull request #676 from Gourav2609/main
Fixed overflow
2023-10-25 06:39:30 -04:00
Alex
1320101112 Merge pull request #663 from neha3423/main
Updated workflow of Contributing.md
2023-10-25 06:32:11 -04:00
Alex
031a267394 Merge pull request #537 from AnkitaSikdar005/code1
Updated CONTRIBUTING.md
2023-10-25 06:30:41 -04:00
Alex
9119030959 Merge pull request #670 from iamakhileshmishra/main
Railway Deployment Guide Added
2023-10-25 06:26:09 -04:00
Yash-sudo-web
9b10a8028d Enhanced How-to-use-different-LLM.md 2023-10-25 15:38:19 +05:30
SoumyadiptoPal
4be38fcb0e Settings page added 2023-10-25 10:24:55 +05:30
noobcoder
9090f4485a Update README.md 2023-10-24 19:31:39 +00:00
iamakhileshmishra
5749d66ac9 Merge branch 'main' of https://github.com/iamakhileshmishra/DocsGPT 2023-10-24 23:59:42 +05:30
Akhilesh Kumar Mishra
4bb4b4eb1d Merge branch 'arc53:main' into main 2023-10-24 23:59:31 +05:30
iamakhileshmishra
103d062f74 package conflict solved 2023-10-24 23:57:49 +05:30
SoumyadiptoPal
9893480089 Made the navbar sticky 2023-10-24 22:27:55 +05:30
Alex
5dbd240017 Update DocsGPT tee-back.jpeg 2023-10-24 10:51:28 +01:00
nobunagaaa
e0dce8fd01 Enhancement: Update react-widget.md docs 2023-10-24 15:05:43 +05:30
Akash Bag
492139942c fix: Hero section styling fixed, made responsive across all devices 2023-10-24 10:38:51 +05:30
Exterminator11
8ebff1a908 Updated test_openapi3parser.py 2023-10-24 07:43:57 +05:30
Ankita Sikdar
44def1f6bc Update CODE_OF_CONDUCT.md 2023-10-24 01:05:53 +05:30
Gourav2609
8934b9ab5c Fixed overflow 2023-10-23 23:15:07 +05:30
KRISH SONI
130a6b67bd Update README.md 2023-10-23 22:16:19 +05:30
Alex
2df32cd9a7 Merge pull request #668 from lakshmi930/fix-small-ui-bugs
Fix something went wrong message bubble
2023-10-23 12:43:22 -04:00
Alex
d413d58b47 Merge pull request #652 from rasvanjaya21/main
Enhance backdrop modal effect
2023-10-23 12:38:06 -04:00
KRISH SONI
9e632aa0bd Update README.md 2023-10-23 22:01:07 +05:30
Alex
964020ee12 Merge pull request #651 from 0xrahul6/docs/enhance
Enhanced Chatwoot-Extension
2023-10-23 12:10:16 -04:00
Alex
672e14d6ea Merge pull request #641 from Ankit-Matth/documentation_icon_changed
I have changed icon for Documentation in the left side bar.
2023-10-23 11:59:34 -04:00
Alex
54baf04a86 Merge pull request #658 from RDxR10/main
Update react-widget.md
2023-10-23 11:56:03 -04:00
Alex
64c83460b9 Merge pull request #656 from Lokendrakushwah12/main
added the 3 cards in hero section with gradient border also its respo…
2023-10-23 11:52:49 -04:00
Alex
74ec3fa7d4 Merge pull request #655 from parthrc/adding-custom-404-page
Added a custom 404 not found page
2023-10-23 11:44:24 -04:00
Alex
0821d7a803 Merge pull request #654 from adarsh-jha-dev/patch-2
Enhanced Hooks with Outside Click Handling and Dark Mode Detection
2023-10-23 11:40:59 -04:00
Ankita Sikdar
55beb3978c Merge branch 'main' into code1 2023-10-23 17:30:18 +05:30
Ankita Sikdar
9b044815de Update CODE_OF_CONDUCT.md 2023-10-23 17:07:25 +05:30
Ankit Matth
0668fea3b7 Revamp Icon Effects 2023-10-23 17:01:41 +05:30
iamakhileshmishra
a6677b2e45 Railway Deployment Guide Added 2023-10-23 14:44:37 +05:30
Yusril A
6f544f56d8 Merge branch 'arc53:main' into main 2023-10-23 11:20:04 +07:00
Lakshmi Narayanan
5556be9cab Fix something went wrong message bubble 2023-10-23 00:52:01 +04:00
Alex
465c4afe8d Merge pull request #653 from Ankit-Matth/aboutPage_alignment
I have aligned about page in the center.
2023-10-22 11:59:39 -04:00
Alex
78dd1e1d81 Merge branch 'main' into aboutPage_alignment 2023-10-22 11:56:13 -04:00
Alex
eebfc78ad3 Merge pull request #626 from shruti-sen2004/edit_2
Update react-widget.md
2023-10-22 11:53:04 -04:00
Alex
4783685fdb Merge pull request #650 from SamsShow/main
[Bug]:About section not Aligned.
2023-10-22 11:51:13 -04:00
Alex
bfd0363fad Merge pull request #647 from beKoool/settings-ui
Design the Settings UI
2023-10-22 11:48:21 -04:00
KRISH SONI
e9323ba2ec Merge branch 'main' into patch-2 2023-10-22 21:17:56 +05:30
Alex
dac774c9d2 Merge pull request #639 from Ayush-Prabhu/patch-2
Update How-to-use-different-LLM.md
2023-10-22 11:39:44 -04:00
Alex
664ee2b433 Merge pull request #637 from debghs/debghs-patch-1
Update How-to-train-on-other-documentation.md
2023-10-22 11:36:50 -04:00
Alex
85f283fe2b Merge pull request #636 from Ritish134/patch-1
Update README.md
2023-10-22 11:33:43 -04:00
Alex
7bd7d66afc Merge branch 'main' into patch-1 2023-10-22 11:31:51 -04:00
Alex
81b16aa900 Merge pull request #638 from Bitnagar/fix(frontend)--fix-header-z-index
Fix(frontend):  fix Navigation component z-index for small devices
2023-10-22 11:27:25 -04:00
Alex
e07fb34ace Merge pull request #504 from ayan-joshi/New-prompt-
Improving Customising_prompt.md file
2023-10-22 11:19:03 -04:00
Alex
9303746d80 Merge branch 'main' into New-prompt- 2023-10-22 11:16:03 -04:00
Alex
e3e8e67cb4 Merge pull request #627 from YadlaMani/main
Fixed Typo
2023-10-22 11:15:00 -04:00
Alex
6490027e57 Merge pull request #577 from shivanandmn/faiss_delete_index
added delete index of vector store in faiss
2023-10-22 10:15:31 -04:00
neha3423
6cbe4f2ea7 neha3423 2023-10-22 11:38:23 +05:30
neha3423
960365a063 neha3423 2023-10-22 10:54:02 +05:30
RDxR10
839d614c9c Update react-widget.md 2023-10-21 12:53:47 +05:30
Lokendra Kushwah
ae13e557a7 added the 3 cards in hero section with gradient border also its responsive 2023-10-20 23:24:56 +05:30
unknown
a245383f8c Added a custom 404 not found page 2023-10-20 16:22:35 +05:30
Adarsh Jha
78b8d3e41d Update index.ts 2023-10-20 14:33:43 +05:30
Ankit Matth
fcfaa04cc6 About page aligned in center 2023-10-20 05:42:12 +05:30
rasvanjaya21
fe866b2d66 Enhance backdrop modal effect 2023-10-20 02:57:41 +07:00
0xrahul6
e7bbc4ac0c Enhanced Chatwoot-Extension 2023-10-19 17:13:30 +00:00
Saksham Tyagi
3b746c91df Merge branch 'main' of https://github.com/SamsShow/DocsGPT 2023-10-19 20:15:45 +05:30
Saksham Tyagi
06f0129b59 💄 About margin 2023-10-19 20:12:40 +05:30
Alex
641e75b8a8 Merge pull request #620 from rahul0x00/docs/API-docs
Enhancement: Improve API Endpoint Documentation
2023-10-19 07:41:17 -04:00
Alex
35f9fda457 Merge pull request #633 from Rutam21/patch-1
Added Civo Cloud Deployment option.
2023-10-19 07:38:42 -04:00
Alex
de29d69efe Merge pull request #619 from vedant-z/patch-1
fix: Direction of dropdown arrow corrected
2023-10-19 07:37:19 -04:00
Alex
f587af1005 Merge pull request #625 from krabbi/prevent_unnecessary_renders
[FE] Prevent unnecessary renders.
2023-10-19 07:30:56 -04:00
Alex
4ed6580e1d Merge pull request #623 from krabbi/fix_lists_in_converastion
[FE]: Fix render lists in conversation
2023-10-19 07:29:02 -04:00
beKool.sh
2f6213c944 Change font weight 2023-10-19 11:11:30 +05:45
KRISH SONI
f365b76cfc Merge branch 'main' into patch-2 2023-10-18 21:58:17 +05:30
KRISH SONI
55921b262f docker docs 2023-10-18 21:54:28 +05:30
Alex
3039c97989 Merge pull request #622 from Yash-sudo-web/main
Updated Chatwoot-extension.md
2023-10-18 11:30:47 -04:00
Alex
a1af4f19c5 Merge pull request #621 from arc53/dependabot/npm_and_yarn/frontend/babel/traverse-7.23.2
Bump @babel/traverse from 7.20.13 to 7.23.2 in /frontend
2023-10-18 11:13:48 -04:00
Alex
131e4087fd Merge pull request #631 from Raunakkumarr/docsCorrection-selfHosting
Correction in docs
2023-10-18 11:11:11 -04:00
Ankit Matth
ee6471351d Icon for Documentation changed 2023-10-18 18:41:03 +05:30
Ayush-Prabhu
d93266fee2 Update How-to-use-different-LLM.md
Corrected grammatical errors to remove ambiguity and improve professionalism.
2023-10-18 16:21:15 +05:30
Shivam Bhatnagar
dbbf39db6d pre-commit hook 2023-10-18 10:34:50 +00:00
Shivam Bhatnagar
d40ea44ae6 fix(frontend): fix navigation z-index in mobiles 2023-10-18 10:33:44 +00:00
debghs
f0d4847946 Update How-to-train-on-other-documentation.md 2023-10-18 12:46:29 +05:30
Ritish Srivastava
98a9c766ef Update README.md
Enhance grammar and formatting in various sections.
2023-10-18 07:49:24 +05:30
Rutam Prita Mishra
91393b650b Added Civo Cloud Deployment option.
This change adds a new deployment guide for Civo Compute Cloud.
2023-10-18 04:09:50 +05:30
Alex
49a4b119e1 Merge pull request #615 from shelar1423/patch3
FIX: added script in package.json of docs and added instructions to run nextra-DocsGPT locally
2023-10-17 14:21:51 -04:00
Alex
e69fab822b Merge pull request #624 from krabbi/fix_footer_width
[FE] Fix footer width on md+ screens
2023-10-17 14:19:31 -04:00
Raunak Kumar
45c58cc766 Correction in docs 2023-10-17 23:31:13 +05:45
ManiYadla
ca48f000bd changed the typos 2023-10-17 20:40:27 +05:30
Shivanand
21ba1e3958 Merge branch 'main' into faiss_delete_index 2023-10-17 17:35:30 +05:30
Rahul Kumar
062f3256a7 Update API-docs.md 2023-10-17 12:28:50 +05:30
Rahul Kumar
186f565b99 Update API-docs.md 2023-10-17 12:27:38 +05:30
Vedant Borkar
5c2b4398d9 Update Navigation.tsx 2023-10-17 11:53:57 +05:30
Rahul Kumar
a9fb61bbd6 Update API-docs.md 2023-10-17 08:28:13 +05:30
Shruti Sen
a51e25dbde Update react-widget.md 2023-10-17 08:09:32 +05:30
Alexander Deshkevich
0a717ae82e Prevent unnecessary renders. Update show/hide state of feedback buttons by CSS instead of React 2023-10-16 20:13:20 -03:00
Alexander Deshkevich
f9e6751279 fix render lists in conversation 2023-10-16 18:59:31 -03:00
Alexander Deshkevich
0306f8ec65 Fix footer width on md+ screens 2023-10-16 16:35:33 -03:00
Alex
66f2e549ce Shirt preview 2023-10-16 15:02:28 -04:00
Alex
9ab413643a Update HACKTOBERFEST.md 2023-10-16 15:02:02 -04:00
Alex
3a4eeb01b0 Update CONTRIBUTING.md 2023-10-16 15:00:34 -04:00
Alex
57a8dcc155 Merge pull request #616 from Sanyam-2026/main
Fixed Typo
2023-10-16 13:59:07 -05:00
Yash-sudo-web
2f21476b2a Update Chatwoot-extension.md 2023-10-17 00:18:40 +05:30
Yash-sudo-web
9f9e2f3b24 Update Chatwoot-extension.md 2023-10-17 00:15:03 +05:30
dhselar1423
f886dfb60c removed commandline 2023-10-16 23:28:39 +05:30
dependabot[bot]
c22b014056 Bump @babel/traverse from 7.20.13 to 7.23.2 in /frontend
Bumps [@babel/traverse](https://github.com/babel/babel/tree/HEAD/packages/babel-traverse) from 7.20.13 to 7.23.2.
- [Release notes](https://github.com/babel/babel/releases)
- [Changelog](https://github.com/babel/babel/blob/main/CHANGELOG.md)
- [Commits](https://github.com/babel/babel/commits/v7.23.2/packages/babel-traverse)

---
updated-dependencies:
- dependency-name: "@babel/traverse"
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-16 17:53:36 +00:00
Alex
d899b6a7e1 Merge pull request #588 from asoderlind/fix/as/embedding-size-mismatch
raise more legible error if the word embedding dimensions don't match
2023-10-16 12:53:08 -05:00
Alex
450dde3739 Merge pull request #614 from 5h0ov/patch-3
Update Chatwoot-extension.md
2023-10-16 12:43:05 -05:00
Alex
2ac40903f3 Update README.md 2023-10-16 13:36:54 -04:00
Exterminator11
f328b39f57 Fixed import error 2023-10-16 21:05:47 +05:30
Rahul Kumar
06cc4b07ab Update API-docs.md 2023-10-16 20:35:17 +05:30
Rahul Kumar
1c0b68f0e3 Update API-docs.md 2023-10-16 20:27:09 +05:30
Rahul Kumar
efcce6a826 Enhancement: Improve API Endpoint Documentation 2023-10-16 14:51:24 +00:00
Vedant Borkar
fa8177d0e5 Update Navigation.tsx 2023-10-16 19:52:16 +05:30
asoderlind
d51cd8df89 add file docstring 2023-10-16 11:31:10 +02:00
Sanyam Jain
5530d611b9 Update Hosting-the-app.md 2023-10-16 15:00:11 +05:30
asoderlind
e73636bef3 remove trailing whitespace, sort imports, remove unused arguments 2023-10-16 11:22:42 +02:00
dhselar1423
74ff994281 nextra-docsGPT local setup 2023-10-16 10:59:25 +05:30
dhselar1423
7b28d353ee made changes in package.json and added instructions nextra-docsgpt 2023-10-16 10:57:49 +05:30
asoderlind
e2a8ca143a remove unused imports 2023-10-16 06:13:26 +02:00
asoderlind
4e81f98927 add dependency 2023-10-16 06:13:15 +02:00
Shuvadipta Das
ab4c994266 Update Chatwoot-extension.md 2023-10-16 08:56:38 +05:30
Alex
aea6a434f1 Merge pull request #610 from rahul0x00/enhance/quickstart
Quickstart: Enhanced Documentation for Launching DocsGPT and Installing Chrome Extension
2023-10-15 16:48:08 -05:00
Alex
6c95d5a2de Merge pull request #579 from ankur0904/contribution-guideline
Adds contributing guidelines with steps
2023-10-15 16:46:19 -05:00
Alex
fcaabb2c1e Merge pull request #595 from HeetVekariya/main
fix: Remove extra spaces in response
2023-10-15 16:44:40 -05:00
Alex
66b2722cad Merge pull request #609 from shelar1423/patch2
CONTRIBUTING.MD : Added DocsGPT -Hacktoberfest t-shirt images
2023-10-15 16:37:00 -05:00
shivanandmn
2e95666939 added new endpoint 2023-10-16 02:46:48 +05:30
Akshay
cdfcd99695 Fix docs_check and upload endpoint 2023-10-15 21:31:02 +05:30
Akshay
e71d21fc27 Fix render method 2023-10-15 20:23:07 +05:30
Akshay
e95ebfd6a0 Add docker files 2023-10-15 20:23:07 +05:30
Akshay
e5a875856a Minor 2023-10-15 20:23:06 +05:30
Akshay
930218c067 Mock task status call 2023-10-15 20:23:06 +05:30
Akshay
ff1362e462 Mock upload call 2023-10-15 20:23:06 +05:30
Akshay
01457bbe79 Add feedback endpoint 2023-10-15 20:23:06 +05:30
Akshay
8c7da0bdb6 Add more routes 2023-10-15 20:23:06 +05:30
Akshay
6f634c3f13 Add mock server module 2023-10-15 20:23:06 +05:30
Exterminator11
a7f5303eaf Cleaned up the code 2023-10-15 17:20:50 +05:30
Exterminator11
7159e4fbe2 Formatted files 2023-10-15 17:16:58 +05:30
Exterminator11
36b243e9d2 Formatted all the changed files 2023-10-15 17:16:12 +05:30
Exterminator11
bd70e00f08 Added tests and updated openapi3_parser.py 2023-10-15 17:00:54 +05:30
asoderlind
0ca96130c8 remove trailing whitespace 2023-10-15 10:23:09 +02:00
asoderlind
09aa56b63d add test 2023-10-15 10:22:07 +02:00
asoderlind
60cd6a455a refactor 2023-10-15 10:22:00 +02:00
asoderlind
4752ce5250 fix linting error 2023-10-15 09:12:00 +02:00
Rahul Kumar
832569a79c Update Quickstart.md 2023-10-15 09:54:25 +05:30
Rahul Kumar
ecd8cebbef Quickstart: Enhanced Documentation for Launching DocsGPT and Installing Chrome Extension 2023-10-15 04:20:09 +00:00
ankur0904
3c37efa650 Adds the Note for testing 2023-10-15 09:41:27 +05:30
Ankur Singh
21b6ce204d Merge branch 'arc53:main' into contribution-guideline 2023-10-15 09:32:43 +05:30
HeetVekariya
337d2970a0 fix: removed px-2 for source 2023-10-15 09:03:28 +05:30
HeetVekariya
3e5bd25c6e fix: removed items-center from conversationBubble.tsx 2023-10-15 08:47:55 +05:30
Digvijay Shelar
7f0f68b707 Delete Assets/images.md 2023-10-15 02:35:49 +05:30
Digvijay Shelar
ea85482736 Update CONTRIBUTING.md 2023-10-15 02:35:23 +05:30
Digvijay Shelar
01160a5361 Add files via upload 2023-10-15 02:31:45 +05:30
Digvijay Shelar
f4b5a02197 Create images.md 2023-10-15 02:31:08 +05:30
Alex
f724f10a35 Merge pull request #607 from mozi47/fix/sidebar-glitch
sidebar glitch fixed
2023-10-14 15:05:48 -05:00
Alex
0c221ba3d7 Merge pull request #554 from arc53/dependabot/pip/scripts/langchain-0.0.312
Bump langchain from 0.0.308 to 0.0.312 in /scripts
2023-10-14 15:03:38 -05:00
Alex
1907aaf32f Merge pull request #580 from DHRUVKADAM22/DHRUVKADAM22-patch-5
Dhruvkadam22 patch 5
2023-10-14 14:59:55 -05:00
Alex
d6f26b3133 Merge pull request #574 from rahul0x00/enhance/deployment_docs
Enhanced Documentation for Self-Hosting DocsGPT on Amazon Lightsail
2023-10-14 14:53:39 -05:00
Alex
c97a55e65f Merge pull request #460 from alienishi/main
Modified README resolving issue #457
2023-10-14 14:52:08 -05:00
Alex
4a6e38f7da Merge pull request #602 from siddwarr/patch-1
Update README.md
2023-10-14 14:50:48 -05:00
Alex
845ef42338 Merge pull request #598 from Juneezee/simplify-jsx-conditional-rendering
refactor(frontend): simplify JSX conditional rendering
2023-10-14 14:48:29 -05:00
Alex
fde8de8b9e Merge pull request #558 from MSaiKiran9/train-button
Train button Disabled Before Selecting File
2023-10-14 14:43:50 -05:00
Muzakir Shah
88123261ac sidebar glitch fixed 2023-10-15 00:38:46 +05:00
Alex
c04b76528b Merge pull request #597 from shelar1423/main
[Docs] :Enhance Tech Stack Overview with Emojis and Bullet Points
2023-10-14 14:28:18 -05:00
dependabot[bot]
04a13c2ebb Bump langchain from 0.0.308 to 0.0.312 in /scripts
Bumps [langchain](https://github.com/langchain-ai/langchain) from 0.0.308 to 0.0.312.
- [Release notes](https://github.com/langchain-ai/langchain/releases)
- [Commits](https://github.com/langchain-ai/langchain/compare/v0.0.308...v0.0.312)

---
updated-dependencies:
- dependency-name: langchain
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-14 19:20:49 +00:00
Alex
6b3cc62cbe Merge pull request #553 from arc53/dependabot/pip/application/langchain-0.0.312
Bump langchain from 0.0.308 to 0.0.312 in /application
2023-10-14 14:20:07 -05:00
Alex
8627be07e7 Merge pull request #592 from faria-karim-porna/ui-sidebar-chat-options
Fix the font size and alignment of all the chat options of the sidebar based on issue #586
2023-10-14 14:13:58 -05:00
Alex
5509a5bca3 Merge pull request #589 from outlivo/outlivo-UI
Closing #357 using CONTRIBUTING.md
2023-10-14 14:10:47 -05:00
Alex
dd52949a2a Merge pull request #510 from HarshMN2345/patch-2
Update API-docs.md
2023-10-14 13:53:53 -05:00
Alex
a310ae6566 Merge branch 'main' into patch-2 2023-10-14 13:52:25 -05:00
Aditya Aryaman Das
1f8643c538 Updated README.md 2023-10-14 19:03:09 +05:30
Aditya Aryaman Das
6ea313970d Merge branch 'arc53:main' into main 2023-10-14 19:01:02 +05:30
Siddharth Warrier
13e6b15308 Update README.md 2023-10-14 14:53:38 +05:30
Eng Zer Jun
0efc2277dd refactor(frontend): simplify JSX conditional rendering
JSX conditional rendering can be simplified to use the logical AND
operator (&&) [1] instead of ternary operator (? :) if we want to render
something only when the condition is true, and nothing otherwise.

[1]: https://react.dev/learn/conditional-rendering#logical-and-operator-

Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2023-10-14 13:39:23 +08:00
Pavel
381a2740ee change input 2023-10-13 21:52:56 +04:00
Digvijay Shelar
9e6aecd707 Update CONTRIBUTING.md 2023-10-13 18:35:13 +05:30
Digvijay Shelar
f5510ef1b5 Update CONTRIBUTING.md 2023-10-13 18:32:56 +05:30
Alex
8b3b16bce4 inputs 2023-10-13 08:46:35 +01:00
Pavel
024674eef3 List check 2023-10-13 11:42:42 +04:00
HeetVekariya
c8e6224946 fix: else text corrected 2023-10-13 09:46:07 +05:30
HeetVekariya
bf11300ab3 fix: Remove extra spaces in response 2023-10-13 09:33:55 +05:30
faria-karim-porna
7361a35c94 Fix the font size and alignment of all the chat options of the sidebar based on issue #586
2023-10-13 06:01:27 +06:00
Outlivo
02b2cebb85 Solving issue #357 using CONTRIBUTING.md 2023-10-13 01:07:49 +05:30
asoderlind
9b6ae46e92 raise more legible error if the embedding vector dims don't match 2023-10-12 20:24:25 +02:00
Alex
e5e5a42736 Update CONTRIBUTING.md 2023-10-12 18:19:40 +01:00
Alex
308d8afe4e Update CONTRIBUTING.md 2023-10-12 18:19:06 +01:00
Pavel
b7d88b4c0f fix wrong link 2023-10-12 19:45:36 +04:00
Pavel
719ca63ec1 fixes 2023-10-12 19:40:23 +04:00
Rahul Kumar
2100cd77ce Update Hosting-the-app.md 2023-10-12 18:52:54 +05:30
Harsh Mahajan
58b13ae69a Update API-docs.md 2023-10-12 17:31:41 +05:30
Pavel
2cfb416fd0 Desc loader 2023-10-12 13:44:32 +04:00
M Sai Kiran
993c9b31bd Update Upload.tsx 2023-10-12 15:10:26 +05:30
M Sai Kiran
b5d6f0ad36 Update Upload.tsx 2023-10-12 15:07:48 +05:30
M Sai Kiran
03c05a82e4 Merge branch 'arc53:main' into train-button 2023-10-12 14:59:08 +05:30
DHRUVKADAM22
cc887d25e4 Update Quickstart.md
added Windows Quickstart guide in Quickstart.md to make it easier to understand and more user friendly
2023-10-12 14:48:44 +05:30
ankur0904
80e2d0651b Adds contributing guidelines with steps 2023-10-12 14:32:44 +05:30
Pavel
50f07f9ef5 limit crawler 2023-10-12 12:53:33 +04:00
Pavel
c517bdd2e1 Crawler + sitemap 2023-10-12 12:35:26 +04:00
shivanandmn
ca3e549dd4 added delete index of vector store in faiss 2023-10-12 13:29:52 +05:30
Rahul Kumar
51f2ca72b9 Enhanced Documentation for Self-Hosting DocsGPT on Amazon Lightsail 2023-10-12 03:19:40 +00:00
Alex
771950f1de Merge pull request #573 from arc53/bug/update-and-delete
fix update and delete bug
2023-10-11 23:16:44 +01:00
Alex
c969e9c014 fix update and delete bug 2023-10-11 23:11:08 +01:00
Pavel
658867cb46 No crawler, no sitemap 2023-10-12 01:03:40 +04:00
Alex
344692f9f6 Merge pull request #542 from shruti-sen2004/main
Update CONTRIBUTING.md
2023-10-11 18:57:59 +01:00
DHRUVKADAM22
fd083078fc Merge pull request #2 from DHRUVKADAM22/DHRUVKADAM22-patch-3
Update README.md
2023-10-11 15:07:00 +05:30
DHRUVKADAM22
9bacae4b2e Update README.md 2023-10-11 15:06:44 +05:30
Alex
8f2ad38503 tests 2023-10-11 10:13:51 +01:00
M Sai Kiran
76baa6c5f8 Before File Selection Train Disabled 1 2023-10-11 09:39:04 +05:30
M Sai Kiran
84c822a0ca Before File Selection Train Button Disabled 2023-10-11 09:38:15 +05:30
Exterminator11
ddd938fd64 Parser for OpenAPI3(Swagger) 2023-10-11 07:36:37 +05:30
Alex
e91b30f4c7 Merge pull request #549 from hariraghav10/ui-update
UI update - Added autofocus for user chat input
2023-10-10 22:31:46 +01:00
dependabot[bot]
31fb1801d2 Bump langchain from 0.0.308 to 0.0.312 in /application
Bumps [langchain](https://github.com/langchain-ai/langchain) from 0.0.308 to 0.0.312.
- [Release notes](https://github.com/langchain-ai/langchain/releases)
- [Commits](https://github.com/langchain-ai/langchain/compare/v0.0.308...v0.0.312)

---
updated-dependencies:
- dependency-name: langchain
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-10 21:18:11 +00:00
DHRUVKADAM22
117d0f2e38 Merge pull request #1 from DHRUVKADAM22/DHRUVKADAM22-patch-1
Update README.md
2023-10-10 23:32:10 +05:30
DHRUVKADAM22
79bb79debc Update README.md
I have added a quick start guide for Windows specifically;
makes it more user friendly by adding basic info and description
(which helps users or newbies understand properly.)
2023-10-10 23:29:46 +05:30
Harsh Mahajan
11cd022965 Update API-docs.md 2023-10-10 23:22:11 +05:30
Ayan Joshi
d7b28a3586 Update Customising-prompts.md 2023-10-10 19:10:24 +05:30
Shruti Sen
dc14245105 Update CONTRIBUTING.md 2023-10-10 18:05:19 +05:30
Hari Raghav
e772dfaa12 Merge branch 'arc53:main' into ui-update 2023-10-10 17:53:48 +05:30
Shruti Sen
4d29cae936 Update CONTRIBUTING.md 2023-10-10 17:39:07 +05:30
hariraghav10
71ed0ffe13 Merge remote-tracking branch 'origin/ui-update' into ui-update 2023-10-10 17:35:35 +05:30
Shruti Sen
56d0981cee Update CONTRIBUTING.md 2023-10-10 17:35:18 +05:30
Shruti Sen
ad43d10ce4 Merge branch 'arc53:main' into main 2023-10-10 17:34:41 +05:30
Alex
fb6618181a Merge pull request #538 from ManishMadan2882/main
UI bug fixed - Placed the feedback/copy buttons at the top
2023-10-10 12:57:04 +01:00
Shruti Sen
43a9bc0d7b Update CONTRIBUTING.md 2023-10-10 17:22:38 +05:30
hariraghav10
f835b14902 added autofocus for user chat input 2023-10-10 17:22:10 +05:30
Shruti Sen
c1c591d1eb Merge branch 'arc53:main' into main 2023-10-10 17:21:19 +05:30
Hari Raghav
4348549f2d Merge branch 'arc53:main' into ui-update 2023-10-10 17:20:07 +05:30
hariraghav10
e48df87e06 added autofocus feature for chat input div 2023-10-10 17:04:05 +05:30
Alex
e718feb1f7 Update README.md 2023-10-10 11:36:54 +01:00
Shruti Sen
3b6f3f13d4 Update CONTRIBUTING.md 2023-10-10 12:12:25 +05:30
ManishMadan2882
13fabaf6aa bugs fixed 2023-10-10 06:01:21 +05:30
Alex
9cfcdb1c23 Merge pull request #535 from vn-os/main
Correct instructions in `Development environments`
2023-10-09 23:59:14 +01:00
Alex
2800d0dcd3 Merge pull request #532 from staticGuru/error-state-#484
Fix the error state UI
2023-10-09 23:54:14 +01:00
Alex
3e2055255e Merge branch 'main' into error-state-#484 2023-10-09 23:53:14 +01:00
Alex
64a8857884 Merge pull request #529 from staticGuru/chatroom-rename-#495
Chatroom rename feature
2023-10-09 23:42:45 +01:00
Alex
808b291c2c Merge pull request #530 from robbiebusinessacc/patch-1
Backend Code Refactoring for Better Readability and Scalability
2023-10-09 23:40:10 +01:00
Alex
cae3e7136e Merge pull request #525 from Vaibhav91one/bug/text-out-of-screen
bug(front): The text seems to go out of screen when the result is too big.
2023-10-09 23:32:34 +01:00
Alex
c069a187f8 Merge pull request #476 from mnickrogers/main
Fix missing documentation for using Llama_cpp
2023-10-09 23:26:38 +01:00
Ankita Sikdar
91fa932168 Update CONTRIBUTING.md 2023-10-10 02:06:52 +05:30
Vic P
188158a29b Merge branch 'arc53:main' into main 2023-10-10 00:47:26 +07:00
Vic P
a3d5cb5851 Update README.md to correct instructions in Development environments. 2023-10-10 00:47:15 +07:00
Alex
0788582528 Merge pull request #522 from 5h0ov/patch-2
Updated My-AI-answers-questions-using-external-knowledge.md
2023-10-09 16:40:59 +01:00
Alex
da81abc12e Merge pull request #437 from shelar1423/main
[Enhanced]: Useful Links Section in README.md
2023-10-09 16:34:25 +01:00
Alex
81b92111ca Merge pull request #511 from beKoool/docs-fix
[API Docs] Add Backlink to "Vector Stores" and fix grammatical error
2023-10-09 16:32:50 +01:00
Alex
a809e72704 Merge branch 'main' into docs-fix 2023-10-09 16:30:11 +01:00
Alex
cb0e4b6e87 Update README.md 2023-10-09 15:37:40 +01:00
staticGuru
16df8d803c Fix the error state issues 2023-10-09 19:48:35 +05:30
staticGuru
ce7ac78b42 add the darkmode in config file 2023-10-09 19:48:17 +05:30
Harsh Mahajan
c21e0755b3 Update API-docs.md 2023-10-09 17:55:41 +05:30
Harsh Mahajan
e1dc0a576d Update API-docs.md 2023-10-09 17:45:50 +05:30
staticGuru
a998db0570 add fetch conversations in the delete callbacks 2023-10-09 16:30:09 +05:30
staticGuru
c79ec45adb Fix the lint issues 2023-10-09 16:16:56 +05:30
staticGuru
72481e8453 Fix the post API issues 2023-10-09 16:16:20 +05:30
staticGuru
3753f7d138 change the input outline border color 2023-10-09 16:12:56 +05:30
Robbie Walmsley
4d92606562 Update ingest.py 2023-10-09 11:11:07 +01:00
Robbie Walmsley
2d0b6bcfcc Update ingest.py 2023-10-09 11:04:55 +01:00
Robbie Walmsley
57fb29b600 Update worker.py 2023-10-09 10:55:34 +01:00
Robbie Walmsley
340647cb22 Update ingest.py 2023-10-09 10:53:03 +01:00
staticGuru
a06369dd7b add the checkmark icons 2023-10-09 15:19:29 +05:30
staticGuru
95fe103718 add the conversation in the result response 2023-10-09 15:17:32 +05:30
staticGuru
036297ef36 Merge branch 'main' of https://github.com/staticGuru/DocsGPT into chatroom-rename-#495 2023-10-09 14:58:21 +05:30
staticGuru
129c055fee Change the width in the tile 2023-10-09 14:44:43 +05:30
Ayan Joshi
c688656607 Commit 2023-10-09 14:27:59 +05:30
staticGuru
b49e8deb3e add the typescript props interface 2023-10-09 14:13:09 +05:30
staticGuru
17264e7872 add the outside click listener 2023-10-09 12:50:12 +05:30
staticGuru
022c0c3a89 add trash icons changes 2023-10-09 12:49:09 +05:30
staticGuru
b8539122ed add the update conversation callbacks 2023-10-09 12:48:48 +05:30
Vaibhav91one
4ca906a518 bug(front): The text seems to go out of screen when the result is too big, needed some tailwind 2023-10-09 10:44:14 +05:30
Nick Rogers
7bf67869b0 Reference DocsGPT model in custom model steps. 2023-10-08 22:03:32 -07:00
staticGuru
a032164a99 Add update conversation name API 2023-10-09 10:25:38 +05:30
Nick Rogers
f588e7783e Merge branch 'arc53:main' into main 2023-10-08 21:54:58 -07:00
Shuvadipta Das
f8ca6c019f Update My-AI-answers-questions-using-external-knowledge.md 2023-10-09 09:46:54 +05:30
Digvijay Shelar
f88806fc3c Update README.md 2023-10-09 07:26:34 +05:30
GH Action - Upstream Sync
ee0880fab7 Merge branch 'main' of https://github.com/arc53/DocsGPT 2023-10-09 00:30:35 +00:00
beKool.sh
261c674832 Update API-docs.md 2023-10-09 05:48:27 +05:45
Alex
e95bc82b8e Update Conversation.tsx 2023-10-08 23:30:21 +01:00
Alex
6d0cc49ecd Merge pull request #477 from staticGuru/input-hidden-issue-#474
[FIX] Question is hidden under the question input box
2023-10-08 23:27:45 +01:00
Alex
e108833db2 Merge branch 'main' into input-hidden-issue-#474 2023-10-08 23:19:35 +01:00
Alex
151fdb9bad Merge pull request #493 from HarshMN2345/patch-1
Update CONTRIBUTING.md
2023-10-08 22:57:19 +01:00
Alex
59ca8665fe Merge branch 'main' into patch-1 2023-10-08 22:56:20 +01:00
Alex
71c101b82e Merge pull request #519 from SoumyadiptoPal/newBranch2
Feature: Round Corners
2023-10-08 22:46:12 +01:00
Alex
860030824e Merge pull request #503 from ManishMadan2882/main
added the copy response feature
2023-10-08 22:40:51 +01:00
Alex
46c4bf6e94 Merge pull request #515 from timoransky/main
Fix: adjust left margin on content container
2023-10-08 22:32:55 +01:00
Alex
53ed6e54b5 Merge pull request #512 from drk1rd/main
Grammar and punctuations improved
2023-10-08 22:27:07 +01:00
ManishMadan2882
3197c356e9 UI corrections 2023-10-09 02:36:48 +05:30
Harsh Mahajan
cdad083d7f Update CONTRIBUTING.md 2023-10-09 01:21:36 +05:30
Alex
2e076ef3f4 Merge pull request #469 from 5h0ov/patch-1
Update How-to-train-on-other-documentation.md
2023-10-08 19:40:54 +01:00
Alex
46e3a27626 Merge pull request #509 from sanketmp/add-license
add license link to readme.md
2023-10-08 19:39:27 +01:00
Alex
1247867187 Merge pull request #480 from GuptaPratik02/improve-doc-readme-contributing-hacktoberfest-files
Improved docs - readme , contributing and hacktoberfest files
2023-10-08 19:36:55 +01:00
Soumyadipto Pal
b1f863cc4d Update Navigation.tsx 2023-10-08 23:59:08 +05:30
Soumyadipto Pal
823b41b7ec Merge branch 'main' into newBranch2 2023-10-08 23:53:35 +05:30
SoumyadiptoPal
16a2b3b19b Rounded the components to 3xl 2023-10-08 23:48:49 +05:30
Alex
0a2e899363 Merge pull request #506 from akshitarora921/fix/about-page
🪛 Fix: About us page margin
2023-10-08 18:50:08 +01:00
timoransky
65d431c7a0 fix: margin left on content container 2023-10-08 19:18:11 +02:00
staticGuru
6b617955b7 add the check mark logics 2023-10-08 22:31:17 +05:30
staticGuru
10cf0470cb add the conversation Tile 2023-10-08 22:09:38 +05:30
staticGuru
f91ca796de add the check Mark icons 2023-10-08 22:09:16 +05:30
Suryansh
7f1fb41d48 Update My-AI-answers-questions-using-external-knowledge.md 2023-10-08 22:08:55 +05:30
staticGuru
ceb9c70fba add the edit icons in the assets 2023-10-08 22:08:42 +05:30
staticGuru
5c9d11861e add the conversation tile in the chat section 2023-10-08 22:08:19 +05:30
Suryansh
706e6c01aa Update How-to-use-different-LLM.md 2023-10-08 22:08:18 +05:30
Suryansh
64cecb4931 Update How-to-train-on-other-documentation.md 2023-10-08 22:07:00 +05:30
Suryansh
31e0dfef76 Update Customising-prompts.md 2023-10-08 22:03:18 +05:30
Suryansh
dc85f93423 Update react-widget.md 2023-10-08 22:02:10 +05:30
Suryansh
4d5d407655 Update Chatwoot-extension.md 2023-10-08 22:00:46 +05:30
Suryansh
d2424ce540 Update API-docs.md 2023-10-08 21:59:27 +05:30
Suryansh
4d5de8176a Update Quickstart.md 2023-10-08 21:55:43 +05:30
Suryansh
c451d00eb4 Update Hosting-the-app.md 2023-10-08 21:52:52 +05:30
Harsh Mahajan
a8180bddad Update API-docs.md
done all the changes proposed in issue #508
2023-10-08 18:02:54 +05:30
Harsh Mahajan
e988364766 Update CONTRIBUTING.md
i made it easy to understand
2023-10-08 17:47:27 +05:30
staticGuru
396697ead2 Query overlay text input issues 2023-10-08 17:36:57 +05:30
Sanket Pol
2993bd8c05 add license link to readme.md 2023-10-08 17:02:44 +05:30
beKool.sh
fc50bb6e57 Add Backlink to Vector Stores 2023-10-08 17:10:33 +05:45
Pratik Gupta
a064066e42 Update README.md
Updated the DocsGPT link as mentioned by the maintainer.
2023-10-08 16:52:56 +05:30
Akshit Arora
a6783e537b fix css 2023-10-08 14:17:26 +05:30
Ayan Joshi
4b1dad96cd Commit 2023-10-08 12:42:56 +05:30
Digvijay Shelar
6758b51617 Update README.md 2023-10-08 11:20:28 +05:30
ManishMadan2882
54fdd2da57 UI checks 2023-10-08 06:26:08 +05:30
GH Action - Upstream Sync
3132a4965e Merge branch 'main' of https://github.com/arc53/DocsGPT 2023-10-08 00:32:29 +00:00
ManishMadan2882
7ee3f10a81 fix 2023-10-08 05:33:13 +05:30
ManishMadan2882
accd65a26a added the copy msg feature 2023-10-08 05:27:03 +05:30
Alex
e0ada7fc48 Merge pull request #496 from ankur0904/improve-docs
Make text bold
2023-10-08 00:33:28 +01:00
Alex
ad1401854c Merge pull request #448 from SoumyadiptoPal/newBranch
Updated navigation bar and also added icons
2023-10-08 00:27:58 +01:00
Alex
e18189caae Merge pull request #486 from krishvsoni/krishvsoni-patch-1
Krishvsoni patch 1
2023-10-08 00:19:25 +01:00
Alex
d601d35a21 Merge pull request #478 from adityagupta19/main
Fix "Sources" Feature doesn't look as intended
2023-10-08 00:10:13 +01:00
Alex
66fd402f00 Merge branch 'main' into main 2023-10-08 00:01:43 +01:00
Alex
835a04358c Merge pull request #492 from Ankit-Matth/change_color_of_buttons
Color of all buttons and elements changed from blue to #7D54D1
2023-10-07 23:56:55 +01:00
Alex
af9b4e448d Merge pull request #491 from daniel-shuy/bugfix/about-page-to-conversation-navigation
Fix About page to Conversation navigation
2023-10-07 23:52:40 +01:00
Alex
39e8ba42ff Merge pull request #454 from aindree-2005/patch-1
Update About.tsx
2023-10-07 23:29:25 +01:00
Alex
1e52c956a8 Merge pull request #462 from mohitd404/main
Update README.md -- Fixed #453
2023-10-07 23:18:20 +01:00
ankur0904
d261ed074e Make text bold 2023-10-07 23:54:47 +05:30
KRISH SONI
47f9be32ce git commit 2023-10-07 21:40:29 +05:30
Harsh Mahajan
a17390c157 Update CONTRIBUTING.md
I made it easy to understand
2023-10-07 21:37:58 +05:30
Aditya Gupta
5ca5e0d00f Revert "synced the branch"
This reverts commit 0585fb4c80.
2023-10-07 21:22:48 +05:30
Ankit Matth
b0085f2741 Color of all buttons and elements changed from blue to #7D54D1 2023-10-07 20:42:32 +05:30
Daniel Shuy
b983095e13 Fix About page to Conversation navigation 2023-10-07 22:47:21 +08:00
KRISH SONI
dd6e018e46 Update README.md 2023-10-07 20:09:48 +05:30
KRISH SONI
6f8394a086 git commit 2023-10-07 20:00:17 +05:30
KRISH SONI
a0739a18e8 added one line for the venv documentation 2023-10-07 19:53:23 +05:30
KRISH SONI
27d33f015f added venv official Python documentation 2023-10-07 19:43:45 +05:30
Alex
ffb7ad1417 Merge pull request #463 from jbampton/remove-duplicate-words
Remove unneeded duplicate words
2023-10-07 14:12:00 +01:00
Aindree Chatterjee
97e6bab6e3 Update About.tsx 2023-10-07 18:41:39 +05:30
Aindree Chatterjee
b311b7620c Update About.tsx 2023-10-07 18:41:01 +05:30
Alex
25ec8fb2ab Merge pull request #465 from adarsh-jha-dev/patch-1
Add Feedback Feature to Conversation Module (Update conversationSlice.ts)
2023-10-07 13:58:37 +01:00
Alex
75100cd182 Merge pull request #473 from staticGuru/main
Hero section's figma UI changes
2023-10-07 13:33:38 +01:00
KRISH SONI
50d48ee3ec Added alternative virtual environment activation command for Windows users in README 2023-10-07 16:46:02 +05:30
KRISH SONI
0e330f983b Added alternative virtual environment activation command for Windows users in README 2023-10-07 16:44:55 +05:30
Mohit Dhote
9523a929af Merge branch 'main' into main 2023-10-07 15:23:53 +05:30
Mohit Dhote
3fcec069ed Update README.md 2023-10-07 15:14:04 +05:30
Mohit Dhote
7c2e72aebb Update README.md 2023-10-07 15:13:15 +05:30
Mohit Dhote
8d6fbddd67 Update README.md #453
made the required changes #453
2023-10-07 15:11:53 +05:30
Pratik Gupta
66b5ac8ff1 Update HACKTOBERFEST.md
Improvements
2023-10-07 14:51:22 +05:30
Pratik Gupta
e034fc12eb Update CONTRIBUTING.md
Fixed and added the necessary links in contributing file
2023-10-07 14:46:56 +05:30
Pratik Gupta
a8317ccacd Update README.md
Fixed and added the necessary links in README file
2023-10-07 14:37:54 +05:30
Alex
74376586a8 Update README.md 2023-10-07 09:51:41 +01:00
Pratik Gupta
ea49296cfe Update HACKTOBERFEST.md
Improved the HACKTOBERFEST.md file.
Fixed and added 7 links.
2023-10-07 14:15:17 +05:30
Aditya Gupta
992f817fef Fix: Sources feature doesn't look as intended 2023-10-07 13:24:12 +05:30
staticGuru
36528fceab add issue fixes 2023-10-07 12:20:47 +05:30
staticGuru
8323b8af4d Reset the input hidden issues 2023-10-07 12:13:05 +05:30
staticGuru
0e496181a1 Fix the question input hidden issue 2023-10-07 12:03:43 +05:30
staticGuru
f47fc7a484 add the additional div in the chat sections 2023-10-07 11:39:51 +05:30
Aditya Gupta
0585fb4c80 synced the branch 2023-10-07 11:21:55 +05:30
Nick Rogers
bdfcf6591e Fix missing documentation for using Llama_cpp 2023-10-06 21:20:40 -07:00
GH Action - Upstream Sync
cad54f0f07 Merge branch 'main' of https://github.com/arc53/DocsGPT 2023-10-07 01:15:01 +00:00
GH Action - Upstream Sync
a52ab1685e Merge branch 'main' of https://github.com/arc53/DocsGPT 2023-10-07 00:29:26 +00:00
Alex
3182816965 Merge pull request #472 from adityagupta19/main
Fix: Conversations tabs overlapping
2023-10-06 22:29:37 +01:00
Guruvignesh
a8da4b0162 Merge branch 'arc53:main' into main 2023-10-07 00:07:02 +05:30
staticGuru
ab7f6e8300 Change the hero section with figma style 2023-10-06 23:42:22 +05:30
Aditya Gupta
943bf477a0 Fix: Conversations tabs overlapping 2023-10-06 23:37:23 +05:30
Digvijay Shelar
168f4c0056 Merge branch 'main' into main 2023-10-06 22:41:28 +05:30
Digvijay Shelar
35fef11d2a Update README.md 2023-10-06 22:40:46 +05:30
Shuvadipta Das
425a8a6412 Update How-to-train-on-other-documentation.md 2023-10-06 21:39:18 +05:30
Alex
b64495f7a9 Merge pull request #467 from jbampton/add-pr-labeler
Add pull request labeler
2023-10-06 16:11:53 +01:00
Pavel
014861a7f2 Merge pull request #468 from arc53/bug/faiss-import
Fix faiss import bug
2023-10-06 18:09:31 +03:00
Alex
17edaa0e1f Update faiss.py 2023-10-06 16:05:10 +01:00
John Bampton
bbd0325c10 Add pull request labeler 2023-10-07 00:40:27 +10:00
Alex
316c276545 Update README.md 2023-10-06 15:29:17 +01:00
John Bampton
32ea0213f7 Remove unneeded duplicate words 2023-10-07 00:11:03 +10:00
Adarsh Jha
86c2f0716e Update conversationSlice.ts 2023-10-06 19:36:48 +05:30
Pavel
68b8d7d7f2 Merge pull request #464 from arc53/bug/fix-tests
Fix tests on sagemaker.py
2023-10-06 17:02:54 +03:00
Alex
43a22f84d9 Update sagemaker.py 2023-10-06 14:43:05 +01:00
Alex
b3a0368b95 Update app.py 2023-10-06 13:54:03 +01:00
Alex
cd79330c4c Merge pull request #449 from arc53/feature/sage-streaming
sagemaker streaming
2023-10-06 13:49:32 +01:00
mohitd404
245e09c723 Update README.md -- Fixed #453
 Added code of conduct section to documentation.
 Added license section into the repository.
  Fixed some Typos.
2023-10-06 18:01:50 +05:30
Alex
495728593f sagemaker fixes + test 2023-10-06 13:22:51 +01:00
Aditya Aryaman Das
e9c4b0dc01 Updated README.md 2023-10-06 17:24:49 +05:30
Alex
9942bf2124 Merge pull request #458 from staticGuru/main
[Fix] Source selection overflow issue
2023-10-06 12:37:14 +01:00
staticGuru
0a8ba068c4 remove the unwanted whitespace 2023-10-06 17:04:25 +05:30
staticGuru
a2bb70aaec Source selection overflow issues 2023-10-06 16:51:35 +05:30
Alex
5ed25d8bcb Merge pull request #447 from byt3h3ad/byt3h3ad-patch-1
improved CONTRIBUTING.MD, README.MD, HACKTOBERFEST.MD
2023-10-06 12:03:27 +01:00
Alex
cafc068c39 Merge branch 'main' into byt3h3ad-patch-1 2023-10-06 12:03:20 +01:00
Alex
b8dde0767b Update docker-compose.yaml 2023-10-06 11:57:08 +01:00
Alex
f8e5e3b3c0 Merge pull request #445 from amoghak-ds/main
Update CODE_OF_CONDUCT.md
2023-10-06 11:53:03 +01:00
Aindree Chatterjee
edc19e99a9 Merge branch 'arc53:main' into patch-1 2023-10-06 16:20:11 +05:30
Alex
2b0b3827ab Merge pull request #443 from amoghak-ds/patch-1
Update app.py
2023-10-06 11:49:16 +01:00
Aindree Chatterjee
8afe5a0087 Update About.tsx 2023-10-06 16:19:05 +05:30
Alex
0ecc53f3b6 Merge pull request #456 from LunarMarathon/fix-455-bitsandbytes
correct wrapper, typos; add links
2023-10-06 11:21:11 +01:00
LunarMarathon
b3f2827961 correct wrapper, typos; add links
Signed-off-by: LunarMarathon <lmaytan24@gmail.com>
2023-10-06 15:11:22 +05:30
Aindree Chatterjee
9c96a4d81b Update About.tsx
Text decorations added
2023-10-06 12:28:32 +05:30
Alex
4f5e363452 sagemaker streaming 2023-10-06 01:52:29 +01:00
Alex
92572ff919 Merge pull request #442 from ratishjain12/fixed-homepage
fixed homepage
2023-10-06 01:32:34 +01:00
Ratish jain
39ddaf49be Update Hero.tsx 2023-10-06 05:37:44 +05:30
Alex
627dc2d4a0 Merge pull request #441 from ka1bi4/update/documentation-update-formatting
Improved docs readability and fix some typo.
2023-10-05 23:33:00 +01:00
Soumyadipto Pal
42739bbb61 Merge branch 'main' into newBranch 2023-10-06 02:36:49 +05:30
SoumyadiptoPal
261c9eefe1 Updated navigation bar and also added discord and github icons 2023-10-06 02:10:21 +05:30
Adhiraj
e21e4d2b16 Update README.md 2023-10-06 02:05:11 +05:30
Adhiraj
8b6b8f0c53 Update CONTRIBUTING.md 2023-10-06 01:40:03 +05:30
Adhiraj
5a9feb4411 Update HACKTOBERFEST.md 2023-10-06 01:31:37 +05:30
Amogha Kancharla
f16128da09 Update CODE_OF_CONDUCT.md
I've improved the CODE_OF_CONDUCT.md file and fixed some typos and errors. Kindly review it.
2023-10-06 00:07:13 +05:30
Amogha Kancharla
48f9997ea9 Update app.py
This version of code uses a concise syntax and reduces redundancy.
2023-10-05 23:35:47 +05:30
ratishjain
f0e87094d6 fixed homepage 2023-10-05 23:24:09 +05:30
ratishjain
e0882e9e04 fixed homepage 2023-10-05 23:16:54 +05:30
Roman Zhukov
d37885ea88 Update doc formatting and fix some spelling. 2023-10-05 20:27:48 +03:00
Alex
d13e5e7e3f Merge pull request #435 from Ankit-Matth/changing_discord_github_icons
I have changed icons & text for Discord and Github in the left sidebar.
2023-10-05 16:35:53 +01:00
Alex
aa9a024ee1 Merge branch 'main' into changing_discord_github_icons 2023-10-05 16:33:18 +01:00
Digvijay Shelar
5bbf6d2ae9 Update README.md 2023-10-05 20:38:02 +05:30
Alex
30299a9f04 Merge pull request #434 from QuantuM410/quantum410/frontend-placeholder
Added css attribute placeholder for search div
2023-10-05 15:55:00 +01:00
Alex
23d7fe936d Merge pull request #432 from arc53/dependabot/pip/application/langchain-0.0.308
Bump langchain from 0.0.263 to 0.0.308 in /application
2023-10-05 15:46:23 +01:00
Alex
b50c052222 Merge pull request #399 from arc53/dependabot/npm_and_yarn/extensions/web-widget/postcss-8.4.31
Bump postcss from 8.4.23 to 8.4.31 in /extensions/web-widget
2023-10-05 15:41:28 +01:00
dependabot[bot]
ef9e9809e2 Bump langchain from 0.0.263 to 0.0.308 in /application
Bumps [langchain](https://github.com/langchain-ai/langchain) from 0.0.263 to 0.0.308.
- [Release notes](https://github.com/langchain-ai/langchain/releases)
- [Commits](https://github.com/langchain-ai/langchain/compare/v0.0.263...v0.0.308)

---
updated-dependencies:
- dependency-name: langchain
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-05 14:38:51 +00:00
Alex
f139c3268b Merge pull request #431 from arc53/dependabot/pip/scripts/langchain-0.0.308
Bump langchain from 0.0.252 to 0.0.308 in /scripts
2023-10-05 15:37:53 +01:00
dependabot[bot]
e869bfd991 Bump postcss from 8.4.23 to 8.4.31 in /extensions/web-widget
Dependabot couldn't find the original pull request head commit, b4ce5c5d7b37c529245006df293f65d8065dd513.
2023-10-05 14:36:56 +00:00
Alex
d5309fcaf5 Merge pull request #398 from arc53/dependabot/npm_and_yarn/extensions/chrome/postcss-8.4.31
Bump postcss from 8.4.21 to 8.4.31 in /extensions/chrome
2023-10-05 15:36:41 +01:00
Alex
c4fc49553c Merge pull request #396 from arc53/dependabot/pip/application/pillow-10.0.1
Bump pillow from 9.4.0 to 10.0.1 in /application
2023-10-05 15:36:07 +01:00
Alex
75704899a7 Merge pull request #395 from arc53/dependabot/npm_and_yarn/extensions/react-widget/postcss-8.4.31
Bump postcss from 8.4.29 to 8.4.31 in /extensions/react-widget
2023-10-05 15:35:48 +01:00
QuantuM410
70aa3b1ff1 Added css attribute placeholder for search div 2023-10-05 20:01:30 +05:30
Ankit Matth
6154a8169b Replace Discord & Github Icons 2023-10-05 19:53:37 +05:30
dependabot[bot]
cf0173e079 Bump langchain from 0.0.252 to 0.0.308 in /scripts
Bumps [langchain](https://github.com/langchain-ai/langchain) from 0.0.252 to 0.0.308.
- [Release notes](https://github.com/langchain-ai/langchain/releases)
- [Commits](https://github.com/langchain-ai/langchain/compare/v0.0.252...v0.0.308)

---
updated-dependencies:
- dependency-name: langchain
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-05 13:47:34 +00:00
Alex
cc62fc6222 Merge pull request #429 from shelar1423/main
fix: Typo in Readme.md
2023-10-05 14:00:33 +01:00
Digvijay Shelar
6cdadf1b37 Update README.md 2023-10-05 17:19:21 +05:30
Alex
dc90a66a96 Merge pull request #427 from ArnabBCA/main
Fixed Overflowing Text messages with long strings without spaces
2023-10-05 11:10:39 +01:00
Alex
4b629d20cf Merge pull request #425 from Archit-Kohli/patch-2
Update huggingface.py
2023-10-05 11:03:27 +01:00
Arnab Ghosh
437bd13fd0 Fixed Overflowing Text message when user passed a very long string without spaces 2023-10-05 14:44:18 +05:30
Archit-Kohli
7ce1dc9069 Update huggingface.py add import torch statement
Added import torch statement
2023-10-05 14:27:15 +05:30
Archit-Kohli
5b4e517d9d Update huggingface.py
Added quantization support using bitsandbytes
2023-10-05 11:03:51 +05:30
Alex
6c3ed5e533 Merge pull request #417 from eltociear/patch-2
Update How-to-use-different-LLM.md
2023-10-05 00:14:43 +01:00
Alex
ec2762c31a Merge pull request #384 from ArnabBCA/main
Fixed Empty Spaces Passed in the Input Field
2023-10-05 00:10:51 +01:00
Alex
29f3158b61 Merge pull request #407 from Akash190104/patch-2
[Docs] Update Quickstart.md
2023-10-04 23:58:41 +01:00
Alex
d83b7276fd Merge pull request #406 from Akash190104/patch-1
[DOCS] Update Hosting-the-app.md
2023-10-04 23:57:19 +01:00
Alex
1336010bb2 Update .env_sample 2023-10-04 23:54:53 +01:00
Alex
5cb3df6db1 Merge pull request #420 from jerempy/feature/useMediaQuery
useMediaQuery
2023-10-04 18:50:42 +01:00
jerempy
4be0c1c0eb delete unused test 2023-10-04 13:41:43 -04:00
jerempy
33e5e74228 custom hook 2023-10-04 13:12:52 -04:00
Ikko Eltociear Ashimine
2b06989372 Update How-to-use-different-LLM.md
Huggingface -> Hugging Face
2023-10-05 01:24:46 +09:00
Alex
cd4da2aca3 Merge pull request #402 from jbampton/fix-spelling
react-widget: fix spelling; `atrribute`
2023-10-04 16:38:28 +01:00
Alex
b335951862 Merge pull request #408 from Shrit1401/main
Added Contributors Images in Readme.md
2023-10-04 16:35:23 +01:00
Arnab Ghosh
e95e084956 Fixed Empty Spaces Passed in the Input Field 2023-10-04 20:54:25 +05:30
Alex
b7d569de98 Update README.md 2023-10-04 14:01:23 +01:00
Alex
8320cca5cd minor fixes on deleting 2023-10-04 13:57:40 +01:00
Shrit Shrivastava
bad5fec0f1 Added Contributors Images in Readme.md 2023-10-04 18:16:53 +05:30
Akash Kundu
200a3b65ee [Docs] Update Quickstart.md 2023-10-04 18:06:44 +05:30
Akash Kundu
d9fc2a93cc Update Hosting-the-app.md
There were a lot of grammatical errors and typos that I fixed to improve the readability of the document.
2023-10-04 17:56:25 +05:30
Alex
94f3533c29 Update README.md 2023-10-04 12:33:58 +01:00
John Bampton
77f7ad309e react-widget: fix spelling; atrribute 2023-10-04 10:12:46 +10:00
dependabot[bot]
cefd270837 Bump postcss from 8.4.21 to 8.4.31 in /extensions/chrome
Bumps [postcss](https://github.com/postcss/postcss) from 8.4.21 to 8.4.31.
- [Release notes](https://github.com/postcss/postcss/releases)
- [Changelog](https://github.com/postcss/postcss/blob/main/CHANGELOG.md)
- [Commits](https://github.com/postcss/postcss/compare/8.4.21...8.4.31)

---
updated-dependencies:
- dependency-name: postcss
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-03 22:52:59 +00:00
dependabot[bot]
037e68a376 Bump postcss from 8.4.29 to 8.4.31 in /extensions/react-widget
Bumps [postcss](https://github.com/postcss/postcss) from 8.4.29 to 8.4.31.
- [Release notes](https://github.com/postcss/postcss/releases)
- [Changelog](https://github.com/postcss/postcss/blob/main/CHANGELOG.md)
- [Commits](https://github.com/postcss/postcss/compare/8.4.29...8.4.31)

---
updated-dependencies:
- dependency-name: postcss
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-03 22:52:56 +00:00
Alex
4da0785494 Merge pull request #397 from arc53/dependabot/npm_and_yarn/frontend/postcss-8.4.31
Bump postcss from 8.4.21 to 8.4.31 in /frontend
2023-10-03 23:52:19 +01:00
dependabot[bot]
53171bafec Bump pillow from 9.4.0 to 10.0.1 in /application
Bumps [pillow](https://github.com/python-pillow/Pillow) from 9.4.0 to 10.0.1.
- [Release notes](https://github.com/python-pillow/Pillow/releases)
- [Changelog](https://github.com/python-pillow/Pillow/blob/main/CHANGES.rst)
- [Commits](https://github.com/python-pillow/Pillow/compare/9.4.0...10.0.1)

---
updated-dependencies:
- dependency-name: pillow
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-03 22:46:06 +00:00
dependabot[bot]
16fe77e472 Bump postcss from 8.4.21 to 8.4.31 in /frontend
Bumps [postcss](https://github.com/postcss/postcss) from 8.4.21 to 8.4.31.
- [Release notes](https://github.com/postcss/postcss/releases)
- [Changelog](https://github.com/postcss/postcss/blob/main/CHANGELOG.md)
- [Commits](https://github.com/postcss/postcss/compare/8.4.21...8.4.31)

---
updated-dependencies:
- dependency-name: postcss
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-03 22:46:06 +00:00
Alex
9f147e5b6d Merge pull request #393 from arc53/dependabot/pip/scripts/pillow-10.0.1
Bump pillow from 10.0.0 to 10.0.1 in /scripts
2023-10-03 23:45:37 +01:00
Alex
c43106a744 Merge pull request #392 from jbampton/remove-unneeded-comments
tests(python): remove unneeded comments
2023-10-03 23:43:24 +01:00
Alex
a73eb0377d Merge pull request #389 from Cioraz/fixed_sumbit_to_submit
Corrected Sumbit to Submit
2023-10-03 23:36:00 +01:00
Alex
e05514b455 Merge pull request #372 from siiddhantt/main
Updated sources section according to Figma design
2023-10-03 23:21:32 +01:00
dependabot[bot]
901c7be9a8 Bump pillow from 10.0.0 to 10.0.1 in /scripts
Bumps [pillow](https://github.com/python-pillow/Pillow) from 10.0.0 to 10.0.1.
- [Release notes](https://github.com/python-pillow/Pillow/releases)
- [Changelog](https://github.com/python-pillow/Pillow/blob/main/CHANGES.rst)
- [Commits](https://github.com/python-pillow/Pillow/compare/10.0.0...10.0.1)

---
updated-dependencies:
- dependency-name: pillow
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-03 20:40:53 +00:00
John Bampton
e503cc3003 tests(python): remove unneeded comments 2023-10-04 06:06:20 +10:00
Alex
2ff777acb7 Merge pull request #390 from jbampton/fix-spelling
misc: fix spelling
2023-10-03 20:25:46 +01:00
John Bampton
034d73a4eb misc: fix spelling 2023-10-04 05:18:15 +10:00
Siddhant Rai
aec55e50f9 Merge branch 'arc53:main' into main 2023-10-03 23:32:43 +05:30
siiddhantt
1f356a67b2 fix: added conditions for error state 2023-10-03 23:32:08 +05:30
Cioraz
0f60cd480d Correct Sumbit to Submit 2023-10-03 22:27:58 +05:30
Cioraz
316b8c66db Correct Sumbit to Submit 2023-10-03 22:26:53 +05:30
Alex
744d4ebbaf Update bug_report.yml 2023-10-03 15:14:44 +01:00
Alex
005deaccc8 Merge pull request #381 from adeyinkaezra123/issuetemplates
feat: create issue template forms to ensure consistency in issue and pr formats
2023-10-03 15:04:05 +01:00
Ezra Adeyinka
b6f78ce1af fix(bug_report_template): add environments and variable constructs 2023-10-03 14:39:56 +01:00
Ezra Adeyinka
932b504d82 fix(bug_report_template): append working code of conduct url 2023-10-03 14:22:53 +01:00
Ezra Adeyinka
1cca46cf7b ci(root): add pull request markdown template 2023-10-03 13:04:32 +01:00
Ezra Adeyinka
a8f6d2adf0 ci(root): add feature request issue form template directive 2023-10-03 13:02:24 +01:00
Ezra Adeyinka
203de18053 ci(root): add bug report issue form template directive 2023-10-03 12:47:10 +01:00
Alex
ee12b4164b Merge pull request #337 from arc53/dependabot/pip/scripts/cryptography-41.0.4
Bump cryptography from 41.0.3 to 41.0.4 in /scripts
2023-10-03 11:12:21 +01:00
Alex
b38459439d Merge pull request #338 from arc53/dependabot/pip/application/cryptography-41.0.4
Bump cryptography from 41.0.3 to 41.0.4 in /application
2023-10-03 11:12:02 +01:00
Alex
a2eddb3580 Merge pull request #373 from arc53/dependabot/pip/application/urllib3-1.26.17
Bump urllib3 from 1.26.14 to 1.26.17 in /application
2023-10-03 11:07:34 +01:00
Alex
18adbc6bf0 Merge pull request #377 from prithvi2k2/main
Removed redundant files as discussed at #376
2023-10-03 11:06:52 +01:00
Alex
c1ccef25a3 Merge pull request #375 from chriscarnold/github-star-badge-readme
Fixed GihHub badges in README.md links
2023-10-03 11:02:04 +01:00
Alex
b1bea73efb Merge pull request #374 from Smartmind12/patch-2
Adding tab to documentation in Navigation bar
2023-10-03 10:57:56 +01:00
Alex
4175d29056 hotfix 2023-10-03 10:50:49 +01:00
Alex
46c78c33cf Merge pull request #378 from arc53/feature/holopin
Feature/holopin
2023-10-03 09:49:38 +01:00
Alex
3fe5a41433 Update HACKTOBERFEST.md 2023-10-03 09:49:03 +01:00
Alex
f2b1f95521 Update holopin.yml 2023-10-03 09:46:25 +01:00
prithvi2k2
94f81caf28 Removed redundant files as discussed at #376 2023-10-03 14:11:52 +05:30
chriscarnold
669a4a299c Fixed GihHub badges in README.md links 2023-10-03 09:05:14 +01:00
siiddhantt
afff55045f fix: css changes 2023-10-03 10:47:15 +05:30
Siddhant Rai
4fbcd2ba5d Merge branch 'arc53:main' into main 2023-10-03 10:45:41 +05:30
Utsav Paul
016295dfee Adding tab to documentation in Navigation bar 2023-10-03 10:33:02 +05:30
dependabot[bot]
6d2bc2929a Bump urllib3 from 1.26.14 to 1.26.17 in /application
Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.26.14 to 1.26.17.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/1.26.14...1.26.17)

---
updated-dependencies:
- dependency-name: urllib3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-03 02:47:59 +00:00
Alex
23a5e566f2 Update Quickstart.md 2023-10-03 00:39:28 +01:00
Alex
4b387961a4 Update Quickstart.md 2023-10-03 00:39:12 +01:00
siiddhantt
1765a8a7f9 fix: fixes according to figma design 2023-10-02 23:22:04 +05:30
siiddhantt
837a5b52a7 Merge branch 'main' of github.com:siiddhantt/DocsGPT 2023-10-02 22:42:20 +05:30
siiddhantt
180c4e855e feat: updated sources section design 2023-10-02 22:37:12 +05:30
Alex
01d2af9961 Update favicon.ico
favicon new
2023-10-02 15:21:36 +01:00
Alex
7f3cc6269b holopin init 2023-10-02 15:14:19 +01:00
Alex
dbc0d54491 Update index.mdx 2023-10-02 10:04:08 +01:00
Alex
f843f5ae9d Update index.mdx 2023-10-02 10:03:34 +01:00
Alex
962cb290e4 Merge pull request #366 from Smartmind12/patch-1
Update index.mdx
2023-10-02 09:53:39 +01:00
Alex
7138655dd1 Update HACKTOBERFEST.md 2023-10-02 09:47:39 +01:00
Utsav Paul
93acfc2e38 Update index.mdx
Updating the Documentation Landing page to make it more userfriendly.
2023-10-02 11:16:44 +05:30
Alex
91878c4591 Update README.md 2023-10-02 00:31:24 +01:00
Alex
05ec1216e0 Merge pull request #363 from hirenchauhan2/feat/ui/170
feat(ui): add scroll to bottom button
2023-10-01 21:59:22 +01:00
Alex
af0e6481f8 Update Conversation.tsx 2023-10-01 21:57:12 +01:00
Alex
11a745c4d9 Merge pull request #364 from arc53/bug/fix-test
Fix broken test
2023-10-01 21:45:31 +01:00
Alex
2393da4425 Update README.md 2023-10-01 21:44:54 +01:00
Alex
95ab08e02d Update test_celery.py 2023-10-01 21:34:57 +01:00
Hiren Chauhan
153b5c028b feat(ui): add scroll to bottom button
This will show the scroll to bottom button when user scrolls to top from the last message.

Closes #170
2023-10-02 00:15:31 +05:30
dependabot[bot]
0ffb40f4c1 Bump cryptography from 41.0.3 to 41.0.4 in /application
Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.3 to 41.0.4.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/41.0.3...41.0.4)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-09-21 21:13:00 +00:00
dependabot[bot]
8bcffb4ad5 Bump cryptography from 41.0.3 to 41.0.4 in /scripts
Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.3 to 41.0.4.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/41.0.3...41.0.4)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-09-21 21:06:30 +00:00
286 changed files with 32253 additions and 9295 deletions


@@ -1,8 +1,8 @@
OPENAI_API_KEY=<LLM api key (for example, open ai key)>
SELF_HOSTED_MODEL=false
API_KEY=<LLM api key (for example, open ai key)>
LLM_NAME=docsgpt
VITE_API_STREAMING=true
#For Azure
#For Azure (you can delete it if you don't use Azure)
OPENAI_API_BASE=
OPENAI_API_VERSION=
AZURE_DEPLOYMENT_NAME=
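The `.env` sample above is read by the application at startup. As a hedged illustration (the loader function below is hypothetical, not DocsGPT's actual code — only the variable names come from the diff), settings like these are typically pulled from the process environment with sensible defaults:

```python
import os

# Illustrative sketch: read the .env-style settings shown in the diff.
# Variable names (API_KEY, LLM_NAME, ...) are from the sample; the
# load_settings helper itself is hypothetical.
def load_settings(env=os.environ):
    return {
        "api_key": env.get("API_KEY", ""),
        "llm_name": env.get("LLM_NAME", "docsgpt"),
        "self_hosted": env.get("SELF_HOSTED_MODEL", "false").lower() == "true",
        "streaming": env.get("VITE_API_STREAMING", "true").lower() == "true",
    }

# Passing a plain dict makes the helper easy to test without touching
# the real environment.
settings = load_settings({"API_KEY": "sk-test"})
```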

.github/ISSUE_TEMPLATE/bug_report.yml vendored Normal file (138 lines)

@@ -0,0 +1,138 @@
name: "🐛 Bug Report"
description: "Submit a bug report to help us improve"
title: "🐛 Bug Report: "
labels: ["type: bug"]
body:
- type: markdown
attributes:
value: We value your time, and your effort to submit this bug report is appreciated. 🙏
- type: textarea
id: description
validations:
required: true
attributes:
label: "📜 Description"
description: "A clear and concise description of what the bug is."
placeholder: "It bugs out when ..."
- type: textarea
id: steps-to-reproduce
validations:
required: true
attributes:
label: "👟 Reproduction steps"
description: "How do you trigger this bug? Please walk us through it step by step."
placeholder: "1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error"
- type: textarea
id: expected-behavior
validations:
required: true
attributes:
label: "👍 Expected behavior"
description: "What did you think should happen?"
placeholder: "It should ..."
- type: textarea
id: actual-behavior
validations:
required: true
attributes:
label: "👎 Actual Behavior with Screenshots"
description: "What did actually happen? Add screenshots, if applicable."
placeholder: "It actually ..."
- type: dropdown
id: operating-system
attributes:
label: "💻 Operating system"
description: "What OS is your app running on?"
options:
- Linux
- MacOS
- Windows
- Something else
validations:
required: true
- type: dropdown
id: browsers
attributes:
label: What browsers are you seeing the problem on?
multiple: true
options:
- Firefox
- Chrome
- Safari
- Microsoft Edge
- Something else
- type: dropdown
id: dev-environment
validations:
required: true
attributes:
label: "🤖 What development environment are you experiencing this bug on?"
options:
- Docker
- Local dev server
- type: textarea
id: env-vars
validations:
required: false
attributes:
label: "🔒 Did you set the correct environment variables in the right path? List the environment variable names (not values please!)"
description: "Please refer to the [Project setup instructions](https://github.com/arc53/DocsGPT#quickstart) if you are unsure."
placeholder: "It actually ..."
- type: textarea
id: additional-context
validations:
required: false
attributes:
label: "📃 Provide any additional context for the Bug."
description: "Add any other context about the problem here."
placeholder: "It actually ..."
- type: textarea
id: logs
validations:
required: false
attributes:
label: 📖 Relevant log output
description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
render: shell
- type: checkboxes
id: no-duplicate-issues
attributes:
label: "👀 Have you spent some time to check if this bug has been raised before?"
options:
- label: "I checked and didn't find similar issue"
required: true
- type: dropdown
id: willing-to-submit-pr
attributes:
label: 🔗 Are you willing to submit PR?
description: This is absolutely not required, but we are happy to guide you in the contribution process.
options: # Added options key
- "Yes, I am willing to submit a PR!"
- "No"
validations:
required: false
- type: checkboxes
id: terms
attributes:
label: 🧑‍⚖️ Code of Conduct
description: By submitting this issue, you agree to follow our [Code of Conduct](https://github.com/arc53/DocsGPT/blob/main/CODE_OF_CONDUCT.md)
options:
- label: I agree to follow this project's Code of Conduct
required: true


@@ -0,0 +1,54 @@
name: 🚀 Feature
description: "Submit a proposal for a new feature"
title: "🚀 Feature: "
labels: [feature]
body:
- type: markdown
attributes:
value: We value your time, and your effort to submit this feature request is appreciated. 🙏
- type: textarea
id: feature-description
validations:
required: true
attributes:
label: "🔖 Feature description"
description: "A clear and concise description of what the feature is."
placeholder: "You should add ..."
- type: textarea
id: pitch
validations:
required: true
attributes:
label: "🎤 Why is this feature needed ?"
description: "Please explain why this feature should be implemented and how it would be used. Add examples, if applicable."
placeholder: "In my use-case, ..."
- type: textarea
id: solution
validations:
required: true
attributes:
label: "✌️ How do you aim to achieve this?"
description: "A clear and concise description of what you want to happen."
placeholder: "I want this feature to, ..."
- type: textarea
id: alternative
validations:
required: false
attributes:
label: "🔄️ Additional Information"
description: "A clear and concise description of any alternative solutions or additional solutions you've considered."
placeholder: "I tried, ..."
- type: checkboxes
id: no-duplicate-issues
attributes:
label: "👀 Have you spent some time to check if this feature request has been raised before?"
options:
- label: "I checked and didn't find similar issue"
required: true
- type: dropdown
id: willing-to-submit-pr
attributes:
label: Are you willing to submit PR?
description: This is absolutely not required, but we are happy to guide you in the contribution process.
options:
- "Yes I am willing to submit a PR!"

.github/PULL_REQUEST_TEMPLATE.md vendored Normal file (5 lines)

@@ -0,0 +1,5 @@
- **What kind of change does this PR introduce?** (Bug fix, feature, docs update, ...)
- **Why was this change needed?** (You can also link to an open issue here)
- **Other information**:

.github/holopin.yml vendored Normal file (5 lines)

@@ -0,0 +1,5 @@
organization: arc53
defaultSticker: clqmdf0ed34290glbvqh0kzxd
stickers:
- id: clqmdf0ed34290glbvqh0kzxd
alias: festive

.github/labeler.yml vendored Normal file (23 lines)

@@ -0,0 +1,23 @@
repo:
- '*'
github:
- .github/**/*
application:
- application/**/*
docs:
- docs/**/*
extensions:
- extensions/**/*
frontend:
- frontend/**/*
scripts:
- scripts/**/*
tests:
- tests/**/*


@@ -13,7 +13,6 @@ jobs:
permissions:
contents: read
packages: write
steps:
- uses: actions/checkout@v3
@@ -36,7 +35,6 @@ jobs:
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
# Runs a single command using the runners shell
- name: Build and push Docker images to docker.io and ghcr.io
uses: docker/build-push-action@v4
with:


@@ -8,11 +8,11 @@ on:
jobs:
deploy:
if: github.repository == 'arc53/DocsGPT'
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- uses: actions/checkout@v3
@@ -40,7 +40,7 @@ jobs:
uses: docker/build-push-action@v4
with:
file: './frontend/Dockerfile'
platforms: linux/amd64
platforms: linux/amd64, linux/arm64
context: ./frontend
push: true
tags: |

.github/workflows/labeler.yml vendored Normal file (15 lines)

@@ -0,0 +1,15 @@
# https://github.com/actions/labeler
name: Pull Request Labeler
on:
- pull_request_target
jobs:
triage:
permissions:
contents: read
pull-requests: write
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@v4
with:
repo-token: "${{ secrets.GITHUB_TOKEN }}"
sync-labels: true

.gitignore vendored (2 lines)

@@ -75,6 +75,7 @@ target/
# Jupyter Notebook
.ipynb_checkpoints
**/*.ipynb
# IPython
profile_default/
@@ -172,3 +173,4 @@ application/vectors/
node_modules/
.vscode/settings.json
models/
model/

Binary file not shown (new file, 88 KiB).

Binary file not shown (new file, 21 KiB).


@@ -2,58 +2,58 @@
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
We as members, contributors and leaders pledge to make participation in our
community, a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
nationality, personal appearance, race, religion or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
diverse, inclusive and a healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
Examples of behavior that contribute to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
## Demonstrating empathy and kindness towards other people
1. Being respectful and open to differing opinions, viewpoints, and experiences
2. Giving and gracefully accepting constructive feedback
3. Taking accountability and offering apologies to those who have been impacted by our errors,
while also gaining insights from the situation
4. Focusing on what is best not just for us as individuals but for the
community as a whole
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
1. The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
2. Trolling, insulting or derogatory comments, and personal or political attacks
3. Public or private harassment
4. Publishing other's private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
5. Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
response to any behavior that they deem inappropriate, threatening, offensive
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
not aligned to this Code of Conduct and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
This Code of Conduct applies within all community spaces and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
posting via an official social media account or acting as an appointed
representative at an online or offline event.
## Enforcement
@@ -63,29 +63,27 @@ reported to the community leaders responsible for enforcement at
contact@arc53.com.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
All community leaders are obligated to be respectful towards the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
the consequences for any action that they deem in violation of this Code of Conduct:
### 1. Correction
* **Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community space.
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
* **Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
* **Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
* **Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
@@ -93,23 +91,21 @@ like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
* **Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
* **Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
* **Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior,harassment of an
individual or aggression towards or disparagement of classes of individuals.
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
* **Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution


@@ -1,44 +1,128 @@
# Welcome to DocsGPT Contributing guideline
# Welcome to DocsGPT Contributing Guidelines
Thank you for choosing this project to contribute to, we are all very grateful!
Thank you for choosing to contribute to DocsGPT! We are all very grateful!
# We accept different types of contributions
📣 Discussions - where you can start a new topic or answer some questions
📣 **Discussions** - Engage in conversations, start new topics, or help answer questions.
🐞 Issues - This is how we track tasks, sometimes it is bugs that need fixing, and sometimes it is new features
🐞 **Issues** - This is where we keep track of tasks. It could be bugs, fixes, or suggestions for new features.
🛠️ Pull requests - This is how you can suggest changes to our repository, to work on existing issues or add new features
🛠️ **Pull requests** - Suggest changes to our repository, either by working on existing issues or adding new features.
📚 Wiki - where we have our documentation
📚 **Wiki** - This is where our documentation resides.
## 🐞 Issues and Pull requests
We value contributions to our issues in the form of discussion or suggestion, we recommend that you check out existing issues and our [Roadmap](https://github.com/orgs/arc53/projects/2)
- We value contributions in the form of discussions or suggestions. We recommend taking a look at existing issues and our [roadmap](https://github.com/orgs/arc53/projects/2).
If you want to contribute by writing code there are a few things that you should know before doing it:
We have frontend (React, Vite) and Backend (python)
### If you are looking to contribute to Frontend (⚛React, Vite):
The current frontend is being migrated from /application to /frontend with a new design, so please contribute to the new one. Check out this [Milestone](https://github.com/arc53/DocsGPT/milestone/1) and its issues also [Figma](https://www.figma.com/file/OXLtrl1EAy885to6S69554/DocsGPT?node-id=0%3A1&t=hjWVuxRg9yi5YkJ9-1)
- If you're interested in contributing code, here are some important things to know:
- We have a frontend built on React (Vite) and a backend in Python.
Before creating issues, please check out how the latest version of our app looks and works by launching it via [Quickstart](https://github.com/arc53/DocsGPT#quickstart) the version on our live demo is slightly modified with login. Your issues should relate to the version that you can launch via [Quickstart](https://github.com/arc53/DocsGPT#quickstart).
### 👨‍💻 If you're interested in contributing code, here are some important things to know:
Tech Stack Overview:
- 🌐 Frontend: Built with React (Vite) ⚛️,
- 🖥 Backend: Developed in Python 🐍
### 🌐 If you are looking to contribute to frontend (⚛React, Vite):
- The current frontend is being migrated from [`/application`](https://github.com/arc53/DocsGPT/tree/main/application) to [`/frontend`](https://github.com/arc53/DocsGPT/tree/main/frontend) with a new design, so please contribute to the new one.
- Check out this [milestone](https://github.com/arc53/DocsGPT/milestone/1) and its issues.
- The updated Figma design can be found [here](https://www.figma.com/file/OXLtrl1EAy885to6S69554/DocsGPT?node-id=0%3A1&t=hjWVuxRg9yi5YkJ9-1).
Please try to follow the guidelines.
### If you are looking to contribute to Backend (🐍Python):
* Check out our issues, and contribute to /application or /scripts (ignore old ingest_rst.py ingest_rst_sphinx.py files, they will be deprecated soon)
* All new code should be covered with unit tests ([pytest](https://github.com/pytest-dev/pytest)). Please find tests under [/tests](https://github.com/arc53/DocsGPT/tree/main/tests) folder.
* Before submitting your PR make sure that after you ingested some test data it is queryable.
### 🖥 If you are looking to contribute to Backend (🐍 Python):
- Review our issues and contribute to [`/application`](https://github.com/arc53/DocsGPT/tree/main/application) or [`/scripts`](https://github.com/arc53/DocsGPT/tree/main/scripts) (please disregard old [`ingest_rst.py`](https://github.com/arc53/DocsGPT/blob/main/scripts/old/ingest_rst.py) [`ingest_rst_sphinx.py`](https://github.com/arc53/DocsGPT/blob/main/scripts/old/ingest_rst_sphinx.py) files; they will be deprecated soon).
- All new code should be covered with unit tests ([pytest](https://github.com/pytest-dev/pytest)). Please find tests under [`/tests`](https://github.com/arc53/DocsGPT/tree/main/tests) folder.
- Before submitting your Pull Request, ensure it can be queried after ingesting some test data.
### Testing
To run unit tests, from the root of the repository execute:
To run unit tests from the root of the repository, execute:
```
python -m pytest
```
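The contributing guidelines above ask that new backend code ship with pytest unit tests under `/tests`. As a hedged sketch of what such a test file can look like (the helper function and file name here are illustrative, not from the DocsGPT codebase):

```python
# test_example.py -- minimal pytest-style unit test. pytest discovers
# functions prefixed with `test_` inside files prefixed with `test_`,
# so running `python -m pytest` from the repository root picks this up.

def word_count(text: str) -> int:
    """Toy helper standing in for real application code under test."""
    return len(text.split())

def test_word_count_basic():
    assert word_count("hello world") == 2

def test_word_count_empty():
    assert word_count("") == 0
```

Plain `assert` statements are enough: pytest rewrites them to produce readable failure messages, so no assertion-helper API is needed.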
### Workflow:
Create a fork, make changes on your forked repository, and submit changes in the form of a pull request.
## Workflow 📈
Here's a step-by-step guide on how to contribute to DocsGPT:
1. **Fork the Repository:**
- Click the "Fork" button at the top-right of this repository to create your fork.
2. **Clone the Forked Repository:**
- Clone the repository using:
``` shell
git clone https://github.com/<your-github-username>/DocsGPT.git
```
3. **Keep your Fork in Sync:**
- Before you make any changes, make sure that your fork is in sync to avoid merge conflicts using:
```shell
git remote add upstream https://github.com/arc53/DocsGPT.git
git pull upstream main
```
4. **Create and Switch to a New Branch:**
- Create a new branch for your contribution using:
```shell
git checkout -b your-branch-name
```
5. **Make Changes:**
- Make the required changes in your branch.
6. **Add Changes to the Staging Area:**
- Add your changes to the staging area using:
```shell
git add .
```
7. **Commit Your Changes:**
- Commit your changes with a descriptive commit message using:
```shell
git commit -m "Your descriptive commit message"
```
8. **Push Your Changes to the Remote Repository:**
- Push your branch with changes to your fork on GitHub using:
```shell
git push origin your-branch-name
```
9. **Submit a Pull Request (PR):**
- Create a Pull Request from your branch to the main repository. Make sure to include a detailed description of your changes and reference any related issues.
10. **Collaborate:**
- Be responsive to comments and feedback on your PR.
- Make necessary updates as suggested.
- Once your PR is approved, it will be merged into the main repository.
11. **Testing:**
- Before submitting a Pull Request, ensure your code passes all unit tests.
- To run unit tests from the root of the repository, execute:
```shell
python -m pytest
```
*Note: You should run the unit test only after making the changes to the backend code.*
12. **Questions and Collaboration:**
- Feel free to join our Discord. We're very friendly and welcoming to new contributors, so don't hesitate to reach out.
Thank you for considering contributing to DocsGPT! 🙏
## Questions/collaboration
Please join our [Discord](https://discord.gg/n5BX8dh8rU). Don't hesitate; we are very friendly and welcoming to new contributors.
# Thank you so much for considering contributing to DocsGPT!🙏
Feel free to join our [Discord](https://discord.gg/n5BX8dh8rU). We're very friendly and welcoming to new contributors, so don't hesitate to reach out.
# Thank you so much for considering contributing to DocsGPT!🙏


@@ -1,31 +0,0 @@
🎉 Join the Hacktoberfest with DocsGPT and Earn a Free T-shirt! 🎉
Welcome, contributors! We're excited to announce that DocsGPT is participating in Hacktoberfest. Get involved by submitting a **meaningful** pull request, and earn a free shirt in return!
📜 Here's How to Contribute:
🛠️ Code: This is the golden ticket! Make meaningful contributions through PRs.
📚 Wiki: Improve our documentation, create a guide, or change existing documentation.
🖥️ Design: Improve the UI/UX, or design a new feature.
📝 Guidelines for Pull Requests:
Familiarize yourself with the current contributions and our [Roadmap](https://github.com/orgs/arc53/projects/2).
Deciding to contribute with code? Here are some insights based on the area of your interest:
Frontend (⚛React, Vite):
Most of the code is located in the /frontend folder. You can also check out our React extension in /extensions/react-widget.
For design references, here's the [Figma](https://www.figma.com/file/OXLtrl1EAy885to6S69554/DocsGPT?node-id=0%3A1&t=hjWVuxRg9yi5YkJ9-1).
Ensure you adhere to the established guidelines.
Backend (🐍Python):
Focus on /application or /scripts. However, avoid the files ingest_rst.py and ingest_rst_sphinx.py as they are soon to be deprecated.
Newly added code should come with relevant unit tests (pytest).
Refer to the /tests folder for test suites.
Check out [Contributing Guidelines](https://github.com/arc53/DocsGPT/blob/main/CONTRIBUTING.md)
Don't be shy! Hop into our [Discord](https://discord.gg/n5BX8dh8rU) Server. We're a friendly bunch and eager to assist newcomers.
Big thanks for considering contributing to DocsGPT during Hacktoberfest! 🙏 Your effort can earn you a swanky new t-shirt. 🎁 Let's code together! 🚀

README.md

@@ -7,150 +7,194 @@
</p>
<p align="left">
<strong>DocsGPT</strong> is a cutting-edge open-source solution that streamlines the process of finding information in project documentation. With its integration of the powerful <strong>GPT</strong> models, developers can easily ask questions about a project and receive accurate answers.
<strong><a href="https://www.docsgpt.cloud/">DocsGPT</a></strong> is a cutting-edge open-source solution that streamlines the process of finding information in the project documentation. With its integration of the powerful <strong>GPT</strong> models, developers can easily ask questions about a project and receive accurate answers.
Say goodbye to time-consuming manual searches, and let <strong>DocsGPT</strong> help you quickly find the information you need. Try it out and see how it revolutionizes your project documentation experience. Contribute to its development and be a part of the future of AI-powered assistance.
Say goodbye to time-consuming manual searches, and let <strong><a href="https://www.docsgpt.cloud/">DocsGPT</a></strong> help you quickly find the information you need. Try it out and see how it revolutionizes your project documentation experience. Contribute to its development and be a part of the future of AI-powered assistance.
</p>
<div align="center">
<a href="https://discord.gg/n5BX8dh8rU">![example1](https://img.shields.io/github/stars/arc53/docsgpt?style=social)</a>
<a href="https://discord.gg/n5BX8dh8rU">![example2](https://img.shields.io/github/forks/arc53/docsgpt?style=social)</a>
<a href="https://discord.gg/n5BX8dh8rU">![example3](https://img.shields.io/github/license/arc53/docsgpt)</a>
<a href="https://discord.gg/n5BX8dh8rU">![example3](https://img.shields.io/discord/1070046503302877216)</a>
<a href="https://github.com/arc53/DocsGPT">![link to main GitHub showing Stars number](https://img.shields.io/github/stars/arc53/docsgpt?style=social)</a>
<a href="https://github.com/arc53/DocsGPT">![link to main GitHub showing Forks number](https://img.shields.io/github/forks/arc53/docsgpt?style=social)</a>
<a href="https://github.com/arc53/DocsGPT/blob/main/LICENSE">![link to license file](https://img.shields.io/github/license/arc53/docsgpt)</a>
<a href="https://discord.gg/n5BX8dh8rU">![link to discord](https://img.shields.io/discord/1070046503302877216)</a>
<a href="https://twitter.com/docsgptai">![X (formerly Twitter) URL](https://img.shields.io/twitter/follow/docsgptai)</a>
</div>
### Enterprise Solutions:
### Production Support / Help for Companies:
When deploying your DocsGPT to a live environment, we're eager to provide personalized assistance. Reach out to us via email [here]( mailto:contact@arc53.com?subject=DocsGPT%20Enterprise&body=Hi%20we%20are%20%3CCompany%20name%3E%20and%20we%20want%20to%20build%20%3CSolution%3E%20with%20DocsGPT) to discuss your project further, and our team will connect with you shortly.
We're eager to provide personalized assistance when deploying your DocsGPT to a live environment.
### [🎉 Join the Hacktoberfest with DocsGPT and Earn a Free T-shirt! 🎉](https://github.com/arc53/DocsGPT/blob/main/HACKTOBERFEST.md)
- [Get Enterprise / teams Demo :wave:](https://www.docsgpt.cloud/contact)
- [Send Email :email:](mailto:contact@arc53.com?subject=DocsGPT%20support%2Fsolutions)
![video-example-of-docs-gpt](https://d3dg1063dc54p9.cloudfront.net/videos/demov3.gif)
## Roadmap
You can find our [Roadmap](https://github.com/orgs/arc53/projects/2) here. Please don't hesitate to contribute or create issues, it helps us make DocsGPT better!
You can find our roadmap [here](https://github.com/orgs/arc53/projects/2). Please don't hesitate to contribute or create issues, it helps us improve DocsGPT!
## Our open source models optimised for DocsGPT:
## Our Open-Source Models Optimized for DocsGPT:
| Name | Base Model | Requirements (or similar) |
|-------------------|------------|----------------------------------------------------------|
| [Docsgpt-7b-falcon](https://huggingface.co/Arc53/docsgpt-7b-falcon) | Falcon-7b | 1xA10G gpu |
| [Docsgpt-14b](https://huggingface.co/Arc53/docsgpt-14b) | llama-2-14b | 2xA10 gpu's |
| [Docsgpt-40b-falcon](https://huggingface.co/Arc53/docsgpt-40b-falcon) | falcon-40b | 8xA10G gpu's |
If you don't have enough resources to run it, you can use bitsandbytes to quantize.
| Name | Base Model | Requirements (or similar) |
| --------------------------------------------------------------------- | ----------- | ------------------------- |
| [Docsgpt-7b-mistral](https://huggingface.co/Arc53/docsgpt-7b-mistral) | Mistral-7b | 1xA10G gpu |
| [Docsgpt-14b](https://huggingface.co/Arc53/docsgpt-14b) | llama-2-14b | 2xA10 gpu's |
| [Docsgpt-40b-falcon](https://huggingface.co/Arc53/docsgpt-40b-falcon) | falcon-40b | 8xA10G gpu's |
If you don't have enough resources to run it, you can use bitsandbytes to quantize.
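As a sketch of that suggestion: with the `transformers`, `accelerate`, and `bitsandbytes` packages and a CUDA GPU, 4-bit quantized loading looks roughly like this. The model name is taken from the table above; everything else is a configuration sketch, not a tested recipe.

```python
# Configuration sketch: load a DocsGPT model 4-bit quantized with
# bitsandbytes via transformers. Assumes a CUDA GPU and the
# transformers, accelerate, and bitsandbytes packages are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4-bit
    bnb_4bit_compute_dtype=torch.float16,  # run matmuls in fp16
)

model = AutoModelForCausalLM.from_pretrained(
    "Arc53/docsgpt-7b-mistral",            # model name from the table above
    quantization_config=quant_config,
    device_map="auto",                     # let accelerate place layers
)
tokenizer = AutoTokenizer.from_pretrained("Arc53/docsgpt-7b-mistral")
```

This roughly quarters the GPU memory needed compared to fp16 weights, at some cost in generation quality and speed.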
## Features
![Group 9](https://user-images.githubusercontent.com/17906039/220427472-2644cff4-7666-46a5-819f-fc4a521f63c7.png)
![Main features of DocsGPT showcasing six main features](https://user-images.githubusercontent.com/17906039/220427472-2644cff4-7666-46a5-819f-fc4a521f63c7.png)
## Useful Links
## Useful links
[Live preview](https://docsgpt.arc53.com/)
[Join Our Discord](https://discord.gg/n5BX8dh8rU)
[Guides](https://docs.docsgpt.co.uk/)
- :mag: :fire: [Cloud Version](https://app.docsgpt.cloud/)
[Interested in contributing?](https://github.com/arc53/DocsGPT/blob/main/CONTRIBUTING.md)
- :speech_balloon: :tada: [Join our Discord](https://discord.gg/n5BX8dh8rU)
[How to use any other documentation](https://docs.docsgpt.co.uk/Guides/How-to-train-on-other-documentation)
- :books: :sunglasses: [Guides](https://docs.docsgpt.cloud/)
[How to host it locally (so all data will stay on-premises)](https://docs.docsgpt.co.uk/Guides/How-to-use-different-LLM)
- :couple: [Interested in contributing?](https://github.com/arc53/DocsGPT/blob/main/CONTRIBUTING.md)
- :file_folder: :rocket: [How to use any other documentation](https://docs.docsgpt.cloud/Guides/How-to-train-on-other-documentation)
## Project structure
- Application - Flask app (main application)
- :house: :closed_lock_with_key: [How to host it locally (so all data will stay on-premises)](https://docs.docsgpt.cloud/Guides/How-to-use-different-LLM)
- Extensions - Chrome extension
## Project Structure
- Scripts - Script that creates similarity search index and store for other libraries.
- Application - Flask app (main application).
- Frontend - Frontend uses Vite and React
- Extensions - Chrome extension.
- Scripts - Script that creates similarity search index for other libraries.
- Frontend - Frontend uses <a href="https://vitejs.dev/">Vite</a> and <a href="https://react.dev/">React</a>.
## QuickStart
Note: Make sure you have Docker installed
> [!Note]
> Make sure you have [Docker](https://docs.docker.com/engine/install/) installed
On Mac OS or Linux just write:
On Mac OS or Linux, write:
`./setup.sh`
It will install all the dependencies and give you an option to download local model or use OpenAI
It will install all the dependencies and allow you to download the local model, use OpenAI or use our LLM API.
Otherwise refer to this Guide:
Otherwise, refer to this Guide for Windows:
1. Download and open this repository with `git clone https://github.com/arc53/DocsGPT.git`
2. Create a .env file in your root directory and set the env variable OPENAI_API_KEY with your OpenAI API key and VITE_API_STREAMING to true or false, depending on if you want streaming answers or not
2. Create a `.env` file in your root directory and set the env variables and `VITE_API_STREAMING` to true or false, depending on whether you want streaming answers or not.
It should look like this inside:
```
OPENAI_API_KEY=Yourkey
LLM_NAME=[docsgpt or openai or others]
VITE_API_STREAMING=true
SELF_HOSTED_MODEL=false
API_KEY=[if LLM_NAME is openai]
```
See optional environment variables in the `/.env-template` and `/application/.env_sample` files.
3. Run `./run-with-docker-compose.sh`
4. Navigate to http://localhost:5173/
To stop just run Ctrl + C
See optional environment variables in the [/.env-template](https://github.com/arc53/DocsGPT/blob/main/.env-template) and [/application/.env_sample](https://github.com/arc53/DocsGPT/blob/main/application/.env_sample) files.
## Development environments
3. Run [./run-with-docker-compose.sh](https://github.com/arc53/DocsGPT/blob/main/run-with-docker-compose.sh).
4. Navigate to http://localhost:5173/.
### Spin up mongo and redis
For development only 2 containers are used from docker-compose.yaml (by deleting all services except for Redis and Mongo).
To stop, just run `Ctrl + C`.
## Development Environments
### Spin up Mongo and Redis
For development, only two containers are used from [docker-compose.yaml](https://github.com/arc53/DocsGPT/blob/main/docker-compose.yaml) (by deleting all services except for Redis and Mongo).
See file [docker-compose-dev.yaml](./docker-compose-dev.yaml).
Run
```
docker compose -f docker-compose-dev.yaml build
docker compose -f docker-compose-dev.yaml up -d
```
### Run the backend
### Run the Backend
Make sure you have Python 3.10 or 3.11 installed.
> [!Note]
> Make sure you have Python 3.10 or 3.11 installed.
1. Export required environment variables or prepare a `.env` file in the project folder:
- Copy [.env_sample](https://github.com/arc53/DocsGPT/blob/main/application/.env_sample) and create `.env`.
(check out [`application/core/settings.py`](application/core/settings.py) if you want to see more config options.)
2. (optional) Create a Python virtual environment:
You can follow the [Python official documentation](https://docs.python.org/3/tutorial/venv.html) for virtual environments.
a) On Mac OS and Linux
1. Export required environment variables
```commandline
export CELERY_BROKER_URL=redis://localhost:6379/0
export CELERY_RESULT_BACKEND=redis://localhost:6379/1
export MONGO_URI=mongodb://localhost:27017/docsgpt
export FLASK_APP=application/app.py
export FLASK_DEBUG=true
```
2. Prepare .env file
Copy `.env_sample` and create `.env` with your OpenAI API token
3. (optional) Create a Python virtual environment
```commandline
python -m venv venv
. venv/bin/activate
```
4. Change to `application/` subdir and install dependencies for the backend
b) On Windows
```commandline
python -m venv venv
venv/Scripts/activate
```
3. Download embedding model and save it in the `model/` folder:
You can use the script below, or download it manually from [here](https://d3dg1063dc54p9.cloudfront.net/models/embeddings/mpnet-base-v2.zip), unzip it and save it in the `model/` folder.
```commandline
wget https://d3dg1063dc54p9.cloudfront.net/models/embeddings/mpnet-base-v2.zip
unzip mpnet-base-v2.zip -d model
rm mpnet-base-v2.zip
```
4. Install dependencies for the backend:
```commandline
pip install -r application/requirements.txt
```
5. Run the app `flask run --host=0.0.0.0 --port=7091`
6. Start worker with `celery -A application.app.celery worker -l INFO`
### Start frontend
Make sure you have Node version 16 or higher.
5. Run the app using `flask --app application/app.py run --host=0.0.0.0 --port=7091`.
6. Start worker with `celery -A application.app.celery worker -l INFO`.
1. Navigate to `/frontend` folder
2. Install dependencies
`npm install`
3. Run the app
`npm run dev`
### Start Frontend
> [!Note]
> Make sure you have Node version 16 or higher.
1. Navigate to the [/frontend](https://github.com/arc53/DocsGPT/tree/main/frontend) folder.
2. Install the required packages `husky` and `vite` (ignore if already installed).
Built with [🦜️🔗 LangChain](https://github.com/hwchase17/langchain)
```commandline
npm install husky -g
npm install vite -g
```
3. Install dependencies by running `npm install --include=dev`.
4. Run the app using `npm run dev`.
## Contributing
Please refer to the [CONTRIBUTING.md](CONTRIBUTING.md) file for information about how to get involved. We welcome issues, questions, and pull requests.
## Code Of Conduct
We as members, contributors, and leaders, pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. Please refer to the [CODE_OF_CONDUCT.md](CODE_OF_CONDUCT.md) file for more information about contributing.
## Many Thanks To Our Contributors⚡
<a href="https://github.com/arc53/DocsGPT/graphs/contributors" alt="View Contributors">
<img src="https://contrib.rocks/image?repo=arc53/DocsGPT" alt="Contributors" />
</a>
## License
The source code license is [MIT](https://opensource.org/license/mit/), as described in the [LICENSE](LICENSE) file.
Built with [:bird: :link: LangChain](https://github.com/hwchase17/langchain)

SECURITY.md

@@ -0,0 +1,14 @@
# Security Policy
## Supported Versions
Supported Versions:
Currently, we support security patches by committing changes and bumping the version published on GitHub.
## Reporting a Vulnerability
Found a vulnerability? Please email us:
security@arc53.com


@@ -1,9 +1,8 @@
API_KEY=your_api_key
EMBEDDINGS_KEY=your_api_key
CELERY_BROKER_URL=redis://localhost:6379/0
CELERY_RESULT_BACKEND=redis://localhost:6379/1
MONGO_URI=mongodb://localhost:27017/docsgpt
API_URL=http://localhost:7091
FLASK_APP=application/app.py
FLASK_DEBUG=true
#For OPENAI on Azure
OPENAI_API_BASE=


@@ -1,23 +1,93 @@
FROM python:3.10-slim-bullseye as builder
# Builder Stage
FROM ubuntu:24.04 as builder
# Tiktoken requires a Rust toolchain, so build it in a separate stage
RUN apt-get update && apt-get install -y gcc curl
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y && apt-get install --reinstall libc6-dev -y
ENV PATH="/root/.cargo/bin:${PATH}"
RUN pip install --upgrade pip && pip install tiktoken==0.3.3
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
apt-get install -y software-properties-common
RUN add-apt-repository ppa:deadsnakes/ppa
# Install necessary packages and Python
RUN apt-get update && \
apt-get install -y --no-install-recommends gcc curl wget unzip libc6-dev python3.11 python3.11-distutils python3.11-venv && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# Verify Python installation and setup symlink
RUN if [ -f /usr/bin/python3.11 ]; then \
ln -s /usr/bin/python3.11 /usr/bin/python; \
else \
echo "Python 3.11 not found"; exit 1; \
fi
# Download and unzip the model
RUN wget https://d3dg1063dc54p9.cloudfront.net/models/embeddings/mpnet-base-v2.zip && \
unzip mpnet-base-v2.zip -d model && \
rm mpnet-base-v2.zip
# Install Rust
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y
# Clean up to reduce container size
RUN apt-get remove --purge -y wget unzip && apt-get autoremove -y && rm -rf /var/lib/apt/lists/*
# Copy requirements.txt
COPY requirements.txt .
RUN pip install -r requirements.txt
FROM python:3.10-slim-bullseye
# Setup Python virtual environment
RUN python3.11 -m venv /venv
# Copy pre-built packages and binaries from builder stage
COPY --from=builder /usr/local/ /usr/local/
# Activate virtual environment and install Python packages
ENV PATH="/venv/bin:$PATH"
# Install Python packages
RUN pip install --no-cache-dir --upgrade pip && \
pip install --no-cache-dir tiktoken && \
pip install --no-cache-dir -r requirements.txt
# Final Stage
FROM ubuntu:24.04 as final
RUN apt-get update && \
apt-get install -y software-properties-common
RUN add-apt-repository ppa:deadsnakes/ppa
# Install Python
RUN apt-get update && apt-get install -y --no-install-recommends python3.11 && \
ln -s /usr/bin/python3.11 /usr/bin/python && \
rm -rf /var/lib/apt/lists/*
# Set working directory
WORKDIR /app
COPY . /app/application
ENV FLASK_APP=app.py
ENV FLASK_DEBUG=true
# Create a non-root user: `appuser` (Feel free to choose a name)
RUN groupadd -r appuser && \
useradd -r -g appuser -d /app -s /sbin/nologin -c "Docker image user" appuser
# Copy the virtual environment and model from the builder stage
COPY --from=builder /venv /venv
COPY --from=builder /model /app/model
# Copy your application code
COPY . /app/application
# Change the ownership of the /app directory to the appuser
RUN mkdir -p /app/application/inputs/local
RUN chown -R appuser:appuser /app
# Set environment variables
ENV FLASK_APP=app.py \
FLASK_DEBUG=true \
PATH="/venv/bin:$PATH"
# Expose the port the app runs on
EXPOSE 7091
CMD ["gunicorn", "-w", "2", "--timeout", "120", "--bind", "0.0.0.0:7091", "application.wsgi:app"]
# Switch to non-root user
USER appuser
# Start Gunicorn
CMD ["gunicorn", "-w", "2", "--timeout", "120", "--bind", "0.0.0.0:7091", "application.wsgi:app"]


@@ -1,5 +1,6 @@
import asyncio
import os
import sys
from flask import Blueprint, request, Response
import json
import datetime
@@ -8,47 +9,48 @@ import traceback
from pymongo import MongoClient
from bson.objectid import ObjectId
from transformers import GPT2TokenizerFast
from application.core.settings import settings
from application.vectorstore.vector_creator import VectorCreator
from application.llm.llm_creator import LLMCreator
from application.retriever.retriever_creator import RetrieverCreator
from application.error import bad_request
logger = logging.getLogger(__name__)
mongo = MongoClient(settings.MONGO_URI)
db = mongo["docsgpt"]
conversations_collection = db["conversations"]
vectors_collection = db["vectors"]
answer = Blueprint('answer', __name__)
prompts_collection = db["prompts"]
api_key_collection = db["api_keys"]
answer = Blueprint("answer", __name__)
if settings.LLM_NAME == "gpt4":
gpt_model = 'gpt-4'
else:
gpt_model = 'gpt-3.5-turbo'
gpt_model = ""
# to have some kind of default behaviour
if settings.LLM_NAME == "openai":
gpt_model = "gpt-3.5-turbo"
elif settings.LLM_NAME == "anthropic":
gpt_model = "claude-2"
if settings.MODEL_NAME: # in case there is particular model name configured
gpt_model = settings.MODEL_NAME
# load the prompts
current_dir = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
with open(os.path.join(current_dir, "prompts", "combine_prompt.txt"), "r") as f:
template = f.read()
with open(os.path.join(current_dir, "prompts", "combine_prompt_hist.txt"), "r") as f:
template_hist = f.read()
with open(os.path.join(current_dir, "prompts", "question_prompt.txt"), "r") as f:
template_quest = f.read()
with open(os.path.join(current_dir, "prompts", "chat_combine_prompt.txt"), "r") as f:
current_dir = os.path.dirname(
os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
)
with open(os.path.join(current_dir, "prompts", "chat_combine_default.txt"), "r") as f:
chat_combine_template = f.read()
with open(os.path.join(current_dir, "prompts", "chat_reduce_prompt.txt"), "r") as f:
chat_reduce_template = f.read()
with open(os.path.join(current_dir, "prompts", "chat_combine_creative.txt"), "r") as f:
chat_combine_creative = f.read()
with open(os.path.join(current_dir, "prompts", "chat_combine_strict.txt"), "r") as f:
chat_combine_strict = f.read()
api_key_set = settings.API_KEY is not None
embeddings_key_set = settings.EMBEDDINGS_KEY is not None
@@ -58,11 +60,6 @@ async def async_generate(chain, question, chat_history):
return result
def count_tokens(string):
tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
return len(tokenizer(string)['input_ids'])
def run_async_chain(chain, question, chat_history):
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
@@ -75,13 +72,21 @@ def run_async_chain(chain, question, chat_history):
return result
def get_data_from_api_key(api_key):
data = api_key_collection.find_one({"key": api_key})
# # Raise custom exception if the API key is not found
if data is None:
raise Exception("Invalid API Key, please generate new key", 401)
return data
def get_vectorstore(data):
if "active_docs" in data:
if data["active_docs"].split("/")[0] == "local":
if data["active_docs"].split("/")[1] == "default":
vectorstore = ""
else:
vectorstore = "indexes/" + data["active_docs"]
if data["active_docs"].split("/")[0] == "default":
vectorstore = ""
elif data["active_docs"].split("/")[0] == "local":
vectorstore = "indexes/" + data["active_docs"]
else:
vectorstore = "vectors/" + data["active_docs"]
if data["active_docs"] == "default":
@@ -92,246 +97,343 @@ def get_vectorstore(data):
return vectorstore
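The path routing in the updated `get_vectorstore` above can be sketched as a small standalone function (names here are illustrative, not part of the codebase): `"default"` maps to the bundled index, `"local/..."` to an on-disk index, and anything else to a remote `"vectors/..."` path.

```python
# Standalone sketch of the active_docs -> vectorstore path routing
# shown in the diff above (illustrative names, not the real API).

def resolve_vectorstore(active_docs: str) -> str:
    prefix = active_docs.split("/")[0]
    if prefix == "default":
        return ""                          # bundled default index
    if prefix == "local":
        return "indexes/" + active_docs    # local on-disk index
    return "vectors/" + active_docs        # remote/user-uploaded index

print(resolve_vectorstore("local/my-docs"))  # → indexes/local/my-docs
print(resolve_vectorstore("team/docs"))      # → vectors/team/docs
```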
# def get_docsearch(vectorstore, embeddings_key):
# if settings.EMBEDDINGS_NAME == "openai_text-embedding-ada-002":
# if is_azure_configured():
# os.environ["OPENAI_API_TYPE"] = "azure"
# openai_embeddings = OpenAIEmbeddings(model=settings.AZURE_EMBEDDINGS_DEPLOYMENT_NAME)
# else:
# openai_embeddings = OpenAIEmbeddings(openai_api_key=embeddings_key)
# docsearch = FAISS.load_local(vectorstore, openai_embeddings)
# elif settings.EMBEDDINGS_NAME == "huggingface_sentence-transformers/all-mpnet-base-v2":
# docsearch = FAISS.load_local(vectorstore, HuggingFaceHubEmbeddings())
# elif settings.EMBEDDINGS_NAME == "huggingface_hkunlp/instructor-large":
# docsearch = FAISS.load_local(vectorstore, HuggingFaceInstructEmbeddings())
# elif settings.EMBEDDINGS_NAME == "cohere_medium":
# docsearch = FAISS.load_local(vectorstore, CohereEmbeddings(cohere_api_key=embeddings_key))
# return docsearch
def is_azure_configured():
return settings.OPENAI_API_BASE and settings.OPENAI_API_VERSION and settings.AZURE_DEPLOYMENT_NAME
return (
settings.OPENAI_API_BASE
and settings.OPENAI_API_VERSION
and settings.AZURE_DEPLOYMENT_NAME
)
def complete_stream(question, docsearch, chat_history, api_key, conversation_id):
llm = LLMCreator.create_llm(settings.LLM_NAME, api_key=api_key)
docs = docsearch.search(question, k=2)
if settings.LLM_NAME == "llama.cpp":
docs = [docs[0]]
# join all page_content together with a newline
docs_together = "\n".join([doc.page_content for doc in docs])
p_chat_combine = chat_combine_template.replace("{summaries}", docs_together)
messages_combine = [{"role": "system", "content": p_chat_combine}]
source_log_docs = []
for doc in docs:
if doc.metadata:
data = json.dumps({"type": "source", "doc": doc.page_content, "metadata": doc.metadata})
source_log_docs.append({"title": doc.metadata['title'].split('/')[-1], "text": doc.page_content})
else:
data = json.dumps({"type": "source", "doc": doc.page_content})
source_log_docs.append({"title": doc.page_content, "text": doc.page_content})
yield f"data:{data}\n\n"
if len(chat_history) > 1:
tokens_current_history = 0
# count tokens in history
chat_history.reverse()
for i in chat_history:
if "prompt" in i and "response" in i:
tokens_batch = count_tokens(i["prompt"]) + count_tokens(i["response"])
if tokens_current_history + tokens_batch < settings.TOKENS_MAX_HISTORY:
tokens_current_history += tokens_batch
messages_combine.append({"role": "user", "content": i["prompt"]})
messages_combine.append({"role": "system", "content": i["response"]})
messages_combine.append({"role": "user", "content": question})
response_full = ""
completion = llm.gen_stream(model=gpt_model, engine=settings.AZURE_DEPLOYMENT_NAME,
messages=messages_combine)
for line in completion:
data = json.dumps({"answer": str(line)})
response_full += str(line)
yield f"data: {data}\n\n"
# save conversation to database
if conversation_id is not None:
def save_conversation(conversation_id, question, response, source_log_docs, llm):
if conversation_id is not None and conversation_id != "None":
conversations_collection.update_one(
{"_id": ObjectId(conversation_id)},
{"$push": {"queries": {"prompt": question, "response": response_full, "sources": source_log_docs}}},
{
"$push": {
"queries": {
"prompt": question,
"response": response,
"sources": source_log_docs,
}
}
},
)
else:
# create new conversation
# generate summary
messages_summary = [{"role": "assistant", "content": "Summarise following conversation in no more than 3 "
"words, respond ONLY with the summary, use the same "
"language as the system \n\nUser: " + question + "\n\n" +
"AI: " +
response_full},
{"role": "user", "content": "Summarise following conversation in no more than 3 words, "
"respond ONLY with the summary, use the same language as the "
"system"}]
messages_summary = [
{
"role": "assistant",
"content": "Summarise following conversation in no more than 3 "
"words, respond ONLY with the summary, use the same "
"language as the system \n\nUser: "
+ question
+ "\n\n"
+ "AI: "
+ response,
},
{
"role": "user",
"content": "Summarise following conversation in no more than 3 words, "
"respond ONLY with the summary, use the same language as the "
"system",
},
]
completion = llm.gen(model=gpt_model, engine=settings.AZURE_DEPLOYMENT_NAME,
messages=messages_summary, max_tokens=30)
completion = llm.gen(model=gpt_model, messages=messages_summary, max_tokens=30)
conversation_id = conversations_collection.insert_one(
{"user": "local",
"date": datetime.datetime.utcnow(),
"name": completion,
"queries": [{"prompt": question, "response": response_full, "sources": source_log_docs}]}
{
"user": "local",
"date": datetime.datetime.utcnow(),
"name": completion,
"queries": [
{
"prompt": question,
"response": response,
"sources": source_log_docs,
}
],
}
).inserted_id
return conversation_id
# send data.type = "end" to indicate that the stream has ended as json
data = json.dumps({"type": "id", "id": str(conversation_id)})
yield f"data: {data}\n\n"
data = json.dumps({"type": "end"})
yield f"data: {data}\n\n"
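The handlers above emit Server-Sent Events: each event is a single `data: <json>` line followed by a blank line, and a `{"type": "end"}` event closes the stream. A minimal client-side parser, sketched for illustration (not part of the codebase):

```python
import json

# Sketch: parse the SSE lines the stream endpoint emits. Each event is
# one "data: <json>" line; a {"type": "end"} event marks end of stream.

def parse_sse_events(raw: str):
    events = []
    for line in raw.splitlines():
        if line.startswith("data:"):
            events.append(json.loads(line[len("data:"):].strip()))
    return events

stream = (
    'data: {"answer": "Hello"}\n\n'
    'data: {"type": "id", "id": "abc123"}\n\n'
    'data: {"type": "end"}\n\n'
)
events = parse_sse_events(stream)
# Concatenate the incremental answer chunks into the full response
answer = "".join(e["answer"] for e in events if "answer" in e)
print(answer)  # → Hello
```

A real client would read these lines incrementally from the HTTP response body instead of a string, stopping at the `end` event.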
def get_prompt(prompt_id):
if prompt_id == "default":
prompt = chat_combine_template
elif prompt_id == "creative":
prompt = chat_combine_creative
elif prompt_id == "strict":
prompt = chat_combine_strict
else:
prompt = prompts_collection.find_one({"_id": ObjectId(prompt_id)})["content"]
return prompt
def complete_stream(question, retriever, conversation_id, user_api_key):
try:
response_full = ""
source_log_docs = []
answer = retriever.gen()
for line in answer:
if "answer" in line:
response_full += str(line["answer"])
data = json.dumps(line)
yield f"data: {data}\n\n"
elif "source" in line:
source_log_docs.append(line["source"])
llm = LLMCreator.create_llm(
settings.LLM_NAME, api_key=settings.API_KEY, user_api_key=user_api_key
)
conversation_id = save_conversation(
conversation_id, question, response_full, source_log_docs, llm
)
# send data.type = "end" to indicate that the stream has ended as json
data = json.dumps({"type": "id", "id": str(conversation_id)})
yield f"data: {data}\n\n"
data = json.dumps({"type": "end"})
yield f"data: {data}\n\n"
except Exception as e:
print("\033[91merr", str(e), file=sys.stderr)
data = json.dumps({"type": "error","error":"Please try again later. We apologize for any inconvenience.",
"error_exception": str(e)})
yield f"data: {data}\n\n"
return
@answer.route("/stream", methods=["POST"])
def stream():
try:
data = request.get_json()
# get parameter from url question
question = data["question"]
history = data["history"]
# history to json object from string
history = json.loads(history)
conversation_id = data["conversation_id"]
# check if active_docs is set
if not api_key_set:
api_key = data["api_key"]
if "history" not in data:
history = []
else:
api_key = settings.API_KEY
if not embeddings_key_set:
embeddings_key = data["embeddings_key"]
history = data["history"]
history = json.loads(history)
if "conversation_id" not in data:
conversation_id = None
else:
embeddings_key = settings.EMBEDDINGS_KEY
if "active_docs" in data:
vectorstore = get_vectorstore({"active_docs": data["active_docs"]})
conversation_id = data["conversation_id"]
if "prompt_id" in data:
prompt_id = data["prompt_id"]
else:
vectorstore = ""
docsearch = VectorCreator.create_vectorstore(settings.VECTOR_STORE, vectorstore, embeddings_key)
prompt_id = "default"
if "selectedDocs" in data and data["selectedDocs"] is None:
chunks = 0
elif "chunks" in data:
chunks = int(data["chunks"])
else:
chunks = 2
if "token_limit" in data:
token_limit = data["token_limit"]
else:
token_limit = settings.DEFAULT_MAX_HISTORY
return Response(
complete_stream(question, docsearch,
chat_history=history, api_key=api_key,
conversation_id=conversation_id), mimetype="text/event-stream"
# check if active_docs or api_key is set
if "api_key" in data:
data_key = get_data_from_api_key(data["api_key"])
chunks = int(data_key["chunks"])
prompt_id = data_key["prompt_id"]
source = {"active_docs": data_key["source"]}
user_api_key = data["api_key"]
elif "active_docs" in data:
source = {"active_docs": data["active_docs"]}
user_api_key = None
else:
source = {}
user_api_key = None
if (
source["active_docs"].split("/")[0] == "default"
or source["active_docs"].split("/")[0] == "local"
):
retriever_name = "classic"
else:
retriever_name = source["active_docs"]
prompt = get_prompt(prompt_id)
retriever = RetrieverCreator.create_retriever(
retriever_name,
question=question,
source=source,
chat_history=history,
prompt=prompt,
chunks=chunks,
token_limit=token_limit,
gpt_model=gpt_model,
user_api_key=user_api_key,
)
return Response(
complete_stream(
question=question,
retriever=retriever,
conversation_id=conversation_id,
user_api_key=user_api_key,
),
mimetype="text/event-stream",
)
except ValueError:
message = "Malformed request body"
print("\033[91merr", str(message), file=sys.stderr)
return Response(
error_stream_generate(message),
status=400,
mimetype="text/event-stream",
)
except Exception as e:
print("\033[91merr", str(e), file=sys.stderr)
message = e.args[0]
status_code = 400
# # Custom exceptions with two arguments, index 1 as status code
if(len(e.args) >= 2):
status_code = e.args[1]
return Response(
error_stream_generate(message),
status=status_code,
mimetype="text/event-stream",
)
def error_stream_generate(err_response):
data = json.dumps({"type": "error", "error":err_response})
yield f"data: {data}\n\n"
@answer.route("/api/answer", methods=["POST"])
def api_answer():
data = request.get_json()
question = data["question"]
if "history" not in data:
history = []
else:
history = data["history"]
if "conversation_id" not in data:
conversation_id = None
else:
conversation_id = data["conversation_id"]
print("-" * 5)
if "prompt_id" in data:
prompt_id = data["prompt_id"]
else:
prompt_id = "default"
if "chunks" in data:
chunks = int(data["chunks"])
else:
chunks = 2
if "token_limit" in data:
token_limit = data["token_limit"]
else:
token_limit = settings.DEFAULT_MAX_HISTORY
# use try and except to check for exception
try:
# check if the vectorstore is set
vectorstore = get_vectorstore(data)
# loading the index and the store and the prompt template
# Note if you have used other embeddings than OpenAI, you need to change the embeddings
docsearch = VectorCreator.create_vectorstore(settings.VECTOR_STORE, vectorstore, embeddings_key)
llm = LLMCreator.create_llm(settings.LLM_NAME, api_key=api_key)
docs = docsearch.search(question, k=2)
# join all page_content together with a newline
docs_together = "\n".join([doc.page_content for doc in docs])
p_chat_combine = chat_combine_template.replace("{summaries}", docs_together)
messages_combine = [{"role": "system", "content": p_chat_combine}]
source_log_docs = []
for doc in docs:
if doc.metadata:
source_log_docs.append({"title": doc.metadata['title'].split('/')[-1], "text": doc.page_content})
else:
source_log_docs.append({"title": doc.page_content, "text": doc.page_content})
# join all page_content together with a newline
if len(history) > 1:
tokens_current_history = 0
# count tokens in history
history.reverse()
for i in history:
if "prompt" in i and "response" in i:
tokens_batch = count_tokens(i["prompt"]) + count_tokens(i["response"])
if tokens_current_history + tokens_batch < settings.TOKENS_MAX_HISTORY:
tokens_current_history += tokens_batch
messages_combine.append({"role": "user", "content": i["prompt"]})
messages_combine.append({"role": "system", "content": i["response"]})
messages_combine.append({"role": "user", "content": question})
completion = llm.gen(model=gpt_model, engine=settings.AZURE_DEPLOYMENT_NAME,
messages=messages_combine)
result = {"answer": completion, "sources": source_log_docs}
logger.debug(result)
# generate conversationId
if conversation_id is not None:
conversations_collection.update_one(
{"_id": ObjectId(conversation_id)},
{"$push": {"queries": {"prompt": question,
"response": result["answer"], "sources": result['sources']}}},
)
if "api_key" in data:
data_key = get_data_from_api_key(data["api_key"])
chunks = int(data_key["chunks"])
prompt_id = data_key["prompt_id"]
source = {"active_docs": data_key["source"]}
user_api_key = data["api_key"]
else:
# create new conversation
# generate summary
messages_summary = [
{"role": "assistant", "content": "Summarise following conversation in no more than 3 words, "
"respond ONLY with the summary, use the same language as the system \n\n"
"User: " + question + "\n\n" + "AI: " + result["answer"]},
{"role": "user", "content": "Summarise following conversation in no more than 3 words, "
"respond ONLY with the summary, use the same language as the system"}
]
source = data
user_api_key = None
completion = llm.gen(
model=gpt_model,
engine=settings.AZURE_DEPLOYMENT_NAME,
messages=messages_summary,
max_tokens=30
)
conversation_id = conversations_collection.insert_one(
{"user": "local",
"date": datetime.datetime.utcnow(),
"name": completion,
"queries": [{"prompt": question, "response": result["answer"], "sources": source_log_docs}]}
).inserted_id
if (
source["active_docs"].split("/")[0] == "default"
or source["active_docs"].split("/")[0] == "local"
):
retriever_name = "classic"
else:
retriever_name = source["active_docs"]
result["conversation_id"] = str(conversation_id)
prompt = get_prompt(prompt_id)
retriever = RetrieverCreator.create_retriever(
retriever_name,
question=question,
source=source,
chat_history=history,
prompt=prompt,
chunks=chunks,
token_limit=token_limit,
gpt_model=gpt_model,
user_api_key=user_api_key,
)
source_log_docs = []
response_full = ""
for line in retriever.gen():
if "source" in line:
source_log_docs.append(line["source"])
elif "answer" in line:
response_full += line["answer"]
llm = LLMCreator.create_llm(
settings.LLM_NAME, api_key=settings.API_KEY, user_api_key=user_api_key
)
result = {"answer": response_full, "sources": source_log_docs}
result["conversation_id"] = save_conversation(
conversation_id, question, response_full, source_log_docs, llm
)
# mock result
# result = {
# "answer": "The answer is 42",
# "sources": ["https://en.wikipedia.org/wiki/42_(number)", "https://en.wikipedia.org/wiki/42_(number)"]
# }
return result
except Exception as e:
# print whole traceback
traceback.print_exc()
print(str(e))
return bad_request(500, str(e))
@answer.route("/api/search", methods=["POST"])
def api_search():
data = request.get_json()
# get parameter from url question
question = data["question"]
if "chunks" in data:
chunks = int(data["chunks"])
else:
chunks = 2
if "api_key" in data:
data_key = get_data_from_api_key(data["api_key"])
chunks = int(data_key["chunks"])
source = {"active_docs": data_key["source"]}
user_api_key = data["api_key"]
elif "active_docs" in data:
source = {"active_docs": data["active_docs"]}
user_api_key = None
else:
source = {}
user_api_key = None
if (
source["active_docs"].split("/")[0] == "default"
or source["active_docs"].split("/")[0] == "local"
):
retriever_name = "classic"
else:
retriever_name = source["active_docs"]
if "token_limit" in data:
token_limit = data["token_limit"]
else:
token_limit = settings.DEFAULT_MAX_HISTORY
retriever = RetrieverCreator.create_retriever(
retriever_name,
question=question,
source=source,
chat_history=[],
prompt="default",
chunks=chunks,
token_limit=token_limit,
gpt_model=gpt_model,
user_api_key=user_api_key,
)
docs = retriever.search()
return docs
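The `/api/search` handler above falls back to `chunks=2` and `token_limit=DEFAULT_MAX_HISTORY` when those fields are absent. A small sketch of a request body that matches what the handler parses (the `build_search_payload` helper name is illustrative, not part of the API; `token_limit=150` mirrors the shipped `DEFAULT_MAX_HISTORY`):

```python
def build_search_payload(question, active_docs=None, chunks=2, token_limit=150):
    """Assemble the JSON body /api/search expects; defaults mirror the
    server-side fallbacks shown above."""
    payload = {"question": question, "chunks": chunks, "token_limit": token_limit}
    if active_docs is not None:
        payload["active_docs"] = active_docs
    return payload

print(build_search_payload("How do I upload a file?", active_docs="default"))
```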

application/api/internal/routes.py Normal file → Executable file

@@ -34,6 +34,7 @@ def upload_index_files():
if "name" not in request.form:
return {"status": "no name"}
job_name = secure_filename(request.form["name"])
tokens = secure_filename(request.form["tokens"])
save_dir = os.path.join(current_dir, "indexes", user, job_name)
if settings.VECTOR_STORE == "faiss":
if "file_faiss" not in request.files:
@@ -64,6 +65,7 @@ def upload_index_files():
"date": datetime.datetime.now().strftime("%d/%m/%Y %H:%M:%S"),
"model": settings.EMBEDDINGS_NAME,
"type": "local",
"tokens": tokens
}
)
return {"status": "ok"}


@@ -1,13 +1,15 @@
import os
import uuid
import shutil
from flask import Blueprint, request, jsonify
from urllib.parse import urlparse
import requests
import json
from pymongo import MongoClient
from bson.objectid import ObjectId
from bson.binary import Binary, UuidRepresentation
from werkzeug.utils import secure_filename
from bson.dbref import DBRef
from application.api.user.tasks import ingest, ingest_remote
from application.core.settings import settings
from application.vectorstore.vector_creator import VectorCreator
@@ -16,9 +18,17 @@ mongo = MongoClient(settings.MONGO_URI)
db = mongo["docsgpt"]
conversations_collection = db["conversations"]
vectors_collection = db["vectors"]
prompts_collection = db["prompts"]
feedback_collection = db["feedback"]
api_key_collection = db["api_keys"]
shared_conversations_collections = db["shared_conversations"]
user = Blueprint("user", __name__)
current_dir = os.path.dirname(
os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
)
@user.route("/api/delete_conversation", methods=["POST"])
def delete_conversation():
@@ -33,15 +43,25 @@ def delete_conversation():
return {"status": "ok"}
@user.route("/api/delete_all_conversations", methods=["POST"])
def delete_all_conversations():
user_id = "local"
conversations_collection.delete_many({"user": user_id})
return {"status": "ok"}
@user.route("/api/get_conversations", methods=["get"])
def get_conversations():
# provides a list of conversations
conversations = conversations_collection.find().sort("date", -1).limit(30)
list_conversations = []
for conversation in conversations:
list_conversations.append(
{"id": str(conversation["_id"]), "name": conversation["name"]}
)
# list_conversations = [{"id": "default", "name": "default"}, {"id": "jeff", "name": "jeff"}]
return jsonify(list_conversations)
@@ -51,7 +71,17 @@ def get_single_conversation():
# provides data for a conversation
conversation_id = request.args.get("id")
conversation = conversations_collection.find_one({"_id": ObjectId(conversation_id)})
return jsonify(conversation["queries"])
@user.route("/api/update_conversation_name", methods=["POST"])
def update_conversation_name():
# update data for a conversation
data = request.get_json()
id = data["id"]
name = data["name"]
conversations_collection.update_one({"_id": ObjectId(id)}, {"$set": {"name": name}})
return {"status": "ok"}
@user.route("/api/feedback", methods=["POST"])
@@ -61,19 +91,29 @@ def api_feedback():
answer = data["answer"]
feedback = data["feedback"]
print("-" * 5)
print("Question: " + question)
print("Answer: " + answer)
print("Feedback: " + feedback)
print("-" * 5)
feedback_collection.insert_one(
{
"question": question,
"answer": answer,
"feedback": feedback,
}
)
return {"status": "ok"}
@user.route("/api/delete_by_ids", methods=["get"])
def delete_by_ids():
"""Delete by ID. These are the IDs in the vectorstore"""
ids = request.args.get("path")
if not ids:
return {"status": "error"}
if settings.VECTOR_STORE == "faiss":
result = vectors_collection.delete_index(ids=ids)
if result:
return {"status": "ok"}
return {"status": "error"}
@user.route("/api/delete_old", methods=["get"])
@@ -84,13 +124,14 @@ def delete_old():
path = request.args.get("path")
dirs = path.split("/")
dirs_clean = []
for i in range(0, len(dirs)):
dirs_clean.append(secure_filename(dirs[i]))
# check that path starts with indexes or vectors
if dirs_clean[0] not in ["indexes", "vectors"]:
return {"status": "error"}
path_clean = "/".join(dirs_clean)
vectors_collection.delete_one({"name": dirs_clean[-1], "user": dirs_clean[-2]})
if settings.VECTOR_STORE == "faiss":
try:
shutil.rmtree(os.path.join(current_dir, path_clean))
@@ -101,9 +142,10 @@ def delete_old():
settings.VECTOR_STORE, path=os.path.join(current_dir, path_clean)
)
vetorstore.delete_index()
return {"status": "ok"}
@user.route("/api/upload", methods=["POST"])
def upload_file():
"""Upload a file to get vectorized and indexed."""
@@ -114,34 +156,84 @@ def upload_file():
return {"status": "no name"}
job_name = secure_filename(request.form["name"])
# check if the post request has the file part
if "file" not in request.files:
print("No file part")
return {"status": "no file"}
files = request.files.getlist("file")
if not files or all(file.filename == "" for file in files):
return {"status": "no file name"}
# Directory where files will be saved
save_dir = os.path.join(current_dir, settings.UPLOAD_FOLDER, user, job_name)
os.makedirs(save_dir, exist_ok=True)
if len(files) > 1:
# Multiple files; prepare them for zip
temp_dir = os.path.join(save_dir, "temp")
os.makedirs(temp_dir, exist_ok=True)
for file in files:
filename = secure_filename(file.filename)
file.save(os.path.join(temp_dir, filename))
# Use shutil.make_archive to zip the temp directory
zip_path = shutil.make_archive(
base_name=os.path.join(save_dir, job_name), format="zip", root_dir=temp_dir
)
final_filename = os.path.basename(zip_path)
# Clean up the temporary directory after zipping
shutil.rmtree(temp_dir)
else:
# Single file
file = files[0]
final_filename = secure_filename(file.filename)
file_path = os.path.join(save_dir, final_filename)
file.save(file_path)
# Call ingest with the single file or zipped file
task = ingest.delay(
settings.UPLOAD_FOLDER,
[".rst", ".md", ".pdf", ".txt", ".docx", ".csv", ".epub", ".html", ".mdx"],
job_name,
final_filename,
user,
)
return {"status": "ok", "task_id": task.id}
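The multi-file branch of `upload_file` stages the uploads in a temp directory, zips it with `shutil.make_archive`, and discards the staging directory. The same sequence, isolated as a runnable sketch (paths here are temporary and illustrative):

```python
import os
import shutil
import tempfile

# Mirror the multi-file branch above: stage files in a temp directory,
# zip it with shutil.make_archive, then remove the staging directory.
save_dir = tempfile.mkdtemp()
temp_dir = os.path.join(save_dir, "temp")
os.makedirs(temp_dir, exist_ok=True)
for name in ("a.txt", "b.md"):
    with open(os.path.join(temp_dir, name), "w") as f:
        f.write("sample content")
zip_path = shutil.make_archive(
    base_name=os.path.join(save_dir, "job"), format="zip", root_dir=temp_dir
)
shutil.rmtree(temp_dir)
final_filename = os.path.basename(zip_path)
print(final_filename)  # job.zip
```

`make_archive` returns the full path of the created archive, which is why the route takes `os.path.basename(zip_path)` as the filename handed to `ingest`.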
@user.route("/api/remote", methods=["POST"])
def upload_remote():
"""Upload a remote source to get vectorized and indexed."""
if "user" not in request.form:
return {"status": "no user"}
user = secure_filename(request.form["user"])
if "source" not in request.form:
return {"status": "no source"}
source = secure_filename(request.form["source"])
if "name" not in request.form:
return {"status": "no name"}
job_name = secure_filename(request.form["name"])
if "data" not in request.form:
print("No data")
return {"status": "no data"}
source_data = request.form["data"]
if source_data:
task = ingest_remote.delay(
source_data=source_data, job_name=job_name, user=user, loader=source
)
task_id = task.id
return {"status": "ok", "task_id": task_id}
else:
return {"status": "error"}
@user.route("/api/task_status", methods=["GET"])
def task_status():
"""Get celery job status."""
task_id = request.args.get("task_id")
from application.celery_init import celery
task = celery.AsyncResult(task_id)
task_meta = task.info
return {"status": task.status, "result": task_meta}
@@ -163,12 +255,13 @@ def combined_json():
"date": "default",
"docLink": "default",
"model": settings.EMBEDDINGS_NAME,
"location": "remote",
"tokens":""
}
]
# structure: name, language, version, description, fullName, date, docLink
# append data from vectors_collection in descending order of date
for index in vectors_collection.find({"user": user}).sort("date", -1):
data.append(
{
"name": index["name"],
@@ -180,13 +273,46 @@ def combined_json():
"docLink": index["location"],
"model": settings.EMBEDDINGS_NAME,
"location": "local",
"tokens": index["tokens"] if "tokens" in index else ""
}
)
if settings.VECTOR_STORE == "faiss":
data_remote = requests.get(
"https://d3dg1063dc54p9.cloudfront.net/combined.json"
).json()
for index in data_remote:
index["location"] = "remote"
data.append(index)
if "duckduck_search" in settings.RETRIEVERS_ENABLED:
data.append(
{
"name": "DuckDuckGo Search",
"language": "en",
"version": "",
"description": "duckduck_search",
"fullName": "DuckDuckGo Search",
"date": "duckduck_search",
"docLink": "duckduck_search",
"model": settings.EMBEDDINGS_NAME,
"location": "custom",
"tokens":""
}
)
if "brave_search" in settings.RETRIEVERS_ENABLED:
data.append(
{
"name": "Brave Search",
"language": "en",
"version": "",
"description": "brave_search",
"fullName": "Brave Search",
"date": "brave_search",
"docLink": "brave_search",
"model": settings.EMBEDDINGS_NAME,
"location": "custom",
"tokens":""
}
)
return jsonify(data)
@@ -198,29 +324,243 @@ def check_docs():
# split docs on / and take first part
if data["docs"].split("/")[0] == "local":
return {"status": "exists"}
vectorstore = "vectors/" + secure_filename(data["docs"])
base_path = "https://raw.githubusercontent.com/arc53/DocsHUB/main/"
if os.path.exists(vectorstore) or data["docs"] == "default":
return {"status": "exists"}
else:
file_url = urlparse(base_path + vectorstore + "index.faiss")
if (
file_url.scheme in ["https"]
and file_url.netloc == "raw.githubusercontent.com"
and file_url.path.startswith("/arc53/DocsHUB/main/")
):
r = requests.get(file_url.geturl())
if r.status_code != 200:
return {"status": "null"}
else:
if not os.path.exists(vectorstore):
os.makedirs(vectorstore)
with open(vectorstore + "index.faiss", "wb") as f:
f.write(r.content)
r = requests.get(base_path + vectorstore + "index.pkl")
with open(vectorstore + "index.pkl", "wb") as f:
f.write(r.content)
else:
return {"status": "null"}
return {"status": "loaded"}
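The allow-list check that `check_docs` applies before downloading an index can be factored out and exercised on its own (the `is_allowed_index_url` name is illustrative; the scheme/host/path conditions are exactly those in the route):

```python
from urllib.parse import urlparse

def is_allowed_index_url(url):
    """Same allow-list check as check_docs above: only https downloads from
    raw.githubusercontent.com under arc53/DocsHUB/main/ are accepted."""
    file_url = urlparse(url)
    return (
        file_url.scheme in ["https"]
        and file_url.netloc == "raw.githubusercontent.com"
        and file_url.path.startswith("/arc53/DocsHUB/main/")
    )

print(is_allowed_index_url(
    "https://raw.githubusercontent.com/arc53/DocsHUB/main/vectors/x/index.faiss"
))  # True
print(is_allowed_index_url("http://example.com/index.faiss"))  # False
```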
@user.route("/api/create_prompt", methods=["POST"])
def create_prompt():
data = request.get_json()
content = data["content"]
name = data["name"]
if name == "":
return {"status": "error"}
user = "local"
resp = prompts_collection.insert_one(
{
"name": name,
"content": content,
"user": user,
}
)
new_id = str(resp.inserted_id)
return {"id": new_id}
@user.route("/api/get_prompts", methods=["GET"])
def get_prompts():
user = "local"
prompts = prompts_collection.find({"user": user})
list_prompts = []
list_prompts.append({"id": "default", "name": "default", "type": "public"})
list_prompts.append({"id": "creative", "name": "creative", "type": "public"})
list_prompts.append({"id": "strict", "name": "strict", "type": "public"})
for prompt in prompts:
list_prompts.append(
{"id": str(prompt["_id"]), "name": prompt["name"], "type": "private"}
)
return jsonify(list_prompts)
@user.route("/api/get_single_prompt", methods=["GET"])
def get_single_prompt():
prompt_id = request.args.get("id")
if prompt_id == "default":
with open(
os.path.join(current_dir, "prompts", "chat_combine_default.txt"), "r"
) as f:
chat_combine_template = f.read()
return jsonify({"content": chat_combine_template})
elif prompt_id == "creative":
with open(
os.path.join(current_dir, "prompts", "chat_combine_creative.txt"), "r"
) as f:
chat_reduce_creative = f.read()
return jsonify({"content": chat_reduce_creative})
elif prompt_id == "strict":
with open(
os.path.join(current_dir, "prompts", "chat_combine_strict.txt"), "r"
) as f:
chat_reduce_strict = f.read()
return jsonify({"content": chat_reduce_strict})
prompt = prompts_collection.find_one({"_id": ObjectId(prompt_id)})
return jsonify({"content": prompt["content"]})
@user.route("/api/delete_prompt", methods=["POST"])
def delete_prompt():
data = request.get_json()
id = data["id"]
prompts_collection.delete_one(
{
"_id": ObjectId(id),
}
)
return {"status": "ok"}
@user.route("/api/update_prompt", methods=["POST"])
def update_prompt_name():
data = request.get_json()
id = data["id"]
name = data["name"]
content = data["content"]
# check if name is null
if name == "":
return {"status": "error"}
prompts_collection.update_one(
{"_id": ObjectId(id)}, {"$set": {"name": name, "content": content}}
)
return {"status": "ok"}
@user.route("/api/get_api_keys", methods=["GET"])
def get_api_keys():
user = "local"
keys = api_key_collection.find({"user": user})
list_keys = []
for key in keys:
list_keys.append(
{
"id": str(key["_id"]),
"name": key["name"],
"key": key["key"][:4] + "..." + key["key"][-4:],
"source": key["source"],
"prompt_id": key["prompt_id"],
"chunks": key["chunks"],
}
)
return jsonify(list_keys)
@user.route("/api/create_api_key", methods=["POST"])
def create_api_key():
data = request.get_json()
name = data["name"]
source = data["source"]
prompt_id = data["prompt_id"]
chunks = data["chunks"]
key = str(uuid.uuid4())
user = "local"
resp = api_key_collection.insert_one(
{
"name": name,
"key": key,
"source": source,
"user": user,
"prompt_id": prompt_id,
"chunks": chunks,
}
)
new_id = str(resp.inserted_id)
return {"id": new_id, "key": key}
@user.route("/api/delete_api_key", methods=["POST"])
def delete_api_key():
data = request.get_json()
id = data["id"]
api_key_collection.delete_one(
{
"_id": ObjectId(id),
}
)
return {"status": "ok"}
# Route to share a conversation; isPromptable should be passed as a query parameter
@user.route("/api/share", methods=["POST"])
def share_conversation():
try:
data = request.get_json()
user = "local"
if "user" in data:
user = data["user"]
conversation_id = data["conversation_id"]
isPromptable = request.args.get("isPromptable", "false").lower() == "true"
conversation = conversations_collection.find_one({"_id": ObjectId(conversation_id)})
current_n_queries = len(conversation["queries"])
pre_existing = shared_conversations_collections.find_one({
"conversation_id": DBRef("conversations", ObjectId(conversation_id)),
"isPromptable": isPromptable,
"first_n_queries": current_n_queries
})
if pre_existing is not None:
explicit_binary = pre_existing["uuid"]
return jsonify({"success": True, "identifier": str(explicit_binary.as_uuid())}), 200
else:
explicit_binary = Binary.from_uuid(uuid.uuid4(), UuidRepresentation.STANDARD)
shared_conversations_collections.insert_one({
"uuid": explicit_binary,
"conversation_id": DBRef("conversations", ObjectId(conversation_id)),
"isPromptable": isPromptable,
"first_n_queries": current_n_queries,
"user": user
})
# The identifier is used as a route parameter in the frontend
return jsonify({"success": True, "identifier": str(explicit_binary.as_uuid())}), 201
except Exception as err:
return jsonify({"success": False, "error": str(err)}), 400
# Route to get publicly shared conversations
@user.route("/api/shared_conversation/<string:identifier>", methods=["GET"])
def get_publicly_shared_conversations(identifier: str):
try:
query_uuid = Binary.from_uuid(uuid.UUID(identifier), UuidRepresentation.STANDARD)
shared = shared_conversations_collections.find_one({"uuid": query_uuid})
conversation_queries = []
if shared and "conversation_id" in shared and isinstance(shared["conversation_id"], DBRef):
# Resolve the DBRef
conversation_ref = shared["conversation_id"]
conversation = db.dereference(conversation_ref)
if conversation is None:
return jsonify({"success": False, "error": "The URL may be broken or the conversation does not exist"}), 404
conversation_queries = conversation["queries"][: shared["first_n_queries"]]
for query in conversation_queries:
query.pop("sources", None)  # avoid exposing sources
else:
return jsonify({"success": False, "error": "The URL may be broken or the conversation does not exist"}), 404
date = conversation["_id"].generation_time.isoformat()
return jsonify({
"success": True,
"queries": conversation_queries,
"title": conversation["name"],
"timestamp": date
}), 200
except Exception as err:
print(err)
return jsonify({"success": False, "error": str(err)}), 400
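The public-sharing endpoint exposes only the first N queries recorded at share time and drops their sources. That redaction step, isolated as a dependency-free sketch (the route itself operates on the stored Mongo document; `redact_shared_queries` is an illustrative name):

```python
def redact_shared_queries(queries, first_n):
    """Mirror get_publicly_shared_conversations above: keep only the first
    N queries of the conversation and strip their sources before sharing."""
    redacted = []
    for query in queries[:first_n]:
        query = dict(query)  # copy so the stored document is untouched
        query.pop("sources", None)
        redacted.append(query)
    return redacted

queries = [
    {"prompt": "q1", "response": "a1", "sources": [{"title": "t"}]},
    {"prompt": "q2", "response": "a2", "sources": []},
]
shared = redact_shared_queries(queries, 1)
print(shared)  # [{'prompt': 'q1', 'response': 'a1'}]
```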


@@ -1,7 +1,12 @@
from application.worker import ingest_worker, remote_worker
from application.celery_init import celery
@celery.task(bind=True)
def ingest(self, directory, formats, name_job, filename, user):
resp = ingest_worker(self, directory, formats, name_job, filename, user)
return resp
@celery.task(bind=True)
def ingest_remote(self, source_data, job_name, user, loader):
resp = remote_worker(self, source_data, job_name, user, loader)
return resp


@@ -1,68 +1,44 @@
import platform
import dotenv
from application.celery_init import celery
from flask import Flask, request, redirect
from application.core.settings import settings
from application.api.user.routes import user
from application.api.answer.routes import answer
from application.api.internal.routes import internal
# Redirect PosixPath to WindowsPath on Windows
if platform.system() == "Windows":
import pathlib
temp = pathlib.PosixPath
pathlib.PosixPath = pathlib.WindowsPath
# loading the .env file
dotenv.load_dotenv()
app = Flask(__name__)
app.register_blueprint(user)
app.register_blueprint(answer)
app.register_blueprint(internal)
app.config.update(
UPLOAD_FOLDER="inputs",
CELERY_BROKER_URL=settings.CELERY_BROKER_URL,
CELERY_RESULT_BACKEND=settings.CELERY_RESULT_BACKEND,
MONGO_URI=settings.MONGO_URI
)
celery.config_from_object("application.celeryconfig")
@app.route("/")
def home():
"""
The frontend source code lives in the /frontend directory of the repository.
"""
if request.remote_addr in ('0.0.0.0', '127.0.0.1', 'localhost', '172.18.0.1'):
# If users locally try to access DocsGPT running in Docker,
# they will be redirected to the Frontend application.
return redirect('http://localhost:5173')
else:
# Handle other cases or render the default page
return 'Welcome to DocsGPT Backend!'
# handling CORS
@app.after_request
def after_request(response):
response.headers.add("Access-Control-Allow-Origin", "*")
response.headers.add("Access-Control-Allow-Headers", "Content-Type,Authorization")
response.headers.add("Access-Control-Allow-Methods", "GET,PUT,POST,DELETE,OPTIONS")
# response.headers.add("Access-Control-Allow-Credentials", "true")
return response
if __name__ == "__main__":
app.run(debug=settings.FLASK_DEBUG_MODE, port=7091)


@@ -1,36 +1,69 @@
from pathlib import Path
from typing import Optional
import os
from pydantic_settings import BaseSettings
current_dir = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
class Settings(BaseSettings):
LLM_NAME: str = "docsgpt"
MODEL_NAME: Optional[str] = None # if LLM_NAME is openai, MODEL_NAME can be gpt-4 or gpt-3.5-turbo
EMBEDDINGS_NAME: str = "huggingface_sentence-transformers/all-mpnet-base-v2"
CELERY_BROKER_URL: str = "redis://localhost:6379/0"
CELERY_RESULT_BACKEND: str = "redis://localhost:6379/1"
MONGO_URI: str = "mongodb://localhost:27017/docsgpt"
MODEL_PATH: str = os.path.join(current_dir, "models/docsgpt-7b-f16.gguf")
TOKENS_MAX_HISTORY: int = 150
DEFAULT_MAX_HISTORY: int = 150
MODEL_TOKEN_LIMITS: dict = {"gpt-3.5-turbo": 4096, "claude-2": 1e5}
UPLOAD_FOLDER: str = "inputs"
VECTOR_STORE: str = "faiss" # "faiss" or "elasticsearch" or "qdrant"
RETRIEVERS_ENABLED: list = ["classic_rag", "duckduck_search"] # also brave_search
API_URL: str = "http://localhost:7091" # backend url for celery worker
API_KEY: Optional[str] = None # LLM api key
EMBEDDINGS_KEY: Optional[str] = None # api key for embeddings (if using openai, just copy API_KEY)
OPENAI_API_BASE: Optional[str] = None # azure openai api base url
OPENAI_API_VERSION: Optional[str] = None # azure openai api version
AZURE_DEPLOYMENT_NAME: Optional[str] = None # azure deployment name for answering
AZURE_EMBEDDINGS_DEPLOYMENT_NAME: Optional[str] = None # azure deployment name for embeddings
# elasticsearch
ELASTIC_CLOUD_ID: Optional[str] = None # cloud id for elasticsearch
ELASTIC_USERNAME: Optional[str] = None # username for elasticsearch
ELASTIC_PASSWORD: Optional[str] = None # password for elasticsearch
ELASTIC_URL: Optional[str] = None # url for elasticsearch
ELASTIC_INDEX: Optional[str] = "docsgpt" # index name for elasticsearch
# SageMaker config
SAGEMAKER_ENDPOINT: Optional[str] = None # SageMaker endpoint name
SAGEMAKER_REGION: Optional[str] = None # SageMaker region name
SAGEMAKER_ACCESS_KEY: Optional[str] = None # SageMaker access key
SAGEMAKER_SECRET_KEY: Optional[str] = None # SageMaker secret key
# prem ai project id
PREMAI_PROJECT_ID: Optional[str] = None
# Qdrant vectorstore config
QDRANT_COLLECTION_NAME: Optional[str] = "docsgpt"
QDRANT_LOCATION: Optional[str] = None
QDRANT_URL: Optional[str] = None
QDRANT_PORT: Optional[int] = 6333
QDRANT_GRPC_PORT: int = 6334
QDRANT_PREFER_GRPC: bool = False
QDRANT_HTTPS: Optional[bool] = None
QDRANT_API_KEY: Optional[str] = None
QDRANT_PREFIX: Optional[str] = None
QDRANT_TIMEOUT: Optional[float] = None
QDRANT_HOST: Optional[str] = None
QDRANT_PATH: Optional[str] = None
QDRANT_DISTANCE_FUNC: str = "Cosine"
BRAVE_SEARCH_API_KEY: Optional[str] = None
FLASK_DEBUG_MODE: bool = False
path = Path(__file__).parent.parent.absolute()
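Because `Settings` extends pydantic's `BaseSettings`, each field above falls back to its class default unless a same-named environment variable overrides it. A dependency-free sketch of that override behavior (the `MiniSettings` stand-in is illustrative, not the library mechanism itself):

```python
import os

class MiniSettings:
    """Hand-rolled stand-in for the BaseSettings behavior used above: each
    field takes its class default unless an environment variable with the
    same name is set."""

    DEFAULTS = {"LLM_NAME": "docsgpt", "VECTOR_STORE": "faiss"}

    def __init__(self):
        for name, default in self.DEFAULTS.items():
            setattr(self, name, os.environ.get(name, default))

os.environ["LLM_NAME"] = "openai"
settings = MiniSettings()
print(settings.LLM_NAME)  # openai
```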

Binary file not shown.

Binary file not shown.


@@ -0,0 +1,50 @@
from application.llm.base import BaseLLM
from application.core.settings import settings
class AnthropicLLM(BaseLLM):
def __init__(self, api_key=None, user_api_key=None, *args, **kwargs):
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT
super().__init__(*args, **kwargs)
self.api_key = (
api_key or settings.ANTHROPIC_API_KEY
) # If not provided, use a default from settings
self.user_api_key = user_api_key
self.anthropic = Anthropic(api_key=self.api_key)
self.HUMAN_PROMPT = HUMAN_PROMPT
self.AI_PROMPT = AI_PROMPT
def _raw_gen(
self, baseself, model, messages, stream=False, max_tokens=300, **kwargs
):
context = messages[0]["content"]
user_question = messages[-1]["content"]
prompt = f"### Context \n {context} \n ### Question \n {user_question}"
if stream:
return self.gen_stream(model, prompt, stream, max_tokens, **kwargs)
completion = self.anthropic.completions.create(
model=model,
max_tokens_to_sample=max_tokens,
stream=stream,
prompt=f"{self.HUMAN_PROMPT} {prompt}{self.AI_PROMPT}",
)
return completion.completion
def _raw_gen_stream(
self, baseself, model, messages, stream=True, max_tokens=300, **kwargs
):
context = messages[0]["content"]
user_question = messages[-1]["content"]
prompt = f"### Context \n {context} \n ### Question \n {user_question}"
stream_response = self.anthropic.completions.create(
model=model,
prompt=f"{self.HUMAN_PROMPT} {prompt}{self.AI_PROMPT}",
max_tokens_to_sample=max_tokens,
stream=True,
)
for completion in stream_response:
yield completion.completion


@@ -1,14 +1,28 @@
from abc import ABC, abstractmethod
from application.usage import gen_token_usage, stream_token_usage
class BaseLLM(ABC):
def __init__(self):
pass
self.token_usage = {"prompt_tokens": 0, "generated_tokens": 0}
def _apply_decorator(self, method, decorator, *args, **kwargs):
return decorator(method, *args, **kwargs)
@abstractmethod
def gen(self, *args, **kwargs):
def _raw_gen(self, model, messages, stream, *args, **kwargs):
pass
def gen(self, model, messages, stream=False, *args, **kwargs):
return self._apply_decorator(self._raw_gen, gen_token_usage)(
self, model=model, messages=messages, stream=stream, *args, **kwargs
)
@abstractmethod
def gen_stream(self, *args, **kwargs):
def _raw_gen_stream(self, model, messages, stream, *args, **kwargs):
pass
def gen_stream(self, model, messages, stream=True, *args, **kwargs):
return self._apply_decorator(self._raw_gen_stream, stream_token_usage)(
self, model=model, messages=messages, stream=stream, *args, **kwargs
)
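The refactor above routes every public `gen`/`gen_stream` call through `_apply_decorator`, so usage accounting wraps the provider-specific `_raw_gen` transparently; note the extra `self` passed positionally, which the `baseself` parameter absorbs. A toy sketch of that wiring (the `track_usage` decorator and its word-based counting are stand-ins for `gen_token_usage`):

```python
def track_usage(method):
    # Stand-in for gen_token_usage: count whitespace-separated tokens of the result.
    def wrapper(baseself, *args, **kwargs):
        result = method(baseself, *args, **kwargs)
        baseself.token_usage["generated_tokens"] += len(result.split())
        return result
    return wrapper

class ToyLLM:
    def __init__(self):
        self.token_usage = {"prompt_tokens": 0, "generated_tokens": 0}

    def _apply_decorator(self, method, decorator):
        return decorator(method)

    def _raw_gen(self, baseself, model, messages):
        # Provider-specific work; `baseself` absorbs the extra self passed in gen().
        return "echo " + messages[-1]["content"]

    def gen(self, model, messages):
        return self._apply_decorator(self._raw_gen, track_usage)(
            self, model=model, messages=messages
        )
```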


@@ -0,0 +1,44 @@
from application.llm.base import BaseLLM
import json
import requests
class DocsGPTAPILLM(BaseLLM):
def __init__(self, api_key=None, user_api_key=None, *args, **kwargs):
super().__init__(*args, **kwargs)
self.api_key = api_key
self.user_api_key = user_api_key
self.endpoint = "https://llm.docsgpt.co.uk"
def _raw_gen(self, baseself, model, messages, stream=False, *args, **kwargs):
context = messages[0]["content"]
user_question = messages[-1]["content"]
prompt = f"### Instruction \n {user_question} \n ### Context \n {context} \n ### Answer \n"
response = requests.post(
f"{self.endpoint}/answer", json={"prompt": prompt, "max_new_tokens": 30}
)
response_clean = response.json()["a"].replace("###", "")
return response_clean
def _raw_gen_stream(self, baseself, model, messages, stream=True, *args, **kwargs):
context = messages[0]["content"]
user_question = messages[-1]["content"]
prompt = f"### Instruction \n {user_question} \n ### Context \n {context} \n ### Answer \n"
# send prompt to endpoint /stream
response = requests.post(
f"{self.endpoint}/stream",
json={"prompt": prompt, "max_new_tokens": 256},
stream=True,
)
for line in response.iter_lines():
if line:
# data = json.loads(line)
data_str = line.decode("utf-8")
if data_str.startswith("data: "):
data = json.loads(data_str[6:])
yield data["a"]
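The stream branch above reads server-sent-event style byte lines, keeping only those prefixed with `data: ` and decoding the JSON payload after the prefix. The parsing step can be exercised on its own (`parse_sse_lines` is an illustrative helper, not part of the provider):

```python
import json

def parse_sse_lines(lines):
    """Yield the 'a' field from byte lines shaped like b'data: {"a": "..."}'."""
    for line in lines:
        if not line:
            continue  # skip keep-alive blank lines
        data_str = line.decode("utf-8")
        if data_str.startswith("data: "):
            yield json.loads(data_str[6:])["a"]
```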


@@ -1,31 +1,68 @@
from application.llm.base import BaseLLM
class HuggingFaceLLM(BaseLLM):
def __init__(self, api_key, llm_name='Arc53/DocsGPT-7B'):
def __init__(
self,
api_key=None,
user_api_key=None,
llm_name="Arc53/DocsGPT-7B",
q=False,
*args,
**kwargs,
):
global hf
from langchain.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained(llm_name)
model = AutoModelForCausalLM.from_pretrained(llm_name)
if q:
import torch
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
pipeline,
BitsAndBytesConfig,
)
tokenizer = AutoTokenizer.from_pretrained(llm_name)
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
llm_name, quantization_config=bnb_config
)
else:
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained(llm_name)
model = AutoModelForCausalLM.from_pretrained(llm_name)
super().__init__(*args, **kwargs)
self.api_key = api_key
self.user_api_key = user_api_key
pipe = pipeline(
"text-generation", model=model,
tokenizer=tokenizer, max_new_tokens=2000,
device_map="auto", eos_token_id=tokenizer.eos_token_id
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=2000,
device_map="auto",
eos_token_id=tokenizer.eos_token_id,
)
hf = HuggingFacePipeline(pipeline=pipe)
def gen(self, model, engine, messages, stream=False, **kwargs):
context = messages[0]['content']
user_question = messages[-1]['content']
def _raw_gen(self, baseself, model, messages, stream=False, **kwargs):
context = messages[0]["content"]
user_question = messages[-1]["content"]
prompt = f"### Instruction \n {user_question} \n ### Context \n {context} \n ### Answer \n"
result = hf(prompt)
return result.content
def gen_stream(self, model, engine, messages, stream=True, **kwargs):
def _raw_gen_stream(self, baseself, model, messages, stream=True, **kwargs):
raise NotImplementedError("HuggingFaceLLM Streaming is not implemented yet.")


@@ -1,39 +1,55 @@
from application.llm.base import BaseLLM
from application.core.settings import settings
import threading
class LlamaSingleton:
_instances = {}
_lock = threading.Lock() # Add a lock for thread synchronization
@classmethod
def get_instance(cls, llm_name):
if llm_name not in cls._instances:
try:
from llama_cpp import Llama
except ImportError:
raise ImportError(
"Please install llama_cpp using pip install llama-cpp-python"
)
cls._instances[llm_name] = Llama(model_path=llm_name, n_ctx=2048)
return cls._instances[llm_name]
@classmethod
def query_model(cls, llm, prompt, **kwargs):
with cls._lock:
return llm(prompt, **kwargs)
class LlamaCpp(BaseLLM):
def __init__(
self,
api_key=None,
user_api_key=None,
llm_name=settings.MODEL_PATH,
*args,
**kwargs,
):
super().__init__(*args, **kwargs)
self.api_key = api_key
self.user_api_key = user_api_key
self.llama = LlamaSingleton.get_instance(llm_name)
def __init__(self, api_key, llm_name=settings.MODEL_PATH, **kwargs):
global llama
try:
from llama_cpp import Llama
except ImportError:
raise ImportError("Please install llama_cpp using pip install llama-cpp-python")
llama = Llama(model_path=llm_name, n_ctx=2048)
def gen(self, model, engine, messages, stream=False, **kwargs):
context = messages[0]['content']
user_question = messages[-1]['content']
def _raw_gen(self, baseself, model, messages, stream=False, **kwargs):
context = messages[0]["content"]
user_question = messages[-1]["content"]
prompt = f"### Instruction \n {user_question} \n ### Context \n {context} \n ### Answer \n"
result = LlamaSingleton.query_model(self.llama, prompt, max_tokens=150, echo=False)
return result["choices"][0]["text"].split("### Answer \n")[-1]
result = llama(prompt, max_tokens=150, echo=False)
# import sys
# print(result['choices'][0]['text'].split('### Answer \n')[-1], file=sys.stderr)
return result['choices'][0]['text'].split('### Answer \n')[-1]
def gen_stream(self, model, engine, messages, stream=True, **kwargs):
context = messages[0]['content']
user_question = messages[-1]['content']
def _raw_gen_stream(self, baseself, model, messages, stream=True, **kwargs):
context = messages[0]["content"]
user_question = messages[-1]["content"]
prompt = f"### Instruction \n {user_question} \n ### Context \n {context} \n ### Answer \n"
result = llama(prompt, max_tokens=150, echo=False, stream=stream)
# import sys
# print(list(result), file=sys.stderr)
result = LlamaSingleton.query_model(self.llama, prompt, max_tokens=150, echo=False, stream=stream)
for item in result:
for choice in item['choices']:
yield choice['text']
for choice in item["choices"]:
yield choice["text"]
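`LlamaSingleton` caches one `Llama` instance per model path and serializes all inference through a class-level lock, since llama.cpp contexts are not safe for concurrent use. A generic sketch of the per-key cache (here the lock also guards instance creation, which the class above leaves unguarded — a reasonable hardening if loaders can race):

```python
import threading

class ModelSingleton:
    """Per-key instance cache guarded by a lock, mirroring the LlamaSingleton pattern."""
    _instances = {}
    _lock = threading.Lock()

    @classmethod
    def get_instance(cls, key, factory):
        # Lock around creation so two threads cannot both build the same model.
        with cls._lock:
            if key not in cls._instances:
                cls._instances[key] = factory(key)
        return cls._instances[key]
```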


@@ -2,21 +2,26 @@ from application.llm.openai import OpenAILLM, AzureOpenAILLM
from application.llm.sagemaker import SagemakerAPILLM
from application.llm.huggingface import HuggingFaceLLM
from application.llm.llama_cpp import LlamaCpp
from application.llm.anthropic import AnthropicLLM
from application.llm.docsgpt_provider import DocsGPTAPILLM
from application.llm.premai import PremAILLM
class LLMCreator:
llms = {
'openai': OpenAILLM,
'azure_openai': AzureOpenAILLM,
'sagemaker': SagemakerAPILLM,
'huggingface': HuggingFaceLLM,
'llama.cpp': LlamaCpp
"openai": OpenAILLM,
"azure_openai": AzureOpenAILLM,
"sagemaker": SagemakerAPILLM,
"huggingface": HuggingFaceLLM,
"llama.cpp": LlamaCpp,
"anthropic": AnthropicLLM,
"docsgpt": DocsGPTAPILLM,
"premai": PremAILLM,
}
@classmethod
def create_llm(cls, type, *args, **kwargs):
def create_llm(cls, type, api_key, user_api_key, *args, **kwargs):
llm_class = cls.llms.get(type.lower())
if not llm_class:
raise ValueError(f"No LLM class found for type {type}")
return llm_class(*args, **kwargs)
return llm_class(api_key, user_api_key, *args, **kwargs)
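`LLMCreator` is a plain registry-based factory: look up the class by lowercase key, fail loudly on unknown types, and forward both keys positionally. A minimal standalone sketch of the same pattern (`FakeLLM` and `Creator` are illustrative names):

```python
class FakeLLM:
    def __init__(self, api_key, user_api_key):
        self.api_key = api_key
        self.user_api_key = user_api_key

class Creator:
    registry = {"fake": FakeLLM}

    @classmethod
    def create(cls, type, api_key, user_api_key):
        klass = cls.registry.get(type.lower())
        if not klass:
            raise ValueError(f"No LLM class found for type {type}")
        return klass(api_key, user_api_key)
```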


@@ -1,57 +1,80 @@
from application.llm.base import BaseLLM
from application.core.settings import settings
class OpenAILLM(BaseLLM):
def __init__(self, api_key):
def __init__(self, api_key=None, user_api_key=None, *args, **kwargs):
global openai
import openai
openai.api_key = api_key
self.api_key = api_key # Save the API key to be used later
from openai import OpenAI
super().__init__(*args, **kwargs)
self.client = OpenAI(
api_key=api_key,
)
self.api_key = api_key
self.user_api_key = user_api_key
def _get_openai(self):
# Import openai when needed
import openai
# Set the API key every time you import openai
openai.api_key = self.api_key
return openai
def gen(self, model, engine, messages, stream=False, **kwargs):
response = openai.ChatCompletion.create(
model=model,
engine=engine,
messages=messages,
stream=stream,
**kwargs
def _raw_gen(
self,
baseself,
model,
messages,
stream=False,
engine=settings.AZURE_DEPLOYMENT_NAME,
**kwargs
):
response = self.client.chat.completions.create(
model=model, messages=messages, stream=stream, **kwargs
)
return response["choices"][0]["message"]["content"]
return response.choices[0].message.content
def gen_stream(self, model, engine, messages, stream=True, **kwargs):
response = openai.ChatCompletion.create(
model=model,
engine=engine,
messages=messages,
stream=stream,
**kwargs
def _raw_gen_stream(
self,
baseself,
model,
messages,
stream=True,
engine=settings.AZURE_DEPLOYMENT_NAME,
**kwargs
):
response = self.client.chat.completions.create(
model=model, messages=messages, stream=stream, **kwargs
)
for line in response:
if "content" in line["choices"][0]["delta"]:
yield line["choices"][0]["delta"]["content"]
# import sys
# print(line.choices[0].delta.content, file=sys.stderr)
if line.choices[0].delta.content is not None:
yield line.choices[0].delta.content
class AzureOpenAILLM(OpenAILLM):
def __init__(self, openai_api_key, openai_api_base, openai_api_version, deployment_name):
def __init__(
self, openai_api_key, openai_api_base, openai_api_version, deployment_name
):
super().__init__(openai_api_key)
self.api_base = settings.OPENAI_API_BASE,
self.api_version = settings.OPENAI_API_VERSION,
self.deployment_name = settings.AZURE_DEPLOYMENT_NAME,
self.api_base = (settings.OPENAI_API_BASE,)
self.api_version = (settings.OPENAI_API_VERSION,)
self.deployment_name = (settings.AZURE_DEPLOYMENT_NAME,)
from openai import AzureOpenAI
self.client = AzureOpenAI(
api_key=openai_api_key,
api_version=settings.OPENAI_API_VERSION,
api_base=settings.OPENAI_API_BASE,
deployment_name=settings.AZURE_DEPLOYMENT_NAME,
)
def _get_openai(self):
openai = super()._get_openai()
openai.api_base = self.api_base
openai.api_version = self.api_version
openai.api_type = "azure"
return openai

application/llm/premai.py Normal file

@@ -0,0 +1,38 @@
from application.llm.base import BaseLLM
from application.core.settings import settings
class PremAILLM(BaseLLM):
def __init__(self, api_key=None, user_api_key=None, *args, **kwargs):
from premai import Prem
super().__init__(*args, **kwargs)
self.client = Prem(api_key=api_key)
self.api_key = api_key
self.user_api_key = user_api_key
self.project_id = settings.PREMAI_PROJECT_ID
def _raw_gen(self, baseself, model, messages, stream=False, **kwargs):
response = self.client.chat.completions.create(
model=model,
project_id=self.project_id,
messages=messages,
stream=stream,
**kwargs
)
return response.choices[0].message["content"]
def _raw_gen_stream(self, baseself, model, messages, stream=True, **kwargs):
response = self.client.chat.completions.create(
model=model,
project_id=self.project_id,
messages=messages,
stream=stream,
**kwargs
)
for line in response:
if line.choices[0].delta["content"] is not None:
yield line.choices[0].delta["content"]


@@ -1,27 +1,140 @@
from application.llm.base import BaseLLM
from application.core.settings import settings
import requests
import json
import io
class LineIterator:
"""
A helper class for parsing the byte stream input.
The output of the model will be in the following format:
```
b'{"outputs": [" a"]}\n'
b'{"outputs": [" challenging"]}\n'
b'{"outputs": [" problem"]}\n'
...
```
While usually each PayloadPart event from the event stream will contain a byte array
with a full json, this is not guaranteed and some of the json objects may be split across
PayloadPart events. For example:
```
{'PayloadPart': {'Bytes': b'{"outputs": '}}
{'PayloadPart': {'Bytes': b'[" problem"]}\n'}}
```
This class accounts for this by concatenating bytes written via the 'write' function
and then exposing a method which will return lines (ending with a '\n' character) within
the buffer via the 'scan_lines' function. It maintains the position of the last read
position to ensure that previous bytes are not exposed again.
"""
def __init__(self, stream):
self.byte_iterator = iter(stream)
self.buffer = io.BytesIO()
self.read_pos = 0
def __iter__(self):
return self
def __next__(self):
while True:
self.buffer.seek(self.read_pos)
line = self.buffer.readline()
if line and line[-1] == ord("\n"):
self.read_pos += len(line)
return line[:-1]
try:
chunk = next(self.byte_iterator)
except StopIteration:
if self.read_pos < self.buffer.getbuffer().nbytes:
continue
raise
if "PayloadPart" not in chunk:
print("Unknown event type: " + str(chunk))
continue
self.buffer.seek(0, io.SEEK_END)
self.buffer.write(chunk["PayloadPart"]["Bytes"])
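The behaviour the docstring describes — JSON records split across `PayloadPart` events and reassembled at newline boundaries — can be exercised without SageMaker. A condensed, self-contained version of the same buffering logic:

```python
import io
import json

class LineBuffer:
    """Reassemble newline-terminated records from PayloadPart chunks that may
    split a JSON object across events (same idea as LineIterator above)."""

    def __init__(self, chunks):
        self.chunks = iter(chunks)
        self.buffer = io.BytesIO()
        self.read_pos = 0

    def __iter__(self):
        return self

    def __next__(self):
        while True:
            self.buffer.seek(self.read_pos)
            line = self.buffer.readline()
            if line and line[-1] == ord("\n"):
                self.read_pos += len(line)
                return line[:-1]
            # Not a full line yet: pull the next chunk and append its bytes.
            chunk = next(self.chunks)  # StopIteration ends the stream
            self.buffer.seek(0, io.SEEK_END)
            self.buffer.write(chunk["PayloadPart"]["Bytes"])
```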
class SagemakerAPILLM(BaseLLM):
def __init__(self, *args, **kwargs):
self.url = settings.SAGEMAKER_API_URL
def __init__(self, api_key=None, user_api_key=None, *args, **kwargs):
import boto3
def gen(self, model, engine, messages, stream=False, **kwargs):
context = messages[0]['content']
user_question = messages[-1]['content']
prompt = f"### Instruction \n {user_question} \n ### Context \n {context} \n ### Answer \n"
response = requests.post(
url=self.url,
headers={
"Content-Type": "application/json; charset=utf-8",
},
data=json.dumps({"input": prompt})
runtime = boto3.client(
"runtime.sagemaker",
aws_access_key_id="xxx",
aws_secret_access_key="xxx",
region_name="us-west-2",
)
return response.json()['answer']
super().__init__(*args, **kwargs)
self.api_key = api_key
self.user_api_key = user_api_key
self.endpoint = settings.SAGEMAKER_ENDPOINT
self.runtime = runtime
def gen_stream(self, model, engine, messages, stream=True, **kwargs):
raise NotImplementedError("Sagemaker does not support streaming")
def _raw_gen(self, baseself, model, messages, stream=False, **kwargs):
context = messages[0]["content"]
user_question = messages[-1]["content"]
prompt = f"### Instruction \n {user_question} \n ### Context \n {context} \n ### Answer \n"
# Construct payload for endpoint
payload = {
"inputs": prompt,
"stream": False,
"parameters": {
"do_sample": True,
"temperature": 0.1,
"max_new_tokens": 30,
"repetition_penalty": 1.03,
"stop": ["</s>", "###"],
},
}
body_bytes = json.dumps(payload).encode("utf-8")
# Invoke the endpoint
response = self.runtime.invoke_endpoint(
EndpointName=self.endpoint, ContentType="application/json", Body=body_bytes
)
result = json.loads(response["Body"].read().decode())
import sys
print(result[0]["generated_text"], file=sys.stderr)
return result[0]["generated_text"][len(prompt) :]
def _raw_gen_stream(self, baseself, model, messages, stream=True, **kwargs):
context = messages[0]["content"]
user_question = messages[-1]["content"]
prompt = f"### Instruction \n {user_question} \n ### Context \n {context} \n ### Answer \n"
# Construct payload for endpoint
payload = {
"inputs": prompt,
"stream": True,
"parameters": {
"do_sample": True,
"temperature": 0.1,
"max_new_tokens": 512,
"repetition_penalty": 1.03,
"stop": ["</s>", "###"],
},
}
body_bytes = json.dumps(payload).encode("utf-8")
# Invoke the endpoint
response = self.runtime.invoke_endpoint_with_response_stream(
EndpointName=self.endpoint, ContentType="application/json", Body=body_bytes
)
# result = json.loads(response['Body'].read().decode())
event_stream = response["Body"]
start_json = b"{"
for line in LineIterator(event_stream):
if line != b"" and start_json in line:
# print(line)
data = json.loads(line[line.find(start_json) :].decode("utf-8"))
if data["token"]["text"] not in ["</s>", "###"]:
print(data["token"]["text"], end="")
yield data["token"]["text"]

File diff suppressed because it is too large


@@ -1,5 +0,0 @@
{
"devDependencies": {
"tailwindcss": "^3.2.4"
}
}


@@ -62,7 +62,6 @@ class SimpleDirectoryReader(BaseReader):
file_extractor: Optional[Dict[str, BaseParser]] = None,
num_files_limit: Optional[int] = None,
file_metadata: Optional[Callable[[str], Dict]] = None,
chunk_size_max: int = 2048,
) -> None:
"""Initialize with parameters."""
super().__init__()
@@ -148,12 +147,24 @@ class SimpleDirectoryReader(BaseReader):
# do standard read
with open(input_file, "r", errors=self.errors) as f:
data = f.read()
if isinstance(data, List):
data_list.extend(data)
else:
data_list.append(str(data))
# Prepare metadata for this file
if self.file_metadata is not None:
metadata_list.append(self.file_metadata(str(input_file)))
file_metadata = self.file_metadata(str(input_file))
else:
# Provide a default empty metadata
file_metadata = {'title': '', 'store': ''}
# TODO: Find a case with no metadata and check if breaks anything
if isinstance(data, List):
# Extend data_list with each item in the data list
data_list.extend([str(d) for d in data])
# For each item in the data list, add the file's metadata to metadata_list
metadata_list.extend([file_metadata for _ in data])
else:
# Add the single piece of data to data_list
data_list.append(str(data))
# Add the file's metadata to metadata_list
metadata_list.append(file_metadata)
if concatenate:
return [Document("\n".join(data_list))]
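The point of this hunk is that `metadata_list` now stays in lockstep with `data_list`: a list payload fans the same file metadata out once per item, while a scalar payload appends it once. That pairing rule in isolation (names are illustrative, not the project's API):

```python
def pair_with_metadata(data, file_metadata):
    """List payloads fan out their metadata; scalars append once."""
    data_list, metadata_list = [], []
    if isinstance(data, list):
        data_list.extend(str(d) for d in data)
        metadata_list.extend(file_metadata for _ in data)
    else:
        data_list.append(str(data))
        metadata_list.append(file_metadata)
    return data_list, metadata_list
```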


@@ -57,7 +57,7 @@ class HTMLParser(BaseParser):
title_indexes = [i for i, isd_el in enumerate(isd) if isd_el['type'] == 'Title']
# Creating 'Chunks' - List of lists of strings
# each list starting with with isd_el['type'] = 'Title' and all the data till the next 'Title'
# each list starting with isd_el['type'] = 'Title' and all the data till the next 'Title'
# Each Chunk can be thought of as an individual set of data, which can be sent to the model
# Where Each Title is grouped together with the data under it


@@ -0,0 +1,51 @@
from urllib.parse import urlparse
from openapi_parser import parse
try:
from application.parser.file.base_parser import BaseParser
except ModuleNotFoundError:
from base_parser import BaseParser
class OpenAPI3Parser(BaseParser):
def init_parser(self) -> None:
return super().init_parser()
def get_base_urls(self, urls):
base_urls = []
for i in urls:
parsed_url = urlparse(i)
base_url = parsed_url.scheme + "://" + parsed_url.netloc
if base_url not in base_urls:
base_urls.append(base_url)
return base_urls
def get_info_from_paths(self, path):
info = ""
if path.operations:
for operation in path.operations:
info += (
f"\n{operation.method.value}="
f"{operation.responses[0].description}"
)
return info
def parse_file(self, file_path):
data = parse(file_path)
results = ""
base_urls = self.get_base_urls(link.url for link in data.servers)
base_urls = ",".join([base_url for base_url in base_urls])
results += f"Base URL:{base_urls}\n"
i = 1
for path in data.paths:
info = self.get_info_from_paths(path)
results += (
f"Path{i}: {path.url}\n"
f"description: {path.description}\n"
f"parameters: {path.parameters}\nmethods: {info}\n"
)
i += 1
with open("results.txt", "w") as f:
f.write(results)
return results
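`get_base_urls` reduces the server URLs to unique `scheme://netloc` prefixes, preserving first-seen order. That step is easy to verify standalone with `urlparse`:

```python
from urllib.parse import urlparse

def get_base_urls(urls):
    """Collect unique scheme://netloc prefixes, preserving first-seen order."""
    base_urls = []
    for u in urls:
        parsed = urlparse(u)
        base = parsed.scheme + "://" + parsed.netloc
        if base not in base_urls:
            base_urls.append(base)
    return base_urls
```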

application/parser/open_ai_func.py Normal file → Executable file

@@ -1,22 +1,13 @@
import os
import tiktoken
from application.vectorstore.vector_creator import VectorCreator
from application.core.settings import settings
from retry import retry
# from langchain.embeddings import HuggingFaceEmbeddings
# from langchain.embeddings import HuggingFaceInstructEmbeddings
# from langchain.embeddings import CohereEmbeddings
def num_tokens_from_string(string: str, encoding_name: str) -> int:
# Function to convert string to tokens and estimate user cost.
encoding = tiktoken.get_encoding(encoding_name)
num_tokens = len(encoding.encode(string))
total_price = ((num_tokens / 1000) * 0.0004)
return num_tokens, total_price
# from langchain_community.embeddings import HuggingFaceEmbeddings
# from langchain_community.embeddings import HuggingFaceInstructEmbeddings
# from langchain_community.embeddings import CohereEmbeddings
@retry(tries=10, delay=60)
@@ -26,13 +17,13 @@ def store_add_texts_with_retry(store, i):
def call_openai_api(docs, folder_name, task_status):
# Function to create a vector store from the documents and save it to disk.
# Function to create a vector store from the documents and save it to disk
# create output folder if it doesn't exist
if not os.path.exists(f"{folder_name}"):
os.makedirs(f"{folder_name}")
from tqdm import tqdm
c1 = 0
if settings.VECTOR_STORE == "faiss":
docs_init = [docs[0]]
@@ -40,25 +31,32 @@ def call_openai_api(docs, folder_name, task_status):
store = VectorCreator.create_vectorstore(
settings.VECTOR_STORE,
docs_init = docs_init,
docs_init=docs_init,
path=f"{folder_name}",
embeddings_key=os.getenv("EMBEDDINGS_KEY")
embeddings_key=os.getenv("EMBEDDINGS_KEY"),
)
else:
store = VectorCreator.create_vectorstore(
settings.VECTOR_STORE,
path=f"{folder_name}",
embeddings_key=os.getenv("EMBEDDINGS_KEY")
embeddings_key=os.getenv("EMBEDDINGS_KEY"),
)
# Uncomment for MPNet embeddings
# model_name = "sentence-transformers/all-mpnet-base-v2"
# hf = HuggingFaceEmbeddings(model_name=model_name)
# store = FAISS.from_documents(docs_test, hf)
s1 = len(docs)
for i in tqdm(docs, desc="Embedding 🦖", unit="docs", total=len(docs),
bar_format='{l_bar}{bar}| Time Left: {remaining}'):
for i in tqdm(
docs,
desc="Embedding 🦖",
unit="docs",
total=len(docs),
bar_format="{l_bar}{bar}| Time Left: {remaining}",
):
try:
task_status.update_state(state='PROGRESS', meta={'current': int((c1 / s1) * 100)})
task_status.update_state(
state="PROGRESS", meta={"current": int((c1 / s1) * 100)}
)
store_add_texts_with_retry(store, i)
except Exception as e:
print(e)
@@ -72,23 +70,3 @@ def call_openai_api(docs, folder_name, task_status):
store.save_local(f"{folder_name}")
def get_user_permission(docs, folder_name):
# Function to ask user permission to call the OpenAI api and spend their OpenAI funds.
# Here we convert the docs list to a string and calculate the number of OpenAI tokens the string represents.
# docs_content = (" ".join(docs))
docs_content = ""
for doc in docs:
docs_content += doc.page_content
tokens, total_price = num_tokens_from_string(string=docs_content, encoding_name="cl100k_base")
# Here we print the number of tokens and the approx user cost with some visually appealing formatting.
print(f"Number of Tokens = {format(tokens, ',d')}")
print(f"Approx Cost = ${format(total_price, ',.2f')}")
# Here we check for user permission before calling the API.
user_input = input("Price Okay? (Y/N) \n").lower()
if user_input == "y":
call_openai_api(docs, folder_name)
elif user_input == "":
call_openai_api(docs, folder_name)
else:
print("The API was not called. No money was spent.")


@@ -0,0 +1,19 @@
"""Base reader class."""
from abc import abstractmethod
from typing import Any, List
from langchain.docstore.document import Document as LCDocument
from application.parser.schema.base import Document
class BaseRemote:
"""Utilities for loading data from a directory."""
@abstractmethod
def load_data(self, *args: Any, **load_kwargs: Any) -> List[Document]:
"""Load data from the input directory."""
def load_langchain_documents(self, **load_kwargs: Any) -> List[LCDocument]:
"""Load data in LangChain document format."""
docs = self.load_data(**load_kwargs)
return [d.to_langchain_format() for d in docs]


@@ -0,0 +1,59 @@
import requests
from urllib.parse import urlparse, urljoin
from bs4 import BeautifulSoup
from application.parser.remote.base import BaseRemote
class CrawlerLoader(BaseRemote):
def __init__(self, limit=10):
from langchain.document_loaders import WebBaseLoader
self.loader = WebBaseLoader # Initialize the document loader
self.limit = limit # Set the limit for the number of pages to scrape
def load_data(self, inputs):
url = inputs
# Check if the input is a list and if it is, use the first element
if isinstance(url, list) and url:
url = url[0]
# Check if the URL scheme is provided, if not, assume http
if not urlparse(url).scheme:
url = "http://" + url
visited_urls = set() # Keep track of URLs that have been visited
base_url = urlparse(url).scheme + "://" + urlparse(url).hostname # Extract the base URL
urls_to_visit = [url] # List of URLs to be visited, starting with the initial URL
loaded_content = [] # Store the loaded content from each URL
# Continue crawling until there are no more URLs to visit
while urls_to_visit:
current_url = urls_to_visit.pop(0) # Get the next URL to visit
visited_urls.add(current_url) # Mark the URL as visited
# Try to load and process the content from the current URL
try:
response = requests.get(current_url) # Fetch the content of the current URL
response.raise_for_status() # Raise an exception for HTTP errors
loader = self.loader([current_url]) # Initialize the document loader for the current URL
loaded_content.extend(loader.load()) # Load the content and add it to the loaded_content list
except Exception as e:
# Print an error message if loading or processing fails and continue with the next URL
print(f"Error processing URL {current_url}: {e}")
continue
# Parse the HTML content to extract all links
soup = BeautifulSoup(response.text, 'html.parser')
all_links = [
urljoin(current_url, a['href'])
for a in soup.find_all('a', href=True)
if base_url in urljoin(current_url, a['href']) # Ensure links are from the same domain
]
# Add new links to the list of URLs to visit if they haven't been visited yet
urls_to_visit.extend([link for link in all_links if link not in visited_urls])
urls_to_visit = list(set(urls_to_visit)) # Remove duplicate URLs
# Stop crawling if the limit of pages to scrape is reached
if self.limit is not None and len(visited_urls) >= self.limit:
break
return loaded_content # Return the loaded content from all visited URLs
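The crawler's link-harvesting step resolves each `href` against the current page and keeps only links that stay on the crawl's base domain. That filter, extracted with stdlib-only parsing (the substring check mirrors the loader's `base_url in ...` test; a stricter netloc comparison would also be defensible):

```python
from urllib.parse import urljoin

def same_domain_links(current_url, hrefs, base_url):
    """Resolve relative hrefs and keep only deduplicated links on base_url."""
    links = []
    for href in hrefs:
        absolute = urljoin(current_url, href)
        if base_url in absolute and absolute not in links:
            links.append(absolute)
    return links
```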


@@ -0,0 +1,26 @@
from application.parser.remote.base import BaseRemote
from langchain_community.document_loaders import RedditPostsLoader
class RedditPostsLoaderRemote(BaseRemote):
def load_data(self, inputs):
data = eval(inputs)
client_id = data.get("client_id")
client_secret = data.get("client_secret")
user_agent = data.get("user_agent")
categories = data.get("categories", ["new", "hot"])
mode = data.get("mode", "subreddit")
search_queries = data.get("search_queries")
number_posts = data.get("number_posts", 10)
self.loader = RedditPostsLoader(
client_id=client_id,
client_secret=client_secret,
user_agent=user_agent,
categories=categories,
mode=mode,
search_queries=search_queries,
number_posts=number_posts,
)
documents = self.loader.load()
print(f"Loaded {len(documents)} documents from Reddit")
return documents


@@ -0,0 +1,20 @@
from application.parser.remote.sitemap_loader import SitemapLoader
from application.parser.remote.crawler_loader import CrawlerLoader
from application.parser.remote.web_loader import WebLoader
from application.parser.remote.reddit_loader import RedditPostsLoaderRemote
class RemoteCreator:
loaders = {
"url": WebLoader,
"sitemap": SitemapLoader,
"crawler": CrawlerLoader,
"reddit": RedditPostsLoaderRemote,
}
@classmethod
def create_loader(cls, type, *args, **kwargs):
loader_class = cls.loaders.get(type.lower())
if not loader_class:
raise ValueError(f"No remote loader class found for type {type}")
return loader_class(*args, **kwargs)


@@ -0,0 +1,81 @@
import requests
import re # Import regular expression library
import xml.etree.ElementTree as ET
from application.parser.remote.base import BaseRemote
class SitemapLoader(BaseRemote):
def __init__(self, limit=20):
from langchain.document_loaders import WebBaseLoader
self.loader = WebBaseLoader
self.limit = limit # Adding limit to control the number of URLs to process
def load_data(self, inputs):
sitemap_url = inputs
# Check if the input is a list and if it is, use the first element
if isinstance(sitemap_url, list) and sitemap_url:
sitemap_url = sitemap_url[0]
urls = self._extract_urls(sitemap_url)
if not urls:
print(f"No URLs found in the sitemap: {sitemap_url}")
return []
# Load content of extracted URLs
documents = []
processed_urls = 0 # Counter for processed URLs
for url in urls:
if self.limit is not None and processed_urls >= self.limit:
break # Stop processing if the limit is reached
try:
loader = self.loader([url])
documents.extend(loader.load())
processed_urls += 1 # Increment the counter after processing each URL
except Exception as e:
print(f"Error processing URL {url}: {e}")
continue
return documents
def _extract_urls(self, sitemap_url):
try:
response = requests.get(sitemap_url)
response.raise_for_status() # Raise an exception for HTTP errors
except (requests.exceptions.HTTPError, requests.exceptions.ConnectionError) as e:
print(f"Failed to fetch sitemap: {sitemap_url}. Error: {e}")
return []
# Determine if this is a sitemap or a URL
if self._is_sitemap(response):
# It's a sitemap, so parse it and extract URLs
return self._parse_sitemap(response.content)
else:
# It's not a sitemap, return the URL itself
return [sitemap_url]
def _is_sitemap(self, response):
content_type = response.headers.get('Content-Type', '')
if 'xml' in content_type or response.url.endswith('.xml'):
return True
if '<sitemapindex' in response.text or '<urlset' in response.text:
return True
return False
def _parse_sitemap(self, sitemap_content):
# Remove namespaces
sitemap_content = re.sub(' xmlns="[^"]+"', '', sitemap_content.decode('utf-8'), count=1)
root = ET.fromstring(sitemap_content)
urls = []
for loc in root.findall('.//url/loc'):
urls.append(loc.text)
# Check for nested sitemaps
for sitemap in root.findall('.//sitemap/loc'):
nested_sitemap_url = sitemap.text
urls.extend(self._extract_urls(nested_sitemap_url))
return urls
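`_parse_sitemap` sidesteps XML namespace handling by stripping the default `xmlns` declaration before parsing, so plain `.//url/loc` XPath queries work. The core of that approach, runnable on its own:

```python
import re
import xml.etree.ElementTree as ET

def parse_sitemap(xml_bytes):
    """Strip the default namespace, then collect <url><loc> entries."""
    text = re.sub(' xmlns="[^"]+"', "", xml_bytes.decode("utf-8"), count=1)
    root = ET.fromstring(text)
    return [loc.text for loc in root.findall(".//url/loc")]
```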


@@ -0,0 +1,11 @@
from langchain.document_loaders import TelegramChatApiLoader
from application.parser.remote.base import BaseRemote
class TelegramChatApiRemote(BaseRemote):
def _init_parser(self, *args, **load_kwargs):
self.loader = TelegramChatApiLoader(**load_kwargs)
return {}
def parse_file(self, *args, **load_kwargs):
return


@@ -0,0 +1,32 @@
from application.parser.remote.base import BaseRemote
from langchain_community.document_loaders import WebBaseLoader
headers = {
"User-Agent": "Mozilla/5.0",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*"
";q=0.8",
"Accept-Language": "en-US,en;q=0.5",
"Referer": "https://www.google.com/",
"DNT": "1",
"Connection": "keep-alive",
"Upgrade-Insecure-Requests": "1",
}
class WebLoader(BaseRemote):
def __init__(self):
self.loader = WebBaseLoader
def load_data(self, inputs):
urls = inputs
if isinstance(urls, str):
urls = [urls]
documents = []
for url in urls:
try:
loader = self.loader([url], header_template=headers)
documents.extend(loader.load())
except Exception as e:
print(f"Error processing URL {url}: {e}")
continue
return documents


@@ -21,16 +21,18 @@ def group_documents(documents: List[Document], min_tokens: int, max_tokens: int)
for doc in documents:
doc_len = len(tiktoken.get_encoding("cl100k_base").encode(doc.text))
if current_group is None:
current_group = Document(text=doc.text, doc_id=doc.doc_id, embedding=doc.embedding,
extra_info=doc.extra_info)
elif len(tiktoken.get_encoding("cl100k_base").encode(
current_group.text)) + doc_len < max_tokens and doc_len < min_tokens:
current_group.text += " " + doc.text
# Check if current group is empty or if the document can be added based on token count and matching metadata
if (current_group is None or
(len(tiktoken.get_encoding("cl100k_base").encode(current_group.text)) + doc_len < max_tokens and
doc_len < min_tokens and
current_group.extra_info == doc.extra_info)):
if current_group is None:
current_group = doc # Use the document directly to retain its metadata
else:
current_group.text += " " + doc.text # Append text to the current group
else:
docs.append(current_group)
current_group = Document(text=doc.text, doc_id=doc.doc_id, embedding=doc.embedding,
extra_info=doc.extra_info)
current_group = doc # Start a new group with the current document
if current_group is not None:
docs.append(current_group)
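The merging rule in the hunk above (append to the current group while the combined size stays under `max_tokens`, the incoming document is under `min_tokens`, and the metadata matches) can be sketched with a whitespace word count standing in for the tiktoken `cl100k_base` encoder:

```python
from dataclasses import dataclass, field

def tok(text):
    # Hypothetical stand-in for tiktoken's cl100k_base token count.
    return len(text.split())

@dataclass
class Doc:
    text: str
    extra_info: dict = field(default_factory=dict)

def group_docs(documents, min_tokens, max_tokens):
    docs, current = [], None
    for doc in documents:
        n = tok(doc.text)
        if (current is None or
                (tok(current.text) + n < max_tokens and
                 n < min_tokens and
                 current.extra_info == doc.extra_info)):
            if current is None:
                current = doc  # use the document directly to retain its metadata
            else:
                current.text += " " + doc.text
        else:
            docs.append(current)
            current = doc  # start a new group with the current document
    if current is not None:
        docs.append(current)
    return docs
```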

View File

@@ -0,0 +1,9 @@
You are a helpful AI assistant, DocsGPT, specializing in document assistance, designed to offer detailed and informative responses.
If appropriate, your answers can include code examples, formatted as follows:
```(language)
(code)
```
You effectively utilize chat history, ensuring relevant and tailored responses.
If a question doesn't align with your context, you provide friendly and helpful replies.
----------------
{summaries}

View File

@@ -0,0 +1,13 @@
You are an AI Assistant, DocsGPT, adept at offering document assistance.
Your expertise lies in providing answers based on the provided context.
You can leverage the chat history if needed.
Answer the question based on the context below.
Keep the answer concise. Respond "Irrelevant context" if not sure about the answer.
If the question is not related to the context, respond "Irrelevant context".
When using code examples, use the following format:
```(language)
(code)
```
----------------
Context:
{summaries}

View File

@@ -1,25 +0,0 @@
You are a DocsGPT, friendly and helpful AI assistant by Arc53 that provides help with documents. You give thorough answers with code examples if possible.
QUESTION: How to merge tables in pandas?
=========
Content: pandas provides various facilities for easily combining together Series or DataFrame with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations.
Source: 28-pl
Content: pandas provides a single function, merge(), as the entry point for all standard database join operations between DataFrame or named Series objects: \n\npandas.merge(left, right, how='inner', on=None, left_on=None, right_on=None, left_index=False, right_index=False, sort=False, suffixes=('_x', '_y'), copy=True, indicator=False, validate=None)
Source: 30-pl
=========
FINAL ANSWER: To merge two tables in pandas, you can use the pd.merge() function. The basic syntax is: \n\npd.merge(left, right, on, how) \n\nwhere left and right are the two tables to merge, on is the column to merge on, and how is the type of merge to perform. \n\nFor example, to merge the two tables df1 and df2 on the column 'id', you can use: \n\npd.merge(df1, df2, on='id', how='inner')
SOURCES: 28-pl 30-pl
QUESTION: How are you?
=========
CONTENT:
SOURCE:
=========
FINAL ANSWER: I am fine, thank you. How are you?
SOURCES:
QUESTION: {{ question }}
=========
{{ summaries }}
=========
FINAL ANSWER:

View File

@@ -1,33 +0,0 @@
You are a DocsGPT, friendly and helpful AI assistant by Arc53 that provides help with documents. You give thorough answers with code examples if possible.
QUESTION: How to merge tables in pandas?
=========
Content: pandas provides various facilities for easily combining together Series or DataFrame with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations.
Source: 28-pl
Content: pandas provides a single function, merge(), as the entry point for all standard database join operations between DataFrame or named Series objects: \n\npandas.merge(left, right, how='inner', on=None, left_on=None, right_on=None, left_index=False, right_index=False, sort=False, suffixes=('_x', '_y'), copy=True, indicator=False, validate=None)
Source: 30-pl
=========
FINAL ANSWER: To merge two tables in pandas, you can use the pd.merge() function. The basic syntax is: \n\npd.merge(left, right, on, how) \n\nwhere left and right are the two tables to merge, on is the column to merge on, and how is the type of merge to perform. \n\nFor example, to merge the two tables df1 and df2 on the column 'id', you can use: \n\npd.merge(df1, df2, on='id', how='inner')
SOURCES: 28-pl 30-pl
QUESTION: How are you?
=========
CONTENT:
SOURCE:
=========
FINAL ANSWER: I am fine, thank you. How are you?
SOURCES:
QUESTION: {{ historyquestion }}
=========
CONTENT:
SOURCE:
=========
FINAL ANSWER: {{ historyanswer }}
SOURCES:
QUESTION: {{ question }}
=========
{{ summaries }}
=========
FINAL ANSWER:

View File

@@ -1,4 +0,0 @@
Use the following portion of a long document to see if any of the text is relevant to answer the question.
{{ context }}
Question: {{ question }}
Provide all relevant text to the question verbatim. Summarize if needed. If nothing relevant return "-".

View File

@@ -1,106 +1,34 @@
aiodns==3.0.0
aiohttp==3.8.5
aiohttp-retry==2.8.3
aiosignal==1.3.1
aleph-alpha-client==2.16.1
amqp==5.1.1
async-timeout==4.0.2
attrs==22.2.0
billiard==3.6.4.0
blobfile==2.0.1
boto3==1.28.20
celery==5.2.7
cffi==1.15.1
charset-normalizer==3.1.0
click==8.1.3
click-didyoumean==0.3.0
click-plugins==1.1.1
click-repl==0.2.0
cryptography==41.0.3
dataclasses-json==0.5.7
decorator==5.1.1
dill==0.3.6
dnspython==2.3.0
ecdsa==0.18.0
elasticsearch==8.9.0
entrypoints==0.4
faiss-cpu==1.7.3
filelock==3.9.0
Flask==2.2.5
Flask-Cors==3.0.10
frozenlist==1.3.3
geojson==2.5.0
gunicorn==20.1.0
greenlet==2.0.2
gpt4all==0.1.7
huggingface-hub==0.15.1
humbug==0.3.2
idna==3.4
itsdangerous==2.1.2
Jinja2==3.1.2
jmespath==1.0.1
joblib==1.2.0
kombu==5.2.4
langchain==0.0.263
loguru==0.6.0
lxml==4.9.2
MarkupSafe==2.1.2
marshmallow==3.19.0
marshmallow-enum==1.5.1
mpmath==1.3.0
multidict==6.0.4
multiprocess==0.70.14
mypy-extensions==1.0.0
networkx==3.0
npx
nltk==3.8.1
numcodecs==0.11.0
numpy==1.24.2
openai==0.27.8
packaging==23.0
pathos==0.3.0
Pillow==9.4.0
pox==0.3.2
ppft==1.7.6.6
prompt-toolkit==3.0.38
py==1.11.0
pyasn1==0.4.8
pycares==4.3.0
pycparser==2.21
pycryptodomex==3.17
pycryptodome==3.19.0
pydantic==1.10.5
PyJWT==2.6.0
pymongo==4.3.3
pyowm==3.3.0
anthropic==0.12.0
boto3==1.34.6
celery==5.3.6
dataclasses_json==0.6.3
docx2txt==0.8
duckduckgo-search==5.3.0
EbookLib==0.18
elasticsearch==8.12.0
escodegen==1.0.11
esprima==4.0.1
faiss-cpu==1.7.4
Flask==3.0.1
gunicorn==22.0.0
html2text==2020.1.16
javalang==0.13.0
langchain==0.1.4
langchain-openai==0.0.5
openapi3_parser==1.1.16
pandas==2.2.0
pydantic_settings==2.1.0
pymongo==4.6.3
PyPDF2==3.0.1
PySocks==1.7.1
pytest
python-dateutil==2.8.2
python-dotenv==1.0.0
python-jose==3.3.0
pytz==2022.7.1
PyYAML==6.0
redis==4.5.4
regex==2022.10.31
requests==2.31.0
python-dotenv==1.0.1
qdrant-client==1.9.0
redis==5.0.1
Requests==2.32.0
retry==0.9.2
rsa==4.9
scikit-learn==1.2.2
scipy==1.10.1
sentencepiece
six==1.16.0
SQLAlchemy==1.4.46
sympy==1.11.1
tenacity==8.2.2
threadpoolctl==3.1.0
sentence-transformers
tiktoken
tqdm==4.65.0
transformers==4.30.0
typer==0.7.0
typing-inspect==0.8.0
typing_extensions==4.5.0
urllib3==1.26.14
vine==5.0.0
wcwidth==0.2.6
yarl==1.8.2
torch
tqdm==4.66.3
transformers==4.36.2
unstructured==0.12.2
Werkzeug==3.0.3

View File

@@ -0,0 +1,14 @@
from abc import ABC, abstractmethod
class BaseRetriever(ABC):
def __init__(self):
pass
@abstractmethod
def gen(self, *args, **kwargs):
pass
@abstractmethod
def search(self, *args, **kwargs):
pass

View File

@@ -0,0 +1,103 @@
import json
from application.retriever.base import BaseRetriever
from application.core.settings import settings
from application.llm.llm_creator import LLMCreator
from application.utils import count_tokens
from langchain_community.tools import BraveSearch
class BraveRetSearch(BaseRetriever):
def __init__(
self,
question,
source,
chat_history,
prompt,
chunks=2,
token_limit=150,
gpt_model="docsgpt",
user_api_key=None,
):
self.question = question
self.source = source
self.chat_history = chat_history
self.prompt = prompt
self.chunks = chunks
self.gpt_model = gpt_model
self.token_limit = (
token_limit
if token_limit
< settings.MODEL_TOKEN_LIMITS.get(
self.gpt_model, settings.DEFAULT_MAX_HISTORY
)
else settings.MODEL_TOKEN_LIMITS.get(
self.gpt_model, settings.DEFAULT_MAX_HISTORY
)
)
self.user_api_key = user_api_key
def _get_data(self):
if self.chunks == 0:
docs = []
else:
search = BraveSearch.from_api_key(
api_key=settings.BRAVE_SEARCH_API_KEY,
search_kwargs={"count": int(self.chunks)},
)
results = search.run(self.question)
results = json.loads(results)
docs = []
for i in results:
try:
title = i["title"]
link = i["link"]
snippet = i["snippet"]
docs.append({"text": snippet, "title": title, "link": link})
except KeyError:
pass
if settings.LLM_NAME == "llama.cpp":
docs = [docs[0]]
return docs
def gen(self):
docs = self._get_data()
# join all page_content together with a newline
docs_together = "\n".join([doc["text"] for doc in docs])
p_chat_combine = self.prompt.replace("{summaries}", docs_together)
messages_combine = [{"role": "system", "content": p_chat_combine}]
for doc in docs:
yield {"source": doc}
if len(self.chat_history) > 1:
tokens_current_history = 0
# count tokens in history
self.chat_history.reverse()
for i in self.chat_history:
if "prompt" in i and "response" in i:
tokens_batch = count_tokens(i["prompt"]) + count_tokens(
i["response"]
)
if tokens_current_history + tokens_batch < self.token_limit:
tokens_current_history += tokens_batch
messages_combine.append(
{"role": "user", "content": i["prompt"]}
)
messages_combine.append(
{"role": "system", "content": i["response"]}
)
messages_combine.append({"role": "user", "content": self.question})
llm = LLMCreator.create_llm(
settings.LLM_NAME, api_key=settings.API_KEY, user_api_key=self.user_api_key
)
completion = llm.gen_stream(model=self.gpt_model, messages=messages_combine)
for line in completion:
yield {"answer": str(line)}
def search(self):
return self._get_data()
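The chat-history handling in `gen` above walks the history newest-first and stops adding turns once the token budget is exhausted. A minimal sketch, with a whitespace word count standing in for `count_tokens` and `reversed()` replacing the in-place `reverse()`:

```python
def trim_history(chat_history, token_limit, count_tokens=lambda s: len(s.split())):
    # Walk history newest-first, keeping turns while they fit the budget.
    messages = []
    budget = 0
    for turn in reversed(chat_history):
        if "prompt" in turn and "response" in turn:
            cost = count_tokens(turn["prompt"]) + count_tokens(turn["response"])
            if budget + cost < token_limit:
                budget += cost
                messages.append({"role": "user", "content": turn["prompt"]})
                messages.append({"role": "system", "content": turn["response"]})
    return messages
```

Older turns that overflow the limit are simply dropped; the most recent exchanges are kept.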

View File

@@ -0,0 +1,123 @@
import os
from application.retriever.base import BaseRetriever
from application.core.settings import settings
from application.vectorstore.vector_creator import VectorCreator
from application.llm.llm_creator import LLMCreator
from application.utils import count_tokens
class ClassicRAG(BaseRetriever):
def __init__(
self,
question,
source,
chat_history,
prompt,
chunks=2,
token_limit=150,
gpt_model="docsgpt",
user_api_key=None,
):
self.question = question
self.vectorstore = self._get_vectorstore(source=source)
self.chat_history = chat_history
self.prompt = prompt
self.chunks = chunks
self.gpt_model = gpt_model
self.token_limit = (
token_limit
if token_limit
< settings.MODEL_TOKEN_LIMITS.get(
self.gpt_model, settings.DEFAULT_MAX_HISTORY
)
else settings.MODEL_TOKEN_LIMITS.get(
self.gpt_model, settings.DEFAULT_MAX_HISTORY
)
)
self.user_api_key = user_api_key
def _get_vectorstore(self, source):
if "active_docs" in source:
if source["active_docs"].split("/")[0] == "default":
vectorstore = ""
elif source["active_docs"].split("/")[0] == "local":
vectorstore = "indexes/" + source["active_docs"]
else:
vectorstore = "vectors/" + source["active_docs"]
if source["active_docs"] == "default":
vectorstore = ""
else:
vectorstore = ""
vectorstore = os.path.join("application", vectorstore)
return vectorstore
def _get_data(self):
if self.chunks == 0:
docs = []
else:
docsearch = VectorCreator.create_vectorstore(
settings.VECTOR_STORE, self.vectorstore, settings.EMBEDDINGS_KEY
)
docs_temp = docsearch.search(self.question, k=self.chunks)
docs = [
{
"title": (
i.metadata["title"].split("/")[-1]
if i.metadata
else i.page_content
),
"text": i.page_content,
"source": (
i.metadata.get("source")
if i.metadata.get("source")
else "local"
),
}
for i in docs_temp
]
if settings.LLM_NAME == "llama.cpp":
docs = [docs[0]]
return docs
def gen(self):
docs = self._get_data()
# join all page_content together with a newline
docs_together = "\n".join([doc["text"] for doc in docs])
p_chat_combine = self.prompt.replace("{summaries}", docs_together)
messages_combine = [{"role": "system", "content": p_chat_combine}]
for doc in docs:
yield {"source": doc}
if len(self.chat_history) > 1:
tokens_current_history = 0
# count tokens in history
self.chat_history.reverse()
for i in self.chat_history:
if "prompt" in i and "response" in i:
tokens_batch = count_tokens(i["prompt"]) + count_tokens(
i["response"]
)
if tokens_current_history + tokens_batch < self.token_limit:
tokens_current_history += tokens_batch
messages_combine.append(
{"role": "user", "content": i["prompt"]}
)
messages_combine.append(
{"role": "system", "content": i["response"]}
)
messages_combine.append({"role": "user", "content": self.question})
llm = LLMCreator.create_llm(
settings.LLM_NAME, api_key=settings.API_KEY, user_api_key=self.user_api_key
)
completion = llm.gen_stream(model=self.gpt_model, messages=messages_combine)
for line in completion:
yield {"answer": str(line)}
def search(self):
return self._get_data()
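The branching in `_get_vectorstore` above can be exercised standalone. This is a simplified sketch of the same mapping ("default" means no index path, `local/...` maps under `indexes/`, anything else under `vectors/`); paths are illustrative:

```python
import os

def resolve_vectorstore(source):
    # Mirrors ClassicRAG._get_vectorstore's path resolution.
    if "active_docs" in source:
        prefix = source["active_docs"].split("/")[0]
        if source["active_docs"] == "default" or prefix == "default":
            vectorstore = ""
        elif prefix == "local":
            vectorstore = "indexes/" + source["active_docs"]
        else:
            vectorstore = "vectors/" + source["active_docs"]
    else:
        vectorstore = ""
    return os.path.join("application", vectorstore)
```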

View File

@@ -0,0 +1,120 @@
from application.retriever.base import BaseRetriever
from application.core.settings import settings
from application.llm.llm_creator import LLMCreator
from application.utils import count_tokens
from langchain_community.tools import DuckDuckGoSearchResults
from langchain_community.utilities import DuckDuckGoSearchAPIWrapper
class DuckDuckSearch(BaseRetriever):
def __init__(
self,
question,
source,
chat_history,
prompt,
chunks=2,
token_limit=150,
gpt_model="docsgpt",
user_api_key=None,
):
self.question = question
self.source = source
self.chat_history = chat_history
self.prompt = prompt
self.chunks = chunks
self.gpt_model = gpt_model
self.token_limit = (
token_limit
if token_limit
< settings.MODEL_TOKEN_LIMITS.get(
self.gpt_model, settings.DEFAULT_MAX_HISTORY
)
else settings.MODEL_TOKEN_LIMITS.get(
self.gpt_model, settings.DEFAULT_MAX_HISTORY
)
)
self.user_api_key = user_api_key
def _parse_lang_string(self, input_string):
result = []
current_item = ""
inside_brackets = False
for char in input_string:
if char == "[":
inside_brackets = True
elif char == "]":
inside_brackets = False
result.append(current_item)
current_item = ""
elif inside_brackets:
current_item += char
if inside_brackets:
result.append(current_item)
return result
def _get_data(self):
if self.chunks == 0:
docs = []
else:
wrapper = DuckDuckGoSearchAPIWrapper(max_results=self.chunks)
search = DuckDuckGoSearchResults(api_wrapper=wrapper)
results = search.run(self.question)
results = self._parse_lang_string(results)
docs = []
for i in results:
try:
text = i.split("title:")[0]
title = i.split("title:")[1].split("link:")[0]
link = i.split("link:")[1]
docs.append({"text": text, "title": title, "link": link})
except IndexError:
pass
if settings.LLM_NAME == "llama.cpp":
docs = [docs[0]]
return docs
def gen(self):
docs = self._get_data()
# join all page_content together with a newline
docs_together = "\n".join([doc["text"] for doc in docs])
p_chat_combine = self.prompt.replace("{summaries}", docs_together)
messages_combine = [{"role": "system", "content": p_chat_combine}]
for doc in docs:
yield {"source": doc}
if len(self.chat_history) > 1:
tokens_current_history = 0
# count tokens in history
self.chat_history.reverse()
for i in self.chat_history:
if "prompt" in i and "response" in i:
tokens_batch = count_tokens(i["prompt"]) + count_tokens(
i["response"]
)
if tokens_current_history + tokens_batch < self.token_limit:
tokens_current_history += tokens_batch
messages_combine.append(
{"role": "user", "content": i["prompt"]}
)
messages_combine.append(
{"role": "system", "content": i["response"]}
)
messages_combine.append({"role": "user", "content": self.question})
llm = LLMCreator.create_llm(
settings.LLM_NAME, api_key=settings.API_KEY, user_api_key=self.user_api_key
)
completion = llm.gen_stream(model=self.gpt_model, messages=messages_combine)
for line in completion:
yield {"answer": str(line)}
def search(self):
return self._get_data()
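`_parse_lang_string` above collects the text between each pair of square brackets in DuckDuckGo's result string, tolerating a truncated final bracket. A standalone copy:

```python
def parse_lang_string(input_string):
    # Collect the text between each pair of square brackets.
    result, current, inside = [], "", False
    for char in input_string:
        if char == "[":
            inside = True
        elif char == "]":
            inside = False
            result.append(current)
            current = ""
        elif inside:
            current += char
    if inside:  # tolerate an unclosed final bracket
        result.append(current)
    return result
```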

View File

@@ -0,0 +1,19 @@
from application.retriever.classic_rag import ClassicRAG
from application.retriever.duckduck_search import DuckDuckSearch
from application.retriever.brave_search import BraveRetSearch
class RetrieverCreator:
retrievers = {
'classic': ClassicRAG,
'duckduck_search': DuckDuckSearch,
'brave_search': BraveRetSearch
}
@classmethod
def create_retriever(cls, type, *args, **kwargs):
retriever_class = cls.retrievers.get(type.lower())
if not retriever_class:
raise ValueError(f"No retriever class found for type {type}")
return retriever_class(*args, **kwargs)
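The creator above is a registry-based factory: look up the class by name, fail loudly on an unknown type. The same pattern in isolation (the dummy retriever class is illustrative only):

```python
class ClassicDummy:
    """Hypothetical stand-in for a retriever class."""
    def __init__(self, question):
        self.question = question

class Creator:
    # Registry mapping a lowercase type name to its class.
    retrievers = {"classic": ClassicDummy}

    @classmethod
    def create(cls, kind, *args, **kwargs):
        retriever_class = cls.retrievers.get(kind.lower())
        if not retriever_class:
            raise ValueError(f"No retriever class found for type {kind}")
        return retriever_class(*args, **kwargs)
```

Lookups are case-insensitive, and constructor arguments pass straight through to the selected class.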

View File

@@ -1,8 +0,0 @@
/** @type {import('tailwindcss').Config} */
module.exports = {
content: ["./templates/**/*.html", "./static/src/**/*.js"],
theme: {
extend: {},
},
plugins: [],
}

49
application/usage.py Normal file
View File

@@ -0,0 +1,49 @@
import sys
from pymongo import MongoClient
from datetime import datetime
from application.core.settings import settings
from application.utils import count_tokens
mongo = MongoClient(settings.MONGO_URI)
db = mongo["docsgpt"]
usage_collection = db["token_usage"]
def update_token_usage(user_api_key, token_usage):
if "pytest" in sys.modules:
return
usage_data = {
"api_key": user_api_key,
"prompt_tokens": token_usage["prompt_tokens"],
"generated_tokens": token_usage["generated_tokens"],
"timestamp": datetime.now(),
}
usage_collection.insert_one(usage_data)
def gen_token_usage(func):
def wrapper(self, model, messages, stream, **kwargs):
for message in messages:
self.token_usage["prompt_tokens"] += count_tokens(message["content"])
result = func(self, model, messages, stream, **kwargs)
self.token_usage["generated_tokens"] += count_tokens(result)
update_token_usage(self.user_api_key, self.token_usage)
return result
return wrapper
def stream_token_usage(func):
def wrapper(self, model, messages, stream, **kwargs):
for message in messages:
self.token_usage["prompt_tokens"] += count_tokens(message["content"])
batch = []
result = func(self, model, messages, stream, **kwargs)
for r in result:
batch.append(r)
yield r
for line in batch:
self.token_usage["generated_tokens"] += count_tokens(line)
update_token_usage(self.user_api_key, self.token_usage)
return wrapper
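`stream_token_usage` above must consume the generator before it can total the output tokens. A self-contained sketch of the same idea, with a word count standing in for `count_tokens` and an in-memory dict in place of the MongoDB write (unlike the original, prompt tokens here are counted lazily, when iteration begins):

```python
usage = {"prompt_tokens": 0, "generated_tokens": 0}

def count_tokens(s):
    # Hypothetical stand-in for the GPT-2 tokenizer in application/utils.py.
    return len(s.split())

def stream_token_usage(func):
    def wrapper(messages, **kwargs):
        for message in messages:
            usage["prompt_tokens"] += count_tokens(message["content"])
        for chunk in func(messages, **kwargs):
            # Tally each streamed chunk as it passes through.
            usage["generated_tokens"] += count_tokens(chunk)
            yield chunk
    return wrapper

@stream_token_usage
def fake_stream(messages):
    yield "hello there"
    yield "bye"
```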

6
application/utils.py Normal file
View File

@@ -0,0 +1,6 @@
from transformers import GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
tokenizer.model_max_length = 100000
def count_tokens(string):
return len(tokenizer(string)['input_ids'])

View File

@@ -1,13 +1,39 @@
from abc import ABC, abstractmethod
import os
from langchain.embeddings import (
OpenAIEmbeddings,
from langchain_community.embeddings import (
HuggingFaceEmbeddings,
CohereEmbeddings,
HuggingFaceInstructEmbeddings,
)
from langchain_openai import OpenAIEmbeddings
from application.core.settings import settings
class EmbeddingsSingleton:
_instances = {}
@staticmethod
def get_instance(embeddings_name, *args, **kwargs):
if embeddings_name not in EmbeddingsSingleton._instances:
EmbeddingsSingleton._instances[embeddings_name] = EmbeddingsSingleton._create_instance(
embeddings_name, *args, **kwargs
)
return EmbeddingsSingleton._instances[embeddings_name]
@staticmethod
def _create_instance(embeddings_name, *args, **kwargs):
embeddings_factory = {
"openai_text-embedding-ada-002": OpenAIEmbeddings,
"huggingface_sentence-transformers/all-mpnet-base-v2": HuggingFaceEmbeddings,
"huggingface_sentence-transformers-all-mpnet-base-v2": HuggingFaceEmbeddings,
"huggingface_hkunlp/instructor-large": HuggingFaceInstructEmbeddings,
"cohere_medium": CohereEmbeddings
}
if embeddings_name not in embeddings_factory:
raise ValueError(f"Invalid embeddings_name: {embeddings_name}")
return embeddings_factory[embeddings_name](*args, **kwargs)
class BaseVectorStore(ABC):
def __init__(self):
pass
@@ -20,32 +46,36 @@ class BaseVectorStore(ABC):
return settings.OPENAI_API_BASE and settings.OPENAI_API_VERSION and settings.AZURE_DEPLOYMENT_NAME
def _get_embeddings(self, embeddings_name, embeddings_key=None):
embeddings_factory = {
"openai_text-embedding-ada-002": OpenAIEmbeddings,
"huggingface_sentence-transformers/all-mpnet-base-v2": HuggingFaceEmbeddings,
"huggingface_hkunlp/instructor-large": HuggingFaceInstructEmbeddings,
"cohere_medium": CohereEmbeddings
}
if embeddings_name not in embeddings_factory:
raise ValueError(f"Invalid embeddings_name: {embeddings_name}")
if embeddings_name == "openai_text-embedding-ada-002":
if self.is_azure_configured():
os.environ["OPENAI_API_TYPE"] = "azure"
embedding_instance = embeddings_factory[embeddings_name](
embedding_instance = EmbeddingsSingleton.get_instance(
embeddings_name,
model=settings.AZURE_EMBEDDINGS_DEPLOYMENT_NAME
)
else:
embedding_instance = embeddings_factory[embeddings_name](
embedding_instance = EmbeddingsSingleton.get_instance(
embeddings_name,
openai_api_key=embeddings_key
)
elif embeddings_name == "cohere_medium":
embedding_instance = embeddings_factory[embeddings_name](
embedding_instance = EmbeddingsSingleton.get_instance(
embeddings_name,
cohere_api_key=embeddings_key
)
elif embeddings_name == "huggingface_sentence-transformers/all-mpnet-base-v2":
if os.path.exists("./model/all-mpnet-base-v2"):
embedding_instance = EmbeddingsSingleton.get_instance(
embeddings_name,
model_name="./model/all-mpnet-base-v2",
model_kwargs={"device": "cpu"}
)
else:
embedding_instance = EmbeddingsSingleton.get_instance(
embeddings_name,
model_kwargs={"device": "cpu"}
)
else:
embedding_instance = embeddings_factory[embeddings_name]()
return embedding_instance
embedding_instance = EmbeddingsSingleton.get_instance(embeddings_name)
return embedding_instance

View File

@@ -0,0 +1,8 @@
class Document(str):
"""Class for storing a piece of text and associated metadata."""
def __new__(cls, page_content: str, metadata: dict):
instance = super().__new__(cls, page_content)
instance.page_content = page_content
instance.metadata = metadata
return instance
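Because `Document` subclasses `str`, instances behave as plain strings (comparisons, string methods) while also carrying `page_content` and `metadata` attributes. A standalone copy of the class above:

```python
class Document(str):
    """Class for storing a piece of text and associated metadata."""
    def __new__(cls, page_content: str, metadata: dict):
        instance = super().__new__(cls, page_content)
        instance.page_content = page_content
        instance.metadata = metadata
        return instance
```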

View File

@@ -1,16 +1,8 @@
from application.vectorstore.base import BaseVectorStore
from application.core.settings import settings
from application.vectorstore.document_class import Document
import elasticsearch
class Document(str):
"""Class for storing a piece of text and associated metadata."""
def __new__(cls, page_content: str, metadata: dict):
instance = super().__new__(cls, page_content)
instance.page_content = page_content
instance.metadata = metadata
return instance
@@ -114,7 +106,7 @@ class ElasticsearchStore(BaseVectorStore):
"rank": {"rrf": {}},
}
resp = self.docsearch.search(index=self.index_name, query=full_query['query'], size=k, knn=full_query['knn'])
# create Documnets objects from the results page_content ['_source']['text'], metadata ['_source']['metadata']
# create Documents objects from the results page_content ['_source']['text'], metadata ['_source']['metadata']
doc_list = []
for hit in resp['hits']['hits']:

View File

@@ -1,5 +1,5 @@
from langchain_community.vectorstores import FAISS
from application.vectorstore.base import BaseVectorStore
from langchain import FAISS
from application.core.settings import settings
class FaissStore(BaseVectorStore):
@@ -7,20 +7,40 @@ class FaissStore(BaseVectorStore):
def __init__(self, path, embeddings_key, docs_init=None):
super().__init__()
self.path = path
embeddings = self._get_embeddings(settings.EMBEDDINGS_NAME, embeddings_key)
if docs_init:
self.docsearch = FAISS.from_documents(
docs_init, self._get_embeddings(settings.EMBEDDINGS_NAME, embeddings_key)
docs_init, embeddings
)
else:
self.docsearch = FAISS.load_local(
self.path, self._get_embeddings(settings.EMBEDDINGS_NAME, settings.EMBEDDINGS_KEY)
self.path, embeddings
)
self.assert_embedding_dimensions(embeddings)
def search(self, *args, **kwargs):
return self.docsearch.similarity_search(*args, **kwargs)
def add_texts(self, *args, **kwargs):
return self.docsearch.add_texts(*args, **kwargs)
def save_local(self, *args, **kwargs):
return self.docsearch.save_local(*args, **kwargs)
def delete_index(self, *args, **kwargs):
return self.docsearch.delete(*args, **kwargs)
def assert_embedding_dimensions(self, embeddings):
"""
Check that the word embedding dimension of the docsearch index matches
the dimension of the word embeddings used
"""
if settings.EMBEDDINGS_NAME == "huggingface_sentence-transformers/all-mpnet-base-v2":
try:
word_embedding_dimension = embeddings.client[1].word_embedding_dimension
except AttributeError as e:
raise AttributeError("word_embedding_dimension not found in embeddings.client[1]") from e
docsearch_index_dimension = self.docsearch.index.d
if word_embedding_dimension != docsearch_index_dimension:
raise ValueError(f"word_embedding_dimension ({word_embedding_dimension}) " +
f"!= docsearch_index_word_embedding_dimension ({docsearch_index_dimension})")

View File

@@ -0,0 +1,126 @@
from application.vectorstore.base import BaseVectorStore
from application.core.settings import settings
from application.vectorstore.document_class import Document
class MongoDBVectorStore(BaseVectorStore):
def __init__(
self,
path: str = "",
embeddings_key: str = "embeddings",
collection: str = "documents",
index_name: str = "vector_search_index",
text_key: str = "text",
embedding_key: str = "embedding",
database: str = "docsgpt",
):
self._index_name = index_name
self._text_key = text_key
self._embedding_key = embedding_key
self._embeddings_key = embeddings_key
self._mongo_uri = settings.MONGO_URI
self._path = path.replace("application/indexes/", "").rstrip("/")
self._embedding = self._get_embeddings(settings.EMBEDDINGS_NAME, embeddings_key)
try:
import pymongo
except ImportError:
raise ImportError(
"Could not import pymongo python package. "
"Please install it with `pip install pymongo`."
)
self._client = pymongo.MongoClient(self._mongo_uri)
self._database = self._client[database]
self._collection = self._database[collection]
def search(self, question, k=2, *args, **kwargs):
query_vector = self._embedding.embed_query(question)
pipeline = [
{
"$vectorSearch": {
"queryVector": query_vector,
"path": self._embedding_key,
"limit": k,
"numCandidates": k * 10,
"index": self._index_name,
"filter": {
"store": {"$eq": self._path}
}
}
}
]
cursor = self._collection.aggregate(pipeline)
results = []
for doc in cursor:
text = doc[self._text_key]
doc.pop("_id")
doc.pop(self._text_key)
doc.pop(self._embedding_key)
metadata = doc
results.append(Document(text, metadata))
return results
def _insert_texts(self, texts, metadatas):
if not texts:
return []
embeddings = self._embedding.embed_documents(texts)
to_insert = [
{self._text_key: t, self._embedding_key: embedding, **m}
for t, m, embedding in zip(texts, metadatas, embeddings)
]
# insert the documents in MongoDB Atlas
insert_result = self._collection.insert_many(to_insert)
return insert_result.inserted_ids
def add_texts(self,
texts,
metadatas = None,
ids = None,
refresh_indices = True,
create_index_if_not_exists = True,
bulk_kwargs = None,
**kwargs,):
#dims = self._embedding.client[1].word_embedding_dimension
# # check if index exists
# if create_index_if_not_exists:
# # check if index exists
# info = self._collection.index_information()
# if self._index_name not in info:
# index_mongo = {
# "fields": [{
# "type": "vector",
# "path": self._embedding_key,
# "numDimensions": dims,
# "similarity": "cosine",
# },
# {
# "type": "filter",
# "path": "store"
# }]
# }
# self._collection.create_index(self._index_name, index_mongo)
batch_size = 100
_metadatas = metadatas or ({} for _ in texts)
texts_batch = []
metadatas_batch = []
result_ids = []
for i, (text, metadata) in enumerate(zip(texts, _metadatas)):
texts_batch.append(text)
metadatas_batch.append(metadata)
if (i + 1) % batch_size == 0:
result_ids.extend(self._insert_texts(texts_batch, metadatas_batch))
texts_batch = []
metadatas_batch = []
if texts_batch:
result_ids.extend(self._insert_texts(texts_batch, metadatas_batch))
return result_ids
def delete_index(self, *args, **kwargs):
self._collection.delete_many({"store": self._path})
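The batching in `add_texts` above flushes every 100 items and once more at the end for the final partial batch. A sketch with an in-memory list of batches in place of `insert_many`:

```python
def batched_insert(texts, metadatas=None, batch_size=100):
    # Collect (text, metadata) pairs and flush in fixed-size batches,
    # mirroring MongoDBVectorStore.add_texts.
    batches = []
    _metadatas = metadatas or ({} for _ in texts)
    texts_batch, metadatas_batch = [], []
    for i, (text, metadata) in enumerate(zip(texts, _metadatas)):
        texts_batch.append(text)
        metadatas_batch.append(metadata)
        if (i + 1) % batch_size == 0:
            batches.append(list(zip(texts_batch, metadatas_batch)))
            texts_batch, metadatas_batch = [], []
    if texts_batch:  # flush the final partial batch
        batches.append(list(zip(texts_batch, metadatas_batch)))
    return batches
```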

View File

@@ -0,0 +1,47 @@
from langchain_community.vectorstores.qdrant import Qdrant
from application.vectorstore.base import BaseVectorStore
from application.core.settings import settings
from qdrant_client import models
class QdrantStore(BaseVectorStore):
def __init__(self, path: str = "", embeddings_key: str = "embeddings"):
self._filter = models.Filter(
must=[
models.FieldCondition(
key="metadata.store",
match=models.MatchValue(value=path.replace("application/indexes/", "").rstrip("/")),
)
]
)
self._docsearch = Qdrant.construct_instance(
["TEXT_TO_OBTAIN_EMBEDDINGS_DIMENSION"],
embedding=self._get_embeddings(settings.EMBEDDINGS_NAME, embeddings_key),
collection_name=settings.QDRANT_COLLECTION_NAME,
location=settings.QDRANT_LOCATION,
url=settings.QDRANT_URL,
port=settings.QDRANT_PORT,
grpc_port=settings.QDRANT_GRPC_PORT,
https=settings.QDRANT_HTTPS,
prefer_grpc=settings.QDRANT_PREFER_GRPC,
api_key=settings.QDRANT_API_KEY,
prefix=settings.QDRANT_PREFIX,
timeout=settings.QDRANT_TIMEOUT,
path=settings.QDRANT_PATH,
distance_func=settings.QDRANT_DISTANCE_FUNC,
)
def search(self, *args, **kwargs):
return self._docsearch.similarity_search(filter=self._filter, *args, **kwargs)
def add_texts(self, *args, **kwargs):
return self._docsearch.add_texts(*args, **kwargs)
def save_local(self, *args, **kwargs):
pass
def delete_index(self, *args, **kwargs):
return self._docsearch.client.delete(
collection_name=settings.QDRANT_COLLECTION_NAME, points_selector=self._filter
)

View File

@@ -1,11 +1,15 @@
from application.vectorstore.faiss import FaissStore
from application.vectorstore.elasticsearch import ElasticsearchStore
from application.vectorstore.mongodb import MongoDBVectorStore
from application.vectorstore.qdrant import QdrantStore
class VectorCreator:
vectorstores = {
'faiss': FaissStore,
'elasticsearch':ElasticsearchStore
"faiss": FaissStore,
"elasticsearch": ElasticsearchStore,
"mongodb": MongoDBVectorStore,
"qdrant": QdrantStore,
}
@classmethod
@@ -13,4 +17,4 @@ class VectorCreator:
vectorstore_class = cls.vectorstores.get(type.lower())
if not vectorstore_class:
raise ValueError(f"No vectorstore class found for type {type}")
return vectorstore_class(*args, **kwargs)
return vectorstore_class(*args, **kwargs)

210
application/worker.py Normal file → Executable file
View File

@@ -2,36 +2,77 @@ import os
import shutil
import string
import zipfile
import tiktoken
from urllib.parse import urljoin
import nltk
import requests
from application.core.settings import settings
from application.parser.file.bulk import SimpleDirectoryReader
from application.parser.remote.remote_creator import RemoteCreator
from application.parser.open_ai_func import call_openai_api
from application.parser.schema.base import Document
from application.parser.token_func import group_split
try:
nltk.download('punkt', quiet=True)
nltk.download('averaged_perceptron_tagger', quiet=True)
except FileExistsError:
pass
# Define a function to extract metadata from a given filename.
def metadata_from_filename(title):
store = title.split('/')
store = store[1] + '/' + store[2]
return {'title': title, 'store': store}
store = "/".join(title.split("/")[1:3])
return {"title": title, "store": store}
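The rewritten `metadata_from_filename` keeps the second and third path segments as the store. Standalone, with an illustrative path:

```python
def metadata_from_filename(title):
    # e.g. "inputs/local/user-docs/readme.md" -> store "local/user-docs"
    store = "/".join(title.split("/")[1:3])
    return {"title": title, "store": store}
```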
# Define a function to generate a random string of a given length.
def generate_random_string(length):
return ''.join([string.ascii_letters[i % 52] for i in range(length)])
return "".join([string.ascii_letters[i % 52] for i in range(length)])
current_dir = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
current_dir = os.path.dirname(
os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
)
def extract_zip_recursive(zip_path, extract_to, current_depth=0, max_depth=5):
"""
Recursively extract zip files with a limit on recursion depth.
Args:
zip_path (str): Path to the zip file to be extracted.
extract_to (str): Destination path for extracted files.
current_depth (int): Current depth of recursion.
max_depth (int): Maximum allowed depth of recursion to prevent infinite loops.
"""
if current_depth > max_depth:
print(f"Reached maximum recursion depth of {max_depth}")
return
with zipfile.ZipFile(zip_path, "r") as zip_ref:
zip_ref.extractall(extract_to)
os.remove(zip_path) # Remove the zip file after extracting
# Check for nested zip files and extract them
for root, dirs, files in os.walk(extract_to):
for file in files:
if file.endswith(".zip"):
# If a nested zip file is found, extract it recursively
file_path = os.path.join(root, file)
extract_zip_recursive(file_path, root, current_depth + 1, max_depth)
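`extract_zip_recursive` above can be tested end-to-end with a zip nested inside a zip; the depth cap stops runaway recursion on pathological archives:

```python
import os
import tempfile
import zipfile

def extract_zip_recursive(zip_path, extract_to, current_depth=0, max_depth=5):
    """Recursively extract zip files with a limit on recursion depth."""
    if current_depth > max_depth:
        print(f"Reached maximum recursion depth of {max_depth}")
        return
    with zipfile.ZipFile(zip_path, "r") as zip_ref:
        zip_ref.extractall(extract_to)
    os.remove(zip_path)  # remove the zip file after extracting
    # Check for nested zip files and extract them recursively.
    for root, dirs, files in os.walk(extract_to):
        for file in files:
            if file.endswith(".zip"):
                extract_zip_recursive(os.path.join(root, file), root,
                                      current_depth + 1, max_depth)
```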
# Define the main function for ingesting and processing documents.
def ingest_worker(self, directory, formats, name_job, filename, user):
"""
Ingest and process documents.
Args:
self: Reference to the instance of the task.
directory (str): Specifies the directory for ingesting ('inputs' or 'temp').
formats (list of str): List of file extensions to consider for ingestion (e.g., [".rst", ".md"]).
name_job (str): Name of the job for this ingestion task.
filename (str): Name of the file to be ingested.
user (str): Identifier for the user initiating the ingestion.
Returns:
dict: Information about the completed ingestion task, including input parameters and a "limited" flag.
"""
# directory = 'inputs' or 'temp'
# formats = [".rst", ".md"]
input_files = None
token_check = True
min_tokens = 150
max_tokens = 1250
recursion_depth = 2
full_path = os.path.join(directory, user, name_job)
import sys
print(full_path, file=sys.stderr)
# check if API_URL env variable is set
file_data = {"name": name_job, "file": filename, "user": user}
response = requests.get(
urljoin(settings.API_URL, "/api/download"), params=file_data
)
# check if file is in the response
print(response, file=sys.stderr)
file = response.content
if not os.path.exists(full_path):
os.makedirs(full_path)
with open(os.path.join(full_path, filename), "wb") as f:
f.write(file)
# check if file is .zip and extract it
if filename.endswith(".zip"):
extract_zip_recursive(
os.path.join(full_path, filename), full_path, 0, recursion_depth
)
self.update_state(state="PROGRESS", meta={"current": 1})
raw_docs = SimpleDirectoryReader(
input_dir=full_path,
input_files=input_files,
recursive=recursive,
required_exts=formats,
num_files_limit=limit,
exclude_hidden=exclude,
file_metadata=metadata_from_filename,
).load_data()
raw_docs = group_split(
documents=raw_docs,
min_tokens=min_tokens,
max_tokens=max_tokens,
token_check=token_check,
)
docs = [Document.to_langchain_format(raw_doc) for raw_doc in raw_docs]
call_openai_api(docs, full_path, self)
tokens = count_tokens_docs(docs)
self.update_state(state="PROGRESS", meta={"current": 100})
if sample:
for i in range(min(5, len(raw_docs))):
# get files from outputs/inputs/index.faiss and outputs/inputs/index.pkl
# and send them to the server (provide user and name in form)
file_data = {"name": name_job, "user": user, "tokens": tokens}
if settings.VECTOR_STORE == "faiss":
files = {
"file_faiss": open(full_path + "/index.faiss", "rb"),
"file_pkl": open(full_path + "/index.pkl", "rb"),
}
response = requests.post(
urljoin(settings.API_URL, "/api/upload_index"), files=files, data=file_data
)
response = requests.get(
urljoin(settings.API_URL, "/api/delete_old?path=" + full_path)
)
else:
response = requests.post(
urljoin(settings.API_URL, "/api/upload_index"), data=file_data
)
# delete local
shutil.rmtree(full_path)
return {
"directory": directory,
"formats": formats,
"name_job": name_job,
"filename": filename,
"user": user,
"limited": False,
}
def remote_worker(self, source_data, name_job, user, loader, directory="temp"):
token_check = True
min_tokens = 150
max_tokens = 1250
full_path = directory + "/" + user + "/" + name_job
if not os.path.exists(full_path):
os.makedirs(full_path)
self.update_state(state="PROGRESS", meta={"current": 1})
remote_loader = RemoteCreator.create_loader(loader)
raw_docs = remote_loader.load_data(source_data)
docs = group_split(
documents=raw_docs,
min_tokens=min_tokens,
max_tokens=max_tokens,
token_check=token_check,
)
# docs = [Document.to_langchain_format(raw_doc) for raw_doc in raw_docs]
call_openai_api(docs, full_path, self)
tokens = count_tokens_docs(docs)
self.update_state(state="PROGRESS", meta={"current": 100})
# Proceed with uploading and cleaning as in the original function
file_data = {"name": name_job, "user": user, "tokens": tokens}
if settings.VECTOR_STORE == "faiss":
files = {
"file_faiss": open(full_path + "/index.faiss", "rb"),
"file_pkl": open(full_path + "/index.pkl", "rb"),
}
requests.post(
urljoin(settings.API_URL, "/api/upload_index"), files=files, data=file_data
)
requests.get(urljoin(settings.API_URL, "/api/delete_old?path=" + full_path))
else:
requests.post(urljoin(settings.API_URL, "/api/upload_index"), data=file_data)
shutil.rmtree(full_path)
return {"urls": source_data, "name_job": name_job, "user": user, "limited": False}
def count_tokens_docs(docs):
    # Concatenate the page content of every doc and count the tokens of the
    # resulting string.
    docs_content = ""
    for doc in docs:
        docs_content += doc.page_content
    tokens, total_price = num_tokens_from_string(
        string=docs_content, encoding_name="cl100k_base"
    )
    return tokens
def num_tokens_from_string(string: str, encoding_name: str) -> tuple[int, float]:
    # Convert a string to tokens and estimate the user cost.
    encoding = tiktoken.get_encoding(encoding_name)
    num_tokens = len(encoding.encode(string))
    total_price = (num_tokens / 1000) * 0.0004
    return num_tokens, total_price
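The cost estimate above is linear in the token count. As an illustrative sanity check, the arithmetic can be reproduced without pulling in tiktoken; this is a sketch that only assumes the $0.0004-per-1K-tokens rate hard-coded above:

```python
# Illustrative reproduction of the pricing arithmetic used in
# num_tokens_from_string; the tokenizer itself is out of scope here.
def estimate_cost(num_tokens: int, rate_per_1k: float = 0.0004) -> float:
    """Estimate the user cost for a given token count."""
    return (num_tokens / 1000) * rate_per_1k

# 1250 tokens (the max_tokens chunk size used by the workers) costs
# about 0.0005:
print(estimate_cost(1250))
```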

from application.app import app
from application.core.settings import settings
if __name__ == "__main__":
app.run(debug=settings.FLASK_DEBUG_MODE, port=7091)

ignore:
- "*/tests/*"

docker-compose-mock.yaml (new file)
version: "3.9"
services:
frontend:
build: ./frontend
environment:
- VITE_API_HOST=http://localhost:7091
- VITE_API_STREAMING=$VITE_API_STREAMING
ports:
- "5173:5173"
depends_on:
- mock-backend
mock-backend:
build: ./mock-backend
ports:
- "7091:7091"
redis:
image: redis:6-alpine
ports:
- 6379:6379

backend:
build: ./application
environment:
- API_KEY=$API_KEY
- EMBEDDINGS_KEY=$API_KEY
- LLM_NAME=$LLM_NAME
- CELERY_BROKER_URL=redis://redis:6379/0
- CELERY_RESULT_BACKEND=redis://redis:6379/1
- MONGO_URI=mongodb://mongo:27017/docsgpt
ports:
- "7091:7091"
volumes:
build: ./application
command: celery -A application.app.celery worker -l INFO
environment:
- API_KEY=$API_KEY
- EMBEDDINGS_KEY=$API_KEY
- LLM_NAME=$LLM_NAME
- CELERY_BROKER_URL=redis://redis:6379/0
- CELERY_RESULT_BACKEND=redis://redis:6379/1
- MONGO_URI=mongodb://mongo:27017/docsgpt

# nextra-docsgpt
## Setting Up Docs Folder of DocsGPT Locally
### 1. Clone the DocsGPT repository:
```bash
git clone https://github.com/arc53/DocsGPT.git
```
### 2. Navigate to the docs folder:
```bash
cd DocsGPT/docs
```
The docs folder contains the markdown files that make up the documentation. The majority of the files are in the pages directory. Some notable files in this folder include:
- `index.mdx`: The main documentation file.
- `_app.js`: This file is used to customize the default Next.js application shell.
- `theme.config.jsx`: This file is for configuring the Nextra theme for the documentation.
### 3. Verify that you have Node.js and npm installed in your system. You can check by running:
```bash
node --version
npm --version
```
### 4. If not installed, download Node.js and npm from the respective official websites.
### 5. Once you have Node.js and npm running, proceed to install yarn - another package manager that helps to manage project dependencies:
```bash
npm install --global yarn
```
### 6. Install the project dependencies using yarn:
```bash
yarn install
```
### 7. After the successful installation of the project dependencies, start the local server:
```bash
yarn dev
```
- Now, you should be able to view the docs on your local environment by visiting `http://localhost:5000`. You can explore the different markdown files and make changes as you see fit.
- **Footnotes:** This guide assumes you have Node.js and npm installed. The guide involves running a local server using yarn, and viewing the documentation offline. If you encounter any issues, it may be worth verifying your Node.js and npm installations and whether you have installed yarn correctly.

const withNextra = require('nextra')({
  theme: 'nextra-theme-docs',
  themeConfig: './theme.config.jsx'
})

module.exports = withNextra()

// If you have other Next.js configurations, you can pass them as the parameter:
// module.exports = withNextra({ /* other next.js config */ })

docs/package-lock.json (generated; diff suppressed because it is too large)
{
"scripts": {
"dev": "next dev",
"build": "next build",
"start": "next start"
},
"license": "MIT",
"dependencies": {
"@vercel/analytics": "^1.1.1",
"docsgpt": "^0.3.7",
"next": "^14.1.1",
"nextra": "^2.13.2",
"nextra-theme-docs": "^2.13.2",
"react": "^18.2.0",
"react-dom": "^18.2.0"
  }
}

docs/pages/API/API-docs.md (new file)
# API Endpoints Documentation
*Currently, the application provides the following main API endpoints:*
### 1. /api/answer
**Description:**
This endpoint is used to request answers to user-provided questions.
**Request:**
**Method**: `POST`
**Headers**: Content-Type should be set to `application/json; charset=utf-8`
**Request Body**: JSON object with the following fields:
* `question` — The user's question.
* `history` — (Optional) Previous conversation history.
* `api_key` — Your API key.
* `embeddings_key` — Your embeddings key.
* `active_docs` — The location of active documentation.
Here is a JavaScript Fetch Request example:
```js
// answer (POST http://127.0.0.1:5000/api/answer)
fetch("http://127.0.0.1:5000/api/answer", {
"method": "POST",
"headers": {
"Content-Type": "application/json; charset=utf-8"
},
"body": JSON.stringify({"question":"Hi","history":null,"api_key":"OPENAI_API_KEY","embeddings_key":"OPENAI_API_KEY",
"active_docs": "javascript/.project/ES2015/openai_text-embedding-ada-002/"})
})
.then((res) => res.text())
.then(console.log.bind(console))
```
**Response**
In response, you will get a JSON document containing the `answer`, `query` and `result`:
```json
{
"answer": "Hi there! How can I help you?\n",
"query": "Hi",
"result": "Hi there! How can I help you?\nSOURCES:"
}
```
### 2. /api/docs_check
**Description:**
This endpoint checks that the documentation is loaded on the server (run it every time the user switches between libraries/documentations).
**Request:**
**Method**: `POST`
**Headers**: Content-Type should be set to `application/json; charset=utf-8`
**Request Body**: JSON object with the field:
* `docs` — The location of the documentation:
```js
// docs_check (POST http://127.0.0.1:5000/api/docs_check)
fetch("http://127.0.0.1:5000/api/docs_check", {
"method": "POST",
"headers": {
"Content-Type": "application/json; charset=utf-8"
},
"body": JSON.stringify({"docs":"javascript/.project/ES2015/openai_text-embedding-ada-002/"})
})
.then((res) => res.text())
.then(console.log.bind(console))
```
**Response:**
In response, you will get a JSON document like this one indicating whether the documentation exists or not:
```json
{
"status": "exists"
}
```
### 3. /api/combine
**Description:**
This endpoint provides information about available vectors and their locations with a simple GET request.
**Request:**
**Method**: `GET`
**Response:**
Response will include:
* `date`
* `description`
* `docLink`
* `fullName`
* `language`
* `location` (local or docshub)
* `model`
* `name`
* `version`
Example of JSON in Docshub and local:
<img width="295" alt="image" src="https://user-images.githubusercontent.com/15183589/224714085-f09f51a4-7a9a-4efb-bd39-798029bb4273.png">
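Since the screenshot above may not render everywhere, here is a sketch of what a `/api/combine` response can look like and how a client might filter it. The field values are illustrative only, not actual API output:

```python
import json

# Hypothetical /api/combine response carrying the fields listed above.
sample_response = json.loads("""
[
  {"name": "default", "fullName": "default", "language": "en",
   "version": "1.0", "location": "local",
   "model": "openai_text-embedding-ada-002",
   "date": "2024-01-01", "docLink": "", "description": ""}
]
""")

# Keep only the vector stores that live locally (as opposed to docshub).
local_stores = [d["name"] for d in sample_response if d["location"] == "local"]
print(local_stores)  # ['default']
```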
### 4. /api/upload
**Description:**
This endpoint is used to upload a file that needs to be trained. The response is JSON with a task ID, which can be used to check the task's progress.
**Request:**
**Method**: `POST`
**Request Body**: A multipart/form-data form with file upload and additional fields, including `user` and `name`.
HTML example:
```html
<form action="/api/upload" method="post" enctype="multipart/form-data" class="mt-2">
<input type="file" name="file" class="py-4" id="file-upload">
<input type="text" name="user" value="local" hidden>
<input type="text" name="name" placeholder="Name:">
<button type="submit" class="py-2 px-4 text-white bg-purple-30 rounded-md hover:bg-purple-30 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-purple-30">
Upload
</button>
</form>
```
**Response:**
JSON response with a status and a task ID that can be used to check the task's progress.
### 5. /api/task_status
**Description:**
This endpoint is used to get the status of a task (`task_id`) from `/api/upload`
**Request:**
**Method**: `GET`
**Query Parameter**: `task_id` (task ID to check)
**Sample JavaScript Fetch Request:**
```js
// Task status (Get http://127.0.0.1:5000/api/task_status)
fetch("http://127.0.0.1:5000/api/task_status?task_id=YOUR_TASK_ID", {
"method": "GET",
"headers": {
"Content-Type": "application/json; charset=utf-8"
},
})
.then((res) => res.text())
.then(console.log.bind(console))
```
**Response:**
There are two types of responses:
1. While the task is still running, the 'current' value will show progress from 0 to 100.
```json
{
"result": {
"current": 1
},
"status": "PROGRESS"
}
```
2. When the task is completed:
```json
{
"result": {
"directory": "temp",
"filename": "install.rst",
"formats": [
".rst",
".md",
".pdf"
],
"name_job": "somename",
"user": "local"
},
"status": "SUCCESS"
}
```
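A client typically polls this endpoint until the status leaves `PROGRESS`. Below is a minimal polling sketch with the HTTP GET stubbed out by a plain callable; swap in a real request in your own client:

```python
import time

def poll_task(fetch_status, interval=0.0, max_polls=50):
    """Call fetch_status() until the task reports SUCCESS or FAILURE."""
    for _ in range(max_polls):
        payload = fetch_status()
        if payload["status"] in ("SUCCESS", "FAILURE"):
            return payload
        time.sleep(interval)
    raise TimeoutError("task did not finish within max_polls")

# Simulated task lifecycle matching the response shapes shown above.
responses = iter([
    {"status": "PROGRESS", "result": {"current": 1}},
    {"status": "PROGRESS", "result": {"current": 60}},
    {"status": "SUCCESS", "result": {"filename": "install.rst"}},
])
final = poll_task(lambda: next(responses))
print(final["status"])  # SUCCESS
```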
### 6. /api/delete_old
**Description:**
This endpoint is used to delete old Vector Stores.
**Request:**
**Method**: `GET`
**Query Parameter**: `task_id`
**Sample JavaScript Fetch Request:**
```js
// delete_old (GET http://127.0.0.1:5000/api/delete_old)
fetch("http://127.0.0.1:5000/api/delete_old?task_id=YOUR_TASK_ID", {
"method": "GET",
"headers": {
"Content-Type": "application/json; charset=utf-8"
},
})
.then((res) => res.text())
.then(console.log.bind(console))
```
**Response:**
JSON response indicating the status of the operation:
```json
{ "status": "ok" }
```
### 7. /api/get_api_keys
**Description:**
The endpoint retrieves a list of API keys for the user.
**Request:**
**Method**: `GET`
**Sample JavaScript Fetch Request:**
```js
// get_api_keys (GET http://127.0.0.1:5000/api/get_api_keys)
fetch("http://127.0.0.1:5000/api/get_api_keys", {
"method": "GET",
"headers": {
"Content-Type": "application/json; charset=utf-8"
},
})
.then((res) => res.text())
.then(console.log.bind(console))
```
**Response:**
JSON response with a list of created API keys:
```json
[
{
"id": "string",
"name": "string",
"key": "string",
"source": "string"
},
...
]
```
### 8. /api/create_api_key
**Description:**
Create a new API key for the user.
**Request:**
**Method**: `POST`
**Headers**: Content-Type should be set to `application/json; charset=utf-8`
**Request Body**: JSON object with the following fields:
* `name` — A name for the API key.
* `source` — The source documents that will be used.
* `prompt_id` — The prompt ID.
* `chunks` — The number of chunks used to process an answer.
Here is a JavaScript Fetch Request example:
```js
// create_api_key (POST http://127.0.0.1:5000/api/create_api_key)
fetch("http://127.0.0.1:5000/api/create_api_key", {
"method": "POST",
"headers": {
"Content-Type": "application/json; charset=utf-8"
},
"body": JSON.stringify({"name":"Example Key Name",
"source":"Example Source",
"prompt_id":"creative",
"chunks":"2"})
})
.then((res) => res.json())
.then(console.log.bind(console))
```
**Response**
In response, you will get a JSON document containing the `id` and `key`:
```json
{
"id": "string",
"key": "string"
}
```
### 9. /api/delete_api_key
**Description:**
Delete an API key for the user.
**Request:**
**Method**: `POST`
**Headers**: Content-Type should be set to `application/json; charset=utf-8`
**Request Body**: JSON object with the field:
* `id` — The unique identifier of the API key to be deleted.
Here is a JavaScript Fetch Request example:
```js
// delete_api_key (POST http://127.0.0.1:5000/api/delete_api_key)
fetch("http://127.0.0.1:5000/api/delete_api_key", {
"method": "POST",
"headers": {
"Content-Type": "application/json; charset=utf-8"
},
"body": JSON.stringify({"id":"API_KEY_ID"})
})
.then((res) => res.json())
.then(console.log.bind(console))
```
**Response:**
In response, you will get a JSON document indicating the status of the operation:
```json
{
"status": "ok"
}
```

docs/pages/API/_meta.json (new file)
{
"API-docs": {
"title": "🗂️️ API-docs",
"href": "/API/API-docs"
},
"api-key-guide": {
"title": "🔐 API Keys guide",
"href": "/API/api-key-guide"
}
}

## Guide to DocsGPT API Keys
DocsGPT API keys are essential for developers and users who wish to integrate the DocsGPT models into external applications, such as our widget. This guide will walk you through the steps of obtaining an API key, starting from uploading your document to understanding the key variables associated with API keys.
### Uploading Your Document
Before creating your first API key, you must upload the document that will be linked to this key. You can upload your document through two methods:
- **GUI Web App Upload:** A user-friendly graphical interface that allows for easy upload and management of documents.
- **Using `/api/upload` Method:** For users comfortable with API calls, this method provides a direct way to upload documents.
### Obtaining Your API Key
After uploading your document, you can obtain an API key either through the graphical user interface or via an API call:
- **Graphical User Interface:** Navigate to the Settings section of the DocsGPT web app, find the API Keys option, and press 'Create New' to generate your key.
- **API Call:** Alternatively, you can use the `/api/create_api_key` endpoint to create a new API key. For detailed instructions, visit [DocsGPT API Documentation](https://docs.docsgpt.cloud/API/API-docs#8-apicreate_api_key).
### Understanding Key Variables
Upon creating your API key, you will encounter several key variables. Each serves a specific purpose:
- **Name:** Assign a name to your API key for easy identification.
- **Source:** Indicates the source document(s) linked to your API key, which DocsGPT will use to generate responses.
- **ID:** A unique identifier for your API key. You can view this by making a call to `/api/get_api_keys`.
- **Key:** The API key itself, which will be used in your application to authenticate API requests.
With your API key ready, you can now integrate DocsGPT into your application, such as the DocsGPT Widget or any other software, via `/api/answer` or `/stream` endpoints. The source document is preset with the API key, allowing you to bypass fields like `selectDocs` and `active_docs` during implementation.
Congratulations on taking the first step towards enhancing your applications with DocsGPT! With this guide, you're now equipped to navigate the process of obtaining and understanding DocsGPT API keys.
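As a sketch of that last point: an `/api/answer` request authenticated with an API key only needs the key and the question, because the source documents are preset on the key. `docsgpt-your-key` below is a placeholder, not a real key:

```python
import json

# Request body for /api/answer when using a DocsGPT API key. Because the
# key already pins the source documents, selectDocs / active_docs are omitted.
payload = json.dumps({
    "question": "What is DocsGPT?",
    "history": None,
    "api_key": "docsgpt-your-key",  # placeholder value
})

print(payload)
```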

# Self-hosting DocsGPT on Amazon Lightsail
Here's a step-by-step guide on how to set up an Amazon Lightsail instance to host DocsGPT.
## Configuring your instance
(If you know how to create a Lightsail instance, you can skip to the recommended configuration part by clicking [here](#connecting-to-your-newly-created-instance)).
### 1. Create an AWS Account:
If you haven't already, create or log in to your AWS account at https://lightsail.aws.amazon.com.
### 2. Create an Instance:
a. Click "Create Instance."
b. Select the "Instance location." In most cases, the default location works fine.
c. Choose "Linux/Unix" as the image and "Ubuntu 20.04 LTS" as the Operating System.
d. Configure the instance plan based on your requirements. A "1 GB, 1vCPU, 40GB SSD, and 2TB transfer" setup is recommended for most scenarios.
e. Give your instance a unique name and click "Create Instance."
PS: It may take a few minutes for the instance setup to complete.
### Connecting to Your newly created Instance

Your instance will be ready a few minutes after creation. To access it, open the instance and click "Connect using SSH."

#### Clone the DocsGPT Repository

A terminal window will pop up, and the first step will be to clone the DocsGPT Git repository:
`git clone https://github.com/arc53/DocsGPT.git`
#### Download the package information
Once it has finished cloning the repository, it is time to download the package information from all sources. To do so, simply enter the following command:
`sudo apt update`
#### Install Docker and Docker Compose
The DocsGPT backend and worker use Python, the frontend is written in React, and the whole application is containerized using Docker. To install Docker and Docker Compose, enter the following commands:
`sudo apt install docker.io`
And now install docker-compose:
`sudo apt install docker-compose`
#### Access the DocsGPT Folder
Enter the following command to access the folder in which the DocsGPT docker-compose file is present.
`cd DocsGPT/`
#### Prepare the Environment
Inside the DocsGPT folder create a `.env` file and copy the contents of `.env_sample` into it.
`nano .env`
Make sure your `.env` file looks like this:
```
OPENAI_API_KEY=(Your OpenAI API key)
VITE_API_STREAMING=true
SELF_HOSTED_MODEL=false
```
To save the file, press CTRL+X, then Y, and then ENTER.
Next, set the correct IP for the Backend by opening the docker-compose.yml file:
`nano docker-compose.yml`
And change line 7 from `VITE_API_HOST=http://localhost:7091` to `VITE_API_HOST=http://<your instance public IP>:7091`.
This will allow the frontend to connect to the backend.
#### Running the Application
You're almost there! Now that all the necessary bits and pieces have been installed, it is time to run the application. To do so, use the following command:
`sudo docker-compose up -d`
Launching it for the first time will take a few minutes to download all the necessary dependencies and build.
Once this is done you can go ahead and close the terminal window.
#### Enabling Ports
a. Before you are able to access your live instance, you must first enable the port that it is using.
b. Open your Lightsail instance and head to "Networking".
c. Then click on "Add rule" under "IPv4 Firewall", enter `5173` as your port, and hit "Create".
Repeat the process for port `7091`.
#### Access your instance
Your instance is now available at your Public IP Address on port 5173. Enjoy using DocsGPT!
## Other Deployment Options
- [Deploy DocsGPT on Civo Compute Cloud](https://dev.to/rutamhere/deploying-docsgpt-on-civo-compute-c)
- [Deploy DocsGPT on DigitalOcean Droplet](https://dev.to/rutamhere/deploying-docsgpt-on-digitalocean-droplet-50ea)
- [Deploy DocsGPT on Kamatera Performance Cloud](https://dev.to/rutamhere/deploying-docsgpt-on-kamatera-performance-cloud-1bj)

# Self-hosting DocsGPT on Kubernetes
This guide will walk you through deploying DocsGPT on Kubernetes.
## Prerequisites
Ensure you have the following installed before proceeding:
- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
- Access to a Kubernetes cluster
## Folder Structure
The `k8s` folder contains the necessary deployment and service configuration files:
- `deployments/`
- `services/`
- `docsgpt-secrets.yaml`
## Deployment Instructions
1. **Clone the Repository**
```sh
git clone https://github.com/arc53/DocsGPT.git
cd docsgpt/k8s
```
2. **Configure Secrets (optional)**
Ensure that you have all the necessary secrets in `docsgpt-secrets.yaml`, and update it with your own values before applying if needed. By default, Qdrant is used as the vector store and the public DocsGPT LLM is used for inference.
3. **Apply Kubernetes Deployments**
Deploy your DocsGPT resources using the following commands:
```sh
kubectl apply -f deployments/
```
4. **Apply Kubernetes Services**
Set up your services using the following commands:
```sh
kubectl apply -f services/
```
5. **Apply Secrets**
Apply the secret configurations:
```sh
kubectl apply -f docsgpt-secrets.yaml
```
6. **Substitute API URL**
After deploying the services, you need to update the environment variable `VITE_API_HOST` in your deployment file `deployments/docsgpt-deploy.yaml` with the actual endpoint URL created by your `docsgpt-api-service`.
```sh
kubectl get services/docsgpt-api-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}' | xargs -I {} sed -i "s|<your-api-endpoint>|{}|g" deployments/docsgpt-deploy.yaml
```
7. **Rerun Deployment**
After making the changes, reapply the deployment configuration to update the environment variables:
```sh
kubectl apply -f deployments/
```
## Verifying the Deployment
To verify if everything is set up correctly, you can run the following:
```sh
kubectl get pods
kubectl get services
```
Ensure that the pods are running and the services are available.
## Accessing DocsGPT
To access DocsGPT, you need to find the external IP address of the frontend service. You can do this by running:
```sh
kubectl get services/docsgpt-frontend-service | awk 'NR>1 {print "http://" $4}'
```
## Troubleshooting
If you encounter any issues, you can check the logs of the pods for more details:
```sh
kubectl logs <pod-name>
```
Replace `<pod-name>` with the actual name of your DocsGPT pod.

## Launching Web App
**Note**: Make sure you have Docker installed
**On macOS or Linux:**
Just run the following command:
```bash
./setup.sh
```
This command will install all the necessary dependencies and provide you with an option to use our LLM API, download the local model or use OpenAI.
If you prefer to follow manual steps, refer to this guide:
1. Open and download this repository with
```bash
git clone https://github.com/arc53/DocsGPT.git
```
2. Create a `.env` file in your root directory and set your `API_KEY` with your [OpenAI API key](https://platform.openai.com/account/api-keys). (optional in case you want to use OpenAI)
3. Run the following commands:
```bash
docker-compose build && docker-compose up
```
4. Navigate to http://localhost:5173/.
To stop, simply press **Ctrl + C**.
**For WINDOWS:**
To run the setup on Windows, you have two options: using the Windows Subsystem for Linux (WSL) or using Git Bash or Command Prompt.
**Option 1: Using Windows Subsystem for Linux (WSL):**
1. Install WSL if you haven't already. You can follow the official Microsoft documentation for installation: (https://learn.microsoft.com/en-us/windows/wsl/install).
2. After setting up WSL, open the WSL terminal.
3. Clone the repository and create the `.env` file:
```bash
git clone https://github.com/arc53/DocsGPT.git
cd DocsGPT
echo "API_KEY=Yourkey" > .env
echo "VITE_API_STREAMING=true" >> .env
```
4. Run the following command to start the setup with Docker Compose:
```bash
./run-with-docker-compose.sh
```
5. Open your web browser and navigate to http://localhost:5173/.
6. To stop the setup, just press **Ctrl + C** in the WSL terminal.
**Option 2: Using Git Bash or Command Prompt (CMD):**
1. Install Git for Windows if you haven't already. Download it from the official website: (https://gitforwindows.org/).
2. Open Git Bash or Command Prompt.
3. Clone the repository and create the `.env` file:
```bash
git clone https://github.com/arc53/DocsGPT.git
cd DocsGPT
echo "API_KEY=Yourkey" > .env
echo "VITE_API_STREAMING=true" >> .env
```
4. Run the following command to start the setup with Docker Compose:
```bash
./run-with-docker-compose.sh
```
5. Open your web browser and navigate to http://localhost:5173/.
6. To stop the setup, just press **Ctrl + C** in the Git Bash or Command Prompt terminal.
These steps should help you set up and run the project on Windows using either WSL or Git Bash/Command Prompt.
**Important:** Ensure that Docker is installed and properly configured on your Windows system for these steps to work.


@@ -0,0 +1,254 @@
# Self-hosting DocsGPT on Railway
Here's a step-by-step guide on how to host DocsGPT on Railway App.
First, clone and set up the project locally to run, test, and modify it.
### 1. Clone and GitHub Setup
a. Open a terminal (Windows shell or Git Bash (recommended)).
b. Type `git clone https://github.com/arc53/DocsGPT.git`
#### Download the package information
Once it has finished cloning the repository, it is time to download the package information from all sources. To do so, simply enter the following command:
`sudo apt update`
#### Install Docker and Docker Compose
The DocsGPT backend and worker use Python, the frontend is written in React, and the whole application is containerized using Docker. To install Docker and Docker Compose, enter the following commands:
`sudo apt install docker.io`
And now install docker-compose:
`sudo apt install docker-compose`
#### Access the DocsGPT Folder
Enter the following command to access the folder in which the DocsGPT docker-compose file is present.
`cd DocsGPT/`
#### Prepare the Environment
Inside the DocsGPT folder create a `.env` file and copy the contents of `.env_sample` into it.
`nano .env`
Make sure your `.env` file looks like this:
```
OPENAI_API_KEY=(Your OpenAI API key)
VITE_API_STREAMING=true
SELF_HOSTED_MODEL=false
```
To save the file, press CTRL+X, then Y, and then ENTER.
Next, set the correct IP for the backend by opening the docker-compose.yml file:
`nano docker-compose.yml`
Change line 7 from `VITE_API_HOST=http://localhost:7091`
to `VITE_API_HOST=http://<your instance public IP>:7091`
This will allow the frontend to connect to the backend.
#### Running the Application
You're almost there! Now that all the necessary bits and pieces have been installed, it is time to run the application. To do so, use the following command:
`sudo docker-compose up -d`
Launching it for the first time will take a few minutes to download all the necessary dependencies and build.
Once this is done you can go ahead and close the terminal window.
### 2. Pushing it to your own Repository
a. Create a Repository on your GitHub.
b. Open a terminal in the directory of the cloned project.
c. Type `git init`
d. `git add .`
e. `git commit -m "first-commit"`
f. `git remote add origin <your repository link>`
g. `git push --set-upstream origin master`
Your local files will now be pushed to your GitHub Account. :)
### 3. Create a Railway Account:
If you haven't already, create or log in to your Railway account by visiting [Railway](https://railway.app/).
Sign up via **GitHub** [Recommended].
### 4. Start New Project:
a. Open Railway app and Click on "Start New Project."
b. Choose any of the available options (recommended: "**Deploy from GitHub Repo**").
c. Choose the required Repository from your GitHub.
d. Configure and allow access to modify your GitHub content from the pop-up window.
e. Agree to all the terms and conditions.
PS: It may take a few minutes for the account setup to complete.
#### You will get a free trial of $5 (use it for the trial, then purchase if satisfied and needed)
### 5. Connecting Your New Railway App with GitHub
a. Choose the DocsGPT repo that you want to deploy from the list of your GitHub repositories.
b. Click on Deploy now.
![Three Tabs will be there](/Railway-selection.png)
c. Select Variables Tab.
d. Upload the `.env` file that you used for the local setup here.
e. Go to Settings Tab now.
f. Go to "Networking" and click on Generate Domain Name, to get the URL of your hosted project.
g. You can update the root directory, build command, and installation command as needed.
*[However, it is recommended not to disturb these options and to leave them as default unless necessary.]*
Your own DocsGPT is now available at the generated domain URL. :)


@@ -6,5 +6,13 @@
"Quickstart": {
"title": "⚡Quickstart",
"href": "/Deploying/Quickstart"
},
"Railway-Deploying": {
"title": "🚂Deploying on Railway",
"href": "/Deploying/Railway-Deploying"
},
"Kubernetes-Deploying": {
"title": "☸Deploying on Kubernetes",
"href": "/Deploying/Kubernetes-Deploying"
}
}


@@ -1,153 +0,0 @@
App currently has several main API endpoints:
### /api/answer
It's a POST request that sends a JSON body with 4 values. It will return an answer to a user-provided question. Here is a JavaScript fetch example:
```js
// answer (POST http://127.0.0.1:5000/api/answer)
fetch("http://127.0.0.1:5000/api/answer", {
"method": "POST",
"headers": {
"Content-Type": "application/json; charset=utf-8"
},
"body": JSON.stringify({"question":"Hi","history":null,"api_key":"OPENAI_API_KEY","embeddings_key":"OPENAI_API_KEY",
"active_docs": "javascript/.project/ES2015/openai_text-embedding-ada-002/"})
})
.then((res) => res.text())
.then(console.log.bind(console))
```
In response you will get a json document like this one:
```json
{
"answer": " Hi there! How can I help you?\n",
"query": "Hi",
"result": " Hi there! How can I help you?\nSOURCES:"
}
```
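As a small illustration, a helper like the following (hypothetical glue code, not part of DocsGPT) could pull the plain answer text out of that response:

```javascript
// Hypothetical helper (not part of DocsGPT): extract the trimmed answer text
// from an /api/answer response, whether it's a parsed object or the raw JSON string.
function extractAnswer(response) {
  const data = typeof response === "string" ? JSON.parse(response) : response;
  return (data.answer || "").trim();
}

console.log(extractAnswer('{"answer": " Hi there! How can I help you?\\n", "query": "Hi"}'));
// → Hi there! How can I help you?
```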
### /api/docs_check
It will make sure documentation is loaded on the server (just run it every time the user switches between libraries/documentations).
It's a POST request that sends a JSON body with 1 value. Here is a JavaScript fetch example:
```js
// answer (POST http://127.0.0.1:5000/api/docs_check)
fetch("http://127.0.0.1:5000/api/docs_check", {
"method": "POST",
"headers": {
"Content-Type": "application/json; charset=utf-8"
},
"body": JSON.stringify({"docs":"javascript/.project/ES2015/openai_text-embedding-ada-002/"})
})
.then((res) => res.text())
.then(console.log.bind(console))
```
In response you will get a json document like this one:
```json
{
"status": "exists"
}
```
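Since the UI should only switch to documentation that actually exists on the server, a small guard over that response (a sketch, not DocsGPT code) keeps the check in one place:

```javascript
// Hypothetical guard: true only when the server reports the docs exist.
function docsReady(checkResponse) {
  const data = typeof checkResponse === "string" ? JSON.parse(checkResponse) : checkResponse;
  return data.status === "exists";
}

console.log(docsReady({ status: "exists" })); // true
```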
### /api/combine
Provides JSON that tells the UI which vectors are available and where they are located, with a simple GET request.
Response will include:
`date`, `description`, `docLink`, `fullName`, `language`, `location` (local or docshub), `model`, `name`, `version`
Example of json in Docshub and local:
<img width="295" alt="image" src="https://user-images.githubusercontent.com/15183589/224714085-f09f51a4-7a9a-4efb-bd39-798029bb4273.png">
### /api/upload
Uploads a file that needs to be trained on; the response is JSON with a task ID, which can be used to check the task's progress.
HTML example:
```html
<form action="/api/upload" method="post" enctype="multipart/form-data" class="mt-2">
<input type="file" name="file" class="py-4" id="file-upload">
<input type="text" name="user" value="local" hidden>
<input type="text" name="name" placeholder="Name:">
<button type="submit" class="py-2 px-4 text-white bg-blue-500 rounded-md hover:bg-blue-600 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-blue-500">
Upload
</button>
</form>
```
Response:
```json
{
"status": "ok",
"task_id": "b2684988-9047-428b-bd47-08518679103c"
}
```
### /api/task_status
Gets the status of a task (`task_id`) started by /api/upload.
```js
// Task status (Get http://127.0.0.1:5000/api/task_status)
fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4fe2e7454d1", {
"method": "GET",
"headers": {
"Content-Type": "application/json; charset=utf-8"
},
})
.then((res) => res.text())
.then(console.log.bind(console))
```
Responses:
There are two types of responses:
1. While the task is still running, "current" will show progress from 0 to 100:
```json
{
"result": {
"current": 1
},
"status": "PROGRESS"
}
```
2. When the task is completed:
```json
{
"result": {
"directory": "temp",
"filename": "install.rst",
"formats": [
".rst",
".md",
".pdf"
],
"name_job": "somename",
"user": "local"
},
"status": "SUCCESS"
}
```
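The two response shapes above suggest a simple client-side polling loop: keep requesting /api/task_status until the status leaves PROGRESS. Below is a minimal sketch; the helper names, base URL, polling interval, and the FAILURE terminal state are assumptions, not part of the documented API:

```javascript
// Sketch of polling glue code around /api/task_status (hypothetical, not DocsGPT's own).
function isTaskFinished(statusJson) {
  // Celery-style terminal states; anything else (e.g. "PROGRESS") means keep polling.
  return statusJson.status === "SUCCESS" || statusJson.status === "FAILURE";
}

async function pollTaskStatus(taskId, { baseUrl = "http://localhost:5001", intervalMs = 2000 } = {}) {
  for (;;) {
    const res = await fetch(`${baseUrl}/api/task_status?task_id=${taskId}`);
    const data = await res.json();
    if (isTaskFinished(data)) return data; // resolves with the final status payload
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```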
### /api/delete_old
Deletes old vectorstores.
```js
// Delete old vectorstores (GET http://127.0.0.1:5000/api/delete_old)
fetch("http://localhost:5001/api/delete_old", {
"method": "GET",
"headers": {
"Content-Type": "application/json; charset=utf-8"
},
})
.then((res) => res.text())
.then(console.log.bind(console))
```
Response:
```json
{ "status": "ok" }
```


@@ -1,6 +0,0 @@
{
"API-docs": {
"title": "🗂️️ API-docs",
"href": "/Developing/API-docs"
}
}


@@ -1,29 +1,44 @@
## Chatwoot Extension Setup Guide
### Step 1: Prepare and Start DocsGPT
- **Launch DocsGPT**: Follow the instructions in our [DocsGPT Wiki](https://github.com/arc53/DocsGPT/wiki) to start DocsGPT. Make sure to load your documentation.
### Step 2: Get Access Token from Chatwoot
- Go to Chatwoot.
- In your profile settings (located at the bottom left), scroll down and copy the **Access Token**.
### Step 3: Set Up Chatwoot Extension
- Navigate to `/extensions/chatwoot`.
- Copy the `.env_sample` file and create a new file named `.env`.
- Fill in the values in the `.env` file as follows:
```env
docsgpt_url=<Docsgpt_API_URL>
chatwoot_url=<Chatwoot_URL>
docsgpt_key=<OpenAI_API_Key or Other_LLM_Key>
chatwoot_token=<Token from Step 2>
```
### Step 4: Start the Extension
- Use the command `flask run` to start the extension.
### Step 5: Optional - Extra Validation
- In `app.py`, uncomment lines 12-13 and 71-75.
- Add the following lines to your `.env` file:
```env
account_id=(optional) 1
assignee_id=(optional) 1
```
These Chatwoot values help ensure you respond to the correct widget and handle questions assigned to a specific user.
### Stopping Bot Responses for a Specific User or Session
- If you want the bot to stop responding to questions for a specific user or session, just add the label `human-requested` in your conversation.
### Additional Notes
- For further details on training on other documentation, refer to our [wiki](https://github.com/arc53/DocsGPT/wiki/How-to-train-on-other-documentation).


@@ -0,0 +1,34 @@
import {Steps} from 'nextra/components'
import { Callout } from 'nextra/components'
## Chrome Extension Setup Guide
To enhance your DocsGPT experience, you can install the DocsGPT Chrome extension. Here's how:
<Steps>
### Step 1
In the DocsGPT GitHub repository, click on the **Code** button and select **Download ZIP**.
### Step 2
Unzip the downloaded file to a location you can easily access.
### Step 3
Open the Google Chrome browser and click on the three dots menu (upper right corner).
### Step 4
Select **More Tools** and then **Extensions**.
### Step 5
Turn on the **Developer mode** switch in the top right corner of the **Extensions page**.
### Step 6
Click on the **Load unpacked** button.
### Step 7
Select the **Chrome** folder where the DocsGPT files have been unzipped (docsgpt-main > extensions > chrome).
### Step 8
The extension should now be added to Google Chrome and can be managed on the Extensions page.
### Step 9
To disable or remove the extension, simply turn off the toggle switch on the extension card or click the **Remove** button.
</Steps>


@@ -4,7 +4,11 @@
"href": "/Extensions/Chatwoot-extension"
},
"react-widget": {
"title": "🏗️ Widget setup",
"href": "/Extensions/react-widget"
},
"Chrome-extension": {
"title": "🌐 Chrome Extension",
"href": "/Extensions/Chrome-extension"
}
}


@@ -1,28 +1,46 @@
### Setting up the DocsGPT Widget in Your React Project
### Introduction:
The DocsGPT Widget is a powerful tool that allows you to integrate AI-powered documentation assistance into your web applications. This guide will walk you through the installation and usage of the DocsGPT Widget in your React project. Whether you're building a web app or a knowledge base, this widget can enhance your user experience.
### Installation
First, make sure you have Node.js and npm installed. Then go to your project and install a new dependency: `npm install docsgpt`.
### Usage
In the file where you want to use the widget, import it and include the CSS file:
```js
import { DocsGPTWidget } from "docsgpt";
import "docsgpt/dist/style.css";
```
Now, you can use the widget in your component like this:
```jsx
<DocsGPTWidget
  apiHost="https://your-docsgpt-api.com"
  selectDocs="local/docs.zip"
  apiKey=""
  avatar="https://d3dg1063dc54p9.cloudfront.net/cute-docsgpt.png"
  title="Get AI assistance"
  description="DocsGPT's AI Chatbot is here to help"
  heroTitle="Welcome to DocsGPT!"
  heroDescription="This chatbot is built with DocsGPT and utilises GenAI, please review important information using sources."
/>
```
DocsGPTWidget takes 8 **props** with default fallback values:
1. `apiHost`: The URL of your DocsGPT API.
2. `selectDocs`: The documentation source that you want to use for your widget (e.g. `default` or `local/docs1.zip`).
3. `apiKey`: Usually, it's empty.
4. `avatar`: Specifies the URL of the avatar or image representing the chatbot.
5. `title`: Sets the title text displayed in the chatbot interface.
6. `description`: Provides a brief description of the chatbot's purpose or functionality.
7. `heroTitle`: Displays a welcome title when users interact with the chatbot.
8. `heroDescription`: Provides additional introductory text or information about the chatbot's capabilities.
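When widget configuration comes from more than one place (defaults, environment, user settings), merging overrides onto a single defaults object keeps the props tidy. Here is a sketch; the default values below are illustrative placeholders, not the widget's real fallbacks:

```javascript
// Illustrative placeholder defaults only; see the prop list above for what each means.
const widgetDefaults = {
  apiHost: "https://your-docsgpt-api.com",
  selectDocs: "default",
  apiKey: "",
  title: "Get AI assistance",
};

function widgetProps(overrides = {}) {
  // Later spreads win, so user-supplied overrides replace the defaults.
  return { ...widgetDefaults, ...overrides };
}

console.log(widgetProps({ selectDocs: "local/docs1.zip" }).selectDocs); // → local/docs1.zip
```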
### How to use DocsGPTWidget with [Nextra](https://nextra.site/) (Next.js + MDX)
Install your widget as described above and then go to your `pages/` folder and create a new file `_app.js` with the following content:
```js
import { DocsGPTWidget } from "docsgpt";
import "docsgpt/dist/style.css";
export default function MyApp({ Component, pageProps }) {
  return (
    <>
      <Component {...pageProps} />
      <DocsGPTWidget />
    </>
  )
}
```
### How to use DocsGPTWidget with HTML
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>DocsGPT Widget</title>
</head>
<body>
<div id="app"></div>
<!-- Include the widget script from dist/modern or dist/legacy -->
<script src="https://unpkg.com/docsgpt/dist/modern/main.js" type="module"></script>
<script type="module">
window.onload = function() {
renderDocsGPTWidget('app');
}
</script>
</body>
</html>
```
To link the widget to your API and your documents, you can pass parameters to `renderDocsGPTWidget('div id', { parameters })`.
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>DocsGPT Widget</title>
</head>
<body>
<div id="app"></div>
<!-- Include the widget script from dist/modern or dist/legacy -->
<script src="https://unpkg.com/docsgpt/dist/modern/main.js" type="module"></script>
<script type="module">
window.onload = function() {
renderDocsGPTWidget('app', {
apiHost: 'http://localhost:7001',
selectDocs: 'default',
apiKey: '',
avatar: 'https://d3dg1063dc54p9.cloudfront.net/cute-docsgpt.png',
title: 'Get AI assistance',
description: "DocsGPT's AI Chatbot is here to help",
heroTitle: 'Welcome to DocsGPT!',
heroDescription: 'This chatbot is built with DocsGPT and utilises GenAI, please review important information using sources.'
});
}
</script>
</body>
</html>
```
For more information about React, refer to the [React documentation](https://react.dev/learn).


@@ -1,4 +0,0 @@
## To customise a main prompt navigate to `/application/prompt/combine_prompt.txt`
You can try editing it to see how the model responds.


@@ -0,0 +1,42 @@
import Image from 'next/image'
# Customizing the Main Prompt
Customizing the main prompt for DocsGPT gives you the ability to tailor the AI's responses to your specific requirements. By modifying the prompt text, you can achieve more accurate and relevant answers. Here's how you can do it:
1. Navigate to `SideBar -> Settings`.
2. In Settings, select `Active Prompt`; you will now be able to see the various prompt styles.
3. Click on the `edit icon` on the prompt of your choice to see its current text; you can now customise the prompt as per your choice.
### Video Demo
<Image src="/prompts.gif" alt="prompts" width={800} height={500} />
## Example Prompt Modification
**Original Prompt:**
```markdown
You are a DocsGPT, friendly and helpful AI assistant by Arc53 that provides help with documents. You give thorough answers with code examples if possible.
Use the following pieces of context to help answer the users question. If it's not relevant to the question, provide friendly responses.
You have access to chat history, and can use it to help answer the question.
When using code examples, use the following format:
(code)
{summaries}
```
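At run time, the `{summaries}` placeholder in that template is filled with the retrieved document context before the prompt reaches the model. A toy illustration of the substitution (not DocsGPT's actual templating code):

```javascript
// Toy stand-in for prompt templating: swap {summaries} for retrieved context.
function fillPrompt(template, summaries) {
  return template.replace("{summaries}", summaries);
}

const template = "Use the following pieces of context to help answer the users question.\n{summaries}";
console.log(fillPrompt(template, "Doc chunk 1\nDoc chunk 2"));
```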
Feel free to customize the prompt to align it with your specific use case or the kind of responses you want from the AI. For example, you can focus on specific document types, industries, or topics to get more targeted results.
## Conclusion
Customizing the main prompt for DocsGPT allows you to tailor the AI's responses to your unique requirements. Whether you need in-depth explanations, code examples, or specific insights, you can achieve it by modifying the main prompt. Remember to experiment and fine-tune your prompts to get the best results.
