Merge pull request #360 from jbampton/fix-spelling

Fix spelling
Alex, 2023-10-01 17:09:53 +01:00, committed by GitHub
7 changed files with 12 additions and 12 deletions

@@ -69,10 +69,10 @@ class HTMLParser(BaseParser):
 Chunks.append([])
 Chunks[-1].append(isd_el['text'])
-# Removing all the chunks with sum of lenth of all the strings in the chunk < 25
+# Removing all the chunks with sum of length of all the strings in the chunk < 25
 # TODO: This value can be an user defined variable
 for chunk in Chunks:
-# sum of lenth of all the strings in the chunk
+# sum of length of all the strings in the chunk
 sum = 0
 sum += len(str(chunk))
 if sum < 25:

@@ -27,7 +27,7 @@ class RstParser(BaseParser):
 remove_interpreters: bool = True,
 remove_directives: bool = True,
 remove_whitespaces_excess: bool = True,
-# Be carefull with remove_characters_excess, might cause data loss
+# Be careful with remove_characters_excess, might cause data loss
 remove_characters_excess: bool = True,
 **kwargs: Any,
 ) -> None:

@@ -1,7 +1,7 @@
 ## Launching Web App
 Note: Make sure you have docker installed
-1. Open dowload this repository with `git clone https://github.com/arc53/DocsGPT.git`
+1. Open download this repository with `git clone https://github.com/arc53/DocsGPT.git`
 2. Create .env file in your root directory and set your `OPENAI_API_KEY` with your openai api key
 3. Run `docker-compose build && docker-compose up`
 4. Navigate to `http://localhost:5173/`

@@ -2,7 +2,7 @@ App currently has two main api endpoints:
 ### /api/answer
 Its a POST request that sends a JSON in body with 4 values. Here is a JavaScript fetch example
-It will recieve an answer for a user provided question
+It will receive an answer for a user provided question
 ```js
 // answer (POST http://127.0.0.1:5000/api/answer)
@@ -29,7 +29,7 @@ In response you will get a json document like this one:
 ```
 ### /api/docs_check
-It will make sure documentation is loaded on a server (just run it everytime user is switching between libraries (documentations)
+It will make sure documentation is loaded on a server (just run it every time user is switching between libraries (documentations)
 Its a POST request that sends a JSON in body with 1 value. Here is a JavaScript fetch example
 ```js
@@ -104,7 +104,7 @@ fetch("http://localhost:5001/api/task_status?task_id=b2d2a0f4-387c-44fd-a443-e4f
 ```
 Responses:
-There are two types of repsonses:
+There are two types of responses:
 1. while task it still running, where "current" will show progress from 0 - 100
 ```json
 {
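The hunks above mention that `/api/answer` takes a JSON body with 4 values and `/api/docs_check` takes one with 1 value, but the fetch snippets are truncated in this diff view. As a minimal sketch only — the field names below (`question`, `api_key`, `embeddings_key`, `history`, `docs`) are assumptions for illustration, not confirmed by the hunks — the request bodies might be built like this:

```javascript
// Hypothetical request builders for the two endpoints described above.
// Field names are assumptions; check the DocsGPT API docs for the real schema.
function buildAnswerRequest(question, apiKey, embeddingsKey, history) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // 4 values, matching "sends a JSON in body with 4 values"
    body: JSON.stringify({
      question: question,
      api_key: apiKey,
      embeddings_key: embeddingsKey,
      history: history,
    }),
  };
}

function buildDocsCheckRequest(docs) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // 1 value, matching "sends a JSON in body with 1 value"
    body: JSON.stringify({ docs: docs }),
  };
}

// Usage: fetch("http://127.0.0.1:5000/api/answer", buildAnswerRequest(...))
```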

@@ -24,9 +24,9 @@ Options:
 LLM_NAME (openai, manifest, cohere, Arc53/docsgpt-14b, Arc53/docsgpt-7b-falcon)
 EMBEDDINGS_NAME (openai_text-embedding-ada-002, huggingface_sentence-transformers/all-mpnet-base-v2, huggingface_hkunlp/instructor-large, cohere_medium)
-Thats it!
+That's it!
 ### Hosting everything locally and privately (for using our optimised open-source models)
 If you are working with important data and dont want anything to leave your premises.
-Make sure you set SELF_HOSTED_MODEL as true in you .env variable and for your LLM_NAME you can use anything thats on Huggingface
+Make sure you set SELF_HOSTED_MODEL as true in you .env variable and for your LLM_NAME you can use anything that's on Huggingface

@@ -69,10 +69,10 @@ class HTMLParser(BaseParser):
 Chunks.append([])
 Chunks[-1].append(isd_el['text'])
-# Removing all the chunks with sum of lenth of all the strings in the chunk < 25
+# Removing all the chunks with sum of length of all the strings in the chunk < 25
 # TODO: This value can be a user defined variable
 for chunk in Chunks:
-# sum of lenth of all the strings in the chunk
+# sum of length of all the strings in the chunk
 sum = 0
 sum += len(str(chunk))
 if sum < 25:

@@ -27,7 +27,7 @@ class RstParser(BaseParser):
 remove_interpreters: bool = True,
 remove_directives: bool = True,
 remove_whitespaces_excess: bool = True,
-# Be carefull with remove_characters_excess, might cause data loss
+# Be careful with remove_characters_excess, might cause data loss
 remove_characters_excess: bool = True,
 **kwargs: Any,
 ) -> None: