35 Commits

Author SHA1 Message Date
github-actions[bot]
ff310f2b13 chore: bump version to 1.8.0 [skip ci] 2025-10-31 17:01:56 +00:00
Michele Dolfi
bf132a3c3e feat: Docling with new standard pipeline with threading (#428)
Signed-off-by: Michele Dolfi <dol@zurich.ibm.com>
2025-10-31 17:57:38 +01:00
Michele Dolfi
35319b0da7 docs: Expand automatic docs to nested objects. More complete usage docs. (#426)
Signed-off-by: Michele Dolfi <dol@zurich.ibm.com>
2025-10-31 15:02:20 +01:00
Michele Dolfi
f3957aeb57 docs: add docs for docling parameters like performance and debug (#424)
Signed-off-by: Michele Dolfi <dol@zurich.ibm.com>
2025-10-31 14:17:31 +01:00
github-actions[bot]
1ec44220f5 chore: bump version to 1.7.2 [skip ci] 2025-10-30 15:14:17 +00:00
Michele Dolfi
e9b41406c4 fix: Update locked dependencies. Docling fixes, Expose temperature parameter for vlm models (#423)
Signed-off-by: Michele Dolfi <dol@zurich.ibm.com>
2025-10-30 16:09:21 +01:00
Michele Dolfi
a2e68d39ae test: check that processing time is not skipped (#416)
Signed-off-by: Michele Dolfi <dol@zurich.ibm.com>
2025-10-27 08:29:05 +01:00
Michele Dolfi
7bf2e7b366 fix: temporary constrain fastapi version (#418)
Signed-off-by: Michele Dolfi <dol@zurich.ibm.com>
2025-10-24 11:22:05 +02:00
github-actions[bot]
462ceff9d1 chore: bump version to 1.7.1 [skip ci] 2025-10-22 14:01:58 +00:00
Michele Dolfi
97613a1974 fix: Upgrade dependencies (#417)
Signed-off-by: Michele Dolfi <dol@zurich.ibm.com>
2025-10-22 15:42:59 +02:00
Paweł Rein
0961f2c574 fix: makes task status shared across multiple instances in RQ mode, resolves #378 (#415)
Signed-off-by: Pawel Rein <pawel.rein@prezi.com>
2025-10-21 15:08:42 +02:00
Tiago Santana
9672f310b1 docs: Generate usage.md automatically (#340)
Signed-off-by: Tiago Santana <54704492+SantanaTiago@users.noreply.github.com>
Signed-off-by: Michele Dolfi <dol@zurich.ibm.com>
Co-authored-by: Michele Dolfi <dol@zurich.ibm.com>
2025-10-21 14:27:01 +02:00
Michele Dolfi
56e8535a7a chore: publish release notes on Discord (#409)
Signed-off-by: Michele Dolfi <dol@zurich.ibm.com>
2025-10-20 14:15:58 +02:00
Michele Dolfi
0f274ab135 fix: DOCLING_SERVE_SYNC_POLL_INTERVAL controls the synchronous polling time (#413)
Signed-off-by: Michele Dolfi <dol@zurich.ibm.com>
2025-10-20 14:14:00 +02:00
Michele Dolfi
0427f71ef4 chore: allow to change the container runtime (#412)
Signed-off-by: Michele Dolfi <dol@zurich.ibm.com>
2025-10-20 14:13:51 +02:00
github-actions[bot]
b6eece7ef0 chore: bump version to 1.7.0 [skip ci] 2025-10-17 12:16:37 +00:00
Michele Dolfi
f5af71e8f6 feat(UI): add auto and orcmac options in demo UI (#408)
Signed-off-by: Michele Dolfi <dol@zurich.ibm.com>
2025-10-17 12:23:57 +02:00
Michele Dolfi
d95ea94087 feat: Docling with auto-ocr (#403)
Signed-off-by: Michele Dolfi <dol@zurich.ibm.com>
2025-10-15 21:15:29 +02:00
sahlex
5344505718 fix: run docling ui behind a reverse proxy using a context path (#396)
Signed-off-by: Sahler.Alexander <Alexander.Sahler@m-net.de>
Signed-off-by: sahlex <1122279+sahlex@users.noreply.github.com>
Co-authored-by: Sahler.Alexander <Alexander.Sahler@m-net.de>
2025-10-09 16:07:02 +02:00
github-actions[bot]
5edc624fbf chore: bump version to 1.6.0 [skip ci] 2025-10-03 13:39:59 +00:00
Michele Dolfi
45f0f3c8f9 fix: update locked dependencies (#392)
Signed-off-by: Michele Dolfi <dol@zurich.ibm.com>
2025-10-03 15:33:45 +02:00
Michele Dolfi
0595d31d5b feat: pin new version of jobkit with granite-docling and connectors (#391)
Signed-off-by: Michele Dolfi <dol@zurich.ibm.com>
2025-10-03 14:24:51 +02:00
Michele Dolfi
f6b5f0e063 docs: fix docs for websocket breaking condition (#390)
Signed-off-by: Michele Dolfi <dol@zurich.ibm.com>
2025-10-02 10:55:00 +02:00
Michele Dolfi
8b22a39141 fix(UI): allow both lowercase and uppercase extensions (#386)
Signed-off-by: Michele Dolfi <dol@zurich.ibm.com>
2025-09-29 09:40:49 +02:00
erikmargaronis
d4eac053f9 fix: Correctly raise HTTPException for Gateway Timeout (#382)
Signed-off-by: Erik Margaronis <erik.margaronis@gmail.com>
2025-09-29 08:06:21 +02:00
Rui Dias Gomes
fa1c5f04f3 ci: improve caching steps (#371)
Signed-off-by: rmdg88 <rmdg88@gmail.com>
2025-09-23 18:15:12 +02:00
Viktor Kuropiatnyk
ba61af2359 fix: Pinning of higher version of dependencies to fix potential security issues (#363)
Signed-off-by: Viktor Kuropiatnyk <vku@zurich.ibm.com>
2025-09-18 08:57:41 +02:00
github-actions[bot]
6b6dd8a0d0 chore: bump version to 1.5.1 [skip ci] 2025-09-17 13:45:40 +00:00
Michele Dolfi
513ae0c119 fix: remove old dependencies, fixes in docling-parse and more minor dependencies upgrade (#362)
Signed-off-by: Michele Dolfi <dol@zurich.ibm.com>
2025-09-17 15:36:23 +02:00
Rui Dias Gomes
bde040661f fix: updates rapidocr deps (#361)
Signed-off-by: rmdg88 <rmdg88@gmail.com>
2025-09-16 14:00:21 +02:00
github-actions[bot]
496f7ec26b chore: bump version to 1.5.0 [skip ci] 2025-09-09 08:46:36 +00:00
Michele Dolfi
9d6def0ec8 feat: add chunking endpoints (#353)
Signed-off-by: Michele Dolfi <dol@zurich.ibm.com>
2025-09-09 08:38:54 +02:00
github-actions[bot]
a4fed2d965 chore: bump version to 1.4.1 [skip ci] 2025-09-08 10:28:12 +00:00
Michele Dolfi
b0360d723b fix: trigger fix after ci fixes (#355)
Signed-off-by: Michele Dolfi <dol@zurich.ibm.com>
2025-09-08 12:23:07 +02:00
Michele Dolfi
4adc0dfa79 ci: fix use simple tag for testing (#354)
Signed-off-by: Michele Dolfi <dol@zurich.ibm.com>
2025-09-08 11:29:55 +02:00
26 changed files with 6813 additions and 5408 deletions


@@ -4,6 +4,7 @@ asgi
async
(?i)urls
uvicorn
Config
[Ww]ebserver
RQ
(?i)url

.github/workflows/discord-release.yml

@@ -0,0 +1,42 @@
# .github/workflows/discord-release.yml
name: Notify Discord on Release
on:
release:
types: [published]
jobs:
discord:
runs-on: ubuntu-latest
steps:
- name: Send release info to Discord
env:
DISCORD_WEBHOOK: ${{ secrets.RELEASES_DISCORD_WEBHOOK }}
run: |
REPO_NAME=${{ github.repository }}
RELEASE_TAG=${{ github.event.release.tag_name }}
RELEASE_NAME="${{ github.event.release.name }}"
RELEASE_URL=${{ github.event.release.html_url }}
# Capture the body safely (handles backticks, $, ", etc.)
RELEASE_BODY=$(cat <<'EOF'
${{ github.event.release.body }}
EOF
)
# Fallback if release name is empty
if [ -z "$RELEASE_NAME" ]; then
RELEASE_NAME=$RELEASE_TAG
fi
PAYLOAD=$(jq -n \
--arg title "🚀 New Release: $RELEASE_NAME" \
--arg url "$RELEASE_URL" \
--arg desc "$RELEASE_BODY" \
--arg author_name "$REPO_NAME" \
--arg author_icon "https://github.com/docling-project.png" \
'{embeds: [{title: $title, url: $url, description: $desc, color: 5814783, author: {name: $author_name, icon_url: $author_icon}}]}')
curl -H "Content-Type: application/json" \
-d "$PAYLOAD" \
"$DISCORD_WEBHOOK"


@@ -108,6 +108,7 @@ jobs:
cache-to: type=gha,mode=max
file: Containerfile
build-args: ${{ inputs.build_args }}
+pull: true
##
## This stage runs after the build, so it leverages all build cache
##
@@ -117,8 +118,8 @@ jobs:
with:
context: .
push: false
-load: true # == '--output=type=docker'
-tags: ${{ steps.ghcr_meta.outputs.tags }}-test
+load: true
+tags: ${{ env.GHCR_REGISTRY }}/${{ inputs.ghcr_image_name }}:${{ github.sha }}-test
labels: |
org.opencontainers.image.title=docling-serve
org.opencontainers.image.test=true
@@ -133,7 +134,7 @@ jobs:
run: |
set -e
-IMAGE_TAG="${{ steps.ghcr_meta.outputs.tags }}-test"
+IMAGE_TAG="${{ env.GHCR_REGISTRY }}/${{ inputs.ghcr_image_name }}:${{ github.sha }}-test"
echo "Testing local image: $IMAGE_TAG"
# Remove existing container if any
@@ -226,202 +227,8 @@ jobs:
cache-to: type=gha,mode=max
file: Containerfile
build-args: ${{ inputs.build_args }}
+pull: true
-- name: Remove Local Docker Images
+- name: Remove local Docker images
run: |
docker image prune -af
##
## Extra tests for released images
##
# outputs:
# image-tags: ${{ steps.ghcr_meta.outputs.tags }}
# image-labels: ${{ steps.ghcr_meta.outputs.labels }}
# test-cpu-image:
# needs:
# - image
# runs-on: ubuntu-latest
# permissions:
# contents: read
# packages: read
# steps:
# - name: Checkout code
# uses: actions/checkout@v5
# - name: Test CPU images
# run: |
# set -e
# echo "Testing image: ${{ needs.image.outputs.image-tags }}"
# for tag in ${{ needs.image.outputs.image-tags }}; do
# if echo "$tag" | grep -q -- '-cpu' && echo "$tag" | grep -qE ':[vV][0-9]+(\.[0-9]+){0,2}$'; then
# echo "Testing CPU image: $tag"
# # Remove existing container if any
# docker rm -f docling-serve-test-container 2>/dev/null || true
# echo "Pulling image..."
# docker pull "$tag"
# echo "Waiting 5s after pull..."
# sleep 5
# echo "Starting container..."
# docker run -d -p 5001:5001 --name docling-serve-test-container "$tag"
# echo "Waiting 15s for container to boot..."
# sleep 15
# echo "Checking service health..."
# for i in {1..20}; do
# health_response=$(curl -s http://localhost:5001/health || true)
# echo "Health check response [$i]: $health_response"
# if echo "$health_response" | grep -q '"status":"ok"'; then
# echo "Service is healthy!"
# echo "Sending test conversion request..."
# status_code=$(curl -s -o /dev/null -w "%{http_code}" -X POST 'http://localhost:5001/v1/convert/source' \
# -H 'accept: application/json' \
# -H 'Content-Type: application/json' \
# -d '{
# "options": {
# "from_formats": ["pdf"],
# "to_formats": ["md"]
# },
# "sources": [
# {
# "kind": "http",
# "url": "https://arxiv.org/pdf/2501.17887"
# }
# ],
# "target": {
# "kind": "inbody"
# }
# }')
# echo "Conversion request returned status code: $status_code"
# if [ "$status_code" -ne 200 ]; then
# echo "Conversion failed!"
# docker logs docling-serve-test-container
# docker rm -f docling-serve-test-container
# exit 1
# fi
# break
# else
# echo "Waiting for service... [$i/20]"
# sleep 3
# fi
# done
# if ! echo "$health_response" | grep -q '"status":"ok"'; then
# echo "Service did not become healthy in time."
# docker logs docling-serve-test-container
# docker rm -f docling-serve-test-container
# exit 1
# fi
# echo "Cleaning up test container..."
# docker rm -f docling-serve-test-container
# else
# echo "Skipping non-released or non-CPU image: $tag"
# fi
# done
# test-cuda-image:
# needs:
# - image
# runs-on: ubuntu-latest # >> placeholder for GPU runner << #
# permissions:
# contents: read
# packages: read
# steps:
# - name: Checkout code
# uses: actions/checkout@v5
# - name: Test CUDA images
# run: |
# set -e
# echo "Testing image: ${{ needs.image.outputs.image-tags }}"
# for tag in ${{ needs.image.outputs.image-tags }}; do
# if echo "$tag" | grep -qE -- '-cu[0-9]+' && echo "$tag" | grep -qE ':[vV][0-9]+(\.[0-9]+){0,2}$'; then
# echo "Testing CUDA image: $tag"
# # Remove existing container if any
# docker rm -f docling-serve-test-container 2>/dev/null || true
# echo "Pulling image..."
# docker pull "$tag"
# echo "Waiting 5s after pull..."
# sleep 5
# echo "Starting container..."
# docker run -d -p 5001:5001 --gpus all --name docling-serve-test-container "$tag"
# echo "Waiting 15s for container to boot..."
# sleep 15
# echo "Checking service health..."
# for i in {1..25}; do
# health_response=$(curl -s http://localhost:5001/health || true)
# echo "Health check response [$i]: $health_response"
# if echo "$health_response" | grep -q '"status":"ok"'; then
# echo "Service is healthy!"
# echo "Sending test conversion request..."
# status_code=$(curl -s -o /dev/null -w "%{http_code}" -X POST 'http://localhost:5001/v1/convert/source' \
# -H 'accept: application/json' \
# -H 'Content-Type: application/json' \
# -d '{
# "options": {
# "from_formats": ["pdf"],
# "to_formats": ["md"]
# },
# "sources": [
# {
# "kind": "http",
# "url": "https://arxiv.org/pdf/2501.17887"
# }
# ],
# "target": {
# "kind": "inbody"
# }
# }')
# echo "Conversion request returned status code: $status_code"
# if [ "$status_code" -ne 200 ]; then
# echo "Conversion failed!"
# docker logs docling-serve-test-container
# docker rm -f docling-serve-test-container
# exit 1
# fi
# break
# else
# echo "Waiting for service... [$i/25]"
# sleep 3
# fi
# done
# if ! echo "$health_response" | grep -q '"status":"ok"'; then
# echo "Service did not become healthy in time."
# docker logs docling-serve-test-container
# docker rm -f docling-serve-test-container
# exit 1
# fi
# echo "Cleaning up test container..."
# docker rm -f docling-serve-test-container
# else
# echo "Skipping non-released or non-CUDA image: $tag"
# fi
# done


@@ -7,12 +7,12 @@ repos:
- id: ruff-format
name: "Ruff formatter"
args: [--config=pyproject.toml]
-files: '^(docling_serve|tests|examples).*\.(py|ipynb)$'
+files: '^(docling_serve|tests|examples|scripts).*\.(py|ipynb)$'
# Run the Ruff linter.
- id: ruff
name: "Ruff linter"
args: [--exit-non-zero-on-fix, --fix, --config=pyproject.toml]
-files: '^(docling_serve|tests|examples).*\.(py|ipynb)$'
+files: '^(docling_serve|tests|examples|scripts).*\.(py|ipynb)$'
- repo: local
hooks:
- id: system
@@ -21,6 +21,15 @@ repos:
pass_filenames: false
language: system
files: '\.py$'
+- repo: local
+hooks:
+- id: update-docs-common-parameters
+name: Update Documentation File
+entry: uv run scripts/update_doc_usage.py
+language: python
+pass_filenames: false
+# Fail the commit if documentation generation fails
+require_serial: true
- repo: https://github.com/errata-ai/vale
rev: v3.12.0 # Use latest stable version
hooks:
@@ -34,6 +43,6 @@ repos:
files: \.md$
- repo: https://github.com/astral-sh/uv-pre-commit
# uv version, https://github.com/astral-sh/uv-pre-commit/releases
-rev: 0.8.3
+rev: 0.8.19
hooks:
- id: uv-lock


@@ -1,3 +1,154 @@
## [v1.8.0](https://github.com/docling-project/docling-serve/releases/tag/v1.8.0) - 2025-10-31
### Feature
* Docling with new standard pipeline with threading ([#428](https://github.com/docling-project/docling-serve/issues/428)) ([`bf132a3`](https://github.com/docling-project/docling-serve/commit/bf132a3c3e615ddbe624841ea5b3a98593c00654))
### Documentation
* Expand automatic docs to nested objects. More complete usage docs. ([#426](https://github.com/docling-project/docling-serve/issues/426)) ([`35319b0`](https://github.com/docling-project/docling-serve/commit/35319b0da793a2a1a434fd2b60b7632e10ecced3))
* Add docs for docling parameters like performance and debug ([#424](https://github.com/docling-project/docling-serve/issues/424)) ([`f3957ae`](https://github.com/docling-project/docling-serve/commit/f3957aeb577097121fe9d0d21f75a50643f03369))
### Docling libraries included in this release:
- docling 2.60.0
- docling-core 2.50.0
- docling-ibm-models 3.10.2
- docling-jobkit 1.8.0
- docling-mcp 1.3.2
- docling-parse 4.7.0
- docling-serve 1.8.0
## [v1.7.2](https://github.com/docling-project/docling-serve/releases/tag/v1.7.2) - 2025-10-30
### Fix
* Update locked dependencies. Docling fixes, Expose temperature parameter for vlm models ([#423](https://github.com/docling-project/docling-serve/issues/423)) ([`e9b4140`](https://github.com/docling-project/docling-serve/commit/e9b41406c4116ff79a212877ff6484a1151e144d))
* Temporary constrain fastapi version ([#418](https://github.com/docling-project/docling-serve/issues/418)) ([`7bf2e7b`](https://github.com/docling-project/docling-serve/commit/7bf2e7b366470e0cf1c4900df7c84becd6a96991))
### Docling libraries included in this release:
- docling 2.59.0
- docling-core 2.50.0
- docling-ibm-models 3.10.2
- docling-jobkit 1.7.1
- docling-mcp 1.3.2
- docling-parse 4.7.0
- docling-serve 1.7.2
## [v1.7.1](https://github.com/docling-project/docling-serve/releases/tag/v1.7.1) - 2025-10-22
### Fix
* Upgrade dependencies ([#417](https://github.com/docling-project/docling-serve/issues/417)) ([`97613a1`](https://github.com/docling-project/docling-serve/commit/97613a19748e8c152db4a0f62b5a57fca807a33a))
* Makes task status shared across multiple instances in RQ mode, resolves #378 ([#415](https://github.com/docling-project/docling-serve/issues/415)) ([`0961f2c`](https://github.com/docling-project/docling-serve/commit/0961f2c57425859c76130da3ea8a871d65df4b26))
* `DOCLING_SERVE_SYNC_POLL_INTERVAL` controls the synchronous polling time ([#413](https://github.com/docling-project/docling-serve/issues/413)) ([`0f274ab`](https://github.com/docling-project/docling-serve/commit/0f274ab135a9bb41accd05db3c12a9dcce220ad9))
### Documentation
* Generate usage.md automatically ([#340](https://github.com/docling-project/docling-serve/issues/340)) ([`9672f31`](https://github.com/docling-project/docling-serve/commit/9672f310b1bb7030af8a276f14691e46f7da0e9e))
### Docling libraries included in this release:
- docling 2.58.0
- docling-core 2.49.0
- docling-ibm-models 3.10.1
- docling-jobkit 1.7.0
- docling-mcp 1.3.2
- docling-parse 4.7.0
- docling-serve 1.7.1
## [v1.7.0](https://github.com/docling-project/docling-serve/releases/tag/v1.7.0) - 2025-10-17
### Feature
* **UI:** Add auto and orcmac options in demo UI ([#408](https://github.com/docling-project/docling-serve/issues/408)) ([`f5af71e`](https://github.com/docling-project/docling-serve/commit/f5af71e8f6de00d7dd702471a3eea2e94d882410))
* Docling with auto-ocr ([#403](https://github.com/docling-project/docling-serve/issues/403)) ([`d95ea94`](https://github.com/docling-project/docling-serve/commit/d95ea940870af0d8df689061baa50f6026efce28))
### Fix
* Run docling ui behind a reverse proxy using a context path ([#396](https://github.com/docling-project/docling-serve/issues/396)) ([`5344505`](https://github.com/docling-project/docling-serve/commit/53445057184aa731ee7456b33b70bc0ecf82f2a6))
### Docling libraries included in this release:
- docling 2.57.0
- docling-core 2.48.4
- docling-ibm-models 3.9.1
- docling-jobkit 1.6.0
- docling-mcp 1.3.2
- docling-parse 4.5.0
- docling-serve 1.7.0
## [v1.6.0](https://github.com/docling-project/docling-serve/releases/tag/v1.6.0) - 2025-10-03
### Feature
* Pin new version of jobkit with granite-docling and connectors ([#391](https://github.com/docling-project/docling-serve/issues/391)) ([`0595d31`](https://github.com/docling-project/docling-serve/commit/0595d31d5b357553426215ca6771796a47e41324))
### Fix
* Update locked dependencies ([#392](https://github.com/docling-project/docling-serve/issues/392)) ([`45f0f3c`](https://github.com/docling-project/docling-serve/commit/45f0f3c8f95d418ac30e3744d27d02a63f9e4490))
* **UI:** Allow both lowercase and uppercase extensions ([#386](https://github.com/docling-project/docling-serve/issues/386)) ([`8b22a39`](https://github.com/docling-project/docling-serve/commit/8b22a391418d22c1a4d706f880341f28702057b5))
* Correctly raise HTTPException for Gateway Timeout ([#382](https://github.com/docling-project/docling-serve/issues/382)) ([`d4eac05`](https://github.com/docling-project/docling-serve/commit/d4eac053f9ce0a60f9070127335bdd56e193d7fa))
* Pinning of higher version of dependencies to fix potential security issues ([#363](https://github.com/docling-project/docling-serve/issues/363)) ([`ba61af2`](https://github.com/docling-project/docling-serve/commit/ba61af23591eff200481aa2e532cf7d0701f0ea4))
### Documentation
* Fix docs for websocket breaking condition ([#390](https://github.com/docling-project/docling-serve/issues/390)) ([`f6b5f0e`](https://github.com/docling-project/docling-serve/commit/f6b5f0e06354d2db7d03d274b114499e3407dccf))
### Docling libraries included in this release:
- docling 2.55.1
- docling-core 2.48.4
- docling-ibm-models 3.9.1
- docling-jobkit 1.6.0
- docling-mcp 1.3.2
- docling-parse 4.5.0
- docling-serve 1.6.0
## [v1.5.1](https://github.com/docling-project/docling-serve/releases/tag/v1.5.1) - 2025-09-17
### Fix
* Remove old dependencies, fixes in docling-parse and more minor dependencies upgrade ([#362](https://github.com/docling-project/docling-serve/issues/362)) ([`513ae0c`](https://github.com/docling-project/docling-serve/commit/513ae0c119b66d3b17cf9a5d371a0f7971f43be7))
* Updates rapidocr deps ([#361](https://github.com/docling-project/docling-serve/issues/361)) ([`bde0406`](https://github.com/docling-project/docling-serve/commit/bde040661fb65c67699326cd6281c0e6232e26f2))
### Docling libraries included in this release:
- docling 2.52.0
- docling-core 2.48.1
- docling-ibm-models 3.9.1
- docling-jobkit 1.5.0
- docling-mcp 1.2.0
- docling-parse 4.5.0
- docling-serve 1.5.1
## [v1.5.0](https://github.com/docling-project/docling-serve/releases/tag/v1.5.0) - 2025-09-09
### Feature
* Add chunking endpoints ([#353](https://github.com/docling-project/docling-serve/issues/353)) ([`9d6def0`](https://github.com/docling-project/docling-serve/commit/9d6def0ec8b1804ad31aa71defa17658d73d29a1))
### Docling libraries included in this release:
- docling 2.51.0
- docling-core 2.47.0
- docling-ibm-models 3.9.1
- docling-jobkit 1.5.0
- docling-mcp 1.2.0
- docling-parse 4.4.0
- docling-serve 1.5.0
## [v1.4.1](https://github.com/docling-project/docling-serve/releases/tag/v1.4.1) - 2025-09-08
### Fix
* Trigger fix after ci fixes ([#355](https://github.com/docling-project/docling-serve/issues/355)) ([`b0360d7`](https://github.com/docling-project/docling-serve/commit/b0360d723bff202dcf44a25a3173ec1995945fc2))
### Docling libraries included in this release:
- docling 2.51.0
- docling-core 2.47.0
- docling-ibm-models 3.9.1
- docling-jobkit 1.4.1
- docling-mcp 1.2.0
- docling-parse 4.4.0
- docling-serve 1.4.1
## [v1.4.0](https://github.com/docling-project/docling-serve/releases/tag/v1.4.0) - 2025-09-05
### Feature


@@ -1,6 +1,6 @@
ARG BASE_IMAGE=quay.io/sclorg/python-312-c9s:c9s
-ARG UV_VERSION=0.8.3
+ARG UV_IMAGE=ghcr.io/astral-sh/uv:0.8.19
ARG UV_SYNC_EXTRA_ARGS=""
@@ -25,7 +25,7 @@ RUN /usr/bin/fix-permissions /opt/app-root/src/.cache
ENV TESSDATA_PREFIX=/usr/share/tesseract/tessdata/
-FROM ghcr.io/astral-sh/uv:${UV_VERSION} AS uv_stage
+FROM ${UV_IMAGE} AS uv_stage
###################################################################################################
# Docling layer #
@@ -58,7 +58,7 @@ RUN --mount=from=uv_stage,source=/uv,target=/bin/uv \
uv sync ${UV_SYNC_ARGS} ${UV_SYNC_EXTRA_ARGS} --no-extra flash-attn && \
FLASH_ATTENTION_SKIP_CUDA_BUILD=TRUE uv sync ${UV_SYNC_ARGS} ${UV_SYNC_EXTRA_ARGS} --no-build-isolation-package=flash-attn
-ARG MODELS_LIST="layout tableformer picture_classifier easyocr"
+ARG MODELS_LIST="layout tableformer picture_classifier rapidocr easyocr"
RUN echo "Downloading models..." && \
HF_HUB_DOWNLOAD_TIMEOUT="90" \


@@ -16,6 +16,9 @@ else
PIPE_DEV_NULL=
endif
+# Container runtime - can be overridden: make CONTAINER_RUNTIME=podman cmd
+CONTAINER_RUNTIME ?= docker
TAG=$(shell git rev-parse HEAD)
BRANCH_TAG=$(shell git rev-parse --abbrev-ref HEAD)
@@ -28,44 +31,44 @@ md-lint-file:
.PHONY: docling-serve-image
docling-serve-image: Containerfile ## Build docling-serve container image
$(ECHO_PREFIX) printf " %-12s Containerfile\n" "[docling-serve]"
-$(CMD_PREFIX) docker build --load -f Containerfile -t ghcr.io/docling-project/docling-serve:$(TAG) .
-$(CMD_PREFIX) docker tag ghcr.io/docling-project/docling-serve:$(TAG) ghcr.io/docling-project/docling-serve:$(BRANCH_TAG)
-$(CMD_PREFIX) docker tag ghcr.io/docling-project/docling-serve:$(TAG) quay.io/docling-project/docling-serve:$(BRANCH_TAG)
+$(CMD_PREFIX) $(CONTAINER_RUNTIME) build --load -f Containerfile -t ghcr.io/docling-project/docling-serve:$(TAG) .
+$(CMD_PREFIX) $(CONTAINER_RUNTIME) tag ghcr.io/docling-project/docling-serve:$(TAG) ghcr.io/docling-project/docling-serve:$(BRANCH_TAG)
+$(CMD_PREFIX) $(CONTAINER_RUNTIME) tag ghcr.io/docling-project/docling-serve:$(TAG) quay.io/docling-project/docling-serve:$(BRANCH_TAG)
.PHONY: docling-serve-cpu-image
docling-serve-cpu-image: Containerfile ## Build docling-serve "cpu only" container image
$(ECHO_PREFIX) printf " %-12s Containerfile\n" "[docling-serve CPU]"
-$(CMD_PREFIX) docker build --load --build-arg "UV_SYNC_EXTRA_ARGS=--no-group pypi --group cpu --no-extra flash-attn" -f Containerfile -t ghcr.io/docling-project/docling-serve-cpu:$(TAG) .
-$(CMD_PREFIX) docker tag ghcr.io/docling-project/docling-serve-cpu:$(TAG) ghcr.io/docling-project/docling-serve-cpu:$(BRANCH_TAG)
-$(CMD_PREFIX) docker tag ghcr.io/docling-project/docling-serve-cpu:$(TAG) quay.io/docling-project/docling-serve-cpu:$(BRANCH_TAG)
+$(CMD_PREFIX) $(CONTAINER_RUNTIME) build --load --build-arg "UV_SYNC_EXTRA_ARGS=--no-group pypi --group cpu --no-extra flash-attn" -f Containerfile -t ghcr.io/docling-project/docling-serve-cpu:$(TAG) .
+$(CMD_PREFIX) $(CONTAINER_RUNTIME) tag ghcr.io/docling-project/docling-serve-cpu:$(TAG) ghcr.io/docling-project/docling-serve-cpu:$(BRANCH_TAG)
+$(CMD_PREFIX) $(CONTAINER_RUNTIME) tag ghcr.io/docling-project/docling-serve-cpu:$(TAG) quay.io/docling-project/docling-serve-cpu:$(BRANCH_TAG)
.PHONY: docling-serve-cu124-image
docling-serve-cu124-image: Containerfile ## Build docling-serve container image with CUDA 12.4 support
$(ECHO_PREFIX) printf " %-12s Containerfile\n" "[docling-serve with Cuda 12.4]"
-$(CMD_PREFIX) docker build --load --build-arg "UV_SYNC_EXTRA_ARGS=--no-group pypi --group cu124" -f Containerfile --platform linux/amd64 -t ghcr.io/docling-project/docling-serve-cu124:$(TAG) .
-$(CMD_PREFIX) docker tag ghcr.io/docling-project/docling-serve-cu124:$(TAG) ghcr.io/docling-project/docling-serve-cu124:$(BRANCH_TAG)
-$(CMD_PREFIX) docker tag ghcr.io/docling-project/docling-serve-cu124:$(TAG) quay.io/docling-project/docling-serve-cu124:$(BRANCH_TAG)
+$(CMD_PREFIX) $(CONTAINER_RUNTIME) build --load --build-arg "UV_SYNC_EXTRA_ARGS=--no-group pypi --group cu124" -f Containerfile --platform linux/amd64 -t ghcr.io/docling-project/docling-serve-cu124:$(TAG) .
+$(CMD_PREFIX) $(CONTAINER_RUNTIME) tag ghcr.io/docling-project/docling-serve-cu124:$(TAG) ghcr.io/docling-project/docling-serve-cu124:$(BRANCH_TAG)
+$(CMD_PREFIX) $(CONTAINER_RUNTIME) tag ghcr.io/docling-project/docling-serve-cu124:$(TAG) quay.io/docling-project/docling-serve-cu124:$(BRANCH_TAG)
.PHONY: docling-serve-cu126-image
docling-serve-cu126-image: Containerfile ## Build docling-serve container image with CUDA 12.6 support
$(ECHO_PREFIX) printf " %-12s Containerfile\n" "[docling-serve with Cuda 12.6]"
-$(CMD_PREFIX) docker build --load --build-arg "UV_SYNC_EXTRA_ARGS=--no-group pypi --group cu126" -f Containerfile --platform linux/amd64 -t ghcr.io/docling-project/docling-serve-cu126:$(TAG) .
-$(CMD_PREFIX) docker tag ghcr.io/docling-project/docling-serve-cu126:$(TAG) ghcr.io/docling-project/docling-serve-cu126:$(BRANCH_TAG)
-$(CMD_PREFIX) docker tag ghcr.io/docling-project/docling-serve-cu126:$(TAG) quay.io/docling-project/docling-serve-cu126:$(BRANCH_TAG)
+$(CMD_PREFIX) $(CONTAINER_RUNTIME) build --load --build-arg "UV_SYNC_EXTRA_ARGS=--no-group pypi --group cu126" -f Containerfile --platform linux/amd64 -t ghcr.io/docling-project/docling-serve-cu126:$(TAG) .
+$(CMD_PREFIX) $(CONTAINER_RUNTIME) tag ghcr.io/docling-project/docling-serve-cu126:$(TAG) ghcr.io/docling-project/docling-serve-cu126:$(BRANCH_TAG)
+$(CMD_PREFIX) $(CONTAINER_RUNTIME) tag ghcr.io/docling-project/docling-serve-cu126:$(TAG) quay.io/docling-project/docling-serve-cu126:$(BRANCH_TAG)
.PHONY: docling-serve-cu128-image
docling-serve-cu128-image: Containerfile ## Build docling-serve container image with CUDA 12.8 support
$(ECHO_PREFIX) printf " %-12s Containerfile\n" "[docling-serve with Cuda 12.8]"
-$(CMD_PREFIX) docker build --load --build-arg "UV_SYNC_EXTRA_ARGS=--no-group pypi --group cu128" -f Containerfile --platform linux/amd64 -t ghcr.io/docling-project/docling-serve-cu128:$(TAG) .
-$(CMD_PREFIX) docker tag ghcr.io/docling-project/docling-serve-cu128:$(TAG) ghcr.io/docling-project/docling-serve-cu128:$(BRANCH_TAG)
-$(CMD_PREFIX) docker tag ghcr.io/docling-project/docling-serve-cu128:$(TAG) quay.io/docling-project/docling-serve-cu128:$(BRANCH_TAG)
+$(CMD_PREFIX) $(CONTAINER_RUNTIME) build --load --build-arg "UV_SYNC_EXTRA_ARGS=--no-group pypi --group cu128" -f Containerfile --platform linux/amd64 -t ghcr.io/docling-project/docling-serve-cu128:$(TAG) .
+$(CMD_PREFIX) $(CONTAINER_RUNTIME) tag ghcr.io/docling-project/docling-serve-cu128:$(TAG) ghcr.io/docling-project/docling-serve-cu128:$(BRANCH_TAG)
+$(CMD_PREFIX) $(CONTAINER_RUNTIME) tag ghcr.io/docling-project/docling-serve-cu128:$(TAG) quay.io/docling-project/docling-serve-cu128:$(BRANCH_TAG)
.PHONY: docling-serve-rocm-image
docling-serve-rocm-image: Containerfile ## Build docling-serve container image with ROCm support
$(ECHO_PREFIX) printf " %-12s Containerfile\n" "[docling-serve with ROCm 6.3]"
-$(CMD_PREFIX) docker build --load --build-arg "UV_SYNC_EXTRA_ARGS=--no-group pypi --group rocm --no-extra flash-attn" -f Containerfile --platform linux/amd64 -t ghcr.io/docling-project/docling-serve-rocm:$(TAG) .
-$(CMD_PREFIX) docker tag ghcr.io/docling-project/docling-serve-rocm:$(TAG) ghcr.io/docling-project/docling-serve-rocm:$(BRANCH_TAG)
-$(CMD_PREFIX) docker tag ghcr.io/docling-project/docling-serve-rocm:$(TAG) quay.io/docling-project/docling-serve-rocm:$(BRANCH_TAG)
+$(CMD_PREFIX) $(CONTAINER_RUNTIME) build --load --build-arg "UV_SYNC_EXTRA_ARGS=--no-group pypi --group rocm --no-extra flash-attn" -f Containerfile --platform linux/amd64 -t ghcr.io/docling-project/docling-serve-rocm:$(TAG) .
+$(CMD_PREFIX) $(CONTAINER_RUNTIME) tag ghcr.io/docling-project/docling-serve-rocm:$(TAG) ghcr.io/docling-project/docling-serve-rocm:$(BRANCH_TAG)
+$(CMD_PREFIX) $(CONTAINER_RUNTIME) tag ghcr.io/docling-project/docling-serve-rocm:$(TAG) quay.io/docling-project/docling-serve-rocm:$(BRANCH_TAG)
.PHONY: action-lint
action-lint: .action-lint ## Lint GitHub Action workflows
@@ -88,7 +91,7 @@ action-lint: .action-lint ## Lint GitHub Action workflows
md-lint: .md-lint ## Lint markdown files
.md-lint: $(wildcard */**/*.md) | md-lint-file
$(ECHO_PREFIX) printf " %-12s ./...\n" "[MD LINT]"
-$(CMD_PREFIX) docker run --rm -v $$(pwd):/workdir davidanson/markdownlint-cli2:v0.16.0 "**/*.md" "#.venv"
+$(CMD_PREFIX) $(CONTAINER_RUNTIME) run --rm -v $$(pwd):/workdir davidanson/markdownlint-cli2:v0.16.0 "**/*.md" "#.venv"
$(CMD_PREFIX) touch $@
.PHONY: py-lint
@@ -104,34 +107,34 @@ py-lint: ## Lint Python files
.PHONY: run-docling-cpu
run-docling-cpu: ## Run the docling-serve container with CPU support and assign a container name
$(ECHO_PREFIX) printf " %-12s Removing existing container if it exists...\n" "[CLEANUP]"
-$(CMD_PREFIX) docker rm -f docling-serve-cpu 2>/dev/null || true
+$(CMD_PREFIX) $(CONTAINER_RUNTIME) rm -f docling-serve-cpu 2>/dev/null || true
$(ECHO_PREFIX) printf " %-12s Running docling-serve container with CPU support on port 5001...\n" "[RUN CPU]"
-$(CMD_PREFIX) docker run -it --name docling-serve-cpu -p 5001:5001 ghcr.io/docling-project/docling-serve-cpu:main
+$(CMD_PREFIX) $(CONTAINER_RUNTIME) run -it --name docling-serve-cpu -p 5001:5001 ghcr.io/docling-project/docling-serve-cpu:main
.PHONY: run-docling-cu124
run-docling-cu124: ## Run the docling-serve container with GPU support and assign a container name
$(ECHO_PREFIX) printf " %-12s Removing existing container if it exists...\n" "[CLEANUP]"
-$(CMD_PREFIX) docker rm -f docling-serve-cu124 2>/dev/null || true
+$(CMD_PREFIX) $(CONTAINER_RUNTIME) rm -f docling-serve-cu124 2>/dev/null || true
$(ECHO_PREFIX) printf " %-12s Running docling-serve container with GPU support on port 5001...\n" "[RUN CUDA 12.4]"
-$(CMD_PREFIX) docker run -it --name docling-serve-cu124 -p 5001:5001 ghcr.io/docling-project/docling-serve-cu124:main
+$(CMD_PREFIX) $(CONTAINER_RUNTIME) run -it --name docling-serve-cu124 -p 5001:5001 ghcr.io/docling-project/docling-serve-cu124:main
.PHONY: run-docling-cu126
run-docling-cu126: ## Run the docling-serve container with GPU support and assign a container name
$(ECHO_PREFIX) printf " %-12s Removing existing container if it exists...\n" "[CLEANUP]"
-$(CMD_PREFIX) docker rm -f docling-serve-cu126 2>/dev/null || true
+$(CMD_PREFIX) $(CONTAINER_RUNTIME) rm -f docling-serve-cu126 2>/dev/null || true
$(ECHO_PREFIX) printf " %-12s Running docling-serve container with GPU support on port 5001...\n" "[RUN CUDA 12.6]"
-$(CMD_PREFIX) docker run -it --name docling-serve-cu126 -p 5001:5001 ghcr.io/docling-project/docling-serve-cu126:main
+$(CMD_PREFIX) $(CONTAINER_RUNTIME) run -it --name docling-serve-cu126 -p 5001:5001 ghcr.io/docling-project/docling-serve-cu126:main
.PHONY: run-docling-cu128
run-docling-cu128: ## Run the docling-serve container with GPU support and assign a container name
$(ECHO_PREFIX) printf " %-12s Removing existing container if it exists...\n" "[CLEANUP]"
$(CMD_PREFIX) docker rm -f docling-serve-cu128 2>/dev/null || true
$(CMD_PREFIX) $(CONTAINER_RUNTIME) rm -f docling-serve-cu128 2>/dev/null || true
$(ECHO_PREFIX) printf " %-12s Running docling-serve container with GPU support on port 5001...\n" "[RUN CUDA 12.8]"
$(CMD_PREFIX) docker run -it --name docling-serve-cu128 -p 5001:5001 ghcr.io/docling-project/docling-serve-cu128:main
$(CMD_PREFIX) $(CONTAINER_RUNTIME) run -it --name docling-serve-cu128 -p 5001:5001 ghcr.io/docling-project/docling-serve-cu128:main
.PHONY: run-docling-rocm
run-docling-rocm: ## Run the docling-serve container with GPU support and assign a container name
$(ECHO_PREFIX) printf " %-12s Removing existing container if it exists...\n" "[CLEANUP]"
$(CMD_PREFIX) docker rm -f docling-serve-rocm 2>/dev/null || true
$(CMD_PREFIX) $(CONTAINER_RUNTIME) rm -f docling-serve-rocm 2>/dev/null || true
$(ECHO_PREFIX) printf " %-12s Running docling-serve container with GPU support on port 5001...\n" "[RUN ROCm 6.3]"
$(CMD_PREFIX) docker run -it --name docling-serve-rocm -p 5001:5001 ghcr.io/docling-project/docling-serve-rocm:main
$(CMD_PREFIX) $(CONTAINER_RUNTIME) run -it --name docling-serve-rocm -p 5001:5001 ghcr.io/docling-project/docling-serve-rocm:main
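With one of the containers above running, the conversion API can be exercised over HTTP. A minimal sketch of the request body for `POST /v1/convert/source`, following the `sources`/`options`/`target` shape of the `ConvertDocumentsRequest` model shown further down; the source URL and the exact `kind` discriminator values are illustrative assumptions, so verify them against the generated OpenAPI docs:

```python
import json

# Body for POST /v1/convert/source: a list of sources, conversion
# options, and an output target. Values below are illustrative.
payload = {
    "sources": [{"kind": "http", "url": "https://arxiv.org/pdf/2501.17887"}],
    "options": {"to_formats": ["md"]},
    "target": {"kind": "inbody"},
}

body = json.dumps(payload)
print(body)
```

The serialized body can then be posted with any HTTP client, e.g. `curl -X POST localhost:5001/v1/convert/source -H 'Content-Type: application/json' -d @body.json`.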


@@ -385,6 +385,11 @@ def rq_worker() -> Any:
allow_external_plugins=docling_serve_settings.allow_external_plugins,
max_num_pages=docling_serve_settings.max_num_pages,
max_file_size=docling_serve_settings.max_file_size,
queue_max_size=docling_serve_settings.queue_max_size,
ocr_batch_size=docling_serve_settings.ocr_batch_size,
layout_batch_size=docling_serve_settings.layout_batch_size,
table_batch_size=docling_serve_settings.table_batch_size,
batch_polling_interval_seconds=docling_serve_settings.batch_polling_interval_seconds,
)
run_worker(


@@ -35,12 +35,17 @@ from docling_jobkit.datamodel.callback import (
ProgressCallbackRequest,
ProgressCallbackResponse,
)
from docling_jobkit.datamodel.chunking import (
BaseChunkerOptions,
ChunkingExportOptions,
HierarchicalChunkerOptions,
HybridChunkerOptions,
)
from docling_jobkit.datamodel.http_inputs import FileSource, HttpSource
from docling_jobkit.datamodel.s3_coords import S3Coordinates
from docling_jobkit.datamodel.task import Task, TaskSource
from docling_jobkit.datamodel.task import Task, TaskSource, TaskType
from docling_jobkit.datamodel.task_targets import (
InBodyTarget,
TaskTarget,
ZipTarget,
)
from docling_jobkit.orchestrators.base_orchestrator import (
@@ -54,11 +59,15 @@ from docling_serve.datamodel.convert import ConvertDocumentsRequestOptions
from docling_serve.datamodel.requests import (
ConvertDocumentsRequest,
FileSourceRequest,
GenericChunkDocumentsRequest,
HttpSourceRequest,
S3SourceRequest,
TargetName,
TargetRequest,
make_request_model,
)
from docling_serve.datamodel.responses import (
ChunkDocumentResponse,
ClearResponse,
ConvertDocumentResponse,
HealthCheckResponse,
@@ -185,16 +194,25 @@ def create_app(): # noqa: C901
import gradio as gr
from docling_serve.gradio_ui import ui as gradio_ui
from docling_serve.settings import uvicorn_settings
tmp_output_dir = get_scratch() / "gradio"
tmp_output_dir.mkdir(exist_ok=True, parents=True)
gradio_ui.gradio_output_dir = tmp_output_dir
# Build the root_path for Gradio, accounting for UVICORN_ROOT_PATH
gradio_root_path = (
f"{uvicorn_settings.root_path}/ui"
if uvicorn_settings.root_path
else "/ui"
)
app = gr.mount_gradio_app(
app,
gradio_ui,
path="/ui",
allowed_paths=["./logo.png", tmp_output_dir],
root_path="/ui",
root_path=gradio_root_path,
)
except ImportError:
_log.warning(
@@ -249,10 +267,11 @@ def create_app(): # noqa: C901
########################
async def _enque_source(
orchestrator: BaseOrchestrator, conversion_request: ConvertDocumentsRequest
orchestrator: BaseOrchestrator,
request: ConvertDocumentsRequest | GenericChunkDocumentsRequest,
) -> Task:
sources: list[TaskSource] = []
for s in conversion_request.sources:
for s in request.sources:
if isinstance(s, FileSourceRequest):
sources.append(FileSource.model_validate(s))
elif isinstance(s, HttpSourceRequest):
@@ -260,18 +279,41 @@ def create_app(): # noqa: C901
elif isinstance(s, S3SourceRequest):
sources.append(S3Coordinates.model_validate(s))
convert_options: ConvertDocumentsRequestOptions
chunking_options: BaseChunkerOptions | None = None
chunking_export_options = ChunkingExportOptions()
task_type: TaskType
if isinstance(request, ConvertDocumentsRequest):
task_type = TaskType.CONVERT
convert_options = request.options
elif isinstance(request, GenericChunkDocumentsRequest):
task_type = TaskType.CHUNK
convert_options = request.convert_options
chunking_options = request.chunking_options
chunking_export_options.include_converted_doc = (
request.include_converted_doc
)
else:
raise RuntimeError("Unknown request type.")
task = await orchestrator.enqueue(
task_type=task_type,
sources=sources,
options=conversion_request.options,
target=conversion_request.target,
convert_options=convert_options,
chunking_options=chunking_options,
chunking_export_options=chunking_export_options,
target=request.target,
)
return task
async def _enque_file(
orchestrator: BaseOrchestrator,
files: list[UploadFile],
options: ConvertDocumentsRequestOptions,
target: TaskTarget,
task_type: TaskType,
convert_options: ConvertDocumentsRequestOptions,
chunking_options: BaseChunkerOptions | None,
chunking_export_options: ChunkingExportOptions | None,
target: TargetRequest,
) -> Task:
_log.info(f"Received {len(files)} files for processing.")
@@ -284,7 +326,12 @@ def create_app(): # noqa: C901
file_sources.append(DocumentStream(name=name, stream=buf))
task = await orchestrator.enqueue(
sources=file_sources, options=options, target=target
task_type=task_type,
sources=file_sources,
convert_options=convert_options,
chunking_options=chunking_options,
chunking_export_options=chunking_export_options,
target=target,
)
return task
@@ -294,7 +341,7 @@ def create_app(): # noqa: C901
task = await orchestrator.task_status(task_id=task_id)
if task.is_completed():
return True
await asyncio.sleep(5)
await asyncio.sleep(docling_serve_settings.sync_poll_interval)
elapsed_time = time.monotonic() - start_time
if elapsed_time > docling_serve_settings.max_sync_wait:
return False
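The synchronous endpoints wait on `_wait_task_complete`, which polls the orchestrator at the configurable `sync_poll_interval` until the task finishes or `max_sync_wait` elapses. The shape of that loop as a self-contained sketch; the settings values and the fake task class are invented stand-ins:

```python
import asyncio
import time

POLL_INTERVAL = 0.01   # stand-in for docling_serve_settings.sync_poll_interval
MAX_SYNC_WAIT = 1.0    # stand-in for docling_serve_settings.max_sync_wait

class FakeTask:
    """Reports completion only after a given number of status polls."""
    def __init__(self, polls_needed: int):
        self.polls = 0
        self.polls_needed = polls_needed

    def is_completed(self) -> bool:
        self.polls += 1
        return self.polls >= self.polls_needed

async def wait_task_complete(task) -> bool:
    start_time = time.monotonic()
    while True:
        if task.is_completed():
            return True
        await asyncio.sleep(POLL_INTERVAL)
        # Give up (the endpoint then raises HTTP 504) once the budget is spent.
        if time.monotonic() - start_time > MAX_SYNC_WAIT:
            return False

print(asyncio.run(wait_task_complete(FakeTask(polls_needed=3))))  # True
```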
@@ -381,7 +428,7 @@ def create_app(): # noqa: C901
response = RedirectResponse(url=logo_url)
return response
@app.get("/health")
@app.get("/health", tags=["health"])
def health() -> HealthCheckResponse:
return HealthCheckResponse()
@@ -393,6 +440,7 @@ def create_app(): # noqa: C901
# Convert a document from URL(s)
@app.post(
"/v1/convert/source",
tags=["convert"],
response_model=ConvertDocumentResponse | PresignedUrlConvertDocumentResponse,
responses={
200: {
@@ -408,7 +456,7 @@ def create_app(): # noqa: C901
conversion_request: ConvertDocumentsRequest,
):
task = await _enque_source(
orchestrator=orchestrator, conversion_request=conversion_request
orchestrator=orchestrator, request=conversion_request
)
completed = await _wait_task_complete(
orchestrator=orchestrator, task_id=task.task_id
@@ -416,7 +464,7 @@ def create_app(): # noqa: C901
if not completed:
# TODO: abort task!
return HTTPException(
raise HTTPException(
status_code=504,
detail=f"Conversion is taking too long. The maximum wait time is configured as DOCLING_SERVE_MAX_SYNC_WAIT={docling_serve_settings.max_sync_wait}.",
)
@@ -438,6 +486,7 @@ def create_app(): # noqa: C901
# Convert a document from file(s)
@app.post(
"/v1/convert/file",
tags=["convert"],
response_model=ConvertDocumentResponse | PresignedUrlConvertDocumentResponse,
responses={
200: {
@@ -457,7 +506,13 @@ def create_app(): # noqa: C901
):
target = InBodyTarget() if target_type == TargetName.INBODY else ZipTarget()
task = await _enque_file(
orchestrator=orchestrator, files=files, options=options, target=target
task_type=TaskType.CONVERT,
orchestrator=orchestrator,
files=files,
convert_options=options,
chunking_options=None,
chunking_export_options=None,
target=target,
)
completed = await _wait_task_complete(
orchestrator=orchestrator, task_id=task.task_id
@@ -465,7 +520,7 @@ def create_app(): # noqa: C901
if not completed:
# TODO: abort task!
return HTTPException(
raise HTTPException(
status_code=504,
detail=f"Conversion is taking too long. The maximum wait time is configured as DOCLING_SERVE_MAX_SYNC_WAIT={docling_serve_settings.max_sync_wait}.",
)
@@ -487,6 +542,7 @@ def create_app(): # noqa: C901
# Convert a document from URL(s) using the async api
@app.post(
"/v1/convert/source/async",
tags=["convert"],
response_model=TaskStatusResponse,
)
async def process_url_async(
@@ -495,13 +551,14 @@ def create_app(): # noqa: C901
conversion_request: ConvertDocumentsRequest,
):
task = await _enque_source(
orchestrator=orchestrator, conversion_request=conversion_request
orchestrator=orchestrator, request=conversion_request
)
task_queue_position = await orchestrator.get_queue_position(
task_id=task.task_id
)
return TaskStatusResponse(
task_id=task.task_id,
task_type=task.task_type,
task_status=task.task_status,
task_position=task_queue_position,
task_meta=task.processing_meta,
@@ -510,6 +567,7 @@ def create_app(): # noqa: C901
# Convert a document from file(s) using the async api
@app.post(
"/v1/convert/file/async",
tags=["convert"],
response_model=TaskStatusResponse,
)
async def process_file_async(
@@ -524,21 +582,249 @@ def create_app(): # noqa: C901
):
target = InBodyTarget() if target_type == TargetName.INBODY else ZipTarget()
task = await _enque_file(
orchestrator=orchestrator, files=files, options=options, target=target
task_type=TaskType.CONVERT,
orchestrator=orchestrator,
files=files,
convert_options=options,
chunking_options=None,
chunking_export_options=None,
target=target,
)
task_queue_position = await orchestrator.get_queue_position(
task_id=task.task_id
)
return TaskStatusResponse(
task_id=task.task_id,
task_type=task.task_type,
task_status=task.task_status,
task_position=task_queue_position,
task_meta=task.processing_meta,
)
# Chunking endpoints
for display_name, path_name, opt_cls in (
("HybridChunker", "hybrid", HybridChunkerOptions),
("HierarchicalChunker", "hierarchical", HierarchicalChunkerOptions),
):
req_cls = make_request_model(opt_cls)
@app.post(
f"/v1/chunk/{path_name}/source/async",
name=f"Chunk sources with {display_name} as async task",
tags=["chunk"],
response_model=TaskStatusResponse,
)
async def chunk_source_async(
background_tasks: BackgroundTasks,
auth: Annotated[AuthenticationResult, Depends(require_auth)],
orchestrator: Annotated[BaseOrchestrator, Depends(get_async_orchestrator)],
request: req_cls,
):
task = await _enque_source(orchestrator=orchestrator, request=request)
task_queue_position = await orchestrator.get_queue_position(
task_id=task.task_id
)
return TaskStatusResponse(
task_id=task.task_id,
task_type=task.task_type,
task_status=task.task_status,
task_position=task_queue_position,
task_meta=task.processing_meta,
)
@app.post(
f"/v1/chunk/{path_name}/file/async",
name=f"Chunk files with {display_name} as async task",
tags=["chunk"],
response_model=TaskStatusResponse,
)
async def chunk_file_async(
background_tasks: BackgroundTasks,
auth: Annotated[AuthenticationResult, Depends(require_auth)],
orchestrator: Annotated[BaseOrchestrator, Depends(get_async_orchestrator)],
files: list[UploadFile],
convert_options: Annotated[
ConvertDocumentsRequestOptions,
FormDepends(
ConvertDocumentsRequestOptions,
prefix="convert_",
excluded_fields=[
"to_formats",
],
),
],
chunking_options: Annotated[
opt_cls,
FormDepends(
opt_cls,
prefix="chunking_",
excluded_fields=["chunker"],
),
],
include_converted_doc: Annotated[
bool,
Form(
description="If true, the output will include both the chunks and the converted document."
),
] = False,
target_type: Annotated[
TargetName,
Form(description="Specification for the type of output target."),
] = TargetName.INBODY,
):
target = InBodyTarget() if target_type == TargetName.INBODY else ZipTarget()
task = await _enque_file(
task_type=TaskType.CHUNK,
orchestrator=orchestrator,
files=files,
convert_options=convert_options,
chunking_options=chunking_options,
chunking_export_options=ChunkingExportOptions(
include_converted_doc=include_converted_doc
),
target=target,
)
task_queue_position = await orchestrator.get_queue_position(
task_id=task.task_id
)
return TaskStatusResponse(
task_id=task.task_id,
task_type=task.task_type,
task_status=task.task_status,
task_position=task_queue_position,
task_meta=task.processing_meta,
)
@app.post(
f"/v1/chunk/{path_name}/source",
name=f"Chunk sources with {display_name}",
tags=["chunk"],
response_model=ChunkDocumentResponse,
responses={
200: {
"content": {"application/zip": {}},
# "description": "Return the JSON item or an image.",
}
},
)
async def chunk_source(
background_tasks: BackgroundTasks,
auth: Annotated[AuthenticationResult, Depends(require_auth)],
orchestrator: Annotated[BaseOrchestrator, Depends(get_async_orchestrator)],
request: req_cls,
):
task = await _enque_source(orchestrator=orchestrator, request=request)
completed = await _wait_task_complete(
orchestrator=orchestrator, task_id=task.task_id
)
if not completed:
# TODO: abort task!
raise HTTPException(
status_code=504,
detail=f"Conversion is taking too long. The maximum wait time is configured as DOCLING_SERVE_MAX_SYNC_WAIT={docling_serve_settings.max_sync_wait}.",
)
task_result = await orchestrator.task_result(task_id=task.task_id)
if task_result is None:
raise HTTPException(
status_code=404,
detail="Task result not found. Please wait for a completion status.",
)
response = await prepare_response(
task_id=task.task_id,
task_result=task_result,
orchestrator=orchestrator,
background_tasks=background_tasks,
)
return response
@app.post(
f"/v1/chunk/{path_name}/file",
name=f"Chunk files with {display_name}",
tags=["chunk"],
response_model=ChunkDocumentResponse,
responses={
200: {
"content": {"application/zip": {}},
}
},
)
async def chunk_file(
background_tasks: BackgroundTasks,
auth: Annotated[AuthenticationResult, Depends(require_auth)],
orchestrator: Annotated[BaseOrchestrator, Depends(get_async_orchestrator)],
files: list[UploadFile],
convert_options: Annotated[
ConvertDocumentsRequestOptions,
FormDepends(
ConvertDocumentsRequestOptions,
prefix="convert_",
excluded_fields=[
"to_formats",
],
),
],
chunking_options: Annotated[
opt_cls,
FormDepends(
opt_cls,
prefix="chunking_",
excluded_fields=["chunker"],
),
],
include_converted_doc: Annotated[
bool,
Form(
description="If true, the output will include both the chunks and the converted document."
),
] = False,
target_type: Annotated[
TargetName,
Form(description="Specification for the type of output target."),
] = TargetName.INBODY,
):
target = InBodyTarget() if target_type == TargetName.INBODY else ZipTarget()
task = await _enque_file(
task_type=TaskType.CHUNK,
orchestrator=orchestrator,
files=files,
convert_options=convert_options,
chunking_options=chunking_options,
chunking_export_options=ChunkingExportOptions(
include_converted_doc=include_converted_doc
),
target=target,
)
completed = await _wait_task_complete(
orchestrator=orchestrator, task_id=task.task_id
)
if not completed:
# TODO: abort task!
raise HTTPException(
status_code=504,
detail=f"Conversion is taking too long. The maximum wait time is configured as DOCLING_SERVE_MAX_SYNC_WAIT={docling_serve_settings.max_sync_wait}.",
)
task_result = await orchestrator.task_result(task_id=task.task_id)
if task_result is None:
raise HTTPException(
status_code=404,
detail="Task result not found. Please wait for a completion status.",
)
response = await prepare_response(
task_id=task.task_id,
task_result=task_result,
orchestrator=orchestrator,
background_tasks=background_tasks,
)
return response
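The chunking endpoints above are produced by a single loop over `(display name, path segment, options class)` tuples, so each chunker gets the same family of routes. The registration pattern, reduced to plain functions and an invented dict registry (no FastAPI, stand-in option classes):

```python
# Stand-in route registry; the real code calls @app.post(...) instead.
routes = {}

def register(path_name, opt_cls):
    # Binding opt_cls as a default argument freezes the current loop value
    # for this handler, avoiding the late-binding closure pitfall.
    def handler(payload, opt_cls=opt_cls):
        options = opt_cls(**payload)
        return {"chunker": opt_cls.__name__, "options": options}
    routes[f"/v1/chunk/{path_name}/source"] = handler

class HybridOpts(dict):        # stand-in for HybridChunkerOptions
    pass

class HierarchicalOpts(dict):  # stand-in for HierarchicalChunkerOptions
    pass

for name, path, cls in (
    ("HybridChunker", "hybrid", HybridOpts),
    ("HierarchicalChunker", "hierarchical", HierarchicalOpts),
):
    register(path, cls)

print(sorted(routes))
```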
# Task status poll
@app.get(
"/v1/status/poll/{task_id}",
tags=["tasks"],
response_model=TaskStatusResponse,
)
async def task_status_poll(
@@ -557,6 +843,7 @@ def create_app(): # noqa: C901
raise HTTPException(status_code=404, detail="Task not found.")
return TaskStatusResponse(
task_id=task.task_id,
task_type=task.task_type,
task_status=task.task_status,
task_position=task_queue_position,
task_meta=task.processing_meta,
@@ -582,7 +869,10 @@ def create_app(): # noqa: C901
assert isinstance(orchestrator.notifier, WebsocketNotifier)
await websocket.accept()
if task_id not in orchestrator.tasks:
try:
# Get task status from Redis or RQ directly instead of checking in-memory registry
task = await orchestrator.task_status(task_id=task_id)
except TaskNotFoundError:
await websocket.send_text(
WebsocketMessage(
message=MessageKind.ERROR, error="Task not found."
@@ -591,8 +881,6 @@ def create_app(): # noqa: C901
await websocket.close()
return
task = orchestrator.tasks[task_id]
# Track active WebSocket connections for this job
orchestrator.notifier.task_subscribers[task_id].add(websocket)
@@ -600,6 +888,7 @@ def create_app(): # noqa: C901
task_queue_position = await orchestrator.get_queue_position(task_id=task_id)
task_response = TaskStatusResponse(
task_id=task.task_id,
task_type=task.task_type,
task_status=task.task_status,
task_position=task_queue_position,
task_meta=task.processing_meta,
@@ -615,6 +904,7 @@ def create_app(): # noqa: C901
)
task_response = TaskStatusResponse(
task_id=task.task_id,
task_type=task.task_type,
task_status=task.task_status,
task_position=task_queue_position,
task_meta=task.processing_meta,
@@ -637,7 +927,10 @@ def create_app(): # noqa: C901
# Task result
@app.get(
"/v1/result/{task_id}",
response_model=ConvertDocumentResponse | PresignedUrlConvertDocumentResponse,
tags=["tasks"],
response_model=ConvertDocumentResponse
| PresignedUrlConvertDocumentResponse
| ChunkDocumentResponse,
responses={
200: {
"content": {"application/zip": {}},
@@ -670,6 +963,8 @@ def create_app(): # noqa: C901
# Update task progress
@app.post(
"/v1/callback/task/progress",
tags=["internal"],
include_in_schema=False,
response_model=ProgressCallbackResponse,
)
async def callback_task_progress(
@@ -692,6 +987,7 @@ def create_app(): # noqa: C901
# Offload models
@app.get(
"/v1/clear/converters",
tags=["clear"],
response_model=ClearResponse,
)
async def clear_converters(
@@ -704,6 +1000,7 @@ def create_app(): # noqa: C901
# Clean results
@app.get(
"/v1/clear/results",
tags=["clear"],
response_model=ClearResponse,
)
async def clear_results(


@@ -1,16 +1,20 @@
import enum
from typing import Annotated, Literal
from functools import cache
from typing import Annotated, Generic, Literal
from pydantic import BaseModel, Field, model_validator
from pydantic_core import PydanticCustomError
from typing_extensions import Self
from typing_extensions import Self, TypeVar
from docling_jobkit.datamodel.chunking import (
BaseChunkerOptions,
)
from docling_jobkit.datamodel.http_inputs import FileSource, HttpSource
from docling_jobkit.datamodel.s3_coords import S3Coordinates
from docling_jobkit.datamodel.task_targets import (
InBodyTarget,
PutTarget,
S3Target,
TaskTarget,
ZipTarget,
)
@@ -43,12 +47,17 @@ SourceRequestItem = Annotated[
FileSourceRequest | HttpSourceRequest | S3SourceRequest, Field(discriminator="kind")
]
TargetRequest = Annotated[
InBodyTarget | ZipTarget | S3Target | PutTarget,
Field(discriminator="kind"),
]
## Complete Source request
class ConvertDocumentsRequest(BaseModel):
options: ConvertDocumentsRequestOptions = ConvertDocumentsRequestOptions()
sources: list[SourceRequestItem]
target: TaskTarget = InBodyTarget()
target: TargetRequest = InBodyTarget()
@model_validator(mode="after")
def validate_s3_source_and_target(self) -> Self:
@@ -70,3 +79,52 @@ class ConvertDocumentsRequest(BaseModel):
"error target", 'target kind "s3" requires source kind "s3"'
)
return self
## Source chunking requests
class BaseChunkDocumentsRequest(BaseModel):
convert_options: Annotated[
ConvertDocumentsRequestOptions, Field(description="Conversion options.")
] = ConvertDocumentsRequestOptions()
sources: Annotated[
list[SourceRequestItem],
Field(description="List of input document sources to process."),
]
include_converted_doc: Annotated[
bool,
Field(
description="If true, the output will include both the chunks and the converted document."
),
] = False
target: Annotated[
TargetRequest, Field(description="Specification for the type of output target.")
] = InBodyTarget()
ChunkingOptT = TypeVar("ChunkingOptT", bound=BaseChunkerOptions)
class GenericChunkDocumentsRequest(BaseChunkDocumentsRequest, Generic[ChunkingOptT]):
chunking_options: ChunkingOptT
@cache
def make_request_model(
opt_type: type[ChunkingOptT],
) -> type[GenericChunkDocumentsRequest[ChunkingOptT]]:
"""
Dynamically create (and cache) a subclass of GenericChunkDocumentsRequest[opt_type]
with chunking_options having a default factory.
"""
return type(
f"{opt_type.__name__}DocumentsRequest",
(GenericChunkDocumentsRequest[opt_type],), # type: ignore[valid-type]
{
"__annotations__": {"chunking_options": opt_type},
"chunking_options": Field(
default_factory=opt_type, description="Options specific to the chunker."
),
},
)
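The `make_request_model` factory above builds one concrete request class per options type with the three-argument `type()` builtin and caches it via `functools.cache`. The same mechanics without pydantic, as a sketch (pydantic additionally turns the `Field(default_factory=...)` attribute into a validated default, which a plain class attribute does not):

```python
from functools import cache

class BaseRequest:
    """Stand-in for GenericChunkDocumentsRequest."""

@cache
def make_request_model(opt_type):
    # type(name, bases, namespace): __annotations__ declares the field type;
    # caching guarantees one class object per options type.
    return type(
        f"{opt_type.__name__}DocumentsRequest",
        (BaseRequest,),
        {"__annotations__": {"chunking_options": opt_type}},
    )

class HybridChunkerOptions:  # stand-in options class
    pass

Model = make_request_model(HybridChunkerOptions)
print(Model.__name__)                                     # HybridChunkerOptionsDocumentsRequest
print(Model is make_request_model(HybridChunkerOptions))  # True (cached)
```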


@@ -5,8 +5,12 @@ from pydantic import BaseModel
from docling.datamodel.document import ConversionStatus, ErrorItem
from docling.utils.profiling import ProfilingItem
from docling_jobkit.datamodel.result import ExportDocumentResponse
from docling_jobkit.datamodel.task_meta import TaskProcessingMeta
from docling_jobkit.datamodel.result import (
ChunkedDocumentResultItem,
ExportDocumentResponse,
ExportResult,
)
from docling_jobkit.datamodel.task_meta import TaskProcessingMeta, TaskType
# Status
@@ -37,8 +41,15 @@ class ConvertDocumentErrorResponse(BaseModel):
status: ConversionStatus
class ChunkDocumentResponse(BaseModel):
chunks: list[ChunkedDocumentResultItem]
documents: list[ExportResult]
processing_time: float
class TaskStatusResponse(BaseModel):
task_id: str
task_type: TaskType
task_status: str
task_position: Optional[int] = None
task_meta: Optional[TaskProcessingMeta] = None


@@ -4,6 +4,7 @@ import itertools
import json
import logging
import ssl
import sys
import tempfile
import time
from pathlib import Path
@@ -224,13 +225,17 @@ def auto_set_return_as_file(
def change_ocr_lang(ocr_engine):
if ocr_engine == "easyocr":
return "en,fr,de,es"
return gr.update(visible=True, value="en,fr,de,es")
elif ocr_engine == "tesseract_cli":
return "eng,fra,deu,spa"
return gr.update(visible=True, value="eng,fra,deu,spa")
elif ocr_engine == "tesseract":
return "eng,fra,deu,spa"
return gr.update(visible=True, value="eng,fra,deu,spa")
elif ocr_engine == "rapidocr":
return "english,chinese"
return gr.update(visible=True, value="english,chinese")
elif ocr_engine == "ocrmac":
return gr.update(visible=True, value="fr-FR,de-DE,es-ES,en-US")
return gr.update(visible=False, value="")
def wait_task_finish(auth: str, task_id: str, return_as_file: bool):
@@ -570,14 +575,17 @@ with gr.Blocks(
with gr.Tab("Convert File"):
with gr.Row():
with gr.Column(scale=4):
raw_exts = list(itertools.chain.from_iterable(FormatToExtensions.values()))  # materialize: the iterator is consumed twice below
file_input = gr.File(
elem_id="file_input_zone",
label="Upload File",
file_types=[
f".{v}"
for v in itertools.chain.from_iterable(
FormatToExtensions.values()
)
f".{v.lower()}"
for v in raw_exts # lowercase
]
+ [
f".{v.upper()}"
for v in raw_exts # uppercase
],
file_count="multiple",
scale=4,
@@ -633,18 +641,25 @@ with gr.Blocks(
ocr = gr.Checkbox(label="Enable OCR", value=True)
force_ocr = gr.Checkbox(label="Force OCR", value=False)
with gr.Column(scale=1):
engines_list = [
("Auto", "auto"),
("EasyOCR", "easyocr"),
("Tesseract", "tesseract"),
("RapidOCR", "rapidocr"),
]
if sys.platform == "darwin":
engines_list.append(("OCRMac", "ocrmac"))
ocr_engine = gr.Radio(
[
("EasyOCR", "easyocr"),
("Tesseract", "tesseract"),
("RapidOCR", "rapidocr"),
],
engines_list,
label="OCR Engine",
value="easyocr",
value="auto",
)
with gr.Column(scale=1, min_width=200):
ocr_lang = gr.Textbox(
label="OCR Language (beware of the format)", value="en,fr,de,es"
label="OCR Language (beware of the format)",
value="en,fr,de,es",
visible=False,
)
ocr_engine.change(change_ocr_lang, inputs=[ocr_engine], outputs=[ocr_lang])
with gr.Row():


@@ -29,10 +29,15 @@ def is_pydantic_model(type_):
# Adapted from
# https://github.com/fastapi/fastapi/discussions/8971#discussioncomment-7892972
def FormDepends(cls: type[BaseModel]):
def FormDepends(
cls: type[BaseModel], prefix: str = "", excluded_fields: list[str] = []
):
new_parameters = []
for field_name, model_field in cls.model_fields.items():
if field_name in excluded_fields:
continue
annotation = model_field.annotation
description = model_field.description
default = (
@@ -63,7 +68,7 @@ def FormDepends(cls: type[BaseModel]):
new_parameters.append(
inspect.Parameter(
name=field_name,
name=f"{prefix}{field_name}",
kind=inspect.Parameter.POSITIONAL_ONLY,
default=default,
annotation=annotation,
@@ -71,19 +76,23 @@ def FormDepends(cls: type[BaseModel]):
)
async def as_form_func(**data):
newdata = {}
for field_name, model_field in cls.model_fields.items():
value = data.get(field_name)
if field_name in excluded_fields:
continue
value = data.get(f"{prefix}{field_name}")
newdata[field_name] = value
annotation = model_field.annotation
# Parse nested models from JSON string
if value is not None and is_pydantic_model(annotation):
try:
validator = TypeAdapter(annotation)
data[field_name] = validator.validate_json(value)
newdata[field_name] = validator.validate_json(value)
except Exception as e:
raise ValueError(f"Invalid JSON for field '{field_name}': {e}")
return cls(**data)
return cls(**newdata)
sig = inspect.signature(as_form_func)
sig = sig.replace(parameters=new_parameters)
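The extended `FormDepends` renames each model field with a prefix for the public form parameter and strips the prefix again before rebuilding the model, by attaching a synthetic `inspect.Signature` to the parsing function. A reduced, stdlib-only sketch of that signature rewriting (the helper name and the field dict are invented):

```python
import inspect

def prefixed_form(fields: dict, prefix: str = "", excluded_fields=()):
    # Build parameters with prefixed public names, as FormDepends does so
    # FastAPI would render e.g. "convert_do_ocr" instead of "do_ocr".
    params = [
        inspect.Parameter(
            name=f"{prefix}{field}",
            kind=inspect.Parameter.POSITIONAL_ONLY,
            default=default,
        )
        for field, default in fields.items()
        if field not in excluded_fields
    ]

    def as_form_func(**data):
        # Strip the prefix again before handing values to the model.
        return {
            f: data.get(f"{prefix}{f}")
            for f in fields
            if f not in excluded_fields
        }

    # Only affects introspection (what FastAPI sees), not the actual call.
    as_form_func.__signature__ = inspect.Signature(parameters=params)
    return as_form_func

f = prefixed_form({"do_ocr": True, "to_formats": ["md"]},
                  prefix="convert_", excluded_fields=["to_formats"])
print(inspect.signature(f))    # (convert_do_ocr=True, /)
print(f(convert_do_ocr=False)) # {'do_ocr': False}
```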


@@ -1,10 +1,267 @@
import json
import logging
from functools import lru_cache
from typing import Any, Optional
from docling_jobkit.orchestrators.base_orchestrator import BaseOrchestrator
import redis.asyncio as redis
from docling_jobkit.datamodel.task import Task
from docling_jobkit.datamodel.task_meta import TaskStatus
from docling_jobkit.orchestrators.base_orchestrator import (
BaseOrchestrator,
TaskNotFoundError,
)
from docling_serve.settings import AsyncEngine, docling_serve_settings
from docling_serve.storage import get_scratch
_log = logging.getLogger(__name__)
class RedisTaskStatusMixin:
tasks: dict[str, Task]
_task_result_keys: dict[str, str]
config: Any
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.redis_prefix = "docling:tasks:"
self._redis_pool = redis.ConnectionPool.from_url(
self.config.redis_url,
max_connections=10,
socket_timeout=2.0,
)
async def task_status(self, task_id: str, wait: float = 0.0) -> Task:
"""
Get task status by checking Redis first, then falling back to RQ verification.
When Redis shows 'pending' but RQ shows 'success', we update Redis
and return the RQ status for cross-instance consistency.
"""
_log.info(f"Task {task_id} status check")
# Always check RQ directly first - this is the most reliable source
rq_task = await self._get_task_from_rq_direct(task_id)
if rq_task:
_log.info(f"Task {task_id} in RQ: {rq_task.task_status}")
# Update memory registry
self.tasks[task_id] = rq_task
# Store/update in Redis for other instances
await self._store_task_in_redis(rq_task)
return rq_task
# If not in RQ, check Redis (maybe it's cached from another instance)
task = await self._get_task_from_redis(task_id)
if task:
_log.info(f"Task {task_id} in Redis: {task.task_status}")
# CRITICAL FIX: Check if Redis status might be stale
# STARTED tasks might have completed since they were cached
if task.task_status in [TaskStatus.PENDING, TaskStatus.STARTED]:
_log.debug(f"Task {task_id} verifying stale status")
# Try to get fresh status from RQ
fresh_rq_task = await self._get_task_from_rq_direct(task_id)
if fresh_rq_task and fresh_rq_task.task_status != task.task_status:
_log.info(
f"Task {task_id} status updated: {fresh_rq_task.task_status}"
)
# Update memory and Redis with fresh status
self.tasks[task_id] = fresh_rq_task
await self._store_task_in_redis(fresh_rq_task)
return fresh_rq_task
else:
_log.debug(f"Task {task_id} status consistent")
return task
# Fall back to parent implementation
try:
parent_task = await super().task_status(task_id, wait) # type: ignore[misc]
_log.debug(f"Task {task_id} from parent: {parent_task.task_status}")
# Store in Redis for other instances to find
await self._store_task_in_redis(parent_task)
return parent_task
except TaskNotFoundError:
_log.warning(f"Task {task_id} not found")
raise
async def _get_task_from_redis(self, task_id: str) -> Optional[Task]:
try:
async with redis.Redis(connection_pool=self._redis_pool) as r:
task_data = await r.get(f"{self.redis_prefix}{task_id}:metadata")
if not task_data:
return None
data: dict[str, Any] = json.loads(task_data)
meta = data.get("processing_meta") or {}
meta.setdefault("num_docs", 0)
meta.setdefault("num_processed", 0)
meta.setdefault("num_succeeded", 0)
meta.setdefault("num_failed", 0)
return Task(
task_id=data["task_id"],
task_type=data["task_type"],
task_status=TaskStatus(data["task_status"]),
processing_meta=meta,
)
except Exception as e:
_log.error(f"Redis get task {task_id}: {e}")
return None
async def _get_task_from_rq_direct(self, task_id: str) -> Optional[Task]:
try:
_log.debug(f"Checking RQ for task {task_id}")
temp_task = Task(
task_id=task_id,
task_type="convert",
task_status=TaskStatus.PENDING,
processing_meta={
"num_docs": 0,
"num_processed": 0,
"num_succeeded": 0,
"num_failed": 0,
},
)
original_task = self.tasks.get(task_id)
self.tasks[task_id] = temp_task
try:
await super()._update_task_from_rq(task_id) # type: ignore[misc]
updated_task = self.tasks.get(task_id)
if updated_task and updated_task.task_status != TaskStatus.PENDING:
_log.debug(f"RQ task {task_id}: {updated_task.task_status}")
# Store result key if available
if task_id in self._task_result_keys:
try:
async with redis.Redis(
connection_pool=self._redis_pool
) as r:
await r.set(
f"{self.redis_prefix}{task_id}:result_key",
self._task_result_keys[task_id],
ex=86400,
)
_log.debug(f"Stored result key for {task_id}")
except Exception as e:
_log.error(f"Store result key {task_id}: {e}")
return updated_task
return None
finally:
# Restore original task state
if original_task:
self.tasks[task_id] = original_task
elif task_id in self.tasks and self.tasks[task_id] == temp_task:
# Only remove if it's still our temp task
del self.tasks[task_id]
except Exception as e:
_log.error(f"RQ check {task_id}: {e}")
return None
async def get_raw_task(self, task_id: str) -> Task:
if task_id in self.tasks:
return self.tasks[task_id]
task = await self._get_task_from_redis(task_id)
if task:
self.tasks[task_id] = task
return task
try:
parent_task = await super().get_raw_task(task_id) # type: ignore[misc]
await self._store_task_in_redis(parent_task)
return parent_task
except TaskNotFoundError:
raise
async def _store_task_in_redis(self, task: Task) -> None:
try:
meta: Any = task.processing_meta
if hasattr(meta, "model_dump"):
meta = meta.model_dump()
elif not isinstance(meta, dict):
meta = {
"num_docs": 0,
"num_processed": 0,
"num_succeeded": 0,
"num_failed": 0,
}
data: dict[str, Any] = {
"task_id": task.task_id,
"task_type": task.task_type.value
if hasattr(task.task_type, "value")
else str(task.task_type),
"task_status": task.task_status.value,
"processing_meta": meta,
}
async with redis.Redis(connection_pool=self._redis_pool) as r:
await r.set(
f"{self.redis_prefix}{task.task_id}:metadata",
json.dumps(data),
ex=86400,
)
except Exception as e:
_log.error(f"Store task {task.task_id}: {e}")
async def enqueue(self, **kwargs): # type: ignore[override]
task = await super().enqueue(**kwargs) # type: ignore[misc]
await self._store_task_in_redis(task)
return task
async def task_result(self, task_id: str): # type: ignore[override]
result = await super().task_result(task_id) # type: ignore[misc]
if result is not None:
return result
try:
async with redis.Redis(connection_pool=self._redis_pool) as r:
result_key = await r.get(f"{self.redis_prefix}{task_id}:result_key")
if result_key:
self._task_result_keys[task_id] = result_key.decode("utf-8")
return await super().task_result(task_id) # type: ignore[misc]
except Exception as e:
_log.error(f"Redis result key {task_id}: {e}")
return None
async def _update_task_from_rq(self, task_id: str) -> None:
original_status = (
self.tasks[task_id].task_status if task_id in self.tasks else None
)
await super()._update_task_from_rq(task_id) # type: ignore[misc]
if task_id in self.tasks:
new_status = self.tasks[task_id].task_status
if original_status != new_status:
_log.debug(f"Task {task_id} status: {original_status} -> {new_status}")
await self._store_task_in_redis(self.tasks[task_id])
if task_id in self._task_result_keys:
try:
async with redis.Redis(connection_pool=self._redis_pool) as r:
await r.set(
f"{self.redis_prefix}{task_id}:result_key",
self._task_result_keys[task_id],
ex=86400,
)
except Exception as e:
_log.error(f"Store result key {task_id}: {e}")
@lru_cache
def get_async_orchestrator() -> BaseOrchestrator:
@@ -31,16 +288,25 @@ def get_async_orchestrator() -> BaseOrchestrator:
allow_external_plugins=docling_serve_settings.allow_external_plugins,
max_num_pages=docling_serve_settings.max_num_pages,
max_file_size=docling_serve_settings.max_file_size,
queue_max_size=docling_serve_settings.queue_max_size,
ocr_batch_size=docling_serve_settings.ocr_batch_size,
layout_batch_size=docling_serve_settings.layout_batch_size,
table_batch_size=docling_serve_settings.table_batch_size,
batch_polling_interval_seconds=docling_serve_settings.batch_polling_interval_seconds,
)
cm = DoclingConverterManager(config=cm_config)
return LocalOrchestrator(config=local_config, converter_manager=cm)
elif docling_serve_settings.eng_kind == AsyncEngine.RQ:
from docling_jobkit.orchestrators.rq.orchestrator import (
RQOrchestrator,
RQOrchestratorConfig,
)
class RedisAwareRQOrchestrator(RedisTaskStatusMixin, RQOrchestrator): # type: ignore[misc]
pass
rq_config = RQOrchestratorConfig(
redis_url=docling_serve_settings.eng_rq_redis_url,
results_prefix=docling_serve_settings.eng_rq_results_prefix,
@@ -48,7 +314,8 @@ def get_async_orchestrator() -> BaseOrchestrator:
scratch_dir=get_scratch(),
)
return RQOrchestrator(config=rq_config)
return RedisAwareRQOrchestrator(config=rq_config)
elif docling_serve_settings.eng_kind == AsyncEngine.KFP:
from docling_jobkit.orchestrators.kfp.orchestrator import (
KfpOrchestrator,


@@ -4,7 +4,8 @@ import logging
from fastapi import BackgroundTasks, Response
from docling_jobkit.datamodel.result import (
ConvertDocumentResult,
ChunkedDocumentResult,
DoclingTaskResult,
ExportResult,
RemoteTargetResult,
ZipArchiveResult,
@@ -14,6 +15,7 @@ from docling_jobkit.orchestrators.base_orchestrator import (
)
from docling_serve.datamodel.responses import (
ChunkDocumentResponse,
ConvertDocumentResponse,
PresignedUrlConvertDocumentResponse,
)
@@ -24,11 +26,16 @@ _log = logging.getLogger(__name__)
async def prepare_response(
task_id: str,
task_result: ConvertDocumentResult,
task_result: DoclingTaskResult,
orchestrator: BaseOrchestrator,
background_tasks: BackgroundTasks,
):
response: Response | ConvertDocumentResponse | PresignedUrlConvertDocumentResponse
response: (
Response
| ConvertDocumentResponse
| PresignedUrlConvertDocumentResponse
| ChunkDocumentResponse
)
if isinstance(task_result.result, ExportResult):
response = ConvertDocumentResponse(
document=task_result.result.content,
@@ -52,6 +59,12 @@ async def prepare_response(
num_succeeded=task_result.num_succeeded,
num_failed=task_result.num_failed,
)
elif isinstance(task_result.result, ChunkedDocumentResult):
response = ChunkDocumentResponse(
chunks=task_result.result.chunks,
documents=task_result.result.documents,
processing_time=task_result.processing_time,
)
else:
raise ValueError("Unknown result type")


@@ -57,6 +57,14 @@ class DoclingServeSettings(BaseSettings):
max_num_pages: int = sys.maxsize
max_file_size: int = sys.maxsize
# Threading pipeline
queue_max_size: Optional[int] = None
ocr_batch_size: Optional[int] = None
layout_batch_size: Optional[int] = None
table_batch_size: Optional[int] = None
batch_polling_interval_seconds: Optional[float] = None
sync_poll_interval: int = 2 # seconds
max_sync_wait: int = 120 # 2 minutes
cors_origins: list[str] = ["*"]


@@ -30,25 +30,47 @@ class WebsocketNotifier(BaseNotifier):
if task_id not in self.task_subscribers:
raise RuntimeError(f"Task {task_id} does not have a subscribers list.")
task = await self.orchestrator.get_raw_task(task_id=task_id)
task_queue_position = await self.orchestrator.get_queue_position(task_id)
msg = TaskStatusResponse(
task_id=task.task_id,
task_status=task.task_status,
task_position=task_queue_position,
task_meta=task.processing_meta,
)
for websocket in self.task_subscribers[task_id]:
await websocket.send_text(
WebsocketMessage(message=MessageKind.UPDATE, task=msg).model_dump_json()
try:
# Get task status from Redis or RQ directly instead of in-memory registry
task = await self.orchestrator.task_status(task_id=task_id)
task_queue_position = await self.orchestrator.get_queue_position(task_id)
msg = TaskStatusResponse(
task_id=task.task_id,
task_type=task.task_type,
task_status=task.task_status,
task_position=task_queue_position,
task_meta=task.processing_meta,
)
if task.is_completed():
await websocket.close()
for websocket in self.task_subscribers[task_id]:
await websocket.send_text(
WebsocketMessage(
message=MessageKind.UPDATE, task=msg
).model_dump_json()
)
if task.is_completed():
await websocket.close()
except Exception as e:
# Log the error but don't crash the notifier
import logging
_log = logging.getLogger(__name__)
_log.error(f"Error notifying subscribers for task {task_id}: {e}")
async def notify_queue_positions(self):
"""Notify all subscribers of pending tasks about queue position updates."""
for task_id in self.task_subscribers.keys():
# notify only pending tasks
if self.orchestrator.tasks[task_id].task_status != TaskStatus.PENDING:
continue
try:
# Check task status directly from Redis or RQ
task = await self.orchestrator.task_status(task_id)
await self.notify_task_subscribers(task_id)
# Notify only pending tasks
if task.task_status == TaskStatus.PENDING:
await self.notify_task_subscribers(task_id)
except Exception as e:
# Log the error but don't crash the notifier
import logging
_log = logging.getLogger(__name__)
_log.error(
f"Error checking task {task_id} status for queue position notification: {e}"
)


@@ -44,18 +44,36 @@ The following table describes the options to configure the Docling Serve app.
| | `DOCLING_SERVE_SINGLE_USE_RESULTS` | `true` | If true, results can be accessed only once. If false, the results accumulate in the scratch directory. |
| | `DOCLING_SERVE_RESULT_REMOVAL_DELAY` | `300` | When `DOCLING_SERVE_SINGLE_USE_RESULTS` is active, this is the delay before results are removed from the task registry. |
| | `DOCLING_SERVE_MAX_DOCUMENT_TIMEOUT` | `604800` (7 days) | The maximum time for processing a document. |
| | `DOCLING_NUM_THREADS` | `4` | Number of concurrent threads for processing a document. |
| | `DOCLING_SERVE_MAX_NUM_PAGES` | | The maximum number of pages for a document to be processed. |
| | `DOCLING_SERVE_MAX_FILE_SIZE` | | The maximum file size for a document to be processed. |
| | `DOCLING_SERVE_SYNC_POLL_INTERVAL` | `2` | Number of seconds to sleep between polling the task status in the sync endpoints. |
| | `DOCLING_SERVE_MAX_SYNC_WAIT` | `120` | Max number of seconds a synchronous endpoint is waiting for the task completion. |
| | `DOCLING_SERVE_LOAD_MODELS_AT_BOOT` | `True` | If enabled, the models for the default options will be loaded at boot. |
| | `DOCLING_SERVE_OPTIONS_CACHE_SIZE` | `2` | How many DocumentConverter objects (including their loaded models) to keep in the cache. |
| | `DOCLING_SERVE_QUEUE_MAX_SIZE` | | Maximum size of the pages queue, i.e. an upper bound on how many pages may be open at the same time. |
| | `DOCLING_SERVE_OCR_BATCH_SIZE` | | Batch size for the OCR stage. |
| | `DOCLING_SERVE_LAYOUT_BATCH_SIZE` | | Batch size for the layout detection stage. |
| | `DOCLING_SERVE_TABLE_BATCH_SIZE` | | Batch size for the table structure stage. |
| | `DOCLING_SERVE_BATCH_POLLING_INTERVAL_SECONDS` | | Wait time for gathering pages into a batch before a stage starts processing. |
| | `DOCLING_SERVE_CORS_ORIGINS` | `["*"]` | A list of origins that should be permitted to make cross-origin requests. |
| | `DOCLING_SERVE_CORS_METHODS` | `["*"]` | A list of HTTP methods that should be allowed for cross-origin requests. |
| | `DOCLING_SERVE_CORS_HEADERS` | `["*"]` | A list of HTTP request headers that should be supported for cross-origin requests. |
| | `DOCLING_SERVE_API_KEY` | | If specified, all the API requests must contain the header `X-Api-Key` with this value. |
| | `DOCLING_SERVE_ENG_KIND` | `local` | The compute engine to use for the async tasks. Possible values are `local`, `rq` and `kfp`. See below for more configurations of the engines. |
### Docling configuration
Some Docling settings, mostly related to performance, are exposed as environment variables which can also be used when running Docling Serve.
| ENV | Default | Description |
| ----|---------|-------------|
| `DOCLING_NUM_THREADS` | `4` | Number of concurrent threads used for the `torch` CPU execution. |
| `DOCLING_DEVICE` | | Device used for the model execution. Valid values are `cpu`, `cuda`, `mps`. When unset, the best available device is chosen. For CUDA-enabled environments, you can select a specific GPU with the syntax `cuda:0`, `cuda:1`, ... |
| `DOCLING_PERF_PAGE_BATCH_SIZE` | `4` | Number of pages processed in the same batch. |
| `DOCLING_PERF_ELEMENTS_BATCH_SIZE` | `8` | Number of document items/elements processed in the same batch during enrichment. |
| `DOCLING_DEBUG_PROFILE_PIPELINE_TIMINGS` | `false` | When enabled, Docling will provide detailed timings information. |
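As a sketch (variable names follow the table above, the chosen values are purely illustrative), these settings are read from the environment at startup, so they must be set before the server process loads its configuration:

```python
import os

# Illustrative values only; the variable names follow the table above.
# Settings libraries such as pydantic-settings read the environment when
# the configuration object is created, so export these before startup.
os.environ["DOCLING_NUM_THREADS"] = "8"
os.environ["DOCLING_DEVICE"] = "cpu"
os.environ["DOCLING_PERF_PAGE_BATCH_SIZE"] = "4"
os.environ["DOCLING_DEBUG_PROFILE_PIPELINE_TIMINGS"] = "true"

print(os.environ["DOCLING_DEVICE"])
```

The same variables can equivalently be exported in the shell or set in the container environment before launching Docling Serve.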
### Compute engine
Docling Serve can be deployed with several possible compute engines.


@@ -4,31 +4,89 @@ The API provides two endpoints: one for URLs, one for files. This is necessary t
## Common parameters
On top of the source of file (see below), both endpoints support the same parameters, which are almost the same as the Docling CLI.
On top of the file source (see below), both endpoints support the same parameters.
- `from_formats` (List[str]): Input format(s) to convert from. Allowed values: `docx`, `pptx`, `html`, `image`, `pdf`, `asciidoc`, `md`. Defaults to all formats.
- `to_formats` (List[str]): Output format(s) to convert to. Allowed values: `md`, `json`, `html`, `text`, `doctags`. Defaults to `md`.
- `pipeline` (str). The choice of which pipeline to use. Allowed values are `standard` and `vlm`. Defaults to `standard`.
- `page_range` (tuple). If specified, only convert a range of pages. The page number starts at 1.
- `do_ocr` (bool): If enabled, the bitmap content will be processed using OCR. Defaults to `True`.
- `image_export_mode`: Image export mode for the document (only in case of JSON, Markdown or HTML). Allowed values: embedded, placeholder, referenced. Optional, defaults to `embedded`.
- `force_ocr` (bool): If enabled, replace any existing text with OCR-generated text over the full content. Defaults to `False`.
- `ocr_engine` (str): OCR engine to use. Allowed values: `easyocr`, `tesserocr`, `tesseract`, `rapidocr`, `ocrmac`. Defaults to `easyocr`. To use the `tesserocr` engine, `tesserocr` must be installed where docling-serve is running: `pip install tesserocr`
- `ocr_lang` (List[str]): List of languages used by the OCR engine. Note that each OCR engine has different values for the language names. Defaults to empty.
- `pdf_backend` (str): PDF backend to use. Allowed values: `pypdfium2`, `dlparse_v1`, `dlparse_v2`, `dlparse_v4`. Defaults to `dlparse_v4`.
- `table_mode` (str): Table mode to use. Allowed values: `fast`, `accurate`. Defaults to `fast`.
- `abort_on_error` (bool): If enabled, abort on error. Defaults to false.
- `md_page_break_placeholder` (str): Add this placeholder between pages in the markdown output.
- `do_table_structure` (bool): If enabled, the table structure will be extracted. Defaults to true.
- `do_code_enrichment` (bool): If enabled, perform OCR code enrichment. Defaults to false.
- `do_formula_enrichment` (bool): If enabled, perform formula OCR, return LaTeX code. Defaults to false.
- `do_picture_classification` (bool): If enabled, classify pictures in documents. Defaults to false.
- `do_picture_description` (bool): If enabled, describe pictures in documents. Defaults to false.
- `picture_description_area_threshold` (float): Minimum percentage of the area for a picture to be processed with the models. Defaults to 0.05.
- `picture_description_local` (dict): Options for running a local vision-language model in the picture description. The parameters refer to a model hosted on Hugging Face. This parameter is mutually exclusive with `picture_description_api`.
- `picture_description_api` (dict): API details for using a vision-language model in the picture description. This parameter is mutually exclusive with `picture_description_local`.
- `include_images` (bool): If enabled, images will be extracted from the document. Defaults to false.
- `images_scale` (float): Scale factor for images. Defaults to 2.0.
<!-- begin: parameters-docs -->
<h4>ConvertDocumentsRequestOptions</h4>
| Field Name | Type | Description |
|------------|------|-------------|
| `from_formats` | List[InputFormat] | Input format(s) to convert from. String or list of strings. Allowed values: `docx`, `pptx`, `html`, `image`, `pdf`, `asciidoc`, `md`, `csv`, `xlsx`, `xml_uspto`, `xml_jats`, `mets_gbs`, `json_docling`, `audio`, `vtt`. Optional, defaults to all formats. |
| `to_formats` | List[OutputFormat] | Output format(s) to convert to. String or list of strings. Allowed values: `md`, `json`, `html`, `html_split_page`, `text`, `doctags`. Optional, defaults to Markdown. |
| `image_export_mode` | ImageRefMode | Image export mode for the document (in case of JSON, Markdown or HTML). Allowed values: `placeholder`, `embedded`, `referenced`. Optional, defaults to Embedded. |
| `do_ocr` | bool | If enabled, the bitmap content will be processed using OCR. Boolean. Optional, defaults to true |
| `force_ocr` | bool | If enabled, replace existing text with OCR-generated text over content. Boolean. Optional, defaults to false. |
| `ocr_engine` | `ocr_engines_enum` | The OCR engine to use. String. Allowed values: `auto`, `easyocr`, `ocrmac`, `rapidocr`, `tesserocr`, `tesseract`. Optional, defaults to `easyocr`. |
| `ocr_lang` | List[str] or NoneType | List of languages used by the OCR engine. Note that each OCR engine has different values for the language names. String or list of strings. Optional, defaults to empty. |
| `pdf_backend` | PdfBackend | The PDF backend to use. String. Allowed values: `pypdfium2`, `dlparse_v1`, `dlparse_v2`, `dlparse_v4`. Optional, defaults to `dlparse_v4`. |
| `table_mode` | TableFormerMode | Mode to use for table structure. String. Allowed values: `fast`, `accurate`. Optional, defaults to accurate. |
| `table_cell_matching` | bool | If true, match table cell predictions back to PDF cells; this can break the table output if PDF cells are merged across table columns. If false, the table structure model defines the text cells and PDF cells are ignored. |
| `pipeline` | ProcessingPipeline | Choose the pipeline to process PDF or image files. |
| `page_range` | Tuple | Only convert a range of pages. The page number starts at 1. |
| `document_timeout` | float | The timeout for processing each document, in seconds. |
| `abort_on_error` | bool | Abort on error if enabled. Boolean. Optional, defaults to false. |
| `do_table_structure` | bool | If enabled, the table structure will be extracted. Boolean. Optional, defaults to true. |
| `include_images` | bool | If enabled, images will be extracted from the document. Boolean. Optional, defaults to true. |
| `images_scale` | float | Scale factor for images. Float. Optional, defaults to 2.0. |
| `md_page_break_placeholder` | str | Add this placeholder between pages in the markdown output. |
| `do_code_enrichment` | bool | If enabled, perform OCR code enrichment. Boolean. Optional, defaults to false. |
| `do_formula_enrichment` | bool | If enabled, perform formula OCR, return LaTeX code. Boolean. Optional, defaults to false. |
| `do_picture_classification` | bool | If enabled, classify pictures in documents. Boolean. Optional, defaults to false. |
| `do_picture_description` | bool | If enabled, describe pictures in documents. Boolean. Optional, defaults to false. |
| `picture_description_area_threshold` | float | Minimum percentage of the area for a picture to be processed with the models. |
| `picture_description_local` | PictureDescriptionLocal or NoneType | Options for running a local vision-language model in the picture description. The parameters refer to a model hosted on Hugging Face. This parameter is mutually exclusive with `picture_description_api`. |
| `picture_description_api` | PictureDescriptionApi or NoneType | API details for using a vision-language model in the picture description. This parameter is mutually exclusive with `picture_description_local`. |
| `vlm_pipeline_model` | VlmModelType or NoneType | Preset of local and API models for the `vlm` pipeline. This parameter is mutually exclusive with `vlm_pipeline_model_local` and `vlm_pipeline_model_api`. Use the other options for more parameters. |
| `vlm_pipeline_model_local` | VlmModelLocal or NoneType | Options for running a local vision-language model for the `vlm` pipeline. The parameters refer to a model hosted on Hugging Face. This parameter is mutually exclusive with `vlm_pipeline_model_api` and `vlm_pipeline_model`. |
| `vlm_pipeline_model_api` | VlmModelApi or NoneType | API details for using a vision-language model for the `vlm` pipeline. This parameter is mutually exclusive with `vlm_pipeline_model_local` and `vlm_pipeline_model`. |
<h4>VlmModelApi</h4>
| Field Name | Type | Description |
|------------|------|-------------|
| `url` | AnyUrl | Endpoint which accepts openai-api compatible requests. |
| `headers` | Dict[str, str] | Headers used for calling the API endpoint. For example, it could include authentication headers. |
| `params` | Dict[str, Any] | Model parameters. |
| `timeout` | float | Timeout for the API request. |
| `concurrency` | int | Maximum number of concurrent requests to the API. |
| `prompt` | str | Prompt used when calling the vision-language model. |
| `scale` | float | Scale factor of the images used. |
| `response_format` | ResponseFormat | Type of response generated by the model. |
| `temperature` | float | Temperature parameter controlling the reproducibility of the result. |
<h4>VlmModelLocal</h4>
| Field Name | Type | Description |
|------------|------|-------------|
| `repo_id` | str | Repository id from the Hugging Face Hub. |
| `prompt` | str | Prompt used when calling the vision-language model. |
| `scale` | float | Scale factor of the images used. |
| `response_format` | ResponseFormat | Type of response generated by the model. |
| `inference_framework` | InferenceFramework | Inference framework to use. |
| `transformers_model_type` | TransformersModelType | Type of transformers auto-model to use. |
| `extra_generation_config` | Dict[str, Any] | Config from https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.GenerationConfig |
| `temperature` | float | Temperature parameter controlling the reproducibility of the result. |
<h4>PictureDescriptionApi</h4>
| Field Name | Type | Description |
|------------|------|-------------|
| `url` | AnyUrl | Endpoint which accepts openai-api compatible requests. |
| `headers` | Dict[str, str] | Headers used for calling the API endpoint. For example, it could include authentication headers. |
| `params` | Dict[str, Any] | Model parameters. |
| `timeout` | float | Timeout for the API request. |
| `concurrency` | int | Maximum number of concurrent requests to the API. |
| `prompt` | str | Prompt used when calling the vision-language model. |
<h4>PictureDescriptionLocal</h4>
| Field Name | Type | Description |
|------------|------|-------------|
| `repo_id` | str | Repository id from the Hugging Face Hub. |
| `prompt` | str | Prompt used when calling the vision-language model. |
| `generation_config` | Dict[str, Any] | Config from https://huggingface.co/docs/transformers/en/main_classes/text_generation#transformers.GenerationConfig |
<!-- end: parameters-docs -->
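For orientation, a minimal request body combining a few of the options above might look as follows. This is a sketch only: the field names follow the tables above, while the `sources`/`options` envelope, the URL, and the chosen values are illustrative; consult the generated API reference for the authoritative schema.

```python
import json

# A sketch of a conversion request body; option names follow the
# parameter tables above, the document URL is just an example.
payload = {
    "sources": [{"kind": "http", "url": "https://arxiv.org/pdf/2311.18481"}],
    "options": {
        "to_formats": ["md", "json"],
        "do_ocr": True,
        "ocr_engine": "easyocr",
        "table_mode": "accurate",
        "image_export_mode": "embedded",
    },
}

print(json.dumps(payload, indent=2))
```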
### Authentication
@@ -433,7 +491,7 @@ with connect(uri) as websocket:
payload = json.loads(message)
if payload["message"] == "error":
break
if payload["message"] == "error" and payload["task"]["task_status"] in ("success", "failure"):
if payload["message"] == "update" and payload["task"]["task_status"] in ("success", "failure"):
break
except:
break


@@ -1,6 +1,6 @@
[project]
name = "docling-serve"
version = "1.4.0" # DO NOT EDIT, updated automatically
version = "1.8.0" # DO NOT EDIT, updated automatically
description = "Running Docling as a service"
license = {text = "MIT"}
authors = [
@@ -35,8 +35,8 @@ requires-python = ">=3.10"
dependencies = [
"docling~=2.38",
"docling-core>=2.45.0",
"docling-jobkit[kfp,rq,vlm]>=1.4.0,<2.0.0",
"fastapi[standard]~=0.115",
"docling-jobkit[kfp,rq,vlm]>=1.8.0,<2.0.0",
"fastapi[standard]<0.119.0", # ~=0.115
"httpx~=0.28",
"pydantic~=2.10",
"pydantic-settings~=2.4",
@@ -50,15 +50,17 @@ dependencies = [
[project.optional-dependencies]
ui = [
"gradio~=5.9",
"pydantic<2.11.0", # fix compatibility between gradio and new pydantic 2.11
"gradio>=5.23.2,<6.0.0",
]
tesserocr = [
"tesserocr~=2.7"
]
easyocr = [
"easyocr>=1.7",
]
rapidocr = [
"rapidocr-onnxruntime~=1.4; python_version<'3.13'",
"onnxruntime~=1.7",
"rapidocr (>=3.3,<4.0.0) ; python_version < '3.14'",
"onnxruntime (>=1.7.0,<2.0.0)",
]
flash-attn = [
"flash-attn~=2.8.2; sys_platform == 'linux' and platform_machine == 'x86_64'"
@@ -87,10 +89,10 @@ cpu = [
"torchvision>=0.22.1",
]
cu124 = [
"torch>=2.6.0",
"torchvision>=0.21.0",
]
# cu124 = [
# "torch>=2.6.0",
# "torchvision>=0.21.0",
# ]
cu126 = [
"torch>=2.7.1",
@@ -115,7 +117,7 @@ conflicts = [
[
{ group = "pypi" },
{ group = "cpu" },
{ group = "cu124" },
# { group = "cu124" },
{ group = "cu126" },
{ group = "cu128" },
{ group = "rocm" },
@@ -123,14 +125,15 @@ conflicts = [
]
environments = ["sys_platform != 'darwin' or platform_machine != 'x86_64'"]
override-dependencies = [
"urllib3~=2.0"
"urllib3~=2.0",
"xgrammar>=0.1.24"
]
[tool.uv.sources]
torch = [
{ index = "pytorch-pypi", group = "pypi" },
{ index = "pytorch-cpu", group = "cpu" },
{ index = "pytorch-cu124", group = "cu124", marker = "sys_platform == 'linux'" },
# { index = "pytorch-cu124", group = "cu124", marker = "sys_platform == 'linux'" },
{ index = "pytorch-cu126", group = "cu126", marker = "sys_platform == 'linux'" },
{ index = "pytorch-cu128", group = "cu128", marker = "sys_platform == 'linux'" },
{ index = "pytorch-rocm", group = "rocm", marker = "sys_platform == 'linux'" },
@@ -139,7 +142,7 @@ torch = [
torchvision = [
{ index = "pytorch-pypi", group = "pypi" },
{ index = "pytorch-cpu", group = "cpu" },
{ index = "pytorch-cu124", group = "cu124", marker = "sys_platform == 'linux'" },
# { index = "pytorch-cu124", group = "cu124", marker = "sys_platform == 'linux'" },
{ index = "pytorch-cu126", group = "cu126", marker = "sys_platform == 'linux'" },
{ index = "pytorch-cu128", group = "cu128", marker = "sys_platform == 'linux'" },
{ index = "pytorch-rocm", group = "rocm", marker = "sys_platform == 'linux'" },
@@ -162,10 +165,10 @@ name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true
# [[tool.uv.index]]
# name = "pytorch-cu124"
# url = "https://download.pytorch.org/whl/cu124"
# explicit = true
[[tool.uv.index]]
name = "pytorch-cu126"
@@ -279,6 +282,7 @@ module = [
"kfp.*",
"kfp_server_api.*",
"mlx_vlm.*",
"mlx.*",
"scalar_fastapi.*",
]
ignore_missing_imports = true

scripts/__init__.py Normal file

scripts/update_doc_usage.py Normal file

@@ -0,0 +1,199 @@
import re
from typing import Annotated, Any, Union, get_args, get_origin
from pydantic import BaseModel
from docling_serve.datamodel.convert import ConvertDocumentsRequestOptions
DOCS_FILE = "docs/usage.md"
VARIABLE_WORDS: list[str] = [
"picture_description_local",
"vlm_pipeline_model",
"vlm",
"vlm_pipeline_model_api",
"ocr_engines_enum",
"easyocr",
"dlparse_v4",
"fast",
"picture_description_api",
"vlm_pipeline_model_local",
]
def format_variable_names(text: str) -> str:
"""Format specific words in description to be code-formatted."""
sorted_words = sorted(VARIABLE_WORDS, key=len, reverse=True)
escaped_words = [re.escape(word) for word in sorted_words]
for word in escaped_words:
pattern = rf"(?<!`)\b{word}\b(?!`)"
text = re.sub(pattern, f"`{word}`", text)
return text
def format_allowed_values_description(description: str) -> str:
"""Format description to code-format allowed values."""
# Regex pattern to find text after "Allowed values:"
match = re.search(r"Allowed values:(.+?)(?:\.|$)", description, re.DOTALL)
if match:
# Extract the allowed values
values_str = match.group(1).strip()
# Split values, handling both comma and 'and' separators
values = re.split(r"\s*(?:,\s*|\s+and\s+)", values_str)
# Remove any remaining punctuation and whitespace
values = [value.strip("., ") for value in values]
# Create code-formatted values
formatted_values = ", ".join(f"`{value}`" for value in values)
# Replace the original allowed values with formatted version
formatted_description = re.sub(
r"(Allowed values:)(.+?)(?:\.|$)",
f"\\1 {formatted_values}.",
description,
flags=re.DOTALL,
)
return formatted_description
return description
def _format_type(type_hint: Any) -> str:
"""Format type ccrrectly, like Annotation or Union."""
if get_origin(type_hint) is Annotated:
base_type = get_args(type_hint)[0]
return _format_type(base_type)
if hasattr(type_hint, "__origin__"):
origin = type_hint.__origin__
args = get_args(type_hint)
if origin is list:
return f"List[{_format_type(args[0])}]"
elif origin is dict:
return f"Dict[{_format_type(args[0])}, {_format_type(args[1])}]"
elif str(origin).__contains__("Union") or str(origin).__contains__("Optional"):
return " or ".join(_format_type(arg) for arg in args)
elif origin is None:
return "null"
if hasattr(type_hint, "__name__"):
return type_hint.__name__
return str(type_hint)
def _unroll_types(tp) -> list[type]:
"""
Unrolls typing.Union and typing.Optional types into a flat list of types.
"""
origin = get_origin(tp)
if origin is Union:
# Recursively unroll each type inside the Union
types = []
for arg in get_args(tp):
types.extend(_unroll_types(arg))
# Remove duplicates while preserving order
return list(dict.fromkeys(types))
else:
# If it's not a Union, just return it as a single-element list
return [tp]
def generate_model_doc(model: type[BaseModel]) -> str:
"""Generate documentation for a Pydantic model."""
models_stack = [model]
doc = ""
while models_stack:
current_model = models_stack.pop()
doc += f"<h4>{current_model.__name__}</h4>\n"
doc += "\n| Field Name | Type | Description |\n"
doc += "|------------|------|-------------|\n"
base_models = []
if hasattr(current_model, "__mro__"):
base_models = current_model.__mro__
else:
base_models = [current_model]
for base_model in base_models:
# Check if this is a Pydantic model
if hasattr(base_model, "model_fields"):
# Iterate through fields of this model
for field_name, field in base_model.model_fields.items():
# Extract description from Annotated field if possible
description = field.description or "No description provided."
description = format_allowed_values_description(description)
description = format_variable_names(description)
# Handle Annotated types
original_type = field.annotation
if get_origin(original_type) is Annotated:
# Extract base type and additional metadata
type_args = get_args(original_type)
base_type = type_args[0]
else:
base_type = original_type
field_type = _format_type(base_type)
field_type = format_variable_names(field_type)
doc += f"| `{field_name}` | {field_type} | {description} |\n"
for field_type in _unroll_types(base_type):
# Guard: _unroll_types may yield typing constructs (e.g. List[str])
# which are not classes and would make issubclass raise TypeError.
if isinstance(field_type, type) and issubclass(field_type, BaseModel):
models_stack.append(field_type)
# stop iterating the base classes
break
doc += "\n"
return doc
def update_documentation():
"""Update the documentation file with model information."""
doc_request = generate_model_doc(ConvertDocumentsRequestOptions)
with open(DOCS_FILE) as f:
content = f.readlines()
# Prepare to update the content
new_content = []
in_cp_section = False
for line in content:
if line.startswith("<!-- begin: parameters-docs -->"):
in_cp_section = True
new_content.append(line)
new_content.append(doc_request)
continue
if in_cp_section and line.strip() == "<!-- end: parameters-docs -->":
in_cp_section = False
if not in_cp_section:
new_content.append(line)
# Only write to the file if new_content is different from content
if "".join(new_content) != "".join(content):
with open(DOCS_FILE, "w") as f:
f.writelines(new_content)
print(f"Documentation updated in {DOCS_FILE}")
else:
print("No changes detected. Documentation file remains unchanged.")
if __name__ == "__main__":
update_documentation()
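The marker-based splice performed by `update_documentation` can be exercised in isolation; the following is a minimal re-implementation sketch of the same begin/end-marker logic (file content here is illustrative, not taken from the real docs):

```python
BEGIN = "<!-- begin: parameters-docs -->"
END = "<!-- end: parameters-docs -->"


def splice(lines: list[str], generated: str) -> list[str]:
    # Keep everything outside the markers; replace everything between
    # them with the freshly generated documentation string.
    out: list[str] = []
    inside = False
    for line in lines:
        if line.startswith(BEGIN):
            inside = True
            out.append(line)
            out.append(generated)
            continue
        if inside and line.strip() == END:
            inside = False
        if not inside:
            out.append(line)
    return out


doc = [BEGIN + "\n", "old table\n", END + "\n"]
print(splice(doc, "new table\n"))
```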


@@ -69,3 +69,9 @@ async def test_convert_url(async_client: httpx.AsyncClient):
with connect(uri) as websocket:
for message in websocket:
print(message)
result_resp = await async_client.get(f"{base_url}/result/{task['task_id']}")
assert result_resp.status_code == 200, "Response should be 200 OK"
result = result_resp.json()
print(f"{result['processing_time']=}")
assert result["processing_time"] > 1.0


@@ -62,3 +62,60 @@ async def test_convert_url(async_client):
time.sleep(2)
assert task["task_status"] == "success"
@pytest.mark.asyncio
@pytest.mark.parametrize("include_converted_doc", [False, True])
async def test_chunk_url(async_client, include_converted_doc: bool):
"""Test chunk URL"""
example_docs = [
"https://arxiv.org/pdf/2311.18481",
]
base_url = "http://localhost:5001/v1"
payload = {
"sources": [{"kind": "http", "url": random.choice(example_docs)}],
"include_converted_doc": include_converted_doc,
}
response = await async_client.post(
f"{base_url}/chunk/hybrid/source/async", json=payload
)
assert response.status_code == 200, "Response should be 200 OK"
task = response.json()
print(json.dumps(task, indent=2))
while task["task_status"] not in ("success", "failure"):
response = await async_client.get(f"{base_url}/status/poll/{task['task_id']}")
assert response.status_code == 200, "Response should be 200 OK"
task = response.json()
print(f"{task['task_status']=}")
print(f"{task['task_position']=}")
time.sleep(2)
assert task["task_status"] == "success"
result_resp = await async_client.get(f"{base_url}/result/{task['task_id']}")
assert result_resp.status_code == 200, "Response should be 200 OK"
result = result_resp.json()
print("Got result.")
assert "chunks" in result
assert len(result["chunks"]) > 0
assert "documents" in result
assert len(result["documents"]) > 0
assert result["documents"][0]["status"] == "success"
if include_converted_doc:
assert result["documents"][0]["content"]["json_content"] is not None
assert (
result["documents"][0]["content"]["json_content"]["schema_name"]
== "DoclingDocument"
)
else:
assert result["documents"][0]["content"]["json_content"] is None


@@ -54,6 +54,14 @@ async def test_health(client: AsyncClient):
assert response.json() == {"status": "ok"}
@pytest.mark.asyncio
async def test_openapijson(client: AsyncClient):
response = await client.get("/openapi.json")
assert response.status_code == 200
schema = response.json()
assert "openapi" in schema
@pytest.mark.asyncio
async def test_convert_file(client: AsyncClient, auth_headers: dict):
"""Test convert single file to all outputs"""

uv.lock generated

File diff suppressed because one or more lines are too long