Alvaro Ollero
3736458503
Uvicorn exposes a configuration option to enable reverse proxying from a trusted IP. This PR exposes it downstream to end clients.
2025-10-04 22:21:06 +02:00
Quentin Fuxa
a7db39d999
solves incorrect spacing in buffer diarization
2025-10-02 23:04:00 +02:00
Quentin Fuxa
a153e11fe0
update when self.diarization_before_transcription
2025-09-28 11:04:00 +02:00
Quentin Fuxa
ca6f9246cc
force language = en for .en models
2025-09-28 11:04:00 +02:00
Quentin Fuxa
d080d675a8
custom alignment heads parameter for custom models
2025-09-27 11:04:00 +02:00
Quentin Fuxa
40bff38933
Merge pull request #239 from msghik/feature/fine-tuned-model-support
...
feat: Allow loading fine-tuned models in simulstreaming
2025-09-29 10:08:26 +02:00
Quentin Fuxa
2fe3ca0188
connect source to output destination when used as chrome extension to keep audio playing
2025-09-27 13:59:44 +02:00
Quentin Fuxa
545ea15c9a
ensure buffer size to be a multiple of the element size
2025-09-27 13:58:32 +02:00
Quentin Fuxa
8cbaeecc75
custom alignment heads parameter for custom models
2025-09-27 11:04:00 +02:00
google-labs-jules[bot]
70e854b346
feat: Allow loading fine-tuned models in simulstreaming
...
This change modifies the `simulstreaming` backend to support loading fine-tuned Whisper models via the `--model_dir` argument.
The `SimulStreamingASR` class has been updated to:
- Use the `model_dir` path directly to load the model, which is the correct procedure for fine-tuned `.pt` files.
- Automatically disable the `faster-whisper` and `mlx-whisper` fast encoders when `model_dir` is used, as they are not compatible with standard fine-tuned models.
The call site in `core.py` already passed the `model_dir` argument, so no changes were needed there. This change makes the `simulstreaming` backend more flexible and allows users to leverage their own custom models.
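The behavior this commit body describes can be sketched as follows; the class shape and attribute names are illustrative assumptions, not the actual `SimulStreamingASR` implementation:

```python
# Hypothetical sketch of the described change: when model_dir points at a
# fine-tuned .pt checkpoint, load it directly and disable the fast encoders
# (faster-whisper / mlx-whisper), which assume stock Whisper weights.
class SimulStreamingASR:
    def __init__(self, model_name="large-v3", model_dir=None,
                 fast_encoder="faster-whisper"):
        if model_dir is not None:
            # Fine-tuned model: use the checkpoint path as-is.
            self.model_path = model_dir
            # Fast encoders are incompatible with fine-tuned models.
            self.fast_encoder = None
        else:
            # Stock model: resolve by name, keep the requested fast encoder.
            self.model_path = model_name
            self.fast_encoder = fast_encoder
```

Disabling the fast encoder automatically (rather than erroring) keeps `--model_dir` usable without extra flags, at the cost of a silently slower encode path.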
2025-09-27 07:29:30 +00:00
Quentin Fuxa
d55490cd27
typo and simpler conditions
2025-09-26 20:38:26 +02:00
Quentin Fuxa
b22478c0b4
correct silences handling when language not auto
2025-09-25 23:20:00 +02:00
Quentin Fuxa
94c34efd90
chrome extension ws default to localhost
2025-09-25 23:04:00 +02:00
Quentin Fuxa
9fc6654a4a
common frontend for web/ and chrome extension
2025-09-25 23:14:25 +02:00
Quentin Fuxa
4dd5d8bf8a
translation compatible with auto and detected language
2025-09-22 11:20:00 +02:00
Quentin Fuxa
6caf3e0485
correct silence handling in translation
2025-09-27 11:58:00 +02:00
Quentin Fuxa
93f002cafb
language detection after a few seconds working
2025-09-20 11:08:00 +02:00
Quentin Fuxa
c5e30c2c07
svg loaded once in javascript, no more need for StaticFiles
2025-09-20 11:06:00 +02:00
Quentin Fuxa
1c2afb8bd2
svg loaded once in javascript, no more need for StaticFiles
2025-09-20 11:06:00 +02:00
Quentin Fuxa
674b20d3af
in buffer while language not detected
2025-09-21 11:05:00 +02:00
Quentin Fuxa
a5503308c5
O(n) to O(1) for simulstreaming timestamp determination
2025-09-21 11:04:00 +02:00
Quentin Fuxa
e61afdefa3
punctuation is now checked in timed_object
2025-09-22 22:40:39 +02:00
Quentin Fuxa
426d70a790
simulstreaming infer does not return a dictionary anymore
2025-09-21 11:03:00 +02:00
Quentin Fuxa
b03a212fbf
fixes #227, auto language detection v0.1 - simulstreaming only - when diarization and auto
2025-09-19 19:15:28 +02:00
Quentin Fuxa
1833e7c921
0.2.10
2025-09-16 23:45:00 +02:00
Quentin Fuxa
0a6e5ae9c1
ffmpeg install instruction error indicates --pcm-input alternative
2025-09-17 16:04:17 +02:00
Quentin Fuxa
ee448a37e9
when pcm-input is set, the frontend uses AudioWorklet
2025-09-17 14:55:57 +02:00
Quentin Fuxa
9c051052b0
Merge branch 'main' into ScriptProcessorNode-to-AudioWorklet
2025-09-17 11:28:36 +02:00
Quentin Fuxa
4d7c487614
replace deprecated ScriptProcessorNode with AudioWorklet
2025-09-17 10:53:53 +02:00
Quentin Fuxa
65025cc448
nllb backend can be transformers, and model size can be 1.3B
2025-09-17 10:20:31 +02:00
Quentin Fuxa
bbba1d9bb7
add nllb-backend and translation perf test in dev_notes
2025-09-16 20:45:01 +02:00
Quentin Fuxa
99dc96c644
fixes #224
2025-09-16 18:34:35 +02:00
GeorgeCaoJ
2a27d2030a
feat: support web audio 16kHz PCM input and remove ffmpeg dependency
2025-09-15 23:22:25 +08:00
Quentin Fuxa
cd160caaa1
asyncio.to_thread for transcription and translation
2025-09-15 15:23:22 +02:00
Quentin Fuxa
5aa312e437
simulstreaming warmup is done in whisperlivekit.simul_whisper.backend.load_model, not in warmup_online
2025-09-13 20:19:19 +01:00
notV3NOM
ebaf36a8be
Fix warmup file behavior
2025-09-13 20:44:24 +05:30
Quentin Fuxa
a4e9f3cab7
support for raw PCM input option by @YeonjunNotFR
2025-09-11 21:32:11 +02:00
Quentin Fuxa
b06866877a
add --disable-punctuation-split option
2025-09-11 21:03:00 +02:00
Quentin Fuxa
967cdfebc8
fix Translation imports
2025-09-11 21:03:00 +02:00
Quentin Fuxa
3c11c60126
fix by @treeaaa
2025-09-11 21:03:00 +02:00
Quentin Fuxa
2963e8a757
translate when at least 3 new tokens
2025-09-09 21:45:00 +02:00
Quentin Fuxa
cb2d4ea88a
audio processor lines use now Lines objects instead of dict
2025-09-09 21:45:00 +02:00
Quentin Fuxa
add7ea07ee
translator takes all the tokens from the queue
2025-09-09 19:55:39 +02:00
Quentin Fuxa
3358877054
Fix StorageView conversion for CPU/GPU compatibility
2025-09-09 15:44:16 +02:00
Quentin Fuxa
1f7798c7c1
condition on encoder_feature_ctranslate type
2025-09-09 12:16:52 +02:00
Alexander Lindberg
c7b3bb5e58
Fix regression with faster-whisper encoder_feature
2025-09-09 11:18:55 +03:00
Quentin Fuxa
f661f21675
translation asyncio task
2025-09-08 18:34:31 +02:00
Quentin Fuxa
b6164aa59b
translation device determined with torch.device
2025-09-08 11:34:40 +02:00
Quentin Fuxa
4209d7f7c0
Place all tensors on the same device in sortformer diarization
2025-09-08 10:20:57 +02:00
Quentin Fuxa
334b338ab0
use platform to determine system and recommend mlx whisper
2025-09-07 15:49:11 +02:00