Commit Graph

50 Commits

Author SHA1 Message Date
Quentin Fuxa
7f3a3df620 simulstreaming mlx & torch dedup of common base 2025-02-15 23:52:00 +01:00
Quentin Fuxa
8c799fa4d1 fix simulstreaming vram leak: cap cross-attn accumulation + token budget
fixes #283, fixes #275

- accumulated_cross_attns was growing unboundedly during decoding loop,
  using up to ~5GB for repetition loops. now capped to rolling window of 16
- max_tokens_per_chunk was using TOKENS_PER_SECOND (mel frame rate = 50)
  instead of actual text token rate (~15/s), allowing 10-40x too many
  decoding steps
- removed unused torch.cat on early return path
- removed dead self.committed/last_result_tokens lists (never read)
- same fixes applied to mlx variant
2026-02-11 22:10:00 +01:00
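A minimal sketch of the two fixes this commit describes. `accumulated_cross_attns` and `TOKENS_PER_SECOND` come from the commit message; the constants, function names, and structure below are illustrative assumptions, not the actual SimulStreaming code.

```python
# Illustrative sketch only; the real SimulStreaming code is structured differently.
MAX_CROSS_ATTN_WINDOW = 16   # rolling-window cap from the commit message
TEXT_TOKENS_PER_SECOND = 15  # approximate text-token rate; the mel frame rate is 50

def append_cross_attn(accumulated_cross_attns: list, attn) -> None:
    """Append a cross-attention map, dropping old ones so VRAM stays bounded."""
    accumulated_cross_attns.append(attn)
    # Previously the list grew without bound during repetition loops (~5 GB).
    del accumulated_cross_attns[:-MAX_CROSS_ATTN_WINDOW]

def max_tokens_per_chunk(chunk_seconds: float) -> int:
    """Budget decoding steps by the text-token rate, not the mel frame rate."""
    return int(chunk_seconds * TEXT_TOKENS_PER_SECOND)
```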
Quentin Fuxa
aded1649ae fix model_cache_dir + direct_english_translation task in simulstreaming
pass actual cache dir instead of None, and use proper task string
instead of boolean for AlignAttConfig

fixes #310
2026-02-11 22:10:00 +01:00
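A hedged sketch of what this fix amounts to: `AlignAttConfig` is named in the commit, but the import path and keyword names shown here are assumptions about its constructor.

```python
# Sketch only: the import path and parameter names are assumptions.
from simul_whisper.config import AlignAttConfig  # assumed location

def build_alignatt_config(model_path: str, cache_dir: str, direct_english_translation: bool):
    # The task must be a proper task string; previously a boolean was passed.
    task = "translate" if direct_english_translation else "transcribe"
    return AlignAttConfig(
        model_path=model_path,
        model_cache_dir=cache_dir,  # previously hard-coded to None
        task=task,
    )
```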
Quentin Fuxa
f1b47178d8 adapt online for mlx detection 2024-11-25 23:52:00 +01:00
Quentin Fuxa
82cd24bb75 LoRa path v0 - functional 2025-11-29 17:21:10 +01:00
Quentin Fuxa
1d88ba9d69 Fixes #294. improve model path backend detection and file extraction 2025-11-27 23:14:00 +01:00
Quentin Fuxa
7faa21f95f alignatt: enable model sharing by removing hooks and centralizing session state. Solves #282
Co-authored-by: Emmanuel Schmidbauer <eschmidbauer@gmail.com>
2025-11-25 23:07:42 +01:00
Quentin Fuxa
4e9f951551 correct silence handling when language not auto 2025-11-20 11:20:00 +01:00
Quentin Fuxa
870141298c isort 2025-11-23 11:20:00 +01:00
Quentin Fuxa
e9b4ceeee5 Add handling for partial silence in audio chunks. Bump to 0.2.14.post3 2025-11-17 22:52:00 +01:00
Quentin Fuxa
28985962a0 Silence handling: finish transcription even if not validated at the BEGINNING of the silence 2025-11-16 22:29:08 +01:00
Quentin Fuxa
80b77998f9 Refactor backend handling 2025-11-15 19:51:41 +01:00
Quentin Fuxa
d310f7e25f hf compatibility 2025-11-15 18:34:19 +01:00
Quentin Fuxa
16461052ed task to direct-english-translation 2025-11-10 13:20:26 +01:00
Quentin Fuxa
13401ffe24 whisper core at root of wlk 2025-11-10 12:17:18 +01:00
Quentin Fuxa
7108d2ddc5 fixes https://github.com/QuentinFuxa/WhisperLiveKit/issues/269 2025-11-09 20:08:18 +01:00
Quentin Fuxa
a732e0903e Add a script to detect alignment heads, useful for distilled whisper 2025-11-09 18:12:09 +01:00
Quentin Fuxa
0491681be4 Distilled model compatibility with HF config.json to ModelDimensions 2025-11-08 20:20:05 +01:00
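One plausible shape for the conversion this commit describes, assuming the vendored Whisper core exposes openai-whisper's `ModelDimensions` dataclass and the distilled checkpoint ships a standard transformers-style `config.json`; the field names are the usual ones on both sides, but some distilled releases may differ.

```python
# Sketch, assuming the upstream openai-whisper layout for ModelDimensions.
import json
from whisper.model import ModelDimensions  # import path is an assumption here

def dims_from_hf_config(path: str) -> ModelDimensions:
    """Map a Hugging Face Whisper config.json onto openai-whisper dimensions."""
    with open(path) as f:
        cfg = json.load(f)
    return ModelDimensions(
        n_mels=cfg["num_mel_bins"],
        n_audio_ctx=cfg["max_source_positions"],
        n_audio_state=cfg["d_model"],
        n_audio_head=cfg["encoder_attention_heads"],
        n_audio_layer=cfg["encoder_layers"],
        n_vocab=cfg["vocab_size"],
        n_text_ctx=cfg["max_target_positions"],
        n_text_state=cfg["d_model"],
        n_text_head=cfg["decoder_attention_heads"],
        n_text_layer=cfg["decoder_layers"],
    )
```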
Quentin Fuxa
ffe5284764 _processing_tasks_done checks task completion 2025-11-05 23:34:00 +01:00
Quentin Fuxa
ece02db6a3 Use optional new separate NLLW package for translation 2025-10-30 19:36:28 +01:00
Quentin Fuxa
714fb3b14a custom faster-whisper/mlx whisper encoder available 2025-10-23 20:33:17 +02:00
Quentin Fuxa
a7db39d999 solves incorrect spacing in buffer diarization 2025-10-02 23:04:00 +02:00
Quentin Fuxa
d080d675a8 custom alignment heads parameter for custom models 2025-09-27 11:04:00 +02:00
Quentin Fuxa
8cbaeecc75 custom alignment heads parameter for custom models 2025-09-27 11:04:00 +02:00
google-labs-jules[bot]
70e854b346 feat: Allow loading fine-tuned models in simulstreaming
This change modifies the `simulstreaming` backend to support loading fine-tuned Whisper models via the `--model_dir` argument.

The `SimulStreamingASR` class has been updated to:
- Use the `model_dir` path directly to load the model, which is the correct procedure for fine-tuned `.pt` files.
- Automatically disable the `faster-whisper` and `mlx-whisper` fast encoders when `model_dir` is used, as they are not compatible with standard fine-tuned models.

The call site in `core.py` already passed the `model_dir` argument, so no changes were needed there. This change makes the `simulstreaming` backend more flexible and allows users to leverage their own custom models.
2025-09-27 07:29:30 +00:00
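A rough sketch of the behaviour described in this commit; the function and key names are hypothetical, only the branching logic mirrors the message.

```python
def resolve_simulstreaming_model(model_name: str, model_dir: str | None) -> dict:
    """Hypothetical helper: prefer a fine-tuned checkpoint when --model_dir is given."""
    if model_dir is not None:
        # Fine-tuned .pt files are loaded directly from model_dir, and the
        # faster-whisper / mlx-whisper fast encoders are disabled because they
        # expect the standard pretrained Whisper weights.
        return {"checkpoint": model_dir, "fast_encoder": None}
    # Otherwise fall back to a named pretrained model and allow a fast encoder.
    return {"checkpoint": model_name, "fast_encoder": "auto"}
```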
Quentin Fuxa
b22478c0b4 correct silence handling when language not auto 2025-09-25 23:20:00 +02:00
Quentin Fuxa
4dd5d8bf8a translation compatible with auto and detected language 2025-09-22 11:20:00 +02:00
Quentin Fuxa
93f002cafb language detection after a few seconds working 2025-09-20 11:08:00 +02:00
Quentin Fuxa
674b20d3af in buffer while language not detected 2025-09-21 11:05:00 +02:00
Quentin Fuxa
a5503308c5 O(n) to O(1) for simulstreaming timestamp determination 2025-09-21 11:04:00 +02:00
Quentin Fuxa
426d70a790 simulstreaming infer does not return a dictionary anymore 2025-09-21 11:03:00 +02:00
Quentin Fuxa
334b338ab0 use platform to determine system and recommend mlx whisper 2025-09-07 15:49:11 +02:00
notV3NOM
abd8f2c269 Fix exponentially growing simulstreaming silence timer 2025-09-04 21:49:07 +05:30
Quentin Fuxa
3bd2122eb4 0.2.8 : only the decoder of whisper is loaded in memory when a different encoder is used 2025-09-02 21:12:25 +02:00
Quentin Fuxa
d5008ed828 mlx/fasterWhisper encoders are loaded once and shared in simulstreaming 2025-09-01 12:33:19 +02:00
Quentin Fuxa
1d926f2e67 mlx-whisper used as simulstreaming encoder: improve speed on macOS systems 2025-08-30 22:19:11 +02:00
Quentin Fuxa
4a5d5e1f3b raise Exception when language == auto and task == translation 2025-08-29 17:44:46 +02:00
Quentin Fuxa
9895bc83bf auto detection of language for warmup if not indicated 2025-08-27 20:37:48 +02:00
Quentin Fuxa
b32dd8bfc4 Align backend and frontend time handling 2025-08-21 10:33:15 +02:00
Quentin Fuxa
d0e9e37ef6 simulstreaming: cumulative_time_offset to keep timestamps correct when audio > 30s 2025-08-17 09:33:47 +02:00
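A minimal sketch of the idea in this commit: only `cumulative_time_offset` and the 30 s window come from the message; the class and method names are invented for illustration.

```python
class ChunkClock:
    """Keep token timestamps absolute once audio exceeds one 30 s window."""

    WINDOW_SECONDS = 30.0

    def __init__(self) -> None:
        self.cumulative_time_offset = 0.0

    def to_absolute(self, local_timestamp: float) -> float:
        # Timestamps decoded within the current window are relative to its start.
        return self.cumulative_time_offset + local_timestamp

    def consume(self, seconds: float) -> None:
        # Advance the offset by however much audio the decoder has consumed.
        self.cumulative_time_offset += seconds
```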
Quentin Fuxa
e2184d5e06 better handle silences when VAC + correct offset issue with whisperstreaming backend 2025-08-17 01:27:07 +02:00
Quentin Fuxa
0f2eba507e use with_offset to add no audio offset to tokens 2025-08-17 00:33:24 +02:00
Quentin Fuxa
55e08474f3 recycle backend in simulstreaming thanks to new remove hooks function 2025-08-16 23:06:16 +02:00
Quentin Fuxa
1652db9a2d Use distinct backend models for simulstreaming and add --preloaded_model_count to preload them 2025-08-15 23:03:55 +02:00
Quentin Fuxa
15c3df1cba warmup base whisper when using simulstreaming 2025-08-12 18:52:52 +02:00
Quentin Fuxa
728e1f1290 simulstreaming warmup is done for each instance of online, not for the backend 2025-08-12 18:35:04 +02:00
Quentin Fuxa
d098af3185 each SimulStreamingOnlineProcessor now contains PaddedAlignAttWhisper instance. SimulStreamingASR only contains loaded whisper model 2025-08-11 08:24:14 +02:00
Quentin Fuxa
5491964e81 clean SimulStreamingOnlineProcessor initialization + audio processing 2025-08-09 20:16:27 +02:00
Quentin Fuxa
b05297a96d clean simulwhisper backend and online 2025-08-09 18:02:15 +02:00
Quentin Fuxa
197293e25e refactor(simulstreaming): extract backend + online module into separate files from whisper streaming 2025-08-08 18:07:51 +02:00