Merge branch 'main' into online-from-factory
README.md
@@ -3,14 +3,12 @@ Whisper realtime streaming for long speech-to-text transcription and translation

**Turning Whisper into Real-Time Transcription System**

Demonstration paper, by Dominik Macháček, Raj Dabre, Ondřej Bojar, 2023
Demonstration paper, by [Dominik Macháček](https://ufal.mff.cuni.cz/dominik-machacek), [Raj Dabre](https://prajdabre.github.io/), [Ondřej Bojar](https://ufal.mff.cuni.cz/ondrej-bojar), 2023

Abstract: Whisper is one of the recent state-of-the-art multilingual speech recognition and translation models, however, it is not designed for real time transcription. In this paper, we build on top of Whisper and create Whisper-Streaming, an implementation of real-time speech transcription and translation of Whisper-like models. Whisper-Streaming uses local agreement policy with self-adaptive latency to enable streaming transcription. We show that Whisper-Streaming achieves high quality and 3.3 seconds latency on unsegmented long-form speech transcription test set, and we demonstrate its robustness and practical usability as a component in live transcription service at a multilingual conference.
Abstract: Whisper is one of the recent state-of-the-art multilingual speech recognition and translation models, however, it is not designed for real-time transcription. In this paper, we build on top of Whisper and create Whisper-Streaming, an implementation of real-time speech transcription and translation of Whisper-like models. Whisper-Streaming uses local agreement policy with self-adaptive latency to enable streaming transcription. We show that Whisper-Streaming achieves high quality and 3.3 seconds latency on unsegmented long-form speech transcription test set, and we demonstrate its robustness and practical usability as a component in live transcription service at a multilingual conference.

Paper in proceedings: http://www.afnlp.org/conferences/ijcnlp2023/proceedings/main-demo/cdrom/pdf/2023.ijcnlp-demo.3.pdf
Demo video: https://player.vimeo.com/video/840442741
[Paper PDF](https://aclanthology.org/2023.ijcnlp-demo.3.pdf), [Demo video](https://player.vimeo.com/video/840442741)

[Slides](http://ufallab.ms.mff.cuni.cz/~machacek/pre-prints/AACL23-2.11.2023-Turning-Whisper-oral.pdf) -- 15 minutes oral presentation at IJCNLP-AACL 2023
@@ -157,7 +155,7 @@ The code whisper_online.py is nicely commented, read it as the full documentation.

This pseudocode describes the interface that we suggest for your implementation. You can implement any features that you need for your application.

```
```python
from whisper_online import *

src_lan = "en"  # source language
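The hunk cuts the README example off after its first lines. As a rough sketch of how the rest of that loop can look when made runnable (the file name, the one-second chunking, and the choice of the FasterWhisperASR backend are illustrative assumptions, not part of the diff):

```python
# A minimal, runnable sketch of the suggested interface (not verbatim from the README).
# Assumes the faster-whisper backend and a 16 kHz mono file "audio.wav".
from whisper_online import FasterWhisperASR, OnlineASRProcessor, load_audio

src_lan = "en"                                # source language
asr = FasterWhisperASR(src_lan, "large-v2")   # loads and wraps the Whisper model
online = OnlineASRProcessor(asr)              # streaming wrapper with default buffer trimming

audio = load_audio("audio.wav")               # whole file as 16 kHz float32 samples
step = 16000                                  # feed roughly one second per iteration
for i in range(0, len(audio), step):
    online.insert_audio_chunk(audio[i:i + step])
    o = online.process_iter()                 # newly confirmed output, e.g. (beg, end, "text")
    print(o)

print(online.finish())                        # flush whatever is still unconfirmed
online.init()                                 # reset before re-using the object on new audio
```

In a real-time setting the loop would instead block until at least min_chunk_size seconds of new audio have arrived before calling process_iter again.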
@@ -185,7 +183,7 @@ online.init() # refresh if you're going to re-use the object for the next audio

### Server -- real-time from mic

`whisper_online_server.py` has the same model options as `whisper_online.py`, plus `--host` and `--port` of the TCP connection. See help message (`-h` option).
`whisper_online_server.py` has the same model options as `whisper_online.py`, plus `--host` and `--port` of the TCP connection and the `--warmup-file`. See the help message (`-h` option).

Client example:
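The client example itself lies outside the hunk, so it is not shown here. One possible client, sketched for illustration rather than copied from the README, streams raw 16 kHz mono, 16-bit little-endian PCM from a file to the server over TCP and prints whatever transcript lines come back (host, port, chunk size and pacing are assumptions matching the defaults shown further down; the sketch also assumes the server closes the connection once the audio ends):

```python
# Hypothetical client sketch: send s16le 16 kHz mono PCM from a file to the server,
# print the timestamped transcript lines it returns.
import socket
import sys
import threading
import time

HOST, PORT = "localhost", 43007          # the server defaults (see the options below)
BYTES_PER_SECOND = 16000 * 2             # 16 kHz mono, 16-bit samples
CHUNK = BYTES_PER_SECOND                 # send roughly one second of audio at a time

def print_replies(sock):
    # print the server's transcript lines as they arrive
    for line in sock.makefile("r", encoding="utf-8", errors="replace"):
        print(line.rstrip())

with open(sys.argv[1], "rb") as f, socket.create_connection((HOST, PORT)) as s:
    reader = threading.Thread(target=print_replies, args=(s,))
    reader.start()
    while (chunk := f.read(CHUNK)):
        s.sendall(chunk)
        time.sleep(1.0)                  # crude real-time pacing of the file playback
    s.shutdown(socket.SHUT_WR)           # signal that the audio stream has ended
    reader.join()                        # wait for the final transcript lines
```

A live-microphone client can be as simple as piping a raw 16 kHz capture into nc on the same host and port.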
@@ -226,12 +224,20 @@ In more detail: we use the init prompt, we handle the inaccurate timestamps, we
re-process confirmed sentence prefixes and skip them, making sure they don't
overlap, and we limit the processing buffer window.
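As a toy illustration of that confirmation step (not the repository's actual code, which works with timestamped words rather than plain strings): two consecutive hypotheses over the growing audio buffer are compared, and only their longest common prefix is committed as output; the disagreeing tail stays tentative and is re-decoded on the next update.

```python
# Toy sketch of the local-agreement confirmation idea; illustrative only.
def confirmed_prefix(prev_hypothesis, new_hypothesis):
    """Words that two consecutive hypotheses agree on from the start."""
    agreed = []
    for prev_word, new_word in zip(prev_hypothesis, new_hypothesis):
        if prev_word != new_word:
            break
        agreed.append(new_word)
    return agreed

# After two updates of the buffer, only the stable prefix is emitted:
print(confirmed_prefix(
    ["and", "so", "my", "fellow", "americans"],
    ["and", "so", "my", "fellow", "american", "ask"],
))  # -> ['and', 'so', 'my', 'fellow']; the rest waits for more audio
```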
Contributions are welcome.

### Performance evaluation

[See the paper.](http://www.afnlp.org/conferences/ijcnlp2023/proceedings/main-demo/cdrom/pdf/2023.ijcnlp-demo.3.pdf)

### Contributions

Contributions are welcome. We acknowledge especially:

- [The GitHub contributors](https://github.com/ufal/whisper_streaming/graphs/contributors) for their pull requests with new features and bugfixes.
- [The translation of this repo into Chinese.](https://github.com/Gloridust/whisper_streaming_CN)
- [Ondřej Plátek](https://opla.cz/) for the paper pre-review.
- [Peter Polák](https://ufal.mff.cuni.cz/peter-polak) for the original idea.
- The UEDIN team of the [ELITR project](https://elitr.eu) for the original line_packet.py.


## Contact
@@ -626,7 +626,7 @@ if __name__ == "__main__":
    # load the audio into the LRU cache before we start the timer
    a = load_audio_chunk(audio_path,0,1)

    # warm up the ASR, because the very first transcribe takes much more time than the other
    # warm up the ASR because the very first transcribe takes much more time than the other
    asr.transcribe(a)

    beg = args.start_at
@@ -10,6 +10,8 @@ parser = argparse.ArgumentParser()

# server options
parser.add_argument("--host", type=str, default='localhost')
parser.add_argument("--port", type=int, default=43007)
parser.add_argument("--warmup-file", type=str, dest="warmup_file",
        help="The path to a speech audio wav file to warm up Whisper so that the very first chunk processing is fast. It can be e.g. https://github.com/ggerganov/whisper.cpp/raw/master/samples/jfk.wav .")


# options from whisper_online
@@ -26,18 +28,25 @@ language = args.lan
asr, online = asr_factory(args)
min_chunk = args.min_chunk_size

demo_audio_path = "cs-maji-2.16k.wav"
if os.path.exists(demo_audio_path):
    # load the audio into the LRU cache before we start the timer
    a = load_audio_chunk(demo_audio_path,0,1)

    # TODO: it should be tested whether it's meaningful
    # warm up the ASR, because the very first transcribe takes much more time than the other
    asr.transcribe(a)
else:
    print("Whisper is not warmed up",file=sys.stderr)

if args.buffer_trimming == "sentence":
    tokenizer = create_tokenizer(tgt_language)
else:
    tokenizer = None
online = OnlineASRProcessor(asr,tokenizer,buffer_trimming=(args.buffer_trimming, args.buffer_trimming_sec))

# warm up the ASR because the very first transcribe takes more time than the others.
# Test results in https://github.com/ufal/whisper_streaming/pull/81
msg = "Whisper is not warmed up. The first chunk processing may take longer."
if args.warmup_file:
    if os.path.isfile(args.warmup_file):
        a = load_audio_chunk(args.warmup_file,0,1)
        asr.transcribe(a)
        print("INFO: Whisper is warmed up.",file=sys.stderr)
    else:
        print("WARNING: The warm up file is not available. "+msg,file=sys.stderr)
else:
    print("WARNING: " + msg, file=sys.stderr)


######### Server objects