diff --git a/README.md b/README.md
index dd77ed0..0e217c0 100644
--- a/README.md
+++ b/README.md
@@ -3,44 +3,52 @@ Whisper realtime streaming for long speech-to-text transcription and translation
 
 **Turning Whisper into Real-Time Transcription System**
 
-Demonstration paper, by Dominik Macháček, Raj Dabre, Ondřej Bojar, 2023
+Demonstration paper, by [Dominik Macháček](https://ufal.mff.cuni.cz/dominik-machacek), [Raj Dabre](https://prajdabre.github.io/), [Ondřej Bojar](https://ufal.mff.cuni.cz/ondrej-bojar), 2023
 
-Abstract: Whisper is one of the recent state-of-the-art multilingual speech recognition and translation models, however, it is not designed for real time transcription. In this paper, we build on top of Whisper and create Whisper-Streaming, an implementation of real-time speech transcription and translation of Whisper-like models. Whisper-Streaming uses local agreement policy with self-adaptive latency to enable streaming transcription. We show that Whisper-Streaming achieves high quality and 3.3 seconds latency on unsegmented long-form speech transcription test set, and we demonstrate its robustness and practical usability as a component in live transcription service at a multilingual conference.
+Abstract: Whisper is one of the recent state-of-the-art multilingual speech recognition and translation models, however, it is not designed for real-time transcription. In this paper, we build on top of Whisper and create Whisper-Streaming, an implementation of real-time speech transcription and translation of Whisper-like models. Whisper-Streaming uses local agreement policy with self-adaptive latency to enable streaming transcription. We show that Whisper-Streaming achieves high quality and 3.3 seconds latency on unsegmented long-form speech transcription test set, and we demonstrate its robustness and practical usability as a component in live transcription service at a multilingual conference.
 
-Paper in proceedings: http://www.afnlp.org/conferences/ijcnlp2023/proceedings/main-demo/cdrom/pdf/2023.ijcnlp-demo.3.pdf
-
-Demo video: https://player.vimeo.com/video/840442741
+[Paper PDF](https://aclanthology.org/2023.ijcnlp-demo.3.pdf), [Demo video](https://player.vimeo.com/video/840442741)
 
 [Slides](http://ufallab.ms.mff.cuni.cz/~machacek/pre-prints/AACL23-2.11.2023-Turning-Whisper-oral.pdf) -- 15 minutes oral presentation at IJCNLP-AACL 2023
 
-Please, cite us. [Bibtex citation](http://www.afnlp.org/conferences/ijcnlp2023/proceedings/main-demo/cdrom/bib/2023.ijcnlp-demo.3.bib):
+Please cite us. [ACL Anthology](https://aclanthology.org/2023.ijcnlp-demo.3/), [Bibtex citation](https://aclanthology.org/2023.ijcnlp-demo.3.bib):
 
 ```
-@InProceedings{machacek-dabre-bojar:2023:ijcnlp,
-  author    = {Macháček, Dominik and Dabre, Raj and Bojar, Ondřej},
-  title     = {Turning Whisper into Real-Time Transcription System},
-  booktitle = {System Demonstrations},
-  month     = {November},
-  year      = {2023},
-  address   = {Bali, Indonesia},
-  publisher = {Asian Federation of Natural Language Processing},
-  pages     = {17--24},
+@inproceedings{machacek-etal-2023-turning,
+    title = "Turning Whisper into Real-Time Transcription System",
+    author = "Mach{\'a}{\v{c}}ek, Dominik and
+      Dabre, Raj and
+      Bojar, Ond{\v{r}}ej",
+    editor = "Saha, Sriparna and
+      Sujaini, Herry",
+    booktitle = "Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics: System Demonstrations",
+    month = nov,
+    year = "2023",
+    address = "Bali, Indonesia",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2023.ijcnlp-demo.3",
+    pages = "17--24",
 }
 ```
 
 ## Installation
 
-1) ``pip install librosa`` -- audio processing library
+1) ``pip install librosa soundfile`` -- audio processing library
 
 Note: for the VAD I need to `pip install torch torchaudio`.
 
 2) Whisper backend.
 
-Two alternative backends are integrated. The most recommended one is [faster-whisper](https://github.com/guillaumekln/faster-whisper) with GPU support. Follow their instructions for NVIDIA libraries -- we succeeded with CUDNN 8.5.0 and CUDA 11.7. Install with `pip install faster-whisper`.
+Several alternative backends are integrated. The most recommended one is [faster-whisper](https://github.com/guillaumekln/faster-whisper) with GPU support. Follow their instructions for NVIDIA libraries -- we succeeded with CUDNN 8.5.0 and CUDA 11.7. Install with `pip install faster-whisper`.
 
 Alternative, less restrictive, but slower backend is [whisper-timestamped](https://github.com/linto-ai/whisper-timestamped): `pip install git+https://github.com/linto-ai/whisper-timestamped`
 
+Thirdly, it's also possible to run this software on top of the [OpenAI Whisper API](https://platform.openai.com/docs/api-reference/audio/createTranscription). This solution is fast and requires no GPU -- a small VM will suffice -- but you will need to pay OpenAI for API access. Also note that, because each audio fragment is processed multiple times, the [price](https://openai.com/pricing) will be higher than the pricing page suggests, so keep an eye on costs while using it. Setting a higher chunk size reduces the costs significantly.
+Install with: `pip install openai`
+
+For running with the openai-api backend, make sure that your [OpenAI API key](https://platform.openai.com/api-keys) is set in the `OPENAI_API_KEY` environment variable. For example, before running, do: `export OPENAI_API_KEY=sk-xxx`, with *sk-xxx* replaced with your API key.
+
 The backend is loaded only when chosen. The unused one does not have to be installed.
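+
+As a quick check that a backend is installed correctly, you can instantiate it directly from Python. This is a minimal sketch (class names as in `whisper_online.py`; the openai-api line assumes `OPENAI_API_KEY` is exported):
+
+```python
+from whisper_online import FasterWhisperASR, OpenaiApiASR
+
+# local backend, runs on your GPU
+asr = FasterWhisperASR("en", "large-v2")
+
+# or the OpenAI API backend -- no GPU needed, but every re-processed audio fragment is billed
+# asr = OpenaiApiASR(lan="en")
+```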
 3) Optional, not recommended: sentence segmenter (aka sentence tokenizer)
@@ -71,7 +79,7 @@ In case of installation issues of opus-fast-mosestokenizer, especially on Window
 ```
 usage: whisper_online.py [-h] [--min-chunk-size MIN_CHUNK_SIZE] [--model {tiny.en,tiny,base.en,base,small.en,small,medium.en,medium,large-v1,large-v2,large-v3,large}] [--model_cache_dir MODEL_CACHE_DIR] [--model_dir MODEL_DIR] [--lan LAN] [--task {transcribe,translate}]
-                         [--backend {faster-whisper,whisper_timestamped}] [--vad] [--buffer_trimming {sentence,segment}] [--buffer_trimming_sec BUFFER_TRIMMING_SEC] [--start_at START_AT] [--offline] [--comp_unaware]
+                         [--backend {faster-whisper,whisper_timestamped,openai-api}] [--vad] [--buffer_trimming {sentence,segment}] [--buffer_trimming_sec BUFFER_TRIMMING_SEC] [--start_at START_AT] [--offline] [--comp_unaware]
                          audio_path
 
 positional arguments:
@@ -91,7 +99,7 @@ options:
                         Source language code, e.g. en,de,cs, or 'auto' for language detection.
   --task {transcribe,translate}
                         Transcribe or translate.
-  --backend {faster-whisper,whisper_timestamped}
+  --backend {faster-whisper,whisper_timestamped,openai-api}
                         Load only this backend for Whisper processing.
   --vad                 Use VAD = voice activity detection, with the default parameters.
   --buffer_trimming {sentence,segment}
@@ -149,7 +157,7 @@ The code whisper_online.py is nicely commented, read it as the full documentatio
 
 This pseudocode describes the interface that we suggest for your implementation. You can implement any features that you need for your application.
 
-```
+```python
 from whisper_online import *
 
 src_lan = "en"  # source language
@@ -177,7 +185,7 @@ online.init()  # refresh if you're going to re-use the object for the next audio
 
 ### Server -- real-time from mic
 
-`whisper_online_server.py` has the same model options as `whisper_online.py`, plus `--host` and `--port` of the TCP connection. See help message (`-h` option).
+`whisper_online_server.py` has the same model options as `whisper_online.py`, plus `--host` and `--port` of the TCP connection, and a `--warmup-file` option. See the help message (`-h` option).
 
 Client example:
 
@@ -218,12 +226,21 @@ In more detail: we use the init prompt, we handle the inaccurate timestamps, we
 re-process confirmed sentence prefixes and skip them, making sure they don't overlap, and we limit the processing buffer window.
 
-Contributions are welcome.
-
 ### Performance evaluation
 
 [See the paper.](http://www.afnlp.org/conferences/ijcnlp2023/proceedings/main-demo/cdrom/pdf/2023.ijcnlp-demo.3.pdf)
 
+### Contributions
+
+Contributions are welcome. We acknowledge especially:
+
+- [The GitHub contributors](https://github.com/ufal/whisper_streaming/graphs/contributors) for their pull requests with new features and bugfixes.
+- [Nice explanation video](https://www.youtube.com/watch?v=_spinzpEeFM) -- published on 31st March 2024, note that newer updates are not included.
+- [The translation of this repo into Chinese.](https://github.com/Gloridust/whisper_streaming_CN)
+- [Ondřej Plátek](https://opla.cz/) for the paper pre-review.
+- [Peter Polák](https://ufal.mff.cuni.cz/peter-polak) for the original idea.
+- The UEDIN team of the [ELITR project](https://elitr.eu) for the original line_packet.py.
+
 ## Contact
 
diff --git a/line_packet.py b/line_packet.py
index 54dea11..0664fea 100644
--- a/line_packet.py
+++ b/line_packet.py
@@ -2,8 +2,6 @@
 
 """Functions for sending and receiving individual lines of text over a socket.
 
-Used by marian-server-server.py to communicate with the Marian worker.
-
 A line is transmitted using one or more fixed-size packets of UTF-8 bytes
 containing:
 
@@ -11,6 +9,7 @@ containing:
 
   - Zero or more \0 bytes as required to pad the packet to PACKET_SIZE
 
+Originally from the UEDIN team of the ELITR project.
 """
 
 PACKET_SIZE = 65536
 
diff --git a/whisper_online.py b/whisper_online.py
index 4ecd2f7..20a9b79 100644
--- a/whisper_online.py
+++ b/whisper_online.py
@@ -4,12 +4,17 @@
 import numpy as np
 import librosa
 from functools import lru_cache
 import time
-import datetime
+import logging
+import io
+import soundfile as sf
+import math
+
+logger = logging.getLogger(__name__)
 
 @lru_cache
 def load_audio(fname):
-    a, _ = librosa.load(fname, sr=16000)
+    a, _ = librosa.load(fname, sr=16000, dtype=np.float32)
     return a
 
 def load_audio_chunk(fname, beg, end):
@@ -57,10 +62,11 @@ class WhisperTimestampedASR(ASRBase):
 
     def load_model(self, modelsize=None, cache_dir=None, model_dir=None):
         import whisper
+        import whisper_timestamped
         from whisper_timestamped import transcribe_timestamped
         self.transcribe_timestamped = transcribe_timestamped
         if model_dir is not None:
-            print("ignoring model_dir, not implemented",file=self.logfile)
+            logger.debug("ignoring model_dir, not implemented")
         return whisper.load_model(modelsize, download_root=cache_dir)
 
     def transcribe(self, audio, init_prompt=""):
@@ -99,8 +105,9 @@ class FasterWhisperASR(ASRBase):
 
     def load_model(self, modelsize=None, cache_dir=None, model_dir=None):
         from faster_whisper import WhisperModel
+#        logging.getLogger("faster_whisper").setLevel(logger.level)
         if model_dir is not None:
-            print(f"Loading whisper model from model_dir {model_dir}. modelsize and cache_dir parameters are not used.",file=self.logfile)
+            logger.debug(f"Loading whisper model from model_dir {model_dir}. modelsize and cache_dir parameters are not used.")
             model_size_or_path = model_dir
         elif modelsize is not None:
             model_size_or_path = modelsize
@@ -150,6 +157,93 @@
         self.transcribe_kargs["task"] = "translate"
 
 
+class OpenaiApiASR(ASRBase):
+    """Uses OpenAI's Whisper API for audio transcription."""
+
+    def __init__(self, lan=None, temperature=0, logfile=sys.stderr):
+        self.logfile = logfile
+
+        self.modelname = "whisper-1"
+        self.original_language = None if lan == "auto" else lan  # ISO-639-1 language code
+        self.response_format = "verbose_json"
+        self.temperature = temperature
+
+        self.load_model()
+
+        self.use_vad_opt = False
+
+        # reset the task in set_translate_task
+        self.task = "transcribe"
+
+    def load_model(self, *args, **kwargs):
+        from openai import OpenAI
+        self.client = OpenAI()
+
+        self.transcribed_seconds = 0  # for logging how many seconds were processed by API, to know the cost
+
+
+    def ts_words(self, segments):
+        no_speech_segments = []
+        if self.use_vad_opt:
+            for segment in segments.segments:
+                # TODO: threshold can be set from outside
+                if segment["no_speech_prob"] > 0.8:
+                    no_speech_segments.append((segment.get("start"), segment.get("end")))
+
+        o = []
+        for word in segments.words:
+            start = word.get("start")
+            end = word.get("end")
+            if any(s[0] <= start <= s[1] for s in no_speech_segments):
+                # print("Skipping word", word.get("word"), "because it's in a no-speech segment")
+                continue
+            o.append((start, end, word.get("word")))
+        return o
+
+
+    def segments_end_ts(self, res):
+        return [s["end"] for s in res.words]
+
+    def transcribe(self, audio_data, prompt=None, *args, **kwargs):
+        # OnlineASRProcessor passes the prompt as init_prompt (see process_iter); accept it too,
+        # otherwise the prompt would be silently dropped into **kwargs
+        if prompt is None:
+            prompt = kwargs.pop("init_prompt", None)
+        # Write the audio data to a buffer
+        buffer = io.BytesIO()
+        buffer.name = "temp.wav"
+        sf.write(buffer, audio_data, samplerate=16000, format='WAV', subtype='PCM_16')
+        buffer.seek(0)  # Reset buffer's position to the beginning
+
+        self.transcribed_seconds += math.ceil(len(audio_data)/16000)  # rounds up to whole seconds
+
+        params = {
+            "model": self.modelname,
+            "file": buffer,
+            "response_format": self.response_format,
+            "temperature": self.temperature,
+            "timestamp_granularities": ["word", "segment"]
+        }
+        if self.task != "translate" and self.original_language:
+            params["language"] = self.original_language
+        if prompt:
+            params["prompt"] = prompt
+
+        if self.task == "translate":
+            proc = self.client.audio.translations
+        else:
+            proc = self.client.audio.transcriptions
+
+        # Process transcription/translation
+        transcript = proc.create(**params)
+        logger.debug(f"OpenAI API processed accumulated {self.transcribed_seconds} seconds")
+
+        return transcript
+
+    def use_vad(self):
+        self.use_vad_opt = True
+
+    def set_translate_task(self):
+        self.task = "translate"
+
+
 
 class HypothesisBuffer:
 
@@ -181,9 +275,11 @@
                 c = " ".join([self.commited_in_buffer[-j][2] for j in range(1,i+1)][::-1])
                 tail = " ".join(self.new[j-1][2] for j in range(1,i+1))
                 if c == tail:
-                    print("removing last",i,"words:",file=self.logfile)
+                    words = []
                     for j in range(i):
-                        print("\t",self.new.pop(0),file=self.logfile)
+                        words.append(repr(self.new.pop(0)))
+                    words_msg = " ".join(words)
+                    logger.debug(f"removing last {i} words: {words_msg}")
                     break
 
     def flush(self):
@@ -246,8 +342,6 @@ class OnlineASRProcessor:
         self.transcript_buffer.last_commited_time = self.buffer_time_offset
         self.commited = []
-        self.last_chunked_at = 0
-
     def insert_audio_chunk(self, audio):
         self.audio_buffer = np.append(self.audio_buffer, audio)
 
@@ -257,7 +351,7 @@
         "context" is the commited text that is inside the audio buffer. It is transcribed again and skipped. It is returned only for debugging and logging reasons.
         """
         k = max(0,len(self.commited)-1)
-        while k > 0 and self.commited[k-1][1] > self.last_chunked_at:
+        while k > 0 and self.commited[k-1][1] > self.buffer_time_offset:
             k -= 1
 
         p = self.commited[:k]
@@ -278,9 +372,9 @@
         """
         prompt, non_prompt = self.prompt()
-        print("PROMPT:", prompt, file=self.logfile)
-        print("CONTEXT:", non_prompt, file=self.logfile)
-        print(f"transcribing {len(self.audio_buffer)/self.SAMPLING_RATE:2.2f} seconds from {self.buffer_time_offset:2.2f}",file=self.logfile)
+        logger.debug(f"PROMPT: {prompt}")
+        logger.debug(f"CONTEXT: {non_prompt}")
+        logger.debug(f"transcribing {len(self.audio_buffer)/self.SAMPLING_RATE:2.2f} seconds from {self.buffer_time_offset:2.2f}")
         res = self.asr.transcribe(self.audio_buffer, init_prompt=prompt)
 
         # transform to [(beg,end,"word1"), ...]
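For illustration, the in-memory WAV upload that `transcribe` performs above can be reproduced standalone. This is a sketch, not part of the patch: it assumes `OPENAI_API_KEY` is set in the environment and the `openai` v1 Python client; `chunk.wav` is a hypothetical buffer name that only tells the client which container format to declare.

```python
import io

import numpy as np
import soundfile as sf
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# stand-in for the float32 buffer that OnlineASRProcessor accumulates
audio = np.zeros(16000, dtype=np.float32)  # one second of silence at 16 kHz

buffer = io.BytesIO()
buffer.name = "chunk.wav"  # hypothetical name; the API infers the format from it
sf.write(buffer, audio, samplerate=16000, format="WAV", subtype="PCM_16")
buffer.seek(0)

transcript = client.audio.transcriptions.create(
    model="whisper-1",
    file=buffer,
    response_format="verbose_json",  # required for timestamp_granularities
    timestamp_granularities=["word", "segment"],
)
print(transcript.words)  # word-level timestamps, as consumed by ts_words()
```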
@@ -289,8 +383,10 @@
         self.transcript_buffer.insert(tsw, self.buffer_time_offset)
         o = self.transcript_buffer.flush()
         self.commited.extend(o)
-        print(">>>>COMPLETE NOW:",self.to_flush(o),file=self.logfile,flush=True)
-        print("INCOMPLETE:",self.to_flush(self.transcript_buffer.complete()),file=self.logfile,flush=True)
+        completed = self.to_flush(o)
+        logger.debug(f">>>>COMPLETE NOW: {completed}")
+        the_rest = self.to_flush(self.transcript_buffer.complete())
+        logger.debug(f"INCOMPLETE: {the_rest}")
 
         # there is a newly confirmed text
@@ -314,18 +410,18 @@
             #while k>0 and self.commited[k][1] > l:
             #    k -= 1
             #t = self.commited[k][1]
-            print(f"chunking segment",file=self.logfile)
+            logger.debug("chunking segment")
             #self.chunk_at(t)
 
-        print(f"len of buffer now: {len(self.audio_buffer)/self.SAMPLING_RATE:2.2f}",file=self.logfile)
+        logger.debug(f"len of buffer now: {len(self.audio_buffer)/self.SAMPLING_RATE:2.2f}")
         return self.to_flush(o)
 
     def chunk_completed_sentence(self):
         if self.commited == []: return
-        print(self.commited,file=self.logfile)
+        logger.debug(self.commited)
         sents = self.words_to_sentences(self.commited)
         for s in sents:
-            print("\t\tSENT:",s,file=self.logfile)
+            logger.debug(f"\t\tSENT: {s}")
         if len(sents) < 2:
             return
         while len(sents) > 2:
@@ -333,7 +429,7 @@
         # we will continue with audio processing at this timestamp
         chunk_at = sents[-2][1]
 
-        print(f"--- sentence chunked at {chunk_at:2.2f}",file=self.logfile)
+        logger.debug(f"--- sentence chunked at {chunk_at:2.2f}")
         self.chunk_at(chunk_at)
 
     def chunk_completed_segment(self, res):
@@ -350,12 +446,12 @@
                 ends.pop(-1)
                 e = ends[-2]+self.buffer_time_offset
             if e <= t:
-                print(f"--- segment chunked at {e:2.2f}",file=self.logfile)
+                logger.debug(f"--- segment chunked at {e:2.2f}")
                 self.chunk_at(e)
             else:
-                print(f"--- last segment not within commited area",file=self.logfile)
+                logger.debug("--- last segment not within commited area")
         else:
-            print(f"--- not enough segments to chunk",file=self.logfile)
+            logger.debug("--- not enough segments to chunk")
 
 
@@ -368,7 +464,6 @@
         cut_seconds = time - self.buffer_time_offset
         self.audio_buffer = self.audio_buffer[int(cut_seconds*self.SAMPLING_RATE):]
         self.buffer_time_offset = time
-        self.last_chunked_at = time
 
     def words_to_sentences(self, words):
         """Uses self.tokenizer for sentence segmentation of words.
@@ -402,7 +497,7 @@
         """
         o = self.transcript_buffer.complete()
         f = self.to_flush(o)
-        print("last, noncommited:",f,file=self.logfile)
+        logger.debug(f"last, noncommited: {f}")
         self.buffer_time_offset += len(self.audio_buffer)/16000
         return f
 
@@ -443,7 +538,7 @@ def create_tokenizer(lan):
 
     # the following languages are in Whisper, but not in wtpsplit:
     if lan in "as ba bo br bs fo haw hr ht jw lb ln lo mi nn oc sa sd sn so su sw tk tl tt".split():
-        print(f"{lan} code is not supported by wtpsplit. Going to use None lang_code option.", file=sys.stderr)
+        logger.debug(f"{lan} code is not supported by wtpsplit. Going to use None lang_code option.")
         lan = None
 
     from wtpsplit import WtP
 
@@ -463,14 +558,67 @@ def add_shared_args(parser):
     parser.add_argument('--model', type=str, default='large-v2', choices="tiny.en,tiny,base.en,base,small.en,small,medium.en,medium,large-v1,large-v2,large-v3,large".split(","),help="Name size of the Whisper model to use (default: large-v2). The model is automatically downloaded from the model hub if not present in model cache dir.")
     parser.add_argument('--model_cache_dir', type=str, default=None, help="Overriding the default model cache dir where models downloaded from the hub are saved")
     parser.add_argument('--model_dir', type=str, default=None, help="Dir where Whisper model.bin and other files are saved. This option overrides --model and --model_cache_dir parameter.")
-    parser.add_argument('--lan', '--language', type=str, default='en', help="Source language code, e.g. en,de,cs, or 'auto' for language detection.")
+    parser.add_argument('--lan', '--language', type=str, default='auto', help="Source language code, e.g. en,de,cs, or 'auto' for language detection.")
     parser.add_argument('--task', type=str, default='transcribe', choices=["transcribe","translate"],help="Transcribe or translate.")
-    parser.add_argument('--backend', type=str, default="faster-whisper", choices=["faster-whisper", "whisper_timestamped"],help='Load only this backend for Whisper processing.')
+    parser.add_argument('--backend', type=str, default="faster-whisper", choices=["faster-whisper", "whisper_timestamped", "openai-api"],help='Load only this backend for Whisper processing.')
     parser.add_argument('--vad', action="store_true", default=False, help='Use VAD = voice activity detection, with the default parameters.')
     parser.add_argument('--buffer_trimming', type=str, default="segment", choices=["sentence", "segment"],help='Buffer trimming strategy -- trim completed sentences marked with punctuation mark and detected by sentence segmenter, or the completed segments returned by Whisper. Sentence segmenter must be installed for "sentence" option.')
     parser.add_argument('--buffer_trimming_sec', type=float, default=15, help='Buffer trimming length threshold in seconds. If buffer length is longer, trimming sentence/segment is triggered.')
+    parser.add_argument("-l", "--log-level", dest="log_level", choices=['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'], help="Set the log level", default='DEBUG')
+
+def asr_factory(args, logfile=sys.stderr):
+    """
+    Creates and configures an ASR and ASR Online instance based on the specified backend and arguments.
+    """
+    backend = args.backend
+    if backend == "openai-api":
+        logger.debug("Using OpenAI API.")
+        asr = OpenaiApiASR(lan=args.lan)
+    else:
+        if backend == "faster-whisper":
+            asr_cls = FasterWhisperASR
+        else:
+            asr_cls = WhisperTimestampedASR
+
+        # Only for FasterWhisperASR and WhisperTimestampedASR
+        size = args.model
+        t = time.time()
+        logger.info(f"Loading Whisper {size} model for {args.lan}...")
+        asr = asr_cls(modelsize=size, lan=args.lan, cache_dir=args.model_cache_dir, model_dir=args.model_dir)
+        e = time.time()
+        logger.info(f"done. It took {round(e-t,2)} seconds.")
It took {round(e-t,2)} seconds.") + + # Apply common configurations + if getattr(args, 'vad', False): # Checks if VAD argument is present and True + logger.info("Setting VAD filter") + asr.use_vad() + + language = args.lan + if args.task == "translate": + asr.set_translate_task() + tgt_language = "en" # Whisper translates into English + else: + tgt_language = language # Whisper transcribes in this language + + # Create the tokenizer + if args.buffer_trimming == "sentence": + tokenizer = create_tokenizer(tgt_language) + else: + tokenizer = None + + # Create the OnlineASRProcessor + online = OnlineASRProcessor(asr,tokenizer,logfile=logfile,buffer_trimming=(args.buffer_trimming, args.buffer_trimming_sec)) + + return asr, online + +def set_logging(args,logger,other="_server"): + logging.basicConfig(#format='%(name)s + format='%(levelname)s\t%(message)s') + logger.setLevel(args.log_level) + logging.getLogger("whisper_online"+other).setLevel(args.log_level) +# logging.getLogger("whisper_online_server").setLevel(args.log_level) + -## main: if __name__ == "__main__": @@ -488,55 +636,28 @@ if __name__ == "__main__": logfile = sys.stderr if args.offline and args.comp_unaware: - print("No or one option from --offline and --comp_unaware are available, not both. Exiting.",file=logfile) + logger.error("No or one option from --offline and --comp_unaware are available, not both. Exiting.") sys.exit(1) +# if args.log_level: +# logging.basicConfig(format='whisper-%(levelname)s:%(name)s: %(message)s', +# level=getattr(logging, args.log_level)) + + set_logging(args,logger) + audio_path = args.audio_path SAMPLING_RATE = 16000 duration = len(load_audio(audio_path))/SAMPLING_RATE - print("Audio duration is: %2.2f seconds" % duration, file=logfile) + logger.info("Audio duration is: %2.2f seconds" % duration) - size = args.model - language = args.lan - - t = time.time() - print(f"Loading Whisper {size} model for {language}...",file=logfile,end=" ",flush=True) - - if args.backend == "faster-whisper": - asr_cls = FasterWhisperASR - else: - asr_cls = WhisperTimestampedASR - - asr = asr_cls(modelsize=size, lan=language, cache_dir=args.model_cache_dir, model_dir=args.model_dir) - - if args.task == "translate": - asr.set_translate_task() - tgt_language = "en" # Whisper translates into English - else: - tgt_language = language # Whisper transcribes in this language - - - e = time.time() - print(f"done. 
It took {round(e-t,2)} seconds.",file=logfile) - - if args.vad: - print("setting VAD filter",file=logfile) - asr.use_vad() - - + asr, online = asr_factory(args, logfile=logfile) min_chunk = args.min_chunk_size - if args.buffer_trimming == "sentence": - tokenizer = create_tokenizer(tgt_language) - else: - tokenizer = None - online = OnlineASRProcessor(asr,tokenizer,logfile=logfile,buffer_trimming=(args.buffer_trimming, args.buffer_trimming_sec)) - # load the audio into the LRU cache before we start the timer a = load_audio_chunk(audio_path,0,1) - # warm up the ASR, because the very first transcribe takes much more time than the other + # warm up the ASR because the very first transcribe takes much more time than the other asr.transcribe(a) beg = args.start_at @@ -555,16 +676,16 @@ if __name__ == "__main__": print("%1.4f %1.0f %1.0f %s" % (now*1000, o[0]*1000,o[1]*1000,o[2]),file=logfile,flush=True) print("%1.4f %1.0f %1.0f %s" % (now*1000, o[0]*1000,o[1]*1000,o[2]),flush=True) else: - print(o,file=logfile,flush=True) + # No text, so no output + pass if args.offline: ## offline mode processing (for testing/debugging) a = load_audio(audio_path) online.insert_audio_chunk(a) try: o = online.process_iter() - except AssertionError: - print("assertion error",file=logfile) - pass + except AssertionError as e: + logger.error(f"assertion error: {repr(e)}") else: output_transcript(o) now = None @@ -575,13 +696,13 @@ if __name__ == "__main__": online.insert_audio_chunk(a) try: o = online.process_iter() - except AssertionError: - print("assertion error",file=logfile) + except AssertionError as e: + logger.error(f"assertion error: {repr(e)}") pass else: output_transcript(o, now=end) - print(f"## last processed {end:.2f}s",file=logfile,flush=True) + logger.debug(f"## last processed {end:.2f}s") if end >= duration: break @@ -607,13 +728,13 @@ if __name__ == "__main__": try: o = online.process_iter() - except AssertionError: - print("assertion error",file=logfile) + except AssertionError as e: + logger.error(f"assertion error: {e}") pass else: output_transcript(o) now = time.time() - start - print(f"## last processed {end:.2f} s, now is {now:.2f}, the latency is {now-end:.2f}",file=logfile,flush=True) + logger.debug(f"## last processed {end:.2f} s, now is {now:.2f}, the latency is {now-end:.2f}") if end >= duration: break diff --git a/whisper_online_server.py b/whisper_online_server.py index 571bfb6..959020e 100644 --- a/whisper_online_server.py +++ b/whisper_online_server.py @@ -4,6 +4,10 @@ from whisper_online import * import sys import argparse import os +import logging +import numpy as np + +logger = logging.getLogger(__name__) parser = argparse.ArgumentParser() # server options @@ -11,11 +15,14 @@ parser.add_argument("--host", type=str, default='localhost') parser.add_argument("--port", type=int, default=43007) parser.add_argument('--vac', action="store_true", default=False, help='Use VAC = voice activity controller.') parser.add_argument('--vac-chunk-size', type=float, default=0.04, help='VAC sample size in seconds.') +parser.add_argument("--warmup-file", type=str, dest="warmup_file", + help="The path to a speech audio wav file to warm up Whisper so that the very first chunk processing is fast. It can be e.g. 
 
 # options from whisper_online
 add_shared_args(parser)
 args = parser.parse_args()
 
+set_logging(args,logger,other="")
 
 # setting whisper object by args
 
@@ -23,59 +30,22 @@ SAMPLING_RATE = 16000
 
 size = args.model
 language = args.lan
 
+asr, online = asr_factory(args)
+min_chunk = args.min_chunk_size
 
-t = time.time()
-print(f"Loading Whisper {size} model for {language}...",file=sys.stderr,end=" ",flush=True)
-
-if args.backend == "faster-whisper":
-    from faster_whisper import WhisperModel
-    asr_cls = FasterWhisperASR
-elif args.backend == "whisper_timestamped":
-    import whisper
-    from whisper_online import WhisperTimestampedASR
-    asr_cls = WhisperTimestampedASR
+# warm up the ASR because the very first transcribe takes more time than the others.
+# Test results in https://github.com/ufal/whisper_streaming/pull/81
+msg = "Whisper is not warmed up. The first chunk processing may take longer."
+if args.warmup_file:
+    if os.path.isfile(args.warmup_file):
+        a = load_audio_chunk(args.warmup_file,0,1)
+        asr.transcribe(a)
+        logger.info("Whisper is warmed up.")
+    else:
+        logger.critical("The warm up file is not available. "+msg)
+        sys.exit(1)
 else:
-    raise ValueError(f"Unknown {args.backend=}")
-
-asr = asr_cls(modelsize=size, lan=language, cache_dir=args.model_cache_dir, model_dir=args.model_dir)
-
-if args.task == "translate":
-    asr.set_translate_task()
-    tgt_language = "en"
-else:
-    tgt_language = language
-
-print(f"done. It took {round(time.time()-t,2)} seconds.",file=sys.stderr)
-
-if args.vad:
-    print("setting VAD filter",file=sys.stderr)
-    asr.use_vad()
-
-
-if args.buffer_trimming == "sentence":
-    tokenizer = create_tokenizer(tgt_language)
-else:
-    tokenizer = None
-if not args.vac:
-    from whisper_online import OnlineASRProcessor
-    online = OnlineASRProcessor(asr,tokenizer,buffer_trimming=(args.buffer_trimming, args.buffer_trimming_sec))
-else:
-    from whisper_online_vac import VACOnlineASRProcessor
-    online = VACOnlineASRProcessor(args.min_chunk_size, asr,tokenizer,buffer_trimming=(args.buffer_trimming, args.buffer_trimming_sec))
-
-
-demo_audio_path = "cs-maji-2.16k.wav"
-if os.path.exists(demo_audio_path):
-    # load the audio into the LRU cache before we start the timer
-    a = load_audio_chunk(demo_audio_path,0,1)
-
-    # TODO: it should be tested whether it's meaningful
-    # warm up the ASR, because the very first transcribe takes much more time than the other
-    asr.transcribe(a)
-else:
-    print("Whisper is not warmed up",file=sys.stderr)
-
-
+    logger.warning(msg)
 
 
 ######### Server objects
 
@@ -83,9 +53,6 @@
 import line_packet
 import socket
 
-import logging
-
-
 class Connection:
     '''it wraps conn object'''
     PACKET_SIZE = 32000*5*60  # 5 minutes  # was: 65536
 
@@ -143,7 +110,7 @@ class ServerProcessor:
                 break
             print("received audio:",len(raw_bytes), "bytes", raw_bytes[:10])
             sf = soundfile.SoundFile(io.BytesIO(raw_bytes), channels=1,endian="LITTLE",samplerate=SAMPLING_RATE, subtype="PCM_16",format="RAW")
-            audio, _ = librosa.load(sf,sr=SAMPLING_RATE)
+            audio, _ = librosa.load(sf,sr=SAMPLING_RATE,dtype=np.float32)
             out.append(audio)
         if not out:
             return None
 
@@ -174,7 +141,7 @@
             print("%1.0f %1.0f %s" % (beg,end,o[2]),flush=True,file=sys.stderr)
             return "%1.0f %1.0f %s" % (beg,end,o[2])
         else:
-            print(o,file=sys.stderr,flush=True)
+            logger.debug("No text in this segment")
             return None
 
     def send_result(self, o):
@@ -188,14 +155,13 @@
         while True:
             a = self.receive_audio_chunk()
             if a is None:
-                print("break here",file=sys.stderr)
print("break here",file=sys.stderr) break self.online_asr_proc.insert_audio_chunk(a) o = online.process_iter() try: self.send_result(o) except BrokenPipeError: - print("broken pipe -- connection closed?",file=sys.stderr) + logger.info("broken pipe -- connection closed?") break # o = online.finish() # this should be working @@ -203,23 +169,18 @@ class ServerProcessor: - -# Start logging. -level = logging.INFO -logging.basicConfig(level=level, format='whisper-server-%(levelname)s: %(message)s') - # server loop with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: s.bind((args.host, args.port)) s.listen(1) - logging.info('INFO: Listening on'+str((args.host, args.port))) + logger.info('Listening on'+str((args.host, args.port))) while True: conn, addr = s.accept() - logging.info('INFO: Connected to client on {}'.format(addr)) + logger.info('Connected to client on {}'.format(addr)) connection = Connection(conn) proc = ServerProcessor(connection, online, args.min_chunk_size) proc.process() conn.close() - logging.info('INFO: Connection to client closed') -logging.info('INFO: Connection closed, terminating.') + logger.info('Connection to client closed') +logger.info('Connection closed, terminating.')