9 Commits
0.0.1 ... 0.1.3

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Quentin Fuxa | e9022894b2 | solve #100 | 2025-03-24 20:38:47 +01:00 |
| Quentin Fuxa | ccf99cecdf | Solve #95 and #96 | 2025-03-24 17:55:52 +01:00 |
| Quentin Fuxa | 40e2814cd7 | 0.1.2 | 2025-03-20 11:08:40 +01:00 |
| Quentin Fuxa | cd29eace3d | Update README.md | 2025-03-20 10:23:14 +01:00 |
| Quentin Fuxa | 38cb54640f | Update README.md | 2025-03-19 15:49:39 +01:00 |
| Quentin Fuxa | 81268a7ca3 | update CLI launch | 2025-03-19 15:40:54 +01:00 |
| Quentin Fuxa | 33cbd24964 | Update README.md | 2025-03-19 15:14:38 +01:00 |
| Quentin Fuxa | e966e78584 | Merge pull request #92 from QuentinFuxa/refacto_lib (script to lib) | 2025-03-19 15:13:42 +01:00 |
| Quentin Fuxa | c13d36b5e7 | Merge pull request #91 from QuentinFuxa/refacto_lib (move all audio processing out of /asr endpoint) | 2025-03-19 11:20:28 +01:00 |
9 changed files with 236 additions and 112 deletions

README.md (205 lines changed)

@@ -1,11 +1,16 @@
<h1 align="center">WhisperLiveKit</h1>
<p align="center"><b>Real-time, Fully Local Whisper's Speech-to-Text and Speaker Diarization</b></p>
<p align="center">
<img alt="PyPI Version" src="https://img.shields.io/pypi/v/whisperlivekit?color=g">
<img alt="PyPI Downloads" src="https://static.pepy.tech/personalized-badge/whisperlivekit">
<img alt="Python Versions" src="https://img.shields.io/badge/python-3.9%20%7C%203.10%20%7C%203.11%20%7C%203.12-dark_green">
</p>
This project is based on [Whisper Streaming](https://github.com/ufal/whisper_streaming) and lets you transcribe audio directly from your browser. Simply launch the local server and grant microphone access. Everything runs locally on your machine ✨
<p align="center">
<img src="https://raw.githubusercontent.com/QuentinFuxa/WhisperLiveKit/demo.png" alt="Demo Screenshot" width="730">
<img src="https://raw.githubusercontent.com/QuentinFuxa/WhisperLiveKit/refs/heads/main/demo.png" alt="Demo Screenshot" width="730">
</p>
### Differences from [Whisper Streaming](https://github.com/ufal/whisper_streaming)
@@ -26,7 +31,7 @@ This project is based on [Whisper Streaming](https://github.com/ufal/whisper_str
## Installation
-### Via pip
+### Via pip (recommended)
```bash
pip install whisperlivekit
```
@@ -34,114 +39,136 @@ pip install whisperlivekit
### From source
-1. **Clone the Repository**:
-   ```bash
-   git clone https://github.com/QuentinFuxa/WhisperLiveKit
-   cd WhisperLiveKit
-   pip install -e .
-   ```
+```bash
+git clone https://github.com/QuentinFuxa/WhisperLiveKit
+cd WhisperLiveKit
+pip install -e .
+```
-- Install system dependencies:
-```bash
-# Install FFmpeg on your system (required for audio processing)
-# For Ubuntu/Debian:
-sudo apt install ffmpeg
-# For macOS:
-brew install ffmpeg
-# For Windows:
-# Download from https://ffmpeg.org/download.html and add to PATH
-```
-- Install required Python dependencies:
-```bash
-# Whisper streaming required dependencies
-pip install librosa soundfile
-# Whisper streaming web required dependencies
-pip install fastapi ffmpeg-python
-```
-- Install at least one whisper backend among:
-```
-whisper
-whisper-timestamped
-faster-whisper (faster backend on NVIDIA GPU)
-mlx-whisper (faster backend on Apple Silicon)
-```
-- Optional dependencies
-```
-# If you want to use VAC (Voice Activity Controller). Useful for preventing hallucinations
-torch
-# If you choose sentences as buffer trimming strategy
-mosestokenizer
-wtpsplit
-tokenize_uk # If you work with Ukrainian text
-# If you want to run the server using uvicorn (recommended)
-uvicorn
-# If you want to use diarization
-diart
-```
+### System Dependencies
+You need to install FFmpeg on your system:
+```bash
+# For Ubuntu/Debian:
+sudo apt install ffmpeg
+
+# For macOS:
+brew install ffmpeg
+
+# For Windows:
+# Download from https://ffmpeg.org/download.html and add to PATH
+```
+### Optional Dependencies
+```bash
+# If you want to use VAC (Voice Activity Controller). Useful for preventing hallucinations
+pip install torch
+
+# If you choose sentences as buffer trimming strategy
+pip install mosestokenizer wtpsplit
+pip install tokenize_uk # If you work with Ukrainian text
+
+# If you want to use diarization
+pip install diart
+
+# Optional backends. Default is faster-whisper
+pip install whisperlivekit[whisper]             # Original Whisper backend
+pip install whisperlivekit[whisper-timestamped] # Whisper with improved timestamps
+pip install whisperlivekit[mlx-whisper]         # Optimized for Apple Silicon
+pip install whisperlivekit[openai]              # OpenAI API backend
+```
-### Get access to 🎹 pyannote models
-By default, diart is based on [pyannote.audio](https://github.com/pyannote/pyannote-audio) models from the [huggingface](https://huggingface.co/) hub.
-In order to use them, please follow these steps:
-1) [Accept user conditions](https://huggingface.co/pyannote/segmentation) for the `pyannote/segmentation` model
-2) [Accept user conditions](https://huggingface.co/pyannote/segmentation-3.0) for the newest `pyannote/segmentation-3.0` model
-3) [Accept user conditions](https://huggingface.co/pyannote/embedding) for the `pyannote/embedding` model
-4) Install [huggingface-cli](https://huggingface.co/docs/huggingface_hub/quick-start#install-the-hub-library) and [log in](https://huggingface.co/docs/huggingface_hub/quick-start#login) with your user access token (or provide it manually in diart CLI or API).
-3. **Run the FastAPI Server**:
-   ```bash
-   python whisper_fastapi_online_server.py --host 0.0.0.0 --port 8000
-   ```
+Diart uses by default [pyannote.audio](https://github.com/pyannote/pyannote-audio) models from the _huggingface hub_. To use them, please follow the steps described [here](https://github.com/juanmc2005/diart?tab=readme-ov-file#get-access-to--pyannote-models).
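If you use diarization, the gated pyannote models require Hugging Face authentication. A minimal sketch using the `huggingface_hub` Python API, equivalent to the `huggingface-cli` login mentioned in the steps above (the token value is a placeholder):

```python
from huggingface_hub import login

# Authenticate so diart can download the gated pyannote models.
# Replace the placeholder with your own user access token.
login(token="hf_xxxxxxxxxxxxxxxxxxxx")
```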
-**Parameters**
-The following parameters are supported:
-- `--host` and `--port` let you specify the server's IP/port.
-- `--min-chunk-size` sets the minimum chunk size for audio processing. Make sure this value aligns with the chunk size selected in the frontend. If not aligned, the system will work but may unnecessarily over-process audio data.
-- `--transcription`: Enable/disable transcription (default: True)
-- `--diarization`: Enable/disable speaker diarization (default: False)
-- `--confidence-validation`: Use confidence scores for faster validation. Transcription will be faster but punctuation might be less accurate (default: True)
-- `--warmup-file`: The path to a speech audio wav file to warm up Whisper so that the very first chunk processing is fast:
-  - If not set, uses https://github.com/ggerganov/whisper.cpp/raw/master/samples/jfk.wav.
-  - If False, no warmup is performed.
-- `--min-chunk-size` Minimum audio chunk size in seconds. It waits up to this time to do processing. If the processing takes shorter time, it waits, otherwise it processes the whole segment that was received by this time.
-- `--model` {_tiny.en, tiny, base.en, base, small.en, small, medium.en, medium, large-v1, large-v2, large-v3, large, large-v3-turbo_} Name size of the Whisper model to use (default: tiny). The model is automatically downloaded from the model hub if not present in model cache dir.
-- `--model_cache_dir` Overriding the default model cache dir where models downloaded from the hub are saved
-- `--model_dir` Dir where Whisper model.bin and other files are saved. This option overrides --model and --model_cache_dir parameter.
-- `--lan`, `--language` Source language code, e.g. en,de,cs, or 'auto' for language detection.
-- `--task` {_transcribe, translate_} Transcribe or translate. If translate is set, we recommend avoiding the _large-v3-turbo_ backend, as it [performs significantly worse](https://github.com/QuentinFuxa/whisper_streaming_web/issues/40#issuecomment-2652816533) than other models for translation.
-- `--backend` {_faster-whisper, whisper_timestamped, openai-api, mlx-whisper_} Load only this backend for Whisper processing.
-- `--vac` Use VAC = voice activity controller. Requires torch.
-- `--vac-chunk-size` VAC sample size in seconds.
-- `--vad` Use VAD = voice activity detection, with the default parameters.
-- `--buffer_trimming` {_sentence, segment_} Buffer trimming strategy -- trim completed sentences marked with punctuation mark and detected by sentence segmenter, or the completed segments returned by Whisper. Sentence segmenter must be installed for "sentence" option.
-- `--buffer_trimming_sec` Buffer trimming length threshold in seconds. If buffer length is longer, trimming sentence/segment is triggered.
-5. **Open the Provided HTML**:
-  - By default, the server root endpoint `/` serves a simple `live_transcription.html` page.
-  - Open your browser at `http://localhost:8000` (or replace `localhost` and `8000` with whatever you specified).
-  - The page uses vanilla JavaScript and the WebSocket API to capture your microphone and stream audio to the server in real time.
-### How the Live Interface Works
+## Usage
+### Using the command-line tool
+After installation, you can start the server using the provided command-line tool:
+```bash
+whisperlivekit-server --host 0.0.0.0 --port 8000 --model tiny.en
+```
+Then open your browser at `http://localhost:8000` (or your specified host and port).
+### Using the library in your code
```python
from whisperlivekit import WhisperLiveKit
from whisperlivekit.audio_processor import AudioProcessor
from fastapi import FastAPI, WebSocket
from fastapi.responses import HTMLResponse
import asyncio

kit = WhisperLiveKit(model="medium", diarization=True)
app = FastAPI()  # Create a FastAPI application

@app.get("/")
async def get():
    return HTMLResponse(kit.web_interface())  # Use the built-in web interface

async def handle_websocket_results(websocket, results_generator):  # Sends results to frontend
    async for response in results_generator:
        await websocket.send_json(response)

@app.websocket("/asr")
async def websocket_endpoint(websocket: WebSocket):
    audio_processor = AudioProcessor()
    await websocket.accept()
    results_generator = await audio_processor.create_tasks()
    websocket_task = asyncio.create_task(handle_websocket_results(websocket, results_generator))
    while True:
        message = await websocket.receive_bytes()
        await audio_processor.process_audio(message)
```
For a complete audio processing example, check [whisper_fastapi_online_server.py](https://github.com/QuentinFuxa/WhisperLiveKit/blob/main/whisper_fastapi_online_server.py)
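To serve the snippet above, one option is a standard uvicorn entry point; a minimal sketch, assuming the code lives in a file named `main.py` (hypothetical name):

```python
# Launch the FastAPI app defined above (assumed saved as main.py).
import uvicorn

if __name__ == "__main__":
    uvicorn.run("main:app", host="0.0.0.0", port=8000)
```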
## Configuration Options
The following parameters are supported when initializing `WhisperLiveKit`:
- `--host` and `--port` let you specify the server's IP/port.
- `--min-chunk-size`: Minimum audio chunk size in seconds. The server waits up to this long before processing; if processing finishes faster than audio arrives, it waits, otherwise it processes the whole segment received by this time. Make sure this value aligns with the chunk size selected in the frontend; if misaligned, the system will still work but may unnecessarily over-process audio data.
- `--no-transcription`: Disable transcription (enabled by default)
- `--diarization`: Enable speaker diarization (disabled by default)
- `--confidence-validation`: Use confidence scores for faster validation. Transcription will be faster but punctuation might be less accurate (disabled by default)
- `--warmup-file`: The path to a speech audio wav file to warm up Whisper so that the very first chunk processing is fast:
- If not set, uses https://github.com/ggerganov/whisper.cpp/raw/master/samples/jfk.wav.
- If False, no warmup is performed.
- `--model`: Name/size of the Whisper model to use (default: tiny). Suggested values: tiny.en, tiny, base.en, base, small.en, small, medium.en, medium, large-v1, large-v2, large-v3, large, large-v3-turbo. The model is automatically downloaded from the model hub if not present in the model cache dir.
- `--model_cache_dir`: Overrides the default model cache dir where models downloaded from the hub are saved.
- `--model_dir`: Dir where the Whisper model.bin and other files are saved. This option overrides the --model and --model_cache_dir parameters.
- `--lan`, `--language`: Source language code, e.g. en,de,cs, or 'auto' for language detection.
- `--task` {_transcribe, translate_}: Transcribe or translate. If translate is set, we recommend avoiding the _large-v3-turbo_ backend, as it [performs significantly worse](https://github.com/QuentinFuxa/whisper_streaming_web/issues/40#issuecomment-2652816533) than other models for translation.
- `--backend` {_faster-whisper, whisper_timestamped, openai-api, mlx-whisper_}: Load only this backend for Whisper processing.
- `--vac`: Use VAC = voice activity controller. Requires torch. (disabled by default)
- `--vac-chunk-size`: VAC sample size in seconds.
- `--no-vad`: Disable VAD (voice activity detection), which is enabled by default.
- `--buffer_trimming` {_sentence, segment_}: Buffer trimming strategy -- trim completed sentences marked with punctuation mark and detected by sentence segmenter, or the completed segments returned by Whisper. Sentence segmenter must be installed for "sentence" option.
- `--buffer_trimming_sec`: Buffer trimming length threshold in seconds. If buffer length is longer, trimming sentence/segment is triggered.
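As a rough guide, the same options surface as keyword arguments when constructing `WhisperLiveKit` in code; a minimal sketch using only kwargs that appear elsewhere in this README (`model`, `diarization`, `transcription`), anything beyond those would be an assumption:

```python
from whisperlivekit import WhisperLiveKit

# Same toggles as the CLI flags above, passed programmatically.
kit = WhisperLiveKit(model="small", diarization=True, transcription=True)

# Resolved settings end up on kit.args, as used by the packaged
# server's main() shown later in this diff.
print(kit.args.host, kit.args.port)
```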
## How the Live Interface Works
- Once you **allow microphone access**, the page records small chunks of audio using the **MediaRecorder** API in **webm/opus** format.
- These chunks are sent over a **WebSocket** to the FastAPI endpoint at `/asr`.
- The Python server decodes `.webm` chunks on the fly using **FFmpeg** and streams them into the **whisper streaming** implementation for transcription.
-- **Partial transcription** appears as soon as enough audio is processed. The unvalidated text is shown in **lighter or grey color** (i.e., an aperçu) to indicate it's still buffered partial output. Once Whisper finalizes that segment, it's displayed in normal text.
+- **Partial transcription** appears as soon as enough audio is processed. The "unvalidated" text is shown in **lighter or grey color** (i.e., an 'aperçu') to indicate it's still buffered partial output. Once Whisper finalizes that segment, it's displayed in normal text.
- You can watch the transcription update in near real time, ideal for demos, prototyping, or quick debugging.
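For quick testing without a browser, the same protocol can be driven from a script. A minimal sketch of a Python client (the `websockets` package, file name, chunk size, and pacing are assumptions; the `/asr` endpoint and the bytes-in/JSON-out exchange are as described above):

```python
import asyncio
import json

import websockets  # pip install websockets

async def stream_webm(path: str, url: str = "ws://localhost:8000/asr"):
    async with websockets.connect(url) as ws:
        async def receive():
            # Print transcription results as the server emits them.
            async for message in ws:
                print(json.loads(message))

        receiver = asyncio.create_task(receive())
        with open(path, "rb") as f:
            # Send the recording in small chunks, mimicking MediaRecorder.
            while chunk := f.read(16_000):
                await ws.send(chunk)
                await asyncio.sleep(0.25)  # pace roughly like a live stream
        await asyncio.sleep(5)  # give the server time to flush final results
        receiver.cancel()

asyncio.run(stream_webm("sample.webm"))
```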
### Deploying to a Remote Server
@@ -149,10 +176,8 @@ If you want to **deploy** this setup:
1. **Host the FastAPI app** behind a production-grade HTTP(S) server (like **Uvicorn + Nginx** or Docker). If you use HTTPS, use "wss" instead of "ws" in the WebSocket URL.
2. The **HTML/JS page** can be served by the same FastAPI app or a separate static host.
-3. Users open the page in **Chrome/Firefox** (any modern browser that supports MediaRecorder + WebSocket).
-No additional front-end libraries or frameworks are required. The WebSocket logic in `live_transcription.html` is minimal enough to adapt for your own custom UI or embed in other pages.
+3. Users open the page in **Chrome/Firefox** (any modern browser that supports MediaRecorder + WebSocket). No additional front-end libraries or frameworks are required.
## Acknowledgments
-This project builds upon the foundational work of the Whisper Streaming project. We extend our gratitude to the original authors for their contributions.
+This project builds upon the foundational work of the Whisper Streaming and Diart projects. We extend our gratitude to the original authors for their contributions.

demo.png (binary file changed, not shown): 469 KiB before, 463 KiB after.
setup.py

@@ -1,8 +1,7 @@
from setuptools import setup, find_packages

setup(
    name="whisperlivekit",
-   version="0.1.0",
+   version="0.1.3",
    description="Real-time, Fully Local Whisper's Speech-to-Text and Speaker Diarization",
    long_description=open("README.md", "r", encoding="utf-8").read(),
    long_description_content_type="text/markdown",
@@ -22,13 +21,17 @@ setup(
"diarization": ["diart"],
"vac": ["torch"],
"sentence": ["mosestokenizer", "wtpsplit"],
"whisper": ["whisper"],
"whisper-timestamped": ["whisper-timestamped"],
"mlx-whisper": ["mlx-whisper"],
"openai": ["openai"],
},
package_data={
'whisperlivekit': ['web/*.html'],
},
entry_points={
'console_scripts': [
'whisperlivekit-server=whisperlivekit.server:run_server',
'whisperlivekit-server=whisperlivekit.basic_server:main',
],
},
classifiers=[

whisperlivekit/basic_server.py (new file)

@@ -0,0 +1,86 @@
from contextlib import asynccontextmanager
from fastapi import FastAPI, WebSocket, WebSocketDisconnect
from fastapi.responses import HTMLResponse
from fastapi.middleware.cors import CORSMiddleware
from whisperlivekit import WhisperLiveKit
from whisperlivekit.audio_processor import AudioProcessor
import asyncio
import logging
import os

logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s")
logging.getLogger().setLevel(logging.WARNING)
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

kit = None

@asynccontextmanager
async def lifespan(app: FastAPI):
    global kit
    kit = WhisperLiveKit()
    yield

app = FastAPI(lifespan=lifespan)
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

@app.get("/")
async def get():
    return HTMLResponse(kit.web_interface())

async def handle_websocket_results(websocket, results_generator):
    """Consumes results from the audio processor and sends them via WebSocket."""
    try:
        async for response in results_generator:
            await websocket.send_json(response)
    except Exception as e:
        logger.warning(f"Error in WebSocket results handler: {e}")

@app.websocket("/asr")
async def websocket_endpoint(websocket: WebSocket):
    audio_processor = AudioProcessor()
    await websocket.accept()
    logger.info("WebSocket connection opened.")

    results_generator = await audio_processor.create_tasks()
    websocket_task = asyncio.create_task(handle_websocket_results(websocket, results_generator))

    try:
        while True:
            message = await websocket.receive_bytes()
            await audio_processor.process_audio(message)
    except WebSocketDisconnect:
        logger.warning("WebSocket disconnected.")
    finally:
        websocket_task.cancel()
        await audio_processor.cleanup()
        logger.info("WebSocket endpoint cleaned up.")

def main():
    """Entry point for the CLI command."""
    import uvicorn

    temp_kit = WhisperLiveKit(transcription=False, diarization=False)

    uvicorn.run(
        "whisperlivekit.basic_server:app",
        host=temp_kit.args.host,
        port=temp_kit.args.port,
        reload=True,
        log_level="info",
    )

if __name__ == "__main__":
    main()
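Note that `main()` passes the application as an import string (`"whisperlivekit.basic_server:app"`) rather than the `app` object itself; uvicorn requires the import-string form for `reload=True` to work.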

WhisperLiveKit core module (defines `parse_args` and the `WhisperLiveKit` class)

@@ -1,4 +1,7 @@
-from whisperlivekit.whisper_streaming_custom.whisper_online import backend_factory, warmup_asr
+try:
+    from whisperlivekit.whisper_streaming_custom.whisper_online import backend_factory, warmup_asr
+except ImportError:
+    from .whisper_streaming_custom.whisper_online import backend_factory, warmup_asr
from argparse import Namespace, ArgumentParser
def parse_args():
@@ -26,23 +29,21 @@ def parse_args():
    parser.add_argument(
        "--confidence-validation",
-       type=bool,
-       default=False,
+       action="store_true",
        help="Accelerates validation of tokens using confidence scores. Transcription will be faster but punctuation might be less accurate.",
    )
    parser.add_argument(
        "--diarization",
-       type=bool,
-       default=True,
-       help="Whether to enable speaker diarization.",
+       action="store_true",
+       default=False,
+       help="Enable speaker diarization.",
    )
    parser.add_argument(
-       "--transcription",
-       type=bool,
-       default=True,
-       help="To disable to only see live diarization results.",
+       "--no-transcription",
+       action="store_true",
+       help="Disable transcription to only see live diarization results.",
    )
    parser.add_argument(
@@ -51,15 +52,14 @@ def parse_args():
        default=0.5,
        help="Minimum audio chunk size in seconds. It waits up to this time to do processing. If the processing takes shorter time, it waits, otherwise it processes the whole segment that was received by this time.",
    )
    parser.add_argument(
        "--model",
        type=str,
        default="tiny",
-       choices="tiny.en,tiny,base.en,base,small.en,small,medium.en,medium,large-v1,large-v2,large-v3,large,large-v3-turbo".split(
-           ","
-       ),
-       help="Name size of the Whisper model to use (default: large-v2). The model is automatically downloaded from the model hub if not present in model cache dir.",
+       help="Name size of the Whisper model to use (default: tiny). Suggested values: tiny.en,tiny,base.en,base,small.en,small,medium.en,medium,large-v1,large-v2,large-v3,large,large-v3-turbo. The model is automatically downloaded from the model hub if not present in model cache dir.",
    )
    parser.add_argument(
        "--model_cache_dir",
        type=str,
@@ -102,12 +102,13 @@ def parse_args():
    parser.add_argument(
        "--vac-chunk-size", type=float, default=0.04, help="VAC sample size in seconds."
    )
    parser.add_argument(
-       "--vad",
+       "--no-vad",
        action="store_true",
-       default=True,
-       help="Use VAD = voice activity detection, with the default parameters.",
+       help="Disable VAD (voice activity detection).",
    )
    parser.add_argument(
        "--buffer_trimming",
        type=str,
@@ -131,6 +132,12 @@ def parse_args():
    )
    args = parser.parse_args()

+   args.transcription = not args.no_transcription
+   args.vad = not args.no_vad
+   delattr(args, 'no_transcription')
+   delattr(args, 'no_vad')

    return args

class WhisperLiveKit:
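The tail of `parse_args` converts the new disable switches back into the positive attributes the rest of the codebase expects; a self-contained sketch of that argparse pattern:

```python
import argparse

parser = argparse.ArgumentParser()
# The CLI exposes a disable switch (off by default)...
parser.add_argument("--no-vad", action="store_true", help="Disable VAD.")
args = parser.parse_args(["--no-vad"])

# ...while internal code keeps reading the positive attribute.
args.vad = not args.no_vad
delattr(args, "no_vad")
assert args.vad is False
```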

Whisper backend module (defines `FasterWhisperASR`)

@@ -3,7 +3,10 @@ import logging
import io
import soundfile as sf
import math
-import torch
+try:
+    import torch
+except ImportError:
+    torch = None
from typing import List
import numpy as np
from whisperlivekit.timed_objects import ASRToken
@@ -102,7 +105,7 @@ class FasterWhisperASR(ASRBase):
            model_size_or_path = modelsize
        else:
            raise ValueError("Either modelsize or model_dir must be set")
-       device = "cuda" if torch.cuda.is_available() else "cpu"
+       device = "cuda" if torch and torch.cuda.is_available() else "cpu"
        compute_type = "float16" if device == "cuda" else "float32"

        model = WhisperModel(
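The guarded import above lets the backend fall back to CPU when torch is not installed; a self-contained sketch of the same optional-dependency pattern:

```python
# Optional dependency: degrade gracefully when torch is absent.
try:
    import torch
except ImportError:
    torch = None

device = "cuda" if torch and torch.cuda.is_available() else "cpu"
compute_type = "float16" if device == "cuda" else "float32"
print(f"Running on {device} with {compute_type}")
```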