5 Commits
0.0.1 ... 0.1.1

Author        SHA1        Date                        Message
Quentin Fuxa  38cb54640f  2025-03-19 15:49:39 +01:00  Update README.md
Quentin Fuxa  81268a7ca3  2025-03-19 15:40:54 +01:00  update CLI launch
Quentin Fuxa  33cbd24964  2025-03-19 15:14:38 +01:00  Update README.md
Quentin Fuxa  e966e78584  2025-03-19 15:13:42 +01:00  Merge pull request #92 from QuentinFuxa/refacto_lib (script to lib)
Quentin Fuxa  c13d36b5e7  2025-03-19 11:20:28 +01:00  Merge pull request #91 from QuentinFuxa/refacto_lib (move all audio processing out of /asr endpoint)
4 changed files with 180 additions and 75 deletions

README.md

@@ -5,7 +5,7 @@
 This project is based on [Whisper Streaming](https://github.com/ufal/whisper_streaming) and lets you transcribe audio directly from your browser. Simply launch the local server and grant microphone access. Everything runs locally on your machine ✨
 <p align="center">
-<img src="https://raw.githubusercontent.com/QuentinFuxa/WhisperLiveKit/demo.png" alt="Demo Screenshot" width="730">
+<img src="https://raw.githubusercontent.com/QuentinFuxa/WhisperLiveKit/refs/heads/main/demo.png" alt="Demo Screenshot" width="730">
 </p>
 ### Differences from [Whisper Streaming](https://github.com/ufal/whisper_streaming)
@@ -26,7 +26,7 @@ This project is based on [Whisper Streaming](https://github.com/ufal/whisper_str
 ## Installation
-### Via pip
+### Via pip (recommended)
 ```bash
 pip install whisperlivekit
@@ -46,9 +46,7 @@ pip install whisperlivekit
 You need to install FFmpeg on your system:
-- Install system dependencies:
 ```bash
-# Install FFmpeg on your system (required for audio processing)
 # For Ubuntu/Debian:
 sudo apt install ffmpeg
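Not part of the diff, but a quick way to confirm FFmpeg is actually discoverable before launching the server; a minimal sketch using only the Python standard library:

```python
import shutil

# WhisperLiveKit decodes incoming webm audio with FFmpeg, so the binary
# must be discoverable on PATH before the server starts.
if shutil.which("ffmpeg") is None:
    raise RuntimeError("ffmpeg not found on PATH - install it first")
print("ffmpeg:", shutil.which("ffmpeg"))
```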
@@ -59,53 +57,70 @@ You need to install FFmpeg on your system:
 # Download from https://ffmpeg.org/download.html and add to PATH
 ```
-- Install required Python dependencies:
+### Optional Dependencies
 ```bash
-# Whisper streaming required dependencies
-pip install librosa soundfile
-# Whisper streaming web required dependencies
-pip install fastapi ffmpeg-python
-```
-- Install at least one whisper backend among:
-```
-whisper
-whisper-timestamped
-faster-whisper (faster backend on NVIDIA GPU)
-mlx-whisper (faster backend on Apple Silicon)
-```
-- Optionnal dependencies
-```
 # If you want to use VAC (Voice Activity Controller). Useful for preventing hallucinations
-torch
+pip install torch
 # If you choose sentences as buffer trimming strategy
-mosestokenizer
-wtpsplit
-tokenize_uk # If you work with Ukrainian text
-# If you want to run the server using uvicorn (recommended)
-uvicorn
+pip install mosestokenizer wtpsplit
+pip install tokenize_uk # If you work with Ukrainian text
 # If you want to use diarization
-diart
+pip install diart
 ```
-Diart uses by default [pyannote.audio](https://github.com/pyannote/pyannote-audio) models from the _huggingface hub_. To use them, please follow the steps described [here](https://github.com/juanmc2005/diart?tab=readme-ov-file#get-access-to--pyannote-models).
+Diart uses [pyannote.audio](https://github.com/pyannote/pyannote-audio) models from the _huggingface hub_. To use them, please follow the steps described [here](https://github.com/juanmc2005/diart?tab=readme-ov-file#get-access-to--pyannote-models).
-3. **Run the FastAPI Server**:
+## Usage
+### Using the command-line tool
+After installation, you can start the server using the provided command-line tool:
 ```bash
-python whisper_fastapi_online_server.py --host 0.0.0.0 --port 8000
+whisperlivekit-server --host 0.0.0.0 --port 8000 --model tiny.en
 ```
-**Parameters**
-The following parameters are supported:
+Then open your browser at `http://localhost:8000` (or your specified host and port).
+### Using the library in your code
+```python
+from whisperlivekit import WhisperLiveKit
+from whisperlivekit.audio_processor import AudioProcessor
+from fastapi import FastAPI, WebSocket
+
+kit = WhisperLiveKit(model="medium", diarization=True)
+app = FastAPI() # Create a FastAPI application
+
+@app.get("/")
+async def get():
+    return HTMLResponse(kit.web_interface()) # Use the built-in web interface
+
+async def handle_websocket_results(websocket, results_generator): # Sends results to frontend
+    async for response in results_generator:
+        await websocket.send_json(response)
+
+@app.websocket("/asr")
+async def websocket_endpoint(websocket: WebSocket):
+    audio_processor = AudioProcessor()
+    await websocket.accept()
+    results_generator = await audio_processor.create_tasks()
+    websocket_task = asyncio.create_task(handle_websocket_results(websocket, results_generator))
+    while True:
+        message = await websocket.receive_bytes()
+        await audio_processor.process_audio(message)
+```
+For a complete audio processing example, check [whisper_fastapi_online_server.py](https://github.com/QuentinFuxa/WhisperLiveKit/blob/main/whisper_fastapi_online_server.py)
+## Configuration Options
+The following parameters are supported when initializing `WhisperLiveKit`:
 - `--host` and `--port` let you specify the server's IP/port.
 - `-min-chunk-size` sets the minimum chunk size for audio processing. Make sure this value aligns with the chunk size selected in the frontend. If not aligned, the system will work but may unnecessarily over-process audio data.
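Regarding the Diart note in the hunk above: the pyannote models are gated on the Hugging Face Hub, so after accepting their terms you also need a token available on the machine. A minimal sketch, assuming the `huggingface_hub` package is installed:

```python
from huggingface_hub import login

# Prompts for a Hugging Face access token and caches it locally;
# running `huggingface-cli login` from a shell does the same thing.
login()
```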
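Two small gaps in the library example added above: it uses `HTMLResponse` and `asyncio` without importing them, and it never tears down the processor when the client disconnects. A self-contained sketch of the same pattern, reusing only APIs that appear elsewhere in this diff (`web_interface`, `create_tasks`, `process_audio`, `cleanup`):

```python
import asyncio
from fastapi import FastAPI, WebSocket, WebSocketDisconnect
from fastapi.responses import HTMLResponse
from whisperlivekit import WhisperLiveKit
from whisperlivekit.audio_processor import AudioProcessor

kit = WhisperLiveKit(model="medium", diarization=True)
app = FastAPI()

@app.get("/")
async def get():
    # Serve the bundled web interface
    return HTMLResponse(kit.web_interface())

async def handle_websocket_results(websocket: WebSocket, results_generator):
    # Forward transcription results to the browser as they arrive
    async for response in results_generator:
        await websocket.send_json(response)

@app.websocket("/asr")
async def websocket_endpoint(websocket: WebSocket):
    audio_processor = AudioProcessor()
    await websocket.accept()
    results_generator = await audio_processor.create_tasks()
    results_task = asyncio.create_task(handle_websocket_results(websocket, results_generator))
    try:
        while True:
            message = await websocket.receive_bytes()
            await audio_processor.process_audio(message)
    except WebSocketDisconnect:
        pass
    finally:
        # Mirror the cleanup done in basic_server.py below
        results_task.cancel()
        await audio_processor.cleanup()
```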
@@ -135,12 +150,13 @@ You need to install FFmpeg on your system:
 - Open your browser at `http://localhost:8000` (or replace `localhost` and `8000` with whatever you specified).
 - The page uses vanilla JavaScript and the WebSocket API to capture your microphone and stream audio to the server in real time.
-## How the Live Interface Works
+### How the Live Interface Works
 - Once you **allow microphone access**, the page records small chunks of audio using the **MediaRecorder** API in **webm/opus** format.
 - These chunks are sent over a **WebSocket** to the FastAPI endpoint at `/asr`.
 - The Python server decodes `.webm` chunks on the fly using **FFmpeg** and streams them into the **whisper streaming** implementation for transcription.
-- **Partial transcription** appears as soon as enough audio is processed. The unvalidated text is shown in **lighter or grey color** (i.e., an aperçu) to indicate it's still buffered partial output. Once Whisper finalizes that segment, it's displayed in normal text.
+- **Partial transcription** appears as soon as enough audio is processed. The "unvalidated" text is shown in **lighter or grey color** (i.e., an 'aperçu') to indicate it's still buffered partial output. Once Whisper finalizes that segment, it's displayed in normal text.
 - You can watch the transcription update in near real time, ideal for demos, prototyping, or quick debugging.
 ### Deploying to a Remote Server
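The flow in the bullets above (binary audio chunks in, JSON results back over `/asr`) can also be exercised without a browser. A minimal sketch of a test client, assuming the third-party `websockets` package and a pre-recorded `sample.webm` (both illustrative, not part of the project):

```python
import asyncio
import contextlib
import websockets  # pip install websockets

async def stream_file(path: str, uri: str = "ws://localhost:8000/asr") -> None:
    async with websockets.connect(uri) as ws:
        async def reader():
            # Print each JSON transcription message the server sends back
            async for message in ws:
                print(message)

        reader_task = asyncio.create_task(reader())
        with open(path, "rb") as f:
            # Send the recording in small chunks to approximate a live stream
            while chunk := f.read(4096):
                await ws.send(chunk)
                await asyncio.sleep(0.1)
        await asyncio.sleep(2.0)  # allow final results to arrive
        reader_task.cancel()
        with contextlib.suppress(asyncio.CancelledError):
            await reader_task

asyncio.run(stream_file("sample.webm"))
```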

setup.py

@@ -28,7 +28,7 @@ setup(
     },
     entry_points={
         'console_scripts': [
-            'whisperlivekit-server=whisperlivekit.server:run_server',
+            'whisperlivekit-server=whisperlivekit.basic_server:main',
         ],
     },
     classifiers=[
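The updated `console_scripts` entry maps the `whisperlivekit-server` command to `main()` in `whisperlivekit/basic_server.py` (the new file below), so the installed command is equivalent to this two-liner:

```python
from whisperlivekit.basic_server import main

main()  # same as running `whisperlivekit-server` from a shell
```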

whisperlivekit/basic_server.py (new file)

@@ -0,0 +1,86 @@
from contextlib import asynccontextmanager
from fastapi import FastAPI, WebSocket, WebSocketDisconnect
from fastapi.responses import HTMLResponse
from fastapi.middleware.cors import CORSMiddleware
from whisperlivekit import WhisperLiveKit
from whisperlivekit.audio_processor import AudioProcessor
import asyncio
import logging
import os

logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s")
logging.getLogger().setLevel(logging.WARNING)
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

kit = None

@asynccontextmanager
async def lifespan(app: FastAPI):
    global kit
    kit = WhisperLiveKit()
    yield

app = FastAPI(lifespan=lifespan)
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

@app.get("/")
async def get():
    return HTMLResponse(kit.web_interface())

async def handle_websocket_results(websocket, results_generator):
    """Consumes results from the audio processor and sends them via WebSocket."""
    try:
        async for response in results_generator:
            await websocket.send_json(response)
    except Exception as e:
        logger.warning(f"Error in WebSocket results handler: {e}")

@app.websocket("/asr")
async def websocket_endpoint(websocket: WebSocket):
    audio_processor = AudioProcessor()
    await websocket.accept()
    logger.info("WebSocket connection opened.")
    results_generator = await audio_processor.create_tasks()
    websocket_task = asyncio.create_task(handle_websocket_results(websocket, results_generator))
    try:
        while True:
            message = await websocket.receive_bytes()
            await audio_processor.process_audio(message)
    except WebSocketDisconnect:
        logger.warning("WebSocket disconnected.")
    finally:
        websocket_task.cancel()
        await audio_processor.cleanup()
        logger.info("WebSocket endpoint cleaned up.")

def main():
    """Entry point for the CLI command."""
    import uvicorn
    temp_kit = WhisperLiveKit(transcription=False, diarization=False)
    uvicorn.run(
        "whisperlivekit.basic_server:app",
        host=temp_kit.args.host,
        port=temp_kit.args.port,
        reload=True,
        log_level="info",
    )

if __name__ == "__main__":
    main()
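One detail worth noting in `main()`: it passes the app as the import string `"whisperlivekit.basic_server:app"` rather than the object, because uvicorn requires the string form when `reload=True`. If auto-reload isn't needed, a sketch of the object-based call (host and port values here are illustrative):

```python
import uvicorn
from whisperlivekit.basic_server import app

# Without reload, the application object can be passed directly.
uvicorn.run(app, host="0.0.0.0", port=8000, log_level="info")
```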

whisperlivekit/core.py

@@ -1,4 +1,7 @@
-from whisperlivekit.whisper_streaming_custom.whisper_online import backend_factory, warmup_asr
+try:
+    from whisperlivekit.whisper_streaming_custom.whisper_online import backend_factory, warmup_asr
+except:
+    from whisper_streaming_custom.whisper_online import backend_factory, warmup_asr
 from argparse import Namespace, ArgumentParser
 
 def parse_args():
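One note on the hunk above: the bare `except:` catches everything, including `KeyboardInterrupt`. If you adapt this installed-vs-source fallback elsewhere, scoping it to `ImportError` is safer (a suggested variant, not what this commit ships):

```python
try:
    # Normal case: the package is installed
    from whisperlivekit.whisper_streaming_custom.whisper_online import backend_factory, warmup_asr
except ImportError:
    # Fallback: running directly from a source checkout
    from whisper_streaming_custom.whisper_online import backend_factory, warmup_asr
```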