- BENCHMARK.md: noted that whisper also supports --language auto; voxtral is
  not the only backend with auto-detection. Fixed the mlx-whisper speed
  comparison (LA is actually faster than SS for mlx-whisper, not merely
  comparable as previously stated).
- metrics.py: median calculation was wrong for even-length lists
(took upper middle instead of averaging the two middle values).
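The corrected behavior can be sketched as follows (a minimal standalone version; the helper name and exact signature in metrics.py may differ):

```python
def median(values):
    """Median of a non-empty list: the middle element for odd length,
    the average of the two middle elements for even length."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:
        return s[mid]
    # The buggy version returned s[mid] (the upper middle) here.
    return (s[mid - 1] + s[mid]) / 2
```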
- metrics_collector.py: RTF was inflated because log_summary() used
wall-clock elapsed time instead of sum of actual ASR call durations.
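The fix amounts to deriving RTF from the summed per-call ASR durations rather than wall-clock elapsed time, which also counts idle waiting for audio. A minimal sketch of the idea (class and method names are hypothetical, not the repo's actual API):

```python
import time

class MetricsCollector:
    """Tracks time spent inside ASR calls so RTF excludes idle waiting."""

    def __init__(self):
        self.asr_durations = []   # seconds spent inside ASR calls only
        self.audio_seconds = 0.0  # total audio duration processed

    def timed_asr_call(self, asr_fn, chunk, chunk_seconds):
        start = time.perf_counter()
        result = asr_fn(chunk)
        self.asr_durations.append(time.perf_counter() - start)
        self.audio_seconds += chunk_seconds
        return result

    def rtf(self):
        # RTF = processing time / audio time. Using wall-clock elapsed
        # time here would also count time spent waiting for new audio,
        # inflating the reported RTF.
        return sum(self.asr_durations) / self.audio_seconds
```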
- README.md: clarified that whisper also supports auto language detection;
  voxtral just does it better.
- Added 2 new median tests (even + odd length).
Test suite covering:
- metrics.py: WER computation, timestamp accuracy, text normalization
- config.py: defaults, .en model detection, policy aliases, from_namespace
- timed_objects.py: ASRToken, Silence, Transcript, Segment, FrontData
- hypothesis_buffer.py: insert, flush, LCP matching, pop_committed
- silence_handling.py: state machine, double-counting regression test
- audio_processor.py: async pipeline with MockOnlineProcessor
All tests run in ~1.3s without downloading any ASR models.
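The LCP matching exercised in the hypothesis_buffer.py tests refers to committing only the longest common word prefix shared by consecutive ASR hypotheses; a minimal sketch of that idea (hypothetical helper, not the repo's actual implementation):

```python
def longest_common_prefix(prev_words, new_words):
    """Words stable across two successive hypotheses are safe to commit;
    everything after the first disagreement stays tentative."""
    committed = []
    for a, b in zip(prev_words, new_words):
        if a != b:
            break
        committed.append(a)
    return committed
```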
Added pytest and pytest-asyncio as optional test dependencies.
Updated .gitignore to allow the tests/ directory.