51 Commits

Author SHA1 Message Date
RVC-Boss
725db8734a Update README.md 2023-04-27 16:16:38 +00:00
RVC-Boss
dfb298da66 Update Changelog_CN.md 2023-04-27 16:09:12 +00:00
RVC-Boss
af208d5210 Add files via upload 2023-04-27 23:34:03 +08:00
EntropyRiser
a149107c5a Add full support of all samplerate. (#182)
Co-authored-by: EntropyRiser <1832783120@qq.com>
2023-04-27 18:52:01 +08:00
RVC-Boss
80b54499eb Update vc_infer_pipeline.py 2023-04-27 16:11:45 +08:00
M.Hosoi
7b8a0bb6fc Maximum value of save_every_epoch changed from 50 to 200 (#178) 2023-04-27 10:59:49 +08:00
RVC-Boss
a6cb4d3625 Support 16xx GPU and 4G GPU inference 2023-04-27 01:40:04 +08:00
RVC-Boss
2ac8d553ab Update infer-web.py 2023-04-26 15:39:19 +00:00
RVC-Boss
dc0c8756b5 Total_fea not needed now. Better and faster retrieval performance. 2023-04-26 19:17:48 +08:00
RVC-Boss
9be8048302 Total_fea not needed now. Better and faster retrieval performance. 2023-04-26 19:13:54 +08:00
RVC-Boss
a21f7ec11f total_fea not needed now 2023-04-26 19:12:47 +08:00
JiHo Han
71e2733719 docs(README.ko): add Korean Translation of README.md (#157)
* docs(README.ko): add Korean Translation of README.md

* docs(Faiss): add Korean tips for Faiss

* docs(README): add hyperlinks for Korean translation on all README

* docs(training_tips): add Korean translation for training tips

---------

Co-authored-by: Ftps <63702646+Tps-F@users.noreply.github.com>
2023-04-25 21:55:48 +08:00
github-actions[bot]
964a85fe15 🎨 Sync locale (#163)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-04-25 10:53:56 +08:00
RVC-Boss
f2abfd5ad2 Update pyproject.toml 2023-04-25 10:51:38 +08:00
Styl
96b6d28718 Web UI to Spanish (#162) 2023-04-25 02:51:20 +00:00
Ftps
52661df363 fix json (#143) 2023-04-24 20:43:45 +08:00
github-actions[bot]
b4c653142d Format code (#142)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-04-24 20:35:56 +08:00
源文雨
376bd31c19 i18n: Improve English translation by @Estil1 (#141)
* fix: incomplete i18n rename

* Language 100% fixed 

I can create a Spanish version too

* 🎨 Sync locale

* Update en_US.json

---------

Co-authored-by: Styl <87322309+Estil1@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-04-24 20:31:46 +08:00
nadare
fdf12a4add Faiss Tutorial for Developers (#97)
* add faiss tutorial (WIP)

* add embedding tips
2023-04-24 20:18:34 +08:00
源文雨
f6ef9bca0c fix #115: hide allowed exceptions 2023-04-24 20:17:49 +08:00
Ναρουσέ·μ·γιουμεμί·Χινακάννα
9bac0ffaa7 Extend ONNX export and add WebUI support (#140)
* Add files via upload

* Add files via upload

* Add files via upload

* Add files via upload
2023-04-24 19:55:05 +08:00
tarepan
fb1d4b1882 Fix deprecated positional arguments in mel (#133) 2023-04-24 18:35:09 +08:00
tarepan
329d739e70 Refactor mel module (#132)
* Refactor wave-to-mel

* Add docstring on mel

* Refactor mel module import and variable names
2023-04-24 11:45:20 +08:00
RVC-Boss
a02ef401ad Update trainset_preprocess_pipeline_print.py 2023-04-22 14:39:17 +00:00
RVC-Boss
4fdb858a02 Add files via upload 2023-04-22 21:41:50 +08:00
RVC-Boss
bb535a4f71 Update en_US.json 2023-04-22 12:24:12 +00:00
RVC-Boss
44de5de840 Update i18n.py 2023-04-22 12:22:16 +00:00
RVC-Boss
978539ad0e Update extract_f0_print.py 2023-04-22 12:17:32 +00:00
tarepan
5d5ab5465f Refactor GPU cache during training (#108) 2023-04-22 12:05:00 +00:00
autumnmotor
297d92bf5d some change precision audio processing (#94)
* some change precision audio processing

* fix clipping problem in resample

resample sometimes causes signal clipping, not just librosa.resample

* fix error
2023-04-22 11:39:47 +00:00
RVC-Boss
c423f77a16 Add support for models without f0 2023-04-22 11:38:00 +00:00
EntropyRiser
2f51e932bf Change f0 predictor to harvest. (#123)
Co-authored-by: EntropyRiser <1832783120@qq.com>
2023-04-22 11:32:49 +00:00
Rice Cake
334da847d2 Update README.en.md (#121)
* Update README.en.md

* Update README.en.md
2023-04-22 14:06:18 +08:00
nadare
9b513a2375 Training tutorial (#109)
* add training tips in ja

* add english edition(using google translate)
2023-04-22 14:04:56 +08:00
Ftps
8acc0f2b71 fix port (#118) 2023-04-22 00:36:10 +08:00
Ftps
ebc0b227c1 Update i18n.py (#117) 2023-04-22 00:35:37 +08:00
Yugo Ogura
c941512427 chore: Just fix typo in README.ja.md (#114) 2023-04-22 00:33:11 +08:00
Rice Cake
a2dadfc931 Update README.en.md (#113) 2023-04-21 16:30:08 +08:00
Ftps
8bf1e0e026 Update faiss description (#95) 2023-04-19 13:45:04 +08:00
Kazuki
aca68fad09 improved Japanese translation. (#101) 2023-04-19 11:02:02 +08:00
Ftps
58397a92dc Automatically change faiss version (#92) 2023-04-18 14:03:30 +08:00
github-actions[bot]
0ca936c226 🎨 Sync locale (#90)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-04-17 15:26:59 +00:00
Ftps
294b751e34 some change translation (#91) 2023-04-17 22:37:00 +08:00
github-actions[bot]
1e71efb265 Format code (#89)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-04-17 14:09:03 +00:00
源文雨
35379217e8 Improve changelog format (#86)
* Improve changelog format

* Apply Code Formatter Change

---------

Co-authored-by: fumiama <fumiama@users.noreply.github.com>
2023-04-17 12:49:54 +00:00
EntropyRiser
88a43e14d1 Add non-search inference support. (#82)
Co-authored-by: 源文雨 <41315874+fumiama@users.noreply.github.com>
2023-04-17 12:49:42 +00:00
源文雨
b0f8a4c7d1 fix: json format (#84)
* Update extract_locale.py

* Apply Code Formatter Change

* Update locale_diff.py

* Apply Code Formatter Change

---------

Co-authored-by: fumiama <fumiama@users.noreply.github.com>
2023-04-17 12:49:29 +00:00
Ftps
5ab6713bb3 fix permission (#87) 2023-04-17 16:15:59 +08:00
Ftps
a4c64b0253 Autoformat when pushed directly (#79)
* Create push_format.yml

* remove unused
2023-04-17 11:09:05 +08:00
Ftps
bfe974ea9f Fix action when PR send (#83) 2023-04-17 10:49:57 +08:00
liujing04
0719b4aa5e Add files via upload 2023-04-16 18:56:20 +08:00
42 changed files with 2776 additions and 1016 deletions

@@ -2,19 +2,21 @@ name: pull format
on: [pull_request]
permissions:
contents: write
jobs:
pull_format:
permissions:
actions: write
checks: write
contents: write
runs-on: ubuntu-latest
continue-on-error: true
steps:
- uses: actions/checkout@v3
- name: checkout
continue-on-error: true
uses: actions/checkout@v3
with:
ref: ${{ github.head_ref }}
fetch-depth: 0
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:

.github/workflows/push_format.yml

@@ -0,0 +1,46 @@
name: push format
on:
push:
branches:
- main
permissions:
contents: write
pull-requests: write
jobs:
push_format:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
ref: ${{github.ref_name}}
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Install Black
run: pip install black
- name: Run Black
# run: black $(git ls-files '*.py')
run: black .
- name: Commit Back
continue-on-error: true
id: commitback
run: |
git config --local user.email "github-actions[bot]@users.noreply.github.com"
git config --local user.name "github-actions[bot]"
git add --all
git commit -m "Format code"
- name: Create Pull Request
if: steps.commitback.outcome == 'success'
continue-on-error: true
uses: peter-evans/create-pull-request@v4
with:
body: Apply Code Formatter Change
commit-message: Automatic code format

@@ -1,42 +1,34 @@
20230409
### 20230409
- 修正训练参数提升显卡平均利用率A100最高从25%提升至90%左右V100:50%->90%左右2060S:60%->85%左右P40:25%->95%左右,训练速度显著提升
- 修正参数总batch_size改为每张卡的batch_size
- 修正total_epoch最大限制100解锁至1000默认10提升至默认20
- 修复ckpt提取识别是否带音高错误导致推理异常的问题
- 修复分布式训练每个rank都保存一次ckpt的问题
- 特征提取进行nan特征过滤
- 修复静音输入输出随机辅音or噪声的问题老版模型需要重做训练集重训
&emsp;1-修正训练参数提升显卡平均利用率A100最高从25%提升至90%左右V100:50%->90%左右2060S:60%->85%左右P40:25%->95%左右,训练速度显著提升
### 20230416更新
- 新增本地实时变声迷你GUI双击go-realtime-gui.bat启动
- 训练推理均对<50Hz的频段进行滤波过滤
- 训练推理音高提取pyworld最低音高从默认80下降至50,50-80hz间的男声低音不会哑
- WebUI支持根据系统区域变更语言现支持en_USja_JPzh_CNzh_HKzh_SGzh_TW不支持的默认en_US
- 修正部分显卡识别例如V100-16G识别失败P4识别失败
&emsp;2-修正参数总batch_size改为每张卡的batch_size
### 20230428更新
- 升级faiss索引设置速度更快质量更高
- 取消total_npy依赖后续分享模型不再需要填写total_npy
- 解锁16系限制4G显存GPU给到4G的推理设置
- 修复部分音频格式下UVR5人声伴奏分离的bug
- 实时变声迷你gui增加对非40k与不带音高模型的支持
&emsp;3-修正total_epoch最大限制100解锁至1000默认10提升至默认20
&emsp;4-修复ckpt提取识别是否带音高错误导致推理异常的问题
&emsp;5-修复分布式训练每个rank都保存一次ckpt的问题
&emsp;6-特征提取进行nan特征过滤
&emsp;7-修复静音输入输出随机辅音or噪声的问题老版模型需要重做训练集重训
20230416更新
&emsp;1-新增本地实时变声迷你GUI双击go-realtime-gui.bat启动
&emsp;2-训练推理均对<50Hz的频段进行滤波过滤
&emsp;3-训练推理音高提取pyworld最低音高从默认80下降至50,50-80hz间的男声低音不会哑
&emsp;4-WebUI支持根据系统区域变更语言现支持en_USja_JPzh_CNzh_HKzh_SGzh_TW不支持的默认en_US
&emsp;5-修正部分显卡识别例如V100-16G识别失败P4识别失败
后续计划
&emsp;1-收集呼吸wav加入训练集修正呼吸变声电音的问题
&emsp;2-研究更优的默认faiss索引配置计划将索引打包进weights/xxx.pth中取消推理界面的 特征/检索库 选择
&emsp;3-根据显存情况和显卡架构自动给到最优配置batch size训练集切块推理音频长度相关的config训练是否fp16未来所有>=4G显存的>=pascal架构的显卡都可以训练或推理<4G显存的显卡不会进行支持
&emsp;4-我们正在训练增加了歌声训练集的底模未来会公开
&emsp;5-推理音高识别选项加入"是否开启中值滤波"
&emsp;6-增加选项:每次epoch保存的小模型均进行提取; 增加选项:设置默认测试集音频每次保存的小模型均在保存后对其进行推理导出用户可试听来选择哪个中间epoch最好
### 后续计划:
功能
- 增加选项:每次epoch保存的小模型均进行提取
- 增加选项:推理额外导出mp3至填写的路径
底模
- 收集呼吸wav加入训练集修正呼吸变声电音的问题
- 我们正在训练增加了歌声训练集的底模未来会公开
- 升级鉴别器
- 升级自监督特征结构

README.md

@@ -1,105 +1,108 @@
<div align="center">
<h1>Retrieval-based-Voice-Conversion-WebUI</h1>
一个基于VITS的简单易用的语音转换变声器框架<br><br>
[![madewithlove](https://forthebadge.com/images/badges/built-with-love.svg)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI)
<img src="https://counter.seku.su/cmoe?name=rvc&theme=r34" /><br>
[![Open In Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&color=525252)](https://colab.research.google.com/github/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Retrieval_based_Voice_Conversion_WebUI.ipynb)
[![Licence](https://img.shields.io/github/license/liujing04/Retrieval-based-Voice-Conversion-WebUI?style=for-the-badge)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/%E4%BD%BF%E7%94%A8%E9%9C%80%E9%81%B5%E5%AE%88%E7%9A%84%E5%8D%8F%E8%AE%AE-LICENSE.txt)
[![Huggingface](https://img.shields.io/badge/🤗%20-Spaces-yellow.svg?style=for-the-badge)](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/)
[![Discord](https://img.shields.io/badge/RVC%20Developers-Discord-7289DA?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/HcsmBBGyVk)
</div>
------
[**更新日志**](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Changelog_CN.md)
[**English**](./docs/README.en.md) | [**中文简体**](./README.md) | [**日本語**](./docs/README.ja.md)
> 点此查看我们的[演示视频](https://www.bilibili.com/video/BV1pm4y1z7Gm/) !
> 使用了RVC的实时语音转换: [w-okada/voice-changer](https://github.com/w-okada/voice-changer)
> 底模使用接近50小时的开源高质量VCTK训练集训练无版权方面的顾虑请大家放心使用
> 后续会陆续加入高质量有授权歌声训练集训练底模
## 简介
本仓库具有以下特点
+ 使用top1检索替换输入源特征为训练集特征来杜绝音色泄漏
+ 即便在相对较差的显卡上也能快速训练
+ 使用少量数据进行训练也能得到较好结果(推荐至少收集10分钟低底噪语音数据)
+ 可以通过模型融合来改变音色(借助ckpt处理选项卡中的ckpt-merge)
+ 简单易用的网页界面
+ 可调用UVR5模型来快速分离人声和伴奏
## 环境配置
推荐使用poetry配置环境。
以下指令需在Python版本大于3.8的环境中执行:
```bash
# 安装Pytorch及其核心依赖若已安装则跳过
# 参考自: https://pytorch.org/get-started/locally/
pip install torch torchvision torchaudio
#如果是win系统+Nvidia Ampere架构(RTX30xx),根据 #21 的经验需要指定pytorch对应的cuda版本
#pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
# 安装 Poetry 依赖管理工具, 若已安装则跳过
# 参考自: https://python-poetry.org/docs/#installation
curl -sSL https://install.python-poetry.org | python3 -
# 通过poetry安装依赖
poetry install
```
你也可以通过pip来安装依赖
**注意**: `MacOS`下`faiss 1.7.2`版本会导致抛出段错误,请将`requirements.txt`的对应条目改为`faiss-cpu==1.7.0`
```bash
pip install -r requirements.txt
```
## 其他预模型准备
RVC需要其他一些预模型来推理和训练。
你可以从我们的[Hugging Face space](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/)下载到这些模型。
以下是一份清单包括了所有RVC所需的预模型和其他文件的名称:
```bash
hubert_base.pt
./pretrained
./uvr5_weights
#如果你正在使用Windows则你可能需要这个文件若ffmpeg已安装则跳过
./ffmpeg
```
之后使用以下指令来启动WebUI:
```bash
python infer-web.py
```
如果你正在使用Windows你可以直接下载并解压`RVC-beta.7z`,运行`go-web.bat`以启动WebUI。
仓库内还有一份`小白简易教程.doc`以供参考。
## 参考项目
+ [ContentVec](https://github.com/auspicious3000/contentvec/)
+ [VITS](https://github.com/jaywalnut310/vits)
+ [HIFIGAN](https://github.com/jik876/hifi-gan)
+ [Gradio](https://github.com/gradio-app/gradio)
+ [FFmpeg](https://github.com/FFmpeg/FFmpeg)
+ [Ultimate Vocal Remover](https://github.com/Anjok07/ultimatevocalremovergui)
+ [audio-slicer](https://github.com/openvpi/audio-slicer)
## 感谢所有贡献者作出的努力
<a href="https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/graphs/contributors" target="_blank">
<img src="https://contrib.rocks/image?repo=liujing04/Retrieval-based-Voice-Conversion-WebUI" />
</a>
<div align="center">
<h1>Retrieval-based-Voice-Conversion-WebUI</h1>
一个基于VITS的简单易用的语音转换变声器框架<br><br>
[![madewithlove](https://forthebadge.com/images/badges/built-with-love.svg)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI)
<img src="https://counter.seku.su/cmoe?name=rvc&theme=r34" /><br>
[![Open In Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&color=525252)](https://colab.research.google.com/github/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Retrieval_based_Voice_Conversion_WebUI.ipynb)
[![Licence](https://img.shields.io/github/license/liujing04/Retrieval-based-Voice-Conversion-WebUI?style=for-the-badge)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/%E4%BD%BF%E7%94%A8%E9%9C%80%E9%81%B5%E5%AE%88%E7%9A%84%E5%8D%8F%E8%AE%AE-LICENSE.txt)
[![Huggingface](https://img.shields.io/badge/🤗%20-Spaces-yellow.svg?style=for-the-badge)](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/)
[![Discord](https://img.shields.io/badge/RVC%20Developers-Discord-7289DA?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/HcsmBBGyVk)
</div>
------
[**更新日志**](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Changelog_CN.md)
[**English**](./docs/README.en.md) | [**中文简体**](./README.md) | [**日本語**](./docs/README.ja.md) | [**한국어**](./docs/README.ko.md)
> 点此查看我们的[演示视频](https://www.bilibili.com/video/BV1pm4y1z7Gm/) !
> 使用了RVC的实时语音转换: [w-okada/voice-changer](https://github.com/w-okada/voice-changer)
> 底模使用接近50小时的开源高质量VCTK训练集训练无版权方面的顾虑请大家放心使用
> 后续会陆续加入高质量有授权歌声训练集训练底模
## 简介
本仓库具有以下特点
+ 使用top1检索替换输入源特征为训练集特征来杜绝音色泄漏
+ 即便在相对较差的显卡上也能快速训练
+ 使用少量数据进行训练也能得到较好结果(推荐至少收集10分钟低底噪语音数据)
+ 可以通过模型融合来改变音色(借助ckpt处理选项卡中的ckpt-merge)
+ 简单易用的网页界面
+ 可调用UVR5模型来快速分离人声和伴奏
## 环境配置
推荐使用poetry配置环境。
以下指令需在Python版本大于3.8的环境中执行:
```bash
# 安装Pytorch及其核心依赖若已安装则跳过
# 参考自: https://pytorch.org/get-started/locally/
pip install torch torchvision torchaudio
#如果是win系统+Nvidia Ampere架构(RTX30xx),根据 #21 的经验需要指定pytorch对应的cuda版本
#pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
# 安装 Poetry 依赖管理工具, 若已安装则跳过
# 参考自: https://python-poetry.org/docs/#installation
curl -sSL https://install.python-poetry.org | python3 -
# 通过poetry安装依赖
poetry install
```
你也可以通过pip来安装依赖
**注意**: `MacOS`下`faiss 1.7.2`版本会导致抛出段错误,在手动安装时请使用命令`pip install faiss-cpu==1.7.0`指定使用`1.7.0`版本
```bash
pip install -r requirements.txt
```
## 其他预模型准备
RVC需要其他一些预模型来推理和训练。
你可以从我们的[Hugging Face space](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/)下载到这些模型。
以下是一份清单包括了所有RVC所需的预模型和其他文件的名称:
```bash
hubert_base.pt
./pretrained
./uvr5_weights
#如果你正在使用Windows则你可能需要这个文件若ffmpeg和ffprobe已安装则跳过; ubuntu/debian 用户可以通过apt install ffmpeg来安装这2个库
./ffmpeg
./ffprobe
```
之后使用以下指令来启动WebUI:
```bash
python infer-web.py
```
如果你正在使用Windows你可以直接下载并解压`RVC-beta.7z`,运行`go-web.bat`以启动WebUI。
仓库内还有一份`小白简易教程.doc`以供参考。
## 参考项目
+ [ContentVec](https://github.com/auspicious3000/contentvec/)
+ [VITS](https://github.com/jaywalnut310/vits)
+ [HIFIGAN](https://github.com/jik876/hifi-gan)
+ [Gradio](https://github.com/gradio-app/gradio)
+ [FFmpeg](https://github.com/FFmpeg/FFmpeg)
+ [Ultimate Vocal Remover](https://github.com/Anjok07/ultimatevocalremovergui)
+ [audio-slicer](https://github.com/openvpi/audio-slicer)
## 感谢所有贡献者作出的努力
<a href="https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/graphs/contributors" target="_blank">
<img src="https://contrib.rocks/image?repo=liujing04/Retrieval-based-Voice-Conversion-WebUI" />
</a>

@@ -30,7 +30,7 @@ parser.add_argument(
cmd_opts = parser.parse_args()
python_cmd = cmd_opts.pycmd
listen_port = cmd_opts.port
listen_port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865
iscolab = cmd_opts.colab
noparallel = cmd_opts.noparallel
noautoopen = cmd_opts.noautoopen
@@ -64,12 +64,25 @@ if not torch.cuda.is_available():
device = "cpu"
is_half = False
gpu_mem=None
if device not in ["cpu", "mps"]:
gpu_name = torch.cuda.get_device_name(int(device.split(":")[-1]))
if "16" in gpu_name or "MX" in gpu_name:
print("16系显卡/MX系显卡强制单精度")
i_device=int(device.split(":")[-1])
gpu_name = torch.cuda.get_device_name(i_device)
if "16" in gpu_name or "P40"in gpu_name.upper() or "1070"in gpu_name or "1080"in gpu_name:
print("16系显卡强制单精度")
is_half = False
with open("configs/32k.json","r")as f:strr=f.read().replace("true","false")
with open("configs/32k.json","w")as f:f.write(strr)
with open("configs/40k.json","r")as f:strr=f.read().replace("true","false")
with open("configs/40k.json","w")as f:f.write(strr)
with open("configs/48k.json","r")as f:strr=f.read().replace("true","false")
with open("configs/48k.json","w")as f:f.write(strr)
with open("trainset_preprocess_pipeline_print.py","r")as f:strr=f.read().replace("3.7","3.0")
with open("trainset_preprocess_pipeline_print.py","w")as f:f.write(strr)
gpu_mem=int(torch.cuda.get_device_properties(i_device).total_memory/1024/1024/1024+0.4)
if(gpu_mem<=4):
with open("trainset_preprocess_pipeline_print.py","r")as f:strr=f.read().replace("3.7","3.0")
with open("trainset_preprocess_pipeline_print.py","w")as f:f.write(strr)
from multiprocessing import cpu_count
if n_cpu == 0:
@@ -86,3 +99,8 @@ else:
x_query = 6
x_center = 38
x_max = 41
if(gpu_mem!=None and gpu_mem<=4):
x_pad = 1
x_query = 5
x_center = 30
x_max = 32

@@ -18,12 +18,15 @@ An easy-to-use SVC framework based on VITS.<br><br>
------
[**Changelog**](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Changelog_CN.md)
[**English**](./README.en.md) | [**中文简体**](../README.md) | [**日本語**](./README.ja.md)
[**English**](./README.en.md) | [**中文简体**](../README.md) | [**日本語**](./README.ja.md) | [**한국어**](./README.ko.md)
> Check our [Demo Video](https://www.bilibili.com/video/BV1pm4y1z7Gm/) here!
> Realtime Voice Conversion Software using RVC : [w-okada/voice-changer](https://github.com/w-okada/voice-changer)
> The dataset for the pre-training model uses nearly 50 hours of high quality VCTK open source dataset.
> High quality licensed song datasets will be added to training-set one after another for your use, without worrying about copyright infringement.
## Summary
This repository has the following features:
+ Reduce tone leakage by replacing source feature to training-set feature using top1 retrieval;
@@ -32,7 +35,6 @@ This repository has the following features:
+ Supporting model fusion to change timbres (using ckpt processing tab->ckpt merge);
+ Easy-to-use Webui interface;
+ Use the UVR5 model to quickly separate vocals and instruments.
+ The dataset for the pre-training model uses nearly 50 hours of high quality VCTK open source dataset, and high quality licensed song datasets will be added to training-set one after another for your use, without worrying about copyright infringement.
## Preparing the environment
We recommend you install the dependencies through poetry.
@@ -43,8 +45,7 @@ The following commands need to be executed in the environment of Python version
pip install torch torchvision torchaudio
#For Windows + Nvidia Ampere Architecture(RTX30xx), you need to specify the cuda version corresponding to pytorch according to the experience of https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/issues/21
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
#pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
# Install the Poetry dependency management tool, skip if installed
# Reference: https://python-poetry.org/docs/#installation
@@ -55,7 +56,7 @@ poetry install
```
You can also use pip to install the dependencies
**Notice**: `faiss 1.7.2` will raise Segmentation Fault: 11 under `MacOS`, please change corresponding line in `requirements.txt` to `faiss-cpu==1.7.0`
**Notice**: `faiss 1.7.2` will raise Segmentation Fault: 11 under `MacOS`, please use `pip install faiss-cpu==1.7.0` if you use pip to install it manually.
```bash
pip install -r requirements.txt
@@ -83,12 +84,16 @@ python infer-web.py
```
If you are using Windows, you can download and extract `RVC-beta.7z` to use RVC directly and use `go-web.bat` to start Webui.
We will develop an English version of the WebUI in 2 weeks.
There's also a tutorial on RVC in Chinese and you can check it out if needed.
## Credits
+ [ContentVec](https://github.com/auspicious3000/contentvec/)
+ [VITS](https://github.com/jaywalnut310/vits)
+ [HIFIGAN](https://github.com/jik876/hifi-gan)
+ [Gradio](https://github.com/gradio-app/gradio)
+ [FFmpeg](https://github.com/FFmpeg/FFmpeg)
+ [Ultimate Vocal Remover](https://github.com/Anjok07/ultimatevocalremovergui)
+ [audio-slicer](https://github.com/openvpi/audio-slicer)
## Thanks to all contributors for their efforts
<a href="https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/graphs/contributors" target="_blank">

@@ -19,60 +19,60 @@ VITSに基づく使いやすい音声変換voice changerframework<br><br>
[**更新日誌**](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Changelog_CN.md)
[**English**](./README.en.md) | [**中文简体**](../README.md) | [**日本語**](./README.ja.md)
[**English**](./README.en.md) | [**中文简体**](../README.md) | [**日本語**](./README.ja.md) | [**한국어**](./README.ko.md)
> デモ動画は[こちら](https://www.bilibili.com/video/BV1pm4y1z7Gm/)でご覧ください
> デモ動画は[こちら](https://www.bilibili.com/video/BV1pm4y1z7Gm/)でご覧ください
> RVCによるリアルタイム音声変換: [w-okada/voice-changer](https://github.com/w-okada/voice-changer)
> 基底modelを訓練(training)したのは、約50時間の高品質なオープンソースデータセット。著作権侵害を心配することなく使用できるように
> 著作権侵害を心配することなく使用できるように、基底モデルは約50時間の高品質なオープンソースデータセットで訓練されています
> 今後次々と使用許可のある高品質歌声資料集を追加し、基底modelを訓練する。
> 今後も、次々と使用許可のある高品質歌声資料集を追加し、基底モデルを訓練する予定です
## はじめに
repoは下記の特徴があります
リポジトリには下記の特徴があります
+ 調子(tone)の漏洩が下がれるためtop1検索で源特徴量を訓練集特徴量に置換
+ 古い又は安いGPUでも高速に訓練でき
+ 小さい訓練集でもかなりいいmodelを得られる(10分以上の低noise音声を推奨)
+ modelを融合し音色をmergeできる(ckpt processing->ckpt merge使用)
+ 使いやすいWebUI
+ UVR5 Modelも含めるため人声とBGMを素早く分離でき
+ Top1検索を用いることで、生の特徴量を訓練用データセット特徴量に変換し、トーンリーケージを削減します。
+ 比較的貧弱なGPUでも高速かつ簡単に訓練できます。
+ 少量のデータセットからでも、比較的良い結果を得ることができます。10分以上のノイズの少ない音声を推奨します。
+ モデルを融合することで、音声を混ぜることができます。(ckpt processingタブの、ckpt merge使用します。)
+ 使いやすいWebUI
+ UVR5 Modelも含んでいるため、人の声とBGMを素早く分離できます。
## 環境構築
poetryで依存関係をinstallすることをお勧めします。
Poetryで依存関係をインストールすることをお勧めします。
下記のcommandsは、Python3.8以上の環境で実行する必要があります:
下記のコマンドは、Python3.8以上の環境で実行する必要があります:
```bash
# PyTorch関連の依存関係をinstall。install済の場合はskip
# PyTorch関連の依存関係をインストール。インストール済の場合は省略。
# 参照先: https://pytorch.org/get-started/locally/
pip install torch torchvision torchaudio
#Windows Nvidia Ampere Architecture(RTX30xx)の場合、 #21 に従い、pytorchに対応するcuda versionを指定する必要があります。
#pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
# PyTorch関連の依存関係をinstall。install済の場合はskip
# PyTorch関連の依存関係をインストール。インストール済の場合は省略。
# 参照先: https://python-poetry.org/docs/#installation
curl -sSL https://install.python-poetry.org | python3 -
# Poetry経由で依存関係をinstall
# Poetry経由で依存関係をインストール
poetry install
```
pipでも依存関係のinstallが可能です:
pipでも依存関係のインストールが可能です:
**注意**:`faiss 1.7.2`は`macOS`で`Segmentation Fault: 11`を起こすので、`requirements.txt`の該当行を `faiss-cpu==1.7.0`に変更してください。
**注意**:`faiss 1.7.2`は`macOS`で`Segmentation Fault: 11`を起こすので、マニュアルインストールする場合は、 `pip install faiss-cpu==1.7.0`を実行してください。
```bash
pip install -r requirements.txt
```
## 基底modelsを準備
RVCは推論/訓練のために、様々な事前訓練を行った基底modelsが必要です。
RVCは推論/訓練のために、様々な事前訓練を行った基底モデルを必要とします。
modelsは[Hugging Face space](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/)からダウンロードできます。
以下は、RVCに必要な基底modelsやその他のfilesの一覧です。
以下は、RVCに必要な基底モデルやその他のファイルの一覧です。
```bash
hubert_base.pt
@@ -80,16 +80,16 @@ hubert_base.pt
./uvr5_weights
# ffmpegがすでにinstallされている場合はskip
# ffmpegがすでにinstallされている場合は省略
./ffmpeg
```
その後、下記のcommandでWebUIを起動
その後、下記のコマンドでWebUIを起動します。
```bash
python infer-web.py
```
Windowsをお使いの方は、直接`RVC-beta.7z`をダウンロード後に展開し、`go-web.bat`clickでWebUIを起動。(7zipが必要です)
Windowsをお使いの方は、直接`RVC-beta.7z`をダウンロード後に展開し、`go-web.bat`クリックすることで、WebUIを起動することができます。(7zipが必要です)
また、repoに[小白简易教程.doc](./小白简易教程.doc)がありますので、参考にしてください(中国語版のみ)。
また、リポジトリに[小白简易教程.doc](./小白简易教程.doc)がありますので、参考にしてください(中国語版のみ)。
## 参考プロジェクト
+ [ContentVec](https://github.com/auspicious3000/contentvec/)
@@ -100,7 +100,7 @@ Windowsをお使いの方は、直接に`RVC-beta.7z`をダウンロード後に
+ [Ultimate Vocal Remover](https://github.com/Anjok07/ultimatevocalremovergui)
+ [audio-slicer](https://github.com/openvpi/audio-slicer)
## 貢献者(contributer)の皆様の尽力に感謝します
## 貢献者(contributor)の皆様の尽力に感謝します
<a href="https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/graphs/contributors" target="_blank">
<img src="https://contrib.rocks/image?repo=liujing04/Retrieval-based-Voice-Conversion-WebUI" />
</a>

docs/README.ko.md

@@ -0,0 +1,104 @@
<div align="center">
<h1>Retrieval-based-Voice-Conversion-WebUI</h1>
VITS 기반의 간단하고 사용하기 쉬운 음성 변환 프레임워크.<br><br>
[![madewithlove](https://forthebadge.com/images/badges/built-with-love.svg)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI)
<img src="https://counter.seku.su/cmoe?name=rvc&theme=r34" /><br>
[![Open In Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&color=525252)](https://colab.research.google.com/github/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Retrieval_based_Voice_Conversion_WebUI.ipynb)
[![Licence](https://img.shields.io/github/license/liujing04/Retrieval-based-Voice-Conversion-WebUI?style=for-the-badge)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/%E4%BD%BF%E7%94%A8%E9%9C%80%E9%81%B5%E5%AE%88%E7%9A%84%E5%8D%8F%E8%AE%AE-LICENSE.txt)
[![Huggingface](https://img.shields.io/badge/🤗%20-Spaces-yellow.svg?style=for-the-badge)](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/)
[![Discord](https://img.shields.io/badge/RVC%20Developers-Discord-7289DA?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/HcsmBBGyVk)
</div>
------
[**업데이트 로그**](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Changelog_CN.md)
[**English**](./README.en.md) | [**中文简体**](../README.md) | [**日本語**](./README.ja.md) | [**한국어**](./README.ko.md)
> [데모 영상](https://www.bilibili.com/video/BV1pm4y1z7Gm/)을 확인해 보세요!
> RVC를 활용한 실시간 음성변환: [w-okada/voice-changer](https://github.com/w-okada/voice-changer)
> 기본 모델은 50시간 가량의 고퀄리티 오픈 소스 VCTK 데이터셋을 사용하였으므로, 저작권상의 염려가 없으니 안심하고 사용하시기 바랍니다.
> 저작권 문제가 없는 고퀄리티의 노래를 이후에도 계속해서 훈련할 예정입니다.
## 소개
본 Repo는 다음과 같은 특징을 가지고 있습니다:
+ top1 검색을 이용하여 입력 소스 기능을 훈련 세트 기능으로 대체하여 음색의 누출을 방지;
+ 상대적으로 낮은 성능의 GPU에서도 빠른 훈련 가능;
+ 적은 양의 데이터로 훈련해도 좋은 결과를 얻을 수 있음 (최소 10분 이상의 저잡음 음성 데이터를 사용하는 것을 권장);
+ 모델 융합을 통한 음색의 변조 가능 (ckpt 처리 탭->ckpt 병합 선택);
+ 사용하기 쉬운 WebUI (웹 인터페이스);
+ UVR5 모델을 이용하여 목소리와 배경음악의 빠른 분리;
## 환경의 준비
poetry를 통해 dependencies를 설치하는 것을 권장합니다.
다음 명령은 Python 버전 3.8 이상의 환경에서 실행되어야 합니다:
```bash
# PyTorch 관련 주요 dependencies 설치, 이미 설치되어 있는 경우 건너뛰기 가능
# 참조: https://pytorch.org/get-started/locally/
pip install torch torchvision torchaudio
# Windows + Nvidia Ampere Architecture(RTX30xx)를 사용하고 있다면, https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/issues/21 에서 명시된 것과 같이 PyTorch에 맞는 CUDA 버전을 지정해야 합니다.
#pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
# Poetry 설치, 이미 설치되어 있는 경우 건너뛰기 가능
# Reference: https://python-poetry.org/docs/#installation
curl -sSL https://install.python-poetry.org | python3 -
# Dependencies 설치
poetry install
```
pip를 활용하여 dependencies를 설치하여도 무방합니다.
**공지**: `MacOS`에서 `faiss 1.7.2`를 사용하면 Segmentation Fault: 11 오류가 발생할 수 있습니다. 수동으로 pip를 사용하여 설치하는 경우 `pip install faiss-cpu==1.7.0`을 사용해야 합니다.
```bash
pip install -r requirements.txt
```
## 기타 사전 모델 준비
RVC 모델은 추론과 훈련을 위하여 다른 사전 모델이 필요합니다.
[Huggingface space](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/)를 통해서 다운로드 할 수 있습니다.
다음은 RVC에 필요한 사전 모델 및 기타 파일 목록입니다:
```bash
hubert_base.pt
./pretrained
./uvr5_weights
# Windows를 사용하는 경우 이 사전도 필요할 수 있습니다. FFmpeg가 설치되어 있으면 건너뛰어도 됩니다.
ffmpeg.exe
```
그 후 이하의 명령을 사용하여 WebUI를 시작할 수 있습니다:
```bash
python infer-web.py
```
Windows를 사용하는 경우 `RVC-beta.7z`를 다운로드 및 압축 해제하여 RVC를 직접 사용하거나 `go-web.bat`을 사용하여 WebUi를 시작할 수 있습니다.
중국어로 된 RVC에 대한 튜토리얼도 있으니 필요하다면 확인할 수 있습니다.
## 크레딧
+ [ContentVec](https://github.com/auspicious3000/contentvec/)
+ [VITS](https://github.com/jaywalnut310/vits)
+ [HIFIGAN](https://github.com/jik876/hifi-gan)
+ [Gradio](https://github.com/gradio-app/gradio)
+ [FFmpeg](https://github.com/FFmpeg/FFmpeg)
+ [Ultimate Vocal Remover](https://github.com/Anjok07/ultimatevocalremovergui)
+ [audio-slicer](https://github.com/openvpi/audio-slicer)
## 모든 기여자 분들의 노력에 감사드립니다.
<a href="https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/graphs/contributors" target="_blank">
<img src="https://contrib.rocks/image?repo=liujing04/Retrieval-based-Voice-Conversion-WebUI" />
</a>

docs/faiss_tips_en.md

@@ -0,0 +1,146 @@
faiss tuning TIPS
==================
# about faiss
faiss is a library for neighborhood search on dense vectors, developed by Facebook Research, which efficiently implements many approximate neighborhood search methods.
Approximate Neighbor Search finds similar vectors quickly while sacrificing some accuracy.
## faiss in RVC
In RVC, for the embedding of features converted by HuBERT, we search for embeddings similar to the embedding generated from the training data and mix them to achieve a conversion that is closer to the original speech. However, since this search takes time if performed naively, high-speed conversion is realized by using approximate neighborhood search.
# implementation overview
The directory '/logs/your-experiment/3_feature256', next to where the model is stored, contains the features extracted by HuBERT from each piece of voice data.
From here we read the npy files in order sorted by filename and concatenate the vectors to create big_npy. (This vector has shape [N, 256].)
After saving big_npy as /logs/your-experiment/total_fea.npy, train it with faiss.
As of 2023/04/18, an IVF index based on L2 distance is built using the index factory function of faiss.
The number of IVF divisions (n_ivf) is N//39, and n_probe uses int(np.power(n_ivf, 0.3)). (Look around train_index in infer-web.py.)
In this article, I will first explain the meaning of these parameters, and then write advice for developers to create a better index.
# Explanation of the method
## index factory
An index factory is a unique faiss notation that expresses a pipeline that connects multiple approximate neighborhood search methods as a string.
This allows you to try various approximate neighborhood search methods simply by changing the index factory string.
In RVC it is used like this:
```python
index = faiss.index_factory(256, "IVF%s,Flat" % n_ivf)
```
Among the arguments of index_factory, the first is the number of dimensions of the vector, the second is the index factory string, and the third is the distance to use.
For more detailed notation, see
https://github.com/facebookresearch/faiss/wiki/The-index-factory
## index for distance
There are two typical indexes used as similarity of embedding as follows.
- Euclidean distance (METRIC_L2)
- inner product (METRIC_INNER_PRODUCT)
Euclidean distance takes the squared difference in each dimension, sums the differences in all dimensions, and then takes the square root. This is the same as the distance in 2D and 3D that we use on a daily basis.
The inner product is generally not used as a similarity measure as-is; instead, cosine similarity, i.e. the inner product taken after L2 normalization, is used.
Which is better depends on the case, but cosine similarity is often used for embeddings obtained from word2vec and for similar-image retrieval models trained with ArcFace. If you want to L2-normalize a vector X with numpy, you can do it with the following code, with eps small enough to avoid division by zero.
```python
X_normed = X / np.maximum(eps, np.linalg.norm(X, ord=2, axis=-1, keepdims=True))
```
Also, for the index factory, you can change the distance index used for calculation by choosing the value to pass as the third argument.
```python
index = faiss.index_factory(dimention, text, faiss.METRIC_INNER_PRODUCT)
```
## IVF
IVF (Inverted file indexes) is an algorithm similar to the inverted index in full-text search.
During learning, the search target is clustered with kmeans, and Voronoi partitioning is performed using the cluster center. Each data point is assigned a cluster, so we create a dictionary that looks up the data points from the clusters.
For example, if clusters are assigned as follows
|index|Cluster|
|-----|-------|
|1|A|
|2|B|
|3|A|
|4|C|
|5|B|
The resulting inverted index looks like this:
|cluster|index|
|-------|-----|
|A|1, 3|
|B|2, 5|
|C|4|
When searching, we first select n_probe clusters out of all clusters, and then calculate the distances to the data points belonging to each of those clusters.
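To make this concrete, here is a minimal sketch (not RVC's own training code) that builds and queries an IVF index with faiss; the 256-dimensional features are random placeholders, and the n_ivf / n_probe heuristics simply follow the description above.
```python
import numpy as np
import faiss

# placeholder features standing in for big_npy (shape [N, 256])
big_npy = np.random.rand(10000, 256).astype("float32")

n_ivf = big_npy.shape[0] // 39            # heuristic described above
index = faiss.index_factory(256, "IVF%s,Flat" % n_ivf)
index.train(big_npy)                      # k-means clustering -> Voronoi cells
index.add(big_npy)                        # fill the inverted lists
index.nprobe = int(np.power(n_ivf, 0.3))  # clusters visited per query

# distances and indices of the 4 nearest training vectors for each query
dist, neighbor = index.search(big_npy[:5], 4)
```
A larger nprobe visits more Voronoi cells and trades speed for recall.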
# recommend parameter
There are official guidelines on how to choose an index, so I will explain accordingly.
https://github.com/facebookresearch/faiss/wiki/Guidelines-to-choose-an-index
For datasets below 1M, 4bit-PQ is the most efficient method available in faiss as of April 2023.
Combining this with IVF, narrowing down the candidates with 4bit-PQ, and finally recalculating the distance with an accurate index can be described by using the following index factory.
```python
index = faiss.index_factory(256, "IVF1024,PQ128x4fs,RFlat")
```
## Recommended parameters for IVF
Consider the case of too many IVF clusters: if coarse quantization by IVF uses as many clusters as there are data points, it degenerates into a naive exhaustive search and is inefficient.
For 1M data points or fewer, an IVF value between 4*sqrt(N) and 16*sqrt(N) is recommended, where N is the number of data points.
Since the calculation time increases in proportion to n_probe, weigh it against the accuracy you need. Personally, I don't think RVC needs that much accuracy, so n_probe = 1 is fine.
## FastScan
FastScan is a method that speeds up the approximate distance computation of product quantization by performing it in registers.
Product quantization clusters each group of d dimensions (usually d = 2) independently during training, precomputes the distances between clusters, and stores them in a lookup table. At prediction time, the distance for each group can be computed in O(1) with a table lookup.
So the number you specify after PQ usually specifies half the dimension of the vector.
For a more detailed description of FastScan, please refer to the official documentation.
https://github.com/facebookresearch/faiss/wiki/Fast-accumulation-of-PQ-and-AQ-codes-(FastScan)
## RFlat
RFlat is an instruction to recalculate the rough distance calculated by FastScan with the exact distance specified by the third argument of index factory.
When getting k neighbors, k*k_factor points are recalculated.
# Techniques for embedding
## alpha query expansion
Query expansion is a technique used in search, for example in full-text search, where a few words are added to the entered query to improve accuracy. Several methods have also been proposed for vector search, among which α-query expansion is known as a highly effective method that requires no additional training. It is introduced in papers such as [Attention-Based Query Expansion Learning](https://arxiv.org/abs/2007.08019) and was used in the [2nd place solution of the kaggle shopee competition](https://www.kaggle.com/code/lyakaap/2nd-place-solution/notebook).
α-query expansion sums a vector with its neighboring vectors, weighted by the similarity raised to the power α. A code example follows; it replaces big_npy with its α-query-expanded version.
```python
alpha = 3.
index = faiss.index_factory(256, "IVF512,PQ128x4fs,RFlat")
original_norm = np.maximum(np.linalg.norm(big_npy, ord=2, axis=1, keepdims=True), 1e-9)
big_npy /= original_norm
index.train(big_npy)
index.add(big_npy)
dist, neighbor = index.search(big_npy, num_expand)
expand_arrays = []
ixs = np.arange(big_npy.shape[0])
for i in range(-(-big_npy.shape[0]//batch_size)):
ix = ixs[i*batch_size:(i+1)*batch_size]
weight = np.power(np.einsum("nd,nmd->nm", big_npy[ix], big_npy[neighbor[ix]]), alpha)
expand_arrays.append(np.sum(big_npy[neighbor[ix]] * np.expand_dims(weight, axis=2),axis=1))
big_npy = np.concatenate(expand_arrays, axis=0)
# normalize index version
big_npy = big_npy / np.maximum(np.linalg.norm(big_npy, ord=2, axis=1, keepdims=True), 1e-9)
```
This is a technique that can be applied both to the query that does the search and to the DB being searched.
## Compress embedding with MiniBatch KMeans
If total_fea.npy is too large, it is possible to shrink the vector using KMeans.
Compression of embedding is possible with the following code. Specify the size you want to compress for n_clusters, and specify 256 * number of CPU cores for batch_size to fully benefit from CPU parallelization.
```python
import multiprocessing
from sklearn.cluster import MiniBatchKMeans
kmeans = MiniBatchKMeans(n_clusters=10000, batch_size=256 * multiprocessing.cpu_count(), init="random")
kmeans.fit(big_npy)
sample_npy = kmeans.cluster_centers_
```

docs/faiss_tips_ja.md

@@ -0,0 +1,146 @@
faiss tuning TIPS
==================
# about faiss
faissはfacebook researchの開発する、密なベクトルに対する近傍探索をまとめたライブラリで、多くの近似近傍探索の手法を効率的に実装しています。
近似近傍探索はある程度精度を犠牲にしながら高速に類似するベクトルを探します。
## faiss in RVC
RVCではHuBERTで変換した特徴量のEmbeddingに対し、学習データから生成されたEmbeddingと類似するものを検索し、混ぜることでより元の音声に近い変換を実現しています。ただ、この検索は愚直に行うと時間がかかるため、近似近傍探索を用いることで高速な変換を実現しています。
# 実装のoverview
モデルが配置されている '/logs/your-experiment/3_feature256'には各音声データからHuBERTで抽出された特徴量が配置されています。
ここからnpyファイルをファイル名でソートした順番で読み込み、ベクトルを連結してbig_npyを作成します。(このベクトルのshapeは[N, 256]です。)
big_npyを/logs/your-experiment/total_fea.npyとして保存した後、faissを学習させます。
2023/04/18時点ではfaissのindex factoryの機能を用いて、L2距離に基づくIVFを用いています。
IVFの分割数(n_ivf)はN//39で、n_probeはint(np.power(n_ivf, 0.3))が採用されています。(infer-web.pyのtrain_index周りを探してください。)
本Tipsではまずこれらのパラメータの意味を解説し、その後よりよいindexを作成するための開発者向けアドバイスを書きます。
# 手法の解説
## index factory
index factoryは複数の近似近傍探索の手法を繋げるパイプラインをstringで表記するfaiss独自の記法です。
これにより、index factoryの文字列を変更するだけで様々な近似近傍探索の手法を試せます。
RVCでは以下のように使われています。
```python
index = faiss.index_factory(256, "IVF%s,Flat" % n_ivf)
```
index_factoryの引数のうち、1つ目はベクトルの次元数、2つ目はindex factoryの文字列で、3つ目には用いる距離を指定することができます。
より詳細な記法については
https://github.com/facebookresearch/faiss/wiki/The-index-factory
## 距離指標
embeddingの類似度として用いられる代表的な指標として以下の二つがあります。
- ユークリッド距離(METRIC_L2)
- 内積(METRIC_INNER_PRODUCT)
ユークリッド距離では各次元において二乗の差をとり、全次元の差を足してから平方根をとります。これは日常的に用いる2次元、3次元での距離と同じです。
内積はこのままでは類似度の指標として用いず、一般的にはL2ノルムで正規化してから内積をとるコサイン類似度を用います。
どちらがよいかは場合によりますが、word2vec等で得られるembeddingやArcFace等で学習した類似画像検索のモデルではコサイン類似度が用いられることが多いです。ベクトルXに対してl2正規化をnumpyで行う場合は、0 divisionを避けるために十分に小さな値をepsとして以下のコードで可能です。
```python
X_normed = X / np.maximum(eps, np.linalg.norm(X, ord=2, axis=-1, keepdims=True))
```
また、index factoryには第3引数に渡す値を選ぶことで計算に用いる距離指標を変更できます。
```python
index = faiss.index_factory(dimention, text, faiss.METRIC_INNER_PRODUCT)
```
## IVF
IVF(Inverted file indexes)は全文検索における転置インデックスと似たようなアルゴリズムです。
学習時には検索対象に対してkmeansでクラスタリングを行い、クラスタ中心を用いてボロノイ分割を行います。各データ点には一つずつクラスタが割り当てられるので、クラスタからデータ点を逆引きする辞書を作成します。
例えば以下のようにクラスタが割り当てられた場合
|index|クラスタ|
|-----|-------|
|1|A|
|2|B|
|3|A|
|4|C|
|5|B|
作成される転置インデックスは以下のようになります。
|クラスタ|index|
|-------|-----|
|A|1, 3|
|B|2, 5|
|C|4|
検索時にはまずクラスタからn_probe個のクラスタを検索し、次にそれぞれのクラスタに属するデータ点について距離を計算します。
# 推奨されるパラメータ
indexの選び方については公式にガイドラインがあるので、それに準じて説明します。
https://github.com/facebookresearch/faiss/wiki/Guidelines-to-choose-an-index
1M以下のデータセットにおいては4bit-PQが2023年4月時点ではfaissで利用できる最も効率的な手法です。
これをIVFと組み合わせ、4bit-PQで候補を絞り、最後に正確な指標で距離を再計算するには以下のindex factoryを用いることで記載できます。
```python
index = faiss.index_factory(256, "IVF1024,PQ128x4fs,RFlat")
```
## IVFの推奨パラメータ
IVFの数が多すぎる場合、たとえばデータ数の数だけIVFによる粗量子化を行うと、これは愚直な全探索と同じになり効率が悪いです。
1M以下の場合ではIVFの値はデータ点の数Nに対して4*sqrt(N) ~ 16*sqrt(N)に推奨しています。
n_probeはn_probeの数に比例して計算時間が増えるので、精度と相談して適切に選んでください。個人的にはRVCにおいてそこまで精度は必要ないと思うのでn_probe = 1で良いと思います。
## FastScan
FastScanは直積量子化で大まかに距離を近似するのを、レジスタ内で行うことにより高速に行うようにした手法です。
直積量子化は学習時にd次元ごと(通常はd=2)に独立してクラスタリングを行い、クラスタ同士の距離を事前計算してlookup tableを作成します。予測時はlookup tableを見ることで各次元の距離をO(1)で計算できます。
そのため、PQの次に指定する数字は通常ベクトルの半分の次元を指定します。
FastScanに関するより詳細な説明は公式のドキュメントを参照してください。
https://github.com/facebookresearch/faiss/wiki/Fast-accumulation-of-PQ-and-AQ-codes-(FastScan)
## RFlat
RFlatはFastScanで計算した大まかな距離を、index factoryの第三引数で指定した正確な距離で再計算する指示です。
k個の近傍を取得する際は、k*k_factor個の点について再計算が行われます。
# Embeddingに関するテクニック
## alpha query expansion
クエリ拡張は検索で使われるテクニックで、例えば全文検索では入力された検索文に単語を幾つか追加することで検索精度を上げることがあります。ベクトル検索にもいくつか提唱されていて、その内追加の学習がいらず効果が高い手法としてα-query expansionが知られています。論文では[Attention-Based Query Expansion Learning](https://arxiv.org/abs/2007.08019)などで紹介されていて、[kaggleのshopeeコンペの2位の解法](https://www.kaggle.com/code/lyakaap/2nd-place-solution/notebook)にも用いられていました。
α-query expansionはあるベクトルに対し、近傍のベクトルを類似度のα乗した重みで足し合わせることでできます。以下にコードの例を貼ります。big_npyをα query expansionしたものに置き換えます。
```python
alpha = 3.
index = faiss.index_factory(256, "IVF512,PQ128x4fs,RFlat")
original_norm = np.maximum(np.linalg.norm(big_npy, ord=2, axis=1, keepdims=True), 1e-9)
big_npy /= original_norm
index.train(big_npy)
index.add(big_npy)
dist, neighbor = index.search(big_npy, num_expand)
expand_arrays = []
ixs = np.arange(big_npy.shape[0])
for i in range(-(-big_npy.shape[0]//batch_size)):
ix = ixs[i*batch_size:(i+1)*batch_size]
weight = np.power(np.einsum("nd,nmd->nm", big_npy[ix], big_npy[neighbor[ix]]), alpha)
expand_arrays.append(np.sum(big_npy[neighbor[ix]] * np.expand_dims(weight, axis=2),axis=1))
big_npy = np.concatenate(expand_arrays, axis=0)
# normalize index version
big_npy = big_npy / np.maximum(np.linalg.norm(big_npy, ord=2, axis=1, keepdims=True), 1e-9)
```
これは、検索を行うクエリにも、検索対象のDBにも適応可能なテクニックです。
## MiniBatch KMeansによるembeddingの圧縮
total_fea.npyが大きすぎる場合、KMeansを用いてベクトルを小さくすることが可能です。
以下のコードで、embeddingの圧縮が可能です。n_clustersは圧縮したい大きさを指定し、batch_sizeは256 * CPUのコア数を指定することでCPUの並列化の恩恵を十分に得ることができます。
```python
import multiprocessing
from sklearn.cluster import MiniBatchKMeans
kmeans = MiniBatchKMeans(n_clusters=10000, batch_size=256 * multiprocessing.cpu_count(), init="random")
kmeans.fit(big_npy)
sample_npy = kmeans.cluster_centers_
```

docs/faiss_tips_ko.md

@@ -0,0 +1,132 @@
Facebook AI Similarity Search (Faiss) 팁
==================
# Faiss에 대하여
Faiss 는 Facebook Research가 개발하는, 고밀도 벡터 이웃 검색 라이브러리입니다. 근사 근접 탐색법 (Approximate Neighbor Search)은 약간의 정확성을 희생하여 유사 벡터를 고속으로 찾습니다.
## RVC에 있어서 Faiss
RVC에서는 HuBERT로 변환한 feature의 embedding을 위해 훈련 데이터에서 생성된 embedding과 유사한 embedding을 검색하고 혼합하여 원래의 음성에 더욱 가까운 변환을 달성합니다. 그러나, 이 탐색법은 단순히 수행하면 시간이 다소 소모되므로, 근사 근접 탐색법을 통해 고속 변환을 가능케 하고 있습니다.
# 구현 개요
모델이 위치한 `/logs/your-experiment/3_feature256`에는 각 음성 데이터에서 HuBERT가 추출한 feature들이 있습니다. 여기에서 파일 이름별로 정렬된 npy 파일을 읽고, 벡터를 연결하여 big_npy ([N, 256] 모양의 벡터) 를 만듭니다. big_npy를 `/logs/your-experiment/total_fea.npy`로 저장한 후, Faiss로 학습시킵니다.
2023/04/18 기준으로, Faiss의 Index Factory 기능을 이용해, L2 거리에 근거하는 IVF를 이용하고 있습니다. IVF의 분할수(n_ivf)는 N//39로, n_probe는 int(np.power(n_ivf, 0.3))가 사용되고 있습니다. (infer-web.py의 train_index 주위를 찾으십시오.)
이 팁에서는 먼저 이러한 매개 변수의 의미를 설명하고, 개발자가 추후 더 나은 index를 작성할 수 있도록 하는 조언을 작성합니다.
# 방법의 설명
## Index factory
index factory는 여러 근사 근접 탐색법을 문자열로 연결하는 pipeline을 문자열로 표기하는 Faiss만의 독자적인 기법입니다. 이를 통해 index factory의 문자열을 변경하는 것만으로 다양한 근사 근접 탐색을 시도해 볼 수 있습니다. RVC에서는 다음과 같이 사용됩니다:
```python
index = faiss.index_factory(256, "IVF%s,Flat" % n_ivf)
```
`index_factory`의 인수들 중 첫 번째는 벡터의 차원 수이고, 두번째는 index factory 문자열이며, 세번째에는 사용할 거리를 지정할 수 있습니다.
기법의 보다 자세한 설명은 https://github.com/facebookresearch/Faiss/wiki/The-index-factory 를 확인해 주십시오.
## 거리에 대한 index
embedding의 유사도로서 사용되는 대표적인 지표로서 이하의 2개가 있습니다.
- 유클리드 거리 (METRIC_L2)
- 내적(内積) (METRIC_INNER_PRODUCT)
유클리드 거리에서는 각 차원에서 제곱의 차를 구하고, 각 차원에서 구한 차를 모두 더한 후 제곱근을 취합니다. 이것은 일상적으로 사용되는 2차원, 3차원에서의 거리의 연산법과 같습니다. 내적은 그 값을 그대로 유사도 지표로 사용하지 않고, L2 정규화를 한 이후 내적을 취하는 코사인 유사도를 사용합니다.
어느 쪽이 더 좋은지는 경우에 따라 다르지만, word2vec에서 얻은 embedding 및 ArcFace를 활용한 이미지 검색 모델은 코사인 유사성이 이용되는 경우가 많습니다. numpy를 사용하여 벡터 X에 대해 L2 정규화를 하고자 하는 경우, 0 division을 피하기 위해 충분히 작은 값을 eps로 한 뒤 이하에 코드를 활용하면 됩니다.
```python
X_normed = X / np.maximum(eps, np.linalg.norm(X, ord=2, axis=-1, keepdims=True))
```
또한, `index factory`의 3번째 인수에 건네주는 값을 선택하는 것을 통해 계산에 사용하는 거리 index를 변경할 수 있습니다.
```python
index = faiss.index_factory(dimention, text, faiss.METRIC_INNER_PRODUCT)
```
## IVF
IVF (Inverted file indexes)는 역색인 탐색법과 유사한 알고리즘입니다. 학습시에는 검색 대상에 대해 k-평균 군집법을 실시하고 클러스터 중심을 이용해 보로노이 분할을 실시합니다. 각 데이터 포인트에는 클러스터가 할당되므로, 클러스터에서 데이터 포인트를 조회하는 dictionary를 만듭니다.
예를 들어, 클러스터가 다음과 같이 할당된 경우
|index|Cluster|
|-----|-------|
|1|A|
|2|B|
|3|A|
|4|C|
|5|B|
IVF 이후의 결과는 다음과 같습니다:
|cluster|index|
|-------|-----|
|A|1, 3|
|B|2, 5|
|C|4|
탐색 시, 우선 클러스터에서 `n_probe`개의 클러스터를 탐색한 다음, 각 클러스터에 속한 데이터 포인트의 거리를 계산합니다.
# 권장 매개변수
index의 선택 방법에 대해서는 공식적으로 가이드 라인이 있으므로, 거기에 준해 설명합니다.
https://github.com/facebookresearch/Faiss/wiki/Guidelines-to-choose-an-index
1M 이하의 데이터 세트에 있어서는 4bit-PQ가 2023년 4월 시점에서는 Faiss로 이용할 수 있는 가장 효율적인 수법입니다. 이것을 IVF와 조합해, 4bit-PQ로 후보를 추려내고, 마지막으로 이하의 index factory를 이용하여 정확한 지표로 거리를 재계산하면 됩니다.
```python
index = Faiss.index_factory(256, "IVF1024,PQ128x4fs,RFlat")
```
## IVF 권장 매개변수
IVF의 수가 너무 많으면, 가령 데이터 수의 수만큼 IVF로 양자화(Quantization)를 수행하면, 이것은 완전탐색과 같아져 효율이 나빠지게 됩니다. 1M 이하의 경우 IVF 값은 데이터 포인트 수 N에 대해 4*sqrt(N) ~ 16*sqrt(N)를 사용하는 것을 권장합니다.
n_probe는 n_probe의 수에 비례하여 계산 시간이 늘어나므로 정확도와 시간을 적절히 균형을 맞추어 주십시오. 개인적으로 RVC에 있어서 그렇게까지 정확도는 필요 없다고 생각하기 때문에 n_probe = 1이면 된다고 생각합니다.
## FastScan
FastScan은 직적 양자화를 레지스터에서 수행함으로써 거리의 고속 근사를 가능하게 하는 방법입니다.직적 양자화는 학습시에 d차원마다(보통 d=2)에 독립적으로 클러스터링을 실시해, 클러스터끼리의 거리를 사전 계산해 lookup table를 작성합니다. 예측시는 lookup table을 보면 각 차원의 거리를 O(1)로 계산할 수 있습니다. 따라서 PQ 다음에 지정하는 숫자는 일반적으로 벡터의 절반 차원을 지정합니다.
FastScan에 대한 자세한 설명은 공식 문서를 참조하십시오.
https://github.com/facebookresearch/Faiss/wiki/Fast-accumulation-of-PQ-and-AQ-codes-(FastScan)
## RFlat
RFlat은 FastScan이 계산한 대략적인 거리를 index factory의 3번째 인수로 지정한 정확한 거리로 다시 계산하라는 인스트럭션입니다. k개의 근접 변수를 가져올 때 k*k_factor개의 점에 대해 재계산이 이루어집니다.
# Embedding 테크닉
## Alpha 쿼리 확장
퀴리 확장이란 탐색에서 사용되는 기술로, 예를 들어 전문 탐색 시, 입력된 검색문에 단어를 몇 개를 추가함으로써 검색 정확도를 올리는 방법입니다. 백터 탐색을 위해서도 몇가지 방법이 제안되었는데, 그 중 α-쿼리 확장은 추가 학습이 필요 없는 매우 효과적인 방법으로 알려져 있습니다. [Attention-Based Query Expansion Learning](https://arxiv.org/abs/2007.08019)와 [2nd place solution of kaggle shopee competition](https://www.kaggle.com/code/lyakaap/2nd-place-solution/notebook) 논문에서 소개된 바 있습니다..
α-쿼리 확장은 한 벡터에 인접한 벡터를 유사도의 α곱한 가중치로 더해주면 됩니다. 코드로 예시를 들어 보겠습니다. big_npy를 α query expansion로 대체합니다.
```python
alpha = 3.
index = Faiss.index_factory(256, "IVF512,PQ128x4fs,RFlat")
original_norm = np.maximum(np.linalg.norm(big_npy, ord=2, axis=1, keepdims=True), 1e-9)
big_npy /= original_norm
index.train(big_npy)
index.add(big_npy)
dist, neighbor = index.search(big_npy, num_expand)
expand_arrays = []
ixs = np.arange(big_npy.shape[0])
for i in range(-(-big_npy.shape[0]//batch_size)):
ix = ixs[i*batch_size:(i+1)*batch_size]
weight = np.power(np.einsum("nd,nmd->nm", big_npy[ix], big_npy[neighbor[ix]]), alpha)
expand_arrays.append(np.sum(big_npy[neighbor[ix]] * np.expand_dims(weight, axis=2),axis=1))
big_npy = np.concatenate(expand_arrays, axis=0)
# index version 정규화
big_npy = big_npy / np.maximum(np.linalg.norm(big_npy, ord=2, axis=1, keepdims=True), 1e-9)
```
위 테크닉은 탐색을 수행하는 쿼리에도, 탐색 대상 DB에도 적응 가능한 테크닉입니다.
## MiniBatch KMeans에 의한 embedding 압축
total_fea.npy가 너무 클 경우 K-means를 이용하여 벡터를 작게 만드는 것이 가능합니다. 이하 코드로 embedding의 압축이 가능합니다. n_clusters에 압축하고자 하는 크기를 지정하고 batch_size에 256 * CPU의 코어 수를 지정함으로써 CPU 병렬화의 혜택을 충분히 얻을 수 있습니다.
```python
import multiprocessing
from sklearn.cluster import MiniBatchKMeans
kmeans = MiniBatchKMeans(n_clusters=10000, batch_size=256 * multiprocessing.cpu_count(), init="random")
kmeans.fit(big_npy)
sample_npy = kmeans.cluster_centers_
```

docs/training_tips_en.md

@@ -0,0 +1,52 @@
Instructions and tips for RVC training
======================================
This TIPS explains how data training is done.
# Training flow
I will explain along the steps in the training tab of the GUI.
## step1
Set the experiment name here. You can also set here whether the model should take pitch into account.
Data for each experiment is placed in `/logs/experiment name/`.
## step2a
Loads and preprocesses audio.
### load audio
If you specify a folder with audio, the audio files in that folder will be read automatically.
For example, if you specify `C:\Users\hoge\voices`, then `C:\Users\hoge\voices\voice.mp3` will be loaded, but `C:\Users\hoge\voices\dir\voice.mp3` will not be loaded.
Since ffmpeg is used internally for reading audio, if the extension is supported by ffmpeg, it will be read automatically.
After decoding to int16 with ffmpeg, the audio is converted to float32 and normalized to the range -1 to 1.
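As a rough illustration of this decode-and-normalize step (a hedged sketch, not the actual RVC loader; `load_audio_sketch` and the 40k sample rate are assumptions for the example), audio could be piped through ffmpeg and scaled like this:
```python
import subprocess
import numpy as np

def load_audio_sketch(path: str, sr: int = 40000) -> np.ndarray:
    # decode any ffmpeg-supported file to mono 16-bit PCM on stdout
    cmd = ["ffmpeg", "-v", "quiet", "-i", path, "-f", "s16le", "-ac", "1", "-ar", str(sr), "-"]
    raw = subprocess.run(cmd, capture_output=True, check=True).stdout
    pcm = np.frombuffer(raw, dtype=np.int16)
    # int16 -> float32, normalized to roughly [-1, 1]
    return pcm.astype(np.float32) / 32768.0
```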
### denoising
The audio is smoothed by scipy's filtfilt.
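For example, a zero-phase high-pass filter with scipy's filtfilt might look like the following sketch (the 5th-order Butterworth design and 48 Hz cutoff are assumptions here; RVC's exact filter parameters may differ):
```python
import numpy as np
from scipy import signal

def highpass_sketch(audio: np.ndarray, sr: int = 40000, cutoff_hz: float = 48.0) -> np.ndarray:
    # design a Butterworth high-pass and apply it forward and backward (zero phase)
    b, a = signal.butter(N=5, Wn=cutoff_hz, btype="high", fs=sr)
    return signal.filtfilt(b, a, audio)
```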
### Audio Split
First, the input audio is split by detecting silent parts that last longer than a certain period (max_sil_kept=5 seconds?). After splitting on silence, each piece is further split every 4 seconds with an overlap of 0.3 seconds. For each clip of at most 4 seconds, the volume is normalized, the wav file is written to `/logs/experiment name/0_gt_wavs`, and a copy resampled to 16k is written to `/logs/experiment name/1_16k_wavs`, as sketched below.
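A minimal sketch of the second slicing stage (fixed 4-second windows with 0.3 s overlap applied to an already silence-split clip; the window and hop values simply restate the numbers above):
```python
import numpy as np

def slice_fixed(audio: np.ndarray, sr: int = 40000, win_s: float = 4.0, overlap_s: float = 0.3):
    win = int(win_s * sr)                 # 4-second window
    hop = int((win_s - overlap_s) * sr)   # advance by 3.7 seconds -> 0.3 s overlap
    if len(audio) <= win:
        return [audio]
    chunks = [audio[i:i + win] for i in range(0, len(audio) - win + 1, hop)]
    if (len(audio) - win) % hop:
        chunks.append(audio[-win:])       # keep the trailing remainder as a final window
    return chunks
```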
## step2b
### Extract pitch
Extract the pitch information (=f0) from the wav files using the method built into parselmouth or pyworld and save it in `/logs/experiment name/2a_f0`. Then logarithmically convert the pitch information to an integer between 1 and 255 and save it in `/logs/experiment name/2b-f0nsf`.
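A hedged sketch of this step using pyworld (the harvest/stonemask calls are one possible choice, and the mel-style log mapping and 50–1100 Hz range are illustrative assumptions, not necessarily RVC's exact constants):
```python
import numpy as np
import pyworld

def extract_f0_sketch(x: np.ndarray, sr: int = 16000, f0_min: float = 50.0, f0_max: float = 1100.0):
    x64 = x.astype(np.float64)
    f0, t = pyworld.harvest(x64, sr, f0_floor=f0_min, f0_ceil=f0_max, frame_period=10)
    f0 = pyworld.stonemask(x64, f0, t, sr)            # refine the raw estimate
    # quantize voiced frames onto a log (mel-like) scale in 1..255; unvoiced frames stay 0
    mel = 1127.0 * np.log(1.0 + f0 / 700.0)
    mel_min = 1127.0 * np.log(1.0 + f0_min / 700.0)
    mel_max = 1127.0 * np.log(1.0 + f0_max / 700.0)
    coarse = np.zeros_like(mel)
    voiced = mel > 0
    coarse[voiced] = (mel[voiced] - mel_min) * 254.0 / (mel_max - mel_min) + 1.0
    coarse = np.rint(coarse).astype(np.int64)
    return f0, np.where(voiced, np.clip(coarse, 1, 255), 0)
```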
### Extract feature_print
Convert the wav file to embedding in advance using HuBERT. Read the wav file saved in `/logs/experiment name/1_16k_wavs`, convert the wav file to 256-dimensional features with HuBERT, and save in npy format in `/logs/experiment name/3_feature256`.
## step3
train the model.
### Glossary for Beginners
In deep learning, the data set is divided and the learning proceeds little by little. In one model update (step), batch_size data are retrieved and predictions and error corrections are performed. Doing this once for a dataset counts as one epoch.
Therefore, the learning time is (learning time per step) x (number of data in the dataset ÷ batch size) x (number of epochs). In general, a larger batch size makes learning more stable ((learning time per step) ÷ batch size becomes smaller) but uses more GPU memory. GPU RAM can be checked with the nvidia-smi command. Learning can be completed in a shorter time by increasing the batch size as much as the machine in the execution environment allows.
### Specify pretrained model
RVC starts training the model from pretrained weights instead of from 0, so it can be trained with a small dataset. By default it loads `rvc-location/pretrained/f0G40k.pth` and `rvc-location/pretrained/f0D40k.pth`. During training, model parameters are saved to `logs/experiment name/G_{}.pth` and `logs/experiment name/D_{}.pth` every save_every_epoch epochs; by specifying this path you can resume training, or start training from model weights learned in a different experiment.
### learning index
RVC saves the HuBERT feature values used during training, and during inference, searches for feature values that are similar to the feature values used during learning to perform inference. In order to perform this search at high speed, the index is learned in advance.
For index learning, we use the approximate nearest-neighbor search library faiss. The feature values in `/logs/experiment name/3_feature256` are read, the concatenated feature matrix is saved as `/logs/experiment name/total_fea.npy`, and the index learned from it is saved as `/logs/experiment name/add_XXX.index`.
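A minimal sketch of this index-learning step, assuming the directory layout described above (the experiment name and the output index file name are placeholders; the real file name encodes additional parameters):
```python
import os
import numpy as np
import faiss

exp_dir = "logs/my-experiment"                      # placeholder experiment name
feature_dir = os.path.join(exp_dir, "3_feature256")

# read the per-utterance features in filename order and stack them into [N, 256]
npys = [np.load(os.path.join(feature_dir, name))
        for name in sorted(os.listdir(feature_dir)) if name.endswith(".npy")]
big_npy = np.concatenate(npys, axis=0).astype("float32")
np.save(os.path.join(exp_dir, "total_fea.npy"), big_npy)

# train an IVF index on the stacked features and save it next to the model
n_ivf = max(big_npy.shape[0] // 39, 1)
index = faiss.index_factory(256, "IVF%s,Flat" % n_ivf)
index.train(big_npy)
index.add(big_npy)
faiss.write_index(index, os.path.join(exp_dir, "added_index_sketch.index"))  # placeholder name
```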
### Button description
- Train model: After executing step2b, press this button to train the model.
- Train feature index: After training the model, perform index learning.
- One-click training: step2b, model training and feature index training all at once.

docs/training_tips_ja.md

@@ -0,0 +1,53 @@
RVCの訓練における説明、およびTIPS
===============================
本TIPSではどのようにデータの訓練が行われているかを説明します。
# 訓練の流れ
GUIの訓練タブのstepに沿って説明します。
## step1
実験名の設定を行います。また、モデルにピッチを考慮させるかもここで設定できます。
各実験のデータは`/logs/実験名/`に配置されます。
## step2a
音声の読み込みと前処理を行います。
### load audio
音声のあるフォルダを指定すると、そのフォルダ内にある音声ファイルを自動で読み込みます。
例えば`C:Users\hoge\voices`を指定した場合、`C:Users\hoge\voices\voice.mp3`は読み込まれますが、`C:Users\hoge\voices\dir\voice.mp3`は読み込まれません。
音声の読み込みには内部でffmpegを利用しているので、ffmpegで対応している拡張子であれば自動的に読み込まれます。
ffmpegでint16に変換した後、float32に変換し、-1 ~ 1の間に正規化されます。
### denoising
音声についてscipyのfiltfiltによる平滑化を行います。
### 音声の分割
入力した音声はまず、一定期間(max_sil_kept=5秒?)より長く無音が続く部分を検知して音声を分割します。無音で音声を分割した後は、0.3秒のoverlapを含む4秒ごとに音声を分割します。4秒以内に区切られた音声は、音量の正規化を行った後wavファイルを`/logs/実験名/0_gt_wavs`に、そこから16kのサンプリングレートに変換して`/logs/実験名/1_16k_wavs`にwavファイルで保存します。
## step2b
### ピッチの抽出
wavファイルからピッチ(音の高低)の情報を抽出します。parselmouthやpyworldに内蔵されている手法でピッチ情報(=f0)を抽出し、`/logs/実験名/2a_f0`に保存します。その後、ピッチ情報を対数で変換して1~255の整数に変換し、`/logs/実験名/2b-f0nsf`に保存します。
### feature_printの抽出
HuBERTを用いてwavファイルを事前にembeddingに変換します。`/logs/実験名/1_16k_wavs`に保存したwavファイルを読み込み、HuBERTでwavファイルを256次元の特徴量に変換し、npy形式で`/logs/実験名/3_feature256`に保存します。
## step3
モデルのトレーニングを行います。
### 初心者向け用語解説
深層学習ではデータセットを分割し、少しずつ学習を進めていきます。一回のモデルの更新(step)では、batch_size個のデータを取り出し予測と誤差の修正を行います。これをデータセットに対して一通り行うと一epochと数えます。
そのため、学習時間は 1step当たりの学習時間 x (データセット内のデータ数 ÷ バッチサイズ) x epoch数 かかります。一般にバッチサイズを大きくするほど学習は安定し、(1step当たりの学習時間÷バッチサイズ)は小さくなりますが、その分GPUのメモリを多く使用します。GPUのRAMはnvidia-smiコマンド等で確認できます。実行環境のマシンに合わせてバッチサイズをできるだけ大きくするとより短時間で学習が可能です。
### pretrained modelの指定
RVCではモデルの訓練を0からではなく、事前学習済みの重みから開始するため、少ないデータセットで学習を行えます。デフォルトでは`RVCのある場所/pretrained/f0G40k.pth``RVCのある場所/pretrained/f0D40k.pth`を読み込みます。学習時はsave_every_epochごとにモデルのパラメータが`logs/実験名/G_{}.pth``logs/実験名/D_{}.pth`に保存されますが、このパスを指定することで学習を再開したり、もしくは違う実験で学習したモデルの重みから学習を開始できます。
### indexの学習
RVCでは学習時に使われたHuBERTの特徴量を保存し、推論時は学習時の特徴量から近い特徴量を探してきて推論を行います。この検索を高速に行うために事前にindexの学習を行います。
indexの学習には近似近傍探索ライブラリのfaissを用います。`/logs/実験名/3_feature256`の特徴量を読み込み、全て結合させた特徴量を`/logs/実験名/total_fea.npy`として保存、それを用いて学習したindexを`/logs/実験名/add_XXX.index`として保存します。
### ボタンの説明
- モデルのトレーニング: step2bまでを実行した後、このボタンを押すとモデルの学習を行います。
- 特徴インデックスのトレーニング: モデルのトレーニング後、indexの学習を行います。
- ワンクリックトレーニング: step2bまでとモデルのトレーニング、特徴インデックスのトレーニングを一括で行います。

docs/training_tips_ko.md

@@ -0,0 +1,53 @@
RVC 훈련에 대한 설명과 팁들
======================================
본 팁에서는 어떻게 데이터 훈련이 이루어지고 있는지 설명합니다.
# 훈련의 흐름
GUI의 훈련 탭의 단계를 따라 설명합니다.
## step1
실험 이름을 지정합니다. 또한, 모델이 피치(소리의 높낮이)를 고려해야 하는지 여부를 여기에서 설정할 수도 있습니다..
각 실험을 위한 데이터는 `/logs/experiment name/`에 배치됩니다..
## step2a
음성 파일을 불러오고 전처리합니다.
### 음성 파일 불러오기
음성 파일이 있는 폴더를 지정하면 해당 폴더에 있는 음성 파일이 자동으로 가져와집니다.
예를 들어 `C:Users\hoge\voices`를 지정하면 `C:Users\hoge\voices\voice.mp3`가 읽히지만 `C:Users\hoge\voices\dir\voice.mp3`는 읽히지 않습니다.
음성 로드에는 내부적으로 ffmpeg를 이용하고 있으므로, ffmpeg로 대응하고 있는 확장자라면 자동적으로 읽힙니다.
ffmpeg에서 int16으로 변환한 후 float32로 변환하고 -1과 1 사이에 정규화됩니다.
### 잡음 제거
음성 파일에 대해 scipy의 filtfilt를 이용하여 잡음을 처리합니다.
### 음성 분할
입력한 음성 파일은 먼저 일정 기간(max_sil_kept=5초?)보다 길게 무음이 지속되는 부분을 감지하여 음성을 분할합니다.무음으로 음성을 분할한 후에는 0.3초의 overlap을 포함하여 4초마다 음성을 분할합니다.4초 이내에 구분된 음성은 음량의 정규화를 실시한 후 wav 파일을 `/logs/실험명/0_gt_wavs`로, 거기에서 16k의 샘플링 레이트로 변환해 `/logs/실험명/1_16k_wavs`에 wav 파일로 저장합니다.
## step2b
### 피치 추출
wav 파일에서 피치(소리의 높낮이) 정보를 추출합니다. parselmouth나 pyworld에 내장되어 있는 메서드으로 피치 정보(=f0)를 추출해, `/logs/실험명/2a_f0`에 저장합니다. 그 후 피치 정보를 로그로 변환하여 1~255 정수로 변환하고 `/logs/실험명/2b-f0nsf`에 저장합니다.
### feature_print 추출
HuBERT를 이용하여 wav 파일을 미리 embedding으로 변환합니다. `/logs/실험명/1_16k_wavs`에 저장한 wav 파일을 읽고 HuBERT에서 wav 파일을 256차원 feature들로 변환한 후 npy 형식으로 `/logs/실험명/3_feature256`에 저장합니다.
## step3
모델의 훈련을 진행합니다.
### 초보자용 용어 해설
심층학습(딥러닝)에서는 데이터셋을 분할하여 조금씩 학습을 진행합니다.한 번의 모델 업데이트(step) 단계 당 batch_size개의 데이터를 탐색하여 예측과 오차를 수정합니다. 데이터셋 전부에 대해 이 작업을 한 번 수행하는 이를 하나의 epoch라고 계산합니다.
따라서 학습 시간은 단계당 학습 시간 x (데이터셋 내 데이터의 수 / batch size) x epoch 수가 소요됩니다. 일반적으로 batch size가 클수록 학습이 안정적이게 됩니다. (step당 학습 시간 ÷ batch size)는 작아지지만 GPU 메모리를 더 많이 사용합니다. GPU RAM은 nvidia-smi 명령어를 통해 확인할 수 있습니다. 실행 환경에 따라 배치 크기를 최대한 늘리면 짧은 시간 내에 학습이 가능합니다.
### 사전 학습된 모델 지정
RVC는 적은 데이터셋으로도 훈련이 가능하도록 사전 훈련된 가중치에서 모델 훈련을 시작합니다. 기본적으로 `rvc-location/pretrained/f0G40k.pth`와 `rvc-location/pretrained/f0D40k.pth`를 불러옵니다. 학습을 할 시에, 모델 파라미터는 각 save_every_epoch별로 `logs/experiment name/G_{}.pth`와 `logs/experiment name/D_{}.pth`로 저장이 되는데, 이 경로를 지정함으로써 학습을 재개하거나, 다른 실험에서 학습한 모델의 가중치에서 학습을 시작할 수 있습니다.
### index의 학습
RVC에서는 학습 시에 사용된 HuBERT의 feature값을 저장하고, 추론 시에는 학습 시 사용한 feature값과 유사한 feature값을 탐색해 추론을 진행합니다. 이 탐색을 고속으로 수행하기 위해 사전에 index를 학습하게 됩니다.
Index 학습에는 근사 근접 탐색법 라이브러리인 Faiss를 사용하게 됩니다. `/logs/실험명/3_feature256`의 feature값을 불러와, 이를 모두 결합시킨 feature값을 `/logs/실험명/total_fea.npy`로 저장하고, 그것을 사용해 학습한 index를 `/logs/실험명/add_XXX.index`로 저장합니다.
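At inference time the retrieved training features are blended with the live HuBERT features; below is a sketch of that search-and-mix step, mirroring the code in the gui.py diff on this page (names here are illustrative):

```python
import numpy as np

def blend_with_index(feats: np.ndarray, index, big_npy: np.ndarray, index_rate: float) -> np.ndarray:
    # feats: (T, 256) float32 features of the audio being converted
    # index: faiss index loaded via faiss.read_index(); big_npy: the stored training features
    _, I = index.search(feats.astype("float32"), 1)            # nearest training frame per input frame
    retrieved = big_npy[I.squeeze()].astype("float32")
    return index_rate * retrieved + (1.0 - index_rate) * feats  # weighted mix controlled by index_rate
```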
### 버튼 설명
- モデルのトレーニング (모델 학습): step2b까지 실행한 후, 이 버튼을 눌러 모델을 학습합니다.
- 特徴インデックスのトレーニング (특징 지수 훈련): 모델의 훈련 후, index를 학습합니다.
- ワンクリックトレーニング (원클릭 트레이닝): step2b까지의 모델 훈련, feature index 훈련을 일괄로 실시합니다.


@@ -1,47 +1,85 @@
from infer_pack.models_onnx import SynthesizerTrnMs256NSFsid
from infer_pack.models_onnx_moess import SynthesizerTrnMs256NSFsidM
from infer_pack.models_onnx import SynthesizerTrnMs256NSFsidO
import torch
person = "Shiroha/shiroha.pth"
exported_path = "model.onnx"
if __name__ == "__main__":
MoeVS = True # 模型是否为MoeVoiceStudio原MoeSS使用
ModelPath = "Shiroha/shiroha.pth" # 模型路径
ExportedPath = "model.onnx" # 输出路径
hidden_channels = 256 # hidden_channels为768Vec做准备
cpt = torch.load(ModelPath, map_location="cpu")
cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
print(*cpt["config"])
cpt = torch.load(person, map_location="cpu")
cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
print(*cpt["config"])
net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=False)
net_g.load_state_dict(cpt["weight"], strict=False)
test_phone = torch.rand(1, 200, hidden_channels) # hidden unit
test_phone_lengths = torch.tensor([200]).long() # hidden unit 长度(貌似没啥用)
test_pitch = torch.randint(size=(1, 200), low=5, high=255) # 基频(单位赫兹)
test_pitchf = torch.rand(1, 200) # nsf基频
test_ds = torch.LongTensor([0]) # 说话人ID
test_rnd = torch.rand(1, 192, 200) # 噪声(加入随机因子)
test_phone = torch.rand(1, 200, 256)
test_phone_lengths = torch.tensor([200]).long()
test_pitch = torch.randint(size=(1, 200), low=5, high=255)
test_pitchf = torch.rand(1, 200)
test_ds = torch.LongTensor([0])
test_rnd = torch.rand(1, 192, 200)
input_names = ["phone", "phone_lengths", "pitch", "pitchf", "ds", "rnd"]
output_names = [
"audio",
]
device = "cpu"
torch.onnx.export(
net_g,
(
test_phone.to(device),
test_phone_lengths.to(device),
test_pitch.to(device),
test_pitchf.to(device),
test_ds.to(device),
test_rnd.to(device),
),
exported_path,
dynamic_axes={
"phone": [1],
"pitch": [1],
"pitchf": [1],
"rnd": [2],
},
do_constant_folding=False,
opset_version=16,
verbose=False,
input_names=input_names,
output_names=output_names,
)
device = "cpu" # 导出时设备(不影响使用模型)
if MoeVS:
net_g = SynthesizerTrnMs256NSFsidM(
*cpt["config"], is_half=False
) # fp32导出C++要支持fp16必须手动将内存重新排列所以暂时不用fp16
net_g.load_state_dict(cpt["weight"], strict=False)
input_names = ["phone", "phone_lengths", "pitch", "pitchf", "ds", "rnd"]
output_names = [
"audio",
]
torch.onnx.export(
net_g,
(
test_phone.to(device),
test_phone_lengths.to(device),
test_pitch.to(device),
test_pitchf.to(device),
test_ds.to(device),
test_rnd.to(device),
),
ExportedPath,
dynamic_axes={
"phone": [1],
"pitch": [1],
"pitchf": [1],
"rnd": [2],
},
do_constant_folding=False,
opset_version=16,
verbose=False,
input_names=input_names,
output_names=output_names,
)
else:
net_g = SynthesizerTrnMs256NSFsidO(
*cpt["config"], is_half=False
) # fp32导出C++要支持fp16必须手动将内存重新排列所以暂时不用fp16
net_g.load_state_dict(cpt["weight"], strict=False)
input_names = ["phone", "phone_lengths", "pitch", "pitchf", "ds"]
output_names = [
"audio",
]
torch.onnx.export(
net_g,
(
test_phone.to(device),
test_phone_lengths.to(device),
test_pitch.to(device),
test_pitchf.to(device),
test_ds.to(device),
),
ExportedPath,
dynamic_axes={
"phone": [1],
"pitch": [1],
"pitchf": [1],
},
do_constant_folding=False,
opset_version=16,
verbose=False,
input_names=input_names,
output_names=output_names,
)

export_onnx_old.py (new file)

@@ -0,0 +1,47 @@
from infer_pack.models_onnx_moess import SynthesizerTrnMs256NSFsidM
import torch
person = "Shiroha/shiroha.pth"
exported_path = "model.onnx"
cpt = torch.load(person, map_location="cpu")
cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
print(*cpt["config"])
net_g = SynthesizerTrnMs256NSFsidM(*cpt["config"], is_half=False)
net_g.load_state_dict(cpt["weight"], strict=False)
test_phone = torch.rand(1, 200, 256)
test_phone_lengths = torch.tensor([200]).long()
test_pitch = torch.randint(size=(1, 200), low=5, high=255)
test_pitchf = torch.rand(1, 200)
test_ds = torch.LongTensor([0])
test_rnd = torch.rand(1, 192, 200)
input_names = ["phone", "phone_lengths", "pitch", "pitchf", "ds", "rnd"]
output_names = [
"audio",
]
device = "cpu"
torch.onnx.export(
net_g,
(
test_phone.to(device),
test_phone_lengths.to(device),
test_pitch.to(device),
test_pitchf.to(device),
test_ds.to(device),
test_rnd.to(device),
),
exported_path,
dynamic_axes={
"phone": [1],
"pitch": [1],
"pitchf": [1],
"rnd": [2],
},
do_constant_folding=False,
opset_version=16,
verbose=False,
input_names=input_names,
output_names=output_names,
)


@@ -33,7 +33,9 @@ class FeatureInput(object):
self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700)
def compute_f0(self, path, f0_method):
x, sr = librosa.load(path, self.fs)
# default resample type of librosa.resample is "soxr_hq".
# Quality: soxr_vhq > soxr_hq
x, sr = librosa.load(path, self.fs) # , res_type='soxr_vhq'
p_len = x.shape[0] // self.hop
assert sr == self.fs
if f0_method == "pm":


@@ -28,3 +28,4 @@ process("gui.py")
# Save as a JSON file
with open("./i18n/zh_CN.json", "w", encoding="utf-8") as f:
json.dump(data, f, ensure_ascii=False, indent=4)
f.write("\n")

go-realtime-gui.bat (new file)

@@ -0,0 +1,2 @@
runtime\python.exe gui.py
pause

View File

@@ -1,2 +1,2 @@
runtime\python.exe infer-web.py --pycmd runtime\python.exe
runtime\python.exe infer-web.py --pycmd runtime\python.exe --port 7897
pause

gui.py

@@ -1,18 +1,22 @@
import os, sys
now_dir = os.getcwd()
sys.path.append(now_dir)
import PySimpleGUI as sg
import sounddevice as sd
import noisereduce as nr
import numpy as np
from fairseq import checkpoint_utils
import librosa, torch, parselmouth, faiss, time, threading
import librosa, torch, pyworld, faiss, time, threading
import torch.nn.functional as F
import torchaudio.transforms as tat
import scipy.signal as signal
# import matplotlib.pyplot as plt
from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono
from i18n import I18nAuto
i18n = I18nAuto()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
@@ -23,70 +27,82 @@ class RVC:
"""
初始化
"""
self.f0_up_key = key
self.time_step = 160 / 16000 * 1000
self.f0_min = 50
self.f0_max = 1100
self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700)
self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700)
self.index = faiss.read_index(index_path)
self.index_rate = index_rate
"""NOT YET USED"""
self.big_npy = np.load(npy_path)
model_path = hubert_path
print("load model(s) from {}".format(model_path))
models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
[model_path],
suffix="",
)
self.model = models[0]
self.model = self.model.to(device)
self.model = self.model.half()
self.model.eval()
cpt = torch.load(pth_path, map_location="cpu")
tgt_sr = cpt["config"][-1]
cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
if_f0 = cpt.get("f0", 1)
if if_f0 == 1:
self.net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=True)
else:
self.net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
del self.net_g.enc_q
print(self.net_g.load_state_dict(cpt["weight"], strict=False))
self.net_g.eval().to(device)
self.net_g.half()
try:
self.f0_up_key = key
self.time_step = 160 / 16000 * 1000
self.f0_min = 50
self.f0_max = 1100
self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700)
self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700)
self.sr = 16000
self.window = 160
if index_rate != 0:
self.index = faiss.read_index(index_path)
self.big_npy = np.load(npy_path)
print("index search enabled")
self.index_rate = index_rate
model_path = hubert_path
print("load model(s) from {}".format(model_path))
models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
[model_path],
suffix="",
)
self.model = models[0]
self.model = self.model.to(device)
self.model = self.model.half()
self.model.eval()
cpt = torch.load(pth_path, map_location="cpu")
self.tgt_sr = cpt["config"][-1]
cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
self.if_f0 = cpt.get("f0", 1)
if self.if_f0 == 1:
self.net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=True)
else:
self.net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
del self.net_g.enc_q
print(self.net_g.load_state_dict(cpt["weight"], strict=False))
self.net_g.eval().to(device)
self.net_g.half()
except Exception as e:
print(e)
def get_f0_coarse(self, f0):
def get_f0(self, x, f0_up_key, inp_f0=None):
x_pad = 1
f0_min = 50
f0_max = 1100
f0_mel_min = 1127 * np.log(1 + f0_min / 700)
f0_mel_max = 1127 * np.log(1 + f0_max / 700)
f0, t = pyworld.harvest(
x.astype(np.double),
fs=self.sr,
f0_ceil=f0_max,
f0_floor=f0_min,
frame_period=10,
)
f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr)
f0 = signal.medfilt(f0, 3)
f0 *= pow(2, f0_up_key / 12)
# with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
tf0 = self.sr // self.window # 每秒f0点数
if inp_f0 is not None:
delta_t = np.round(
(inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
).astype("int16")
replace_f0 = np.interp(
list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
)
shape = f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)].shape[0]
f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)] = replace_f0[:shape]
# with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
f0bak = f0.copy()
f0_mel = 1127 * np.log(1 + f0 / 700)
f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - self.f0_mel_min) * 254 / (
self.f0_mel_max - self.f0_mel_min
f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
f0_mel_max - f0_mel_min
) + 1
f0_mel[f0_mel <= 1] = 1
f0_mel[f0_mel > 255] = 255
# f0_mel[f0_mel > 188] = 188
f0_coarse = np.rint(f0_mel).astype(np.int)
return f0_coarse
def get_f0(self, x, p_len, f0_up_key=0):
f0 = (
parselmouth.Sound(x, 16000)
.to_pitch_ac(
time_step=self.time_step / 1000,
voicing_threshold=0.6,
pitch_floor=self.f0_min,
pitch_ceiling=self.f0_max,
)
.selected_array["frequency"]
)
pad_size = (p_len - len(f0) + 1) // 2
if pad_size > 0 or p_len - len(f0) - pad_size > 0:
f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
f0 *= pow(2, f0_up_key / 12)
# f0=suofang(f0)
f0bak = f0.copy()
f0_coarse = self.get_f0_coarse(f0)
return f0_coarse, f0bak
return f0_coarse, f0bak # 1-0
def infer(self, feats: torch.Tensor) -> np.ndarray:
"""
@@ -107,11 +123,7 @@ class RVC:
feats = self.model.final_proj(logits[0])
####索引优化
if (
isinstance(self.index, type(None)) == False
and isinstance(self.big_npy, type(None)) == False
and self.index_rate != 0
):
if hasattr(self, "index") and hasattr(self, "big_npy") and self.index_rate != 0:
npy = feats[0].cpu().numpy().astype("float32")
_, I = self.index.search(npy, 1)
npy = self.big_npy[I.squeeze()].astype("float16")
@@ -119,30 +131,40 @@ class RVC:
torch.from_numpy(npy).unsqueeze(0).to(device) * self.index_rate
+ (1 - self.index_rate) * feats
)
else:
print("index search FAIL or disabled")
feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
torch.cuda.synchronize()
# p_len = min(feats.shape[1],10000,pitch.shape[0])#太大了爆显存
p_len = min(feats.shape[1], 12000) #
print(feats.shape)
pitch, pitchf = self.get_f0(audio, p_len, self.f0_up_key)
p_len = min(feats.shape[1], 12000, pitch.shape[0]) # 太大了爆显存
if self.if_f0 == 1:
pitch, pitchf = self.get_f0(audio, self.f0_up_key)
p_len = min(feats.shape[1], 13000, pitch.shape[0]) # 太大了爆显存
else:
pitch, pitchf = None, None
p_len = min(feats.shape[1], 13000) # 太大了爆显存
torch.cuda.synchronize()
# print(feats.shape,pitch.shape)
feats = feats[:, :p_len, :]
pitch = pitch[:p_len]
pitchf = pitchf[:p_len]
if self.if_f0 == 1:
pitch = pitch[:p_len]
pitchf = pitchf[:p_len]
pitch = torch.LongTensor(pitch).unsqueeze(0).to(device)
pitchf = torch.FloatTensor(pitchf).unsqueeze(0).to(device)
p_len = torch.LongTensor([p_len]).to(device)
pitch = torch.LongTensor(pitch).unsqueeze(0).to(device)
pitchf = torch.FloatTensor(pitchf).unsqueeze(0).to(device)
ii = 0 # sid
sid = torch.LongTensor([ii]).to(device)
with torch.no_grad():
infered_audio = (
self.net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]
.data.cpu()
.float()
) # nsf
if self.if_f0 == 1:
infered_audio = (
self.net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]
.data.cpu()
.float()
)
else:
infered_audio = (
self.net_g.infer(feats, p_len, sid)[0][0, 0].data.cpu().float()
)
torch.cuda.synchronize()
return infered_audio
@@ -352,10 +374,10 @@ class GUI:
self.block_frame = int(self.config.block_time * self.config.samplerate)
self.crossfade_frame = int(self.config.crossfade_time * self.config.samplerate)
self.sola_search_frame = int(0.012 * self.config.samplerate)
self.delay_frame = int(0.02 * self.config.samplerate) # 往前预留0.02s
self.delay_frame = int(0.01 * self.config.samplerate) # 往前预留0.02s
self.extra_frame = int(
self.config.extra_time * self.config.samplerate
) # 往后预留0.04s
)
self.rvc = None
self.rvc = RVC(
self.config.pitch,
@@ -386,7 +408,7 @@ class GUI:
orig_freq=self.config.samplerate, new_freq=16000, dtype=torch.float32
)
self.resampler2 = tat.Resample(
orig_freq=40000, new_freq=self.config.samplerate, dtype=torch.float32
orig_freq=self.rvc.tgt_sr, new_freq=self.config.samplerate, dtype=torch.float32
)
thread_vc = threading.Thread(target=self.soundinput)
thread_vc.start()
@@ -485,8 +507,8 @@ class GUI:
else:
outdata[:] = self.output_wav[:].repeat(2, 1).t().cpu().numpy()
total_time = time.perf_counter() - start_time
print("infer time:" + str(total_time))
self.window["infer_time"].update(int(total_time * 1000))
print("infer time:" + str(total_time))
def get_devices(self, update: bool = True):
"""获取设备列表"""


@@ -11,10 +11,10 @@ def load_language_list(language):
class I18nAuto:
def __init__(self, language=None):
if language is None:
language = "auto"
if language == "auto":
language = locale.getdefaultlocale()[0]
if language in ["Auto", None]:
language = locale.getdefaultlocale()[
0
] # getlocale can't identify the system's language ((None, None))
if not os.path.exists(f"./i18n/{language}.json"):
language = "en_US"
self.language = language


@@ -1,58 +1,58 @@
{
"本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>使用需遵守的协议-LICENSE.txt</b>.": "本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>使用需遵守的协议-LICENSE.txt</b>.",
"本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>使用需遵守的协议-LICENSE.txt</b>.": "This software is open source under the MIT license, the author does not have any control over the software, and those who use the software and spread the sounds exported by the software are solely responsible. <br>If you do not agree with this clause, you cannot use or quote any codes and files in the software package . See root directory <b>Agreement-LICENSE.txt</b> for details.",
"模型推理": "Model inference",
"推理音色": "Inferencing voice",
"刷新音色列表": "Refresh voice list",
"卸载音色省显存": "Unload voice to save GPU memory",
"请选择说话人id": "select a speaker ID",
"男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ": "Recommended +12 key for male-to-female voice conversion, -12 key for female-to-male voice conversion. If the pitch range is too wide and causes distortion, adjust it to a suitable range by yourself.",
"变调(整数, 半音数量, 升八度12降八度-12)": "Pitch shifting (integer, number of semitones, raise by an octave +12 or lower by an octave -12)",
"输入待处理音频文件路径(默认是正确格式示例)": "Enter the file path of the audio to be processed (default is the correct format example)",
"推理音色": "Inferencing timbre",
"刷新音色列表": "Refresh timbre list",
"卸载音色省显存": "Unload timbre to save GPU memory",
"请选择说话人id": "Please select a speaker id",
"男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ": "It is recommended +12key for male to female conversion, and -12key for female to male conversion. If the sound range explodes and the timbre is distorted, you can also adjust it to the appropriate range by yourself. ",
"变调(整数, 半音数量, 升八度12降八度-12)": "transpose(integer, number of semitones, octave sharp 12 octave flat -12)",
"输入待处理音频文件路径(默认是正确格式示例)": "Enter the path of the audio file to be processed (the default is the correct format example)",
"选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比": "Select the algorithm for pitch extraction. Use 'pm' to speed up for singing voices, or use 'harvest' for better low-pitched voices, but it is extremely slow.",
"特征检索库文件路径": "Feature search database file path",
"特征文件路径": "Feature file path",
"F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调": "F0 curve file, optional, one pitch per line, instead of default F0 and pitch shifting",
"F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调": "F0 curve file, optional, one pitch per line, instead of the default F0 and ups and downs",
"转换": "Conversion",
"输出信息": "Output information",
"输出音频(右下角三个点,点了可以下载)": "Output audio (click the three dots in the lower right corner to download)",
"批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ": "Batch conversion, input the folder containing audio files to be converted, or upload multiple audio files. The converted audio will be output in the specified folder (default opt).",
"输出音频(右下角三个点,点了可以下载)": "Output audio (three dots in the lower right corner, click to download)",
"批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ": "For batch conversion, input the audio folder to be converted, or upload multiple audio files, and output the converted audio in the specified folder (opt by default). ",
"指定输出文件夹": "Specify output folder",
"检索特征占比": "Search feature ratio",
"输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)": "Enter the path to the audio folder to be processed (just copy it from the file manager address bar)",
"也可批量输入音频文件, 二选一, 优先读文件夹": "Multiple audio files can also be inputted, either of the two options, with priority given to the folder",
"伴奏人声分离": "Instrumental and vocal separation",
"人声伴奏分离批量处理, 使用UVR5模型. <br>不带和声用HP2, 带和声且提取的人声不需要和声用HP5<br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)": "Batch processing of instrumental and vocal separation using UVR5 model. <br>Use HP2 for vocal separation without harmonics, and use HP5 for vocal separation with harmonics and the extracted vocals do not need to have harmonics. <br>Example of a qualified folder path: E:\\codes\\py39\\vits_vc_gpu\\test_sample (just copy it from the file manager address bar)",
"输入待处理音频文件夹路径": "Input the path to the audio folder to be processed",
"输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)": "Enter the path of the audio folder to be processed (just go to the address bar of the file manager and copy it)",
"也可批量输入音频文件, 二选一, 优先读文件夹": "You can also input audio files in batches, choose one of the two, and read the folder first",
"伴奏人声分离": "Accompaniment and vocal separation",
"人声伴奏分离批量处理, 使用UVR5模型. <br>不带和声用HP2, 带和声且提取的人声不需要和声用HP5<br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)": "Batch processing of vocal accompaniment separation, using UVR5 model. <br>Without harmony, use HP2, with harmony and extracted vocals do not need harmony, use HP5<br>Example of qualified folder path format: E:\\ codes\\py39\\vits_vc_gpu\\Egret Shuanghua test sample (just go to the address bar of the file manager and copy it)",
"输入待处理音频文件夹路径": "Input audio folder path",
"模型": "Model",
"指定输出人声文件夹": "Specify vocals output folder",
"指定输出乐器文件夹": "Specify instrumentals output folder",
"训练": "Train",
"step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. ": "step1: Fill in the experiment configuration. Experiment data is stored in the 'logs' directory, with each experiment in a separate folder. The experiment name path needs to be entered manually and should contain the experiment configuration, logs, and trained model files.",
"step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. ": "step1: Fill in the experimental configuration. The experimental data is placed under logs, and each experiment has a folder. You need to manually enter the experimental name path, which contains the experimental configuration, logs, and model files obtained from training. ",
"输入实验名": "Input experiment name",
"目标采样率": "Target sample rate",
"模型是否带音高指导(唱歌一定要, 语音可以不要)": "Whether the model has pitch guidance (necessary for singing, but not required for speech)",
"step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. ": "step2a: Automatically traverse the training folder and slice and normalize all audio files that can be decoded into audio. Two 'wav' folders will be generated in the experiment directory. Currently, only single-person training is supported.",
"模型是否带音高指导(唱歌一定要, 语音可以不要)": "Does the model have pitch guidance (singing must, voice can not.)",
"step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. ": "step2a: Automatically traverse all files that can be decoded into audio in the training folder and perform slice normalization, and generate 2 wav folders in the experiment directory; only single-person training is supported for the time being. ",
"输入训练文件夹路径": "Input training folder path",
"请指定说话人id": "Please specify speaker ID",
"处理数据": "Process data",
"step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)": "step2b: Use CPU to extract pitch (if the model has pitch guidance) and GPU to extract features (select card number).",
"以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2": "Separate the GPU id numbers with '-' when inputting. For example, '0-1-2' means using GPU 0, GPU 1, and GPU 2.",
"step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)": "step2b: Use CPU to extract pitch (if the model has pitch), use GPU to extract features (select card number)",
"以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2": "Enter the card numbers used separated by -, for example 0-1-2 use card 0 and card 1 and card 2",
"显卡信息": "GPU information",
"提取音高使用的CPU进程数": "Number of CPU threads to use for pitch extraction",
"选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢": "Select pitch extraction algorithm: Use 'pm' for faster processing of singing voice, 'dio' for high-quality speech but slower processing, and 'harvest' for the best quality but slowest processing.",
"特征提取": "Feature extraction",
"step3: 填写训练设置, 开始训练模型和索引": "step3: Fill in the training settings and start training the model and index.",
"保存频率save_every_epoch": "Saving frequency (save_every_epoch)",
"step3: 填写训练设置, 开始训练模型和索引": "step3: Fill in the training settings, start training the model and index",
"保存频率save_every_epoch": "Save frequency (save_every_epoch)",
"总训练轮数total_epoch": "Total training epochs (total_epoch)",
"是否仅保存最新的ckpt文件以节省硬盘空间": "Whether to save only the latest ckpt file to save disk space",
"是否缓存所有训练集至显存. 10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速": "Whether to cache all training sets in GPU memory. Small datasets (under 10 minutes) can be cached to speed up training, but caching large datasets can cause GPU memory errors and does not increase speed significantly.",
"是否缓存所有训练集至显存. 10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速": "Whether to cache all training sets to video memory. Small data under 10 minutes can be cached to speed up training, and large data cache will blow up video memory and not increase the speed much",
"加载预训练底模G路径": "Load pre-trained base model G path.",
"加载预训练底模D路径": "Load pre-trained base model D path.",
"训练模型": "Train model.",
"训练特征索引": "Train feature index.",
"一键训练": "One-click training.",
"ckpt处理": "Ckpt processing.",
"模型融合, 可用于测试音色融合": "Model fusion, can be used for merging diffrent voices",
"ckpt处理": "ckpt processing.",
"模型融合, 可用于测试音色融合": "Model Fusion, which can be used to test sound fusion",
"A模型路径": "A model path.",
"B模型路径": "B model path.",
"A模型权重": "A model weight for model A.",
@@ -60,40 +60,45 @@
"要置入的模型信息": "Model information to be placed.",
"保存的模型名不带后缀": "Saved model name without extension.",
"融合": "Fusion.",
"修改模型信息(仅支持weights文件夹下提取的小模型文件)": "Modify model information (only supports small model files extracted under the weights folder).",
"修改模型信息(仅支持weights文件夹下提取的小模型文件)": "Modify model information (only small model files extracted from the weights folder are supported)",
"模型路径": "Model path",
"要改的模型信息": "Model information to be modified",
"保存的文件名, 默认空为和源文件同名": "Name of the file to be saved, default is the same as the source file name",
"保存的文件名, 默认空为和源文件同名": "The saved file name, the default is empty and the same name as the source file",
"修改": "Modify",
"查看模型信息(仅支持weights文件夹下提取的小模型文件)": "View model information (only applicable to small model files extracted from the 'weights' folder)",
"查看模型信息(仅支持weights文件夹下提取的小模型文件)": "View model information (only small model files extracted from the weights folder are supported)",
"查看": "View",
"模型提取(输入logs文件夹下大文件模型路径),适用于训一半不想训了模型没有自动提取保存小文件模型,或者想测试中间模型的情况": "Model extraction (input the path of a large model file in the 'logs' folder), applicable when you want to extract a small model file after training halfway and it was not saved automatically, or when you want to test an intermediate model",
"模型提取(输入logs文件夹下大文件模型路径),适用于训一半不想训了模型没有自动提取保存小文件模型,或者想测试中间模型的情况": "Model extraction (enter the path of the large file model under the logs folder), which is suitable for half of the training and does not want to train the model without automatically extracting and saving the small file model, or if you want to test the intermediate model",
"保存名": "Save Name",
"模型是否带音高指导,1是0否": "Whether the model has pitch guidance, 1 for yes, 0 for no",
"提取": "Extract",
"Onnx导出": "Onnx",
"RVC模型路径": "RVC Path",
"Onnx输出路径": "Onnx Export Path",
"MoeVS模型": "MoeSS?",
"导出Onnx模型": "Export Onnx Model",
"招募音高曲线前端编辑器": "Recruit front-end editors for pitch curves",
"加开发群联系我xxxxx": "Join the development group to contact me at xxxxx",
"加开发群联系我xxxxx": "Add development group to contact me xxxxx",
"点击查看交流、问题反馈群号": "Click to view the communication and problem feedback group number",
"xxxxx": "xxxxx",
"加载模型": "Load Model",
"Hubert模型": "Hubert Model",
"选择.pth文件": "Select .pth file",
"选择.index文件": "Select .index file",
"选择.npy文件": "Select .npy file",
"输入设备": "Input device",
"输出设备": "Output device",
"加载模型": "load model",
"Hubert模型": "Hubert File",
"选择.pth文件": "Select the .pth file",
"选择.index文件": "Select the .index file",
"选择.npy文件": "Select the .npy file",
"输入设备": "input device",
"输出设备": "output device",
"音频设备(请使用同种类驱动)": "Audio device (please use the same type of driver)",
"响应阈值": "Response threshold",
"音调设置": "Pitch setting",
"响应阈值": "response threshold",
"音调设置": "tone setting",
"Index Rate": "Index Rate",
"常规设置": "General Settings",
"采样长度": "Sampling length",
"淡入淡出长度": "Fade in/out length",
"额外推理时长": "Additional inference time",
"输入降噪": "Input Noise Reduction",
"输出降噪": "Output Noise Reduction",
"性能设置": "Performance settings",
"开始音频转换": "Start Audio Conversion",
"停止音频转换": "Stop Audio Conversion",
"常规设置": "general settings",
"采样长度": "Sample length",
"淡入淡出长度": "fade length",
"额外推理时长": "extra inference time",
"输入降噪": "Input Noisereduce",
"输出降噪": "Output Noisereduce",
"性能设置": "performance settings",
"开始音频转换": "start audio conversion",
"停止音频转换": "stop audio conversion",
"推理时间(ms):": "Infer Time(ms):"
}

i18n/es_ES.json (new file)

@@ -0,0 +1,104 @@
{
"本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>使用需遵守的协议-LICENSE.txt</b>.": "Este software es de código abierto bajo la licencia MIT, el autor no tiene ningún control sobre el software, y aquellos que usan el software y difunden los sonidos exportados por el software son los únicos responsables.<br>Si no está de acuerdo con esta cláusula , no puede utilizar ni citar ningún código ni archivo del paquete de software Consulte el directorio raíz <b>Agreement-LICENSE.txt</b> para obtener más información.",
"模型推理": "inferencia del modelo",
"推理音色": "inferencia de voz",
"刷新音色列表": "Actualizar lista de voces",
"卸载音色省显存": "Descargue la voz para ahorrar memoria GPU",
"请选择说话人id": "seleccione una identificación de altavoz",
"男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ": "Tecla +12 recomendada para conversión de voz de hombre a mujer, tecla -12 para conversión de voz de mujer a hombre. Si el rango de tono es demasiado amplio y causa distorsión, ajústelo usted mismo a un rango adecuado.",
"变调(整数, 半音数量, 升八度12降八度-12)": "Cambio de tono (entero, número de semitonos, subir una octava +12 o bajar una octava -12)",
"输入待处理音频文件路径(默认是正确格式示例)": "Ingrese la ruta del archivo del audio que se procesará (el formato predeterminado es el ejemplo correcto)",
"选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比": "Seleccione el algoritmo para la extracción de tono. Use 'pm' para acelerar las voces cantadas, o use 'harvest' para mejorar las voces bajas, pero es extremadamente lento.",
"特征检索库文件路径": "Ruta del archivo de la base de datos de búsqueda de características",
"特征文件路径": "Ruta del archivo de características",
"F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调": "Archivo de curva F0, opcional, un tono por línea, en lugar de F0 predeterminado y cambio de tono",
"转换": "Conversión",
"输出信息": "Información de salida",
"输出音频(右下角三个点,点了可以下载)": "Salida de audio (haga clic en los tres puntos en la esquina inferior derecha para descargar)",
"批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ": "Conversión por lotes, ingrese la carpeta que contiene los archivos de audio para convertir o cargue varios archivos de audio. El audio convertido se emitirá en la carpeta especificada (opción predeterminada).",
"指定输出文件夹": "Especificar carpeta de salida",
"检索特征占比": "Proporción de función de búsqueda",
"输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)": "Ingrese la ruta a la carpeta de audio que se procesará (simplemente cópiela desde la barra de direcciones del administrador de archivos)",
"也可批量输入音频文件, 二选一, 优先读文件夹": "También se pueden ingresar múltiples archivos de audio, cualquiera de las dos opciones, con prioridad dada a la carpeta",
"伴奏人声分离": "Instrumental and vocal separation",
"人声伴奏分离批量处理, 使用UVR5模型. <br>不带和声用HP2, 带和声且提取的人声不需要和声用HP5<br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)": "Procesamiento por lotes de separación instrumental y vocal utilizando el modelo UVR5. <br>Utilice HP2 para la separación vocal sin armónicos, y utilice HP5 para la separación vocal con armónicos y las voces extraídas no necesitan tener armónicos. <br>Ejemplo de una ruta de carpeta calificada: E:\\codes\\py39\\vits_vc_gpu\\test_sample (simplemente cópielo desde la barra de direcciones del administrador de archivos)",
"输入待处理音频文件夹路径": "Ingrese la ruta a la carpeta de audio que se procesará",
"模型": "Modelo",
"指定输出人声文件夹": "Especificar la carpeta de salida de voces",
"指定输出乐器文件夹": "Especificar la carpeta de salida de instrumentales",
"训练": "Entrenamiento",
"step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. ": "paso 1: Complete la configuración del experimento. Los datos del experimento se almacenan en el directorio 'logs', con cada experimento en una carpeta separada. La ruta del nombre del experimento debe ingresarse manualmente y debe contener la configuración del experimento, los registros y los archivos del modelo entrenado.",
"输入实验名": "Ingrese el nombre del modelo",
"目标采样率": "Tasa de muestreo objetivo",
"模型是否带音高指导(唱歌一定要, 语音可以不要)": "Si el modelo tiene guía de tono (necesaria para cantar, pero no para hablar)",
"step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. ": "paso 2a: recorra automáticamente la carpeta de capacitación y corte y normalice todos los archivos de audio que se pueden decodificar en audio. Se generarán dos carpetas 'wav' en el directorio del experimento. Actualmente, solo se admite la capacitación de una sola persona.",
"输入训练文件夹路径": "Introduzca la ruta de la carpeta de entrenamiento",
"请指定说话人id": "Especifique el ID del hablante",
"处理数据": "Procesar datos",
"step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)": "paso 2b: use la CPU para extraer el tono (si el modelo tiene guía de tono) y la GPU para extraer características (seleccione el número de tarjeta).",
"以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2": "Separe los números de identificación de la GPU con '-' al ingresarlos. Por ejemplo, '0-1-2' significa usar GPU 0, GPU 1 y GPU 2.",
"显卡信息": "información de la GPU",
"提取音高使用的CPU进程数": "Número de subprocesos de CPU que se utilizarán para la extracción de tono",
"选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢": "Seleccione el algoritmo de extracción de tono: utilice 'pm' para un procesamiento más rápido de la voz cantada, 'dio' para un discurso de alta calidad pero un procesamiento más lento y 'cosecha' para obtener la mejor calidad pero un procesamiento más lento.",
"特征提取": "Extracción de características",
"step3: 填写训练设置, 开始训练模型和索引": "Paso 3: complete la configuración de entrenamiento y comience a entrenar el modelo y el índice.",
"保存频率save_every_epoch": "Frecuencia de guardado (save_every_epoch)",
"总训练轮数total_epoch": "Total de épocas de entrenamiento (total_epoch)",
"是否仅保存最新的ckpt文件以节省硬盘空间": "Si guardar solo el archivo ckpt más reciente para ahorrar espacio en disco",
"是否缓存所有训练集至显存. 10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速": "Si almacenar en caché todos los conjuntos de entrenamiento en la memoria de la GPU. Los conjuntos de datos pequeños (menos de 10 minutos) se pueden almacenar en caché para acelerar el entrenamiento, pero el almacenamiento en caché de conjuntos de datos grandes puede causar errores de memoria en la GPU y no aumenta la velocidad de manera significativa.",
"加载预训练底模G路径": "Cargue la ruta G del modelo base preentrenada.",
"加载预训练底模D路径": "Cargue la ruta del modelo D base preentrenada.",
"训练模型": "Entrenar Modelo",
"训练特征索引": "Índice de características del Entrenamiento",
"一键训练": "Entrenamiento con un clic.",
"ckpt处理": "Procesamiento de recibos",
"模型融合, 可用于测试音色融合": "Fusión de modelos, se puede utilizar para fusionar diferentes voces",
"A模型路径": "Modelo A ruta.",
"B模型路径": "Modelo B ruta.",
"A模型权重": "Un peso modelo para el modelo A.",
"模型是否带音高指导": "Si el modelo tiene guía de tono.",
"要置入的模型信息": "Información del modelo a colocar.",
"保存的模型名不带后缀": "Nombre del modelo guardado sin extensión.",
"融合": "Fusión.",
"修改模型信息(仅支持weights文件夹下提取的小模型文件)": "Modificar la información del modelo (solo admite archivos de modelos pequeños extraídos en la carpeta de pesos).",
"模型路径": "Ruta del modelo",
"要改的模型信息": "Información del modelo a modificar",
"保存的文件名, 默认空为和源文件同名": "Nombre del archivo que se guardará, el valor predeterminado es el mismo que el nombre del archivo de origen",
"修改": "Modificar",
"查看模型信息(仅支持weights文件夹下提取的小模型文件)": "Ver información del modelo (solo aplicable a archivos de modelos pequeños extraídos de la carpeta 'pesos')",
"查看": "Ver",
"模型提取(输入logs文件夹下大文件模型路径),适用于训一半不想训了模型没有自动提取保存小文件模型,或者想测试中间模型的情况": "Extracción de modelo (ingrese la ruta de un archivo de modelo grande en la carpeta 'logs'), aplicable cuando desea extraer un archivo de modelo pequeño después de entrenar a mitad de camino y no se guardó automáticamente, o cuando desea probar un modelo intermedio",
"保存名": "Guardar nombre",
"模型是否带音高指导,1是0否": "Si el modelo tiene guía de tono, 1 para sí, 0 para no",
"提取": "Extracter",
"Onnx导出": "Onnx导出",
"RVC模型路径": "RVC模型路径",
"Onnx输出路径": "Onnx输出路径",
"MoeVS模型": "MoeVS模型",
"导出Onnx模型": "导出Onnx模型",
"招募音高曲线前端编辑器": "Reclutar editores front-end para curvas de tono",
"加开发群联系我xxxxx": "Únase al grupo de desarrollo para contactarme en xxxxx",
"点击查看交流、问题反馈群号": "Haga clic para ver el número de grupo de comunicación y comentarios sobre problemas",
"xxxxx": "xxxxx",
"加载模型": "Cargar modelo",
"Hubert模型": "Modelo de Hubert ",
"选择.pth文件": "Seleccionar archivo .pth",
"选择.index文件": "Select .index file",
"选择.npy文件": "Seleccionar archivo .npy",
"输入设备": "Dispositivo de entrada",
"输出设备": "Dispositivo de salida",
"音频设备(请使用同种类驱动)": "Dispositivo de audio (utilice el mismo tipo de controlador)",
"响应阈值": "Umbral de respuesta",
"音调设置": "Ajuste de tono",
"Index Rate": "Tasa de índice",
"常规设置": "Configuración general",
"采样长度": "Longitud de muestreo",
"淡入淡出长度": "Duración del fundido de entrada/salida",
"额外推理时长": "Tiempo de inferencia adicional",
"输入降噪": "Reducción de ruido de entrada",
"输出降噪": "Reducción de ruido de salida",
"性能设置": "Configuración de rendimiento",
"开始音频转换": "Iniciar conversión de audio",
"停止音频转换": "Detener la conversión de audio",
"推理时间(ms):": "Inferir tiempo (ms):"
}


@@ -11,17 +11,17 @@
"选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比": "ピッチ抽出アルゴリズムを選択してください。歌声の場合は、pmを使用して速度を上げることができます。低音が重要な場合は、harvestを使用できますが、非常に遅くなります。",
"特征检索库文件路径": "特徴量検索データベースのファイルパス",
"特征文件路径": "特徴量ファイルのパス",
"F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调": "F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调",
"F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调": "F0(最低共振周波数)カーブファイル(オプション、1行に1ピッチ、デフォルトのF0(最低共振周波数)とエレベーションを置き換えます。)",
"转换": "変換",
"输出信息": "出力情報",
"输出音频(右下角三个点,点了可以下载)": "出力音声(右下の三点をクリックしてダウンロードできます)",
"批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ": "批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ",
"批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ": "一括変換、変換する音声フォルダを入力、または複数の音声ファイルをアップロードし、指定したフォルダ(デフォルトのopt)に変換した音声を出力します。",
"指定输出文件夹": "出力フォルダを指定してください",
"检索特征占比": "検索特徴率",
"输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)": "処理対象音声フォルダーのパスを入力してください(ファイルマネージャのアドレスバーからコピーしてください)",
"也可批量输入音频文件, 二选一, 优先读文件夹": "複数の音声ファイルを一括で入力することもできますが、フォルダーを優先して読み込みます",
"伴奏人声分离": "伴奏とボーカルの分離",
"人声伴奏分离批量处理, 使用UVR5模型. <br>不带和声用HP2, 带和声且提取的人声不需要和声用HP5<br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)": "人声伴奏分离批量处理, 使用UVR5模型. <br>不带和声用HP2, 带和声且提取的人声不需要和声用HP5<br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)",
"人声伴奏分离批量处理, 使用UVR5模型. <br>不带和声用HP2, 带和声且提取的人声不需要和声用HP5<br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)": "UVR5モデルを使用した、声帯分離バッチ処理です。<br>HP2はハーモニー、ハーモニーのあるボーカルとハーモニーのないボーカルを抽出したものはHP5を使ってください <br>フォルダーパスの形式例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(エクスプローラーのアドレスバーからコピーするだけです)",
"输入待处理音频文件夹路径": "処理するオーディオファイルのフォルダパスを入力してください",
"模型": "モデル",
"指定输出人声文件夹": "人の声を出力するフォルダを指定してください",
@@ -60,7 +60,7 @@
"要置入的模型信息": "挿入するモデル情報",
"保存的模型名不带后缀": "拡張子のない保存するモデル名",
"融合": "フュージョン",
"修改模型信息(仅支持weights文件夹下提取的小模型文件)": "修改模型信息(仅支持weights文件夹下提取的小模型文件)",
"修改模型信息(仅支持weights文件夹下提取的小模型文件)": "モデル情報の修正(weightsフォルダから抽出された小さなモデルファイルのみ対応)",
"模型路径": "モデルパス",
"要改的模型信息": "変更するモデル情報",
"保存的文件名, 默认空为和源文件同名": "保存するファイル名、デフォルトでは空欄で元のファイル名と同じ名前になります",
@@ -68,18 +68,23 @@
"查看模型信息(仅支持weights文件夹下提取的小模型文件)": "モデル情報を表示する(小さいモデルファイルはweightsフォルダーからのみサポートされています)",
"查看": "表示",
"模型提取(输入logs文件夹下大文件模型路径),适用于训一半不想训了模型没有自动提取保存小文件模型,或者想测试中间模型的情况": "モデル抽出(ログフォルダー内の大きなファイルのモデルパスを入力)、モデルを半分までトレーニングし、自動的に小さいファイルモデルを保存しなかったり、中間モデルをテストしたい場合に適用されます。",
"保存名": "保存するファイル名",
"保存名": "保存ファイル名",
"模型是否带音高指导,1是0否": "モデルに音高ガイドを付けるかどうか、1は付ける、0は付けない",
"提取": "抽出",
"Onnx导出": "Onnx",
"RVC模型路径": "RVCルパス",
"Onnx输出路径": "Onnx出力パス",
"MoeVS模型": "MoeSS",
"导出Onnx模型": "Onnxに変換",
"招募音高曲线前端编辑器": "音高曲線フロントエンドエディターを募集",
"加开发群联系我xxxxx": "開発グループに参加して私に連絡してくださいxxxxx",
"点击查看交流、问题反馈群号": "クリックして交流、問題フィードバックグループ番号を表示",
"xxxxx": "xxxxx",
"加载模型": "モデルをロードする",
"加载模型": "モデルをロード",
"Hubert模型": "Hubert模型",
"选择.pth文件": ".pthファイルを選択する",
"选择.index文件": ".indexファイルを選択する",
"选择.npy文件": ".npyファイルを選択する",
"选择.pth文件": ".pthファイルを選択",
"选择.index文件": ".indexファイルを選択",
"选择.npy文件": ".npyファイルを選択",
"输入设备": "入力デバイス",
"输出设备": "出力デバイス",
"音频设备(请使用同种类驱动)": "オーディオデバイス(同じ種類のドライバーを使用してください)",
@@ -93,7 +98,7 @@
"输入降噪": "入力ノイズの低減",
"输出降噪": "出力ノイズの低減",
"性能设置": "パフォーマンス設定",
"开始音频转换": "音声変換を開始する",
"停止音频转换": "音声変換を停止する",
"开始音频转换": "音声変換を開始",
"停止音频转换": "音声変換を停止",
"推理时间(ms):": "推論時間(ms):"
}
}


@@ -42,3 +42,4 @@ for lang_file in languages:
# Save the updated language file
with open(lang_file, "w", encoding="utf-8") as f:
json.dump(lang_data, f, ensure_ascii=False, indent=4)
f.write("\n")


@@ -71,6 +71,11 @@
"保存名": "保存名",
"模型是否带音高指导,1是0否": "模型是否带音高指导,1是0否",
"提取": "提取",
"Onnx导出": "Onnx导出",
"RVC模型路径": "RVC模型路径",
"Onnx输出路径": "Onnx输出路径",
"MoeVS模型": "MoeVS模型",
"导出Onnx模型": "导出Onnx模型",
"招募音高曲线前端编辑器": "招募音高曲线前端编辑器",
"加开发群联系我xxxxx": "加开发群联系我xxxxx",
"点击查看交流、问题反馈群号": "点击查看交流、问题反馈群号",
@@ -96,4 +101,4 @@
"开始音频转换": "开始音频转换",
"停止音频转换": "停止音频转换",
"推理时间(ms):": "推理时间(ms):"
}
}


@@ -71,6 +71,11 @@
"保存名": "儲存名",
"模型是否带音高指导,1是0否": "模型是否帶音高指導1是0否",
"提取": "提取",
"Onnx导出": "Onnx导出",
"RVC模型路径": "RVC模型路径",
"Onnx输出路径": "Onnx输出路径",
"MoeVS模型": "MoeSS模型",
"导出Onnx模型": "导出Onnx模型",
"招募音高曲线前端编辑器": "招募音高曲線前端編輯器",
"加开发群联系我xxxxx": "加開發群聯繫我xxxxx",
"点击查看交流、问题反馈群号": "點擊查看交流、問題反饋群號",
@@ -96,4 +101,4 @@
"开始音频转换": "開始音訊轉換",
"停止音频转换": "停止音訊轉換",
"推理时间(ms):": "推理時間(ms):"
}
}


@@ -71,6 +71,11 @@
"保存名": "儲存名",
"模型是否带音高指导,1是0否": "模型是否帶音高指導1是0否",
"提取": "提取",
"Onnx导出": "Onnx导出",
"RVC模型路径": "RVC模型路径",
"Onnx输出路径": "Onnx输出路径",
"MoeVS模型": "MoeSS模型",
"导出Onnx模型": "导出Onnx模型",
"招募音高曲线前端编辑器": "招募音高曲線前端編輯器",
"加开发群联系我xxxxx": "加開發群聯繫我xxxxx",
"点击查看交流、问题反馈群号": "點擊查看交流、問題反饋群號",
@@ -96,4 +101,4 @@
"开始音频转换": "開始音訊轉換",
"停止音频转换": "停止音訊轉換",
"推理时间(ms):": "推理時間(ms):"
}
}


@@ -71,6 +71,11 @@
"保存名": "儲存名",
"模型是否带音高指导,1是0否": "模型是否帶音高指導1是0否",
"提取": "提取",
"Onnx导出": "Onnx导出",
"RVC模型路径": "RVC模型路径",
"Onnx输出路径": "Onnx输出路径",
"MoeVS模型": "MoeSS模型",
"导出Onnx模型": "导出Onnx模型",
"招募音高曲线前端编辑器": "招募音高曲線前端編輯器",
"加开发群联系我xxxxx": "加開發群聯繫我xxxxx",
"点击查看交流、问题反馈群号": "點擊查看交流、問題反饋群號",
@@ -96,4 +101,4 @@
"开始音频转换": "開始音訊轉換",
"停止音频转换": "停止音訊轉換",
"推理时间(ms):": "推理時間(ms):"
}
}


@@ -1,11 +1,11 @@
from multiprocessing import cpu_count
import threading
import threading,pdb,librosa
from time import sleep
from subprocess import Popen
from time import sleep
import torch, os, traceback, sys, warnings, shutil, numpy as np
import faiss
from random import shuffle
now_dir = os.getcwd()
sys.path.append(now_dir)
tmp = os.path.join(now_dir, "TEMP")
@@ -17,20 +17,20 @@ os.environ["TEMP"] = tmp
warnings.filterwarnings("ignore")
torch.manual_seed(114514)
from i18n import I18nAuto
import ffmpeg
i18n = I18nAuto()
# 判断是否有能用来训练和加速推理的N卡
ncpu = cpu_count()
ngpu = torch.cuda.device_count()
gpu_infos = []
mem=[]
if (not torch.cuda.is_available()) or ngpu == 0:
if_gpu_ok = False
else:
if_gpu_ok = False
for i in range(ngpu):
gpu_name = torch.cuda.get_device_name(i)
if ("16" in gpu_name and "V100" not in gpu_name) or "MX" in gpu_name:
continue
if (
"10" in gpu_name
or "20" in gpu_name
@@ -44,17 +44,19 @@ else:
or "70" in gpu_name
or "80" in gpu_name
or "90" in gpu_name
or "M4" in gpu_name
or "T4" in gpu_name
or "M4" in gpu_name.upper()
or "T4" in gpu_name.upper()
or "TITAN" in gpu_name.upper()
): # A10#A100#V100#A40#P40#M40#K80
): # A10#A100#V100#A40#P40#M40#K80#A4500
if_gpu_ok = True # 至少有一张能用的N卡
gpu_infos.append("%s\t%s" % (i, gpu_name))
gpu_info = (
"\n".join(gpu_infos)
if if_gpu_ok == True and len(gpu_infos) > 0
else "很遗憾您这没有能用的显卡来支持您训练"
)
mem.append(int(torch.cuda.get_device_properties(i).total_memory/1024/1024/1024+0.4))
if if_gpu_ok == True and len(gpu_infos) > 0:
gpu_info ="\n".join(gpu_infos)
default_batch_size=min(mem)//2
else:
gpu_info = "很遗憾您这没有能用的显卡来支持您训练"
default_batch_size=1
gpus = "-".join([i[0] for i in gpu_infos])
from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono
from scipy.io import wavfile
@@ -126,7 +128,7 @@ def vc_single(
f0_file,
f0_method,
file_index,
file_big_npy,
# file_big_npy,
index_rate,
): # spk_item, input_audio0, vc_transform0,f0_file,f0method0
global tgt_sr, net_g, vc, hubert_model
@@ -139,6 +141,17 @@ def vc_single(
if hubert_model == None:
load_hubert()
if_f0 = cpt.get("f0", 1)
file_index = (
file_index.strip(" ")
.strip('"')
.strip("\n")
.strip('"')
.strip(" ")
.replace("trained", "added")
) # 防止小白写错,自动帮他替换掉
# file_big_npy = (
# file_big_npy.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
# )
audio_opt = vc.pipeline(
hubert_model,
net_g,
@@ -148,7 +161,7 @@ def vc_single(
f0_up_key,
f0_method,
file_index,
file_big_npy,
# file_big_npy,
index_rate,
if_f0,
f0_file=f0_file,
@@ -171,7 +184,7 @@ def vc_multi(
f0_up_key,
f0_method,
file_index,
file_big_npy,
# file_big_npy,
index_rate,
):
try:
@@ -189,6 +202,14 @@ def vc_multi(
traceback.print_exc()
paths = [path.name for path in paths]
infos = []
file_index = (
file_index.strip(" ")
.strip('"')
.strip("\n")
.strip('"')
.strip(" ")
.replace("trained", "added")
) # 防止小白写错,自动帮他替换掉
for path in paths:
info, opt = vc_single(
sid,
@@ -197,7 +218,7 @@ def vc_multi(
None,
f0_method,
file_index,
file_big_npy,
# file_big_npy,
index_rate,
)
if info == "Success":
@@ -215,7 +236,7 @@ def vc_multi(
yield traceback.format_exc()
def uvr(model_name, inp_root, save_root_vocal, paths, save_root_ins):
def uvr(model_name, inp_root, save_root_vocal, paths, save_root_ins,agg):
infos = []
try:
inp_root = inp_root.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
@@ -226,6 +247,7 @@ def uvr(model_name, inp_root, save_root_vocal, paths, save_root_ins):
save_root_ins.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
)
pre_fun = _audio_pre_(
agg=int(agg),
model_path=os.path.join(weight_uvr5_root, model_name + ".pth"),
device=device,
is_half=is_half,
@@ -234,10 +256,25 @@ def uvr(model_name, inp_root, save_root_vocal, paths, save_root_ins):
paths = [os.path.join(inp_root, name) for name in os.listdir(inp_root)]
else:
paths = [path.name for path in paths]
for name in paths:
inp_path = os.path.join(inp_root, name)
for path in paths:
inp_path = os.path.join(inp_root, path)
need_reformat=1
done=0
try:
pre_fun._path_audio_(inp_path, save_root_ins, save_root_vocal)
info = ffmpeg.probe(inp_path, cmd="ffprobe")
if(info["streams"][0]["channels"]==2 and info["streams"][0]["sample_rate"]=="44100"):
need_reformat=0
pre_fun._path_audio_(inp_path, save_root_ins, save_root_vocal)
done=1
except:
need_reformat = 1
traceback.print_exc()
if(need_reformat==1):
tmp_path="%s/%s.reformatted.wav"%(tmp,os.path.basename(inp_path))
os.system("ffmpeg -i %s -vn -acodec pcm_s16le -ac 2 -ar 44100 %s -y"%(inp_path,tmp_path))
inp_path=tmp_path
try:
if(done==0):pre_fun._path_audio_(inp_path, save_root_ins, save_root_vocal)
infos.append("%s->Success" % (os.path.basename(inp_path)))
yield "\n".join(infos)
except:
@@ -547,15 +584,18 @@ def click_train(
)
)
if if_f0_3 == "":
opt.append(
"%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature256/mute.npy|%s/logs/mute/2a_f0/mute.wav.npy|%s/logs/mute/2b-f0nsf/mute.wav.npy|%s"
% (now_dir, sr2, now_dir, now_dir, now_dir, spk_id5)
)
for _ in range(2):
opt.append(
"%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature256/mute.npy|%s/logs/mute/2a_f0/mute.wav.npy|%s/logs/mute/2b-f0nsf/mute.wav.npy|%s"
% (now_dir, sr2, now_dir, now_dir, now_dir, spk_id5)
)
else:
opt.append(
"%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature256/mute.npy|%s"
% (now_dir, sr2, now_dir, spk_id5)
)
for _ in range(2):
opt.append(
"%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature256/mute.npy|%s"
% (now_dir, sr2, now_dir, spk_id5)
)
shuffle(opt)
with open("%s/filelist.txt" % exp_dir, "w") as f:
f.write("\n".join(opt))
print("write filelist done")
@@ -618,29 +658,29 @@ def train_index(exp_dir1):
phone = np.load("%s/%s" % (feature_dir, name))
npys.append(phone)
big_npy = np.concatenate(npys, 0)
np.save("%s/total_fea.npy" % exp_dir, big_npy)
n_ivf = big_npy.shape[0] // 39
infos = []
infos.append("%s,%s" % (big_npy.shape, n_ivf))
# np.save("%s/total_fea.npy" % exp_dir, big_npy)
# n_ivf = big_npy.shape[0] // 39
n_ivf = min(int(16 * np.sqrt(big_npy.shape[0])),big_npy.shape[0]// 39)
infos=[]
infos.append("%s,%s"%(big_npy.shape,n_ivf))
yield "\n".join(infos)
index = faiss.index_factory(256, "IVF%s,Flat" % n_ivf)
index = faiss.index_factory(256, "IVF%s,Flat"%n_ivf)
# index = faiss.index_factory(256, "IVF%s,PQ128x4fs,RFlat"%n_ivf)
infos.append("training")
yield "\n".join(infos)
index_ivf = faiss.extract_index_ivf(index) #
index_ivf.nprobe = int(np.power(n_ivf, 0.3))
# index_ivf.nprobe = int(np.power(n_ivf,0.3))
index_ivf.nprobe = 1
index.train(big_npy)
faiss.write_index(
index,
"%s/trained_IVF%s_Flat_nprobe_%s.index" % (exp_dir, n_ivf, index_ivf.nprobe),
)
faiss.write_index(index, '%s/trained_IVF%s_Flat_nprobe_%s.index'%(exp_dir,n_ivf,index_ivf.nprobe))
# faiss.write_index(index, '%s/trained_IVF%s_Flat_FastScan.index'%(exp_dir,n_ivf))
infos.append("adding")
yield "\n".join(infos)
index.add(big_npy)
faiss.write_index(
index,
"%s/added_IVF%s_Flat_nprobe_%s.index" % (exp_dir, n_ivf, index_ivf.nprobe),
)
infos.append("成功构建索引, added_IVF%s_Flat_nprobe_%s.index" % (n_ivf, index_ivf.nprobe))
faiss.write_index(index, '%s/added_IVF%s_Flat_nprobe_%s.index'%(exp_dir,n_ivf,index_ivf.nprobe))
infos.append("成功构建索引added_IVF%s_Flat_nprobe_%s.index"%(n_ivf,index_ivf.nprobe))
# faiss.write_index(index, '%s/added_IVF%s_Flat_FastScan.index'%(exp_dir,n_ivf))
# infos.append("成功构建索引added_IVF%s_Flat_FastScan.index"%(n_ivf))
yield "\n".join(infos)
@@ -772,15 +812,18 @@ def train1key(
)
)
if if_f0_3 == "":
opt.append(
"%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature256/mute.npy|%s/logs/mute/2a_f0/mute.wav.npy|%s/logs/mute/2b-f0nsf/mute.wav.npy|%s"
% (now_dir, sr2, now_dir, now_dir, now_dir, spk_id5)
)
for _ in range(2):
opt.append(
"%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature256/mute.npy|%s/logs/mute/2a_f0/mute.wav.npy|%s/logs/mute/2b-f0nsf/mute.wav.npy|%s"
% (now_dir, sr2, now_dir, now_dir, now_dir, spk_id5)
)
else:
opt.append(
"%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature256/mute.npy|%s"
% (now_dir, sr2, now_dir, spk_id5)
)
for _ in range(2):
opt.append(
"%s/logs/mute/0_gt_wavs/mute%s.wav|%s/logs/mute/3_feature256/mute.npy|%s"
% (now_dir, sr2, now_dir, spk_id5)
)
shuffle(opt)
with open("%s/filelist.txt" % exp_dir, "w") as f:
f.write("\n".join(opt))
yield get_info_str("write filelist done")
@@ -831,13 +874,15 @@ def train1key(
phone = np.load("%s/%s" % (feature_dir, name))
npys.append(phone)
big_npy = np.concatenate(npys, 0)
np.save("%s/total_fea.npy" % exp_dir, big_npy)
n_ivf = big_npy.shape[0] // 39
# np.save("%s/total_fea.npy" % exp_dir, big_npy)
# n_ivf = big_npy.shape[0] // 39
n_ivf = min(int(16 * np.sqrt(big_npy.shape[0])),big_npy.shape[0]// 39)
yield get_info_str("%s,%s" % (big_npy.shape, n_ivf))
index = faiss.index_factory(256, "IVF%s,Flat" % n_ivf)
yield get_info_str("training index")
index_ivf = faiss.extract_index_ivf(index) #
index_ivf.nprobe = int(np.power(n_ivf, 0.3))
# index_ivf.nprobe = int(np.power(n_ivf,0.3))
index_ivf.nprobe = 1
index.train(big_npy)
faiss.write_index(
index,
@@ -874,6 +919,90 @@ def change_info_(ckpt_path):
return {"__type__": "update"}, {"__type__": "update"}
from infer_pack.models_onnx_moess import SynthesizerTrnMs256NSFsidM
from infer_pack.models_onnx import SynthesizerTrnMs256NSFsidO
def export_onnx(ModelPath, ExportedPath, MoeVS=True):
hidden_channels = 256 # hidden_channels为768Vec做准备
cpt = torch.load(ModelPath, map_location="cpu")
cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
print(*cpt["config"])
test_phone = torch.rand(1, 200, hidden_channels) # hidden unit
test_phone_lengths = torch.tensor([200]).long() # hidden unit 长度(貌似没啥用)
test_pitch = torch.randint(size=(1, 200), low=5, high=255) # 基频(单位赫兹)
test_pitchf = torch.rand(1, 200) # nsf基频
test_ds = torch.LongTensor([0]) # 说话人ID
test_rnd = torch.rand(1, 192, 200) # 噪声(加入随机因子)
device = "cpu" # 导出时设备(不影响使用模型)
if MoeVS:
net_g = SynthesizerTrnMs256NSFsidM(
*cpt["config"], is_half=False
) # fp32导出C++要支持fp16必须手动将内存重新排列所以暂时不用fp16
net_g.load_state_dict(cpt["weight"], strict=False)
input_names = ["phone", "phone_lengths", "pitch", "pitchf", "ds", "rnd"]
output_names = [
"audio",
]
torch.onnx.export(
net_g,
(
test_phone.to(device),
test_phone_lengths.to(device),
test_pitch.to(device),
test_pitchf.to(device),
test_ds.to(device),
test_rnd.to(device),
),
ExportedPath,
dynamic_axes={
"phone": [1],
"pitch": [1],
"pitchf": [1],
"rnd": [2],
},
do_constant_folding=False,
opset_version=16,
verbose=False,
input_names=input_names,
output_names=output_names,
)
else:
net_g = SynthesizerTrnMs256NSFsidO(
*cpt["config"], is_half=False
) # fp32导出C++要支持fp16必须手动将内存重新排列所以暂时不用fp16
net_g.load_state_dict(cpt["weight"], strict=False)
input_names = ["phone", "phone_lengths", "pitch", "pitchf", "ds"]
output_names = [
"audio",
]
torch.onnx.export(
net_g,
(
test_phone.to(device),
test_phone_lengths.to(device),
test_pitch.to(device),
test_pitchf.to(device),
test_ds.to(device),
),
ExportedPath,
dynamic_axes={
"phone": [1],
"pitch": [1],
"pitchf": [1],
},
do_constant_folding=False,
opset_version=16,
verbose=False,
input_names=input_names,
output_names=output_names,
)
return "Finished"
with gr.Blocks() as app:
gr.Markdown(
value=i18n(
@@ -927,16 +1056,16 @@ with gr.Blocks() as app:
value="E:\\codes\\py39\\vits_vc_gpu_train\\logs\\mi-test-1key\\added_IVF677_Flat_nprobe_7.index",
interactive=True,
)
file_big_npy1 = gr.Textbox(
label=i18n("特征文件路径"),
value="E:\\codes\py39\\vits_vc_gpu_train\\logs\\mi-test-1key\\total_fea.npy",
interactive=True,
)
# file_big_npy1 = gr.Textbox(
# label=i18n("特征文件路径"),
# value="E:\\codes\py39\\vits_vc_gpu_train\\logs\\mi-test-1key\\total_fea.npy",
# interactive=True,
# )
index_rate1 = gr.Slider(
minimum=0,
maximum=1,
label="检索特征占比",
value=1,
value=0.76,
interactive=True,
)
f0_file = gr.File(label=i18n("F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调"))
@@ -953,7 +1082,7 @@ with gr.Blocks() as app:
f0_file,
f0method0,
file_index1,
file_big_npy1,
# file_big_npy1,
index_rate1,
],
[vc_output1, vc_output2],
@@ -980,11 +1109,11 @@ with gr.Blocks() as app:
value="E:\\codes\\py39\\vits_vc_gpu_train\\logs\\mi-test-1key\\added_IVF677_Flat_nprobe_7.index",
interactive=True,
)
file_big_npy2 = gr.Textbox(
label=i18n("特征文件路径"),
value="E:\\codes\\py39\\vits_vc_gpu_train\\logs\\mi-test-1key\\total_fea.npy",
interactive=True,
)
# file_big_npy2 = gr.Textbox(
# label=i18n("特征文件路径"),
# value="E:\\codes\\py39\\vits_vc_gpu_train\\logs\\mi-test-1key\\total_fea.npy",
# interactive=True,
# )
index_rate2 = gr.Slider(
minimum=0,
maximum=1,
@@ -1012,7 +1141,7 @@ with gr.Blocks() as app:
vc_transform1,
f0method1,
file_index2,
file_big_npy2,
# file_big_npy2,
index_rate2,
],
[vc_output3],
@@ -1035,6 +1164,15 @@ with gr.Blocks() as app:
)
with gr.Column():
model_choose = gr.Dropdown(label=i18n("模型"), choices=uvr5_names)
agg = gr.Slider(
minimum=0,
maximum=20,
step=1,
label="人声提取激进程度",
value=10,
interactive=True,
visible=False#先不开放调整
)
opt_vocal_root = gr.Textbox(
label=i18n("指定输出人声文件夹"), value="opt"
)
@@ -1049,6 +1187,7 @@ with gr.Blocks() as app:
opt_vocal_root,
wav_inputs,
opt_ins_root,
agg
],
[vc_output4],
)
@@ -1150,10 +1289,10 @@ with gr.Blocks() as app:
)
batch_size12 = gr.Slider(
minimum=0,
maximum=32,
maximum=40,
step=1,
label="每张显卡的batch_size",
value=4,
value=default_batch_size,
interactive=True,
)
if_save_latest13 = gr.Radio(
@@ -1167,7 +1306,7 @@ with gr.Blocks() as app:
"是否缓存所有训练集至显存. 10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速"
),
choices=["", ""],
value="",
value="",
interactive=True,
)
with gr.Row():
@@ -1350,6 +1489,20 @@ with gr.Blocks() as app:
info7,
)
with gr.TabItem(i18n("Onnx导出")):
with gr.Row():
ckpt_dir = gr.Textbox(label=i18n("RVC模型路径"), value="", interactive=True)
with gr.Row():
onnx_dir = gr.Textbox(
label=i18n("Onnx输出路径"), value="", interactive=True
)
with gr.Row():
moevs = gr.Checkbox(label=i18n("MoeVS模型"), value=True)
infoOnnx = gr.Label(label="Null")
with gr.Row():
butOnnx = gr.Button(i18n("导出Onnx模型"), variant="primary")
butOnnx.click(export_onnx, [ckpt_dir, onnx_dir, moevs], infoOnnx)
# with gr.TabItem(i18n("招募音高曲线前端编辑器")):
# gr.Markdown(value=i18n("加开发群联系我xxxxx"))
# with gr.TabItem(i18n("点击查看交流、问题反馈群号")):


@@ -527,7 +527,7 @@ sr2sr = {
}
class SynthesizerTrnMs256NSFsid(nn.Module):
class SynthesizerTrnMs256NSFsidO(nn.Module):
def __init__(
self,
spec_channels,
@@ -612,104 +612,15 @@ class SynthesizerTrnMs256NSFsid(nn.Module):
self.flow.remove_weight_norm()
self.enc_q.remove_weight_norm()
def forward(self, phone, phone_lengths, pitch, nsff0, sid, rnd, max_len=None):
def forward(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None):
g = self.emb_g(sid).unsqueeze(-1)
m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask
z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
z = self.flow(z_p, x_mask, g=g, reverse=True)
o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
return o
class SynthesizerTrnMs256NSFsid_sim(nn.Module):
"""
Synthesizer for Training
"""
def __init__(
self,
spec_channels,
segment_size,
inter_channels,
hidden_channels,
filter_channels,
n_heads,
n_layers,
kernel_size,
p_dropout,
resblock,
resblock_kernel_sizes,
resblock_dilation_sizes,
upsample_rates,
upsample_initial_channel,
upsample_kernel_sizes,
spk_embed_dim,
# hop_length,
gin_channels=0,
use_sdp=True,
**kwargs
):
super().__init__()
self.spec_channels = spec_channels
self.inter_channels = inter_channels
self.hidden_channels = hidden_channels
self.filter_channels = filter_channels
self.n_heads = n_heads
self.n_layers = n_layers
self.kernel_size = kernel_size
self.p_dropout = p_dropout
self.resblock = resblock
self.resblock_kernel_sizes = resblock_kernel_sizes
self.resblock_dilation_sizes = resblock_dilation_sizes
self.upsample_rates = upsample_rates
self.upsample_initial_channel = upsample_initial_channel
self.upsample_kernel_sizes = upsample_kernel_sizes
self.segment_size = segment_size
self.gin_channels = gin_channels
# self.hop_length = hop_length#
self.spk_embed_dim = spk_embed_dim
self.enc_p = TextEncoder256Sim(
inter_channels,
hidden_channels,
filter_channels,
n_heads,
n_layers,
kernel_size,
p_dropout,
)
self.dec = GeneratorNSF(
inter_channels,
resblock,
resblock_kernel_sizes,
resblock_dilation_sizes,
upsample_rates,
upsample_initial_channel,
upsample_kernel_sizes,
gin_channels=gin_channels,
is_half=kwargs["is_half"],
)
self.flow = ResidualCouplingBlock(
inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
)
self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
def remove_weight_norm(self):
self.dec.remove_weight_norm()
self.flow.remove_weight_norm()
self.enc_q.remove_weight_norm()
def forward(
self, phone, phone_lengths, pitch, pitchf, ds, max_len=None
):  # y (the spec input) is no longer needed here
g = self.emb_g(ds.unsqueeze(0)).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 broadcasts over t
x, x_mask = self.enc_p(phone, pitch, phone_lengths)
x = self.flow(x, x_mask, g=g, reverse=True)
o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g)
return o
class MultiPeriodDiscriminator(torch.nn.Module):
def __init__(self, use_spectral_norm=False):
super(MultiPeriodDiscriminator, self).__init__()

View File

@@ -0,0 +1,849 @@
import math, pdb, os
from time import time as ttime
import torch
from torch import nn
from torch.nn import functional as F
from infer_pack import modules
from infer_pack import attentions
from infer_pack import commons
from infer_pack.commons import init_weights, get_padding
from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
from infer_pack.commons import init_weights
import numpy as np
from infer_pack import commons
class TextEncoder256(nn.Module):
def __init__(
self,
out_channels,
hidden_channels,
filter_channels,
n_heads,
n_layers,
kernel_size,
p_dropout,
f0=True,
):
super().__init__()
self.out_channels = out_channels
self.hidden_channels = hidden_channels
self.filter_channels = filter_channels
self.n_heads = n_heads
self.n_layers = n_layers
self.kernel_size = kernel_size
self.p_dropout = p_dropout
self.emb_phone = nn.Linear(256, hidden_channels)
self.lrelu = nn.LeakyReLU(0.1, inplace=True)
if f0 == True:
self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
self.encoder = attentions.Encoder(
hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
)
self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
def forward(self, phone, pitch, lengths):
if pitch == None:
x = self.emb_phone(phone)
else:
x = self.emb_phone(phone) + self.emb_pitch(pitch)
x = x * math.sqrt(self.hidden_channels) # [b, t, h]
x = self.lrelu(x)
x = torch.transpose(x, 1, -1) # [b, h, t]
x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
x.dtype
)
x = self.encoder(x * x_mask, x_mask)
stats = self.proj(x) * x_mask
m, logs = torch.split(stats, self.out_channels, dim=1)
return m, logs, x_mask
class TextEncoder256Sim(nn.Module):
def __init__(
self,
out_channels,
hidden_channels,
filter_channels,
n_heads,
n_layers,
kernel_size,
p_dropout,
f0=True,
):
super().__init__()
self.out_channels = out_channels
self.hidden_channels = hidden_channels
self.filter_channels = filter_channels
self.n_heads = n_heads
self.n_layers = n_layers
self.kernel_size = kernel_size
self.p_dropout = p_dropout
self.emb_phone = nn.Linear(256, hidden_channels)
self.lrelu = nn.LeakyReLU(0.1, inplace=True)
if f0 == True:
self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
self.encoder = attentions.Encoder(
hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
)
self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
def forward(self, phone, pitch, lengths):
if pitch == None:
x = self.emb_phone(phone)
else:
x = self.emb_phone(phone) + self.emb_pitch(pitch)
x = x * math.sqrt(self.hidden_channels) # [b, t, h]
x = self.lrelu(x)
x = torch.transpose(x, 1, -1) # [b, h, t]
x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
x.dtype
)
x = self.encoder(x * x_mask, x_mask)
x = self.proj(x) * x_mask
return x, x_mask
class ResidualCouplingBlock(nn.Module):
def __init__(
self,
channels,
hidden_channels,
kernel_size,
dilation_rate,
n_layers,
n_flows=4,
gin_channels=0,
):
super().__init__()
self.channels = channels
self.hidden_channels = hidden_channels
self.kernel_size = kernel_size
self.dilation_rate = dilation_rate
self.n_layers = n_layers
self.n_flows = n_flows
self.gin_channels = gin_channels
self.flows = nn.ModuleList()
for i in range(n_flows):
self.flows.append(
modules.ResidualCouplingLayer(
channels,
hidden_channels,
kernel_size,
dilation_rate,
n_layers,
gin_channels=gin_channels,
mean_only=True,
)
)
self.flows.append(modules.Flip())
def forward(self, x, x_mask, g=None, reverse=False):
if not reverse:
for flow in self.flows:
x, _ = flow(x, x_mask, g=g, reverse=reverse)
else:
for flow in reversed(self.flows):
x = flow(x, x_mask, g=g, reverse=reverse)
return x
def remove_weight_norm(self):
for i in range(self.n_flows):
self.flows[i * 2].remove_weight_norm()
class PosteriorEncoder(nn.Module):
def __init__(
self,
in_channels,
out_channels,
hidden_channels,
kernel_size,
dilation_rate,
n_layers,
gin_channels=0,
):
super().__init__()
self.in_channels = in_channels
self.out_channels = out_channels
self.hidden_channels = hidden_channels
self.kernel_size = kernel_size
self.dilation_rate = dilation_rate
self.n_layers = n_layers
self.gin_channels = gin_channels
self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
self.enc = modules.WN(
hidden_channels,
kernel_size,
dilation_rate,
n_layers,
gin_channels=gin_channels,
)
self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
def forward(self, x, x_lengths, g=None):
x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
x.dtype
)
x = self.pre(x) * x_mask
x = self.enc(x, x_mask, g=g)
stats = self.proj(x) * x_mask
m, logs = torch.split(stats, self.out_channels, dim=1)
z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
return z, m, logs, x_mask
def remove_weight_norm(self):
self.enc.remove_weight_norm()
class Generator(torch.nn.Module):
def __init__(
self,
initial_channel,
resblock,
resblock_kernel_sizes,
resblock_dilation_sizes,
upsample_rates,
upsample_initial_channel,
upsample_kernel_sizes,
gin_channels=0,
):
super(Generator, self).__init__()
self.num_kernels = len(resblock_kernel_sizes)
self.num_upsamples = len(upsample_rates)
self.conv_pre = Conv1d(
initial_channel, upsample_initial_channel, 7, 1, padding=3
)
resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
self.ups = nn.ModuleList()
for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
self.ups.append(
weight_norm(
ConvTranspose1d(
upsample_initial_channel // (2**i),
upsample_initial_channel // (2 ** (i + 1)),
k,
u,
padding=(k - u) // 2,
)
)
)
self.resblocks = nn.ModuleList()
for i in range(len(self.ups)):
ch = upsample_initial_channel // (2 ** (i + 1))
for j, (k, d) in enumerate(
zip(resblock_kernel_sizes, resblock_dilation_sizes)
):
self.resblocks.append(resblock(ch, k, d))
self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
self.ups.apply(init_weights)
if gin_channels != 0:
self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
def forward(self, x, g=None):
x = self.conv_pre(x)
if g is not None:
x = x + self.cond(g)
for i in range(self.num_upsamples):
x = F.leaky_relu(x, modules.LRELU_SLOPE)
x = self.ups[i](x)
xs = None
for j in range(self.num_kernels):
if xs is None:
xs = self.resblocks[i * self.num_kernels + j](x)
else:
xs += self.resblocks[i * self.num_kernels + j](x)
x = xs / self.num_kernels
x = F.leaky_relu(x)
x = self.conv_post(x)
x = torch.tanh(x)
return x
def remove_weight_norm(self):
for l in self.ups:
remove_weight_norm(l)
for l in self.resblocks:
l.remove_weight_norm()
class SineGen(torch.nn.Module):
"""Definition of sine generator
SineGen(samp_rate, harmonic_num = 0,
sine_amp = 0.1, noise_std = 0.003,
voiced_threshold = 0,
flag_for_pulse=False)
samp_rate: sampling rate in Hz
harmonic_num: number of harmonic overtones (default 0)
sine_amp: amplitude of sine waveform (default 0.1)
noise_std: std of Gaussian noise (default 0.003)
voiced_threshold: F0 threshold for U/V classification (default 0)
flag_for_pulse: this SineGen is used inside PulseGen (default False)
Note: when flag_for_pulse is True, the first time step of a voiced
segment is always sin(np.pi) or cos(0)
"""
def __init__(
self,
samp_rate,
harmonic_num=0,
sine_amp=0.1,
noise_std=0.003,
voiced_threshold=0,
flag_for_pulse=False,
):
super(SineGen, self).__init__()
self.sine_amp = sine_amp
self.noise_std = noise_std
self.harmonic_num = harmonic_num
self.dim = self.harmonic_num + 1
self.sampling_rate = samp_rate
self.voiced_threshold = voiced_threshold
def _f02uv(self, f0):
# generate uv signal
uv = torch.ones_like(f0)
uv = uv * (f0 > self.voiced_threshold)
return uv
def forward(self, f0, upp):
"""sine_tensor, uv = forward(f0)
input F0: tensor(batchsize=1, length, dim=1)
f0 for unvoiced steps should be 0
output sine_tensor: tensor(batchsize=1, length, dim)
output uv: tensor(batchsize=1, length, 1)
"""
with torch.no_grad():
f0 = f0[:, None].transpose(1, 2)
f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
# fundamental component
f0_buf[:, :, 0] = f0[:, :, 0]
for idx in np.arange(self.harmonic_num):
f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
idx + 2
) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
rad_values = (f0_buf / self.sampling_rate) % 1  # the %1 means the per-harmonic products cannot be optimised in post-processing
rand_ini = torch.rand(
f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
)
rand_ini[:, 0] = 0
rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
tmp_over_one = torch.cumsum(rad_values, 1)  # % 1  # taking %1 here would prevent optimising the later cumsum
tmp_over_one *= upp
tmp_over_one = F.interpolate(
tmp_over_one.transpose(2, 1),
scale_factor=upp,
mode="linear",
align_corners=True,
).transpose(2, 1)
rad_values = F.interpolate(
rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
).transpose(
2, 1
) #######
tmp_over_one %= 1
tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
cumsum_shift = torch.zeros_like(rad_values)
cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
sine_waves = torch.sin(
torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
)
sine_waves = sine_waves * self.sine_amp
uv = self._f02uv(f0)
uv = F.interpolate(
uv.transpose(2, 1), scale_factor=upp, mode="nearest"
).transpose(2, 1)
noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
noise = noise_amp * torch.randn_like(sine_waves)
sine_waves = sine_waves * uv + noise
return sine_waves, uv, noise
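A minimal usage sketch for SineGen, showing how a frame-level F0 track is expanded by upp into a sample-level harmonic sine source plus a voiced/unvoiced mask (sampling rate, frame count and upsampling factor below are illustrative, and the import path is assumed):

import torch
from infer_pack.models_onnx import SineGen  # module path assumed

sine_gen = SineGen(samp_rate=40000, harmonic_num=0)
f0 = torch.full((1, 100), 220.0)   # (batch, frames) F0 in Hz; 0 marks unvoiced frames
upp = 400                          # samples per frame, i.e. prod(upsample_rates) in GeneratorNSF
sine_waves, uv, noise = sine_gen(f0, upp)
print(sine_waves.shape)            # torch.Size([1, 40000, 1]) with harmonic_num=0
print(uv.shape)                    # torch.Size([1, 40000, 1])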
class SourceModuleHnNSF(torch.nn.Module):
"""SourceModule for hn-nsf
SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
add_noise_std=0.003, voiced_threshod=0)
sampling_rate: sampling_rate in Hz
harmonic_num: number of harmonic above F0 (default: 0)
sine_amp: amplitude of sine source signal (default: 0.1)
add_noise_std: std of additive Gaussian noise (default: 0.003)
note that amplitude of noise in unvoiced is decided
by sine_amp
voiced_threshold: threshold to set U/V given F0 (default: 0)
Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
F0_sampled (batchsize, length, 1)
Sine_source (batchsize, length, 1)
noise_source (batchsize, length, 1)
uv (batchsize, length, 1)
"""
def __init__(
self,
sampling_rate,
harmonic_num=0,
sine_amp=0.1,
add_noise_std=0.003,
voiced_threshod=0,
is_half=True,
):
super(SourceModuleHnNSF, self).__init__()
self.sine_amp = sine_amp
self.noise_std = add_noise_std
self.is_half = is_half
# to produce sine waveforms
self.l_sin_gen = SineGen(
sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
)
# to merge source harmonics into a single excitation
self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
self.l_tanh = torch.nn.Tanh()
def forward(self, x, upp=None):
sine_wavs, uv, _ = self.l_sin_gen(x, upp)
if self.is_half:
sine_wavs = sine_wavs.half()
sine_merge = self.l_tanh(self.l_linear(sine_wavs))
return sine_merge, None, None # noise, uv
class GeneratorNSF(torch.nn.Module):
def __init__(
self,
initial_channel,
resblock,
resblock_kernel_sizes,
resblock_dilation_sizes,
upsample_rates,
upsample_initial_channel,
upsample_kernel_sizes,
gin_channels,
sr,
is_half=False,
):
super(GeneratorNSF, self).__init__()
self.num_kernels = len(resblock_kernel_sizes)
self.num_upsamples = len(upsample_rates)
self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
self.m_source = SourceModuleHnNSF(
sampling_rate=sr, harmonic_num=0, is_half=is_half
)
self.noise_convs = nn.ModuleList()
self.conv_pre = Conv1d(
initial_channel, upsample_initial_channel, 7, 1, padding=3
)
resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
self.ups = nn.ModuleList()
for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
c_cur = upsample_initial_channel // (2 ** (i + 1))
self.ups.append(
weight_norm(
ConvTranspose1d(
upsample_initial_channel // (2**i),
upsample_initial_channel // (2 ** (i + 1)),
k,
u,
padding=(k - u) // 2,
)
)
)
if i + 1 < len(upsample_rates):
stride_f0 = np.prod(upsample_rates[i + 1 :])
self.noise_convs.append(
Conv1d(
1,
c_cur,
kernel_size=stride_f0 * 2,
stride=stride_f0,
padding=stride_f0 // 2,
)
)
else:
self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
self.resblocks = nn.ModuleList()
for i in range(len(self.ups)):
ch = upsample_initial_channel // (2 ** (i + 1))
for j, (k, d) in enumerate(
zip(resblock_kernel_sizes, resblock_dilation_sizes)
):
self.resblocks.append(resblock(ch, k, d))
self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
self.ups.apply(init_weights)
if gin_channels != 0:
self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
self.upp = np.prod(upsample_rates)
def forward(self, x, f0, g=None):
har_source, noi_source, uv = self.m_source(f0, self.upp)
har_source = har_source.transpose(1, 2)
x = self.conv_pre(x)
if g is not None:
x = x + self.cond(g)
for i in range(self.num_upsamples):
x = F.leaky_relu(x, modules.LRELU_SLOPE)
x = self.ups[i](x)
x_source = self.noise_convs[i](har_source)
x = x + x_source
xs = None
for j in range(self.num_kernels):
if xs is None:
xs = self.resblocks[i * self.num_kernels + j](x)
else:
xs += self.resblocks[i * self.num_kernels + j](x)
x = xs / self.num_kernels
x = F.leaky_relu(x)
x = self.conv_post(x)
x = torch.tanh(x)
return x
def remove_weight_norm(self):
for l in self.ups:
remove_weight_norm(l)
for l in self.resblocks:
l.remove_weight_norm()
sr2sr = {
"32k": 32000,
"40k": 40000,
"48k": 48000,
}
class SynthesizerTrnMs256NSFsidM(nn.Module):
def __init__(
self,
spec_channels,
segment_size,
inter_channels,
hidden_channels,
filter_channels,
n_heads,
n_layers,
kernel_size,
p_dropout,
resblock,
resblock_kernel_sizes,
resblock_dilation_sizes,
upsample_rates,
upsample_initial_channel,
upsample_kernel_sizes,
spk_embed_dim,
gin_channels,
sr,
**kwargs
):
super().__init__()
if type(sr) == type("strr"):
sr = sr2sr[sr]
self.spec_channels = spec_channels
self.inter_channels = inter_channels
self.hidden_channels = hidden_channels
self.filter_channels = filter_channels
self.n_heads = n_heads
self.n_layers = n_layers
self.kernel_size = kernel_size
self.p_dropout = p_dropout
self.resblock = resblock
self.resblock_kernel_sizes = resblock_kernel_sizes
self.resblock_dilation_sizes = resblock_dilation_sizes
self.upsample_rates = upsample_rates
self.upsample_initial_channel = upsample_initial_channel
self.upsample_kernel_sizes = upsample_kernel_sizes
self.segment_size = segment_size
self.gin_channels = gin_channels
# self.hop_length = hop_length#
self.spk_embed_dim = spk_embed_dim
self.enc_p = TextEncoder256(
inter_channels,
hidden_channels,
filter_channels,
n_heads,
n_layers,
kernel_size,
p_dropout,
)
self.dec = GeneratorNSF(
inter_channels,
resblock,
resblock_kernel_sizes,
resblock_dilation_sizes,
upsample_rates,
upsample_initial_channel,
upsample_kernel_sizes,
gin_channels=gin_channels,
sr=sr,
is_half=kwargs["is_half"],
)
self.enc_q = PosteriorEncoder(
spec_channels,
inter_channels,
hidden_channels,
5,
1,
16,
gin_channels=gin_channels,
)
self.flow = ResidualCouplingBlock(
inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
)
self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
def remove_weight_norm(self):
self.dec.remove_weight_norm()
self.flow.remove_weight_norm()
self.enc_q.remove_weight_norm()
def forward(self, phone, phone_lengths, pitch, nsff0, sid, rnd, max_len=None):
g = self.emb_g(sid).unsqueeze(-1)
m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask
z = self.flow(z_p, x_mask, g=g, reverse=True)
o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
return o
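Unlike the training classes, this M variant takes the noise tensor rnd as an explicit forward argument instead of sampling it inside the graph, which is what makes the module traceable for ONNX. A hedged export sketch with illustrative 40k-style hyperparameters and dummy inputs (the values and import path are assumptions, the repository's own export_onnx helper may differ, and whether every op converts cleanly depends on the opset):

import torch
from infer_pack.models_onnx import SynthesizerTrnMs256NSFsidM  # module path assumed

# Illustrative config values (assumed, not read from the repo's 40k config).
model = SynthesizerTrnMs256NSFsidM(
    1025, 32, 192, 192, 768, 2, 6, 3, 0, "1",
    [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
    [10, 10, 2, 2], 512, [16, 16, 4, 4],
    109, 256, "40k", is_half=False,   # "40k" is resolved through sr2sr above
)
model.eval()

frames = 100
phone = torch.rand(1, frames, 256)              # HuBERT-style features
phone_lengths = torch.LongTensor([frames])
pitch = torch.randint(5, 255, (1, frames))      # coarse pitch ids (< 256)
nsff0 = torch.rand(1, frames) * 400 + 50        # F0 in Hz
sid = torch.LongTensor([0])
rnd = torch.rand(1, 192, frames)                # explicit noise: inter_channels x frames

torch.onnx.export(
    model,
    (phone, phone_lengths, pitch, nsff0, sid, rnd),
    "model.onnx",
    input_names=["phone", "phone_lengths", "pitch", "nsff0", "sid", "rnd"],
    output_names=["audio"],
    dynamic_axes={"phone": [1], "pitch": [1], "nsff0": [1], "rnd": [2]},
    opset_version=16,
)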
class SynthesizerTrnMs256NSFsid_sim(nn.Module):
"""
Synthesizer for Training
"""
def __init__(
self,
spec_channels,
segment_size,
inter_channels,
hidden_channels,
filter_channels,
n_heads,
n_layers,
kernel_size,
p_dropout,
resblock,
resblock_kernel_sizes,
resblock_dilation_sizes,
upsample_rates,
upsample_initial_channel,
upsample_kernel_sizes,
spk_embed_dim,
# hop_length,
gin_channels=0,
use_sdp=True,
**kwargs
):
super().__init__()
self.spec_channels = spec_channels
self.inter_channels = inter_channels
self.hidden_channels = hidden_channels
self.filter_channels = filter_channels
self.n_heads = n_heads
self.n_layers = n_layers
self.kernel_size = kernel_size
self.p_dropout = p_dropout
self.resblock = resblock
self.resblock_kernel_sizes = resblock_kernel_sizes
self.resblock_dilation_sizes = resblock_dilation_sizes
self.upsample_rates = upsample_rates
self.upsample_initial_channel = upsample_initial_channel
self.upsample_kernel_sizes = upsample_kernel_sizes
self.segment_size = segment_size
self.gin_channels = gin_channels
# self.hop_length = hop_length#
self.spk_embed_dim = spk_embed_dim
self.enc_p = TextEncoder256Sim(
inter_channels,
hidden_channels,
filter_channels,
n_heads,
n_layers,
kernel_size,
p_dropout,
)
self.dec = GeneratorNSF(
inter_channels,
resblock,
resblock_kernel_sizes,
resblock_dilation_sizes,
upsample_rates,
upsample_initial_channel,
upsample_kernel_sizes,
gin_channels=gin_channels,
is_half=kwargs["is_half"],
)
self.flow = ResidualCouplingBlock(
inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
)
self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
def remove_weight_norm(self):
self.dec.remove_weight_norm()
self.flow.remove_weight_norm()
self.enc_q.remove_weight_norm()
def forward(
self, phone, phone_lengths, pitch, pitchf, ds, max_len=None
):  # y (the spec input) is no longer needed here
g = self.emb_g(ds.unsqueeze(0)).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 broadcasts over t
x, x_mask = self.enc_p(phone, pitch, phone_lengths)
x = self.flow(x, x_mask, g=g, reverse=True)
o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g)
return o
class MultiPeriodDiscriminator(torch.nn.Module):
def __init__(self, use_spectral_norm=False):
super(MultiPeriodDiscriminator, self).__init__()
periods = [2, 3, 5, 7, 11, 17]
# periods = [3, 5, 7, 11, 17, 23, 37]
discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
discs = discs + [
DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
]
self.discriminators = nn.ModuleList(discs)
def forward(self, y, y_hat):
y_d_rs = [] #
y_d_gs = []
fmap_rs = []
fmap_gs = []
for i, d in enumerate(self.discriminators):
y_d_r, fmap_r = d(y)
y_d_g, fmap_g = d(y_hat)
# for j in range(len(fmap_r)):
# print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
y_d_rs.append(y_d_r)
y_d_gs.append(y_d_g)
fmap_rs.append(fmap_r)
fmap_gs.append(fmap_g)
return y_d_rs, y_d_gs, fmap_rs, fmap_gs
class DiscriminatorS(torch.nn.Module):
def __init__(self, use_spectral_norm=False):
super(DiscriminatorS, self).__init__()
norm_f = weight_norm if use_spectral_norm == False else spectral_norm
self.convs = nn.ModuleList(
[
norm_f(Conv1d(1, 16, 15, 1, padding=7)),
norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
]
)
self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
def forward(self, x):
fmap = []
for l in self.convs:
x = l(x)
x = F.leaky_relu(x, modules.LRELU_SLOPE)
fmap.append(x)
x = self.conv_post(x)
fmap.append(x)
x = torch.flatten(x, 1, -1)
return x, fmap
class DiscriminatorP(torch.nn.Module):
def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
super(DiscriminatorP, self).__init__()
self.period = period
self.use_spectral_norm = use_spectral_norm
norm_f = weight_norm if use_spectral_norm == False else spectral_norm
self.convs = nn.ModuleList(
[
norm_f(
Conv2d(
1,
32,
(kernel_size, 1),
(stride, 1),
padding=(get_padding(kernel_size, 1), 0),
)
),
norm_f(
Conv2d(
32,
128,
(kernel_size, 1),
(stride, 1),
padding=(get_padding(kernel_size, 1), 0),
)
),
norm_f(
Conv2d(
128,
512,
(kernel_size, 1),
(stride, 1),
padding=(get_padding(kernel_size, 1), 0),
)
),
norm_f(
Conv2d(
512,
1024,
(kernel_size, 1),
(stride, 1),
padding=(get_padding(kernel_size, 1), 0),
)
),
norm_f(
Conv2d(
1024,
1024,
(kernel_size, 1),
1,
padding=(get_padding(kernel_size, 1), 0),
)
),
]
)
self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
def forward(self, x):
fmap = []
# 1d to 2d
b, c, t = x.shape
if t % self.period != 0: # pad first
n_pad = self.period - (t % self.period)
x = F.pad(x, (0, n_pad), "reflect")
t = t + n_pad
x = x.view(b, c, t // self.period, self.period)
for l in self.convs:
x = l(x)
x = F.leaky_relu(x, modules.LRELU_SLOPE)
fmap.append(x)
x = self.conv_post(x)
fmap.append(x)
x = torch.flatten(x, 1, -1)
return x, fmap

View File

@@ -13,7 +13,7 @@ from scipy.io import wavfile
class _audio_pre_:
def __init__(self, model_path, device, is_half):
def __init__(self, agg, model_path, device, is_half):
self.model_path = model_path
self.device = device
self.data = {
@@ -22,7 +22,7 @@ class _audio_pre_:
"tta": False,
# Constants
"window_size": 512,
"agg": 10,
"agg": agg,
"high_end_process": "mirroring",
}
nn_arch_sizes = [
@@ -139,7 +139,7 @@ class _audio_pre_:
wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp)
print("%s instruments done" % name)
wavfile.write(
os.path.join(ins_root, "instrument_{}.wav".format(name)),
os.path.join(ins_root, "instrument_{}_{}.wav".format(name,self.data["agg"])),
self.mp.param["sr"],
(np.array(wav_instrument) * 32768).astype("int16"),
) #
@@ -155,7 +155,7 @@ class _audio_pre_:
wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp)
print("%s vocals done" % name)
wavfile.write(
os.path.join(vocal_root, "vocal_{}.wav".format(name)),
os.path.join(vocal_root, "vocal_{}_{}.wav".format(name,self.data["agg"])),
self.mp.param["sr"],
(np.array(wav_vocals) * 32768).astype("int16"),
)
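With agg threaded through, the separator is constructed with the aggressiveness first and both output filenames now embed that value, so runs with different settings no longer overwrite each other. A small construction sketch (the import path, weight filename and device are placeholders, not taken from the repository):

from infer_uvr5 import _audio_pre_  # module name assumed

pre_fun = _audio_pre_(
    agg=10,                              # 0-20; higher = more aggressive vocal extraction
    model_path="uvr5_weights/HP2.pth",   # placeholder weight file
    device="cuda",
    is_half=True,
)
# Outputs are now written as instrument_<name>_10.wav and vocal_<name>_10.wav,
# i.e. the agg value is appended to each filename.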

View File

@@ -12,10 +12,10 @@ def load_audio(file, sr):
)  # guard against users pasting paths with stray spaces, quotes or newlines at either end
out, _ = (
ffmpeg.input(file, threads=0)
.output("-", format="s16le", acodec="pcm_s16le", ac=1, ar=sr)
.output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr)
.run(cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True)
)
except Exception as e:
raise RuntimeError(f"Failed to load audio: {e}")
return np.frombuffer(out, np.int16).flatten().astype(np.float32) / 32768.0
return np.frombuffer(out, np.float32).flatten()
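The loader now asks ffmpeg for 32-bit float PCM directly, so the int16 buffer and the /32768 rescale disappear. The patched function reassembled from this hunk, as a reference sketch (the path-stripping line is reconstructed from context):

import ffmpeg
import numpy as np

def load_audio(file, sr):
    try:
        # Strip stray spaces, quotes and newlines that users often paste around paths.
        file = file.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
        out, _ = (
            ffmpeg.input(file, threads=0)
            .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr)
            .run(cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True)
        )
    except Exception as e:
        raise RuntimeError(f"Failed to load audio: {e}")
    return np.frombuffer(out, np.float32).flatten()  # already in [-1, 1]; no int16 rescale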

View File

@@ -18,8 +18,7 @@ ffmpeg-python = "^0.2.0"
tensorboardX = "^2.6"
functorch = "^2.0.0"
fairseq = "^0.12.2"
faiss-gpu = "^1.7.2"
faiss-cpu = "^1.7.3"
faiss-cpu = "^1.7.2"
Jinja2 = "^3.1.2"
json5 = "^0.9.11"
librosa = "0.9.2"

View File

@@ -4,7 +4,8 @@ scipy==1.9.3
librosa==0.9.2
llvmlite==0.39.0
fairseq==0.12.2
faiss-cpu==1.7.2
faiss-cpu==1.7.0; sys_platform == "darwin"
faiss-cpu==1.7.2; sys_platform != "darwin"
gradio
Cython
future>=0.18.3

View File

@@ -98,7 +98,10 @@ class TextAudioLoaderMultiNSFsid(torch.utils.data.Dataset):
sampling_rate, self.sampling_rate
)
)
audio_norm = audio / self.max_wav_value
audio_norm = audio
# audio_norm = audio / self.max_wav_value
# audio_norm = audio / np.abs(audio).max()
audio_norm = audio_norm.unsqueeze(0)
spec_filename = filename.replace(".wav", ".spec.pt")
if os.path.exists(spec_filename):
@@ -287,7 +290,10 @@ class TextAudioLoader(torch.utils.data.Dataset):
sampling_rate, self.sampling_rate
)
)
audio_norm = audio / self.max_wav_value
audio_norm = audio
# audio_norm = audio / self.max_wav_value
# audio_norm = audio / np.abs(audio).max()
audio_norm = audio_norm.unsqueeze(0)
spec_filename = filename.replace(".wav", ".spec.pt")
if os.path.exists(spec_filename):
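Dropping the /max_wav_value division relies on the preprocessing change further down, which now writes float32 wavs already scaled to roughly [-1, 1]. A quick sanity check under that assumption (the experiment path is a placeholder):

import numpy as np
from scipy.io import wavfile

sr, audio = wavfile.read("logs/my-exp/0_gt_wavs/0_0.wav")  # placeholder path to a preprocessed wav
assert audio.dtype == np.float32     # IEEE-float wav written by the patched preprocessor
print(np.abs(audio).max())           # already ~<= 1.0, so no 32768 scaling is needed here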

View File

@@ -1,18 +1,8 @@
import math
import os
import random
import torch
from torch import nn
import torch.nn.functional as F
import torch.utils.data
import numpy as np
import librosa
import librosa.util as librosa_util
from librosa.util import normalize, pad_center, tiny
from scipy.signal import get_window
from scipy.io.wavfile import read
from librosa.filters import mel as librosa_mel_fn
MAX_WAV_VALUE = 32768.0
@@ -35,25 +25,38 @@ def dynamic_range_decompression_torch(x, C=1):
def spectral_normalize_torch(magnitudes):
output = dynamic_range_compression_torch(magnitudes)
return output
return dynamic_range_compression_torch(magnitudes)
def spectral_de_normalize_torch(magnitudes):
output = dynamic_range_decompression_torch(magnitudes)
return output
return dynamic_range_decompression_torch(magnitudes)
# Reusable banks
mel_basis = {}
hann_window = {}
def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
"""Convert waveform into Linear-frequency Linear-amplitude spectrogram.
Args:
y :: (B, T) - Audio waveforms
n_fft
sampling_rate
hop_size
win_size
center
Returns:
:: (B, Freq, Frame) - Linear-frequency Linear-amplitude spectrogram
"""
# Validation
if torch.min(y) < -1.0:
print("min value is ", torch.min(y))
if torch.max(y) > 1.0:
print("max value is ", torch.max(y))
# Window - Cache if needed
global hann_window
dtype_device = str(y.dtype) + "_" + str(y.device)
wnsize_dtype_device = str(win_size) + "_" + dtype_device
@@ -62,6 +65,7 @@ def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False)
dtype=y.dtype, device=y.device
)
# Padding
y = torch.nn.functional.pad(
y.unsqueeze(1),
(int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)),
@@ -69,6 +73,7 @@ def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False)
)
y = y.squeeze(1)
# Complex Spectrogram :: (B, T) -> (B, Freq, Frame, RealComplex=2)
spec = torch.stft(
y,
n_fft,
@@ -82,79 +87,44 @@ def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False)
return_complex=False,
)
# Linear-frequency Linear-amplitude spectrogram :: (B, Freq, Frame, RealComplex=2) -> (B, Freq, Frame)
spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
return spec
def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
# MelBasis - Cache if needed
global mel_basis
dtype_device = str(spec.dtype) + "_" + str(spec.device)
fmax_dtype_device = str(fmax) + "_" + dtype_device
if fmax_dtype_device not in mel_basis:
mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
mel = librosa_mel_fn(
sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax
)
mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(
dtype=spec.dtype, device=spec.device
)
spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
spec = spectral_normalize_torch(spec)
return spec
# Mel-frequency Log-amplitude spectrogram :: (B, Freq=num_mels, Frame)
melspec = torch.matmul(mel_basis[fmax_dtype_device], spec)
melspec = spectral_normalize_torch(melspec)
return melspec
def mel_spectrogram_torch(
y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False
):
if torch.min(y) < -1.0:
print("min value is ", torch.min(y))
if torch.max(y) > 1.0:
print("max value is ", torch.max(y))
"""Convert waveform into Mel-frequency Log-amplitude spectrogram.
global mel_basis, hann_window
dtype_device = str(y.dtype) + "_" + str(y.device)
fmax_dtype_device = str(fmax) + "_" + dtype_device
wnsize_dtype_device = str(win_size) + "_" + dtype_device
if fmax_dtype_device not in mel_basis:
mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax)
mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(
dtype=y.dtype, device=y.device
)
if wnsize_dtype_device not in hann_window:
hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(
dtype=y.dtype, device=y.device
)
Args:
y :: (B, T) - Waveforms
Returns:
melspec :: (B, Freq, Frame) - Mel-frequency Log-amplitude spectrogram
"""
# Linear-frequency Linear-amplitude spectrogram :: (B, T) -> (B, Freq, Frame)
spec = spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center)
y = torch.nn.functional.pad(
y.unsqueeze(1),
(int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)),
mode="reflect",
)
y = y.squeeze(1)
# Mel-frequency Log-amplitude spectrogram :: (B, Freq, Frame) -> (B, Freq=num_mels, Frame)
melspec = spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax)
# spec = torch.stft(
# y,
# n_fft,
# hop_length=hop_size,
# win_length=win_size,
# window=hann_window[wnsize_dtype_device],
# center=center,
# pad_mode="reflect",
# normalized=False,
# onesided=True,
# )
spec = torch.stft(
y,
n_fft,
hop_length=hop_size,
win_length=win_size,
window=hann_window[wnsize_dtype_device],
center=center,
pad_mode="reflect",
normalized=False,
onesided=True,
return_complex=False,
)
spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
spec = torch.matmul(mel_basis[fmax_dtype_device], spec)
spec = spectral_normalize_torch(spec)
return spec
return melspec
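After the refactor, mel_spectrogram_torch is simply spectrogram_torch followed by spec_to_mel_torch, with window and mel-basis caching kept in one place. A minimal equivalence sketch (the module name is assumed and the STFT parameters below are illustrative, not the repository's config):

import torch
from mel_processing import (  # module name assumed
    spectrogram_torch, spec_to_mel_torch, mel_spectrogram_torch,
)

wav = torch.rand(1, 16000) * 2 - 1   # (B, T) waveform in [-1, 1]

# Two-step path
spec = spectrogram_torch(wav, n_fft=1024, sampling_rate=16000, hop_size=256, win_size=1024, center=False)
mel_a = spec_to_mel_torch(spec, n_fft=1024, num_mels=80, sampling_rate=16000, fmin=0, fmax=None)

# One-call path, now defined as exactly the composition above
mel_b = mel_spectrogram_torch(wav, 1024, 80, 16000, 256, 1024, 0, None)
print(torch.allclose(mel_a, mel_b))  # True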

View File

@@ -21,7 +21,7 @@ import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.cuda.amp import autocast, GradScaler
from infer_pack import commons
from time import sleep
from time import time as ttime
from data_utils import (
TextAudioLoaderMultiNSFsid,
@@ -45,7 +45,7 @@ global_step = 0
def main():
# n_gpus = torch.cuda.device_count()
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "5555"
os.environ["MASTER_PORT"] = "51545"
mp.spawn(
run,
@@ -157,7 +157,7 @@ def run(rank, n_gpus, hps):
# epoch_str = 1
# global_step = 0
except:  # if it cannot be loaded on the first run, fall back to loading the pretrained weights
traceback.print_exc()
# traceback.print_exc()
epoch_str = 1
global_step = 0
if rank == 0:
@@ -230,39 +230,50 @@ def train_and_evaluate(
net_g.train()
net_d.train()
if cache == [] or hps.if_cache_data_in_gpu == False:  # on the first epoch, fill the cache with the whole training set
# print("caching")
for batch_idx, info in enumerate(train_loader):
if hps.if_f0 == 1:
(
phone,
phone_lengths,
pitch,
pitchf,
spec,
spec_lengths,
wave,
wave_lengths,
sid,
) = info
else:
phone, phone_lengths, spec, spec_lengths, wave, wave_lengths, sid = info
if torch.cuda.is_available():
phone, phone_lengths = phone.cuda(
rank, non_blocking=True
), phone_lengths.cuda(rank, non_blocking=True)
# Prepare data iterator
if hps.if_cache_data_in_gpu == True:
# Use Cache
data_iterator = cache
if cache == []:
# Make new cache
for batch_idx, info in enumerate(train_loader):
# Unpack
if hps.if_f0 == 1:
pitch, pitchf = pitch.cuda(rank, non_blocking=True), pitchf.cuda(
rank, non_blocking=True
)
sid = sid.cuda(rank, non_blocking=True)
spec, spec_lengths = spec.cuda(
rank, non_blocking=True
), spec_lengths.cuda(rank, non_blocking=True)
wave, wave_lengths = wave.cuda(
rank, non_blocking=True
), wave_lengths.cuda(rank, non_blocking=True)
if hps.if_cache_data_in_gpu == True:
(
phone,
phone_lengths,
pitch,
pitchf,
spec,
spec_lengths,
wave,
wave_lengths,
sid,
) = info
else:
(
phone,
phone_lengths,
spec,
spec_lengths,
wave,
wave_lengths,
sid,
) = info
# Load on CUDA
if torch.cuda.is_available():
phone = phone.cuda(rank, non_blocking=True)
phone_lengths = phone_lengths.cuda(rank, non_blocking=True)
if hps.if_f0 == 1:
pitch = pitch.cuda(rank, non_blocking=True)
pitchf = pitchf.cuda(rank, non_blocking=True)
sid = sid.cuda(rank, non_blocking=True)
spec = spec.cuda(rank, non_blocking=True)
spec_lengths = spec_lengths.cuda(rank, non_blocking=True)
wave = wave.cuda(rank, non_blocking=True)
wave_lengths = wave_lengths.cuda(rank, non_blocking=True)
# Cache on list
if hps.if_f0 == 1:
cache.append(
(
@@ -295,372 +306,211 @@ def train_and_evaluate(
),
)
)
with autocast(enabled=hps.train.fp16_run):
if hps.if_f0 == 1:
(
y_hat,
ids_slice,
x_mask,
z_mask,
(z, z_p, m_p, logs_p, m_q, logs_q),
) = net_g(
phone, phone_lengths, pitch, pitchf, spec, spec_lengths, sid
)
else:
(
y_hat,
ids_slice,
x_mask,
z_mask,
(z, z_p, m_p, logs_p, m_q, logs_q),
) = net_g(phone, phone_lengths, spec, spec_lengths, sid)
mel = spec_to_mel_torch(
spec,
hps.data.filter_length,
hps.data.n_mel_channels,
hps.data.sampling_rate,
hps.data.mel_fmin,
hps.data.mel_fmax,
)
y_mel = commons.slice_segments(
mel, ids_slice, hps.train.segment_size // hps.data.hop_length
)
with autocast(enabled=False):
y_hat_mel = mel_spectrogram_torch(
y_hat.float().squeeze(1),
hps.data.filter_length,
hps.data.n_mel_channels,
hps.data.sampling_rate,
hps.data.hop_length,
hps.data.win_length,
hps.data.mel_fmin,
hps.data.mel_fmax,
)
if hps.train.fp16_run == True:
y_hat_mel = y_hat_mel.half()
wave = commons.slice_segments(
wave, ids_slice * hps.data.hop_length, hps.train.segment_size
) # slice
else:
# Load shuffled cache
shuffle(cache)
else:
# Loader
data_iterator = enumerate(train_loader)
# Discriminator
y_d_hat_r, y_d_hat_g, _, _ = net_d(wave, y_hat.detach())
with autocast(enabled=False):
loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(
y_d_hat_r, y_d_hat_g
)
optim_d.zero_grad()
scaler.scale(loss_disc).backward()
scaler.unscale_(optim_d)
grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None)
scaler.step(optim_d)
# Run steps
for batch_idx, info in data_iterator:
# Data
## Unpack
if hps.if_f0 == 1:
(
phone,
phone_lengths,
pitch,
pitchf,
spec,
spec_lengths,
wave,
wave_lengths,
sid,
) = info
else:
phone, phone_lengths, spec, spec_lengths, wave, wave_lengths, sid = info
## Load on CUDA
if (hps.if_cache_data_in_gpu == False) and torch.cuda.is_available():
phone = phone.cuda(rank, non_blocking=True)
phone_lengths = phone_lengths.cuda(rank, non_blocking=True)
if hps.if_f0 == 1:
pitch = pitch.cuda(rank, non_blocking=True)
pitchf = pitchf.cuda(rank, non_blocking=True)
sid = sid.cuda(rank, non_blocking=True)
spec = spec.cuda(rank, non_blocking=True)
spec_lengths = spec_lengths.cuda(rank, non_blocking=True)
wave = wave.cuda(rank, non_blocking=True)
wave_lengths = wave_lengths.cuda(rank, non_blocking=True)
with autocast(enabled=hps.train.fp16_run):
# Generator
y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(wave, y_hat)
with autocast(enabled=False):
loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel
loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl
loss_fm = feature_loss(fmap_r, fmap_g)
loss_gen, losses_gen = generator_loss(y_d_hat_g)
loss_gen_all = loss_gen + loss_fm + loss_mel + loss_kl
optim_g.zero_grad()
scaler.scale(loss_gen_all).backward()
scaler.unscale_(optim_g)
grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None)
scaler.step(optim_g)
scaler.update()
if rank == 0:
if global_step % hps.train.log_interval == 0:
lr = optim_g.param_groups[0]["lr"]
logger.info(
"Train Epoch: {} [{:.0f}%]".format(
epoch, 100.0 * batch_idx / len(train_loader)
)
)
# Amor For Tensorboard display
if loss_mel > 50:
loss_mel = 50
if loss_kl > 5:
loss_kl = 5
logger.info([global_step, lr])
logger.info(
f"loss_disc={loss_disc:.3f}, loss_gen={loss_gen:.3f}, loss_fm={loss_fm:.3f},loss_mel={loss_mel:.3f}, loss_kl={loss_kl:.3f}"
)
scalar_dict = {
"loss/g/total": loss_gen_all,
"loss/d/total": loss_disc,
"learning_rate": lr,
"grad_norm_d": grad_norm_d,
"grad_norm_g": grad_norm_g,
}
scalar_dict.update(
{
"loss/g/fm": loss_fm,
"loss/g/mel": loss_mel,
"loss/g/kl": loss_kl,
}
)
scalar_dict.update(
{"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}
)
scalar_dict.update(
{
"loss/d_r/{}".format(i): v
for i, v in enumerate(losses_disc_r)
}
)
scalar_dict.update(
{
"loss/d_g/{}".format(i): v
for i, v in enumerate(losses_disc_g)
}
)
image_dict = {
"slice/mel_org": utils.plot_spectrogram_to_numpy(
y_mel[0].data.cpu().numpy()
),
"slice/mel_gen": utils.plot_spectrogram_to_numpy(
y_hat_mel[0].data.cpu().numpy()
),
"all/mel": utils.plot_spectrogram_to_numpy(
mel[0].data.cpu().numpy()
),
}
utils.summarize(
writer=writer,
global_step=global_step,
images=image_dict,
scalars=scalar_dict,
)
global_step += 1
# if global_step % hps.train.eval_interval == 0:
if epoch % hps.save_every_epoch == 0 and rank == 0:
if hps.if_latest == 0:
utils.save_checkpoint(
net_g,
optim_g,
hps.train.learning_rate,
epoch,
os.path.join(hps.model_dir, "G_{}.pth".format(global_step)),
)
utils.save_checkpoint(
net_d,
optim_d,
hps.train.learning_rate,
epoch,
os.path.join(hps.model_dir, "D_{}.pth".format(global_step)),
)
else:
utils.save_checkpoint(
net_g,
optim_g,
hps.train.learning_rate,
epoch,
os.path.join(hps.model_dir, "G_{}.pth".format(2333333)),
)
utils.save_checkpoint(
net_d,
optim_d,
hps.train.learning_rate,
epoch,
os.path.join(hps.model_dir, "D_{}.pth".format(2333333)),
)
else:  # later epochs reuse the shuffled cache directly
shuffle(cache)
# print("using cache")
for batch_idx, info in cache:
# Calculate
with autocast(enabled=hps.train.fp16_run):
if hps.if_f0 == 1:
(
phone,
phone_lengths,
pitch,
pitchf,
spec,
spec_lengths,
wave,
wave_lengths,
sid,
) = info
y_hat,
ids_slice,
x_mask,
z_mask,
(z, z_p, m_p, logs_p, m_q, logs_q),
) = net_g(phone, phone_lengths, pitch, pitchf, spec, spec_lengths, sid)
else:
phone, phone_lengths, spec, spec_lengths, wave, wave_lengths, sid = info
with autocast(enabled=hps.train.fp16_run):
if hps.if_f0 == 1:
(
y_hat,
ids_slice,
x_mask,
z_mask,
(z, z_p, m_p, logs_p, m_q, logs_q),
) = net_g(
phone, phone_lengths, pitch, pitchf, spec, spec_lengths, sid
)
else:
(
y_hat,
ids_slice,
x_mask,
z_mask,
(z, z_p, m_p, logs_p, m_q, logs_q),
) = net_g(phone, phone_lengths, spec, spec_lengths, sid)
mel = spec_to_mel_torch(
spec,
(
y_hat,
ids_slice,
x_mask,
z_mask,
(z, z_p, m_p, logs_p, m_q, logs_q),
) = net_g(phone, phone_lengths, spec, spec_lengths, sid)
mel = spec_to_mel_torch(
spec,
hps.data.filter_length,
hps.data.n_mel_channels,
hps.data.sampling_rate,
hps.data.mel_fmin,
hps.data.mel_fmax,
)
y_mel = commons.slice_segments(
mel, ids_slice, hps.train.segment_size // hps.data.hop_length
)
with autocast(enabled=False):
y_hat_mel = mel_spectrogram_torch(
y_hat.float().squeeze(1),
hps.data.filter_length,
hps.data.n_mel_channels,
hps.data.sampling_rate,
hps.data.hop_length,
hps.data.win_length,
hps.data.mel_fmin,
hps.data.mel_fmax,
)
y_mel = commons.slice_segments(
mel, ids_slice, hps.train.segment_size // hps.data.hop_length
if hps.train.fp16_run == True:
y_hat_mel = y_hat_mel.half()
wave = commons.slice_segments(
wave, ids_slice * hps.data.hop_length, hps.train.segment_size
) # slice
# Discriminator
y_d_hat_r, y_d_hat_g, _, _ = net_d(wave, y_hat.detach())
with autocast(enabled=False):
loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(
y_d_hat_r, y_d_hat_g
)
with autocast(enabled=False):
y_hat_mel = mel_spectrogram_torch(
y_hat.float().squeeze(1),
hps.data.filter_length,
hps.data.n_mel_channels,
hps.data.sampling_rate,
hps.data.hop_length,
hps.data.win_length,
hps.data.mel_fmin,
hps.data.mel_fmax,
optim_d.zero_grad()
scaler.scale(loss_disc).backward()
scaler.unscale_(optim_d)
grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None)
scaler.step(optim_d)
with autocast(enabled=hps.train.fp16_run):
# Generator
y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(wave, y_hat)
with autocast(enabled=False):
loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel
loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl
loss_fm = feature_loss(fmap_r, fmap_g)
loss_gen, losses_gen = generator_loss(y_d_hat_g)
loss_gen_all = loss_gen + loss_fm + loss_mel + loss_kl
optim_g.zero_grad()
scaler.scale(loss_gen_all).backward()
scaler.unscale_(optim_g)
grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None)
scaler.step(optim_g)
scaler.update()
if rank == 0:
if global_step % hps.train.log_interval == 0:
lr = optim_g.param_groups[0]["lr"]
logger.info(
"Train Epoch: {} [{:.0f}%]".format(
epoch, 100.0 * batch_idx / len(train_loader)
)
if hps.train.fp16_run == True:
y_hat_mel = y_hat_mel.half()
wave = commons.slice_segments(
wave, ids_slice * hps.data.hop_length, hps.train.segment_size
) # slice
)
# Amor For Tensorboard display
if loss_mel > 50:
loss_mel = 50
if loss_kl > 5:
loss_kl = 5
# Discriminator
y_d_hat_r, y_d_hat_g, _, _ = net_d(wave, y_hat.detach())
with autocast(enabled=False):
loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(
y_d_hat_r, y_d_hat_g
)
optim_d.zero_grad()
scaler.scale(loss_disc).backward()
scaler.unscale_(optim_d)
grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None)
scaler.step(optim_d)
with autocast(enabled=hps.train.fp16_run):
# Generator
y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(wave, y_hat)
with autocast(enabled=False):
loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel
loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl
loss_fm = feature_loss(fmap_r, fmap_g)
loss_gen, losses_gen = generator_loss(y_d_hat_g)
loss_gen_all = loss_gen + loss_fm + loss_mel + loss_kl
optim_g.zero_grad()
scaler.scale(loss_gen_all).backward()
scaler.unscale_(optim_g)
grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None)
scaler.step(optim_g)
scaler.update()
if rank == 0:
if global_step % hps.train.log_interval == 0:
lr = optim_g.param_groups[0]["lr"]
logger.info(
"Train Epoch: {} [{:.0f}%]".format(
epoch, 100.0 * batch_idx / len(train_loader)
)
)
# Amor For Tensorboard display
if loss_mel > 50:
loss_mel = 50
if loss_kl > 5:
loss_kl = 5
logger.info([global_step, lr])
logger.info(
f"loss_disc={loss_disc:.3f}, loss_gen={loss_gen:.3f}, loss_fm={loss_fm:.3f},loss_mel={loss_mel:.3f}, loss_kl={loss_kl:.3f}"
)
scalar_dict = {
"loss/g/total": loss_gen_all,
"loss/d/total": loss_disc,
"learning_rate": lr,
"grad_norm_d": grad_norm_d,
"grad_norm_g": grad_norm_g,
logger.info([global_step, lr])
logger.info(
f"loss_disc={loss_disc:.3f}, loss_gen={loss_gen:.3f}, loss_fm={loss_fm:.3f},loss_mel={loss_mel:.3f}, loss_kl={loss_kl:.3f}"
)
scalar_dict = {
"loss/g/total": loss_gen_all,
"loss/d/total": loss_disc,
"learning_rate": lr,
"grad_norm_d": grad_norm_d,
"grad_norm_g": grad_norm_g,
}
scalar_dict.update(
{
"loss/g/fm": loss_fm,
"loss/g/mel": loss_mel,
"loss/g/kl": loss_kl,
}
scalar_dict.update(
{
"loss/g/fm": loss_fm,
"loss/g/mel": loss_mel,
"loss/g/kl": loss_kl,
}
)
)
scalar_dict.update(
{"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}
)
scalar_dict.update(
{
"loss/d_r/{}".format(i): v
for i, v in enumerate(losses_disc_r)
}
)
scalar_dict.update(
{
"loss/d_g/{}".format(i): v
for i, v in enumerate(losses_disc_g)
}
)
image_dict = {
"slice/mel_org": utils.plot_spectrogram_to_numpy(
y_mel[0].data.cpu().numpy()
),
"slice/mel_gen": utils.plot_spectrogram_to_numpy(
y_hat_mel[0].data.cpu().numpy()
),
"all/mel": utils.plot_spectrogram_to_numpy(
mel[0].data.cpu().numpy()
),
}
utils.summarize(
writer=writer,
global_step=global_step,
images=image_dict,
scalars=scalar_dict,
)
global_step += 1
# if global_step % hps.train.eval_interval == 0:
if epoch % hps.save_every_epoch == 0 and rank == 0:
if hps.if_latest == 0:
utils.save_checkpoint(
net_g,
optim_g,
hps.train.learning_rate,
epoch,
os.path.join(hps.model_dir, "G_{}.pth".format(global_step)),
scalar_dict.update(
{"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}
)
utils.save_checkpoint(
net_d,
optim_d,
hps.train.learning_rate,
epoch,
os.path.join(hps.model_dir, "D_{}.pth".format(global_step)),
scalar_dict.update(
{"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}
)
else:
utils.save_checkpoint(
net_g,
optim_g,
hps.train.learning_rate,
epoch,
os.path.join(hps.model_dir, "G_{}.pth".format(2333333)),
scalar_dict.update(
{"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}
)
utils.save_checkpoint(
net_d,
optim_d,
hps.train.learning_rate,
epoch,
os.path.join(hps.model_dir, "D_{}.pth".format(2333333)),
image_dict = {
"slice/mel_org": utils.plot_spectrogram_to_numpy(
y_mel[0].data.cpu().numpy()
),
"slice/mel_gen": utils.plot_spectrogram_to_numpy(
y_hat_mel[0].data.cpu().numpy()
),
"all/mel": utils.plot_spectrogram_to_numpy(
mel[0].data.cpu().numpy()
),
}
utils.summarize(
writer=writer,
global_step=global_step,
images=image_dict,
scalars=scalar_dict,
)
global_step += 1
# /Run steps
if epoch % hps.save_every_epoch == 0 and rank == 0:
if hps.if_latest == 0:
utils.save_checkpoint(
net_g,
optim_g,
hps.train.learning_rate,
epoch,
os.path.join(hps.model_dir, "G_{}.pth".format(global_step)),
)
utils.save_checkpoint(
net_d,
optim_d,
hps.train.learning_rate,
epoch,
os.path.join(hps.model_dir, "D_{}.pth".format(global_step)),
)
else:
utils.save_checkpoint(
net_g,
optim_g,
hps.train.learning_rate,
epoch,
os.path.join(hps.model_dir, "G_{}.pth".format(2333333)),
)
utils.save_checkpoint(
net_d,
optim_d,
hps.train.learning_rate,
epoch,
os.path.join(hps.model_dir, "D_{}.pth".format(2333333)),
)
if rank == 0:
logger.info("====> Epoch: {}".format(epoch))
@@ -676,6 +526,7 @@ def train_and_evaluate(
"saving final ckpt:%s"
% (savee(ckpt, hps.sample_rate, hps.if_f0, hps.name, epoch))
)
sleep(1)
os._exit(2333333)
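The rewritten loop separates how batches are produced from how they are trained on: with if_cache_data_in_gpu the first epoch moves every batch to the GPU once and stores it in cache, later epochs just shuffle and replay that list, and without caching the DataLoader is enumerated directly each epoch. A stripped-down sketch of that control flow (names are generic, not the repo's):

import random

def make_data_iterator(train_loader, cache, use_gpu_cache, move_to_gpu):
    if use_gpu_cache:
        if not cache:
            # First epoch: pay the host-to-device copy once and keep the batches resident.
            for batch_idx, info in enumerate(train_loader):
                cache.append((batch_idx, move_to_gpu(info)))
        else:
            # Later epochs: replay the cached batches in a new random order.
            random.shuffle(cache)
        return cache
    # No caching: stream (and copy) batches from the DataLoader every epoch.
    return enumerate(train_loader)

# for batch_idx, info in make_data_iterator(train_loader, cache, hps.if_cache_data_in_gpu, to_cuda):
#     ... forward/backward on info ...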

View File

@@ -1,4 +1,5 @@
import sys, os, multiprocessing
from scipy import signal
now_dir = os.getcwd()
sys.path.append(now_dir)
@@ -38,6 +39,7 @@ class PreProcess:
max_sil_kept=150,
)
self.sr = sr
self.bh, self.ah = signal.butter(N=5, Wn=48, btype="high", fs=self.sr)
self.per = 3.7
self.overlap = 0.3
self.tail = self.per + self.overlap
@@ -57,18 +59,24 @@ class PreProcess:
wavfile.write(
"%s/%s_%s.wav" % (self.gt_wavs_dir, idx0, idx1),
self.sr,
(tmp_audio * 32768).astype(np.int16),
tmp_audio.astype(np.float32),
)
tmp_audio = librosa.resample(tmp_audio, orig_sr=self.sr, target_sr=16000)
tmp_audio = librosa.resample(
tmp_audio, orig_sr=self.sr, target_sr=16000
) # , res_type="soxr_vhq"
wavfile.write(
"%s/%s_%s.wav" % (self.wavs16k_dir, idx0, idx1),
16000,
(tmp_audio * 32768).astype(np.int16),
tmp_audio.astype(np.float32),
)
def pipeline(self, path, idx0):
try:
audio = load_audio(path, self.sr)
# a zero-phase digital filter causes pre-ringing noise...
# audio = signal.filtfilt(self.bh, self.ah, audio)
audio = signal.lfilter(self.bh, self.ah, audio)
idx1 = 0
for audio in self.slicer.slice(audio):
i = 0
@@ -81,6 +89,7 @@ class PreProcess:
idx1 += 1
else:
tmp_audio = audio[start:]
idx1 += 1
break
self.norm_write(tmp_audio, idx0, idx1)
println("%s->Suc." % path)
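The preprocessor now designs a 5th-order Butterworth high-pass at 48 Hz and applies it causally with lfilter; the zero-phase filtfilt call is left commented out because its pre-ringing smears energy ahead of transients. A small demonstration on a synthetic signal:

import numpy as np
from scipy import signal

sr = 40000
bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=sr)  # cut DC drift / rumble below ~48 Hz

t = np.arange(sr) / sr
audio = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * np.sin(2 * np.pi * 440 * t)  # rumble + tone
filtered = signal.lfilter(bh, ah, audio)       # causal: no pre-ringing before onsets
# zero_phase = signal.filtfilt(bh, ah, audio)  # zero-phase alternative, rejected in the patch
print(np.abs(filtered[-sr // 2:]).max())       # the 10 Hz rumble is strongly attenuated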

View File

@@ -4,6 +4,9 @@ import torch.nn.functional as F
from config import x_pad, x_query, x_center, x_max
import scipy.signal as signal
import pyworld, os, traceback, faiss
from scipy import signal
bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000)
class VC(object):
@@ -116,8 +119,15 @@ class VC(object):
npy = feats[0].cpu().numpy()
if self.is_half:
npy = npy.astype("float32")
_, I = index.search(npy, 1)
npy = big_npy[I.squeeze()]
# _, I = index.search(npy, 1)
# npy = big_npy[I.squeeze()]
score, ix = index.search(npy, k=8)
weight = np.square(1 / score)
weight /= weight.sum(axis=1, keepdims=True)
npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1)
if self.is_half:
npy = npy.astype("float16")
feats = (
@@ -169,26 +179,28 @@ class VC(object):
f0_up_key,
f0_method,
file_index,
file_big_npy,
# file_big_npy,
index_rate,
if_f0,
f0_file=None,
):
if (
file_big_npy != ""
and file_index != ""
and os.path.exists(file_big_npy) == True
file_index != ""
# and file_big_npy != ""
# and os.path.exists(file_big_npy) == True
and os.path.exists(file_index) == True
and index_rate != 0
):
try:
index = faiss.read_index(file_index)
big_npy = np.load(file_big_npy)
# big_npy = np.load(file_big_npy)
big_npy = index.reconstruct_n(0, index.ntotal)
except:
traceback.print_exc()
index = big_npy = None
else:
index = big_npy = None
audio = signal.filtfilt(bh, ah, audio)
audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect")
opt_ts = []
if audio_pad.shape[0] > self.t_max:
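Retrieval no longer needs total_fea.npy: the stored vectors are reconstructed from the index itself, and instead of copying the single nearest neighbour each query frame is replaced by an inverse-square-distance weighted blend of its 8 nearest neighbours (index_rate then controls how strongly that blend is mixed back into the live features, outside this hunk). A standalone sketch of the lookup on random data (dimensions are illustrative):

import faiss
import numpy as np

dim = 256
bank_src = np.random.rand(10000, dim).astype("float32")  # stand-in for the training feature bank
index = faiss.IndexFlatL2(dim)
index.add(bank_src)

big_npy = index.reconstruct_n(0, index.ntotal)           # no separate .npy file required

query = np.random.rand(100, dim).astype("float32")       # frames to be retouched
score, ix = index.search(query, 8)                       # squared L2 distances + neighbour ids
weight = np.square(1 / score)                            # inverse-square weighting (the patch assumes score > 0)
weight /= weight.sum(axis=1, keepdims=True)
blended = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1)  # (100, dim) weighted mix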