361 Commits

Author SHA1 Message Date
RVC-Boss
4cf1ad4ce9 Update Changelog_EN.md 2023-08-14 00:22:38 +08:00
RVC-Boss
31f7437503 Update Changelog_CN.md 2023-08-14 00:15:31 +08:00
RVC-Boss
af6f72be86 Update README.md 2023-08-14 00:10:26 +08:00
RVC-Boss
17c99ee556 Update README.en.md 2023-08-14 00:09:52 +08:00
RVC-Boss
7e544c453c Update README.md 2023-08-14 00:01:47 +08:00
RVC-Boss
770e8ef2f5 Add files via upload 2023-08-13 23:52:35 +08:00
RVC-Boss
c67e9b63da Update models.py 2023-08-13 21:35:46 +08:00
RVC-Boss
cbe54c34fc Add files via upload 2023-08-13 11:58:16 +08:00
github-actions[bot]
5775772e47 Format code (#993)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-08-13 11:56:37 +08:00
RVC-Boss
2aab92be37 Add files via upload 2023-08-13 11:53:46 +08:00
github-actions[bot]
76b67842ba Format code (#989)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-08-13 11:52:51 +08:00
Doğa Yağcızeybek
7293002f53 chore: translate documents into turkish (#944)
* chore: translate documents into turkish

* chore: add turkish option to other readmes

* chore: add turkish option to main readme
2023-08-13 11:52:24 +08:00
RVC-Boss
44dfd1bbdd Create requirements-dml.txt 2023-08-13 11:49:54 +08:00
RVC-Boss
c96d878708 Update infer-web.py 2023-08-13 11:46:12 +08:00
Ftps
f637bb8788 Cleanup config.py (#992)
* Update config.py

* miss
2023-08-13 11:45:20 +08:00
RVC-Boss
5b9265d4a9 Update requirements.txt 2023-08-13 11:43:04 +08:00
RVC-Boss
03e7c68c11 Add files via upload 2023-08-13 01:05:58 +08:00
RVC-Boss
7f78dce483 Delete gui_v0.py 2023-08-12 23:00:03 +08:00
RVC-Boss
20fb86acfc Add files via upload 2023-08-12 22:59:30 +08:00
RVC-Boss
0fcc293dd0 Add files via upload 2023-08-12 22:58:41 +08:00
RVC-Boss
954ce83d04 Delete models_dml.py 2023-08-12 22:57:33 +08:00
RVC-Boss
1226cc9cfc Delete guidml.py 2023-08-12 22:57:21 +08:00
Matej Tkac
73560448a8 replace np.int with np.int32 (#948)
ref: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
2023-08-10 10:28:30 +08:00
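For context, the np.int fix here (and in #434 further down) amounts to a one-line dtype change; a minimal sketch, assuming typical usage:

    import numpy as np

    # np.int was a deprecated alias for the builtin int (NumPy 1.20) and was
    # removed in NumPy 1.24, so the old spelling now raises AttributeError.
    hop_length = 160
    n_samples = 48_123
    # before: pad = np.int(n_samples // hop_length) * hop_length
    pad = np.int32(n_samples // hop_length) * hop_length  # after, per this commit
    print(pad)  # 48000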
Mert Cobanov
2edeb7168b Solved: ImportError: cannot import name 'FFmpeg' from 'ffmpy' in Windows (#970) 2023-08-10 10:27:32 +08:00
Rice Cake
1a563e68e6 Update README.md (#966) 2023-08-09 20:32:49 +09:00
github-actions[bot]
9a20c3b28f Format code (#932)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-08-03 10:25:05 +08:00
Matej Tkac
296905983a Attempt to infer V2 models (#927) 2023-08-03 10:23:20 +08:00
RVC-Boss
064fecbd5d Create calc_rvc_model_similarity.py 2023-08-02 21:20:46 +08:00
github-actions[bot]
176417e78e Format code (#901)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-07-31 11:47:33 +08:00
Flynn Duniho
144073a924 Automatically select index file when model is selected (#894)
* automatically select index file when model is selected

* Search for full directory of index file

* Add trailing separator char to index search string

* disable debug log

* remove unused re

---------

Co-authored-by: 源文雨 <41315874+fumiama@users.noreply.github.com>
2023-07-30 11:13:03 +08:00
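A hedged sketch of what #894's auto-selection amounts to; the directory layout and function name are illustrative assumptions, not the repo's exact code:

    import os

    def pick_index_for_model(model_name: str, logs_dir: str = "logs") -> str:
        # When a .pth model is selected, look for a matching .index file under
        # the experiment's log directory instead of making the user browse.
        exp = os.path.splitext(os.path.basename(model_name))[0]
        exp_dir = os.path.join(logs_dir, exp)  # the join supplies the trailing separator the PR mentions
        if not os.path.isdir(exp_dir):
            return ""
        for f in sorted(os.listdir(exp_dir)):
            if f.endswith(".index"):
                return os.path.join(exp_dir, f)
        return ""

    print(pick_index_for_model("mi-test.pth"))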
Hiroto N
0784b4e593 add rmvpe opt on inference only app.py (#896) 2023-07-29 23:01:33 +08:00
Naozumi
d82b2cfc14 Update readme (#897) 2023-07-29 22:44:36 +08:00
源文雨
39ef364cff Update genlocale.yml 2023-07-28 12:48:12 +08:00
github-actions[bot]
b2f816a39e Format code (#891)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-07-28 12:45:35 +08:00
forestsource
9f7fe2942a Add REST API settings (#887) 2023-07-28 02:46:09 +00:00
GratefulTony
0b15d48f20 feat: unblock cpu training (#889)
* Update train_nsf_sim_cache_sid_load_pretrain.py

patch to unblock cpu training. CPU training took ~12 hours for me.

* Update train_nsf_sim_cache_sid_load_pretrain.py

Co-authored-by: Nato Boram <NatoBoram@users.noreply.github.com>

---------

Co-authored-by: 源文雨 <41315874+fumiama@users.noreply.github.com>
Co-authored-by: Nato Boram <NatoBoram@users.noreply.github.com>
2023-07-28 02:44:16 +00:00
源文雨
8d8eb8e3e4 chore: remove unnecessary sys.path.append 2023-07-27 18:36:05 +08:00
github-actions[bot]
58370b048c 🎨 Sync locale (#878)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-07-26 19:53:31 +08:00
github-actions[bot]
f7fc51c81a Format code (#877)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-07-26 19:51:48 +08:00
RVC-Boss
b1cb31854a Add files via upload 2023-07-26 19:50:50 +08:00
RVC-Boss
8fb03a64e0 Add files via upload 2023-07-26 19:50:13 +08:00
RVC-Boss
23642ac22a Update train_nsf_sim_cache_sid_load_pretrain.py 2023-07-26 18:05:44 +08:00
Seth T. Allen
2a71c31b66 Create infer_cli.py (#875)
The my_inferer.py mentioned in the RVC docs is broken. This one works. I think we should add it :^)
2023-07-26 14:43:08 +08:00
RVC-Boss
622c1f5131 Update faq_en.md 2023-07-26 14:39:50 +08:00
RVC-Boss
49d13d41cc Update faq.md 2023-07-26 14:39:18 +08:00
github-actions[bot]
232213a522 Format code (#874)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-07-26 11:56:00 +08:00
Naozumi
85d0d709e0 Move cor_nom, cor_den to mps (gui_v1.py) (#851)
* Move `cor_nom`, `cor_den` to mps

* Split logic based on system
2023-07-26 11:54:37 +08:00
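The mps move in #851 is essentially a device-selection split; a minimal sketch, with the buffer names taken from the PR title and their contents inferred:

    import torch

    # Pick Apple's Metal backend when present, otherwise CUDA, otherwise CPU.
    if torch.backends.mps.is_available():
        device = torch.device("mps")
    elif torch.cuda.is_available():
        device = torch.device("cuda")
    else:
        device = torch.device("cpu")

    cor_nom = torch.zeros(1, device=device)  # running correlation numerator
    cor_den = torch.zeros(1, device=device)  # running correlation denominator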
Mix007
78b8bfe890 Update infer-web.py (#864)
fix ModuleNotFoundError: No module named 'config'
fix NameError: name 'sys' is not defined
2023-07-26 11:52:51 +08:00
RVC-Boss
98b2e752f2 Update config.py 2023-07-26 11:36:27 +08:00
RVC-Boss
c757674425 Add files via upload 2023-07-26 11:35:31 +08:00
github-actions[bot]
3ae444b05c Format code (#850)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-07-24 18:19:03 +08:00
源文雨
23f64d3aa8 optimize: cpt handling, as mentioned in #836 2023-07-24 18:16:48 +08:00
源文雨
76c18b547c optimize: move i18n to lib 2023-07-24 14:19:53 +08:00
源文雨
8364750272 optimize: move slicer2, rmvpe, my_utils to lib 2023-07-24 14:16:58 +08:00
Nato Boram
451630a2a4 ⬇️ Downgrade librosa (#846) 2023-07-24 03:31:02 +00:00
Nato Boram
ffc99dbd32 👷 Use black[jupyter] (#847)
* 👷 Use black[jupyter]

* 👷 Add missing matrix
2023-07-24 03:30:01 +00:00
源文雨
18067aa85d fix #835: some broken imports 2023-07-23 14:32:53 +08:00
github-actions[bot]
a002f817df Format code (#834)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-07-23 13:37:01 +08:00
源文雨
f70da25f00 fix: code lint by optimizing train lib's importing 2023-07-23 12:08:11 +08:00
源文雨
add253b476 Update genlocale.yml 2023-07-23 02:11:25 +08:00
github-actions[bot]
6f5697c146 🎨 Sync locale (#828)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-07-23 01:57:31 +08:00
github-actions[bot]
4b8d47f13a Format code (#827)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-07-23 01:56:43 +08:00
源文雨
f5a1c550e5 Update push_format.yml 2023-07-23 01:51:52 +08:00
Naozumi
8c23c3c9e5 Add device reload button (#778) 2023-07-23 01:24:51 +08:00
Karl Kihlström
fe6216a026 add app title (#780) 2023-07-23 01:23:47 +08:00
RVC-Boss
468f9e3075 Update 48k_v2.json 2023-07-19 15:26:11 +08:00
mocci24
4cbdeebefc some error building pyworld (#797)
× Building wheel for pyworld (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> See above for output.
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
  Building wheel for pyworld (pyproject.toml) ... error
  ERROR: Failed building wheel for pyworld
  Building wheel for antlr4-python3-runtime (setup.py) ... done
  Created wheel for antlr4-python3-runtime: filename=antlr4_python3_runtime-4.8-py3-none-any.whl size=141210 sha256=e81137dc4dd676c814cbce5303bf5b687232f3bd7861df8d666cfe05ae199b3e
  Stored in directory: /root/.cache/pip/wheels/a7/20/bd/e1477d664f22d99989fd28ee1a43d6633dddb5cb9e801350d5
Successfully built fairseq antlr4-python3-runtime
Failed to build pyworld
ERROR: Could not build wheels for pyworld, which is required to install pyproject.toml-based projects
2023-07-19 15:04:00 +08:00
github-actions[bot]
f63783c348 Format code (#779)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-07-17 22:56:30 +08:00
丸子
0cf474f820 Fix dependency conflict in last pull request (#775)
The conflict is caused by:
    The user requested starlette>=0.25.0
    fastapi 0.88.0 depends on starlette==0.22.0

The fastapi 0.88.0 package resolves its own dependencies automatically, so removing starlette>=0.25.0 fixes the conflict.
2023-07-17 22:55:28 +08:00
Naozumi
2b3fe8cf1b fix mps in gui-v1.py (#769)
* Fix mps on realtime

* Added back repeat chs
2023-07-17 22:54:15 +08:00
丸子
2e0dfeec50 Fix dependency error (#745)
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lightning 2.0.2 requires fastapi<0.89.0,>=0.69.0, but you have fastapi 0.100.0 which is incompatible.
so-vits-svc-fork 3.14.1 requires fastapi==0.88, but you have fastapi 0.100.0 which is incompatible.
2023-07-16 23:34:35 +08:00
Naozumi
86ed98aaca Add .sh run script for macOS & linux, fix error on macs with low vram. (#737)
* Add .sh run script

* Update extract_feature_print.py

* Remove `requirements_macOS.txt`
2023-07-13 07:05:35 +00:00
github-actions[bot]
5b9d9b045a 🎨 Sync locale (#743)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-07-13 14:40:46 +08:00
源文雨
c40d522e2a fix: gen-locale 2023-07-13 14:39:34 +08:00
github-actions[bot]
9739f3085d Format code (#727)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-07-13 14:35:24 +08:00
RVC-Boss
6c13f1fe52 Create MIT协议暨相关引用库协议 2023-07-12 11:25:18 +08:00
RVC-Boss
5691e7a237 Update LICENSE 2023-07-12 11:17:30 +08:00
RVC-Boss
cd1d76aac2 Update rvc_for_realtime.py 2023-07-11 22:09:41 +08:00
RVC-Boss
0f9d2e6cac Realtime GUI now supports rmvpe
2023-07-11 16:27:18 +08:00
RVC-Boss
c69cecbc41 Add files via upload 2023-07-11 14:46:00 +08:00
RVC-Boss
1279e1dcc4 Add files via upload 2023-07-11 14:45:34 +08:00
RVC-Boss
4af6630792 Update and rename gui.py to gui_v0.py 2023-07-11 14:45:16 +08:00
RVC-Boss
27e7d2dc4a Added support for RMVPE, the state-of-the-art vocal pitch extraction algorithm; its results beat everything else!
2023-07-11 12:02:30 +08:00
RVC-Boss
9c63bcc8c6 add rmvpe support
2023-07-11 11:49:56 +08:00
github-actions[bot]
9b789025d1 Format code (#716)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-07-10 17:59:25 +08:00
Miku AuahDark
a2848f40bb Use sys.executable to determine --pycmd (#715)
* Use sys.executable to determine --pycmd

On some systems, `python` may not refer to the virtual environment's `python` used for the webui, or may even refer to Python 2.

Also, on Windows, when the webui is run directly through `venv\Scripts\python` without activating the virtual environment, the system python is picked up instead of the one inside the virtual environment.

* Remove redundant "or".
2023-07-10 17:52:42 +08:00
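A minimal sketch of the fix described above; the flag name comes from the PR title, the rest is illustrative:

    import argparse
    import sys

    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--pycmd",
        type=str,
        # sys.executable is the interpreter actually running this script, so it
        # survives unactivated venvs and systems where "python" is Python 2.
        # It can be an empty string in embedded interpreters, hence the fallback.
        default=sys.executable or "python",
        help="Python command used to spawn subprocesses",
    )
    args = parser.parse_args([])
    print(args.pycmd)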
Zhang, Di
211e13b80a Add directML support to RVC for AMD & Intel GPU supported (#707) 2023-07-09 10:07:02 +00:00
Roberts Slisans
3dbba6ae74 add torchcrepe to pyproject (#696) 2023-07-08 11:49:47 +00:00
github-actions[bot]
c3d6057a22 🎨 Sync locale (#699)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-07-08 19:46:30 +08:00
Valerio Montieri
fb785df015 Added italian translation json (#676) 2023-07-08 19:44:16 +08:00
Devyatyi9
c5976ff563 added ru support ru-RU.json (#661)
* added ru support ru-RU.json

Russian translation by redoverflow

* updated description for extract
2023-07-06 17:56:01 +00:00
github-actions[bot]
d97767494c Changes by create-pull-request action (#655)
* 🎨 Sync locale

* Update tr_TR.json

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: 源文雨 <41315874+fumiama@users.noreply.github.com>
2023-06-30 16:20:19 +00:00
Ozan Ayrıkan
dbba35cdd0 added tr support (#653) 2023-06-30 02:34:50 +00:00
Rice Cake
7f4bdf42b0 Update README.md (#646) 2023-06-28 15:29:47 +08:00
tocky
81323dbac6 fix boolean parsing (#629) 2023-06-28 13:54:44 +08:00
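Boolean CLI flags are a classic footgun: argparse's type=bool turns the string "False" into True, because any non-empty string is truthy. A common fix, sketched here with an assumed flag name, is an explicit parser:

    import argparse

    def str2bool(v) -> bool:
        if isinstance(v, bool):
            return v
        if v.lower() in ("yes", "true", "t", "1"):
            return True
        if v.lower() in ("no", "false", "f", "0"):
            return False
        raise argparse.ArgumentTypeError(f"boolean value expected, got {v!r}")

    parser = argparse.ArgumentParser()
    parser.add_argument("--noparallel", type=str2bool, default=False)
    print(parser.parse_args(["--noparallel", "False"]).noparallel)  # False, not True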
github-actions[bot]
549ac02698 Format code (#644)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-06-28 13:54:31 +08:00
Yurzi
5ca7736b2d Fix realtime gui under linux (#609)
* Fix init problem about devices index outbound

* Fix file browse file type

* Fix sd stream channels problem, fix it to 2

---------

Co-authored-by: 源文雨 <41315874+fumiama@users.noreply.github.com>
2023-06-28 05:53:52 +00:00
Rice Cake
7fc6642c04 update index training script v2 (#643)
* update index training script v2

* Apply Code Formatter Change

---------

Co-authored-by: gak123 <gak123@users.noreply.github.com>
2023-06-28 13:48:06 +08:00
github-actions[bot]
fad31f24f5 Format code (#624)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-06-24 18:06:17 +08:00
源文雨
04d7813486 Update README.md 2023-06-24 16:41:22 +08:00
源文雨
ccba65151e Optimize code structure 2023-06-24 16:36:15 +08:00
源文雨
46c0e9b2fe fix extract feature in MPS device 2023-06-24 16:21:31 +08:00
源文雨
359ba54321 Update LICENSE 2023-06-24 16:08:48 +08:00
sungchura
c6a7270811 Fix the index out of bounds bug in extract_feature_print.py (#560)
Check whether the length of sys.argv is 6 instead of 5, so that sys.argv[5] is covered. Otherwise, when the length is 6, the else body runs and tries to access sys.argv[6] in line 13, which raises an IndexError.
2023-06-24 16:06:24 +08:00
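In other words, with six argv entries the valid indices are 0..5, so the else branch's sys.argv[6] read is out of bounds. A sketch of the corrected branching; the argument names are assumptions based on the unitest.yml invocation shown later in this section:

    import sys

    if len(sys.argv) == 6:
        device, n_part, i_part, i_gpu, exp_dir = sys.argv[1:]
        version = "v1"  # default when the optional version argument is absent
    else:
        # seven entries: the program name plus six arguments, ending with the version
        device, n_part, i_part, i_gpu, exp_dir, version = sys.argv[1:7]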
源文雨
a5c238a392 add dlmodels.sh 2023-06-24 16:05:31 +08:00
源文雨
5e09a55e5f Optimize code structure 2023-06-24 15:26:14 +08:00
源文雨
4e0d399cba Optimize config.py 2023-06-24 13:56:09 +08:00
源文雨
f6051a12f0 move changelogs to docs folder 2023-06-23 22:47:56 +08:00
dependabot[bot]
fda161deba Bump gradio from 3.14.0 to 3.34.0 (#614)
Bumps [gradio](https://github.com/gradio-app/gradio) from 3.14.0 to 3.34.0.
- [Release notes](https://github.com/gradio-app/gradio/releases)
- [Changelog](https://github.com/gradio-app/gradio/blob/main/CHANGELOG.md)
- [Commits](https://github.com/gradio-app/gradio/compare/v3.14.0...v3.34.0)

---
updated-dependencies:
- dependency-name: gradio
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-23 22:44:21 +08:00
github-actions[bot]
eeea7cc3ff 🎨 Sync locale (#613)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-06-23 22:01:55 +08:00
github-actions[bot]
b9fdef34ba Format code (#612)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-06-23 22:00:17 +08:00
源文雨
d4e9badf17 fix extract locale 2023-06-23 21:59:37 +08:00
Ftps
66d470361a Fix links (#596) 2023-06-22 06:37:09 +00:00
kalomaze
195a14e5c5 Add new defaults for infer-web.py & adjust english translation (#584)
* Adjust defaults of index and volume scale

* Adjust eng translation for index and volume envelope
2023-06-20 15:17:32 +08:00
Justin John
cdbb76cb6f Fix OpenBLAS warning (#583)
Fixes the error msg: "OpenBLAS warning: precompiled NUM_THREADS exceeded, adding auxiliary array for thread metadata"
2023-06-20 15:17:18 +08:00
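That warning usually means OpenBLAS was compiled with a smaller thread cap than the machine's core count. The standard remedy, sketched below (whether #583 used exactly these variables is an assumption), is to cap the thread pools before anything linking OpenBLAS is imported:

    import os

    os.environ["OPENBLAS_NUM_THREADS"] = "1"
    os.environ.setdefault("OMP_NUM_THREADS", "1")

    import numpy as np  # imported after the env vars on purpose

    print(np.dot(np.ones((64, 64)), np.ones((64, 64))).shape)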
Ftps
413c0d285d Correction of Japanese nuances (#577)
* Correction of Japanese nuances

* Add new line
2023-06-20 10:10:25 +08:00
RVC-Boss
3c1ff4f63b Update README.md 2023-06-19 12:11:41 +00:00
LINKANG ZHAN
ace949b174 Complete i18n Document (#576) 2023-06-19 12:11:30 +00:00
RVC-Boss
f311d39d46 Update README.en.md 2023-06-19 12:07:09 +00:00
RVC-Boss
90ff4e8c29 fix v2 32k 48k extract bug
2023-06-19 15:48:25 +08:00
Ftps
be1b0b33c9 Set the title of the PR (#568) 2023-06-19 14:12:28 +08:00
Pengoose
4c500d4d29 Update Changelog_KO.md (#569) 2023-06-19 14:12:09 +08:00
RVC-Boss
c1ace168fa Update Changelog_EN.md 2023-06-18 14:16:08 +00:00
RVC-Boss
41c345557f Update Changelog_CN.md 2023-06-18 14:06:25 +00:00
RVC-Boss
7fbfc60fcb Update Changelog_CN.md 2023-06-18 14:05:24 +00:00
github-actions[bot]
e4417ce82f Format code (#564)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-06-18 13:55:51 +00:00
RVC-Boss
125a0a7b02 change default train version to v2
2023-06-18 13:52:49 +00:00
RVC-Boss
a42330f0ae Add files via upload 2023-06-18 21:49:49 +08:00
RVC-Boss
bc5df2ff8e Add files via upload 2023-06-18 20:05:43 +08:00
RVC-Boss
0812020c90 Add files via upload 2023-06-18 19:37:53 +08:00
RVC-Boss
66667c8f50 Add files via upload 2023-06-18 19:37:21 +08:00
github-actions[bot]
a7647e4094 Format code (#526)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-06-18 10:39:56 +00:00
RVC-Boss
f92a923487 Update infer-web.py 2023-06-18 09:56:29 +00:00
RVC-Boss
0db402c312 Update infer-web.py 2023-06-18 09:56:16 +00:00
RVC-Boss
6ca9c853b0 v2-48k-32k-support
2023-06-18 17:40:07 +08:00
RVC-Boss
44426b18b8 train index: auto kmeans when the feature shape is too large
2023-06-18 16:19:07 +08:00
RVC-Boss
e7f204b32e train index: auto kmeans when the feature shape is too large
2023-06-18 16:16:33 +08:00
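The idea behind "auto kmeans": training an IVF index on a huge feature matrix is slow, so oversized matrices are first clustered down and the centroids are used in place of the raw features. A scaled-down sketch; the threshold, centroid count, and index string are illustrative assumptions:

    import numpy as np
    import faiss
    from sklearn.cluster import MiniBatchKMeans

    big_npy = np.random.rand(30_000, 256).astype(np.float32)
    if big_npy.shape[0] > 2e4:  # "feature shape too large"
        big_npy = (
            MiniBatchKMeans(n_clusters=1_000, batch_size=4096)
            .fit(big_npy)
            .cluster_centers_.astype(np.float32)
        )

    index = faiss.index_factory(256, "IVF256,Flat")
    index.train(big_npy)  # train and add the (possibly reduced) features
    index.add(big_npy)
    print(index.ntotal)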
Ftps
75264d09b6 Fix format #526 (#533)
* Fix format #526

* fix return
2023-06-18 08:01:34 +00:00
RVC-Boss
cbd29350fe extreme value filtering 2023-06-18 15:30:56 +08:00
RVC-Boss
a9a77f2556 fix-no-f0-model-protect-issue
2023-06-18 15:17:36 +08:00
RVC-Boss
ec0c39d9bc Update infer-web.py 2023-06-18 06:56:22 +00:00
Ftps
9253948f0d Rewrite syntax of infer-web.py (#536)
* Fix import location

* use any

* Correction of if Syntax

* Class definitions to the front

* format

* fix if Syntax
2023-06-18 06:42:40 +00:00
LINKANG ZHAN
c5758a89db Stop extracting features when hubert_base.pt does not exist. (#535)
* support detection of pretrained model; support training without a pretrained model path in the web ui

* Stop extracting features when hubert_base.pt does not exist.

* Make error more noticeable
2023-06-18 04:39:10 +00:00
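A sketch of the guard this PR describes: check for the HuBERT checkpoint up front and fail loudly instead of crashing mid-extraction. The message wording is an assumption:

    import os
    import sys

    model_path = "hubert_base.pt"
    if not os.path.exists(model_path):
        print(
            f"Error: {model_path} not found. Download it and place it in the "
            "project root before extracting features.",
            file=sys.stderr,
        )
        sys.exit(1)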
lliiooll
28383fbeee Perform some validation checks to avoid crashes (#562) 2023-06-18 04:38:35 +00:00
RVC-Boss
846be17351 Update Retrieval_based_Voice_Conversion_WebUI_v2.ipynb 2023-06-18 04:09:06 +00:00
RVC-Boss
4945fba0a3 Update Retrieval_based_Voice_Conversion_WebUI.ipynb 2023-06-18 04:08:57 +00:00
RVC-Boss
602ee19cf9 Update requirements.txt 2023-06-18 04:08:28 +00:00
Ναρουσέ·μ·γιουμεμί·Χινακάννα
0eb6bb67be Onnx inference DML support (#556)
* Add files via upload

* Add files via upload
2023-06-17 14:49:16 +00:00
RVC-Boss
a071f1e089 fix v2 onnx export 2023-06-15 15:29:05 +00:00
RVC-Boss
147d3c83b7 fix v2 onnx export 2023-06-15 15:27:51 +00:00
RVC-Boss
75d7c03d41 fix v2 onnx export 2023-06-15 15:26:57 +00:00
LINKANG ZHAN
f349adc9df Add support for training without specifying a pretrained model, for selecting v2 48k as a training setting, and for automatically clearing the pretrained model path when the user has no pretrained model in the designated folder. (#528)
* support detection of pretrained model; support training without a pretrained model path in the web ui
2023-06-15 10:21:58 +08:00
github-actions[bot]
eb1a88cf7e Format code (#522)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-06-14 11:48:05 +00:00
红血球AE3803
4de5d0d551 Update the batch inference script so the webui is no longer required (#518)
* Add batch inference script

* Update batch inference script
2023-06-14 10:22:47 +08:00
RVC-Boss
c74727d487 Update Changelog_CN.md 2023-06-13 16:22:17 +08:00
RVC-Boss
78c88a4f75 add vocal2guitar online demo 2023-06-11 13:59:16 +00:00
RVC-Boss
d963c29fec Update README.md 2023-06-11 13:56:06 +00:00
RVC-Boss
986d92b261 Update README.md 2023-06-11 13:54:38 +00:00
André Thieme
e1f084177d Replace deprecated Numpy function np.int. (#434)
It’s an alias for just `int` and it’s being deprecated:
https://numpy.org/devdocs/release/1.20.0-notes.html

Co-authored-by: Ftps <63702646+Tps-F@users.noreply.github.com>
2023-06-10 14:55:34 +00:00
niizam
fcce61b27f Update train_nsf_sim_cache_sid_load_pretrain.py (#497) 2023-06-10 14:54:53 +00:00
Ftps
ff2793249d remove specify version (#492) 2023-06-09 15:05:13 +00:00
ms903x1
ec83e10b8f Update gui.py default config (#482)
* Update envfilescheck.bat

add pretrained_v2 and uvr5 update

* Update envfilescheck.bat

fix bug

* Update envfilescheck.bat

fix bug

* Update data_utils.py

fix bug where data exceeding 4s is filtered out.

* Update gui.py

Update default config

* Update gui.py

fix json bug
2023-06-08 13:29:34 +00:00
github-actions[bot]
fada942ecd Format code (#456)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-06-08 16:55:18 +08:00
yuuukiasuna
1b307a4222 gui json update (#479)
* Fix gui.py

There seemed to be some conflicts between #338 and #340, so I corrected them.

* Update gui.py

* Update gui.py

---------

Co-authored-by: Ftps <63702646+Tps-F@users.noreply.github.com>
Co-authored-by: RVC-Boss <129054828+RVC-Boss@users.noreply.github.com>
2023-06-08 16:53:51 +08:00
mrhan1993
b28f98fed3 Update gui.py (#475)
Add a default path and file filtering when selecting a model.
2023-06-08 10:42:16 +08:00
Dennis Heckmann
297809bdfd Fixed NameError coming from a typo. (#458) 2023-06-07 10:12:06 +08:00
RVC-Boss
692c245fad Update infer-web.py 2023-06-06 14:37:12 +00:00
github-actions[bot]
52c97ed464 Format code (#455)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-06-06 14:35:35 +00:00
RVC-Boss
6f1bc7d683 Add files via upload 2023-06-06 22:32:32 +08:00
RVC-Boss
9ff976b155 Add files via upload 2023-06-06 22:32:10 +08:00
RVC-Boss
f358fe7242 Update requirements.txt 2023-06-06 13:20:45 +00:00
RVC-Boss
05b5af7c8c Add files via upload 2023-06-06 20:34:54 +08:00
KakaruHayate
b7337d7bf1 Update Retrieval_based_Voice_Conversion_WebUI_v2.ipynb (#448) 2023-06-06 12:14:04 +00:00
github-actions[bot]
99404baf94 Format code (#409)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-06-03 08:22:46 +00:00
YuriHead
a94c8e3a69 Update Online Infer (#419) 2023-06-03 08:18:42 +00:00
RVC-Boss
bf11700125 fix m1/m2 user training 2023-06-03 07:08:35 +00:00
Rice Cake
95cd1759c5 fix python dependency problem (#418) 2023-06-03 07:00:50 +00:00
ms903x1
80929e472e Update data_utils.py (#407)
* Update envfilescheck.bat

add pretrained_v2 and uvr5 update

* Update envfilescheck.bat

fix bug

* Update envfilescheck.bat

fix bug

* Update data_utils.py

fix bug where data exceeding 4s is filtered out.
2023-06-02 10:27:20 +08:00
Ma5onic
a02019e428 English Translation Fixes (#402)
* Fix English Translations

* Minor translation correction
2023-06-01 10:11:38 +08:00
RVC-Boss
4c28652ed9 Update requirements.txt 2023-06-01 10:01:00 +08:00
RVC-Boss
c2f402d7d1 Update requirements.txt 2023-05-30 16:50:05 +00:00
RVC-Boss
a68037be3c Update gui.py 2023-05-30 13:17:10 +00:00
dependabot[bot]
fa97c3f8bd Bump starlette from 0.26.1 to 0.27.0 (#390) 2023-05-30 08:09:34 +00:00
github-actions[bot]
89afd017ba Format code (#384)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-05-30 15:22:53 +08:00
Pengoose
5284e38c3d Update Changelog_KO.md (#381) 2023-05-30 08:35:12 +09:00
Ναρουσέ·μ·γιουμεμί·Χινακάννα
24f2ad44ea Add files via upload (#379) 2023-05-29 15:52:23 +00:00
HalfMAI
69071119a9 Update infer-web.py (#374)
Fix the refresh button not updating the index path in the batch processing section
2023-05-29 18:26:59 +08:00
github-actions[bot]
86b086e393 🎨 Sync locale (#367)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-05-29 12:09:26 +08:00
RVC-Boss
95a14b734d Add files via upload 2023-05-29 00:23:09 +08:00
ms903x1
f0a798c53f update envfilescheck.bat (#368)
* Update envfilescheck.bat

add pretrained_v2 and uvr5 update

* Update envfilescheck.bat

fix bug

* Update envfilescheck.bat

fix bug
2023-05-28 16:21:50 +00:00
github-actions[bot]
e435b3bb8a Format code (#366)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-05-28 16:06:11 +00:00
RVC-Boss
e569477457 Update Changelog_EN.md 2023-05-28 15:58:23 +00:00
RVC-Boss
d0249262b3 Add files via upload 2023-05-28 23:51:03 +08:00
RVC-Boss
0841d1341b Update Changelog_CN.md 2023-05-28 15:50:59 +00:00
RVC-Boss
7bd25c4623 Add files via upload 2023-05-28 23:40:54 +08:00
RVC-Boss
e8d92c3e91 Update Changelog_CN.md 2023-05-28 15:11:30 +00:00
RVC-Boss
619e9060aa Update requirements.txt 2023-05-28 15:06:43 +00:00
RVC-Boss
35aa864daa Update requirements.txt 2023-05-28 15:04:10 +00:00
RVC-Boss
e53118c60f Add files via upload 2023-05-28 23:00:51 +08:00
RVC-Boss
c93940a25d Add files via upload 2023-05-28 23:00:29 +08:00
RVC-Boss
f1730d42d4 Add files via upload 2023-05-28 22:58:33 +08:00
Ftps
7789c46ded Fix gui.py (#365)
There seemed to be some conflicts between #338 and #340, so I corrected them.
2023-05-28 12:52:05 +00:00
Rice Cake
4b0c86fbeb add project name to index file's name (#357)
* Add files via upload

* Apply Code Formatter Change

---------

Co-authored-by: gak123 <gak123@users.noreply.github.com>
Co-authored-by: Ftps <63702646+Tps-F@users.noreply.github.com>
2023-05-28 12:25:36 +08:00
dependabot[bot]
2280f3e392 Bump tornado from 6.2 to 6.3.2 (#358)
* Bump tornado from 6.2 to 6.3.2

Bumps [tornado](https://github.com/tornadoweb/tornado) from 6.2 to 6.3.2.
- [Changelog](https://github.com/tornadoweb/tornado/blob/master/docs/releases.rst)
- [Commits](https://github.com/tornadoweb/tornado/compare/v6.2.0...v6.3.2)

---
updated-dependencies:
- dependency-name: tornado
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

* Apply Code Formatter Change

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot[bot] <dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ftps <63702646+Tps-F@users.noreply.github.com>
2023-05-28 00:36:12 +09:00
Pengoose
7816761bee Add Korean CHANGELOG (#359) 2023-05-28 00:04:31 +09:00
Ftps
a2ef4cca76 fix Config, GUIConfig and self (#340)
Co-authored-by: RVC-Boss <129054828+RVC-Boss@users.noreply.github.com>
2023-05-26 19:32:19 +08:00
fluo10
0729c9d6f2 Exclude python3.11 from dependencies (#352) 2023-05-26 19:28:32 +08:00
JackEllie
039e7afb85 Update gui.py (#338) 2023-05-25 09:27:40 +09:00
RVC-Boss
e0813eb282 Update train_nsf_sim_cache_sid_load_pretrain.py 2023-05-24 12:27:15 +00:00
Rice Cake
8efb101401 upload RVC v2 index training script (#343)
* Add files via upload

* Apply Code Formatter Change

---------

Co-authored-by: gak123 <gak123@users.noreply.github.com>
2023-05-24 12:26:35 +00:00
dependabot[bot]
a4c86a3aa1 Bump requests from 2.28.2 to 2.31.0 (#339)
Bumps [requests](https://github.com/psf/requests) from 2.28.2 to 2.31.0.
- [Release notes](https://github.com/psf/requests/releases)
- [Changelog](https://github.com/psf/requests/blob/main/HISTORY.md)
- [Commits](https://github.com/psf/requests/compare/v2.28.2...v2.31.0)

---
updated-dependencies:
- dependency-name: requests
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-23 22:51:51 +08:00
Yugo Ogura
9cee20f402 feat: ipynb for v2 (#332) 2023-05-23 12:58:05 +08:00
github-actions[bot]
cfd9848128 Format code (#330)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-05-21 19:19:53 +08:00
Ναρουσέ·μ·γιουμεμί·Χινακάννα
067731db9b 768VecOnnxExport (#328)
* Delete export_onnx.py

* Delete export_onnx_old.py

* Delete models_onnx_moess.py

* Support 768 Vec

* Add files via upload

* Support 768 Vec

Support 768 Vec

* Support 768 Vec Onnx Export

Support 768 Vec Onnx Export
2023-05-21 19:11:29 +08:00
pcunwa
c3de24f2e0 Corrected Japanese translation. (#319)
* Update ja_JP.json

Incorrect or incomplete translations have been corrected.

* Update ja_JP.json

---------

Co-authored-by: 源文雨 <41315874+fumiama@users.noreply.github.com>
2023-05-21 11:06:11 +00:00
RVC-Boss
f6e55485d9 Update gui.py 2023-05-21 07:01:34 +00:00
RVC-Boss
615c30c17b Update gui.py 2023-05-21 06:57:16 +00:00
RVC-Boss
79a79c3b99 Update config.py 2023-05-21 03:30:27 +00:00
RVC-Boss
28948f8961 Update infer-web.py 2023-05-21 03:10:20 +00:00
RVC-Boss
19cc9062b0 Update i18n.py 2023-05-21 03:10:07 +00:00
tzshao
50a121fc74 Update of en_US.json and faq_en.md. Proposal for i18n standard. (#318)
* Update en_US.json

1. Severe mistake fixed: certain translation is previously incomplete.

* Update faq_en.md

1. Modified 1 entry for context consistency with lately merged en_US translation

* Update en_US.json

1. Attached colons to all Input Prompts as proposed.
2. Minor changes to translation expressions.

* Update en_US.json

1. Removed trailing periods on button texts
2023-05-20 20:14:23 +08:00
Rilm2525
3f17356c11 Japanese translation added and corrected. (#317)
* Update infer-web.py

* Update ja_JP.json
2023-05-20 10:46:39 +00:00
tzshao
563bf7af6d Update of en_US.json, Proposal for i18n standard. (#314)
* Update en_US.json

### Description:
A rough modification of en-US i18n file.

### Changes:
+ Many translation phrases have been replaced with rather native expressions.
+ Majority of translation phrases have been re-formatted for more efficient reading.

### Problems:
+ There's no standard for i18n. E.g., my proposal:
	+ All input prompts end with a colon (":").
	+ All progress indications (e.g. "step1:processing data", "step2a: ...") stay lowercase; CSI SGR escape sequences may be added for highlighting.
	+ Lists of selections and their descriptions are written as key-value pairs.
+ No more plain-translations.

* Update en_US.json

1.Strings that refer to specific paths/locations have been quoted with '' pairs.
2.1 Typo fixed.

* Update en_US.json

1.Minor re-format.
2023-05-20 10:46:18 +00:00
github-actions[bot]
41d2d72f39 Format code (#310)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-05-19 20:48:17 +08:00
N. Hiroto
080b7cdc31 bugfix: leaked semaphore error (#309)
* use config for n_cpu

* rm import

* fix process loop

* unuse mp.spawn

ref. https://discuss.pytorch.org/t/how-to-fix-a-sigsegv-in-pytorch-when-using-distributed-training-e-g-ddp/113518/10

* fix commentout
2023-05-19 17:56:06 +08:00
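A sketch of the shape of that fix: plain multiprocessing workers sized from the config's n_cpu instead of torch's mp.spawn, which the linked thread implicates in the leaked-semaphore SIGSEGV. The function and values here are placeholders:

    import multiprocessing as mp

    def extract_part(i_part: int, n_parts: int) -> None:
        print(f"worker {i_part}/{n_parts}")  # stand-in for the f0 extraction work

    if __name__ == "__main__":
        n_cpu = 4  # would come from config in the real code
        procs = [mp.Process(target=extract_part, args=(i, n_cpu)) for i in range(n_cpu)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()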
kalomaze
563c64ded9 Small english translation tweaks (#308)
* Update en_US.json

* Update en_US.json
2023-05-19 11:32:50 +08:00
github-actions[bot]
0fbfa1d62b Format code (#307)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-05-18 18:54:41 +08:00
Cole Mangio
c2039b6eca Fixed index version not being written to the index file on train_index() in infer-web.py (#305) 2023-05-18 10:02:12 +00:00
github-actions[bot]
aadf7443c3 Format code (#304)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-05-18 13:18:02 +08:00
RVC-Boss
6fb1f8c1b1 Update process_ckpt.py 2023-05-17 15:39:24 +00:00
源文雨
e5374b2041 Revert "fix: merge f0 option value (#298)" (#303)
This reverts commit da0b599fa7.
2023-05-17 23:17:01 +08:00
dependabot[bot]
30c7e417e8 Bump pillow from 9.1.1 to 9.3.0 (#300)
Bumps [pillow](https://github.com/python-pillow/Pillow) from 9.1.1 to 9.3.0.
- [Release notes](https://github.com/python-pillow/Pillow/releases)
- [Changelog](https://github.com/python-pillow/Pillow/blob/main/CHANGES.rst)
- [Commits](https://github.com/python-pillow/Pillow/compare/9.1.1...9.3.0)

---
updated-dependencies:
- dependency-name: pillow
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-17 14:12:22 +08:00
Yugo Ogura
da0b599fa7 fix: merge f0 option value (#298)
Co-authored-by: 源文雨 <41315874+fumiama@users.noreply.github.com>
2023-05-17 05:38:21 +00:00
源文雨
2ec95ab288 fix unitest 2023-05-17 13:32:25 +08:00
github-actions[bot]
5bf26dadca Format code (#296)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-05-17 00:02:26 +08:00
Xerxes-2
0b0bd911d9 Add timestamp and elapsed time for epoch (#273)
* add timestamp and epoch elapsed time

* don't need a class

* Revert "add timestamp and epoch elapsed time"

This reverts commit 93b8d4a7af.

* adjust class def

* delete duplicate import

---------

Co-authored-by: Ftps <63702646+Tps-F@users.noreply.github.com>
Co-authored-by: 源文雨 <41315874+fumiama@users.noreply.github.com>
2023-05-16 23:54:35 +08:00
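The feature in #273 boils down to timing each epoch; a minimal sketch with placeholder training code:

    import datetime
    import time

    def train_one_epoch() -> None:
        time.sleep(0.01)  # stand-in for the real work

    total_epochs = 3
    for epoch in range(1, total_epochs + 1):
        t0 = time.time()
        train_one_epoch()
        stamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        print(f"[{stamp}] epoch {epoch} finished in {time.time() - t0:.2f}s")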
R0w9h
8a9909bdd1 Update ja_JP.json (#293) 2023-05-16 10:24:45 +08:00
RVC-Boss
9d949118c0 Update en_US.json 2023-05-15 15:21:58 +00:00
RVC-Boss
1c01099dbc Update requirements.txt 2023-05-15 14:42:15 +00:00
源文雨
b07dedd744 fix workflow (#284)
* Update extract_feature_print.py

* Update unitest.yml
2023-05-15 13:11:01 +08:00
github-actions[bot]
137447bdc9 🎨 Sync locale (#283)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-05-14 08:30:20 +00:00
RVC-Boss
f4c2a63a5e Add files via upload 2023-05-14 16:26:33 +08:00
RVC-Boss
2d845e5222 Add files via upload 2023-05-14 16:08:37 +08:00
github-actions[bot]
e06994f473 🎨 Sync locale (#281)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-05-14 08:00:48 +00:00
github-actions[bot]
6a3eaef090 Format code (#275)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-05-14 07:52:36 +00:00
RVC-Boss
32437314b8 Update Changelog_EN.md 2023-05-14 07:50:54 +00:00
RVC-Boss
ac807575ad Update Changelog_CN.md 2023-05-14 07:44:03 +00:00
RVC-Boss
b42f4bf6df Update README.en.md 2023-05-14 07:21:30 +00:00
RVC-Boss
1f63abe3e2 Update README.md 2023-05-14 07:19:35 +00:00
RVC-Boss
bbc3bcba3b Create .gitignore 2023-05-14 07:16:47 +00:00
RVC-Boss
60919b9b02 Update Changelog_CN.md 2023-05-14 07:16:06 +00:00
RVC-Boss
77ff5b08b6 Add files via upload 2023-05-14 15:07:12 +08:00
RVC-Boss
404ce9338f Add files via upload 2023-05-14 15:06:50 +08:00
RVC-Boss
3b5a2298d7 Add files via upload 2023-05-14 15:05:42 +08:00
RVC-Boss
3909ce4a7b Add files via upload 2023-05-13 03:49:38 +08:00
RVC-Boss
0d2212c8ea Add files via upload 2023-05-13 03:47:56 +08:00
github-actions[bot]
af41184320 Format code (#274)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-05-12 19:43:05 +00:00
RVC-Boss
568378761b Update Changelog_CN.md 2023-05-12 19:41:06 +00:00
RVC-Boss
44449efc2e Add files via upload 2023-05-13 03:29:30 +08:00
github-actions[bot]
0bc1ea782e Format code (#270)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-05-12 19:27:59 +00:00
Ftps
3d8d0957e4 remove Unnecessary elif (#259) 2023-05-12 19:27:44 +00:00
RVC-Boss
ef016ae6a0 Update gui.py 2023-05-11 14:29:56 +00:00
RVC-Boss
c84371844a default sr->40k is the best; unload weight debug []->""
2023-05-11 03:07:02 +08:00
源文雨
339a116074 Update README.md 2023-05-10 23:39:04 +08:00
github-actions[bot]
2f8179fa32 🎨 Sync locale (#266)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-05-10 23:36:36 +08:00
Sebastian Gabriel Savu
2086a7dab4 [maintenance] change f0 choice to boolean instead of string yes/no + default sample rate for training to 48k (#265) 2023-05-10 23:30:19 +08:00
源文雨
9bab76741e Update README.md 2023-05-10 23:27:33 +08:00
Ftps
769cf352a0 update faiss (#261) 2023-05-10 23:24:03 +08:00
RVC-Boss
c7f6a181a0 Update config.py 2023-05-10 13:19:09 +00:00
Ftps
6cc2279fb9 Support mps generate (#263) 2023-05-10 13:17:13 +00:00
RVC-Boss
5b0ff12163 Update README.md 2023-05-09 11:49:05 +08:00
RVC-Boss
1782a6332e Update requirements.txt 2023-05-08 15:40:06 +00:00
RVC-Boss
28dd13420c Update requirements.txt 2023-05-08 15:34:45 +00:00
github-actions[bot]
75d31f1022 Format code (#254)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-05-08 15:16:19 +00:00
liu biao
eba9b05b28 Add instructions for installing swig on macOS (#253) 2023-05-08 15:16:09 +00:00
RVC-Boss
2c4ec6db93 Update trainset_preprocess_pipeline_print.py 2023-05-08 15:04:21 +00:00
RVC-Boss
4a2c9c062f Update gui.py 2023-05-07 17:42:30 +00:00
RVC-Boss
5928d5358c Update gui.py 2023-05-07 17:40:09 +00:00
Scott
f695fe60f6 Add English CHANGELOG (#243) 2023-05-07 16:24:13 +00:00
源文雨
5d7b649175 Update README.md 2023-05-07 13:46:23 +08:00
源文雨
73992be783 Automatically build the docker image on release 2023-05-07 13:43:27 +08:00
Sebastian Gabriel Savu
d43c1d3cdd add ability to containerize with Docker (add Dockerfile) (#240)
Co-authored-by: Sebastian Savu <sebastian.savu@bidfx.com>
2023-05-07 13:32:16 +08:00
github-actions[bot]
b5b9af0255 🎨 Sync locale (#239)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-05-07 13:30:57 +08:00
RVC-Boss
aabbcb70c1 Update README.en.md 2023-05-06 18:42:08 +08:00
RVC-Boss
ddec7b713f Update README.md 2023-05-06 18:40:42 +08:00
R0w9h
e76654e634 Update ja_JP.json (#238) 2023-05-06 10:45:39 +09:00
github-actions[bot]
eb7caaa064 Format code (#228)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-05-06 00:14:11 +08:00
Ftps
e3cb0485ce staticmethod (#232) 2023-05-06 00:13:27 +08:00
Sebastian Gabriel Savu
4abd0bd680 [maintenance] cleanup one click training and related (#219)
- remove unused imports
- remove unused gpus6 param from train1key fn
- improve readability and reusability for various pathing strings
2023-05-05 23:48:39 +08:00
github-actions[bot]
4027928a8e Format code (#227)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-05-05 14:18:51 +08:00
RVC-Boss
15519de5e5 Update i18n.py 2023-05-05 14:14:31 +08:00
github-actions[bot]
6726af00cf Format code (#221)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-05-05 13:13:41 +08:00
RVC-Boss
ccf6e6bbd2 batch_add_faiss_index
2023-05-05 00:26:52 +08:00
RVC-Boss
da34d75ec9 Add files via upload 2023-05-04 22:22:46 +08:00
nadare
b18f921a50 big_npy should be shuffled (#218) 2023-05-04 14:03:52 +00:00
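Why shuffle: the index training step subsamples rows of big_npy, and unshuffled rows arrive grouped by source file, which biases the sample. A minimal sketch:

    import numpy as np

    rng = np.random.default_rng(0)  # fixed seed only for reproducible illustration
    big_npy = np.arange(20, dtype=np.float32).reshape(10, 2)
    rng.shuffle(big_npy, axis=0)  # in-place row shuffle before index training
    print(big_npy[:3])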
RVC-Boss
c4a18107dc Update config.py 2023-05-02 12:31:05 +00:00
github-actions[bot]
951989117b Format code (#214)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-05-02 20:22:08 +08:00
RVC-Boss
71427575c4 Update infer-web.py 2023-05-02 12:17:09 +00:00
RVC-Boss
8370356d95 Update config.py 2023-05-02 12:07:03 +00:00
nadare
69ea94609b update training tips and faiss tips (#208) 2023-04-30 22:26:25 +08:00
Ftps
6d0ec4b00c Escaping when device does not match (#203) 2023-04-29 04:18:06 +00:00
Ftps
b12e33891c fix open (#200) 2023-04-29 12:11:13 +08:00
github-actions[bot]
4cb010bac6 🎨 Sync locale (#196)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-04-28 20:46:39 +08:00
github-actions[bot]
e9301d7a08 Format code (#195)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-04-28 20:45:21 +08:00
bycloud
bbe333552f added some more zh to en_US translation (#194)
* Add files via upload

* updated i18n() translation for en_US

expanded the dict for other languages

* added more i18n()

---------

Co-authored-by: 源文雨 <41315874+fumiama@users.noreply.github.com>
2023-04-28 20:44:46 +08:00
Ftps
f391ac1763 Config class (#192)
* update config.py

* class

* class

* fix
2023-04-28 20:43:02 +08:00
源文雨
b1134d9f64 add Korean (韓國語) 2023-04-28 15:54:12 +08:00
RVC-Boss
211a842e88 Update infer-web.py 2023-04-28 11:31:13 +08:00
github-actions[bot]
9068d5283e Format code (#188)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-04-28 11:25:20 +08:00
RVC-Boss
9976df7045 Update Changelog_CN.md 2023-04-27 16:30:45 +00:00
RVC-Boss
725db8734a Update README.md 2023-04-27 16:16:38 +00:00
RVC-Boss
dfb298da66 Update Changelog_CN.md 2023-04-27 16:09:12 +00:00
RVC-Boss
af208d5210 Add files via upload 2023-04-27 23:34:03 +08:00
EntropyRiser
a149107c5a Add full support of all samplerate. (#182)
Co-authored-by: EntropyRiser <1832783120@qq.com>
2023-04-27 18:52:01 +08:00
RVC-Boss
80b54499eb Update vc_infer_pipeline.py 2023-04-27 16:11:45 +08:00
M.Hosoi
7b8a0bb6fc Maximum value of save_every_epoch changed to 50 => 200 (#178) 2023-04-27 10:59:49 +08:00
RVC-Boss
a6cb4d3625 support 16xx GPU and 4G GPU inference
2023-04-27 01:40:04 +08:00
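GTX 16-series cards are known to produce broken fp16 output here, and 4G cards need fp32 plus more conservative settings, so support amounts to detecting them and disabling half precision. A sketch in the spirit of the project's config.py; the exact name list and threshold are assumptions:

    import torch

    is_half = True
    if torch.cuda.is_available():
        gpu_name = torch.cuda.get_device_name(0)
        vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
        if ("16" in gpu_name and "V100" not in gpu_name) or "P40" in gpu_name or vram_gb <= 4:
            is_half = False  # fall back to fp32 on these cards
    print("half precision:", is_half)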
RVC-Boss
2ac8d553ab Update infer-web.py 2023-04-26 15:39:19 +00:00
RVC-Boss
dc0c8756b5 Total_fea not needed now. Better and faster retrieval performance.
2023-04-26 19:17:48 +08:00
RVC-Boss
9be8048302 Total_fea not needed now. Better and faster retrieval performance.
2023-04-26 19:13:54 +08:00
RVC-Boss
a21f7ec11f total_fea not needed now
2023-04-26 19:12:47 +08:00
JiHo Han
71e2733719 docs(README.ko): add Korean Translation of README.md (#157)
* docs(README.ko): add Korean Translation of README.md

* docs(Faiss): add Korean tips for Faiss

* docs(README): add hyperlinks for Korean translation on all README

* docs(training_tips): add Korean translation for training tips

---------

Co-authored-by: Ftps <63702646+Tps-F@users.noreply.github.com>
2023-04-25 21:55:48 +08:00
github-actions[bot]
964a85fe15 🎨 Sync locale (#163)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-04-25 10:53:56 +08:00
RVC-Boss
f2abfd5ad2 Update pyproject.toml 2023-04-25 10:51:38 +08:00
Styl
96b6d28718 Web UI to Spanish (#162) 2023-04-25 02:51:20 +00:00
Ftps
52661df363 fix json (#143) 2023-04-24 20:43:45 +08:00
github-actions[bot]
b4c653142d Format code (#142)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-04-24 20:35:56 +08:00
源文雨
376bd31c19 i18n: improve the English translation, by @Estil1 (#141)
* fix: incomplete i18n rename

* Language 100% fixed 

I can create a Spanish version too

* 🎨 Sync locale

* Update en_US.json

---------

Co-authored-by: Styl <87322309+Estil1@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-04-24 20:31:46 +08:00
nadare
fdf12a4add Faiss Tutorial for Developers (#97)
* add faiss tutorial (WIP)

* add embedding tips
2023-04-24 20:18:34 +08:00
源文雨
f6ef9bca0c fix #115: hide allowed exceptions 2023-04-24 20:17:49 +08:00
Ναρουσέ·μ·γιουμεμί·Χινακάννα
9bac0ffaa7 Onnx export extension and WebUI support (#140)
* Add files via upload

* Add files via upload

* Add files via upload

* Add files via upload
2023-04-24 19:55:05 +08:00
tarepan
fb1d4b1882 Fix deprecated positional arguments in mel (#133) 2023-04-24 18:35:09 +08:00
tarepan
329d739e70 Refactor mel module (#132)
* Refactor wave-to-mel

* Add docstring on mel

* Refactor mel module import and variable names
2023-04-24 11:45:20 +08:00
RVC-Boss
a02ef401ad Update trainset_preprocess_pipeline_print.py 2023-04-22 14:39:17 +00:00
RVC-Boss
4fdb858a02 Add files via upload 2023-04-22 21:41:50 +08:00
RVC-Boss
bb535a4f71 Update en_US.json 2023-04-22 12:24:12 +00:00
RVC-Boss
44de5de840 Update i18n.py 2023-04-22 12:22:16 +00:00
RVC-Boss
978539ad0e Update extract_f0_print.py 2023-04-22 12:17:32 +00:00
tarepan
5d5ab5465f Refactor GPU cache during training (#108) 2023-04-22 12:05:00 +00:00
autumnmotor
297d92bf5d some change precision audio processing (#94)
* some change precision audio processing

* fix clipping problem in resample

resample sometimes causes signal clipping, not just librosa.resample

* fix error
2023-04-22 11:39:47 +00:00
RVC-Boss
c423f77a16 Add support for models without f0
2023-04-22 11:38:00 +00:00
EntropyRiser
2f51e932bf Change f0 predictor to harvest. (#123)
Co-authored-by: EntropyRiser <1832783120@qq.com>
2023-04-22 11:32:49 +00:00
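For reference, switching the f0 predictor to harvest is a small pyworld call change; a minimal sketch with typical pitch bounds (the repo's exact settings are not shown here):

    import numpy as np
    import pyworld

    sr = 16000
    x = np.random.randn(sr).astype(np.float64)  # stand-in for 1 s of audio
    f0, t = pyworld.harvest(x, sr, f0_floor=50.0, f0_ceil=1100.0, frame_period=10.0)
    f0 = pyworld.stonemask(x, f0, t, sr)  # optional refinement pass
    print(f0.shape)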
Rice Cake
334da847d2 Update README.en.md (#121)
* Update README.en.md

* Update README.en.md
2023-04-22 14:06:18 +08:00
nadare
9b513a2375 Training tutorial (#109)
* add training tips in ja

* add english edition(using google translate)
2023-04-22 14:04:56 +08:00
Ftps
8acc0f2b71 fix port (#118) 2023-04-22 00:36:10 +08:00
Ftps
ebc0b227c1 Update i18n.py (#117) 2023-04-22 00:35:37 +08:00
Yugo Ogura
c941512427 chore: Just fix typo in README.ja.md (#114) 2023-04-22 00:33:11 +08:00
Rice Cake
a2dadfc931 Update README.en.md (#113) 2023-04-21 16:30:08 +08:00
Ftps
8bf1e0e026 Update faiss description (#95) 2023-04-19 13:45:04 +08:00
Kazuki
aca68fad09 improved Japanese translation. (#101) 2023-04-19 11:02:02 +08:00
Ftps
58397a92dc Automatically change faiss version (#92) 2023-04-18 14:03:30 +08:00
github-actions[bot]
0ca936c226 🎨 同步 locale (#90)
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-04-17 15:26:59 +00:00
Ftps
294b751e34 some change translation (#91) 2023-04-17 22:37:00 +08:00
github-actions[bot]
1e71efb265 Format code (#89)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2023-04-17 14:09:03 +00:00
源文雨
35379217e8 Improve the changelog format (#86)
* Improve the changelog format

* Apply Code Formatter Change

---------

Co-authored-by: fumiama <fumiama@users.noreply.github.com>
2023-04-17 12:49:54 +00:00
EntropyRiser
88a43e14d1 Add non-search inference support. (#82)
Co-authored-by: 源文雨 <41315874+fumiama@users.noreply.github.com>
2023-04-17 12:49:42 +00:00
源文雨
b0f8a4c7d1 fix: json format (#84)
* Update extract_locale.py

* Apply Code Formatter Change

* Update locale_diff.py

* Apply Code Formatter Change

---------

Co-authored-by: fumiama <fumiama@users.noreply.github.com>
2023-04-17 12:49:29 +00:00
Ftps
5ab6713bb3 fix permission (#87) 2023-04-17 16:15:59 +08:00
Ftps
a4c64b0253 Autoformat when pushed directly (#79)
* Create push_format.yml

* remove unused
2023-04-17 11:09:05 +08:00
Ftps
bfe974ea9f Fix action when PR send (#83) 2023-04-17 10:49:57 +08:00
liujing04
0719b4aa5e Add files via upload 2023-04-16 18:56:20 +08:00
155 changed files with 18128 additions and 9387 deletions

.github/workflows/docker.yml (vendored, new file, 70 lines)

@@ -0,0 +1,70 @@
name: Build And Push Docker Image
on:
workflow_dispatch:
push:
# Sequence of patterns matched against refs/tags
tags:
- 'v*' # Push events to matching v*, i.e. v1.0, v20.15.10
jobs:
build:
runs-on: ubuntu-latest
permissions:
packages: write
contents: read
steps:
- uses: actions/checkout@v3
- name: Set time zone
uses: szenius/set-timezone@v1.0
with:
timezoneLinux: "Asia/Shanghai"
timezoneMacos: "Asia/Shanghai"
timezoneWindows: "China Standard Time"
# # If you have a Docker Hub account, configure the two secrets below in GitHub's secrets, uncomment these commented lines, and add a ${{ github.repository }} line under the meta step's images
# - name: Login to DockerHub
# uses: docker/login-action@v1
# with:
# username: ${{ secrets.DOCKERHUB_USERNAME }}
# password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v4
with:
images: |
ghcr.io/${{ github.repository }}
# generate Docker tags based on the following events/attributes
# nightly, master, pr-2, 1.2.3, 1.2, 1
tags: |
type=schedule,pattern=nightly
type=edge
type=ref,event=branch
type=ref,event=pr
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Build and push
id: docker_build
uses: docker/build-push-action@v4
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}

.github/workflows/genlocale.yml

@@ -4,7 +4,7 @@ on:
branches:
- main
jobs:
golangci:
genlocale:
name: genlocale
runs-on: ubuntu-latest
steps:
@@ -14,14 +14,14 @@ jobs:
- name: Run locale generation
run: |
python3 extract_locale.py
cd i18n && python3 locale_diff.py
cd lib/i18n && python3 locale_diff.py
- name: Commit back
if: ${{ !github.head_ref }}
continue-on-error: true
run: |
git config --local user.name 'github-actions[bot]'
git config --local user.email '41898282+github-actions[bot]@users.noreply.github.com'
git config --local user.email 'github-actions[bot]@users.noreply.github.com'
git add --all
git commit -m "🎨 同步 locale"

.github/workflows/pull_format.yml

@@ -2,18 +2,28 @@ name: pull format
on: [pull_request]
permissions:
contents: write
jobs:
pull_format:
permissions:
actions: write
checks: write
contents: write
runs-on: ubuntu-latest
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: ["3.10"]
os: [ubuntu-latest]
fail-fast: false
continue-on-error: true
steps:
- uses: actions/checkout@v3
- name: checkout
continue-on-error: true
uses: actions/checkout@v3
with:
ref: ${{ github.head_ref }}
fetch-depth: 0
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
@@ -21,7 +31,7 @@ jobs:
python-version: ${{ matrix.python-version }}
- name: Install Black
run: pip install black
run: pip install "black[jupyter]"
- name: Run Black
# run: black $(git ls-files '*.py')

.github/workflows/push_format.yml (vendored, new file, 56 lines)

@@ -0,0 +1,56 @@
name: push format
on:
push:
branches:
- main
permissions:
contents: write
pull-requests: write
jobs:
push_format:
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: ["3.10"]
os: [ubuntu-latest]
fail-fast: false
steps:
- uses: actions/checkout@v3
with:
ref: ${{github.ref_name}}
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Install Black
run: pip install "black[jupyter]"
- name: Run Black
# run: black $(git ls-files '*.py')
run: black .
- name: Commit Back
continue-on-error: true
id: commitback
run: |
git config --local user.email "github-actions[bot]@users.noreply.github.com"
git config --local user.name "github-actions[bot]"
git add --all
git commit -m "Format code"
- name: Create Pull Request
if: steps.commitback.outcome == 'success'
continue-on-error: true
uses: peter-evans/create-pull-request@v5
with:
delete-branch: true
body: Apply Code Formatter Change
title: Apply Code Formatter Change
commit-message: Automatic code format

.github/workflows/unitest.yml

@@ -33,4 +33,4 @@ jobs:
python trainset_preprocess_pipeline_print.py logs/mute/0_gt_wavs 48000 8 logs/mi-test True
touch logs/mi-test/extract_f0_feature.log
python extract_f0_print.py logs/mi-test $(nproc) pm
python extract_feature_print.py cpu 1 0 0 logs/mi-test
python extract_feature_print.py cpu 1 0 0 logs/mi-test v1

.gitignore (vendored, 1 line added)

@@ -4,3 +4,4 @@ __pycache__
*.pyd
hubert_base.pt
/logs
.venv

Changelog_CN.md (deleted)

@@ -1,42 +0,0 @@
20230409
1. Fixed training parameters to improve average GPU utilization: A100 from 25% up to around 90%, V100 from 50% to around 90%, 2060S from 60% to 85%, P40 from 25% to 95%; training speed is significantly improved
2. Parameter change: total batch_size is now batch_size per GPU
3. total_epoch cap raised from 100 to 1000; default raised from 10 to 20
4. Fixed a ckpt-extraction bug that misdetected whether a model uses pitch, causing abnormal inference
5. Fixed distributed training saving a ckpt once per rank
6. Added NaN feature filtering during feature extraction
7. Fixed silent input/output producing random consonants or noise (old models need a rebuilt training set and retraining)
20230416 update
1. Added a local realtime voice-changing mini GUI; launch it by double-clicking go-realtime-gui.bat
2. Training and inference now filter out frequencies below 50Hz
3. Lowered the minimum pitch for pyworld pitch extraction in training and inference from the default 80 to 50, so male bass between 50-80Hz is no longer muted
4. The WebUI now switches language based on the system locale (currently supported: en_US, ja_JP, zh_CN, zh_HK, zh_SG, zh_TW; falls back to en_US when unsupported)
5. Fixed recognition of some GPUs (e.g. V100-16G and P4 failing to be recognized)
Planned:
1. Collect breathing wavs into the training set to fix breaths turning into electronic noise
2. Investigate better default faiss index configurations; plan to bundle the index into weights/xxx.pth and drop the feature/index selection from the inference UI
3. Automatically pick the optimal config (batch size, training-set chunking, inference audio length, fp16 training) from VRAM size and GPU architecture; eventually every GPU with >=4G VRAM and a >=Pascal architecture will be able to train or infer, while GPUs with <4G VRAM will not be supported
4. We are training a base model with a singing dataset added; it will be released later
5. Add an "enable median filtering" option to inference pitch extraction
6. Add options: extract a small model at every epoch save; set a default test audio so every saved small model runs inference on it after saving, letting users listen and pick the best intermediate epoch

Dockerfile (new file, 13 lines)

@@ -0,0 +1,13 @@
# syntax=docker/dockerfile:1
FROM python:3.10-bullseye
EXPOSE 7865
WORKDIR /app
COPY . .
RUN pip3 install -r requirements.txt
CMD ["python3", "infer-web.py"]

LICENSE

@@ -1,6 +1,7 @@
MIT License
Copyright (c) 2023 liujing04
Copyright (c) 2023 源文雨
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@@ -18,4 +19,4 @@ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
SOFTWARE.

MDXNet.py (new file, 285 lines)

@@ -0,0 +1,285 @@
import soundfile as sf
import torch, pdb, os, warnings, librosa
import numpy as np
from tqdm import tqdm
dim_c = 4  # stereo (2 channels) x (real, imag) STFT components stacked on the channel axis
class Conv_TDF_net_trim:
def __init__(
self, device, model_name, target_name, L, dim_f, dim_t, n_fft, hop=1024
):
super(Conv_TDF_net_trim, self).__init__()
self.dim_f = dim_f
self.dim_t = 2**dim_t
self.n_fft = n_fft
self.hop = hop
self.n_bins = self.n_fft // 2 + 1
self.chunk_size = hop * (self.dim_t - 1)
self.window = torch.hann_window(window_length=self.n_fft, periodic=True).to(
device
)
self.target_name = target_name
self.blender = "blender" in model_name
out_c = dim_c * 4 if target_name == "*" else dim_c
self.freq_pad = torch.zeros(
[1, out_c, self.n_bins - self.dim_f, self.dim_t]
).to(device)
self.n = L // 2
def stft(self, x):
x = x.reshape([-1, self.chunk_size])
x = torch.stft(
x,
n_fft=self.n_fft,
hop_length=self.hop,
window=self.window,
center=True,
return_complex=True,
)
x = torch.view_as_real(x)
x = x.permute([0, 3, 1, 2])
x = x.reshape([-1, 2, 2, self.n_bins, self.dim_t]).reshape(
[-1, dim_c, self.n_bins, self.dim_t]
)
return x[:, :, : self.dim_f]
def istft(self, x, freq_pad=None):
freq_pad = (
self.freq_pad.repeat([x.shape[0], 1, 1, 1])
if freq_pad is None
else freq_pad
)
x = torch.cat([x, freq_pad], -2)
c = 4 * 2 if self.target_name == "*" else 2
x = x.reshape([-1, c, 2, self.n_bins, self.dim_t]).reshape(
[-1, 2, self.n_bins, self.dim_t]
)
x = x.permute([0, 2, 3, 1])
x = x.contiguous()
x = torch.view_as_complex(x)
x = torch.istft(
x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True
)
return x.reshape([-1, c, self.chunk_size])
def get_models(device, dim_f, dim_t, n_fft):
return Conv_TDF_net_trim(
device=device,
model_name="Conv-TDF",
target_name="vocals",
L=11,
dim_f=dim_f,
dim_t=dim_t,
n_fft=n_fft,
)
warnings.filterwarnings("ignore")
import sys
now_dir = os.getcwd()
sys.path.append(now_dir)
from config import Config
cpu = torch.device("cpu")
device = Config().device
# if torch.cuda.is_available():
# device = torch.device("cuda:0")
# elif torch.backends.mps.is_available():
# device = torch.device("mps")
# else:
# device = torch.device("cpu")
class Predictor:
def __init__(self, args):
self.args = args
self.model_ = get_models(
device=cpu, dim_f=args.dim_f, dim_t=args.dim_t, n_fft=args.n_fft
)
import onnxruntime as ort
print(ort.get_available_providers())
self.model = ort.InferenceSession(
os.path.join(args.onnx, self.model_.target_name + ".onnx"),
providers=[
"CUDAExecutionProvider",
"DmlExecutionProvider",
"CPUExecutionProvider",
],
)
print("onnx load done")
def demix(self, mix):
samples = mix.shape[-1]
margin = self.args.margin
chunk_size = self.args.chunks * 44100
assert not margin == 0, "margin cannot be zero!"
if margin > chunk_size:
margin = chunk_size
segmented_mix = {}
if self.args.chunks == 0 or samples < chunk_size:
chunk_size = samples
counter = -1
for skip in range(0, samples, chunk_size):
counter += 1
s_margin = 0 if counter == 0 else margin
end = min(skip + chunk_size + margin, samples)
start = skip - s_margin
segmented_mix[skip] = mix[:, start:end].copy()
if end == samples:
break
sources = self.demix_base(segmented_mix, margin_size=margin)
"""
mix:(2,big_sample)
segmented_mix:offset->(2,small_sample)
sources:(1,2,big_sample)
"""
return sources
def demix_base(self, mixes, margin_size):
chunked_sources = []
progress_bar = tqdm(total=len(mixes))
progress_bar.set_description("Processing")
for mix in mixes:
cmix = mixes[mix]
sources = []
n_sample = cmix.shape[1]
model = self.model_
trim = model.n_fft // 2
gen_size = model.chunk_size - 2 * trim
pad = gen_size - n_sample % gen_size
mix_p = np.concatenate(
(np.zeros((2, trim)), cmix, np.zeros((2, pad)), np.zeros((2, trim))), 1
)
mix_waves = []
i = 0
while i < n_sample + pad:
waves = np.array(mix_p[:, i : i + model.chunk_size])
mix_waves.append(waves)
i += gen_size
mix_waves = torch.tensor(mix_waves, dtype=torch.float32).to(cpu)
with torch.no_grad():
_ort = self.model
spek = model.stft(mix_waves)
if self.args.denoise:
spec_pred = (
-_ort.run(None, {"input": -spek.cpu().numpy()})[0] * 0.5
+ _ort.run(None, {"input": spek.cpu().numpy()})[0] * 0.5
)
tar_waves = model.istft(torch.tensor(spec_pred))
else:
tar_waves = model.istft(
torch.tensor(_ort.run(None, {"input": spek.cpu().numpy()})[0])
)
tar_signal = (
tar_waves[:, :, trim:-trim]
.transpose(0, 1)
.reshape(2, -1)
.numpy()[:, :-pad]
)
start = 0 if mix == 0 else margin_size
end = None if mix == list(mixes.keys())[::-1][0] else -margin_size
if margin_size == 0:
end = None
sources.append(tar_signal[:, start:end])
progress_bar.update(1)
chunked_sources.append(sources)
_sources = np.concatenate(chunked_sources, axis=-1)
# del self.model
progress_bar.close()
return _sources
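# Window-tiling sketch with hypothetical numbers: chunk_size=8, trim=2,
# gen_size=4, n_sample=10 -> pad=2; mix_p has length 2+10+2+2=16 with windows
# starting at 0, 4, 8. After cutting `trim` from both ends, each window
# contributes exactly gen_size fresh samples, and [:, :-pad] drops the
# trailing zero padding.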
def prediction(self, m, vocal_root, others_root, format):
os.makedirs(vocal_root, exist_ok=True)
os.makedirs(others_root, exist_ok=True)
basename = os.path.basename(m)
mix, rate = librosa.load(m, mono=False, sr=44100)
if mix.ndim == 1:
mix = np.asfortranarray([mix, mix])
mix = mix.T
sources = self.demix(mix.T)
opt = sources[0].T
if format in ["wav", "flac"]:
sf.write(
"%s/%s_main_vocal.%s" % (vocal_root, basename, format), mix - opt, rate
)
sf.write("%s/%s_others.%s" % (others_root, basename, format), opt, rate)
else:
path_vocal = "%s/%s_main_vocal.wav" % (vocal_root, basename)
path_other = "%s/%s_others.wav" % (others_root, basename)
sf.write(path_vocal, mix - opt, rate)
sf.write(path_other, opt, rate)
if os.path.exists(path_vocal):
# ffmpeg ignores options placed after the output file, so
# -q:a 2 -y must precede the output path to take effect
os.system(
"ffmpeg -i %s -vn -q:a 2 -y %s"
% (path_vocal, path_vocal[:-4] + ".%s" % format)
)
if os.path.exists(path_other):
os.system(
"ffmpeg -i %s -vn -q:a 2 -y %s"
% (path_other, path_other[:-4] + ".%s" % format)
)
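# Note: os.system with %-formatted paths breaks on spaces and shell
# metacharacters. A safer sketch (not the project's code) would be:
#
#   import subprocess
#   subprocess.run(
#       ["ffmpeg", "-i", path_vocal, "-vn", "-q:a", "2", "-y",
#        path_vocal[:-4] + "." + format],
#       check=True,
#   )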
class MDXNetDereverb:
def __init__(self, chunks):
self.onnx = "uvr5_weights/onnx_dereverb_By_FoxJoy"
self.shifts = 10  # predict with randomised equivariant stabilisation
self.mixing = "min_mag" # ['default','min_mag','max_mag']
self.chunks = chunks
self.margin = 44100
self.dim_t = 9
self.dim_f = 3072
self.n_fft = 6144
self.denoise = True
self.pred = Predictor(self)
def _path_audio_(self, input, vocal_root, others_root, format):
self.pred.prediction(input, vocal_root, others_root, format)
if __name__ == "__main__":
dereverb = MDXNetDereverb(15)
from time import time as ttime
t0 = ttime()
dereverb._path_audio_(
"雪雪伴奏对消HP5.wav",
"vocal",
"others",
)
t1 = ttime()
print(t1 - t0)
"""
runtime\python.exe MDXNet.py
6G:
15/9:0.8G->6.8G
14:0.8G->6.5G
25:炸
half15:0.7G->6.6G,22.69s
fp32-15:0.7G->6.6G,20.85s
"""


@@ -1,50 +1,45 @@
MIT License
Copyright (c) 2023 liujing04
Copyright (c) 2023 源文雨
本软件及其相关代码以MIT协议开源作者不对软件具备任何控制力使用软件者、传播软件导出的声音者自负全责
如不认可该条款,则不能使用或引用软件包内任何代码和文件。
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
特此授予任何获得本软件和相关文档文件(以下简称“软件”)副本的人免费使用、复制、修改、合并、出版、分发、再授权和/或销售本软件的权利,以及授予本软件所提供的人使用本软件的权利,但须符合以下条件:
上述版权声明和本许可声明应包含在软件的所有副本或实质部分中。
软件是“按原样”提供的,没有任何明示或暗示的保证,包括但不限于适销性、适用于特定目的和不侵权的保证。在任何情况下,作者或版权持有人均不承担因软件或软件的使用或其他交易而产生、产生或与之相关的任何索赔、损害赔偿或其他责任,无论是在合同诉讼、侵权诉讼还是其他诉讼中。
相关引用库协议如下:
#################
ContentVec
https://github.com/auspicious3000/contentvec/blob/main/LICENSE
MIT License
#################
VITS
https://github.com/jaywalnut310/vits/blob/main/LICENSE
MIT License
#################
HIFIGAN
https://github.com/jik876/hifi-gan/blob/master/LICENSE
MIT License
#################
gradio
https://github.com/gradio-app/gradio/blob/main/LICENSE
Apache License 2.0
#################
ffmpeg
https://github.com/FFmpeg/FFmpeg/blob/master/COPYING.LGPLv3
https://github.com/BtbN/FFmpeg-Builds/releases/download/autobuild-2021-02-28-12-32/ffmpeg-n4.3.2-160-gfbb9368226-win64-lgpl-4.3.zip
LGPLv3 License
MIT License
#################
ultimatevocalremovergui
https://github.com/Anjok07/ultimatevocalremovergui/blob/master/LICENSE
https://github.com/yang123qwe/vocal_separation_by_uvr5
MIT License
#################
audio-slicer
https://github.com/openvpi/audio-slicer/blob/main/LICENSE
MIT License
本软件及其相关代码以MIT协议开源作者不对软件具备任何控制力使用软件者、传播软件导出的声音者自负全责。
如不认可该条款,则不能使用或引用软件包内任何代码和文件。
特此授予任何获得本软件和相关文档文件(以下简称“软件”)副本的人免费使用、复制、修改、合并、出版、分发、再授权和/或销售本软件的权利,以及授予本软件所提供的人使用本软件的权利,但须符合以下条件:
上述版权声明和本许可声明应包含在软件的所有副本或实质部分中。
软件是“按原样”提供的,没有任何明示或暗示的保证,包括但不限于适销性、适用于特定目的和不侵权的保证。在任何情况下,作者或版权持有人均不承担因软件或软件的使用或其他交易而产生、产生或与之相关的任何索赔、损害赔偿或其他责任,无论是在合同诉讼、侵权诉讼还是其他诉讼中
The licenses for related libraries are as follows.
相关引用库协议如下:
ContentVec
https://github.com/auspicious3000/contentvec/blob/main/LICENSE
MIT License
VITS
https://github.com/jaywalnut310/vits/blob/main/LICENSE
MIT License
HIFIGAN
https://github.com/jik876/hifi-gan/blob/master/LICENSE
MIT License
gradio
https://github.com/gradio-app/gradio/blob/main/LICENSE
Apache License 2.0
ffmpeg
https://github.com/FFmpeg/FFmpeg/blob/master/COPYING.LGPLv3
https://github.com/BtbN/FFmpeg-Builds/releases/download/autobuild-2021-02-28-12-32/ffmpeg-n4.3.2-160-gfbb9368226-win64-lgpl-4.3.zip
LGPLv3 License
MIT License
ultimatevocalremovergui
https://github.com/Anjok07/ultimatevocalremovergui/blob/master/LICENSE
https://github.com/yang123qwe/vocal_separation_by_uvr5
MIT License
audio-slicer
https://github.com/openvpi/audio-slicer/blob/main/LICENSE
MIT License
PySimpleGUI
https://github.com/PySimpleGUI/PySimpleGUI/blob/master/license.txt
LGPLv3 License

README.md

@@ -1,105 +1,144 @@
<div align="center">
<h1>Retrieval-based-Voice-Conversion-WebUI</h1>
An easy-to-use voice conversion (voice changer) framework based on VITS<br><br>
[![madewithlove](https://forthebadge.com/images/badges/built-with-love.svg)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI)
<img src="https://counter.seku.su/cmoe?name=rvc&theme=r34" /><br>
[![Open In Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&color=525252)](https://colab.research.google.com/github/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Retrieval_based_Voice_Conversion_WebUI.ipynb)
[![Licence](https://img.shields.io/github/license/liujing04/Retrieval-based-Voice-Conversion-WebUI?style=for-the-badge)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/%E4%BD%BF%E7%94%A8%E9%9C%80%E9%81%B5%E5%AE%88%E7%9A%84%E5%8D%8F%E8%AE%AE-LICENSE.txt)
[![Huggingface](https://img.shields.io/badge/🤗%20-Spaces-yellow.svg?style=for-the-badge)](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/)
[![Discord](https://img.shields.io/badge/RVC%20Developers-Discord-7289DA?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/HcsmBBGyVk)
</div>
------
[**Changelog**](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Changelog_CN.md)
[**English**](./docs/README.en.md) | [**中文简体**](./README.md) | [**日本語**](./docs/README.ja.md)
> Check out our [demo video](https://www.bilibili.com/video/BV1pm4y1z7Gm/) here!
> Realtime voice conversion with RVC: [w-okada/voice-changer](https://github.com/w-okada/voice-changer)
> The base model is trained on nearly 50 hours of the open-source, high-quality VCTK dataset, so there are no copyright concerns; feel free to use it.
> Base models trained on high-quality, licensed singing datasets will be added later.
## Introduction
This repository has the following features:
+ Top-1 retrieval replaces input source features with training-set features to prevent timbre leakage
+ Fast training even on relatively weak GPUs
+ Good results even with little training data (collecting at least 10 minutes of low-noise speech is recommended)
+ Timbre can be altered by model fusion (via ckpt-merge in the ckpt processing tab)
+ A simple, easy-to-use web interface
+ UVR5 models can be invoked to quickly separate vocals and accompaniment
## Environment setup
Using poetry to set up the environment is recommended.
The following commands must be run with a Python version greater than 3.8:
```bash
# Install PyTorch and its core dependencies; skip if already installed
# Reference: https://pytorch.org/get-started/locally/
pip install torch torchvision torchaudio
# On Windows with an Nvidia Ampere GPU (RTX 30xx), per the experience in #21, specify the CUDA version matching PyTorch
#pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
# Install the Poetry dependency manager; skip if already installed
# Reference: https://python-poetry.org/docs/#installation
curl -sSL https://install.python-poetry.org | python3 -
# Install dependencies via poetry
poetry install
```
You can also install the dependencies with pip:
**Note**: `faiss 1.7.2` raises a segmentation fault on `MacOS`; change the corresponding entry in `requirements.txt` to `faiss-cpu==1.7.0`
```bash
pip install -r requirements.txt
```
## Preparing other pre-models
RVC requires some other pre-models for inference and training.
You can download them from our [Hugging Face space](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/).
Below is a checklist of all the pre-models and other files RVC needs:
```bash
hubert_base.pt
./pretrained
./uvr5_weights
# If you are on Windows you may need this file; skip if ffmpeg is already installed
./ffmpeg
```
Then launch the WebUI with:
```bash
python infer-web.py
```
If you are on Windows, you can simply download and extract `RVC-beta.7z`, then run `go-web.bat` to launch the WebUI.
The repository also includes a beginner's tutorial, `小白简易教程.doc`, for reference.
## Credits
+ [ContentVec](https://github.com/auspicious3000/contentvec/)
+ [VITS](https://github.com/jaywalnut310/vits)
+ [HIFIGAN](https://github.com/jik876/hifi-gan)
+ [Gradio](https://github.com/gradio-app/gradio)
+ [FFmpeg](https://github.com/FFmpeg/FFmpeg)
+ [Ultimate Vocal Remover](https://github.com/Anjok07/ultimatevocalremovergui)
+ [audio-slicer](https://github.com/openvpi/audio-slicer)
## Thanks to all contributors for their efforts
<a href="https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/graphs/contributors" target="_blank">
<img src="https://contrib.rocks/image?repo=liujing04/Retrieval-based-Voice-Conversion-WebUI" />
</a>
<div align="center">
<h1>Retrieval-based-Voice-Conversion-WebUI</h1>
An easy-to-use voice conversion (voice changer) framework based on VITS<br><br>
[![madewithlove](https://img.shields.io/badge/made_with-%E2%9D%A4-red?style=for-the-badge&labelColor=orange)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)
<img src="https://counter.seku.su/cmoe?name=rvc&theme=r34" /><br>
[![Open In Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&color=525252)](https://colab.research.google.com/github/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/Retrieval_based_Voice_Conversion_WebUI.ipynb)
[![Licence](https://img.shields.io/badge/LICENSE-MIT-green.svg?style=for-the-badge)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/LICENSE)
[![Huggingface](https://img.shields.io/badge/🤗%20-Spaces-yellow.svg?style=for-the-badge)](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/)
[![Discord](https://img.shields.io/badge/RVC%20Developers-Discord-7289DA?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/HcsmBBGyVk)
[**Changelog**](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/docs/Changelog_CN.md) | [**FAQ**](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98%E8%A7%A3%E7%AD%94) | [**AutoDL: train an AI singer for five cents**](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/wiki/Autodl%E8%AE%AD%E7%BB%83RVC%C2%B7AI%E6%AD%8C%E6%89%8B%E6%95%99%E7%A8%8B) | [**Comparative experiment records**](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/wiki/%E5%AF%B9%E7%85%A7%E5%AE%9E%E9%AA%8C%C2%B7%E5%AE%9E%E9%AA%8C%E8%AE%B0%E5%BD%95) | [**Online demo**](https://huggingface.co/spaces/Ricecake123/RVC-demo)
</div>
------
[**English**](./docs/README.en.md) | [**中文简体**](./README.md) | [**日本語**](./docs/README.ja.md) | [**한국어**](./docs/README.ko.md) ([**韓國語**](./docs/README.ko.han.md)) | [**Türkçe**](./docs/README.tr.md)
Check out our [demo video](https://www.bilibili.com/video/BV1pm4y1z7Gm/) here!
> Realtime voice conversion with RVC: [w-okada/voice-changer](https://github.com/w-okada/voice-changer)
> The base model is trained on nearly 50 hours of the open-source, high-quality VCTK dataset, so there are no copyright concerns; feel free to use it.
> Look forward to the RVCv3 base model: larger parameters, more data, better results, roughly the same inference speed, and less training data required.
## Introduction
This repository has the following features:
+ Top-1 retrieval replaces input source features with training-set features to prevent timbre leakage
+ Fast training even on relatively weak GPUs
+ Good results even with little training data (collecting at least 10 minutes of low-noise speech is recommended)
+ Timbre can be altered by model fusion (via ckpt-merge in the ckpt processing tab)
+ A simple, easy-to-use web interface
+ UVR5 models can be invoked to quickly separate vocals and accompaniment
+ Uses the state-of-the-art [vocal pitch extraction algorithm InterSpeech2023-RMVPE](#credits) to eradicate muted-sound problems, with clearly better results while being faster and lighter than crepe_full
+ AMD/Intel GPU acceleration support
## Environment setup
The following commands must be run with a Python version greater than 3.8.
(Windows/Linux)
First install the main dependencies via pip:
```bash
# Install PyTorch and its core dependencies; skip if already installed
# Reference: https://pytorch.org/get-started/locally/
pip install torch torchvision torchaudio
# On Windows with an Nvidia Ampere GPU (RTX 30xx), per the experience in #21, specify the CUDA version matching PyTorch
#pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
```
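After installing, you can optionally confirm that PyTorch can see your GPU (a quick sanity check, not required by the project):
```bash
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```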
You can use poetry to install the dependencies:
```bash
# Install the Poetry dependency manager; skip if already installed
# Reference: https://python-poetry.org/docs/#installation
curl -sSL https://install.python-poetry.org | python3 -
# Install dependencies via poetry
poetry install
```
You can also install the dependencies with pip:
```bash
# Nvidia GPUs
pip install -r requirements.txt
# AMD/Intel GPUs
pip install -r requirements-dml.txt
```
------
Mac users can install the dependencies via `run.sh`:
```bash
sh ./run.sh
```
## Preparing other pre-models
RVC requires some other pre-models for inference and training.
You can download them from our [Hugging Face space](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/).
Below is a checklist of all the pre-models and other files RVC needs:
```bash
hubert_base.pt
./pretrained
./uvr5_weights
# To test v2 models, additionally download:
./pretrained_v2
# If you are on Windows you may need these files (skip if ffmpeg and ffprobe are already installed); ubuntu/debian users can install the two libraries via apt install ffmpeg, and Mac users via brew install ffmpeg (brew must be installed first)
./ffmpeg
https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe
./ffprobe
https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe
# To use the latest RMVPE vocal pitch extraction algorithm, download the pitch extraction model weights and place them in the RVC root directory
https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/rmvpe.pt
# AMD/Intel GPU users who need the DML environment should download:
https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/rmvpe.onnx
```
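As a sketch, these files can be fetched from the Hugging Face space with any downloader, e.g. with aria2c (assuming it is installed; adjust target paths to your layout):
```bash
aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt -o hubert_base.pt
aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/rmvpe.pt -o rmvpe.pt
```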
Then launch the WebUI with:
```bash
python infer-web.py
```
If you are on Windows or macOS, you can simply download and extract `RVC-beta.7z`; on Windows run `go-web.bat`, on macOS run `sh ./run.sh` to launch the WebUI.
The repository also includes a beginner's tutorial, `小白简易教程.doc`, for reference.
## Credits
+ [ContentVec](https://github.com/auspicious3000/contentvec/)
+ [VITS](https://github.com/jaywalnut310/vits)
+ [HIFIGAN](https://github.com/jik876/hifi-gan)
+ [Gradio](https://github.com/gradio-app/gradio)
+ [FFmpeg](https://github.com/FFmpeg/FFmpeg)
+ [Ultimate Vocal Remover](https://github.com/Anjok07/ultimatevocalremovergui)
+ [audio-slicer](https://github.com/openvpi/audio-slicer)
+ [Vocal pitch extraction: RMVPE](https://github.com/Dream-High/RMVPE)
+ The pretrained model is trained and tested by [yxlllc](https://github.com/yxlllc/RMVPE) and [RVC-Boss](https://github.com/RVC-Boss).
## Thanks to all contributors for their efforts
<a href="https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/graphs/contributors" target="_blank">
<img src="https://contrib.rocks/image?repo=RVC-Project/Retrieval-based-Voice-Conversion-WebUI" />
</a>


@@ -1,381 +1,384 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"private_outputs": true,
"provenance": []
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
},
"accelerator": "GPU",
"gpuClass": "standard"
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"private_outputs": true,
"provenance": []
},
"cells": [
{
"cell_type": "markdown",
"source": [
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Retrieval_based_Voice_Conversion_WebUI.ipynb)"
],
"metadata": {
"id": "ZFFCx5J80SGa"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "GmFP6bN9dvOq"
},
"outputs": [],
"source": [
"#@title 查看显卡\n",
"!nvidia-smi"
]
},
{
"cell_type": "code",
"source": [
"#@title 安装依赖\n",
"!apt-get -y install build-essential python3-dev ffmpeg\n",
"!pip3 install --upgrade setuptools wheel\n",
"!pip3 install --upgrade pip\n",
"!pip3 install faiss-gpu fairseq gradio ffmpeg ffmpeg-python praat-parselmouth pyworld numpy==1.23.5 numba==0.56.4 librosa==0.9.2"
],
"metadata": {
"id": "wjddIFr1oS3W"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title 克隆仓库\n",
"\n",
"!git clone --depth=1 -b stable https://github.com/fumiama/Retrieval-based-Voice-Conversion-WebUI\n",
"%cd /content/Retrieval-based-Voice-Conversion-WebUI\n",
"!mkdir -p pretrained uvr5_weights"
],
"metadata": {
"id": "ge_97mfpgqTm"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title 更新仓库(一般无需执行)\n",
"!git pull"
],
"metadata": {
"id": "BLDEZADkvlw1"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title 安装aria2\n",
"!apt -y install -qq aria2"
],
"metadata": {
"id": "pqE0PrnuRqI2"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title 下载底模\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D32k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o D32k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D40k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o D40k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D48k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o D48k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G32k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o G32k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G40k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o G40k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G48k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o G48k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D32k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o f0D32k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D40k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o f0D40k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D48k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o f0D48k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G32k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o f0G32k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G40k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o f0G40k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G48k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o f0G48k.pth"
],
"metadata": {
"id": "UG3XpUwEomUz"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title 下载人声分离模型\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP2-人声vocals+非人声instrumentals.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/uvr5_weights -o HP2-人声vocals+非人声instrumentals.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP5-主旋律人声vocals+其他instrumentals.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/uvr5_weights -o HP5-主旋律人声vocals+其他instrumentals.pth"
],
"metadata": {
"id": "HugjmZqZRuiF"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title 下载hubert_base\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt -d /content/Retrieval-based-Voice-Conversion-WebUI -o hubert_base.pt"
],
"metadata": {
"id": "2RCaT9FTR0ej"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title 挂载谷歌云盘\n",
"\n",
"from google.colab import drive\n",
"drive.mount('/content/drive')"
],
"metadata": {
"id": "jwu07JgqoFON"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title 从谷歌云盘加载打包好的数据集到/content/dataset\n",
"\n",
"#@markdown 数据集位置\n",
"DATASET = \"/content/drive/MyDrive/dataset/lulu20230327_32k.zip\" #@param {type:\"string\"}\n",
"\n",
"!mkdir -p /content/dataset\n",
"!unzip -d /content/dataset -B {DATASET}"
],
"metadata": {
"id": "Mwk7Q0Loqzjx"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title 重命名数据集中的重名文件\n",
"!ls -a /content/dataset/\n",
"!rename 's/(\\w+)\\.(\\w+)~(\\d*)/$1_$3.$2/' /content/dataset/*.*~*"
],
"metadata": {
"id": "PDlFxWHWEynD"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title 启动web\n",
"%cd /content/Retrieval-based-Voice-Conversion-WebUI\n",
"# %load_ext tensorboard\n",
"# %tensorboard --logdir /content/Retrieval-based-Voice-Conversion-WebUI/logs\n",
"!python3 infer-web.py --colab --pycmd python3"
],
"metadata": {
"id": "7vh6vphDwO0b"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title 手动将训练后的模型文件备份到谷歌云盘\n",
"#@markdown 需要自己查看logs文件夹下模型的文件名手动修改下方命令末尾的文件名\n",
"\n",
"#@markdown 模型名\n",
"MODELNAME = \"lulu\" #@param {type:\"string\"}\n",
"#@markdown 模型epoch\n",
"MODELEPOCH = 9600 #@param {type:\"integer\"}\n",
"\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/G_{MODELEPOCH}.pth /content/drive/MyDrive/{MODELNAME}_D_{MODELEPOCH}.pth\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/D_{MODELEPOCH}.pth /content/drive/MyDrive/{MODELNAME}_G_{MODELEPOCH}.pth\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/added_*.index /content/drive/MyDrive/\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/total_*.npy /content/drive/MyDrive/\n",
"\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/weights/{MODELNAME}.pth /content/drive/MyDrive/{MODELNAME}{MODELEPOCH}.pth"
],
"metadata": {
"id": "FgJuNeAwx5Y_"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title 从谷歌云盘恢复pth\n",
"#@markdown 需要自己查看logs文件夹下模型的文件名手动修改下方命令末尾的文件名\n",
"\n",
"#@markdown 模型名\n",
"MODELNAME = \"lulu\" #@param {type:\"string\"}\n",
"#@markdown 模型epoch\n",
"MODELEPOCH = 7500 #@param {type:\"integer\"}\n",
"\n",
"!mkdir -p /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}\n",
"\n",
"!cp /content/drive/MyDrive/{MODELNAME}_D_{MODELEPOCH}.pth /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/G_{MODELEPOCH}.pth\n",
"!cp /content/drive/MyDrive/{MODELNAME}_G_{MODELEPOCH}.pth /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/D_{MODELEPOCH}.pth\n",
"!cp /content/drive/MyDrive/*.index /content/\n",
"!cp /content/drive/MyDrive/*.npy /content/\n",
"!cp /content/drive/MyDrive/{MODELNAME}{MODELEPOCH}.pth /content/Retrieval-based-Voice-Conversion-WebUI/weights/{MODELNAME}.pth"
],
"metadata": {
"id": "OVQoLQJXS7WX"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title 手动预处理(不推荐)\n",
"#@markdown 模型名\n",
"MODELNAME = \"lulu\" #@param {type:\"string\"}\n",
"#@markdown 采样率\n",
"BITRATE = 48000 #@param {type:\"integer\"}\n",
"#@markdown 使用的进程数\n",
"THREADCOUNT = 8 #@param {type:\"integer\"}\n",
"\n",
"!python3 trainset_preprocess_pipeline_print.py /content/dataset {BITRATE} {THREADCOUNT} logs/{MODELNAME} True\n"
],
"metadata": {
"id": "ZKAyuKb9J6dz"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title 手动提取特征(不推荐)\n",
"#@markdown 模型名\n",
"MODELNAME = \"lulu\" #@param {type:\"string\"}\n",
"#@markdown 使用的进程数\n",
"THREADCOUNT = 8 #@param {type:\"integer\"}\n",
"#@markdown 音高提取算法\n",
"ALGO = \"harvest\" #@param {type:\"string\"}\n",
"\n",
"!python3 extract_f0_print.py logs/{MODELNAME} {THREADCOUNT} {ALGO}\n",
"\n",
"!python3 extract_feature_print.py cpu 1 0 0 logs/{MODELNAME}\n"
],
"metadata": {
"id": "CrxJqzAUKmPJ"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title 手动训练(不推荐)\n",
"#@markdown 模型名\n",
"MODELNAME = \"lulu\" #@param {type:\"string\"}\n",
"#@markdown 使用的GPU\n",
"USEGPU = \"0\" #@param {type:\"string\"}\n",
"#@markdown 批大小\n",
"BATCHSIZE = 32 #@param {type:\"integer\"}\n",
"#@markdown 停止的epoch\n",
"MODELEPOCH = 3200 #@param {type:\"integer\"}\n",
"#@markdown 保存epoch间隔\n",
"EPOCHSAVE = 100 #@param {type:\"integer\"}\n",
"#@markdown 采样率\n",
"MODELSAMPLE = \"48k\" #@param {type:\"string\"}\n",
"#@markdown 是否缓存训练集\n",
"CACHEDATA = 1 #@param {type:\"integer\"}\n",
"#@markdown 是否仅保存最新的ckpt文件\n",
"ONLYLATEST = 0 #@param {type:\"integer\"}\n",
"\n",
"!python3 train_nsf_sim_cache_sid_load_pretrain.py -e lulu -sr {MODELSAMPLE} -f0 1 -bs {BATCHSIZE} -g {USEGPU} -te {MODELEPOCH} -se {EPOCHSAVE} -pg pretrained/f0G{MODELSAMPLE}.pth -pd pretrained/f0D{MODELSAMPLE}.pth -l {ONLYLATEST} -c {CACHEDATA}\n"
],
"metadata": {
"id": "IMLPLKOaKj58"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title 删除其它pth只留选中的慎点仔细看代码\n",
"#@markdown 模型名\n",
"MODELNAME = \"lulu\" #@param {type:\"string\"}\n",
"#@markdown 选中模型epoch\n",
"MODELEPOCH = 9600 #@param {type:\"integer\"}\n",
"\n",
"!echo \"备份选中的模型。。。\"\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/G_{MODELEPOCH}.pth /content/{MODELNAME}_D_{MODELEPOCH}.pth\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/D_{MODELEPOCH}.pth /content/{MODELNAME}_G_{MODELEPOCH}.pth\n",
"\n",
"!echo \"正在删除。。。\"\n",
"!ls /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}\n",
"!rm /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/*.pth\n",
"\n",
"!echo \"恢复选中的模型。。。\"\n",
"!mv /content/{MODELNAME}_D_{MODELEPOCH}.pth /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/G_{MODELEPOCH}.pth \n",
"!mv /content/{MODELNAME}_G_{MODELEPOCH}.pth /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/D_{MODELEPOCH}.pth\n",
"\n",
"!echo \"删除完成\"\n",
"!ls /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}"
],
"metadata": {
"id": "haYA81hySuDl"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title 清除项目下所有文件,只留选中的模型(慎点,仔细看代码)\n",
"#@markdown 模型名\n",
"MODELNAME = \"lulu\" #@param {type:\"string\"}\n",
"#@markdown 选中模型epoch\n",
"MODELEPOCH = 9600 #@param {type:\"integer\"}\n",
"\n",
"!echo \"备份选中的模型。。。\"\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/G_{MODELEPOCH}.pth /content/{MODELNAME}_D_{MODELEPOCH}.pth\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/D_{MODELEPOCH}.pth /content/{MODELNAME}_G_{MODELEPOCH}.pth\n",
"\n",
"!echo \"正在删除。。。\"\n",
"!ls /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}\n",
"!rm -rf /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/*\n",
"\n",
"!echo \"恢复选中的模型。。。\"\n",
"!mv /content/{MODELNAME}_D_{MODELEPOCH}.pth /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/G_{MODELEPOCH}.pth \n",
"!mv /content/{MODELNAME}_G_{MODELEPOCH}.pth /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/D_{MODELEPOCH}.pth\n",
"\n",
"!echo \"删除完成\"\n",
"!ls /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}"
],
"metadata": {
"id": "QhSiPTVPoIRh"
},
"execution_count": null,
"outputs": []
}
]
}
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
},
"accelerator": "GPU",
"gpuClass": "standard"
},
"cells": [
{
"cell_type": "markdown",
"source": [
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/Retrieval_based_Voice_Conversion_WebUI.ipynb)"
],
"metadata": {
"id": "ZFFCx5J80SGa"
}
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "GmFP6bN9dvOq"
},
"outputs": [],
"source": [
"# @title 查看显卡\n",
"!nvidia-smi"
]
},
{
"cell_type": "code",
"source": [
"# @title 安装依赖\n",
"!apt-get -y install build-essential python3-dev ffmpeg\n",
"!pip3 install --upgrade setuptools wheel\n",
"!pip3 install --upgrade pip\n",
"!pip3 install faiss-cpu==1.7.2 fairseq gradio==3.14.0 ffmpeg ffmpeg-python praat-parselmouth pyworld numpy==1.23.5 numba==0.56.4 librosa==0.9.2"
],
"metadata": {
"id": "wjddIFr1oS3W"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# @title 克隆仓库\n",
"\n",
"!git clone --depth=1 -b stable https://github.com/fumiama/Retrieval-based-Voice-Conversion-WebUI\n",
"%cd /content/Retrieval-based-Voice-Conversion-WebUI\n",
"!mkdir -p pretrained uvr5_weights"
],
"metadata": {
"id": "ge_97mfpgqTm"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# @title 更新仓库(一般无需执行)\n",
"!git pull"
],
"metadata": {
"id": "BLDEZADkvlw1"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# @title 安装aria2\n",
"!apt -y install -qq aria2"
],
"metadata": {
"id": "pqE0PrnuRqI2"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# @title 下载底模\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D32k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o D32k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D40k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o D40k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D48k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o D48k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G32k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o G32k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G40k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o G40k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G48k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o G48k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D32k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o f0D32k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D40k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o f0D40k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D48k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o f0D48k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G32k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o f0G32k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G40k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o f0G40k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G48k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o f0G48k.pth"
],
"metadata": {
"id": "UG3XpUwEomUz"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# @title 下载人声分离模型\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP2-人声vocals+非人声instrumentals.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/uvr5_weights -o HP2-人声vocals+非人声instrumentals.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP5-主旋律人声vocals+其他instrumentals.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/uvr5_weights -o HP5-主旋律人声vocals+其他instrumentals.pth"
],
"metadata": {
"id": "HugjmZqZRuiF"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# @title 下载hubert_base\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt -d /content/Retrieval-based-Voice-Conversion-WebUI -o hubert_base.pt"
],
"metadata": {
"id": "2RCaT9FTR0ej"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# @title 挂载谷歌云盘\n",
"\n",
"from google.colab import drive\n",
"\n",
"drive.mount(\"/content/drive\")"
],
"metadata": {
"id": "jwu07JgqoFON"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# @title 从谷歌云盘加载打包好的数据集到/content/dataset\n",
"\n",
"# @markdown 数据集位置\n",
"DATASET = (\n",
" \"/content/drive/MyDrive/dataset/lulu20230327_32k.zip\" # @param {type:\"string\"}\n",
")\n",
"\n",
"!mkdir -p /content/dataset\n",
"!unzip -d /content/dataset -B {DATASET}"
],
"metadata": {
"id": "Mwk7Q0Loqzjx"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# @title 重命名数据集中的重名文件\n",
"!ls -a /content/dataset/\n",
"!rename 's/(\\w+)\\.(\\w+)~(\\d*)/$1_$3.$2/' /content/dataset/*.*~*"
],
"metadata": {
"id": "PDlFxWHWEynD"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# @title 启动web\n",
"%cd /content/Retrieval-based-Voice-Conversion-WebUI\n",
"# %load_ext tensorboard\n",
"# %tensorboard --logdir /content/Retrieval-based-Voice-Conversion-WebUI/logs\n",
"!python3 infer-web.py --colab --pycmd python3"
],
"metadata": {
"id": "7vh6vphDwO0b"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# @title 手动将训练后的模型文件备份到谷歌云盘\n",
"# @markdown 需要自己查看logs文件夹下模型的文件名手动修改下方命令末尾的文件名\n",
"\n",
"# @markdown 模型名\n",
"MODELNAME = \"lulu\" # @param {type:\"string\"}\n",
"# @markdown 模型epoch\n",
"MODELEPOCH = 9600 # @param {type:\"integer\"}\n",
"\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/G_{MODELEPOCH}.pth /content/drive/MyDrive/{MODELNAME}_D_{MODELEPOCH}.pth\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/D_{MODELEPOCH}.pth /content/drive/MyDrive/{MODELNAME}_G_{MODELEPOCH}.pth\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/added_*.index /content/drive/MyDrive/\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/total_*.npy /content/drive/MyDrive/\n",
"\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/weights/{MODELNAME}.pth /content/drive/MyDrive/{MODELNAME}{MODELEPOCH}.pth"
],
"metadata": {
"id": "FgJuNeAwx5Y_"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# @title 从谷歌云盘恢复pth\n",
"# @markdown 需要自己查看logs文件夹下模型的文件名手动修改下方命令末尾的文件名\n",
"\n",
"# @markdown 模型名\n",
"MODELNAME = \"lulu\" # @param {type:\"string\"}\n",
"# @markdown 模型epoch\n",
"MODELEPOCH = 7500 # @param {type:\"integer\"}\n",
"\n",
"!mkdir -p /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}\n",
"\n",
"!cp /content/drive/MyDrive/{MODELNAME}_D_{MODELEPOCH}.pth /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/G_{MODELEPOCH}.pth\n",
"!cp /content/drive/MyDrive/{MODELNAME}_G_{MODELEPOCH}.pth /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/D_{MODELEPOCH}.pth\n",
"!cp /content/drive/MyDrive/*.index /content/\n",
"!cp /content/drive/MyDrive/*.npy /content/\n",
"!cp /content/drive/MyDrive/{MODELNAME}{MODELEPOCH}.pth /content/Retrieval-based-Voice-Conversion-WebUI/weights/{MODELNAME}.pth"
],
"metadata": {
"id": "OVQoLQJXS7WX"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# @title 手动预处理(不推荐)\n",
"# @markdown 模型名\n",
"MODELNAME = \"lulu\" # @param {type:\"string\"}\n",
"# @markdown 采样率\n",
"BITRATE = 48000 # @param {type:\"integer\"}\n",
"# @markdown 使用的进程数\n",
"THREADCOUNT = 8 # @param {type:\"integer\"}\n",
"\n",
"!python3 trainset_preprocess_pipeline_print.py /content/dataset {BITRATE} {THREADCOUNT} logs/{MODELNAME} True"
],
"metadata": {
"id": "ZKAyuKb9J6dz"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# @title 手动提取特征(不推荐)\n",
"# @markdown 模型名\n",
"MODELNAME = \"lulu\" # @param {type:\"string\"}\n",
"# @markdown 使用的进程数\n",
"THREADCOUNT = 8 # @param {type:\"integer\"}\n",
"# @markdown 音高提取算法\n",
"ALGO = \"harvest\" # @param {type:\"string\"}\n",
"\n",
"!python3 extract_f0_print.py logs/{MODELNAME} {THREADCOUNT} {ALGO}\n",
"\n",
"!python3 extract_feature_print.py cpu 1 0 0 logs/{MODELNAME}"
],
"metadata": {
"id": "CrxJqzAUKmPJ"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# @title 手动训练(不推荐)\n",
"# @markdown 模型名\n",
"MODELNAME = \"lulu\" # @param {type:\"string\"}\n",
"# @markdown 使用的GPU\n",
"USEGPU = \"0\" # @param {type:\"string\"}\n",
"# @markdown 批大小\n",
"BATCHSIZE = 32 # @param {type:\"integer\"}\n",
"# @markdown 停止的epoch\n",
"MODELEPOCH = 3200 # @param {type:\"integer\"}\n",
"# @markdown 保存epoch间隔\n",
"EPOCHSAVE = 100 # @param {type:\"integer\"}\n",
"# @markdown 采样率\n",
"MODELSAMPLE = \"48k\" # @param {type:\"string\"}\n",
"# @markdown 是否缓存训练集\n",
"CACHEDATA = 1 # @param {type:\"integer\"}\n",
"# @markdown 是否仅保存最新的ckpt文件\n",
"ONLYLATEST = 0 # @param {type:\"integer\"}\n",
"\n",
"!python3 train_nsf_sim_cache_sid_load_pretrain.py -e lulu -sr {MODELSAMPLE} -f0 1 -bs {BATCHSIZE} -g {USEGPU} -te {MODELEPOCH} -se {EPOCHSAVE} -pg pretrained/f0G{MODELSAMPLE}.pth -pd pretrained/f0D{MODELSAMPLE}.pth -l {ONLYLATEST} -c {CACHEDATA}"
],
"metadata": {
"id": "IMLPLKOaKj58"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# @title 删除其它pth只留选中的慎点仔细看代码\n",
"# @markdown 模型名\n",
"MODELNAME = \"lulu\" # @param {type:\"string\"}\n",
"# @markdown 选中模型epoch\n",
"MODELEPOCH = 9600 # @param {type:\"integer\"}\n",
"\n",
"!echo \"备份选中的模型。。。\"\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/G_{MODELEPOCH}.pth /content/{MODELNAME}_D_{MODELEPOCH}.pth\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/D_{MODELEPOCH}.pth /content/{MODELNAME}_G_{MODELEPOCH}.pth\n",
"\n",
"!echo \"正在删除。。。\"\n",
"!ls /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}\n",
"!rm /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/*.pth\n",
"\n",
"!echo \"恢复选中的模型。。。\"\n",
"!mv /content/{MODELNAME}_D_{MODELEPOCH}.pth /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/G_{MODELEPOCH}.pth\n",
"!mv /content/{MODELNAME}_G_{MODELEPOCH}.pth /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/D_{MODELEPOCH}.pth\n",
"\n",
"!echo \"删除完成\"\n",
"!ls /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}"
],
"metadata": {
"id": "haYA81hySuDl"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# @title 清除项目下所有文件,只留选中的模型(慎点,仔细看代码)\n",
"# @markdown 模型名\n",
"MODELNAME = \"lulu\" # @param {type:\"string\"}\n",
"# @markdown 选中模型epoch\n",
"MODELEPOCH = 9600 # @param {type:\"integer\"}\n",
"\n",
"!echo \"备份选中的模型。。。\"\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/G_{MODELEPOCH}.pth /content/{MODELNAME}_D_{MODELEPOCH}.pth\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/D_{MODELEPOCH}.pth /content/{MODELNAME}_G_{MODELEPOCH}.pth\n",
"\n",
"!echo \"正在删除。。。\"\n",
"!ls /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}\n",
"!rm -rf /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/*\n",
"\n",
"!echo \"恢复选中的模型。。。\"\n",
"!mv /content/{MODELNAME}_D_{MODELEPOCH}.pth /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/G_{MODELEPOCH}.pth\n",
"!mv /content/{MODELNAME}_G_{MODELEPOCH}.pth /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/D_{MODELEPOCH}.pth\n",
"\n",
"!echo \"删除完成\"\n",
"!ls /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}"
],
"metadata": {
"id": "QhSiPTVPoIRh"
},
"execution_count": null,
"outputs": []
}
]
}


@@ -0,0 +1,404 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"id": "ZFFCx5J80SGa"
},
"source": [
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/Retrieval_based_Voice_Conversion_WebUI_v2.ipynb)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "GmFP6bN9dvOq"
},
"outputs": [],
"source": [
"# @title 查看显卡\n",
"!nvidia-smi"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "wjddIFr1oS3W"
},
"outputs": [],
"source": [
"# @title 安装依赖\n",
"!apt-get -y install build-essential python3-dev ffmpeg\n",
"!pip3 install --upgrade setuptools wheel\n",
"!pip3 install --upgrade pip\n",
"!pip3 install faiss-cpu==1.7.2 fairseq gradio==3.14.0 ffmpeg ffmpeg-python praat-parselmouth pyworld numpy==1.23.5 numba==0.56.4 librosa==0.9.2"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ge_97mfpgqTm"
},
"outputs": [],
"source": [
"# @title 克隆仓库\n",
"\n",
"!mkdir Retrieval-based-Voice-Conversion-WebUI\n",
"%cd /content/Retrieval-based-Voice-Conversion-WebUI\n",
"!git init\n",
"!git remote add origin https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI.git\n",
"!git fetch origin cfd984812804ddc9247d65b14c82cd32e56c1133 --depth=1\n",
"!git reset --hard FETCH_HEAD"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "BLDEZADkvlw1"
},
"outputs": [],
"source": [
"# @title 更新仓库(一般无需执行)\n",
"!git pull"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "pqE0PrnuRqI2"
},
"outputs": [],
"source": [
"# @title 安装aria2\n",
"!apt -y install -qq aria2"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "UG3XpUwEomUz"
},
"outputs": [],
"source": [
"# @title 下载底模\n",
"\n",
"# v1\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D32k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o D32k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D40k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o D40k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D48k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o D48k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G32k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o G32k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G40k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o G40k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G48k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o G48k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D32k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o f0D32k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D40k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o f0D40k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D48k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o f0D48k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G32k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o f0G32k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G40k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o f0G40k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G48k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained -o f0G48k.pth\n",
"\n",
"# v2\n",
"# !aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D32k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained_v2 -o D32k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D40k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained_v2 -o D40k.pth\n",
"# !aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D48k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained_v2 -o D48k.pth\n",
"# !aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G32k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained_v2 -o G32k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G40k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained_v2 -o G40k.pth\n",
"# !aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G48k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained_v2 -o G48k.pth\n",
"# !aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D32k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained_v2 -o f0D32k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D40k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained_v2 -o f0D40k.pth\n",
"# !aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D48k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained_v2 -o f0D48k.pth\n",
"# !aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G32k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained_v2 -o f0G32k.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G40k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained_v2 -o f0G40k.pth\n",
"# !aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G48k.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/pretrained_v2 -o f0G48k.pth"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "HugjmZqZRuiF"
},
"outputs": [],
"source": [
"# @title 下载人声分离模型\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP2-人声vocals+非人声instrumentals.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/uvr5_weights -o HP2-人声vocals+非人声instrumentals.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP5-主旋律人声vocals+其他instrumentals.pth -d /content/Retrieval-based-Voice-Conversion-WebUI/uvr5_weights -o HP5-主旋律人声vocals+其他instrumentals.pth"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2RCaT9FTR0ej"
},
"outputs": [],
"source": [
"# @title 下载hubert_base\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt -d /content/Retrieval-based-Voice-Conversion-WebUI -o hubert_base.pt"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "jwu07JgqoFON"
},
"outputs": [],
"source": [
"# @title 挂载谷歌云盘\n",
"\n",
"from google.colab import drive\n",
"\n",
"drive.mount(\"/content/drive\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Mwk7Q0Loqzjx"
},
"outputs": [],
"source": [
"# @title 从谷歌云盘加载打包好的数据集到/content/dataset\n",
"\n",
"# @markdown 数据集位置\n",
"DATASET = (\n",
" \"/content/drive/MyDrive/dataset/lulu20230327_32k.zip\" # @param {type:\"string\"}\n",
")\n",
"\n",
"!mkdir -p /content/dataset\n",
"!unzip -d /content/dataset -B {DATASET}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "PDlFxWHWEynD"
},
"outputs": [],
"source": [
"# @title 重命名数据集中的重名文件\n",
"!ls -a /content/dataset/\n",
"!rename 's/(\\w+)\\.(\\w+)~(\\d*)/$1_$3.$2/' /content/dataset/*.*~*"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "7vh6vphDwO0b"
},
"outputs": [],
"source": [
"# @title 启动web\n",
"%cd /content/Retrieval-based-Voice-Conversion-WebUI\n",
"# %load_ext tensorboard\n",
"# %tensorboard --logdir /content/Retrieval-based-Voice-Conversion-WebUI/logs\n",
"!python3 infer-web.py --colab --pycmd python3"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "FgJuNeAwx5Y_"
},
"outputs": [],
"source": [
"# @title 手动将训练后的模型文件备份到谷歌云盘\n",
"# @markdown 需要自己查看logs文件夹下模型的文件名手动修改下方命令末尾的文件名\n",
"\n",
"# @markdown 模型名\n",
"MODELNAME = \"lulu\" # @param {type:\"string\"}\n",
"# @markdown 模型epoch\n",
"MODELEPOCH = 9600 # @param {type:\"integer\"}\n",
"\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/G_{MODELEPOCH}.pth /content/drive/MyDrive/{MODELNAME}_D_{MODELEPOCH}.pth\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/D_{MODELEPOCH}.pth /content/drive/MyDrive/{MODELNAME}_G_{MODELEPOCH}.pth\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/added_*.index /content/drive/MyDrive/\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/total_*.npy /content/drive/MyDrive/\n",
"\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/weights/{MODELNAME}.pth /content/drive/MyDrive/{MODELNAME}{MODELEPOCH}.pth"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "OVQoLQJXS7WX"
},
"outputs": [],
"source": [
"# @title 从谷歌云盘恢复pth\n",
"# @markdown 需要自己查看logs文件夹下模型的文件名手动修改下方命令末尾的文件名\n",
"\n",
"# @markdown 模型名\n",
"MODELNAME = \"lulu\" # @param {type:\"string\"}\n",
"# @markdown 模型epoch\n",
"MODELEPOCH = 7500 # @param {type:\"integer\"}\n",
"\n",
"!mkdir -p /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}\n",
"\n",
"!cp /content/drive/MyDrive/{MODELNAME}_D_{MODELEPOCH}.pth /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/G_{MODELEPOCH}.pth\n",
"!cp /content/drive/MyDrive/{MODELNAME}_G_{MODELEPOCH}.pth /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/D_{MODELEPOCH}.pth\n",
"!cp /content/drive/MyDrive/*.index /content/\n",
"!cp /content/drive/MyDrive/*.npy /content/\n",
"!cp /content/drive/MyDrive/{MODELNAME}{MODELEPOCH}.pth /content/Retrieval-based-Voice-Conversion-WebUI/weights/{MODELNAME}.pth"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ZKAyuKb9J6dz"
},
"outputs": [],
"source": [
"# @title 手动预处理(不推荐)\n",
"# @markdown 模型名\n",
"MODELNAME = \"lulu\" # @param {type:\"string\"}\n",
"# @markdown 采样率\n",
"BITRATE = 48000 # @param {type:\"integer\"}\n",
"# @markdown 使用的进程数\n",
"THREADCOUNT = 8 # @param {type:\"integer\"}\n",
"\n",
"!python3 trainset_preprocess_pipeline_print.py /content/dataset {BITRATE} {THREADCOUNT} logs/{MODELNAME} True"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "CrxJqzAUKmPJ"
},
"outputs": [],
"source": [
"# @title 手动提取特征(不推荐)\n",
"# @markdown 模型名\n",
"MODELNAME = \"lulu\" # @param {type:\"string\"}\n",
"# @markdown 使用的进程数\n",
"THREADCOUNT = 8 # @param {type:\"integer\"}\n",
"# @markdown 音高提取算法\n",
"ALGO = \"harvest\" # @param {type:\"string\"}\n",
"\n",
"!python3 extract_f0_print.py logs/{MODELNAME} {THREADCOUNT} {ALGO}\n",
"\n",
"!python3 extract_feature_print.py cpu 1 0 0 logs/{MODELNAME}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "IMLPLKOaKj58"
},
"outputs": [],
"source": [
"# @title 手动训练(不推荐)\n",
"# @markdown 模型名\n",
"MODELNAME = \"lulu\" # @param {type:\"string\"}\n",
"# @markdown 使用的GPU\n",
"USEGPU = \"0\" # @param {type:\"string\"}\n",
"# @markdown 批大小\n",
"BATCHSIZE = 32 # @param {type:\"integer\"}\n",
"# @markdown 停止的epoch\n",
"MODELEPOCH = 3200 # @param {type:\"integer\"}\n",
"# @markdown 保存epoch间隔\n",
"EPOCHSAVE = 100 # @param {type:\"integer\"}\n",
"# @markdown 采样率\n",
"MODELSAMPLE = \"48k\" # @param {type:\"string\"}\n",
"# @markdown 是否缓存训练集\n",
"CACHEDATA = 1 # @param {type:\"integer\"}\n",
"# @markdown 是否仅保存最新的ckpt文件\n",
"ONLYLATEST = 0 # @param {type:\"integer\"}\n",
"\n",
"!python3 train_nsf_sim_cache_sid_load_pretrain.py -e lulu -sr {MODELSAMPLE} -f0 1 -bs {BATCHSIZE} -g {USEGPU} -te {MODELEPOCH} -se {EPOCHSAVE} -pg pretrained/f0G{MODELSAMPLE}.pth -pd pretrained/f0D{MODELSAMPLE}.pth -l {ONLYLATEST} -c {CACHEDATA}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "haYA81hySuDl"
},
"outputs": [],
"source": [
"# @title 删除其它pth只留选中的慎点仔细看代码\n",
"# @markdown 模型名\n",
"MODELNAME = \"lulu\" # @param {type:\"string\"}\n",
"# @markdown 选中模型epoch\n",
"MODELEPOCH = 9600 # @param {type:\"integer\"}\n",
"\n",
"!echo \"备份选中的模型。。。\"\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/G_{MODELEPOCH}.pth /content/{MODELNAME}_D_{MODELEPOCH}.pth\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/D_{MODELEPOCH}.pth /content/{MODELNAME}_G_{MODELEPOCH}.pth\n",
"\n",
"!echo \"正在删除。。。\"\n",
"!ls /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}\n",
"!rm /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/*.pth\n",
"\n",
"!echo \"恢复选中的模型。。。\"\n",
"!mv /content/{MODELNAME}_D_{MODELEPOCH}.pth /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/G_{MODELEPOCH}.pth\n",
"!mv /content/{MODELNAME}_G_{MODELEPOCH}.pth /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/D_{MODELEPOCH}.pth\n",
"\n",
"!echo \"删除完成\"\n",
"!ls /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "QhSiPTVPoIRh"
},
"outputs": [],
"source": [
"# @title 清除项目下所有文件,只留选中的模型(慎点,仔细看代码)\n",
"# @markdown 模型名\n",
"MODELNAME = \"lulu\" # @param {type:\"string\"}\n",
"# @markdown 选中模型epoch\n",
"MODELEPOCH = 9600 # @param {type:\"integer\"}\n",
"\n",
"!echo \"备份选中的模型。。。\"\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/G_{MODELEPOCH}.pth /content/{MODELNAME}_D_{MODELEPOCH}.pth\n",
"!cp /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/D_{MODELEPOCH}.pth /content/{MODELNAME}_G_{MODELEPOCH}.pth\n",
"\n",
"!echo \"正在删除。。。\"\n",
"!ls /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}\n",
"!rm -rf /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/*\n",
"\n",
"!echo \"恢复选中的模型。。。\"\n",
"!mv /content/{MODELNAME}_D_{MODELEPOCH}.pth /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/G_{MODELEPOCH}.pth\n",
"!mv /content/{MODELNAME}_G_{MODELEPOCH}.pth /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}/D_{MODELEPOCH}.pth\n",
"\n",
"!echo \"删除完成\"\n",
"!ls /content/Retrieval-based-Voice-Conversion-WebUI/logs/{MODELNAME}"
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"private_outputs": true,
"provenance": []
},
"gpuClass": "standard",
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

319
app.py Normal file
View File

@@ -0,0 +1,319 @@
import os
import torch
# os.system("wget -P cvec/ https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt")
import gradio as gr
import librosa
import numpy as np
import logging
from fairseq import checkpoint_utils
from vc_infer_pipeline import VC
import traceback
from config import Config
from lib.infer_pack.models import (
SynthesizerTrnMs256NSFsid,
SynthesizerTrnMs256NSFsid_nono,
SynthesizerTrnMs768NSFsid,
SynthesizerTrnMs768NSFsid_nono,
)
from i18n import I18nAuto
logging.getLogger("numba").setLevel(logging.WARNING)
logging.getLogger("markdown_it").setLevel(logging.WARNING)
logging.getLogger("urllib3").setLevel(logging.WARNING)
logging.getLogger("matplotlib").setLevel(logging.WARNING)
i18n = I18nAuto()
i18n.print()
config = Config()
weight_root = "weights"
weight_uvr5_root = "uvr5_weights"
index_root = "logs"
names = []
hubert_model = None
for name in os.listdir(weight_root):
if name.endswith(".pth"):
names.append(name)
index_paths = []
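# Collect every usable faiss index under logs/; files with "trained" in the name are intermediate artifacts and are skipped.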
for root, dirs, files in os.walk(index_root, topdown=False):
for name in files:
if name.endswith(".index") and "trained" not in name:
index_paths.append("%s/%s" % (root, name))
def get_vc(sid):
global n_spk, tgt_sr, net_g, vc, cpt, version
if sid == "" or sid == []:
global hubert_model
if hubert_model is not None:  # polling means sid may switch from a loaded model to none; detect that transition
print("clean_empty_cache")
del net_g, n_spk, vc, hubert_model, tgt_sr  # ,cpt
hubert_model = net_g = n_spk = vc = tgt_sr = None
if torch.cuda.is_available():
torch.cuda.empty_cache()
### without the juggling below, the cleanup is never thorough
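# Re-creating net_g from the stale cpt and deleting it again drops the last live parameter references, so empty_cache() can actually reclaim VRAM (an empirical workaround, not documented PyTorch behavior).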
if_f0 = cpt.get("f0", 1)
version = cpt.get("version", "v1")
if version == "v1":
if if_f0 == 1:
net_g = SynthesizerTrnMs256NSFsid(
*cpt["config"], is_half=config.is_half
)
else:
net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
elif version == "v2":
if if_f0 == 1:
net_g = SynthesizerTrnMs768NSFsid(
*cpt["config"], is_half=config.is_half
)
else:
net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
del net_g, cpt
if torch.cuda.is_available():
torch.cuda.empty_cache()
cpt = None
return {"visible": False, "__type__": "update"}
person = "%s/%s" % (weight_root, sid)
print("loading %s" % person)
cpt = torch.load(person, map_location="cpu")
tgt_sr = cpt["config"][-1]
cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
if_f0 = cpt.get("f0", 1)
version = cpt.get("version", "v1")
if version == "v1":
if if_f0 == 1:
net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half)
else:
net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
elif version == "v2":
if if_f0 == 1:
net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half)
else:
net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
del net_g.enc_q
print(net_g.load_state_dict(cpt["weight"], strict=False))
net_g.eval().to(config.device)
if config.is_half:
net_g = net_g.half()
else:
net_g = net_g.float()
vc = VC(tgt_sr, config)
n_spk = cpt["config"][-3]
return {"visible": True, "maximum": n_spk, "__type__": "update"}
def load_hubert():
global hubert_model
models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
["hubert_base.pt"],
suffix="",
)
hubert_model = models[0]
hubert_model = hubert_model.to(config.device)
if config.is_half:
hubert_model = hubert_model.half()
else:
hubert_model = hubert_model.float()
hubert_model.eval()
def vc_single(
sid,
input_audio_path,
f0_up_key,
f0_file,
f0_method,
file_index,
file_index2,
# file_big_npy,
index_rate,
filter_radius,
resample_sr,
rms_mix_rate,
protect,
): # spk_item, input_audio0, vc_transform0,f0_file,f0method0
global tgt_sr, net_g, vc, hubert_model, version
if input_audio_path is None:
return "You need to upload an audio", None
f0_up_key = int(f0_up_key)
try:
audio = input_audio_path[1] / 32768.0
if len(audio.shape) == 2:
audio = np.mean(audio, -1)
audio = librosa.resample(audio, orig_sr=input_audio_path[0], target_sr=16000)
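# Peak-normalize only when the mono 16 kHz signal exceeds ~0.95 of full scale; quieter audio is left untouched.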
audio_max = np.abs(audio).max() / 0.95
if audio_max > 1:
audio /= audio_max
times = [0, 0, 0]
if hubert_model is None:
load_hubert()
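# "f0" in the checkpoint marks whether the model was trained with pitch guidance (assumed yes if absent).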
if_f0 = cpt.get("f0", 1)
file_index = (
(
file_index.strip(" ")
.strip('"')
.strip("\n")
.strip('"')
.strip(" ")
.replace("trained", "added")
)
if file_index != ""
else file_index2
) # guard against beginners' typos: strip stray quotes/whitespace and point "trained" indexes to their usable "added" counterparts
# file_big_npy = (
# file_big_npy.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
# )
audio_opt = vc.pipeline(
hubert_model,
net_g,
sid,
audio,
input_audio_path,
times,
f0_up_key,
f0_method,
file_index,
# file_big_npy,
index_rate,
if_f0,
filter_radius,
tgt_sr,
resample_sr,
rms_mix_rate,
version,
protect,
f0_file=f0_file,
)
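# If post-processing resampling was requested (>= 16 kHz), report the resampled rate so the audio plays back correctly.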
if resample_sr >= 16000 and tgt_sr != resample_sr:
tgt_sr = resample_sr
index_info = (
"Using index:%s." % file_index
if os.path.exists(file_index)
else "Index not used."
)
return "Success.\n %s\nTime:\n npy:%ss, f0:%ss, infer:%ss" % (
index_info,
times[0],
times[1],
times[2],
), (tgt_sr, audio_opt)
except Exception:
info = traceback.format_exc()
print(info)
return info, (None, None)
app = gr.Blocks()
with app:
with gr.Tabs():
with gr.TabItem("在线demo"):
gr.Markdown(
value="""
RVC 在线demo
"""
)
sid = gr.Dropdown(label=i18n("推理音色"), choices=sorted(names))
with gr.Column():
spk_item = gr.Slider(
minimum=0,
maximum=2333,
step=1,
label=i18n("请选择说话人id"),
value=0,
visible=False,
interactive=True,
)
sid.change(
fn=get_vc,
inputs=[sid],
outputs=[spk_item],
)
gr.Markdown(
value=i18n("男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ")
)
vc_input3 = gr.Audio(label="Upload audio (under 90 seconds)")
vc_transform0 = gr.Number(label=i18n("变调(整数, 半音数量, 升八度12降八度-12)"), value=0)
f0method0 = gr.Radio(
label=i18n("选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU"),
choices=["pm", "harvest", "crepe", "rmvpe"],
value="pm",
interactive=True,
)
filter_radius0 = gr.Slider(
minimum=0,
maximum=7,
label=i18n(">=3则使用对harvest音高识别的结果使用中值滤波数值为滤波半径使用可以削弱哑音"),
value=3,
step=1,
interactive=True,
)
with gr.Column():
file_index1 = gr.Textbox(
label=i18n("特征检索库文件路径,为空则使用下拉的选择结果"),
value="",
interactive=False,
visible=False,
)
file_index2 = gr.Dropdown(
label=i18n("自动检测index路径,下拉式选择(dropdown)"),
choices=sorted(index_paths),
interactive=True,
)
index_rate1 = gr.Slider(
minimum=0,
maximum=1,
label=i18n("检索特征占比"),
value=0.88,
interactive=True,
)
resample_sr0 = gr.Slider(
minimum=0,
maximum=48000,
label=i18n("后处理重采样至最终采样率0为不进行重采样"),
value=0,
step=1,
interactive=True,
)
rms_mix_rate0 = gr.Slider(
minimum=0,
maximum=1,
label=i18n("输入源音量包络替换输出音量包络融合比例越靠近1越使用输出包络"),
value=1,
interactive=True,
)
protect0 = gr.Slider(
minimum=0,
maximum=0.5,
label=i18n("保护清辅音和呼吸声防止电音撕裂等artifact拉满0.5不开启,调低加大保护力度但可能降低索引效果"),
value=0.33,
step=0.01,
interactive=True,
)
f0_file = gr.File(label=i18n("F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调"))
but0 = gr.Button(i18n("转换"), variant="primary")
vc_output1 = gr.Textbox(label=i18n("输出信息"))
vc_output2 = gr.Audio(label=i18n("输出音频(右下角三个点,点了可以下载)"))
but0.click(
vc_single,
[
spk_item,
vc_input3,
vc_transform0,
f0_file,
f0method0,
file_index1,
file_index2,
# file_big_npy1,
index_rate1,
filter_radius0,
resample_sr0,
rms_mix_rate0,
protect0,
],
[vc_output1, vc_output2],
)
app.launch()

274
config.py
View File

@@ -1,88 +1,186 @@
######################## Hardware parameters ########################
# Set to cuda:x, cpu, or mps; x is the GPU index (only NVIDIA GPUs / Apple Silicon acceleration are supported)
device = "cuda:0"
# For 9/10/20/30/40-series GPUs simply leave True; quality is unaffected, and >=20-series cards get a speedup
is_half = True
# Default 0 uses all threads; set a number to cap CPU usage
n_cpu = 0
######################## Hardware parameters ########################
################## Parameter-handling logic below; do not modify ##################
######################## Command-line arguments ########################
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--port", type=int, default=7865, help="Listen port")
parser.add_argument("--pycmd", type=str, default="python", help="Python command")
parser.add_argument("--colab", action="store_true", help="Launch in colab")
parser.add_argument(
"--noparallel", action="store_true", help="Disable parallel processing"
)
parser.add_argument(
"--noautoopen", action="store_true", help="Do not open in browser automatically"
)
cmd_opts = parser.parse_args()
python_cmd = cmd_opts.pycmd
listen_port = cmd_opts.port
iscolab = cmd_opts.colab
noparallel = cmd_opts.noparallel
noautoopen = cmd_opts.noautoopen
######################## Command-line arguments ########################
import sys
import torch
# has_mps is only available in nightly pytorch (for now) and macOS 12.3+.
# check `getattr` and try it for compatibility
def has_mps() -> bool:
if sys.platform != "darwin":
return False
else:
if not getattr(torch, "has_mps", False):
return False
try:
torch.zeros(1).to(torch.device("mps"))
return True
except Exception:
return False
if not torch.cuda.is_available():
if has_mps():
print("没有发现支持的N卡, 使用MPS进行推理")
device = "mps"
else:
print("没有发现支持的N卡, 使用CPU进行推理")
device = "cpu"
is_half = False
if device not in ["cpu", "mps"]:
gpu_name = torch.cuda.get_device_name(int(device.split(":")[-1]))
if "16" in gpu_name or "MX" in gpu_name:
print("16系显卡/MX系显卡强制单精度")
is_half = False
from multiprocessing import cpu_count
if n_cpu == 0:
n_cpu = cpu_count()
if is_half:
# settings for ~6 GB VRAM
x_pad = 3
x_query = 10
x_center = 60
x_max = 65
else:
# settings for ~5 GB VRAM
x_pad = 1
x_query = 6
x_center = 38
x_max = 41
import os
import argparse
import sys
import torch
from multiprocessing import cpu_count
def use_fp32_config():
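# Force-fp32 helper: flips "fp16_run": true -> false in every training config and lowers the 3.7 constant in trainset_preprocess_pipeline_print.py to 3.0 (presumably a loudness/headroom tweak for fp32 cards).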
for config_file in [
"32k.json",
"40k.json",
"48k.json",
"48k_v2.json",
"32k_v2.json",
]:
with open(f"configs/{config_file}", "r") as f:
strr = f.read().replace("true", "false")
with open(f"configs/{config_file}", "w") as f:
f.write(strr)
with open("trainset_preprocess_pipeline_print.py", "r") as f:
strr = f.read().replace("3.7", "3.0")
with open("trainset_preprocess_pipeline_print.py", "w") as f:
f.write(strr)
class Config:
def __init__(self):
self.device = "cuda:0"
self.is_half = True
self.n_cpu = 0
self.gpu_name = None
self.gpu_mem = None
(
self.python_cmd,
self.listen_port,
self.iscolab,
self.noparallel,
self.noautoopen,
self.dml,
) = self.arg_parse()
self.instead = ""
self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config()
@staticmethod
def arg_parse() -> tuple:
exe = sys.executable or "python"
parser = argparse.ArgumentParser()
parser.add_argument("--port", type=int, default=7865, help="Listen port")
parser.add_argument("--pycmd", type=str, default=exe, help="Python command")
parser.add_argument("--colab", action="store_true", help="Launch in colab")
parser.add_argument(
"--noparallel", action="store_true", help="Disable parallel processing"
)
parser.add_argument(
"--noautoopen",
action="store_true",
help="Do not open in browser automatically",
)
parser.add_argument(
"--dml",
action="store_true",
help="torch_dml",
)
cmd_opts = parser.parse_args()
cmd_opts.port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865
return (
cmd_opts.pycmd,
cmd_opts.port,
cmd_opts.colab,
cmd_opts.noparallel,
cmd_opts.noautoopen,
cmd_opts.dml,
)
# has_mps is only available in nightly pytorch (for now) and macOS 12.3+.
# probe availability, then attempt a real allocation for compatibility
@staticmethod
def has_mps() -> bool:
if not torch.backends.mps.is_available():
return False
try:
torch.zeros(1).to(torch.device("mps"))
return True
except Exception:
return False
def device_config(self) -> tuple:
if torch.cuda.is_available():
i_device = int(self.device.split(":")[-1])
self.gpu_name = torch.cuda.get_device_name(i_device)
if (
("16" in self.gpu_name and "V100" not in self.gpu_name.upper())
or "P40" in self.gpu_name.upper()
or "1060" in self.gpu_name
or "1070" in self.gpu_name
or "1080" in self.gpu_name
):
print("Found GPU", self.gpu_name, ", force to fp32")
self.is_half = False
use_fp32_config()
else:
print("Found GPU", self.gpu_name)
self.gpu_mem = int(
torch.cuda.get_device_properties(i_device).total_memory
/ 1024
/ 1024
/ 1024
+ 0.4
)
if self.gpu_mem <= 4:
with open("trainset_preprocess_pipeline_print.py", "r") as f:
strr = f.read().replace("3.7", "3.0")
with open("trainset_preprocess_pipeline_print.py", "w") as f:
f.write(strr)
elif self.has_mps():
print("No supported Nvidia GPU found")
self.device = self.instead = "mps"
self.is_half = False
use_fp32_config()
else:
print("No supported Nvidia GPU found")
self.device = self.instead = "cpu"
self.is_half = False
use_fp32_config()
if self.n_cpu == 0:
self.n_cpu = cpu_count()
if self.is_half:
# settings for ~6 GB VRAM
x_pad = 3
x_query = 10
x_center = 60
x_max = 65
else:
# settings for ~5 GB VRAM
x_pad = 1
x_query = 6
x_center = 38
x_max = 41
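# Even tighter inference windows for cards with <= 4 GB VRAM.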
if self.gpu_mem is not None and self.gpu_mem <= 4:
x_pad = 1
x_query = 5
x_center = 30
x_max = 32
if self.dml:
print("use DirectML instead")
try:
os.rename(
r"runtime\Lib\site-packages\onnxruntime",
r"runtime\Lib\site-packages\onnxruntime-cuda",
)
except Exception:
pass
try:
os.rename(
r"runtime\Lib\site-packages\onnxruntime-dml",
r"runtime\Lib\site-packages\onnxruntime",
)
except Exception:
pass
import torch_directml
self.device = torch_directml.device(torch_directml.default_device())
self.is_half = False
else:
if self.instead:
print(f"use {self.instead} instead")
try:
os.rename(
r"runtime\Lib\site-packages\onnxruntime",
r"runtime\Lib\site-packages\onnxruntime-cuda",
)
except Exception:
pass
try:
os.rename(
r"runtime\Lib\site-packages\onnxruntime-dml",
r"runtime\Lib\site-packages\onnxruntime",
)
except Exception:
pass
return x_pad, x_query, x_center, x_max

46
configs/32k_v2.json Normal file
View File

@@ -0,0 +1,46 @@
{
"train": {
"log_interval": 200,
"seed": 1234,
"epochs": 20000,
"learning_rate": 1e-4,
"betas": [0.8, 0.99],
"eps": 1e-9,
"batch_size": 4,
"fp16_run": true,
"lr_decay": 0.999875,
"segment_size": 12800,
"init_lr_ratio": 1,
"warmup_epochs": 0,
"c_mel": 45,
"c_kl": 1.0
},
"data": {
"max_wav_value": 32768.0,
"sampling_rate": 32000,
"filter_length": 1024,
"hop_length": 320,
"win_length": 1024,
"n_mel_channels": 80,
"mel_fmin": 0.0,
"mel_fmax": null
},
"model": {
"inter_channels": 192,
"hidden_channels": 192,
"filter_channels": 768,
"n_heads": 2,
"n_layers": 6,
"kernel_size": 3,
"p_dropout": 0,
"resblock": "1",
"resblock_kernel_sizes": [3,7,11],
"resblock_dilation_sizes": [[1,3,5], [1,3,5], [1,3,5]],
"upsample_rates": [10,8,2,2],
"upsample_initial_channel": 512,
"upsample_kernel_sizes": [20,16,4,4],
"use_spectral_norm": false,
"gin_channels": 256,
"spk_embed_dim": 109
}
}

46
configs/48k_v2.json Normal file
View File

@@ -0,0 +1,46 @@
{
"train": {
"log_interval": 200,
"seed": 1234,
"epochs": 20000,
"learning_rate": 1e-4,
"betas": [0.8, 0.99],
"eps": 1e-9,
"batch_size": 4,
"fp16_run": true,
"lr_decay": 0.999875,
"segment_size": 17280,
"init_lr_ratio": 1,
"warmup_epochs": 0,
"c_mel": 45,
"c_kl": 1.0
},
"data": {
"max_wav_value": 32768.0,
"sampling_rate": 48000,
"filter_length": 2048,
"hop_length": 480,
"win_length": 2048,
"n_mel_channels": 128,
"mel_fmin": 0.0,
"mel_fmax": null
},
"model": {
"inter_channels": 192,
"hidden_channels": 192,
"filter_channels": 768,
"n_heads": 2,
"n_layers": 6,
"kernel_size": 3,
"p_dropout": 0,
"resblock": "1",
"resblock_kernel_sizes": [3,7,11],
"resblock_dilation_sizes": [[1,3,5], [1,3,5], [1,3,5]],
"upsample_rates": [12,10,2,2],
"upsample_initial_channel": 512,
"upsample_kernel_sizes": [24,20,4,4],
"use_spectral_norm": false,
"gin_channels": 256,
"spk_embed_dim": 109
}
}

96
docs/Changelog_CN.md Normal file
View File

@@ -0,0 +1,96 @@
### 20230813更新
1-常规bug修复
- 保存频率总轮数最低改为1 总轮数最低改为2
- 修复无pretrain模型训练报错
- 增加伴奏人声分离完毕清理显存
- faiss保存路径绝对路径改为相对路径
- 支持路径包含空格(训练集路径+实验名称均支持,不再会报错)
- filelist取消强制utf8编码
- 解决实时变声中开启索引导致的CPU极大占用问题
2-重点更新
- 训练出当前最强开源人声音高提取模型RMVPE并用于RVC的训练、离线/实时推理支持pytorch/onnx/DirectML
- 通过pytorch-dml支持A卡和I卡的
1实时变声2推理3人声伴奏分离4训练暂未支持会切换至CPU训练通过onnx_dml支持rmvpe_gpu的推理
### 20230618更新
- v2增加32k和48k两个新预训练模型
- 修复非f0模型推理报错
- 对于超过一小时的训练集的索引建立环节自动kmeans缩小特征处理以加速索引训练、加入和查询
- 附送一个人声转吉他玩具仓库
- 数据处理剔除异常值切片
- onnx导出选项卡
失败的实验:
- ~~特征检索增加时序维度:寄,没啥效果~~
- ~~特征检索增加PCAR降维可选项数据大用kmeans缩小数据量数据小降维操作耗时比省下的匹配耗时还多~~
- ~~支持onnx推理附带仅推理的小压缩包生成nsf还是需要pytorch~~
- ~~训练时在音高、gender、eq、噪声等方面对输入进行随机增强没啥效果~~
- ~~接入小型声码器调研:寄,效果变差~~
todolist
- ~~训练集音高识别支持crepe已经被RMVPE取代不需要~~
- ~~多进程harvest推理已经被RMVPE取代不需要~~
- ~~crepe的精度支持和RVC-config同步已经被RMVPE取代不需要。支持这个还要同步torchcrepe的库麻烦~~
- 对接F0编辑器
### 20230528更新
- 增加v2的jupyter notebook韩文changelog增加一些环境依赖
- 增加呼吸、清辅音、齿音保护模式
- 支持crepe-full推理
- UVR5人声伴奏分离加上3个去延迟模型和MDX-Net去混响模型增加HP3人声提取模型
- 索引名称增加版本和实验名称
- 人声伴奏分离、推理批量导出增加音频导出格式选项
- 废弃32k模型的训练
### 20230513更新
- 清除一键包内部老版本runtime内残留的lib.infer_pack和uvr5_pack
- 修复训练集预处理伪多进程的bug
- 增加harvest识别音高可选通过中值滤波削弱哑音现象可调整中值滤波半径
- 导出音频增加后处理重采样
- 训练n_cpu进程数从"仅调整f0提取"改为"调整数据预处理和f0提取"
- 自动检测logs文件夹下的index路径提供下拉列表功能
- tab页增加"常见问题解答"也可参考github-rvc-wiki
- 相同路径的输入音频推理增加了音高缓存用途使用harvest音高提取整个pipeline会经历漫长且重复的音高提取过程如果不使用缓存实验不同音色、索引、音高中值滤波半径参数的用户在第一次测试后的等待结果会非常痛苦
### 20230514更新
- 音量包络对齐输入混合可以缓解“输入静音输出小幅度噪声”的问题。如果输入音频背景底噪大则不建议开启默认不开启值为1可视为不开启
- 支持按照指定频率保存提取的小模型假如你想尝试不同epoch下的推理效果但是不想保存所有大checkpoint并且每次都要ckpt手工处理提取小模型这项功能会非常实用
- 通过设置环境变量解决服务端开了系统全局代理导致浏览器连接错误的问题
- 支持v2预训练模型目前只公开了40k版本进行测试另外2个采样率还没有训练完全
- 推理前限制超过1的过大音量
- 微调数据预处理参数
### 20230409更新
- 修正训练参数提升显卡平均利用率A100最高从25%提升至90%左右V100:50%->90%左右2060S:60%->85%左右P40:25%->95%左右,训练速度显著提升
- 修正参数总batch_size改为每张卡的batch_size
- 修正total_epoch最大限制100解锁至1000默认10提升至默认20
- 修复ckpt提取识别是否带音高错误导致推理异常的问题
- 修复分布式训练每个rank都保存一次ckpt的问题
- 特征提取进行nan特征过滤
- 修复静音输入输出随机辅音or噪声的问题老版模型需要重做训练集重训
### 20230416更新
- 新增本地实时变声迷你GUI双击go-realtime-gui.bat启动
- 训练推理均对<50Hz的频段进行滤波过滤
- 训练推理音高提取pyworld最低音高从默认80下降至50,50-80hz间的男声低音不会哑
- WebUI支持根据系统区域变更语言现支持en_USja_JPzh_CNzh_HKzh_SGzh_TW不支持的默认en_US
- 修正部分显卡识别例如V100-16G识别失败P4识别失败
### 20230428更新
- 升级faiss索引设置速度更快质量更高
- 取消total_npy依赖后续分享模型不再需要填写total_npy
- 解锁16系限制4G显存GPU给到4G的推理设置
- 修复部分音频格式下UVR5人声伴奏分离的bug
- 实时变声迷你gui增加对非40k与不懈怠音高模型的支持
### 后续计划:
功能
- 支持多人训练选项卡至多4人
底模
- 收集呼吸wav加入训练集修正呼吸变声电音的问题
- 我们正在训练增加了歌声训练集的底模未来会公开

100
docs/Changelog_EN.md Normal file
View File

@@ -0,0 +1,100 @@
### 2023-08-13
1-Regular bug fixes
- Lower the minimum save frequency to 1 and the minimum total epoch count to 2
- Fix training errors when no pretrained models are used
- Clear GPU memory after vocal/accompaniment separation finishes
- Change the faiss save path from an absolute path to a relative one
- Support paths containing spaces (both the training-set path and the experiment name; no more errors)
- Drop the mandatory UTF-8 encoding for filelists
- Fix the excessive CPU usage caused by enabling index search during real-time voice conversion
2-Key updates
- Trained RMVPE, currently the strongest open-source vocal pitch extraction model, and adopted it for RVC training and offline/real-time inference, supporting PyTorch/ONNX/DirectML
- Support AMD and Intel graphics cards through PyTorch-DML:
(1) real-time voice conversion, (2) inference, (3) vocal/accompaniment separation; (4) training is not yet supported and falls back to CPU. RMVPE GPU inference is supported via ONNX-DML
### 2023-06-18
- New pretrained v2 models: 32k and 48k
- Fix non-f0 model inference errors
- For training sets exceeding 1 hour, automatically run minibatch-kmeans to reduce the feature shape, so that index training, adding, and searching are much faster
- Provide a toy vocal2guitar huggingface space
- Automatically delete outlier short slices from the training set
- ONNX export tab
Failed experiments:
- ~~Feature retrieval: add temporal feature retrieval: not effective~~
- ~~Feature retrieval: add PCAR dimensionality reduction: searching is even slower~~
- ~~Random data augmentation when training: not effective~~
todolist
- ~~Vocos-RVC (tiny vocoder): not effective~~
- ~~Crepe support for training: replaced by RMVPE~~
- ~~Half-precision crepe inference: replaced by RMVPE, and hard to achieve~~
- F0 editor support
### 2023-05-28
- Add v2 jupyter notebook and Korean changelog; fix some environment requirements
- Add voiceless consonant and breath protection mode
- Support crepe-full pitch detect
- UVR5 vocal separation: support dereverb models and de-echo models
- Add experiment name and version on the name of index
- Let users manually choose the export format of output audio in batch voice conversion and UVR5 vocal separation
- v1 32k model training is no more supported
### 2023-05-13
- Clear the redundant codes in the old version of runtime in the one-click-package: lib.infer_pack and uvr5_pack
- Fix pseudo multiprocessing bug in training set preprocessing
- Add an adjustable median-filtering radius for the harvest pitch recognition algorithm
- Support post-processing resampling when exporting audio
- The multiprocessing "n_cpu" setting for training now covers "data preprocessing and f0 extraction" instead of "f0 extraction" only
- Automatically detect the index paths under the logs folder and provide a drop-down list function
- Add "Frequently Asked Questions and Answers" on the tab page (you can also refer to github RVC wiki)
- When inference, harvest pitch is cached when using same input audio path (purpose: using harvest pitch extraction, the entire pipeline will go through a long and repetitive pitch extraction process. If caching is not used, users who experiment with different timbre, index, and pitch median filtering radius settings will experience a very painful waiting process after the first inference)
### 2023-05-14
- Use the volume envelope of the input to mix into or replace the volume envelope of the output (this can alleviate the problem of "silent input, low-amplitude output noise"; if the input audio has loud background noise, enabling it is not recommended; it is off by default, and a value of 1 is treated as off)
- Support saving extracted small models at a specified frequency (if you want to see the performance under different epochs, but do not want to save all large checkpoints and manually extract small models by ckpt-processing every time, this feature will be very practical)
- Resolve the issue of "connection errors" caused by the server's global proxy by setting environment variables
- Supports pre-trained v2 models (currently only 40k versions are publicly available for testing, and the other two sampling rates have not been fully trained yet)
- Limit excessive volume exceeding 1 before inference
- Slightly adjusted the settings of training-set preprocessing
#######################
History changelogs:
### 2023-04-09
- Fixed training parameters to improve GPU utilization rate: A100 increased from 25% to around 90%, V100: 50% to around 90%, 2060S: 60% to around 85%, P40: 25% to around 95%; significantly improved training speed
- Changed parameter: total batch_size is now per GPU batch_size
- Changed total_epoch: maximum limit increased from 100 to 1000; default increased from 10 to 20
- Fixed issue of ckpt extraction recognizing pitch incorrectly, causing abnormal inference
- Fixed issue of distributed training saving ckpt for each rank
- Applied nan feature filtering for feature extraction
- Fixed issue with silent input/output producing random consonants or noise (old models need to retrain with a new dataset)
### 2023-04-16 Update
- Added local real-time voice changing mini-GUI, start by double-clicking go-realtime-gui.bat
- Applied filtering for frequency bands below 50Hz during training and inference
- Lowered the minimum pitch extraction of pyworld from the default 80 to 50 for training and inference, allowing male low-pitched voices between 50-80Hz not to be muted
- WebUI supports changing languages according to system locale (currently supporting en_US, ja_JP, zh_CN, zh_HK, zh_SG, zh_TW; defaults to en_US if not supported)
- Fixed recognition of some GPUs (e.g., V100-16G recognition failure, P4 recognition failure)
### 2023-04-28 Update
- Upgraded faiss index settings for faster speed and higher quality
- Removed dependency on total_npy; future model sharing will not require total_npy input
- Unlocked restrictions for the 16-series GPUs, providing 4GB inference settings for 4GB VRAM GPUs
- Fixed bug in UVR5 vocal accompaniment separation for certain audio formats
- Real-time voice changing mini-GUI now supports non-40k models and models without pitch guidance
### Future Plans:
Features:
- Add option: extract small models for each epoch save
- Add option: export additional mp3 to the specified path during inference
- Support multi-person training tab (up to 4 people)
Base model:
- Collect breathing wav files to add to the training dataset to fix the issue of distorted breath sounds
- We are currently training a base model with an extended singing dataset, which will be released in the future

91
docs/Changelog_KO.md Normal file
View File

@@ -0,0 +1,91 @@
### 2023년 6월 18일 업데이트
- v2 버전에서 새로운 32k와 48k 사전 학습 모델을 추가.
- non-f0 모델들의 추론 오류 수정.
- 학습 세트가 1시간을 넘어가는 경우, 인덱스 생성 단계에서 minibatch-kmeans을 사용해, 학습속도 가속화.
- [huggingface](https://huggingface.co/spaces/lj1995/vocal2guitar)에서 vocal2guitar 제공.
- 데이터 처리 단계에서 이상 값 자동으로 제거.
- ONNX로 내보내는(export) 옵션 탭 추가.
업데이트에 적용되지 않았지만 시도한 것들 :
- 시계열 차원을 추가하여 특징 검색을 진행했지만, 유의미한 효과는 없었습니다.
- PCA 차원 축소를 추가하여 특징 검색을 진행했지만, 유의미한 효과는 없었습니다.
- ONNX 추론을 지원하는 것에 실패했습니다. nsf 생성시, Pytorch가 필요하기 때문입니다.
- 훈련 중에 입력에 대한 음고, 성별, 이퀄라이저, 노이즈 등 무작위로 강화하는 것에, 유의미한 효과는 없었습니다.
추후 업데이트 목록:
- Vocos-RVC (소형 보코더) 통합 예정.
- 학습 단계에 음고 인식을 위한 Crepe 지원 예정.
- Crepe의 정밀도를 REC-config와 동기화하여 지원 예정.
- FO 에디터 지원 예정.
### 2023년 5월 28일 업데이트
- v2 jupyter notebook 추가, 한국어 업데이트 로그 추가, 의존성 모듈 일부 수정.
- 무성음 및 숨소리 보호 모드 추가.
- crepe-full pitch 감지 지원.
- UVR5 보컬 분리: 디버브 및 디-에코 모델 지원.
- index 이름에 experiment 이름과 버전 추가.
- 배치 음성 변환 처리 및 UVR5 보컬 분리 시, 사용자가 수동으로 출력 오디오의 내보내기(export) 형식을 선택할 수 있도록 지원.
- 32k 훈련 모델 지원 종료.
### 2023년 5월 13일 업데이트
- 원클릭 패키지의 이전 버전 런타임 내, 불필요한 코드(lib.infer_pack 및 uvr5_pack) 제거.
- 훈련 세트 전처리의 유사 다중 처리 버그 수정.
- Harvest 피치 인식 알고리즘에 대한 중위수 필터링 반경 조정 추가.
- 오디오 내보낼 때, 후처리 리샘플링 지원.
- 훈련에 대한 다중 처리 "n_cpu" 설정이 "f0 추출"에서 "데이터 전처리 및 f0 추출"로 변경.
- logs 폴더 하의 인덱스 경로를 자동으로 감지 및 드롭다운 목록 기능 제공.
- 탭 페이지에 "자주 묻는 질문과 답변" 추가. (github RVC wiki 참조 가능)
- 동일한 입력 오디오 경로를 사용할 때 추론, Harvest 피치를 캐시.
(주의: Harvest 피치 추출을 사용하면 전체 파이프라인은 길고 반복적인 피치 추출 과정을 거치게됩니다. 캐싱을 하지 않는다면, 첫 inference 이후의 단계에서 timbre, 인덱스, 피치 중위수 필터링 반경 설정 등 대기시간이 엄청나게 길어집니다!)
### 2023년 5월 14일 업데이트
- 입력의 볼륨 캡슐을 사용하여 출력의 볼륨 캡슐을 혼합하거나 대체. (입력이 무음이거나 출력의 노이즈 문제를 최소화 할 수 있습니다. 입력 오디오의 배경 노이즈(소음)가 큰 경우 해당 기능을 사용하지 않는 것이 좋습니다. 기본적으로 비활성화 되어있는 옵션입니다. (1: 비활성화 상태))
- 추출된 소형 모델을 지정된 빈도로 저장하는 기능을 지원. (다양한 에폭 하에서의 성능을 보려고 하지만 모든 대형 체크포인트를 저장하고 매번 ckpt 처리를 통해 소형 모델을 수동으로 추출하고 싶지 않은 경우 이 기능은 매우 유용합니다)
- 환경 변수를 설정하여 서버의 전역 프록시로 인한 "연결 오류" 문제 해결.
- 사전 훈련된 v2 모델 지원. (현재 40k 버전만 테스트를 위해 공개적으로 사용 가능하며, 다른 두 개의 샘플링 비율은 아직 완전히 훈련되지 않아 보류되었습니다.)
- 추론 전, 1을 초과하는 과도한 볼륨 제한.
- 데이터 전처리 매개변수 미세 조정.
### 2023년 4월 9일 업데이트
- GPU 이용률 향상을 위해 훈련 파라미터 수정: A100은 25%에서 약 90%로 증가, V100: 50%에서 약 90%로 증가, 2060S: 60%에서 약 85%로 증가, P40: 25%에서 약 95%로 증가.
훈련 속도가 크게 향상.
- 매개변수 기준 변경: total batch_size는 GPU당 batch_size를 의미.
- total_epoch 변경: 최대 한도가 100에서 1000으로 증가. 기본값이 10에서 20으로 증가.
- ckpt 추출이 피치를 잘못 인식하여 비정상적인 추론을 유발하는 문제 수정.
- 분산 훈련 과정에서 각 랭크마다 ckpt를 저장하는 문제 수정.
- 특성 추출 과정에 나노 특성 필터링 적용.
- 무음 입력/출력이 랜덤하게 소음을 생성하는 문제 수정. (이전 모델은 새 데이터셋으로 다시 훈련해야 합니다)
### 2023년 4월 16일 업데이트
- 로컬 실시간 음성 변경 미니-GUI 추가, go-realtime-gui.bat를 더블 클릭하여 시작.
- 훈련 및 추론 중 50Hz 이하의 주파수 대역에 대해 필터링 적용.
- 훈련 및 추론의 pyworld 최소 피치 추출을 기본 80에서 50으로 낮춤. 이로 인해, 50-80Hz 사이의 남성 저음이 무음화되지 않습니다.
- 시스템 지역에 따른 WebUI 언어 변경 지원. (현재 en_US, ja_JP, zh_CN, zh_HK, zh_SG, zh_TW를 지원하며, 지원되지 않는 경우 기본값은 en_US)
- 일부 GPU의 인식 수정. (예: V100-16G 인식 실패, P4 인식 실패)
### 2023년 4월 28일 업데이트
- Faiss 인덱스 설정 업그레이드로 속도가 더 빨라지고 품질이 향상.
- total_npy에 대한 의존성 제거. 추후의 모델 공유는 total_npy 입력을 필요로 하지 않습니다.
- 16 시리즈 GPU에 대한 제한 해제, 4GB VRAM GPU에 대한 4GB 추론 설정 제공.
- 일부 오디오 형식에 대한 UVR5 보컬 동반 분리에서의 버그 수정.
- 실시간 음성 변경 미니-GUI는 이제 non-40k 및 non-lazy 피치 모델을 지원합니다.
### 추후 계획
Features:
- 다중 사용자 훈련 탭 지원.(최대 4명)
Base model:
- 훈련 데이터셋에 숨소리 wav 파일을 추가하여, 보컬의 호흡이 노이즈로 변환되는 문제 수정.
- 보컬 훈련 세트의 기본 모델을 추가하기 위한 작업을 진행중이며, 이는 향후에 발표될 예정.

80
docs/Changelog_TR.md Normal file
View File

@@ -0,0 +1,80 @@
### 2023-06-18
- Yeni önceden eğitilmiş v2 modelleri: 32k ve 48k
- F0 olmayan model çıkarımlarındaki hatalar düzeltildi
- Eğitim kümesi 1 saatini aşarsa, özelliğin boyutunu azaltmak için otomatik minibatch-kmeans yapılır, böylece indeks eğitimi, ekleme ve arama işlemleri çok daha hızlı olur.
- Oyuncak sesden gitar huggingface alanı sağlanır
- Aykırı kısa kesme eğitim kümesi sesleri otomatik olarak silinir
- Onnx dışa aktarma sekmesi
Başarısız deneyler:
- ~~Özellik çıkarımı: zamansal özellik çıkarımı ekleme: etkili değil~~
- ~~Özellik çıkarımı: PCAR boyut indirgeme ekleme: arama daha da yavaş~~
- ~~Eğitimde rastgele veri artırma: etkili değil~~
Yapılacaklar listesi:
- Vocos-RVC (küçük vokoder)
- Eğitim için Crepe desteği
- Yarı hassas Crepe çıkarımı
- F0 düzenleyici desteği
### 2023-05-28
- v2 jupyter not defteri eklendi, korece değişiklik günlüğü eklendi, bazı ortam gereksinimleri düzeltildi
- Sesli olmayan ünsüz ve nefes koruma modu eklendi
- Crepe-full pitch algılama desteği eklendi
- UVR5 vokal ayırma: dereverb ve de-echo modellerini destekler
- İndeksin adında deney adı ve sürümünü ekleyin
- Toplu ses dönüşüm işlemi ve UVR5 vokal ayırma sırasında çıktı seslerinin ihracat formatını manuel olarak seçme desteği eklendi
- v1 32k model eğitimi artık desteklenmiyor
### 2023-05-13
- Tek tıklamalı paketin eski sürümündeki gereksiz kodlar temizlendi: lib.infer_pack ve uvr5_pack
- Eğitim kümesi ön işlemesinde sahte çok işlem hatası düzeltildi
- Harvest pitch algı algoritması için median filtre yarıçapı ayarlama eklendi
- Ses ihracatı için yeniden örnekleme desteği eklendi
- Eğitimde "n_cpu" için çoklu işlem ayarı "f0 çıkarma" dan "veri ön işleme ve f0 çıkarma" olarak değiştirildi
- İndex yolu otomatik olarak algılanır ve açılır liste işlevi sağlanır
- Sekme sayfasında "Sık Sorulan Sorular ve Cevaplar" eklendi (ayrıca github RVC wiki'ye bakabilirsiniz)
- Çıkarım sırasında, aynı giriş sesi yolu kullanıldığında harvest pitch önbelleğe alınır (amaç: harvest pitch çıkarma kullanılırken, tüm işlem süreci uzun ve tekrarlayan bir pitch çıkarma sürecinden geçer. Önbellek kullanılmazsa, farklı timbre, index ve pitch median filtre yarıçapı ayarlarıyla deney yapan kullanıcılar ilk çıkarımın ardından çok acı verici bir bekleme süreci yaşayacaktır)
### 2023-05-14
- Girişin ses hacmini çıkışın ses hacmiyle karıştırma veya değiştirme seçeneği eklendi ( "giriş sessiz ve çıkış düşük amplitütlü gürültü" sorununu hafifletmeye yardımcı olur. Giriş sesinin arka plan gürültüsü yüksekse, önerilmez ve varsayılan olarak kapalıdır (1 kapalı olarak düşünülebilir)
- Çıkarılan küçük modellerin belirli bir sıklıkta kaydedilmesini destekler (farklı epoch altındaki performansı görmek istiyorsanız, ancak tüm büyük kontrol noktalarını kaydetmek istemiyor ve her seferinde ckpt-processing ile küçük modelleri manuel olarak çıkarmak istemiyorsanız, bu özellik oldukça pratik olacaktır)
- Sunucunun genel proxy'sinin neden olduğu "bağlantı hataları" sorununu, çevre değişkenleri ayarlayarak çözer
- Önceden eğitilmiş v2 modelleri destekler (şu anda sadece 40k sürümleri test için kamuya açıktır ve diğer iki örnekleme hızı henüz tam olarak eğitilmemiştir)
- İnferans öncesi aşırı ses hacmi 1'i aşmasını engeller
- Eğitim kümesinin ayarlarını hafifçe düzeltildi
#######################
Geçmiş değişiklik günlükleri:
### 2023-04-09
- GPU kullanım oranını artırmak için eğitim parametreleri düzeltilerek: A100% 25'ten yaklaşık 90'a, V100: %50'den yaklaşık 90'a, 2060S: %60'dan yaklaşık 85'e, P40: %25'ten yaklaşık 95'e; eğitim hızı önemli ölçüde artırıldı
- Parametre değiştirildi: toplam batch_size artık her GPU için batch_size
- Toplam_epoch değiştirildi: maksimum sınır 100'den 1000'e yükseltildi; varsayılan 10'dan 20'ye yükseltildi
- Ckpt çıkarımı sırasında pitch yanlış tanıma nedeniyle oluşan anormal çıkarım sorunu
düzeltildi
- Dağıtılmış eğitimde her sıra için ckpt kaydetme sorunu düzeltildi
- Özellik çıkarımında nan özellik filtreleme uygulandı
- Giriş/çıkış sessiz üretildiğinde rastgele ünsüzler veya gürültü üretme sorunu düzeltildi (eski modeller yeni bir veri kümesiyle yeniden eğitilmelidir)
### 2023-04-16 Güncellemesi
- Yerel gerçek zamanlı ses değiştirme mini-GUI eklendi, go-realtime-gui.bat dosyasını çift tıklatarak başlayın
- Eğitim ve çıkarımda 50Hz'nin altındaki frekans bantları için filtreleme uygulandı
- Eğitim ve çıkarımda pyworld'ün varsayılan 80'den 50'ye düşürüldü, böylece 50-80Hz aralığındaki erkek düşük perdeli seslerin sessiz kalmaması sağlandı
- WebUI, sistem yereli diline göre dil değiştirme desteği ekledi (şu anda en_US, ja_JP, zh_CN, zh_HK, zh_SG, zh_TW'yi desteklemektedir; desteklenmezse varsayılan olarak en_US kullanılır)
- Bazı GPU'ların tanınmasında sorun giderildi (örneğin, V100-16G tanınma hatası, P4 tanınma hatası)
### 2023-04-28 Güncellemesi
- Daha hızlı hız ve daha yüksek kalite için faiss indeks ayarları yükseltildi
- total_npy bağımlılığı kaldırıldı; gelecekteki model paylaşımı total_npy girişi gerektirmeyecek
- 16 serisi GPU'lar için kısıtlamalar kaldırıldı, 4GB VRAM GPU'ları için 4GB çıkarım ayarları sağlanıyor
- Belirli ses biçimleri için UVR5 vokal eşlik ayırma hatası düzeltildi
- Gerçek zamanlı ses değiştirme mini-GUI, 40k dışında ve tembelleştirilmemiş pitch modellerini destekler hale geldi
### Gelecek Planlar:
Özellikler:
- Her epoch kaydetmek için küçük modelleri çıkarma seçeneği ekle
- Çıkarım sırasında çıktı sesleri için belirli bir yola ekstra mp3'leri kaydetme seçeneği ekle
- Birden çok kişi eğitim sekmesini destekle (en fazla 4 kişiye kadar)

View File

@@ -1,14 +1,14 @@
<div align="center">
<h1>Retrieval-based-Voice-Conversion-WebUI</h1>
An easy-to-use SVC framework based on VITS.<br><br>
An easy-to-use Voice Conversion framework based on VITS.<br><br>
[![madewithlove](https://forthebadge.com/images/badges/built-with-love.svg)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI)
[![madewithlove](https://forthebadge.com/images/badges/built-with-love.svg)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)
<img src="https://counter.seku.su/cmoe?name=rvc&theme=r34" /><br>
[![Open In Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&color=525252)](https://colab.research.google.com/github/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Retrieval_based_Voice_Conversion_WebUI.ipynb)
[![Licence](https://img.shields.io/github/license/liujing04/Retrieval-based-Voice-Conversion-WebUI?style=for-the-badge)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/%E4%BD%BF%E7%94%A8%E9%9C%80%E9%81%B5%E5%AE%88%E7%9A%84%E5%8D%8F%E8%AE%AE-LICENSE.txt)
[![Open In Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&color=525252)](https://colab.research.google.com/github/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/Retrieval_based_Voice_Conversion_WebUI.ipynb)
[![Licence](https://img.shields.io/github/license/RVC-Project/Retrieval-based-Voice-Conversion-WebUI?style=for-the-badge)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/LICENSE)
[![Huggingface](https://img.shields.io/badge/🤗%20-Spaces-yellow.svg?style=for-the-badge)](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/)
[![Discord](https://img.shields.io/badge/RVC%20Developers-Discord-7289DA?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/HcsmBBGyVk)
@@ -16,36 +16,49 @@ An easy-to-use SVC framework based on VITS.<br><br>
</div>
------
[**Changelog**](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Changelog_CN.md)
[**Changelog**](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/docs/Changelog_EN.md) | [**FAQ (Frequently Asked Questions)**](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/wiki/FAQ-(Frequently-Asked-Questions))
[**English**](./README.en.md) | [**中文简体**](../README.md) | [**日本語**](./README.ja.md)
[**English**](./README.en.md) | [**中文简体**](../README.md) | [**日本語**](./README.ja.md) | [**한국어**](./README.ko.md) ([**韓國語**](./README.ko.han.md)) | [**Türkçe**](./README.tr.md)
> Check our [Demo Video](https://www.bilibili.com/video/BV1pm4y1z7Gm/) here!
> Realtime Voice Conversion Software using RVC : [w-okada/voice-changer](https://github.com/w-okada/voice-changer)
Check our [Demo Video](https://www.bilibili.com/video/BV1pm4y1z7Gm/) here!
Realtime Voice Conversion Software using RVC : [w-okada/voice-changer](https://github.com/w-okada/voice-changer)
> The dataset for the pre-training model uses nearly 50 hours of high quality VCTK open source dataset.
> High quality licensed song datasets will be added to training-set one after another for your use, without worrying about copyright infringement.
> Please look forward to the pretrained base model of RVCv3, which has larger parameters, larger data, better results, unchanged inference speed, and requires less training data for training.
## Summary
This repository has the following features:
+ Reduce tone leakage by replacing source feature to training-set feature using top1 retrieval;
+ Reduce tone leakage by replacing the source feature with the training-set feature using top1 retrieval;
+ Easy and fast training, even on relatively poor graphics cards;
+ Training with a small amount of data also obtains relatively good results (>=10min low noise speech recommended);
+ Supporting model fusion to change timbres (using ckpt processing tab->ckpt merge);
+ Easy-to-use Webui interface;
+ Use the UVR5 model to quickly separate vocals and instruments.
+ The dataset for the pre-training model uses nearly 50 hours of high quality VCTK open source dataset, and high quality licensed song datasets will be added to training-set one after another for your use, without worrying about copyright infringement.
## Preparing the environment
We recommend you install the dependencies through poetry.
+ Use the most powerful vocal pitch extraction algorithm [InterSpeech2023-RMVPE](#Credits) to prevent muted-sound problems. Provides significantly better results while running faster, with even lower resource consumption than Crepe_full.
+ AMD/Intel graphics cards acceleration supported.
The following commands need to be executed in the environment of Python version 3.8 or higher:
## Preparing the environment
The following commands need to be executed in the environment of Python version 3.8 or higher.
(Windows/Linux)
First install the main dependencies through pip:
```bash
# Install PyTorch-related core dependencies, skip if installed
# Reference: https://pytorch.org/get-started/locally/
pip install torch torchvision torchaudio
#For Windows + Nvidia Ampere Architecture(RTX30xx), you need to specify the cuda version corresponding to pytorch according to the experience of https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/issues/21
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
#For Windows + Nvidia Ampere Architecture(RTX30xx), you need to specify the cuda version corresponding to pytorch according to the experience of https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/issues/21
#pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
```
Then can use poetry to install the other dependencies:
```bash
# Install the Poetry dependency management tool, skip if installed
# Reference: https://python-poetry.org/docs/#installation
curl -sSL https://install.python-poetry.org | python3 -
@@ -53,12 +66,22 @@ curl -sSL https://install.python-poetry.org | python3 -
# Install the project dependencies
poetry install
```
You can also use pip to install the dependencies
**Notice**: `faiss 1.7.2` will raise Segmentation Fault: 11 under `macOS`, please change the corresponding line in `requirements.txt` to `faiss-cpu==1.7.0`
You can also use pip to install them:
```bash
pip install -r requirements.txt
# For Nvidia graphics cards
pip install -r requirements.txt
# For AMD/Intel graphics cards
pip install -r requirements-dml.txt
```
------
Mac users can install dependencies via `run.sh`:
```bash
sh ./run.sh
```
## Preparation of other Pre-models
@@ -74,24 +97,47 @@ hubert_base.pt
./uvr5_weights
#If you are using Windows, you may also need this dictionary, skip if FFmpeg is installed
#If you want to test the v2 version model (it changes the input from the 256-dimensional feature of 9-layer Hubert+final_proj to the 768-dimensional feature of 12-layer Hubert, and adds 3 period discriminators), you will need to download these additional pretrained models
./pretrained_v2
#If you are using Windows, you may also need these two files, skip if FFmpeg and FFprobe are installed
ffmpeg.exe
https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe
ffprobe.exe
https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe
If you want to use the latest SOTA RMVPE vocal pitch extraction algorithm, you need to download the RMVPE weights and place them in the RVC root directory
https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/rmvpe.pt
For AMD/Intel graphics cards users you need download:
https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/rmvpe.onnx
```
Then use this command to start Webui:
```bash
python infer-web.py
```
If you are using Windows, you can download and extract `RVC-beta.7z` to use RVC directly and use `go-web.bat` to start Webui.
We will develop an English version of the WebUI in 2 weeks.
There's also a tutorial on RVC in Chinese and you can check it out if needed.
If you are using Windows or macOS, you can download and extract `RVC-beta.7z` to use RVC directly by using `go-web.bat` on windows or `sh ./run.sh` on macOS to start Webui.
## Credits
+ [ContentVec](https://github.com/auspicious3000/contentvec/)
+ [VITS](https://github.com/jaywalnut310/vits)
+ [HIFIGAN](https://github.com/jik876/hifi-gan)
+ [Gradio](https://github.com/gradio-app/gradio)
+ [FFmpeg](https://github.com/FFmpeg/FFmpeg)
+ [Ultimate Vocal Remover](https://github.com/Anjok07/ultimatevocalremovergui)
+ [audio-slicer](https://github.com/openvpi/audio-slicer)
+ [Vocal pitch extraction:RMVPE](https://github.com/Dream-High/RMVPE)
+ The pretrained model is trained and tested by [yxlllc](https://github.com/yxlllc/RMVPE) and [RVC-Boss](https://github.com/RVC-Boss).
## Thanks to all contributors for their efforts
<a href="https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/graphs/contributors" target="_blank">
<img src="https://contrib.rocks/image?repo=liujing04/Retrieval-based-Voice-Conversion-WebUI" />
<a href="https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/graphs/contributors" target="_blank">
<img src="https://contrib.rocks/image?repo=RVC-Project/Retrieval-based-Voice-Conversion-WebUI" />
</a>

View File

@@ -3,12 +3,12 @@
<h1>Retrieval-based-Voice-Conversion-WebUI</h1>
VITSに基づく使いやすい音声変換voice changerframework<br><br>
[![madewithlove](https://forthebadge.com/images/badges/built-with-love.svg)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI)
[![madewithlove](https://forthebadge.com/images/badges/built-with-love.svg)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)
<img src="https://counter.seku.su/cmoe?name=rvc&theme=r34" /><br>
[![Open In Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&color=525252)](https://colab.research.google.com/github/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Retrieval_based_Voice_Conversion_WebUI.ipynb)
[![Licence](https://img.shields.io/github/license/liujing04/Retrieval-based-Voice-Conversion-WebUI?style=for-the-badge)](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/%E4%BD%BF%E7%94%A8%E9%9C%80%E9%81%B5%E5%AE%88%E7%9A%84%E5%8D%8F%E8%AE%AE-LICENSE.txt)
[![Open In Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&color=525252)](https://colab.research.google.com/github/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/Retrieval_based_Voice_Conversion_WebUI.ipynb)
[![Licence](https://img.shields.io/github/license/RVC-Project/Retrieval-based-Voice-Conversion-WebUI?style=for-the-badge)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/LICENSE)
[![Huggingface](https://img.shields.io/badge/🤗%20-Spaces-yellow.svg?style=for-the-badge)](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/)
[![Discord](https://img.shields.io/badge/RVC%20Developers-Discord-7289DA?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/HcsmBBGyVk)
@@ -17,62 +17,60 @@ VITSに基づく使いやすい音声変換voice changerframework<br><br>
------
[**更新日誌**](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Changelog_CN.md)
[**更新日誌**](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/docs/Changelog_CN.md)
[**English**](./README.en.md) | [**中文简体**](../README.md) | [**日本語**](./README.ja.md)
[**English**](./README.en.md) | [**中文简体**](../README.md) | [**日本語**](./README.ja.md) | [**한국어**](./README.ko.md) ([**韓國語**](./README.ko.han.md)) | [**Türkçe**](./README.tr.md)
> デモ動画は[こちら](https://www.bilibili.com/video/BV1pm4y1z7Gm/)でご覧ください
> デモ動画は[こちら](https://www.bilibili.com/video/BV1pm4y1z7Gm/)でご覧ください
> RVCによるリアルタイム音声変換: [w-okada/voice-changer](https://github.com/w-okada/voice-changer)
> 基底modelを訓練(training)したのは、約50時間の高品質なオープンソースデータセット。著作権侵害を心配することなく使用できるように
> 著作権侵害を心配することなく使用できるように、基底モデルは約50時間の高品質なオープンソースデータセットで訓練されています
> 今後次々と使用許可のある高品質歌声資料集を追加し、基底modelを訓練する。
> 今後も、次々と使用許可のある高品質歌声資料集を追加し、基底モデルを訓練する予定です
## はじめに
repoは下記の特徴があります
リポジトリには下記の特徴があります
+ 調子(tone)の漏洩が下がれるためtop1検索で源特徴量を訓練集特徴量に置換
+ 古い又は安いGPUでも高速に訓練でき
+ 小さい訓練集でもかなりいいmodelを得られる(10分以上の低noise音声を推奨)
+ modelを融合し音色をmergeできる(ckpt processing->ckpt merge使用)
+ 使いやすいWebUI
+ UVR5 Modelも含めるため人声とBGMを素早く分離でき
+ Top1検索を用いることで、生の特徴量を訓練用データセット特徴量に変換し、トーンリーケージを削減します。
+ 比較的貧弱なGPUでも高速かつ簡単に訓練できます。
+ 少量のデータセットからでも、比較的良い結果を得ることができます。10分以上のイズの少ない音声を推奨します。
+ モデルを融合することで、音声を混ぜることができます。(ckpt processingタブの、ckpt merge使用します。)
+ 使いやすいWebUI
+ UVR5 Modelも含んでいるため、人の声とBGMを素早く分離できます。
## 環境構築
poetryで依存関係をinstallすることをお勧めします。
Poetryで依存関係をインストールすることをお勧めします。
下記のcommandsは、Python3.8以上の環境で実行する必要があります:
下記のコマンドは、Python3.8以上の環境で実行する必要があります:
```bash
# PyTorch関連の依存関係をinstall。install済の場合はskip
# PyTorch関連の依存関係をインストール。インストール済の場合は省略。
# 参照先: https://pytorch.org/get-started/locally/
pip install torch torchvision torchaudio
#Windows Nvidia Ampere Architecture(RTX30xx)の場合、 #21 に従い、pytorchに対応するcuda versionを指定する必要があります。
#pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
# PyTorch関連の依存関係をinstall。install済の場合はskip
# PyTorch関連の依存関係をインストール。インストール済の場合は省略。
# 参照先: https://python-poetry.org/docs/#installation
curl -sSL https://install.python-poetry.org | python3 -
# Poetry経由で依存関係をinstall
# Poetry経由で依存関係をインストール
poetry install
```
pipでも依存関係のinstallが可能です:
**注意**:`faiss 1.7.2``macOS``Segmentation Fault: 11`を起こすので、`requirements.txt`の該当行を `faiss-cpu==1.7.0`に変更してください。
pipでも依存関係のインストールが可能です:
```bash
pip install -r requirements.txt
```
## 基底modelsを準備
RVCは推論/訓練のために、様々な事前訓練を行った基底modelsが必要です。
RVCは推論/訓練のために、様々な事前訓練を行った基底モデルを必要とします。
modelsは[Hugging Face space](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/)からダウンロードできます。
以下は、RVCに必要な基底modelsやその他のfilesの一覧です。
以下は、RVCに必要な基底モデルやその他のファイルの一覧です。
```bash
hubert_base.pt
@@ -80,16 +78,16 @@ hubert_base.pt
./uvr5_weights
# ffmpegがすでにinstallされている場合はskip
# ffmpegがすでにinstallされている場合は省略
./ffmpeg
```
その後、下記のcommandでWebUIを起動
その後、下記のコマンドでWebUIを起動します。
```bash
python infer-web.py
```
Windowsをお使いの方は、直接`RVC-beta.7z`をダウンロード後に展開し、`go-web.bat`clickでWebUIを起動。(7zipが必要です)
Windowsをお使いの方は、直接`RVC-beta.7z`をダウンロード後に展開し、`go-web.bat`クリックすることで、WebUIを起動することができます。(7zipが必要です)
また、repoに[小白简易教程.doc](./小白简易教程.doc)がありますので、参考にしてください(中国語版のみ)。
また、リポジトリに[小白简易教程.doc](./小白简易教程.doc)がありますので、参考にしてください(中国語版のみ)。
## 参考プロジェクト
+ [ContentVec](https://github.com/auspicious3000/contentvec/)
@@ -100,7 +98,7 @@ Windowsをお使いの方は、直接に`RVC-beta.7z`をダウンロード後に
+ [Ultimate Vocal Remover](https://github.com/Anjok07/ultimatevocalremovergui)
+ [audio-slicer](https://github.com/openvpi/audio-slicer)
## 貢献者(contributer)の皆様の尽力に感謝します
<a href="https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/graphs/contributors" target="_blank">
<img src="https://contrib.rocks/image?repo=liujing04/Retrieval-based-Voice-Conversion-WebUI" />
## 貢献者(contributor)の皆様の尽力に感謝します
<a href="https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/graphs/contributors" target="_blank">
<img src="https://contrib.rocks/image?repo=RVC-Project/Retrieval-based-Voice-Conversion-WebUI" />
</a>

100
docs/README.ko.han.md Normal file
View File

@@ -0,0 +1,100 @@
<div align="center">
<h1>Retrieval-based-Voice-Conversion-WebUI</h1>
VITS基盤의 簡單하고使用하기 쉬운音聲變換틀<br><br>
[![madewithlove](https://forthebadge.com/images/badges/built-with-love.svg)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)
<img src="https://counter.seku.su/cmoe?name=rvc&theme=r34" /><br>
[![Open In Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&color=525252)](https://colab.research.google.com/github/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/Retrieval_based_Voice_Conversion_WebUI.ipynb)
[![Licence](https://img.shields.io/github/license/RVC-Project/Retrieval-based-Voice-Conversion-WebUI?style=for-the-badge)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/LICENSE)
[![Huggingface](https://img.shields.io/badge/🤗%20-Spaces-yellow.svg?style=for-the-badge)](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/)
[![Discord](https://img.shields.io/badge/RVC%20Developers-Discord-7289DA?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/HcsmBBGyVk)
</div>
------
[**更新日誌**](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/docs/Changelog_KO.md)
[**English**](./README.en.md) | [**中文简体**](../README.md) | [**日本語**](./README.ja.md) | [**한국어**](./README.ko.md) ([**韓國語**](./README.ko.han.md)) | [**Türkçe**](./README.tr.md)
> [示範映像](https://www.bilibili.com/video/BV1pm4y1z7Gm/)을 確認해 보세요!
> RVC를活用한實時間音聲變換: [w-okada/voice-changer](https://github.com/w-okada/voice-changer)
> 基本모델은 50時間假量의 高品質 오픈 소스 VCTK 데이터셋을 使用하였으므로, 著作權上의 念慮가 없으니 安心하고 使用하시기 바랍니다.
> 著作權問題가 없는 高品質의 노래를 以後에도 繼續해서 訓練할 豫定입니다.
## 紹介
本Repo는 다음과 같은 特徵을 가지고 있습니다:
+ top1檢索을利用하여 入力音色特徵을 訓練세트音色特徵으로 代替하여 音色의漏出을 防止;
+ 相對的으로 낮은性能의 GPU에서도 빠른訓練可能;
+ 적은量의 데이터로 訓練해도 좋은 結果를 얻을 수 있음 (最小10分以上의 低雜음音聲데이터를 使用하는 것을 勸獎);
+ 모델融合을通한 音色의 變調可能 (ckpt處理탭->ckpt混合選擇);
+ 使用하기 쉬운 WebUI (웹 使用者인터페이스);
+ UVR5 모델을 利用하여 목소리와 背景音樂의 빠른 分離;
## 環境의準備
poetry를通해 依存를設置하는 것을 勸獎합니다.
다음命令은 Python 버전3.8以上의環境에서 實行되어야 합니다:
```bash
# PyTorch 關聯主要依存設置, 이미設置되어 있는 境遇 건너뛰기 可能
# 參照: https://pytorch.org/get-started/locally/
pip install torch torchvision torchaudio
# Windows + Nvidia Ampere Architecture(RTX30xx)를 使用하고 있다面, #21 에서 명시된 것과 같이 PyTorch에 맞는 CUDA 버전을 指定해야 합니다.
#pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
# Poetry 設置, 이미設置되어 있는 境遇 건너뛰기 可能
# Reference: https://python-poetry.org/docs/#installation
curl -sSL https://install.python-poetry.org | python3 -
# 依存設置
poetry install
```
pip를 活用하여依存를 設置하여도 無妨합니다.
```bash
pip install -r requirements.txt
```
## 其他預備모델準備
RVC 모델은 推論과訓練을 依하여 다른 預備모델이 必要합니다.
[Huggingface space](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/)를 通해서 다운로드 할 수 있습니다.
다음은 RVC에 必要한 預備모델 및 其他 파일 目錄입니다:
```bash
hubert_base.pt
./pretrained
./uvr5_weights
# Windows를 使用하는境遇 이 사전도 必要할 수 있습니다. FFmpeg가 設置되어 있으면 건너뛰어도 됩니다.
ffmpeg.exe
```
그後 以下의 命令을 使用하여 WebUI를 始作할 수 있습니다:
```bash
python infer-web.py
```
Windows를 使用하는境遇 `RVC-beta.7z`를 다운로드 및 壓縮解除하여 RVC를 直接使用하거나 `go-web.bat`을 使用하여 WebUi를 直接할 수 있습니다.
## 參考
+ [ContentVec](https://github.com/auspicious3000/contentvec/)
+ [VITS](https://github.com/jaywalnut310/vits)
+ [HIFIGAN](https://github.com/jik876/hifi-gan)
+ [Gradio](https://github.com/gradio-app/gradio)
+ [FFmpeg](https://github.com/FFmpeg/FFmpeg)
+ [Ultimate Vocal Remover](https://github.com/Anjok07/ultimatevocalremovergui)
+ [audio-slicer](https://github.com/openvpi/audio-slicer)
## 모든寄與者분들의勞力에感謝드립니다
<a href="https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/graphs/contributors" target="_blank">
<img src="https://contrib.rocks/image?repo=RVC-Project/Retrieval-based-Voice-Conversion-WebUI" />
</a>

112
docs/README.ko.md Normal file
View File

@@ -0,0 +1,112 @@
<div align="center">
<h1>Retrieval-based-Voice-Conversion-WebUI</h1>
VITS 기반의 간단하고 사용하기 쉬운 음성 변환 프레임워크.<br><br>
[![madewithlove](https://forthebadge.com/images/badges/built-with-love.svg)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)
<img src="https://counter.seku.su/cmoe?name=rvc&theme=r34" /><br>
[![Open In Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&color=525252)](https://colab.research.google.com/github/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/Retrieval_based_Voice_Conversion_WebUI.ipynb)
[![Licence](https://img.shields.io/github/license/RVC-Project/Retrieval-based-Voice-Conversion-WebUI?style=for-the-badge)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/LICENSE)
[![Huggingface](https://img.shields.io/badge/🤗%20-Spaces-yellow.svg?style=for-the-badge)](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/)
[![Discord](https://img.shields.io/badge/RVC%20Developers-Discord-7289DA?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/HcsmBBGyVk)
</div>
---
[**업데이트 로그**](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/docs/Changelog_KO.md)
[**English**](./README.en.md) | [**中文简体**](../README.md) | [**日本語**](./README.ja.md) | [**한국어**](./README.ko.md) ([**韓國語**](./README.ko.han.md)) | [**Türkçe**](./README.tr.md)
> [데모 영상](https://www.bilibili.com/video/BV1pm4y1z7Gm/)을 확인해 보세요!
> RVC를 활용한 실시간 음성변환: [w-okada/voice-changer](https://github.com/w-okada/voice-changer)
> 기본 모델은 50시간 가량의 고퀄리티 오픈 소스 VCTK 데이터셋을 사용하였으므로, 저작권상의 염려가 없으니 안심하고 사용하시기 바랍니다.
> 저작권 문제가 없는 고퀄리티의 노래를 이후에도 계속해서 훈련할 예정입니다.
## 소개
본 Repo는 다음과 같은 특징을 가지고 있습니다:
- top1 검색을 이용하여 입력 음색 특징을 훈련 세트 음색 특징으로 대체하여 음색의 누출을 방지;
- 상대적으로 낮은 성능의 GPU에서도 빠른 훈련 가능;
- 적은 양의 데이터로 훈련해도 좋은 결과를 얻을 수 있음 (최소 10분 이상의 저잡음 음성 데이터를 사용하는 것을 권장);
- 모델 융합을 통한 음색의 변조 가능 (ckpt 처리 탭->ckpt 병합 선택);
- 사용하기 쉬운 WebUI (웹 인터페이스);
- UVR5 모델을 이용하여 목소리와 배경음악의 빠른 분리;
## 환경의 준비
poetry를 통해 dependecies를 설치하는 것을 권장합니다.
다음 명령은 Python 버전 3.8 이상의 환경에서 실행되어야 합니다:
```bash
# PyTorch 관련 주요 dependencies 설치, 이미 설치되어 있는 경우 건너뛰기 가능
# 참조: https://pytorch.org/get-started/locally/
pip install torch torchvision torchaudio
# Windows + Nvidia Ampere Architecture(RTX30xx)를 사용하고 있다면, https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/issues/21 에서 명시된 것과 같이 PyTorch에 맞는 CUDA 버전을 지정해야 합니다.
#pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
# Poetry 설치, 이미 설치되어 있는 경우 건너뛰기 가능
# Reference: https://python-poetry.org/docs/#installation
curl -sSL https://install.python-poetry.org | python3 -
# Dependecies 설치
poetry install
```
pip를 활용하여 dependencies를 설치하여도 무방합니다.
```bash
pip install -r requirements.txt
```
## 기타 사전 모델 준비
RVC 모델은 추론과 훈련을 위하여 다른 사전 모델이 필요합니다.
[Huggingface space](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/)를 통해서 다운로드 할 수 있습니다.
다음은 RVC에 필요한 사전 모델 및 기타 파일 목록입니다:
```bash
hubert_base.pt
./pretrained
./uvr5_weights
# Windows를 사용하는 경우 이 사전도 필요할 수 있습니다. FFmpeg가 설치되어 있으면 건너뛰어도 됩니다.
ffmpeg.exe
```
그 후 이하의 명령을 사용하여 WebUI를 시작할 수 있습니다:
```bash
python infer-web.py
```
Windows를 사용하는 경우 `RVC-beta.7z`를 다운로드 및 압축 해제하여 RVC를 직접 사용하거나 `go-web.bat`을 사용하여 WebUi를 시작할 수 있습니다.
## 참고
- [ContentVec](https://github.com/auspicious3000/contentvec/)
- [VITS](https://github.com/jaywalnut310/vits)
- [HIFIGAN](https://github.com/jik876/hifi-gan)
- [Gradio](https://github.com/gradio-app/gradio)
- [FFmpeg](https://github.com/FFmpeg/FFmpeg)
- [Ultimate Vocal Remover](https://github.com/Anjok07/ultimatevocalremovergui)
- [audio-slicer](https://github.com/openvpi/audio-slicer)
## 모든 기여자 분들의 노력에 감사드립니다.
<a href="https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/graphs/contributors" target="_blank">
<img src="https://contrib.rocks/image?repo=RVC-Project/Retrieval-based-Voice-Conversion-WebUI" />
</a>

126
docs/README.tr.md Normal file
View File

@@ -0,0 +1,126 @@
# Retrieval-based-Voice-Conversion-WebUI
<div align="center">
<h1>Retrieval Tabanlı Ses Dönüşümü Web Arayüzü</h1>
Kolay kullanılabilen VITS tabanlı bir Ses Dönüşümü çerçevesi.<br><br>
[![madewithlove](https://forthebadge.com/images/badges/built-with-love.svg)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)
<img src="https://counter.seku.su/cmoe?name=rvc&theme=r34" /><br>
[![Open In Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&color=525252)](https://colab.research.google.com/github/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/Retrieval_based_Voice_Conversion_WebUI.ipynb)
[![Licence](https://img.shields.io/github/license/RVC-Project/Retrieval-based-Voice-Conversion-WebUI?style=for-the-badge)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/LICENSE)
[![Huggingface](https://img.shields.io/badge/🤗%20-Spaces-yellow.svg?style=for-the-badge)](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/)
[![Discord](https://img.shields.io/badge/RVC%20Developers-Discord-7289DA?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/HcsmBBGyVk)
</div>
------
[**Changelog**](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/docs/Changelog_TR.md) | [**FAQ (Frequently Asked Questions)**](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/wiki/FAQ-(Frequently-Asked-Questions))
[**English**](./README.en.md) | [**中文简体**](../README.md) | [**日本語**](./README.ja.md) | [**한국어**](./README.ko.md) ([**韓國語**](./README.ko.han.md)) | [**Türkçe**](./README.tr.md)
Check out the [demo video](https://www.bilibili.com/video/BV1pm4y1z7Gm/)!
Real-time voice conversion software using RVC: [w-okada/voice-changer](https://github.com/w-okada/voice-changer)
> An online demo using RVC that converts vocals into an acoustic guitar sound: https://huggingface.co/spaces/lj1995/vocal2guitar
> Vocal2Guitar demo video: https://www.bilibili.com/video/BV19W4y1D7tT/
> Nearly 50 hours of the high-quality, open-source VCTK dataset were used to train the base model.
> Licensed, high-quality song datasets will be added one after another for your use, free of copyright concerns.
## Summary
This repository has the following features:
+ Reducing timbre leakage by replacing the source feature with the training-set feature using top1 retrieval;
+ Easy and fast training, even on relatively weak graphics cards;
+ Fairly good results even with small amounts of data (at least 10 minutes of low-noise speech is recommended);
+ Support for model merging to morph timbres (use ckpt merge in the ckpt processing tab);
+ Easy-to-use WebUI;
+ Fast separation of vocals and instruments using the UVR5 model;
+ Use of the most powerful high-pitch extraction algorithm, [InterSpeech2023-RMVPE](#Credits), to prevent the muted-audio problem. It gives (significantly) the best results and runs faster, with lower resource consumption, than Crepe_full.
## Preparing the Environment
The following commands must be run in an environment with Python 3.8 or higher.
(Windows/Linux)
First install the main dependencies via pip:
```bash
# Install the PyTorch-related core dependencies; skip if already installed
# Reference: https://pytorch.org/get-started/locally/
pip install torch torchvision torchaudio
# For Windows + Nvidia Ampere architecture (RTX30xx), based on the experience reported in https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/issues/21, you may need to specify the CUDA version corresponding to PyTorch
#pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
```
Then you can install the remaining dependencies using poetry:
```bash
# Install the Poetry dependency-management tool; skip if already installed
# Reference: https://python-poetry.org/docs/#installation
curl -sSL https://install.python-poetry.org | python3 -
# Install the project dependencies
poetry install
```
Alternatively, you can install them with pip:
```bash
pip install -r requirements.txt
```
------
Mac users can install the dependencies via `run.sh`:
```bash
sh ./run.sh
```
## Preparation of Other Pre-trained Models
RVC needs other pre-trained models for inference and training.
You need to download them from our [Huggingface space](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/).
Here is a list of the pre-trained models and other files RVC needs:
```bash
hubert_base.pt
./pretrained
./uvr5_weights
# If you want to test the v2 model (the v2 version changes the input from the 256-dimensional feature of 9-layer Hubert+final_proj to the 768-dimensional feature of 12-layer Hubert, and adds 3 period discriminators), you will need to download these additional files
./pretrained_v2
# If you are on Windows, you may also need this file; skip if FFmpeg is installed
ffmpeg.exe
```
Then use this command to start the WebUI:
```bash
python infer-web.py
```
If you are using Windows or macOS, you can download and extract RVC-beta.7z to use RVC directly, starting the WebUI with `go-web.bat` on Windows or `sh ./run.sh` on macOS.
There is also a guide on RVC in the repository that you can consult if you need it.
## Credits
+ [ContentVec](https://github.com/auspicious3000/contentvec/)
+ [VITS](https://github.com/jaywalnut310/vits)
+ [HIFIGAN](https://github.com/jik876/hifi-gan)
+ [Gradio](https://github.com/gradio-app/gradio)
+ [FFmpeg](https://github.com/FFmpeg/FFmpeg)
+ [Ultimate Vocal Remover](https://github.com/Anjok07/ultimatevocalremovergui)
+ [audio-slicer](https://github.com/openvpi/audio-slicer)
+ [Vocal pitch extraction:RMVPE](https://github.com/Dream-High/RMVPE)
+ The pre-trained model was trained and tested by [yxlllc](https://github.com/yxlllc/RMVPE) and [RVC-Boss](https://github.com/RVC-Boss).
## Thanks to all contributors for their efforts
<a href="https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/graphs/contributors" target="_blank">
<img src="https://contrib.rocks/image?repo=RVC-Project/Retrieval-based-Voice-Conversion-WebUI" />
</a>

docs/faiss_tips_en.md

@@ -0,0 +1,102 @@
faiss tuning TIPS
==================
# about faiss
faiss is a library for nearest-neighbor search over dense vectors, developed by Facebook Research, which efficiently implements many approximate nearest-neighbor search methods.
Approximate nearest-neighbor search finds similar vectors quickly at the cost of some accuracy.
## faiss in RVC
In RVC, for the embedding of features converted by HuBERT, we search for embeddings similar to those generated from the training data and mix them in, to achieve a conversion closer to the original speech. Since this search is slow if done naively, fast conversion is realized by using approximate nearest-neighbor search.
# implementation overview
In '/logs/your-experiment/3_feature256', where the model is located, are the features extracted by HuBERT from each piece of voice data.
From here we read the npy files, sorted by filename, and concatenate the vectors to create big_npy. (This vector has shape [N, 256].)
After saving big_npy as /logs/your-experiment/total_fea.npy, we train a faiss index on it: an IVF index based on L2 distance, with n_ivf = N//39 partitions and n_probe = int(np.power(n_ivf, 0.3)) (see around train_index in infer-web.py).
In this article, I will explain the meaning of these parameters.
# Explanation of the method
## index factory
An index factory is faiss's own notation that expresses, as a string, a pipeline connecting multiple approximate nearest-neighbor search methods.
This lets you try various approximate nearest-neighbor search methods simply by changing the index factory string.
In RVC it is used like this:
```python
index = faiss.index_factory(256, "IVF%s,Flat" % n_ivf)
```
Among the arguments of index_factory, the first is the number of dimensions of the vector, the second is the index factory string, and the third is the distance metric to use.
For more detailed notation, see
https://github.com/facebookresearch/faiss/wiki/The-index-factory
## index for distance
The following two metrics are typically used to measure embedding similarity:
- Euclidean distance (METRIC_L2)
- inner product (METRIC_INNER_PRODUCT)
Euclidean distance takes the squared difference in each dimension, sums the differences over all dimensions, and then takes the square root. This is the same distance we use every day in 2D and 3D.
The inner product is not used as a similarity metric as-is; generally, cosine similarity is used, which takes the inner product after L2 normalization.
Which is better depends on the case, but cosine similarity is often used for embeddings obtained by word2vec and similar image-retrieval models trained with ArcFace. To L2-normalize a vector X with numpy, the following code works, with eps small enough to avoid division by zero.
```python
X_normed = X / np.maximum(eps, np.linalg.norm(X, ord=2, axis=-1, keepdims=True))
```
Also, for the index factory, you can change the distance metric used for calculation by choosing the value passed as the third argument.
```python
index = faiss.index_factory(dimension, text, faiss.METRIC_INNER_PRODUCT)
```
## IVF
IVF (inverted file indexes) is an algorithm similar to the inverted index in full-text search.
During training, the search targets are clustered with k-means, and Voronoi partitioning is performed using the cluster centers. Each data point is assigned to one cluster, so we create a dictionary that looks up data points from clusters.
For example, if clusters are assigned as follows:
|index|Cluster|
|-----|-------|
|1|A|
|2|B|
|3|A|
|4|C|
|5|B|
The resulting inverted index looks like this:
|cluster|index|
|-------|-----|
|A|1, 3|
|B|2, 5|
|C|4|
When searching, we first pick n_probe clusters, then compute the distances to the data points belonging to each of those clusters.
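The toy assignment above, written out as code (purely illustrative):
```python
from collections import defaultdict

assignment = {1: "A", 2: "B", 3: "A", 4: "C", 5: "B"}  # data point -> cluster

inverted = defaultdict(list)  # cluster -> data points
for idx, cluster in sorted(assignment.items()):
    inverted[cluster].append(idx)

print(dict(inverted))  # {'A': [1, 3], 'B': [2, 5], 'C': [4]}
```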
# recommended parameters
There are official guidelines on how to choose an index, so I will explain accordingly:
https://github.com/facebookresearch/faiss/wiki/Guidelines-to-choose-an-index
For datasets below 1M vectors, 4bit-PQ is the most efficient method available in faiss as of April 2023.
Combining this with IVF, narrowing down the candidates with 4bit-PQ, and finally recalculating the distances with an accurate index can be expressed with the following index factory:
```python
index = faiss.index_factory(256, "IVF1024,PQ128x4fs,RFlat")
```
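As a rough, self-contained sketch of how such an index can be built and queried (random stand-in data and illustrative sizes, not RVC's actual code):
```python
import numpy as np
import faiss

d = 256                                                # feature dimension used throughout these tips
big_npy = np.random.rand(10_000, d).astype("float32")  # stand-in for the real features

n_ivf = int(4 * np.sqrt(big_npy.shape[0]))             # within the 4*sqrt(N)..16*sqrt(N) guideline below
index = faiss.index_factory(d, "IVF%d,PQ128x4fs,RFlat" % n_ivf)

index.train(big_npy)                                   # k-means clustering + PQ codebook learning
index.add(big_npy)

faiss.extract_index_ivf(index).nprobe = 1              # the IVF index is wrapped by RFlat, so unwrap it
dist, ids = index.search(big_npy[:5], 8)               # 8 approximate neighbors of the first 5 vectors
```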
## Recommended parameters for IVF
Consider the case of too many IVF cells: if coarse quantization with IVF creates as many cells as there are data points, this degenerates into a naive exhaustive search and is inefficient.
For 1M vectors or fewer, IVF values between 4*sqrt(N) and 16*sqrt(N) are recommended, for N data points.
Since calculation time increases in proportion to n_probe, balance it against the accuracy you need. Personally, I don't think RVC needs that much accuracy, so n_probe = 1 is fine.
## FastScan
FastScan is a method that approximates distances at high speed via product quantization, by carrying it out in registers.
Product quantization clusters each group of d dimensions (usually d = 2) independently during training, precomputes the distances between clusters, and creates a lookup table. At prediction time, the distance for each dimension group can be computed in O(1) by consulting the lookup table.
So the number you specify after PQ usually specifies half the dimension of the vector.
For a more detailed description of FastScan, please refer to the official documentation.
https://github.com/facebookresearch/faiss/wiki/Fast-accumulation-of-PQ-and-AQ-codes-(FastScan)
## RFlat
RFlat is an instruction to recalculate the rough distances computed by FastScan with the exact distance specified by the third argument of the index factory.
When getting k neighbors, k*k_factor points are recalculated.
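As far as I can tell from faiss's Python bindings (an assumption about the library, not something these docs state), k_factor is exposed on the refine wrapper created by the ",RFlat" suffix; continuing the sketch above:
```python
refine = faiss.downcast_index(index)  # IndexRefineFlat created by the ",RFlat" suffix
refine.k_factor = 4                   # re-rank 4*k shortlisted candidates with exact distances
dist, ids = index.search(big_npy[:5], 8)
```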

docs/faiss_tips_ja.md

@@ -0,0 +1,101 @@
faiss tuning TIPS
==================
# about faiss
faiss is a library for nearest-neighbor search over dense vectors, developed by Facebook Research, which efficiently implements many approximate nearest-neighbor search methods.
Approximate nearest-neighbor search finds similar vectors quickly at the cost of some accuracy.
## faiss in RVC
In RVC, for the embedding of features converted by HuBERT, we search for embeddings similar to those generated from the training data and mix them in, achieving a conversion closer to the original speech. Since this search is slow if done naively, fast conversion is achieved with approximate nearest-neighbor search.
# implementation overview
In '/logs/your-experiment/3_feature256', where the model is located, are the features extracted by HuBERT from each piece of voice data.
From here we read the npy files, sorted by filename, concatenate the vectors to create big_npy, and train faiss on it. (This vector has shape [N, 256].)
In this Tips, I will first explain the meaning of these parameters.
# explanation of the method
## index factory
An index factory is faiss's own notation expressing, as a string, a pipeline connecting multiple approximate nearest-neighbor search methods.
This lets you try various approximate nearest-neighbor search methods simply by changing the index factory string.
In RVC it is used like this:
```python
index = faiss.index_factory(256, "IVF%s,Flat" % n_ivf)
```
Among the arguments of index_factory, the first is the number of dimensions of the vector, the second is the index factory string, and the third is the distance metric to use.
For more detailed notation, see
https://github.com/facebookresearch/faiss/wiki/The-index-factory
## distance metrics
Two metrics are typically used for embedding similarity:
- Euclidean distance (METRIC_L2)
- inner product (METRIC_INNER_PRODUCT)
Euclidean distance takes the squared difference in each dimension, sums the differences over all dimensions, and then takes the square root. This is the same distance we use every day in 2D and 3D.
The inner product is not used as a similarity metric as-is; generally, cosine similarity is used, which takes the inner product after L2 normalization.
Which is better depends on the case, but cosine similarity is often used for embeddings obtained by word2vec and similar image-retrieval models trained with ArcFace. To L2-normalize a vector X with numpy, the following code works, with eps small enough to avoid division by zero.
```python
X_normed = X / np.maximum(eps, np.linalg.norm(X, ord=2, axis=-1, keepdims=True))
```
Also, for the index factory, you can change the distance metric used for calculation by choosing the value passed as the third argument.
```python
index = faiss.index_factory(dimension, text, faiss.METRIC_INNER_PRODUCT)
```
## IVF
IVF (inverted file indexes) is an algorithm similar to the inverted index in full-text search.
During training, the search targets are clustered with k-means, and Voronoi partitioning is performed using the cluster centers. Each data point is assigned to one cluster, so we create a dictionary that looks up data points from clusters.
For example, if clusters are assigned as follows
|index|Cluster|
|-----|-------|
|1|A|
|2|B|
|3|A|
|4|C|
|5|B|
the resulting inverted index looks like this:
|cluster|index|
|-------|-----|
|A|1, 3|
|B|2, 5|
|C|4|
When searching, we first pick n_probe clusters, then compute the distances to the data points belonging to each of those clusters.
# recommended parameters
There are official guidelines on how to choose an index, so I will explain accordingly:
https://github.com/facebookresearch/faiss/wiki/Guidelines-to-choose-an-index
For datasets below 1M vectors, 4bit-PQ is the most efficient method available in faiss as of April 2023.
Combining this with IVF, narrowing down the candidates with 4bit-PQ, and finally recalculating the distances with an accurate metric can be written with the following index factory:
```python
index = faiss.index_factory(256, "IVF1024,PQ128x4fs,RFlat")
```
## Recommended parameters for IVF
Consider the case of too many IVF cells: if coarse quantization with IVF creates as many cells as there are data points, this degenerates into a naive exhaustive search and is inefficient.
For 1M vectors or fewer, IVF values between 4*sqrt(N) and 16*sqrt(N) are recommended, for N data points.
Since calculation time increases in proportion to n_probe, balance it against the accuracy you need. Personally, I don't think RVC needs that much accuracy, so n_probe = 1 is fine.
## FastScan
FastScan is a method that approximates distances at high speed via product quantization, by carrying it out in registers.
Product quantization clusters each group of d dimensions (usually d = 2) independently during training, precomputes the distances between clusters, and creates a lookup table. At prediction time, the distance for each dimension group can be computed in O(1) by consulting the lookup table.
So the number specified after PQ usually specifies half the dimension of the vector.
For a more detailed description of FastScan, please refer to the official documentation.
https://github.com/facebookresearch/faiss/wiki/Fast-accumulation-of-PQ-and-AQ-codes-(FastScan)
## RFlat
RFlat is an instruction to recalculate the rough distances computed by FastScan with the exact distance specified by the third argument of the index factory.
When retrieving k neighbors, k*k_factor points are recalculated.

docs/faiss_tips_ko.md

@@ -0,0 +1,132 @@
Facebook AI Similarity Search (Faiss) tips
==================
# About Faiss
Faiss is a library for nearest-neighbor search over dense vectors, developed by Facebook Research. Approximate nearest-neighbor search finds similar vectors quickly at the cost of some accuracy.
## Faiss in RVC
In RVC, for the embedding of features converted by HuBERT, we search for embeddings similar to those generated from the training data and mix them in, achieving a conversion closer to the original speech. However, this search is slow if done naively, so approximate nearest-neighbor search is used to enable fast conversion.
# Implementation overview
In `/logs/your-experiment/3_feature256`, where the model is located, are the features extracted by HuBERT from each piece of voice data. From here we read the npy files, sorted by filename, and concatenate the vectors to create big_npy (a vector of shape [N, 256]). After saving big_npy as `/logs/your-experiment/total_fea.npy`, it is trained with Faiss.
As of 2023/04/18, Faiss's index factory feature is used with an IVF based on L2 distance. The number of IVF partitions (n_ivf) is N//39, and n_probe is int(np.power(n_ivf, 0.3)). (Look around train_index in infer-web.py.)
In these tips, I will first explain the meaning of these parameters, then offer advice to help developers build better indexes later.
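As a minimal sketch of the parameter choice just described (N is illustrative):
```python
import numpy as np

N = 100_000                        # number of stored HuBERT feature vectors
n_ivf = N // 39                    # IVF partition count used by RVC
n_probe = int(np.power(n_ivf, 0.3))
print(n_ivf, n_probe)              # 2564 10
```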
# Explanation of the method
## Index factory
An index factory is Faiss's own notation expressing, as a string, a pipeline connecting multiple approximate nearest-neighbor search methods. This lets you try various approximate nearest-neighbor searches simply by changing the index factory string. In RVC it is used as follows:
```python
index = faiss.index_factory(256, "IVF%s,Flat" % n_ivf)
```
Among the arguments of `index_factory`, the first is the number of dimensions of the vector, the second is the index factory string, and the third is the distance metric to use.
For a more detailed description of the notation, see https://github.com/facebookresearch/Faiss/wiki/The-index-factory
## Distance metrics
Two metrics are typically used for embedding similarity:
- Euclidean distance (METRIC_L2)
- inner product (METRIC_INNER_PRODUCT)
Euclidean distance takes the squared difference in each dimension, sums the differences over all dimensions, and then takes the square root; this is the same distance computation we use every day in 2D and 3D. The inner product is not used as a similarity metric as-is; generally, cosine similarity is used, which takes the inner product after L2 normalization.
Which is better depends on the case, but cosine similarity is often used for embeddings obtained from word2vec and for image-retrieval models trained with ArcFace. To L2-normalize a vector X with numpy, the following code works, with eps small enough to avoid division by zero:
```python
X_normed = X / np.maximum(eps, np.linalg.norm(X, ord=2, axis=-1, keepdims=True))
```
Also, you can change the distance metric used for calculation by choosing the value passed as the third argument of the index factory.
```python
index = faiss.index_factory(dimension, text, faiss.METRIC_INNER_PRODUCT)
```
## IVF
IVF (inverted file indexes) is an algorithm similar to the inverted index used in full-text search. During training, the search targets are clustered with k-means and Voronoi partitioning is performed using the cluster centers. Each data point is assigned to one cluster, so we create a dictionary that looks up data points from clusters.
For example, if clusters are assigned as follows:
|index|Cluster|
|-----|-------|
|1|A|
|2|B|
|3|A|
|4|C|
|5|B|
The resulting inverted index looks like this:
|cluster|index|
|-------|-----|
|A|1, 3|
|B|2, 5|
|C|4|
When searching, we first pick `n_probe` clusters, then compute the distances to the data points belonging to each of those clusters.
# Recommended parameters
There are official guidelines on how to choose an index, so I will explain accordingly.
https://github.com/facebookresearch/Faiss/wiki/Guidelines-to-choose-an-index
For datasets below 1M vectors, 4bit-PQ is the most efficient method available in Faiss as of April 2023. Combining this with IVF, you can narrow down the candidates with 4bit-PQ and finally recalculate the distances with an accurate metric, using the following index factory:
```python
index = faiss.index_factory(256, "IVF1024,PQ128x4fs,RFlat")
```
## Recommended parameters for IVF
If there are too many IVF cells, for example quantizing into as many cells as there are data points, this degenerates into an exhaustive search and becomes inefficient. For 1M vectors or fewer, IVF values between 4*sqrt(N) and 16*sqrt(N) are recommended, for N data points.
Since calculation time increases in proportion to n_probe, balance accuracy and time appropriately. Personally, I don't think RVC needs that much accuracy, so n_probe = 1 is fine.
## FastScan
FastScan is a method that approximates distances at high speed by carrying out product quantization in registers. Product quantization clusters each group of d dimensions (usually d = 2) independently during training, precomputes the distances between clusters, and creates a lookup table; at prediction time, the distance for each dimension group can be computed in O(1) by consulting the lookup table. So the number specified after PQ usually specifies half the dimension of the vector.
For a more detailed description of FastScan, see the official documentation.
https://github.com/facebookresearch/Faiss/wiki/Fast-accumulation-of-PQ-and-AQ-codes-(FastScan)
## RFlat
RFlat is an instruction to recalculate the rough distances computed by FastScan with the exact distance specified by the third argument of the index factory. When retrieving the k nearest neighbors, k*k_factor points are recalculated.
# Embedding techniques
## Alpha query expansion
Query expansion is a technique used in search: in full-text search, for example, adding a few words to the input query can improve search accuracy. Several methods have also been proposed for vector search; among them, α-query expansion is known as a very effective method that requires no additional training. It is described in [Attention-Based Query Expansion Learning](https://arxiv.org/abs/2007.08019) and the [2nd place solution of the kaggle shopee competition](https://www.kaggle.com/code/lyakaap/2nd-place-solution/notebook).
α-query expansion adds each vector's neighbors to it, weighted by the similarity raised to the power α. Here is a code example that replaces big_npy with its α-query-expanded version.
```python
import numpy as np
import faiss

# big_npy: the [N, 256] feature matrix loaded beforehand
alpha = 3.
num_expand = 16        # neighbors mixed into each vector (undefined in the original; illustrative)
batch_size = 65536     # chunk size to bound memory use (undefined in the original; illustrative)

index = faiss.index_factory(256, "IVF512,PQ128x4fs,RFlat")
original_norm = np.maximum(np.linalg.norm(big_npy, ord=2, axis=1, keepdims=True), 1e-9)
big_npy /= original_norm
index.train(big_npy)
index.add(big_npy)
dist, neighbor = index.search(big_npy, num_expand)

expand_arrays = []
ixs = np.arange(big_npy.shape[0])
for i in range(-(-big_npy.shape[0] // batch_size)):
    ix = ixs[i * batch_size : (i + 1) * batch_size]
    weight = np.power(np.einsum("nd,nmd->nm", big_npy[ix], big_npy[neighbor[ix]]), alpha)
    expand_arrays.append(np.sum(big_npy[neighbor[ix]] * np.expand_dims(weight, axis=2), axis=1))
big_npy = np.concatenate(expand_arrays, axis=0)
# renormalize, since the vectors stored in the index were normalized
big_npy = big_npy / np.maximum(np.linalg.norm(big_npy, ord=2, axis=1, keepdims=True), 1e-9)
```
The technique above can be applied both to the query being searched and to the database being searched.
## Compressing embeddings with MiniBatch KMeans
If total_fea.npy is too large, you can shrink the vectors using k-means. The following code compresses the embedding: specify the target size in n_clusters, and use 256 * (number of CPU cores) for batch_size to take full advantage of CPU parallelism.
```python
import multiprocessing
from sklearn.cluster import MiniBatchKMeans
kmeans = MiniBatchKMeans(n_clusters=10000, batch_size=256 * multiprocessing.cpu_count(), init="random")
kmeans.fit(big_npy)
sample_npy = kmeans.cluster_centers_
```

docs/faiss_tips_tr.md

@@ -0,0 +1,105 @@
faiss tuning tips
=============================
# about faiss
faiss is a library for nearest-neighbor search over dense vectors, developed by Facebook Research, which efficiently implements many approximate nearest-neighbor search methods.
Approximate nearest-neighbor search finds similar vectors quickly at the cost of some accuracy.
## faiss in RVC
In RVC, for the embedding of features converted by HuBERT, we search for embeddings similar to those generated from the training data and mix them in, obtaining a conversion closer to the original speech. However, since this search takes time if done naively, fast conversion is achieved by using approximate nearest-neighbor search.
# Implementation overview
In the '/logs/your-experiment/3_feature256' directory, where the model is located, are the features extracted by HuBERT from each piece of audio data.
The npy files are read here, sorted by filename, and the vectors are concatenated to create big_npy. (This vector has shape [N, 256].)
After big_npy is saved as /logs/your-experiment/total_fea.npy, it is trained with faiss.
In this article, I will explain the meaning of these parameters.
# Explanation of the method
## index factory
An index factory is a unique faiss notation that connects multiple approximate nearest-neighbor search methods as a string.
This allows you to try various approximate nearest-neighbor search methods just by changing the index factory string.
In RVC we use it like this:
```python
index = faiss.index_factory(256, "IVF%s,Flat" % n_ivf)
```
Among index_factory's arguments, the first is the dimension of the vector, the second is the index factory string, and the third is the distance metric to use.
For more detailed notation, see
https://github.com/facebookresearch/faiss/wiki/The-index-factory
## distance metrics
Below are two typical metrics used for embedding similarity.
- Euclidean distance (METRIC_L2)
- inner product (METRIC_INNER_PRODUCT)
Euclidean distance takes the squared difference in each dimension, sums the differences over all dimensions, and then takes the square root. This is the same as the distance in 2D and 3D that we use in daily life.
The inner product is not used directly as a similarity metric; generally, cosine similarity is used, which takes the inner product after normalizing with the L2 norm.
Which is better depends on the case, but cosine similarity is generally used for embeddings obtained by word2vec and similar image-retrieval models learned with ArcFace. To apply L2 normalization to a vector X with numpy, you can use the following code, with eps small enough to avoid division by zero.
```python
X_normed = X / np.maximum(eps, np.linalg.norm(X, ord=2, axis=-1, keepdims=True))
```
Also, you can change the distance metric used for the calculation by passing it as the third argument of the index factory.
```python
index = faiss.index_factory(dimension, text, faiss.METRIC_INNER_PRODUCT)
```
## IVF
IVF (inverted file indexes) is an algorithm similar to the inverted index in full-text search.
During training, the search targets are clustered with k-means and Voronoi partitioning is done with the cluster centers. Each data point is assigned to one cluster, so we create a dictionary that looks up data points from clusters.
For example, if clusters are assigned like this:
|index|Cluster|
|-----|-------|
|1|A|
|2|B|
|3|A|
|4|C|
|5|B|
The resulting inverted index looks like this:
|cluster|index|
|-------|-----|
|A|1, 3|
|B|2, 5|
|C|4|
When searching, we first search n_probe clusters among the clusters, then compute the distances of the data points belonging to each cluster.
# Recommended parameters
There are official guidelines on choosing an index, so I will explain accordingly.
https://github.com/facebookresearch/faiss/wiki/Guidelines-to-choose-an-index
For datasets smaller than 1M, 4bit-PQ is the most efficient method available in faiss as of April 2023.
Combining it with IVF, we can narrow the candidates with 4bit-PQ and finally recalculate the distance with an accurate index, using the following index factory.
```python
index = faiss.index_factory(256, "IVF1024,PQ128x4fs,RFlat")
```
## Recommended parameters for IVF
Consider the case of too many IVF cells. For example, if coarse quantization by IVF produces as many cells as there are data points, this is the same as a naive exhaustive search and is inefficient.
For 1M or fewer, IVF values between 4*sqrt(N) and 16*sqrt(N) are recommended for N data points.
Since the computation time increases as n_probe grows, balance it against accuracy and choose appropriately. Personally, I don't think RVC needs that much accuracy, so n_probe = 1 is sufficient.
## FastScan
FastScan is a method that enables high-speed approximation of distances via product quantization, by performing it in registers.
Product quantization performs clustering independently for each group of d dimensions (usually d = 2) during training, precomputes the distances between clusters, and creates a lookup table. At prediction time, the distance of each dimension group can be computed in O(1) by looking at the lookup table.
So the number you specify after PQ usually specifies half the dimension of the vector.
For a more detailed description of FastScan, please refer to the official documentation.
https://github.com/facebookresearch/faiss/wiki/Fast-accumulation-of-PQ-and-AQ-codes-(FastScan)
## RFlat
RFlat is an instruction to recalculate the approximate distance computed by FastScan with the exact distance specified by the third argument of the index factory.
When retrieving the k nearest neighbors, k*k_factor points are recalculated.

docs/faq.md

@@ -0,0 +1,93 @@
## Q1: ffmpeg error/utf8 error.
It is most likely not an FFmpeg problem but an audio path problem;<br>
FFmpeg may throw an error when reading paths containing spaces, (), or other special characters; and if the training-set audio has Chinese paths, writing filelist.txt may produce a utf8 error.<br>
## Q2: No index file after one-click training
If it shows "Training is done. The program is closed.", the model was trained successfully; the error messages immediately following are spurious;<br>
If there is no 'added'-prefixed index file after one-click training, it may be because the training set is too large and the index-adding step got stuck; this has been resolved by adding the index in batches, which fixes the excessive memory demand of adding the index. As a temporary workaround, try clicking the "Train Index" button again.<br>
## Q3: Cannot find the training set's timbre when inferring after training
Click "Refresh timbre list" and check again; if it is still missing, check whether training errored out, and send the developers screenshots of the console, the WebUI, and logs/experiment_name/*.log.<br>
## Q4: How to share a model
The pth files stored under rvc_root/logs/experiment_name are not for sharing or inference, but for storing experiment state for reproducibility and resuming training. The model to share is the 60+MB pth file in the weights folder;<br>
In the future, weights/exp_name.pth and logs/exp_name/added_xxx.index will be merged into a single weights/exp_name.zip, removing the need to fill in the index manually; share the zip file, not the pth file, unless you want to continue training on another machine;<br>
If you copy/share the several-hundred-MB pth files from the logs folder into the weights folder and force them to be used for inference, you may get errors about missing keys such as f0 and tgt_sr. You need to use the bottom of the ckpt tab to extract a small ckpt model, manually or automatically (if the relevant information can be found under the local logs folder, the options for whether to include pitch and the target audio sampling rate are selected automatically); fill in the G-prefixed file as the input path. After extraction, a 60+MB pth file appears in the weights folder; refresh the timbre list and you can select and use it.<br>
## Q5: Connection Error.
You may have closed the console (the black command-line window).<br>
## Q6: The WebUI pops up 'Expecting value: line 1 column 1 (char 0)'.
Please disable the system LAN proxy/global proxy.<br>
This applies not only to the client-side proxy but also to the server side (for example, if you set http_proxy and https_proxy for academic acceleration on autodl, you also need to unset them).<br>
## Q7: How to train and infer from the command line, without the WebUI?
Training script:<br>
Run the WebUI first; the command lines for dataset preprocessing and training are shown in the message window.<br>
Inference script:<br>
https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/myinfer.py<br>
Example:<br>
runtime\python.exe myinfer.py 0 "E:\codes\py39\RVC-beta\todo-songs\1111.wav" "E:\codes\py39\logs\mi-test\added_IVF677_Flat_nprobe_7.index" harvest "test.wav" "weights/mi-test.pth" 0.6 cuda:0 True<br>
f0up_key=sys.argv[1]<br>
input_path=sys.argv[2]<br>
index_path=sys.argv[3]<br>
f0method=sys.argv[4] # harvest or pm<br>
opt_path=sys.argv[5]<br>
model_path=sys.argv[6]<br>
index_rate=float(sys.argv[7])<br>
device=sys.argv[8]<br>
is_half=bool(sys.argv[9])<br>
## Q8: Cuda error/Cuda out of memory.
There is a small chance it is a CUDA configuration problem or an unsupported device; more likely, there is not enough VRAM (out of memory).<br>
For training, reduce the batch size (if 1 is still not enough, you can only change GPUs); for inference, reduce x_pad, x_query, x_center, and x_max at the end of config.py as needed. Cards with less than 4GB of VRAM (e.g. 1060(3G) and various 2GB cards) can be written off; 4GB cards still have a chance.<br>
## Q9: How many total_epoch are best?
If the training set's audio quality is poor and the noise floor is high, 20-30 epochs are enough; setting it too high won't let the base model's quality lift your low-quality training set;<br>
If the training set is high quality, low noise, and long, you can raise it; 200 is fine (training is fast, and since you could prepare a high-quality training set, your GPU is presumably decent too, so you surely don't mind a bit more training time).<br>
## Q10: How much training data (duration) is needed?
10 to 50 minutes is recommended;<br>
With guaranteed high audio quality and a low noise floor, more is better if the timbre is uniform and distinctive;<br>
A high-grade training set (lean + distinctive timbre) is fine at 5 to 10 minutes; the repo author himself often does this;<br>
Some people have trained successfully with 1-2 minutes of data, but their success is not reproducible by others and not very informative. This requires the training set to have a very distinctive timbre (e.g. a breathy, high-frequency anime-girl voice) and high audio quality;<br>
No one has been seen to succeed with under 1 minute of data so far; such antics are not recommended.<br>
## Q11: What is the index rate for, and how to set it? (explainer)
If the base model and the inference source have higher audio quality than the training set, they can raise the audio quality of the result, but at the cost of the timbre drifting toward that of the base model/inference source; this phenomenon is called "timbre leakage";<br>
The index rate is used to reduce/resolve timbre leakage. At 1, there is theoretically no timbre leakage from the inference source, but audio quality leans toward the training set. If the training set's quality is lower than the inference source's, a higher index rate may lower quality. At 0, there is no retrieval-mixing effect protecting the training-set timbre;<br>
If the training set is high quality and long, raise total_epoch; then the model itself rarely references the timbre of the inference source or base model, there is little "timbre leakage", index_rate is unimportant, and you may even skip building/sharing the index file.<br>
## Q12: How to choose the GPU for inference?
In config.py, pick the card number after "device cuda:";<br>
The mapping between card numbers and GPUs can be seen in the GPU information section of the training tab.<br>
## Q13: How to use a pth saved mid-training for inference?
Extract a small model at the bottom of the ckpt tab.<br>
## Q14: How to pause and resume training?
At this stage you can only close the WebUI console and double-click go-web.bat to restart the program. Web page parameters must be re-entered;<br>
With the same web page parameters, click "Train model" and it will resume from the last checkpoint.<br>
## Q15: File/memory error during training
Too many processes; memory blew up. You may fix it by:<br>
1. lowering "Number of CPU processes used for pitch extraction and data processing" as appropriate;<br>
2. manually cutting the training-set audio into shorter files.<br>
## Q16: How to continue training with additional data
1. Create a new experiment name containing all the data;<br>
2. Copy the latest G and D files from the previous experiment (or whichever intermediate ckpt you want to train from) into the new experiment name;<br>
3. One-click train the new experiment name; it will continue from the previous latest progress.<br>
docs/faq_en.md

@@ -0,0 +1,104 @@
## Q1:ffmpeg error/utf8 error.
It is most likely not an FFmpeg issue, but rather an audio path issue;
FFmpeg may encounter an error when reading paths containing special characters like spaces and (), which may cause an FFmpeg error; and when the training set's audio contains Chinese paths, writing it into filelist.txt may cause a utf8 error.<br>
## Q2:Cannot find index file after "One-click Training".
If it displays "Training is done. The program is closed," then the model has been trained successfully, and the error messages that immediately follow are spurious;
The lack of an 'added' index file after One-click training may be due to the training set being too large, causing the addition of the index to get stuck; this has been resolved by using batch processing to add the index, which solves the problem of memory overload when adding the index. As a temporary solution, try clicking the "Train Index" button again.<br>
## Q3:Cannot find the model in “Inferencing timbre” after training
Click “Refresh timbre list” and check again; if still not visible, check if there are any errors during training and send screenshots of the console, web UI, and logs/experiment_name/*.log to the developers for further analysis.<br>
## Q4:How to share a model/How to use others' models?
The pth files stored in rvc_root/logs/experiment_name are not meant for sharing or inference, but for storing the experiment checkpoints for reproducibility and further training. The model to be shared should be the 60+MB pth file in the weights folder;
In the future, weights/exp_name.pth and logs/exp_name/added_xxx.index will be merged into a single weights/exp_name.zip file to eliminate the need for manual index input; so share the zip file, not the pth file, unless you want to continue training on a different machine;
Copying/sharing the several hundred MB pth files from the logs folder to the weights folder for forced inference may result in errors such as missing f0, tgt_sr, or other keys. You need to use the ckpt tab at the bottom to manually or automatically (if the information is found in logs/exp_name) select whether to include pitch information and the target audio sampling rate, and then extract the smaller model. After extraction, there will be a 60+MB pth file in the weights folder, and you can refresh the voices to use it.<br>
## Q5:Connection Error.
You may have closed the console (black command line window).<br>
## Q6:WebUI popup 'Expecting value: line 1 column 1 (char 0)'.
Please disable system LAN proxy/global proxy and then refresh.<br>
## Q7:How to train and infer without the WebUI?
Training script:<br>
You can run training in WebUI first, and the command-line versions of dataset preprocessing and training will be displayed in the message window.<br>
Inference script:<br>
https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/myinfer.py<br>
e.g.<br>
runtime\python.exe myinfer.py 0 "E:\codes\py39\RVC-beta\todo-songs\1111.wav" "E:\codes\py39\logs\mi-test\added_IVF677_Flat_nprobe_7.index" harvest "test.wav" "weights/mi-test.pth" 0.6 cuda:0 True<br>
f0up_key=sys.argv[1]<br>
input_path=sys.argv[2]<br>
index_path=sys.argv[3]<br>
f0method=sys.argv[4]#harvest or pm<br>
opt_path=sys.argv[5]<br>
model_path=sys.argv[6]<br>
index_rate=float(sys.argv[7])<br>
device=sys.argv[8]<br>
is_half=bool(sys.argv[9])<br>
## Q8:Cuda error/Cuda out of memory.
There is a small chance that there is a problem with the CUDA configuration or the device is not supported; more likely, there is not enough memory (out of memory).<br>
For training, reduce the batch size (if reducing to 1 is still not enough, you may need to change the graphics card); for inference, adjust the x_pad, x_query, x_center, and x_max settings in the config.py file as needed. Cards with less than 4GB of memory (e.g. 1060(3G) and various 2GB cards) can be written off, while 4GB memory cards still have a chance.<br>
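For illustration only (the variable names come from the answer above, but these particular values are hypothetical, not the project's defaults), a low-VRAM profile at the end of config.py might look like:
```python
# config.py (excerpt) -- smaller inference windows lower VRAM use at some cost in speed/quality
x_pad = 1
x_query = 6
x_center = 38
x_max = 41
```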
## Q9:How many total_epoch are optimal?
If the training dataset's audio quality is poor and the noise floor is high, 20-30 epochs are sufficient. Setting it too high won't improve the audio quality of your low-quality training set.<br>
If the training set audio quality is high, the noise floor is low, and there is sufficient duration, you can increase it. 200 is acceptable (since training is fast, and if you're able to prepare a high-quality training set, your GPU likely can handle a longer training duration without issue).<br>
## Q10:How much training set duration is needed?
A dataset of around 10min to 50min is recommended.<br>
With guaranteed high sound quality and low bottom noise, more can be added if the dataset's timbre is uniform.<br>
For a high-level training set (lean + distinctive tone), 5min to 10min is fine.<br>
There are some people who have trained successfully with 1min to 2min of data, but their success is not reproducible by others and is not very informative.<br>This requires the training set to have a very distinctive timbre (e.g. a high-frequency, airy anime-girl voice) and high audio quality;
Data of less than 1min duration has not been successfully attempted so far. This is not recommended.<br>
## Q11:What is the index rate for and how to adjust it?
If the tone quality of the pre-trained model and inference source is higher than that of the training set, they can bring up the tone quality of the inference result, but at the cost of a possible tone bias towards the tone of the underlying model/inference source rather than the tone of the training set, which is generally referred to as "tone leakage".<br>
The index rate is used to reduce/resolve the timbre leakage problem. If the index rate is set to 1, theoretically there is no timbre leakage from the inference source and the timbre quality is more biased towards the training set. If the training set has a lower sound quality than the inference source, then a higher index rate may reduce the sound quality. Turning it down to 0 does not have the effect of using retrieval blending to protect the training set tones.<br>
If the training set has good audio quality and long duration, turn up the total_epoch, when the model itself is less likely to refer to the inferred source and the pretrained underlying model, and there is little "tone leakage", the index_rate is not important and you can even not create/share the index file.<br>
## Q12:How to choose the gpu when inferring?
In the config.py file, select the card number after "device cuda:".<br>
The mapping between card number and graphics card can be seen in the graphics card information section of the training tab.<br>
## Q13:How to use the model saved in the middle of training?
Save via model extraction at the bottom of the ckpt processing tab.
## Q14:File/memory error(when training)?
Too many processes and your memory is not enough. You may fix it by:
1. decrease the input in the "Threads of CPU" field.
2. pre-cut the training set into shorter audio files.
## Q15: How to continue training using more data
step1: put all wav data to path2.
step2: exp_name2+path2 -> process dataset and extract feature.
step3: copy the latest G and D file of exp_name1 (your previous experiment) into exp_name2 folder.
step4: click "train the model", and it will continue training from the epoch your previous experiment's model reached.

docs/faq_tr.md

@@ -0,0 +1,96 @@
## Question 1: FFmpeg error/utf8 error.
It is most likely not an FFmpeg problem but an audio path problem;
FFmpeg may encounter an error when reading paths containing special characters like spaces and (), producing an FFmpeg error; and if the training set's audio contains Chinese paths, writing them into filelist.txt may cause a utf8 error.
## Question 2: Index file not found after "One-click Training".
If it displays "Training is done. The program is closed", the model was trained successfully, and the subsequent errors are spurious;
A missing "added" index file after one-click training may mean the training set was too large and the index-adding step got stuck; this was resolved by adding the index in batches, which fixes the memory-overload problem. As a temporary workaround, try clicking the "Train Index" button again.
## Question 3: Model not found in "Timbre Inferencing" after training
Click "Refresh timbre list" and check again; if it is still not visible, check whether there were errors during training, and send the developers screenshots of the console, the web UI, and logs/experiment_name/*.log for further analysis.
## Question 4: How can I share a model / use others' models?
The pth files stored in rvc_root/logs/experiment_name are not for sharing or inference, but for storing experiment checkpoints for reproducibility and further training. The model to share is the 60+MB pth file in the weights folder;
In the future, weights/exp_name.pth and logs/exp_name/added_xxx.index will be merged into a single weights/exp_name.zip file, removing the need for manual index input; share the zip file, not the pth file, unless you want to continue training on a different machine;
Copying/sharing the several-hundred-MB pth files from the logs folder into the weights folder for forced inference may cause errors such as missing f0, tgt_sr, or other keys. You need to use the ckpt tab at the bottom to manually or automatically (if the information is found in logs/exp_name) select whether to include pitch information and the target audio sampling rate, and then extract the smaller model. After extraction, there will be a 60+MB pth file in the weights folder, and you can refresh the voices to use it.
## Question 5: Connection Error.
You may have closed the console (the black command-line window).
## Question 6: 'Expecting value: line 1 column 1 (char 0)' error in the WebUI.
Disable the system LAN proxy/global proxy and then refresh.
## Question 7: How can I train and infer without the WebUI?
Training script:
You can run training in the WebUI first; the command-line versions of dataset preprocessing and training will be shown in the message window.
Inference script:
https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/myinfer.py
For example:
```bash
runtime\python.exe myinfer.py 0 "E:\codes\py39\RVC-beta\todo-songs\1111.wav" "E:\codes\py39\logs\mi-test\added_IVF677_Flat_nprobe_7.index" harvest "test.wav" "weights/mi-test.pth" 0.6 cuda:0 True
```
f0up_key=sys.argv[1]
input_path=sys.argv[2]
index_path=sys.argv[3]
f0method=sys.argv[4] # harvest or pm
opt_path=sys.argv[5]
model_path=sys.argv[6]
index_rate=float(sys.argv[7])
device=sys.argv[8]
is_half=bool(sys.argv[9])
## Question 8: Cuda error / Cuda out of memory.
There is a small chance it is a CUDA configuration problem or an unsupported device; more likely, there is not enough memory (out of memory).
For training, reduce the batch size (if reducing to 1 is still not enough, you may need to change the graphics card); for inference, adjust x_pad, x_query, x_center, and x_max in config.py as needed. Cards with less than 4GB of memory (e.g. 1060(3G) and various 2GB cards) can be written off, while 4GB cards still have a chance.
## Question 9: What is the optimal total_epoch?
If the training dataset's audio quality is poor and the noise floor is high, 20-30 epochs are sufficient; setting it too high won't improve the audio quality of your low-quality training set.
If the training set's audio quality is high, the noise floor is low, and there is enough duration, you can increase it; 200 is acceptable (training is fast, and if you can prepare a high-quality training set, your GPU can presumably handle the longer training time without issue).
## Question 10: How much training data (duration) do I need?
A dataset of roughly 10 to 50 minutes is recommended.
With guaranteed high sound quality and low background noise, more can be added if the dataset's timbre is homogeneous.
For a high-grade training set (lean + distinctive timbre), 5 to 10 minutes is enough.
Some people have trained successfully with 1-2 minutes of data, but the success is not reproducible by others and not very informative. This requires the training set to have a very distinctive timbre (e.g. a high-frequency, airy anime-girl voice) and high audio quality; data shorter than 1 minute has not been tried successfully so far. This is not recommended.
## Question 11: What is the index rate, and how is it set?
If the tone quality of the pre-trained model and the inference source is higher than that of the training set, they can raise the tone quality of the inference result, but at the cost of the timbre drifting toward that of the underlying model/inference source rather than the training set; this is generally called "timbre leakage".
The index rate is used to reduce/resolve the timbre-leakage problem. At 1, there is theoretically no timbre leakage from the inference source and quality leans toward the training set. If the training set has lower sound quality than the inference source, a higher index rate may reduce quality. At 0, there is no retrieval-blending effect protecting the training-set timbre.
If the training set has good audio quality and long duration, raise total_epoch; then the model itself rarely references the inference source or the pre-trained base model, there is little "timbre leakage", the index rate is unimportant, and you may even skip creating/sharing the index file.
## Question 12: Which GPU should I pick for inference?
In config.py, select the card number after "device cuda:".
The mapping between card numbers and graphics cards can be seen in the graphics-card information section of the training tab.
## Question 13: How can I use a model saved mid-training?
Save it via model extraction at the bottom of the ckpt processing tab.
## Question 14: File/memory error (during training)?
Too many processes; your memory is not enough. You can fix it by:
1. Reducing the input in the "Threads of CPU" field.
2. Pre-cutting the training set into shorter audio files.
## Question 15: How do I continue training with more data?
Step 1: Put all the wav data into path2.
Step 2: exp_name2+path2 -> process the dataset and extract features.
Step 3: Copy the latest G and D files of exp_name1 (your previous experiment) into the exp_name2 folder.
Step 4: Click "train the model"; it will continue training from the epoch your previous experiment's model reached.

docs/training_tips_en.md

@@ -0,0 +1,65 @@
Instructions and tips for RVC training
======================================
This TIPS explains how data training is done.
# Training flow
I will explain along the steps in the training tab of the GUI.
## step1
Set the experiment name here.
You can also set here whether the model should take pitch into account.
If the model doesn't consider pitch, the model will be lighter, but not suitable for singing.
Data for each experiment is placed in `/logs/your-experiment-name/`.
## step2a
Loads and preprocesses audio.
### load audio
If you specify a folder with audio, the audio files in that folder will be read automatically.
For example, if you specify `C:\Users\hoge\voices`, `C:\Users\hoge\voices\voice.mp3` will be loaded, but `C:\Users\hoge\voices\dir\voice.mp3` will not be loaded.
Since ffmpeg is used internally for reading audio, if the extension is supported by ffmpeg, it will be read automatically.
After converting to int16 with ffmpeg, the audio is converted to float32 and normalized between -1 and 1.
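A minimal sketch of that normalization step, assuming an already-decoded int16 buffer (this is not the project's actual loader):
```python
import numpy as np

def normalize_int16(pcm: np.ndarray) -> np.ndarray:
    """Convert int16 PCM samples to float32 in [-1, 1]."""
    return pcm.astype(np.float32) / 32768.0
```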
### denoising
The audio is smoothed by scipy's filtfilt.
### Audio Split
First, the input audio is split by detecting silent parts that last longer than a certain period (max_sil_kept=5 seconds?). After splitting on silence, the audio is sliced every 4 seconds with an overlap of 0.3 seconds. For each segment cut within 4 seconds, the volume is normalized, the wav file is saved to `/logs/your-experiment-name/0_gt_wavs`, and a version converted to a 16k sampling rate is saved as a wav file to `/logs/your-experiment-name/1_16k_wavs`.
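A rough sketch of the 4-second / 0.3-second-overlap slicing described above (window and overlap values come from the text; this is not the project's exact slicer):
```python
def slice_ranges(n_samples: int, sr: int, win: float = 4.0, overlap: float = 0.3):
    """Yield (start, end) sample ranges: win-second windows, consecutive windows sharing overlap seconds."""
    size = int(win * sr)
    step = size - int(overlap * sr)
    start = 0
    while start + size < n_samples:
        yield start, start + size
        start += step
    yield start, n_samples  # final, possibly shorter, tail


print(list(slice_ranges(10 * 16000, 16000)))  # e.g. [(0, 64000), (59200, 123200), (118400, 160000)]
```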
## step2b
### Extract pitch
Extract pitch information from wav files. Extract the pitch information (=f0) using the method built into parselmouth or pyworld and save it in `/logs/your-experiment-name/2a_f0`. Then logarithmically convert the pitch information to an integer between 1 and 255 and save it in `/logs/your-experiment-name/2b-f0nsf`.
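The log-scale quantization to 1..255 described above might look like this (f0_min/f0_max and the exact rounding are assumptions, not the project's constants):
```python
import numpy as np

def coarse_f0(f0: np.ndarray, f0_min: float = 50.0, f0_max: float = 1100.0) -> np.ndarray:
    """Map f0 in Hz onto integers 1..255 on a log scale; 0 stays 0 for unvoiced frames."""
    f0 = np.asarray(f0, dtype=np.float64)
    voiced = f0 > 0
    log_min, log_max = np.log(f0_min), np.log(f0_max)
    scaled = (np.log(np.clip(f0, f0_min, f0_max)) - log_min) / (log_max - log_min)
    coarse = np.zeros(f0.shape, dtype=np.int64)
    coarse[voiced] = 1 + np.rint(scaled[voiced] * 254).astype(np.int64)
    return coarse
```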
### Extract feature_print
Convert the wav file to embedding in advance using HuBERT. Read the wav file saved in `/logs/your-experiment-name/1_16k_wavs`, convert the wav file to 256-dimensional features with HuBERT, and save in npy format in `/logs/your-experiment-name/3_feature256`.
## step3
train the model.
### Glossary for Beginners
In deep learning, the dataset is split up and learning proceeds little by little. In one model update (step), batch_size samples are retrieved, and prediction and error correction are performed. Doing this once over the whole dataset counts as one epoch.
Therefore, the training time is (training time per step) x (number of samples in the dataset / batch size) x (number of epochs). In general, a larger batch size makes learning more stable and shrinks (training time per step ÷ batch size), but uses more GPU memory; for example, a 1,000-clip dataset at batch size 8 takes 125 steps per epoch, so 200 epochs is 25,000 steps. GPU RAM can be checked with the nvidia-smi command. Training can finish in less time by increasing the batch size as far as the machine in your environment allows.
### Specify pretrained model
RVC starts training the model from pretrained weights instead of from 0, so it can be trained with a small dataset.
By default
- If you consider pitch, it loads `rvc-location/pretrained/f0G40k.pth` and `rvc-location/pretrained/f0D40k.pth`.
- If you don't consider pitch, it loads `rvc-location/pretrained/G40k.pth` and `rvc-location/pretrained/D40k.pth`.
During training, model parameters are saved to `logs/your-experiment-name/G_{}.pth` and `logs/your-experiment-name/D_{}.pth` every save_every_epoch; by specifying one of these paths, you can resume training, or start training from model weights learned in a different experiment.
### learning index
RVC saves the HuBERT feature values used during training, and during inference, searches for feature values that are similar to the feature values used during learning to perform inference. In order to perform this search at high speed, the index is learned in advance.
For index learning, we use the approximate nearest-neighbor search library faiss. The feature values in `logs/your-experiment-name/3_feature256` are read and used to train the index, which is saved as `logs/your-experiment-name/add_XXX.index`.
(Since the 20230428 update, total_fea.npy is no longer needed, as the features are read from the index; saving/specifying it is no longer necessary.)
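A minimal sketch of that index-building step (the paths and the N//39 partition count mirror what these docs describe; sizes are illustrative):
```python
import glob
import numpy as np
import faiss

# concatenate the per-utterance HuBERT features, sorted by filename
paths = sorted(glob.glob("logs/my-exp/3_feature256/*.npy"))
big_npy = np.concatenate([np.load(p) for p in paths], axis=0).astype("float32")  # [N, 256]

index = faiss.index_factory(256, "IVF%d,Flat" % max(1, big_npy.shape[0] // 39))
index.train(big_npy)
index.add(big_npy)
faiss.write_index(index, "logs/my-exp/added_example.index")
```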
### Button description
- Train model: After executing step2b, press this button to train the model.
- Train feature index: After training the model, perform index learning.
- One-click training: step2b, model training and feature index training all at once.

docs/training_tips_ja.md

@@ -0,0 +1,64 @@
Instructions and tips for RVC training
===============================
This TIPS explains how data training is done.
# Training flow
I will explain along the steps in the training tab of the GUI.
## step1
Set the experiment name here.
You can also set here whether the model should take pitch into account. If it doesn't, the model will be lighter, but not suited to singing.
Data for each experiment is placed in `/logs/experiment-name/`.
## step2a
Loads and preprocesses the audio.
### load audio
If you specify a folder containing audio, the audio files in it are read automatically.
For example, if you specify `C:\Users\hoge\voices`, `C:\Users\hoge\voices\voice.mp3` will be loaded, but `C:\Users\hoge\voices\dir\voice.mp3` will not.
Since ffmpeg is used internally to read audio, any extension supported by ffmpeg is read automatically.
After conversion to int16 with ffmpeg, the audio is converted to float32 and normalized between -1 and 1.
### denoising
The audio is smoothed with scipy's filtfilt.
### audio splitting
The input audio is first split by detecting silent parts lasting longer than a certain period (max_sil_kept=5 seconds?). After splitting on silence, the audio is sliced every 4 seconds with an overlap of 0.3 seconds. For each segment cut within 4 seconds, the volume is normalized, the wav file is saved to `/logs/experiment-name/0_gt_wavs`, and a version converted to a 16k sampling rate is saved as a wav file to `/logs/experiment-name/1_16k_wavs`.
## step2b
### pitch extraction
Pitch information is extracted from the wav files. The pitch information (=f0) is extracted with the methods built into parselmouth or pyworld and saved in `/logs/experiment-name/2a_f0`. The pitch information is then converted logarithmically to an integer between 1 and 255 and saved in `/logs/experiment-name/2b-f0nsf`.
### feature_print extraction
The wav files are converted to embeddings in advance using HuBERT. The wav files saved in `/logs/experiment-name/1_16k_wavs` are read, converted into 256-dimensional features with HuBERT, and saved in npy format in `/logs/experiment-name/3_feature256`.
## step3
Train the model.
### glossary for beginners
In deep learning, the dataset is split up and learning proceeds little by little. In one model update (step), batch_size samples are retrieved, and prediction and error correction are performed. Doing this once over the whole dataset counts as one epoch.
Therefore, the training time is (training time per step) x (number of samples in the dataset ÷ batch size) x (number of epochs). In general, a larger batch size makes learning more stable and shrinks (training time per step ÷ batch size), but uses more GPU memory. GPU RAM can be checked with the nvidia-smi command. Training can finish in less time by increasing the batch size as far as the machine in your environment allows.
### specifying a pretrained model
RVC starts training the model not from 0 but from pretrained weights, so it can be trained with a small dataset.
By default,
- if pitch is taken into account, it loads `rvc-location/pretrained/f0G40k.pth` and `rvc-location/pretrained/f0D40k.pth`;
- if pitch is not taken into account, it loads `rvc-location/pretrained/G40k.pth` and `rvc-location/pretrained/D40k.pth`.
During training, model parameters are saved to `logs/experiment-name/G_{}.pth` and `logs/experiment-name/D_{}.pth` every save_every_epoch; by specifying one of these paths, you can resume training, or start training from model weights learned in a different experiment.
### index learning
RVC saves the HuBERT feature values used during training, and at inference time it searches for feature values similar to those used during training. To make this search fast, the index is trained in advance.
For index training, the approximate nearest-neighbor search library faiss is used. The feature values in `/logs/experiment-name/3_feature256` are read, and the trained index is saved as `/logs/experiment-name/add_XXX.index`.
(Since the 20230428 update, total_fea.npy is no longer needed, because it is read from the index.)
### button descriptions
- Train model: after running through step2b, press this button to train the model.
- Train feature index: after training the model, train the index.
- One-click training: runs step2b, model training, and feature index training all at once.

docs/training_tips_ko.md

@@ -0,0 +1,53 @@
Instructions and tips for RVC training
======================================
This TIPS explains how data training is done.
# Training flow
I will explain along the steps in the GUI's training tab.
## step1
Set the experiment name here. You can also set here whether the model should take pitch into account.
Data for each experiment is placed in `/logs/experiment-name/`.
## step2a
Loads and preprocesses the audio files.
### loading audio
If you specify a folder containing audio files, the audio files in that folder are read automatically.
For example, if you specify `C:\Users\hoge\voices`, `C:\Users\hoge\voices\voice.mp3` is read, but `C:\Users\hoge\voices\dir\voice.mp3` is not.
Since ffmpeg is used internally to load audio, any extension supported by ffmpeg is read automatically.
After conversion to int16 with ffmpeg, it is converted to float32 and normalized between -1 and 1.
### denoising
The audio files are smoothed with scipy's filtfilt.
### audio splitting
The input audio is first split by detecting silence that lasts longer than a certain period (max_sil_kept=5 seconds?). After splitting on silence, the audio is split every 4 seconds with an overlap of 0.3 seconds. For audio segmented within 4 seconds, after volume normalization, the wav file is saved to `/logs/experiment-name/0_gt_wavs`, and from there converted to a 16k sampling rate and saved as a wav file to `/logs/experiment-name/1_16k_wavs`.
## step2b
### pitch extraction
Pitch information is extracted from the wav files. Pitch information (=f0) is extracted with the methods built into parselmouth or pyworld and saved to `/logs/experiment-name/2a_f0`. The pitch information is then converted logarithmically to an integer between 1 and 255 and saved to `/logs/experiment-name/2b-f0nsf`.
### feature_print extraction
HuBERT is used to convert the wav files into embeddings in advance. The wav files saved in `/logs/experiment-name/1_16k_wavs` are read, converted into 256-dimensional features with HuBERT, and saved in npy format to `/logs/experiment-name/3_feature256`.
## step3
Train the model.
### glossary for beginners
In deep learning, the dataset is split up and learning proceeds little by little. In one model update (step), batch_size samples are retrieved, and prediction and error correction are performed. Doing this once over the whole dataset counts as one epoch.
Therefore, the training time is (training time per step) x (number of samples in the dataset / batch size) x (number of epochs). In general, a larger batch size makes learning more stable ((training time per step ÷ batch size) becomes smaller), but uses more GPU memory. GPU RAM can be checked with the nvidia-smi command. Increasing the batch size as much as your execution environment allows makes training possible in a shorter time.
### specifying a pretrained model
RVC starts model training from pretrained weights so that it can train on a small dataset. By default it loads `rvc-location/pretrained/f0G40k.pth` and `rvc-location/pretrained/f0D40k.pth`. During training, model parameters are saved for each save_every_epoch to `logs/experiment-name/G_{}.pth` and `logs/experiment-name/D_{}.pth`; by specifying this path, you can resume training, or start training from the model weights of a different experiment.
### index learning
RVC saves the HuBERT feature values used during training, and at inference it searches for feature values similar to those used during training. To perform this search at high speed, the index is trained in advance.
For index training, the approximate nearest-neighbor search library Faiss is used. The feature values from `/logs/experiment-name/3_feature256` are read and concatenated, saved as `/logs/experiment-name/total_fea.npy`, and the index trained on them is saved as `/logs/experiment-name/add_XXX.index`.
### button descriptions
- Train model: after running through step2b, press this button to train the model.
- Train feature index: after training the model, train the index.
- One-click training: runs step2b, model training, and feature index training all at once.

docs/training_tips_tr.md

@@ -0,0 +1,68 @@
RVC Eğitimi için Talimatlar ve İpuçları
===========================================
Bu TIPS, veri eğitiminin nasıl yapıldığınııklar.
# Eğitim Süreci
Eğitim sekmesinde adımları takip ederek açıklayacağım.
## Adım 1
Burada deney adını ayarlayın.
Ayrıca burada modelin pitch'i dikkate alıp almayacağını da belirtebilirsiniz.
Eğer model pitch'i dikkate almazsa, model daha hafif olacak ancak şarkı söyleme için uygun olmayacaktır.
Her deney için veriler `/logs/deney-adınız/` klasörüne yerleştirilir.
## Adım 2a
Ses yüklenir ve ön işlem yapılır.
### Ses yükleme
Ses içeren bir klasörü belirtirseniz, o klasördeki ses dosyaları otomatik olarak okunacaktır.
Örneğin, `C:Kullanıcılar\hoge\sese` gibi bir klasör belirtirseniz, `C:Kullanıcılar\hoge\sese\voice.mp3` yüklenecek, ancak `C:Kullanıcılar\hoge\sese\klasör\voice.mp3` yüklenecektir.
Ses okumak için dahili olarak ffmpeg kullanıldığından, uzantı ffmpeg tarafından destekleniyorsa otomatik olarak okunacaktır.
ffmpeg ile int16'ya dönüştürüldükten sonra, float32'ye çevrilir ve -1 ile 1 arasında normalize edilir.
### Gürültü Temizleme
Ses, scipy'nin filtfilt fonksiyonu ile düzeltilir.
### Ses Ayırma
Önceki işlemlerin ardından giriş sesi, belirli bir süreden (max_sil_kept=5 saniye?) daha uzun süren sessiz bölümleri algılayarak bölünür. Ses sessizlik üzerinde bölündükten sonra, sesi her 4 saniyede bir 0.3 saniyelik bir örtüşme ile bölünür. 4 saniye içinde ayrılan ses için, sesin ses düzeyi normalize edildikten sonra wav dosyasına çevrilir ve `/logs/deney-adınız/0_gt_wavs` klasörüne kaydedilir ve ardından 16k örnekleme hızında `/logs/deney-adınız/1_16k_wavs` klasörüne kaydedilir.
## Adım 2b
### Pitch (Ton Yüksekliği) Çıkarma
Wav dosyalarından pitch bilgisi çıkarılır. Parselmouth veya pyworld tarafından sağlanan yöntem kullanılarak pitch bilgisi (=f0) çıkarılır ve `/logs/deney-adınız/2a_f0` klasöründe kaydedilir. Daha sonra pitch bilgisi logaritmik olarak 1 ile 255 arasında bir tamsayıya dönüştürülür ve `/logs/deney-adınız/2b-f0nsf` klasöründe kaydedilir.
### Özelliklerin Çıkartılması
Wav dosyası, HuBERT kullanılarak önceden gömme olarak çıkartılır. `/logs/deney-adınız/1_16k_wavs` klasöründe kaydedilen wav dosyası okunur, 256 boyutlu özelliklere HuBERT kullanılarak dönüştürülür ve `/logs/deney-adınız/3_feature256` klasöründe npy formatında kaydedilir.
## Adım 3
Modeli eğitin.
### Yeni Başlayanlar İçin Terimler
Derin öğrenmede, veri kümesi bölmeye ve öğrenmeye azar azar devam eder. Bir model güncellemesinde (adım), batch_size veri alınır ve tahminler ve hata düzeltmeleri yapılır. Bunun bir veri kümesi için bir kez yapılması bir epoch olarak sayılır.
Bu nedenle, öğrenme süresi adım başına öğrenme süresi x (veri kümesindeki veri sayısı / batch boyutu) x epoch sayısıdır. Genel olarak, batch boyutu ne kadar büyükse, öğrenme daha istikrarlı olur (adım başına öğrenme süresi ÷ batch boyutu) daha küçük olur, ancak daha fazla GPU belleği kullanır. GPU RAM, nvidia-smi komutu ile kontrol edilebilir. Makineye göre mümkün olduğunca batch boyutunu artırarak kısa sürede öğrenme yapılabilir.
### Önceden Eğitilmiş Modeli Belirtme
RVC, modeli 0'dan değil önceden eğitilmiş ağırlıklardan başlayarak eğitmeye başlar, bu nedenle küçük bir veri kümesiyle eğitilebilir.
Varsayılan olarak
- Eğer pitch'i dikkate alıyorsanız, `rvc-konumu/pretrained/f0G40k.pth` ve `rvc-konumu/pretrained/f0D40k.pth` yüklenir.
- Eğer pitch'i dikkate almıyorsanız, `rvc-konumu/pretrained/f0G40k.pth` ve `rvc-konumu/pretrained/f0D40k.pth` yüklenir.
Eğitim sırasında, model parametreleri `logs/deney-adınız/G_{}.pth` ve `logs/deney-adınız/D_{}.pth` olarak her save_every_epoch için kaydedilir, ancak bu yolu belirterek eğitimi başlatabilirsiniz. Farklı bir deneyde öğrenilen model ağırlıklarından eğitime yeniden başlatabilir veya yeni başlatabilirsiniz.
### Learning the index
RVC saves the HuBERT feature values used during training, and at inference time it searches for feature values similar to those used during training. To perform this search at high speed, the index is learned in advance.
For index learning, the approximate nearest neighbor search library faiss is used. The feature values saved in `/logs/your-experiment-name/3_feature256` are read to learn the index, which is saved as `logs/your-experiment-name/add_XXX.index`.
(From the 20230428 update version, the index is read directly, and saving / specifying it separately is no longer necessary.)
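A minimal faiss sketch of this index learning; the `IVF256,Flat` factory string, the placeholder features, and the file name are illustrative assumptions, not the trainer's exact settings:

```python
# Minimal faiss sketch of index learning; factory string and file names
# are illustrative assumptions.
import faiss
import numpy as np

feats = np.random.rand(10000, 256).astype("float32")  # placeholder HuBERT features
index = faiss.index_factory(feats.shape[1], "IVF256,Flat")
index.train(feats)  # learn the coarse quantizer on the training features
index.add(feats)    # add the vectors that inference will search against
faiss.write_index(index, "added_example.index")
```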
### Button descriptions
- Train model: after completing step 2b, press this button to train the model.
- Train feature index: after completing model training, press this button to perform index learning.
- One-click training: step 2b, model training, and feature index training, all at once.

environment_dml.yaml Normal file

@@ -0,0 +1,186 @@
name: pydml
channels:
- pytorch
- https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
- https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
- https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
- https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge/
- defaults
- https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/fastai/
- https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/
- https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/bioconda/
dependencies:
- abseil-cpp=20211102.0=hd77b12b_0
- absl-py=1.3.0=py310haa95532_0
- aiohttp=3.8.3=py310h2bbff1b_0
- aiosignal=1.2.0=pyhd3eb1b0_0
- async-timeout=4.0.2=py310haa95532_0
- attrs=22.1.0=py310haa95532_0
- blas=1.0=mkl
- blinker=1.4=py310haa95532_0
- bottleneck=1.3.5=py310h9128911_0
- brotli=1.0.9=h2bbff1b_7
- brotli-bin=1.0.9=h2bbff1b_7
- brotlipy=0.7.0=py310h2bbff1b_1002
- bzip2=1.0.8=he774522_0
- c-ares=1.19.0=h2bbff1b_0
- ca-certificates=2023.05.30=haa95532_0
- cachetools=4.2.2=pyhd3eb1b0_0
- certifi=2023.5.7=py310haa95532_0
- cffi=1.15.1=py310h2bbff1b_3
- charset-normalizer=2.0.4=pyhd3eb1b0_0
- click=8.0.4=py310haa95532_0
- colorama=0.4.6=py310haa95532_0
- contourpy=1.0.5=py310h59b6b97_0
- cryptography=39.0.1=py310h21b164f_0
- cycler=0.11.0=pyhd3eb1b0_0
- fonttools=4.25.0=pyhd3eb1b0_0
- freetype=2.12.1=ha860e81_0
- frozenlist=1.3.3=py310h2bbff1b_0
- giflib=5.2.1=h8cc25b3_3
- glib=2.69.1=h5dc1a3c_2
- google-auth=2.6.0=pyhd3eb1b0_0
- google-auth-oauthlib=0.4.4=pyhd3eb1b0_0
- grpc-cpp=1.48.2=hf108199_0
- grpcio=1.48.2=py310hf108199_0
- gst-plugins-base=1.18.5=h9e645db_0
- gstreamer=1.18.5=hd78058f_0
- icu=58.2=ha925a31_3
- idna=3.4=py310haa95532_0
- intel-openmp=2023.1.0=h59b6b97_46319
- jpeg=9e=h2bbff1b_1
- kiwisolver=1.4.4=py310hd77b12b_0
- krb5=1.19.4=h5b6d351_0
- lerc=3.0=hd77b12b_0
- libbrotlicommon=1.0.9=h2bbff1b_7
- libbrotlidec=1.0.9=h2bbff1b_7
- libbrotlienc=1.0.9=h2bbff1b_7
- libclang=14.0.6=default_hb5a9fac_1
- libclang13=14.0.6=default_h8e68704_1
- libdeflate=1.17=h2bbff1b_0
- libffi=3.4.4=hd77b12b_0
- libiconv=1.16=h2bbff1b_2
- libogg=1.3.5=h2bbff1b_1
- libpng=1.6.39=h8cc25b3_0
- libprotobuf=3.20.3=h23ce68f_0
- libtiff=4.5.0=h6c2663c_2
- libuv=1.44.2=h2bbff1b_0
- libvorbis=1.3.7=he774522_0
- libwebp=1.2.4=hbc33d0d_1
- libwebp-base=1.2.4=h2bbff1b_1
- libxml2=2.10.3=h0ad7f3c_0
- libxslt=1.1.37=h2bbff1b_0
- lz4-c=1.9.4=h2bbff1b_0
- markdown=3.4.1=py310haa95532_0
- markupsafe=2.1.1=py310h2bbff1b_0
- matplotlib=3.7.1=py310haa95532_1
- matplotlib-base=3.7.1=py310h4ed8f06_1
- mkl=2023.1.0=h8bd8f75_46356
- mkl-service=2.4.0=py310h2bbff1b_1
- mkl_fft=1.3.6=py310h4ed8f06_1
- mkl_random=1.2.2=py310h4ed8f06_1
- multidict=6.0.2=py310h2bbff1b_0
- munkres=1.1.4=py_0
- numexpr=2.8.4=py310h2cd9be0_1
- numpy=1.24.3=py310h055cbcc_1
- numpy-base=1.24.3=py310h65a83cf_1
- oauthlib=3.2.2=py310haa95532_0
- openssl=1.1.1t=h2bbff1b_0
- packaging=23.0=py310haa95532_0
- pandas=1.5.3=py310h4ed8f06_0
- pcre=8.45=hd77b12b_0
- pillow=9.4.0=py310hd77b12b_0
- pip=23.0.1=py310haa95532_0
- ply=3.11=py310haa95532_0
- protobuf=3.20.3=py310hd77b12b_0
- pyasn1=0.4.8=pyhd3eb1b0_0
- pyasn1-modules=0.2.8=py_0
- pycparser=2.21=pyhd3eb1b0_0
- pyjwt=2.4.0=py310haa95532_0
- pyopenssl=23.0.0=py310haa95532_0
- pyparsing=3.0.9=py310haa95532_0
- pyqt=5.15.7=py310hd77b12b_0
- pyqt5-sip=12.11.0=py310hd77b12b_0
- pysocks=1.7.1=py310haa95532_0
- python=3.10.11=h966fe2a_2
- python-dateutil=2.8.2=pyhd3eb1b0_0
- pytorch-mutex=1.0=cpu
- pytz=2022.7=py310haa95532_0
- pyyaml=6.0=py310h2bbff1b_1
- qt-main=5.15.2=he8e5bd7_8
- qt-webengine=5.15.9=hb9a9bb5_5
- qtwebkit=5.212=h2bbfb41_5
- re2=2022.04.01=hd77b12b_0
- requests=2.29.0=py310haa95532_0
- requests-oauthlib=1.3.0=py_0
- rsa=4.7.2=pyhd3eb1b0_1
- setuptools=67.8.0=py310haa95532_0
- sip=6.6.2=py310hd77b12b_0
- six=1.16.0=pyhd3eb1b0_1
- sqlite=3.41.2=h2bbff1b_0
- tbb=2021.8.0=h59b6b97_0
- tensorboard=2.10.0=py310haa95532_0
- tensorboard-data-server=0.6.1=py310haa95532_0
- tensorboard-plugin-wit=1.8.1=py310haa95532_0
- tk=8.6.12=h2bbff1b_0
- toml=0.10.2=pyhd3eb1b0_0
- tornado=6.2=py310h2bbff1b_0
- tqdm=4.65.0=py310h9909e9c_0
- typing_extensions=4.5.0=py310haa95532_0
- tzdata=2023c=h04d1e81_0
- urllib3=1.26.16=py310haa95532_0
- vc=14.2=h21ff451_1
- vs2015_runtime=14.27.29016=h5e58377_2
- werkzeug=2.2.3=py310haa95532_0
- wheel=0.38.4=py310haa95532_0
- win_inet_pton=1.1.0=py310haa95532_0
- xz=5.4.2=h8cc25b3_0
- yaml=0.2.5=he774522_0
- yarl=1.8.1=py310h2bbff1b_0
- zlib=1.2.13=h8cc25b3_0
- zstd=1.5.5=hd43e919_0
- pip:
- antlr4-python3-runtime==4.8
- appdirs==1.4.4
- audioread==3.0.0
- bitarray==2.7.4
- cython==0.29.35
- decorator==5.1.1
- fairseq==0.12.2
- faiss-cpu==1.7.4
- filelock==3.12.0
- hydra-core==1.0.7
- jinja2==3.1.2
- joblib==1.2.0
- lazy-loader==0.2
- librosa==0.10.0.post2
- llvmlite==0.40.0
- lxml==4.9.2
- mpmath==1.3.0
- msgpack==1.0.5
- networkx==3.1
- noisereduce==2.0.1
- numba==0.57.0
- omegaconf==2.0.6
- opencv-python==4.7.0.72
- pooch==1.6.0
- portalocker==2.7.0
- pysimplegui==4.60.5
- pywin32==306
- pyworld==0.3.3
- regex==2023.5.5
- sacrebleu==2.3.1
- scikit-learn==1.2.2
- scipy==1.10.1
- sounddevice==0.4.6
- soundfile==0.12.1
- soxr==0.3.5
- sympy==1.12
- tabulate==0.9.0
- threadpoolctl==3.1.0
- torch==2.0.0
- torch-directml==0.2.0.dev230426
- torchaudio==2.0.1
- torchvision==0.15.1
- wget==3.2
prefix: D:\ProgramData\anaconda3_\envs\pydml


@@ -1,47 +0,0 @@
from infer_pack.models_onnx import SynthesizerTrnMs256NSFsid
import torch
person = "Shiroha/shiroha.pth"
exported_path = "model.onnx"
cpt = torch.load(person, map_location="cpu")
cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
print(*cpt["config"])
net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=False)
net_g.load_state_dict(cpt["weight"], strict=False)
test_phone = torch.rand(1, 200, 256)
test_phone_lengths = torch.tensor([200]).long()
test_pitch = torch.randint(size=(1, 200), low=5, high=255)
test_pitchf = torch.rand(1, 200)
test_ds = torch.LongTensor([0])
test_rnd = torch.rand(1, 192, 200)
input_names = ["phone", "phone_lengths", "pitch", "pitchf", "ds", "rnd"]
output_names = [
"audio",
]
device = "cpu"
torch.onnx.export(
net_g,
(
test_phone.to(device),
test_phone_lengths.to(device),
test_pitch.to(device),
test_pitchf.to(device),
test_ds.to(device),
test_rnd.to(device),
),
exported_path,
dynamic_axes={
"phone": [1],
"pitch": [1],
"pitchf": [1],
"rnd": [2],
},
do_constant_folding=False,
opset_version=16,
verbose=False,
input_names=input_names,
output_names=output_names,
)


@@ -1,7 +1,9 @@
import os, traceback, sys, parselmouth
import librosa
now_dir = os.getcwd()
sys.path.append(now_dir)
from lib.audio import load_audio
import pyworld
from scipy.io import wavfile
import numpy as np, logging
logging.getLogger("numba").setLevel(logging.WARNING)
@@ -33,15 +35,14 @@ class FeatureInput(object):
self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700)
def compute_f0(self, path, f0_method):
x, sr = librosa.load(path, self.fs)
x = load_audio(path, self.fs)
p_len = x.shape[0] // self.hop
assert sr == self.fs
if f0_method == "pm":
time_step = 160 / 16000 * 1000
f0_min = 50
f0_max = 1100
f0 = (
parselmouth.Sound(x, sr)
parselmouth.Sound(x, self.fs)
.to_pitch_ac(
time_step=time_step / 1000,
voicing_threshold=0.6,
@@ -58,21 +59,28 @@ class FeatureInput(object):
elif f0_method == "harvest":
f0, t = pyworld.harvest(
x.astype(np.double),
fs=sr,
fs=self.fs,
f0_ceil=self.f0_max,
f0_floor=self.f0_min,
frame_period=1000 * self.hop / sr,
frame_period=1000 * self.hop / self.fs,
)
f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.fs)
elif f0_method == "dio":
f0, t = pyworld.dio(
x.astype(np.double),
fs=sr,
fs=self.fs,
f0_ceil=self.f0_max,
f0_floor=self.f0_min,
frame_period=1000 * self.hop / sr,
frame_period=1000 * self.hop / self.fs,
)
f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.fs)
elif f0_method == "rmvpe":
if hasattr(self, "model_rmvpe") == False:
from lib.rmvpe import RMVPE
print("loading rmvpe model")
self.model_rmvpe = RMVPE("rmvpe.pt", is_half=False, device="cpu")
f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03)
return f0
def coarse_f0(self, f0):
@@ -84,7 +92,7 @@ class FeatureInput(object):
# use 0 or 1
f0_mel[f0_mel <= 1] = 1
f0_mel[f0_mel > self.f0_bin - 1] = self.f0_bin - 1
f0_coarse = np.rint(f0_mel).astype(np.int)
f0_coarse = np.rint(f0_mel).astype(int)
assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (
f0_coarse.max(),
f0_coarse.min(),
@@ -152,7 +160,7 @@ if __name__ == "__main__":
f0method,
),
)
p.start()
ps.append(p)
for p in ps:
p.join()
p.start()
for i in range(n_p):
ps[i].join()

extract_f0_rmvpe.py Normal file

@@ -0,0 +1,132 @@
import os, traceback, sys, parselmouth
now_dir = os.getcwd()
sys.path.append(now_dir)
from lib.audio import load_audio
import pyworld
import numpy as np, logging
logging.getLogger("numba").setLevel(logging.WARNING)
n_part = int(sys.argv[1])
i_part = int(sys.argv[2])
i_gpu = sys.argv[3]
os.environ["CUDA_VISIBLE_DEVICES"] = str(i_gpu)
exp_dir = sys.argv[4]
is_half = sys.argv[5]
f = open("%s/extract_f0_feature.log" % exp_dir, "a+")
def printt(strr):
print(strr)
f.write("%s\n" % strr)
f.flush()
class FeatureInput(object):
def __init__(self, samplerate=16000, hop_size=160):
self.fs = samplerate
self.hop = hop_size
self.f0_bin = 256
self.f0_max = 1100.0
self.f0_min = 50.0
self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700)
self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700)
def compute_f0(self, path, f0_method):
x = load_audio(path, self.fs)
# p_len = x.shape[0] // self.hop
if f0_method == "rmvpe":
if hasattr(self, "model_rmvpe") == False:
from lib.rmvpe import RMVPE
print("loading rmvpe model")
self.model_rmvpe = RMVPE("rmvpe.pt", is_half=is_half, device="cuda")
f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03)
return f0
def coarse_f0(self, f0):
f0_mel = 1127 * np.log(1 + f0 / 700)
f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - self.f0_mel_min) * (
self.f0_bin - 2
) / (self.f0_mel_max - self.f0_mel_min) + 1
# use 0 or 1
f0_mel[f0_mel <= 1] = 1
f0_mel[f0_mel > self.f0_bin - 1] = self.f0_bin - 1
f0_coarse = np.rint(f0_mel).astype(int)
assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (
f0_coarse.max(),
f0_coarse.min(),
)
return f0_coarse
def go(self, paths, f0_method):
if len(paths) == 0:
printt("no-f0-todo")
else:
printt("todo-f0-%s" % len(paths))
n = max(len(paths) // 5, 1)  # each process prints at most 5 progress lines
for idx, (inp_path, opt_path1, opt_path2) in enumerate(paths):
try:
if idx % n == 0:
printt("f0ing,now-%s,all-%s,-%s" % (idx, len(paths), inp_path))
if (
os.path.exists(opt_path1 + ".npy") == True
and os.path.exists(opt_path2 + ".npy") == True
):
continue
featur_pit = self.compute_f0(inp_path, f0_method)
np.save(
opt_path2,
featur_pit,
allow_pickle=False,
) # nsf
coarse_pit = self.coarse_f0(featur_pit)
np.save(
opt_path1,
coarse_pit,
allow_pickle=False,
) # ori
except:
printt("f0fail-%s-%s-%s" % (idx, inp_path, traceback.format_exc()))
if __name__ == "__main__":
# exp_dir=r"E:\codes\py39\dataset\mi-test"
# n_p=16
# f = open("%s/log_extract_f0.log"%exp_dir, "w")
printt(sys.argv)
featureInput = FeatureInput()
paths = []
inp_root = "%s/1_16k_wavs" % (exp_dir)
opt_root1 = "%s/2a_f0" % (exp_dir)
opt_root2 = "%s/2b-f0nsf" % (exp_dir)
os.makedirs(opt_root1, exist_ok=True)
os.makedirs(opt_root2, exist_ok=True)
for name in sorted(list(os.listdir(inp_root))):
inp_path = "%s/%s" % (inp_root, name)
if "spec" in inp_path:
continue
opt_path1 = "%s/%s" % (opt_root1, name)
opt_path2 = "%s/%s" % (opt_root2, name)
paths.append([inp_path, opt_path1, opt_path2])
try:
featureInput.go(paths[i_part::n_part], "rmvpe")
except:
printt("f0_all_fail-%s" % (traceback.format_exc()))
# ps = []
# for i in range(n_p):
# p = Process(
# target=featureInput.go,
# args=(
# paths[i::n_p],
# f0method,
# ),
# )
# ps.append(p)
# p.start()
# for i in range(n_p):
# ps[i].join()

extract_f0_rmvpe_dml.py Normal file

@@ -0,0 +1,130 @@
import os, traceback, sys, parselmouth
now_dir = os.getcwd()
sys.path.append(now_dir)
from lib.audio import load_audio
import pyworld
import numpy as np, logging
logging.getLogger("numba").setLevel(logging.WARNING)
exp_dir = sys.argv[1]
import torch_directml
device = torch_directml.device(torch_directml.default_device())
f = open("%s/extract_f0_feature.log" % exp_dir, "a+")
def printt(strr):
print(strr)
f.write("%s\n" % strr)
f.flush()
class FeatureInput(object):
def __init__(self, samplerate=16000, hop_size=160):
self.fs = samplerate
self.hop = hop_size
self.f0_bin = 256
self.f0_max = 1100.0
self.f0_min = 50.0
self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700)
self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700)
def compute_f0(self, path, f0_method):
x = load_audio(path, self.fs)
# p_len = x.shape[0] // self.hop
if f0_method == "rmvpe":
if hasattr(self, "model_rmvpe") == False:
from lib.rmvpe import RMVPE
print("loading rmvpe model")
self.model_rmvpe = RMVPE("rmvpe.pt", is_half=False, device=device)
f0 = self.model_rmvpe.infer_from_audio(x, thred=0.03)
return f0
def coarse_f0(self, f0):
f0_mel = 1127 * np.log(1 + f0 / 700)
f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - self.f0_mel_min) * (
self.f0_bin - 2
) / (self.f0_mel_max - self.f0_mel_min) + 1
# use 0 or 1
f0_mel[f0_mel <= 1] = 1
f0_mel[f0_mel > self.f0_bin - 1] = self.f0_bin - 1
f0_coarse = np.rint(f0_mel).astype(int)
assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (
f0_coarse.max(),
f0_coarse.min(),
)
return f0_coarse
def go(self, paths, f0_method):
if len(paths) == 0:
printt("no-f0-todo")
else:
printt("todo-f0-%s" % len(paths))
n = max(len(paths) // 5, 1)  # each process prints at most 5 progress lines
for idx, (inp_path, opt_path1, opt_path2) in enumerate(paths):
try:
if idx % n == 0:
printt("f0ing,now-%s,all-%s,-%s" % (idx, len(paths), inp_path))
if (
os.path.exists(opt_path1 + ".npy") == True
and os.path.exists(opt_path2 + ".npy") == True
):
continue
featur_pit = self.compute_f0(inp_path, f0_method)
np.save(
opt_path2,
featur_pit,
allow_pickle=False,
) # nsf
coarse_pit = self.coarse_f0(featur_pit)
np.save(
opt_path1,
coarse_pit,
allow_pickle=False,
) # ori
except:
printt("f0fail-%s-%s-%s" % (idx, inp_path, traceback.format_exc()))
if __name__ == "__main__":
# exp_dir=r"E:\codes\py39\dataset\mi-test"
# n_p=16
# f = open("%s/log_extract_f0.log"%exp_dir, "w")
printt(sys.argv)
featureInput = FeatureInput()
paths = []
inp_root = "%s/1_16k_wavs" % (exp_dir)
opt_root1 = "%s/2a_f0" % (exp_dir)
opt_root2 = "%s/2b-f0nsf" % (exp_dir)
os.makedirs(opt_root1, exist_ok=True)
os.makedirs(opt_root2, exist_ok=True)
for name in sorted(list(os.listdir(inp_root))):
inp_path = "%s/%s" % (inp_root, name)
if "spec" in inp_path:
continue
opt_path1 = "%s/%s" % (opt_root1, name)
opt_path2 = "%s/%s" % (opt_root2, name)
paths.append([inp_path, opt_path1, opt_path2])
try:
featureInput.go(paths, "rmvpe")
except:
printt("f0_all_fail-%s" % (traceback.format_exc()))
# ps = []
# for i in range(n_p):
# p = Process(
# target=featureInput.go,
# args=(
# paths[i::n_p],
# f0method,
# ),
# )
# ps.append(p)
# p.start()
# for i in range(n_p):
# ps[i].join()


@@ -1,22 +1,42 @@
import os, sys, traceback
# device=sys.argv[1]
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
os.environ["PYTORCH_MPS_HIGH_WATERMARK_RATIO"] = "0.0"
device = sys.argv[1]
n_part = int(sys.argv[2])
i_part = int(sys.argv[3])
if len(sys.argv) == 5:
if len(sys.argv) == 6:
exp_dir = sys.argv[4]
version = sys.argv[5]
else:
i_gpu = sys.argv[4]
exp_dir = sys.argv[5]
os.environ["CUDA_VISIBLE_DEVICES"] = str(i_gpu)
version = sys.argv[6]
import torch
import torch.nn.functional as F
import soundfile as sf
import numpy as np
from fairseq import checkpoint_utils
import fairseq
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if "privateuseone" not in device:
device = "cpu"
if torch.cuda.is_available():
device = "cuda"
elif torch.backends.mps.is_available():
device = "mps"
else:
import torch_directml
device = torch_directml.device(torch_directml.default_device())
def forward_dml(ctx, x, scale):
ctx.scale = scale
res = x.clone().detach()
return res
fairseq.modules.grad_multiply.GradMultiply.forward = forward_dml
f = open("%s/extract_f0_feature.log" % exp_dir, "a+")
@@ -32,7 +52,9 @@ model_path = "hubert_base.pt"
printt(exp_dir)
wavPath = "%s/1_16k_wavs" % exp_dir
outPath = "%s/3_feature256" % exp_dir
outPath = (
"%s/3_feature256" % exp_dir if version == "v1" else "%s/3_feature768" % exp_dir
)
os.makedirs(outPath, exist_ok=True)
@@ -53,14 +75,21 @@ def readwave(wav_path, normalize=False):
# HuBERT model
printt("load model(s) from {}".format(model_path))
models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
# if hubert model is exist
if os.access(model_path, os.F_OK) == False:
printt(
"Error: Extracting is shut down because %s does not exist, you may download it from https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main"
% model_path
)
exit(0)
models, saved_cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task(
[model_path],
suffix="",
)
model = models[0]
model = model.to(device)
printt("move model to %s" % device)
if device != "cpu":
if device not in ["mps", "cpu"]:
model = model.half()
model.eval()
@@ -83,14 +112,16 @@ else:
padding_mask = torch.BoolTensor(feats.shape).fill_(False)
inputs = {
"source": feats.half().to(device)
if device != "cpu"
if device not in ["mps", "cpu"]
else feats.to(device),
"padding_mask": padding_mask.to(device),
"output_layer": 9, # layer 9
"output_layer": 9 if version == "v1" else 12, # layer 9
}
with torch.no_grad():
logits = model.extract_features(**inputs)
feats = model.final_proj(logits[0])
feats = (
model.final_proj(logits[0]) if version == "v1" else logits[0]
)
feats = feats.squeeze(0).float().cpu().numpy()
if np.isnan(feats).sum() == 0:


@@ -22,9 +22,13 @@ def process(fn: str):
print("processing infer-web.py")
process("infer-web.py")
print("processing gui.py")
process("gui.py")
print("processing gui_v0.py")
process("gui_v0.py")
print("processing gui_v1.py")
process("gui_v1.py")
# Save as a JSON file
with open("./i18n/zh_CN.json", "w", encoding="utf-8") as f:
with open("./lib/i18n/zh_CN.json", "w", encoding="utf-8") as f:
json.dump(data, f, ensure_ascii=False, indent=4)
f.write("\n")

go-realtime-gui.bat Normal file

@@ -0,0 +1,2 @@
runtime\python.exe gui_v1.py
pause


@@ -1,2 +1,2 @@
runtime\python.exe infer-web.py --pycmd runtime\python.exe
runtime\python.exe infer-web.py --pycmd runtime\python.exe --port 7897
pause

gui.py

@@ -1,540 +0,0 @@
import PySimpleGUI as sg
import sounddevice as sd
import noisereduce as nr
import numpy as np
from fairseq import checkpoint_utils
import librosa, torch, parselmouth, faiss, time, threading
import torch.nn.functional as F
import torchaudio.transforms as tat
# import matplotlib.pyplot as plt
from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono
from i18n import I18nAuto
i18n = I18nAuto()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
class RVC:
def __init__(
self, key, hubert_path, pth_path, index_path, npy_path, index_rate
) -> None:
"""
Initialization
"""
self.f0_up_key = key
self.time_step = 160 / 16000 * 1000
self.f0_min = 50
self.f0_max = 1100
self.f0_mel_min = 1127 * np.log(1 + self.f0_min / 700)
self.f0_mel_max = 1127 * np.log(1 + self.f0_max / 700)
self.index = faiss.read_index(index_path)
self.index_rate = index_rate
"""NOT YET USED"""
self.big_npy = np.load(npy_path)
model_path = hubert_path
print("load model(s) from {}".format(model_path))
models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
[model_path],
suffix="",
)
self.model = models[0]
self.model = self.model.to(device)
self.model = self.model.half()
self.model.eval()
cpt = torch.load(pth_path, map_location="cpu")
tgt_sr = cpt["config"][-1]
cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
if_f0 = cpt.get("f0", 1)
if if_f0 == 1:
self.net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=True)
else:
self.net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
del self.net_g.enc_q
print(self.net_g.load_state_dict(cpt["weight"], strict=False))
self.net_g.eval().to(device)
self.net_g.half()
def get_f0_coarse(self, f0):
f0_mel = 1127 * np.log(1 + f0 / 700)
f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - self.f0_mel_min) * 254 / (
self.f0_mel_max - self.f0_mel_min
) + 1
f0_mel[f0_mel <= 1] = 1
f0_mel[f0_mel > 255] = 255
# f0_mel[f0_mel > 188] = 188
f0_coarse = np.rint(f0_mel).astype(np.int)
return f0_coarse
def get_f0(self, x, p_len, f0_up_key=0):
f0 = (
parselmouth.Sound(x, 16000)
.to_pitch_ac(
time_step=self.time_step / 1000,
voicing_threshold=0.6,
pitch_floor=self.f0_min,
pitch_ceiling=self.f0_max,
)
.selected_array["frequency"]
)
pad_size = (p_len - len(f0) + 1) // 2
if pad_size > 0 or p_len - len(f0) - pad_size > 0:
f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
f0 *= pow(2, f0_up_key / 12)
# f0=suofang(f0)
f0bak = f0.copy()
f0_coarse = self.get_f0_coarse(f0)
return f0_coarse, f0bak
def infer(self, feats: torch.Tensor) -> np.ndarray:
"""
Inference function
"""
audio = feats.clone().cpu().numpy()
assert feats.dim() == 1, feats.dim()
feats = feats.view(1, -1)
padding_mask = torch.BoolTensor(feats.shape).fill_(False)
inputs = {
"source": feats.half().to(device),
"padding_mask": padding_mask.to(device),
"output_layer": 9, # layer 9
}
torch.cuda.synchronize()
with torch.no_grad():
logits = self.model.extract_features(**inputs)
feats = self.model.final_proj(logits[0])
#### index optimization
if (
isinstance(self.index, type(None)) == False
and isinstance(self.big_npy, type(None)) == False
and self.index_rate != 0
):
npy = feats[0].cpu().numpy().astype("float32")
_, I = self.index.search(npy, 1)
npy = self.big_npy[I.squeeze()].astype("float16")
feats = (
torch.from_numpy(npy).unsqueeze(0).to(device) * self.index_rate
+ (1 - self.index_rate) * feats
)
feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
torch.cuda.synchronize()
# p_len = min(feats.shape[1],10000,pitch.shape[0])  # too large values blow up GPU memory
p_len = min(feats.shape[1], 12000) #
print(feats.shape)
pitch, pitchf = self.get_f0(audio, p_len, self.f0_up_key)
p_len = min(feats.shape[1], 12000, pitch.shape[0])  # too large values blow up GPU memory
torch.cuda.synchronize()
# print(feats.shape,pitch.shape)
feats = feats[:, :p_len, :]
pitch = pitch[:p_len]
pitchf = pitchf[:p_len]
p_len = torch.LongTensor([p_len]).to(device)
pitch = torch.LongTensor(pitch).unsqueeze(0).to(device)
pitchf = torch.FloatTensor(pitchf).unsqueeze(0).to(device)
ii = 0 # sid
sid = torch.LongTensor([ii]).to(device)
with torch.no_grad():
infered_audio = (
self.net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0]
.data.cpu()
.float()
) # nsf
torch.cuda.synchronize()
return infered_audio
class Config:
def __init__(self) -> None:
self.hubert_path: str = ""
self.pth_path: str = ""
self.index_path: str = ""
self.npy_path: str = ""
self.pitch: int = 12
self.samplerate: int = 44100
self.block_time: float = 1.0 # s
self.buffer_num: int = 1
self.threhold: int = -30
self.crossfade_time: float = 0.08
self.extra_time: float = 0.04
self.I_noise_reduce = False
self.O_noise_reduce = False
self.index_rate = 0.3
class GUI:
def __init__(self) -> None:
self.config = Config()
self.flag_vc = False
self.launcher()
def launcher(self):
sg.theme("LightBlue3")
input_devices, output_devices, _, _ = self.get_devices()
layout = [
[
sg.Frame(
title=i18n("加载模型"),
layout=[
[
sg.Input(
default_text="TEMP\\hubert_base.pt", key="hubert_path"
),
sg.FileBrowse(i18n("Hubert模型")),
],
[
sg.Input(default_text="TEMP\\atri.pth", key="pth_path"),
sg.FileBrowse(i18n("选择.pth文件")),
],
[
sg.Input(
default_text="TEMP\\added_IVF512_Flat_atri_baseline_src_feat.index",
key="index_path",
),
sg.FileBrowse(i18n("选择.index文件")),
],
[
sg.Input(
default_text="TEMP\\big_src_feature_atri.npy",
key="npy_path",
),
sg.FileBrowse(i18n("选择.npy文件")),
],
],
)
],
[
sg.Frame(
layout=[
[
sg.Text(i18n("输入设备")),
sg.Combo(
input_devices,
key="sg_input_device",
default_value=input_devices[sd.default.device[0]],
),
],
[
sg.Text(i18n("输出设备")),
sg.Combo(
output_devices,
key="sg_output_device",
default_value=output_devices[sd.default.device[1]],
),
],
],
title=i18n("音频设备(请使用同种类驱动)"),
)
],
[
sg.Frame(
layout=[
[
sg.Text(i18n("响应阈值")),
sg.Slider(
range=(-60, 0),
key="threhold",
resolution=1,
orientation="h",
default_value=-30,
),
],
[
sg.Text(i18n("音调设置")),
sg.Slider(
range=(-24, 24),
key="pitch",
resolution=1,
orientation="h",
default_value=12,
),
],
[
sg.Text(i18n("Index Rate")),
sg.Slider(
range=(0.0, 1.0),
key="index_rate",
resolution=0.01,
orientation="h",
default_value=0.5,
),
],
],
title=i18n("常规设置"),
),
sg.Frame(
layout=[
[
sg.Text(i18n("采样长度")),
sg.Slider(
range=(0.1, 3.0),
key="block_time",
resolution=0.1,
orientation="h",
default_value=1.0,
),
],
[
sg.Text(i18n("淡入淡出长度")),
sg.Slider(
range=(0.01, 0.15),
key="crossfade_length",
resolution=0.01,
orientation="h",
default_value=0.08,
),
],
[
sg.Text(i18n("额外推理时长")),
sg.Slider(
range=(0.05, 3.00),
key="extra_time",
resolution=0.01,
orientation="h",
default_value=0.05,
),
],
[
sg.Checkbox(i18n("输入降噪"), key="I_noise_reduce"),
sg.Checkbox(i18n("输出降噪"), key="O_noise_reduce"),
],
],
title=i18n("性能设置"),
),
],
[
sg.Button(i18n("开始音频转换"), key="start_vc"),
sg.Button(i18n("停止音频转换"), key="stop_vc"),
sg.Text(i18n("推理时间(ms):")),
sg.Text("0", key="infer_time"),
],
]
self.window = sg.Window("RVC - GUI", layout=layout)
self.event_handler()
def event_handler(self):
while True:
event, values = self.window.read()
if event == sg.WINDOW_CLOSED:
self.flag_vc = False
exit()
if event == "start_vc" and self.flag_vc == False:
self.set_values(values)
print(str(self.config.__dict__))
print("using_cuda:" + str(torch.cuda.is_available()))
self.start_vc()
if event == "stop_vc" and self.flag_vc == True:
self.flag_vc = False
def set_values(self, values):
self.set_devices(values["sg_input_device"], values["sg_output_device"])
self.config.hubert_path = values["hubert_path"]
self.config.pth_path = values["pth_path"]
self.config.index_path = values["index_path"]
self.config.npy_path = values["npy_path"]
self.config.threhold = values["threhold"]
self.config.pitch = values["pitch"]
self.config.block_time = values["block_time"]
self.config.crossfade_time = values["crossfade_length"]
self.config.extra_time = values["extra_time"]
self.config.I_noise_reduce = values["I_noise_reduce"]
self.config.O_noise_reduce = values["O_noise_reduce"]
self.config.index_rate = values["index_rate"]
def start_vc(self):
torch.cuda.empty_cache()
self.flag_vc = True
self.block_frame = int(self.config.block_time * self.config.samplerate)
self.crossfade_frame = int(self.config.crossfade_time * self.config.samplerate)
self.sola_search_frame = int(0.012 * self.config.samplerate)
self.delay_frame = int(0.02 * self.config.samplerate)  # reserve 0.02 s of lead-in
self.extra_frame = int(
self.config.extra_time * self.config.samplerate
)  # reserve 0.04 s of tail
self.rvc = None
self.rvc = RVC(
self.config.pitch,
self.config.hubert_path,
self.config.pth_path,
self.config.index_path,
self.config.npy_path,
self.config.index_rate,
)
self.input_wav: np.ndarray = np.zeros(
self.extra_frame
+ self.crossfade_frame
+ self.sola_search_frame
+ self.block_frame,
dtype="float32",
)
self.output_wav: torch.Tensor = torch.zeros(
self.block_frame, device=device, dtype=torch.float32
)
self.sola_buffer: torch.Tensor = torch.zeros(
self.crossfade_frame, device=device, dtype=torch.float32
)
self.fade_in_window: torch.Tensor = torch.linspace(
0.0, 1.0, steps=self.crossfade_frame, device=device, dtype=torch.float32
)
self.fade_out_window: torch.Tensor = 1 - self.fade_in_window
self.resampler1 = tat.Resample(
orig_freq=self.config.samplerate, new_freq=16000, dtype=torch.float32
)
self.resampler2 = tat.Resample(
orig_freq=40000, new_freq=self.config.samplerate, dtype=torch.float32
)
thread_vc = threading.Thread(target=self.soundinput)
thread_vc.start()
def soundinput(self):
"""
Receive audio input
"""
with sd.Stream(
callback=self.audio_callback,
blocksize=self.block_frame,
samplerate=self.config.samplerate,
dtype="float32",
):
while self.flag_vc:
time.sleep(self.config.block_time)
print("Audio block passed.")
print("ENDing VC")
def audio_callback(
self, indata: np.ndarray, outdata: np.ndarray, frames, times, status
):
"""
Audio processing
"""
start_time = time.perf_counter()
indata = librosa.to_mono(indata.T)
if self.config.I_noise_reduce:
indata[:] = nr.reduce_noise(y=indata, sr=self.config.samplerate)
"""noise gate"""
frame_length = 2048
hop_length = 1024
rms = librosa.feature.rms(
y=indata, frame_length=frame_length, hop_length=hop_length
)
db_threhold = librosa.amplitude_to_db(rms, ref=1.0)[0] < self.config.threhold
# print(rms.shape,db.shape,db)
for i in range(db_threhold.shape[0]):
if db_threhold[i]:
indata[i * hop_length : (i + 1) * hop_length] = 0
self.input_wav[:] = np.append(self.input_wav[self.block_frame :], indata)
# infer
print("input_wav:" + str(self.input_wav.shape))
# print('infered_wav:'+str(infer_wav.shape))
infer_wav: torch.Tensor = self.resampler2(
self.rvc.infer(self.resampler1(torch.from_numpy(self.input_wav)))
)[-self.crossfade_frame - self.sola_search_frame - self.block_frame :].to(
device
)
print("infer_wav:" + str(infer_wav.shape))
# SOLA algorithm from https://github.com/yxlllc/DDSP-SVC
cor_nom = F.conv1d(
infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame],
self.sola_buffer[None, None, :],
)
cor_den = torch.sqrt(
F.conv1d(
infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame]
** 2,
torch.ones(1, 1, self.crossfade_frame, device=device),
)
+ 1e-8
)
sola_offset = torch.argmax(cor_nom[0, 0] / cor_den[0, 0])
print("sola offset: " + str(int(sola_offset)))
# crossfade
self.output_wav[:] = infer_wav[sola_offset : sola_offset + self.block_frame]
self.output_wav[: self.crossfade_frame] *= self.fade_in_window
self.output_wav[: self.crossfade_frame] += self.sola_buffer[:]
if sola_offset < self.sola_search_frame:
self.sola_buffer[:] = (
infer_wav[
-self.sola_search_frame
- self.crossfade_frame
+ sola_offset : -self.sola_search_frame
+ sola_offset
]
* self.fade_out_window
)
else:
self.sola_buffer[:] = (
infer_wav[-self.crossfade_frame :] * self.fade_out_window
)
if self.config.O_noise_reduce:
outdata[:] = np.tile(
nr.reduce_noise(
y=self.output_wav[:].cpu().numpy(), sr=self.config.samplerate
),
(2, 1),
).T
else:
outdata[:] = self.output_wav[:].repeat(2, 1).t().cpu().numpy()
total_time = time.perf_counter() - start_time
print("infer time:" + str(total_time))
self.window["infer_time"].update(int(total_time * 1000))
def get_devices(self, update: bool = True):
"""获取设备列表"""
if update:
sd._terminate()
sd._initialize()
devices = sd.query_devices()
hostapis = sd.query_hostapis()
for hostapi in hostapis:
for device_idx in hostapi["devices"]:
devices[device_idx]["hostapi_name"] = hostapi["name"]
input_devices = [
f"{d['name']} ({d['hostapi_name']})"
for d in devices
if d["max_input_channels"] > 0
]
output_devices = [
f"{d['name']} ({d['hostapi_name']})"
for d in devices
if d["max_output_channels"] > 0
]
input_devices_indices = [
d["index"] for d in devices if d["max_input_channels"] > 0
]
output_devices_indices = [
d["index"] for d in devices if d["max_output_channels"] > 0
]
return (
input_devices,
output_devices,
input_devices_indices,
output_devices_indices,
)
def set_devices(self, input_device, output_device):
"""设置输出设备"""
(
input_devices,
output_devices,
input_device_indices,
output_device_indices,
) = self.get_devices()
sd.default.device[0] = input_device_indices[input_devices.index(input_device)]
sd.default.device[1] = output_device_indices[
output_devices.index(output_device)
]
print("input device:" + str(sd.default.device[0]) + ":" + str(input_device))
print("output device:" + str(sd.default.device[1]) + ":" + str(output_device))
gui = GUI()

gui_v1.py Normal file

@@ -0,0 +1,662 @@
import os, sys, pdb
os.environ["OMP_NUM_THREADS"] = "2"
if sys.platform == "darwin":
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
now_dir = os.getcwd()
sys.path.append(now_dir)
import multiprocessing
class Harvest(multiprocessing.Process):
def __init__(self, inp_q, opt_q):
multiprocessing.Process.__init__(self)
self.inp_q = inp_q
self.opt_q = opt_q
def run(self):
import numpy as np, pyworld
while 1:
idx, x, res_f0, n_cpu, ts = self.inp_q.get()
f0, t = pyworld.harvest(
x.astype(np.double),
fs=16000,
f0_ceil=1100,
f0_floor=50,
frame_period=10,
)
res_f0[idx] = f0
if len(res_f0.keys()) >= n_cpu:
self.opt_q.put(ts)
if __name__ == "__main__":
from multiprocessing import Queue
from queue import Empty
import numpy as np
import multiprocessing
import traceback, re
import json
import PySimpleGUI as sg
import sounddevice as sd
import noisereduce as nr
from multiprocessing import cpu_count
import librosa, torch, time, threading
import torch.nn.functional as F
import torchaudio.transforms as tat
from i18n import I18nAuto
import rvc_for_realtime
i18n = I18nAuto()
device = rvc_for_realtime.config.device
# device = torch.device(
# "cuda"
# if torch.cuda.is_available()
# else ("mps" if torch.backends.mps.is_available() else "cpu")
# )
current_dir = os.getcwd()
inp_q = Queue()
opt_q = Queue()
n_cpu = min(cpu_count(), 8)
for _ in range(n_cpu):
Harvest(inp_q, opt_q).start()
class GUIConfig:
def __init__(self) -> None:
self.pth_path: str = ""
self.index_path: str = ""
self.pitch: int = 12
self.samplerate: int = 40000
self.block_time: float = 1.0 # s
self.buffer_num: int = 1
self.threhold: int = -30
self.crossfade_time: float = 0.08
self.extra_time: float = 0.04
self.I_noise_reduce = False
self.O_noise_reduce = False
self.index_rate = 0.3
self.n_cpu = min(n_cpu, 6)
self.f0method = "harvest"
self.sg_input_device = ""
self.sg_output_device = ""
class GUI:
def __init__(self) -> None:
self.config = GUIConfig()
self.flag_vc = False
self.launcher()
def load(self):
input_devices, output_devices, _, _ = self.get_devices()
try:
with open("values1.json", "r") as j:
data = json.load(j)
data["pm"] = data["f0method"] == "pm"
data["harvest"] = data["f0method"] == "harvest"
data["crepe"] = data["f0method"] == "crepe"
data["rmvpe"] = data["f0method"] == "rmvpe"
except:
with open("values1.json", "w") as j:
data = {
"pth_path": " ",
"index_path": " ",
"sg_input_device": input_devices[sd.default.device[0]],
"sg_output_device": output_devices[sd.default.device[1]],
"threhold": "-45",
"pitch": "0",
"index_rate": "0",
"block_time": "1",
"crossfade_length": "0.04",
"extra_time": "1",
"f0method": "rmvpe",
}
return data
def launcher(self):
data = self.load()
sg.theme("LightBlue3")
input_devices, output_devices, _, _ = self.get_devices()
layout = [
[
sg.Frame(
title=i18n("加载模型"),
layout=[
[
sg.Input(
default_text=data.get("pth_path", ""),
key="pth_path",
),
sg.FileBrowse(
i18n("选择.pth文件"),
initial_folder=os.path.join(os.getcwd(), "weights"),
file_types=((". pth"),),
),
],
[
sg.Input(
default_text=data.get("index_path", ""),
key="index_path",
),
sg.FileBrowse(
i18n("选择.index文件"),
initial_folder=os.path.join(os.getcwd(), "logs"),
file_types=((". index"),),
),
],
],
)
],
[
sg.Frame(
layout=[
[
sg.Text(i18n("输入设备")),
sg.Combo(
input_devices,
key="sg_input_device",
default_value=data.get("sg_input_device", ""),
),
],
[
sg.Text(i18n("输出设备")),
sg.Combo(
output_devices,
key="sg_output_device",
default_value=data.get("sg_output_device", ""),
),
],
[sg.Button(i18n("重载设备列表"), key="reload_devices")],
],
title=i18n("音频设备(请使用同种类驱动)"),
)
],
[
sg.Frame(
layout=[
[
sg.Text(i18n("响应阈值")),
sg.Slider(
range=(-60, 0),
key="threhold",
resolution=1,
orientation="h",
default_value=data.get("threhold", ""),
),
],
[
sg.Text(i18n("音调设置")),
sg.Slider(
range=(-24, 24),
key="pitch",
resolution=1,
orientation="h",
default_value=data.get("pitch", ""),
),
],
[
sg.Text(i18n("Index Rate")),
sg.Slider(
range=(0.0, 1.0),
key="index_rate",
resolution=0.01,
orientation="h",
default_value=data.get("index_rate", ""),
),
],
[
sg.Text(i18n("音高算法")),
sg.Radio(
"pm",
"f0method",
key="pm",
default=data.get("pm", "") == True,
),
sg.Radio(
"harvest",
"f0method",
key="harvest",
default=data.get("harvest", "") == True,
),
sg.Radio(
"crepe",
"f0method",
key="crepe",
default=data.get("crepe", "") == True,
),
sg.Radio(
"rmvpe",
"f0method",
key="rmvpe",
default=data.get("rmvpe", "") == True,
),
],
],
title=i18n("常规设置"),
),
sg.Frame(
layout=[
[
sg.Text(i18n("采样长度")),
sg.Slider(
range=(0.09, 2.4),
key="block_time",
resolution=0.03,
orientation="h",
default_value=data.get("block_time", ""),
),
],
[
sg.Text(i18n("harvest进程数")),
sg.Slider(
range=(1, n_cpu),
key="n_cpu",
resolution=1,
orientation="h",
default_value=data.get(
"n_cpu", min(self.config.n_cpu, n_cpu)
),
),
],
[
sg.Text(i18n("淡入淡出长度")),
sg.Slider(
range=(0.01, 0.15),
key="crossfade_length",
resolution=0.01,
orientation="h",
default_value=data.get("crossfade_length", ""),
),
],
[
sg.Text(i18n("额外推理时长")),
sg.Slider(
range=(0.05, 5.00),
key="extra_time",
resolution=0.01,
orientation="h",
default_value=data.get("extra_time", ""),
),
],
[
sg.Checkbox(i18n("输入降噪"), key="I_noise_reduce"),
sg.Checkbox(i18n("输出降噪"), key="O_noise_reduce"),
],
],
title=i18n("性能设置"),
),
],
[
sg.Button(i18n("开始音频转换"), key="start_vc"),
sg.Button(i18n("停止音频转换"), key="stop_vc"),
sg.Text(i18n("推理时间(ms):")),
sg.Text("0", key="infer_time"),
],
]
self.window = sg.Window("RVC - GUI", layout=layout)
self.event_handler()
def event_handler(self):
while True:
event, values = self.window.read()
if event == sg.WINDOW_CLOSED:
self.flag_vc = False
exit()
if event == "reload_devices":
prev_input = self.window["sg_input_device"].get()
prev_output = self.window["sg_output_device"].get()
input_devices, output_devices, _, _ = self.get_devices(update=True)
if prev_input not in input_devices:
self.config.sg_input_device = input_devices[0]
else:
self.config.sg_input_device = prev_input
self.window["sg_input_device"].Update(values=input_devices)
self.window["sg_input_device"].Update(
value=self.config.sg_input_device
)
if prev_output not in output_devices:
self.config.sg_output_device = output_devices[0]
else:
self.config.sg_output_device = prev_output
self.window["sg_output_device"].Update(values=output_devices)
self.window["sg_output_device"].Update(
value=self.config.sg_output_device
)
if event == "start_vc" and self.flag_vc == False:
if self.set_values(values) == True:
print("using_cuda:" + str(torch.cuda.is_available()))
self.start_vc()
settings = {
"pth_path": values["pth_path"],
"index_path": values["index_path"],
"sg_input_device": values["sg_input_device"],
"sg_output_device": values["sg_output_device"],
"threhold": values["threhold"],
"pitch": values["pitch"],
"index_rate": values["index_rate"],
"block_time": values["block_time"],
"crossfade_length": values["crossfade_length"],
"extra_time": values["extra_time"],
"n_cpu": values["n_cpu"],
"f0method": ["pm", "harvest", "crepe", "rmvpe"][
[
values["pm"],
values["harvest"],
values["crepe"],
values["rmvpe"],
].index(True)
],
}
with open("values1.json", "w") as j:
json.dump(settings, j)
if event == "stop_vc" and self.flag_vc == True:
self.flag_vc = False
def set_values(self, values):
if len(values["pth_path"].strip()) == 0:
sg.popup(i18n("请选择pth文件"))
return False
if len(values["index_path"].strip()) == 0:
sg.popup(i18n("请选择index文件"))
return False
pattern = re.compile("[^\x00-\x7F]+")
if pattern.findall(values["pth_path"]):
sg.popup(i18n("pth文件路径不可包含中文"))
return False
if pattern.findall(values["index_path"]):
sg.popup(i18n("index文件路径不可包含中文"))
return False
self.set_devices(values["sg_input_device"], values["sg_output_device"])
self.config.pth_path = values["pth_path"]
self.config.index_path = values["index_path"]
self.config.threhold = values["threhold"]
self.config.pitch = values["pitch"]
self.config.block_time = values["block_time"]
self.config.crossfade_time = values["crossfade_length"]
self.config.extra_time = values["extra_time"]
self.config.I_noise_reduce = values["I_noise_reduce"]
self.config.O_noise_reduce = values["O_noise_reduce"]
self.config.index_rate = values["index_rate"]
self.config.n_cpu = values["n_cpu"]
self.config.f0method = ["pm", "harvest", "crepe", "rmvpe"][
[
values["pm"],
values["harvest"],
values["crepe"],
values["rmvpe"],
].index(True)
]
return True
def start_vc(self):
torch.cuda.empty_cache()
self.flag_vc = True
self.rvc = rvc_for_realtime.RVC(
self.config.pitch,
self.config.pth_path,
self.config.index_path,
self.config.index_rate,
self.config.n_cpu,
inp_q,
opt_q,
device,
)
self.config.samplerate = self.rvc.tgt_sr
self.config.crossfade_time = min(
self.config.crossfade_time, self.config.block_time
)
self.block_frame = int(self.config.block_time * self.config.samplerate)
self.crossfade_frame = int(
self.config.crossfade_time * self.config.samplerate
)
self.sola_search_frame = int(0.01 * self.config.samplerate)
self.extra_frame = int(self.config.extra_time * self.config.samplerate)
self.zc = self.rvc.tgt_sr // 100
self.input_wav: np.ndarray = np.zeros(
int(
np.ceil(
(
self.extra_frame
+ self.crossfade_frame
+ self.sola_search_frame
+ self.block_frame
)
/ self.zc
)
* self.zc
),
dtype="float32",
)
self.output_wav_cache: torch.Tensor = torch.zeros(
int(
np.ceil(
(
self.extra_frame
+ self.crossfade_frame
+ self.sola_search_frame
+ self.block_frame
)
/ self.zc
)
* self.zc
),
device=device,
dtype=torch.float32,
)
self.pitch: np.ndarray = np.zeros(
self.input_wav.shape[0] // self.zc,
dtype="int32",
)
self.pitchf: np.ndarray = np.zeros(
self.input_wav.shape[0] // self.zc,
dtype="float64",
)
self.output_wav: torch.Tensor = torch.zeros(
self.block_frame, device=device, dtype=torch.float32
)
self.sola_buffer: torch.Tensor = torch.zeros(
self.crossfade_frame, device=device, dtype=torch.float32
)
self.fade_in_window: torch.Tensor = torch.linspace(
0.0, 1.0, steps=self.crossfade_frame, device=device, dtype=torch.float32
)
self.fade_out_window: torch.Tensor = 1 - self.fade_in_window
self.resampler = tat.Resample(
orig_freq=self.config.samplerate, new_freq=16000, dtype=torch.float32
).to(device)
thread_vc = threading.Thread(target=self.soundinput)
thread_vc.start()
def soundinput(self):
"""
Receive audio input
"""
channels = 1 if sys.platform == "darwin" else 2
with sd.Stream(
channels=channels,
callback=self.audio_callback,
blocksize=self.block_frame,
samplerate=self.config.samplerate,
dtype="float32",
):
while self.flag_vc:
time.sleep(self.config.block_time)
print("Audio block passed.")
print("ENDing VC")
def audio_callback(
self, indata: np.ndarray, outdata: np.ndarray, frames, times, status
):
"""
Audio processing
"""
start_time = time.perf_counter()
indata = librosa.to_mono(indata.T)
if self.config.I_noise_reduce:
indata[:] = nr.reduce_noise(y=indata, sr=self.config.samplerate)
"""noise gate"""
frame_length = 2048
hop_length = 1024
rms = librosa.feature.rms(
y=indata, frame_length=frame_length, hop_length=hop_length
)
if self.config.threhold > -60:
db_threhold = (
librosa.amplitude_to_db(rms, ref=1.0)[0] < self.config.threhold
)
for i in range(db_threhold.shape[0]):
if db_threhold[i]:
indata[i * hop_length : (i + 1) * hop_length] = 0
self.input_wav[:] = np.append(self.input_wav[self.block_frame :], indata)
# infer
inp = torch.from_numpy(self.input_wav).to(device)
res1 = self.resampler(inp)
###55%
rate1 = self.block_frame / (
self.extra_frame
+ self.crossfade_frame
+ self.sola_search_frame
+ self.block_frame
)
rate2 = (
self.crossfade_frame + self.sola_search_frame + self.block_frame
) / (
self.extra_frame
+ self.crossfade_frame
+ self.sola_search_frame
+ self.block_frame
)
res2 = self.rvc.infer(
res1,
res1[-self.block_frame :].cpu().numpy(),
rate1,
rate2,
self.pitch,
self.pitchf,
self.config.f0method,
)
self.output_wav_cache[-res2.shape[0] :] = res2
infer_wav = self.output_wav_cache[
-self.crossfade_frame - self.sola_search_frame - self.block_frame :
]
# SOLA algorithm from https://github.com/yxlllc/DDSP-SVC
cor_nom = F.conv1d(
infer_wav[None, None, : self.crossfade_frame + self.sola_search_frame],
self.sola_buffer[None, None, :],
)
cor_den = torch.sqrt(
F.conv1d(
infer_wav[
None, None, : self.crossfade_frame + self.sola_search_frame
]
** 2,
torch.ones(1, 1, self.crossfade_frame, device=device),
)
+ 1e-8
)
if sys.platform == "darwin":
_, sola_offset = torch.max(cor_nom[0, 0] / cor_den[0, 0], dim=0)  # dim=0 returns (values, indices)
sola_offset = sola_offset.item()
else:
sola_offset = torch.argmax(cor_nom[0, 0] / cor_den[0, 0])
print("sola offset: " + str(int(sola_offset)))
self.output_wav[:] = infer_wav[sola_offset : sola_offset + self.block_frame]
self.output_wav[: self.crossfade_frame] *= self.fade_in_window
self.output_wav[: self.crossfade_frame] += self.sola_buffer[:]
# crossfade
if sola_offset < self.sola_search_frame:
self.sola_buffer[:] = (
infer_wav[
-self.sola_search_frame
- self.crossfade_frame
+ sola_offset : -self.sola_search_frame
+ sola_offset
]
* self.fade_out_window
)
else:
self.sola_buffer[:] = (
infer_wav[-self.crossfade_frame :] * self.fade_out_window
)
if self.config.O_noise_reduce:
if sys.platform == "darwin":
noise_reduced_signal = nr.reduce_noise(
y=self.output_wav[:].cpu().numpy(), sr=self.config.samplerate
)
outdata[:] = noise_reduced_signal[:, np.newaxis]
else:
outdata[:] = np.tile(
nr.reduce_noise(
y=self.output_wav[:].cpu().numpy(),
sr=self.config.samplerate,
),
(2, 1),
).T
else:
if sys.platform == "darwin":
outdata[:] = self.output_wav[:].cpu().numpy()[:, np.newaxis]
else:
outdata[:] = self.output_wav[:].repeat(2, 1).t().cpu().numpy()
total_time = time.perf_counter() - start_time
self.window["infer_time"].update(int(total_time * 1000))
print("infer time:" + str(total_time))
def get_devices(self, update: bool = True):
"""获取设备列表"""
if update:
sd._terminate()
sd._initialize()
devices = sd.query_devices()
hostapis = sd.query_hostapis()
for hostapi in hostapis:
for device_idx in hostapi["devices"]:
devices[device_idx]["hostapi_name"] = hostapi["name"]
input_devices = [
f"{d['name']} ({d['hostapi_name']})"
for d in devices
if d["max_input_channels"] > 0
]
output_devices = [
f"{d['name']} ({d['hostapi_name']})"
for d in devices
if d["max_output_channels"] > 0
]
input_devices_indices = [
d["index"] if "index" in d else d["name"]
for d in devices
if d["max_input_channels"] > 0
]
output_devices_indices = [
d["index"] if "index" in d else d["name"]
for d in devices
if d["max_output_channels"] > 0
]
return (
input_devices,
output_devices,
input_devices_indices,
output_devices_indices,
)
def set_devices(self, input_device, output_device):
"""设置输出设备"""
(
input_devices,
output_devices,
input_device_indices,
output_device_indices,
) = self.get_devices()
sd.default.device[0] = input_device_indices[
input_devices.index(input_device)
]
sd.default.device[1] = output_device_indices[
output_devices.index(output_device)
]
print("input device:" + str(sd.default.device[0]) + ":" + str(input_device))
print(
"output device:" + str(sd.default.device[1]) + ":" + str(output_device)
)
gui = GUI()

i18n.py

@@ -4,22 +4,25 @@ import os
def load_language_list(language):
with open(f"./i18n/{language}.json", "r", encoding="utf-8") as f:
with open(f"./lib/i18n/{language}.json", "r", encoding="utf-8") as f:
language_list = json.load(f)
return language_list
class I18nAuto:
def __init__(self, language=None):
if language is None:
language = "auto"
if language == "auto":
language = locale.getdefaultlocale()[0]
if not os.path.exists(f"./i18n/{language}.json"):
if language in ["Auto", None]:
language = locale.getdefaultlocale()[
0
] # getlocale can't identify the system's language ((None, None))
if not os.path.exists(f"./lib/i18n/{language}.json"):
language = "en_US"
self.language = language
print("Use Language:", language)
# print("Use Language:", language)
self.language_map = load_language_list(language)
def __call__(self, key):
return self.language_map[key]
return self.language_map.get(key, key)
def print(self):
print("Use Language:", self.language)


@@ -1,99 +0,0 @@
{
"本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>使用需遵守的协议-LICENSE.txt</b>.": "本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>使用需遵守的协议-LICENSE.txt</b>.",
"模型推理": "Model inference",
"推理音色": "Inferencing voice",
"刷新音色列表": "Refresh voice list",
"卸载音色省显存": "Unload voice to save GPU memory",
"请选择说话人id": "select a speaker ID",
"男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ": "Recommended +12 key for male-to-female voice conversion, -12 key for female-to-male voice conversion. If the pitch range is too wide and causes distortion, adjust it to a suitable range by yourself.",
"变调(整数, 半音数量, 升八度12降八度-12)": "Pitch shifting (integer, number of semitones, raise by an octave +12 or lower by an octave -12)",
"输入待处理音频文件路径(默认是正确格式示例)": "Enter the file path of the audio to be processed (default is the correct format example)",
"选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比": "Select the algorithm for pitch extraction. Use 'pm' to speed up for singing voices, or use 'harvest' for better low-pitched voices, but it is extremely slow.",
"特征检索库文件路径": "Feature search database file path",
"特征文件路径": "Feature file path",
"F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调": "F0 curve file, optional, one pitch per line, instead of default F0 and pitch shifting",
"转换": "Conversion",
"输出信息": "Output information",
"输出音频(右下角三个点,点了可以下载)": "Output audio (click the three dots in the lower right corner to download)",
"批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ": "Batch conversion, input the folder containing audio files to be converted, or upload multiple audio files. The converted audio will be output in the specified folder (default opt).",
"指定输出文件夹": "Specify output folder",
"检索特征占比": "Search feature ratio",
"输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)": "Enter the path to the audio folder to be processed (just copy it from the file manager address bar)",
"也可批量输入音频文件, 二选一, 优先读文件夹": "Multiple audio files can also be inputted, either of the two options, with priority given to the folder",
"伴奏人声分离": "Instrumental and vocal separation",
"人声伴奏分离批量处理, 使用UVR5模型. <br>不带和声用HP2, 带和声且提取的人声不需要和声用HP5<br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)": "Batch processing of instrumental and vocal separation using UVR5 model. <br>Use HP2 for vocal separation without harmonics, and use HP5 for vocal separation with harmonics and the extracted vocals do not need to have harmonics. <br>Example of a qualified folder path: E:\\codes\\py39\\vits_vc_gpu\\test_sample (just copy it from the file manager address bar)",
"输入待处理音频文件夹路径": "Input the path to the audio folder to be processed",
"模型": "Model",
"指定输出人声文件夹": "Specify vocals output folder",
"指定输出乐器文件夹": "Specify instrumentals output folder",
"训练": "Train",
"step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. ": "step1: Fill in the experiment configuration. Experiment data is stored in the 'logs' directory, with each experiment in a separate folder. The experiment name path needs to be entered manually and should contain the experiment configuration, logs, and trained model files.",
"输入实验名": "Input experiment name",
"目标采样率": "Target sample rate",
"模型是否带音高指导(唱歌一定要, 语音可以不要)": "Whether the model has pitch guidance (necessary for singing, but not required for speech)",
"step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. ": "step2a: Automatically traverse the training folder and slice and normalize all audio files that can be decoded into audio. Two 'wav' folders will be generated in the experiment directory. Currently, only single-person training is supported.",
"输入训练文件夹路径": "Input training folder path",
"请指定说话人id": "Please specify speaker ID",
"处理数据": "Process data",
"step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)": "step2b: Use CPU to extract pitch (if the model has pitch guidance) and GPU to extract features (select card number).",
"以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2": "Separate the GPU id numbers with '-' when inputting. For example, '0-1-2' means using GPU 0, GPU 1, and GPU 2.",
"显卡信息": "GPU information",
"提取音高使用的CPU进程数": "Number of CPU threads to use for pitch extraction",
"选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢": "Select pitch extraction algorithm: Use 'pm' for faster processing of singing voice, 'dio' for high-quality speech but slower processing, and 'harvest' for the best quality but slowest processing.",
"特征提取": "Feature extraction",
"step3: 填写训练设置, 开始训练模型和索引": "step3: Fill in the training settings and start training the model and index.",
"保存频率save_every_epoch": "Saving frequency (save_every_epoch)",
"总训练轮数total_epoch": "Total training epochs (total_epoch)",
"是否仅保存最新的ckpt文件以节省硬盘空间": "Whether to save only the latest ckpt file to save disk space",
"是否缓存所有训练集至显存. 10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速": "Whether to cache all training sets in GPU memory. Small datasets (under 10 minutes) can be cached to speed up training, but caching large datasets can cause GPU memory errors and does not increase speed significantly.",
"加载预训练底模G路径": "Load pre-trained base model G path.",
"加载预训练底模D路径": "Load pre-trained base model D path.",
"训练模型": "Train model.",
"训练特征索引": "Train feature index.",
"一键训练": "One-click training.",
"ckpt处理": "Ckpt processing.",
"模型融合, 可用于测试音色融合": "Model fusion, can be used for merging diffrent voices",
"A模型路径": "A model path.",
"B模型路径": "B model path.",
"A模型权重": "A model weight for model A.",
"模型是否带音高指导": "Whether the model has pitch guidance.",
"要置入的模型信息": "Model information to be placed.",
"保存的模型名不带后缀": "Saved model name without extension.",
"融合": "Fusion.",
"修改模型信息(仅支持weights文件夹下提取的小模型文件)": "Modify model information (only supports small model files extracted under the weights folder).",
"模型路径": "Model path",
"要改的模型信息": "Model information to be modified",
"保存的文件名, 默认空为和源文件同名": "Name of the file to be saved, default is the same as the source file name",
"修改": "Modify",
"查看模型信息(仅支持weights文件夹下提取的小模型文件)": "View model information (only applicable to small model files extracted from the 'weights' folder)",
"查看": "View",
"模型提取(输入logs文件夹下大文件模型路径),适用于训一半不想训了模型没有自动提取保存小文件模型,或者想测试中间模型的情况": "Model extraction (input the path of a large model file in the 'logs' folder), applicable when you want to extract a small model file after training halfway and it was not saved automatically, or when you want to test an intermediate model",
"保存名": "Save Name",
"模型是否带音高指导,1是0否": "Whether the model has pitch guidance, 1 for yes, 0 for no",
"提取": "Extract",
"招募音高曲线前端编辑器": "Recruit front-end editors for pitch curves",
"加开发群联系我xxxxx": "Join the development group to contact me at xxxxx",
"点击查看交流、问题反馈群号": "Click to view the communication and problem feedback group number",
"xxxxx": "xxxxx",
"加载模型": "Load Model",
"Hubert模型": "Hubert Model",
"选择.pth文件": "Select .pth file",
"选择.index文件": "Select .index file",
"选择.npy文件": "Select .npy file",
"输入设备": "Input device",
"输出设备": "Output device",
"音频设备(请使用同种类驱动)": "Audio device (please use the same type of driver)",
"响应阈值": "Response threshold",
"音调设置": "Pitch setting",
"Index Rate": "Index Rate",
"常规设置": "General Settings",
"采样长度": "Sampling length",
"淡入淡出长度": "Fade in/out length",
"额外推理时长": "Additional inference time",
"输入降噪": "Input Noise Reduction",
"输出降噪": "Output Noise Reduction",
"性能设置": "Performance settings",
"开始音频转换": "Start Audio Conversion",
"停止音频转换": "Stop Audio Conversion",
"推理时间(ms):": "Infer Time(ms):"
}


@@ -1,99 +0,0 @@
{
"本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>使用需遵守的协议-LICENSE.txt</b>.": "本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>使用需遵守的协议-LICENSE.txt</b>.",
"模型推理": "モデル推論",
"推理音色": "音源推論",
"刷新音色列表": "音源リストを更新",
"卸载音色省显存": "音源を削除してメモリを節約",
"请选择说话人id": "話者IDを選択してください",
"男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ": "男性から女性へは+12キーをお勧めします。女性から男性へは-12キーをお勧めします。音域が広すぎて音質が劣化した場合は、適切な音域に自分で調整することもできます。",
"变调(整数, 半音数量, 升八度12降八度-12)": "ピッチ変更(整数、半音数、上下オクターブ12-12)",
"输入待处理音频文件路径(默认是正确格式示例)": "処理対象音声ファイルのパスを入力してください(デフォルトは正しいフォーマットの例です)",
"选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比": "ピッチ抽出アルゴリズムを選択してください。歌声の場合は、pmを使用して速度を上げることができます。低音が重要な場合は、harvestを使用できますが、非常に遅くなります。",
"特征检索库文件路径": "特徴量検索データベースのファイルパス",
"特征文件路径": "特徴量ファイルのパス",
"F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调": "F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调",
"转换": "変換",
"输出信息": "出力情報",
"输出音频(右下角三个点,点了可以下载)": "出力音声(右下の三点をクリックしてダウンロードできます)",
"批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ": "批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ",
"指定输出文件夹": "出力フォルダを指定してください",
"检索特征占比": "検索特徴率",
"输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)": "処理対象音声フォルダーのパスを入力してください(ファイルマネージャのアドレスバーからコピーしてください)",
"也可批量输入音频文件, 二选一, 优先读文件夹": "複数の音声ファイルを一括で入力することもできますが、フォルダーを優先して読み込みます",
"伴奏人声分离": "伴奏とボーカルの分離",
"人声伴奏分离批量处理, 使用UVR5模型. <br>不带和声用HP2, 带和声且提取的人声不需要和声用HP5<br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)": "人声伴奏分离批量处理, 使用UVR5模型. <br>不带和声用HP2, 带和声且提取的人声不需要和声用HP5<br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)",
"输入待处理音频文件夹路径": "処理するオーディオファイルのフォルダパスを入力してください",
"模型": "モデル",
"指定输出人声文件夹": "人の声を出力するフォルダを指定してください",
"指定输出乐器文件夹": "楽器の出力フォルダを指定してください",
"训练": "トレーニング",
"step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. ": "ステップ1:実験設定を入力します。実験データはlogsに保存され、各実験にはフォルダーがあります。実験名のパスを手動で入力する必要があり、実験設定、ログ、トレーニングされたモデルファイルが含まれます。",
"输入实验名": "実験名を入力してください",
"目标采样率": "目標サンプリングレート",
"模型是否带音高指导(唱歌一定要, 语音可以不要)": "モデルに音高ガイドがあるかどうか(歌唱には必要ですが、音声には必要ありません)",
"step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. ": "ステップ2a: 訓練フォルダー内のすべての音声ファイルを自動的に探索し、スライスと正規化を行い、2つのwavフォルダーを実験ディレクトリに生成します。現在は一人でのトレーニングのみをサポートしています。",
"输入训练文件夹路径": "トレーニング用フォルダのパスを入力してください",
"请指定说话人id": "話者IDを指定してください",
"处理数据": "データ処理",
"step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)": "ステップ2b: CPUを使用して音高を抽出する(モデルに音高がある場合)、GPUを使用して特徴を抽出する(カード番号を選択する)",
"以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2": "ハイフンで区切って使用するカード番号を入力します。例えば0-1-2はカード0、カード1、カード2を使用します",
"显卡信息": "カード情報",
"提取音高使用的CPU进程数": "抽出に使用するCPUプロセス数",
"选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢": "音高抽出アルゴリズムの選択:歌声を入力する場合は、pmを使用して速度を上げることができます。CPUが低い場合はdioを使用して速度を上げることができます。harvestは品質が高く、精度が高いですが、遅いです。",
"特征提取": "特徴抽出",
"step3: 填写训练设置, 开始训练模型和索引": "ステップ3: トレーニング設定を入力して、モデルとインデックスのトレーニングを開始します",
"保存频率save_every_epoch": "エポックごとの保存頻度",
"总训练轮数total_epoch": "総トレーニング回数",
"是否仅保存最新的ckpt文件以节省硬盘空间": "ハードディスク容量を節約するため、最新のckptファイルのみを保存するかどうか",
"是否缓存所有训练集至显存. 10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速": "すべてのトレーニングデータをメモリにキャッシュするかどうか。10分以下の小さなデータはキャッシュしてトレーニングを高速化できますが、大きなデータをキャッシュするとメモリが破裂し、あまり速度が上がりません。",
"加载预训练底模G路径": "事前学習済みのGモデルのパスをロードしてください",
"加载预训练底模D路径": "事前学習済みのDモデルのパスをロードしてください",
"训练模型": "モデルのトレーニング",
"训练特征索引": "特徴インデックスのトレーニング",
"一键训练": "ワンクリックトレーニング",
"ckpt处理": "ckptファイルの処理",
"模型融合, 可用于测试音色融合": "モデルのマージ、音源のマージテストに使用できます",
"A模型路径": "Aモデルのパス",
"B模型路径": "Bモデルのパス",
"A模型权重": "Aモデルの重み",
"模型是否带音高指导": "モデルに音高ガイドを付けるかどうか",
"要置入的模型信息": "挿入するモデル情報",
"保存的模型名不带后缀": "拡張子のない保存するモデル名",
"融合": "フュージョン",
"修改模型信息(仅支持weights文件夹下提取的小模型文件)": "修改模型信息(仅支持weights文件夹下提取的小模型文件)",
"模型路径": "モデルパス",
"要改的模型信息": "変更するモデル情報",
"保存的文件名, 默认空为和源文件同名": "保存するファイル名、デフォルトでは空欄で元のファイル名と同じ名前になります",
"修改": "変更",
"查看模型信息(仅支持weights文件夹下提取的小模型文件)": "モデル情報を表示する(小さいモデルファイルはweightsフォルダーからのみサポートされています)",
"查看": "表示",
"模型提取(输入logs文件夹下大文件模型路径),适用于训一半不想训了模型没有自动提取保存小文件模型,或者想测试中间模型的情况": "モデル抽出(ログフォルダー内の大きなファイルのモデルパスを入力)、モデルを半分までトレーニングし、自動的に小さいファイルモデルを保存しなかったり、中間モデルをテストしたい場合に適用されます。",
"保存名": "保存するファイル名",
"模型是否带音高指导,1是0否": "モデルに音高ガイドを付けるかどうか、1は付ける、0は付けない",
"提取": "抽出",
"招募音高曲线前端编辑器": "音高曲線フロントエンドエディターを募集",
"加开发群联系我xxxxx": "開発グループに参加して私に連絡してくださいxxxxx",
"点击查看交流、问题反馈群号": "クリックして交流、問題フィードバックグループ番号を表示",
"xxxxx": "xxxxx",
"加载模型": "モデルをロードする",
"Hubert模型": "Hubert模型",
"选择.pth文件": ".pthファイルを選択する",
"选择.index文件": ".indexファイルを選択する",
"选择.npy文件": ".npyファイルを選択する",
"输入设备": "入力デバイス",
"输出设备": "出力デバイス",
"音频设备(请使用同种类驱动)": "オーディオデバイス(同じ種類のドライバーを使用してください)",
"响应阈值": "反応閾値",
"音调设置": "音程設定",
"Index Rate": "Index Rate",
"常规设置": "一般設定",
"采样长度": "サンプル長",
"淡入淡出长度": "フェードイン/フェードアウト長",
"额外推理时长": "追加推論時間",
"输入降噪": "入力ノイズの低減",
"输出降噪": "出力ノイズの低減",
"性能设置": "パフォーマンス設定",
"开始音频转换": "音声変換を開始する",
"停止音频转换": "音声変換を停止する",
"推理时间(ms):": "推論時間(ms):"
}

File diff suppressed because it is too large.

infer_batch_rvc.py (new file, 215 lines)

@@ -0,0 +1,215 @@
"""
v1
runtime\python.exe myinfer-v2-0528.py 0 "E:\codes\py39\RVC-beta\todo-songs" "E:\codes\py39\logs\mi-test\added_IVF677_Flat_nprobe_7.index" harvest "E:\codes\py39\RVC-beta\output" "E:\codes\py39\test-20230416b\weights\mi-test.pth" 0.66 cuda:0 True 3 0 1 0.33
v2
runtime\python.exe myinfer-v2-0528.py 0 "E:\codes\py39\RVC-beta\todo-songs" "E:\codes\py39\test-20230416b\logs\mi-test-v2\aadded_IVF677_Flat_nprobe_1_v2.index" harvest "E:\codes\py39\RVC-beta\output_v2" "E:\codes\py39\test-20230416b\weights\mi-test-v2.pth" 0.66 cuda:0 True 3 0 1 0.33
"""
import os, sys, pdb, torch
now_dir = os.getcwd()
sys.path.append(now_dir)
import tqdm as tq
from multiprocessing import cpu_count
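# Device/precision probe: forces full precision on GPUs with weak fp16 support
# (16-series/10-series/P40) and derives the chunking parameters
# (x_pad/x_query/x_center/x_max) from available GPU memory.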
class Config:
def __init__(self, device, is_half):
self.device = device
self.is_half = is_half
self.n_cpu = 0
self.gpu_name = None
self.gpu_mem = None
self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config()
def device_config(self) -> tuple:
if torch.cuda.is_available():
i_device = int(self.device.split(":")[-1])
self.gpu_name = torch.cuda.get_device_name(i_device)
if (
("16" in self.gpu_name and "V100" not in self.gpu_name.upper())
or "P40" in self.gpu_name.upper()
or "1060" in self.gpu_name
or "1070" in self.gpu_name
or "1080" in self.gpu_name
):
print("16系/10系显卡和P40强制单精度")
self.is_half = False
for config_file in ["32k.json", "40k.json", "48k.json"]:
with open(f"configs/{config_file}", "r") as f:
strr = f.read().replace("true", "false")
with open(f"configs/{config_file}", "w") as f:
f.write(strr)
with open("trainset_preprocess_pipeline_print.py", "r") as f:
strr = f.read().replace("3.7", "3.0")
with open("trainset_preprocess_pipeline_print.py", "w") as f:
f.write(strr)
else:
self.gpu_name = None
self.gpu_mem = int(
torch.cuda.get_device_properties(i_device).total_memory
/ 1024
/ 1024
/ 1024
+ 0.4
)
if self.gpu_mem <= 4:
with open("trainset_preprocess_pipeline_print.py", "r") as f:
strr = f.read().replace("3.7", "3.0")
with open("trainset_preprocess_pipeline_print.py", "w") as f:
f.write(strr)
elif torch.backends.mps.is_available():
print("没有发现支持的N卡, 使用MPS进行推理")
self.device = "mps"
else:
print("没有发现支持的N卡, 使用CPU进行推理")
self.device = "cpu"
self.is_half = True
if self.n_cpu == 0:
self.n_cpu = cpu_count()
if self.is_half:
# settings for ~6 GB of GPU memory (half precision)
x_pad = 3
x_query = 10
x_center = 60
x_max = 65
else:
# settings for ~5 GB of GPU memory (full precision)
x_pad = 1
x_query = 6
x_center = 38
x_max = 41
if self.gpu_mem is not None and self.gpu_mem <= 4:
x_pad = 1
x_query = 5
x_center = 30
x_max = 32
return x_pad, x_query, x_center, x_max
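# Positional command-line arguments; the order matches the usage examples in the module docstring.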
f0up_key = sys.argv[1]
input_path = sys.argv[2]
index_path = sys.argv[3]
f0method = sys.argv[4] # harvest or pm
opt_path = sys.argv[5]
model_path = sys.argv[6]
index_rate = float(sys.argv[7])
device = sys.argv[8]
is_half = sys.argv[9].lower() != "false"
filter_radius = int(sys.argv[10])
resample_sr = int(sys.argv[11])
rms_mix_rate = float(sys.argv[12])
protect = float(sys.argv[13])
print(sys.argv)
config = Config(device, is_half)
now_dir = os.getcwd()
sys.path.append(now_dir)
from vc_infer_pipeline import VC
from lib.infer_pack.models import (
SynthesizerTrnMs256NSFsid,
SynthesizerTrnMs256NSFsid_nono,
SynthesizerTrnMs768NSFsid,
SynthesizerTrnMs768NSFsid_nono,
)
from lib.audio import load_audio
from fairseq import checkpoint_utils
from scipy.io import wavfile
hubert_model = None
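# Load the HuBERT content encoder (hubert_base.pt) once and keep it as a module-level global.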
def load_hubert():
global hubert_model
models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
["hubert_base.pt"],
suffix="",
)
hubert_model = models[0]
hubert_model = hubert_model.to(device)
if is_half:
hubert_model = hubert_model.half()
else:
hubert_model = hubert_model.float()
hubert_model.eval()
def vc_single(sid, input_audio, f0_up_key, f0_file, f0_method, file_index, index_rate):
global tgt_sr, net_g, vc, hubert_model, version
if input_audio is None:
return "You need to upload an audio", None
f0_up_key = int(f0_up_key)
audio = load_audio(input_audio, 16000)
times = [0, 0, 0]
if hubert_model is None:
load_hubert()
if_f0 = cpt.get("f0", 1)
# audio_opt=vc.pipeline(hubert_model,net_g,sid,audio,times,f0_up_key,f0_method,file_index,file_big_npy,index_rate,if_f0,f0_file=f0_file)
audio_opt = vc.pipeline(
hubert_model,
net_g,
sid,
audio,
input_audio,
times,
f0_up_key,
f0_method,
file_index,
index_rate,
if_f0,
filter_radius,
tgt_sr,
resample_sr,
rms_mix_rate,
version,
protect,
f0_file=f0_file,
)
print(times)
return audio_opt
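# Load an RVC checkpoint and build the matching synthesizer (v1/v2, with or without pitch guidance).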
def get_vc(model_path):
global n_spk, tgt_sr, net_g, vc, cpt, device, is_half, version
print("loading pth %s" % model_path)
cpt = torch.load(model_path, map_location="cpu")
tgt_sr = cpt["config"][-1]
cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
if_f0 = cpt.get("f0", 1)
version = cpt.get("version", "v1")
if version == "v1":
if if_f0 == 1:
net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half)
else:
net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
elif version == "v2":
if if_f0 == 1: #
net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=is_half)
else:
net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
del net_g.enc_q
print(net_g.load_state_dict(cpt["weight"], strict=False))  # without this line the weights do not load cleanly, oddly enough
net_g.eval().to(device)
if is_half:
net_g = net_g.half()
else:
net_g = net_g.float()
vc = VC(tgt_sr, config)
n_spk = cpt["config"][-3]
# return {"visible": True,"maximum": n_spk, "__type__": "update"}
get_vc(model_path)
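# Batch-convert every .wav in the input folder with the settings parsed above.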
audios = os.listdir(input_path)
for file in tq.tqdm(audios):
if file.endswith(".wav"):
file_path = input_path + "/" + file
wav_opt = vc_single(
0, file_path, f0up_key, None, f0method, index_path, index_rate
)
out_path = opt_path + "/" + file
wavfile.write(out_path, tgt_sr, wav_opt)

infer_cli.py (new file, 272 lines)

@@ -0,0 +1,272 @@
from scipy.io import wavfile
from fairseq import checkpoint_utils
from lib.audio import load_audio
from lib.infer_pack.models import (
SynthesizerTrnMs256NSFsid,
SynthesizerTrnMs256NSFsid_nono,
SynthesizerTrnMs768NSFsid,
SynthesizerTrnMs768NSFsid_nono,
)
from vc_infer_pipeline import VC
from multiprocessing import cpu_count
import numpy as np
import torch
import sys
import glob
import argparse
import os
import pdb
now_dir = os.getcwd()
sys.path.append(now_dir)
####
# USAGE
#
# In your Terminal or CMD or whatever
# python infer_cli.py [TRANSPOSE_VALUE] "[INPUT_PATH]" "[OUTPUT_PATH]" "[MODEL_PATH]" "[INDEX_FILE_PATH]" "[INFERENCE_DEVICE]" "[METHOD]"
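# Defaults used when this module is imported rather than invoked from the command line.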
using_cli = False
device = "cuda:0"
is_half = False
if len(sys.argv) >= 8:  # the script name plus the seven required positional arguments
f0_up_key = int(sys.argv[1]) # transpose value
input_path = sys.argv[2]
output_path = sys.argv[3]
model_path = sys.argv[4]
file_index = sys.argv[5] # .index file
device = sys.argv[6]
f0_method = sys.argv[7] # pm or harvest or crepe
using_cli = True
# file_index2=sys.argv[8]
# index_rate=float(sys.argv[10]) #search feature ratio
# filter_radius=float(sys.argv[11]) #median filter
# resample_sr=float(sys.argv[12]) #resample audio in post processing
# rms_mix_rate=float(sys.argv[13]) #search feature
print(sys.argv)
class Config:
def __init__(self, device, is_half):
self.device = device
self.is_half = is_half
self.n_cpu = 0
self.gpu_name = None
self.gpu_mem = None
self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config()
def device_config(self) -> tuple:
if torch.cuda.is_available() and device != "cpu":
i_device = int(self.device.split(":")[-1])
self.gpu_name = torch.cuda.get_device_name(i_device)
if (
("16" in self.gpu_name and "V100" not in self.gpu_name.upper())
or "P40" in self.gpu_name.upper()
or "1060" in self.gpu_name
or "1070" in self.gpu_name
or "1080" in self.gpu_name
):
print("16系/10系显卡和P40强制单精度")
self.is_half = False
for config_file in ["32k.json", "40k.json", "48k.json"]:
with open(f"configs/{config_file}", "r") as f:
strr = f.read().replace("true", "false")
with open(f"configs/{config_file}", "w") as f:
f.write(strr)
with open("trainset_preprocess_pipeline_print.py", "r") as f:
strr = f.read().replace("3.7", "3.0")
with open("trainset_preprocess_pipeline_print.py", "w") as f:
f.write(strr)
else:
self.gpu_name = None
self.gpu_mem = int(
torch.cuda.get_device_properties(i_device).total_memory
/ 1024
/ 1024
/ 1024
+ 0.4
)
if self.gpu_mem <= 4:
with open("trainset_preprocess_pipeline_print.py", "r") as f:
strr = f.read().replace("3.7", "3.0")
with open("trainset_preprocess_pipeline_print.py", "w") as f:
f.write(strr)
elif torch.backends.mps.is_available():
print("没有发现支持的N卡, 使用MPS进行推理")
self.device = "mps"
else:
print("没有发现支持的N卡, 使用CPU进行推理")
self.device = "cpu"
self.is_half = False
if self.n_cpu == 0:
self.n_cpu = cpu_count()
if self.is_half:
# settings for ~6 GB of GPU memory (half precision)
x_pad = 3
x_query = 10
x_center = 60
x_max = 65
else:
# settings for ~5 GB of GPU memory (full precision)
x_pad = 1
x_query = 6
x_center = 38
x_max = 41
if self.gpu_mem is not None and self.gpu_mem <= 4:
x_pad = 1
x_query = 5
x_center = 30
x_max = 32
return x_pad, x_query, x_center, x_max
config = Config(device, is_half)
now_dir = os.getcwd()
sys.path.append(now_dir)
hubert_model = None
def load_hubert():
global hubert_model
models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
["hubert_base.pt"],
suffix="",
)
hubert_model = models[0]
hubert_model = hubert_model.to(config.device)
if config.is_half:
hubert_model = hubert_model.half()
else:
hubert_model = hubert_model.float()
hubert_model.eval()
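# Convert one audio file end-to-end: load the model checkpoint, extract HuBERT features,
# run the RVC pipeline, and write the result as a wav.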
def vc_single(
sid=0,
input_audio_path=None,
f0_up_key=0,
f0_file=None,
f0_method="pm",
file_index="", # .index file
file_index2="",
# file_big_npy,
index_rate=1.0,
filter_radius=3,
resample_sr=0,
rms_mix_rate=1.0,
model_path="",
output_path="",
protect=0.33,
):
global tgt_sr, net_g, vc, hubert_model, version
get_vc(model_path)
if input_audio_path is None:
return "You need to upload an audio file", None
f0_up_key = int(f0_up_key)
audio = load_audio(input_audio_path, 16000)
audio_max = np.abs(audio).max() / 0.95
if audio_max > 1:
audio /= audio_max
times = [0, 0, 0]
if hubert_model is None:
load_hubert()
if_f0 = cpt.get("f0", 1)
file_index = (
(
file_index.strip(" ")
.strip('"')
.strip("\n")
.strip('"')
.strip(" ")
.replace("trained", "added")
)
if file_index != ""
else file_index2
)
audio_opt = vc.pipeline(
hubert_model,
net_g,
sid,
audio,
input_audio_path,
times,
f0_up_key,
f0_method,
file_index,
# file_big_npy,
index_rate,
if_f0,
filter_radius,
tgt_sr,
resample_sr,
rms_mix_rate,
version,
f0_file=f0_file,
protect=protect,
)
wavfile.write(output_path, tgt_sr, audio_opt)
return "processed"
def get_vc(model_path):
global n_spk, tgt_sr, net_g, vc, cpt, device, is_half, version
print("loading pth %s" % model_path)
cpt = torch.load(model_path, map_location="cpu")
tgt_sr = cpt["config"][-1]
cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
if_f0 = cpt.get("f0", 1)
version = cpt.get("version", "v1")
if version == "v1":
if if_f0 == 1:
net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half)
else:
net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
elif version == "v2":
if if_f0 == 1:
net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=is_half)
else:
net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
del net_g.enc_q
print(net_g.load_state_dict(cpt["weight"], strict=False))
net_g.eval().to(device)
if is_half:
net_g = net_g.half()
else:
net_g = net_g.float()
vc = VC(tgt_sr, config)
n_spk = cpt["config"][-3]
# return {"visible": True,"maximum": n_spk, "__type__": "update"}
if using_cli:
vc_single(
sid=0,
input_audio_path=input_path,
f0_up_key=f0_up_key,
f0_file=None,
f0_method=f0_method,
file_index=file_index,
file_index2="",
index_rate=1,
filter_radius=3,
resample_sr=0,
rms_mix_rate=0,
model_path=model_path,
output_path=output_path,
)


@@ -1,19 +1,25 @@
import os, sys, torch, warnings, pdb
now_dir = os.getcwd()
sys.path.append(now_dir)
from json import load as ll
warnings.filterwarnings("ignore")
import librosa
import importlib
import numpy as np
import hashlib, math
from tqdm import tqdm
-from uvr5_pack.lib_v5 import spec_utils
-from uvr5_pack.utils import _get_name_params, inference
-from uvr5_pack.lib_v5.model_param_init import ModelParameters
from scipy.io import wavfile
from lib.uvr5_pack.lib_v5 import spec_utils
from lib.uvr5_pack.utils import _get_name_params, inference
from lib.uvr5_pack.lib_v5.model_param_init import ModelParameters
import soundfile as sf
from lib.uvr5_pack.lib_v5.nets_new import CascadedNet
from lib.uvr5_pack.lib_v5 import nets_61968KB as nets
class _audio_pre_:
-def __init__(self, model_path, device, is_half):
def __init__(self, agg, model_path, device, is_half):
self.model_path = model_path
self.device = device
self.data = {
@@ -22,31 +28,10 @@ class _audio_pre_:
"tta": False,
# Constants
"window_size": 512,
"agg": 10,
"agg": agg,
"high_end_process": "mirroring",
}
-nn_arch_sizes = [
-31191, # default
-33966,
-61968,
-123821,
-123812,
-537238, # custom
-]
-self.nn_architecture = list("{}KB".format(s) for s in nn_arch_sizes)
-model_size = math.ceil(os.stat(model_path).st_size / 1024)
-nn_architecture = "{}KB".format(
-min(nn_arch_sizes, key=lambda x: abs(x - model_size))
-)
-nets = importlib.import_module(
-"uvr5_pack.lib_v5.nets"
-+ f"_{nn_architecture}".replace("_{}KB".format(nn_arch_sizes[0]), ""),
-package=None,
-)
-model_hash = hashlib.md5(open(model_path, "rb").read()).hexdigest()
-param_name, model_params_d = _get_name_params(model_path, model_hash)
-mp = ModelParameters(model_params_d)
mp = ModelParameters("lib/uvr5_pack/lib_v5/modelparams/4band_v2.json")
model = nets.CascadedASPPNet(mp.param["bins"] * 2)
cpk = torch.load(model_path, map_location="cpu")
model.load_state_dict(cpk)
@@ -59,7 +44,7 @@ class _audio_pre_:
self.mp = mp
self.model = model
-def _path_audio_(self, music_file, ins_root=None, vocal_root=None):
def _path_audio_(self, music_file, ins_root=None, vocal_root=None, format="flac"):
if ins_root is None and vocal_root is None:
return "No save root."
name = os.path.basename(music_file)
@@ -138,11 +123,29 @@ class _audio_pre_:
else:
wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp)
print("%s instruments done" % name)
-wavfile.write(
-os.path.join(ins_root, "instrument_{}.wav".format(name)),
-self.mp.param["sr"],
-(np.array(wav_instrument) * 32768).astype("int16"),
-) #
if format in ["wav", "flac"]:
sf.write(
os.path.join(
ins_root,
"instrument_{}_{}.{}".format(name, self.data["agg"], format),
),
(np.array(wav_instrument) * 32768).astype("int16"),
self.mp.param["sr"],
) #
else:
path = os.path.join(
ins_root, "instrument_{}_{}.wav".format(name, self.data["agg"])
)
sf.write(
path,
(np.array(wav_instrument) * 32768).astype("int16"),
self.mp.param["sr"],
)
if os.path.exists(path):
os.system(
"ffmpeg -i %s -vn %s -q:a 2 -y"
% (path, path[:-4] + ".%s" % format)
)
if vocal_root is not None:
if self.data["high_end_process"].startswith("mirroring"):
input_high_end_ = spec_utils.mirroring(
@@ -154,18 +157,207 @@ class _audio_pre_:
else:
wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp)
print("%s vocals done" % name)
-wavfile.write(
-os.path.join(vocal_root, "vocal_{}.wav".format(name)),
-self.mp.param["sr"],
-(np.array(wav_vocals) * 32768).astype("int16"),
if format in ["wav", "flac"]:
sf.write(
os.path.join(
vocal_root,
"vocal_{}_{}.{}".format(name, self.data["agg"], format),
),
(np.array(wav_vocals) * 32768).astype("int16"),
self.mp.param["sr"],
)
else:
path = os.path.join(
vocal_root, "vocal_{}_{}.wav".format(name, self.data["agg"])
)
sf.write(
path,
(np.array(wav_vocals) * 32768).astype("int16"),
self.mp.param["sr"],
)
if os.path.exists(path):
os.system(
"ffmpeg -i %s -vn %s -q:a 2 -y"
% (path, path[:-4] + ".%s" % format)
)
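# Variant of _audio_pre_ built on the newer CascadedNet ("nets_new") architecture with
# 4band_v3 parameters; the DeEcho/DeReverb weights in the __main__ block below use this class.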
class _audio_pre_new:
def __init__(self, agg, model_path, device, is_half):
self.model_path = model_path
self.device = device
self.data = {
# Processing Options
"postprocess": False,
"tta": False,
# Constants
"window_size": 512,
"agg": agg,
"high_end_process": "mirroring",
}
mp = ModelParameters("lib/uvr5_pack/lib_v5/modelparams/4band_v3.json")
nout = 64 if "DeReverb" in model_path else 48
model = CascadedNet(mp.param["bins"] * 2, nout)
cpk = torch.load(model_path, map_location="cpu")
model.load_state_dict(cpk)
model.eval()
if is_half:
model = model.half().to(device)
else:
model = model.to(device)
self.mp = mp
self.model = model
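# Split one input file into instrumental and vocal stems and write them under
# ins_root / vocal_root in the requested format.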
def _path_audio_(
self, music_file, vocal_root=None, ins_root=None, format="flac"
):  # for the three VR models, the vocal and instrumental outputs are swapped
if ins_root is None and vocal_root is None:
return "No save root."
name = os.path.basename(music_file)
if ins_root is not None:
os.makedirs(ins_root, exist_ok=True)
if vocal_root is not None:
os.makedirs(vocal_root, exist_ok=True)
X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {}
bands_n = len(self.mp.param["band"])
# print(bands_n)
for d in range(bands_n, 0, -1):
bp = self.mp.param["band"][d]
if d == bands_n: # high-end band
(
X_wave[d],
_,
) = librosa.core.load(  # in theory librosa may mis-read some audio; ffmpeg would be more robust but is too much hassle here
music_file,
bp["sr"],
False,
dtype=np.float32,
res_type=bp["res_type"],
)
if X_wave[d].ndim == 1:
X_wave[d] = np.asfortranarray([X_wave[d], X_wave[d]])
else: # lower bands
X_wave[d] = librosa.core.resample(
X_wave[d + 1],
self.mp.param["band"][d + 1]["sr"],
bp["sr"],
res_type=bp["res_type"],
)
# Stft of wave source
X_spec_s[d] = spec_utils.wave_to_spectrogram_mt(
X_wave[d],
bp["hl"],
bp["n_fft"],
self.mp.param["mid_side"],
self.mp.param["mid_side_b2"],
self.mp.param["reverse"],
)
# pdb.set_trace()
if d == bands_n and self.data["high_end_process"] != "none":
input_high_end_h = (bp["n_fft"] // 2 - bp["crop_stop"]) + (
self.mp.param["pre_filter_stop"] - self.mp.param["pre_filter_start"]
)
input_high_end = X_spec_s[d][
:, bp["n_fft"] // 2 - input_high_end_h : bp["n_fft"] // 2, :
]
X_spec_m = spec_utils.combine_spectrograms(X_spec_s, self.mp)
aggressive_set = float(self.data["agg"] / 100)
aggressiveness = {
"value": aggressive_set,
"split_bin": self.mp.param["band"][1]["crop_stop"],
}
with torch.no_grad():
pred, X_mag, X_phase = inference(
X_spec_m, self.device, self.model, aggressiveness, self.data
)
# Postprocess
if self.data["postprocess"]:
pred_inv = np.clip(X_mag - pred, 0, np.inf)
pred = spec_utils.mask_silence(pred, pred_inv)
y_spec_m = pred * X_phase
v_spec_m = X_spec_m - y_spec_m
if ins_root is not None:
if self.data["high_end_process"].startswith("mirroring"):
input_high_end_ = spec_utils.mirroring(
self.data["high_end_process"], y_spec_m, input_high_end, self.mp
)
wav_instrument = spec_utils.cmb_spectrogram_to_wave(
y_spec_m, self.mp, input_high_end_h, input_high_end_
)
else:
wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp)
print("%s instruments done" % name)
if format in ["wav", "flac"]:
sf.write(
os.path.join(
ins_root,
"instrument_{}_{}.{}".format(name, self.data["agg"], format),
),
(np.array(wav_instrument) * 32768).astype("int16"),
self.mp.param["sr"],
) #
else:
path = os.path.join(
ins_root, "instrument_{}_{}.wav".format(name, self.data["agg"])
)
sf.write(
path,
(np.array(wav_instrument) * 32768).astype("int16"),
self.mp.param["sr"],
)
if os.path.exists(path):
os.system(
"ffmpeg -i %s -vn %s -q:a 2 -y"
% (path, path[:-4] + ".%s" % format)
)
if vocal_root is not None:
if self.data["high_end_process"].startswith("mirroring"):
input_high_end_ = spec_utils.mirroring(
self.data["high_end_process"], v_spec_m, input_high_end, self.mp
)
wav_vocals = spec_utils.cmb_spectrogram_to_wave(
v_spec_m, self.mp, input_high_end_h, input_high_end_
)
else:
wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp)
print("%s vocals done" % name)
if format in ["wav", "flac"]:
sf.write(
os.path.join(
vocal_root,
"vocal_{}_{}.{}".format(name, self.data["agg"], format),
),
(np.array(wav_vocals) * 32768).astype("int16"),
self.mp.param["sr"],
)
else:
path = os.path.join(
vocal_root, "vocal_{}_{}.wav".format(name, self.data["agg"])
)
sf.write(
path,
(np.array(wav_vocals) * 32768).astype("int16"),
self.mp.param["sr"],
)
if os.path.exists(path):
os.system(
"ffmpeg -i %s -vn %s -q:a 2 -y"
% (path, path[:-4] + ".%s" % format)
)
if __name__ == "__main__":
device = "cuda"
is_half = True
model_path = "uvr5_weights/2_HP-UVR.pth"
pre_fun = _audio_pre_(model_path=model_path, device=device, is_half=True)
audio_path = "神女劈观.aac"
# model_path = "uvr5_weights/2_HP-UVR.pth"
# model_path = "uvr5_weights/VR-DeEchoDeReverb.pth"
# model_path = "uvr5_weights/VR-DeEchoNormal.pth"
model_path = "uvr5_weights/DeEchoNormal.pth"
# pre_fun = _audio_pre_(model_path=model_path, device=device, is_half=True,agg=10)
pre_fun = _audio_pre_new(model_path=model_path, device=device, is_half=True, agg=10)
audio_path = "雪雪伴奏对消HP5.wav"
save_path = "opt"
pre_fun._path_audio_(audio_path, save_path, save_path)


@@ -12,10 +12,10 @@ def load_audio(file, sr):
)  # guard against users copying the path with stray spaces, quotes, or newlines
out, _ = (
ffmpeg.input(file, threads=0)
.output("-", format="s16le", acodec="pcm_s16le", ac=1, ar=sr)
.output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr)
.run(cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True)
)
except Exception as e:
raise RuntimeError(f"Failed to load audio: {e}")
-return np.frombuffer(out, np.int16).flatten().astype(np.float32) / 32768.0
return np.frombuffer(out, np.float32).flatten()
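Note: the change above makes ffmpeg decode straight to 32-bit float PCM (f32le) instead of 16-bit integers, so the int16 buffer and the manual /32768.0 normalization are no longer needed. A minimal sketch of the new read path, assuming the ffmpeg-python package and a hypothetical local file test.wav:

    import ffmpeg
    import numpy as np

    out, _ = (
        ffmpeg.input("test.wav", threads=0)
        .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=16000)  # mono float32 at 16 kHz
        .run(cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True)
    )
    audio = np.frombuffer(out, np.float32).flatten()  # samples already lie in [-1.0, 1.0]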

lib/i18n/en_US.json (new file, 132 lines)

@@ -0,0 +1,132 @@
{
"很遗憾您这没有能用的显卡来支持您训练": "Unfortunately, there is no compatible GPU available to support your training.",
"是": "Yes",
"step1:正在处理数据": "Step 1: Processing data",
"step2a:无需提取音高": "Step 2a: Skipping pitch extraction",
"step2b:正在提取特征": "Step 2b: Extracting features",
"step3a:正在训练模型": "Step 3a: Model training started",
"训练结束, 您可查看控制台训练日志或实验文件夹下的train.log": "Training complete. You can check the training logs in the console or the 'train.log' file under the experiment folder.",
"全流程结束!": "All processes have been completed!",
"本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>LICENSE</b>.": "This software is open source under the MIT license. The author does not have any control over the software. Users who use the software and distribute the sounds exported by the software are solely responsible. <br>If you do not agree with this clause, you cannot use or reference any codes and files within the software package. See the root directory <b>Agreement-LICENSE.txt</b> for details.",
"模型推理": "Model Inference",
"推理音色": "Inferencing voice:",
"刷新音色列表和索引路径": "Refresh voice list and index path",
"卸载音色省显存": "Unload voice to save GPU memory:",
"请选择说话人id": "Select Speaker/Singer ID:",
"男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ": "Recommended +12 key for male to female conversion, and -12 key for female to male conversion. If the sound range goes too far and the voice is distorted, you can also adjust it to the appropriate range by yourself.",
"变调(整数, 半音数量, 升八度12降八度-12)": "Transpose (integer, number of semitones, raise by an octave: 12, lower by an octave: -12):",
"输入待处理音频文件路径(默认是正确格式示例)": "Enter the path of the audio file to be processed (default is the correct format example):",
"选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU,rmvpe效果最好且微吃GPU": "Select the pitch extraction algorithm ('pm': faster extraction but lower-quality speech; 'harvest': better bass but extremely slow; 'crepe': better quality but GPU intensive), 'rmvpe': best quality, and little GPU requirement",
">=3则使用对harvest音高识别的结果使用中值滤波数值为滤波半径使用可以削弱哑音": "If >=3: apply median filtering to the harvested pitch results. The value represents the filter radius and can reduce breathiness.",
"特征检索库文件路径,为空则使用下拉的选择结果": "Path to the feature index file. Leave blank to use the selected result from the dropdown:",
"自动检测index路径,下拉式选择(dropdown)": "Auto-detect index path and select from the dropdown:",
"特征文件路径": "Path to feature file:",
"检索特征占比": "Search feature ratio (controls accent strength, too high has artifacting):",
"后处理重采样至最终采样率0为不进行重采样": "Resample the output audio in post-processing to the final sample rate. Set to 0 for no resampling:",
"输入源音量包络替换输出音量包络融合比例越靠近1越使用输出包络": "Adjust the volume envelope scaling. Closer to 0, the more it mimicks the volume of the original vocals. Can help mask noise and make volume sound more natural when set relatively low. Closer to 1 will be more of a consistently loud volume:",
"保护清辅音和呼吸声防止电音撕裂等artifact拉满0.5不开启,调低加大保护力度但可能降低索引效果": "Protect voiceless consonants and breath sounds to prevent artifacts such as tearing in electronic music. Set to 0.5 to disable. Decrease the value to increase protection, but it may reduce indexing accuracy:",
"F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调": "F0 curve file (optional). One pitch per line. Replaces the default F0 and pitch modulation:",
"转换": "Convert",
"输出信息": "Output information",
"输出音频(右下角三个点,点了可以下载)": "Export audio (click on the three dots in the lower right corner to download)",
"批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ": "Batch conversion. Enter the folder containing the audio files to be converted or upload multiple audio files. The converted audio will be output in the specified folder (default: 'opt').",
"指定输出文件夹": "Specify output folder:",
"输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)": "Enter the path of the audio folder to be processed (copy it from the address bar of the file manager):",
"也可批量拖拽音频文件, 二选一, 优先读文件夹,文件夹留空则读取拖拽文件": "You can also drag input audio files here. Choose one of the two options. Priority is given to reading from the folder, if the folder option is blank, it will read the files you drag here.",
"导出文件格式": "Export file format",
"伴奏人声分离&去混响&去回声": "Vocals/Accompaniment Separation & Reverberation Removal",
"人声伴奏分离批量处理, 使用UVR5模型。 <br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)。 <br>模型分为三类: <br>1、保留人声不带和声的音频选这个对主人声保留比HP5更好。内置HP2和HP3两个模型HP3可能轻微漏伴奏但对主人声保留比HP2稍微好一丁点 <br>2、仅保留主人声带和声的音频选这个对主人声可能有削弱。内置HP5一个模型 <br> 3、去混响、去延迟模型by FoxJoy<br>(1)MDX-Net(onnx_dereverb):对于双通道混响是最好的选择,不能去除单通道混响;<br>&emsp;(234)DeEcho:去除延迟效果。Aggressive比Normal去除得更彻底DeReverb额外去除混响可去除单声道混响但是对高频重的板式混响去不干净。<br>去混响/去延迟,附:<br>1、DeEcho-DeReverb模型的耗时是另外2个DeEcho模型的接近2倍<br>2、MDX-Net-Dereverb模型挺慢的<br>3、个人推荐的最干净的配置是先MDX-Net再DeEcho-Aggressive。": "Batch processing for vocal accompaniment separation using the UVR5 model.<br>Example of a valid folder path format: D:\\path\\to\\input\\folder (copy it from the file manager address bar).<br>The model is divided into three categories:<br>1. Preserve vocals: Choose this option for audio without harmonies. It preserves vocals better than HP5. It includes two built-in models: HP2 and HP3. HP3 may slightly leak accompaniment but preserves vocals slightly better than HP2.<br>2. Preserve main vocals only: Choose this option for audio with harmonies. It may weaken the main vocals. It includes one built-in model: HP5.<br>3. De-reverb and de-delay models (by FoxJoy):<br>(1) MDX-Net: The best choice for stereo reverb removal but cannot remove mono reverb;<br>&emsp;(234) DeEcho: Removes delay effects. Aggressive mode removes more thoroughly than Normal mode. DeReverb additionally removes reverb and can remove mono reverb, but not very effectively for heavily reverberated high-frequency content.<br>De-reverb/de-delay notes:<br>1. The processing time for the DeEcho-DeReverb model is approximately twice as long as the other two DeEcho models.<br>2. The MDX-Net-Dereverb model is quite slow.<br>3. The recommended cleanest configuration is to apply MDX-Net first and then DeEcho-Aggressive.",
"输入待处理音频文件夹路径": "Enter the path of the audio folder to be processed:",
"模型": "Model",
"指定输出主人声文件夹": "Specify the output folder for vocals:",
"指定输出非主人声文件夹": "Specify the output folder for accompaniment:",
"训练": "Train",
"step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. ": "Step 1: Fill in the experimental configuration. Experimental data is stored in the 'logs' folder, with each experiment having a separate folder. Manually enter the experiment name path, which contains the experimental configuration, logs, and trained model files.",
"输入实验名": "Enter the experiment name:",
"目标采样率": "Target sample rate:",
"模型是否带音高指导(唱歌一定要, 语音可以不要)": "Whether the model has pitch guidance (required for singing, optional for speech):",
"版本": "Version",
"提取音高和处理数据使用的CPU进程数": "Number of CPU processes used for pitch extraction and data processing:",
"step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. ": "Step 2a: Automatically traverse all files in the training folder that can be decoded into audio and perform slice normalization. Generates 2 wav folders in the experiment directory. Currently, only single-singer/speaker training is supported.",
"输入训练文件夹路径": "Enter the path of the training folder:",
"请指定说话人id": "Please specify the speaker/singer ID:",
"处理数据": "Process data",
"step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)": "Step 2b: Use CPU to extract pitch (if the model has pitch), use GPU to extract features (select GPU index):",
"以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2": "Enter the GPU index(es) separated by '-', e.g., 0-1-2 to use GPU 0, 1, and 2:",
"显卡信息": "GPU Information",
"选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢": "Select the pitch extraction algorithm ('pm': faster extraction but lower-quality speech; 'dio': improved speech but slower extraction; 'harvest': better quality but slower extraction):",
"rmvpe卡号配置以-分隔输入使用的不同进程卡号,例如0-0-1使用在卡0上跑2个进程并在卡1上跑1个进程": "Enter the GPU index(es) separated by '-', e.g., 0-0-1 to use 2 processes in GPU0 and 1 process in GPU1",
"特征提取": "Feature extraction",
"step3: 填写训练设置, 开始训练模型和索引": "Step 3: Fill in the training settings and start training the model and index",
"保存频率save_every_epoch": "Save frequency (save_every_epoch):",
"总训练轮数total_epoch": "Total training epochs (total_epoch):",
"每张显卡的batch_size": "Batch size per GPU:",
"是否仅保存最新的ckpt文件以节省硬盘空间": "Save only the latest '.ckpt' file to save disk space:",
"否": "No",
"是否缓存所有训练集至显存. 10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速": "Cache all training sets to GPU memory. Caching small datasets (less than 10 minutes) can speed up training, but caching large datasets will consume a lot of GPU memory and may not provide much speed improvement:",
"是否在每次保存时间点将最终小模型保存至weights文件夹": "Save a small final model to the 'weights' folder at each save point:",
"加载预训练底模G路径": "Load pre-trained base model G path:",
"加载预训练底模D路径": "Load pre-trained base model D path:",
"训练模型": "Train model",
"训练特征索引": "Train feature index",
"一键训练": "One-click training",
"ckpt处理": "ckpt Processing",
"模型融合, 可用于测试音色融合": "Model fusion, can be used to test timbre fusion",
"A模型路径": "Path to Model A:",
"B模型路径": "Path to Model B:",
"A模型权重": "Weight (w) for Model A:",
"模型是否带音高指导": "Whether the model has pitch guidance:",
"要置入的模型信息": "Model information to be placed:",
"保存的模型名不带后缀": "Saved model name (without extension):",
"模型版本型号": "Model architecture version:",
"融合": "Fusion",
"修改模型信息(仅支持weights文件夹下提取的小模型文件)": "Modify model information (only supported for small model files extracted from the 'weights' folder)",
"模型路径": "Path to Model:",
"要改的模型信息": "Model information to be modified:",
"保存的文件名, 默认空为和源文件同名": "Save file name (default: same as the source file):",
"修改": "Modify",
"查看模型信息(仅支持weights文件夹下提取的小模型文件)": "View model information (only supported for small model files extracted from the 'weights' folder)",
"查看": "View",
"模型提取(输入logs文件夹下大文件模型路径),适用于训一半不想训了模型没有自动提取保存小文件模型,或者想测试中间模型的情况": "Model extraction (enter the path of the large file model under the 'logs' folder). This is useful if you want to stop training halfway and manually extract and save a small model file, or if you want to test an intermediate model:",
"保存名": "Save name:",
"模型是否带音高指导,1是0否": "Whether the model has pitch guidance (1: yes, 0: no):",
"提取": "Extract",
"Onnx导出": "Export Onnx",
"RVC模型路径": "RVC Model Path:",
"Onnx输出路径": "Onnx Export Path:",
"导出Onnx模型": "Export Onnx Model",
"常见问题解答": "FAQ (Frequently Asked Questions)",
"招募音高曲线前端编辑器": "Recruiting front-end editors for pitch curves",
"加开发群联系我xxxxx": "Join the development group and contact me at xxxxx",
"点击查看交流、问题反馈群号": "Click to view the communication and problem feedback group number",
"xxxxx": "xxxxx",
"加载模型": "Load model",
"Hubert模型": "Hubert Model",
"选择.pth文件": "Select the .pth file",
"选择.index文件": "Select the .index file",
"选择.npy文件": "Select the .npy file",
"输入设备": "Input device",
"输出设备": "Output device",
"音频设备(请使用同种类驱动)": "Audio device (please use the same type of driver)",
"响应阈值": "Response threshold",
"音调设置": "Pitch settings",
"Index Rate": "Index Rate",
"常规设置": "General settings",
"采样长度": "Sample length",
"淡入淡出长度": "Fade length",
"额外推理时长": "Extra inference time",
"输入降噪": "Input noise reduction",
"输出降噪": "Output noise reduction",
"性能设置": "Performance settings",
"开始音频转换": "Start audio conversion",
"停止音频转换": "Stop audio conversion",
"推理时间(ms):": "Inference time (ms):",
"请选择pth文件": "请选择pth文件",
"请选择index文件": "请选择index文件",
"hubert模型路径不可包含中文": "hubert模型路径不可包含中文",
"pth文件路径不可包含中文": "pth文件路径不可包含中文",
"index文件路径不可包含中文": "index文件路径不可包含中文",
"重载设备列表": "Reload device list",
"音高算法": "pitch detection algorithm",
"harvest进程数": "Number of CPU processes used for harvest pitch algorithm"
}

lib/i18n/es_ES.json (new file, 132 lines)

@@ -0,0 +1,132 @@
{
"很遗憾您这没有能用的显卡来支持您训练": "Lamentablemente, no tiene una tarjeta gráfica adecuada para soportar su entrenamiento",
"是": "Sí",
"step1:正在处理数据": "Paso 1: Procesando datos",
"step2a:无需提取音高": "Paso 2a: No es necesario extraer el tono",
"step2b:正在提取特征": "Paso 2b: Extrayendo características",
"step3a:正在训练模型": "Paso 3a: Entrenando el modelo",
"训练结束, 您可查看控制台训练日志或实验文件夹下的train.log": "Entrenamiento finalizado, puede ver el registro de entrenamiento en la consola o en el archivo train.log en la carpeta del experimento",
"全流程结束!": "¡Todo el proceso ha terminado!",
"本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>LICENSE</b>.": "Este software es de código abierto bajo la licencia MIT, el autor no tiene ningún control sobre el software, y aquellos que usan el software y difunden los sonidos exportados por el software son los únicos responsables.<br>Si no está de acuerdo con esta cláusula , no puede utilizar ni citar ningún código ni archivo del paquete de software Consulte el directorio raíz <b>Agreement-LICENSE.txt</b> para obtener más información.",
"模型推理": "inferencia del modelo",
"推理音色": "inferencia de voz",
"刷新音色列表和索引路径": "Actualizar la lista de timbres e índice de rutas",
"卸载音色省显存": "Descargue la voz para ahorrar memoria GPU",
"请选择说话人id": "seleccione una identificación de altavoz",
"男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ": "Tecla +12 recomendada para conversión de voz de hombre a mujer, tecla -12 para conversión de voz de mujer a hombre. Si el rango de tono es demasiado amplio y causa distorsión, ajústelo usted mismo a un rango adecuado.",
"变调(整数, 半音数量, 升八度12降八度-12)": "Cambio de tono (entero, número de semitonos, subir una octava +12 o bajar una octava -12)",
"输入待处理音频文件路径(默认是正确格式示例)": "Ingrese la ruta del archivo del audio que se procesará (el formato predeterminado es el ejemplo correcto)",
"选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU,rmvpe效果最好且微吃GPU": "Elija el algoritmo de extracción de tono, use 'pm' para acelerar la entrada de canto, 'harvest' es bueno para los graves pero extremadamente lento, 'crepe' tiene buenos resultados pero consume GPU",
">=3则使用对harvest音高识别的结果使用中值滤波数值为滤波半径使用可以削弱哑音": "Si es >=3, entonces use el resultado del reconocimiento de tono de 'harvest' con filtro de mediana, el valor es el radio del filtro, su uso puede debilitar el sonido sordo",
"特征检索库文件路径,为空则使用下拉的选择结果": "Ruta del archivo de la biblioteca de características, si está vacío, se utilizará el resultado de la selección desplegable",
"自动检测index路径,下拉式选择(dropdown)": "Detección automática de la ruta del índice, selección desplegable (dropdown)",
"特征文件路径": "Ruta del archivo de características",
"检索特征占比": "Proporción de función de búsqueda",
"后处理重采样至最终采样率0为不进行重采样": "Remuestreo posterior al proceso a la tasa de muestreo final, 0 significa no remuestrear",
"输入源音量包络替换输出音量包络融合比例越靠近1越使用输出包络": "Proporción de fusión para reemplazar el sobre de volumen de entrada con el sobre de volumen de salida, cuanto más cerca de 1, más se utiliza el sobre de salida",
"保护清辅音和呼吸声防止电音撕裂等artifact拉满0.5不开启,调低加大保护力度但可能降低索引效果": "Proteger las consonantes claras y la respiración, prevenir artefactos como la distorsión de sonido electrónico, 0.5 no está activado, reducir aumentará la protección pero puede reducir el efecto del índice",
"F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调": "Archivo de curva F0, opcional, un tono por línea, en lugar de F0 predeterminado y cambio de tono",
"转换": "Conversión",
"输出信息": "Información de salida",
"输出音频(右下角三个点,点了可以下载)": "Salida de audio (haga clic en los tres puntos en la esquina inferior derecha para descargar)",
"批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ": "Conversión por lotes, ingrese la carpeta que contiene los archivos de audio para convertir o cargue varios archivos de audio. El audio convertido se emitirá en la carpeta especificada (opción predeterminada).",
"指定输出文件夹": "Especificar carpeta de salida",
"输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)": "Ingrese la ruta a la carpeta de audio que se procesará (simplemente cópiela desde la barra de direcciones del administrador de archivos)",
"也可批量拖拽音频文件, 二选一, 优先读文件夹,文件夹留空则读取拖拽文件": "También se pueden ingresar múltiples archivos de audio, cualquiera de las dos opciones, con prioridad dada a la carpeta",
"导出文件格式": "Formato de archivo de exportación",
"伴奏人声分离&去混响&去回声": "Separación de voz acompañante & eliminación de reverberación & eco",
"人声伴奏分离批量处理, 使用UVR5模型。 <br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)。 <br>模型分为三类: <br>1、保留人声不带和声的音频选这个对主人声保留比HP5更好。内置HP2和HP3两个模型HP3可能轻微漏伴奏但对主人声保留比HP2稍微好一丁点 <br>2、仅保留主人声带和声的音频选这个对主人声可能有削弱。内置HP5一个模型 <br> 3、去混响、去延迟模型by FoxJoy<br>(1)MDX-Net(onnx_dereverb):对于双通道混响是最好的选择,不能去除单通道混响;<br>&emsp;(234)DeEcho:去除延迟效果。Aggressive比Normal去除得更彻底DeReverb额外去除混响可去除单声道混响但是对高频重的板式混响去不干净。<br>去混响/去延迟,附:<br>1、DeEcho-DeReverb模型的耗时是另外2个DeEcho模型的接近2倍<br>2、MDX-Net-Dereverb模型挺慢的<br>3、个人推荐的最干净的配置是先MDX-Net再DeEcho-Aggressive。": "Procesamiento por lotes para la separación de acompañamiento vocal utilizando el modelo UVR5.<br>Ejemplo de formato de ruta de carpeta válido: D:\\ruta\\a\\la\\carpeta\\de\\entrada (copiar desde la barra de direcciones del administrador de archivos).<br>El modelo se divide en tres categorías:<br>1. Preservar voces: Elija esta opción para audio sin armonías. Preserva las voces mejor que HP5. Incluye dos modelos incorporados: HP2 y HP3. HP3 puede filtrar ligeramente el acompañamiento pero conserva las voces un poco mejor que HP2.<br>2. Preservar solo voces principales: Elija esta opción para audio con armonías. Puede debilitar las voces principales. Incluye un modelo incorporado: HP5.<br>3. Modelos de des-reverberación y des-retardo (por FoxJoy):<br>(1) MDX-Net: La mejor opción para la eliminación de reverberación estéreo pero no puede eliminar la reverberación mono;<br>&emsp;(234) DeEcho: Elimina efectos de retardo. El modo Agresivo elimina más a fondo que el modo Normal. DeReverb adicionalmente elimina la reverberación y puede eliminar la reverberación mono, pero no muy efectivamente para contenido de alta frecuencia fuertemente reverberado.<br>Notas de des-reverberación/des-retardo:<br>1. El tiempo de procesamiento para el modelo DeEcho-DeReverb es aproximadamente el doble que los otros dos modelos DeEcho.<br>2. El modelo MDX-Net-Dereverb es bastante lento.<br>3. La configuración más limpia recomendada es aplicar primero MDX-Net y luego DeEcho-Agresivo.",
"输入待处理音频文件夹路径": "Ingrese la ruta a la carpeta de audio que se procesará",
"模型": "Modelo",
"指定输出主人声文件夹": "Especifique la carpeta de salida para la voz principal",
"指定输出非主人声文件夹": "Especifique la carpeta de salida para las voces no principales",
"训练": "Entrenamiento",
"step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. ": "paso 1: Complete la configuración del experimento. Los datos del experimento se almacenan en el directorio 'logs', con cada experimento en una carpeta separada. La ruta del nombre del experimento debe ingresarse manualmente y debe contener la configuración del experimento, los registros y los archivos del modelo entrenado.",
"输入实验名": "Ingrese el nombre del modelo",
"目标采样率": "Tasa de muestreo objetivo",
"模型是否带音高指导(唱歌一定要, 语音可以不要)": "Si el modelo tiene guía de tono (necesaria para cantar, pero no para hablar)",
"版本": "Versión",
"提取音高和处理数据使用的CPU进程数": "Número de procesos de CPU utilizados para extraer el tono y procesar los datos",
"step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. ": "paso 2a: recorra automáticamente la carpeta de capacitación y corte y normalice todos los archivos de audio que se pueden decodificar en audio. Se generarán dos carpetas 'wav' en el directorio del experimento. Actualmente, solo se admite la capacitación de una sola persona.",
"输入训练文件夹路径": "Introduzca la ruta de la carpeta de entrenamiento",
"请指定说话人id": "Especifique el ID del hablante",
"处理数据": "Procesar datos",
"step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)": "paso 2b: use la CPU para extraer el tono (si el modelo tiene guía de tono) y la GPU para extraer características (seleccione el número de tarjeta).",
"以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2": "Separe los números de identificación de la GPU con '-' al ingresarlos. Por ejemplo, '0-1-2' significa usar GPU 0, GPU 1 y GPU 2.",
"显卡信息": "información de la GPU",
"选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢": "Seleccione el algoritmo de extracción de tono: utilice 'pm' para un procesamiento más rápido de la voz cantada, 'dio' para un discurso de alta calidad pero un procesamiento más lento y 'cosecha' para obtener la mejor calidad pero un procesamiento más lento.",
"rmvpe卡号配置以-分隔输入使用的不同进程卡号,例如0-0-1使用在卡0上跑2个进程并在卡1上跑1个进程": "rmvpe卡号配置以-分隔输入使用的不同进程卡号,例如0-0-1使用在卡0上跑2个进程并在卡1上跑1个进程",
"特征提取": "Extracción de características",
"step3: 填写训练设置, 开始训练模型和索引": "Paso 3: complete la configuración de entrenamiento y comience a entrenar el modelo y el índice.",
"保存频率save_every_epoch": "Frecuencia de guardado (save_every_epoch)",
"总训练轮数total_epoch": "Total de épocas de entrenamiento (total_epoch)",
"每张显卡的batch_size": "Tamaño del lote (batch_size) por tarjeta gráfica",
"是否仅保存最新的ckpt文件以节省硬盘空间": "Si guardar solo el archivo ckpt más reciente para ahorrar espacio en disco",
"否": "No",
"是否缓存所有训练集至显存. 10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速": "Si almacenar en caché todos los conjuntos de entrenamiento en la memoria de la GPU. Los conjuntos de datos pequeños (menos de 10 minutos) se pueden almacenar en caché para acelerar el entrenamiento, pero el almacenamiento en caché de conjuntos de datos grandes puede causar errores de memoria en la GPU y no aumenta la velocidad de manera significativa.",
"是否在每次保存时间点将最终小模型保存至weights文件夹": "¿Guardar el pequeño modelo final en la carpeta 'weights' en cada punto de guardado?",
"加载预训练底模G路径": "Cargue la ruta G del modelo base preentrenada.",
"加载预训练底模D路径": "Cargue la ruta del modelo D base preentrenada.",
"训练模型": "Entrenar Modelo",
"训练特征索引": "Índice de características del Entrenamiento",
"一键训练": "Entrenamiento con un clic.",
"ckpt处理": "Procesamiento de recibos",
"模型融合, 可用于测试音色融合": "Fusión de modelos, se puede utilizar para fusionar diferentes voces",
"A模型路径": "Modelo A ruta.",
"B模型路径": "Modelo B ruta.",
"A模型权重": "Un peso modelo para el modelo A.",
"模型是否带音高指导": "Si el modelo tiene guía de tono.",
"要置入的模型信息": "Información del modelo a colocar.",
"保存的模型名不带后缀": "Nombre del modelo guardado sin extensión.",
"模型版本型号": "Versión y modelo del modelo",
"融合": "Fusión.",
"修改模型信息(仅支持weights文件夹下提取的小模型文件)": "Modificar la información del modelo (solo admite archivos de modelos pequeños extraídos en la carpeta de pesos).",
"模型路径": "Ruta del modelo",
"要改的模型信息": "Información del modelo a modificar",
"保存的文件名, 默认空为和源文件同名": "Nombre del archivo que se guardará, el valor predeterminado es el mismo que el nombre del archivo de origen",
"修改": "Modificar",
"查看模型信息(仅支持weights文件夹下提取的小模型文件)": "Ver información del modelo (solo aplicable a archivos de modelos pequeños extraídos de la carpeta 'pesos')",
"查看": "Ver",
"模型提取(输入logs文件夹下大文件模型路径),适用于训一半不想训了模型没有自动提取保存小文件模型,或者想测试中间模型的情况": "Extracción de modelo (ingrese la ruta de un archivo de modelo grande en la carpeta 'logs'), aplicable cuando desea extraer un archivo de modelo pequeño después de entrenar a mitad de camino y no se guardó automáticamente, o cuando desea probar un modelo intermedio",
"保存名": "Guardar nombre",
"模型是否带音高指导,1是0否": "Si el modelo tiene guía de tono, 1 para sí, 0 para no",
"提取": "Extracter",
"Onnx导出": "Exportar Onnx",
"RVC模型路径": "Ruta del modelo RVC",
"Onnx输出路径": "Ruta de salida Onnx",
"导出Onnx模型": "Exportar modelo Onnx",
"常见问题解答": "Preguntas frecuentes",
"招募音高曲线前端编辑器": "Reclutar editores front-end para curvas de tono",
"加开发群联系我xxxxx": "Únase al grupo de desarrollo para contactarme en xxxxx",
"点击查看交流、问题反馈群号": "Haga clic para ver el número de grupo de comunicación y comentarios sobre problemas",
"xxxxx": "xxxxx",
"加载模型": "Cargar modelo",
"Hubert模型": "Modelo de Hubert ",
"选择.pth文件": "Seleccionar archivo .pth",
"选择.index文件": "Select .index file",
"选择.npy文件": "Seleccionar archivo .npy",
"输入设备": "Dispositivo de entrada",
"输出设备": "Dispositivo de salida",
"音频设备(请使用同种类驱动)": "Dispositivo de audio (utilice el mismo tipo de controlador)",
"响应阈值": "Umbral de respuesta",
"音调设置": "Ajuste de tono",
"Index Rate": "Tasa de índice",
"常规设置": "Configuración general",
"采样长度": "Longitud de muestreo",
"淡入淡出长度": "Duración del fundido de entrada/salida",
"额外推理时长": "Tiempo de inferencia adicional",
"输入降噪": "Reducción de ruido de entrada",
"输出降噪": "Reducción de ruido de salida",
"性能设置": "Configuración de rendimiento",
"开始音频转换": "Iniciar conversión de audio",
"停止音频转换": "Detener la conversión de audio",
"推理时间(ms):": "Inferir tiempo (ms):",
"请选择pth文件": "请选择pth文件",
"请选择index文件": "请选择index文件",
"hubert模型路径不可包含中文": "hubert模型路径不可包含中文",
"pth文件路径不可包含中文": "pth文件路径不可包含中文",
"index文件路径不可包含中文": "index文件路径不可包含中文",
"重载设备列表": "Recargar lista de dispositivos",
"音高算法": "音高算法",
"harvest进程数": "harvest进程数"
}

lib/i18n/it_IT.json (new file, 132 lines)

@@ -0,0 +1,132 @@
{
"很遗憾您这没有能用的显卡来支持您训练": "Sfortunatamente, non è disponibile alcuna GPU compatibile per supportare l'addestramento.",
"是": "SÌ",
"step1:正在处理数据": "Passaggio 1: elaborazione dei dati",
"step2a:无需提取音高": "Step 2a: Saltare l'estrazione del tono",
"step2b:正在提取特征": "Passaggio 2b: estrazione delle funzionalità",
"step3a:正在训练模型": "Passaggio 3a: è iniziato l'addestramento del modello",
"训练结束, 您可查看控制台训练日志或实验文件夹下的train.log": "Addestramento completato. ",
"全流程结束!": "Tutti i processi sono stati completati!",
"本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>LICENSE</b>.": "Questo software è open source con licenza MIT. <br>Se non si accetta questa clausola, non è possibile utilizzare o fare riferimento a codici e file all'interno del pacchetto software. <b>Contratto-LICENZA.txt</b> per dettagli.",
"模型推理": "Inferenza del modello",
"推理音色": "Voce di inferenza:",
"刷新音色列表和索引路径": "Aggiorna l'elenco delle voci e il percorso dell'indice",
"卸载音色省显存": "Scarica la voce per risparmiare memoria della GPU:",
"请选择说话人id": "Seleziona ID locutore/cantante:",
"男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ": "Tonalità +12 consigliata per la conversione da maschio a femmina e tonalità -12 per la conversione da femmina a maschio. ",
"变调(整数, 半音数量, 升八度12降八度-12)": "Trasposizione (numero intero, numero di semitoni, alza di un'ottava: 12, abbassa di un'ottava: -12):",
"输入待处理音频文件路径(默认是正确格式示例)": "Immettere il percorso del file audio da elaborare (l'impostazione predefinita è l'esempio di formato corretto):",
"选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU,rmvpe效果最好且微吃GPU": "Seleziona l'algoritmo di estrazione del tono (\"pm\": estrazione più veloce ma risultato di qualità inferiore; \"harvest\": bassi migliori ma estremamente lenti; \"crepe\": qualità migliore ma utilizzo intensivo della GPU):",
">=3则使用对harvest音高识别的结果使用中值滤波数值为滤波半径使用可以削弱哑音": "Se >=3: applica il filtro mediano ai risultati del pitch raccolto. ",
"特征检索库文件路径,为空则使用下拉的选择结果": "Percorso del file di indice delle caratteristiche. ",
"自动检测index路径,下拉式选择(dropdown)": "Rileva automaticamente il percorso dell'indice e seleziona dal menu a tendina:",
"特征文件路径": "Percorso del file delle caratteristiche:",
"检索特征占比": "Rapporto funzionalità di ricerca (controlla la forza dell'accento, troppo alto ha artefatti):",
"后处理重采样至最终采样率0为不进行重采样": "Ricampiona l'audio di output in post-elaborazione alla frequenza di campionamento finale. ",
"输入源音量包络替换输出音量包络融合比例越靠近1越使用输出包络": "Regola il ridimensionamento dell'inviluppo del volume. ",
"保护清辅音和呼吸声防止电音撕裂等artifact拉满0.5不开启,调低加大保护力度但可能降低索引效果": "Proteggi le consonanti senza voce e i suoni del respiro per evitare artefatti come il tearing nella musica elettronica. ",
"F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调": "File curva F0 (opzionale). ",
"转换": "Convertire",
"输出信息": "Informazioni sull'uscita",
"输出音频(右下角三个点,点了可以下载)": "Esporta audio (clicca sui tre puntini in basso a destra per scaricarlo)",
"批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ": "Conversione massiva. Inserisci il percorso della cartella che contiene i file da convertire o carica più file audio. I file convertiti finiranno nella cartella specificata. (default: opt) ",
"指定输出文件夹": "Specifica la cartella di output:",
"输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)": "Immettere il percorso della cartella audio da elaborare (copiarlo dalla barra degli indirizzi del file manager):",
"也可批量拖拽音频文件, 二选一, 优先读文件夹,文件夹留空则读取拖拽文件": "Puoi anche inserire file audio in massa. ",
"导出文件格式": "Formato file di esportazione",
"伴奏人声分离&去混响&去回声": "Separazione voce/accompagnamento",
"人声伴奏分离批量处理, 使用UVR5模型。 <br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)。 <br>模型分为三类: <br>1、保留人声不带和声的音频选这个对主人声保留比HP5更好。内置HP2和HP3两个模型HP3可能轻微漏伴奏但对主人声保留比HP2稍微好一丁点 <br>2、仅保留主人声带和声的音频选这个对主人声可能有削弱。内置HP5一个模型 <br> 3、去混响、去延迟模型by FoxJoy<br>(1)MDX-Net(onnx_dereverb):对于双通道混响是最好的选择,不能去除单通道混响;<br>&emsp;(234)DeEcho:去除延迟效果。Aggressive比Normal去除得更彻底DeReverb额外去除混响可去除单声道混响但是对高频重的板式混响去不干净。<br>去混响/去延迟,附:<br>1、DeEcho-DeReverb模型的耗时是另外2个DeEcho模型的接近2倍<br>2、MDX-Net-Dereverb模型挺慢的<br>3、个人推荐的最干净的配置是先MDX-Net再DeEcho-Aggressive。": "Elaborazione batch per la separazione dell'accompagnamento vocale utilizzando il modello UVR5.<br>Esempio di un formato di percorso di cartella valido: D:\\path\\to\\input\\folder (copialo dalla barra degli indirizzi del file manager).<br>Il modello è suddiviso in tre categorie:<br>1. Conserva la voce: scegli questa opzione per l'audio senza armonie. <br>2. Mantieni solo la voce principale: scegli questa opzione per l'audio con armonie. <br>3. Modelli di de-riverbero e de-delay (di FoxJoy):<br>(1) MDX-Net: la scelta migliore per la rimozione del riverbero stereo ma non può rimuovere il riverbero mono;<br><br>Note di de-riverbero/de-delay:<br>1. Il tempo di elaborazione per il modello DeEcho-DeReverb è circa il doppio rispetto agli altri due modelli DeEcho.<br>2. Il modello MDX-Net-Dereverb è piuttosto lento.<br>3. La configurazione più pulita consigliata consiste nell'applicare prima MDX-Net e poi DeEcho-Aggressive.",
"输入待处理音频文件夹路径": "Immettere il percorso della cartella audio da elaborare:",
"模型": "Modello",
"指定输出主人声文件夹": "Specifica la cartella di output per le voci:",
"指定输出非主人声文件夹": "Specificare la cartella di output per l'accompagnamento:",
"训练": "Addestramento",
"step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. ": "Passaggio 1: compilare la configurazione sperimentale. ",
"输入实验名": "Inserisci il nome dell'esperimento:",
"目标采样率": "Frequenza di campionamento target:",
"模型是否带音高指导(唱歌一定要, 语音可以不要)": "Se il modello ha una guida del tono (necessario per il canto, facoltativo per il parlato):",
"版本": "Versione",
"提取音高和处理数据使用的CPU进程数": "Numero di processi CPU utilizzati per l'estrazione del tono e l'elaborazione dei dati:",
"step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. ": "Passaggio 2a: attraversa automaticamente tutti i file nella cartella di addestramento che possono essere decodificati in audio ed esegui la normalizzazione delle sezioni. ",
"输入训练文件夹路径": "Inserisci il percorso della cartella di addestramento:",
"请指定说话人id": "Si prega di specificare l'ID del locutore/cantante:",
"处理数据": "Processa dati",
"step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)": "Passaggio 2b: utilizzare la CPU per estrarre il tono (se il modello ha il tono), utilizzare la GPU per estrarre le caratteristiche (selezionare l'indice GPU):",
"以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2": "Inserisci gli indici GPU separati da '-', ad esempio 0-1-2 per utilizzare GPU 0, 1 e 2:",
"显卡信息": "Informazioni GPU",
"选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢": "Seleziona l'algoritmo di estrazione del tono (\"pm\": estrazione più rapida ma parlato di qualità inferiore; \"dio\": parlato migliorato ma estrazione più lenta; \"harvest\": migliore qualità ma estrazione più lenta):",
"rmvpe卡号配置以-分隔输入使用的不同进程卡号,例如0-0-1使用在卡0上跑2个进程并在卡1上跑1个进程": "rmvpe卡号配置以-分隔输入使用的不同进程卡号,例如0-0-1使用在卡0上跑2个进程并在卡1上跑1个进程",
"特征提取": "Estrazione delle caratteristiche",
"step3: 填写训练设置, 开始训练模型和索引": "Passaggio 3: compilare le impostazioni di addestramento e avviare l'addestramento del modello e dell'indice",
"保存频率save_every_epoch": "Frequenza di salvataggio (save_every_epoch):",
"总训练轮数total_epoch": "Epoch totali di addestramento (total_epoch):",
"每张显卡的batch_size": "Dimensione batch per GPU:",
"是否仅保存最新的ckpt文件以节省硬盘空间": "Salva solo l'ultimo file '.ckpt' per risparmiare spazio su disco:",
"否": "NO",
"是否缓存所有训练集至显存. 10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速": "Memorizza nella cache tutti i set di addestramento nella memoria della GPU. ",
"是否在每次保存时间点将最终小模型保存至weights文件夹": "Salva un piccolo modello finale nella cartella \"weights\" in ogni punto di salvataggio:",
"加载预训练底模G路径": "Carica il percorso G del modello base pre-addestrato:",
"加载预训练底模D路径": "Carica il percorso D del modello base pre-addestrato:",
"训练模型": "Addestra modello",
"训练特征索引": "Addestra indice delle caratteristiche",
"一键训练": "Addestramento con un clic",
"ckpt处理": "Elaborazione ckpt",
"模型融合, 可用于测试音色融合": "Model fusion, può essere utilizzato per testare la fusione timbrica",
"A模型路径": "Percorso per il modello A:",
"B模型路径": "Percorso per il modello B:",
"A模型权重": "Peso (w) per il modello A:",
"模型是否带音高指导": "Se il modello ha una guida del tono:",
"要置入的模型信息": "Informazioni sul modello da posizionare:",
"保存的模型名不带后缀": "Nome del modello salvato (senza estensione):",
"模型版本型号": "Versione dell'architettura del modello:",
"融合": "Fusione",
"修改模型信息(仅支持weights文件夹下提取的小模型文件)": "Modifica le informazioni sul modello (supportato solo per i file di modello di piccole dimensioni estratti dalla cartella 'weights')",
"模型路径": "Percorso al modello:",
"要改的模型信息": "Informazioni sul modello da modificare:",
"保存的文件名, 默认空为和源文件同名": "Salva il nome del file (predefinito: uguale al file di origine):",
"修改": "Modificare",
"查看模型信息(仅支持weights文件夹下提取的小模型文件)": "Visualizza le informazioni sul modello (supportato solo per file di modello piccoli estratti dalla cartella 'weights')",
"查看": "Visualizzazione",
"模型提取(输入logs文件夹下大文件模型路径),适用于训一半不想训了模型没有自动提取保存小文件模型,或者想测试中间模型的情况": "Estrazione del modello (inserire il percorso del modello di file di grandi dimensioni nella cartella \"logs\"). ",
"保存名": "Salva nome:",
"模型是否带音高指导,1是0否": "Se il modello ha una guida del tono (1: sì, 0: no):",
"提取": "Estrai",
"Onnx导出": "Esporta Onnx",
"RVC模型路径": "Percorso modello RVC:",
"Onnx输出路径": "Percorso di esportazione Onnx:",
"导出Onnx模型": "Esporta modello Onnx",
"常见问题解答": "FAQ (Domande frequenti)",
"招募音高曲线前端编辑器": "Reclutamento di redattori front-end per curve di tono",
"加开发群联系我xxxxx": "Unisciti al gruppo di sviluppo e contattami a xxxxx",
"点击查看交流、问题反馈群号": "Fare clic per visualizzare il numero del gruppo di comunicazione e feedback sui problemi",
"xxxxx": "xxxxx",
"加载模型": "Carica modello",
"Hubert模型": "Modello Hubert",
"选择.pth文件": "Seleziona il file .pth",
"选择.index文件": "Seleziona il file .index",
"选择.npy文件": "Seleziona il file .npy",
"输入设备": "Dispositivo di input",
"输出设备": "Dispositivo di uscita",
"音频设备(请使用同种类驱动)": "Dispositivo audio (utilizzare lo stesso tipo di driver)",
"响应阈值": "Soglia di risposta",
"音调设置": "Impostazioni del tono",
"Index Rate": "Tasso di indice",
"常规设置": "Impostazioni generali",
"采样长度": "Lunghezza del campione",
"淡入淡出长度": "Lunghezza dissolvenza",
"额外推理时长": "Tempo di inferenza extra",
"输入降噪": "Riduzione del rumore in ingresso",
"输出降噪": "Riduzione del rumore in uscita",
"性能设置": "Impostazioni delle prestazioni",
"开始音频转换": "Avvia la conversione audio",
"停止音频转换": "Arresta la conversione audio",
"推理时间(ms):": "Tempo di inferenza (ms):",
"请选择pth文件": "请选择pth 文件",
"请选择index文件": "请选择index文件",
"hubert模型路径不可包含中文": "hubert 模型路径不可包含中文",
"pth文件路径不可包含中文": "pth è un'app per il futuro",
"index文件路径不可包含中文": "index文件路径不可包含中文",
"重载设备列表": "Ricaricare l'elenco dei dispositivi",
"音高算法": "音高算法",
"harvest进程数": "harvest进程数"
}

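The locale files in this changeset are flat key-to-value JSON maps whose keys are the original Simplified Chinese UI strings. A minimal sketch of how such a table is typically consumed at runtime; the class name, constructor arguments, and fallback behavior below are assumptions modeled on common i18n helpers, not a copy of this repository's lib/i18n module:

import json
import locale
import os


class I18nAuto:
    def __init__(self, language=None, i18n_dir="lib/i18n"):
        if language is None:
            # locale.getdefaultlocale() returns e.g. ("it_IT", "UTF-8")
            language = locale.getdefaultlocale()[0] or "en_US"
        path = os.path.join(i18n_dir, f"{language}.json")
        if os.path.exists(path):
            with open(path, "r", encoding="utf-8") as f:
                self.language_map = json.load(f)
        else:
            self.language_map = {}  # unknown locale: every lookup falls back

    def __call__(self, key):
        # Missing or untranslated entries fall back to the Chinese key.
        return self.language_map.get(key, key)


i18n = I18nAuto("it_IT")
print(i18n("模型"))  # -> "Modello" when lib/i18n/it_IT.json is present

This lookup-with-fallback is why entries whose value still equals the Chinese key degrade gracefully in the UI instead of breaking it.
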
lib/i18n/ja_JP.json Normal file

@@ -0,0 +1,132 @@
{
"很遗憾您这没有能用的显卡来支持您训练": "トレーニングに対応したGPUが動作しないのは残念です。",
"是": "はい",
"step1:正在处理数据": "step1:処理中のデータ",
"step2a:无需提取音高": "step2a:ピッチの抽出は不要",
"step2b:正在提取特征": "step2b:抽出される特徴量",
"step3a:正在训练模型": "step3a:トレーニング中のモデル",
"训练结束, 您可查看控制台训练日志或实验文件夹下的train.log": "トレーニング終了時に、トレーニングログやフォルダ内のtrain.logを確認することができます",
"全流程结束!": "全工程が完了!",
"本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>LICENSE</b>.": "本ソフトウェアはMITライセンスに基づくオープンソースであり、製作者は本ソフトウェアに対していかなる責任を持ちません。本ソフトウェアの利用者および本ソフトウェアから派生した音源(成果物)を配布する者は、本ソフトウェアに対して自身で責任を負うものとします。 <br>この条項に同意しない場合、パッケージ内のコードやファイルを使用や参照を禁じます。詳しくは<b>LICENSE</b>をご覧ください。",
"模型推理": "モデル推論",
"推理音色": "音源推論",
"刷新音色列表和索引路径": "音源リストとインデックスパスの更新",
"卸载音色省显存": "音源を削除してメモリを節約",
"请选择说话人id": "話者IDを選択してください",
"男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ": "男性から女性へは+12キーをお勧めします。女性から男性へは-12キーをお勧めします。音域が広すぎて音質が劣化した場合は、適切な音域に自分で調整してください。",
"变调(整数, 半音数量, 升八度12降八度-12)": "ピッチ変更(整数、半音数、上下オクターブ12-12)",
"输入待处理音频文件路径(默认是正确格式示例)": "処理対象音声ファイルのパスを入力してください(デフォルトは正しいフォーマットの例です)",
"选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU,rmvpe效果最好且微吃GPU": "ピッチ抽出アルゴリズムの選択、歌声はpmで高速化でき、harvestは低音が良いが信じられないほど遅く、crepeは良く動くがGPUを喰います",
">=3则使用对harvest音高识别的结果使用中值滤波数值为滤波半径使用可以削弱哑音": ">=3 次に、harvestピッチの認識結果に対してメディアンフィルタを使用します。値はフィルター半径で、ミュートを減衰させるために使用します。",
"特征检索库文件路径,为空则使用下拉的选择结果": "特徴検索ライブラリへのパス 空の場合はドロップダウンで選択",
"自动检测index路径,下拉式选择(dropdown)": "インデックスパスの自動検出 ドロップダウンで選択",
"特征文件路径": "特徴量ファイルのパス",
"检索特征占比": "検索特徴率",
"后处理重采样至最终采样率0为不进行重采样": "最終的なサンプリングレートへのポストプロセッシングのリサンプリング リサンプリングしない場合は0",
"输入源音量包络替换输出音量包络融合比例越靠近1越使用输出包络": "入力ソースの音量エンベロープと出力音量エンベロープの融合率 1に近づくほど、出力音量エンベロープの割合が高くなる",
"保护清辅音和呼吸声防止电音撕裂等artifact拉满0.5不开启,调低加大保护力度但可能降低索引效果": "保护清辅音和呼吸声防止电音撕裂等artifact拉满0.5不开启,调低加大保护力度但可能降低索引效果",
"F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调": "F0(最低共振周波数)カーブファイル(オプション、1行に1ピッチ、デフォルトのF0(最低共振周波数)とエレベーションを置き換えます。)",
"转换": "変換",
"输出信息": "出力情報",
"输出音频(右下角三个点,点了可以下载)": "出力音声(右下の三点をクリックしてダウンロードできます)",
"批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ": "一括変換、変換する音声フォルダを入力、または複数の音声ファイルをアップロードし、指定したフォルダ(デフォルトのopt)に変換した音声を出力します。",
"指定输出文件夹": "出力フォルダを指定してください",
"输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)": "処理対象音声フォルダーのパスを入力してください(エクスプローラーのアドレスバーからコピーしてください)",
"也可批量拖拽音频文件, 二选一, 优先读文件夹,文件夹留空则读取拖拽文件": "複数の音声ファイルを一括で入力することもできますが、フォルダーを優先して読み込みます",
"导出文件格式": "エクスポート形式",
"伴奏人声分离&去混响&去回声": "伴奏ボーカル分離&残響除去&エコー除去",
"人声伴奏分离批量处理, 使用UVR5模型。 <br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)。 <br>模型分为三类: <br>1、保留人声不带和声的音频选这个对主人声保留比HP5更好。内置HP2和HP3两个模型HP3可能轻微漏伴奏但对主人声保留比HP2稍微好一丁点 <br>2、仅保留主人声带和声的音频选这个对主人声可能有削弱。内置HP5一个模型 <br> 3、去混响、去延迟模型by FoxJoy<br>(1)MDX-Net(onnx_dereverb):对于双通道混响是最好的选择,不能去除单通道混响;<br>&emsp;(234)DeEcho:去除延迟效果。Aggressive比Normal去除得更彻底DeReverb额外去除混响可去除单声道混响但是对高频重的板式混响去不干净。<br>去混响/去延迟,附:<br>1、DeEcho-DeReverb模型的耗时是另外2个DeEcho模型的接近2倍<br>2、MDX-Net-Dereverb模型挺慢的<br>3、个人推荐的最干净的配置是先MDX-Net再DeEcho-Aggressive。": "UVR5モデルを使用したボーカル伴奏の分離バッチ処理。<br>有効なフォルダーパスフォーマットの例: D:\\path\\to\\input\\folder (エクスプローラーのアドレスバーからコピーします)。<br>モデルは三つのカテゴリに分かれています:<br>1. ボーカルを保持: ハーモニーのないオーディオに対してこれを選択します。HP5よりもボーカルをより良く保持します。HP2とHP3の二つの内蔵モデルが含まれています。HP3は伴奏をわずかに漏らす可能性がありますが、HP2よりもわずかにボーカルをより良く保持します。<br>2. 主なボーカルのみを保持: ハーモニーのあるオーディオに対してこれを選択します。主なボーカルを弱める可能性があります。HP5の一つの内蔵モデルが含まれています。<br>3. ディリバーブとディレイモデル (by FoxJoy):<br>(1) MDX-Net: ステレオリバーブの除去に最適な選択肢ですが、モノリバーブは除去できません;<br>&emsp;(234) DeEcho: ディレイ効果を除去します。AggressiveモードはNormalモードよりも徹底的に除去します。DeReverbはさらにリバーブを除去し、モリバーブを除去することができますが、高周波のリバーブが強い内容に対しては非常に効果的ではありません。<br>ディリバーブ/ディレイに関する注意点:<br>1. DeEcho-DeReverbモデルの処理時間は、他の二つのDeEchoモデルの約二倍です。<br>2. MDX-Net-Dereverbモデルは非常に遅いです。<br>3. 推奨される最もクリーンな設定は、最初にMDX-Netを適用し、その後にDeEcho-Aggressiveを適用することです。",
"输入待处理音频文件夹路径": "処理するオーディオファイルのフォルダパスを入力してください",
"模型": "モデル",
"指定输出主人声文件夹": "マスターの出力音声フォルダーを指定する",
"指定输出非主人声文件夹": "マスター以外の出力音声フォルダーを指定する",
"训练": "トレーニング",
"step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. ": "ステップ1:実験設定を入力します。実験データはlogsに保存され、各実験にはフォルダーがあります。実験名のパスを手動で入力する必要があり、実験設定、ログ、トレーニングされたモデルファイルが含まれます。",
"输入实验名": "モデル名",
"目标采样率": "目標サンプリングレート",
"模型是否带音高指导(唱歌一定要, 语音可以不要)": "モデルに音高ガイドがあるかどうか(歌唱には必要ですが、音声には必要ありません)",
"版本": "バージョン",
"提取音高和处理数据使用的CPU进程数": "ピッチの抽出やデータ処理に使用するCPUスレッド数",
"step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. ": "ステップ2a: 訓練フォルダー内のすべての音声ファイルを自動的に探索し、スライスと正規化を行い、2つのwavフォルダーを実験ディレクトリに生成します。現在は一人でのトレーニングのみをサポートしています。",
"输入训练文件夹路径": "トレーニング用フォルダのパスを入力してください",
"请指定说话人id": "話者IDを指定してください",
"处理数据": "データ処理",
"step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)": "ステップ2b: CPUを使用して音高を抽出する(モデルに音高がある場合)、GPUを使用して特徴を抽出する(GPUの番号を選択する)",
"以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2": "ハイフンで区切って使用するGPUの番号を入力します。例えば0-1-2はGPU0、GPU1、GPU2を使用します",
"显卡信息": "GPU情報",
"选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢": "音高抽出アルゴリズムの選択:歌声を入力する場合は、pmを使用して速度を上げることができます。CPUが低い場合はdioを使用して速度を上げることができます。harvestは品質が良く精度が高いですが、遅いです。",
"rmvpe卡号配置以-分隔输入使用的不同进程卡号,例如0-0-1使用在卡0上跑2个进程并在卡1上跑1个进程": "rmvpe卡号配置以-分隔输入使用的不同进程卡号,例如0-0-1使用在卡0上跑2个进程并在卡1上跑1个进程",
"特征提取": "特徴抽出",
"step3: 填写训练设置, 开始训练模型和索引": "ステップ3: トレーニング設定を入力して、モデルとインデックスのトレーニングを開始します",
"保存频率save_every_epoch": "エポックごとの保存頻度",
"总训练轮数total_epoch": "総エポック数",
"每张显卡的batch_size": "GPUごとのバッチサイズ",
"是否仅保存最新的ckpt文件以节省硬盘空间": "ハードディスク容量を節約するため、最新のckptファイルのみを保存しますか",
"否": "いいえ",
"是否缓存所有训练集至显存. 10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速": "すべてのトレーニングデータをメモリにキャッシュするかどうか。10分以下の小さなデータはキャッシュしてトレーニングを高速化できますが、大きなデータをキャッシュするとメモリが破裂し、あまり速度が上がりません。",
"是否在每次保存时间点将最终小模型保存至weights文件夹": "各保存時点の小モデルを全部weightsフォルダに保存するかどうか",
"加载预训练底模G路径": "事前学習済みのGモデルのパス",
"加载预训练底模D路径": "事前学習済みのDモデルのパス",
"训练模型": "モデルのトレーニング",
"训练特征索引": "特徴インデックスのトレーニング",
"一键训练": "ワンクリックトレーニング",
"ckpt处理": "ckptファイルの処理",
"模型融合, 可用于测试音色融合": "モデルのマージ、音源のマージテストに使用できます",
"A模型路径": "Aモデルのパス",
"B模型路径": "Bモデルのパス",
"A模型权重": "Aモデルの重み",
"模型是否带音高指导": "モデルに音高ガイドを付けるかどうか",
"要置入的模型信息": "挿入するモデル情報",
"保存的模型名不带后缀": "拡張子のない保存するモデル名",
"模型版本型号": "モデルのバージョン",
"融合": "マージ",
"修改模型信息(仅支持weights文件夹下提取的小模型文件)": "モデル情報の修正(weightsフォルダから抽出された小さなモデルファイルのみ対応)",
"模型路径": "モデルパス",
"要改的模型信息": "変更するモデル情報",
"保存的文件名, 默认空为和源文件同名": "保存するファイル名、デフォルトでは空欄で元のファイル名と同じ名前になります",
"修改": "変更",
"查看模型信息(仅支持weights文件夹下提取的小模型文件)": "モデル情報を表示する(小さいモデルファイルはweightsフォルダーからのみサポートされています)",
"查看": "表示",
"模型提取(输入logs文件夹下大文件模型路径),适用于训一半不想训了模型没有自动提取保存小文件模型,或者想测试中间模型的情况": "モデル抽出(ログフォルダー内の大きなファイルのモデルパスを入力)、モデルを半分までトレーニングし、自動的に小さいファイルモデルを保存しなかったり、中間モデルをテストしたい場合に適用されます。",
"保存名": "保存ファイル名",
"模型是否带音高指导,1是0否": "モデルに音高ガイドを付けるかどうか、1は付ける、0は付けない",
"提取": "抽出",
"Onnx导出": "Onnxエクスポート",
"RVC模型路径": "RVCモデルパス",
"Onnx输出路径": "Onnx出力パス",
"导出Onnx模型": "Onnxに変換",
"常见问题解答": "よくある質問",
"招募音高曲线前端编辑器": "音高曲線フロントエンドエディターを募集",
"加开发群联系我xxxxx": "開発グループに参加して私に連絡してくださいxxxxx",
"点击查看交流、问题反馈群号": "クリックして交流、問題フィードバックグループ番号を表示",
"xxxxx": "xxxxx",
"加载模型": "モデルをロード",
"Hubert模型": "Hubertモデル",
"选择.pth文件": ".pthファイルを選択",
"选择.index文件": ".indexファイルを選択",
"选择.npy文件": ".npyファイルを選択",
"输入设备": "入力デバイス",
"输出设备": "出力デバイス",
"音频设备(请使用同种类驱动)": "オーディオデバイス(同じ種類のドライバーを使用してください)",
"响应阈值": "反応閾値",
"音调设置": "音程設定",
"Index Rate": "Index Rate",
"常规设置": "一般設定",
"采样长度": "サンプル長",
"淡入淡出长度": "フェードイン/フェードアウト長",
"额外推理时长": "追加推論時間",
"输入降噪": "入力ノイズの低減",
"输出降噪": "出力ノイズの低減",
"性能设置": "パフォーマンス設定",
"开始音频转换": "音声変換を開始",
"停止音频转换": "音声変換を停止",
"推理时间(ms):": "推論時間(ms):",
"请选择pth文件": "请选择pth文件",
"请选择index文件": "请选择index文件",
"hubert模型路径不可包含中文": "hubert模型路径不可包含中文",
"pth文件路径不可包含中文": "pth文件路径不可包含中文",
"index文件路径不可包含中文": "index文件路径不可包含中文",
"重载设备列表": "デバイスリストをリロードする",
"音高算法": "音高算法",
"harvest进程数": "harvest进程数"
}

View File

@@ -42,3 +42,4 @@ for lang_file in languages:
# Save the updated language file
with open(lang_file, "w", encoding="utf-8") as f:
json.dump(lang_data, f, ensure_ascii=False, indent=4)
f.write("\n")

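The hunk above is the tail of the locale-maintenance script: the one-line change appends a trailing newline after json.dump so every locale file ends cleanly. A hedged reconstruction of the surrounding loop, assuming zh_CN.json is the canonical key set; the merge logic is inferred from the script's purpose, not copied from the repository:

import glob
import json

# Assumption: zh_CN.json is the reference file whose keys drive all locales.
with open("lib/i18n/zh_CN.json", "r", encoding="utf-8") as f:
    standard = json.load(f)

languages = [p for p in glob.glob("lib/i18n/*.json") if not p.endswith("zh_CN.json")]
for lang_file in languages:
    with open(lang_file, "r", encoding="utf-8") as f:
        lang_data = json.load(f)
    # Keep zh_CN's key order; keys new in zh_CN default to the untranslated
    # Chinese value, and keys removed from zh_CN drop out of every locale.
    lang_data = {k: lang_data.get(k, v) for k, v in standard.items()}
    # Save the updated language file
    with open(lang_file, "w", encoding="utf-8") as f:
        json.dump(lang_data, f, ensure_ascii=False, indent=4)
        f.write("\n")
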
lib/i18n/ru-RU.json Normal file

@@ -0,0 +1,132 @@
{
"很遗憾您这没有能用的显卡来支持您训练": "К сожалению у вас нету видеокарты, которая поддерживает тренировку модели.",
"是": "Да",
"step1:正在处理数据": "Шаг 1: Переработка данных",
"step2a:无需提取音高": "Шаг 2а: Пропуск вытаскивания тональности",
"step2b:正在提取特征": "Шаг 2б: Вытаскивание черт",
"step3a:正在训练模型": "Шаг 3а: Тренировка модели начата",
"训练结束, 您可查看控制台训练日志或实验文件夹下的train.log": "Тренировка завершена. Вы можете проверить логи тренировки в консоли или в файле 'train.log' в папке модели.",
"全流程结束!": "Все процессы завершены!",
"本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>LICENSE</b>.": "本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>LICENSE</b>.",
"模型推理": "Обработка модели",
"推理音色": "Обработка голоса:",
"刷新音色列表和索引路径": "Обновить список голосов и индексов",
"卸载音色省显存": "Выгрузить голос для сохранения памяти видеокарты:",
"请选择说话人id": "Выбери айди голоса:",
"男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ": "Рекомендованно +12 для конвертирования мужского голоса в женский и -12 для конвертирования женского в мужской. Если диапазон голоса слищком велик и голос искажается, значение можно изменить на свой вкус.",
"变调(整数, 半音数量, 升八度12降八度-12)": "Высота голоса (число, полутоны, поднять на октаву: 12, понизить на октаву: -12):",
"输入待处理音频文件路径(默认是正确格式示例)": "Введите путь к аудиофайлу, который хотите переработать (по умолчанию введён правильный формат):",
"选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU,rmvpe效果最好且微吃GPU": "Выберите алгоритм вытаскивания тональности ('pm': быстрое извлечение но качество речи хуже; 'harvest': бассы лучше но очень медленный; 'crepe': лучшее качество но сильно использует видеокарту):",
">=3则使用对harvest音高识别的结果使用中值滤波数值为滤波半径使用可以削弱哑音": "Если больше 3: применить медианную фильтрацию к вытащенным тональностям. Значение контролирует радиус фильтра и может уменьшить излишнее дыхание.",
"特征检索库文件路径,为空则使用下拉的选择结果": "Путь к файлу индекса черт. Оставьте пустым, чтобы использовать выбранный результат из списка:",
"自动检测index路径,下拉式选择(dropdown)": "Автоматически найти путь к индексу и выбрать его из списка:",
"特征文件路径": "Путь к файлу черт:",
"检索特征占比": "Соотношение поиска черт:",
"后处理重采样至最终采样率0为不进行重采样": "Изменить частоту дискретизации в выходном файле на финальную. Поставьте 0, чтобы ничего не изменялось:",
"输入源音量包络替换输出音量包络融合比例越靠近1越使用输出包络": "Использовать громкость входного файла для замены или перемешивания с громкостью выходного файла. Чем ближе соотношение к 1, тем больше используется звука из выходного файла:",
"保护清辅音和呼吸声防止电音撕裂等artifact拉满0.5不开启,调低加大保护力度但可能降低索引效果": "Защитить глухие согласные и звуки дыхания для предотвращения артефактов, например разрывание в электронной музыке. Поставьте на 0.5, чтобы выключить. Уменьшите значение для повышения защиты, но при этом может ухудшиться аккуратность индексирования:",
"F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调": "Файл дуги F0 (не обязательно). Одна тональность на каждую строчку. Заменяет обычный F0 и модуляцию тональности:",
"转换": "Конвертировать",
"输出信息": "Выходная информация",
"输出音频(右下角三个点,点了可以下载)": "Экспортировать аудиофайл (нажми на три точки в правом нижнем углу для загрузки)",
"批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ": "Конвертировать пачкой. Введите путь к папке, в которой находятся файлы для конвертирования или выложите несколько аудиофайлов. Сконвертированные файлы будут сохранены в указанной папке (по умолчанию 'opt').",
"指定输出文件夹": "Укажите выходную папку:",
"输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)": "Введите путь к папке с аудио для переработки:",
"也可批量拖拽音频文件, 二选一, 优先读文件夹,文件夹留空则读取拖拽文件": "Вы также можете выложить аудиофайлы пачкой. Выберите одно из двух. Приоритет отдаётся считыванию из папки.",
"导出文件格式": "Формат выходного файла",
"伴奏人声分离&去混响&去回声": "Отделение вокала/инструментала и убирание эхо",
"人声伴奏分离批量处理, 使用UVR5模型。 <br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)。 <br>模型分为三类: <br>1、保留人声不带和声的音频选这个对主人声保留比HP5更好。内置HP2和HP3两个模型HP3可能轻微漏伴奏但对主人声保留比HP2稍微好一丁点 <br>2、仅保留主人声带和声的音频选这个对主人声可能有削弱。内置HP5一个模型 <br> 3、去混响、去延迟模型by FoxJoy<br>(1)MDX-Net(onnx_dereverb):对于双通道混响是最好的选择,不能去除单通道混响;<br>&emsp;(234)DeEcho:去除延迟效果。Aggressive比Normal去除得更彻底DeReverb额外去除混响可去除单声道混响但是对高频重的板式混响去不干净。<br>去混响/去延迟,附:<br>1、DeEcho-DeReverb模型的耗时是另外2个DeEcho模型的接近2倍<br>2、MDX-Net-Dereverb模型挺慢的<br>3、个人推荐的最干净的配置是先MDX-Net再DeEcho-Aggressive。": "Пакетная обработка для разделения вокального сопровождения с использованием модели UVR5.<br>Пример допустимого формата пути к папке: D:\\path\\to\\input\\folder<br> Модель разделена на три категории:<br>1. Сохранить вокал: выберите этот вариант для звука без гармоний. Он сохраняет вокал лучше, чем HP5. Он включает в себя две встроенные модели: HP2 и HP3. HP3 может немного пропускать инструментал, но сохраняет вокал немного лучше, чем HP2.<br>2. Сохранить только основной вокал: выберите этот вариант для звука с гармониями. Это может ослабить основной вокал. Он включает одну встроенную модель: HP5.<br>3. Модели удаления реверберации и задержки (от FoxJoy):<br>(1) MDX-Net: лучший выбор для удаления стереореверберации, но он не может удалить монореверберацию;<br>&emsp;(234) DeEcho: удаляет эффекты задержки. Агрессивный режим удаляет более тщательно, чем Нормальный режим. DeReverb дополнительно удаляет реверберацию и может удалять монореверберацию, но не очень эффективно для сильно реверберированного высокочастотного контента.<br>Примечания по удалению реверберации/задержки:<br>1. Время обработки для модели DeEcho-DeReverb примерно в два раза больше, чем для двух других моделей DeEcho.<br>2. Модель MDX-Net-Dereverb довольно медленная.<br>3. Рекомендуемая самая чистая конфигурация — сначала применить MDX-Net, а затем DeEcho-Aggressive.",
"输入待处理音频文件夹路径": "Введите путь к папке с аудиофайлами для переработки:",
"模型": "Модели",
"指定输出主人声文件夹": "Введите путь к папке для вокала:",
"指定输出非主人声文件夹": "Введите путь к папке для инструментала:",
"训练": "Тренировка",
"step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. ": "Шаг 1: Заполните настройки модели. Данные модели сохранены в папку 'logs' и для каждой модели создаётся отдельная папка. Введите вручную путь к настройкам для модели, в которой находятся логи и тренировочные файлы.",
"输入实验名": "Введите название модели:",
"目标采样率": "Частота дискретизации модели:",
"模型是否带音高指导(唱歌一定要, 语音可以不要)": "Наведение по тональности у модели (обязательно для пения, необязательно для речи):",
"版本": "Версия",
"提取音高和处理数据使用的CPU进程数": "Число процессов ЦП, используемое для вытаскивания тональностей и обрабротки данных:",
"step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. ": "Шаг 2а: Автоматически пройтись по всем аудиофайлам в папке тренировки и нормализировать куски. Создаст 2 папки wav в папке модели. В данных момент поддерживается тренировка только одного голоса.",
"输入训练文件夹路径": "Введите путь к папке тренировки:",
"请指定说话人id": "Введите айди голоса:",
"处理数据": "Переработать данные",
"step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)": "Шаг 2б: Вытащить тональности с помошью процессора (если в модели есть тональности), вытащить черты с помощью видеокарты (выберите какой):",
"以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2": "Введите, какие(-ую) видеокарты(-у) хотите использовать через '-', например 0-1-2, чтобы использовать видеокарту 0, 1 и 2:",
"显卡信息": "Информация о видеокартах",
"选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢": "Выберите алгоритм вытаскивания тональности ('pm': быстрое извлечение но качество речи хуже; 'harvest': бассы лучше но очень медленный; 'crepe': лучшее качество но сильно использует видеокарту):",
"rmvpe卡号配置以-分隔输入使用的不同进程卡号,例如0-0-1使用在卡0上跑2个进程并在卡1上跑1个进程": "rmvpe卡号配置以-分隔输入使用的不同进程卡号,例如0-0-1使用在卡0上跑2个进程并在卡1上跑1个进程",
"特征提取": "Вытаскивание черт",
"step3: 填写训练设置, 开始训练模型和索引": "Шаг 3: Заполните остальные настройки тренировки и начните тренировать модель и индекс",
"保存频率save_every_epoch": "Частота сохранения (save_every_epoch):",
"总训练轮数total_epoch": "Полное количество эпох (total_epoch):",
"每张显卡的batch_size": "Размер пачки для видеокарты:",
"是否仅保存最新的ckpt文件以节省硬盘空间": "Сохранять только последний файл '.ckpt', чтобы сохранить место на диске:",
"否": "Нет",
"是否缓存所有训练集至显存. 10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速": "Кэшировать все тренировочные сеты в видеопамять. Кэширование маленький датасетов (меньше 10 минут) может ускорить тренировку, но кэширование больших, наоборот, займёт много видеопамяти и не сильно ускорит тренировку:",
"是否在每次保存时间点将最终小模型保存至weights文件夹": "Сохранять маленькую финальную модель в папку 'weights' на каждой точке сохранения:",
"加载预训练底模G路径": "Путь к натренированой базовой модели G:",
"加载预训练底模D路径": "Путь к натренированой базовой модели D:",
"训练模型": "Тренировать модель",
"训练特征索引": "Тренировать индекс черт",
"一键训练": "Тренировка одним нажатием",
"ckpt处理": "Обработка ckpt",
"模型融合, 可用于测试音色融合": "Слияние моделей, может быть использовано для проверки слияния тембра",
"A模型路径": "Путь к модели А:",
"B模型路径": "Путь к модели Б:",
"A模型权重": "Вес (w) модели А::",
"模型是否带音高指导": "Есть ли у модели наведение по тональности (1: да, 0: нет):",
"要置入的模型信息": "Информация о модели:",
"保存的模型名不带后缀": "Название сохранённой модели (без расширения):",
"模型版本型号": "Версия архитектуры модели:",
"融合": "Слияние",
"修改模型信息(仅支持weights文件夹下提取的小模型文件)": "Модифицировать информацию о модели (поддерживается только для маленких моделей, взятых из папки 'weights')",
"模型路径": "Путь к папке:",
"要改的模型信息": "Информация о модели, которую нужно модифицировать:",
"保存的文件名, 默认空为和源文件同名": "Название сохранённого файла (по умолчанию такое же, как и входного):",
"修改": "Модифицировать",
"查看模型信息(仅支持weights文件夹下提取的小模型文件)": "Просмотреть информацию о модели (поддерживается только для маленких моделей, взятых из папки 'weights')",
"查看": "Просмотр",
"模型提取(输入logs文件夹下大文件模型路径),适用于训一半不想训了模型没有自动提取保存小文件模型,或者想测试中间模型的情况": "Вытаскивание модели (введите путь к большому файлу модели в папке 'logs'). Полезно, если Вам нужно заверщить тренировку и вручную достать и сохранить маленький файл модели, или если Вам нужно проверить незаконченную модель:",
"保存名": "Имя сохранённого файла:",
"模型是否带音高指导,1是0否": "Есть ли у модели наведение по тональности (1: да, 0: нет):",
"提取": "Вытащить",
"Onnx导出": "Экспортировать Onnx",
"RVC模型路径": "Путь к модели RVC:",
"Onnx输出路径": "Путь для экспотрированного Onnx:",
"导出Onnx模型": "Экспортировать Onnx модель",
"常见问题解答": "ЧаВО (Часто задаваемые вопросы)",
"招募音高曲线前端编辑器": "Использование фронтенд редакторов для тональных дуг",
"加开发群联系我xxxxx": "Присоединитесь к группе разработки и свяжитесь со мной по xxxxx",
"点击查看交流、问题反馈群号": "Нажмите, чтобы просмотреть номер группы коммуникации и отзывах о проблемах",
"xxxxx": "xxxxx",
"加载模型": "Загрузить модель",
"Hubert模型": "Модель Hubert",
"选择.pth文件": "Выбрать файл .pth",
"选择.index文件": "Выбрать файл .index",
"选择.npy文件": "Выбрать файл .npy",
"输入设备": "Входное устройство",
"输出设备": "Выходное устройство",
"音频设备(请使用同种类驱动)": "Аудио устройство (пожалуйста используйте такой=же тип драйвера)",
"响应阈值": "Порог ответа",
"音调设置": "Настройки тональности",
"Index Rate": "Темп индекса",
"常规设置": "Основные настройки",
"采样长度": "Длина сэмпла",
"淡入淡出长度": "Длина затухания",
"额外推理时长": "Доп. время переработки",
"输入降噪": "Уменьшения шума во входной информации",
"输出降噪": "Уменьшения шума во выходной информации",
"性能设置": "Настройки быстроты",
"开始音频转换": "Начать конвертацию аудио",
"停止音频转换": "Закончить конвертацию аудио",
"推理时间(ms):": "Время переработки (мс):",
"请选择pth文件": "请选择pth文件",
"请选择index文件": "请选择index文件",
"hubert模型路径不可包含中文": "hubert模型路径不可包含中文",
"pth文件路径不可包含中文": "pth文件路径不可包含中文",
"index文件路径不可包含中文": "index文件路径不可包含中文",
"重载设备列表": "Перезагрузить список устройств",
"音高算法": "音高算法",
"harvest进程数": "harvest进程数"
}

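Several entries above document the rmvpe GPU string: "0-0-1" runs two extraction processes on GPU 0 and one on GPU 1. A small illustrative parser for that format; the function name and return shape are assumptions for illustration, not the project's actual API:

def parse_rmvpe_gpus(spec: str):
    # "0-0-1" -> [0, 0, 1]: one GPU index per feature-extraction worker,
    # so repeating an index runs several workers on the same card.
    return [int(part) for part in spec.split("-") if part]


assert parse_rmvpe_gpus("0-0-1") == [0, 0, 1]
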
lib/i18n/tr_TR.json Normal file

@@ -0,0 +1,132 @@
{
"很遗憾您这没有能用的显卡来支持您训练": "Maalesef, eğitiminizi desteklemek için uyumlu bir GPU bulunmamaktadır.",
"是": "Evet",
"step1:正在处理数据": "Adım 1: Veri işleme",
"step2a:无需提取音高": "Adım 2a: Pitch çıkartma adımını atlama",
"step2b:正在提取特征": "Adım 2b: Özelliklerin çıkarılması",
"step3a:正在训练模型": "Adım 3a: Model eğitimi başladı",
"训练结束, 您可查看控制台训练日志或实验文件夹下的train.log": "Eğitim tamamlandı. Eğitim günlüklerini konsolda veya deney klasörü altındaki train.log dosyasında kontrol edebilirsiniz.",
"全流程结束!": "Tüm işlemler tamamlandı!",
"本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>LICENSE</b>.": "Bu yazılım, MIT lisansı altında açık kaynaklıdır. Yazarın yazılım üzerinde herhangi bir kontrolü yoktur. Yazılımı kullanan ve yazılım tarafından dışa aktarılan sesleri dağıtan kullanıcılar sorumludur. <br>Eğer bu maddeyle aynı fikirde değilseniz, yazılım paketi içindeki herhangi bir kod veya dosyayı kullanamaz veya referans göremezsiniz. Detaylar için kök dizindeki <b>Agreement-LICENSE.txt</b> dosyasına bakınız.",
"模型推理": "Model çıkartma (Inference)",
"推理音色": "Ses çıkartma (Inference):",
"刷新音色列表和索引路径": "Ses listesini ve indeks yolunu yenile",
"卸载音色省显存": "GPU bellek kullanımını azaltmak için sesi kaldır",
"请选择说话人id": "Konuşmacı/Şarkıcı No seçin:",
"男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ": "Erkekten kadına çevirmek için +12 tuş önerilir, kadından erkeğe çevirmek için ise -12 tuş önerilir. Eğer ses aralığı çok fazla genişler ve ses bozulursa, isteğe bağlı olarak uygun aralığa kendiniz de ayarlayabilirsiniz.",
"变调(整数, 半音数量, 升八度12降八度-12)": "Transpoze et (tamsayı, yarıton sayısıyla; bir oktav yükseltmek için: 12, bir oktav düşürmek için: -12):",
"输入待处理音频文件路径(默认是正确格式示例)": "İşlenecek ses dosyasının yolunu girin (varsayılan doğru format örneğidir):",
"选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU,rmvpe效果最好且微吃GPU": "Pitch algoritmasını seçin ('pm': daha hızlı çıkarır ancak daha düşük kaliteli konuşma; 'harvest': daha iyi konuşma sesi ancak son derece yavaş; 'crepe': daha da iyi kalite ancak GPU yoğunluğu gerektirir):",
">=3则使用对harvest音高识别的结果使用中值滤波数值为滤波半径使用可以削弱哑音": "Eğer >=3 ise, elde edilen pitch sonuçlarına median filtreleme uygula. Bu değer, filtre yarıçapını temsil eder ve nefesliliği azaltabilir.",
"特征检索库文件路径,为空则使用下拉的选择结果": "Özellik indeksi dosyasının yolunu belirtin. Seçilen sonucu kullanmak için boş bırakın veya açılır menüden seçim yapın.",
"自动检测index路径,下拉式选择(dropdown)": "İndeks yolunu otomatik olarak tespit et ve açılır menüden seçim yap.",
"特征文件路径": "Özellik dosyasının yolu:",
"检索特征占比": "Arama özelliği oranı (vurgu gücünü kontrol eder, çok yüksek olması sanal etkilere neden olur)",
"后处理重采样至最终采样率0为不进行重采样": "Son işleme aşamasında çıktı sesini son örnekleme hızına yeniden örnekle. 0 değeri için yeniden örnekleme yapılmaz:",
"输入源音量包络替换输出音量包络融合比例越靠近1越使用输出包络": "Sesin hacim zarfını ayarlayın. 0'a yakın değerler, sesin orijinal vokallerin hacmine benzer olmasını sağlar. Düşük bir değerle ses gürültüsünü maskeleyebilir ve hacmi daha doğal bir şekilde duyulabilir hale getirebilirsiniz. 1'e yaklaştıkça sürekli bir yüksek ses seviyesi elde edilir:",
"保护清辅音和呼吸声防止电音撕裂等artifact拉满0.5不开启,调低加大保护力度但可能降低索引效果": "Sessiz ünsüzleri ve nefes seslerini koruyarak elektronik müzikte yırtılma gibi sanal hataların oluşmasını engeller. 0.5 olarak ayarlandığında devre dışı kalır. Değerin azaltılması korumayı artırabilir, ancak indeksleme doğruluğunu azaltabilir:",
"F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调": "F0 eğrisi dosyası (isteğe bağlı). Her satırda bir pitch değeri bulunur. Varsayılan F0 ve pitch modülasyonunu değiştirir:",
"转换": "Dönüştür",
"输出信息": ıkış bilgisi",
"输出音频(右下角三个点,点了可以下载)": "Ses dosyasını dışa aktar (indirmek için sağ alt köşedeki üç noktaya tıklayın)",
"批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ": "Toplu dönüştür. Dönüştürülecek ses dosyalarının bulunduğu klasörü girin veya birden çok ses dosyasını yükleyin. Dönüştürülen ses dosyaları belirtilen klasöre ('opt' varsayılan olarak) dönüştürülecektir",
"指定输出文件夹": ıkış klasörünü belirt:",
"输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)": "İşlenecek ses klasörünün yolunu girin (dosya yöneticisinin adres çubuğundan kopyalayın):",
"也可批量拖拽音频文件, 二选一, 优先读文件夹,文件夹留空则读取拖拽文件": "Toplu olarak ses dosyalarını da girebilirsiniz. İki seçenekten birini seçin. Öncelik klasörden okumaya verilir.",
"导出文件格式": "Dışa aktarma dosya formatı",
"伴奏人声分离&去混响&去回声": "Vokal/Müzik Ayrıştırma ve Yankı Giderme",
"人声伴奏分离批量处理, 使用UVR5模型。 <br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)。 <br>模型分为三类: <br>1、保留人声不带和声的音频选这个对主人声保留比HP5更好。内置HP2和HP3两个模型HP3可能轻微漏伴奏但对主人声保留比HP2稍微好一丁点 <br>2、仅保留主人声带和声的音频选这个对主人声可能有削弱。内置HP5一个模型 <br> 3、去混响、去延迟模型by FoxJoy<br>(1)MDX-Net(onnx_dereverb):对于双通道混响是最好的选择,不能去除单通道混响;<br>&emsp;(234)DeEcho:去除延迟效果。Aggressive比Normal去除得更彻底DeReverb额外去除混响可去除单声道混响但是对高频重的板式混响去不干净。<br>去混响/去延迟,附:<br>1、DeEcho-DeReverb模型的耗时是另外2个DeEcho模型的接近2倍<br>2、MDX-Net-Dereverb模型挺慢的<br>3、个人推荐的最干净的配置是先MDX-Net再DeEcho-Aggressive。": "Batch işleme kullanarak vokal eşlik ayrımı için UVR5 modeli kullanılır.<br>Geçerli bir klasör yol formatı örneği: D:\\path\\to\\input\\folder (dosya yöneticisi adres çubuğundan kopyalanır).<br>Model üç kategoriye ayrılır:<br>1. Vokalleri koru: Bu seçeneği, harmoni içermeyen sesler için kullanın. HP5'ten daha iyi bir şekilde vokalleri korur. İki dahili model içerir: HP2 ve HP3. HP3, eşlik sesini hafifçe sızdırabilir, ancak vokalleri HP2'den biraz daha iyi korur.<br>2. Sadece ana vokalleri koru: Bu seçeneği, harmoni içeren sesler için kullanın. Ana vokalleri zayıflatabilir. Bir dahili model içerir: HP5.<br>3. Reverb ve gecikme modelleri (FoxJoy tarafından):<br>(1) MDX-Net: Stereo reverb'i kaldırmak için en iyi seçenek, ancak mono reverb'i kaldıramaz;<br>(234) DeEcho: Gecikme efektlerini kaldırır. Agresif mod, Normal moda göre daha kapsamlı bir şekilde kaldırma yapar. DeReverb ayrıca reverb'i kaldırır ve mono reverb'i kaldırabilir, ancak yoğun yankılı yüksek frekanslı içerikler için çok etkili değildir.<br>Reverb/gecikme notları:<br>1. DeEcho-DeReverb modelinin işleme süresi diğer iki DeEcho modeline göre yaklaşık olarak iki kat daha uzundur.<br>2. MDX-Net-Dereverb modeli oldukça yavaştır.<br>3. Tavsiye edilen en temiz yapılandırma önce MDX-Net'i uygulamak ve ardından DeEcho-Aggressive uygulamaktır.",
"输入待处理音频文件夹路径": "İşlenecek ses klasörünün yolunu girin:",
"模型": "Model",
"指定输出主人声文件夹": "Vokal için çıkış klasörünü belirtin:",
"指定输出非主人声文件夹": "Müzik ve diğer sesler için çıkış klasörünü belirtin:",
"训练": "Eğitim",
"step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. ": "Adım 1: Deneysel yapılandırmayı doldurun. Deneysel veriler 'logs' klasöründe saklanır ve her bir deney için ayrı bir klasör vardır. Deneysel adı yolu manuel olarak girin; bu yol, deneysel yapılandırmayı, günlükleri ve eğitilmiş model dosyalarını içerir.",
"输入实验名": "Deneysel adı girin:",
"目标采样率": "Hedef örnekleme oranı:",
"模型是否带音高指导(唱歌一定要, 语音可以不要)": "Modelin ses yüksekliği (Pitch) rehberliği içerip içermediği (şarkı söyleme için şarttır, konuşma için isteğe bağlıdır):",
"版本": "Sürüm",
"提取音高和处理数据使用的CPU进程数": "Ses yüksekliği çıkartmak (Pitch) ve verileri işlemek için kullanılacak CPU işlemci sayısı:",
"step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. ": "Adım 2a: Eğitim klasöründe ses dosyalarını otomatik olarak gezinerek dilimleme normalizasyonu yapın. Deney dizini içinde 2 wav klasörü oluşturur. Şu anda sadece tek kişilik eğitim desteklenmektedir.",
"输入训练文件夹路径": "Eğitim klasörünün yolunu girin:",
"请指定说话人id": "Lütfen konuşmacı/sanatçı no belirtin:",
"处理数据": "Verileri işle",
"step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)": "Adım 2b: Ses yüksekliği (Pitch) çıkartmak için CPU kullanın (eğer model ses yüksekliği içeriyorsa), özellikleri çıkartmak için GPU kullanın (GPU indeksini seçin):",
"以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2": "GPU indekslerini '-' ile ayırarak girin, örneğin 0-1-2, GPU 0, 1 ve 2'yi kullanmak için:",
"显卡信息": "GPU Bilgisi",
"选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢": "Ses yüksekliği (Pitch) çıkartma algoritmasını seçin ('pm': daha hızlı çıkartma, ancak düşük kaliteli konuşma; 'dio': geliştirilmiş konuşma kalitesi, ancak daha yavaş çıkartma; 'harvest': daha iyi kalite, ancak daha da yavaş çıkartma):",
"rmvpe卡号配置以-分隔输入使用的不同进程卡号,例如0-0-1使用在卡0上跑2个进程并在卡1上跑1个进程": "rmvpe卡号配置以-分隔输入使用的不同进程卡号,例如0-0-1使用在卡0上跑2个进程并在卡1上跑1个进程",
"特征提取": "Özellik çıkartma",
"step3: 填写训练设置, 开始训练模型和索引": "Adım 3: Eğitim ayarlarını doldurun ve modeli ve dizini eğitmeye başlayın",
"保存频率save_every_epoch": "Kaydetme sıklığı (save_every_epoch):",
"总训练轮数total_epoch": "Toplam eğitim turu (total_epoch):",
"每张显卡的batch_size": "Her GPU için yığın boyutu (batch_size):",
"是否仅保存最新的ckpt文件以节省硬盘空间": "Sadece en son '.ckpt' dosyasını kaydet:",
"否": "Hayır",
"是否缓存所有训练集至显存. 10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速": "Tüm eğitim verilerini GPU belleğine önbelleğe alıp almayacağınızı belirtin. Küçük veri setlerini (10 dakikadan az) önbelleğe almak eğitimi hızlandırabilir, ancak büyük veri setlerini önbelleğe almak çok fazla GPU belleği tüketir ve çok fazla hız artışı sağlamaz:",
"是否在每次保存时间点将最终小模型保存至weights文件夹": "Her kaydetme noktasında son küçük bir modeli 'weights' klasörüne kaydetmek için:",
"加载预训练底模G路径": "Önceden eğitilmiş temel G modelini yükleme yolu:",
"加载预训练底模D路径": "Önceden eğitilmiş temel D modelini yükleme yolu:",
"训练模型": "Modeli Eğit",
"训练特征索引": "Özellik Dizinini Eğit",
"一键训练": "Tek Tuşla Eğit",
"ckpt处理": "ckpt İşleme",
"模型融合, 可用于测试音色融合": "Model birleştirme, ses rengi birleştirmesi için kullanılabilir",
"A模型路径": "A Modeli Yolu:",
"B模型路径": "B Modeli Yolu:",
"A模型权重": "A Modeli Ağırlığı:",
"模型是否带音高指导": "Modelin ses yüksekliği rehberi içerip içermediği:",
"要置入的模型信息": "Eklemek için model bilgileri:",
"保存的模型名不带后缀": "Kaydedilecek model adı (uzantı olmadan):",
"模型版本型号": "Model mimari versiyonu:",
"融合": "Birleştir",
"修改模型信息(仅支持weights文件夹下提取的小模型文件)": "Model bilgilerini düzenle (sadece 'weights' klasöründen çıkarılan küçük model dosyaları desteklenir)",
"模型路径": "Model Yolu:",
"要改的模型信息": "Düzenlenecek model bilgileri:",
"保存的文件名, 默认空为和源文件同名": "Kaydedilecek dosya adı (varsayılan: kaynak dosya ile aynı):",
"修改": "Düzenle",
"查看模型信息(仅支持weights文件夹下提取的小模型文件)": "Model bilgilerini görüntüle (sadece 'weights' klasöründen çıkarılan küçük model dosyaları desteklenir)",
"查看": "Görüntüle",
"模型提取(输入logs文件夹下大文件模型路径),适用于训一半不想训了模型没有自动提取保存小文件模型,或者想测试中间模型的情况": "Model çıkartma (büyük dosya modeli yolunu 'logs' klasöründe girin). Bu, eğitimi yarıda bırakmak istediğinizde ve manuel olarak küçük bir model dosyası çıkartmak ve kaydetmek istediğinizde veya bir ara modeli test etmek istediğinizde kullanışlıdır:",
"保存名": "Kaydetme Adı:",
"模型是否带音高指导,1是0否": "Modelin ses yüksekliği rehberi içerip içermediği (1: evet, 0: hayır):",
"提取": ıkart",
"Onnx导出": "Onnx Dışa Aktar",
"RVC模型路径": "RVC Model Yolu:",
"Onnx输出路径": "Onnx Dışa Aktarım Yolu:",
"导出Onnx模型": "Onnx Modeli Dışa Aktar",
"常见问题解答": "Sıkça Sorulan Sorular (SSS)",
"招募音高曲线前端编辑器": "Ses yükseklik eğrisi ön uç düzenleyicisi için işe alım",
"加开发群联系我xxxxx": "Geliştirme grubuna katılın ve benimle iletişime geçin: xxxxx",
"点击查看交流、问题反馈群号": "İletişim ve sorun geri bildirim grup numarasını görüntülemek için tıklayın",
"xxxxx": "xxxxx",
"加载模型": "Model yükle",
"Hubert模型": "Hubert Modeli",
"选择.pth文件": ".pth dosyası seç",
"选择.index文件": ".index dosyası seç",
"选择.npy文件": ".npy dosyası seç",
"输入设备": "Giriş cihazı",
"输出设备": ıkış cihazı",
"音频设备(请使用同种类驱动)": "Ses cihazı (aynı tür sürücüyü kullanın)",
"响应阈值": "Tepki eşiği",
"音调设置": "Pitch ayarları",
"Index Rate": "Index Oranı",
"常规设置": "Genel ayarlar",
"采样长度": "Örnekleme uzunluğu",
"淡入淡出长度": "Geçiş (Fade) uzunluğu",
"额外推理时长": "Ekstra çıkartma süresi",
"输入降噪": "Giriş gürültü azaltma",
"输出降噪": ıkış gürültü azaltma",
"性能设置": "Performans ayarları",
"开始音频转换": "Ses dönüştürmeyi başlat",
"停止音频转换": "Ses dönüştürmeyi durdur",
"推理时间(ms):": ıkarsama süresi (ms):",
"请选择pth文件": "Lütfen .pth dosyası seçin",
"请选择index文件": "Lütfen .index dosyası seçin",
"hubert模型路径不可包含中文": "hubert modeli yolu Çince karakter içeremez",
"pth文件路径不可包含中文": ".pth dosya yolu Çince karakter içeremez",
"index文件路径不可包含中文": ".index dosya yolu Çince karakter içeremez",
"重载设备列表": "Cihaz listesini yeniden yükle",
"音高算法": "音高算法",
"harvest进程数": "harvest进程数"
}

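The newly added messages of the form "…路径不可包含中文" reject paths containing Chinese characters, since some upstream loaders mishandle non-ASCII paths, particularly on Windows. A hedged sketch of such a check; the regex range and function name are illustrative assumptions, not the repository's implementation:

import re

# CJK Unified Ideographs; widen the range if other scripts also cause issues.
_CJK = re.compile(r"[\u4e00-\u9fff]")


def path_is_safe(path: str) -> bool:
    """Return False if the path contains Chinese characters."""
    return _CJK.search(path) is None


assert path_is_safe("C:/models/hubert_base.pt")
assert not path_is_safe("C:/模型/hubert_base.pt")
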
View File

@@ -1,36 +1,52 @@
{
"本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>使用需遵守的协议-LICENSE.txt</b>.": "本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>使用需遵守的协议-LICENSE.txt</b>.",
"很遗憾您这没有能用的显卡来支持您训练": "很遗憾您这没有能用的显卡来支持您训练",
"是": "是",
"step1:正在处理数据": "step1:正在处理数据",
"step2a:无需提取音高": "step2a:无需提取音高",
"step2b:正在提取特征": "step2b:正在提取特征",
"step3a:正在训练模型": "step3a:正在训练模型",
"训练结束, 您可查看控制台训练日志或实验文件夹下的train.log": "训练结束, 您可查看控制台训练日志或实验文件夹下的train.log",
"全流程结束!": "全流程结束!",
"本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>LICENSE</b>.": "本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>LICENSE</b>.",
"模型推理": "模型推理",
"推理音色": "推理音色",
"刷新音色列表": "刷新音色列表",
"刷新音色列表和索引路径": "刷新音色列表和索引路径",
"卸载音色省显存": "卸载音色省显存",
"请选择说话人id": "请选择说话人id",
"男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ": "男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ",
"变调(整数, 半音数量, 升八度12降八度-12)": "变调(整数, 半音数量, 升八度12降八度-12)",
"输入待处理音频文件路径(默认是正确格式示例)": "输入待处理音频文件路径(默认是正确格式示例)",
"选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比": "选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比",
"特征检索库文件路径": "特征检索库文件路径",
"选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU,rmvpe效果最好且微吃GPU": "选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU,rmvpe效果最好且微吃GPU",
">=3则使用对harvest音高识别的结果使用中值滤波数值为滤波半径使用可以削弱哑音": ">=3则使用对harvest音高识别的结果使用中值滤波数值为滤波半径使用可以削弱哑音",
"特征检索库文件路径,为空则使用下拉的选择结果": "特征检索库文件路径,为空则使用下拉的选择结果",
"自动检测index路径,下拉式选择(dropdown)": "自动检测index路径,下拉式选择(dropdown)",
"特征文件路径": "特征文件路径",
"检索特征占比": "检索特征占比",
"后处理重采样至最终采样率0为不进行重采样": "后处理重采样至最终采样率0为不进行重采样",
"输入源音量包络替换输出音量包络融合比例越靠近1越使用输出包络": "输入源音量包络替换输出音量包络融合比例越靠近1越使用输出包络",
"保护清辅音和呼吸声防止电音撕裂等artifact拉满0.5不开启,调低加大保护力度但可能降低索引效果": "保护清辅音和呼吸声防止电音撕裂等artifact拉满0.5不开启,调低加大保护力度但可能降低索引效果",
"F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调": "F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调",
"转换": "转换",
"输出信息": "输出信息",
"输出音频(右下角三个点,点了可以下载)": "输出音频(右下角三个点,点了可以下载)",
"批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ": "批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ",
"指定输出文件夹": "指定输出文件夹",
"检索特征占比": "检索特征占比",
"输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)": "输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)",
"也可批量输入音频文件, 二选一, 优先读文件夹": "也可批量输入音频文件, 二选一, 优先读文件夹",
"伴奏人声分离": "伴奏人声分离",
"人声伴奏分离批量处理, 使用UVR5模型. <br>不带和声用HP2, 带和声且提取的人声不需要和声用HP5<br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)": "人声伴奏分离批量处理, 使用UVR5模型. <br>不带和声用HP2, 带和声且提取的人声不需要和声用HP5<br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)",
"也可批量拖拽音频文件, 二选一, 优先读文件夹,文件夹留空则读取拖拽文件": "也可批量拖拽音频文件, 二选一, 优先读文件夹,文件夹留空则读取拖拽文件",
"导出文件格式": "导出文件格式",
"伴奏人声分离&去混响&去回声": "伴奏人声分离&去混响&去回声",
"人声伴奏分离批量处理, 使用UVR5模型。 <br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)。 <br>模型分为三类: <br>1、保留人声不带和声的音频选这个对主人声保留比HP5更好。内置HP2和HP3两个模型HP3可能轻微漏伴奏但对主人声保留比HP2稍微好一丁点 <br>2、仅保留主人声带和声的音频选这个对主人声可能有削弱。内置HP5一个模型 <br> 3、去混响、去延迟模型by FoxJoy<br>(1)MDX-Net(onnx_dereverb):对于双通道混响是最好的选择,不能去除单通道混响;<br>&emsp;(234)DeEcho:去除延迟效果。Aggressive比Normal去除得更彻底DeReverb额外去除混响可去除单声道混响但是对高频重的板式混响去不干净。<br>去混响/去延迟,附:<br>1、DeEcho-DeReverb模型的耗时是另外2个DeEcho模型的接近2倍<br>2、MDX-Net-Dereverb模型挺慢的<br>3、个人推荐的最干净的配置是先MDX-Net再DeEcho-Aggressive。": "人声伴奏分离批量处理, 使用UVR5模型。 <br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)。 <br>模型分为三类: <br>1、保留人声不带和声的音频选这个对主人声保留比HP5更好。内置HP2和HP3两个模型HP3可能轻微漏伴奏但对主人声保留比HP2稍微好一丁点 <br>2、仅保留主人声带和声的音频选这个对主人声可能有削弱。内置HP5一个模型 <br> 3、去混响、去延迟模型by FoxJoy<br>(1)MDX-Net(onnx_dereverb):对于双通道混响是最好的选择,不能去除单通道混响;<br>&emsp;(234)DeEcho:去除延迟效果。Aggressive比Normal去除得更彻底DeReverb额外去除混响可去除单声道混响但是对高频重的板式混响去不干净。<br>去混响/去延迟,附:<br>1、DeEcho-DeReverb模型的耗时是另外2个DeEcho模型的接近2倍<br>2、MDX-Net-Dereverb模型挺慢的<br>3、个人推荐的最干净的配置是先MDX-Net再DeEcho-Aggressive。",
"输入待处理音频文件夹路径": "输入待处理音频文件夹路径",
"模型": "模型",
"指定输出人声文件夹": "指定输出人声文件夹",
"指定输出乐器文件夹": "指定输出乐器文件夹",
"指定输出人声文件夹": "指定输出人声文件夹",
"指定输出非主人声文件夹": "指定输出非主人声文件夹",
"训练": "训练",
"step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. ": "step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. ",
"输入实验名": "输入实验名",
"目标采样率": "目标采样率",
"模型是否带音高指导(唱歌一定要, 语音可以不要)": "模型是否带音高指导(唱歌一定要, 语音可以不要)",
"版本": "版本",
"提取音高和处理数据使用的CPU进程数": "提取音高和处理数据使用的CPU进程数",
"step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. ": "step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. ",
"输入训练文件夹路径": "输入训练文件夹路径",
"请指定说话人id": "请指定说话人id",
@@ -38,14 +54,17 @@
"step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)": "step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)",
"以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2": "以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2",
"显卡信息": "显卡信息",
"提取音高使用的CPU进程数": "提取音高使用的CPU进程数",
"选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢": "选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢",
"rmvpe卡号配置以-分隔输入使用的不同进程卡号,例如0-0-1使用在卡0上跑2个进程并在卡1上跑1个进程": "rmvpe卡号配置以-分隔输入使用的不同进程卡号,例如0-0-1使用在卡0上跑2个进程并在卡1上跑1个进程",
"特征提取": "特征提取",
"step3: 填写训练设置, 开始训练模型和索引": "step3: 填写训练设置, 开始训练模型和索引",
"保存频率save_every_epoch": "保存频率save_every_epoch",
"总训练轮数total_epoch": "总训练轮数total_epoch",
"每张显卡的batch_size": "每张显卡的batch_size",
"是否仅保存最新的ckpt文件以节省硬盘空间": "是否仅保存最新的ckpt文件以节省硬盘空间",
"否": "否",
"是否缓存所有训练集至显存. 10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速": "是否缓存所有训练集至显存. 10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速",
"是否在每次保存时间点将最终小模型保存至weights文件夹": "是否在每次保存时间点将最终小模型保存至weights文件夹",
"加载预训练底模G路径": "加载预训练底模G路径",
"加载预训练底模D路径": "加载预训练底模D路径",
"训练模型": "训练模型",
@@ -59,6 +78,7 @@
"模型是否带音高指导": "模型是否带音高指导",
"要置入的模型信息": "要置入的模型信息",
"保存的模型名不带后缀": "保存的模型名不带后缀",
"模型版本型号": "模型版本型号",
"融合": "融合",
"修改模型信息(仅支持weights文件夹下提取的小模型文件)": "修改模型信息(仅支持weights文件夹下提取的小模型文件)",
"模型路径": "模型路径",
@@ -71,6 +91,11 @@
"保存名": "保存名",
"模型是否带音高指导,1是0否": "模型是否带音高指导,1是0否",
"提取": "提取",
"Onnx导出": "Onnx导出",
"RVC模型路径": "RVC模型路径",
"Onnx输出路径": "Onnx输出路径",
"导出Onnx模型": "导出Onnx模型",
"常见问题解答": "常见问题解答",
"招募音高曲线前端编辑器": "招募音高曲线前端编辑器",
"加开发群联系我xxxxx": "加开发群联系我xxxxx",
"点击查看交流、问题反馈群号": "点击查看交流、问题反馈群号",
@@ -95,5 +120,13 @@
"性能设置": "性能设置",
"开始音频转换": "开始音频转换",
"停止音频转换": "停止音频转换",
"推理时间(ms):": "推理时间(ms):"
}
"推理时间(ms):": "推理时间(ms):",
"请选择pth文件": "请选择pth文件",
"请选择index文件": "请选择index文件",
"hubert模型路径不可包含中文": "hubert模型路径不可包含中文",
"pth文件路径不可包含中文": "pth文件路径不可包含中文",
"index文件路径不可包含中文": "index文件路径不可包含中文",
"重载设备列表": "重载设备列表",
"音高算法": "音高算法",
"harvest进程数": "harvest进程数"
}

View File

@@ -1,36 +1,52 @@
{
"本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>使用需遵守的协议-LICENSE.txt</b>.": "本軟體以MIT協議開源作者不對軟體具備任何控制力使用軟體者、傳播軟體導出的聲音者自負全責。<br>如不認可該條款,則不能使用或引用軟體包內任何程式碼和檔案。詳見根目錄<b>使用需遵守的協議-LICENSE.txt</b>。",
"很遗憾您这没有能用的显卡来支持您训练": "很遗憾您这没有能用的显卡来支持您训练",
"是": "是",
"step1:正在处理数据": "step1:正在处理数据",
"step2a:无需提取音高": "step2a:无需提取音高",
"step2b:正在提取特征": "step2b:正在提取特征",
"step3a:正在训练模型": "step3a:正在训练模型",
"训练结束, 您可查看控制台训练日志或实验文件夹下的train.log": "训练结束, 您可查看控制台训练日志或实验文件夹下的train.log",
"全流程结束!": "全流程结束!",
"本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>LICENSE</b>.": "本軟體以MIT協議開源作者不對軟體具備任何控制力使用軟體者、傳播軟體導出的聲音者自負全責。<br>如不認可該條款,則不能使用或引用軟體包內任何程式碼和檔案。詳見根目錄<b>使用需遵守的協議-LICENSE.txt</b>。",
"模型推理": "模型推理",
"推理音色": "推理音色",
"刷新音色列表": "重新整理音色列表",
"刷新音色列表和索引路径": "刷新音色列表和索引路徑",
"卸载音色省显存": "卸載音色節省 VRAM",
"请选择说话人id": "請選擇說話人ID",
"男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ": "男性轉女性推薦+12key女性轉男性推薦-12key如果音域爆炸導致音色失真也可以自己調整到合適音域。",
"变调(整数, 半音数量, 升八度12降八度-12)": "變調(整數、半音數量、升八度12降八度-12)",
"输入待处理音频文件路径(默认是正确格式示例)": "輸入待處理音頻檔案路徑(預設是正確格式示例)",
"选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比": "選擇音高提取演算法輸入歌聲可用 pm 提速harvest 低音好但巨慢無比",
"特征检索库文件路径": "特徵檢索庫檔案路徑",
"选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU,rmvpe效果最好且微吃GPU": "選擇音高提取演算法,輸入歌聲可用pm提速,harvest低音好但巨慢無比,crepe效果好但吃GPU,rmvpe效果最好且微吃GPU",
">=3则使用对harvest音高识别的结果使用中值滤波数值为滤波半径使用可以削弱哑音": ">=3則使用對harvest音高識別的結果使用中值濾波數值為濾波半徑使用可以削弱啞音",
"特征检索库文件路径,为空则使用下拉的选择结果": "特徵檢索庫檔路徑,為空則使用下拉的選擇結果",
"自动检测index路径,下拉式选择(dropdown)": "自動檢測index路徑,下拉式選擇(dropdown)",
"特征文件路径": "特徵檔案路徑",
"检索特征占比": "檢索特徵佔比",
"后处理重采样至最终采样率0为不进行重采样": "後處理重採樣至最終採樣率0為不進行重採樣",
"输入源音量包络替换输出音量包络融合比例越靠近1越使用输出包络": "輸入源音量包絡替換輸出音量包絡融合比例越靠近1越使用輸出包絡",
"保护清辅音和呼吸声防止电音撕裂等artifact拉满0.5不开启,调低加大保护力度但可能降低索引效果": "保護清輔音和呼吸聲防止電音撕裂等artifact拉滿0.5不開啟,調低加大保護力度但可能降低索引效果",
"F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调": "F0曲線檔案可選一行一個音高代替預設的F0及升降調",
"转换": "轉換",
"输出信息": "輸出訊息",
"输出音频(右下角三个点,点了可以下载)": "輸出音頻(右下角三個點,點了可以下載)",
"批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ": "批量轉換,輸入待轉換音頻資料夾,或上傳多個音頻檔案,在指定資料夾(默認opt)下輸出轉換的音頻。",
"指定输出文件夹": "指定輸出資料夾",
"检索特征占比": "檢索特徵佔比",
"输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)": "輸入待處理音頻資料夾路徑(去檔案管理器地址欄拷貝即可)",
"也可批量输入音频文件, 二选一, 优先读文件夹": "也可批量輸入音頻檔案,二選一優先讀資料夾",
"伴奏人声分离": "伴奏人聲分離",
"人声伴奏分离批量处理, 使用UVR5模型. <br>不带和声用HP2, 带和声且提取的人声不需要和声用HP5<br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)": "人聲伴奏分離批量處理使用UVR5模型。<br>不帶和聲用HP2帶和聲且提取的人聲不需要和聲用HP5<br>合格的資料夾路徑格式舉例E:\\codes\\py39\\vits_vc_gpu\\白鷺霜華測試樣例(去檔案管理員地址欄複製就行了)",
"也可批量拖拽音频文件, 二选一, 优先读文件夹,文件夹留空则读取拖拽文件": "也可批量拖拽音頻檔, 二選一, 優先讀檔夾,檔夾留空則讀取拖拽檔",
"导出文件格式": "導出檔格式",
"伴奏人声分离&去混响&去回声": "伴奏人聲分離&去混響&去回聲",
"人声伴奏分离批量处理, 使用UVR5模型。 <br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)。 <br>模型分为三类: <br>1、保留人声不带和声的音频选这个对主人声保留比HP5更好。内置HP2和HP3两个模型HP3可能轻微漏伴奏但对主人声保留比HP2稍微好一丁点 <br>2、仅保留主人声带和声的音频选这个对主人声可能有削弱。内置HP5一个模型 <br> 3、去混响、去延迟模型by FoxJoy<br>(1)MDX-Net(onnx_dereverb):对于双通道混响是最好的选择,不能去除单通道混响;<br>&emsp;(234)DeEcho:去除延迟效果。Aggressive比Normal去除得更彻底DeReverb额外去除混响可去除单声道混响但是对高频重的板式混响去不干净。<br>去混响/去延迟,附:<br>1、DeEcho-DeReverb模型的耗时是另外2个DeEcho模型的接近2倍<br>2、MDX-Net-Dereverb模型挺慢的<br>3、个人推荐的最干净的配置是先MDX-Net再DeEcho-Aggressive。": "使用UVR5模型進行人聲伴奏分離的批次處理。<br>有效資料夾路徑格式的例子D:\\path\\to\\input\\folder從檔案管理員地址欄複製。<br>模型分為三類:<br>1. 保留人聲選擇這個選項適用於沒有和聲的音訊。它比HP5更好地保留了人聲。它包括兩個內建模型HP2和HP3。HP3可能輕微漏出伴奏但比HP2更好地保留了人聲<br>2. 僅保留主人聲選擇這個選項適用於有和聲的音訊。它可能會削弱主人聲。它包括一個內建模型HP5。<br>3. 消除混響和延遲模型由FoxJoy提供<br>(1) MDX-Net對於立體聲混響的移除是最好的選擇但不能移除單聲道混響<br>&emsp;(234) DeEcho移除延遲效果。Aggressive模式比Normal模式移除得更徹底。DeReverb另外移除混響可以移除單聲道混響但對於高頻重的板式混響移除不乾淨。<br>消除混響/延遲注意事項:<br>1. DeEcho-DeReverb模型的處理時間是其他兩個DeEcho模型的近兩倍<br>2. MDX-Net-Dereverb模型相當慢<br>3. 個人推薦的最乾淨配置是先使用MDX-Net然後使用DeEcho-Aggressive。",
"输入待处理音频文件夹路径": "輸入待處理音頻資料夾路徑",
"模型": "模型",
"指定输出人声文件夹": "指定輸出人聲資料夾",
"指定输出乐器文件夹": "指定輸出樂器資料夾",
"指定输出人声文件夹": "指定输出主人声文件夹",
"指定输出非主人声文件夹": "指定输出非主人声文件夹",
"训练": "訓練",
"step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. ": "step1填寫實驗配置。實驗數據放在logs下每個實驗一個資料夾需手動輸入實驗名路徑內含實驗配置、日誌、訓練得到的模型檔案。",
"输入实验名": "輸入實驗名稱",
"目标采样率": "目標取樣率",
"模型是否带音高指导(唱歌一定要, 语音可以不要)": "模型是否帶音高指導(唱歌一定要,語音可以不要)",
"版本": "版本",
"提取音高和处理数据使用的CPU进程数": "提取音高和處理數據使用的CPU進程數",
"step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. ": "step2a自動遍歷訓練資料夾下所有可解碼成音頻的檔案並進行切片歸一化在實驗目錄下生成2個wav資料夾暫時只支援單人訓練。",
"输入训练文件夹路径": "輸入訓練檔案夾路徑",
"请指定说话人id": "請指定說話人id",
@@ -38,14 +54,17 @@
"step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)": "步驟2b: 使用CPU提取音高(如果模型帶音高), 使用GPU提取特徵(選擇卡號)",
"以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2": "以-分隔輸入使用的卡號, 例如 0-1-2 使用卡0和卡1和卡2",
"显卡信息": "顯示卡資訊",
"提取音高使用的CPU进程数": "提取音高使用的CPU進程數",
"选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢": "選擇音高提取算法:輸入歌聲可用pm提速,高品質語音但CPU差可用dio提速,harvest品質更好但較慢",
"rmvpe卡号配置以-分隔输入使用的不同进程卡号,例如0-0-1使用在卡0上跑2个进程并在卡1上跑1个进程": "rmvpe卡號配置以-分隔輸入使用的不同進程卡號,例如0-0-1使用在卡0上跑2個進程並在卡1上跑1個進程",
"特征提取": "特徵提取",
"step3: 填写训练设置, 开始训练模型和索引": "步驟3: 填寫訓練設定, 開始訓練模型和索引",
"保存频率save_every_epoch": "保存頻率save_every_epoch",
"总训练轮数total_epoch": "總訓練輪數total_epoch",
"每张显卡的batch_size": "每张显卡的batch_size",
"是否仅保存最新的ckpt文件以节省硬盘空间": "是否僅保存最新的ckpt檔案以節省硬碟空間",
"否": "否",
"是否缓存所有训练集至显存. 10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速": "是否緩存所有訓練集至 VRAM。小於10分鐘的小數據可緩存以加速訓練大數據緩存會爆 VRAM 也加不了多少速度",
"是否在每次保存时间点将最终小模型保存至weights文件夹": "是否在每次保存時間點將最終小模型保存至weights檔夾",
"加载预训练底模G路径": "加載預訓練底模G路徑",
"加载预训练底模D路径": "加載預訓練底模D路徑",
"训练模型": "訓練模型",
@@ -59,6 +78,7 @@
"模型是否带音高指导": "模型是否帶音高指導",
"要置入的模型信息": "要置入的模型資訊",
"保存的模型名不带后缀": "儲存的模型名不帶副檔名",
"模型版本型号": "模型版本型號",
"融合": "融合",
"修改模型信息(仅支持weights文件夹下提取的小模型文件)": "修改模型資訊(僅支援weights資料夾下提取的小模型檔案)",
"模型路径": "模型路徑",
@@ -71,6 +91,11 @@
"保存名": "儲存名",
"模型是否带音高指导,1是0否": "模型是否帶音高指導1是0否",
"提取": "提取",
"Onnx导出": "Onnx导出",
"RVC模型路径": "RVC模型路径",
"Onnx输出路径": "Onnx输出路径",
"导出Onnx模型": "导出Onnx模型",
"常见问题解答": "常見問題解答",
"招募音高曲线前端编辑器": "招募音高曲線前端編輯器",
"加开发群联系我xxxxx": "加開發群聯繫我xxxxx",
"点击查看交流、问题反馈群号": "點擊查看交流、問題反饋群號",
@@ -95,5 +120,13 @@
"性能设置": "效能設定",
"开始音频转换": "開始音訊轉換",
"停止音频转换": "停止音訊轉換",
"推理时间(ms):": "推理時間(ms):"
}
"推理时间(ms):": "推理時間(ms):",
"请选择pth文件": "请选择pth文件",
"请选择index文件": "请选择index文件",
"hubert模型路径不可包含中文": "hubert模型路径不可包含中文",
"pth文件路径不可包含中文": "pth文件路径不可包含中文",
"index文件路径不可包含中文": "index文件路径不可包含中文",
"重载设备列表": "重載設備列表",
"音高算法": "音高演算法",
"harvest进程数": "harvest進程數"
}

View File

@@ -1,36 +1,52 @@
{
"本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>使用需遵守的协议-LICENSE.txt</b>.": "本軟體以MIT協議開源作者不對軟體具備任何控制力使用軟體者、傳播軟體導出的聲音者自負全責。<br>如不認可該條款,則不能使用或引用軟體包內任何程式碼和檔案。詳見根目錄<b>使用需遵守的協議-LICENSE.txt</b>。",
"很遗憾您这没有能用的显卡来支持您训练": "很遗憾您这没有能用的显卡来支持您训练",
"是": "是",
"step1:正在处理数据": "step1:正在处理数据",
"step2a:无需提取音高": "step2a:无需提取音高",
"step2b:正在提取特征": "step2b:正在提取特征",
"step3a:正在训练模型": "step3a:正在训练模型",
"训练结束, 您可查看控制台训练日志或实验文件夹下的train.log": "训练结束, 您可查看控制台训练日志或实验文件夹下的train.log",
"全流程结束!": "全流程结束!",
"本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>LICENSE</b>.": "本軟體以MIT協議開源作者不對軟體具備任何控制力使用軟體者、傳播軟體導出的聲音者自負全責。<br>如不認可該條款,則不能使用或引用軟體包內任何程式碼和檔案。詳見根目錄<b>使用需遵守的協議-LICENSE.txt</b>。",
"模型推理": "模型推理",
"推理音色": "推理音色",
"刷新音色列表": "重新整理音色列表",
"刷新音色列表和索引路径": "刷新音色列表和索引路徑",
"卸载音色省显存": "卸載音色節省 VRAM",
"请选择说话人id": "請選擇說話人ID",
"男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ": "男性轉女性推薦+12key女性轉男性推薦-12key如果音域爆炸導致音色失真也可以自己調整到合適音域。",
"变调(整数, 半音数量, 升八度12降八度-12)": "變調(整數、半音數量、升八度12降八度-12)",
"输入待处理音频文件路径(默认是正确格式示例)": "輸入待處理音頻檔案路徑(預設是正確格式示例)",
"选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比": "選擇音高提取演算法輸入歌聲可用 pm 提速harvest 低音好但巨慢無比",
"特征检索库文件路径": "特徵檢索庫檔案路徑",
"选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU,rmvpe效果最好且微吃GPU": "選擇音高提取演算法,輸入歌聲可用pm提速,harvest低音好但巨慢無比,crepe效果好但吃GPU,rmvpe效果最好且微吃GPU",
">=3则使用对harvest音高识别的结果使用中值滤波数值为滤波半径使用可以削弱哑音": ">=3則使用對harvest音高識別的結果使用中值濾波數值為濾波半徑使用可以削弱啞音",
"特征检索库文件路径,为空则使用下拉的选择结果": "特徵檢索庫檔路徑,為空則使用下拉的選擇結果",
"自动检测index路径,下拉式选择(dropdown)": "自動檢測index路徑,下拉式選擇(dropdown)",
"特征文件路径": "特徵檔案路徑",
"检索特征占比": "檢索特徵佔比",
"后处理重采样至最终采样率0为不进行重采样": "後處理重採樣至最終採樣率0為不進行重採樣",
"输入源音量包络替换输出音量包络融合比例越靠近1越使用输出包络": "輸入源音量包絡替換輸出音量包絡融合比例越靠近1越使用輸出包絡",
"保护清辅音和呼吸声防止电音撕裂等artifact拉满0.5不开启,调低加大保护力度但可能降低索引效果": "保護清輔音和呼吸聲防止電音撕裂等artifact拉滿0.5不開啟,調低加大保護力度但可能降低索引效果",
"F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调": "F0曲線檔案可選一行一個音高代替預設的F0及升降調",
"转换": "轉換",
"输出信息": "輸出訊息",
"输出音频(右下角三个点,点了可以下载)": "輸出音頻(右下角三個點,點了可以下載)",
"批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ": "批量轉換,輸入待轉換音頻資料夾,或上傳多個音頻檔案,在指定資料夾(默認opt)下輸出轉換的音頻。",
"指定输出文件夹": "指定輸出資料夾",
"检索特征占比": "檢索特徵佔比",
"输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)": "輸入待處理音頻資料夾路徑(去檔案管理器地址欄拷貝即可)",
"也可批量输入音频文件, 二选一, 优先读文件夹": "也可批量輸入音頻檔案,二選一優先讀資料夾",
"伴奏人声分离": "伴奏人聲分離",
"人声伴奏分离批量处理, 使用UVR5模型. <br>不带和声用HP2, 带和声且提取的人声不需要和声用HP5<br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)": "人聲伴奏分離批量處理使用UVR5模型。<br>不帶和聲用HP2帶和聲且提取的人聲不需要和聲用HP5<br>合格的資料夾路徑格式舉例E:\\codes\\py39\\vits_vc_gpu\\白鷺霜華測試樣例(去檔案管理員地址欄複製就行了)",
"也可批量拖拽音频文件, 二选一, 优先读文件夹,文件夹留空则读取拖拽文件": "也可批量拖拽音頻檔, 二選一, 優先讀檔夾,檔夾留空則讀取拖拽檔",
"导出文件格式": "導出檔格式",
"伴奏人声分离&去混响&去回声": "伴奏人聲分離&去混響&去回聲",
"人声伴奏分离批量处理, 使用UVR5模型。 <br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)。 <br>模型分为三类: <br>1、保留人声不带和声的音频选这个对主人声保留比HP5更好。内置HP2和HP3两个模型HP3可能轻微漏伴奏但对主人声保留比HP2稍微好一丁点 <br>2、仅保留主人声带和声的音频选这个对主人声可能有削弱。内置HP5一个模型 <br> 3、去混响、去延迟模型by FoxJoy<br>(1)MDX-Net(onnx_dereverb):对于双通道混响是最好的选择,不能去除单通道混响;<br>&emsp;(234)DeEcho:去除延迟效果。Aggressive比Normal去除得更彻底DeReverb额外去除混响可去除单声道混响但是对高频重的板式混响去不干净。<br>去混响/去延迟,附:<br>1、DeEcho-DeReverb模型的耗时是另外2个DeEcho模型的接近2倍<br>2、MDX-Net-Dereverb模型挺慢的<br>3、个人推荐的最干净的配置是先MDX-Net再DeEcho-Aggressive。": "使用UVR5模型進行人聲伴奏分離的批次處理。<br>有效資料夾路徑格式的例子D:\\path\\to\\input\\folder從檔案管理員地址欄複製。<br>模型分為三類:<br>1. 保留人聲選擇這個選項適用於沒有和聲的音訊。它比HP5更好地保留了人聲。它包括兩個內建模型HP2和HP3。HP3可能輕微漏出伴奏但比HP2更好地保留了人聲<br>2. 僅保留主人聲選擇這個選項適用於有和聲的音訊。它可能會削弱主人聲。它包括一個內建模型HP5。<br>3. 消除混響和延遲模型由FoxJoy提供<br>(1) MDX-Net對於立體聲混響的移除是最好的選擇但不能移除單聲道混響<br>&emsp;(234) DeEcho移除延遲效果。Aggressive模式比Normal模式移除得更徹底。DeReverb另外移除混響可以移除單聲道混響但對於高頻重的板式混響移除不乾淨。<br>消除混響/延遲注意事項:<br>1. DeEcho-DeReverb模型的處理時間是其他兩個DeEcho模型的近兩倍<br>2. MDX-Net-Dereverb模型相當慢<br>3. 個人推薦的最乾淨配置是先使用MDX-Net然後使用DeEcho-Aggressive。",
"输入待处理音频文件夹路径": "輸入待處理音頻資料夾路徑",
"模型": "模型",
"指定输出人声文件夹": "指定輸出人聲資料夾",
"指定输出乐器文件夹": "指定輸出樂器資料夾",
"指定输出人声文件夹": "指定输出主人声文件夹",
"指定输出非主人声文件夹": "指定输出非主人声文件夹",
"训练": "訓練",
"step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. ": "step1填寫實驗配置。實驗數據放在logs下每個實驗一個資料夾需手動輸入實驗名路徑內含實驗配置、日誌、訓練得到的模型檔案。",
"输入实验名": "輸入實驗名稱",
"目标采样率": "目標取樣率",
"模型是否带音高指导(唱歌一定要, 语音可以不要)": "模型是否帶音高指導(唱歌一定要,語音可以不要)",
"版本": "版本",
"提取音高和处理数据使用的CPU进程数": "提取音高和處理數據使用的CPU進程數",
"step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. ": "step2a自動遍歷訓練資料夾下所有可解碼成音頻的檔案並進行切片歸一化在實驗目錄下生成2個wav資料夾暫時只支援單人訓練。",
"输入训练文件夹路径": "輸入訓練檔案夾路徑",
"请指定说话人id": "請指定說話人id",
@@ -38,14 +54,17 @@
"step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)": "步驟2b: 使用CPU提取音高(如果模型帶音高), 使用GPU提取特徵(選擇卡號)",
"以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2": "以-分隔輸入使用的卡號, 例如 0-1-2 使用卡0和卡1和卡2",
"显卡信息": "顯示卡資訊",
"提取音高使用的CPU进程数": "提取音高使用的CPU進程數",
"选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢": "選擇音高提取算法:輸入歌聲可用pm提速,高品質語音但CPU差可用dio提速,harvest品質更好但較慢",
"rmvpe卡号配置以-分隔输入使用的不同进程卡号,例如0-0-1使用在卡0上跑2个进程并在卡1上跑1个进程": "rmvpe卡號配置以-分隔輸入使用的不同進程卡號,例如0-0-1使用在卡0上跑2個進程並在卡1上跑1個進程",
"特征提取": "特徵提取",
"step3: 填写训练设置, 开始训练模型和索引": "步驟3: 填寫訓練設定, 開始訓練模型和索引",
"保存频率save_every_epoch": "保存頻率save_every_epoch",
"总训练轮数total_epoch": "總訓練輪數total_epoch",
"每张显卡的batch_size": "每张显卡的batch_size",
"是否仅保存最新的ckpt文件以节省硬盘空间": "是否僅保存最新的ckpt檔案以節省硬碟空間",
"否": "否",
"是否缓存所有训练集至显存. 10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速": "是否緩存所有訓練集至 VRAM。小於10分鐘的小數據可緩存以加速訓練大數據緩存會爆 VRAM 也加不了多少速度",
"是否在每次保存时间点将最终小模型保存至weights文件夹": "是否在每次保存時間點將最終小模型保存至weights檔夾",
"加载预训练底模G路径": "加載預訓練底模G路徑",
"加载预训练底模D路径": "加載預訓練底模D路徑",
"训练模型": "訓練模型",
@@ -59,6 +78,7 @@
"模型是否带音高指导": "模型是否帶音高指導",
"要置入的模型信息": "要置入的模型資訊",
"保存的模型名不带后缀": "儲存的模型名不帶副檔名",
"模型版本型号": "模型版本型號",
"融合": "融合",
"修改模型信息(仅支持weights文件夹下提取的小模型文件)": "修改模型資訊(僅支援weights資料夾下提取的小模型檔案)",
"模型路径": "模型路徑",
@@ -71,6 +91,11 @@
"保存名": "儲存名",
"模型是否带音高指导,1是0否": "模型是否帶音高指導1是0否",
"提取": "提取",
"Onnx导出": "Onnx导出",
"RVC模型路径": "RVC模型路径",
"Onnx输出路径": "Onnx输出路径",
"导出Onnx模型": "导出Onnx模型",
"常见问题解答": "常見問題解答",
"招募音高曲线前端编辑器": "招募音高曲線前端編輯器",
"加开发群联系我xxxxx": "加開發群聯繫我xxxxx",
"点击查看交流、问题反馈群号": "點擊查看交流、問題反饋群號",
@@ -95,5 +120,13 @@
"性能设置": "效能設定",
"开始音频转换": "開始音訊轉換",
"停止音频转换": "停止音訊轉換",
"推理时间(ms):": "推理時間(ms):"
}
"推理时间(ms):": "推理時間(ms):",
"请选择pth文件": "请选择pth文件",
"请选择index文件": "请选择index文件",
"hubert模型路径不可包含中文": "hubert模型路径不可包含中文",
"pth文件路径不可包含中文": "pth文件路径不可包含中文",
"index文件路径不可包含中文": "index文件路径不可包含中文",
"重载设备列表": "重載設備列表",
"音高算法": "音高演算法",
"harvest进程数": "harvest進程數"
}

View File

@@ -1,36 +1,52 @@
{
"本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>使用需遵守的协议-LICENSE.txt</b>.": "本軟體以MIT協議開源作者不對軟體具備任何控制力使用軟體者、傳播軟體導出的聲音者自負全責。<br>如不認可該條款,則不能使用或引用軟體包內任何程式碼和檔案。詳見根目錄<b>使用需遵守的協議-LICENSE.txt</b>。",
"很遗憾您这没有能用的显卡来支持您训练": "很遗憾您这没有能用的显卡来支持您训练",
"是": "是",
"step1:正在处理数据": "step1:正在处理数据",
"step2a:无需提取音高": "step2a:无需提取音高",
"step2b:正在提取特征": "step2b:正在提取特征",
"step3a:正在训练模型": "step3a:正在训练模型",
"训练结束, 您可查看控制台训练日志或实验文件夹下的train.log": "训练结束, 您可查看控制台训练日志或实验文件夹下的train.log",
"全流程结束!": "全流程结束!",
"本软件以MIT协议开源, 作者不对软件具备任何控制力, 使用软件者、传播软件导出的声音者自负全责. <br>如不认可该条款, 则不能使用或引用软件包内任何代码和文件. 详见根目录<b>LICENSE</b>.": "本軟體以MIT協議開源作者不對軟體具備任何控制力使用軟體者、傳播軟體導出的聲音者自負全責。<br>如不認可該條款,則不能使用或引用軟體包內任何程式碼和檔案。詳見根目錄<b>使用需遵守的協議-LICENSE.txt</b>。",
"模型推理": "模型推理",
"推理音色": "推理音色",
"刷新音色列表": "重新整理音色列表",
"刷新音色列表和索引路径": "刷新音色列表和索引路徑",
"卸载音色省显存": "卸載音色節省 VRAM",
"请选择说话人id": "請選擇說話人ID",
"男转女推荐+12key, 女转男推荐-12key, 如果音域爆炸导致音色失真也可以自己调整到合适音域. ": "男性轉女性推薦+12key女性轉男性推薦-12key如果音域爆炸導致音色失真也可以自己調整到合適音域。",
"变调(整数, 半音数量, 升八度12降八度-12)": "變調(整數、半音數量、升八度12降八度-12)",
"输入待处理音频文件路径(默认是正确格式示例)": "輸入待處理音頻檔案路徑(預設是正確格式示例)",
"选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比": "選擇音高提取演算法輸入歌聲可用 pm 提速harvest 低音好但巨慢無比",
"特征检索库文件路径": "特徵檢索庫檔案路徑",
"选择音高提取算法,输入歌声可用pm提速,harvest低音好但巨慢无比,crepe效果好但吃GPU,rmvpe效果最好且微吃GPU": "選擇音高提取演算法,輸入歌聲可用pm提速,harvest低音好但巨慢無比,crepe效果好但吃GPU,rmvpe效果最好且微吃GPU",
">=3则使用对harvest音高识别的结果使用中值滤波数值为滤波半径使用可以削弱哑音": ">=3則使用對harvest音高識別的結果使用中值濾波數值為濾波半徑使用可以削弱啞音",
"特征检索库文件路径,为空则使用下拉的选择结果": "特徵檢索庫檔路徑,為空則使用下拉的選擇結果",
"自动检测index路径,下拉式选择(dropdown)": "自動檢測index路徑,下拉式選擇(dropdown)",
"特征文件路径": "特徵檔案路徑",
"检索特征占比": "檢索特徵佔比",
"后处理重采样至最终采样率0为不进行重采样": "後處理重採樣至最終採樣率0為不進行重採樣",
"输入源音量包络替换输出音量包络融合比例越靠近1越使用输出包络": "輸入源音量包絡替換輸出音量包絡融合比例越靠近1越使用輸出包絡",
"保护清辅音和呼吸声防止电音撕裂等artifact拉满0.5不开启,调低加大保护力度但可能降低索引效果": "保護清輔音和呼吸聲防止電音撕裂等artifact拉滿0.5不開啟,調低加大保護力度但可能降低索引效果",
"F0曲线文件, 可选, 一行一个音高, 代替默认F0及升降调": "F0曲線檔案可選一行一個音高代替預設的F0及升降調",
"转换": "轉換",
"输出信息": "輸出訊息",
"输出音频(右下角三个点,点了可以下载)": "輸出音頻(右下角三個點,點了可以下載)",
"批量转换, 输入待转换音频文件夹, 或上传多个音频文件, 在指定文件夹(默认opt)下输出转换的音频. ": "批量轉換,輸入待轉換音頻資料夾,或上傳多個音頻檔案,在指定資料夾(默認opt)下輸出轉換的音頻。",
"指定输出文件夹": "指定輸出資料夾",
"检索特征占比": "檢索特徵佔比",
"输入待处理音频文件夹路径(去文件管理器地址栏拷就行了)": "輸入待處理音頻資料夾路徑(去檔案管理器地址欄拷貝即可)",
"也可批量输入音频文件, 二选一, 优先读文件夹": "也可批量輸入音頻檔案,二選一優先讀資料夾",
"伴奏人声分离": "伴奏人聲分離",
"人声伴奏分离批量处理, 使用UVR5模型. <br>不带和声用HP2, 带和声且提取的人声不需要和声用HP5<br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)": "人聲伴奏分離批量處理使用UVR5模型。<br>不帶和聲用HP2帶和聲且提取的人聲不需要和聲用HP5<br>合格的資料夾路徑格式舉例E:\\codes\\py39\\vits_vc_gpu\\白鷺霜華測試樣例(去檔案管理員地址欄複製就行了)",
"也可批量拖拽音频文件, 二选一, 优先读文件夹,文件夹留空则读取拖拽文件": "也可批量拖拽音頻檔, 二選一, 優先讀檔夾,檔夾留空則讀取拖拽檔",
"导出文件格式": "導出檔格式",
"伴奏人声分离&去混响&去回声": "伴奏人聲分離&去混響&去回聲",
"人声伴奏分离批量处理, 使用UVR5模型。 <br>合格的文件夹路径格式举例: E:\\codes\\py39\\vits_vc_gpu\\白鹭霜华测试样例(去文件管理器地址栏拷就行了)。 <br>模型分为三类: <br>1、保留人声不带和声的音频选这个对主人声保留比HP5更好。内置HP2和HP3两个模型HP3可能轻微漏伴奏但对主人声保留比HP2稍微好一丁点 <br>2、仅保留主人声带和声的音频选这个对主人声可能有削弱。内置HP5一个模型 <br> 3、去混响、去延迟模型by FoxJoy<br>(1)MDX-Net(onnx_dereverb):对于双通道混响是最好的选择,不能去除单通道混响;<br>&emsp;(234)DeEcho:去除延迟效果。Aggressive比Normal去除得更彻底DeReverb额外去除混响可去除单声道混响但是对高频重的板式混响去不干净。<br>去混响/去延迟,附:<br>1、DeEcho-DeReverb模型的耗时是另外2个DeEcho模型的接近2倍<br>2、MDX-Net-Dereverb模型挺慢的<br>3、个人推荐的最干净的配置是先MDX-Net再DeEcho-Aggressive。": "使用UVR5模型進行人聲伴奏分離的批次處理。<br>有效資料夾路徑格式的例子D:\\path\\to\\input\\folder從檔案管理員地址欄複製。<br>模型分為三類:<br>1. 保留人聲選擇這個選項適用於沒有和聲的音訊。它比HP5更好地保留了人聲。它包括兩個內建模型HP2和HP3。HP3可能輕微漏出伴奏但比HP2更好地保留了人聲<br>2. 僅保留主人聲選擇這個選項適用於有和聲的音訊。它可能會削弱主人聲。它包括一個內建模型HP5。<br>3. 消除混響和延遲模型由FoxJoy提供<br>(1) MDX-Net對於立體聲混響的移除是最好的選擇但不能移除單聲道混響<br>&emsp;(234) DeEcho移除延遲效果。Aggressive模式比Normal模式移除得更徹底。DeReverb另外移除混響可以移除單聲道混響但對於高頻重的板式混響移除不乾淨。<br>消除混響/延遲注意事項:<br>1. DeEcho-DeReverb模型的處理時間是其他兩個DeEcho模型的近兩倍<br>2. MDX-Net-Dereverb模型相當慢<br>3. 個人推薦的最乾淨配置是先使用MDX-Net然後使用DeEcho-Aggressive。",
"输入待处理音频文件夹路径": "輸入待處理音頻資料夾路徑",
"模型": "模型",
"指定输出人声文件夹": "指定輸出人聲資料夾",
"指定输出乐器文件夹": "指定輸出樂器資料夾",
"指定输出人声文件夹": "指定输出主人声文件夹",
"指定输出非主人声文件夹": "指定输出非主人声文件夹",
"训练": "訓練",
"step1: 填写实验配置. 实验数据放在logs下, 每个实验一个文件夹, 需手工输入实验名路径, 内含实验配置, 日志, 训练得到的模型文件. ": "step1填寫實驗配置。實驗數據放在logs下每個實驗一個資料夾需手動輸入實驗名路徑內含實驗配置、日誌、訓練得到的模型檔案。",
"输入实验名": "輸入實驗名稱",
"目标采样率": "目標取樣率",
"模型是否带音高指导(唱歌一定要, 语音可以不要)": "模型是否帶音高指導(唱歌一定要,語音可以不要)",
"版本": "版本",
"提取音高和处理数据使用的CPU进程数": "提取音高和處理數據使用的CPU進程數",
"step2a: 自动遍历训练文件夹下所有可解码成音频的文件并进行切片归一化, 在实验目录下生成2个wav文件夹; 暂时只支持单人训练. ": "step2a自動遍歷訓練資料夾下所有可解碼成音頻的檔案並進行切片歸一化在實驗目錄下生成2個wav資料夾暫時只支援單人訓練。",
"输入训练文件夹路径": "輸入訓練檔案夾路徑",
"请指定说话人id": "請指定說話人id",
@@ -38,14 +54,17 @@
"step2b: 使用CPU提取音高(如果模型带音高), 使用GPU提取特征(选择卡号)": "步驟2b: 使用CPU提取音高(如果模型帶音高), 使用GPU提取特徵(選擇卡號)",
"以-分隔输入使用的卡号, 例如 0-1-2 使用卡0和卡1和卡2": "以-分隔輸入使用的卡號, 例如 0-1-2 使用卡0和卡1和卡2",
"显卡信息": "顯示卡資訊",
"提取音高使用的CPU进程数": "提取音高使用的CPU進程數",
"选择音高提取算法:输入歌声可用pm提速,高质量语音但CPU差可用dio提速,harvest质量更好但慢": "選擇音高提取算法:輸入歌聲可用pm提速,高品質語音但CPU差可用dio提速,harvest品質更好但較慢",
"rmvpe卡号配置以-分隔输入使用的不同进程卡号,例如0-0-1使用在卡0上跑2个进程并在卡1上跑1个进程": "rmvpe卡號配置以-分隔輸入使用的不同進程卡號,例如0-0-1使用在卡0上跑2個進程並在卡1上跑1個進程",
"特征提取": "特徵提取",
"step3: 填写训练设置, 开始训练模型和索引": "步驟3: 填寫訓練設定, 開始訓練模型和索引",
"保存频率save_every_epoch": "保存頻率save_every_epoch",
"总训练轮数total_epoch": "總訓練輪數total_epoch",
"每张显卡的batch_size": "每张显卡的batch_size",
"是否仅保存最新的ckpt文件以节省硬盘空间": "是否僅保存最新的ckpt檔案以節省硬碟空間",
"否": "否",
"是否缓存所有训练集至显存. 10min以下小数据可缓存以加速训练, 大数据缓存会炸显存也加不了多少速": "是否緩存所有訓練集至 VRAM。小於10分鐘的小數據可緩存以加速訓練大數據緩存會爆 VRAM 也加不了多少速度",
"是否在每次保存时间点将最终小模型保存至weights文件夹": "是否在每次保存時間點將最終小模型保存至weights檔夾",
"加载预训练底模G路径": "加載預訓練底模G路徑",
"加载预训练底模D路径": "加載預訓練底模D路徑",
"训练模型": "訓練模型",
@@ -59,6 +78,7 @@
"模型是否带音高指导": "模型是否帶音高指導",
"要置入的模型信息": "要置入的模型資訊",
"保存的模型名不带后缀": "儲存的模型名不帶副檔名",
"模型版本型号": "模型版本型號",
"融合": "融合",
"修改模型信息(仅支持weights文件夹下提取的小模型文件)": "修改模型資訊(僅支援weights資料夾下提取的小模型檔案)",
"模型路径": "模型路徑",
@@ -71,6 +91,11 @@
"保存名": "儲存名",
"模型是否带音高指导,1是0否": "模型是否帶音高指導1是0否",
"提取": "提取",
"Onnx导出": "Onnx导出",
"RVC模型路径": "RVC模型路径",
"Onnx输出路径": "Onnx输出路径",
"导出Onnx模型": "导出Onnx模型",
"常见问题解答": "常見問題解答",
"招募音高曲线前端编辑器": "招募音高曲線前端編輯器",
"加开发群联系我xxxxx": "加開發群聯繫我xxxxx",
"点击查看交流、问题反馈群号": "點擊查看交流、問題反饋群號",
@@ -95,5 +120,13 @@
"性能设置": "效能設定",
"开始音频转换": "開始音訊轉換",
"停止音频转换": "停止音訊轉換",
"推理时间(ms):": "推理時間(ms):"
}
"推理时间(ms):": "推理時間(ms):",
"请选择pth文件": "请选择pth文件",
"请选择index文件": "请选择index文件",
"hubert模型路径不可包含中文": "hubert模型路径不可包含中文",
"pth文件路径不可包含中文": "pth文件路径不可包含中文",
"index文件路径不可包含中文": "index文件路径不可包含中文",
"重载设备列表": "重載設備列表",
"音高算法": "音高演算法",
"harvest进程数": "harvest進程數"
}

View File

@@ -5,9 +5,9 @@ import torch
from torch import nn
from torch.nn import functional as F
from infer_pack import commons
from infer_pack import modules
from infer_pack.modules import LayerNorm
from lib.infer_pack import commons
from lib.infer_pack import modules
from lib.infer_pack.modules import LayerNorm
class Encoder(nn.Module):

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -9,9 +9,9 @@ from torch.nn import functional as F
from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
from torch.nn.utils import weight_norm, remove_weight_norm
from infer_pack import commons
from infer_pack.commons import init_weights, get_padding
from infer_pack.transforms import piecewise_rational_quadratic_transform
from lib.infer_pack import commons
from lib.infer_pack.commons import init_weights, get_padding
from lib.infer_pack.transforms import piecewise_rational_quadratic_transform
LRELU_SLOPE = 0.1

View File

@@ -0,0 +1,90 @@
from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
import pyworld
import numpy as np
class DioF0Predictor(F0Predictor):
def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
self.hop_length = hop_length
self.f0_min = f0_min
self.f0_max = f0_max
self.sampling_rate = sampling_rate
def interpolate_f0(self, f0):
"""
对F0进行插值处理
"""
data = np.reshape(f0, (f0.size, 1))
vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
vuv_vector[data > 0.0] = 1.0
vuv_vector[data <= 0.0] = 0.0
ip_data = data
frame_number = data.size
last_value = 0.0
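# walk the frames: fill each unvoiced run from its voiced neighbours (linear interpolation when both sides are voiced, otherwise the nearest voiced value)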
for i in range(frame_number):
if data[i] <= 0.0:
j = i + 1
for j in range(i + 1, frame_number):
if data[j] > 0.0:
break
if j < frame_number - 1:
if last_value > 0.0:
step = (data[j] - data[i - 1]) / float(j - i)
for k in range(i, j):
ip_data[k] = data[i - 1] + step * (k - i + 1)
else:
for k in range(i, j):
ip_data[k] = data[j]
else:
for k in range(i, frame_number):
ip_data[k] = last_value
else:
ip_data[i] = data[i]  # ip_data aliases data above, so this assignment may be an unnecessary copy
last_value = data[i]
return ip_data[:, 0], vuv_vector[:, 0]
def resize_f0(self, x, target_len):
source = np.array(x)
source[source < 0.001] = np.nan
target = np.interp(
np.arange(0, len(source) * target_len, len(source)) / target_len,
np.arange(0, len(source)),
source,
)
res = np.nan_to_num(target)
return res
def compute_f0(self, wav, p_len=None):
if p_len is None:
p_len = wav.shape[0] // self.hop_length
f0, t = pyworld.dio(
wav.astype(np.double),
fs=self.sampling_rate,
f0_floor=self.f0_min,
f0_ceil=self.f0_max,
frame_period=1000 * self.hop_length / self.sampling_rate,
)
f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
for index, pitch in enumerate(f0):
f0[index] = round(pitch, 1)
return self.interpolate_f0(self.resize_f0(f0, p_len))[0]
def compute_f0_uv(self, wav, p_len=None):
if p_len is None:
p_len = wav.shape[0] // self.hop_length
f0, t = pyworld.dio(
wav.astype(np.double),
fs=self.sampling_rate,
f0_floor=self.f0_min,
f0_ceil=self.f0_max,
frame_period=1000 * self.hop_length / self.sampling_rate,
)
f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
for index, pitch in enumerate(f0):
f0[index] = round(pitch, 1)
return self.interpolate_f0(self.resize_f0(f0, p_len))
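A minimal usage sketch for the DIO predictor above; the 16 kHz test tone and the hop size of 160 are illustrative assumptions, not values fixed by this file:
import numpy as np
sr = 16000
t = np.arange(sr) / sr
wav = np.sin(2 * np.pi * 220 * t).astype(np.float32)  # hypothetical 1 s, 220 Hz tone
predictor = DioF0Predictor(hop_length=160, sampling_rate=sr)
f0, uv = predictor.compute_f0_uv(wav)  # f0 in Hz, uv flags voiced frames
print(f0.shape, uv.shape)  # both (len(wav) // hop_length,)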

View File

@@ -0,0 +1,16 @@
class F0Predictor(object):
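# Abstract interface: the concrete PM / Harvest / Dio predictors implement these two methods.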
def compute_f0(self, wav, p_len):
"""
input: wav:[signal_length]
p_len:int
output: f0:[signal_length//hop_length]
"""
pass
def compute_f0_uv(self, wav, p_len):
"""
input: wav:[signal_length]
p_len:int
output: f0:[signal_length//hop_length],uv:[signal_length//hop_length]
"""
pass

View File

@@ -0,0 +1,86 @@
from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
import pyworld
import numpy as np
class HarvestF0Predictor(F0Predictor):
def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
self.hop_length = hop_length
self.f0_min = f0_min
self.f0_max = f0_max
self.sampling_rate = sampling_rate
def interpolate_f0(self, f0):
"""
对F0进行插值处理
"""
data = np.reshape(f0, (f0.size, 1))
vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
vuv_vector[data > 0.0] = 1.0
vuv_vector[data <= 0.0] = 0.0
ip_data = data
frame_number = data.size
last_value = 0.0
for i in range(frame_number):
if data[i] <= 0.0:
j = i + 1
for j in range(i + 1, frame_number):
if data[j] > 0.0:
break
if j < frame_number - 1:
if last_value > 0.0:
step = (data[j] - data[i - 1]) / float(j - i)
for k in range(i, j):
ip_data[k] = data[i - 1] + step * (k - i + 1)
else:
for k in range(i, j):
ip_data[k] = data[j]
else:
for k in range(i, frame_number):
ip_data[k] = last_value
else:
ip_data[i] = data[i]  # ip_data aliases data above, so this assignment may be an unnecessary copy
last_value = data[i]
return ip_data[:, 0], vuv_vector[:, 0]
def resize_f0(self, x, target_len):
source = np.array(x)
source[source < 0.001] = np.nan
target = np.interp(
np.arange(0, len(source) * target_len, len(source)) / target_len,
np.arange(0, len(source)),
source,
)
res = np.nan_to_num(target)
return res
def compute_f0(self, wav, p_len=None):
if p_len is None:
p_len = wav.shape[0] // self.hop_length
f0, t = pyworld.harvest(
wav.astype(np.double),
fs=self.sampling_rate,
f0_ceil=self.f0_max,
f0_floor=self.f0_min,
frame_period=1000 * self.hop_length / self.sampling_rate,
)
f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
return self.interpolate_f0(self.resize_f0(f0, p_len))[0]
def compute_f0_uv(self, wav, p_len=None):
if p_len is None:
p_len = wav.shape[0] // self.hop_length
f0, t = pyworld.harvest(
wav.astype(np.double),
fs=self.sampling_rate,
f0_floor=self.f0_min,
f0_ceil=self.f0_max,
frame_period=1000 * self.hop_length / self.sampling_rate,
)
f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
return self.interpolate_f0(self.resize_f0(f0, p_len))

View File

@@ -0,0 +1,97 @@
from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
import parselmouth
import numpy as np
class PMF0Predictor(F0Predictor):
def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
self.hop_length = hop_length
self.f0_min = f0_min
self.f0_max = f0_max
self.sampling_rate = sampling_rate
def interpolate_f0(self, f0):
"""
对F0进行插值处理
"""
data = np.reshape(f0, (f0.size, 1))
vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
vuv_vector[data > 0.0] = 1.0
vuv_vector[data <= 0.0] = 0.0
ip_data = data
frame_number = data.size
last_value = 0.0
for i in range(frame_number):
if data[i] <= 0.0:
j = i + 1
for j in range(i + 1, frame_number):
if data[j] > 0.0:
break
if j < frame_number - 1:
if last_value > 0.0:
step = (data[j] - data[i - 1]) / float(j - i)
for k in range(i, j):
ip_data[k] = data[i - 1] + step * (k - i + 1)
else:
for k in range(i, j):
ip_data[k] = data[j]
else:
for k in range(i, frame_number):
ip_data[k] = last_value
else:
ip_data[i] = data[i]  # ip_data aliases data above, so this assignment may be an unnecessary copy
last_value = data[i]
return ip_data[:, 0], vuv_vector[:, 0]
def compute_f0(self, wav, p_len=None):
x = wav
if p_len is None:
p_len = x.shape[0] // self.hop_length
else:
assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
time_step = self.hop_length / self.sampling_rate * 1000
f0 = (
parselmouth.Sound(x, self.sampling_rate)
.to_pitch_ac(
time_step=time_step / 1000,
voicing_threshold=0.6,
pitch_floor=self.f0_min,
pitch_ceiling=self.f0_max,
)
.selected_array["frequency"]
)
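# parselmouth emits one frame per time_step; pad both ends below so len(f0) matches p_len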
pad_size = (p_len - len(f0) + 1) // 2
if pad_size > 0 or p_len - len(f0) - pad_size > 0:
f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
f0, uv = self.interpolate_f0(f0)
return f0
def compute_f0_uv(self, wav, p_len=None):
x = wav
if p_len is None:
p_len = x.shape[0] // self.hop_length
else:
assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
time_step = self.hop_length / self.sampling_rate * 1000
f0 = (
parselmouth.Sound(x, self.sampling_rate)
.to_pitch_ac(
time_step=time_step / 1000,
voicing_threshold=0.6,
pitch_floor=self.f0_min,
pitch_ceiling=self.f0_max,
)
.selected_array["frequency"]
)
pad_size = (p_len - len(f0) + 1) // 2
if pad_size > 0 or p_len - len(f0) - pad_size > 0:
f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
f0, uv = self.interpolate_f0(f0)
return f0, uv

View File

@@ -0,0 +1,145 @@
import onnxruntime
import librosa
import numpy as np
import soundfile
class ContentVec:
def __init__(self, vec_path="pretrained/vec-768-layer-12.onnx", device=None):
print("load model(s) from {}".format(vec_path))
if device == "cpu" or device is None:
providers = ["CPUExecutionProvider"]
elif device == "cuda":
providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
elif device == "dml":
providers = ["DmlExecutionProvider"]
else:
raise RuntimeError("Unsportted Device")
self.model = onnxruntime.InferenceSession(vec_path, providers=providers)
def __call__(self, wav):
return self.forward(wav)
def forward(self, wav):
feats = wav
if feats.ndim == 2: # double channels
feats = feats.mean(-1)
assert feats.ndim == 1, feats.ndim
feats = np.expand_dims(np.expand_dims(feats, 0), 0)
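# add batch and channel axes: the ContentVec ONNX graph consumes waveforms shaped (1, 1, T)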
onnx_input = {self.model.get_inputs()[0].name: feats}
logits = self.model.run(None, onnx_input)[0]
return logits.transpose(0, 2, 1)
def get_f0_predictor(f0_predictor, hop_length, sampling_rate, **kwargs):
if f0_predictor == "pm":
from lib.infer_pack.modules.F0Predictor.PMF0Predictor import PMF0Predictor
f0_predictor_object = PMF0Predictor(
hop_length=hop_length, sampling_rate=sampling_rate
)
elif f0_predictor == "harvest":
from lib.infer_pack.modules.F0Predictor.HarvestF0Predictor import (
HarvestF0Predictor,
)
f0_predictor_object = HarvestF0Predictor(
hop_length=hop_length, sampling_rate=sampling_rate
)
elif f0_predictor == "dio":
from lib.infer_pack.modules.F0Predictor.DioF0Predictor import DioF0Predictor
f0_predictor_object = DioF0Predictor(
hop_length=hop_length, sampling_rate=sampling_rate
)
else:
raise Exception("Unknown f0 predictor")
return f0_predictor_object
class OnnxRVC:
def __init__(
self,
model_path,
sr=40000,
hop_size=512,
vec_path="vec-768-layer-12",
device="cpu",
):
vec_path = f"pretrained/{vec_path}.onnx"
self.vec_model = ContentVec(vec_path, device)
if device == "cpu" or device is None:
providers = ["CPUExecutionProvider"]
elif device == "cuda":
providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
elif device == "dml":
providers = ["DmlExecutionProvider"]
else:
raise RuntimeError("Unsportted Device")
self.model = onnxruntime.InferenceSession(model_path, providers=providers)
self.sampling_rate = sr
self.hop_size = hop_size
def forward(self, hubert, hubert_length, pitch, pitchf, ds, rnd):
onnx_input = {
self.model.get_inputs()[0].name: hubert,
self.model.get_inputs()[1].name: hubert_length,
self.model.get_inputs()[2].name: pitch,
self.model.get_inputs()[3].name: pitchf,
self.model.get_inputs()[4].name: ds,
self.model.get_inputs()[5].name: rnd,
}
return (self.model.run(None, onnx_input)[0] * 32767).astype(np.int16)
def inference(
self,
raw_path,
sid,
f0_method="dio",
f0_up_key=0,
pad_time=0.5,
cr_threshold=0.02,
):
f0_min = 50
f0_max = 1100
f0_mel_min = 1127 * np.log(1 + f0_min / 700)
f0_mel_max = 1127 * np.log(1 + f0_max / 700)
f0_predictor = get_f0_predictor(
f0_method,
hop_length=self.hop_size,
sampling_rate=self.sampling_rate,
threshold=cr_threshold,
)
wav, sr = librosa.load(raw_path, sr=self.sampling_rate)
org_length = len(wav)
if org_length / sr > 50.0:
raise RuntimeError("Reached Max Length")
wav16k = librosa.resample(wav, orig_sr=self.sampling_rate, target_sr=16000)
hubert = self.vec_model(wav16k)
hubert = np.repeat(hubert, 2, axis=2).transpose(0, 2, 1).astype(np.float32)
hubert_length = hubert.shape[1]
pitchf = f0_predictor.compute_f0(wav, hubert_length)
pitchf = pitchf * 2 ** (f0_up_key / 12)
pitch = pitchf.copy()
f0_mel = 1127 * np.log(1 + pitch / 700)
f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
f0_mel_max - f0_mel_min
) + 1
f0_mel[f0_mel <= 1] = 1
f0_mel[f0_mel > 255] = 255
pitch = np.rint(f0_mel).astype(np.int64)
pitchf = pitchf.reshape(1, len(pitchf)).astype(np.float32)
pitch = pitch.reshape(1, len(pitch))
ds = np.array([sid]).astype(np.int64)
rnd = np.random.randn(1, 192, hubert_length).astype(np.float32)
hubert_length = np.array([hubert_length]).astype(np.int64)
out_wav = self.forward(hubert, hubert_length, pitch, pitchf, ds, rnd).squeeze()
out_wav = np.pad(out_wav, (0, 2 * self.hop_size), "constant")
return out_wav[0:org_length]
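A hedged usage sketch for OnnxRVC; the model path, input file and speaker id are placeholders, and it assumes a matching pretrained/vec-768-layer-12.onnx ContentVec export exists:
import soundfile as sf
model = OnnxRVC("weights/model.onnx", sr=40000, hop_size=512, device="cpu")  # hypothetical export
out_wav = model.inference("input.wav", sid=0, f0_method="dio", f0_up_key=0)
sf.write("output.wav", out_wav, 40000)  # int16 samples at the model sample rate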

692
lib/rmvpe.py Normal file
View File

@@ -0,0 +1,692 @@
import pdb
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.signal import get_window
from librosa.util import pad_center, tiny, normalize
###stft codes from https://github.com/pseeth/torch-stft/blob/master/torch_stft/util.py
def window_sumsquare(
window,
n_frames,
hop_length=200,
win_length=800,
n_fft=800,
dtype=np.float32,
norm=None,
):
"""
# from librosa 0.6
Compute the sum-square envelope of a window function at a given hop length.
This is used to estimate modulation effects induced by windowing
observations in short-time fourier transforms.
Parameters
----------
window : string, tuple, number, callable, or list-like
Window specification, as in `get_window`
n_frames : int > 0
The number of analysis frames
hop_length : int > 0
The number of samples to advance between frames
win_length : [optional]
The length of the window function. By default, this matches `n_fft`.
n_fft : int > 0
The length of each analysis frame.
dtype : np.dtype
The data type of the output
Returns
-------
wss : np.ndarray, shape=`(n_fft + hop_length * (n_frames - 1))`
The sum-squared envelope of the window function
"""
if win_length is None:
win_length = n_fft
n = n_fft + hop_length * (n_frames - 1)
x = np.zeros(n, dtype=dtype)
# Compute the squared window at the desired length
win_sq = get_window(window, win_length, fftbins=True)
win_sq = normalize(win_sq, norm=norm) ** 2
win_sq = pad_center(win_sq, size=n_fft)
# Fill the envelope
for i in range(n_frames):
sample = i * hop_length
x[sample : min(n, sample + n_fft)] += win_sq[: max(0, min(n_fft, n - sample))]
return x
class STFT(torch.nn.Module):
def __init__(
self, filter_length=1024, hop_length=512, win_length=None, window="hann"
):
"""
This module implements an STFT using 1D convolution and 1D transpose convolutions.
This is a bit tricky so there are some cases that probably won't work as working
out the same sizes before and after in all overlap add setups is tough. Right now,
this code should work with hop lengths that are half the filter length (50% overlap
between frames).
Keyword Arguments:
filter_length {int} -- Length of filters used (default: {1024})
hop_length {int} -- Hop length of STFT (restrict to 50% overlap between frames) (default: {512})
win_length {[type]} -- Length of the window function applied to each frame (if not specified, it
equals the filter length). (default: {None})
window {str} -- Type of window to use (options are bartlett, hann, hamming, blackman, blackmanharris)
(default: {'hann'})
"""
super(STFT, self).__init__()
self.filter_length = filter_length
self.hop_length = hop_length
self.win_length = win_length if win_length else filter_length
self.window = window
self.forward_transform = None
self.pad_amount = int(self.filter_length / 2)
scale = self.filter_length / self.hop_length
fourier_basis = np.fft.fft(np.eye(self.filter_length))
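# the DFT of an identity matrix yields the Fourier basis; its rows act as conv1d analysis filters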
cutoff = int((self.filter_length / 2 + 1))
fourier_basis = np.vstack(
[np.real(fourier_basis[:cutoff, :]), np.imag(fourier_basis[:cutoff, :])]
)
forward_basis = torch.FloatTensor(fourier_basis[:, None, :])
inverse_basis = torch.FloatTensor(
np.linalg.pinv(scale * fourier_basis).T[:, None, :]
)
assert filter_length >= self.win_length
# get window and zero center pad it to filter_length
fft_window = get_window(window, self.win_length, fftbins=True)
fft_window = pad_center(fft_window, size=filter_length)
fft_window = torch.from_numpy(fft_window).float()
# window the bases
forward_basis *= fft_window
inverse_basis *= fft_window
self.register_buffer("forward_basis", forward_basis.float())
self.register_buffer("inverse_basis", inverse_basis.float())
def transform(self, input_data):
"""Take input data (audio) to STFT domain.
Arguments:
input_data {tensor} -- Tensor of floats, with shape (num_batch, num_samples)
Returns:
magnitude {tensor} -- Magnitude of STFT with shape (num_batch,
num_frequencies, num_frames)
phase {tensor} -- Phase of STFT with shape (num_batch,
num_frequencies, num_frames)
"""
num_batches = input_data.shape[0]
num_samples = input_data.shape[-1]
self.num_samples = num_samples
# similar to librosa, reflect-pad the input
input_data = input_data.view(num_batches, 1, num_samples)
# print(1234,input_data.shape)
input_data = F.pad(
input_data.unsqueeze(1),
(self.pad_amount, self.pad_amount, 0, 0, 0, 0),
mode="reflect",
).squeeze(1)
# print(2333,input_data.shape,self.forward_basis.shape,self.hop_length)
# pdb.set_trace()
forward_transform = F.conv1d(
input_data, self.forward_basis, stride=self.hop_length, padding=0
)
cutoff = int((self.filter_length / 2) + 1)
real_part = forward_transform[:, :cutoff, :]
imag_part = forward_transform[:, cutoff:, :]
magnitude = torch.sqrt(real_part**2 + imag_part**2)
# phase = torch.atan2(imag_part.data, real_part.data)
return magnitude # , phase
def inverse(self, magnitude, phase):
"""Call the inverse STFT (iSTFT), given magnitude and phase tensors produced
by the ```transform``` function.
Arguments:
magnitude {tensor} -- Magnitude of STFT with shape (num_batch,
num_frequencies, num_frames)
phase {tensor} -- Phase of STFT with shape (num_batch,
num_frequencies, num_frames)
Returns:
inverse_transform {tensor} -- Reconstructed audio given magnitude and phase. Of
shape (num_batch, num_samples)
"""
recombine_magnitude_phase = torch.cat(
[magnitude * torch.cos(phase), magnitude * torch.sin(phase)], dim=1
)
inverse_transform = F.conv_transpose1d(
recombine_magnitude_phase,
self.inverse_basis,
stride=self.hop_length,
padding=0,
)
if self.window is not None:
window_sum = window_sumsquare(
self.window,
magnitude.size(-1),
hop_length=self.hop_length,
win_length=self.win_length,
n_fft=self.filter_length,
dtype=np.float32,
)
# remove modulation effects
approx_nonzero_indices = torch.from_numpy(
np.where(window_sum > tiny(window_sum))[0]
)
window_sum = torch.from_numpy(window_sum).to(inverse_transform.device)
inverse_transform[:, :, approx_nonzero_indices] /= window_sum[
approx_nonzero_indices
]
# scale by hop ratio
inverse_transform *= float(self.filter_length) / self.hop_length
inverse_transform = inverse_transform[..., self.pad_amount :]
inverse_transform = inverse_transform[..., : self.num_samples]
inverse_transform = inverse_transform.squeeze(1)
return inverse_transform
def forward(self, input_data):
"""Take input data (audio) to STFT domain and then back to audio.
Arguments:
input_data {tensor} -- Tensor of floats, with shape (num_batch, num_samples)
Returns:
reconstruction {tensor} -- Reconstructed audio given magnitude and phase. Of
shape (num_batch, num_samples)
"""
self.magnitude, self.phase = self.transform(input_data)
reconstruction = self.inverse(self.magnitude, self.phase)
return reconstruction
from time import time as ttime
class BiGRU(nn.Module):
def __init__(self, input_features, hidden_features, num_layers):
super(BiGRU, self).__init__()
self.gru = nn.GRU(
input_features,
hidden_features,
num_layers=num_layers,
batch_first=True,
bidirectional=True,
)
def forward(self, x):
return self.gru(x)[0]
class ConvBlockRes(nn.Module):
def __init__(self, in_channels, out_channels, momentum=0.01):
super(ConvBlockRes, self).__init__()
self.conv = nn.Sequential(
nn.Conv2d(
in_channels=in_channels,
out_channels=out_channels,
kernel_size=(3, 3),
stride=(1, 1),
padding=(1, 1),
bias=False,
),
nn.BatchNorm2d(out_channels, momentum=momentum),
nn.ReLU(),
nn.Conv2d(
in_channels=out_channels,
out_channels=out_channels,
kernel_size=(3, 3),
stride=(1, 1),
padding=(1, 1),
bias=False,
),
nn.BatchNorm2d(out_channels, momentum=momentum),
nn.ReLU(),
)
if in_channels != out_channels:
self.shortcut = nn.Conv2d(in_channels, out_channels, (1, 1))
self.is_shortcut = True
else:
self.is_shortcut = False
def forward(self, x):
if self.is_shortcut:
return self.conv(x) + self.shortcut(x)
else:
return self.conv(x) + x
class Encoder(nn.Module):
def __init__(
self,
in_channels,
in_size,
n_encoders,
kernel_size,
n_blocks,
out_channels=16,
momentum=0.01,
):
super(Encoder, self).__init__()
self.n_encoders = n_encoders
self.bn = nn.BatchNorm2d(in_channels, momentum=momentum)
self.layers = nn.ModuleList()
self.latent_channels = []
for i in range(self.n_encoders):
self.layers.append(
ResEncoderBlock(
in_channels, out_channels, kernel_size, n_blocks, momentum=momentum
)
)
self.latent_channels.append([out_channels, in_size])
in_channels = out_channels
out_channels *= 2
in_size //= 2
self.out_size = in_size
self.out_channel = out_channels
def forward(self, x):
concat_tensors = []
x = self.bn(x)
for i in range(self.n_encoders):
_, x = self.layers[i](x)
concat_tensors.append(_)
return x, concat_tensors
class ResEncoderBlock(nn.Module):
def __init__(
self, in_channels, out_channels, kernel_size, n_blocks=1, momentum=0.01
):
super(ResEncoderBlock, self).__init__()
self.n_blocks = n_blocks
self.conv = nn.ModuleList()
self.conv.append(ConvBlockRes(in_channels, out_channels, momentum))
for i in range(n_blocks - 1):
self.conv.append(ConvBlockRes(out_channels, out_channels, momentum))
self.kernel_size = kernel_size
if self.kernel_size is not None:
self.pool = nn.AvgPool2d(kernel_size=kernel_size)
def forward(self, x):
for i in range(self.n_blocks):
x = self.conv[i](x)
if self.kernel_size is not None:
return x, self.pool(x)
else:
return x
class Intermediate(nn.Module): #
def __init__(self, in_channels, out_channels, n_inters, n_blocks, momentum=0.01):
super(Intermediate, self).__init__()
self.n_inters = n_inters
self.layers = nn.ModuleList()
self.layers.append(
ResEncoderBlock(in_channels, out_channels, None, n_blocks, momentum)
)
for i in range(self.n_inters - 1):
self.layers.append(
ResEncoderBlock(out_channels, out_channels, None, n_blocks, momentum)
)
def forward(self, x):
for i in range(self.n_inters):
x = self.layers[i](x)
return x
class ResDecoderBlock(nn.Module):
def __init__(self, in_channels, out_channels, stride, n_blocks=1, momentum=0.01):
super(ResDecoderBlock, self).__init__()
out_padding = (0, 1) if stride == (1, 2) else (1, 1)
self.n_blocks = n_blocks
self.conv1 = nn.Sequential(
nn.ConvTranspose2d(
in_channels=in_channels,
out_channels=out_channels,
kernel_size=(3, 3),
stride=stride,
padding=(1, 1),
output_padding=out_padding,
bias=False,
),
nn.BatchNorm2d(out_channels, momentum=momentum),
nn.ReLU(),
)
self.conv2 = nn.ModuleList()
self.conv2.append(ConvBlockRes(out_channels * 2, out_channels, momentum))
for i in range(n_blocks - 1):
self.conv2.append(ConvBlockRes(out_channels, out_channels, momentum))
def forward(self, x, concat_tensor):
x = self.conv1(x)
x = torch.cat((x, concat_tensor), dim=1)
for i in range(self.n_blocks):
x = self.conv2[i](x)
return x
class Decoder(nn.Module):
def __init__(self, in_channels, n_decoders, stride, n_blocks, momentum=0.01):
super(Decoder, self).__init__()
self.layers = nn.ModuleList()
self.n_decoders = n_decoders
for i in range(self.n_decoders):
out_channels = in_channels // 2
self.layers.append(
ResDecoderBlock(in_channels, out_channels, stride, n_blocks, momentum)
)
in_channels = out_channels
def forward(self, x, concat_tensors):
for i in range(self.n_decoders):
x = self.layers[i](x, concat_tensors[-1 - i])
return x
class DeepUnet(nn.Module):
def __init__(
self,
kernel_size,
n_blocks,
en_de_layers=5,
inter_layers=4,
in_channels=1,
en_out_channels=16,
):
super(DeepUnet, self).__init__()
self.encoder = Encoder(
in_channels, 128, en_de_layers, kernel_size, n_blocks, en_out_channels
)
self.intermediate = Intermediate(
self.encoder.out_channel // 2,
self.encoder.out_channel,
inter_layers,
n_blocks,
)
self.decoder = Decoder(
self.encoder.out_channel, en_de_layers, kernel_size, n_blocks
)
def forward(self, x):
x, concat_tensors = self.encoder(x)
x = self.intermediate(x)
x = self.decoder(x, concat_tensors)
return x
class E2E(nn.Module):
def __init__(
self,
n_blocks,
n_gru,
kernel_size,
en_de_layers=5,
inter_layers=4,
in_channels=1,
en_out_channels=16,
):
super(E2E, self).__init__()
self.unet = DeepUnet(
kernel_size,
n_blocks,
en_de_layers,
inter_layers,
in_channels,
en_out_channels,
)
self.cnn = nn.Conv2d(en_out_channels, 3, (3, 3), padding=(1, 1))
if n_gru:
self.fc = nn.Sequential(
BiGRU(3 * 128, 256, n_gru),
nn.Linear(512, 360),
nn.Dropout(0.25),
nn.Sigmoid(),
)
else:
self.fc = nn.Sequential(
nn.Linear(3 * 128, 360), nn.Dropout(0.25), nn.Sigmoid()  # 128 mel bins, 360 pitch bins, matching the BiGRU branch
)
def forward(self, mel):
# print(mel.shape)
mel = mel.transpose(-1, -2).unsqueeze(1)
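# (B, n_mels, T) -> (B, 1, T, n_mels): treat the mel spectrogram as a one-channel image for the U-Net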
x = self.cnn(self.unet(mel)).transpose(1, 2).flatten(-2)
x = self.fc(x)
# print(x.shape)
return x
from librosa.filters import mel
class MelSpectrogram(torch.nn.Module):
def __init__(
self,
is_half,
n_mel_channels,
sampling_rate,
win_length,
hop_length,
n_fft=None,
mel_fmin=0,
mel_fmax=None,
clamp=1e-5,
):
super().__init__()
n_fft = win_length if n_fft is None else n_fft
self.hann_window = {}
mel_basis = mel(
sr=sampling_rate,
n_fft=n_fft,
n_mels=n_mel_channels,
fmin=mel_fmin,
fmax=mel_fmax,
htk=True,
)
mel_basis = torch.from_numpy(mel_basis).float()
self.register_buffer("mel_basis", mel_basis)
self.n_fft = win_length if n_fft is None else n_fft
self.hop_length = hop_length
self.win_length = win_length
self.sampling_rate = sampling_rate
self.n_mel_channels = n_mel_channels
self.clamp = clamp
self.is_half = is_half
def forward(self, audio, keyshift=0, speed=1, center=True):
factor = 2 ** (keyshift / 12)
n_fft_new = int(np.round(self.n_fft * factor))
win_length_new = int(np.round(self.win_length * factor))
hop_length_new = int(np.round(self.hop_length * speed))
keyshift_key = str(keyshift) + "_" + str(audio.device)
if keyshift_key not in self.hann_window:
self.hann_window[keyshift_key] = torch.hann_window(win_length_new).to(
# "cpu"if(audio.device.type=="privateuseone") else audio.device
audio.device
)
# fft = torch.stft(#doesn't support pytorch_dml
# # audio.cpu() if(audio.device.type=="privateuseone")else audio,
# audio,
# n_fft=n_fft_new,
# hop_length=hop_length_new,
# win_length=win_length_new,
# window=self.hann_window[keyshift_key],
# center=center,
# return_complex=True,
# )
# magnitude = torch.sqrt(fft.real.pow(2) + fft.imag.pow(2))
# print(1111111111)
# print(222222222222222,audio.device,self.is_half)
if hasattr(self, "stft") == False:
# print(n_fft_new,hop_length_new,win_length_new,audio.shape)
self.stft = STFT(
filter_length=n_fft_new,
hop_length=hop_length_new,
win_length=win_length_new,
window="hann",
).to(audio.device)
magnitude = self.stft.transform(audio) # phase
# if (audio.device.type == "privateuseone"):
# magnitude=magnitude.to(audio.device)
if keyshift != 0:
size = self.n_fft // 2 + 1
resize = magnitude.size(1)
if resize < size:
magnitude = F.pad(magnitude, (0, 0, 0, size - resize))
magnitude = magnitude[:, :size, :] * self.win_length / win_length_new
mel_output = torch.matmul(self.mel_basis, magnitude)
if self.is_half:
mel_output = mel_output.half()
log_mel_spec = torch.log(torch.clamp(mel_output, min=self.clamp))
# print(log_mel_spec.device.type)
return log_mel_spec
class RMVPE:
def __init__(self, model_path, is_half, device=None):
self.resample_kernel = {}
self.is_half = is_half
if device is None:
device = "cuda" if torch.cuda.is_available() else "cpu"
self.device = device
self.mel_extractor = MelSpectrogram(
is_half, 128, 16000, 1024, 160, None, 30, 8000
).to(device)
if "privateuseone" in str(device):
import onnxruntime as ort
ort_session = ort.InferenceSession(
"rmvpe.onnx", providers=["DmlExecutionProvider"]
)
self.model = ort_session
else:
model = E2E(4, 1, (2, 2))
ckpt = torch.load(model_path, map_location="cpu")
model.load_state_dict(ckpt)
model.eval()
if is_half:
model = model.half()
self.model = model
self.model = self.model.to(device)
cents_mapping = 20 * np.arange(360) + 1997.3794084376191
self.cents_mapping = np.pad(cents_mapping, (4, 4)) # 368
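# 360 pitch bins spaced 20 cents apart (lowest ~32 Hz), padded by 4 so the 9-bin window used in decoding fits at the edges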
def mel2hidden(self, mel):
with torch.no_grad():
n_frames = mel.shape[-1]
mel = F.pad(
mel, (0, 32 * ((n_frames - 1) // 32 + 1) - n_frames), mode="reflect"
)
if "privateuseone" in str(self.device):
onnx_input_name = self.model.get_inputs()[0].name
onnx_outputs_names = self.model.get_outputs()[0].name
hidden = self.model.run(
[onnx_outputs_names],
input_feed={onnx_input_name: mel.cpu().numpy()},
)[0]
else:
hidden = self.model(mel)
return hidden[:, :n_frames]
def decode(self, hidden, thred=0.03):
cents_pred = self.to_local_average_cents(hidden, thred=thred)
f0 = 10 * (2 ** (cents_pred / 1200))
f0[f0 == 10] = 0
# f0 = np.array([10 * (2 ** (cent_pred / 1200)) if cent_pred else 0 for cent_pred in cents_pred])
return f0
def infer_from_audio(self, audio, thred=0.03):
# torch.cuda.synchronize()
t0 = ttime()
mel = self.mel_extractor(
torch.from_numpy(audio).float().to(self.device).unsqueeze(0), center=True
)
# print(123123123,mel.device.type)
# torch.cuda.synchronize()
t1 = ttime()
hidden = self.mel2hidden(mel)
# torch.cuda.synchronize()
t2 = ttime()
# print(234234,hidden.device.type)
if "privateuseone" not in str(self.device):
hidden = hidden.squeeze(0).cpu().numpy()
else:
hidden = hidden[0]
if self.is_half:
hidden = hidden.astype("float32")
f0 = self.decode(hidden, thred=thred)
# torch.cuda.synchronize()
t3 = ttime()
# print("hmvpe:%s\t%s\t%s\t%s"%(t1-t0,t2-t1,t3-t2,t3-t0))
return f0
def to_local_average_cents(self, salience, thred=0.05):
# t0 = ttime()
center = np.argmax(salience, axis=1)  # (n_frames,) index of the peak bin
salience = np.pad(salience, ((0, 0), (4, 4)))  # (n_frames, 368)
# t1 = ttime()
center += 4
todo_salience = []
todo_cents_mapping = []
starts = center - 4
ends = center + 5
for idx in range(salience.shape[0]):
todo_salience.append(salience[:, starts[idx] : ends[idx]][idx])
todo_cents_mapping.append(self.cents_mapping[starts[idx] : ends[idx]])
# t2 = ttime()
todo_salience = np.array(todo_salience)  # (n_frames, 9)
todo_cents_mapping = np.array(todo_cents_mapping)  # (n_frames, 9)
product_sum = np.sum(todo_salience * todo_cents_mapping, 1)
weight_sum = np.sum(todo_salience, 1)  # (n_frames,)
divided = product_sum / weight_sum  # (n_frames,) weighted-average cents around each peak
# t3 = ttime()
maxx = np.max(salience, axis=1)  # (n_frames,)
divided[maxx <= thred] = 0
# t4 = ttime()
# print("decode:%s\t%s\t%s\t%s" % (t1 - t0, t2 - t1, t3 - t2, t4 - t3))
return divided
if __name__ == "__main__":
import soundfile as sf, librosa
audio, sampling_rate = sf.read(r"C:\Users\liujing04\Desktop\Z\冬之花clip1.wav")
if len(audio.shape) > 1:
audio = librosa.to_mono(audio.transpose(1, 0))
audio_bak = audio.copy()
if sampling_rate != 16000:
audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
model_path = r"D:\BaiduNetdiskDownload\RVC-beta-v2-0727AMD_realtime\rmvpe.pt"
thred = 0.03 # 0.01
device = "cuda" if torch.cuda.is_available() else "cpu"
rmvpe = RMVPE(model_path, is_half=False, device=device)
t0 = ttime()
f0 = rmvpe.infer_from_audio(audio, thred=thred)
# f0 = rmvpe.infer_from_audio(audio, thred=thred)
# f0 = rmvpe.infer_from_audio(audio, thred=thred)
# f0 = rmvpe.infer_from_audio(audio, thred=thred)
# f0 = rmvpe.infer_from_audio(audio, thred=thred)
t1 = ttime()
print(f0.shape, t1 - t0)

View File

@@ -3,8 +3,8 @@ import numpy as np
import torch
import torch.utils.data
from mel_processing import spectrogram_torch
from utils import load_wav_to_torch, load_filepaths_and_text
from lib.train.mel_processing import spectrogram_torch
from lib.train.utils import load_wav_to_torch, load_filepaths_and_text
class TextAudioLoaderMultiNSFsid(torch.utils.data.Dataset):
@@ -38,7 +38,7 @@ class TextAudioLoaderMultiNSFsid(torch.utils.data.Dataset):
for audiopath, text, pitch, pitchf, dv in self.audiopaths_and_text:
if self.min_text_len <= len(text) and len(text) <= self.max_text_len:
audiopaths_and_text_new.append([audiopath, text, pitch, pitchf, dv])
lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length))
lengths.append(os.path.getsize(audiopath) // (3 * self.hop_length))
self.audiopaths_and_text = audiopaths_and_text_new
self.lengths = lengths
@@ -98,7 +98,10 @@ class TextAudioLoaderMultiNSFsid(torch.utils.data.Dataset):
sampling_rate, self.sampling_rate
)
)
audio_norm = audio / self.max_wav_value
audio_norm = audio
# audio_norm = audio / self.max_wav_value
# audio_norm = audio / np.abs(audio).max()
audio_norm = audio_norm.unsqueeze(0)
spec_filename = filename.replace(".wav", ".spec.pt")
if os.path.exists(spec_filename):
@@ -243,7 +246,7 @@ class TextAudioLoader(torch.utils.data.Dataset):
for audiopath, text, dv in self.audiopaths_and_text:
if self.min_text_len <= len(text) and len(text) <= self.max_text_len:
audiopaths_and_text_new.append([audiopath, text, dv])
lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length))
lengths.append(os.path.getsize(audiopath) // (3 * self.hop_length))
self.audiopaths_and_text = audiopaths_and_text_new
self.lengths = lengths
@@ -287,7 +290,10 @@ class TextAudioLoader(torch.utils.data.Dataset):
sampling_rate, self.sampling_rate
)
)
audio_norm = audio / self.max_wav_value
audio_norm = audio
# audio_norm = audio / self.max_wav_value
# audio_norm = audio / np.abs(audio).max()
audio_norm = audio_norm.unsqueeze(0)
spec_filename = filename.replace(".wav", ".spec.pt")
if os.path.exists(spec_filename):

View File

@@ -1,5 +1,4 @@
import torch
from torch.nn import functional as F
def feature_loss(fmap_r, fmap_g):

130
lib/train/mel_processing.py Normal file
View File

@@ -0,0 +1,130 @@
import torch
import torch.utils.data
from librosa.filters import mel as librosa_mel_fn
MAX_WAV_VALUE = 32768.0
def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
"""
PARAMS
------
C: compression factor
"""
return torch.log(torch.clamp(x, min=clip_val) * C)
def dynamic_range_decompression_torch(x, C=1):
"""
PARAMS
------
C: compression factor used to compress
"""
return torch.exp(x) / C
def spectral_normalize_torch(magnitudes):
return dynamic_range_compression_torch(magnitudes)
def spectral_de_normalize_torch(magnitudes):
return dynamic_range_decompression_torch(magnitudes)
# Reusable banks
mel_basis = {}
hann_window = {}
def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False):
"""Convert waveform into Linear-frequency Linear-amplitude spectrogram.
Args:
y :: (B, T) - Audio waveforms
n_fft
sampling_rate
hop_size
win_size
center
Returns:
:: (B, Freq, Frame) - Linear-frequency Linear-amplitude spectrogram
"""
# Validation
if torch.min(y) < -1.07:
print("min value is ", torch.min(y))
if torch.max(y) > 1.07:
print("max value is ", torch.max(y))
# Window - Cache if needed
global hann_window
dtype_device = str(y.dtype) + "_" + str(y.device)
wnsize_dtype_device = str(win_size) + "_" + dtype_device
if wnsize_dtype_device not in hann_window:
hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(
dtype=y.dtype, device=y.device
)
# Padding
y = torch.nn.functional.pad(
y.unsqueeze(1),
(int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)),
mode="reflect",
)
y = y.squeeze(1)
# Complex Spectrogram :: (B, T) -> (B, Freq, Frame, RealComplex=2)
spec = torch.stft(
y,
n_fft,
hop_length=hop_size,
win_length=win_size,
window=hann_window[wnsize_dtype_device],
center=center,
pad_mode="reflect",
normalized=False,
onesided=True,
return_complex=False,
)
# Linear-frequency Linear-amplitude spectrogram :: (B, Freq, Frame, RealComplex=2) -> (B, Freq, Frame)
spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
return spec
def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax):
# MelBasis - Cache if needed
global mel_basis
dtype_device = str(spec.dtype) + "_" + str(spec.device)
fmax_dtype_device = str(fmax) + "_" + dtype_device
if fmax_dtype_device not in mel_basis:
mel = librosa_mel_fn(
sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax
)
mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(
dtype=spec.dtype, device=spec.device
)
# Mel-frequency Log-amplitude spectrogram :: (B, Freq=num_mels, Frame)
melspec = torch.matmul(mel_basis[fmax_dtype_device], spec)
melspec = spectral_normalize_torch(melspec)
return melspec
def mel_spectrogram_torch(
y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False
):
"""Convert waveform into Mel-frequency Log-amplitude spectrogram.
Args:
y :: (B, T) - Waveforms
Returns:
melspec :: (B, Freq, Frame) - Mel-frequency Log-amplitude spectrogram
"""
# Linear-frequency Linear-amplitude spectrogram :: (B, T) -> (B, Freq, Frame)
spec = spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center)
# Mel-frequency Log-amplitude spectrogram :: (B, Freq, Frame) -> (B, Freq=num_mels, Frame)
melspec = spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax)
return melspec
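A minimal sketch of calling the helpers above; the parameters (n_fft 2048, hop 400, 125 mels) are assumptions chosen to mirror an RVC-style 40k configuration, and the waveform is a stand-in:
import torch
y = torch.randn(1, 40000)  # placeholder: 1 s of audio at 40 kHz
mel = mel_spectrogram_torch(y, n_fft=2048, num_mels=125, sampling_rate=40000, hop_size=400, win_size=2048, fmin=0, fmax=None)
print(mel.shape)  # (1, 125, n_frames)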

View File

@@ -1,248 +1,259 @@
import torch, traceback, os, pdb
from collections import OrderedDict
def savee(ckpt, sr, if_f0, name, epoch):
try:
opt = OrderedDict()
opt["weight"] = {}
for key in ckpt.keys():
if "enc_q" in key:
continue
opt["weight"][key] = ckpt[key].half()
if sr == "40k":
opt["config"] = [
1025,
32,
192,
192,
768,
2,
6,
3,
0,
"1",
[3, 7, 11],
[[1, 3, 5], [1, 3, 5], [1, 3, 5]],
[10, 10, 2, 2],
512,
[16, 16, 4, 4],
109,
256,
40000,
]
elif sr == "48k":
opt["config"] = [
1025,
32,
192,
192,
768,
2,
6,
3,
0,
"1",
[3, 7, 11],
[[1, 3, 5], [1, 3, 5], [1, 3, 5]],
[10, 6, 2, 2, 2],
512,
[16, 16, 4, 4, 4],
109,
256,
48000,
]
elif sr == "32k":
opt["config"] = [
513,
32,
192,
192,
768,
2,
6,
3,
0,
"1",
[3, 7, 11],
[[1, 3, 5], [1, 3, 5], [1, 3, 5]],
[10, 4, 2, 2, 2],
512,
[16, 16, 4, 4, 4],
109,
256,
32000,
]
opt["info"] = "%sepoch" % epoch
opt["sr"] = sr
opt["f0"] = if_f0
torch.save(opt, "weights/%s.pth" % name)
return "Success."
except:
return traceback.format_exc()
def show_info(path):
try:
a = torch.load(path, map_location="cpu")
return "模型信息:%s\n采样率:%s\n模型是否输入音高引导:%s" % (
a.get("info", "None"),
a.get("sr", "None"),
a.get("f0", "None"),
)
except:
return traceback.format_exc()
def extract_small_model(path, name, sr, if_f0, info):
try:
ckpt = torch.load(path, map_location="cpu")
if "model" in ckpt:
ckpt = ckpt["model"]
opt = OrderedDict()
opt["weight"] = {}
for key in ckpt.keys():
if "enc_q" in key:
continue
opt["weight"][key] = ckpt[key].half()
if sr == "40k":
opt["config"] = [
1025,
32,
192,
192,
768,
2,
6,
3,
0,
"1",
[3, 7, 11],
[[1, 3, 5], [1, 3, 5], [1, 3, 5]],
[10, 10, 2, 2],
512,
[16, 16, 4, 4],
109,
256,
40000,
]
elif sr == "48k":
opt["config"] = [
1025,
32,
192,
192,
768,
2,
6,
3,
0,
"1",
[3, 7, 11],
[[1, 3, 5], [1, 3, 5], [1, 3, 5]],
[10, 6, 2, 2, 2],
512,
[16, 16, 4, 4, 4],
109,
256,
48000,
]
elif sr == "32k":
opt["config"] = [
513,
32,
192,
192,
768,
2,
6,
3,
0,
"1",
[3, 7, 11],
[[1, 3, 5], [1, 3, 5], [1, 3, 5]],
[10, 4, 2, 2, 2],
512,
[16, 16, 4, 4, 4],
109,
256,
32000,
]
if info == "":
info = "Extracted model."
opt["info"] = info
opt["sr"] = sr
opt["f0"] = int(if_f0)
torch.save(opt, "weights/%s.pth" % name)
return "Success."
except:
return traceback.format_exc()
def change_info(path, info, name):
try:
ckpt = torch.load(path, map_location="cpu")
ckpt["info"] = info
if name == "":
name = os.path.basename(path)
torch.save(ckpt, "weights/%s" % name)
return "Success."
except:
return traceback.format_exc()
def merge(path1, path2, alpha1, sr, f0, info, name):
try:
def extract(ckpt):
a = ckpt["model"]
opt = OrderedDict()
opt["weight"] = {}
for key in a.keys():
if "enc_q" in key:
continue
opt["weight"][key] = a[key]
return opt
ckpt1 = torch.load(path1, map_location="cpu")
ckpt2 = torch.load(path2, map_location="cpu")
cfg = ckpt1["config"]
if "model" in ckpt1:
ckpt1 = extract(ckpt1)
else:
ckpt1 = ckpt1["weight"]
if "model" in ckpt2:
ckpt2 = extract(ckpt2)
else:
ckpt2 = ckpt2["weight"]
if sorted(list(ckpt1.keys())) != sorted(list(ckpt2.keys())):
return "Fail to merge the models. The model architectures are not the same."
opt = OrderedDict()
opt["weight"] = {}
for key in ckpt1.keys():
# try:
if key == "emb_g.weight" and ckpt1[key].shape != ckpt2[key].shape:
min_shape0 = min(ckpt1[key].shape[0], ckpt2[key].shape[0])
opt["weight"][key] = (
alpha1 * (ckpt1[key][:min_shape0].float())
+ (1 - alpha1) * (ckpt2[key][:min_shape0].float())
).half()
else:
opt["weight"][key] = (
alpha1 * (ckpt1[key].float()) + (1 - alpha1) * (ckpt2[key].float())
).half()
# except:
# pdb.set_trace()
opt["config"] = cfg
"""
if(sr=="40k"):opt["config"] = [1025, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10, 10, 2, 2], 512, [16, 16, 4, 4,4], 109, 256, 40000]
elif(sr=="48k"):opt["config"] = [1025, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10,6,2,2,2], 512, [16, 16, 4, 4], 109, 256, 48000]
elif(sr=="32k"):opt["config"] = [513, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10, 4, 2, 2, 2], 512, [16, 16, 4, 4,4], 109, 256, 32000]
"""
opt["sr"] = sr
opt["f0"] = 1 if f0 == "" else 0
opt["info"] = info
torch.save(opt, "weights/%s.pth" % name)
return "Success."
except:
return traceback.format_exc()
import torch, traceback, os, sys
now_dir = os.getcwd()
sys.path.append(now_dir)
from collections import OrderedDict
from i18n import I18nAuto
i18n = I18nAuto()
def savee(ckpt, sr, if_f0, name, epoch, version, hps):
try:
opt = OrderedDict()
opt["weight"] = {}
for key in ckpt.keys():
if "enc_q" in key:
continue
opt["weight"][key] = ckpt[key].half()
opt["config"] = [
hps.data.filter_length // 2 + 1,
32,
hps.model.inter_channels,
hps.model.hidden_channels,
hps.model.filter_channels,
hps.model.n_heads,
hps.model.n_layers,
hps.model.kernel_size,
hps.model.p_dropout,
hps.model.resblock,
hps.model.resblock_kernel_sizes,
hps.model.resblock_dilation_sizes,
hps.model.upsample_rates,
hps.model.upsample_initial_channel,
hps.model.upsample_kernel_sizes,
hps.model.spk_embed_dim,
hps.model.gin_channels,
hps.data.sampling_rate,
]
opt["info"] = "%sepoch" % epoch
opt["sr"] = sr
opt["f0"] = if_f0
opt["version"] = version
torch.save(opt, "weights/%s.pth" % name)
return "Success."
except:
return traceback.format_exc()
def show_info(path):
try:
a = torch.load(path, map_location="cpu")
return "模型信息:%s\n采样率:%s\n模型是否输入音高引导:%s\n版本:%s" % (
a.get("info", "None"),
a.get("sr", "None"),
a.get("f0", "None"),
a.get("version", "None"),
)
except:
return traceback.format_exc()
def extract_small_model(path, name, sr, if_f0, info, version):
try:
ckpt = torch.load(path, map_location="cpu")
if "model" in ckpt:
ckpt = ckpt["model"]
opt = OrderedDict()
opt["weight"] = {}
for key in ckpt.keys():
if "enc_q" in key:
continue
opt["weight"][key] = ckpt[key].half()
if sr == "40k":
opt["config"] = [
1025,
32,
192,
192,
768,
2,
6,
3,
0,
"1",
[3, 7, 11],
[[1, 3, 5], [1, 3, 5], [1, 3, 5]],
[10, 10, 2, 2],
512,
[16, 16, 4, 4],
109,
256,
40000,
]
elif sr == "48k":
if version == "v1":
opt["config"] = [
1025,
32,
192,
192,
768,
2,
6,
3,
0,
"1",
[3, 7, 11],
[[1, 3, 5], [1, 3, 5], [1, 3, 5]],
[10, 6, 2, 2, 2],
512,
[16, 16, 4, 4, 4],
109,
256,
48000,
]
else:
opt["config"] = [
1025,
32,
192,
192,
768,
2,
6,
3,
0,
"1",
[3, 7, 11],
[[1, 3, 5], [1, 3, 5], [1, 3, 5]],
[12, 10, 2, 2],
512,
[24, 20, 4, 4],
109,
256,
48000,
]
elif sr == "32k":
if version == "v1":
opt["config"] = [
513,
32,
192,
192,
768,
2,
6,
3,
0,
"1",
[3, 7, 11],
[[1, 3, 5], [1, 3, 5], [1, 3, 5]],
[10, 4, 2, 2, 2],
512,
[16, 16, 4, 4, 4],
109,
256,
32000,
]
else:
opt["config"] = [
513,
32,
192,
192,
768,
2,
6,
3,
0,
"1",
[3, 7, 11],
[[1, 3, 5], [1, 3, 5], [1, 3, 5]],
[10, 8, 2, 2],
512,
[20, 16, 4, 4],
109,
256,
32000,
]
if info == "":
info = "Extracted model."
opt["info"] = info
opt["version"] = version
opt["sr"] = sr
opt["f0"] = int(if_f0)
torch.save(opt, "weights/%s.pth" % name)
return "Success."
except:
return traceback.format_exc()
def change_info(path, info, name):
try:
ckpt = torch.load(path, map_location="cpu")
ckpt["info"] = info
if name == "":
name = os.path.basename(path)
torch.save(ckpt, "weights/%s" % name)
return "Success."
except:
return traceback.format_exc()
def merge(path1, path2, alpha1, sr, f0, info, name, version):
try:
def extract(ckpt):
a = ckpt["model"]
opt = OrderedDict()
opt["weight"] = {}
for key in a.keys():
if "enc_q" in key:
continue
opt["weight"][key] = a[key]
return opt
ckpt1 = torch.load(path1, map_location="cpu")
ckpt2 = torch.load(path2, map_location="cpu")
cfg = ckpt1["config"]
if "model" in ckpt1:
ckpt1 = extract(ckpt1)
else:
ckpt1 = ckpt1["weight"]
if "model" in ckpt2:
ckpt2 = extract(ckpt2)
else:
ckpt2 = ckpt2["weight"]
if sorted(list(ckpt1.keys())) != sorted(list(ckpt2.keys())):
return "Fail to merge the models. The model architectures are not the same."
opt = OrderedDict()
opt["weight"] = {}
for key in ckpt1.keys():
# try:
if key == "emb_g.weight" and ckpt1[key].shape != ckpt2[key].shape:
min_shape0 = min(ckpt1[key].shape[0], ckpt2[key].shape[0])
opt["weight"][key] = (
alpha1 * (ckpt1[key][:min_shape0].float())
+ (1 - alpha1) * (ckpt2[key][:min_shape0].float())
).half()
else:
opt["weight"][key] = (
alpha1 * (ckpt1[key].float()) + (1 - alpha1) * (ckpt2[key].float())
).half()
# except:
# pdb.set_trace()
opt["config"] = cfg
"""
if(sr=="40k"):opt["config"] = [1025, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10, 10, 2, 2], 512, [16, 16, 4, 4,4], 109, 256, 40000]
elif(sr=="48k"):opt["config"] = [1025, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10,6,2,2,2], 512, [16, 16, 4, 4], 109, 256, 48000]
elif(sr=="32k"):opt["config"] = [513, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10, 4, 2, 2, 2], 512, [16, 16, 4, 4,4], 109, 256, 32000]
"""
opt["sr"] = sr
opt["f0"] = 1 if f0 == i18n("") else 0
opt["version"] = version
opt["info"] = info
torch.save(opt, "weights/%s.pth" % name)
return "Success."
except:
return traceback.format_exc()
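A hedged sketch of driving the rewritten merge helper from a script (the web UI is the usual caller); the file names are placeholders and the module import path is an assumption:
from process_ckpt import merge  # import path is an assumption
msg = merge(
"weights/A.pth", "weights/B.pth",  # hypothetical small models under weights/
alpha1=0.5, sr="40k", f0="是",  # "是" marks a pitch-guided pair, per the i18n toggle
info="A+B blend", name="AB_merge", version="v2",
)
print(msg)  # "Success." on success, otherwise a traceback string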

View File

@@ -44,9 +44,10 @@ def load_checkpoint_d(checkpoint_path, combd, sbd, optimizer=None, load_opt=1):
model.module.load_state_dict(new_state_dict, strict=False)
else:
model.load_state_dict(new_state_dict, strict=False)
return model
go(combd, "combd")
go(sbd, "sbd")
model = go(sbd, "sbd")
#############
logger.info("Loaded model weights")
@@ -284,8 +285,8 @@ def get_hparams(init=True):
bs done
pretrainG, pretrainD done
GPU ids: os.en["CUDA_VISIBLE_DEVICES"] done
if_latest todo
model if_f0 todo
if_latest done
model if_f0 done
sample rate: auto-select config done
cache dataset into GPU: if_cache_data_in_gpu done
@@ -321,6 +322,16 @@ def get_hparams(init=True):
parser.add_argument(
"-sr", "--sample_rate", type=str, required=True, help="sample rate, 32k/40k/48k"
)
parser.add_argument(
"-sw",
"--save_every_weights",
type=str,
default="0",
help="save the extracted model in weights directory when saving checkpoints",
)
parser.add_argument(
"-v", "--version", type=str, required=True, help="model version"
)
parser.add_argument(
"-f0",
"--if_f0",
@@ -350,7 +361,10 @@ def get_hparams(init=True):
if not os.path.exists(experiment_dir):
os.makedirs(experiment_dir)
config_path = "configs/%s.json" % args.sample_rate
if args.version == "v1" or args.sample_rate == "40k":
config_path = "configs/%s.json" % args.sample_rate
else:
config_path = "configs/%s_v2.json" % args.sample_rate
config_save_path = os.path.join(experiment_dir, "config.json")
if init:
with open(config_path, "r") as f:
@@ -369,11 +383,13 @@ def get_hparams(init=True):
hparams.total_epoch = args.total_epoch
hparams.pretrainG = args.pretrainG
hparams.pretrainD = args.pretrainD
hparams.version = args.version
hparams.gpus = args.gpus
hparams.train.batch_size = args.batch_size
hparams.sample_rate = args.sample_rate
hparams.if_f0 = args.if_f0
hparams.if_latest = args.if_latest
hparams.save_every_weights = args.save_every_weights
hparams.if_cache_data_in_gpu = args.if_cache_data_in_gpu
hparams.data.training_files = "%s/filelist.txt" % experiment_dir
return hparams

View File

@@ -6,7 +6,7 @@ import torch
import torch.utils.data
from tqdm import tqdm
from uvr5_pack.lib_v5 import spec_utils
from . import spec_utils
class VocalRemoverValidationSet(torch.utils.data.Dataset):

View File

@@ -2,7 +2,7 @@ import torch
from torch import nn
import torch.nn.functional as F
from uvr5_pack.lib_v5 import spec_utils
from . import spec_utils
class Conv2DBNActiv(nn.Module):

View File

@@ -2,7 +2,7 @@ import torch
from torch import nn
import torch.nn.functional as F
from uvr5_pack.lib_v5 import spec_utils
from . import spec_utils
class Conv2DBNActiv(nn.Module):

View File

@@ -2,7 +2,7 @@ import torch
from torch import nn
import torch.nn.functional as F
from uvr5_pack.lib_v5 import spec_utils
from . import spec_utils
class Conv2DBNActiv(nn.Module):

View File

@@ -2,7 +2,7 @@ import torch
from torch import nn
import torch.nn.functional as F
from uvr5_pack.lib_v5 import spec_utils
from . import spec_utils
class Conv2DBNActiv(nn.Module):

View File

@@ -2,7 +2,7 @@ import torch
from torch import nn
import torch.nn.functional as F
from uvr5_pack.lib_v5 import spec_utils
from . import spec_utils
class Conv2DBNActiv(nn.Module):

View File

@@ -2,7 +2,7 @@ import torch
from torch import nn
import torch.nn.functional as F
from uvr5_pack.lib_v5 import spec_utils
from . import spec_utils
class Conv2DBNActiv(nn.Module):

View File

@@ -0,0 +1,125 @@
import torch
from torch import nn
import torch.nn.functional as F
from . import spec_utils
class Conv2DBNActiv(nn.Module):
def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
super(Conv2DBNActiv, self).__init__()
self.conv = nn.Sequential(
nn.Conv2d(
nin,
nout,
kernel_size=ksize,
stride=stride,
padding=pad,
dilation=dilation,
bias=False,
),
nn.BatchNorm2d(nout),
activ(),
)
def __call__(self, x):
return self.conv(x)
class Encoder(nn.Module):
def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
super(Encoder, self).__init__()
self.conv1 = Conv2DBNActiv(nin, nout, ksize, stride, pad, activ=activ)
self.conv2 = Conv2DBNActiv(nout, nout, ksize, 1, pad, activ=activ)
def __call__(self, x):
h = self.conv1(x)
h = self.conv2(h)
return h
class Decoder(nn.Module):
def __init__(
self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
):
super(Decoder, self).__init__()
self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
# self.conv2 = Conv2DBNActiv(nout, nout, ksize, 1, pad, activ=activ)
self.dropout = nn.Dropout2d(0.1) if dropout else None
def __call__(self, x, skip=None):
x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
if skip is not None:
skip = spec_utils.crop_center(skip, x)
x = torch.cat([x, skip], dim=1)
h = self.conv1(x)
# h = self.conv2(h)
if self.dropout is not None:
h = self.dropout(h)
return h
class ASPPModule(nn.Module):
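# ASPP: parallel dilated convolutions plus a pooled global branch, fused by a 1x1 bottleneck conv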
def __init__(self, nin, nout, dilations=(4, 8, 12), activ=nn.ReLU, dropout=False):
super(ASPPModule, self).__init__()
self.conv1 = nn.Sequential(
nn.AdaptiveAvgPool2d((1, None)),
Conv2DBNActiv(nin, nout, 1, 1, 0, activ=activ),
)
self.conv2 = Conv2DBNActiv(nin, nout, 1, 1, 0, activ=activ)
self.conv3 = Conv2DBNActiv(
nin, nout, 3, 1, dilations[0], dilations[0], activ=activ
)
self.conv4 = Conv2DBNActiv(
nin, nout, 3, 1, dilations[1], dilations[1], activ=activ
)
self.conv5 = Conv2DBNActiv(
nin, nout, 3, 1, dilations[2], dilations[2], activ=activ
)
self.bottleneck = Conv2DBNActiv(nout * 5, nout, 1, 1, 0, activ=activ)
self.dropout = nn.Dropout2d(0.1) if dropout else None
def forward(self, x):
_, _, h, w = x.size()
feat1 = F.interpolate(
self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
)
feat2 = self.conv2(x)
feat3 = self.conv3(x)
feat4 = self.conv4(x)
feat5 = self.conv5(x)
out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1)
out = self.bottleneck(out)
if self.dropout is not None:
out = self.dropout(out)
return out
class LSTMModule(nn.Module):
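# squeeze channels to 1 with a 1x1 conv, run a BiLSTM along the time axis, then project back to frequency bins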
def __init__(self, nin_conv, nin_lstm, nout_lstm):
super(LSTMModule, self).__init__()
self.conv = Conv2DBNActiv(nin_conv, 1, 1, 1, 0)
self.lstm = nn.LSTM(
input_size=nin_lstm, hidden_size=nout_lstm // 2, bidirectional=True
)
self.dense = nn.Sequential(
nn.Linear(nout_lstm, nin_lstm), nn.BatchNorm1d(nin_lstm), nn.ReLU()
)
def forward(self, x):
N, _, nbins, nframes = x.size()
h = self.conv(x)[:, 0] # N, nbins, nframes
h = h.permute(2, 0, 1) # nframes, N, nbins
h, _ = self.lstm(h)
h = self.dense(h.reshape(-1, h.size()[-1])) # nframes * N, nbins
h = h.reshape(nframes, N, 1, nbins)
h = h.permute(1, 2, 3, 0)
return h

Some files were not shown because too many files have changed in this diff.