Model:
speechbrain/tts-hifigan-libritts-22050Hz
This repository provides all the necessary tools for using a HiFiGAN vocoder trained on LibriTTS (multi-speaker). The vocoder operates at a sampling rate of 22050 Hz.
The pre-trained model takes a spectrogram as input and produces a waveform as output. Typically, a vocoder is used after a TTS model that converts the input text into a spectrogram.
To use the model, first install SpeechBrain with the following command:
pip install speechbrain
Please note that we encourage you to read our tutorials and learn more about SpeechBrain.
Using the vocoder alone (spectrogram-to-waveform):

import torch
from speechbrain.pretrained import HIFIGAN

# Load the pre-trained HiFiGAN vocoder
hifi_gan = HIFIGAN.from_hparams(source="speechbrain/tts-hifigan-libritts-22050Hz", savedir="tmpdir")

# Dummy mel-spectrogram batch: (batch, n_mels, frames)
mel_specs = torch.rand(2, 80, 298)

# Running Vocoder (spectrogram-to-waveform)
waveforms = hifi_gan.decode_batch(mel_specs)
Using the vocoder together with a TTS model (text-to-speech):

import torchaudio
from speechbrain.pretrained import Tacotron2
from speechbrain.pretrained import HIFIGAN

# Initialize TTS (tacotron2) and Vocoder (HiFiGAN)
tacotron2 = Tacotron2.from_hparams(source="speechbrain/tts-tacotron2-ljspeech", savedir="tmpdir_tts")
hifi_gan = HIFIGAN.from_hparams(source="speechbrain/tts-hifigan-libritts-22050Hz", savedir="tmpdir_vocoder")

# Running the TTS
mel_output, mel_length, alignment = tacotron2.encode_text("Mary had a little lamb")

# Running Vocoder (spectrogram-to-waveform)
waveforms = hifi_gan.decode_batch(mel_output)

# Save the waveform
torchaudio.save('example_TTS.wav', waveforms.squeeze(1), 22050)
To perform inference on the GPU, add run_opts={"device": "cuda"} when calling the from_hparams method.
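For example, here is a minimal sketch of GPU inference for the vocoder-only snippet above (the dummy input shape is the same illustrative one used earlier):

import torch
from speechbrain.pretrained import HIFIGAN

# Load the vocoder directly on the GPU
hifi_gan = HIFIGAN.from_hparams(
    source="speechbrain/tts-hifigan-libritts-22050Hz",
    savedir="tmpdir",
    run_opts={"device": "cuda"},
)

# Dummy spectrogram moved to the GPU to match the model's device
mel_specs = torch.rand(2, 80, 298).to("cuda")
waveforms = hifi_gan.decode_batch(mel_specs)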
The model was trained with SpeechBrain. To train it from scratch, follow these steps:
git clone https://github.com/speechbrain/speechbrain/
cd speechbrain
pip install -r requirements.txt
pip install -e .
cd recipes/LibriTTS/vocoder/hifigan/
python train.py hparams/train.yaml --data_folder=/path/to/LibriTTS_data_destination --sample_rate=22050
To change the sampling rate the model is trained at, open the recipes/LibriTTS/vocoder/hifigan/hparams/train.yaml file and change the value of sample_rate as needed. Training logs and checkpoints are available here.
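Alternatively, sample_rate can be overridden directly from the command line when launching the recipe, as in the training command above. For example (the 16000 Hz value here is purely illustrative):

cd recipes/LibriTTS/vocoder/hifigan/
python train.py hparams/train.yaml --data_folder=/path/to/LibriTTS_data_destination --sample_rate=16000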