Model:
speechbrain/sepformer-wsj02mix
This repository provides all the tools needed to perform audio source separation with a SepFormer model, implemented with SpeechBrain and pretrained on the WSJ0-2Mix dataset. For a better experience, we encourage you to learn more about SpeechBrain. The model achieves 22.4 dB SI-SNRi on the test set of the WSJ0-2Mix dataset.
| Release | Test-Set SI-SNRi | Test-Set SDRi |
|---|---|---|
| 09-03-21 | 22.4 dB | 22.6 dB |
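For reference, SI-SNRi is the improvement in scale-invariant signal-to-noise ratio of each separated estimate over the unprocessed mixture. The helper below is a minimal sketch of the underlying SI-SNR metric; the function name and signal shapes are illustrative and not part of this repository:

```python
import torch

def si_snr(estimate: torch.Tensor, target: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Scale-invariant SNR (in dB) between two 1-D signals. Illustrative sketch only."""
    # Zero-mean both signals so the metric ignores DC offsets.
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    # Project the estimate onto the target; the residual is treated as noise.
    s_target = (torch.dot(estimate, target) / (torch.dot(target, target) + eps)) * target
    e_noise = estimate - s_target
    return 10 * torch.log10(s_target.pow(2).sum() / (e_noise.pow(2).sum() + eps))

# SI-SNRi for one source = si_snr(estimate, source) - si_snr(mixture, source)
```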
You can listen to example results obtained on the WSJ0-2/3Mix test sets here.
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please note that we encourage you to read our tutorials and learn more about SpeechBrain.
```python
from speechbrain.pretrained import SepformerSeparation as separator
import torchaudio

model = separator.from_hparams(source="speechbrain/sepformer-wsj02mix", savedir='pretrained_models/sepformer-wsj02mix')

# for custom file, change path
est_sources = model.separate_file(path='speechbrain/sepformer-wsj02mix/test_mixture.wav')

torchaudio.save("source1hat.wav", est_sources[:, :, 0].detach().cpu(), 8000)
torchaudio.save("source2hat.wav", est_sources[:, :, 1].detach().cpu(), 8000)
```
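The separated estimates are stacked along the last dimension of est_sources, which is why the two speakers are written out as est_sources[:, :, 0] and est_sources[:, :, 1] above.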
The system expects input recordings sampled at 8 kHz (single channel). If your signal has a different sample rate, resample it (e.g., using torchaudio or sox) before using the interface.
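As an example, a custom recording can be converted to the expected format with torchaudio before separation. This is a minimal sketch; `my_recording.wav` is a placeholder path, not a file shipped with this repository:

```python
import torchaudio

# Load an arbitrary recording (placeholder path) and bring it to 8 kHz mono.
signal, fs = torchaudio.load("my_recording.wav")
if signal.shape[0] > 1:  # downmix multi-channel audio to mono
    signal = signal.mean(dim=0, keepdim=True)
if fs != 8000:           # resample to the 8 kHz rate the model expects
    signal = torchaudio.functional.resample(signal, orig_freq=fs, new_freq=8000)
torchaudio.save("my_recording_8k.wav", signal, 8000)
```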
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
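For instance, the loading call from the snippet above becomes:

```python
from speechbrain.pretrained import SepformerSeparation as separator

# Load the pretrained model on the GPU; inference then runs on CUDA.
model = separator.from_hparams(
    source="speechbrain/sepformer-wsj02mix",
    savedir="pretrained_models/sepformer-wsj02mix",
    run_opts={"device": "cuda"},
)
```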
The model was trained with SpeechBrain (fc2eabb7). To train it from scratch, follow these steps:
```
git clone https://github.com/speechbrain/speechbrain/
cd speechbrain
pip install -r requirements.txt
pip install -e .
cd recipes/WSJ0Mix/separation
python train.py hparams/sepformer.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc.) here.
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
Citing SpeechBrain

```bibtex
@misc{speechbrain,
  title={{SpeechBrain}: A General-Purpose Speech Toolkit},
  author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
  year={2021},
  eprint={2106.04624},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  note={arXiv:2106.04624}
}
```

Citing SepFormer

```bibtex
@inproceedings{subakan2021attention,
  title={Attention is All You Need in Speech Separation},
  author={Cem Subakan and Mirco Ravanelli and Samuele Cornell and Mirko Bronzi and Jianyuan Zhong},
  year={2021},
  booktitle={ICASSP 2021}
}
```