
SepFormer trained on Libri2Mix

This repository provides all the tools needed to perform audio source separation with a SepFormer model, implemented with SpeechBrain and pretrained on the Libri2Mix dataset. For a better experience, we encourage you to learn more about SpeechBrain. The model achieves 20.6 dB SI-SNRi on the test set of the Libri2Mix dataset.

Release     Test-Set SI-SNRi    Test-Set SDRi
16-09-22    20.6 dB             20.9 dB
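
For context, SI-SNRi is the improvement in scale-invariant signal-to-noise ratio over the unprocessed mixture (SDRi is the analogous improvement in SDR). Below is a minimal sketch of the standard SI-SNR computation, for illustration only; it is not the exact metric code used in the recipe.

import torch

def si_snr(estimate, reference, eps=1e-8):
    """Scale-invariant SNR (in dB) between 1-D estimate and reference signals."""
    estimate = estimate - estimate.mean()
    reference = reference - reference.mean()
    # project the estimate onto the reference to get the target component
    s_target = (torch.dot(estimate, reference) / (torch.dot(reference, reference) + eps)) * reference
    e_noise = estimate - s_target
    return 10 * torch.log10(torch.sum(s_target ** 2) / (torch.sum(e_noise ** 2) + eps))

# SI-SNRi is the improvement over the unprocessed mixture:
# si_snr(estimated_source, clean_source) - si_snr(mixture, clean_source)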

You can listen to example results obtained on the test set of WSJ0-2/3Mix here.

Install SpeechBrain

First of all, please install SpeechBrain with the following command:

pip install speechbrain

Please note that we encourage you to read our tutorials and learn more about SpeechBrain.

Perform source separation on your own audio file

from speechbrain.pretrained import SepformerSeparation as separator
import torchaudio

# download the pretrained model from the HuggingFace hub and load it
model = separator.from_hparams(source="speechbrain/sepformer-libri2mix", savedir='pretrained_models/sepformer-libri2mix')

# for a custom file, change the path
est_sources = model.separate_file(path='speechbrain/sepformer-wsj02mix/test_mixture.wav')

torchaudio.save("source1hat.wav", est_sources[:, :, 0].detach().cpu(), 8000)
torchaudio.save("source2hat.wav", est_sources[:, :, 1].detach().cpu(), 8000)

The system expects input recordings sampled at 8 kHz (single channel). If your signal has a different sample rate, resample it (e.g., with torchaudio or sox) before using this interface.
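
For example, a recording at a different rate can be brought to 8 kHz with torchaudio before separation. This is a minimal sketch; my_recording.wav is a hypothetical input file.

import torchaudio

signal, fs = torchaudio.load("my_recording.wav")  # hypothetical input file
if fs != 8000:
    # resample to the 8 kHz rate the model expects
    signal = torchaudio.functional.resample(signal, orig_freq=fs, new_freq=8000)
torchaudio.save("my_recording_8k.wav", signal, 8000)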

Inference on GPU

To perform inference on the GPU, add run_opts={"device":"cuda"} when calling the from_hparams method.
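
For example, the loading call from the snippet above becomes the following. This is a minimal sketch; it assumes a CUDA-capable GPU is available.

from speechbrain.pretrained import SepformerSeparation as separator

# load the pretrained model directly onto the GPU
model = separator.from_hparams(
    source="speechbrain/sepformer-libri2mix",
    savedir='pretrained_models/sepformer-libri2mix',
    run_opts={"device": "cuda"},
)
est_sources = model.separate_file(path='speechbrain/sepformer-wsj02mix/test_mixture.wav')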

Training

The model was trained with SpeechBrain (commit fc2eabb7). To train it from scratch, follow these steps:

  • Clone SpeechBrain:

    git clone https://github.com/speechbrain/speechbrain/

  • Install it:

    cd speechbrain
    pip install -r requirements.txt
    pip install -e .

  • Run the training:

    cd recipes/Libri2Mix/separation
    python train.py hparams/sepformer.yaml --data_folder=your_data_folder

You can find our training results (models, logs, etc.) here.

Limitations

The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.

Citing SpeechBrain

    @misc{speechbrain,
      title={{SpeechBrain}: A General-Purpose Speech Toolkit},
      author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
      year={2021},
      eprint={2106.04624},
      archivePrefix={arXiv},
      primaryClass={eess.AS},
      note={arXiv:2106.04624}
    }
    
Citing SepFormer

    @inproceedings{subakan2021attention,
          title={Attention is All You Need in Speech Separation}, 
          author={Cem Subakan and Mirco Ravanelli and Samuele Cornell and Mirko Bronzi and Jianyuan Zhong},
          year={2021},
          booktitle={ICASSP 2021}
    }
    
    @misc{subakan2022sepformer,
      author = {Subakan, Cem and Ravanelli, Mirco and Cornell, Samuele and Grondin, Francois and Bronzi, Mirko},
      title = {On Using Transformers for Speech-Separation},
      year = {2022},
      copyright = {arXiv.org perpetual, non-exclusive license}
    }
    

About SpeechBrain