
SepFormer trained on Libri3Mix

This repository provides all the necessary tools to perform audio source separation with a SepFormer model, implemented with SpeechBrain and trained on the Libri3Mix dataset. For a better experience, we encourage you to learn more about SpeechBrain. The model achieves 19.8 dB SI-SNRi on the test set of the Libri3Mix dataset.

Release     Test-Set SI-SNRi    Test-Set SDRi
16-09-22    19.0 dB             19.4 dB
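
For context, SI-SNRi reports the improvement in scale-invariant signal-to-noise ratio of the separated signal over the raw mixture, and SDRi the analogous signal-to-distortion improvement. Below is a minimal sketch of the underlying SI-SNR computation (illustrative code, not SpeechBrain's evaluation script; the function and variable names are ours):

import torch

def si_snr(estimate: torch.Tensor, target: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # estimate, target: 1-D waveforms of equal length.
    # Remove the mean so the measure ignores DC offsets.
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    # Scale-invariant target: projection of the estimate onto the target.
    s_target = (torch.dot(estimate, target) / (torch.dot(target, target) + eps)) * target
    e_noise = estimate - s_target
    # Energy ratio of the target component to the residual, in dB.
    return 10 * torch.log10(torch.sum(s_target ** 2) / (torch.sum(e_noise ** 2) + eps))

# SI-SNRi is the improvement over using the raw mixture as the estimate:
# improvement = si_snr(separated, source) - si_snr(mixture, source)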

Install SpeechBrain

First of all, please install SpeechBrain with the following command:

pip install speechbrain

Please note that we encourage you to read our tutorials and learn more about SpeechBrain.

Perform source separation on your own audio file

from speechbrain.pretrained import SepformerSeparation as separator
import torchaudio

# Download the pretrained model from the HuggingFace Hub and cache it locally
model = separator.from_hparams(source="speechbrain/sepformer-libri3mix", savedir='pretrained_models/sepformer-libri3mix')

# Separate the mixture; est_sources has shape [batch, time, num_spks]
est_sources = model.separate_file(path='speechbrain/sepformer-wsj03mix/test_mixture_3spks.wav')

# Save each of the three estimated sources as an 8 kHz wav file
torchaudio.save("source1hat.wav", est_sources[:, :, 0].detach().cpu(), 8000)
torchaudio.save("source2hat.wav", est_sources[:, :, 1].detach().cpu(), 8000)
torchaudio.save("source3hat.wav", est_sources[:, :, 2].detach().cpu(), 8000)

The system expects input recordings sampled at 8 kHz (single channel). If your signal has a different sample rate, resample it (e.g., using torchaudio or sox) before using the interface.
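
For example, a resampling sketch with torchaudio (the file names are illustrative):

import torchaudio

waveform, sample_rate = torchaudio.load("my_recording.wav")
# Downmix to mono if the recording has more than one channel
waveform = waveform.mean(dim=0, keepdim=True)
# Resample to the 8 kHz rate the model expects
if sample_rate != 8000:
    waveform = torchaudio.functional.resample(waveform, orig_freq=sample_rate, new_freq=8000)
torchaudio.save("my_recording_8k.wav", waveform, 8000)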

Inference on GPU

To perform inference on the GPU, add run_opts={"device":"cuda"} when calling the from_hparams method.
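
For example (the same call as above, with the device option added):

model = separator.from_hparams(
    source="speechbrain/sepformer-libri3mix",
    savedir='pretrained_models/sepformer-libri3mix',
    run_opts={"device": "cuda"},  # run the model on the GPU
)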

Training

The model was trained with SpeechBrain (fc2eabb7). To train it from scratch, follow these steps:

  • Clone SpeechBrain:

    git clone https://github.com/speechbrain/speechbrain/

  • Install it:

    cd speechbrain
    pip install -r requirements.txt
    pip install -e .

  • Run the training:

    cd recipes/LibriMix/separation
    python train.py hparams/sepformer.yaml --data_folder=your_data_folder


Note: change num_spks to 3 in the yaml file, as shown below.
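
That is, the relevant entry in hparams/sepformer.yaml should read (assuming the recipe exposes the speaker count under the num_spks key, as the note indicates):

num_spks: 3

Since SpeechBrain hparams files are parsed with HyperPyYAML, the same change can typically be passed as a command-line override instead, e.g. python train.py hparams/sepformer.yaml --data_folder=your_data_folder --num_spks=3 (worth verifying against the recipe).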

You can find our training results (models, logs, etc.) here.

Limitations

The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.

Referencing SpeechBrain

@misc{speechbrain,
  title={{SpeechBrain}: A General-Purpose Speech Toolkit},
  author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
  year={2021},
  eprint={2106.04624},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  note={arXiv:2106.04624}
}

Referencing SepFormer

@inproceedings{subakan2021attention,
  title={Attention is All You Need in Speech Separation},
  author={Cem Subakan and Mirco Ravanelli and Samuele Cornell and Mirko Bronzi and Jianyuan Zhong},
  year={2021},
  booktitle={ICASSP 2021}
}

@misc{subakan2022sepformer,
  author = {Subakan, Cem and Ravanelli, Mirco and Cornell, Samuele and Grondin, Francois and Bronzi, Mirko},
  title = {On Using Transformers for Speech-Separation},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
    

About SpeechBrain