Model:
speechbrain/urbansound8k_ecapa
This repository provides all the necessary tools to perform sound recognition with SpeechBrain, using a model pretrained on UrbanSound8k. You can download the dataset here. The provided system can recognize the following 10 keywords:
dog_bark, children_playing, air_conditioner, street_music, gun_shot, siren, engine_idling, jackhammer, drilling, car_horn
For a better experience, we encourage you to learn more about SpeechBrain. The model performance on the test set is:
| Release | Accuracy 1-fold (%) |
|---|---|
| 04-06-21 | 75.5 |
The system is composed of an ECAPA model with statistical pooling. A classifier, trained with categorical cross-entropy loss, is applied on top of it.
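For intuition only, here is a minimal sketch of that architecture (not the official training recipe): SpeechBrain's ECAPA_TDNN lobe, whose embedding already ends in attentive statistical pooling, followed by a plain linear classifier over the 10 classes and a cross-entropy loss. The 80-dimensional filterbank input, the 192-dimensional embedding size, and the dummy tensors are illustrative assumptions, not values read from the released hyperparameters.

```python
# Illustrative sketch: ECAPA-TDNN embeddings + 10-way linear classifier
# trained with categorical cross-entropy. Feature/embedding sizes are assumptions.
import torch
from speechbrain.lobes.models.ECAPA_TDNN import ECAPA_TDNN

n_classes = 10                                   # UrbanSound8k classes
embedding_model = ECAPA_TDNN(input_size=80, lin_neurons=192)
classifier = torch.nn.Linear(192, n_classes)

feats = torch.rand(4, 200, 80)                   # dummy (batch, time, features) filterbanks
labels = torch.randint(0, n_classes, (4,))       # dummy targets

embeddings = embedding_model(feats)              # (batch, 1, 192)
logits = classifier(embeddings.squeeze(1))       # (batch, n_classes)
loss = torch.nn.functional.cross_entropy(logits, labels)
```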
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please note that we encourage you to read our tutorials and learn more about SpeechBrain.
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier

classifier = EncoderClassifier.from_hparams(
    source="speechbrain/urbansound8k_ecapa",
    savedir="pretrained_models/gurbansound8k_ecapa",
)
out_prob, score, index, text_lab = classifier.classify_file('speechbrain/urbansound8k_ecapa/dog_bark.wav')
print(text_lab)
```
The system is trained with recordings sampled at 16 kHz (single channel). When calling classify_file, the code automatically normalizes your audio if needed (i.e., resampling + mono channel selection). If you use encode_batch or classify_batch, make sure your input tensors comply with the expected sampling rate.
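As an illustration of that requirement, the hedged sketch below loads an arbitrary file with torchaudio, down-mixes it to mono, resamples it to 16 kHz, and then calls classify_batch. The file name my_sound.wav is a placeholder assumption, not a file shipped with this model.

```python
# Hedged sketch: manual normalization to mono 16 kHz before classify_batch.
# "my_sound.wav" is a placeholder path, not part of the model repository.
import torchaudio
from speechbrain.pretrained import EncoderClassifier

classifier = EncoderClassifier.from_hparams(
    source="speechbrain/urbansound8k_ecapa",
    savedir="pretrained_models/gurbansound8k_ecapa",
)

signal, fs = torchaudio.load("my_sound.wav")          # (channels, samples)
signal = signal.mean(dim=0, keepdim=True)             # down-mix to a single channel
if fs != 16000:
    signal = torchaudio.functional.resample(signal, fs, 16000)

out_prob, score, index, text_lab = classifier.classify_batch(signal)
print(text_lab)
```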
To perform inference on the GPU, add run_opts={"device":"cuda"} when calling the from_hparams method.
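For example, loading the same pretrained classifier on a CUDA device would look like the snippet below; only the run_opts argument changes.

```python
# Same model as above, but loaded on the GPU via run_opts (requires a CUDA device).
from speechbrain.pretrained import EncoderClassifier

classifier = EncoderClassifier.from_hparams(
    source="speechbrain/urbansound8k_ecapa",
    savedir="pretrained_models/gurbansound8k_ecapa",
    run_opts={"device": "cuda"},
)
```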
The model was trained with SpeechBrain (8cab8b0c). To train it from scratch, follow these steps:
```
git clone https://github.com/speechbrain/speechbrain/
cd speechbrain
pip install -r requirements.txt
pip install -e .
cd recipes/UrbanSound8k/SoundClassification
python train.py hparams/train_ecapa_tdnn.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc.) here.
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
Referencing ECAPA:
```bibtex
@inproceedings{DBLP:conf/interspeech/DesplanquesTD20,
  author    = {Brecht Desplanques and Jenthe Thienpondt and Kris Demuynck},
  editor    = {Helen Meng and Bo Xu and Thomas Fang Zheng},
  title     = {{ECAPA-TDNN:} Emphasized Channel Attention, Propagation and Aggregation in {TDNN} Based Speaker Verification},
  booktitle = {Interspeech 2020},
  pages     = {3830--3834},
  publisher = {{ISCA}},
  year      = {2020},
}
```

Referencing UrbanSound:
```bibtex
@inproceedings{Salamon:UrbanSound:ACMMM:14,
  Author    = {Salamon, J. and Jacoby, C. and Bello, J. P.},
  Booktitle = {22nd {ACM} International Conference on Multimedia (ACM-MM'14)},
  Month     = {Nov.},
  Pages     = {1041--1044},
  Title     = {A Dataset and Taxonomy for Urban Sound Research},
  Year      = {2014},
}
```
Please cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
  title={{SpeechBrain}: A General-Purpose Speech Toolkit},
  author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
  year={2021},
  eprint={2106.04624},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  note={arXiv:2106.04624}
}
```