Model: asapp/sew-d-base-plus-400k-ft-ls100h
The base model was pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, such as automatic speech recognition, speaker identification, intent classification, emotion recognition, etc.
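If your audio is stored at a different sampling rate, the `datasets` library can resample it on the fly via the `Audio` feature. A minimal sketch, where the dataset name is a placeholder for your own data:

```python
from datasets import load_dataset, Audio

# placeholder dataset name; substitute your own audio dataset
ds = load_dataset("your_dataset", split="test")

# casting the "audio" column makes every example decode at 16kHz on access
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
```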
Paper: Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
Abstract: This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints.
To transcribe audio files, the model can be used as a standalone acoustic model as follows:
```python
from transformers import Wav2Vec2Processor, SEWDForCTC
from datasets import load_dataset
import torch

# load the model and preprocessor
processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-base-plus-400k-ft-ls100h")
model = SEWDForCTC.from_pretrained("asapp/sew-d-base-plus-400k-ft-ls100h")

# load the dummy dataset with speech samples
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")

# preprocess (the model expects 16kHz input)
input_values = processor(ds[0]["audio"]["array"], sampling_rate=16000, return_tensors="pt").input_values  # Batch size 1

# retrieve logits
logits = model(input_values).logits

# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
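The snippet above uses a dummy LibriSpeech dataset. To transcribe a local recording instead, one option is to load and resample it with torchaudio; a sketch, where "audio.wav" is a placeholder path:

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, SEWDForCTC

processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-base-plus-400k-ft-ls100h")
model = SEWDForCTC.from_pretrained("asapp/sew-d-base-plus-400k-ft-ls100h")

# "audio.wav" is a placeholder for your own recording
waveform, sample_rate = torchaudio.load("audio.wav")

# collapse to mono and resample to the 16kHz the model expects
waveform = waveform.mean(dim=0)
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, orig_freq=sample_rate, new_freq=16_000)

input_values = processor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits
transcription = processor.batch_decode(torch.argmax(logits, dim=-1))
print(transcription[0])
```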
This code snippet shows how to evaluate asapp/sew-d-base-plus-400k-ft-ls100h on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import SEWDForCTC, Wav2Vec2Processor
import torch
from jiwer import wer

librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")

model = SEWDForCTC.from_pretrained("asapp/sew-d-base-plus-400k-ft-ls100h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("asapp/sew-d-base-plus-400k-ft-ls100h")

def map_to_pred(batch):
    # batch_size=1, so batch["audio"] holds a single example
    input_values = processor(batch["audio"][0]["array"], sampling_rate=16000,
                             return_tensors="pt", padding="longest").input_values
    with torch.no_grad():
        logits = model(input_values.to("cuda")).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    batch["transcription"] = processor.batch_decode(predicted_ids)
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])

print("WER:", wer(result["text"], result["transcription"]))
```
Results (WER):
"clean" | "other" |
---|---|
4.34 | 9.45 |
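The script above produces the "clean" number; the "other" column comes from the same script with only the dataset configuration swapped:

```python
librispeech_eval = load_dataset("librispeech_asr", "other", split="test")
```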