laion/clap-htsat-unfused

Model card for CLAP: Contrastive Language-Audio Pretraining

Table of Contents

  • TL;DR
  • Model Details
  • Usage
  • Uses
  • Citation
    TL;DR

    The abstract of the paper states that:

    Contrastive learning has shown remarkable success in the field of multimodal representation learning. In this paper, we propose a pipeline of contrastive language-audio pretraining to develop an audio representation by combining audio data with natural language descriptions. To accomplish this target, we first release LAION-Audio-630K, a large collection of 633,526 audio-text pairs from different data sources. Second, we construct a contrastive language-audio pretraining model by considering different audio encoders and text encoders. We incorporate the feature fusion mechanism and keyword-to-caption augmentation into the model design to further enable the model to process audio inputs of variable lengths and enhance the performance. Third, we perform comprehensive experiments to evaluate our model across three tasks: text-to-audio retrieval, zero-shot audio classification, and supervised audio classification. The results demonstrate that our model achieves superior performance in the text-to-audio retrieval task. In audio classification tasks, the model achieves state-of-the-art performance in the zero-shot setting and obtains performance comparable to that of models in the non-zero-shot setting. Both LAION-Audio-630K and the proposed model are available to the public.

    Usage

    You can use this model for zero-shot audio classification or to extract audio and/or text features.

    Uses

    Perform zero-shot audio classification

    Using pipeline

    from datasets import load_dataset
    from transformers import pipeline

    # Load an example clip from the ESC-50 environmental sound dataset
    dataset = load_dataset("ashraq/esc50")
    audio = dataset["train"]["audio"][-1]["array"]

    # Score the clip against free-form candidate labels
    audio_classifier = pipeline(task="zero-shot-audio-classification", model="laion/clap-htsat-unfused")
    output = audio_classifier(audio, candidate_labels=["Sound of a dog", "Sound of vacuum cleaner"])
    print(output)
    >>> [{"score": 0.999, "label": "Sound of a dog"}, {"score": 0.001, "label": "Sound of vacuum cleaner"}]
    
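    Under the hood, the pipeline encodes the audio and the candidate labels with ClapModel and ClapProcessor and compares them in the shared embedding space. Below is a minimal sketch of the same computation done by hand; note that the pipeline additionally formats each label with a hypothesis template by default, so its scores can differ slightly from this sketch:

    from datasets import load_dataset
    from transformers import ClapModel, ClapProcessor

    dataset = load_dataset("ashraq/esc50")
    audio = dataset["train"]["audio"][-1]["array"]

    model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
    processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

    # Encode the candidate labels and the audio clip in a single call
    labels = ["Sound of a dog", "Sound of vacuum cleaner"]
    inputs = processor(text=labels, audios=audio, return_tensors="pt", padding=True)

    # logits_per_audio holds audio-text similarity scores; softmax turns them into label probabilities
    outputs = model(**inputs)
    probs = outputs.logits_per_audio.softmax(dim=-1)
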

    Run the model:

    You can also get the audio and text embeddings using ClapModel.

    Run the model on CPU:

    from datasets import load_dataset
    from transformers import ClapModel, ClapProcessor

    # A small speech dataset, used here only as a source of example audio
    librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
    audio_sample = librispeech_dummy[0]

    model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
    processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

    # Preprocess the raw waveform, then project it into the shared audio-text embedding space
    inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt")
    audio_embed = model.get_audio_features(**inputs)
    

    Run the model on GPU:

    from datasets import load_dataset
    from transformers import ClapModel, ClapProcessor

    librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
    audio_sample = librispeech_dummy[0]

    # Move both the model and the inputs to the first CUDA device
    model = ClapModel.from_pretrained("laion/clap-htsat-unfused").to(0)
    processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

    inputs = processor(audios=audio_sample["audio"]["array"], return_tensors="pt").to(0)
    audio_embed = model.get_audio_features(**inputs)
    
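    Text embeddings come from the same model via get_text_features and live in the same projection space as the audio embeddings, so the two can be compared directly (e.g. with cosine similarity) for text-to-audio retrieval. A minimal sketch; the example sentences are arbitrary:

    from transformers import ClapModel, ClapProcessor

    model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
    processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

    # Tokenize the texts (arbitrary examples) and project them into the shared embedding space
    inputs = processor(text=["a recording of human speech", "an engine idling"], return_tensors="pt", padding=True)
    text_embed = model.get_text_features(**inputs)
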

    Citation

    If you are using this model in your work, please consider citing the original paper:

    @misc{https://doi.org/10.48550/arxiv.2211.06687,
      doi = {10.48550/ARXIV.2211.06687},
      url = {https://arxiv.org/abs/2211.06687},
      author = {Wu, Yusong and Chen, Ke and Zhang, Tianyu and Hui, Yuchen and Berg-Kirkpatrick, Taylor and Dubnov, Shlomo},
      keywords = {Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering},
      title = {Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation},
      publisher = {arXiv},
      year = {2022},
      copyright = {Creative Commons Attribution 4.0 International}
    }