Model:

cardiffnlp/tweet-topic-latest-multi

English

tweet-topic-latest-multi

This is a RoBERTa-base model trained on 168.86M tweets up to the end of September 2022, and finetuned for multi-label topic classification on a corpus of 11,267 tweets. The original RoBERTa-base model can be found here. This model is suitable for English.

Labels:

0: arts_&_culture
1: business_&_entrepreneurs
2: celebrity_&_pop_culture
3: diaries_&_daily_life
4: family
5: fashion_&_style
6: film_tv_&_video
7: fitness_&_health
8: food_&_dining
9: gaming
10: learning_&_educational
11: music
12: news_&_social_concern
13: other_hobbies
14: relationships
15: science_&_technology
16: sports
17: travel_&_adventure
18: youth_&_student_life

Full classification example

from transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import expit

MODEL = "cardiffnlp/tweet-topic-latest-multi"
tokenizer = AutoTokenizer.from_pretrained(MODEL)

# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
class_mapping = model.config.id2label

text = "It is great to see athletes promoting awareness for climate change."
tokens = tokenizer(text, return_tensors='pt')
output = model(**tokens)

scores = output[0][0].detach().numpy()
scores = expit(scores)
predictions = (scores >= 0.5) * 1


# TF
#tf_model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
#class_mapping = tf_model.config.id2label
#text = "It is great to see athletes promoting awareness for climate change."
#tokens = tokenizer(text, return_tensors='tf')
#output = tf_model(**tokens)
#scores = output[0][0]
#scores = expit(scores)
#predictions = (scores >= 0.5) * 1

# Map to classes
for i in range(len(predictions)):
  if predictions[i]:
    print(class_mapping[i])

Output:

fitness_&_health
news_&_social_concern
sports
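Because this is multi-label classification, the example above applies a sigmoid (`expit`) to each logit independently rather than a softmax over all classes, so several topics can cross the 0.5 threshold at once. The thresholding step can be seen in isolation in the sketch below, which uses made-up logits for the 19 classes (not real model output) and requires no model download:

```python
import numpy as np
from scipy.special import expit

# Hypothetical raw logits for the 19 topic classes (illustrative values only).
logits = np.array([-3.2, -2.8, -1.5, -2.0, -2.6, -3.0, -2.4, 1.8, -2.9, -3.1,
                   -2.7, -2.2, 0.9, -2.5, -1.9, -2.3, 2.6, -2.8, -3.0])

# Sigmoid scores each class independently in (0, 1).
scores = expit(logits)

# Any class with probability >= 0.5 is predicted, so multiple labels can fire.
predictions = (scores >= 0.5) * 1

print(np.flatnonzero(predictions))  # → [ 7 12 16]
```

With these example logits, classes 7, 12, and 16 exceed the threshold, which would map to fitness_&_health, news_&_social_concern, and sports via `model.config.id2label`.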