Model:

cointegrated/rubert-tiny-sentiment-balanced

This is the cointegrated/rubert-tiny model fine-tuned for sentiment classification of short Russian texts.

The problem is formulated as multiclass classification: negative vs neutral vs positive.

Usage

The following function estimates the sentiment of a given text:

# !pip install transformers sentencepiece --quiet
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_checkpoint = 'cointegrated/rubert-tiny-sentiment-balanced'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)
if torch.cuda.is_available():
    model.cuda()
    
def get_sentiment(text, return_type='label'):
    """ Calculate sentiment of a text. `return_type` can be 'label', 'score' or 'proba' """
    with torch.no_grad():
        inputs = tokenizer(text, return_tensors='pt', truncation=True, padding=True).to(model.device)
        proba = torch.sigmoid(model(**inputs).logits).cpu().numpy()[0]
    if return_type == 'label':
        return model.config.id2label[proba.argmax()]
    elif return_type == 'score':
        return proba.dot([-1, 0, 1])
    return proba

text = 'Какая гадость эта ваша заливная рыба!'
# classify the text
print(get_sentiment(text, 'label'))  # negative
# score the text on the scale from -1 (very negative) to +1 (very positive)
print(get_sentiment(text, 'score'))  # -0.5894946306943893
# calculate probabilities of all labels
print(get_sentiment(text, 'proba'))  # [0.7870447  0.4947824  0.19755007]
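
Note that the function above applies a per-class sigmoid rather than a softmax, so the three values returned by 'proba' are independent probabilities and do not sum to 1 (as the example output shows). A minimal sketch for pairing them with their label names, reusing only the model.config.id2label mapping already used in the code above:

proba = get_sentiment(text, 'proba')
for i, p in enumerate(proba):
    # id2label maps class indices to 'negative' / 'neutral' / 'positive'
    print(model.config.id2label[i], round(float(p), 3))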

Training

We trained the model on the datasets collected by Smetanin. We converted all training data into a 3-class format and up- and downsampled the training data to balance both the sources and the classes (a rough illustration of such balancing is sketched after the table below). The training code is available in a Colab notebook. The metrics on the balanced test set are as follows:

Source                   Macro F1
SentiRuEval2016_banks    0.83
SentiRuEval2016_tele     0.74
kaggle_news              0.66
linis                    0.50
mokoron                  0.98
rureviews                0.72
rusentiment              0.67
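
The exact balancing code lives in the Colab notebook linked above; as a rough, purely illustrative sketch (the column names text, label, source and the per-group size are assumptions, not the actual training setup), per-(source, class) resampling with pandas could look like this:

import pandas as pd

def balance(df, per_group=5000, seed=0):
    """Resample each (source, label) group to the same size:
    upsample small groups with replacement, downsample large ones without."""
    parts = []
    for _, group in df.groupby(['source', 'label']):
        parts.append(group.sample(n=per_group,
                                  replace=len(group) < per_group,
                                  random_state=seed))
    # shuffle the concatenated, balanced frame
    return pd.concat(parts).sample(frac=1, random_state=seed).reset_index(drop=True)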