Model:

cardiffnlp/twitter-roberta-base-2021-124m

Twitter 2021 124M (RoBERTa-base)

This is a RoBERTa-base model trained on 123.86M tweets collected until the end of 2021. More details and performance scores are available here.

Below are some usage examples using the standard Transformers interface. If you need to compare predictions and perplexity scores between models trained over different time intervals, a better-suited interface is available here.

To check other models trained until different time periods, see here.
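
For reference, all examples below load the checkpoint through the standard Auto classes. A minimal loading sketch (our addition; the original examples instantiate the model via pipeline and AutoModel, while AutoModelForMaskedLM here is simply the masked-LM variant of the same checkpoint):

from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL = "cardiffnlp/twitter-roberta-base-2021-124m"
tokenizer = AutoTokenizer.from_pretrained(MODEL)      # RoBERTa byte-level BPE tokenizer
model = AutoModelForMaskedLM.from_pretrained(MODEL)   # RoBERTa-base weights with the MLM head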

Preprocess Text

Replace usernames and links with the placeholders "@user" and "http". If you are interested in retaining verified users, which were also kept during training, you may keep the users listed here.

def preprocess(text):
    # Replace user mentions with the '@user' placeholder and links with 'http'.
    preprocessed_text = []
    for t in text.split():
        if len(t) > 1:
            t = '@user' if t[0] == '@' and t.count('@') == 1 else t
            t = 'http' if t.startswith('http') else t
        preprocessed_text.append(t)
    return ' '.join(preprocessed_text)
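
As a quick sanity check (the sample tweet below is ours, not from the original card), the function maps user mentions to "@user" and links to "http", leaving everything else untouched:

print(preprocess("@TimeLMs loved this thread, more at https://example.com !"))
# -> '@user loved this thread, more at http !'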

Example Masked Language Model

from transformers import pipeline, AutoTokenizer

MODEL = "cardiffnlp/twitter-roberta-base-2021-124m"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)

def pprint(candidates, n):
    for i in range(n):
        token = tokenizer.decode(candidates[i]['token'])
        score = candidates[i]['score']
        print("%d) %.5f %s" % (i+1, score, token))

texts = [
    "So glad I'm <mask> vaccinated.",
    "I keep forgetting to bring a <mask>.",
    "Looking forward to watching <mask> Game tonight!",
]

for text in texts:
    t = preprocess(text)
    print(f"{'-'*30}\n{t}")
    candidates = fill_mask(t)
    pprint(candidates, 5)

Output:

------------------------------
So glad I'm <mask> vaccinated.
1) 0.39613  fully
2) 0.26333  getting
3) 0.18988  not
4) 0.02312  still
5) 0.02099  already
------------------------------
I keep forgetting to bring a <mask>.
1) 0.08356  mask
2) 0.05696  book
3) 0.03505  bag
4) 0.02983  backpack
5) 0.02847  blanket
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.46618  the
2) 0.24042  The
3) 0.03216  End
4) 0.02925  Squid
5) 0.02610  this
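
The fill-mask pipeline returns five candidates by default. With recent versions of transformers, more candidates can be requested with the top_k argument at call time (a small extension of the example above, not part of the original card; it reuses fill_mask, preprocess and pprint as defined earlier):

candidates = fill_mask(preprocess("So glad I'm <mask> vaccinated."), top_k=10)
pprint(candidates, 10)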

Example Tweet Embeddings

from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter

def get_embedding(text):  # naive approach for demonstration
    text = preprocess(text)
    encoded_input = tokenizer(text, return_tensors='pt')
    features = model(**encoded_input)
    features = features[0].detach().cpu().numpy()
    return np.mean(features[0], axis=0)


MODEL = "cardiffnlp/twitter-roberta-base-2021-124m"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)

query = "The book was awesome"
tweets = ["I just ordered fried chicken ?", 
          "The movie was great",
          "What time is the next game?",
          "Just finished reading 'Embeddings in NLP'"]

sims = Counter()
for tweet in tweets:
    sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
    sims[tweet] = sim

print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
    print("%d) %.5f %s" % (idx+1, sim, tweet))

Output:

Most similar to:  The book was awesome
------------------------------
1) 0.98969 The movie was great
2) 0.96102 Just finished reading 'Embeddings in NLP'
3) 0.95565 I just ordered fried chicken ?
4) 0.95041 What time is the next game?
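
get_embedding above naively averages every position, and would also average over padding once inputs of different lengths are batched together. A mask-aware variant (our addition, a sketch rather than part of the original card; it assumes torch is installed and reuses tokenizer, model and preprocess from above) excludes padded positions via the attention mask:

import torch

def get_embeddings_masked(texts):
    # Mean-pool the last hidden states, excluding padded positions via the attention mask.
    encoded = tokenizer([preprocess(t) for t in texts],
                        padding=True, truncation=True, return_tensors='pt')
    with torch.no_grad():
        hidden = model(**encoded).last_hidden_state    # (batch, seq_len, hidden_size)
    mask = encoded['attention_mask'].unsqueeze(-1)     # (batch, seq_len, 1)
    summed = (hidden * mask).sum(dim=1)                # zero out padded positions
    counts = mask.sum(dim=1).clamp(min=1)              # number of attended tokens per text
    return (summed / counts).numpy()                   # (batch, hidden_size)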

Example Feature Extraction

from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np

MODEL = "cardiffnlp/twitter-roberta-base-2021-124m"
tokenizer = AutoTokenizer.from_pretrained(MODEL)

text = "Good night ?"
text = preprocess(text)

# PyTorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy() 
features_mean = np.mean(features[0], axis=0) 
#features_max = np.max(features[0], axis=0)

# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0) 
# #features_max = np.max(features[0], axis=0)
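
A similar mean-pooled feature vector can also be obtained through the feature-extraction pipeline (a sketch reusing MODEL and text from above; note that the exact nesting of the returned list may vary between transformers versions):

from transformers import pipeline
import numpy as np

extractor = pipeline("feature-extraction", model=MODEL, tokenizer=MODEL)
features = np.array(extractor(text)[0])   # (num_tokens, hidden_size) for a single input string
features_mean = features.mean(axis=0)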