Model: cardiffnlp/twitter-roberta-base-2022-154m

Language: English

Twitter 2022 154M (RoBERTa-base, 154M - full update)

This is a RoBERTa-base model trained on 154M tweets until the end of December 2022, starting from the original checkpoint (no incremental updates).

These 154M tweets were filtered from 220M tweets collected exclusively through the Twitter Academic API, covering every month between January 2018 and December 2022. Filtering and preprocessing details are available in the TimeLMs paper.

Below, we provide some usage examples using the standard Transformers interface. For another interface better suited to comparing predictions and perplexity scores across models trained over different time intervals, check the TimeLMs repository.

For other models trained until different periods, check the table here.

Preprocess Text

Replace usernames and links with the placeholders "@user" and "http". If you're interested in retaining verified users, which were also retained during training, you may keep the users listed here.

def preprocess(text):
    preprocessed_text = []
    for t in text.split():
        if len(t) > 1:
            # Replace user mentions (a single '@') with the '@user' placeholder.
            t = '@user' if t[0] == '@' and t.count('@') == 1 else t
            # Replace links with the 'http' placeholder.
            t = 'http' if t.startswith('http') else t
        preprocessed_text.append(t)
    return ' '.join(preprocessed_text)
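
As a quick sanity check, here is what the function produces on a made-up tweet (the handle and link below are our own illustration, not from the training data):

print(preprocess("@someuser loving the new update https://t.co/abc123"))
# -> '@user loving the new update http'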

Example Masked Language Model

from transformers import pipeline, AutoTokenizer

MODEL = "cardiffnlp/twitter-roberta-base-2022-154m"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)

def pprint(candidates, n):
    # Print the top-n fill-mask candidates with their scores.
    for i in range(n):
        token = tokenizer.decode(candidates[i]['token'])
        score = candidates[i]['score']
        print("%d) %.5f %s" % (i+1, score, token))

texts = [
    "So glad I'm <mask> vaccinated.",
    "I keep forgetting to bring a <mask>.",
    "Looking forward to watching <mask> Game tonight!",
]
for text in texts:
    t = preprocess(text)
    print(f"{'-'*30}\n{t}")
    candidates = fill_mask(t)
    pprint(candidates, 5)

Output:

------------------------------
So glad I'm <mask> vaccinated.
1) 0.26251  not
2) 0.25460  a
3) 0.12611  in
4) 0.11036  the
5) 0.04210  getting
------------------------------
I keep forgetting to bring a <mask>.
1) 0.09274  charger
2) 0.04727  lighter
3) 0.04469  mask
4) 0.04395  drink
5) 0.03644  camera
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.57683  Squid
2) 0.17419  The
3) 0.04198  the
4) 0.00970  Spring
5) 0.00921  Big
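
If you want the score assigned to specific candidate tokens rather than the top predictions, the fill-mask pipeline also accepts a targets argument (the candidate words below are our own illustration):

t = preprocess("So glad I'm <mask> vaccinated.")
# Score only the listed candidates; RoBERTa vocabulary tokens carry a leading space.
for cand in fill_mask(t, targets=[" fully", " not"]):
    print("%.5f %s" % (cand['score'], cand['token_str']))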

Example Tweet Embeddings

from transformers import AutoTokenizer, AutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter

def get_embedding(text):  # naive approach for demonstration
    text = preprocess(text)
    encoded_input = tokenizer(text, return_tensors='pt')
    features = model(**encoded_input)
    # Last hidden state as a numpy array: (1, sequence_length, hidden_size)
    features = features[0].detach().cpu().numpy()
    # Mean-pool over tokens to get a single tweet vector.
    return np.mean(features[0], axis=0)


MODEL = "cardiffnlp/twitter-roberta-base-2022-154m"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)

query = "The book was awesome"
tweets = ["I just ordered fried chicken ?", 
          "The movie was great",
          "What time is the next game?",
          "Just finished reading 'Embeddings in NLP'"]

sims = Counter()
for tweet in tweets:
    sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
    sims[tweet] = sim

print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
    print("%d) %.5f %s" % (idx+1, sim, tweet))

Output:

Most similar to:  The book was awesome
------------------------------
1) 0.99403 The movie was great
2) 0.98006 Just finished reading 'Embeddings in NLP'
3) 0.97314 What time is the next game?
4) 0.92448 I just ordered fried chicken ?
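
The naive mean above averages over every position, including special tokens, and ignores padding. When embedding a batch of tweets, a common refinement (our own sketch, not part of the original example) is to mask out padding before pooling:

import torch

def get_embeddings_batched(texts):
    # Tokenize a batch with padding; the attention mask marks real tokens.
    enc = tokenizer([preprocess(t) for t in texts], return_tensors='pt', padding=True)
    with torch.no_grad():
        hidden = model(**enc)[0]  # (batch, seq_len, hidden_size)
    mask = enc['attention_mask'].unsqueeze(-1).float()
    # Mean-pool over real tokens only.
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()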

Example Feature Extraction

from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np

MODEL = "cardiffnlp/twitter-roberta-base-2022-154m"
tokenizer = AutoTokenizer.from_pretrained(MODEL)

text = "Good night ?"
text = preprocess(text)

# Pytorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy() 
features_mean = np.mean(features[0], axis=0) 
#features_max = np.max(features[0], axis=0)

# # Tensorflow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0) 
# #features_max = np.max(features[0], axis=0)
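
For reference, a RoBERTa-base encoder has a hidden size of 768, so the extracted features can be sanity-checked like this:

print(features.shape)       # (1, sequence_length, 768)
print(features_mean.shape)  # (768,)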