
Reward model trained from human feedback

A reward model (RM) trained to predict which generated answer humans judge as better, given a question.

An RM is useful for:

  • QA model evaluation
  • serving as the reward score in RLHF
  • detecting potentially toxic responses via ranking

All models are trained on the datasets listed in the Performance table below, with the same split seed across datasets (if a validation split was not available).
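The card does not state the exact training objective, but reward models of this kind are commonly fit to preference pairs with a pairwise ranking loss that pushes the score of the human-preferred answer above the rejected one. A minimal sketch, assuming the common -log(sigmoid(chosen - rejected)) formulation (not confirmed to be the loss used for these checkpoints):

import torch
from torch import nn

def pairwise_ranking_loss(chosen_scores, rejected_scores):
    # Penalize cases where the rejected answer scores at least as high as the chosen one
    return -nn.functional.logsigmoid(chosen_scores - rejected_scores).mean()

# Toy scores for a batch of two preference pairs (illustrative values only)
chosen = torch.tensor([1.2, 0.3])
rejected = torch.tensor([-0.5, 0.1])
print(pairwise_ranking_loss(chosen, rejected))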

How to use

from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the reward model and its tokenizer
reward_name = "OpenAssistant/reward-model-deberta-v3-large-v2"
rank_model = AutoModelForSequenceClassification.from_pretrained(reward_name)
tokenizer = AutoTokenizer.from_pretrained(reward_name)

# Score a single (question, answer) pair; a higher logit means a better answer
question, answer = "Explain nuclear fusion like I am five", "Nuclear fusion is the process by which two or more protons and neutrons combine to form a single nucleus. It is a very important process in the universe, as it is the source of energy for stars and galaxies. Nuclear fusion is also a key process in the production of energy for nuclear power plants."
inputs = tokenizer(question, answer, return_tensors='pt')
score = rank_model(**inputs).logits[0].cpu().detach()
print(score)
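For QA model evaluation, the same score can rank several candidate answers to one question. A minimal sketch continuing from the snippet above (the candidate answers here are made up for illustration):

import torch

candidates = [
    "Nuclear fusion is when two tiny atoms squeeze together into one bigger atom and release a lot of energy, like inside the Sun.",
    "Fusion is a kind of dance.",
    "I am not sure.",
]
# Encode all (question, candidate) pairs as one padded batch
inputs = tokenizer([question] * len(candidates), candidates, return_tensors='pt', padding=True)
with torch.no_grad():
    scores = rank_model(**inputs).logits.squeeze(-1)
# Print candidates from best to worst according to the reward model
for answer_text, s in sorted(zip(candidates, scores.tolist()), key=lambda x: x[1], reverse=True):
    print(f"{s:+.3f}  {answer_text}")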

Toxic response detection

from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the reward model and its tokenizer
reward_name = "OpenAssistant/reward-model-deberta-v3-large-v2"
rank_model = AutoModelForSequenceClassification.from_pretrained(reward_name)
tokenizer = AutoTokenizer.from_pretrained(reward_name)

question = "I just came out of from jail, any suggestion of my future?"
helpful = "It's great to hear that you have been released from jail."
bad = "Go back to jail you scum"

# Score the helpful reply
inputs = tokenizer(question, helpful, return_tensors='pt')
good_score = rank_model(**inputs).logits[0].cpu().detach()

# Score the toxic reply
inputs = tokenizer(question, bad, return_tensors='pt')
bad_score = rank_model(**inputs).logits[0].cpu().detach()

# The helpful reply should receive the higher score
print(good_score > bad_score) # tensor([True])
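If a normalized preference signal is needed (for example as a filtering threshold or a reward in an RLHF-style pipeline), the difference between two scores can be mapped to a Bradley-Terry-style probability. A small sketch continuing from the snippet above, assuming the model was trained with a pairwise ranking objective:

import torch

# Probability that the helpful reply is preferred over the toxic one
prob_helpful_preferred = torch.sigmoid(good_score - bad_score)
print(prob_helpful_preferred)  # close to 1.0 when the helpful reply scores much higher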

Performance

Validation split accuracy

Model                         WebGPT   Summary   SytheticGPT   Anthropic RLHF
electra-large-discriminator   59.30    68.66     99.85         54.33
deberta-v3-large-v2           61.57    71.47     99.88         69.25
deberta-v3-large              61.13    72.23     99.94         55.62
deberta-v3-base               59.07    66.84     99.85         54.51
deberta-v2-xxlarge            58.67    73.27     99.77         66.74

SytheticGPT likely has some surface pattern in its chosen-rejected pairs that makes it trivial to tell the better answer apart.

Other

Sincere thanks to stability.ai for their unwavering support with A100 compute resources. Their contribution was crucial to the successful completion of this research project.