
Reward model trained from human feedback

A reward model (RM) is trained to predict which generated answer humans judge as better for a given question.

Reward models are useful in these domains:

  • QA model evaluation

  • serving as the reward score in RLHF

All models are trained on these datasets with the same split seed (if no validation split was available).

How to use

from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the reward model and its tokenizer.
reward_name = "OpenAssistant/reward-model-deberta-v3-base"
rank_model, tokenizer = AutoModelForSequenceClassification.from_pretrained(reward_name), AutoTokenizer.from_pretrained(reward_name)

# Score a (question, answer) pair; a higher score means the answer is judged better.
question, answer = "Explain nuclear fusion like I am five", "Nuclear fusion is the process by which two or more protons and neutrons combine to form a single nucleus. It is a very important process in the universe, as it is the source of energy for stars and galaxies. Nuclear fusion is also a key process in the production of energy for nuclear power plants."
inputs = tokenizer(question, answer, return_tensors='pt')
score = rank_model(**inputs).logits[0].cpu().detach()
print(score)
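
Because the model is trained to rank a human-preferred answer above a rejected one, its score is most meaningful when compared across candidate answers to the same question. A minimal sketch reusing the objects loaded above (the bad_answer string is an invented example, not from the model card):

# Compare the scores of two candidate answers; the better answer should score higher.
bad_answer = "I don't know, nuclear fusion is too complicated to explain."
good_score = rank_model(**tokenizer(question, answer, return_tensors='pt')).logits[0].item()
bad_score = rank_model(**tokenizer(question, bad_answer, return_tensors='pt')).logits[0].item()
print(good_score > bad_score)  # expected: True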

Performance

Validation split accuracy

Model                          WebGPT   Summary   SytheticGPT
electra-large-discriminator    59.30    68.66     99.85
deberta-v3-large               61.13    72.23     99.94
deberta-v3-base                59.07    66.84     99.85

SytheticGPT likely has some kind of surface pattern in its chosen-rejected pairs that makes it trivial to tell the better answer apart.
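
For reference, the validation accuracy above is the fraction of chosen/rejected pairs in which the human-preferred answer receives the higher score. A minimal sketch of that evaluation, assuming pairs is an iterable of (question, chosen, rejected) strings (this helper is illustrative, not part of the model card):

import torch

def pairwise_accuracy(pairs, rank_model, tokenizer):
    # Count pairs where the human-preferred (chosen) answer outscores the rejected one.
    correct = 0
    total = 0
    for question, chosen, rejected in pairs:
        with torch.no_grad():
            chosen_score = rank_model(**tokenizer(question, chosen, return_tensors='pt')).logits[0].item()
            rejected_score = rank_model(**tokenizer(question, rejected, return_tensors='pt')).logits[0].item()
        correct += int(chosen_score > rejected_score)
        total += 1
    return correct / total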