
Reward model trained from human feedback

A reward model (RM) is trained to predict which of two generated answers a human would judge as better, given a question.

RMs are useful for:

  • QA model evaluation

  • serving as the reward score in RLHF

All models are trained on the datasets listed in the table below, using the same split seed across datasets (when no validation split was available).
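Where a dataset ships without a validation split, a deterministic one can be carved out with a fixed seed. Below is a minimal sketch using the Hugging Face datasets library; the dataset name and seed value are illustrative assumptions, not the exact ones used in training.

from datasets import load_dataset

# Illustrative dataset; substitute one of the actual training datasets.
ds = load_dataset("Dahoas/synthetic-instruct-gptj-pairwise", split="train")

# Hold out 10% as validation; a fixed seed (the value here is assumed)
# makes the split reproducible across models.
splits = ds.train_test_split(test_size=0.1, seed=42)
train_ds, val_ds = splits["train"], splits["test"]
print(len(train_ds), len(val_ds))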

How to use

from transformers import AutoModelForSequenceClassification, AutoTokenizer

reward_name = "OpenAssistant/reward-model-electra-large-discriminator"
rank_model = AutoModelForSequenceClassification.from_pretrained(reward_name)
tokenizer = AutoTokenizer.from_pretrained(reward_name)

question = "Explain nuclear fusion like I am five"
answer = "Nuclear fusion is the process by which two or more protons and neutrons combine to form a single nucleus. It is a very important process in the universe, as it is the source of energy for stars and galaxies. Nuclear fusion is also a key process in the production of energy for nuclear power plants."

# Encode the (question, answer) pair and read off the scalar preference score.
inputs = tokenizer(question, answer, return_tensors="pt")
score = rank_model(**inputs).logits[0].cpu().detach()
print(score)
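Since the RM is trained on chosen-versus-rejected pairs, a natural usage pattern is to score several candidate answers for the same question and keep the highest-scoring one. A short sketch building on the snippet above; the two candidate answers here are made up for illustration.

# Reuses question, tokenizer, and rank_model from the snippet above.
good_answer = "Fusion happens when two atoms squeeze together and release energy, like in the Sun."
bad_answer = "I don't know."

scores = {}
for candidate in (good_answer, bad_answer):
    inputs = tokenizer(question, candidate, return_tensors="pt")
    # A higher logit means the RM prefers this answer.
    scores[candidate] = rank_model(**inputs).logits[0].item()

best = max(scores, key=scores.get)
print(best)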

Performance

Accuracy on the validation split

Model                          WebGPT   Summary   SytheticGPT
electra-large-discriminator    59.30    68.66     99.85
deberta-v3-large               61.13    72.23     99.94
deberta-v3-base                59.07    66.84     99.85

SytheticGPT likely has some surface pattern in its chosen-rejected pairs that makes it trivial to tell the better answer apart.
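One quick way to probe for such a surface pattern is to compare simple statistics of the chosen and rejected answers, e.g. their lengths. A sketch that assumes the synthetic set is Dahoas/synthetic-instruct-gptj-pairwise with chosen and rejected fields; both the dataset name and the field names are assumptions here.

from datasets import load_dataset

# Assumed dataset and field names; adjust if the actual synthetic set differs.
ds = load_dataset("Dahoas/synthetic-instruct-gptj-pairwise", split="train")

avg_chosen = sum(len(ex["chosen"].split()) for ex in ds) / len(ds)
avg_rejected = sum(len(ex["rejected"].split()) for ex in ds) / len(ds)

# If one side is systematically much longer, length is a surface cue
# a classifier could exploit without judging answer quality.
print(f"avg chosen length: {avg_chosen:.1f} words")
print(f"avg rejected length: {avg_rejected:.1f} words")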