
Reward model trained from human feedback

The trained reward model (RM) predicts, for a given question, which generated answer humans rated as better.

An RM is useful in these areas:

  • QA model evaluation
  • Serving as the reward score in RLHF (see the sketch after this list)
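
As a rough illustration of the RLHF use, the model's single logit can be wrapped as a scalar reward function. This is only a sketch under assumptions, not code from this repository: the reward_fn helper and its batching choices are hypothetical, and loading the model works exactly as in the usage example further below.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

reward_name = "OpenAssistant/reward-model-deberta-v3-large"
rm = AutoModelForSequenceClassification.from_pretrained(reward_name)
rm_tokenizer = AutoTokenizer.from_pretrained(reward_name)

@torch.no_grad()
def reward_fn(prompts, responses):
    # Hypothetical helper: score each (prompt, response) pair and return one
    # scalar per pair, usable as the reward signal in a PPO-style RLHF loop.
    batch = rm_tokenizer(prompts, responses, padding=True, truncation=True, return_tensors='pt')
    return rm(**batch).logits[:, 0]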

All models are trained with the same dataset split seed (when no validation split was available).

How to use

from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the reward model and its tokenizer
reward_name = "OpenAssistant/reward-model-deberta-v3-large"
rank_model, tokenizer = AutoModelForSequenceClassification.from_pretrained(reward_name), AutoTokenizer.from_pretrained(reward_name)

# Score a (question, answer) pair; a higher score means the answer is rated as better
question, answer = "Explain nuclear fusion like I am five", "Nuclear fusion is the process by which two or more protons and neutrons combine to form a single nucleus. It is a very important process in the universe, as it is the source of energy for stars and galaxies. Nuclear fusion is also a key process in the production of energy for nuclear power plants."
inputs = tokenizer(question, answer, return_tensors='pt')
score = rank_model(**inputs).logits[0].cpu().detach()
print(score)
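
Because the model is trained on chosen-rejected pairs, it is natural to use it to compare two candidate answers to the same question. The lines below continue the snippet above (rank_model, tokenizer and question are already defined); the two candidate answers are invented here purely for illustration.

# Continues the snippet above; the candidate answers are made up for illustration.
good_answer = "Nuclear fusion is when tiny atoms squeeze together into one bigger atom and give off lots of energy, which is how the Sun makes its light and heat."
bad_answer = "I don't know, look it up yourself."
good_score = rank_model(**tokenizer(question, good_answer, return_tensors='pt')).logits[0].item()
bad_score = rank_model(**tokenizer(question, bad_answer, return_tensors='pt')).logits[0].item()
print(good_score > bad_score)  # expected to print True: the RM should score the helpful answer higher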

Performance

Validation split accuracy

Model                         WebGPT    Summary    SytheticGPT
electra-large-discriminator   59.30     68.66      99.85
deberta-v3-large              61.13     72.23      99.94
deberta-v3-base               59.07     66.84      99.85

Most likely SytheticGPT contains some kind of surface pattern in its chosen-rejected pairs that makes it easy to tell the better answer apart.
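
For reference, the validation accuracy above is simply the fraction of chosen-rejected pairs in which the chosen answer receives the higher score. Below is a minimal sketch of that metric, where pairs stands for a hypothetical list of (question, chosen, rejected) triples rather than the actual validation sets.

import torch

@torch.no_grad()
def pairwise_accuracy(pairs, model, tokenizer):
    # pairs: list of (question, chosen_answer, rejected_answer) triples.
    correct = 0
    for question, chosen, rejected in pairs:
        chosen_score = model(**tokenizer(question, chosen, return_tensors='pt')).logits[0].item()
        rejected_score = model(**tokenizer(question, rejected, return_tensors='pt')).logits[0].item()
        correct += int(chosen_score > rejected_score)
    return correct / len(pairs)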