[Preference Proxy Evaluations (PPE): a benchmark for evaluating reward models and LLM judges, which are used to replicate human preferences in large-scale LLM training and evaluation; it includes both real human preference data and verifiable correctness preference data] 'Preference Proxy Evaluations is an evaluation benchmark for reward models and LLM-judges, which are used to replicate human preferences for large-scale LLM training and evaluation.' GitHub: github.com/lmarena/PPE #AIBenchmarks# #LLMEvaluation# #HumanPreference#