1. 5

From the abstract: “We propose a benchmark to measure whether a language model is truthful in generating answers to questions. […] The best model was truthful on 58% of questions, while human performance was 94%. Models generated many false answers that mimic popular misconceptions and have the potential to deceive humans. The largest models were generally the least truthful. […] We suggest that scaling up models alone is less promising for improving truthfulness than fine-tuning using training objectives other than imitation of text from the web.”

  1.