Natasha Jaques, an assistant professor at the University of Washington’s Paul G. Allen School of Computer Science & Engineering. (UW Photo)
As artificial intelligence chatbots are popping up to provide information in all sorts of applications, University of Washington researchers have developed a new way to fine-tune their responses.
Dubbed “variational preference learning,” the goal of the method is to shape a large language model’s output to better match an individual user according to their expressed preferences.
AI systems are trained on datasets with baked-in biases and inappropriate information, which engineers currently try to filter out of responses through “reinforcement learning from human feedback,” or RLHF. The strategy requires a group of people to review chatbot outputs and select the preferred answer, nudging the system toward safe, accurate and acceptable responses.
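For readers curious about the mechanics, here is a minimal sketch of the kind of preference-based reward modeling that typically sits behind RLHF. It is illustrative only, with stand-in model sizes and random data rather than any production system’s code: a reward model learns to score the response reviewers preferred above the one they rejected.

```python
# Minimal, illustrative RLHF-style reward model (not the UW team's code).
# Human reviewers pick the preferred of two chatbot responses; the model
# learns to assign that response a higher score via a Bradley-Terry loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        # Assumes responses have already been encoded as fixed-size vectors
        # (e.g., by a frozen language-model encoder).
        self.scorer = nn.Sequential(
            nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.scorer(response_embedding).squeeze(-1)

def preference_loss(model, preferred, rejected):
    # Maximize the probability that the human-preferred response
    # receives the higher reward.
    return -F.logsigmoid(model(preferred) - model(rejected)).mean()

# Toy usage with random embeddings standing in for encoded responses.
model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
preferred, rejected = torch.randn(32, 128), torch.randn(32, 128)
optimizer.zero_grad()
loss = preference_loss(model, preferred, rejected)
loss.backward()
optimizer.step()
```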
But those preferences are determined by the organization creating the chatbot and don’t necessarily include the wide-ranging views held among the diverse users engaging with the tools.
“I think it’s a little scary that we have researchers at a handful of corporations, who aren’t trained in policy or sociology, deciding what is appropriate and what is not for the models to say, and we have so many people using these systems and trying to find out the truth from them,” said Natasha Jaques, an assistant professor at the UW’s Paul G. Allen School of Computer Science & Engineering, in a UW post.
“This is one of the more pressing problems in AI,” she said, “so we need better techniques to address it.”
Jaques leads the Social Reinforcement Learning Lab at the UW and is also a senior research scientist at Google DeepMind. She joined the UW’s Allen School nearly two years ago.
Jaques gave an example of a case where the RLHF training approach could create a problem. Imagine a lower-income student interacting with a chatbot to learn more about a college they wanted to apply to, but the model’s responses were tuned for the majority of the school’s applicants, who were higher-income students. The model would deduce that there was limited interest in financial aid information and not provide it.
The variational preference learning approach developed by the UW researchers would put the chatbot users themselves in the role of refining the outputs. And it can do it quickly — with just four queries, the VPL training method can learn what sort of responses a user will choose.
The fine-tuning can include the preferred level of specificity of the answer, the length and tone of the output, as well as which information is included.
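At a high level, the personalization works by inferring a compact representation of an individual user from a handful of that user’s own choices and conditioning the reward model on it. The sketch below is a simplified illustration of that idea, with assumed names, shapes and weighting rather than the team’s released code: a small encoder turns a few (preferred, rejected) pairs into a latent user vector, and the reward model scores responses given that vector.

```python
# Simplified illustration of learning per-user preferences (hypothetical
# names and sizes; not the researchers' released code). An encoder infers
# a latent "user vector" z from a few of one user's choices, and the
# reward model conditions on z so its scores reflect that user's tastes.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM, LATENT_DIM = 128, 16

class UserEncoder(nn.Module):
    """Maps a user's few (preferred, rejected) pairs to a distribution over z."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(2 * EMBED_DIM, 64)
        self.mu = nn.Linear(64, LATENT_DIM)
        self.logvar = nn.Linear(64, LATENT_DIM)

    def forward(self, preferred, rejected):
        pairs = torch.cat([preferred, rejected], dim=-1)  # (n_pairs, 2*EMBED_DIM)
        h = F.relu(self.net(pairs)).mean(dim=0)           # pool over this user's pairs
        return self.mu(h), self.logvar(h)

class ConditionalRewardModel(nn.Module):
    """Scores a response given the inferred user vector z."""
    def __init__(self):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(EMBED_DIM + LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, response, z):
        z = z.expand(response.shape[0], -1)
        return self.scorer(torch.cat([response, z], dim=-1)).squeeze(-1)

def training_step(encoder, reward_model, preferred, rejected):
    # Infer the user's latent preferences from their labeled pairs, then
    # train the conditioned reward model to respect those same choices.
    mu, logvar = encoder(preferred, rejected)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
    pref_loss = -F.logsigmoid(
        reward_model(preferred, z) - reward_model(rejected, z)
    ).mean()
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return pref_loss + 0.1 * kl  # KL term keeps user vectors well-behaved

encoder, reward_model = UserEncoder(), ConditionalRewardModel()
# Four queries' worth of one user's choices, as random stand-in embeddings.
preferred, rejected = torch.randn(4, EMBED_DIM), torch.randn(4, EMBED_DIM)
loss = training_step(encoder, reward_model, preferred, rejected)
loss.backward()
```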
The strategy could be applied to verbal interactions as well as to training robots that perform simple tasks in personal settings such as homes.
But VPL still needs to guard against user preferences for misinformation or disinformation, as well as for inappropriate responses, Jaques said.
Jaques and colleagues shared their research at last week’s Conference on Neural Information Processing Systems in Vancouver, B.C.
Additional co-authors of the study include Allen School assistant professor Abhishek Gupta, as well as Allen School doctoral students Sriyash Poddar, Yanming Wan and Hamish Ivison.
Jaques said participants at the long-running international conference were interested in the issue she and others are tackling: promoting diverse perspectives in AI systems.
“I’m encouraged to see the receptiveness of the AI community and momentum in this area,” Jaques told GeekWire.