Active Instruction Tuning: Improving Cross-Task Generalization by Training on Prompt Sensitive Tasks
Po-Nien Kung, Fan Yin, Di Wu, Kai-Wei Chang, and Nanyun Peng, in The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
Download the full text
Abstract
Instruction-tuned models are scaling up in their training tasks! 🚀 🤔 Curious how to choose novel tasks to enhance models effectively? Our #EMNLP2023 paper reveals that selecting prompt-sensitive (Ambiguous) tasks leads to better performance! 🧵 (1/7)
— Po-Nien Kung (@P_N_Kung), November 20, 2023
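As a rough illustration of the thread's idea (prefer tasks whose predictions shift the most when the instruction is perturbed), here is a minimal Python sketch. This is not the paper's released code: predict, perturb, and select_ambiguous_tasks are hypothetical names, and exact-match disagreement under word-dropped instructions stands in for whatever prompt-uncertainty measure the paper actually uses.

# Hypothetical sketch of prompt-sensitivity-based task selection.
# Assumes a predict(instruction, example) callable and a pool of candidate
# tasks, each with an instruction and a few unlabeled examples.
import random
from typing import Callable, Dict, List

def perturb(instruction: str, drop_rate: float = 0.1) -> str:
    """Lightly perturb an instruction by randomly dropping a fraction of words."""
    words = instruction.split()
    kept = [w for w in words if random.random() > drop_rate]
    return " ".join(kept) if kept else instruction

def prompt_uncertainty(
    predict: Callable[[str, str], str],
    instruction: str,
    examples: List[str],
    n_perturb: int = 5,
) -> float:
    """Fraction of (example, perturbation) pairs whose prediction under a
    perturbed instruction disagrees with the prediction under the original."""
    disagreements, total = 0, 0
    for x in examples:
        original = predict(instruction, x)
        for _ in range(n_perturb):
            if predict(perturb(instruction), x) != original:
                disagreements += 1
            total += 1
    return disagreements / max(total, 1)

def select_ambiguous_tasks(
    predict: Callable[[str, str], str],
    task_pool: Dict[str, dict],
    k: int,
) -> List[str]:
    """Rank candidate tasks by prompt uncertainty and keep the top-k."""
    scores = {
        name: prompt_uncertainty(predict, task["instruction"], task["examples"])
        for name, task in task_pool.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

Under this sketch, the highest-scoring tasks play the role of the "Ambiguous" tasks the tweet refers to, and would be the ones added to the next round of instruction tuning.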
Bib Entry
@inproceedings{kung2023active,
  title = {Active Instruction Tuning: Improving Cross-Task Generalization by Training on Prompt Sensitive Tasks},
  author = {Kung, Po-Nien and Yin, Fan and Wu, Di and Chang, Kai-Wei and Peng, Nanyun},
  booktitle = {The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year = {2023}
}