Do Models Really Learn to Follow Instructions? An Empirical Study of Instruction Tuning
Po-Nien Kung and Nanyun Peng, in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL), short, 2023.
Download the full text
Abstract
Do instruction-tuned models actually follow instructions? Not necessarily. This study finds that TK-Instruct and T0 models can achieve comparable performance when given only the label information contained in the instructions, calling into question how much these models rely on the task instructions themselves.
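For illustration, here is a minimal sketch of the kind of ablation this finding refers to: replacing a full task instruction with a stripped-down prompt that exposes only the label space. The task definition, label names, and helper function below are hypothetical examples, not taken from the paper.

def make_label_only_instruction(labels):
    """Build a prompt that exposes only the output label space,
    with no description of the task itself."""
    options = ", ".join(f'"{label}"' for label in labels)
    return f"Answer with one of the following options: {options}."

full_instruction = (
    'Definition: Given a movie review, decide its sentiment. '
    'Answer "positive" if the review is favorable, otherwise answer "negative".'
)
ablated_instruction = make_label_only_instruction(["positive", "negative"])

example = "The film was a joy from start to finish."
print(f"{full_instruction}\nInput: {example}\nOutput:")
print(f"{ablated_instruction}\nInput: {example}\nOutput:")

Comparing a model trained or evaluated with the full instruction against one given only the ablated prompt isolates how much of its performance depends on the task description versus the label information alone.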
Bib Entry
@inproceedings{kung2023models,
  title = {Do Models Really Learn to Follow Instructions? An Empirical Study of Instruction Tuning},
  author = {Kung, Po-Nien and Peng, Nanyun},
  booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL), short},
  year = {2023}
}