DRS: Deep Question Reformulation With Structured Output

Zhecheng Li, Yiwei Wang, Bryan Hooi, Yujun Cai, Nanyun Peng, and Kai-Wei Chang, in Findings of the Association for Computational Linguistics: ACL 2025, 2025.

Download the full text


Abstract

Question answering represents a core capability of large language models (LLMs). However, when individuals encounter unfamiliar knowledge in texts, they often formulate questions that the text itself cannot answer due to insufficient understanding of the underlying information. Recent studies reveal that while LLMs can detect unanswerable questions, they struggle to assist users in reformulating these questions. Even advanced models like GPT-3.5 demonstrate limited effectiveness in this regard. To address this limitation, we propose DRS: Deep Question Reformulation with Structured Output, a novel zero-shot method aimed at enhancing LLMs’ ability to assist users in reformulating questions to extract relevant information from new documents. DRS combines the strengths of LLMs with a DFS-based algorithm to iteratively explore potential entity combinations and constrain outputs using predefined entities. This structured approach significantly enhances the reformulation capabilities of LLMs. Comprehensive experimental evaluations demonstrate that DRS improves the reformulation accuracy of GPT-3.5 from 23.03% to 70.42%, while also enhancing the performance of open-source models, such as Gemma2-9B, from 26.35% to 56.75%.
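The abstract describes the search only at a high level. As a rough illustration of the idea, not the paper's actual implementation, the Python sketch below shows one way a DFS over entity combinations could look; the names drs_style_search, propose, toy_propose, and max_depth are hypothetical. The search enumerates combinations of predefined document entities depth-first and, at each node, asks an LLM stub for a reformulation constrained to the chosen entities, returning the first rewrite the document can answer.

from typing import Callable, List, Optional

def drs_style_search(
    question: str,
    entities: List[str],
    propose: Callable[[str, List[str]], Optional[str]],
    max_depth: int = 3,
) -> Optional[str]:
    # Depth-first search over combinations of predefined entities.
    # `propose` stands in for an LLM call that rewrites the question
    # using only the chosen entities and returns the rewrite if the
    # document can answer it, else None.
    def dfs(chosen: List[str], start: int) -> Optional[str]:
        if chosen:
            rewrite = propose(question, chosen)
            if rewrite is not None:  # first answerable rewrite wins
                return rewrite
        if len(chosen) == max_depth:
            return None
        for i in range(start, len(entities)):
            found = dfs(chosen + [entities[i]], i + 1)
            if found is not None:
                return found
        return None
    return dfs([], 0)

# Toy stand-in for the LLM call: accepts one specific entity pair.
def toy_propose(question: str, chosen: List[str]) -> Optional[str]:
    if set(chosen) == {"photosynthesis", "chlorophyll"}:
        return "What role does chlorophyll play in photosynthesis?"
    return None

print(drs_style_search(
    "Why are plants green?",
    ["photosynthesis", "chlorophyll", "cellulose"],
    toy_propose,
))  # -> "What role does chlorophyll play in photosynthesis?"

Constraining each rewrite to entities drawn from the document is one plausible reading of the "structured output" idea in the abstract: it keeps the reformulated question anchored to information the text can actually answer.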


Bib Entry

@inproceedings{li2025drs,
  title = {DRS: Deep Question Reformulation With Structured Output},
  author = {Li, Zhecheng and Wang, Yiwei and Hooi, Bryan and Cai, Yujun and Peng, Nanyun and Chang, Kai-Wei},
  year = {2025},
  booktitle = {Findings of the Association for Computational Linguistics: ACL 2025}
}
