Adaptable Logical Control for Large Language Models

Honghua Zhang, Po-Nien Kung, Masahiro Yoshida, Guy Van den Broeck, and Nanyun Peng, in Proceedings of the Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS), 2024.

Abstract

Despite the success of Large Language Models (LLMs) in performing various tasks following provided instructions, controlling model generation at inference time poses a persistent challenge. In this paper, we introduce Ctrl-G, an adaptable framework that facilitates tractable and flexible control over LLM generation. Ctrl-G combines any production-ready LLM with a Hidden Markov Model (HMM), enabling outputs that adhere to logical constraints represented as deterministic finite automata (DFAs), including keyword control, length control, and insertion. We demonstrate that Ctrl-G, coupled with a TULU-2-7B model, outperforms GPT3.5 and GPT4 in human evaluations of interactive text editing by over 30% in overall satisfaction rate, while exhibiting high-quality generation with 100% constraint satisfaction. Additionally, our experiment on the Grade School Math (GSM) dataset highlights the potential of applying Ctrl-G beyond natural language generation (NLG) tasks: by guiding the reasoning process with logical constraints, we achieve a 3.4% improvement on the GSM subset, underscoring Ctrl-G's broader applicability.
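
To make the mechanism concrete, below is a minimal sketch of HMM-guided, DFA-constrained decoding in the spirit of Ctrl-G; it is not the authors' implementation. The toy vocabulary, the single-keyword DFA, the randomly initialized HMM parameters, and the stand-in base-LM distribution are all illustrative assumptions (in the real system, the HMM would be distilled from the base LLM). At each decoding step, the LM's next-token distribution is reweighted by the HMM's estimate of the probability that each candidate token still permits the constraint to be satisfied.

# A minimal sketch (not the authors' code) of HMM-guided, DFA-constrained
# decoding in the spirit of Ctrl-G. The vocabulary, the toy base-LM
# distribution, and all HMM parameters are illustrative assumptions.
import numpy as np
from functools import lru_cache

VOCAB = ["the", "cat", "sat", "mat", "<eos>"]
EOS = VOCAB.index("<eos>")
KEYWORD = VOCAB.index("mat")

# DFA for the keyword constraint "'mat' must appear before <eos>":
# state 0 = keyword not yet seen, state 1 = keyword seen (accepting).
def delta(q, v):
    return 1 if (q == 1 or v == KEYWORD) else 0

ACCEPTING = {1}

# Toy HMM; in the real system this would be distilled from the base LLM.
H = 3  # number of hidden states
rng = np.random.default_rng(0)
A = rng.dirichlet(np.ones(H), size=H)           # transitions  A[h, h']
B = rng.dirichlet(np.ones(len(VOCAB)), size=H)  # emissions    B[h, v]

@lru_cache(maxsize=None)
def accept_prob(h, q, budget):
    """P(an HMM suffix from hidden state h emits <eos> while the DFA is
    in an accepting state, within `budget` remaining tokens)."""
    if budget == 0:
        return 0.0
    total = 0.0
    for h2 in range(H):
        p_eos = B[h2, EOS] * (1.0 if q in ACCEPTING else 0.0)
        p_rest = sum(B[h2, v] * accept_prob(h2, delta(q, v), budget - 1)
                     for v in range(len(VOCAB)) if v != EOS)
        total += A[h, h2] * (p_eos + p_rest)
    return float(total)

def constrained_step(lm_probs, belief, q, budget):
    """One decoding step: reweight the LM's next-token distribution by the
    HMM's lookahead estimate of eventual constraint satisfaction."""
    scores = np.zeros(len(VOCAB))
    for v in range(len(VOCAB)):
        if v == EOS:
            lookahead = 1.0 if q in ACCEPTING else 0.0
        else:
            alpha = (belief @ A) * B[:, v]  # belief over next hidden state
            z = alpha.sum()
            lookahead = 0.0 if z == 0 else sum(
                (alpha / z)[h] * accept_prob(h, delta(q, v), budget - 1)
                for h in range(H))
        scores[v] = lm_probs[v] * lookahead
    return scores / scores.sum()

# Example step with a made-up base-LM distribution that prefers to stop now.
lm_probs = np.array([0.1, 0.1, 0.1, 0.2, 0.5])
belief = np.full(H, 1.0 / H)  # uniform belief over hidden states
print(constrained_step(lm_probs, belief, q=0, budget=5))
# <eos> gets probability 0 (keyword unseen); mass shifts toward "mat".

The design choice this sketch tries to capture is that the lookahead dynamic program runs over the compact HMM and DFA product, not over the LLM itself, which is what makes exact constraint enforcement tractable during decoding.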



Bib Entry

@inproceedings{zhang2024adaptable,
  title = {Adaptable Logical Control for Large Language Models},
  author = {Zhang, Honghua and Kung, Po-Nien and Yoshida, Masahiro and Van den Broeck, Guy and Peng, Nanyun},
  year = {2024},
  booktitle = {Proceedings of The Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS)}
}