CaKE: Circuit-aware Editing Enables Generalizable Knowledge Learners

Yunzhi Yao, Jizhan Fang, Jia-Chen Gu, Ningyu Zhang, Shumin Deng, Huajun Chen, and Nanyun Peng, in Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2025.

Abstract

Knowledge Editing (KE) enables the modification of outdated or incorrect information in large language models (LLMs). While existing KE methods can update isolated facts, they often fail to generalize these updates to multi-hop reasoning tasks that rely on the modified knowledge. Through an analysis of reasoning circuits, the neural pathways LLMs use for knowledge-based inference, we find that current layer-localized KE approaches (e.g., MEMIT, WISE), which edit only single or a few model layers, inadequately integrate updated knowledge into these reasoning pathways. To address this limitation, we present CaKE (Circuit-aware Knowledge Editing), a novel method that enhances the effective integration of updated knowledge in LLMs. By leveraging only a few curated data samples guided by our circuit-based analysis, CaKE stimulates the model to develop appropriate reasoning circuits for newly incorporated knowledge. Experiments show that CaKE enables more accurate and consistent use of edited knowledge across related reasoning tasks, achieving an average improvement of 20% in multi-hop reasoning accuracy on the MQuAKE dataset while requiring less memory than existing KE methods.

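To make the failure mode discussed in the abstract concrete, the sketch below (not the authors' code) probes whether an edited fact propagates through a multi-hop question. The model name, the example edit, and the prompts are illustrative assumptions; a layer-localized editor typically answers the single-hop probe correctly while the multi-hop probe still reflects the old fact, which is the gap CaKE's curated training samples target.

    # Minimal sketch of a single-hop vs. multi-hop probe after a knowledge edit.
    # All names and prompts here are hypothetical, not from the CaKE paper.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # placeholder; CaKE is evaluated on larger LLMs
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Hypothetical edit: (subject, relation, new object)
    edit = {"subject": "LeBron James", "relation": "plays for", "new_object": "Chelsea"}

    # The single-hop probe tests recall of the edited fact; the multi-hop probe
    # tests whether the edit is used inside a reasoning chain.
    single_hop = "Which team does LeBron James play for? Answer:"
    multi_hop = "Which league does the team LeBron James plays for compete in? Answer:"

    def answer(prompt: str, max_new_tokens: int = 10) -> str:
        inputs = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            out = model.generate(**inputs, max_new_tokens=max_new_tokens,
                                 pad_token_id=tokenizer.eos_token_id)
        # Decode only the newly generated continuation.
        return tokenizer.decode(out[0, inputs["input_ids"].shape[1]:],
                                skip_special_tokens=True)

    print("single-hop:", answer(single_hop))
    print("multi-hop :", answer(multi_hop))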

Bib Entry

@inproceedings{yao2025cake,
  title = {CaKE: Circuit-aware Editing Enables Generalizable Knowledge Learners},
  author = {Yao, Yunzhi and Fang, Jizhan and Gu, Jia-Chen and Zhang, Ningyu and Deng, Shumin and Chen, Huajun and Peng, Nanyun},
  year = {2025},
  booktitle = {Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP)}
}

Related Publications