
🤖 ParamMute: Suppressing Knowledge-Critical FFNs for Faithful Retrieval-Augmented Generation

This is the official model for ParamMute: Suppressing Knowledge-Critical FFNs for Faithful Retrieval-Augmented Generation.

We investigate the internal mechanisms behind unfaithful generation and identify a subset of mid-to-deep (70%–90% relative depth range) FFNs that are disproportionately activated in such cases. Building on this insight, we propose Parametric Knowledge Muting through FFN Suppression (ParamMute), a framework that improves contextual faithfulness by suppressing the activation of unfaithfulness-associated FFNs and calibrating the model toward retrieved knowledge. Experimental results on CoConflictQA and ConFiQA demonstrate that ParamMute significantly reduces knowledge conflicts and improves context fidelity.
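The snippet below is a conceptual sketch of what FFN suppression looks like in practice, not the official implementation (which is available in the GitHub repository): PyTorch forward hooks scale down the MLP (FFN) outputs of the layers in the 70%–90% relative-depth range of a LLaMA-style decoder, muting their contribution to the residual stream. The function name `mute_ffn_layers` and the scaling factor `alpha` are illustrative choices.

```python
# Conceptual sketch of FFN suppression (illustrative only; see the official
# ParamMute repository for the actual method and hyperparameters).
def mute_ffn_layers(model, start_frac=0.7, end_frac=0.9, alpha=0.0):
    """Scale the FFN (MLP) output by `alpha` for layers in the given
    relative-depth range of a LLaMA-style decoder, muting their
    contribution to the residual stream when alpha < 1."""
    layers = model.model.layers              # LLaMA-style decoder layers
    n_layers = len(layers)
    lo, hi = int(n_layers * start_frac), int(n_layers * end_frac)

    def scale_ffn_output(module, inputs, output):
        # Returning a value from a forward hook replaces the module's output.
        return alpha * output

    handles = [layers[i].mlp.register_forward_hook(scale_ffn_output)
               for i in range(lo, hi)]
    return handles                           # call h.remove() on each to undo
```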


📚 Paper

For a detailed explanation of the methodology and experiments, please refer to our paper:
ParamMute: Suppressing Knowledge-Critical FFNs for Faithful Retrieval-Augmented Generation


📊 Reproduce the Results

To reproduce the experiments and benchmarks from the paper, follow the instructions provided in the official GitHub repository: 👉 GitHub: OpenBMB/ParamMute.

πŸ“ Model Details

  • Model Name: ParamMute-8B-KTO
  • Architecture: LLaMA3-8B-Instruct trained with the KTO
  • Training Data: CoConflictQA Dataset
  • Pretrained Tasks: Knowledge-Augmented Generation, Contextual Faithfulness Evaluation

🔖 Citation

If you use ParamMute in your work, please consider citing our paper:

@misc{huang2025parammutesuppressingknowledgecriticalffns,
      title={ParamMute: Suppressing Knowledge-Critical FFNs for Faithful Retrieval-Augmented Generation}, 
      author={Pengcheng Huang and Zhenghao Liu and Yukun Yan and Haiyan Zhao and Xiaoyuan Yi and Hao Chen and Zhiyuan Liu and Maosong Sun and Tong Xiao and Ge Yu and Chenyan Xiong},
      year={2025},
      eprint={2502.15543},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.15543}, 
}