Diff Interpretation Tuning: Weight Diffs and Adapters

This repository contains the weight diffs and DIT adapters used in the paper Learning to Interpret Weight Differences in Language Models (Goel et al., 2025). To play around with them, please check out our Google Colab demo notebook, which shows how to load the weight diffs and adapters from this repo.
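
If you just want a quick look outside the notebook, here is a minimal sketch of loading one adapter with PEFT. It assumes the subfolders store standard PEFT LoRA checkpoints and that Qwen/Qwen3-4B is the matching base model; the subfolder path is illustrative, and the demo notebook remains the authoritative reference.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumed base model; check the paper and demo notebook for the exact checkpoint.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B")

# Illustrative subfolder; the actual adapter files may sit deeper in the tree.
model = PeftModel.from_pretrained(
    base,
    "diff-interpretation-tuning/loras",
    subfolder="hidden-topic/qwen3-4b",
)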

The code used to train and evaluate our weight diffs and DIT adapters is available at github.com/Aviously/diff-interpretation-tuning. Some of the large data files used for training are hosted at hf.co/datasets/diff-interpretation-tuning/finetuning-data.
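
To pull those training files down locally, a short sketch using huggingface_hub (this mirrors the whole dataset repo; file names and formats are whatever it contains):

from huggingface_hub import snapshot_download

# Download a local copy of the dataset repo and print where it landed.
local_dir = snapshot_download(
    repo_id="diff-interpretation-tuning/finetuning-data",
    repo_type="dataset",
)
print(local_dir)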

Repository structure

All weight diffs and DIT adapters in the repository live under a specific <experiment>/<model> folder (e.g. hidden-topic/qwen3-4b). Please consult the paper to understand what each experiment refers to.

Under each <experiment>/<model> folder, there are up to three types of files. Please consult the demo notebook for details on what these files are and how to load and use them.
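
To see exactly which files exist under a given <experiment>/<model> folder, you can list the repo contents. A short sketch using huggingface_hub, reusing the hidden-topic/qwen3-4b example from above:

from huggingface_hub import HfApi

# List every file in the repo, then keep only one <experiment>/<model> folder.
files = HfApi().list_repo_files("diff-interpretation-tuning/loras")
for f in files:
    if f.startswith("hidden-topic/qwen3-4b/"):
        print(f)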

Citing our work

You can cite our work using the following BibTeX:

@misc{goel2025learninginterpretweightdifferences,
  title={Learning to Interpret Weight Differences in Language Models}, 
  author={Avichal Goel and Yoon Kim and Nir Shavit and Tony T. Wang},
  year={2025},
  eprint={2510.05092},
  archivePrefix={arXiv},
  url={https://arxiv.org/abs/2510.05092}, 
}