Update README.md
README.md CHANGED

@@ -9,7 +9,7 @@ tags:
 datasets:
 - AdaptLLM/remote-sensing-visual-instructions
 ---
-# Adapting Multimodal Large Language Models to Domains via Post-Training
+# Adapting Multimodal Large Language Models to Domains via Post-Training (EMNLP 2025)
 
 This repo contains the **remote sensing MLLM developed from Qwen-2-VL-2B-Instruct** in our paper: [On Domain-Specific Post-Training for Multimodal Large Language Models](https://huggingface.co/papers/2411.19930). The corresponding training dataset is in [remote-sensing-visual-instructions](https://huggingface.co/datasets/AdaptLLM/remote-sensing-visual-instructions).
 
@@ -107,10 +107,10 @@ See [Post-Train Guide](https://github.com/bigai-ai/QA-Synthesizer/blob/main/docs
 ## Citation
 If you find our work helpful, please cite us.
 
-[
+[Adapt MLLM to Domains](https://huggingface.co/papers/2411.19930) (EMNLP 2025 Findings)
 ```bibtex
 @article{adamllm,
-  title={On Domain-Specific Post-Training for Multimodal Large Language Models},
+  title={On Domain-Adaptive Post-Training for Multimodal Large Language Models},
   author={Cheng, Daixuan and Huang, Shaohan and Zhu, Ziyu and Zhang, Xintong and Zhao, Wayne Xin and Luan, Zhongzhi and Dai, Bo and Zhang, Zhenliang},
   journal={arXiv preprint arXiv:2411.19930},
   year={2024}
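The card describes the model but stops short of a usage example. The sketch below shows only the single-turn multimodal chat-message layout that Qwen2-VL-style processors expect; the image file name and question are illustrative placeholders, and wiring this message to the actual checkpoint via `AutoProcessor.apply_chat_template` is assumed rather than shown.

```python
# Sketch (assumption): the Qwen2-VL chat format used by models derived from
# Qwen-2-VL-2B-Instruct. Each user turn is a list of typed content parts:
# an image reference followed by a text question. The file name and the
# question below are illustrative placeholders, not values from the card.

def build_message(image_path: str, question: str) -> list:
    """Return a single-turn multimodal conversation in Qwen2-VL chat format."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_path},
                {"type": "text", "text": question},
            ],
        }
    ]

# Example: a remote-sensing style query over an aerial image.
messages = build_message("aerial_scene.png", "What land-use classes are visible?")
```

This `messages` list is what one would pass to the processor's chat template before generation; nothing here downloads or runs the model itself.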