PyTorch · English · llama

💰 Demystifying Domain-adaptive Post-training for Financial LLMs

This is the finance-specific large language model trained using the recipe described in our paper:
📄 Demystifying Domain-adaptive Post-training for Financial LLMs

For more details, please check the following resources:

- Paper: https://arxiv.org/abs/2501.04961
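
Below is a minimal usage sketch, assuming the checkpoint loads through the standard Hugging Face transformers causal-LM API; the prompt, dtype, and generation settings are illustrative choices, not values taken from the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model ID as listed on this page.
model_id = "Salesforce/Llama-Fin-8b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 inference on a recent GPU
    device_map="auto",           # requires `accelerate`; remove to load on CPU
)

# Illustrative finance-domain prompt.
prompt = "What does an inverted yield curve typically signal about the economy?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since this is a post-trained model, applying the tokenizer's chat template (if one is provided with the checkpoint) may give better results than raw text prompts.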

Ethical Considerations

Users need to make their own assessment regarding any obligations or responsibilities under the corresponding licenses or terms and conditions pertaining to the original datasets and data. This release is for research purposes only in support of an academic paper.

Citation

If you find our project helpful, please consider citing our paper 😊

@misc{ke2025demystifyingdomainadaptiveposttrainingfinancial,
      title={Demystifying Domain-adaptive Post-training for Financial LLMs}, 
      author={Zixuan Ke and Yifei Ming and Xuan-Phi Nguyen and Caiming Xiong and Shafiq Joty},
      year={2025},
      eprint={2501.04961},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.04961}, 
}