
PLaID++

This repository contains the weights of the flagship model from our paper PLaID++: A Preference-Aligned Language Model for Targeted Inorganic Materials Design, by Andy Xu, Rohan Desai, Larry Wang, Gabriel Hope, and Ethan Ritz.

Summary

PLaID++ introduces an LLM fine-tuned for stable and property-targeted inorganic crystal generation. PLaID++ achieves a ~50% higher S.U.N. (Stable, Unique, Novel) rate than prior work, along with robust space-group-conditioned generation, through:

  1. Leveraging a novel Wyckoff-based text encoding
  2. Aligning the model using Direct Preference Optimization (DPO), a preference-alignment method guided here by machine-learned interatomic potentials (see the sketch after this list)
  3. Unifying training across conditional and unconditional generation tasks
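The following is a minimal sketch of how MLIP-guided DPO preference pairs could be assembled; it is not the paper's exact pipeline, and `generate_candidates` and `predict_energy_above_hull` are hypothetical stand-ins for the model's sampler and a machine-learned interatomic potential.

```python
# Hedged sketch: build DPO preference pairs by ranking sampled crystals
# with an MLIP stability score (lower energy above hull = more stable).
def build_dpo_pairs(prompts, generate_candidates, predict_energy_above_hull, n_samples=8):
    """For each prompt, sample several crystal strings, score their stability,
    and pair the most stable (chosen) with the least stable (rejected)."""
    pairs = []
    for prompt in prompts:
        candidates = generate_candidates(prompt, n=n_samples)   # Wyckoff-based text strings
        scored = sorted(candidates, key=predict_energy_above_hull)
        pairs.append({
            "prompt": prompt,
            "chosen": scored[0],     # most stable candidate
            "rejected": scored[-1],  # least stable candidate
        })
    return pairs
```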

[Figure: PLaID++ architecture diagram]

Model

The full PLaID++ model is available in train_dpo/.
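A minimal loading sketch follows, assuming the checkpoint in train_dpo/ is a standard Hugging Face causal LM; the local path, prompt placeholder, and generation settings are illustrative, and the actual prompt format follows the Wyckoff-based encoding described in the paper.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed path to the released weights inside this repository.
tokenizer = AutoTokenizer.from_pretrained("train_dpo")
model = AutoModelForCausalLM.from_pretrained("train_dpo")

# "<example prompt>" is a placeholder; see the paper/repo for the real encoding.
inputs = tokenizer("<example prompt>", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```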

Citation

arXiv: https://arxiv.org/abs/2509.07150

@article{xu2025plaid++,
  title={PLaID++: A Preference-Aligned Language Model for Targeted Inorganic Materials Design},
  author={Xu, Andy and Desai, Rohan and Wang, Larry and Hope, Gabriel and Ritz, Ethan},
  journal={arXiv preprint arXiv:2509.07150},
  year={2025}
}

License

Most of PLaID++ is distributed under the CC BY 4.0 license. However, some components of the project are governed by different licenses: pymatgen is licensed under MIT, Hugging Face Transformers under Apache 2.0, and ASE under the GNU Lesser General Public License (LGPL).
