---
language:
  - en
license: apache-2.0
size_categories:
  - 1M<n<10M
task_categories:
  - text-retrieval
  - feature-extraction
---

# F2LLM Dataset

Paper | Code

The F2LLM dataset includes 6 million query-document-negative tuples curated solely from open-source, non-synthetic data, serving as a strong, budget-friendly baseline for training embedding models.

## Data Format

The data are compiled into three categories: retrieval, classification, and clustering. Each retrieval and clustering sample is accompanied by 24 hard negatives; each classification sample is accompanied by 1 hard negative.

The data fields are:

```json
{
  "query": ...,
  "passage": ...,
  "negative_1": ...,
  ...
  "negative_n": ...
}
```
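Since the number of negatives varies by category, a small helper can normalize each raw record into a `(query, passage, negatives)` tuple. The sketch below is illustrative and assumes only the field names shown above; the numeric sort keeps `negative_10` after `negative_9`:

```python
def parse_record(record):
    """Split a raw F2LLM record into (query, passage, negatives).

    Negatives are returned in numeric order (negative_1, negative_2, ...,
    so negative_10 correctly follows negative_9).
    """
    negatives = [
        record[key]
        for key in sorted(
            (k for k in record if k.startswith("negative_")),
            key=lambda k: int(k.rsplit("_", 1)[1]),
        )
    ]
    return record["query"], record["passage"], negatives
```

The same helper works for all three data categories, since classification samples are simply records with a single `negative_1` field.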

For more details, please refer to our technical report.

## Usage

Code for training embedding models on the F2LLM data is available in our GitHub repo.
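Embedding models are typically trained on such query-passage-negatives tuples with a contrastive (InfoNCE) objective. As a minimal, dependency-free sketch (not the repo's actual training code), the loss for one tuple can be computed from precomputed similarity scores; the temperature value here is an illustrative assumption:

```python
import math

def info_nce(sim_pos, sim_negs, temperature=0.05):
    """InfoNCE loss for one (query, passage, negatives) tuple.

    sim_pos:  similarity between the query and its positive passage
    sim_negs: similarities between the query and its hard negatives
    Loss = -log( exp(s+/t) / (exp(s+/t) + sum_i exp(s-_i/t)) )
    """
    logits = [sim_pos / temperature] + [s / temperature for s in sim_negs]
    m = max(logits)  # subtract the max for numerical stability
    log_denom = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_denom - logits[0]
```

When the positive scores far above every negative the loss approaches zero; when the positive is indistinguishable from a single negative it equals log 2.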

## Citation

If you use the F2LLM models, data, or code, please cite the following technical report:

```bibtex
@article{2025F2LLM,
  title      = {F2LLM Technical Report: Matching SOTA Embedding Performance with 6 Million Open-Source Data},
  author     = {Ziyin Zhang and Zihan Liao and Hang Yu and Peng Di and Rui Wang},
  journal    = {CoRR},
  volume     = {abs/2510.02294},
  year       = {2025},
  url        = {https://doi.org/10.48550/arXiv.2510.02294},
  doi        = {10.48550/ARXIV.2510.02294},
  eprinttype = {arXiv},
  eprint     = {2510.02294}
}
```