---
language:
- en
license: apache-2.0
size_categories:
- 1M<n<10M
task_categories:
- text-retrieval
- feature-extraction
---
# F2LLM Dataset
[Paper](https://huggingface.co/papers/2510.02294) | [Code](https://github.com/codefuse-ai/CodeFuse-Embeddings/tree/main/F2LLM)
The F2LLM dataset includes 6 million query-document-negative tuples curated solely from open-source, non-synthetic data, serving as a strong, budget-friendly baseline for training embedding models.
## Data Format
Data are compiled into three categories: retrieval, classification, and clustering. Each retrieval and clustering sample is accompanied by 24 hard negatives, while each classification sample is accompanied by 1 hard negative.
The data fields are:
```json
{
    "query": ...,
    "passage": ...,
    "negative_1": ...,
    ...
    "negative_n": ...
}
```
For more details, please refer to our [technical report](https://arxiv.org/abs/2510.02294).
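The samples can be inspected with the Hugging Face `datasets` library. The snippet below is a minimal sketch: the repository id (`codefuse-ai/F2LLM`), configuration, and split names are assumptions, so check the dataset card for the exact values.
```python
from datasets import load_dataset

# Assumed repository id and split -- verify against the dataset card.
ds = load_dataset("codefuse-ai/F2LLM", split="train")

sample = ds[0]
print(sample["query"])
print(sample["passage"])

# Retrieval and clustering samples carry negative_1 ... negative_24;
# classification samples carry only negative_1.
negatives = [v for k, v in sample.items() if k.startswith("negative_")]
print(f"{len(negatives)} hard negatives")
```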
## Usage
Code for training embedding models on the F2LLM data is available in our [Github repo](https://github.com/codefuse-ai/CodeFuse-Embeddings/tree/main/F2LLM).
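Embedding models are commonly trained on such query-passage-negative tuples with an InfoNCE-style contrastive objective, scoring the positive passage against the hard negatives. The sketch below is illustrative only and is not the training code from the repo; the temperature, batch layout, and random embeddings are placeholder assumptions.
```python
import torch
import torch.nn.functional as F

def info_nce_loss(q_emb, pos_emb, neg_embs, temperature=0.05):
    """Contrastive loss over one positive and n hard negatives per query.

    q_emb:    (batch, dim)    query embeddings
    pos_emb:  (batch, dim)    positive passage embeddings
    neg_embs: (batch, n, dim) hard-negative embeddings
    """
    q = F.normalize(q_emb, dim=-1)
    pos = F.normalize(pos_emb, dim=-1)
    neg = F.normalize(neg_embs, dim=-1)

    pos_scores = (q * pos).sum(-1, keepdim=True)       # (batch, 1)
    neg_scores = torch.einsum("bd,bnd->bn", q, neg)    # (batch, n)
    logits = torch.cat([pos_scores, neg_scores], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long)  # positive sits at index 0
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings (batch=4, dim=32, 24 hard negatives).
loss = info_nce_loss(torch.randn(4, 32), torch.randn(4, 32), torch.randn(4, 24, 32))
print(loss.item())
```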
## Citation
If you use the F2LLM models, data, or code, please cite the following technical report.
```
@article{2025F2LLM,
  title      = {F2LLM Technical Report: Matching SOTA Embedding Performance with 6 Million Open-Source Data},
  author     = {Ziyin Zhang and Zihan Liao and Hang Yu and Peng Di and Rui Wang},
  journal    = {CoRR},
  volume     = {abs/2510.02294},
  year       = {2025},
  url        = {https://doi.org/10.48550/arXiv.2510.02294},
  doi        = {10.48550/ARXIV.2510.02294},
  eprinttype = {arXiv},
  eprint     = {2510.02294}
}
```