---
license: cc-by-4.0
task_categories:
- text-generation
language:
- hi
pretty_name: Hindi IFEval
size_categories:
- n<1K
tags:
- ifeval
- hindi
---

## Dataset Description:
The IFEval-Hi (Hindi IFEval) evaluation dataset contains 848 prompts in the Hindi language for evaluating the instruction-following ability of large language models (LLMs). The dataset is constructed in a manner similar to the English version of IFEval, and the responses can be verified with heuristics; hence, these are "verifiable instructions". The prompts are curated natively by specialists who are well versed in Hindi and capture the local nuances of the Hindi language and Indian culture.

This dataset is ready for commercial/non-commercial use. The evaluation steps are described [here](https://huggingface.co/datasets/nvidia/IFEval-Hi/blob/main/EVAL.md).
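
To illustrate what "verifiable instructions" means in practice, the sketch below shows a rule-based check for a hypothetical minimum-word-count instruction. The function name and the instruction type are illustrative assumptions; the actual checkers are defined in the evaluation code linked above.

```python
def check_min_word_count(response: str, min_words: int) -> bool:
    """Hypothetical heuristic checker: pass if the response contains at
    least `min_words` words. Real IFEval-style checkers cover many
    instruction types (keywords, formatting, length, etc.); this is a
    simplified illustration only."""
    return len(response.split()) >= min_words


# Usage sketch with a made-up Hindi response.
response = "यह एक उदाहरण उत्तर है जो न्यूनतम शब्द संख्या की जाँच के लिए लिखा गया है।"
print(check_min_word_count(response, min_words=10))  # True (the response has 16 words)
```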

## Dataset Owner:
NVIDIA Corporation


## Dataset Creation Date:
April 2025

## License/Terms of Use: 
CC-BY 4.0

## Intended Usage:
Evaluate the instruction-following ability of LLMs in the Hindi language.

## Dataset Characterization
Data Collection Method<br>
* Human <br>

Labeling Method<br>
* Not Applicable <br>

## Dataset Format
Text

## Dataset Quantification
688 KB of query prompts, comprising 848 individual samples.
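
A minimal sketch of loading the prompts with the Hugging Face `datasets` library is shown below; the split name and the field names are assumptions based on the English IFEval schema and may differ in this dataset.

```python
from datasets import load_dataset

# Load the Hindi IFEval prompts (split name "train" is an assumption).
ds = load_dataset("nvidia/IFEval-Hi", split="train")
print(len(ds))  # expected to be 848 samples

# Inspect one prompt; field names such as "prompt" and "instruction_id_list"
# mirror the English IFEval schema and are not confirmed by this card.
sample = ds[0]
print(sample.get("prompt"))
print(sample.get("instruction_id_list"))
```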

## Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this dataset meets the requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).

## Citing

If you find our work helpful, please consider citing our paper:
```
@article{kamath2025benchmarking,
  title={Benchmarking Hindi LLMs: A New Suite of Datasets and a Comparative Analysis},
  author={Kamath, Anusha and Singla, Kanishk and Paul, Rakesh and Joshi, Raviraj and Vaidya, Utkarsh and Chauhan, Sanjay Singh and Wartikar, Niranjan},
  journal={arXiv preprint arXiv:2508.19831},
  year={2025}
}
```