Update EVAL.md
EVAL.md (changed):

@@ -4,6 +4,19 @@
 
 IFEval-Hi is a Hindi language adaptation of the IFEval (Instruction Following Evaluation) benchmark, designed to evaluate the instruction-following capabilities of Large Language Models (LLMs) in Hindi. This implementation maintains the core evaluation methodology of the original English IFEval while incorporating language-specific modifications to ensure accurate and fair assessment of Hindi language models.
 
+## Getting Started
+
+You have two options to use this evaluation framework:
+
+1. **Option 1: Use the Ready-to-Use Fork** (Recommended)
+   - Fork or clone the repository directly from: https://github.com/anushaknvidia/lm-evaluation-harness
+   - This fork already includes all the Hindi-specific configurations and modifications
+   - Skip to [Step 3: Run Evaluation](#step-3-run-evaluation)
+
+2. **Option 2: Manual Setup**
+   - Follow the step-by-step instructions below to set up IFEval-Hi from scratch
+   - This is useful if you want to customize or understand the implementation details
+
 ## Setup and Usage
 
 ### Step 1: Create Task Configuration
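For readers taking Option 1, the sketch below shows roughly what a run against the fork could look like once it has been cloned and installed into the current Python environment. It uses lm-evaluation-harness's Python API (`lm_eval.simple_evaluate`); the task name `ifeval_hi` and the model identifier are placeholders I am assuming for illustration, not values confirmed by this change — substitute the names actually registered in the fork.

```python
# Minimal sketch of running the Hindi IFEval task via lm-evaluation-harness.
# Assumptions: the fork is installed in this environment, the task is registered
# under the hypothetical name "ifeval_hi", and the model ID below is a placeholder.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                          # Hugging Face backend
    model_args="pretrained=your-org/your-hindi-model",   # placeholder model identifier
    tasks=["ifeval_hi"],                                 # hypothetical IFEval-Hi task name
    batch_size=8,
)

# Per-task metrics (e.g. strict/loose instruction-following accuracy) are under "results".
print(results["results"])
```

The equivalent command-line invocation via the `lm_eval` entry point (with `--model`, `--model_args`, and `--tasks`) should work the same way for shell-based workflows; see the harness documentation for the exact flags.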
|