update ARC description in README
README.md
@@ -181,7 +181,7 @@ We used the standard implementation of the [MultiBLiMP](https://github.com/Eleut
### ARC Benchmark Results

**What is ARC?** [ARC](https://arxiv.org/pdf/1803.05457), the AI2 Reasoning Challenge, is a multiple-choice science question benchmark **in English**, derived from U.S. grade-school standardized exams. It has two subsets, ARC Easy and ARC Challenge, designed to test factual knowledge and common-sense reasoning.

-**Why does this Matter?** ARC probes a model’s ability to answer non-trivial questions by applying world knowledge.
+**Why does this matter?** ARC probes a model’s ability to answer non-trivial questions by applying world knowledge. Although the answer can sometimes be inferred from the question, in the classic lm-evaluation-harness ARC implementation the answer choices for each question are **not** provided during inference, placing the emphasis on world knowledge rather than on the model's reasoning capabilities.
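
To make that concrete, below is a minimal sketch (illustrative, not this repo's evaluation code) of how a harness-style ARC evaluation scores a single zero-shot item: each answer choice is scored as a continuation of the bare question prompt, and the highest-likelihood choice wins. The model name and example question are placeholders; in the few-shot setting, solved examples are simply prepended to the prompt.

```python
# Minimal sketch of harness-style ARC scoring (illustrative only).
# The choices never appear in the prompt; each one is scored as a
# continuation of "Question: ...\nAnswer:".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative stand-in for any causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def choice_loglikelihood(question: str, choice: str) -> float:
    """Log-probability of ' <choice>' as a continuation of the ARC prompt."""
    prompt = f"Question: {question}\nAnswer:"
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + " " + choice, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Token-level log-probs; position i predicts token i + 1.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    # Sum log-probs over the continuation (choice) tokens only.
    return sum(
        log_probs[i, targets[i]].item()
        for i in range(prompt_len - 1, targets.shape[0])
    )

question = "Which gas do plants absorb from the atmosphere?"
choices = ["oxygen", "carbon dioxide", "nitrogen", "hydrogen"]
scores = [choice_loglikelihood(question, c) for c in choices]
print(max(zip(scores, choices))[1])  # highest-likelihood choice wins
```

The harness additionally reports a length-normalised variant of this score (`acc_norm`); the sketch above corresponds to plain accuracy.
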
**What did we do?**
We use multilingual translations of ARC provided by [Eurolingua](https://huggingface.co/datasets/Eurolingua/arcx); please refer to the [publication](https://arxiv.org/pdf/2410.08928). Other than the data source, we replicate the standard [LM Evaluation Harness configuration for ARC](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/arc). Our exact configuration is available at [TBA]. We set tokenisers to `use_fast=False`. We report **5-shot** accuracy.
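
For reference, a run matching this setting could look like the sketch below, using the harness's Python API. The English ARC task names stand in for the multilingual arcx task configs (still [TBA]), the model name is a placeholder, and `use_fast_tokenizer=False` is the harness model argument corresponding to `use_fast=False`.

```python
# Hedged sketch of the evaluation setting via the LM Evaluation Harness
# Python API. English ARC task names stand in for the multilingual arcx
# task configs, which are still [TBA]; the model is a placeholder.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=gpt2,use_fast_tokenizer=False",  # slow tokenizer
    tasks=["arc_easy", "arc_challenge"],
    num_fewshot=5,  # we report 5-shot accuracy
)
print(results["results"])
```

The equivalent CLI invocation takes the same arguments (`lm_eval --model hf --model_args ... --tasks arc_easy,arc_challenge --num_fewshot 5`).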