---
# ====== YAML metadata for the Hub ======
pretty_name: DREAM-CFB
license: mit
language:
- en
tags:
- multiple-choice
- reading-comprehension
- dialogue
- conversational-ai
- question-answering
- openai-format
task_categories:
- question-answering
size_categories:
- 1K<n<10K
source_datasets:
- dream
annotations_creators:
- expert-generated
---
# DREAM‑CFB · _Dialogue-based Reading Comprehension Examination through Machine Reading (Conversation Fact Benchmark Format)_
**DREAM‑CFB** is a 6,444-example dataset derived from the original **DREAM** dataset, transformed and adapted for the Conversation Fact Benchmark framework. Each example pairs a multi-turn dialogue with multiple-choice questions that test reading comprehension and conversational understanding.
The dataset focuses on **dialogue-based reading comprehension**: questions require understanding conversational context, speaker intentions, and implicit information that emerges through multi-turn interactions.
The dataset follows a structured format with dialogue turns and questions, making it suitable for evaluating conversational AI systems and reading comprehension models.
---
## Dataset at a glance
| Field | Type / shape | Description |
| ---------------------- | --------------------- | ---------------------------------------------------- |
| `id` | `str` | Unique identifier for the dialogue instance |
| `dialogue_turns` | `list[dict]` | Multi-turn conversation with speaker and text fields |
| `questions` | `list[dict]` | List of questions associated with the dialogue |
| `question_text` | `str` | The comprehension question about the dialogue |
| `answer_text` | `str` | Ground-truth answer string |
| `choices` | `list[str]` (len = 3) | Three multiple-choice answer options |
| `correct_choice_index` | `int` (0‑2) | Index of the correct answer (0-based) |
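As a quick sanity check of the schema above, the snippet below parses the card's own example record and verifies that `correct_choice_index` points at the `answer_text` string (a minimal sketch; the record is embedded inline rather than loaded from the Hub):

```python
import json

# The example record from this card, embedded as a JSON string.
record_json = '''
{
  "id": "5-510",
  "dialogue_turns": [
    {"speaker": "M", "text": "I am considering dropping my dancing class. I am not making any progress."},
    {"speaker": "W", "text": "If I were you, I stick with it. It's definitely worth time and effort."}
  ],
  "questions": [
    {
      "question_text": "What does the man suggest the woman do?",
      "answer_text": "Continue her dancing class.",
      "choices": ["Consult her dancing teacher.", "Take a more interesting class.", "Continue her dancing class."],
      "correct_choice_index": 2
    }
  ]
}
'''

record = json.loads(record_json)
q = record["questions"][0]

# The index must select the ground-truth answer string.
assert q["choices"][q["correct_choice_index"]] == q["answer_text"]
print(record["id"], len(record["dialogue_turns"]), len(q["choices"]))  # → 5-510 2 3
```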
---
## Intended uses
| Use case | How to use it |
| ---------------------------- | --------------------------------------------------------------- |
| Reading comprehension eval   | Test a model's ability to understand dialogue context and meaning |
| Conversational understanding | Evaluate comprehension of multi-turn speaker interactions |
| Multiple-choice QA | Assess reasoning capabilities in structured question formats |
| Dialogue systems | Benchmark conversational AI understanding of context and intent |
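For the multiple-choice QA use case, evaluation typically reduces to accuracy over `correct_choice_index`. Below is a minimal scoring sketch; `predict_choice` is a hypothetical stand-in for a real model, not part of this dataset:

```python
def predict_choice(dialogue_turns, question_text, choices):
    """Hypothetical model stub: always picks option 0.

    A real system would score each choice against the dialogue context.
    """
    return 0

def accuracy(records):
    """Fraction of questions where the predicted index matches the gold index."""
    correct = total = 0
    for rec in records:
        for q in rec["questions"]:
            pred = predict_choice(rec["dialogue_turns"], q["question_text"], q["choices"])
            correct += int(pred == q["correct_choice_index"])
            total += 1
    return correct / total if total else 0.0

# Two toy records: the stub is right on the first question, wrong on the second.
toy = [
    {"dialogue_turns": [], "questions": [
        {"question_text": "q1", "choices": ["a", "b", "c"], "correct_choice_index": 0}]},
    {"dialogue_turns": [], "questions": [
        {"question_text": "q2", "choices": ["a", "b", "c"], "correct_choice_index": 2}]},
]
print(accuracy(toy))  # → 0.5
```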
---
## Example
```json
{
"id": "5-510",
"dialogue_turns": [
{
"speaker": "M",
"text": "I am considering dropping my dancing class. I am not making any progress."
},
{
"speaker": "W",
"text": "If I were you, I stick with it. It's definitely worth time and effort."
}
],
"questions": [
{
"question_text": "What does the man suggest the woman do?",
"answer_text": "Continue her dancing class.",
"choices": [
"Consult her dancing teacher.",
"Take a more interesting class.",
"Continue her dancing class."
],
"correct_choice_index": 2
}
]
}
```
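Given the `openai-format` tag, one plausible way to consume a record is to flatten it into OpenAI-style chat messages. The prompt template and system instruction below are illustrative assumptions, not part of the dataset:

```python
def to_openai_messages(record, question):
    """Map one DREAM-CFB record/question pair to OpenAI-style chat messages.

    The wording of the system and user prompts is an assumption for
    illustration; adapt it to your evaluation harness.
    """
    dialogue = "\n".join(f"{t['speaker']}: {t['text']}" for t in record["dialogue_turns"])
    options = "\n".join(f"{i}. {c}" for i, c in enumerate(question["choices"]))
    return [
        {"role": "system",
         "content": "Read the dialogue and answer the question with the index of the correct choice."},
        {"role": "user",
         "content": f"{dialogue}\n\nQuestion: {question['question_text']}\nChoices:\n{options}"},
    ]

# Minimal toy record to demonstrate the mapping.
record = {
    "dialogue_turns": [{"speaker": "M", "text": "I am considering dropping my dancing class."}],
    "questions": [{"question_text": "What does the man suggest the woman do?",
                   "choices": ["A", "B", "C"], "correct_choice_index": 2}],
}
messages = to_openai_messages(record, record["questions"][0])
print(messages[1]["content"])
```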
## Dataset Statistics
- **Total examples**: 6,444 dialogue records, each paired with one or more multiple-choice questions
- **Average choices per question**: 3 (standard multiple-choice format)
- **Source**: Original DREAM dataset
- **Language**: English
- **Domain**: General conversational scenarios
## Data Splits
The dataset includes the following splits from the original DREAM dataset:
- Train: ~4,000 examples
- Dev: ~1,300 examples
- Test: ~1,300 examples
## Changelog
- **v1.0.0** – Initial release: transformed the original DREAM dataset to the Conversation Fact Benchmark format with structured dialogue turns and multiple-choice questions
## Dataset Creation
This dataset was created by transforming the original DREAM dataset into a format suitable for the [Conversation Fact Benchmark](https://github.com/savourylie/Conversation-Fact-Benchmark) framework. The transformation process:
1. Converted raw dialogue text into structured speaker turns
2. Preserved original multiple-choice questions and answers
3. Added explicit choice indexing for evaluation
4. Maintained dialogue context and question associations
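Step 1 above can be sketched as follows; this is an illustrative reconstruction assuming the raw DREAM dialogues are `"speaker: text"` strings, not the actual transformation code:

```python
def to_turns(raw_dialogue):
    """Split raw 'speaker: text' dialogue lines into structured turns.

    Assumes each line starts with a speaker label followed by ': ',
    as in the raw DREAM dialogue format.
    """
    turns = []
    for line in raw_dialogue:
        speaker, _, text = line.partition(": ")
        turns.append({"speaker": speaker, "text": text})
    return turns

raw = [
    "M: I am considering dropping my dancing class.",
    "W: If I were you, I stick with it.",
]
print(to_turns(raw))
```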
## Citation
If you use this dataset, please cite the original DREAM paper and reference the [Conversation Fact Benchmark](https://github.com/savourylie/Conversation-Fact-Benchmark) repository:
```bibtex
@article{sun2019dream,
  title={DREAM: A Challenge Dataset and Models for Dialogue-Based Reading Comprehension},
  author={Sun, Kai and Yu, Dian and Chen, Jianshu and Yu, Dong and Choi, Yejin and Cardie, Claire},
  journal={Transactions of the Association for Computational Linguistics},
  volume={7},
  pages={217--231},
  year={2019}
}
```
## Contributing
We welcome contributions for:
- Additional data formats (CSV, Parquet)
- Evaluation scripts and baselines
- Error analysis and dataset improvements
Please maintain the MIT license and cite appropriately.
## License
This dataset is released under the MIT License, following the original DREAM dataset licensing terms.
Enjoy benchmarking your conversational reading comprehension models!
_Last updated: Mon Jun 30 16:27:51 HKT 2025_