---
pretty_name: DREAM-CFB
license: mit
language:
  - en
tags:
  - multiple-choice
  - reading-comprehension
  - dialogue
  - conversational-ai
  - question-answering
  - openai-format
task_categories:
  - question-answering
size_categories:
  - 1K<n<10K
source_datasets:
  - dream
annotations_creators:
  - expert-generated
---
# DREAM-CFB · Dialogue-based Reading Comprehension Examination through Machine Reading (Conversation Fact Benchmark Format)
DREAM-CFB is a 6,444-example dataset derived from the original DREAM dataset, transformed and adapted for the Conversation Fact Benchmark framework. Each item pairs a multi-turn dialogue with multiple-choice questions that test reading comprehension and conversational understanding.
The dataset focuses on dialogue-based reading comprehension: questions require understanding conversational context, speaker intentions, and implicit information that emerges through multi-turn interactions.
The dataset follows a structured format with dialogue turns and questions, making it suitable for evaluating conversational AI systems and reading comprehension models.
## Dataset at a glance
| Field | Type / shape | Description | 
|---|---|---|
| id | str | Unique identifier for the dialogue instance | 
| dialogue_turns | list[dict] | Multi-turn conversation with speaker and text fields | 
| questions | list[dict] | List of questions associated with the dialogue | 
| question_text | str | The comprehension question about the dialogue | 
| answer_text | str | Ground-truth answer string | 
| choices | list[str](len = 3) | Three multiple-choice answer options | 
| correct_choice_index | int(0‑2) | Index of the correct answer (0-based) | 
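The schema above can be checked programmatically. Below is a minimal validation sketch; it assumes records are loaded as plain Python dicts (e.g. from a JSON file) and only uses the field names and shapes from the table:

```python
def validate_record(record: dict) -> None:
    """Check one DREAM-CFB record against the schema table above.

    Raises AssertionError on the first violation.
    """
    assert isinstance(record["id"], str)
    for turn in record["dialogue_turns"]:
        assert isinstance(turn["speaker"], str)
        assert isinstance(turn["text"], str)
    for q in record["questions"]:
        assert isinstance(q["question_text"], str)
        assert isinstance(q["answer_text"], str)
        assert len(q["choices"]) == 3           # three options per question
        assert 0 <= q["correct_choice_index"] <= 2  # 0-based index
        # Consistency: the index must point at the ground-truth answer string.
        assert q["choices"][q["correct_choice_index"]] == q["answer_text"]
```

A record that passes this check is safe to feed into downstream evaluation code that indexes `choices` with `correct_choice_index`.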
## Intended uses
| Use case | How to use it | 
|---|---|
| Reading comprehension eval | Test a model's ability to understand dialogue context and meaning | 
| Conversational understanding | Evaluate comprehension of multi-turn speaker interactions | 
| Multiple-choice QA | Assess reasoning capabilities in structured question formats | 
| Dialogue systems | Benchmark conversational AI understanding of context and intent | 
## Example

```json
{
  "id": "5-510",
  "dialogue_turns": [
    {
      "speaker": "M",
      "text": "I am considering dropping my dancing class. I am not making any progress."
    },
    {
      "speaker": "W",
      "text": "If I were you, I stick with it. It's definitely worth time and effort."
    }
  ],
  "questions": [
    {
      "question_text": "What does the man suggest the woman do?",
      "answer_text": "Continue her dancing class.",
      "choices": [
        "Consult her dancing teacher.",
        "Take a more interesting class.",
        "Continue her dancing class."
      ],
      "correct_choice_index": 2
    }
  ]
}
```
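For the evaluation use cases listed above, a record like this can be rendered as a plain-text multiple-choice prompt and scored by comparing a predicted choice index against `correct_choice_index`. A minimal sketch (the prompt layout and lettering scheme are illustrative choices, not part of the dataset):

```python
def format_prompt(record: dict, q_idx: int = 0) -> str:
    """Render a dialogue and one of its questions as a plain-text prompt."""
    lines = [f"{t['speaker']}: {t['text']}" for t in record["dialogue_turns"]]
    q = record["questions"][q_idx]
    lines.append(q["question_text"])
    for i, choice in enumerate(q["choices"]):
        lines.append(f"{chr(ord('A') + i)}. {choice}")  # A. / B. / C.
    return "\n".join(lines)


def accuracy(records, predict) -> float:
    """Score a predictor mapping (record, question_index) -> choice index 0-2."""
    correct = total = 0
    for rec in records:
        for i, q in enumerate(rec["questions"]):
            correct += predict(rec, i) == q["correct_choice_index"]
            total += 1
    return correct / total
```

Here `predict` could wrap any model call; with a trivial baseline that always answers the same index, `accuracy` reduces to that index's frequency among the gold labels.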
## Dataset Statistics
- Total examples: 6,444 dialogues, each with one or more associated questions
- Average choices per question: 3 (standard multiple-choice format)
- Source: Original DREAM dataset
- Language: English
- Domain: General conversational scenarios
## Data Splits
The dataset includes the following splits from the original DREAM dataset:
- Train: ~4,000 examples
- Dev: ~1,300 examples
- Test: ~1,300 examples
## Changelog
v1.0.0 · Initial release – transformed original DREAM dataset to Conversation Fact Benchmark format with structured dialogue turns and multiple-choice questions
## Dataset Creation
This dataset was created by transforming the original DREAM dataset into a format suitable for the Conversation Fact Benchmark framework. The transformation process:
- Converted raw dialogue text into structured speaker turns
- Preserved original multiple-choice questions and answers
- Added explicit choice indexing for evaluation
- Maintained dialogue context and question associations
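The steps above can be sketched in code. This assumes entries in the original DREAM JSON have the shape `[dialogue, questions, id]`, where `dialogue` is a list of `"Speaker: text"` strings and each question dict has `question`, `choice`, and `answer` keys; treat the exact field names as an assumption to verify against the source files:

```python
def transform_dream_item(item) -> dict:
    """Convert one original DREAM entry into the CFB record layout used here."""
    dialogue, questions, item_id = item

    # Step 1: split raw "Speaker: text" lines into structured speaker turns.
    turns = []
    for line in dialogue:
        speaker, _, text = line.partition(": ")
        turns.append({"speaker": speaker, "text": text})

    # Steps 2-3: keep the original questions/answers and add an explicit
    # 0-based index of the correct choice for evaluation.
    out_questions = []
    for q in questions:
        out_questions.append({
            "question_text": q["question"],
            "answer_text": q["answer"],
            "choices": q["choice"],
            "correct_choice_index": q["choice"].index(q["answer"]),
        })

    # Step 4: one output record per dialogue keeps context and questions together.
    return {"id": item_id, "dialogue_turns": turns, "questions": out_questions}
```

Note that `list.index` raises `ValueError` if an answer string does not appear verbatim among the choices, which doubles as a consistency check during conversion.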
## Citation
If you use this dataset, please cite both the original DREAM paper and the Conversation Fact Benchmark:
```bibtex
@article{sun2019dream,
  title={DREAM: A Challenge Dataset and Models for Dialogue-Based Reading Comprehension},
  author={Sun, Kai and Yu, Dian and Chen, Jianshu and Yu, Dong and Choi, Yejin and Cardie, Claire},
  journal={Transactions of the Association for Computational Linguistics},
  year={2019}
}
```
## Contributing
We welcome contributions for:
- Additional data formats (CSV, Parquet)
- Evaluation scripts and baselines
- Error analysis and dataset improvements
Please maintain the MIT license and cite appropriately.
## License
This dataset is released under the MIT License, following the original DREAM dataset licensing terms.
Enjoy benchmarking your conversational reading comprehension models!
