SMMILE: An Expert-Driven Benchmark for Multimodal Medical In-Context Learning
Paper | Project page | Code
Introduction
Multimodal in-context learning (ICL) remains underexplored despite its profound potential in complex application domains such as medicine. Clinicians routinely face a long tail of tasks that they must learn to solve from a few examples, such as reasoning over a few relevant prior cases or a few differential diagnoses. While multimodal large language models (MLLMs) have shown impressive advances in medical visual question answering (VQA) and multi-turn chat, their ability to learn multimodal tasks from context remains largely unknown.
We introduce SMMILE (Stanford Multimodal Medical In-context Learning Evaluation), the first multimodal medical ICL benchmark. A team of clinical experts curated ICL problems that scrutinize MLLMs' ability to learn multimodal tasks from context at inference time.
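To make the setting concrete, the sketch below shows one way a multimodal ICL problem can be assembled into a prompt: a handful of image-question-answer examples followed by a query whose answer the model must produce. The `<image:...>` placeholder and the exact field layout are illustrative assumptions, not the precise SMMILE prompt format.
def build_icl_prompt(examples, query):
    """Assemble a few-shot multimodal prompt from image-question-answer examples."""
    parts = []
    for ex in examples:
        # Each in-context example shows a complete image-question-answer triple.
        parts.append(f"<image:{ex['image_url']}>\nQ: {ex['question']}\nA: {ex['answer']}")
    # The query repeats the pattern but leaves the answer for the model to complete.
    parts.append(f"<image:{query['image_url']}>\nQ: {query['question']}\nA:")
    return "\n\n".join(parts)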
Dataset Access
The SMMILE dataset is available on HuggingFace:
import os
from datasets import load_dataset

# Pass your HuggingFace token when loading (see the note below).
smmile = load_dataset('smmile/SMMILE', token=os.environ['HF_TOKEN'])
smmile_pp = load_dataset('smmile/SMMILE-plusplus', token=os.environ['HF_TOKEN'])
Note: You need to set your HuggingFace token as an environment variable:
export HF_TOKEN=your_token_here
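Each row of the dataset holds a single image-question-answer triple together with a problem_id and an order column. Grouping rows by problem_id and sorting by order is one plausible way to reconstruct complete ICL problems; this grouping is an inference from the column names, not documented behavior.
import os
from collections import defaultdict
from datasets import load_dataset

ds = load_dataset('smmile/SMMILE', token=os.environ['HF_TOKEN'])
split = next(iter(ds.values()))  # use the first available split

# Assumed structure: rows sharing a problem_id form one ICL problem,
# with `order` giving each example's position in the prompt.
problems = defaultdict(list)
for row in split:
    problems[row['problem_id']].append(row)
for rows in problems.values():
    rows.sort(key=lambda r: r['order'])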
License
This work is licensed under a Creative Commons Attribution 4.0 International License.
Citation
If you find our dataset useful for your research, please cite the following paper:
@article{rieff2025smmile,
title={SMMILE: An Expert-Driven Benchmark for Multimodal Medical In-Context Learning},
author={Melanie Rieff and Maya Varma and Ossian Rabow and Subathra Adithan and Julie Kim and Ken Chang and Hannah Lee and Nidhi Rohatgi and Christian Bluethgen and Mohamed S. Muneer and Jean-Benoit Delbrouck and Michael Moor},
year={2025},
eprint={2506.21355},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2506.21355},
}
Acknowledgments
We thank the clinical experts who contributed to curating the benchmark dataset.