Model Card: Med-GLIP
Model Details
Model Name: Med-GLIP
Paper Title: Med-GLIP: Advancing Medical Language-Image Pre-training with Large-scale Grounded Dataset
Authors: Ziye Deng, Ruihan He, Jiaxiang Liu, Yuan Wang, Zijie Meng, Songtao Jiang, Yong Xie, Zuozhu Liu
Affiliations: Zhejiang University
Version: v1
Date: August 2025 (per arXiv eprint 2508.10528)
Model Type: Medical Language-Image Pre-training Model with Visual Grounding capabilities.
Relevant Links:
- Paper (arXiv): https://arxiv.org/abs/2508.10528
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Citation:
@misc{deng2025medglip,
  title={Med-GLIP: Advancing Medical Language-Image Pre-training with Large-scale Grounded Dataset},
  author={Ziye Deng and Ruihan He and Jiaxiang Liu and Yuan Wang and Zijie Meng and Songtao Jiang and Yong Xie and Zuozhu Liu},
  year={2025},
  eprint={2508.10528},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
Model Description
Med-GLIP is a medical-domain language-image pre-training model designed to enhance the understanding of fine-grained correspondences between medical images and text. In contrast to existing medical multi-modal models (e.g., MedKLIP, LLaVA-Med), Med-GLIP specifically emphasizes visual grounding, the ability to localize medical entities or findings mentioned in text to their corresponding regions in the image. The model's development is coupled with a large-scale, grounded medical language-image dataset, Med-GLIP-5M.
The model aims to overcome the limitations of existing methods in fine-grained understanding and localization, which is crucial for applications that require precise linking between report findings and image regions. By pre-training on the large-scale grounding dataset, Med-GLIP learns stronger cross-modal alignment capabilities.
Intended Use
- Primary Intended Uses:
- Medical Visual Question Answering (VQA)
- Medical Report Generation (MRG)
- Phrase Grounding: Localizing text phrases (e.g., diseases, anatomical structures) to image regions; a small decoding sketch follows this section.
- Serving as a foundational pre-trained model for various downstream medical multi-modal tasks (e.g., interactive segmentation, diagnostic assistance).
- Primary Intended Users:
- Medical AI researchers
- Engineers developing medical image analysis and reporting tools
- Researchers interested in multi-modal learning and visual grounding
- Out-of-Scope Uses:
- Direct use in clinical diagnostic decision-making without rigorous validation and regulatory approval.
- Use in non-medical image-text tasks.
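To make the phrase-grounding task concrete, the following is a minimal, self-contained sketch of the decoding step a GLIP-style grounder typically performs once it has scored query phrases against candidate image regions. The tensor shapes and score semantics are illustrative assumptions, not Med-GLIP's documented interface.

```python
import torch

def decode_groundings(phrase_scores: torch.Tensor, boxes: torch.Tensor):
    """Pick each phrase's best-scoring candidate box.

    phrase_scores: (P, R) alignment scores between P query phrases and
    R candidate regions (illustrative; not Med-GLIP's actual outputs).
    boxes: (R, 4) candidate boxes as (x1, y1, x2, y2).
    Returns one box and one confidence per phrase.
    """
    conf, idx = phrase_scores.max(dim=1)  # best-matching region per phrase
    return boxes[idx], conf

# Toy inputs standing in for model outputs.
scores = torch.rand(2, 5)   # 2 query phrases, 5 candidate regions
boxes = torch.rand(5, 4)    # normalized (x1, y1, x2, y2)
best_boxes, conf = decode_groundings(scores, boxes)
print(best_boxes.shape, conf.shape)  # torch.Size([2, 4]) torch.Size([2])
```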
Training Data
- Dataset: Med-GLIP-5M
- A custom-built, large-scale medical language-image dataset specifically created for Med-GLIP, featuring extensive grounding annotations (correspondences between image regions and text phrases).
- Dataset Construction: The paper details the construction pipeline, covering data source analysis, data collection, preprocessing, quality control, and the generation of grounding annotations (possibly utilizing tools such as SAM).
- Composition: (Specific details depend on the full paper.) Expected to include various medical imaging modalities (e.g., X-rays, CTs, MRIs) paired with corresponding radiological reports or descriptive texts, with a focus on high-quality phrase-region bounding box annotations. An illustrative record follows this list.
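For illustration only, a grounded record might look like the sketch below; the field names and values are a hypothetical schema, not the actual Med-GLIP-5M format.

```python
# Hypothetical record layout; every field name and value here is illustrative.
record = {
    "image": "chest_xray_00123.png",
    "modality": "X-ray",
    "report": "Patchy consolidation in the right lower lobe.",
    "groundings": [
        # Each entry links a report phrase to a pixel-space bounding box.
        {"phrase": "patchy consolidation", "bbox": [412, 630, 598, 790]},
        {"phrase": "right lower lobe", "bbox": [380, 560, 640, 820]},
    ],
}
```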
Model Architecture
- Med-GLIP is based on the architectural principles of GLIP (Grounded Language-Image Pre-training), adapted for the medical domain. Key components are expected to include the following (a schematic skeleton follows the list):
- Image Encoder: Likely based on a Transformer architecture (e.g., ViT or Swin Transformer) for feature extraction.
- Text Encoder: Likely based on a BERT variant for encoding text inputs (reports and query phrases).
- Cross-Modal Fusion Module: For deep interaction between image and text features.
- Grounding Head: To predict bounding boxes corresponding to the input text phrases based on the fused features.
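The sketch below translates that component list into a runnable PyTorch skeleton. Every concrete choice (feature dimensions, cross-attention fusion, a per-token box head) is an assumption made for illustration, not the paper's confirmed configuration.

```python
import torch
import torch.nn as nn

class MedGLIPSketch(nn.Module):
    """Schematic of the listed components; layer choices are assumptions."""

    def __init__(self, dim: int = 256, heads: int = 8, depth: int = 2):
        super().__init__()
        # Stand-ins for the encoders: in practice a ViT/Swin backbone would
        # produce region features and a BERT variant the token embeddings.
        self.img_proj = nn.Linear(1024, dim)  # project backbone region features
        self.txt_proj = nn.Linear(768, dim)   # project text token embeddings
        # Cross-modal fusion: text tokens attend to image regions.
        self.fusion = nn.ModuleList(
            nn.MultiheadAttention(dim, heads, batch_first=True) for _ in range(depth)
        )
        # Grounding head: one normalized box (cx, cy, w, h) per text token.
        self.box_head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 4))

    def forward(self, region_feats: torch.Tensor, token_feats: torch.Tensor):
        img = self.img_proj(region_feats)       # (B, R, dim)
        txt = self.txt_proj(token_feats)        # (B, T, dim)
        for attn in self.fusion:
            txt = txt + attn(txt, img, img)[0]  # fuse image evidence into tokens
        return self.box_head(txt).sigmoid()     # (B, T, 4) normalized boxes

# Toy forward pass with random features standing in for real encoder outputs.
model = MedGLIPSketch()
regions = torch.randn(1, 49, 1024)   # e.g., a flattened 7x7 feature map
tokens = torch.randn(1, 12, 768)     # e.g., BERT token embeddings
print(model(regions, tokens).shape)  # torch.Size([1, 12, 4])
```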
- Training Objectives:
- Grounding Loss: Minimizing the difference between predicted and ground-truth bounding boxes (e.g., using L1 and GIoU loss).
- Image-Text Contrastive (ITC) Loss: Ensuring that matched image-text pairs are aligned in the feature space, facilitating global alignment. The formula is likely similar to the standard symmetric form $L_{ITC} = -\log \frac{\exp(\text{sim}(I, T)/\tau)}{\sum_{T'} \exp(\text{sim}(I, T')/\tau)} - \log \frac{\exp(\text{sim}(I, T)/\tau)}{\sum_{I'} \exp(\text{sim}(I', T)/\tau)}$, where $\text{sim}$ is a cosine similarity and $\tau$ a temperature; minimal sketches of both objectives follow this list.
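Minimal sketches of both objectives, assuming the common formulations (in-batch InfoNCE for ITC, and L1 plus GIoU for box regression); these match the descriptions above but are not confirmed as the paper's exact losses or weightings.

```python
import torch
import torch.nn.functional as F

def itc_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor, tau: float = 0.07):
    """Symmetric in-batch contrastive loss matching the formula above.
    img_emb, txt_emb: (B, D); row i of each is a matched pair."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / tau  # (B, B) scaled cosine similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    # Image-to-text and text-to-image cross-entropy, summed as in the formula.
    return F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)

def grounding_loss(pred: torch.Tensor, target: torch.Tensor, giou_weight: float = 1.0):
    """L1 + GIoU box regression loss; the weighting is an assumption.
    pred, target: (N, 4) paired boxes as (x1, y1, x2, y2)."""
    l1 = F.l1_loss(pred, target)
    # Intersection and union of each predicted/ground-truth pair.
    ix1, iy1 = torch.max(pred[:, 0], target[:, 0]), torch.max(pred[:, 1], target[:, 1])
    ix2, iy2 = torch.min(pred[:, 2], target[:, 2]), torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = (area_p + area_t - inter).clamp(min=1e-6)
    # Smallest enclosing box, used by the GIoU penalty term.
    ex1, ey1 = torch.min(pred[:, 0], target[:, 0]), torch.min(pred[:, 1], target[:, 1])
    ex2, ey2 = torch.max(pred[:, 2], target[:, 2]), torch.max(pred[:, 3], target[:, 3])
    enclose = ((ex2 - ex1) * (ey2 - ey1)).clamp(min=1e-6)
    giou = inter / union - (enclose - union) / enclose
    return l1 + giou_weight * (1.0 - giou).mean()
```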
Evaluation
- Evaluation Tasks:
- Phrase Grounding: Evaluated on dedicated medical grounding datasets.
- Visual Question Answering (VQA): Evaluated on standard medical VQA datasets (e.g., VQA-RAD, SLAKE, PathVQA).
- Medical Report Generation (MRG): Evaluated on datasets like MIMIC-CXR for report quality.
- Metrics:
- Grounding: IoU (Intersection over Union), Recall@k; a reference computation follows this list.
- VQA: Accuracy, AUC.
- MRG: Text generation metrics such as BLEU, ROUGE, and CIDEr.
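For reference, grounding IoU and Recall@k can be computed as in this self-contained sketch; the IoU threshold of 0.5 is a conventional choice, not necessarily the paper's evaluation setting.

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def recall_at_k(topk_boxes_per_query, gt_boxes, k=1, iou_thresh=0.5):
    """A query counts as recalled if any of its top-k predicted boxes
    reaches the IoU threshold against the ground-truth box."""
    hits = sum(
        any(box_iou(p, gt) >= iou_thresh for p in preds[:k])
        for preds, gt in zip(topk_boxes_per_query, gt_boxes)
    )
    return hits / len(gt_boxes)

print(box_iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```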
- Results: The paper reports significant performance gains over previous state-of-the-art methods in several downstream tasks, particularly those requiring strong grounding capabilities. Figures and tables (e.g., Figure 7, Table 6) provide qualitative and quantitative comparisons.
Limitations
- Model performance is highly dependent on the quality and coverage of the Med-GLIP-5M dataset.
- The model's ability to generalize to rare diseases or unseen imaging modalities/styles may be limited.
- Noise or inaccuracies introduced during the automated grounding annotation process could affect the model's precision.
- The model's computational requirements may be high for training and inference.
- (Refer to the full paper for a comprehensive discussion of limitations.)
Bias, Risks, and Ethical Considerations
- Data Bias: The Med-GLIP-5M dataset may contain demographic biases (e.g., in age, gender, race representation) from its source institutions, which can be reflected in the model's performance on underrepresented groups.
- Clinical Risk: The model is an AI research tool and must not be used for primary clinical diagnosis or patient care without explicit, strict clinical validation and regulatory approval. Misinterpretation of results could lead to patient harm.
- Interpretability: While the grounding feature aids in interpretability, the overall decision-making process is complex, and failures should be treated with caution.
- (Refer to the full paper for a detailed discussion of ethical and societal implications.)