---
license: apache-2.0
task_categories:
- image-to-3d
tags:
- fmri
- neuroscience
- 3d-reconstruction
---

# fMRI-Objaverse

This repository contains **fMRI-Objaverse**, a comprehensive dataset for fMRI-based 3D reconstruction, presented in the paper [MinD-3D++: Advancing fMRI-Based 3D Reconstruction with High-Quality Textured Mesh Generation and a Comprehensive Dataset](https://huggingface.co/papers/2409.11315).

[![ArXiv](https://img.shields.io/badge/ArXiv-2409.11315-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2409.11315)

**Project Page**: [https://jianxgao.github.io/MinD-3D](https://jianxgao.github.io/MinD-3D)

**Code**: [https://github.com/JianxGao/MinD-3D](https://github.com/JianxGao/MinD-3D)

## Overview

fMRI-Objaverse is an extension of [fMRI-Shape](https://huggingface.co/datasets/Fudan-fMRI/fMRI-Shape). It is part of the larger fMRI-3D dataset, which significantly advances the task of reconstructing 3D visuals from functional Magnetic Resonance Imaging (fMRI) data. The fMRI-3D dataset includes data from 15 participants and covers a total of 4,768 3D objects.

fMRI-Objaverse specifically includes data from 5 subjects, 4 of whom are also part of the core set in fMRI-Shape. Each subject viewed 3,142 3D objects across 117 categories, all accompanied by text captions. This significantly enhances the diversity of the dataset and its potential applications for decoding textured 3D visual information from fMRI signals and generating detailed textured meshes.

## Sample Usage

This section provides quick instructions on setting up the environment, training models, and performing inference with the code from the [MinD-3D GitHub repository](https://github.com/JianxGao/MinD-3D).

### Environment Setup

To set up the environment for MinD-3D:

```bash
git clone https://github.com/JianxGao/MinD-3D.git
cd MinD-3D
bash env_install.sh
```

For the MinD-3D++ setup, please refer to the [InstantMesh](https://github.com/TencentARC/InstantMesh) repository for detailed environment setup instructions.

### Train

Example commands to train models with the MinD-3D framework:

**MinD-3D Training:**

```bash
# Stage 1
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 --master_port=25645 \
  train_stage1.py --sub_id 0001 --ddp \
  --config ./configs/mind3d.yaml \
  --out_dir sub01_stage1 --batchsize 8
```

```bash
# Stage 2
CUDA_VISIBLE_DEVICES=1 python -m torch.distributed.launch --nproc_per_node=1 --master_port=25645 \
  train_stage2.py --sub_id 0001 --ddp \
  --config ./configs/mind3d.yaml \
  --out_dir sub01_stage2 --batchsize 2
```

You can download the quantized features used for training here: https://drive.google.com/file/d/1R8IpG1bligLAfHkLQ2COrfTIkay14AEm/view?usp=drive_link
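Note: on recent PyTorch releases, `torch.distributed.launch` is deprecated in favor of `torchrun`. If your environment rejects the commands above, the stage-1 run can be launched as sketched below. This is an untested equivalent, not from the official repository, and it assumes the training script reads the local rank from the `LOCAL_RANK` environment variable (`torchrun` sets this variable instead of passing a `--local_rank` argument):

```bash
# Hypothetical torchrun equivalent of the stage-1 command above.
# torchrun exports LOCAL_RANK/RANK/WORLD_SIZE as environment variables
# rather than injecting a --local_rank argument into the script.
CUDA_VISIBLE_DEVICES=0 torchrun --nproc_per_node=1 --master_port=25645 \
  train_stage1.py --sub_id 0001 --ddp \
  --config ./configs/mind3d.yaml \
  --out_dir sub01_stage1 --batchsize 8
```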
You can download the pretrained weights for Subject 1 here: https://drive.google.com/file/d/1ni4g1iCvdpoi2xYtmydr_w3XA5PpNrvm/view?usp=sharing

**MinD-3D++ Training:**

```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 --master_port=25644 \
  train_sd.py --ddp \
  --config ./configs/mind3d_pp.yaml \
  --out_dir mind3dpp_fmri_shape_subject1_rank_64 --batchsize 8
```

### Inference

Example commands to run inference with the trained models:

**MinD-3D Inference:**

```bash
# Sub01 Plane
python generate_fmri2shape.py --config ./configs/mind3d.yaml --check_point_path ./mind3d_sub01.pt \
  --uid b5d0ae4f723bce81f119374ee5d5f944 --topk 250

# Sub01 Car
python generate_fmri2shape.py --config ./configs/mind3d.yaml --check_point_path ./mind3d_sub01.pt \
  --uid aebd98c5d7e8150b709ce7955adef61b --topk 250
```

**MinD-3D++ Inference:**

```bash
cd InstantMesh  # Navigate to the InstantMesh directory for this inference
CUDA_VISIBLE_DEVICES=0 python infer_fmri_obj.py ./configs/mind3d_pp_infer.yaml \
  --unet_path model_weight \
  --save_name save_dir \
  --input_path ./dataset/fmri_shape/core_test_list.txt \
  --fmri_dir fmri_dir \
  --gt_image_dir gt_image_dir \
  --save_video --export_texmap
```

## Citation

If you find our dataset useful for your research and applications, please cite using this BibTeX:

```bibtex
@misc{gao2024fmri3dcomprehensivedatasetenhancing,
  title={fMRI-3D: A Comprehensive Dataset for Enhancing fMRI-based 3D Reconstruction},
  author={Jianxiong Gao and Yuqian Fu and Yun Wang and Xuelin Qian and Jianfeng Feng and Yanwei Fu},
  year={2024},
  eprint={2409.11315},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2409.11315},
}
```
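To fetch the dataset files programmatically, you can use `huggingface_hub`. Below is a minimal sketch; the `repo_id` is an assumption based on this dataset page's namespace and should be adjusted if it differs:

```python
# Minimal sketch for downloading the dataset with huggingface_hub.
# The repo_id below is assumed from this dataset page; adjust it to the
# actual namespace if necessary.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Fudan-fMRI/fMRI-Objaverse",  # assumed repo id
    repo_type="dataset",                  # a dataset repo, not a model repo
)
print(f"Dataset downloaded to: {local_dir}")
```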