Update README.md #21
by dylanebert · opened

README.md CHANGED

@@ -5,12 +5,132 @@ tags:
- image-to-3d
---

<div align="center">

# InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models

<a href="https://arxiv.org/abs/2404.07191"><img src="https://img.shields.io/badge/ArXiv-2404.07191-brightgreen"></a>
<a href="https://huggingface.co/TencentARC/InstantMesh"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Model_Card-Huggingface-orange"></a>
<a href="https://huggingface.co/spaces/TencentARC/InstantMesh"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Gradio%20Demo-Huggingface-orange"></a> <br>
<a href="https://replicate.com/camenduru/instantmesh"><img src="https://img.shields.io/badge/Demo-Replicate-blue"></a>
<a href="https://colab.research.google.com/github/camenduru/InstantMesh-jupyter/blob/main/InstantMesh_jupyter.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg"></a>
<a href="https://github.com/jtydhr88/ComfyUI-InstantMesh"><img src="https://img.shields.io/badge/Demo-ComfyUI-8A2BE2"></a>

</div>

---

InstantMesh is a feed-forward framework for efficient 3D mesh generation from a single image, built on the [LRM/Instant3D](https://huggingface.co/papers/2311.04400) architecture.

# ⚙️ Dependencies and Installation

We recommend using `Python>=3.10`, `PyTorch>=2.1.0`, and `CUDA>=12.1`.
```bash
conda create --name instantmesh python=3.10
conda activate instantmesh
pip install -U pip

# Ensure Ninja is installed
conda install Ninja

# Install the correct version of CUDA
conda install cuda -c nvidia/label/cuda-12.1.0

# Install PyTorch and xformers
# You may need to install another xformers version if you use a different PyTorch version
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121
pip install xformers==0.0.22.post7

# Install other requirements
pip install -r requirements.txt
```

# 💫 How to Use

## Download the models

We provide 4 sparse-view reconstruction model variants and a customized Zero123++ UNet for white-background image generation in the [model card](https://huggingface.co/TencentARC/InstantMesh).

Our inference script will download the models automatically. Alternatively, you can manually download the models and put them under the `ckpts/` directory.

By default, we use the `instant-mesh-large` reconstruction model variant.
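
For manual downloads, a minimal sketch using the `huggingface-cli` tool is shown below (assuming a recent `huggingface_hub` release; check the model card for the exact checkpoint filenames your config expects):
```bash
# Sketch: mirror the model repo into ckpts/ (assumes huggingface_hub's CLI is installed)
pip install -U "huggingface_hub[cli]"
huggingface-cli download TencentARC/InstantMesh --local-dir ckpts/
```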

## Start a local Gradio demo

To start a Gradio demo on your local machine, run:
```bash
python app.py
```

If your machine has multiple GPUs, the demo app will automatically run on two GPUs to save memory. You can also force it to run on a single GPU:
```bash
CUDA_VISIBLE_DEVICES=0 python app.py
```

Alternatively, you can run the demo with Docker. Please follow the instructions in the [docker](docker/) directory.

## Running with the command line

To generate 3D meshes from images via the command line, run:
```bash
python run.py configs/instant-mesh-large.yaml examples/hatsune_miku.png --save_video
```

We use [rembg](https://github.com/danielgatis/rembg) to segment the foreground object. If the input image already has an alpha mask, specify the `--no_rembg` flag:
```bash
python run.py configs/instant-mesh-large.yaml examples/hatsune_miku.png --save_video --no_rembg
```
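
If you prefer to strip the background ahead of time, rembg also ships a command-line interface; a minimal sketch follows (the `[cli]` extra and the `_rgba` output filename are assumptions for illustration):
```bash
# Sketch: pre-compute the alpha mask with rembg's CLI, then skip segmentation in run.py
pip install "rembg[cli]"
rembg i examples/hatsune_miku.png examples/hatsune_miku_rgba.png
python run.py configs/instant-mesh-large.yaml examples/hatsune_miku_rgba.png --save_video --no_rembg
```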

By default, our script exports an `.obj` mesh with vertex colors. Specify the `--export_texmap` flag if you want to export a mesh with a texture map instead (this takes longer):
```bash
python run.py configs/instant-mesh-large.yaml examples/hatsune_miku.png --save_video --export_texmap
```

To use another reconstruction model variant, pass a different `.yaml` config file from the [configs](./configs) directory. For example, to generate with the `instant-nerf-large` model:
```bash
python run.py configs/instant-nerf-large.yaml examples/hatsune_miku.png --save_video
```
**Note:** When using the `NeRF` model variants for image-to-3D generation, exporting a mesh with a texture map via `--export_texmap` may take a long time in the UV unwrapping step, since the default iso-surface extraction resolution is `256`. You can set a lower iso-surface extraction resolution in the config file.

# 💻 Training

We provide our training code to facilitate future research, but we cannot release the training dataset due to its size. Please refer to our [dataloader](src/data/objaverse.py) for more details.

To train the sparse-view reconstruction models, please run:
```bash
# Training on the NeRF representation
python train.py --base configs/instant-nerf-large-train.yaml --gpus 0,1,2,3,4,5,6,7 --num_nodes 1

# Training on the Mesh representation
python train.py --base configs/instant-mesh-large-train.yaml --gpus 0,1,2,3,4,5,6,7 --num_nodes 1
```

We also provide our Zero123++ fine-tuning code since it is frequently requested. The running command is:
```bash
python train.py --base configs/zero123plus-finetune.yaml --gpus 0,1,2,3,4,5,6,7 --num_nodes 1
```

# 📚 Citation

If you find our work useful for your research or applications, please cite using this BibTeX:

```BibTeX
@article{xu2024instantmesh,
  title={InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models},
  author={Xu, Jiale and Cheng, Weihao and Gao, Yiming and Wang, Xintao and Gao, Shenghua and Shan, Ying},
  journal={arXiv preprint arXiv:2404.07191},
  year={2024}
}
```

# 🤗 Acknowledgements

We thank the authors of the following projects for their excellent contributions to 3D generative AI!

- [Zero123++](https://github.com/SUDO-AI-3D/zero123plus)
- [OpenLRM](https://github.com/3DTopia/OpenLRM)
- [FlexiCubes](https://github.com/nv-tlabs/FlexiCubes)
- [Instant3D](https://instant-3d.github.io/)

Thanks to [@camenduru](https://github.com/camenduru) for implementing the [Replicate demo](https://replicate.com/camenduru/instantmesh) and [Colab demo](https://colab.research.google.com/github/camenduru/InstantMesh-jupyter/blob/main/InstantMesh_jupyter.ipynb)!

Thanks to [@jtydhr88](https://github.com/jtydhr88) for implementing [ComfyUI support](https://github.com/jtydhr88/ComfyUI-InstantMesh)!