Commit · cf06bf7
1 Parent(s): 6c60ccc

Upload README.md with huggingface_hub

README.md CHANGED
@@ -1,167 +1,5 @@

Removed (previous README.md content):

[Paper](https://arxiv.org/abs/2206.11253) | [Project Page](https://shangchenzhou.com/projects/CodeFormer/) | [Video](https://youtu.be/d3VDpkXlueI)

<a href="https://colab.research.google.com/drive/1m52PNveE4PBhYrecj34cnpEeiHcC5LTb?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a> [Hugging Face Demo](https://huggingface.co/spaces/sczhou/CodeFormer) [Replicate Demo](https://replicate.com/sczhou/codeformer) [OpenXLab Demo](https://openxlab.org.cn/apps/detail/ShangchenZhou/CodeFormer)

[Shangchen Zhou](https://shangchenzhou.com/), [Kelvin C.K. Chan](https://ckkelvinchan.github.io/), [Chongyi Li](https://li-chongyi.github.io/), [Chen Change Loy](https://www.mmlab-ntu.com/person/ccloy/)

S-Lab, Nanyang Technological University

<img src="assets/network.jpg" width="800px"/>

:star: If CodeFormer is helpful to your images or projects, please help star this repo. Thanks! :hugs:

### Update

- **2023.07.20**: Integrated to :panda_face: [OpenXLab](https://openxlab.org.cn/apps). Try out the online demo! [Demo](https://openxlab.org.cn/apps/detail/ShangchenZhou/CodeFormer)
- **2023.04.19**: :whale: Training code and config files are now publicly available.
- **2023.04.09**: Added inpainting and colorization features for cropped and aligned face images.
- **2023.02.10**: Included `dlib` as a new face detector option; it produces more accurate face identity.
- **2022.10.05**: Added support for video input `--input_path [YOUR_VIDEO.mp4]`. Try it to enhance your videos! :clapper:
- **2022.09.14**: Integrated to :hugs: [Hugging Face](https://huggingface.co/spaces). Try out the online demo! [Demo](https://huggingface.co/spaces/sczhou/CodeFormer)
- **2022.09.09**: Integrated to :rocket: [Replicate](https://replicate.com/explore). Try out the online demo! [Demo](https://replicate.com/sczhou/codeformer)
- [**More**](docs/history_changelog.md)

### TODO

- [x] Add training code and config files
- [x] Add checkpoint and script for face inpainting
- [x] Add checkpoint and script for face colorization
- [x] ~~Add background image enhancement~~

#### :panda_face: Try Enhancing Old Photos / Fixing AI-arts

[<img src="assets/imgsli_1.jpg" height="226px"/>](https://imgsli.com/MTI3NTE2) [<img src="assets/imgsli_2.jpg" height="226px"/>](https://imgsli.com/MTI3NTE1) [<img src="assets/imgsli_3.jpg" height="226px"/>](https://imgsli.com/MTI3NTIw)

#### Face Restoration

<img src="assets/restoration_result1.png" width="400px"/> <img src="assets/restoration_result2.png" width="400px"/>
<img src="assets/restoration_result3.png" width="400px"/> <img src="assets/restoration_result4.png" width="400px"/>

#### Face Color Enhancement and Restoration

<img src="assets/color_enhancement_result1.png" width="400px"/> <img src="assets/color_enhancement_result2.png" width="400px"/>

#### Face Inpainting

<img src="assets/inpainting_result1.png" width="400px"/> <img src="assets/inpainting_result2.png" width="400px"/>

### Dependencies and Installation

- PyTorch >= 1.7.1
- CUDA >= 10.1
- Other required packages listed in `requirements.txt`

```
# git clone this repository
git clone https://github.com/sczhou/CodeFormer
cd CodeFormer

# create a new anaconda env
conda create -n codeformer python=3.8 -y
conda activate codeformer

# install python dependencies
pip3 install -r requirements.txt
python basicsr/setup.py develop

# install dlib (only needed for face detection or cropping with dlib)
conda install -c conda-forge dlib
```
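
After installation, you may want to confirm that the PyTorch and CUDA requirements listed above are actually satisfied. The snippet below is a minimal sketch of such a check, not part of the official CodeFormer scripts:
```
# minimal environment check -- a sketch, not an official CodeFormer script
import torch

print("PyTorch version:", torch.__version__)      # expect >= 1.7.1
print("CUDA available: ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA device:   ", torch.cuda.get_device_name(0))
```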

### Quick Inference

#### Download Pre-trained Models:

Download the facelib and dlib pretrained models from [[Releases](https://github.com/sczhou/CodeFormer/releases/tag/v0.1.0) | [Google Drive](https://drive.google.com/drive/folders/1b_3qwrzY_kTQh0-SnBoGBgOrJ_PLZSKm?usp=sharing) | [OneDrive](https://entuedu-my.sharepoint.com/:f:/g/personal/s200094_e_ntu_edu_sg/EvDxR7FcAbZMp_MA9ouq7aQB8XTppMb3-T0uGZ_2anI2mg?e=DXsJFo)] to the `weights/facelib` folder. You can download the pretrained models manually, or by running the following command:
```
python scripts/download_pretrained_models.py facelib
python scripts/download_pretrained_models.py dlib  # only for the dlib face detector
```

Download the CodeFormer pretrained models from [[Releases](https://github.com/sczhou/CodeFormer/releases/tag/v0.1.0) | [Google Drive](https://drive.google.com/drive/folders/1CNNByjHDFt0b95q54yMVp6Ifo5iuU6QS?usp=sharing) | [OneDrive](https://entuedu-my.sharepoint.com/:f:/g/personal/s200094_e_ntu_edu_sg/EoKFj4wo8cdIn2-TY2IV6CYBhZ0pIG4kUOeHdPR_A5nlbg?e=AO8UN9)] to the `weights/CodeFormer` folder. You can download the pretrained models manually, or by running the following command:
```
python scripts/download_pretrained_models.py CodeFormer
```
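
As an optional follow-up, a small script can confirm that the weights landed in the expected folders. The sketch below assumes the checkpoints are `.pth` files; the exact file names depend on the release:
```
# sanity-check the downloaded weights -- a sketch, exact file names may differ
from pathlib import Path

for folder in ("weights/facelib", "weights/CodeFormer"):
    found = sorted(p.name for p in Path(folder).glob("*.pth"))
    print(folder, "->", found if found else "no .pth files found yet")
```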

#### Prepare Testing Data:

Put the testing images in the `inputs/TestWhole` folder. If you would like to test on cropped and aligned faces, put them in the `inputs/cropped_faces` folder instead. You can get cropped and aligned faces by running the following command:
```
# you may need to install dlib via: conda install -c conda-forge dlib
python scripts/crop_align_face.py -i [input folder] -o [output folder]
```

#### Testing:

[Note] If you want to compare against CodeFormer in your paper, please run the command below with `--has_aligned` (for cropped and aligned faces): the whole-image command involves a face-background fusion step that may damage hair texture on the boundary, which leads to an unfair comparison.

The fidelity weight *w* lies in [0, 1]. Generally, a smaller *w* tends to produce a higher-quality result, while a larger *w* yields a higher-fidelity result. The results will be saved in the `results` folder.
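
If you are unsure which value of *w* suits your data, one option is to sweep a few values and compare the outputs in the `results` folder side by side. The sketch below simply loops over the aligned-face command shown in the next block; the input folder is a placeholder:
```
# sweep several fidelity weights -- a convenience sketch around
# inference_codeformer.py, not an official script
import subprocess

input_path = "inputs/cropped_faces"  # placeholder: your cropped/aligned faces
for w in (0.3, 0.5, 0.7, 0.9):
    subprocess.run(
        ["python", "inference_codeformer.py",
         "-w", str(w), "--has_aligned",
         "--input_path", input_path],
        check=True,
    )
# lower w should look cleaner; higher w should stay closer to the input identity
```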

🧑🏻 Face Restoration (cropped and aligned face)
```
# For cropped and aligned faces (512x512)
python inference_codeformer.py -w 0.5 --has_aligned --input_path [image folder]|[image path]
```

:framed_picture: Whole Image Enhancement
```
# For whole image
# Add '--bg_upsampler realesrgan' to enhance the background regions with Real-ESRGAN
# Add '--face_upsample' to further upsample the restored face with Real-ESRGAN
python inference_codeformer.py -w 0.7 --input_path [image folder]|[image path]
```

:clapper: Video Enhancement
```
# For Windows/Mac users, please install ffmpeg first
conda install -c conda-forge ffmpeg
```
```
# For video clips
# Video path should end with '.mp4'|'.mov'|'.avi'
python inference_codeformer.py --bg_upsampler realesrgan --face_upsample -w 1.0 --input_path [video path]
```

🌈 Face Colorization (cropped and aligned face)
```
# For cropped and aligned faces (512x512)
# Colorize black-and-white or faded photos
python inference_colorization.py --input_path [image folder]|[image path]
```
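
If you only have color photos at hand, you can fabricate a grayscale test input for the colorization script. The sketch below assumes Pillow is installed and uses placeholder paths; the 512x512 size matches the cropped-and-aligned format above:
```
# build a grayscale 512x512 test input -- a sketch with placeholder paths,
# assuming Pillow is installed
import os
from PIL import Image

os.makedirs("inputs/gray_faces", exist_ok=True)
img = Image.open("inputs/cropped_faces/example.png")  # placeholder file
img.convert("L").convert("RGB").resize((512, 512)).save("inputs/gray_faces/example_gray.png")
```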

🎨 Face Inpainting (cropped and aligned face)
```
# For cropped and aligned faces (512x512)
# Inputs could be masked by white brush using an image editing app (e.g., Photoshop)
# (check out the examples in inputs/masked_faces)
python inference_inpainting.py --input_path [image folder]|[image path]
```
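
The white mask can also be painted programmatically instead of with an image editor. The sketch below follows the white-brush convention described above; the rectangle coordinates and file names are placeholders:
```
# paint a white "brush" region onto a cropped 512x512 face -- a sketch;
# paths and coordinates are placeholders
from PIL import Image, ImageDraw

face = Image.open("inputs/cropped_faces/example.png").convert("RGB")  # placeholder
draw = ImageDraw.Draw(face)
draw.rectangle([180, 200, 330, 320], fill=(255, 255, 255))  # region to inpaint
face.save("inputs/masked_faces/example_masked.png")
```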

### Training:

The training commands can be found in the documents: [English](docs/train.md) **|** [简体中文](docs/train_CN.md).

### Citation

If our work is useful for your research, please consider citing:

    @inproceedings{zhou2022codeformer,
        author = {Zhou, Shangchen and Chan, Kelvin C.K. and Li, Chongyi and Loy, Chen Change},
        title = {Towards Robust Blind Face Restoration with Codebook Lookup TransFormer},
        booktitle = {NeurIPS},
        year = {2022}
    }

### License

This project is licensed under the <a rel="license" href="https://github.com/sczhou/CodeFormer/blob/master/LICENSE">NTU S-Lab License 1.0</a>. Redistribution and use should follow this license.

### Acknowledgement

This project is based on [BasicSR](https://github.com/XPixelGroup/BasicSR). Some code is borrowed from [Unleashing Transformers](https://github.com/samb-t/unleashing-transformers), [YOLOv5-face](https://github.com/deepcam-cn/yolov5-face), and [FaceXLib](https://github.com/xinntao/facexlib). We also adopt [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN) to support background image enhancement. Thanks for their awesome work.

### Contact

If you have any questions, please feel free to reach out to me at `[email protected]`.

Added (new README.md content):

---
{}
---

codeformer