🚧 Reproducing LBM-Eraser… in the open [1]!
Today we trained an LBM [2] promptless inpainter using Re-LAION-Caption19M [3]. We use a subset of 1.25M images with aesthetic_score > 5.6 and pwatermark < 0.2, and LaMa [4] mask generation.
2 takeaways:
🖼 Inpainting quality is better than in our RORD experiments [5]
🦶 "4 steps" outperforms single-step
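The data filter described above can be sketched as a simple predicate over LAION-style metadata. This is a minimal illustration, not the actual pipeline; the records are made up, but the `aesthetic_score` and `pwatermark` field names and thresholds follow the post:

```python
# Hypothetical metadata records (LAION-style fields, invented values).
records = [
    {"url": "a.jpg", "aesthetic_score": 6.1, "pwatermark": 0.05},
    {"url": "b.jpg", "aesthetic_score": 5.0, "pwatermark": 0.01},
    {"url": "c.jpg", "aesthetic_score": 6.5, "pwatermark": 0.50},
]

def keep(r):
    # High-aesthetic images with low watermark probability, per the post.
    return r["aesthetic_score"] > 5.6 and r["pwatermark"] < 0.2

subset = [r for r in records if keep(r)]
print([r["url"] for r in subset])  # → ['a.jpg']
```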
[1] Finegrain LBM fork: https://github.com/finegrain-ai/LBM
[2] LBM: Latent Bridge Matching for Fast Image-to-Image Translation (2503.07535)
[3] supermodelresearch/Re-LAION-Caption19M
[4] Resolution-robust Large Mask Inpainting with Fourier Convolutions (2109.07161)
[5] https://huggingface.co/posts/piercus/778833977889788
cc @supermodelresearch @presencesw
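On the "4 steps" takeaway: multi-step inference integrates the learned bridge over several small steps instead of jumping in one. A generic Euler sketch of the idea, with a hypothetical `velocity` function standing in for the trained model (the real inference code lives in the fork linked in [1]):

```python
def sample(x0, velocity, num_steps=4):
    """Integrate dx/dt = velocity(x, t) from t=0 to t=1 in `num_steps` Euler steps."""
    x, dt = x0, 1.0 / num_steps
    for i in range(num_steps):
        t = i * dt
        x = x + dt * velocity(x, t)
    return x

# Toy check: with a constant unit velocity, 4 steps move x from 0.0 to 1.0.
print(sample(0.0, lambda x, t: 1.0, num_steps=4))  # → 1.0
```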