Baseline Pretrained Models for the FOMO25 Challenge
This repository contains two UNets trained on the FOMO60k dataset using the baseline-codebase with the following configs:
UNet B:
- epochs: 100 (5 warmup)
- batch_size: 2
- patch_size: 96
- augmentation_preset: all

UNet XL:
- epochs: 100 (5 warmup)
- batch_size: 8
- patch_size: 96
- augmentation_preset: all
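The "100 epochs (5 warmup)" setting means the learning rate ramps up over the first 5 epochs before following its normal decay. As a rough illustration (not the baseline-codebase's actual scheduler, which may differ), a linear warmup followed by cosine decay can be written as a multiplicative factor per epoch:

```python
import math

def lr_factor(epoch: int, total_epochs: int = 100, warmup_epochs: int = 5) -> float:
    """Multiplicative LR factor: linear warmup, then cosine decay.

    Illustrative only -- the exact schedule used by the baseline-codebase
    may differ; only the epoch counts come from the config above.
    """
    if epoch < warmup_epochs:
        # Linear ramp from 1/warmup_epochs up to 1.0
        return (epoch + 1) / warmup_epochs
    # Cosine decay from 1.0 toward 0.0 over the remaining epochs
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * (1.0 + math.cos(math.pi * progress))
```

Such a factor can be plugged into a standard scheduler (e.g. PyTorch's `torch.optim.lr_scheduler.LambdaLR`) so the base learning rate is scaled each epoch.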
The models were finetuned for each of the three tasks using the default parameters of the baseline-codebase and submitted to the leaderboard by the challenge organizers.
All challenge participants may use these models, whether as a baseline to compare against their own methods, as a starting point for continual pretraining, or to explore novel finetuning and adaptation strategies.
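When adapting a pretrained checkpoint to a new task, a common pattern is to load every weight whose name and shape match and leave task-specific layers (e.g. a new segmentation head) randomly initialized. The sketch below illustrates this with a tiny stand-in network and a locally saved checkpoint; the model class and filename are placeholders, not the actual baseline-codebase API:

```python
import os
import tempfile

import torch
from torch import nn

# Tiny stand-in for a 3D UNet; the real architecture comes from the
# baseline-codebase and is far larger.
class TinyUNetStandIn(nn.Module):
    def __init__(self, out_channels: int = 2):
        super().__init__()
        self.encoder = nn.Conv3d(1, 8, kernel_size=3, padding=1)
        self.head = nn.Conv3d(8, out_channels, kernel_size=1)

    def forward(self, x):
        return self.head(self.encoder(x))

# Stand-in for a downloaded pretrained checkpoint (here: a 2-class head).
pretrained = TinyUNetStandIn(out_channels=2)
ckpt_path = os.path.join(tempfile.gettempdir(), "unet_b_standin.pt")
torch.save(pretrained.state_dict(), ckpt_path)

# New task with a different number of classes: copy over every weight
# whose name and shape match, and keep the new head randomly initialized.
model = TinyUNetStandIn(out_channels=4)
state = torch.load(ckpt_path, weights_only=True)
model_state = model.state_dict()
compatible = {k: v for k, v in state.items()
              if k in model_state and v.shape == model_state[k].shape}
model.load_state_dict(compatible, strict=False)
```

Filtering by shape before calling `load_state_dict(..., strict=False)` avoids the size-mismatch errors that would otherwise occur when the task head differs between pretraining and finetuning.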