Maria Castellanos committed on
Commit b1db5fd · 1 Parent(s): 5c22f32

change submission freq

Files changed (1)
  1. app.py +1 -1
app.py CHANGED
@@ -165,7 +165,7 @@ def gradio_interface():
 - In the spirit of open science and open source, we would love to see code showing how you created your submission, if possible, in the form of a GitHub repository.
 If that is not possible due to IP or other constraints, you must at a minimum provide a short written report on your methodology based on the template [here](https://docs.google.com/document/d/1bttGiBQcLiSXFngmzUdEqVchzPhj-hcYLtYMszaOqP8/edit?usp=sharing).
 **Make sure your last submission before the deadline includes a link to a report or to a GitHub repository.**
-- Each participant can submit as many times as they like, up to a limit of 5 times/day. **Only your latest submission will be considered for the final leaderboard.**
+- Each participant can submit as many times as they like, up to a limit of once per day. **Only your latest submission will be considered for the final leaderboard.**
 - The endpoints will be judged individually by mean absolute error (**MAE**), while the overall leaderboard will be judged by the macro-averaged relative absolute error (**MA-RAE**).
 - Endpoints that are not already on a log scale (e.g., LogD) will be transformed to log scale to minimize the impact of outliers on evaluation.
 - We will estimate errors on the metrics using bootstrapping, and we will use the statistical testing workflow outlined in [this paper](https://chemrxiv.org/engage/chemrxiv/article-details/672a91bd7be152b1d01a926b) to determine whether model performance is statistically distinct.
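For reference, here is a minimal sketch of how the two headline metrics could be computed. It assumes one common convention for relative absolute error, namely the model's MAE divided by the MAE of a constant mean-value baseline per endpoint; the organizers' exact definition may differ, and all function and variable names below are hypothetical.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error for a single endpoint."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_true - y_pred)))

def rae(y_true, y_pred):
    """Relative absolute error: model MAE divided by the MAE of a constant
    mean-value baseline (an assumed convention, not the official definition)."""
    y_true = np.asarray(y_true, dtype=float)
    baseline = np.full_like(y_true, y_true.mean())
    return mae(y_true, y_pred) / mae(y_true, baseline)

def ma_rae(endpoints):
    """Macro-averaged RAE: the unweighted mean of per-endpoint RAEs.
    `endpoints` maps endpoint name -> (y_true, y_pred) arrays."""
    return float(np.mean([rae(t, p) for t, p in endpoints.values()]))

# Example with toy numbers. Per the rules above, endpoints not already on a
# log scale would first be log-transformed (e.g., with np.log10) before scoring.
score = ma_rae({
    "LogD": ([1.2, 0.5, 2.1], [1.0, 0.7, 1.8]),
    "Solubility": ([3.4, 2.2, 4.1], [3.0, 2.5, 4.4]),
})
print(score)
```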
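And a minimal sketch of the bootstrapping idea used to put error bars on a metric: resample prediction/label pairs with replacement and take quantiles of the resulting MAE distribution. The statistical testing workflow in the linked paper is more involved; this only illustrates the basic resampling step, and the function name is hypothetical.

```python
import numpy as np

def bootstrap_mae_ci(y_true, y_pred, n_boot=10_000, alpha=0.05, seed=0):
    """Estimate a (1 - alpha) confidence interval on MAE by resampling
    prediction/label pairs with replacement."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    n = len(y_true)
    # Draw all bootstrap index sets at once: shape (n_boot, n).
    idx = rng.integers(0, n, size=(n_boot, n))
    maes = np.mean(np.abs(y_true[idx] - y_pred[idx]), axis=1)
    lo, hi = np.quantile(maes, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)

# Example with toy numbers:
lo, hi = bootstrap_mae_ci([1.2, 0.5, 2.1, 1.7], [1.0, 0.7, 1.8, 2.0])
print(f"95% CI on MAE: [{lo:.3f}, {hi:.3f}]")
```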