# Leaderboard

The only requirement to appear on the leaderboard is to have a successful [Run](/other/glossary.md#run).

Being on the leaderboard makes you eligible for rewards based on the rewards system of the crunch you are participating in.

## Submission Phase

During the [Submission Phase](/other/glossary.md#submission-phase), the leaderboard is updated when a [Run](/other/glossary.md#run) is scored.

If there is no public leaderboard, the leaderboard is instead updated when a [Prediction](/other/glossary.md#prediction) is validated, to confirm it is ready for scoring.

## Out-of-Sample Phase

During the [Out-of-Sample Phase](/other/glossary.md#out-of-sample-phase), the leaderboard is updated only once, when all the [Runs](/other/glossary.md#run) executing on unseen data have been scored.

## Columns

Only a subset of columns will be visible, depending on the current state of the crunch.

<table><thead><tr><th width="235">Name</th><th>Description</th></tr></thead><tbody><tr><td>Rank</td><td>Rank of the participant / model</td></tr><tr><td>Participant</td><td>Name of the participant / model</td></tr><tr><td>Participation</td><td>Indicates whether the participant is ready for the private leaderboard<br>(only if there is no public leaderboard)</td></tr><tr><td>Team</td><td>The participant's team name<br>(only if the competition allows teams)</td></tr><tr><td>Best <code>metric</code> (<code>unit</code>)</td><td>The best submission score for the metric <code>metric</code><br>(reset at the start of each round)</td></tr><tr><td>Last <code>metric</code> (<code>unit</code>)</td><td>The last submission score for the metric <code>metric</code></td></tr><tr><td>Mean</td><td>The weighted average of the metrics<br>(only if there are multiple)</td></tr><tr><td><code>metric</code> (<code>unit</code>)</td><td>The score value of the metric <code>metric</code></td></tr><tr><td>Run Success</td><td>The number of successful runs / the number of failed runs</td></tr><tr><td>Previous LB Chn</td><td>Rank change from the previous round</td></tr><tr><td>Public LB Chn</td><td>Rank change from the public leaderboard</td></tr><tr><td>Weekly Change</td><td>Rank change from the previous week</td></tr><tr><td>User Hist. Rewards</td><td>The total amount of rewards the participant has ever received</td></tr><tr><td>Proj. Rewards</td><td>The amount of rewards the participant is expected to receive at the end of the month</td></tr></tbody></table>

<figure><img src="/files/2N87haLEER8txdylz2dj" alt=""><figcaption><p>A small view of the leaderboard, including rank changes and a metric.</p></figcaption></figure>
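The Mean column is described above as a weighted average of the metrics. As a minimal sketch of what such a column could compute (the metric names and weights below are purely illustrative; the actual weights are crunch-specific and not documented on this page):

```python
def weighted_mean(scores: dict, weights: dict) -> float:
    """Weighted average of per-metric scores; weights need not sum to 1."""
    total_weight = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_weight

# Hypothetical metric scores and weights, for illustration only.
scores = {"rmse": 0.8, "spearman": 0.6}
weights = {"rmse": 2.0, "spearman": 1.0}
mean = weighted_mean(scores, weights)  # (0.8*2.0 + 0.6*1.0) / 3.0
```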

### Badges

<table><thead><tr><th width="214">Icon</th><th>Description</th></tr></thead><tbody><tr><td> <img src="/files/MeukIcHEJPVdRjxycNUc" alt="" data-size="original"></td><td>The participant has won the crunch.</td></tr><tr><td> <img src="/files/nmUrhv3eaSUGF9XH3Qqb" alt=""></td><td>The participant has earned a certificate.</td></tr><tr><td> <img src="/files/qJX4ERAXE792LcH2Plik" alt=""></td><td>The participant has received a prize.</td></tr><tr><td><img src="/files/wn0WHRUH85f4m0m1WhB0" alt="" data-size="original"></td><td>The participant only submitted a Quickstarter and is disqualified from receiving any rewards.</td></tr><tr><td><img src="/files/epsn6PFWkXQy05YPiG1r" alt="" data-size="original"></td><td>The <a href="/pages/NFmTIM0z2lNwBkqO1Pym#prediction">prediction</a> is too close to another<br>(<a href="/pages/qKt1yiZBJIfA0sn4F0mj">grouped by same user or same team, if correlation > 95%*</a>).</td></tr><tr><td><img src="/files/yN1LygV6GOCoMwgfB0rI" alt="" data-size="original"></td><td>The code is not deterministic and is not eligible for any rewards.<br>This is tested by running the infer function twice (on the same data) and checking whether the predictions are identical.</td></tr><tr><td><img src="/files/AjbWmXjQuJ1skL4cldVr" alt="" data-size="original"></td><td>The score is not within the allowed range.<br>Hover over the badge to see which limit(s) the model did not meet.</td></tr><tr><td><img src="/files/5cjzX32m9yzs0Q2u7UL8" alt="" data-size="original"></td><td>Only the <a href="/pages/FTjE9yWXYQ2w1RT1Em1w#leaders">team leader</a> is considered for ranking on the leaderboard.</td></tr></tbody></table>

## Participant

If the crunch allows more than one model, the model name is displayed right after the participant name. Otherwise, only the participant name is displayed.

### Only one Model on the Leaderboard

Some competitions (like Mid+One) allow more than one model, but only ONE can be displayed on the leaderboard at any given time.

The selection can be made during the [Submission Phase](/other/glossary.md#submission-phase). After that, the "Use this one" button will no longer be available.

<figure><img src="/files/KEfUjHCmpJIL2ksF0FtE" alt=""><figcaption><p>The model will be displayed on the leaderboard and will be eligible for rewards.</p></figcaption></figure>

<figure><img src="/files/N6nzp6anwKpvE2ENzLhc" alt=""><figcaption><p>The model will NOT be displayed on the leaderboard and will NOT be eligible for rewards.</p></figcaption></figure>

## Ties

Some competitions (such as Broad #2) allow ties when models achieve the same scores, meaning they receive the same rank.

The rank assigned to all tied models is the first rank of the tie.

The reward is distributed equally among the tied models: the prizes for the tied ranks are summed, then divided by the number of models in the tie.

#### Example

For a prize pool that rewards the top 10 models:

* If the <mark style="color:blue;">1st</mark>, <mark style="color:purple;">2nd</mark>, and <mark style="color:orange;">3rd</mark> place models are tied, they will all be ranked <mark style="color:blue;">1st</mark>.\
  The next model will be ranked <mark style="color:green;">4th</mark>.\
  Their total reward will be: (<mark style="color:blue;">1st place prize</mark> + <mark style="color:purple;">2nd place prize</mark> + <mark style="color:orange;">3rd place prize</mark>) / 3.
* If the <mark style="color:blue;">8th</mark>, <mark style="color:purple;">9th</mark>, <mark style="color:orange;">10th</mark>, <mark style="color:red;">11th</mark> and <mark style="color:yellow;">12th</mark> place models are tied, they will all be ranked <mark style="color:blue;">8th</mark>.\
  The next model will be ranked <mark style="color:green;">13th</mark>.\
  Their total reward will be: (<mark style="color:blue;">8th place prize</mark> + <mark style="color:purple;">9th place prize</mark> + <mark style="color:orange;">10th place prize</mark>) / 5.
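The ranking and reward-splitting rules above can be sketched as follows (an illustrative sketch, not the official implementation; the function names are made up):

```python
from collections import Counter


def rank_with_ties(scores: dict) -> dict:
    """Assign ranks where tied models all receive the first rank of the tie."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    ranks = {}
    position = 1
    i = 0
    while i < len(ranked):
        # Find the span of models sharing the same score.
        j = i
        while j < len(ranked) and ranked[j][1] == ranked[i][1]:
            j += 1
        for name, _ in ranked[i:j]:
            ranks[name] = position  # every tied model gets the tie's first rank
        position += j - i  # the next model skips past the whole tie
        i = j
    return ranks


def split_tied_rewards(ranks: dict, prizes: dict) -> dict:
    """Each tied model gets the sum of the tied ranks' prizes, divided by the tie size."""
    tie_sizes = Counter(ranks.values())
    rewards = {}
    for name, rank in ranks.items():
        size = tie_sizes[rank]
        pool = sum(prizes.get(r, 0) for r in range(rank, rank + size))
        rewards[name] = pool / size
    return rewards
```

For the first example above, `rank_with_ties` maps the three tied models to rank 1 and the next model to rank 4, and `split_tied_rewards` gives each tied model one third of the combined 1st, 2nd, and 3rd place prizes.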


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.crunchdao.com/competitions/leaderboard.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.
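For example, such a request could be built with the Python standard library (the helper name is illustrative; only the endpoint shape shown above is taken from this page):

```python
from urllib.parse import urlencode


def build_ask_url(page_url: str, question: str) -> str:
    """URL-encode the question into the page's `ask` query parameter."""
    return f"{page_url}?{urlencode({'ask': question})}"


url = build_ask_url(
    "https://docs.crunchdao.com/competitions/leaderboard.md",
    "How often is the leaderboard updated during the Out-of-Sample Phase?",
)
# Sending the request (requires network access):
# from urllib.request import urlopen
# with urlopen(url) as response:
#     print(response.read().decode())
```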

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
