
DataCrunch Competition



Overview

DataCrunch uses the quantitative research of the CrunchDAO to manage its systematic market-neutral portfolio. DataCrunch built a dataset covering thousands of publicly traded U.S. companies.

The long-term strategic goal of the fund is capital appreciation by capturing idiosyncratic return at low volatility.

In order to achieve this goal, DataCrunch needs the community to assess the relative performance of all assets in a subset of the universe. In other words, DataCrunch expects your model to rank the constituents of its investment universe.

Prize

Rewards are split across targets as follows. Each target represents an investment horizon and can be predicted using the DataCrunch dataset. Rewards are distributed every month based on crunchers' performance:

  • 60,000 $USDC yearly on target_b, plus a $10k bonus for the cumulative alpha target.

  • 20,000 $USDC yearly on target_g

  • 20,000 $USDC yearly on target_r

  • 10,000 $USDC yearly on target_w

Weekly Crunches

Every week has two phases:

  • The Submission Phase: from Friday at 8 PM UTC to Tuesday at 12 PM UTC, the system releases an additional moon. Competitors can submit their code or model.

  • The Out-Of-Sample Phase: the models are run on the Out-Of-Sample data (the live data). The score for each target is published once it is resolved against live market data, and rewards are distributed from these scores.

Data

Each row of the dataset describes a stock on a given date.

The dataset is composed of three files: X_train, y_train, and X_test.
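For orientation, a minimal sketch of loading the three files with pandas, assuming local parquet copies named after the endpoints listed at the bottom of this page:

```python
import pandas as pd

# Parquet is the supported format; the CSV endpoints are deprecated.
X_train = pd.read_parquet("data/X_train.parquet")
y_train = pd.read_parquet("data/y_train.parquet")
X_test = pd.read_parquet("data/X_test.parquet")

# Each row describes one stock (id) at one date (moon).
print(X_train.shape, y_train.shape, X_test.shape)
```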

X_train

  • moon: A sequentially increasing integer representing a date. Time between subsequent dates is constant, denoting a weekly fixed frequency at which the data is sampled.

  • id: A unique identifier representing a stock at a given Moon. Note that the same asset has a different id in different Moons.

  • Feature_Industry: the industry to which a stock belongs at a given moon.

  • (gordon_Feature_1, …, dolly_Feature_30): anonymised features that describe the state of assets on a given date. They are grouped into several families, or ways of assessing the relative performance of each stock on a given moon.

Note: All features have the string "Feature" in their name, prefixed by a code name for the feature family.
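Because every feature name embeds its family code, the families can be recovered directly from the column names. A small sketch (prefixes like gordon and dolly are the anonymised code names mentioned above):

```python
from collections import defaultdict

# Group feature columns by family prefix, e.g. "gordon_Feature_1" -> "gordon".
families = defaultdict(list)
for col in X_train.columns:
    if "Feature" in col and col != "Feature_Industry":
        families[col.split("_Feature")[0]].append(col)

for family, cols in sorted(families.items()):
    print(family, len(cols))
```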

y_train

  • moon: Same as in X_train.

  • id: Same as in X_train.

  • (target_w, …, target_b): the targets that may help you build your models. target_w, target_r, target_g, and target_b refer to 7-, 28-, 63-, and 91-day compounded returns respectively.

X_test - y_test

X_test and y_test have the same structure as X_train and y_train but comprise only 13 moons. These files are used to simulate the submission process locally via crunch.test() (within the code) or crunch test (via the CLI). The aim is to help participants debug their code and achieve successful submissions. A successful local test usually means no errors during execution on the submission platform. These files contain the 13 moons on which the longest target (target_b) is not yet resolved; the missing values for each target were replaced by -1.
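For reference, a minimal sketch of how this local test is invoked; project setup and authentication are omitted here, and exact options may differ between CLI versions:

```python
# Within the notebook or script that defines your entry points:
import crunch

crunch.test()  # replays the submission process locally on X_test / y_test

# Equivalent from a terminal, inside your project directory:
#   $ crunch test
```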

Note: the features are split into two groups: the legacy features and the v2 features, which are suffixed with "_v2".
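A quick sketch of separating the two groups by that suffix, assuming X_train is already loaded as a pandas DataFrame:

```python
# Split feature columns into legacy and v2 groups by the "_v2" suffix.
feature_cols = [c for c in X_train.columns if "Feature" in c]
v2_features = [c for c in feature_cols if c.endswith("_v2")]
legacy_features = [c for c in feature_cols if not c.endswith("_v2")]
print(len(legacy_features), "legacy features,", len(v2_features), "v2 features")
```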

The Performance Metric

The infer function from your code will return your predictions.

A Spearman rank correlation is then computed between your predictions and the live targets.
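As a rough illustration, here is a per-moon Spearman score computed with scipy; the merge keys and per-moon grouping are assumptions based on the data description above, not the platform's scoring code:

```python
import pandas as pd
from scipy.stats import spearmanr

def score_target(predictions: pd.DataFrame, resolved: pd.DataFrame, target: str) -> pd.Series:
    """Spearman rank correlation per moon between predictions and resolved targets.

    Assumes both frames carry 'moon', 'id' and the target column; this mirrors
    the metric described above but is not the official implementation.
    """
    merged = predictions.merge(resolved, on=["moon", "id"], suffixes=("_pred", "_true"))
    return merged.groupby("moon").apply(
        lambda g: spearmanr(g[target + "_pred"], g[target + "_true"])[0]
    )
```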

Reward Scheme

All rewards are computed on the leaderboards.

The Historical Rewards are the sum of every payout you have received from the DataCrunch competition.

The Projected Rewards are the current estimated rewards yet to be distributed.

Payouts calculation

Payouts are computed based on the rank of your predictions for each target. The higher the Spearman rank correlation between your predictions and the market realisation, the higher your rank on the leaderboard.

The payouts are distributed according to an exponential function of your position on the leaderboards. As shown in the graph below, the top 20 crunchers earn approximately 30% of the total rewards.

[Figure: Cumulative distribution of rewards (2023-03-03 leaderboard)]
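To make the shape concrete, here is a hypothetical exponential weighting over leaderboard positions. The decay constant is invented for illustration (chosen so that the top 20 of 500 positions receive roughly 30%); the real payout curve is defined by the platform.

```python
import numpy as np

def payout_weights(n_positions: int, decay: float = 0.018) -> np.ndarray:
    """Hypothetical exponential payout weights by leaderboard rank (0 = best).

    The actual curve is set by the platform; this only illustrates the shape
    described above, not the real payout formula.
    """
    ranks = np.arange(n_positions)
    weights = np.exp(-decay * ranks)
    return weights / weights.sum()

w = payout_weights(500)
print(f"Top 20 share: {w[:20].sum():.0%}")  # ~30% with this made-up decay
```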

Computing Resources

Competitors will be allocated a specified quantity of computing resources within the cloud environment for the execution of their code.

During the SUBMISSION phase, you are entitled to 10 hours of GPU or CPU computing time per week; during the OOS phase, this allocation is increased by 10% to absorb slower deployments by the system.

Quickstarter Notebook

A Quickstarter notebook is available below so you can get familiar with what is expected from you.

[Google Colab: Quickstarter notebook]
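If you prefer starting from a blank file, the submission revolves around two entry points: a training step and the infer function described in the metric section. The sketch below is illustrative only; the authoritative signatures and parameter names are in the Quickstarter notebook, and the model choice here (a plain linear regression on a single target) is a placeholder.

```python
import os

import joblib
import pandas as pd
from sklearn.linear_model import LinearRegression

def train(X_train: pd.DataFrame, y_train: pd.DataFrame, model_directory_path: str) -> None:
    # Placeholder model: fit a linear regression on the numeric features
    # for one target. A real submission would handle all four targets.
    features = [c for c in X_train.columns if "Feature" in c and c != "Feature_Industry"]
    model = LinearRegression()
    model.fit(X_train[features], y_train["target_w"])
    joblib.dump(model, os.path.join(model_directory_path, "model.joblib"))

def infer(X_test: pd.DataFrame, model_directory_path: str) -> pd.DataFrame:
    # infer must return your predictions; since scoring uses a Spearman rank
    # correlation, only the relative ordering of the scores matters.
    model = joblib.load(os.path.join(model_directory_path, "model.joblib"))
    features = [c for c in X_test.columns if "Feature" in c and c != "Feature_Industry"]
    prediction = X_test[["moon", "id"]].copy()
    prediction["target_w"] = model.predict(X_test[features])
    return prediction
```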

Legacy Endpoints

The old data format is still available on the legacy endpoint, but will be removed at some point in the future. We encourage people who still rely on this data to migrate to the new submission format.

Russell 3000:

| Name | Parquet | CSV (deprecated) |
| --- | --- | --- |
| X_train | /data/X_train.parquet | /data/X_train.csv |
| y_train | /data/y_train.parquet | /data/y_train.csv |
| X_test | /data/X_test.parquet | /data/X_test.csv |
| example_submission | /data/example_submission.parquet | /data/example_submission.csv |