mirror of https://github.com/01-edu/public.git
# Financial strategies on the SP500
- Is the structure of the project like the one presented in the `Project repository structure` in the subject?
- Does the README file summarize how to run the code and explain the global approach?
- Does the environment contain all the libraries used, with their versions, that are necessary to run the code?
- Do the text files explain the chosen model methodology?
## Data processing and feature engineering
- Is the data split into a train set and a test set?
- Is the last day of the train set D and the first day of the test set D+n with n > 0? Splitting without considering the time-series structure is wrong.
- Is there no leakage? Unfortunately, there's no automated way to check whether the dataset is leaked. This step is validated if the features of date D are built as follows:

| Index | Features | Target |
|---|---|---|
| Day D-1 | Features until D-1 23:59 | return(D, D+1) |
| Day D | Features until D 23:59 | return(D+1, D+2) |
| Day D+1 | Features until D+1 23:59 | return(D+2, D+3) |

- Has the data been grouped by ticker before computing the features?
- Has the data been grouped by ticker before computing the future returns (target)?
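The feature/target alignment in the table above can be sketched with pandas. Everything here is an illustrative assumption, not the project's actual data: the frame `df`, the column names, and the toy prices are made up; the point is the per-ticker grouping and the two-row target shift.

```python
import pandas as pd

# Toy long-format frame: one row per (date, ticker); prices are made up.
dates = pd.bdate_range("2024-01-01", periods=6)
df = pd.DataFrame({
    "date": list(dates) * 2,
    "ticker": ["AAA"] * 6 + ["BBB"] * 6,
    "close": [10, 11, 12, 13, 14, 15, 20, 22, 24, 26, 28, 30],
}).sort_values(["ticker", "date"]).reset_index(drop=True)

# Feature: the previous day's return, computed per ticker so values
# never bleed from one ticker into another.
df["ret_1d"] = df.groupby("ticker")["close"].pct_change()

# Target for day D: the return over (D+1, D+2), i.e. the one-day return
# shifted back by two rows -- the shift is also done per ticker.
one_day_ret = df.groupby("ticker")["close"].pct_change()
df["target"] = one_day_ret.groupby(df["ticker"]).shift(-2)

# Time-based split: train ends at date D, test starts strictly later.
split_date = dates[3]
train = df[df["date"] <= split_date]
test = df[df["date"] > split_date]
assert train["date"].max() < test["date"].min()
```

Shifting the target inside each ticker group matters: a plain `shift(-2)` on the sorted frame would pull the first rows of the next ticker into the last rows of the previous one.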
## Machine Learning pipeline
### Cross-Validation
- Does the CV contain at least 10 folds in total?
- Do all train folds have more than 2 years of history? If you use a time-series split, checking that the first fold has more than 2 years of history is enough.
- Can you confirm that the last validation set of the train data does not overlap with the test data?
- Are all the data folds split by date? A fold should not contain repeated data for the same date and ticker.
- Is there a plot showing your cross-validation? As usual, all plots should have labeled axes and a title. If you chose a Time Series Split, the plot should look like this:
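One way to satisfy these fold checks is scikit-learn's `TimeSeriesSplit`. The date range, fold count, and `test_size` below are illustrative assumptions; the assertions mirror the checklist items (no temporal overlap, first train fold longer than 2 years).

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import TimeSeriesSplit

# Toy daily frame: roughly 14 years of business days (made-up range).
dates = pd.bdate_range("2010-01-01", "2023-12-29")
X = pd.DataFrame({"date": dates, "feature": np.arange(len(dates))})

# 10 folds; an explicit test_size of ~1 trading year keeps the first
# (smallest) train fold long enough to cover more than 2 years.
tscv = TimeSeriesSplit(n_splits=10, test_size=252)

for train_idx, val_idx in tscv.split(X):
    train_dates = X["date"].iloc[train_idx]
    val_dates = X["date"].iloc[val_idx]
    # Validation always comes strictly after training: no temporal overlap.
    assert train_dates.max() < val_dates.min()

# With TimeSeriesSplit the first fold has the shortest history, so
# checking it alone is enough for the "more than 2 years" requirement.
first_train_idx, _ = next(iter(tscv.split(X)))
first_train_dates = X["date"].iloc[first_train_idx]
assert first_train_dates.max() - first_train_dates.min() > pd.Timedelta(days=2 * 365)
```

Note that `TimeSeriesSplit` operates on row indices, so the frame must contain one row per date (or be split on unique dates) for "split by date" to hold.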
### Model Selection
- Has the test set been kept out of both training the model and selecting the model?
- Is the selected model saved in a `pkl` file and described in a `txt` file?
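Persisting the selection could look like the sketch below. The estimator, hyperparameters, and file names are illustrative assumptions; the audit only requires that a `pkl` file and a `txt` description exist.

```python
import pickle
from sklearn.linear_model import Ridge

# Hypothetical winner of the model-selection step; in the real project
# this would already be fitted on the train set during cross-validation.
model = Ridge(alpha=0.5)

# Save the model itself as a pkl file...
with open("selected_model.pkl", "wb") as f:
    pickle.dump(model, f)

# ...and a human-readable description as a txt file.
with open("selected_model.txt", "w") as f:
    f.write("Ridge regression, alpha=0.5, selected by 10-fold time-series CV.\n")

# Round-trip check: the reloaded model carries the same hyperparameters.
with open("selected_model.pkl", "rb") as f:
    restored = pickle.load(f)
assert restored.get_params()["alpha"] == 0.5
```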
### Selected model
- Are the ML metrics computed on the train set aggregated (sum or median)?
- Are the ML metrics saved in a `csv` file?
- Are the top 10 most important features per fold saved in `top_10_feature_importance.csv`?
- Does `metric_train.png` show a plot similar to the one below?

_Note that this can also be done on the test set, provided the test set hasn't been used to select the pipeline._
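Aggregating per-fold metrics and exporting the two CSVs could be sketched as follows. The metric values, feature names, and importances are randomly generated placeholders, not real results, and the metrics file name is an assumption.

```python
import numpy as np
import pandas as pd

# Placeholder per-fold RMSEs, as if collected during the 10-fold CV.
fold_metrics = pd.DataFrame({
    "fold": range(10),
    "train_rmse": np.linspace(0.021, 0.019, 10),
    "val_rmse": np.linspace(0.026, 0.024, 10),
})

# Persist the per-fold metrics and aggregate across folds
# (the median is robust to one bad fold).
fold_metrics.to_csv("ml_metrics_train.csv", index=False)
median_rmse = fold_metrics[["train_rmse", "val_rmse"]].median()

# Top 10 features per fold, e.g. from a tree model's
# feature_importances_; here the importances are random stand-ins.
rng = np.random.default_rng(0)
features = [f"feature_{i}" for i in range(20)]
rows = []
for fold in range(10):
    importances = pd.Series(rng.dirichlet(np.ones(len(features))), index=features)
    rows.append(
        importances.nlargest(10)
        .rename_axis("feature")
        .reset_index(name="importance")
        .assign(fold=fold)
    )
top_10 = pd.concat(rows, ignore_index=True)
top_10.to_csv("top_10_feature_importance.csv", index=False)
```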