
feat(training): updated expected result for ex05/q1

pull/2504/head
nprimo authored 8 months ago, committed by Niccolò Primo
commit 1f9c0db5ad

1 changed file: subjects/ai/training/audit/README.md (84 changed lines)

@@ -128,70 +128,66 @@ Having a 99% ROC AUC is not usual. The data set we used is easy to classify. On
 ###### For question 1, are the scores outputted close to the scores below? Some of the algorithms use random steps (random sampling used by the `RandomForest`). I used `random_state = 43` for the Random Forest, the Decision Tree and the Gradient Boosting.
 ```console
-# Linear regression
+~~~
+Linear Regression
 TRAIN
-r2 on the train set: 0.34823544284172625
-MAE on the train set: 0.533092001261455
-MSE on the train set: 0.5273648371379568
+r2 score: 0.6054131599242079
+MAE: 0.5330920012614552
+MSE: 0.5273648371379568
 TEST
-r2 on the test set: 0.3551785428138914
-MAE on the test set: 0.5196420310323713
-MSE on the test set: 0.49761195027083804
-# SVM
+r2 score: 0.6128959462132963
+MAE: 0.5196420310323714
+MSE: 0.49761195027083804
+~~~
+SVM
 TRAIN
-r2 on the train set: 0.6462366150965996
-MAE on the train set: 0.38356451633259875
-MSE on the train set: 0.33464478671339165
+r2 score: 0.749610858293664
+MAE: 0.3835645163325988
+MSE: 0.3346447867133917
 TEST
-r2 on the test set: 0.6162644671183826
-MAE on the test set: 0.3897680598426786
-MSE on the test set: 0.3477101776543003
-# Decision Tree
+r2 score: 0.7295080649899683
+MAE: 0.38976805984267887
+MSE: 0.3477101776543005
+~~~
+Decision Tree
 TRAIN
-r2 on the train set: 0.9999999999999488
-MAE on the train set: 1.3685733933909677e-08
-MSE on the train set: 6.842866883530944e-14
+r2 score: 1.0
+MAE: 4.221907539810565e-17
+MSE: 9.24499456646287e-32
 TEST
-r2 on the test set: 0.6263651902480918
-MAE on the test set: 0.4383758696244002
-MSE on the test set: 0.4727017198871596
-# Random Forest
+r2 score: 0.6228217144931267
+MAE: 0.4403051356589147
+MSE: 0.4848526395290697
+~~~
+Random Forest
 TRAIN
-r2 on the train set: 0.9705418471542886
-MAE on the train set: 0.11983836612191189
-MSE on the train set: 0.034538356420577995
+r2 score: 0.9741263135396302
+MAE: 0.12000198560508221
+MSE: 0.03458015083247723
 TEST
-r2 on the test set: 0.7504673649554309
-MAE on the test set: 0.31889891600404635
-MSE on the test set: 0.24096164834441108
-# Gradient Boosting
+r2 score: 0.8119778189909694
+MAE: 0.3194169859011629
+MSE: 0.24169750554364758
+~~~
+Gradient Boosting
 TRAIN
-r2 on the train set: 0.7395782392433273
-MAE on the train set: 0.35656543036682264
-MSE on the train set: 0.26167490389525294
+r2 score: 0.8042086499063386
+MAE: 0.35656543036682264
+MSE: 0.26167490389525294
 TEST
-r2 on the test set: 0.7157456298013534
-MAE on the test set: 0.36455447680396397
-MSE on the test set: 0.27058170064218096
+r2 score: 0.7895081234643192
+MAE: 0.36455447680396397
+MSE: 0.27058170064218096
 ```
 It is important to notice that the Decision Tree overfits very easily. It learns easily the training data but is not able to extrapolate on the test set. This algorithm is not used a lot because of its overfitting ability.
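For reviewers who want to sanity-check scores of this kind, here is a minimal sketch of how such a table could be produced. The dataset (California housing), the 80/20 split, and the standard scaling are assumptions for illustration, not the exercise's specification; only `random_state = 43` for the tree-based models comes from the audit text above.

```python
# Illustrative only: dataset, split and scaling are assumptions, not the
# exercise's reference solution. random_state = 43 comes from the audit text.
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

X, y = fetch_california_housing(return_X_y=True)  # assumed dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=43)         # assumed split

# Scale features on the train set only, then apply the same scaling to the test set.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

models = {
    "Linear Regression": LinearRegression(),
    "SVM": SVR(),
    "Decision Tree": DecisionTreeRegressor(random_state=43),
    "Random Forest": RandomForestRegressor(random_state=43),
    "Gradient Boosting": GradientBoostingRegressor(random_state=43),
}

# Fit each model and print r2 / MAE / MSE on the train and test sets,
# in the same layout as the expected output above.
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"~~~\n{name}")
    for label, X_, y_ in (("TRAIN", X_train, y_train), ("TEST", X_test, y_test)):
        pred = model.predict(X_)
        print(label)
        print(f"r2 score: {r2_score(y_, pred)}")
        print(f"MAE: {mean_absolute_error(y_, pred)}")
        print(f"MSE: {mean_squared_error(y_, pred)}")
```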
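The overfitting remark is visible in the numbers themselves: the Decision Tree scores r2 = 1.0 on the train set but only about 0.62 on the test set. A hedged sketch of how to see, and reduce, that gap by limiting tree depth (again assuming the same dataset and split; the depth value is arbitrary):

```python
# Illustrative sketch: an unconstrained tree memorises the training set, while
# capping max_depth trades some train score for a smaller train/test gap.
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

X, y = fetch_california_housing(return_X_y=True)  # assumed dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=43)          # assumed split

for depth in (None, 8):  # None = unconstrained; 8 is an arbitrary example cap
    tree = DecisionTreeRegressor(max_depth=depth, random_state=43).fit(X_train, y_train)
    print(f"max_depth={depth}: "
          f"train r2={r2_score(y_train, tree.predict(X_train)):.3f}, "
          f"test r2={r2_score(y_test, tree.predict(X_test)):.3f}")
```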
