Week 5 – Monday

Last week, I used the R-squared metric for cross-validation, which measures the proportion of variance in the dependent variable that is predictable from the predictors. Today, I explored evaluating my models with alternative scoring metrics and read about their differences. Notably, I learned that when no scoring metric is specified, cross_val_score falls back to the estimator's own score method, which for regressors is R-squared; to score each fold with the Mean Squared Error (MSE), you pass scoring='neg_mean_squared_error', and scikit-learn reports the negated value so that higher scores always mean better models. Because MSE squares the residuals, it is particularly sensitive to outliers. I also gained an understanding of the Mean Absolute Error (MAE) metric, which is preferable when all errors should be weighted equally regardless of their magnitude.
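
To make this concrete, here is a minimal sketch of how the three scorers can be compared side by side with cross_val_score; the linear model and synthetic dataset are illustrative placeholders, not the models or data from my actual project.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Placeholder data and model for illustration only
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)
model = LinearRegression()

# scoring=None falls back to the estimator's .score(), i.e. R-squared for regressors
r2_scores = cross_val_score(model, X, y, cv=5)

# Error metrics are negated so that "higher is better" holds for every scorer
neg_mse = cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error")
neg_mae = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")

print("R-squared per fold:", r2_scores)
print("MSE per fold:", -neg_mse)  # flip the sign back for reporting
print("MAE per fold:", -neg_mae)
```

Comparing the per-fold MSE and MAE values is a quick way to spot outlier-driven folds: a fold where MSE is disproportionately larger than MAE suggests a few large errors are dominating.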
