
Now, let us use cross-validation to obtain a reliable score for each model and to check that it is neither overfitting nor underfitting. Based on these cross-validation results, we will select the model whose hyperparameters to fine-tune.

**NOTE:**

- If the cross-validation scores for a performance measure (say, accuracy) do not vary significantly across the k folds, we can say that the model is not overfitting.
- If the cross-validation scores for a performance measure (say, accuracy) are not very low across the k folds, we can say that the model is not underfitting.

We will perform **k-fold cross-validation.**
We will randomly split the training set into 3 distinct subsets called folds (**cv=3**). Since cross-validation is a compute-intensive and time-consuming process, we are limiting 'cv' (the number of folds) to 3 instead of the more usual 10.
We will then train and evaluate each model 3 times, picking a different fold for evaluation each time and training on the other 2 folds.
The result will be an array containing the 3 evaluation scores for each of the measures - **accuracy, precision, recall, F1 score.**
We will use the **cross_val_score()** function to calculate **accuracy.**
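The steps above can be sketched as follows. The dataset and the choice of `SGDClassifier` are assumptions for illustration; substitute your own training set and candidate model.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score

# Stand-in training data (replace with the actual training set).
X_train, y_train = make_classification(n_samples=300, random_state=42)

# A hypothetical candidate model.
model = SGDClassifier(random_state=42)

# cv=3 splits the training set into 3 folds; each fold serves once
# as the evaluation set while the other 2 are used for training.
scores = cross_val_score(model, X_train, y_train, cv=3, scoring="accuracy")

print(scores)                        # one accuracy value per fold
print(scores.mean(), scores.std())   # low std suggests no overfitting
```

A small standard deviation across the folds is the "scores do not vary significantly" check from the note above, and a reasonable mean is the "scores are not very low" check.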

But accuracy is generally not the preferred performance measure for classifiers, especially when you are dealing with skewed datasets. (A dataset is said to be skewed when some classes are much more frequent than others.)

Even if the current training dataset is not skewed, the future (live) test dataset on which the model runs may be. To guard against this, let us also calculate Precision, Recall, and F1 score for the models.
We will use the **cross_val_predict()** function to build a confusion matrix, from which we can calculate **Precision, Recall, and F1 score.**
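A minimal sketch of this step, under the same assumptions as before (stand-in data and a hypothetical `SGDClassifier`):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import (confusion_matrix, precision_score,
                             recall_score, f1_score)

# Stand-in training data (replace with the actual training set).
X_train, y_train = make_classification(n_samples=300, random_state=42)
model = SGDClassifier(random_state=42)

# Unlike cross_val_score, cross_val_predict returns the out-of-fold
# prediction for every training instance: each prediction is made by
# a model that never saw that instance during training.
y_pred = cross_val_predict(model, X_train, y_train, cv=3)

cm = confusion_matrix(y_train, y_pred)
precision = precision_score(y_train, y_pred)
recall = recall_score(y_train, y_pred)
f1 = f1_score(y_train, y_pred)

print(cm)
print(precision, recall, f1)
```

Because every prediction is out-of-fold, the confusion matrix and the scores derived from it reflect generalization rather than memorization of the training set.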
