End to End ML Project - Fashion MNIST - Selecting the Model - Cross-Validation - Conclusion

You can print the various metrics for each model as follows:

print("=== Decision Tree === ")
display_scores(dec_tree_cv_scores)
print("dec_tree_cv_accuracy:", dec_tree_cv_accuracy)
print("dec_tree_cv_precision:", dec_tree_cv_precision)
print("dec_tree_cv_recall :", dec_tree_cv_recall )
print("dec_tree_cv_f1_score:", dec_tree_cv_f1_score)

print("=== SGD === ")
display_scores(sgd_cv_scores)
print("sgd_cv_accuracy: ", sgd_cv_accuracy);
print("sgd_cv_precision: ", sgd_cv_precision);
print("sgd_cv_recall: ", sgd_cv_recall);
print("sgd_cv_f1_score: ", sgd_cv_f1_score);

print("=== Softmax === ")
display_scores(log_cv_scores)
print("log_cv_accuracy:", log_cv_accuracy)
print("log_cv_precision:", log_cv_precision)
print("log_cv_recall:", log_cv_recall)
print("log_cv_f1_score:", log_cv_f1_score)

print("=== Random Forest === ")
display_scores(rnd_cv_scores)
print("rnd_cv_accuracy:", rnd_cv_accuracy)
print("rnd_cv_precision:", rnd_cv_precision)
print("rnd_cv_recall :", rnd_cv_recall )
print("rnd_cv_f1_score:", rnd_cv_f1_score)


print("=== Voting Classsifier: Softmax + RF === ")
display_scores(voting_cv_scores)
print("voting_cv_accuracy:", voting_cv_accuracy)
print("voting_cv_precision:", voting_cv_precision)
print("voting_cv_recall :", voting_cv_recall )
print("voting_cv_f1_score:", voting_cv_f1_score)

In one of the example runs, we got the following results:

=== Decision Tree === 
Score:  [0.78925 0.78965 0.7894 ]
Mean:  0.7894333333333333
SD:  0.00016499158227684292
dec_tree_cv_accuracy: 0.7894333333333333
dec_tree_cv_precision: 0.7894438101880717
dec_tree_cv_recall: 0.7894333333333333
dec_tree_cv_f1_score: 0.7894183081171982
=== SGD === 
Score:  [0.83695 0.83365 0.83575]
Mean:  0.8354499999999999
SD:  0.0013638181696985737
sgd_cv_precision: 0.8354137225088748
sgd_cv_recall: 0.83545
sgd_cv_f1_score: 0.8350392911124828
=== Softmax === 
Score:  [0.84905 0.84825 0.84395]
Mean:  0.8470833333333333
SD:  0.0022395436042987695
log_cv_accuracy: 0.8470833333333333
log_cv_precision: 0.8458694883855286
log_cv_recall: 0.8470833333333333
log_cv_f1_score: 0.846272242627941
=== Random Forest === 
Score:  [0.85155 0.84745 0.84585]
Mean:  0.8482833333333334
SD:  0.002400462918318523
rnd_cv_accuracy: 0.8482833333333334
rnd_cv_precision: 0.8482875291253137
rnd_cv_recall: 0.8482833333333333
rnd_cv_f1_score: 0.8452574403288833

=== Voting Classifier: Softmax + RF ===
Score:  [0.8676  0.86805 0.86445]
Mean:  0.8667000000000001
SD:  0.0016015617378046761
voting_cv_accuracy: 0.8667000000000001
voting_cv_precision: 0.8656638550701956
voting_cv_recall: 0.8667
voting_cv_f1_score: 0.8649796298749081

From the cross-validation results, we see that the Voting Classifier ensemble performs best (accuracy: 86.67%, standard deviation of accuracy: 0.0016, precision: 0.8657, recall: 0.8667, F1 score: 0.8650).
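
If you would rather compare the models programmatically than by reading the printout, a small sketch like the one below collects the cross-validation accuracies and picks the best model. It assumes the metric variables computed above are still in scope:

# Collect the cross-validation accuracies computed above and pick the best model.
cv_accuracies = {
    "Decision Tree": dec_tree_cv_accuracy,
    "SGD": sgd_cv_accuracy,
    "Softmax": log_cv_accuracy,
    "Random Forest": rnd_cv_accuracy,
    "Voting (Softmax + RF)": voting_cv_accuracy,
}
best_model = max(cv_accuracies, key=cv_accuracies.get)
print("Best model by cross-validation accuracy:", best_model)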

Since the Voting Classifier gives the best results in the cross-validation phase, we select it as our model and proceed to fine-tune it (hyperparameter tuning).
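
As a preview of the fine-tuning step, here is a minimal sketch using scikit-learn's GridSearchCV. The estimator names (softmax, rf), the parameter grid, and the LogisticRegression / RandomForestClassifier configuration inside the VotingClassifier are illustrative assumptions, not necessarily the exact setup used in this project:

from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Hypothetical voting ensemble of a softmax (multinomial logistic regression)
# classifier and a random forest.
voting_clf = VotingClassifier(
    estimators=[
        ("softmax", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(random_state=42)),
    ],
    voting="soft",
)

# Example hyperparameter grid; nested parameters use scikit-learn's
# "<estimator_name>__<parameter>" naming convention.
param_grid = {
    "rf__n_estimators": [100, 200],
    "rf__max_depth": [None, 20],
    "softmax__C": [0.1, 1.0, 10.0],
}

grid_search = GridSearchCV(voting_clf, param_grid, cv=3, scoring="accuracy")
# grid_search.fit(X_train, y_train)   # X_train, y_train come from the earlier steps
# print(grid_search.best_params_, grid_search.best_score_)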

