Project - Fashion MNIST


End to End ML Project - Fashion MNIST - Evaluating Final Model on Test Dataset

Since we already have our 'final' model from grid search (best_estimator_), let us evaluate it on the test dataset.

Since we performed grid search on the dimensionality-reduced training dataset X_train_reduced, we need to apply the same dimensionality reduction to the test dataset before we can use it for prediction.

INSTRUCTIONS

Please follow the steps below:

Store the best_estimator_ model that we got from grid search in a variable called final_model.

final_model = grid_search.<<your code comes here>>
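For reference, a completed version of this step would look like the sketch below (assuming grid_search is the fitted GridSearchCV object from the earlier grid search step):

# Sketch: best_estimator_ is the model that GridSearchCV refit on the full training data
final_model = grid_search.best_estimator_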

Import the various scoring functions from sklearn's metrics package:

from <<your code comes here>> import accuracy_score
from <<your code comes here>> import confusion_matrix
from <<your code comes here>> import precision_score, recall_score
from <<your code comes here>> import f1_score
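All four of these scoring helpers live in sklearn's metrics module, so the imports can also be written as a single sketch:

# Sketch: all of the metric functions come from sklearn.metrics
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.metrics import precision_score, recall_score, f1_score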

Remember, you have to use the pca object that was fitted on the training dataset during dimensionality reduction (please don't create a new instance of PCA), and only apply transform() on the test dataset (not fit_transform()).

Please apply transform() on X_test (using the pca object) and store the resulting dataset in the X_test_reduced variable.

X_test_reduced = pca.<<your code comes here>>(X_test)
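As a sketch, assuming pca is the PCA instance already fitted on the training dataset, this step becomes:

# Sketch: reuse the PCA fitted on X_train; do not call fit() or fit_transform() here
X_test_reduced = pca.transform(X_test)
print(X_test_reduced.shape)  # same number of rows as X_test, but fewer columns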

Perform the predictions on the X_test_reduced dataset using the final model, and store the result in the y_test_predict variable.

y_test_predict = final_model.<<your code comes here>>(X_test_reduced)
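Assuming final_model was set from best_estimator_ above, the prediction step is simply:

# Sketch: predict class labels for the reduced test set
y_test_predict = final_model.predict(X_test_reduced)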

Create the confusion matrix

confusion_matrix(y_test, <<your code comes here>>)
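A sketch of this step, with a note on how to read the output:

# Sketch: rows are actual classes, columns are predicted classes;
# large values on the diagonal mean most test images were classified correctly
confusion_matrix(y_test, y_test_predict)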

Calculate the various metric scores - accuracy, precision, recall and F1 score - using the actual and predicted values and the relevant functions, and store them in the respective variables final_accuracy, final_precision, final_recall and final_f1_score.

final_accuracy = <<your code comes here>>(y_test, <<your code comes here>>)
final_precision = <<your code comes here>>(y_test, <<your code comes here>>, average='weighted')
final_recall = <<your code comes here>>(y_test, <<your code comes here>>, average='weighted')
final_f1_score = <<your code comes here>>(y_test, <<your code comes here>>, average='weighted')
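A completed sketch of these calculations, assuming y_test and y_test_predict from the steps above (average='weighted' is needed because Fashion MNIST is a multiclass problem):

# Sketch: weighted averaging combines the per-class scores of the 10 Fashion MNIST classes
final_accuracy = accuracy_score(y_test, y_test_predict)
final_precision = precision_score(y_test, y_test_predict, average='weighted')
final_recall = recall_score(y_test, y_test_predict, average='weighted')
final_f1_score = f1_score(y_test, y_test_predict, average='weighted')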

Print the values of final_accuracy, final_precision, final_recall and final_f1_score
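For example:

print("Final accuracy :", final_accuracy)
print("Final precision:", final_precision)
print("Final recall   :", final_recall)
print("Final F1 score :", final_f1_score)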

Finally, check with a sample value whether the predictions were correct:

y_test[0]

y_test_predict[0]

showImage(X_test[0])
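showImage() is assumed to be the helper function defined earlier in the project. If you need to recreate it, a minimal sketch (assuming each row of X_test is a flattened 28x28 grayscale image stored as a NumPy array) would be:

import matplotlib.pyplot as plt

def showImage(data):
    # Sketch: reshape the flattened 784-pixel row back to 28x28 and display it
    plt.imshow(data.reshape(28, 28), cmap="binary")
    plt.axis("off")
    plt.show()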