Let us now view how well our autoencoder is trained and how faithful the reconstructed images are.
Use the show_reconstructions function and pass stacked_ae and X_test as input arguments. This displays 5 ground-truth images alongside their corresponding reconstructed images.
show_reconstructions(stacked_ae, X_test)
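show_reconstructions is a helper defined earlier in this exercise. For reference, a minimal sketch of such a helper (assuming the model exposes a Keras-style predict method and the images are 2D grayscale arrays; the exact version used here may differ) could look like:

```python
import matplotlib.pyplot as plt

def show_reconstructions(model, images, n_images=5):
    """Plot n_images originals (top row) above their reconstructions (bottom row)."""
    reconstructions = model.predict(images[:n_images])
    plt.figure(figsize=(n_images * 1.5, 3))
    for i in range(n_images):
        # Original image on the top row
        plt.subplot(2, n_images, 1 + i)
        plt.imshow(images[i], cmap="binary")
        plt.axis("off")
        # Reconstruction directly below it
        plt.subplot(2, n_images, 1 + n_images + i)
        plt.imshow(reconstructions[i], cmap="binary")
        plt.axis("off")
```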
Let us view the rounded accuracy of X_test using stacked_ae.evaluate.
<< your code comes here >>(X_test, X_test)
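The rounded accuracy reported by evaluate compares each reconstructed pixel, rounded to 0 or 1, against the similarly rounded original pixel. A minimal NumPy sketch of such a metric (the metric actually compiled into stacked_ae may be the Keras binary-accuracy variant):

```python
import numpy as np

def rounded_accuracy(y_true, y_pred):
    # Fraction of pixels whose rounded reconstruction matches the rounded target
    return np.mean(np.round(y_true) == np.round(y_pred))
```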
Let us view the class-wise clusters for the validation data as predicted by our model stacked_ae. Since we can't display multi-dimensional data directly, we shall use t-SNE dimensionality reduction to project it down to 2D.
We shall use predict of stacked_encoder on X_valid to get the compressed data of the validation data.
Use fit_transform of TSNE() to get the 2D representation of the compressed validation data and scale the data.
Now plot this data with colormaps for each class.
Use the following code to get the 2D representation of the compressed validation data.
import time
import numpy as np
from sklearn.manifold import TSNE

np.random.seed(42)

start = time.time()
# Compress the validation set using the trained encoder
X_valid_compressed = stacked_encoder.predict(X_valid)
# Project the compressed representations down to 2D with t-SNE
tsne = TSNE()
X_valid_2D = tsne.fit_transform(X_valid_compressed)
# Scale the 2D coordinates to the [0, 1] range
X_valid_2D = (X_valid_2D - X_valid_2D.min()) / (X_valid_2D.max() - X_valid_2D.min())
end = time.time()
print("Time of execution:", round(end - start, 2), "seconds")
Use the following code to display the class-wise clusters.
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt

plt.figure(figsize=(10, 8))
cmap = plt.cm.tab10
# Scatter the 2D points, colored by their class labels
plt.scatter(X_valid_2D[:, 0], X_valid_2D[:, 1], c=y_valid, s=10, cmap=cmap)
# Overlay thumbnail images, keeping them far enough apart to avoid overlap
image_positions = np.array([[1., 1.]])
for index, position in enumerate(X_valid_2D):
    dist = np.sum((position - image_positions) ** 2, axis=1)
    if np.min(dist) > 0.02:  # if far enough from other images
        image_positions = np.r_[image_positions, [position]]
        imagebox = mpl.offsetbox.AnnotationBbox(
            mpl.offsetbox.OffsetImage(X_valid[index], cmap="binary"),
            position, bboxprops={"edgecolor": cmap(y_valid[index]), "lw": 2})
        plt.gca().add_artist(imagebox)
plt.axis("off")
plt.show()