Let us first split the test data so that part of it can be used for validation. After that, let us look at the shapes of the train, validation, and test datasets.
The test data contains 50 samples. Let the first 25 samples form the validation data, and the remaining 25 samples form the test data.
validation_x = test_set_x_orig[:25]
validation_y = test_set_y_orig[:25]
test_set_x = test_set_x_orig[25:]
test_set_y = test_set_y_orig[25:]
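As an aside, the same deterministic split can be written with scikit-learn. The sketch below is an assumption-laden alternative, not part of the exercise: it presumes test_set_x_orig and test_set_y_orig are NumPy arrays of 50 samples each, and uses shuffle=False so the first 25 samples become the validation data, matching the slicing above.
# A minimal sketch, assuming test_set_x_orig / test_set_y_orig are NumPy
# arrays with 50 samples each (not the exercise's required solution).
from sklearn.model_selection import train_test_split

validation_x, test_set_x, validation_y, test_set_y = train_test_split(
    test_set_x_orig,
    test_set_y_orig,
    test_size=25,    # last 25 samples become the test data
    shuffle=False,   # keep the original order: first 25 samples -> validation
)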
Print the shape of both train_set_x_orig and train_set_y_orig.
print("train_set_x shape: ", train_set_x_orig.shape)
print("train_set_y shape: ", train_set_y_orig.shape)
Print the shape of both validation_x and validation_y.
print("validation_x shape: ", validation_x.shape)
print("validation_y shape: ", validation_y.shape)
Print the shape of both test_set_x and test_set_y.
print("test_set_x shape: ", test_set_x.shape)
print("test_set_y shape: ", test_set_y.shape)
We observe that we have very little data. Even so, with transfer learning we can build a decent model that yields reasonable accuracy on this tiny dataset.
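For context, a transfer learning setup might look like the following sketch. It assumes a TensorFlow/Keras workflow with 64x64 RGB images and binary labels; the VGG16 backbone, the input shape, and the hyperparameters are illustrative assumptions, not the exercise's prescribed solution.
# A minimal transfer-learning sketch (assumes TensorFlow/Keras,
# 64x64x3 images, and binary labels; details are illustrative only).
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(64, 64, 3))
base.trainable = False  # freeze the pretrained convolutional layers

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary classification head
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training call sketched below; the exact preprocessing and label shapes
# depend on the dataset and are assumptions here.
# model.fit(train_set_x_orig, train_set_y_orig,
#           validation_data=(validation_x, validation_y), epochs=10)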