Let's build a stacked autoencoder with 3 hidden layers and 1 output layer (i.e., 2 autoencoders stacked together).
Define the encoder stacked_encoder with a Flatten input layer followed by 2 Dense layers: one with 100 neurons and the other with 30 neurons, both using the selu activation function. We shall add these layers to keras.models.Sequential.
stacked_encoder = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(100, activation="selu"),
    keras.layers.Dense(30, activation="selu"),
])
Similarly, we shall define the decoder stacked_decoder with 2 Dense layers: one with 100 neurons and selu activation, and the other with 28 * 28 neurons and sigmoid activation, followed by a Reshape layer that restores the 28 x 28 image shape. We shall add these layers to keras.models.Sequential.
stacked_decoder = keras.models.Sequential([
    keras.layers.Dense(100, activation="selu", input_shape=[30]),
    keras.layers.Dense(28 * 28, activation="sigmoid"),
    keras.layers.Reshape([28, 28]),
])
We shall now combine the stacked_encoder and stacked_decoder using keras.models.Sequential to form our complete autoencoder stacked_ae.
stacked_ae = keras.models.Sequential([stacked_encoder, stacked_decoder])
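As an optional sanity check (not part of the exercise instructions), you can print a summary to confirm that the complete model is simply the encoder and decoder chained together:

stacked_ae.summary()   # shows the two nested Sequential sub-models and their parameter counts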
Now, compile stacked_ae using its compile method. We shall set the loss to "binary_crossentropy", the optimizer to keras.optimizers.SGD(lr=1.5), and the metrics to [rounded_accuracy] (which we have previously defined).
stacked_ae.<< your code comes here >>(loss="binary_crossentropy",
optimizer=<< your code comes here >>, metrics=[rounded_accuracy])
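For reference, a minimal sketch of what the completed call could look like is shown below. The rounded_accuracy shown here is an assumption about how it was defined in the earlier exercise (rounding both targets and predictions before computing binary accuracy), so use the version you actually defined there.

import tensorflow as tf
from tensorflow import keras

# Assumed definition of rounded_accuracy -- verify against the earlier exercise
def rounded_accuracy(y_true, y_pred):
    # Round targets and predictions, then measure binary accuracy on the rounded values
    return keras.metrics.binary_accuracy(tf.round(y_true), tf.round(y_pred))

# Completed compile call: binary cross-entropy loss, SGD with a learning rate of 1.5
stacked_ae.compile(loss="binary_crossentropy",
                   optimizer=keras.optimizers.SGD(lr=1.5),
                   metrics=[rounded_accuracy])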