Finally, we have reached the modeling part.
As discussed previously, we shall use a GRU-based model.
First, let us import the necessary TensorFlow and Keras libraries.
Next, we shall build the model by adding layers, compiling it, and then fitting the model on the train data.
A bit about the model we are going to build: we set return_sequences=True in those GRU layers whose output is fed as the input to the next GRU layer.
Import the libraries below.
import tensorflow as tf
tf.random.set_seed(42)
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.optimizers import Adam
Initialize model as a Sequential model from Keras.
model = keras.<< your code comes here >>
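For reference, the completed line would look like the sketch below, which simply creates an empty Sequential model to which layers are then added one at a time.
model = keras.Sequential()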
Add the following layers to the model.
# First GRU layer
model.add(layers.GRU(units=100, return_sequences=True, input_shape=(1,n_features), activation='tanh'))
model.add(layers.Dropout(0.2))
# Second GRU layer
model.add(layers.GRU(units=150, return_sequences=True, activation='tanh'))
model.add(layers.Dropout(0.2))
# Third GRU layer
model.add(layers.GRU(units=100, activation='tanh'))
model.add(layers.Dropout(0.2))
# The output layer
model.add(layers.Dense(units=1, kernel_initializer='he_uniform', activation='linear'))
Observe the argument return_sequences, which is set to True only for the layers that are followed by another GRU layer (that is, the first and second GRU layers), but not for the third layer.
This is because the output of the third GRU layer is fed to a Dense layer rather than to another GRU/LSTM layer.
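To see what return_sequences changes, here is a minimal sketch (not part of the exercise) that prints the output shape of a GRU layer with and without it; the batch size of 32 and feature count of 5 are arbitrary values chosen purely for illustration.
import tensorflow as tf
from tensorflow.keras import layers
x = tf.random.normal((32, 1, 5))  # (batch_size, timesteps, features)
# return_sequences=True keeps the time dimension, so the output can feed another GRU layer
print(layers.GRU(units=100, return_sequences=True)(x).shape)  # (32, 1, 100)
# return_sequences=False (the default) returns only the last step's output, ready for a Dense layer
print(layers.GRU(units=100)(x).shape)  # (32, 100)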
Compile the model as follows, specifying the learning rate learning_rate=0.0005, loss='mean_squared_error', and metrics=['mean_squared_error'].
model.compile(loss='mean_squared_error', optimizer=Adam(learning_rate=0.0005), metrics=['mean_squared_error'])
Let us now see the summary of the model architecture.
model.summary()
Use the fit method of the model to start training on the train data, passing validation_data=(valX, valY).
history = model.fit(trainX,trainY,epochs=100,batch_size=128, verbose=1, validation_data = (valX,valY))
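The fit call returns a History object; as a quick optional check (not part of the exercise), you can inspect the per-epoch metrics it recorded, as sketched below.
# history.history is a dict of per-epoch metrics recorded during training
print(history.history.keys())           # e.g. dict_keys(['loss', 'mean_squared_error', 'val_loss', 'val_mean_squared_error'])
print(history.history['val_loss'][-1])  # validation loss after the final epoch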