Building the Model

  • Now that we have preprocessed and created the dataset, we can create the model:

    • The first layer is an Embedding layer, which will convert word IDs into embeddings. The embedding matrix needs to have one row per word ID (vocab_size + num_oov_buckets) and one column per embedding dimension (this example uses 128 dimensions, but this is a hyperparameter you could tune).
    • Whereas the inputs of the model will be 2D tensors of shape [batch size, time steps], the output of the Embedding layer will be a 3D tensor of shape [batch size, time steps, embedding size].


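The shape change described above can be verified directly. A minimal sketch, assuming hypothetical values for `vocab_size` and `num_oov_buckets` (the course derives the real values during preprocessing):

```python
import numpy as np
from tensorflow import keras

# Hypothetical sizes for illustration only; the preprocessing steps
# earlier in the course compute the real vocab_size and num_oov_buckets.
vocab_size = 10000
num_oov_buckets = 1000
embed_size = 128  # one column per embedding dimension

# One row per word ID, one column per embedding dimension
embedding = keras.layers.Embedding(vocab_size + num_oov_buckets, embed_size)

# A 2D batch of word IDs: [batch size, time steps]
word_ids = np.random.randint(0, vocab_size, size=(32, 50))

# The output is a 3D tensor: [batch size, time steps, embedding size]
embeddings = embedding(word_ids)
print(embeddings.shape)  # (32, 50, 128)
```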
  • keras.layers.Embedding : Turns positive integers (indexes) into dense vectors of fixed size.
  • keras.layers.GRU : The GRU (Gated Recurrent Unit) layer.
  • Set embed_size to 128, which is the embedding size of each word.

    embed_size = 128
  • Create the model with:

    • Embedding layer

    • GRU layer with 4 units

    • GRU layer with 2 units

    • Dense layer with 1 unit and sigmoid activation

      model = keras.models.Sequential([
          keras.layers.Embedding(vocab_size + num_oov_buckets, embed_size,
                                 input_shape=[None]),
          keras.layers.GRU(4, return_sequences=True),
          keras.layers.GRU(2),
          keras.layers.Dense(1, activation="sigmoid")
      ])
  • Compile the model with the "binary_crossentropy" loss (as this is a binary classification problem), the "adam" optimizer, and the "accuracy" metric.

    model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
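Putting the steps above together, here is a minimal end-to-end sketch that builds, compiles, and sanity-checks the model on a random batch. The `vocab_size` and `num_oov_buckets` values are hypothetical placeholders for the ones computed during preprocessing:

```python
import numpy as np
from tensorflow import keras

# Hypothetical sizes for illustration only.
vocab_size = 10000
num_oov_buckets = 1000
embed_size = 128

model = keras.models.Sequential([
    keras.layers.Embedding(vocab_size + num_oov_buckets, embed_size),
    keras.layers.GRU(4, return_sequences=True),  # outputs a sequence for the next GRU
    keras.layers.GRU(2),                         # outputs only the last time step
    keras.layers.Dense(1, activation="sigmoid")  # probability of the positive class
])
model.compile(loss="binary_crossentropy", optimizer="adam",
              metrics=["accuracy"])

# Sanity check: a batch of word IDs in, one probability per review out.
batch = np.random.randint(0, vocab_size, size=(8, 20))
probs = model.predict(batch)
print(probs.shape)  # (8, 1)
```

Because the last layer uses a sigmoid, each output is a value between 0 and 1, which you would threshold (e.g. at 0.5) to get a class label.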
