Defining some Hyper-parameters

Let us define the following hyper-parameters we would be using:

  • style_weight

  • content_weight

  • an optimizer; here we shall use Adam and set its hyper-parameter values, such as the learning rate.

We shall also create a function clip_0_1 that clips the image's pixel values to lie between 0 and 1, since this is a float image.

We shall also define a variable image, whose pixels we will keep updating through the train steps in the coming slides. We initialize it as image = tf.Variable(content_image). We use tf.Variable because the pixel values of this image are the ones updated by gradient descent.
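To see what the clipping step does, here is a minimal NumPy analogue (an illustrative stand-in, not the graded TensorFlow code): like tf.clip_by_value, np.clip saturates values outside [0, 1] to the nearest bound and passes in-range values through unchanged.

```python
import numpy as np

# Hypothetical NumPy analogue of the clip_0_1 function described above:
# values below 0 become 0, values above 1 become 1, the rest pass through.
def clip_0_1_np(image):
    return np.clip(image, 0.0, 1.0)

pixels = np.array([-0.2, 0.0, 0.5, 1.3])
clipped = clip_0_1_np(pixels)
```

The same saturating behavior is what keeps the optimized image a valid float image after each gradient step.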


  • tf.clip_by_value clips tensor values to a specified min and max.
  • Define the style_weight and content_weight hyper-parameters. We do this to optimize using a weighted combination of the two losses to get the total loss.

  • Use the tf.optimizers.Adam optimizer and set the learning_rate to 0.02, beta_1 to 0.99, and epsilon to 1e-1.

    opt = << your code comes here >>(learning_rate=0.02, beta_1=0.99, epsilon=1e-1)
  • Define the function clip_0_1:

    def clip_0_1(image):
        return tf.clip_by_value(image, clip_value_min=0.0, clip_value_max=1.0)
  • Use tf.Variable to declare the image:

    image = << your code comes here >>(content_image)
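Putting the pieces together, the update loop can be sketched with a pure-NumPy toy. Everything specific here is an illustrative assumption, not the notebook's code: the quadratic stand-in losses replace the real style and content losses, the analytic gradient step replaces tf.GradientTape plus opt.apply_gradients, and the weight values are made up. Only the two patterns from the text above carry over: total loss as a weighted sum of the two losses, and clipping to [0, 1] after every update.

```python
import numpy as np

# Illustrative weights (assumptions, not the course's values).
style_weight = 1e-2
content_weight = 1e4

def train_step(image, content_image, lr=1e-6):
    n = image.size
    # Analytic gradient of two toy quadratic losses standing in for the
    # real style loss (pull toward mid-gray) and content loss (pull
    # toward the content image). total = style_weight * style_loss
    # + content_weight * content_loss, as in the text above.
    grad = (style_weight * 2 * (image - 0.5) / n
            + content_weight * 2 * (image - content_image) / n)
    image = image - lr * grad
    # Mirror of clip_0_1: keep the result a valid float image in [0, 1].
    return np.clip(image, 0.0, 1.0)

content = np.full((4, 4), 0.8)
image = content.copy()      # tf.Variable(content_image) analogue
for _ in range(100):
    image = train_step(image, content)
# image remains within [0, 1] after every step.
```

In the actual exercise, the gradient comes from differentiating the style/content losses with tf.GradientTape, the step is applied with opt.apply_gradients, and the clipped result is written back with image.assign(clip_0_1(image)).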

No hints are available for this assessment

Answer is not available for this assessment

Note - Having trouble with the assessment engine? Follow the steps listed here
