Let us define the following hyper-parameters that we will be using: style_weight, content_weight, and an optimizer (here we shall use Adam), along with its hyper-parameter values such as the learning rate. We shall also create a function clip_0_1 that clips the pixel values of the image to lie between 0 and 1, since this is a float image. Finally, we shall define the variable image, whose pixels we will update throughout the train steps in the coming slides. We assign tf.Variable(content_image) to image; we use tf.Variable since the pixel values of this image are to be updated through gradient descent.
Note: tf.clip_by_value clips tensor values to a specified min and max.

Define style_weight and content_weight. We do this to optimize using a weighted combination of the two losses to get the total loss:
style_weight=1e-2
content_weight=1e4
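As a quick sketch of how these weights combine the two losses, the snippet below uses made-up placeholder values for style_loss and content_loss (in the exercise these come from the style and content extractors defined earlier):

```python
import tensorflow as tf

# Hypothetical loss values, purely for illustration.
style_loss = tf.constant(250.0)
content_loss = tf.constant(0.005)

style_weight = 1e-2
content_weight = 1e4

# The total loss is the weighted combination of the two losses.
total_loss = style_weight * style_loss + content_weight * content_loss
print(float(total_loss))  # 2.5 + 50.0, roughly 52.5
```

The large content_weight and small style_weight rebalance losses that live on very different numeric scales.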
Use the tf.optimizers.Adam optimizer and set the learning_rate to 0.02, beta_1 to 0.99, and epsilon to 1e-1.
opt = << your code comes here >>(learning_rate=0.02, beta_1=0.99, epsilon=1e-1)
Define the function clip_0_1:

def clip_0_1(image):
    return tf.clip_by_value(image, clip_value_min=0.0, clip_value_max=1.0)
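To see what clip_0_1 does, here is a small self-contained check on a toy tensor (the sample values are illustrative only):

```python
import tensorflow as tf

def clip_0_1(image):
    # Keep float pixel values inside the valid [0, 1] range.
    return tf.clip_by_value(image, clip_value_min=0.0, clip_value_max=1.0)

# Out-of-range values are clamped; in-range values pass through unchanged.
x = tf.constant([-0.5, 0.3, 1.7])
print(clip_0_1(x).numpy())  # [0.  0.3 1. ]
```

This is applied after every gradient update, since an unconstrained step can push pixels outside the displayable range.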
Use tf.Variable to declare the image:
image = << your code comes here >>(content_image)
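A minimal sketch of this step, using a small random tensor as a stand-in for content_image (which in the exercise is the loaded content photo):

```python
import tensorflow as tf

# Stand-in for the real content image: a float tensor with values in [0, 1].
content_image = tf.random.uniform((1, 4, 4, 3))

# Wrapping the tensor in tf.Variable makes its pixels trainable, so
# gradient descent can update them during the train steps.
image = tf.Variable(content_image)
print(image.trainable)  # True
```

A plain tensor is immutable; only a tf.Variable can receive the updates computed from the gradients.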