We are going to define a class to extract the style and content of a given image.
In effect, we build a model that returns the style and content tensors.
In Keras, the Model class is the root class used to define a model architecture. Since Keras is object-oriented, we can subclass the Model class and insert our own architecture definition. Model subclassing is fully customizable and enables you to implement your own custom forward pass of the model.
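As a minimal illustration of the subclassing pattern (the layer sizes here are arbitrary and not part of the style-transfer model, just a sketch of the idea):

```python
import tensorflow as tf

class TinyClassifier(tf.keras.Model):
    """A minimal Model subclass: layers in __init__, forward pass in call."""
    def __init__(self):
        super().__init__()  # initialize the parent tf.keras.Model
        self.dense1 = tf.keras.layers.Dense(16, activation='relu')
        self.dense2 = tf.keras.layers.Dense(3, activation='softmax')

    def call(self, inputs):
        # Custom forward pass: chain the layers defined above.
        x = self.dense1(inputs)
        return self.dense2(x)

model = TinyClassifier()
probs = model(tf.random.normal((4, 8)))  # a batch of 4 examples, 8 features each
print(probs.shape)  # (4, 3)
```

The StyleContentModel below follows exactly this shape: attributes in `__init__`, computation in `call`.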
We are going to define our custom style-content extractor for the given image by subclassing tf.keras.models.Model. We do that as follows:

Define __init__():
- Call super().__init__(), the constructor of the parent class tf.keras.models.Model.
- Set self.vgg to the model returned by the vgg_layers function we defined previously. This is the custom model with the specified style layers and content layers.
- Set trainable to False, as we want to keep using the VGG19 weights trained on the ImageNet database.

Define the call method:
- The call method is regarded as the forward pass of the model, and we customize it. In our scenario, we define call so that it returns the gram matrices representing the style of the image, along with the content of the image. We implement the following steps in the call function:
- Preprocess the input image with tf.keras.applications.vgg19.preprocess_input.
- Pass the preprocessed input through self.vgg, the model we defined with the specified style and content layers using the vgg_layers function. This returns outputs, which contains the style and content feature maps for our input image.
- Convert the style feature maps into gram matrices using the gram_matrix function.

Note:
- super().__init__() calls our parent constructor. From there on, our layers are defined as instance attributes. Attributes in Python use the self keyword and are typically (but not always) defined in the constructor.
- tf.keras.applications.vgg19.preprocess_input returns a preprocessed NumPy array or a tf.Tensor of type float32. The images are converted from RGB to BGR, then each color channel is zero-centered with respect to the ImageNet dataset, without scaling.
- call: once the layers of our choice are defined, we define the network topology/graph inside the call function, which performs the forward pass.
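To see the BGR conversion and zero-centering concretely, here is a small check on an all-zero "image" (the channel means below are the standard ImageNet means Keras uses, in B, G, R order):

```python
import numpy as np
import tensorflow as tf

# A single 2x2 all-zero RGB image with pixel values in the [0, 255] range.
img = np.zeros((1, 2, 2, 3), dtype=np.float32)
out = tf.keras.applications.vgg19.preprocess_input(img)

# RGB -> BGR, then each channel is zero-centered with the ImageNet means
# (approximately 103.939, 116.779, 123.68 for B, G, R), with no scaling.
print(out[0, 0, 0])  # approximately [-103.939, -116.779, -123.68]
```

This is why the model scales its [0, 1] inputs back up to [0, 255] before calling preprocess_input.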
Use the following code to define the StyleContentModel, which returns the style and content representations of the given input image. Each instruction in the code below is a Pythonic implementation of the description above, so make sure you understand every line.
class StyleContentModel(tf.keras.models.Model):
    def __init__(self, style_layers, content_layers):
        super().__init__()
        self.vgg = vgg_layers(style_layers + content_layers)
        self.style_layers = style_layers
        self.content_layers = content_layers
        self.num_style_layers = len(style_layers)
        self.vgg.trainable = False  # keep the pretrained VGG19 weights frozen

    def call(self, inputs):
        # Expects float input in [0, 1]; scale to [0, 255] for VGG19 preprocessing.
        inputs = inputs * 255.0
        preprocessed_input = tf.keras.applications.vgg19.preprocess_input(inputs)
        outputs = self.vgg(preprocessed_input)
        # The first num_style_layers outputs are style features; the rest are content.
        style_outputs, content_outputs = (outputs[:self.num_style_layers],
                                          outputs[self.num_style_layers:])
        # Represent style with the gram matrices of the style feature maps.
        style_outputs = [gram_matrix(style_output)
                         for style_output in style_outputs]
        content_dict = {content_name: value
                        for content_name, value
                        in zip(self.content_layers, content_outputs)}
        style_dict = {style_name: value
                      for style_name, value
                      in zip(self.style_layers, style_outputs)}
        return {'content': content_dict, 'style': style_dict}
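To try the class end to end, you also need the vgg_layers and gram_matrix helpers defined earlier in the exercise. The sketch below reproduces standard versions of them together with a compact copy of the StyleContentModel, and, purely as an assumption to keep the example self-contained offline, builds VGG19 with weights=None rather than the ImageNet weights the exercise actually uses (so the numbers are meaningless, but the shapes and structure are real). The layer names chosen are the usual VGG19 ones:

```python
import tensorflow as tf

def vgg_layers(layer_names):
    """Build a model returning the activations of the requested VGG19 layers."""
    # Assumption: weights=None keeps this sketch runnable without downloading
    # ImageNet weights; the exercise itself uses weights='imagenet'.
    vgg = tf.keras.applications.VGG19(include_top=False, weights=None)
    vgg.trainable = False
    outputs = [vgg.get_layer(name).output for name in layer_names]
    return tf.keras.Model([vgg.input], outputs)

def gram_matrix(input_tensor):
    """Gram matrix of the feature maps, averaged over spatial locations."""
    result = tf.linalg.einsum('bijc,bijd->bcd', input_tensor, input_tensor)
    input_shape = tf.shape(input_tensor)
    num_locations = tf.cast(input_shape[1] * input_shape[2], tf.float32)
    return result / num_locations

class StyleContentModel(tf.keras.models.Model):
    def __init__(self, style_layers, content_layers):
        super().__init__()
        self.vgg = vgg_layers(style_layers + content_layers)
        self.style_layers = style_layers
        self.content_layers = content_layers
        self.num_style_layers = len(style_layers)
        self.vgg.trainable = False

    def call(self, inputs):
        inputs = inputs * 255.0
        preprocessed_input = tf.keras.applications.vgg19.preprocess_input(inputs)
        outputs = self.vgg(preprocessed_input)
        style_outputs, content_outputs = (outputs[:self.num_style_layers],
                                          outputs[self.num_style_layers:])
        style_outputs = [gram_matrix(style_output)
                         for style_output in style_outputs]
        content_dict = dict(zip(self.content_layers, content_outputs))
        style_dict = dict(zip(self.style_layers, style_outputs))
        return {'content': content_dict, 'style': style_dict}

style_layers = ['block1_conv1', 'block2_conv1']
content_layers = ['block5_conv2']

extractor = StyleContentModel(style_layers, content_layers)
results = extractor(tf.random.uniform((1, 224, 224, 3)))  # values in [0, 1]

print(sorted(results.keys()))                  # ['content', 'style']
# block1_conv1 has 64 filters, so its gram matrix is 64x64 per image.
print(results['style']['block1_conv1'].shape)  # (1, 64, 64)
```

Note that the style entries are gram matrices (channels x channels), while the content entries are the raw feature maps of the content layers.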