What does a CNN do?
At a high level, in order for a network to perform image classification (which this network has been trained to do), it must understand the image.
This requires taking the raw image pixels as input and building an internal representation that converts them into a complex understanding of the features present in the image.
Starting from the network's input layer, the first few layer activations represent low-level features like edges and textures.
As you step through the network, the final few layers represent higher-level features—object parts like wheels or eyes.
Because CNNs build this kind of understanding of images, they generalize well: they capture the invariances and defining features within classes (e.g. cats vs. dogs) while staying agnostic to background noise and other nuisances.
Thus, somewhere between where the raw image is fed into the model and the output classification label, the model serves as a complex feature extractor.
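As a quick illustration (a minimal sketch, assuming TensorFlow/Keras and the pretrained VGG19 that this exercise uses later), you can list the network's layers to see this hierarchy: the block1/block2 convolutions sit closest to the raw pixels, while block4/block5 sit closest to the classification output.

```python
import tensorflow as tf

# Load VGG19 pretrained on ImageNet, without its final classification layers.
vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')

# Print the layer names: early blocks (block1_*, block2_*) respond to edges and
# textures, while later blocks (block4_*, block5_*) respond to object-part features.
for layer in vgg.layers:
    print(layer.name)
```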
What are we going to do?
By accessing intermediate layers of the model, we will be able to describe the content and style of input images. We will use the intermediate layers of the VGG19 network architecture, a pre-trained image classification network, to get the content and style representations of an image.
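As an illustration, the content representation is typically taken from a single deep layer and the style representation from one layer per convolutional block. The specific layer names below are a common choice for VGG19, not the only valid one:

```python
# Layers whose activations will represent the content of an image
# (a single deep layer, since deep layers encode object-level features).
content_layers = ['block5_conv2']

# Layers whose activations will represent the style of an image
# (one layer per convolutional block captures textures at several scales).
style_layers = ['block1_conv1',
                'block2_conv1',
                'block3_conv1',
                'block4_conv1',
                'block5_conv1']
```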
These intermediate layers are what we use to define the content and style representations: for an input image, we try to match the corresponding style and content target representations at those intermediate layers, as sketched below.
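A minimal sketch of how such a feature extractor might be built with tf.keras (assuming the content_layers and style_layers lists above): a frozen VGG19 is wrapped in a new Model whose outputs are the activations of the chosen intermediate layers.

```python
import tensorflow as tf

def vgg_layers(layer_names):
    """Build a model that returns the activations of the given VGG19 layers."""
    # Load VGG19 trained on ImageNet, without the classification head.
    vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')
    vgg.trainable = False  # we only extract features; no fine-tuning
    outputs = [vgg.get_layer(name).output for name in layer_names]
    return tf.keras.Model(inputs=[vgg.input], outputs=outputs)

# Feature extractor for both the style and content layers.
extractor = vgg_layers(style_layers + content_layers)
# Calling extractor(preprocessed_images) returns one activation tensor per layer;
# these activations serve as the style and content targets to match.
```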