Training Deep Neural Nets


Training Deep Neural Nets - Session 02



Sir, I need the complete code for freezing layers from scratch.


Hi,

Could you please elaborate a little more on the issue you are facing?
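In the meantime, here is a minimal sketch of freezing the lower layers of a network, assuming TensorFlow 1.x as used in the course (the layer names and sizes are illustrative, not the course notebook's):

import tensorflow as tf  # TensorFlow 1.x

n_inputs, n_hidden1, n_hidden2, n_hidden3, n_outputs = 784, 300, 100, 50, 10

X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X")
y = tf.placeholder(tf.int32, shape=(None,), name="y")

# Lower layers: the ones we want to freeze (e.g. restored from a pre-trained model)
hidden1 = tf.layers.dense(X, n_hidden1, activation=tf.nn.relu, name="hidden1")
hidden2 = tf.layers.dense(hidden1, n_hidden2, activation=tf.nn.relu, name="hidden2")
# Upper layers: the ones that keep training
hidden3 = tf.layers.dense(hidden2, n_hidden3, activation=tf.nn.relu, name="hidden3")
logits = tf.layers.dense(hidden3, n_outputs, name="outputs")

xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy)

# Freezing = give the optimizer only the variables of the layers you still want to train,
# so hidden1 and hidden2 never receive gradient updates
train_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
                               scope="hidden3|outputs")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
training_op = optimizer.minimize(loss, var_list=train_vars)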

Thanks.


Hi,

When you say "three-layer neural network", do you mean 3 hidden layers, or 1 input layer and 2 hidden layers?

I ask because in the dropout example, while implementing dropout regularization on a 3-layer neural network, we created only 2 hidden layers.

Yet in the earlier discussions in Introduction to ANN, we had 2 hidden layers for a two-layer neural network.


Hi,

A 3-layer neural network is usually counted by its layers of weights, i.e. 2 hidden layers plus the output layer; the input layer is not counted. That is why the dropout example creates only 2 hidden layers. However, could you please tell me which part of the lecture you are referring to?
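For reference, a minimal sketch of that counting, assuming TensorFlow 1.x as used in the course (the sizes and the 0.5 dropout rate are just examples): two hidden layers plus the output layer make up the "3-layer" network.

import tensorflow as tf  # TensorFlow 1.x

training = tf.placeholder_with_default(False, shape=(), name="training")
X = tf.placeholder(tf.float32, shape=(None, 784), name="X")

# Layers 1 and 2: hidden layers, each followed by dropout
hidden1 = tf.layers.dense(X, 300, activation=tf.nn.relu, name="hidden1")
hidden1_drop = tf.layers.dropout(hidden1, rate=0.5, training=training)
hidden2 = tf.layers.dense(hidden1_drop, 100, activation=tf.nn.relu, name="hidden2")
hidden2_drop = tf.layers.dropout(hidden2, rate=0.5, training=training)

# Layer 3: the output layer
logits = tf.layers.dense(hidden2_drop, 10, name="outputs")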

Thanks.


It's fascinating how concepts from mechanics, such as momentum and moments, are adopted in ML and Deep Learning. Sandeep sir is awesome in the way he drills down into every layer of a complex topic. I am going to go through this whole course again after getting my certificate.

Regards,

HS



The Jupyter Lab is continuously getting disconnected. Could you please do something about this?


Hi Sharathchandran,

Make sure you're using an internet connection with good download and upload speeds.
Also, please avoid opening notebooks in multiple tabs.
If you're still having issues, please send a screenshot to reachus@cloudxlab.com


I want to know: when batch accuracy comes out to be 1.0, does this mean overfitting?


Hi,

Any ML/DL model is basically an approximation of the underlying function. When a model predicts the training set with 100% accuracy while its validation accuracy is noticeably lower, it is memorizing the training data, in other words it is overfitting. A single mini-batch with accuracy 1.0 is not conclusive on its own, so compare training accuracy against validation accuracy to confirm.

Thanks.


In some places it is written training=training (a bool), in others training=True, and in others training=False.
I have understood that we use training=True while we are training, and training=False when we run validation.

But when do we use training=training?


Hi,

Here training is not a string; it refers to a boolean tensor. The code usually defines a placeholder such as training = tf.placeholder_with_default(False, shape=(), name="training"), and training=training passes that placeholder into the layer. At run time you feed True while training (so dropout behaves in training mode) and leave the default False during evaluation.
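A minimal sketch of the pattern, assuming TensorFlow 1.x as in the notebook (the layer sizes and the random batch are illustrative):

import numpy as np
import tensorflow as tf  # TensorFlow 1.x

X = tf.placeholder(tf.float32, shape=(None, 784), name="X")
# A boolean placeholder that defaults to False
training = tf.placeholder_with_default(False, shape=(), name="training")

hidden = tf.layers.dense(X, 300, activation=tf.nn.relu, name="hidden")
# training=training wires the placeholder into the layer (no quotes, not a string)
hidden_drop = tf.layers.dropout(hidden, rate=0.5, training=training)
logits = tf.layers.dense(hidden_drop, 10, name="outputs")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    X_batch = np.random.rand(32, 784).astype(np.float32)
    # While training: feed True so dropout is active
    sess.run(logits, feed_dict={X: X_batch, training: True})
    # While evaluating: omit it, the default False switches dropout off
    sess.run(logits, feed_dict={X: X_batch})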

Thanks.


What does this refer to? Is training=training True or False, or are we just initialising it?


Hi,

Could you please tell me where it says training= and not trainable=?

Thanks.


Hi,

global_step = tf.Variable(0, trainable=False)

At the video position 1:1:33, what does trainable=False mean?


Hi,

When building a machine learning model, it is often convenient to distinguish between variables holding trainable model parameters and other variables, such as a step variable used to count training steps. To make this easier, the variable constructor supports a trainable=<bool> parameter: variables created with trainable=False are left alone by the optimizer.
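For example, a minimal sketch assuming TensorFlow 1.x (the toy loss is illustrative):

import tensorflow as tf  # TensorFlow 1.x

# trainable=False keeps global_step out of the TRAINABLE_VARIABLES collection,
# so the optimizer never applies gradient updates to it
global_step = tf.Variable(0, trainable=False, name="global_step")

x = tf.Variable(5.0, name="x")        # an ordinary trainable parameter
loss = tf.square(x)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
# Passing global_step makes the optimizer increment it by 1 on every training step
training_op = optimizer.minimize(loss, global_step=global_step)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        sess.run(training_op)
    print(sess.run(global_step))  # prints 3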

Thanks.



Hi,

So the bool can take either value, True or False. Also, note that A = B (assignment) is not the same as A == B (comparison).

Thanks.


Hi,

In AdaGrad we do element-wise calculations, but in Gradient Descent, Momentum optimization, and Nesterov Accelerated Gradient, do we also do element-wise calculations?
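For reference, all of these optimizers update the parameters element-wise; here is a minimal NumPy sketch of the update rules (the learning rate, momentum and gradient values are just example numbers):

import numpy as np

theta = np.array([1.0, -2.0, 3.0])   # parameters (any shape works)
grad = np.array([0.1, -0.2, 0.3])    # gradient of the loss w.r.t. theta, same shape
lr, beta, eps = 0.1, 0.9, 1e-10

# Gradient Descent: a purely element-wise update
theta_gd = theta - lr * grad

# Momentum optimization: the momentum vector m has one entry per parameter
m = np.zeros_like(theta)
m = beta * m - lr * grad
theta_momentum = theta + m

# Nesterov Accelerated Gradient: same element-wise update, except the gradient
# would be evaluated at theta + beta * m instead of at theta
theta_nag = theta + (beta * m - lr * grad)

# AdaGrad: the accumulator s is also per-parameter, so the scaling is element-wise
s = np.zeros_like(theta)
s = s + grad * grad
theta_adagrad = theta - lr * grad / np.sqrt(s + eps)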

 


What is the correct order of applying batch norm and dropout? Which comes first, batch norm or dropout?


Good question.

The flow is like this:

Input -> Normalization -> Neuron Layer -> Layer Output -> Batch Normalization -> Next Neuron Layer

Dropout is applied to the layer of neurons; it essentially turns off some of the neurons. So, you can say normalization comes before dropout.
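A minimal sketch of that ordering, assuming TensorFlow 1.x (the sizes, activation and rates are just examples): each dense layer's output goes through batch normalization, then the activation, and dropout is applied last before the next layer.

import tensorflow as tf  # TensorFlow 1.x

X = tf.placeholder(tf.float32, shape=(None, 784), name="X")
training = tf.placeholder_with_default(False, shape=(), name="training")

# Dense layer with no activation, so batch norm can be applied to the linear output
hidden1 = tf.layers.dense(X, 300, name="hidden1")
bn1 = tf.layers.batch_normalization(hidden1, training=training, momentum=0.9)
bn1_act = tf.nn.elu(bn1)
# Dropout comes after normalization and activation, just before the next layer
hidden1_drop = tf.layers.dropout(bn1_act, rate=0.5, training=training)

logits = tf.layers.dense(hidden1_drop, 10, name="outputs")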
