Training Deep Neural Nets


Training Deep Neural Nets - Session 01



19 Comments

Hi Team,

Could you please help me understand the variable initialization below, used when reusing pretrained layers:

X = tf.get_default_graph().get_tensor_by_name("X:0")

Why do we put 0 in "X:0"? How do we know we need to put 0 here?

I noticed a similar thing in the sample line below as well.

accuracy = tf.get_default_graph().get_tensor_by_name("eval/accuracy:0")

Regards,

Birendra Singh

Hi,

In short, "X:0" refers to the first output (index 0) of the operation named "X". In TensorFlow 1.x a tensor is identified as "<op_name>:<output_index>", and since most operations produce a single output tensor, that index is almost always 0. Feel free to go through this detailed explanation: https://stackoverflow.com/a/36784246/14619383
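To make the convention concrete, here is a small framework-agnostic sketch (no TensorFlow calls, so it runs standalone; the helper name `split_tensor_name` is just for illustration) of how a TF 1.x-style tensor name decomposes:

```python
def split_tensor_name(name):
    """Split a TensorFlow-style tensor name into (op_name, output_index).

    In TF 1.x a tensor is named "<op_name>:<i>", where i is the index of
    that tensor among the operation's outputs. Most operations produce a
    single output, so index 0 (as in "X:0") is by far the most common.
    """
    op_name, _, index = name.rpartition(":")
    return op_name, int(index)

print(split_tensor_name("X:0"))              # ('X', 0)
print(split_tensor_name("eval/accuracy:0"))  # ('eval/accuracy', 0)
```

This is why `get_tensor_by_name` needs the ":0" suffix while `get_operation_by_name` does not: the former fetches a specific output tensor, the latter the operation itself.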

Thanks.

Hi,

I want to know the reasoning behind using batch normalisation before or after the activation function. Also, why won't min-max normalisation help to normalize batches?

Hi,

Good question!

Batch Normalization does more than normalize a dataset once. It speeds up the training of a neural network, addresses internal covariate shift by keeping the input to every layer distributed around a stable mean and standard deviation, and also helps smooth the loss landscape. Regarding placement: the original paper applies it before the activation function, though in practice applying it after the activation often works just as well. Min-max normalisation, by contrast, is a fixed one-off rescaling into a range such as [0, 1]; it has no learnable parameters and cannot track the statistics of intermediate activations, which keep shifting from batch to batch during training.
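A minimal, framework-agnostic sketch of what Batch Normalization computes in the forward pass, contrasted with min-max scaling (the default `gamma`/`beta` values stand in for the learned scale and shift parameters):

```python
import math

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch to zero mean / unit variance, then apply the
    learned scale (gamma) and shift (beta)."""
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in batch]

def min_max_scale(batch):
    """Min-max scaling squashes values into [0, 1] but does not center
    the data, control its variance, or learn any parameters."""
    lo, hi = min(batch), max(batch)
    return [(x - lo) / (hi - lo) for x in batch]

activations = [2.0, 4.0, 6.0, 8.0]
bn = batch_norm(activations)
print([round(v, 3) for v in bn])   # centered around 0, unit variance
print(min_max_scale(activations))  # squashed into [0, 1], mean != 0
```

Note that `batch_norm` recomputes mean and variance for every batch it sees, which is exactly what a fixed min-max rescaling of the input data cannot do for the ever-changing activations inside the network.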

Thanks.

It seems the whole slide deck is attached. It would be a great help if the PDF contained only the slides related to the videos.

Hi,

This presentation is related to the videos of Training Deep Neural Nets only.

Thanks.

This comment has been removed.

Are you covering Deep Learning with PyTorch in this course?

Hi,

No, we are covering Deep Learning with TensorFlow 1.0.

Thanks.

Then we are not up to date in terms of course content. PyTorch is a very useful library, e.g. for converting NumPy arrays into PyTorch tensors for faster processing when training deep neural networks.

Hi,

TensorFlow is one of the most popular Deep Learning libraries. Every library has its own pros and cons, so the aim should not be to learn the syntax specific to one library, but to learn the concepts so that you can apply them with any library you want.

Thanks.

Good afternoon, could you please tell me what assign_kernels.input[1] does?

Hi,

Here we are initializing the kernel. In TF 1.x, an assignment operation's first input is the variable being assigned to and its second input (index 1) is the value being assigned, so grabbing that input gives us the tensor through which we can feed the pretrained kernel weights at initialization time.

Thanks.

Hello,

Please let me know how we can add our own layers to a partial model taken from a previous one. Please share some sample code if possible.

Thanks and regards.

Hi,

Are you referring to pretrained models?

Thanks.

Hi

Yes. When reusing only a few layers of a model, how can we add our new layers on top of them? Please explain and share the code for the same if possible.

Thanks and regards.

Hi,

Please go through our notebook in our GitHub repository, especially the Reusing Pretrained Layers part:

https://github.com/cloudxlab/ml/blob/master/deep_learning/training_deep_neural_nets.ipynb
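In the meantime, here is a framework-agnostic sketch of the idea (in the notebook this is done with TF 1.x by restoring selected variables with a Saver and defining new layers on top; all the names, weights, and layer sizes below are made up for illustration). A model is represented as a list of toy dense layers; we keep the pretrained lower layers and stack freshly initialized new layers on top:

```python
import random

def make_dense(n_in, n_out, weights=None):
    """A toy dense layer: y = relu(W x), bias omitted for brevity.
    Passing `weights` reuses pretrained parameters instead of a
    fresh random initialization."""
    if weights is None:
        weights = [[random.uniform(-0.1, 0.1) for _ in range(n_in)]
                   for _ in range(n_out)]
    def layer(x):
        return [max(0.0, sum(w * xi for w, xi in zip(row, x)))
                for row in weights]
    return layer

# Pretend these weights were restored from a saved model's lower layers.
pretrained = [[[0.5, -0.2], [0.1, 0.3]],  # layer 1: 2 -> 2
              [[0.4, 0.4]]]               # layer 2: 2 -> 1

reused = [make_dense(2, 2, pretrained[0]),
          make_dense(2, 1, pretrained[1])]
new_head = [make_dense(1, 3), make_dense(3, 1)]  # freshly initialized layers

model = reused + new_head  # lower layers reused, new layers stacked on top

x = [1.0, 2.0]
for layer in model:
    x = layer(x)
print(x)  # output of the combined network
```

The key point carries over to TensorFlow directly: the reused layers come with saved weights, the new layers start from a fresh initialization, and during fine-tuning you typically train only the new layers at first (optionally freezing the reused ones).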

Thanks.

This comment has been removed.

Hi,

This is because that is how they were saved originally in the other model. Again, this is just an example of how it is done, and it may not always be the case.

Thanks.
