Artificial Neural Network


Artificial Neural Networks - Session 03

Slides

INSTRUCTIONS

Latest instructions for launching TensorBoard

If you are facing challenges opening TensorBoard, please visit the link below:

https://discuss.cloudxlab.com/t/solved-cannot-start-tensorboard-server/5146



Comments

The slides used in this Artificial Neural Networks session 3 are missing.

Please update or attach them.


Hi Sumbul,

I have added the slides of "training deep neural networks" too.


What is the difference between a normal function and a partial function?


Hi,

Could you please tell me which slide you are referring to?

Thanks.
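(If the question refers to Python's functools.partial, as used in many deep learning notebooks: a partial function is just an ordinary function with some keyword arguments pre-filled. A minimal sketch, assuming TensorFlow's Keras API:)

```python
from functools import partial
import tensorflow as tf

# Normal function call: every argument is passed explicitly each time.
dense1 = tf.keras.layers.Dense(100, activation="relu",
                               kernel_initializer="he_normal")

# Partial function: pre-fill the repeated keyword arguments once,
# producing a new callable that needs only the remaining arguments.
MyDense = partial(tf.keras.layers.Dense,
                  activation="relu", kernel_initializer="he_normal")

dense2 = MyDense(100)  # equivalent to dense1, but shorter to write
dense3 = MyDense(50)   # reuses the pre-filled defaults
```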


Hello sir

I have previously also sent a query about the PPT for Artificial Neural Networks. It is not complete; fine-tuning of DNNs and a few more topics are not there.

Thanks and regards.

 


Hi,

We have updated the slides. Could you please check once again and let me know if they are fine now?

Thanks.


Hello 

Yeah, it's fine now. Thanks so much for the update.

Regards..


Can we use the same method for voice recognition that we used for the MNIST dataset?


Hi,

Yes; however, we would need to pre-process the data.

Thanks.
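(For illustration, "pre-processing" here typically means converting each audio clip into a fixed-size 2-D representation such as a spectrogram, which can then be flattened and classified just like a 28x28 MNIST image. A minimal sketch, assuming SciPy and hypothetical 1-second clips at 16 kHz:)

```python
import numpy as np
from scipy.signal import spectrogram

def audio_to_features(clip, sample_rate=16000):
    """Turn a 1-D audio clip into a flattened log-spectrogram,
    analogous to flattening a 28x28 MNIST image into 784 pixels."""
    _, _, sxx = spectrogram(clip, fs=sample_rate, nperseg=256)
    return np.log(sxx + 1e-10).flatten()  # fixed-size feature vector

# Hypothetical usage: random noise stands in for a real recording.
clip = np.random.randn(16000)
features = audio_to_features(clip)
print(features.shape)  # fixed length for a given clip length
```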


Hello,

At 2.03.26, why are we scaling X_batch?
Also, why are we training each batch separately, rather than sequentially?
Thank you. 


Hi,

Feature scaling is essential for machine learning algorithms that calculate distances between data points. Since the ranges of raw feature values vary widely, the objective functions of some machine learning algorithms do not work correctly without normalization.

We are training in mini-batches here because this chapter covers batch normalization, which computes its statistics over each batch.

Thanks.
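(A minimal standardization sketch, assuming scikit-learn; fit the scaler on the training set once, then reuse it for every batch:)

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.random.rand(1000, 784) * 255        # e.g. raw pixel intensities

scaler = StandardScaler().fit(X_train)           # learn per-feature mean/std
X_train_scaled = scaler.transform(X_train)       # now mean ~0, std ~1

X_batch_scaled = scaler.transform(X_train[:50])  # scale a batch the same way
```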


I have two questions regarding machine learning; please help me with these:

Q1. Can we also use heterogeneous models in bagging and boosting, since they are generally used with homogeneous models?

Q2. What algorithms can we use apart from decision trees in bagging and boosting, as they are generally used with decision trees only?

Thank you.


Hi,

Very good question!

1. Yes, this is called heterogeneous boosting.

2. Although they are usually used with decision trees, they can be used with any model.

Thanks.
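(For example, scikit-learn's BaggingClassifier accepts any base estimator, not just a decision tree. A minimal sketch using k-nearest neighbours as the base model:)

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Each of the 10 ensemble members is a KNN classifier
# trained on its own bootstrap sample of the data.
bag = BaggingClassifier(KNeighborsClassifier(), n_estimators=10,
                        bootstrap=True, random_state=42)
bag.fit(X, y)
print(bag.score(X, y))
```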


Hi,

How can I get the notebooks in my Jupyter environment that you are showing in these lessons? I can only see the blank assignment notebooks there.

Thank you.


Hi Sneha,

You can get the notebook the instructor is using in the video here; the notebooks you have in your home directory are for performing our assessments.


Hi,

I don't have the Jupyter notebook for training_neural_nets in my cloned repository; please provide me with the same.

Thank you


Hi,

You will find it under the Deep Learning folder, right at the bottom of the list, named training_deep_neural_nets.ipynb. It is also in our GitHub repository:

https://github.com/cloudxlab/ml/tree/master/deep_learning

Thanks.


Please put up the full slides; after slide 77 there are none. Please upload the full slides ASAP.



Hi Avishek,

Our courses are constantly updated, so we have segregated ANN and DNN into two different parts. You will find the DNN slides separately in the next part of the tutorial.

Thanks.

-- Rajtilak Bhattacharjee


Hi,

There are no slides for the last two videos (3rd & 4th) in the next part of the tutorial either (I checked). Please look into the matter; it will be difficult for us to stay on track while learning. Sometimes PDFs are easier to understand.


Hi,

Could you please point out which videos you are referring to?

Thanks.

-- Rajtilak Bhattacharjee


Hello 

Sessions 5, 6, 7, and 8 are explained using the slides for the topic "Introduction to Artificial Neural Networks". The deck uploaded for this topic has only 77 slides, but as per the videos there should be many more. Please update. I hope it is clear now.

Thanks and regards.


Hi,

Are you referring to the Deep Neural Net slides? If yes, then they are available under that topic.

Thanks.


What do you mean by the next part of the tutorial? I cannot find it. Where is it?


...and here is the paper on batch normalization (BN):

https://arxiv.org/pdf/1502....


For those who are interested, here's the paper from Glorot and Bengio:

http://proceedings.mlr.pres...


It appears that the content contains only 77 slides (the one in the provided link), but the video has more slides. Where can the remaining slides be found?


In the execution phase for training, will the algorithm keep reinitializing weights for each epoch?


Actually, the weights are initialized only once, before training begins.

During backpropagation the algorithm keeps updating the existing weights after every batch and epoch; it does not re-draw them from the initializer, otherwise all earlier learning would be lost.

All the best.
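(A quick way to verify this: the kernel of a Keras layer is created once, and training moves it away from that single random initialization instead of re-drawing it each epoch. A minimal sketch, assuming TensorFlow's Keras API:)

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="sgd", loss="mse")

X = np.random.rand(100, 4).astype("float32")
y = np.random.rand(100, 1).astype("float32")

w_before = model.layers[0].get_weights()[0].copy()
model.fit(X, y, epochs=3, verbose=0)  # weights updated, never re-drawn
w_after = model.layers[0].get_weights()[0]

print(np.allclose(w_before, w_after))  # False: same variables, new values
```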


Why do we need to scale the input to mean 0 and standard deviation 1 in the case of the SELU activation function?


In general, it is always better to scale/normalize the input to a neural network. SELU in particular is designed to be self-normalizing: it keeps activations close to mean 0 and variance 1 as they propagate through the layers, but this property only holds if the inputs are standardized and the weights use LeCun normal initialization.
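(A minimal sketch, assuming TensorFlow's Keras API: standardize the inputs, then pair SELU with LeCun normal initialization:)

```python
import numpy as np
import tensorflow as tf

X = np.random.rand(1000, 20).astype("float32") * 10
X = (X - X.mean(axis=0)) / X.std(axis=0)  # mean 0, std 1 per feature

# SELU keeps activations near mean 0 / variance 1 only when the inputs
# are standardized and the weights use LeCun normal initialization.
layer = tf.keras.layers.Dense(100, activation="selu",
                              kernel_initializer="lecun_normal")
out = layer(X)
print(float(out.numpy().mean()), float(out.numpy().std()))
```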


Hi. Can we use He initialization when leaky ReLU, ELU, or SELU activation functions are used, given that He initialization was proposed for the ReLU activation function?


Good question. He initialization generally helps you avoid vanishing or exploding gradients. There is no hard and fast rule as such, because it is just the initialization of the weights, which are going to be tweaked during training anyway.
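(In Keras, pairing He initialization with a ReLU-family activation is a one-liner; a minimal sketch:)

```python
import tensorflow as tf

# He initialization is commonly paired with ReLU-family activations,
# including leaky ReLU and ELU.
layer = tf.keras.layers.Dense(100, activation=tf.nn.leaky_relu,
                              kernel_initializer="he_normal")
```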
