Latest Instructions for Launching TensorBoard
If you are facing challenges opening TensorBoard, please visit the link below:
https://discuss.cloudxlab.com/t/solved-cannot-start-tensorboard-server/5146
Comments
The slides used in this Artificial Neural Network session 3 are missing.
Please update or attach them.
Hi Sumbul,
I have added the slides of "training deep neural networks" too.
What is the difference between a normal function and a partial function?
Hi,
Could you please tell me which slide are you referring to?
Thanks.
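In case the question refers to Python's functools.partial (which the course notebooks use to avoid repeating layer arguments), here is a minimal, self-contained sketch of the difference. The dense() helper below is only a hypothetical stand-in for illustration, not a function from the notebooks:

```python
from functools import partial

def dense(units, activation="relu", use_bias=True):
    """Hypothetical layer factory, used only to illustrate partial()."""
    return {"units": units, "activation": activation, "use_bias": use_bias}

# Normal call: every argument has to be passed explicitly each time.
layer_a = dense(300, activation="elu", use_bias=False)

# partial() pre-fills some arguments and returns a new callable,
# so repeated calls only pass the arguments that actually change.
my_dense = partial(dense, activation="elu", use_bias=False)
layer_b = my_dense(300)

assert layer_a == layer_b  # same result, with less repetition
```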
Hello sir,
I have previously also sent a query about the PPT of Artificial Neural Networks. It is not complete; fine-tuning of DNNs and a few more topics are not there.
Thanks n regards..
Hi,
We have updated the slides. Could you please check once again and let me know if it is fine now?
Thanks.
Hello,
Yeah, it's fine now. Thanks so much for the update.
Regards..
Can we use the same method for voice recognition as we did for the MNIST dataset?
Hi,
Yes; however, we would need to process the data first.
Thanks.
Hello,
At 2:03:26, why are we scaling X_batch?
Also, why are we training each batch separately rather than sequentially?
Thank you.
Hi,
Feature scaling is important because raw features can have very different ranges, and gradient descent (as well as any algorithm that computes distances between data points) converges poorly when they do; without normalization the objective function does not behave as intended.
We train on mini-batches because mini-batch gradient descent updates the weights after every batch, which scales better to large datasets than a full pass per update; it is also what Batch Normalization, the topic of this chapter, operates on.
Thanks.
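As a rough, self-contained sketch of both points (the variable names and shapes below are illustrative, not taken from the notebook): standardize the features once, then iterate over shuffled mini-batches so the weights get updated after every batch.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy data standing in for MNIST-style features.
X_train = np.random.rand(1000, 784).astype(np.float32)
y_train = np.random.randint(0, 10, size=1000)

# Scale features to mean 0 and standard deviation 1 (fit on training data only).
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)

# Mini-batch training: each epoch iterates over shuffled batches,
# and the weights would be updated after every batch, not once per epoch.
n_epochs, batch_size = 5, 50
for epoch in range(n_epochs):
    idx = np.random.permutation(len(X_train_scaled))
    for start in range(0, len(idx), batch_size):
        batch = idx[start:start + batch_size]
        X_batch, y_batch = X_train_scaled[batch], y_train[batch]
        # in the TF 1.x notebook this is where the training op would run, e.g.
        # sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
```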
I have 2 questions regarding machine learning, please help me with these:
Q1. Can we also use heterogeneous models in bagging and boosting, since they are generally used with homogeneous models?
Q2. What algorithms can we use apart from decision trees in bagging and boosting, as they are generally used with decision trees only?
Thank you.
Hi,
Very good question!
1. Yes; ensembles that combine different model families are called heterogeneous ensembles (voting and stacking are the usual ways to build them).
2. Although bagging and boosting are usually used with Decision Trees, they can be used with any model (see the sketch below).
Thanks.
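A minimal scikit-learn sketch of both points, using toy settings: bagging with a non-tree base estimator, and a heterogeneous voting ensemble that mixes model families.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Bagging works with any base estimator, not just decision trees.
bag_knn = BaggingClassifier(KNeighborsClassifier(), n_estimators=10, random_state=42)
bag_knn.fit(X, y)

# A heterogeneous ensemble: different model families combined by soft voting.
voting = VotingClassifier(estimators=[
    ("lr", LogisticRegression(max_iter=1000)),
    ("svc", SVC(probability=True)),
    ("knn", KNeighborsClassifier()),
], voting="soft")
voting.fit(X, y)
print(voting.score(X, y))
```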
Hi,
How can I get the notebooks in my Jupyter that you are showing in these lessons? I can only see the blank assignment ones there.
Thank you.
Hi Sneha,
You can get the notebook the instructor is using in the video here; the notebooks you have in your home directory are for performing our assessments.
Hi,
I don't have the Jupyter notebook for training_neural_nets in my cloned repository, please provide me with the same.
Thank you
Hi,
You would find it under the Deep Learning folder, right at the bottom of the list, named training_deep_neural_nets.ipynb. It is also available in our GitHub repository:
https://github.com/cloudxlab/ml/tree/master/deep_learning
Thanks.
Please put up the full slides; after slide 77 there are no slides. Please upload the full deck ASAP.
Hi Avishek,
Our courses are constantly updated, so we have segregated ANN and DNN into two different parts. You would find the DNN slides separately in the next part of the tutorial.
Thanks.
-- Rajtilak Bhattacharjee
Hi,
There are no slides for the last two videos (3rd & 4th) in the next part of the tutorial either (I checked). Please look into the matter; it is difficult for us to stay on the same track while learning, and sometimes a PDF is easier to understand from.
Hi,
Could you please point out which videos you are referring to?
Thanks.
-- Rajtilak Bhattacharjee
Hello,
Sessions 5, 6, 7, and 8 are explained using the slides of the topic "Introduction to Artificial Neural Networks". The deck uploaded for this topic has only 77 slides, but as per the videos there should be many more. Please update. Hope it is clear now.
Thanks n Regards..
Hi,
Are you referring to the Deep Neural Net slides? If yes, then they are available under that topic.
Thanks.
What do you mean by the next part of the tutorial? I cannot find it; where is it?
...and here is the paper on BN (Batch Normalization):
https://arxiv.org/pdf/1502....
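For context, here is a short Keras sketch of the technique from that paper; the course video uses the TF 1.x tf.layers API instead, but the idea is the same, and the layer sizes below are only illustrative:

```python
import tensorflow as tf

# Batch Normalization: normalize each layer's inputs over the current
# mini-batch, then scale and shift them with learned parameters (gamma, beta).
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(300, use_bias=False),
    tf.keras.layers.BatchNormalization(),   # normalize, then gamma * x_hat + beta
    tf.keras.layers.Activation("relu"),
    tf.keras.layers.Dense(100, use_bias=False),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Activation("relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```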
For those who are interested, here's the paper from Glorot and Bengio:
http://proceedings.mlr.pres...
It appears that the content (the provided link) contains only 77 slides, but the video has more. Where can the remaining slides be found?
In the execution phase of training, will the algorithm keep reinitializing the weights for each epoch?
Not exactly. The weights are initialized only once, before the training loop starts.
After that, backpropagation updates the existing weights at every training step of every epoch; they are not reinitialized.
All the best.
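A plain-NumPy sketch of the idea (a linear model trained with mini-batch gradient descent rather than a deep net, purely for brevity): the weights are drawn once, then only updated inside the loop.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=200)

# Weights are initialized ONCE, before the training loop starts.
w = rng.normal(scale=0.1, size=3)

n_epochs, batch_size, lr = 20, 32, 0.1
for epoch in range(n_epochs):
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        b = idx[start:start + batch_size]
        grad = 2 / len(b) * X[b].T @ (X[b] @ w - y[b])   # MSE gradient on the batch
        w -= lr * grad   # weights are UPDATED here, never reinitialized
```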
Why do we need to scale the input to mean 0 and standard deviation 1 in the case of the SELU activation function?
In general it is always a good idea to scale/normalize the input to a neural network, but with SELU it matters even more: the self-normalizing property (activations keeping roughly mean 0 and standard deviation 1 from layer to layer) only holds when the inputs are standardized and the weights use LeCun normal initialization.
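A small NumPy sketch of that effect (the layer width and input range below are arbitrary): with standardized inputs and LeCun-normal weights, a SELU layer's outputs stay close to mean 0 and standard deviation 1.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_raw = rng.uniform(0, 255, size=(1000, 100))        # unscaled pixel-like inputs
X_scaled = StandardScaler().fit_transform(X_raw)     # mean 0, std 1 per feature

def selu(z, alpha=1.6732632423543772, scale=1.0507009873554805):
    return scale * np.where(z > 0, z, alpha * (np.exp(z) - 1))

# LeCun normal initialization: std = sqrt(1 / fan_in)
W = rng.normal(0.0, np.sqrt(1.0 / X_scaled.shape[1]), size=(100, 100))

a = selu(X_scaled @ W)
print(round(a.mean(), 3), round(a.std(), 3))  # roughly 0 and 1, which keeps deep SELU nets stable
```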
Hi. Can we use He initialization when the leaky ReLU, ELU or SELU activation functions are used, given that He initialization was proposed for the sigmoid, hyperbolic tangent and ReLU activation functions?
Good question. Yes: He initialization was actually proposed for ReLU (Glorot/Xavier initialization is the one usually paired with sigmoid and tanh), and it works well with ReLU variants such as leaky ReLU and ELU too; for SELU, LeCun initialization is usually recommended. In any case it mainly helps you avoid vanishing or exploding gradients early in training, and there is no hard and fast rule, because it only sets the starting weights, which are going to be tweaked during training anyway.
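A short Keras sketch of those pairings (the layer sizes are arbitrary; only the kernel_initializer choices matter here):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    # He initialization with ReLU-family activations
    tf.keras.layers.Dense(300, activation=tf.nn.leaky_relu,
                          kernel_initializer="he_normal", input_shape=(784,)),
    tf.keras.layers.Dense(100, activation="elu", kernel_initializer="he_normal"),
    # LeCun initialization is the usual pairing for SELU
    tf.keras.layers.Dense(50, activation="selu", kernel_initializer="lecun_normal"),
    # Glorot/Xavier initialization for sigmoid/tanh/softmax-style layers
    tf.keras.layers.Dense(10, activation="softmax",
                          kernel_initializer="glorot_uniform"),
])
model.summary()
```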