Training Deep Neural Networks


Since AdaGrad, RMSProp, and Adam optimization automatically reduce the learning rate during training, it is not necessary to add an extra learning schedule (although adding one can still help).
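For illustration, here is a minimal sketch in TensorFlow/Keras showing both options: using Adam as-is, and optionally layering an explicit decay schedule on top. The hyperparameter values are illustrative, not recommendations.

import tensorflow as tf

# Adam adapts per-parameter step sizes on its own,
# so a plain optimizer is often sufficient:
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)

# An explicit schedule can still be added if desired:
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.001,  # starting learning rate
    decay_steps=10_000,           # decay every 10,000 training steps
    decay_rate=0.9,               # multiply the rate by 0.9 each time
)
optimizer_with_schedule = tf.keras.optimizers.Adam(learning_rate=schedule)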
