Training Deep Neural Networks

Question 43 of 49

Since AdaGrad, RMSProp, and Adam optimization automatically reduce the learning rate during training, it is not necessary to add an extra learning rate schedule.
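As context for the statement, here is a minimal sketch, assuming a TensorFlow/Keras setup (the course framework is not stated on this page), of how an explicit learning rate schedule can still be attached to Adam on top of its adaptive per-parameter updates:

```python
import tensorflow as tf

# Exponential decay schedule: lowers the base learning rate over training,
# independently of Adam's adaptive per-parameter scaling.
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3,
    decay_steps=10_000,
    decay_rate=0.9,
)

# Adam accepts a schedule object in place of a fixed learning rate.
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)

# Toy model just to show the optimizer being used in compile().
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=optimizer, loss="mse")
```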


