Training Deep Neural Networks

Question 29 of 49

True or False: Momentum optimization does not take previous gradients into account, so plain gradient descent converges faster.

Answer: False. Momentum optimization keeps an exponentially decaying running sum of past gradients (the momentum vector) and uses it to update the parameters, so it very much does care about previous gradients. Plain gradient descent uses only the current gradient, which is why momentum typically converges faster, especially across flat regions and shallow ravines of the cost surface.
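
To make the difference concrete, here is a minimal sketch comparing the two update rules on a toy quadratic loss. The loss function, learning rate, and momentum coefficient below are illustrative choices, not values from the course; the momentum update follows the common formulation m ← βm + ∇J(θ), θ ← θ − ηm.

```python
import numpy as np

def grad(theta):
    # Gradient of an illustrative quadratic loss J(theta) = 0.5 * theta**2,
    # used here only to have something concrete to optimize.
    return theta

# Illustrative hyperparameters (not from the course).
eta = 0.1    # learning rate
beta = 0.9   # momentum coefficient (friction)

# Plain gradient descent: each step depends only on the current gradient.
theta_gd = np.array([1.0])
for _ in range(50):
    theta_gd -= eta * grad(theta_gd)

# Momentum optimization: past gradients accumulate in the momentum
# vector m, so earlier gradients keep influencing the current step.
theta_mom = np.array([1.0])
m = np.zeros_like(theta_mom)
for _ in range(50):
    m = beta * m + grad(theta_mom)   # decaying sum of past gradients
    theta_mom -= eta * m             # step along the accumulated direction

print("plain GD:", theta_gd)
print("momentum:", theta_mom)
```

Some texts instead fold the learning rate into the momentum vector (m ← βm − η∇J(θ), then θ ← θ + m); the two variants behave the same up to a rescaling of m.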
