Plain gradient descent does not take previous gradients into account: each step depends only on the current gradient. Momentum optimization, by contrast, accumulates an exponentially decaying average of past gradients in a velocity term, so steps in a consistent direction build up speed. That is why momentum typically converges faster than plain gradient descent, as the sketch below illustrates.
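To make the contrast concrete, here is a minimal sketch of the two update rules, assuming the standard momentum form v ← βv − η∇f(x), x ← x + v. The shallow quadratic, learning rate, momentum coefficient, and step count are all illustrative assumptions, not part of the original exercise.

```python
# Illustrative sketch: plain gradient descent vs. momentum on a shallow
# quadratic f(x) = 0.5 * K * x**2, whose gradient is K * x.
# All constants below are assumed for demonstration purposes.

K = 0.01      # curvature: a shallow slope where plain GD crawls
LR = 0.1      # learning rate (illustrative)
BETA = 0.9    # momentum coefficient (illustrative)
STEPS = 100

def grad(x: float) -> float:
    """Gradient of f(x) = 0.5 * K * x**2."""
    return K * x

# Plain gradient descent: each step uses only the current gradient.
x_gd = 10.0
for _ in range(STEPS):
    x_gd -= LR * grad(x_gd)

# Momentum: the velocity v accumulates an exponentially decaying sum of
# past gradients, so consistent directions keep gaining speed.
x_mom, v = 10.0, 0.0
for _ in range(STEPS):
    v = BETA * v - LR * grad(x_mom)
    x_mom += v

print(f"gradient descent after {STEPS} steps: x = {x_gd:.4f}")
print(f"momentum after {STEPS} steps:         x = {x_mom:.4f}")
```

On this shallow slope, plain gradient descent has barely moved after 100 steps, while momentum has traveled roughly three times closer to the minimum, because its velocity term keeps compounding the small but consistent gradients.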