StackedML
140. Convergence on Convex Loss (easy)
A convex loss function is optimized with gradient descent. What convergence guarantee exists?
A. With a sufficiently large learning rate, gradient descent converges faster to the global minimum by taking fewer steps
B. With a fixed learning rate, gradient descent converges in exactly n steps, where n is the number of parameters
C. With any learning rate, gradient descent converges to the global minimum, since convexity eliminates all saddle points in the optimization landscape
D. With a sufficiently small learning rate, gradient descent converges to the global minimum, since convex functions have no spurious local minima
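The role the learning rate plays can be illustrated with a minimal sketch. The quadratic f(x) = (x − 3)², the step counts, and the specific learning rates below are illustrative assumptions, not part of the question: for this function the gradient is 2(x − 3), so the iteration contracts toward the minimum when the step size is small enough and diverges when it is too large.

```python
def grad_descent(grad, x0, lr, steps):
    """Plain gradient descent: repeatedly apply x <- x - lr * grad(x)."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Illustrative convex example: f(x) = (x - 3)^2, with gradient 2(x - 3)
# and a unique global minimum at x = 3.
grad = lambda x: 2.0 * (x - 3.0)

# Sufficiently small learning rate: each step multiplies the error
# (x - 3) by (1 - 2*lr) = 0.8, so the iterates contract to x = 3.
x_small = grad_descent(grad, x0=0.0, lr=0.1, steps=200)

# Too-large learning rate: the error is multiplied by |1 - 2*lr| = 1.2
# each step, so the iterates overshoot and diverge.
x_large = grad_descent(grad, x0=0.0, lr=1.1, steps=50)
```

For this quadratic the update is exactly x − 3 ← (1 − 2·lr)(x − 3), so convergence holds precisely when 0 < lr < 1; the general convex guarantee has the same flavor, with the threshold set by the smoothness of the loss.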