StackedML
366. High Learning Rate Effect (easy)
What happens when the learning rate is set too high in gradient descent?
A. The update steps ignore the gradient direction entirely, causing parameters to update randomly each iteration
B. The update steps become biased toward high-loss regions, causing the model to underfit the training data
C. The update steps become too conservative, causing training to stall before reaching a useful solution
D. The update steps overshoot the minimum, causing the loss to oscillate or diverge rather than converge
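The effect in question is easy to see on the simplest convex objective, f(x) = x², where the gradient-descent update x ← x − η·2x scales x by (1 − 2η) each step. A minimal sketch (the step sizes 0.1 and 1.1 are illustrative choices, not part of the question):

```python
def gradient_descent(lr, x0=1.0, steps=10):
    """Plain gradient descent on f(x) = x**2 (gradient: 2x); returns the iterates."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x - lr * 2 * x)  # update: x <- x - lr * f'(x)
    return xs

small = gradient_descent(lr=0.1)  # each step multiplies x by 0.8: |x| shrinks toward the minimum at 0
large = gradient_descent(lr=1.1)  # each step multiplies x by -1.2: |x| grows and the sign flips every step

print(small[-1])  # near 0: converging
print(large[-1])  # far from 0, alternating sign: oscillating divergence
```

With η < 1 here, |1 − 2η| < 1 and the iterates contract toward the minimum; once η exceeds 1, |1 − 2η| > 1 and each update overshoots the minimum by more than the previous step, so the loss oscillates with growing amplitude instead of converging.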