836. Zero Gradient for a Weight (easy)
What does a gradient of zero for a specific weight in a neural network imply about that weight?
A. The loss is currently insensitive to small changes in that weight; increasing or decreasing it slightly will not change the loss
B. The loss is currently minimized with respect to that weight; it has converged to its optimal value and should be frozen for the rest of training
C. The loss is currently not influenced by that weight; it is receiving no training signal and should be re-initialized to restore gradient flow
D. The loss is currently unaffected by that weight; it is redundant and can be set to zero without affecting the model's predictions