StackedML
703. Saddle Points in High Dimensions (medium)
Why are saddle points more prevalent than local minima in high-dimensional loss surfaces?
A. For a point to be a saddle point, the loss must be lower than all neighboring points — high dimensions create many such configurations
B. For a point to be a local minimum, the gradient must be exactly zero — in high dimensions this occurs with probability approaching zero
C. For a point to be a saddle point, at least one eigenvalue must be negative — in high dimensions this is guaranteed by symmetry
D. For a point to be a local minimum, all eigenvalues of the Hessian must be positive — in high dimensions this is exponentially unlikely
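The eigenvalue-sign argument in option D can be explored numerically. The sketch below uses a random symmetric (Gaussian) matrix as a rough proxy for the Hessian at a critical point, an assumption, not a model of any particular loss surface, and estimates how often all eigenvalues come out positive as the dimension grows; `fraction_all_positive` is an illustrative helper name.

```python
import numpy as np

rng = np.random.default_rng(0)

def fraction_all_positive(n, trials=2000):
    """Estimate P(all eigenvalues > 0) for a random symmetric n x n matrix,
    used here as a stand-in for the Hessian at a random critical point."""
    hits = 0
    for _ in range(trials):
        a = rng.standard_normal((n, n))
        h = (a + a.T) / 2.0  # symmetrize so eigenvalues are real
        if np.all(np.linalg.eigvalsh(h) > 0):
            hits += 1
    return hits / trials

for n in (1, 2, 4, 8):
    print(f"n={n}: estimated P(all eigenvalues positive) = {fraction_all_positive(n):.4f}")
```

Under this toy model, the all-positive fraction is about 1/2 at n=1 and collapses toward zero within a few dimensions, so nearly every critical point has at least one negative eigenvalue, i.e. is a saddle.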