StackedML
319. Geometric Reason for L1 Sparsity [easy]
Why does L1 regularization tend to produce sparse solutions while L2 does not?
A. The diamond shape forces all coefficients to be equal in magnitude, making most of them effectively zero
B. The diamond shape has corners on the axes where one or more coefficients are zero, and OLS contours often intersect there
C. The L1 constraint shrinks coefficients by a fixed absolute amount at each step, eventually reaching zero for weak features
D. The L1 penalty grows faster than L2 for large coefficients, making zero a more stable fixed point
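The geometric intuition behind the correct choice can also be seen algebraically through the proximal operators of the two penalties: the L1 prox is soft-thresholding, which sets small coefficients exactly to zero, while the L2 prox only rescales coefficients and never zeroes them out. The sketch below is an illustration under those standard definitions; the function names `prox_l1` and `prox_l2` are chosen here for clarity, not taken from any particular library.

```python
import numpy as np

def prox_l1(w, lam):
    # Proximal operator of lam * ||w||_1 (soft-thresholding).
    # Any coefficient with |w_i| <= lam is set exactly to zero,
    # which is the algebraic counterpart of the diamond's corners.
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def prox_l2(w, lam):
    # Proximal operator of (lam / 2) * ||w||_2^2: uniform
    # multiplicative shrinkage. Nonzero inputs stay nonzero.
    return w / (1.0 + lam)

w = np.array([3.0, 0.4, -0.2, 1.5])
lam = 0.5
print(prox_l1(w, lam))  # the two small coefficients become exactly 0
print(prox_l2(w, lam))  # every coefficient shrinks but none reaches 0
```

With `lam = 0.5`, the L1 step maps `0.4` and `-0.2` to exactly `0.0`, while the L2 step leaves all four coefficients nonzero, matching the picture of the diamond's axis-aligned corners versus the smooth L2 ball.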