248. Early Stopping in Gradient Boosting (medium)
Early stopping is commonly used when training gradient boosting models. What does it do?
A. It removes trees from the ensemble that contribute less than a minimum threshold to ensemble accuracy
B. It stops training when the gradient magnitude falls below a threshold, indicating convergence
C. It halts training when validation loss stops improving, preventing overfitting from too many trees
D. It reduces the learning rate automatically when training loss plateaus to improve final convergence
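To make the mechanism concrete, here is a minimal sketch of the early-stopping logic that libraries like XGBoost and LightGBM apply internally: after each boosting round the model is scored on a held-out validation set, and training halts once the validation loss has not improved for a fixed number of rounds (the "patience", e.g. `early_stopping_rounds` in XGBoost). The function name, the hard-coded loss values, and the patience of 3 are illustrative assumptions, not taken from any library.

```python
def train_with_early_stopping(val_losses, patience=3):
    """Simulate early stopping over a sequence of per-round validation losses.

    val_losses: validation loss after each boosting round (illustrative data).
    patience:   rounds to wait without improvement before halting.
    Returns (rounds_kept, best_loss).
    """
    best_loss = float("inf")
    best_round = 0
    for i, loss in enumerate(val_losses):
        if loss < best_loss:
            # Validation loss improved: record the new best round.
            best_loss, best_round = loss, i
        elif i - best_round >= patience:
            # No improvement for `patience` rounds: halt training here.
            break
    # Keep only the trees up to (and including) the best round.
    return best_round + 1, best_loss

# Illustrative run: loss improves for 3 rounds, then plateaus and drifts up.
rounds, loss = train_with_early_stopping(
    [1.0, 0.8, 0.7, 0.72, 0.71, 0.73, 0.74], patience=3
)
print(rounds, loss)  # keeps 3 rounds, best validation loss 0.7
```

Note that the ensemble is truncated at the best-scoring round rather than the round where training stopped, which is also what XGBoost reports via `best_iteration`.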