Questions / Supervised Learning / Tree-Based Models / Gradient Boosting (XGBoost, LightGBM)
830. XGBoost vs Vanilla Gradient Boosting (easy)
What is a key difference between XGBoost and vanilla gradient boosting?
A. XGBoost uses a second-order Taylor expansion of the loss and adds regularization terms to the objective
B. XGBoost uses entropy instead of Gini impurity to select splits, making it more statistically principled
C. XGBoost replaces decision trees with linear models as the base learners for improved accuracy
D. XGBoost trains trees in parallel across multiple cores while vanilla boosting is strictly sequential
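For background on the second-order objective mentioned in option A: XGBoost approximates the loss at each boosting round with per-example gradients g_i and Hessians h_i, and adds L2 leaf regularization (lambda) plus a per-leaf complexity penalty (gamma). A minimal sketch of the resulting leaf-weight and split-gain formulas, assuming a squared-error loss (where g_i = pred_i − y_i and h_i = 1) and illustrative toy data:

```python
def leaf_weight(G, H, lam=1.0):
    # Optimal leaf weight from the second-order expansion: w* = -G / (H + lambda)
    return -G / (H + lam)

def leaf_score(G, H, lam=1.0):
    # A leaf's contribution to the (negated) objective: G^2 / (2 * (H + lambda))
    return G * G / (2.0 * (H + lam))

def split_gain(G_left, H_left, G_right, H_right, lam=1.0, gamma=0.0):
    # Gain of splitting a node: children's scores minus the parent's, minus gamma
    return (leaf_score(G_left, H_left, lam)
            + leaf_score(G_right, H_right, lam)
            - leaf_score(G_left + G_right, H_left + H_right, lam)
            - gamma)

# Toy data (hypothetical): squared-error loss, all predictions start at 0
y = [1.0, 1.0, -1.0, -1.0]
pred = [0.0, 0.0, 0.0, 0.0]
g = [p - t for p, t in zip(pred, y)]   # gradients: [-1, -1, 1, 1]
h = [1.0] * len(y)                     # Hessians are 1 for squared error

# A split that separates the two target values yields positive gain
gain = split_gain(sum(g[:2]), sum(h[:2]), sum(g[2:]), sum(h[2:]))
print(round(gain, 4))  # → 1.3333
```

Vanilla gradient boosting fits each tree to the first-order gradients alone and typically controls complexity only via depth and shrinkage; the explicit lambda and gamma terms in the formulas above are what the "regularized objective" in option A refers to.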