Ensemble Learning and XGBoost

Gradient Boosting, instead of tweaking the instance weights at every iteration as AdaBoost does, fits each new predictor to the residual errors made by the previous predictor.
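The residual-fitting idea can be sketched in a few lines. This is a minimal illustration (not a production implementation): with squared loss, each new regression tree is trained on the residuals `y - current_prediction`, and the ensemble's prediction is the sum of all trees' outputs. The toy dataset, tree depth, and number of iterations are arbitrary choices for demonstration.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy 1-D regression data (hypothetical, for illustration only)
rng = np.random.RandomState(42)
X = rng.rand(100, 1) - 0.5
y = 3 * X[:, 0] ** 2 + 0.05 * rng.randn(100)

# Manual gradient boosting with squared loss: each new tree is fit
# to the residuals left by the trees trained so far.
trees = []
residual = y.copy()
for _ in range(3):
    tree = DecisionTreeRegressor(max_depth=2, random_state=42)
    tree.fit(X, residual)
    residual -= tree.predict(X)  # update residuals for the next tree
    trees.append(tree)

# The ensemble prediction is the sum of every tree's prediction
y_pred = sum(tree.predict(X) for tree in trees)
```

Each added tree reduces the training error, since it corrects what the previous trees got wrong; scikit-learn's `GradientBoostingRegressor` follows the same scheme, with a learning rate shrinking each tree's contribution.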

