
Gradient Boosting Decision Trees (GBDT) Explained, Part 3: Derivation of the Principle

2020-12-07 14:10:25 osc_vrlg1skx

This article is the final installment in the GBDT series. It focuses on the principle of gradient boosting trees and its mathematical derivation. Before reading it, you should already be familiar with the technical details of applying GBDT to regression and classification; for that, please refer to the first two articles in the series (see references 【1】 and 【2】).

* As the last installment in the GBDT series, this article digs into the details of the principle of gradient boosting trees and its mathematical derivation, and it is packed with hard-core technical material. Readers with a phobia of mathematical formulas should proceed with caution! If you only want an accessible overview of the algorithm's flow without the mathematical details, I suggest reading references 【1】 and 【2】 instead.

* Because typesetting the mathematical formulas for this article was cumbersome, screenshots are used in some places.

GBDT can be used for regression as well as for classification, and in both cases the algorithm flow is the same, as sketched below. The only difference is the definition of the loss function, which we will explain in detail later.
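The original post presents this flow as a screenshot. For readers who cannot see the image, here is a sketch of the generic gradient tree boosting procedure in Friedman's standard formulation, which the screenshot follows in substance; the notation ($L$ for the loss, $f_m$ for the model after round $m$, $R_{jm}$ and $\gamma_{jm}$ for leaf regions and leaf values) is my own choice, not copied from the screenshot.

```latex
% Generic gradient tree boosting (Friedman's formulation) -- a sketch
% standing in for the screenshot; notation is assumed, not quoted.
\begin{aligned}
&\textbf{Input: } \text{training set } \{(x_i, y_i)\}_{i=1}^{N},
  \text{ differentiable loss } L(y, f(x)), \text{ rounds } M. \\
&\textbf{1. Initialize } f_0(x) = \arg\min_{\gamma} \sum_{i=1}^{N} L(y_i, \gamma). \\
&\textbf{2. For } m = 1, \dots, M: \\
&\qquad \text{(a) Compute pseudo-residuals (negative gradients): }
  r_{im} = -\left[\frac{\partial L(y_i, f(x_i))}{\partial f(x_i)}\right]_{f = f_{m-1}}. \\
&\qquad \text{(b) Fit a regression tree to } \{(x_i, r_{im})\}_{i=1}^{N},
  \text{ giving leaf regions } R_{jm},\ j = 1, \dots, J_m. \\
&\qquad \text{(c) For each leaf, solve }
  \gamma_{jm} = \arg\min_{\gamma} \sum_{x_i \in R_{jm}}
  L\big(y_i,\, f_{m-1}(x_i) + \gamma\big). \\
&\qquad \text{(d) Update }
  f_m(x) = f_{m-1}(x) + \sum_{j=1}^{J_m} \gamma_{jm}\,
  \mathbb{1}\big(x \in R_{jm}\big). \\
&\textbf{3. Output } \hat{f}(x) = f_M(x).
\end{aligned}
```

Note that step (a) is where regression and classification diverge: the pseudo-residuals depend entirely on the choice of $L$, while steps (b) through (d) are identical in both cases.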

The descriptions of the GBDT algorithm that you will find in different references are consistent with the one given above; only the notation may differ slightly. For example, below is the description of the algorithm flow given in Dr. Li Hang's book "Statistical Learning Methods". However the GBDT algorithm is written down, all the variants come down to the same idea: once you understand one of them, you understand them all.
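As a concrete preview of how the loss function distinguishes the two tasks (a point the article explains in detail later), here are the two standard choices and the pseudo-residuals they produce. These specific formulas are my illustrative addition, using Friedman's conventions, and are not quoted from the book's screenshot.

```latex
% Two standard loss choices and their negative gradients
% (illustrative addition, not taken from the original screenshots).
\begin{aligned}
&\text{Regression (squared error):} &
  L(y, F(x)) &= \tfrac{1}{2}\big(y - F(x)\big)^2, &
  r_i &= y_i - F(x_i). \\
&\text{Binary classification (log loss):} &
  L(y, F(x)) &= \log\big(1 + e^{-2yF(x)}\big),\ y \in \{-1, 1\}, &
  r_i &= \frac{2 y_i}{1 + e^{2 y_i F(x_i)}}.
\end{aligned}
```

For squared error the pseudo-residual is simply the ordinary residual, which is why GBDT for regression is often introduced as "fitting trees to the residuals"; for log loss the pseudo-residual is a smooth function of the current margin $y_i F(x_i)$, but the rest of the algorithm is unchanged.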
